Artificial Intelligence and You




By Baybars Charkas

These days, you never have to read too far into a class syllabus to find some mention or other of artificial intelligence, usually warning you not to use it for fear of plagiarism, or to use it only sparingly. When I entered university two short years ago, no professor thought it worth devoting a section of their syllabus to AI. That would have seemed a bit silly. And yet, it has suddenly become difficult to escape mention of AI, in the classroom or elsewhere.


Looking back, it is difficult to appreciate just how quickly AI’s profile has risen in the public consciousness. In the space of perhaps a year, AI went from being the preserve of science fiction and computer geeks to the subject of universal conversation. Programs like ChatGPT, people like Sam Altman, and computer science lingo like “deep learning” and “backpropagation” have become household names. Now everyone, it seems, feels compelled to cultivate an opinion on AI and its consequences.


A great deal of bibble-babble has been generated about AI. Every single day an article is published comparing AI to the printing press, gunpowder, the steam engine or some other epoch-making invention. Inevitably someone will make the claim that AI is as critical to human history as the creation of agriculture or the discovery of fire. Most people who bother their heads with this kind of thing agree that we are on the cusp of a new age of discovery. Where disagreement lies, it seems, is whether this breakthrough brings good news or bad. 


Though many people think of it as essentially “recent,” artificial intelligence is not new. Its history reaches back to the years after the Second World War, when computer science was taking shape as a field of study. Over the decades, researchers attempted to create machines that could “think” for themselves. What exactly it means for a computer to “think,” I will not get into, because I do not understand it. In any case, progress was slow, but the researchers’ efforts showed results. By the 1990s, AI systems were beating chess grandmasters at their own game. The U.S. government got interested, as did tech companies like Google and Microsoft. Quietly, a million-dollar industry turned into a multi-million-dollar industry, then a billion-dollar industry.


None of the history I have told thus far explains the sudden explosion of AI in the past year, however. Interest in AI has grown alongside its development, but in many respects that development has been so gradual as to be imperceptible to a lay audience. Sure, people have been talking about autonomous robots and fretting about the TikTok “algorithm” for a while, but more often than not the words “artificial intelligence” went unsaid. So what, then, lies behind all this newfound interest? What gives?


The logical answer would be that AI’s worldwide popularity reflects rapid advances in the field over recent years. This is true. Language models like ChatGPT are a breakthrough in AI research. Faulty though it may be, ChatGPT is the culmination of unprecedented progress in machine learning, a model far more complex than anything that preceded it. Perhaps more important in explaining AI’s popularity, however, is that the technology is more available to ordinary people than ever before. In the past, most everyone interacted with AI indirectly, when they browsed Google or scrolled through social media. Today, anyone with access to the Internet can have full-blown conversations with AI, ask it to copy edit their essay, or use it to produce deepfakes of politicians’ voices or likenesses and have them say or do outlandish things.


It is a law of human behavior that wherever people’s attention is, their money will be also. Almost overnight, a billion-dollar industry has grown into a multi-billion-dollar one. The sums are eye-watering. Microsoft alone has pledged $13 billion to OpenAI, the company that developed ChatGPT. The industry has captivated the wealthiest and most powerful men on the planet: Elon Musk, Jeff Bezos, Bill Gates and many more commanders of capital have lined up for a piece of the AI pie. Any self-respecting corporation that has the slightest thing to do with tech, from the smallest start-ups to Fortune 500 titans, now devotes paragraphs to AI in its shareholder reports.


The best, or worst, part of it all, depending on where you stand, is that the technology shows very few signs of slowing down. In terms of its potential, artificial intelligence has barely left the cradle. AI pioneers like Geoffrey Hinton recognize that ChatGPT, for all its glamor, is still an “idiot savant”: brilliant at one or a few things but useless elsewhere. All the same, language models are getting better, partly as a consequence of their increased use, and every week carries news of a new breakthrough, initiative, or invention. The pace of advancement is intensifying. Every day brings us closer to an artificial general intelligence: a synthetic, conscious mind able to function just like a human’s, or better.


The breakneck speed at which AI has progressed has given life to a whole ecosystem of futurist aspirations that would have been dismissed as utopian fantasies a few years ago. Among AI’s vaunted potential benefits are curing cancer, easing road congestion, automating menial labor, streamlining supply chains, and on and on.


In parallel, the technology has given many cause for concern. In particular, experts in the field have warned that AI tech may bring with it a whole freight of unforeseen and unforeseeable consequences. If it is true that this new technology is as formative to human development as agriculture or fire, then it has the potential to be very dangerous if left uncontrolled. 


AI’s detractors argue that research is proceeding much too fast, and that researchers do not really understand the implications of the work they are doing. Skeptics fear that AI will be used by dictators to persecute dissidents, by data companies to harvest the private information of millions of users, by corporations to manipulate customers into buying their products, by scammers and swindlers to impersonate loved ones or the bank, and by spurned exes to create convincing revenge pornography.


Worse still is the prospect that human beings could lose control of the technology. If AI becomes sentient, it is entirely possible for it to act and think in ways that humans do not want or cannot plan for. And if the technology becomes so powerful that it eclipses the combined intelligence of every human put together (after all, an AI’s “brain” is not constrained like a human’s, which has to fit inside a skull), it could conceivably outwit and outmaneuver any guardrails we put in its way. The result, AI boffins like Geoffrey Hinton warn, may amount to the annihilation of all human life.


The fear that a human-created intelligence would turn against its creator is much older than AI itself. Mary Shelley’s Frankenstein, the original work of science fiction, is all about the pitfalls of man-made intelligence, and it was written over 200 years ago. In his lust to push science to its limit, Victor Frankenstein engineers a body and gives it life, the result of which is his death and the ruin of his family. Closer to the modern day, an entire industry of film has grown up around the fear of AI, from 2001: A Space Odyssey, with its HAL 9000, to The Terminator, with Skynet. Whether or not their plots are plausible, these stories have entered the public subconscious, and they inform much of how we think about artificial intelligence. More and more, onlookers are asking whether the captains of AI research really are great innovators, or whether their headlong push towards a technology even they do not fully comprehend is reckless and dangerous. Will they, and will we all, end up undone by the very technology they so uncritically champion, like latter-day Victor Frankensteins?


All this preamble brings me to the subject I want to discuss: what should you, the average person, make of all this? On one hand, it seems foolish and harmful to stand in the way of a technology that could bring enormous benefits to humanity. On the other, it is perfectly reasonable to believe that this technology, currently being developed by a tight circle of corporations, could spin out of control if not handled with care. We are at a crossroads, where the stakes for mankind could not be higher, and the next few years will set the tone for AI development going forward.


As with many tough problems, finding the road ahead means looking between the extremes. A full halt to AI research does not appear feasible or helpful. Whether it is the 19th-century Luddites or today’s AI skeptics, those who stand in the way of technological progress inevitably end up outpaced by the technology they want to curb. By the same token, those who parrot the assurances of certain AI CEOs and tech bros that all is well and there is no reason to worry are wrong, because there is always reason to worry that a technology will be abused if it falls into the wrong hands.


The sustainable way forward, then, is to encourage the development of AI while also placing robust limits on its research and use. To this end, government participation and regulation are absolutely critical. It is the job of elected leaders to hold AI researchers to account, to ensure that the technology is being used responsibly and ethically, and to regulate strictly against potential abuses by bad actors. Much of this work depends on raising world leaders’ awareness of the stakes at play. It seems many political and tech leaders have woken up to the potential risks, as evidenced by British Prime Minister Rishi Sunak’s AI Safety Summit at Bletchley Park this November. Furthermore, the firing and rehiring of OpenAI CEO Sam Altman, amid a flurry of speculation that Altman was too gung-ho about the technology, will hopefully encourage that company to review its behavior critically and plot a sensible course towards sustainable AI development.


When she wrote Frankenstein, Mary Shelley could not have possibly imagined the technological strides humanity has made in the past two centuries. Nevertheless, the destructive impulse to create was as strong in Shelley’s day as it is today. In order to craft her narrative, Shelley looked back to earlier works of literature to identify that same impulse. One of these works, John Milton’s Paradise Lost, itself the story of creation rebelling against its Creator, offers a sobering warning about the limits of exploration and the follies of intellectual adventurism. It is with that warning that I will leave you: 


But knowledge is as food, and needs no less
Her temperance over appetite, to know
In measure what the mind may well contain,
Oppresses else with surfeit, and soon turns
Wisdom to folly, as nourishment to wind.

