There have been many inflection points in the 80-year history of AI, but 2023 will probably go down as the year it moved to the very center of our global digital lives. AI has become the standard-bearer for the good, the bad and the ugly of (computer) technology, generating strong and contradictory feelings and dispositions.
The “godfathers” of the current dominant version of AI, the winners of the 2018 Turing Award for “conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing,” expressed alarm, awe, and humility in 2023 about what they helped create.
Alarm: “Ten thousand neural nets can learn ten thousand different things at the same time, then share what they’ve learned.” This combination of immortality and replicability, [Hinton] says, suggests that “we should be concerned about digital intelligence taking over from biological intelligence”—Geoff Hinton, “Why the Father of A.I. Fears What He’s Built”
Awe: “I want to understand and bridge the gap between current AI and human intelligence… after playing with [ChatGPT] enough, it dawned on me that it was an amazingly surprising advance. We’re moving much faster than I anticipated. I have the impression that it might not take a lot to fix what’s missing… If we were to bridge that gap, we would have machines that were smarter than us”—Yoshua Bengio, “‘AI Godfather’ Yoshua Bengio: We need a humanity defense organization”
Humility: “We’re missing something big to get machines to learn efficiently, like humans and animals do. We don’t know what it is yet”—Yann LeCun, “How Not to Be Stupid About AI, With Yann LeCun”
In “Computing Machinery and Intelligence,” Alan Turing proposed “the imitation game” in 1950. Avoiding the thorny issue of defining “thinking,” Turing replaced the question "Can machines think?" with the question "Can machines do what we (as thinking entities) can do?" Turing’s imitation game, or the “Turing Test” as it came to be known, assessed how well a computer program could convincingly imitate human conversation.
Clearly, just sounding human does not satisfy Turing’s own benchmark of making the machine do what we thinking humans do. But fooling human judges by simulating human conversation was a “good enough” test for Turing. Five years after Turing’s paper was published, John McCarthy coined the term “artificial intelligence,” and the new engineering discipline has been playing the simulation game ever since.
The Oxford English Dictionary defines “artificial” as something that is “man-made,” going back to 1616 to find its meaning as “contrived or fabricated for a particular purpose, esp. for deception.” Merriam-Webster defines “artificial” as “humanly contrived, often on a natural model,” listing as synonyms “phony, fake, false, bogus.”
Indeed, the inflection points in the 80-year history of AI have been mostly driven by a lot of fake news, generating alarm and awe (and very little humility) with promises of human-like intelligence or even superintelligence just around the corner.
There’s no doubt that computer engineers have succeeded in endowing computers with an ever-growing menu of capabilities, including assisting humans with their “cognitive tasks,” starting in the late 1940s with calculation. But computers have not been endowed with human-like intelligence. Not even over the last few years, when ingenious “data scientists” (professionals possessing both computer programming and statistical analysis skills) managed to teach computers to process the vast online troves of texts and images so they can respond to queries and create new narratives and pictures.
In 2023, the widely accepted notion that we are experiencing a new stage in our digital lives spread like wildfire. It is, so the notion goes, comparable in its impact to the internet (actually, the Web, in 1993) and the smartphone (in 2007). By now, we expect a new “quantum change” every 15 years, so we are due for one. In addition, this new stage further reinforces the digerati’s conviction, expressed so well in 1968 by the digital prophet Stewart Brand, that “we are as gods and we might as well get used to it.”
In 2023, OpenAI chief scientist Ilya Sutskever cemented his image as the poster-boy of the “we are as gods” movement, promising the immediate arrival of “artificial general intelligence” or AGI, or even “superintelligence.” Sutskever explained the power of the idol he helped create: “That first-time experience is what hooked people… The first time you use [ChatGPT], I think it’s almost a spiritual experience. You go, ‘Oh my God, this computer seems to understand.’”
Attention is all you need and my small language model pays attention to the word “seems” in the statement above. The variant of GPT-3 that got all the attention was ChatGPT, which OpenAI tweaked to be conversational and “on point.” Someone at OpenAI must have read or re-read Turing’s musings from 1950 about how to impress humans, convincing them that the computer “seems to understand.” Or maybe someone read or re-read Joseph Weizenbaum’s accounts about how surprised he was when he found out that people took seriously his Eliza program which mimicked the conversation of a Rogerian therapist.
The human ingenuity or HI of AI engineers not only pushed forward the state of natural language processing but also, in a brilliant marketing move, gave the masses worldwide a taste of that special “spiritual experience.” As a result, many were convinced that we (or at least, AI creators) are indeed as gods and that we might as well enjoy AI, or regulate it, or stop it before it destroys humanity.
In 2023, OpenAI announced that superintelligent AI, “the most impactful technology humanity has ever invented,” could arrive this decade and “could lead to the disempowerment of humanity or even human extinction.”
“We are as gods” is a very human delusion, a false belief about modern man’s ability to “change everything,” even human nature, with technology. It has been promoted, for a long time, by the creators of science fiction and, since the 1950s, by the creators of “artificial intelligence,” i.e., all computer-based programs, tools, and applications.
A synonym for delusion is hallucination. Merriam-Webster defines hallucination as “a sensory perception (such as a visual image or a sound) that occurs in the absence of an actual external stimulus and usually arises from neurological disturbance… or in response to drugs.”
In 2023, “hallucination” became the word of the year because it was used to describe the false and inaccurate answers large language models sometimes confidently provide. For many, these “hallucinations” were the only “gap” that needed to be bridged in order to have machines that are “smarter than us.”
The term “hallucination” was first used in the computer science literature to describe specific state-of-the-art advances in computer vision. More recently, however, the term has been used to describe errors in image captioning and object detection.
Given that hallucinations are the result of a “neurological disturbance,” it makes sense to call “hallucinations” the statistical hiccups of the “artificial neural networks” that are the foundation of large language models and their chattering offspring. It is the brilliant marketing move of making them conversational and engaging, however, that forces them to invent answers. They would not “seem” to have human intelligence if they admitted their ignorance… and there would be very little “engagement” with impatient humans.
Seventy years ago today (January 7, 1954), IBM and Georgetown University demonstrated automatic translation of more than sixty Russian sentences into English, the first public demonstration of machine translation. “It is expected by IBM and Georgetown University, which collaborated on this project, that within a few years there will be a number of ‘brains’ translating all languages with equal aplomb and dispatch,” reported the Christian Science Monitor.
In 2006, John Hutchins summarized the effect of the predictions regarding the imminent arrival of superintelligence: “A persistent and unfortunate effect of the demonstration was the impression given to many observers outside the field of MT [machine translation] that fully automatic translation of good quality was much closer than in fact was the case. It was an impression which was to last – in the minds of the general public and indeed with computer scientists outside the MT field – for many years.”
There’s no need to be afraid of, or in awe of, a computer simulation while celebrating the human ingenuity that finds new ways to make that general-purpose machine, the computer, help us with additional cognitive tasks. It is the very same human ingenuity, human intelligence, human creativity, that also finds new ways to convince humanity that “we are as gods.”
Let’s hope that in 2024, “humility” will replace “hallucination” as the word of the year.