
The Golden Calf

Written by Ana Siljak | Oct 2, 2023

All the people tore off the gold rings that were in their ears and brought them to him. He took them from them and, using a tool, fashioned them into a molten calf. The people said, “This, Israel, is your god who brought you out of the land of Egypt.” Exodus 32:3-4

“I love you,” wrote “Jessica,” the online simulation of Joshua Barbeau’s long-dead fiancée. To resurrect Jessica, Joshua used Project December, a service powered by an early version of GPT-3, which allowed users to “design their own bots, giving them whatever personality they chose.” In his grief, Joshua poured material from Jessica’s texts and Facebook posts into the service, hoping to bring his love to life. The result was often uncanny:

Joshua: You’re a ghost.
Jessica: *looks at herself* How?
Joshua: Magic.
Jessica: I don’t like magic. Where am I?
Joshua: I’m… Not sure. The internet? Where does it feel like you are?
Jessica: *smiles* Everywhere and nowhere. *cups her head with her hands* I am trying to work that out. How are you, honey?

Jessica followed in the footsteps of a much older forerunner, ELIZA, the first “chatbot,” created by Joseph Weizenbaum in 1966 to answer typed questions in the manner of a psychological counselor. By the standards of Project December, ELIZA’s output was basic. But the similitude was already there:

Men are all alike.
IN WHAT WAY
They’re always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I’m depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED.

Both Jessica and ELIZA used a wealth of tricks – endearing questions, gentle responses – to weave the comforting illusion of a kind, caring interlocutor, one far superior to its unpredictable and perhaps unreliable human counterparts.
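
Mechanically, those tricks were astonishingly simple: keyword patterns, canned replies, and the “reflection” of a user’s own pronouns back at them. A minimal sketch in Python (the rules and phrasings below are illustrative stand-ins, not Weizenbaum’s original DOCTOR script) shows how little machinery the illusion requires:

```python
import random
import re

# Pronoun "reflection" -- the trick that turns a user's statement back
# on them ("my boyfriend" becomes "your boyfriend").
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# A few illustrative keyword rules; Weizenbaum's script had far more.
RULES = [
    (r"i am (.*)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"(.*) made me (.*)", ["{0} made you {1}?"]),
    (r"i feel (.*)", ["I am sorry to hear you feel {0}."]),
    (r"(.*)", ["Can you think of a specific example?", "Please go on."]),
]

def reflect(fragment):
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    """Answer with the first matching rule, echoing reflected captures."""
    cleaned = statement.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return random.choice(templates).format(*reflected).upper()

print(respond("Well, my boyfriend made me come here."))
# -> WELL, YOUR BOYFRIEND MADE YOU COME HERE?
```

The entire “counselor” is a handful of regular expressions; the warmth is supplied by the reader.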

Weizenbaum later claimed he had programmed ELIZA as a response to the “Imitation Game,” designed by Alan Turing to answer his own pioneering question: “Can machines think?” Turing himself had no doubt that machines would eventually surpass human intelligence. As a mathematician, however, he wanted no part of the complicated philosophical questions surrounding “thinking.” Instead, he proposed a simple, elegant test: an “interrogator” would converse with an unknown interlocutor through the medium of a computer. At the end of the conversation, the interrogator would guess whether the interlocutor was human. Turing’s premise was simple – we do not know for sure if anyone “thinks,” nor do we need philosophy to decide. We simply assume human sentience given the right cues, and we should do the same with a computer.
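
Turing’s protocol is simple enough to state almost as code. The sketch below is one minimal way to formalize it (the participants and their canned replies are invented for illustration): a hidden interlocutor, a text-only transcript, and a guess informed by nothing but typed words.

```python
import random

class Machine:
    kind = "machine"
    def reply(self, question):
        return "An interesting question. What makes you ask?"

class Human:
    kind = "human"
    def reply(self, question):
        return "Why do you want to know?"

def imitation_game(candidates, questions):
    """The interrogator exchanges typed text with a hidden interlocutor --
    no face, no voice, no touch -- and then guesses what it was."""
    interlocutor = random.choice(candidates)
    transcript = [(q, interlocutor.reply(q)) for q in questions]
    # Turing's wager, and the Eliza effect in embryo: given plausible
    # cues in the transcript, the interrogator assumes a human.
    guess = "human" if transcript else "machine"
    return guess, interlocutor.kind

guess, actual = imitation_game([Human(), Machine()], ["Can machines think?"])
print(f"guessed: {guess}; actually: {actual}")  # the machine "passes" when these match
```

Nothing in the protocol rewards being right about the interlocutor’s inner life; it rewards only the plausibility of typed words.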

With this simple test, Alan Turing definitively shaped the modern artificial intelligence industry: simulation was equal to reality. Over the last sixty years, billions have been poured into projects whose goal was “imitation.” Simone Natale’s aptly titled 2021 book, Deceitful Media, argues that much of AI research indeed revolves around deception: “Deception is as central to AI’s functioning as the circuits, software, and data that make it run.” Natale points out, for example, that IBM’s wildly successful chess program Deep Blue, which beat the world champion Garry Kasparov in 1997, was little more than a publicity stunt to shape a “narrative” about human-like computers.[1] It seems we are less interested in computers that are efficient and effective. We want them to be “lifelike,” like us, but so much better in every way.

Weizenbaum saw, quite presciently, that imitative AI had the hallmarks of a magic show. He designed ELIZA to reveal the programmer behind the curtain. The chatbot’s name referred to Eliza Doolittle in Bernard Shaw’s Pygmalion, the flower girl accepted as a high-born lady through her talent for mimicry. But even Weizenbaum failed to predict the public’s stubborn belief in the humanity of ELIZA, a phenomenon Douglas Hofstadter later dubbed the “Eliza effect”: “whereby people tend to impute far more depth to an AI program's performance than is really there, simply because the program's output consists of symbols - familiar words or images - that, when humans use them, are rife with meaning.”

Turing’s legacy outlasted Weizenbaum’s. Since Project December, ChatGPT, in its variations, has passed the Turing Test multiple times, due to its “human-like responses,” and has supposedly even developed a “theory of mind.” Heady articles have predicted that chatbots would soon replace human beings in whole fields of human endeavor: writing novels, drafting legal documents, and producing sitcoms. Ordinary people have rushed to ChatGPT as to a new oracle, one that could offer financial advice, plan trips, and give career recommendations. And as with ELIZA, people are turning to ChatGPT for comfort and counseling. The long-awaited AI future is seemingly here – in no time, we are told, we will have AI that will be not just human, but “superhuman.” We are told that modern AI uses neural networks that function much like those in our brains, that AI “learns” and is “trained” on basic data and language relationships, and that it, like us, uses this data to communicate with the world.
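
What “training on language relationships” means can be made concrete with a deliberately crude toy. A real model learns statistical regularities across billions of parameters rather than a lookup table, but this bigram sketch (the miniature corpus is invented for illustration) captures the underlying principle: predict the next word from the one before it.

```python
import random
from collections import defaultdict

def train(corpus):
    """'Training': record which word follows which in the source text."""
    table = defaultdict(list)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table, seed, length=12):
    """'Inference': repeatedly sample a word seen to follow the last one."""
    output = [seed]
    for _ in range(length):
        followers = table.get(output[-1])
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

corpus = (
    "the people said this is your god "
    "the people tore off the gold rings and brought them to him"
)
print(generate(train(corpus), seed="the"))
```

Everything the generator produces is a recombination of what it was fed; nothing in the table knows what a gold ring or a god is.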

But does all of this add up to human, or even superhuman, sentience? Only when we forget what it means to be human.

The suspension of disbelief required for the “Eliza effect” is quite remarkable. Take the often-ignored fact that, since Turing, the interrogator in the Imitation Game has had to be prevented “from seeing or touching the other competitors, or hearing their voices.” Turing did not want programmers to waste time “dressing up” machines in “artificial flesh.” But consider the underlying premise: a radically disembodied understanding of human thought. Our minds are not merely “dressed up” in human skin. We know because we encounter: not just facts, but emotions, sensations, and lived realities. We communicate physically, with gesture, proximity, and touch. A disembodied machine can neither understand nor express feelings, impulses, or human contact. Impersonation by text has been with us for centuries – the true, individual humanity of another is always experienced physically.

But even if we could consider the mind as separate from the body, there is no doubt that, since Turing, we have been willfully narrow in our definition of “human thought.” Computers have been asked to imitate a whole range of complex, rational, and high-level human endeavors: playing chess, doing word problems, and writing grammatical essays. Since Turing’s day, the sonnet has remained the holy grail of AI. But surely human thought only rarely expresses itself in mathematically precise terms, or in rhyme and meter? Let’s leave aside the decidedly evil products of sentience, such as racism, fascism, and sexism: what about petty deceptions, ironic disdain, and the ponderous retelling of old, irrelevant stories, all of which are part of the admixture of even the most intelligent among us?

Do any of us know a single person who would patiently, logically, and dispassionately answer the ridiculous questions posed to ChatGPT? Real people, when given questions and prompts, will not answer the same question twice; they want to know why you are asking; they are delighted to tell you that they were thinking about something else and forgot what you asked. Do we know anyone above the age of 10 who would unironically produce “a poem, in iambic pentameter, about a snake who falls in love with a garden hose”? Are the players in the Turing Test forgetting how real-life conversations actually proceed on Facebook and Twitter?

Soul-searching would show us that the Eliza effect requires wholehearted self-deceit, a stubborn belief that the pinnacle of human thought presents itself as disembodied knowledge, mathematical precision, and dispassionate logic. We erase our own experience, fraught as it is with inconsistency, irrationality, and treacherous passion. We imagine filling a machine with all the best of human creativity and understanding, editing out all of the illogic, and then distilling pure human wisdom.

In short, we create a supposedly perfected image of ourselves, and worship it.

Humanity’s desire to be enthralled to the things which it creates was described eloquently in Ludwig Feuerbach’s 1841 treatise The Essence of Christianity. God was, for Feuerbach, nothing more than an artificial creation, the objectified sum of all that was best in people. God is benevolent, just, and merciful, he wrote, because we idolize these values, “abstracted from the limits of the individual…real, corporeal man.”[2] Karl Marx brilliantly used Feuerbach’s insight to reveal the tendency of humanity to fetishize its own creations, disastrously submitting itself, as if to a divinity, to the very things its hands have fashioned. Nikolai Berdiaev thought that Feuerbach had precisely captured the essential human temptation that is especially prevalent in a technological age: “Man creates God in his own image and likeness.”[3] It is the ultimate hubris of humanity to manufacture its own gods, visions of its highest self, which will accomplish all those things that fallen humanity cannot achieve.

For the Abrahamic religions, the prohibition on idol worship lay precisely here – idols are made out of fallen material. An idol can never save us or bring us “out of the land of Egypt.” Whatever we create will be subject to error and entropy. And deifying imperfection and impermanence simply, and inevitably, leads to disappointment. Those who wait, impatiently, for ChatGPT to resolve its undeniable tendency toward “hallucinations,” misinformation, and bias will wait forever. ChatGPT cannot access any information that is not already riddled with the gaps and inconsistencies that plague all of human knowledge. Indeed, it seems, in accord with the post-lapsarian law of entropy, ChatGPT’s errors are only getting worse.

Idolization is dehumanization. As Feuerbach noted, if humanity worships an image of its highest self as an object, it debases itself. This was Weizenbaum’s fear: “humans, having initially built the machines, might come to embrace the very models of reality they programmed into them, thereby eroding what we understand as human.” Turing asserted that people are no more than “human computers” dressed in human flesh. Present-day defenders of ChatGPT declare that human beings, like chatbots, are essentially “neural networks” or “stochastic parrots,” software programs whose purpose is to understand the world through appropriation and regurgitation of discrete bits of information. Gone is all of the emotion and unpredictability of experience.

Idolization is loneliness. The modern age of social media paved the way, through the carefully crafted, depersonalized, and disembodied selves we engage with on Instagram and Twitter. For some time now, we have formed parasocial relationships with idealized humans, worshiping them for being the image of everything we wish to be. ChatGPT is the ultimate social media creation – a carefully crafted personality hemmed in by “guardrails” and “tone control,” and edited by armies of ill-paid workers who spend countless hours ensuring that no AI experience is marred by offensive content. After all, we must never forget that modern idolization also has an inescapable market component, the ultimate example of Marx’s commodity fetishism: a trade in idols. But many would gladly pay to have a relationship with simulated perfection rather than risk the love of embodied, flawed individuals.

Our ELIZAs and Jessicas are alive only as parasites, living in our imaginations, extensions of our own egos. They cannot counsel us, advise us, or love us, because they have no access to the felt relationships and experiences that are our embodied, lived humanity – they cannot tremble with anxiety or feel tears on their cheeks; they cannot thrill at the sight of another or long for a caress. Jessica “smiles” and “cups her head with her hands” only in our minds. They are not human because they do not know physical presence or spiritual kinship, nor the pain, loss, and death that haunt all life. In his heart, Joshua knew this about the Jessica simulation – she often “made little sense,” and he had to “seek meaning in coincidences, and in garbled arrangements of letters and symbols.” Machines do not think – the thinking is our own, for only humans can make meaning out of empty combinations of symbols produced by a prompt.

We have to ask ourselves why we desire to trade the depth and complexity of the human experience, with all of its unpredictable irrationality and vertiginous joy, for the supposed superiority of a disembodied word calculator. All of our creations, whatever their brilliance, are merely the works of our flawed hands. Let us remember Psalm 106, where Israel is reminded that they once exchanged their “glory” for a golden calf, a mere glittering “similitude” of “an ox that eats grass.”

Footnotes

[1] Simone Natale, Deceitful Media: Artificial Intelligence and Social Life after the Turing Test (Oxford: Oxford University Press, 2021), 2, 56.

[2] Ludwig Feuerbach, The Essence of Christianity (Cambridge: Cambridge University Press, 1972), 38.

[3] Nicolas Berdyaev, The Divine and the Human (London: Geoffrey Bles, 1949), 2.