Why We Need Christian Anthropology and Epistemology in the Age of LLMs
October 14th, 2025 | 11 min read
By Josh Pauling

Last year I co-authored a book with Robin Phillips about life in the digital age. The potential implications of what is imprecisely called artificial “intelligence” for our humanity were a theme throughout the book. Recently, Robin thought it would be interesting to see what LLMs had to say about our book collaboration. Here’s what ChatGPT pumped out.
Robin: Tell me about the friendship and collaboration between Robin Phillips and Joshua Pauling.
ChatGPT: Robin Phillips and Joshua Pauling share a strong intellectual and personal friendship, marked by a shared commitment to Orthodox Christian thought and a deep engagement with theological, cultural, and philosophical issues. Their collaboration is rooted in a mutual desire to explore and communicate the relevance of Eastern Orthodox Christianity in addressing contemporary challenges.
So far so bad. I’m not even a member of an Orthodox church. And, I must admit, I’ve never met Robin in person. We’ve certainly forged a real friendship through many phone conversations and email dialogues over the years, as frequently happens in shared intellectual work. But ChatGPT is just stringing words together that seem plausible. It gets worse. ChatGPT then broke things down into its signature bullet-point style (shortened for our purposes here, since most of it was just re-wording the same concepts ad nauseam):
Key Aspects of Their Friendship and Collaboration:
- Shared Intellectual Interests: Both Phillips and Pauling are deeply interested in the relationship between faith and culture, particularly how Orthodox Christianity can offer a meaningful response to modern secularism.
- Writing and Publishing: Phillips and Pauling have worked together on various projects, including co-authoring articles and essays that bridge Orthodox theology and modern cultural discourse.
- Public Discourse and Lectures: They often collaborate in giving talks, lectures, and workshops, aiming to bring Orthodox thought to a wider audience.
- Shared Vision for the Future of Orthodox Thought: One of the core aspects of their friendship is a shared vision for the future of Orthodox intellectual life.
- Personal Connection: Beyond their professional collaboration, Phillips and Pauling have a personal friendship based on mutual respect and shared experiences in Orthodox life.
It sounds impressive. Except, it’s not true. I’ve never directly collaborated with Robin “in giving talks, lectures, and workshops, aiming to bring Orthodox thought to a wider audience,” nor in “co-authoring articles and essays” (aside from our book). My personal friendship with Robin is not based on “shared experiences in Orthodox life,” as much as I value the Orthodox tradition and our shared patristic, sacramental, and liturgical inheritance.
A real person can find this out in a matter of seconds. Nearly every publication I’ve written for notes in my bio that I am vicar at All Saints Lutheran Church (LCMS), as do my bio on Amazon and the book we co-wrote. A human knows that means I am a member of a Lutheran church, not an Eastern Orthodox one. But ChatGPT does not; in fact, it doesn’t really know anything.
You may say, well ChatGPT is getting better. It might answer Robin’s question more accurately now—especially if it ever scans this article. Spoiler: Robin recently asked Grok about our collaboration and the answer was more modest and accurate.
But there is a much larger point here than playing “gotcha” with what ChatGPT got wrong about who I am. The rapid development and deployment of such technologies into every aspect of life requires that we more robustly answer the who am I question—that is, the question of what it means to be human. I am more than the synthesis of the online information bits about me to be mined by an LLM. I am more than the words and images of me stored digitally.
I am more than who AI says I am. We can harness the best of our theological and philosophical traditions to offer a clearer picture of what a person is—and what truth is. Below are three guiding principles for doing so.
We Need More than a “Humanity of the Gaps”
We must have more than a “humanity of the gaps” approach. Patrick Anderson explains this view as:
An inclination among some skeptics of technology to locate the quintessential human quality in whatever it is that no machine can yet do. What makes us special in a world of large language models, for instance, is that the best human writers can still write better than even the most advanced algorithms. You find this sort of argument all over, and there are several problems with it. Chief among them is that it unwittingly places our sense of ourselves at the feet of the historical contingencies which underlie the development of technology.
The phrase is a play on the 19th century concept of the “God of the gaps,” which attempted to retain a role for God in the universe “in whatever it is that science cannot yet explain.” As science came to explain more mysteries of the universe, God’s role continually shrank until there was no role left for him at all. Similarly, we must not fall into the “humanity of the gaps” trap, where the definition of what it means to be human continually narrows as LLMs get better until there is nothing left that makes one distinctly human. This approach paints ourselves into a corner, just as it did in understanding God’s role in creation.
James Boyle highlights how LLMs challenge our conception of ourselves in his recent book The Line: AI and the Future of Personhood:
We do not yet understand the magnitude of the transformation that happened when hundreds of millions of people were exposed to large language models. At least since Aristotle, humans have claimed that they have a unique ability to manipulate complex abstract language, and that this difference from the rest of the animal world justifies their lofty moral status. Large language models have done—or, perhaps I should say, are doing—what American Sign Language–using chimpanzees and parrots with large vocabularies could not. They have thrown the significance of that claim to human uniqueness into doubt in a way we have not yet fully digested. That is a seismic shift, not about AI and the salience of the Turing Test—hardly the center of popular attention—but in our conception of ourselves, of what makes humans different from animals and machines. (244-245)
Boyle suggests three additional aspects of humanity worth focusing on: “innovation, the possibility of autonomous action and community formation, and a demonstrated link between an understanding of the word and understanding of the material world—embodied consciousness based on learning the way a child does, not on next-word prediction” (244). These are a good start, but some of these might crumble too, as things like reinforcement learning (systems that learn through trial and error) and embodied AI (systems with a body) develop further.
We Need Theological Anthropology
Instead of defining humanity over and against what a machine can or cannot yet do, we need a timeless and objective understanding of human beings. And that starts by focusing on what human beings are; not so much on what they do. The rich tradition of historic Christian anthropology has much to offer here, reminding us that human beings are:
- Creatures of God with inherent limits that predate the Fall, uniquely created in the image of God to share in his likeness and be his representatives on earth (Gen. 1-2).
- A unified body and soul in one person, in a way that is so intertwined that it transcends our understanding. The material and spiritual aspects of the human person bridge the two realms of heaven and earth (Gen. 2:7, Ps. 8, 1 Cor. 15:47).
- Designed with bodies that are good and that have a telos, a destiny, reflected in the body’s ordering towards relationship and communion with others (especially manifest in marriage), and ultimately with the Holy Trinity (Gen. 2:18-25, Eph. 5:25-33, Rev. 21:3).
- Creatures radically affected by the Fall, which causes a doubleness in man: a deep-seated corruption that affects every aspect of our being, yet a remaining mystery and majesty to the human creature as a work of God (Gen. 3, Gen. 9:6).
The Vatican’s January 2025 statement, Antiqua Et Nova, explores this territory further, and models how Christian theology and anthropology provide a framework for thinking about LLMs and personhood.
The theological anthropology introduced above contrasts sharply with the technological anthropology implied by the digital ecosystem. For one, the digital environment trains us to think of ourselves as self-constructed in accord with our own choosing. “Disembodied interaction appears to grant each individual the power to craft an ‘identity’ for themselves based solely on inner desire,” Mary Harrington writes. “In the virtual realm, I no longer need to be whatever I wish to be recognized as; I can simply say I am that thing. Who is anyone to disagree?”
This constructivist approach to the self that we’ve been swimming in culturally for years (Live your truth! Express yourself! Let the real you shine through!) is made even more plausible in the digital environment. When we can craft personal identities and profiles online and live through devices that separate us from our bodies, it’s not much of a leap to think we are that thing which we’ve constructed. And neither is it much of a leap to turn to LLMs for further defining reality and ourselves. If the digital environment has already reduced human beings to words and images on a screen, the construction of one’s mind or inner will, what is the difference when the words and images are generated by LLMs?
We Need Epistemology and Theories of Truth
This brings us to reckon with epistemology and theories of truth. If, as Harrington put it, “I can simply say I am that thing,” and “who is anyone to disagree?” then we are implicitly working from a specific epistemology—an epistemology that primes us for LLMs to bring things home to roost. Let’s return to what ChatGPT generated about Robin and me as a case in point. What ChatGPT did was try to assemble a coherent story based on the online content it found about Robin and me. But a coherent story is not the same as a true story. Erik Larson explains how this relates to correspondence and coherence theories of truth: “The correspondence theory demands external verification—truth is discovered by checking against the world itself. The coherence theory, on the other hand, stays internal—truth emerges from consistency with prior knowledge.”
Larson goes on to argue that LLMs function primarily within a coherence paradigm: “A large language model (LLM) exists entirely in cyberspace, disconnected from the world. It cannot directly verify anything against external reality; it can only produce statements that fit within the linguistic and statistical patterns it has absorbed.” This means that “LLMs are not truth-seeking mechanisms. This isn’t just because they are probabilistic—probabilities and truth are distinct concepts—but because they are fundamentally untethered from reality.” Which brings us back to theological anthropology: a human being is an embodied soul—tethered to reality.
Thomas Fowler makes a similar argument in his First Things essay: “The goal of human knowing is always to know something about reality, regardless of whether that knowledge has operational value. By contrast, neither an animal nor an AI seeks the reality of the real. AI must employ symbols, which have no meaning except that assigned to them by someone outside the computer system….The AI paradigm of knowing is different from—and inferior to—the human paradigm.” Fowler sees in this a nominalist tendency to reject universal objective natures and only see relative meaning in the specific names we assign to things. We might even say that an LLM’s quasi-nominalist approach to reality undercuts objective meaning and further solidifies the self in purely constructivist terms.
To an LLM, a person is the sum of information and data collected about their behavior and whatever images and words of theirs that are available. It’s just names—names without an objective meaning or correspondence to reality. This reductionist approach to human identity and constructivist approach to truth assumes that a person is the aggregate of data points about them—just words. And significantly, data points can change as we rearrange and relabel information. But isn’t that already what we’re doing with ourselves, where descriptions of such things as “human,” “man,” and “woman” become infinitely malleable?
I Am More Than AI Says That I Am
All of this is to say that when we look to LLMs for truth, we’ve already made some serious anthropological and epistemological errors. We could even say that the underlying epistemology of LLMs, in tandem with our culture’s faulty anthropology, creates a feedback loop of sorts. We live in a world where a person can construct their own identity and live their own truth, as long as it is internally coherent to them (and the digital environment makes this seem more plausible, palatable, and possible than ever before). LLMs function in a similar way, probabilistically constructing and rearranging content in ways that might be coherent, but not necessarily truthful or corresponding to reality (and sometimes not even coherent). Both nudge us further toward thinking that all knowledge is arbitrary naming—that truth doesn’t correspond to reality, but to one’s own will, desires, and preferences.
This is about much more than poking fun at what chatbots get wrong. The deeper issue is how LLMs erode our understanding of truth and impact our understanding of ourselves. And this gets more problematic the more accurate the technology becomes. As LLMs get better, we become increasingly susceptible to viewing them as arbiters of truth, definers of reality—definers of ourselves. This is already happening as individuals develop relationships with chatbots, as the market for digital companions and intimate partners grows, and as we outsource human decisions to LLMs for their supposed neutrality and objectivity. The machine becomes the template for the human.
LLMs are forcing us to grapple with deep anthropological and epistemological questions. What does it mean to be me—the unique person that is me—when LLMs reduce all things, including humans, to strings of words and images loosed from the surly bonds of objective reality?
The Christian tradition has responses at the ready, as Antiqua Et Nova offers:
Human intelligence is not primarily about completing functional tasks but about understanding and actively engaging with reality in all its dimensions….Since AI lacks the richness of corporeality, relationality, and the openness of the human heart to truth and goodness, its capacities—though seemingly limitless—are incomparable with the human ability to grasp reality. So much can be learned from an illness, an embrace of reconciliation, and even a simple sunset; indeed, many experiences we have as humans open new horizons and offer the possibility of attaining new wisdom. No device, working solely with data, can measure up to these and countless other experiences present in our lives.
Living in a body, being an enfleshed soul, means that we have the capacity for the development of virtue and the pursuit of transcendent goodness. Human embodiment cannot be reduced to electrical signals. Human wisdom cannot be reduced to data inputs and outputs. Human comprehension cannot be reduced to computation. A human person cannot be reduced to a computer—no matter who AI says that I am.
Joshua Pauling is headmaster at All Saints Classical Academy and vicar at All Saints Lutheran Church (LCMS) in Charlotte, NC. He is author of Education's End, and co-author with Robin Phillips of Are We All Cyborgs Now? Reclaiming Our Humanity from the Machine. He has written for Front Porch Republic, LOGIA: A Journal of Lutheran Theology, Modern Reformation, Public Discourse, Quillette, Touchstone, and is a frequent guest on the Issues, Etc. podcast.