Mere Orthodoxy | Christianity, Politics, and Culture

Aquinas, AI, and the Pursuit of Learning

Written by Alex Stevens | Apr 2, 2026 11:00:01 AM

My eight-year-old daughter recently invoked my fatherly duty to help her do research for a presentation about a figure from the Middle Ages: Thomas Aquinas, in her case. My wife and I bought a few books, checked one out from the library, and found some relevant material in reference works we already owned. Notably, we did not search for random articles or videos on the Internet or try using generative AI as a personal tutor; yet this was not enough to entirely avoid AI slop.

Among our stack of works on this historic figure, two stand in stark contrast – G.K. Chesterton’s St. Thomas Aquinas and a cute-looking, illustrated children’s book. One, a well-respected ode to a man whose intellectual contributions can hardly be matched; a poetic sketch of a humble saint. The other, a simulacrum of a fun biography of Aquinas, full of duplicate pages, dubious content, emoji bullet points, and enough six-fingered people to send Inigo Montoya into a frenzy. The tell-tale signs of AI-generated content were on full display in this children’s book. These distortions of reality ended up being a rather ironic tribute to the brilliant mind that wrote disputations on truth and knowledge.

There was a running joke while I was taking an Artificial Intelligence course in college – make sure your AI algorithm can’t produce Skynet. James Cameron’s Terminator has not left the popular imagination, but it is not the sole source for imagined future battles or dystopias. One of the struggles in The Matrix – knowing what is real – makes an indefatigable T-800, or a T-1000 composed of mimetic polyalloy, seem only mildly challenging by comparison. The rapid advance of generative AI has caused people to reconsider what would constitute science fiction (emphasis on fiction) in the next ten years.

It is that similarity with the challenges of The Matrix that, I think, causes anxiety about the future with respect to learning. There are real challenges in the pursuit of learning that are compounded by the widespread ability to quickly produce shoddy AI-generated content. It is easy to see how this evokes the questions, “What is even real anymore?” and “Will people be able to separate fact from fiction?” While some will certainly be misled by it, this content does not fundamentally impair our God-given capacity to learn about His creation, to pursue the truth.

Eggs are Eggs

Even before generative AI became popular, learning without the benefit of a trusted instructor or instructor-selected materials could be intimidating. For many subjects, there are an overwhelming number of sources to choose from. Different authors can come from different schools of thought or emphasize a different set of facts, while some are just clearer than others. Not all works are equally trustworthy, either. Some have exaggerated or unsubstantiated claims, faulty logic, or even a valid argument based on a false premise. For example, how much credibility should be given to the claims of miracles after Aquinas’ death? Does the fact that something so extraordinary was only mentioned by one of my daughter’s sources reflect differently on the author who included it versus those who excluded it?

Another layer of complexity is context. People, events, and ideas do not generally appear ex nihilo. Knowing about King David’s time as a shepherd when he was young shows where he draws the imagery for the beloved Psalm 23. Grasping the early trinitarian controversies requires knowing about the Arians, Nicenes, and debates about “homoiousios” vs “homoousios”. Understanding some differences between the Franciscan and Dominican orders colors Thomas’s family’s reaction to his choice to join the Dominicans.

The piles of books one needs to read to understand an argument and its historical significance can quickly become overwhelming. Yet we needn’t cry out, “What is truth?” or “Who will rescue me from this corpus of death?” Truth is knowable, even for non-academics. You can trust me, I’ve read people who have read Aquinas. As Chesterton wrote in his biography of Aquinas, “the philosophy of St. Thomas stands founded on the universal common conviction that eggs are eggs.” This is as it must be for the Christian who believes that God has spoken to His people through His word, which was recorded for all time in the Holy Scriptures. God would not have given His word to His people and made them a pillar and buttress of the truth (1 Tim 3:15) if it were not knowable, even if all matters are not alike plain.

In the Psalms David compared God to a rock (Ps 18:2; Ps 62:6-7). He used his real knowledge of the world – rocks are hard, rocks are good for building – acquired through his senses and experience, to proclaim divine truth – the Lord is a fortress, a deliverer – acquired through his experience of salvation. Jesus taught about the kingdom of heaven through many comparisons (Matt 13:24–50; Mark 4:26–29; Luke 13:18–21). The objects (e.g., mustard seed) and experiences (e.g., finding a pearl of great value) he used for comparison would have been understandable to His hearers, even if they were blind to the truth of the kingdom. The apostle Paul opens his letter to the Romans with a bold declaration that creation itself makes it evident that there is a Creator, and, therefore, all are responsible for knowing it (Rom 1:18-20). So, even if some deny it, we can know that eggs are eggs and there is a good God who made them.

AI Slop Enters the Chat

While we can explore, learn, and know a great many things, that doesn’t make all things equally easy to learn. Given the many challenges to learning that existed prior to generative AI, does the presence of AI slop make the challenge of pursuing truth insurmountable? Or, to rephrase the question, does the existence of content that is noticeably low quality, produced with presumably low effort, so completely hinder our ability, individually or collectively, to learn that it will cause a crisis? I don’t think so.

Feed-driven social media – those platforms designed to keep you scrolling their feeds (e.g., Facebook, X, YouTube) – are probably where this content flows most freely, and I am not original in saying that it would do most people good to stay off them entirely. Given that many choose to stay, though, it would be to our great benefit not to treat these platforms as valid sources of information. The effort to sift through what’s found there will likely outweigh the benefit of searching there to begin with. After all, not all AI-generated content is instantly noticeable, and the remaining content is rarely known for its virtue or truthfulness.

Anecdotally, books are not even a guaranteed safe haven from AI slop. The relative ease of self-publishing, while good for genuine writers, also facilitates these less desirable products. Even though retailers like Amazon make returns and refunds easy, encountering enough lackluster books will create issues of trust. Reviews, while not completely unhelpful, have their own trust problems, as anyone who has bought an item from a previously unknown brand can likely attest. Like feed-driven social media, review-driven retailers might not be the right place to start a search when looking to learn.

While it is good for some people to know about looking for vanishing points from converging parallel lines or analyzing the visual noise in images with Fourier transforms, as Hany Farid demonstrates, these skills will not be accessible to everyone. The detection techniques are not likely to stay the same over time, either. Some are working on techniques that might be helpful for websites more broadly. One such proposal looks to create networks of sites that have been vouched for as containing human-made content. It doesn’t hurt to hope that one of these solutions could become as ubiquitous as CAPTCHA systems in separating the wheat from the chaff. Adoption is always a struggle, though, and these solutions could find themselves with trust problems similar to those of reviews.

While it might be tempting to simply disregard all articles, books, and videos published after 2022, when ChatGPT was released, this will not solve our learning problems. Time will go on, new topics will arise, advances will occur, and not all subjects will have the benefit of decades or centuries of publications. Eventually, content from the AI era will need to be consumed, and that will be sooner rather than later for some fields, like medicine. It is doubtful, for example, that the broader population will advocate for doctors to not read any research on new strains of viruses simply because it was written in the AI era.

Fortunately, AI slop is not so much introducing new problems as adding to old ones. From a learning perspective, the main problems with slop are that it is low quality and of dubious trustworthiness. These are real challenges, but they are not insurmountable. New technology might play some role in overcoming them, but it seems more likely to me that cultivating learning practices whose first step is not “search the Internet” will fare better.

Truth, Trust, and Telos

All learners, but especially those who are self-directed, should diligently seek the truth, think humbly in community, and remember our purpose for learning. The seriousness of the topic or level of interest might determine the rigor with which these practices are applied, but they apply broadly, whether the pursuit is a hobby, a career skill-up, or Bible study. Over time we can cultivate trust in different spheres – individuals, communities, publishers, etc. – that will help us avoid or mitigate the woes of the AI slop slough.

  • Diligently seeking the truth is a commitment to live in and understand the reality God has given to us, not the one we would rather have. Asking questions, reading broadly, and validating what we learn (as much as we are able) are helpful ways to approach this.

  • Thinking humbly in community is an acknowledgement that our own perspective can be limited and that there are many things we can learn from others. Soliciting recommendations and engaging others by articulating ideas and receiving feedback assume that we can learn from others. Humility, a shared goal or interest, and an appropriate level of openness will be the key to building trust.

  • Remembering our purpose for learning is, hopefully, a freeing reminder that only God is infinite and omniscient. We do not have to go down the infinite rabbit holes we come across to say that we know, or know about, a topic or to be able to accomplish a particular task. Further, what is the “why” behind our “why”? For any pursuit, if we go back enough levels, we should remember that our chief end is “to glorify God, and to enjoy him for ever” (WSC 1). If the journey is at odds with that chief end, it is off track and needs to be righted.

These practices are not linear, mutually exclusive, all-encompassing, or even equally applicable to all pursuits, but they are generally important to learning and building trust. Part of the problem in the pursuit of truth is a lack of trust in sources. Trust is predicated on dependability. Generally, the more often a person, group, or organization provides useful input (sources, feedback, ideas) that points us toward truth, the more we trust them. AI slop is bad for the economy of trust not only when it contains false information, but especially when recipients discover, absent any prior disclosure, that the content is AI-generated.

Given this relationship between trust and the pursuit of truth, it might feel like a chicken-and-egg problem. How can one learn without first having trust, and how can one validate a source without prior knowledge? It is important to realize, though, that we have not lost our tie to reality or our God-given capacities. There are certain things we can verify through our senses, trial and error, and reason. Deriving everything from first principles would be quite a task, one that most are not up to; but we aren’t left to derive everything from first principles.

We started with a daughter looking to her parents for guidance. Children learn to trust their parents because they are (hopefully) consistently there to provide for them. We ought to be good stewards of that trust and teach them good things – truth about the world, the God who made the world, and how to live in it. As part of this, it is our privilege and duty to equip them for being in community and learning, for pursuing the true, the good, and the beautiful.

Lifelong learning is good, and I commend it to all, but there is an important caveat in this whole matter – there is a real need and place for experts, and, by definition, amateurs are not experts. My point is that one does not need total knowledge with absolute certainty on every point to say that one has knowledge. There are many endeavors we can undertake that do not require us to be experts, only to know more than we currently do; endeavors that help us to better love God and neighbor, to be better stewards of what we’ve been given, or otherwise to glorify God and enjoy Him. These endeavors are worth pursuing.

A Christian perspective should give us confidence to persevere in the pursuit of learning, because it is not a fruitless endeavor. AI slop is not an impenetrable barrier between us and God’s creation; it is noise that can be overcome through diligence and community. Joining or forming communities that value the truth can be powerful for building trust, not because they do or don’t use generative AI, but because they are tethered to reality. Welcoming requests for recommendations, or questions that could have been a Google search, might feel inconvenient at times, but it is more dignifying and constructive than the alternative, even when no one has all the answers.