I admit that I was a little let down by the ending of Paul Kingsnorth’s Against The Machine. Much has been written about this book, and rightfully so. Kingsnorth’s critique of the all-encompassing machine culture we live in is a necessary wake-up call from the short-form-video-induced hypnosis that much of our society seems to be in. I appreciate a lot about the book, but Kingsnorth’s postures for the machine age, for me, left at least a little to be desired.
I’ve joked with people that Kingsnorth only gives people two options: reduce your screen time or bomb a data center. That’s reductionistic, of course, but only a little. Kingsnorth describes these two options, the two “Ascetics” as he calls them, as “Cooked” (reduce your screen time) or “Raw” (at least think about maybe bombing a data center).
The Cooked Ascetic is more about drawing lines and limits around technology. Kingsnorth, for some reason, drew his line at scanning QR codes. I also do not love QR code menus and such, but I’m not sure why that would be a line in the technological sand. What Kingsnorth gets right about the Cooked Ascetic is that we absolutely need to draw lines and limits around our technology use. It’s also true that, as he says, you can “use the Machine against itself: use the internet to connect with others who feel the same, or to learn the kinds of skills necessary to keep pushing your refusal out further, if you want to.” There are ways to use the Machine against itself and, somewhat ironically, technologies like AI and Bitcoin actually create more avenues for doing that, not fewer.
The Raw Ascetic goes completely off the grid. As he says, “Bombing data centres: this is the mindset of the raw tech-ascetic.” Raw ascetics band together and “never swipe another screen.” My first question to Kingsnorth is: besides the Amish, who is really a Raw Ascetic? My second question is: how many people are economically privileged enough to buy land, completely disconnect from civilization, and sustain their lives off the land? Even Kingsnorth himself is, by his own admission, not a Raw Ascetic; and you can tell he isn’t too happy about it. Even he is reaching an audience using Substack, writing on a computer, and corresponding with folks through email. These categories of Raw and Cooked Asceticism are as close as Kingsnorth gets to practical suggestions on how to resist the machine. You can either reduce your screen time a little bit or opt out of civilization and consider becoming a techno-terrorist.
My problem with Kingsnorth is not that he is encouraging some kind of intentional engagement with, or withdrawal from, technology. I’ve done as much in my Monk or Missionary article, where I even suggest that maybe 80% of Christians should consider deleting their social media. I stand by that. If anything, it should probably be more. I’ve made similarly harsh critiques of social media, like in my piece, To Go Forward, We Must Go Backward. I’m not a techno-accelerationist.
It seems to me that what those in the Well-Done Cooked (you might say) and Raw (if they exist) camps miss is that much of this technology is simply baked (so to speak) into our society’s infrastructure now, and increasingly so. You might think that’s a bad thing, and depending on the example I might agree with you, but it does no good to pretend otherwise, just as it does no good to wish we lived in a time before cars and interstates. You might not like that we live in a car-centric society, but we do. Now what?
Plus, I’m not convinced that all of our technological progress and gains in efficiency are bad. For instance, I like Google Maps. I like having Google Maps pulled up on my Apple CarPlay. It makes driving places better. Do I like that the other day I saw a “Sponsored” tag under a restaurant on Google Maps as I drove by? Not at all. But I still think Google Maps as a technology is a net good, even if I think Google is mostly a bad company and the business model makes the product worse over time. It’s okay, actually, to like some of our new technological capabilities, imperfect as they are.
What AI?
What I’ve started to notice about the conversation around artificial intelligence, specifically, is that the primary thing being encouraged by some with more Kingsnorthian affinities is entire abstinence: AI-Raw Asceticism. Certainly, I don’t feel any need to make anyone use AI. But my frustration with this perspective is that it’s a really flat way of viewing AI. As if AI is a singular technology that does one thing, and it’s always a bad thing because of the way it plagiarizes work through its training data or atrophies critical thinking. I usually see some of these folks complain about AI on my social media feed, which served me their posts using, you guessed it, AI algorithms. Now, you might say that there’s a difference between AI algorithms and generative AI via LLMs, and I would agree with you, but that’s my point. Whenever you talk about AI—and this will be more and more true as time goes on—you have to answer the question, “What AI?”
In February 2026, Anthropic released two new models—Opus 4.6 and Sonnet 4.6—and two new products—Claude Code and Claude Cowork. The new models are far and away better than anything previously released. Hallucinations are a mere fraction of what they were just a few months ago. These models can now spend hours working on a single task. And yes, I’m aware it sounds like a sales pitch, but the free models simply are not as good as the paid models. I’m sorry; that’s how business works. Which brings me to Claude Code and Cowork.
Before February 2026, AI chatbots really were mostly that: chatbots. The chatbot functioned as a sort of anti-social media. It had the same social functions as text messaging and DMs, but you were conversing with a robot instead of a human. You would say something to it and it would use predictive reasoning to say something back. This is where the fears around plagiarism, cheating, and mental and emotional dependency came in—and for good reason! The chatbot’s capabilities were primarily language oriented and therefore lent themselves to what you might think of as humanities work. I agree with many of the concerns and critiques around AI and chatbots in this realm. Pastors shouldn’t outsource their sermons to AI, writers shouldn’t outsource their writing to AI, people shouldn’t outsource their therapy or relationships to AI, and so on. I agree.
Code and Cowork fundamentally changed the best use cases of AI. The outputs of Code and Cowork are not primarily language, but tasks. They don’t say something to you as much as they do something for you. And the things they do for you are not, typically, things that otherwise require significant amounts of philosophical deliberation. They write and push code, organize files, summarize data, edit spreadsheets, make slide decks, etc. All of the things that, ironically, require humans to function like machines. Plain English is now a coding language.
I know someone who took a business idea he had sat on for a few years and built it entirely himself in four days and for $350. It normally would have taken six months and $50,000. He did this the very next day after hearing that he should take Claude Code seriously. He has never written a line of code in his life. What’s left for him now is smoothing out the rough edges and doing the more human work of building relationships, showing the benefits of the product, and helping people use it the best they can. I don’t think it’s “dehumanizing” for a machine to write code—or fix a spreadsheet, or other similar kinds of work—instead of a human. The machine did machine work so the human could do more human work.
That is a fundamentally different use than the chat-oriented LLMs of 2022–2025. Which means that the conversation around AI needs to shift. We’re not just having humanities-centric conversations about AI. We’re having more STEM and Kevin-from-The-Office-centric conversations. This distinction is non-trivial. It is why I’m not convinced by a simple “No AI” policy: it doesn’t answer the questions, “Which AI?” and “For what purpose?” The reason we feel repulsed by the lifestyle of someone like Roy Lee, the founder of AI-cheating company Cluely, is the way he uses this technology in such a deeply inhuman way. And we should be repulsed by it. This is not how any technology should be used. For lesser reasons than this, it’s entirely reasonable for AI to be off-limits in some—even many!—areas. But the introduction of more agentic-style artificial intelligence should caution us against enacting blanket bans on a technology that is far more multifaceted than it was even a few weeks ago.
Revolution vs. Preservation
Which brings me back to Kingsnorth. While I don’t think his Raw and Cooked categories are adequate to meet the moment, I don’t think he is entirely off either. I just don’t think I would use those words. The categories that seem more helpful for living in a world with AI are Revolution and Preservation.
You might loosely map Revolution to Raw and Preservation to Cooked, but I don’t think they’re exactly identical. The reason is that Raw and Cooked both derive their identity from their explicit relationship to technology. You either revolt against the technology or you slightly modify your behavior with it. The Raw Ascetic and the Revolutionary look similar in that they eschew nearly all use of digital technology and dream of bombing data centers. The problem is that they are still reactionaries, ultimately centering their lives around technology, if only so they can get away from it. Kingsnorth himself says as much when he declares that his politics are best described as “Reactionary Radicalism.” Kingsnorth says that Reactionary Radicals “oppose any technology which threatens the moral economy,” which I agree with, but I’m not sure how QR codes do that, annoying as they can be. And why are we assuming pixel pushing is an essential part of the moral economy and not just part of the current economy? If we want people to be with their families, associate in communities, and work with their hands, we would want to free them from screens as best as we can by making that work as efficient as possible, right?
One of the odd features of agentic AI, and of the rise of an agent-first internet where agents act on behalf of people online through prompts sent via text, is that a more offline life may become more possible than it is today. This is especially true when you consider that it’s becoming easier and easier to “own the data center” in your own home through open source software—including open source AI models, which, of course, you wouldn’t exactly want to bomb.
The Cooked and the Preservationist also look similar in that they modify their relationship with technology by drawing lines, creating limits, and accepting inconvenience. But the Preservationist does not primarily derive their identity from their relationship to technology. They are mindful of technology’s formative power, for sure, but instead of pouring all of their energy into revolting against technological development (and, in many cases, being functionally locked out of future jobs, infrastructure, and positive efficiency gains for doing so), they direct their energy toward preserving what is good from a more pre-digital age.
The Preservationist still has family meals at the table and hosts people for dinner (and says “No phones allowed”). The Preservationist still gets up in the morning and says their prayers, reads their Bible, takes care of their body, and cultivates wisdom. The Preservationist still goes to church each week, listens to the preaching of the Word, fellowships with the saints, and partakes of the Lord’s Supper. They organize get-togethers, vote in their local elections, play with their children, and go on dates with their spouse. They build a library of books in their home. They participate in church meetings, volunteer in their kid’s school or with a local charity, and give some of their money away to those in need.
Kingsnorth gets right at this earlier in his book. He says,
The antidote to this is to dig down to those foundations and begin the work of repair. We are going to have to learn to be adults again; to get our feet back on the ground, to rebuild families and communities, to learn again the meaning of worship and commitment, of limits and longing. We are, in short, going to have to grow up. This is long, hard work; intergenerational work. It is myth work. We don’t really want to begin, and we don’t really know how to. Does any child want to grow up? But there is nothing else for it; no other path is going to get us home.
This was the thread that he needed to pull the most and is why I say I was let down by the end of his book, but not by all of it. Here, Kingsnorth gives us the mentality of the Preservationist. Not just reacting to technology, but building something better by preserving what came before the technology. By volunteering to grow up and be the adult in the room. By moving toward the things that make us more human and less like a machine. What the AI Doomers fail to see, I think, are the ways that AI has the potential of making this more possible, not less.
Of course, there is concern about AI taking many, many jobs away. I, in no way, want to minimize how transformative that could be in our society. Our economy could be radically upended. But we should be asking why we want humans doing the work of machines in the first place. Let the machines do machine work so the humans can do human work. Relating, creating, building, molding, sculpting, rearranging, and contemplating. That, I think, is the role of the Preservationist. Preserve the Good, True, and Beautiful—or as Kingsnorth himself put it: People, Place, Past, and Prayer—and let the machines push the pixels.
Kingsnorth wasn’t far off, but I don’t think a Raw Revolution is the answer. And I don’t think it will do to simply not have a smartphone or not scan QR codes. It’s not that his Cooked category is entirely wrong; it’s that it sees itself primarily as a reaction against something rather than as a positive effort to preserve and cultivate what is good from the old world in the new, while being willing to utilize technology in order to better enable that kind of life.
These are the conversations I believe we need to be having. The “How” and “What” of preservation. Building the new institutions, cultivating the new communities, fostering the new habits, and having the old conversations to preserve the good of the old world as we enter into the new. As Kingsnorth rightly said, this is intergenerational work.
I’m not afraid of using these new tools. But I want my mind, my heart, my life to be so steeped in the riches of what came before me that the machine’s place stays exactly where it is and ever ought to be: just a machine.
Ian Harber is the Director of Communications and Marketing for Mere Orthodoxy. He is the author of the book, Walking Through Deconstruction: How To Be A Companion In A Crisis Of Faith (IVP '25). He has written for The Gospel Coalition, Mere Orthodoxy, RELEVANT, and more. Ian lives in Denton, TX with his wife and two sons.