
Wendell Berry's Unanswered Question

October 13th, 2025 | 15 min read

By Samuel C Heard

Earlier this year, the language education company Duolingo caused a stir when its CEO, Luis von Ahn, put out a memo announcing his intention to make Duolingo an “AI-first” organization. He wrote in April:

“AI is already changing how work gets done. It’s not a question of if or when. It’s happening now. When there’s a shift this big, the worst thing you can do is wait. In 2012, we bet on mobile ....  We’re making a similar call now, and this time the platform shift is AI.”

As part of that announcement, von Ahn told his employees that they would begin to phase out contract workers, explaining that they were doing the kind of work that could be handled by AI: “Being AI-first means we will need to rethink much of how we work. Making minor tweaks to systems designed for humans won’t get us there.”

It turned out that not everyone was thrilled about these changes, and Duolingo faced a considerable amount of backlash for the announcement. This apparently came as a surprise to von Ahn — he wasn’t sure why people were targeting Duolingo, seeing as other companies around him were making the same decisions. “Every tech company is doing similar things, [but] we were open about it,” he told the Financial Times.

In this regard, von Ahn has been right. Though not nearly as outspoken as Duolingo, over the past year a number of other companies have indeed been phasing out workers in a pivot to AI: 

The list goes on. Some companies seem to be pushing AI as far as they possibly can go. In May, finance company Klarna’s CEO Sebastian Siemiatkowski said that the company had dismissed 40% of its workforce — including the entirety of its customer service department — so that they could prioritize AI implementation. The company even touted a “replacement” of Siemiatkowski himself by using an AI-generated version of him to give a quarterly update. It’s no wonder von Ahn was baffled. 

Here we face the irony of our present moment: A company like Duolingo faces backlash for going “AI-first” just as the novel technology gets stitched into the fabric of our lives everywhere else. Not only are people losing their positions to AI — even the workers who have kept their jobs are now regularly briefed on how to implement AI and LLMs in their line of work. Browse any major website, and you’re likely to find a new “AI tool” to help you carry out your tasks. AI-generated text now populates our search results before we ever reach links to other pages. We have trouble discerning whether that post, that image, or that piece of music was created with AI. The onslaught of an “AI-first” culture is not a question of if or when. It’s happening now.

And yet, our criticism of companies like Duolingo for making this culture explicit reveals a state of dissonance and unease about this whole situation. We feel conflicted; we are divided among ourselves. We don’t like it when a company says it’s going “AI-first.” But then again, why is every institution getting involved in the technology? Why are dozens of companies firing workers and replacing them with AI? Why is every website asking us to interact with its new chatbot? If, collectively, we find the concept of an “AI-first” culture off-putting, then wouldn’t our best course of action be to limit this technology’s usage in everyday life?

Of course, the answer to this question is that some people do indeed want to create an “AI-first” culture — von Ahn’s memo proves the point. But I wager that they stand in the minority. The rest of us remain unconvinced. Even those who regularly promote its purported benefits — the AI optimists among us — would likely shudder at the idea of an “AI-first” culture. Many, if not most, of us do not want this.

But what’s harder, I think, is for us to explain why we don’t want this. And there’s the rub: No matter how much we dislike it, an “AI-first” mindset can still grow in a culture that largely feels opposed to its domination. All it takes is a community that cannot properly justify why its work must be done by people. A culture that has no stable footing in its own worth. 

Why Should People Do Our Work?

The simple truth is that it’s actually pretty straightforward to resist the growth of an “AI-first” culture, if that’s indeed what we want. All we’d need to do is answer the following question: “Why should people do our work?”

On the surface, even that sounds a little foolish. “Why should people do our work?” feels akin to asking why we would choose the sun for daylight. What other choice do we have?

But of course, the past few years have shown us that we do indeed have a choice between a person and a non-person to accomplish our work for us. If I’m looking for a piece of art for my living room, I can now prompt a chatbot for an image just as easily as I could commission an artist or photographer. An employer deciding whether to hire a copywriter must now factor in the reality that at any moment, an LLM can produce the necessary writing on demand. People experiencing mental health crises are no longer required to seek out counselors in person, but can now interact with AI to diagnose their struggles. It turns out that the sun is not our only option for daylight. Artificial light sources abound.

By its very definition, an “AI-first” culture is one whose members collectively and repeatedly choose the artificial “non-person” over the person. So it follows that, if this is the sort of culture we’re trying to avoid, then we need to choose the person over the non-person in the majority of our interactions with the world. Maybe there is a place for AI chatbots in our ideal culture. But this technology would need to remain subservient to the actual choosing of persons if we want to avoid going “AI-first.” In other words, if given the choice between a graphic designer and an AI model, then we need to be willing to choose the designer, at a minimum, 6 out of 10 times — or better yet, 9 out of 10 times, or even 99 out of 100 times. 

Hopefully you see where this is going. To choose a person over a non-person — a graphic designer over an AI chatbot, for example — looks nice on paper. But pause to consider what this actually means. In choosing the designer, you have opted for the following realities:

  • The designer might disagree with you or feel resistant to your wishes. (AI has been engineered to agree with you.)
  • The designer’s work takes time, sometimes days. (AI produces images in seconds.)
  • The designer is a human being, and that means feelings are involved. It takes social energy to engage this person face to face. (AI demands nothing from you.)
  • Most importantly — the designer requires a paycheck. (AI requires, at most, a subscription.)

Now, our question becomes a little clearer: Why, exactly, should we rely on people to do our work? After all, it’s hard to compete with a machine that “doesn’t go on strike” and “doesn’t ask for a pay raise,” as one CEO recently put it. These things produce images and words instantaneously. They don’t talk back, and they demand nothing other than your ability to generate a prompt. We know that our resistance to going “AI-first” means choosing the person over the non-person, but it’s in our attempts to justify this choice where things start to go awry. After all, why would we want to choose the less convenient option?

Some have tried to argue for people on the basis of our skill and quality of work. At the moment, it’s a popular move among many AI skeptics to claim that AI technology is not actually as intelligent as advertised, and thus cannot be trusted when compared to the quality of craftsmanship from a human being. We ought to choose the person, then, in order to get a finer resulting product. 

It’s easy to see why this argument is being made — when an AI company CEO proclaims that its model has reached “PhD level” knowledge, and then a few days later that very model fails to count how many b’s are in the word “blueberry,” you’re left scratching your head a little. Never mind the fact that AI has already made itself notorious for generating convoluted “slop” images. And of course, we all know that it can produce an email, but people are getting better at recognizing an AI-generated tone of voice.

As someone who is fairly skeptical of this technology and its usefulness at the present moment, I am sympathetic to this argument. But I have several concerns with this line of thought. For one, while it may be true that our AI chatbot is not a fully trustworthy guide at present, what happens if it reaches that proclaimed “PhD level” in five years? Two years? Should it evolve to become a superintelligence — the closest thing we can get to omniscience — would we still be able to justify choosing the person over the non-person? We have to grapple with this technology not only as it currently stands, but also based on what it has the potential to become.

Ultimately, as tempting as it may seem, we cannot ground our argument for people on the basis of quality of output. It may work now in some contexts, but the argument falls on its face as soon as AI creates a product of similar quality to its human counterpart. And we don’t have to wait years to see this happen: Already, LLMs are crafting blogs, emails, and social media posts, all of which are usually engaging, or at a minimum coherent. Can a marketing writer create better copy? Of course — though even now, this isn’t guaranteed. Does it even need to be better, though? In the eyes of marketing leads and business owners looking at the bottom line, a “good enough” text made by a cheap AI model may be a lot more tempting than hiring a writer. Especially when we set efficiency and profit as our baseline standards, no person is going to get the upper hand over the machine. We might as well throw in the towel. 

But if we really do want to resist going “AI-first,” then we still need to be able to explain why we would ever choose people to do our work, especially when we have the option to offload our jobs to machines. This proves challenging when we rely only on the quality of our products to do the talking for us.

The only other option, then, is for us to move away from these considerations of our efficiency and productivity and get to the real root of the matter. Questions of why we should rely on people to do our work — indeed, why people should even work at all — are necessarily intertwined with our purpose: who we are, why we exist, what our labor is even for. The first question we ask, then, should not be “Are people more efficient?” or “Can people create a better product?” Instead, we must ask ourselves what writer and farmer Wendell Berry asked us 40 years ago: “What are people for?”

What Are People For, Anyway?

I’m not sure whether it is encouraging or disheartening to remember that this problem — our inability to explain why people should do our work — is not a new issue for us as a society. It was a problem Berry himself pointed out four decades ago, when he wrote an essay around this very question. 

Berry’s essay “What Are People For?” does not directly address AI, but it’s easy to see the parallels. When Berry penned this essay in 1985, he was witnessing a situation not unlike our own: more and more, people and their work were being replaced by machines. In Berry’s case, the replacement he observed was happening on farmlands across the country. In the decades after World War II, rural communities across America underwent a radical transformation following the advent of industrial farming equipment. Within only a few years, machines proved capable of doing the work of hundreds, even thousands of men.

What resulted was a mass migration of people away from the countryside. Once families began to feel — or were rather told, one way or another — that they were no longer needed in their rural communities, they packed their bags and moved into the cities. Some, of course, did this voluntarily, seeing that they were now free to move beyond the drudgery of farming. But many others had no choice. Small farms began to fail, since they could no longer compete with industrially equipped farms producing at a far larger scale. Farm sizes swelled dramatically, while the overall number of farms decreased at an unheard-of rate.


Berry, a farmer himself, spent several decades observing these changes firsthand. And from his vantage point, it seemed as though all the major institutions around him were taking a sort of Darwinian, “survival-of-the-fittest” posture toward these changes, claiming this exodus as a victory. As he begins his essay:

Since World War II, the governing agricultural doctrine in government offices, universities, and corporations has been that ‘there are too many people on the farm.’ This idea has supported, if indeed it has not caused, one of the most consequential migrations of history: millions of rural people moving from country to city in a stream that has not slackened from the war’s end until now. And the strongest force behind this migration, then as now, has been the economic ruin on the farm. Today, with hundreds of farm families losing their farms every week, the economists are still saying, as they have said all along, that these people deserve to fail, that they have failed because they are the ‘least efficient producers,’ and that the rest of us are better off for their failure.

Of course, the idea that there could be too many people working on a farm feels asinine to Berry — it betrays a complete ignorance of the actual working conditions of a farm. But more than that, to Berry, the claim that our farms have “too many people” reveals an unawareness of, or even indifference to, the long-term health of our land as it’s run by machines. Do fewer people really result in better care of the farmland? Are our new practices sustainable, with topsoil eroding faster than it did during the Dust Bowl? Berry thinks not.

But that’s not Berry’s only response to these economic leaders. As the essay progresses, Berry asks us to consider what is to become of those who have moved to the cities, only to find that they have no place there, either: 

When the ‘too many’ of the country arrive in the city, they are not called ‘too many.’ In the city they are called ‘unemployed’ or ‘permanently unemployable.’ But what will happen if the economists ever perceive that there are too many people in the cities? There appear to be only two possibilities: either they will have to recognize that their earlier diagnosis was a tragic error, or they will conclude that there are too many people in country and city both — and what further inhumanities will be justified by that diagnosis?

The point is not lost on us today. If a person’s work can be replaced by machines in the country, then it can just as easily be replaced in the city, as well. “What then?” Berry asks. Are we to say that there are too many people in the cities? Will we be left to conclude that people are obsolete? 

And this is where Berry strikes at the heart of the matter, asking the question behind the question:

The great question that hovers over this issue, one that we have dealt with mainly by indifference, is the question of what people are for. Is their greatest dignity in unemployment? Is the obsolescence of human beings now our social goal? One would conclude so from our attitude toward work, especially the manual work necessary to the long-term preservation of the land, and from our rush toward mechanization, automation, and computerization. In a country that puts an absolute premium on labor-saving measures, short workdays, and retirement, why should there be any surprise at permanence of unemployment and welfare dependency? These are only different names for our national ambitions.

The Question That Still Needs an Answer

Fundamentally, what Berry recognizes is that before we can even begin to consider whether there are “too many people” at work, we must step back and ask a greater question. And that question cannot be rooted in economics, but in teleology: What are people for? To what ends do we exist? Why did God put us here? Only after we have answered these questions can we go on to decide if and why people ought to be doing our work, or whether we can offload any of our labor to artificial machines. 

But Berry also understands this — that the collective leaders of society, those who hold sway over these choices, have failed to answer the question, either by forgetting that the question exists or by ignoring its presence altogether. And in so doing, they have let the enticement of profit and the hand of economics answer the question for them. This was true 40 years ago, and it remains true today: If we fail to answer the question “What are people for?” then we will let the allures and idols of profit, productivity, and convenience answer the question for us. And we do not want these false gods answering this question — because as Berry points out, the end result is nothing other than our obsolescence. 

Hard to believe we could be made obsolete? Take a few modern-day scenarios, hypothetical but not altogether unbelievable, to demonstrate this reality:

  • Two people begin chatting on a dating app. Each one, seeking to put their best foot forward, decides to consult an LLM to craft their message. Soon, both are copying and pasting text into their conversation, one LLM talking to another.
  • A student, overwhelmed by an assignment, decides to use an AI chatbot to write his essay. A teacher, overwhelmed by deadlines and the amount of work piling up, runs her students’ essays through an AI chatbot to grade their papers.
  • A pastor, pressed for time and struggling through a text, gives his congregation an AI-generated sermon. A congregant, away on Sunday and too busy to listen during the week, decides to feed the audio to a chatbot to generate a summary of the message. 
  • A hiring manager, tired of sifting through applications, allows an AI model to evaluate candidates. A job candidate, looking to get an edge over other applicants, uses an AI model to craft a compelling résumé and cover letter. 

On their own, none of these situations feel consequential. Each person chooses the machine not with any deeply nefarious intent, but because the machine is extremely convenient, efficient, and effective at creating the necessary product. But in each circumstance, people have been made obsolete — maybe not completely, but at least momentarily. No person actually did anything here. No one is contributing to or benefiting from anyone else. These individuals have, at least for a second, made themselves unnecessary, both to each other and to the broader community.

Expand these examples to a much larger scale, and you have the workings of an “AI-first” culture — one in which the necessity and value of people are not guaranteed. In this culture, businesses determine that they have too many people working on their writing and design projects, or too many humans in their human resources department. Musicians find that they no longer show up on playlists. Therapists lose patients, who show a preference for the artificial. In this world, profit runs the day. The non-person is elevated, celebrated even. Things that are efficient are kept; things that are messy get left behind. What are people for? Nothing, apparently. It’s the question left unasked.

Again, I’m willing to bet that most of us don’t want this. But our answer is not to begin comparing our output and productivity levels to those of our machines. That’s where we go wrong. For in the act of comparing ourselves to our non-person counterparts, we have already assumed our value to be equal to theirs. We have suggested, both to ourselves and to others, that people can indeed be made irrelevant. Had any of the people or businesses mentioned above stopped for just one second to ask “What are people for, anyway?”, it’s possible they could have recognized that a person exists for a greater purpose than that of a machine, and that a person’s work, though messy and sometimes inefficient, may be of more real value than anything generated through artificial means.

Thus, to ask Berry’s question “What are people for?” is really to begin taking a stand against our obsolescence. “What are people for?” acknowledges that we exist on purpose, for a purpose. It implies that we have an aim that can only be accomplished by us as individuals. Machines may serve us in our progress toward these ultimate ends, but they cannot replace us, nor can they do the work for us. In other words — people are not unnecessary. We are valuable.

But it’s not good enough simply to leave it there. If we seek to be the kind of culture that can completely justify the work of our own people, then we need to have a shared understanding of what we are even aiming for. Asking may be the right place to start, but Wendell Berry’s question still needs an answer. Leaving the question blank will only tempt our false gods to fill that answer in for us.

I, for one, cannot claim to have this answer completely figured out. But I know where I am looking. To put my cards on the table, I have been a Christian for over two decades. I am fully convinced in my own mind that the answer to this question is found in the personhood and deity of Christ, in the words of his Scripture and the teachings of his church. I have begun my search there, and I am content to continue searching on this path. But I know I’ve not yet arrived. 

But I also know this: I will not stop looking until I’ve found my answer. People are far too valuable for us to do otherwise.   

Samuel C Heard

Samuel Heard is a writer and editor based in Greenville, South Carolina, where he lives with his wife and three children. His writing has appeared in Mere Orthodoxy, Baptist Press, Ekstasis, The Center for Faith and Culture, and elsewhere. Connect with him at scheard.com.
