A pair of dice rolls across the table. A man glances at a chart and then pulls a measure of music from a pile. He rolls the dice again and selects another measure. Then another. His friend watches and laughs as the piece grows longer and longer. The man sits down at the piano and begins to play. As the randomly arranged segments begin dancing through the air, his friend’s face turns from mirth to furrowed brow. There is something delightfully scandalous about it.
This is the Musikalisches Würfelspiel, an eighteenth-century parlor game traditionally associated with Mozart. The trick was simple: a set of cleverly composed musical measures that would produce pleasant music no matter how they were arranged. The game appeared to reduce the musical craft to a random mechanism. Much of the discussion around artificial intelligence treats the use of the technology as a kind of Würfelspiel.
Standard accounts worry that users will become nothing more than curators who arrange pre-composed parts, re-rolling prompts until an output vaguely matches their intention and stitching the results together, all without the slow, careful development of the fundamental skills essential to craft. The question of how artificial intelligence relates to craft matters not only because the work we do is constitutive of a functioning society, but because many forms of work are so entwined with thick community relationships and practices that disruption in these spaces can be close to identity-destroying.
While this is an intuitive picture of the technology and its applications, the evaluation is stuck a few years behind actual practice. Earlier iterations of frontier models were crude: users would enter a prompt, press enter, and be greeted with text, image, or audio, much like rolling a die and “composing” a song. There was limited control over the process or the outputs, and a glaring disparity between the effort the user put in and the quality of what came out. Over the last few years, however, workflows have grown remarkably more sophisticated. The shift is clearest in two examples, image generation and music production, where workflows are highly granular and give the user a great deal of control over the creative process.
One popular local UI framework, ComfyUI, allows the user to deploy hundreds of custom nodes to finely tune inputs and outputs at every stage of generation. Advanced workflows using such tools begin to resemble a synthesizer board, with different dials and knobs tuning the fundamentals of the image: subject, composition, color, framing, and style. A user might draw a sketch and create a node setup that reworks the line art, adds specific pieces and styles of clothing, and then renders the character on a turnaround sheet, mimicking a concept photoshoot. (It should be no surprise that these sorts of workflows are already being incorporated into the fashion industry.)
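To make the “synthesizer board” picture concrete: ComfyUI represents a workflow as a JSON graph in which each numbered node names an operation and wires its inputs to the outputs of other nodes. The fragment below is a hypothetical minimal sketch of such a graph, not a workflow from any real project; the node types follow ComfyUI’s built-in nodes, while the checkpoint filename and prompt text are invented for illustration.

```json
{
  "1": { "class_type": "CheckpointLoaderSimple",
         "inputs": { "ckpt_name": "example-model.safetensors" } },
  "2": { "class_type": "CLIPTextEncode",
         "inputs": { "text": "character in a tailored coat, turnaround sheet",
                     "clip": ["1", 1] } },
  "3": { "class_type": "CLIPTextEncode",
         "inputs": { "text": "blurry, low quality",
                     "clip": ["1", 1] } },
  "4": { "class_type": "EmptyLatentImage",
         "inputs": { "width": 1024, "height": 1024, "batch_size": 1 } },
  "5": { "class_type": "KSampler",
         "inputs": { "seed": 7, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0,
                     "model": ["1", 0],
                     "positive": ["2", 0],
                     "negative": ["3", 0],
                     "latent_image": ["4", 0] } },
  "6": { "class_type": "VAEDecode",
         "inputs": { "samples": ["5", 0], "vae": ["1", 2] } },
  "7": { "class_type": "SaveImage",
         "inputs": { "images": ["6", 0], "filename_prefix": "turnaround" } }
}
```

Each `["node", index]` pair wires one node’s output into another’s input. Custom nodes extend this same graph, which is how an advanced workflow can grow from seven nodes to hundreds of dials and knobs.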
Music production has seen similar shifts. Large numbers of musical artists and producers have integrated Suno, a premier music generation model, or similar artificial intelligence programs into their workflows. Inexperienced artists who simply enter a prompt and publish the results find that their work is largely (and rightly) ignored. At the other end of the skill distribution, the most advanced artist workflows are similar in complexity and granularity to image generation and involve significant human input grounded in knowledge of music theory and sound production. A sample can be produced by an AI program according to specific parameters, its stems separated out, and the results dropped as separate tracks into professional studio software. The sample is then treated as a standalone element, remixed, or otherwise edited and incorporated into a full song with a synthesized drum track and human-composed instrumentals or vocals.
The most telling development across these creative spaces is that most of the concerns with machine learning seem to terminate in discussions around the ethical sourcing of data sets rather than suggestions that the technology is fundamentally outsourcing creative capacities. Disruption here is already turning into creative adoption, which is what we would expect if the technology were another form of acceleration continuous with the industrial revolution rather than something completely different. That these communities can distinguish between deskilled and skilled uses of the technology cuts directly against the claim that deskilling is totalizing or novel. Practitioners are applying and developing core fundamentals in new technological processes, discussing and exploring successful integrations in ways that begin to look suspiciously like a craft.
***
However, even if we grant that modern practitioners can exert fine-grained control over the models, this might ultimately be a shallow control, given that we still do not understand how these models work at a fundamental level. It is reasonable to worry that a concern about implicit knowledge in traditional craft could harden into a full-blown rejection of artificial intelligence: why keep using a tool we could never understand? Auditing a model’s internal processes to improve outcomes is a genuine concern, and for a while it seemed to represent an impassable frontier of knowledge.
Yet the mysterious nature of these models will not remain mysterious forever. Researchers have begun opening these black boxes and mapping their internal structures. Recent interpretability research has found, for example, that Claude’s internal model counts characters in a line using six-dimensional helical manifolds, a geometric representation of a variable that mimics place cells in mammalian brains. This suggests the internal structures of these models are elegant and orderly rather than illegible. That computer scientists are beginning to reverse engineer the technology much as we have reverse engineered the brain should not be surprising. As Christians we live and work under a cultural mandate that not only presupposes creation is knowable but charges us with the task of knowing it.
If AI is not sui generis with respect to its positive practices, neither is it with respect to its dangers. Some of its uses will deserve to be categorized under an umbrella of technologies designed to hook directly into your dopamine systems to maximize profit. Furthermore, even setting aside workflows where AI is doing material work—protein synthesis, diagnostic imaging, drone-directed agriculture—most use cases integrate neatly into workflows that exist entirely in the digital space. AI is unlikely to restrain, let alone reverse, the most common digital modes of production.
Yet the greatest danger is both more pervasive and less obvious: AI is much more likely to be deployed as a multiplicative layer that allows ever more efficient micro-targeting of digital services and physical products by industries that already profit from compulsive behavior. The advent of hyper-personalized, real-time engagement strategies will require legislative safeguards, especially if AI enables video advertisements generated on the fly for an exhaustively mapped individual profile.
The question of how this intersects with a Christian anthropology is necessarily complicated. Much of what has been written operates at a high theological level and in a defensive posture, either targeting the fringe ideology of Silicon Valley tech elites in an apologetics framing or treating the use of AI as an unwitting portal to the demonic. Yet if the continuity thesis holds, then we should ground our response in a few fundamental remembrances.
Technology has existed since the Garden and is an integral component of our cultural mandate. We should also remember that one of the core distinctions between the Creator and his creatures is that we never create matter but merely (!) rearrange it. This becomes clear whether we consider an ancient farmer in Mesopotamia irrigating a plot of soil, a medieval peasant in Northumbria weaving a basket from flax, or a young musician in London taking the raw outputs of machine sound, adjusting their pitch, volume, and length, and incorporating them into a DAW loop. While there are all sorts of important distinctions and qualifications between pre- and post-industrial craft, there is no metaphysical distance between the two.
If the continuity between industrial technological disruption and its digital age permutations holds, then so do the spiritual realities. As Joseph Minich demonstrated in his book Bulwarks of Unbelief, our experience of the post-industrial world is phenomenologically bent toward unbelief. Rather than breaking with this posture, artificial intelligence represents an acceleration of this pressure. This suggests not a novel response, but a deeper one consistent with those practices that have always defined Christian community.
I am thinking here not only of my church’s weekly dinner and hymn sing but also of an elder, an artist, who is painting a mural for the narthex: a leader practicing his craft in a local community. These sorts of practices create the ground—the posture of learned humility, embodied friendship, and grounding in a historical tradition—needed to cultivate people who can use technology responsibly. Our task is not to develop a unique theology of AI but to catechize our members into a people who can wield this technology without becoming captive to its internal logic. Like alcohol, artificial intelligence will become a test of character, a dangerous good that divides the foolish from the wise.