On Living Fast

Sometimes it seems like our minds race to keep up with the pace of technology, that the flood of information overwhelms us. The reality, argues Tom Vanderbilt, is the reverse: technology is actually racing to keep up with us.

Our senses are voracious, taking in and processing the world at a rapid clip. It takes only 25 milliseconds for a flash of recognition to light up our brains and a quarter-second to understand what we’ve seen. That is the pace at which we experience life. Recent studies show that we enjoy running at the speed of mind. When the information we receive through our senses, and the tools that deliver it, keep pace with our brains, we experience a certain degree of pleasure. We’re in a groove.

So when, say, movies speed up their delivery of visual stimuli, we seem to quite like it, which translates into greater demand. And our wish is Hollywood’s command. Movies have steadily and relentlessly offered up quicker scenes, moving from a ten-second average in film’s mid-century “golden era” to today’s five-second scene (or the 1.7-second bludgeoning of Quantum of Solace). That is why action films like The Bourne Ultimatum seem to have a more visceral quality; their frenetic pace moves more in step with our minds.

Yet for this we pay a price. Our brains are less able to weave these strings of rapid-fire stimuli into sustained experiences that linger in our memory. We then beg for more technologies that let us enjoy experiences at our rapid pace. Hence: Instagram. Here’s Vanderbilt:

The “technical” acceleration of being able to send and receive more emails, at any time, to anyone in the world, is matched by a “social” acceleration in which people are expected to be able to send and receive emails at any time, in any place. The desire to keep up with this acceleration in the pace of life thus begets a call for faster technologies to stem the tide. And faced with a scarcity of time (either real or perceived), we react with a “compression of episodes of action”—doing more things, faster, or multitasking. This increasingly dense collection of smaller, decontextualized events bump up against each other, but lack overall connection or meaning. What is the temporal experience of reading several hundred Tweets versus one article, and what is remembered afterwards?

Vanderbilt’s essay isn’t about answers; instead it offers the sort of clarity that invites further questions. It seems undeniably good that Google is able to offer search results at precisely the speed with which our brains demand it—less than 300 milliseconds. Or that our desire for communication and connection is no longer frustrated by the tools we’ve created. We can refresh our Twitter feed with a long drag and a “pop” of release. Like an itch being instantly scratched. It feels good. We want more. Now.

Perhaps this is also why we sense withdrawal when we’ve been away from technology’s instant gratification for too long, or feel frustrated when other devices (or people) in our lives don’t offer the same immediacy.

Do we need to carve out time to refresh and reboot ourselves? Do we go cold turkey or slap on a patch to satiate our desire for speed?

Today’s speed is useful, no doubt. Our brain enjoys it and longs for it. Yet we must remain mindful of what may be lost: the deep remembrance that our soul desires.


Lost At Sea, in Space, in the Cloud

Two of my favorite films of recent months, Gravity and All is Lost, have more than a few things in common. Both are basically one-man or one-woman shows about individuals trying to survive in an incomprehensibly vast wilderness. Gravity finds Sandra Bullock desperately attempting to return to terra firma after being stranded in space. All is Lost shows Robert Redford (in a mostly silent, yet tour de force performance) lost in the Indian Ocean after his solo yacht venture goes awry. Both films are very much about the visceral, unnerving feeling of alone-ness; both are about the frailty and contingency of man in an often-hostile universe, but also man’s ingenuity, adaptability and cleverness in survival mode. Both are very good films that you should see before they leave theaters.

Another survival story of sorts: Blockbuster Video. The once-dominant video store chain survived the digital revolution (and transition to cloud-based media consumption) longer than many expected. Yet as we knew it would eventually, Blockbuster announced this week that it will soon be closing its final 300 stores and ending its DVD-by-mail service.

The death of Blockbuster, following the death of the record store and the local bookstore (Barnes & Noble will surely not survive much longer), marks the ongoing transition to a new era in which cultural commerce no longer unfolds in any sort of common, physical Third Place, but in a digital diaspora wherein individuals personally access streams and store (I wouldn’t say “collect”) media for their convenient consumption. And while this iMedia world has its advantages (the ability to access millions of songs and movies on one’s phone with just a few clicks and swipes), it also has severe drawbacks.

Such as: Are we losing a sense of common culture? Perhaps that is an outdated question. To the extent that it ever existed (in America, for instance), “common culture” has been rapidly dissipating since at least the 1960s. Still, I wonder if the post-Blockbuster world of cloud-based media consumption is making it ever more unlikely that “culture” or “the arts” or “media” will be something that in the future pulls people together in unifying experiences, discussions and debates. After all, we don’t have to talk to anyone anymore (not even a person behind a counter!) when we purchase a movie, an album, a book. From start to finish, our entire experience of pop culture can happen through one little screen and/or one pair of headphones, wholly unique to us and totally tailored to our tastes, preferences and whims.

With everyone becoming their own self-styled curator, commentator, and à la carte consumer, and with the Internet exponentially subdividing niches, genres, and micro-communities for any of a billion interests, it seems implausible that “common” anything will survive the 21st century. As much as the Internet has gotten mileage out of the “connectedness” metaphor, it seems to be more adept at making us isolated consumers with the power to curate consumer pathways and narrative webs entirely on our own timetables and at our own discretion. We are subject to no one and nothing but our “instant” whims and desires; the curatorial power of “gatekeepers” has been diminished; metanarratives have been long deconstructed. We’re on our own, lost in the vast wilderness of the consumptive “cloud.”

Perhaps this is why the theme of “isolation” seems ever more ubiquitous in our cultural narratives. The solo shows of Gravity and All is Lost are not (of course) overt commentaries on 21st century media consumption trends. But I do think the subtext is there. We are alone, navigating our way in a free-for-all space. There’s a freedom in that. But also a terror. It’s a reverse claustrophobia: a fear of too many choices, too many open roads, too few guides and too little guidance.

One sees the isolation elsewhere. Mad Men’s Don Draper and Breaking Bad’s Walter White are quintessentially American anti-heroes: stubbornly independent, allergic to attachment and subsequently desperately alone. Walt White’s Whitman-esque “Song of Myself” in Breaking Bad–often played out in the vast, unforgiving landscapes of the desert Southwest–illustrates the sobering reality that utter independence often leads to wayward isolation. To a lesser extent, Noah Baumbach’s Frances Ha conveys a similar, albeit more humorous, sense of freedom as isolation in the character of Frances (Greta Gerwig), a twentysomething hipster whose freewheeling decisions to go to Paris on a whim, for example, or to literally dance in the streets of Manhattan, only deepen her directionless despair. It’s perhaps noteworthy that the free spirit dancing pose of the Frances Ha poster resembles the iconic iPod ads featuring silhouette bodies solo dancing against a bright neon background.

Are these movie and TV narratives reflecting the unforeseen isolation of the iPod age? As we further individualize our mediated and cultured lives and embrace the freedom to dance to whatever cultural beat we like, are we simply left spinning and dizzy? That’s certainly the way I felt after watching Gravity and, to a lesser extent, All is Lost: dizzy, unsteady, destabilized, sea-sick. I was left feeling hungry for ballast, for anchors, for solidity; for something outside of myself to offer orientation.

Because going to Blockbuster on a Friday night used to be overwhelming enough. But at least the options were finite. These days the sheer ubiquity of all that is available, all that is recommended, all that is buzzed about in ceaseless streams of 140-character bursts, leaves me with a bit of vertigo: spinning like Sandra Bullock in Gravity, pulled in a million directions at the mercy of vacuity, untethered and uncertain which way is up.

Selfie Deception

What and how we consume says a lot about what we value. And what and how we consume has never been more public.

Thanks to the broadcasting devices in our pockets and the social network audiences always just a few finger taps away, our interactions vis-a-vis culture are increasingly the means by which people make assumptions about who we are and what we worship.

One of the premises of my new book, Gray Matters, is that in this consumerism-as-social-media-identity world, it is all the more imperative that Christians be intentional, thoughtful and critical in their consumer choices. People are watching. We are observed, processed, known through our consumptive habits. What message are we sending?

The new paradigm of digital/mediated/consumer “identity” is on disturbing display in Sofia Coppola’s new film, The Bling Ring, which depicts the true-life drama of a group of L.A. teens who robbed the Hollywood Hills mansions of celebrities in the late 2000s. The film’s opening is interspersed with snapshots of partying teens’ photos on Facebook and Instagram, and the plot turns on the way that social media makes one’s cultural consumption public, enviable, and (in this case) vulnerable to property theft. But what is most striking is the sheer proliferation of “selfies”: characters holding out their arms with phone cameras to document (and immediately publish to the world) all manner of pursed-lip posing, stolen cash flaunting, booze-imbibing and other such glamorization of vice.

There’s an unsettling ambience of directionless vacuity in these youngsters’ lives. Where is their sense of purpose (moral or otherwise)? All that seems to animate their reckless behavior is the possibility that it will play well on social media or get picked up by TMZ.

Bling’s teen bandits are obsessed, first and foremost, with celebrity. But it’s not that they are fans of the films or television shows which made people celebrities in the first place. Nor is it that they are particularly interested in the celebrities as people, with unique personalities and stories. Rather, what interests these Millennials most about celebrities is simply the celebrity-ness of them: their paparazzi aura, nightclub exploits, tabloid scandals and–above all–haute fashion. In short: their conspicuous consumption. As Richard Brody observes in his New Yorker review of the film,

Nobody here cares very much about movies or television shows. Nobody talks about stories, and certainly nobody is reading anything other than magazines. They know the actors whom series and movies have turned into celebrities but have little interest in the shows themselves.

This sort of fetishizing of celebrity at its most superficial (the Louboutin heels, Rolex watches, Birkin bags and Herve Leger dresses they wear), isolated from any broader narrative of who they are and why they are famous, helps explain the existence of famous-for-being-rich people like Kim Kardashian and Paris Hilton. But it also reveals a larger cultural problem, which Brody pinpoints as “narrative deprivation.”

Today’s youth, reared in the Google age of on-demand, isolated bits of information and the real-time feeds of a million little “snapshots” (tweets, Vines, rabble-rousing blog posts, etc.), have no patience for narratives that give context or make connections. It doesn’t matter who Kim Kardashian is or how she became famous. What matters is that she gets to wear Lanvin dresses while on red carpets with Kanye West, while paparazzi take note of the slightest details of her Judith Leiber clutch. And these kids want that too. Brody continues:

In their selfies and their videos, the teens broadcast themselves living out crude fantasies of what, as one of them says, “everyone” aspires to be. What isn’t shared is the way they actually live: the teens don’t depict themselves breaking into houses and cars, stealing, selling stolen goods, or driving drunk. They don’t talk about their own lives in terms of stories. Rather, they live in a world that detaches effect from cause, and they depict only the outcomes.

Hence the sheer ubiquity of selfies. For them, earning jail time for thievery is a small price to pay for the opportunity to broadcast images of themselves wearing Prada sunglasses and guzzling Cristal at Lindsay Lohan’s favorite nightclub. It doesn’t matter what they had to do to get there (steal) or what will happen later (jail). The “now” of social media glory–however fleeting it may be–is what matters.

This “narrative deprivation” is symptomatic of (or perhaps another name for) “narrative collapse,” a phenomenon discussed at length in Douglas Rushkoff’s Present Shock. Rushkoff suggests that today’s world is defined by presentist, fragmented media consumption and an “entropic, static hum of everybody trying to capture the slipping moment.”

Narrativity and goals are surrendered to a skewed notion of the real and the immediate; the Tweet; the status update. What we are doing at any given moment becomes all-important–which is behavioristically doomed. For this desperate approach to time is at once flawed and narcissistic. Which “now” is important: the now I just lived or the now I’m in right now?

Social media’s “what are you doing now?” invitation to pose, pontificate and consume conspicuously only amplifies the narcissistic presentism of the generation depicted in The Bling Ring. It makes it easier than ever to tell the world exactly what you want them to know about you. Through a carefully cropped and color-corrected selfie, depicting whatever glamorized “now” we think paints us in the best light, we can construct a public persona as we see fit.

But it’s a double deception. The projections of our self that we put on social media blast are more often than not deceptive in the way they skew, ignore or amplify realities that constitute our true identity. But it’s also a self-deception. Because social media conflates our identity with what we consume, we are led to the erroneous conclusion that “who I am” can be easily summed up in the ingredient-listing “profiles” of the bands, brands, books and causes we “like,” the restaurants at which we “check-in,” or the songs we let everyone know we are currently enjoying.

Social media exacerbates our ever-growing tendency to approach cultural consumption as more of a public, performative act than an enjoyable, enriching experience. It becomes less about the thing we consume and more about how our consuming of it fits our preferred image. Bling’s high school burglars steal thousands of dollars worth of jewelry, clothes, and shoes not because they find those things inherently interesting, beautiful or pleasurable; but because they hope the accoutrements of celebrity will rub off on them. The things themselves are merely a means to an end.

For anyone who loves culture and recognizes the inherent beauty and value in, say, an expertly crafted table or an exceptionally roasted coffee bean, it is regrettable to see such things reduced to status symbol or fodder for social media selfie-deception. Making cultural items mere props in our social media performance is just another way of “using” culture to meet our needs rather than “receiving” it and letting it “work on us,” to borrow from C.S. Lewis’ An Experiment in Criticism.

For Christians, resisting the temptation to use culture rather than value it for its inherent goodness is a worthy endeavor, but it’s not enough. Using culture for self-worship is bad, but worshipping culture for its own sake is too. The “goodness” of culture, while certainly a thing to be celebrated, comes not from what it can do for us or even what it is in itself, but rather what it reflects about God and how it points humanity toward Him.

Every piece of culture we consume is an opportunity to glorify and give thanks to the Creator. We of all people should not cheapen culture by reducing it to something that mostly serves our narcissism. We of all people should not strip a cultural thing of its God-given goodness by focusing on its potential to aid in our strategic social media identity construction.

For Christians, culture should never be a tool in service of selfie-deception or self-worship. Rather, it should be something that brings us to a posture of gratitude and confronts us with who we really are, laying our deceptions bare and focusing us away from ourselves. And if our consumption of culture communicates anything to the world, it should be a testimony not to our own greatness, style, or Valencia-filtered taste, but to the grandeur and glory of God.

This is the second in a series of posts on contemporary Christianity’s relationship to culture, based on ideas from my soon-to-be-released book, Gray Matters: Navigating the Space Between Legalism and Liberty (Baker Books).

Marijuana, Caffeine, and a Therapeutic Drug Culture

That’s the subject of my latest essay over at The Gospel Coalition. Here are my concluding paragraphs:

Yet the more interesting cases come closer to us. Consider the interrelationship between caffeine and marijuana. On the one hand, many of us rely on caffeine to fuel our work obsessions. Caffeine abuses reveal an overworked, exhausted culture that refuses to rest. A cup of tea is a wonderful gift. Five cups a day may signify unhealthy dependency.

On the other hand, recreational marijuana use seems to engender something resembling sloth. Proper relaxation is a sort of satisfaction—”a job well done”—not a form of escape. Cannabis use may undercut this rest, or at least short-circuit it.

Sloth and overwork are symptoms of the same diseased understanding of how we labor. Some people will strap themselves to and die on the wheel of performance, while others escape their troubles by medicating themselves. In that sense, drugs are (ab)used to therapeutically fill a gap that is felt without being articulated.

Drug use of various kinds highlights our culture’s fundamental commitments and raises questions about how we interact with those commitments as Christians. Just how far does the therapeutic mentality infiltrate our churches? The fastest-growing segment of drug use seems to be painkillers and prescription medicines. Such “white collar” abuses reveal the same sort of escapist mentality that marijuana may foster in different social contexts.

Expanding the framework for evaluating marijuana implicates us all. But the gospel of Jesus Christ creates churches where we carry one another’s burdens. We admonish one another by observing the ways we have failed in our discipleship because we idolize performance and success. Then we begin the process of repenting for our own sins and ensuring that a gospel-centered judgment about whether to use marijuana will actually sound like good news.

I approached the piece as something of an exercise in moral reasoning.  It’s underdeveloped in a lot of ways, but I am attempting to expand some of my earlier thoughts on the body into new areas.  Make of all of it what you will.

 

Is Ethical Advocacy Enough? A Dialogue on Digital Rights and Art

Editor’s Note: This is the final piece in our series on the music industry. The first part, written by Matt Miller, dealt with the suggestion that we remove DRM altogether; the second part, written by Stephen Carradini, detailed the power shift in the post-Napster era. Today we have a dialogue between these two thinkers.

Matt:

Stephen, what I think your piece points out really well is the power differential that’s at the root of this problem: consumers have the power to get free music at will, whereas artists’ power is restricted to either allowing their work to be stolen or keeping it to themselves entirely. As I say in my essay, I don’t think there’s any way to reverse that power imbalance. So the question is, how can artists make these power-mad consumers take responsibility for their actions? Can we? Or are we just going to have a world where musicians all have to teach or something on the side to make a living?

Stephen:

Well, I think “allowing work to be stolen” and “power-mad” are the wrong terms. It’s not stealing if artists offer it to them. And most music consumers are passively making decisions about how to buy music, which doesn’t fit “power-mad.” The latter point is the big problem: After the initial bang of Napster and the like, people just slid into a situation where it was easier to get music illegally (which inadvertently gave them a ton of power). This is partially the industry’s fault for putting up DRM walls around legal purchases (as you noted in your piece) and partially the fault of human nature’s innate laziness. I would like to be optimistic and say that people will realize the long-term effects of their actions and change direction, but I’m not. I’m sure instances of cultures changing course without consumers seeing immediate tangible benefits are rare to nonexistent. Why would we think that people will purposefully spend more money for no extra benefit? How would that ad campaign go?

The crossed out copyright symbol with a musical note on the right hand side is the free music symbol, signifying a lack of copyright restrictions on music. (Photo credit: Wikipedia)

Matt:

The ad campaign could start with an advocacy message akin to campaigns for fair trade products. (Unfortunately “fair trade music” currently means exchanging free music for an email address—which hardly seems fair to me.) Granted, fair trade hasn’t become ubiquitous, but this sort of moral campaign takes a while to play out. Historically, Americans have actually been fairly willing to get on board with changing their consumer behavior in response to ethical arguments: we were the nation that instituted Prohibition, to pick only the most obvious example. But even if such advocacy fails, I still see hope for artists who can create loyalty in their fans. We may be entering the age in which only cult acts survive.

For example: one of my favorite bands, Over the Rhine, does no gimmicky marketing, nor do their tracks carry DRM. I don’t know if downloading has cut into their income, and they’ve never been famous, but they’re making a living–and doing so, I suspect, largely because their fans feel such affection for them that we want to pay for their albums. Bigger, more populist artists may not be able to inspire such loyalty, though, and so their days may truly be numbered. But as a fan of more niche styles of music, I’m optimistic for my favorite artists. That said, I don’t see either of these approaches being taken very widely. So you’re free to argue that they won’t be taken at all.

Consumption and the Creator’s (lack of) Power

Editor’s Note: This is part two of this week’s series on the music industry. Don’t miss the first post, written by Matt Miller, here. Come back on Friday to read a dialogue between Miller and the author of today’s post, Stephen Carradini.

There are many questions surrounding the post-Napster music industry, but one has been answered definitively: control of the payment structure has shifted from gatekeepers in the music industry to consumers. With an almost infinite amount of music available for free at the touch of a keyboard, the consumer now decides when and how much to pay. Musicians can’t definitively state what their work is worth anymore; they can only hope to prop up the horrible business model of making optional payments as easy as possible for the consumer.

The music-purchasing process before file-sharing was standard capitalism. Music businessmen set prices for CDs to make a profit. Consumers did not have enough money to buy all the music that they wanted or the power to change prices, so they had to choose which albums to buy. Napster and its kind flipped the script: Consumers acquired what they wanted for free, and music businessmen had little power to change the price.

Cut forward ten years, and the aftereffects have been ugly. Pirate Bay is wreaking havoc on a second industry. Major labels have crashed. Bands give their music away in hopes of attracting people to shows. But the weirdest development, in my opinion, is streaming subscription services like Spotify, which legalize and encourage a mentality where money is decoupled from the experience of music.

Sufjan Stevens at the Uptown Theater in Kansas City on October 17, 2010 (Photo credit: Wikipedia)

When a consumer subscribes to Spotify, he or she pays to use the platform—not to buy music. Paying the artist for his or her work is secondary to keeping the platform alive. That’s the first problem. The second problem is that after discovering a great new band, the consumer does not actually buy anything. He or she simply streams the albums, for which bands get paid $0.0037 per stream, or maybe $0.006 per stream. If I listen to all five of the Sufjan Stevens albums I own on Spotify, Sufjan will make approximately $0.67 ($.006 per stream x 111 tracks). This is at odds with the price Sufjan has set on his own music at Bandcamp.com: $8 for an album. To pay him via Spotify what he charges for a download of Age of Adz, I’d have to listen to the whole 11-track album a little more than 121 times (1334 streams).
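If you want to check that arithmetic yourself, here is a minimal back-of-the-envelope sketch in Python. It assumes the higher $0.006 per-stream payout quoted above (an estimate, not an official Spotify rate) and the $8, 11-track Age of Adz example:

    # Back-of-the-envelope check of the streaming math above.
    # Assumed figures: $0.006 paid to the artist per stream (an estimate, not an
    # official Spotify number) and an $8, 11-track album, per the Age of Adz example.
    per_stream_payout = 0.006    # assumed dollars paid to the artist per stream
    album_price = 8.00           # Bandcamp price Sufjan sets for a download
    tracks_per_album = 11        # tracks on The Age of Adz

    streams_to_match_price = album_price / per_stream_payout        # roughly 1,333 streams
    full_album_listens = streams_to_match_price / tracks_per_album  # roughly 121 listens

    print(round(streams_to_match_price), round(full_album_listens, 1))

Run it and you get roughly 1,333 streams, or about 121 full-album listens, which is where the numbers above come from.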

This is where the consumer has to come to terms with a difficult question: Do other people’s opinions matter? Sufjan Stevens believes his albums are worth $8. If a consumer disregards this cost via streaming, piracy, or otherwise, the consumer is saying that Sufjan’s opinion of his own work does not matter, and that he is irrelevant to the pricing of his own work. The consumer indicates that his or her opinion on music is the only one worth considering, and no one (not even the artist) can say what music is worth. The consumer is saying to the artist, “I will pay you what you are worth, but I decide that. I am doing you a service by paying you—even if it’s not what you wanted to be paid.”

That, of course, sounds a little crazy.

Advocacy and Ending Copyright Infringement

Editor’s note:  We like thinking through every area of culture here at Mere-O, which is why I’m delighted that we’re going to spend a little time talking about the way technological changes have affected people’s ability to make money–and what we should do about it.  Matt Miller goes first, Stephen Carradini will write on Wednesday, and then on Friday they will do a dialogue on the issues that come up.  Thanks for reading.  — MLA

The Internet’s already over two decades old, and we still haven’t figured out how to cope with its potential for copyright infringement.

Any creative medium that can be digitally replicated finds itself in danger—music, film, photography, and every form of writing. Widespread pirating, Google Books, and other digital distribution methods threaten to make art something that consumers expect to get for free, always and everywhere.

Professionals in all these fields continue to feel acute anxiety at the potential that digital distribution could destroy their livelihood—and they’re right to feel this way. Similarly, anyone concerned about the ongoing artistic vitality of our culture should be concerned about a future in which it’s impossible to make a living in the arts. How are artists to make money when their work can be copied and distributed with such ease?

DRM is killing music, and it’s a rip off! Parody of home taping is a rip off. Based off Image:DRM Is Killing Music.png (Photo credit: Wikipedia)

Many people have proposed or attempted solutions to the problem, but without much success or variation: in essence, all the proposals I have seen boil down to three approaches. And only one of the three, in my view, offers us any hopeful way forward.

The first approach is that most frequently taken by the big media companies: find a way to regain control over the means of copying and distribution. This might mean enhanced digital rights management (DRM) software; it might mean refusing to publish e-books; it might mean lobbying to enhance copyright law via SOPA/PIPA.

The problems with this solution are myriad, ranging from the rage you induce in consumers (and some artists) to lost revenue from refusing to engage in the digital market. Bottom line: increasing copy restriction doesn’t work, because those dedicated to circumventing DRM or copyright law are always more nimble than those dedicated to enforcing them.

What makes the edifi “Christian”? On Media, Habits, and Christian Virtue

Last week news spread about the release of Family Christian’s edifi, which is being billed as the world’s first Christian multimedia tablet.

Okay, I know what you’re thinking. At least I know what you should be thinking. What exactly makes a multimedia tablet “Christian”? There are a number of denomination-specific punch lines that come to mind, but I’ll leave those to your imagination. And, in fairness, I should note that I have not been able to find the notion of a “Christian tablet” linked back directly to the company or its representatives.

That said, it is certainly clear that the edifi tablet is being marketed as a tablet specifically designed for Christians. According to the technology supervisor at Family Christian with the Bunyan-esque name, Brian Honorable, “It goes along with our mission: trying to get people closer to God … through a tablet.” Mr. Honorable also added, “We definitely had to tailor it to our customers.” Presumably, Christians.

So perhaps the question should then be, what makes a multimedia tablet Christian-friendly? To answer this question, we should see if any of the tablet’s features distinguish the edifi from its, dare I say, secular competitors.

Examining the technical specifications won’t get us very far. The tablet is a rebranded Cydle Multipad M7 manufactured by a South Korean company without the benefit of any divinely inspired design documents (so far as we know). If we look to the tablet’s software we get marginally closer. Family Christian’s website identifies four “family-friendly” features: the pre-loaded Family Christian Reader app (with five free Christian titles included), Safe Search Wi-Fi web browsing, 27 Bible translations, and Christian internet radio.

Resisting the Rhetoric of Technological Inevitability

Articles about technology often come with a snappy, provocative question for a title. Take, for example, two of the most widely discussed tech articles in recent memory, both of which appeared in The Atlantic: Nicholas Carr’s “Is Google Making Us Stupid?” and Stephen Marche’s “Is Facebook Making Us Lonely?” The titles are rhetorical, of course, and they tend to obscure the argument of the article in each case, but the interrogative form at least gestures toward something like a debate about the issue in question.

Every so often, though, you run into a piece that drops the pretense of reasoned debate altogether. Consider the title of Lisa Miller’s recent article in The Washington Post: “The religious authorities and pundits are wrong: Technology is good for religion.” Well, there you have it. What else is there to say?

One might have hoped that the title was a poor reflection of the content of Miller’s essay, but sadly this is not the case. Miller’s article is a textbook example of what I have elsewhere called a Borg Complex. Like the Borg of Star Trek fame, tech writers sometimes like to insist “resistance is futile” when it comes to technology; individuals and institutions must either assimilate or die.

So, for example, Miller claims, “When new generations bring their values to religion, religion will have to adapt.” “Groups that restrict and fear [technology],” she believes, “participate in their own demise.” Then she concludes, fatefully, “If religious groups don’t embrace and encourage the practice of faith online, the faithful might go shopping instead.”

Miller is certainly right to draw attention to the relationship between new technologies and religious communities. It is an important topic and it deserves serious and considered attention. Unfortunately, the tone of Miller’s article shuts the door on such discussions.

Writers who suffer from a Borg Complex usually work with certain unspoken assumptions. On the one hand, they tacitly endorse a view philosophers of technology call technological determinism. Technological determinists believe that technology autonomously drives history. Individuals and institutions are merely passive victims or beneficiaries of technical advance. But most scholars of technology would argue that technological determinism is not the best way of understanding the complex relationship between human beings and technology.

Throughout the course of invention, development, production, and adoption, the fate of new technologies hinges on countless human choices and social factors. At each level the evolution of any given technology could have been otherwise. We do have choices to make with regard to technology, and if we are to live faithfully and wisely we had better take responsibility for those choices.

The tacit endorsement of technological determinism is also often joined by certain unspoken assumptions about the character of the institution or individuals that are being urged to jump on the technological bandwagon du jour. In other words, some normative judgments are usually being smuggled into the conversation. Consider some of the normative assumptions Miller casually makes throughout the course of her article.

Speaking of a particular religious app, she writes, “It encourages among users a broad sense of community and mutual support, which is what good religion does.” One of the two scholars she cites is an economist who teaches a course on the Economics of Religion. Miller writes of the professor, “Understanding that religion is always about people making choices, he asks students to research the coolest religion apps.” The lesson Miller draws from the other scholar she cites, Heidi Campbell, is this: “religious authorities have long wanted the faithful to behave in ways that they do not behave.”

Clearly, there is a narrative about religion that informs these statements. “Good” religion is reduced to mutual support, religious affiliation is subject to the logic of the marketplace, and authority tends toward oppression. This narrative framing Miller’s thesis is in tension with the narrative that emerges from the biblical witness and the Christian tradition.

Regarding technology, media theorist Marshall McLuhan believed, “There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.” It seems to be a corollary of this principle that those who have no willingness to contemplate will proclaim technology’s inevitability.

The Church is called to better things. With technology, as with all other facets of human endeavor, we are called to exercise discernment and wisdom. Insofar as technology comprises what the Apostle Paul calls “the pattern of this world,” we are to resist conformity. This is not to say that the position of the Church toward technology should always be one of resistance. It is only to say that it ought always to be one of thoughtfulness in the service of faithfulness.

What Social Networks Do–And Don’t Do–for Churches

Back in December, I had the opportunity to participate in a roundtable for Christianity Today about the prospects and limits of social networking for churches.   Here’s my opening:

The benefits of social networking are many but require judicious and responsible use to be enjoyed. When done well, social networking can enhance the fellowship of the church by providing congregants a window into each other’s lives. It can mobilize congregants to serve their neighbors and enhance the church’s mission by embedding the community of church relationships in the broader community.

But social media can merely offer a short-term, technological solution to deeper, more fundamental problems. Social networking can give the appearance of intimacy and community without enabling the substance of embodied friendship.

The more we wed ourselves to social networking as a strategy for building community, the more we risk forgetting that the problems in our communities do not hinge upon lack of access to shared information about each other’s lives. They result from our own reluctance to share space and meals together, and to enter into environments and social situations that require our embodied presence. The comforting arm around a shoulder that comes when we “weep with those who weep” will never have an equal virtual substitute.

It only gets better, as they say, from there.