If I call my banking institution today, I hear a friendly-sounding lady answer the phone and say, “Welcome to Wells Fargo. To access your accounts, press, or say, ‘one.’ To open a new account, press, or say, ‘two’…” And so on. If I press 1, she kindly says, “Thank you. To expedite the handling of your call, please enter your account number, followed by the pound key.” She asks me nicely, says please, and even explains the purpose of her asking… After I enter the account number (5114-30-5337*) she thanks me again.

The convention of automated “receptionists” has already taken hold at most large corporations and businesses… and the convention of automated receptionists who respond not only to touch-tone commands but to voice commands is slowly setting in.

Certain credit card companies are striving more and more to imitate a casual, friendly conversation between you and the “customer service rep” on the other end of the line. They hire professional voice actors with calm, melodious voices to replace dry, formal lines like “Your number could not be recognized. Please try again” with the more colloquial “Hm, I couldn’t get that to work. Can you say the number again for me?” To which you respond by speaking into the microphone.

*Had you there for a second… :)

Now, I am not disturbed by robots. I do not mind science fiction movies depicting the end of humanity and the rise of the superintelligent machines; I have little to no fear that we will ever “make them too smart.” Nor am I morally or artistically opposed, in principle, to the production of high-correspondence imitations of people and animals in things like wax sculpture, Computer Generated Imagery, or (eventually) 3D holograms.

I am disturbed, however, by the advent of “conversations” with automated machines.

I am disturbed for two reasons: first, because of the automated “person” who must be created; second, because of the effect on me, as a person, of talking to a well-developed mechanical facsimile of a soul.

1. The mimetic machine, let’s call her Teller… What’s wrong with her? Come, you may say, she is a wonderful invention. She is designed to make incoming callers feel comfortable, patient, calm, and happy. She talks with an extreme form of kindliness (perhaps verging on the ingratiating). She repeats requests only with great apologies; she asks for information only very politely. Is this so wrong?

Beyond this (what is even more valuable from the company’s point of view), her lovely voice and indefatigable patience apply to every single caller, any season, any time of day — she never changes moods, never has an “off day,” never even grows old. In this way, she is like unto a Telereceptionist Goddess, ever distributing her grace and love to thousands of customers per day without tiring, without pay, without need for thanks.

Fine and well. There is a problem, however. She is not a goddess. “She” is not even a she.

Though to be a better-than-human machine is to seem like a god, in reality, it is to be worse than a beast.

There is an image in Steven Spielberg’s well-disliked “AI” that captures this beastliness beautifully. A family is sitting around the table, eating and enjoying light, pleasant conversation. They seem normal, except for the fact that we know their little boy, David, is a “mecha,” a robot boy. Mom and dad exchange a laugh over some anecdote that dad has just relayed. They glance over at David, who is peering at them with the same frozen expression of childlike wonder and unflinching affection. It is a bit spooky that he is not “joining in their reindeer games.” Their smiles droop a bit, and they return to their dinner. Suddenly, David starts laughing maniacally. His mouth opens wide, his eyes remain fixed in that solid expression of innocence, but his mouth and body blurt out a rude imitation of human laughter. The parents are startled, and try to wait it out while David’s programming tells him when it is appropriate and human to stop laughing — which he does, just as abruptly as he began.

David’s near-humanity makes him like a beast — it is because he is perpetually, unflinchingly angelic that he is very like a devil.

The makers of “mechas” (in AI) and the people in charge of automated teller machines (at major banks) might respond with the same objection: “But if you make a machine that sounds like a machine, people don’t like it. They feel that the experience is cold, mechanical, unnatural. Then they stop using the automated service, or they call in and hit ‘0’ to talk to an operator, and our costs skyrocket.”

I can sympathize with this. I do not run a bank (or a robot production company), and so I do not feel the weight of these costs as heavily.

However, the more humanish we make automated teller machines, the more pleasant they are at first; but to the same degree that they are pleasant, they have the potential to be horrible and terrifying, saying the wrong thing at the wrong time, saying it over and over and over and over with no feeling, no human awareness, no human sympathy.

Spend enough time with the Los Angeles City automated teller (a system in which there is simply no option to talk to an operator) and you will begin to feel what it is like… it is like being lost in a city full of robots, all of whom answer your pleas for help or direction with a vaguely friendly, unrelenting, not-quite-relevant piece of advice.

The second reason I am disturbed is the effect on humans of interacting with these voice-images. The first time I tried “saying” my credit card number rather than dialing it in, I immediately noticed this difference: I don’t have to be nice.

Maybe I am the only one who feels bad for these poor phone bank people and tries to be cheerful with them over the phone, but I realized that, when “talking to” a machine, there is no way in heaven or hell that I can be rude, or distracted, or anything else for that matter… what am I going to do, hurt its feelings?

The same applies to signs at the exits of fast-food restaurants that say “Thank you.” Who is saying thank you? What of them is in that statement? Am I supposed to say, “You’re welcome”?

Human speech is for communication; it is not for automated pattern recognition, programmed into a chipset by some other human person.

The advent of automated tellers such as these, as it becomes more pervasive, will not affect the robots to whom we are “speaking,” but it will have unfortunate consequences for the way humans interact with other humans. Using speech to push buttons on a human-sounding machine will begin to form habits (over long periods of time) of mechanical, unnatural, unfeeling modes of speaking.

It will also encourage talented people to invest their time and energy in creating new and ever more convincing humanish representatives, something which is interesting and fruitful, artistically, but potentially horrible and frightful, pragmatically.


Posted by Keith E. Buhler

11 Comments

  1. Interesting reading. This, of course, is not the first, nor will it be the last lamenting of the robot encroachment into humanity.

    The unfortunate truth is that, though you’re right, humans, on average, with their laziness, moodiness, and stupidity, are often less pleasing than “robots.” In fact, it is the intense politeness you hear over the phone which provides the first alert that this is not a human but a machine. (Of course, this is confirmed later by “press or say ‘one’…”) I don’t know about you, but occasionally a very polite human will answer my call and I will at first think I am hearing a recording because it is so clearly spoken, so polite, and so cheerful. It’s a pity that those are not the typical human characteristics.

    Ah, and alas, robots don’t use smiley icons, either.

    If you haven’t seen Battlestar Galactica, I recommend it whether you like sci-fi or not, just for its philosophical questions (but you have to start at the beginning). This whole post reminded me of a line from the series, where one of the “robots” taking over the universe says to one of the humans defending humanity:

    “You once said ‘Why does humanity deserve to survive?’ Maybe you don’t.” – Cylon, “Sharon” Model – Battlestar Galactica.

    Warren


  2. the virtue of the virtual…

    Keith Buhler isn’t afraid of robots. He’s afraid of robot voices….


  3. Just a couple things….

    First, a question. Isn’t this statement — “Using speech to push buttons on a human-sounding machine will begin to form habits (over long periods of time) of mechanical, unnatural, unfeeling modes of speaking” — merely about a choice? I mean, aren’t habits formed by choosing? Now, granted, outside things may affect our habits, but isn’t choosing to speak unfeelingly a choice of the will? (Coincidentally, the thing which separates us from the robots.) Outside forces have a hand in making habits, but the ultimate choice comes down to whether we choose to speak rudely or not… doesn’t it?

    Secondly, if things have an end, i.e. the end of a dry-erase marker is to write on a whiteboard, and if the end of a business is to make as much profit as it can with as little cost as it can, then isn’t using automated tellers a good thing, in that it helps said business achieve its end, specifically making money and cutting costs (of hiring a receptionist)?


  4. Interesting article. It definitely merits further reflection. Something has disturbed you, but putting your finger on it can be such a difficult task.

    You say, “Human speech is for communication…” Ah, but surely these machines are communicating? They communicate the bank hours, CD rates, locations, etc.

    You say they will be “potentially horrible and frightful, pragmatically.” Potentially horrible and frightful indeed! … But I am unsure whether they will prove unpragmatic… if the end is to make money, as NikeBasketball suggests it is for them.


  5. Teaching robots to speak like humans only makes sense in the short run because currently it is harder to round up humans and have them speak like robots.

    But let’s just take human speech as a communication device. The protocol of speech is language. Humans have their different languages, but as protocols for communication they are really very poor.

    Wars are fought and people hated and killed over differences of interpretation of just one compilation of text because of its ambiguity. In other words, if God had written the Bible in, say, C++, perhaps there wouldn’t be so much disagreement about what it says.

    Aside: Of course, that’s just silly. But even if a perfect unambiguous language were available, who here thinks that God would have chosen it for his Bible?

    So, I for one, welcome the day when humans speak more like robots.


  6. “If you haven’t seen Battlestar Galactica, I recommend it whether you like sci-fi or not just for its philosophical questions.”

    I’ll put it on my list.


  7. Thanks for your comments, Nike.

    Let me know if these are or aren’t helpful:

    You said: “[Aren’t] habits formed by choosing? Outside forces have a hand in making habits, but the ultimate choice comes down to whether we choose to speak rudely or not… doesn’t it?”

    Yes, it is a choice. I think good choices are good to make. I think ‘outside forces’ that make good choices easier are good, and I want to pursue them.

    For instance, getting my work done (I work from home) is only possible when I am focused and not distracted. Having the door to my room open lets noise in. Noise is (typically) distracting. So having the door open is not conducive to my getting work done. Having it open does not necessarily make me distracted; whether or not I stay focused is ultimately my choice. But having it closed makes it easier to be focused.

    The more and more “people” we interact with for whom “common human decency” is not necessary, the harder it will become to maintain the habit of politeness, decency, etc. Not impossible, but a little more difficult. (Again, I’m talking about the cumulative effect of hundreds of automated tellers over the course of twenty years, I’m not talking about having one phone call to Wells Fargo and then becoming an insensitive jerk.)

    Having very robotic robots makes it easier not to confuse them with people, and easier to maintain habits of politeness, decency, etc.

    Therefore, I want all of my robotic assistants (if I ever need them for my company or family or whatnot) — and my signs and answering machines and anything pseudohuman — to be very obviously and explicitly pseudohuman. I want them to APPEAR as they ARE.

    A robot that appears to be a god is more like a beast. A robot that appears to be a robot is a robot.

    Is that at all helpful in answering your question, and clarifying the best course of action?


  8. “Secondly, if things have an end, i.e. the end of a dry-erase marker is to write on a whiteboard, and if the end of a business is to make as much profit as it can with as little cost as it can, then isn’t using automated tellers a good thing, in that it helps said business achieve its end, specifically making money and cutting costs (of hiring a receptionist)?”

    Great question…

    1. Why do you think that that is the end of business?

    2. If making as much profit as possible with as little cost is the end of business, then it is essential to the success of a given business decision that the negative consequences, the costs, of that decision be mitigated.

    As support of this, consider a business that sells high-quality lumber and paper products by mining the vast natural resources latent in hundreds of National Forests across the country. It would earn a massive financial profit from the wood and paper goods; it would incur irreparable losses from the obliteration of the forests. The cost is not worth the benefit.

    If I am right, then having very well-disguised machines has deleterious effects on the people who use them. It is up to each business, and the American corporate community as a whole, to decide if the cost is worth the benefit.

    That said, I’ll also point out that the “negative consequence” I am predicting, if it is actually a possibility, is a totally unnecessary cost. All businesses have to do is use machines that are not well-disguised, or not disguised at all.


  9. Thanks for stopping by, as well, makelovehappen.

    You quoted me, saying, “Human speech is for communication…” and asked, “Ah, but surely these machines are communicating? They communicate the bank hours, CD rates, locations, etc.”

    1. Let’s define communication as ongoing. The electronic voice modulation is communicating something to me, but then its “message” ceases. If I ask it a further question, “Well, are you open holidays!?” it will not respond (at least, not the Wells Fargo teleprompter they currently have).

    So who is doing the communicating? The people designing the device, the people writing the script, and the actors recording the messages.

    How is their communication ongoing, whereas the machine is not?

    Wells Fargo engineers, writers, and actors change the machine occasionally (it’s the perennial “Please listen carefully, as the menu options have changed”). The machine does not change.

    “But what if the machine changes itself?”

    Then it was probably designed by a person to change “by itself” according to certain algorithms. That’s a different topic altogether! :) All I’m saying right now is that communication, to be communication, must be able to continue between the two parties… it must be ongoing, or potentially ongoing, like the comment field on Mere Orthodoxy. Since you can respond to what I’m typing right now, we are communicating. If you had to read it over and over, and you could never say anything back… or if you could, but I could not respond to your response, then I would say we have technically ceased communicating. Writings do not communicate; people communicate, through writing… right?

    So, back to the point: electronic voice modulation machines eventually stop coming up with new things to say, and start repeating themselves.


  10. I said, “Her lovely voice and indefatigable patience apply to every single caller, any season, any time of day — she never changes moods, never has an ‘off day,’ never even grows old. In this way, she is like unto a Telereceptionist Goddess, ever distributing her grace and love to thousands of customers per day without tiring, without pay, without need for thanks.”

    Warren said, “So, I for one, welcome the day when humans speak more like robots.”

    I too welcome the day when humans speak more like gods!


  11. Keith, I enjoyed reading your post. Language is a unique gift from God, an attribute that we alone in all of creation share with God. Since we can’t see God or touch God, our relationship with God is primarily conducted through language: in prayer, study of the Word, song, silent meditation. So I think part of the discomfort has to do with soulless, mechanical entities using a facility that has up until now been reserved for the part of creation made in the image of God.

    We see, too, from the 2nd verse in Genesis to the next to last verse in Revelation, the words of God as a means of carrying out his will in history. Language is not merely a means of conveying information, but a means of exerting our influence over the world. When robots command language, there is the worry that they, too, will use words to shape the world in a way that only humans have been able to up to now.

    There are lots of interesting things to think about here. The vision of robots in AI was that they might tempt us to give away parts of our humanity that were only intended for us, as God’s children. We don’t know what the consequences will be as we give robots more and more human-like capacities, but it is worth considering.

