Love Your AI?: Artificial Intelligence and Moral Agency

BreakPoint.org

The idea of sentient artificial intelligence, complete with a personality, has long been a central theme of science fiction. The Oscar-winning 2013 movie “Her,” starring Joaquin Phoenix and the voice of Scarlett Johansson, told the story of a relationship between a lonely man and an AI named Samantha.

On the small screen, sentient robots are the stars of HBO’s “Westworld.” In fact, the protagonists of the series, Dolores and Maeve, are scarcely distinguishable, if at all, from the people who have repeatedly brutalized them since their creation, right down to their own capacity for brutality. The biggest difference is that the audience is meant to excuse, or at least understand, the robots’ brutality.

Which touches on another theme of science fiction: What are our ethical obligations, if any, to the intelligent machines we build?

Considering that what’s depicted in books and on screen is light-years beyond the current realities of artificial intelligence, such philosophical questions have remained the stuff of science fiction.

Consider, for example, that seventy years ago, mathematician Alan Turing proposed what’s now called the “Turing Test.” Instead of answering the question “Can a machine think?” which presupposes an agreed-upon definition of “think,” the Turing Test seeks to determine how well a machine can imitate a person. For example, could a machine fool someone asking it a series of unscripted questions into thinking it was a person?

Turing thought that a machine would eventually pass his test. Seventy years later, no machine has, and the most optimistic estimate says that it will take at least another decade.

That’s why a recent article in Aeon, which suggested that artificial intelligence should receive the same ethical and moral considerations afforded to laboratory animals, seems, to put it politely, premature.

One of the principal reasons we feel a moral obligation to the animals in our care is that they can suffer and feel pain, and they can let us know it.

A machine can do none of these things. If a person kicks a dog, it will yelp, and the person’s cruelty will be clear. Smash your MacBook because you’re upset with Siri, and the only one suffering will be you, since those things cost over $1000.

In fact, anything we currently call “artificial intelligence” is far more like the software on our laptops than one of the “hosts” from “Westworld.” This will likely remain true for the indefinite future.

So the better question is not what we will do to artificial intelligence but what the AI we create will do to us. It was this question that prompted a recent statement released by the Ethics and Religious Liberty Commission of the Southern Baptist Convention. Entitled “Artificial Intelligence: An Evangelical Statement of Principles,” the statement offers needed guidelines for engaging the challenges of AI before the technology becomes a fait accompli. I quickly signed my name to it.

While none of us know with any certainty what direction this technology will take, we do know what ethical principles should govern the creation and implementation of technologies like AI.

First and foremost is that no technology should be “assigned a level of human identity, worth, dignity, or moral agency.” Only humans are created in the image of God, and we should never permit technology to subvert that exalted status. Machines will never be people.

You might be rolling your eyes that I would even think it necessary to say that. We must say it—out loud and to each other—for at least two reasons. First, there is much talk of AI and other technology merging with human biology to create a new kind of species. While that’s definitely still the stuff of science fiction, even discussing the possibility is the kind of subversion the Statement warns against.

Second, on a personal level, AI already threatens to replace real people in many of our lives. Artificial worlds, conversations, and friends too often distract us from the real ones, and for some, can even become replacements for real ones.

It’s worth repeating: the damage our technology can do to us is far graver than anything we can do to it. And again, machines are not people.

BreakPoint is a Christian worldview ministry that seeks to build and resource a movement of Christians committed to living and defending the Christian worldview in all areas of life. Begun by Chuck Colson in 1991 as a daily radio broadcast, BreakPoint provides a Christian perspective on today’s news and trends via radio, interactive media, and print. Today BreakPoint commentaries, co-hosted by Eric Metaxas and John Stonestreet, air daily on more than 1,200 outlets with an estimated weekly listening audience of eight million people. Feel free to contact us at BreakPoint.org, where you can read and search answers to common questions.

John Stonestreet, the host of The Point, a daily national radio program, provides thought-provoking commentaries on current events and life issues from a biblical worldview. John holds degrees from Trinity Evangelical Divinity School (IL) and Bryan College (TN), and is the co-author of Making Sense of Your World: A Biblical Worldview.

Publication date: May 21, 2019

Photo courtesy: Unsplash/Hitesh Choudhary
