Ethics Regarding AI: What is Acceptable?

John Stonestreet

Recently, the U.S. Senate held a closed-door meeting with the biggest names in big tech, including Bill Gates, Elon Musk, and Mark Zuckerberg. Senate leadership informed attendees that the purpose of the meeting was to have a conversation about how the federal government could “encourage” the development of artificial intelligence while also mitigating its “risks.”

Given that focus, it’s more interesting who wasn’t invited than who was: no ethicists, philosophers, or theologians, nor really anyone outside the highly specialized tech sector. For a meeting meant to explore the future direction of AI and the ethics necessary to guide it, nearly everyone in that room had a vested financial interest in its continued growth and expansion.  

Thirty years ago, in his book Technopoly: The Surrender of Culture to Technology, cultural critic Neil Postman described how technology was radically reshaping our understanding of life and the world, both as individuals and as societies. Too often, when it comes to new technologies, we so conflate “can” and “should” that we convince ourselves that if we can do a thing, we should.

The shift toward a technocratic society redefines our understanding of knowledge. Technical knowledge takes priority over all else. In other words, the how is revered over the what and the why. In the process, things are stripped of their essential meaning. The distinction between what we can do and what we are for is lost. Technocratism also comes with a heavy dose of “chronological snobbery,” the idea that our innovations and inventions make us better than our ancestors, even in a moral sense. 

Another feature of a technocratic age is hyper-specialization. In higher education, students are encouraged to pursue increasingly narrow areas of study. The result is graduates who can do things but who have rarely wrestled with whether they should. Downstream of this is one of the corruptions of primary and secondary education, in which teachers spend a disproportionate amount of their preparation on education theory and pedagogy rather than on the subject areas they need to know. In other words, they study the how far more than the what and the why.

Of course, those who are researching, inventing, and developing AI should be invited to important meetings about AI. However, questioning the risks, dangers, or even potential benefits of AI requires first answering deeper questions, questions outside the realm of strict science:

What is the goal of our technologies? What should be our goal? What is off-limits and why? What is our operating definition of the good that we are pursuing through technology? Where is the uncrossable line between healing and enhancement, and what are the other proper limits of our technologies? What are people? What technocratic challenges have we faced in the past, and what can we learn?  

The questions we commit ourselves to answering will shape, among other things, whom we invite to the table. The presidential years of George W. Bush are mostly defined by his handling of the 9/11 terrorist attacks and the subsequent invasions of Afghanistan and Iraq. However, he also faced a particular challenge of our technocratic age, and how he handled it offers a model for the technocratic challenges of today.

A central issue of Bush’s second presidential campaign was embryonic stem cell research. Democratic vice-presidential candidate John Edwards promised that if John Kerry became president, “people like [actor] Christopher Reeve will get up out of that wheelchair and walk again.” Bush strongly opposed the creation of any new stem cell lines that required the destruction of human life, including embryos. His ethical clarity was due in part to the remarkable work of the President’s Council on Bioethics in developing an ethical framework for promising technologies.

In fact, their work produced Being Human, an incredible volume of stories, poetry, fables, history, essays, and Scripture. Published two years into Bush’s first term, it is unparalleled in its historical and ideological depth and breadth. Chaired by renowned bioethicist Leon Kass, the Council consisted of scientists, medical professionals, legal scholars, ethicists, and philosophers. The title Being Human points to the kinds of what and why questions that concerned the Council before it dealt with the how.

History has thoroughly vindicated President Bush’s position on embryo-destructive research. The additional funding committed to research into adult and induced pluripotent stem cells produced remarkable medical breakthroughs. But none of the promised embryonic stem cell therapies ever materialized, even after his successor in the Oval Office reversed those policies, rebuilt the Council around only scientists and medical researchers, and released enormous funding for embryo-destructive research.

Of course, even had the utopian predictions about embryonic stem cells materialized, the killing of some humans to benefit others would still have been morally reprehensible. Ends do not justify means. That is an ethical observation, not a scientific one.

What we “should” or “shouldn’t” do with AI depends heavily on the kind of world this is and the kinds of creatures that human beings are. If, as some have argued, AI is to be accorded the same dignity as human beings, then replacing humans in entire industries and putting tens of thousands out of work is not morally problematic. If human beings are unique and exceptional, and both labor and relationships are central to our identity, the moral questions are far weightier.

This BreakPoint was co-authored by Maria Baer. For more resources to live like a Christian in this cultural moment, go to breakpoint.org.

 Photo Credit: © Getty Images/David Gyung

Publication Date: September 25, 2023

John Stonestreet is President of the Colson Center for Christian Worldview, and radio host of BreakPoint, a daily national radio program providing thought-provoking commentaries on current events and life issues from a biblical worldview. John holds degrees from Trinity Evangelical Divinity School (IL) and Bryan College (TN), and is the co-author of Making Sense of Your World: A Biblical Worldview.

The views expressed in this commentary do not necessarily reflect those of CrosswalkHeadlines.


BreakPoint is a program of the Colson Center for Christian Worldview. BreakPoint commentaries offer incisive content people can't find anywhere else: content that cuts through the fog of relativism and the news cycle with truth and compassion. Founded by Chuck Colson (1931 – 2012) in 1991 as a daily radio broadcast, BreakPoint provides a Christian perspective on today's news and trends. Today, you can get it in written form and in a variety of audio formats: on the web, on the radio, or in your favorite podcast app on the go.
