(RNS) — It seems as if every time we turn around there’s a new worry about artificial intelligence. AI is going to take over the nuclear launch codes and kill us all. Or was it just going to shut down the electrical grid? Maybe just the internet?
Wait, wasn’t it going to enslave us and use us as sources of energy? Or just replace the creatives who make all of our music and movies? Isn’t that what the Hollywood strike was all about?
Some of these worries are legitimate. Some are fairy tales that have already been explored in dozens of popular movies over the last couple of generations. (Paging HAL!)
While we obsess over AI’s dystopian downsides, we fail to account for the good things it may do for us in the coming years, from cancer screening to road design. AI is going to change countless lives for the better.
But there is a foundational threat posed by AI that we all seem to be ignoring, one very much related to theology and an enchanted view of what academics sometimes call moral anthropology. AI has the capacity to undermine our understanding of the human person.
Let me explain by way of example.
This past week, OpenAI announced that its algorithmic language model and imaging platform “can now see, hear, and speak.” For instance, show the AI an image of a bike and ask it how to lower the seat: OpenAI’s platform can analyze the image, determine what kind of bike it shows, draw on the patterns it learned from its training data, and produce a likely answer, in text or in a synthesized voice.
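For readers curious what that kind of exchange looks like under the hood, here is a minimal sketch using OpenAI’s Python SDK. The model name, the image URL and the prompt are placeholder assumptions for illustration, not a transcript of the announcement’s actual demo.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# Send a text question plus an image reference in a single user message.
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any vision-capable model name works here
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "How do I lower the seat on this bike?"},
                # Hypothetical URL standing in for the reader's own photo.
                {"type": "image_url", "image_url": {"url": "https://example.com/bike.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Notice that nothing in this exchange requires understanding in the human sense: an image and a string go in, and statistically likely text comes out.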
AI is not, of course, really thinking. “It” is a set of algorithms and neural networks trained on an enormous store of data made by human beings. As one professor at the University of Michigan who studies machine learning put it, “Stop using anthropomorphic language to describe models.”
There’s that Greek word “anthropos” — human — again. The professor is worried that when we use language that assumes the form or structures of the human, we are implicitly corrupting the way we think about AI. We are fooling ourselves into thinking that a language model or image platform could be, well, like us.
But the worry goes deeper than that, in the opposite direction. While some may be inclined to move closer to the view that AI is like us, the broader culture is actually primed to move closer to the view that we are like AI. Indeed, many students in my classes in recent years have said something like, “Well, aren’t we just essentially organic machines? What is substantially different about the way we analyze a photo, engage a database, and spit back an answer to a question?”
The underlying problem here is our culture’s advanced state of what the philosopher Charles Taylor called “disenchantment,” especially when it comes to our understanding of ourselves. In the secular age of the post-Christian West, our cultural subjectivity no longer has a way to make sense of supernatural concepts: being made in the image and likeness of God, the soul, grace, a will that is transcendent and free, or (in some extreme cases) even consciousness.
We do have a way of making sense of machines, computers, algorithms and neural nets: basically, all forms of matter in motion. The last few centuries, and especially the last few decades, have been preparing us to imagine ourselves as very similar to AI. Our seeing, hearing, speaking and other actions, no longer understood as having anything supernatural about them, become comparable to the operations of other kinds of neural nets.
If we explained AI to a medieval person, there would be zero chance of their confusing it with creatures like us. Their cultural idea of how humans are formed simply wouldn’t allow them to make that mistake.
I, too, fundamentally dissent from our 21st-century reductionist view of the human person. Instead I choose to go with the wisdom of Jedi Master Yoda, who taught Luke Skywalker in “The Empire Strikes Back” that we are not mere “crude matter,” but are, rather, “luminous beings.” We are ensouled creatures whose form reflects the image and likeness of God.
Let us similarly respond to AI with prudence and care, neither rejecting the life-changingly good things that will come with it nor uncritically accepting every dangerous or destructive application. But, above all, let us resist the idea that AI is like us or (even worse) that we are like AI. Neither could be further from the truth.