On Jan. 28, the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education published “Antiqua et Nova,” a thorough theological treatment of artificial intelligence. The document contributes to a series of recent Vatican statements on A.I., including some issued directly by the Holy Father. Of these, a particularly important one was a speech delivered at the G7 summit on June 14, 2024. There, Pope Francis made the case to the gathered politicians for the need to “create the conditions” for a “positive and fruitful” use of artificial intelligence in the future. He warned that A.I. is a tool that can shape human culture, and that how we choose to use A.I. will determine the way it shapes culture going forward. Will it cure diseases or drop bombs? Help stop climate change or increase global inequities?
The benefits of A.I. can outstrip its risks only if we are careful to create conditions of use that foster a culture of encounter. Otherwise, we run the risk of further promoting a culture of death and technocracy, which Francis has termed the “technocratic paradigm.”
For Francis, this is the biggest risk of A.I.: It contributes to the technocratic paradigm, in which human beings have, over the past few centuries, increasingly come to be seen as cogs in a global machine of profit and war. Modern technology has exacerbated the rise of the technocratic paradigm, with climate change, increasing rates of depression and rising global inequity as symptoms. We need political action, the pope argued to the G7, in order to create the conditions of possibility to promote the common good in the future of A.I.
Ethics and Action
From both the Vatican and the pope, we have heard a consistent message: A.I. is a tool that is created by and shapes human culture. And we have seen specific calls to action: In his G7 address, for example, Francis argued for a ban against lethal autonomous weapons, and pushed for a political response to A.I. that could help create an “economy for the common good.”
Thus far, however, we have not seen much in the way of specific recommendations. We believe that this is an important next step for Francis as he emerges as a leading global voice in conversations about A.I.
It has, after all, become common to greet the rise of A.I. with a kind of doom-and-gloom pessimism. This pessimism, however, regularly resists concrete calls to action. For example, the Vatican-sponsored “Rome Call for A.I. Ethics,” which Francis quoted approvingly at the G7, is a shared document whose positive, generic language most people can readily agree with. And yet it offers little in the way of enforcement or accountability.
Microsoft, for example, is one of the original signers of the “Rome Call for A.I. Ethics,” and yet seemed to follow in Elon Musk’s footsteps by firing its entire A.I. ethics and society team in March 2023, right before launching its massive Copilot A.I. system. Microsoft still claims to hold “responsible A.I. principles,” but again, it has no one to hold it accountable but itself.
In May 2023, the Center for A.I. Safety released a statement warning that an unfettered artificial intelligence industry could literally bring about the end of humanity: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories included Sam Altman, Bill Gates, Demis Hassabis and Ray Kurzweil. At a meeting of the bipartisan Senate Judiciary Committee, Mr. Altman also urged U.S. lawmakers to regulate A.I., calling it a culture-altering technology that must be developed with transparency and an eye to the good. Again, doom and gloom. Also again, precious little in terms of concrete recommendations.
Reckoning With the Future of A.I.
We should emphasize: Vatican documents such as “Antiqua et Nova” and Francis’ address at the G7 have not merely echoed these previous statements. Francis has added to the international conversation by framing concerns about A.I. in the language of the Catholic intellectual tradition. Addresses such as those to the G7 have provided the world with a tradition and language for our collective concerns, and in this way have gone beyond other, similar calls of alarm.
Yet it is a sign of having played it relatively safe when one has drawn no lines in the sand that anyone would be unwilling to cross. Everyone nods along and takes comfort in being on the same page, but no one is ultimately challenged to act. For example, while Francis has pushed for a ban on fully autonomous weapons systems, such a ban is relatively uncontroversial. The more dangerous use of A.I. lies in partially autonomous systems, which are already deployed around the world and in deep need of political restraint.
One of the hardest things to do in conversations about A.I. is to say something that everyone else isn’t already thinking. Over the past few years, the set of algorithms, applications and programs widely termed “artificial intelligence” has transitioned from a largely invisible part of modern computing infrastructure to the primary way in which people see the future of computer use.
This move has occurred not only because of the explosion of generative A.I. in the last two years but also because of a concentrated global marketing effort to convince the public that A.I. is the future, that “strong” or “general” A.I. is coming soon and that humans should hope for or fear the future. It doesn’t matter which, as long as they feel strongly about it. The political response to A.I. has been caution and concern, even in places like the United States, where the federal government seems incapable of action on the issue.
The Dangers of A.I.
Yet the current harms and threats of A.I. are numerous: the increasing spread of deepfakes, ongoing weapons development, violations of personal privacy in terms of body and facial images, the displacement of human creativity, and a rise in plagiarism, censorship and disinformation. In response to these challenges, we need to move beyond the recognition of the obvious—that we want to avoid the bad aspects of A.I. even while fostering the good. We need concrete recommendations about how to move toward these goals.
The best known governance response has been from the European Union, whose A.I. Act will go into effect later this year, requiring transparency, watermarking and accountability for uses of A.I. that might harm the public. While the end result of this act is unknown, it is a remarkable and helpful step in governance, since history has shown time and again that capitalist enterprises cannot be trusted to regulate themselves.
But, again, more concrete steps in line with the A.I. Act must be taken. For example, the energy usage required for A.I. is only increasing. Google recently reported that despite its stated goal to be carbon neutral by 2030, its energy usage has actually increased by 48 percent compared with 2019, making the goal “challenging.” Because carbon neutrality is only a self-imposed goal meant to endear Google to the public, there is no accountability for this increase in energy usage and no regulatory repercussions.
All evidence points to similar energy increases across all major A.I. producers, thanks to the vast amounts of energy required to train and query A.I. systems, although the recent emergence of DeepSeek as a powerful model with lower training requirements has begun to complicate this narrative. Given the emphasis on climate change throughout Francis’ pontificate, this could be a natural place for him to intervene with specific requests for action from world leaders.
Likewise, consider the challenge of A.I.-generated deepfakes: fabricated images and videos that show people doing things they never did, and that can strip away human dignity. In the same vein, facial recognition systems currently used by police at airports and at many borders have the effect of transforming people into digital identities, leading to false identifications and false arrests. Further, hallucinations—false information that A.I. systems present as true—persist in all generative A.I. systems.
How can world leaders begin to mitigate the risks of deepfakes and hallucinations? How can governing bodies pass legislation requiring the kind of watermarking and transparency needed to resist them?
It is also important to note that the risks of A.I. disproportionately affect those already on the outskirts of society: the poor, the unhoused, the immigrant. The countries that Francis addressed are already using A.I. systems that censor, track identities and criminalize, and they are already producing weapons that incorporate A.I.
Concrete Action
In Pope Francis’ address to the G7, he offered a philosophical background of technological development, pleaded for A.I. development focused on the common good and urged members of the G7 to use politics to create the conditions of possibility for A.I. systems that create a culture of encounter. These are all things that we—and, we assume, all people of good will—should readily agree with.
Likewise, “Antiqua et Nova,” while not an encyclical, continues to signal that A.I. is a key focus for what is perhaps the final stretch of Francis’ pontificate. “Antiqua et Nova” cites previous Vatican publications on A.I., including the G7 speech and the book Encountering A.I., published by the Dicastery for Culture and Education last year. This document represents the clearest picture of a Catholic theology of A.I. to date, and as such is a decisive step forward in Francis’ focus on A.I.
But as significant a step as this theological document represents, much more remains to be done. In laying this groundwork, Pope Francis has also set himself up for what we believe is the important next step: to address the specific and imminent harms of A.I. with concrete recommendations to world leaders.
The pope’s comments at the G7 were met with nods of agreement from world leaders, as if to say, “Yes, that’s what I mean to do—create a better world.” That is a good start, and it is encouraging that world leaders have found in Francis someone worth listening to. The next step for him is to draw some lines in the sand and to make recommendations that may cause more stirring in seats than nodding of heads. To the pope’s credit, Vatican City published its first “Guidelines on Artificial Intelligence” law in late December, attempting to control the way artificial intelligence is used within the state. The law echoes aspects of the European Union law and represents—at least on the policy side—a small but significant step forward.
But there remains so much more to do. A.I. is not a distant, lingering menace, like the threat of nuclear weapons. It is already actively transforming society, community by community, person by person. It can offer real and wonderful possibilities for medical and scientific advancements and has the potential to alleviate many human stresses. Yet without immediate changes to the global capitalist structure of A.I. deployment and wealth procurement, even the noblest efforts will serve only to increase inequity and further degrade our already precarious environmental future.
Pope Francis’ voice on this issue has become a clarion call of hope for billions of people across the world, no matter their religious persuasion. While he cannot regulate multinational corporations, he can offer examples of responsible technology use and lift up specific instances of failure. He can be prophetic in his tangible hopes for a technological future while pushing for swift and decisive action.
As Catholic scholars engaged in A.I., we are grateful to see the pope consciously and actively engage in A.I. discussions. It is our hope that, having laid crucial theological groundwork over the past year, Francis will now engage these issues in the future with calls to concrete actions by governments and corporations. The role of the church should be prophetic in its hopeful visions of the future, but also in calling attention to the ills of the present.