Bridget Ryder | March 19, 2025
United States Vice-President JD Vance delivers a speech during the Artificial Intelligence Action Summit at the Grand Palais in Paris, France, Tuesday, Feb. 11, 2025. (Sean Kilpatrick/The Canadian Press via AP)

The Trump administration has proposed a light regulatory hand to govern the emerging artificial intelligence industry, urging other global leaders to do the same. It seems that Europe, which had stepped ahead as a world leader in regulating artificial intelligence, is now following the White House’s lead. As the use of artificial intelligence grows, Catholic ethicists and industry leaders say that more remains to be done to mitigate the potential impact of artificial intelligence on human culture and society.

During his first official trip abroad in February, Vice President JD Vance met with global leaders at the Artificial Intelligence Action Summit in Paris and the Munich Security Conference in Germany. He made it clear that the United States intends to be a global A.I. leader and urged other nations not to throttle back the rapidly advancing technology with regulation motivated by fears about job losses or A.I. safety.

“We need international regulatory regimes that foster the creation of A.I. technology rather than strangle it,” Mr. Vance said. “And we need our European friends in particular to look to this new frontier with optimism rather than trepidation.”

While India, China, the Vatican and many European governments signed the final document produced at the conference, the United States did not. The joint statement called for better monitoring of how the spread of A.I. technology is affecting jobs and announced a new A.I. partnership backed by an initial $400 million in pledges from A.I. companies. It will focus on open source A.I. for public interest use like health care.

In June 2024 European officials issued the world’s first broad regulatory framework on A.I. technology, an effort greeted by the Commission of the Bishops’ Conferences of the European Union as a first step to ensuring ethically sound development and implementation of artificial intelligence technologies.

The E.U.’s A.I. Act seeks to contain some of the new technology’s potential dangers. It established four risk categories, with the level of regulation scaled to the degree of potential hazard. A.I. models considered to pose “unacceptable risk,” like A.I. tech designed to “manipulate cognitive human behavior,” are directly outlawed by the E.U. framework. Experts in child protection have expressed concerns about A.I.-assisted online grooming of children and teens.

The European law also prohibits A.I. systems used for “social scoring,” defined as “classifying people based on behaviour, socio-economic status or personal characteristics,” or direct biometric surveillance such as facial recognition, both of which are employed by the Chinese Communist government to monitor and control its population.

Other uses of A.I. are categorized as “high risk,” including models used in law enforcement, migration control, worker management, access to public and key private services, or important civic infrastructure like public transportation. These A.I. models will have to be assessed by the individual governments of E.U. member states before being allowed on the A.I. market. Those that are approved are registered in an E.U. database. The European regulation also stipulates that individuals can file complaints with national authorities if they feel they have been wronged by A.I. systems.

A.I. models categorized as “limited risk” will have to follow transparency rules. These include chatbots and generative-A.I. apps like the now widely available ChatGPT. A.I.-generated content or images must be labeled as such. Only “minimal or no risk” A.I., like spam filters, escaped regulation under the act.

The E.U. law may not be “global in scope,” Domingo Sugranyes Bickel, director of seminars at the Paul VI Foundation, a technology think tank, told America. “But it affects 450 million consumers, a significant market, and companies that want to operate in it will have to implement [its] rules and perhaps adopt them for other territories as well for cost-saving reasons.”

Trump advisor and tech entrepreneur Elon Musk knows personally how E.U. regulations can affect U.S. tech companies seeking a global reach. The European Union is already a world leader in online consumer protection through its General Data Protection Regulation and Digital Services Act, which force any website or social media platform that wants to reach users within the E.U. to follow rules on transparency and consent in data collection and use, as well as on monitoring illegal content.

Mr. Musk has clashed with E.U. regulators, and his company X faces hefty fines over violations of E.U. rules regarding illegal content. Mr. Trump has threatened to impose tariffs on European goods, partly in retaliation for the European Union’s dealings with U.S. tech companies like Mr. Musk’s.

The latest Vatican document on artificial intelligence, “Antiqua et Nova,” pointed out the need for all layers of society, including government, to consider and address the industry’s potential dangers.

“As these applications and their social impacts become clearer, appropriate responses should be made at all levels of society, following the principle of subsidiarity,” the document says. “Individual users, families, civil society, corporations, institutions, governments, and international organizations should work at their proper levels to ensure that A.I. is used for the good of all.”

However, the Vatican document describes the European framework as a preliminary step, as it establishes little more than which A.I. systems “can be introduced into the EU’s internal market.”

In the United States, Congress has yet to pass any national laws aimed at governing artificial intelligence. Former President Joe Biden issued an executive order that mirrored the E.U.’s regulation in many ways, subjecting powerful A.I. models to government evaluation before they would be allowed to enter into public use and mandating the labeling of A.I.-generated content and assessments by government agencies of A.I.’s impact on jobs.

The Biden order also prohibited bias based on gender or race in A.I. models used for hiring or housing purposes like screening renters. Some individual American states have passed their own A.I. regulations, most of them likewise focused on prohibiting gender or race bias in A.I. models used for job hiring and housing applications.

But in January, President Donald Trump rescinded Mr. Biden’s order and issued his own executive instructions focused on deregulation in an effort to advance A.I. development. The Trump order mandates the creation of a national A.I. Action Plan, which is now open for public comment, and the elimination of all regulations put in place to comply with Mr. Biden’s executive order that are now deemed “inconsistent with enhancing America’s leadership in AI.”

Friederike Ladenburger is the adviser for ethics, research and health for the European bishops’ conferences. In an email to America, she wrote that the European Union was not likely to tear down what it has already built up in terms of an A.I. regulatory structure. But in light of the Trump administration’s aggressive stance, she said, the bloc was not likely to build up stronger guardrails any time soon.

Just days after the Paris conference ended, the European Commission put aside a directive on liability in artificial intelligence as too difficult to negotiate into law. At the same time, the Competitiveness Compass, the current policy package rolled out by the European Commission for leaders of member states to review, carries the tagline: “Make Business Easier and Faster.”

“There is indeed a clear intention to strengthen the European economy,” Ms. Ladenburger wrote. “We all share the goal of improving the economy and enhancing the well-being of individuals, families and communities. However, this progress must be pursued in a comprehensive and ethical manner. In the context of A.I., this means avoiding complete deregulation and striving for a balanced approach—one that… promotes economic growth while ensuring the responsible implementation of A.I. technologies.”

Some industry leaders warn that the European Union’s current regulations may inadvertently cause one problem ethicists have warned about—that the expansion of A.I. tech will promote monopolies and a concentration of technological and economic power in relatively few hands. Matthew Sanders, founder of the Catholic A.I. company Longbeard, explained to America that while the E.U. rules sound like common-sense regulations, governments currently have a limited technological capacity for assessing A.I. programs.

Given this deficit in expertise, it could take years for A.I. apps to receive approval before hitting the market. This situation, he fears, will shut out small and startup A.I. companies and leave A.I. technology controlled by a few large companies willing to work in lockstep with governments.

“That will not lead to a safer world,” he said. “That will lead to a world in which governments are able to influence the information appetite of all their citizens, and I don’t think anyone at this point trusts their government enough to [wield] that awesome responsibility.”

That perspective does not mean Mr. Sanders is insensitive to the dangers of A.I. He acknowledges that a less regulated commercial introduction of A.I. would be more chaotic, but he believes a competitive market will provide incentive for industry to develop and offer safe and accurate A.I. products.

He argues that companies like Google would not want their prestige tarnished by A.I. models that prove unreliable, biased or engaged in censoring. When different A.I. models are available at the same time, he said, “If there is a bad actor spreading misinformation, it’s checked by the other A.I.s.” That is part of the reason he believes an open-source A.I. option is imperative.

Open source A.I., free for all developers to explore, would allow an “upstart in his garage… to tweak [an A.I.] model for his own purposes,” he said. “In this respect open source has this ability to check even the big players.”

No matter how far the E.U.’s regulatory scheme reaches globally, Mr. Sanders and other experts emphasize that the task of addressing the potential dangers and disruptions that A.I. brings is only getting started and urgently needs to be taken up by all sectors of society.

Many estimates suggest significant job loss and dislocation because of the rise of artificial intelligence. Mr. Sanders worries that over the next 10 years A.I. could eliminate up to 75 percent of current jobs even as it helps create new ones. He urges more civic discussion on the topic and vigilance by the public to ensure that legislators are adequately informed and responding to the rapidly expanding power and impact of A.I. tech.
