Kevin Clarke, July 18, 2024
Andrii Denysenko, CEO of design and production bureau "UkrPrototyp," stands by Odyssey, a 1,750-pound ground drone prototype, at a corn field in northern Ukraine, on June 28, 2024. Facing manpower shortages and uneven international assistance, Ukraine is struggling to halt Russia’s incremental but pounding advance in the east and is counting heavily on innovation at home. (AP Photo/Anton Shtuka)

The Weekly Dispatch takes a deep dive into breaking events and issues of significance around our world and our nation today, providing the background readers need to make better sense of the headlines speeding past us each week. For more news and analysis from around the world, visit Dispatches.

Considering the Russian Federation’s overwhelming numerical advantage in its war against Ukraine, it is not hard to understand why Ukraine has come to rely so thoroughly on what it has dubbed its “Unmanned Systems Forces,” a cutting-edge arsenal of aerial, terrestrial and marine drones and unmanned fighting vehicles. In May, the U.S.F. became a fourth branch of the nation’s military—joining Ukraine’s army, navy and air force.

Unmanned platform entrepreneur Andrii Denysenko, working on a $35,000 ground recon and assault vehicle called the Odyssey, told The Associated Press: “We are fighting a huge country, and they don’t have any resource limits. We understand that we cannot spend a lot of human lives. War is mathematics.”

The A.P. reports that about 250 defense startups across the embattled nation “are creating the killing machines at secret locations that typically look like rural car repair shops.” Ukraine’s drones and battlefield vehicles are often put together with off-the-shelf commercial components modified to suit the Ukrainian military’s particular needs.

The vehicles of the unmanned force have scored stinging successes against Russian troops and armor in the contested territories of eastern Ukraine. They have hit manufacturing and logistics sites in Russia proper and detonated fuel and ammo dumps behind battle lines. They have also essentially neutralized the Russian fleet on the Black Sea. The Ukrainians are offering a real-time case study in adroit, innovative and, not least important, low-cost countermeasures that are no doubt being studied by militaries around the world.

One thing most of the unmanned strike platforms being developed by Ukraine have in common—at least for now—is that human handlers are still remotely guiding them across the battlefield. But reports are already surfacing of drones launched into Russia that are relying on artificial, not human, intelligence in decisions to evade defensive countermeasures, pick targets and, finally, execute a strike.

According to Reuters, the use of drone swarms to overwhelm Russian defensive countermeasures creates a degree of complexity too profound for remote human pilots to contend with. Ukraine has begun to turn swarm attacks over to A.I. algorithms.

How long before Ukrainian tech and software developers begin deploying battle vehicles liberated completely from human oversight in identifying, pursuing and finally liquidating battlefield targets? The battlefield of the future—once something only imagined in “I’ll be back”-style science fiction—is fast coming upon us, a combat zone freed from human control.

In practical terms, Ukraine’s U.S.F. is rushing far ahead of militaries around the world. But Ukraine is hardly alone in exploring the futuristic military potential of A.I.-managed or otherwise autonomous fighting platforms, called lethal autonomous weapons systems, or LAWS for short.

Russia, China, Israel, South Korea and other states are also experimenting with and even deploying A.I.-assisted or -guided weapons systems. Recently, Israel was sharply criticized for its use of “Lavender,” an A.I.-driven target analysis program that created an overly expansive list of some 37,000 people in Gaza for the Israel Defense Forces to choose from. And, according to the British daily The Guardian, the U.S. military sponsors more than 800 A.I.-related projects, directing almost $2 billion to A.I. initiatives in the 2024 budget alone.

The infamous Defense Advanced Research Projects Agency—anybody recall the “Total Information Awareness Program”?—is hard at work developing bleeding-edge tech in the pursuit of more effective ways to, well, kill America’s enemies. Its Robotic Autonomy in Complex Environments with Resiliency (yes, that’s RACER; DARPA does love its acronyms) program is fast developing autonomous tanks and other battlefield vehicles. Other DARPA initiatives are experimenting with pilotless fighter aircraft and sea drones.

Current Department of Defense policy does require “that all systems, including LAWS, be designed to ‘allow commanders and operators to exercise appropriate levels of human judgment over the use of force.’” That may sound ethically reassuring. But what level of human intervention do specific systems allow, and how do human LAWS managers decide what is “appropriate”?

According to the Congressional Research Service, a 2018 white paper called “appropriate” a “flexible term,” noting: “What is ‘appropriate’ can differ across weapon systems, domains of warfare, types of warfare, operational contexts, and even across different functions in a weapon system.” The report adds that “‘human judgment over the use of force’ does not require manual human ‘control’ of the weapon system…but rather broader human involvement in decisions about how, when, where, and why the weapon will be employed.”

In short, U.S. weapons that rely on autonomous or A.I. features are already in the field, particularly defensive systems that operate on trigger mechanisms. That is not new or necessarily high-tech, of course—old-fashioned landmines, for example, operate autonomously. The worrisome new tech would rely not on mechanical triggers but on artificial intelligence in literally calling the shots.

One of the remarkable aspects of this autonomous military frontier is how little it is addressed by international humanitarian law. What is at risk? Perhaps everything.

“If indeed AI poses an extinction-level existential threat to the future of humankind akin to the atomic bomb, as many in the field claim, the absence of a universally accepted global governance framework for military AI is a crucial concern,” Carnegie Europe fellow Raluca Csernatoni writes for the Carnegie Endowment for International Peace. “While this future Oppenheimer moment is worrying, the present risk of mission creep is more troubling because AI systems initially designed for specific civilian tasks can be repurposed to serve military objectives.”

United Nations Secretary General António Guterres has been among the global leaders troubled by the absence of international law or diplomatic accords governing LAWS. In his New Agenda for Peace, a policy brief released in 2023, he wrote: “Fully autonomous weapons systems have the potential to significantly change warfare and may strain or even erode existing legal frameworks.” Autonomous weapons, he said, “raise humanitarian, legal, security and ethical concerns and pose a direct threat to human rights and fundamental freedoms.”

“Machines with the power and discretion to take lives without human involvement are morally repugnant and politically unacceptable and should be prohibited by international law,” the secretary general concluded. A U.N. resolution in December 2023 called for a review of LAWS under current humanitarian law, and a U.N. report is expected by the next meeting of the General Assembly in September.

The church has likewise long worried about the rise of the machines in combat. The human capacity for mercy, the church has persistently taught, must remain a viable component in even the snappiest of snap decisions made on modern battlefields.

Ten years ago, Vatican officials joined a handful of nations then calling for a preemptive ban on “fully autonomous weapons”—a proposal resisted by Russia, the United States and other nations that have been moving ahead with LAWS development and deployment. Cardinal Silvano Maria Tomasi, C.S., then the permanent observer of the Holy See to the United Nations in Geneva, said that humankind risked becoming “slaves of their own inventions.”

“Meaningful human involvement is absolutely essential in decisions affecting the life and death of human beings,” then-Archbishop Tomasi told the scientists and diplomats gathered for a Vatican-sponsored LAWS conference in May 2014. He said it was essential “to recognize that autonomous weapon systems can never replace the human capacity for moral reasoning, including in the context of war.”

In a statement released in 2016, “The Humanization of Robots and the Robotization of the Human Person,” the Rev. Antoine Abi Ghanem and advisor Stefano Saldi, then representing the Vatican’s mission in Geneva, wrote: “The idea of a ‘moral’ and ‘human’ war waged by non-conscious, non-responsible and non-human agents is a lure that conceals desperation and a dangerous lack of confidence in the human person…. Robots and artificial intelligence systems are based on rules, including protocols for the invention of new rules. But legal and ethical decisions often require going beyond the rule in order to save the spirit of the rule itself.”

And most recently in his historic address to G7 leaders in Puglia in June—it was the first time a pope had met with that group of world leaders—Pope Francis broadly warned about the threat posed by artificial intelligence and specifically called for a ban on autonomous weapons systems. “We would condemn humanity to a future without hope if we took away people’s ability to make decisions about themselves and their lives, by dooming them to depend on the choices of machines,” he said. “We need to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs: Human dignity itself depends on it.”

He repeated that message soon after in a statement to corporate developers and proponents of artificial intelligence and to the world faith and political leaders gathered in Hiroshima, Japan. Recalling that Hiroshima itself was a sorrowful example of a technology overwhelming human moral judgment, he described as “urgent” the necessity to “reconsider the development and use of devices like the so-called ‘lethal autonomous weapons’ and ultimately ban their use.”

“No machine should ever choose to take the life of a human being,” the pope said.


A deeper dive


A brave new world in defense spending

There are many signs that lethal autonomous weapons production is not a normal sector, even within a vast weapons manufacturing industry already regularly deplored by religious leaders, particularly Pope Francis, for its scandalous cost and for the nature of what comes off its production lines. The LAWS sub-market raises alarms among diplomats, defense analysts, religious leaders, scientists and ethicists alike.

Lethal autonomous weapons calling the shots?

A market research firm noted in 2021 that the autonomous weapons market promises explosive annual growth of 10.4 percent, expanding from $16.2 billion in 2024 to what will likely be more than $24 billion by 2033. But the same researchers note a key “hindrance” to growth: 274 companies and organizations and 3,806 individuals who work on the development of artificial intelligence have pledged not to work on autonomous weapons systems, agreeing that “the decision to take a human life should never be delegated to a machine.”

152: The number of U.N. member states that voted in December 2023 to request that the U.N. secretary general prepare a report on autonomous weapons systems, seeking guidance on humanitarian, legal, security, technological and ethical concerns posed by the emerging weapons systems. The report is expected to be delivered at the U.N. General Assembly in September.

30: The number of countries, including the Holy See, that have called for a preemptive ban on LAWS.

$1.8 billion: The Department of Defense allocation for its A.I. development programs in 2024.

$500 million: The amount the Pentagon will spend in 2024 to finance its Replicator Initiative, an A.I.-enabled systems program that includes research and development of kamikaze drones, unmanned surface vessels and counter-drone systems.

1,000: The number of unmanned warplanes under development for the U.S. Air Force—with an anticipated cost of $6 billion over five years.

Sources: Allied Market Research, Business Research Company, United Nations, Congressional Research Service, Stop Killer Robots, Human Rights Watch, The Associated Press, Future of Life Institute, Defensescoop, Air Force Times, The Guardian.
