Kevin Clarke, July 18, 2024
Andrii Denysenko, CEO of design and production bureau "UkrPrototyp," stands by Odyssey, a 1,750-pound ground drone prototype, at a corn field in northern Ukraine, on June 28, 2024. Facing manpower shortages and uneven international assistance, Ukraine is struggling to halt Russia’s incremental but pounding advance in the east and is counting heavily on innovation at home. (AP Photo/Anton Shtuka)

The Weekly Dispatch takes a deep dive into breaking events and issues of significance around our world and our nation today, providing the background readers need to make better sense of the headlines speeding past us each week. For more news and analysis from around the world, visit Dispatches.

Considering the Russian Federation’s overwhelming numerical advantage in its war against Ukraine, it is not hard to understand why Ukraine has come to rely so thoroughly on what it has dubbed its “Unmanned Systems Forces,” a cutting-edge arsenal of aerial, terrestrial and marine drones and unmanned fighting vehicles. In May, the U.S.F. became a fourth branch of the nation’s military—joining Ukraine’s army, navy and air force.

Unmanned platform entrepreneur Andrii Denysenko, working on a $35,000 ground recon and assault vehicle called the Odyssey, told The Associated Press: “We are fighting a huge country, and they don’t have any resource limits. We understand that we cannot spend a lot of human lives. War is mathematics.”

The A.P. reports that about 250 defense startups across the embattled nation “are creating the killing machines at secret locations that typically look like rural car repair shops.” Ukraine’s drones and battlefield vehicles are often assembled from off-the-shelf commercial components modified to suit the Ukrainian military’s particular needs.

The vehicles of the unmanned force have scored stinging successes against Russian troops and armor in the contested territories of eastern Ukraine. They have hit manufacturing and logistics sites in Russia proper and detonated fuel and ammo dumps behind battle lines. They have also essentially neutralized the Russian fleet on the Black Sea. The Ukrainians are offering a real-time case study in adroit, innovative and, not least important, low-cost countermeasures that are no doubt being studied by militaries around the world.

One thing most of the unmanned strike platforms being developed by Ukraine have in common—at least for now—is that human handlers still remotely guide them across the battlefield. But reports are already surfacing of drones launched into Russia that rely on artificial, not human, intelligence to evade defensive countermeasures, pick targets and complete a strike.

According to Reuters, the use of drone swarms to overwhelm Russian defensive countermeasures creates a degree of complexity too profound for remote human pilots to contend with. Ukraine has begun to turn swarm attacks over to A.I. algorithms.

How long before Ukrainian tech and software developers begin deploying battle vehicles liberated completely from human oversight in identifying, pursuing and finally liquidating battlefield targets? The battlefield of the future—once something imagined only in “I’ll be back”-style science fiction—is fast coming upon us, a combat zone freed from human control.

In practical terms, Ukraine’s U.S.F. is rushing far ahead of militaries around the world. But Ukraine is hardly alone in exploring the futuristic military potential of A.I.-managed or otherwise autonomous fighting platforms, called Lethal Autonomous Weapons Systems, or LAWS for short.

Russia, China, Israel, South Korea and other states are also experimenting with and even deploying A.I.-assisted or -guided weapons systems. Recently, Israel was sharply criticized for its use of “Lavender,” an A.I.-driven target analysis program that generated an expansive list of some 37,000 people in Gaza as potential targets for the Israel Defense Forces. And, according to the British daily The Guardian, the U.S. military sponsors more than 800 A.I.-related projects, directing almost $2 billion to A.I. initiatives in the 2024 budget alone.

The infamous Defense Advanced Research Projects Agency—anybody recall the “Total Information Awareness Program”?—is hard at work developing bleeding-edge tech in the pursuit of more effective ways to, well, kill America’s enemies. Its Robotic Autonomy in Complex Environments with Resiliency (yes, that’s RACER; DARPA does love its acronyms) program is fast developing autonomous tanks and other battlefield vehicles. Other DARPA initiatives are experimenting with uncrewed fighter aircraft and sea drones.

Current Department of Defense policy does require “that all systems, including LAWS, be designed to ‘allow commanders and operators to exercise appropriate levels of human judgment over the use of force.’” That may sound ethically reassuring. But what level of human intervention do specific systems actually allow, and how do the humans managing LAWS decide what is “appropriate”?

According to the Congressional Research Service, a 2018 white paper called appropriate a “flexible term,” noting: “What is ‘appropriate’ can differ across weapon systems, domains of warfare, types of warfare, operational contexts, and even across different functions in a weapon system.” The report adds that “‘human judgment over the use of force’ does not require manual human ‘control’ of the weapon system…but rather broader human involvement in decisions about how, when, where, and why the weapon will be employed.”

In short, U.S. weapons that rely on autonomous or A.I. features are already in the field, particularly defensive systems that operate on trigger mechanisms. That is not new or necessarily high-tech, of course—old-fashioned landmines, for example, operate autonomously. The worrisome new tech would rely not on mechanical triggers but on artificial intelligence in literally calling the shots.

One of the remarkable aspects of this autonomous military frontier is how little it is addressed by international humanitarian law. What is at risk? Perhaps everything.

“If indeed AI poses an extinction-level existential threat to the future of humankind akin to the atomic bomb, as many in the field claim, the absence of a universally accepted global governance framework for military AI is a crucial concern,” Carnegie Europe fellow Raluca Csernatoni writes for the Carnegie Endowment for International Peace. “While this future Oppenheimer moment is worrying, the present risk of mission creep is more troubling because AI systems initially designed for specific civilian tasks can be repurposed to serve military objectives.”

United Nations Secretary General António Guterres has been among the global leaders troubled by the absence of international law or diplomatic accords governing LAWS. In his New Agenda for Peace, a policy brief released in 2023, he wrote: “Fully autonomous weapons systems have the potential to significantly change warfare and may strain or even erode existing legal frameworks.” Autonomous weapons, he said, “raise humanitarian, legal, security and ethical concerns and pose a direct threat to human rights and fundamental freedoms.”

“Machines with the power and discretion to take lives without human involvement are morally repugnant and politically unacceptable and should be prohibited by international law,” the secretary general concluded. A U.N. resolution in December 2023 called for a review of LAWS under current humanitarian law, and a U.N. report is expected by the next meeting of the General Assembly in September.

The church has likewise long worried about the rise of the machines in combat. The human capacity for mercy, the church has persistently taught, must remain a viable component in even the snappiest of snap decisions made on modern battlefields.

Ten years ago, Vatican officials joined a handful of nations then calling for a preemptive ban on “fully autonomous weapons”—a proposal resisted by Russia, the United States and other nations that have been moving ahead with LAWS development and deployment. Cardinal Silvano Maria Tomasi, C.S., then the permanent observer of the Holy See to the United Nations in Geneva, said that humankind risked becoming “slaves of their own inventions.”

“Meaningful human involvement is absolutely essential in decisions affecting the life and death of human beings,” then-Archbishop Tomasi told the scientists and diplomats gathered for a Vatican-sponsored LAWS conference in May 2014. He said it was essential “to recognize that autonomous weapon systems can never replace the human capacity for moral reasoning, including in the context of war.”

In a statement released in 2016, “The Humanization of Robots and the Robotization of the Human Person,” the Rev. Antoine Abi Ghanem and advisor Stefano Saldi, then representing the Vatican’s mission in Geneva, wrote: “The idea of a ‘moral’ and ‘human’ war waged by non-conscious, non-responsible and non-human agents is a lure that conceals desperation and a dangerous lack of confidence in the human person…. Robots and artificial intelligence systems are based on rules, including protocols for the invention of new rules. But legal and ethical decisions often require going beyond the rule in order to save the spirit of the rule itself.”

And most recently, in his historic address to G7 leaders in Apulia in June—it was the first time a pope had met with that group of world leaders—Pope Francis broadly warned about the threat posed by artificial intelligence and specifically called for a ban on autonomous weapons systems. “We would condemn humanity to a future without hope if we took away people’s ability to make decisions about themselves and their lives, by dooming them to depend on the choices of machines,” he said. “We need to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs: Human dignity itself depends on it.”

He repeated that message soon after in a statement addressed to the corporate developers and proponents of artificial intelligence and to the world faith and political leaders gathered in Hiroshima, Japan. Recalling that Hiroshima itself offers a sorrowful example of a technology overwhelming human moral judgment, he described as “urgent” the necessity to “reconsider the development and use of devices like the so-called ‘lethal autonomous weapons’ and ultimately ban their use.”

“No machine should ever choose to take the life of a human being,” the pope said.
