In war, speed kills. The soldier who is a split second quicker on the draw may walk away from a firefight unscathed; the ship that sinks an enemy vessel first may spare itself a volley of missiles. In cases where humans can’t keep up with the pace of modern conflict, machines step in. When a rocket-propelled grenade is streaking toward an armored ground vehicle, an automated system onboard the vehicle identifies the threat, tracks it, and fires a countermeasure to intercept it, all before the crew inside is even aware. Similarly, US Navy ships equipped with the Aegis combat system can switch on Auto-Special mode, which automatically swats down incoming warheads according to carefully programmed rules.

These kinds of defensive systems have been around for decades, and at least 30 countries now use them. In many ways, they’re akin to the automatic braking systems in newer cars, intervening only under specific emergency conditions. But militaries, like automakers, have gradually been giving machines freer rein. In an exercise last year, the United States demonstrated how automation could be used throughout the so-called kill chain: A satellite spotted a mock enemy ship and directed a surveillance plane to fly closer to confirm the identification; the surveillance plane then passed its data to an airborne command-and-control plane, which selected a naval destroyer to carry out an attack. In this scenario, automation bought more time for officers at the end of the kill chain to make an informed decision—whether or not to fire on the enemy ship.

Militaries have a compelling reason to keep humans involved in lethal decisions. For one thing, they’re a bulwark against malfunctions and flawed interpretations of data; they’ll make sure, before pulling the trigger, that the automated system hasn’t misidentified a friendly ship or neutral vessel. Beyond that, though, even the most advanced forms of artificial intelligence cannot understand context, apply judgment, or respond to novel situations as well as a person. Humans are better suited to getting inside the mind of an enemy commander, seeing through a feint, or knowing when to maintain the element of surprise and when to attack.

But machines are faster, and firing first can carry a huge advantage. Given this competitive pressure, it isn’t a stretch to imagine a day when the only way to stay alive is to embrace a fully automated kill chain. If just one major power were to do this, others might feel compelled to follow suit, even against their better judgment. In 2016, then-Deputy Secretary of Defense Robert Work framed the conundrum in layperson’s terms: “If our competitors go to Terminators,” he asked, “and it turns out the Terminators are able to make decisions faster, even if they’re bad, how would we respond?”

Terminators aren’t rolling off the assembly line just yet, but each new generation of weapons seems to get us closer. And while no nation has declared its intention to build fully autonomous weapons, few have forsworn them either. The risks from warfare at machine speed are far greater than just a single errant missile. Military scholars in China have theorized about a “battlefield singularity,” a point at which combat moves faster than human cognition. In this state of “hyperwar,” as some American strategists have dubbed it, unintended escalations could quickly spiral out of control. The 2010 “flash crash” in the stock market offers a useful parallel: Automated trading algorithms contributed to a temporary loss of nearly a trillion dollars in market value in a single afternoon. To prevent another such calamity, financial regulators updated the circuit breakers that halt trading when prices plummet too quickly. But how do you pull the plug on a flash war?
