The Autonomous Battlefield

And Why the U.S. Military Isn’t Ready for It
David Petraeus and Isaac C. Flanagan
Foreign Affairs, March 12, 2026

Presenting an AI-powered drone interceptor, Nowa Deba, Poland, November 2025 (Kacper Pempel / Reuters)

The era of autonomous warfare will not announce itself with robotic armies marching across battlefields. Instead, it is already emerging, quietly and inexorably, in the skies and fields of eastern Ukraine (and to a lesser degree in the Middle East), where missions are increasingly executed by machines at speeds no human can match and electronic warfare is severing the links between operators and their machines. Very soon, autonomous systems will no longer operate individually; over time, they will form platoon- or even battalion-sized units that share information and coordinate without human intervention. And the side that waits for human approval before acting will lose. This transition demands that militaries rethink not just the nature of command but the fundamental nature of war.
The adaptation challenge goes beyond technological and industrial issues, although those aspects are enormously important. Already, Ukrainian engineers are rapidly developing software for autonomous navigation, and Ukrainian military technicians are now assembling first-person-view drones and other types in extraordinary numbers: some 3.5 million last year and a potential seven million this year, compared with 300,000 to 400,000 now assembled annually in the United States. The U.S. military will have to adapt much faster to manufacture drones in the enormous numbers required and to learn to employ autonomous systems effectively. But hardware and software will not be enough. It will be just as critical to develop new concepts and doctrine, adjust organizational structures, and institute the new kinds of military education and training that autonomous warfare will demand. These are all areas in which military institutions are often overly deliberate. But the militaries that move first to change how they think about command and about the evolving nature of war will determine which countries win the wars of the future.

MACHINE LEARNING

Unmanned systems in warfare exist on a spectrum, but not all of them are autonomous. At one end are remote-controlled systems: machines piloted or driven continuously by a human operator via a communications link. (Think of a Predator drone operator in Nevada piloting missions over Afghanistan.) Militaries began incorporating remote-controlled systems decades ago: unmanned target aircraft date back to World War I, and guided aerial weapons were operational by World War II. But the modern era of remote control began in 1995, when the Predator first flew reconnaissance missions over Bosnia. By 2015, the U.S. military was operating nearly 11,000 unmanned aerial vehicles, up from 90 in 2001; the Pentagon now plans to field more than 300,000.
Today, an estimated 200,000 remote-controlled drones are being launched monthly in Ukraine, alongside unmanned surface vessels that have sunk Russian warships and, in one case, shot down fighter jets over the ocean. But none of these systems, however impressive, are autonomous. They depend on a human at the controls. Autonomy begins when that human is no longer required—either because electronic warfare severs a system’s command-and-control link and onboard programming takes over or because the system no longer needs remote piloting to complete the mission.

The autonomous threshold is already being crossed in Ukraine. Unmanned systems fielded by both Kyiv and Moscow increasingly default to onboard programming when jamming severs their communications links, continuing their missions until human control can be restored or the mission is complete. Kyiv and Moscow alike have pushed the envelope on autonomy because electronic warfare and air defenses have become so pervasive in the operational environment. No commander can count on continuous human control. Ukrainian operators now routinely launch systems knowing that their control links will be jammed or spoofed within minutes. Their success depends on how well they have preprogrammed the onboard software that takes control when communications are cut.

In a December 2024 Ukrainian assault on Russian forces near Kharkiv, Ukraine’s 13th National Guard brigade launched what was reported to be the first offensive operation conducted entirely with unmanned systems. Instead of deploying soldiers on the ground, remotely controlled ground vehicles advanced to lay and clear mines and fire on Russian defenses while surveillance, bomber, and suicide drones provided battlefield awareness and air support. The attack destroyed Russian defensive positions and ultimately enabled Ukrainian infantry to advance and seize ground they still hold today.
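The link-loss fallback described here—remote control by default, onboard programming when jamming severs the link, a handback when it is restored—can be sketched as a simple state machine. This is an illustrative sketch only; the class, method, and step names are hypothetical, not drawn from any fielded system.

```python
from enum import Enum, auto

class ControlState(Enum):
    REMOTE = auto()      # operator holds a live control link
    AUTONOMOUS = auto()  # link jammed; preprogrammed plan executes
    COMPLETE = auto()    # onboard plan exhausted

class DroneController:
    """Hypothetical sketch of the jamming-fallback behavior described above."""

    def __init__(self, preprogrammed_steps):
        self.state = ControlState.REMOTE
        self.plan = list(preprogrammed_steps)

    def tick(self, link_up: bool) -> str:
        # With no remaining plan steps, the mission is complete.
        if not self.plan:
            self.state = ControlState.COMPLETE
            return "mission complete"
        # While the link is up, the operator retains control.
        if link_up:
            self.state = ControlState.REMOTE
            return "awaiting operator command"
        # Link severed: fall back to the next preprogrammed step.
        self.state = ControlState.AUTONOMOUS
        return self.plan.pop(0)
```

A run might launch under remote control, lose the link mid-mission, and execute its remaining steps autonomously—exactly the pattern Ukrainian operators now plan around.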
Not a single soldier was exposed during the initial assault—and careful planning and disciplined communications meant that not a single autonomous system was lost to Russian jamming, either. This coordination was impressive. But it was still controlled by humans. Pilots based in separate locations watched shared video feeds and sequenced their actions manually, and the systems did not communicate with each other.

A much more fundamental shift is on the horizon: autonomy from launch. These systems will execute independently from the start of a mission. This is not the autonomy of a cruise missile or drone following a predetermined flight path to a fixed location. Autonomy from launch means systems that adapt their execution within commander-set constraints: coordinating with other elements in a formation, responding to changing conditions, and selecting among authorized actions when disconnected from human control, although humans will monitor their progress and retain the ability to retask or abort as long as communications remain open.

Currently, autonomy from launch exists only in fledgling form. Individual drones equipped with artificial-intelligence-assisted targeting—which can find and strike targets without an operator’s continuous control—number in the thousands among millions of remotely controlled systems. But over time, such machines will not operate as standalone units. Instead, commanders will mass them into formations—air, ground, and maritime systems that include drones, sensors, and targeting elements that direct and coordinate movement and strikes. These formations will execute the commander’s intent and preprogrammed directions even when disconnected.

MECHANICAL WAVE

Militaries worldwide now know that they need to produce many more drones. But they risk missing the deeper point. The advantage in the coming era will not go to the side that assembles the largest fleet of unmanned systems.
It will go to the side that first develops the operational concepts to employ them—and then redesigns command-and-control systems, organizations, and training and operations to match. The technology is arriving. But the big ideas, the concepts, must arrive first.

The autonomous formation—whether it is the equivalent of a platoon-sized or battalion-strength fleet of autonomous systems—will integrate air, ground, and maritime systems with sensors, weapons, mobility, and protection. Not only will such formations be able to execute a commander’s intent at a remove and potentially out of contact, but they will also coordinate with each other at machine speed. This will radically change the traditional timing of battles and enable militaries to identify and exploit fleeting tactical windows faster than adversaries can respond—even if those adversaries have also deployed remotely piloted systems.

Consider the advantages: a military that possesses such synchronized systems—and deploys them carefully and effectively—can compress the time it traditionally takes for staff to detail strike options to commanders, for commanders to deliberate and then issue orders to subordinates, and for subordinates to relay orders to the pilots or drivers that are remotely controlling systems. In conventional high-intensity conflicts—the kind currently being waged in Ukraine—autonomous formations will be able to maintain offensive momentum even when electronic jamming severs communication links.

Autonomous formations will transform irregular warfare (such as counterinsurgency campaigns in the Sahel or Gaza), stability operations (maintaining order in a postconflict environment), and gray-zone competition (such as the maritime pressure Beijing exerts in the South China Sea), too.
In these scenarios, autonomous capabilities will enable far more persistent intelligence, surveillance, and reconnaissance by, for instance, allowing for the monitoring of vast stretches of territory around the clock as sensors automatically detect unusual movements or changes in patterns—tasks that today require rotating shifts of human analysts watching video feeds. Autonomous systems will also enhance force protection and precision strikes by continuously guarding personnel and collapsing the time between identifying an enemy target and striking it.

But especially in high-intensity conventional fights, such as the kind being waged in Ukraine, the compressed decision cycle that autonomy offers will transform how commanders orchestrate operations. Delegation and degraded-communications planning will become increasingly important. Such formations operating inside preset boundaries will maintain offensive momentum even when electronic warfare cuts links across entire sectors. The side that masters this and has enough unmanned systems will win. Any military that tries to retain human control of the tempo of battle, meanwhile, will be at a serious disadvantage.

Militaries will have to decide in advance which choices must remain under human control and which can be delegated to machines—and ensure that autonomous execution aligns with the commander’s intent when actions are faster than individuals can intervene. Ultimately, the winner will not be the side with the most drones but the side that best solves the command-design problem (and still has plenty of unmanned systems). To be sure, humans should retain certain key judgments, including when to escalate, how to engage populations, and whether a strike serves or undermines political objectives. In democracies, in particular, these decisions will need to remain irreducibly human across every type of conflict.
CHANGE OF STATE

To do this, commanders will have to shift their focus from controlling systems in battle to preprogramming them. Instead of relying on pilots or drivers to control individual systems remotely or approving each strike in real time, they will have to translate their intent into terms precise enough for machines to execute—specifying not just what success looks like but also which exact actions are permitted, which are prohibited, and what the system should do when it encounters conditions the commander did not anticipate. They must also determine the conditions that have to be met before autonomous systems can execute kinetic or nonkinetic actions: for instance, a commander might authorize autonomous systems to strike enemy armored vehicles moving within a defined corridor but require human confirmation before engaging any target within 500 meters of a hospital or school. They also must set constraints in the algorithms that guide the systems—including the mission’s geographic boundaries, time limits, targets, prohibited actions, and abort criteria. They must plan for the potential loss of communications with the systems. And after a mission begins, they must monitor its progress and, when communications permit, retask or abort.

In a broad sense, this process resembles the traditional way commanders delegate tasks to trusted subordinates. But the fact that their new subordinates will be software systems operating at much greater speeds and potentially out of contact will require far more rigorous advance planning. Autonomous systems’ targets and the constraints on their actions must be more explicitly designed. And fail-safes must be preprogrammed, not assumed. Speed without adequate governance can, of course, yield errors and unintended escalation.
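The armored-vehicle example above—pre-authorized strikes inside a defined corridor, human confirmation near protected sites, everything else prohibited—amounts to a small rules table that could be encoded in advance. The sketch below is purely illustrative; the type names, fields, and the 500-meter standoff constant mirror the article’s hypothetical, not any real targeting system.

```python
from dataclasses import dataclass

# Standoff distance from the article's hypothetical: targets closer than
# this to a hospital or school require human confirmation.
PROTECTED_STANDOFF_M = 500.0

@dataclass
class Target:
    kind: str                       # e.g. "armored_vehicle"
    in_corridor: bool               # inside the commander-set corridor
    dist_to_protected_site_m: float # distance to nearest hospital/school

def engagement_decision(target: Target) -> str:
    """Return the preprogrammed decision for a detected target."""
    # Only the authorized target class, inside the authorized corridor.
    if target.kind != "armored_vehicle" or not target.in_corridor:
        return "prohibited"
    # Near a protected site, the decision escalates to a human.
    if target.dist_to_protected_site_m < PROTECTED_STANDOFF_M:
        return "require_human_confirmation"
    return "authorized"
```

The point of such a structure is that the hard judgments are made before launch: when the link is jammed, the system is still bounded by rules a commander wrote and is accountable for.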
If one side’s autonomous systems engage targets at machine speed, the adversary’s autonomous defenses may respond in kind, and within minutes, both sides could find themselves in an escalated exchange that no human commander intended or authorized. The legitimacy of a campaign will remain irreducibly human-centric. No algorithm can subjectively determine whether a strike would serve strategic objectives or create more enemies than it takes off the battlefield. Commanders will have to retain control over, and accountability for, these kinds of decisions even as they delegate tactical execution to autonomous systems.

Human command will never disappear. But execution—the sensing, targeting, movement directions, timing, and striking—will shift to algorithmically piloted machines that humans program but do not control moment to moment. The central contest will thus be between forces that treat autonomy as a gadget—more remotely controlled machines doing the same things that conventional weapons and forces did, but faster—and those that treat it as a command-design problem requiring new concepts, new doctrine, new organizational structures, and fundamentally different training and leader education along with revolutionary software and hardware.

THE RISKS OF AUTOPILOT

History repeatedly shows that failing to identify and execute the right big ideas—the right strategy—as the nature of warfare changes exacts terrible costs. The United States tried for 13 years in Vietnam, for example, to win a war of attrition against Vietcong insurgents and North Vietnamese units before it realized in late 1968 that it could not prevail with large-unit search-and-destroy operations and shifted to a counterinsurgency campaign focused on the security of the people. The adjustment came too late, however, as domestic support for continuing the war had already been lost. Forty years later, the U.S.
military took eight years in Afghanistan to develop the kind of comprehensive civil-military counterinsurgency campaign that would be effective. It then took another year to get the inputs right to execute the new strategy. And less than a year later, a drawdown of forces (announced in the same speech detailing the force increase) commenced that was ultimately based more on conditions in Washington than those on the ground in Afghanistan.

Much attention was paid in the U.S. press to the Iraq war’s 2007 “surge,” when President George W. Bush sent nearly 30,000 additional troops to that country. These additional forces proved enormously important in accelerating the implementation of a new approach. But the adjustment that mattered most was the change in strategy from “clear and leave” to “clear, hold, and build,” a shift that included living with the Iraqi people after clearing extremists from their neighborhoods and establishing gated communities with walls, entry control points, and biometric ID cards to keep insurgents out. That and several other “big ideas” helped pull Iraq out of a vicious cycle of Sunni-Shiite civil war and drove violence down by nearly 90 percent within 18 months.

More recently, Israel has executed impressive operations against Iran and the Lebanese militia Hezbollah, dramatically degrading the capabilities of those enemies and helping create the conditions for the overthrow of Bashar al-Assad’s murderous regime in Syria. And now, operating together with U.S. forces, it is further degrading the Iranian military as well as successfully targeting Iran’s top leaders, missile stockpiles, and drone capabilities. But in October 2023, Israeli Prime Minister Benjamin Netanyahu set three main objectives for the country’s operations in Gaza: return the hostages, destroy Hamas militarily, and prevent the militant group from controlling Gazans’ lives.
Although the effort to gain the hostages’ release was a very important success, Israel has struggled to craft the right approach toward the latter two objectives, and the operation continues without a clear path to the desired outcome.

At present, it is far from clear that the United States or its allies in Europe and the Indo-Pacific are developing the concepts for large-scale autonomous operations or even semiautonomous ones that operate with human oversight but default to preprogrammed tasks when communications links are degraded. Failure to get the big ideas right about the autonomous transition would be catastrophic for the U.S. military edge. The Pentagon’s Replicator initiative promised to field thousands of autonomous systems by mid-2025 but delivered only hundreds—and even that program focused on hardware procurement, not the operational concepts for how autonomous formations would actually fight. No joint doctrine for autonomous formations yet exists. No major command has been tasked with developing one. No new unmanned systems force has been established. In essence, the U.S. military is buying more drones without adequately considering how autonomous forces should be structured, coordinated, commanded, and controlled.

This failure could become a crippling deficiency if U.S. or allied forces face a test like Ukraine’s. Ukraine adapted quickly to remotely controlled unmanned systems because it had no choice: it is fighting for its national survival and has short supply chains, a flat organizational culture that rewards initiative, and engineers who work directly alongside combat units on the frontlines. The U.S. military operates under different conditions entirely: it has multiyear procurement cycles, doctrinal review timelines measured in years, and an institutional culture that separates technology development from operational command.
Ukraine updates its drone software every two weeks and its hardware every few weeks; NATO’s doctrine revision cycle takes 15 to 20 months. In a conflict that escalates rapidly, there would be no time to learn on the job.

SNOOZE AND LOSE

The United States is also not yet producing anywhere near the quantity of unmanned systems and networks a conflict like Ukraine’s demands, nor the number of missile interceptors and counterdrone systems needed for larger conflicts such as the ongoing one in the Middle East. But while much attention has been paid to this industrial base deficit, the U.S. military must first develop sound operational and tactical concepts for autonomous warfare before fielding these systems at scale. It must codify these concepts into doctrine that guides future operations. It must redesign organizational structures to execute the new concepts—for instance, by creating dedicated units that are built from the ground up around solving the challenges of human-machine teams rather than adding autonomous systems to organizations previously designed around crewed platforms. It must educate military leaders at all levels on how to command programmed, software-defined subordinates. It must train units to execute autonomous operations, including when communications with systems have been degraded. And it must procure the necessary hardware and software and dramatically scale production, as well as conduct rigorous experiments that feed lessons back into concept development. And the military must do all this before potential adversaries do.

China is already investing heavily in what it calls “intelligentized warfare,” which thoroughly integrates artificial intelligence into command, targeting, and force coordination. The People’s Liberation Army has also published doctrine on attacking adversary AI systems through data corruption, algorithm disruption, and electronic warfare.
Russia, meanwhile, is learning through brutal trial and error in Ukraine, iterating faster than any Western institution (albeit without a coherent doctrinal framework). Neither competitor will wait for the United States to complete its own transformation.

All these projects typically take considerable time. The U.S. military’s procedures for institutional adaptation were designed for a time when platforms lasted decades and doctrine evolved between major wars. Revising the Pentagon’s core military doctrine normally requires a minimum of 15 months; it often takes much longer. (The exception was the 2006 Army and Marine Corps Counterinsurgency Manual, published ten months after its drafting commenced.) Full strategic transformation—from concept to validated, fielded capability—typically takes many years.

This process must be sped up. Ukrainian drone units are already able to update software, tactics, and hardware continuously. Very soon, a technique that works on Monday may be rendered obsolete by Friday. The U.S. military’s procurement system, in particular, cannot keep up with new demands. Doctrinal development can be accelerated by authorizing theater commanders to publish interim operational guidance—provisional concepts that units can test and refine—without waiting for the full joint doctrine publication cycle. Leader education can be reformed by embedding autonomous operations into existing war games and exercises at the services’ staff and war colleges rather than creating separate programs from scratch. And the feedback loop between field experimentation and doctrine can be compressed by stationing concept developers alongside operational units, as Ukraine has done with its drone innovation teams, rather than routing lessons learned through headquarters months after the fact.

The deepest problem is in education. The institutions that train U.S.
and allied military personnel do not yet systematically develop the skills needed to command autonomous systems. They will need to mint a new generation of commanders adept at programming algorithms (or directing programmers) to execute operational objectives; managing the degradation of communications; understanding how autonomous systems behave when sensors fail or circumstances move beyond preprogrammed conditions; and treating software engineers, data scientists, and electronic warfare specialists as essential staff. Promotion and assignment systems must be redesigned to identify and advance officers who can command using software subordinates, not only those who excel at traditional command and staff work. The military promotes what it values; if it values autonomous competence, it must measure and reward it.

Autonomous systems are already extensively deployed in combat. A future in which they operate as fully fledged formations will soon be at hand. If Washington recognizes the shifts it must make now, autonomous formations will enable a new operational reality: machine-tempo execution in which preprogrammed, carefully bounded algorithms synchronize sensors and weapons into formations that maneuver independently, while commanders retain responsibility for intent, limits, and accountability. If Washington fails to grasp the stakes, it will field increasingly capable unmanned systems (although likely in insufficient numbers) without any of the concepts, doctrine, organizations, and educated leaders needed to employ them effectively. It will have autonomous trinkets instead of autonomous warfare. And it will lose to adversaries who solve the command-design problem first.
David Petraeus is a Partner at the investment firm KKR and Kissinger Fellow at Yale University’s Jackson School. Between 2007 and 2011, he served in top U.S. military roles, including command of the surge in Iraq. Between 2011 and 2012, he was Director of the CIA. He is a co-author of Conflict: The Evolution of Warfare from 1945 to Gaza.

Isaac C. Flanagan is Co-Founder of Zero Line, a nonprofit organization that works with international partners to identify critical needs in Ukraine’s defense sector.

All statements of fact, opinion, or analysis expressed are those of the authors and do not reflect the official positions or views of the U.S. government. Nothing in the contents should be construed as asserting or implying U.S. government authentication of information or endorsement of the authors’ views.