America’s Endangered AI
How Weak Cyberdefenses Threaten U.S. Tech Dominance

Fred Heiding and Chris Inglis
Foreign Affairs, March 11, 2026

At an AI summit in New Delhi, India, February 2026 (Bhawika Chhabra / Reuters)

FRED HEIDING is a Postdoctoral Research Fellow at the Belfer Center’s Defense, Emerging Technology, and Strategy (DETS) program at Harvard Kennedy School. CHRIS INGLIS was the first U.S. National Cyber Director.

In September 2025, the U.S. artificial intelligence company Anthropic announced that Chinese cyber-operators had used its models to target around 30 companies, agencies, and institutions. It was, Anthropic declared, “the first documented case of a large-scale cyberattack executed without substantial human intervention.” This assault on U.S. infrastructure was innovative in its use of cutting-edge American AI models. And although it was the first, it was not the last.
That same year, OpenAI reported attempts by Iranian hackers to use large language models to support phishing campaigns, malware development, and influence operations. Tehran’s efforts have intensified following Washington’s strikes on Iran that began in February 2026. Already, Palo Alto Networks, a U.S. cybersecurity company, has reported the mobilization of more than 60 Iranian-aligned cyber groups, many of which are leveraging AI tools to exploit Western infrastructure. As the fighting continues, U.S. data centers, cloud providers, and financial networks are likely to become targets for cyberattacks. Large AI training clusters and model-weight repositories concentrate enormous economic and strategic value, making them especially attractive targets for foreign intelligence services seeking to steal, sabotage, or replicate advanced AI systems.

Right now, the United States is extremely vulnerable to such attacks. U.S. cyberdefenses are inadequate, yet AI is rapidly being integrated into all aspects of American life, economic activity, and national security. As a result, the potential damage that Beijing, Moscow, and Tehran can cause is growing by the hour.

Washington, in other words, is caught in a double bind. On the one hand, U.S. companies are producing the technology that is powering the AI revolution. On the other hand, the results of this revolution are being stolen by foreign adversaries capable of deploying AI to weaken the United States. Each AI breakthrough thus produces great promise and increasing risk. Defenses are weak, and a strategy for protecting American AI technology is wanting. Washington must quickly awaken to this situation, learn from the recent history of international cyber-responses, and change course.

AN EASY TARGET

For decades, Western firms have struggled to protect their industries from cybertheft and espionage, particularly that perpetrated by Chinese actors.
Under Beijing’s direction, many of the most technologically advanced companies in the United States and in U.S. allied countries have been targeted for their intellectual property and business secrets. For example, in 2004, Canada’s then largest technology company, Nortel Networks, was hacked by Chinese actors who made off with its patents and know-how. Nortel declared bankruptcy in 2009, and the Chinese telecommunications firm Huawei moved into the gap its competitor’s collapse had left. American Superconductor was the target of a similar strategy in 2011. Its wind-turbine control software was stolen by China’s Sinovel Wind Group, causing the U.S. company to lose hundreds of millions of dollars in revenue and stock value. Hundreds of U.S. jobs were cut, and the Chinese firm later sold wind turbines incorporating the stolen software back to the United States.

Researchers at Harvard University and the University of California, Berkeley, have cataloged more than 300 severe incidents of Chinese cyber-espionage and intellectual property theft from 1995 to 2024. In every one of these incidents, the United States failed to secure its assets and critical technologies, including semiconductor manufacturing plans and aerospace designs. Foreign adversaries used these attacks to accelerate their own development, progressing by theft rather than by innovation.

Today, AI systems present an even larger prize for these hackers, and since Silicon Valley has prioritized speed over security, cutting-edge systems have been left particularly vulnerable to cyberattacks and espionage. The progress and deployment of AI depend on a layered infrastructure comprising hardware, data, software, and human expertise, as well as clear assignments of expectations, roles, and responsibilities for the various stakeholders in the overall ecosystem. Compute resources (that is, the resources that allow computers to process data and run applications) form the bedrock of AI.
Specialized chips, such as GPUs and TPUs, work alongside central processing units and other accelerators. Training data, the raw material for language models’ “intelligence,” includes everything from text and images to video, audio, code, and structured data sets. Software frameworks such as TensorFlow and PyTorch help developers build and modify AI-powered tools, including widely used text and image generators. Orchestration tools and operations pipelines ensure that data and models move smoothly through their life cycles.

Each layer of this infrastructure must function reliably, deliver the capabilities it promises, operate as intended under diverse use cases, and be reasonably secure. At each layer, human operators and technical defense systems must be able to anticipate, resist, and rapidly contain external attacks. Modern AI systems struggle across all these dimensions because of their opacity, sensitivity, and unpredictability. Meanwhile, the growing complexity of these systems makes it unlikely they can be secured without deliberate intervention from the U.S. government.

THE BIGGEST PRIZE

Attackers targeting AI infrastructure have three principal objectives. First, they aim to steal or corrupt intellectual property (such as training data or model weights) to strengthen their own industries, or to ransom or degrade the victim’s capabilities. Second, they seek to compromise the tools that use AI systems, disrupting services including fraud detection, logistics, and health diagnostics. Third, they want to undermine users’ trust in whatever systems they strike. Attackers especially want national leaders to conclude that these systems won’t perform as intended during contingencies and crises.

AI model weights are a particular concern. These weights are the intelligence of a model, and they are the product of immense investment, training hours, and hard-won expertise. Stealing the weights would let a foreign actor replicate the model’s intelligence.
Corrupting them would undermine system performance and user confidence. The weights could be stolen through exfiltration via cloud misconfigurations, insider theft, side-channel leaks, API (application programming interface) extraction, or compromised supply chains. Some attack scenarios assume that model weights could be smuggled out through systematic probing with carefully crafted model queries or API calls. Such an attack would be difficult, but it is well within the demonstrated capabilities of Beijing’s cyber forces.

Indeed, multiple dangerous gaps in cyberdefenses have already been found. In 2025, security researchers from the cloud security company Wiz investigated Nvidia—the U.S. company that designs the world’s most widely used AI chips—and discovered a vulnerability that allowed attackers to execute malware directly on one of its servers. Had it been exploited, attackers could have stolen model weights, manipulated outputs, leaked sensitive data, or established deeper network footholds before patches were issued. That same year, researchers from the University of Toronto demonstrated a hardware attack on Nvidia’s AI chips that would distort the calculations used to train AI models on them. If exploited, such an attack would have rendered models inaccurate and unpredictable. Separately, vulnerabilities in widely used open-source software allowed researchers from the U.S.-based cybersecurity firm Horizon3.ai to execute malware on systems running AI models; in the hands of an attacker, the same flaws could have been used to steal the credentials of the organizations operating those systems.

There is no evidence that these weaknesses were exploited before researchers identified them, but they underscore the fragility of today’s AI infrastructure. Open-source software is particularly at risk because its code is freely available: attackers need no infiltration to study it for flaws.
Open-source software libraries underpin much of modern digital infrastructure, and they provide researchers with the code needed to build their own systems. As a result, cyberattacks can infect the popular software used by millions of developers—and they have. Some of the most widespread cyberattacks in history have been undertaken in this way. A single flaw in a popular open-source library can expose every system that depends on it, affecting developers and organizations around the planet.

In addition to sophisticated cyberattacks, AI labs and vendors are prime targets for more traditional forms of coercion. Recent reports from publications including The Times of London suggest that Chinese and Russian intelligence services are blackmailing tech employees and even engaging in honeypot operations, romancing targets to steal their information. The schemes often start with a LinkedIn message or a “chance” meeting at a conference. The resulting relationships sometimes extend even to marriage and children, all while the intelligence asset quietly relays trade secrets back to their home countries.

Trade secrets are also vulnerable to conventional insider theft. In 2024, Linwei Ding, a Google software engineer, was indicted for stealing more than 2,000 pages of AI trade secrets, including details of Google’s proprietary chips and infrastructure, and passing them to Chinese technology companies.

STRENGTHENING DEFENSES

U.S. policymakers must expect adversaries to probe AI systems with the same intensity and success as they have done with other strategic industries. Without improved defenses, clear deterrence policies, and enforceable accountability across AI labs, supply chains, and government agencies, Washington risks ceding to Beijing the technical competitive advantage it has spent trillions of dollars developing. The United States is starting to take some steps in the right direction.
The Trump administration just released a new cyberstrategy focused on deterrence, strengthening public-private collaboration, and coordinating policy across federal agencies. If sustained, the plan would meaningfully strengthen U.S. cyber-resilience. But it is still mostly a blueprint, and its impact will depend on its implementation and execution.

Securing AI infrastructure requires coordinated action focusing on three primary pressure points: chips, wires, and humans.

At the chip level, manufacturers and cloud providers must be able to reliably track export-controlled hardware and the security-critical systems it powers, ensuring that restricted compute remains under authorized control. Companies, including Nvidia, are already experimenting with location verification schemes that monitor where GPUs and other regulated hardware operate. These systems allow regulators and operators to identify, isolate, and shut down hardware that operates outside agreed boundaries. U.S. policymakers must continue incentivizing such schemes by exploring chip-tracking mandates and export-control enforcement. Verified chips should be paired with outbound safeguards at the data center level, ensuring that sensitive model data cannot leave through unauthorized channels or disguised traffic.

At the wire level, model operators must monitor and police data flows to prevent the exfiltration or poisoning of model weights, training data, and sensitive outputs. Research conducted at Harvard and Stanford offers promising ways to prevent AI model weights from being stolen by making outbound monitoring more comprehensive and efficient. Modern AI models operate at a massive scale, handling millions of requests per day and leaving little room for security checks. But defenders can audit small subsets of model interactions by rerunning them on trusted, isolated systems and comparing the results against expected behavior.
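The sampled re-execution audit described above can be sketched in a few lines. This is only an illustration of the idea, not a real deployment interface: the log record format, the `trusted_rerun` callback, and all names are assumptions, and production systems with nondeterministic (sampled) outputs would need a tolerance-aware comparison rather than strict equality.

```python
import random

def audit_interactions(log, trusted_rerun, sample_rate=0.01, seed=0):
    """Re-run a small random subset of logged model interactions on a
    trusted, isolated replica and flag any response that deviates from
    the replica's output. (Illustrative sketch; not a real API.)"""
    rng = random.Random(seed)  # fixed seed keeps the audit sample reproducible
    flagged = []
    for record in log:
        if rng.random() >= sample_rate:
            continue  # audit only a small fraction to keep costs low
        expected = trusted_rerun(record["prompt"])
        if expected != record["response"]:  # deviation suggests tampering
            flagged.append(record)
    return flagged
```

Because manipulation of a serving system tends to be persistent, even a 1 percent sample catches a tampered deployment with high probability after a modest number of requests, which is what makes the audit cheap relative to checking every interaction.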
When a model’s responses deviate from the norm, they should be flagged for further review. Thus, subtle manipulation can be detected at a relatively low cost.

At the human level, institutions must assume that insiders will be targeted through coercion, compromise, and errors. To guard against attacks, organizations should require strict multiparty verification for sensitive data. No single person should be able to access or remove critical model weights. Anthropic, for instance, reportedly requires multiple authorized employees, each holding a separate key or credential, to jointly approve access to its most sensitive model weights. This multiparty control system is expensive to enforce and maintain, but it ensures that no insider can act alone. Policymakers should incentivize these schemes by making them a condition of federal contracts and export-control privileges, and by offering reduced regulatory liability to companies that implement them. This would offset the costs and operational challenges of adopting them.

CIRCLING THE WAGONS

To strengthen U.S. cyberdefenses, Congress should designate AI systems and their supply chains as a formal critical infrastructure sector. This would enable the Cybersecurity and Infrastructure Security Agency to set and enforce baseline security requirements. It would also empower the agency to coordinate incident response, receive mandatory cyber-incident reports from AI labs, and give AI firms priority access to federal resources.

But designation is not enough. Turning legal status into real resilience requires drawing on lessons that foreign governments have already learned, often the hard way, about protecting their assets. This includes deciding who is responsible for what, how industry and government cooperate, and how security objectives can be enforced without stifling innovation. A 2025 Belfer Center study of the history of national cyberstrategies shows that there is no one-size-fits-all blueprint for effective governance.
Successful approaches must be tailored to the unique combination of threats, resource constraints, and political dynamics that distinguish each country. But the study did highlight three approaches, pursued by different governments, each of which has clear relevance to Washington as U.S. officials consider the most effective means of securing AI infrastructure.

First, some nations have successfully embedded industry figures in government planning and response, creating public programs that include private experts. The United Kingdom’s Industry 100 scheme entails having experienced business practitioners advise the British government on cybersecurity issues. Similar programs are needed to translate U.S. private competence into public capacity for governing artificial intelligence, especially as its development continues. Bringing former employees and selected policy spokespeople from AI labs into Washington is not enough. There must be a continuous information exchange between technical employees currently working at AI labs and U.S. government agencies. When appropriate, Washington should even provide security clearances to private employees. It should set up a joint group with experts from AI model labs, cloud providers, chipmakers, and other key sectors. This would build the relationships, information channels, and institutional muscle memory needed to define and implement appropriate security practices, both to contain incidents when they occur and, ideally, to prevent them altogether.

Second, some governments have kept incident response teams institutionally distinct from law enforcement to incentivize early and frequent incident reports. Companies are often reluctant to disclose breaches for fear of triggering criminal investigations or regulatory penalties. But this unwillingness to disclose undermines collective defenses and information sharing, and so the United States must find a way to ease its companies’ fears.
To do so, it might look to Singapore, which has structured its response teams within its Ministry of Digital Development and Information rather than within the security or defense ministries. This allows these teams to focus on prevention and response without facing regulatory pressure. Australia has followed the same approach, giving its Signals Directorate no legal enforcement powers and implementing an “All Hazards Incident Response” protocol that prohibits critical infrastructure companies from being held liable for consumer negligence when responding to incidents. This has lowered the legal and reputational risk of disclosure, encouraging companies to report and collaborate rather than quietly containing attacks to avoid liability. U.S. policymakers should use the same approach to incentivize AI companies to report risks without letting fear of reprisal undermine communication.

Finally, Washington should use more of its authority to coordinate, set baselines, and surge resources during a crisis. Right now, Western technical infrastructure is characterized by decentralization and attendant anonymity, whereas many Asian counterparts emphasize centralized ownership and fully traceable data. This allows the latter set of countries to respond to attacks quickly and decisively. After the 2018 SingHealth breach, for example, Singapore’s Cyber Security Agency coordinated a centralized, cross-government response and surged technical resources across the health-care sector to prevent future attacks. The United States may not want to go quite as far as Asian governments; the Western model is human-centric and innovative. But a combination of the two approaches would be optimal.

WASHINGTON REACHES OUT

A model of soft collaboration could be most suitable for the United States. Under this approach, public officials would be placed in major AI labs, establish shared incident-reporting systems, and coordinate nonsensitive threat data.
This approach would allow the United States to preserve democratic decentralization while capturing the security benefits of coherent oversight and traceability. An independent, empowered, and capable AI Assistance Office could anchor this balance, ensuring that cooperation enhances resilience without crossing into coercion. An AI Litigation Task Force, recently recommended in a White House executive order, would reinforce this architecture by getting federal agencies to work together on AI security enforcement, liability, and incident response rather than at cross-purposes. The emphasis should be on creating a harmonious interaction between semiautonomous institutions while avoiding the perils of centralization, with its false promise of efficiency and effectiveness.

The United States, however, should create a single, binding set of minimum security controls for private-sector frontier AI labs that cover model-weight storage, training pipelines, and insider access, much as nuclear security and core financial-market infrastructure are regulated. Federal procurement and cyber-insurance frameworks should reward demonstrably resilient AI architectures and cultivate a robust ecosystem of AI security startups capable of innovating faster and with deeper domain expertise than the public sector can develop. Finally, Washington must clarify—through legislation and formal executive policy statements—that large-scale AI intellectual property theft constitutes a strategic attack that will trigger economic sanctions, export-control escalation, criminal prosecution, and, if necessary, cyber-countermeasures, including disruption of adversaries’ infrastructure.

AI infrastructure has become the latest U.S. critical infrastructure sector, and it is being built faster than it can be protected.
Rapid experimentation is useful for innovation, which should remain a priority for AI labs and policymakers, but it must be paired with the discipline, standards, and accountability expected of other security-critical domains.

As Michael Beckley has observed in Foreign Affairs, the nature of power has changed. Modern economies no longer hold their wealth in physical deposits that can be seized in ports, factories, or oil fields. Nearly 90 percent of corporate assets in advanced economies are intangibles, such as software, patents, and data. The spoils of modern conquest are therefore largely digital. Looting has accordingly moved online, where it has become rampant.

The United States’ limited response and failed cyber-deterrence have encouraged further aggression, allowing foreign competitors to build their industries on stolen foundations. This failure has weakened U.S. critical infrastructure and eroded long-term competitiveness. The United States has one final chance to correct that mistake, before the winner of the AI race is crowned. If it is to succeed, Washington must take prompt action to build credible defenses for chips, wires, and humans. Otherwise, China will triumph thanks to the United States’ own inventions.