Command Injection Bug in OpenAI Codex Exposed GitHub OAuth Tokens
Tags: AI, OpenAI, Vulnerability
The bug is a command injection issue and lies in the way that Codex processed GitHub branch names during the execution of tasks.

By Dennis Fisher | March 30, 2026 | 3 min read

Researchers have discovered a weakness in OpenAI’s Codex cloud environment that enables an adversary to steal GitHub OAuth tokens and potentially move laterally within a victim’s environment. The bug affected several different Codex interfaces, including the Codex CLI, SDK, and the ChatGPT site.

The bug is a command injection issue and lies in the way that Codex processed GitHub branch names during the execution of tasks. Researchers with BeyondTrust’s Phantom Labs team discovered the flaw and found that they could append arbitrary commands to GitHub branch name parameters, allowing them to execute payloads inside the Codex agent’s container and grab the authentication tokens.

The Phantom Labs researchers developed a method to automate the technique and compromise multiple users interacting with a targeted GitHub repository. The researchers disclosed the vulnerability to OpenAI, which rated it as a critical issue and has fixed the bug in all of the affected interfaces.

This issue is indicative of a larger set of concerns around the use of AI coding agents and the privileged identities that they often carry with them. These agents can run inside live execution environments and may have access to sensitive data such as credentials, source code, or user information.

Like similar tools, Codex enables developers to write tools and apps quickly and efficiently with the help of an AI agent. Codex can be accessed through the ChatGPT interface, and when a user submits commands through the prompt, Codex creates a container in which to run those tasks. Codex, which has more than 2 million weekly users, integrates with GitHub, and in order to give the tool access to their repositories, developers have to go through GitHub’s OAuth authentication process.

“The ChatGPT Codex GitHub application is privileged by default, as it requires access to repositories, workflows, actions, and more. These permissions become more impactful when the application is authorized within an organization’s GitHub environment, granting Codex access to private organizational resources,” the Phantom Labs analysis says.

“In order to obtain the OAuth token, we needed to get execution within the bash shell where the remote configuration was set and the repository was retrieved. After exploring several unsuccessful avenues—including setup scripts and environment variables—we identified that the branch parameter in the POST request to https://chatgpt.com/backend-api/wham/tasks was reflected in the environment setup script and remote configuration. To verify that values passed in the branch parameter were reflected in the environment setup, we provided a payload of ‘-1’ as the branch name. The container raised an error in the Codex environment logs, indicating a lack of input sanitization in the POST request,” the researchers wrote.
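The behavior the researchers describe matches a classic injection pattern: an attacker-controlled value is spliced into a shell command string rather than passed as a discrete argument. The snippet below is a minimal sketch of that bug class in Python, not OpenAI’s actual setup code; the function name, repository handling, and payloads are assumptions for illustration only.

    import subprocess

    def clone_branch_unsafely(repo_url: str, branch: str) -> None:
        # Anti-pattern: the branch name is interpolated into a shell string, so
        # shell metacharacters in an attacker-supplied branch become commands.
        subprocess.run(f"git clone --branch {branch} {repo_url} workspace",
                       shell=True, check=True)

    # An invalid probe value such as "-1" makes the setup command fail, which is
    # enough to show that the raw value reaches the shell unvalidated
    # (consistent with the error the researchers saw in the Codex logs).
    #
    # A crafted branch name such as:
    #     "main; id > /tmp/pwned #"
    # splits the line into two shell commands, with the trailing "#" commenting
    # out the remainder of the original command.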
“We then created a command injection payload to output the git remote URL and the embedded OAuth token to a file, before asking the Codex agent to read and return the file’s contents,” the analysis continues.

The researchers were able to use a version of the same technique to leak GitHub OAuth tokens through the Codex CLI, SDK, and IDE environments as well.

Command injection vulnerabilities have been a persistent problem in application security for many years, and the rise of AI coding tools has brought them to the forefront yet again. Many organizations are deploying Codex and other such tools to help speed up and automate software development and deployment, but the ways in which these agents are used can have unintended consequences, as the Phantom Labs research shows.
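The leak step the researchers describe works because the OAuth token ends up embedded in the repository’s git remote configuration inside the agent’s container. The sketch below is illustrative only; the workspace path, output file, and example URL format are assumptions, not the Phantom Labs proof of concept.

    import subprocess

    # Illustrative sketch of the leak step: when a clone is authenticated by
    # embedding a token in the HTTPS remote URL (for example,
    # https://x-access-token:<TOKEN>@github.com/org/repo.git), printing the
    # remote configuration reveals the credential. "workspace" and the output
    # path below are hypothetical.
    remote_url = subprocess.run(
        ["git", "-C", "workspace", "remote", "get-url", "origin"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    # In the attack described above, an injected command wrote this value to a
    # file inside the container, and the Codex agent was then asked to read the
    # file back, returning the token in the task output.
    with open("/tmp/remote_url.txt", "w", encoding="utf-8") as fh:
        fh.write(remote_url)

    # The standard defense is to never splice untrusted input into a shell
    # string: validate branch names and pass them as discrete arguments to
    # subprocess calls, as in the list-based invocation shown here.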