The $0 Supply Chain Hack: Hijacking Microsoft's Setup.exe (And How I Broke Their Bounty Policy)

sudoaman.medium.com · Aman Kumar (ak) · 21 days ago · research
Aman Kumar (ak) · ~8 min read · March 22, 2026

Scanners see a "404 Error." I see a broken chain of trust. Here is the deep technical analysis of how I hijacked a legitimate Microsoft download endpoint, and why I walked away with $0 to prove an ethical point.

Hello everyone, Aman Kumar (ak) here.

There is a fundamental misunderstanding in our industry about what constitutes "Advanced Hacking." We are sold a narrative that compromising trillion-dollar infrastructure requires zero-day exploits, kernel-level bypasses, or nation-state resources. It is a lie. The most dangerous vulnerabilities in modern cloud infrastructure are not in Code Complexity; they are in Process Complexity. In large organizations, resources are created, used, and deleted every day. But the pointers to those resources (the code, the documentation, the scripts) often outlive the resources themselves.

In December 2025, I demonstrated this by identifying a Critical Supply Chain Vulnerability in a core Microsoft financial application. By exploiting a "Dangling Resource," I effectively became the distributor of their official setup.exe installer. Microsoft's engineering team fixed it rapidly. But the subsequent bounty adjudication process revealed a massive loophole in how corporate policies classify modern cloud risks.

This isn't just a bug report. This is a masterclass in cloud architecture, human error, and knowing exactly where to draw the ethical line.
Disclaimer: The vulnerabilities discussed in this article were reported to the Microsoft Security Response Center (MSRC) under the Coordinated Vulnerability Disclosure (CVD) program. All issues have been remediated, and I have been authorized to disclose these findings.

Part 1: The Psychology of "Dangling Trust" (The Business Card Analogy)

To understand how this happened, we must step away from code for a second and look at Human Logic. Supply chain attacks are dangerous because they exploit Trust, not just software.

Imagine the CEO of a massive bank. Let's call him "Ansh." Ansh prints 10,000 premium Business Cards. On each card, he prints his direct private line: +91-13370-01337. He hands these cards to his most important partners and tells them: "If you need money, call this number. It is me."

Two years later, Ansh switches carriers. He cancels the +91-13370-01337 contract. The telecom company releases the number back into the public pool. But Ansh forgot to collect the 10,000 Business Cards. The partners still have the card. They still trust the number. The trust exists, but the endpoint is gone.

Now, I come along. I ask the telecom company: "Is +91-13370-01337 available?" They say: "Yes, nobody owns it." I buy it for ₹500. Now, when the partners call that number, they think they are talking to Ansh. They are talking to Me. I don't need to hack them. I don't need to phish them. They called me.

In this vulnerability:

The Business Card = the Microsoft application's source code.
The Phone Number = the Azure Storage Account.
The Partner = the internal Microsoft employee downloading the tool.
Me = the researcher who claimed the deleted account.

Part 2: The Discovery (The Failure of Automation)

I was conducting research on a Microsoft-owned subdomain associated with internal financial planning tools (PJMUI). Standard reconnaissance methodology involves running automated scanners (Nuclei, ZAP, Burp Pro).
If you run a scanner against this target, it will crawl the site, check the links, receive a 404 Not Found or NXDOMAIN error on the download link, and mark it as "Dead." Automation sees a Dead End. A Researcher sees a Ghost.

I prefer the Donkey Work. I opened the browser DevTools, navigated to the Sources tab, and began a manual review of the static assets. Specifically, I was analyzing the minified JavaScript bundles (main.bundle.js).

Fig 1: The "dangling pointer" buried deep in the production JavaScript bundle.

I wasn't looking for syntax errors. I was looking for hardcoded dependencies. Developers often hardcode storage URLs for assets like images, PDFs, or installers to offload traffic from the main server. And there it was:

```javascript
// Hardcoded dependency in the production bundle
var InstallerLinkForEditFinancialPlan = "https://aurorasapaoinstaller.blob.core.windows.net/sapaoinstaller/setup.exe";
```

The Architectural Insight: The variable InstallerLinkForEditFinancialPlan defines the source of truth for the application's installer. It points to an Azure Blob Storage account named aurorasapaoinstaller. I immediately queried the DNS records for this endpoint:

```shell
$ host aurorasapaoinstaller.blob.core.windows.net
Host aurorasapaoinstaller.blob.core.windows.net not found: 3(NXDOMAIN)
```

Fig 2: The digital ghost. NXDOMAIN confirms the storage account was deleted, but the code still pointed to it.

NXDOMAIN. The DNS record was gone. This meant the Azure Storage Account had been deleted. However, the application code was still live in production, directing users to this location. The "Business Card" was still in circulation, but the "Phone Number" was disconnected.

Part 3: The Exploitation (Azure Namespace Logic)

Fig 3: Claiming the endpoint. Azure allowed re-registering the globally unique name instantly for pennies.

This is where deep knowledge of Azure internals is required.
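Before going deeper into those internals, note that the manual workflow so far (pull hardcoded blob URLs out of a bundle, split each into the names Azure uses, resolve the account's DNS name) can be scripted. The sketch below is mine, not from the PJMUI app: the helper names and the inline `bundle` snippet are illustrative, and only the URL itself comes from the bundle shown above.

```python
import re
import socket
from urllib.parse import urlparse

# Azure storage account names are 3-24 lowercase letters/digits
BLOB_URL_RE = re.compile(r'https://([a-z0-9]{3,24})\.blob\.core\.windows\.net/[^\s"\']+')

def extract_blob_urls(bundle_text: str) -> list[str]:
    """Find hardcoded Azure Blob Storage URLs in JS bundle text."""
    return [m.group(0) for m in BLOB_URL_RE.finditer(bundle_text)]

def split_blob_url(url: str) -> tuple[str, str, str]:
    """Return (storage_account, container, blob_path) for a blob URL."""
    parsed = urlparse(url)
    account = parsed.netloc.split(".")[0]
    container, _, blob = parsed.path.lstrip("/").partition("/")
    return account, container, blob

def is_dangling(account: str) -> bool:
    """True if <account>.blob.core.windows.net no longer resolves (NXDOMAIN).

    Needs network access; a dangling result only means the name MAY be
    claimable -- Azure's availability check is the final authority.
    """
    try:
        socket.gethostbyname(f"{account}.blob.core.windows.net")
        return False
    except socket.gaierror:
        return True

# Illustrative bundle snippet containing the dangling pointer from the article
bundle = ('var InstallerLinkForEditFinancialPlan = '
          '"https://aurorasapaoinstaller.blob.core.windows.net/sapaoinstaller/setup.exe";')

for url in extract_blob_urls(bundle):
    account, container, blob = split_blob_url(url)
    print(f"account={account} container={container} blob={blob}")
```

The split matters for the next step: a takeover only works if the attacker recreates the exact account name, container name, and blob path that the hardcoded URL expects.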
Unlike AWS S3 bucket names (which share a global namespace but have more complex claiming rules), Azure Storage Account names are Global and Instant. When a Microsoft engineer deleted the aurorasapaoinstaller account, that name was immediately released back into the global public pool. It didn't become "Reserved." It became "Available."

The Attack Path:

1. Acquisition: I logged into my own Azure tenant.
2. Registration: I attempted to create a new Storage Account with the exact name: aurorasapaoinstaller.
3. Confirmation: Azure validated the name as available. I registered it. Cost: ~₹500.
4. Replication: The code expected a specific path: /sapaoinstaller/setup.exe.
5. Deployment: I created the container sapaoinstaller, set the access level to Blob (Public Read), and uploaded a benign text file renamed as setup.exe.

The Result: Instantly, the DNS propagated. The URL https://aurorasapaoinstaller.blob.core.windows.net/sapaoinstaller/setup.exe began resolving to my Azure account. The broken link in the Microsoft application turned green. Any user clicking "Download" was no longer receiving an error. They were receiving my file.

Part 4: The Impact Analysis (Why This Matters)

Fig 4: The kill chain complete. The browser trusts the Microsoft domain and downloads the attacker-controlled file.

When I reported this to MSRC, I didn't just say "I can host a file." I explained the Business Risk.

1. The Trust Boundary Violation

The victim is likely an internal employee or a B2B partner. They are logged into a trusted Microsoft domain (.azurefd.net). They see a download link. The URL is a valid *.blob.core.windows.net domain (which is trusted by corporate firewalls). There is zero friction. The trust is absolute.

2. The Malware Chain (RCE)

If I were a malicious actor, I would not host a text file. I would host a malicious binary disguised as the installer.

Scenario: The Finance Manager downloads setup.exe.
Execution: They run it to install the tool.
Payload: The installer executes a Reverse Shell or Cobalt Strike Beacon in the background while showing a fake "Installation Complete" bar.
Result: I have established a foothold inside Microsoft's corporate network via a legitimate employee's machine.

3. Supply Chain Persistence

Because this logic is hardcoded in the JavaScript bundle, the attack persists until the developers push a new code deployment. I could have held this link for months, harvesting credentials or infecting machines silently.

Part 5: The Fix & MSRC Response

I submitted the report with a Proof of Concept hosting a benign file. Ethical hacking means verifying the vulnerability, not exploiting the victim.

Triage: MSRC triaged the report within 48 hours.
Verification: They confirmed the behavior and the dangling pointer.
Remediation: Microsoft reclaimed the storage account name (locking it to their internal tenant) and pushed a patch to the PJMUI application to remove the dead code.

MSRC verified the bug, patched it, and awarded 30 points toward the MSRC Researcher Recognition Program. Then it went to the finance team for bounty adjudication.

Part 6: The Adjudication Reality (The Ethical Catch-22)

This is where the story shifts from a technical write-up to a reality check on the bug bounty industry. For this Critical Supply Chain compromise, the final cash bounty awarded was exactly $0.

At first, the automated system downgraded the impact to mere "Spoofing" and closed it as Out of Scope. I pushed back immediately: calling a hijacked distribution pipeline "Spoofing" severely underrepresents the real-world risk of supply chain poisoning. My MSRC Case Manager (who was incredibly professional and transparent throughout this process) went to bat for me internally. After weeks of review, the truth came out: it wasn't about the security impact. It was a massive loophole in their new "Standard Award Policy."
The finance team's argument: even though I hijacked a Microsoft cloud asset to distribute the file, the malicious setup.exe executes on the End User's local machine, not on the cloud server. Under the new matrix, they only pay for exploits that prove direct "Cloud Impact."

The Catch-22: to satisfy their requirement and get paid, MSRC stated I would need to provide proof that the malicious .exe could pivot back and achieve RCE on the cloud application. Think about what that actually means. To prove "Cloud Impact," I would have had to weaponize the payload, actively target a Microsoft employee or customer, steal their session tokens or credentials via the local machine, and pivot back into the Azure backend. Doing that directly violates their Rules of Engagement (RoE).

I refused to cross that ethical boundary. As professional researchers, we verify the vulnerability; we do not exploit the victim. I explained this conflict directly to MSRC: you cannot require a researcher to prove downstream cloud impact from client-side execution without asking them to break the rules.

The Big Win: MSRC acknowledged the paradox. They admitted that their new payout policy directly conflicts with their ethical boundaries, forcing researchers to choose between proving impact for a payout and violating the RoE. They formally took this feedback back to their team to address where their documentation and program requirements are in conflict. We didn't just find a logic flaw in their architecture; we found a logic flaw in their policy.

Conclusion: The "God Mode" Mindset

Do I care about the $0 payout? No. We are professionals. The real value here is the PR, the lesson for the community, and forcing a trillion-dollar company to rethink how it evaluates cloud supply chain risks.

To my fellow hunters and researchers: if you want to operate at the top level, you must understand the corporate machinery just as well as you understand the infrastructure.
You have to know when to fight for the severity, and, more importantly, when to walk away to maintain your ethical standards. Automation will never find these bugs, and corporate matrices will not always reward them correctly. But securing the ecosystem is what actually matters.

Stop relying on the tools. Read the code. Understand the cloud lifecycle. Do the Donkey Work.

Happy Hacking.

Want to learn the logic? I am building LeetSec, a collective for the breakers and defenders. We don't post fluff; we post payloads.

1. Follow the Publication so you don't miss the upcoming guide (and my custom script).
2. Connect: LinkedIn | X (Twitter) | Instagram

(P.S. A professional nod to the MSRC triage team, particularly my case manager. Regardless of the bounty policy dispute, securing a perimeter this vast is a monumental task, and their transparency and willingness to take feedback on internal policies were exemplary. ^^)

#cybersecurity #bug-bounty #cloud-security #supply-chain-security #ethical-hacking