# Critical ($100k) bug that could allow an attacker to drain the entire pool in under an hour

medium.com · 0 day exploit · 14 days ago · research
0 day exploit · ~10 min read · March 29, 2026 (Updated: March 29, 2026)

While reviewing the Dogecoin path in a light-client implementation, I found a single control-flow decision that materially weakened header validation. The problematic line was:

```rust
skip_pow_verification = true;
```

Once I traced how that assignment propagated through the submission path, the security implication became clear: an attacker could submit AuxPoW-backed headers with attacker-chosen difficulty, establish forged canonical state inside the light client, and use that state to manufacture inclusion proofs for deposits that never occurred on Dogecoin. In a bridge setting, that is not merely a validation inconsistency. It is a direct unauthorized-mint primitive.

What stood out about this finding was how quickly the core issue surfaced. I found it within roughly the first hour of focused review. The bug was not visually loud, but the control flow was weak in exactly the wrong place, and once I mapped the path correctly, the impact was hard to ignore.

## How I found it so fast

I did not brute-force this by slowly reading every file in sequence. I used AI as a mapping tool, not as a substitute for judgment. That distinction matters. AI did not identify the vulnerability, assign severity, or build the exploit on its own. What it did do was compress the code-comprehension phase.
It helped me turn a scattered review problem into a function-to-function reasoning problem much faster than I would have managed by manually rebuilding the full execution graph from scratch every time I hit a new branch. The relevant path was:

- `submit_block_header()`
- `check_aux()`
- `submit_block_header_inner()`
- `check_target()`
- `check_pow()`
- `get_next_work_required()`

That accelerated the slowest part of reverse engineering: identifying where the important checks lived, which paths were mutually exclusive, and where attacker-controlled input stopped being normalized against consensus expectations.

The value was not code generation. The value was compression. It helped me get to the actual security question faster: why does the AuxPoW path stop enforcing the same child-header difficulty rules as the normal path?

Once that question was isolated, the rest was still ordinary security research: read the code, validate the assumption, build the proof, test the exploit path, and keep pushing until the code either disproved the suspicion or confirmed it.

## The issue was not cryptographic failure. It was consensus failure.

The vulnerable logic lived in the Dogecoin submission flow. When a block arrived with AuxPoW data, the contract ran `check_aux()` and then unconditionally set `skip_pow_verification = true`. In practical terms, the code treated the presence of AuxPoW metadata as a reason to disable the standard proof-of-work validation path for the Dogecoin child header.

That is the design flaw. AuxPoW should contribute additional evidence. It should not replace baseline consensus checks. In this implementation, however, it effectively became a mode switch:

- normal block submission: validate the block's difficulty target
- AuxPoW block submission: skip that validation entirely

For a light client, that is a broken trust boundary.
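The mode switch above can be sketched in a few lines. This is a minimal illustration, not the project's real code: the `Header` type, the `accept()` wrapper, and the `expected_bits` value are all hypothetical stand-ins I introduced to show the path-dependent acceptance.

```rust
// Minimal sketch of the path-dependent acceptance described above.
// All names and values here are hypothetical stand-ins, not the real contract.

#[derive(Clone, Copy)]
struct Header {
    bits: u32, // claimed compact difficulty target
}

/// Stand-in for the difficulty the network actually requires at this height.
fn expected_bits() -> u32 {
    0x1a01f3a6 // illustrative value only
}

/// Simplified mirror of the vulnerable control flow.
fn accept(header: Header, has_auxpow: bool) -> bool {
    let mut skip_pow_verification = false;
    if has_auxpow {
        // In the real code, check_aux() runs here -- but it never compares
        // header.bits against consensus expectations...
        skip_pow_verification = true; // ...and then disables the baseline check.
    }
    if !skip_pow_verification {
        // Normal path: the claimed target must match consensus.
        return header.bits == expected_bits();
    }
    true // AuxPoW path: the child header's difficulty is never checked.
}

fn main() {
    let forged = Header { bits: 0x1e0fffff }; // minimum-difficulty bits
    assert!(!accept(forged, false)); // the normal path rejects it
    assert!(accept(forged, true)); // the AuxPoW path accepts the same header
    println!("same header, two verdicts: acceptance is path-dependent");
}
```

The same header gets two different verdicts depending only on whether AuxPoW metadata is attached, which is exactly the broken trust boundary.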
Once canonical header acceptance differs across paths, downstream consumers are no longer anchored to a single definition of chain reality.

## Root cause analysis

The first red flag appeared in `submit_block_header()`:

```rust
let (block_header, aux_data) = header;
let mut skip_pow_verification = skip_pow_verification;

if let Some(ref aux_data) = aux_data {
    self.check_aux(&block_header, aux_data);
    skip_pow_verification = true;
}
```

The second red flag appeared downstream in `submit_block_header_inner()`:

```rust
if !skip_pow_verification {
    self.check_target(block_header, prev_block_header);
    require!(
        U256::from_le_bytes(&pow_hash.0) <= target_from_bits(block_header.bits),
        format!("block should have correct pow")
    );
}
```

`check_target()` is the place where the contract validates the block's `bits` field against the expected Dogecoin difficulty, computed through the network's DigiShield adjustment logic. If that function does not run, the child header's claimed difficulty is never checked against consensus reality.

That reduced the issue to a straightforward question: What happens if an attacker submits an AuxPoW block whose `bits` field is entirely attacker-controlled?

## Practical consequence: forged low-cost blocks the network would never accept

The normal path rejected malformed or non-canonical difficulty. The AuxPoW path did not. That let me submit a Dogecoin block with arbitrary `bits`, including the minimum-difficulty value `0x1e0fffff`.

Because `check_aux()` validates the parent block's proof-of-work against `target_from_bits(block_header.bits)`, the check becomes circular. It does not enforce the expected network difficulty. It enforces proof-of-work relative to the attacker's chosen target.

This completely changes attack cost. Instead of satisfying real Dogecoin difficulty, the attacker can choose an extremely permissive target, mine a parent block against that target, and cause the contract to accept a child header the canonical network would reject.
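A quick back-of-the-envelope calculation shows how dramatic the cost collapse is. This is my own arithmetic, not the project's code: compact `bits` encode a target of `mantissa * 256^(exponent - 3)`, and the expected number of hash attempts to land below a target is roughly `2^256 / target`. The comparison value `0x1a01f3a6` is an illustrative higher-difficulty setting, not a quoted Dogecoin value.

```rust
// Back-of-the-envelope cost estimate for attacker-chosen `bits`.
// The compact encoding and the 2^256 / target work estimate are standard;
// the comparison difficulty value below is illustrative only.

/// Approximate a compact-encoded target as an f64 (adequate for estimates).
fn target_from_bits_approx(bits: u32) -> f64 {
    let exponent = (bits >> 24) as i32;
    let mantissa = (bits & 0x00ff_ffff) as f64;
    mantissa * 256f64.powi(exponent - 3)
}

/// Expected number of hash attempts needed to satisfy the target.
fn expected_hashes(bits: u32) -> f64 {
    2f64.powi(256) / target_from_bits_approx(bits)
}

fn main() {
    // The minimum-difficulty bits used in the exploit.
    let cheap = expected_hashes(0x1e0fffff);
    // An illustrative higher-difficulty value for comparison (hypothetical).
    let real = expected_hashes(0x1a01f3a6);

    println!("forged target:  ~{cheap:.2e} hashes"); // about 2^20 attempts
    println!("network-grade:  ~{real:.2e} hashes");

    assert!(cheap < 2e6); // roughly a million attempts: seconds on a CPU
    assert!(real / cheap > 1e9); // many orders of magnitude more work
}
```

At minimum difficulty the expected work is on the order of 2^20 hash attempts, which is consistent with mining a parent block in about a second on commodity hardware.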
In my testing, this was cheap enough that the parent block could be mined in roughly a second on commodity hardware. At that point, the light client is no longer tracking canonical Dogecoin state. It is tracking attacker-authored state that merely satisfies the weakened local acceptance rules.

## Why the issue escalates from integrity failure to direct fund risk

A description like "difficulty-check bypass" understates the real consequence. The important property is that once a malicious block is accepted as canonical inside the light client, every downstream system that relies on that state inherits the falsehood. In this case, that included deposit verification.

So I pushed beyond the root cause and tested the full exploit chain. I used the real contract stack in a sandbox:

- the Dogecoin light client
- the bridge contract
- the `nbtc` token contract
- the supporting mock chain-signature environment

I then validated the full exploit path:

1. I chose a fake Dogecoin transaction that did not exist on the real chain.
2. I computed a forged Merkle root that would make inclusion verification succeed for that fake transaction.
3. I built a malicious Dogecoin header pointing to the current tip, but with attacker-controlled minimum-difficulty bits.
4. I wrapped it in AuxPoW data that satisfied `check_aux()`.
5. I submitted the forged block.
6. The light client accepted it as canonical.
7. I called the deposit verification flow with the fake transaction proof.
8. The bridge minted assets backed by a deposit that never occurred.

That is where a high-severity validation issue becomes a critical asset-impact issue. Once forged consensus state can be converted into real minted value, the exploit path is no longer theoretical.

## Why a single forged block can imply full-pool exposure

The severity becomes clearer when you model the bridge as a pool of claimable value rather than as a sequence of individual deposits.
If the bridge pool held, for example, 10,000,000 DOGE, an attacker would not need to drain it incrementally. They could forge a block containing a fabricated deposit for the full amount, prove inclusion against the forged Merkle root, and trigger minting in a single execution path.

No real Dogecoin deposit. No legitimate on-chain settlement. No meaningful mining cost. Just a forged block, a fabricated proof, and a bridge that believes what the light client tells it.

This is why "single-transaction drain" is not rhetorical framing here. It is a realistic consequence of trusting forged canonical state.

## Why `check_aux()` did not save the system

This is the portion of the bug that is easy to misread during review. `check_aux()` was not vacuous. It did perform meaningful validation:

- parent block uniqueness
- coinbase transaction Merkle inclusion
- chain-root embedding in the coinbase script
- parent proof-of-work against the target derived from `block_header.bits`

The problem is that none of those checks validated whether the submitted Dogecoin child header matched the difficulty the network should actually require at that height. That missing comparison is everything.

The contract was therefore validating that the attacker had done work relative to a target they selected, rather than the target consensus required. From a security perspective, that is equivalent to attacker-influenced self-certification.

## What made the exploit path reliable

There was no cryptographic break here, and no complex memory-corruption primitive. The exploit was reliable because the system had two different acceptance models for what should have been the same consensus object. One path enforced consensus rules. One path silently weakened them.
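The self-certification loop can be made concrete with a toy model. Everything here is invented for illustration: the "hash" is a std-library stand-in rather than the real proof-of-work function, and the target encoding is simplified, but the circularity is the same as in the vulnerable check, where proof-of-work is verified against a target derived from the attacker's own `bits` field.

```rust
// Toy illustration of the circular check: PoW is validated against a target
// the attacker chose. Hash function and target encoding are stand-ins.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in PoW hash -- NOT scrypt/sha256, just enough to show the loop.
fn toy_pow_hash(bits: u32, nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (bits, nonce).hash(&mut h);
    h.finish()
}

/// Toy compact target: a larger `bits` value means a more permissive target.
fn toy_target(bits: u32) -> u64 {
    (bits as u64) << 32
}

/// Mirrors the circular validation: the check uses the submitted header's
/// own `bits`, not the difficulty consensus would require at this height.
fn check_aux_style(bits: u32, nonce: u64) -> bool {
    toy_pow_hash(bits, nonce) <= toy_target(bits)
}

/// "Mine" a nonce against the attacker's self-chosen target.
fn mine(bits: u32) -> (u64, u64) {
    let target = toy_target(bits);
    let mut attempts = 0u64;
    for nonce in 0u64.. {
        attempts += 1;
        if toy_pow_hash(bits, nonce) <= target {
            return (nonce, attempts);
        }
    }
    unreachable!()
}

fn main() {
    let attacker_bits = u32::MAX; // an extremely permissive self-chosen target
    let (nonce, attempts) = mine(attacker_bits);
    assert!(check_aux_style(attacker_bits, nonce)); // the circular check passes
    println!("accepted after {attempts} attempt(s)");
}
```

Real work was performed, and the check verifies it honestly; the flaw is that the difficulty bar being cleared was never compared against anything external.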
From there, the exploit was simply compositional:

- bypass difficulty validation
- submit forged canonical state
- anchor a fake Merkle root
- prove inclusion for a transaction that never existed
- mint against the lie

This pattern appears frequently in blockchain systems: once the validator's notion of reality can be desynchronized from canonical chain reality, value-moving workflows downstream become unsafe by construction.

## The triage fight became part of the story

The technical finding was only half of the experience. The other half was having to argue that it actually mattered.

I reported the issue through the bug bounty flow on March 3, 2026, with a full proof of concept. During triage, the finding was initially treated as already known, with the open GitHub PR `#116` cited as evidence of prior awareness.

I pushed back hard on that conclusion, because the public timeline did not support it cleanly. PR `#116` had activity in July 2025, then effectively went quiet. After my report, it became active again in March 2026. From the researcher side, that did not look like a team sitting on a clearly understood critical vulnerability. It looked like a dormant hardening PR being used after the fact to minimize researcher credit.

That argument did not stay inside the bounty thread. I also engaged directly in the GitHub pull request discussion and kept pressing the same point: there is a real difference between "an open PR touches related code" and "the team had already identified the exact vulnerability, understood its exploitability, and appreciated its asset impact."

For me, that distinction was not cosmetic. It was the entire issue. A half-finished hardening PR is not the same thing as a demonstrated exploit chain that shows how forged chain state can mint bridge assets with no real L1 deposit behind them.

The position shifted more than once. At one point, triage told me I was correct that the prior-awareness assumption did not hold.
Later, after further investigation, they cited a prior private audit issue and maintained the informative classification. By then, the discussion had already revealed something important about vulnerability handling: even when the bug is real, reproducible, and tied to direct asset impact, the fight is not always about technical truth. Sometimes it is about who gets to define what was "already known."

As of March 18, 2026, PR `#116` was merged into `main`. That is a concrete date in the public record. But the sequence leading there was one of the most instructive parts of the entire experience.

For that reason, this write-up is not just about a bug. It is also about the messy intersection of research, triage, engineering timelines, and how security impact gets reframed once reward decisions and prior-awareness claims enter the room.

## Remediation principles

If I were hardening this path, I would make one rule explicit: AuxPoW checks must be additive. They must never disable the base consensus rules for the child header. In practice, that means:

1. Never set `skip_pow_verification = true` just because AuxPoW data is present.
2. Always validate the Dogecoin child header's `bits` against the expected difficulty.
3. Keep AuxPoW parent validation as an extra requirement, not a replacement.
4. Lock the behavior in with regression tests so this exact bypass cannot come back later.

The implementation bug is small, but the remediation principle is broader: specialized validation paths should only ever strengthen acceptance criteria, never weaken them.

## Closing thought

I started with a suspicious flag assignment and ended with a forged-state exploit path capable of exposing the full bridge pool. That is the uncomfortable lesson here. In bridge systems, the distance between "one validation path skipped a consensus check" and "the attacker can mint against a non-existent deposit" is often much shorter than teams expect. Light clients do not merely parse headers.
They define canonical reality for every contract built on top of them. If that reality can be forged, the rest of the system is operating on borrowed time.

Full report: https://drive.google.com/drive/folders/1WqYAKnAkbeYLeoarvxh7vk0OIEbVFTUK?usp=sharing

My friend also found this issue; their full report: https://github.com/blessingblockchain/dogecoin-auxpow-finding

Thanks for help with the wasm PoC: https://hackenproof.com/hackers/LoopGhost007

## Disclosure note

One important point is worth stating clearly. The fact that PR `#116` has now been merged does not automatically mean public disclosure is authorized under the program terms. The currently published NEAR program page on HackenProof says: do not discuss the program or vulnerabilities outside the program without express consent from the organization, and that no vulnerability disclosure, including partial disclosure, is allowed for the moment.

So if I publish a piece like this publicly, I would want written confirmation that coordinated disclosure is permitted. A merged fix and a right to disclose are not necessarily the same thing.

#security #bug-bounty #hacking #smart-contracts #fund-drain