# Anthropic's Hidden Vercel Competitor "Antspace"

aprilnea.me
2026-03-18 · Anthropic, Claude Code, Firecracker, PaaS, Antspace

What's inside Claude Code Web: an unstripped Go binary, Anthropic's secret deployment platform, and the architecture of an AI-native PaaS.

## The Starting Point

We are building ArcBox, a full-stack platform spanning desktop and cloud, similar in positioning to Railway and E2B. Our core philosophy is local-cloud consistency: a fully open-source ArcBox Desktop that provides sandbox capabilities locally, replacing OrbStack.

Recently, we noticed more and more coding-agent platforms launching web-based entry points, and remarkably, nearly all of them chose Firecracker under the hood. Claude Code is no exception. As practitioners in the same space, curiosity about its runtime environment led to some digging. What began as a casual `strace -p 1` turned into a full reverse-engineering session that uncovered unreleased Anthropic infrastructure, including an entirely undocumented application hosting platform.

Everything described here was discovered through standard Linux tooling (`strace`, `strings`, `objdump`, `go tool objdump`) running inside a Claude Code session. No exploits, no privilege escalation, no network attacks. The binary was sitting right there, unstripped, with full debug symbols.

## Layer 1: It's a Firecracker MicroVM

The first question: what exactly is this environment?

```
$ dmesg | grep FIRECK
ACPI: RSDP 0x00000000000E0000 000024 (v02 FIRECK)
ACPI: XSDT ... (v01 FIRECK FCMVXSDT ... FCAT 20240119)
ACPI: FACP ... (v06 FIRECK FCVMFADT ... FCAT 20240119)
ACPI: DSDT ... (v02 FIRECK FCVMDSDT ... FCAT 20240119)
```

The ACPI tables are signed with OEM ID `FIRECK` and creator ID `FCAT`, both hardcoded in Firecracker's source code. This is the same MicroVM technology that powers AWS Lambda and Fargate.

The specs: 4 vCPUs (Intel Xeon Cascade Lake @ 2.80GHz), 16GB RAM, 252GB disk, Linux 6.18.5.
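The same fingerprint check can be done programmatically. Below is a minimal sketch that parses the standard 36-byte ACPI table header (offsets per the ACPI specification) and looks for Firecracker's hardcoded identifiers; on a real guest you would feed it the bytes of `/sys/firmware/acpi/tables/XSDT` rather than the synthetic header used here:

```go
package main

import (
	"fmt"
	"strings"
)

// acpiHeader holds the identifying fields of a standard 36-byte
// ACPI System Description Table header.
type acpiHeader struct {
	Signature string // offset 0, 4 bytes
	OEMID     string // offset 10, 6 bytes
	CreatorID string // offset 28, 4 bytes
}

// parseACPIHeader extracts the identifying fields from a raw ACPI table.
// Firecracker hardcodes OEM ID "FIRECK" and creator ID "FCAT".
func parseACPIHeader(raw []byte) (acpiHeader, error) {
	if len(raw) < 36 {
		return acpiHeader{}, fmt.Errorf("ACPI header needs 36 bytes, got %d", len(raw))
	}
	return acpiHeader{
		Signature: strings.TrimRight(string(raw[0:4]), " \x00"),
		OEMID:     strings.TrimRight(string(raw[10:16]), " \x00"),
		CreatorID: strings.TrimRight(string(raw[28:32]), " \x00"),
	}, nil
}

func main() {
	// Synthetic XSDT header as a Firecracker guest would expose it.
	raw := make([]byte, 36)
	copy(raw[0:4], "XSDT")
	copy(raw[10:16], "FIRECK")
	copy(raw[28:32], "FCAT")
	h, _ := parseACPIHeader(raw)
	fmt.Println(h.OEMID == "FIRECK" && h.CreatorID == "FCAT") // true on Firecracker
}
```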
No nested virtualization: Firecracker intentionally strips the `vmx`/`svm` flags from guests.

The process tree is absurdly minimal:

```
PID 1: /process_api --firecracker-init --addr 0.0.0.0:2024 ...
└─ PID 517: /usr/local/bin/environment-manager task-run --session cse_...
   └─ PID 532: claude (the CLI itself)
```

No systemd. No sshd. No cron. No logging daemon. PID 1 is a custom binary that acts as both init and a WebSocket API gateway. The kernel command line confirms it:

```
rdinit=/process_api init_on_free=1 -- --firecracker-init reboot=k panic=1 nomodule
```

Running `strace` on PID 1 shows an epoll event loop that periodically checks `/proc/*/children` and `/proc/*/status` to monitor child processes. Essentially a minimal init supervisor, listening on port 2024 (WebSocket API) and port 2025 (secondary endpoint).

## Layer 2: The Unstripped Go Binary

The real discovery was `/usr/local/bin/environment-runner` (symlinked as `environment-manager`):

```
$ file /usr/local/bin/environment-runner
ELF 64-bit LSB executable, x86-64, dynamically linked,
Go BuildID=..., with debug_info, not stripped

$ go version -m /usr/local/bin/environment-runner
go1.25.7
path  github.com/anthropics/anthropic/api-go/environment-manager
mod   github.com/anthropics/anthropic/api-go  (devel)
build -ldflags=-X main.Version=staging-68f0dff496
```

A 27MB Go binary. Not stripped. Full debug info. Full symbol table. Built from Anthropic's private monorepo at `github.com/anthropics/anthropic/api-go/environment-manager/`.
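The `go version -m` output above comes from a build-info blob the Go linker embeds in every binary, and the standard library exposes it directly via `debug/buildinfo`. A short sketch of reading it yourself (the target path is the one from this session; pass any Go binary as an argument):

```go
package main

import (
	"debug/buildinfo"
	"fmt"
	"os"
)

// dumpBuildInfo reproduces the interesting part of `go version -m`:
// module path, main module, and linker flags, which is exactly how
// environment-runner leaked its monorepo path and staging version.
func dumpBuildInfo(path string) error {
	info, err := buildinfo.ReadFile(path)
	if err != nil {
		return err
	}
	fmt.Println("go:  ", info.GoVersion)
	fmt.Println("path:", info.Path)
	fmt.Println("mod: ", info.Main.Path, info.Main.Version)
	for _, s := range info.Settings {
		if s.Key == "-ldflags" {
			fmt.Println("ldflags:", s.Value) // e.g. -X main.Version=staging-...
		}
	}
	return nil
}

func main() {
	target := "/usr/local/bin/environment-runner"
	if len(os.Args) > 1 {
		target = os.Args[1]
	}
	if err := dumpBuildInfo(target); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Note that this metadata survives stripping of the symbol table; only the full debug info and function symbols discussed below are optional extras Anthropic shipped.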
Using `go tool objdump` and `strings`, the complete internal package structure can be extracted:

```
internal/
├── api/              # API client (session ingress, work polling, retry)
├── auth/             # GitHub app token provider
├── claude/           # Claude Code install, upgrade, execution
├── config/           # Session modes (new/resume/resume-cached/setup-only)
├── envtype/
│   ├── anthropic/    # Anthropic-hosted environment
│   └── byoc/         # Bring Your Own Cloud environment
├── gitproxy/         # Git credential proxy server
├── input/            # Stdin parser + secret handling
├── manager/          # Session manager, MCP config, skill extraction
├── mcp/
│   └── servers/
│       ├── codesign/ # Code signing MCP server
│       └── supabase/ # Supabase integration MCP server
├── orchestrator/     # Poll loop, hooks, whoami
├── podmonitor/       # Kubernetes lease manager
├── process/          # Process exec + script runner
├── sandbox/          # Sandbox runtime config
├── session/          # Activity recorder
├── sources/          # Git clone + source classification
├── tunnel/           # WebSocket tunnel + action handlers
│   └── actions/
│       ├── deploy/   # ← THIS IS WHERE IT GETS INTERESTING
│       ├── snapshot/ # File snapshots
│       └── status/   # Status reporting
└── util/             # Git helpers, retry, stream tailer
```

Key dependencies extracted from the binary:

| Dependency | Purpose |
| --- | --- |
| github.com/anthropics/anthropic/api-go | Internal Anthropic Go SDK |
| github.com/gorilla/websocket | WebSocket tunnel to API |
| github.com/mark3labs/mcp-go v0.37.0 | Model Context Protocol |
| github.com/DataDog/datadog-go v5 | Metrics reporting |
| go.opentelemetry.io/otel v1.39.0 | Distributed tracing |
| google.golang.org/grpc v1.79.0 | gRPC (session routing) |
| github.com/spf13/cobra | CLI framework |

## Layer 3: Antspace, Anthropic's Hidden PaaS

Inside the `tunnel/actions/deploy/` package, there are function symbols for two deployment clients.

`VercelClient`, the expected one:

- `CreateDeployment` → `POST /v13/deployments`
- `UploadFile` → `PUT /v2/files` with `x-vercel-digest` header
- `WaitForReady` → poll until `readyState == "READY"`
And then `AntspaceClient`, the unexpected one:

```
deploy.(*AntspaceClient).Deploy
deploy.(*AntspaceClient).createDeployment
deploy.(*AntspaceClient).uploadTarball
deploy.(*AntspaceClient).streamStatus
```

Extracting the associated strings from the binary revealed a complete deployment protocol.

**Phase 1: Create Deployment**

- POST to `antspaceControlPlaneURL`
- `Content-Type: application/json`
- `Authorization: Bearer {antspaceAuthToken}`
- Body: { app name, metadata }

**Phase 2: Upload Build Artifact**

- POST `multipart/form-data`
- File: `dist.tar.gz` (the built application)
- Size limit enforced: `"project exceeds %dMB limit"`

**Phase 3: Stream Deployment Status**

- Response: `application/x-ndjson` (streaming)
- Status progression: packaging → uploading → building → deploying → deployed
- Error: `"Streaming unsupported"` if the client can't handle NDJSON

A search for "Antspace" across the entire public internet turned up nothing: Anthropic's website, GitHub, blog, documentation, LinkedIn, job postings, conference talks, patent filings. Zero results. This platform has never been publicly mentioned anywhere.

The name likely derives from "Ant" (reportedly an internal nickname for Anthropic employees) + "Space" (hosting space), following the same naming pattern as platforms like Heroku or Vercel.

### Antspace vs. Vercel: Architectural Differences

| Aspect | Vercel | Antspace |
| --- | --- | --- |
| File upload | SHA-based dedup, per-file | Single tar.gz archive |
| Build | Remote (Vercel builds it) | Local `npm run build`, upload output |
| Status | Polling-based | Streaming NDJSON |
| Auth | Vercel API token + Team ID | Bearer token + dynamic control plane URL |
| Public API | Yes, documented | No, completely internal |

The fact that Anthropic built a full deployment protocol from scratch, rather than just wrapping Vercel's API, signals a strategic platform investment, not a quick integration.

## Layer 4: Baku, The Web App Builder

"Baku" is the internal codename for the web app builder experience on claude.ai. When you ask Claude on the web to build you a web application, it launches a Baku environment.
From the embedded resources extracted from the binary:

**Project template:**

- Source: `/opt/baku-templates/vite-template`
- Stack: Vite + React + TypeScript
- Auto-managed dev server via supervisord, logs to `/tmp/vite-dev.log`

**Supabase auto-provisioning.** Six MCP tools are automatically available:

- `provision_database`: create a Supabase project on demand
- `execute_query`: run SQL queries
- `apply_migration`: versioned schema changes with automatic type generation
- `list_migrations`: list applied migrations
- `generate_types`: regenerate TypeScript types from the DB schema
- `deploy_function`: deploy Supabase Edge Functions

Environment variables are auto-written to `.env.local`: `SUPABASE_URL`, `SUPABASE_ANON_KEY`, `VITE_SUPABASE_URL`, `VITE_SUPABASE_ANON_KEY`.

**Stop hooks (embedded shell scripts).** The Baku environment has a pre-stop hook that prevents the session from ending if:

- There are uncommitted or unpushed git changes
- The Vite dev server log contains errors
- `tsc --noEmit` reports TypeScript type errors

**Default deploy target: Antspace, not Vercel.** Vercel exists as an alternative, but Baku's native deployment path goes through Anthropic's own platform.

**Internal organization:**

- Drafts stored in `.baku/drafts/`
- Explorations in `.baku/explorations/`
- Git commits use [email protected] as the author
- No git remote configured (local-only version control)

## Layer 5: BYOC (Bring Your Own Cloud)

The `envtype/` package contains two environment implementations:

- `anthropic`: Anthropic-hosted (Firecracker MicroVMs)
- `byoc`: Bring Your Own Cloud

BYOC allows enterprise customers to run environment-runner on their own infrastructure while sessions are orchestrated by Anthropic's API.
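The package layout implies the orchestrator is generic over where the runner executes. Here is a speculative reconstruction of what the `envtype/` abstraction might look like; the interface name and the `anthropic` default mode are assumptions, while `resume-cached` as the BYOC default comes from strings in the binary:

```go
package main

import "fmt"

// Environment is a hypothetical reconstruction of the interface the
// envtype/ package implies: the same poll loop and tunnel code run
// against either implementation.
type Environment interface {
	Name() string
	DefaultSessionMode() string
}

// anthropicEnv models the Anthropic-hosted Firecracker environment.
type anthropicEnv struct{}

func (anthropicEnv) Name() string { return "anthropic" }

// "new" is an assumed default; the config package lists the modes
// new/resume/resume-cached/setup-only without revealing the default.
func (anthropicEnv) DefaultSessionMode() string { return "new" }

// byocEnv models the Bring Your Own Cloud environment.
type byocEnv struct{}

func (byocEnv) Name() string { return "byoc" }

// BYOC defaults to resume-cached per the strings in the binary,
// reusing existing state for the fastest restarts.
func (byocEnv) DefaultSessionMode() string { return "resume-cached" }

func main() {
	for _, env := range []Environment{anthropicEnv{}, byocEnv{}} {
		fmt.Printf("%s → %s\n", env.Name(), env.DefaultSessionMode())
	}
}
```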
Key characteristics:

- Default session mode: `resume-cached` (fastest restarts, reuses existing state)
- Custom auth: `containProvideAuthRoundTripper` injects container-level credentials
- Smart git handling: checks whether the task branch exists on the remote before fetching
- Sub-types: `antspace` (Anthropic internal) and `baku` (Vite project builder)
- Kubernetes integration: the `podmonitor` package implements lease management

The BYOC API surface includes 7 endpoints:

| Endpoint | Purpose |
| --- | --- |
| /v1/environments/whoami | Identity discovery |
| Work polling + ack | Job queue |
| Session context | Configuration retrieval |
| Code signing | Binary verification |
| Worker WebSocket | Real-time tunnel |
| Supabase DB query proxy | Database access relay |

## The Strategic Picture

What we're looking at is a vertically integrated AI application platform:

```
User describes what they want (natural language)
        ↓
Claude generates the application (Baku environment)
        ↓
Supabase database auto-provisioned (MCP tools)
        ↓
Application deployed to Antspace (Anthropic's PaaS)
        ↓
Live application, user never left Anthropic's ecosystem
```

This is not just an AI coding assistant. It's the architecture of an AI-native PaaS where the user's journey from idea to production happens entirely within Anthropic's infrastructure.

The competitive implications are significant. This positions Anthropic against:

- Vercel / Netlify in hosting and deployment
- Replit / Lovable / Bolt in AI app generation
- Supabase / Firebase in managed backends (via tight integration)

But with one structural advantage none of these competitors have: Anthropic owns the entire stack, from the LLM that understands your intent, to the runtime that builds your code, to the platform that hosts your application.
## Methodology

All findings were obtained through standard Linux tools running inside my own Claude Code session:

| Tool | Purpose |
| --- | --- |
| `strace -p 1` | Traced PID 1 system calls |
| `dmesg` | Kernel messages for hypervisor identification |
| `file`, `readelf` | Binary identification |
| `go version -m` | Go module and build info extraction |
| `go tool objdump` | Symbol table and function signature extraction |
| `strings` + `grep` | String literal extraction from the binary |
| `objdump -s -j .rodata` | Raw rodata section extraction |

The binary was not obfuscated, not stripped, and contained full debug information. No decompilation tools were necessary. No exploits were used. No network boundaries were crossed. This was simply reading what was present in my own compute environment.

## Closing Thoughts

Shipping an unstripped binary with full debug symbols to production is... a choice. It made this analysis trivial. What would normally require Ghidra and hours of decompilation was accomplished with `go tool objdump` and `grep`.

Antspace is clearly still in an early or internal stage (the version string is prefixed with `staging-`), but the deployment protocol is mature and production-grade. Whether Anthropic plans to launch this as a public product or keep it as internal infrastructure for Claude's web experience remains to be seen.

What's clear is that Anthropic's ambitions extend far beyond being just an LLM and AI agent company. They're building the infrastructure for a world where applications are spoken into existence, and they want to own every layer of that stack.

All analysis was performed on March 18, 2026, inside a Claude Code Web session running on a Firecracker MicroVM with kernel 6.18.5, environment-runner version staging-68f0dff496.