A single AI dev workspace for all your services

superblocks.com
# A Single Dev Workspace for AI Agents. Part 1: The Workspace Harness

Multiple authors · March 10, 2026 · 5 min read

At Superblocks we have separate repos per service, spanning Go, TypeScript, Terraform, React, and more. AI agents and engineers both work well inside a single repo, or a few. But the more services you add, the more the work crosses repo boundaries, and the friction compounds in three places:

**AI agents make locally reasonable decisions that are wrong at the system level.** An agent working in one repo sees only one slice of the platform. It doesn't know how services talk to each other or what contracts exist at the boundaries between them. It wires things together with the wrong protocol, misses constraints the other side depends on, and the bugs only show up in integration, not in the repo where the agent was working.

**Cross-stack development multiplies setup overhead.** Each repo has its own build system, config, and startup process, and those are reasonably well documented. But standing up a cross-service feature means combining all of them: in our case, Makefiles, Helm charts, docker-compose files, and package.json files from different repos, plus figuring out startup order and managing 4-5 terminals. Even with good docs, new engineers spend real effort wiring it all together.

**Cross-repo PRs are invisible to each other.** Tests pass on both sides and CI/CD handles deployment ordering, but the reviewer on service-a still has to find the service-b PR to judge whether the change makes sense as a whole. Feedback on one cascades into the other. The coordination is manual overhead on top of processes that otherwise work fine individually.

These problems compound.
While we could migrate our entire platform to a single polyglot monorepo, and we most likely will eventually, the timeline, cost, and risk of that migration are higher than we have appetite for right now. We came up with a more practical approach that gives us ROI immediately.

## A single workspace repo for orchestration

A single repo called `workspace`, containing zero application code. Just the glue:

```
workspace/
├── README.md            # Getting-started instructions for engineers
├── AGENTS.md            # Workspace architecture context for agents
├── agents/cross-repo.md # Cross-repo architecture context for AI agents
├── docs/*.md            # Data flows, protocols, architectural invariants
├── flake.nix + .envrc   # Nix dev shell — all tooling pinned via Nix and installed automatically via direnv
├── Tiltfile             # Loads shared infra + selected profile
├── justfile             # Command runner — wraps Tilt and provides cross-repo git and command-running sugar
├── repos.yaml           # Single manifest: URLs, local paths, default branches
├── compose/infra.yaml   # Shared infra: Postgres, Redis, Kafka, Jaeger, OTel
├── profiles/            # Each profile defines a common workflow in a Tiltfile
├── repos/               # Default checkouts (.gitignore'd)
│   └── AGENTS.md        # Symlink → agents/cross-repo.md
└── worktrees/           # Per-branch isolated trees (.gitignore'd)
    └── <branch>/
        ├── .profile     # Profile name (read by `just up <branch>`)
        ├── AGENTS.md    # Symlink → agents/cross-repo.md
        └── <repo>/      # git worktree
```

The full workflow:

```shell
git clone git@github.com:your-org/workspace.git ~/dev/workspace
cd ~/dev/workspace   # direnv activates the Nix shell
./setup.sh           # installs Nix + direnv if missing, clones repos
just up feature-x    # starts everything in worktree feature-x
```

A new engineer clones one repo and has a working multi-service environment without stitching together four READMEs. A cloud agent can do the same.

## Configuration

### justfile is a task runner, not a build system

The temptation with a meta-repo is to centralize build logic. Don't.
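In practice, "don't" means thin delegation. Here is a hedged sketch of the pattern; the function and target names are assumptions for illustration, not our actual justfile:

```shell
# Hypothetical sketch of the "task runner, not build system" rule:
# a workspace-level command only forwards to whatever build entry
# point each repo already defines (a Makefile, in this sketch).
build_repo() {
  # The workspace knows *where* a repo lives, never *how* it builds.
  ( cd "repos/$1" && make build )
}
```

If a repo later swaps Make for something else, only that repo's entry point changes; the workspace-level command is untouched.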
Keep the logic in context. We use `just` to define the commands engineers and agents use to interact with our platform. The justfile is the contract between an engineer and our stack: it defines how they interact with our services and abstracts all the build systems and orchestration away.

We might change how we orchestrate these services; the justfile does not change. When a repo changes its build command, only its Makefile changes. Nothing in the workspace breaks. Workflows stay uninterrupted and focused.

### repos.yaml as the single manifest for repositories

`setup.sh`, `just doctor`, and all other cross-repo git commands read from one file. No hardcoded repo lists drifting out of sync:

```yaml
repos:
  - name: service-a
    repo: your-org/service-a
    default_branch: main
  - name: service-b
    repo: your-org/service-b
    default_branch: main
```

### Profiles: one per team or workflow

A profile defines which services to run, how to run them, which repos are needed, and which secrets are required. We use Tilt for orchestrating services, but the workspace was built to abstract Tilt away, so that it is interchangeable.

A profile defines a common workflow for a team of engineers. This might be a long-lived team, like our infrastructure crew, or a short-lived project team with a specific workflow. We are building out profiles with different personas in mind: eventually, our PMs and product folks should be able to clone, set up, and kick off an agent within minutes.
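Because each profile names its repos in one place, a doctor-style check stays trivial. A minimal sketch, assuming profiles list repos inline as `repos: [service-a, service-b, ui]`; the real `just doctor` recipe is not shown here:

```shell
# Hypothetical sketch of a profile-aware doctor check: read the repo
# list from a profile file and verify each checkout exists locally.
profile_repos() {
  # "repos: [service-a, service-b, ui]" -> one repo name per line.
  sed -n 's/^repos: *\[\(.*\)]/\1/p' "$1" | tr ',' '\n' | tr -d ' '
}

doctor() {
  profile_repos "$1" | while read -r name; do
    if [ -d "repos/$name/.git" ]; then
      echo "ok      $name"
    else
      echo "missing $name"
    fi
  done
}
```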
Two files, one for tooling and one for Tilt:

```yaml
# profiles/feature-x.yaml — for humans and `just doctor`
repos: [service-a, service-b, ui]
ports: { service-a: 8080, service-b: 8443, ui: 3000 }
secrets:
  - name: DATABASE_URL
    source: 1password://vaults/dev/items/db-dev
```

```python
# profiles/feature-x.tilt — Starlark, loaded by Tilt
load('../lib/tilt/helpers.tilt', 'go_svc', 'node_svc', 'docker_svc', 'is_native')

node_svc('ui', 'pnpm run dev', deps=['shared', 'schemas'])

# Docker by default; opt into native hot-reload via TILT_NATIVE env var
if is_native('service-b'):
    go_svc('service-b', 'repos/service-b', port=8443)
else:
    docker_svc('service-b', 'repos/service-b', port=8443)

node_svc('service-a', 'pnpm start:dev --filter service-a', port=8080,
         resource_deps=['service-b', 'postgres', 'redis'])
```

Backend engineers opt into hot-reload. Frontend engineers get pre-built Docker images for services they're not touching. Tilt's dependency graph handles startup ordering.

### Why just isn't enough for orchestration

`just` has no health checks, no dependency ordering, no per-service logs. When something crashes, you find out via a broken request. Tilt knows Postgres must be healthy before service-a starts, surfaces which service failed and why, and shows per-service logs at `localhost:10350`. If we decide to drop Tilt, the `just` interface stays the same.

One gotcha: pass `--project-name workspace` to Docker Compose, or it infers the project name from the directory and collides with per-repo compose stacks.

## Worktrees: one branch per feature

One checkout per repo means one branch at a time. Git worktrees give each feature an independent file tree with near-zero copy overhead: git shares the object store, and only the working tree is separate. Each AI agent session gets its own worktree with no cross-branch interference.
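Under our worktree commands this is plain `git worktree` mechanics. A quick demonstration in a throwaway toy repo (not the workspace itself):

```shell
# Toy repo showing that a worktree is a separate file tree backed by
# the same object store -- no second clone, no copied history.
set -eu
tmp=$(mktemp -d) && cd "$tmp"
git init -q main-checkout
cd main-checkout
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "init"

# New branch + independent working tree in one step:
git worktree add ../ENG-123 -b ENG-123

# The worktree's .git is a pointer file, not a full repository;
# objects live once, in the original checkout.
cat ../ENG-123/.git
# prints "gitdir: .../main-checkout/.git/worktrees/ENG-123"
```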
```shell
just worktree create ENG-123 feature-x
# → git worktree add worktrees/ENG-123/<repo> -b ENG-123, per repo
# → writes .profile, symlinks cross-repo AGENTS.md and .cursor/rules/

just up ENG-123
# → reads .profile, starts Tilt from the worktree paths
```

For worktree creation, we built a CLI that lets engineers select which repos they need in a worktree and which profile to use. This saves us from creating worktrees for every repo every time.

## Giving agents cross-repo context

Before: agents see one repo's AGENTS.md. Cross-repo context requires `gh api` calls that often fail. Agents guess about protocols, package locations, and execution paths.

After, opening a worktree in Claude Code or Cursor:

```
worktrees/ENG-123/
├── AGENTS.md        ← symlink → agents/cross-repo.md
│                      (data flows, service boundaries, protocol specs)
├── service-a/
│   └── AGENTS.md    ← repo-specific rules, loaded lazily
└── service-b/
    └── AGENTS.md    ← repo-specific rules, loaded lazily
```

The ancestor walk-up picks up `workspace/AGENTS.md` for tooling context: just commands, the active profile, the port registry. Cursor gets `.cursor/rules/` via symlink; Claude Code and OpenCode get AGENTS.md via walk-up. No provider lock-in, no duplicated context files.

`cross-repo.md` stays narrow on purpose: a routing index and architectural invariants, linking to canonical docs rather than duplicating them. CODEOWNERS enforces review so it doesn't drift.

We also use the workspace to distribute org-wide skills and MCP configurations, so everyone has the same baseline. We .gitignore agent configuration files and write to them during setup, which lets us deduplicate configuration while still letting engineers experiment with their own agent setups.

## Cross-repo PRs

GitHub has no concept of "these three PRs are one feature." We lean on Linear (or Jira) as the aggregation layer and a `just pr` command to link them together.

Name branches after tickets (`ENG-123`).
Linear and Jira auto-link PRs from multiple repos to the same ticket and won't auto-close it until all of them merge. `just pr` wraps `gh pr create` for each repo that is ahead of its default branch, then does a second pass to append sibling PR URLs to each description. Two passes, because the URLs don't exist until after creation.

## What's next

This post covered the workspace harness: repo layout, profiles, worktrees, and the `just` interface that ties them together. It's additive: engineers who don't adopt it work exactly as before, and once repos.yaml and profiles exist, a monorepo migration becomes lower-risk because the topology is already codified.

In Part 2, we go deeper into the AI agent stack: how we structure AGENTS.md files, distribute MCP configurations and skills, and build a dynamic knowledge graph backed by a graph database so agents can query service relationships, data flows, API contracts, and team knowledge. Stay tuned.