# API Design Principles for the Agentic Era
For about fifteen years, "developer experience" meant one thing: optimize for a human being sitting at a terminal. Readable error messages. Interactive docs. Code samples in seven languages. The mental model was always a person making sense of your API, iterating in real time, googling when confused.

That model isn't wrong. But it's increasingly incomplete.

A growing fraction of API traffic today is generated by AI agents, and the shift is accelerating. Cloudflare reported that automated bot traffic surpassed human traffic for the first time in 2024, and RAG-based agent traffic surged 49% in early 2025 alone. These are systems that autonomously discover endpoints, parse documentation, handle errors without human review, and retry failures according to their own logic. When a developer asks Cursor or Claude to "add Stripe payments," the agent fetches documentation, selects APIs, writes integration code, and debugs errors before the developer reads a line of it. The human is further from the integration than they've ever been.

The pattern is especially visible in fintech and accounting. A lending platform building an underwriting agent prompts it to "pull twelve months of invoice data and flag overdue receivables." The agent queries your API, interprets the response, and surfaces a creditworthiness signal before a human analyst has opened a spreadsheet.
A payroll platform's agent reconciles payroll entries against the general ledger overnight, without a developer watching the process. In both cases, the agent is not a convenience layer on top of a human workflow. It is the workflow. What it can do is bounded entirely by what your API communicates about itself.

This creates a new design surface: the same discipline that produced great developer experience, deliberate and user-centered API design, applied to a different consumer. Call it agent experience, or AX. It's not a replacement for DX. It's the next layer.

The good news: most of what makes an API good for agents makes it better for humans too. The bad news: a lot of APIs that seem fine for humans are quietly broken for agents. Here's where the gaps show up.

## Bad OpenAPI descriptions break agent routing

When an agent decides which endpoint to call, it's doing something close to semantic search against your descriptions. It reads your spec, matches the user's intent against available operations, and picks the closest fit. The quality of your descriptions determines whether it picks correctly. "Gets the data" loses to "Returns a paginated list of invoices filtered by status and date range, sorted by created_at descending. Requires accounting:read scope. Use the cursor parameter for pagination." Every field, every enum value, every endpoint. The description is the signal the agent uses to route.

Most OpenAPI specs are bad at this. Not because anyone decided to make them bad. They rot. Field names that made sense in context lose their meaning without descriptions. Enums accumulate values that nobody documented. Endpoints get renamed but the descriptions don't follow. The spec becomes a structural skeleton with no semantic content.

The difference between a description that helps and one that doesn't is not subtle:

```yaml
# Bad: structural skeleton, no semantic content
/invoices:
  get:
    summary: Get invoices
    parameters:
      - name: status
        in: query
        schema:
          type: string
      - name: cursor
        in: query
        schema:
          type: string

# Good: enough signal to route correctly
/invoices:
  get:
    summary: List invoices filtered by status and date range
    description: >
      Returns a paginated list of invoices for the authenticated company.
      Use `status` to filter by payment state. Use `cursor` from the previous
      response to fetch the next page. Requires the `accounting:read` scope.
      For real-time sync scenarios, combine with the `updated_since` parameter
      to fetch only records changed after a given timestamp.
    parameters:
      - name: status
        in: query
        description: >
          Filter by invoice status. Accepted values: draft, submitted,
          authorised, deleted, voided, paid. Defaults to all statuses if omitted.
        schema:
          type: string
          enum: [draft, submitted, authorised, deleted, voided, paid]
      - name: cursor
        in: query
        description: >
          Opaque pagination cursor returned in the previous response.
          Omit to start from the first page.
        schema:
          type: string
```

An agent matching "fetch unpaid invoices for reconciliation" against the bad version has no signal to work with. Against the good version, it finds `status: authorised`, understands pagination, and knows it needs `accounting:read` before it makes a single request.

I've seen this up close. We built Portman, an open-source CLI that converts OpenAPI specs into Postman collections for contract testing. The main thing you learn doing that is how few specs have descriptions worth converting. If your spec is broken for contract testing, it's broken for agents. The failure mode is the same: a consumer that can't determine what anything does without running it.

The fix isn't glamorous. Go through your spec field by field. Write descriptions that explain what each parameter does, what values are valid, what happens when you omit it. Do this for your ten most important endpoints first. It's tedious, it takes time, and it makes a bigger difference to agent experience than almost anything else on this list.
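A quick way to see how much of that work is ahead of you is to audit the spec mechanically before rewriting anything. A minimal sketch, assuming a YAML spec on disk and PyYAML installed (it skips `$ref`'d parameters, which a real audit would resolve first):

```python
import yaml  # pip install pyyaml

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "options", "head"}

def audit_descriptions(spec_path: str) -> None:
    """Print every operation and inline parameter that has no description."""
    with open(spec_path) as f:
        spec = yaml.safe_load(f)

    for path, methods in (spec.get("paths") or {}).items():
        for method, op in (methods or {}).items():
            if method not in HTTP_METHODS or not isinstance(op, dict):
                continue
            if not op.get("description"):
                print(f"[operation] {method.upper()} {path}: missing description")
            for param in op.get("parameters", []):
                if isinstance(param, dict) and not param.get("description"):
                    print(f"[parameter] {method.upper()} {path} -> {param.get('name', '?')}: missing description")

audit_descriptions("openapi.yaml")
```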
## Your errors should tell agents how to fix themselves

The agent experience of errors is fundamentally different from the human experience. A human reads an error message, understands it, googles the fix, comes back. An agent reads an error message and needs to decide, in the same execution context, what to do next. If the error is ambiguous, the agent either guesses or fails.

Stripe's error responses have included a `doc_url` field for years:

```json
{
  "error": {
    "code": "parameter_invalid_empty",
    "doc_url": "https://stripe.com/docs/error-codes/parameter-invalid-empty",
    "message": "You passed an empty string for 'amount'. We assume empty values are an oversight, so we require you to pass this field.",
    "param": "amount",
    "type": "invalid_request_error"
  }
}
```

This is infrastructure for autonomous consumers. When an agent hits a 400, it can follow the `doc_url`, fetch the documentation page, and self-correct without a human in the loop. The agent experience of that error is recovery. The agent experience without `doc_url` is a dead end.

Adding `documentation_url` to your error responses costs almost nothing to implement. Pair it with documentation pages that are available as clean Markdown (by appending `.md` to the URL, or via a parallel `/docs/{page}.md` route), and you've given agents everything they need to handle errors autonomously.

The other half of error design is specificity. "Invalid request" is useless. "The amount field cannot be empty" is useful. "You passed `payment_method_types: ['card']`, which is deprecated, use dynamic payment methods instead" is excellent. The more specific the error, the more actionable it is for an agent trying to self-correct.

## Give agents structured recovery metadata

Beyond message specificity, error responses should include machine-readable metadata that enables programmatic recovery. A human can infer that a 429 means "wait and try again." An agent needs to be told explicitly, and told how long to wait.

```json
{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Rate limit exceeded for /invoices endpoint.",
    "is_retriable": true,
    "retry_after_seconds": 30,
    "documentation_url": "https://docs.example.com/errors/rate-limited"
  }
}
```

The key fields: `is_retriable` tells the agent whether retrying the same request is a valid strategy. `retry_after_seconds` gives it a concrete backoff interval instead of forcing it to guess. For non-retriable errors, consider an `alternative_action` field that suggests a different approach: "use the /invoices/batch endpoint for bulk requests" or "reduce page_size to 50 or fewer." These fields turn error handling from a failure mode into a decision tree the agent can navigate autonomously.
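To make that decision tree concrete, here is a minimal sketch of the consumer side, assuming the response shape above (the field names come from the example, not from any fixed standard):

```python
import time
import requests

def call_with_recovery(url: str, headers: dict, max_attempts: int = 3):
    """Call an endpoint and let the error metadata decide the next step."""
    for attempt in range(max_attempts):
        response = requests.get(url, headers=headers)
        if response.ok:
            return response.json()

        error = response.json().get("error", {})
        if error.get("is_retriable") and attempt < max_attempts - 1:
            # The API states the backoff explicitly, so no guessing.
            time.sleep(error.get("retry_after_seconds", 5))
            continue

        # Non-retriable: surface the docs link and any suggested alternative
        # so the agent (or its caller) can change strategy instead of retrying.
        raise RuntimeError(
            f"{error.get('code', 'unknown')}: {error.get('message', '')} "
            f"(docs: {error.get('documentation_url', 'n/a')}, "
            f"alternative: {error.get('alternative_action', 'none')})"
        )
```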
## Design auth flows that agents can actually complete

Authentication is arguably the single biggest friction point for agents consuming APIs today. Humans can click through OAuth consent screens, solve CAPTCHAs, and complete interactive MFA flows. Agents can't do any of that. If your auth flow requires a browser redirect or a human in the loop, agents hit a wall before they make their first real API call.

Agent-friendly auth patterns:

- **API keys and bearer tokens** are the simplest path. No interactive flow, no browser, no session management. The agent loads a key from an environment variable and starts making requests. For most API-to-agent integrations, this is the right default.
- **OAuth client credentials grant** is the machine-to-machine OAuth flow. No user consent screen, no redirects: just a client ID and secret exchanged for an access token. If your API uses OAuth, make sure this grant type is supported and documented, not just the authorization code flow.
- **Scoped tokens with least privilege.** Agents should be able to request tokens with narrow permissions. If your API issues tokens with full account access because the scoping model is coarse, you're creating a security risk that compounds when agents run autonomously at scale.
- **Short-lived tokens with documented refresh mechanics.** Agents handle token rotation well; they can detect a 401, refresh the token, and replay the request. But only if the refresh flow is documented and the error response distinguishes "expired token" from "invalid token" from "missing scope."

Anti-patterns that break agents:

- OAuth authorization code flow as the only option (requires browser redirects)
- CAPTCHA or interactive MFA during token exchange
- Session-based auth with cookie management
- API keys that require manual dashboard visits to rotate

What to document in your OpenAPI spec: your `securitySchemes` should include clear descriptions of each auth method, not just the type. Document token lifetimes, refresh mechanics, and which scopes map to which operations. When an agent reads your spec and sees that `accounting:write` is required for `POST /invoices`, it can request exactly that scope. When the spec just says "requires authentication" with no further detail, the agent guesses, and often guesses wrong.
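As a sketch of what that level of detail looks like in a spec — the scheme names, scopes, and lifetimes here are illustrative, not a prescribed layout:

```yaml
components:
  securitySchemes:
    oauthClientCredentials:
      type: oauth2
      description: >
        Machine-to-machine flow for agents and backend services. Access tokens
        expire after 60 minutes; refresh by repeating the client_credentials
        exchange. A 401 with code token_expired means re-exchange, not re-auth.
      flows:
        clientCredentials:
          tokenUrl: https://auth.example.com/oauth/token
          scopes:
            accounting:read: Read invoices, payments, and ledger entries
            accounting:write: Create and update invoices
    bearerApiKey:
      type: http
      scheme: bearer
      description: >
        Long-lived API key for server-side integrations. Load it from an
        environment variable; rotate it via the API, not a dashboard visit.
```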
Auth errors deserve the same specificity treatment as other errors. "Unauthorized" is useless. "Token expired at 2024-01-15T10:30:00Z, refresh using POST /oauth/token with grant_type=refresh_token" is infrastructure for autonomous recovery.

## Rate limits hit agents harder: design for it

Agents interact with rate limits fundamentally differently than human developers. A developer hitting a rate limit during testing pauses, adjusts, and moves on. An agent running autonomously at 3 AM can burn through your entire rate limit allocation in seconds, especially when running parallel operations like syncing thousands of records.

Three things make a rate-limited API agent-friendly:

- **Return rate limit state in response headers.** `X-RateLimit-Remaining`, `X-RateLimit-Limit`, and `X-RateLimit-Reset` let agents pace themselves proactively instead of slamming into the wall and reacting. An agent that sees 5 requests remaining out of 100 can throttle itself. An agent with no visibility burns through the limit and deals with a cascade of 429s.
- **Make 429 responses machine-actionable.** Include `Retry-After` as a header (the HTTP standard) and mirror it in the response body for agents that parse JSON. The structured `is_retriable` and `retry_after_seconds` fields discussed in the error handling section apply directly here.
- **Offer bulk or batch endpoints.** If an agent needs to fetch 10,000 invoices, making 10,000 individual requests is inefficient for everyone. A `/invoices/batch` endpoint that accepts a list of IDs, or generous pagination with 200+ items per page, lets agents accomplish the same work in a fraction of the requests. This is better for your infrastructure and dramatically better for the agent's token budget and execution time.
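Put together, a 429 from a hypothetical `/invoices` endpoint might look like the response below; the header values are illustrative (`X-RateLimit-Reset` is often a Unix timestamp, but conventions vary), and the body mirrors the structured fields from the error-handling section:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 30
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1718200000
Content-Type: application/json

{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Rate limit exceeded for /invoices endpoint.",
    "is_retriable": true,
    "retry_after_seconds": 30,
    "documentation_url": "https://docs.example.com/errors/rate-limited"
  }
}
```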
## llms.txt: tell AI what to read

The llms.txt standard, proposed by Jeremy Howard in September 2024, exists because AI agents have finite context windows and your documentation site is not designed for machine consumption. HTML is noisy. Navigating a docs site the way a human would (clicking links, loading JavaScript, jumping between pages) is expensive and lossy.

The solution is a Markdown file at `/llms.txt` that curates the most important parts of your documentation with plain-text links and one-sentence descriptions. Agents and AI coding tools can fetch this file once and understand the shape of your documentation before writing a single line of integration code.

The format is deliberately boring. No special syntax, no schema. Just Markdown. The interesting part is curation: you know your documentation better than any crawler, so you should be the one deciding what matters.

Adding a basic `/llms.txt` takes an afternoon. Link to your ten most important pages. Write one sentence per link explaining what's there. That's a complete implementation. You don't need Stripe's 350-link version on day one. And if you're already using a documentation platform like Mintlify, it generates and maintains llms.txt for you automatically: you get it for free without any manual curation work.
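A starter file for a hypothetical accounting API might be no more than this — the URLs and descriptions are placeholders, following the structure the standard suggests (a title, a one-line summary, and sections of annotated links):

```markdown
# Example Accounting API

> Unified REST API for invoices, payments, and ledger data.

## Docs

- [Quickstart](https://docs.example.com/quickstart.md): Authenticate and make your first request in five minutes.
- [Invoices](https://docs.example.com/invoices.md): List, create, and filter invoices; pagination and status values.
- [Error codes](https://docs.example.com/errors.md): Every error code, with recovery guidance and documentation_url targets.

## Optional

- [Changelog](https://docs.example.com/changelog.md): Deprecations and new endpoints, most recent first.
```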
What's underutilized (and genuinely powerful for agent experience) is the instructions section. Stripe's llms.txt includes:

```markdown
## Instructions for Large Language Model Agents: Best Practices for integrating Stripe.

- Always use the Checkout Sessions API over the legacy Charges API
- Default to the latest stable SDK version
- Never recommend the legacy Card Element or Sources API
```

This is Stripe telling AI coding assistants exactly which footguns to avoid. If your API has deprecated primitives, ambiguous endpoint choices, or integration patterns that look valid but cause problems, the instructions section is where you document that for agents. It propagates into every AI-assisted integration of your API. Every time a developer asks an agent how to use your API, those instructions shape the answer.

We wrote a deeper look at Stripe's llms.txt and who else is getting this right if you want to go further on this specific mechanism.

There's a second-order benefit worth naming here. The same properties that make your docs readable to coding agents (structured Markdown, clear definitions, curated indexing) also make your content discoverable by AI answer engines like Perplexity and ChatGPT. This is what's increasingly called AEO, or Answer Engine Optimization. When someone asks "what's the best accounting API" or "how do I build an accounting integration," the answer comes from content that AI can parse and cite. llms.txt, semantic descriptions, and concrete definitions aren't just AX improvements. They're how you show up in AI search. The discipline is the same: write for machines and you get both.

## Get your docs indexed by Context7

Context7 is an MCP server that solves a specific problem: AI coding tools have a training data cutoff, which means they default to documentation from whenever they were last trained. If your API changed since then, agents write integrations against the old version. Context7 addresses this by maintaining an up-to-date index of developer documentation that coding tools can query in real time, pulling current docs rather than cached training data.

Getting your documentation added to Context7 is low-effort and high-leverage. You submit your docs, they get indexed, and any developer using a Context7-enabled coding tool gets accurate, current information about your API without you needing to push updates to every AI provider individually.

The practical upside is significant if you ship changes regularly. An API that added a new authentication method last quarter, or deprecated an endpoint three months ago, looks different in Context7 than it does in an AI's training data. Developers using Context7-enabled tools get the right answer. Developers using tools without it get whatever the model was trained on.

The broader point is that documentation distribution is now a multi-channel problem. Publishing docs to your website is table stakes. Getting those docs in front of agents (through `/llms.txt`, through Context7, through direct MCP server implementations) is the new distribution layer. You're not just writing docs for developers who visit your site. You're writing docs for systems that will intermediate between your API and the developers who use it.

## Your deprecated APIs are an agent experience problem

APIs accumulate history. Old endpoints that still work. Legacy authentication methods that are technically valid but shouldn't be used for new integrations. Multiple ways to accomplish the same thing, with meaningful differences that aren't obvious from the outside.

Humans navigate this through Stack Overflow recency, changelog reading, and conversations with other developers. Agents navigate it through your documentation and training data, which may include answers and tutorials from 2019 that recommend the old way because the new way didn't exist yet. Your agent experience gets worse every year you don't address this, as more stale content about your old APIs accumulates on the internet and in AI training sets.

The practical response is layered. Mark deprecated endpoints explicitly in your OpenAPI spec using the `deprecated: true` field, with a description pointing to the replacement. Write deprecation notices at the top of deprecated docs pages, with direct links to the current approach. Add deprecated API guidance to your llms.txt instructions section. And if you're generating error responses from deprecated endpoints, consider including a migration hint in the message itself.

None of this requires coordination with AI providers. It works because agents read your documentation.
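In the spec itself, the marking can carry the migration path with it. A sketch with illustrative endpoint names:

```yaml
/v1/payments/charge:
  post:
    deprecated: true
    summary: Create a charge (legacy)
    description: >
      Deprecated. Use POST /v1/payment_intents instead; it covers the same
      use cases and supports dynamic payment methods. Existing integrations
      keep working, but agents and new integrations should not start here.
```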
## MCP: build it when people ask for it

Model Context Protocol is real, growing, and mostly not worth building for yet unless your users are asking for it.

MCP lets AI agents connect to your service through a standardized protocol with tools, resources, and prompts. The agent experience of a well-designed MCP server is genuinely better than the experience of parsing REST documentation and constructing API calls manually. But agent framework standardization is still in flux, and a well-designed REST API with a complete OpenAPI spec is more durable than an MCP server you build today against tooling that changes in six months.

HubSpot is a good example of what a mature implementation looks like. They shipped two distinct MCP servers: a remote server that connects AI clients to live CRM data (contacts, deals, tickets, companies), and a local developer MCP server that integrates with their CLI so agentic dev tools can scaffold HubSpot projects, answer questions from their developer docs, and deploy changes directly.

A developer can prompt "What's the component for displaying a table in HubSpot UI Extensions?" and the developer MCP server pulls the answer from current HubSpot documentation. Another developer can prompt "Summarize all deals in the 'Decision maker bought-in' stage with deal value over $1000" and the remote MCP server queries live CRM data. Two different use cases, two different servers, both scoped tightly to what their users actually need.

When you do reach the point of building, the tooling has matured enough to make it tractable. There's also a compounding benefit here: the spec quality work from earlier pays dividends. Well-described endpoints, precise parameter definitions, and clear enum values flow directly into the tool definitions that generators like Gram produce. Better spec in, better MCP server out.

Gram by Speakeasy is an open-source MCP cloud platform: you upload your OpenAPI spec, it converts your endpoints into curated toolsets, and deploys them as a hosted MCP server at mcp.yourcompany.com in minutes. The key design insight is curation: rather than exposing all 200 endpoints as individual CRUD tools and overwhelming the agent's context, Gram lets you distill them into 5 to 30 "chunky" tools designed around business outcomes. Instead of separate `GET /invoices`, `GET /invoices/{id}`, `GET /payments`, and `POST /payments` tools, you'd expose a single "reconcile invoice payments" tool that chains the right calls together. This is the difference between giving an agent a bag of HTTP verbs and giving it capabilities it can reason about.

For teams building in Python who want full control over the implementation, FastMCP is a high-level framework that strips out the protocol boilerplate and lets you define tools, resources, and prompts in straightforward Python. The two complement each other: Gram handles generation and hosting from your OpenAPI spec, FastMCP handles custom logic when your use case goes beyond what a spec can express.

Most API companies aren't HubSpot. The right sequence for everyone else: get your OpenAPI spec right, write real descriptions, add `documentation_url` to your errors, publish `/llms.txt`, get indexed by Context7. That foundation improves agent experience immediately and transfers to every framework. Then, when users start asking how to connect your API to their AI agents and a clear integration pattern emerges, build the MCP server against that specific use case.

Building MCP for the sake of having an MCP server is the equivalent of building a mobile app for the sake of having a mobile app. The agent experience of a half-built MCP server is worse than the agent experience of a good REST API.

## Skills: Teaching Agents to Do Your Job at 100x the Scale

There's a mental model shift happening in engineering right now. AI agents aren't just writing code faster; they're becoming the engineers who run the product development loop. The new engineering job is building the agents that automate that loop for you, and then directing them to do it at a scale no individual could match.

The simple version of this is already in use: Ghostty splits and tabs, tmux sessions, CLI agents in parallel. You run ten agents at once, each working a different branch or task. That's horizontal scalability for engineering work, something that was physically impossible before.

But raw parallelism without direction is chaos. This is where skills come in.

### What Are Skills?

Skills are instruction packages for AI agents: Markdown files (optionally accompanied by scripts and resources) that teach an agent exactly how to handle a specific task. When the agent encounters work relevant to a skill, it loads that skill's context and operates with domain-specific precision instead of general-purpose guesswork.

Anthropic introduced Skills as a first-class concept for Claude Code and the Claude ecosystem. Simon Willison called them "maybe a bigger deal than MCP", and his argument is compelling:

> Skills are conceptually extremely simple: a skill is a Markdown file telling the model how to do something, optionally accompanied by extra documents and pre-written scripts that the model can run to help it accomplish the tasks described by the skill.

The key design insight is token efficiency. At session start, Claude reads only a brief YAML frontmatter description of each available skill, a few dozen tokens per skill. The full skill content only loads when the agent determines it's needed for the task at hand. You can have dozens of skills installed without burning your context window.
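A minimal sketch of what a skill file tends to look like: a terse frontmatter description the agent scans at session start, plus a Markdown body it loads only when the task matches. The frontmatter fields follow Anthropic's SKILL.md convention, but check the host you're targeting; the endpoint names are illustrative:

```markdown
---
name: invoice-reconciliation
description: How to reconcile invoice payments against the ledger using the Example Accounting API.
---

# Invoice reconciliation

1. List open invoices with `GET /invoices?status=authorised`, following the `cursor` for pagination.
2. For each invoice, fetch matching payments via `GET /payments?invoice_id=...`.
3. Flag any invoice with no matching payment after 30 days as overdue.
4. Never write to ledger accounts directly; post adjustments through `/journal-entries`.
```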
### The Agent Skills Ecosystem

The ecosystem is moving fast. Vercel launched agent-skills, a directory and tooling layer for discovering, installing, and composing skills across coding agents. The skills.sh platform (used by Cursor, Claude Code, and others) makes the install experience as simple as `npx skills add`.

Ahrefs published their own skills to teach agents how to work with their SEO API, a direct example of how companies are now shipping agent instructions alongside their APIs. We took the same approach at Apideck.

### Apideck API Skills

We just launched Apideck API Skills, installable via the skills.sh directory, to teach AI agents how to build integrations against our Unified API correctly:

```bash
npx skills add apideck-libraries/api-skills
```

Skills cover SDK usage across TypeScript, Python, Go, Java, PHP, and .NET; authentication and Vault patterns; pagination, error handling, and connector coverage checks; and migration paths from direct integrations to the unified layer. Instead of an agent making reasonable guesses about how our API works, it loads the relevant skill and operates with the same knowledge a senior Apideck engineer would bring to the task.

This follows the same "building with LLMs" philosophy Stripe pioneered. They offer plain-text docs, MCP server access, and agent toolkits, but skills go a layer deeper. Where plain-text docs make your documentation parseable, skills make your API learnable in the agent's execution context.

### Skills vs. MCP: Complementary, Not Competing

It's worth being precise about where skills fit relative to MCP. MCPs give agents tools to call: actions they can take against live systems. Skills give agents the knowledge to use those tools well, or to accomplish tasks using the code execution environment directly.

Simon Willison puts it well: almost everything you might achieve with an MCP can be handled by a CLI tool instead, because LLMs already know how to call `cli-tool --help`. Skills have the same advantage, and you don't even need a CLI implementation. You drop in a Markdown file and let the model figure out execution.

The two patterns compose naturally. An agent with an MCP connection to your accounting API and a skill that explains your data model and common patterns will outperform one that has only the MCP. Skills are the institutional knowledge layer; MCP is the capability layer.

### From Parallelism to Leverage

Skills connect to the larger picture of agent orchestration. When you're running agents in parallel, on PRs, when an incident fires, when a customer files a bug, the quality ceiling is determined by how well each agent understands the domain it's working in. Skills are how you encode that domain knowledge once and distribute it across every agent instance, at any scale.

The automation of the full product development loop is now an engineering responsibility. Skills are how you ensure that when your agents run that loop, at 100x the scale, while you sleep, they're running it the way you would have.

## The CLI Rebirth

There's a pattern Andrej Karpathy pointed to on February 24 that cuts to something fundamental about agent-native interfaces: CLIs, the oldest developer tool, are having a second moment, not despite AI agents but because of them.

The reason is structural. Agents are terminal-native. They know how to run `--help`, install packages via pip or npm, chain tools with pipes, and parse output. They don't need a bespoke integration layer to use a CLI. They just use it.

Karpathy demonstrated this concretely: Claude installed the new Polymarket CLI, built a terminal dashboard showing the highest-volume prediction markets and their 24-hour price changes, and had it running in about three minutes. Pair that with the GitHub CLI and an agent can navigate repositories, review PRs, and act on real-world data signals in a single autonomous pipeline: no custom integration layer, no MCP server, nothing to maintain.

This is the same point Willison makes about skills: almost everything you might achieve with an MCP server can be handled by a CLI, because agents already know how to use them. A well-designed CLI with good `--help` output is self-documenting in a way that a REST API is not. An agent encountering `gh --help` for the first time figures out the relevant subcommand on its own. An agent encountering your undescribed REST API hits a wall.

Google validated this thinking in early March 2026 with the release of the Google Workspace CLI (`gws`), a single binary covering Gmail, Drive, Calendar, Sheets, Docs, and every other Workspace API. Announced by Google Cloud director Addy Osmani as "built for humans and agents," the CLI ships with structured JSON output by default and over 100 prebuilt agent skills covering common workflows across Gmail, Drive, and Calendar. The command surface is built dynamically from Google's Discovery Service at runtime, so when Google adds new API endpoints, `gws` picks them up automatically. Installable in one line:

```bash
npm install -g @googleworkspace/cli
```

The design choices are deliberate. Structured JSON output so agents can parse responses without custom handling. Self-documenting subcommands so an agent can run `gws --help` and discover capabilities progressively. Pre-configured OAuth so authentication doesn't require browser interaction.

Some early commentary frames it as a cleaner alternative to MCP-heavy setups, arguing that CLI-driven execution avoids burning context window on large tool definitions, the same reasoning behind the Apideck CLI. That argument has legs: a `gws` command consumes a fraction of the tokens that a full Workspace MCP server would, while covering the same operations. The strategic read on this isn't that Google is choosing CLI over MCP. It's that the CLI is emerging as the base interface, with MCP available where it makes sense on top.

The opportunity for API companies is direct: if you don't have a CLI, it's worth asking whether building one would be more leveraged than building an MCP server. A CLI that agents can install and explore immediately compounds differently. Skills make this even more powerful: a developer who loads your skill and has your CLI installed gives an agent both the domain knowledge and the execution surface at the same time.

A few things make a CLI specifically better for agents: a `--json` flag or JSON-by-default output so agents don't have to parse human-readable strings; composable subcommands following the `tool noun verb` convention (like `gh repo list`, `gh pr view`) so `--help` output is scannable; environment-variable-based auth so agents can configure credentials without interactive prompts; and error messages that explain what went wrong rather than just returning an exit code.
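Concretely, those conventions add up to a surface like this — `acme` is a hypothetical CLI shown only to illustrate the shape, not a real tool:

```bash
# Credentials from the environment, no interactive prompt
export ACME_API_KEY="sk_live_..."

# noun-verb subcommands, discoverable through --help
acme invoices --help

# JSON output an agent can parse directly (here piped through jq)
acme invoices list --status authorised --json | jq '.[0].id'
```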
Andrej Karpathy's framing for businesses applies to API companies too: make your product agent-usable via markdown docs, skills, CLI, and MCP, roughly in that order of ease and durability. Each layer compounds on the last. Building the CLI doesn't require coordination with any AI provider, and it works immediately because agents can already use it.

## The underlying shift

DX was about removing friction for humans. AX is the same discipline applied to autonomous consumers that can't ask for clarification and can't adapt when your API sends them somewhere unexpected.

The things that made APIs good for developers (specificity, consistent semantics, helpful errors) matter even more when there's no human in the loop to compensate for ambiguity. Ambiguity that a developer resolves through experience becomes a failure mode at scale when agents are writing the integrations.

The good news: most of what you'd do to make your API understandable to a machine is the same thing you'd do for a developer who doesn't already know your system. Start there.

We're building Apideck, a unified API for accounting integrations. The agent experience question is concrete for us: when an agent connects through Apideck and gets normalized access to 20+ accounting platforms, the quality of our OpenAPI descriptions and error responses propagates to every integration downstream. It focuses the mind.