Building the Most AI-Friendly Developer Resource

It started with a bet. Our DevRel team wanted to see how close AI was to replacing us. So we handed an agent our docs and told it to build a demo app using a few Jupiter APIs. It got there. Eventually. But only with our Mintlify MCP server installed in the IDE, and even then it took a dozen prompts of back-and-forth to reach working code. Without the MCP? The agent wrote integrations against endpoints that didn’t exist, mixed up parameter formats, and produced code that looked right but broke the moment you ran it. That was the wake-up call.

When developers integrate a protocol today, their first move is increasingly to hand the docs to an AI agent and say “figure it out.” If the agent gets it right, the developer has a working integration in minutes. If it gets it wrong, they get plausible but broken code, and may not catch it until production.

Good docs have always mattered, but now they compound. Every improvement in clarity gets amplified across every AI tool that reads our pages. Every gap gets amplified too. There’s a saying in the AI space: if the output is slop, it’s because the input was slop. The quality and richness of what we feed an agent, and whether it’s in a form the agent can actually ingest, determines what comes out the other side. That was on us.

This post is about how we responded: make docs the single source of truth for both humans and AI, and meet AI tools wherever they already work.

Docs Have a New First Reader

The immediate instinct is to bolt something on. Publish an llms.txt. Train a chatbot. Write “machine-optimised” descriptions. Each helps in isolation, but they miss the bigger shift: AI is the primary consumer of our documentation now. And the way it fails is different from humans. A human reads a confusing paragraph and asks for clarification. An AI reads it and generates code that looks right but isn’t. And it does this with complete conviction.

Two things became clear early on.

AI tools don’t all read the same way. llmstxt.org is a structured index for LLM discovery, MCP is a live query interface for AI editors, agentskills.io is an action schema spec for agent frameworks. A RAG pipeline wants the entire site as a single file, an MCP client wants to query pages in real time, an agent framework wants structured tool definitions, and a build script just wants to curl one page as markdown. Optimising for one doesn’t help the others.

What AI reads needs to stay current. This is why OpenAPI generates from code, why JSDoc generates from source, why every team that’s maintained parallel documentation has eventually watched one version go stale. The moment we maintain an “AI version” of our docs alongside the human version, we’ve created a sync problem. Content diverges. Specs go stale. The LLM index says one thing; the actual endpoint does another. We haven’t been doing this long enough to feel the pain yet, but the pattern is predictable. And when AI is the first reader, stale content doesn’t just confuse people, it generates wrong code at scale.

One Source, Many Outputs

Our approach: make the documentation the single source from which every AI entry point is derived. No separate content, no parallel versions. Enrich what’s already there, then generate outputs from it.

One of the first things we learned was that our docs were serving HTML pages. Humans don’t care; browsers render them fine. But AI agents parse markdown far more reliably than raw HTML. It took us a while to realise that the format mattered as much as the content. Not a massive change on its own, but these small fixes add up. That was the first one, and it set the direction for everything after.

Jupiter’s APIs are designed around developer experience: REST-first, no RPC required, no mandatory API keys, clean JSON in and JSON out. That’s not an AI-specific decision, but it’s what makes everything else work. Any agent that can make HTTP calls can interact with Jupiter without installing an SDK, setting up a blockchain node, or managing binary dependencies. From there, the goal was to enrich the existing docs to serve every AI consumption path.
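Because the APIs are plain REST with JSON bodies, an agent or script needs nothing beyond an HTTP client to talk to them. A minimal sketch using only the Python standard library; the base URL and parameter names here are illustrative assumptions, not the documented contract:

```python
# Any agent that can make HTTP calls can talk to Jupiter directly:
# no SDK, no RPC node, no binary dependencies.
import json
from urllib import request

BASE = "https://api.jup.ag"  # assumed base URL, for illustration only

def build_order_request(input_mint: str, output_mint: str, amount: int) -> request.Request:
    """One plain HTTP POST: JSON in, JSON out."""
    # Field names are illustrative guesses, not the documented schema.
    body = json.dumps({
        "inputMint": input_mint,
        "outputMint": output_mint,
        "amount": amount,
    }).encode()
    return request.Request(
        f"{BASE}/ultra/v1/order",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_order_request("<input token mint>", "<output token mint>", 1_000_000)
# request.urlopen(req) would return the quote as JSON; omitted here so
# the sketch runs without network access.
```

The point isn’t this particular call: it’s that the whole interaction fits in a dozen lines of stdlib code, which is exactly the surface an agent can reliably generate.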

llms.txt

llms.txt is auto-generated from page frontmatter and navigation config, following the llmstxt.org standard. It’s a structured Markdown index of every API, guide, and reference page with its llmsDescription. For RAG pipelines that need complete content, llms-full.txt concatenates the entire site into a single file.

The llmsDescription is a frontmatter field we’ve added to every page alongside the existing human-facing description. We didn’t start here. It took a few rounds of discussion on the team to land on dual descriptions. What an LLM reads efficiently (endpoint paths, parameter names, concrete behaviour) isn’t always what a human wants to scan on a docs page. Trying to serve both audiences with one description means compromising for each.
```yaml
---
title: "Ultra Swap API"
description: "Overview of Ultra Swap and its features."
llmsDescription: "Jupiter Ultra Swap API provides a managed swap execution
  engine. POST to /ultra/v1/order for quotes, /ultra/v1/execute to submit.
  Handles routing, slippage, gas, MEV protection server-side. No RPC or
  wallet infrastructure required."
---
```
  • description is short and scannable: written for a human browsing the site.
  • llmsDescription is specific and technical: endpoints, key capabilities, what the API actually does and how to call it.
Both live on the same page. Update the page, and both stay current together; there’s no separate AI copy to drift out of sync.

skill.md

Most agent “skills” or “tools” in the ecosystem are individual function definitions, one tool per API call. Give an agent 50 endpoint definitions and it has to figure out which one matches the developer’s intent. That means loading every tool into context, parsing each schema, and hoping the LLM picks the right one. It works for simple integrations, but a protocol with dozens of endpoints across multiple products needs something more intentional.

skill.md follows the agentskills.io specification with an intent-routing approach. Instead of one tool per endpoint, it organises Jupiter’s capabilities by what a developer actually wants to do. An agent that receives “swap tokens” gets routed to the Ultra API family and its first action. “Check price” gets routed to the Price API. The agent doesn’t need to know Jupiter’s full API surface; it expresses intent, and the skill handles the mapping.

This works better than a flat list of tools. If endpoints are added or APIs are restructured, the intent categories stay stable. It’s also more token-efficient: instead of 50 tool schemas in context, the agent loads one skill definition with intent categories and lets the router resolve the rest. skill.md is sourced directly from the open-source Jupiter Skills Repository, which also ships framework-specific definitions ready to drop in:
  • Claude / OpenAI: function calling with tool definitions
  • Vercel AI SDK: Zod-typed tool schemas
  • LangChain: @tool decorated functions
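The intent-routing idea fits in a few lines. The sketch below is our illustration of the pattern, not the actual skill.md contents; the Price API path is an assumption, and only the “swap tokens” → Ultra and “check price” → Price mappings come from the text:

```python
# Hypothetical intent router: the agent names an intent, a small table
# resolves it to an API family and the first endpoint to call.
from typing import NamedTuple

class Route(NamedTuple):
    api_family: str
    first_action: str  # the endpoint an agent should call first

# Dozens of endpoints collapse into a handful of stable intent categories,
# so restructuring endpoints doesn't invalidate the agent's entry points.
INTENT_ROUTES: dict[str, Route] = {
    "swap tokens": Route("Ultra API", "POST /ultra/v1/order"),
    "check price": Route("Price API", "GET /price"),  # path assumed
}

def route_intent(intent: str) -> Route:
    try:
        return INTENT_ROUTES[intent.lower()]
    except KeyError:
        raise ValueError(f"No route for intent: {intent!r}")

print(route_intent("swap tokens"))
```

The token economics follow directly: the agent carries one small routing table in context instead of 50 full tool schemas, and the router resolves the specific endpoint only when an intent fires.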

MCP

Most AI tools today get their context from web searches. Search results are cached and indexed, not live. When an API changes, search engines may not reflect it for days or weeks. AI assistants generate code against outdated endpoints, deprecated parameters, and stale examples. The developer gets working-looking code that breaks against the current API. The MCP server avoids this. Via the Model Context Protocol, AI editors query the documentation source directly, the same source that generates llms.txt and skill.md. No caching layer, no indexing delay. The context an AI reads is the context that’s currently live.

Markdown Export and OpenAPI Specs

Remember the HTML problem from earlier? This is where that fix landed. Every page in the docs can now be accessed as raw markdown by appending .md to the URL or setting the Accept: text/markdown header. OpenAPI specs are accessible directly as YAML (e.g. dev.jup.ag/openapi-spec/swap/swap.yaml) and feed both human-readable API references and machine-readable schemas. Update a page, its content, frontmatter, or the OpenAPI spec it references, and the change propagates through llms.txt, gets picked up by MCP queries, flows into skill definitions, and shows up in any tool that processes it.
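Both access paths can be exercised with nothing but an HTTP client. A sketch that builds (but doesn’t send) the two equivalent requests; the page URL is illustrative:

```python
# Two equivalent ways, per the post, to pull a docs page as raw markdown:
# append .md to the URL, or send an Accept: text/markdown header.
from urllib import request

page = "https://dev.jup.ag/docs/ultra"  # illustrative page URL

# Option 1: ask for the markdown variant by URL suffix
req_suffix = request.Request(page + ".md")

# Option 2: ask the same URL for markdown via content negotiation
req_header = request.Request(page, headers={"Accept": "text/markdown"})

# request.urlopen(...).read() would return the page body as markdown;
# omitted so the sketch runs without network access.
```

The suffix form suits build scripts and curl one-liners; the header form suits clients that already speak content negotiation.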

Meeting AI Where It Already Works

AI tooling is fragmented because these are different tools solving different problems. Each one is its own way into our docs. Because everything derives from one source, supporting each of them doesn’t mean maintaining separate systems. It means adding another output.
| How AI consumes docs | Jupiter resource |
| --- | --- |
| High-level discovery | llms.txt: structured index of all docs |
| Full-context ingestion | llms-full.txt: complete site content |
| In-editor queries | MCP server |
| Agent capabilities | skill.md + Skills Repository |
| Single page fetch | Add .md to any URL to get raw markdown |
| API schema | OpenAPI specs for every product |
| LLM-friendly description | llmsDescription frontmatter on every page |
We didn’t force AI into one consumption pattern. We met it wherever it already works. Since shipping these changes, the one-shotability (giving an agent a single prompt and getting working code back) has improved across the board. Jupiter was already the most recommended DeFi API in crypto, but now agents can actually use it without hand-holding.

Our biggest open problem is versioning. Jupiter’s APIs have been live for a while, and we’ve had deprecations. Agents don’t always pick up that an endpoint has been superseded, and stale docs from old versions still float around the internet. One lesson from this: deprecate less often, and when you do, avoid breaking changes. It’s not just annoying for developers, it’s stale information that agents will keep serving long after we’ve moved on.

New standards will come, consumption patterns will shift, and the way AI reads docs will keep changing. The single-source approach means each new pattern is another output to generate, not another system to maintain.

If you’re building on Jupiter with AI, or building AI-friendly docs for your own project, we’d love to hear what’s working and what isn’t. Reach out on Discord or X.

Start Here

Everything described in this post is live and documented.