
Docs Have a New First Reader
The immediate instinct is to bolt something on. Publish an llms.txt. Train
a chatbot. Write “machine-optimised” descriptions. Each helps in isolation,
but they miss the bigger shift: AI is the primary consumer of our
documentation now. And the way it fails is different from humans.
A human reads a confusing paragraph and asks for clarification. An AI reads
it and generates code that looks right but isn’t. And it does this with
complete conviction.
Two things became clear early on.
AI tools don’t all read the same way.
llmstxt.org is a structured index for LLM discovery; MCP is a live query
interface for AI editors; agentskills.io is an action schema spec for agent
frameworks. A RAG pipeline wants the entire site as a single file, an
MCP client wants to query pages in real time, an agent framework wants
structured tool definitions, and a build script just wants to curl one page as
markdown. Optimising for one doesn’t help the others.
What AI reads needs to stay current. This is why OpenAPI generates from
code, why JSDoc generates from source, why every team that’s maintained
parallel documentation has eventually watched one version go stale. The moment
we maintain an “AI version” of our docs alongside the human version, we’ve
created a sync problem. Content diverges. Specs go stale. The LLM index says
one thing; the actual endpoint does another. We haven’t been doing this long
enough to feel the pain yet, but the pattern is predictable. And when AI is
the first reader, stale content doesn’t just confuse people, it generates wrong
code at scale.
One Source, Many Outputs
Our approach: make the documentation the single source from which every AI entry point is derived. No separate content, no parallel versions. Enrich what’s already there, then generate outputs from it.

One of the first things we learned was that our docs were serving HTML pages. Humans don’t care; browsers render them fine. But AI agents parse markdown far more reliably than raw HTML. It took us a while to realise that the format mattered as much as the content. Not a massive change on its own, but these small fixes add up. That was the first one, and it set the direction for everything after.

Jupiter’s APIs are designed around developer experience: REST-first, no RPC required, no mandatory API keys, clean JSON in and JSON out. That’s not an AI-specific decision, but it’s what makes everything else work. Any agent that can make HTTP calls can interact with Jupiter without installing an SDK, setting up a blockchain node, or managing binary dependencies.

From there, the goal was to enrich the existing docs to serve every AI consumption path.

llms.txt
llms.txt is auto-generated from page
frontmatter and navigation config, following the llmstxt.org
standard. It’s a structured Markdown index of every API, guide, and reference
page with its llmsDescription. For RAG pipelines that need complete content,
llms-full.txt concatenates the entire
site into a single file.
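For reference, the llmstxt.org shape is a Markdown index: an H1 title, a blockquote summary, and sections of links with descriptions. The entries below are illustrative, not the actual contents of Jupiter’s index:

```markdown
# Jupiter Developer Docs

> API documentation for Jupiter. REST-first, JSON in and JSON out.

## APIs

- [Ultra API](https://dev.jup.ag/docs/ultra.md): Execute token swaps.
- [Price API](https://dev.jup.ag/docs/price.md): Fetch token prices.
```

Each link’s description is the page’s llmsDescription, which is what makes the index useful to an LLM skimming for the right page.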
The llmsDescription is a frontmatter field we’ve added to every page
alongside the existing human-facing description. We didn’t start here. It
took a few rounds of discussion on the team to land on dual descriptions. What
an LLM reads efficiently (endpoint paths, parameter names, concrete behaviour)
isn’t always what a human wants to scan on a docs page. Trying to serve both
audiences with one description means compromising for each.
- description is short and scannable: written for a human browsing the site.
- llmsDescription is specific and technical: endpoints, key capabilities, what the API actually does and how to call it.

Both live on the same page. Update the page, and both stay current.
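A sketch of what the dual-description frontmatter could look like. The field names are from this post; the page title, values, and endpoint path are invented for illustration:

```yaml
---
title: Price API
description: Get real-time prices for any token.
llmsDescription: >-
  GET /price?ids=<mint,...> returns current USD prices keyed by token
  mint address. No API key required; plain JSON response.
---
```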
skill.md
Most agent “skills” or “tools” in the ecosystem are individual function definitions, one tool per API call. Give an agent 50 endpoint definitions and it has to figure out which one matches the developer’s intent. That means loading every tool into context, parsing each schema, and hoping the LLM picks the right one. It works for simple integrations, but a protocol with dozens of endpoints across multiple products needs something more intentional.

skill.md follows the
agentskills.io specification with an intent-routing
approach. Instead of one tool per endpoint, it organises Jupiter’s capabilities
by what a developer actually wants to do. An agent that receives “swap tokens”
gets routed to the Ultra API family and its first action. “Check price” gets
routed to the Price API. The agent doesn’t need to know Jupiter’s full API
surface; it expresses intent, and the skill handles the mapping.
This works better than a flat list of tools. If endpoints are added or APIs
are restructured, the intent categories stay stable. It’s also more
token-efficient: instead of 50 tool schemas in context, the agent loads one
skill definition with intent categories and lets the router resolve the rest.
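A minimal sketch of the intent-routing idea. The intent phrases match the examples above, but the action names and routing table are invented, not the actual skill.md contents:

```python
# Hypothetical intent router: maps what a developer wants to do
# to an API family and its first action, instead of exposing one
# tool schema per endpoint.
INTENT_ROUTES = {
    "swap tokens": ("Ultra API", "get_order"),   # action name invented
    "check price": ("Price API", "get_price"),   # action name invented
}

def route(intent: str) -> tuple[str, str]:
    """Resolve a developer intent to an (api_family, first_action) pair."""
    key = intent.lower().strip()
    if key not in INTENT_ROUTES:
        raise KeyError(f"no route for intent: {intent!r}")
    return INTENT_ROUTES[key]
```

The stability claim in the next paragraph falls out of this structure: endpoints can change underneath, but the intent keys, and therefore the agent-facing surface, stay put.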
skill.md is sourced directly from the open-source
Jupiter Skills Repository, which
also ships framework-specific definitions ready to drop in:
- Claude / OpenAI: function calling with tool definitions
- Vercel AI SDK: Zod-typed tool schemas
- LangChain: @tool-decorated functions
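For contrast with the flat-list approach, this is roughly what a single per-endpoint tool looks like in the OpenAI function-calling schema (the tool name and parameters are invented). Multiply it by 50 and the context cost of the non-skill approach becomes clear:

```python
# One endpoint = one tool schema. All of these must be loaded into
# context; the intent-routed skill loads one definition instead.
price_tool = {
    "type": "function",
    "function": {
        "name": "get_price",  # hypothetical tool, not Jupiter's actual schema
        "description": "Fetch the current USD price for a token.",
        "parameters": {
            "type": "object",
            "properties": {
                "mint": {
                    "type": "string",
                    "description": "Token mint address.",
                },
            },
            "required": ["mint"],
        },
    },
}
```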
MCP
Most AI tools today get their context from web searches. Search results are cached and indexed, not live. When an API changes, search engines may not reflect it for days or weeks. AI assistants generate code against outdated endpoints, deprecated parameters, and stale examples. The developer gets working-looking code that breaks against the current API.

The MCP server avoids this. Via the Model Context Protocol, AI editors query the documentation source directly, the same source that generates llms.txt
and skill.md. No caching layer, no indexing delay. The context an AI reads
is the context that’s currently live.
Markdown Export and OpenAPI Specs
Remember the HTML problem from earlier? This is where that fix landed. Every page in the docs can now be accessed as raw markdown by appending .md to
the URL or setting the Accept: text/markdown header. OpenAPI specs are
accessible directly as YAML (e.g. dev.jup.ag/openapi-spec/swap/swap.yaml)
and feed both human-readable API references and machine-readable schemas.
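The append-.md convention is easy to script against. A small helper, assuming only that convention (the function itself is ours, not part of the docs platform):

```python
def markdown_url(page_url: str) -> str:
    """Return the raw-markdown variant of a docs page URL.

    Alternative: request the original URL with an
    "Accept: text/markdown" header instead of rewriting it.
    """
    base = page_url.rstrip("/")
    return base if base.endswith(".md") else base + ".md"
```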
Update a page, its content, frontmatter, or the OpenAPI spec it references,
and the change propagates through llms.txt, gets picked up by MCP queries,
flows into skill definitions, and shows up in any tool that processes it.
Meeting AI Where It Already Works
AI tooling is fragmented because these are different tools solving different problems. Each one is its own way into our docs. Because everything derives from one source, supporting each of them doesn’t mean maintaining separate systems. It means adding another output.

| How AI consumes docs | Jupiter resource |
|---|---|
| High-level discovery | llms.txt: structured index of all docs |
| Full-context ingestion | llms-full.txt: complete site content |
| In-editor queries | MCP server |
| Agent capabilities | skill.md + Skills Repository |
| Single page fetch | Add .md to any URL to get raw markdown |
| API schema | OpenAPI specs for every product |
| LLM-friendly description | llmsDescription frontmatter on every page |
Start Here
Everything described in this post is live and documented.

AI Overview
Why Jupiter is built for AI agents, and the quickstart for getting an
agent from zero to swap in four API calls.
llms.txt
Structured documentation index for LLM consumption. Summary and
full-context versions.
MCP
Connect your AI editor to Jupiter docs and APIs via Model Context
Protocol.
Skills
Pre-built agent skills and the open-source Jupiter Skills
Repository.
