A marketing lead drafts a product card variant in Claude Code. Three iterations, all written into the CMS as drafts, all reviewable in the visual preview, none of them published yet. They pick the strongest one, ask Claude to tighten the headline, approve the change, and the page goes live. Twenty minutes from blank prompt to published page. No browser tab on the CMS, no copy-paste between tools, no email to a developer.
That is what controlling a headless CMS from Claude Code looks like in practice. The setup is small. The governance pattern around it is what makes it safe to use. And the work that still needs human review is narrower than most teams expect.
Why this works now, not earlier
The pieces have been around individually for a while. Headless CMSs ship REST and GraphQL APIs. Claude has had tool use since 2024. What changed is MCP, the Model Context Protocol, which gives an AI a stable way to discover and invoke tools without writing custom integration code per system.
Storyblok ships an official MCP server. DatoCMS has one. Most serious headless platforms either ship a server or have a community version within a week of someone needing it. Claude Code reads ~/.claude/mcp.json (or the project-local equivalent), connects to the servers listed there, and exposes their tools to the model. Each tool call lands in front of you for approval before it runs.
That last part is the one that matters. The integration is interesting. The per-call approval gate is the reason it is shippable in production.
The setup, end to end
Storyblok is the easiest example to walk through because the official server is well-maintained and the access scopes are granular. The shape applies to any headless CMS.
Scope a token
Generate two API tokens in Storyblok. A read-only token that can list and fetch stories, components, and assets. A draft-write token that can update content but cannot publish. Do not generate a publish-capable token at this stage.
Most CMS incidents happen through tokens with too much scope. Treat read and draft-write as the default surface and introduce a publish token only later, with explicit approval friction in front of it.
Install the MCP server
pnpm add -g @storyblok/mcp-server
Or run it directly through pnpm dlx if you would rather not install globally. The server is a small Node process that wraps the Storyblok Management API in MCP-compatible tools.
Register it with Claude Code
Open ~/.claude/mcp.json (or create it) and add an entry:
{
"mcpServers": {
"storyblok-read": {
"command": "pnpm",
"args": ["dlx", "@storyblok/mcp-server"],
"env": {
"STORYBLOK_OAUTH_TOKEN": "${STORYBLOK_READ_TOKEN}",
"STORYBLOK_SPACE_ID": "${STORYBLOK_SPACE_ID}"
}
},
"storyblok-draft": {
"command": "pnpm",
"args": ["dlx", "@storyblok/mcp-server", "--mode=draft"],
"env": {
"STORYBLOK_OAUTH_TOKEN": "${STORYBLOK_DRAFT_TOKEN}",
"STORYBLOK_SPACE_ID": "${STORYBLOK_SPACE_ID}"
}
}
}
}
The two-server pattern is intentional. One server is read-only and can run with auto-approval. The other writes drafts and requires manual approval per call. Claude treats them as separate tool sets, so the model itself sees the boundary.
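The differing friction levels can be encoded in Claude Code's permission settings rather than left to habit. A hedged sketch, assuming the documented mcp__&lt;server&gt; rule format in a project-level .claude/settings.json; check the current Claude Code docs before relying on the exact key names:

```json
{
  "permissions": {
    "allow": ["mcp__storyblok-read"]
  }
}
```

Tools from the storyblok-read server then run without a prompt, while every call to storyblok-draft falls back to the default per-call approval dialog.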
Try it
Restart Claude Code. Ask: “List the last five published blog stories in Storyblok and show me the slugs.” Claude calls the read-only server and returns the slugs. You confirm the connection works.
Then: “Draft a new variant of the homepage hero with a stronger headline focused on the migration service.” Claude writes the draft through the draft-write server. You see the diff, you approve or reject. The draft sits in Storyblok ready for an editor to review in the visual preview.
What this is good for
A few patterns that earn their keep on a real marketing site:
- Drafting copy variants. Three versions of a hero, a CTA, a meta description, all written into the CMS as drafts and reviewable in the visual preview before any of them go live.
- Bulk reads for analysis. “Pull the last 50 blog posts and tell me which ones do not link to a service page.” Claude works through the data in the conversation, you fix the gaps.
- Tagging and metadata at scale. Adding alt text, schema, or category tags across a content set without writing a script. Slower than a script. But you do not have to write the script.
- Translation triggers. Picking a story, sending it to a translation pipeline, and writing the result back as a draft for review. We covered the broader workflow design in translation workflows that don’t break your CMS.
- Cross-tool reasoning. “Read the last week of Search Console data and surface posts whose impressions dropped.” That cross-tool reasoning is where MCP starts to feel like a real workflow shift, not just a chat interface for the CMS.
The connecting theme is that the model handles the routine reasoning between tools. The team handles the calls that matter.
Where it breaks down
A few places where Claude Code plus an MCP server is the wrong tool, or at least not the only tool:
- Bulk schema migrations. Renaming a field across 2,000 entries belongs in a script with a dry run. The model can write the script. It should not be the script.
- Multi-language sync that needs glossary discipline. AI translation through a CMS workflow is fine for marketing copy. It is not fine for legal text or anything where a glossary mistake is hard to spot.
- Anything where the audit trail matters more than the speed. Compliance edits, regulatory updates, contract terms. Use the CMS UI so the change history reflects the human who made the call.
- Publishing without explicit confirmation. The publish action should never be auto-approved, no matter how confident the model sounds. Treat it the way you treat a force-push to main.
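For the schema-migration case, the script the model writes can be a pure transform plus a dry-run gate. A minimal sketch in TypeScript; the Story shape, field names, and sample data are illustrative, not a real Storyblok payload:

```typescript
// Rename a field across entries, with a dry run that reports
// what would change without writing anything back.
type Story = { slug: string; content: Record<string, unknown> };

function renameField(story: Story, from: string, to: string): Story {
  if (!(from in story.content)) return story; // field absent, return unchanged
  const { [from]: value, ...rest } = story.content;
  return { ...story, content: { ...rest, [to]: value } };
}

function migrate(stories: Story[], from: string, to: string, dryRun = true): string[] {
  const report: string[] = [];
  for (const story of stories) {
    const next = renameField(story, from, to);
    if (next === story) continue; // nothing to rename in this entry
    report.push(`${story.slug}: ${from} -> ${to}`);
    if (!dryRun) {
      // write `next` back through the Management API here
    }
  }
  return report;
}

// Dry run first, inspect the report, then re-run with dryRun = false.
const stories: Story[] = [
  { slug: "home", content: { headline: "Hi" } },
  { slug: "about", content: { title: "Us" } },
];
console.log(migrate(stories, "headline", "hero_headline"));
```

The dry run is the point: the report gets reviewed before the second pass touches anything.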
The governance pattern that makes it safe
The interesting part is not the integration. It is the approval gate model around it.
Three rules we apply on any project where Claude Code touches a real CMS:
- Read auto-approves. Write requires confirmation. Publish always requires explicit confirmation. The three actions have different blast radii and should have different friction levels. Bake the boundary into the MCP server config, not into the user’s discipline.
- Tokens are scoped per environment. A staging token never reaches production. A production token never reaches a developer’s laptop. The MCP server config reads from environment variables, and those variables come from a secret manager, not a .env file in the repo.
- Every tool call gets logged. The MCP server records every invocation with timestamp, tool name, and a payload digest. Pipe that into whatever the team already monitors. A surprised editor in three months should be able to ask “who changed this and when” and get an answer.
These are the same rules a sane DevOps team applies to any automation that touches production. The only difference is that the actor is a model instead of a CI job.
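The logging rule is a ten-line wrapper if the MCP server is yours. A sketch, assuming Node and a generic (payload) handler shape rather than any particular MCP SDK:

```typescript
import { createHash } from "node:crypto";

type Handler = (payload: unknown) => Promise<unknown>;
type LogEntry = { ts: string; tool: string; digest: string };

const auditLog: LogEntry[] = [];

// Wrap a tool handler so every invocation is recorded with a
// timestamp, the tool name, and a SHA-256 digest of the payload.
// The digest answers "what was sent" without storing the content itself.
function withAudit(tool: string, handler: Handler): Handler {
  return async (payload) => {
    const digest = createHash("sha256")
      .update(JSON.stringify(payload))
      .digest("hex")
      .slice(0, 12);
    auditLog.push({ ts: new Date().toISOString(), tool, digest });
    return handler(payload);
  };
}

// Usage: wrap the draft-write handler before registering it as a tool.
const updateStoryDraft = withAudit("update_story_draft", async (payload) => {
  return { ok: true }; // placeholder for the real Management API call
});
```

In production the push to auditLog becomes a write to whatever log sink the team already monitors.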
What if your CMS does not have an MCP server yet
Write one. The simplest custom MCP wraps the CMS REST API in five tools:
- list_stories(filter)
- get_story(slug)
- update_story_draft(slug, fields)
- publish_story(slug)
- search_assets(query)
The official MCP TypeScript SDK is at github.com/modelcontextprotocol/typescript-sdk. The tutorial covers how to expose a tool in around twenty lines. If you can write a fetch wrapper, you can ship a usable MCP server in an afternoon.
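As a concrete sketch of that fetch wrapper, here are the five tools as plain handlers, each ready to be registered with the SDK as one MCP tool. The base URL, endpoint paths, and parameter names loosely follow the Storyblok Management API but are assumptions for illustration, not exact routes:

```typescript
// Space ID and token come from the environment, matching the MCP config.
const SPACE = process.env.STORYBLOK_SPACE_ID ?? "demo";
const TOKEN = process.env.STORYBLOK_OAUTH_TOKEN ?? "";
const BASE = `https://mapi.storyblok.com/v1/spaces/${SPACE}`;

// Build a request URL; kept separate so it is easy to test.
function buildUrl(path: string, params: Record<string, string> = {}): string {
  const qs = new URLSearchParams(params).toString();
  return qs ? `${BASE}${path}?${qs}` : `${BASE}${path}`;
}

async function api(
  path: string,
  params: Record<string, string> = {},
  init: { method?: string; body?: string } = {}
): Promise<unknown> {
  const res = await fetch(buildUrl(path, params), {
    ...init,
    headers: { Authorization: TOKEN, "Content-Type": "application/json" },
  });
  if (!res.ok) throw new Error(`CMS API error: ${res.status}`);
  return res.json();
}

// The five tools as plain handlers; each maps onto one MCP tool.
const tools = {
  list_stories: (filter: string) => api("/stories", { search_term: filter }),
  get_story: (slug: string) => api(`/stories/${slug}`),
  update_story_draft: (slug: string, fields: object) =>
    api(`/stories/${slug}`, {}, {
      method: "PUT",
      body: JSON.stringify({ story: { content: fields } }),
    }),
  publish_story: (slug: string) => api(`/stories/${slug}/publish`, {}, { method: "PUT" }),
  search_assets: (query: string) => api("/assets", { search: query }),
};
```

Wiring these handlers into the SDK's server object is the remaining twenty lines the tutorial covers.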
The bigger investment is deciding which tools to expose. Resist the urge to expose everything. The smallest set that covers your team’s actual use cases is the right place to start. The principle is the same one we wrote about in structured content for SEO and AI: the value comes from clear, narrow types, not from coverage.
The other half of the equation: a brand-correct component library
The MCP server config is the cheap part. The expensive part, and the part that decides whether AI editing actually works on a real brand, is the component library underneath.
This is delivered work, finished before the marketing team ever opens Claude Code. The component library is built once by a development and design team over months and then handed over. The marketing team does not write frontend code, does not touch the Astro or Next.js components, does not edit Storyblok block schemas. They operate the public surface, Claude Code through MCP, and the components handle the rendering automatically. That separation is the whole point.
What is in a complete component library:
- Every Storyblok block has a finished, brand-correct rendered implementation. Hero, intro paragraph, feature grid, testimonial, FAQ, CTA, footer reassurance. The full set a real landing page needs. Each one is signed off by design, coded against the design system, and locked behind block-level schema. The marketing team picks blocks. The visual outcome is settled.
- The block schema constrains what fits where. A feature_grid accepts only feature_card children. A cta_section accepts one cta_block. Claude cannot drop a marketing-styled block inside a legal-styled section, because the Storyblok schema forbids the combination before the API call lands.
- Variants are real, design-approved components, not toggles. “Hero with image left” and “Hero with image right” are separate blocks. Adding a new variant is a deliberate act done by the design team, not a tweak made in a draft.
- A page-composition briefing lives alongside the project. A CLAUDE.md (or equivalent) maintained by the dev team: “Feature-launch landing page is Hero with product photo, then a three-card FeatureGrid, then a single Testimonial, then a CTA. Hero subheadline is one sentence, max 90 characters. FeatureGrid card titles are verbs in present tense.” That briefing is the manual the agent reads before drafting. The marketing team does not write it. They benefit from it.
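In Storyblok terms, that first constraint is a blocks field with a component whitelist. A hedged sketch of what the feature_grid schema could look like; the key names follow the Management API's component schema format as commonly documented, so verify them against the current API reference:

```json
{
  "name": "feature_grid",
  "schema": {
    "cards": {
      "type": "bloks",
      "restrict_components": true,
      "component_whitelist": ["feature_card"],
      "maximum": 3
    }
  }
}
```

With that schema in place, a draft containing anything other than feature_card children is rejected by the API itself, before any approval gate is involved.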
The MCP server is a weekend of setup. The component library plus the composition briefing is months of design and engineering. It is also where the whole approach earns its keep. Without that layer, an AI-driven CMS is a fast way to ship off-brand pages. With it, the marketing team ships brand-correct pages every day end-to-end through the agent surface, while the rendered output stays inside the design system the company paid for.
Where this is heading
I think we are months, not years, away from a working pattern where marketing teams publish content daily without ever opening the CMS back office UI. Drafts get written through Claude Code or a similar agent surface. Reviews happen in the visual preview the CMS already serves to editors today. Publishing happens through the same agent with an explicit confirmation step.
The CMS is still doing all the work. The fields, the validation, the workflow, the audit log. What disappears is the surface, the dashboard the editor used to live in. That surface was always a compromise between what the CMS team built and what the editor needed. An agent surface fits closer to how editors actually work: in a conversation, with the writing tool of their choice, against a brief and a brand.
This shift does not happen because the AI gets better. It happens because the component library and composition briefing above are in place before marketing ever opens the agent. The MCP server is one weekend. The library plus the briefing is a quarter of design and engineering. Teams that start that work now will be publishing this way before the end of 2026.
What this is and what this is not
This is not “AI replaces the CMS.” The CMS is still where content lives. The visual preview, the field validation, the workflow approvals all still happen there. The marketing team still owns the content.
What changes is the editing surface. Some edits happen in the CMS UI because that is the right tool for the job. Some edits happen through Claude Code because the team is already there reasoning about the work. The choice is no longer browser tab versus terminal. It is “which surface fits this edit.”
That shift is small from the outside and meaningful from the inside. The CMS becomes one tool the team works with, not the one tool the team has to context-switch into.
Where to start
If you are running a headless CMS already, install the official MCP server in a sandbox space, scope a read-only token, and let one person on the team try it for a week. The friction points show up in days. Decide whether the draft-write pattern is worth the setup before you roll it out broadly.
If you are not on a headless CMS yet, this is one more reason to consider moving. A traditional CMS like WordPress can technically be wrapped in an MCP server, but the underlying API surface and the lack of structured content make the kind of tool calls that pay off here much harder to model. Our headless website service covers what that move actually looks like, and the website subscription team handles the keep-it-running side once it ships.
The bigger picture is the one we wrote about in websites have two audiences now. Your team using AI tools is one of those audiences. Giving them a clean way to talk to your CMS is the most concrete thing you can do for them this quarter.