A content editor asks Claude Code to pull the latest five product records from DatoCMS, rewrite the SEO descriptions in a tighter voice, and save them as drafts for review. Two minutes later, the drafts are sitting in DatoCMS preview waiting for an editor’s sign-off. No browser tab on the CMS, no copy-paste, no developer in the loop.
That is what controlling DatoCMS from Claude Code looks like in practice. The setup is small. The token model and the per-call approval gate are what make it safe to ship.
This post is the DatoCMS variant of the same pattern we wrote about for Storyblok. The principles are identical. The specifics differ enough to be worth covering separately.
Why DatoCMS fits this pattern well
Three properties of DatoCMS make the MCP integration cleaner than most:
- Records are typed. Every record belongs to an item type with a defined schema. The model knows what fields exist before it makes a change. There is no guessing about whether a field is rich text or a string.
- Drafts and published versions are first-class. DatoCMS separates drafts from published content at the API level, which means the model can write drafts confidently without ever risking the live site.
- API tokens are scope-aware. The Content Management API token can be issued with read-only, draft-write, or full publish permissions. That maps directly onto the approval-gate model the integration needs.
The combination is unusually well suited to AI tool use. Most CMSs let you simulate this with custom backend logic. DatoCMS gives you the boundary at the token level.
The setup, end to end
Generate two API tokens
In DatoCMS → Settings → API tokens, create two tokens for the same project:
- Read-only token. Permissions: read all records and uploads. Cannot create, update, or delete.
- Draft-write token. Permissions: read records, create draft records, update draft records. Cannot publish, cannot delete.
Do not generate a publish-capable token at this stage. If you need publishing through Claude Code later, create a separate token with publish permission and treat every call through it as something that requires explicit confirmation.
This token-per-action model is the single most important governance decision in the whole setup. Skipping it and using one full-access token is how teams end up explaining a published draft they did not approve.
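The gate the two tokens create can be expressed as a small policy table. A minimal sketch of that mapping — the `Action`, `ActionPolicy`, and `policyFor` names are illustrative, not part of any DatoCMS or Claude Code API; the real boundary is enforced by the token's permissions, not by this code:

```typescript
// Illustrative policy table for the token-per-action model.
// DatoCMS itself only sees the token's permissions; this just makes
// the intended approval gates explicit in one place.
type Action = "read" | "draft_write" | "publish";

interface ActionPolicy {
  token: "read-only" | "draft-write" | "publish";
  autoApprove: boolean; // may Claude Code run this without asking?
}

function policyFor(action: Action): ActionPolicy {
  switch (action) {
    case "read":
      // Read-only token, no confirmation needed.
      return { token: "read-only", autoApprove: true };
    case "draft_write":
      // Draft-write token, confirm each call.
      return { token: "draft-write", autoApprove: false };
    case "publish":
      // Separate publish-capable token, always an explicit human decision.
      return { token: "publish", autoApprove: false };
  }
}
```

The point of writing it down like this: the only action that auto-approves is the one whose token cannot mutate anything.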
Install the MCP server
```shell
pnpm add -g @datocms/mcp-server
```
The DatoCMS MCP server wraps the Content Management API in MCP-compatible tools: list and fetch records, list models, create and update drafts, search uploads, and publish (only with a publish-capable token). If the official server is not available for your stack, the API surface is small enough that a custom server using @datocms/cma-client-node fits in well under 100 lines. More on that below.
Register it with Claude Code
Open ~/.claude/mcp.json (or the project-local equivalent) and add two server entries:
```json
{
  "mcpServers": {
    "datocms-read": {
      "command": "pnpm",
      "args": ["dlx", "@datocms/mcp-server"],
      "env": {
        "DATOCMS_API_TOKEN": "${DATOCMS_READ_TOKEN}",
        "DATOCMS_ENVIRONMENT": "main"
      }
    },
    "datocms-draft": {
      "command": "pnpm",
      "args": ["dlx", "@datocms/mcp-server", "--mode=draft"],
      "env": {
        "DATOCMS_API_TOKEN": "${DATOCMS_DRAFT_TOKEN}",
        "DATOCMS_ENVIRONMENT": "main"
      }
    }
  }
}
```
The two-server pattern is intentional. Read auto-approves. Draft writes need confirmation per call. The model itself sees them as separate tool sets, so there is no ambiguity about which scope a given action falls under.
If you use DatoCMS sandbox environments for testing, point the read and draft servers at a sandbox first. Switch to main only after the team is comfortable with the workflow.
Try it
Restart Claude Code. Ask: “List the last five records of model blog_post and show me their slugs and titles.” Claude calls the read-only server, returns the data, and you confirm the connection works.
Then: “Draft a new variant of the homepage hero record. Stronger headline focused on migration. Save as a draft.” Claude writes the draft through the draft-write server. The change lands in front of you for approval before it leaves Claude Code. After approval, the draft sits in DatoCMS ready for review in the preview environment.
What this is good for
Patterns that earn their keep on a real DatoCMS project:
- Drafting record variants. Three versions of a hero, a CTA, a meta description, all written into DatoCMS as drafts and reviewable in the preview before any of them go live.
- Field-level rewrites across many records. “Read every blog post tagged `migration` and rewrite the SEO description in a tighter voice.” Claude opens drafts on each, you review them in DatoCMS.
- Schema introspection and content modeling. “Show me every model that references the `author` model. Are any of them missing a fallback?” That kind of structured query maps cleanly onto DatoCMS’s typed schema.
- Bulk metadata work. Adding alt text, schema annotations, or category tags across a content set without writing a script. Slower than a script. But you do not need to write the script.
- Cross-tool reasoning. Pulling a week of search analytics, mapping queries to records that should rank for them, drafting an updated description against the right record. The work that was three tools and a spreadsheet becomes one conversation.
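For comparison, the script that the bulk SEO-description pass replaces looks roughly like this with `@datocms/cma-client-node`. A sketch, not a drop-in file: it assumes a `blog_post` model with a `seo_description` string field, and `rewriteDescription` is a hypothetical stand-in for whatever produces the new copy.

```typescript
import { buildClient } from "@datocms/cma-client-node";

// Draft-write token: this script can open drafts but cannot publish.
const client = buildClient({ apiToken: process.env.DATOCMS_DRAFT_TOKEN! });

// Hypothetical helper: however you generate the tighter copy.
declare function rewriteDescription(current: string): Promise<string>;

async function run(): Promise<void> {
  // Iterate every blog_post record; the paged iterator handles pagination.
  for await (const item of client.items.listPagedIterator({
    filter: { type: "blog_post" },
  })) {
    const next = await rewriteDescription(String(item.seo_description));
    // On a draft/published-enabled model this writes a new draft version;
    // nothing goes live, because the token cannot publish.
    await client.items.update(item.id, { seo_description: next });
  }
}

run();
```

The conversational version does the same loop, but with a human approval in front of each write instead of a dry run.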
The connecting theme is that the model handles the routine reasoning between tools. The team handles the calls that matter.
Where it breaks down
A few places where Claude Code plus an MCP server is the wrong tool for DatoCMS, or at least not the only tool:
- Schema migrations. Renaming a field across an item type belongs in a CMA migration script with a dry run. The model can write the script. It should not be the script.
- Multi-locale sync that needs glossary discipline. DatoCMS’s localized fields make AI translation tempting. Marketing copy is fine. Legal text and anything where a glossary mistake is hard to spot is not. We covered the broader trade-offs in translation workflows that don’t break your CMS.
- Anything where the audit trail matters more than the speed. DatoCMS records every change with a user attribution. Compliance edits, regulatory updates, contract terms should run through the UI so the change history reflects the human who made the call.
- Publishing without explicit confirmation. Even with a publish-capable token, the publish action should never be auto-approved. Treat it the way you treat a force-push to main.
The governance pattern that makes it safe
The interesting part is not the integration. It is the approval gate model around it.
Three rules we apply on any project where Claude Code touches a real DatoCMS space:
- Read auto-approves. Draft-write requires confirmation. Publish always requires explicit confirmation. The three actions have different blast radii and should have different friction levels. Bake the boundary into the token, not into the user’s discipline.
- Tokens are scoped per environment. Sandbox environment tokens never reach production. Production tokens never reach a developer’s laptop in plain text. The MCP server config reads from environment variables, and those variables come from a secret manager, not a `.env` file in the repo.
- Every tool call gets logged. Either the MCP server logs invocations locally, or the DatoCMS audit log captures the change with the token’s user attribution. Pipe that into whatever the team already monitors. A surprised editor in three months should be able to ask “who changed this and when” and get an answer.
These are the same rules a sane DevOps team applies to any automation that touches production. The only difference is that the actor is a model instead of a CI job.
What if you want to write a custom MCP server for DatoCMS
The DatoCMS Content Management API client (@datocms/cma-client-node) gives you everything you need. The smallest useful server exposes five tools:
- `list_records(model, filter)`
- `get_record(id)`
- `create_draft(model, fields)`
- `update_draft(id, fields)`
- `search_uploads(query)`
The official MCP TypeScript SDK is at github.com/modelcontextprotocol/typescript-sdk. The CMA client handles auth, retries, and rate limiting. Wrapping the five tools above is around 80 lines of TypeScript including error handling.
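Trimmed to two of the five tools, such a server might look like the following. This is a sketch against the MCP TypeScript SDK and the CMA client, not a tested implementation: SDK call shapes drift between versions, and the remaining tools follow the same pattern as the two shown.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { buildClient } from "@datocms/cma-client-node";
import { z } from "zod";

// The token injected by the Claude Code config decides what this
// server can actually do, regardless of which tools it exposes.
const dato = buildClient({ apiToken: process.env.DATOCMS_API_TOKEN! });
const server = new McpServer({ name: "datocms", version: "0.1.0" });

server.tool(
  "list_records",
  { model: z.string(), limit: z.number().optional() },
  async ({ model, limit }) => {
    const items = await dato.items.list({
      filter: { type: model },
      page: { limit: limit ?? 10 },
    });
    return { content: [{ type: "text", text: JSON.stringify(items) }] };
  },
);

server.tool("get_record", { id: z.string() }, async ({ id }) => {
  const item = await dato.items.find(id);
  return { content: [{ type: "text", text: JSON.stringify(item) }] };
});

// create_draft, update_draft, and search_uploads follow the same shape.

await server.connect(new StdioServerTransport());
```

The CMA client's own error handling surfaces validation failures — a rejected block type, a missing required field — as tool errors the model can read and correct.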
Resist the urge to expose every CMA endpoint. The smallest set that covers your team’s actual use cases is the right place to start. Adding a tool later is cheap. Removing one that the team has come to depend on is not.
DatoCMS-specific: the component library is what matters
The token model and the schema are the cheap part. The component library underneath the schema is the expensive part, and the part that decides whether AI editing actually works on a real brand.
This work is finished before the marketing team starts using Claude Code. The component library is built once by a development and design team over months and then handed over. The marketing team does not write frontend code, does not touch the Astro or Next.js components, does not edit DatoCMS block models. They operate the public surface (Claude Code, MCP calls, drafts in DatoCMS) and the rendered components handle everything else automatically.
What that looks like specifically in DatoCMS:
- Every modular content block has one finished, brand-correct rendered implementation. The `hero_block`, `feature_card`, `testimonial_block`, and so on each map to a single Hero, FeatureCard, Testimonial component on the frontend. Layout, typography, spacing, colour are settled in code.
- Modular content fields constrain which blocks can sit where. A page body modular field accepts only page-level block types. A section’s nested content accepts only intro-level block types. Claude cannot place a footer block inside a hero, because the modular content validator rejects it before the API call lands.
- Variants are separate models, not toggles. Two hero designs become two block types, both reviewed by design.
- A page-composition briefing lives alongside the project. A `CLAUDE.md` (or equivalent) maintained by the dev team specifies which blocks fit which page archetype, the order, the field-level constraints to respect, and the brand voice rules that apply to draft copy. The marketing team does not write it. They benefit from it.
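A briefing like that can be short. A hypothetical excerpt, to make the shape concrete — the archetype, block names, and limits here are invented examples, not a template:

```markdown
## Page archetype: landing page

- Body accepts, in order: one `hero_block`, two to four `feature_card`
  sections, at most one `testimonial_block`, one closing `cta_block`.
- `hero_block` headline: max 60 characters, no trailing punctuation.
- Brand voice: active verbs, no superlatives, British spelling.
- Never invent new block types; if a layout is missing, flag it for design.
```

Half of it restates what the schema validators already enforce. The other half — order, counts, voice — is exactly the judgment the validators cannot encode, which is why the file exists.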
This is the same principle as the Storyblok version of this post, applied to DatoCMS. The full argument for why the component library is the work that decides whether AI editing feels safe or chaotic lives in the Storyblok post.
Where this is heading
I think we are months, not years, away from a working pattern where marketing teams publish content daily without ever opening the DatoCMS back office UI. Drafts get written through Claude Code. Reviews happen in the existing visual preview. Publishing happens through the same agent with an explicit confirmation step.
DatoCMS is still doing all the work behind the scenes. The fields, the validation, the workflow, the audit log. What disappears is the surface. That shift does not happen because the AI gets better. It happens because the component library and composition briefing above are in place before marketing ever opens the agent. The MCP server is one weekend. The library plus the briefing is a quarter of design and engineering. Teams that start that work now will be publishing this way before the end of 2026.
What this is and what this is not
This is not “AI replaces DatoCMS.” DatoCMS is still where content lives. The visual editor, field validation, modular content blocks, and workflow approvals all still happen there. The marketing team still owns the content.
What changes is the editing surface. Some edits happen in the DatoCMS UI because that is the right tool. Some edits happen through Claude Code because the team is already there. The choice is no longer browser tab versus terminal. It is “which surface fits this edit.”
Where to start
If you are running DatoCMS already, install the MCP server in a sandbox environment, scope a read-only token, and let one person on the team try it for a week. The friction points show up in days. Decide whether the draft-write pattern is worth the setup before rolling it out broadly.
If you are not on a headless CMS yet, this is one more reason to consider moving. Our headless website service covers what that move actually looks like. The website subscription team handles the keep-it-running side, including the MCP wiring once it ships.
The bigger picture is the one we wrote about in websites have two audiences now. Your team using AI tools is one of those audiences. Giving them a clean way to talk to DatoCMS is the most concrete thing you can do for them this quarter.