Somewhere around Day 15 of building Postlark, I shipped a feature that most people would consider strange for a blog platform. Not themes. Not analytics. Not even a dashboard. An MCP server. A package you install from npm so that AI agents can manage your blog without ever touching a browser.
This needs explaining.
What even is MCP?
Model Context Protocol. Anthropic open-sourced it in late 2024, and by early 2026 it's become the standard way AI agents interact with external services. Think of it as USB-C for AI tools — a universal connector that lets any MCP-compatible agent (Claude Code, Cursor, Windsurf, and a growing list) plug into any MCP-compatible service without custom integration code.
Before MCP, connecting an AI agent to a service meant writing bespoke glue. You'd teach the agent about the API endpoints, the authentication scheme, the request format, the error codes. Each service required its own integration. Each integration was fragile. Update the API, break the agent.
MCP flips this. The service publishes an MCP server — a small program that describes its capabilities as "tools" with typed inputs and outputs. The agent discovers these tools automatically. No one writes integration code. The agent asks "what can you do?", the MCP server answers, and they're off.
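That discover-then-call loop can be sketched in a few lines. Everything below is illustrative: the tool names, schemas, and registry are hypothetical stand-ins, not Postlark's actual definitions or the real MCP SDK.

```typescript
// A minimal sketch of MCP-style tool discovery, assuming a
// hypothetical in-memory registry rather than the real protocol.

type Tool = {
  name: string;
  description: string;
  inputSchema: Record<string, string>; // param name -> type, simplified
};

const tools: Tool[] = [
  {
    name: "create_post",
    description: "Create a draft blog post",
    inputSchema: { title: "string", body: "string" },
  },
  {
    name: "publish_post",
    description: "Publish an existing draft by id",
    inputSchema: { postId: "string" },
  },
];

// The agent's first question: "what can you do?"
function listTools(): Tool[] {
  return tools;
}

// The agent then picks a tool by name and calls it with typed arguments,
// never reasoning about endpoints or HTTP verbs.
function callTool(name: string, args: Record<string, unknown>): string {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return `called ${tool.name} with ${JSON.stringify(args)}`;
}

console.log(listTools().map((t) => t.name).join(", "));
console.log(callTool("create_post", { title: "Hello", body: "..." }));
```

The point of the shape: the agent never guesses. It enumerates, it picks, it calls.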
Why not just use the REST API?
We have one. It works fine. You can curl your way to a published blog post if you want. So why build an MCP server on top?
Because REST APIs are designed for programmers, not for AI agents.
A REST API is a contract: here are the endpoints, here are the parameters, read the docs. A developer reads the docs, writes the HTTP calls, handles the auth headers, parses the responses. An AI agent can do all of this — but it's doing a lot of unnecessary work. It has to figure out which endpoint to call, in what order, how to authenticate, how to handle pagination. It's using general reasoning to solve a problem that has a structured answer.
An MCP server gives the agent that structured answer directly. Instead of "here are 15 endpoints, figure it out," MCP says "here are 17 tools, each with a name, a description, and a schema for what goes in and what comes out." The agent doesn't reason about HTTP verbs. It picks a tool and calls it. The difference in reliability is dramatic — fewer hallucinated endpoints, fewer malformed requests, fewer retries.
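On the wire, each tool descriptor is exactly that: a name, a description, and a JSON Schema for its input. A simplified sketch of one (the `create_post` tool and its fields are hypothetical, not Postlark's actual schema):

```json
{
  "name": "create_post",
  "description": "Create a draft post on a blog",
  "inputSchema": {
    "type": "object",
    "properties": {
      "title": { "type": "string" },
      "body": { "type": "string" },
      "tags": { "type": "array", "items": { "type": "string" } }
    },
    "required": ["title", "body"]
  }
}
```

The agent validates its own arguments against that schema before it ever sends a request, which is where most of the reliability win comes from.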
That said, the REST API isn't going away. MCP servers call REST APIs under the hood. If you're building a traditional integration or a CI/CD pipeline, the API is there. MCP is for the conversational agent use case, where a human says "publish my draft" and the agent needs to figure out how to make that happen.
How does someone actually use it?
One line:
claude mcp add postlark -- npx @postlark/mcp-server
That tells Claude Code to pull @postlark/mcp-server from npm and register it as a tool provider. Set your API key, and you're done. From that point on, you can say things like "list my recent posts" or "publish a new post about container networking" and the agent handles the rest — creating the post, setting tags, publishing it.
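Under the hood, that one line registers an entry in the standard MCP configuration format, roughly like the following. The `POSTLARK_API_KEY` variable name is an assumption here, used to show where the key goes:

```json
{
  "mcpServers": {
    "postlark": {
      "command": "npx",
      "args": ["@postlark/mcp-server"],
      "env": { "POSTLARK_API_KEY": "your-key-here" }
    }
  }
}
```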
Behind the scenes, the MCP server exposes tools for the full post lifecycle: create, read, update, delete, publish, schedule. Plus blog management, analytics, search, and cross-platform discovery. Seventeen tools in total, up from the seven we shipped on Day 15.
Didn't it break immediately?
Funny you should ask. The first version shipped and promptly failed for users running certain Node.js versions. Claude Desktop bundles its own Node runtime, and it turned out to be older than we'd assumed. Top-level await — a perfectly standard JavaScript feature — wasn't supported.
Two commits in one day tell the story: "fix: MCP server — compile to ES2017 for Node 12+ compatibility" followed by "fix: MCP v0.2.1 — remove top-level await for Node 12 compatibility." We'd written modern JavaScript, published it to npm, and immediately learned that the real-world runtime environment for MCP servers is messier than you'd expect. The MCP ecosystem is young enough that there's no "minimum supported Node version" convention yet. You ship, you find out, you patch.
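The fix itself is mechanical: move the `await` into an async entry point so the compiled output runs on older runtimes. A sketch of the pattern, with a hypothetical `startServer` standing in for the real bootstrap:

```typescript
// Before (needs a modern Node + ESM):
//   const server = await startServer();
//
// After: wrap the await in an async main() that still works
// once the code is compiled down to ES2017.

async function startServer(): Promise<string> {
  // Hypothetical stand-in for the real MCP server startup.
  return "listening";
}

async function main(): Promise<void> {
  const status = await startServer();
  console.log(status);
}

main().catch((err) => console.error(err));
```

Paired with `"target": "ES2017"` in tsconfig, the same source runs on runtimes that predate top-level await entirely.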
This is the kind of thing that doesn't show up in architecture docs. It shows up in your issue tracker at 11 PM.
Why build this before the dashboard?
Priorities reveal values. We built the MCP server on Day 15. The dashboard came later. The documentation site came even later. If you looked at our commit history and didn't know what Postlark was, you might guess we forgot to build a web interface.
We didn't forget. We just believed our own pitch. If Postlark is an AI-native publishing platform — if the whole point is that agents are first-class citizens — then the agent integration can't be an afterthought bolted on in Phase 3. It has to be the foundation you build everything else on top of. The dashboard needed to work with the same API the MCP server uses. Not a separate internal API. Not admin-only shortcuts. The same public API, the same rate limits, the same authentication.
Building the MCP server first forced that discipline. If the API was awkward to use through MCP, it was awkward, period. We'd fix it before building a dashboard that could paper over the problems with nice UI.
What about agents that don't support MCP?
Not every AI tool speaks MCP yet. For those, there's llms.txt — a plain text file hosted on the Postlark domain that describes the platform in a format AI agents can parse. It covers authentication, available endpoints, common workflows. Think of it as a README that your agent reads before it starts working.
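The file itself is plain text in the llms.txt convention: a title, a short summary, and annotated sections an agent can skim. A sketch of what Postlark's might contain (illustrative, not the actual file; the endpoints shown are hypothetical):

```text
# Postlark

> AI-native blog publishing platform. Agents can create, update,
> and publish posts via the REST API or the MCP server.

## Authentication
- Requests use a bearer token: Authorization: Bearer <key>

## Key endpoints
- POST /api/posts: create a draft
- POST /api/posts/{id}/publish: publish a draft

## Agent integration
- MCP server: npx @postlark/mcp-server
```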
Between the MCP server, the REST API, the CLI, and llms.txt, we wanted to meet agents wherever they are. MCP is the best experience. The API is the universal fallback. llms.txt is the discovery mechanism. The CLI is for humans who live in terminals.
Is MCP just hype?
I had this worry when we started. New protocol, single company behind it, early adopters only. Building on top of it felt like a bet.
A year and change later, the bet looks reasonable. MCP has become a de facto standard — not because Anthropic forced it, but because the problem it solves is real. Every AI agent needs to interact with external services. Every service was building its own custom plugin format. MCP said "here's one format for all of them" and the ecosystem converged because convergence was overdue.
For a blog platform specifically, MCP is the difference between "you can use our API" and "your AI agent already knows how to use us." That's not a small difference. That's the difference between a developer reading documentation for twenty minutes and an agent that's productive in twenty seconds.
Next in the series: llms.txt — the file that lets AI agents discover your blog before they even know Postlark exists.