Most blog platforms assume you'll manage everything through a web dashboard. Click here, drag there, navigate through nested menus to change a setting you could've typed in two seconds. For developers who spend eight hours a day in a terminal, this friction adds up fast.
We built @postlark/cli because we wanted blog management to feel like git — quick commands, scriptable, zero context-switching.
Who actually wants this?
Not everyone. If you write one blog post a month and enjoy the dashboard, the CLI adds nothing to your life. Close this tab.
But if you're maintaining a technical blog, publishing frequently, or automating content workflows — the terminal changes everything. You can drive it from shell scripts. You can batch operations. You can deploy an entire directory of Markdown files with a single command. Try doing that with a web UI.
The other audience we didn't fully anticipate: CI/CD pipelines. People wanted to publish from GitHub Actions, post-merge. The CLI made that possible without writing custom API integration code.
Getting started
Installation is the usual npm one-liner:
npm install -g @postlark/cli
Or skip installation entirely:
npx @postlark/cli whoami
Authentication was one of those things that took longer to get right than expected. The first version was simple — run postlark login and paste your API key. Functional, but clunky. You had to find the key in the dashboard, copy it, switch back to the terminal, paste it.
So we added browser-based login. Run postlark login and the CLI displays a short code and opens a browser window; you confirm the code in the browser, authorize the session, and the credentials are saved locally. No copy-pasting keys across windows.
Both paths still work. postlark login opens the browser. postlark login pk_live_abc123 skips the browser and stores the key directly. CI environments use the second form, usually fed from a secret.
The command surface
The CLI covers the full API — blogs, posts, search, analytics, domains, API keys, tokens. Around 30 commands. Here's what day-to-day usage actually looks like:
# See your blogs
postlark blogs
# Set a default so you stop typing --blog every time
postlark blogs use d8272f0c-...
# Create a post from a local file
postlark posts create --title "Database Migrations" --file migrations.md
# List recent posts
postlark posts
# Check traffic
postlark analytics
Nothing surprising. We kept the command structure flat and predictable — postlark <resource> <action>. No deeply nested subcommands, no flags that silently change behavior based on other flags.
Every command supports --json for machine-readable output. postlark posts --json | jq '.[] | .title' gives you a clean list of titles. Simple stuff, but it means the tool composes well with grep, jq, and the rest of the Unix toolkit.
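Composed with jq, the pattern looks like this. The JSON shape below is an illustrative stand-in, not the documented response schema; in real use you'd replace the echo with postlark posts --json:

```shell
# Stand-in payload so the pipeline is runnable as-is; in practice this
# comes from `postlark posts --json`. The field names are assumptions.
sample='[{"title":"Database Migrations","status":"published"},{"title":"Why We Migrated to Bun","status":"draft"}]'

# Titles only
echo "$sample" | jq -r '.[].title'

# Published posts only, as tab-separated title/status rows
echo "$sample" | jq -r '.[] | select(.status == "published") | [.title, .status] | @tsv'
```

Anything that emits JSON fits the same mold: pipe it, filter it, feed the result to the next command.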
Deploy: the command that justified the whole project
If I had to pick one feature that makes the CLI worth installing, it's deploy.
postlark deploy ./posts
This scans a directory for Markdown files with YAML frontmatter, diffs them against what's on the server, and creates or updates posts to match. Each file needs a frontmatter block:
---
title: Why We Migrated to Bun
slug: migrated-to-bun
tags: [bun, runtime, performance]
status: published
---
The slug is the unique key. If a post with that slug already exists, it gets updated. Otherwise, created. --dry-run previews what would happen without touching anything. --delete-missing removes server-side posts that don't have matching local files — useful when a git repo is the single source of truth for your content.
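Concretely, a deployable directory is just Markdown files carrying that frontmatter. This sketch scaffolds one such file (the path, slug, and body are illustrative), after which postlark deploy ./posts --dry-run would preview the sync before you commit to it:

```shell
# Scaffold a single post in the frontmatter format shown above.
# File name and slug match so the mapping stays obvious; only the
# slug actually matters to the sync.
mkdir -p posts
cat > posts/migrated-to-bun.md <<'EOF'
---
title: Why We Migrated to Bun
slug: migrated-to-bun
tags: [bun, runtime, performance]
status: published
---

Body of the post goes here.
EOF

# Sanity-check: every file should declare a slug in its frontmatter
grep -l '^slug:' posts/*.md
```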
This is where CI/CD gets interesting. A technical blog whose content lives in a repository can set up a GitHub Action that runs postlark deploy ./content on every push to main. Write in your editor, commit, push — published. No dashboard visits. No manual copy-paste into a web form. The repository is the blog.
We use this pattern ourselves for documentation pages, and a few early users have set up Hugo-style workflows where they author in VS Code, preview locally, and deploy to Postlark on merge.
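A sketch of that Action, for reference. The workflow file name, the secret name POSTLARK_API_KEY, and the ./content path are our assumptions here, not project conventions:

```yaml
# .github/workflows/publish.yml (illustrative, not an official template)
name: Publish to Postlark
on:
  push:
    branches: [main]

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Deploy content
        run: |
          npx @postlark/cli login "$POSTLARK_API_KEY"
          npx @postlark/cli deploy ./content
        env:
          POSTLARK_API_KEY: ${{ secrets.POSTLARK_API_KEY }}
```

The login step is the direct-key form from earlier, fed from a repository secret, so no credentials ever live in the repo itself.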
Sharing guts with the MCP server
One decision that saved us real headaches: the CLI and the MCP server share the same API client. Both import from @postlark/shared, a small internal package that handles authentication, request formatting, and error handling. Fix a bug in the client, both tools get the fix. Add a new API method, both surfaces can expose it immediately.
This wasn't the original plan. The MCP server had its own client first. When we started building the CLI, the temptation was to copy-paste and adapt. We refactored instead — extracted the client into a shared package, adjusted both consumers. Added maybe a day to the timeline but eliminated an entire class of "works in MCP but not in the CLI" divergence bugs.
What we deliberately left out
No postlark edit command that opens $EDITOR with post content, lets you modify it in-place, and saves on close. We talked about it. Felt like scope creep for a first release, and posts update --file covers the same ground with one extra step.
No theme management either. Themes involve CSS and visual previewing — a fundamentally graphical task. Forcing that into a terminal felt wrong. The dashboard handles themes. The CLI handles content and configuration. Clean boundary.
No interactive REPL. No postlark shell that drops you into a persistent session with tab completion and state. Maybe later. The stateless command structure works well enough for scripting and one-off operations, which is what people actually do.
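Stateless commands also compose into ad-hoc batches with ordinary shell. A sketch, using only commands shown earlier (the drafts/ directory and the title-from-filename derivation are our own invention):

```shell
# Create one post per draft file; derive a title from each filename.
for f in drafts/*.md; do
  title=$(basename "$f" .md | tr '-' ' ')
  postlark posts create --title "$title" --file "$f"
done
```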
The terminal as a publishing surface
There's something satisfying about typing postlark deploy ./posts and watching five articles go live in two seconds. No loading spinners, no modals asking "are you sure?", no toast notifications fading in and out. Just a list of what changed and a confirmation that it's done.
The CLI isn't replacing the dashboard — it's a second door into the same room. One that fits better when you're already thinking in commands and pipes. For automated publishing workflows, it's the only door that makes sense.
Next up in this series: what Postlark does with your Markdown once it arrives. GFM tables, callouts, Mermaid diagrams, math rendering — and a few extensions we added because we kept wanting them ourselves.