Most blog platforms come with a familiar ritual. On first visit, a banner slides in asking you to accept cookies. Behind it sits a JavaScript tracker recording clicks, scrolls, sessions, time-on-page. It's the default everywhere. We wanted to skip the whole thing.
What Bloggers Actually Need to Know
When we started building analytics for Postlark, the first question wasn't "what can we collect" but "what's genuinely useful for someone running a blog?"
The list was shorter than we expected:
Which posts get read
Where the traffic comes from
Whether readership is growing or shrinking
That covers about 95% of what matters. Nobody needs a heatmap to tell if a blog post landed. View counts, referrer data, and trend lines are enough. The rest is vanity.
All Server-Side, No Client-Side Scripts
The core decision: every analytics event is captured on the server when a request comes in. No JavaScript runs in the reader's browser. No cookies are set. No consent banner is needed.
This sounds limiting, but it captures everything on that shortlist. The server sees the request URL (which post was loaded), the Referer header (where someone came from), geographic headers from the CDN (country-level location), and the User-Agent string (device type). Standard HTTP headers that every web server already receives. Nothing exotic.
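For the curious, here's a minimal sketch of what that capture step can look like. This isn't Postlark's actual code; the Fetch-style handler and the cf-ipcountry geo header are assumptions (Cloudflare-style), but every field comes from headers the server already has:

```typescript
// Minimal sketch of server-side capture. Assumes a Fetch-API-style Request
// and a Cloudflare-style "cf-ipcountry" header; both are illustrative.
interface PageView {
  path: string;      // which post was loaded
  referrer: string;  // where the reader came from
  country: string;   // country-level location from the CDN
  userAgent: string; // used for device type and, later, bot filtering
}

function captureView(request: Request): PageView {
  const url = new URL(request.url);
  return {
    path: url.pathname,
    referrer: request.headers.get("referer") ?? "direct",
    country: request.headers.get("cf-ipcountry") ?? "unknown",
    userAgent: request.headers.get("user-agent") ?? "",
  };
}
```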
What we lose: scroll depth, time on page, click tracking, session replays. For an e-commerce team optimizing checkout conversion funnels, those metrics matter. For a blogger checking whether anyone read their Tuesday post, they genuinely don't.
The Bug Where Nothing Got Counted
Here's a good one. We shipped the first version of the analytics dashboard and watched the numbers. They were... zero. Flat line across every blog, every post, every day.
We assumed it was because the platform was new and nobody was reading yet. That was partially true, but the view counter was also broken. The path-matching logic had a condition that evaluated to false on every single request, so views were being silently discarded. Every visitor got their page served perfectly fine; nobody got counted. We only caught it when we added logging for a completely different feature and noticed the analytics code path never executed.
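The exact condition isn't worth reprinting, but the shape of the bug is common enough to sketch. Imagine paths arriving without a trailing slash while the check expects one:

```typescript
// Hypothetical reconstruction of the bug class, not our actual code.
// If incoming paths are normalized without a trailing slash ("/posts/hello"),
// a condition expecting one is false on every single request.
function isPostView(path: string): boolean {
  return path.startsWith("/posts/") && path.endsWith("/"); // never true here
}

function handleRequest(path: string, countView: (p: string) => void): void {
  if (isPostView(path)) {
    countView(path); // dead branch under the buggy condition
  }
  // The page is served regardless, so nothing visibly breaks.
}
```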
Once that was fixed, the numbers swung hard in the other direction — suddenly everything looked wildly popular. Thousands of views on blogs with no real audience.
That was the bot problem.
Googlebot, Bingbot, GPTBot, ClaudeBot, Bytespider, and a whole zoo of crawlers were all getting tallied as genuine readers. Each one faithfully loaded every page and incremented the counter. A brand-new blog with three subscribers was showing traffic numbers that would make a mid-size publication jealous.
The fix was a User-Agent filter — incoming requests get checked against a list of known bot patterns (around 20 and growing) and excluded from view counts. The crawlers still get their pages. We like being crawled and indexed. They just don't inflate anyone's dashboard anymore.
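In sketch form it's a pattern match over the User-Agent string. The list below is illustrative, not our full set:

```typescript
// Illustrative bot filter; the production pattern list is longer
// and gets updated as new crawlers show up.
const BOT_PATTERNS: RegExp[] = [
  /googlebot/i,
  /bingbot/i,
  /gptbot/i,
  /claudebot/i,
  /bytespider/i,
  /crawler|spider/i, // generic catch-all for self-identifying crawlers
];

function isBot(userAgent: string): boolean {
  return BOT_PATTERNS.some((pattern) => pattern.test(userAgent));
}
```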
Two bugs, opposite directions. One gave us zero, the other gave us infinity. The real number was somewhere quietly in between.
Unique Visitors Without Fingerprinting
Raw page views are noisy. One person refreshing five times looks identical to five different readers. We wanted unique visitor counts, but browser fingerprinting — the technique where you combine screen resolution, installed fonts, timezone, and other browser quirks into a pseudo-identity — was exactly the kind of tracking we were trying to avoid.
Instead, visitor IPs get hashed with the current date and checked against a short-lived record. If the hash hasn't appeared today, it counts as a new unique visitor. The hash expires within 24 hours. Raw IPs are never stored anywhere. There's no way to reconstruct who visited what — just a daily tally of distinct visitors.
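Here's a sketch of how such a check can work, assuming Node's built-in crypto and a key-value store with TTL support. The store interface is hypothetical, and production systems often mix in a rotating secret salt so daily hashes can't be brute-forced from the IP space:

```typescript
import { createHash } from "node:crypto";

// Sketch of the daily-unique check. The TtlStore interface is hypothetical;
// any Redis-style store with key expiry would fit.
interface TtlStore {
  // Returns true only if the key did not already exist.
  setIfAbsent(key: string, ttlSeconds: number): Promise<boolean>;
}

function dailyVisitorKey(ip: string, date: string): string {
  // Hash of IP + date; the raw IP is never written anywhere.
  return createHash("sha256").update(`${ip}:${date}`).digest("hex");
}

async function isNewVisitorToday(ip: string, store: TtlStore): Promise<boolean> {
  const today = new Date().toISOString().slice(0, 10); // "YYYY-MM-DD"
  // The 24-hour TTL means the hash expires on its own.
  return store.setIfAbsent(dailyVisitorKey(ip, today), 24 * 60 * 60);
}
```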
It's less precise than cookies. Someone switching between their phone and laptop counts twice. Coworkers behind a shared office IP might get merged into one. For a blog's purposes, that margin of error is perfectly acceptable. You want to know "roughly 200 people read this," not "exactly 187 unique sessions with a 99.7% confidence interval."
The Dashboard
Four headline numbers greet you: page views and unique visitors, each for the last 7 and 30 days. Below that, a daily bar chart plotting both metrics over time. And a ranked list of your top-performing posts.
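Under the hood that's a simple rollup over pre-aggregated daily rows. A sketch; the row shape here is illustrative, not a published schema:

```typescript
// Illustrative rollup over pre-aggregated daily rows.
interface DailyStat {
  date: string; // "YYYY-MM-DD"
  views: number;
  uniqueVisitors: number;
}

function headline(days: DailyStat[], window: 7 | 30) {
  const recent = days.slice(-window); // assumes ascending date order
  return {
    views: recent.reduce((sum, d) => sum + d.views, 0),
    // Note: summing daily uniques counts a returning reader once per day,
    // which is the tradeoff of keeping only short-lived daily hashes.
    uniqueVisitors: recent.reduce((sum, d) => sum + d.uniqueVisitors, 0),
  };
}
```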
Click into any individual post to see its own view history — handy for spotting the moment something gets picked up by a newsletter or starts ranking in search results.
Higher-tier plans unlock additional dimensions: traffic sources parsed from Referer headers (direct visits, search engines, social platforms, and increasingly AI search tools), geographic distribution by country, and device type splits. The AI search referral data is one we're watching closely — as more readers discover blog content through tools like Perplexity, ChatGPT Search, and Google's AI overviews, that channel is becoming too significant for bloggers to fly blind on.
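Classifying a Referer into those buckets is mostly a hostname lookup. A sketch, with domain lists that are examples rather than the full mapping we maintain:

```typescript
// Illustrative referrer classification; domain lists are examples only.
type Source = "direct" | "search" | "social" | "ai" | "other";

const SEARCH = ["google.", "bing.", "duckduckgo."];
const SOCIAL = ["facebook.", "linkedin.", "reddit.", "x.com"];
const AI = ["perplexity.ai", "chatgpt.com"];

function classifySource(referrer: string): Source {
  if (!referrer || referrer === "direct") return "direct";
  let host: string;
  try {
    host = new URL(referrer).hostname;
  } catch {
    return "other"; // malformed Referer header
  }
  if (AI.some((d) => host.includes(d))) return "ai";
  if (SEARCH.some((d) => host.includes(d))) return "search";
  if (SOCIAL.some((d) => host.includes(d))) return "social";
  return "other";
}
```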
What We Chose Not to Build
No client-side tracking means certain features will never exist natively on Postlark. Session recordings, conversion funnels, real-time visitor maps, A/B testing integrations — all permanently off the table.
For a SaaS product optimizing signup flows, those are table stakes. For a blog, we think they're overkill. And bloggers who disagree aren't locked out — Postlark supports custom HTML injection, so you can drop in Google Analytics, Plausible, Fathom, or whatever you prefer. We just won't be the ones adding tracking scripts by default.
Every Postlark blog loads with zero tracking JavaScript out of the box. Readers get a fast, clean experience. Bloggers get the numbers that actually matter. And nobody has to click "Accept All Cookies."