The Organic Social Analytics Engine: 9 Live Views, $0/month, Zero Engineers
A production analytics dashboard for Shikho's organic social content. 9 interactive views, AI-classified content, reach-weighted statistics with adaptive sample-size floors, theme-aware design, all on a $0/month stack. Built solo, end-to-end: Next.js, Recharts, Google Sheets backend, Claude classifier in the weekly pipeline. Live for the team to make weekly content decisions.
Shikho's organic social work needed a way to look at itself: posts, reach, engagement, what's working, what's not. The Marketing org had no engineering bandwidth to build it, so I built it myself, end-to-end, and it now drives the team's weekly content decisions.
Live views
9
Each view answers a specific question a planner asks before publishing
Operating cost
$0/mo
Zero Claude API calls in the dashboard; all classification happens in the weekly pipeline
Engineering team
0
Data refresh
5 min
Reads cached Google Sheet populated by Python pipeline
The Problem
Shikho's organic social was producing real volume across Facebook, YouTube, and Instagram. Reach existed; outcomes existed. What did not exist was the answer to the question every planner asks on Monday: "what worked last week, and what should we publish next week?"
The team's options were the platforms' native analytics (siloed, inconsistent definitions, no cross-platform view), or expensive social analytics SaaS that would have run six figures annually for a feature set we did not need. Engineering had no bandwidth. The choice was buy, do without, or build it myself. I picked the third.
What Got Built
A two-part system: a Python weekly pipeline (separate repo) that fetches posts, classifies them with Claude, computes derived metrics, and writes everything to a Google Sheet. And a Next.js dashboard that reads that sheet and renders nine interactive views. To the team, the dashboard is the deliverable. Operationally, the pipeline is the moat: classifier prompts, schema, automated diagnosis, all version-controlled.
The dashboard ships every chart with a metadata layer that mirrors how the team actually consumes data: provenance kind (observed from platform vs AI-classified vs derived), definition tooltip with the formula, sample-size badge, plain-English caption explaining the read, and an accessible "view as table" disclosure for keyboard and screen-reader users. The same vocabulary became the design pattern I later ported to this portfolio's case studies.
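A minimal sketch of that metadata contract as a shared type; the field names and example values here are illustrative, not the production schema:

```typescript
// Hypothetical shape of the per-chart metadata layer (names illustrative).
type Provenance = "observed" | "ai-classified" | "derived";

interface ChartMeta {
  title: string;
  provenance: Provenance;  // rendered as a badge beside the chart title
  definition: string;      // tooltip text carrying the formula
  sampleSize: number;      // N behind the chart, shown as a badge
  caption: string;         // plain-English explanation of the read
  tableFallback: boolean;  // enables the accessible "view as table" disclosure
}

// Illustrative instance for a reach-weighted engagement chart.
const engagementByFormat: ChartMeta = {
  title: "Engagement rate by format",
  provenance: "derived",
  definition: "Sum of interactions / sum of reach, per format",
  sampleSize: 0, // populated from the filtered post set at render time
  caption: "Which formats earn engagement relative to the reach they get.",
  tableFallback: true,
};
```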
Live Dashboard
Production deployment behind team auth. Ask for a guided tour or a screen-share walk-through, and I will show the actual surface as it operates.
Request a walk-through
The 9 Views
01
Overview
Headline KPIs, reach trend, format mix, content-pillar reach, biggest movers
02
Trends
Daily posting volume, daily reach, week-over-week deltas, multi-metric selector
03
Engagement
Format performance, hook types, engagement mix, format × hour heatmap
04
Reels
Watch time, retention funnel, follower conversion, top reels by metric
05
Timing
Day × hour heatmaps for engagement rate, reach, optionally shares or interactions
06
Diagnosis
Claude's weekly diagnosis with collapsed cards, headline metrics pulled to inline pills, week selector
07
Plan
Next week's content calendar with hook lines, expected reach, AI-generated narrative
08
Outcomes
Last week vs prior, predicted vs actual, AI accuracy by metric
09
Explore
Filter-first workbench: sticky filters, paginated top posts, slice-and-dice charts, multi-metric composite ranking
Three Decisions That Defined the Build
Decision 01
Classifier in pipeline, not in dashboard
Claude classifies post content (pillar, hook type, format, sentiment) once a week in the Python pipeline; classifications are written to the sheet. The dashboard makes zero Claude API calls. Result: $0 marginal cost per user interaction; the dashboard scales for free with team usage.
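Roughly, each sheet row carries the observed platform metrics and Claude's weekly classifications side by side, so the dashboard only ever reads. A sketch of that boundary with assumed column names:

```typescript
// Assumed shape of one sheet row as the dashboard consumes it.
// Observed columns come from the platform APIs; classified columns are
// written once a week by the Python pipeline's Claude pass.
interface PostRow {
  // Observed (platform)
  postId: string;
  platform: "facebook" | "youtube" | "instagram";
  publishedAt: string; // ISO date
  reach: number;
  interactions: number;

  // AI-classified (weekly pipeline; never computed in the dashboard)
  pillar: string;
  hookType: string;
  format: string;
  sentiment: "positive" | "neutral" | "negative";
}
```

Because classifications arrive pre-computed, a page load is a cached sheet read plus client-side aggregation, nothing more.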
Decision 02
Reach-weighted stats with adaptive min-N
A naive per-post engagement-rate average lets a single 5-view post claim "best format". Reach-weighting (sum of interactions / sum of reach) prevents that. A `minPostsForRange` helper sets the sample-size floor (3, 5, or 10 posts, depending on date range) so tiny buckets do not crown false winners. The same statistical apparatus runs on every page that ranks anything.
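In code the guardrail is two small helpers. A sketch under the decision's own terms: the 3/5/10 floors come from the text above, while the range labels and signatures are illustrative.

```typescript
// Reach-weighted engagement rate: sum interactions over sum reach,
// so a single 5-view post cannot dominate a per-post average.
function reachWeightedEngagement(
  posts: { reach: number; interactions: number }[]
): number | null {
  const reach = posts.reduce((sum, p) => sum + p.reach, 0);
  if (reach === 0) return null; // no reach, no rate
  const interactions = posts.reduce((sum, p) => sum + p.interactions, 0);
  return interactions / reach;
}

// Adaptive sample-size floor: longer date ranges demand more posts per
// bucket before that bucket is allowed to rank. Range labels assumed.
type DateRange = "7d" | "30d" | "90d";

function minPostsForRange(range: DateRange): number {
  switch (range) {
    case "7d":
      return 3;
    case "30d":
      return 5;
    case "90d":
      return 10;
  }
}

// A bucket only enters a ranking if it clears the floor.
function rankable(bucketSize: number, range: DateRange): boolean {
  return bucketSize >= minPostsForRange(range);
}
```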
Decision 03
Sheets backend, not Postgres
A real database would have been over-engineering. The data volume (a few thousand posts per quarter) fits comfortably in a Google Sheet. Sheets gives us a free admin UI, free auth, a free editor for one-off corrections, and a free API. Vercel reads it with a 5-minute revalidate. Trade-offs: no joins, no transactions, eventual consistency. Acceptable for an analytics layer, blocking for transactional systems.
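The cache boundary is one fetch option in the dashboard's data layer. A minimal sketch assuming Next.js App Router and a sheet published as CSV; the URL and parsing are placeholders:

```typescript
// Next.js fetch with a 5-minute revalidate window: Vercel serves the
// cached response and re-reads the sheet at most once every 300 seconds.
const SHEET_URL = process.env.SHEET_URL!; // placeholder for the published-sheet endpoint

export async function getRows(): Promise<string[][]> {
  const res = await fetch(SHEET_URL, {
    next: { revalidate: 300 },
  });
  if (!res.ok) throw new Error(`Sheet fetch failed: ${res.status}`);
  const csv = await res.text();
  // Naive CSV split for illustration; production parsing is stricter.
  return csv.trim().split("\n").map((line) => line.split(","));
}
```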
What the Dashboard Powers
The Marketing team uses the dashboard for the weekly content review. The Diagnosis page surfaces Claude's read of the prior week (what underperformed, what overperformed, why); the Plan page surfaces next week's calendar with hook lines and expected reach. The Outcomes page tracks predicted-vs-actual so the AI's accuracy is visible per metric, not a black box. Three other functional teams (Brand, Knowledge, CX) use the Engagement and Timing pages for their own weekly planning. The dashboard does not replace human decisions; it removes the half-hour of spreadsheet wrangling that used to precede them.
Learnings
Marketing-built tooling closes the loop faster
The PM, the engineer, and the user are the same person
No PRD, no sprint, no JIRA. Friday: notice a chart that is not answering the right question. Saturday: ship the change. Production analytics as a one-person feedback loop is a different operating zone from enterprise tooling.
$0 forces good architecture
Free-tier constraints are honest constraints
No backend database. No Claude calls in the user path. Strict Sheets caching. The constraints made the system simpler and more legible than a "proper" stack would have been. The trade-offs are documented in DECISIONS.md, not hidden in cloud bills.
Methodology layers > pretty charts
Sample size, definition, provenance, caption
Every chart's title row carries provenance kind, definition tooltip, sample-size badge, and a caption explaining the read. The metadata is the difference between a dashboard the team trusts and one they audit before quoting.
Stack
Next.js 14, React 18, Recharts, Tailwind, Vercel, Google Sheets, Python, Claude API
Team
Solo build, end-to-end. Used by 4 cross-functional teams.
Timeframe
7 sprints over 4 months, ongoing
Role
Architect, builder, operator