A Live Cross-Functional Improvement Loop: Monthly Churn Feedback to Product Pipeline
262 · Live Responses, March 2026
5 teams · Cross-Functional Loop
21% · False-Exit Discovery
3 · Product Fixes Now Shipping
1. The Leakage Was a Number, Not a Story
Jul 2025 · 51% · 98K uninstallers from 192K registered base
Jul to Dec 2025 avg · 46% · 81K avg uninstallers from 177K avg registered base
Jan 2026 · 31% · 69K uninstallers from 224K registered base, lowest in 12 months
The leak rate improved 20 points in six months. A quantitative recovery, a qualitative blackout. We could not tell whether product changes, seasonal effects, better targeting, or something else was driving it. The chart told us how much. It did not tell us why.
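The leak rate above is simple arithmetic: uninstallers divided by the registered base. A minimal sketch reproducing the three data points (figures from the table above; the function name is illustrative):

```python
def leak_rate(uninstallers: int, registered: int) -> int:
    """Monthly leak rate as a whole-number percentage."""
    return round(uninstallers / registered * 100)

# Figures from the table above.
print(leak_rate(98_000, 192_000))   # Jul 2025
print(leak_rate(81_000, 177_000))   # Jul-Dec 2025 avg
print(leak_rate(69_000, 224_000))   # Jan 2026
```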
2. The Sequence
- Feb 2026: phone-call attempt
13 Class 9 users called Feb 3 to 9. 31% gave unclear reasoning. Heavy ops, tiny sample, no signal. The method failed before insight could form.
- March 2026: paid-retargeting pivot, first full cycle
Ads served to recently uninstalled users (registered, used 60+ minutes, uninstalled within 7 days). One open-ended question. 262 responses in 22 days, coded into a 10-category taxonomy.
- April 2026: cross-functional review
Five teams (Product, Brand, Customer Support, Growth, Knowledge) reviewed the coded responses together. 10 action items derived, each with a named owner and a decision status. A permanent thread was started for monthly continuity.
- May 2026 onwards: continuous loop
Same shape every month. 3rd to 25th: collection. Last week: review, action items, status updates. Existing items shift status, new items are added, and the tracker compounds.
3. Why Live Beats Stale
Stale surveys · days or weeks after the event
- Reasoning has been rationalised
- Survey form creates social pressure
- Recall fades, friction memory degrades
- Heavy operational lift to scale
Live retargeting · within days of the uninstall
- Friction memory still active
- Free-form text, no social pressure
- Users respond on their own time
- Scalable: paid ads, low cost, wide surface
4. The Cycle
Step 1: collect
3rd to 25th. Paid retargeting to recent uninstallers. Free-form text capture.
Step 2: code
Last week. 10-category taxonomy. Severity tagged.
Step 3: review
5 teams in one session. Each reads its signal, owns its action.
Step 4: act
~10 action items. Named owner, decision status, timeline.
Step 5: document
Permanent thread updated. Continuity preserved across months.
Step 6: ship
Top product fixes move into active pipeline within the cycle.
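The monthly schedule in steps 1 and 2 can be sketched as a date-to-phase helper. This is a hypothetical illustration of the calendar logic, not production code; the phase names and the handling of the 1st-2nd buffer are assumptions:

```python
from datetime import date
import calendar

def cycle_phase(d: date) -> str:
    """Map a calendar date to the monthly loop phase (thresholds from the cycle above)."""
    last_day = calendar.monthrange(d.year, d.month)[1]
    if 3 <= d.day <= 25:
        return "collect"          # paid retargeting live, free-form capture
    if d.day > last_day - 7:
        return "code-review-act"  # taxonomy coding, 5-team review, action items
    return "buffer"               # assumed: 1st-2nd and any gap before the last week
```

For example, `cycle_phase(date(2026, 3, 10))` falls in collection, while the 28th falls in the review week.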
5. March 2026 Results: Exit Signal Distribution
262 responses, coded into 10 categories. Severity tagged for triage. Sorted by count.
Category | Count | Share | Severity
No reason or vague | 73 | 28% | No signal
False or temporary exit | 56 | 21% | Recoverable
Content quality | 38 | 15% | Critical
Pricing communication gap | 24 | 9% | Critical
Mentor or support | 18 | 7% | Operations
App and tech issues | 16 | 6% | Critical
Curriculum mismatch | 14 | 5% | Operations
Outbound call issues | 9 | 3% | Operations
The standout finding
21% of "churned" users had not actually churned.
Monthly avg uninstaller volume · ~81K
False-exit share (March cycle) · 21%
Implied monthly over-count · ~17K users
Ramadan break. Exam cycles. Illness. Device sharing across siblings. Returning users not refreshed in our targeting list. The dashboard "leak" was over-counting real churn by roughly a fifth every month. Re-engagement budget was talking to users who had never left.
Action shipped within the cycle: re-engagement audience cleanup with dynamic exclusion of returners. Stale list logic replaced with live exclusion.
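The cleanup replaces a static uninstaller list with a live exclusion of returners. A minimal sketch of that filter, assuming user IDs arrive as simple sets (function and variable names are hypothetical):

```python
def build_reengagement_audience(uninstalled: set[str],
                                recently_active: set[str]) -> set[str]:
    """Exclude users who have already returned; target only genuine exits."""
    return uninstalled - recently_active

# Illustrative scale from the figures above: ~81K monthly uninstallers
# with a 21% false-exit share implies roughly 81_000 * 0.21 ≈ 17K users
# wrongly targeted each month under the stale-list logic.
```

A usage example: `build_reengagement_audience({"u1", "u2", "u3"}, {"u2"})` drops the returner `u2` and keeps the two genuine exits.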
6. The 10 Action Items, by Status
Cross-functional review converted signal into a concrete distribution of work. Status tags are decisions, not aspirations.
Shipped within cycle
2 items
Re-engagement audience cleanup
Growth / DM · Operational
FutureBook delivery audit
Ops & Rasel · Completed
In active product pipeline
3 items
Pricing visibility on course cards (mobile to match web's quarterly pricing)
Product · Agreed, building
FutureBook QR and deep-link flow fix
Product · Agreed, building
Missing institutions in registration dropdown
Product & Knowledge · Agreed, building
Already in roadmap
3 items
Recorded class discoverability
Product
Chapter-wise navigation
Product
Leaderboard and report card fixes
Product / Tech
Deferred (documented)
2 items
Live comment and teacher inbox SLA
Product / Knowledge · Re-eligible next cycle
Teaching-time ratio tracking
Product / Knowledge · Re-eligible next cycle
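The tracker behind this section can be modelled as a small data structure whose status values mirror the four groups above. A sketch under that assumption (the class and field names are illustrative, not the team's actual tooling):

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    SHIPPED = "Shipped within cycle"
    IN_PIPELINE = "In active product pipeline"
    IN_ROADMAP = "Already in roadmap"
    DEFERRED = "Deferred (documented)"

@dataclass
class ActionItem:
    title: str
    owner: str
    status: Status
    re_eligible: bool = False  # deferred items return for consideration next cycle

# A few items from the list above, as examples.
items = [
    ActionItem("Re-engagement audience cleanup", "Growth / DM", Status.SHIPPED),
    ActionItem("Pricing visibility on course cards", "Product", Status.IN_PIPELINE),
    ActionItem("Teaching-time ratio tracking", "Product / Knowledge",
               Status.DEFERRED, re_eligible=True),
]
```

Because status is an explicit enum rather than free text, each monthly review can shift items between states without losing history, which is what lets the tracker compound across cycles.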
7. The Three Real Product Breaks Now Shipping
Of 10 action items, three were genuine product breaks: shipped functionality that does not work as intended. Not subjective preferences, not unavoidable noise. All three are now in the active product pipeline:
Fix 1: Pricing visibility
Sticker shock at price card
Web shows quarterly pricing; mobile shows the full price (৳15K to ৳20K). Users perceive the course as expensive and churn at signup.
Fix: align mobile with web. Long-term: monthly-equivalent framing (৳1000/month) without changing billing.
Fix 2: FutureBook QR flow
QR scan from phone camera fails
Users assume the phone camera will scan the QR. It must be scanned from inside the app. Hard drop-off.
Fix: intermediary instruction screen prompting users to download or open the app first.
Fix 3: Missing institutions
Technical schools not in dropdown
Users from technical schools or colleges cannot complete registration because their institution is not listed.
Fix: expand institution database, add fallback entry option.
8. Key Learnings
Live beats stale
Capture within days of the event
Fresh reasoning is far more reliable than rationalised reasoning. Surveys ask "why did you leave six weeks ago?" and get cleaned-up answers.
Retargeting as research
Ad spend can buy qualitative signal
Most growth teams use retargeting to win users back. We use it first to ask why they left, before deciding whether to pursue them.
21% is not real churn
Some "churn" is hidden return
Until you ask, you cannot know. Re-engagement budget targeted users who had not actually left. Audience cleanup followed within the cycle.
Outbound calls were causing churn
The retention system was a churn driver
A student blocked all Shikho phone numbers because the retention team's calls were too frequent. The loop reveals friction your own teams are creating.
Cross-functional ownership
Marketing alone cannot own follow-through
7 of 10 March action items required Product. The loop only works because the workload is distributed and named at every step.
Filter at every level
Different audiences, different outputs
262 responses, 10 categories, 10 action items, 5 teams, 3 priority fixes. Each step has a different audience and a different output.