Streampace docs

Reports library

Streampace ships 22 reports across 6 categories. Every report answers one question, runs one statistical method, and surfaces one decision you can act on.

The math underneath is borrowed from places that take this stuff seriously — survival analysis from medical research (Kaplan-Meier), regression from econometrics, paired t-tests from clinical trials, Gini from economics, z-score from engineering quality control. The plain-language explanations below describe what each one tells you and what to do about it.

Performance

How a stream went.

Stream recap

Question
How did tonight’s stream go?
Method
Just numbers — total ◆, top gifters tonight, pace, vs your lifetime, vs your last stream.
Use it for
The post-stream debrief. The first thing a creator opens after going offline.

Top gifters leaderboards

Question
Who are my biggest fans, this stream / this week / this month / lifetime?
Method
Just rankings, four windows.
Use it for
Know who to thank by name, spot a new top gifter rising, see who’s slipping.

Stream shape

Question
How long should my stream be, and how soon should I go again?
Method
Two charts — avg ◆ per stream by duration band (finds the sweet spot before diminishing returns kick in), and ◆ vs gap since prior stream (does back-to-back help or hurt).
Use it for
Schedule decisions. “Streams over 3h drop in ◆/min” or “I earn more if I take a day off between.”

Cohort retention matrix

Question
Which streams built a sticky audience vs flash-in-the-pan?
Method
Triangle grid — for each stream, what % of its first-time gifters came back 1, 2, 3, …N streams later.
Use it for
Identify what kinds of streams build a real fan base vs one-night spikes.
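The triangle described above can be sketched in a few lines of Python. The data shape here (an ordered list of gifter-id sets, one per stream) is an assumption for illustration, not Streampace's actual schema:

```python
def retention_triangle(streams):
    """streams: ordered list of sets of gifter ids, one set per stream."""
    seen = set()
    first_timers = []                    # gifters debuting in each stream
    for gifters in streams:
        new = gifters - seen
        first_timers.append(new)
        seen |= gifters

    triangle = []
    for i, cohort in enumerate(first_timers):
        row = []
        for later in streams[i + 1:]:    # % of the cohort returning k streams later
            if cohort:
                row.append(round(100 * len(cohort & later) / len(cohort), 1))
            else:
                row.append(None)         # no first-timers that stream
        triangle.append(row)
    return triangle

# Three streams: one sticky cohort, one that fully returns.
print(retention_triangle([{"a", "b", "c"}, {"a", "d"}, {"b", "d", "e"}]))
# → [[33.3, 33.3], [100.0], []]
```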

Stream cadence optimizer

Question
Should I stream more often or less often?
Method
Buckets your closed streams by gap-since-prior-stream into five cadence bands (<1d, 1–2d, 2–3d, 3–5d, 5+d) and reports the median ◆ per band. The eligible band with the highest median is the recommendation — the one that pays best for you specifically, not a generic benchmark.
Use it for
Schedule decisions. Some creators leave money on the table by streaming daily (audience fatigue); others by waiting a week between sessions. Answers the question from your own data.
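The band-and-median logic above can be sketched as follows. Band edges come from the report; the input shape (gap-in-days, diamonds pairs) and the `min_n` eligibility cutoff are assumptions for illustration:

```python
from statistics import median

BANDS = [("<1d", 0, 1), ("1-2d", 1, 2), ("2-3d", 2, 3),
         ("3-5d", 3, 5), ("5+d", 5, float("inf"))]

def recommend_cadence(streams, min_n=3):
    """streams: list of (gap_days_since_prior_stream, diamonds)."""
    by_band = {name: [] for name, *_ in BANDS}
    for gap, diamonds in streams:
        for name, lo, hi in BANDS:
            if lo <= gap < hi:
                by_band[name].append(diamonds)
                break
    # a band is eligible only with enough samples to trust its median
    eligible = {b: median(v) for b, v in by_band.items() if len(v) >= min_n}
    return max(eligible, key=eligible.get), eligible
```

Calling it on a creator whose 1–2 day gaps pay best returns that band as the recommendation, along with the per-band medians for context.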

Go-live time optimizer

Question
What hour should I start streaming?
Method
Buckets your closed streams by start-hour (in your local timezone) and reports the per-hour median ◆ with p25/p75. Surfaces the top-3 eligible start-hours with their % uplift vs your overall median. The prescriptive counterpart to the descriptive day×hour gift heatmap — that one shows when gifts land; this one shows what start-time pays.
Use it for
Pre-stream planning. "Going live at 8pm beats 10pm by 18% on a typical Saturday for me." Validate the recommendation with an A/B-tagged stream on the recommended hour.
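A minimal sketch of the start-hour ranking, using the standard library's `statistics.quantiles` for p25/median/p75. The input shape (start-hour, diamonds pairs) and the `min_n` cutoff are assumptions:

```python
from statistics import median, quantiles

def best_start_hours(streams, min_n=3, top=3):
    """streams: list of (local_start_hour, diamonds) for closed streams."""
    overall = median(d for _, d in streams)
    by_hour = {}
    for hour, diamonds in streams:
        by_hour.setdefault(hour, []).append(diamonds)

    ranked = []
    for hour, vals in by_hour.items():
        if len(vals) < min_n:
            continue                           # too few streams to trust
        p25, p50, p75 = quantiles(vals, n=4)   # quartile cut points
        uplift = round(100 * (p50 - overall) / overall, 1)
        ranked.append((hour, p50, p25, p75, uplift))
    ranked.sort(key=lambda r: r[1], reverse=True)
    return ranked[:top]
```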

Audience

Who’s gifting and why.

Gifter survival curves

Question
How long do new gifters stick around after their first gift?
Method
Kaplan-Meier — the same survival math hospitals use for "how long do patients stay healthy." Plots the % of gifters still active after N streams. Handles the "we don’t know yet, they might still be active" case properly instead of pretending those are losses.
Use it for
Spot the drop-off cliff (e.g. "60% of new gifters never come back after their first stream") and watch whether your retention is improving or eroding over time.
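The "handles the we-don't-know-yet case" point is the heart of Kaplan-Meier: still-active gifters are *censored*, so they leave the at-risk pool without counting as losses. A pure-Python sketch (data shape is hypothetical; a production version would use a library such as lifelines):

```python
def km_curve(gifters):
    """gifters: list of (streams_active, still_active).
    still_active=True means censored: they may yet return."""
    events = sorted(gifters)      # ties: observed lapses sort before censored
    at_risk = len(events)
    surv = 1.0
    curve = []
    for t, censored in events:
        if not censored:          # an observed lapse at time t
            surv *= (at_risk - 1) / at_risk
            curve.append((t, round(surv, 3)))
        at_risk -= 1              # censored or not, they leave the pool
    return curve

print(km_curve([(1, False), (1, False), (2, True), (3, False), (5, True)]))
# → [(1, 0.8), (1, 0.6), (3, 0.3)]
```

Note how the censored gifters at times 2 and 5 shrink the at-risk pool without dragging the survival estimate down, which is exactly the correction the naive "treat missing as churned" approach gets wrong.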

Audience health (RFM)

Question
Who’s loyal, who’s slipping, who’s about to churn?
Method
Recency, Frequency, Monetary — each gifter scored on three axes, bucketed into segments (champions, loyal, at-risk, lapsed).
Use it for
Personal outreach. DM the at-risk top gifters before they ghost.
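A toy version of the segmentation. The thresholds below are invented for illustration; a real RFM implementation would cut each axis at quantiles of the full gifter base rather than fixed numbers:

```python
def rfm_segment(days_since_last_gift, gift_count, total_diamonds):
    """Hypothetical fixed cutoffs standing in for quantile-based scoring."""
    r = days_since_last_gift <= 7     # Recency: gifted in the last week
    f = gift_count >= 5               # Frequency: a regular
    m = total_diamonds >= 500         # Monetary: meaningful spend
    if r and f and m:
        return "champion"
    if r and (f or m):
        return "loyal"
    if (not r) and (f or m):
        return "at-risk"              # was valuable, gone quiet
    return "lapsed"
```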

Top gifter concentration (Gini)

Question
How dependent am I on a few big gifters?
Method
Gini coefficient. 0 = every gifter contributes equally; 1 = one top gifter gives everything. A score of 0.85 means losing your top 5 gifters is catastrophic; 0.40 means you’ve got a healthy diversified base.
Use it for
Risk gauge. High Gini = chase diversification; low Gini = lean into your loyalists.
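The coefficient itself is a one-liner over per-gifter totals. This is the standard weighted-rank formula, not necessarily Streampace's exact implementation:

```python
def gini(amounts):
    """Gini over per-gifter diamond totals: 0 = equal, →1 = concentrated."""
    xs = sorted(amounts)
    n, total = len(xs), sum(xs)
    # G = 2 * sum(rank_i * x_i) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([100, 100, 100, 100]))   # perfectly even base → 0.0
print(gini([0, 0, 0, 100]))         # one whale carries everything → 0.75
```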

Lapsed top gifters

Question
Which big gifters have gone quiet?
Method
Top-30 lifetime gifters who haven’t shown up in 14+ days.
Use it for
Re-engagement list. DM them, shout them out when they return.

Engagement

Audience action — likes, shares, comments, follows.

Engagement health per stream

Question
Which streams pulled audience action vs which were quiet?
Method
Per-minute rates of likes / shares / comments / diamonds for each stream, vs the creator’s median across the window. Above-median rows get a ▲ marker; below get a ▼.
Use it for
Vital signs check. Diamonds tell you who paid; engagement rates tell you how alive the room felt. A stream with great engagement but low diamonds is a re-monetisation puzzle, not a quality problem.

Engagement velocity over time

Question
Are engagement rates trending up or down across the last N weeks?
Method
Weekly buckets of likes/shares/comments/diamonds per minute, with a linear regression slope per metric. ±2%/week is the noise floor — anything inside reads as flat.
Use it for
Early warning. Engagement cools weeks before revenue catches up. A falling-likes / flat-diamonds combo says push for fresh content before the gift-rate trails the engagement-rate.
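The slope-plus-noise-floor classification can be sketched with plain least squares. The ±2%/week floor is the report's; expressing the slope as a percentage of the window mean is an assumption for illustration:

```python
def trend(weekly_rates, floor_pct=2.0):
    """weekly_rates: one per-minute rate per week, oldest first."""
    n = len(weekly_rates)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(weekly_rates) / n
    # ordinary least-squares slope in rate units per week
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, weekly_rates))
             / sum((x - mx) ** 2 for x in xs))
    pct_per_week = 100 * slope / my
    if abs(pct_per_week) <= floor_pct:
        return "flat"                 # inside the noise floor
    return "rising" if pct_per_week > 0 else "falling"
```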

Follow → gift conversion funnel

Question
Of viewers who joined, how many engaged, followed, and gifted?
Method
Five-stage cohort funnel: joiners → engaged → followers → gifters → repeat gifters. Plus a sub-stat — of the gifters, what fraction followed BEFORE their first gift (loyal-fan signal) vs after (impulse gifters).
Use it for
Audience-building diagnostic. A funnel that bleeds at engaged→followed = "audience is here but not committing." Bleeds at followers→gifters = "they like me but won’t pay yet."
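The stage counts and the followed-before-gifting sub-stat reduce to simple filtering. The per-viewer event-timestamp dict below is a hypothetical shape, not Streampace's schema:

```python
def funnel(viewers):
    """viewers: list of dicts mapping stage name -> first timestamp (or absent)."""
    stages = ["joined", "engaged", "followed", "gifted", "repeat_gifted"]
    counts = {s: sum(1 for v in viewers if v.get(s) is not None) for s in stages}

    gifters = [v for v in viewers if v.get("gifted") is not None]
    # loyal-fan signal: followed strictly before the first gift
    followed_first = sum(
        1 for v in gifters
        if v.get("followed") is not None and v["followed"] < v["gifted"]
    )
    return counts, followed_first
```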

Engagement day × hour heatmap

Question
When does the audience actually engage — follows, joins, shares, subs?
Method
7×24 grid of event counts by day-of-week and hour-of-day (creator’s local timezone). Optional filter to one event type. Likes and chat aren’t here — those are aggregate-only by design; see engagement-health.
Use it for
Mirror of the gift heatmap. Looking at the two together surfaces "audience engages here but doesn’t gift" mismatches — useful for finding under-monetised time slots.

Forecasts

Where things are headed.

End-of-stream forecast

Question
Mid-stream — where will I likely finish tonight?
Method
Exponential smoothing on per-minute pace. Projects the rest of the stream with a confidence band.
Use it for
Live decisions. "Push for a bigger goal" or "wrap up, this stream’s done."
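A simplified sketch of the projection: simple exponential smoothing on per-minute pace, projected flat over the remaining minutes, with a band scaled from the one-step forecast errors. The smoothing constant and band width are assumptions, not Streampace's tuned values:

```python
def project_finish(pace_per_min, minutes_left, alpha=0.3, band=1.28):
    """pace_per_min: diamonds per minute so far. Returns (low, mid, high)."""
    level = pace_per_min[0]
    errors = []
    for p in pace_per_min[1:]:
        errors.append(p - level)                   # one-step-ahead error
        level = alpha * p + (1 - alpha) * level    # smoothed pace update
    sd = (sum(e * e for e in errors) / len(errors)) ** 0.5

    earned = sum(pace_per_min)
    mid = earned + level * minutes_left
    spread = band * sd * minutes_left ** 0.5       # widens with horizon
    return mid - spread, mid, mid + spread
```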

Pace decay

Question
Are my streams trending up, flat, or eroding?
Method
Linear regression on pace across the last N streams. Outputs a slope (◆/min change per stream) and a category: rising, flat, falling.
Use it for
Monthly check-in. Catches slow drift before it shows up as a "bad month." Sustained negative slope = something needs to change; flat = economy stable.

Live anomaly detector

Question
Alert me NOW when something unusual is happening on the stream.
Method
Real-time z-score of per-minute pace vs the creator’s typical pattern. Fires push alerts on big surges or dips while live.
Use it for
Live decisions. Surge alert at minute 47 = "lean in, this is your moment"; dip alert = "pivot, what you’re doing isn’t landing."
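The detection step reduces to a z-score against a trailing baseline window. The |z| ≥ 2.5 threshold here is an assumption for illustration:

```python
from statistics import mean, stdev

def anomaly(baseline, latest, threshold=2.5):
    """baseline: trailing per-minute pace window; latest: current minute."""
    mu, sd = mean(baseline), stdev(baseline)
    z = (latest - mu) / sd        # standard deviations from typical pace
    if z >= threshold:
        return "surge", round(z, 2)
    if z <= -threshold:
        return "dip", round(z, 2)
    return None, round(z, 2)      # within normal variation, no alert
```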

Experiments

Did the change actually work?

A/B compare

Question
Did changing X actually move the needle, or was it just luck?
Method
Welch’s t-test. Tag streams "version A" vs "version B" (e.g. long intro vs short, weekday vs weekend), the report tells you whether the difference is real or could be chance.
Use it for
Validate strategy changes before committing. Don’t roll out "shorter intros" based on three good streams in a row.
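With tagged streams in hand, the test is one SciPy call; `equal_var=False` is what makes it Welch's rather than Student's. The diamond totals below are invented for illustration:

```python
from scipy.stats import ttest_ind

# per-stream diamond totals under each tag (illustrative numbers)
short_intro = [320, 410, 380, 450, 390, 420]
long_intro = [280, 300, 260, 310, 290, 270]

# Welch's t-test: does not assume the two groups share a variance
t, p = ttest_ind(short_intro, long_intro, equal_var=False)
verdict = "real difference" if p < 0.05 else "could be chance"
```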

Effect size (Cohen’s d)

Question
Even if the change is real, is it BIG enough to care?
Method
Cohen’s d quantifies how much two groups actually differ. 0.2 = small (barely feel it), 0.5 = medium (noticeable), 0.8+ = large (career-altering).
Use it for
Use alongside A/B compare. A statistically-significant 1% lift isn’t worth changing your strategy for.
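The statistic itself is the mean difference over the pooled standard deviation, a few lines on top of the same tagged groups:

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference between two groups of stream totals."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd
```

A group shifted up by exactly one standard deviation yields d = 1.0, comfortably past the 0.8 "large" mark.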

Power & sample size

Question
How many streams do I need to actually prove a hypothesis?
Method
Power analysis. Plug in expected effect size + confidence level → tells you the streams needed.
Use it for
Plan experiments before running them. "Test whether 9pm beats 8pm — you’d need 24 streams to detect a 10% lift; 6 streams is wasted time."
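The required count can be estimated with the normal-approximation formula using only the standard library (an exact t-based answer from a dedicated power library runs slightly higher, so treat this as a planning estimate):

```python
from math import ceil
from statistics import NormalDist

def streams_needed(effect_size, alpha=0.05, power=0.80):
    """Streams per group to detect `effect_size` (Cohen's d) at the
    given significance level and power, two-sided, normal approximation."""
    z = NormalDist().inv_cdf
    n_per_group = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return ceil(n_per_group)

print(streams_needed(0.5))   # medium effect → 63 streams per group
print(streams_needed(0.8))   # large effect → 25 streams per group
```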

Battles

Battle profitability.

Battle profitability

Question
Which opponents and partners are worth my time?
Method
For each battle, calculate the lift in ◆/min vs the creator’s solo baseline. Group by counterpart. Paired t-test flags which lifts are real vs noise.
Use it for
Schedule rematches with profitable opponents, avoid duds, identify partners who carry their weight. Compare-mode shows two creators’ lists side-by-side.
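The pairing matters: each battle's ◆/min is compared against the solo baseline from the same period, so the t-test is the paired variant. A sketch with invented numbers:

```python
from scipy.stats import ttest_rel

# per-battle pace vs the creator's solo baseline in the same period
battle_pace = [14.2, 15.1, 13.8, 16.0, 14.9, 15.5]
solo_baseline = [11.0, 11.4, 10.8, 11.2, 11.1, 11.3]

# paired t-test on the per-battle deltas
t, p = ttest_rel(battle_pace, solo_baseline)
lift_is_real = p < 0.05
```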

Battle lift over time

Question
Is my battle strategy still working — or has the lift faded?
Method
Same per-battle deltas, grouped into weekly buckets. Paired t-test per week tells you which weeks the lift is real, which are noise.
Use it for
Catch "battles used to lift my pace but stopped working three weeks ago" patterns. Compare-mode overlays two creators’ weekly lines.

Want to see this in action against your roster?

Streampace ingests the moment you connect a creator — your first stream is fully captured. Demo against your own creator data on a follow-up call.