How It Works
Scoring Formula
Each sub-metric is scored 0-10 by the LLM. Final video score is 0-100.
base = (title_similarity x 5) + (focus_ratio x 3) + (time_to_content x 2)
penalty = (deception x 2) + (sponsor x 1)
score = base - penalty, clamped to [0, 100]

Channel score = average of its evaluated video scores.
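In code, the scoring is just a weighted sum with a clamp. A minimal sketch, assuming each sub-metric arrives as a 0-10 value (the function names are illustrative, not the production code):

```python
def video_score(title_similarity: float, focus_ratio: float,
                time_to_content: float, deception: float,
                sponsor: float) -> float:
    """Combine the 0-10 sub-metrics into a 0-100 video score."""
    base = title_similarity * 5 + focus_ratio * 3 + time_to_content * 2
    penalty = deception * 2 + sponsor * 1
    return max(0.0, min(100.0, base - penalty))  # clamp to [0, 100]

def channel_score(video_scores: list[float]) -> float:
    """A channel's score is the plain average of its evaluated videos."""
    return sum(video_scores) / len(video_scores)

# A fully honest, on-topic video maxes out: 50 + 30 + 20 - 0 = 100.
assert video_score(10, 10, 10, 0, 0) == 100
```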
What the Metrics Mean
- title_similarity - how closely the video's actual content matches what the title promises.
- focus_ratio - the share of the video that stays on the stated topic.
- time_to_content - how quickly the video gets to the promised content.
- deception - misleading claims, bait-and-switch framing, or manufactured urgency.
- sponsor - the weight of sponsored segments and promotional reads.
LLM Evaluation Process
Each video is evaluated using two parallel LLM calls.
For long transcripts that exceed the model’s context window, we chunk the transcript and aggregate metrics deterministically across all chunks.
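A sketch of that chunked pass. The chunk size and the length-weighted averaging rule are assumptions for illustration; the source states only that the cross-chunk aggregation is deterministic:

```python
def chunk_transcript(transcript: str, size: int = 12_000) -> list[str]:
    """Split a long transcript into fixed-size character chunks.
    12k chars is an assumed stand-in for 'fits the model's context window'."""
    return [transcript[i:i + size] for i in range(0, len(transcript), size)]

def aggregate_metrics(per_chunk: list[dict[str, float]],
                      chunk_lengths: list[int]) -> dict[str, float]:
    """Deterministically fold per-chunk 0-10 scores into one set of scores.
    Shown here as a length-weighted average, so a short trailing chunk
    cannot dominate the video-level result."""
    total = sum(chunk_lengths)
    return {
        key: sum(m[key] * n for m, n in zip(per_chunk, chunk_lengths)) / total
        for key in per_chunk[0]
    }
```

Because the aggregation step is pure arithmetic, re-running it over the same per-chunk scores always yields the same result - the nondeterminism is confined to the LLM calls themselves.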
Video Selection
15 videos are evaluated per channel - a mix of recent uploads and all-time popular videos - so the score reflects both current behaviour and historical patterns. A few exclusions apply (sketched in code after the list below):
- YouTube Shorts are excluded.
- Videos longer than 90 minutes are excluded.
- Visually-driven channels are excluded - transcript-based scoring isn’t a fair measure for them.
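A minimal sketch of the selection filter. The `Video` shape, the `is_short` flag, and the alternating recent/popular split are assumptions; the source specifies only the 15-video mix and the three exclusions (the visually-driven exclusion happens at the channel level, so it isn't shown here):

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    duration_minutes: float
    is_short: bool        # YouTube Shorts flag (assumed available from the API layer)
    published_at: str     # ISO 8601 date; lexicographic sort == chronological
    view_count: int

MAX_MINUTES = 90          # videos longer than 90 minutes are excluded
SAMPLE_SIZE = 15

def eligible(v: Video) -> bool:
    """Per-video exclusions: Shorts and overlong videos."""
    return not v.is_short and v.duration_minutes <= MAX_MINUTES

def select_videos(videos: list[Video]) -> list[Video]:
    """Pick up to 15 eligible videos, alternating between the most recent
    uploads and the all-time most viewed (the even split is an assumption)."""
    pool = [v for v in videos if eligible(v)]
    recent = sorted(pool, key=lambda v: v.published_at, reverse=True)
    popular = sorted(pool, key=lambda v: v.view_count, reverse=True)
    picked: dict[str, Video] = {}
    for r, p in zip(recent, popular):
        for v in (r, p):
            if len(picked) < SAMPLE_SIZE:
                picked.setdefault(v.video_id, v)
    return list(picked.values())
```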
Why This Exists
Our feeds overstimulate us more than ever. Exaggerated titles, sensational previews, fake urgency - it all works amazingly well on our monkey brains. That's a natural consequence of capitalism and of algorithms that have evolved to reward quick hits to our dopamine circuits.
But there should be a counterforce - a trusted platform that holds content creators accountable and rewards the honest ones.
Current Limitations
- Transcripts are the only content input - visuals, tone, editing, and pacing are not evaluated.
- AI scoring can be inconsistent across runs.
- 15 videos per channel is a small sample - outlier videos can skew a channel’s score.
Feedback
Found a bug? Disagree with a score? Have a feature suggestion?