Heads up: All models on this page are in their early stages and should be considered placeholders. The underlying methodology is still being refined — take the specific numbers with a grain of salt.

The AI Process

How AI discovers, tests, and improves the pitch grading models — from podcast transcripts to production upgrades.

Continuous cycle — monitoring feeds back into scouting

By the Numbers

658 Episodes Analyzed: across 8 YouTube channels and 2 RSS feeds
1,842 Insights Extracted: 340 scored as high-relevance (8–10)
22 Model Versions Shipped: each with documented impact
7.7M Training Pitches: 2015–2025 Statcast data
18 Sub-Models per Test: 3 categories × 2 hands × 3 types
~50% Kill Rate: half of features don't survive testing
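The 18 sub-models per test come from a full cross of three dimensions. A minimal sketch of that grid, with hypothetical labels (the page states only the counts, 3 × 2 × 3, not the actual category or pitch-type names):

```python
from itertools import product

# Hypothetical labels -- the page only gives the counts,
# not the actual names of the categories or pitch-type groups.
categories = ["stuff", "location", "overall"]        # assumed: 3 grading categories
hands = ["L", "R"]                                   # pitcher handedness
pitch_types = ["fastball", "breaking", "offspeed"]   # assumed: 3 pitch-type groups

# One sub-model per combination: 3 x 2 x 3 = 18
sub_models = [
    f"{cat}_{hand}_{ptype}"
    for cat, hand, ptype in product(categories, hands, pitch_types)
]
print(len(sub_models))  # 18
```

Each candidate feature would then be fitted and evaluated 18 times, once per cell of this grid, which is why a single test run produces 18 sub-models.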

Why We Show the Failures

About half of all tested features don't survive. We document them alongside the successes because the dead ends are often more instructive: CSW looked better than xRV on every per-pitch metric we measured, until we measured the right thing. Cascade+ was our most complex system ever, and direct regression beat it with a fraction of the infrastructure. The willingness to scrap a significant investment when the data says so is what makes the process trustworthy.