How the model works
We don't hide the math. The sections below explain exactly how each probability on a match page is produced, and why we think this approach beats the "AI picks" apps people complain about on Reddit.
The core idea
Football — Elo + logistic + Poisson grid
Match-winner probabilities start from a per-sport Elo rating (K=20, home advantage +65). A logistic regression layered on top takes features such as rolling form, head-to-head record, rest-day differential, and injury impact; the final 1X2 probabilities blend 55% Elo with 45% logistic.
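The Elo side of that blend can be sketched in a few lines. This is a minimal illustration of the stated parameters (K=20, +65 home advantage, 55/45 blend), not the production feature set; the logistic probability is assumed to come from a separately trained model.

```python
import math

def elo_expected(home_rating, away_rating, home_adv=65):
    # Standard Elo expected score for the home side, with the +65
    # home-advantage bump described above.
    return 1 / (1 + 10 ** ((away_rating - (home_rating + home_adv)) / 400))

def elo_update(rating, expected, actual, k=20):
    # Post-match rating update with K=20; actual is 1 (win), 0.5 (draw), 0 (loss).
    return rating + k * (actual - expected)

def blend(elo_prob, logistic_prob, w_elo=0.55):
    # Final 1X2 component: 55% Elo + 45% logistic.
    return w_elo * elo_prob + (1 - w_elo) * logistic_prob
```

With equal ratings, the home side is still favoured because of the +65 bump, e.g. `elo_expected(1500, 1500)` is roughly 0.59.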
For goals-based markets we estimate each team's expected goals (λ) from its last 8 matches, then build a bivariate-Poisson scoreline grid. Every goals, BTTS, correct-score, Asian-handicap, HT-FT, first-half, team-totals, corners, and cards market is derived from that same grid, so every market is mathematically consistent with every other.
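To show how several markets fall out of one scoreline grid, here is a simplified sketch: it uses two independent Poisson marginals (the bivariate coupling term is omitted for brevity) and illustrative λ values, but the derivation pattern (sum grid cells that satisfy the market condition) is the same.

```python
import math

def pois(lam, k):
    # Poisson pmf: P(X = k) for rate lam.
    return math.exp(-lam) * lam ** k / math.factorial(k)

def score_grid(lam_home, lam_away, max_goals=10):
    # grid[h][a] = P(home scores h, away scores a).
    # Independent-Poisson simplification of the bivariate grid.
    return [[pois(lam_home, h) * pois(lam_away, a)
             for a in range(max_goals + 1)]
            for h in range(max_goals + 1)]

def over_2_5(grid):
    # Over 2.5 goals: sum every cell with total goals >= 3.
    return sum(p for h, row in enumerate(grid)
               for a, p in enumerate(row) if h + a > 2)

def btts(grid):
    # Both teams to score: sum every cell where both h and a are nonzero.
    return sum(p for h, row in enumerate(grid)
               for a, p in enumerate(row) if h > 0 and a > 0)
```

Because every market is a sum over the same cells, over/under, BTTS, and correct-score probabilities cannot contradict each other by construction.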
NBA — Normal-distribution scoring
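A normal-margin model can be sketched as follows. The expected margin would come from team ratings; the standard deviation of 12 points is an assumed placeholder, not a value stated in this document.

```python
from statistics import NormalDist

def nba_win_prob(expected_margin, sd=12.0):
    # P(home margin of victory > 0) under a normal model of the
    # final scoring margin. sd=12.0 is an illustrative assumption.
    return 1 - NormalDist(mu=expected_margin, sigma=sd).cdf(0)
```

A zero expected margin gives exactly 0.5; a +5 expected margin gives roughly a 66% home-win probability under this sd.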
Tennis — best-of-N set math
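The best-of-N combinatorics reduce to a short closed form: win the deciding set, having won the required number minus one of the earlier sets. This sketch assumes a fixed per-set win probability (set outcomes treated as i.i.d.), which is the usual simplification.

```python
from math import comb

def match_win_prob(p_set, best_of=3):
    # Probability of winning a best-of-N match given per-set win
    # probability p_set, assuming sets are independent.
    need = best_of // 2 + 1  # sets required to win the match
    # The match ends on a set the player wins; before that they have
    # won need-1 sets and lost `losses` sets, in any order.
    return sum(comb(need - 1 + losses, losses)
               * p_set ** need * (1 - p_set) ** losses
               for losses in range(need))
```

For example, a 60% per-set favourite wins a best-of-3 match with probability 0.648, and the edge compounds further in best-of-5.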
News context — as a feature, not the predictor
Self-correction: Platt calibrators
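Platt scaling fits a sigmoid `p = σ(a·z + b)` that maps raw model scores to calibrated probabilities by minimising log-loss on graded outcomes. This is a minimal gradient-descent sketch of the technique, not the nightly fitting job itself; the learning rate and iteration count are illustrative.

```python
import math

def platt_fit(scores, outcomes, lr=0.1, iters=2000):
    # Fit sigmoid(a*z + b) to (score, 0/1 outcome) pairs by gradient
    # descent on the log-loss. Returns the calibrator parameters (a, b).
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(iters):
        grad_a = grad_b = 0.0
        for z, y in zip(scores, outcomes):
            p = 1 / (1 + math.exp(-(a * z + b)))
            grad_a += (p - y) * z   # d(log-loss)/da
            grad_b += (p - y)       # d(log-loss)/db
        a -= lr * grad_a / n
        b -= lr * grad_b / n
    return a, b

def platt_apply(z, a, b):
    # Calibrated probability for a raw score z.
    return 1 / (1 + math.exp(-(a * z + b)))
```

Comparing log-loss before and after applying the fitted `(a, b)` is exactly the before/after number logged per nightly fit on /accuracy/history.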
The accountability we promise
- Every prediction we ever serve is persisted and graded when the match ends. You can browse the ledger on /accuracy.
- Rolling 30-match Brier + calibration charts are public. Bad stretches show up as clearly as good ones.
- Win / loss streaks on /accuracy/[sport]. No cherry-picked highlight reels.
- Calibrator history at /accuracy/history — every nightly fit is one row with before/after log-loss.
- Every +EV edge we emit can be saved to /picks with real stake + P/L + ROI + bankroll curve + max drawdown.
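Two of the numbers above are simple formulas worth spelling out: the expected value of a bet (what "+EV" means concretely) and the Brier score used on the accuracy pages. The decimal-odds convention here is an assumption for illustration.

```python
def ev_per_unit(model_prob, decimal_odds):
    # Expected profit per 1-unit stake: win (odds - 1) with probability p,
    # lose the stake otherwise. Positive EV = an "edge".
    return model_prob * (decimal_odds - 1) - (1 - model_prob)

def brier(probs, outcomes):
    # Mean squared error between forecast probabilities and 0/1 outcomes.
    # Lower is better; 0.25 is the score of always predicting 0.5.
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)
```

For example, a 55% model probability against even odds (2.0 decimal) has an EV of +0.10 units per unit staked, but as noted below, a positive EV over a small sample still loses frequently.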
What this app doesn't do
- It is not a live-betting tool: no in-play odds, no second-by-second updates.
- It does not guarantee a winning strategy. +EV over a small sample still loses frequently.
- It does not replace bankroll management: only stake what you're willing to lose.
- It is not legal in every jurisdiction. Check your local laws before acting on any output.