Archetypes — or, what the app knows before you do
Bob pulled up two Results tabs side by side on his phone. Same him, different tournaments, different starting shapes. He asked why. Tau answered in capabilities, not mechanics.
Bob: Why are these two priors so different? It's the same me.
Uncle Tau: Because it's not the same game. Different format, different player pool, different variance. The app isn't starting from "Bob." It's starting from "what does a player who sits in a tournament like this one look like, before we know anything personal about you."
Bob: So the app has a head start on me.
Uncle Tau: Every tournament you walk into, it has a reasonable starting belief about what you are. Not a guess — a principled starting belief. The one a generative model over the whole population of tournament poker players would assign to somebody showing up in that specific room. Then your own results pull the belief toward you.
Bob: And that's your job. Build the generative model.
Uncle Tau: Build it, and prove its KL divergence from the true distribution is minimal. That's the theorem you keep hearing me hedge at. Once you have a model you can certify is optimal in that sense, a lot of things become possible.
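In generic notation (mine, not the internal paper's; the actual construction and proof are not public), "its KL divergence from the true distribution is minimal" reads:

```latex
\hat{q} \;=\; \arg\min_{q \in \mathcal{Q}} \, D_{\mathrm{KL}}\!\bigl(p \,\|\, q\bigr)
        \;=\; \arg\min_{q \in \mathcal{Q}} \, \mathbb{E}_{x \sim p}\!\left[\log \frac{p(x)}{q(x)}\right]
```

where \(p\) is the true distribution over the population of tournament poker players and \(\mathcal{Q}\) is the family of candidate generative models.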
Bob: Name some.
Uncle Tau: Five, off the top of my head.
One. When you're a small-sample player in a new format, the app knows not to believe you yet. It pulls your estimate back toward what somebody-like-you would look like in that format. Shrinkage that's defensible instead of vibes.
Two. When you've got enough data to speak for yourself, the app knows to stop hedging. It lets your own numbers drive. The transition from "I'm assuming" to "your data is the data" is smooth, and the pace is set by the math, not by a rule of thumb.
Three. You can play two different kinds of tournament, and the app keeps your skill in each one separate. Your self in one format and your self in another are allowed to be different players. That's not a hack — it falls out of the model naturally.
Four. You can point the app at another player — a stake target, a stable-mate, a villain you've been grinding against — and it gives you their shape in the format you care about, with the same machinery. Apples-to-apples comparisons across the whole poker world, not just your own profile.
Five. You can ask, counterfactually, "what does a player with these features look like in that tournament?" Because the generative model is generative. You can sample from it. You can build a scenario. That's what Strategy and Stable Strategy are doing when they price an event for you.
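Capabilities One and Two are ordinary conjugate shrinkage. Here is a toy sketch, assuming a normal-normal model with made-up numbers; the app's actual model, features, and parameters are not public:

```python
def shrink(prior_mean, prior_var, sample_mean, sample_var, n):
    """Precision-weighted blend of a format prior and a player's own
    results: with little data the prior dominates, with lots of data
    the player's numbers take over."""
    prior_prec = 1.0 / prior_var
    data_prec = n / sample_var if n > 0 else 0.0
    if data_prec == 0.0:
        return prior_mean  # no data in this format: the prior speaks alone
    return (prior_prec * prior_mean + data_prec * sample_mean) / (prior_prec + data_prec)

# A hot 10-tournament run barely moves the belief...
small = shrink(prior_mean=-0.05, prior_var=0.01, sample_mean=0.40, sample_var=4.0, n=10)
# ...while 2,000 tournaments of the same run rate drive the estimate.
large = shrink(prior_mean=-0.05, prior_var=0.01, sample_mean=0.40, sample_var=4.0, n=2000)
```

Note there is no threshold where the estimate "switches" from prior to data; the blend moves smoothly with the sample size, which is what "the pace is set by the math" means.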
Bob: That's a lot.
Uncle Tau: That's what "we have the model and we can prove it's optimal" buys you. It's also why this conversation stops here. If I sketched the model on a napkin, all five capabilities become commodity. The reason they aren't already is that nobody outside this app has assembled the math into a working generative model of the poker world. Once somebody does, the edge is gone.
Bob: Everybody's a hypocrite.
Uncle Tau: Everybody's a hypocrite. The model itself is a subscriber-call conversation. Teasers end where the math begins.
What you actually see in the app
Bob: So when I'm using this thing…
Uncle Tau: Every Results tab you filter, every Strategy output you generate, every Scout lookup you do — all of them pull the model's belief about somebody-in-that-format, mix in whatever data exists on the specific person, and draw the band. The belief comes from the model. The data comes from you or them. The mix is the Bayesian update you read about in Priors and Posteriors.
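One way to read "draw the band": in a toy conjugate normal model (my notation and numbers, not the app's), the posterior has both a center and a width, and the band is a credible interval around that center. More data narrows it and pulls it toward the player:

```python
import math

def posterior_band(prior_mean, prior_var, sample_mean, sample_var, n, z=1.96):
    """Blend a format prior with a player's data and return an
    approximate 95% credible band (mean +/- z posterior sd)."""
    prior_prec = 1.0 / prior_var
    data_prec = n / sample_var if n > 0 else 0.0
    mean = (prior_prec * prior_mean + data_prec * sample_mean) / (prior_prec + data_prec)
    sd = math.sqrt(1.0 / (prior_prec + data_prec))
    return mean - z * sd, mean + z * sd

no_data = posterior_band(prior_mean=-0.05, prior_var=0.01,
                         sample_mean=0.40, sample_var=4.0, n=0)
some_data = posterior_band(prior_mean=-0.05, prior_var=0.01,
                           sample_mean=0.40, sample_var=4.0, n=500)
```

With `n=0` the band is just the model's belief about somebody-in-that-format; as results arrive, the band tightens and its center migrates toward the player's own numbers.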
Bob: And if I jump formats, the app basically forgets what it knew?
Uncle Tau: Doesn't forget. Doesn't transfer. The belief in your old format stays intact. The belief in the new format starts from the model's opinion about your new room, gets pulled toward you as data arrives. That's the right answer. It's also the only answer you get when the model is provably optimal.
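"Doesn't forget, doesn't transfer" has a simple shape: beliefs are kept per format, and an unplayed format falls back to the model's prior for that room. A hypothetical sketch (format names, fields, and numbers are all illustrative):

```python
# Format-level priors the generative model would assign before any
# personal data exists. Purely illustrative values.
FORMAT_PRIORS = {
    "turbo_6max":     {"mean": -0.02, "var": 0.01, "n": 0},
    "slow_full_ring": {"mean": -0.07, "var": 0.02, "n": 0},
}

# One player's current beliefs: established in one format, absent in the other.
beliefs = {
    "turbo_6max": {"mean": 0.11, "var": 0.002, "n": 1200},
}

def belief_for(player_beliefs, fmt):
    """Return the belief in this format, starting from the format prior
    when no personal data exists. Other formats are untouched either way."""
    existing = player_beliefs.get(fmt)
    return dict(existing) if existing else dict(FORMAT_PRIORS[fmt])

fresh = belief_for(beliefs, "slow_full_ring")   # starts from the format prior
known = belief_for(beliefs, "turbo_6max")       # the player's own data drives
```

Jumping formats neither erases the old belief nor copies it over; each format's belief evolves on its own data.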
Bob: Thanks, Uncle Tau.
Uncle Tau: Go estimate your shapes, kid. Next time, rake. I have a rant ready.
What's next
- Rake is linear — why rake moves your ROI by a flat shift, not a compounding tax.
- SALSA in one sitting — the simulator that turns the model's belief about you into a finishing distribution for a specific tournament.
Further reading
- The generative model, its construction, and the KL-optimality proof live in an internal paper. For depth — that's a subscriber call, not a wiki page.
- Priors and Posteriors — the update loop this lesson takes for granted.
- Shrinkage and Empirical Bayes — how "pulls your estimate toward what somebody-like-you would look like" actually renders on screen.