Normal Distributions and A Game Tree

Back on March 5th, Glen B. Alleman (@galleman) tweeted a link to his Slideshare deck "Managing in the Presence of Uncertainty," bit.ly/1CHThA9. I'm still working my way through it, but a few things got me tweeting. He makes an important distinction, one that provides some useful context for what I'll discuss here. Glen divides uncertainty into Aleatory and Epistemic uncertainty. Aleatory uncertainty is the uncertainty of the classic frequentist approach to probabilities. Frequentists see noise under a distribution. Epistemic uncertainty is addressable via the Bayesian approach to probabilities. Bayesians see knowledge under a distribution, knowledge that can be leveraged by establishing priors and then looping through explorations that improve those priors.

The Bayesian approach emerged after the frequentist approach was established. The Bayesian approach faced the usual adoption pressures as the frequentists leveraged their control of peer review and, hence, the journals. Name calling and such ensued. See The Theory That Would Not Die for the details of the struggle and the eventual emergence of the Bayesian approach.

My own contact with the Bayesian approach happened back in 7th grade. We built a bead-and-matchbox game. Nobody mentioned machine intelligence or Bayesian statistics. The game was described in a Scientific American column, "Computer Recreations" or something like that. This was long before microprocessors. Nobody had access to a computer back then.

The game was played on a 3×3 board. A row of pawns on each side faced the opposing row across the board. These pawns made normal pawn moves from the game of chess: one square straight ahead, or one square diagonally ahead to capture an opposing pawn. This took care of the generative side of the game. There were three ways to win: 1) occupy a square in your opponent's pawn row, 2) capture all of your opponent's pawns, 3) make the last possible move. This took care of the convergence side of the game.
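To make the generative and convergent sides concrete, here is a minimal sketch in Python. The board encoding, the function names, and the 'W'/'B' markers are my own illustration, not anything from the original matchbox build:

    # A 3x3 pawn game: 'W' pawns advance toward row 0, 'B' pawns
    # toward row 2, '.' marks an empty square. (Encoding is mine.)
    START = ("BBB",
             "...",
             "WWW")

    def moves(board, player):
        """Yield ((r, c), (nr, nc)) pairs legal for 'W' or 'B'."""
        step, foe = (-1, "B") if player == "W" else (1, "W")
        for r in range(3):
            for c in range(3):
                if board[r][c] != player:
                    continue
                nr = r + step
                if not 0 <= nr < 3:
                    continue
                if board[nr][c] == ".":          # one square straight ahead
                    yield (r, c), (nr, c)
                for nc in (c - 1, c + 1):        # diagonal move, capture only
                    if 0 <= nc < 3 and board[nr][nc] == foe:
                        yield (r, c), (nr, nc)

    def won(board, player):
        """The three ways to win, in the order given above."""
        foe = "B" if player == "W" else "W"
        goal_row = 0 if player == "W" else 2
        return (player in board[goal_row]                 # 1) reach the far row
                or all(foe not in row for row in board)   # 2) capture every pawn
                or not any(moves(board, foe)))            # 3) made the last move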

It took a lot of matchboxes to build this game. Each matchbox displayed a board showing the possible moves that could be made from the position on that board. The moves on one matchbox led to a collection of other matchboxes. The matchboxes were the nodes; the moves were the links.

Each matchbox contained beads whose colors matched the moves drawn on that matchbox's board. A single matchbox might have three or more moves associated with it. One bead of each move's color was placed in the matchbox. This gave each move even odds. These odds were the Bayesian priors.
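In code, the boxes and beads might look something like this sketch, with a dictionary of bead counts standing in for the physical boxes and the moves() generator borrowed from the earlier sketch. All of the names here are hypothetical:

    import random
    from collections import defaultdict

    # One matchbox per (position, player); each legal move starts with
    # one bead, giving every move even odds -- the uniform prior.
    matchboxes = defaultdict(dict)

    def beads_for(board, player):
        box = matchboxes[(board, player)]
        if not box:                      # first visit: one bead per move
            box.update((move, 1) for move in moves(board, player))
        return box

    def draw_move(board, player):
        """Pick a move with probability proportional to its bead count."""
        box = beads_for(board, player)
        pool = [m for m, n in box.items() for _ in range(n)]
        return random.choice(pool)   # an empty pool means the box resigns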

As the game was played, a record was kept of the path taken through the matchboxes. If the machine lost, you removed the beads that led to the loss. If the machine won, you put two beads of the winning color back into each matchbox along the path. In both cases, what you did was update the priors based on what you learned during the last game. It was classic Bayes. It was also classic Stewart Brand, How Buildings Learn: they learn through accretion. They learn, but they keep secrets.
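The update itself is tiny. Another hedged sketch, following the rules as just described; unlike the physical version, the drawn beads stay in the boxes during play here, so the bookkeeping differs slightly:

    def update(path, machine_won):
        """path: the game's record, a list of (board, player, move)
        triples, one for each machine move."""
        for board, player, move in path:
            box = matchboxes[(board, player)]
            if machine_won:
                box[move] += 2                    # two beads of the winning color
            else:
                box[move] = max(box[move] - 1, 0) # remove the bead that led here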

So let's explore how a game tree organizes its normal distributions.

Figure: Game Tree, Two Tiers

I let the game get just beyond the second move. We've played to this point several times, so we have a histogram of the possible moves. N isn't high enough to give us a continuous rendition of a normal distribution, but the discrete hints are there. The game tree looks like a binomial tree with equally weighted branches, so the normal is not skewed. Then we play two more moves deeper into the game tree.
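You can fake this picture with a quick simulation. To be clear, this is the binomial idealization, not the real pawn-game tree: each tier is an equally weighted ±1 branch, so the histogram comes out symmetric:

    import random
    from collections import Counter

    def histogram(tiers, games=1000):
        """Landing spots after `tiers` equally weighted +/-1 branches."""
        return Counter(sum(random.choice((-1, 1)) for _ in range(tiers))
                       for _ in range(games))

    print(histogram(2))   # small N: only "discrete hints" of a normal

With two tiers you get the discrete 1:2:1 shape; play deeper and the bell fills in.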

Figure: Game Tree, Three Tiers

Here I've depicted the normals for the second and fourth moves. We could change the representation by putting the second-tier normal under the fourth-tier normal. This would reflect a frequentist approach, depicting the smaller normal as a subset. It looks smaller, but remember that both have an area of one. The deeper we move into the game, the wider the normal gets. To keep the area at one, the normal also loses height: think 6×1 versus 2×3, different shapes, same area. I've not depicted this, so just imagine it. It happens all the time out in the business world. The F2000 company has thin margins. At F4000, thinner still. Yes, even for F4000 companies, the area under the normal remains one, although the base is wide.
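The width-for-height trade is easy to check numerically. In this sketch the sigma values are arbitrary stand-ins for tier depth:

    import numpy as np

    # A probability density keeps area one no matter how wide it gets;
    # widening a normal lowers its peak, 1 / (sigma * sqrt(2 * pi)).
    x = np.linspace(-20, 20, 100001)
    for sigma in (1.0, 3.0):
        pdf = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
        print(f"sigma={sigma}: area={np.trapz(pdf, x):.4f}, "
              f"peak={pdf.max():.4f}")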

Figure: Two Normals Overlaid

This figure is just fine as a depiction of a proper subset: a normal with the same mean as the containing set. But as a depiction of the game tree, it's just wrong. Game play enters through the mean at the top of the normal and flows to the base. Further, future game play expands the base and height of the normal. To get to the base, you have to make those first two or four moves, increasing their frequency, after which you expand the base outward and another tier deeper. But the future is not known yet.

Figure: Two Normals Overlaid, Fitted Together

Now we've shown how the two normals fit together. The normal for the subtree converges sooner than the one for the entire tree. The difference between the tails of the normals is a function of the depths of the subtree and the tree. Notice that the two normals are not fractals of each other. We are seeing the normal at two different times in its life. The change in tree depth is also a change in bit depth. The set gets the x-axis. The subset gets the x′-axis.
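Under the same equally weighted idealization as the earlier simulation, the spread grows like the square root of the depth, so the subtree's normal converges sooner and the full tree's normal spreads wider:

    import math

    # An n-tier walk of equally weighted +/-1 branches has standard
    # deviation sqrt(n): the subtree (n=2) hugs its mean more tightly
    # than the full tree (n=4).
    for tiers in (2, 4):
        print(f"{tiers} tiers: sigma = {math.sqrt(tiers):.3f}")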

Figure: Two Normals Overlaid, Growing

Now we show that the early normal grows towards the top of the later normal, and the later normal grows down and out. Again, to make a later move, you must make an earlier move. Those probabilities change together. In the pawn game described earlier, wins terminate a branch of the game tree. This stops the accumulation of frequency and moves the histogram outward towards the outliers.

Figure: Black Swan

Next we consider the black swan. For product managers, commoditization is a black swan that happens often enough. When some portion of your product becomes commoditized, you lose bits, and you lose addressable market population. Tomorrow's future is smaller than yesterday's. As for the normal, it converges sooner, on another x′-axis. Of course, you knew that commoditization was coming, and given today's preference for trade secrets over patents, you've built under the base of yesterday's normal. You were ready. I know. We'll pretend politely.

Figure: Black Swan Recovery

You've added some bits via an effort represented by the red triangle, the red decision tree. Like playing a game deeper, this pushes the base x-axis down, which in turn moves your convergence with the new x-axis into the future.

So I’ve moved your convergence into the future. Congrats. Comments?
