Statistical Independence

In the last few posts, I’ve talked about distributions, black swans, chasms, and how those things change the size and width of our distributions. See Normal Distributions and a Game Tree, and A Regression Tree.

So let’s start with two normal distributions having the usual dependent relationship.

21 Statistical Independence 01

Their footprints overlap. Where the footprints overlap, the two normal distributions share a relationship, a dependency. The rest of each normal, the unshared portions, is labeled independent, but shifting from a frequentist point of view to a Bayesian one has to make us wonder just how independent these normals really are.
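As a rough sketch of how dependent that shared footprint makes the two distributions, here is the overlapping coefficient for two normals, under the simplifying assumption of equal variances (in that case the densities cross midway between the means, so the shared area has a closed form):

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def overlap(mu1, mu2, sigma):
    """Overlapping coefficient of two equal-variance normals.

    With equal sigmas the densities cross halfway between the means,
    so the shared area is 2 * Phi(-|mu1 - mu2| / (2 * sigma)).
    """
    d = abs(mu1 - mu2) / sigma
    return 2.0 * normal_cdf(-d / 2.0)

print(overlap(0, 0, 1))   # identical distributions: overlap = 1.0
print(overlap(0, 2, 1))   # means two sigmas apart: ~0.32 shared
print(overlap(0, 6, 1))   # six sigmas apart: ~0.003, effectively independent
```

The overlap never reaches zero exactly, which is one way of putting the Bayesian suspicion above: the tails always share some sliver of footprint, however small.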

The means are separated by some distance measured in bits. I take it for granted that the scheme described in Nowak’s SuperCooperators is a given. He describes the evolutionary computing perspective on the normal as a collection of histograms, each only one bit different from its neighbors. This implies a packing, and some other math. The stuff under the distribution is not random, so it pushes us toward the Bayesian, knowledge-based approach again. The location of a particular histogram reflects exactly who the customer is, what the company does to serve them, and various aspects of the UX and services provided.

Having just sold more than 50% of your addressable market allocation, having just heard from the CFO about last month’s close, geeks are yesterday, consumers are tomorrow, and you have a black swan on your hands. Oh, the projections were skyward, and the quarter doesn’t close for another two months. Ouch. Yeah, but it happens to every company. Only the PR spin might differ.
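That one-bit-different-from-its-neighbor scheme is just Hamming distance, so “distance measured in bits” can be made concrete with a minimal sketch (the example strings are hypothetical, not anything from Nowak):

```python
def hamming(a, b):
    """Number of positions at which two equal-length bit strings differ."""
    assert len(a) == len(b), "Hamming distance needs equal-length strings"
    return sum(x != y for x, y in zip(a, b))

print(hamming("10110", "10111"))  # neighbors in the packing: one bit apart
print(hamming("10110", "01001"))  # distant histograms: five bits apart
```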

So let’s separate the means of these distributions without changing their shapes. Weird stuff happens.

21 Statistical Independence 02

So we opened the distance between the means of our normal distributions. We somehow came by more bits. But the good news is we are totally independent, statistically independent. Geometry, metrics, and rates matter here. Booms deform distributions. They do this by stacking the distinct populations that Moore talked about in his books on the technology adoption lifecycle. See my Innochat framing post for more on the stacking of distinct populations.

Let’s look at two situations that are a bit more complicated.

21 Statistical Independence 03

I actually tried to rescale this figure, but it’s a bitmap, so it became illegible at 1.5x. Geez. Better tools someday.

Each distribution has a subpopulation’s normal under it. In the earlier diagrams, the two normal distributions were red and blue. The normal distributions of the subpopulations are dark pink and brown. Both the red and blue distributions have been subjected to volatility, so their x′-axes moved up the y-axes. The volatility for the red distribution is shown in light pink; the volatility for the blue distribution, in light blue. The aqua below the x-axis represents the future tails of the blue distribution, involving the company’s reaction to its volatility.

The distributions are statistically independent now. The yellow area illustrates the distance between their tails. In the near future, however, the distributions will become statistically dependent.

Currently, the subpopulation that was under the red distribution is no longer being served by the company represented by the red distribution. The company may still earn cash from distributor sales to the dark pink population. This is typical of American-style global markets, where we sell to, but do not market to, local populations. In the game-tree sense, not getting feedback from those local populations is a failure to play the deeper game. Yes, it is a cheaper game, a more profitable game, but a short-term game. Volatility might remove the dark pink population from play.

Conversely, the subpopulation under the blue distribution is still being serviced by the company represented by the blue distribution. The population and the subpopulation have been discounted by the volatility to the same degree.

Next, we’ll look at a top-down view of the same situation.

21 Statistical Independence 05

Here we can see the future tail, in aqua, of the blue distribution that will make the red and blue distributions dependent. The future tail is labeled tomorrow. The discounted portions of the distributions are labeled yesterday. The red and blue portions are labeled today. I could have drawn a straight line between the means. That appeared to be the case in the earlier diagrams, but here we can see that the earlier linear projection was really a Bezier curve. The gray circle at the curve’s control point is the irreducible unknown. Notice that the linear projection compresses the bits measured along the curve to those shown separating the means. In communications systems, learning compresses bits. Note also that the order of the Bezier curve could be higher; in that case, the curve would be even longer.
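The point about the curve being longer than the linear projection can be checked numerically. A minimal sketch, with made-up coordinates for the two means and a hypothetical control point standing in for the irreducible unknown:

```python
from math import dist  # Python 3.8+

def quad_bezier(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t in [0, 1]."""
    x = (1 - t)**2 * p0[0] + 2 * (1 - t) * t * p1[0] + t**2 * p2[0]
    y = (1 - t)**2 * p0[1] + 2 * (1 - t) * t * p1[1] + t**2 * p2[1]
    return (x, y)

def arc_length(p0, p1, p2, steps=1000):
    """Approximate the curve's length by summing many small chords."""
    pts = [quad_bezier(p0, p1, p2, i / steps) for i in range(steps + 1)]
    return sum(dist(a, b) for a, b in zip(pts, pts[1:]))

mean_a, mean_b = (0.0, 0.0), (10.0, 0.0)  # the two means (illustrative)
control = (5.0, 4.0)                      # control point: the irreducible unknown

print(dist(mean_a, mean_b))               # linear projection: 10.0
print(arc_length(mean_a, control, mean_b))  # the curve is longer than the chord
```

The straight line between the means is the compressed view; the arc length is what the curve actually carries, and a higher-order curve with more control points would be longer still.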

How we see a normal distribution depends on what we put under the distribution: noise, knowledge, or genomes.

Enjoy. Comments?

