Archive for March, 2015

Statistical Independence

March 30, 2015

In the last few posts, I’ve talked about distributions, black swans, and chasms, and how those things change the size and width of our distributions. See Normal Distributions and a Game Tree and A Regression Tree.

So let’s start with two normal distributions having the usual dependent relationship.

Their footprints overlap. Where the footprints overlap, the two normal distributions share a relationship, a dependency. The rest of each normal, the unshared portions of the two normals, is labeled Independence, but shifting from a frequentist point of view to a Bayesian one has to make us wonder just how independent these normals really are.

The means are separated by some distance measured in bits. I take the scheme described in Nowak’s SuperCooperators as a given. He describes the evolutionary computing perspective on the normal as histograms that are each only one bit different from their neighbors. This implies a packing, and some other math. The stuff under the distribution is not random, so it pushes us toward the Bayesian knowledge-based approach again. The location of a particular histogram reflects exactly who the customer is, what the company does to serve them, and various aspects of the UX and services provided. Having just sold more than 50% of your addressable market allocation, and having just heard from the CFO about last month’s close, geeks are yesterday, consumers are tomorrow, and you have a black swan on your hands. Oh, the projections were skyward, and the quarter doesn’t close for another two months. Ouch. Yeah, but it happens to every company. The PR spin might differ.

So let’s separate the means of these distributions without changing their shape. Weird stuff happens.

So we opened the distance between the means of our normal distributions. We somehow came by more bits. But the good news is we are totally independent, statistically independent. Geometry, metrics, and rates matter here. Booms deform distributions. Booms do this by stacking the distinct populations that Moore talked about in his books on the technology adoption lifecycle. See my Innochat framing post for more on the stacking of distinct populations.
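The shrinking shared footprint can be sketched numerically. This is a minimal sketch, assuming two equal-variance normals; the function name and the distances used are mine, not from the post.

```python
import math

def normal_overlap(mu1, mu2, sigma=1.0):
    """Overlapping area of two equal-variance normal densities.

    With equal sigmas the densities cross midway between the means,
    so the shared area is 2 * Phi(-|mu1 - mu2| / (2 * sigma)),
    where Phi is the standard normal CDF.
    """
    d = abs(mu1 - mu2)
    z = -d / (2.0 * sigma)
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * phi

# The shared footprint shrinks as the means separate.
for d in (0, 1, 2, 4, 8):
    print(d, round(normal_overlap(0.0, d), 4))
```

At zero separation the footprints coincide (overlap 1.0); by the time the means are several sigmas apart, the overlap is effectively gone, which is the statistical independence described above.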

Let’s look at two situations that are a bit more complicated.

I actually tried to rescale this figure, but it’s a bitmap, so it became illegible at 1.5x. Geez. Better tools someday.

Each distribution has a subpopulation’s normal under it. In our earlier diagrams, the two normal distributions were red and blue. The normal distributions of the subpopulations are dark pink and brown. Both the red and blue distributions have been subjected to volatility, so their x’ axes moved up the y-axes. The volatility for the red distribution is shown in light pink. The volatility for the blue distribution is shown in light blue. The aqua below the x-axis represents the future tails of the blue distribution, involving the company’s reaction to their volatility.

The distributions are statistically independent now. The yellow area illustrates the distance between their tails. In the near future, however, the distributions will become statistically dependent.

Currently, the subpopulation that was under the red distribution is no longer being served by the company represented by the red distribution. The company may earn cash from distributor sales to the dark pink population. This is typical of American style global markets where we sell, but do not market to local populations. In the game tree sense, not getting feedback from those local populations is a failure to play the deeper game. Yes, it is a cheaper game, a more profitable game, but a short-term game. Volatility might remove the dark pink population from play.

Conversely, the subpopulation under the blue distribution is still being serviced by the company represented by the blue distribution. The population and the subpopulation have been discounted to the same degree due to the volatility.

Next, we’ll look at a top-down view of the same situation.

Here we can see the future tail, in aqua, of the blue distribution that will make the red and blue distributions dependent. The future tail is labeled tomorrow. The discounted portions of the distributions are labeled yesterday. The red and blue portions are labeled today. I could have drawn a straight line between the means. This appeared to be the case in the earlier diagrams, but here we can see how the earlier linear projection was a Bezier curve instead. The gray circle at the curve’s control point is the irreducible unknown. Notice that the linear projection compresses the bits measured on the curve to those shown separating the means. In communications systems, learning compresses bits. Note also that the order of the Bezier curve could be higher. In that case, the curve would be even longer.
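The gap between the linear projection and the curve can be sketched. This is a minimal illustration, assuming a quadratic Bezier with the means at the endpoints and the irreducible unknown at the control point; the coordinates are hypothetical.

```python
import math

def bezier_point(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t in [0, 1]."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return x, y

def arc_length(p0, p1, p2, steps=1000):
    """Approximate the curve's length by summing many small chords."""
    total, prev = 0.0, bezier_point(p0, p1, p2, 0.0)
    for i in range(1, steps + 1):
        cur = bezier_point(p0, p1, p2, i / steps)
        total += math.hypot(cur[0] - prev[0], cur[1] - prev[1])
        prev = cur
    return total

# Means at the endpoints; the irreducible unknown at the control point.
mean_a, unknown, mean_b = (0, 0), (5, 4), (10, 0)
chord = math.hypot(mean_b[0] - mean_a[0], mean_b[1] - mean_a[1])
curve = arc_length(mean_a, unknown, mean_b)
print(round(chord, 2), round(curve, 2))
```

The chord is always shorter than the curve whenever the control point sits off the line between the means, which is the sense in which the linear projection compresses the bits measured along the curve.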

How we see a normal distribution depends on what we put under the distribution: noise, knowledge, or genomes.

A Regression Tree

March 25, 2015

Ralph Winters (@RDub2) tweeted a link to the blog post “On Some Alternatives to Regression Models” on the Freakonometrics blog. The author of the post explains various alternatives to linear regression as it is applied to non-linear functions. Each of the alternatives gives us an approximation, or, put another way, a gap between the numbers and reality.

The regression tree model gives us an approximation that provides flat planes in various places. Deciding to be on one of those flat planes gives our business something to optimize towards for an interval of time, with a manageable level of cognitive costs. Where you see a square area in the macro sense, you’re dealing with a normal distribution. Where you see long thin rectangles, you’re dealing with a Poisson distribution. The ordinary course of business runs from the Poisson towards the normal, and eventually to the wider normal. The ordinary course of business runs from discovery to enforcement in the repeatable operational sense, right up to the black swan where the flat surfaces stop and fall. This too is ordinary, but usually growth thinking and not knowing where we are under our distributions make for the black swan. No excuses for those swans, since I know the extent of my plane and I cannot change that extent.
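The flat planes can be sketched with the smallest possible regression tree, a depth-1 stump fit by brute force. This is a toy illustration, not the Freakonometrics author’s code; the function names and the data are mine.

```python
def fit_stump(xs, ys):
    """Fit a depth-1 regression tree: one split, two flat 'planes'.

    Tries every split point and returns (threshold, left_mean,
    right_mean) minimizing the total squared error.
    """
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    xs = [xs[i] for i in order]
    ys = [ys[i] for i in order]
    best = None
    for k in range(1, len(xs)):
        left, right = ys[:k], ys[k:]
        lm = sum(left) / len(left)
        rm = sum(right) / len(right)
        err = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, (xs[k - 1] + xs[k]) / 2, lm, rm)
    return best[1], best[2], best[3]

def predict(model, x):
    """Every point lands on one of the two flat planes."""
    threshold, left_mean, right_mean = model
    return left_mean if x <= threshold else right_mean

# A step-like series: the stump recovers the two plateaus exactly.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [1, 1, 1, 1, 9, 9, 9, 9]
model = fit_stump(xs, ys)
print(model)
```

Deeper trees just repeat this split recursively, carving more plateaus; the edges between plateaus are where the flat surfaces stop, go up, or fall.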

I’ve heard of scale being the trigger for a chasm. It struck me as odd, but the flat planes of the regression trees end in two ways, rather than one. They go down or they go up. Both can be disasters for our ongoing operations. Down is the swan, but what of the up? Moore’s chasm is one of reference bases. The early adopter and their business case are made for risk takers in the vertical, but what of those who take less risk? The important thing here is that Moore found the pragmatism gradient. He found a way to climb up to the regression tree’s adjacent plane, up to the next plateau.

The regression tree also shows the global maximum within the scope of the regression, along with the local maxima where your business might be stuck. I was amazed years ago when a discussion with my CEO showed me that he didn’t understand how his very successful business might be stuck on a local maximum. I wasn’t all that attentive to math back then either.

If you were to decide to serve a particular plane, that plane would be ground, aka the x-axis for any analysis going forward. Get agreement on this and make it stick.

In the figure, I’ve annotated the swans and the chasms. There is still plenty of business to be had on the plane. This plane is Poisson. This plane is your world. The other planes are out of scope until you commit to going there. There are 21 units of business to be had. Nine face the swan. They must know where the line is. These nine share an operational focus. Eight of them face the chasm. They share another operational focus. One of the units faces both the swan and the chasm. Five units have it easy. From this, the executive should see the outlines of what the problems are going to be, who should be managing what, and what kind of training needs to be done. From this outline, we can design an organization that can cope, experiment, learn, and forget in its particular terrain.

For a product manager, roadmaps and conversations should be organized around these planes. These planes also organize communications within and between planes, so marketing, offers, service provision, and pricing organize around these planes.

Normal Distributions and A Game Tree

March 22, 2015

Back on March 5th Glen B. Alleman (@galleman) tweeted a link to his Slideshare “Managing in the Presence of Uncertainty,” bit.ly/1CHThA9. I’m still working my way through it. But a few things got me tweeting. He makes a distinction that’s important, a distinction that provides some useful contexts that I’ll discuss here. Glen divides uncertainty into Aleatory and Epistemic uncertainty. Aleatory is the uncertainty of the classic frequentist approach to probabilities. Frequentists see noise under a distribution. Epistemic is addressable via the Bayesian approach to probabilities. Bayesians see knowledge under a distribution, knowledge that can be leveraged by establishing priors and looping through an exploration that improves those priors.

The Bayesian approach emerged after the frequentist approach was established. The Bayesian approach faced the usual adoption pressures as the frequentists leveraged their control of peer review and, hence, journals. Name calling and such ensued. See The Theory That Would Not Die for the details of the struggle and the eventual emergence of the Bayesian approach.

My own contact with the Bayesian approach happened back in 7th grade. We built a bead-and-matchbox-based game. Nobody mentioned machine intelligence or Bayesian statistics. The game was described in a column, Computer Recreations or something like that, in Scientific American. This was long before microprocessors. Nobody had access to a computer back then.

The game was played on a 3×3 board. On each side, a row of pawns faced another row of pawns on the opposite side of the board. These pawns made normal pawn moves from the game of Chess: one square straight ahead, or one square diagonally ahead to capture an opposing pawn. This took care of the generative side of the game. There were three ways to win: 1) occupy a square in your opponent’s pawn row, 2) capture all your opponent’s pawns, 3) make the last possible move. This took care of the convergence side of the game.

It took a lot of matchboxes to build this game. Each matchbox displayed a board with the possible moves that could be made from the positions given on the board. The moves on one matchbox led to a collection of other matchboxes. The matchboxes were the nodes; the moves were the links.

Each matchbox contained beads that matched the color of one of the moves on the game board on that matchbox. A single matchbox might have three or more moves associated with it. A bead of each move color was placed in each matchbox. This gave each move even odds. These odds were the Bayesian priors.

As the game was played, a record was kept of the path taken during the game. If the machine lost, you removed the beads that led to the loss. If the machine won, you put two beads of the winning color back into each matchbox. In both cases, what you did was update the priors based on what you learned during the last game. It was classic Bayesian. It was classic Stewart Brand’s How Buildings Learn. They learn through accretion. They learn, but they keep secrets.
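The matchbox scheme sketches out naturally in code. This is a loose adaptation, not the original column’s design: the class name, the reinforcement amounts, and the floor of one bead per move are my assumptions.

```python
import random

class MatchboxPlayer:
    """Bead-in-matchbox move selection with win/loss updates.

    One 'matchbox' per board position; the bead counts are the
    Bayesian priors, starting at even odds of one bead per move.
    """

    def __init__(self):
        self.boxes = {}  # position -> {move: bead count}

    def choose(self, position, legal_moves):
        """Draw a bead at random; more beads means better odds."""
        box = self.boxes.setdefault(position, {m: 1 for m in legal_moves})
        beads = [m for m, n in box.items() for _ in range(n)]
        return random.choice(beads)

    def update(self, path, won):
        """Update priors from the (position, move) path just played."""
        for position, move in path:
            box = self.boxes[position]
            if won:
                box[move] += 2   # add two beads of the winning color
            elif box[move] > 1:
                box[move] -= 1   # remove a bead that led to the loss

# One round: the machine picks a move, then learns from the result.
player = MatchboxPlayer()
move = player.choose("opening", ["left", "center", "right"])
player.update([("opening", move)], won=True)
```

The floor of one bead keeps every legal move playable; the column’s physical version could empty a box entirely, which effectively pruned that branch of the game tree.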

So let’s explore how a game tree organizes its normal distributions.

I let the game get just beyond the second move. We’ve played to this point several times, so we have a histogram of the possible moves. N isn’t high enough to give us a continuous rendition of a normal distribution, but the discrete hints are there. The game tree looks like a binomial tree with equally weighted branches, so the normal is not skewed. Then, we play two more moves deeper into the game tree.
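The equally weighted branching can be simulated to show those discrete hints of a normal. A minimal sketch: a coin-flip walk stands in for the game tree, and the function name, seed, and game counts are mine.

```python
import random
from collections import Counter

def play_random_lines(depth, games, seed=0):
    """Tally the endpoints of equally weighted binary game lines.

    Each move goes left (-1) or right (+1) with even odds, so the
    endpoint counts follow a binomial histogram -- the discrete
    hints of a normal, widening as we play deeper into the tree.
    """
    rng = random.Random(seed)
    tally = Counter()
    for _ in range(games):
        endpoint = sum(rng.choice((-1, 1)) for _ in range(depth))
        tally[endpoint] += 1
    return tally

shallow = play_random_lines(depth=2, games=10_000)
deep = play_random_lines(depth=4, games=10_000)
print(sorted(shallow.items()))
print(sorted(deep.items()))
```

The depth-2 histogram spans three endpoints; the depth-4 histogram spans five, wider and proportionally flatter, which is the widening-and-losing-height behavior discussed below.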

Here I’ve depicted the normals for the second and fourth moves. We could change the representation by putting the second-tier normal under the fourth-tier normal. This would reflect a frequentist approach, depicting the smaller normal as a subset. It looks smaller, but remember that both have an area of one. The deeper we move into the game, the wider the normal would get. To keep the area at one, the normal would also lose height, the way 6×1 and 2×3 rectangles keep the same area. I’ve not depicted this, so just imagine it. It happens all the time out in the business world. The F2000 company has thin margins. At F4000, thinner still. Yes, even for F4000 companies, the area under the normal remains one, although the base is wider.

This figure is just fine. It depicts a proper subset being a normal with the same mean as the containing set. But as a depiction of the game tree, it’s just wrong. Game play flows through the mean at the top of the normal and flows to the base. Further, future game play expands the base and height of the normal. To get to the base, you have to make those first two or four moves, increasing their frequency, after which you expand the base outward and another tier deeper. But, the future is not known yet.

Now, we’ve shown how the two normals fit together. The normal for the subtree converges sooner than the one for the entire tree. The difference between the tails of the normals is a function of the depths of the subtree and the tree. Notice that the two normals are not fractals of each other. We are seeing the normal at two different times in its life. The change in tree depth is also a change in bit depth. The set gets the x-axis. The subset gets the x’-axis.

Now, we show that the early normal grows towards the top of the later normal, and the later normal grows down and out. Again, to make a later move, you must make an earlier move. Those probabilities change together. In the pawn game, described earlier, wins terminate a branch of the game tree. This stops the accumulation of frequency and moves the histogram outward towards the outliers.

Next, we consider the black swan. For product managers, commoditization is a black swan that happens often enough. When some portion of your product becomes commoditized, you lose bits, and you lose addressable market population. Tomorrow’s future is smaller than yesterday’s. As for the normal, it converges sooner on another x’-axis. Of course, you knew that commoditization was coming, and given today’s preference for trade secrets over patents, you’ve built under the base of yesterday’s normal. You were ready. I know. We’ll pretend politely.

You’ve added some bits via an effort represented by the red triangle, the red decision tree, which, like playing a game deeper, pushes the base x-axis down, which in turn moves your convergence with the new x-axis into the future.

Just a few Quick Notes

March 20, 2015

A tweet led to this NYTimes article, “How the Recession Reshaped the Economy in 255 Charts” (updated June 6, 2014).

Notice the data visualization on the first page of the article. The graphs showing upward trends tend to be bubbles. What you don’t see is how the category for each of those bubbles consumes the future. What you don’t see is the shadow cast by those upward efforts. Those shadows tell us how long it will be before we can sell again. The heights tell us of the more immediate or historic falls, the black swans, the fragility, the lost valuations, and the lost bits of the larger world suddenly smaller.

Another tweet led to another, more conventional NYT article, “A 3-D View of a Chart That Predicts The Economic Future: The Yield Curve” (March 18, 2015). Imagine your roadmap on this curve.

The figure below is not one of mine, but I thought it was neat enough to buy the book it was in, “How To Be Interesting,” by Jessica Hagy. I picked it up at an airport bookstore a while back. I don’t usually read self-help books, but this one was an exception. As for the figure, it reminded me to see the numerous contexts beyond the reach of our offer into the user experience.

How far do you want to reach into the user experience? As for myself, I’ll reach into the underlying cognition via ethnography. I’ll insist on attention being focused on carrier rather than carried, on the real world that existed before programmers showed up, rather than carrier focused toys. Toys are fine. They’re just not me.

Foster Ecologies

March 4, 2015

Someone retweeted:

If you want to go fast, go alone. If you want to go far, go together. – African proverb

Out on Twitter over the last week, I dropped a tweet or two about product ecologies after watching a YouTube video on the human biome. Actually, there are several human biomes that we migrate across and live within or under over our lifetimes. Products do a similar thing.

Your whole-product vendors have third-party developer programs that operate for the vendor’s benefit. These programs are multisided markets, so the third-party developers get something as well. Multisided markets can serve the carrier vendors with their APIs expanding the original vendor’s functionality, the carried vendors with their models, and other carried vendors with their data--layers, many layers, all on the same roadmap, all moving at different speeds, all moving in different directions--going together. Everyone in this tech ecology gets functionality at some level, but they also get prospects. They get channel. Channel is hard.

Channel is hard because channel participants are there for their own reasons. They have their own motivations. They have to be led, just like your matrix team members. They serve and monetize around a population. Your participation in their channel might demonstrate objectivity to the population they serve. Sales in that case might be accidental.

So we have lots of interests to serve beyond prospects, economic buyers, and users, beyond our monetizations, and beyond the interests of our internal interest groups. Oh, the messiness of going together.

Alone, we will not create a category. I know. Most of us are not trying to create a category. Most of us are not trying to build a business around a discontinuous innovation. We have far to go, so we have to go together. If you tried to go it alone, you’d be facing antitrust issues. Competitors help overcome these issues. Not negating your competitors tends to go a long way toward keeping competition respectful. You can’t have more than 74% of the market, and you would be lucky to get that 74% from a market power allocation. These days it’s market share as an outcome of one’s promo spend and VC funding, which doesn’t get anywhere near 74% and isn’t lasting like that 74%. Early and late tech adoption businesses and economics are vastly different.

Going together reminded me of when my employer got a supercomputer gratis. That hardware vendor wanted us to support their machines. Our application generated code. Compiles took a while, so now we could support a new value proposition: compiling our customers’ code for them using that supercomputer. Long ago, I know.

In a later company, the computations were portable, so we could move them to machines that were faster, or slower. You could play with an algorithmic ecology. You could run several heuristics to get there ahead of the more accurate algorithms, with the most accurate algorithm coming in last, at last. It’s the sort of thing we do with Monte Carlo simulations running at the edges of our unknowns, later substituting knowns as we capture them. Running fast, but going far together.

Fostering ecologies gets us down the road together. Foster an ecology today.