Archive for July, 2014

Pie Charts

July 23, 2014

It’s Christmas time, aka pie season. Mom baked each of us a pie of our own. Dad got an apple pie, I got a cherry pie, everyone got their own. We had five pies. After Santa Claus, we’d eat the baked goods, and there were a lot of baked goods. Then came Christmas dinner. Eventually, it was time to shove all that food into the refrigerator.

If you had eaten a few slices of a pie, that pie was sparse, hyperbolic. As a pie chart, its angles summed to less than 360 degrees. If you had several hyperbolic pies, you could save space in the refrigerator by consolidating the slices of several pies into one pan. The resulting pie had angles summing to more than 360 degrees, so that pie pan contained a spherical space.
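A toy sketch of the pie-pan arithmetic, with function and label names of my own choosing:

```python
def pie_geometry(slice_angles_deg):
    """Classify a pie pan's 'space' by the sum of its slice angles.

    Hyperbolic: slices sum to less than 360 degrees (a sparse pie).
    Euclidean:  slices sum to exactly 360 degrees (a whole pie).
    Spherical:  slices sum to more than 360 degrees (an overstuffed pan).
    """
    total = sum(slice_angles_deg)
    if total < 360:
        return "hyperbolic"
    if total > 360:
        return "spherical"
    return "Euclidean"

# A pie with only three 60-degree slices left is hyperbolic ...
print(pie_geometry([60, 60, 60]))           # hyperbolic
# ... consolidating two partial pies into one pan can go spherical.
print(pie_geometry([60, 60, 60, 90, 120]))  # spherical
```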

What brought that on? I had read some designer talking about how a pie chart should never add up to more than 100 percent. Sure. That’s best. Let’s always assume a Euclidean space.

When we do our linear analysis, we assume a Euclidean space. But there are times when we shouldn’t assume the Euclidean. When we are talking about a discontinuous innovation, we start in hyperbolic space. Once the bowling alley’s Poisson distributions tend to normals and sum to the single normal that we enter the tornado with, we are in a Euclidean space. From there we grow our company from six sigma to forty sigma, which puts us in a spherical space. The normal gets wider, but shorter. The probabilities decrease, like our margins.
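That widening and shortening is just the normal density’s peak, 1/(σ√(2π)), falling as σ grows. A quick sketch, with the sigmas chosen only to illustrate:

```python
import math

def peak_height(sigma):
    """Peak of the normal density: 1 / (sigma * sqrt(2*pi)).
    As sigma grows, the curve widens and the peak drops."""
    return 1.0 / (sigma * math.sqrt(2.0 * math.pi))

for sigma in (1, 6, 40):
    print(f"sigma={sigma:>2}: peak={peak_height(sigma):.5f}")
# sigma= 1: peak=0.39894
# sigma= 6: peak=0.06649
# sigma=40: peak=0.00997
```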

We are told that innovations are risky. We conclude that innovations are risky because those linear analyses fail us. Those linear analyses assume linearity long before the space has converged to the Euclidean. When we are in the spherical space, the space has already converged to the Euclidean, and a linear projection can always be made from the spherical. This covers up the non-Euclidean situation. The hyperbolic, however, is too sparse to support a linear analysis. The hyperbolic doesn’t have a linear projection via a geodesic. Instead, you have world lines that generate something like the navigation of a taxicab geometry. You end up fragmenting the linear and turning often. But, worse, you are talking about points, not lines. Between the points, you have nothing, certainly not a projectable linearity.
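To see what that fragmentation costs, compare the straight Euclidean geodesic with an L1, taxicab-style path between the same two points. A minimal sketch:

```python
import math

def euclidean(p, q):
    """Straight-line (geodesic) distance in the Euclidean plane."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def taxicab(p, q):
    """L1 distance: movement only along the grid, turning often."""
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

p, q = (0, 0), (3, 4)
print(euclidean(p, q))  # 5.0 -- one straight, projectable line
print(taxicab(p, q))    # 7   -- a fragmented path of segments
```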

So this pie/pie pan/pie chart thing, this Euclidean assumption, is the usual practice. It’s the way we, as managers, inject risk into an innovation. Beware of the implicit assumptions. Beware of the space.


From A Geometry Proof

July 20, 2014


I came across a geometry proof that was immediately interesting to me for what it was saying to my product strategist self.

The citation for the proof is A. Bogomolny, The 80-80-20 Triangle Problem, Solution #12 from Interactive Mathematics Miscellany and Puzzles, http://www.cut-the-knot.org/triangle/80-80-20/Classical12.shtml#solution, Accessed 19 July 2014. I’ll include the graphics and the problem statement from that post.

The problem begins with the diagram and some assertions. The author selected one of many solutions. It’s the solution that I’m going to expound upon, but I’ll barely be talking about the obvious geometry.

Problem:

[Image: problem diagram from the cited post]

Asserting:

  • ABC is an isosceles triangle (AB = AC).
  • ∠BAC = 20°.
  • Point D is on side AC such that ∠CBD = 50°.
  • Point E is on side AB such that ∠BCE = 60°.

Search Goal:

Find the measure of ∠CED.
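Before turning to the cited proof, here is a quick coordinate check of my own, not part of the cut-the-knot solution: put B and C on the x-axis, intersect the rays the assertions describe, and measure ∠CED numerically.

```python
import math

def intersect(p, d1, q, d2):
    """Intersect the line p + s*d1 with the line q + t*d2 (Cramer's rule)."""
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = q[0] - p[0], q[1] - p[1]
    s = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p[0] + s * d1[0], p[1] + s * d1[1])

def ray(deg):
    """Unit direction at `deg` degrees from the positive x-axis."""
    return (math.cos(math.radians(deg)), math.sin(math.radians(deg)))

B, C = (0.0, 0.0), (1.0, 0.0)
A = intersect(B, ray(80), C, ray(100))                  # apex: base angles are 80 deg
D = intersect(B, ray(50), A, (C[0]-A[0], C[1]-A[1]))    # BD at 50 deg to BC meets AC
E = intersect(C, ray(120), B, (A[0]-B[0], A[1]-B[1]))   # CE at 60 deg to CB meets AB

def angle(at, p, q):
    """Angle p-at-q in degrees."""
    v = (p[0]-at[0], p[1]-at[1])
    w = (q[0]-at[0], q[1]-at[1])
    dot = v[0]*w[0] + v[1]*w[1]
    return math.degrees(math.acos(dot / (math.hypot(*v) * math.hypot(*w))))

print(round(angle(E, C, D), 6))  # 30.0
```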

Solution:

[Image: solution diagram from the cited post]


Any time you are handed a problem, the effort is a matter of searching for a solution. Even when you have one solution, you might continue your search for another. The solution above is the twelfth.

A search is a matter of traversing a search space. Before we can search a space, that space has to be populated, or generated. We use rules to generate this space. In a game like Go or chess, the board is a set of rules. In chess, we go with queen-on-color as the rule that ensures proper board orientation. In marketing, there is a huge population organized via various schemes. In technology adoption, the population is organized around referral bases, so we end up with a quantized collection of populations. Each of those populations is independent of the rest, so we get a nice A+B+C+… marketing effort. Well, that is true within a single phase of the technology adoption lifecycle, and not true across sequential lifecycle phases. Now, when I said marketing effort, I was talking about the marketing department. Sales randomizes, or to put it differently, sales is a problem. They sell to people who are not yet prospects. Notice the word prospects.

A search has a budget. A search has a breadth and a depth. You can do a breadth-first search, a depth-first search, or some mix. The nature of that search has organization-wide impacts. Where sales is actually selling to the leads generated by marketing, you’ve got alignment on search. In product marketing, we talk about having conversations with our customers. Well, maybe. My definition of a customer is an entity that has purchased our product. My definition of a prospect is an entity that has not yet purchased our product and that belongs to the population we are currently marketing to, in the quantized populations addressed by the current referral basis and the levels of pragmatism marketing is currently addressing. We are talking about prospects in the sense of ready to buy last year, last month, today, next month, next year. We can’t talk to all these people as if they were a single population. Sales does that, hence the randomization. Marketing does not. Not that marketing understands that. And product marketing? Let’s hope not.
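A minimal sketch of that breadth/depth/budget trade, over a toy space of integers; the names and the toy space are mine, just for illustration:

```python
from collections import deque

def search(start, neighbors, is_goal, budget, breadth_first=True):
    """Generic search over a space generated by `neighbors`.

    Pop from the left of the frontier for breadth-first, from the
    right for depth-first. `budget` caps how many nodes we are
    willing to expand before giving up.
    """
    frontier, seen = deque([start]), {start}
    while frontier and budget > 0:
        node = frontier.popleft() if breadth_first else frontier.pop()
        budget -= 1
        if is_goal(node):
            return node
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None  # budget exhausted: the search came up empty

# Toy space: integers, where each node n generates 2n and 2n+1.
found = search(1, lambda n: (2*n, 2*n+1), lambda n: n == 13, budget=50)
print(found)  # 13
```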

The notion of an increasing return is either interesting or totally ignored as a strategic decision. Those increasing returns are predicated on a decreased cost of sale. Decreasing the cost of sale means marketing and selling to customers (recurring sales) and to prospects (initial sales) differently. Unbelievably, I used to get the same marketing a prospect would get for a tool I had used for over a decade, one that was upgraded annually and that had me dealing with the same sales force. The price of this product was very high because the company didn’t bother to capture its increasing return. In another company, I heard a sales guy gloating in the hallway about throwing a customer who called him to order an upgrade under the bus, because higher commissions were taking up his time. In that company, we sacrificed a lot to retain customers. But that was just a marketing strategy. Sales was not aligned with marketing. And, of course, it can’t be said that we captured our increasing return.

So as marketers, we have a population that we market to. We can select which organizing features of that population to exploit. In making these kinds of choices, we limit the answers our search will produce. These choices define how we will move. Each rule has either a divergent impact, one that makes the search space larger, or a convergent impact, one that makes the search space smaller. The divergent is generative, like a generative grammar. The divergent is discovery learning, a saying yes, an effort to adopt. The convergent is enforcement learning, a saying no, a right or wrong.
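A toy sketch of the two kinds of rules, again with names of my own choosing: the divergent rule generates, doubling the candidate space each step, and the convergent rule prunes, saying no:

```python
# Divergent rule: grows the space, like a generative grammar.
def expand(population):
    return {p + suffix for p in population for suffix in ("a", "b")}

# Convergent rule: shrinks the space, a saying no, a right or wrong.
def prune(population, keep):
    return {p for p in population if keep(p)}

space = {""}
for _ in range(3):
    space = expand(space)          # diverge: 2, then 4, then 8 candidates
print(len(space))                  # 8
space = prune(space, lambda s: s.count("a") >= 2)  # converge
print(sorted(space))               # ['aaa', 'aab', 'aba', 'baa']
```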

So that’s triangles. In the solution column of our table, the upper diagram restates the lower diagram by stacking the triangles on top of each other. Where stacked, decisions are shared, so we can consider the upper diagram the decision tree of our successive releases. Those decision trees can be thought of as an organization of bits. Likewise, our populations. The organization of those bits happened over time, and when programming a cognitive tool, those bits are organized by an imposed cognitive space. The underlying geometry of the innovation, hyperbolic, Euclidean, or spherical, also impacts the organization of those bits. The shape of the triangle changes as the underlying innovation is adopted.

In both the upper and lower diagrams, the relevant sides of the triangles are annotated with red arrowheads. I see those sides, those lines, as factors, the kinds of factors you would derive from a factor analysis of some portion of the variance generated by a system, the variance found in data collected from that system. In a purposefully generated system, we should know how much each element is supposed to contribute to the behavior of that system. Well, that’s an ideal. Comparison with a factor analysis will reveal where we are not providing what we expected to provide.
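A minimal sketch of that idea, assuming synthetic data in place of data collected from a real system: extract principal factors from a covariance matrix with numpy and watch how much variance each one accounts for.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "system behavior": 500 observations of 4 measures,
# driven mostly by one strong underlying factor plus noise.
f = rng.normal(size=(500, 1))
data = f @ np.array([[1.0, 0.8, 0.6, 0.3]]) + 0.4 * rng.normal(size=(500, 4))

# Principal factors: eigenvectors of the covariance matrix,
# ordered by the variance each one accounts for.
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()
print(np.round(explained, 3))
# The first factor is long and steep; each successive factor
# accounts for less variance, converging toward zero.
```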

In my triangle model, the base of the triangle is where the user interface is, where the user generates behavior. The base line can represent other things beyond the user interface.

In the upper diagram, the factors would be the lines inside the largest triangle. Factors start out long and steep. Each successive factor gets shorter and less steep. Those lines inside the largest triangle exhibit this order. Factors converge to the x-axis as the variance included in the factor analysis increases. In the upper diagram, that convergence would happen at point C. Each of those factors reflects a search, a collection of divergences and convergences, which results in a point or a line when we search a two-dimensional space.

If we think of the normal distribution we use to represent the technology adoption lifecycle and impose a black swan on it, that is, we stop short of the distribution’s convergence with the x-axis, we never make it to point C. Commoditization is an example of this. The black swan point becomes the distribution’s point of convergence with the x-axis. And, in the upper diagram, the black swan point is the limit of the convergence toward C. We end up with a smaller world, fewer bits, less revenue, and a need for a new triangle to traverse.
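A quick computation of what the black swan costs: cut the standard normal off at the swan’s position and measure the tail mass, the population we never reach. Only the standard library’s error function is needed:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tail_lost(z_swan):
    """Share of the adoption lifecycle beyond a black swan at
    z_swan standard deviations -- population we never reach."""
    return 1.0 - norm_cdf(z_swan)

for z in (0.0, 1.0, 2.0):
    print(f"swan at {z:+.1f} sigma: {tail_lost(z):.1%} of the market lost")
# swan at +0.0 sigma: 50.0% of the market lost
# swan at +1.0 sigma: 15.9% of the market lost
# swan at +2.0 sigma: 2.3% of the market lost
```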

More questions arose as I wrote this. Enjoy. Please comment. Thanks.