## Archive for August, 2017

### A Different View of the TALC Geometries

August 25, 2017

I’ve been trying to convey some intuition about why we underestimate the value of discontinuous innovation. The numbers are always small, so small that the standard financial analysis results in a no-go decision, a decision not to invest. That standard spreadsheet analysis is done in L2, a Euclidean space. But the analysis gets done while the innovation is still in hyperbolic space, so underestimation of value is the normal outcome.

In hyperbolic space, infinity sits at the edge, at a finite apparent distance. The unit measure appears smaller and smaller approaching that edge when viewed from Euclidean space. This can be seen in a hyperbolic tiling. But we need to keep something in mind here and throughout this discussion: the tiles all have the same hyperbolic area; the projection only makes it seem otherwise. That L2 financial analysis assumes Euclidean space while the underlying space is hyperbolic, where small does not mean small.
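
The shrinking unit measure can be sketched numerically. In the Poincaré disk model (one standard projection of hyperbolic space; the post doesn’t name a model, so this is my assumption), a point at hyperbolic distance d from the center sits at Euclidean radius tanh(d/2), so equal hyperbolic steps cover ever-smaller Euclidean distances:

```python
import math

# Euclidean radius reached after walking a hyperbolic distance k
# from the center of the Poincare disk: r = tanh(k / 2).
def euclidean_radius(k):
    return math.tanh(k / 2)

steps = [euclidean_radius(k) for k in range(6)]
gaps = [b - a for a, b in zip(steps, steps[1:])]

# Each unit-length hyperbolic step covers less Euclidean distance
# than the one before; infinity is at the rim, radius 1.
for k, g in enumerate(gaps, start=1):
    print(f"step {k}: Euclidean advance = {g:.4f}")

assert all(b < a for a, b in zip(gaps, gaps[1:]))
```

Six hyperbolic units of walking never reach the rim; from the Euclidean view, the walker just looks smaller and smaller.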

How many discontinuous innovations have been killed off by this projection? Countless have died at the hands of small numbers. Few put those inventions through the stage-gated innovation process because the numbers were small. The inventors who used different stage gates, who pushed on without worrying about the eventual numbers, succeeded wildly. But these days the VCs insist on the orthodox analysis, typical of consumer commodity markets, in which nobody hits one out of the ballpark and pays for the rest. The VCs hardly invest at all and insist on the immediate installation of the orthodoxy. This leads us to stasis and much replication of likes.

I see these geometry changes as smooth, just as I see the progression from the Poisson to the normal to the high-sigma normals as smooth. I haven’t read about differential geometry, but I know it exists. Yet there is no such thing as differential statistics. We are stuck in data. We can use Markov chain Monte Carlo (MCMC) to generate data fitting some hypothetical distribution, then build an estimate from that data and test its fitness against the hypothetical. But in sampling that would be unethical, or at least frowned upon. Then again, I’m not a statistician, so it just seems that way to me.
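
As a rough sketch of the MCMC idea, here is a minimal Metropolis sampler (my choice of algorithm, not anything from the post) drawing data from a hypothetical standard normal target:

```python
import math
import random

random.seed(2017)

def target_pdf(x):
    # Hypothetical target distribution: the standard normal density.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def metropolis(n, step=1.0):
    x, out = 0.0, []
    for _ in range(n):
        proposal = x + random.uniform(-step, step)
        # Accept with probability min(1, ratio of target densities).
        if random.random() < target_pdf(proposal) / target_pdf(x):
            x = proposal
        out.append(x)
    return out

draws = metropolis(20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(f"sample mean ~ {mean:.2f}, sample variance ~ {var:.2f}")
```

The generated sample’s mean and variance hover near the target’s 0 and 1, which is exactly the point: the data was manufactured to fit the hypothesis, which is why it would be frowned upon as a substitute for sampling.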

I discussed geometry change in Geometry and numerous other posts. But in hunting up things for this post, I ran across this figure. I usually look at the two-dimensional view of the underlying geometries, so this three-dimensional view is interesting. Resize each geometry as necessary and nest them inside each other. The smallest would be the hyperbolic geometry. The largest, the outermost containment, would be the spherical geometry. That expresses the geometries in the order in which they occur in the technology adoption lifecycle (TALC), working from the inside out. Risk diminishes in this order as well.

In the above figure, I’ve correlated the TALC with the geometries. I’ve left the technical enthusiasts where Moore put them, rather than in my underlying infrastructural layer below the x-axis. I’ve omitted much of Moore’s TALC, focusing on the elements that place the geometries. The early adopters are part of their verticals. Each early adopter owns their hyperbola, shown in black, and seeds the Euclidean of their vertical, shown in red, or the normal of the vertical (not shown). There would be six early adopter/verticals rather than just the two I’ve drawn. The thick black line represents the aggregation of the verticals needed before one enters the tornado, a narrow phase at the beginning of the horizontal. The center of the Euclidean cylinder is the mean of the aggregate normal representing the entire TALC, aka the category borne by that particular TALC. The early phases of the TALC occur before the mean of the TALC. The late phases start immediately after that mean.

The Euclidean shown is the nascent seed of the eventual spherical. The Euclidean is realized at a sigma of one. I used to say six, but I’ll go with one for now. Once the sigma is larger than one, the geometry is spherical, and tends more so as the sigmas increase.

From the risk point of view, it is said that innovation is risky. Sure, discontinuous innovation (hyperbolic) has more risk than continuous (Euclidean), and commodity continuous (spherical) has less risk still. Quantifying risk, the hyperbolic geometry gives us an evolution towards a singular success. That singular success takes us to the Euclidean geometry. Further data collection takes us to the higher-sigma normals, the spherical space of multiple pathways to numerous successes. The latter, the replications, are hardly risky at all.

Nesting these geometries reveals gaps (-) and surpluses (+).

## The Donut/Torus Again

In an earlier post, I characterized the overlap of distributions used in statistical inference as a donut, as a torus, and later as a ring cyclide. I looked at a figure that described a torus as having positive and negative curvature.

So the torus exhibits all three geometries. Those geometries transition through the Euclidean.
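
The claim that the torus carries all three geometries can be checked from the standard torus parametrization, whose Gaussian curvature is K = cos(v) / (r(R + r cos v)); the radii R and r below are arbitrary choices for illustration:

```python
import math

def gaussian_curvature(v, R=2.0, r=1.0):
    # Gaussian curvature of a torus with center radius R and tube
    # radius r, at tube angle v: K = cos(v) / (r * (R + r*cos(v))).
    return math.cos(v) / (r * (R + r * math.cos(v)))

outer = gaussian_curvature(0.0)        # outer equator: K > 0, spherical
top = gaussian_curvature(math.pi / 2)  # top circle:    K = 0, Euclidean
inner = gaussian_curvature(math.pi)    # inner equator: K < 0, hyperbolic

print(outer, top, inner)
assert outer > 0 and inner < 0 and abs(top) < 1e-9
```

The transitions between positive and negative curvature pass through the zero-curvature circles at the top and bottom of the tube, which is the sense in which the geometries transition through the Euclidean.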

The underlying distributions lie on the torus as well. The standard normal has a sigma of one. The commodity normal has a sigma greater than one. The saddles and peaks refer to components of a hyperbolic saddle. The statistical process proceeds from the Poisson to the standard normal to the commodity normal. On a torus, the saddle points and peaks are concurrent and highly parallel.

Enjoy.

### The Average, or the Core

August 4, 2017

Tonight I ended up reading some of the Wolfram MathWorld discussion of the Heaviside Step Function among other topics. I only read some of it, as with most things on that site, because I bump into the limits of my knowledge of mathematics. But the Heaviside step function screamed loudly at me. Well, the figure did, this figure.

Actually, the graph on the left. The Heaviside step function can look like either depending on what one wants to see or show.

The graph on the left is interesting because it illustrates how the average of two numbers might exist while no reality exists at that value. Yes, I know, not quite, but let’s just say the reality is the top and bottom line, and that H(0)=1/2 value is a calculated mirage. All too often the mean shows up where there is no data value at all. Here, the mean of 0 and 1 is (0+1)/2 = 1/2. When we take the situation to involve the standard normal, we know we are talking about a measure of central tendency, or the core of the distribution. That central tendency or core in our tiny sample is a calculated mirage. “Our average customer …” is mythic, a calculated mirage of a customer in product management speak.
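
A minimal illustration of the calculated mirage:

```python
# The only observed values are 0 and 1, yet their mean is 1/2,
# a value that never occurs in the data.
data = [0, 1]
mean = sum(data) / len(data)
print(mean)            # 0.5
print(mean in data)    # False: the "average customer" is not in the sample
```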

Here I put a standard normal inside the Heaviside step function. Then I show the mean at the 1/2 value of the Heaviside step function. The core is defined by the inflection points of the standard normal.

Since n=2, the distribution would show skew and kurtosis. A good estimate of the normal cannot be had with only two data points.

More accurately, the normal would look more like the normal shown in red below. The red normal is taller than the standard normal. The peak height of the standard normal, shown in blue, is around 0.4. The peak of the green normal is about 0.2, and that of the red normal around 0.8. I’ve shown the curvature circles generated by the kurtosis of the red distribution. And I’ve annotated the tails. The red distribution should appear more asymmetrical.
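
Those peak heights follow from the normal density, whose value at the mean is 1/(σ√(2π)); the sigmas below (1, 2, and 1/2 for the blue, green, and red normals) are my assumptions, inferred back from the stated heights:

```python
import math

def peak_height(sigma):
    # Height of a normal density at its mean: 1 / (sigma * sqrt(2*pi)).
    return 1 / (sigma * math.sqrt(2 * math.pi))

# Assumed sigmas for the three drawn normals.
for name, sigma in [("blue", 1.0), ("green", 2.0), ("red", 0.5)]:
    print(f"{name}: sigma = {sigma}, peak height ~ {peak_height(sigma):.2f}")
```

Halving the sigma doubles the peak, which is why the narrow red normal towers over the wide green one.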

Notice that the standard deviations of these three distributions drive the heights of the distributions. Kurtosis clearly does not determine the height, yet too many definitions of kurtosis define it as peakedness or flatness, rather than as a measure of the separation between the core and the tails. The inflection points of the curve divide the core from the tails. In some discussions, kurtosis divides the tails from the shoulders, and the inflection points divide the core from the shoulders.
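
The inflection points can be checked numerically; for a normal density the second derivative changes sign at μ ± σ, the boundary between core and tails:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-z * z / 2) / (sigma * math.sqrt(2 * math.pi))

def second_derivative(f, x, h=1e-4):
    # Central-difference estimate of f''(x).
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

# For the standard normal (mu=0, sigma=1), curvature flips sign at x=1:
# concave inside the core, convex out in the tail.
inside = second_derivative(normal_pdf, 0.9)
outside = second_derivative(normal_pdf, 1.1)
print(inside, outside)
assert inside < 0 < outside
```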

To validate a hypothesis, or bias ourselves towards our first conclusion, we need tails. We need the donut. But before we can get there, we need to estimate the normal when n<36, or assert a normal when n≥36; otherwise, skew and kurtosis risks will jerk our chains. “Yeah, that code is so yesterday.”

And, remember that we assume our data is normal when we take an average. Check to see if it is normal before you come to any conclusions. Take a mean with a grain of salt.

## Convolution

Another find was an animation illustrating convolution from Wolfram MathWorld “Convolution.” What caught my eye was how the smaller distribution (blue) travels through the larger distribution (red). That illustrates how a technology flows through the technology adoption lifecycle. Invention of a technology, these days, starts outside the market and only enters a market through the business side of innovation.

The larger distribution (red) could also be a pragmatism slice where the smaller distribution (blue) illustrates the fitness of a product to that pragmatism slice.

The distributions are functions. The convolution of the two functions, f*g, is the green line. The blue area represents the product f(τ)g(t−τ), whose integral over τ gives the convolution at t. It was the blue area that caught my eye. The green line, the convolution, acts like a belief function from fuzzy logic. Such functions are subsets of the larger function and never exit that larger function. In the technology adoption lifecycle, we eat our way across the population of prospects for an initial sale. You only make that sale once. Only those sales constitute adoption. When we zoom into the pragmatism step, the vendor exits that step and enters the next step. Likewise when we zoom into the adoption phase.
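
A sketch of the same picture with discretized normals; the sigmas are arbitrary choices, and the convolution of two normals is itself a normal with the variances summed, hence lower and wider than either input:

```python
import numpy as np

# Two normal "bumps" sampled on a common grid.
x = np.linspace(-6, 6, 601)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)                   # wider (red)
g = np.exp(-x**2 / (2 * 0.25)) / np.sqrt(2 * np.pi * 0.25)   # narrower (blue)

# Discrete convolution, scaled by dx to approximate the integral.
conv = np.convolve(f, g, mode="same") * dx

# The result is N(0, 1 + 0.25): lower peak, total mass still ~1.
print(conv.max(), f.max())
assert conv.max() < f.max()
```

The green convolution curve never pokes above the red curve’s peak, which matches the subset-like, never-exiting behavior described above.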

Foster defined disruption as the interval during which a new technology’s s-curve is steeper than the existing s-curve. We can think of a population of s-curves. The convolution would be the lesser s-curves, and the blue area represents the area of disruption. Disruption can be overcome if you can get your s-curve to exceed that of the attacker. Sometimes you just have to realize what was used to attack you. It wasn’t the internet that disrupted the print industry; it was server logs. The internet never competed with the print industry. Foster’s disruptions are accidental happenings when two categories collide. Christensen’s disruptions are something else.
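
Foster’s interval can be sketched with two logistic s-curves; the capacities, rates, and midpoints below are invented for illustration only:

```python
import math

def logistic(t, cap, rate, midpoint):
    # Generic s-curve: capacity, growth rate, and midpoint in time.
    return cap / (1 + math.exp(-rate * (t - midpoint)))

def slope(f, t, h=1e-5):
    return (f(t + h) - f(t - h)) / (2 * h)

def incumbent(t):
    return logistic(t, cap=1.0, rate=1.0, midpoint=0.0)

def attacker(t):
    # Arrives later, smaller capacity, but steeper growth.
    return logistic(t, cap=0.8, rate=3.0, midpoint=3.0)

# Sample the timeline and flag where the attacker's s-curve is
# steeper than the incumbent's: Foster's disruption interval.
ts = [i / 10 for i in range(-50, 81)]
steeper = [t for t in ts if slope(attacker, t) > slope(incumbent, t)]
print(f"disruption interval ~ [{min(steeper):.1f}, {max(steeper):.1f}]")
```

The interval is bounded: before it the attacker is too flat to matter, and after it both curves have saturated, which is why a defender who re-steepens its own curve in time can escape.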

Enjoy.