Archive for the ‘Uncategorized’ Category

The Cones of Normal Cores

June 23, 2017

A few days ago, I drew a quick sketch about constraints, symmetries, and asymmetries. Discontinuous inventions break a physical constraint, change its range, weaken it, or bend it. A discontinuous invention goes on to become a discontinuous innovation once it escapes the lab and business people build a business around it. Asymmetries present us with the necessity of learning.

01 Symmetry

So we start with a rotational symmetry out in infinite space. This is the space we seek in the economic sense, the theory not yet faced with the realities of practice, the desired, the sameness, the undifferentiated, the mythical abundance of the commodity. We could rotate that line in infinite space and never change anything.

02 Asymmetry

Reality shows up as a constraint deforming the infinite space and the symmetry into an asymmetry, an asymmetry we are not going to understand for a while. Not understanding will lead any learning system through some lessons until we understand. Not understanding makes people fear.

03 Distributions

The symmetry generates data supporting a normal distribution. When the symmetry encounters the constraint, the density is reflected at the boundary of the constraint. That reflection increases the probability density near the boundary, so the distribution exhibits skew and kurtosis.
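
Here is a minimal sketch of that reflection, not from the original post; the boundary location, sample size, and use of scipy are my own assumptions. It folds the density that falls past a constraint back inside and measures the skew and excess kurtosis that the fold induces.

import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(7)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)

boundary = 1.0                                            # the physical constraint
reflected = np.where(x > boundary, 2 * boundary - x, x)   # fold the excess density back inside

print("skew before/after:", round(float(skew(x)), 3), round(float(skew(reflected)), 3))
print("excess kurtosis before/after:", round(float(kurtosis(x)), 3), round(float(kurtosis(reflected)), 3))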

The normal distribution of the symmetry is shown in light aqua. The skewed distribution is shown in a darker aqua.

04 Curvatures

The skewed distribution exhibits kurtosis, which involves a maximum curvature at the shoulder between the core of the distribution and its long tail, and a minimum curvature at the shoulder between the core and its short tail.

With a discontinuous innovation, we enter the early adopter market via a series of Poisson games. The core of a Poisson distribution, from a top-down view, would be a small circle. Those Poisson distributions tend to the normal, that is, they eventually become normal distributions.
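
A minimal sketch of that tendency, with assumed rates and sample sizes: the Poisson's skewness is 1/sqrt(lambda) and its excess kurtosis is 1/lambda, so both shrink toward the normal's zero as lambda grows.

import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(11)
for lam in (1, 4, 16, 64, 256):
    sample = rng.poisson(lam, size=200_000)
    print(f"lambda={lam:>3}  skew={skew(sample):+.3f}  excess kurtosis={kurtosis(sample):+.3f}")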

In the previous figure we annotated these curvatures with circles having the given curvature. The normal distribution gives us two circles with the same curvature, as the circle is symmetric. The tail of the normal can be considered to be rotated around the core. The skewed distribution gives us a circle representing the curvature on the long tail side of the core that is larger than the normal's, and a circle representing the curvature on the short tail side that is smaller than the normal's.

05 Cones

These curvature circles generate conics, aka cones. Similarly, the Poisson distribution is the tip of the cone, and the eventual normal is the base of the cone. The technology adoption process generates a cone that gets larger until we've sold fifty percent of our addressable market. The base of the cone gets larger as long as we are in the early phases of the technology adoption lifecycle. Another cone on the same axis and using the same base then gets smaller and comes to a tip as the underlying technology is further adopted in the late phases and finally is deadopted.

06 Birth and Death of a Category

The early tip represents the birth of the category, the later tip represents the death of the category. The time between birth and death can be more than fifty years. These days, the continuous innovations we bring to market in the late Main Street phase of the technology adoption lifecycle last only as long as VC funding can be had. Or, no more than ten years beyond the last round of funding. All of that occurs inside the cone that shrinks its way to the death of the category.

07 Multiple Constraints

We innovate inside a polygon, so we involve ourselves with more than one constraint. We will look at the distributions from the top down, looking at the circles that constitute them. The normal distributions are represented by circles. Poisson distributions are represented by much smaller circles. Technology adoption moves from a small footprint, a small circle, to a large footprint, a large circle.

Notice that as time passes on the adoption side of the technology adoption lifecycle, the distribution gets larger. Likewise on the deadoption side, the distribution gets smaller. Smaller and larger would be relative to sample size and standard deviations. The theta annotated in the diagram indicates the current slope of the technology associated with that constraint and the productivity improvement of the technology's s-curve, aka price-performance curve, and by price we mean the dollars invested to improve the performance.

08 Zero-Sum Game

Notice that when we pair adoption and deadoption, we are looking at a zero-sum game. The Poisson distribution would represent the entrant. The circle tangent to the Poisson distribution would represent the incumbent in a Foster disruption. The s-curves of both companies' competing technologies are still critical in determining whether a Foster disruption is actually happening or not, or the duration of such a disruption. Christensen disruptions are beyond the scope of this post.

I annotated a zero-sum game on the left, earlier in time. The pair of circles on the right are not annotated, but represent the same zero-sum game. There might be five or more vendors competing with the same technology. They might have entered at different times. Consider the market share formula Moore talked about in his books. The near monopolist gets 74% and everyone else gets a similar allocation of the remainder.

Notice that I used the terms core and orientation in the previous figure. The orientation would have to be figured out relative to the associated constraint. But, the circles in each zero-sum game represent the curvature of the kurtoses involved, which drives the length of the tails of the distribution relative to a core.

09 Line as Core

That core is much wider than shown in all but the weak signal context of a Dirac function that indicates some changes to conditional probabilities.

The arrow attached to each kurtosis indicates the size of each as the distribution normalizes.

10 Rectangle as Core

The core is usually wider. As it gets wider, the height of the distribution gets lower. The normalization of the standard normal, or the fact that the area under the distribution will always equal one, is what causes this. I did not change the kurtoses in the figure, but the thicker core implies progress towards the normal and less difference between the two kurtoses. The width of the range should stay the same throughout the life of the distribution once it begins to normalize. Remember that it takes 36 to 50 or so measurements before a sample normalizes. Various approximation methods help us approximate the normal when we lack adequate data. Skewness and kurtosis will be present in all samples lacking sufficient measurements. Look for skewness and kurtosis in the feedback collected during Agile development efforts. The normal, in those circumstances, will inform us as to whether the functionality is done and deliverable.
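
A minimal sketch of that small-sample behavior, with assumed sample sizes: even data drawn from a true normal shows leftover skewness and excess kurtosis until the sample gets into the 36-to-50 range and beyond.

import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(3)
for n in (5, 10, 20, 36, 50, 500):
    trials = rng.normal(size=(2_000, n))                      # 2,000 samples of size n from a true normal
    mean_abs_skew = np.mean(np.abs(skew(trials, axis=1)))
    mean_abs_kurt = np.mean(np.abs(kurtosis(trials, axis=1)))
    print(f"n={n:>3}  mean |skew|={mean_abs_skew:.3f}  mean |excess kurtosis|={mean_abs_kurt:.3f}")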

11 Core Width

Core width will change over the adoption lifecycle. I drew this figure thinking in terms of standard deviations. But, the Poisson distribution is what we have at the early adopter phase of the lifecycle. In the vertical, we tend to the normal. In the horizontal, some complex data fusions give us a three or more sigma normal, and in the late phases we are in the six or more sigma range. The core width is correlated with time, but in the lifecycle, time is determined by seats and dollars and the lifecycle phase, rather than calendar time. Note that I correlated the underlying geometries with time as well. Our financial analysis tells us to pass on discontinuous technologies, because the future looks small in the hyperbolic geometry we don't know we are looking at. Euclidean is easy. And, the spherical geometry leaves us in banker numbers, in information (strategy) overload, aka 30 different approaches that all work. No, he wasn't lucky. He was spherical.

Enjoy.

Do we gerrymander our product’s market?

April 5, 2017

Notice: I make absolutely no political statements in the following. 

Of course, we gerrymander our product’s market. We don’t intend to represent all of our customers either. When we introduce a product, we pick who we will sell it to. We find some rules. Sales qualify their prospects with respect to those rules. Sales bring outliers to us forcing us to say no to their deal or forcing us to redefine the product.

We prioritize. We trade off. We waste code. We waste prospects, customers, and users. All of these are our mechanisms for gerrymandering our product. We become insensitive to our prospects, customers, and users.

The technology adoption lifecycle organizes our prospects, customers, and users. With discontinuous innovations, our focus changes as we cross the technology adoption lifecycle. We start out in carrier or protocol, shift to carried content, then shift to carrier again, and subsequently shift back to carried content. We start out in a single form-factor and end broadly in many different form-factors. We start out with risk takers and end with those that take the least risk possible.

This latter characterization demonstrates the pragmatism scale underlying the entire technology adoption lifecycle.

With continuous innovations, typical these days, we don't do the whole lifecycle. We jump into the late phases and move to later phases. We act surprised when our offer suddenly has run through all of our addressable prospects. We surprise ourselves when we realize we need something new. Yes, even Apple has surprised itself with this many times since the first Apple computer.

But, here I’m talking about the pragmatism scale organizing our business with the phases of the lifecycle, not just phases. The finer we go with this the more likely a release will address prospects different from our consumers, and users with use cases organized in pragmatism slices, not just time slices. We end up with slices at one scale within slices of another scale. We end up with queues. We end up with boundaries.

Not attending to those boundaries results in gerrymandering which in turn leaves us inattentive to opportunities for customization in use cases, and pricing.

Mathematicians are addressing political gerrymandering now. See How to Quantify (and Fight) Gerrymandering.

Gerrymandering our products is a hard problem. The scales we use need to be aligned with our release cycle. Decide on scales. Then, get agreement on where the product and company are on the technology adoption lifecycle. Make a map. Know your pragmatism boundaries.

Moore described the pragmatism boundaries in terms of reference groups. Everyone in a particular slice refers to people, businesses, and business cases in their slice and nearby adjacencies. Each slice has its own evidence. This generates some communications isolations that grant us pricing isolations. Communications channels generate more boundaries, more to map.

The use cases served in the current slice will differ from the use cases in earlier slices. Yes, as time goes by the economic customer becomes more pragmatic, but then, so could the use cases and the marketing content.

To make matters harder, sales consumes each population at a different speed and might sell much more randomly without regard to lifecycle or pragmatism scale or communications channel considerations. Just a warning.

Growth would impact all of this. A prospect once sold is a customer ever after.

And, of course, all the talk of listening to customers et al. becomes a matter of where on our map that customer is speaking from. How does the map bundle that feedback? And, how does that feedback verify our efforts?

Quite a mess, a profitable mess.

The Cook’s Customer

March 17, 2017

I was perusing Anthony Bourdain's Appetites, a cookbook. In it, he asks a few questions about his customers, and he is shockingly honest about the answers to those questions.

What is it that “normal” people do? What makes a “normal” happy family? …

I had little clue how to answer these questions for most of my working life, as I'd been living it on the margins. I didn't know any normal people. From age seventeen on, normal people have been my customers. They were abstractions, literally shadowy silhouettes in the dining rooms of wherever it was that I was working at the time. I looked at them through the perspective of the lifelong professional cook and chef—which is to say, as someone that did not have a family life, who knew and associated only with fellow restaurant professionals, who worked while normal people played and played when normal people slept.

Do those of us in the software community have this problem? Are our customers still abstractions even if we've met them, spoken with them, engaged them in an ethnographic field study? Does their corporate culture look like our culture? Is it true that we work while they sleep?

Do they use the same software we use? No, of course not. Do they seek value where we seek it? No, of course not. Do our customer personas point out the differences between us and them? This gets harder with the technical enthusiasts because they seem much more like us than our users or our economic buyers.

Where do we define the closeness of our abstraction, the gap between an atomic bomb and a hypodermic needle? We go with the atomic bombs too often.

Make no mistake, sure I'm asking product managers, but really, I'm asking the developers because we leave this in their hands. And, when we "fully load" those developers to capture all the effort that we can, are we not failing to leave time to know the customer, know the carried content, or even know the carrier? We do tend to make time for our developers to know the carrier.

Developers don’t come to us experts in our carried content, our users, or our economic buyers. They need to learn those things which reach well beyond the “learning” the Agilists mention and experiment towards: was it used; was it used sooner, rather than later (was it successfully learned); does it deliver the value it was supposed to deliver to the entity it was supposed to be delivered to?

Once those questions get answered, tighten the limit, so the gap becomes a fence, rather than a borderland, and answer the questions again. Find the questions tied to the scale of the gap.

I'm sure, after working with too many developers who thought their users were just like them, that your answers will surprise you, just as Anthony's answers surprised him. Enjoy.

Kurtosis Risk

January 2, 2017

In the research for my previous posts on kurtosis, I ran across mentions of kurtosis risk. I wasn’t up to diving into that, and getting too far away from what I was writing about in those posts. mc spacer retweeted More On Skew and Kurtosis. I reread the post and decided to conquer kurtosis risk. The exploration was underway.

One of the things they don't teach you about in that intro stats class is the logical proof of what we are doing. We take a mean without checking its normality. We go forward with the normal distribution as if it were normal, ordinary, usual, typical, non-problematic. Then, we meet the data, and it's anything but normal. When meeting the data, we also meet skew risk and kurtosis risk. It's like meeting your spouse-to-be's mom. Usually, you meet your spouse-to-be's dad at the same time. Yeah, they all show up at the same time.

You might get taught various ways to approximate the mean when you have fewer than 30 data points, aka when your sample is too small. That space of fewer than 30 data points is where skew risk and kurtosis risk happen. The sample statistics drive around a while getting close to the as yet unknown population mean, equaling it a few times, circling it, and finally pulling in and moving in. Our collection of sample means eventually approximates the population mean.
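
A minimal sketch of that drive, with assumed numbers for the population: the running sample mean wanders around the population mean, crosses it a few times, and settles in somewhere past 30 or so observations.

import numpy as np

rng = np.random.default_rng(42)
population_mean = 10.0
data = rng.normal(loc=population_mean, scale=3.0, size=60)
running_mean = np.cumsum(data) / np.arange(1, len(data) + 1)   # the sample mean after each new observation

for n in (5, 10, 20, 30, 40, 60):
    gap = abs(running_mean[n - 1] - population_mean)
    print(f"n={n:>2}  running mean={running_mean[n - 1]:.3f}  distance from population mean={gap:.3f}")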

In artificial intelligence, back in the old days when it was important to think like a human, back in the days of expert systems, we encoded the logic in augmented transition networks. A single transition would look like IF StopSign, THEN Stop. Of course, that's not a network yet. That would wait until we wrote another, IF YieldSign, THEN Yield. That's just another transition. Those two transitions would, with some additional infrastructure, become a network, thus an augmented transition network. To make this easier, we used a descriptive language, rather than a procedural one. Prolog gives you the widest infrastructure. Prolog lets you present it with a collection of transitions, and it will build the proof to achieve the goal. It builds a tree and trims the inconsistent branches.
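
A minimal sketch of those transitions in Python rather than Prolog, my own toy and not the post's code; the extra ProceedWhenClear rule is an assumption added so the transitions actually chain into a small network.

transitions = {
    "StopSign": "Stop",
    "YieldSign": "Yield",
    "Stop": "ProceedWhenClear",   # an assumed extra rule so the transitions chain
}

def chain(fact):
    """Follow IF/THEN transitions from a starting fact until no rule fires."""
    path = [fact]
    while path[-1] in transitions:
        path.append(transitions[path[-1]])
    return path

print(chain("StopSign"))   # ['StopSign', 'Stop', 'ProceedWhenClear']
print(chain("YieldSign"))  # ['YieldSign', 'Yield']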

We’ve seen that building the tree and trimming the inconsistent branches before. We use generative grammars to build a decision tree for a potential product, and constraints to trim that decision tree, so we arrive at the product fit for the moment. There is a logical argument to our product.

Similarly, there is a logical argument, or a proof, to our statistical analysis. There in that proof of our statistical analysis, our skew and kurtosis risk emerge.

Statistics happen after our data is collected. We think in terms of given (IF or WhatIF, WIF) this data, then these statistics. We don’t think about that driving around as looking for the population mean, as a process. Statistics is static, excepting the Bayesian approach. Logic insists. The proof frames everything we do. When computing a mean, the proof is going to insist on normality. But, this logical insistence is about the future, which means we are actually doing an AsIf analysis. We imagine that we checked for normality. We imagine that we know what we are doing since nobody told us any different yet. An AsIf analysis imagines a future and uses those imagined numbers as the basis for an analysis. In that imagining of the future, we are planning, we are allocating resources, we are taking risks. With samples, those risks are skewness and kurtosis risks.

I've delayed defining skewness risk in this post until the very end. Once you understand kurtosis risk, skewness risk is nearly the same thing, so bear with me.

valid-distribution

We will use the triangle model, which represents decision trees as triangles, to represent our proof.

In this figure, the root of the decision tree is at the bottom of the figure. The base of the tree is at the top of the figure. In the triangle model, the base of the triangle represents the artifact resulting from the decision tree, or proof.

Here we paired the distribution with its proof. A valid proof enables us to use the distribution. In some cases, the distributions can be used to test a hypothesis. An invalid proof leads to an invalid distribution which leads to an invalid hypothesis. Validity comes and goes.

OK, enough meta. What is kurtosis risk?

When we assert/imagine/assume (AsIf) that the distribution is normal, but the actual data is not normal, we’ve exposed ourselves to kurtosis risk. We’ve assumed that the sample mean has converged with the population mean. We’ve assumed that we have a legitimate basis for hypothesis testing. Surprise! It hasn’t converged. It does not provide a basis for hypothesis testing.
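
A minimal sketch of checking that assertion before leaning on it, with assumed data; D'Agostino's normality test in scipy is built on the sample's skewness and kurtosis, so a rejection here is the skew and kurtosis risk announcing itself.

import numpy as np
from scipy.stats import normaltest

rng = np.random.default_rng(1)
samples = {
    "normal sample": rng.normal(size=500),
    "skewed sample": rng.exponential(size=500),   # decidedly not normal
}

for label, data in samples.items():
    stat, p = normaltest(data)
    verdict = "no evidence against normality" if p > 0.05 else "normality rejected"
    print(f"{label}: p={p:.4f}  {verdict}")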

As an aside, WIFs (What IFs) are what spreadsheets are for. Pick a number, any number to see what the model(s) will do. AsIfs come from scenario planning, a process that is much more textual than numeric. A scenario is an outcome from various qualitative forces.

Back to it. Google sent me to Wikipedia for the above definition of kurtosis risk. I drew the definition and kept on thinking. This picture is the final result of that thinking.

kurtosis-risk

We start with the top-down, footprint view of a normal distribution, a circle. The brown vertical line extends from the green cross on the right, representing the mean, median, and mode, which are the same for distributions that are normal.

Then, we see that our actual data is an ellipse. The blue vertical line extends from the green cross on the left. That line is labeled as being the mode of the skewed normal. In previous discussions of kurtosis, we used kurtosis to describe the tails of the distribution. In some definitions, kurtosis was seen as describing the peakedness of the distribution, whereas we used it to describe the core of the distribution.

I drew a line through the two means. This line gave us two tails and a core. I should have drawn the core so it actually touched the two means. Then, I projected the two tails onto an x-axis so I would have a pair of lengths, the cosines of the original lengths. That one is longer and the other shorter is consistent with previous discussions of kurtosis.

A note on the core: I've taken the core to be the most undifferentiated space under the curve. This is where no marketer wants to get caught. The circle that serves as the footprint of the normal is tessellated by some scheme. A shape in that tessellation represents the base of a histogram bar. From that bar, each adjacent histogram bar is exactly one bit different from that bar. The resolution of the shapes can be any given number of bits different, but that gets messy and, in the 3D graphic tessellation sense, patchy. A string "00000000" would allow its adjacent ring of histogram bars to contain up to eight different bars representing eight unique differences. "Ring" here is descriptive, not a reference to group theory. The histograms of the normal distribution encode all available differences. Refinements work outward from the undifferentiated mean to the highly differentiated circle of convergences, aka the perimeter of the normal distribution's footprint. We are somewhere under the curve. So are our competitors. So are our prospects and customers.
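
A minimal sketch of that one-bit adjacency, my own illustration rather than anything from the figure: an eight-bit string has exactly eight neighbors at Hamming distance one, the ring of adjacent histogram bars described above.

def one_bit_neighbors(bits):
    """Return every string that differs from bits in exactly one position."""
    flip = {"0": "1", "1": "0"}
    return ["".join(flip[b] if j == i else b for j, b in enumerate(bits))
            for i in range(len(bits))]

core = "00000000"
neighbors = one_bit_neighbors(core)
print(len(neighbors), "neighbors of", core, ":", neighbors)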

An ordinary interpretation of a peak with high peakedness is uniqueness or focus. That's a high kurtosis value. A peak that's less peaked, rounded, smoother is less unique, less focused, possibly smudged by averaging, tradeoffs, and gaps. It all shows up in the histogram bars.

The other thing that shows up is the differences that are our product over the life of the product. A given iteration would have a particular shape. Subsequent iterations would build a path under the histograms that constitute the normal. Customers would cluster around different iterations. A retracted feature would show up as defections to competitors with different configurations more consistent with the cognitive processes of the defectors, our “once upon a time” users. Use tells. Differentiation segments.

So I attend to the tessellations and shapes of my histogram bars, to the sense of place, and to movement.

I then projected the core onto the sphere represented by the circle. Yes, the same circle we used to represent the footprint of the normal distribution. The core then appears as an ellipse. It should be closer to the pole, then it would be smaller. This ellipse should be the same shape as the top of the ellipsoid, containing the ellipse of the data, that the sphere is topologically deformed into.

Then, I drew a vector along the geodesic from the pole to the elliptical projection of the core to represent the force of topological deformation. I also labeled the circle and ellipse view to show how the deformation would be asymmetrical. The right is much less deformed than the left.

summary-veiw

Next, I put the kurtosis in the summary view of a box chart using those lengths we found drawing a line through the two means. This box chart is tied to a view of the tails and kurtoses drawn as curvatures. As for the slopes of the distribution's actual curve, they are approximations.

So what is kurtosis risk? When your sample means have not as yet converged to the population mean, you are exposed to kurtosis risk. Or, as Wikipedia puts it, when you assert that the data is normally distributed, but it isn't, that assertion gives rise to kurtosis risk.

And, what of skew risk? You expose yourself to skew risk when you assert that your data is symmetric, when in fact, it isn’t. In the math sense, skew transforms the symmetric into the asymmetric and injects the asymmetries into the curvatures of the kurtoses constraining the tails along the radiant lines in the x-axis plane.

This business of the assertion-base for statistics involves constant danger and surprise. A single inconsistent assertion in the middle of the proof can invalidate much of the formerly consistent proof of a once useful analysis. Learn more, be more surprised. Those intro classes blunt the pointed sticks archers call arrows. Before they were pointed, they were blunt–dangerous in different ways. Enjoy.

The Hyperbolic No

December 25, 2016

When we move physical constraints, we innovate discontinuously. When we innovate discontinuously, we create economic wealth as a sideband to making a lot of cash, and we create institutions and careers. We haven’t been doing that lately. Instead, we innovate for cash alone, and we cash in our economic wealth for cash and never replace that economic wealth.

The discontinuity at the core of a discontinuous innovation cannot be overcome by iterating beyond current theory. We need a new theory. That new theory has its own measures and dimensions. These are at the invention layer of innovation. They cause a discontinuity at the commercialization layer. That discontinuity is in the populations being served. The existing population says no. The nascent adopting population says, or will come to say, yes. Polling is fractured by discontinuities.

When we do our financial analysis, the discontinuous innovation generates numbers that fail to motivate us to jump in and make it happen. Why? It's a question that I've spent years looking at. I've blogged about it previously as well. My intuition tells me that the consistent underreporting is systematic and due to the analysis. My answer revolves around geometry.

We do our analyses in terms of a Euclidean geometry, but our realities are multiple, and that Euclidean reality is fleeting. Our Euclidean analysis generates numbers for a hyperbolic space, underreporting the actual long-term results. Results in a hyperbolic space appear smaller and smaller as we tend to infinity or the further reaches of our forecasted future. Hyperbolic space is the space of discontinuous innovation.

Once a company achieves a six-sigma normal, or the mean under the normal we use to represent the technology adoption lifecycle, or in other terms, once a company has sold fifty percent of its addressable and allocated market share, the company leaves the Euclidean space and enters the spherical space where many different financial analyses of the same opportunity give simultaneous pathways to success. This is where a Euclidean analysis would report some failures. Again, a manifestation of the actual geometry, rather than the numbers.

Maps have projections. Those projections have five different properties used in different combinations to generate a graphical impression. Explore that here. Those projections start with the same numbers and tell us a different story. Geometries do the same thing to the numbers from our analysis. Our analysis generates an impression of the future. The math is something mathematicians call L2. We treat L2 as if it were Euclidean. We do that without specifying a metric. It’s linear and that is enough for us. But, it’s not the end of the story.

The technology adoption lifecycle hints at a normal, but the phases decompose into their own normals. And, the bowling alley is really a collection of Poisson distributions that tend to the normal and aggregate to a normal as well. So we see a process from birth to death, from no market population to a stable market population. Here as well, the models change geometries.

I’ve summarized the geometries in the following figure.

geometres

We start at the origin (O). We assert some conditional probability to get a weak signal or a Dirac function. We show a hyperbolic triangle, a Euclidean triangle, and a spherical triangle. Over time, the hyperbolic triangle gains enough angle to become Euclidean. The Euclidean triangle then gains enough angle to become spherical. The angle gain occurs over the technology adoption lifecycle, not shown here, parallel to the line through the origin.
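
A quick way to state that angle gain, using the standard triangle relations rather than anything from the figure itself: in a hyperbolic triangle the angles sum to less than π, in a Euclidean triangle to exactly π, and in a spherical triangle to more than π, and in the curved cases the gap is the triangle's area.

hyperbolic:  α + β + γ < π,  area = π − (α + β + γ)  (with curvature −1)
Euclidean:   α + β + γ = π
spherical:   α + β + γ > π,  area = R² ((α + β + γ) − π)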

When we look at our numbers we pretend they are Euclidean. The hyperbolic triangle shows us how much volume is missed by our assumption of Euclidean space.

hyperbolic

Here I drew some concentric circles that we will come back to later. For now, know that the numbers from our analysis report only on the red and yellow areas. We expected that the numbers reported the area of the Euclidean triangle.

euclidean

The green triangle is the Euclidean triangle that we thought our numbers implied. In a six-sigma normal, the numbers from the analysis would be correct. Less than six sigma or more than six sigma, the numbers would be incorrect.

spherical

In the spherical geometry, the problem is subtly different. We keep thinking in Euclidean terms, which hides the redundancies in the spherical space. Here, competitors have no problem copying your differentiation, even to the point of coding around your patent. You have more competition than expected and end up with less market as a result. The risks are understated.

hyperbolic-tessilation

To reiterate the problem with the hyperbolic space, we can look at a hyperbolic tessellation.

euclidean-tessilation

In a Euclidean tessellation, each shape would be the same size.

The differences in impressions generated by the hyperbolic view and the Euclidean view should be obvious. We’ve been making this mistake for decades now.

In a spherical tessellation, the central shape would be the smallest and the edge shapes would be the largest.

Here, in a hyperbolic geometry, the future is at the boundary of the circle. Numbers from this future would appear to be very small.

In a factor analysis view, the first factor would be represented by the red polygon. The second factor would be represented by the orange polygons. The third factor would be represented by the yellow polygons. The edge of the circle lies at the convergence of the long tail with the ground axis. The edge would be lost in the definition of the limit. The convergence is never achieved in a factor analysis.

Building a factor analysis over those tessellations tells us something else. Factor analyses return results from hyperbolic space routinely. The first factor is longer and steeper. The hyperbolic tessellation would do that. Neither of the other spaces would do that. So when you do a factor analysis, you may be engaging in more geometric confusion.

Notice that the spherical geometry of the typical consumer business is, like most business people, biased to optimism. The future is so big. But, to get to those numbers, you have to escape the Euclidean space of the very beginnings of the consumer facing startup.

With a discontinuous innovation and its hyperbolic space, the low numbers and the inability to get good numbers to arrive in the future usually convince us not to go there, not to launch, so we don't. But, we'd be wrong. Well, confused.

Economists told us that globalism would work if we didn't engage in zero-sum thinking. But, that is what we did. We, the herd, engaged in zero-sum thinking and doing. We innovated continuously, which has us ignoring the economic wealth vs cash metric. We, in our innovation songs, confuse the discontinuously innovative past of the Internet with the continuously innovative present. Or worse, with disruption, thinking we'd get the same results. This even when the VCs are not confused. They deal smaller, much smaller now than back then.

Wallowing in cash doesn't replace the economic wealth lost to globalism. We can fix this in short order without the inputs and errors from our politicians. But, we have to innovate discontinuously to replace that lost economic wealth. It's time to say yes in the face of the hyperbolic no. We can create careers and get people back to work.

The Shape Of Innovation

November 26, 2016

In the past, I’ve summarized innovation as a decision tree. I’ve summarized innovation as divergence and convergence, generation and tree pruning. So I drew this figure.
context-10

The generative grammar produces a surface. The constraints produce another surface. The realization, represented by the blue line, would be a surface within the enclosed space, shown in yellow. The realization need not be a line or flat surface.

In CAD systems, the two surfaces can be patched, but the challenge here is turning the generative grammar into a form consistent with the equations used to define the constraints. The grammar is a tree. The constraints are lines. Both could be seen as factors in a factor analysis. Doing so would change the shape of the generated space.

context-06

In a factor analysis, the first factor is the longest and steepest. The subsequent factors are flatter and shorter.

A factor analysis produces a power law.

A factor analysis represents a single realization. Another realization gives you a different factor analysis.

context-07

When you use the same units on the same axes of the realizations, those realizations are consistent or continuous with each other. These are the continuities of continuous innovation. When the units differ in more than size between realizations, when there is no formula that converts from one scale to another, when the bases of the axes differ, the underlying theories are incommensurate or discontinuous. These are the discontinuities of discontinuous innovation.

context-11

The surfaces contributing to the shape of the enclosed space can be divided into convex and concave spaces. Convex spaces are considered risky. Concave spaces are considered less risky. Generation is always risky. The containing constraints are unknown.
context-17

The grammar is never completely known and changes over time. The black arrow on the left illustrates a change to the grammar. Likewise, the extent of a constraint changes over time, shown by the black arrow on the right. As the grammar changes or the constraints are bent or broken, more space (orange) becomes available for realizations. Unicode, SGML, and XML extended the reach of text. Each broke constraints. Movement of those intersections moves the concavity, the safe harbor in the face of generative risks. As shown, the concavity moved up and to the left. The concavity abandoned the right. The right might be disrupted in the Foster sense. The constraints structure populations in the sense of a collection of pragmatism steps. Nothing about this is about the underserved or disruption in the Christensen sense.

The now addressable space is where products fostering adoption of the new technology get bought.

The generative grammar is a Markov chain. Where the grammar doesn’t present choice, the chain can be thought of as a single node.
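
A minimal sketch of that generate-and-prune idea, using a toy grammar and constraint of my own rather than anything from the post: the Markov chain over grammar nodes generates candidate realizations, and the constraint trims the ones that violate it.

import random

random.seed(5)

# Each node lists the choices the grammar presents; a node with no entry ends the chain,
# and a single choice would collapse into a single node as noted above.
chain = {
    "root":    ["carrier", "carried"],
    "carrier": ["protocol", "platform"],
    "carried": ["use_case", "job_to_be_done"],
}

def generate(node="root"):
    """Walk the Markov chain from the root, choosing a branch at each node."""
    path = [node]
    while path[-1] in chain:
        path.append(random.choice(chain[path[-1]]))
    return path

def satisfies_constraint(path):
    # An assumed constraint for illustration: realizations ending in "protocol" are pruned.
    return path[-1] != "protocol"

candidates = [generate() for _ in range(8)]
realizations = [p for p in candidates if satisfies_constraint(p)]
print("generated:", candidates)
print("kept after pruning:", realizations)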

context-12

The leftmost node is the root of the generative grammar. It presents a choice between two subtrees. Ultimately, both branches would have to be generated, but the choice between them hints at a temporal structure to the realization, and shifting probabilities from there.

New grammatical structures would enlarge the realization. Grammars tend to keep themselves short. They provide paths that we traverse or abandon over historical time. The realization would shift its shape over that historical time. This is where data mining could apply.

When the constraints are seen from a factor analysis perspective, the number of factors is few in the beginning and increases over time. This implies that gaps between the realization and the factors would exist and diminish over time. Each factor costs more than the factor before it. Factors add up to one, and then become a zero-sum game. For another factor to assert itself, existing factors would have to be rescaled.

Insisting on a factor analysis perspective leaves us with trying to find a factor designated as the root constraint. And then, defining the face-offs: this subgrammar vs this collection of constraints.

context-18

Each would have rates, thus differential equations. Each would be a power law. So in our system there would be four differential equations and four power laws. There would also be four convergences. These would be reflected in the frequencies of use histograms.

Notice that nowhere in this discussion was innovation based on an idea from management. The ideas were about enlarging the grammar, aka the ontological sortables, and the breaking or bending of constraints. When a constraint built into a realization breaks, Goldratt told us that the realization moves some distance to the next constraint. These efforts explore the continuities and discontinuities of the possible innovations. Productization is the next step in fostering adoption.

As always, enjoy.

Doing Discontinuous Innovation

November 14, 2016

Discontinuous innovation creates economic wealth. Continuous innovation captures cash. Economic wealth, unlike what the financial services companies tell us with their wealth management services, is more than a pile of cash. Cash is the purview of the firm.  Economic wealth is the purview of the economy as it reaches well beyond the firm. Cash is accounted for where economic wealth is not.

Notice that no firm has an imperative to create economic wealth. To the contrary, managers today are taught to convert any economic wealth they encounter into cash. They do this with the assumption that that economic wealth would be put back, but that has yet to happen. Globalism was predicated on using the cash saved to create new categories, new value chains, new careers—economic wealth. Instead, we sent it to Ireland to avoid taxes. Oh well, we let the tail wag the dog.

Likewise, we are taught to lay off people, because we can put that money to better use, but then we don’t put it to better use. Those people we laid off  don’t recover. They work again, but they don’t recover. Oh, well. This is where continuous innovation takes you. Eventually, it is moved offshore. The underlying carrier technologies are lost as well, so those jobs can’t come back. The carrier technologies will evolve elsewhere.

I could go on. I did, but I deleted it.

Anyway, I've been tweeting about our need to create new economic wealth as the solution to globalism. Instead, the rage gets pushed to the politicians, so we've seen where that got us. The politicians have no constructive solution. We can solve this problem without involving politicians. We can innovate in a discontinuous manner. As a result of those tweets, a product manager that follows me asked, so how do we innovate discontinuously?

I’ll review that here.

  1. Begin with some basic research. That kind of research bends or breaks a constraint on the current way things are done in that domain.

Samuel Arbesman's "The Half-Life of Facts" gives us a hint in the first chapter with a graph of the experiments on temperature. Each experiment resulted in a linear range resulting from the theory used to build the measurement system that underlaid the experiment. The experiments gave us a dated collection of lines. The ends of those lines were the ends of the theories used to build the experiments. You couldn't go from one line to the next with a single measurement device, with a single theory. You had a step function on your hands after the second experiment.

The lines on the right side of the graph were replaced with later lines, later experiments. The later lines were longer. These later lines replaced the earlier step functions with another step function. A single measurement device could measure more. The later theory could explain more. The later theory broke or bent a constraint. The earlier theory did so as well when you consider that before the earliest theory, there was no theory, so nothing could be done. As each theory replaced the prior theory more could be done. Value was being delivered. That value escaped the lab once a manager got it sold into a market beyond the lab, aka innovated.

  2. Build that basic research into an infrastructural platform, into your technology/carrier layer, not into a product/carried layer. Do not even think about a product yet.

Moore's technology adoption lifecycle starts with a technology. After step 2, that's what you have. You have a technology. Products get a technology adopted. The technical enthusiasts are the first population that needs to be addressed. This population is the geeks. They insist on free. They insist on play. They refer technologies to their bosses.

  3. Explore what vertical industry you want to enter, then hire a rainmaker known in that vertical. This rainmaker must be known by the executives in that vertical. This rainmaker is not a sales rep calling themselves a rainmaker.
  4. When the rainmaker presents you with a B2B early adopter, their position in the vertical matters. Their company must be in the middle of the industry's branch/subtree of the industrial classification tree. They should not be on a leaf or a root of the branch/subtree. This gives you room to grow later. Growth would be up or down and not sideways to peers of the same parent in the subtree.
  5. That B2B early adopter's vertical must have a significant number of the seats and dollars.
  6. That early adopter must have a product visualization. This product visualization should be carried content, not carrier. Carrier functionality will be built out later in advance of entering the IT horizontal. Code that. Do not code your idea. Do not code before you're paid. And, code it in an inductive manner as per "Software by Numbers." Deliver functionality/use cases/jobs to be done in such a way that the client, the early adopter, is motivated to pay for the next unit of functionality.
  7. Steps 3-6 represent a single lane in Moore's bowling alley. Prepare to cross the chasm between the early adopter and more pragmatic prospects in the early adopter's vertical. Ensure that the competitive advantage the early adopter wanted gets achieved. The success of the early adopter is the seed of your success. Notice that most authors and speakers talking about crossing the chasm are not crossing the chasm. There is no chasm in the consumer market.
  8. There must be six lanes before you enter the IT horizontal. That would be six products, each in their own vertical. Do not stay in a single vertical. So figure out how many lanes you can afford and establish a timing of those lanes. Each lane will last at least two years because you negotiate a period of exclusivity for the client in exchange for keeping ownership of your IP.
  9. Each product will enter its vertical in its own time. The product will remain in the vertical market until all six products in the bowling alley have been in their verticals at least two years. Decide on the timing of the entry into the horizontal market taking all six products into consideration. All six will be modified to sum their customer/user populations into a single population, so they can enter the IT horizontal as a carrier-focused technology. The products will shed their carried functionality focus. You want to enter the horizontal with a significant seat count, so it won't take a lot of sales to win the tornado phase at the front of the IT horizontal.
  10. I'll leave the rest to you.

For most of you, it doesn't look like what you're doing today. It creates economic wealth, will take a decade or more, requires larger VC investments and returns, and it gets a premium on your IPO, unlike IPOs in the consumer/late market phases of the technology adoption lifecycle.

One warning. Once you’ve entered the IT horizontal, stay aware of your velocity as you approach having sold half of your addressable market. The technology adoption lifecycle tells us that early phases are on the growth side and that late phases are on the decline side of the normal curve.

There needs to be a tempo to your discontinuous efforts. The continuous efforts can stretch out a category's life and the life of the companies in that category. Continuous efforts leverage economies of scale. A discontinuous effort takes us to a new peak from which continuous efforts will ride down. Discontinuous innovations must develop their own markets. They won't fit into your existing markets, so don't expect to leverage your current economies of scale. iPhones and Macs didn't leverage each other.

Don't expect to do this just once. Apple has had to do discontinuous innovations three or four times now. They need to do it again now that iPhones are declining. Doing it again, and again, means that laying off is forgetting how to do it again. It's a matter of organizational design. I've explored that problem. No company has to die. No country has to fall apart due to the loss of its economic wealth.

Value Projection

November 7, 2016

I've often used the triangle model to illustrate value projection. In a recent discussion, I thought that a Shapley value visualization would work. I ended up doing something else.

We’ll start by illustrating the triangle model to show how customers use the enabling software to create some delivered value. The customer’s value is realized by their people using a vendor’s software. The vendor’s software provides no value until it is used to create the value desired by the customer.

value-projection-w-triangle-model-01

The gray triangle represents the vendor's decisions that resulted in the software that they sold to the customer. The base of that triangle represents the user interface that the customer's staff will use. Their use creates the delivered value.

The red triangle represents the customer's decisions that resulted in that delivered value. The software was a very simple install-and-use application. Usually, configurations are more complicated. Other software may be involved. It may take multiple deliverables to deliver all the value.

value-projection-w-triangle-model-02

Here we illustrate a more complicated situation where a project with several deliverables and another vendor's product was needed to achieve the desired value.

When a coalition is involved in value delivery, the Shapley value can be used to determine the value each member of the coalition should receive relative to their contribution to the value delivered.

shapely-value

Here I used a regular hexagon to represent six contributors that made equal contributions. The red circle represents the value delivered.

The value delivered is static, which is why I rejected this visualization. The effort involves multiple deliverables.
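
For what it's worth, here is a minimal sketch of the Shapley computation itself, with a made-up characteristic function and member names; it averages each member's marginal contribution over every order in which the coalition could have formed.

from itertools import permutations

members = ["vendor", "integrator", "customer_team"]

def value(coalition):
    """An assumed characteristic function: value delivered by a subset of the coalition."""
    base = {"vendor": 40, "integrator": 30, "customer_team": 30}
    total = sum(base[m] for m in coalition)
    if len(coalition) == len(members):
        total += 30                      # synergy only the full coalition unlocks
    return total

shapley = {m: 0.0 for m in members}
orderings = list(permutations(members))

for order in orderings:
    seen = []
    for m in order:
        before = value(seen)
        seen.append(m)
        shapley[m] += value(seen) - before

for m in members:
    shapley[m] /= len(orderings)

print(shapley)   # each member's base value plus an equal share of the synergy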

 

The next thing we had to handle was representing the factors involved in that value delivery. Those factors can be discovered by a factor analysis.

factor-analysis

A factor analysis allocates the variance in the system to a collection of factors. The first factor is the longest and steepest factor. The first factor explains more variance than any of the subsequent individual factors. The second factor is shorter and flatter than the first factor. The second factor is longer and steeper than the third factor. The third factor is flatter and shorter than the second factor.

Even without the details, 80 percent of the variance is covered by the first three factors. Additional factors can be found, but they become increasingly expensive to discover.

For our purposes here we will stop after the first three factors or after the first 80 percent of variance. We will allocate some of the delivered value to those factors.
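
A minimal sketch of that variance allocation, with synthetic data and principal components standing in for the factor analysis; the exact percentages are an artifact of my assumed loadings, but the shape, a long steep first factor and flatter ones after it, is the point.

import numpy as np

rng = np.random.default_rng(2)
n = 1_000
f1 = rng.normal(size=n)                                   # a dominant latent factor
f2 = rng.normal(size=n)                                   # a weaker one
X = np.column_stack([
    3 * f1 + rng.normal(scale=0.5, size=n),
    2 * f1 + f2 + rng.normal(scale=0.5, size=n),
    f2 + rng.normal(scale=0.5, size=n),
    rng.normal(size=n),                                   # mostly noise
])

cov = np.cov(X, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(cov))[::-1]      # variance captured by each component
explained = eigenvalues / eigenvalues.sum()
print("variance explained per factor:", explained.round(3))
print("cumulative:", np.cumsum(explained).round(3))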

Putting all of this together, we get the following visualization.
value-projection

Here the vendor is at the center of the rings. The rings are organized by the project’s deliverables along the project’s timeline. The first ring represents the UI of the vendor’s application. The distance between this ring and the origin of the circle represents the time it took to deliver the UI. That UI incorporates the factors explaining the relative importance of the delivered elements of the software.  The white area in the vendor ring, adjacent to the purple factor represents the 20 percent of variance or importance that would be allocated to subsequent factors beyond the first three.

The gray rings represent the time gaps between the install. The second customer ring represents the efforts to configure the application. The third ring represents further implementation efforts. The customer’s efforts might involve using an API to extend the functionality of the application. This is shown with the orange and red segments. The extension is organized as a stack crossing the customer’s rings.

The radius of the circles represents time. That being the case, we don’t need the left side of the circles. Time starts at the origin and moves outward.

Different vendors could be represented with different rings, or some allocation of the existing rings. The vendors themselves have ranks relative to the delivery of the ultimate value.

I’d appreciate some comments. Enjoy.

Implicit Knowledge

October 24, 2016

One of the distinctions I've been making out on Twitter is the difference between what I call fictional and non-fictional software. We get an idea. We have to ask the question: do users actually do this today without our software? If the answer is "No," we get to make up how it is to be done. The user tasks are a blank whiteboard. That's fictional software. But most of the time, the answer is not "No." In that case, the software is non-fictional, so we need to do an ethnography and find out exactly how the user does it, and what their cognitive model is while they do what they do. In non-fictional software, neither the developers nor the UX designers are free to make things up.

Yesterday, I read "Usability Analysis of Visual Programming Environments: a 'cognitive dimensions' framework." The author, a UX designer, makes some statements that clarified for me that UX design as practiced today, particularly by this designer, is fictional. Tasks exist before they are designed. Tasks exist before they are digitized by programmers. This isn't new. Yahoo built a search engine without ever looking at existing search engines or asking library science practitioners how to do it. Yahoo made it up and then discovered many of the findings and practices of library science practitioners later. That is to say, they approached, progressed towards convergence with, the user's real cognitive model of the underlying tasks. There is still a gap.

Agile cannot fix those gaps in non-fictional software. It can only approach and converge to the gap width between the user’s bent cognitive model they use as users, and the real cognitive model they learned eons ago in school. That learning was explicit with a sprinkling of implicit. The implicit does not get captured by asking questions, talking, observing, or iterating. With any luck, a trained observer, an ethnographer, and their observational frameworks can observe and capture that implicit knowledge.

iteration-gap

A Rubik’s Cube can serve as an example. When solving a cube, we explore the problem space, a tree, with a depth first search. We can use simple heuristics to get close. But then, we stop making progress and start diverging away from the solution. We get lost. We are no longer solving. We are iterating. We are making noise in the stochastic sense. We stop twisting and turning. We look for a solution on the web. We find a book. That book contains “The hint,” the key. So after a long delay, we reset the cube, use the hint, and solve the cube.

diverge-converge-delay

We joined the epistemic culture, or what I was calling the functional culture, of the cube. We are insiders. We solve the cube until we can do it without thinking, without the search struggles, and without remembering the hint. The explicit knowledge we found in that book was finally internalized and forgotten. The explicit knowledge was made implicit. If a developer asked how to solve the cube, the user doesn't remember and cannot explicate their own experience. They cannot tell the developer. And, that would be a developer that wasn't making it up, or fictionalizing the whole mess.

All domains contain and find ways to convey implicit knowledge. The Rubik’s cube example was weakly implicit since it has already been explicated in that book. The weakly implicit knowledge is a problem of insiders that have been exposed to the meme and outsiders who have not. Usually, those that got it teach those that don’t. Insiders teach outsiders. In other domains, implicit knowledge remains implicit but does get transferred between people without explication. Crafts knowledge is implicit. Doing it or practice transfers craft knowledge in particular, and implicit knowledge generally.

Let’s be clear here that generalist 101 class in the domain that you took back in college did not teach you the domain in the practitioner/expert sense. You/we don’t even know the correct questions to ask. I took accounting. I’m not an accountant. It was a checkbox, so I studied it as such. A few years after that class I encountered an accounting student and his tutor. The student was buying some junk food at the snack bar. The tutor asked him what accounts were affected by that transaction. That tutor was an insider. The student was working hard to get inside.

For anyone that will ever be a student of anything, there is no such thing as a checkbox subject. Slap yourself if you think so. Dig into it. Boredom is a choice, a bad one. You’re paying a lot of money, so make it relevant to think like an insider.

Recently, a machine beat a highly-ranked human in Go, a game not amenable to the generative space and heuristic-based pruning approach of the likes of Chess. The cute thing is that a machine learned how to be that human by finding the patterns. That machine was not taught an explicit Go knowledge. That machine now teaches Go players what it discovered implicitly and transfers knowledge via practice and play. The machine cannot explain how to play Go in any explicating manner.

One of my lifetime interest/learner topics was requirements elicitation. Several years ago, I came across a 1996 paper on requirements elicitation. Biases were found. The elicitor assumed the resulting system would be consistent with the current enterprise architecture, and let that architecture guide the questions put to users and customers, their bosses. That biased set of requirements caused waterfall development to fail. But, Agile does not even try to fix this. There will always be that gap between the user’s cognitive model and the cognitive model embedded implicitly in the software. UX designers like the author of the above paper impose UX without regard to the user’s cognitive model as well. I have found other UX designers preaching otherwise.

So the author of the above paper takes a program that already embeds the developer’s assumptions that already diverges and fictionalizes the user’s non-fictional tasks and further fictionalizes those tasks at the UX level. Sad, but that’s contemporary practice.

So what does this mess look like?

dev-ui-induced-gap

Here, we are looking at non-fictional software. The best outcome would end up back at the user's conceptual model, so there was no gap. I've called that gap negative use costs, a term used in the early definition of the total cost of ownership (TCO). Nobody managed negative use costs, so there were no numbers, so in turn Gartner removed them from the TCO. Earlier, I had called it training, since the user that knew how to do their job has to do it the way the developer and UX designer defined it. When you insert a manager of any kind in the process, you get more gap. The yellow arrows reflect an aggregation of a population of users. Users typically don't focus on the carrier layer, so those training costs exist even if there were no negative use costs in the carried content.

As for the paper that triggered this post, “cognitive” is a poor word choice. The framework does not encode the user’s cognitive map. The framework is used to facilitate designer-to-manager discussions about a very specific problem space: users writing macros. Call it programming and programming languages if you don’t want your users doing it. Still useful information, but the author’s shell is about who gets to be in charge. The product manager is in charge. Well, you’ll resolve that conflict in your own organization. You might want to find a UX designer who doesn’t impose their assumptions and divergences on the application.

 

The Tracy-Widom Distribution and the Technology Adoption Lifecycle Phases

October 11, 2016

In my recent bookstore romps, the local B&N had a copy of Mircea Pitici’s The Best Writing on Mathematics 2015. I’ve read each year’s volume since I discovered them in the San Antonio public library system years ago. I read what I can; I don’t force myself to read every article. But what I do read, I contextualize in terms of what it means to me, a product strategist. I’m a long way from finished with the 2015 book. I’m thinking I need to buy the volumes going back as far as I can and read every article. Right now that’s impossible.

I thought I was finally finished with kurtosis, but no, I wasn’t, thanks to the 2015 book. So what brought kurtosis back to the forefront? Natalie Wolchover’s “At the Far Ends of a Universal Law” did. The math in that article is about the analytic view of phase transitions, coupled differential equations described by something called the Tracy-Widom distribution. That distribution is asymmetric, meaning it has skewness, which in turn means it exhibits kurtosis.

In “Mysterious Statistical Law May Finally Have an Explanation,” in the October 2014 edition of Wired magazine, the Tracy-Widom distribution is explained. It is linked to distributions of eigenvalues and to phase transitions. The phase transition aspect of the Tracy-Widom distribution caught my attention because Geoffrey Moore’s technology adoption lifecycle is a collection of phase transitions. The article contained a graph of the Tracy-Widom distribution, which I’ve modified somewhat here.

[Figure: tw2]

I annotated the inflection points (IP) because they represent the couplings between the differential equations that make up the Tracy-Widom distribution. I used thick black and blue lines to represent those differential equations. The Tracy-Widom distribution is a third-order differential equation, which is composed of two second-order differential equations (S-curves), each of which is in turn composed of two first-order differential equations (J-curves).
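
For reference, and not something taken from the WIRED article or from my figure, the standard analytic characterization of the (GUE) Tracy-Widom distribution ties it to the Painlevé II equation:

\[ F_2(s) = \exp\!\left( -\int_s^{\infty} (x - s)\, q(x)^2 \, dx \right), \qquad q''(s) = s\,q(s) + 2\,q(s)^3, \qquad q(s) \sim \mathrm{Ai}(s) \ \text{as } s \to \infty, \]

where Ai is the Airy function and q is the Hastings-McLeod solution; the lopsided curve graphed in figures like the one above is the corresponding density, the derivative of F_2.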

The cool thing is that we move from a stochastic model to an analytic model.

I removed the core vs. tail color coding from the WIRED diagram. In my earlier discussions of kurtosis, the core and tails were defined by the fourth moment, aka the inflection points coupling the first-order differential equations. That error persists in this figure because the inflection points were hard to determine by sight. Notice also that the WIRED diagram hints at skewness but does not indicate how the distribution is leaning. For more on leaning and theta, see Convergence and Divergence—Adoption, De-adoption, Readoption, and More On Skew and Kurtosis. The Tracy-Widom distribution is taken as a given here, rather than as a transformation of the normal. Much about kurtosis remains unresolved and unsettled in the mathematics community at this time.

The dashed vertical line separating the two sides of the distribution intersects the curve at the maximum of the distribution. That maximum is a mode, rather than a mean. When a normal is skewed, the mean of that normal does not move. The line from the mean’s position on the distribution’s baseline up to that maximum slopes, meeting the baseline at some angle θ. Ultimately, the second-order differential equations drive that θ. Given that I have no idea where the mean is, I omitted θ from this diagram.
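
One way to formalize that θ, my reading of the construction rather than a formula from the post or the article: if the line runs from the mean’s position on the baseline up to the peak at the mode, then

\[ \tan\theta = \frac{f(\mathrm{mode})}{\lvert\, \mathrm{mode} - \mathrm{mean} \,\rvert}, \]

so a taller peak steepens θ and a wider mean-to-mode gap flattens it.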

In the 2015 book’s version, the left side of the distribution is steeper, almost vertical, which generates a curvature circle closer to the base axis, a tighter curvature, aka a larger value for Κ1 (smaller radius); the right side is flatter, which generates a looser curvature, aka a smaller value for Κ2 (larger radius). Note that curvature Κ = 1/r.
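
To make the curvature talk concrete, here is a minimal sketch, not from the post, that uses SciPy’s skew-normal as a stand-in for a skewed density and computes the curvature Κ(x) = |f''(x)| / (1 + f'(x)²)^(3/2) numerically on each side of the mode. The shape parameter a = 4 and the grid are assumptions for illustration only.

    import numpy as np
    from scipy.stats import skewnorm

    a = 4.0                                   # skewness parameter (assumed)
    x = np.linspace(-4.0, 6.0, 20001)
    f = skewnorm.pdf(x, a)                    # stand-in for a skewed density

    d1 = np.gradient(f, x)                    # f'
    d2 = np.gradient(d1, x)                   # f''
    K = np.abs(d2) / (1.0 + d1**2) ** 1.5     # curvature of the graph of f

    mode = x[np.argmax(f)]

    # As a proxy for the "shoulder" curvature, take the largest curvature of
    # the convex stretch (f'' > 0) on each side of the mode.
    for name, side in (("short-tail side", x < mode), ("long-tail side", x > mode)):
        shoulder = side & (d2 > 0)
        k = K[shoulder].max()
        print(f"{name}: K = {k:.3f}, radius r = 1/K = {1/k:.3f}")

Swap in your own density and the printed radii give you the curvature circles discussed above.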

[Figure: tw3]

So both figures can’t be correct. How did that happen? Still, for my purposes, this latter one is more interesting because it shows a greater lag when transitioning between phases in the technology adoption lifecycle, and in firms themselves, particularly firms unaware that they are undergoing a phase transition. In Moore’s bowling alley, where the Poisson games occur, the phase transitions are more symmetric and faster. In the transition between the vertical and the IT horizontal, the phase transition can be slower and less symmetric. In the transition between early and late main street, the phase transition is fast. Most firms miss their quarterly guidance here, so they undergo a black swan, which is surprising, since a firm should know when it is approaching having sold 50% of its addressable market. A firm should also know when it is approaching having sold 74% of its addressable market, so it won’t hear from the Justice Department or the EU. Of course, most firms never get near that 74% number.

[Figure: talc-w-t-w]

Here I aligned a Tracy-Widom distribution with each technology adoption lifecycle phase boundary. I have no idea about the slopes of the S-curves, the second-order differential equations. Your company would have its own slopes. Your processes would give rise to those slopes, so collect your data and find out. Knowing your rates would be useful if you were continuously doing discontinuous innovation.
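
If you do collect that data, a minimal sketch of estimating a phase’s slope might look like the following. The logistic form, the quarterly cadence, and the seat counts are all assumptions for illustration, not numbers from any real firm.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, L, k, t0):
        """Cumulative adoption: ceiling L, slope k, midpoint t0."""
        return L / (1.0 + np.exp(-k * (t - t0)))

    # Hypothetical cumulative seats sold, one value per quarter.
    t = np.arange(12, dtype=float)
    seats = np.array([3, 5, 9, 16, 28, 45, 66, 84, 95, 103, 107, 109], dtype=float)

    (L, k, t0), _ = curve_fit(logistic, t, seats, p0=[seats.max(), 1.0, t.mean()])
    print(f"estimated ceiling ~ {L:.0f} seats, slope k ~ {k:.2f}/quarter, "
          f"midpoint at quarter {t0:.1f}")

The fitted k is the slope you would compare phase to phase; the midpoint is a rough marker for where the phase transition sits.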

I’ve labeled the phases and events somewhat differently from Moore. TE is the technical enthusiast layer. They don’t disappear at any point in the lifecycle; they are always there, although they do lose focus in the software-as-media model during the vertical phase of the adoption lifecycle, and likewise in all of the late phases. BA is the bowling alley. Keeping your six early-adopter (EA) channels of the bowling alley full is key to continuously doing discontinuous innovation. V is the verticals. There would be one vertical for each early adopter, the early adopter being an executive in a vertical market. IT H is the IT horizontal market. Early main street (EM) is another term for the IT horizontal. If we were talking about a technology other than computing, there would still be a horizontal organization servicing all departments of an enterprise. An enterprise participates in several horizontal markets. Late main street (LM) is also known as the “consumer market,” where we are today: a market that orthodox business practice evolved to fit, a market where innovation is continuous, managerial, and, worse, “disruptive” in the Christensen way (cash/competition). The technical enthusiast and bowling alley phases are wonderfully discontinuous and disruptive in the positive, Foster way (economic wealth/beyond the category). L is the laggard, or device, phase. P is the phobic, or cloud, phase. In the phobic phase, computing disappears.

The technical enthusiasts will have their own Tracy-Widom distributions, Moore’s chasm being one. Another happens when the focus changes from the carried to the carrier in the vertical phase. Yet another happens when the bowling alley applications are aggregated into a carrier-focused, geek/IT-facing product sold in the tornado. The move to the cloud writes another. An M&A would cause another as well; that product would sell in the second (merger) tornado (not shown in the figure).

The first second-order differential equation accounts for the work it takes to prepare for the phase transition. The second accounts for the operationalized work done during the phase. The diagram is not always accurate in this regard.

More than enough. Enjoy.

Geez, another edit, but it’s still overpacked.