The Cook’s Customer

March 17, 2017

I was perusing Anthony Bourdain's Appetites, a cookbook. In it, he asks a few questions about his customers, and he is shockingly honest about the answers.

What is it that “normal” people do? What makes a “normal” happy family? …

I had little clue how to answer these questions for most of my working life, as I'd been living it on the margins. I didn't know any normal people. From age seventeen on, normal people have been my customers. They were abstractions, literally shadowy silhouettes in the dining rooms of wherever it was that I was working at the time. I looked at them through the perspective of the lifelong professional cook and chef—which is to say, as someone who did not have a family life, who knew and associated only with fellow restaurant professionals, who worked while normal people played and played when normal people slept.

Do those of us in the software community have this problem? Are our customers still abstractions even if we've met them, spoken with them, engaged them in an ethnographic field study? Does their corporate culture look like our culture? Is it true that we work while they sleep?

Do they use the same software we use? No, of course not. Do they seek value where we seek it? No, of course not. Do our customer personas point out the differences between us and them? This gets harder with the technical enthusiasts because they seem much more like us than our users or our economic buyers.

Where do we set the closeness of our abstraction, the gap between an atomic bomb and a hypodermic needle? Too often we go with the atomic bomb.

Make no mistake, I'm asking product managers, but really I'm asking the developers, because we leave this in their hands. And when we "fully load" those developers to capture all the effort we can, are we not failing to leave time to know the customer, know the carried content, or even know the carrier? We do tend to make time for our developers to know the carrier.

Developers don't come to us as experts in our carried content, our users, or our economic buyers. They need to learn those things, which reach well beyond the "learning" the Agilists mention and experiment towards: was it used; was it used sooner rather than later (was it successfully learned); does it deliver the value it was supposed to deliver to the entity it was supposed to be delivered to?

Once those questions get answered, tighten the limit so the gap becomes a fence rather than a borderland, and answer the questions again. Find the questions tied to the scale of the gap.

After working with too many developers who thought their users were just like them, I'm sure your answers will surprise you, just as Anthony's answers surprised him. Enjoy.

Kurtosis Risk

January 2, 2017

In the research for my previous posts on kurtosis, I ran across mentions of kurtosis risk. I wasn't up to diving into it then and getting too far away from what I was writing about in those posts. Then mc spacer retweeted More On Skew and Kurtosis. I reread the post and decided to conquer kurtosis risk. The exploration was underway.

One of the things they don't teach you about in that intro stats class is the logical proof of what we are doing. We take a mean without checking its normality. We go forward with the normal distribution as if it were normal, ordinary, usual, typical, non-problematic. Then we meet the data, and it's anything but normal. When meeting the data, we also meet skew risk and kurtosis risk. It's like meeting your spouse-to-be's mom. Usually, you meet your spouse-to-be's dad at the same time. Yeah, they all show up at the same time.

You might get taught various ways to approximate the mean when you have fewer than 30 data points, aka when your sample is too small. That space of fewer than 30 data points is where skew risk and kurtosis risk happen. The sample statistics drive around a while, getting close to the as yet unknown population mean, equalling it a few times, circling it, and finally pulling in and moving in. Our collection of sample means eventually approximates the population mean.
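
To make that driving around concrete, here is a minimal sketch, my own illustration rather than anything from the post, that draws repeated small samples from a skewed population and watches the spread of the sample means shrink as the sample size grows. It assumes numpy is installed.

```python
# A minimal sketch (my own illustration): with small samples the sample mean
# "drives around" the population mean; larger samples pull the estimate in.
import numpy as np

rng = np.random.default_rng(5)
population_mean = 10.0                      # exponential(scale=10) has mean 10
for n in (5, 15, 30, 300):
    sample_means = rng.exponential(scale=population_mean, size=(2000, n)).mean(axis=1)
    print(f"n={n:3d}  mean of sample means {sample_means.mean():6.2f}  "
          f"spread of sample means {sample_means.std():5.2f}")
```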

In artificial intelligence, back in the old days when it was important to think like a human, back in the days of expert systems, we encoded the logic in augmented transition networks. A single transition would look like IF StopSign, THEN Stop. Of course, that's not a network yet. That would wait until we wrote another, IF YieldSign, THEN Yield. That's just another transition. Those two transitions would, with some additional infrastructure, become a network, thus they would become an augmented transition network. To make this easier, we used a declarative language, rather than a procedural one. Prolog gives you the widest infrastructure. Prolog lets you present it with a collection of transitions, and it builds the proof to achieve the goal. It builds a tree and trims the inconsistent branches.
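
As a rough illustration, and in Python rather than Prolog, here is a minimal sketch of the same idea: transitions declared as IF/THEN pairs, and a search that extends branches and drops the ones that never reach the goal. The signs and the goal are made up for the example.

```python
# A hypothetical transition network: IF/THEN pairs, not a real ATN or Prolog engine.
rules = [("StopSign", "Stop"), ("YieldSign", "Yield"),
         ("Stop", "Proceed"), ("Yield", "Proceed")]

def prove(fact, goal, path=()):
    """Build the tree of transitions; branches that never reach the goal are trimmed."""
    path = path + (fact,)
    if fact == goal:
        return [path]
    proofs = []
    for antecedent, consequent in rules:
        if antecedent == fact:                       # this transition fires
            proofs += prove(consequent, goal, path)  # extend the branch
    return proofs                                    # empty list: an inconsistent branch

print(prove("StopSign", "Proceed"))  # [('StopSign', 'Stop', 'Proceed')]
```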

We've seen that before: building the tree and trimming the inconsistent branches. We use generative grammars to build a decision tree for a potential product, and constraints to trim that decision tree, so we arrive at the product fit for the moment. There is a logical argument to our product.

Similarly, there is a logical argument, or a proof, to our statistical analysis. There in that proof of our statistical analysis, our skew and kurtosis risk emerge.

Statistics happen after our data is collected. We think in terms of given (IF or What If, WIF) this data, then these statistics. We don't think about that driving around, that looking for the population mean, as a process. Statistics is static, excepting the Bayesian approach. Logic insists. The proof frames everything we do. When computing a mean, the proof is going to insist on normality. But this logical insistence is about the future, which means we are actually doing an AsIf analysis. We imagine that we checked for normality. We imagine that we know what we are doing, since nobody has told us any different yet. An AsIf analysis imagines a future and uses those imagined numbers as the basis for an analysis. In that imagining of the future, we are planning, we are allocating resources, we are taking risks. With samples, those risks are skewness and kurtosis risks.

I've delayed defining skewness risk until the very end of this post. Once you understand kurtosis risk, skewness risk is nearly the same thing, so bear with me.

valid-distribution

We will use the triangle model, which represents decision trees as triangles, to represent our proof.

In this figure, the root of the decision tree is at the bottom of the figure. The base of the tree is at the top of the figure. In the triangle model, the base of the triangle represents the artifact resulting from the decision tree, or proof.

Here we paired the distribution with its proof. A valid proof enables us to use the distribution. In some cases, the distributions can be used to test a hypothesis. An invalid proof leads to an invalid distribution which leads to an invalid hypothesis. Validity comes and goes.

OK, enough meta. What is kurtosis risk?

When we assert/imagine/assume (AsIf) that the distribution is normal, but the actual data is not normal, we’ve exposed ourselves to kurtosis risk. We’ve assumed that the sample mean has converged with the population mean. We’ve assumed that we have a legitimate basis for hypothesis testing. Surprise! It hasn’t converged. It does not provide a basis for hypothesis testing.
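
A quick way to see whether you are exposed is to test the AsIf before relying on it. This is a minimal sketch, assuming numpy and scipy are available; the lognormal sample stands in for data that looks fine until you check it.

```python
# A minimal sketch: check the normality assumption before trusting the mean.
# D'Agostino's K^2 test combines skewness and kurtosis, exactly where the risks hide.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.75, size=40)   # stand-in for real data

print("skewness         ", stats.skew(data))
print("excess kurtosis  ", stats.kurtosis(data))      # 0 for a true normal
stat, p = stats.normaltest(data)                      # D'Agostino & Pearson test
print("normality p-value", p)                         # small p: the AsIf was wrong
```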

As an aside, WIFs (What IFs) are what spreadsheets are for. Pick a number, any number to see what the model(s) will do. AsIfs come from scenario planning, a process that is much more textual than numeric. A scenario is an outcome from various qualitative forces.

Back to it. Google sent me to Wikipedia for the above definition of kurtosis risk. I drew the definition and kept on thinking. This picture is the final result of that thinking.

kurtosis-risk

We start with the top-down, footprint view of a normal distribution, a circle. The brown vertical line extends from the green cross on the right, representing the mean, median, and mode, which are the same for distributions that are normal.

Then, we see that our actual data is an ellipse. The blue vertical line extends from the green cross on the left. That line is labeled as the mode of the skewed normal. In previous discussions of kurtosis, we used kurtosis to describe the tails of the distribution. In some definitions, kurtosis was seen as describing the peakedness of the distribution, whereas we used it to describe the core of the distribution.

I drew a line through the two means. This line gave us two tails and a core. I should have drawn the core so it actually touched the two means. Then, I projected the two tails onto an x-axis so I would have a pair of lengths, the cosines of the original lengths. That one is longer and the other shorter is consistent with previous discussions of kurtosis.

A note on the core: I've taken the core to be the most undifferentiated space under the curve. This is where no marketer wants to get caught. The circle that serves as the footprint of the normal is tessellated by some scheme. A shape in that tessellation represents the base of a histogram bar. From that bar, each adjacent histogram bar is exactly one bit different from that bar. The resolution of the shapes can be any given number of bits different, but that gets messy and, in the 3D graphic tessellation sense, patchy. A string "00000000" would allow its adjacent ring of histogram bars to contain up to eight different bars representing eight unique differences. "Ring" here is descriptive, not a reference to group theory. The histograms of the normal distribution encode all available differences. Refinements work outward from the undifferentiated mean to the highly differentiated circle of convergences, aka the perimeter of the normal distribution's footprint. We are somewhere under the curve. So are our competitors. So are our prospects and customers.
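
The one-bit adjacency is easy to check. A minimal sketch, my construction rather than anything tied to a particular tessellation scheme:

```python
# An 8-bit "bar" has exactly eight neighbors that differ from it by a single bit,
# the adjacent ring of histogram bars described above.
base = "00000000"
neighbors = [base[:i] + ("1" if base[i] == "0" else "0") + base[i + 1:]
             for i in range(len(base))]
print(len(neighbors))   # 8
print(neighbors)        # ['10000000', '01000000', ..., '00000001']
```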

An ordinary interpretation of a peak with high peakedness is uniqueness or focus. That's a high kurtosis value. A peak that's less peaked, rounded, smoother is less unique, less focused, possibly smudged by averaging, tradeoffs, and gaps. It all shows up in the histogram bars.

The other thing that shows up is the differences that are our product over the life of the product. A given iteration would have a particular shape. Subsequent iterations would build a path under the histograms that constitute the normal. Customers would cluster around different iterations. A retracted feature would show up as defections to competitors with different configurations more consistent with the cognitive processes of the defectors, our “once upon a time” users. Use tells. Differentiation segments.

So I attend to the tessellations and shapes of my histogram bars, to the sense of place, and to movement.

I then projected the core onto the sphere represented by the circle. Yes, the same circle we used to represent the footprint of the normal distribution. The core then appears as an ellipse. It should be closer to the pole; then it would be smaller. This ellipse should be the same shape as the top of the ellipsoid, containing the ellipse of the data, that the sphere is topologically deformed into.

Then, I drew a vector along the geodesic from the pole to the elliptical projection of the core to represent the force of topological deformation. I also labeled the circle and ellipse view to show how the deformation would be asymmetrical. The right is much less deformed than the left.

summary-veiw

Next, I put the kurtosis in the summary view of a box chart using those lengths we found drawing a line through the two means. This box chart is tied to a view of the tails and kurtoses drawn as curvatures. As for the slopes of the distribution's actual curve, they are approximations.

So what is kurtosis risk? When your sample means have not yet converged to the population mean, you are exposed to kurtosis risk. Or, as Wikipedia puts it, when you assert that the data is normally distributed, but it isn't, that assertion gives rise to kurtosis risk.

And, what of skew risk? You expose yourself to skew risk when you assert that your data is symmetric, when in fact, it isn’t. In the math sense, skew transforms the symmetric into the asymmetric and injects the asymmetries into the curvatures of the kurtoses constraining the tails along the radiant lines in the x-axis plane.

This business of the assertion-base for statistics involves constant danger and surprise. A single inconsistent assertion in the middle of the proof can invalidate much of the formerly consistent proof of a once useful analysis. Learn more, be more surprised. Those intro classes blunt the pointed sticks archers call arrows. Before they were pointed, they were blunt–dangerous in different ways. Enjoy.

The Hyperbolic No

December 25, 2016

When we move physical constraints, we innovate discontinuously. When we innovate discontinuously, we create economic wealth as a sideband to making a lot of cash, and we create institutions and careers. We haven’t been doing that lately. Instead, we innovate for cash alone, and we cash in our economic wealth for cash and never replace that economic wealth.

The discontinuity at the core of a discontinuous innovation cannot be overcome by iterating beyond current theory. We need a new theory. That new theory has its own measures and dimensions. These sit at the invention layer of innovation. They cause a discontinuity at the commercialization layer. That discontinuity is in the populations being served. The existing population says no. The nascent adopting population says, or will come to say, yes. Polling is fractured by discontinuities.

When we do our financial analysis, the discontinuous innovation generates numbers that fail to motivate us to jump in and make it happen. Why? It's a question that I've spent years looking at. I've blogged about it previously as well. My intuition tells me that the consistent underreporting is systematic and due to the analysis. My answer revolves around geometry.

We do our analyses in terms of a Euclidean geometry, but our realities are multiple, and that Euclidean reality is fleeting. Our Euclidean analysis generates numbers for a hyperbolic space, underreporting the actual long-term results. Results in a hyperbolic space appear smaller and smaller as we tend to infinity or the further reaches of our forecasted future. Hyperbolic space is the space of discontinuous innovation.

Once a company achieves a six-sigma normal, or reaches the mean under the normal we use to represent the technology adoption lifecycle, or in other terms, once a company has sold fifty percent of its addressable and allocated market share, the company leaves the Euclidean space and enters the spherical space, where many different financial analyses of the same opportunity give simultaneous pathways to success. This is where a Euclidean analysis would report some failures. Again, a manifestation of the actual geometry, rather than the numbers.

Maps have projections. Those projections have five different properties used in different combinations to generate a graphical impression. Explore that here. Those projections start with the same numbers and tell us a different story. Geometries do the same thing to the numbers from our analysis. Our analysis generates an impression of the future. The math is something mathematicians call L2. We treat L2 as if it were Euclidean. We do that without specifying a metric. It’s linear and that is enough for us. But, it’s not the end of the story.

The technology adoption lifecycle hints at a normal, but the phases decompose into their own normals. And, the bowling alley is really a collection of Poisson distributions that tend to the normal and aggregate to a normal as well. So we see a process from birth to death, from no market population to a stable market population. Here as well, the models change geometries.
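
That tendency is easy to see. Here is a minimal sketch, my own illustration with made-up lane counts, assuming numpy and scipy: as Poisson lanes aggregate, skewness and excess kurtosis shrink toward the normal's zero.

```python
# A minimal sketch: aggregating Poisson "lanes" tends toward the normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
for lanes in (1, 6, 36):
    # each lane is a Poisson stream; the aggregate is Poisson with a larger mean
    total = rng.poisson(lam=3.0, size=(lanes, 10_000)).sum(axis=0)
    print(f"{lanes:2d} lane(s): skewness {stats.skew(total):5.3f}  "
          f"excess kurtosis {stats.kurtosis(total):5.3f}")
```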

I’ve summarized the geometries in the following figure.

geometres

We start at the origin (O). We assert some conditional probability to get a weak signal or a Dirac function. We show a hyperbolic triangle, a Euclidean triangle, and a spherical triangle. Over time, the hyperbolic triangle gains enough angle to become Euclidean. The Euclidean triangle then gains enough angle to become spherical. The angle gain occurs over the technology adoption lifecycle, not shown here, parallel to the line through the origin.

When we look at our numbers we pretend they are Euclidean. The hyperbolic triangle shows us how much volume is missed by our assumption of Euclidean space.

hyperbolic

Here I drew some concentric circles that we will come back to later. For now, know that the numbers from our analysis report only on the red and yellow areas. We expected that the numbers reported the area of the Euclidean triangle.

euclidean

The green triangle is the Euclidean triangle that we thought our numbers implied. In a six-sigma normal, the numbers from the analysis would be correct. Less than six sigma or more than six sigma, the numbers would be incorrect.

spherical

In the spherical geometry, the problem is subtly different. We keep thinking in Euclidean terms, which hides the redundancies in the spherical space. Here, competitors have no problem copying your differentiation, even to the point of coding around your patent. You have more competition than expected and end up with less market as a result. The risks are understated.

hyperbolic-tessilation

To reiterate the problem with the hyperbolic space, we can look at a hyperbolic tessellation.

euclidean-tessilation

In a Euclidean tessellation, each shape would be the same size.

The differences in impressions generated by the hyperbolic view and the Euclidean view should be obvious. We’ve been making this mistake for decades now.

In a spherical tessellation, the central shape would be the smallest and the edge shapes would be the largest.

Here, in a hyperbolic geometry, the future is at the boundary of the circle. Numbers from this future would appear to be very small.

In a factor analysis view, the first factor would be represented by the red polygon. The second factor would be represented by the orange polygons. The third factor would be represented by the yellow polygons. The edge of the circle lies at the convergence of the long tail with the ground axis. The edge would be lost in the definition of the limit. The convergence is never achieved in a factor analysis.

Building a factor analysis over those tessellations tells us something else. Factor analyses return results from hyperbolic space routinely. The first factor is longer and steeper. The hyperbolic tessellation would do that. Neither of the other spaces would. So when you do a factor analysis, you may be engaging in more geometric confusion.

Notice that the spherical geometry of the typical consumer business is, like most business people, biased to optimism. The future is so big. But, to get to those numbers, you have to escape the Euclidean space of the very beginnings of the consumer facing startup.

With a discontinuous innovation and its hyperbolic space, the low numbers and the inability to get good numbers for the far future usually convince us not to go there, not to launch, so we don't. But we'd be wrong. Well, confused.

Economists told us that globalism would work if we didn't engage in zero-sum thinking. But that is what we did. We, the herd, engaged in zero-sum thinking and doing. We innovated continuously, which has us ignoring the economic wealth vs. cash metric. We, in our innovation songs, confuse the discontinuously innovative past of the Internet with the continuously innovative present, or worse, with disruption, thinking we'd get the same results. This even when the VCs are not confused. They deal smaller, much smaller now than back then.

Wallowing in cash doesn’t replace the economic wealth lost to globalism. We can fix this in short order without the inputs and errors from our politicians. But, we have to innovate discontinuously to replace that lost economic wealth. It’s time to say yes in the face of the hyperbolic no. We can create careers and get people back to work.

The Shape Of Innovation

November 26, 2016

In the past, I’ve summarized innovation as a decision tree. I’ve summarized innovation as divergence and convergence, generation and tree pruning. So I drew this figure.
context-10

The generative grammar produces a surface. The constraints produce another surface. The realization, represented by the blue line, would be a surface within the enclosed space, shown in yellow. The realization need not be a line or flat surface.

In CAD systems, the two surfaces can be patched, but the challenge here is turning the generative grammar into a form consistent with the equations used to define the constraints. The grammar is a tree. The constraints are lines. Both could be seen as factors in a factor analysis. Doing so would change the shape of the generated space.

context-06

In a factor analysis, the first factor is the longest and steepest. The subsequent factors are flatter and shorter.

A factor analysis produces a power law.

A factor analysis represents a single realization. Another realization gives you a different factor analysis.

context-07

When you use the same units on the same axes of the realizations, those realizations are consistent or continuous with each other. These are the continuities of continuous innovation. When the units differ in more than size between realizations, when there is no formula that converts from one scale to another, when the basis of the axes differ, the underlying theories are incommensurate or discontinuous. These are the discontinuities of discontinuous innovation.

context-11

The surfaces contributing to the shape of the enclosed space can be divided into convex and concave spaces. Convex spaces are considered risky. Concave spaces are considered less risky. Generation is always risky. The containing constraints are unknown.

context-17

The grammar is never completely known and changes over time. The black arrow on the left illustrates a change to the grammar. Likewise, the extent of a constraint changes over time, shown by the black arrow on the right. As the grammar changes or the constraints are bent or broken, more space (orange) becomes available for realizations. Unicode, SGML, and XML extended the reach of text. Each broke constraints. Movement of those intersections moves the concavity, the safe harbor in the face of generative risks. As shown, the concavity moved up and to the left. The concavity abandoned the right. The right might be disrupted in the Foster sense. The constraints structure populations in the sense of a collection of pragmatism steps. Nothing about this is about the underserved or disruption in the Christensen sense.

The now addressable space is where products fostering adoption of the new technology get bought.

The generative grammar is a Markov chain. Where the grammar doesn’t present choice, the chain can be thought of as a single node.

context-12

The leftmost node is the root of the generative grammar. It presents a choice between two subtrees. Ultimately, both branches would have to be generated, but the choice between them hints at a temporal structure to the realization, and shifting probabilities from there.
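
Here is a minimal sketch of that reading, with made-up nodes and probabilities rather than the ones in the figure:

```python
# A generative grammar read as a Markov chain. Where a node offers no choice,
# the chain is effectively a single node.
import random

random.seed(3)
grammar = {
    "root":      [("subtree_A", 0.7), ("subtree_B", 0.3)],  # the choice at the root
    "subtree_A": [("leaf_A1", 1.0)],                        # no choice: a single node
    "subtree_B": [("leaf_B1", 0.5), ("leaf_B2", 0.5)],
}

def realize(node="root"):
    """Walk the chain until a leaf; one walk is one realization."""
    path = [node]
    while node in grammar:
        choices, weights = zip(*grammar[node])
        node = random.choices(choices, weights=weights)[0]
        path.append(node)
    return path

print(realize())   # e.g. ['root', 'subtree_A', 'leaf_A1']
```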

New grammatical structures would enlarge the realization. Grammars tend to keep themselves short. They provide paths that we traverse or abandon over historical time. The realization would shift its shape over that historical time. This is where data mining could apply.

When the constraints are seen from a factor analysis perspective, the number of factors is few in the beginning and increases over time. This implies that gaps between the realization and the factors would exist and diminish over time. Each factor costs more than the factor before it. Factors add up to one, and then become a zero-sum game. For another factor to assert itself, existing factors would have to be rescaled.

Insisting on a factor analysis perspective leaves us with trying to find a factor designated as the root constraint, and then defining the face-offs: this subgrammar vs. this collection of constraints.

context-18

Each would have rates, thus differential equations. Each would be a power law. So in our system there would be four differential equations and four power laws. There would also be four convergences. These would be reflected in the frequencies-of-use histograms.

Notice that nowhere in this discussion was innovation based on an idea from management. The ideas were about enlarging the grammar, aka the ontological sortables, and the breaking or bending of constraints. When a constraint built into a realization breaks, Goldratt told us that the realization moves some distance to the next constraint. These efforts explore the continuities and discontinuities of the possible innovations. Productization is the next step in fostering adoption.

As always, enjoy.

Doing Discontinuous Innovation

November 14, 2016

Discontinuous innovation creates economic wealth. Continuous innovation captures cash. Economic wealth, unlike what the financial services companies tell us with their wealth management services, is more than a pile of cash. Cash is the purview of the firm.  Economic wealth is the purview of the economy as it reaches well beyond the firm. Cash is accounted for where economic wealth is not.

Notice that no firm has an imperative to create economic wealth. To the contrary, managers today are taught to convert any economic wealth they encounter into cash. They do this with the assumption that that economic wealth would be put back, but that has yet to happen. Globalism was predicated on using the cash saved to create new categories, new value chains, new careers—economic wealth. Instead, we sent it to Ireland to avoid taxes. Oh well, we let the tail wag the dog.

Likewise, we are taught to lay off people, because we can put that money to better use, but then we don’t put it to better use. Those people we laid off  don’t recover. They work again, but they don’t recover. Oh, well. This is where continuous innovation takes you. Eventually, it is moved offshore. The underlying carrier technologies are lost as well, so those jobs can’t come back. The carrier technologies will evolve elsewhere.

I could go on. I did, but I deleted it.

Anyway, I've been tweeting about our need to create new economic wealth as the solution to globalism. Instead, the rage gets pushed to the politicians, so we've seen where that got us. The politicians have no constructive solution. We can solve this problem without involving politicians. We can innovate in a discontinuous manner. As a result of those tweets, a product manager who follows me asked, so how do we innovate discontinuously?

I’ll review that here.

  1. Begin with some basic research. That kind of research bends or breaks a constraint on the current way things are done in that domain.

Samuel Arbesman's "The Half-Life of Facts" gives us a hint in the first chapter with a graph of the experiments on temperature. Each experiment resulted in a linear range resulting from the theory used to build the measurement system that underlay the experiment. The experiments gave us a dated collection of lines. The ends of those lines were the ends of the theories used to build the experiments. You couldn't go from one line to the next with a single measurement device, with a single theory. You had a step function on your hands after the second experiment.

The lines on the right side of the graph were replaced with later lines, later experiments. The later lines were longer. These later lines replaced the earlier step functions with another step function. A single measurement device could measure more. The later theory could explain more. The later theory broke or bent a constraint. The earlier theory did so as well when you consider that before the earliest theory, there was no theory, so nothing could be done. As each theory replaced the prior theory more could be done. Value was being delivered. That value escaped the lab once a manager got it sold into a market beyond the lab, aka innovated.

  2. Build that basic research into an infrastructural platform, into your technology/carrier layer, not into a product/carried layer. Do not even think about a product yet.

Moore's technology adoption lifecycle starts with a technology. After step 2, that's what you have. You have a technology. Products get a technology adopted. The technical enthusiasts are the first population that needs to be addressed. This population is the geeks. They insist on free. They insist on play. They refer technologies to their bosses.

  3. Explore what vertical industry you want to enter, then hire a rainmaker known in that vertical. This rainmaker must be known by the executives in that vertical. This rainmaker is not a sales rep calling themselves a rainmaker.
  4. When the rainmaker presents you with a B2B early adopter, their position in the vertical matters. Their company must be in the middle of the industry's branch/subtree of the industrial classification tree. They should not be on a leaf or a root of the branch/subtree. This gives you room to grow later. Growth would be up or down and not sideways to peers of the same parent in the subtree.
  5. That B2B early adopter's vertical must have a significant number of the seats and dollars.
  6. That early adopter must have a product visualization. This product visualization should be carried content, not carrier. Carrier functionality will be built out later in advance of entering the IT horizontal. Code that product visualization. Do not code your idea. Do not code before you're paid. And code it in an inductive manner as per "Software by Numbers." Deliver functionality/use cases/jobs to be done in such a way that the client, the early adopter, is motivated to pay for the next unit of functionality.
  7. Steps 3-6 represent a single lane in Moore's bowling alley. Prepare to cross the chasm between the early adopter and the more pragmatic prospects in the early adopter's vertical. Ensure that the competitive advantage the early adopter wanted gets achieved. The success of the early adopter is the seed of your success. Notice that most authors and speakers talking about crossing the chasm are not crossing the chasm. There is no chasm in the consumer market.
  8. There must be six lanes before you enter the IT horizontal. That would be six products, each in its own vertical. Do not stay in a single vertical. So figure out how many lanes you can afford and establish the timing of those lanes. Each lane will last at least two years, because you negotiate a period of exclusivity for the client in exchange for keeping ownership of your IP.
  9. Each product will enter its vertical in its own time. The product will remain in the vertical market until all six products in the bowling alley have been in their verticals at least two years. Decide on the timing of the entry into the horizontal market taking all six products into consideration. All six will be modified to sum their customer/user populations into a single population, so they can enter the IT horizontal as a carrier-focused technology. The products will shed their carried functionality focus. You want to enter the horizontal with a significant seat count, so it won't take a lot of sales to win the tornado phase at the front of the IT horizontal.
  10. I'll leave the rest to you.

For most of you, it doesn't look like what you're doing today. It creates economic wealth, will take a decade or more, requires larger VC investments and returns, and it gets a premium on your IPO, unlike IPOs in the consumer/late market phases of the technology adoption lifecycle.

One warning. Once you’ve entered the IT horizontal, stay aware of your velocity as you approach having sold half of your addressable market. The technology adoption lifecycle tells us that early phases are on the growth side and that late phases are on the decline side of the normal curve.

There needs to be a tempo to your discontinuous efforts. The continuous efforts can stretch out a category's life and the life of the companies in that category. Continuous efforts leverage economies of scale. A discontinuous effort takes us to a new peak from which continuous efforts will ride down. Discontinuous innovations must develop their own markets. They won't fit into your existing markets, so don't expect to leverage your current economies of scale. iPhones and Macs didn't leverage each other.

Don't expect to do this just once. Apple has had to do discontinuous innovations three or four times now. They need to do it again now that iPhones are declining. Doing it again and again means that laying people off is forgetting how to do it again. It's a matter of organizational design. I've explored that problem. No company has to die. No country has to fall apart due to the loss of its economic wealth.

Value Projection

November 7, 2016

I've often used the triangle model to illustrate value projection. In a recent discussion, I thought that a Shapley value visualization would work. I ended up doing something else.

We’ll start by illustrating the triangle model to show how customers use the enabling software to create some delivered value. The customer’s value is realized by their people using a vendor’s software. The vendor’s software provides no value until it is used to create the value desired by the customer.

value-projection-w-triangle-model-01

The gray triangle represents the vendor's decisions that resulted in the software that they sold to the customer. The base of that triangle represents the user interface that the customer's staff will use. Their use creates the delivered value.

The red triangle represents the customer's decisions that resulted in that delivered value. The software was a very simple install-and-use application. Usually, configurations are more complicated. Other software may be involved. It may take multiple deliverables to deliver all the value.

value-projection-w-triangle-model-02

Here we illustrate a more complicated situation where a project with several deliverables and another vendor’s product was needed to achieve the desired value.

When a coalition is involved in value delivery, the Shapley value can be used to determine the value each member of the coalition should receive relative to their contribution to the value delivered.

shapely-value

Here I used a regular hexagon to represent six contributors that made equal contributions. The red circle represents the value delivered.

The value delivered is static, which is why I rejected this visualization. The effort involves multiple deliverables.
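
Even though I rejected the visualization, the computation itself is small. Here is a minimal sketch with a made-up three-member coalition and a made-up value function, not the six equal contributors in the figure.

```python
# Shapley values: each member's average marginal contribution over all join orders.
from itertools import combinations
from math import factorial

players = ["A", "B", "C"]
value = {frozenset(): 0, frozenset("A"): 4, frozenset("B"): 3, frozenset("C"): 2,
         frozenset("AB"): 9, frozenset("AC"): 7, frozenset("BC"): 6,
         frozenset("ABC"): 12}                    # hypothetical value per coalition

def shapley(player):
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for r in range(len(others) + 1):
        for subset in combinations(others, r):
            s = frozenset(subset)
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            total += weight * (value[s | {player}] - value[s])
    return total

for p in players:
    print(p, round(shapley(p), 3))                # shares sum to the full coalition's value
```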

The next thing we had to handle was representing the factors involved in that value delivery. Those factors can be discovered by a factor analysis.

factor-analysis

A factor analysis allocates the variance in the system to a collection of factors. The first factor is the longest and steepest factor. The first factor explains more variance than any of the subsequent individual factors. The second factor is shorter and flatter than the first factor, but longer and steeper than the third. The third factor is flatter and shorter than the second factor.

Even without the details, roughly 80 percent of the variance is covered by the first three factors. Additional factors can be found, but they become increasingly expensive to discover.

For our purposes here we will stop after the first three factors or after the first 80 percent of variance. We will allocate some of the delivered value to those factors.
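
Here is a minimal sketch of that allocation, using PCA as a stand-in for a factor analysis, with synthetic data built around three hidden drivers; it assumes numpy and scikit-learn are available.

```python
# A minimal sketch: the first factor explains the most variance, each later factor
# explains less, and the first three often cover the bulk of it.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
latent = rng.standard_normal((500, 3)) * [3.0, 2.0, 1.0]    # three hidden drivers
loadings = rng.standard_normal((3, 8))
data = latent @ loadings + 0.3 * rng.standard_normal((500, 8))

pca = PCA(n_components=5).fit(data)
print(np.round(pca.explained_variance_ratio_, 3))            # longest, steepest first
print("first three factors cover",
      round(pca.explained_variance_ratio_[:3].sum() * 100, 1), "percent of the variance")
```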

Putting all of this together, we get the following visualization.
value-projection

Here the vendor is at the center of the rings. The rings are organized by the project's deliverables along the project's timeline. The first ring represents the UI of the vendor's application. The distance between this ring and the origin of the circle represents the time it took to deliver the UI. That UI incorporates the factors explaining the relative importance of the delivered elements of the software. The white area in the vendor ring, adjacent to the purple factor, represents the 20 percent of variance or importance that would be allocated to subsequent factors beyond the first three.

The gray rings represent the time gaps between the install. The second customer ring represents the efforts to configure the application. The third ring represents further implementation efforts. The customer’s efforts might involve using an API to extend the functionality of the application. This is shown with the orange and red segments. The extension is organized as a stack crossing the customer’s rings.

The radius of the circles represents time. That being the case, we don’t need the left side of the circles. Time starts at the origin and moves outward.

Different vendors could be represented with different rings, or some allocation of the existing rings. The vendors themselves have ranks relative to the delivery of the ultimate value.

I’d appreciate some comments. Enjoy.

Implicit Knowledge

October 24, 2016

One of the distinctions I've been making out on Twitter is the difference between what I call fictional and non-fictional software. We get an idea. We have to ask the question: do users actually do this today without our software? If the answer is "No," we get to make up how it is to be done. The user tasks are a blank whiteboard. That's fictional software. But most of the time, the answer is not "No." In that case, the software is non-fictional, so we need to do an ethnography and find out exactly how the user does it, and what the cognitive model of their thinking is while they do what they do. In non-fictional software, neither the developers nor the UX designers are free to make things up.

Yesterday, I read "Usability Analysis of Visual Programming Environments: a 'cognitive dimensions' framework." The author, a UX designer, makes some statements that clarified for me that UX design as practiced today, particularly by this designer, is fictional. Tasks exist before they are designed. Tasks exist before they are digitized by programmers. This isn't new. Yahoo built a search engine without ever looking at existing search engines or asking library science practitioners how to do it. Yahoo made it up and then discovered many of the findings and practices of library science practitioners later. That is to say, they approached, progressed towards convergence with, the user's real cognitive model of the underlying tasks. There is still a gap.

Agile cannot fix those gaps in non-fictional software. It can only approach and converge to the gap width between the user’s bent cognitive model they use as users, and the real cognitive model they learned eons ago in school. That learning was explicit with a sprinkling of implicit. The implicit does not get captured by asking questions, talking, observing, or iterating. With any luck, a trained observer, an ethnographer, and their observational frameworks can observe and capture that implicit knowledge.

iteration-gap

A Rubik's Cube can serve as an example. When solving a cube, we explore the problem space, a tree, with a depth-first search. We can use simple heuristics to get close. But then, we stop making progress and start diverging away from the solution. We get lost. We are no longer solving. We are iterating. We are making noise in the stochastic sense. We stop twisting and turning. We look for a solution on the web. We find a book. That book contains "the hint," the key. So after a long delay, we reset the cube, use the hint, and solve the cube.

diverge-converge-delay

We joined the epistemic culture, or what I was calling the functional culture, of the cube. We are insiders. We solve the cube until we can do it without thinking, without the search struggles, and without remembering the hint. The explicit knowledge we found in that book was finally internalized and forgotten. The explicit knowledge was made implicit. If a developer asked how to solve the cube, the user doesn't remember and cannot explicate their own experience. They cannot tell the developer. And that would be a developer who wasn't making it up, or fictionalizing the whole mess.

All domains contain and find ways to convey implicit knowledge. The Rubik’s cube example was weakly implicit since it has already been explicated in that book. The weakly implicit knowledge is a problem of insiders that have been exposed to the meme and outsiders who have not. Usually, those that got it teach those that don’t. Insiders teach outsiders. In other domains, implicit knowledge remains implicit but does get transferred between people without explication. Crafts knowledge is implicit. Doing it or practice transfers craft knowledge in particular, and implicit knowledge generally.

Let’s be clear here that generalist 101 class in the domain that you took back in college did not teach you the domain in the practitioner/expert sense. You/we don’t even know the correct questions to ask. I took accounting. I’m not an accountant. It was a checkbox, so I studied it as such. A few years after that class I encountered an accounting student and his tutor. The student was buying some junk food at the snack bar. The tutor asked him what accounts were affected by that transaction. That tutor was an insider. The student was working hard to get inside.

For anyone that will ever be a student of anything, there is no such thing as a checkbox subject. Slap yourself if you think so. Dig into it. Boredom is a choice, a bad one. You’re paying a lot of money, so make it relevant to think like an insider.

Recently, a machine beat a highly ranked human in Go, a game not amenable to the generative space and heuristic-based pruning approach of the likes of chess. The cute thing is that a machine learned how to be that human by finding the patterns. That machine was not taught explicit Go knowledge. That machine now teaches Go players what it discovered implicitly and transfers knowledge via practice and play. The machine cannot explain how to play Go in any explicating manner.

One of my lifetime interest/learner topics was requirements elicitation. Several years ago, I came across a 1996 paper on requirements elicitation. Biases were found. The elicitor assumed the resulting system would be consistent with the current enterprise architecture, and let that architecture guide the questions put to users and customers, their bosses. That biased set of requirements caused waterfall development to fail. But, Agile does not even try to fix this. There will always be that gap between the user’s cognitive model and the cognitive model embedded implicitly in the software. UX designers like the author of the above paper impose UX without regard to the user’s cognitive model as well. I have found other UX designers preaching otherwise.

So the author of the above paper takes a program that already embeds the developer’s assumptions that already diverges and fictionalizes the user’s non-fictional tasks and further fictionalizes those tasks at the UX level. Sad, but that’s contemporary practice.

So what does this mess look like?

dev-ui-induced-gap

Here, we are looking at non-fictional software. The best outcome would end up back at the user's conceptual model, so there was no gap. I've called that gap negative use costs, a term used in the early definition of the total cost of ownership (TCO). Nobody managed negative use costs, so there were no numbers, so in turn Gartner removed them from the TCO. Earlier, I had called it training, since the user who knew how to do their job has to do it the way the developer and UX designer defined it. When you insert a manager of any kind in the process, you get more gap. The yellow arrows reflect an aggregation of a population of users. Users typically don't focus on the carrier layer, so those training costs exist even if there were no negative use costs in the carried content.

As for the paper that triggered this post, “cognitive” is a poor word choice. The framework does not encode the user’s cognitive map. The framework is used to facilitate designer to manager discussions about a very specific problem space, users writing macros. Call it programming and programming languages if you don’t want your users to do it. Still useful info, but the author’s shell is about who gets to be in charge. The product manager is in charge. Well, you’ll resolve that conflict in your organization. You might want to find a UX designer that doesn’t impose their assumptions and divergences on the application.

The Tracy-Widom Distribution and the Technology Adoption Lifecycle Phases

October 11, 2016

In my recent bookstore romps, the local B&N had a copy of Mircea Pitici's The Best Writing on Mathematics 2015. I've read each year's book since I discovered them in the public library system in San Antonio years ago. I read what I can. I don't force myself to read every article. But what I read I contextualize in terms of what it means to me, a product strategist. I'm a long way from finished with the 2015 book. I'm thinking I need to buy them going back as far as I can and read every article. Right now that's impossible.

I thought I was finally finished with kurtosis, but no I wasn't, thanks to the 2015 book. So what brought kurtosis back to the forefront? Natalie Wolchover's "At the Far Ends of a Universal Law" did. The math in that article is about the analytic view of phase transitions, or coupled differential equations, described by something called the Tracy-Widom distribution. That distribution is asymmetric, meaning it has skewness, which in turn means it exhibits kurtosis.

In "Mysterious Statistical Law May Finally Have an Explanation" in the October 2014 edition of Wired magazine, the Tracy-Widom distribution is explained. It is linked to distributions of eigenvalues, and to phase transitions. The phase transition aspect of the Tracy-Widom distribution caught my attention because Geoffrey Moore's technology adoption lifecycle is a collection of phase transitions. The article contained a graph of the Tracy-Widom distribution, which I modified somewhat here.

tw2

I annotated the inflection points (IP) because they represent the couplings between the differential equations that comprise the Tracy-Widom distribution. I used thick black and blue lines to represent those differential equations. The Tracy-Widom distribution is a third-order differential equation, which is composed of two second-order differential equations (S-curves), which in turn are composed of two differential equations each (J-curves).

The cool thing is that we move from a stochastic model to an analytic model.
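
You can also get a feel for the distribution's shape on the stochastic side with nothing more than a random matrix. A common construction, sketched here under the usual GOE normalization and assuming numpy and scipy: sample the largest eigenvalue of random symmetric matrices; its centered and scaled fluctuations tend to a Tracy-Widom distribution, and the asymmetry shows up in the sample skewness.

```python
# A minimal sketch: approximate a Tracy-Widom distribution by sampling the largest
# eigenvalue of Gaussian Orthogonal Ensemble (GOE) matrices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 200, 1000
samples = []
for _ in range(trials):
    a = rng.standard_normal((n, n))
    goe = (a + a.T) / np.sqrt(2)                 # symmetric, GOE-normalized
    lam_max = np.linalg.eigvalsh(goe)[-1]        # largest eigenvalue
    samples.append((lam_max - 2 * np.sqrt(n)) * n ** (1 / 6))   # center and scale

samples = np.array(samples)
print("skewness       ", stats.skew(samples))    # nonzero: the distribution leans
print("excess kurtosis", stats.kurtosis(samples))
```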

I removed the core vs. tail color coding in the WIRED diagram. In my earlier discussions of kurtosis, the core and tails were defined by the fourth moment, aka the inflection points coupling the first-order differential equations. The error persists in this figure because the inflection points were hard to determine by sight. Notice also that the WIRED diagram hints at skewness, but does not indicate how the distribution is leaning. For more on leaning and theta, see Convergence and Divergence—Adoption, De-adoption, Readoption, and More On Skew and Kurtosis. They are taking the Tracy-Widom distribution as a given here, rather than as a transformation of the normal. Much about kurtosis is not resolved and settled in the mathematics community at this time.

The dashed vertical line separating the two sides of the distribution intersects the curve at the maximum of the distribution. The maximum would be a mode, rather than a mean. When a normal is skewed, the mean of that normal does not move. The line from the mean on the distribution's baseline to the maximum slopes, meeting the baseline at some θ. Ultimately, the second-order differential equations drive that θ. Given I have no idea where the mean is, I omitted the θ from this diagram.

In the 2015 book, the left side of the distribution is steeper, almost vertical, which generates a curve closer to the base axis, a curve with a tighter curvature, aka a larger value for Κ1 (smaller radius); and the right side is flatter, which generates a looser curvature, aka a smaller value for Κ2 (larger radius)—note that curvature Κ = 1/r.

tw3

So both figures can't be correct. How did that happen? But, for my purposes, this latter one is more interesting because it shows a greater lag when transitioning between phases in the technology adoption lifecycle and in firms themselves, particularly in firms unaware that they are undergoing a phase transition. In Moore's bowling alley, where the Poisson games occur, the phase transitions are more symmetric and faster. In the transition between the vertical and the IT horizontal, the phase transition can be slower, less symmetric. In the transition between early and late main street, the phase transition is fast. Most firms miss their quarterly guidance here, so they undergo a black swan, which is surprising since a firm should know when it is approaching having sold 50% of its addressable market. A firm should also know when it is approaching having sold 74% of its addressable market, so it won't hear from the Justice Department or the EU. Of course, most firms never get near that 74% number.

talc-w-t-w

Here I aligned a Tracy-Widom distribution with each technology adoption lifecycle phase boundary. I have no idea about the slopes of the S-curves, the second-order differential equations. Your company would have its own slopes. Your processes would give rise to those slopes, so collect your data and find out. Knowing your rates would be useful if you were continuously doing discontinuous innovation.

I've labeled the phases and events somewhat differently from Moore. TE is the technical enthusiast layer. They don't disappear at any point in the lifecycle. They are always there. Well, they do lose focus in the software-as-media model in the vertical phase of the adoption lifecycle, and likewise in all late phases. BA is the bowling alley. Keeping your six early-adopter (EA) channels of the bowling alley full is key to continuously doing discontinuous innovation. V is the verticals. There would be one vertical for each early adopter. The early adopter is an executive in a vertical market. IT H is the IT horizontal market. Early main street (EM) is another term for the IT horizontal. If we were talking about a technology other than computing, there would still be a horizontal organization servicing all departments of an enterprise. An enterprise participates in several horizontal markets. Late main street (LM), also known as the "consumer market," is where we are today, a market that orthodox business practice evolved to fit, a market where innovation is continuous, managerial, and worse, "disruptive" in the Christensen way (cash/competition). The technical enthusiast and bowling alley phases are wonderfully discontinuous and positively disruptive in the Foster way (economic wealth/beyond the category). L is the laggard or device phase. P is the phobic or cloud phase. In the phobic phase, computing disappears.

The technical enthusiasts will have their own Tracy-Widom distributions. Moore's chasm is one. Another happens when the focus changes from the carried to the carrier in the vertical phase. And yet another happens when aggregating the bowling alley applications into a carrier-focused, geek/IT-facing product sold in the tornado. Cloud rewrites another. An M&A would cause another as well. That product would sell in the second (merger) tornado (not shown in the figure).

The first second-order differential equation accounts for what it takes to prepare to make the phase transition. The second second-order differential equation accounts for operationalized work during the phase. The diagram is not always accurate in this regard.

More than enough. Enjoy.

Geez, another edit, but overpacked.

Customer Lifecycle and the Value Gap

October 2, 2016

John Cutler, @johncutlefish, tweeted a link to Customer Retention Hacking: How to get Users to Commit. Reading the article, I was struck by this quote

You don’t interact with your significant other the same way on your first date as you do on your 50th or 200th date. Similarly, giving a customer a great experience on day one isn’t going to be the same as on day 50.

and by how the long tails of an application's clicks could be organized to work with the customer lifecycle.

We start with the 1st day, the onboarding. Different things happen from there. Learning happens differently in each user. Expertise develops over time. Roles diverge over time.

Value projection has its timeline as well. John tweeted a link to The Success Gap: A HUGE Opportunity You Haven’t Considered.

So we'll review the long tail of the application's clickstream. Let's say that every control in your application emits an HTTP request to an HTML page for that control, so that every click gets counted, sorted, and summed up by a directory structure. This will tell you what the users are doing. If you can isolate this down to a particular user, you might want to get permission, or default permission in an EULA. This will timestamp the application's clickstream. What's important for the purposes of this post is the timestamp. You could see what the user does with your application each day via server log analytics. You could see what the user doesn't do efficiently, or what the user doesn't know how to do. That knowing or not will be role specific. You need to know the user's role, and when the user changes roles. Is your user doing self-support? You can see it. Likewise, you can see where a bug happens, because the histograms will change drastically.

daily-long-tail

The histograms on the left aggregate several of the histograms on the right. We save a named file via the menu. We save a named file via a shortcut. Those would each have their own histogram. They would be added together in "save a named file." These aggregations would be defined by the directory structure containing the file for each control. We can save the control clicks by use case. The structure can get messy. With continuous delivery, we would save the server log and put a new server log out there. Play with it. Aggregate down the timeline.
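
Here is a minimal sketch of that roll-up, with a made-up log format and made-up control paths; the directory structure does the aggregating.

```python
# Count control clicks from a server log and roll them up the directory structure
# that names each control, e.g. menu save + shortcut save -> "save a named file".
from collections import Counter
from urllib.parse import urlparse

log_lines = [                                   # hypothetical log format
    "2016-10-02T09:14:11 GET /save/named/menu.html",
    "2016-10-02T09:15:40 GET /save/named/shortcut.html",
    "2016-10-02T09:16:02 GET /save/named/menu.html",
    "2016-10-02T09:20:19 GET /print/preview.html",
]

clicks, rollup = Counter(), Counter()
for line in log_lines:
    timestamp, _, url = line.split()
    path = urlparse(url).path
    clicks[path] += 1                           # one histogram bar per control
    parts = path.strip("/").split("/")
    for i in range(1, len(parts)):              # aggregate up the directory tree
        rollup["/" + "/".join(parts[:i])] += 1

print(clicks.most_common())                     # the per-control long tail
print(rollup["/save/named"])                    # menu + shortcut saves = 3
```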

Every click of a control is a micro conversion. Click and you see the next set of controls. Another click could tell you what use case the user is attempting to perform.

Value is projected outward from the application. Further, various value propositions are projected from the application. Some use does not move the system towards a value proposition. We can sort that out. The value not yet delivered would constitute the Success Gap.

value-projection

In this figure, I started with the triangle model, where an application is a decision tree. The base of the triangle (right side) is the user interface (UI). Ideally, the UI would be organized by the one task, one dialog principle, or in contemporary terms, one use case, one dialog. We do not deliver value. We deliver enablers that enable the user to deliver value through the use of those enablers. The user has an orientation towards the application. A good measure of location would be what training would be required to use the application efficiently. That training can be pushed into the buying cycle, rather than waiting until after the application is installed. Post-install training would show up between the user and the UI. There would be various, numerous users, each with different competencies and levels of competency.

The triangle model here is correlated with the roadmap and the releases. Released functionality should always deliver value and reduce the value gap. When this is the case, the user is induced to continue subscriptions. Software by Numbers discussed this need to induce in the client-consultancy, custom-build engagement, the type of engagement where discontinuous technologies find client productizations and vertical markets for the product. The focus in such engagements would be carried content in the software-as-media model.

Notice I’m counting bits here. Used bits and delivered bits can give you an idea of leverage. Each release delivers some bits towards the ultimate value proposition. The value delivered may be the user’s or that of an economic buyer. The economic buyer’s value generally reaches deeper into the future.
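A toy sketch of that bit counting; the release numbers here are invented, and “used” stands in for whatever the clickstream shows being exercised on the way to a value proposition.

```python
# Toy sketch: leverage per release as used bits over delivered bits.
# The figures are invented for illustration only.
releases = {
    "1.0": {"delivered": 120, "used": 40},
    "1.1": {"delivered": 60,  "used": 45},
    "1.2": {"delivered": 30,  "used": 27},
}

for name, bits in releases.items():
    leverage = bits["used"] / bits["delivered"]
    gap = bits["delivered"] - bits["used"]          # delivered but not yet used
    print(f"release {name}: leverage {leverage:.2f}, unused bits {gap}")
```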

In an agile development environment, the iterations would be tactical; the value delivery, strategic. Why the labels? Consider the machine intelligence environment for a moment. Strategic is not a continuation of the tactical. In phase-change environments, you have to stop collecting data and begin a new collection. How wide are your tactical learning needs? How wide are your strategic learning needs?

So we have seen how to collect data about the customer lifecycle and the daily use under different situations. We’ve looked at the success gap. Both of these ideas tie to a timeline. You can measure against the time to return, or the time to value delivery. The retained customer would have to learn again with each release. Permission campaigns can move that learning earlier. Content marketing likewise. The economic buyer might have to be taught the value proposition and, in value-based marketing, sold on the price and configuration. Microservices can partition the application, so the amount of UI is variable, and the UI purchased is the minimal UI for the expected value projection.

Enjoy.


Convergence and Divergence—Adoption, De-adoption, Readoption

September 5, 2016

Skewness Risk

This week, I visited the Varsity Bookstore, the off-campus bookstore for Texas Tech. I looked at a statistics book, sorry no citation made, that said skewness was about how much the normal distribution leaned to one side or another. When it leans, the mean stays put, but the mode moves by some angle theta. My last blog post on kurtosis mentions theta relative to one of the figures.

[Figure: lean]

The notions of skewness risk and kurtosis risk came up during the work on the earlier post. It took this long to find some details hinted at in places like Investopedia. The thick tails dive under the threshold for extreme outcomes. Even with a black swan, there isn’t that much under the threshold. The negative-skewness graph shows how the large losses move the convergence with the horizontal asymptote towards the present. The same thing happens with small losses, possibly to the same horizontal extent, since the longer tail magnifies the small loss.

Notice that on the left side of the normal, gains happen; on the right, losses. Moore’s technology adoption lifecycle similarly shows the left to be growth and the right to be decline. What saves the right tail is that an acquisition is supposed to bring a 10x multiple into play, but that requires the acquirer to play the merger tornado game. That game is not played well, if it is played at all. Most acquisitions provide exits to investors tied up with interlocking directors and funds.

The skewness happens because the distribution is tending to the normal, but at the moment captured by the data underlying the distribution, data is still missing, and the data is not normal. Once the data is fully captured, the normal will stand upright and centered without skewness, and without skewness, there is no kurtosis.
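A minimal sketch of that tendency, assuming numpy and scipy are on hand: draw a normal “population,” pretend only the first n points have been captured so far, and watch the sample skewness and excess kurtosis settle towards zero as the data fills in.

```python
# Sketch: sample skewness and excess kurtosis shrink as more data is captured.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
population = rng.normal(loc=0.0, scale=1.0, size=100_000)

for n in (10, 30, 300, 10_000, 100_000):
    sample = population[:n]             # "the data captured so far"
    print(f"n={n:6d}  skew={skew(sample):+.3f}  "
          f"excess kurtosis={kurtosis(sample):+.3f}")
```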

S-curves

Since I’m on the road, I’ve left behind the bookstore that had a book by a venture capitalist or strategist (no citation, no way to find that book again). But the author said he didn’t see the relevance of S-curves to the companies in his portfolio. Well, most of those firms are built on commodity software, so they are long past the upsides of that software. Consumer software still commoditizes, and that brings a black swan, a missed quarter, to the stock price. When that commoditization happens, the underlying software has to be replaced with a better technology. Replacing it is an S-curve play by the seller of that technology, not the users of that technology. Most of his portfolio would be users of, rather than makers of, underlying technologies. Simple fact: in the late phases of the technology adoption lifecycle, declining stock prices, hope for a merger upside, no premium on IPO, and nobody dealing with S-curves is the norm. Oh, and the whole thing being about cash. You get rich in an upper-middle-class way, but it’s too late to create economic wealth. Confusion between early-stage financing and early-phase adoption is rife. Talk of early adopters is not in the Moore sense, but the Gladwell sense. And no chasms exist to be crossed. So yeah, no S-curves.

S-curves confuse disruption in the Foster sense because such disruptions can be temporary if the innovation’s S-curve slope slips below that of the incumbent’s. Foster put causes before effects, where Christensen focuses on effects absent cause. In the ’80s and early ’90s, nobody was overserved. It just turned out that the technology left everyone overserved. The small-disk manufacturers were not competing with the large-disk manufacturers. They just served their markets, and the markets got bigger on their own. Alas, the old days.

Kurtosis, defined by curvature, hinted at defining S-curves in the same way. Curvature is implicit. Mathematically, the curve defines the curvature. We cheat when we claim curvature is the reciprocal of the radius: we don’t know where the center is, so we don’t know the radius, thus we don’t know the curvature. There probably is some software somewhere that can find the curvature.
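There is, and no center or radius is needed: for a curve y = f(t), curvature is |f''| / (1 + f'^2)^(3/2), so it falls straight out of the curve’s derivatives. A minimal sketch with numpy, assuming a logistic S-curve purely for illustration:

```python
# Sketch: numerical curvature of a logistic s-curve, kappa = |f''| / (1 + f'^2)^(3/2).
import numpy as np

def logistic(t, L=1.0, k=1.0, t0=0.0):
    return L / (1.0 + np.exp(-k * (t - t0)))

t = np.linspace(-6, 6, 1201)
f = logistic(t)
f1 = np.gradient(f, t)                  # numerical first derivative
f2 = np.gradient(f1, t)                 # numerical second derivative
kappa = np.abs(f2) / (1.0 + f1**2) ** 1.5

print("max curvature at t =", t[np.argmax(kappa)])   # one of the two knees of the s-curve
```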

[Figure: S-curve]

The red line is the S-curve. The blue horizontal line shows where rapid improvement gives way to slower improvement. The line also shows where investment is cheap and where it becomes increasingly expensive. The large circle gets larger as we go and shifts its center down, so we get a slower and longer curve. At the top of the large circle, we’ve transitioned to those 10x returns, if the merger was actually successful.

The s-curve tells us how much change to expect. If you had the s-curve for every contributing technology, then you would have some notion of the rates of change you could expect. We overstate change in our conversations, particularly when we talk about the s-curves and rate of change of the carried content.

Convergence and Divergence

Today’s reading was Concepts and Fuzzy Logic, edited by Radim Belohlavek and George J. Klir. As editors, their goal for the book was to foster a return to the use of fuzzy logic within the psychology of concepts and among mathematicians. I’ve always seen ideation as being convergent or divergent, but over the life of a conceptual model, there are several convergences and divergences. The editors here sought to foster a return to a convergent conceptual model that previously converged and later diverged.

So we start with the verbs, with the tokens with which we parse the adoption of the discontinuous innovation. The drivers at this stage are those driving bibliographic maturity. We converge or diverge. In convergence, we merge separate disciplines. The conceptual model being adopted is the platform technology, the carrier. The disciplines bring their carried content into the mix. The carrier is under adoption, and the newfound applications in each discipline’s carried content are under adoption as well. Those applications make the business case for those in the current and near-term pragmatism steps. Those applications and the business cases will change as we approach the mid-term and long-term pragmatism steps.

[Figure: convergent or divergent]

In a product, care must be taken with the pragmatism steps. As with pricing bifurcations due to communications-channel isolation, the business cases are specific, and the reference cases that will be accepted by the population on a given pragmatism step are likewise specific. The early adopter’s success will not drive laggards to buy. But that is the macro view of adoption phases, where the pragmatism steps present the micro view.

We start with two populations. Each adopts a conceptualization at its own rate. Each has its own reference bases. Once adoption begins, a third population emerges: the adopters. People entering either of the disciplines involved after adoption begins can adopt the idea immediately. This is more pronounced when the conceptualization under adoption is discontinuous. Do students of SEO ever get around to print or, worse, focus groups?

In the case documented in the book, mathematicians (yellow) worked their way towards fuzzy logic. They took the path of the continuous innovation. The psychology of concepts researchers (red) found fuzzy logic and it solved some of their problems, so it was adopted, but they were not working with mathematicians to accelerate the use of fuzzy logic.

Publication in these populations motivates adoption. Those peer-reviewed papers constitute the touchpoints in a content marketing network. Publication is likewise an event. Adoption and de-adoption are fostered by events.

[Figure: system of convergences and divergences]

In every adoption, there are collaborators and defectors in game theory speak. At some point, a defector succeeded in publishing some claims about how fuzzy logic couldn’t do this or that. These claims were accepted uncritically among psychology of concepts researchers. That led to the de-adoption of fuzzy logic by that population. De-adoption happened only in the psychology of concepts population driven by the publication of that defector’s claims. This went unnoticed by the mathematicians working in the same space. Again, like price communications isolation providing opportunities, discipline-specific communications channels provided the isolation here.

At least in this convergence, the two disciplines were not putting each other down like the demographers and ethnographers involved in ethnographic demography were. I can’t find that post mentioning that behavior. It doesn’t help that this blog has stretched across three blogging platforms. But, the behavior is typical. Those converging will be some small portion of the contributing domains.

Mathematicians continue to develop fuzzy logic to this day.

After de-adoption, a researcher looked at the claims and found them to be false. This led the editors to realize that they needed an intervention. Their book was part of that intervention. That accelerated readoption.

Realize here that in the readoption, the base population has changed, and the concepts being adopted have changed as well. The mathematicians widened the conceptual model to be readopted while the psychology of concepts researchers were gone.

Looking at the underlying populations, the psychology of concepts population had not completely adopted fuzzy logic, nor did that population completely de-adopt. Those later in the adoption lifecycle never bothered with fuzzy logic. They didn’t go through de-adoption. They did go through readoption eventually.

One of the messy things about the normal-distribution representation of the technology adoption lifecycle is that adoption happens in a time series. The population is spread out along that time series. The timeline moves left to right. Each sale, whether counted in seats or dollars, moves us down the timeline. B2B sales are huge moves. The mean becomes the marker where fifty percent of the seats have been sold. The growth side of the curve ends with the seat sold at the fifty percent mark. This timeline is present regardless of skewness or kurtosis.
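A toy sketch of reading that fifty-percent marker off the timeline; the monthly seat counts are invented.

```python
# Toy sketch: find the month where cumulative seats sold cross fifty percent.
from itertools import accumulate

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug"]
seats  = [  5,    20,    60,   120,   150,   120,    60,    20 ]   # seats sold per month

total = sum(seats)
for month, cum in zip(months, accumulate(seats)):
    if cum >= total / 2:
        print(f"fifty percent of {total} seats sold by {month}")   # the growth side ends here
        break
```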

The timeline starts with the Dirac function providing the potential energy that drives the lifecycle. After the Dirac function come the Poisson games. Then we move on to the convergence with the normal via sample populations of fewer than thirty; in statistics, these are Poisson approximations of the normal, which leads us to skewness and kurtosis. Once the sample populations are over thirty, we have a normal that is not skewed. Risks become symmetric. This normal is one of a series of three normals: vertical (carried), horizontal (carrier), and post-merger (whole media, both). The standard normal hides the relative sizes of these normals.
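A minimal sketch of that progression, assuming numpy and scipy: the Poisson’s skewness is 1 over the square root of lambda, so as events accumulate the lean disappears and the distribution settles into the normal.

```python
# Sketch: Poisson skewness (1/sqrt(lambda)) shrinking towards the symmetric normal.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
for lam in (1, 5, 30, 300):
    draws = rng.poisson(lam, size=200_000)
    print(f"lambda={lam:4d}  theoretical skew={lam ** -0.5:.3f}  "
          f"sample skew={skew(draws):.3f}")
```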

The three normals give us a hint towards Moore’s three horizons, which turn the technology adoption lifecycle around. The horizons look at the technology adoption lifecycle in the rear-view mirror as if they are right in front of us. Maybe a backup camera view is a better perspective. The B2B early adopter is barely seen or focused on. It is inconsistent with the present horizon.

Anyway, those two populations are now a third, happily solving psychology of concepts problems with fuzzy logic. The defectors lost. The price-performance or S-curves make the case for adoption. Other things make the case for de-adoption and readoption. The editors here demonstrated the role of the intervenor or, in most cases, the near-monopolistic, market-power-positioned market leader that so many programmers abhor. That market leader does much to make the category happen and thrive.

So what is a product manager to do? Start with understanding the conceptual models that comprise your product. Understand the adopting populations for each. Those populations are not on the same page and don’t adopt at the same rates. Those domains do not inject change into your product at the same rates. Those populations might be deviating from your product due to de-adoption of the underlying conceptualization. Yes, get someone to stay on top of the changes in each of those domains. Know when a defector and a defection are happening. That defection might disrupt you. That’s classic in the sense of how the hell would you, the product manager, have known. It’s not about competition. It’s about conceptualizations. They change. They oscillate. They own you and your product if you’ve taken them into your product or service. They happen in the carrier and the carried of the media we play in.

Likewise, know your S-curves, aka your price-performance curves. If they touch your product, know them. Sure, you can’t deal with the fabrication-plant investment issue, but it will throttle your product if you need that fab.