Upmarket and Downmarket

November 4, 2017

A while back I ran across a developer coding for the upmarket. It took me a while to recall what an upmarket move was. Geez. And, when you’re talking upmarket, there is a downmarket. I don’t think in those terms since they are late main street, horizontal-phase issues. Not my game.

Downmarket

I decided to look at them from the standpoint of the technology adoption lifecycle, so I drew two figures to take a look at them.

Market Definition--Down Market

I drew the downmarket case starting with the technology adoption lifecycle (TALC) as a normal of normals. The company is in the late main street phase. This is usually where a company builds a downmarket strategy. Companies in this phase are on the decline side of the TALC. Growth is really a matter of consuming the market faster and reaching the end of the road, the death of the category, sooner. Growth is a stock market trick. Going downmarket is a way to grow by actually increasing the size of the population that the company is facing.

I labeled the baseline of the TALC “Former.” Then, I drew another line under the TALC. This line should be long enough to contain the population that the company is moving downmarket to capture. I labeled this line “Planned.” Then, I drew a standard normal to sit on this new line extending from the original normal. I did not normalize the new normal.

The current market is a subset of the new down-marketed market. The new market need not be centered at the mean of the current market. The population will be new, so the mean and standard deviation could differ. The standard normal view of the TALC assumes a symmetrical distribution. This need not be the case. Having two means does make a mess of the statistics. It might not look bimodal, but it will exhibit some kurtosis. Separating the means will take time and planning. If the company is public, it must provide guidance before making such efforts. Don’t switch before providing those projections to the investors.
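
A quick simulation sketch of that mess, assuming numpy and scipy are available; the means, sigmas, and population sizes are invented for illustration, not taken from the figure:

```python
# Mixing the current market with a new, larger downmarket population:
# the combined distribution has one computed mean but two real ones,
# and the summary statistics pick up skew and excess kurtosis.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(42)
current = rng.normal(loc=100, scale=10, size=5_000)      # current market
downmarket = rng.normal(loc=70, scale=20, size=20_000)   # new population
combined = np.concatenate([current, downmarket])

print(f"mean={combined.mean():.1f}  std={combined.std():.1f}")
print(f"skew={skew(combined):+.2f}  excess kurtosis={kurtosis(combined):+.2f}")
```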

I went with one mean in the figure.

The downmarket effort starts with making the decision. The decision will require some infrastructural changes to the marketing and sales efforts at a minimum. It will also require some UX and code revisions to give the downmarket user relevant interfaces. Simple things become much harder when the user doesn’t have the funds they need. The cognitive model may differ from that of the upmarket. These problems may or may not be an issue with your software. The decision might be made across products, particularly in a company organized around its bowling alley. That could mean that this downmarket move becomes a permanent element across all products.

After some period of time, the decision to move downmarket will become operational. Sales may continue in the current markets as other sales efforts address the new downmarket, or the current market might be deemphasized or delayed. In the figure, I removed it. I color coded the lost earnings in yellow and notated them with a negative sign (-). I color coded the gained earnings in green and notated them with a positive sign (+). The gained earnings are dwarfed by the lost earnings as the scale of the market grows and subsequently hits the first scale constraint. Then, the downmarket move will stop until the current population and projected population can be supported. Efforts to support the increase in scale can start earlier, before the scale constraint generates a crisis.

Beyond the first scale constraint, the gains begin to drown out the losses. Then, the next scale constraint kicks in. Once again, the downmarket move will stop until the infrastructure can support the needs being generated by the downmarket move.

Beyond the second scale constraint, the losses dry up and the gains continue out until the convergence of the normal with the x-axis happens, aka the death of the category. Another managerial action will need to be taken to further extend the life of the category.

Notice that I moved the baseline downward beyond the second scale constraint. I labeled this “Overshoot.” I did this to make the losses look continuous. Initially, the curve sat on the original downmarket baseline, but this gave a sawtooth-shaped curve. I’m unsure at the time of this writing which representation is better. As shown, the convergence with the baseline of the normal shows up on the “Overshoot” line.

Pricing will drive the speed of the downmarket realization. Pricing might impair the downmarket move. The net result of the downmarket move will be an increase in seats, which turns into an increase in eyeballs; financial results that depend on price, policies, and timeframes; and an extension of the life of the category.

Upmarket

In the TALC, we usually start in the upmarket and work our way to the downmarket as we move from early (left) to late (right) phases, from growth to decline. Hardly ever does a company move upmarket after being a lower-priced commodity.

Market Definition--Up Market

Here I started with the TALC again. I selected a target population, a smaller population, and drew a horizontal line above which would represent the upmarket. The upmarket, as a horizontal slice across the normal, is shown in yellow and gold. Renormalizing that gets us the green and orange normals. The purple arrow behind the normals provides an operational view as sales grow toward the eventual standard normal shown in orange. The zeros convey how the market is not growing. The higher prices of an upmarket might shrink the size of the market.

When converting an existing market to a higher price, we can consider the market to be Poisson, eventually a kurtotic normal shown with the gray normals, and finally a standard normal without kurtosis. The figure skips the Poisson distribution and begins with the kurtotic normal. Normals with small populations are taller. They shrink towards the standard normal. When a normal is kurtotic, it exhibits a slant, which disappears as the kurtosis goes away.

I called all of these changes in the size, shape, and slant of the normal the “Price Dance.” This dance is illustrated with the purple arrows. Once the standard normal is achieved, kurtosis risk is removed. As the standard normal gains sigmas, the risk is reduced further.

The Poisson distribution representing the initial sales at the higher price puts the product back in hyperbolic space. Once the single-sigma, standard normal is achieved, the product is in Euclidean space. From the single-sigma standard normal, the sigmas increase. That puts the product in spherical space, where the degrees of freedom of strategy and tactics increase, making many winning strategies possible. In hyperbolic space, those degrees of freedom are less than one. Euclidean space has a single degree of freedom. This implies that the Euclidean space is transitory.

The net result of the upmarket move will be an increase in revenues depending on pricing. The number of seats will remain constant with optimal pricing, which in turn leaves eyeballs unchanged. Upmarket moves shorten the life of the category.

Summary

Downmarket moves take a lot of work, more work than an upmarket move. In both cases, the marketing communications will change. Upmarket moves get you more dollars per seat, but you would have to be selling the product. The number of seats does not change, or falls, with an upmarket move. Downmarket moves get you more seats, more eyeballs, and, given pricing, more revenues, if any revenues are independent of eyeballs. Downmarket moves extend the life of the category/product/company. Upmarket moves shorten those lives.

Downmarket and upmarket moves are orthodox strategies and tactics. Talk with your CFO. I’d rather keep the lanes of my bowling alley full.

Enjoy.


A Quick Viz, Long Days

October 29, 2017

Three days ago, out on Twitter, a peep tweeted a graph that was supposed to show how a market event amounted to nothing. The line graph dropped below the baseline, rose above the baseline, and dropped again to the baseline. It was a quick thing that had me spending the rest of the day, and parts of the following three days, hammering on it.

The peep’s point was that nothing happened. Grab a hammer and join me in building a case showing just how much did happen.

This was their graph. If you’re in a hurry, you won’t notice the net loss.

I rotated the minima so I could see if the loss was completely recovered. It was not. The vertical symmetry is asymmetric. Rotating the minima reveals a gap, labeled A, showing that the upside did not completely recover the value lost during the first downside.

The second downside loss stops at the line labeled B, the new baseline. There is a gap between the initial baseline and the final baseline. The gap between the baselines is larger than the gap between the peaks. I copied the gap between the peaks and put it below the initial baseline to demonstrate that the loss at A did not account for all the loss between the baselines. Subtracting the loss A from the loss between the baselines gives us the gap labeled B.

Notice that the baseline at B moves up slightly. I just saw this after drawing many diagrams. I annotated my error. We will ignore this slight upside. Just one more thing that the peep and I overlooked. I will remove it from subsequent diagrams.

Going back to the first diagram, we had a downside, an upside, and another downside. The first downside (A) and the second downside (B) account for the difference between the initial and final baselines.

In the figure on the right, I explored the symmetries. The vertical red lines represent the events embedded in the signal. The notation for the symmetry of an event n spans the interval from n-1 to n+1. These spans are shown in gray.

Since I rotated the minima, the symmetry above the signal is actually a vertical (y-axis) symmetry around the origin. I drew purple lines from the vertex at the top to the vertices at the baseline. Then, I moved the purple lines to the top of the figure. They looked symmetric, but are slightly asymmetric. The left side was three units wide; the right, four units wide.

Both of the horizontal (x-axis) symmetries are asymmetric. The gray box notation demonstrates that these signal components are very asymmetric.

Asymmetries indicate locations where something was learned or forgotten. The repeal of the Glass-Steagall Act often gets cited as one of the causes of the housing crisis. It was a forgetting. In Stewart Brand’s “How Buildings Learn,” buildings learn by accretion. We accrete synapses as we learn. When we put a picture on a wall, the wall learns about our preferences. The next resident may not pull that nail out, so such remodeling artifacts accrete. Our house becomes our home, because we teach our house, and our house learns. So it is with evolution.

Before I created the box notation, I was drawing the upside and downside lines and rotating them to see how much area was involved in each of the asymmetries. I’m using the rotation approach in the figure to the left. I’ve annotated the three asymmetries. The white areas are cores, and the orange areas are tails. The asymmetry annotated at the top of the figure is, again, horizontal. The tail is just a line as the asymmetry is slight. The cores are symmetric about vertical lines, not shown, that represent the events encoded into the signal.

In an earlier figure, I just estimated the area of the tail. When I highlighted that area, because I use MS Paint to draw these things and it dithers, I got a line of green areas, rather than a single area. I numbered them in order. They are labeled as Area Discontinuities. In a sense, they would be Poisson distributions in individual Poisson games. In area 8, those Poisson distributions become a single normal distribution. That normal has more than 32 data points. With 20 data points, that normal can be estimated. In a sense, there is a line through those Poissons and the normal. This is what happens in the technology adoption lifecycle as we move from early adopters, each with their own Poisson game, and sum towards the vertical/domain-specific market of which the early adopter is a member. This line is one lane of Moore’s bowling alley.
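
A rough sketch of that summing, assuming numpy and scipy; the rates of the individual Poisson games are made up. The sum of independent Poissons is itself Poisson with the summed rate, and at a large enough total rate it is approximately normal, which is the lane-of-the-bowling-alley claim in miniature:

```python
# Sum several small Poisson games and watch the skew and excess kurtosis
# of the total shrink toward the normal's zeros.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
rates = [0.5, 1.0, 2.0, 3.5, 5.0]                 # individual Poisson games
games = [rng.poisson(lam, size=10_000) for lam in rates]
total = np.sum(games, axis=0)                     # one lane of the alley

print(f"total rate={sum(rates)}  mean={total.mean():.2f}  var={total.var():.2f}")
print(f"skew={skew(total):+.3f}  excess kurtosis={kurtosis(total):+.3f}")
```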

Where the figure mentions “Slower,” that is just about the slope of that last diagonal, the second loss. The red numbers refer to the earlier unrefined gaps we are now calling A and B.

When there are tails, the normal distribution involved will exhibit kurtosis. I built a histogram of the data in the area that I highlighted in green and then looked at the underlying distribution along the line through those areas. There seemed to be two tails: one thicker and one thinner. Of course, all of this is meaningless, as it results from the dithering. With a vector rendering, there would be only one, more consistent area.

The tiny thumbnail in the middle of the thumbnails at the bottom right of the figure shows a negatively skewed normal, but in another interpretation, the distribution is four separate normals. Where I mentioned theta, the associated angle quantifies the kurtosis.

One more thing is happening: where a Poisson distribution finally becomes a normal distribution, the geometry shifts from hyperbolic to Euclidean.


In the next figure, I look at the black swan view of the signal. A black swan is usually drawn as a vertical line cutting off the tail of the normal distribution, labeled Original and highlighted with yellow and light green. Here we are talking generally. In the figure after that, we will use this to show how the three black swans generate the signal that we’ve been discussing. The negative black swan throws away the portion of the distribution remaining beyond the event driving the black swan; then the remaining data is used to renormalize the remaining subset of the original data. The lifetime of the category is reduced. The convergence with the x-axis contracts, aka moves towards the y-axis. The positive black swan moves the distribution down. The normal becomes enlarged, so it sits on the new x-axis below the original baseline. The new distribution includes the light green and green areas in the figure. The lifetime of the category is lengthened. The convergence moves out into the future, aka moves further away from the y-axis.
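
A hedged numerical sketch of the negative black swan, assuming scipy; the event location is an arbitrary choice. Cutting the normal at the event and renormalizing what remains makes the remaining curve taller and pulls its convergence with the x-axis toward the y-axis, as described above:

```python
# Truncate a normal at the black swan event and renormalize the remainder.
import numpy as np
from scipy.stats import norm, truncnorm

mu, sigma = 0.0, 1.0
event = 0.5                                   # event driving the black swan
a = -np.inf                                   # keep everything left of the event
b = (event - mu) / sigma
swan = truncnorm(a, b, loc=mu, scale=sigma)

x = 0.0
print(f"original density at {x}:     {norm(mu, sigma).pdf(x):.3f}")
print(f"renormalized density at {x}: {swan.pdf(x):.3f}")  # taller after the cut
```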

In the continuous innovation case, the positive black swan will stay aligned with the driving event. The normal distribution is enlarged just enough to converge with the new x-axis below the prior x-axis. In the discontinuous innovation case, the positive black swan would begin at the B2B early adopter phase of the technology adoption lifecycle. In the discontinuous case, the size of the addressable market would drive the size of the normal, and it is not correlated with the prior distribution.

Now we go back to the example we’ve worked on throughout this post. We will apply the black swan concepts to the signal using the diagram below. There are three black swans. A negative black swan generates the first loss. A positive black swan follows with a recovery that does not fully recover the value lost in that first loss. This recovery is followed by another negative black swan that contributes to the net loss summed up by the signal. The normals are numbered 0 through 3. The numbers are to the right of the events, and they are on the baseline of the associated normal. The original distribution (0) is located at the event driving the first black swan. The new distribution (1) is associated with the first loss, the first negative black swan. The x-axis of this black swan is raised above the original x-axis. This distribution lost the projected data to the right of the event, data expected from the future. Renormalizing the distribution makes it higher from peak to the new baseline, and the distribution contracts horizontally. The rightmost convergence of the normal with the x-axis is where the category ends. The leftmost convergence is fixed. The x-axis represents time. The end of the category will arrive sooner unless some other means to generate revenues is found, aka a continuous innovation is found. The first gain, aka the positive black swan, generates a larger distribution (2). Its x-axis is lower than the immediately prior x-axis. The convergence moves into the future relative to the immediately prior distribution. This is followed by another loss, the second loss, the second negative black swan. Here the x-axis rises above the previous x-axis. The distribution (3) is renormalized and is smaller than the immediately previous distribution (2).

From a signal perspective, the original signal input was above the output. The black swans move the signal to the line labeled “Restatement.” The shapes of the original and the restatement generate the same output signal.


Next, we look at the logic underlying the signal. I’ll use the triangle model. In that model, every line is generated by a decision tree represented by a triangle. The x-axis has decision trees, aka triangles, associated with it. Each interval on the x-axis has its own decision tree. The y-axis has its own intervals and decision trees. The events that drove the black swan model drive the intervals and associated decision trees.


The pink triangles represent the y-axis decision trees involved in the losses. The green triangle represents the y-axis decision tree for the gain. The green triangle is higher than the gain, because the gain does not recover the entire loss from the first loss. I annotated the shortfall. The asymmetry in the vertical axis that we discussed earlier appears on the upper right side of the triangle, which is thicker. This thickness is not constant. The colors and the numbers show the patterns involved on that side of the triangle. The axis of symmetry associated with the green triangle is an average between the baseline of the input signal and the baseline of the output signal. Drawing this symmetry axis would increase the asymmetry of the representation.

The erosion would be shown more accurately as subtrees, rather than a single subtree starting at the vertex, like a slice of pie.

On the x-axis, each triangle is shown in blue. The leftmost triangle consists of a blue triangle and a yellow triangle. The blue triangle represents the construction of the infrastructure that generates that interval of the signal. The yellow triangle represents the erosion of that infrastructure. The black swan, the first loss, resulted from that erosion.

Keep in mind that negative black swans reduce the probability, so they move their baselines up vertically. Positive black swans increase the probability, so they move their baselines down vertically.

In the very first figure, I annotated the asymmetries and symmetries. Asymmetries are very important because they inform us that learning is necessary. Asymmetries in the normal distribution show up as kurtosis due to samples being too small to achieve kurtosis-free normality or symmetry.

The vertical orientation of those pink triangles is new to me as I wrote this. They represent the infrastructure to stop loss, a reactive action. The results may appear positive, but in the long run, represent exposure. These actions will be instanced for the situation being faced. Given that a black swan can happen at any moment, you don’t want to have to invent a response. You want to move from reactive, to predictive, to proactive time orientations as quickly as possible. Many people see OODA loops as a reactive mechanism. The military trains on the stuff, on the infrastructure, decision trees being part of that infrastructure. Know before you go. Eliminate or reduce those asymmetries before you get into the field, before the black swan shows up.

The events tie the views together: the original signal view, the black swan/distribution view, and the logical view are connected by the red lines representing the events.


I drew another figure that is a bit cleaner about the signal view.


Even if the signal looks like nothing, a net zero, take a closer look; there was much to be seen, and much learning got done to produce the result. Know before you go.

Enjoy.


The Mortgage Crisis

September 5, 2017

Last week, I came across another repetition of what passes for an explanation of the mortgage crisis. It claimed that the problem was the preponderance of low-quality loans. Sorry, but no. I’m tired of hearing it.

A mortgage package combines loans of all qualities, of all risks. But, being an entity relying on stochastic processes, it must be random. Unfortunately, those mortgage packages were not random. This is the real failing of those mortgage packages. Mortgages happen over time and are temporally organized, as in not random.

The housing boom was great for bankers up to the point where they ran out of high-quality loans. At that point, the mortgage industry looked around for ways to make lower-quality loans. Mortgage packages gave them the means. So when fifty loans got sold in a given week, the lender packaged them into one package. Some of those loans were refinancing loans on high-quality borrowers. Rolling other debts into the instrument improved the borrower’s credit but didn’t do much for the mortgage package. Still, the averages worked out; otherwise, the lender threw a few of the pre-packaging, high-quality loans in there to improve the numbers. A few people had to make payments to their new mortgage holding company. Their problem.

But, the real risk was that all of the original fifty loans originated from the same week. They were temporally organized. That breached the definition of the underlying necessities of stochastic systems. That was the part of the iceberg that nobody could see. That’s the explanation that should be endlessly retweeted on Twitter.
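
A toy simulation of that breach, assuming numpy; every rate and size here is invented. Loans written in the same week share that week’s conditions, so week-built packages swing much harder than packages drawn at random across time:

```python
# Compare default-count variance: packages built from one week's loans
# versus packages sampled randomly across all weeks.
import numpy as np

rng = np.random.default_rng(1)
weeks, loans_per_week = 200, 50
weekly_rates = rng.beta(2, 30, size=weeks)    # each week's default rate, ~6% average
defaults = rng.binomial(1, weekly_rates[:, None], size=(weeks, loans_per_week))

by_week = defaults.sum(axis=1)                # temporally organized packages
shuffled = rng.permutation(defaults.ravel())
random_pkgs = shuffled.reshape(weeks, loans_per_week).sum(axis=1)

print(f"week-package variance:   {by_week.var():.2f}")
print(f"random-package variance: {random_pkgs.var():.2f}")
```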

Why? Well, we are no longer living in a production economy. You can make money without production. You can make money from the volatility economy. You can make money off of puts and calls and packages of those. That allows you to make money off of your own failures to run a successful business. Just hedge. The volatility economy is a multitude of collections of volatility based on a stochastic system, the stock market. And, with the wrong lessons having been learned about mortgage packages, the regulators want to regulate mortgage packages and other stochastic systems. Or, just make them flat out illegal because they don’t know how to regulate them. I’m not against regulation. Constraints create wealth. I just see the need for stochastic systems.

Too many stories are wrong, yet endlessly repeated on Twitter. Kodak, …. 3M, …. There was only one writer that wrote about Kodak that understood the real story. With 3M, their innovation story was long past and still being told when the new CEO gutted the much-cited program.

From the product manager view, where do stochastic systems fit in? The bowling alley is a risk package akin to a mortgage package. But, if you are an “innovative” company much-cited in the innovation press these days, don’t worry, your innovation is continuous. The only innovations showing up in the bowling alley are discontinuous. Likewise, crossing the chasm, as originally defined by Moore, was for discontinuous innovations. Those other chasms are matters of scale, rather than the behavior of pragmatism slices.

But, back on point, we engage in stochastic systems even beyond the bowling alley. A UI control has a use frequency. When it has a bug, that use frequency changes. Use itself is a finite entity unless you work at making your users stay in your functionality longer. All of that boils down to probabilities. So we have a stochastic system on our hands. In some cases, we even have a volatility economy on our hands.

Enjoy.

A Different View of the TALC Geometries

August 25, 2017

I’ve been trying to convey some intuition about why we underestimate the value of discontinuous innovation. The numbers are always small, so small that the standard financial analysis results in a no-go decision, a decision not to invest. That standard spreadsheet analysis is done in L2, a Euclidean space. This analysis gets done while the innovation is in hyperbolic space, so the underestimation of value would be the normal outcome.

In hyperbolic space, infinity is at the edge, at a distance. The unit measure appears smaller at infinity when viewed from Euclidean space. This can be seen in a hyperbolic tiling. But, we need to keep something in mind here and throughout this discussion: the areas of the circles are the same in Euclidean space. The transform, the projection into hyperbolic space, makes it seem otherwise. That L2 financial analysis assumes Euclidean space while the underlying space is hyperbolic, where small does not mean small.

How many innovations, discontinuous ones, have been killed off by this projection? Uncountably many discontinuous innovations have died at the hands of small numbers. Few put those inventions through the stage-gated innovation process because the numbers were small. The inventors that used different stage gates and pushed on without worrying about the eventual numbers succeeded wildly. But, these days, the VCs insist on the orthodox analysis, typical of the consumer commodity markets, so nobody hits one out of the ballpark and pays for the rest. The VCs hardly invest at all and insist on the immediate installation of the orthodoxy. This leads us to stasis and much replication of likes.

I see these geometry changes as smooth, just as I see the Poisson to normal to high-sigma normals as smooth. I haven’t read about differential geometry, but I know it exists. Yet, there is no such thing as differential statistics. We are stuck in data. We can use Markov chain Monte Carlo (MCMC) to generate data fitting some hypothetical distribution, from which we would build something and test its fitness towards that hypothetical distribution. But, in sampling, that would be unethical or frowned upon. Then again, I’m not a statistician, so it just seems that way to me.

I discussed geometry change in Geometry and numerous other posts. But, in hunting up things for this post, I ran across this figure (Geometry Evolution). I usually looked at the two-dimensional view of the underlying geometries, so this three-dimensional view is interesting. Resize each geometry as necessary and put them inside each other. The smallest would be the hyperbolic geometry. The largest geometry, the end containment, would be the spherical geometry. That would express the geometries differentially in the order that they would occur in the technology adoption lifecycle (TALC), working from the inside out. Risk diminishes in this order as well.

Geometry Evolution w TALC

In the above figure, I’ve correlated the TALC with the geometries. I’ve left the technical enthusiasts where Moore put them, rather than in my underlying infrastructural layer below the x-axis. I’ve omitted much of Moore’s TALC elements, focusing on those placing the geometries. The early adopters are part of their vertical. Each early adopter owns their hyperbola, shown in black, and seeds the Euclidean of their vertical, shown in red, or the normal of the vertical (not shown). There would be six early adopter/verticals rather than just the two I’ve drawn. The thick black line represents the aggregation of the verticals needed before one enters the tornado, a narrow phase at the beginning of the horizontal. The center of the Euclidean cylinder is the mean of the aggregate normal representing the entire TALC, aka the category born by that particular TALC. The early phases of the TALC occur before the mean of the TALC. The late phases start immediately after the mean of the TALC.

The Euclidean shown is the nascent seed of the eventual spherical. Where the Euclidean is realized is at a sigma of one. I used to say six, but I’ll go with one for now. Once the sigma is larger than one, the geometry is spherical and tending to more so as the sigmas increase.

From the risk point of view, it is said that innovation is risky. Sure, discontinuous innovation (hyperbolic) has more risk than continuous (Euclidean), and commodity continuous (spherical) has less risk still. Quantifying risk, the hyperbolic geometry gives us an evolution towards a singular success. That singular success takes us to the Euclidean geometry. Further data collection takes us to the higher-sigma normals, the spherical space of multiple pathways to numerous successes. The latter, the replications, being hardly risky at all.

Concentric

Nesting these geometries reveals gaps (-) and surpluses (+).


The Donut/Torus Again

In an earlier post, I characterized the overlap of distributions used in statistical inference as a donut, as a torus, and later as a ring cyclide. I looked at a figure that described a torus as having positive and negative curvature.


So the torus exhibits all three geometries. Those geometries transition through the Euclidean.

Torus 2

The underlying distributions lie on the torus as well. The standard normal has a sigma of one. The commodity normal has a sigma greater than one. The saddle and peaks refer to components of a hyperbolic saddle. The statistical process proceeds from the Poisson to the standard normal to the commodity normal. On a torus, the saddle points and peaks are concurrent and highly parallel.

Torus 3

Enjoy.

The Average, or the Core

August 4, 2017

Tonight I ended up reading some of the Wolfram MathWorld discussion of the Heaviside Step Function, among other topics. I only read some of it, like most things on that site, because I bump into the limits of my knowledge of mathematics. But, the Heaviside step function screamed loudly at me. Well, the figure did, this figure.

Cute

Actually, the graph on the left. The Heaviside step function can look like either depending on what one wants to see or show.

The graph on the left is interesting because it illustrates how the average of two numbers might exist while the reality at that value doesn’t. Yes, I know, not quite, but let’s just say the reality is the top and bottom line, and that H(x)=1/2 value is a calculated mirage. All too often the mean shows up where there is no data value at all. Here, the mean of 0 and 1 is (0+1)/2. When we take the situation to involve the standard normal, we know we are talking about a measurement of central tendency, or the core of the distribution. That central tendency or core in our tiny sample is a calculated mirage. “Our average customer …” is mythic, a calculated mirage of a customer in product management speak.

Cute w Normal

Here I put a standard normal inside the Heaviside step function. Then, I show the mean at the H(x)=1/2 value of the Heaviside step function. The core is defined by the inflection points of the standard normal.

The distribution would show skew and kurtosis since n=2. A good estimate of the normal cannot be had with only two data points.

More accurately, the normal would look more like the normal shown in red below. The red normal is higher than the standard normal. The height of the standard normal shown in blue is around 0.4. The height of the green normal is about 0.2. The red normal is around 0.8. I’ve shown the curvature circles generated by the kurtosis of the red distribution. And, I’ve annotated the tails. The red distribution should appear more asymmetrical.

More accurately

Notice that the standard deviations of these three distributions drive the heights of the distributions. The kurtosis clearly does not determine the height, the peakedness or flatness, of the distribution, but too many definitions of kurtosis define it as peakedness, rather than as the separation between the core and the tails. The inflection points of the curve divide the core from the tails. In some discussions, kurtosis divides the tails from the shoulders, and the inflection points divide the core from the shoulders.
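
A minimal check of the heights cited above, assuming the green normal has a sigma of 2 and the red a sigma of 0.5, which are my readings of the figure: a normal pdf peaks at 1/(sigma * sqrt(2 * pi)), so sigma alone drives the height.

```python
# Peak height of a normal pdf depends only on its standard deviation.
import math

for label, sigma in [("blue (standard)", 1.0), ("green", 2.0), ("red", 0.5)]:
    peak = 1 / (sigma * math.sqrt(2 * math.pi))
    print(f"{label}: peak height = {peak:.2f}")   # 0.40, 0.20, 0.80
```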

To validate a hypothesis, or bias ourselves to our first conclusion, we need tails. We need the donut. But, before we can get there, we need to estimate the normal when n<36 or we assert a normal when n≥36; otherwise, skew and kurtosis risks will jerk our chains. “Yeah, that code is so yesterday.”

And, remember that we assume our data is normal when we take an average. Check to see if it is normal before you come to any conclusions. Take a mean with a grain of salt.

Convolution

Another find was an animation illustrating convolution from Wolfram MathWorld “Convolution.” What caught my eye was how the smaller distribution (blue) travels through the larger distribution (red). That illustrates how a technology flows through the technology adoption lifecycle. Invention of a technology, these days, starts outside the market and only enters a market through the business side of innovation.

The larger distribution (red) could also be a pragmatism slice where the smaller distribution (blue) illustrates the fitness of a product to that pragmatism slice.

convgaus

The distributions are functions. The convolution of the two functions f*g is the green line. The blue area represents “the product g(tau)f(t-tau) as a function of t.” It was the blue area that caught my eye. The green line, the convolution, acts like a belief function from fuzzy logic. Such functions are subsets of the larger function and never exit that larger function. In the technology adoption lifecycle, we eat our way across the population of prospects for an initial sale. You only make that sale once. Only those sales constitute adoption. When we zoom into the pragmatism step, the vendor exits that step and enters the next step. Likewise when we zoom into the adoption phase.
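
A small numerical sketch of that animation, assuming numpy; the grid and sigmas are arbitrary. Convolving two Gaussian densities yields a Gaussian whose variance is the sum of the two, which is why the green line stays inside and below the wider curve:

```python
# Numerically convolve two Gaussian densities and check the variances add.
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)            # sigma = 1
g = np.exp(-x**2 / (2 * 4)) / np.sqrt(2 * np.pi * 4)  # sigma = 2

conv = np.convolve(f, g, mode="same") * dx            # (f*g)(t) on the grid
var = np.sum(x**2 * conv) * dx / (np.sum(conv) * dx)
print(f"variance of f*g ≈ {var:.2f}  (expect 1 + 4 = 5)")
```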

Foster defined disruption as the interval when a new technology’s s-curve is steeper than the existing s-curve. We can think of a population of s-curves. The convolution would be the lesser s-curves, and the blue area represents the area of disruption. Disruption can be overcome if you can get your s-curve to exceed that of the attacker. Sometimes you just have to realize what was used to attack you. It wasn’t the internet that disrupted the print industry; it was server logs. The internet never competed with the print industry. Foster’s disruptions are accidental happenings when two categories collide. Christensen’s disruptions are something else.

Enjoy.

Notes on the Normal Distribution

July 24, 2017

Pragmatism Slices and Sales

Progress through the technology adoption lifecycle happens in terms of seats and dollars. If you use alternate monetizations, rather than selling your product or service, drop the dollars consideration. Beyond those monetizations, even if you sell your product or service, dollars are flaky in terms of adoption. But the x-axis is about population, aka seats.

Sales drive the rate of adoption in the sense that a sale moves the location of the product or service, the prospect’s organization(s), and the vendor’s organization(s) under the curve. By sales, I mean the entire funnel from SEO to the point where the sales rep throws the retained customer under the bus. But, I also mean initial sales, the point where prospects become customers. That sale moves adoption from left to right, from the early phases towards the late phases, from category birth to category death.

But, there are two kinds of sales: the initial sale, aka the hunter sale, and the upgrade sale, aka the farmer sale. What struck me this week was how the farmer sale does absolutely nothing in regards to the progress of the various entities’ locations under the adoption curve. So let’s look at this progress.

Sales

People in a pragmatism slice reference each other. They do not reference people in other pragmatism slices.

In the figure, the hunter sales move the adoption front across the adoption lifecycle from left to right. The hunter sales rep made four sales. The farmer sales rep made four sales as well that generated revenues, but no movement across the lifecycle.

Growth

The size of the normal representing the addressable markets in the technology adoption lifecycle is fixed. It does not grow. A single company has a market allocation that tells us how much of that normal they own. With discontinuous innovations, that allocation to the market leader maxes out at 74%. Beyond that, antitrust laws kick in. Such a market leader would be a near-monopolist. Their market leadership will be the case until they exit the category, or face a Foster disruption. Intel was the market leader until NVIDIA brought a different technology to market. With continuous innovations, we are dealing with many players in a commodity market. The allocations are small. Market leaders can change every quarter.

Growth

In this figure, I started with a standard normal distribution (dark yellow) representing 100% of a category’s market. I represented the near monopolist’s market allocation of 74% as a normal distribution (light blue) inside of the larger normal. Then, I drew the circles (orange and blue) representing the curvature of the kurtoses of these distributions. The light blue distribution cannot get any larger. It is shown centered at the mean of the category’s normal. It could be situated anywhere under the category’s normal. Once a vendor has sold more than 50% of its addressable market, that vendor starts looking for ways to grow, ways to move the convergence of the vendor’s distribution as far to the right as possible. They try to find a way to lengthen the tail on the right. They run into trouble with that.

While a normal distribution represents the technology adoption lifecycle, the probability mass gets consumed as sales are made. The probability mass to the left has been consumed. So there is very little mass to allocate to the tail. In placing those curvature circles, I looked for the inflection points and made the circles tangent to the normals there. For the proposed tail, I drew its curvature circle. The thick black line from the mean to the topmost inflection point doesn’t leave enough probability mass to allocate to the tail, so the tails would be lower and the curvature circle would be larger. The thick red line from the mean to the bottommost inflection point leaves enough probability mass to allocate to the tail. It’s important that the curves represented by the black and red lines be smooth.

The points of convergence for the 74% normal, the 100% normal, and the long tail appear below the x-axis of the distribution. The mass between the convergences of the 100% normal and the long tail is outside the category’s normal distribution. The normal-under-the-normal model used a kurtosis of zero. But, with the long tail, the kurtosis is no longer zero. That growth is coming from something other than the product or service of the vendor. And, the mass in the tail would not come from the normal inside the category’s normal. The normal was deformed when the mass was allocated towards the tail. But, again, that still does not account for the mass beyond the category normal. That mass beyond the category normal is black swan like and hints towards skew risk and kurtosis risk. Look for it in the data. These distributions just show a lifecycle of the category and vendor normals. The data should reflect the behaviors shown in the model. The pragmatism slices move as well. Taking a growth action that concatenates the tail can dramatically change your phase in the technology adoption lifecycle. Each phase change requires some, possibly massive, work to get the products and services to fit the phase they find themselves addressing.

Booms stack the populations in the technology adoption lifecycle. See Framing Post For Aug 12 Innochat: The Effects of Booms and Busts on Innovation for that discussion.

I drew my current version of Moore’s adoption lifecycle.

The Technology Adoption Lifecycle

Moore built his technology adoption lifecycle on top of Rogers’ model of the diffusion of innovation. Rogers identified the populations involved in technology adoption: the innovators, early adopters, early and late majorities, and laggards. Moore went further and teased out the technical enthusiasts and the phobics. Moore changed the early majority to vertical markets and the late majority to horizontal markets. Moore identified several structural components like the bowling alley, the chasm, and the tornado.

I’ve made my own modifications to Moore’s model. The figure is too abundant. Another incidence of my drawing to think, rather than to communicate.

TALC setup

The technology adoption lifecycle provides the basis for the figure. The technology adoption lifecycle is about the birth, life, and death of categories that arise from discontinuous innovation. This leaves aside the categories that can be created via management innovation discussed in an HBJ article over the last year. A category is competed for during the Tornado and birthed when market power selects the market leader. Immediately after the birth of a category, the competing companies consolidate, or exit. Their participation in the category ends. The category can live a long time, but eventually, the category dies. Its ghost disappears into the stack. The horse is still with us. Disruption is a means of killing a category, not about competing in the disrupted category. Disruption happens to adjacencies, not within the category sponsoring the disruptive discontinuous innovation.

The populations are labeled with red text. Most of the phase transitions are shown with red vertical lines. The transition to the early majority is shown with a black line, also labeled “Market Leader Selected.” The vertical labeled with red text consists of the early adopter (EA) and the next phase that Moore called the vertical market. Some technical enthusiasts would be included in the vertical as well, but are not shown here as such.

Notice that I’ve labeled the laggard phase “device” and the phobic phase “cloud.” The cloud is the ultimate task sublimation. The device phase is another task sublimation. These are not just form factors. They are simpler interfaces for the same carried use cases. The carrier use cases are different for every form factor. Moving from the early majority to the late majority phases also involves task sublimation, as described by Moore. Laggards need even simpler technology than consumers. Phobics don’t want to use computers at all. The cloud provides admin-free use. The cloud is about the disappearance of both the underlying technology in the carrier layer and the functionality in the carried layer. Notice that after the cloud the category disappears. There are no remaining prospects to sell to.

The technical enthusiasts, as defined by Moore, were a small population at the beginning of the normal. But, there are technical enthusiasts in the Gladwell sense all the way across the lifecycle. They are a layer, highlighted in orange, not a vertical slice, or phase. I’ve shown both views of the technical enthusiasts. The IT horizontal people would show up as technical enthusiasts if the product or service was being sold into the IT horizontal. This distinction is made in my Software as Media model. The technical enthusiasts are concerned with the carrier layer of the product or service.

Moore’s features are shown as brown rectangles. These features include the chasm, the tornado, and the bowling alley. Specific work, tactics, and strategies address the chasm, the tornado, and the bowling alley. These are labeled as pre-chasm, pre-tornado, and keeping the bowling alley full. They show up as blue rectangles. Another feature stems from de-adoption, the “Need (for a) New Category,” and appears as a blue rectangle. This latter feature happens because nothing was done to create a new category before it was needed. Or, such an effort failed. The point of keeping the bowling alley full is to create new categories based on discontinuous innovation on an ongoing basis. I’ve seen a company do this. But, these days discontinuous innovation is very rare. Discontinuous innovations can, but do not always, cause (Foster) disruptions. Christensen’s disruptions happen in the continuous innovation portion of the adoption lifecycle.

The lifecycle takes a discontinuous innovation to market and keeps the category on the market via continuous innovation. Plant the seed (discontinuous), harvest the yield (continuous). This division of the lifecycle is labeled in white text on a black rectangle towards the bottom of the figure. Discontinuous innovation generates economic wealth (inter-). Continuous innovation generates an accumulation of cash (intra-). A firm does not own the economic wealth it generates. That economic wealth is shared across firms. I am unaware of any accounting of such.

At the very top of the lifecycle, the early and late phases are annotated. The early phases constitute the growth phase of the startup. The late phases constitute the decline phase. The decline phase can be stretched out, as discussed in the previous section. When the IPO happens in the early phases, but not before the Tornado, the stock price sells at a premium. When the IPO happens in the late phases, the stock price does not include such a premium. The Facebook IPO bore this out. It’s typical these days, these days of continuous innovation, that no premium is involved.

Founders, at least in carrier businesses built on discontinuous innovation, are engineers, not businessmen, so at some point, they have to hire businessmen to put the biz orthodoxy in place. VCs these days require a team that is already orthodox. The hype before the Shake Shack IPO demonstrates that innovation has moved on from software. Orthodox businesses are now seen as innovative, but only in the business model, continuous innovation sense. Shark Tank and VCs don’t distinguish the technology startup from other startups. The innovation press confuses us as well. It used to be that the CFO and one other person had an MBA; now everyone has one. But, in an M&A, the buyer doesn’t want to spend a year integrating the business they just bought. The merger won’t succeed unless the buyer can launch their own tornado and bring in new customers in the numbers they need. The orthodoxy needs to be in place at least a year before the IPO, or the stock price will underperform its IPO price a year after the IPO.

From a statistical point of view, the process of finding a new technology involves doing Levy flights, aka a particular kind of random walk, until that new technology is found. It should not be related to what you are doing now, aka to your install base. You are building a brand new company for your brand new category. Google’s Alphabet does this. Your company would become a holding company. Managing the diversity inherent in the technology adoption lifecycle becomes the problem. “No, that company is in a different phase, so it can’t do what our earlier company does now.” Contact me to find out more.
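
A loose sketch of such a Levy flight, assuming numpy; the heavy-tailed step distribution (a Pareto tail) and its exponent are my choices. Mostly short local steps, occasionally a huge jump to somewhere unrelated to the install base:

```python
# A 2-D heavy-tailed random walk: many small steps, rare enormous ones.
import numpy as np

rng = np.random.default_rng(3)
steps = 1_000
lengths = rng.pareto(1.5, size=steps) + 1          # heavy-tailed step lengths
angles = rng.uniform(0, 2 * np.pi, size=steps)     # uniformly random directions
xy = np.cumsum(np.column_stack([lengths * np.cos(angles),
                                lengths * np.sin(angles)]), axis=0)
print("final position:", xy[-1], " largest single step:", lengths.max())
```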

After the Levy flights, we search for early adopters. Use Poisson games to look at that. The Poisson distributions tend to the normal. Those normals become higher-dimensional normals. The standard normal has six sigma; the later normals in later phases of the lifecycle have more than six sigma. These divisions translate into geometries. The nascent stages of the lifecycle occur in a hyperbolic geometry where the distant is small from the Euclidean perspective generated by the inherent L2 geometry of linear algebra. Artists see the distant-as-small reality in perspective drawings. They call that foreshortening. We foreshorten our financial forecasts, and small is bad. But, as the Poisson becomes a normal, those financial forecasts stop foreshortening. The idea we threw away becomes obviously invaluable after the founder builds a market, a technology, a product or service, a company, value chains,… The distributions change, and the geometries change. Once you move beyond six sigma, the geometry becomes spherical. In such a geometry, there are many ways for followers with different strategies to win. We start with a very narrow way to win in the hyperbolic, arrive at the one way to win in the Euclidean, and find ourselves in the many ways to win in the spherical. Or, damn, so many fast followers, geez.

Last but not least, we come to the Software as Media model. Media is comprised of carrier layers and carried content layers. The phases of the adoption lifecycle change layers when they change phases. The technical enthusiast is about the carrier layer; the early adopter, the content layer; the vertical, the content layer; the horizontal, the carrier layer; the device, both; and the cloud, carrier. At the point where you need another category, it could be either. But, these oscillations involve the market and the way the vendor does business. Each phase is vastly different. The past has nothing to do with the present. Yes, the practices were different, but they fit their market. They were not better or worse unless they did not fit their market.

Designers whining about the 80’s were not around then. They take today’s easiness as a given and think the past should have been done their way. The past taught. We learned. And, as we cross the technology adoption lifecycle, an Ito process, the memories are deep. We learned our way here. And, when we repeat the cycle, our organizations are not going to start over. They don’t have to if properly structured. Call me on that as well. But, usually they don’t start over from scratch, but should, because they forgot the prior phase as they moved to the next.

Enjoy.

The Curvature Donut

July 23, 2017

In last month’s The Cones of Normal Cores, I was visualizing the cones from the curvatures of a skewed normal to the eventual curvatures of a standard normal distribution. The curvatures around a standard normal appear as a donut, or a torus. Those curvatures are the same all the way around the normal in a 3-D view. That same donut around a skewed normal appears as a deformed donut, or a ring cyclide. In the skewed normal, the curvatures differ from one side to the other. These curvatures differ all the way around the donut.

The curvature donut around the standard normal sits flatly on the x-axis and touches the inflection points of the normal curve. Dropping a line from an inflection point down to the x-axis provides us with a point; a line 45 degrees above the x-axis from that point is where the origin of the circle of the particular curvature would be.
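
A quick symbolic check, assuming sympy, that the touch points sit where the post says they do: the inflection points of a normal pdf land at exactly one sigma on either side of the mean.

```python
# Solve f''(x) = 0 for a normal pdf; the roots are mu - sigma and mu + sigma.
import sympy as sp

x = sp.symbols('x', real=True)
mu = sp.symbols('mu', real=True)
sigma = sp.symbols('sigma', positive=True)
pdf = sp.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * sp.sqrt(2 * sp.pi))
inflection_points = sp.solve(sp.Eq(sp.diff(pdf, x, 2), 0), x)
print(inflection_points)  # [mu - sigma, mu + sigma]
```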

The curvature donut of a skewed normal would sit flatly on the x-axis, but might be tilted, as the math behind a ring cyclide is symmetrical to another axis running through the centers of the curvatures. In January’s Kurtosis Risk, we looked at how skew is a tilt of the mean by some angle theta. This tilt is much clearer in More On Skew and Kurtosis. That skewness moves the peak and the inflection points, but the curve stays smooth.

So I’m trying to overlay a 2-D view of a skewed distribution on a 3-D view of a ring cyclide.

Ring Cyclide

I’ve used a red line to represent the distribution. The orange areas are the two tails of the 2-D view. The curvatures show up as yellow circles. The inflection points on the distribution are labeled “IP.” The core is likewise labeled, although the lines should match that of the tilted mean.

I think as I draw these figures, so this one has a gray area and a black vertical line on the ring cyclide that are meaningless. Further, I have not shown the orientation of the ring cyclide as sitting flat on the x-axis.

The ring cyclide occurs when skewness and kurtosis occur. A normal distribution exhibits skewness and kurtosis when the sample size, N, is less than 36. When N<36, we can use the Poisson to approximate or estimate the normal. Now, here is where my product management kicks in. We use Poisson games in Moore’s bowling alley to model Moore’s process as it moves from the early adopter to the chasm. The chasm is the gateway to the vertical market that the early adopter is a member of. We stage-gated that vertical before we committed to creating the early adopter’s product visualization. We get paid for creating this visualization. It is not our own. The carried component always belongs to the client. The carrier is our technology and ours alone.

So let’s look at this tending to the normal process.

Conics as Distribution Tends to Normal

I was tempted to talk about dN and dt, but statistics kids itself about differentials. Sample size (N) can substitute for time (t). The differentials are directional. But, in statistics, we take snapshots and work with one at a time, because we want to stick to actual data. Skew and kurtosis go to zero as we tend to the standard normal, aka as the sample size gets larger. Similarly, skew risk and kurtosis risk tend to zero as the sample size gets larger.
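
A quick simulation sketch of that snapshot-by-snapshot tending, assuming numpy and scipy; the seed and sample sizes are arbitrary:

```python
# Sample skew and excess kurtosis shrink toward zero as N grows.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(7)
for n in (10, 36, 100, 1_000, 10_000):
    sample = rng.standard_normal(n)
    print(f"N={n:>6}  skew={skew(sample):+.3f}  "
          f"excess kurtosis={kurtosis(sample):+.3f}")
```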

The longer conic represents the tending to normal process. The shorter conic tends to work in the inverse direction from the normal to the skewed normal. Here direction is towards the vertex. In a logical proof, direction would be towards the base.

The torus, the donut associated with the standard normal, like its normal, is situated in Euclidean space. However, the ring cyclide is situated in hyperbolic space.

An interesting discussion came up on Twitter earlier this week. The discussion was about some method. The interesting thing is what happens when you take a slice of the standard normal as a sample. The N of that slice might be too small, so skew and kurtosis return, as do their associated risks. This sample should remain inside the envelope of the standard normal, although it is dancing. I’m certain the footprints will remain inside. I’m uncertain about the cores in the vertical sense. Belief functions of fuzzy logic do stay inside the envelope of the base distribution.

Another product manager note: that slice of the standard normal happens all the time in the technology adoption lifecycle. Pragmatism orders the adoption process. Person 7 is not necessarily seen as an influencer by person 17. This happens when person 17 sees person 7 as someone that takes more risk than they or their organization does. They are in different pragmatism slices. Person 17 needs different business cases and stories reflecting their lower risk willingness. These pragmatism slices are a problem in determining who to listen to when defining a product’s future. We like to think that we code for customers, but really, we code for prospects. Retained customers do need to keep up with carrier changes, but the carried content, the use cases and conceptual models of carried content, rarely changes. The problem extends to content marketing, SEO, ancillary services provided by the company, and sales qualifications. Random sales processes will collide with the underlying pragmatism structure. But, hey, pragmatism, aka skew and kurtosis, is at the core of the problems with Agile not converging.

In terms of the technology adoption lifecycle, the aggregated normal that it brings to mind is actually a collection of Poisson distributions and a series of normal distributions. The footprint, the population of the aggregated normal, does not change over the life of the category. Provided you are not one of those who leave their economy of scale with a pivot. Our place in the category is determined in terms of seats and dollars. When you’re beyond having sold 50% of your addressable population, you are in the late market. The quarter where you left the early market and entered the late market is where you miss the quarter and where the investors are told various things to paper over our lack of awareness that the lost quarter was predictable.

If you know anything about the ceiling problem, the sample distribution reaching beyond the parent normal, let me know.

I’ve actually seen accounting visualizations showing how the Poissons tend to the normal.

Enjoy.

The Postmodern UI

July 8, 2017

A tweet dragged me over to an article in The New Republic, a journal that I’m allergic to. But the article, America’s First Postmodern President, which I read with my product manager hat on, is about the postmodern world we live in, that world one of constant, high-dimensional, directionless change. And, it became obvious to me that I’m not a postmodernist, while Agile is exactly that, postmodernist, so our software products reflect that.

No politics here. The quotes might go that way, but I will annotate the quotes to get us past that. I’ll ignore the politics. Here the discussion will be product, UI, design, Agile.

For Jameson, postmodernism meant the birth of “a society of the image [textual/graphical/use case] or the simulacrum [simulation] and a transformation of the ‘real’ [the carried content] into so many pseudoevents.” Befitting the “postliteracy [Don’t make me read/YouTube it] of the late capitalist world,” the culture of postmodernism would be characterized by “a new kind of flatness or depthlessness [no hierarchy, no long proofs/arguments/logics/data structures/objects], a new kind of superficiality [the now of the recursion, the memorylessness of that recursion’s Markov chain] in the most literal sense” where “depth [cognitive model/coupling width/objects] is replaced by surface [UI/UX/cloud–outsourced depth].” Postmodernism was especially visible in the field of architecture, where it manifested itself as a “populist” revolt “against the elite (and Utopian) austerities of the great architectural modernisms: It is generally affirmed, in other words, that these newer buildings [applications/programs/projects/products/services] are popular works, on the one hand, and that they respect the vernacular of the American city fabric, on the other; that is to say, they no longer attempt, as did the masterworks and monuments of high modernism [No VC funded, logarithmic hits out of the financial ballpark], to insert a different, a distinct, an elevated, a new Utopian language into the tawdry and commercial sign system [UX as practiced now] of the surrounding city, but rather they seek to speak that very language, using its lexicon and syntax as that has been emblematically ‘learned from Las Vegas [for cash and cash alone, no technological progress/reproduction by other people’s means].’”

And,

For Baudrillard, “the perfect crime” was the murder of reality, which has been covered up with decoys (“virtual reality” and “reality shows” [and UIs]) that are mistaken for what has been destroyed. “Our culture of meaning is collapsing beneath our excess of [meaningless] meaning [and carrier impositions], the culture of reality collapsing beneath the excess of reality, the information culture collapsing beneath the excess of information [multiplicities in the spherical geometry where every model models correctly in the financial/cash sense]—the sign and reality sharing a single shroud,” Baudrillard wrote in The Perfect Crime (1995)…[political cut].

What a mess. It helped that this morning, in those Saturday morning, light-weight introspective moments, the notion of objects being bad and the reassertion of functional programming was leaving us with data scattered in the stack via recursion, and the now of the current system stack with nothing to see of how we got here. But, hey, no coupling between functions through the data structure, something I never thought about until some mention in the last two weeks. Yes, the alternative to static would do that, no matter how dynamic.

Those gaps, the architecture enabling us to escape those tradeoffs we make in our products, the slowness of feedback from our users, and the feedback from the managers as if they were users–a flattening–all disappear when we go postmodern, when we go flat. That jack in your car becomes worthless when your emergency tire goes flat.

Still, I don't like surface without depth; the absence of a cognitive model; the painted-on UI; the erasure of the deep UX/CX/BX/MX/EX; the surface of machine learning; and programmers writing up other people's disciplines as if those disciplines don't matter, as if those years spent in school learning a discipline don't matter, as if the epistemic/functional cultures don't matter–but, of course, they don't matter, because the programmer knows all the content they encode, and management lays off all the content anyway, ending those Markov chains and filling resumes so full of cheap labor jobs that you can't see the underlying person. Thirty years of doing something, the depth, forgotten because seven years have passed, still leaves depth, but hiring passion over experience gets us to that postmodernist surface. Oh, well. When better is surface, when success is reality TV, when…

The danger of a sweeping theory like postmodernism is that it can produce despair.

But, that’s where we are this morning, sweeping theory, not despair.

 

The Cones of Normal Cores

June 23, 2017

A few days ago, I drew a quick sketch about constraints, symmetries, and asymmetries. Discontinuous inventions break a physical constraint, change the range of a physical constraint, weaken a physical constraint, or bend a physical constraint. That discontinuous invention goes on to become a discontinuous innovation once it escapes the lab and businesspeople build a business around it. Asymmetries present us with the necessity of learning.

[Figure: 01 Symmetry]

So we start with a rotational symmetry out in infinite space. This is the space we seek in the economic sense, the theory yet faced with the realities of practice, the desired, the sameness, the undifferentiated, the mythical abundance of the commodity. We could rotate that line in infinite space and never change anything.

[Figure: 02 Asymmetry]

Reality shows up as a constraint deforming the infinite space and the symmetry into an asymmetry, an asymmetry we are not going to understand for a while. Not understanding will lead any learning system through some lessons until we understand. Not understanding makes people fear.

[Figure: 03 Distributions]

The symmetry generates data supporting a normal distribution. When the symmetry encounters the constraint, the density is reflected at the boundary of the constraint. That increases the probability density, so the distribution exhibits skew and kurtosis.
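
That reflection is easy to simulate. A minimal sketch, assuming the constraint is a hard boundary that folds the density back on itself; the boundary placement is arbitrary:

```python
# Reflecting a symmetric normal at a constraint boundary: the folded
# sample picks up skew and excess kurtosis, as described above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)

boundary = 1.0                               # constraint one sigma out
folded = np.where(x > boundary, 2 * boundary - x, x)

print(f"skew:            {stats.skew(folded):+.3f}")    # ~0 before folding
print(f"excess kurtosis: {stats.kurtosis(folded):+.3f}")
```

Before the fold both statistics sit near zero; after it, the piled-up density at the boundary shows up as nonzero skew and kurtosis.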

The normal distribution of the symmetry is shown in light aqua. The skewed distribution is shown in a darker aqua.

[Figure: 04 Curvatures]

The skewed distribution exhibits kurtosis, which involves a minimum curvature at the shoulder between the core of the distribution and the long tail of that distribution, and a maximum curvature at the shoulder between the core of the distribution and the short tail of that distribution. The short tail falls off steeply, so it bends sharply; the long tail falls off slowly, so it bends gently.

With a discontinuous innovation, we enter the early adopter market via a series of Poisson games. The core of a Poisson distribution, from a top down view, would be a small circle. Those Poisson distributions tend to the normal, aka become a normal distribution.

In the previous figure, we annotated these curvatures with circles having the given curvature. The normal distribution gives us two circles with the same curvature, since the normal is symmetric; the tail of the normal can be considered to be rotated around the core. The skewed distribution gives us a circle on the long tail side of the core that is larger than the normal's, and a circle on the short tail side that is smaller than the normal's.
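
As a rough numerical check, here's a sketch that computes the curvature of a skewed density and sizes the osculating circles (radius = 1/curvature) on each side of the core. The skew-normal shape parameter is an arbitrary stand-in for the constrained distribution:

```python
# Curvature circles of a skewed density: find the curvature local maxima
# (the shoulders) on each side of the mode and report the circle radii.

import numpy as np
from scipy import stats, signal

xs = np.linspace(-4, 6, 4001)
pdf = stats.skewnorm.pdf(xs, a=4)            # a=4: long tail to the right

d1 = np.gradient(pdf, xs)
d2 = np.gradient(d1, xs)
kappa = np.abs(d2) / (1 + d1**2) ** 1.5      # plane-curve curvature

peaks, _ = signal.find_peaks(kappa)
mode_idx = int(np.argmax(pdf))
for p in peaks:
    if abs(p - mode_idx) < 100:              # skip the core's own peak
        continue
    side = "short-tail" if xs[p] < xs[mode_idx] else "long-tail"
    print(f"{side} shoulder at x={xs[p]:+.2f}, circle radius {1/kappa[p]:.2f}")
```

The long-tail side comes back with the larger radius, the gentler bend, matching the circle sizes in the figure.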

[Figure: 05 Cones]

These curvature circles generate conics, aka cones. Similarly, the Poisson distribution is the tip of the cone, and the eventual normal is the base of the cone. The technology adoption process generates a cone that gets larger until we've sold fifty percent of our addressable market. The base of the cone gets larger as long as we are in the early phases of the technology adoption lifecycle. Another cone on the same axis and using the same base then gets smaller and comes to a tip as the underlying technology is further adopted in the late phases and finally is deadopted.

[Figure: 06 Birth and Death of a Category]

The early tip represents the birth of the category; the later tip represents the death of the category. The time between birth and death can be more than fifty years. These days, the continuous innovations we bring to market in the late mainstreet phase of the technology adoption lifecycle last only as long as VC funding can be had, or no more than ten years beyond the last round of funding. All of that occurs inside the cone that shrinks its way to the death of the category.

[Figure: 07 Multiple Constraints]

We innovate inside a polygon, so we involve ourselves with more than one constraint. We will look at the distributions involved from the top down, looking at the circles that constitute those distributions. The normal distributions are represented by circles. Poisson distributions are represented by much smaller circles. Technology adoption moves from a small footprint, a small circle, to a large footprint, a large circle.

Notice that as time passes on the adoption side of the technology adoption lifecycle, the distribution gets larger. Likewise, on the deadoption side, the distribution gets smaller. Smaller and larger would be relative to sample size and standard deviations. The theta annotated in the diagram indicates the current slope of the technology associated with that constraint and the productivity improvement of the technology's s-curve, aka price-performance curve, and by price we mean the invested dollars to improve the performance.
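
To make theta concrete, here's a sketch under the assumption that the price-performance curve is a simple logistic in cumulative invested dollars; the parameter values are made up for illustration:

```python
# Theta as the local slope of a logistic price-performance (s-) curve:
# performance as a function of cumulative invested dollars.

import math

def s_curve(invested, ceiling=100.0, midpoint=50.0, steepness=0.1):
    """Hypothetical logistic s-curve: performance vs. invested dollars."""
    return ceiling / (1.0 + math.exp(-steepness * (invested - midpoint)))

def theta(invested, h=0.01):
    """Numerical slope of the s-curve at the given investment level."""
    return (s_curve(invested + h) - s_curve(invested - h)) / (2 * h)

for dollars in (10, 50, 90):
    print(f"invested={dollars:3d}  performance={s_curve(dollars):6.2f}  "
          f"theta={theta(dollars):.3f}")
# The slope peaks at the midpoint and flattens as the technology matures.
```

A steep theta says the constraint is still being pushed back; a flat theta says the technology is running out of improvement per invested dollar.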

[Figure: 08 Zero-Sum Game]

Notice that when we pair adoption and deadoption, we are looking at a zero-sum game. The Poisson distribution would represent the entrant. The circle tangent to the Poisson distribution would represent the incumbent in a Foster disruption. The s-curves of both companies' competing technologies are still critical in determining whether a Foster disruption is actually happening or not, and the duration of such a disruption. Christensen disruptions are beyond the scope of this post.

I annotated a zero-sum game on the left, earlier in time. The pair of circles on the right are not annotated, but represent the same zero-sum game. There might be five or more vendors competing with the same technology. They might have entered at different times. Consider the market share formula Moore talked about in his books: the near monopolist gets 74%, and everyone else gets a similar allocation of the remainder.
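
One way to read that allocation, as arithmetic: the near monopolist takes 74%, the next vendor takes 74% of what remains, and so on down the rank order. The 74% figure is from the paragraph above; the recursive application of it is my assumption, so treat this as a sketch:

```python
# Hypothetical recursive reading of the market share allocation: each
# vendor in rank order takes 74% of whatever share remains.

def allocate_shares(num_vendors, leader_share=0.74):
    shares, remainder = [], 1.0
    for _ in range(num_vendors - 1):
        shares.append(remainder * leader_share)
        remainder -= shares[-1]
    shares.append(remainder)                 # the last vendor gets the rest
    return shares

for rank, share in enumerate(allocate_shares(5), start=1):
    print(f"vendor {rank}: {share:7.2%}")
# vendor 1: 74.00%, vendor 2: 19.24%, vendor 3: 5.00%, vendor 4: 1.30%, ...
```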

Notice that I used the terms core and orientation in the previous figure. The orientation would have to be figured out relative to the associated constraint. But the circles in each zero-sum game represent the curvature of the kurtoses involved, which drive the length of the tails of the distribution relative to a core.

[Figure: 09 Line as Core]

That core is much wider than shown in all but the weak-signal context of a Dirac function that indicates some changes to conditional probabilities.

The arrow attached to each kurtosis indicates the size of each as the distribution normalizes.

[Figure: 10 Rectangle as Core]

The core is usually wider. As it gets wider, the height of the distribution gets lower. The normalization of the standard normal, or the fact that the area under the distribution will always equal one, is what causes this. I did not change the kurtoses in the figure, but the thicker core implies progress towards the normal and less difference between the two kurtoses. The width of the range should stay the same throughout the life of the distribution once it begins to normalize. Remember that it takes 36 to 50 or so measurements before a sample normalizes. Various approximation methods help us to approximate the normal when we lack adequate data. Skewness and kurtosis will be present in all samples lacking sufficient measurements. Look for skewness and kurtosis in the feedback collected during Agile development efforts. The normal, in those circumstances, will inform us as to whether the functionality is done and deliverable.
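
The 36-to-50 rule of thumb is easy to watch in simulation. A rough sketch, drawing samples from a deliberately skewed parent and watching the sample means normalize; the exponential parent is my choice for illustration, not anything from the figures:

```python
# Watching a sample normalize: skew and excess kurtosis of sample means
# shrink toward the normal's 0 as the sample size n grows.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
for n in (5, 15, 36, 50):
    # 2,000 samples of size n from a skewed (exponential) parent.
    means = rng.exponential(scale=1.0, size=(2_000, n)).mean(axis=1)
    print(f"n={n:3d}  skew={stats.skew(means):+.3f}  "
          f"excess kurtosis={stats.kurtosis(means):+.3f}")
```

By the 36-to-50 range both statistics are small; residual skew in Agile feedback at that sample size is the not-done signal the paragraph above describes.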

[Figure: 11 Core Width]

Core width will change over the adoption lifecycle. I drew this figure thinking in terms of standard deviations. But the Poisson distribution is what we have at the early adopter phase of the lifecycle. In the vertical, we tend to the normal. In the horizontal, some complex data fusions give us a three or more sigma normal, and in the late phases we are in the six or more sigma range. The core width is correlated with time, but in the lifecycle, time is determined by seats and dollars and the lifecycle phase rather than calendar time. Note that I correlated the underlying geometries with time as well. Our financial analysis tells us to pass on discontinuous technologies, because the future looks small in the hyperbolic geometry we don't know we are looking at. Euclidean is easy. And the spherical geometry leaves us in banker numbers, in information (strategy) overload, aka 30 different approaches that all work. No, he wasn't lucky. He was spherical.

Enjoy.

Do we gerrymander our product’s market?

April 5, 2017

Notice: I make absolutely no political statements in the following. 

Of course, we gerrymander our product's market. We don't intend to represent all of our customers either. When we introduce a product, we pick who we will sell it to. We find some rules. Sales qualify their prospects with respect to those rules. Sales bring outliers to us, forcing us to say no to their deal or forcing us to redefine the product.

We prioritize. We tradeoff. We waste code. We waste prospects, customers, and users. All of these are our mechanisms for gerrymandering our product. We become insensitive to our prospects, customers, and users.

The technology adoption lifecycle organizes our prospects, customers, and users. With discontinuous innovations, our focus changes as we cross the technology adoption lifecycle. We start out in carrier or protocol, shift to carried content, then shift to carrier again, and subsequently shift back to carried content. We start out in a single form-factor and end broadly in many different form-factors. We start out with risk takers and end with those that take the least risk possible.

This latter characterization demonstrates the pragmatism scale underlying the entire technology adoption lifecycle.

With continuous innovations, typical these days, we don't do the whole lifecycle. We jump into the late phases and move to later phases. We act surprised when our offer suddenly has run through all of our addressable prospects. We surprise ourselves when we realize we need something new. Yes, even Apple has surprised itself this way many times since the first Apple computer.

But, here I'm talking about the pragmatism scale organizing our business with the phases of the lifecycle, not just the phases. The finer we go with this, the more likely a release will address prospects different from our consumers, and users with use cases organized in pragmatism slices, not just time slices. We end up with slices at one scale within slices of another scale. We end up with queues. We end up with boundaries.

Not attending to those boundaries results in gerrymandering, which in turn leaves us inattentive to opportunities for customization in use cases and pricing.

Mathematicians are addressing political gerrymandering now. See How to Quantify (and Fight) Gerrymandering.

Gerrymandering our products is a hard problem. The scales we use need to be aligned with our release cycle. Decide on scales. Then, get agreement on where the product and company are on the technology adoption lifecycle. Make a map. Know your pragmatism boundaries.

Moore described the pragmatism boundaries in terms of reference groups. Everyone in a particular slice refers to people, businesses, and business cases in their slice and nearby adjacencies. Each slice has its own evidence. This generates some communications isolations that grant us pricing isolations. Communications channels generate more boundaries, more to map.

The use cases served in the current slice will differ from the use cases in earlier slices. Yes, as time goes by the economic customer becomes more pragmatic, but then, so could the use cases and the marketing content.

To make matters harder, sales consumes each population at a different speed and might sell much more randomly without regard to lifecycle or pragmatism scale or communications channel considerations. Just a warning.

Growth would impact all of this. A prospect once sold is a customer ever after.

And, of course, all the talk of listening to customers et al. becomes a matter of where on our map that customer is speaking from. How does the map bundle that feedback? And, how does that feedback verify efforts?

Quite a mess, a profitable mess.