Archive for the ‘Uncategorized’ Category

July 1, 2020

Today, my inbox brought me the usual email from Medium. I went through the email, opening the links to stories I would read later. I clicked on Stop Using If-Else Statements. If-else seemed like a hot topic lately, so go ahead, show me. It did. Then, I crossed paths with Better Software Without If-Else: 5 Ways to Replace If-Else, Beginner to Advanced Examples. That article delivered as well.

Yes, I’m old, aka pre-structured programming, aka programming before if…then…else. Yeah, go back to computed gotos in Fortran. Please, no. If…then…else was cool until Lisp. Then, it was recursion. Even recursive Cobol. Well, my data processing professor at the junior college hated that. She was structured programming or else. Then came the object revolution. And, its get and put implementations that ended the revolution.

So reading those two articles would be cool if I were ever to code another line. But, it was something else: a view into the organizational structure of corporations. In Clayton Christensen’s The Innovator’s Dilemma, he mentioned separation as a key to innovation. So, in my posts here, I took the separation idea and ran with it. It ties nicely into the difference between discontinuous innovation (E in the second figure) and continuous innovation (L in the second figure) in the technology adoption lifecycle (TALC).

I did the same in another blog where I am putting my archived Strategy as Tires tweets. There the protagonist runs the parent company that brings discontinuous innovations to market one after another. There every phase of the TALC has its own division much like these if-else removals involve each state being an object. Each division knows how to do its business. Every division does its business its own way. Companies, technologies, products, and customers flow sequentially through the organizational structure.
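The if-else removals those articles describe give each state its own object, much as each division here owns its own way of doing business. A minimal Python sketch of that idea, with TALC-flavored names of my own invention (the articles use different examples):

```python
# Each TALC phase is a state object that knows how to do its own
# business; the caller never branches on a phase flag.

class Phase:
    def do_business(self, company):
        raise NotImplementedError

class BowlingAlley(Phase):
    def do_business(self, company):
        return f"{company}: seed vertical early adopters"

class VerticalMarket(Phase):
    def do_business(self, company):
        return f"{company}: deliver vertical applications"

class HorizontalMarket(Phase):
    def do_business(self, company):
        return f"{company}: sell one aggregated horizontal product"

# Companies flow sequentially through the divisions/states.
divisions = [BowlingAlley(), VerticalMarket(), HorizontalMarket()]
log = [d.do_business("NewCo") for d in divisions]
```

The point is the same as the division structure: no central if-else ladder decides what happens; each state carries its own behavior.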

The Poisson game that seeds the new discontinuous technology with early adopters is one of those processes near the edge of the normal distribution to come, the normal that represents the vertical phase of one technology. Each early adopter gets a position on the lane or in the queue(s) in the bowling alley. The first technology only has one lane until it exits the bowling alley. This serializes the early adopters. Once the company has more money, it can have more than one lane. Or, in if-else elimination speak, more than one state.
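A toy simulation of that serialization: early adopters arrive as a Poisson process, and a single lane serves them strictly one at a time. The arrival rate and time horizon are illustrative assumptions of mine, not numbers from the post:

```python
import random

random.seed(7)

# Toy Poisson game: an unknown number of early adopters arrive at
# random times. Exponential inter-arrival times yield a Poisson
# process.
def poisson_arrivals(rate, horizon):
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return arrivals
        arrivals.append(t)

arrivals = poisson_arrivals(rate=1.0, horizon=10.0)
lane = list(arrivals)  # one lane: strict first-in, first-out order
```

With one lane, the arrival order is the service order; adding lanes (states) is what parallelizes the queue.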

One discontinuous technology is sold to six early adopters, which results in six vertical applications that are aggregated into a single horizontal product. That horizontal product is sold into the IT horizontal phase or into the user-led growth phase. Here C (red) is the client engagement capabilities, BA (red) is the bowling alley, V (red) is the vertical market, and H (red) is the horizontal market.

That Poisson game is a game of an unknown number of players. That number grows and later shrinks. Then, like all things in the vertical phase, they disappear as the technology moves into the early mainstreet phase.

The software as media model divides the layers of the system into the carrier layer and the carried content. The focus on the layers changes when we transition across TALC phases. The layers have their own events. Phase transitions have their own events as well. But, after so many technologies, it all looks familiar. It starts to look like an API.

Anyway, read those articles. Read them even if you don’t know object orientation. Think about it in terms of business processes and business events. Imagine what your enterprise goes through. Imagine if all the questions you ask are really spatially and temporally organized places and times. Your answers trigger subsequent work. Or, the question presents you with a menu of options, so you order, and one thing, or one chain of events, happens.

“Do we ship today?”

“Well, QA says no!”


Aggregate Normals

June 27, 2020

Revised on 6/27/2020.

I’m reading up on all things “thick-tailed.” So I came across a paper that I had read and blogged about eons ago in “Nomials”. The article I am blogging about here is “Mean, Median, and Skew: Correcting a Textbook Rule.” Yes, it is the same one I blogged about back in 2018.

The first thing that jumped out at me was the distribution. I had to delineate the nomials.

The figure is different from the last time I wrote about this. The earlier post took the pink nomial to be three different nomials. I omitted two of them here.

The peaks understate the number of nomials. Every convex curve is a skewed distribution and counts as a nomial. So the aggregated distribution is a tri-nomial.

The base of a normal distribution is a circle. The base of a skewed distribution is an ellipse. The major axis of that ellipse orients the distribution. The orientation is difficult to determine without the underlying data.

The original figure calculated the mean, median, and mode for the aggregated data. The familiar textbook rule about the order of these statistics can become irrelevant. Given a collection of distributions, we can add them together when they

  • Have achieved normality
  • Have a mean of zero
  • Have a standard deviation of one.

The datasets achieve normality by being large enough. Or, those datasets contain normalized data. Normalizing the data assumes much.
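The bulleted rules can be sketched in Python: standardize each dataset to a mean of zero and a standard deviation of one, then pool. The datasets and their sizes are made up for illustration:

```python
import random
import statistics

random.seed(42)

# Standardize a dataset: mean 0, standard deviation 1.
def standardize(xs):
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

# Three datasets with different means and scales.
datasets = [
    [random.gauss(10, 2) for _ in range(5000)],
    [random.gauss(-3, 5) for _ in range(5000)],
    [random.gauss(0.5, 0.1) for _ in range(5000)],
]
pooled = [z for d in datasets for z in standardize(d)]

# The aggregate behaves like one standard normal.
m = statistics.fmean(pooled)
s = statistics.pstdev(pooled)
```

Without the standardization step, the pool would be a multi-nomial mix of three different normals rather than one standard normal.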

The normals in the figure do not comply with the bulleted rules. Every nomial has its own mean. They would also have their own standard deviations. And, the skewed distribution has not yet achieved normality. So why would the aggregate distribution comply with the textbook rules?

And, the math is about aggregated data. The distribution of that data would not present as shown in the article. It would be one big normal. A multidimensional normal would result. Adding its many dimensions would result in a distribution of the shape shown. It is good that the distributions still show up in the diagram.

When the distribution is a binomial, more analysis will separate the nomials.

Consider that such summed normals and the lifecycle of a normal lead us to the notion of synthetic data. We already use synthetic data in Monte Carlo methods, and with Bayesian analysis.

From my readings on regression to the tail, data science has ignored it. It becomes very obvious that premature normalization of data leads to hidden tails. Linear regression assumes that the distribution is normal and has no thick tails. When we normalize data, we assume much.
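One way to see the hidden tails, as a sketch: fit a normal to heavy-tailed data and compare the fitted normal’s tail prediction with the empirical tail frequency. A Student-t with 3 degrees of freedom stands in here for thick-tailed data; that choice is my assumption:

```python
import math
import random
import statistics

random.seed(0)

# Heavy-tailed stand-in data: Student-t with 3 degrees of freedom,
# built from a normal divided by the root of a scaled chi-square.
def student_t3():
    z = random.gauss(0, 1)
    w = sum(random.gauss(0, 1) ** 2 for _ in range(3)) / 3
    return z / math.sqrt(w)

data = [student_t3() for _ in range(100_000)]
mu, sd = statistics.fmean(data), statistics.pstdev(data)

# Empirical frequency beyond 4 fitted standard deviations...
empirical = sum(abs(x - mu) > 4 * sd for x in data) / len(data)
# ...versus the two-sided mass a fitted normal predicts (~6.3e-5).
normal_pred = math.erfc(4 / math.sqrt(2))
```

The empirical exceedance rate comes out orders of magnitude above the normal prediction: the fitted normal has hidden the tail.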

Datasets are also part of the problem. Datasets are not the data. Datasets hide the lifecycle of the distribution. Datasets hide the differential aspects of statistics.

VCs see discontinuous innovation as risky. The sample size is too small to produce a normal distribution. The small sample produces a skewed distribution. And, the distribution is in hyperbolic space. VCs are familiar with evaluating deals in Euclidean and spherical spaces. The VC’s financial analyses of discontinuous innovations understate future returns. The small sample sizes involved with discontinuous innovations cause this understatement. VCs do not realize this. That, in turn, suppresses their willingness to invest.

We are conflating discontinuous innovation with disruptive innovation. They are not the same thing. Discontinuous innovation leads to Foster disruptions. Christensen disruptions happen in continuous innovation phases. These very different disruptions produce very different results. Foster’s disruptions produce economic wealth. Christensen’s disruptions produce some cash and another serial entrepreneur. Foster disruptions produce premiums on their IPOs. Christensen disruptions produce no premiums on their IPOs. When investors did not get premiums on the IPOs of the companies they invested in, they created the new Silicon Valley (SV) financial market. Those investors want to profit from hype. They will use the VC money to sell the hype in the hopes of earning a premium. In the past, the recipient of VC funds had to create economic wealth to earn a premium.

Off-topic, I know.

But, this aggregating of normals matters in the technology adoption lifecycle (TALC). Each TALC phase has its own normal distribution. Each phase has a different managerial focus, so the data is different in each phase. But, when we add it all up, it looks like a monomial, not a hexinomial. Yet, it is not one standard normal. It is the sum of six normals.

The user-led growth phase provides another nomial. That takes the aggregated TALC distribution up to a seventh nomial. I once took user-led growth to be another tail. But, it is another phase, a seventh phase, a seventh nomial that follows the cloud phase of the TALC.

It turns out that there are tests for tails. And, there are regression methods that reveal tails. Linear regression hides tails. Normalization of the data hides tails. Yes, we will need to use some distribution other than the normal. Statistics assumed much.

If we are inaccurate about the shape of the underlying distribution, machine learning cannot climb the hill. And, we cannot see our opportunities and risks as we move another technology across the TALC.

Shape matters. We’ve hidden our tails for too long.

The Box

June 21, 2020

The normal distribution starts life as a point O when we assert the existence of a stochastic variable. When we assert that existence, a Dirac delta function generates a line at point O running off to infinity. We don’t know where that point O is located in whatever coordinate system we end up using.

The line generated by that Dirac function contains all the probability mass that the asserted stochastic variable will ever have. But, there is no interval yet, so the probability cannot be had.

Point O is on an, as of yet, unknown dimension. The interval that we need before we have a probability will be along that dimension. As the sample size goes from 0 to n, the probability mass distributes itself along the interval. The distribution has a shape. More abstractly, the probability mass is inside an envelope, a flight envelope. More abstractly, the probability is inside a box.

We will follow the lifecycle of a normal distribution. I will include a normal distribution inside the box. But, later the box will become the focus of discussion. The point of this post is the forces that act on the box.

The aqua colored boxes each contain a normal distribution. The normal distributions start out thin and tall. They get wider and shorter throughout their lifecycle, aka as n, the number of sampled data points, increases. I did not start at O, the point where the Dirac function instantiates the stochastic variable, aka where n=0. The diagram starts later.

Before the normals achieve normality at what I labeled the standard normal, the distributions should be skewed and exhibit kurtosis tending towards the standard normal as n increases. This is where the normals exhibit long tails. Every dimension has a long and a short tail. Consider them to be paired.

I drew those normals in Paint. I scaled a standard normal up by 200% and reduced its width by 50%. The point was to change those two variables in each normal, so their area would still be equal to one. So although the shape changed, the probabilities still sum to one.
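That area-preserving trade between height and width can be checked numerically: scaling a curve to 200% of its height and 50% of its width maps f(x) to 2·f(2x), and the integral stays at one:

```python
import math

# Standard normal density.
def normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Midpoint-rule integral over a wide interval.
def area(f, lo=-10.0, hi=10.0, n=20_000):
    dx = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dx) for i in range(n)) * dx

standard = area(normal_pdf)
# 200% the height, 50% the width: f(x) -> 2 * f(2 * x).
tall_thin = area(lambda x: 2 * normal_pdf(2 * x))
```

Both integrals come out to one: the shape changed, but the total probability did not.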

As the width of the distributions got wider, their eccentricities at their tails got larger. The distribution that is 1/7th of the standard distribution wide, the first one on the left, has the smallest eccentricity. Revolving the eccentricity circle would generate a torus. This happens because, to be a distribution that appears normal, rather than skewed and kurtotic, we will say that the data for distributions to the left of the standard normal has been normalized.

Since these distributions contain prematurely normalized data, the shoulders are symmetric and a rotation of the eccentricities results in a torus. If these distributions accurately reflected their skewed and kurtotic shape, the eccentricities would increase from the minimum eccentricity of the short tail to the maximum eccentricity of the long tail. These two eccentricities would result in a cyclide.

I did not include the tori in the diagram.

The distributions on the right are post-standard normal. The standard deviations, or the variances increase once the distribution achieves actual normality at the standard normal. These distributions get wider and lower.

The distributions that are on the approach to the standard normal are in hyperbolic space. A Euclidean financial analysis will understate the future outcomes of technologies and products that come into being in these skewed and kurtotic distributions, or on the left side of the technology adoption lifecycle (TALC).

When a multidimensional normal finally achieves normality, or when the data in such is finally normal, subsets will still be too small to be normal. A single dimension would still be too small to be normal.

I highlighted the mean of the standard normal with a red line. This is where regression to the mean happens. Regression to the tail depends on the distribution being a distribution that has thick tails. That implies that the distribution cannot be normal. So I labeled the far right side of the figure, where the variance goes to infinity as the Non Normal Regression to the Tail.

It is usual that once normality is achieved, the standard deviation and the variance increase. Normal distributions exhibiting this behavior are in spherical space. This was troublesome until I remembered that regression to the tail only happens in distributions that are not normal, aka distributions that have thick tails.

Now, I will put a standard normal in a box.

Our first diagram included distributions that should have been skewed and kurtotic. An example of one is shown in the next figure.

In a normal distribution, the mean, median, and mode have converged. That they are separate in the skewed kurtotic distributions hints at what will happen as the sample size n increases.
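The separation is easy to see in a skewed sample. A sketch using a lognormal as a stand-in skewed distribution (the distribution and the crude histogram mode estimate are my own illustration, not the figure’s data):

```python
import random
import statistics

random.seed(1)

# Right-skewed sample: lognormal. Expect mode < median < mean.
data = [random.lognormvariate(0, 0.8) for _ in range(200_000)]

mean = statistics.fmean(data)
median = statistics.median(data)

# Crude mode estimate: midpoint of the fullest 0.1-wide histogram bin.
bins = [0] * 100
for x in data:
    bins[min(int(x / 0.1), 99)] += 1
mode = (bins.index(max(bins)) + 0.5) * 0.1
```

The three statistics come out well separated, in the mode < median < mean order a right-skewed distribution demands.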

In the next figure, I have rotated the short tail around the mode. This is shown as a purple line. Then, I put the skewed distribution on top of a standard normal, so I can see where history will take us. The long tail of the skewed distribution contracts to the standard normal. The green area under the skewed distribution is the safe place to invest. The long tail contracts to the short tail. Investments in the long tail will be lost. Short it.

The standard normal is six standard deviations, aka six sigma, wide. The data in the standard normal has achieved normality. Once it has, n will continue to increase as will the variance, standard deviations, and sigmas. They may become infinite in some distributions, the ones with thick tails.
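That six-sigma width covers nearly all of the probability mass; the mass within ±k standard deviations of the mean is erf(k/√2):

```python
import math

# Probability mass of a normal within k standard deviations of the
# mean: erf(k / sqrt(2)).
def within(k):
    return math.erf(k / math.sqrt(2))

one_sigma = within(1)    # ~68.27%
three_sigma = within(3)  # ~99.73%: the six-sigma-wide base
```

So a base that is six sigma wide (±3 from the mean) leaves only about 0.27% of the mass outside, which is why it serves as the conventional width of the standard normal.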

Then, I realized that I could go downmarket with the thick tail, labelled A in the above figure. The yellow is the shoulder and tail portions of the thick tail. The shoulder is overstated since it should start further away from the mean, but starts at the mean in this figure. Had I drawn this all the way out to the thick tail’s point of convergence, the figure would be too small to be seen in this blog post. The point of convergence is a long way out there.

Distance implies time. The convergence on the right on the TALC represents the moment that the category dies.

Regression to the mean in a standard normal implies the median, and the mode. With a skewed distribution, we could call this regression to the mode. And, when we make a downmarket move the base of the distribution moves down the y-axis, which gives us a thick tail.

I’ve written about going downmarket with normals previously. In those we renormalize the remaining addressable market and move down the y-axis. This extends the life of the category. If you want to live longer, go further downmarket. But, be warned that this is more than a pricing decision. We embed the preferences of our current prospects in our interfaces and encoded cognitive models. Those change as the user/buyer populations change.

So why the box?

The box moves independently from the distribution. The standard normal is taller than its skewed former self. The downmarket move of the skewed distribution is typical for downmarket moves. We only went downmarket to align the bases of both distributions. This was arbitrary, but illustrative.

If we had a standard normal, any normal, and moved upmarket, no tail would have been involved. We would renormalize the remaining addressable market and use the new standard normal as our planning basis.

Here we moved upmarket. The normal went up the y-axis aligned at the mean so we could account for having already consumed 60 percent of our addressable market. We do lose some customers, shown in yellow and labeled fewer, still in our addressable market due to our price increase. We did add some new customers, shown in green and labeled more. We gave up prospects in our former market, labeled C. The tail of the upmarket extends into the future beyond where it ended previously. This time duration is labeled A. We gained some time, but the upmarket move shortened our lifetime by the time interval B.

A population clock in the software industry is based on seats and dollars. Many things are tied to this clock. The box around the TALC or TALC phase moves when management decides to move it. But it also moves relative to macroeconomics. A company can be forced into up or down markets unexpectedly.

I imagine that the box moves constantly.

Thick Tails

June 20, 2020

I’m on an email-based service, Academia. So on June 18th, I got an email about a paper written by Bent Flyvbjerg, “The Law of Regression to the Tail…” I read the paper. It mentions the notion of regression to the mean. And, it draws a contrast with a fairly new realization that is regression to the tail. This involves thick-tailed distributions.

I’ve mentioned long tails in the past in any discussion of kurtosis. Long tails and thick tails are not the same things. Both involve the notion of kurtosis as being about shoulders and tails. This is a recent and still being debated change to the meaning of kurtosis. It was a collection of terms about peakedness, but these terms are being replaced by the shoulders and tails point of view. This is an example of older theory being replaced by newer theory. The new theory is based on things unrelated to the old theory. This is a classic example of a discontinuous innovation.

So I’ll start out with a figure about the old theory. The terminology used is emic, aka a taxonomic classification system. I’ve only seen the terminologies used in a descriptive manner. I’ve not seen the terminology used in a quantitative manner. And, these terms are useful only in the dataset sense where the data has been prematurely normalized, or assumed to be normal. That makes the terminology even more questionable. The named terms seem to be related to sample size, rather than kurtosis. I have not figured out how to create some data that would allow me to increase the sample size without altering the kurtosis values. There should be a way to do this, so we can see if the named effects are really about kurtosis, rather than sample size.
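One way to run that experiment, as a sketch: draw ever-larger samples from a single fixed distribution. The population kurtosis stays put while n grows, so the estimates should settle toward it. A standard normal, with excess kurtosis zero, is my assumed example here:

```python
import random
import statistics

random.seed(3)

# Sample excess kurtosis: fourth standardized moment minus 3.
def excess_kurtosis(xs):
    mu = statistics.fmean(xs)
    m2 = statistics.fmean([(x - mu) ** 2 for x in xs])
    m4 = statistics.fmean([(x - mu) ** 4 for x in xs])
    return m4 / (m2 * m2) - 3  # 0 for a normal distribution

# One fixed distribution; n grows, but the kurtosis being
# estimated does not. The estimates converge toward it.
pool = [random.gauss(0, 1) for _ in range(100_000)]
estimates = {n: excess_kurtosis(pool[:n]) for n in (100, 1_000, 100_000)}
```

If the named kurtosis effects were really about sample size, the estimates would drift with n instead of converging to the fixed population value.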

The above figure came from a Stack Exchange post, cited in the figure on the left side of the composite figure. The figure on the right was pulled from and displayed as the result of a Google search. The terminology was taken from another figure found in that Google search.

On the left, the red dots actually converge into a single dot. The blue dots do the same. The single dots (red and blue ones) denote where the tails begin. The curves involved in leptokurtic, mesokurtic, and platykurtic distributions are different, as shown in the next figure. The kurtosis terms ignore that the dataset approaches normality and then departs normality. But, prior to the data actually achieving normality, it is skewed as well as kurtotic.

The data in both figures assumes normality and achieves this by normalizing the data throughout the sampling process.

In this figure, I colored the shoulders in light brown and the tails in yellow. The colors are applied to the thick tails and shoulders. I added the mean symbol to the line representing the mean. That line is pointed to by the callout for the peak, as in peakedness. I also added a unit measure below the base of the distribution. The standard deviation is the unit measure.

The peak as shown implies normalized data and a sample size that is small. It is greater than one, as n=1 would be a line to infinity. That line would be a point, rather than an interval, so it would have no probability. Only intervals have probabilities. Normality was assumed. Normality actually happens when the mean, median, and mode converge to the same value. In skewed normals, those values are not the same. Distributions with a kurtosis of 3 have achieved normality.

We consider the x-axis to be the base of the distribution. All dimensional axes are considered to be the x-axis. Black swans have a new x-axis at a new position on the y-axis. Upmarket and downmarket moves similarly move the x-axis vertically.

The long tail and short tail of the distribution on a given dimension’s axis are apparent when the distribution is skewed and kurtotic. A dataset will obscure the locations of the tails. Modes are where short tails happen.

Anyway, enough of that. I did more reading. Thick tails are where you are exposed to risks, as are long tails. I consider areas under rotations of short tails around the modes to be safe.

Keep in mind that our normals are n-dimensional. We see them drawn in two dimensional diagrams. That leads us to think that we only have two tails, but we have two tails per dimension. Those dimensions intersect at various angles and in various places. As an aggregate there are numerous tails in an n-dimensional normal.

Those tails have curvatures, and those curvatures define a torus around standard normals, or normals that have achieved normality. If the dimension is leptokurtic, the curvature has a larger diameter; if platykurtic, a smaller diameter. That turns the torus into a cyclide as the object rotates through the dimensions. A cyclide is oriented at its minimum and maximum diameters.

In risk management, solutions with thick tails are replaced by solutions with thin tails. Long-tailed solutions would be replaced by short-tailed ones.

Keep in mind that software products exhibit long tails. Projects exhibit long tails as well. In both cases, there can be numerous tails.


Revising the Technology Adoption Lifecycle (TALC)

June 15, 2020

I’ve been talking about how the cloud has been cannibalizing the early mainstreet phase of the TALC. My last post on this was Cannibalization, which was a revision to On Cannibalization. This post addresses some further changes to my last on this.

User-led growth (ULG) provides another tail within the cloud/phobic phase of the TALC. Both of these tails are on the right. Having more than one tail is not really a problem. But, the flow has reversed direction. The TALC has always run left to right. Some companies sold to consumers before they went back to the enterprise. The phases, when done left to right, build naturally. The sequence of task sublimations runs from left to right as well. But, user-led growth traverses the TALC from right to left.

The right side of the TALC has always represented the death of category. This is where negative profitability, VC money driven monopolies will find themselves in short order even if antitrust law is never enforced. They will have run out of addressable market.

Movement to the cloud has always been via vertical applications. Attempts to move horizontal applications to the cloud have been failures. User-led growth is another way to move those horizontal applications. They sell to users, and then expand across the company for which the user works. Task sublimation is what moved the buyers across the TALC from left to right. That force is not applicable in user-led growth applications. It is really the users that drive the company to make the purchase.

But, where is the death of the category? Will it be the vertical applications that make computing disappear? In the normal left to right direction, the carrier layer disappeared. You don’t save files anymore. You don’t administer databases and applications anymore. The carrier is gone. The carrier is invisible to users and enterprises. The jobs-to-be-done, however, remain. So when user-led growth pushes into the enterprise, it is not an IT purchase. The IT budget goes to the cloud. The CIO is not the target that makes the purchases. The user’s bosses make those purchases. The CIO represented the enterprise. But, now it is the managers throughout the enterprise that will make the purchases. The power is moving.

Back when those managers were in power, we failed at requirements elicitation. Agile has people asking users, rather than managers. Regardless, Agile did not solve the requirements elicitation problem. Agile let us produce software that delivered amateur-level performance. Users do not tell us the implicit knowledge involved. 80’s era artificial intelligence failed for this very reason. The great unsolved problem, requirements elicitation, has not been solved. UX has not solved the problem either.

So what do we do with the TALC? We want an answer where everything runs from birth to death. We want birth to be on the left and death to be on the right. We don’t want a fold in the middle. And, we don’t want to change the direction of task sublimation. The TALC will be bimodal for now. And, users will at last be in the driver’s seat.

Here I have added another normal for user-led growth. I’ve added it previously, but it folded and headed back to B. The yellow represents the cannibalization. To make the horizontal and vertical aspects clear, I would have to draw a layered version based on the Software as Media model that separates carrier and carried content. The normal on the left is the ordinary TALC. The normal on the right represents the user-led growth phase.

We will have impacts on our processes at A given the vendor no longer has a process for moving from the market for vertical applications to the consumer market. The market for vertical applications relied on personal selling and the emergence of a formerly non-existent industry or trade press. The consumer market is a mass market. The vertical applications had to cross the B2B chasm. The consumer market has Gladwell early adopters. They are unlike Moore’s B2B early adopters.

The tornado that happens at A will move to B. The tornado happens at the end of the bowling alley. Moving the tornado eliminates the point of the bowling alley, except that the bowling alley did let the vendor bootstrap. The bowling alley was the process that built the market for discontinuous innovations. The Tornado also enabled vendors to build a near-monopoly during the early mainstreet phase. What a mess.

Moore’s bowling alley book is out of print. It was the first or second of Moore’s books that I read. It’s notable that he did not write a book about entering the late mainstreet phase. The dot bust got in the way. And, Moore’s focus changed.

There is only one chasm. It separated the initial B2B early adopter from the rest of that early adopter’s vertical. There were far too many claims of chasms by founders that were not bringing a discontinuous innovation to market. They were excusing bad marketing of the typical kind.

Wadley’s scale chasms will remain. As will Christensen’s downmarket tactic to extend the life of the category that remains under the usual TALC.

Notice that we have no hint at what would lie to the right of C on the extended TALC. We might get to expert-level performance from software that eliminated cognitive gaps in use, but that will require an evolution in programming methodologies. It would not be artificial intelligence.

Many mysteries remain. Some of them will find discontinuous solutions granting us more economic wealth.

What does profit-free mean to me?

May 30, 2020

I’m into discontinuous innovation. Back when we did that, we created a category and economic wealth. We had to create a technology, a product, and a market. It was not a fast process. We had to create that economic wealth consisting of new industries, new value chains, and new careers. This economic wealth is what got us a premium on our IPO. But, that was the old days.

I’m into the technology adoption lifecycle (TALC). Discontinuous innovation starts in one place and might cause a Foster disruption. Continuous innovation starts later on the other side of the mean of the aggregate normal that is the TALC. The TALC is the story of the birth, life, and death of a category.

And, I am from the old days when the whole point of a product was to advance the adoption of the technology, a technology that bent or broke a constraint. You sold that product. Your business was focused on that. You didn’t deal with alternative monetizations. You didn’t pretend to be a publisher that denied being a publisher, so that you had responsibilities to no one but yourself. You didn’t sell eyeballs. You made a profit. You got VC money to expand your market, rather than code. And, you bootstrapped. Your valuation did not come from your ability to pitch. And, if you were a serial entrepreneur, you actually built a company and sold product. You knew how to do more than pitch.

Continuous innovation is about improving a technology, not replacing it. It works within the old theory, where discontinuous innovation completely replaces the old theory with a new one that springs from a completely different basis. Continuous innovation is what VCs invest in. When the VC requires an existing market, that boils down to investing in only continuous innovation. You make some money with that, but you don’t create any economic wealth, and that point is apparently lost on everyone. Continuous innovation will not earn you a premium on your IPO.

Now, business schools teach whatever is interesting. But, they mostly teach how to sell a commodity and how to extend the life of the company by extending the life of the category. Design thinking gets that done. Book cooking gets that done. And, a Christensen downmarket move gets it done. But disruption beyond the downmarket move gets us in trouble.

A category once born is filled with competitors. Someone is number one. Someone is number two. When room opens up in a category, the freed-up market allocation is distributed to the companies already in that market. And, a new entrant might take that allocation. At first, the allocations are made by the market. You win those positions by serving the market. Later, when the underlying technology becomes a commodity, market share can be bought. By the time you are in the late mainstreet or later phases of the TALC, your margin is tight before you can buy market share. This is where we do something that we call “compete.” Before this you get, at best, 74 percent of the unallocated market. The market leader gets a lot, and everyone else gets way less. This is the world where b-schools teach you to manage.

What have we been doing since the dot bust? The VCs have insisted on existing markets, b-school territory, and quick returns. Microsoft couldn’t happen today. We don’t do that anymore. Instead, we do design thinking and Christensen disruptions. With this type of disruption, you have no technological basis, and we pretend that business models do not have s-curves. We pretend that management alone, without invention, is innovation, and that management is not about bending or breaking constraints. Bogus! And, we pretend that we don’t need a profit as long as we drive all the competitors out of the market. Alas, that takes a long time. Uber will win only when no other taxi company exists. Right now, Uber uses predatory pricing to kill their competitors. Amazon leverages their no-sales-tax sales to close BAMs before they enter the same markets with their own BAMs. The business world seems to believe that as long as … antitrust law will not be enforced. That is a minefield. Anyway, that seems to be the hope for all these did-nothing unicorns, that they can be the monopolist. Not good, either for them or the consuming public. I know that I can’t get a cab. And, I know that I could reserve a cab long ago in my Austin days, long before Uber.

Economists warned us about globalism. They told us someone would profit, but they didn't know who. They knew that some places would profit, but again, they didn't know where. And, they warned us not to zero-sum the profits from globalism. The latter is exactly what we have done. We are not investing in new economic wealth/discontinuous innovation. And, we are politically engaging in kleptocracy, the means by which a country becomes a third-world country. We are being put into grave danger by refusing to innovate discontinuously. We are in grave danger from our innovation-free means of innovating. Profit-free is hazardous to our future.

A sales VP I once worked with used to say, get back to work.

Anyway, enjoy!


May 30, 2020

I am no longer on Twitter. You have not seen any Strategy as Tires tweets in quite a while. And, you have not seen any Product Strategist blog post promotion tweets.

I have been publishing my Strategy as Tires content in a blog of the same name. Read those tweets there. A blog doesn't enforce a word count, so they are short blog entries. Some of them are tweet length. And, I have not done more than one a day. I will edit this stuff, shorten it, and burst it out into multiple tweet-length entries. It is coming. The content will again be the familiar Strategy as Tires content.

I am also putting my prior tweets, to the extent that I can find them in my tweet archive, in the blog as well. That has just started. Those blog entries look very different from my recent "tweets." They will all look like these blog entries in about a year. There is way too much to do.

I miss my twitter community. Please visit and follow my Strategy as Tires blog.



May 13, 2020

I’ve blogged on this topic recently. See On Cannibalization. In that post, I drew a sketch of what the technology adoption lifecycle (TALC) would look like because of the cannibalization of the early mainstreet (EM) phase. I put user-led growth (ULG), a new normal, in the aggregate normal that represents the TALC.

I drew the early mainstreet phase as being flat. That is in the distant future. The work and the money are moving to the cloud. But, it is doing this slowly. And, the situation is bifurcated. The enterprise functionality that has moved and is moving to the cloud is vertical applications. The horizontal applications are more resistant. These horizontal applications are a great target for the user-led growth companies.

I’ve drawn a new diagram of the emerging cannibalized TALC.

The red line represents the aggregate shape of the TALC. The black arrow shows the early mainstreet vertical application migrating from the early mainstreet (EM) phase to the vertical layer of the cloud phase (CV). The grey lines represent the former TALC. The user-led growth is shown as the horizontal layer of the cloud phase (CH).

The device phase is shown unaltered by the forces acting on the cloud. I’ll ignore the 5G impacts in this diagram. But, more functionality will be available to the “laggard”-based phase that I’ve been calling the device phase.

Notice that the early mainstreet phase does not disappear, as that is dependent on the success of the user-led growth phase. The early phases of the TALC were predicated on the near-monopoly results obtained by the bowling alley and its aggregation in the tornado. Cannibalization threatens this. But nobody is doing discontinuous innovation these days. And, early mainstreet is being approached after success in late mainstreet. The only problem here is that the reverse pathway does not get you to that near monopoly. The innovation on the reverse pathway is continuous, so the result is a successful cash play without creating any economic wealth. The normal pathway for discontinuous innovation got us to the success of that near monopoly, which got us premiums on our IPOs. No premium is the fault of VCs not investing in discontinuous innovation, rather than a failure of the financial markets to understand.

The increased number of nomials forces some changes to the organizational structure of the company that does discontinuous innovation on an ongoing basis. Each phase is different. They are not generic. A product can be birthed as either continuous or discontinuous. A category is birthed only discontinuously. A category is a collection of vendors, not a single vendor, hence the economic wealth creation and the premium on such IPOs.

The layering of the cloud is new. The functionality already shows us that, at the jobs-to-be-done level, the work in those two layers is significantly different. So the divisional organization that runs the cloud will have two organizations reporting to it instead of one. The horizontal layer organization will own the user-led growth companies. More than the technology will be different in that layer.

Notice also that the cloud horizontal/user-led growth is skewed and kurtotic in the figure. Eventually it will achieve normality. Until then, put the money on the short tail side of the distribution.



May 10, 2020

This morning, I found myself reviewing some math before moving forward into some new-to-me math. A sheet of paper and a pen took me to a few surprises in familiar territory. It all began with having read “Question becomes a conjecture.” Just attribute that to some unknown source, not me. It was on a passing page. I was not online. And, Google didn’t find it. Besides, we are off in a ditch now.

Back in the late ’80s, NASA held a hypertext conference. Researchers from the Microelectronics and Computer Technology Corporation (MCC) in Austin presented some work on formal requirements. CS majors were proving programs. But, nothing beyond “it works” could be said. It did what we coded, yes. But, did we code the correct thing? Who knew? Formal requirements were supposed to resolve that problem. Connecting code to its requirements was the point.

This approach begins by asking a question and encoding the answer in If…then… statements. Thus, eliciting requirements became a generative process. Asking more questions gets you a tree. A tree becomes a triangle. So my triangle model began. The triangle model became a place to embed the waterfall. Yes, the hated waterfall, a classifier of decisions. At the end of all the decisions was the base of the triangle, a user interface. Later, the use case layer was added.

So logic is a triangle, and a question is a conjecture.

It may take a century or more to answer that question. And, that answer might involve new theory that will make us a ton of economic wealth. Hell, cash these days is made from nothing. Cash is the joker, not the king.

Somewhere along the way, I added Goldratt’s theory of constraints to the mix. The peak of the triangle became the origin. A generative system built outward from the origin until it is blocked by one or more constraints. The constraints form an envelope. What you have is a multidimensional mess. For me, the point of product has always been to bend or break one or more constraints. Alas, this is continuous innovation. Better, quicker, cheaper, … , more profitable for the seller, …

Discontinuous innovation builds a new triangle from new theory, but it gets adopted by approaching the existing use cases even if it has its own use cases. It gets close. It approximates. It approaches. It reaches out. It converges, but only at the use case. It jumps a dielectric there.

The next notion on the list came from having read about a property that just was: chirality. Chirality is a property of asymmetry. Chirality was said to be Boolean. Either it was the case, or it was not. Now, boil that down to existence (∃). Chirality exists. Chirality has become parameterized, quantified, leaving that Boolean behind. Existence is a surface. Parameterization moves us out on some normal from the surface of existence to another surface, or other surfaces.

Back to constraints for a moment. When you break a constraint and discover new constraints, you have gained some surface, some area, some volume in which to extend your use. You have more, or inversely less, which generates some income for the facilitator/vendor/us.

After making that list, I got back to the hunt for diagonals.

A circle has one diameter. When you use that diameter as the base of a triangle, the angle of the intersection of the two lines drawn from the ends of the diameter to a point on the circle will be a right angle. If we are talking data, a circle implies no correlation, as do the perpendicular intersections. The center of the circle, or origin, implies, statistically, the mean.
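That right angle is Thales' theorem: a triangle built on a diameter, with its apex anywhere on the circle, is always right-angled. A quick numeric check on the unit circle, just to watch the 90 degrees fall out:

```python
import math

def inscribed_angle(theta):
    """Angle at a point on the unit circle, looking back at the two
    ends of the horizontal diameter. Thales says this is always 90."""
    p = (math.cos(theta), math.sin(theta))   # apex on the unit circle
    a, b = (-1.0, 0.0), (1.0, 0.0)           # ends of the diameter
    va = (a[0] - p[0], a[1] - p[1])
    vb = (b[0] - p[0], b[1] - p[1])
    dot = va[0] * vb[0] + va[1] * vb[1]
    return math.degrees(math.acos(dot / (math.hypot(*va) * math.hypot(*vb))))
```

Any non-degenerate theta gives the same answer, which is the point.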

A circle also means that the data is no longer skewed or kurtotic. The sample represented by the circle has achieved normality, or has been normalized to prematurely assume normality. When normality has not yet been achieved, the sample space is hyperbolic, but once normality has been achieved, that sample space may be spherical. At a minimum, the sample space is Euclidean. It becomes spherical as normality is achieved and sigma increases beyond one.
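Skew and kurtosis are measurable, so "has achieved normality" is something you can test rather than assume. A minimal sketch of the sample moments; for a sample that has achieved normality, both values approach zero:

```python
def moments(xs):
    """Sample skewness and excess kurtosis. Both tend toward zero as a
    distribution achieves normality; nonzero values mean you are still
    in the skewed, kurtotic, hyperbolic regime."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = var ** 0.5
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in xs) / n - 3.0
    return skew, kurt
```

A symmetric sample gives zero skew even while its kurtosis is still far from normal, so check both before betting on normality.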

A triangle has an incircle (inside the triangle) and a circumcircle (through its vertices), so that triangle has two diameters: one for the incircle, and another for the circumcircle. Triangles have many centers and circles. An incircle is tangent to the sides of its triangle. That incircle has a gap between it and the triangle’s vertices.
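Both of those diameters are computable from the side lengths alone. A sketch using Heron's formula; the inradius and circumradius formulas are standard geometry, while reading their diameters as short and long tails is this post's interpretation:

```python
import math

def triangle_circles(a, b, c):
    """Inradius r and circumradius R for side lengths a, b, c.
    r = Area/s and R = abc/(4*Area), where s is the semiperimeter.
    The incircle's diameter is 2r, the circumcircle's is 2R."""
    s = (a + b + c) / 2.0
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    return area / s, (a * b * c) / (4.0 * area)
```

For the 3-4-5 right triangle, the incircle diameter is 2 and the circumcircle diameter is 5 (the hypotenuse, per Thales), so the gap between the two circles is substantial.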

The incircle can represent a Poisson game, a game with an unknown number of players. Poisson games are one way of thinking about the bowling alley, a functional construct in the technology adoption lifecycle (TALC). In the bowling alley, the company bringing a discontinuous innovation to market begins to create that market by selling the technology in a client engagement with a B2B early adopter, another TALC phase. The B2B early adopter is the first player in the vertical market’s Poisson game. The B2B early adopter is the entry point into the vertical. Vertical markets are specific to vertical industries. These markets are not entered in the same manner as one enters the mass market.
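In a Poisson game, even the number of players is a draw from a Poisson distribution: the vendor entering a vertical does not know in advance how many early adopters will turn up. A minimal sketch of that draw; the rate of two early adopters per period is a made-up illustration, not a figure from the post:

```python
import math
import random

def poisson_draw(lam, rng):
    """Draw a player count from Poisson(lam) by inverse-transform
    sampling: walk the CDF term by term until it passes a uniform draw."""
    u = rng.random()
    k = 0
    p = math.exp(-lam)   # P(K = 0)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k     # P(K = k) from P(K = k - 1)
        cdf += p
    return k
```

Run it many times and the average count converges on the rate, but any single vertical's game can easily have zero players or five, which is the planning problem the bowling alley serializes around.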

The vendor does six of these client engagements sequentially with the intent of winning market leadership in all six vertical markets. That market leadership is earned, rather than being bought.

Bringing it back to geometry, each early adopter has their own Poisson game, and their own triangle. Each triangle has its own tails. The distribution has numerous tails. The distribution is asymmetrical. The asymmetry is reflected in the cyclide that surrounds the hump of the distribution. As normality is achieved, that cyclide will become a torus. The eccentricities become uniform because the tails become uniform.

As it is, not yet having achieved normality, the diameter of the triangle’s incircle is the short tail of the distribution represented by the circle. The triangle has two more tails, long tails. The short tail will not get shorter. The long tails will contract as the distribution achieves normality. Money put on the short tail will persist, while money put on the long tail will be lost.

Using a circle to represent the distribution is a simplification. It should be an ellipse. That ellipse will show a correlation between two dimensions captured by the sample data. There would be many such ellipses.

Back to the diameters, the square has two diameters. A square can be encircled. That implies that a square has no correlation.

The large circle would represent the population mean of the addressable market. The smaller circle would represent the sample. The population distribution may have achieved normality. That implies nothing about the sample distribution. By the time the sample achieves normality, the population distribution will have already achieved normality. Do not assume normality. That assumption will cost you money.

The reality is that a rectangle will represent non-normal, kurtotic distributions.

Keep in mind that a circle only has one diameter. A square has two. A rectangle has two.

These distributions will be skewed. Rectangles have two diagonals. These rectangles are still abstractions. They represent ellipses. Those ellipses would result from regressions involving two different dimensions meeting at some angle other than right angles. I do not have a tool that can draw them. If I could draw them, the rectangles would be tilted, as would the ellipses.
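The tilt is exactly what a nonzero correlation looks like: a Pearson coefficient of zero gives the untilted circle, and anything else tilts the ellipse. A minimal sketch of the coefficient; the sample data in the usage note is hypothetical:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient. Zero corresponds to the circle
    (no correlation, no tilt); values toward +1 or -1 correspond to an
    increasingly elongated, tilted ellipse."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)
```

Perfectly proportional data, say `pearson([1, 2, 3], [2, 4, 6])`, returns 1.0: the ellipse has collapsed to its own long diagonal.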

Using circles instead of rectangles causes us to exclude much of the population generated by the long diameter and its larger circle. We need to use an ellipse here.

In the last few weeks, I came across a theorem about quadrilaterals, a quadrilateral being a geometric object having four sides. It turns out that it is an object that organizes two triangles, aka two code bases, that face each other in the market. It goes further when animated, as it shows the results of the competitive face-off of the two code bases. I have yet to extend this to more competitors.

So A and B are the two competing code bases. A is coding more due to the revenue split between them. A cannot exceed 74 percent of total market share. B holds the number two position in the market. B cannot capture more than 19 percent of the remaining market. The figure was drawn without these considerations. The gray area hints at A’s concessions to antitrust law for expanding outward, and for incursions into B’s market share.

Each numbered line is a release as per the triangle model. B lags A. The red lines illustrate how the market boundaries move when A wins in an incursion into B’s market. The incursions are a darker color than the release that delivered them. The white areas indicate yet to be delivered code, aka areas that will be filled by future releases or other allocations.

Anyway, such fun. Putting this in the context of a normal distribution is next.

This diagram starts with a quadrilateral where A defines a triangle facing B, and B defines a triangle facing A. This gives us four radii. Two of them are the same, so we end up with three circles. We will get to the grey circle later.

Each of the lines defining the quadrilateral is a policy line. A defines two of those lines, and B defines the other two. It takes an effort to enforce these lines. They are constraints. We are not just serving the normals implied by the circles. There are three light blue areas. They are labeled in a darker blue by the numbers 1, 2, and 3. The populations in those three areas require some code of their own. There are two light green areas. They are labeled in a darker green by the numbers 1 and 2. Those populations require their own code. Divide and conquer translates into divide and complicate.

The vertical purple line is the line dividing the market between companies A and B. A, an outlier, is far away; however, A impacts the division of the market space. A’s constraints are not ignored by B.

B is focused on the dark blue population. B extends its policy lines to see around the population it is focused on. Extending these lines gives rise to the light grey (1, 2, and 3) and dark grey areas (1 and 2). It also moves A to A’. A is no longer far away. A’ may be a mirage. Get some data.

Extending the lines moves the radii and the circles.

The important thing here is that the quadrilateral gives rise to the need to enforce policy; in other words, there is a need to say no. Your competitor’s policy lines affect you and yours. Code does not reach adjacent populations. Or, maybe it does when it should not. And, lastly, code is a measure of distance. How much code will we need to serve this population? That translates into how much money we need to reach that population.

Anyway, draw your own pictures. Do your own math. Enjoy!


April 11, 2020

Today, from the writer’s point of view, and a while back from the reader’s point of view, I’m reading a person describing his making of a PB&J sandwich. He asks the person who wanted it, why can’t you make it? That person says he doesn’t know the ratio.

Mathematicians see everything as being between some upper convergence and some lower convergence. This being a normal distribution, it’s mostly about a ratio around some average and a flight envelope. That average roughly bisects that envelope.

So, working from before and moving to after, we start with a minimum, usually below zero, even if you don’t code on your own dime. And you exit at some max. That minimum is your rightmost convergence. The maximum is your leftmost convergence. Moving that middle horizontal line moves your points of convergence. The Exit line does not indicate when the VC forces you to exit. That is way earlier.

Notice that we have two similar, but different sandwiches. Call it a burrito and that problem goes away.

Let’s get with modern management theory. Everything has to be aligned. Obviously, our two sandwiches must be correlated, or mapped in some confusing way. Sandwich A is run by executive A. And, Sandwich B is run by executive B. But, we do have a solution, the Burrito. The Burrito invariant solves the problem.

Actually, it causes problems. But, it is aligned. One executive job cut. But, every bite is the same. And, there is a problem with circles. Circles imply no correlation. So no big data, no machine learning hill climbing; it’s indifference all the way around. But, our data says that a correlation exists, aka an ellipse, and we are not talking about a bad rolling of the burrito.

Now, after adding a dimension, we have an ellipse. That blue ellipse results from the correlation of the new dimension with the prior data. We are adding the technology adoption lifecycle (TALC) as this dimension.

We’ve rolled up our envelope into a burrito. The layers have a varying thickness. We did not align the burrito rollup with the TALC rollup. The TALC is the timeline of the company. The exit is on the right where a real burrito would be sealed.

The statistical structure of the TALC is different from the ingredients in our burrito. The burrito rollup is shown in blue and rolls up from the center. The TALC rolls up from the outside in. This is shown, or intended, with the black spiral that starts at the tornado. The six early adopters we need are the tortilla. The annotations to the circle at the right end of our burrito hint at where TALC phases would be in the burrito. If only I had AutoCAD. The numbered circles are the six early adopter/vertical applications. The purple circle is the early mainstreet/aggregated application.

When you take samples of your customers, you would capture their physical location and their TALC phase indicators. That data is independent like our TALC-burrito mapping. When you roll up a burrito, the ingredients that you don’t have don’t get in the burrito. Like when a VC demands an existing market, you can only give them the last half of the burrito (dark green).

The light green represents the early mainstreet phase and a vertical application or two.

Beyond the Burrito

The technology adoption lifecycle is a process. Yes, Moore denies this. So throw in some asynchronous like we did with email, and the process remains. Enter here, enter there. It’s like a shopping mall. It has multiple entries and multiple exits. Yet, you park in your usual spot and take your usual door and then you walk down the mall, the loop. You can’t get back to your car without a loop or a bidirectional walk in the same part of the mall. After a decade of walking the same mall, it seems like a process. The series of stores only changes when one of the stores goes out of business, or moves.

The early mainstreet phase of the TALC, while it is being cannibalized by the cloud, is moving closer to the exits, or, for the phobics, the entry.

That average of our envelope is just a portion indicator. The TALC has more than one average. The TALC is an aggregate normal of many normals. Each phase is a normal distribution unto itself. Each phase has a front half and a back half. Each phase has an entry point and an exit point. That’s a lot of sandwiches, sandwiches with different and very specific fillings that get served to very particular populations of users. One sandwich satisfies a user in a specific phase, but in no other phases. It’s a Boolean. There are some parameterizations that get us across the TALC.

Sometimes you hear about doing a phase from back there, closer to the entries, after a phase from closer to the exits. Someone will talk about crossing the chasm when they were not innovative and never had a chasm to cross. They didn’t do the chasm’s parameterizations either. Who can blame them?

If they had walked up and pitched something that had no existing market, aka was not in the late mainstreet or later phases, they would not have gotten funded. VCs don’t invest in that build-a-market proposition, mainly because they are orthodox business people who began their careers in innovative companies that hired them once those companies were in the late mainstreet phase. Never mind the whole statistical geometries problem that causes problems with the mathematics and understates future returns. Of course, they never faced those problems or resolved those problems. Hell, as it is, they exit their continuous innovation investments quickly, because they don’t see, should never see, and will never achieve the multipliers involved in successful discontinuous innovations.

A while back, a lawyer out in my twitter stream was involved with passing an innovation funding effort in the state of Minnesota. The pitch was all about the returns from discontinuous innovation, but most innovation these days is continuous, or not discontinuous. VC exits will follow. Wrong sandwich. Oh, well.

The ratio of continuous to discontinuous innovation is why we don’t have jobs for everyone, why we are not creating new categories, new worlds, new careers, and many more wealthy former programmers. Even they have that wealth inequality problem. The VCs want to blame the financial markets. But, the financial markets know what success is, economic wealth. That is not cash. The VCs are investing solely for cash.

The TALC is a metaphor. I’ll stop with that here. The mathematical sandwiches are what made me write this post. Yeah, it’s well past lunch here. What gets us up and out of bed, or having lunch at a particular time during this pandemic? Hell, even the pandemic is a technology adoption problem.