Archive for January, 2020

Alternate Monetization, Revised

January 31, 2020

Not selling software is a preached philosophy. It necessitates alternate monetization, which usually means selling eyeballs and capturing psychological insights well beyond zip codes and demographic data, all of which boils down to selling ads. It’s a completely different business. It has its own carrier layers and content layers. It has its own technology adoption lifecycle. And, it needs its own management. Often no one is in that management role, no executive, let alone effective governance reaching from the top of the organization to the bottom.

We go with “it’s easy to serve ads.” But, to whom? What ads? And, is this even the best way? The first ad is easy. The 211th ad, not so easy. It gets messy. And, messier.

That 211th ad takes me to the 211th data item, or it takes me to normality, and the aggregation of normals into phases of the technology adoption lifecycle (TALC). That makes me want to see a level view of those alternate monetizations. Here each alternate monetization gets its own TALC, but I will only draw one. And, it won’t tell us much beyond insisting that we should be finding out.

I won’t go into the x as media split. It will be just one TALC coordinating with another. They would be separate layers in the organization. And, they would be coordinated in terms of two convex hulls.

So it’s time to draw. I put the alternate monetization on the TALC. I also added cannibalization in the early mainstreet (EM) phase and user-led growth (ULG). I will discuss these latter issues after the alternate monetization discussion.

I don’t see the point of advertising to the technical enthusiasts that precede the B2B early adopters. Nor do I see the point of advertising to the early adopters. But, once you have crossed the chasm, the customers in the vertical market have a mass that would be targeted by advertisers, in the vertical market as B2B buyers, and in the mass market as consumers. So I started the ad TALC in the vertical phase. Here I said ads, but I’m using that word to represent any and all alternate monetizations. Adjust for particulars. I’m not prepared to go into details of the particular processes, populations, and phases.

I’m not into alternate monetizations myself. The money needs to be kept separate. And, code decisions need to be made on the basis of the sale of code, either in products or services. It’s seats and dollars, not seats and eyeballs. You can do otherwise, but I’ve not studied it, so don’t ask me. Yet!

I’ve isolated the phases into divisions. I would similarly insist on each monetization having to hit its own numbers and billing across those divisions. There is no free code. And, there is no shared infrastructure. Just trying to avoid games. The users in those separate monetizations would not share data across those divisions. Nor would users be shared across divisions. Each division would have to develop its own normals.

Convex Hull

January 30, 2020

Someone tweeted about Bernstein polynomials. So I looked them up and found a surprise: a differential view of a normal distribution.

Each of the darker grey shapes is either a normal distribution or an exponential distribution. They reflect the evolution, or differential, of the normal distribution as its sample size increases. The red curve is the convex hull that is always tangent to all those distributions.

The text containing the figure called the distributions at the far right and left exponential distributions. This makes me wonder if the exponential distribution is really a Poisson distribution that successively approximates the normal as the sample size increases from zero. I’ve long said that the technology adoption lifecycle (TALC) starts with a succession of Poisson games. These are games with an unknown population of players. The first player in each game is the B2B early adopter. These people are clients. They are not consumer early adopters. These six clients are the first players in each of the six vertical markets that you will enter. These six clients are business unit managers. They are not functional managers in the IT horizontal.
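The Poisson-approximates-the-normal intuition is easy to check numerically. This is my own hedged sketch, not from the figure’s source text: it compares a Poisson pmf against a normal density with matching mean and variance, and watches the largest pointwise gap shrink as the rate grows (the rate standing in for sample size).

```python
import math

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson with rate lam, computed in log space to avoid overflow."""
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def normal_pdf(x, mu, sigma):
    """Density of a normal with mean mu and standard deviation sigma."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def max_gap(lam):
    """Largest pointwise gap between the Poisson pmf and its matching normal density."""
    # A Poisson with rate lam has mean lam and standard deviation sqrt(lam).
    return max(abs(poisson_pmf(k, lam) - normal_pdf(k, lam, math.sqrt(lam)))
               for k in range(0, 3 * lam + 10))

gaps = [max_gap(lam) for lam in (2, 20, 200)]
print(gaps)  # strictly decreasing: the Poisson tends to the normal
```

The gap never reaches zero at any finite rate, which matches the idea that the early, kurtotic phases only successively approximate the eventual normal.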

I’ve used my software-as-media model to separate the TALC into a carrier-centric populations layer and a carried-content-centric populations layer. This implies two distributions, as shown below.

Those first six players, those six verticals, are separate lanes in your bowling alley. Each lane gets its own application specific to that vertical and a subtree within the industrial classification tree. It is not your early mainstreet or IT horizontal product. Not yet. Commonality in the carrier layer now, however, will pay off when you are finished bowling. The sooner you start with the layered architecture implied by the software-as-media model, the better.

Back to the math. Bernstein polynomials are a consequence of the binomial theorem. The Bernstein polynomial generates the convex hull. That convex hull is curved, but nearly flat. The minimum value of the convex hull happens at the mean of the underlying normal. I drew a horizontal line in the figure below at that minimum value. I’ve labelled this horizontal line as the ML minimum, or the machine learning minimum. The standard deviation is three sigmas. These values will vary over the life of the normal as the sample size increases and actual normality is achieved.
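That Bernstein polynomials come out of the binomial theorem, and that they generate a convex hull, can be demonstrated in a few lines. A minimal sketch, assuming nothing beyond the standard definitions: the basis polynomials sum to one by the binomial theorem, so any weighted combination of them never leaves the convex hull of its coefficients.

```python
import math

def bernstein(k, n, t):
    """k-th Bernstein basis polynomial of degree n at t in [0, 1]."""
    return math.comb(n, k) * t ** k * (1 - t) ** (n - k)

def bernstein_curve(coeffs, t):
    """Weighted combination of the basis polynomials -- a one-dimensional Bezier curve."""
    n = len(coeffs) - 1
    return sum(c * bernstein(k, n, t) for k, c in enumerate(coeffs))

n = 8
ts = [i / 100 for i in range(101)]

# By the binomial theorem, the basis sums to (t + (1 - t))^n = 1 for every t...
partition = all(abs(sum(bernstein(k, n, t) for k in range(n + 1)) - 1) < 1e-9 for t in ts)

# ...so the curve is a convex combination and stays inside [min(coeffs), max(coeffs)].
coeffs = [3.0, -1.0, 4.0, 1.0, 5.0, -2.0, 6.0, 0.0, 2.0]  # arbitrary illustrative weights
in_hull = all(min(coeffs) <= bernstein_curve(coeffs, t) <= max(coeffs) for t in ts)
print(partition, in_hull)  # True True
```

The partition-of-unity property is exactly the binomial theorem evaluated at t and 1 − t; the hull containment is why Bézier curves, built from these polynomials, stay inside the convex hull of their control points.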

The exponential on the right side of the convex hull shows the effects of de-adoption.

The TALC is thought of as a series of phase transitions. Each phase is bookended by phase transitions, but between the bookends, each phase is a very different mean-field game. In the early mainstreet phase, the market allocates market leadership to one company. That market-power-allocated market leader then has the responsibility to define the category and how the competitors compete within the category. They can optimally control 74 percent of the market.

Those mean-field games are the nearly flat part of the convex hull.

The exponential sides of the convex hull are kurtotic. They are Poisson games, or games with an unknown number of players. Poisson distributions tend to the normal distribution. The gray distributions move from the left and right sides of the convex hull to the center, or mean. That mean and center are the empty core of the normal distributions comprising the convex hull.

Inferences are made in the tails of normals, aka the parts of the normal distribution beyond the core. Normality is assumed because the inferences have easier mathematics for standard, non-kurtotic normals.

As the sample size increases, the normal gets wider and shorter. The sample size drives the minimal height of the convex hull. If we were not looking at the entire lifecycle of the normal simultaneously, the convex hull would not appear level towards the right side of the figure.
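The wider-and-shorter claim has a textbook counterpart: the sample standard deviation from a small sample underestimates the true sigma, so a normal fitted early sits too narrow and too tall, then widens and its peak drops as the sample grows. A sketch under that interpretation (my reading of the claim, not necessarily the post’s):

```python
import math
import random

random.seed(7)
TRUE_SIGMA = 1.0

def mean_fitted_sigma(n, trials=2000):
    """Average sample standard deviation over many samples of size n."""
    total = 0.0
    for _ in range(trials):
        xs = [random.gauss(0.0, TRUE_SIGMA) for _ in range(n)]
        m = sum(xs) / n
        total += math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return total / trials

def peak_height(sigma):
    """Height of a normal density at its mean: 1 / (sigma * sqrt(2 * pi))."""
    return 1 / (sigma * math.sqrt(2 * math.pi))

# The fitted sigma widens toward the true value as n grows...
sigmas = [mean_fitted_sigma(n) for n in (2, 5, 50)]
# ...so the fitted normal's peak gets shorter.
peaks = [peak_height(s) for s in sigmas]
print(sigmas)
print(peaks)
```

The bias is well known: the expected sample standard deviation is the true sigma times a factor less than one that approaches one as n grows, which is one way the distribution "achieves" its final width and height.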

The TALC is a collection of independent normals summed together into a single normal. This implies that the TALC has a convex hull. I’ve correlated the TALC and its convex hull in the next figure.

The convex hull is shown with its TALC phases and its sample-size driven geometries. The thick black line on the left is the output of a Dirac function. The Dirac function exists when a random variable is asserted. The function creates a histogram at a point, not over an interval. It cannot be considered a probability yet, but it contains all of the probability mass. Sampling moves the probability mass into the distribution that represents the asserted random variable.
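The Dirac-to-distribution idea can be illustrated with a toy histogram. This is my own hedged sketch: all of the probability mass starts in a single bin at the asserted point, and sampling spreads that mass out into a distribution while the total mass stays one.

```python
import random

random.seed(3)
X0 = 5.0  # the asserted random variable: all mass starts at this point

def empirical(samples, edges):
    """Histogram of samples over the given bin edges, normalized to total mass 1."""
    counts = [0] * (len(edges) - 1)
    for x in samples:
        for i in range(len(counts)):
            if edges[i] <= x < edges[i + 1]:
                counts[i] += 1
                break
    return [c / len(samples) for c in counts]

edges = [X0 + (i - 30) * 0.25 for i in range(61)]  # bins covering X0 +/- 7.5

# Before sampling: a "Dirac" histogram -- all mass sits in the single bin at X0.
point_mass = empirical([X0], edges)

# Sampling moves that mass out of the point and into a spread-out distribution.
spread = empirical([random.gauss(X0, 1.0) for _ in range(5000)], edges)

print(max(point_mass), max(spread))  # 1.0 vs a much smaller peak
```

The total mass is one in both histograms; only its location changes, from a single point to an interval-covering distribution.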

Notice that the middle distribution in the figure is not yet normal. The distribution is still skewed and kurtotic. The grey vertical line in the middle figure shows where the population mean for the normal that achieves normality would be. It is labeled with “mu,” or µ. It is also labeled for its TALC phase, which is shown as being the late mainstreet phase (LM). The mean line separates the early mainstreet phase (EM) from the late mainstreet phase (LM). The mean also divides the increase in the size of the market, aka the uphill side, from the decrease in the size of the market, aka the downhill side.

At the top of the figure, I’ve labelled the geometries associated with the sample sizes of the normals under the convex hull. The geometry associated with a three-sigma standard deviation is Euclidean. Earlier, smaller sigmas generate a hyperbolic geometry. Later, larger sigmas are spherical. That discontinuous innovations start with small sample sizes leaves them in hyperbolic space, where infinity is close, any future financial projections are small, and risk is overestimated. Due to the understatement of financial returns, and there being only one, exactly one, successful path through that space, VCs don’t invest in discontinuous innovations. Continuous innovations happen in Euclidean or spherical space. Spherical space provides a multiplicity of geodesics, aka paths, to achieve success. VCs live in spherical space.

The population mean is the typical place for missed quarters. Know the size of your addressable market. Know it in both your seats and your dollars. Know which sale gets you to your 50 percent mark. Don’t be surprised by it. Did you provide positive guidance? If so, stop selling. More sales will give you a negative quarter. Similarly, when you approach the theoretical antitrust limit of 74 percent, stop selling. Keep your alternate monetization dollars out of these figures.

A startup that has already IPOed by this point will see its stock price fall. Those IPOs captured a premium. Those companies would be pushing an underlying discontinuous innovation. IPOs that happen to the right of the population mean never earn a premium. Continuous innovations outside of the early mainstreet phase will not earn a premium.

I have drawn this version of the TALC to show the software-as-media view of the TALC. The thick red lines show us the chasm, the first tornado, and the second tornado. The thick red lines are marketing events. They appear in the carried content layer. The thick green lines are financial events. They, likewise, appear in the carried content layer. I also added a period in the late mainstreet phase for mass customization (MC), where vertical-phase functionality reappears in the products. And, product-led growth (PLG) was added behind the cloud phase.

Product-led growth is a separate normal because it is a separate population. It is “sold to” in a bottom-up manner. It eventually makes the enterprise sale. Consider PLG to be an upmarket move.

The financial events include the efforts tied to an acquisition and merger. This requires the development of a product that would attract a buyer. The black bar (PM&A) represents this effort. The adjacent red bar (T2) represents the second tornado.

The uncolored phases of the TALC represent time periods when nothing is done. Orange represents activities in the carrier layer. The black bars there represent development activities intended to make the carrier layer and application ready for the next phase. They are labelled with the prefix P and the abbreviation of the next phase. The phase colors in the carrier layer change when the carrier infrastructure changes. The diagonally striped phase involves multiple carrier infrastructures. The device phase is an intersection between the TALC for computers and the TALC for telephones. Two different technical enthusiast/geek populations are involved.

In the carried content layer, there are two different colors: one for vertical content, and another for late mainstreet/consumers, device, and cloud content.

Precursor activities preceding the TALC include obtaining bibliographic maturity (BM), which necessitates the existence of an invisible college (IC), and engaging in Lévy flights (LF) done in search of discontinuities.

Phases are about populations. The thick bars are about processes. The processes will differ in duration, but I left the thickness of the bars denotational.

The bowling alley(s) have been omitted from this diagram. Each lane in the bowling alley represents the early adopter engagement and the subsequent effort to develop the early adopter’s vertical application and their business proposition based on that vertical application.

The convex hull floats above the distributions that comprise the TALC.

In a boat having a convex hull, the weight carried by the boat can have hydrodynamic effects. I was never good at leaning my canoe. Enjoy!



Strategy As Tires

January 21, 2020

I am no longer on Twitter so I am moving my Strategy as Tires tweets to the Strategy As Tires blog. I started using Twitter to make my Product Strategist blog accessible. I added the Strategy As Tires tweets and recently the Ask A Constraint tweets.

Twitter ate my life, so getting away from Twitter is a good thing.

Thanks for reading my Strategy as Tires and other product strategy related tweets. Please enjoy them at the Strategy As Tires blog.

Assuming Normality

January 17, 2020

Another aspect of the diagram that the last post was written about is the difference between reality and the assumption of normality. Frequentists avoid reality by assuming normality when they normalize the data in their dataset before conducting their analysis. They hide the dynamics of the data collection as well. The dynamics of the geometry of the space are tied to the dynamics of the data collection.

Bayesians assume normality as well.

Statistical inference requires normality, so it is assumed. Most people don’t know how to do statistical inference with other distributions.

Don’t assume normality. Don’t assume data. Don’t assume the rate and result of your data collection.

Enjoy!

What that statistical snapshot won’t tell you

January 11, 2020

Watching a normal distribution achieve normality from a sample size of one is informative, but we jump over that by using more data and, worse, assuming normality. Slowing down and looking at less data will tell you where the short tail and the long tail are for a given dimension. The same is true of every subset you take from that normal.

The following graphic was taken from Peters, O., “The ergodicity problem in economics,” Nat. Phys. 15, 1216–1221 (2019), doi:10.1038/s41567-019-0732-0. The graphic shows us how the skewed distribution achieves normality. The footprint of the short tail of the skewed distribution does not exceed the footprint of the normal. Investments made in the short tail persist, while investments in the long tail vanish. More data points just reveal the error. Or, put another way, growth reveals the error.

I added the tail labels to the diagram. Upon close inspection, the normal is separated slightly from the skewed normal. The skewed normal remains inside the normal.

The averages of the skewed normal, from left to right, are the mode, the median, and the mean. The median is anchored at the midpoint between the mode and the mean. The median runs from that midpoint on the x-axis to the top of the mode. The steeper the mode, the closer the skewed distribution is to achieving the normal.
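The mode-median-mean ordering of a right-skewed distribution is easy to verify. A sketch using a lognormal as a stand-in for the skewed distribution in the figure (my choice of distribution, purely for illustration), since its three averages have closed forms:

```python
import math
import random

random.seed(1)
MU, SIGMA = 0.0, 0.6  # lognormal parameters, chosen only to make the skew visible

# Closed-form averages of the lognormal, left to right: mode, median, mean.
mode = math.exp(MU - SIGMA ** 2)
median = math.exp(MU)
mean = math.exp(MU + SIGMA ** 2 / 2)
print(mode, median, mean)  # strictly increasing, as in the skewed figure

# The empirical versions from a sample agree on the ordering.
xs = sorted(random.lognormvariate(MU, SIGMA) for _ in range(20000))
emp_median = xs[len(xs) // 2]
emp_mean = sum(xs) / len(xs)
print(emp_median < emp_mean)  # True
```

As the skew shrinks toward zero, the three averages converge to a single value, which is another way of saying the distribution has achieved the normal.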

When, at a given n data points, a distribution achieves normality, any subset of that distribution at that same n will be skewed. The definition of the subset implies the definition of a new aggregate dimension. That, in turn, implies a new nomial, aka a new peak.
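The claim that a subset of a normal is skewed can be checked directly. A hedged sketch, with the subset defined by a simple threshold (my choice of subset rule; the post does not specify one): the full sample is near symmetric, while the thresholded subset has clearly positive skew.

```python
import random

random.seed(11)

def skewness(xs):
    """Sample skewness: the third standardized moment."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / (n * var ** 1.5)

xs = [random.gauss(0, 1) for _ in range(50000)]

# The full sample is close to symmetric...
full = skewness(xs)

# ...but the subset above a threshold is noticeably right-skewed.
subset = skewness([x for x in xs if x > 0.5])

print(round(full, 3), round(subset, 3))
```

The truncation creates the skew: cutting the normal at any point leaves a short side and a long tail, which is the new peak, or nomial, the subset implies.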

In the next figure, I drew the bases of the distributions. Skewness implies an ellipse. Once normality is achieved, the base is a circle. Every distribution implies a core. Skewness implies tails and shoulders. The gray vertical line is my estimate of where the tail and shoulder transition. The pink circle divides the core and the tails. I labeled this as the shoulder, but lacking the data, that is the best I could do. The red areas are where the normal is outside the ellipse. Those areas are tails that emerged as the distribution approached normality. Investment there will not be lost as the sample size continues to increase.

The core depends on the variance, so it can get larger or smaller. When it gets larger, investments in that area of the tail should be reexamined. The core can be considered a “don’t care.”

Enjoy.