Yes or No in the Core and Tails II

June 4, 2018

The ambiguous middle of my decision tree from my last post, “Yes or No in the Core and Tails,” has bugged me for a few days. I have a hard time thinking that I drive up to a canyon via a few roads, climb down to the river, cross the river, climb up the other side, and select one of many roads before driving off. That is not a reasonable way to deal with a decision tree that doesn’t get entirely covered by my sample space.

So what is this mess hinting at? Do not stop sampling just because you’ve achieved normality! Keep on sampling until you’ve covered the entire sample space. Figure out what power of 2 will produce a decision tree wide enough to contain the sample space, then sample the entire decision tree. Before normality is achieved, not sampling the entire base of the decision tree generates a skewed normal. This exposes you to skew risk. There will also be some excess kurtosis, which brings with it kurtosis risk.
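As a quick sketch of that sizing step, here is one way to find the power of 2 wide enough to contain a sample space (a minimal Python illustration; the function name is mine, not from any calculator mentioned here):

```python
import math

def binary_sample_space(n_samples: int) -> int:
    """Smallest power-of-2 decision-tree base wide enough to hold n_samples."""
    depth = math.ceil(math.log2(n_samples))  # tree depth in bits
    return 2 ** depth

print(binary_sample_space(1800))  # 2048, i.e. 2^11
```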

Here is a quick table, Binary Space vs. Normal Sample Size, that you can use to find the size of the sample space after you’ve found the number of samples you need to achieve normality. The sample space is a step function. Each step has two constraints.

Given that it takes fewer than 2048 samples to achieve a normal, that should be the maximum. 2^11 should be the largest binary sample space that you would need, hence the red line. We can’t get more resolution with larger sample spaces.

Note that we are talking about a binary decision in a single dimension. When the number of dimensions increases, the number of nomials increases as well. This means that we are summing more than one normal. We will need a Gaussian mixture model when we sum normals. The usual insistence when adding normals is that they have the same mean and standard deviation. Well, they don’t, hence the mixture models.
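As a rough sketch of why the mixture is needed, here is a minimal two-component Gaussian mixture in Python. The component means and standard deviations differ, which a single normal cannot represent; the weights and parameters are hypothetical numbers chosen for illustration:

```python
import random
import statistics

# Hypothetical two-component mixture: the components need not share
# a mean or standard deviation, which is exactly why a mixture model
# is used instead of a single normal.
random.seed(0)
weights = [0.6, 0.4]
params = [(0.0, 1.0), (3.0, 0.5)]  # (mean, sd) per component

def sample_mixture():
    mu, sd = random.choices(params, weights=weights)[0]
    return random.gauss(mu, sd)

draws = [sample_mixture() for _ in range(10_000)]
# The mixture mean is the weighted sum of component means: 0.6*0 + 0.4*3 = 1.2
print(statistics.mean(draws))
```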

I took some notes from the Bionic Turtle’s YouTube on Gaussian mixture models. Watch it here.

Gaussian Mixture Model

Back when I was challenging claims that a distribution was binomial, I wondered where the fill between the normals came from. As I watched a ton of videos last night, I realized Probability Massthat the overlapping probability masses at the base had to go somewhere. I quickly annotated a graph showing the displaced probability mass in dark orange, and the places where the probability mass went in light orange. The areas of the dark orange should sum up to the areas of light orange. The probability masses are moved by a physics.

A 3-D Gaussian mixture model is illustrated next. I noted that there are three saddle 3D Gaussian Mixture Modelpoints. They are playing three games at once or three optimizations at once.  EM Clustering is alternative to the Gaussian mixture model.

So to sum it all up, do not stop sampling just because you’ve achieved normality! 





Yes or No in the Core and Tails

June 2, 2018

Right now, I’m looking at how many data points it takes before the dataset achieves normality. I’m using John Cook’s binary outcome sample size calculator and correlating those results with z-scores. The width of the interval matters. The smaller the interval, the larger the sample needed to resolve a single decision. But, once you make the interval wide enough to reduce the number of samples needed, the decision tree is wider as well. The ambiguities seem to be a constant.

A single bit decision requires a standard normal distribution with interval centered at some z-score. For the core, I centered at the mean of 0 and began with an interval between a=-0.0001 and b=+0.0001. That gives you a probability of 0.0001. It requires a sample size of 1×10^8, or 100,000,000. So Agile that. How many customers did you talk to? Do you have that many customers? Can you even do a hypothesis test with statistical significance on something so small? No. This is the reality of the meaninglessness of the core of a standard normal distribution.
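For what it’s worth, the 1×10^8 figure can be roughly reproduced with the standard binary-outcome sample-size formula, n = p(1−p)(z/E)^2, assuming the worst case p = 0.5 and a 95% confidence z of 1.96 (an assumption on my part, not a documented detail of Cook’s calculator):

```python
import math

def binary_sample_size(half_width: float, z: float = 1.96, p: float = 0.5) -> int:
    """Samples needed to pin down a binary proportion to +/- half_width."""
    return math.ceil(p * (1 - p) * (z / half_width) ** 2)

print(binary_sample_size(0.0001))  # ~9.6e7, on the order of 1x10^8
```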

Exploring the core, I generated the data that I plotted in the following diagram.


With intervals across the mean of zero, the sample size is asymptotic to the mean. The smallest interval required the largest sample size. As the interval gets bigger, the sample size decreases. Bits refers to the bits needed to encode the width of the interval. The sample size can also be interpreted as a binary decision tree. That is graphed as a logarithm, the Log of Binary Decisions. This grows as the sample size decreases. The number of samples required to make a single binary decision is vast, while a decision about a subtree requires far fewer samples. You can download the Decision Widths and Sample Sizes spreadsheet.

I used this normal distribution calculator to generate the interval data. It has a nice feature that graphs the width of the intervals, which I used as the basis of the dark gray stack of widths.

In the core, we have 2048 binary decisions that we can make with a sample size of 31. We only have probability density for 1800 of them. 248 of those 2048 decisions are empty. Put a different way, we use 11 bits, or binary digits, bbbbbbbbbbb, but we have don’t cares at 2^7, 2^6, 2^5, 2^4, and 2^3. This gives us bbb*****bbb, where each b can be a 0 or 1. The value of each don’t care would be 0 or 1, but its meaning would be indeterminate. The don’t cares let us optimize, but beyond that, they happen because we have a data structure, the standard normal distribution, representing missing but irrelevant data. That missing but irrelevant data still contributes to achieving a normal distribution.
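The 248 empty decisions decompose exactly into those powers of 2, which a couple of lines of Python can confirm:

```python
excess = 2048 - 1800  # decisions with no probability density
# Which powers of 2 make up the 248 "don't care" slots?
powers = [i for i in range(11) if (excess >> i) & 1]
print(powers)  # [3, 4, 5, 6, 7], i.e. 2^7 + 2^6 + 2^5 + 2^4 + 2^3 = 248
```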

My first hypothesis was that the tail would be more meaningful than the core. This did not turn out to be the case. It might be that I’m not far enough out on the tail.


Out on the tail, a single bit decision on the same interval centered at x=0.4773 requires a sample size of 36×10^6, or 36,000,000. The peak of the sample size is lower in the tail. Statistical significance can be had at 144 samples.
Core vs Tail

When I graphed the log of the sample sizes for the tail and the core, they were similar, and not particularly different as I had expected.

I went back to the core and drew a binary tree for the sample size, 2^11, and the number of binary decisions required. The black base and initial branches of the tree reflect the definite values, while the gray branches reflect the indefinite values, or don’t cares. The dark orange components demonstrate how a complete tree requires more space than the normal. The light orange components are don’t cares of the excess-space variety. While I segregated the samples from the excess space, they would be mixed in an unbiased distribution.

Decision Tree

The distribution as shown would be a uniform distribution; the data in a normal would occur with different frequencies. They would appear as leaves extending below what is now the base. Those leaves would be moved from the base, leaving holes. Those holes would be filled with orange leaves.

Given the 2^7, 2^6, 2^5, 2^4, and 2^3 don’t cares, there is quite a bit of ambiguity as to how one would get from the 2^8 branches to the 2^2 branches of the tree. Machine learning will find them. 1980s artificial intelligence would have had problems spanning that space, that ambiguity.

So what does it mean to a product manager? First, avoid the single bit decisions because they will take too long to validate. Second, in a standard normal the data is evenly distributed, so if some number of samples occupies less than the space provided by 2^x bits, they wouldn’t all be in the tail. Third, you cannot sample your way out of ambiguity. Fourth, we’ve taken a frequentist approach here; you probably need to use a Bayesian approach. The Bayesian approach lets you incorporate your prior knowledge into the calculations.
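As a minimal sketch of that Bayesian approach, a beta-binomial update folds prior knowledge into a yes/no estimate; the prior counts below are hypothetical numbers, not anything from the post:

```python
# Beta(a, b) prior over the yes/no rate, updated with observed samples.
# The prior encodes what you already believe, so far fewer samples
# are needed to move the estimate than in the frequentist setting.
def update(prior_a: float, prior_b: float, yes: int, no: int):
    return prior_a + yes, prior_b + no

a, b = update(8, 2, yes=3, no=7)  # prior leaning "yes", data leaning "no"
posterior_mean = a / (a + b)
print(posterior_mean)  # 11/20 = 0.55
```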


Kurtosis, Another Definition

May 31, 2018

Tonight, I came across a third definition of kurtosis. This definition begins at 25:30 in Statistics 101: Is My Data Normal. This source defines kurtosis as a distribution having higher than expected probability mass in the tails. Compare this to the typical definition, the one returned from a Google search, the sharpness of the peak of a frequency-distribution curve, which I’ve not used since I found kurtosis to be the curvature of the tails. See More On Skew and Kurtosis. I’m still lost as to how the kurtosis statistic translates into the curvatures of skewed distributions. Complicating the curvature issue is that in an n-dimensional normal, there are more than two tails. There does seem to be a pattern of curvatures defining a torus for a normal without excess kurtosis, or a ring cyclide for a normal with excess kurtosis. The torus sits flatly on top of the tails of a normal, parallel to the base plane. The ring cyclide sits flatly on top of the tails, but tilted with regard to the base plane.

This third definition of kurtosis is nicely quick to grasp. The typical definition seems to be confused with n, the number of data points. With little data, the normal is thin, high, and has two short tails, given the absence of skew. With a lot of data, the normal is wide, lower, and has longer tails, given the absence of skew.
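That tails-based definition is easy to check numerically. Below is a rough sketch comparing excess kurtosis for a normal sample and a heavier-tailed Laplace sample (the moment formula is the standard one; the sample sizes are arbitrary):

```python
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis: fourth standardized moment minus 3."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

random.seed(1)
normal_draws = [random.gauss(0, 1) for _ in range(50_000)]
# A Laplace(0, 1) draw has heavier tails: its excess kurtosis is 3, not 0.
laplace_draws = [random.expovariate(1) * random.choice((-1, 1)) for _ in range(50_000)]

print(excess_kurtosis(normal_draws))   # near 0
print(excess_kurtosis(laplace_draws))  # near 3
```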

I have not gotten to topological data analysis and the issues of what the torus or ring cyclide is telling us.


The Technology Adoption Lifecycle

May 24, 2018

A while back I wrote about all the so-called Chasms. These days we begin our continuous innovations in the late mainstreet. Nobody crosses the Chasm.

I was watching data from a pseudorandom generator for a normal distribution converge to a normal. That is supposed to happen by the time you have 36 data points. It didn’t happen. And, it didn’t happen by the time I plotted 50 data points. It didn’t help that I had to generate more data after the first 36 data points.

I made a mistake. Each call of the generator starts the process off with a new seed, aka a new distribution, so of course, it doesn’t converge. I’m not liking this dataset mindset of statistics. I’m not p hunting. I’m trying to validate a decision made in the Agile development process. I don’t have all day, but apparently, I have a week. Claims about fast discovery turn out to be bunk. A friend of mine suggested taking a Bayesian approach instead.
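The fix, for what it’s worth, is to seed one generator once and keep drawing from the same stream; a minimal sketch:

```python
import random

# One generator, seeded once: successive draws come from a single
# stream, so the running mean settles toward the true mean of 0.
rng = random.Random(42)
draws = [rng.gauss(0, 1) for _ in range(5_000)]

running_mean = sum(draws) / len(draws)
print(running_mean)  # close to 0
```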

Through some now-forgotten thought process, I was plotting sigmas and z-scores, et al. That brought me back to some details of the technology adoption lifecycle (TALC). So I Googled it and found a whole lot of graphs of it that were just flat-out wrong. No wonder everyone is confused about the Chasm. They are using one of the revised (wrongly drawn) figures. So I’ll show you some of the figures, point out the errors, and draw an older, more correct view.

The misstatements seem to be sourced from Geoffrey Moore. When he moved into the late phases after the dot-com bust happened, he set about making the TALC relevant to the late phases and the biz orthodoxy. He has taken back most of the claims he made in his prior version of the TALC. It’s all disappointing.

One thing Moore said back in the beginning of his TALC, not Rodgers’ version, was that it was not a clock. I always thought he meant not an asynchronous clock, aka not like email. No, what he meant was we can choose to enter any phase we want. That leaves money on the table, but it accurately reflects what businesses do. This very characteristic means that businesses can completely skip the Chasm, the bowling alley, and his first tornado. Yes, some acquiring companies skip the second tornado or just suck at it, so the acquisition fails. Mostly, acquisitions don’t even try to succeed. The VCs got their exit, that being the whole point of most VC investments these days.

Once you skip over the processes that are Moore’s contribution to technology adoption, people feel free to just fall back to Rodgers, a solely sociological collection of populations. Moore took Rodgers someplace else. Yes, Rodgers didn’t see the Chasm. But, Moore didn’t see Myerson’s Poisson games. The underlying model changed over time. I’ve modified the model myself. But, Moore’s processes didn’t move.

So let’s look at the mess.

01 TALC 2018

Figures from

  4. Adoption-Lifecycle.png

I’m just citing the sources of the figures. They probably copied them from others that copied them. I’m not assigning blame. But, this very small sample demonstrates the sources of confusion about the Chasm.


  • In figures 1, 2, 3, and 5, the first phase is called “Innovators.” Well, no. The inventors happened a long time before the technology adoption lifecycle began. The word innovator is indicative of management. In the earlier texts, this population was called technical enthusiasts. They are engineers, not business people. And, in the bowling alley and vertical sense, they were programmers known to the early adopter for the given vertical.
  • In figure 2, the gray graph behind the technology adoption lifecycle has an axis labeled “Market Share.” No, in no way is a technology firm allowed to capture 100% of the market share. The maximum is 74%. After that, you have a monopoly and your business is in violation of antitrust law. The EU is probably stricter than the US. That 74% is the US threshold.
  • In figures 1, 2, 3, and 5, the second phase is called the “Early Adopters.” Under Moore’s version, this phase is more accurately called the bowling alley. It is where we sell into the vertical markets by selling to one B2B early adopter in each vertical. We would enter six verticals with a product conceived by the early adopter. That product would be built on the technology we are trying to get adopted. Products are just the means of getting the underlying technology adopted. The product visualization is the early adopter’s alone. The idea is not ours. We sell to six early adopters. This takes time. There is no hurry. We have to ensure that each of these six early adopters achieves their intended business advantage.
  • The population percentages for each phase are accurate in figure 3.
  • In figure 4, the Chasm is correctly placed, but the early adopters are to the left, aka before the Chasm, and their vertical is to the right. It is not accurate to call the entire phase where the Chasm occurs the early adopters. There is a two-degrees-of-separation network between the early adopter and their vertical. Sales reps find no particular advantage in attempting to sell to a third degree of separation. Selling to that network constitutes the central issue of the Chasm.
  • Figure 4 also splits the early and late majorities in the wrong place.
  • In figure 5, the Chasm is incorrectly placed. The Early Majority is really the horizontal, usually the IT horizontal. The Tornado sits at the entrance of this phase, the horizontal, not the Chasm. The Chasm sits at the entrance of the verticals.

One of the problems that Moore encountered was the inability of managers to know where they were in the TALC. These figures do not agree with each other, so how would managers using different versions come to agree?

I’ve made my own changes to the TALC. First, the left convergence of the normal is well after the R&D, aka the science and engineering research that firms no longer engage in. The left convergence is long after the research has gained bibliographic maturity. The left convergence only happens when researchers with Ph.D.’s and master’s degrees decide to innovate after having invented. They happen long before the TALC. This doesn’t look like how we innovate these days. These days we innovate in the late phases, in a scientific- and engineering-free, idea-driven manner, with design thinking innovating around the thinnest of ideas. These early phases, the phases before the late majority, start with discontinuous innovation. These days, in the phases after the early majority, we innovate continuously. We don’t try to change the world. We are happy to fit in and replicate as directed by the advertising-driven VCs. The VCs demand exits so quickly that we couldn’t change the world if we wanted to.

The second change was in the placement of the technical enthusiasts. They are a layer below the entire TALC. They are the market in the IT horizontal. But, they work everywhere.

The third change involves integration with my software-as-media model. Each phase changes its role as a media. A media has a carrier and some carried content. All software involves the stuff used to model, and the content being modeled. Artists use pens, inks, paints, brushes, and paper. Developers use hardware, software, code, … Artists deliver a message. Developers deliver a message, at times more obvious than at other times.

The fourth change is my labeling the laggards as the device market and the phobics as the cloud. I do this because these populations do not want their technology use to be obvious. The phobics use technology all the time, but with deniability. They use their car, not the computer that runs the car. Task sublimation and pragmatism organize the TALC. The phobics get peak task sublimation. This is where the technology disappears completely outside of the technical enthusiast population.

Here is a revised view of the TALC that incorporates my extensions and changes.

02 Revised TALC

The end is near. The underlying technologies disappear at the convergence on the right. Then, we will need new categories that we can only build from discontinuous innovation. If you don’t read the journals, you won’t see it coming. And, if you spent your life doing continuous innovation, you won’t be able to innovate discontinuously.

Another figure out on Google, Gartner Hypecycle, correlates Gartner’s Hype Cycle with the TALC. But, this one is absolutely wrong. Gartner has nothing to say about technologies in the vertical. Gartner starts with the IT horizontal. If the horizontal is not the IT horizontal, Gartner has nothing to do with the TALC. The Chasm happens a long time before the Trough of Disillusionment. The Hype Cycle starts at the tornado that sits at the entry into the IT horizontal.


I’ve made the necessary adjustment in the following figure, Gartner Hypecycle and TALC Modified. The Hype Cycle does manifest itself in the IT Horizontal and all subsequent phases. One Hype Cycle does not cross from one TALC phase to another. Each phase has its own hype cycle. I’ve only shown the hype cycle for the IT Horizontal.

The original figure was found in a Google image search. It was sourced from

The reason I moved the Hype Cycle is that in the search for clients in the vertical, IT is specifically omitted, and IT is not involved in the project. The client has to have enough pull to keep IT out. The clients would be managers of business units or functional units other than the eventual intended horizontal that you would enter in the next phase. The Chasm and the early adopter problems discussed relative to the earlier graphics are apparent here.

The second tornado came up in Moore’s post-web-1.0 work. It happens after a purchase but before integration. The VCs get their money on completion of the purchase. The acquiring company gets value from the M&A only after the integration attempt succeeds. The AT&T acquisition of DirecTV had a very long tornado. That tornado is probably done by now. Most M&As fail. Many M&As are done solely to ensure the VCs recover their money. These are not done because the acquired company will generate a return for the acquirer. The underlying company fades into oblivion shortly after the acquisition. I’ve put both tornados in the next graphic. The timing of the M&A is independent of phase.


In most figures, the acquiring company is shown moving upwards from the M&A. That is incorrect. The acquiring company is post-peak, post early majority and is in permanent decline. The best that can happen is that the convergence on the right will be moved further to the right granting the acquirer more time before the category dies. The green area in the figure reflects the gains from a successful integration, which happens to require a successful second tornado.

What was not shown was the relation of the first tornado to an IPO that pays a premium. That only happens with discontinuous innovation, and only in the early phases of the TALC. With the innovations we do these days, we are in the late phases of the TALC, so there is no premium on the IPO.  Facebook did not get a premium on their IPO.

One aspect of today’s TALC that I have not worked out is how the stack of the IT horizontal is cannibalized by the cloud.

Back when I gave my SlideShare presentation in Seattle in 2009, a lot of people didn’t feel that the TALC was relevant. It was still relevant then. It is still relevant now. We leave much money on the table by rushing, by being where everyone else is, by quoting the leaders of the early phases while we work in the late phases. We settle for cash, instead of the economic wealth garnered by changing the world. If we set out to change the world, the TALC is the way.






Generative from Constraints, a Visualization

May 23, 2018

I came across a tweet from Antonio Gutierrez. Several constraints on a plane form a triangle. That triangle could have been a point before the constraints were loosened enough to give us some space within that triangle. More constraints would just give us a different polygon.

The loosened constraints required some room for continuous innovation. The point that became the triangle could be thought of as a “YET” opportunity, a problem that couldn’t be solved yet. But, with the triangle, the opportunity awaits. So we dive in from some point of view where we can see the point at some distance. We establish a baseline from our point of view, the origin, to the center of the triangle. From that origin, we project three lines up to and beyond the triangle. This volume is code. At some point above the constraint plane, we take a slice through that volume of code, the blue triangle in Generative From Constraints over Time from Origin, and ship it. We continue to work outward. This would involve very little rework.

Alas, things change. The constraints contract (red arrows) causing us rework, or widen (green arrows) to give us space for new opportunities. The black triangle at the intersections of the constraints could widen or contract in parallel to our current boundaries (black arrows). Or, we could move our origin up or down to widen or narrow our current projection. That’s three classes of change. Each class gives us different volumes to fill.

In my game-theoretic illustrations, the release is always in a face-off with the requirements. Such is the nature of design in the axiomatic sense: requirements from the carried content as assertions, balanced against the enabling and disabling elements of the carrier technology. The projection doesn’t go hockey-stick-like into the constraints of the underlying geometry. There is always a constraint up there that’s much closer than we’d like to admit. Goldratt insists that there is always another constraint. And, in hyperbolic geometry, there is always a convergence at the nearby infinity.

In another view, API - w Carrier and Carried, the first line (red) from the origin through the center of the triangle and out into space is where we start the underlying technology. It grows outward, thickening the line into a solid with the pink triangle as the base of the carrier technology. The carried content is built outward from the carrier core.

Constant change can be managed. Moving the origin down contracts the code volume. Moving G towards B contracts the code volume. Moving E towards A contracts the code volume. And, moving F towards C contracts the code. You can know before you code where rework is required and where your opportunities are to be found.

I’ve kept this simple. You can imagine that your carrier and your carried content have their own constraints, timeframes, and rates. There would be two planes, two centerlines, two triangular solids intersecting on the plane representing what we will ship. We could slip in a plane to project onto and out from. Oh, well.



Holes II

May 8, 2018

This week I revisited fractional calculus. A few months ago, someone on Twitter tweeted a link to a book on fractional calculus. I didn’t get far. My computer crashed, so I lost my browser tabs. I didn’t reload them, because I had so many that the browser was slow at doing its job, which apparently is collecting vast numbers of tabs of read-me wannabes.

The topic came up again. I’m not sure the original link got me to the Chalkdust article, or if I had to Google it. The content was less complete, and not historical at all. But, you come away with two methods of getting the job done.

The article ended with a graphic that blew me away when I look at it from the perspective of discontinuous innovation. The discontinuity is large. It went on to hide, you might say, another discontinuity. I’m always asked what discontinuities are. I try never to make the mathematical answer to that question. The Wright brothers were not math equations.

So here is the figure from the article, Fractional Calculus. Do you see the discontinuities? The first one is glaring if you’re always looking for and needing discontinuities. Much like the discontinuities that the Mittag-Leffler Theorem, discussed in my last post, Holes, lets us generate, one or more discontinuities are essential to discontinuous innovation. There is profit in those holes. They are profit beyond the cash plays of continuous innovation, the profit of economic wealth that accumulates to the whole, the “we,” not just to the “me.” They are profit in the sense of new value chains, new careers, and revised ways to do jobs to be done.

I marked the figure up, in Fractional Calculus - Discontinuity, to uncover the discontinuities. We can start with the plane ABCD. The plane is outlined with a thin blue line containing the red surface from which the differentiation process departs. I drew some thick red lines to outline the hole where the process lifts the differentiation process above the plane.

There is a shadow that is visible through the front surface of the process. It was visible in the original graph. Highlighting it hides it. The thin orange lines highlight that surface.

D8 and D9 do not intersect. The third dimension lets them slide by each other without intersecting. When confronted with an intersection of constraints, look for a dimension that separates them, or look for a geometry that separates them. As product managers, we just have to look for the mathematicians and scientists that separate them. Product has always been about breaking or bending a constraint. Here we broke one. It looks like all we did was bend a constraint as of yet.

The hole is on the floor of the atrium, not on the canvas comprising the surface of the tent.  I drew a line parallel to the y-axis and put a hole on it so we could see the discontinuity. It’s not a hole that is a point. It is an area, an area on the plane. I drew a gray line across the plane to characterize the hole on that line. These scan lines don’t have to be parallel or orthogonal to the x-axis, but a polar or complex space would not simplify what we are doing here.

Everything under the surface of the graph and above the original plane is the hole. Another plane would characterize the hole differently.

That’s the first discontinuity.

Having read the article, I know that fractional derivatives involve deriving and then adding an approximation of the fractional component, or deriving past the integer power and subtracting the fractional component. In integer calculus, it’s all about functions until you get to a constant, a number. And, when you get a constant of zero, you’re done. There is a wall there. There is a hole on the other side of that wall into which no mathematics I know goes to take a swim. Yes, the differentials can be negative. We call that process integration. But, the switch between analysis and the approximation by the Gamma function is significant as is the switch between analysis and number theory.
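For power functions, the Gamma-function approximation is concrete: the α-th derivative of x^k carries the coefficient Γ(k+1)/Γ(k+1−α) on x^(k−α). A short sketch, which also checks that applying the half-derivative twice recovers the whole derivative:

```python
import math

def frac_coeff(k: float, alpha: float) -> float:
    """Coefficient of x**(k - alpha) in the alpha-th derivative of x**k,
    via Gamma(k + 1) / Gamma(k + 1 - alpha)."""
    return math.gamma(k + 1) / math.gamma(k + 1 - alpha)

half = frac_coeff(1, 0.5)           # d^(1/2)/dx^(1/2) of x -> half * x**0.5
print(half)                         # 2/sqrt(pi), ~1.1284
# Half-differentiating the result again recovers d/dx x = 1:
print(half * frac_coeff(0.5, 0.5))  # 1.0
```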

I drew an axis above the graph covering derivatives only, omitting integration, and projected the boundaries between equations, numbers, and zero. At zero, the zero deflects integration when zero is a number, rather than a function with the value of zero. It’s a gate. When that zero is the value of a function, integration passes unimpeded into the negative differential region.

Most of the time, the “does not exist” answer to the equation just means that we don’t know the math yet. Yes, we cannot divide by zero until calculus class, then we divide by zero all the time. The Mittag-Leffler theorem welcomes us to put holes where we need them. The mathematics is simpler without holes, so mathematicians sought to get rid of them. But, as product managers, we need our holes, if as product managers you are commercializing discontinuous innovation.

On our plane, point D is at the far left, where we’ve gone to number. The second hole is to the left of the orange line I projected up to our function-number axis. I don’t yet know what’s on the other side of that line. Now, I’ll have to go there.



Holes

April 27, 2018

I’m about discontinuous innovation. I’m asked at times to define discontinuities. Well, Kuhn’s crisis science is one answer. Mathematical holes are another. Anything on the other side of a constraint that nobody knows how to cross, yet another. Or, a logic that drives across an inconsistent space. Or, the line approaching the horizon of a hyperbolic space that never arrives, because it goes to a limit and can’t pass that point of convergence. Or, the simpler case of anything not continuous.

As marketers, our answers are simpler. If we can bring it to an existing market from within an existing category, it is continuous. We’ll make some money, but we won’t change the world. We put our innovation into the late mainstreet, the device or the cloud market and starting there we leave a lot of money behind. Don’t worry, everyone else left that money behind as well.

But, if we have to take the long road of complete adoption in a nonexistent category and face the nascent bowling alley, the B2B non-IT early adopter, and the so-called non-existent Chasm; if we don’t leave any money behind, and we have to create careers and a value chain, those being outcomes from generating a category which in turn generates larger financial returns than what our own intra-firm managerial accounting tells us, then the innovation is discontinuous.

I posted this after hearing an interview about the Mittag-Leffler Theorem. The Mittag-Leffler Theorem is about holes, or more to the point, how to make holes with a single function. Or, simpler, how to describe them. The holes exist. The function doesn’t. The point of the function is to process the holes to some end. This function can be described in a manner that deals with all the holes, not just one. Green’s theorem deals with one hole you encounter while doing the integration.

The holes of the Mittag-Leffler Theorem show up in complex analysis. Kicking it back to marketing, we seek one hole, just one, but this theorem tells us that there would be many holes. That’s the point of the bowling alley in Moore’s technology adoption lifecycle. We put the technology into a product built for one early adopter in their vertical market. This being one lane of our bowling alley. This being one hole. Then, when we have the capacity, we put the technology in another product for another early adopter in another vertical market. This being another lane of our bowling alley. This being another hole. We have to do this a total of six times across seven years. Discontinuous innovation is not fast. There is an eventual point, success in the tornado we face as we enter the horizontal maybe ten years later. But there is a point.

Those six lanes would fill six holes with six different value propositions in six different vertical markets, but the same underlying technology. Each of those client engagements would be in a different place at different heights in the industrial classification tree. That puts the holes at different heights from the complex plane.

But, back to the math. One kind of function that requires the theorem is the meromorphic function, like the one below. The cool thing is that you can write a single function that describes all those holes. This is a relatively simple function. The holes could be all over the plane, and still a single function would handle them. I can imagine using a Fourier sum to get this done. A Fourier analysis would give us a collection of trig functions describing a wave that hits these holes. That sum would be a sum of different waves where each wave hits some of the holes.

Holes 01


This example is simple. It only requires one frequency. The cosines go to zero at each hole. We take the reciprocal of that, aka we divide by zero, and a hole results. Real cases would be more involved because the holes are of different sizes. I don’t know how to do that yet. But this is a start. It’s a good mystery if your math isn’t there yet.
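The reciprocal-of-cosine construction is easy to check numerically. Here is a minimal sketch (the function name `sec` is mine, not from any figure): as we approach a zero of cosine, the reciprocal blows up into a pole, a hole.

```python
import math

def sec(x):
    """Reciprocal of cosine: 1/cos(x).

    cos(x) is zero at odd multiples of pi/2, so sec(x) has a pole
    (a hole) at each of those points: one function, many holes.
    """
    return 1.0 / math.cos(x)

# The closer we get to a zero of cosine, the larger the reciprocal.
pole = math.pi / 2
print(abs(sec(pole - 1e-3)))  # roughly 1,000
print(abs(sec(pole - 1e-6)))  # roughly 1,000,000
```

One function handles every hole along the axis, which is the whole appeal of the construction.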

Holes 02

I drew the red line to show where the wave would be oscillating. The thick black line is the wave, a cosine wave. The reciprocal of the cosine gives us the holes. Those holes are where we will be making our money for a few years: seven years, plus development time, plus value proposition development and execution, plus two or more years in that vertical, if you’re in the last of the six lanes of your bowling alley.

The function f(x) is not a complex function on the complex plane, but if the red line sits at an angle other than zero degrees, the function is complex. The origin is at (0,0).

Fourier analysis can break down a much more complicated signal into a wider set of waves, and sum all of those waves into a single function. I’ll add a few more complications to the figure.
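A sum-of-waves decomposition of this kind can be sketched numerically. In this example, which is mine rather than the post’s figures, a signal is built from two cosine waves and the FFT recovers exactly those components; the frequencies 3 and 7 are arbitrary choices.

```python
import numpy as np

# Build a signal from two cosine waves of different frequencies.
n = 256
t = np.arange(n) / n
signal = 2.0 * np.cos(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)

# The FFT breaks the signal back down into its component waves.
spectrum = np.fft.rfft(signal)
amplitudes = np.abs(spectrum) / (n / 2)

# The energy sits at the two frequencies we put in.
peaks = sorted(int(i) for i in np.argsort(amplitudes)[-2:])
print(peaks)  # [3, 7]

# Summing the components back reproduces the original signal.
reconstructed = np.fft.irfft(spectrum, n)
print(np.allclose(signal, reconstructed))  # True
```

The same machinery extends to many more components, which is what lets a single sum of waves hit many holes.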

Holes 03

So this was my first try. It’s wrong. This is complex analysis, so the waves are on an axis through (0,0), and a different complex variable would multiply each wave. Regardless of where you put the holes, a sum of complex trig functions can get us there. The figure shows the component waves that a Fourier analysis would deliver.

Holes 04

Here, I have put a hole out there in complex trig. I should have drawn the black ellipse centered at the origin. This polar complex view is far simpler than the waves shown in the previous figure. There might be more waves to add up here, but it is clearer. I’m not sure the trig functions are correct, but this is my best and last attempt for now.

I raised it for some clarity, but that puts the height in the equations. I deliberately drew the height of the new hole some depth w below the hole, so for this wave the height adds v and subtracts w. The reason I put the height in the equation takes us back to the marketing, back to a vertical issue relative to where we enter the vertical market associated with the hole.

Verticals are organized by the industrial classification tree. Every vertical is a subtree of the classification tree. Don’t enter the vertical at the top of the subtree, nor at the bottom. Leave yourself some room to generalize toward management at the root of the subtree, or to specialize toward the detailed work in its leaves. The most difficult work would be to implement and sell to siblings. There will be enough to do for the early adopter client and their company.

The height of the hole,  w, would match with the vertical height of the client’s business in the industrial classification tree.

We will look at the vertical in the technology adoption lifecycle (TALC). The vertical is just one of several normals that are summed into the TALC. They have not been drawn to scale. Keep in mind that the device/laggard and cloud/phobic markets are small and short in terms of time.


The hole is shown in the top layer of the figure, which shows the individual normals that get summed into the TALC shown in the bottom layer. The hole is on the far right. The normal for the vertical would replace the meromorphic function we used in the previous figures. The hole is associated with a single lane in the bowling alley.

There would be six lanes for a given discontinuous innovation. They would be entered successively until the company can afford, and is staffed, to run more projects at once. One early adopter engagement, particularly the early ones for a given technology, would take two or more years. That these engagements are stretched out over time satisfies the requirement of the Mittag-Leffler theorem that the holes be clearly separated.

Now, we’ll fill the bowling alley.

Bowling Alley

I’ve used a fragment of the industrial classification tree to find a B2B early adopter in the middle of their vertical. Then, I measured the depth of the early adopter’s business in that tree against the total depth of the classification tree. Then, I put the hole for their position on the normal of the aggregate TALC. All six verticals were measured in the same way and placed on the aggregate TALC. Then, I used the polar form to build the hole-accessing functions. There are six verticals, one for each early adopter. We need six different verticals, rather than six engagements in one vertical. I then set up a rough schedule for getting those six applications of the underlying technology done.

Once all six verticals have been built, we ensure that the early adopter’s value expectations are met. Then, we help them write their business case. We will use that business case when we market and sell to the early adopter’s network through the first three degrees of separation.

Once we have built successful applications in those six verticals, we can sell the underlying technology more directly into the IT horizontal. It takes quite a while. It is not the flash in the pan miracles we see in the consumer phase. Time is money, earned money.






“Pinpoint: How GPS Is Changing Technology, Culture, and Our Minds”

April 25, 2018

In Greg Milner’s “Pinpoint,” James Cook, a British sea captain of the mid-to-late 1700s, sought to discover how the Polynesians navigated. The Polynesian navigators could demonstrate their abilities, but they could not say what they knew. The knowledge Cook sought was implicit.

By the end of the book, the knowledge was a collection of memories that were not to be forgotten. This said some 200 years later. Consider the agilists shipping errors for 200 years.

Consider what spellcheck does. It reduces our confidence in our ability to spell. It attacks us.

GPS reduces our ability to know where we are in the physical setting. GPS is more of a clock than a compass. I don’t wear a watch anymore. I don’t care what the bus schedule says. I just want to know how far I am from the next bus. Sometimes, the buses around here just never show up. One day, with a traffic jam downtown, three buses in a row never arrived.

GPS reduces our memories of the contexts of places. Places become numbers, numbers in a particular context that have nothing to do with specific places. The developers of GPS picked a representation and evolved that representation. When they solved the navigation problem, they extended the representation because they could see things they had never imagined. They can determine the humidity levels in the air. They can sense the movement of land masses. They can take over from the seismographs once the seismographs get swamped. They can notice when the Earth’s center of gravity changes. They know when masses of water move.

But, mostly, they change our memories of place.

We live in an age that would rather disregard the experts. We deliver products for novices. We don’t ask experts what their cognitive models consist of. We deliver software at the level of the introductory class.

A tweet this week linked to an article on how chaos researchers are ignoring formulas and just looking at the data, at the trajectories themselves, using machine learning. Their system works. This hints at developing software where you don’t talk to the users or the customers. You just look at the data. The trap will move from explication to illumination. Many sensors need illuminators that make the sensed visible before they can capture the data. This is all well and good, but neural nets can’t tell us the equations, the conceptual frameworks. They capture the results from a given dataset. They can sense the differences between salamanders and lizards, but they can’t tell you what those differences are.

When we code, we ask questions about situations that differ by one bit. If they differed by a kilobyte, it would be much easier to tell them apart. We could get by with much less data. When the difference is a mere bit, we need upwards of 600k examples. That’s big data.

“Pinpoint” was an interesting story of adoption and adaption, competition and collaboration–coopertition. Product managers should find it a good read.


A Few Notes

March 20, 2018

Three topics came up this week. I have another statistics post ready to go, but it can wait a day or two.

Immediacy and Longevity

I crossed paths with a blog post, “Content Shelf-life: Impressions, Immediacy, and Longevity,” on Twitter this week. In it, the author talks about the need for a timeframe that deals with both the rapid immediacy and the longevity of a product.

When validating an Agile-developed feature or use case, achieving that validity tells us nothing about the feature or use case over its longevity. When we build a feature or use case, we move as fast as we can. The data is Poisson. From that, we estimate the normal. Then, we finally achieve a normal. Operating on datasets, instead of time series, hides this immediacy. Once that normal is achieved, we engage in statistical inference while continuing to collect data toward the longevity. This data collection might invalidate our previous inferences. We have to keep our inferences on a short leash until we achieve a high-sigma normal, one big enough that it stops moving around and its radius stops shrinking.
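That short leash can be sketched as a simulation: Poisson arrivals accumulate, and the estimated mean only settles once enough data is in. The helper names and the rate of 10 are my own choices.

```python
import math
import random

random.seed(42)

def poisson_sample(lam):
    """One Poisson draw with rate lam (Knuth's algorithm)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# In the immediacy the estimated mean jumps around; over the
# longevity it settles near the true rate of 10.
lam = 10.0
total = 0.0
estimates = {}
for i in range(1, 5001):
    total += poisson_sample(lam)
    if i in (10, 100, 5000):
        estimates[i] = total / i
print(estimates)
```

Inferences drawn from the ten-sample estimate are the ones that need the short leash; the 5,000-sample estimate has earned some trust.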

In the geometry sense, we start in the hyperbolic, move shortly to the Euclidean, and move permanently into the spherical. The strategies change, not the user experience. The user population grows. We reach the longevity. More happens, so more affects our architectural needs. Scale chasms happen.

The feature in its longevity might move the application and the experience of that application to someplace new, distant from the experience we created back when we needed validity yesterday, distant from the immediacy. The lengthening of tweets is just one example. My tweet stream has gotten shorter. That shortness makes Twitter more efficient, but less engaging. I’m not writing so many tweets to get my point across. There is less to engage with.

This longer-term experience is surprisingly different. In the immediacy, we didn’t have the data to test this longest-time validity. Maybe we can Monte Carlo that data. But how would we prevent ourselves from generating more of that immediacy data in bulk, data that won’t reflect the application’s travel across the pragmatism gradient?

The lengthening of tweets probably saved them some money because they didn’t have to scale up the number of tweets they handled. Longer tweets take up more storage, but no more overhead, a nice thing if you can do it.

Longest-Shortest Time

Once the above tweet took me to that post on the Heinz Marketing site, I came across the article “The Longest Shortest Time” there. The daily crises make a day long, but the days disappear rapidly in retrospect. The now, the immediacy, is hyperbolic. The fist of a character in a cartoon is larger due to foreshortening. Everything unknown looks big when we don’t have any data. But once we know, we look back. Everything is known in retrospect. Everything is small in retrospect. Everything was fast. That foreshortened view was fleeting. The underlying geometry shifted from hyperbolic to Euclidean as we amassed data, and it continues to shift until it is spherical. The options were less than one, then one, then many.

Value in the business sense is created through use. Value is projected through the application over time into the future from the past, from the moment of installation. That future might be long beyond the deinstall. The time between install and deinstall was long but gets compressed in retrospect. The value explodes across that time, the longest time. Then the value erodes.

In the even longer time, all becomes but a lesson, a memory, a future.

Chasm Chatter

This week there were two tweets about how the Chasm doesn’t exist. My usual response to chasm mentions is just to remind people that today’s innovations are continuous, so they face no Chasm in the technology adoption lifecycle (TALC) sense. They may face scale chasm during upmarket or downmarket moves. But, there are no Chasms to be seen in the late phases of the TALC, the phases where we do business these days.

Moore’s TALC tells us about the birth and death of categories. Anything done with a product in an existing category is continuous. In this situation, the goal is to extend the life of the category by any means, innovation being just one of the many means. VCs don’t put much money here. VCs don’t provide much guidance here. And, VCs don’t put much time here either. The time to acquisition is shrinking. Time to acquisition is also known as the time to exit. In the early phases, all of that was different.

Category birth is about the innovator and those within three degrees of separation from the innovator. That three degrees of separation is the Chasm. It’s about personal selling. It’s not about mass markets. It’s about a subculture in the epistemic cultural sense. It’s a few people in the vertical, a subset of an eventual normal. It’s about a series of Poisson games. It’s about the carried content. The technology is underneath it all, but no argument is made for the technology. It isn’t mentioned. The technical enthusiasts in the vertical know the technology, but the technology explosion, the focus on carrier is in the future. It is at least two years away and as much time will pass as needed. But, the bowling alley means it is at least seven years away.

Then comes the early mainstreet/IT horizontal. The tornado happens at the entrance. Much has to happen here, but this is a mass-market play.

After the horizontals, the premium on IPOs disappears. We enter the late phases of the TALC where innovation becomes continuous and no new categories are birthed. This is the place where people make errant Chasm crossing claims. This is where all the people claiming there is no Chasm have spent their careers, so no, they never saw a Chasm. They made some cash plays. They were serial innovators with a few months on each innovation, rather than ten years on one innovation that did cross the Chasm. Their IPOs didn’t make them millionaires because there is no premium. The TALC is converging to its right tail. The category is disappearing. They cheer the handheld device, a short-lived thing, and they cheer the cloud, another even shorter-lived thing, the end of the category where the once celebrated technology becomes admin-free magic.

So yes, there is no Chasm. But, my fear is that we will forget that there is a Chasm once we stop zero-summing the profits from globalism and have to start creating categories again to get people back to work. Then, we will see the Chasm again. It won’t be long before the Chasm is back.





Nominals II

March 15, 2018

I left a few points out of my last post, Nominals. In that post, the right-most distribution presented me with a line, rather than a point, when I looked for the inflection point between the concave-down and concave-up sections of the curve on the right side of the normal distribution.

A few days after publishing that blog post, it struck me that the ambiguity of that line had a quick solution tied to the fact that the distance between the mean and that inflection point is one standard deviation. All I had to do was drop the mean from the local maximum at the peak of the nominal and then trisect the distance between that mean and the distribution’s point of convergence on the right side of that nominal’s normal distribution.
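That the inflection point sits exactly one standard deviation from the mean can be checked numerically: the second derivative of the normal density changes sign there. A sketch, with helper names of my own:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of the normal distribution."""
    coeff = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def second_derivative(f, x, h=1e-5):
    """Central-difference estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

# The curvature flips sign one standard deviation from the mean:
# concave down just inside, concave up just outside. That is why
# trisecting the mean-to-convergence distance recovers sigma when
# the curve effectively converges three standard deviations out.
mu, sigma = 2.0, 1.5
f = lambda x: normal_pdf(x, mu, sigma)
print(second_derivative(f, mu + 0.9 * sigma) < 0)  # True (concave down)
print(second_derivative(f, mu + 1.1 * sigma) > 0)  # True (concave up)
```

The sign flip is what the eyeballed straight line in the figures obscures, and what the trisection recovers.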

Backing out of that slightly, every curve has at least one local maximum and at least one local minimum. A normal distribution is composed of two curves, one to the right of the mean and another to the left. Each of those curves has a maximum-minimum pair on its side of the mean. The maximum is shared by both sides of the mean. A normal that is not skewed is symmetric, so the inflection points are symmetric about the mean.

01 min max IP

Starting with the nominals comprising the original distribution, I labeled the local maxima, the peaks, and the local minima, the points of convergence with the x-axis. Then, I eyeballed each line between the maximum-minimum pairs to find the inflection point between each pair. Then, I drew a horizontal line to the inflection point on the other side of the normal. Notice the skewed normal is asymmetric, so the line joining its inflection points is not horizontal. Next, I drew a vertical line down from the maximum of the normal distribution on the right. Then, I divided the horizontal distance from the maximum to the minimum on the right into three sigmas, or standard deviations. The first standard deviation enabled us to disambiguate the inflection point on the right side of the distribution.

The standard normal is typically divided into six standard deviations–three to each side.

02 IP

Here I’ve shown the original distribution with the rightmost nominal highlighted. The straight line on the right and the straight line on the left leave us unable to determine where the inflection point should be. My guess was at point A. The curvature circles of the tails did not provide any clarity.

I used the division method that I learned from a book on nomography. I drew a line below the x-axis and laid out three unit measures on it. Then, I drew a line from the point where the mean meets the x-axis, extended beyond the left side of the first unit measure. Next, I drew a line from the distribution’s point of convergence on the right side of the normal, extended beyond the right side of the third unit measure. The two lines intersect at point 3. The rest of the lines are projected from point 3 through the points defining the unit measures on the line where we laid them out, and on to the x-axis.

Where those lines intersect the x-axis, we draw vertical lines. The vertical line through the mean, the local maximum, is the zeroth standard deviation. The next vertical line to the right of the mean marks the first standard deviation. The standard deviation is the unit measure of the normal distribution. The vertical lines at the zeroth and first standard deviations define the width of one standard deviation. The vertical line demarking the first standard deviation crosses the curve of the normal distribution at the inflection point we were seeking. Point B is that inflection point. We found the standard deviation of the rightmost normal without doing the math.
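The projective division can be reproduced with coordinates. In this sketch the segment, the ruler placement, and the helper names are my own choices; the rays from the apex through the unit marks cut the segment into three equal standard deviations.

```python
def intersect(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through p3, p4."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

# Segment to divide: mean at x = 0, point of convergence at x = 6.
mean, conv = (0.0, 0.0), (6.0, 0.0)

# Three unit measures laid out on a line below the x-axis.
ruler = [(10.0, -1.0), (11.0, -1.0), (12.0, -1.0), (13.0, -1.0)]

# Lines from the segment's endpoints through the ruler's endpoints
# meet at the projection point ("point 3" in the text).
apex = intersect(mean, ruler[0], conv, ruler[3])

# Rays from the apex through the interior ruler marks cut the
# x-axis into three equal parts, each one standard deviation wide.
x_axis = ((0.0, 0.0), (1.0, 0.0))
cuts = [intersect(apex, mark, *x_axis) for mark in ruler[1:3]]
print([round(c[0], 6) for c in cuts])  # [2.0, 4.0]
```

The ruler can sit anywhere below the axis; the projection from the apex guarantees equal parts, which is what makes the nomographic construction work without measuring.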

I put a standard normal under the rightmost normal to give us a hint at how far our distribution is from the standard normal. At that height, our normal would have been narrower. The points of convergence of our normal limit the scaling of the standard normal. A larger standard deviation would have put the tails outside our normal.

03 Added Standard Normals

Here I’ve shown the six standard deviations of the standard normal. I also rescaled standard normals to show how a dataset with fewer data items would be taller and narrower, and how a dataset with more data items would be shorter and wider. The standard normal with fewer data elements could be scaled to better fit our normal distribution.

In the original post, I wondered what all the topological torii would have looked like. I answered that question with this diagram.

03 Torii