The Shape Of Innovation

November 26, 2016

In the past, I’ve summarized innovation as a decision tree. I’ve summarized innovation as divergence and convergence, generation and tree pruning. So I drew this figure.
The generative grammar produces a surface. The constraints produce another surface. The realization, represented by the blue line, would be a surface within the enclosed space, shown in yellow. The realization need not be a line or flat surface.

In CAD systems, the two surfaces can be patched, but the challenge here is turning the generative grammar into a form consistent with the equations used to define the constraints. The grammar is a tree. The constraints are lines. Both could be seen as factors in a factor analysis. Doing so would change the shape of the generated space.

In a factor analysis, the first factor is the longest and steepest. The subsequent factors are flatter and shorter.

A factor analysis produces a power law.

A factor analysis represents a single realization. Another realization gives you a different factor analysis.

When you use the same units on the same axes of the realizations, those realizations are consistent or continuous with each other. These are the continuities of continuous innovation. When the units differ in more than size between realizations, when there is no formula that converts from one scale to another, when the basis of the axes differ, the underlying theories are incommensurate or discontinuous. These are the discontinuities of discontinuous innovation.

The surfaces contributing to the shape of the enclosed space can be divided into convex and concave spaces. Convex spaces are considered risky. Concave spaces are considered less risky. Generation is always risky. The containing constraints are unknown.
The grammar is never completely known and changes over time. The black arrow on the left illustrates a change to the grammar. Likewise, the extent of a constraint changes over time, shown by the black arrow on the right. As the grammar changes or the constraints are bent or broken, more space (orange) becomes available for realizations. Unicode, SGML, and XML extended the reach of text. Each broke constraints. Movement of those intersections moves the concavity, the safe harbor in the face of generative risks. As shown, the concavity moved up and to the left. The concavity abandoned the right. The right might be disrupted in the Foster sense. The constraints structure populations in the sense of a collection of pragmatism steps. Nothing about this is about the underserved or disruption in the Christensen sense.

The now addressable space is where products fostering adoption of the new technology get bought.

The generative grammar is a Markov chain. Where the grammar doesn’t present choice, the chain can be thought of as a single node.

The leftmost node is the root of the generative grammar. It presents a choice between two subtrees. Ultimately, both branches would have to be generated, but the choice between them hints at a temporal structure to the realization, and shifting probabilities from there.
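If you want to play with this, here is a minimal sketch of a generative grammar treated as a Markov chain. The nodes, branches, and probabilities are hypothetical, chosen only to show the root presenting a choice between two subtrees and a no-choice node collapsing into a single step.

```python
# A minimal sketch of a generative grammar as a Markov chain.
# Nodes and probabilities are hypothetical.
import random

# Each node maps to a list of (next node, probability) choices.
# A node with a single successor is the no-choice case collapsed
# into a single node of the chain.
grammar = {
    "root":      [("subtree_A", 0.7), ("subtree_B", 0.3)],  # the root's choice
    "subtree_A": [("leaf_A", 1.0)],
    "subtree_B": [("leaf_B", 1.0)],
    "leaf_A":    [],
    "leaf_B":    [],
}

def realize(node="root"):
    """Walk the chain from the root, choosing branches by probability."""
    path = [node]
    while grammar[node]:
        nodes, probs = zip(*grammar[node])
        node = random.choices(nodes, weights=probs)[0]
        path.append(node)
    return path

# Repeated realizations shift with the probabilities, which is the
# temporal structure hinted at above.
for _ in range(3):
    print(realize())
```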

New grammatical structures would enlarge the realization. Grammars tend to keep themselves short. They provide paths that we traverse or abandon over historical time. The realization would shift its shape over that historical time. This is where data mining could apply.

When the constraints are seen from a factor analysis perspective, the number of factors is few in the beginning and increases over time. This implies that gaps between the realization and the factors would exist and diminish over time. Each factor costs more than the factor before it. Factors add up to one, and then become a zero-sum game. For another factor to assert itself, existing factors would have to be rescaled.

Insisting on a factor analysis perspective leaves us with trying to find a factor designated as the root constraint, and then defining the face-offs: this subgrammar vs. this collection of constraints. Each would have rates, thus differential equations. Each would be a power law. So in our system there would be four differential equations and four power laws. There would also be four convergences. These would be reflected in the frequency-of-use histograms.

Notice that nowhere in this discussion was innovation based on an idea from management. The ideas were about enlarging the grammar, aka ontological sortables, and the breaking or bending of constraints. When a constraint built into a realization breaks, Goldratt told us that the realization moves some distance to the next constraint. These efforts explore the continuities and discontinuities of the possible innovations. Productization is the next step in fostering adoption.

As always, enjoy.

Doing Discontinuous Innovation

November 14, 2016

Discontinuous innovation creates economic wealth. Continuous innovation captures cash. Economic wealth, unlike what the financial services companies tell us with their wealth management services, is more than a pile of cash. Cash is the purview of the firm.  Economic wealth is the purview of the economy as it reaches well beyond the firm. Cash is accounted for where economic wealth is not.

Notice that no firm has an imperative to create economic wealth. To the contrary, managers today are taught to convert any economic wealth they encounter into cash. They do this with the assumption that that economic wealth would be put back, but that has yet to happen. Globalism was predicated on using the cash saved to create new categories, new value chains, new careers—economic wealth. Instead, we sent it to Ireland to avoid taxes. Oh well, we let the tail wag the dog.

Likewise, we are taught to lay off people, because we can put that money to better use, but then we don’t put it to better use. Those people we laid off  don’t recover. They work again, but they don’t recover. Oh, well. This is where continuous innovation takes you. Eventually, it is moved offshore. The underlying carrier technologies are lost as well, so those jobs can’t come back. The carrier technologies will evolve elsewhere.

I could go on. I did, but I deleted it.

Anyway, I’ve been tweeting about our need to create new economic wealth as the solution to globalism. Instead, the rage gets pushed to the politicians, so we’ve seen where that got us. The politicians have no constructive solution. We can solve this problem without involving politicians. We can innovate in a discontinuous manner. As a result of those tweets, a product manager who follows me asked, so how do we innovate discontinuously?

I’ll review that here.

  1. Begin with some basic research. That kind of research bends or breaks a constraint on the current way things are done in that domain.

Samuel Arbesman’s “The Half-Life of Facts” gives us a hint in the first chapter with a graph of the experiments on temperature. Each experiment resulted in a linear range resulting from the theory used to build the measurement system that underlay the experiment. The experiments gave us a dated collection of lines. The ends of those lines were the ends of the theories used to build the experiments. You couldn’t go from one line to the next with a single measurement device, with a single theory. You had a step function on your hands after the second experiment.

The lines on the right side of the graph were replaced with later lines, later experiments. The later lines were longer. These later lines replaced the earlier step functions with another step function. A single measurement device could measure more. The later theory could explain more. The later theory broke or bent a constraint. The earlier theory did so as well when you consider that before the earliest theory, there was no theory, so nothing could be done. As each theory replaced the prior theory more could be done. Value was being delivered. That value escaped the lab once a manager got it sold into a market beyond the lab, aka innovated.

  2. Build that basic research into an infrastructural platform, into your technology/carrier layer, not into a product/carried layer. Do not even think about a product yet.

Moore’s technology adoption lifecycle starts with a technology. After step 2, that’s what you have. You have a technology. Products get a technology adopted. The technical enthusiasts are the first population that needs to be addressed. This population is the geeks. They insist on free. They insist on play. They refer technologies to their bosses.

  3. Explore what vertical industry you want to enter, then hire a rainmaker known in that vertical. This rainmaker must be known by the executives in that vertical. This rainmaker is not a sales rep calling themselves a rainmaker.
  4. When the rainmaker presents you with a B2B early adopter, their position in the vertical matters. Their company must be in the middle of the industry’s branch/subtree of the industrial classification tree. They should not be on a leaf or a root of the branch/subtree. This gives you room to grow later. Growth would be up or down and not sideways to peers of the same parent in the subtree.
  5. That B2B early adopter’s vertical must have a significant number of the seats and dollars.
  6. That early adopter must have a product visualization. This product visualization should be carried content, not carrier. Carrier functionality will be built out later in advance of entering the IT horizontal. Code that. Do not code your idea. Do not code before you’re paid. And, code it in an inductive manner as per “Software by Numbers.” Deliver functionality/use cases/jobs to be done in such a way that the client, the early adopter, is motivated to pay for the next unit of functionality.
  7. Steps 3–6 represent a single lane in Moore’s bowling alley. Prepare to cross the chasm between the early adopter and more pragmatic prospects in the early adopter’s vertical. Ensure that the competitive advantage the early adopter wanted gets achieved. The success of the early adopter is the seed of your success. Notice that most authors and speakers talking about crossing the chasm are not crossing the chasm. There is no chasm in the consumer market.
  8. There must be six lanes before you enter the IT horizontal. That would be six products each in their own vertical. Do not stay in a single vertical. So figure out how many lanes you can afford and establish a timing of those lanes. Each lane will last at least two years because you negotiate a period of exclusivity for the client in exchange for keeping ownership of your IP.
  9. Each product will enter its vertical in its own time. The product will remain in the vertical market until all six products in the bowling alley have been in their verticals at least two years. Decide on the timing of the entry into the horizontal market taking all six products into consideration. All six will be modified to sum their customer/user populations into a single population, so they can enter the IT horizontal as a carrier-focused technology. The products will shed their carried functionality focus. You want to enter the horizontal with a significant seat count, so it won’t take a lot of sales to win the tornado phase at the front of the IT horizontal.
  10. I’ll leave the rest to you.

For most of you, it doesn’t look like what you’re doing today. It creates economic wealth, will take a decade or more, requires larger VC investments and returns, and it gets a premium on your IPO, unlike IPOs in the consumer/late phases of the technology adoption lifecycle.

One warning. Once you’ve entered the IT horizontal, stay aware of your velocity as you approach having sold half of your addressable market. The technology adoption lifecycle tells us that early phases are on the growth side and that late phases are on the decline side of the normal curve.

There needs to be a tempo to your discontinuous efforts. The continuous efforts can stretch out a category’s life and the life of the companies in that category. Continuous efforts leverage economies of scale. A discontinuous effort takes us to a new peak from which continuous efforts will ride down. Discontinuous innovations must develop their own markets. They won’t fit into your existing markets, so don’t expect to leverage your current economies of scale. iPhones and Macs didn’t leverage each other.

Don’t expect to do this just once. Apple has had to do discontinuous innovations three or four times now. They need to do it again now that iPhones are declining. Doing it again, and again, means that laying off is forgetting how to do it again. It’s a matter of organizational design. I’ve explored that problem. No company has to die. No country has to fall apart due to the loss of their economic wealth.

Value Projection

November 7, 2016

I’ve often used the triangle model to illustrate value projection. In a recent discussion, I thought that a Shapley value visualization would work. I ended up doing something else.

We’ll start by illustrating the triangle model to show how customers use the enabling software to create some delivered value. The customer’s value is realized by their people using a vendor’s software. The vendor’s software provides no value until it is used to create the value desired by the customer.

value-projection-w-triangle-model-01

The gray triangle represents the vendor’s decisions that resulted in the software that they sold to the customer. The base of that triangle represents the user interface that the customer’s staff will use. Their use creates the delivered value.

The red triangle represents the customer’s decisions that resulted in that delivered value. The software was a very simple install-and-use application. Usually, configurations are more complicated. Other software may be involved. It may take multiple deliverables to deliver all the value.

value-projection-w-triangle-model-02

Here we illustrate a more complicated situation where a project with several deliverables and another vendor’s product was needed to achieve the desired value.

When a coalition is involved in value delivery, the Shapley value can be used to determine the value each member of the coalition should receive relative to their contribution to the value delivered.

shapely-value

Here I used a regular hexagon to represent six contributors that made equal contributions. The red circle represents the value delivered.
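As a minimal sketch, assuming a purely additive and entirely hypothetical value function, the Shapley value of each of six equal contributors works out to one sixth of the value delivered, which is what the hexagon is meant to suggest.

```python
# A brute-force Shapley value sketch. The members and the value
# function v() are hypothetical; with six equal, additive contributors,
# each member's Shapley value is one sixth of the value delivered.
from itertools import combinations
from math import factorial

members = ["m1", "m2", "m3", "m4", "m5", "m6"]
contribution = {m: 10.0 for m in members}        # equal contributions

def v(coalition):
    """Value delivered by a coalition (assumed additive here)."""
    return sum(contribution[m] for m in coalition)

def shapley(member):
    n = len(members)
    others = [m for m in members if m != member]
    total = 0.0
    for size in range(n):
        for subset in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            total += weight * (v(subset + (member,)) - v(subset))
    return total

for m in members:
    print(m, shapley(m))     # 10.0 each, one sixth of the 60.0 delivered
```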

The value delivered is static, which is why I rejected this visualization. The effort involves multiple deliverables.

The next thing we had to handle was representing the factors involved in that value delivery. Those factors can be discovered by a factor analysis.

factor-analysis

A factor analysis allocates the variance in the system to a collection of factors. The first factor is the longest and steepest factor. The first factor explains more variance than any of the subsequent individual factors. The second factor is shorter and flatter than the first factor, but longer and steeper than the third. The third factor is flatter and shorter than the second factor.

Even without the details 80 percent of the variance is covered by the first three factors. Additional factors can be found, but they become increasingly expensive to discover.

For our purposes here we will stop after the first three factors or after the first 80 percent of variance. We will allocate some of the delivered value to those factors.
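A minimal sketch of that allocation, using PCA as a stand-in for a factor analysis on hypothetical data, shows the declining variance explained by each factor and the roughly 80 percent cutoff.

```python
# PCA as a stand-in for a factor analysis on hypothetical data:
# the first factor explains the most variance, later factors less,
# and we stop once roughly 80 percent is covered.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))               # a few strong latent factors
loadings = rng.normal(size=(3, 10))
data = latent @ loadings + 0.3 * rng.normal(size=(500, 10))  # plus noise

pca = PCA().fit(data)
cumulative = 0.0
for i, ratio in enumerate(pca.explained_variance_ratio_, start=1):
    cumulative += ratio
    print(f"factor {i}: {ratio:.1%} of variance, cumulative {cumulative:.1%}")
    if cumulative >= 0.80:                       # the 80 percent stopping point
        break
```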

Putting all of this together, we get the following visualization.
value-projection

Here the vendor is at the center of the rings. The rings are organized by the project’s deliverables along the project’s timeline. The first ring represents the UI of the vendor’s application. The distance between this ring and the origin of the circle represents the time it took to deliver the UI. That UI incorporates the factors explaining the relative importance of the delivered elements of the software.  The white area in the vendor ring, adjacent to the purple factor represents the 20 percent of variance or importance that would be allocated to subsequent factors beyond the first three.

The gray rings represent the time gaps between the install. The second customer ring represents the efforts to configure the application. The third ring represents further implementation efforts. The customer’s efforts might involve using an API to extend the functionality of the application. This is shown with the orange and red segments. The extension is organized as a stack crossing the customer’s rings.

The radius of the circles represents time. That being the case, we don’t need the left side of the circles. Time starts at the origin and moves outward.

Different vendors could be represented with different rings, or some allocation of the existing rings. The vendors themselves have ranks relative to the delivery of the ultimate value.

I’d appreciate some comments. Enjoy.

Implicit Knowledge

October 24, 2016

One of the distinctions I’ve been making out on Twitter is the difference between what I call fictional and non-fictional software. We get an idea. We have to ask the question: do users actually do this today without our software? If the answer is “No,” we get to make up how it is to be done. The user tasks are a blank whiteboard. That’s fictional software. But most of the time, the answer is not “No.” In that case, the software is non-fictional, so we need to do an ethnography and find out exactly how the user does it, and what their cognitive model is while they do what they do. In non-fictional software, neither the developers nor the UX designers are free to make things up.

Yesterday, I read “Usability Analysis of Visual Programming Environments: a ‘cognitive dimensions’ framework.” The author, a UX designer, makes some statements that clarified for me that UX design as practiced today, particularly by this designer, is fictional. Tasks exist before they are designed. Tasks exist before they are digitized by programmers. This isn’t new. Yahoo built a search engine without ever looking at existing search engines or asking library science practitioners how to do it. Yahoo made it up and then discovered many of the findings and practices of library science practitioners later. That is to say, they approached, and progressed towards convergence with, the user’s real cognitive model of the underlying tasks. There is still a gap.

Agile cannot fix those gaps in non-fictional software. It can only approach and converge to the gap width between the user’s bent cognitive model they use as users, and the real cognitive model they learned eons ago in school. That learning was explicit with a sprinkling of implicit. The implicit does not get captured by asking questions, talking, observing, or iterating. With any luck, a trained observer, an ethnographer, and their observational frameworks can observe and capture that implicit knowledge.

iteration-gap

A Rubik’s Cube can serve as an example. When solving a cube, we explore the problem space, a tree, with a depth first search. We can use simple heuristics to get close. But then, we stop making progress and start diverging away from the solution. We get lost. We are no longer solving. We are iterating. We are making noise in the stochastic sense. We stop twisting and turning. We look for a solution on the web. We find a book. That book contains “The hint,” the key. So after a long delay, we reset the cube, use the hint, and solve the cube.

diverge-converge-delay

We joined the epistemic culture  or what I was calling the functional culture of the cube. We are insiders. We solve the cube until we can do it without thinking, without the search struggles, and without remembering the hint. The explicit knowledge we found in that book was finally internalized and forgotten. The explicit knowledge was made implicit. If a developer asked how to solve the cube, the user doesn’t remember and cannot explicate their own experience. They cannot tell the developer. And, that would be a developer that wasn’t making it up, or fictionalizing the whole mess.

All domains contain and find ways to convey implicit knowledge. The Rubik’s cube example was weakly implicit since it has already been explicated in that book. The weakly implicit knowledge is a problem of insiders that have been exposed to the meme and outsiders who have not. Usually, those that got it teach those that don’t. Insiders teach outsiders. In other domains, implicit knowledge remains implicit but does get transferred between people without explication. Crafts knowledge is implicit. Doing it or practice transfers craft knowledge in particular, and implicit knowledge generally.

Let’s be clear here that generalist 101 class in the domain that you took back in college did not teach you the domain in the practitioner/expert sense. You/we don’t even know the correct questions to ask. I took accounting. I’m not an accountant. It was a checkbox, so I studied it as such. A few years after that class I encountered an accounting student and his tutor. The student was buying some junk food at the snack bar. The tutor asked him what accounts were affected by that transaction. That tutor was an insider. The student was working hard to get inside.

For anyone that will ever be a student of anything, there is no such thing as a checkbox subject. Slap yourself if you think so. Dig into it. Boredom is a choice, a bad one. You’re paying a lot of money, so make it relevant to think like an insider.

Recently, a machine beat a highly-ranked human in Go, a game not amenable to the generative space and heuristic-based pruning approach of the likes of Chess. The cute thing is that a machine learned how to be that human by finding the patterns. That machine was not taught an explicit Go knowledge. That machine now teaches Go players what it discovered implicitly and transfers knowledge via practice and play. The machine cannot explain how to play Go in any explicating manner.

One of my lifetime interest/learner topics was requirements elicitation. Several years ago, I came across a 1996 paper on requirements elicitation. Biases were found. The elicitor assumed the resulting system would be consistent with the current enterprise architecture, and let that architecture guide the questions put to users and customers, their bosses. That biased set of requirements caused waterfall development to fail. But, Agile does not even try to fix this. There will always be that gap between the user’s cognitive model and the cognitive model embedded implicitly in the software. UX designers like the author of the above paper impose UX without regard to the user’s cognitive model as well. I have found other UX designers preaching otherwise.

So the author of the above paper takes a program that already embeds the developer’s assumptions that already diverges and fictionalizes the user’s non-fictional tasks and further fictionalizes those tasks at the UX level. Sad, but that’s contemporary practice.

So what does this mess look like?

dev-ui-induced-gap

Here, we are looking at non-fictional software. The best outcome would end up back at the user’s conceptual model, so there was no gap. I’ve called that gap negative use costs, a term used in the early definition of the total cost of ownership (TCO). Nobody managed negative use costs, so there were no numbers, so in turn Gartner removed them from the TCO. Earlier, I had called it training, since the user that knew how to do their job has to do it the way the developer and UX designer defined it. When you insert a manager of any kind in the process, you get more gap. The yellow arrows reflect an aggregation of a population of users. Users typically don’t focus on the carrier layer, so those training costs exist even if there were no negative use costs in the carried content.

As for the paper that triggered this post, “cognitive” is a poor word choice. The framework does not encode the user’s cognitive map. The framework is used to facilitate designer to manager discussions about a very specific problem space, users writing macros. Call it programming and programming languages if you don’t want your users to do it. Still useful info, but the author’s shell is about who gets to be in charge. The product manager is in charge. Well, you’ll resolve that conflict in your organization. You might want to find a UX designer that doesn’t impose their assumptions and divergences on the application.

The Tracy-Widom Distribution and the Technology Adoption Lifecycle Phases

October 11, 2016

In my recent bookstore romps, the local B&N had a copy of Mircea Pitici’s The Best Writing on Mathematics 2015. I’ve read each year’s book since I discovered them in the public library system in San Antonio years ago. I read what I can. I don’t force myself to read every article. But, what I read I contextualize in terms of what does it mean to me, a product strategist. I’m a long way from finished with the 2016 book. I’m thinking I need to buy them going back as far as I can and read every article. Right now that’s impossible.

I thought I was finally finished with kurtosis, but no I wasn’t thanks to the 2015 book. So what brought kurtosis back to the forefront? Natalie Wolchover’s “At the Far Ends of a Universal Law,” did. The math in that article is about the analytic view of phase transitions or coupled differential equations described by something called the Tracy-Widom distribution. That distribution is asymmetric meaning it has skewness, which in turn means it exhibits kurtosis.

In “Mysterious Statistical Law May Finally Have an Explanation” in the October 2014 edition of Wired magazine, the Tracy-Widom distribution is explained. It is linked to distributions of eigenvalues, and phase transitions. The phase transition aspect of the Tracy-Widom distribution caught my attention because Geoffrey Moore’s technology adoption lifecycle is a collection of phase transitions. The article contained a graph of the Tracy-Widom distribution, which I modified somewhat here.

tw2

I annotated the inflection points (IP) because they represent the couplings between the differential equations that comprise the Tracy-Widom distribution. I used thick black and blue lines to represent those differential equations. The Tracy-Widom distribution is a third-order differential equation, which is comprised of two second-order differential equations (S-curves), which in turn are comprised of two differential equations each (J-curves).

The cool thing is that we move from a stochastic model to an analytic model.
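The eigenvalue link gives us a way to see the distribution numerically before touching the analytic machinery. The largest eigenvalue of a large random symmetric (GOE) matrix, centered and scaled, is known to follow a Tracy-Widom distribution, so a simulation sketch like the following (matrix size and trial count are arbitrary choices) produces the lopsided shape in these figures.

```python
# Sampling the Tracy-Widom shape from random matrices: the largest
# eigenvalue of a GOE matrix, centered at 2*sqrt(n) and scaled by
# n**(1/6), follows a Tracy-Widom distribution. Sizes here are arbitrary.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
n, trials = 200, 1000
samples = []
for _ in range(trials):
    a = rng.normal(size=(n, n))
    goe = (a + a.T) / np.sqrt(2)                  # symmetric Gaussian matrix
    lam_max = np.linalg.eigvalsh(goe)[-1]         # largest eigenvalue
    samples.append(n ** (1 / 6) * (lam_max - 2 * np.sqrt(n)))

samples = np.array(samples)
print(f"mean {samples.mean():.2f}, std {samples.std():.2f}, "
      f"sample skewness {skew(samples):.2f}")
```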

I removed the core vs tail color coding in the WIRED diagram. In my earlier discussions of kurtosis, the core and tails were defined by the fourth moment, aka the inflection points coupling the first order differential equations. The error persists in this figure because the inflection points were hard to determine by sight. Notice also that the WIRED diagram hints at skewness, but does not indicate how the distribution is leaning. For more on leaning and theta, see Convergence and Divergence—Adoption, De-adoption, Readoption, and More On Skew and Kurtosis. They are taking the Tracy-Widom distribution as a given here, rather than a transformation of the normal. Much about kurtosis is not resolved and settled in the mathematics community at this time.

The dashed vertical line separating the two sides of the distribution intersects the curve at the maxima of the distribution. The maxima would be a mode, rather than a mean. When a normal is skewed, the mean of that normal does not move. The line from the mean on the distribution’s baseline to the maxima slopes meeting the baseline at some θ. Ultimately, the second-order differential equations drive that θ. Given I have no idea where the mean is, I omitted the θ from this diagram.

In the 2015 book, the left side of the distribution is steeper, almost vertical, which generates a curve closer to the base axis, a curve with a tighter curvature, aka a larger value for Κ1 (smaller radius); and the right side is flatter, which generates a looser curvature, aka a smaller value for Κ2 (larger radius)—note that curvature Κ = 1/r.

tw3

So both figures can’t be correct. How did that happen? But, for my purposes, this latter one is more interesting because it shows a greater lag when transitioning between phases in the technology adoption lifecycle and in firms themselves, particularly in firms unaware that they are undergoing a phase transition. In Moore’s bowling alley, where the Poisson games occur, the phase transitions are more symmetric and faster. In the transition between the vertical and the IT horizontal, the phase transition can be slower, less symmetric. In the transition between early and late main street, the phase transition is fast. Most firms miss their quarterly guidance here, so they undergo a black swan, which is surprising since a firm should know when it is approaching having sold 50% of its addressable market. A firm should also know when it is approaching having sold 74% of its addressable market, so it won’t hear from the Justice Department or the EU. Of course, most firms never get near that 74% number.

talc-w-t-w

Here I aligned a Tracy-Widom distribution with each technology adoption lifecycle phase boundary. I have no idea about the slopes of the s-curves, the second-order differential equations. Your company would have its own slopes. Your processes would give rise to those slopes, so collect your data and find out. Knowing your rates would be useful if you were continuously doing discontinuous innovation.

I’ve labeled the phases and events somewhat differently from Moore. TE is the technical enthusiast layer. They don’t disappear at any point in the lifecycle. They are always there. Well, they do lose focus in the software-as-media model in the vertical phase of the adoption lifecycle. Likewise in all late phases. BA is the bowling alley. Keeping your six early-adopter (EA) channels of the bowling alley full is key to continuously doing discontinuous innovation. V is the verticals. There would be one vertical for each early adopter. The early adopter is an executive in a vertical market. IT H is the IT horizontal market. Early main street (EM) is another term for the IT horizontal. If we were talking about a technology other than computing, there would still be a horizontal organization servicing all departments of an enterprise. An enterprise participates in several horizontal markets. Late main street (LM) is also known as the “Consumer Market,” where we are today, a market that orthodox business practice evolved to fit, a market where innovation is continuous, managerial, and worse, “disruptive” in the Christensen way (cash/competition). The technical enthusiast and bowling alley phases are wonderfully discontinuous and disruptive positively in the Foster way (economic wealth/beyond the category). L is the laggard or device phase. P is the phobic or cloud phase. In the phobic phase, computing disappears. The technical enthusiasts will have their own Tracy-Widom distributions, Moore’s chasm being one. Another happens when the focus changes from the carried to the carrier in the vertical phase. And, yet another happens when aggregating the bowling alley applications into a carrier-focused, geek/IT-facing product sold in the tornado. Cloud rewrites another. An M&A would cause another as well. That product would sell in the second (merger) tornado (not shown in the figure).

The first second-order differential equation accounts for what it takes to prepare to make the phase transition. The second second-order differential equation accounts for operationalized work during the phase. The diagram is not always accurate in this regard.

More than enough. Enjoy.

Geez, another edit, but overpacked.

Customer Lifecycle and the Value Gap

October 2, 2016

John Cutler, @johncutlefish, tweeted a link to Customer Retention Hacking: How to get Users to Commit. Reading the article, I was struck by this quote

You don’t interact with your significant other the same way on your first date as you do on your 50th or 200th date. Similarly, giving a customer a great experience on day one isn’t going to be the same as on day 50.

and by how the long tails of an application’s clicks could be organized to make it work with the customer lifecycle.

We start with the 1st day, the onboarding. Different things happen from there. Learning happens differently in each user. Expertise develops over time. Roles diverge over time.

Value projection has its timeline as well. John tweeted a link to The Success Gap: A HUGE Opportunity You Haven’t Considered.

So we’ll review the long tail of an application’s clickstream. Let’s say that every control in your application emits an HTTP request to an HTML page for that control, so that every click gets counted, sorted, and summed up by a directory structure. This will tell you what the users are doing. If you can isolate this down to a particular user, you might want to get permission or default permission in an EULA. This will timestamp the application’s clickstream. What’s important for the purposes of this post is the timestamp. You could see what the user does with your application each day via server log analytics. You could see what the user doesn’t do efficiently, or what the user doesn’t know how to do. That knowing or not will be role-specific. You need to know the user’s role, and when the user changes roles. Is your user doing self-support? You can see it. Likewise, you can see where a bug happens, because the histograms will change drastically.

daily-long-tail

The histograms on the left aggregate several of the histograms on the right. We save a named file via the menu. We save a named file via shortcut. Those would each have their own histogram. They would be added together in “save a named file.” These aggregations would be defined by the directory structure containing the file for each control. We can save the control clicks by use case. The structure can get messy. With continuous delivery, we would save the server log and put a new server log out there. Play with it. Aggregate down the timeline.
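Here is a minimal sketch of that aggregation, assuming a hypothetical log of timestamped control clicks keyed by a directory-style path. Counts roll up the path, so the menu and shortcut variants of saving a named file sum into their parent.

```python
# Aggregating a hypothetical click log up its directory structure.
# Control paths and timestamps are made up for illustration.
from collections import Counter
from pathlib import PurePosixPath

clicks = [
    ("2016-10-02T09:01", "file/save_named/menu"),
    ("2016-10-02T09:03", "file/save_named/shortcut"),
    ("2016-10-02T09:07", "file/save_named/menu"),
    ("2016-10-02T09:09", "edit/undo/shortcut"),
]

histogram = Counter()
for _timestamp, control in clicks:
    path = PurePosixPath(control)
    histogram[str(path)] += 1                 # count the control itself
    for parent in path.parents:               # and every ancestor directory
        if str(parent) != ".":
            histogram[str(parent)] += 1

for control, count in sorted(histogram.items()):
    print(f"{count:3d}  {control}")
# "file/save_named" sums its menu and shortcut children: 3 clicks.
```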

Every click of a control is a micro conversion. Click and you see the next set of controls. Another click could tell you what use case the user is attempting to perform.

Value is projected outward from the application. Further, various value propositions are projected from the application. Some use does not move the system towards a value proposition. We can sort that out. The value not yet delivered would constitute the Success Gap.

value-projection

In this figure, I started with the triangle model where an application is a decision tree. The base of the triangle (right side) is the user interface (UI). Ideally, the UI would be organized by the one task, one dialog, or in contemporary terms one use case, one dialog. We do not deliver value. We deliver enablers that enable the user to deliver value through the use of those enablers. The user has an orientation towards the application. A good measure of location would be what training would be required to efficiently use the application. That training can be pushed into the buying cycle, rather than waiting until after the application is installed. Post-install training would show up between the user and the UI. There would be numerous users, each with different competencies.

The triangle model here is correlated with the roadmap and the releases. Released functionality should always deliver value and reduce the value gap. When this is the case, the user is induced to continue subscriptions. Software by Numbers discussed this need to induce in the client-consultancy, custom-build engagement, the type of engagement where discontinuous technologies find client productizations and vertical markets for that product. The focus in such engagements would be carried content in the software-as-media model.

Notice I’m counting bits here. Used bits and delivered bits can give you an idea of leverage. Each release delivers some bits to the ultimate value proposition. The value delivered may be the user’s or that of an economic buyer. The economic buyer’s value generally reaches deeper into the future.

In an agile development environment, the iterations would be tactical; the value delivery, strategic. Why the labels? Consider the machine intelligence environment for a moment. Strategic is not a continuation of the tactical. In phase-change environments, you have to stop collecting data and begin a new collection. How wide are your tactical learning needs? How wide are your strategic learning needs?

So we have seen how to collect the data about the customer lifecycle, the daily use under different situations. We’ve looked at the success gap. Both of these ideas tie to a timeline. You can measure against the time to return, or the time to value delivery. The retained customer would have to learn again with each release. Permission campaigns can move that learning earlier. Content marketing likewise. The economic buyer might have to be taught the value proposition, and in value-based marketing, sold on the price and configuration. Microservices can partition, so the amount of UI is variable, so the UI purchased is the minimal UI for the expected value projection.

Enjoy.

Convergence and Divergence—Adoption, De-adoption, Readoption

September 5, 2016

Skewness Risk

This week, I visited the Varsity Bookstore, the off-campus bookstore for Texas Tech. I looked at a statistics book, sorry no citation made, that said skewness was about how much the normal distribution leaned to one side or another. When it leans, the mean stays put, but the mode moves by some angle theta. My last blog post on kurtosis mentions theta relative to one of the figures.

Lean

The notions of skewness risk and kurtosis risk came up during the work on the earlier post. It took this long to find some details hinted at in places like Investopedia. The thick tails dive under the threshold for extreme outcomes. Even with a black swan, there isn’t that much under the threshold. The negative skewness graph shows how the large losses move the convergence with the horizontal asymptote towards the present. The same thing happens with small losses, possibly with the same horizontal extent, since the longer tail magnifies the small loss.

Notice that on the left side of the normal, gains happen; on the right, losses. Moore’s technology adoption lifecycle similarly shows the left to be growth and the right to be decline. What saves the right tail is that an acquisition is supposed to bring a 10x multiple into play, but that requires the acquirer to play the merger tornado game. That game is not played well, if it is played at all. Most acquisitions provide exits to investors tied up with interlocking directors and funds.

The skewness happens because the distribution is tending to the normal, but at the moment captured by the data underlying the distribution, data is missing, and the data is not normal. Once the data is captured, the normal will stand upright and centered without skewness, and without skewness, there is no kurtosis.
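A minimal sketch of that claim, on entirely hypothetical data: a normal sample with part of one tail missing shows skewness and excess kurtosis, and both fade once the full sample is in hand.

```python
# Missing data makes a normal sample look skewed; the full sample does not.
# The data and the thinning rule are hypothetical.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
full = rng.normal(size=10_000)

# Mimic "not yet captured" data by dropping most right-tail observations.
mask = (full < 1.0) | (rng.random(full.size) < 0.2)
partial = full[mask]

for name, sample in [("partial", partial), ("full", full)]:
    print(f"{name:8s} skewness {skew(sample):+.2f}  "
          f"excess kurtosis {kurtosis(sample):+.2f}")
```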

S-curves

Since I’m on the road, I’ve left the bookstore behind that had a book by a venture capitalist or strategist, no citation, no way to find this book again. But, the author said he didn’t see the relevance of S-curves to the companies in his portfolio. Well, most of those firms are built on commodity software, so they are long past the upsides of that software. Consumer software still commoditizes, and that brings a black swan, a missed quarter, to the stock price. When that commoditization happens, the underlying software has to be replaced with a better technology. Replacing it is an s-curve play by the seller of that technology, not the users of that technology. Most of his portfolio would be users of, rather than makers of, underlying technologies. Simple fact: in the late phases of the technology adoption lifecycle, declines in stock prices, hope for a merger upside, no premium on IPO, and nobody dealing with S-curves is the norm. Oh, and the whole thing being about cash. You get rich in an upper-middle-class way, but it’s too late to create economic wealth. Confusion between early-stage financing and early-phase adoption is rife. Talk of early adopters is not in the Moore sense, but the Gladwell sense. And, no chasms exist to be crossed. So yeah, no S-curves.

S-curves confuse disruption in the Foster sense because disruption can be temporary if the innovation’s s-curve slope slips below that of the incumbents. Foster put causes before effects where Christensen focuses on effects absent cause. In the 80’s and early 90’s nobody was overserved. It just turned out that the technology left everyone overserved. The small-disk manufacturers were not competing with the large-disk manufacturers. They just served their markets and the markets got bigger on their own. Alas, the old days.

Kurtosis, defined by curvature, hinted at defining s-curves in the same way. Curvature is implicit. Mathematically, the curve defines the curvature. We cheat when we claim curvature is the reciprocal of the radius. We don’t know where the center is, so we don’t know the radius, thus we don’t know the curvature. There probably is some software somewhere that can find the curvature.
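Numerically, it is not hard: curvature can be computed from the curve itself, κ = |y″| / (1 + y′²)^(3/2), with no center or radius required. A minimal sketch on a logistic s-curve, a hypothetical stand-in for a price-performance curve:

```python
# Curvature computed numerically from the curve itself:
# kappa = |y''| / (1 + y'**2) ** 1.5, no center or radius needed.
# The logistic s-curve is a hypothetical stand-in for a price-performance curve.
import numpy as np

t = np.linspace(-6, 6, 1001)
y = 1.0 / (1.0 + np.exp(-t))            # logistic s-curve

dy = np.gradient(y, t)                  # first derivative
d2y = np.gradient(dy, t)                # second derivative
kappa = np.abs(d2y) / (1.0 + dy ** 2) ** 1.5

# Curvature peaks near the bends of the S and is near zero at the inflection.
print("max curvature:", round(kappa.max(), 4), "at t =", round(t[np.argmax(kappa)], 2))
print("curvature at the inflection (t = 0):", round(kappa[len(t) // 2], 6))
```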

S-curve

The red line is the s-curve. The blue horizontal line shows where rapid improvement gives way to slower improvement. The line also shows where investment is cheap and where it becomes increasingly expensive. The large circle gets larger as we go and shifts its center down, so we get a slower and longer curve. At the top of the large circle, we’ve transitioned to those 10x returns if the merger was actually successful.

The s-curve tells us how much change to expect. If you had the s-curve for every contributing technology, then you would have some notion of the rates of change you could expect. We overstate change in our conversations, particularly when we talk about the s-curves and rate of change of the carried content.

Convergence and Divergence

Today’s reading was “Concepts and Fuzzy Logic,” edited by Radim Belohlavek and George J. Klir. The editors’ goal for this book was to foster a return to the use of fuzzy logic within the disciplines of the psychology of concepts and mathematics. I’ve always seen ideation as being convergent or divergent, but over the life of a conceptual model, there are several convergences and divergences. The editors here sought to foster a return to a convergent conceptual model that previously converged and later diverged.

So we start with the verbs, with the tokens with which we parse the adoption of the discontinuous innovation. The drivers at this stage are those driving bibliographic maturity. We converge or diverge. In convergence, we merge separate disciplines. The conceptual model being adopted is the platform technology, the carrier. The disciplines bring their carried content into the mix. The carrier is under adoption, and the newfound applications in the disciplines, the carried, are under adoption as well. Those applications make the business case for those in the current and near-term pragmatism steps. Those applications and the business cases will change as we approach the mid-term and long-term pragmatism steps.

Convergent or Divergent

In a product, care must be taken with the pragmatism steps. Like pricing bifurcations due to communications channel isolation, the business cases are specific, and the reference cases that will be adopted by a population on a pragmatism step are likewise specific. The early adopter’s success will not drive laggards to buy. But that is the macro view of adoption phases, where pragmatism steps present the micro view.

We start with two populations. Each adopts a conceptualization at their own rate. Each has its own reference bases. Once adoption begins, a third population emerges, the adopters. People entering either of the disciplines involved after adoption begins can adopt the idea immediately. This is more pronounced when the conceptualization under adoption is discontinuous. Do students of SEO ever get around to print, or worse focus groups?

In the case documented in the book, mathematicians (yellow) worked their way towards fuzzy logic. They took the path of the continuous innovation. The psychology of concepts researchers (red) found fuzzy logic and it solved some of their problems, so it was adopted, but they were not working with mathematicians to accelerate the use of fuzzy logic.

Publication in these populations motivates adoption. Those peer-reviewed papers constitute the touchpoints in a content marketing network. Publication is likewise an event. Adoption and de-adoption are fostered by events.

System of Convergences and Divergences

In every adoption, there are collaborators and defectors in game theory speak. At some point, a defector succeeded in publishing some claims about how fuzzy logic couldn’t do this or that. These claims were accepted uncritically among psychology of concepts researchers. That led to the de-adoption of fuzzy logic by that population. De-adoption happened only in the psychology of concepts population driven by the publication of that defector’s claims. This went unnoticed by the mathematicians working in the same space. Again, like price communications isolation providing opportunities, discipline-specific communications channels provided the isolation here.

At least in this convergence, the two disciplines were not putting each other down like the demographers and ethnographers involved in ethnographic demography were. I can’t find that post mentioning that behavior. It doesn’t help that this blog has stretched across three blogging platforms. But, the behavior is typical. Those converging will be some small portion of the contributing domains.

Mathematicians continue to develop fuzzy logic to this day.

After de-adoption, a researcher looked at the claims and found them to be false. This led the editors to realize that they needed an intervention. Their book was part of that intervention. That accelerated readoption.

Realize here that in the readoption, the base population has changed, and the concepts being adopted have changed as well. The mathematicians widened the conceptual model to be readopted while the psychology of concepts researchers were gone.

Looking at the underlying populations, the psychology of concepts population had not completely adopted fuzzy logic, nor did that population completely de-adopt. Those later in the adoption lifecycle never bothered with fuzzy logic. They didn’t go through de-adoption. They did go through readoption eventually.

One of the messy things about the normal distribution representation of the technology adoption lifecycle is that adoption happens in a time series. The population is spread out along that time series. The timeline moves left to right. Each sale, whether counted in seats or dollars, moves one down the timeline. B2B sales moves are huge. The mean becomes the marker where fifty percent of the seats have been sold. The growth side of the curve ends with the seat sitting at the fifty percent mark. This timeline is present regardless of skewness or kurtosis.

The timeline starts with the Dirac function providing the potential energy that drives the lifecycle. After the Dirac function comes the Poisson games. Then we move on to the convergence with the normal via sample populations of less than thirty, in statistics, these are Poisson approximations of the normal, which leads us to skewness and kurtosis. Once the sample populations are over thirty, we have a normal that is not skewed. Risks become symmetric. This normal is one of a series of three normals: vertical (carried), horizontal (carrier), and post-merger (whole media, both). The standard normal hides the relative sizes of these normals.
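A minimal sketch of that convergence, with hypothetical rates and sample sizes: a Poisson’s skewness is 1/√λ, so as the counts grow the skew fades and the distribution settles toward the symmetric normal.

```python
# Poisson-to-normal convergence: skewness of a Poisson is 1/sqrt(lambda),
# so larger counts mean less skew. Rates and sample sizes are hypothetical.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2)
for lam in [2, 10, 30, 100]:
    sample = rng.poisson(lam, size=50_000)
    print(f"lambda {lam:3d}: theoretical skew {1 / np.sqrt(lam):.2f}, "
          f"sample skew {skew(sample):.2f}")
```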

The three normals give us a hint towards Moore’s three horizons, which turn the technology adoption lifecycle around. The horizons look at the technology adoption lifecycle in the rear-view mirror as if they are right in front of us. Maybe a backup camera view is a better perspective. The B2B early adopter is barely seen or focused on. It is inconsistent with the present horizon.

Anyway, those two populations are now a third happily solving psychology of concepts problems with fuzzy logic. The defectors lost. The price-performance or s-curves make the case for adoption. Other things make the case for de-adoption, and readoption. The editors here demonstrated the role of the intervenor, or in most cases, the near-monopolistic, market power positioned, market leader that so many programmers abhor. That market leader does much to make the category happen and thrive.

So what is a product manager to do? Start with understanding the conceptual models that comprise your product. Understand the adopting populations for each. Those populations are not on the same page and don’t adopt at the same rates. Those domains do not inject change into your product at the same rates. Those populations might be deviating away from your product due to de-adoption of the underlying conceptualization. Yes, get someone to stay on top of the changes in each of those domains even. Know when a defect and defection is happening. That defection might disrupt you. That’s classic in the sense of how the hell would you, the product manager, have known. It’s not about competition. It’s about conceptualization. They change. They oscillate. They own you and your product if you’ve taken them into your product or service. They happen in the carrier and the carried of the media we play in.

Likewise know your s-curves, aka your price-performance curves. If they touch your product, know them. Sure, you can’t deal with the fabrication plant investment issue, but it will throttle your product if you need that fab.

A Discontinuity in a Sequence

August 22, 2016

In my last post, The Grid, we looked at how grids imprison sequences. We discovered a discontinuity, a hole, among the sequences laminated into the larger sequence, the sequences of differences between z-score values. I called them out. And, left much unsaid. We’ll continue that discussion in this post.

In mathematics, we have holes in our graphs. We have holes in what each of us knows about math. In Algebra class, we’re restricted to the reals, so we’re told no solution exists. It turns out many of those solutions are complex numbers, not reals. There are plenty of holes, potholes.

Then, we have asymptotes. We can approach them, but we can’t cross them with a function because they are manifolds, something that falls into that wide category of math we don’t know yet.

I remember stepping into a gopher hole. After that, I kept a close eye on the ground where my feet were stepping. One day a lieutenant colonel stopped his staff car so we could have a conversation about why I didn’t salute his staff car. “Gopher holes, sir.” Not that I had to worry; my colonel would have laughed the incident off. It was one of those days when the graph you live in has a few new nodes and the graph’s normal distribution changes.

The z-score sequence is directed from core to tail–away and towards. Oddly, humans use the same kind of dimension, technically a half of a dimension. We are 2.5-D beings, not 3D beings. But, we round off dimensions for our mathematical convenience.  If it’s not easy, it’s not math–easy being very relative. Consider that z-score sequence to be a vector. Consider the hole to accommodate another intersecting vector that for the moment we will consider orthogonal, or simply perpendicular.

01 Bundle Vector and Orthogonal Vector

Being orthogonal in statistics means that the vectors intersecting in that manner are independent, aka not correlated. The cosine of 90 degrees is zero, so the correlation is zero, so the vectors are not correlated.
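A minimal sketch with hypothetical vectors: for zero-mean vectors, the correlation coefficient is the cosine of the angle between them, so orthogonal vectors come out uncorrelated.

```python
# Correlation as the cosine of the angle between centered vectors:
# orthogonal (90-degree) vectors have zero correlation. Vectors are hypothetical.
import numpy as np

x = np.array([1.0, -1.0, 1.0, -1.0])     # zero mean
y = np.array([1.0, 1.0, -1.0, -1.0])     # zero mean, orthogonal to x

cos_angle = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
corr = np.corrcoef(x, y)[0, 1]
print("cosine of angle:", cos_angle, " correlation:", corr)   # both 0.0
```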

The vector passing through the hole in the z-score sequence has its own distribution. In the end, the data comprising that distribution will be added to the z-score sequence’s distribution. For now, that distribution is unknown, and like all unknowns constitutes a source of risk.

02 Bundle Vector and Orthogonal Vector w Risk Entry Point

Now, we can imagine a flow through the subsequences. Imagine each layer as a pipe. That gives us some plumbing, aka some fluidics. No, I’m not going there tonight. But, I did draw it just to assess its probabilities. Of course, I ignored some of the subsequences. In modeling, you put in what you think is important and you leave out the rest.

03 Risk Flows

Just for the Bayesian priors, s, t, and u all started with a probability of 0.50. That gave us the probability after the first mix, p(st) = 0.25. Then we dealt with the second mix, which had us adjusting the probabilities so they summed to 1.00, leaving us with p(st) = 0.333 and p(u) = 0.667. Oh, we’ve crossed an approximation boundary.
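The arithmetic, worked as a minimal sketch with the same hypothetical priors:

```python
# The same hypothetical priors, mixed and then rescaled to sum to 1.
p_s = p_t = p_u = 0.50

p_st = p_s * p_t                  # first mix: 0.25
total = p_st + p_u                # 0.75, so rescale the survivors to sum to 1
p_st, p_u = p_st / total, p_u / total
print(round(p_st, 3), round(p_u, 3))   # 0.333 and 0.667
```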

I finally gave in to reading David Hand’s “The Improbability Principle.” Hand refers to Borel’s theorem about the impossibility of events with sufficiently small probabilities. Borel wanted us to understand that p=1 and not more than 1. It takes a while to get to the point. Borel is modeling via probability, so the impossible events are left out, but due to Borel’s theorem, we are assured that we can simplify the situation via omission and keep going, all things being logically consistent.

We are not leaving the hole out. Everybody else probably has left it out. It’s not in the z-score table screaming out to be seen. We stumbled across it with much labor. But, we will start with the vector being orthogonal. I took a top-down view for the next graphic.

Evolution Top Down

Here we start at the global maxima of the z-score differences sequence, the axis of symmetry or rotation, on the left. The sequence runs to infinity somewhere off the page to the right. The hole appears in light blue. The hole is where the sequence vector intersects the orthogonal vector. The long-term mean will come to rest at the intersection. The r variable is the indicator of correlation. The angle between the sequence vector and the actual vector (shown in red), theta, illustrates a positive correlation. So the distribution will come to rest on the actual vector (red).

We started with a surprise unknown at the hole. Once discovered, we have to find its measure. So we assert the distribution’s existence. This has the effect of putting a Dirac function at the center of the distribution. With more data, we have a Poisson distribution. We can use that Poisson distribution to approximate the normal distribution until we have collected 30 or more data points. The figure is wrong, but I had to make the Poisson distributions large enough to show up. The Poisson distributions would still be inside or under the normal distribution. As the Poisson approaches the normal, the mean moves around until it settles at the core intersection, aka the mean as shown in the diagram, and the distribution would exhibit skewness and kurtosis.

The Process

Here I show the evolution of that hole. The Dirac function generates a line at infinity, here labeled PE, as in potential energy. Potential energy is used here to hint at information physics. Strong writing on information physics puts it as potential energy being position and not some form of energy, just a physics bookkeeping sleight of hand. Next, the Poisson distribution is generated along the line of positive correlation in its continuous form (blue line and blue area). Poisson distributions speak loudly to the myth of deregulation being valuable in a business. The constraint, here a policy constraint (gray), moves the probabilities stretching out to infinity and concentrates them into the histograms inside the constraint, which makes the business more focused and less costly. Beware of this myth. The constraint generates the higher histograms (red volumes with orange tops) in the discrete form and generates the higher curve (dark red) as opposed to the original curve (blue) in the continuous form. Constraints create value.

Last, the normal distribution reaches its equilibrium distant from the Poisson distribution on the timeline (gray). The normal has lost the directional sense that the Poisson distribution provided. The data is close in distance but spread out over time. The potential energy of the assertion that generated the Dirac signal flows down to the normal and beyond as the normal gets wider and loses height, aka becomes flat. The normal here is situated in Euclidean space. The Dirac and Poisson are situated in hyperbolic space. Beyond the normal shown, where the normal becomes flat, those normals find themselves in spherical space. Financial analysis as it is conducted today is carried out in spherical space. In that space, multiple analyses give good answers. In hyperbolic space, no analysis gives good answers.

Think of your data efforts as dynamic undertakings. Statistics uses the static view as the means to honest statistics; dynamics are prohibited. Statisticians take snapshots, but technology adoption is a dynamic proposition.

Standard normals hide much. All normal distributions look the same in the standard normal form. At times, seeing the real normal will tell us much.
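A minimal sketch of that hiding, with made-up samples: two very different normals standardize to the same picture, so the location and spread that distinguish them disappear.

```python
# Minimal sketch: standardizing erases the actual location and spread.
# The two samples are hypothetical and exist only to make the point.
import random, statistics

random.seed(0)
narrow = [random.gauss(3, 0.5) for _ in range(1000)]
wide = [random.gauss(250, 40) for _ in range(1000)]

def standardize(xs):
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

for name, xs in (("narrow", narrow), ("wide", wide)):
    z = standardize(xs)
    print(name, round(statistics.mean(xs), 1), round(statistics.stdev(xs), 1),
          "->", round(statistics.mean(z), 2), round(statistics.stdev(z), 2))
```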

The Grid

August 18, 2016

It’s been said of mathematical proofs that they start somewhere and end somewhere else. Grids behave in the same manner. Grids might be rectangular or square.

Grid 4x10

Grids might be laid out on some modulo, which greatly restricts their shape and how they shape the content they contain or in our verbiage “carry”. In the end, a grid starts somewhere and ends somewhere else.

Mod 10 Grid

Each of the rows could have kept on going, but the rule about row population prevents this, and instead, puts the red numbers on the next line.
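A minimal sketch of that row-population rule, with consecutive integers standing in for the carried numbers; the wrapped values play the role the red numbers play in the figure:

```python
# Minimal sketch: a modulo-10 grid wrapping a carried sequence onto rows.
# The carried content here is hypothetical: consecutive integers.
def grid(seq, modulo=10):
    return [seq[i:i + modulo] for i in range(0, len(seq), modulo)]

for row in grid(list(range(1, 38)), modulo=10):
    print(" ".join(f"{n:>2}" for n in row))   # the last row runs short: the grid ends somewhere else
```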

A table of z-scores takes an infinite ray and chops it up at decreasing and, later, increasing intervals. The z-score table in the back of my statistics book gives the wrong impression when it chops the entries into rows of ten z-scores, a modulo-10 layout. The shape of the table controls the shape of the carried z-scores. The z-scores have their own shape, but it is lost here.

Table As Media

Just to make the table-as-media reality clearer, I’ve changed the carrier, the grid, by changing the number of columns. I changed the metadata, or meta-carrier, to change the number of columns. Being a carrier or the carried is a matter of shifting contexts in the stack.

Table Carrier Modified Meta Carrier

Oops! This carrier is smaller than the last. We’ve run out of carrier before we’ve run out of carried content. Those excess numbers fall into a jumble on the floor. Some of the numbers that remained in the table did not move. Others moved. I’ve highlighted the ones that did not move. They remind me of Ito processes, processes with fixed-size memories. A Markov process is an Ito process with zero memory (n=0). In our table, the rows are memories that vary between zero and ten (0 ≤ n ≤ 10). This memory problem is what the Hilbert curve was invented to solve. A value placed on a Hilbert space-filling curve never moves. Hilbert curves forget nothing in our Ito-process sense even as the resolution or densities vary. In terms of the last post, Matrix Composition, the processes never move even as the customers and the products move on.
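A minimal sketch of the reflow, assuming the simplest possible carrier, an index laid out row by row; the widths 10 and 7 are hypothetical stand-ins for the two grids:

```python
# Minimal sketch: change the carrier's column count and see which carried values keep
# their (row, column) position. Widths are illustrative only.
def cell(index, width):
    return divmod(index, width)      # (row, column) of the index-th carried value

old_width, new_width = 10, 7
stayed = [i for i in range(40) if cell(i, old_width) == cell(i, new_width)]
print(stayed)                        # only the first row survives the reflow: [0, 1, 2, 3, 4, 5, 6]
```

In this toy version only the first row keeps its memory, which is the Ito-style point: some fixed memory survives the change of carrier.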

Table Sequence and Memory

When the carried is a sequence, it remains a sequence. The grid becomes sparse or ceases to be a rectangle or a square when the sequence dances. z-scores are such a sequence. The z-score sequence is really a collection of sequences.

Sequence of Differences Without Modulo 04

Here I’ve put each sequence making up the larger sequence on its own line. Here we put a parsing rule in place. The first number that is larger than the previous number goes to the next line. Then we add the next numbers of equal value, pushing back to the front indicated by the red vertical line. This works until the new line is longer than the prior lines. Then we add another rule. Push the front of the lower-value numbers further to the right and add spacers or holes on the lines above where necessary, so the lower values are aligned at their front. Spacers change the shape of the surface of the curve. Holes run through the solid mass of the curve. Those two rules let the sequences express their “natural” shape. The grid is going where it will. The shape of the curve, the shape the grid will follow, might surprise you.
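A minimal sketch of the first rule only, where a value larger than its predecessor starts a new line; the numbers are made up, not the actual z-score differences:

```python
# Minimal sketch of the first parsing rule: an increase pushes the value to the next line.
# The input sequence is hypothetical.
def split_rows(seq):
    rows, row = [], [seq[0]]
    for prev, cur in zip(seq, seq[1:]):
        if cur > prev:               # rule: a number larger than the previous one starts a new line
            rows.append(row)
            row = []
        row.append(cur)
    rows.append(row)
    return rows

diffs = [0.40, 0.39, 0.39, 0.38, 0.92, 0.91, 0.90, 0.90, 0.89]
for row in split_rows(diffs):
    print(row)
```

The alignment and spacer rules would come after this split, shifting rows to the right and punching the holes described above.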

Iterations and releases would behave similarly. If you put too much in an iteration, you end up pushing the boundary of the next iteration or release. Or you move the current iteration into the next release and ship what you have, a working iteration.

As a product manager, are you imposing a modulo on your roadmaps, or are your roadmaps going where they go without enforcement? Are you mining the shape of your roadmaps for surprise? Yes, we impose some rules about delivering value in each release. We have an upgrade tempo, but the functionality carried by the roadmap dances to its own shape.

Are your carriers clearly separated from your carrieds? Are your populations facing your carriers or your carrieds? Remember that the IT horizontal is carrier facing. Most of what we do these days is likewise carrier facing even though we might be selling to consumers. Are we turning consumers into administrators with this carrier focus?

The push rule provides a new kind of outcome if we are being probabilistic about outcomes. Z-scores have holes in them.


Matrix Composition

August 14, 2016

Watch this first, “Matrix algebra as composition.” A firm is a sequence of matrix multiplications. When we do anything, we are left with a need for each transformation, a sequence of such, and the evolution of that sequence over time. Your fast followers won’t match your evolution, and they won’t match your sequence, your composition. They will start somewhere else, and go directly to the product emerging from your composition. The fast follower will duplicate your output without duplicating your firm.
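A minimal sketch of the composition idea, with hypothetical 2x2 matrices standing in for the firm’s successive transformations; the only point is that the composed product hides the sequence that built it:

```python
# Minimal sketch: a firm as a composition of transformations (matrix multiplications).
# The three steps are made-up 2x2 matrices.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

step1 = [[1, 1], [0, 1]]             # each step reshapes what the previous steps produced
step2 = [[2, 0], [0, 1]]
step3 = [[0, -1], [1, 0]]

composition = matmul(step3, matmul(step2, step1))
print(composition)                   # the fast follower sees only this single matrix,
                                     # not the sequence or its evolution
```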

In the competition, if you insist on calling it that, your output fits your customers and hopefully it fits your near-term prospects, the prospects on the next pragmatism step. Your output doesn’t fit your competitor’s customers. Notice that your feedback only fits your existing customers, aka your economies of scale. We consume our market allocation at times in seats and at other times in dollars in addition to seats. We do not consume our competitor’s market allocation. We convert our prospects into buyers of the system, then we immediately market to them as repeat, continuing customers. This latter part is where software companies captured their increasing returns. If the marketing does not bifurcate, we’re selling a product with very high upgrade costs. More money, sure, but bad money.

With discontinuous innovations, we start off with a client, just one, but a firm, not a single individual, with a wide range of use cases to cover. We start with a lot of potential. We picked that client with our bowling alley strategy in mind. We pick one in the middle of the industrial classification tree, so we can move up or down the tree as we go. That enables us to span not just the firm, but the whole industry, the whole ecology, the whole value proposition. Eventually, we will be in the simpler place described in the previous paragraph. But, our composition in matrix terms is deep. Our fast followers are thin. So keep your cards to yourself and fake the tells, so the competition chases its illusions instead of you.

The differences across the technology adoption lifecycle are immense. We hire for each function, we tune each function, then we cross a technology phase boundary and change the focus of our functionality. Call this latter thing forgetting. But, that means we cannot repeat the function in the future when the demand for another discontinuity requires it. Apple is stuck now. The length of time that a company is stuck is a reflection of how much it forgot. Repeated discontinuous innovation requires remembering, rather than forgetting. Repeated discontinuous innovation requires an organizational structure that can improve its processes and its customer knowledge. Not the stuff of innovation consultants. Even if Christensen suggested it long before his effect-cause confused disruption idea became the rage. The cost accountants couldn’t go there. So the organizational structure required goes unaddressed.

But, what of Christensen’s separation, as he called it? Everyone is probably thinking of separation as in spin-outs or its cousins. But, there is another way to separate. It’s hard work. It doesn’t anchor itself to economies of scale. Discontinuous innovations require new markets that might merge, or not, decades down the road into one of the company’s economies of scale. The company has a tempo modulating continuous innovation with discontinuous innovations. The former serves existing customers. The latter finds new, never-before-addressed customers.

Software as media provides a hint. In the software-as-media model, we split the carrier from the carried. The distinction is difficult at times. What is, strictly speaking, about the carrier, the software, and what is about the content of the domain? Addition is a carrier (red) of the carried things being added (blue), so 01+01=10. But, if it is something carrier being added, like loop indexes, the whole thing would be carrier, as in 01+01=10.
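A minimal sketch of that split, under my reading of the addition example; the invoice total and the loop index are hypothetical stand-ins for carried and carrier content:

```python
# Minimal sketch: the same addition mechanism is carrier; what it adds decides carried vs. carrier.
def add(a: int, b: int) -> int:      # carrier: pure mechanism, knows nothing about the domain
    return a + b

invoice_total = add(0b01, 0b01)      # carried: domain content riding on the carrier, 01+01=10
index = 0
for _ in range(2):
    index = add(index, 1)            # all carrier: loop indexes exist only to serve the mechanism
print(bin(invoice_total), index)
```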

An organization is also media, so it has carrier and carried layers. The carried layer would be focused on the customer. The carrier layer would be focused on things that don’t require customer inputs, like the process of shipping goods to the customer. The staff that had customer relationships would flow through the firm with the customers. The staff that had process knowledge would stay in the phase-specific organizations and keep improving those phase-specific processes.

The technologies would flow through the organization as well. The technology would be productized at the B2B early-adopter client engagement. The technology and the product would then flow into the vertical phase, then the IT horizontal phase, and beyond. But, when the bowling alley has a free lane, the next technology would take it. The processes across the phase-specific divisions would be fully loaded all the time, as would the staff attending to those processes.

The IT horizontal oscillation switches the focus from the carried to the carrier, and the next adoption phase shifts the focus back to the carried. In this situation, the customer-specific staff would not be fully loaded, but would have time to gain more in-depth knowledge of the domain constituting the carried.

A company organized in such a way would have to manage the separation. Cross talk between the managers in the different phases needs to be suppressed. A best practice in the tornado, “free,” doesn’t work beyond the tornado. Sales reps love tornados, but tornado sales forces are unlike the sales forces serving both retained customers and new prospects. “Free” fails in all other contexts except the merger tornado.

Each phase has its own operational foci. A factor analysis of each would reveal that the organizations in a specific phase are alike, and different from the organizations in all the other phases. Each organization has its own factor analysis, as in factors and factor weights. The parent company would look like a holding company and have holding-company problems, like understanding that there are no synergies across the held organizations.

But, I’ve thought about this long enough.

Know where you are. Don’t do what everybody else is doing, particularly those companies that don’t know where they are. Know that funding phases are not synchronized with adoption phases. Many of those so-called technology companies are not technology companies at all. Most of them are technology users, not technology makers. They are coding content, not carrier. They are doing continuous innovation and throwing away the results from discontinuous possibilities because the hyperbolic realities don’t look like the familiar spherical geometries they are used to. Yeah, I know, too much.