Archive for June, 2015

Factor Analysis: What's Important to Your Product

June 29, 2015

Earlier in the week, John Cook tweeted something about Coxeter circles, so I clicked the link and was surprised by the following figure. The relationships between the diameters or radii of the circles are the same as what one would expect from a factor analysis. The first factor is the steepest and longest. The next is less steep and shorter than the first. Each subsequent factor is less steep and shorter than the previous one. The particular angles and lengths will differ, but each subsequent factor will always be less steep and shorter.

Coxeter circles

The circle labeled zero is your firm. The circle labeled one would be your category. If you are focused on managing your revenues, the monetization generating those revenues would determine your category. If you are focused on something other than revenues, then place yourself in a category relative to that. The circles labeled two or three, or any number above one, would be macroeconomic considerations.

A factor analysis typically covers 80% of your variance with three factors. They would be labeled with negative numbers here. The area of a given circle hints at how much variance that factor covers. The factors shrink: as circles, they get smaller; in a line graph, they get flatter and shorter. Statistical study of your variance beyond those three factors gets more expensive, so your budget constrains the number of factors your effort can be managed with. That budget is both monetary and a matter of managerial focus. The independence of the variables and the complexity of the data fusions giving rise to each factor would impact managerial focus.
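To make the scree intuition concrete, here is a minimal sketch in Python. It is a principal-components view of the same behavior rather than a full factor analysis with rotation, and the data are made up: successive eigenvalues of the correlation matrix shrink, and the first few cover most of the variance.

```python
import numpy as np

# Hypothetical data: 200 observations of 6 product metrics, correlated so
# that a few underlying factors dominate.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 6))
data[:, 1] += 0.8 * data[:, 0]
data[:, 2] += 0.6 * data[:, 0]
data[:, 4] += 0.7 * data[:, 3]

corr = np.corrcoef(data, rowvar=False)                 # 6 x 6 correlation matrix
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]  # largest factor first

explained = eigenvalues / eigenvalues.sum()            # variance per factor
cumulative = np.cumsum(explained)

# Each successive factor is "less steep and shorter": it covers less variance.
for i, (e, c) in enumerate(zip(explained, cumulative), start=1):
    print(f"factor {i}: {e:.0%} of variance, {c:.0%} cumulative")
```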

The Coxeter circles here represent two levels of macroeconomic factors, your category, your firm, and your product. For wider product portfolios, there would be more circles with negative numbers. Imagining this in three dimensions, as collections of spheres, would demonstrate some interesting relationships.

In a firm that stretches across the technology adoption lifecycle (TALC), the factors would migrate as in an animation, living and dying as Ito memories and oscillating between carrier and carried considerations. In such a firm, the population considerations could be a parallel factor analysis anchored around each population’s relevant product. Economies of scale do not allow expression of the TALC.

Factor analyses need not be firm centric. The economic return on a given set of factors places a given firm in a given value chain. In a value chain, the larger, aka steeper and longer, factors may be outside of your managerial focus. A small factor for your customer would be a very large factor for your company. The key reason to outsource is to preserve managerial focus. When you tell your supplier how to do business, you are not preserving managerial focus. I realize a product manager wouldn’t do this, but when it happens, it enters into your matrixed product organization.

Factor Analysis of Value Chain

Ad serving might be your only monetization, so you need to get and keep eyeballs, and deal with the standardized ad serving infrastructure. Your factor analysis would have holes, discontinuities, in it. Fast followers would have similar factors, whole product factors, and supplier factors.

In the figure, two whole products are shown: one for web, and another for mobile. One fast follower is shown. A fast follower may compete with you on a single factor. All ad serving monetized businesses might use this supplier.

The arrowheads indicate convergences defining the world size of a given value chain. That is similar to convergences in probability distributions. A factor analysis looks like a power law distribution or a long tail.

Where you have discontinuities in your value chain, you will have to establish well-defined interfaces, as well as decide how soon you would want to follow changes to the definitions of those interfaces.

Ito Processes in the Technology Adoption Lifecycle

June 20, 2015

A Markov process has no (zero) memory. An Ito process has a finite memory. A Markov process is an Ito process with a memory size of n=0. All of that is, for our purposes, talking about history, or more specifically, relevant memory.
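A minimal sketch of the distinction, in Python with a made-up update rule: the next state depends on the current state plus a window of the last n past states, and n=0 collapses to the memoryless Markov case.

```python
import random

def next_state(current, past, n):
    """Next state depends on the current state and at most the last n past states."""
    window = past[-n:] if n > 0 else []                   # the process's finite memory
    drift = sum(window) / len(window) if window else 0.0
    return 0.5 * current + 0.5 * drift + random.gauss(0, 1)

def simulate(n, steps=100):
    states = [0.0]
    for _ in range(steps):
        states.append(next_state(states[-1], states[:-1], n))
    return states

markov = simulate(n=0)   # no memory: each step sees only the current state
ito = simulate(n=20)     # finite memory: each step also averages the last 20 states
```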

In our ordinary conversations about memory or learning in a firm, the memory is infinite. It is not an Ito process, so it can’t be a Markov process. We talk about brand and design as if they will always be relevant, and have always been so. We talk about a whole host of things this way. But, it is the technology adoption lifecycle that makes everything finite. We try very hard to make the late mainstreet market infinite. Sloan’s invention of management leads us to the infinite firm and the management practices that make the infinite firm. Blue oceans lead us to find another structure for a category after we can’t get any more infinity from our management practices. These notions of infinity invite us to cut costs until there are no more costs to cut. These notions of infinity kill our companies, and kill them ever faster.

Innovation and management are entirely different. Sloan didn’t innovate, except in his creation of the product he called management. He did not innovate cars. He grew his company through M&As. He consolidated his category. Such consolidations are an indicator that the market leaders have been chosen. Those market leaders get a monopoly or near-monopoly position. Everyone else is stuck in promo-spend territory fighting over the scraps. Everyone else is stuck with competing on brand and design, because they have no market power and no differentiation. This is the late mainstreet phase of the technology adoption lifecycle (TALC) out to the laggards (devices) phase. The later you are in the TALC, the more you have to spend on brand and design, and the more you have to manage your costs and processes.

When we talk about the early mainstreet, IT-horizontal, geek-facing internet of the 90s as if it didn’t have design, we ignore the lesson of the TALC: fit the population you serve. Design is not a characteristic of geek-facing products. Design is a characteristic of consumer-facing products. The geeks that tried to sell dog food, or any consumer product, back in the 90s, in the early mainstreet market, failed. Those same geeks giving something away for free, something technical, something infrastructural, something non-consumer, succeeded. We came into the late mainstreet market knowing that free worked, that customers would not pay for anything, that paywalls were wrong, …. We came into the late mainstreet market having learned the wrong lessons. We are finally forgetting those lessons. We are finally learning that consumers pay for stuff.

Alas, we learn the wrong lesson still when we try to sell something to geeks in the late mainstreet market. No, they will not pay… We are learning the wrong lessons from our success with consumers.

The main problem in crossing the TALC is that the TALC structures our memories. We have finite memories and infinite memories. But, we only have one memory. In my prior discussion of software as media, and in my TALC slideshare, I touched on how the TALC structures that memory.

So back to this Ito process.

The birth of a category begins by finding the B2B early adopters. Yes, lean does not start there. Lean is late mainstreet. Lean is built on other people’s whole product. It starts well within a category’s life. The birth of a category is 90s-era internet. That’s where today’s whole product came from. Twitter is probably the only such play we’ve had in Web 2.0. Even Google is a subsequent generation in an existing category, and a promo spender to boot. And, no, we hear about how B2B needs design these days; sorry, but that is late mainstreet as well. It’s consumer and laggard/phobic facing.

The category is born with a Poisson game, aka a Markov process. These vendors have nothing to leverage and face the process of building tech, product, market, and company, all while facing the client. Unlike lean, they are stuck with the technology whose adoption they are fostering. Unlike lean, the best practice is to automate the client’s product visualization, not your own. Well, lean lets the users provide the product visualization instead. The point is that n=0, aka we have a Poisson process with no memory. But, we do have a nascent finite memory on our hands. Because we intend to repeat this process, we separate our phase processes from our customer-facing people. Usually, companies do not do this. For them, the end of their category leaves them without a memory of the discontinuous innovation processes, so they start over again, disadvantaged by the cost of trying to use their current late mainstreet processes to do what those processes cannot do, and by economies of scale devoid of the needed customer base. Memory problems have costs, but accountants can’t tell you how much those problems cost. Memory problems kill innovation. Separation, Christensen’s real original concept, failed to gain traction against the cost accountants.

Christensen built his consulting firm with late mainstreet people who did not provide the early mainstreet effort needed to foster adoption of the separation concept.

So we start with a Markov process. With every capability we build in our consultative product implementation processes, we add to that memory. Call it n=20. Then, we start to build our vertical market chasm-crossing processes, n=21 to n=60. But we partition these two capability collections. We keep our consultative processes going with a brand new discontinuous innovation when the time comes, when the bowling alley ends. Then, we focus on carrier and build our IT horizontal facing processes, n=61 to n=90. Within the IT horizontal facing organization, we build our tornado capabilities, n=91 to n=100. The tornado capabilities will be harder to retain, because they only work in the tornado and in the post-M&A tornado. It is hard to keep them loaded from an HR perspective. Likewise any IPO and further investor relations capabilities, again memory in terms of processes and people. Through it all, our Markov process becomes an Ito process.
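A sketch of that partitioning as a lookup table, using the index ranges named above; the structure and the retention function are my illustration, not a prescribed implementation.

```python
# Capability memory partitioned by TALC phase, per the ranges above.
talc_memory = {
    "consultative (B2B early adopter)": range(1, 21),    # n = 1..20
    "bowling alley verticals":          range(21, 61),   # n = 21..60
    "IT horizontal":                    range(61, 91),   # n = 61..90
    "tornado":                          range(91, 101),  # n = 91..100
}

def retain(phases_to_keep):
    """Keep the partitions we intend to revisit; unretained ones dissipate."""
    return {phase: indices for phase, indices in talc_memory.items()
            if phase in phases_to_keep}

# Keep the consultative partition loaded for the next discontinuous innovation.
kept = retain({"consultative (B2B early adopter)", "tornado"})
```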

At some point we get to our six sigma normal, and all things Markov/Ito become Gaussian. Memory becomes infinite. We move from discovery to enforcement, different types of machine learning. Our geometry changes from hyperbolic to Euclidean, and subsequently, beyond six sigma, to spherical; Euclidean and spherical being safe for management.

Still, there are events that drive us back to earlier memories. Commodification of core technologies makes us go back to discontinuous innovation in the midst of our continuous innovation efforts. Mass customization forces us to focus deeply on the carried like we did for the B2B early adopters. There will also be processes that we use once and throw away. Before throwing them away, however, you need to think long and hard about reuse and load issues. If you will need those people and processes again, don’t throw them away; find a way to keep them loaded, rather than letting them dissipate in lateral moves.

Outsourcing is another of those late mainstreet methods for managing managerial focus that lead us to dispose of capabilities and learning, memory, that we may need again. Again, think hard. You can’t get these back after they are gone.

The devices phase leads us to gain a hardware capability beyond the software capabilities we already have. Hardware also drives new software capabilities. More memories, more people, more processes will all be required. Cloud, the phobic phase, similarly.

As in my post on incommensurate below, the water balloon, or balloon poodle, model will help here. Where does the memory begin? How large does the girth of this memory get? How long does it last? Does it produce cash or wealth or loss? What balloons are inside other balloons? What balloons are outside the others? What are the interfaces? The coupling? The cohesion?

Know that you are managing your company’s memory. Learning is good, but it takes us away from our pasts even as it takes us to our future. Learning can prevent us from going back to the parts of our past that we will need again unless we were built to flip, built to exit. Manage memory.

Comments?

Incommensurate

June 15, 2015

Back in 2009 or so, a reader of this blog asked me to define the term incommensurate. I’ll do that again here.

I’ll start with a graph from S. Arbesman’s The Half-Life of Facts. That graph was a surprise to me. It displayed the results of fifty or so experiments about temperature. Some of the experiments intersected with other experiments. Other experiments were parallel to the existing experiments. I’ve drawn a graph here showing the same kinds of things.

Base

The darker lines are the results of a regression of data contained by the light gray rectangle. Each rectangle represents a single experiment and its replications.

Where the lines intersect, we can call those results commensurate. They result from what Kuhn called normal science. The experiments were designed differently, but reflect a single theory. The measurements within a single experiment reflect a particular apparatus. Changing the apparatus would give you another experiment with potentially different results.

Where the lines don’t intersect, we can call those results incommensurate. I’ll point out the gaps in the next figure. These gaps reveal an inadequacy in the current theory.
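One minimal way to operationalize commensurability, my construction rather than anything from Arbesman: treat each experiment's result as a fitted line over its own range, and call two results commensurate when the lines cross inside the overlap of those ranges.

```python
def commensurate(line_a, line_b):
    """Each line is (slope, intercept, x_min, x_max) from one experiment's regression.
    Returns True if the fitted lines intersect within both experiments' ranges."""
    (m1, b1, lo1, hi1), (m2, b2, lo2, hi2) = line_a, line_b
    if m1 == m2:                       # parallel results never intersect: a gap
        return False
    x = (b2 - b1) / (m1 - m2)          # intersection of y = m1*x+b1 and y = m2*x+b2
    return max(lo1, lo2) <= x <= min(hi1, hi2)

# Two overlapping experiments whose regressions cross: commensurate.
print(commensurate((-1.0, 10.0, 0.0, 5.0), (-0.5, 8.0, 2.0, 7.0)))   # True
# Parallel or disjoint results leave an incommensurate gap.
print(commensurate((-1.0, 10.0, 0.0, 3.0), (-1.0, 6.0, 4.0, 9.0)))   # False
```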

This graph can show us all of the experiments at once. But, that covers up things that would be revealed better in an animation. We don’t know, from this graph, when a particular result showed up. If we attended to the temporal aspects of the underlying data, we’d be able to see other gaps. The gaps are characterized across the ranges and domains of several experiments.

Continuities 00

In this figure I’ve highlighted the continuities, the intersections, with red squares. I’ve assumed that all of these intersections exist. The results of one experiment, in the top left, are shown in blue. I’ve assumed that this experiment was incommensurate and that the experiments that intersect with it did not exist at the time. The experiment that connected it to the chain of experiments to its right happened later.

The experiments shown with red lines are still incommensurate. They exhibit gaps with those experiments to their right. At the bottom right, three experiments exhibit continuity with each other, but exhibit a gap with both the other experiments above and to their right, and the other experiments to their left.

Normal science looks like a well-connected network. Extending the range and domain of the existing theory is the job of normal science. A single regression would result in a decreasing function. Where the details differ from that single regression, we have an opportunity for clear functional differentiation.

Each of those commensurate experiments enables continuous innovation that extends the life of a category after the discontinuous innovation gives birth to the category. The technology adoption lifecycle is driven by improvements in a technology’s price-performance curve or S-curve. It is the price-performance curve that delivers on the promises made when the technology was sold and purchased. The demanded performance gets easier and easier to deliver, and the ranges and domains of the underlying experiments expand.
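The S-curve can be sketched with the usual logistic form; the limit, rate, and midpoint below are hypothetical. Year-over-year gains are hard-won early, compound through the middle of the curve, and taper as the technology saturates.

```python
import math

def s_curve(t, limit=100.0, rate=0.8, midpoint=6.0):
    """Logistic price-performance curve: performance delivered at time t."""
    return limit / (1.0 + math.exp(-rate * (t - midpoint)))

# Gains are small early, large mid-curve, and small again near the limit.
for year in range(0, 13, 2):
    gain = s_curve(year + 1) - s_curve(year)
    print(f"year {year:2d}: performance {s_curve(year):5.1f}, next-year gain {gain:4.1f}")
```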

Discontinuities 00

In the next figure, I’ve circled the discontinuities, the gaps, the incommensurate experiments. We won’t pursue experiments to bridge the gaps labeled G and H. We won’t try moving to G, because we can already read that temperature. We might want another way to take that measurement. We could develop a pass-fail thermometer where we are just interested in knowing if we have to make a bigger effort to get a numeric reading. Then, jumping that gap would make sense. The gap H just hasn’t been worked on yet.

Discontinuities

Next, I went back and color coded the labeled gaps. The black rectangles show the ranges and domains involved in bridging a given gap. Bridging a gap requires new theory. The gap at A is from the experiment represented by the blue line to the experiment on the right. The gap at E can bridge to any of three branches on the right. Any one branch will do. Continuous paths can get you to the other branches. Think step functions. The gap at F actually gaps with a collection of experiments to its right. The gap at B bridges two large subnets. Bridging this gap is critical. The gap at D can bridge to the left or the right. Either will do. Again, paths exist to get to and from the left and right sides.

Other parameters

In this figure, I’ve put lines at the bridged gaps indicating the use of a new parameter that enables us to bridge the gaps. These parameters are labeled p and q. Their use was described in a new theory. The dark purple lines demonstrate how a continuous path through the network resolves a branch while resolving the gap.

The gaps at E and A were resolved via parameter p and the network flow. The three gaps at F were resolved by parameter p as well. The gap at B was resolved by the solution to the gap at F. The gap at G continues to be ignored. The gaps at C and D were resolved via parameter q and network flows. The gap at H, again, is ignored.

In these experiments, basic research has shown us where our opportunities lie. It has delivered to us the incommensurate seeds of categories, and the commensurate lifeblood of new growth (dollars, not populations) to lift us slightly from the swamps of the margins of our nominal operations.

Another Explanation

The simplest explanation of what incommensurate means is that every theory is a water balloon. A theory can only hold so much of what it manages to contain. When you want more than a theory can deliver, when continuous improvements run out, you need a new trick to combine two water balloons. Have fun with that.

Where to Invest?

June 4, 2015

Where to invest was the question. My answer has always been in a company doing discontinuous innovation. But, like most things, finding them is the hard part. Most of what we hear about these days is continuous innovation. What we don’t hear about is discontinuous innovation.

I’ve worked in startups since well before the web came along, so my problem has always been finding startups. Truth is that I didn’t find them. They found me. But, living in a startup desert, I’m looking for ways to find them. For a job search, watch your sales tax permit applications. That’s not much help for an investor, and it’s probably way too early. I know from cold-calling SIC-coded companies that the SIC classification system is very wide. You’ll end up calling a lot of companies that don’t do anything remotely like selling software.

The investor alternative is to find VC funds and put your money in one of them. If you’re going with discontinuous innovation, finding that VC fund will be the issue. I don’t know if VC funds mix discontinuous and continuous innovators together in the same portfolio. I do know that the continuous investments are smaller and get less attention from the VCs. Discontinuous innovations take more time, more money, and more VC attention.

You’ll hear about the continuous innovators, and more than likely you won’t hear about the discontinuous innovators. Read the journals in the fields where you expect to invest. Read the SBIRs. Take note of the investigators’ names. Check their bibliographic information. When will one of their students bring the investigator’s technology to the market?

Anyway, just a few hints on where to find the discontinuous innovators. Investing in a company that creates a category and gets the near-monopolistic position is a good place to grow your money. The quick flip of the continuous innovators or the fast followers, not so much.

Remember that the technology adoption lifecycle is more than some ordered Markov process transitioning populations. The populations organize the companies serving them. Early phases grow. Late phases decline. We hide that decline in things like cost management and large numbers. Early phases create wealth. Late phases capture cash. Discontinuous innovations begin in the early phases and transition into the late phases. Continuous innovations begin in the late phases and live short lives.

From a Geometry Proof

June 2, 2015

I was out on Twitter several weeks ago, and Alexander Bogomolny tweeted another of his GeoGebra proofs, shown below.

Raw

The key issue was the similarity of the two blue lines in terms of their angle and length. But, I looked at the center pentagon and thought of value chains, Shapley values, and just who that dot in the center pentagon was that manages the interactions with the other pentagons. How central was this person?
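Since Shapley values came up, here is a minimal sketch of computing them for a five-contributor value chain, one contributor per pentagon. The player names and the coalition worth function are made up for illustration.

```python
from itertools import permutations

def shapley_values(players, worth):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = worth(frozenset(coalition))
            coalition.append(p)
            totals[p] += worth(frozenset(coalition)) - before
    return {p: totals[p] / len(orders) for p in players}

# Hypothetical five-contributor value chain, one contributor per pentagon.
players = ["center", "supplier", "maker", "channel", "buyer"]

def worth(coalition):
    value = 10.0 * len(coalition)            # each member adds standalone value
    if "center" in coalition and len(coalition) > 1:
        value += 15.0 * (len(coalition) - 1) # the center amplifies every tie
    return value

print(shapley_values(players, worth))
```

By construction here, the center's Shapley value dominates, which is one way to quantify how central that dot is.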

To examine the issue of that person’s centrality, I looked for the center of the center pentagon.

Center

From the figure, it was clear that the person working where the value chain contributors meet was not some executive in the CXO crowd. Instead, the person was managing at a distance from the CXOs. Sales reps like the CXO sale, but this person, the user, is some distance from the buying decision. Yes, the buying team is constructed. This person, the user, might be on the buying team. But the vision, the value proposition, across these distances will differ. The grandeurs of the CXO’s value proposition can be very distant from the user thinking about how they can cut and paste these numbers into that equation. So who the customer is, and who this user or that one is, is critical. Those buying personas all have personas in your software.

Are you calling each of them? Do you satisfy all of them? Are you teaching each of them? Are you marketing to each of them? Unlike the proof about those blue lines, those buying personas are not similar. They are in conflict, even if the CXO has everyone aligned.

What about all those who don’t use your software, but are aligned with the larger, smaller, or intermediary value propositions? How long will they be involved? When will their involvement begin and end? What will your process orchestration look like?

That person at the intersection of that value chain is just one Poisson distribution under the corporate normal. The vector under that Poisson points where?

Comments?