A Night at the Bookstore, Yes, a Bricks-and-Mortar Space

July 23, 2015

When I go to the bookstore or a university library, I pick out a stack of books in my areas of interest, and try to scan through them enough to justify taking them off the shelf. I was supposed to finish a particular book, but that didn’t happen. Instead, I spent some time looking through the following at a high level:

  1. EMC Education Services (yes, the company is the author), Data Science and Big Data Analytics,
  2. Lea Verou, CSS Secrets, and
  3. Adam Morgan et al., A Beautiful Constraint.

In Data Science …, I came across a very clear diagram of how the distributions behind power (or significance) get narrower and taller as sample size increases. Consider each sample to be a unit of time. That leads us to the idea that power arrives over time. These statistics don’t depend on the data; they are about the framing of the underlying studies. The data might change the means and the standard deviations. If the means are narrowly separated, you’re going to need a larger sample size to get the distributions narrow enough to be clearly separated, which is the point of the power statistic. Their arrivals and departures will change the logic of the various hypotheses. Under this paradigm you could see the disruptions of Richard Foster’s Innovation: The Attacker’s Advantage, a book Christensen referenced in his Innovator’s Dilemma before he took an inside-out, scientist/engineer-free view of disruption, as the arrivals of the steeper slopes at the price-performance curve intersections, and the departures of same.
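To make that diagram concrete, here is a minimal sketch, assuming a one-sided two-sample z-test with known sigma; the particular numbers are illustrative, not the book’s.

```python
# Sketch: how power grows as sample size increases, assuming a one-sided
# two-sample z-test with known sigma. Illustrative numbers, not the book's.
from scipy.stats import norm

def power(delta, sigma, n, alpha=0.05):
    """Power of a one-sided two-sample z-test with n subjects per arm."""
    se = sigma * (2.0 / n) ** 0.5     # standard error of the difference in means
    z_crit = norm.ppf(1 - alpha)      # rejection threshold under the null
    return 1 - norm.cdf(z_crit - delta / se)

for n in (10, 30, 100, 300):
    print(n, round(power(delta=0.5, sigma=1.0, n=n), 3))
# As n grows, the sampling distributions narrow and separate, and power
# climbs toward 1: power arrives over time, one sample per unit of time.
```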

As an aside, this week, in a Twitter-linked blog post by a never-to-be-named product manager, I came across the weakest definition yet of our “all the rage” disruptive innovation: akin to a classroom disruption. So far has our vocabulary fallen. No. No. But, it is a buzzword after all. Louder with the buzz, please. “I can’t hear you.”

There was also a graph of centroids (clusters) that turned out to look like a factor analysis, in the sense of spans running from steep and long to ever flatter and shorter.

There was also a discussion of trees. A branching node in the middle of the tree was called an internal node. I typically divide a tree into its branch nodes and its leaf nodes. I didn’t read it closely, so the distinction is lost on me.

This book is not an easy elementary statistics book. I will buy it and take a slow read through it.

In CSS Secrets, there were a lot of things new to me. I did some CSS back in the day, so sprinting through this was interesting. Yes, you can do that now. What? Align text on any path, use embedded SVG. The real shocker was tied to Bezier curves and animation. Various cubic-Bezier curves showed how to “Ease In”; “Ease In and Out,” which looks like the S-curve of price-performance fame; “Ease Out”; and the familiar “Linear.” The names of the curves could be framed as talking about business results. There were more curves, but there are only a limited number of cubic-Bezier curves. Higher-order curves were not discussed. A cubic-Bezier curve has two end points and two control points. In the animation sense, the curve feeds values to the animated object. The cubic-Bezier curve is not capable of driving full-fledged character animation by itself, but it’s a beginning. We, the computer industry, are easing out of Moore’s law as we speak.
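Here is a minimal sketch of the curve behind those names. The control points for “Ease In and Out” are the standard CSS values (0.42, 0, 0.58, 1); sampling parametrically is a simplification, since a browser solves for t given x and then reads off y.

```python
# Sketch: the cubic Bezier behind CSS timing functions. Endpoints are fixed
# at (0,0) and (1,1); only the two control points vary per named curve.
def cubic_bezier(t, x1, y1, x2, y2):
    """Point on the curve at parameter t in [0, 1]; returns (x, y)."""
    mt = 1 - t
    x = 3 * mt * mt * t * x1 + 3 * mt * t * t * x2 + t ** 3
    y = 3 * mt * mt * t * y1 + 3 * mt * t * t * y2 + t ** 3
    return x, y

# "Ease In and Out" per the CSS spec: cubic-bezier(0.42, 0, 0.58, 1).
for i in range(6):
    t = i / 5
    x, y = cubic_bezier(t, 0.42, 0.0, 0.58, 1.0)
    print(f"t={t:.1f}  time={x:.2f}  progress={y:.2f}")
# Tracing progress against time gives the familiar S-curve: slow start,
# fast middle, slow finish.
```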

In A Beautiful Constraint, we are looking at a biz book, in the self-help sense. It describes the mindset, method, and motivation for overcoming constraints on one’s performance. We start out as victims. We have to overcome path dependence. We do that with propelling questions and what the authors call Can-If questions. With a Can-If question we are asking about the “how,” sort of the developer’s how, rather than the requirements elicitor’s what. Breaking the path dependency has us asking ourselves or our team about believing it’s possible, knowing where to start, and knowing how much we want to do it.

An interesting statement was that Moore’s law is actually a path dependence. Intel’s people didn’t let the law break. They always found a way to preserve the “law.” But, Moore’s law was really a sigmoid curve. It flattens at the top. Breaking the constraint now requires much more investment and delivers almost no return, so Intel’s people are easing out of it. They, like Microsoft, will have to find another discontinuous innovation to ride. The cloud is not such a thing. In fact, the cloud is old, and there won’t be a near monopolist in that category. It’s not the next discontinuous innovation. It is really the disappearance, the phobic and non-adopter phases, the phases at the convergence at the end of the category. The device space is the laggard phase, yes laggard, but it is still 10x bigger than the pre-merger late mainstreet. The normal of Moore’s technology adoption lifecycle is really a sum of a bunch of normals, which leaves us unable to see the reality of the category that the discontinuous innovation gave rise to. The end is near.

Anyway, that was tonight’s reading/browsing/carousing. Enjoy.

Geometric Progressions vs Constraints

July 6, 2015

You have a glass of water sitting on the table. You smash the glass. The once organized water now approaches maximum entropy. You throw down a paper towel that absorbs some of that water. That paper towel has organized some of that water. Entropy is lower after that paper towel does its job. That glass acts as a constraint on the behavior of the water the glass contains. That paper towel acts as a constraint on the behavior of the water, as well.

In electronics, this sort of thing is called a ground. If no ground is applied to a light bulb, it won’t light up. Electrons flow. They flow from source to ground, or from ground to source, depending on whom you ask. Yes, there are different conventions on this; asking won’t settle it. But, more to the point, the flow of electrons is organized by constraints.

Graph theory mathematicians talk about completely connected graphs. Adding a node to a large completely connected graph gets you a large number of new connections. Notice that this graph is not organized by constraints. In effect, it is ground. The mathematics under the hood is a geometric progression.

This morning I woke up to a challenge about a tweet on team size and communications. I didn’t really think about it long enough. A tweet or two touched on the core issues. But, I’ll go into my view here.

Organizations are organized. Organizations are constrained. So when your team size as a product manager increases by a new staffer, your communications/leadership network does not explode. Every team member doesn’t talk to every other team member. It might be easier if they did, but it doesn’t happen. Yes, you’ll need to talk to that staffer, but you already talk to his boss, and to others in the same role.

In one company, all the developers, testers, and documentation people had a weekly meeting. The only thing that was really communicated was the ship date. This meeting wasn’t Scrum. While we all attended, and we all reported our status, it was a culture and team thing. The dev team had their own meeting with their lead. The real communication was lead to lead. It wasn’t team to team. If I needed something from dev, I asked the PM to be in the middle. The communications between teams was not nice. But, there was no communications/leadership network explosion. We were organized. We were constrained. We had our subnets.
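A toy sketch of the contrast, with made-up team sizes; the lead-to-lead subnet structure is the assumption here.

```python
# Sketch: communication paths in a completely connected team of n people
# versus the same people organized into subnets that talk lead to lead.
def complete_edges(n):
    return n * (n - 1) // 2           # everyone talks to everyone else

def constrained_edges(team_sizes):
    # Each subnet fully connected internally; the leads form their own net.
    leads = len(team_sizes)
    return sum(complete_edges(s) for s in team_sizes) + complete_edges(leads)

print(complete_edges(12))             # 66 paths, unconstrained
print(constrained_edges([4, 4, 4]))   # 21 paths, organized into subnets
# Adding one staffer to a subnet adds a handful of arcs, not n - 1 of them.
```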

Companies are organized. This subnet. That subnet. The communications arcs are stable. Being a flatter organization gets rid of some of these constraints, so we explode a bit.

Companies are temporal. Communications isn’t constant. I know I used to work all night, because there were too many interruptions during the day. Talk to me during the day at any time. But, don’t talk to me at all at night. Those arcs are not persistent. We are not always talking to every edge, every node. We are not always conversing. Sometimes we broadcast. Like when we say what the ship date is.

Companies are cognitive. There is plenty of discussion about the 7±2 rule. But, consider that to be the mean. In different contexts the communicated content will be more or less. PowerPoint insists that we present only three things before we shift it all to long-term memory. DITA and modularized text have similar problems. It takes a lot of planning to divide up the content into nicely related chunks. UIs, tasks, and user stories face the cognitive limits of their audiences similarly. So when the company communicates to its staff and to its markets, it has to organize the content to the cognitive limits of its recipients.

Channels exist within and without. Those channels structure communications.

Consider that geometric progression to be ground, or maximum entropy. Organizing constraints abound. With the geometric progression we can pretend, like we do with Frequentist probabilities, that everything is random, but we should realize that the geometric progression doesn’t speak in the face of organization; likewise Frequentist probabilities. Seek the model that accounts for the organization, rather than ground.

Anyway, that’s my response after a few hours. That should be clearer than my immediate response to a tweet. Enjoy. Now back to my world, where the staff is self-managed. I could drop dead and the product would still ship on time.

Factor Analysis–What’s Important to your Product

June 29, 2015

Earlier in the week, John Cook tweeted something about Coxeter circles, so I clicked the link and was surprised by the following figure. The relationships between the diameters, or radii, of the circles are the same as what one would expect from a factor analysis. The first factor is the steepest and longest. The next is less steep and shorter than the first. Subsequently, each factor is less steep and shorter than the previous factor. The particular angles and lengths will differ, but a subsequent factor will always be less steep and shorter.

[Figure: Coxeter circles]

The circle labeled zero is your firm. The circle labeled one would be your category. If you are focused on managing your revenues, the monetization generating your revenues would determine your category. If you are focused on something other than revenues, then place yourself in a category relative to that. The circles labeled two or three, any number above one, would be macroeconomic considerations.

A factor analysis typically covers 80% of your variance with three factors. They would be labelled with negative numbers. The area of a given circle hints at how much variance that factor covers. The factors would, as circles, get smaller, or, in a line graph, get flatter and shorter. The statistical studies of your variance beyond those three factors get more expensive, so your budget constrains the number of factors your effort can be managed with. The budget is both monetary and driven by managerial focus. The independence of the variables and the complexity of the data fusions giving rise to each factor would impact managerial focus.
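A hedged sketch of that steep-to-flat shape, using PCA eigenvalues on synthetic data as a stand-in for a full factor analysis; how much three factors cover depends entirely on your data.

```python
# Sketch: the "steep and long to flatter and shorter" shape of the factors,
# approximated by PCA on synthetic data with three hidden drivers.
import numpy as np

rng = np.random.default_rng(7)
latent = rng.normal(size=(500, 3))        # three hidden drivers
loadings = rng.normal(size=(3, 10))       # ten observed variables
data = latent @ loadings + 0.5 * rng.normal(size=(500, 10))

cov = np.cov(data, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = eigvals / eigvals.sum()
print(np.round(explained, 2))             # steep first factor, then flatter
print(round(explained[:3].sum(), 2))      # share of variance in three factors
```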

The Coxeter circles here represent two levels of macroeconomic factors, your category, your firm, and your product. For wider product portfolios there would be more circles with negative numbers. Imagining this in three dimensions, as collections of spheres, would demonstrate some interesting relationships.

In a firm that stretches across the technology adoption lifecycle (TALC), the factors would migrate as in an animation, live and die as Ito memories, and oscillate between carrier and carried considerations. In such a firm, the population considerations could be a parallel factor analysis anchored around each population’s relevant product. Economies of scale do not allow expression of the TALC.

Factor analyses need not be firm-centric. The economic return on a given set of factors places a given firm in a given value chain. In a value chain, the larger, aka steeper and longer, factors may be outside of your managerial focus. A small factor for your customer would be a very large factor for your supplier. The key reason to outsource is to preserve managerial focus. When you tell your supplier how to do business, you are not preserving managerial focus. I realize a product manager wouldn’t do this, but when it happens, it enters into your matrixed product organization.

[Figure: Factor Analysis of a Value Chain]

Ad serving might be your only monetization, so you need to get and keep eyeballs, and deal with the standardized ad-serving infrastructure. Your factor analysis would have holes in it, discontinuities. Fast followers would have similar factors, whole product factors, and supplier factors.

In the figure, two whole products are shown: one for web, and another for mobile. One fast follower is shown. A fast follower may compete with you on a single factor. All ad serving monetized businesses might use this supplier.

The arrowheads indicate convergences defining the world size of a given value chain. That is similar to convergences in probability distributions. A factor analysis looks like a power law distribution or a long tail.

Where you have discontinuities in your value chain, you will have to establish well-defined interfaces, as well as decide how soon you want to follow changes to the definitions of those interfaces.

Ito Processes in the Technology Adoption Lifecycle

June 20, 2015

A Markov process has no (zero) memory. An Ito process has a finite memory. A Markov process is an Ito process with a memory size of n=0. All of that is, for our purposes, talking about history, or more specifically, relevant memory.
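A toy sketch of that framing, in this post’s vocabulary rather than the textbook stochastic-calculus definition; the recurrence is invented purely for illustration.

```python
# Toy sketch: the next step depends on the last n steps. n = 0 is the
# memoryless (Markov) case; n > 0 is the finite-memory (Ito, in this
# post's vocabulary) case. The recurrence itself is made up.
import random

def step(history, n):
    """Next value: noise plus the mean of the last n values, if any."""
    relevant = history[-n:] if n > 0 else []
    carry = sum(relevant) / len(relevant) if relevant else 0.0
    return carry + random.gauss(0, 1)

def simulate(n, steps=10, seed=42):
    random.seed(seed)
    history = [0.0]
    for _ in range(steps):
        history.append(step(history, n))
    return history

print([round(x, 2) for x in simulate(n=0)])  # each step forgets everything
print([round(x, 2) for x in simulate(n=5)])  # each step carries five steps
```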

In our ordinary conversations about memory or learning in a firm, the memory is infinite. It is not an Ito process, so it can’t be a Markov process. We talk about brand and design as if they will always be relevant, and have always been so. We talk about a whole host of things this way. But, it is the technology adoption lifecycle that makes everything finite. We try very hard to make the late mainstreet market infinite. Sloan’s invention of management leads us to the infinite firm and the management practices that make the infinite firm. Blue oceans lead us to find another structure for a category after we can’t get any more infinity from our management practices. These notions of infinity invite us to cut costs until there are no more costs to cut. These notions of infinity kill our companies, and kill them fast and faster.

Innovation and management are entirely different. Sloan didn’t innovate, except in his creation of the product he called management. He did not innovate cars. He grew his company through M&As. He consolidated his category. Such consolidations are an indicator that the market leaders have been chosen. Those market leaders get a monopoly or near-monopoly position. Everyone else is stuck in promo-spend territory fighting over the scraps. Everyone else is stuck competing on brand and design, because they have no market power and no differentiation. This is the late mainstreet phase of the technology adoption lifecycle (TALC) out to the laggard (devices) phase. The later you are in the TALC, the more you have to spend on brand and design, and the more you have to manage your costs and processes.

When we talk about the early mainstreet, IT-horizontal, geek-facing internet of the 90’s as if it didn’t have design, we ignore the lesson of the TALC: fit the population you serve. Design is not a characteristic of geek-facing products. Design is a characteristic of consumer-facing products. The geeks that tried to sell dog food, or any consumer product, back in the 90’s, in the early mainstreet market, failed. Those same geeks giving something away for free, something technical, something infrastructural, something non-consumer, succeeded. We came into the late mainstreet market knowing that free worked, that customers would not pay for anything, that paywalls were wrong, …. We came into the late mainstreet market having learned the wrong lessons. We are finally forgetting those lessons. We are finally learning that consumers pay for stuff.

Alas, we learn the wrong lesson still when we try to sell something to geeks in the late mainstreet market. No, they will not pay. We are learning the wrong lessons from our success with consumers.

The main problem in crossing the TALC is that the TALC structures our memories. We have finite memories and infinite memories. But, we only have one memory. In my prior discussion of software as media, and in my TALC slideshare, I

So back to this Ito process.

The birth of a category begins by finding the B2B early adopters. Yes, lean does not start there. Lean is late mainstreet. Lean is built on other people’s whole product. It starts well within a category’s life. The birth of a category is 90’s-era internet. That’s where today’s whole product came from. Twitter is probably the only such play we’ve had in Web 2.0. Even Google is a subsequent generation in an existing category, and a promo spender to boot. And, no, we hear about how B2B needs design these days; sorry, but that is late mainstreet as well. It’s consumer and laggard/phobic facing.

The category is born with a Poisson game, aka a Markov process. These vendors have nothing to leverage and face the process of building tech, product, market, and company all while facing the client. Unlike lean, they are stuck with the technology whose adoption they are fostering. Unlike lean, the best practice is to automate the client’s product visualization, not your own. Well, lean lets the users provide the product visualization instead. The point is that n=0, aka we have a Poisson process with no memory. But, we do have a nascent finite memory on our hands. Because we intend to repeat this process, we separate our phase processes from our customer-facing people. Usually, companies do not do this. For them, the end of their category leaves them without a memory of the discontinuous innovation processes, so they start over again with the disadvantage of the cost issues of trying to use their current late mainstreet processes to do what those processes cannot do, and economies of scale devoid of the needed customer base. Memory problems have costs, but accountants can’t tell you how much those problems cost. Memory problems kill innovation. Separation, Christensen’s real original concept, failed to gain traction against the cost accountants.

Christensen built his consulting firm with late mainstreet people who did not provide the early mainstreet effort needed to foster adoption of the separation concept.

So we start with a Markov process. With every capability we build in our consultative product implementation processes, we add to that memory. Call it n=20. Then, we start to build our vertical-market chasm-crossing processes, n=21 to n=60. But we partition these two capability collections. We keep our consultative processes going with a brand new discontinuous innovation when the time comes, when the bowling alley ends. Then, we focus on carrier and build our IT-horizontal-facing processes, n=61 to n=90. Within the IT-horizontal-facing organization, we build our tornado capabilities, n=91 to n=100. The tornado capabilities will be harder to retain, because they only work in the tornado and in the post-M&A tornado. It is hard to keep them loaded from an HR perspective. Likewise any IPO and further investor-relations capabilities, again memory in terms of processes and people. Through it all, our Markov process becomes Ito.

At some point we get to our six-sigma normal and all things Markov/Ito become Gaussian. Memory becomes infinite. We move from discovery to enforcement, different types of machine learning. Our geometry changes from hyperbolic to Euclidean, and subsequently, beyond six sigma, to spherical; Euclidean and spherical are safe for management.

Still, there are events that drive us back to earlier memories. Commodification of core technologies makes us go back to discontinuous innovation in the midst of our continuous innovation efforts. Mass customization forces us to focus deeply on the carried like we did with the B2B early adopters. There will also be processes that we use once and throw away. Before throwing them away, however, you need to think long and hard about reuse and load issues. If you need those people and processes, don’t throw them away; find a way to keep them loaded, rather than letting them dissipate in lateral moves.

Outsourcing is another of those late mainstreet methods for managing managerial focus that leads us to dispose of capabilities and learning, memory, that we may need again. Again, think hard. You can’t get these back after they are gone.

The devices phase leads us to gain a hardware capability beyond the software capabilities we already have. Hardware also drives new software capabilities. More memories, more people, more processes will all be required. The cloud, the phobic phase, similarly.

As in my post on incommensurate, the water balloons, or balloon poodles, model will help here. Where does the memory begin? How large does the girth of this memory get? How long does it last? Does it produce cash, or wealth, or loss? What balloons are inside other balloons? What balloons are outside the others? What are the interfaces? The coupling? The cohesion?

Know that you are managing your company’s memory. Learning is good, but it takes us away from our past even as it takes us to our future. Learning can prevent us from going back to the parts of our past that we will need again, unless we were built to flip, built to exit. Manage memory.

Comments?

Incommensurate

June 15, 2015

Back in 2009 or so, a reader of this blog asked me to define the term incommensurate. I’ll do that again here.

I’ll start with a graph from S. Arbesman’s The Half-Life of Facts. That graph was a surprise to me. It displayed the results of fifty or so experiments about temperature. Some of the experiments intersected with other experiments. Other experiments were parallel to the existing experiments. I’ve drawn a graph here showing the same kinds of things.

[Figure: Base]

The darker lines are the results of a regression of data contained by the light gray rectangle. Each rectangle represents a single experiment and its replications.

Where the lines intersect, we can call those results commensurate. They result from what Kuhn called normal science. The experiments were designed differently, but reflect a single theory. The measurements within a single experiment reflect a particular apparatus. Changing the apparatus would give you another experiment with potentially different results.

Where the lines don’t intersect, we can call those results incommensurate. I’ll point out the gaps in the next figure. These gaps reveal an inadequacy in the current theory.

This graph can show us all of the experiments at once. But, that covers up things that would be revealed better in an animation. We don’t know, from this graph, when a particular result showed up. If we attended to the temporal aspects of the underlying data, we’d be able to see other gaps. The experiments characterized the gaps across the ranges and domains of several experiments.

[Figure: Continuities]

In this figure I’ve highlighted the continuities, the intersections, with red squares. I’ve assumed that all of these intersections exist. The results of one experiment, in the top left, are shown in blue. I’ve assumed that this experiment was incommensurate and that the experiments that intersect with it did not exist at the time. The experiment that connected it to the chain of experiments to its right happened later.

The experiments shown with red lines are still incommensurate. They exhibit gaps with those experiments to their right. At the bottom right, three experiments exhibit continuity with each other, but exhibit a gap with both the other experiments above and to their right, and the other experiments to their left.

Normal science looks like a well-connected network. Extending the range and domain of the existing theory is the job of normal science. A single regression would result in a decreasing function. Where the details differ from that single regression, we have an opportunity for clear functional differentiation.

Each of those commensurate experiments enables continuous innovation that extends the life of a category after the discontinuous innovation gives birth to the category. The technology adoption lifecycle is driven by improvements in a technology’s price-performance curve, or S-curve. It is the price-performance curve that delivers on the promises made when the technology was sold and purchased. The demanded performance gets easier and easier to deliver as the ranges and domains of the underlying experiments expand.

[Figure: Discontinuities]

In the next figure, I’ve circled the discontinuities, the gaps, the incommensurate experiments. We won’t pursue experiments to bridge the gaps labeled G and H. We won’t try moving to G, because we can already read that temperature. We might want another way to take that measurement. We could develop a pass-fail thermometer where we are just interested in knowing if we have to make a bigger effort to get a numeric reading. Then, jumping that gap would make sense. The gap H just hasn’t been worked on yet.

[Figure: Discontinuities, color-coded]

Next, I went back and color-coded the labeled gaps. The black rectangles show the ranges and domains involved in bridging a given gap. Bridging a gap requires new theory. The gap at A is from the experiment represented by the blue line to the experiment on the right. The gap at E can bridge to any of three branches on the right. Any one branch will do. Continuous paths can get you to the other branches. Think step functions. The gap at F actually gaps with a collection of experiments to its right. The gap at B bridges two large subnets. Bridging this gap is critical. The gap at D can bridge to the left or the right. Either will do. Again, paths exist to get to and from the left and right sides.

[Figure: Other parameters]

In this figure, I’ve put lines at the bridged gaps indicating the use of a new parameter that enables us to bridge the gaps. These parameters are labeled p and q. Their use was described in a new theory. The dark purple lines demonstrate how a continuous path through the network resolves a branch in resolving the gap.

The gaps E and A were resolved via parameter p and the network flow. The three gaps at F were resolved by parameter p as well. The gap at B was resolved by the solution to the gap at F. The gap at G continues to be ignored. The gaps at C and D were resolved via the parameter q and network flows. The gap at H, again, is ignored.

In these experiments, basic research has shown us where our opportunities lie. It has delivered to us the incommensurate seeds of categories, and the commensurate lifeblood of new growth (dollars, not populations) to lift us slightly from the swamps of the margins of our nominal operations.

Another Explanation

The simplest explanation of what incommensurate means is that every theory is a water balloon. A theory can only hold so much of what it manages to contain. When you want more than a theory can deliver, when continuous improvements run out, you need a new trick to combine two water balloons. Have fun with that.

Where to Invest?

June 4, 2015

Where to invest was the question. My answer has always been: in a company doing discontinuous innovation. But, like most things, finding them is the hard part. Most of what we hear about these days is continuous innovation. What we don’t hear about is discontinuous innovation.

Since I’ve worked in startups since well before the web came along, my problem has always been finding startups. Truth is, I didn’t find them. They found me. But, living in a startup desert, I’m looking for ways to find them. For a job search, watch your sales tax permit applications. That’s not much help for an investor, and it’s probably way too early. I know from cold-calling SIC-coded companies that the SIC classification system is very wide. You’ll end up calling a lot of companies that don’t do anything remotely like selling software.

The investor alternative is to find VC funds and put your money in one of them. If you’re going with discontinuous innovation, finding that VC fund will be the issue. I don’t know if VC funds mix discontinuous and continuous innovators together in the same portfolio. I do know that the continuous investments are smaller and get less attention from the VCs. Discontinuous innovations take more time, more money, and more VC attention.

You’ll hear about the continuous innovators, and more than likely you won’t hear about the discontinuous innovators. Read the journals in the fields where you expect to invest. Read the SBIRs. Take note of the investigators’ names. Check their bibliographic information. When will one of their students bring the investigator’s technology to the market?

Anyway, just a few hints on where to find the discontinuous innovators. Investing in a company that creates a category and gets the near-monopolistic position is a good place to grow your money. The quick flip of the continuous innovators or the fast followers, not so much.

Remember that the technology adoption lifecycle is more than some ordered Markov process transitioning populations. The populations organize the companies serving them. Early phases grow. Late phases decline. We hide that decline in things like cost management and large numbers. Early phases create wealth. Late phases capture cash. Discontinuous innovations begin in the early phases and transition into the late phases. Continuous innovations begin in the late phases and live short lives.

From a Geometry Proof

June 2, 2015

I was out on Twitter several weeks ago, and Alexander Bogomolny tweeted another of his GeoGebra proofs, shown below.

[Figure: Raw]

The key issue was the similarity of the two blue lines in terms of their angle and length. But, I looked at the center pentagon and thought value chains, Shapley values, and just who was that dot in the center pentagon that manages the interactions with the other pentagons. How central was this person?

To examine the issue of that person’s centrality, I looked for the center of the center pentagon.

[Figure: Center]

From the figure it was clear that the person working where the value chain contributors meet was not some executive in the CXO crowd. Instead, the person was managing at a distance from the CXOs. Sales reps like the CXO sale, but this person, the user, is some distance from the buying decision. Yes, the buying team is constructed. This person, the user, might be on the buying team. But the vision, the value proposition, across these distances will differ. The grandeur of the CXO’s value proposition can be very distant from the user thinking about how they can cut and paste these numbers into that equation. So who is the customer, and who is this user or that one, is critical. Those buying personas all have personas in your software.

Are you calling each of them? Do you satisfy all of them? Are you teaching each of them? Are you marketing to each of them? Unlike the proof about those blue lines, those buying personas are not similar. They are in conflict. This is true even if the CXO has everyone aligned.

What about all those who don’t use your software, but are aligned with the larger, smaller, or intermediary value propositions? How long will they be involved? When will their involvement begin and end? What will your process orchestration look like?

That person at the intersection of that value chain is just one Poisson distribution under the corporate normal. The vector under that Poisson points where?

Comments?

Geometry

May 23, 2015

I was looking for the parameters of an ellipse earlier in the week. I ended up on Wikipedia looking at the definition of eccentricity. The parameter of interest is eccentricity. Right away, eccentricity breaks down into four cases: circle (e=0), ellipse (0<e<1), parabola (e=1), and hyperbola (e>1). Notice that this aligns itself with the geometry of the space itself. Relative to the sum of the angles in a triangle, we have three cases: hyperbolic (<180), Euclidean (=180), and spherical (>180). Notice also that this aligns itself with the definition of probabilities, as 0 ≤ p ≤ 1. And, footprints of distributions tie into eccentricity: normal as a circle, and Poisson as an ellipse. The distributions also tie into machine learning: Poisson giving us rule discovery, and Gaussian (normal) giving us rule enforcement. Then, there are Ito processes: n = 0 giving us the Markov chain, n > 0 giving us an Ito process. The Markov chain is a special case of the Ito process. The holes in these associations are probably due to my not having been exposed to that math yet. Everything in math is tied to everything else in math.
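The case analysis reduces to two small functions; a sketch:

```python
# Sketch: the four eccentricity cases and the three angle-sum cases.
def conic(e):
    """Classify a conic section by eccentricity e >= 0."""
    if e == 0:
        return "circle"
    if e < 1:
        return "ellipse"
    if e == 1:
        return "parabola"
    return "hyperbola"

def geometry(angle_sum):
    """Classify a space by the angle sum of a triangle, in degrees."""
    if angle_sum < 180:
        return "hyperbolic"
    if angle_sum == 180:
        return "Euclidean"
    return "spherical"

print([conic(e) for e in (0, 0.5, 1, 2)])
print([geometry(a) for a in (170, 180, 190)])
```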

I don’t have a correlation between the parabola and anything else. I’ll have to think about this single case.

The failure of a given innovation gets excused by faulting innovation itself. But management as an idea was extended to innovation. Management as an idea was exclusive of innovation when Sloan created management. Nobody says management failed when an innovation fails. Christensen makes the case that managers excelling at management failed when their companies were disrupted. Ultimately, what this boils down to is place, under a distribution, in a specific geometry. I will finish this post talking about place, but I need to get back to eccentricity and geometry first.

In the Wikipedia post on eccentricity, there was an animation linking circles with ellipses, parabolas, and hyperbolas. Watch it several times, because I’m going to ask you to imagine the animation happening in a different order.

[Figure: Ellipse and hyperbola]

The animation begins with the circle. A blue dot represents the center of that circle. That dot goes on to represent the foci of the ellipse, the parabola, and the hyperbola. You can watch the dot move in each frame of the animation.

So now we can think about it in terms of the technology adoption lifecycle (TALC), or the processes organized by the lifecycle. We’ll start simply here. It will get messy as we go deeper. Start with a Poisson game. That’s when we are looking for those B2B early adopters in the TALC. That’s the second phase, the one adjacent to the technical enthusiasts.

A series of Poisson distributions generates a single Poisson distribution whose footprint is an ellipse. The major axis of the ellipse shows us a Markov process as the major axis grows. The major axis is a vector. We start with this Poisson distribution because we are using a game-theoretic game to represent a game of unknown population, a Poisson game. You can play these games as Gaussian games, but my intuition is to go with discovery learning. Keep in mind that I’m talking about a discontinuous innovation here. Continuous innovations happen elsewhere in the TALC.

Now, this Poisson distribution starts off as a single infinite histogram, aka a point, in other words a tiny circle. Markov chains are composed of Poisson distributions of arcs, whose pre-choice probabilities are taken from normal distributions of the nodes, small distributions. The Poisson would be external, while the normal would be internal.

We are representing the company and its customer base, as opposed to its prospect base, as a Poisson distribution. Over time, that Poisson distribution tends to the normal. The ellipse gets longer and wider. The ellipse fits inside a rectangle that eventually becomes a square, at which point the ellipse becomes a circle. The eccentricity changes from something between zero and one to zero. I’ve seen this in the financial results of companies selling products to foster the adoption of discontinuous innovation. I trust this to be reliable.
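A quick sketch of that tendency in one dimension: compare the Poisson mass at its mode with the matching normal density as the rate grows.

```python
# Sketch: a Poisson distribution tending to the normal as its rate grows,
# the one-dimensional version of the ellipse rounding out into a circle.
from scipy.stats import norm, poisson

for lam in (2, 10, 50):
    p = poisson.pmf(lam, lam)                      # Poisson mass at the mode
    g = norm.pdf(lam, loc=lam, scale=lam ** 0.5)   # matching normal density
    print(f"lambda={lam:3d}  poisson={p:.4f}  normal={g:.4f}")
# The columns converge as lambda grows: the skewed Poisson rounds out.
```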

The circle represents the vertical. The bowling alley is a collection of approaches to different verticals. The Poisson distributions of those approaches point to their respective verticals, aka they walk to their verticals. Arriving at the chasm is the event that correlates with the onset of the normal. The onset of the normal is also the onset of Euclidean space.

The circle goes on to represent the horizontal market. Consider it to be six sigma wide at the post-tornado. Once it is larger than six sigma, the geometry is spherical. The standard b-school case analysis becomes very reliable in spherical space. But, my focus is on why that same analysis fails us prior to the chasm. I hypothesize that the space prior to the Euclidean is hyperbolic. We’ll go back to the animation again, but this time I’ll capture the frames.

[Figure: Research front]

The animation ends with the hyperbola. Businesses don’t end with the hyperbola. They end in a spherical geometry, usually with a black swan that makes their distribution contract. A category begins with a gap. Consider the space looking outward to the foci to be the gap.

I was going to show that the research front changed, and call that period the research effort. But, the animation didn’t support that. The directrices moved instead. They do approach each other, but never converge. The ratio of the distance from the curve to a focus and the distance to the nearest directrix is the eccentricity, which for the hyperbola is larger than one.

I’m going with the hyperbola, as it is unfamiliar and weird enough to lead to things like taxicab geometry, where you can’t go straight there, instead having to stay on the grid. In the other geometries, you can go straight there. I imagine linear algebra can make the hyperbolic linear, but I haven’t gotten to that math yet.

The time research takes would happen on a z-axis. The search that is research would happen on the surface of the research front. Notice I didn’t use the term R&D. Research gets us our technology and our S-curve. Products foster adoption of the technology. Technology is adopted. Products are sold.

[Figure: Poisson game]

Once the directrices have converged to their minimum separation, the weak signal is emitted and the Poisson games begin. I had to draw the figure myself, because the ellipse in the animation was too large, since that animation’s ellipse starts from a circle. The hyperbola in the figure is there to show the system before the directrices converged. The big bang here is the signed contract with the B2B early adopter. We grow from nothing starting here.

As an aside, Levy flights happen at the find-your-underlying-technology phase, aka before the technical enthusiast phase of the TALC.

Now, we’ll go back to the notion of place. In the animation, the blue dots that represent the origin and the foci move across the geometries. In the TALC, a normal of normals, discontinuous technologies undergo adoption from left to right, starting at the far left. All other types of innovation start, in the random-access sense, somewhere to the right, aka in a different place. Starting at the left means being a monopolist or exiting the category. Starting to the right means competing on promo-spend dollars against fast followers and other look-alikes. Those are different places. Samsung will never be Apple, even if they hire Steve Jobs. Different places. Different times. Different pathways.

I’ll talk about place in a later post. Tweets about design and brand drive me nuts. They are phase-specific, place-specific.

Comments?

Normal Approximating Whatever

May 13, 2015

I finally got back to a math book, Modeling the Dynamics of Life by Frederick R. Adler. I’ve had it on hold for a long while. I’ve been at it for over a year. And, I still haven’t done the homework. The homework actually teaches beyond the text in a lot of math books, so I’ll be at it for a long time to come, even though I’m starting the final chapter. It’s an applied textbook, so the author gets his point across without turning you into a mathematician, or at least tries to. The mathematician thing will happen if you pay attention, but who does that?

In the previous chapter, the book talks about approximating a Poisson distribution with a normal. That’s a very small normal, since it fits inside the Poisson distribution it’s trying to approximate. It does the same sort of thing for the binomial. And, again, for the exponential. I drew the series of distributions for this latter exercise. It takes a lot of distributions added together to get that normal, something like 30 distributions. The thing that can get lost is the shape of the world holding the distribution.

In approximating the normal from an exponential, the exponential, aka the long tail, looked longer than it was tall. But adding two distributions brought us to a gamma distribution that was a little longer. Adding five distributions got us something that looked normal, but was wider still, and its pdf was taller than the normal. Adding ten distributions, wider again and less tall. Adding 30, wider, practically on top of each other, and shorter. If we kept on adding, it would get shorter and wider, aka it would get tiny, but the approximation and the actual would be close enough that we’d be collecting data and graphing things for entertainment.

This graph will be too small. But take a look.

[Figure: Sum of Distributions Tending to Normal]
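Here’s a hedged sketch redoing that exercise by simulation; the skewness numbers are mine, not the book’s.

```python
# Sketch: sums of n exponential draws are gamma distributed and drift
# toward the normal as n grows; skewness 0 is the normal's signature.
import numpy as np

rng = np.random.default_rng(0)
for n in (1, 2, 5, 10, 30):
    sums = rng.exponential(scale=1.0, size=(100_000, n)).sum(axis=1)
    z = (sums - sums.mean()) / sums.std()
    skew = (z ** 3).mean()
    print(f"n={n:2d}  skewness={skew:.3f}")
# The exponential's skewness is 2; a gamma(n) sum has skewness 2/sqrt(n),
# so by n = 30 the sum is close enough to normal for most purposes.
```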

At some point further calculation becomes pointless. Factor analysis shares this property. Does another factor tell you something actionable? Does more accuracy do the same?

Another thing that got talked about was the standard normal. You get to the standard normal from the normal via z-scores. You want all your distributions to have a normal approximation, since your tools for approximating probabilities are based on the standard normal and its z-scores. To do hypothesis testing, you need a normal.
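The conversion itself is one line; a quick sketch with made-up numbers.

```python
# Sketch: the z-score takes any normal to the standard normal.
from scipy.stats import norm

mu, sigma, x = 100.0, 15.0, 130.0
z = (x - mu) / sigma                       # distance from the mean in sigmas
print(z)                                   # 2.0
print(norm.cdf(x, loc=mu, scale=sigma))    # probability from the raw normal
print(norm.cdf(z))                         # same probability from N(0, 1)
```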

You can find the formulas for those distributions. They tend to look messy. Try integrating them. Getting to a standard normal is easier. Another author, in another book that I can’t cite, said that while the numbers convert via those formulas, the logic does not follow the flow of the calculations. Hypothesis testing in non-normal distributions is an active area of research. As an example of calculation and logic not being consistent, we have machine learning: Markovian approaches discover, while Gaussian approaches enforce. That’s not really a matter of application. One approach is ontological, while the other is taxonomic.

Notice that all these approximations and convergences require a lot of data and a lot of distributions. We are using big data to estimate small data.

Enjoy! Comments?

More on the Gap

May 10, 2015

After posting “The Gap,” I kept going. I put the technology adoption lifecycle across the terrain. An idea gains adoption via some apostles in an invisible college, which gets the idea published in a peer-reviewed journal. But, that’s long before the idea shows up in a corporation pushing it out into some productization. That corporation wrestles with the idea. Someone has to convince someone. The idea has to gain adoption internally within the corporation. That corporation is staffed with people drawn from the larger world. The pragmatism scale organizing external adoption is also organizing the internal market. Someone will be the technical enthusiast. Someone will be the early adopter. Not everyone in the corporation has to adopt the idea. Once the corporation starts selling the idea, there will be some internal laggards, some phobics, some non-adopters.  But, before the corporation starts selling, it will have adopted the idea.

Before the corporation sells much, it is faced with external adoption. The forces of external adoption will be with the corporation until it abandons the idea’s category.

[Figure: 01 09a A Point in a World]

Internally, we have an ontology, a hierarchical definition of the idea, a definition delineating how it is different from and how it is similar to other ideas. Patent applications are like that, differences and similarities. But patents are really about realizations. Ontologies organize ideas.

Taxonomies organize realities. External adoption uses different species of implementation in different product spaces. The realizations in external adoption get organized around differences and similarities with other products. The idea becomes implicit in the taxonomy.

Since external adoption sequences markets and contexts, it also sequences whether the focus is on the vertical or the horizontal, on the carried or the carrier. External adoption is itself a media that orchestrates the media of software.

Ontologies and taxonomies organize their search spaces. Ontologies are generative. Ontologies diverge. Taxonomies are enforcing. Taxonomies converge. At each taxonomic decision, I am becoming more known. At each ontological decision, I become less known. Ontologies face into the unknown, the more to be known. Taxonomies face into the known.

Ontologies are convex; taxonomies, concave. The book “Antifragile” tells us that concave is safe, while convex is unsafe. Sloan, who ran GM for decades, invented management. He was all about the concave. Sloan was not an innovator. GM bought the innovations it needed. Taxonomy is management. Ontologies are innovation. Innovation is exclusive of management. I’ve gone so far as to say that management inserts risk into innovation.

[Figure: 01 09b A Point in a World]

The ontological spreads out across the search space. To realize an idea, we trim the tree that is the search space. We trim it enough to converge to a solution. That may be a point, or a line, or a shape. The figure is a little off. The solution, the thick dark blue line, occurs before the external technology adoption lifecycle. It should occur inside the lifecycle.

[Figure: 01 09c A Point in a World]

One last thing to do was to count the bits involved in crossing the gap. The idea uses 3 bits to document its search space. The realization, likewise, uses 3 bits. Those would be explicit bits. When differentiators become commoditized, their bits become implicit. The number of bits involved will change as the idea moves through the technology adoption lifecycle.

[Figure: 01 10 A Point in a World]
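One way to read those bit counts, as a sketch: treat the search space as a depth-3 binary tree and count the choices it takes to reach a leaf.

```python
# Sketch: 3 explicit bits pick one leaf out of a depth-3 binary tree.
import math

leaves = 2 ** 3
print(math.log2(leaves))   # 3.0 bits for the idea's search space
# A commoditized differentiator is a choice everyone makes the same way;
# its bit no longer carries information, so it has become implicit.
```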

Enjoy. Comments?

