Discontinuous Innovation

July 15, 2013

I’m reading a book on math for biology majors. The first chapter, on discrete time dynamical systems, was great. It tied back to the phase graphs in the book on chaos I read a few years ago. The next chapter is about derivatives. I was going to skip it, but I’m glad I didn’t. It takes a completely different approach. It’s not trying to make you into a mathematician. So I get to the part about continuity. Boring, except it wasn’t.

Instead, I found myself looking at a graph that was simply shocking. I must have seen this before, but no, in my math books discontinuities were open points omitted from the domain or range. Not this time.

In the earlier chapter, we went looking for equilibria, and in a certain situation there are none. That situation: a discontinuity between two parallel intervals.

So this time, we have two intervals with a vertical gap, a discontinuity between them. Of course, that wasn’t the shock. Instead, it was putting this in the context of trying to explain discontinuous innovation. First, the graphic. Then, the build to what it demonstrates.

Discontinuity IV

The function we graphed was a step function:
f(x)=2Vt if x≤20 and f(x)=3Vt+20 if x>20.
The major point here is that they don’t intersect.
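A quick sketch makes the jump concrete. This reads the 2Vt and 3Vt terms as linear in a single input with a hypothetical constant V = 1, which is my simplification for illustration, not something the post pins down:

```python
# The two branches as written, with the constant V taken as 1
# (an assumption for illustration; the original writes the terms as "Vt").
def f(x, v=1.0):
    if x <= 20:
        return 2 * v * x          # lower branch
    return 3 * v * x + 20         # upper branch

# Approach the breakpoint from both sides.
left = f(20)                      # 2 * 20 = 40
right = f(20 + 1e-9)              # just past the breakpoint, ~ 3 * 20 + 20 = 80
print(f"left value  = {left}")
print(f"right value = {right:.6f}")
print(f"jump at x = 20: {right - left:.6f}")
```

The jump at the breakpoint is the gray gap in the figure: no input lands a value inside it, so the branches never intersect.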

Next, we throw the marketing at it.

Discontinuity I

From a marketing perspective, a discontinuous innovation is about a new, formerly unserved population: a population that wasn’t interested in your offers before this one came along, a population you weren’t interested in, and populations unknown to each other, neither serving as a reference base for the other. Like the demographers and ethnographers trying to converge into a new discipline that I mentioned in another post. Still calling each other names. The technology under the hood isn’t similar to that of the existing population’s tech. The technology might not even be as good, yet. But this discontinuity is wonderful, because it lets you create a new category and be the next near-monopoly exemplar corp in the biz press, a decade from now. Yeah, it’s not a next-quarter thing.

But, back to the graph. The thick brown lines represent step functions that have been associated with their populations. I color coded the areas under those functions with aqua and purple. And, I show the vertical gap, the discontinuity, in gray. Then, thinking about alleles, I differentiate the functions with a single bit, summarizing all the bits it takes to make those two function lines happen in a product.

The gray area represents a curriculum problem, a content problem, the absence of an old-new contract. When Relativity came along, its adopters were the new population. Those adopters had to make a knowledge leap and believe in the stuff, but doing so did help them, so they did it. There was no road from Newtonian mechanics to Relativity. To move the prior population was to teach them, and retire those that wouldn’t learn. This stuff happens with our technologies as well. Take object-oriented programming (OOP). Initially, OOP was radical. So radical that my CS profs wouldn’t go there until later, not with us undergrads anyway. But, it finally fell to MS to adopt OOP in their API. When they did, they did it in a continuous manner, and OOP stopped being radical. OOP wasn’t the same either, so today you still hear object thinkers trying to recapture the promised upsides of radical OOP. Oracle helped norm OOP as well by killing off the object-oriented database management category. Yes, to persist is a verb, or something that programmers still have to mess with, because OOP doesn’t do what was promised.

Oddly enough, back seven years ago, I was reading Seeing What’s Next, one of Christensen’s books in the dilemma series. I posted a blog talking about how discontinuous was lexical, a decision about an approach. Christensen had a graph of S-curves. I redrew it. I put the old S-curve in the background, and the new S-curve in the foreground. The middle ground was the lexical space. The middle ground was the discontinuity. Eliminating that middle ground collapses the radical, the discontinuous, into the continuous. Eliminating the middle ground changes the economic outcomes, because without it, you don’t need new value chains, and eliminating it changes the geography, so it is Euclidean or spherical depending on the size of the company pushing the underlying technology. Eliminating it also takes the tornado-allocated market leadership with it. Nah, without that middle ground, all you get is another market allocation in an existing category, aka a very small allocation of minuscule marketshare.

The discontinuity on the graph is the same as the middle ground in my long-ago illustration. That discontinuity is gray. There are no bits here. This is the unknown. But, here is the thing: we actually decided not to extend the graph of the interval on the left to where it would intersect the interval on the right. We decided to keep the middle ground, and to keep the populations mutually exclusive. We decided to separate. Unfortunately, the business orthodoxy doesn’t let us separate. They’ll tell us that it costs too much. Then, the innovation fails to achieve its business objectives, and it was the innovation’s fault. Sorry management, but no. Christensen has not won the war on the separation concept, so we will all lose until we get this right. Separation is necessary. The point of separation is to create wealth, to create those value chains, not to capture cash, or pretend to be a bank like all those sigma 30 to 40 public companies out there, companies with no margins and an absolute need for cheap labor. But, the orthodoxy will wear you down. It was Moore that used to tell us that discontinuous innovation is about creating wealth. The Chasm Companion was about this wealth creation via the value chain concept. It was also Moore that disavowed separation as being too expensive in his last book, a book where he turned his technology adoption lifecycle inside out for the sake of the orthodoxy he’s been working for since the Web 1.0 dot bust. Who can blame him? Nobody does real technological innovation anymore. We are replicants now.

But, there it is in gray, separation.

So if the discontinuity is a choice, what of continuity?

Discontinuity II

So here we are with our situation no longer discontinuous, no longer radical, no longer about creating wealth. Loads of cash, sure. And, how did we do this? We decided. We decided to let the function on top keep going until it intersected with the other function. We changed
f(x)=3Vt+20 to f(x)=3Vt+10.

I’ll have to check those functions and the conditionals, but that’s what I remember right now.
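In that spirit of checking, and again taking V = 1 as a hypothetical simplification, plugging both offsets into the breakpoint shows how the edit moves the jump. With these remembered coefficients the edit only shrinks the gap; the offset that makes the branches actually meet depends on the constants being remembered right:

```python
def gap_at_breakpoint(offset, v=1.0, breakpoint_x=20):
    """Size of the jump between the two branches at the breakpoint."""
    left = 2 * v * breakpoint_x            # lower branch value at x = 20
    right = 3 * v * breakpoint_x + offset  # upper branch value just past x = 20
    return right - left

print(gap_at_breakpoint(20))   # original offset: jump of 40
print(gap_at_breakpoint(10))   # edited offset:   jump of 30, smaller but not zero
print(gap_at_breakpoint(-20))  # the offset that would make the branches meet here
```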

Comments?

Geometry

June 21, 2013

I’ve been thinking about geometry a lot these days. What does the sparseness of a hyperbolic geometry feel like? Does hyperbolic geometry encompass Moore’s bowling alley? Do hyperbolic geometry and Poisson games encompass the core management issues? Lots of questions. It just compels me to learn more math, but much of that math hides the geometries, or explains everything from the comfort of Euclidean geometry. The linear assumption of management includes the Euclidean assumption. We bumped into this in my last post, Depth of Value.

So, I’ve sketched up a quick graphical comparison of the geometries. I use the geometries: hyperbolic (H), Euclidean (E), and spherical (S) to show what a triangle looks like, the triangles of the Triangle Model. These geometries are blunt instruments.

Geometries

They didn’t teach us this stuff back in school. They do teach it to high school students these days. We’re on the cusp of many new understandings. Oh, don’t blame our teachers. Mathematics teaching lags mathematics by about 50 years. Some of the mathematicians that produced the ideas we are just now hearing about are still walking the halls of academia, or died in our lifetimes. I am finding math textbooks at Half Price Books that have moved the ball. Yes, your kids know what a Markov distribution is and what to do with one. Great!

I’ve correlated the distributions we use with the geometries. A discontinuous technology starts out as a Poisson distribution. It’s hyperbolic out in the bowling alley. The lanes are straight, like Einstein’s light, and all that ensuing weirdness. That discontinuous technology then crosses the chasm and moves into the normal distribution (6 sigma) of the vertical, a smaller normal in terms of standard deviations, sigmas, than the normal of the eventual IT horizontal. These normals live in Euclidean space. Eventually, that discontinuous technology company is M&Aed into the huge public companies with the vast sigmas (30 sigma), the vast normal. The total probability under that vast normal is still one, so the height falls, the margins thin, and you need a scraper to get them off the floor. The vastness still reflects the decisions constituting a decision tree, a triangle, but it bulges out of the confines of the Euclidean plane. Real options, strategic choices abound in the spherical, but not so in the hyperbolic.
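Two of those claims can be sketched numerically, assuming the standard normal approximation to the Poisson and reading “sigma” loosely as the standard deviation of the category’s normal: a Poisson with a large mean already looks normal, and since total probability stays at one, a wider normal must have a lower peak, the thinner margins of the metaphor.

```python
import math

def poisson_pmf(k, lam):
    # exp/lgamma form avoids overflow for large lambda
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Poisson(100) vs its normal approximation N(100, sqrt(100)), at the mean.
lam = 100
print(poisson_pmf(lam, lam))                  # ~0.0398
print(normal_pdf(lam, lam, math.sqrt(lam)))   # ~0.0399

# Total probability is fixed at 1, so a wider normal must be shorter.
print(normal_pdf(0, 0, 6))    # peak of the narrower normal, ~0.0665
print(normal_pdf(0, 0, 30))   # peak of the vast normal, ~0.0133: height falls
```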

Notice that the figure doesn’t include all eight lanes in our bowling alley. Three were enough for our purposes. There is much more to this Poisson tending to the normal and its visualization across an eight-lane bowling alley and time. And, more again when you start to account for the layered structure of a medium.

Somehow, we built a business orthodoxy based on the likes of Sloan’s GM. We teach that orthodoxy. We use linearity to disguise the spherical geometry under the hood. The gaps don’t bother us much. It looks like a nice generic set of tools, so we preach them as universals. We teach it to everyone. Then, we wonder why we can’t innovate. We blame the innovation itself, because we never blame ourselves, and never question the generalist, generic nature of our orthodoxies.

I defend innovation, because it builds the businesses the orthodoxy milks, the cash cows. It builds wealth, wealth as something other than piles of cash, wealth that requires collaboration beyond the firm, beyond the cash flows of our own organizations and value chain. It’s how we make a world different from what we’ve known.

Continuous innovation doesn’t do the hyperbolic geometry. But, discontinuous innovation will happen there, because discontinuous innovation is just part of a product being used to foster adoption of that technology. The transition from Euclidean to spherical still happens with continuous innovation, so even continuous innovators can find gains in an awareness of their geometries.

Mind your geometries.

Comments?

Depth of Value

June 12, 2013

These days I spend part of my day at a university library, one that buys new books, a rare thing these days. The state university in town buys journals and skimps on books, so the library shelves are full of aging books. The new books in this library are amazing.

Yesterday, I perused Soil Ecology and Ecosystem Services. Before reading Nowak’s Super Cooperators, I would have passed this one by. But, another thing caught my eye: the notion of soil providing services.

I’ve put product ecosystems on my roadmaps. That’s not new. I’ve worked in TQM/ISO shops for years enough to know that every entity has customers, stakeholders, suppliers, and services. And, the wildest Web 2.0+ evangelicals spout services, services, services as their mantra. Soil just serves. That hadn’t occurred to me until I crossed paths with this book.

Beyond services, it was a graphic that caught my eye. It fits alongside the Triangle model. It describes populations. It correlates populations and features with value, value not just at the interface, but at depth, in the away sense. Value at depth has been with me for a long while now, but finally, here is a way to get more specificity into its description.

I’ll start with a paraphrase of the original graph.

Soil serving populations with specific services.

In this graph, the x-axis is the interface between the sky and the ground. You’re looking at the dirt between your feet. Soil services extend from there into the depth of dirt under your feet. The view of the grass and ground that you get is your view in the model-view-controller sense. The red lines represent the populations being served by the soil. The blue is the amount of service provided, the use frequencies that I’ve illustrated in the long-tails discussions. The variable of pore volume will tie into diffusion, aka the diffusion of innovation.

Now, I’ll show the graph’s relationship with the Triangle model, and relabel the graph so it’s more in line with software development.

002

I’ve aligned the Triangle model with the value-depth graph. The interface resulting from all of your development decisions appears in blue at the bottom of the decision tree, although upside down, and at the x-axis, the former sky-ground interface. Soil services are now a collection of minimal-marketable functionality. Populations are still populations. A little color reveals that we have an over-served population.

Both graphs have log-log axes. Beyond straightening curves, beyond those algebraic transformations implying changes in geometry, log is how you encode a base or modulo arithmetic in a graph, aka positional notation. Cognitive limits impose a base on the underlying data. Humans have cognitive limits. Brains impose cognitive limits. Media impose cognitive limits. Our applications serve one population well, and other populations not so well based on the population’s cognitive limit parameter. We probably pay no attention to this, but the cognitive limit is there, and it is very mathematical. It reaches beyond our interfaces (views) to our models, to our user support content and to our marcom.

The notion of the cognitive limit has become controversial, because the original research is now seen as flawed, but attending to this matter will pay off. When you hear advice like never having more than three bullets on a PowerPoint slide, what is really being said is that PowerPoint as a medium is imposing a cognitive limit of three. The rule as it is usually stated is 7 plus or minus 2. So PowerPoint clips the mental capacity at a perceptual level long before it gets us worried about our short-term memory limits and paging to long-term memory. I won’t say the limit is 9. Software is supposed to be a cognitive tool, a tool to think with, but that’s what it can be and usually isn’t. In a tweet on presenting requirements, I suggested putting it all into a PowerPoint presentation, precisely because its cognitive limit would limit the number of requirements, use cases, or user stories that we expect to deliver in the next iteration. The limit forces us to organize the content and the reveal, or rhetorical encounter. The limit forces us to structure the experience. We don’t have to make them think, unless of course, we are helping them think. So know your population’s cognitive limit, and if you serve several different populations with several different cognitive limits, realize that not everything will be used by every user. Don’t choke the weakest user. They still make the upgrade decision, or in a Web 2.0 world, the subscription renewal decision. No, the economic buyer does not make those decisions. Don’t call him for those.

So before we run off, let’s set a cognitive limit of 7 on a big project delivering twelve things. We’ll also set a cognitive limit of 3 on it.
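One way to read that setup (my interpretation, not a formula from the post): treat the cognitive limit as the base of a positional encoding, so the depth of the dive is the number of digits, or chunking levels, needed to cover twelve things at that base.

```python
import math

def dive_depth(items, cognitive_limit):
    """Levels of chunking needed when each chunk holds at most `cognitive_limit` items."""
    return math.ceil(math.log(items) / math.log(cognitive_limit))

items = 12
for limit in (3, 7, 10):
    print(f"limit {limit}: depth {dive_depth(items, limit)}")
# A limit of 3 forces an extra level of structure over a limit of 7 or 10.
```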

003

Now, I’ll put the table into a graphic, so we can see how the total cognitive load and cognitive limits affect our customer’s time to return.

004

In this figure, we start with a log-log cycle. We highlight the logs of bases 3, 7, and 10 that we’ve been using as examples. These bases are shown in black, the others in aqua. A log-log scale presents us with squares within squares. These cognitive limits hint at the dive that we make into whatever we have to learn: textual content, automated content, implicit cognitive models, implicit models, implicit model constraints, and technology adoption lifecycle phase mediation. We also have to find the explication gaps and workarounds, and other negative use costs. We use parabolic physics here. First you climb the platform, then you dive. The red, green, and purple grids use the base log squares as unit measures. We take the cognitive effort required to make the dive off of the unit measure grids. The base of each grid gives each user some credit for knowing some of the content. The black horizontal line at the bottom represents the system ground as an absolute. The ground for each cognitive limit was shifted, again, relative to the expected prerequisite knowledge.

The Time To Return (TTR) in the figure is a little bunched up. If the curve had been wider, the arrivals would have been stretched out, and more realistic. Practices like Minimal Marketable Functionality aim at delivering smaller cognitive loads, and arriving at the TTR one unit of minimal marketable functionality at a time. Moving training earlier in the sales cycle can also move the TTR around, and reduce negative use costs.

When we take the population as a whole, we end up with a collection of parabolas generating a surface over time–a fireworks show.

So, back to the soil. The population of soil microbes being served at a particular location on the graph is there because that is the only place that has what they need. And, so it is with an application. We, both us and those soil microbes, are seeking, searching for cognitively cheap, exploitable, consumable value. Making it too easy is a loser, because we will be bored. Making it too hard will result in a support call, or worse, an exit. Rocks are rocks. And, no, they don’t rock. Microbes and humans go elsewhere.

Comments?

Twinkle, twinkle, little product

March 29, 2013

So we’re far away from the city; the night is dark; the moon is full; the light sufficient; headlights off; just us and the sky, a wide and twinkling sky; and our car moving us beneath the glorious heavens. The stars twinkle. We recall the old rhyme, as the sky takes our breath away.

A few nights later, some of the staff is working late to meet tomorrow’s deadline, so you’re doing your leadership thing while you lose yourself for a moment looking out your skyscraper window. The stars still twinkle. The moon doesn’t. The streetlights don’t. Only the stars, the few you can see standing in all that light pollution, twinkle.

Years ago, decades ago, the astronomy community realized that they had to move their telescopes and other scanners out beyond the atmosphere if they were going to get rid of the bugs we call twinkles. Once they got the Hubble up there, the stars no longer twinkled for astronomers. With the bugs gone, they gained clarity. They gained vision. They gained insight. They moved their value chain beyond one of its constraints, and went on to capture that value, deeper value.

Astronomers can hardly be blamed for those twinkles, those bugs. Those twinkles arose from a physical constraint, the sky. Managerial decisions wouldn’t have made those twinkles go away. Quality assurance wouldn’t make those twinkles go away. Better astronomers wouldn’t make those twinkles go away either. Twinkles persisted until recently.

But, I said product, didn’t I? How do our products twinkle? How do our products twinkle despite management, programmers, quality assurance? And, I’m not talking about the bugs that could just as well turn up in a telescope rather than our code. I mean the twinkles, the bugs, we are blind to; the politics of product and the politics of elicitation; the politics of governance; the CEO; the execs; and the management of the software vendor organization; yes, right to your door, that of the product manager; and beyond that, the politics out in the distance there, the politics of the economic buyers that constitute our customers, and our early adopter clients and their organizations’ management. Call them the air of the development world.

In recent tweets, I’ve had to remind peeps that, in my world at least, that of companies that sell technology, rather than content, aka not a web 2.0 company, the economic buyer is only the first buyer, the person in the initial sale, and given the enterprise nature of the pursuit of our increasing returns, not a person involved in the subsequent sales, not a person that will even involve themselves in the UX. That economic buyer does, however, get sold some notions of business value, and lacking that might snap back and see to it that our application is removed from their company. That economic buyer is at the apex of the purchasing company’s politics spreadsheet. That economic buyer is the twinkle supplier.

Software development is replete with myth. Requirements are never stable. But, that flies in the face of those of us who worked in functional domains. Our requirements rarely change. We’re mostly about reproduction, doing it again and again and again. And, meaning-wise, our meanings rarely change, so don’t look at your elicitation sources as the sources of twinkle. And, what does that myth tell us, that requirements never stop twinkling? Like stars photographed from the ground, the twinkles stop, because we fixed them in silver. Requirements fix them in words. Developers never see the twinkles until a project turns into a program, or in a more Agilist world, the next iteration or refactoring. Even then, developers are far away from the nuclear furnace.

The twinkle, twinkle, in an internal organization, requires us to look up. And, don’t talk to me about flat organizations. If an organization was really flat, my CEO would be shopping at Walmart and wearing those t-shirts they give us, so we have clothes to wear at work. There is always an up. And, in a vendor organization there is a down. Flip the representation over if you like. Source politics on one side and builder politics on the other. Twinkle, twinkle–southern hemisphere, northern hemisphere–matters not. That politics is hierarchical, deep, and fused. The end results are tradeoffs. We talk about tradeoffs as if they were necessary and the core of what we do. The tradeoffs keep changing. So the twinkle never ends, and the product fits loosely if at all. Does it serve the economic buyer’s expected value, or the need for users to get some aerobic exercise pushing a mouse across a screen while compensating for the mismatches between the software and their functional cultures.

The twinkle does have solutions, AOP for one. Ask a developer about it, or search this blog. It might be in my prior blogs that are now inaccessible, one of the wonders of SaaS. But, beyond the technical enablers, the booster rockets, we need to get rid of the twinkle, the politics that ruin our ability to deliver value fully. No endless chain of iterations will eliminate the twinkle. Only we can get our software up above the politics. Start by noticing it. Of course, we can dream of the day…

Back to Blog

March 15, 2013

It’s been over a year now since I disappeared from my blog. I still have no ability to draw the bitmap graphics that I’ve used extensively in my blog, but a writing book challenged me to go completely lexical. Disappearing doesn’t mean that I forgot my backlog, or that new ideas haven’t shown up to extend the universe. But now, I’m back here.

Last month, out of frustration, I started another blog, Product Strategist 2. But today, I posted a link back there. If you’ve already followed me out to that blog, we will be staying here. Check your subscriptions. Thanks.

About Control

January 7, 2012

Where did I put my controls? If you have authority and use it, rather than doing something a little more complicated and implicit like lead, you know your controls are explicitly up there in the hierarchy. If you practice shepherd leadership, you know they’re out there in that implicitly plowed field of yours and your team’s. If you’re dealing with channels, you’d better understand gravity, control at a distance, because you are far away from the decision making of the actors.

This afternoon’s road rage trigger pulled into the fast lane as I was closing on an open slot: a semi a lane to the right, the shoulder adjacent to the open lane, and a separator wall to the left, where it should be. Slow traffic in the fast lane is supposed to be illegal, so where is the policeman who is supposed to pull this guy over? Yeah, a moving control. Stuff we deal with every day, like banks that won’t loan. Is it any wonder I’m left wondering where my controls are? No, I didn’t road rage. I made the six lane changes to pass the control and get on with it. Thanks to the road controller, the world was a little more dangerous than a fast drive through the slot and beyond for those few moments. Then, the world was safe again for the fast traffic left to itself in the fast lane where it belongs in a lane-discipline state like Texas, which likewise makes it easier for the police to know where to look when on the lookout for the harmless speeders.

So here we have various kinds of controls: barbed wire fences, paths up the cliff face, flat surfaces, ramps, hills, speed bumps, twists from inside to outside, and muddy plowed fields from those collegial conversations in the rain. So let’s talk about controls, about mission, about vision, about all the things that lay out what must be and how it must be done. This isn’t about lists. It isn’t about maps either, not this time.

We may get lost in the math, so I’ll omit it, gloss over it, or hint at it. If you want to dive into it, we can talk later. Consider these ideas Lego blocks, or yet another wrench of one kind or another that you can use when you get tired of the straight lines of our linear assumptions.

Yes, this coulda, shoulda, woulda, mighta been a slide presentation, or a cartoon. It’s graphics rich. It’s long. And, given that I drew this stuff months ago, after a period of trying to crank out parts 2 and 3 of the long tails, thick tails presentation, it concludes where I lost the time to stay focused to the rat race of keeping the food on the table, the rent paid, and the car running, my current controls.

So I’ll start out here with the typical linear view of the business proposition. Linear teases us with 8th grade geometry. Two points are a line. Two lines are a point. Hints of recursion; of arcs being nodes; of von Neumann’s zero-sum game theory; of drafting boards, T-squares, triangles, compasses, and rulers; of much, yes, even CGI at some level. Of some old line still used in a bar.

Mostly, linear is a belief. Given that so much math has moved on from the linear and the orthogonal, linear survives just because “non-linear” is less familiar, more risky like discontinuous innovation, and harder to communicate to those less analytical, less abstract executors of our strategies. Linear is helped out by regression, a line defined by many points, most of which are not on the line: controls at a distance. Still, regression, like much of math, is beholden to the Pythagorean notion of distance.
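That line defined by many points, most of which are not on the line, is ordinary least squares; a minimal sketch with made-up points (not data from this post):

```python
def least_squares(points):
    """Ordinary least-squares fit y = m*x + b over (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    m = cov / var
    b = mean_y - m * mean_x
    return m, b

# Points scattered around y = 2x + 1; none of them need sit on the fitted line.
pts = [(0, 1.2), (1, 2.8), (2, 5.1), (3, 6.9), (4, 9.2)]
m, b = least_squares(pts)
print(f"slope ~ {m:.2f}, intercept ~ {b:.2f}")
```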

The Linear Assumption

We assume that if we postpone a decision, all is well, because things will just go on nice, straight, and level. We might be bothered by the idea that our industry, our category, our financial performance is just going to converge with that of our competition. We might want to turn.

The Curved Reality

The reality is more like we are curving, turning all the time, but we just project all those turns onto the path of our linear assumption. Going linearly straight would take so much effort, we’d be doing nothing else. Strategic alignment would kill us. Besides, we have an easy cheat. We can just project all that curving down to our linear assumption and get some sleep.

Log-Linear Transform

The mathematicians built these tools, not for the businessman, but for themselves. They work hard to make mathematics easy on themselves. I might have the name of this transform wrong. Being loose here makes both of our lives easier. But, rest assured, the transform exists, has a name, and yes, you learned it over and over again back in school.

Log-Linear Twice

So here we see that earlier two-dimensional curve being depicted as a three-dimensional curve. Raising the exponents leaves us with having to carry out two projections back to the linear assumption. Easy enough. Keep the story straight; simple; communicable, like a disease. It hides intentions if you need to keep something secret while appearing to be completely open. Yes, those fast followers follow with their own linear assumptions.

The 3-D Assumption

Yes, we live in a 3-D world, so we assume that to be the nature of even the 1-D linear assumption. Alas, we would be wrong. Studies on human perception show humans to sense only 2.5 dimensions. But, mathematicians like dimensions to be integer constructs, so they round up that 2.5 to 3, and we just get on with it. The dimension of towards and away stops at our stomachs, so the known world hangs out behind us only as a concept, much like the past and the future.

Our 2.5-D World

Here the z-axis, that half a dimension runs from the upper left to the bottom right. Notice there is no arrow moving off into the upper left. The three divergent lines find their way out in this 2.5-D playground. Of course, corporations perceive in ways independent of human perception.

2.5-D Reality

The dimensions are counted out in this figure. Towards might be labelled away. It’s a frame of reference problem. The perceptual physiologists probably have some standards laid out for their discussions of the matter. Notice the red line disappearing into an electrical outlet of sorts, really a dimensional boundary. That line might actually be 4-D, but we are only reporting on a 2.5-D world, so statistical significance would make the line just plain disappear, because the data ran out, and a regression only sees as far as its most distant outliers on each axis of the reported dimensions. Magic if you will, or thick tails falling into the implicit.

I know. You know this stuff. But do you use it? Or, lose it? Do you make your roadmap a list, so you don’t have to do all that GPS and dead-reckoning math? Do we have inertial nav for our roadmaps yet?

Decisions with Equations

Now, I’ll admit that I drew the lines long before I figured I was going to talk about equations or polynomials. I don’t have Mathematica, so the equations are loose approximations. The equations of the lines run from 1-D to 2-D to 3-D. That’s pretty much the point. The point was to open a gateway to other topics, codecs, protocols, which in turn let us build other worlds, worlds that couldn’t be built otherwise. Some of us PMs push codecs and protocols, our technologies, out into the world embedded in products and services. That’s where value chains, lasting wealth, and careers get built. You don’t have to do that if cash and jobs are as far as you want to go with your change-the-world pursuits.

Decisions

So why did I include the word decision in the titles of the last two graphs? Well, once you kick the entity painting the line in some direction and with some magnitude, oops, those sneaky vectors, you’ve made and implemented a decision. You can stop thinking at that point. But, you’re paid to think. You’re paid to fake out the soccer goalies paid by your competition. You’re paid to turn, rather than go straight. You’re paid to decide. Those decisions dance with the notion of controls. Those controls might be pool table bumpers, so you can stick with the linear assumption, or they might be curves of all ilks. The triangles mark the moment of decision on each of the lines.

Consider real options, the idea that you pencil in future decisions along your vectors of differentiation, so an assessment of the tracking portfolio of each of your strategies is calendared and made. Some at least minimal go/no-go decision is made. The linear assumption is littered with decision points. The accounting measurement lattice works similarly. Neither forces you to turn, but either might necessitate a turn in response to changes in the underlying situation.

Notice that your equations can only be so complicated given your cost structure and policy structure at the time of decision. The curve, the turn might have to be simpler until you can hire and buy the needed capability.

Geometries


Back in the day, you drew a flowchart before you coded. You made a decision, you branched, and as far as you ever noticed, the world didn’t change because of the decisions made inside your program. You went left or right. You did this or not. You did this or that. Your decisions were binary, tending upward toward the case statement with the ensuing catch-all called OTHERWISE. You didn’t really think in terms of dimensionality. You never got around to the n-dimensional thing I call the splat. You never asked yourself the mathematician’s question of how many dimensions were involved; you never rounded up to compensate for the programming language’s dependence on integer-based branching. What would a half-a-dimension branch be in C++ logic flow? Worse, since you were not Einstein, you didn’t ask about curvature. It just wasn’t done.

A book on cosmological topology changed all of that for me. It’s not right linear vs. left linear. It’s curvatures. It’s crumple zones. It’s densities. It’s all those roadmaps that didn’t prove their case and ended up as crumpled balls lying wherever your intended 3-point shot left them in the neighborhood of your trash can. It’s that straight line bent all to hell. It’s that straight line, reorganized into a collection of composite functions.

Topology is one of those topics that separates mathematicians and statisticians. I’m taking this from a statistician I met a while back who never cleared the hurdle of topology.

Non-Euclidean geometry was created by some folks who questioned Euclid’s fifth postulate, the parallel postulate. They thought this stuff up, so we don’t have to. Euclidean geometry honors parallel lines as a truth. Non-Euclidean geometries don’t. The earliest two non-Euclidean geometries, as far as I know, involved convex and concave worlds where the parallel postulate was violated. Equality became inequality. The angles in a triangle used to add up to exactly 180 degrees. With the inequalities, they summed to something less or more than 180 degrees. The constraints changed, and with those constraints, worlds changed. The above figure shows the relations between the underlying geometries and their curvatures. The constraints asserted differences in control. Are you inside the curve or outside the curve? All of this becomes a roller-coaster ride.

More on Geometries


A curve has an inside and an outside. That curve exhibits both geometries depending on the anchor of your view. The right and left branches of a decision become a choice between one curvature or another, so decisions choose geometries.

Geometries and Their Angles


So here we lay out the relationship between angle and geometry: sum of the angles of a triangle = 180 degrees, Euclidean; sum > 180 degrees, spherical; sum < 180 degrees, hyperbolic. Einstein’s space-time is hyperbolic. But where are the controls? Right. Well, shapes control, lines control, points control. Put them where you need them.
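The three cases collapse into one formula. For a geodesic triangle of area A on a surface of constant curvature K, the Gauss-Bonnet theorem gives the angle sum directly:

```latex
% Angle sum of a geodesic triangle on a surface of constant curvature K
\alpha + \beta + \gamma = \pi + K A
% K = 0  (Euclidean):  \alpha + \beta + \gamma = 180^\circ
% K > 0  (spherical):  \alpha + \beta + \gamma > 180^\circ
% K < 0  (hyperbolic): \alpha + \beta + \gamma < 180^\circ
```

The curvature K is the control: pick the sign of K and you’ve picked the geometry.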

Decisions as Bezier Curves


In graphics packages like MS Paint, or Adobe Illustrator, or, say, just about all of them these days, Bezier curves are the first place you run into controls that define a line, a path, yet are not on the line or path itself. My first run-in with such things was NURBS curves. When I ran into them, I thought, hey, this is cool, because adding a control point didn’t change the curve. It just granted you the possibility of additional control deeper into the future, deeper into your strategy. I’ve since come to discover the same kind of control points in numbers themselves, in polynomials, hell, everywhere. It is just the way mathematicians and even logicians do things. And those of us distant from math and logic do it as well. Do you keep your apartment or ditch it when moving in with her/him?

Do we grant ourselves degrees of freedom or commit?

But what of the previous figure? The endpoints of a Bezier curve are fixed on the spline. The four points, joined by three straight lines, constitute the spline. The spline defines the Bezier curve. A spline can have more points and lines than this. The four points are control points. You move the control points to change the curve, aka to control the curve. The deep coolness of these controls won’t be revealed until the last paragraph of this post.
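As a sketch of how four control points pin down a whole curve, here is de Casteljau’s algorithm in Python, the standard way to evaluate a Bezier curve from its control points. The coordinates are invented for illustration:

```python
# Sketch: evaluating a cubic Bezier curve from its four control points
# with de Casteljau's algorithm. The control points are invented.

def lerp(p, q, t):
    """Linear interpolation between 2-D points p and q."""
    return ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])

def bezier_point(controls, t):
    """Repeatedly interpolate adjacent points until one remains."""
    pts = list(controls)
    while len(pts) > 1:
        pts = [lerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

controls = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(bezier_point(controls, 0.0))  # (0.0, 0.0): the first endpoint
print(bezier_point(controls, 1.0))  # (4.0, 0.0): the last endpoint
print(bezier_point(controls, 0.5))  # (2.0, 1.5): a point mid-curve
```

Notice that the two interior control points never lie on the curve; moving either one bends the path without ever touching it. That is the off-the-line control the paragraph above is pointing at.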

Decisions and Control Points


Here we’ve made the control points as decisions explicit by annotating each decision with a triangle.

Decisions Constructed


If you’re a reader here, you know that I use a large triangle, non-iconic, to represent decision trees that result in realizations. My use of this symbology is something I call the Triangle Model. Decisions are realizations. Decisions are constructed, built, and later made. In the figure above, the circles structure the curve, and the tan-colored triangles build further controls that control the implementation of the curve, aka the line. The triangles imply many decisions made by many people, potentially many organizations, either cooperatively or in a zero-sum, linear-programming face-off. Each decision tree contributes a limiting surface to the overall definition of the curve.

Decisions and Geometry


Here I’ve added a few more details to the surface-hugging curve. Before it makes sense, I have to step back and bring up a metaphor I first came across in a philosophy-based logic class. Truth is not the central issue in logic. Validity is. Validity asks the question: is the argument constructed correctly? Validity is a question focused on the plumbing, not the truths or falsehoods flowing through that plumbing. Validity is about the carrier of logic itself. Truth is about the content conveyed by that carrier. Logic as a whole is about a carrier and its carried, so logic is a media. Mathematics is likewise a media. This does not become apparent until you bump into parametric equations. Those equations can be thought of as tubes. The value at time t is a place in the tube. The point can even spin if you’ve built quaternions into the equation. Never mind what a quaternion is. It spins. That’s enough for now. So math is a media. So software is a media.

In the figure the pipe is larger than the point. The pipe is like a water slide. A point starts out on the centerline, then finds itself on the pipe wall. It moves from being symmetric to the pipe to being asymmetric. It is on one side of the pipe, one edge, then it rotates or switches to the far side of the pipe to take advantage of a curvature. The point makes a decision. It starts out in a Euclidean world, a flat world, then it finds itself in a spherical world, but preferring the hyperbolic, due to its corporate capabilities, it switches to the other curvature on the other side of the curve. Then, it moves to the symmetric position in the centerline of the exiting Euclidean pipe. Yes, your company is the point in the parametric equation.

Decisions and Geometry Abstractly


In this figure, I’ve firmed up the structure of the ride your company will take as the point in the parametric equation. That structure is a control. Companies ride such structures all the time. They don’t necessarily build those structures, but they do try to exert some control over their traversal of such structures.

Inside-Outside Geometry


Another view of that structure, but here we ask different questions. Can your company function on the outside of a curve, in the spherical? Can your company function on the inside of a curve, in the hyperbolic? Can your company traverse between the spherical and hyperbolic, and back? Can your company find a place in the linear, the Euclidean, and maintain it deliberately? It’s not enough to stick with the linear assumption.

Decisions and Surfaces


Here we highlight the structure, the surface, or in business terms the situations upon which strategy is built. Those capabilities mentioned earlier were abilities to execute at specific moments and during specific time intervals. Those capabilities were put there by strategy in anticipation of structuring situations.

On Surface


The technology adoption lifecycle is one of those structures that technologies, products, categories, companies, industries, whole verticals, and whole economies traverse. That single linear assumption doesn’t get far in the varying densities of populations, events, and intervals comprising the lifecycle. A traversal would occur through the distribution, a distributed control, and given the Poisson distributions comprising Moore’s bowling alley, many distributed control populations. That traversal would not be a surface ride. That traversal would engage differential games of rates interdependent with other navigational aspects of getting the technology, product, sidebands, company, channels, ecologies, sales, revenues, and profits done.

The Borel set enables the calculation of probabilities for mathematicians. The Borel set informs businessmen that the population is fixed. That fixedness should inform the myths of growth, and the ignored reality of decline and its incipient myth of “Who us? Decline, never!” Ask Kodak and stop talking about disruption. It was Christensen’s good management doing what they do. It wasn’t some attacker having labelled itself as disruptive in its pleas for VC funding.

On the TALC Surface


The technology adoption lifecycle (TALC) surface describes the totality of your category, not your company. You could scale the normal to represent your company. Still, macroeconomic considerations are better shown at category scale.

In this figure we assume the company has made it to the point where they have consumed 50 percent of their full-lifecycle, available market without missing a quarter and without incurring the wrath of Wall Street. They reach their aftermarket and are subsequently lifted into the realm of the Fortune 500 companies, with their much larger market size, via the dreaded M&A. Still, they face discontinuity, and of course the M&A typically fails; so much for the red line, so much for the linear assumption, and usually so much for growth.

On the TALC Surface Again


Here we see the point of the aftermarket, the point of an M&A, the point of the huge public company, and the point of the startup. The telcos will make ten times more money from the Internet than the startups did. The telcos could not have brought internet technologies into adoption. Web content startups are not fostering adoption–adoption of those underlying technologies has been done for a while now.

Polynomial as Control


Here we go back to the math to generalize the polynomial as a sequence of controls made explicit by the assertion of a waiting, but implicit, control. This hints back to the NURBS curve control points and how mathematics does this all the time. We solved polynomials without ever using them. No wonder mathematics wasn’t fun. It would have been fun to take on our advanced biology teacher during the test reviews with a ton of math. That’s probably why it wasn’t taught.

A Point


So what’s with this point? We all have points like this. Ask our significant others.

Measurement Lattice


We’ll be getting the point of that point soon enough. That point is consistent with other points in a cloud of data, big data if you like. But, all those points are waiting around for a line to show up. “Yeah, no line gets past me. I’m an outlier, a tough guy. Hype that big data all you like. There is nothing out there beyond me.” Beyond the collected data is the implicit, which will remain implicit. The data collection explicated an expanse of space.

Measurement Lattice-Data-Regression Extent


The regression traverses the extent of the collected data, but goes no further. The regression provides a structure for parametric traversal.

Measurement Lattice-Data-Regression and Dimension Extent


The dimensional extent of the collected data controls the dimensional extent of the regression and regression-based forecasts. In the figure, the 3-D dimensional projections from the regression are invalid. Degree elevation won’t work here.
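A minimal sketch of that constraint in Python: fit a least-squares line, but refuse any forecast outside the extent of the collected data. The data values here are invented for illustration.

```python
# Sketch: a least-squares line that refuses to forecast beyond the
# extent of the collected data. The data values are invented.

def fit_line(xs, ys):
    """Ordinary least squares for a line y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict(xs, ys, x):
    """Predict y at x, but only within the measured extent of x."""
    if not min(xs) <= x <= max(xs):
        raise ValueError("x lies outside the extent of the collected data")
    slope, intercept = fit_line(xs, ys)
    return slope * x + intercept

xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1]
print(predict(xs, ys, 2.5))   # interpolation within the data: fine
# predict(xs, ys, 10)         # extrapolation: raises ValueError
```

The guard clause is the measurement lattice in miniature: the regression is a structure for traversal, but only across the space the data explicated.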

Controls Again


The control zoo once again. What species of control do you want to exert? As I’ve read more mathematics, I’ve become interested in more mathematics. Warning! Danger!

Decisions and Surfaces


Like the TALC, macroeconomics is another controlling surface. Your curve will have to work around macroeconomic surfaces.

Decisions and Market Allocation


Market allocation significantly limits where your lines can go. Market allocation is a control. The market allocation circle is based on the normal distribution of the technology adoption lifecycle. Moore defined a formula for determining maximum market share based on the ordinal entry of a competitor into a category. Later entry would find not only smaller revenues, but also a shorter interval of participation in the category. If you arrive later without a new technology underlying your efforts, aka without having the capacity to create a category, you’ll be leaving sooner.  The circle provides controls.

Stakeholder Preferences


Here stakeholder preferences are incorporated as controls in the earlier figure of the role of macroeconomics as a controlling surface.

So you’ve seen some of the structures that control the line we once considered to be just a linear assumption. As our last view of curves for a while, I’ll talk about the subdivision of a Bezier curve as a parametric equation. Look in Google to find several animations of Bezier curves. I found them very interesting. So on to why.

Bezier Curve Subdivision


In the above figure, the base spline is shown in black. The first subdivision is drawn in red. In the animations the red points subdividing the black lines start at the one endpoint of the line and move to the other. All of the red points move across the line they are on. The second subdivision is drawn in green. The green points subdivide the red lines and move across the red lines. The third subdivision is provided by the black point subdividing the green line. The resulting curve ends up being descriptive of a three-tier hierarchy, or a corporation. Adding another point to the base spline would insert another subdivision, and another layer in the hierarchy.
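The subdivision those animations show can be sketched in Python: at each level the points interpolate the level above (base to red to green to black), and the endpoints gathered along the way form the two sub-splines that meet at the point on the curve. The base spline coordinates are invented for illustration.

```python
# Sketch: one step of Bezier subdivision at parameter t, returning the
# left and right sub-splines whose shared joint lies on the curve.
# The base spline coordinates are invented.

def subdivide(controls, t):
    left, right = [], []
    pts = [tuple(p) for p in controls]
    while pts:
        left.append(pts[0])        # first point of this level -> left spline
        right.insert(0, pts[-1])   # last point of this level -> right spline
        # interpolate this level's points to produce the next level
        pts = [((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
               for p, q in zip(pts, pts[1:])]
    return left, right

base = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
left, right = subdivide(base, 0.5)
print(left[-1] == right[0])  # True: the shared point is on the curve
```

Each of the two returned splines defines half of the original curve, so subdivision, like the hierarchy analogy above, adds a level of structure without changing the curve itself.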

Try moving your controls around.

Leave some comments. Thanks.

My Long Tails

November 17, 2011

Chris Anderson popularized his version of the long tail in his book “The Long Tail.” I read the book and went on to use it to model software. Richard R. Reisman applied it to create a pricing model. I just came across Reisman’s work today, so it will take some time to think about, play with it, integrate it, and get back to you.

The last weekend in October found me attending Seattle Product Camp 11. I didn’t drive. The road trip is still a dream. I haven’t flown in something like eight years, and that flight eight years ago didn’t involve layovers or luggage. The experience of flying has changed over the years, depreciated, but gotten cheaper in a few dimensions and more expensive in others. Flying is a good time to be on a diet, or a fast. A $10+ patty melt is a pocket melt. Still, I enjoyed Seattle and #PCS11. It was the first time I got to experience Seattle in overcast rainy weather, an improvement over LA’s bland constant sunshine, or the teasing Texas drought that hints, but never delivers.

Back to those long tails. I proposed a presentation for #PCS11, Long Tails and Thick Tails for Product Managers. The thick-tail part of the presentation was intended to link back to my presentation at #PCS09, Game Theory for Product Managers, an advanced topic, since I didn’t want to get into the details of minimax, and the neatest thing was Poisson games, or games with an unknown population of players, the typical technology adoption problem. Poisson games linked to ideas that we were talking about at the time on the Anthropology for Product Managers tweetchat, Functional Cultures. The #PCS09 presentation led to my proposing another presentation for #OC10, So you don’t have a market? Great!, a presentation about organizing markets with Poisson distributions and Markov chains, something that Moore hinted at with his bowling alley idea in Crossing the Chasm. One of the people attending the #PCS09 presentation said of six degrees of separation that it implied a thick tail. That statement lay there begging to be dug into. So my presentation in Seattle was intended to reply to that, and to make me sit down and collect my ideas about long tails and thick tails. They went far beyond software as long tails before it was over. It’s still not over. Just another question to stew on.

Alas, I didn’t actually get my proposal submitted for voting. The topic blew up. The sessions got shorter. The summary was elusive. Had it been finished, it would have been three times too long. I’m still putting the presentation together even now. I’ll post it as a SlideShare when I’m done.

Getting back home, Ruud Hein over at SearchEnginePeople asked me to write another guest post. So I wrote up the long tail part of my #PCS11 presentation. The presentation was intended to show how a long tail can be applied to model many different processes we bump into in product management and product marketing management. One of those models integrates product management and product marketing, a topic hinted at in this post on functional cultures.

So consider

Long Tails Beyond SEO

to be part one of a two if not three-part series that would have been my #pcs11 presentation.

My previous guest post on Ruud’s Search Engine People blog was on how I write my Strategy as Tires tweets. The title in the post was Ruud’s. I do my wall flowering thing at conferences.

Ruud, Thanks for the opportunity to guest post on your Search Engine People blog.

Comments!

Science as Ito Process

October 11, 2011

In a cryptic tweet, “Ito stochastic process n>=0, science. Knowledge=explicated +forgotten,” I was replying to Trevor Rotzien’s tweet of “Science isn’t static statements of universal laws nor a set of arbitrary rules. It’s an evolving body of knowledge.” In that response, I was defining science as a random, statistical process, or more simply a process that exhibits certain characteristics. Then, I tied that definition to that of knowledge. Being a tweet, I left something out of my definition of knowledge. I’ll put it back in.
Knowledge is a cyclical process of doing something artificial intelligence people call explicating knowledge, turning implicit/tacit knowledge into explicit form or explicit knowledge. The moves in Argentine Tango can be described explicitly, but practice puts that explicit description into your muscle memory where it is no longer explicit. It is implicit. We practice to re-implicate or make implicit that explicit knowledge. Science discovers through wide, vast, and history spanning explication. Discovery is explication.

Knowledge has its highest value not in its explicit forms, but in its implicit forms. Craft production is implicit production. Even explicit production has us using tools that embed the explicit into an implicit media. A hammer is all the decisions made about the stuff comprising the hammer and the hammer-producing process.

Ore is dug up. Ore is transported. Ore is fired. Ore is oxygenated to make steel. Steel is poured, …. The Ore truck has an accident, so this whole hammer production process engages randomness. So the hammer production process is a random or stochastic process.

Stochastic processes come in two flavors these days. A few years ago, before one flavor was generalized, they were two distinct flavors. The flavors have to do with how much memory is involved in dealing with the probabilities of the transitions from one state to another in the stochastic process. We had Gaussian/Bayesian and Markovian stochastic processes. Gaussian/Bayesian stochastic processes take into consideration the entire scope of the history of the known, small world to determine which state transition to make. Gaussian/Bayesian stochastic processes use all the memory, the complete memory, n = infinity. Gaussian/Bayesian stochastic processes live under normal distributions. Markovian stochastic processes make state-transition decisions without any memory, n = 0. Markovian processes live under Poisson distributions.
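A minimal Python sketch of the Markovian, n = 0, case: the next state is drawn from the current state alone, with no memory of the path that got us here. The state names and transition probabilities are invented for illustration.

```python
# Sketch of a Markovian (n = 0) stochastic process: the next state is
# chosen from the current state alone, with no memory of the history.
# The states and transition probabilities are invented.

import random

transitions = {
    "discovery": {"discovery": 0.6, "orthodoxy": 0.4},
    "orthodoxy": {"discovery": 0.1, "orthodoxy": 0.9},
}

def step(state, rng):
    """Draw the next state using only the current state's row."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in transitions[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point shortfall in the row sum

rng = random.Random(7)  # seeded for repeatability
state = "discovery"
walk = [state]
for _ in range(5):
    state = step(state, rng)
    walk.append(state)
print(walk)  # a six-state path; each hop consulted no history at all
```

A Gaussian/Bayesian process would instead condition each hop on the whole of `walk` so far; the contrast is exactly the n = 0 versus n = infinity distinction above.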

Lately, Markovian stochastic processes have been generalized as instances of a class we call Ito processes, aka stochastic processes with less than complete memory, or 0 <= n < infinity.

Machine learning shows us that Markovian/Ito processes are processes that discover new rules. Gaussian/Bayesian processes are processes that enforce rules, but do not discover new rules for yet-to-be-explicated phenomena. So science is a process of discovering new rules, aka Markovian/Ito. Science education, aka the generalist culture of science, expresses itself as orthodoxies in the general form of Gaussian/Bayesian, all-knowledge propositions.

Normal (Gaussian/Bayesian) distributions are the limiting distribution, or shape, for Poisson distributions. This implies that as a collection of Poisson distributions attempts to cover the same data as a normal distribution, the shapes converge. Poisson distributions converge faster than normal distributions. Discoveries become orthodoxy. Poisson distributions are tall and narrow. Normal distributions are lower and wider. Correlation and statistical significance require normal distributions of sufficient height and separation. Poisson distributions lead to the Poisson games that I presented in a session two years ago at PcampSEA 09, “Game Theory for Product Managers.”
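One way to see that convergence without plotting: the skewness of a Poisson distribution is 1/sqrt(lambda), so as the mean grows the lopsidedness vanishes and the shape approaches the symmetric normal. A quick sketch:

```python
# Sketch: a Poisson distribution has mean = variance = lambda and
# skewness 1 / sqrt(lambda). As lambda grows, the skewness vanishes
# and the distribution approaches its limiting shape, the normal.

import math

for lam in (1, 4, 25, 100):
    skew = 1.0 / math.sqrt(lam)
    print(f"lambda={lam:>3}  mean={lam}  variance={lam}  skewness={skew:.2f}")
# skewness -> 0: the tall, narrow, leaning Poisson settles into the
# symmetric normal, which is discovery hardening into orthodoxy
```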

Functional cultures, like expert-based science, transition from the generalist culture under the normal distribution to an expert culture under the Poisson distribution. The process of technology adoption moves from expert to generalist, from the Poisson of the newly discovered to the normal and the disruptive fight with the incumbent orthodoxy. This is the process of learning. There are mirroring processes of de-adoption and forgetting.

Forgetting is a process of moving from the infinity of state transition histories of the normal distribution towards the no/(zero)-memory state transitions of the Poisson distribution. We forget by successively omitting the most distant state from the decision about the next transition. We will have forgotten once the zero state transition is eliminated.

Discovery is a queued process as well. We distill random variation down to a steady state. That steady state is Gaussian/Bayesian. Forgetting is likewise a queued process that starts with the steady state and admits random variation until only random remains. Poisson distributions describe queues, so Poisson>Gaussian/Bayesian>Poisson is the way of knowledge. It is likewise, Science.

So that’s what that tweet meant.

Since product managers move product to move a technology across the technology adoption lifecycle, we deal with these distributions and others as we get the job done.

Sorry about not having graphics for this post.

Comments? Thanks!

PCS11, More on collaborative games

August 26, 2011

Product Camp Seattle 11 (#PCS11)

Two years ago I planned to run up to Seattle from LA for PCamp Seattle 09. I put up a survey to find out what presentations I should propose. That survey gave little guidance to the preparations. Then, life happened, and I ended up in Texas, so being a Rand McNally trip-plan dreamer, I put a trip plan together that would take me across some states I’ve not been to before, like Wyoming and Montana. That PCamp was held in early October. I was warned of snow. See the first trip plan. I drove the Junction to Ballinger route the last time I left California. I’m not going that way again. This year, I’ll take the Austin to Brownwood route, my usual route to Santa Fe when I lived in Austin. I’m uncertain as to Lubbock to Muleshoe or Dalhart. I might try to hit the drive-in movie in Abilene this time if it isn’t closed for the season. The Capulin volcano and Raton Pass always beckon. And, I’m definitely stopping in Boulder on the way up. Who knows, I may dance a few tangos in Denver.

Anyway, more life happened, so I ended up routing through California and going back to California for a while, instead of returning to Texas. I’ve still not made it to Wyoming and Montana. I ended up back in Texas.

I missed PCamp Seattle 10. But, I’ll make it back this year. I’m definitely going to make it to PCS11. I’m still wondering about what to present.

The presentation I didn’t give at OC PCamp 10, “So you don’t have a market? Great!”, was too long in terms of delivery time, and it wasn’t particularly interactive.

I’ll write a Product Strategist post to introduce whatever I decide to talk about at PCS11.

Collaborative Games

Kenny Bastani’s comment to my earlier post needed a follow-up post to answer the issues raised. We tweeted it out instead. The question did take me to some interesting questions.

I started out with a list, but that struck me as too simple. Then, it was a normal-form game. Then, back to a collaborative game. Each line in the Shapley value polygon is recursive, yet another Shapley value polygon.

In the midst of the search for an answer for Kenny, the Fukushima nuclear disaster caused Toyota to halt production because of supply-chain interruptions. JIT logistics is the kind of efficiency that you get from the equilibriums of pure strategies derived from the normal form of a competitive game. You get to those equilibriums by eliminating alternatives that are dominated, or dominating, depending on the role of those alternatives. We seek an optimal solution, a single source. But here was a risk that wasn’t accounted for in the game, so the game had to stop for a while. Call this risk a black swan, or a thick-tailed distribution.

A mixed-strategy solution to a normal-form game would approach the outcomes of a pure strategy, but not exceed it. A mixed strategy would generate a solution in terms of area, where the pure strategy gives us a point solution. The point solution gave Toyota the JIT logistics. The point solution also eliminated the opportunity to leverage the alternatives that dominance elimination removed from the solution set. Shapley values generate solutions in terms of area, just like a mixed strategy, but more so.

I’ll leave my discussion with Kenny in my timeline. He posed interesting questions that took him one place and me to another. I took Kenny to be asking about orchestration, something very important to product managers. Orchestration is the place where inter-organizational work happens, so it is the deepest depth at which an application can provide value. I’ve talked about process orchestration and choreography in posts on the Triangle Model, which I don’t particularly write about in this blog, because it was a core theme in earlier blogs for many years. I have mentioned the Triangle Model in the following posts:

The recursive nature of Kenny’s Shapley values would be fractal if the recursions were the same shape as the parent or subsequent children, but that isn’t necessarily the case. The recursions hint toward grammar.

To get to process orchestration and choreography, you work out from the interface into what would be the core of a Shapley value. For my purposes, since the Triangle Model is a decision tree, that core would delineate the decisions necessary to create that value in depth, a depth starting at the view or interface and working outward in layers: features; tool (artificial) tasks; user tasks; work design (intra-organizational); work (intra-organizational), collaborative, if you will, or sequentially pipelined; inter-organizational work design; and inter-organizational work.

When dealing with layers, minimal marketable functionality (MMF) would deliver value to a defined depth across only a few layers. Deeper value takes customer organizations some time to reach, so delaying later, more distant layers makes for a profitable roadmap. Delivering a single layer at a time minimizes expectations. However, tool tasks (carrier) enable user tasks (carried), so a single MMF would have to deliver at least those two layers. Notice also that Gartner’s Hype Cycle is about value in depth.

Carrier and carried denote the components of any media, or my software as media framework. All software is media, not just those that are obviously media.

I’ve been through a huge change during my absence. Many new questions and insights arose in that time. Ask a question. Post a comment. I’ll answer. We’ll learn together. Thanks!

Geography for Product Managers

April 17, 2011

This post was written in response to this comment on The ordinals we call a clock.

Geography has been my theme for a while now. Functional cultures present us with geographies. Ideas present us with geographies. IT departments present us with geographies. Interfaces present us with geographies. Interactions between minimal marketable functions present us with geographies, likewise, product roadmaps. So what is a geography? My take is that a geography is anything that can be better expressed in a GIS system, rather than a list. Is time an issue? Are the relevant issues spatially organized? Then, you have a geography.

When I was working as an ITIL change manager, we were adding the change management component of ITIL to an existing ITIL implementation. I was supposed to track down managers willing to be responsible for improving their change management processes. I was supposed to find the potential conflicts between changes scheduled to be made to code, hardware, and other infrastructural elements across a huge IT shop in any particular time interval. It took me one day to decide that this was a GIS problem. We were trying to get this done via a relational database system and Excel. Oh, hell. Worse, I said that we needed a GIS system on my second day on the job. They were installing an upgrade or a completely new relational system at the time, so ….

Just keep your mouth shut until you’ve wired the joint.

In their system, you had wires, tons of wires, routers, switches, network stuff. That it was all physically located in particular places tied it to physical geography quite well. Servers exist in physical space. So drawing the map of their physical system wouldn’t be that difficult. So before we make a leap, let’s explore this geography and what it means in terms of game theory.

I took my son on a spring break trip down Route 66. Route 66 emerged from the efforts of various chambers of commerce along the route and the states involved. The road was a bottom-up business proposition from day one. It is obviously geographic. Services were stretched out along this road. The road is celebrated today as history and a user experience, the stories of the road.

Several value chains stretched down this road simultaneously. In one sense, the cafe in this town competed with the cafe in the next town. But in the larger sense, the road competed with other roads, so all the competitors on the road collaborated in the competition between their road and the other roads.

So let’s look at the road as a value chain and move it into a Shapley value representation.

Route 66 - Discrete

Here we take a length of the road and partition it into sections based on the contribution made by each town along the road. Each town has a geographic reach. Using the frequency-of-use idea would let us build other measures of the value contributed by each town along the road.
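As a sketch of one such measure, each town's contribution could be its geographic reach weighted by how often that stretch gets used. The towns, reaches, and traffic counts below are made up for illustration:

```python
# Hypothetical towns along a stretch of road: (name, miles of reach, daily traffic).
towns = [("Tucumcari", 18, 900), ("Santa Rosa", 12, 600), ("Moriarty", 8, 400)]

# One possible measure: a town's contribution is its reach weighted by use.
contributions = {name: miles * traffic for name, miles, traffic in towns}
```

Swap in actual traffic data from the GIS layer and the same dictionary becomes a defensible partition of the road's value.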

Route 66 - To Numbers

Next, we measure the lengths and normalize them. When you normalize a collection of numbers, the largest number becomes 1.0, and each of the other numbers, except for zero, becomes some fraction of that largest number. Normalized values can be read as probabilities. Notice that the existence of the road is taken as a given. If a section of road were to become impassable, the value of much of the road would disappear soon enough. Likewise, if your trunk connection to the internet went down, the value of your enterprise network would disappear while you were screaming into the phone. Ah, but for redundancy.
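That normalization step takes only a few lines; the section lengths here are made up:

```python
def normalize(values):
    """Scale so the largest value becomes 1.0 and the rest become fractions of it."""
    largest = max(values)
    return [v / largest for v in values]

lengths = [16, 10, 8, 4]   # hypothetical section lengths along the road
normalized = normalize(lengths)  # the 16-unit section maps to 1.0
```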

If all of this were in a GIS system, we could consider gas station pumps, beds, lunch counter seats, booths, tables, tourist attractions, movie houses. We would be playing many games.

So our next representation is the Shapley Value of our road.

Shapley Value

This representation was built from the totals in the previous figure. This representation is optimal; as such, it assumes maximum collaboration. The Shapley Value is a number equal to the area of the gray region, the core, in the figure. The white area represents space that cannot be reached due to inefficiencies and the lack of the necessary capabilities.

The Shapley Value is typically introduced with a 3-player game, which leads to a triangle. It is also three-dimensional, which puts it at the limit of our ability to visualize. These visualizations use regular polygons, polygons where each side is the same length. That just makes the drawing easy; the math is not based on shape. Matrices or tables are used to compute the value of each coalition.
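A minimal sketch of that computation, using the standard permutation definition of the Shapley value: average each player's marginal contribution over every order in which the coalition could assemble. The three-town characteristic function below is made up for illustration, not taken from the figures:

```python
import math
from itertools import permutations

def shapley_values(players, v):
    """Average each player's marginal contribution over all join orders."""
    shapley = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            # Marginal contribution of p when joining this coalition.
            shapley[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_orders = math.factorial(len(players))
    return {p: total / n_orders for p, total in shapley.items()}

# Hypothetical coalition worths for three towns on the road. Note the
# superadditivity: towns together are worth more than they are apart.
worth = {
    frozenset(): 0,
    frozenset({'A'}): 4, frozenset({'B'}): 6, frozenset({'C'}): 8,
    frozenset({'A', 'B'}): 12, frozenset({'A', 'C'}): 14, frozenset({'B', 'C'}): 16,
    frozenset({'A', 'B', 'C'}): 24,
}
values = shapley_values(['A', 'B', 'C'], lambda s: worth[s])
```

The payoffs always sum to the worth of the grand coalition, which is the point: the split accounts for what each town adds to every possible subset of the road, not just what it earns alone.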

Shapley Value - Suboptimal

In this figure, the town associated with the value 8 is having some problems and is not contributing to the value chain to an optimal degree. The town associated with the value 18 has not been impacted, but all the other towns in the value chain are seeing their earnings decline.

That’s just one value chain. Instead of cities, think about your network infrastructure. So now we’ll move on to a server, a server hosting a database. That server is in town, a city unto itself. It connects to another server hosting a data warehouse. There are connections upon connections, layers upon layers, maps upon maps. You can imagine it yourself. Just try mapping out your Twitter experience, or your blog host and RSS feed reader experience. Diligence will get this map done. You will end up with game upon game, onions of games. GIS is the tool that can take you there. Relational can define a single layer, but the toilet map, otherwise known as the sewage system, doesn’t relate well with the electrical system.

There I was, working hard to do a GIS analysis with an RDBMS, wondering why ITIL system vendors hadn’t gone GIS. It’s probably cultural. If your IT organization hasn’t gone GIS yet, why would an IT management system be the first to do so? Ultimately, in the abstract, it all boils down to features and how they relate to each other. All those features end up on a vector. Bivectors relate vectors, but it is work that makes those features work.

In an application, its features comprise a coalition. Each feature contributes, to a different degree, to the value a customer derives from the application. I prefer to deliver features as networks of related functionality, or minimal marketable functions (MMFs), because that lets me deliver some value to the customer, the economic buyer, earlier; it allows users to learn the application over time; and it allows a vendor to schedule their revenues and cash flows more consistently. So I’ll focus on the MMF as the unit of coalition.

In a comment, Kenny Bastani asked about a list of “features” that tend to be layers of crosscutting concerns or aspects, thus leading to Aspect-Oriented Programming (AOP). His considerations were:

  • Integration of CRM, data warehouse extracts, aligning vendors for product integration, designing a support strategy for all third party systems and data integrated into the product?
  • Security risks for centralizing a data layer from so many different systems?

To get our hands around this, we can consider each aspect to be a vector. When I talked about bivectors in the previous posts, I summed them up into a single technology vector, a single product vector, and a single business components vector. Vectors sum up easily enough. But, let’s take a more expansive look at the base vectors that were summed up.

Summed vectors

Back when we sold technology, instead of a webpage, the product was built on top of a technology. Customers bought the product, but had to install and configure that technology before the product would work. Much of that technology was already present in the product; the product wouldn’t install otherwise. With webpages, we use a server, a browser, tons of technologies, but we don’t sell any technology. Our products are not fostering adoption of some technology. The underlying technology has already been adopted and, for the most part, is situated in Moore’s late mainstream market, much of it approaching the horizontal asymptote at the top of its S-curve. These technologies that we use, but don’t sell per se, are our whole product components. The cloud, likewise, is among our whole product components. In this figure the whole products are shown as one summative vector, instead of as a collection of individual vectors representing each component. Notice that the granularity, and whether or not to use summative vectors, is up to you.
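Rolling a collection of base vectors up into one summative vector is just component-wise addition. A minimal sketch, with made-up component vectors standing in for individual whole product components:

```python
def sum_vectors(vectors):
    """Sum a collection of same-dimension vectors component-wise."""
    return tuple(sum(axis) for axis in zip(*vectors))

# Hypothetical (x, y) base vectors for individual whole product components.
base_vectors = [(3.0, 1.0), (2.0, 2.5), (1.0, 4.0)]

summed = sum_vectors(base_vectors)  # one vector standing in for the collection
```

Going the other direction, from the summative vector back to its components, is the granularity choice mentioned above: the sum is convenient to draw, but the base vectors carry the detail.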

I’ve shown two technology vectors to illustrate that a vendor that actually sells a technology will always need a second, subsequent, discontinuous technology to switch to when the initial or prior one commoditizes. In my Slideshare presentation, I summed these vectors and their S-curves, starting with slide 53. But in the Framing Post for the Aug 12 Innochat, The Effects of Booms and Busts on Innovation, I came to realize that not only are these technologies discontinuous, the market is likewise discontinuous. Why this surprised me: I knew that with discontinuous technologies you can’t sell the next one to your current market, but I had not extrapolated that to the vector and S-curve visualizations.

For both the product and business components in the offer, I used a shorthand, the eigenvector, to build those vectors. Why a shorthand? Well, each minimal marketable function (MMF) or business capability (BC) need not be aligned; showing them aligned was just quicker to draw. The current representation isn’t quite an eigenvector, in that eigenvectors are used as unit vectors. I do think of MMFs that way, because MMFs originated with feature-driven development (FDD), an agile approach, which means that a single MMF is delivered in a single release. That release is timeboxed, as are all releases. Think one per quarter, and tie it to the revenue goals of the subsequent quarter. The BCs are another matter. By capabilities I mean not just processes, abilities, and people, but also policy. Policies arise, or are explicated from implicit expectations, in a mostly reactive, event-driven manner.

A company can be represented by its cost structure and policy structure as vectors.

Aspects as Vectors

This figure takes us back to the comment. Each of those aspects in the comment can be represented by a summative vector, or by that vector’s collection of base vectors. When all the vectors for an aspect share an origin, via translation, just because we want to do that, we end up looking at what linguists would call a morpheme, or a probability cloud defined by those aspect vectors. When laid end to end, we end up with a road. The road shows us that we can get to a Shapley Value with any collection of vectors.

All of this goes to show that we have our vectors of differentiation. Those vectors compete and collaborate. Those vectors have an internal price/cost and an externalizable performance, a performance relative to a market, large or small. Those vectors have populations involved with them, and, from the Slideshare presentation, that implies Poisson distributions, Markov processes, grammars, and machine learning associated with them. Those vectors have triangle models associated with them, and several other representations I have yet to discuss. Math is a massively distributed, collaborating population of various types, people, ideas, theorems, calculations, all rolled up into a massive geography that no one is entirely familiar with.

So goes any IT system as well. Please, do your ITIL people a favor. Get a GIS system. They need it.

The math under the Shapley Value is easy. Google it until you find a tutorial that makes it easy for you. The point behind the Shapley Value is that the system has more value in it than is obtainable by its parts alone. This should make it clear that value is not at the interface; it is in the interactions deeper in the space of work, where your feature feeds someone else’s, where emergence and radiosity tip their hand.

Comments?

