N-Grams and Strategy

December 16, 2014

In my recent travels I’ve been flying around a lot. I’ve not managed to use my laptop. But, I did get to read a few books. The math book I was dragging around was one I had to take notes on while I read it. So I stopped in at the Hudson’s Books shops in what seems like almost every airport and picked up a quick airplane read. My first such book was “Uncharted: Big Data as a Lens on Human Culture” by Erez Aiden and Jean-Baptiste Michel, http://www.amazon.com/Uncharted-Data-Lens-Human-Culture/dp/1594632901/ref=sr_1_2?s=books&ie=UTF8&qid=1418721581&sr=1-2&keywords=uncharted+in+books. This is the first book I’ve read on n-grams. It was the charts that drew me to the book.

I’ve written about charting use frequency and how this chart is our long tail and more. The frequencies of use I talked about were those of features and content in our content marketing and support universes. In “Uncharted,” the authors are talking about word frequencies taken from every book published, going back as far as possible. They used books because books are stable purveyors of history.

In their first chart they looked at the United States as a singular noun and as a plural noun. These uses changed over historic time. Singular replaced plural usage in 1880. OK, so what? Well, if you managed a piece of software back then, eventually, you were going to have to edit your UI and the content of your content marketing. Worse, you might have to consider changing your concept model, data structures, and existing features, and add some new functionality. The lexical network reflects the semantic network, and much of this ends up in code. And, as a side effect of changes to this network, you might consider how cognitive limits shape such networks and architectures. Worse, these lexical changes can escape code and become organizational issues.

Consider the fuzzy concepts of wants and needs. Marketers mess with this fuzziness often enough. It’s as bad as sitting through a product camp talk about value, most of which is correct, but barely, and usually much too close to the interface, limiting the value we deliver to customers. But, back to wants and needs, words battling it out in lexical space, aka in the n-grams and the charts of such. In “Uncharted,” the authors mention a study of wants and needs.

Need vs Want 01

This first chart shows the raw n-grams for “I want” and “I need.” In 1800, people needed more than they wanted. In 1862, that changed: wants began to outstrip needs. Needs faded into the background of our lives. But this is the kind of thing that happens with feature frequencies all the time. We talk about email dying, but really, we still check our email.
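The crossover itself is easy to detect once you have the frequency series in hand. Here’s a minimal sketch; the numbers are invented for illustration, not real Google Books data.

```python
# Sketch: find the year one phrase's frequency overtakes another's.
# All frequencies below are made up for illustration.

def crossover_year(years, need_freq, want_freq):
    """Return the first year where 'want' outstrips 'need', or None."""
    for year, need, want in zip(years, need_freq, want_freq):
        if want > need:
            return year
    return None

years = [1800, 1820, 1840, 1862, 1880, 1900]
need  = [9.0, 8.5, 8.0, 7.0, 6.5, 6.0]   # relative frequency of "I need"
want  = [3.0, 5.0, 6.5, 7.5, 8.0, 9.0]   # relative frequency of "I want"

print(crossover_year(years, need, want))  # -> 1862 with these made-up numbers
```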

Now, taking a step back, what we see are two lines intersecting. We see a game theoretic game.

Need vs Want 03

The graph can be expressed as a collection of mixed strategies. The numbers in the ratios reflect the difference between the two competitors. Still, will our organization serve both groups of users? Will we deliver the lower-priced needs, or will we go for the higher-priced wants? Will we deal with repeat business, or will we be a hits-based business? Will we create an organization that does one of these well, both well, or one of these less well? Somebody gets to decide. Those decisions end up in our offers, our organizations, and our financial results.
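One way to read those ratios: at any point in time, normalize the two frequencies into proportions, giving a mixed strategy over needs and wants. A sketch, with made-up frequencies:

```python
def mixed_strategy(need, want):
    """Express two frequencies as a mixed strategy: proportions summing to 1.
    The inputs here are invented relative frequencies, not real data."""
    total = need + want
    return (need / total, want / total)

# In a year where "I need" runs at 6.0 and "I want" at 9.0,
# the mix is 40% need, 60% want.
print(mixed_strategy(6.0, 9.0))
```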

Need vs Want 02

Another view of the same chart would tell us about the undifferentiated infrastructure (tan); the infrastructure for need; the infrastructure for want; growth and decline; convergence, divergence, and steady state; and relative investment levels. It can hint at world sizes. The point labeled zero is where the words meant pretty much the same thing; they were interchangeable. Max is where the maximum difference was achieved. The differences between words lie in their connotations and denotations.

It’s not just about word frequencies. Words are proxies. The authors mention several proxies they used to study things that didn’t have any direct words and n-grams to look at.

Given all that, the frequencies of use of our own functionality can be explored. Seeing across categories will be harder and legally more complicated. You might not know why your Save function is being used less, but seeing it tells you to go find the reason. Competitive thrusts will show up in your use frequencies.

Those mixed strategy ratios are measures of differentiation. They tell us about the offer, the company, the customer. And, notice that these charts are about counts, rather than statistics and probability distributions. Capturing your server log entries as histories of use frequencies might require some work within your organization, but the clicks are there for the charting.
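Tallying those clicks is the easy part. A sketch, assuming hypothetical log lines with an `action=NAME` field; real server logs would need real parsing:

```python
from collections import Counter

# Hypothetical server-log lines, invented for illustration.
log_lines = [
    "2014-12-01 10:02 user=1 action=save",
    "2014-12-01 10:03 user=2 action=export",
    "2014-12-01 10:04 user=1 action=save",
    "2014-12-01 10:05 user=3 action=save",
]

def use_frequencies(lines):
    """Tally clicks per feature from lines containing 'action=NAME' fields."""
    counts = Counter()
    for line in lines:
        for field in line.split():
            if field.startswith("action="):
                counts[field.split("=", 1)[1]] += 1
    return counts

print(use_frequencies(log_lines))  # Counter({'save': 3, 'export': 1})
```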

Comment please.

Pie Charts

July 23, 2014

It’s Christmas time, aka pie season. Mom cooked each of us a pie of our own. Dad got an apple pie, I got a cherry pie, everyone got their own. We had five pies. After Santa Claus, we’d eat all the baked goods. There were a lot of baked goods. Then, came Christmas dinner. Eventually, it was time to shove all that food in the refrigerator.

If you ate a few slices of pie, what was left was sparse, hyperbolic. That pie, as a pie chart, had angles summing to less than 360 degrees. If you had several hyperbolic pies, you could save space in the refrigerator by putting the slices of several pies into one pan. The resulting pie had angles summing to more than 360 degrees, so your pie pan contained a spherical space.
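The pie-pan analogy reduces to a check on the angle sum. A toy sketch:

```python
def pie_geometry(slice_angles):
    """Classify a pie pan by the sum of its slice angles, per the analogy."""
    total = sum(slice_angles)
    if total < 360:
        return "hyperbolic"   # sparse: some slices were eaten
    if total > 360:
        return "spherical"    # overstuffed: slices from several pies
    return "Euclidean"        # exactly one whole pie

print(pie_geometry([90, 90, 90]))       # hyperbolic
print(pie_geometry([120, 120, 120]))    # Euclidean
print(pie_geometry([90] * 5))           # spherical
```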

What brought that on? I had read some designer talking about how a pie chart should never add up to more than 100 percent. Sure. That’s best. Let’s always assume a Euclidean space.

When we do our linear analysis, we assume a Euclidean space. But, there are times when we shouldn’t assume the Euclidean. When we are talking about a discontinuous innovation, we start in hyperbolic space. Once the bowling alley Poisson distributions tend to normals and add up to the single normal that we start into the tornado with, we are in a Euclidean space. From there we grow our company from six sigma to forty sigma, which puts us in a spherical space. The normal gets wider, but shorter. The probabilities decrease like our margins.

We are told that innovations are risky. We conclude that innovations are risky because those linear analyses fail us. Those linear analyses assume linearity long before the space has converged to the Euclidean. When we are in the spherical space, the space has already converged to the Euclidean, and a linear projection can always be made from the spherical. This covers up the non-Euclidean situation. The hyperbolic, however, is too sparse to support a linear analysis. The hyperbolic doesn’t have a linear projection via a geodesic. Instead, you have world lines that generate something like the navigation of a taxicab geometry. You end up fragmenting the linear and turning often. But, worse, you are talking about points, not lines. Between the points, you have nothing, certainly not a projectable linearity.

So this pie/pie pan/pie chart thing, this Euclidean assumption, is the usual thing. It’s the way we, as managers, inject risk into an innovation. Beware of the implicit assumptions. Beware of the space.


From A Geometry Proof

July 20, 2014


I came across a geometry proof that was immediately interesting to me for what it was saying to my product strategist self.

The citation for the proof is A. Bogomolny, The 80-80-20 Triangle Problem, Solution #12 from Interactive Mathematics Miscellany and Puzzles, http://www.cut-the-knot.org/triangle/80-80-20/Classical12.shtml#solution, Accessed 19 July 2014. I’ll include the graphics and the problem statement from that post.

The problem begins with the diagram and some assertions. The author selected one of many solutions. It’s the solution that I’m going to expound upon, but I’ll barely be talking about the obvious geometry.




  • ABC is an isosceles triangle (AB = AC).
  • ∠BAC = 20°.
  • Point D is on side AC such that ∠CBD = 50°.
  • Point E is on side AB such that ∠BCE = 60°.

Search Goal:

Find the measure of ∠CED.




Any time you are handed a problem, the effort will be a matter of a search for a solution. When you have one solution, you still might continue your search for another solution. The solution above is the twelfth solution.

A search is a matter of traversing a search space. Before we can search a space, that space has to be populated, or generated. We use rules to generate this space. In a game like Go or Chess, the board is a set of rules. In Chess, we go with queen on color as the rule that ensures proper board orientation. In marketing there is a huge population organized via various organizing schemes. In technology adoption, the population is organized around referral bases, so we end up with a quantized collection of populations. Each of those populations is independent of the rest, so we get a nice A+B+C+… marketing effort. Well, that is true within a single phase of the technology adoption lifecycle, and not true across sequential lifecycle phases. Now, when I said marketing effort, I was talking about the marketing department. Sales randomizes, or to put it differently, sales is a problem. They sell to people who are not yet prospects. Notice the word prospects.

A search has a budget. A search has a breadth and a depth. You can do a breadth-first search, a depth-first search, or some mix. The nature of that search has organization-wide impacts. Where sales is actually selling to the leads generated by marketing, you’ve got alignment on search. In product marketing, we talk about having conversations with our customers. Well, maybe. My definition of a customer is an entity that has purchased our product. My definition of a prospect is an entity that has not purchased our product yet, and is a member of the population we are currently marketing to, in the quantized populations addressed by the referral basis and the levels of pragmatism currently being addressed by marketing. We are talking about prospects in the sense of ready to buy last year, last month, today, next month, next year. We can’t talk to all these people as if they were a single population. Sales does that, hence the randomization. Marketing does not. Not that marketing understands that. And, product marketing, let’s hope not.
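Breadth-first and depth-first visit the same space in different orders, and spend the search budget very differently. A sketch over a toy search space (the graph here is invented; the nodes might be market segments or candidate solutions):

```python
from collections import deque

# A toy search space: each node lists its successors.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E"],
    "D": [],
    "E": [],
}

def bfs_order(start):
    """Breadth-first: exhaust each level before going deeper."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

def dfs_order(start):
    """Depth-first: follow one branch to the bottom before backtracking."""
    seen, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        stack.extend(reversed(graph[node]))  # keep left-to-right visiting order
    return order

print(bfs_order("A"))  # ['A', 'B', 'C', 'D', 'E']
print(dfs_order("A"))  # ['A', 'B', 'D', 'C', 'E']
```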

The notion of an increasing return is either interesting or totally ignored as a strategic decision. Those increasing returns are predicated on a decreased cost of sale. Decreasing the cost of sale means marketing and selling to prospects (initial sales) and to customers (recurring sales) differently. Unbelievably, I used to get the same marketing a prospect would get for a tool that I had used for over a decade, that was upgraded annually, and that had me dealing with the same sales force. The price for this product was very high, because the company didn’t bother to capture their increasing return. In another company, I heard a sales guy gloating in the hallway about throwing under the bus a customer that had called him to order an upgrade, because higher commissions were taking up his time. In that company, we sacrificed a lot to retain customers. But, that was just a marketing strategy. Sales was not aligned with marketing. And, of course, it can’t be said that we captured our increasing return.

So as marketers, we have a population that we market to. We can select the organizing features of that population to exploit. In making these kinds of choices we limit the answers our search will produce. These choices define how we will move. Each rule has either a divergent impact, one that makes the search space larger, or a convergent impact, one that makes the search space smaller. The divergent is generative like a generative grammar. The divergent is discovery learning, a saying yes, an effort to adopt. The convergent is enforcement learning, a saying no, a right or wrong.

So that’s triangles. In the solution column of our table, the upper diagram restates the lower diagram by stacking the triangles on top of each other. Where stacked, decisions are shared, so we can consider the upper diagram the decision tree of our successive releases. Those decision trees can be thought of as an organization of bits. Likewise, our populations. The organization of those bits happened over time, and when programming a cognitive tool, those bits are organized by an imposed cognitive space. The underlying geometry of the innovation, hyperbolic, Euclidean, or spherical, also impacts the organization of those bits. The shape of the triangle changes as the underlying innovation is adopted.

In both the upper and lower diagrams, the relevant sides of the triangles were annotated with red arrowheads. I see those sides, those lines, as being factors, the kinds of factors that you would derive from a factor analysis of some portion of the variance generated by a system, of the variance found in data collected from a system. In a purposefully generated system, we should know how much each element is supposed to contribute to the behavior of that system. Well, that’s an ideal. Comparison with a factor analysis will reveal where we are not providing what we expected to provide.
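As a crude stand-in for a factor analysis, you can at least compare each element’s share of the total variance against what it was supposed to contribute. A sketch with invented usage data (a real factor analysis would estimate rotated loadings, not just split raw variance):

```python
def variance_shares(samples):
    """Each variable's share of total variance, as a crude proxy for its
    contribution to system behavior. samples: observation tuples, one
    column per system element."""
    cols = list(zip(*samples))
    variances = []
    for col in cols:
        mean = sum(col) / len(col)
        variances.append(sum((x - mean) ** 2 for x in col) / len(col))
    total = sum(variances)
    return [v / total for v in variances]

# Hypothetical: three features' usage measured across five sessions.
data = [(10, 2, 1), (12, 3, 1), (8, 2, 2), (11, 4, 1), (9, 3, 1)]
shares = variance_shares(data)
print([round(s, 2) for s in shares])  # first feature dominates the variance
```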

In my triangle model, the base of the triangle is where the user interface is, where the user generates behavior. The base line can represent other things beyond the user interface.

In the upper diagram, the factors would be the lines inside the largest triangle. Factors start out long and steep. Each successive feature gets less steep and shorter. Those lines inside the largest triangle exhibit this order. Factors converge to the x-axis as the variance included in the factor analysis increases. In the upper diagram, that convergence would happen at point C. Each of those factors reflects a search, a collection of divergences and convergences, which result in a point or line when we search a two-dimensional space.

If we think of the normal distribution we use to represent the technology adoption lifecycle, and impose a black swan on it, aka we stop heading to the distribution’s convergence with the x-axis, we never make it to point C. Commoditization is an example of this. The point that would be the black swan becomes the point of convergence with the x-axis of the distribution. And, in the upper diagram, the black swan point is the limit of the convergence towards C. We end up with a smaller world, fewer bits, less revenues, and a need for a new triangle to traverse.

More questions arose as I wrote this. Enjoy. Please comment. Thanks.

Strategy Alignment and Geometry

June 26, 2014

I’ve been looking at the question of why discontinuous innovations can’t be analyzed successfully. In earlier posts, I suggested that the earlier phases of the technology adoption lifecycle, each lane in the bowling alley, were characterized by Poisson distributions, which over the lifecycle converge to the normal distribution. I’ve seen accounting data showing this. But, that’s probability distributions and machine learning, again something I’ve talked about before. There is a corresponding geometry for the technology adoption lifecycle. It begins sparsely in a hyperbolic space. That hyperbolic space cannot be projected to a linear analysis. Over time this space converges to the Euclidean, aka the linear analysis. Then, with the move from six sigma to thirty-six sigma, the normal gets wider, but shorter, since the probabilities still do not add up to more than one. In terms of space, however, the space goes spherical, as in information overload, which allows us to project to the linear in numerous ways. These numerous linear projections enable growth, numerous pathways to growth.
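That Poisson-to-normal convergence is easy to watch in simulation: give each bowling-alley lane a small Poisson count, and the totals across lanes pile up into a roughly normal shape. A sketch, with an invented lambda:

```python
import math
import random

def poisson_sample(lam):
    """One Poisson draw via Knuth's method; fine for small lambda."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

random.seed(0)
# Each of 8 lanes contributes a Poisson(2.0) count (lambda is made up);
# the sum across lanes tends toward a normal with mean = variance = 16.
totals = [sum(poisson_sample(2.0) for _ in range(8)) for _ in range(5000)]
mean = sum(totals) / len(totals)
var = sum((t - mean) ** 2 for t in totals) / len(totals)
print(round(mean, 1), round(var, 1))  # both should land near 16
```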

I also discussed how to draw curves based on correspondences of axes in some prior posts. I applied this idea to strategy alignment. It turns out that these curves illuminate the underlying geometries. So let’s see what the heck happened.


For these curves we draw a line from 9 on the x-axis to 9 on the y-axis, from 8 to 8, and so on. This gives us a curve (black). Then, we draw a circle (orange), which represents the Euclidean, aka the linear analysis world. The black line appears inside, on, and outside the orange circle. Inside (under) is hyperbolic. On (linear) is Euclidean. Outside (over) is spherical. The relevant spaces are colored: yellow for hyperbolic, aka caution; green for linear, aka thoughtless; and light blue for spherical, aka overly linear. The core idea here is to know where trouble will turn up in your analysis. Hyperbolic spaces are difficult because you are stuck on one of Einstein’s world lines; you cannot move freely in the space. Spherical spaces are likewise difficult, because your projected linearity could easily be wrong.

For strategy alignment purposes, we will use our black line in this figure as our baseline. We will consider that the numbers on the axes reflect some pairwise prioritization. Each organization contributing to an axis would have its own order. That order changes the curve to reflect that organization’s aggregate prioritization. Those orders contribute to the curve we will draw. Then, we will look at their curve relative to the baseline curve.

Their curve should reflect their factor analysis. Likewise, the baseline would reflect the factor analysis of the entire organization.


I’ve ordered the factors here relative to their length. In general, factors are ordered by length and angle. Factors describe a curve.

Notice that some of the lines that built our baseline curve are not making any contribution to the curve. These lines represent irrelevancies, or in game-theory speak, dominated strategies. Since some of the lines are not making any contribution to the curve, the shape of the contributing organization’s curve will be different from the baseline. The gaps between the factor lines represent the intersections of the factors. With better tools they should be points. Then, the length of the factors should accurately reflect the results of a factor analysis.

We can compare the contributing organization’s curve to the baseline curve.


Here we drew the baseline curve based on the dark blue lines. The baseline curve is shown in black with red showing us the gaps in the alignments of these organizations.

The axis of symmetry is a managerial control that is defined for the rectangle representing the world. Management can move it. The axis of symmetry here is shown for the baseline curve. Notice in this last graph, the system is only linear at a single point. The dominated strategies are shown in brown.

A graph can be drawn for each contributor. Those can be compared to the baseline. This will give you a 3D view of your organization or value chain. The collection of curves will give you a surface and expose much in terms of those dominated strategies. The war between marketing and sales is about dominated strategies, or the tension between demand generation and demand creation.

Keep these geometries in mind when throwing around those linear forecasts.



Discontinuous Innovation

July 15, 2013

I’m reading a book on math for biology majors. The first chapter, on discrete time dynamical systems, was great. It tied back to the phase graphs in the book on chaos I read back a few years ago. The next chapter is about derivatives. I was going to skip it, but I’m glad I didn’t. It takes a completely different approach. It’s not trying to make you into a mathematician. So I get to the part about continuity. Boring, except it wasn’t.

Instead, I found myself looking at a graph that was simply shocking. I must have seen this before, but no, in my math books discontinuities were open points omitted from the domain or range. Not this time.

In the earlier chapter, we went looking for equilibriums, and in a certain situation, there are none. That situation: a discontinuity between two parallel intervals.

So this time, we have two intervals with a vertical gap, a discontinuity between them. Of course, that wasn’t the shock. Instead, it was putting this in the context of trying to explain discontinuous innovation. First, the graphic. Then, the build to what it demonstrates.

Discontinuity IV

The function we graphed was a step function:
f(x)=2Vt if x≤20 and f(x)=3Vt+20 if x>20.
The major point here is that they don’t intersect.
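In code, taking V = 1 as a placeholder coefficient (the exact functions are from memory), the jump at x = 20 is explicit:

```python
# Sketch of the step function with V = 1 as a placeholder coefficient.
def f(x):
    return 2 * x if x <= 20 else 3 * x + 20

# The discontinuity is the vertical gap at x = 20, the gray middle ground:
left = f(20)             # 40, approaching from the left
right = 3 * 20 + 20      # 80, the limit from the right
print(right - left)      # a 40-unit gap; the two pieces never meet
```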

Next, we throw the marketing at it.

Discontinuity I

From a marketing perspective, a discontinuous innovation is about a new, formerly unserved population: a population that wasn’t interested in your offers before this one came along, a population you weren’t interested in, and populations that are not known to each other, neither serving as a reference base for the other. Like the demographers and ethnographers trying to converge into a new discipline that I mentioned in another post. Still calling each other names. The technology under the hood isn’t similar to that of the existing population’s tech. The technology might not even be as good, yet. But, this discontinuity is wonderful, because it lets you create a new category and be the next near-monopoly exemplar corp in the biz press, a decade from now. Yeah, it’s not a next-quarter thing.

But, back to the graph. The thick brown lines represent step functions that have been associated with their populations. I color coded the areas under those functions with aqua and purple. And, I show the vertical gap, the discontinuity, in gray. Then, thinking about alleles, I differentiate the functions with a single bit, summarizing all the bits it takes to make those two function lines happen in a product.

The gray area represents a curriculum problem, a content problem, the absence of an old-new contract. When Relativity came along, its adherents were the new population. The adopters had to make a knowledge leap and believe in the stuff, but doing so did help them, so they did it. There was no road from Newtonian mechanics to Relativity. To move the prior population was to teach them, and retire those that wouldn’t learn. This stuff happens with our technologies as well. Take object-oriented programming (OOP). Initially, OOP was radical. So radical that my CS profs wouldn’t go there until later, not with us undergrads anyway. But, it finally fell to MS to adopt OOP in their API. When they did, they did it in a continuous manner, and OOP stopped being radical. OOP wasn’t the same either, so today you still hear object thinkers trying to recapture the promised upsides of radical OOP. Oracle helped norm OOP as well by killing off the object-oriented database management category. Yes, to persist is a verb, or something that programmers still have to mess with, because OOP doesn’t do what was promised.

Oddly enough, back seven years ago, I was reading Seeing What’s Next, one of Christensen’s books in the dilemma series. I posted a blog talking about how discontinuous was lexical, a decision about an approach. Christensen had a graph of S-curves. I redrew it. I put the old S-curve in the background, and the new S-curve in the foreground. The middle ground was the lexical space. The middle ground was the discontinuity. Eliminating that middle ground collapses the radical, the discontinuous, into the continuous. Eliminating the middle ground changes the economic outcomes, because without it, you don’t need new value chains, and eliminating it changes the geometry so it is Euclidean or spherical, depending on the size of the company pushing the underlying technology. Eliminating it also takes the tornado-allocated market leadership with it. Nah, without that middle ground, all you get is another market allocation in an existing category, aka a very small allocation of minuscule marketshare.

The discontinuity on the graph is the same as the middle ground in my long-ago illustration. That discontinuity is gray. There are no bits here. This is the unknown. But, here is the thing: we actually decided not to extend the graph of the interval on the left so that it would intersect the interval on the right. We decided to keep the middle ground, and to keep the populations mutually exclusive. We decided to separate. Unfortunately, the business orthodoxy doesn’t let us separate. They’ll tell us that it costs too much. Then, the innovation fails to achieve its business objectives, and it was the innovation’s fault. Sorry management, but no. Christensen has not won the war on the separation concept, so we will all lose until we get this right. Separation is necessary. The point of separation is to create wealth, to create those value chains, not to capture cash, or pretend to be a bank like all those sigma 30 to 40 public companies out there, companies with no margins and an absolute need for cheap labor. But, the orthodoxy will wear you down. It was Moore that used to tell us that discontinuous innovation is about creating wealth. The Chasm Companion was about this wealth creation via the value chain concept. It was also Moore that disavowed separation as being too expensive in his last book, a book where he turned his technology adoption lifecycle inside out for the sake of the orthodoxy he’s been working for since the Web 1.0 dot-com bust. Who can blame him? Nobody does real technological innovation anymore. We are replicants now.

But, there it is in gray, separation.

So if the discontinuity is a choice, what of continuity?

Discontinuity II

So here we are with our situation no longer discontinuous, no longer radical, no longer about creating wealth. Loads of cash, sure. And how did we do this? We decided. We decided to let the function on top keep going until it intersected with the other function. We changed
f(x)=3Vt+20 to f(x)=3Vt+10.

I’ll have to check those functions and the conditionals, but that’s what I remember right now.



June 21, 2013

I’ve been thinking about geometry a lot these days. What does the sparseness of a hyperbolic geometry feel like? Does hyperbolic geometry encompass Moore’s bowling alley? Do hyperbolic geometry and Poisson games encompass the core management issues? Lots of questions. It just compels me to learn more math, but much of that math hides the geometries, or explains everything from the comfort of Euclidean geometry. The linear assumption of management includes the Euclidean assumption. We bumped into this in my last post, Depth of Value.

So, I’ve sketched up a quick graphical comparison of the geometries. I use the geometries: hyperbolic (H), Euclidean (E), and spherical (S) to show what a triangle looks like, the triangles of the Triangle Model. These geometries are blunt instruments.


They didn’t teach us this stuff back in school. They do teach it to high school students these days. We’re on the cusp of many new understandings. Oh, don’t blame our teachers. Mathematics teaching lags mathematics by about 50 years. Some of the mathematicians that produced the ideas we are just now hearing about are still walking the halls of academia, or died in our lifetimes. I am finding math textbooks at Half Price Books that have moved the ball. Yes, your kids know what a Markov distribution is and what to do with one. Great!

I’ve correlated the distributions we use with the geometries. A discontinuous technology starts out as a Poisson distribution. It’s hyperbolic out in the bowling alley. The lanes are straight, like Einstein’s light, and all that ensuing weirdness. That discontinuous technology then crosses the chasm and moves into the normal distribution (6 sigma) of the vertical, a smaller normal in terms of standard deviations, sigmas, than the normal of the eventual IT horizontal. These normals live in Euclidean space. Eventually, that discontinuous technology company is M&Aed into the huge public companies with the vast sigmas (30 sigma), the vast normal. The total probability under that vast normal is still one, so the height falls, the margins thin, and you need a scraper to get them off the floor. The vastness still reflects the decisions constituting a decision tree, a triangle, but it bulges out of the confines of the Euclidean plane. Real options, strategic choices abound in the spherical, but not so in the hyperbolic.

Notice that the figure doesn’t include all eight lanes in our bowling alley. Three were enough for our purposes. There is much more to this Poisson tending to the normal and its visualization across an eight-lane bowling alley and time. And, more again when you start to account for the layered structure of a medium.

Somehow, we built a business orthodoxy based on the likes of Sloan’s GM. We teach that orthodoxy. We use linearity to disguise the spherical geometry under the hood. The gaps don’t bother us much. It looks like a nice generic set of tools, so we preach them as universals. We teach it to everyone. Then, we wonder why we can’t innovate. We blame the innovation itself, because we never blame ourselves, and never question the generalist generics of our orthodoxies.

I defend innovation, because it builds the businesses the orthodoxy milks, the cash cows. It builds wealth, wealth as something other than piles of cash, wealth that requires collaboration beyond the firm, beyond the cash flows of our own organizations and value chain. It’s how we make a world different from what we’ve known.

Continuous innovation doesn’t happen in hyperbolic geometry. But, discontinuous innovation will happen there, because discontinuous innovation is just part of a product being used to foster adoption of that technology. The transition from Euclidean to spherical still happens with continuous innovation, so even continuous innovation can find gains in the awareness of its geometries.

Mind your geometries.


Depth of Value

June 12, 2013

These days I spend part of my day at a university library, one that buys new books, a rare thing these days. The state university in town buys journals and skimps on books, so the library shelves are full of aging books. The new books in this library are amazing.

Yesterday, I perused Soil Ecology and Ecosystem Services. Before reading Nowak’s Super Cooperators, I would have passed this one by. But another thing caught my eye: the notion of soil providing services.

I’ve put product ecosystems on my roadmaps. That’s not new. I’ve worked in TQM/ISO places long enough to know that every entity has customers, stakeholders, suppliers, and services. And, the wildest Web 2.0+ evangelicals spout services, services, services as their mantra. Soil just serves. At least, that was my thinking until I crossed paths with this book.

Beyond services, it was a graphic that caught my eye. It fits alongside the triangle model. It describes populations. It correlates populations and features with value, value not just at the interface, but at depth, in the away sense. Value at depth has been with me for a long while now, but finally, here is a way to get more specificity into its description.

I’ll start with a paraphrase of the original graph.

Soil serving populations with specific services.

In this graph, the x-axis is the interface between the sky and the ground. You’re looking at the dirt between your feet. Soil services extend from there into the depth of dirt under your feet. The view of the grass and ground that you get is your view in the model-view-controller. The red lines represent the populations being served by the soil. The blue is the amount of service provided, the use frequencies that I’ve illustrated in the long-tail discussions. The variable of pore volume will tie into diffusion, aka the diffusion of innovation.

Now, I’ll show the graph’s relationship with the Triangle model, and relabel the graph so it’s more in line with software development.


I’ve aligned the Triangle model with the value-depth graph. The interface resulting from all of your development decisions appears in blue at the bottom of the decision tree, although upside down, and at the x-axis, the former sky-ground interface. Soil services are now a collection of minimal-marketable functionality. Populations are still populations. A little color reveals that we have an over-served population.

Both graphs have log-log axes. Beyond straightening curves, beyond those algebraic transformations implying changes in geometry, log is how you encode a base or modulo arithmetic in a graph, aka positional notation. Cognitive limits impose a base on the underlying data. Humans have cognitive limits. Brains impose cognitive limits. Media impose cognitive limits. Our applications serve one population well, and other populations not so well based on the population’s cognitive limit parameter. We probably pay no attention to this, but the cognitive limit is there, and it is very mathematical. It reaches beyond our interfaces (views) to our models, to our user support content and to our marcom.

The notion of the cognitive limit has become controversial, because the original research of the past is seen as flawed, but attending to this matter will pay off. When you hear advice like never having more than three bullets on a PowerPoint slide, what is really being said is that PowerPoint as a media is imposing a cognitive limit of three. The rule, as it is usually stated, is 7 plus or minus 2. So PowerPoint clips the mental capacity at a perceptual level long before it gets us worried about our short-term memory limits and paging to long-term memory. I won’t say the limit is 9. Software is supposed to be a cognitive tool, a tool to think with, but that’s what it can be and usually isn’t. In a tweet on presenting requirements, I suggested putting it all into a PowerPoint presentation, precisely because its cognitive limit would limit the number of requirements, use cases, or user stories that we expect to deliver in the next iteration. The limit forces us to organize the content and the reveal, or rhetorical encounter. The limit forces us to structure the experience. We don’t have to make them think, unless of course, we are helping them think. So know your population’s cognitive limit, and if you serve several different populations with several different cognitive limits, realize that not everything will be used by every user. Don’t choke the weakest user. They still make the upgrade decision, or in a Web 2.0 world, the subscription renewal decision. No, the economic buyer does not make those decisions. Don’t call him for those.

So before we run off, let’s set a cognitive limit of 7 on a big project delivering twelve things. We’ll also set a cognitive limit of 3 on it.
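To make the exercise concrete, here’s a minimal sketch (the helper name and the treatment of the limit as a base are my assumptions, following the log discussion above): how many levels deep a user must chunk twelve things under each limit.

```python
import math

def levels_needed(items, limit):
    """Depth of a chunked hierarchy needed to present `items` when no
    level may expose more than `limit` chunks at once (limit as a base)."""
    return max(1, math.ceil(math.log(items) / math.log(limit)))

for limit in (3, 7, 10):
    print(limit, levels_needed(12, limit))
# a limit of 3 forces three levels of structure; 7 or 10 need only two
```

The tighter the limit, the deeper the dive, which is the cognitive load charted against time to return.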


Now, I’ll put the table into a graphic, so we can see how the total cognitive load and cognitive limits affect our customer’s time to return.


In this figure, we start with a log-log cycle. We highlight the logs of bases 3, 7, and 10 that we’ve been using as examples. These bases are shown in black. The others in aqua. A log-log scale presents us with squares within squares. These cognitive limits hint at the dive that we make into whatever we have to learn: textual content, automated content, implicit cognitive models, implicit models, implicit model constraints, and technology adoption lifecycle phase mediation. We also have to find the explication gaps and workarounds, and other negative use costs. We use parabolic physics here. First you climb the platform, then you dive. The red, green, and purple grids use the base log squares as unit measures. We take the cognitive effort required to make the dive off of the unit measure grids. The bases of the grid provide each user with some credit for knowing some of the content. The black horizontal line at the bottom represents the system ground as an absolute. The grounds for each cognitive limit were shifted, again, relative to the expected prerequisite knowledge.

The Time To Return (TTR) in the figure is a little bunched up. If the curve had been wider, the arrivals would have been stretched out, and more realistic. Practices like Minimal Marketable Functionality aim at delivering smaller cognitive loads, and arriving at the TTR one unit of minimal marketable functionality at a time. Moving training earlier in the sales cycle can also move the TTR around, and reduce negative use costs.

When we take the population as a whole, we end up with a collection of parabolas generating a surface over time–a fireworks show.

So, back to the soil. The population of soil microbes being served at a particular location on the graph are there, because that is the only place that has what they need. And, so it is with an application. We, both us and those soil microbes, are seeking, searching for cognitively cheap, exploitable, consumable value. Making it too easy is a loser, because we will be bored. Making it too hard will result in a support call, or worse, an exit. Rocks are rocks. And, no they don’t rock. Microbes and humans go elsewhere.


Twinkle, twinkle, little product

March 29, 2013

So we’re far away from the city; the night is dark; the moon is full; the light sufficient; headlights off; just us and the sky, a wide and twinkling sky; and our car moving us beneath the glorious heavens. The stars twinkle. We recall the old rhyme, as the sky takes our breath away.

A few nights later, some of the staff is working late to meet tomorrow’s deadline, so you’re doing your leadership thing while you lose yourself for a moment looking out your skyscraper window. The stars still twinkle. The moon doesn’t. The streetlights don’t. Only the stars, the few you can see standing in all that light pollution, twinkle.

Years ago, decades ago, the astronomy community realized that they had to move their telescopes and other scanners out beyond the atmosphere if they were going to get rid of the bugs we call twinkles. Once they got the Hubble up there, the stars no longer twinkled for astronomers. With the bugs gone, they gained clarity. They gained vision. They gained insight. They moved their value chain beyond one of its constraints, and went on to capture that value, deeper value.

Astronomers can hardly be blamed for those twinkles, those bugs. Those twinkles arose from a physical constraint, the sky. Managerial decisions wouldn’t have made those twinkles go away. Quality assurance wouldn’t make those twinkles go away. Better astronomers wouldn’t make those twinkles go away either. Twinkles persisted until recently.

But, I said product didn’t I? How do our products twinkle? How do our products twinkle despite management, programmers, quality assurance? And, I’m not talking about the bugs that could just as well turn up in a telescope rather than our code. I mean the twinkles, the bugs, we are blind to; the politics of product and the politics of elicitation; the politics of governance; the CEO; the execs; and the management of the software vendor organization; yes, right to your door, that of the product manager; and beyond that the politics out in the distance there, the politics of the economic buyers that constitute our customers; and our early adopter clients and their organizations’ management. Call them the air of the development world.

In recent tweets, I’ve had to remind peeps that, in my world at least, that of companies that sell technology, rather than content, aka not a web 2.0 company, the economic buyer is only the first buyer, the person in the initial sale, and given the enterprise nature of the pursuit of our increasing returns, not a person involved in the subsequent sales, not a person that will even involve themselves in the UX. That economic buyer does, however, get sold some notions of business value, and lacking that might snap back and see to it that our application is removed from their company. That economic buyer is at the apex of the purchasing company’s politics spreadsheet. That economic buyer is the twinkle supplier.

Software development is replete with myth. Requirements are never stable. But, that flies in the face of those of us who worked in functional domains. Our requirements rarely change. We’re mostly about reproduction, doing it again and again and again. And, meaning wise, our meanings rarely change, so don’t look at your elicitation sources as the sources of twinkle. And, absolutely, what that myth tells us is that requirements never stop twinkling. Like stars photographed from the ground, the twinkles stop, because we fixed them in silver. Requirements fix them in words. Developers never see the twinkles until a project turns into a program, or in a more Agilist world, the next iteration or refactoring. Even then, developers are far away from the nuclear furnace.

The twinkle, twinkle, in an internal organization, requires us to look up. And, don’t talk to me about flat organizations. If an organization was really flat, my CEO would be shopping at Walmart and wearing those t-shirts they give us, so we have clothes to wear at work. There is always an up. And, in a vendor organization there is a down. Flip the representation over if you like. Source politics on one side and builder politics on the other. Twinkle, twinkle–southern hemisphere, northern hemisphere–matters not. That politics is hierarchical, deep, and fused. The end results are tradeoffs. We talk about tradeoffs as if they were necessary and the core of what we do. The tradeoffs keep changing. So the twinkle never ends, and the product fits loosely if at all. Does it serve the economic buyer’s expected value, or the need for users to get some aerobic exercise pushing a mouse across a screen while compensating for the mismatches between the software and their functional cultures?

The twinkle does have solutions–AOP for one. Ask a developer about it, or search this blog. It might be in my prior blogs that are now inaccessible, one of the wonders of SAAS. But, beyond the technical enablers, the booster rockets, we need to get rid of the twinkle, the politics that ruin our ability to deliver value fully. No endless chain of iterations will eliminate the twinkle. Only we can get our software up above politics. Start by noticing it. Of course, we can dream of the day….

Back to Blog

March 15, 2013

It’s been over a year now since I disappeared from my blog. I still have no ability to draw the bitmap graphics that I’ve used extensively in my blog, but a writing book challenged me to go completely lexical. Disappearing doesn’t mean that I forgot my backlog, or that new ideas haven’t shown up to extend the universe. But now, I’m back here.

Last month, out of frustration, I started another blog, Product Strategist 2. But today, I posted a link back there. If you’ve already followed me out to that blog, we will be staying here. Check your subscriptions. Thanks.

About Control

January 7, 2012

Where did I put my controls? If you have authority and use it, rather than doing something a little more complicated and implicit like lead, you know your controls are explicitly up there in the hierarchy. If you practice shepherd leadership, you know it’s out there in the implicitly plowed field of yours and your team. If you’re dealing with channels, you better understand gravity, control at a distance, because you are far away from the decision making of the actors.

This afternoon’s road rage trigger pulled into the fast lane as I was closing on an open slot adjacent to a semi a lane to the right, the shoulder adjacent the open lane and a separator wall to the left where it should be. Slow traffic in the fast lane is supposed to be illegal, so where is the policeman who is supposed to pull this guy over? Yeah, a moving control. Stuff we deal with everyday like banks that won’t loan. Is it any wonder, I’m left wondering where my controls are. No, I didn’t road rage. I made the six lane changes to pass the control and get on with it. Thanks to the road controller the world was a little more dangerous than a fast drive through the slot and beyond for those few moments. Then, the world was safe again for the fast traffic left to itself in the fast lane where it belongs in a lane discipline state like Texas, which likewise makes it easier for the police to know where to look when on the lookout for the harmless speeders.

So here we have various kinds of controls: barbed wire fences, paths up the cliff face, flat surfaces, ramps, hills, speed bumps, twists from inside to outside, and muddy plowed fields from those collegial conversations in the rain. So let’s talk about controls, about mission, about vision, about all the things that lay out what must be and how it must be done. This isn’t about lists. It isn’t about maps either, not this time.

We may get lost in the math, so I’ll omit it, gloss over it, or hint at it. If you want to dive into it, we can talk later. Consider these ideas Lego blocks, or yet another wrench of one kind or another that you can use when you get tired of the straight lines of our linear assumptions.

Yes, this coulda, shoulda, woulda, mighta been a slide presentation, or a cartoon. It’s graphics rich. It’s long. And, given I drew this stuff months ago after a period of trying to crank out parts 2 and 3 of the long tails, thick tails presentation, it concludes where I lost the time to stay focused, thanks to the rat race of keeping food on the table, rent paid, and car running–my current controls.

So I’ll start out here with the typical linear view of the business proposition. Linear teases us with 8th grade geometry. Two points are a line. Two lines are a point. Hints of recursion; of arcs being nodes; of von Neumann’s zero-sum game theory; of drafting boards, t-squares, triangles, compass, and rulers, of much, yes, even CGI at some level. Of some old line still used in a bar.

Mostly linear is a belief. Given that so much math has moved on from the linear and the orthogonal, linear survives just because “non-linear” is less familiar, more risky like discontinuous innovation, and harder to communicate to those less analytical, less abstract executors of our strategies. Linear is helped out by regression, a line defined by many points most of which are not on the line–controls at a distance. Still, regression, like much of math, is beholden to the Pythagorean notion of distance.

The Linear Assumption

We assume that if we postpone a decision, all is well, because things will just go on nice, straight, and level. We might be bothered by the idea that our industry, our category, our financial performance is just going to converge with that of our competition. We might want to turn.

The Curved Reality

The reality is more like we are curving, turning all the time, but we just project all those turns onto the path of our linear assumption. Going linearly straight would take so much effort, we’d be doing nothing else. Strategic alignment would kill us. Besides, we have an easy cheat. We can just project all that curving down to our linear assumption and get some sleep.

Log-Linear Transform

The mathematicians built these tools, not for the businessman, but for themselves. They work hard to make mathematics easy on themselves. I might have the name of this transform wrong. Being loose here makes both of our lives easier. But, rest assured, the transform exists, has a name, and yes, you learned it over and over again back in school.
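A quick sketch of what the transform buys you (the numbers are mine, for illustration): exponential growth plots as a curve, but after taking logs the same data walks a straight line, restoring the linear assumption.

```python
import math

# Exponential data: y = 2 * 3**x curves away on linear axes.
xs = [0, 1, 2, 3, 4]
ys = [2 * 3**x for x in xs]

# After the log transform, log y = log 2 + x * log 3, a straight line.
log_ys = [math.log(y) for y in ys]
slopes = [log_ys[i + 1] - log_ys[i] for i in range(len(xs) - 1)]
print(slopes)  # every step equals log 3, a constant slope, i.e. a line
```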

Log-Linear Twice

So here we see that earlier two-dimensional curve being depicted as a three-dimensional curve. Raising the exponents leaves us with having to carry out two projections back to the linear assumption. Easy enough. Keep the story straight; simple; communicable, like a disease. It hides intentions if you need to keep something secret while appearing to be completely open. Yes, those fast followers follow with their own linear assumptions.

The 3-D Assumption

Yes, we live in a 3-D world, so we assume that to be the nature of even the 1-D linear assumption. Alas, we would be wrong. Studies on human perception show humans to sense only 2.5 dimensions. But, mathematicians like dimensions to be integer constructs, so they round up that 2.5 to 3, and we just get on with it. The dimension of towards and away stops at our stomachs, so the known world hangs out behind us only as a concept, much like the past and the future.

Our 2.5-D World

Here the z-axis, that half a dimension runs from the upper left to the bottom right. Notice there is no arrow moving off into the upper left. The three divergent lines find their way out in this 2.5-D playground. Of course, corporations perceive in ways independent of human perception.

2.5-D Reality

The dimensions are counted out in this figure. Towards might be labelled away. It’s a frame of reference problem. The perceptual physiologists probably have some standards laid out for their discussions of the matter. Notice the red line disappearing into an electrical outlet of sorts, really a dimensional boundary. That line might actually be 4-D, but we are only reporting on a 2.5-D world, so statistical significance would make the line just plain disappear, because the data ran out, and a regression only sees as far as its most distant outliers on each axis of the reported dimensions. Magic if you will, or thick tails falling into the implicit.

I know. You know this stuff. But do you use it? Or, lose it? Do you make your roadmap a list, so you don’t have to do all that GPS and dead-reckoning math? Do we have inertial nav for our roadmaps yet?

Decisions with Equations

Now, I’ll admit that I drew the lines long before I figured I was going to talk about equations or polynomials. I don’t have Mathematica, so the equations are loose approximations. The equations of the lines run from 1-D to 2-D to 3-D. That’s pretty much the point. The point was to open a gateway to other topics, codecs, protocols, which in turn lets us build other worlds, worlds that couldn’t be built otherwise. Some of us PMs push codecs and protocols, our technologies, out into the world embedded in products and services. That’s where value-chains, lasting wealth, and careers get built. You don’t have to do that if cash and jobs are as far as you want to go with your change the world pursuits.



So why did I include the word decision in the titles of the last two graphs? Well, once you kick the entity painting the line in some direction and some magnitude, oops those sneaky vectors, you’ve made and implemented a decision. You can stop thinking at that point. But, you’re paid to think. You’re paid to fake out the soccer goalies paid by your competition. You’re paid to turn, rather than go straight. You’re paid to decide. Those decisions dance with the notion of controls. Those controls might be pool table bumpers, so you can stick with the linear assumption, or they might be curves of all ilks. The triangles mark the moment of decision on each of the lines.

Consider real options, the idea that you pencil in future decisions along your vectors of differentiation, so an assessment of the tracking portfolio of each of your strategies is calendared and made. Some at least minimally go/no go decision is made. The linear assumption is littered with decision points. The accounting measurement lattice works similarly. Both don’t force you to turn, but might necessitate a turn in response to changes in the underlying situation.

Notice that your equations can only be so complicated given your cost structure and policy structure at the time of decision. The curve, the turn might have to be simpler until you can hire and buy the needed capability.



Back in the day, you drew a flowchart before you coded. You made a decision, you branched, and as far as you ever noticed, the world didn’t change because of the decisions made inside your program. You went left or right. You did this or not. You did this or that. Your decisions were binary tending upward to the case statement with the ensuing catch all called OTHERWISE. You didn’t really think in terms of dimensionality. You never got around to the n-dimensional thing I call the splat. You never asked yourself the mathematician’s question of how many dimensions were involved, you never rounded up to compensate for the programming language’s dependence on integer-based branching. What would a half-a-dimension branch be in C++ logic flow? Worse, since you were not Einstein, you didn’t ask about curvature. It just wasn’t done.

A book on cosmological topology changed all of that for me. It’s not right linear vs. left linear. It’s curvatures. It’s crumple zones. It’s densities. It’s all those roadmaps that didn’t prove their case and ended up as crumpled balls laying wherever your intended 3-point shot left them in the neighborhood of your trash can. It’s that straight line bent all to hell. It’s that straight line, reorganized into a collection of composite functions.

Topology is one of those topics that separates mathematicians and statisticians. I’m taking this from a statistician I met a while back that never cleared the hurdle of topology.

Non-Euclidean geometry was created by some folks who questioned Euclid’s fifth postulate, the parallel lines postulate. They thought this stuff up, so we don’t have to. Euclidean geometry honors parallel lines as a truth. Non-Euclidean geometries don’t. The earliest two, as far as I know, non-Euclidean geometries involved convex and concave worlds where the parallel postulate was violated. Equality became inequalities. The angles in a triangle used to add up to and equal 180 degrees. With inequalities, they were equal to something less or more than 180 degrees. The constraints changed and with those constraints, worlds changed. The above figure shows the relations between the underlying geometries and their curvatures. The constraints asserted differences in control. Are you inside the curve or outside the curve? All of this becomes a roller-coaster ride.

More on Geometries

A curve has an inside and an outside. That curve exhibits both geometries depending on the anchor of your view. The right and left branch of a decision becomes a choice between one curvature or another, so decisions choose geometries.

Geometries and Their Angles

So here we lay out the relationship between angle and geometry: Sum of angles of a triangle = 180 degrees, Euclidean; Sum > 180 degrees, Spherical; Sum < 180 degrees, Hyperbolic. Einstein’s space-time is hyperbolic. But, where are the controls? Right. Well, shapes control, lines control, points control. Put them where you need them.
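The spherical case can be checked directly with Girard’s theorem (my example, a unit-sphere octant, is an assumption for illustration): on a sphere, a triangle’s angle sum exceeds 180 degrees by exactly its area.

```python
import math

# Octant triangle on a unit sphere: cut by the x, y, and z axes,
# every corner is a right angle.
angles = [math.pi / 2] * 3
angle_sum = sum(angles)
excess = angle_sum - math.pi    # spherical excess = the triangle's area
print(math.degrees(angle_sum))  # 270.0, well past 180, so spherical
```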

Decisions as Bezier Curves

In graphics packages like MS Paint, or Adobe Illustrator, or say, just about all of them these days, Bezier curves are the first place you run into controls that define a line, a path, that are not on the line or path itself. My first run-in with such things was NURBS curves. When I ran into them, I thought, hey, this is cool, because adding a control point didn’t change the curve. It just granted you the possibility of additional control deeper into the future, deeper into your strategy. I’ve since come to discover the same kind of control points in numbers themselves, polynomials, hell, everywhere. It is just the way mathematicians and even logicians do things. And, those of us distant from math and logic do it as well. Do you keep your apartment or ditch it when moving in with her/him?

Do we grant ourselves degrees of freedom or commit?

But, what of the previous figure? The endpoints of a Bezier curve are fixed on the spline. The four points and three straight lines constitute a spline. The spline defines the Bezier curve. There can be more lines and points to this spline. The four points are control points. You move the control points to change the curve, aka to control the curve. The deep coolness of these controls won’t be revealed until the last paragraph of this post.
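A minimal sketch of that definition (the helper and the four example points are mine): the cubic Bezier blends its four control points with Bernstein weights, so only the endpoints sit on the curve, while moving either interior point steers it.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Cubic Bezier in Bernstein form: a weighted blend of the four
    control points; only the endpoints get full weight (t=0 and t=1)."""
    u = 1 - t
    w = (u**3, 3 * u**2 * t, 3 * u * t**2, t**3)  # weights sum to 1
    return tuple(sum(wi * p[i] for wi, p in zip(w, (p0, p1, p2, p3)))
                 for i in range(2))

spline = [(0, 0), (1, 2), (3, 2), (4, 0)]  # four points, three legs
print(cubic_bezier(*spline, 0.0))  # (0.0, 0.0): the endpoint is on the curve
print(cubic_bezier(*spline, 0.5))  # (2.0, 1.5): pulled toward, but not
                                   # through, the interior control points
```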

Decisions and Control Points

Here we’ve made the control points as decisions explicit by annotating each decision with a triangle.

Decisions Constructed

If you’re a reader here, you know that I use a large triangle, non-iconic, to represent decision trees that result in realizations. My use of this symbology is something I call the Triangle Model. Decisions are realizations. Decisions are constructed, built and later made. In the figure above, the circles structure the curve, and the tan-colored triangles build further controls that control the implementation of the curve, aka the line. The triangles imply many decisions made by many people, potentially many organizations either cooperatively, or in a zero-sum, linear programming face-off. Each decision tree contributes a limiting surface to the overall definition of the curve.

Decisions and Geometry

Here I’ve added a few more details to the surface hugging curve. Before it makes sense, I have to step back and bring up a metaphor I first came across in a philosophy-based logic class. Truth is not the central issue in logic. Validity is. Validity asks the question, is the argument constructed correctly. Validity is a question focused on the plumbing, not the truth or falsehoods flowing through that plumbing. Validity is about the carrier of logic itself. Truth is about the content conveyed by that carrier. Logic as a whole is about a carrier and its carried, so logic is a media. Similarly, mathematics is likewise a media. This does not become apparent until you bump into parametric equations. Those equations can be thought of as tubes. The value at time t is a place in the tube. The point can even spin if you’ve built quaternions into the equation. Never mind what a quaternion is. It spins. That’s enough for now. So math is a media. So software is a media.

In the figure the pipe is larger than the point. The pipe is like a water slide. A point starts out on the centerline, then finds itself on the pipe wall. It moves from being symmetric to the pipe to being asymmetric. It is on one side of the pipe, one edge, then it rotates or switches to the far side of the pipe to take advantage of a curvature. The point makes a decision. It starts out in a Euclidean world, a flat world, then it finds itself in a spherical world, but preferring the hyperbolic, due to its corporate capabilities, it switches to the other curvature on the other side of the curve. Then, it moves to the symmetric position in the centerline of the exiting Euclidean pipe. Yes, your company is the point in the parametric equation.

Decisions and Geometry Abstractly

In this figure, I’ve firmed up the structure of the ride your company will take as the point in the parametric equation. That structure is a control. Companies ride such structures all the time. They don’t necessarily build those structures, but they do try to exert some control over their traversal of such structures.

Inside-Outside Geometry

Another view of that structure, but here we ask different questions. Can your company function on the outside of a curve, in the spherical? Can your company function on the inside of a curve, in the hyperbolic? Can your company traverse between the spherical and hyperbolic, and back? Can your company find a place in the linear, the Euclidean and maintain it deliberately? It’s not enough to stick with the linear assumption.

Decisions and Surfaces

Here we highlight the structure, the surface, or in business terms the situations upon which strategy is built. Those capabilities mentioned earlier were abilities to execute at specific moments and during specific time intervals. Those capabilities were put there by strategy in anticipation of structuring situations.

On Surface

The technology adoption lifecycle is one of those structures that technologies, products, categories, companies, industries, whole verticals, and whole economies traverse. That single linear assumption doesn’t get far in the varying densities of populations, events, and intervals comprising the lifecycle. A traversal would occur through the distribution, a distributed control, and given the Poisson distributions comprising Moore’s bowling alley, many distributed control populations. That traversal would not be a surface ride. That traversal would engage differential games of rates interdependent with other navigational aspects of getting the technology, product, sidebands, company, channels, ecologies, sales, revenues, and profits done.

The Borel set enables the calculation of probabilities for mathematicians. The Borel set informs businessmen that the population is fixed. That fixedness should inform the myths of growth, and the ignored reality of decline and its incipient myth of “Who us? Decline, never!” Ask Kodak and stop talking about disruption. It was Christensen’s good management doing what they do. It wasn’t some attacker having labelled itself as disruptive in its pleas for VC funding.

On the TALC Surface

The technology adoption lifecycle (TALC) surface describes the totality of your category, not your company. You could scale the normal to represent your company. Still, macroeconomic considerations are better shown at category scale.

In this figure we assume the company has made it to the point where they have consumed 50 percent of their full lifecycle, available market without missing a quarter and without incurring the wrath of Wall Street. They reach their aftermarket and are subsequently lifted into the realm of the Fortune 500 companies with their much larger market size via the dreaded M&A. Still, they face discontinuity, and of course the M&A typically fails, so much for the red line, so much for the linear assumption, and usually so much for growth.

On the TALC Surface Again

Here we see the point of the aftermarket, the point of an M&A, the point of the huge public company, and the point of the startup. The telcos will make ten times more money from the Internet than the startups did. The telcos could not have brought internet technologies into adoption. Web content startups are not fostering adoption–adoption of those underlying technologies has been done for a while now.

Polynomial as Control

Here we go back to the math to generalize the polynomial as a sequence of controls made explicit by the assertion of a waiting, but implicit, control. This hints back to the NURBS curve control points and how mathematics does this all the time. We solved polynomials without ever using them. No wonder mathematics wasn’t fun. It would have been fun to take on our advanced biology teacher during the test reviews with a ton of math. That’s probably why it wasn’t taught.
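A sketch of that implicit-control idea in code (the names and numbers are mine): write the polynomial’s coefficients out as a list, and appending a zero-coefficient term adds a control that waits without changing the curve, just like adding a NURBS control point.

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial, lowest-degree coefficient first, by
    Horner's rule. Each coefficient is a control on the curve."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

base = [1, -2, 3]         # 3x^2 - 2x + 1
elevated = [1, -2, 3, 0]  # same curve; the cubic term is an explicit
                          # but idle control, waiting to be moved
print(poly_eval(base, 2.0), poly_eval(elevated, 2.0))  # 9.0 9.0
```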

A Point

So what’s with this point? We all have points like this. Ask our significant other.

Measurement Lattice

We’ll be getting the point of that point soon enough. That point is consistent with other points in a cloud of data, big data if you like. But, all those points are waiting around for a line to show up. “Yeah, no line gets past me. I’m an outlier, a tough guy. Hype that big data all you like. There is nothing out there beyond me.” Beyond the collected data is the implicit, which will remain implicit. The data collection explicated an expanse of space.

Measurement Lattice-Data-Regression Extent

The regression traverses the extent of the collected data, but goes no further. The regression provides a structure for parametric traversal.
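A minimal sketch of that constraint (helper and data are mine, for illustration): an ordinary least-squares line, wired to refuse any forecast beyond the extent of the collected data.

```python
def fit_and_forecast(points, x):
    """Least-squares line through the data; returns None outside the
    extent of the collected data, since the regression goes no further."""
    xs, ys = zip(*points)
    if not (min(xs) <= x <= max(xs)):
        return None  # beyond the most distant outlier: the implicit
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((a - mx) * (b - my) for a, b in points)
             / sum((a - mx) ** 2 for a in xs))
    return my + slope * (x - mx)

data = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.2)]
print(fit_and_forecast(data, 1.5))  # interpolates inside the extent (≈ 4.05)
print(fit_and_forecast(data, 10))   # None: no data out there
```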

Measurement Lattice-Data-Regression and Dimension Extent

The dimensional extent of the collected data controls the dimensional extent of the regression and regression-based forecasts. In the figure, the 3-D dimensional projections from the regression are invalid. Degree elevation won’t work here.

Controls Again

The control zoo once again. What species of control do you want to exert? As I’ve read more mathematics, I’ve become interested in more mathematics. Warning! Danger!

Decisions and Surfaces

Like the TALC, macroeconomics is another controlling surface. Your curve will have to work around macroeconomic surfaces.

Decisions and Market Allocation

Market allocation significantly limits where your lines can go. Market allocation is a control. The market allocation circle is based on the normal distribution of the technology adoption lifecycle. Moore defined a formula for determining maximum market share based on the ordinal entry of a competitor into a category. Later entry finds not only smaller revenues, but also a shorter interval of participation in the category. If you arrive later without a new technology underlying your efforts, aka without the capacity to create a category, you’ll be leaving sooner. The circle provides controls.
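Moore’s actual formula isn’t reproduced in the text, so here is a hedged stand-in: a simple order-of-entry rule in which the n-th entrant’s maximum share is proportional to 1/n, normalized over the entrants present. The shape of the result, not the exact numbers, is the point: later entry means a smaller slice.

```python
def max_shares(entrants):
    """Illustrative maximum market shares by order of entry.

    This is NOT Moore's formula; it assumes a 1/n weighting per
    entrant, normalized so the shares sum to 1.
    """
    weights = [1.0 / n for n in range(1, entrants + 1)]
    total = sum(weights)
    return [w / total for w in weights]

for rank, share in enumerate(max_shares(4), start=1):
    print(f"entrant #{rank}: {share:.1%}")
```

Under this toy rule, four entrants split the category 48%, 24%, 16%, 12%: the first entrant takes roughly half, and everyone after fights over the remainder.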

Stakeholder Preferences

Here stakeholder preferences are incorporated as controls in the earlier figure of the role of macroeconomics as a controlling surface.

So you’ve seen some of the structures that control the line we once considered to be just a linear assumption. As our last view of curves for a while, I’ll talk about the subdivision of a Bezier curve as a parametric equation. Search Google for animations of Bezier curves; I found them very interesting. So on to why.

Bezier Curve Subdivision

In the above figure, the base spline is shown in black. The first subdivision is drawn in red. In the animations, the red points subdividing the black lines start at one endpoint of each line and move to the other. All of the red points move across the lines they are on. The second subdivision is drawn in green. The green points subdivide the red lines and move across them. The third subdivision is provided by the black point subdividing the green line. The resulting curve ends up describing a three-tier hierarchy, or a corporation. Adding another point to the base spline would insert another subdivision, and another layer in the hierarchy.
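The subdivision the figure walks through is De Casteljau’s algorithm. A minimal sketch, using hypothetical control points rather than anything from the figure: each pass replaces the current polygon with the points dividing its edges in the ratio t : (1 − t), the red, then green, then black points, until a single point on the curve remains.

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated subdivision.

    Each pass subdivides every edge of the current control polygon at
    ratio t, producing one fewer point per pass (the red, green, and
    black tiers of the figure), until one point on the curve is left.
    """
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical cubic Bezier: four control points, three tiers of subdivision.
control = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(control, 0.5))  # → (2.0, 1.5), the midpoint of the curve
```

Sweeping t from 0 to 1 traces the whole curve, which is exactly what the animations show. Each tier of intermediate points is one layer of the hierarchy the post describes.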

Try moving your controls around.

Leave some comments. Thanks.
