Archive for June, 2013

Geometry

June 21, 2013

I’ve been thinking about geometry a lot these days. What does the sparseness of a hyperbolic geometry feel like? Does hyperbolic geometry encompass Moore’s bowling alley? Do hyperbolic geometry and Poisson games encompass the core management issues? Lots of questions. They compel me to learn more math, but much of that math hides the geometries, or explains everything from the comfort of Euclidean geometry. The linear assumption of management includes the Euclidean assumption. We bumped into this in my last post, Depth of Value.

So, I’ve sketched a quick graphical comparison of the geometries. I use three geometries, hyperbolic (H), Euclidean (E), and spherical (S), to show what a triangle looks like in each, the triangles of the Triangle Model. These geometries are blunt instruments.

[Figure: Geometries. Triangles in hyperbolic (H), Euclidean (E), and spherical (S) space.]
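The quickest way to tell the three apart is the angle sum of a triangle: less than 180 degrees in hyperbolic space, exactly 180 in Euclidean space, more than 180 in spherical space. Here is a minimal sketch of that, using the Gauss-Bonnet relation for surfaces of constant curvature; the curvature values and the unit area are illustrative assumptions, not numbers taken from the figure.

```python
import math

def triangle_angle_sum(curvature, area):
    """Angle sum of a geodesic triangle on a surface of constant
    curvature, via Gauss-Bonnet: sum = pi + K * A (radians)."""
    return math.pi + curvature * area

area = 1.0  # unit-area triangle, illustrative only
for name, K in [("hyperbolic (H)", -1.0), ("Euclidean (E)", 0.0), ("spherical (S)", 1.0)]:
    degrees = math.degrees(triangle_angle_sum(K, area))
    print(f"{name}: angle sum = {degrees:.1f} degrees")
```

The hyperbolic triangle comes up short of 180 degrees, which is that sparseness; the spherical triangle bulges past it, which is the bulging out of the Euclidean plane below.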

They didn’t teach us this stuff back in school. They do teach it to high school students these days. We’re on the cusp of many new understandings. Oh, don’t blame our teachers. Mathematics teaching lags mathematics by about 50 years. Some of the mathematicians who produced the ideas we are just now hearing about are still walking the halls of academia, or died in our lifetimes. I am finding math textbooks at Half Price Books that have moved the ball. Yes, your kids know what a Markov distribution is and what to do with one. Great!

I’ve correlated the distributions we use with the geometries. A discontinuous technology starts out as a Poisson distribution. It’s hyperbolic out in the bowling alley. The lanes are straight, like Einstein’s light, and all that ensuing weirdness. That discontinuous technology then crosses the chasm and moves into the normal distribution (6 sigma) of the vertical, a smaller normal in terms of standard deviations, sigmas, than the normal of the eventual IT horizontal. These normals live in Euclidean space. Eventually, that discontinuous technology company is M&Aed into the huge public companies with the vast sigmas (30 sigma), the vast normal. The total probability under that vast normal is still one, so the height falls, the margins thin, and you need a scraper to get them off the floor. The vastness still reflects the decisions constituting a decision tree, a triangle, but it bulges out of the confines of the Euclidean plane. Real options, strategic choices abound in the spherical, but not so in the hyperbolic.
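To put a number on the height falling: the peak of a normal density is one over sigma times the square root of two pi, so widening the sigmas necessarily flattens the curve, because the total area stays fixed at one. A minimal sketch, using the post’s 6 and 30 sigma examples plus a unit sigma for scale:

```python
import math

def normal_peak_height(sigma):
    """Height of a normal density at its mean: 1 / (sigma * sqrt(2 * pi)).
    Total probability is fixed at 1, so a wider normal is a shorter one."""
    return 1.0 / (sigma * math.sqrt(2.0 * math.pi))

for sigma in [1, 6, 30]:  # unit sigma, the vertical's 6, the public company's 30
    print(f"sigma = {sigma:2d}: peak height = {normal_peak_height(sigma):.4f}")
```

At 30 sigma the peak is a thirtieth of the unit-sigma peak, which is the thinning margin and the scraper.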

Notice that the figure doesn’t include all eight lanes in our bowling alley. Three were enough for our purposes. There is much more to this Poisson tending to the normal and its visualization across an eight-lane bowling alley and time. And, more again when you start to account for the layered structure of a medium.

Somehow, we built a business orthodoxy based on the likes of Sloan’s GM. We teach that orthodoxy. We use linearity to disguise the spherical geometry under the hood. The gaps don’t bother us much. It looks like a nice generic set of tools, so we preach them as universals. We teach it to everyone. Then, we wonder why we can’t innovate. We blame the innovation itself, because we never blame ourselves, and never question the generalist generics of our orthodoxies.

I defend innovation, because it builds the businesses the orthodoxy milks, the cash cows. It builds wealth, wealth as something other than piles of cash, wealth that requires collaboration beyond the firm, beyond the cash flows of our own organizations and value chains. It’s how we make a world different from the one we’ve known.

Continuous innovation doesn’t do the hyperbolic geometry. But discontinuous innovation will happen there, because discontinuous innovation is just part of a product being used to foster adoption of a technology. The transition from Euclidean to spherical still happens with continuous innovation, so even continuous innovators can find gains in an awareness of their geometries.

Mind your geometries.

Comments?

Depth of Value

June 12, 2013

These days I spend part of my day at a university library, one that buys new books, a rare thing now. The state university in town buys journals and skimps on books, so its library shelves are full of aging books. The new books in this library are amazing.

Yesterday, I perused Soil Ecology and Ecosystem Services. Before reading Nowak’s SuperCooperators, I would have passed this one by. But another thing caught my eye: the notion of soil providing services.

I’ve put product ecosystems on my roadmaps. That’s not new. I’ve worked in TQM/ISO shops for enough years to know that every entity has customers, stakeholders, suppliers, and services. And, the wildest Web 2.0+ evangelists spout services, services, services as their mantra. But soil as a service provider? That hadn’t occurred to me until I crossed paths with this book.

Beyond services, it was a graphic that caught my eye. It fits alongside the Triangle model. It describes populations. It correlates populations and features with value, value not just at the interface, but at depth, in the away sense. Value at depth has been with me for a long while now, but finally, here is a way to get more specificity into its description.

I’ll start with a paraphrase of the original graph.

Soil serving populations with specific services.

In this graph, the x-axis is the interface between the sky and the ground. You’re looking at the dirt between your feet. Soil services extend from there into the depth of dirt under your feet. The view of the grass and ground that you get is your view in the model-view-controller sense. The red lines represent the populations being served by the soil. The blue is the amount of service provided, the use frequencies that I’ve illustrated in the long-tail discussions. The variable of pore volume will tie into diffusion, aka the diffusion of innovation.

Now, I’ll show the graph’s relationship with the Triangle model, and relabel the graph so it’s more in line with software development.

[Figure: the Triangle model aligned with the value-depth graph, relabeled for software development]

I’ve aligned the Triangle model with the value-depth graph. The interface resulting from all of your development decisions appears in blue at the bottom of the decision tree, although upside down, and at the x-axis, the former sky-ground interface. Soil services are now a collection of minimal marketable functionality. Populations are still populations. A little color reveals that we have an over-served population.

Both graphs have log-log axes. Beyond straightening curves, beyond those algebraic transformations implying changes in geometry, log is how you encode a base, or modulo arithmetic, in a graph, aka positional notation. Cognitive limits impose a base on the underlying data. Humans have cognitive limits. Brains impose cognitive limits. Media impose cognitive limits. Our applications serve one population well, and other populations not so well, based on the population’s cognitive limit parameter. We probably pay no attention to this, but the cognitive limit is there, and it is very mathematical. It reaches beyond our interfaces (views) to our models, to our user-support content, and to our marcom.
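A minimal sketch of the straightening point: a power law y = c * x^k becomes a straight line of slope k on log-log axes, since log y = log c + k log x. The constants here are illustrative assumptions, not values read off either graph.

```python
import math

# A power law y = c * x**k plots as a straight line on log-log axes:
# log10(y) = log10(c) + k * log10(x)
c, k = 2.0, -1.5  # illustrative constants, not taken from the post's graphs

for x in [1, 10, 100, 1000]:
    y = c * x ** k
    print(f"log10(x) = {math.log10(x):4.1f}  log10(y) = {math.log10(y):6.2f}")
```

Each unit step in log10(x) moves log10(y) by the same k, which is what a long tail looks like once the axes do the encoding for you.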

The notion of the cognitive limit has become controversial, because the original research is now seen as flawed, but attending to this matter will pay off. When you hear advice like never putting more than three bullets on a PowerPoint slide, what is really being said is that PowerPoint as a medium imposes a cognitive limit of three. The rule as it is usually stated is 7 plus or minus 2. So PowerPoint clips our mental capacity at a perceptual level long before it gets us worried about our short-term memory limits and paging to long-term memory. I won’t say the limit is 9.

Software is supposed to be a cognitive tool, a tool to think with, but that’s what it can be and usually isn’t. In a tweet on presenting requirements, I suggested putting it all into a PowerPoint presentation, precisely because its cognitive limit would limit the number of requirements, use cases, or user stories that we expect to deliver in the next iteration. The limit forces us to organize the content and the reveal, or rhetorical encounter. The limit forces us to structure the experience. We don’t have to make them think, unless of course, we are helping them think.

So know your population’s cognitive limit, and if you serve several different populations with several different cognitive limits, realize that not everything will be used by every user. Don’t choke the weakest user. They still make the upgrade decision, or in a Web 2.0 world, the subscription renewal decision. No, the economic buyer does not make those decisions. Don’t call him for those.

So before we run off, let’s set a cognitive limit of 7 on a big project delivering twelve things. We’ll also set a cognitive limit of 3 on it.

[Table: cognitive limits of 7 and 3 applied to a project delivering twelve things]
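Treating a cognitive limit as the base of a positional notation, as above, the depth of the dive is the number of positions needed to hold all twelve things. Here is a minimal sketch of that arithmetic; the flat-grouping line is my reading of the chunking, not necessarily the table’s exact layout.

```python
import math

def positions_needed(items, limit):
    """Positions needed to hold `items` things in base `limit`:
    floor(log_limit(items)) + 1, the positional-notation depth."""
    return math.floor(math.log(items, limit)) + 1

def groups_needed(items, limit):
    """Flat chunking instead: how many groups of size `limit` cover the items."""
    return math.ceil(items / limit)

for limit in [7, 3]:  # the post's two cognitive limits
    print(f"limit {limit}: 12 things need {positions_needed(12, limit)} positions, "
          f"or {groups_needed(12, limit)} flat groups")
```

The tighter limit of 3 forces an extra level of structure onto the same twelve things, which is the extra cognitive load the next figure charges the user for.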

Now, I’ll put the table into a graphic, so we can see how the total cognitive load and cognitive limits affect our customer’s time to return.

[Figure: log-log cycles for bases 3, 7, and 10, with cognitive-load dives and time to return]

In this figure, we start with a log-log cycle. We highlight the logs of bases 3, 7, and 10 that we’ve been using as examples. These bases are shown in black, the others in aqua. A log-log scale presents us with squares within squares. These cognitive limits hint at the dive that we make into whatever we have to learn: textual content, automated content, implicit cognitive models, implicit models, implicit model constraints, and technology adoption lifecycle phase mediation. We also have to find the explication gaps, workarounds, and other negative use costs. We use parabolic physics here. First you climb the platform, then you dive. The red, green, and purple grids use the base log squares as unit measures. We take the cognitive effort required to make the dive off of the unit-measure grids. The base of each grid gives each user some credit for knowing some of the content. The black horizontal line at the bottom represents the system ground as an absolute. The grounds for each cognitive limit were shifted, again, relative to the expected prerequisite knowledge.
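One way to read the parabolic metaphor, and this is my reading rather than anything measured off the figure: treat the cognitive load as the height of the platform, give the user credit for prerequisite knowledge by lowering that height, and let the dive follow a parabola, so the time to reach ground grows like the square root of the remaining load. A minimal sketch, with all constants and units assumed:

```python
import math

def time_to_ground(load, credit, rate=1.0):
    """Parabolic dive: remaining height h = load - credit, and
    h = (rate / 2) * t**2 at impact, so t = sqrt(2 * h / rate).
    Loads, credits, and the rate are illustrative units only."""
    h = max(load - credit, 0.0)
    return math.sqrt(2.0 * h / rate)

# hypothetical cognitive loads (in unit squares) and prerequisite credit
for load, credit in [(4.0, 0.0), (4.0, 1.0), (9.0, 1.0)]:
    t = time_to_ground(load, credit)
    print(f"load {load}, credit {credit}: time to ground = {t:.2f}")
```

Under this reading, smaller loads and more credit reach ground sooner, which is the lever the next paragraph pulls.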

The Time To Return (TTR) in the figure is a little bunched up. If the curve had been wider, the arrivals would have been stretched out, and more realistic. Practices like Minimal Marketable Functionality aim at delivering smaller cognitive loads, arriving at the TTR one unit of minimal marketable functionality at a time. Moving training earlier in the sales cycle can also move the TTR around, and reduce negative use costs.

When we take the population as a whole, we end up with a collection of parabolas generating a surface over time, a fireworks show.

So, back to the soil. The population of soil microbes being served at a particular location on the graph is there because that is the only place that has what it needs. And, so it is with an application. We, both us and those soil microbes, are seeking, searching for cognitively cheap, exploitable, consumable value. Making it too easy is a loser, because we will be bored. Making it too hard will result in a support call, or worse, an exit. Rocks are rocks. And, no, they don’t rock. Microbes and humans go elsewhere.

Comments?