Depth of Value

These days I spend part of my day at a university library, one that still buys new books, a rare thing. The state university in town buys journals and skimps on books, so its shelves are full of aging volumes. The new books in this library are amazing.

Yesterday, I picked up and perused Soil Ecology and Ecosystem Services. Before reading Nowak's Super Cooperators, I would have passed this one by. But something else caught my eye: the notion of soil providing services.

I've put product ecosystems on my roadmaps. That's not new. I've spent enough years in TQM/ISO shops to know that every entity has customers, stakeholders, suppliers, and services. And the wildest Web 2.0+ evangelists spout services, services, services as their mantra. Soil just serves. At least, I hadn't thought of it that way until I crossed paths with this book.

Beyond services, it was a graphic that caught my eye. It fits alongside the Triangle model. It describes populations. It correlates populations and features with value, value not just at the interface, but at depth, in the away sense. Value at depth has been with me for a long while now, but finally, here is a way to get more specificity into its description.

I’ll start with a paraphrase of the original graph.

Soil serving populations with specific services.


In this graph, the x-axis is the interface between the sky and the ground. You're looking at the dirt between your feet. Soil services extend from there down into the depth of dirt under your feet. The view of the grass and ground that you get is the view in model-view-controller terms. The red lines represent the populations being served by the soil. The blue is the amount of service provided, the use frequencies that I've illustrated in the long-tails discussions. The variable of pore volume will tie into diffusion, aka the diffusion of innovation.

Now, I’ll show the graph’s relationship with the Triangle model, and relabel the graph so it’s more in line with software development.


I've aligned the Triangle model with the value-depth graph. The interface resulting from all of your development decisions appears in blue at the bottom of the decision tree, although upside down, along the x-axis, the former sky-ground interface. Soil services are now a collection of minimal marketable functionality. Populations are still populations. A little color reveals that we have an over-served population.

Both graphs have log-log axes. Beyond straightening curves, beyond those algebraic transformations implying changes in geometry, log is how you encode a base or modulo arithmetic in a graph, aka positional notation. Cognitive limits impose a base on the underlying data. Humans have cognitive limits. Brains impose cognitive limits. Media impose cognitive limits. Our applications serve one population well and other populations less well, depending on each population's cognitive-limit parameter. We probably pay no attention to this, but the cognitive limit is there, and it is very mathematical. It reaches beyond our interfaces (views) to our models, to our user-support content, and to our marcom.
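To make the positional-notation point concrete, here's a minimal sketch, assuming a cognitive limit acts like a number base: the depth of chunking needed to organize n items is just the number of digits n has in that base. The `digit_count` helper is mine, not anything from the book.

```python
def digit_count(n: int, base: int) -> int:
    """How many base-`base` digits n requires, i.e. how many
    levels of chunking a cognitive limit of `base` forces."""
    count = 0
    while n > 0:
        n //= base      # peel off one positional level
        count += 1
    return count

# Twelve items need one more level of structure under a
# cognitive limit of 3 than under a limit of 7.
print(digit_count(12, 7))   # 2 levels
print(digit_count(12, 3))   # 3 levels
```

The extra level under the smaller base is the extra depth of the dive the user makes into the content.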

The notion of the cognitive limit has become controversial, because the original research is now seen as flawed, but attending to this matter will pay off. When you hear advice like never putting more than three bullets on a PowerPoint slide, what is really being said is that PowerPoint as a medium imposes a cognitive limit of three. The rule as it is usually stated is 7 plus or minus 2. So PowerPoint clips mental capacity at the perceptual level long before we have to worry about our short-term memory limits and paging to long-term memory. I won't say the limit is 9.

Software is supposed to be a cognitive tool, a tool to think with, but that's what it can be and usually isn't. In a tweet on presenting requirements, I suggested putting them all into a PowerPoint presentation, precisely because its cognitive limit would cap the number of requirements, use cases, or user stories that we expect to deliver in the next iteration. The limit forces us to organize the content and the reveal, the rhetorical encounter. The limit forces us to structure the experience. We don't have to make them think, unless, of course, we are helping them think.

So know your population's cognitive limit, and if you serve several different populations with several different cognitive limits, realize that not everything will be used by every user. Don't choke the weakest user. They still make the upgrade decision, or in a Web 2.0 world, the subscription-renewal decision. No, the economic buyer does not make those decisions. Don't call him for those.

So before we run off, let's set a cognitive limit of 7 on a big project delivering twelve things. We'll also set a cognitive limit of 3 on it.
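The table itself didn't survive extraction, but the arithmetic behind it can be sketched, assuming a cognitive limit of b means a user absorbs at most b items per pass. The `chunks` helper is a hypothetical name for that calculation.

```python
import math

def chunks(total_items: int, cognitive_limit: int) -> int:
    """Number of passes (chunks) needed to absorb total_items
    when at most cognitive_limit items fit in one pass."""
    return math.ceil(total_items / cognitive_limit)

# Twelve deliverables against the two example limits.
print(chunks(12, 7))   # 2 chunks
print(chunks(12, 3))   # 4 chunks
```

The population with the limit of 3 has to make twice as many passes over the same twelve things as the population with the limit of 7.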


Now, I’ll put the table into a graphic, so we can see how the total cognitive load and cognitive limits affect our customer’s time to return.


In this figure, we start with a log-log cycle. We highlight the logs of bases 3, 7, and 10 that we've been using as examples. These bases are shown in black; the others are in aqua. A log-log scale presents us with squares within squares. These cognitive limits hint at the dive that we make into whatever we have to learn: textual content, automated content, implicit cognitive models, implicit models, implicit model constraints, and technology adoption lifecycle phase mediation. We also have to find the explication gaps, workarounds, and other negative use costs. We use parabolic physics here: first you climb the platform, then you dive. The red, green, and purple grids use the base-log squares as unit measures. We take the cognitive effort required to make the dive off of the unit-measure grids. The base of each grid gives each user some credit for knowing some of the content. The black horizontal line at the bottom represents the system ground as an absolute. The grounds for each cognitive limit were shifted, again, relative to the expected prerequisite knowledge.

The Time To Return (TTR) in the figure is a little bunched up. If the curve had been wider, the arrivals would have been stretched out, and more realistic. Practices like Minimal Marketable Functionality aim at delivering smaller cognitive loads, arriving at the TTR one unit of minimal marketable functionality at a time. Moving training earlier in the sales cycle can also move the TTR around, and reduce negative use costs.

When we take the population as a whole, we end up with a collection of parabolas generating a surface over time: a fireworks show.

So, back to the soil. The population of soil microbes being served at a particular location on the graph is there because that is the only place that has what they need. And so it is with an application. We, both us and those soil microbes, are seeking, searching for cognitively cheap, exploitable, consumable value. Making it too easy is a loser, because we will be bored. Making it too hard will result in a support call, or worse, an exit. Rocks are rocks. And no, they don't rock. Microbes and humans go elsewhere.


