Archive for May, 2010

Building a Dog. Oh, Make that a Cat

May 29, 2010

So you thought you were in the dog business? Surprise: the boss came in this morning and told you that we are now in the cat business. Hell, the dog’s only half finished. Dog 1.2 is in the works right now.

Adding cats to the code is just a third of the problem. We can add some classes and refactor towards an architecture for pets in general, and we can revise our marcom to do the cat stuff in parallel with the dog stuff, but we have a bigger issue: we’ve lost our focus and become a pet store, knowing that tomorrow you’ll be asked to do gerbils, yet another segment. All the projections for dogs, and the roadmap for dogs, are hosed.

Sure, pivot, but dogs have been profitable and proven. The answer is no! It seems like the answer is always no.

But, if you are going to say no, you need the facts to say no for you. You need to measure the costs involved in this deviation from the roadmap, or call it strategy. Even if management wants strategy, the Agile community disavows strategy. Call it whatever you like.

If there is a goal, even if that goal is just to make a living, or to party hard along the way, a goal is a line.

A goal is a line.

A Goal is a Line.

As a line, that goal would just be a dream or an intent. To make that goal a motivator for action, that line has to become a vector.

A goal represented as a vector

Goal as a Vector

If that dog you are building were a vector, then that cat your boss wants to build would be another vector.

The vector view of dogs and cats

Dogs and Cats as Vectors

So how do we 1) point these vectors, and 2) measure the ever-growing distance between cats and dogs?

Information Theory

In information theory, that Claude Shannon stuff, the world is composed of bits. A bit is also the outcome of a single difference. We ask a question to distinguish between things, and the end result is a bit of information. The question we are asking is the “Do, or do not” question.

The "Do or Do Not" Question

Taking this back to geometry, a bit is a unit measure.

Bit as a unit of measure resulting from a decision

So right away we can see that our dog and that cat are at least one bit away from each other. We might add hamsters next week, and other pets in the coming months, since we moved from the dog business to the pet business, but maybe we haven’t realized that yet. So what’s happening to our bits?

When a question has more alternatives, we use more bits to encode the answer.

More Alternatives, More Bits

We asked a question with more than a pair of alternative answers, so we need more bits to encode the answer, or to locate the alternative.
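
A quick sketch of that arithmetic in Python (the pet counts are made up): the bits needed to locate one alternative grow with the log of the number of alternatives.

```python
import math

def bits_needed(alternatives: int) -> int:
    """Minimum whole bits needed to encode one choice among N alternatives."""
    return max(1, math.ceil(math.log2(alternatives)))

print(bits_needed(2))  # dog or cat: one yes/no question, one bit
print(bits_needed(4))  # add hamsters and gerbils: two bits
print(bits_needed(5))  # a fifth kind of pet: three bits; bits grow with the log of the alternatives
```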

Once we ask a question, we invariably ask another question. When we were in the dog business, we asked what kind of dog. We are busy right now expanding the kinds of dogs we can serve and derive revenue from. Asking what kind of dog moves us to another level of detail. It moves us from general to specific. It moves us from one level in a hierarchy to another.

A hierarchy results when asking what kind of dog.

A hierarchy results when asking a subsequent question.

Notice that we now have to encode or measure the depth of our hierarchy, so our unit measure is now two-dimensional, but it need not be a square. It could be time and money, but I’m getting ahead of myself. Also notice that we are no longer in the dog business, but rather in the dogs business. The dog company would turn into the dogs division, and the poodle company would report to the dogs division.

So we added a dimension to our unit measure. We also added a dimension to our hierarchy. If I were to ask what kind of cat, I would be laying out another independent choice set for measurement. The kinds of dogs and the kinds of cats would be measured by different measures.
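
Here is a minimal sketch of that hierarchy arithmetic; the species and breeds are invented, but it shows how the parent decision and each child decision carry their own, independent measures.

```python
import math

def bits(n: int) -> int:
    return max(1, math.ceil(math.log2(n)))

# Hypothetical alternative sets: the parent decision (species) and each child
# decision (kinds of that species) carry their own, independent unit measures.
species = {"dog": ["poodle", "beagle", "lab", "husky"],
           "cat": ["siamese", "tabby", "persian"]}

parent_bits = bits(len(species))                              # 1 bit: dog or cat
child_bits = {s: bits(len(kinds)) for s, kinds in species.items()}

print(parent_bits)                      # 1
print(child_bits)                       # {'dog': 2, 'cat': 2}
print(parent_bits + child_bits["dog"])  # 3 bits to say "a poodle": one level, then the next
```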

Another Dimension

Notice that the dog and cat alternative sets are separated by the unit measure of the parent dimension, rather than by their own unit measure. Notice also that we stretched the unit measure of the parent decision.

Once you have a two-dimensional unit measure, you can move on to a Cartesian coordinate system, which puts you back in analytic geometry, algebra, and trigonometry. No, I’m not taking the dogs and cats there. But, we do end up with a way to 1) point the vectors, and 2) measure the distance between the vectors at a point in time.
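
As a rough sketch, treating the dog and cat efforts as vectors in a two-dimensional coordinate system (the coordinates are made up for illustration), pointing and measuring reduce to an angle and a distance.

```python
import math

# Treat each effort as a 2-D vector; the coordinates are invented for illustration.
dog = (4.0, 1.0)  # the roadmap vector: four releases of dog effort
cat = (3.0, 3.0)  # the diverted vector: cat effort pulling away from the roadmap

def angle_between(u, v):
    """Angle in degrees between two vectors -- how far the cat points away from the dog."""
    dot = u[0] * v[0] + u[1] * v[1]
    return math.degrees(math.acos(dot / (math.hypot(*u) * math.hypot(*v))))

def distance(u, v):
    """Straight-line distance between the vector endpoints at this point in time."""
    return math.hypot(u[0] - v[0], u[1] - v[1])

print(round(angle_between(dog, cat), 1))  # pointing the vectors
print(round(distance(dog, cat), 2))       # measuring the gap between them
```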

The trick was measuring something abstract like an idea. Any idea is measurable to the degree that it has been delineated by questions about it. Any definition answers a set of questions. Continuous or sustaining innovations add choices to an existing alternative set. Discontinuous innovations provide a new set of parents to existing and new concepts.

Notice that I’m talking about concepts. That means we are working in the philosophical discipline of ontology. The questions are sortables. We do this because we work with ideas and the realization of those ideas. A product can be considered a realization. Once realized, a product can be classified by a taxonomy. Ontologies are conceptual; taxonomies are physical, or at least realized, manifestations.

The Triangle Model

Requirements are decisions. In my earlier post, Understanding, I talked about how deciding was knowing. During requirements elicitation we are making decisions about the “What” that will be implemented or realized. They might not look like decisions, because they are encoded as sentences. A dog barks, yes or no? Yes, we will include barking in the abstraction of a dog that we implement, or maybe we will delay the barking until a later release.

It turns out that throughout the software development process we are making decisions, technical decisions that are driven by business decisions. And it’s not just software; it is any product. Even if that product is a sharpened pencil, the same holds: decisions were made. And where decisions are made, bits arise. We organize bits, and as we do so, many of those bits are aliases for physical or virtual entities, so we break into the physical or virtual worlds with technologies, products, services, experiences, and even organizations.

These decisions are the same ones we were talking about back when we tried to point and measure our vectors.

The world is full of realizations and the decision trees that brought the world into existence. Many of those decisions are implicit. Many of them will not be explicated, or even hypothesized, for years to come. Many of those decisions become skills that we practice until they don’t seem like decisions to us. And, while Jackson Pollock denied intention, he omitted that his media presented constraints, and that those constraints were overcome by implicit decisions. The dogs and cats are getting away here.

Back in AI class, a collection of decisions encoded as logic propositions organized themselves into decision trees, so even if you don’t walk through the decisions serially or in the correct order, you end up with a decision tree. In mathematics, a proof begins at an arbitrary spot in the space laid out by earlier theorems and such, and ends at a goal, thus adding to the decision tree that is mathematics. Again, the dogs bark.

A decision tree can be represented by a triangle. A realization effort does ask questions and explore areas beyond the ultimate solution. In retrospect, that solution trims the tree, so ignoring the divergences of the generative exploration of the problem or solution space, we end up with a triangle, hence the Triangle Model. Hints of it show up in So you don’t have a Market? Great!, as does more information on vectors of differentiation, and S-curves.

In my use, the Triangle Model can model the waterfall, or Agile. I’ve extended it through the Hype Cycle and touchpoint considerations well beyond the interface.

Before moving forward, I need to provide a key to the graphic representations used going forward.

Unit of Work (Triangle Model) Icon Key

Subsequent diagrams will be built as a series of these icons. Each rectangle represents the effort towards the release of a single minimal marketable functionality unit. Each rectangle is a release, and here a release is composed of three iterations. I realize that each iteration would be a deliverable, each of which should be represented as a triangle unto itself, so maybe showing iterations here is overkill, but iterations, as well as releases, can be used as units of information.

The effort is also represented as a decision space composed of the undelivered divergent or exploratory phase (green) effort, the undelivered convergent phase (black) effort, and the delivered effort, the triangle (orange) effort.

Progress towards the goal is shown as time (blue). This progress can be considered a vector towards the goal, or the roadmap.

In the subsequent diagrams I added a vector (red) to denote effort away from the original goal. And, I showed the volume of decisions in each release related to the effort away from the original goal (yellow).

Decision Volume Triangle Model Key 2

Back to Business

Let’s think of a release as a unit of measure. Likewise, an iteration is a unit of measure. And, a roadmap is a series of releases.

Successive releases away from the roadmap

Our roadmap is represented by the black vector. Our actual efforts are shown by the blue and red vectors as we diverted from realizing the dog, and turned instead to realizing the cat.

In our triangles, the yellow area represents the effort that was off the roadmap.

Time and Code Volume Vectors

That effort costs more than the time and money spent developing the cat. It also costs you in terms of lost dog effort. Getting back to the roadmap, back to the dog, will take you more time and more money.

The Time and Money that the Dog Lost to the Cat


Meanwhile, your pet business explodes due to a loss of focus, your operational costs go up, and the dog owners feel neglected, or maybe they’re more than unhappy about your going to the cats.

It’s important to say no. But, it is more important to let the numbers speak for you. That big black vector tells you that you are 2 releases behind where you would have been had you focused on shipping the dog. You’ll also be 5 releases behind by the time you ship that dog. And, you still have to deal with the hamster.

Comments? Thanks!

Understanding

May 27, 2010

Earlier this week, someone tweeted that making a decision was knowing. Decisions get encoded as IF…THEN… rules, which in turn serve as rules in inferential systems. Back in the late ’80s, I attended a hypertext conference where some MCC researchers, working on program proofing, defined requirements as decisions, as in all requirements are decisions, not just the decisions made by the application. Deciding was knowing.

In all the startups and IT shops I’ve worked in, I’m always amazed at the claims about how few development projects succeed, because I’ve only seen one development effort fail. That failure was caused by an ambiguity that was deferred until late in the project. Six months later, when some clarifying decisions needed to be made, surprise, we had just wasted 2.5 man-years for a cancellation. Not deciding was not knowing, but eventually the decision arrived, got made, understanding was established, and deciding became knowing.

Decisions were made and as a result we know.

Deciding is Knowing.

In the figure, we are making decisions about the world. We have a budget. We have a system. We might use syndicated data, but someone has built a decision support pipeline from the world to the view we base our decision on. A decision support system consists of one or more sensors, a fusion process that combines sensed data into a coherent summary, and a view where the decider uses the data to make their decision. A sensor may use an illuminator when something cannot be sensed directly, like an electric eye used as a customer counter, where you count the beam interruptions and assert that a pair of interruptions represents one customer. Finally, a decision is made. A fact is asserted by the decision. That fact might not be based on the underlying data provided by the decision support network. That fact might be an assumption. It is the decision that establishes the facts. The decision support network provides the justification, particularly where we are being irrational, emotional, irrelevant in the face of a yet-to-be-discovered invariant, or making a decision under time pressure before the supporting data is available.
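
A minimal sketch of that sensor-fusion-view chain, using the electric-eye example above; the counts and the spend figure are invented.

```python
# Sensor -> fusion -> view -> decision; the beam-interruption numbers are invented.

def sense(beam_interruptions):
    """Sensor (with its illuminator): raw beam interruptions at the door."""
    return beam_interruptions

def fuse(interruptions):
    """Fusion: assert that a pair of interruptions represents one customer."""
    return interruptions // 2

def view(customers, spend_per_customer=5.0):
    """View: the summary the decider actually looks at."""
    return {"customers": customers, "projected_spend": customers * spend_per_customer}

summary = view(fuse(sense(46)))   # {'customers': 23, 'projected_spend': 115.0}

# The decision establishes the fact; it may lean on the summary or ignore it.
decision = "extend weekend hours" if summary["customers"] > 20 else "hold steady"
print(summary, decision)
```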

Note that the figure hints at a lack of density in our collection of underlying data. The figure also suggests that our fusion processes could be implemented by factor analysis, which finds factors in the data, rather than having them asserted by decision makers or fusion process designers. Factor analysis finds factors and organizes them into hierarchies by finding classification factors. Factor analysis works much like machine learning.

When I read about data warehouses in the past, facts were like the flags that define the course in a downhill skiing race. I’m not sure that’s accurate, but facts have a density, and facts go through a lot before we use them to make decisions.

Making a decision is knowing.

We make many decisions offline. We make decisions based on statistical research. We collect data, code it, and summarize it with statistics like a mean and a standard deviation. We contextualize data as normal curves. If we don’t do these things, our researchers do them for us. The research process still involves creating and deploying sensors and fusing the data into summaries or inferences. Ultimately, we do research to make decisions: to decide what is reality, or to decide what future we intend to be ready for, so that we, our team, our stakeholders, and all those reliant upon our dependencies know. We make a decision to know. We do not make a decision to be correct.

Starting with a decision support system, we sense the world, we aggregate our data, we summarize our data into a normal distribution, and then we bring our data and metadata, our inferences, into our world as we make decisions in that world.
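
As a small sketch of that offline loop, with invented sample data: sense, aggregate, summarize into a mean and standard deviation, then decide against the summary.

```python
import statistics

# Sense, aggregate, summarize, then decide against the summary; the samples are invented.
samples = [12.1, 9.8, 11.4, 10.2, 13.0, 10.9, 11.7]

mean = statistics.mean(samples)    # the metadata: where the distribution centers
stdev = statistics.stdev(samples)  # and how widely it spreads

threshold = mean + 2 * stdev       # contextualize as a normal curve
print(round(mean, 2), round(stdev, 2), round(threshold, 2))

# The decision is what we actually produce; the statistics are its justification.
print("investigate" if max(samples) > threshold else "within expectations")
```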

Using Research to Know.

This figure illustrates the processes we use as we research our world, so we can make decisions and establish the facts of our world. The blue elements in the figure point to something I found surprising, as it was never explained to me this way: a function ceases to exist where it converges to a limit. Distributions typically converge to a limit. So I’ve indicated in the figure where the function describing the distribution exists and where it does not exist. The red line, labeled as metadata, indicates the mean. Had I included a standard deviation in the figure, it would have been red. The distribution is normal. That is also metadata.

I also indicated the presence of codecs. When you encode something, you can do things with that something that you couldn’t do before. You can keep a secret for a defined period of time. You can transmit it. You can search it faster than if you searched that something in its raw state. You might have to decode it to use it. A codec is the encode and decode pair. A codec creates spaces.
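
A toy sketch of a codec as an encode/decode pair; the record is invented, and base64 plus JSON stand in for whatever encoding you actually use.

```python
import base64
import json

def encode(record: dict) -> bytes:
    """Encode: once encoded, the record can be shipped down a text-only channel."""
    return base64.b64encode(json.dumps(record).encode("utf-8"))

def decode(payload: bytes) -> dict:
    """Decode: recover the original record on the other side."""
    return json.loads(base64.b64decode(payload).decode("utf-8"))

original = {"pet": "dog", "release": 3}
wire = encode(original)
print(wire)
print(decode(wire) == original)  # True -- the encode/decode pair round-trips
```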

Codecs are amazing things. In the book The Box, the author talks about how container shipping changed the world by encoding and decoding content in a different manner than traditional shipping. Notice I said, encoding and decoding. Container shipping altered the way geography encoded and decoded the conduct of commerce. Container shipping was a protocol, and protocols inherently are codecs.

In the figure, the decision support system is a codec. The parameters of a normal curve are created via a series of nested codecs. The normal curve is a codec. And, when we use statistical data, we are bringing encoded, filtered, constrained data into our decisions. But, luckily, we decide to know, so we float on top of codecs. And, once we know, we implement policies via other codecs. Once it is digital it is rich with codecs.

The following figure illustrates the nested codecs through which we perceive our world and make decisions about that world.

This figure shows codecs nested within each other as each contributes to decisions made by people and organizations.

Nested Codecs Towards Decisions

The elements of a decision support system each encode or produce some signal or data so that it can be consumed by the next element in the system. These nested codecs describe a process consisting of producer-consumer pairs. While we operate as consumers of our research, data, and inference providers, we, in making decisions, are producers. We produce knowing.

Codecs, Decision Support System Components, and the Producer-consumer Chain

In this figure, the previous figure has been modified to indicate the decision support components in red, and to correlate the producer-consumer representation with the nested codecs.

A critical issue is whether we have defined our data and systems so that they serve the decisions they will drive. We should be careful not to be victims of the decision support systems we use, whether ours, IT’s, or those of external research organizations. They too are making decisions to know.

Christensen pointed out in his books that the net present-value calculation was needlessly killing discontinuous, potentially disruptive, innovation. That calculation and its users were encoding a world in a particular way, and their decisions not only knew, but became self-fulfilling prophecies that essentially made a world. Calculations are codecs, spaces, worlds.
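
A minimal sketch of that calculation, with invented cash flows, shows how the discounting does its damage.

```python
def npv(rate, cash_flows):
    """Discount each period's cash flow back to today and sum them."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A discontinuous innovation: heavy early investment, payoff far in the future.
disruptive = [-100, -40, -20, 10, 60, 150]
# A sustaining tweak to the existing product: small, early, reliable returns.
sustaining = [-10, 8, 8, 8, 8, 8]

print(round(npv(0.15, disruptive), 1))  # the far-off payoff gets discounted away
print(round(npv(0.15, sustaining), 1))  # the safe tweak wins the calculation
```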

Sometimes others decide for us. In those instances, we might not know. But, we decide to know. And, in our power to decide lies our power to construct a world, to be a codec.

Comments please! Thanks!

More on Innovation Visualization

May 25, 2010

Revisiting the exponential-polar representation, discussed in Innovation Visualization, I’ve expanded the representation and found a surprising extension to the long tail/power law version discussed in Cognitive Models on the Efficiency Frontier.

In this figure, I drew the exponential-polar representation for a series of three successive disruptive innovations. Then, I drew a chord across the arc representing delivered functionality before and after the threshold of disruption. I was going to try to put a conceptual model along the chord, but I used the long tail/power law distribution as a shorthand for the logarithmically distributed conceptual model instead.

Log scale polar representations and commoditization thresholds and the clipping of the power law distribution of the conceptual model

The chord is shorter than the length of the arc. I’ve not drawn the circles whose diameters would represent commoditizations of the underlying technology. It turns out that the chord clips the long tail/power law distribution. Where this clipping occurs varies, but it happens well before the S-curve reaches its ceiling, which translates to both axes of the power law distribution.

Power law, or Pareto, distributions exhibit an 80/20 split. In the long tail interpretation, the first 20 percent represents the hits; the remaining 80 percent represents niches. Translating this to the frequencies of feature use in an interface means that the first 20 percent is whole product partners and infrastructural elements. Some of that 20 percent may be the most frequently used features that a vendor provides. In SaaS, that first 20 percent would be browser, server, and Ajax. The rest of an application’s features would be distributed down the tails (x and y axes). The vendor-provided features encode an underlying conceptual model, which brings us back to the context of the earlier posts.
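
A small sketch of that 80/20 reading, with an invented Zipf-like frequency list standing in for real feature-use data.

```python
# Rank features by frequency of use (a steep, Zipf-like power law, invented here)
# and count how many head features account for 80 percent of all use.
frequencies = sorted([1000 // rank ** 2 for rank in range(1, 26)], reverse=True)

total = sum(frequencies)
running, head = 0, 0
for f in frequencies:
    running += f
    head += 1
    if running >= 0.8 * total:
        break

print(f"{head} of {len(frequencies)} features cover 80% of use; the rest is the long tail of niches")
```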

Another feature of all statistical distributions is convergence. A Poisson distribution converges earlier than a normal distribution. A long tail/power law/Pareto distribution converges much later than a normal distribution. This later convergence implies that there is always room for more features or concepts under a long tail.

The S-curve for paradigm C, the third disruption in the sequence, is shown at the upper right. I’ve drawn the threshold of commoditization on that S-curve. There is always more development beyond, aka above, the threshold, but it is no longer profitable. A vendor would change their vector of differentiation at this point, which brings another technology into the mix and changes the focus of the marketing messages. Commoditization occurs when the customer is no longer willing to pay for improvements in a given technology.

Since customers are no longer willing to pay for more improvements in a given technology, you can think about it in terms of customers not being willing to pay for more features related to that technology. This implies that a feature distribution along the long tail would be clipped, and that both the x and y axes would be clipped. A vendor may have already created features beyond the threshold. There is no reason to remove them, but there is every reason to stop adding features.

This could have been Rick Chapman’s argument about not having a product manager for SaaS applications, along with the notion that in late market, costs must be minimized, hence investment in continued development would be minimized. I know that I still use Web 1.0 sites where I pay a subscription. Things that were broken long ago are still broken.

The blue arrow under the clipped long tail is the vector of differentiation related to the clipped long tail.

Clipping the long tail likewise clips the application’s underlying user conceptual model. The frequency of use of a given feature also implies the frequency of use of the concepts related to that feature.

It might be rare to see an industry undergo three successive disruptions, but functional units with staff subscribing to three or more paradigms are less rare. In cost accounting, as far as I know, you have the traditional, ABC, and throughput accounting paradigms. Each of these paradigms serves the purpose of providing accounting data that managers rely upon for managerial decision making, but while they serve the same purposes and the same managers, they approach and originate data from very different places and perspectives. Each paradigm has its own conceptual model. ABC cost accounting is built on top of traditional cost accounting, but variablizes the once-fixed categories that costs are assigned to.

What do you think? Please leave a comment. Thanks!

Innovation Visualization

May 24, 2010

Last week, I finished “e”: The Story of a Number by Eli Maor. It explained a lot of familiar and unfamiliar math. It covered hyperbolic, exponential, and imaginary functions. Exponential functions generate geometric progressions. Exponential functions, when graphed, present you with the same log scale doubling we talked about in the previous blog post, Cognitive Models on the Efficiency Frontier.

When you move to imaginary numbers, you end up with an exponential polar graph. These graphs had me thinking about how the concentric circles represent the gaps generated by discontinuous innovation. The arc around the circles represents continuous innovation.

Polar forms deal with a radius or magnitude in some direction like vectors, but the book got here from imaginary numbers.

A discontinuous innovation becomes disruptive if investment in it (price) generates performance such that the price-performance curve has a slope greater than that of the technology being replaced. I discussed this in The Word is Discontinuous.

The slope of an S-curve, or price-performance graph, increases to the inflection point and decreases beyond it, so at the inflection point the slope of the S-curve is at its maximum. This maximum slope has meaning in a polar representation. Innovation slows down beyond it. And it serves as the boundary that the next technology must exceed before that technology is disruptive.
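
A minimal sketch, using a logistic curve as the S-curve (the ceiling, steepness, and inflection point are illustrative), shows the slope peaking at the inflection point.

```python
import math

def s_curve(x, ceiling=100.0, steepness=1.0, inflection=5.0):
    """A logistic curve standing in for the price-performance S-curve."""
    return ceiling / (1 + math.exp(-steepness * (x - inflection)))

def slope(x, h=1e-5):
    """Numerical slope of the S-curve at investment level x."""
    return (s_curve(x + h) - s_curve(x - h)) / (2 * h)

for x in [2, 4, 5, 6, 8]:
    print(x, round(s_curve(x), 1), round(slope(x), 2))
# The slope rises toward the inflection at x = 5, peaks there (ceiling * steepness / 4 = 25),
# and falls beyond it; that peak slope is the disruption threshold a successor must exceed.
```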

I’ve graphed two disruptive innovations, Technology A, and Technology B, followed by continuous/sustaining innovations.

Successive Discontinuous Innovations and Subsequent Continuous Innovations on an Exponential Polar Graph

On the left, two S-curves are shown along with their inflection points and maximum slopes. On the right, we have an exponential polar graph depicting the serialization of the discontinuous innovations.

For Technology A, shown in red at r=1, we transfer the maximum slope found on the S-curve for Technology A to the polar graph, using the x-axis as the base of the angle. The base of the angle represents the discontinuous innovation. The vector at the given angle from the base is the disruption threshold. As continuous/sustaining innovations sweep the arc at r=1, they bring the technology to (before) and beyond (after) the disruption threshold.

For Technology B, shown in green at r=2, we build the representation as we did for Technology A, except that we use the disruption threshold for Technology A as the base of the angle to the disruption threshold for Technology B.

The tan areas represent the innovations before the disruption thresholds. The orange areas represent the innovations after the disruption thresholds. Notice that even a disrupted technology might be improved while it is being replaced.

As the technologies are improved, the innovations are serialized over counterclockwise positions along the arcs.
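
A rough sketch of that construction, with invented maximum slopes standing in for the values read off the S-curves.

```python
import math

# The maximum S-curve slopes are invented stand-ins for values read off the curves.
max_slope_A = 25.0
max_slope_B = 40.0

threshold_A = math.degrees(math.atan(max_slope_A))                 # angle from the x-axis
threshold_B = threshold_A + math.degrees(math.atan(max_slope_B))   # angle from A's threshold

def arc_position(radius, degrees_swept):
    """Continuous innovation sweeps the arc at a fixed radius (r=1 for A, r=2 for B)."""
    rad = math.radians(degrees_swept)
    return (round(radius * math.cos(rad), 3), round(radius * math.sin(rad), 3))

print(round(threshold_A, 1), round(threshold_B, 1))
print(arc_position(1, threshold_A))  # Technology A arriving at its disruption threshold
print(arc_position(2, threshold_B))  # Technology B, one paradigm further out
```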

The distances between subsequent radii represent the new cognitive model of the discontinuous innovation. A continuous innovation extends the current cognitive model. A discontinuous innovation replaces the current cognitive model entirely. The basis of a discontinuous innovation is far removed from that of the current cognitive model. Examples of such cognitive model conflicts include Newtonian and Einsteinian physics, traditional cost accounting and ABC cost accounting, and Quicken’s single-entry accounting system and standard accounting’s double-entry accounting system. Such wholesale replacements of cognitive models are called paradigms. Paradigms likewise involve mutually exclusive populations of adopters, and Moore’s technology adoption lifecycle.

Each paradigm is represented by its own circle. The gaps between the circles represent cognitive models in the form of ontologies and, later, after the ontologies are realized, taxonomies. Note that ontologies are represented independently of the implementation considerations represented in UML. The Semantic Web, Web 3.0, will bring more attention to ontologies. Information architects construct ontologies, but they need to be captured during requirements elicitation through ethnographic research.

The radius of each circle ignores commoditization. This implies that the radius is never achieved in reality before the vector of differentiation must change. The technology related to the new vector of differentiation would be a new circle, but the base of the new disruption threshold would be independent of those already depicted on the graph, shown by the blue lines. See slides 53-59 of “So you don’t have a market? Great!”

On slide 57, I should have annotated Points of Parity as PoP, rather than PoC, Points of Contention. This language was first described in Value Merchants. In this vocabulary, points of parity provide no competitive value beyond market participation.

Please comment. Thanks!

Cognitive Models on the Efficiency Frontier

May 19, 2010

In my last post, The Efficiency Frontier, I talked about how products span an efficiency frontier that is always moving. In that movement, an application moves away from the observable into an imagined future. This requires customer followership, as well as customer leadership.

When I talked about the figure in the last post, I mentioned the cognitive load, but I drew the delivered functionality as being linear. Our cognitive limits force cognitive models onto a log-scaled rather than linearly scaled measure. I’ve redrawn the figure to highlight this log scale. Drawing this figure presented a problem, because each of us has a different cognitive limit.

Our cognitive limit shows up in the length of the lists that we routinely handle. The rule is 7 +/- 2 list items. PowerPoint experts might tell you 3 list items. We might be smart enough to handle a list that’s 12 items long, but we are not our customers. Shorter lists cause no harm, where longer lists might cause chunking and the need to move the contents of short-term memory into long-term memory. The following figure illustrates the effort differences when people with different cognitive limits, 3 and 7, are confronted by a cognitive model encompassing 9 concepts.

Different users experience your application on different log scales.

Log scales express bases or positional notation by doubling in length for each additional position. The gray numbers express the base arithmetic. For a list of 3 items, use base 4. For a list of 7 items, use base 8.
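
A small sketch of that base arithmetic, using the 9-concept model from the figure; the chunk count is my own rough companion measure.

```python
import math

def positions_needed(concepts: int, cognitive_limit: int) -> int:
    """A limit of k list items is treated as base k+1 on the log scale."""
    base = cognitive_limit + 1          # limit of 3 -> base 4, limit of 7 -> base 8
    return max(1, math.ceil(math.log(concepts, base)))

def chunks_needed(concepts: int, cognitive_limit: int) -> int:
    """How often short-term memory overflows into long-term memory."""
    return math.ceil(concepts / cognitive_limit)

for limit in (3, 7):
    print(limit, positions_needed(9, limit), chunks_needed(9, limit))
# The limit-3 user chunks the 9-concept model three times; the limit-7 user, twice.
```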

The scales shift to yellow and later to red to indicate the need to move the content of short-term memory into long-term memory. The cognitive models are exactly the same, except their horizontal locations move with the differences in the log scales.

In the next figure, I’ve gone back to the Triangle Model and Efficiency Frontier diagram from the first post and updated it with a log scale for a user with a cognitive limit of 3 items.

The efficiency frontier of an application across a log scale for a user with a cognitive limit of 3.

When we partition functionality into minimal marketable functionality packages, we usually do so in terms of iterations, releases, and cash flows on the vendor side, and value delivery and value proofs for project continuation. Cognitive limits give us a user-centered way to partition functionality. Packaging to cognitive limits ensures that a learner installs functionality that is learned quickly, so each package achieves its ROI sooner. Such packaging also sequences the delivery of minimal marketable functionality in a user-facing, rather than developer-facing, manner.
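
A minimal sketch of that packaging, with invented feature names.

```python
def package_by_cognitive_limit(features, limit):
    """Split the feature list into minimal marketable packages of at most `limit` concepts."""
    return [features[i:i + limit] for i in range(0, len(features), limit)]

# Feature names are invented for illustration.
features = ["capture", "tag", "search", "share", "report", "export", "archive"]
for release, package in enumerate(package_by_cognitive_limit(features, limit=3), start=1):
    print(f"release {release}: {package}")
# Each release asks the user to learn at most three new concepts, so each package
# can be absorbed, put to use, and start returning value sooner.
```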

An application’s features are used in a zero-sum way. If I am using feature A, I’m not using feature B. A minimal marketable function is a network. Each such network exhibits a power-law distribution, or long tail. The features packaged in that minimal marketable function serve as the basis of the user conceptual model.

Developers express a conceptual model captured in the requirements (carried) in terms of design artifacts like UML and code artifacts like frameworks and APIs (carrier) before expressing a conceptual model in the GUI. UML is a long way from the user conceptual model. The conceptual model in the GUI is what we count when applying cognitive limits, but users bring their internal conceptual model to the conceptual model expressed by the GUI. These conceptual models may conflict. Those conflicts may insert additional cognitive efforts into the experience.

The application’s feature networks exhibit a frequency of use. That frequency of use is expressed in the power-law distribution. The frequency of use of a feature expresses the frequency at which a concept is dealt with, which in turn indicates how quickly it is learned. In these figures, the concept at the far right would be infrequently used, and it would be the least known or understood concept in the conceptual model. Adding features always adds to the conceptual model and the learning required to achieve full value.

The following figure shows the conceptual model organized along a power-law distribution (orange) for a user with a cognitive limit of 3.

A cognitive model for a user with a cognitive limit of 3 laid out along a power-law distribution

The challenges involved in dealing with cognitive limits include discovering a user’s particular cognitive limit, discovering the terrain of a population’s cognitive limits in aggregate, and using cognitive limits as an architectural element or aspect of the code itself via Aspect-Oriented Programming (AOP).

Meeting this challenge will enable us to deliver applications that meet our economic buyer’s need for a quick realization of ROI.

Comments?

Efficiency Frontier

May 15, 2010

During the past two weeks, we have discussed best practices out on Twitter. Best practices are thought to be good because they move an adopting business unit closer to the efficiency frontier. The closer you are to the efficiency frontier, the lower the cost of the process involved.

A marketer doesn’t see it that way. Instead, best practices commoditize the process involved. If that process is a differentiator, it won’t be once a best practice is adopted. Yes, costs go down and margin increases, but that assumes price-based competition won’t kick in, driving the price and margins further down without regard to the efficiency frontier.

A product provides value when it breaks a constraint or dampens its impact on a process, so that the users of that product move their processes closer to the efficiency frontier. When the product is an application, the business unit first installs it, then learns how to use it, and as a result of that learning moves closer to the efficiency frontier. The time involved ends up in a metric that Gartner calls the Time To Return (TTR).
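
A rough sketch of Time To Return as a payback point, with invented figures and a simple learning ramp.

```python
def time_to_return(cost, monthly_benefit, learning_months):
    """Months until cumulative benefit covers the cost, given a learning ramp-up."""
    cumulative, month = 0.0, 0
    while cumulative < cost:
        month += 1
        ramp = min(1.0, month / learning_months)   # benefit ramps up as the unit learns
        cumulative += monthly_benefit * ramp
    return month

print(time_to_return(cost=120_000, monthly_benefit=20_000, learning_months=4))  # 8 months
print(time_to_return(cost=120_000, monthly_benefit=20_000, learning_months=2))  # 7 months: less to learn, sooner the return
```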

Making that move closer to the efficiency frontier depends on how much has to be learned.

The application bridges the past and the future relative to the efficiency frontier. I use the Triangle Model to represent any realization, aka any man-made thing: software, hardware, services, pencils, lumps of coal, or lunch boxes. The Triangle Model represents all the decisions made towards realization that were actually shipped, so you end up with a decision tree.

The efficiency frontier passes through the Triangle Model at the Time-to-Return point on the customer’s timeline. Given that the vendor of the software intends to retain this customer, additional work with the application, and further learning, will improve performance well into the future, beyond the Time to Return. All that learning translates into behavior change. That behavior change gates future behavior change.

So here is the figure.

Learning required to achieve movement towards the efficiency frontier

Delivered Functionality and the Efficiency Frontier.

The red line represents the efficiency frontier. The base of the triangle is shown as a series of rectangles. Each rectangle represents a single minimal marketable feature (MMF). The size of the rectangle represents the cognitive load that must be overcome before that feature can be exploited to move the customer closer to their efficiency frontier. Learning here has been serialized. That may not be the case in a customer’s organization.

At the Time To Return, we move from meeting our ROI commitments (Past) to serving more aspirational needs where a practice could become a differentiator for some period of time (Future). These customers would push the efficiency frontier for a time. ISO 9000 was committed to by manufacturers in a given value chain. Eventually all manufacturers got certified, so the differentiator disappeared. It’s not just software that is duplicated by fast followers. Duplication is the point of best practices.

When a customer buys software, they first must learn about it. Content marketing can teach the stakeholders in a purchase. Learning can be allocated. Some companies buy training before they will buy the application. That training might teach them that they do not want the application. WebMonkey taught web developers a wide range of technologies back at the dawn of the web. It did this through a permission campaign that delivered web-based tutorials via an email newsletter. This can move the Time to Return sooner.

Before the customer gained awareness of the category or company or product, they had knowledge of the problem solved or the jobs to be done by the product.

The Triangle Model extends beyond the functionality to tool tasks, user tasks, work design, and further out to meta-management considerations like orchestration, all of which move one or more efficiency frontiers.

Minimal marketable functionality, a feature-driven-design based approach, quickly delivers value. In doing so, each chunk of functionality will present a subset of the cognitive model that the users must learn. With less learning, the Time To Return is achieved sooner.

A final point about efficiency frontiers must be made here: it is not about observing what is being done now. It is about how things will be done in the future. If the underlying technology is disruptive, the efficiency frontier may still be years away. A vendor facing such an efficiency frontier doesn’t have to worry about the future. Getting their market made would be the issue, as would living through the Hype Cycle, since the promise is still far off.

Leave a comment. Thanks.