Archive for March, 2010

Putting Behavior Change on the Roadmap

March 31, 2010

After reading the HBR blog post, “IDEO’s Tim Brown on Using Design to Change Behavior,” here are a few of my thoughts on the matter.

Applications change the behavior of their users. Behavior change happens. My behavior changes, as I discussed in “What does a product change? Part I.” It’s inherent.

Maybe it shouldn’t be inherent, but our requirements elicitation practices, prioritization practices, and integration apps mean that instead of serving a particular functional unit culture, we create an artificial amalgamation, the average user. The real user operates at a distance from the application, because they are not the average user. The user interface is not necessarily the problem. The model is the problem. The model is where meaning is encoded. The view just exposes some portion of the model.

The more functional unit cultures involved, the more meaninglessness is encoded. Each functional unit has its own distance from the application, so each functional unit makes its own efforts to make its work meaningful, efforts it pays for in time and effort out of its own budget. IT doesn’t pay for this time and effort. The costs are embedded in the operational budgets of the functional units.

IT practices become software vendor practices. We all went to the same schools and learned the same techniques. We define requirements elicitation similarly. Vendors adopt Agile, because it worked well enough in IT shops. So vendors average their customer bases just like IT shops do, and deliver averaged functionality even with personas and voice of the customer. Technical architectures can solve these issues, but the key point here is that customer and user behaviors change.

Customer and user behavior change is both an entry barrier and an exit barrier. Behavior change has economic consequence. For a software vendor, anything that has economic bearing on their technologies, products, and services should be on the roadmap.

Getting these behavior changes on your roadmap involves the following questions:

  1. What is your conceptual geography? Developers typically embed conceptual models in UML, but this gunks up the conceptual model and expresses it in a form closer to code than desired. Look at OWL and Semantic Web formulations of ontologies and conceptualizations instead. Conceptualizations tie into terminology management, which spans the words that appear on the UI; the words used in the documentation, marcom, and all other communications activities; and the words used in translated, localized content.
  2. What is your user’s inherent conceptual model, the one from which you capture raw, unprioritized requirements?
  3. What conceptual model does the user create as they are exposed to the conceptual model embedded in your application?
  4. In what order are concepts discovered by the user?
  5. Where are the differences between these two conceptual models?
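One way to make the last question concrete is to represent each conceptual model as a set of relations and diff them. The sketch below is a minimal illustration in Python; the concept names and the two models are invented, and real ontologies (OWL and the like) carry far more structure than triples in a set.

```python
# Two hypothetical conceptual models, each a set of
# (concept, relation, concept) triples in the spirit of an ontology.
user_model = {
    ("invoice", "billed-to", "customer"),
    ("invoice", "contains", "line-item"),
    ("customer", "has", "credit-terms"),
}
app_model = {
    ("invoice", "billed-to", "account"),  # the app says "account", not "customer"
    ("invoice", "contains", "line-item"),
}

# Where are the differences between these two conceptual models?
only_user = user_model - app_model  # meaning the user brings that the app lacks
only_app = app_model - user_model   # meaning the app imposes on the user
shared = user_model & app_model     # common ground

print("user-only:", sorted(only_user))
print("app-only:", sorted(only_app))
print("shared:", sorted(shared))
```

The gaps in `only_user` and `only_app` are where the user operates at a distance from the application, and where behavior change will be demanded.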

If you lay out your roadmap in terms of minimal marketable functionality (MMFs), you will see how each MMF involves a certain subset of the overall conceptual network. This subset expands over time. The first MMF delivered to the market anchors the conceptual model the user develops of the application, and it provides the first increment of behavior change. Think of the functionality as enabling the programmed instruction of the associated behavior. This means that you have a process or succession of state changes occurring, which, much like a hike in the hills, implies a geography, a spatio-temporal geography.
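That expanding subset can be sketched directly: each MMF introduces some concepts, and the cumulative union is what the user has been asked to learn so far. The MMF names and concepts here are invented for illustration.

```python
# Hypothetical roadmap: each MMF (minimal marketable functionality)
# covers a subset of the overall conceptual network.
roadmap = [
    ("MMF-1", {"invoice", "line-item"}),
    ("MMF-2", {"invoice", "credit-terms", "customer"}),
    ("MMF-3", {"customer", "statement"}),
]

learned = set()  # the user's cumulative exposure to the conceptual model
for name, concepts in roadmap:
    new = concepts - learned  # the increment of behavior change this MMF asks for
    learned |= concepts       # the subset expands over time
    print(f"{name}: introduces {sorted(new)}, total learned {len(learned)}")
```

The per-MMF increment is the unit of programmed instruction; its size is something a roadmap can deliberately shape.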

This spatio-temporal geography is a shapeable thing, a designed outcome, typically left to the vernacular, the accidental. With permission marketing and prototyping, you can move learning, so it occurs before the install, so that your time to return (TTR) happens sooner. Prototyping hints at the idea that development practices allocate learning. The more the developer learns, the less the user has to learn, but again average functionality hurts here.

Once you have a conceptual geography and you’ve allocated the learning, you can determine your time to behavior (TTB).

Going further, you can exploit external content and trends. You might try to figure out how soon your keywords will become buzzwords. And, beyond meaning fitness and its supporting technical architectures, which will require vendors to change their development practices, maybe, and only with deliberate intention, you might want to change the customer’s/user’s/industry’s behavior. If so, put it on your roadmap!

Tim Brown discusses business offer elements, not just application functionality. Early market applications expose business offer elements thinly, but moving into the late market, expanding the business offer elements and bringing them online becomes a key reality. You can change behavior across your entire offer, not just with the application’s functionality.

Comments, anyone?

The Word is “Discontinuous”

March 12, 2010

Innovations come in two flavors: continuous and discontinuous. The market decides. There is nothing inherent in the idea itself that discerns which is which. Market research determines the matter.

If you have a market, it is continuous. If not, it is discontinuous. Still, it depends on how you frame the innovation. You can frame a discontinuous innovation as a radical innovation, or a continuous innovation.

Object-oriented programming was once a radical thing. The release of the first object-oriented Windows API settled the matter, because it did little more than add gets and puts, a continuous framing from functional programming. Even today, much of the promise of objects has gone unfulfilled. That has led to a renaissance of the radicals pushing object thinking, with its inherent opposition to gets and puts.

What happened with these shifts from radical to continuous and back to radical was a change in the idea’s vector of differentiation. Every aspect of an idea has a vector of differentiation, which implies that an idea has many vectors of differentiation. In framing we pick, we prioritize, we package, we create a conceptualization, which evolves into a product, and an offer. Every aspect of an offer has a vector of differentiation. This takes care of the business model innovation crowd.

Every vector of differentiation has a price-performance curve, or s-curve. Those business model innovations are a collection of s-curves. S-curves settle the matter of “disruption.” S-curves have been around longer than Christensen; he used them in his books on disruption. Disruption happens when the slope of the entrant’s s-curve exceeds that of the incumbent’s. Such an event happens only after a particular idea has been invested in and improved to that point.

Such an event can happen to an idea over and over. Why? Implementation matters. If you achieve disruption but get stalled, so that your aggregate slope falls back below the incumbent’s, well, ouch! Then, you might just achieve disruption again. Time and money matter, but so does hitting the price-performance targets with the R&D.

I found myself asking when would an innovation achieve disruption, not in terms of when one would see the disruption, but on a timeline, in time.

A first attempt at defining the Time to Disruption.

When will your innovation achieve disruption?

Here I drew a single s-curve. The attacker is near the bottom of the s-curve. The slope of the attacker’s s-curve is increasing. The incumbent is near the top of the curve. The slope of the incumbent’s s-curve is decreasing. Those two rates are going to intersect. But, when?

I decided to reflect the incumbent’s position around the s-curve’s inflection point and then project it down to the time axis. The time interval between the attacker’s current position on the s-curve, and the incumbent’s projected position represents the Time to Disruption (TTD).
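One reading of that reflection can be computed directly, assuming a unit logistic s-curve f(t) = 1/(1 + e^(-t)) with its inflection at t = 0. The performance positions chosen for the attacker and incumbent below are illustrative, not from the figure.

```python
import math

def logistic(t):
    """Unit logistic s-curve with inflection at t = 0."""
    return 1.0 / (1.0 + math.exp(-t))

def t_of(p):
    """Invert the logistic: the time at which performance level p is reached."""
    return math.log(p / (1.0 - p))

# Illustrative positions: attacker near the bottom, incumbent near the top.
attacker_p = 0.10   # attacker's current performance level
incumbent_p = 0.85  # incumbent's current performance level

# Reflect the incumbent's position around the inflection point (t = 0)
# and project it down to the time axis.
reflected_t = -t_of(incumbent_p)

# Time to Disruption: the interval between the attacker's current position
# and the incumbent's reflected, projected position.
ttd = reflected_t - t_of(attacker_p)
print(f"TTD on the single-curve reading: {ttd:.2f} time units")
```

This only works because the single logistic is symmetric about its inflection point, which is exactly why the two-curve version below the figure is the more honest picture.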

This is a bit nuts, because there should be two s-curves, not one.

Time To Disruption based on two s-curves with projections to Moore's Technology Adoption Lifecycle

Time To Disruption based on two s-curves.

Here I’ve shown the two s-curves, their rates, and the disruptive event. The interval between the attacker’s present, NOW, and the disruptive event, again, constitutes the Time To Disruption. The relative positions of the s-curves are arbitrary, except for the expectations of where an attacker would be in their s-curve and where an incumbent would be on their s-curve. You would expect an incumbent to be in or approaching commoditization, which typically happens below the top of the s-curve, but beyond the inflection point of their s-curve.
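With two s-curves, the disruptive event can be located numerically as the first moment when the attacker’s rate of improvement exceeds the incumbent’s. The curve parameters below are invented for illustration; a sketch, not a forecasting tool.

```python
import math

def s_curve(t, midpoint, steepness, ceiling=1.0):
    """A logistic s-curve: performance over time."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

def slope(t, midpoint, steepness, ceiling=1.0, h=1e-5):
    """Numerical rate of improvement (slope of the s-curve) at time t."""
    return (s_curve(t + h, midpoint, steepness, ceiling)
            - s_curve(t - h, midpoint, steepness, ceiling)) / (2 * h)

# Illustrative parameters: the incumbent is past its inflection point;
# the attacker starts later but improves faster.
incumbent = dict(midpoint=0.0, steepness=0.8)
attacker = dict(midpoint=6.0, steepness=1.5)

now = 2.0  # the attacker's present, NOW
t = now
while slope(t, **attacker) <= slope(t, **incumbent):
    t += 0.01  # scan forward for the disruptive event

ttd = t - now
print(f"Disruptive event at t = {t:.2f}; Time To Disruption = {ttd:.2f}")
```

The crossing of the two rates is the disruptive event; projecting it onto a timeline is what turns “disruptive” from a label into an interval.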

The attacker and incumbent would likewise be in different positions on Moore’s technology adoption lifecycle. The disruptive event was projected into both lifecycles. For an attacker, disruption marks an important moment in its efforts in the early market. Disruption would happen before sales would push the market into the tornado, so I’ve just annotated the end of the tornado, and my expectation that disruption would occur as a trigger to a tornado. The incumbent would be in the early market, the late market, or the aftermarket (not shown).

The point here is that talking about disruption is speculative. A technology isn’t disruptive in the front windshield view of the daily do. A technology is disruptive in a retrospective manner. Use the word “discontinuous” instead.

Technology is the application of thought. Every discipline, even street sweeping, engages in thought, aka technology, not just high tech. Innovation is an idea, OK, an idea undergoing commercialization, or adoption and sale. And, one last swipe at sloppy lexiconizations: only technology is adopted. Products and services are sold, or given away. Products are not adopted.

Yeah, I know, incremental, as in how we deliver it.

The words have their effects in the world, but that is a topic for another day.

PCSC Presentation: Poisson Games in Technology Adoption

March 8, 2010

I proposed this presentation for ProductCamp SoCal. The title was “So you don’t have a market. Great!” It did not get enough votes.

I did present a similar presentation at ProductCamp Seattle last October, but I came at Poisson games through game theory and omitted most of the technology adoption lifecycle content. I had intended to repeat the PCS presentation, but the PCSC leads cut presentation time from 45 minutes down to 25 minutes, not enough time to get that done.

The main point of both presentations was to make product managers aware of Poisson games. In the latest presentation, I made more of a point to tie the content to product management. The technology adoption lifecycle can be represented as a series of Poisson games. The technology adoption lifecycle also defines some limits to the voice of the customer.
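In a Poisson game (in Myerson’s sense), the number of players is itself a random variable drawn from a Poisson distribution rather than a known constant, which fits a market whose size you do not yet know. A minimal simulation, with phase names from Moore’s lifecycle and expected player counts invented for illustration:

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw a Poisson-distributed player count (Knuth's method)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Each lifecycle phase modeled as a Poisson game with a different expected
# number of players (adopters). The lambdas are illustrative only.
phases = {"early market": 3, "bowling alley": 15, "tornado": 120}

rng = random.Random(42)
means = {}
for phase, lam in phases.items():
    counts = [poisson_sample(lam, rng) for _ in range(10_000)]
    means[phase] = sum(counts) / len(counts)
    print(f"{phase}: expected players {lam}, simulated mean {means[phase]:.1f}")
```

The point for product managers is the shape of the uncertainty: in the early market you are playing a game where even the number of other players is a draw, which is exactly where voice-of-the-customer techniques run out of road.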

This presentation still wasn’t very interactive. It was also too long during rehearsal, and it got longer after the ProductCamp. I have some ideas about that for the next time I present on this topic.

Let me know what you think. Leave a comment. Thanks!