Archive for December, 2010

Technology, A Definition

December 20, 2010

Last January Christopher Cummings asked if he could interview me for his blog, Product Management Meets Pop Culture. In the resulting interview, “Product Manager Interview: David Locke,” published in early February of 2010, we discussed leadership and the need to maintain a proactive time stance. My response to a question on innovation and technology was disproportionately long, so it was omitted. We are putting that response to use in this post. Enjoy.

Thanks Chris!


Q.) You’ve said, innovation doesn’t necessarily mean “new” or “technology”. What does it mean, and how can product management best contribute to bring innovation to the market?

Innovation not being new, did I say that? That might be the hardest aspect of this question.

I’ll start with my definition of “technology,” and notice that we are not talking about high tech, or some prioritization scheme making some things more tech than others. Technology is the application of thought. When I was fifteen, a guy on a job with me showed me how to use a shovel. That was knowledge. That was thought. That was technology. It made the work easy. Everyone thinks. Every functional discipline person, every business unit person thinks. They think within the contexts of their work, the functional culture of their professions. They work within the definitions they learned in school, life, and the workplace. Some of those definitions are undergoing adoption as we speak: one paradigm replacing another, one paradigm succeeding, another one fading.

An innovation is an idea. Innovations are either divergent or convergent. They tend to diverge, and then converge as they gain acceptance. Notice that I didn’t say anything about continuous, discontinuous, radical, disruptive, sustaining, or incremental. That lexicon gets attached later when the birthed idea is commercialized. The nascent divergent or convergent innovation becomes the model, for a commercialization that gives rise to a view that is continuous or discontinuous based on the existence of a market or its non-existence. This determination is where most innovations die. Some see the non-existence of the market as a reason not to invest, based on their previous attempts to push innovations into the market using go-to-market approaches that consistently work for continuous innovation, but fail consistently for discontinuous innovation. Discontinuous is a go-to-market mechanism.

Even a discontinuous innovation, seen as radical can be reframed to seem continuous. Object-oriented was radical, became a near cousin of functional programming, and finds itself being once again radicalized, and in some quarters being trashed in a return to a strict functional programming.

Only later does a discontinuous innovation become a disruptive innovation. This happens as the slope of the S-curve or price-performance curve of the innovation exceeds that of the earlier innovation, which in turn causes the adoption of the discontinuous innovation which drains the previously accepted conceptualization of its market.

The existence of a market allows you to go straight to that market without dealing with technology adoption. If the existing market is a vertical market, enter the vertical market. Likewise with early mainstream, late mainstream, information appliance, or embedded markets.

Don’t talk to me about getting your product adopted when you mean getting it sold. Technologies are adopted. Products and services are sold. Products and services are instances of the technology, and are intended to get the technology adopted. Devices or form factors are instances as well. That those products and services are sold just provides us with cash and financial market glory. Your technologies, or those of other vendors that you use in your products or to provide your services, determine your place on the technology adoption lifecycle.

Ideas are technology, or the application of thought, but not necessarily high tech. Ideas are everywhere in our organizations. Some are “in-offer,” some are not, or not yet “in offer.” Some ideas will never be “in-offer.” Yes, art and design guys have ideas. Yes, managers have ideas. Most of the talk these days seems to be about usurping the engineer’s place in innovation. I get tired of the fight for supremacy in the innovation sphere. Ideas have their place across the organization, and across the technology adoption lifecycle. Our businesses and products are not all in the same place, space, or timeframe. The core competencies of our firms vary. Those core competencies drive how our firms value ideas. Corporate cultures determine whose ideas are more important and conversely whose ideas are not important. This effect of corporate culture is unfair, and yes, costly to our firms, and deadly to our offer components. Offers tend to expand in the late market.

Not all of our products are about high tech carriers or protocols. Some products live on top of the technical carriers, and at the core may be functionality and technology derived from a functional domain, which I call “content,” or “the carried,” not necessarily text, graphics, sound, or video. A cost accounting program is an application, or carrier, that carries cost accounting content. It’s like back in data structures class, where I used to see the data structure mixed into the other code. Keep your carrier and content separate. Innovations originate in both carrier and carried content.

Ideas are concepts. Concepts live in conceptualizations. We don’t really pay enough attention to concepts as product managers. Concepts lead to terminology, but we skip this stuff and move directly to UML.  In the case of carried content, I have to ask why we let developers create the terminology. Slow down, deal with the overhead, because it is rich—rich as in profitable.

Lastly, what did I mean when I said that innovation is not necessarily new? When I talk about divergent or convergent ideas, at that time they are new. That may have been years and years ago, long before we commit them to code. But divergent means moving away from old ideas, and convergent means mixing a bunch of old ideas together that have never been mixed in that manner before. Ideas are concepts. A concept by itself cannot be explained until it is surrounded by, or socialized by, a collection of other existing (old) concepts.

An Apple is a Pear, but not pear shaped, but…, but…, and they keep doctors away. That apple is a collection of concepts. As a pear, the apple concept converges with the pear concept; then with each of those “buts” the apple concept diverges from the pear concept, and obviously diverges from the doctor concept.

At the end of the happy hour, this old-new business is embodied in something called the old-new contract. New, or previously unknown, things cannot be explained except in the context of the old, or known. But doesn’t that seem like an innovation must be new? Not quite.

The new can likewise recontextualize the old, so you end up with the old-new, and the new-old. The new can contextualize other new ideas. And old ideas can recontextualize old ideas. Neo-classical economics is really classical economics reconfigured by some later economists in the Austrian school of economics, but neither is new; the new there would be behavioral economics.

Still, there is another reason why innovations are not necessarily new, and this stems from what happens to get an idea published in a peer-edited journal. A creator knows people who come to be apostles and who participate in the further development of the idea in an invisible college, which means, these days, that they exchange email. If you’ve ever read a research paper seriously, you’ve seen bibliographies that mention unpublished papers, letters, and such—the stuff of the invisible colleges. The creator and the apostles write journal articles and they get turned down. They go unpublished. Eventually, the creator’s idea gains credence with enough people in the creator’s discipline that they get a paper published. From there we read about it, and sometimes think, hey, we could use this idea in our product. Then we put it in the product without asking, without getting a lawyer, without a license, and nobody notices until the lawsuit decades later. An idea is old before you ever know it. The internet can get ideas out into the public earlier. But, that is just communications, rather than adoption or acceptance. Still, it gets the ideas into our products and services, but that lack of adoption or acceptance represents a risk in our efforts to get the technology adopted in the market, or our products based on the idea sold.

Product managers need to realize that the decisions about how to commercialize an innovation may have been wrong. If you find yourself going nowhere in the market, go back and ask if a market exists. If it doesn’t, set about creating a market using Moore’s process, the whole process. You may be forced to go headlong into a non-adopting market, but splitting your forces won’t kill you; you are already dead.

Product managers need to know what innovations are entering into their offers. Letting an innovation into your offer might motivate a contributor.


Requirements as Circles

December 14, 2010

Last Wednesday morning, as I climbed out of bed, a stray thought hit me: Requirements As Circles. This originated with some comment on a blog, link unknown at this point, about including or not including the UI in the requirements. So requirements were bouncing around in my head, as I had hit an impasse on writing about functional cultures, or the post on what I’ll just call Metcalfe’s law, a visualization.

I scribbled a few sketches in my notebook. Only one of them involved a circle, and even that one translated circles into set theory, which misses the point. If requirements are circles, it wasn’t going to be set theory related, because the perspective in these posts remains tied to analytic geometry and measurement. Set theory is a well-known approach that marketers call market segmentation, which I claim leads to averaging and leads away from mass customization as a strategy for countering price-based competition, as if there were a price these days.

Requirements as Sets

So we have an application running on an operating system, Windows (set w) and running in a browser, IE8 (set b).

No, these circles have to have meaningful radii, a direction, Fourier analysis sidebands, bits, bandwidth, existence, vectors of differentiation, s-curves, price curves, and hints to conics, as well as functional cultures. So a first cut happened, then a second. So let’s get on with it.


Requirements as Statements

These are the requirements that we are familiar with. The requirement as we know it is a statement and some unique ID that we use for traceability. The statement might be presented in an outline just to keep the statement associated with other statements around a common subject.

I. Environment

  1. The application will be used within the Windows v… operating system [234231]
  2. The application will run from the IE 8… browser [234265]

Just a quick example. Not advocating any particular operating system or browser here. They are market segments. They are dollars. They are constraints and affordances.
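A sketch of what requirement records like the two above might look like as a data structure; the field names are invented for illustration. The IDs are the traceability hooks, and the outline section just keeps related statements together:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: int     # unique ID used for traceability, e.g. 234231
    statement: str  # the natural-language requirement statement
    section: str    # outline heading that groups related statements

reqs = [
    Requirement(234231, "The application will be used within the Windows operating system", "I. Environment"),
    Requirement(234265, "The application will run from the IE 8 browser", "I. Environment"),
]

# Group statement IDs by outline section, as the outline above does.
by_section = {}
for r in reqs:
    by_section.setdefault(r.section, []).append(r.req_id)
```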


Requirements as Decisions

Back in 1987, I attended a NASA sponsored hypertext conference. Some presenters discussed their work at MCC on overcoming the context problem limiting formal requirements. Their key insight was that requirements were decisions.

It turns out that turning a decision, or question, into a logical proposition was more straightforward than turning a natural language statement into the same logical proposition.

If implies then. The probability of then given if (WIF/ASIF). Cute, the time lag between the conceptualization and the realization is built into the conditional probabilities. Anyway, …
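As a minimal worked example of reading a requirement decision probabilistically, here P(then | if) is estimated from invented counts:

```python
# Invented counts for illustration: how often the "if" condition held,
# and how often the "then" outcome followed when it did.
count_if = 40
count_if_and_then = 30

# P(then | if): the conditional probability of the outcome given the condition.
p_then_given_if = count_if_and_then / count_if  # 0.75
```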

The above statements turn into questions:

  1. Runs on? [234231]
  2. Runs in? [234265]
  3. …?

From a roadmap perspective, you want your answers to eventually be all inclusive. You want the operating system to no longer matter. You want it to be sublimated. You want to reach beyond the Windows market. Likewise with browsers or databases. You get to these marketing objectives through the abundant use of the adapter pattern.

Over time your answer might change, but the question stands the test of time. Everything is moving to the cell phone or cell-linked pad, so you find yourself one adapter pattern away from the new hot tech platform, a consumer, non-code-geek thing. The requirements beyond the technical platform of the application do not change much as long as you’ve isolated those technical platform requirements into a layer. And, while we are at it, we’ll ask just how much of a vector of differentiation is our technical platform? Sure, with certainty it’s a market segmenter, the framework for all things code, the hits in terms of feature frequency of use, but value? Sublimated, the gate to play, but that is all. Most of those features will be points of parity.
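The abundant use of the adapter pattern mentioned above can be sketched as follows; the class and method names are illustrative assumptions, not any particular framework’s API:

```python
# The application codes against one platform interface. Each answer to
# "runs on?" becomes an adapter, so changing the answer never touches
# the application logic; it stays one adapter away from the next platform.
class PlatformAdapter:
    def open_window(self, title: str) -> str:
        raise NotImplementedError

class WindowsAdapter(PlatformAdapter):
    def open_window(self, title: str) -> str:
        return f"win32:{title}"

class BrowserAdapter(PlatformAdapter):
    def open_window(self, title: str) -> str:
        return f"dom:{title}"

def run_app(platform: PlatformAdapter) -> str:
    # Application logic written once, against the question, not an answer.
    return platform.open_window("Reports")
```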

Requirements as Decisions

Notice that the alternatives chosen from in a decision are associated with a number, binary here. Each decision defines its own dimension. Each dimension has its own axis. The chosen alternative is positioned along its axis by its number. Note that a given functional requirement has any number of non-functional requirements (constraints) associated with it. A spatial geometry gets messy quickly.

I’ve described building a geometry around bits in previous posts. See “Building a Dog. Oh, Make that a Cat”, “Now that you have that Cat”, and “Taxicab Geometry”.

I refer to any range of numbers or the number of bits as bandwidth. Bandwidth typically limits the number of bits delivered simultaneously. In a software application, the task does the clipping, so counting the number of bits across an interface gives us a bandwidth that is much larger than that of the interface once in use.
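A toy illustration of counting bits across an interface, and of the task clipping that bandwidth; the decisions and their alternative counts are invented:

```python
import math

def bits(alternatives: int) -> int:
    # Bits needed to encode a choice among n alternatives.
    return max(1, math.ceil(math.log2(alternatives)))

# Raw interface bandwidth: every decision the interface exposes.
interface = {"os": 4, "browser": 6, "locale": 32, "theme": 2}
raw_bandwidth = sum(bits(n) for n in interface.values())  # 2+3+5+1 = 11 bits

# A given task touches only some decisions, clipping the bandwidth in use.
task = {"os": 4, "browser": 6}
task_bandwidth = sum(bits(n) for n in task.values())      # 2+3 = 5 bits
```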

Returning to the big picture of requirements as questions, notice that we are specifying architecture, so in an Agile effort, those architectural components each need their own persona to ensure that they get built.

I realize that I’m crossing over the what (carried) and how (carrier) divide with my example. I should have used an example originating strictly from an automated domain (carried).

In real life, the reason for not specifying how is that the how changes all the time. At a user interface workshop, years ago, another attendee claimed that “web-based” was a legitimate requirement. I disagreed, because I lived through the mainframe to three-tier client-server transition and watched it (carrier) change many times over while the what (carried) hardly changed at all. That “How” requirement might have changed to cell-phone based at this point. Requirements, good requirements, live forever–or until the next paradigmatic shift in the functional culture. Requirements just change their expression. Architecture enables that expression to change within a single release cycle. That architecture presents real options to the business, so it is not optional.

“Web-based” is a legitimate contractual term with a custom software development team, but that doesn’t elevate it to a requirement.


Requirements Traceability

Traceability is one of those bookkeeping issues around requirements management. Do we actually manage requirements?

When we solve a problem we generate a solution space, and then we search that solution space for the convenient or optimal solution–the solution that meets all the non-functional requirements, or satisfies the preferences of the largest number of stakeholders to one degree or another. We do this over and over throughout a software development effort. We do it during requirements elicitation, design, coding, and testing. We do it anytime we develop an artifact via successive feedback loops. Traceability extends through all of this effort.

As a shorthand, we diverge and then we converge. As we converge we trim our decision tree. We move from one branch to another. We navigate through the tree. Our requirements trace stays within the bounds of the trimmed tree. Other explorations are omitted from the build. The build reflects only those decisions going out the door, shipping. The rest of the decisions remain in limbo in our version control system. We can revisit them in the future. We can keep working on those decision threads that are anchored out in version control limbo. One day, we will break that constraint, then WoW!

When we converge, we can converge to a point, or to a collection of points. I represent this collection of points as a line. That line represents the API and GUI components of a release. Since that line represents the base of our decision tree at the time we shipped, I call that line the NOW line. This decision tree forms the basis of the triangle model.
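The trimmed decision tree and its NOW line can be sketched with a toy tree; the decisions and which branches shipped are invented for illustration:

```python
# Each node is a decision; its children are the alternatives explored.
tree = {
    "root": ["ui", "storage"],
    "ui": ["web", "desktop"],    # both diverged; only web converged
    "storage": ["files", "db"],  # only db converged
    "web": [], "desktop": [], "files": [], "db": [],
}
# Trimming: only these decisions go out the door; the rest stay in
# version-control limbo for later revisiting.
shipped = {"root", "ui", "storage", "web", "db"}

def now_line(node):
    # The leaves of the trimmed tree: the API/GUI base at ship time.
    kept = [c for c in tree[node] if c in shipped]
    if not kept:
        return [node]
    leaves = []
    for child in kept:
        leaves.extend(now_line(child))
    return leaves
```

Here `now_line("root")` walks only the shipped branches and returns `["web", "db"]`, the base of the decision tree at ship time.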

I described the triangle model in “Building a Dog. Oh, Make that a Cat”, “Now that you have that Cat”, and “Gary Hamel’s Pyramid and the Triangle Model”.

I’ve written about the triangle model as a means for analyzing any media, not just software. A media is anything that beats together a carrier and a carried (content). A radio is the obvious media. It beats the content of the show, the sounds, together with the fundamental frequency that we tune to when we set our radio to the station’s channel. Our radio filters out the fundamental frequency, so we can hear the show.

Software is likewise a media. A statement that I ran across years ago was that programmers abstract away from the requirements. It’s easy to talk about What vs. How, but programmers are all How. That the What gets done is something of an accident. Or, it used to be back in the day when a developer would claim, “I deliver functionality. I don’t know anything about interfaces.” Interface designers enable the programmer’s continued focus on carrier.

At the core of the issue is that when you code a product for geeks that don’t know the what, it all looks like how. When you code technology itself, rather than some use of that technology, it all looks like how. Still, I remember working with some developers that were coding a code generator. They did it for developers just like themselves. Or, the framework developers that didn’t weigh the cost of learning a framework vs. writing your own. If you did the latter, you would know it and wouldn’t have to learn another one. Learning a framework is tough. Sure, learn what you need, but the whole thing and its way of thinking takes effort. Sure, play. The real economic value was in coding for those different from ourselves, or coding to reduce learning.

Requirements elicitation also hides the carried nature of the application to be automated, as does UML. UML provides a platform for communicating among developers, rather than a means of capturing the ontologies being automated. This problem will be highlighted as we move to a development methodology built on top of the Semantic Web. Still, containers are not the contained. And container semantics is not semantics.

Requirements as Traceability

The gold area is the extent of the divergence and convergence, the search space as it is generated and subsequently searched. The blue lines outline the decision tree that was actually shipped, the code contributing to the realization. The width of the base represents the number of bits shipped in the realization, its bandwidth. The red line represents the trace of a single requirement.

The trace of a single requirement reaches across all stages or phases or classes of decisions that comprise a realization or development effort. The trace at some point might be outside the bounds of the shipped realization, but the tree would be reorganized once that requirement actually shipped. The trace when it is outside the shipped triangle is an option implying that its further development may be continued or stopped. A requirement might actually get trimmed from the decision tree, and never shipped.

A trace might branch across the tree. A trace might generate impedances up the tree and force the implementation of other architectural elements to change. A trace may originate outside the shipped realization and flow into that realization via an API. A trace can cross layers via an API to or from your technology layers.

A trace can terminate at the features in the GUI or API, or it might flow into the task performance, work performance, collaboration, and meta-management layers of the triangle model well beyond the interface and into the depth of the hype cycle.


Meeting in the Middle

Design situates itself between the requirements (what) and the implementation (how). The job in the design phase is to align and balance the requirements against the constraints and affordances inherent in and provided by the development or technical environment. Think of a collection of pistons, force against force.

Design as a Meeting of the Requirements and the Implementation Environment

The gaps between the requirements and the implementation environment represent the space in which design contributes to the solution or realization.

More abstractly, this view, taken on a bit-by-bit basis across the bandwidth of the realization, can provide us with a visualization of the balance of forces in a realization effort at a given moment in time.

Affordances make realization easy. Constraints make realization difficult. Gaps arise between the requirements and the affordances and constraints of the implementation environment.

Affordances, Constraints, Requirements, and Gaps

The gaps indicate a measured difference between intention and expectation, between the requirements and the implementation environment. Being measurable leads us to a metric space, or a space that has a unit measure. The bits in a bandwidth are likewise a metric space, a space having a unit measure, the bit.

One difficulty will be that a given requirement has different stakeholders with different preferences, which leads to scaling issues across each gap. This means that each bit, or each requirement as a collection of bits, would have its own measurement axis.
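A sketch of measuring those gaps, with invented requirements and numbers. Note the per-axis direction factor, which is exactly the scaling problem just described: lower is better on one axis, higher on the other:

```python
# Intended level per requirement, each on its own axis and unit.
required = {"response_ms": 200.0, "uptime_pct": 99.9}
# What the implementation environment currently affords.
afforded = {"response_ms": 350.0, "uptime_pct": 99.5}

# Direction of "better" per axis: -1 when lower is better.
direction = {"response_ms": -1, "uptime_pct": 1}

# Negative gap: the environment falls short; design must close it.
gaps = {k: direction[k] * (afforded[k] - required[k]) for k in required}
```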

An interesting side effect of this is that the moment in time when a requirement is realized gives rise to the existence of the measurement axis from the point of view of the released product. Later, in our circle model of a requirement, this asserting of existence is the origin of the vector of differentiation for the requirement. This vector exists prior to axis existence or origin, but it will not be expressed in the circle model, an external-facing or market-facing model of a requirement.

Requirements in Releases

In subsequent releases, across the released bandwidth, the degree of realization and performance of the implementation, as measured across the nonfunctional requirements for a given requirement, can improve. Such changes would alter the gaps between requirements and implementation. This hints towards the minimum viable product.

Over time persuasion and market knowledge can alter the preferences of the stakeholders of the realization. Such changes would result in some rescaling of the measurement axes of the individual stakeholders. See “Ordinals for Product Managers” for more information on stakeholder preferences, ordinals, and utils.

Another view of a release defines requirements in terms of utils, a unit-less measure of utility defined via stakeholder preferences.

Release as Utils

Scaling via utils provides a unified vertical scale across all requirements. New requirements are highlighted in red. The other requirements are measured relative to their expected performance in the next release. The green axis indicates that origin, and existence recognition begins at the red line and not earlier.
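A minimal sketch of that rescaling, assuming invented per-axis scores and stakeholder preference weights; multiplying the two yields unit-less utils on one shared scale:

```python
# Each requirement scored 0..1 on its own measurement axis.
scores  = {"search": 0.6, "export": 0.9, "sync": 0.3}
# Stakeholder preference weight per requirement.
weights = {"search": 5.0, "export": 2.0, "sync": 3.0}

# Unit-less utils: one vertical scale across all requirements.
utils = {k: scores[k] * weights[k] for k in scores}
total_utils = sum(utils.values())
```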

Release as utils including the pre-release measurement scale

Here we have dropped out all the details to illustrate the extent of the requirement’s measurement scale, which extends into the implementation time frame as negative numbers.


Requirements as Circles

Now we will move into the metric geometry of a measurable requirements representation.

Requirements as Circles

In this representation, r1 represents the cumulative number of customers sold, while r2 represents the cumulative number of customers lost. These measures might be hard to peg on a single requirement, so use a collection of requirements. When a sales rep tells you they have to have x to close a deal, pull this out. Show them how small that customer is.
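As a toy version of this representation, with invented counts: r1 and r2 are the two radii, and the customers you still hold live in the annulus between the circles:

```python
import math

won, lost = 1200, 300  # cumulative customers sold and lost
r1, r2 = won, lost     # the two circle radii

retained = won - lost                          # seats still held
annulus_area = math.pi * (r1 ** 2 - r2 ** 2)   # the retained band
```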

The direction of the vector of differentiation is arbitrary in most of these representations. When you put two vectors or a bivector into the representation, you might find that you need some notion of direction. We will see this later when we deal with commoditization.

Both r1 and r2 are measured against a single vector of differentiation, so aggregate requirements that contribute to that vector of differentiation. We tend to think of vectors of differentiation as features, but what happens when offer expansion begins to include business functions like shipping in the offer? The fact is that a vector of differentiation could be a task, aka some unit of what Christensen called “Work to be Done.” Sales reps use something called the FAB framework to turn features into benefits or sizzle. In software, task performance is the benefit to a user; competitive advantage the benefit to an economic buyer; collaboration, choreography, and orchestration the benefit to other economic buyers. Anchor a vector of differentiation outwardly from the requirements and deep into the use space, far beyond the “It’s the Interface Stupid” space. There is money out there.

Requirement as Vector of Differentiation Across the Triangle Model

Two different vectors of differentiation, shown in red, illustrate how you can pick your place with a vector of differentiation.


Requirements as Circles and as Frequencies

Back in trig class, a unit circle generated a sine wave. A complex waveform can be decomposed into a collection of sine waves via Fourier analysis. It hints that every signal has a sideband, or every website has a multitude of monetizations. Here, with a requirement being a circle, that requirement would likewise be decomposable into a multiplicity of value provisions and revenue events.
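The decomposition analogy can be made concrete with a naive discrete Fourier transform; the two component frequencies below stand in for two value provisions mixed into one signal:

```python
import cmath
import math

def dft(signal):
    # Naive discrete Fourier transform: decompose a sampled signal
    # into its component frequencies.
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n)) for f in range(n)]

# A signal mixing two components: frequency 1 and frequency 3.
n = 16
signal = [math.sin(2 * math.pi * 1 * t / n) + 0.5 * math.sin(2 * math.pi * 3 * t / n)
          for t in range(n)]

# The spectrum recovers the two components as its two largest peaks.
spectrum = [abs(c) for c in dft(signal)]
peaks = sorted(sorted(range(n // 2), key=lambda f: -spectrum[f])[:2])
```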

Read “Who is Fourier” for an accessible book on the subject, and on mathematics in general. Read “Software by Numbers” for a better understanding of the minimal marketable functionality approach to revenue events and customer cognitive load management.

Requirements as Markets

Here we break our market down to addressable, anticipated, and current customers or seats. This looks like set theory, but it is based on measurement against a vector of differentiation generated by a requirement or set of requirements.

Requirements in Market Consumption Processes

In a representation where we illustrate the gain and loss of customers for a vector of differentiation, the number of customers is finite, so the market consumption process would be an Ito process. Ito processes are stochastic processes with a finite memory.

Ito processes are relatively new math. I’ve not studied them enough to know if a finite memory means constant sized. I hope not. An Ito process contrasts with Markovian processes, processes without memory, and Gaussian processes, processes with complete memory. In my slideshare presentation, I’ve discussed how Markovian processes better represent Moore’s technology adoption lifecycle.

One of my old phones has a game called Snake on it. The snake eats food, gets longer, and lives as long as it doesn’t crash into a wall or bite itself. That snake moves just like an Ito process. Requirements move from being points of differentiation to being points of contention, and on to dying as a point of parity. Value moves and the value migrates. A requirement contributes value for a while and then it doesn’t.
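A toy illustration of finite memory, not a formal Ito process: a random walk whose drift depends only on a fixed-length trailing window of its own recent steps, like the snake’s fixed-length body:

```python
import random
from collections import deque

def finite_memory_walk(steps, memory=5, seed=42):
    # The next step is biased by the mean of only the last `memory`
    # increments; everything older is forgotten.
    rng = random.Random(seed)
    recent = deque(maxlen=memory)
    x, path = 0.0, []
    for _ in range(steps):
        drift = sum(recent) / len(recent) if recent else 0.0
        step = 0.5 * drift + rng.gauss(0, 1)
        recent.append(step)
        x += step
        path.append(x)
    return path
```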

The familiar statistical analysis is based on a Gaussian world, a world of complete memory. We have data warehouses, and data mining that assumes that the numbers constitute complete memory, but when we improve our processes, our averages tie us into the past before the improvements. We cannot turn a corner in a Gaussian statistical process. The Markovian process turns corners constantly. Markovian processes discover. Gaussian processes enforce. Ito processes will turn out to be a hybrid between the two, discovering at the front end, enforcing at the tail end.

When you lose customers, you may be losing the rationale, the stakeholders for certain requirements. Those requirements may no longer provide value in your current stakeholder pool. Your requirements are pulling an Ito on you.

Requirements as Circles Having a Probability Density

Imagine the cross section of the probability function as being directly over the vector of differentiation. Since an Ito process has a finite memory, its probability function is zero beyond that finite memory in what I call the “doughnut hole.” You see the doughnut hole in every major American city. It is what I call the blight zone between the suburbs and the working urban core, the poor areas characterized by higher crime and lower rents. The value goes to zero there, as it does for Christensen’s overserved, and lost customers and seats.

The probability distribution falls to zero shortly beyond the commoditization event.

When a particular vector of differentiation is commoditized, you change vectors of differentiation. Every vector of differentiation has an s-curve, or price-performance curve associated with it. Commoditization occurs near the top of the curve where huge investments generate little improvement.

Faster processors were commoditized when processors got too hot, and the constraint of the SCSI bus was the real limiter in terms of perceptible speed. Moving to DDR was the vector of differentiation change that allowed processors to generate the faster performance without actually being faster, or in this case hotter.

The processor didn’t lose all of its value, so it didn’t fall to zero, but it fell big time. The $600 laptop is a sign of that fall, commoditization, value migration. So it might be a bit much to assume that your vector of differentiation will fall to zero. Points of parity are generally the gateway to play in any convergent industry, which is to say just about any non-startup company. In software, documentation was one of those points of parity that kept entry into the software industry higher than it needed to be. Documentation was a market barrier, aka nobody reads the manual. Or, maybe the latter was a crock and documentation still provides value.

Changing Your Vectors of Differentiation, Changing Your Circles

Here we illustrate what happens when your vectors of differentiation change. Your circles change, your probability functions change, and your S-curves change when you change your vector of differentiation. The two vectors are drawn relative to each other.

S-curves and Probability (Profitability) Curves

When you change vectors of differentiation, you change your S-curves and Probability (Profitability) Curves. The profitability curves originate in the circle representation of the underlying requirements package.


Complexity, Multidimensionality

Yeah, I know, you wouldn’t do this. Or why should you do this? If we didn’t have to lead the customers, the market, the world to a new future, if all we had to do was follow, sure there would be no reason to wring the world out to find new value. You don’t have to fast follow. You don’t have to be free. You don’t have to hope. You can lead and that takes vision, visualization, and effort.

In software we abstract things. You don’t have to analyze your points of parity the same way you would analyze your points of differentiation. You can pick and choose what you analyze. That said, the world is multivariate, multivariable, multivectored, multibivectored. When we talk, we engage with words, not the simplest of things. The actuality is that we compute with strings, strings that represent the view of words, rather than the model of words. One word, a thousand attributes easy, each a vector of differentiation forming up a multivariable envelope around something that in aggregate with other words or other morphemes becomes understandable as just one thing, or in poetry many things, always and forever ambiguous.

Computational linguistics programmers see a single morpheme as a porcupine of dimensions.

A Morpheme: A Visualization of Multidimensionality

A single morpheme is a collection of attribute-based dimensions. Each attribute is a line (black). All the attributes intersect at a single point, the morpheme itself. The red highlighting indicates the contribution of the highlighted attributes to the meaning that the morpheme is currently engaged with.

A single requirement would look like a morpheme. A software application, a collection of requirements would look like a word. A value chain would look like a sentence. In the end, you have a caterpillar.
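A tiny sketch of the porcupine picture: a morpheme as a bundle of attribute axes, with a context highlighting the subset currently engaged. The attributes and context words are invented for illustration:

```python
# All attribute axes intersecting at the single morpheme "bank".
attributes = {"finance", "river", "institution", "slope", "trust"}

# The current context engages only some of those attributes
# (the red highlighting in the figure above).
context = {"money", "finance", "trust", "loan"}

highlighted = attributes & context  # attributes contributing meaning now
```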

In the end, you have a probability envelope with seemingly uncorrelatable linear equations scattered across more dimensions than anyone cares to deal with. In the end, you have to focus, limit, decide, circle the relevant world.

Comments, please!