Last Wednesday morning, as I climbed out of bed, a stray thought hit me: Requirements As Circles. This originated with a comment on a blog, link unknown at this point, about including or not including the UI in the requirements. So requirements were bouncing around in my head, as I had hit an impasse on writing about functional cultures, and on the post about what I’ll just call Metcalfe’s law, a visualization.
I scribbled a few sketches in my notebook, only one of which involved a circle, and even that one translated circles into set theory, which misses the point. If requirements are circles, it wasn’t going to be set theory related, because the perspective in these posts remains tied to analytic geometry and measurement. Set theory is a well-known approach that marketers call market segmentation, which I claim leads to averaging and away from mass customization as a strategy for countering price-based competition, as if there were a price these days.
Requirements as Sets
So we have an application running on an operating system, Windows (set w), and running in a browser, IE8 (set b).
No, these circles have to have meaningful radii, a direction, Fourier analysis sidebands, bits, bandwidth, existence, vectors of differentiation, s-curves, price curves, and hints of conics, as well as functional cultures. So a first cut happened, then a second. So let’s get on with it.
Requirements as Statements
These are the requirements that we are familiar with. The requirement as we know it is a statement plus a unique ID that we use for traceability. The statement might be presented in an outline just to keep it associated with other statements around a common subject.
- The application will be used within the Windows v… operating system
- The application will run in the IE 8… browser
Just a quick example. Not advocating any particular operating system or browser here. They are market segments. They are dollars. They are constraints and affordances.
Requirements as Decisions
Back in 1987, I attended a NASA-sponsored hypertext conference. Some presenters discussed their work at MCC on overcoming the context problem limiting formal requirements. Their key insight was that requirements were decisions.
It turns out that turning a decision, or question, into a logical proposition was more straightforward than turning a natural language statement into the same logical proposition.
If implies then. The probability of then given if (WIF/ASIF). Cute: the time lag between the conceptualization and the realization is built into the conditional probabilities. Anyway, …
The above statements turn into questions:
- Runs on? 
- Runs in? 
From a roadmap perspective, you want your answers to eventually be all inclusive. You want the operating system to no longer matter. You want it to be sublimated. You want to reach beyond the Windows market. Likewise with browsers or databases. You get to these marketing objectives through the abundant use of the adapter pattern.
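The adapter pattern mentioned above can be sketched minimally. Everything here, the `PlatformAdapter` interface, the dialog method, the adapter classes, is hypothetical and invented for illustration, not taken from any real codebase:

```python
from abc import ABC, abstractmethod

class PlatformAdapter(ABC):
    """Stable interface; platform specifics live behind it."""
    @abstractmethod
    def open_file_dialog(self) -> str: ...

class WindowsAdapter(PlatformAdapter):
    def open_file_dialog(self) -> str:
        return "win32 common dialog"   # the real platform call would go here

class WebAdapter(PlatformAdapter):
    def open_file_dialog(self) -> str:
        return "<input type='file'>"   # the browser mechanism would go here

def import_document(platform: PlatformAdapter) -> str:
    # Application code asks "runs on?" exactly once, right here.
    return "importing via " + platform.open_file_dialog()
```

Swapping in the next hot platform then means writing one more adapter, not touching the application code; that is what makes the operating system stop mattering.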
Over time your answer might change, but the question stands the test of time. Everything is moving to the cell phone or cell-linked pad, so you find yourself one adapter pattern away from the new hot tech platform, a consumer, non-code-geek thing. The requirements beyond the technical platform of the application do not change much as long as you’ve isolated those technical platform requirements into a layer. And, while we are at it, we’ll ask just how much of a vector of differentiation our technical platform is. Sure, with certainty it’s a market segmenter, the framework for all things code, the hits in terms of feature frequency of use, but value? Sublimated, the gate to play, but that is all. Most of those features will be points of parity.
Requirements as Decisions
Notice that the alternatives chosen from in a decision are associated with a number, binary here. Each decision defines its own dimension. Each dimension has its own axis. The chosen alternative is positioned along its axis by its number. Note that a given functional requirement has any number of non-functional requirements (constraints) associated with it. A spatial geometry gets messy quick.
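Decisions-as-dimensions can be sketched in a few lines. The decisions and alternatives below are hypothetical, just enough to show each decision contributing one axis and the chosen alternative supplying the coordinate:

```python
# Hypothetical decisions; each defines its own axis, alternatives are numbered.
decisions = {
    "runs_on": {"Windows": 0, "Linux": 1},
    "runs_in": {"IE8": 0, "Firefox": 1},
    "storage": {"file": 0, "database": 1},
}

def position(choices: dict) -> tuple:
    """Map chosen alternatives to a point in decision space, one bit per axis."""
    return tuple(decisions[d][choices[d]] for d in decisions)

point = position({"runs_on": "Windows", "runs_in": "IE8", "storage": "database"})
# point is a 3-bit coordinate in a 3-dimensional decision space
```

Add non-functional constraints to each of these and the dimension count multiplies, which is exactly why the spatial geometry gets messy quick.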
I’ve built a geometry around bits in previous posts. See “Building a Dog. Oh, Make that a Cat”, “Now that you have that Cat”, and “Taxicab Geometry”.
I refer to any range of numbers or the number of bits as bandwidth. Bandwidth typically limits the number of bits delivered simultaneously. In a software application, the task does the clipping, so counting the number of bits across an interface gives us a bandwidth that is much larger than that of the interface once in use.
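A toy illustration of that clipping, with invented field names and bit widths: the interface exposes far more bits than any one task actually consumes.

```python
# Hypothetical interface: field name -> width in bits.
interface_fields = {"name": 64, "address": 256, "balance": 32, "history": 1024}

# One task clips the interface down to the fields it actually touches.
task_fields = {"name", "balance"}

interface_bandwidth = sum(interface_fields.values())             # all exposed bits
task_bandwidth = sum(interface_fields[f] for f in task_fields)   # bits in use
```

Here the interface counts 1376 bits but the task moves only 96, the gap between bandwidth on paper and bandwidth in use.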
Returning to the big picture of requirements as questions, notice that we are specifying architecture, so in an Agile effort, those architectural components each need their own persona to ensure that they get built.
I realize that I’m crossing over the what (carried) and how (carrier) divide with my example. I should have used an example originating strictly from an automated domain (carried).
In real life, the reason for not specifying how is that the how changes all the time. At a user interface workshop, years ago, another attendee claimed that “web-based” was a legitimate requirement. I disagreed, because I lived through the mainframe to three-tier client-server transition and watched the how (carrier) change many times over while the what (carried) hardly changed at all. That “how” requirement might have changed to cell-phone based at this point. Requirements, good requirements, live forever, or until the next paradigmatic shift in the functional culture. Requirements just change their expression. Architecture enables that expression to change within a single release cycle. That architecture presents real options to the business, so it is not optional.
“Web-based” is a legitimate contractual term with a custom software development team, but that doesn’t elevate it to a requirement.
Traceability is one of those bookkeeping issues around requirements management. Do we actually manage requirements?
When we solve a problem we generate a solution space, and then we search that solution space for the convenient or optimal solution–the solution that meets all the non-functional requirements, or satisfies the preferences of the largest number of stakeholders to one degree or another. We do this over and over throughout a software development effort. We do it during requirements elicitation, design, coding, and testing. We do it anytime we develop an artifact via successive feedback loops. Traceability extends through all of this effort.
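The generate-and-search loop can be sketched as a toy: enumerate the alternatives (diverge), then filter by the non-functional constraints (converge). The options and constraints below are invented purely for illustration:

```python
from itertools import product

# Invented design alternatives: diverge by enumerating every combination.
options = {
    "storage": ["file", "sqlite", "postgres"],
    "protocol": ["rest", "grpc"],
}

def meets_constraints(solution: dict) -> bool:
    # Hypothetical non-functional requirements acting as filters.
    return solution["storage"] != "file" and solution["protocol"] == "rest"

# Converge: search the generated space for solutions that satisfy them.
space = [dict(zip(options, combo)) for combo in product(*options.values())]
solutions = [s for s in space if meets_constraints(s)]
```

Of the six generated points, only two survive the constraints; picking between those survivors is where stakeholder preference comes in.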
As a shorthand, we diverge and then we converge. As we converge we trim our decision tree. We move from one branch to another. We navigate through the tree. Our requirements trace stays within the bounds of the trimmed tree. Other explorations are omitted from the build. The build reflects only those decisions going out the door, shipping. The rest of the decisions remain in limbo in our version control system. We can revisit them in the future. We can keep working on those decision threads that are anchored out in version control limbo. One day, we will break that constraint, then WoW!
When we converge, we can converge to a point, or to a collection of points. I represent this collection of points as a line. That line represents the API and GUI components of a release. Since that line represents the base of our decision tree at the time we shipped, I call that line the NOW line. This decision tree forms the basis of the triangle model.
I described the triangle model in “Building a Dog. Oh, Make that a Cat”, “Now that you have that Cat”, and “Gary Hamel’s Pyramid and the Triangle Model”.
I’ve written about the triangle model as a means for analyzing any media, not just software. A media is anything that beats together a carrier and a carried (content). A radio is the obvious media. It beats together the content of the show, the sound, with a carrier at the fundamental frequency of the station’s channel, the one we tune to. Our radio filters out that fundamental frequency, so we can hear the show.
Software is likewise a media. A statement that I ran across years ago was that programmers abstract away from the requirements. It’s easy to talk about What vs. How, but programmers are all How. That What gets done is something of an accident. Or, it used to be, back in the day when a developer would claim, “I deliver functionality. I don’t know anything about interfaces.” Interface designers enable the programmer’s continued focus on carrier.
At the core of the issue is that when you code a product for geeks that don’t know the what, it all looks like how. When you code technology itself, rather than some use of that technology, it all looks like how. Still, I remember working with some developers that were coding a code generator. They did it for developers just like themselves. Or, the framework developers that didn’t weigh the cost of learning a framework vs. writing your own. If you did the latter, you would know it and wouldn’t have to learn another one. Learning a framework is tough. Sure, learn what you need, but it’s really the whole thing and its way of thinking. Effort. Sure, play. The real economic value was in coding for those different from ourselves, or coding to reduce learning.
Requirements elicitation also hides the carried nature of the application to be automated, as does UML. UML provides a platform for communicating among developers, rather than a means of capturing the ontologies being automated. This problem will be highlighted as we move to a development methodology built on top of the Semantic Web. Still, containers are not the contained. And container semantics is not semantics.
Requirements as Traceability
The gold area is the extent of the divergence and convergence, the search space as it is generated and subsequently searched. The blue lines outline the decision tree that was actually shipped, the code contributing to the realization. The width of the base represents the number of bits shipped in the realization, its bandwidth. The red line represents the trace of a single requirement.
The trace of a single requirement reaches across all stages or phases or classes of decisions that comprise a realization or development effort. The trace at some point might be outside the bounds of the shipped realization, but the tree would be reorganized once that requirement actually shipped. The trace when it is outside the shipped triangle is an option implying that its further development may be continued or stopped. A requirement might actually get trimmed from the decision tree, and never shipped.
A trace might branch across the tree. A trace might generate impedances up the tree and force the implementation of other architectural elements to change. A trace may originate outside the shipped realization and flow into that realization via an API. A trace can cross layers via an API to or from your technology layers.
A trace can terminate at the features in the GUI or API, or it might flow into the task performance, work performance, collaboration, and meta-management layers of the triangle model well beyond the interface and into the depth of the hype cycle.
Meeting in the Middle
Design situates itself between the requirements (what) and the implementation (how). The job in the design phase is to align and balance the requirements against the constraints and affordances inherent in and provided by the development or technical environment. Think of a collection of pistons, force against force.
Design as a Meeting of the Requirements and the Implementation Environment
The gaps between the requirements and the implementation environment represent the space in which design contributes to the solution or realization.
More abstractly, measuring on a bit-by-bit basis across the bandwidth of the realization can provide us with a visualization of the balance of forces in a realization effort at a given moment in time.
Affordances make realization easy. Constraints make realization difficult. Gaps arise between the requirements and the affordances and constraints of the implementation environment.
Affordances, Constraints, Requirements, and Gaps
The gaps indicate a measured difference between intention and expectation, between the requirements and the implementation environment. Being measurable leads us to a metric space, a space that has a unit measure. The bits in a bandwidth are likewise a metric space, a space having a unit measure, the bit.
One difficulty will be that a given requirement has different stakeholders with different preferences, which leads to scaling issues across each gap. This means that each bit, or each requirement as a collection of bits, would have its own measurement axis.
An interesting side effect of this is that the moment in time when a requirement is realized gives rise to the existence of the measurement axis from the point of view of the released product. Later, in our circle model of a requirement, this assertion of existence is the origin of the vector of differentiation for the requirement. This vector exists prior to the axis’s existence or origin, but it will not be expressed in the circle model, an external-facing or market-facing model of a requirement.
Requirements in Releases
In subsequent releases, across the released bandwidth, the degree of realization and performance of the implementation, as measured across the non-functional requirements for a given requirement, can improve. Such changes would alter the gaps between requirements and implementation. This hints towards the minimum viable product.
Over time persuasion and market knowledge can alter the preferences of the stakeholders of the realization. Such changes would result in some rescaling of the measurement axes of the individual stakeholders. See “Ordinals for Product Managers” for more information on stakeholder preferences, ordinals, and utils.
Another view of a release defines requirements in terms of utils, a unit-less measure of utility defined via stakeholder preferences.
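One minimal way to sketch utils as a unit-less scale, assuming we already have expected and observed performance numbers per requirement (the figures and requirement names below are invented):

```python
# Hypothetical per-requirement performance, each in its own native units.
expected = {"req_a": 10.0, "req_b": 4.0, "req_c": 25.0}  # next-release expectation
observed = {"req_a": 8.0,  "req_b": 5.0, "req_c": 25.0}  # current measurement

def utils(req: str) -> float:
    """Unit-less utility: 1.0 means the requirement meets expectation."""
    return observed[req] / expected[req]
```

Dividing out the native units puts every requirement on the same vertical scale, which is what lets dissimilar requirements share one release picture.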
Release as Utils
Scaling via utils provides a unified vertical scale across all requirements. New requirements are highlighted in red. The other requirements are measured relative to their expected performance in the next release. The green axis indicates that origin; existence recognition begins at the red line and not earlier.
Release as utils including the pre-release measurement scale
Here we have dropped out all the details to illustrate the extent of the requirement’s measurement scale, which extends into the implementation time frame as negative numbers.
Requirements as Circles
Now we will move into the metric geometry of a measurable requirements representation.
Requirements as Circles
In this representation, r1 represents the cumulative number of customers sold, while r2 represents the cumulative number of customers lost. These measures might be hard to peg on a single requirement, so use a collection of requirements. When a sales rep tells you they have to have x to close a deal, pull this out. Show them how small that customer is.
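That sales-rep conversation can be made concrete with invented numbers, a rough sketch of the two radii:

```python
r1 = 1200  # cumulative customers sold (radius of the outer circle)
r2 = 150   # cumulative customers lost (radius of the inner circle)

retained = r1 - r2               # customers still in the base
one_deal_share = 1 / retained    # that "must have x" customer's share
# one_deal_share is under a tenth of a percent of the retained base
```

One deal against a retained base of over a thousand seats is a very thin slice of the circle, which is the point to show the rep.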
The direction of the vector of differentiation is arbitrary in most of these representations. When you put two vectors or a bivector into the representation, you might find that you need some notion of direction. We will see this later when we deal with commoditization.
Both r1 and r2 are measured against a single vector of differentiation, so aggregate the requirements that contribute to that vector of differentiation. We tend to think of vectors of differentiation as features, but what happens when offer expansion begins to include business functions, like shipping, in the offer? The fact is that a vector of differentiation could be a task, aka some unit of what Christensen called “jobs to be done.” Sales reps use something called the FAB framework to turn features into benefits or sizzle. In software, task performance is the benefit to a user; competitive advantage, the benefit to an economic buyer; collaboration, choreography, and orchestration, the benefit to other economic buyers. Anchor a vector of differentiation outwardly from the requirements and deep into the use space, far beyond the “It’s the Interface, Stupid” space. There is money out there.
Requirement as Vector of Differentiation Across the Triangle Model
Two different vectors of differentiation, shown in red, illustrate how you can pick your place with a vector of differentiation.
Requirements as Circles and as Frequencies
Back in trig class, a unit circle generated a sine wave. A periodic signal can be decomposed into a collection of sine waves via Fourier analysis. It hints that every signal has a sideband, or every website has a multitude of monetizations. Here, with a requirement being a circle, that requirement would likewise be decomposable into a multiplicity of value provisions and revenue events.
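As a refresher on the decomposition itself, unrelated to any particular requirement, here is the standard Fourier-series example: a square wave rebuilt from its odd sine harmonics.

```python
import math

# Fourier series of a square wave: sq(t) ~ (4/pi) * sum sin((2k+1)t)/(2k+1).
# Each harmonic is one component of the signal; analogously, one requirement
# decomposes into many value provisions and revenue events.
def square_wave_partial(t: float, harmonics: int) -> float:
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(harmonics)
    )

# With more harmonics the partial sum approaches the ideal square wave (+/-1).
approx = square_wave_partial(math.pi / 2, 50)
```

At t = pi/2 the ideal square wave sits at 1, and fifty harmonics land within a couple of percent of it.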
Read “Who is Fourier” for an accessible book on the subject, and on mathematics in general. Read “Software by Numbers” for a better understanding of the minimal marketable functionality approach to revenue events and customer cognitive load management.
Requirements as Markets
Here we break our market down to addressable, anticipated, and current customers or seats. This looks like set theory, but it is based on measurement against a vector of differentiation generated by a requirement or set of requirements.
Requirements in Market Consumption Processes
In a representation where we illustrate the gain and loss of customers for a vector of differentiation, the number of customers is finite. The market consumption process would be an Ito process, since the number of customers is finite. Ito processes are stochastic processes with a finite memory.
Ito processes are relatively new math. I’ve not studied them enough to know if a finite memory means a constant size. I hope not. An Ito process contrasts with Markovian processes, processes without memory, and Gaussian processes, processes with complete memory. In my Slideshare presentation, I’ve discussed how Markovian processes better represent Moore’s technology adoption lifecycle.
One of my old phones has a game called Snake on it. The snake eats food, gets longer, and lives as long as it doesn’t crash into a wall or bite itself. That snake moves just like an Ito process. Requirements move from being points of differentiation to being points of contention, and on to dying as points of parity. Value moves; value migrates. A requirement contributes value for a while and then it doesn’t.
The familiar statistical analysis is based on a Gaussian world, a world of complete memory. We have data warehouses, and data mining that assumes that the numbers constitute complete memory, but when we improve our processes, our averages tie us into the past before the improvements. We cannot turn a corner in a Gaussian statistical process. The Markovian process turns corners constantly. Markovian processes discover. Gaussian processes enforce. Ito processes will turn out to be a hybrid between the two, discovering at the front end, enforcing at the tail end.
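The finite-memory picture in these paragraphs (the post’s snake image, not standard stochastic-calculus terminology) can be simulated as a moving window: the state depends only on the most recent increments, and older history falls off the tail. All parameters here are invented.

```python
import random
from collections import deque

random.seed(7)  # make the sketch reproducible

def finite_memory_walk(steps: int, memory: int) -> list:
    """State depends only on the last `memory` increments (the snake's body)."""
    window = deque(maxlen=memory)  # older increments fall off the tail
    path = []
    for _ in range(steps):
        window.append(random.gauss(0, 1))
        path.append(sum(window) / len(window))  # state = mean of the recent past
    return path

path = finite_memory_walk(100, memory=10)
```

Set `maxlen=None` and every increment is remembered forever, the Gaussian, complete-memory case; set `maxlen=1` and only the last step matters, the memoryless case; the finite window sits between the two, discovering at the front and enforcing at the tail.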
When you lose customers, you may be losing the rationale, the stakeholders, for certain requirements. Those requirements may no longer provide value in your current stakeholder pool. Your requirements are pulling an Ito on you.
Requirements as Circles Having a Probability Density
Imagine the cross section of the probability function as being directly over the vector of differentiation. Since an Ito process has a finite memory, its probability function is zero beyond that finite memory, in what I call the “doughnut hole.” You see the doughnut hole in every major American city. It is what I call the blight zone between the suburbs and the working urban core, the poor areas characterized by higher crime and lower rents. The value goes to zero there, as it does for Christensen’s overserved customers, and for lost customers and seats.
The probability distribution falls to zero shortly beyond the commoditization event.
When a particular vector of differentiation is commoditized, you change vectors of differentiation. Every vector of differentiation has an s-curve, or price-performance curve associated with it. Commoditization occurs near the top of the curve where huge investments generate little improvement.
Faster processors were commoditized when processors got too hot, and the constraint of the SCSI bus was the real limiting factor, in terms of perceptible speed. Moving to DDR was the vector-of-differentiation change that allowed processors to generate faster performance without actually being faster, or in this case hotter.
The processor didn’t lose all of its value, so it didn’t fall to zero, but it fell big time. The $600 laptop is a sign of that fall: commoditization, value migration. So it might be a bit much to assume that your vector of differentiation will fall to zero. Points of parity are generally the gateway to play in any convergent industry, which is to say just about any non-startup company. In software, documentation was one of those points of parity that kept entry into the software industry higher than it needed to be. Documentation was a market barrier, aka nobody reads the manual. Or, maybe the latter was a crock and documentation still provides value.
Changing Your Vectors of Differentiation, Changing Your Circles
Here we illustrate what happens when your vectors of differentiation change. Your circles change, your probability functions change, and your S-curves change when you change your vector of differentiation. The two vectors are drawn relative to each other.
S-curves and Probability (Profitability) Curves
When you change vectors of differentiation, you change your S-curves and Probability (Profitability) Curves. The profitability curves originate in the circle representation of the underlying requirements package.
Yeah, I know, you wouldn’t do this. Or why should you do this? If we didn’t have to lead the customers, the market, the world to a new future, if all we had to do was follow, sure there would be no reason to wring the world out to find new value. You don’t have to fast follow. You don’t have to be free. You don’t have to hope. You can lead and that takes vision, visualization, and effort.
In software we abstract things. You don’t have to analyze your points of parity the same way you would analyze your points of differentiation. You can pick and choose what you analyze. That said, the world is multivariate, multivariable, multivectored, multibivectored. When we talk, we engage with words, not the simplest of things. The actuality is that we compute with strings, strings that represent the view of words, rather than the model of words. One word, a thousand attributes easy, each a vector of differentiation forming up a multivariable envelope around something that, in aggregate with other words or other morphemes, becomes understandable as just one thing, or in poetry many things, always and forever ambiguous.
Computational linguistics programmers see a single morpheme as a porcupine of dimensions.
A Morpheme: A Visualization of Multidimensionality
A single morpheme is a collection of attribute-based dimensions. Each attribute is a line (black). All the attributes intersect at a single point, the morpheme itself. The red highlighting indicates the contribution of the highlighted attributes to the meaning that the morpheme is currently engaged with.
A single requirement would look like a morpheme. A software application, a collection of requirements would look like a word. A value chain would look like a sentence. In the end, you have a caterpillar.
In the end, you have a probability envelope with seemingly uncorrelatable linear equations scattered across more dimensions than anyone cares to deal with. In the end, you have to focus, limit, decide, circle the relevant world.