Archive for February, 2011

Feature Terrains, Networks of Frequencies

February 26, 2011

I’m reviewing mathematics. Actually, I hardly have time to review, because I’m finding so many new-to-me insights in my general reading of mathematics. This reading has me parked at bookstores and the public library. Some of these books go fast, some slow, some so thick with mud that I’d rather be doing something else. When I’m in that no-more-math frame of mind, I’ll do anything else, but typically end up looking for another book.

At the public library non-fiction juvenile books are filed in with the adult books, so there are a lot of fun titles that turn out to be really quick reads. Elsie C. Ellison’s “Fun with Lines and Curves” is one of those. I looked at it quickly one day, boring. On another day, I looked at it and thought, hey neat. And, here I am blogging about it, because it answered one of those nagging, persistent questions I had about how I’d get from a user interface to the long tail. But, combine that with some of Norman Wildberger’s linear algebra YouTubes and I had an avalanche of ideas. Norman’s videos finally got the eigenvalue concept on firm ground, and introduced the bivector. I still remember the tweet about brand being an eigenvector. Yes.

I’m not going to go over all that today. I’ll show you Ellison’s visualization, hook it to a user interface, add some use frequencies to it, and abstract it out to the long tail.

Ellison is a teacher who wanted to make mathematics fun. Draw two lines that intersect. They need not be perpendicular. Then mark off some positions on those lines. They need not be unit measures, nor do both lines (axes) have to use the same unit measure. The number of positions does need to be the same on each of the axes. Then, on one axis, number your positions from the intersection to infinity in normal order, and number the other axis in reverse order. Once the axes are ready, draw a line from a numbered position on one axis to the position on the other axis having the same number.
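If you’d rather let the computer hold the pencil, here is a minimal sketch of that construction in Python with matplotlib. The number of positions and the angle between the axes are my choices, not Ellison’s; any count works as long as both axes use the same count.

```python
import numpy as np
import matplotlib.pyplot as plt

n = 10  # positions per axis; both axes must use the same count
# Two intersecting lines (axes); they need not be perpendicular,
# so the second one is tilted 70 degrees away from the first.
u = np.array([1.0, 0.0])
v = np.array([np.cos(np.radians(70)), np.sin(np.radians(70))])

for i in range(1, n + 1):
    a = i * u            # position i on the first axis, counted out from the intersection
    b = (n + 1 - i) * v  # the matching position on the second axis, numbered in reverse
    plt.plot([a[0], b[0]], [a[1], b[1]], "k-", linewidth=0.8)

plt.axis("equal")
plt.axis("off")
plt.show()
```

Nothing in the loop draws a curve, yet the envelope of those straight segments traces one.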

So for those of you who haven’t taken out a piece of paper and a pencil, here’s the figure.

Our Example

Notice how the straight lines laid out a curve.

The nice thing about this diagram is that it represents a network (Metcalfe’s Law). If we use a table, we know that we throw away half the entries above or below the (1,1), (2,2), …, (n,n) (x=y) line. The surprise is that in this figure, you can’t network with yourself, and you can’t network with people you already networked with. No more just talking to co-workers from your functional unit. Eventually, you’ll meet the CEO. And, yeah, you PMs only get to talk to the key customer once, rather than over and over in an attempt to, like, not meet the rest of your customers, the ones not so keen on your prioritization schemes.
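A quick sanity check on that “throw away half the table” claim, as a minimal sketch; the members here are just numbered 1 through n, which is my own stand-in for the people in the figure.

```python
from itertools import combinations

n = 6
members = range(1, n + 1)

# No networking with yourself, and no repeat conversations:
# each unordered pair shows up exactly once.
links = list(combinations(members, 2))

print(len(links))        # 15
print(n * (n - 1) // 2)  # the same Metcalfe-style pair count
```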

I drew the above figure last. Here is the first one I drew. I was a little stiff.

Network Representation

Stiff as in a little too Cartesian.

So we have a network, and everyone can reach everyone else. It reminded me of hypertext theory with its links (associations) and nodes, and graph theory where links can be nodes and nodes can be links, much like vectors being arrows to some and points to others. Don’t worry, hypertext theory is still theory, and a link just displays another page, runs some javascript, or whatever, but never yet reaches the places that hypertext theory would take it, and us with it. We’ve stalled out a bit. Just pull up on the nose of the aircraft to maintain its forward airspeed into that barn ahead.

Network of Nodes and Links

The red circles at the intersections are the nodes. The black lines are the links.

Using fractional notation in this diagram was just plain wrong! The numbers are just addresses, so use some other syntax. Do not treat them as fractions. Why the warning? I spent too much time doing that and not getting anywhere. One half showed up all over the place, so I didn’t end up with a number line. Why would I do that? Well, math is like that. Why worry about the 16Bth hexadecimal digit of pi? Or, why was it that I did so much type conversion back in the day? Things matter when you climb into them that would never matter if you just walked around the exterior. I was trying to linearize the eventual probabilities I’ll associate with each intersection (node) later.

When we talk about features, or minimal marketable features, we are talking about networks of such features. Rarely will a feature stand by itself. A feature like a dialog or a web page will need to be opened or displayed, used, and then closed or exited from. Using features can also mean engaging in a feedback loop. So let’s look at one such network. We will open a file in MS Paint. I didn’t go into detail on selecting the file that we’ll open. I am opening the file from within Paint. The file could be opened from the file browser in Windows, but that would be framework functionality, which appears at the hits end of the long tail.

Opening a File in MS Paint

Now, I’ll take this network and place it in Ellison’s network graph.

UI Components in a Task Network

Here each line is associated with a particular UI component. Line 1 is associated with the main window; line 10, the Paint menu control, which opens the Paint menu associated with line 2. The options on the Paint menu are associated with lines 5-9. The Open command is associated with line 6. Not all the menu commands have been included in this figure. The recent files list was likewise omitted. The file menu is represented by line 3. The open file shortcut keys are associated with line 4.

The open dialog is only represented by a single line. All by itself, it would involve a huge graph.

As for the main window, I originally showed it empty and again with the file opened in it. I would have to extend line 1 to show both. The past would end up in the background. This was more straightforward. It’s fit for a particular use, and not for some other use.
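Before adding frequencies, it can help to hold the task network in something you can compute over. Here is a minimal sketch using networkx; the nodes and edges are my own reading of the figure, not something it pins down.

```python
import networkx as nx

# One way to hold the open-file task: UI components are nodes,
# "can get you to" relations are edges.
task = nx.DiGraph()
task.add_edges_from([
    ("main window", "Paint menu control"),
    ("Paint menu control", "Paint menu"),
    ("Paint menu", "Open command"),
    ("main window", "open shortcut keys"),
    ("open shortcut keys", "Open dialog"),
    ("Open command", "Open dialog"),
    ("Open dialog", "main window"),  # the opened file ends up back in the main window
])

# The red use case in the figure is just one path through this network.
print(nx.shortest_path(task, "main window", "Open dialog"))
```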

Next, I add the frequency of use histograms for each node in the task. Coming up with the frequency numbers might involve looking at your server logs, or capturing the use data by some other means. It doesn’t require a lab with video cameras. You also have to decide on scope. Will you count the frequencies for all whole product contributors, or only those that originate in your own code? I’ll leave that up to you. Let’s just go with the idea that the histograms here are illustrative.

UI Components in Our Task Network With Use Frequencies

Once you have your use frequencies, you can order the histograms by height which sequences them into a distribution. That distribution will be a long tail, or a thick tail distribution, which we discussed last week in Search Scarcity.
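Here is a minimal sketch of that step, assuming you have already reduced your server log to one feature name per use event; the event list below is made up for illustration.

```python
from collections import Counter

# Stand-in for whatever your logs actually give you: one entry per feature use.
events = ["open", "main window", "paint menu", "open", "open dialog",
          "main window", "open", "shortcut keys", "main window", "open"]

frequencies = Counter(events)

# Ordering the histograms by height sequences them into a distribution.
long_tail = frequencies.most_common()
for feature, count in long_tail:
    print(f"{feature:15s} {count}")
```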

Long Tail

The long tail for this minimal marketable function is shown as a discrete and a continuous distribution. When additional minimal marketable functions are added, the histograms will separate down the long tail. Some features like the keyboard shortcuts might never be used. That histogram will be far down the long tail. Hopefully, you won’t have features that are never used.

APIs have similar use frequencies associated with their function calls.

Looking back at the task network, the red lines represent a single use case. Live data would constitute a use scenario. A user story might be built from a network or tool task like the open task. Nobody, except the police, buys an application to open a file. Well, maybe a geek. I’m sick of downloading files from professors and ending up with .ps files or others that I can’t open. But, my point stands. Nobody sets out to use independent disk pools, a feature provided by the IBM i-series operating system. They do geo-mirroring, which in turn forces them to use independent disk pools. Tool tasks are the tasks we do, because we are using a computer. Most web apps have sublimated file system tasks. Users are happy.

Going back to hypertext/graph theory, the red lines hooking the features together represent a task. We could say that the red lines sequence some symbols, the features, and the whole thing represents a grammar going back to compiler class. We could go further and layer another network on top of the one in our figure, so that the task, a tool task, is a node in that network. Then we could link those tool tasks into a user task.

We could go further and create more layers, each working further and further from the interface, from the features. This visualization would give us a better view of the value that our features support and that our customers realize.

Yes, a kids’ book was full of surprises. It had some great features, and was put to use well beyond its design, but mathematics is good for that.

I even had to reconsider bivectors, but now they have blown me away and given me some new ways to see the issue of cost and policy structure. That’s down the road a bit. Next, constraints, something that came up in early January. I’m so far behind.

Anyway, go down to your public library and peruse the math section on the kids’ shelves if they are separate. Pick a few books out, and yes, you know that stuff, but you’re in it for the serendipity, the epiphany, the shock of the boring, the surprise, the leverage, the fun, the point of view of a kid. You see a number is a measure. Life is thick. Numbers are thin. Kids see different, aka paradigmatic culture right there at our knees. You didn’t know they had a kids’ book on growing up to be a product manager, did you?

Comments? Thanks!

One User, Two Conflicting Conceptual Models

February 20, 2011

At the very end of Let’s Negotiate Away Some Meaning, I listed four participants that interacted with a conceptual model while it was realized in software. I did not illustrate this with a figure. So let’s start there.

Four Conceptual Models

In Chaos has Changed & Functional Cultures are Alive and Well, and other posts, I discussed where functional cultures come from. Each of us has subscribed to one or more of them. Each of us is a user. As users, each of us has a conceptual model of our functional culture. This conceptual model provides an infrastructure of meaning upon which we build our rituals that we call work, or that Christensen called “Jobs to Be Done,” and that trainers and technical writers call tasks. In the figure, the user (1) maintains the source conceptual model that is realized to some degree in a software application.

In the Negotiate Away post, one of the figures illustrated the requirements elicitation process as being driven by the theory of the application. Elicitors ask users and stakeholders questions in order to validate the theory. This can lead to unfortunate errors. In that post, the theory acted as a filter. It is not shown in the above figure. In this post and that post, I omitted the role of the executive stakeholder and the functional unit manager of the elicited user. The elicitor (2) embeds some portion of the elicited user’s conceptual model in the functional requirements. In practice, the elicitor is not capturing meaning, and the requirements elicitation process is not focused on meaning. Requirements are modeled with tools like UML as opposed to ontological modeling tools.

Requirements come in two flavors: functional (What) and non-functional (How Well). The non-functional requirements deal with carrier issues. The functional requirements deal with both carrier and carried issues. The elicited conceptual model constitutes the carried component.

The arrow from user to elicitor should be filtered by the elicitor’s theory, and the user’s functional managers. The user’s functional managers are acting to select a particular generational paradigm within a functional unit’s culture. We know that a market is organized by the risk tolerance of the firms in a market. For a given firm, that risk tolerance is the summation of the risk tolerances and behavior of the people within the firm. A purchase in a given category affects different business units and functional units within a firm, so the category selects the people within the firm whose risk tolerance characterizes the firm’s risk tolerance in that category. This means that Moore’s technology adoption lifecycle plays out across an organization and across the scales within the organization. A functional unit organizes its staff across generations, each with its own version of the unit’s functional culture, each sharing a core, but each different due to the state of the art when these staff members were trained. The state of the art organizes the paradigmatic cultures similarly to that of the category and the firm. The technology adoption lifecycle spans the functional unit.

Eventually, the conceptual model is realized at the view (3), at the interface. This conceptual model represents the outcomes of having communicated the conceptual model described in the requirements through the developers and user interface specialists that contributed to the realization of the interface.

When any user approaches an interface, they use a troubleshooting approach. This idea comes from Rasmussen. I ran across his book in the NASA JSC library back in the late 80’s. He was writing about operating plant culture.

The idea is that a user forms a hypothesis, takes some action, and then obtains feedback from the outcomes of that action. This teaches the user, the learner, how the application works. It teaches the user its meaning. The user builds a user conceptual model (4) of the application. The hypotheses originate in the interface, while the feedback reveals something of the model far removed from the interface.

The user’s hypotheses are based on the functional culture that was elicited from the user back during requirements elicitation. This functional culture is the source of the user’s expectations, which are embodied in that user’s hypotheses. Given that requirements are elicited from a market segment perspective, rather than a functional culture perspective, the usual expectation is that the outcomes will be distant from the functional culture. The user’s conceptual model (4) built from interactions with the application is really a model of the distance between the user’s functional culture and the application, the gap.

The user’s conceptual model also incorporates any compensations necessary to close the gap. These compensations show up in requirements asking for the ability to export and import from Excel and Access. Look for these types of requirements. They mean that the app doesn’t do what I need it to do. And, they demonstrate that value is actually created at a distance from the application, out in the mid ground of the value chain.

An application enables value creation beyond the interface via scripting, macros, and the ability to use APIs and various protocols. These features enable use beyond design. Realize that enabling use beyond design will impact your user support functions spanning technical support, training, and documentation. Enabling use beyond design also seeds the search space beyond the thick tail for subsequent products, releases, and iterations. Build to search.

Do realize that the user is expected to maintain two conceptual models, aka two cognitive models. The conceptual model originating in the user’s  functional culture has been implicated. The user’s conceptual model originating at the interface is more likely to be explicated and incomplete. The user’s attention will limit discovery of the full range of features, and use of those features. The user’s attention and cognitive limits organize the features and concepts into networks. Their frequencies of use end up being organized on long tailed distributions. And, likewise, a factor analysis of those features and concepts will give rise to a discrete distribution echoing those long tails.

Comments? Thanks!

Search Scarcity

February 13, 2011

I’m still working my way through “Chaos Theory Tamed” by Garnett P. Williams. I had to return it to the library, so it will come up again in the near future. He was discussing how to tile a trajectory to reconstruct the attractor when out of the blue the Product Strategy session I presented at Pcamp Seattle09 came back to me. I was talking about how the frequencies of feature use would look like a power law distribution, or the long tail. One of the attendees brought up that the situation was really a thick tail.

Well, the ensuing Eureka moment had me googling around to see if I understood the thick tail even after starting into “The Black Swan” twice. I’m still not finished. I recalled it being drawn like a long tail, except that it’s higher, so a vertical line connected the distribution to the x-axis. No, this vertical line is not part of the distribution. The Eureka moment centered around the question of what’s on the other side of the apparent end of the thick tail? So off we go.

Is the overhead projector working?

Tilings

Well, this is a tiling. Imagine that you’re trying to count your cash cows out in the pasture. You take a photo. Then, you drop a grid over the photo to isolate the number of cows you have to count at any particular moment. Did I count that cow or not?

Tilings do show up in the business world. They’re called the monthly, quarterly, and annual close. Another word for them is measurement lattices.

In the chaos book, we are looking at trajectories, or more specifically the points that comprise a trajectory.

Pseudo Phase Space Trajectory

We drop the tiling over the trajectory, so we can count up the points.

A Trajectory Tiled with Numbered Tiles

It took 45 tiles to tile the trajectory. The number of tiles is significant, because some n/45, where n is the number of points in a given tile, gives you the probability of a point occurring in that tile. Some of the points are on the border between two or more tiles, so counting can be fun. My counting rule was to count these border points in each of the tiles bordered. Strangely enough, this is no big deal apparently.
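Here is a minimal sketch of the counting, with made-up trajectory points and a coarser grid than the figure. Two of my own simplifications: I normalize each tile’s count by the total number of points to turn counts into probabilities, and I let a border point fall into whichever tile NumPy’s binning assigns it rather than counting it twice.

```python
import numpy as np

# Made-up stand-in for the trajectory: a noisy loop in the plane.
t = np.linspace(0, 2 * np.pi, 200)
points = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * np.random.randn(200, 2)

# Drop a tiling (measurement lattice) over the points and count per tile.
bins = 5
counts, xedges, yedges = np.histogram2d(points[:, 0], points[:, 1], bins=bins)

# Probability of a point occurring in a given tile.
probabilities = counts / counts.sum()
print(probabilities.round(3))
```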

Tiled Counts and Probabilities

I color coded each tile. The orange tiles are traversed by the trajectory. The yellow and blue ones are not. The yellow tiles comprise a single contiguous area. The blue tiles comprise another separate contiguous area.

Color Coded Tiling

Here’s a closer look. The points are ordered in a time series. The first point is over in tile 10. Two increments later we are in tile 11. Imagine we are in a spaceship, or making a motion control video. Many of the attractor graphs look like caves, so we’d be spelunking. You maybe, but no, not me. I’m not really headed towards chaos here. The tiles (tile numbers) serve as aliases for the points moving forward.

Tiled Topological Object with Numbers and Colors

The attractor here is a torus or a doughnut. The upper corners wrap around and touch the lower corners. The arrows represent a continuity. I may be far ahead of myself here. We won’t go there.

So far we have a path traversed over time, and each point on that path has a probability.

Markov Chain

Here I color coded the tiles. The yellow ones represent the walls of the cave. We can’t go there. The blue and pink tiles define the continuous pathways until the cave forks. The blue pathways are definitely traveled. The pink tiles are not on the main pathway. The arrows show you the choice at a given fork.

The last figure is really a map of a Markov process. Markov processes have finite memory, unlike, say, processes built from the normal or Gaussian distribution. Markov chain is just another term for Markov process.
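As a sketch of what that finite memory buys you, here is a transition matrix built from a sequence of tile visits; the visit sequence below is invented, not read off the figure.

```python
import numpy as np

# An invented walk through tiles: which tile the trajectory occupied at each step.
visits = [10, 11, 11, 12, 20, 21, 21, 12, 11, 10, 10, 20]
tiles = sorted(set(visits))
index = {tile: i for i, tile in enumerate(tiles)}

# Count transitions between consecutive visits, then normalize each row.
transitions = np.zeros((len(tiles), len(tiles)))
for a, b in zip(visits, visits[1:]):
    transitions[index[a], index[b]] += 1
transition_probs = transitions / transitions.sum(axis=1, keepdims=True)

# A Markov chain only remembers the current tile: the next step depends on
# this row of probabilities and nothing further back.
print(transition_probs.round(2))
```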

OK, so what does that have to do with me, a product manager? It turns out that the technology adoption lifecycle is a Markov process. Each phase of the lifecycle is significantly different from the one on either side of it. That Moore drew it as a normal distribution means that it hides time. He even went so far as to say it wasn’t a clock. Sure. Like Champy saying process re-engineering wasn’t object oriented.

I’ve drawn the technology adoption lifecycle as a Markov process.

TALC as Markov Chain

Moore’s early adopter is in a vertical. Not shown are the chasm, the bowling alley, and the tornado, among many adoption structures. The diagram shows a layered architecture. The technical enthusiasts are not a population that only shows up at the pre-market phase. They underlie the entire lifecycle.

I know the folks that think that doing something on the web makes them a technologist won’t like this figure. But, it’s the way I see it. Consumer web is consumer. The web is just a means of selling something to that consumer. And, you are in the business of the stuff you are selling. Do you have an exec that came from that industry? Do you have an exec that came from each of the industries you monetize around? You should.

It turns out that Markov processes are based on Poisson distributions. The main point of my presentation in Seattle was to expose people to something called Poisson games, or games of unknown population. Each of the phases in the technology adoption lifecycle gets its own Poisson game. See my slides for a proposed session at the Orange County Product Camp 10, So you don’t have a market? Great! If you’ve been a regular reader of this blog, you may have seen these slides before.

A single Poisson game could represent a single functional culture, particularly in the bowling alley.

All the probabilities in our tiling taken together comprise a probability distribution, a surface. The long tail is a distribution. So are the Poisson distribution, normal (Gaussian) distribution, and the thick tail.

The Convergences of Probability Distributions

Notice that these distributions converge at different rates. The Poisson distribution converges quickly, the normal a little slower, the long tail quite a ways further out than the normal, and the thick tail about ten or more multiples of the long tail.
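You can see those different convergence rates for yourself. Here is a minimal sketch comparing how fast the tail probability P(X > x) dies off for a Poisson, a normal, and a power-law (Pareto) distribution; the parameters are arbitrary choices of mine, there only to show the ordering.

```python
import numpy as np
from scipy.stats import poisson, norm, pareto

xs = np.array([5, 10, 20, 40, 80])

# Survival function sf(x) = P(X > x): the Poisson tail vanishes first,
# the normal next, and the power-law tail hangs on far longer.
for x in xs:
    print(f"x={x:3d}  Poisson(5): {poisson.sf(x, mu=5):.2e}  "
          f"Normal(5, 3): {norm.sf(x, loc=5, scale=3):.2e}  "
          f"Pareto(b=1.5): {pareto.sf(x, b=1.5):.2e}")
```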

So why would you use one distribution as opposed to another? We use the normal, because it is built from familiar statistics–the mean and the standard deviation. We use the normal habitually. The other distributions are defined by parameters other than those that define the normal distribution.

When laying out the long tail, Chris Anderson saw lots of low-volume markets that did not exist prior to the internet, more specifically the search surplus provided by web search engines. I saw the Beatles store as a center of the artifactual culture of the Beatles subculture, not just a market, but an anchor of meaning. I also saw the power law distribution, or long tail, as an organizer of clicks on a software application’s interface. An application is a network of features at one layer, a network of concepts at another, a network of tasks (modules of work done by users) at yet another. Each of these things: a feature, a concept, a task would be just a point in space linked via a Markov chain. Yes, a feature has a probability. This is not news to SEO people. But, these probabilities are not taken into account by product managers, particularly when they think that adding features is key.

Attention is limited. Use is habitual. The probabilities would thin if every new feature was actually used.

Even those pedagogical pathways we discussed back in Visualizing Functional Culture are networks ultimately comprising Markov chains.

To back up this UI as long tail idea even more, it turns out that factor analysis builds a discrete version of the long tail. In factor analysis, three factors generally cover 85 percent of the variance. That implies that your application has three features that consume 85 percent of your user’s attention. You can squeeze out the last 14.95 percent of the variance before you hit the noise that will block further exploration. That would be like going to infinity. A test budget limits tests. A time budget limits factor analysis similarly.
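Here is a minimal sketch of reading off that variance, using PCA’s explained variance ratio as a stand-in for a full factor analysis; the usage matrix is random noise plus three planted factors, purely for illustration, so the 85 percent figure is not something this code proves.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Made-up usage data: 200 users by 12 features, driven by 3 hidden factors.
factors = rng.normal(size=(200, 3))
loadings = rng.normal(size=(3, 12))
usage = factors @ loadings + 0.3 * rng.normal(size=(200, 12))

pca = PCA().fit(usage)
explained = np.cumsum(pca.explained_variance_ratio_)
print(explained[:3])  # with planted factors, the first three components carry most of the variance
```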

So what of the thick tail? It’s a long tail that is further away from the x and y axes that it eventually converges with. That means that there is even more space under the curve for more markets–not just markets for goods, but ideas, messages, use, or financial disasters. That vertical line represents the end of the known or imagined world–the end of search itself. But, I’ve gotten ahead of the projector. Slide please!

Search Spectrum

Here we are looking at search.

We go to the mall. We go to the bookstore. At the bookstore we look at their front list, the best sellers and the new books–the hits, or we might just look at the shelves deeper into the store–the back list, the thing that makes one bookstore chain different from the others–the long tail. We are searching. And, the merchandizers, the marketers, the sellers have organized the goods, so we can find them. All this existed before the internet and information architecture came along. This search is indicated by the dark green below the timeline of the graph at what I called search enablers.

Channels and brands organize search.

On the far left side we have the bibliographic maturity lifecycle. Consider it to be a miniature technology adoption lifecycle. It starts out with the creator. The creator finds some apostles. The creator and the apostles exchange research in an invisible college. Ultimately, the members of the invisible college are trying to create a conceptualization that a peer edited journal would accept, and in doing so publish a work embodying the idea, so it escapes the ultimate niche of its speciation, the individual, and spreads into the larger world where it will thrive or go extinct. When an idea is first published by a peer edited journal, the idea is said to have achieved bibliographic maturity. Now, professional reference librarians (dark green again) can find it, which enables the idea to spread and become adopted by the larger academic population. Eventually, one of those academics will write a book, a searchable entity (lighter green). Yes, search, but not search abundance. The breakout into the general/commercial population will take years and years. Or, maybe a blog would hurry things along.

The hit portion of the long tail is the commercial portion of the search spectrum that has search, but not search abundance. It does have search abundance to the degree that it is aliased online, and that alias has been consumed by a search engine’s spider. It’s only after the Pareto split at the inflection point of the power law’s curve that the internet search engines kick in and provide search surplus. Yes, internet search is not perfect. Stick to the PR line buddy.

Before the invisible college, the idea, if it had been conceived, lay hidden in search scarcity. The long tail lies in search surplus, but beyond its point of convergence, the thick tail (the unrepentant, not-gonna-search-me unknown) has its way with its search scarcity. Search is just the filling sandwiched between two pieces of white bread, search scarcity.

Well, there might be more to this sandwich, because many things still can’t be searched. Search expands. Search scarcity stands hardly bothered. Search scarcity is a bully.

Notice the line labeled Now. Right next to it is the past and the future. The long tail, the thick tail, the normal, and the Poisson sit on a time line. We’re back to talking about processes again. Throw a tiling on it and let’s get moving. Funny, Slavic time has no notion of the arrow of time; instead, time–this moment–is a container. That container has been here forever and will be here after we leave it. We step into a moment of time like we step into an LDAP or DOM container. That moment in time is going nowhere. “Beam us up Scotty.” If search scarcity wasn’t infinite, the constant seeding of the near term spaces of search scarcity might bother the bully.

Yes, the science fiction writer’s content seeds the future. Eureka moments seed the future. Our dreams seed the future. Marketers do their share of plowing and sowing of the future. Is it searchable? It will be.

The Seeds of the Future

As marketers, we seed a horizon, a planned expanse. When some of those seeds sprout into searchable content, we generate a point in the search space and add it to the regression cloud, expanding the regression line that seems to inform the boundary between search surplus and search scarcity. The limits of regression are with us always. Much like the priestly computation personnel of the ancients, who, when they did something like subtract down to zero, a non-existent concept at the time, subsequently fell through the cracks and died, regression spanks us when we exceed the regressed data cloud in any direction.

Plant your seeds in the fields of search scarcity. Put it on your roadmap. Farmers use tractors with GPS units. What do your fields look like?

And, before I shut the machine down and head out, there is the idea that our ordered convergences of statistical distributions define just how much space we might end up searching. I’m tying this back to the triangle model as an abstraction of a realization here in terms of divergence and convergence, searching and deciding.

Realization: Search Divergence Followed By Convergence

So there you have it. Sure, ideas are a dime a dozen, but that might just be paying too much. They’re like lettuce. If you buy it today, eat it today. There’s an infinity of ideas. We might not be ready for them, but search scarcity assures us that it will be a while before an idea creeps up on us and astonishes us with a Eureka moment! That shouldn’t happen, but hey, we are not web search engines. We’ve got fields to plow.

Oh, back to Seattle. The fact that the distribution might be a thick tail as opposed to a long tail does not disturb the motivations. Consider that software applications are used beyond their designed intent all the time. In Architecture, some architects have written about the need to escape the lifelessness of contemporary architecture by escaping the program (use/requirements) put forth by the clients/eventual owners/managers. They want to build for the emergent use. We provide macro facilities, so users can explore. In doing so, we build in the thick tail, because that exploration that begins with search is part of the product. So, yes, I can see it, the thick tail. Thanks for pointing this out to me long before all the pieces fell into place, so I could understand it.

Let’s have a conversation–cluetrain and all that. That comment text box below is sitting there at the boundary between the long tail and the thick tail. It is search scarcity up close–the transporter. Comment, please. I’ll learn a lot from you. Thanks.

Let’s Negotiate Away Some Meaning?

February 9, 2011

“Good to see you again. I’m glad you’ve got time today to talk about the XYZ app. You’ve been briefed, right?”

Hopefully, your briefer, your manager, told you that meaning always gets lost in the requirements elicitation process. Hopefully, you know what meaning you can give away, because the compensations are easy enough. He probably didn’t tell you that your job was created by the MNO application, the app that the XYZ app will replace. He probably didn’t know that. Nobody told him that he’d need more staff once the MNO app was installed. Nobody had figured that out yet. He probably knows nothing of meaning either. It’s work. We do it. It’s not ritual. It’s not meaningful in the big picture. And, his team of functional specialists are just like everyone else, not a culture, and of course, silo denial. So why is it that at the company party, you hang with your team? Can you talk your job with anyone else?

Let’s face it, you’re doomed. Everybody loses during elicitation, or should we call that negotiations. Even the winner loses. Give a little, get a little. Play to play again. Game theory sure. Minimax is a conservative approach that ensures that you don’t go out of business, don’t win big, don’t lose big, don’t become a hero. Of course the assumptions under the theory are just plain weird. You are just like your opponent. But, this is a corporate IT app elicitation, so who is your opponent? Well, every other functional staffer that will be elicited, every generalist that establishes the context, your boss’s budget, even the elicitor. Anything but a two person game.

Game theory aside, it’s your personality that drives your negotiation outcomes. Your disposition self selected your career. Your career presented you with opportunities to learn and practice your negotiation skills. If it didn’t, you don’t have them. If you do it every day, well, you should be skilled. Researchers found, and HR trainers preach, that

  • Your executives and sales reps win their negotiations 55% of the time
  • Your functional unit managers win 35% of the time
  • Your functional unit leads win 8% of that time–hey, this is you!
  • Your data-geek, heads-down, get-stuff-done functional unit people win 2% of the time

Negotiating Away Meaning, Wins and Losses

This figure shows the wins for each participant. Only one user is shown here. The user has very little power in all of this.

It’s not so bad. At least you know what you’re up against for the moment. The situation will change soon enough. Right now you can say, “Hey, this is an internal development effort.” Yes, things are different for software vendors. We’ll get back to that in a moment.

If it was just you and one of your opponents in the resource allocation game, disregarding functional cultures, you’d keep only 25% of your meaning cleanly intact, you’d keep another 50% somewhat polluted by other people’s meanings, and you’d lose 25%–just vanished. These numbers come from one of Kimball’s data warehouse books–real time I think.

Negotiating Away Meaning

This figure illustrates the 25%, 50%, 25% results for two-party negotiations. The 50% really breaks down into a 25%-25% split with the winner gaining the advantage in regards to meaning retention.

But, we can’t disregard functional cultures if we want to avoid hidden costs, compensations, and loss of that theoretical economic rationality. Sure we can do it better, faster, cheaper if all we look at are the explicit costs in the firm’s books. I’ve mentioned the implicit costs in other posts. The true costs are off the books. Consider for a moment that the point of most applications is better decision making–better, less meaningful decisions. Yes, we trade off meaning. Consultants actually sell this with their silo busting and integration applications. Other consultants push the data quality concept. The Data Quality people may save us from ourselves one day. They know data quality is low. IT is looking at the problem at last. Technical solutions exist.

You’ve negotiated away meaning, so let’s go back to the first figure and focus on use and value.

Negotiating Away Meaning, the User

Yeah, that’s the user. Where did everyone else go? Hey, it’s the user that gets into flow with their application while the rest of the world disappears. That economic buyer is busy, off on their next buy. The user is driving.

Negotiating Away Meaning, Use and Value

But, the user is just the valet parking guy who brings you your car, before it’s driven off the lot and beyond, out into the value provisioning of the firm and out into the world of process choreography and orchestration. So the value, where is it? Value is well beyond the interface, well beyond the user.

Product managers know that features do not equal value. Sales reps know that you have to turn features into benefits. They use the FAB framework for that. But, FAB statements are lexical playgrounds. Technical writers and trainers write about tasks and scenarios. These things are closer to value than features. But, those are just helping professions, like nursing. The doctors matter more, of course. Ask any developer. Derivative works. Denotational content. Do you read the manual?

The depth of an application’s value shows up in the depth of the fulfillment chains that move prospects into the sales funnel. Where is the attraction marketing? On the interface? You’re kidding right? No, it’s way out there on the Hype Cycle, the channels, the mentions. But, fast following, doing what everyone else is doing, commoditizers don’t have to get that far out in front. Do they? They can always cut their price, go out of business, whatever!

Technologies now exist that enable us to get meaningful applications built without trade offs. Now that we can do it, we just have to push the adoption of the technology and its underlying architectures. The market is already organized in just such an architecture.

Software vendors have to deal with the economic buyer vs. user split. Well, they don’t have to. They don’t have to capture the increasing return that they get from retained “customers” via customer loyalty. I’ve seen companies fail to do this in their marketing. I’ve seen companies that preached loyalty and drove their entire offer-building functions to generate loyalty fail to do this in their sales organizations. Hunters, those sales reps working new customer acquisition, actually threw away retained customers. Ouch! But, in a B2B situation the relationship isn’t with the “customer,” it’s with the economic buyer during acquisition, and the end user thereafter. Expertise development retains that end user. All they need to initiate a purchase of an upgrade is a simple piece of marcom that says, “We are upgrading again!” That’s all it takes to generate that 80% of install base in 90 days metric, provided of course that someone in sales is motivated to talk to and spend, say, five minutes with the upgrade purchaser. A price below signature authority goes a long way as well. But, yeah, some companies never get their 90%-60% cheaper sale and the higher profit from upgrade revenue. The figure please.

Negotiating Away Meaning, The User is the Customer

Notice the red line running through the user. Hopefully there is more than one user. But, if there are more users, then the upgrade decision will rest with either the team lead, or the functional unit manager. Let’s be clear here though, if it is a piece of junk, the user won’t go to their bosses and evangelize your product. And, if they go to their bosses about the compensating efforts, that boss will be asking the product manager for changes. Product managers should be talking to users, not customers, and so should marketing. The customer doesn’t speak for the users. Do you know your users’ names? No, my users don’t include anyone named Joe Bob CEO.

The user gets the power finally, but now that user competes with all the other users. Which users get the ear of the product manager? How does that political game play out?

The figure also demonstrates how an organization is the carrier for the users. The hierarchical structure of even the flattest of firms still looks like LDAP or the HTML DOM, or those Russian matryoshka, nested dolls. From a leadership point of view, you protect your people no matter how costly the owning organization happens to be. You make a safe space for your people. That’s spaces within spaces–topology. Functional cultures and paradigmatic cultures likewise. The user is the last matryoshka, the recursive invariant, the end of the parse of the grammar of wants and needs, the final word on that upgrade buy, the final word on the longevity of the software vendor that got this stuff on their client platform.

Now for two spectrum analyzer views of the power relations.

Power Spectrum and Compensations

The black, blue, brown, and red lines reflect the negotiation power of the various roles involved. The pink lines represent the compensations involved. These compensations aggregate hierarchically to the more powerful roles.

Power Spectrum and Compensations

Here is a more abstract view of the previous figure.

Elicitation

In my last post, I promised that I’d finish up in this post. I have a few more figures to illustrate elicitation and implementation. So we’ll go back to the triangle model and software as media model.

I omitted a link in the previous post. I did catch this and correct it, but only after most of my readers read that post. It was the link I promised to  the slideshow on the Triangle Model and how it can be used to model organizations as realizations.

So what does elicitation look like when modeled via the triangle model?

Elicitation

The elicitor, the guy in the blue shirt and gray pants, comes to the elicitation with a theory, the gray triangle at the top of the triangle on the right, which represents the requirements. That theory is the “Why?” The elicitor captures the relevant portions of the user’s model as the triangle on the left. The theory drives the elicitation and determines the relevancy of the user’s model elements. The why filters the requirements.

When many different functional units are involved, an executive sponsor will determine relevancy after the elicitation. Executive sponsors make a mess of it. The product manager plays the role of the executive sponsor in a software vendor organization.

The functional requirements capture the functional culture, or carried component of the application. The non-functional requirements drive the implementation of the carrier component of the application. So the software as media concept was implicitly buried in the idea of requirements. It’s been there all these years while the focus was on developer efficiency. Nobody noticed. This divide has become more important as software economics shifts to lower cost software development, and the increasing commoditization of software applications.

Delivered w Carrier Lost and Carried

In this figure you can see that some of the user’s model was lost. If you wonder about this, look at your non-web based applications. How much of what you do is done, because you are using software? How much is done, because it’s what you would do if you didn’t have a computer? The proportion of these tool tasks vs. user tasks changes from one phase of the technology adoption lifecycle to another. In the late Main Street market where the web sits, you start into a continuum of task sublimations. The tool tasks must disappear. The web browser and databases take care of most of this task sublimation. Moving to information appliances like cell phones and pads, the sublimation increases–less carrier. Eventually, the carrier is hidden from the user. Punch 10 seconds into your microwave; the computer is nowhere to be seen. So in the figure above, the brown components are just plain gone. But, microwave cooking is quite different from stove top cooking.

Delivered w Carrier Lost and Carried

In this figure, the area of carried in the application equals the reduced size of the user’s model. This reduction in the user’s model shows up as compensations and loss.

Turning this around and looking at the conceptual model a user creates while working with an application, disregarding the conceptual model the user brings to the application, we see a slightly different result.

Multiple Conceptual Models

In this figure, the user has to learn the carrier relative to the controls through which interactions occur. When you use a dialog box, you use carried controls, and then when you click OK, Apply, or Cancel, you are using carrier components. I don’t have to click OK if I do this on paper.

The user’s conceptual model involves looking at the interface, creating a hypothesis or expectation, using the application, inspecting the results, and if the user’s expectations are not met, looping back and making some kind of adjustment. This is learning. It takes the user from unknowing to knowing, but this knowing might be different from what they knew coming to the application.

There are actually four models here:

  1. The user’s (source) conceptual model originating from the user’s functional culture
  2. The elicitor-driven developer implemented model in the model component of the application’s code
  3. The UX designer’s model embedded in the view component of the application code
  4. The user’s conceptual model of the application itself.

None of these models will be consistent with each other. Ideally sure, but I don’t live on that planet. Even a usable interface doesn’t preserve the source conceptual model.

Yes, we can overcome these issues with available technology. Changing practice begins with awareness. So you’re aware now. You get to change your processes where you are. Let us know how it goes.

Next time, the thick tail vs. the long tail.

Comments? Thanks!

The Tip Off and Functional Cultures Viz II

February 5, 2011

Back in Chaos has Changed & Functional Cultures I described how we get over the plateaus we get stuck on while learning something new. The climber can’t find the crack they need to get further up on their climb. They won’t find it on their own. Learning is social. Someone has to tell you, hey do this, reach over there. We do, we get it and we move on. Once we move on, we forget it. It seems so natural now. I called this process reimplication, the last step in the learning process. It’s the one we don’t notice. Ratcheting up is another term used to describe it.

This ratcheting up is a problem for elicitors.

To make ratcheting up real for you, think about how you drive from the office to your home. You do this everyday. You know the roads you take. You know why you take the particular roads you take.

I remember having to drive from Pasadena down to San Diego during rush hour. My boss knew that side of SoCal, so he clued me in. It only took 1.5 hours, an impossibly fast drive. Hell could have frozen over. I had time to kill. I was ratcheted up.

I also remember a rainy day in Houston. I decided there were too many puddles in my usual lanes, so I slowed down and drove in unfamiliar lanes, hit a puddle, spun out, crossed the lanes twice bouncing off the divider, and hit a sign; the sign snapped and hit the other side of my car. I was ok. I drove away. Everyone stopped. No one else involved. The insurance company totaled the car.

So where were we, oh, reimplicated. Forgot, didn’t you? But, really, what road did you take this afternoon? Was there some reason you had to take another route? Will you remember that route Monday afternoon? You’ll have some vague notion that it was not a problem, but ordinary. That’s how reimplication feels.

Explicit and Implicit Components of a Model

The figure is our drive from the office to our home. We don’t recall the routine. The routine was not memorable. It was implicit. We remember the office, and home. We implicate them as well. Take a vacation. Do you think about the chores?

One night I was coming off a third shift at the grocery store and stopped to get some gas. The clerk was studying for his GRE. He was doing percentage problems. He had to remember three formulas. I can’t do that. I know algebra, so I don’t have to. My life is easier, because of algebra. I was ratcheted up. Examples can be found everywhere.

But, can we really talk about all the stuff we know and do in the most ordinary sense? Can we teach it? Can we teach it to the requirements elicitor? Mostly, we let them ask the questions. Oh! Did I leave off this tiny thing here? Probably.

On to Functional Cultures Visualization, part II.

Back in Building a Dog, Oh, Make that a Cat, I described the triangle model, and in the very last diagram of Visualizing Functional Culture, I left you with two triangles that I called models. Those models were associated with populations having taken different pedagogical pathways into a conceptualization. In this post, we’ll take a look at how the triangle model can be used to illustrate the gaps between our models and our apps.

I use the triangle model to illustrate processes surrounding realization, any realization. Making a concept real involves goals, search, decision making, tools, stuff, learning, teaching. Those tools and that stuff only get into our hands, because they are realizations as well. And, they are realizations that embed all the decisions, aka knowledge, that were needed to make them. Most of that knowledge is like our drive to work, implicit. We don’t need to know. And, we don’t want to know. No information overload involved.

If you want to have fun, take a look at this slideshow on a website where I posted this stuff, before I moved to my prior, no longer served blog. Fun, did I over promise?

Anyway, enough of that.

Triangle Model of Any Realization

Time going down is just weird, so let’s do a smooth translation.

Triangle Model of Any Realization

There we go. Time is headed in the correct direction. Time always moves left to right, unless you are dealing with the Slavic notion of time which has no arrow. Actually, turning the figure improved its fitness for use. It’s information design. You can imagine where the dollar axis goes. Decisions cost time and money, even the implicit decisions, those miles you drove, but can’t recall. We don’t have to explicate the decisions. The decision tree happens.

Triangle Model for an App

An application is a realization. I use two lines at the base of the triangle to illustrate the user interface, and at times, the API. Interfaces are as close to real as we get with software. The two lines will stand for realization throughout this post.

Triangle Model of an Idea or Concept

I use a single line at the base of the triangle to indicate an idea or concept. Keep in mind that since a concept can’t travel alone, that single line would be a conceptualization, the thing we used our icons from the last post to map out.

Idea in a Realization

Here I’ve put the idea behind the realization inside the realization. Notice that the idea is a decision tree. In software, the carried idea is the application’s requirements.

Ideas Realized

A realization might involve more than one idea, or more than one collection of cultural meanings. Ontologies are easily represented as trees, hence decision trees, a triangle, a model. Lexicons can be planned in a discipline called language planning, so the language is a realization, a tool, which might express the concepts in an ontology. Just tying back to the central problem presented by functional cultures.

Notice that the models extend through the realization and into the future, into the depth beyond the interface, into the space of work product, and the stuff we do with work product–the real place where value is found. So no, it’s not UX, stupid. I left the burner on, my food is burned, but I won’t have to cope with that until it’s plated and I’m eating in front of my TV–a long way from my stove’s interface. But, the burn was a realization.

We use realizations to create realizations. Geeks focus on the former, users on the latter.

I drew this figure with the models extending beyond the realization, because I’m using the distance from the realization to the base of the model as the gap between my traded off functionality and what my users needed to work within the context of their functional cultures. I have another rep to cover the burned dinner. No, we won’t go there in this post.

Idea and realization for a single person

So taking it back to people, we can write a software application for one person by eliciting requirements from that person and building what they want. No gap. No persona. No market segment. No executive sponsor. No tradeoffs. No politics. Still, ratcheting up will be a problem. But, the functional culture is encoded, and no paradigmatic issues arise. Who cares if the elicited person is a dinosaur, or someone that cuts themselves shaving on the edge of the state of their art. No generational issues.

This assumes that the person is not operating in an environment that they don’t control. The person is not preparing income taxes. Everyone is operating in constrained environments, so their processes take that into account or the constraints are blackboxed or implicit. My truck is a blackbox with an automatic transmission,  V-8, cup holder, CD changer, and a bench seat. There is just so much space for meaning. Hey, I love my truck. Just don’t make me dive under the hood.

Detour ahead.

Idea and realization for a single person as carrier and carried

I used the word carrier earlier. That comes from the software as media concept. A medium consists of a carrier (gray) and a carried (aqua) component. The carrier is software. The carried is the controls users manipulate to do their work, their rituals. The carried encodes the meanings defined within the functional culture. We’ll bump into this idea again.

Back on the road, you’ll forget, ah reimplicate, the detour.

App Coded For Several Users

So here we have an application coded for several people. Their distance from the interface represents their gaps. So we have gaps; personas; market segments; an executive sponsor, sometimes called a product manager or a CEO; tradeoffs; politics; and paradigmatic issues. In short, we have a mess. We’ve managed that mess by pretending it works, and by delivering average functionality that fits no one. We let the time arrow go down. We invented the executive sponsor and personas to cure requirements volatility, but still our requirements are volatile. Yes, even with Agile.

So now we embody those people in their functional cultures.

App Models and Compensations For Several Users

Once we add the functional cultures back into the figure, we have the requirements, and the expression of those requirements. Each cultural model has its own blue. Each expression involves some distance from the interface, some gap. The person with the thinnest gap won the negotiation game, the political warfare around the meaning tradeoffs made so the developers could efficiently build the application.  The tradeoffs cause compensating work, hence compensations. That compensating work creates off budget tradeoffs for the functional unit using the software. Off budget means implicit costs attributed to the functional unit. The IT department or vendor is not billed for the cost of the compensations, so nobody has any motivation to eliminate those costs. Measure it. “Measure what?”

Apps and Users

Here we show users and the gaps involved in their use of several different applications in their daily do. The gaps are not the same size for each person using the same application, and the gaps are not the same size from one application to another. It’s a rough world. Those 360 degree gaps are the UXs involved in just getting your work done.

So we have used the triangle model to demonstrate the relations between people, their functional cultures, and the applications they use. I hinted at how the iconic representation of a conceptualization ties to the triangle model. And, we talked about ratcheting up.

And, would you know it, I still have some figures left over. Yet another post on this subject. It should be shorter.

One day, with an appropriate toolset and an ethnographer on board to find the meanings and the ratcheting ups, we can close the gap. We, as product managers, or the person with another job title who does product management before a product manager shows up, can get this done. Try it. Draw it. Spec it. Implement it. Well, get it implemented. Is development already using Aspect-Oriented Programming? If so, it can happen next week.

Comments. Really. A comment long ago set me to find the why behind requirements volatility. Oh, I was mad. Oh, I didn’t like the guy. But, hey, he was key. He challenged my dearly held idea. He made it stronger. Fire away.

Visualizing Functional Culture

February 1, 2011

In my recent posts, I've taken you on a romp through functional cultures. If you haven't read those posts, see Meaning Fitness, Why we ignore functional cultures?, and Chaos has Changed & Functional Cultures are Alive and Well. So we've explored the conceptual model. We have a few hints. It's time to tool functional cultures up, so you can start using them.

In the last post, I mentioned the situation where an idea, a concept, divides a population into those that have adopted the idea (in), those that have started down a pedagogical pathway towards adopting the idea (on), and those that have no idea about the idea (out). These things were discussed in terms of a book, but marketing teaches too, particularly in B2B, where you need six contacts with a prospect before you can schedule a sales visit. A lot of things teach. A lot of things get learned.

I wondered if I meant a Venn diagram when I wrote that post. Subcultures are a proper subset, well, if the world was nice and neat. Let’s pretend. We won’t pretend long.

Venn Diagram of a Subculture within another Subculture

Here we use a Venn diagram to illustrate how a population within another population adopts an idea. That idea has meanings attached to it. Those meanings are learned and justified by the adopting population. These actions make culture, a subculture. The adopting population will see the world from the perspective of the adopted idea. The non-adopters won't see the world that way. The non-adopters are different from the universal population, so the non-adopters are a subculture unto themselves. They learned some meanings that separated them from the universal population. So learning the idea that separates populations A and B, non-adopter and adopter, made a subculture within a subculture. The picture starts to look like a topographic map.
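
If you want to play with the in/on/out partition outside of a drawing, here's a minimal sketch in Python, with made-up names, treating subculture membership as set membership:

    # Hypothetical populations; subculture membership modeled as set membership.
    universal = {"ann", "bob", "cam", "dee", "eli", "fay", "gus"}
    adopters = {"ann", "bob"}                       # "in": have adopted the idea
    learners = {"cam", "dee"}                       # "on": moving along a pedagogical pathway
    non_adopters = universal - adopters - learners  # "out": no idea about the idea yet

    # The non-adopters are a subculture too: defined by what they have not learned.
    assert adopters < universal                     # proper subset, the Venn picture
    print(sorted(non_adopters))                     # ['eli', 'fay', 'gus']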

So if you need a quick visualization of a functional culture, use a Venn diagram. Please don't rush to UML. Apparently, UML doesn't encode ontology. There are tools for that, tools that are now associated with the Semantic Web. UML doesn't tie into terminology management, so we go without controlling our vocabulary. We let developers name things, and then we expect everyone else to conform to the developer's terminology. Just another thing we do to avoid confronting, admitting to, and leveraging functional cultures.

Personally, I think in terms of conceptual geographies, complete with elevation, cliffs, canyons, and such. But, the Venn diagram is as close as we'll get to that here.

So we’re on the road to the next visualization.

Sociogeography of a Subculture, a graphic primitive

This iconic representation differs from the Venn diagram, because the boundary between the source and target subcultures is thick. The black arrows are the pedagogical pathways where a person can move from the outside to the inside, from out to in. The problem for the person is to find a resource that enables them to get in. Sometimes being on is enough. Make the A, get out, forget it, get on with it. Attention is limited. Maintaining a subculture subscription becomes a constant if it becomes part of the person's identity.

If I had a graphic tool to visualize these subcultural boundaries, I could drop another subculture into the destination culture and recursively show the complexity of, say,
Accounting>Tax Accounting>Multinational>WIP.

Conceptualizations work that way. You drop the concept into a set of other concepts. You do this recursively, so the concept you dropped into the structure has a position, a location, an address. And, you have a hierarchical definition for the new concept at the ontological level and the terminological level. I discussed the ontological level in Building a Dog, Oh, Make that a Cat, and in the previous post when I talked about storying. Ontons, or ontological sortables, are the things these cultural boundaries are made of.
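
Here's a minimal sketch of that recursive addressing in Python, assuming a hypothetical nested-dictionary representation of the conceptual hierarchy:

    # Hypothetical conceptual hierarchy; a concept's address is its path from the root.
    hierarchy = {
        "Accounting": {
            "Tax Accounting": {
                "Multinational": {
                    "WIP": {},
                },
            },
        },
    }

    def addresses(tree, path=()):
        """Yield every concept with its hierarchical address (its location)."""
        for concept, children in tree.items():
            here = path + (concept,)
            yield ">".join(here)
            yield from addresses(children, here)

    for address in addresses(hierarchy):
        print(address)
    # Accounting
    # Accounting>Tax Accounting
    # Accounting>Tax Accounting>Multinational
    # Accounting>Tax Accounting>Multinational>WIP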

The Berlin Wall fell. It took years for the East Germans to adjust to the West Germans, and likewise. The ontons were more than the bricks and barbed wire.

So I’ve moved the idea of dropping that icon to realizing it.

A Subculture within a Subculture

A Subculture within a Subculture

The shape of the icons can be established by ontological locations as point coordinates, or by massing a population of ontons into an area. Or, you can just skip that and draw it. Don’t worry, I won’t go deeper today.

The black arrows represent ontological pathways. They don't have to line up. These ontological pathways have rates. It takes so many minutes, hours, days, weeks, months, years to move across cultural divides. Or, seconds. There is a differential equation buried in ontological pathways. The thicknesses of the cultural divides interact with the speed of the transition. You can relate the thicknesses to those ontological locations and the duration of the ensuing ontological travel. Or, you can just draw it. Take the bus, the train, the plane, just get here–my apologies to "Get Here," a song I used to sing in the Karaoke bars.
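
If you want to see the rate idea in motion, here's a minimal sketch assuming a toy model in which the flow from on to in is inversely proportional to the thickness of the divide. The rate constant and thicknesses are illustrative, not measured:

    # Toy model: dIn/dt = k * On / thickness.  Thicker cultural divides, slower transitions.
    def crossed(on, thickness, k=0.5, dt=1.0, steps=10):
        inside = 0.0
        for _ in range(steps):
            flow = min(k * on / thickness * dt, on)  # can't move more people than remain "on"
            on -= flow
            inside += flow
        return inside

    for thickness in (1.0, 2.0, 4.0):
        print(thickness, round(crossed(on=100.0, thickness=thickness), 1))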

The Rough Guide for one of these ontological travel episodes would tell you that you need certain things: a passport, a visa, money, and of course, a guide–in a word, prerequisites. Having a little knowledge speeds up your travels and gets you through passport control a little faster.

The ontological pathways on the diagram are unique to each transition. A person seeking to make the trip must find them. Even you have to find them if you are going to include them on the diagram.

The hard part of text visualization is that in the lexicon, the evidence of an underlying ontological sortable, direction and distance tend to be extrinsic. Intrinsic geography would require standards. Make up your own rules. Define the distances. Just don't churn those distances. Be consistent. While you do this, keep in mind that a standard deviation can be normalized, thus becoming a unit measure.
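
Here's a minimal sketch of that normalization in Python, assuming you've already assigned extrinsic distances of your own devising:

    from statistics import mean, stdev

    # Hypothetical extrinsic distances you assigned between concepts; the units are yours.
    distances = [3.0, 7.5, 4.2, 9.1, 5.5]

    mu, sigma = mean(distances), stdev(distances)
    unit_distances = [(d - mu) / sigma for d in distances]  # standard deviation as the unit measure

    print([round(z, 2) for z in unit_distances])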

Another aspect of this figure is that it illustrates the movement from general to specific, the process of becoming a specialist.

When the arrows run from in to out, you have a specialist becoming a generalist, like an engineer going to b-school. Warning: b-school is like any other school; you learn and enter into a subculture, a functional culture. A former coworker described b-school as a place to learn the lexicon. Well, there were ontons to go with that lexicon, and there was the doing, the rituals.

Sociogeography of Generalization (from in to out)

To show that engineer going to b-school, I’d have to show the intake subculture of b-school which moves the student from on to in. That would be another specialist icon.

Those pedagogical pathways are more than content. Remember the tests? You have to pass the tests or change your major. Sales calls it qualifying the prospect. A pedagogical pathway is instruction. "Hey, you! Go back to concept 17." The pedagogical pathway is a series of gates. I was thinking fluidics–as in Solo and Chewbacca discussing their getaway spaceship, hydraulic logic–valves, so I didn't use Boolean logic symbols, but you could. Do it your way.
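
Here's a minimal sketch of a pathway as a series of gates in Python. The gate names and learner attributes are hypothetical; each gate either passes the learner along or sends them back:

    # Hypothetical gates: each is a predicate over what the learner has demonstrated so far.
    def passed_concept_17(learner):
        return "concept_17" in learner["passed_tests"]

    def qualified_prospect(learner):
        return learner["budget"] and learner["authority"]

    pathway = [passed_concept_17, qualified_prospect]  # order matters: a series of gates

    def traverse(learner, gates):
        for gate in gates:
            if not gate(learner):
                return "sent back at " + gate.__name__   # "Hey, you! Go back to concept 17."
        return "in"

    student = {"passed_tests": {"concept_17"}, "budget": True, "authority": False}
    print(traverse(student, pathway))   # sent back at qualified_prospect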

Sociogeography of a Subculture with Gates (Valves)

Looks like a mess? Use a CAD system–Culture CAD.

I go on to switch the representation to documents. Here I take an enterprise-wide view of the touchpoint collection, or document set, in what we are calling content marketing these days. Marketing communications is teaching. You have to teach multiple stakeholders the various perspectives on your conceptualization, its eventual realization at the interface, and its value realization in the depth beyond the interface. A sales rep making a sales presentation is a document. Yes, a document with a mouth, a brain, a quota, and great communications skills. It's an abstraction, not an insult.

I’m just illustrating a single pathway for a single stakeholder. There would be many pathways, many gates, many documents–networks!

Sociogeography of a Subculture with Fulfillment Chains

When these pedagogical pathways are boiled down to a sequence of documents, the word I've come to use is fulfillment chains. A printer or an independent sales organization ships documents to prospects in response to a request from that prospect. They call that fulfillment. Similarly, a web server serves documents in response to a request. You can also use fulfillment chains to cover the expertise development process of your documentation, training, and technical support operations. They are key to the post-sale enactment chain, where expertise development leads to customer loyalty. Likewise, in the pre-sale enactment chain, the one that drags prospects to the cliff we call the sales funnel.
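
Here's a minimal sketch of a fulfillment chain in Python, with hypothetical document names: each request from a prospect is answered with the next document on that prospect's pathway:

    # Hypothetical fulfillment chain: each request from a prospect is answered
    # with the next document on their pedagogical pathway.
    chain = ["teaser brochure", "white paper", "case study", "evaluation guide"]
    position = {}   # prospect -> how far along the chain they are

    def fulfill(prospect):
        i = position.get(prospect, 0)
        if i >= len(chain):
            return None              # chain exhausted; the prospect is in, or handed to sales
        position[prospect] = i + 1
        return chain[i]

    print(fulfill("acme"))    # teaser brochure
    print(fulfill("acme"))    # white paper
    print(fulfill("globex"))  # teaser brochure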

I could have drawn some kind of instructional design flowchart instead of a document chain. This visualization can span the product manager’s scope of responsibility. Get your head out of the bug list. If you want to be CEO of the product, you have the scope.

Before going on, I want to make this perfectly clear:

Instruction moves people from out to in.

Even software features teach. The thing that has to be learned is what feature X does, and how far from one's expectations the outcomes will be, aka how much compensation I'll have to expend to meet those expectations. If I learn that I can't get there from here, and I'm still going there, your software is toast–not another nickel. I learn that, it's migration time, and I'm up against a 10 am deadline. Who needs sleep? Your brand is toast as well. And, whoever saves me from you just got themselves an evangelist.

Since features teach, any supporting content improves the chances that the feature will be learned. That supporting content becomes a feature in your offer. That supporting content isn't just some nice to have. It's critical, particularly since we haven't really defined that feature to meet the user's need even if it has been UXed. The feature lacks cultural fit. The supporting content actually recontextualizes the feature, improving its cultural fitness. This comes from a statement I ran across long ago:

Programmers abstract away from the requirements, technical writers explicate back to the requirements.

This was a hint towards carrier (means) vs carried (content), the software as media idea. Programmers spend time on the carrier. The division is subtle. It takes more work to get something done fast enough than it does to define a database record. The former is carrier, the latter carried. The above statement is a fact in terms of my career. One programmer put it this way, "I deliver functionality. I don't know anything about interfaces." That was a few years back, so things change, but the focus is elsewhere. A hashing algorithm is a long way from a user, or that user's functional culture.

The carrier/carried split also means there is a content split between the GUI users and the API users. That content split translates into learning, and into the functional cultures defined by that learning.

Back to notation. Consider your instructional value chain. Your company doesn't have to do it all. A professor picks a book because they think that book works well with their particular take on the subject. They don't have to do it all. You don't either. You may want a partner to do this, or it may be part of your 3rd party developer ecology. They need to know what's changing, so there is no lag in getting their content ready and out there. If you don't want it to get out too soon, you may need a contract.

Sociogeography of a Subculture with Fulfillment Chains as Realizations

If you have someone creating documentation, training, or technical support content, they are doing project work and have dependencies in any development effort. They are creating a realization. I’ve used triangles to represent those realizations, aka I’ve used the triangle model, which I mentioned in the Dogs and Cats posts linked to earlier, and in Now that you have a Cat. Later, we’ll see how the triangle model lets us simplify the notation we use to illustrate functional cultures.

Realize that there is a recursion in a pedagogical pathway. You produce the instruction to produce a population that has learned what you taught. In software you realize content to realize users. Even the use of a single feature teaches some subset of your users. MS Word used to have a setting to enable the use of WordPerfect shortcut keys. This was the feature that eroded WordPerfect's hold on its last stronghold, the legal secretary, a functional culture. I never used that feature. Alas, I'm not a legal secretary. Lawyers don't have those anymore. Disrupted.

We've abstracted the underlying populations into the universal set. It's time to get more specific with the populations. Think channels. I'll add populations at the entries to the fulfillment chains. I do this because the population using a particular pedagogical pathway to get in does so because they have gained awareness of a resource. That awareness came to them either by accident (indigenously), or via marketing.

Sociogeography of a Subculture with Fulfillment Chains as Realizations with Populations

Of course, those populations are not all the same size. Their associated pedagogical pathways exhibit various efficiencies in moving that population from out to in. These populations were attracted, not gated or filtered.
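
Here's a minimal sketch of that arithmetic in Python, with hypothetical population sizes and pathway efficiencies; the point is simply that the pathways move very different numbers of people from out to in:

    # Hypothetical entry populations and the efficiency of each pedagogical pathway.
    pathways = {
        "vendor webinar": {"population": 5000, "efficiency": 0.02},
        "partner course": {"population": 800, "efficiency": 0.15},
        "word of mouth":  {"population": 300, "efficiency": 0.40},
    }

    for name, p in pathways.items():
        moved = p["population"] * p["efficiency"]
        print(f"{name} moves {moved:.0f} people from out to in")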

Now, I’ll annotate a branded instructional pathway.

Sociogeography of a Subculture with Branded Fulfillment Chain

A collection of branded pedagogical pathways can cross the conceptual hierarchy. If the branded entity helps the learner succeed at the first transition, that entity can certainly help the traveler gain awareness of that entity’s subsequent offerings–the retained learner.

What gets taught by these pedagogical pathways embeds a model, a perspective, of the underlying subject. That model becomes yet again a subculture. The pathway might emerge after years of teaching the subject, so the model is different from earlier models. The subject changed. The teaching changed; the culture changed. Across time, you end up with a functional culture composed of generations of meaning, and populations divided by those generations. I call this a collection of paradigmatic cultures. Paradigmatic cultures occur temporally within a subculture.

In computer science, it’s the difference between the software engineering era, and the non-computability era, or the determinist vs nondeterminist programmers. It happens. It’s time to start accounting for and accommodating this reality.

Sociogeography of a Subculture with Cultural-Ontological Models and Populations

This figure illustrates how a convergent idea and a divergent idea move populations from out to in. A convergent idea (bottom right) merges two formerly mutually exclusive populations, like nuclear scientists and medical doctors. A divergent idea (upper right) creates a proper subset within a single population.

So why bother with all this?

Across the technology adoption lifecycle, it means more money for the vendor at the very time when their markets are declining. Getting it done at an organizational level requires application and organizational architectures that are unfamiliar right now. It requires organizations to change their behavior.

For the users it promises better fitness beyond what the UX paradigm offers, because the dysfunctions are far from the interface.

For product managers it ties software development and marketing together much more closely. When a function depreciates, so does its underlying concept and the concept’s pedagogical pathways. It’s never fire and forget. It’s never a matter of throwing it over the wall and letting marketing play catch up. It’s never a matter of terminology conflicts caused by the failure to manage that terminology.

I’ll leave the notation simplification for the next post. The figures are done. So it won’t be long. You don’t have all day to read this. I’ll leave you with a summing up graphic.

Ontological Models-Verticals and Horizontals-TALC-Economic Impacts

I added a layer under the conceptual model (the carried) for the carrier. It's the green along the left and upper edge of the conceptual model. I also coded the IT horizontal of the technology adoption lifecycle green, because the focus of applications in this phase is the carrier. The IT horizontal is its own collection of functional cultures. I went on to draw the bowling alley in the technology adoption lifecycle. Here you go find yourself an early adopter in a vertical industry. You do this eight times. The figure just shows two such engagements and their subsequent verticals. You are productizing your technology in these verticals. The verticals are far apart on the macroeconomic industrial tree. They are culturally very different. Their applications share only your underlying technology. Those differences get trashed as the technology moves to the IT horizontal. The focus shifts to IT functional cultures and away from those vertical cultures. When the application moves to the late market, you de-geek the application; you move to an average, segmentation-driven functionality. This is where you can improve your fitness by going back to a focus on specific functional cultures.

The macroeconomic industrial tree is there to illustrate the risk reduction provided by the bowling alley. Individual companies in your portfolio become economic sensors in your macroeconomic decision support system. It also illustrates generalization and specialization. It is easier to migrate up the tree than down (generalization vs. specialization). It is much harder to migrate across the tree laterally (specialization to specialization).

Comments please. Thanks!