Archive for January, 2011

Chaos has Changed & Functional Cultures are Alive and Well

January 21, 2011

I’m reading “Chaos Theory Tamed” by Garnett P. Williams, copyrighted in 1997. It’s a very good book. The necessary math is explained without calculus. I’m not going to go into chaos. No reason to call Maxwell Smart. The book is definitely not a mud thick, quicksand, mind resistant book, aka a textbook. I recommend this book. My public library always surprises.

Williams does something that most researchers do: he talks about the shape of his functional culture. He probably does this more than other authors I’ve read lately. So I wanted to show how he does it, and give requirements elicitors a few hints to help them make a functional culture and its subcultures more visible.

So off I go to look up the quotes, the quotes I didn’t highlight when I dug into this book. How did I know I’d be writing about this? I didn’t. My requirements forked. The original requirements didn’t change. Not at all. I wanted a fast read. But, it turned into a slow, know-this-stuff-by-the-time-I-put-the-book-down project.

So right here in the Preface, Williams stories the discipline of chaos in his goals for this book. Storying the data is the act of contextualizing a thesis: like so-and-so, but different from …. Every PhD thesis starts this way. So go back to 1997 and take a look at chaos theoreticians. Williams uses another term for them, but after looking that term up in Google 14 years later, I’ll not use it in this post. Yes, in 14 years, chaos has changed, but it’s still chaos. As Williams puts it, chaos has an insider, a surface, and an outsider population (in a bullet):

Most books on chaos … use a high level of math. Those books have been written by specialists for other specialists, even though the authors often label them “introductory.” Amato (1992) refers to a “cultural chasm” between a small group of mathematically inclined initiates who have been touting chaos theory, on the one hand, and scientists (and, I might add, “everyone else”), on the other….

So we have three populations divided by two cultural divides. Cultural divides, because meaning partitions populations into those that don’t know, those that know a little, and those that know it cold.

I play this old game called Snake. It looked like an Ito process to me. So did this chaos stuff, so I tweeted someone who would be able to tell me if I was correct. Well, he felt it was Markov. I was a bit stunned. Markov is a process with no memory, Ito one with some (finite) memory, Gaussian/Bayesian one with all memory. His answer reflects my now-refined definition of Ito: zero is finite, so all Markovian processes are Ito processes. I had to dig. I finally figured out what my peep was saying. His answer was a perfectly clear statement. It just didn’t mean what I took it to mean. Yes, a nuclear submarine could have squeezed through there. As experts in human ambiguity, you’d think our ambiguity sensors would have gone off. Nope. And, like every good requirements elicitor, we’d claim that the requirements changed. I’m on the surface. My peep was an insider. I’m not faulting my peep. In fact, thank you very much. Understanding is always gold. Your elicited won’t anticipate your misunderstanding either, unless they are writing a multiple-choice math test. I hate those. Grr.
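The memory distinction can be sketched in code. This is my own toy illustration, not anything from Williams or my peep: a Markov step looks only at the current value, while a finite-memory step looks back over a window of k values, and with k = 1 the two coincide, which is the sense in which every Markov process is also an Ito process.

```python
import random

def markov_step(state):
    # Markov: the next value depends only on the current state (zero memory).
    return 0.9 * state + random.gauss(0, 0.1)

def finite_memory_step(history, k=3):
    # Ito-style: the next value depends on the last k values (finite memory).
    # With k = 1 this collapses to markov_step, since zero extra memory is
    # still finite memory.
    window = history[-k:]
    return 0.9 * (sum(window) / len(window)) + random.gauss(0, 0.1)

random.seed(42)
history = [0.0]
for _ in range(100):
    history.append(finite_memory_step(history))
print(len(history))  # 101 simulated values
```

The 0.9 damping and the noise scale are arbitrary; the point is only where each step is allowed to look.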

Another argument that Williams takes up here is the pedagogical pathway, the sequence through which one teaches, or was taught. Where you anchor your introduction selects your readers, your students, and, if your medium is software, your users. How much must they know? How much of an insider must they be? That introduction exists in a space of constraints and enablers (affordances), which pretty much accounts for why there are so many introductions to a given subject, and why you can always write another one.

Pedagogical pathways turned out to be important in Argentine tango. I learned “going to the cross” a certain way. I teach it that way. Other instructors don’t teach it, or teach it differently. I went on to deconstruct my tango and relearn it. One day I learned to go to the cross without all the conventions we used. Instead of those conventions, you just extend the two-men-in-a-donkey metaphor, and the leader’s right shoulder puts the follower, her left foot to be more exact, where it belongs, while you give a little down lead on her back. Done in the distance of a single step, a single beat of music, rather than step outside, step inside, and finally the right-shoulder, left-foot thing. Why did we teach it the longer way? We did it so the learner would believe in themselves. Learning is a matter of moving from unaware unknowing, to aware unknowing, to aware knowing, to unaware knowing. With that aware unknowing comes fear, an obstacle to learning, and in a voluntary situation, it is liable to cause the student to give up and quit. We needed more dancers, not fewer. But, why learn it the donkey way? Because the conventions are shared, and where the conventions are not known, conventions become problematic. When you walk into a new venue and ask a non-dancer, a rapt watcher in the audience, if she wants to dance, she’ll say no, because she never had a lesson–she is afraid. Don’t get her wrong. She wants to. And, guess what, you have to make that happen, and you can if you don’t have to rely on conventions.

Those pedagogical pathways are important to us elicitors, because they reveal the conceptual underpinnings of doing, the meaning at the core of the ritual, the ritual we hope to encode and encase in GUI pixels. If all of the people elicited went to different schools and read different textbooks, expect that they have taken very different pedagogical pathways through the given subject. Even an Aha! is arrived at via a sequential cognitive pathway of this idea before that one. In chaotics, these sequential pathways become phase space pathways and dimensions.

Back to Williams:

In this book, I assume no prior knowledge of chaos, on your part. …, I try to discuss only the most important ones [topics]… I try to present a plain vanilla treatment, ….

So we will visit the peaks, not the valleys, of the cultural terrain of chaos. And, really, Williams has a bit of attitude with this vanilla thing. It comes up again and again. We won’t be joining Williams up there on the summit. That’s OK. This is the 101 class, the surface, the feet-wet, freeze-at-the-knees spring.

Williams is himself a geologist/hydrologist who came to chaos much like us non-mathematicians, from somewhere else. He mentions his writing style and its goal of reducing the distance between the subject and the reader, a UX goal.

In defining nonlinear, he says:

… An alternate and sort of “cop-out” definition is that nonlinear refers to anything that isn’t linear,…

I still remember, in linear algebra class, the fill-in professor, who was taking the place of THE linear algebra professor, who had died the day before the semester started, writing an equation on the chalkboard and saying, “This is an eigenvector.” So I had to ask, “What is an eigenvector?” “This,” he answered. That was the end of my attention to the subject for the rest of the semester.

“Us against them (you),” a cultural boundary. No, not a stupid question. And, not a wrong answer. Just an answer with no batteries for the flashlight. Williams went on to say that we didn’t really have to know what nonlinearity was beyond this cop-out definition. We were not going to become one of them. Let’s face it, we are happy to be BAs and PMs. We wouldn’t take the pay cut to become one of them. Although, another day like yesterday, and we’ll be feeling around wondering where that towel is.

That cop-out definition was Williams’s proof that he is one of us, not one of them.

You’ll often see the term “flow” with differential equations. To some authors … a flow is a system of differential equations. To others … a flow is the solution of differential equations.

So where do we stand? I’m sure some of us can handle differential equations. I can, but only lightly right now. It’s an active goal. At any rate, start with the universal set U, draw a subset inside that for the differential equation crowd, and divide that crowd into two mutually exclusive populations (subsets). The line between those two differential equation crowds is where the pedagogical pathways would be that take someone from the unknowing crowd into the knowing crowd. “Belay on!”

Not all differential equations can be solved, so you might have to rappel down and climb up again. This thin line in the diagram is really a deep box canyon, and it’s an easier climb if you learned your integral calculus cold way back there on your pedagogical pathway. There will be a moment or two on the line when someone will yell out, “To the right, slightly above you, a crack!” You’ll reach out. You can’t see it. You have to feel it. There it is. Fear and relief, learning again. On the climb again, get off that plateau. Ratcheting up. You’d think that the people you are eliciting would remember the guy that tipped them off, or that they would forever recall that they didn’t know the tip. When you elicit them, it was smooth, they were brilliant, nobody helped them, the teacher/professor sucked, and education failed them. Really?

Don’t worry, that hint will turn into a bug. Why didn’t you …? Well, you didn’t …, and … (not the time to drag out the tape and prove it).

That last step in learning, unaware knowing, is in knowledge management terminology a reimplication: the explicit disappearing back into the implicit/tacit, as if it were so obvious that no one would bother to explicate it at all. It’s just the way all of us humans are. Culture is so transparent that you can’t see it even when it obscures things.

Some authors like to label the first observation as corresponding to “time zero” rather than time 1.

As programmers, how many times do we make this error, the off-by-one error? You probably see it as a random event, rather than as the behavior of a population. Well, for authors, it’s not an error. It’s a way of seeing the world. Nobody is going to get in a fight over A0 vs. A1. They look at it and think, oh, one of them. Likewise, you can run into a differential equation book written in Newton’s notation, rather than the usual Leibniz notation. Still a frontier checkpoint, but one where the customs guy has gone home for lunch, leaving the gate up.
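The time-zero-vs-time-one convention is the same habit of mind behind our off-by-one bugs. A small sketch of my own, not anything from Williams: the data are the same either way; only the labels shift.

```python
observations = [2.3, 2.9, 3.1, 3.6]

# "Time zero" convention: the first observation is x_0.
x0 = observations[0]

# "Time one" convention: the first observation is x_1, so a reader used to
# 1-based labels fetches observation t by asking for index t - 1.
def obs_at(t, one_based=True):
    return observations[t - 1] if one_based else observations[t]

assert obs_at(1) == x0                    # both conventions name the same datum
assert obs_at(0, one_based=False) == x0   # ...just under different labels
```

The bug only appears when a reader from one convention consumes data labeled in the other.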

When you’re looking at a conceptualization, just remember it was constructed to sell its constituent concepts. And, every concept is a producer to some population’s consumers. “Are you ready to consume this idea yet?”

Vectors are points. I only recently came to that. They look like lines between two points with an arrow at one end. But, normalized, a point is all you need to specify a vector. If you knew that, congrats, you’ve been ratcheted up. Everything is easier where you are. If you didn’t, everything was tougher, as it was for me until the day I joined the knowing on the other side. The point again: a conceptualization dividing two populations. Tougher and more relevant, BAs and PMs will end up eliciting requirements from both of these populations, because you didn’t sort them out. You can’t do it both ways, can you? So you do it one way, halfway between both. Sure, with vectors, no way, but with distant stuff, subtle stuff, how do you know? And, again, the customer didn’t know what they wanted, so the requirements changed. No, no, the requirements didn’t.
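The vectors-are-points claim is easy to demonstrate. Here “normalized” means anchored at the origin, not scaled to unit length; the sketch and the helper name are mine:

```python
# An arrow from a to b and the arrow from the origin to (b - a) are the same
# vector; translating it to the origin is the "normalization," after which the
# arrowhead's coordinates (a single point) specify the vector completely.
def as_point(a, b):
    return tuple(bi - ai for ai, bi in zip(a, b))

assert as_point((2, 3), (5, 7)) == as_point((0, 0), (3, 4)) == (3, 4)
```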

There are three ways to show a set of sequential measurements….

  • The time series….
  • The pseudo phase space plot….
  • The wave-characteristic plot….

Because wave-characteristic graphs show a parameter’s array or spectrum of values…, people often attach the word “spectrum,” as in “power spectrum” or “frequency spectrum.” The terminology isn’t yet fully standardized. Commonly, a “variance spectrum” or “power spectrum” ….

Some authors reserve the term “spectrum” for the case of a continuous (as opposed to discrete) distribution of ….

Statisticians just love frequency-domain transformations of time series….

Different terminology sure. Different times. Different people. Different functional cultures. And, different rituals where Williams is telling us what statisticians love. Different states of adoption as well.
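Williams’s three views of a single series can be roughed out with NumPy. The signal, the delay, and the sampling rate here are arbitrary choices of mine, not his:

```python
import numpy as np

t = np.arange(0, 10, 0.01)            # equally spaced samples, 100 Hz
x = np.sin(2 * np.pi * 1.5 * t)       # view 1: the time series itself

# View 2, pseudo phase space: pair x(t) with a delayed copy x(t + lag).
lag = 10
embedding = np.column_stack([x[:-lag], x[lag:]])

# View 3, wave-characteristic (power spectrum): energy per frequency.
power = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(len(x), d=0.01)
peak = freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency bin
print(round(float(peak), 2))            # 1.5, the sine's frequency
```

Same data, three conceptualizations; each view sells its own vocabulary to its own population.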

Williams brings up an interesting aside when he talks about Fourier series being limited to waves of finite periods:

… Mathematicians had to figure out a way around that problem….

Until this problem, this constraint, was solved, Fourier’s methods could only be used by a small group of people, the market. Once the mathematicians cracked this constraint on the use of Fourier’s methods, and people adopted this new technology, the market grew larger.

Most techniques and concepts in chaos analysis assume that raw data were measured at equally spaced time intervals. Only a few techniques [1997] have sophisticated variants to account for irregularly spaced data. Hence the best approach by far is to measure data at equal time intervals.

Williams goes on to list three methods, and follows this list with

Our discussion will be based on equal time intervals, unless specified otherwise.

The park ranger is telling you to stay on the trail and not to create any new switchbacks. Easier on you that way. The trail is again the pedagogical pathway. He did this earlier when he covered DFTs [never mind what they are]:
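One way to stay on that equal-interval trail with irregularly spaced data is to interpolate it onto an equal grid first. This is a common workaround, not a method Williams prescribes, and the numbers are mine:

```python
import numpy as np

t_irregular = np.array([0.0, 0.7, 1.1, 2.4, 3.0, 4.2])
x_irregular = np.sin(t_irregular)

# Resample onto an equally spaced grid by linear interpolation, so the
# standard equal-interval techniques apply to the result.
t_equal = np.arange(0.0, 4.2, 0.5)
x_equal = np.interp(t_equal, t_irregular, x_irregular)
print(len(t_equal))  # 9 equally spaced samples
```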

The mathematics behind the DFT are vast and complicated. Many books….

At least he hinted at where the adventurous could find what they were looking for. The people you elicit won’t do that, and somehow, you’ll have to be able to become one of them if you aim to capture ethnographic field notes and eliminate requirements volatility at its source: populations and their various meanings–the preludes to doing.

The way he handled the dangers of DFTs to learners is exactly what we did as river canoeing guides. Waterfalls are dangerous. It’s ok to portage them. They portage it, we run it safely after they are downstream. It was only four feet high and didn’t have a deep keeper. You have to ditch the life jacket and dive out under the keeper. No thanks. Yeah, now that we’re done with that, feel free to explore. Just don’t let them get behind you on the river.

Then, Williams dives into standardization and differencing. He walks us through standardization. Yuck! It reminds me of factor analysis and linear programming. Nothing difficult, just endless iterative mathematics, over and over again. And, when you are fed up with that, differencing comes to the rescue. Easy. Done! So why did he take the pedagogical pathway that he took? Did he do it to give us something spreadsheetable? Did he do it to make us love differencing? Work is persuasion. Lazy wins. What about the people you elicit from? Did they walk you the long way to the destination, or the fastest way possible? Knowing that they were not going to get what they were asking for, were they punishing you, or did they think that the fastest way would get you out of their hair? How would you know? And, in the end, when they don’t get what they needed, who will pay the costs? Not you. Not the vendor, if you are the vendor. Not the IT department, after months of effort to an anti-heroic end, except that the functional unit had to hire a few more unemployed people. BAs and PMs can get people back to work by ensuring that meaning can only be recovered from an application by many more hours of staff labor than was previously slated in that unit’s budget. Hire another staffer.
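The two moves he walks through look something like this, a sketch of mine rather than his worked examples: standardization grinds through means and deviations, while differencing is a one-liner.

```python
import numpy as np

x = np.array([10.0, 12.0, 15.0, 14.0, 18.0, 21.0])

# Standardization: subtract the mean, divide by the standard deviation,
# the iterate-over-everything chore.
z = (x - x.mean()) / x.std()

# Differencing: keep only the change from one observation to the next.
d = np.diff(x)
print(d.tolist())  # [2.0, 3.0, -1.0, 4.0, 3.0]
```

Both detrend the series; which one an author teaches first is itself a pedagogical-pathway choice.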

When discussing attractors, Williams used the terminology of approaching the attractor asymptotically. Doesn’t he mean limit, I wondered. Well, yes, eventually he got around to defining epsilon and walking down that geometry-based intuitive approach to limits we got hit with in calculus class. But, he went on to say that statisticians see it as a matter of hitting the noise and having the signal disappear; the confidence interval becomes epsilon. Then, there’s me and that line as a collection of convergent sequences, using a set-theory-based definition of a post-geometrically intuitive mathematical limit.

I brought outside stuff to my understanding. What about when we elicit? Sure we do. How do we not? Do we have practices that eliminate spillover? Would we stop asking questions once we thought we knew? Would we have ever gotten to the statisticians’ definition? Set theory doesn’t belong in the mix at all.

And, lastly, I’ll go back to the side point about constraints. It’s 1997, and Williams is saying:

… There’s no way to predict long-term evolution…

That was a constraint back in 1997. It might have been a collection of constraints. This is where value lives. Eliminate that constraint, and the dollars will flow into your pockets.

Chaos is deterministic and non-stochastic. There is no reason whatsoever that you can’t predict long-term evolution, at least not from where I stand.

Williams was always saying “Other authors…,” so comments please. Cross the cultural divide.

Why do we ignore functional cultures?

January 10, 2011

In my last blog post, Functional Cultures, I summed up a long realization and long-running discussion of functional cultures. There was more. There was this nagging question about how it is that software developers ignore functional cultures. Nagging questions just lie in wait for serendipity. There is more to the answer to this question than “Hey, cultures are so easy to ignore.” It probably takes more sociology, anthropology, or ethnography than I have under my belt. But, there it was, a serendipitous aside in a math book of all places.

It’s been forever since I last took a math class. I dived into a review and some efforts to extend my mathematics capabilities. I started out wanting to be able to do differential games. These days I read several math books at once. Do some math. Jump to conclusions. Ask questions framed in mathematics. And, turn this stuff into tools that might be interesting to product strategists. This stuff is math simpler than the Black-Scholes options equation.

So here I am reading a BA math book. Yes, you can get a BA degree in mathematics. That would be a book “about” math. Not a book where you are bludgeoned within inches of your life by the calculations, computations, or symbolic manipulations that we engage in when we “do” math. If you do enough math, you get a BS degree in mathematics.

Notice the distinction I made between “about” and “do.” I typically express this as being “on” vs. “in” a culture. Poets might say “On love” vs. “In love.” The distinction is analogous.

We went through this a lot back in school. American history class taught us content without methodology. It taught us “about,” rather than how to “do” history, aka write a persuasive piece using historical methodologies and primary evidence on some narrow, but deeply defined historical conceptualization. We were “on” history, rather than “in” history.

History is just one subject we took. History is more than a subject. When you pursue any master’s degree, you learn the who of your field, any field, not just history. Who you know, and who you know of, will define your career. It likewise defines your Ph.D. thesis, and where you go to get that Ph.D. It determines what ideas you are exposed to, what conceptualizations. It defines your approach to the topic. But, all of these things are the result of your interactions with people, people in organizations, people that share meaning, ritual, purpose, and work—CULTURE, more specifically, a functional culture.

If you took more history than that required for graduation, American history class, you moved from the surface into the depths. You traversed a geography, an organized collection of populations of ideas and people that looks like a Venn diagram gone mad. This geography has a map. It shows elevation. To get from “about” to Andrew Jackson’s Secretary of War’s view of the Louisiana Purchase, you have to climb a mountain and look around, rappel down the mountain, and take only your bibliography when you go. If you make that climb, you are never again the same. This is true even if you don’t like what you saw and vow to never bring it up again. Brain science will make you a liar.

So it is with every discipline, be it mathematics, computer science, engineering, finance, data processing, optics, composition, or business. The doing changes, the conceptualizations change, the lexicon changes. But, the sociological processes that led us to do it that way are universal. We learned to do it with our peers, who, like us, traversed the same geography, and came to subscribe to the same norms of our functional culture.

A software developer can come from different functional cultures: engineering; mathematics; computer science, today a different subject and separate from mathematics; the b-school IT programmer; philosophy; the HF/UX expert; the hacker. They might program the same application, but at the level of code, methodology, priorities, and focus on given aspects, that application will be very different.

Every unit within a corporation has a functional culture. Some of those functional cultures are given power. The functional culture with power provides the roots of the corporate culture. The business units share an education in business administration. They share a culture. That education serves as a set of protocols, an infrastructure, a carrier of the monetization(s).

The typical definition of corporate culture is “How we do things here.” But, here is a complicated thing. There are niches within a corporation. Every business unit, every functional unit is a “Here.” Put differently, there is a geography. That geography isolates populations, so a culture emerges. In functional units, the functional culture is established by the functional unit manager. The people they hire are qualified by their education where they entered the functional culture, and their work history where they expressed, refined and evolved their functional culture. The people are unique, but share a common pool of meaning. They are different from others in other functional units in the same corporation.

There is more to this “Here.” There is also a “When” that expresses itself as a cultural container. In academia, professors must change the topic of their research every eight years. They do this on a continuous basis. This leads them to teach their subject differently over time. They develop pedagogical approaches to their subject. Their students learn something that their peers do not. So graduates from a single department over the span of a decade are different. Over twenty years, they are even more different. Generations are similar, but functional unit staff spans these differences.

A good team has specialists that support the rest of the team. Every team member has a particular strength and set of interests. These differences and this structure keeps the functional unit fit to evolve as various technologies mature.

The ideas being taught are either continuous or discontinuous. The ideas divide populations into sub-populations that subscribe to the idea or not, or to some degree. Just being aware moves the person within the cultural geography. Discontinuous ideas lead to discontinuities in populations. These population discontinuities involve long-time-frame sociological processes: technology (idea) adoption. These discontinuities are paradigmatic.

The way an idea is taught, the pedagogy, has similar effects.

Communications across a paradigmatic gap are complicated. Shared vocabulary and shared meaning are clipped.

Requirements elicitation from a functional unit is complicated by the cultural geography of that functional unit. Requirements elicitation prioritizes the paradigmatic subcultures. Some meaning is captured. Some meaning is omitted. This happens within the functional unit itself, before the wider tradeoffs made by executive sponsors. The functional unit manager decides who will be elicited from, which paradigms are installed, and which paradigms will work at a disadvantage given meaning loss that requires adjustment.

Just a few examples of meaning differences, cultural divides:

  • Traditional cost accounting vs. activity-based cost accounting vs. throughput accounting
  • Capital as defined by accountants (cash) vs. capital as defined by international economic development economists (laws that improve the efficiency of cash)
  • Demos as defined by marketers (demographics) vs. multimedia producers (example functionality)

So the book was How Math Explains the World by James D. Stein. In this book, the author says

Part of the reason for the success of mathematics is that a mathematician generally knows what other mathematicians are talking about, which is not something you can say about any other field. If you ask mathematicians to define a term such as group, you are going to get virtually identical definitions from all of them, but if you ask psychologists to define love,…

You get the picture. Mathematics is young. It has, however, grown to the point where it is no longer possible for a mathematician to know all of mathematics, so, like psychologists, mathematicians will lose the coherence of their definitions. Their functional culture will broaden into a collection of subcultures.

Mathematics amazes me. Many issues in how we teach Calculus, for example, were resolved less than 50 years prior to when I was taught Calculus. I’m not that old.

So why is it that we BS grads ignore functional cultures? I’ll accept the notion that the coherence of our definitions at the core of our disciplines leads us to believe that everyone else is like us. Some of us even code as if our users are just like us. We believe in the math of populations. We happily aggregate dissimilar people. We have numbers, blunt numbers, a brute-force attack.

We think that all things are functions. In the years to come, maybe we will be equally at home and fluid with the notions of all things being manifolds globally, and functions locally.

Later, Stein mentions the word “Duck!” and reminds us that it has one meaning as a noun, a waterfowl, and another as a verb, get out of the way.

Again, local-global, function-manifold, and sub populations.

So here you are, having been exposed to this idea of functional cultures and the need for meaning fitness. I’ve pushed you into a subculture. Let’s evangelize these ideas. Let’s make more money because we finally quit ignoring functional cultures and paradigmatic subcultures. Let’s eliminate the need for users to spend time compensating for meaning loss.

Comments please!


Meaning Fitness

January 6, 2011

Back about two years ago, Trevor Rotzien (@trevorrotzien) and I led the Anthropology of Product Management, #aopm, tweet chats. Trevor handled the internal facing uses of anthropology. I handled the external facing uses of anthropology, or more specifically, ethnography.

Since then, I participated on an Anthropology of Product Management panel at the 2009 Orange County Product Camp. I’m not an anthropologist. I’m not in retail. I’m a software guy. I’m an AI guy. I’m a software engineer with a deep interest in requirements elicitation. I’m an Argentine tango dancer who took on the cultural aspects of that. And, through that, I came to know an ethnographer, and read ethnographies. During our panel discussion, I showed how attention to meaning, aka cultures, or more specifically functional cultures, could lead to better requirements.

In 2010, I attended the Austin Product Camp. I went to a session on retail anthropology. Anthropologists tell us to observe. But, anthropologists spent ten or more years working through a PhD where they were taught frameworks for observation. They entered into a functional culture and internalized the meanings of anthropology. They know how to observe.

We know how to code. We know our frameworks. We learned those frameworks during our education and careers as developers. But, if I stood there and told an anthropologist to code, they’d code, well, sort of. Likewise, we observe, sort of.

I think it’s the product camp rule against selling that gives rise to this glossy advice to observe. The real advice should be: hire an ethnographer. Here is how. Here is what to expect. Here is how to specify the project. Here is how to use the results. The Austin presenter shrugged off my question and never answered it. Another product manager asked me if he had answered the question. That product manager wanted to know.

During those tweet chats I laid out a long and very code- and industry-specific argument about why market segmentation generates average functionality. Meaning is specific, cannot be traded off, and is seemingly expensive, contrary to the goals of software engineering and even requirements elicitation: reducing developer costs. I said seemingly expensive, but that was then, back when software engineering was a hot topic, requirements elicitation likewise, and developers were expensive. Today, developers are not costly, or certainly are not the cost drivers they used to be. No, today we pick our cost. We can get code for free. We can let someone else develop a critical component and leverage it even if we have no resources to do it ourselves. We can pay for it, but only as much as we want to pay. Methods and architectures, like Aspect-Oriented Programming, have come into practice as well that make mass customization around meaning possible. Other benefits of these architectures make this mass customization around meaning highly beneficial on the business case side of things.

I argue with the notions of integration apps and knowledge management techniques that negotiate meaning away. Sorry, meaning is important. Meaning is key. This thread of the argument demonstrates how much meaning is traded away, and how the most data centric of enterprise users actually have little to say about the data and functionality they actually get out of a development process.

I argue with the notion usually expressed as “It’s the Interface, Stupid.” Sorry, no. Meaning is in the model and the view, but the interface stops at the view. Fixing the interface is not enough. And, interfaces can be fixed, so that they serve the averaged populations defined by market segmentation.

Very early in my career, I came across Gartner’s efforts to define the Total Cost of Ownership (TCO). They defined a category called negative use costs, into which all non-productive interactions with an application were placed: self support, user time spent on tech support calls, reading manuals, desktop training, and compensating efforts for holes in the application. Later, as an ITIL change manager, I spent hours reworking data to get the answers I needed. No, we were not getting applications consistent with the meaning of the work we did. Gartner ditched this cost category, but these negative use costs, estimated at 15% of operational costs, demonstrated just how much our applications wasted our time, because of the gap between meaning and doing.

Today, we encode doing, rather than seeing doing as the ritual centered around the meanings that a user population is wrapped in. We encode meaningless doing. We ask users and customers about meaningless doing and call that observing. We can do better.

I run across the same issues tied up in the data quality movement. Sorry, data quality is independent of the meaningfulness of the data. Some tweeter on data quality actually blogged about the need for fitness to use. Yes!

Back in the #aopm chats, I came to use the term “meaning fitness.” In the past, I came to Saul Wurman’s notion of information design as the act of increasing the fitness of use of the underlying data. That has very little to do with the graphic design that information design now finds itself being eased into–the difference erased.

It was in Kimball’s book on real-time data warehouses where I came across the idea that the data users would negotiate the meanings of the data items in the data warehouse. Kimball expressed it simply as

Negotiating Away Meaning, The Simple Case


But, Kimball is assuming that there are only two parties, and that those parties are equals in their ability to negotiate and get what they want. Negotiation class taught me otherwise.

That negotiation class presented me with a research-based framework, which showed who won, who lost, and by how much. Negotiation power classifies populations into those that win 55% of the time, those that win 35% of the time, those that win 8% of the time, and those that win 2% of the time. Putting names on these groups: business unit execs (55%), functional unit managers (35%), functional unit workers (8%), and the deep data geek (2%). Ouch. The deep data geek ends up with data skewed toward generalists, so they must do tons of compensating work just to do their jobs, the jobs that were supposed to be automated.

A product manager can catch these compensatory efforts in the customer’s request for export to and import from Access and Excel, the tools used to make the necessary compensations to bring the data back to meaningfulness.

So why didn’t negative use costs become an issue? Gartner found no data to support the existence of these costs. Yes, we incurred them. The August issues of most IT periodicals for the decade before the Web were full of stories about how IT was not delivering the expected productivity improvements. CIOs knew intuitively what numbers could not tell them. Negative use costs were invisible, implicit, tacit. Negative use costs couldn’t be managed or eliminated. They simply embedded themselves in the cost structure of the organization, where they live forever–a cost virus, increasing the labor costs of the functional units while staying off the IT budget.

Well, CIOs are now dealing with this. But, product managers think nothing of pushing costs off on customers. Hell, the customer doesn’t see those costs. The Web and the Cloud won’t change this.

Ethnography is the way out. Building applications that recognize meaning and don’t trade it off is the way out. But, that is just the cost argument. There is a revenue and profit argument. Mass customization around meaning means reaching more customers, enlarging our markets, without vastly increasing the cost structures of the vendor, us.

Enjoy. Comments?