Implicit Knowledge

One of the distinctions I’ve been making out on Twitter is the difference between what I call fictional and non-fictional software. We get an idea. We have to ask the question: do users actually do this today without our software? If the answer is “No,” we get to make up how it is to be done. The user tasks are a blank whiteboard. That’s fictional software. But most of the time, the answer is not “No.” In that case, the software is non-fictional, so we need to do an ethnography and find out exactly how the user does it, and what cognitive model underlies their thinking while they do what they do. In non-fictional software, neither the developer nor the UX designer is free to make things up.

Yesterday, I read “Usability Analysis of Visual Programming Environments: a ‘cognitive dimensions’ framework.” The author, a UX designer, makes some statements that clarified for me that UX design as practiced today, particularly by this designer, is fictional. Tasks exist before they are designed. Tasks exist before they are digitized by programmers. This isn’t new. Yahoo built a search engine without ever looking at existing search engines or asking library science practitioners how to do it. Yahoo made it up and then discovered many of the findings and practices of library science practitioners later. That is to say, they approached, progressed towards convergence with, the user’s real cognitive model of the underlying tasks. There is still a gap.

Agile cannot fix those gaps in non-fictional software. It can only approach and narrow the gap between the bent cognitive model users apply as users and the real cognitive model they learned eons ago in school. That learning was explicit with a sprinkling of implicit. The implicit does not get captured by asking questions, talking, observing, or iterating. With any luck, a trained observer, an ethnographer with their observational frameworks, can observe and capture that implicit knowledge.

[Figure: iteration-gap]

A Rubik’s Cube can serve as an example. When solving a cube, we explore the problem space, a tree, with a depth-first search. We can use simple heuristics to get close. But then we stop making progress and start diverging away from the solution. We get lost. We are no longer solving. We are iterating. We are making noise in the stochastic sense. We stop twisting and turning. We look for a solution on the web. We find a book. That book contains “the hint,” the key. So after a long delay, we reset the cube, use the hint, and solve the cube. A sketch of this search behavior follows below.

[Figure: diverge-converge-delay]
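To make the get-close-then-stall dynamic concrete, here is a minimal sketch of that search behavior. Everything in it is an illustrative assumption: a toy puzzle (sorting a scrambled string by adjacent swaps) stands in for the cube, and a misplaced-letter count stands in for whatever heuristics a solver actually uses.

```python
def heuristic(state, goal):
    """Count misplaced positions -- a crude 'distance from solved'."""
    return sum(1 for a, b in zip(state, goal) if a != b)

def moves(state):
    """Generate successor states: every adjacent swap."""
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield "".join(s)

def dfs(state, goal, depth, seen):
    """Depth-first search with greedy move ordering and a depth cutoff."""
    if state == goal:
        return [state]
    if depth == 0 or state in seen:
        return None
    seen.add(state)
    # Try the most promising-looking successors first.
    for nxt in sorted(moves(state), key=lambda s: heuristic(s, goal)):
        path = dfs(nxt, goal, depth - 1, seen)
        if path:
            return [state] + path
    return None

goal = "ABCDE"
print(dfs("BACDE", goal, depth=3, seen=set()))  # easy scramble: solved
print(dfs("EDCBA", goal, depth=3, seen=set()))  # hard scramble: None -- stuck
```

With a shallow cutoff, the greedy ordering solves the easy scramble immediately and fails on the hard one, which is the point: simple heuristics get you close, then you stall until you find the hint.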

We joined the epistemic culture, or what I was calling the functional culture, of the cube. We are insiders. We solve the cube until we can do it without thinking, without the search struggles, and without remembering the hint. The explicit knowledge we found in that book was finally internalized and forgotten. The explicit knowledge was made implicit. If a developer asks how to solve the cube, the user doesn’t remember and cannot explicate their own experience. They cannot tell the developer. And that would be a developer that wasn’t making it up, or fictionalizing the whole mess.

All domains contain and find ways to convey implicit knowledge. The Rubik’s Cube example was weakly implicit, since it had already been explicated in that book. Weakly implicit knowledge is a problem of insiders who have been exposed to the meme and outsiders who have not. Usually, those that get it teach those that don’t. Insiders teach outsiders. In other domains, implicit knowledge remains implicit but does get transferred between people without explication. Craft knowledge is implicit. Doing it, or practice, transfers craft knowledge in particular, and implicit knowledge generally.

Let’s be clear here: that generalist 101 class in the domain that you took back in college did not teach you the domain in the practitioner/expert sense. You/we don’t even know the correct questions to ask. I took accounting. I’m not an accountant. It was a checkbox, so I studied it as such. A few years after that class, I encountered an accounting student and his tutor. The student was buying some junk food at the snack bar. The tutor asked him what accounts were affected by that transaction. That tutor was an insider. The student was working hard to get inside.
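For the curious, here is a minimal sketch of the answer the tutor was fishing for, in double-entry terms. The account names and the amount are hypothetical; the point is the insider reflex of mapping an everyday purchase onto debits and credits.

```python
# Illustrative only: the account names and the amount are made up.
journal_entry = {
    "description": "Bought junk food at the snack bar",
    "debit":  {"account": "Meals Expense", "amount": 3.50},  # expense increases
    "credit": {"account": "Cash", "amount": 3.50},           # asset decreases
}

# The double-entry invariant: total debits equal total credits.
assert journal_entry["debit"]["amount"] == journal_entry["credit"]["amount"]
```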

For anyone that will ever be a student of anything, there is no such thing as a checkbox subject. Slap yourself if you think so. Dig into it. Boredom is a choice, a bad one. You’re paying a lot of money, so make it relevant: learn to think like an insider.

Recently, a machine beat a highly-ranked human at Go, a game not amenable to the generate-the-move-space-and-prune-it-with-heuristics approach that works for Chess. The cute thing is that the machine learned how to beat that human by finding the patterns. That machine was not taught explicit Go knowledge. That machine now teaches Go players what it discovered implicitly, transferring knowledge via practice and play. The machine cannot explain how to play Go in any explicating manner.
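For contrast, here is a minimal sketch of the chess-style approach that paragraph refers to: generate the move space, then prune it. This is plain minimax with alpha-beta pruning over a hand-built toy tree; the tree and leaf scores are illustrative assumptions, and real engines add a heuristic evaluation at a depth cutoff. Go defeats this approach because its branching factor, roughly 250 moves per position, leaves too much space to prune.

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax value of `node`, pruning branches the opponent won't allow."""
    if isinstance(node, (int, float)):       # leaf: a static evaluation score
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:                       # generate the move space
        val = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if beta <= alpha:                    # prune the rest of this subtree
            break
    return best

# Toy two-ply game tree: inner lists are opponent (min) nodes.
print(alphabeta([[3, 5], [2, 9], [0, 1]], maximizing=True))  # -> 3
```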

Requirements elicitation has been one of my lifetime interest/learner topics. Several years ago, I came across a 1996 paper on requirements elicitation. Biases were found. The elicitor assumed the resulting system would be consistent with the current enterprise architecture, and let that architecture guide the questions put to users and customers, their bosses. That biased set of requirements caused waterfall development to fail. But Agile does not even try to fix this. There will always be that gap between the user’s cognitive model and the cognitive model embedded implicitly in the software. UX designers like the author of the above paper impose UX without regard to the user’s cognitive model as well. I have found other UX designers preaching otherwise.

So the author of the above paper takes a program that already embeds the developer’s assumptions, that already diverges from and fictionalizes the user’s non-fictional tasks, and further fictionalizes those tasks at the UX level. Sad, but that’s contemporary practice.

So what does this mess look like?

[Figure: dev-ui-induced-gap]

Here, we are looking at non-fictional software. The best outcome would end up back at the user’s conceptual model, so there was no gap. I’ve called that gap negative use costs, a term used in the early definition of the total cost of ownership (TCO). Nobody managed negative use costs, so there were no numbers, so in turn Gartner removed them from the TCO. Earlier, I had called it training, since the user who knew how to do their job now has to do it the way the developer and UX designer defined it. When you insert a manager of any kind into the process, you get more gap. The yellow arrows reflect an aggregation of a population of users. Users typically don’t focus on the carrier layer, so those training costs exist even if there were no negative use costs in the carried content.

As for the paper that triggered this post, “cognitive” is a poor word choice. The framework does not encode the user’s cognitive map. The framework is used to facilitate designer-to-manager discussions about a very specific problem space: users writing macros. Call it programming and programming languages if you don’t want your users to do it. Still useful info, but the author’s real pitch is about who gets to be in charge. The product manager is in charge. Well, you’ll resolve that conflict in your organization. You might want to find a UX designer that doesn’t impose their assumptions and divergences on the application.

 
