Earlier this week, someone tweeted that making a decision was knowing. Decisions get encoded as IF…THEN… rules, which in turn serve as rules in inferential systems. Back in the late '80s, I attended a hypertext conference where some MCC researchers working on program proving defined requirements as decisions: all requirements are decisions, not just the decisions made by the application. Deciding was knowing.

In all the startups and IT shops I've worked in, I've always been amazed at the claims about how few development projects succeed, because I've only seen one development effort fail. That failure was caused by an ambiguity that was deferred until later in the project. Six months later, when some clarifying decisions needed to be made, surprise: we had just wasted 2.5 man-years on a cancellation. Not deciding was not knowing, but eventually the decision arrived, got made, understanding was established, and deciding became knowing.

Decisions were made and as a result we know.

Deciding is Knowing.

In the figure, we are making decisions about the world. We have a budget. We have a system. We might use syndicated data, but someone has built a decision support pipeline from the world to the view we base our decision on. A decision support system consists of one or more sensors, a fusion process that combines sensed data into a coherent summary, and a view where the decider uses the data to make their decision. A sensor may use an illuminator when something cannot be sensed directly, like an electric eye used as a customer counter, where you count the beam interruptions and assert that a pair of interruptions represents a customer. Finally, a decision is made. A fact is asserted by the decision. That fact might not be based on the underlying data provided by the decision support network. That fact might be an assumption. It is the decision that establishes the facts. The decision support network provides the justification, particularly where we are being irrational, emotional, or irrelevant in the face of a yet-to-be-discovered invariant, or making a decision under time pressure before the supporting data is available.
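The sensor, fusion, view, decision pipeline described above can be sketched in a few lines. This is only an illustrative model built around the electric-eye customer counter from the text; the function names and the threshold in the decision are my inventions, not part of any real system.

```python
# A minimal sketch of the sensor -> fusion -> view -> decision pipeline.
# The electric-eye customer counter is the example from the text;
# all names and thresholds here are illustrative, not a real API.

def sense(beam_events):
    """Sensor: each True is one interruption of the illuminator's beam."""
    return [e for e in beam_events if e]

def fuse(interruptions):
    """Fusion: assert that a pair of interruptions represents one customer."""
    return len(interruptions) // 2

def view(customer_count):
    """View: the summary the decider actually looks at."""
    return f"{customer_count} customers today"

def decide(summary):
    """Decision: the decision, not the data, establishes the fact."""
    customers = int(summary.split()[0])
    return "extend store hours" if customers > 100 else "keep current hours"

events = [True, True] * 120  # 240 interruptions -> asserted as 120 customers
print(decide(view(fuse(sense(events)))))
```

Note that the pairing rule in `fuse` is itself an assertion, not a sensed fact: the decision support network counts interruptions, and the decision that two interruptions equal one customer establishes the fact.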

Note that the figure hints at a lack of density in our collection of underlying data. The figure also suggests that our fusion processes could be implemented by factor analysis, which discovers factors in the data rather than having them asserted by decision makers or fusion process designers. Factor analysis finds factors and organizes them into hierarchies by finding classification factors. In that sense, factor analysis works much like machine learning.

When I read about data warehouses in the past, facts were described as being like the flags that define the course in a downhill skiing race. I'm not sure that's accurate, but facts have a density, and facts go through a lot before we use them to make decisions.

Making a decision is knowing.

We make many decisions offline. We make decisions based on statistical research. We collect data, code it, and summarize it with statistics like a mean and a standard deviation. We contextualize data as normal curves. If we don't do these things ourselves, our researchers do them for us. The research process still involves creating and deploying sensors and fusing the data into summaries or inferences. Ultimately, we do research to make decisions: to decide what is reality, or to decide what future we intend to be ready for, so that we, our team, our stakeholders, and all those reliant upon our dependencies know. We make a decision to know. We do not make a decision to be correct.
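That offline summarization step can be sketched directly. The observations below are invented for illustration; the point is that the mean and standard deviation are metadata about the data, and the normal density is the curve we use to contextualize it.

```python
# Sketch of the offline research step: collect data, code it, summarize
# it with a mean and standard deviation, contextualize it as a normal
# curve. The observations are invented for illustration.
import math
import statistics

observations = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2]

mu = statistics.mean(observations)
sigma = statistics.stdev(observations)  # sample standard deviation

def normal_pdf(x, mu, sigma):
    """Density of the normal curve used to contextualize the data."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# The mean, the standard deviation, and the assertion of normality are
# all metadata; the decision is made against this summary, not the raw data.
print(f"mean={mu:.2f} stdev={sigma:.2f}")
```

The decision maker then works with `mu` and `sigma`, two numbers standing in for the whole collection of sensed values.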

Starting with a decision support system, we sense the world, we aggregate our data, we summarize our data into a normal distribution, and then we bring our data and metadata, our inferences, into our world as we make decisions in that world.

Using Research to Know.

This figure illustrates the processes we use as we research our world, so we can make decisions and establish the facts of our world. The blue elements in the figure highlight something I found surprising, as it was never explained to me this way: where a distribution's function converges to its limit, it is treated as if it ceases to exist. Distributions typically converge to a limit. So I've indicated in the figure where the function describing the distribution exists and where it effectively does not. The red line, labeled as metadata, indicates the mean. Had I included a standard deviation in the figure, it would also have been red. The distribution is normal. That, too, is metadata.

I also indicated the presence of codecs. When you encode something, you can do things with that something that you couldn’t do before. You can keep a secret for a defined period of time. You can transmit it. You can search it faster than if you searched that something in its raw state. You might have to decode it to use it. A codec is the encode and decode pair. A codec creates spaces.
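A minimal codec is just such an encode/decode pair. The sketch below uses zlib compression as a stand-in: the encoded form occupies a different space, one that is cheaper to store or transmit, and must be decoded before the raw text is usable again. The function names are mine, not from any particular system.

```python
# A codec is an encode/decode pair. Here the "space" the codec creates
# is the space of compressed byte strings: cheaper to store or transmit,
# but the content must be decoded before it can be used again.
import zlib

def encode(text: str) -> bytes:
    return zlib.compress(text.encode("utf-8"))

def decode(blob: bytes) -> str:
    return zlib.decompress(blob).decode("utf-8")

message = "a customer is a pair of beam interruptions " * 20
blob = encode(message)

print(len(message), len(blob))   # the encoded form is much smaller
assert decode(blob) == message   # the pair round-trips losslessly
```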

Codecs are amazing things. In the book The Box, Marc Levinson talks about how container shipping changed the world by encoding and decoding content in a different manner than traditional shipping. Notice I said encoding and decoding. Container shipping altered the way geography encoded and decoded the conduct of commerce. Container shipping was a protocol, and protocols inherently are codecs.

In the figure, the decision support system is a codec. The parameters of a normal curve are created via a series of nested codecs. The normal curve is a codec. And, when we use statistical data, we are bringing encoded, filtered, constrained data into our decisions. But, luckily, we decide to know, so we float on top of codecs. And, once we know, we implement policies via other codecs. Once something is digital, it is rich with codecs.

The following figure illustrates the nested codecs through which we perceive our world and make decisions about that world.

This figure shows codecs nested within each other as each contributes to decisions made by people and organizations.

Nested Codecs Towards Decisions

The elements of a decision support system each encode or produce some signal or data, so it can be consumed by the next element in the system. These nested codecs describe a process consisting of producer-consumer pairs. While we operate as consumer to our research, data, and inference providers, we, in making decisions, are producers. We produce knowing.
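One way to sketch that nesting, with hypothetical stages standing in for the figure's elements: each codec in the chain encodes its output for the next consumer, and decoding unwinds the nest in reverse order.

```python
# Nested codecs as producer-consumer pairs. Each stage is an
# (encode, decode) pair; the stages here are illustrative, not the
# author's figure.
import base64
import json

stages = [
    (lambda o: json.dumps(o), lambda s: json.loads(s)),             # object -> text
    (lambda s: s.encode("utf-8"), lambda b: b.decode("utf-8")),     # text -> bytes
    (lambda b: base64.b64encode(b), lambda b: base64.b64decode(b)), # bytes -> transmissible bytes
]

def encode_all(obj):
    for enc, _ in stages:  # producers: each encodes for the next consumer
        obj = enc(obj)
    return obj

def decode_all(blob):
    for _, dec in reversed(stages):  # consumers unwind the nesting
        blob = dec(blob)
    return blob

fact = {"customers": 120, "decision": "extend hours"}
wire = encode_all(fact)
assert decode_all(wire) == fact  # the nesting round-trips
```

The decider sits at the outermost layer, consuming what the chain produces and producing knowing.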

Codecs, Decision Support System components, and the producer-consumer chain


In this figure, the previous figure has been modified to indicate the decision support components in red, and to correlate the producer-consumer representation with the nested codecs.

A critical issue is whether we have defined our data and systems so that they serve the decisions they will drive. We should be careful not to become victims of the decision support systems we use, whether ours, IT's, or those of external research organizations. They, too, are making decisions to know.

Christensen pointed out in his books that the net present value calculation was needlessly killing discontinuous, potentially disruptive innovation. That calculation and its users were encoding a world in a particular way, and their decisions not only knew, but became self-fulfilling prophecies that essentially made a world. Calculations are codecs, spaces, worlds.
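Christensen's point is visible in the arithmetic itself. A sketch, with invented cash flows and an invented 15% hurdle rate: a disruptive bet that pays off only in year ten is valued below a sustaining project with smaller but immediate returns, even though its nominal payoff is larger.

```python
# Net present value: each cash flow is discounted by (1 + rate)^year.
# The cash flows and the 15% hurdle rate are invented for illustration.

def npv(rate, cashflows):
    """Present value of yearly cash flows, starting at year 1."""
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cashflows, start=1))

sustaining = [100] * 5        # 500 nominal, paid out immediately
disruptive = [0] * 9 + [800]  # 800 nominal, but nothing until year 10

rate = 0.15
print(round(npv(rate, sustaining), 1))  # 335.2: the codec favors this
print(round(npv(rate, disruptive), 1))  # 197.7: bigger payoff, lower NPV
```

The calculation encodes time into value in a way that systematically discounts the slow-building payoff, which is how a codec makes a world.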

Sometimes others decide for us. In those instances, we might not know. But we decide to know. And in our power to decide lies our power to construct a world, to be a codec.

Comments please! Thanks!

