
Ito Processes

October 30, 2019

Markov processes make decisions based only on the current situation. They do not remember how they got where they are. They have no memory.

Ito processes make decisions based on the current situation and a finite past. How much past is up to you. They remember a fixed amount of how they got where they are. Memory here is a parameter.

Markov processes are Ito processes that have memory = 0.
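
Here is a minimal sketch of that memory-as-a-parameter idea, in Python. The decision rule, the drift toward the remembered average, and the step sizes are all illustrative assumptions on my part, not a definition of an Ito process.

```python
import random

def step(history, memory):
    """Take one step using only the last `memory` states.
    With memory == 0 the decision depends on the current state alone,
    so the process is Markov."""
    window = history[-memory:] if memory > 0 else []
    current = history[-1]
    # Toy decision rule: a random move plus a drift toward the remembered average.
    target = sum(window) / len(window) if window else current
    return current + random.choice([-1, 1]) + 0.1 * (target - current)

def run(memory, steps=100, start=0.0):
    history = [start]
    for _ in range(steps):
        history.append(step(history, memory))
    return history

markov_path = run(memory=0)    # no memory: a plain Markov walk
with_memory = run(memory=5)    # remembers its last five states
```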

A link on Twitter took me to an article on Quanta Magazine’s website, Smarter Parts Make Collective Systems Too Stubborn. The researchers were trying to find out how much memory is too much, in the Ito process sense of memory. In a more human sense, memory would be cognitive load. Once upon a time, the rule of thumb was 7±2 items.

The article used an illustration to summarize the research and its findings. The outer circle is where the processes started. The numbered circles represent each trial. The numbers in the circles tell us how much memory the associated process had. The goal is denoted by the star at the center of the concentric circles. I annotated the star with a red zero. I also numbered each circle from the center outward. There are six concentric circles.

Then I turned the diagram into a table and annotated it in terms of the shape of the distribution. The distribution exhibits a short tail and a long tail, so it is skewed. Skew implies that the distribution is asymmetric, and asymmetry implies that the space of the distribution is hyperbolic. The grey and red boxes indicate the results of each trial. The star, or zero, was approached but never reached. The red box is a median, not a mean. The short tail has a mode associated with it, and the mean lies on the long-tail side of the distribution.
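
As a small numeric illustration of that claim, a right-skewed sample puts the mode on the short-tail side and pulls the mean toward the long tail, with the median in between. The sample below is made up for illustration; it is not the trial data.

```python
from statistics import mean, median, mode

# Hypothetical right-skewed sample, standing in for the trial results.
sample = [1, 2, 2, 2, 3, 3, 4, 5, 8, 13]

print(mode(sample))    # 2   -> on the short-tail side
print(median(sample))  # 3.0 -> in between
print(mean(sample))    # 4.3 -> pulled toward the long tail
```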

The table tells us that the ideal cognitive load is 5 elements. The table demonstrates a saddle point at 5 elements. Performance decays beyond 5 elements. This is typical of what happens when the optimal value, the value of the game, is exceeded.

The results of the reported research are contrary to earlier findings known as the wisdom of crowds. The article sees memory values exceeding 7 elements as being too uncorrelated. We typically keep our batch sizes small, so the process maintains its correlations.

The diagram and the table show that the memory parameter was varied from 1 to 13 elements. Machine learning does hill climbing. It would have discovered that peak performance was achieved when the memory parameter was set to 5 elements.
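
A minimal sketch of that hill climb, in Python. The scoring function trial_performance is a stand-in that peaks at 5 elements to mimic the table; it is not the researchers’ simulation.

```python
def hill_climb(score, start=1, low=1, high=13):
    """Greedy hill climbing over an integer parameter.
    Moves to a neighboring value as long as doing so improves the score."""
    current = start
    while True:
        neighbors = [n for n in (current - 1, current + 1) if low <= n <= high]
        best = max(neighbors, key=score)
        if score(best) <= score(current):
            return current
        current = best

# Stand-in performance curve that peaks at a memory of 5 elements.
def trial_performance(memory):
    return -(memory - 5) ** 2

print(hill_climb(trial_performance, start=1))  # 5
```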

In some other reading, a formerly Boolean classification was parameterized and widened greatly. As an innovation, parameterizing a formerly unparameterized phenomenon would be a continuous innovation until new theory was needed. There can be multiple parameterizations, each with its own logic and its own ontology. Each such parameterization is a standalone construct. They can be discontinuous innovations.

The concentric circles are indicative of a set of Fourier transforms. Each circle can be thought of as a waveguide filtering out the frequencies that would not fit inside that circle.
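
A rough sketch of that filtering idea, assuming NumPy. The signal and the cutoff of 10 Hz are arbitrary illustrations of “frequencies that fit inside the circle”; they are not taken from the article.

```python
import numpy as np

# A signal with a low-frequency and a high-frequency component.
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + np.sin(2 * np.pi * 40 * t)

# Keep only the frequencies that "fit inside the circle", i.e. below a cutoff.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(256, d=1 / 256)
spectrum[freqs > 10] = 0           # the cutoff plays the role of the circle's radius
filtered = np.fft.irfft(spectrum)  # only the 3 Hz component survives
```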

The technology adoption lifecycle is likewise a set of Fourier transforms. Each phase limits the cognitive load of the organization in that phase. Some work done in a specific phase is specific to that phase. Some business processes work in a phase. Others do not.

For prospects, customers, and clients, task sublimation is the primary organizer of phases. Different populations and different markets require different task sublimations. Different populations will exhibit different cognitive loads. Don’t assume that easy is what is needed. Remember that the star, the goal in the diagram, was never reached. It was only approached.

Parameterize the problem. Watch as each parameter approaches, converges, and then diverges. The goal might be beyond the value of the game. Watch performance improve and later degrade.

Enjoy.