In my last post, The Grid, we looked at how grids imprison sequences. We discovered a discontinuity, a hole, among the sequences laminated into the larger sequence: the sequences of differences between z-score values. I called it out and left much unsaid. We’ll continue that discussion in this post.

In mathematics, we have holes in our graphs. We have holes in what each of us knows about math. In Algebra class, we’re restricted to the reals, so we’re told no solution exists. It turns out that many of those solutions are complex numbers, not reals. There are plenty of holes, potholes.

Then, we have asymptotes. We can approach them, but we can’t cross them with a function because they are manifolds, something that falls into that wide category of math we don’t know yet.

I remember stepping into a gopher hole. After that, I kept a close eye on the ground where my feet were stepping. One day a lieutenant colonel stopped his staff car so we could have a conversation about why I didn’t salute his staff car. “Gopher holes, sir.” Not that I had to worry; my colonel would have laughed the incident off. It was one of those days when the graph you live in has a few new nodes and the graph’s normal distribution changes.

The z-score sequence is directed from core to tail, away and toward. Oddly, humans use the same kind of dimension, technically half a dimension. We are 2.5-D beings, not 3D beings. But we round off dimensions for our mathematical convenience. If it’s not easy, it’s not math, easy being very relative. Consider that z-score sequence to be a vector. Consider the hole to accommodate another intersecting vector that, for the moment, we will consider orthogonal, or simply perpendicular.

Being orthogonal in statistics means that the vectors intersecting in that manner are independent, aka not correlated. The correlation between two vectors is the cosine of the angle between them, and the cosine of 90 degrees is zero, so orthogonal vectors are not correlated.
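A minimal sketch of that claim in Python. The two sequences here are hypothetical data, chosen only so that their mean-centered vectors meet at 90 degrees:

```python
import math

def correlation(x, y):
    # Pearson correlation: the cosine of the angle between
    # the two mean-centered vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    xc = [v - mx for v in x]
    yc = [v - my for v in y]
    dot = sum(a * b for a, b in zip(xc, yc))
    return dot / (math.sqrt(sum(a * a for a in xc)) *
                  math.sqrt(sum(b * b for b in yc)))

# Hypothetical sequences whose centered vectors are orthogonal.
x = [1, 2, 3, 4, 5]
y = [2, 0, 1, 0, 2]
print(correlation(x, y))  # 0.0: orthogonal, hence uncorrelated
```

The dot product of the centered vectors is exactly zero here, so the correlation is exactly zero, which is the cosine-of-90-degrees point made above.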

The vector passing through the hole in the z-score sequence has its own distribution. In the end, the data comprising that distribution will be added to the z-score sequence’s distribution. For now, that distribution is unknown, and like all unknowns constitutes a source of risk.

Now, we can imagine a flow through the subsequences. Imagine each layer as a pipe. That gives us some plumbing, aka some fluidics. No, I’m not going there tonight. But, I did draw it just to assess its probabilities. Of course, I ignored some of the subsequences. In modeling, you put in what you think is important and you leave out the rest.

Just for the Bayesian priors, s, t, and u all started with a probability of 0.50. That gave us the probability of st after the first mix: p(st) = 0.50 × 0.50 = 0.25. Then we dealt with the second mix, which had us adjusting the probabilities so they summed to 1.00, leaving us with p(st) = 0.333 and p(u) = 0.667. Oh, we’ve crossed an approximation boundary.
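The arithmetic above can be checked in a few lines of Python, assuming, as the first mix implies, that s and t are independent so their probabilities multiply:

```python
# Three priors, each starting at 0.50.
p_s = p_t = p_u = 0.50

# First mix: s and t combine (assumed independent), so probabilities multiply.
p_st = p_s * p_t               # 0.25

# Second mix: renormalize p(st) and p(u) so they sum to 1.00.
total = p_st + p_u             # 0.75; the rest of the mass was cut away
p_st_norm = p_st / total       # 1/3
p_u_norm = p_u / total         # 2/3
print(round(p_st_norm, 3), round(p_u_norm, 3))  # 0.333 0.667
```

The renormalization step is where the approximation boundary gets crossed: the exact values are 1/3 and 2/3, and the decimals are already rounded.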

I finally gave in to reading David Hand’s “The Improbability Principle.” Hand refers to Borel’s theorem about the impossibility of events with sufficiently small probabilities. Borel wanted us to understand that the total probability is p = 1 and not more than 1. It takes a while to get to the point. Borel is modeling via probability, so the impossible events are left out; but due to Borel’s theorem, we are assured that we can simplify the situation via omission and keep going, all things being logically consistent.

We are not leaving the hole out. Everybody else probably has left it out. It’s not in the z-score table screaming out to be seen. We stumbled across it with much labor. But, we will start with the vector being orthogonal. I took a top-down view for the next graphic.

Here we start at the global maximum of the z-score differences sequence, the axis of symmetry or rotation, on the left. The sequence runs to infinity somewhere off the page to the right. The hole appears in light blue. The hole is where the sequence vector intersects the orthogonal vector. The long-term mean will come to rest at the intersection. The r variable is the indicator of correlation. The angle between the sequence vector and the actual vector (shown in red), theta, illustrates a positive correlation. So the distribution will come to rest on the actual vector (red).

We started with a surprise unknown at the hole. Once discovered, we have to find its measure. So we assert the distribution’s existence. This has the effect of putting a Dirac function at the center of the distribution. With more data, we have a Poisson distribution. We can use that Poisson distribution to approximate the normal distribution until we have collected 30 or more data points. The figure is wrong, but I had to make the Poisson distributions large enough to show up. The Poisson distributions would still be inside or under the normal distribution. As the Poisson approaches the normal, the mean moves around until it settles at the core intersection, aka the mean as shown in the diagram, and the distribution would exhibit skewness and kurtosis.
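That Poisson-to-normal convergence can be sketched numerically, assuming the usual parameter matching of mean λ and standard deviation √λ; the λ of 30 here echoes the 30-data-point rule of thumb above:

```python
import math

def poisson_pmf(k, lam):
    # Probability of exactly k events for a Poisson with mean lam.
    return math.exp(-lam) * lam**k / math.factorial(k)

def normal_pdf(x, mu, sigma):
    # Density of the matching normal distribution.
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# With lam = 30, the Poisson pmf at each integer k already sits
# close to the normal density with mu = lam, sigma = sqrt(lam).
lam = 30
for k in (20, 30, 40):
    print(k, round(poisson_pmf(k, lam), 4),
          round(normal_pdf(k, lam, math.sqrt(lam)), 4))
```

For small λ the Poisson is visibly skewed, which matches the skewness and kurtosis the diagram is trying to show; as λ grows, both fade and the two curves merge.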

Here I show the evolution of that hole. The Dirac function generates a line at infinity, here labeled PE, as in potential energy. Potential energy is used here to hint at information physics. Strong writing on information physics puts potential energy as being position and not some form of energy, just a physics bookkeeping sleight of hand. Next, the Poisson distribution is generated along the line of positive correlation in its continuous form (blue line and blue area). Poisson distributions speak loudly to the myth of deregulation being valuable in a business. The constraint, here a policy constraint (gray), moves the probabilities stretching out to infinity and concentrates them into the histograms inside the constraint, which makes the business more focused and less costly. Beware of this myth. The constraint generates the higher histograms (red volumes with orange tops) in the discrete form and generates the higher curve (dark red), as opposed to the original curve (blue), in the continuous form. Constraints create value.

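The way a constraint concentrates probability can be sketched as a truncated, renormalized Poisson. The λ and the cap here are hypothetical, chosen only for illustration:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Model the policy constraint as truncation at k <= cap: the mass that
# stretched out toward infinity is renormalized into the surviving bins,
# so every remaining histogram bar gets taller.
lam, cap = 4, 6
kept = [poisson_pmf(k, lam) for k in range(cap + 1)]
total = sum(kept)                      # < 1.0: the tail was cut off
truncated = [p / total for p in kept]  # renormalized; sums to 1.0
for k, (p, q) in enumerate(zip(kept, truncated)):
    print(k, round(p, 4), round(q, 4))  # each truncated bar is taller
```

Every bar inside the constraint rises by the same factor, 1/total, which is the “higher histograms” effect in the figure.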

Last, the normal distribution reaches its equilibrium distant from the Poisson distribution on the timeline (gray). The normal has lost the directional sense that the Poisson distribution provided. The data is close in distance but spread out over time. The potential energy of the assertion that generated the Dirac signal flows down to the normal and beyond as the normal gets wider and loses height, aka becomes flat. The normal here is situated in Euclidean space. The Dirac and Poisson are situated in hyperbolic space. Beyond the normal shown, where the normal becomes flat, those normals find themselves in spherical space. Financial analysis as it is conducted today is carried out in spherical space. In that space, multiple analyses give good answers. In hyperbolic space, no analysis gives good answers.

Think of your data efforts as dynamic undertakings. Statistics uses the static view as the means to honest statistics; dynamics are prohibited. Statisticians take snapshots, but technology adoption is a dynamic proposition.

Standard normals hide much. All normal distributions look the same in the standard normal form. At times, seeing the real normal will tell us much.
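A small illustration of what standardization erases, using two hypothetical samples with very different location and scale:

```python
import math

def standardize(xs):
    # z = (x - mu) / sigma, using the population standard deviation.
    n = len(xs)
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)
    return [(x - mu) / sigma for x in xs]

narrow = [9.0, 10.0, 11.0]     # mean 10, small spread
wide = [-100.0, 0.0, 100.0]    # mean 0, large spread
print(standardize(narrow))
print(standardize(wide))       # same z-scores: location and scale are gone
```

Both samples standardize to the same z-scores, so anything that lived in the mean and the spread, the real normal, has been hidden.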
