Archive for March, 2018

A Few Notes

March 20, 2018

Three topics came up this week. I have another statistics post ready to go, but it can wait a day or two.

Immediacy and Longevity

I crossed paths with a blog post, “Content Shelf-life: Impressions, Immediacy, and Longevity,” on Twitter this week. In it, the author talks about the need for a timeframe that deals with both the rapid immediacy and the longevity of a product.

When validating the Agile-developed feature or use case, achieving that validity tells us nothing about the feature or use case in its longevity. When we build a feature or use case, we move as fast as we can. The data is Poisson. From that, we estimate the normal. Then, we finally achieve a normal. Operating on datasets, instead of time series, hides this immediacy. Once that normal is achieved, we engage in statistical inference while continuing to collect data across the longevity. That continued data collection might invalidate our previous inferences. We have to keep our inferences on a short leash until we achieve a high-sigma normal, one big enough that it stops moving around and its radius stops shrinking.
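That progression from Poisson immediacy to a settled normal can be sketched numerically. This is a minimal illustration, not anyone's production method: the arrival rate of 4 and the sample sizes are invented values, and the interval uses the usual normal approximation.

```python
import math
import random

def running_interval(samples):
    """Return (mean, half-width of a rough 95% interval) for the data so far."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / max(n - 1, 1)
    half = 1.96 * math.sqrt(var / n)  # normal approximation to the Poisson mean
    return mean, half

def poisson(lam):
    """One Poisson draw via Knuth's multiplication method (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

random.seed(42)
lam = 4.0  # assumed "true" rate, for illustration only

data = [poisson(lam) for _ in range(1000)]
early_mean, early_half = running_interval(data[:30])   # the immediacy
late_mean, late_half = running_interval(data)          # the longevity

# Early on, the interval around the estimated mean is wide -- the normal is
# still "moving around." With more data it tightens, and inferences made
# against the early, wide interval may no longer hold.
```

The point is the leash: the early interval is several times wider than the late one, so any inference made in the immediacy is provisional until the data accumulates.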

In the geometry sense, we start in the hyperbolic, move shortly to the Euclidean, and move permanently into the spherical. The strategies change, not the user experience. The user population grows. We reach the longevity. More happens, so more affects our architectural needs. Scale chasms happen.

The feature in its longevity might move the application and the experience of that application to someplace new, distant from the experience we created back when we needed validity yesterday, distant from the immediacy. The lengthening of tweets is just one example. My tweet stream has gotten shorter. That shortness makes Twitter more efficient, but less engaging. I’m not writing so many tweets to get my point across. There is less to engage with.

This longer-term experience is surprisingly different. In the immediacy, we didn’t have the data to test this longest-time validity. Maybe we can Monte Carlo that data. But how would we prevent ourselves from generating more of that immediacy data in bulk, data that won’t reflect the application’s travel across the pragmatism gradient?

The lengthening of the tweets probably saved them some money because they didn’t have to scale up the number of tweets they handled. They take up more storage, but no more overhead, a nice thing if you can do it.

Longest-Shortest Time

Once the above tweet took me to the above post on the Heinz marketing site, I came across the article, “The Longest Shortest Time,” there. The daily crises make a day long, but the days disappear rapidly in retrospect. The now, the immediacy, is hyperbolic. The fist of a character in a cartoon is larger due to foreshortening. Everything unknown looks big when we don’t have any data. But, once we know, we look back. Everything is known in retrospect. Everything is small in retrospect. Everything was fast. That foreshortened view was fleeting. The underlying geometry shifted from hyperbolic to Euclidean as we amassed data and continues to shift until it is spherical. The options were less than one, then one, then many.

Value in the business sense is created through use. Value is projected through the application over time into the future from the past, from the moment of installation. That future might be long beyond the deinstall. The time between install and deinstall was long but gets compressed in retrospect. The value explodes across that time, the longest time. Then the value erodes.

In the even longer time, all becomes but a lesson, a memory, a future.

Chasm Chatter

This week there were two tweets about how the Chasm doesn’t exist. My usual response to chasm mentions is just to remind people that today’s innovations are continuous, so they face no Chasm in the technology adoption lifecycle (TALC) sense. They may face scale chasm during upmarket or downmarket moves. But, there are no Chasms to be seen in the late phases of the TALC, the phases where we do business these days.

Moore’s TALC tells us about the birth and death of categories. Anything done with a product in an existing category is continuous. In this situation, the goal is to extend the life of the category by any means, innovation being just one of the many means. VCs don’t put much money here. VCs don’t provide much guidance here. And, VCs don’t put much time here either. The time to acquisition is shrinking. Time to acquisition is also known as the time to exit. In the early phases, all of that was different.

Category birth is about the innovator and those within three degrees of separation from the innovator. That three degrees of separation is the Chasm. It’s about personal selling. It’s not about mass markets. It’s about a subculture in the epistemic cultural sense. It’s a few people in the vertical, a subset of an eventual normal. It’s about a series of Poisson games. It’s about the carried content. The technology is underneath it all, but no argument is made for the technology. It isn’t mentioned. The technical enthusiasts in the vertical know the technology, but the technology explosion, the focus on carrier is in the future. It is at least two years away and as much time will pass as needed. But, the bowling alley means it is at least seven years away.

Then comes the early mainstreet/IT horizontal. The tornado happens at the entrance. Much has to happen here, but this is a mass-market play.

After the horizontals, the premium on IPOs disappears. We enter the late phases of the TALC where innovation becomes continuous and no new categories are birthed. This is the place where people make errant Chasm crossing claims. This is where all the people claiming there is no Chasm have spent their careers, so no, they never saw a Chasm. They made some cash plays. They were serial innovators with a few months on each innovation, rather than ten years on one innovation that did cross the Chasm. Their IPOs didn’t make them millionaires because there is no premium. The TALC is converging to its right tail. The category is disappearing. They cheer the handheld device, a short-lived thing, and they cheer the cloud, another even shorter-lived thing, the end of the category where the once celebrated technology becomes admin-free magic.

So yes, there is no Chasm. But, my fear is that we will forget that there is a Chasm once we stop zero-summing the profits from globalism and have to start creating categories again to get people back to work. Then, we will see the Chasm again. It won’t be long before the Chasm is back.






Nominals II

March 15, 2018

I left a few points out of my last post, Nominals. In that post, the right-most distribution presented me with a line, rather than a point, when I looked for the inflection point between the concave-down and concave-up sections of the curve on the right side of the normal distribution.

A few days after publishing that blog post, it struck me that the ambiguity of that line had a quick solution tied to the fact that the distance between the mean and that inflection point is one standard deviation. All I had to do was drop the mean from the local maximum at the peak of the nominal and then trisect the distance between that mean and the distribution’s point of convergence on the right side of that nominal’s normal distribution.
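That shortcut can be checked numerically. The inflection point of a normal density sits exactly one standard deviation from the mean (the second derivative changes sign there), and the visible tail converges with the x-axis near three standard deviations out, which is why trisecting the mean-to-convergence distance recovers sigma. The mu and sigma values below are invented for the sketch.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution N(mu, sigma^2)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def second_derivative(f, x, h=1e-5):
    """Central finite-difference estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

mu, sigma = 2.0, 1.5  # assumed values for illustration
f = lambda x: normal_pdf(x, mu, sigma)

# Concavity flips exactly one sigma from the mean:
left = second_derivative(f, mu + sigma - 0.01)   # concave down: negative
right = second_derivative(f, mu + sigma + 0.01)  # concave up: positive

# The curve visually converges with the x-axis near mu + 3*sigma, so
# trisecting that distance recovers the standard deviation:
convergence = mu + 3 * sigma
estimated_sigma = (convergence - mu) / 3
```

So the geometric construction, dropping the mean and trisecting out to the point of convergence, is just this fact drawn with a straightedge.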

Backing out of that slightly, every curve has at least one local maximum and at least one local minimum. A normal distribution is composed of two curves, one to the right of the mean and another to the left. Each of those curves has a maximum-minimum pair, and the maximum at the mean is shared by both sides. A normal that is not skewed is symmetric, so the inflection points are symmetric about the mean.

01 min max IP

Starting with the nominals comprising the original distribution, I labeled the local maxima, the peaks, and the local minima, the points of convergence with the x-axis. Then, I eyeballed each line between the maxima and minima pairs to find the inflection point between each pair. Then, I drew a horizontal line to the inflection point on the other side of the normal. Notice that the skewed normal is asymmetric, so the line joining its inflection points is not horizontal. Next, I drew a vertical line down from the maximum of the normal distribution on the right. Then, I divided the horizontal distance from that maximum to the minimum on the right into three sigmas, or standard deviations. The first standard deviation let us disambiguate the inflection point on the right side of the distribution.

The standard normal is typically divided into six standard deviations, three on each side.
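The six-sigma convention reflects a standard fact: about 99.7% of a normal's mass lies within three standard deviations of the mean, which is why the curve appears to converge with the x-axis there. A quick check using the error-function form of the normal CDF:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Normal CDF expressed via the error function (a standard identity)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Probability mass inside +/- 3 sigma of the mean
inside = normal_cdf(3.0) - normal_cdf(-3.0)
# inside comes out near 0.9973 -- the familiar three-sigma rule
```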

02 IP

Here I’ve shown the original distribution with the rightmost nominal highlighted. The straight line on the right and the straight line on the left leave us unable to determine where the inflection point should be. My guess was at point A. The curvature circles of the tails did not provide any clarity.

I used the division method that I learned from a book on nomography. I drew a line below the x-axis and laid out three unit measures on it. Then, I drew a line from the point where the mean meets the x-axis, extending beyond the left side of the first unit measure. Next, I drew a line from the distribution’s point of convergence on the right side of the normal, extending beyond the right side of the third unit measure. The two lines intersect at point 3. The rest of the lines are projected from point 3 through the points defining the unit measures and on to the x-axis.
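That construction is a case of the intercept theorem: rays from a common point cut two parallel lines in the same ratios, so equally spaced marks on the ruler line project to equally spaced cuts on the x-axis. A coordinate sketch, with the endpoints and ruler position invented for illustration:

```python
def intersect(p1, p2, p3, p4):
    """Intersection point of the line through p1, p2 with the line through p3, p4."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

a, b = 2.0, 6.5    # mean and point of convergence on the x-axis (invented)
ruler_y = -1.0     # the line drawn below the x-axis
marks = [(0.0, ruler_y), (1.0, ruler_y), (2.0, ruler_y), (3.0, ruler_y)]

# "Point 3": where the line (a -> left end of ruler) meets (b -> right end)
p = intersect((a, 0.0), marks[0], (b, 0.0), marks[-1])

# Rays from p through each unit mark cut the x-axis at the trisection points
cuts = [intersect(p, m, (0.0, 0.0), (1.0, 0.0))[0] for m in marks]
# cuts lands on [a, a + (b-a)/3, a + 2*(b-a)/3, b], up to rounding
```

Because the ruler line and the x-axis are parallel, the projection preserves the ratios of the unit measures, so the segment from the mean to the point of convergence is divided into three equal standard-deviation-sized pieces without any arithmetic on the drawing.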

Where the lines we drew intersect with the x-axis, we draw vertical lines. The vertical line through the mean or local maxima is the zeroth standard deviation. The next vertical line to the right of the mean is the first standard deviation. The standard deviation is the unit measure of the normal distribution. The vertical lines at the zeroth and first standard deviation define the width of the standard deviation. The vertical line demarking the first standard deviation crosses the curve of the normal distribution at the inflection point we were seeking. The point B is the inflection point. We found the standard deviation of the rightmost normal without doing the math.

I put a standard normal under the rightmost normal to give us a hint at how far our distribution is from the standard normal. At that height, our normal would have been narrower. The points of convergence of our normal limit the scaling of the standard normal. A larger standard deviation would have had tails outside our normal.

03 Added Standard Normals

Here I’ve shown the six standard deviations of the standard normal. I also rescaled standard normals to show how a dataset with fewer data items would be taller and narrower, and how a dataset with more data items would be shorter and wider. The standard normal with fewer data elements could be scaled to better fit our normal distribution.

In the original post, I wondered what all the topological torii would have looked like. I answered that question with this diagram.

03 Torii