The Mortgage Crisis

September 5, 2017

Last week, I came across another repetition of what passes for an explanation of the mortgage crisis. It claimed that the problem was the prevalence of low-quality loans. Sorry, but no. I’m tired of hearing it.

A mortgage package combines loans of all qualities and all risks. But, as an instrument that relies on a stochastic process, its composition must be random. Unfortunately, those mortgage packages were not random. This is their real failing. Mortgages are originated over time, and the packages were temporally organized, as in not random.

The housing boom was great for bankers up to the point where they ran out of high-quality loans. At that point, the mortgage industry looked around for ways to make lower-quality loans. Mortgage packages gave them the means. So when fifty loans got sold in a given week, the lender bundled them into one package. Some of those loans were refinancings for high-quality borrowers. Rolling other debts into the instrument improved the borrower’s credit, but didn’t do much for the mortgage package. Still, the averages worked out; otherwise, the packager threw a few of the pre-packaging, high-quality loans in there to improve the numbers. A few people had to make payments to their new mortgage holding company. Their problem.

But, the real risk was that all of the original fifty loans originated in the same week. They were temporally organized. That breached the randomness that stochastic systems require. That was the part of the iceberg that nobody could see. That’s the explanation that should be endlessly retweeted on Twitter.
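A minimal simulation of that point, with made-up numbers: give every origination week its own shock, let each loan’s risk load on its week’s shock, and compare packages built from a single week against packages sampled across weeks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_weeks, loans_per_week, n_packages = 200, 50, 200

# Hypothetical common shock per origination week: loans written in the same
# week share it, so their outcomes are correlated rather than independent.
week_shock = rng.normal(0, 1, size=n_weeks)

def package_default_rate(weeks):
    """Default rate of a package whose loans come from the given weeks."""
    idiosyncratic = rng.normal(0, 1, size=len(weeks))
    risk = 0.8 * week_shock[weeks] + 0.2 * idiosyncratic  # assumed loadings
    return (risk > 1.0).mean()                            # assumed default threshold

# Temporally organized: all fifty loans from one week, as in the post.
same_week = [package_default_rate(np.full(loans_per_week, w))
             for w in range(n_packages)]
# Random: fifty loans drawn from across the whole timeline.
randomized = [package_default_rate(rng.integers(0, n_weeks, loans_per_week))
              for _ in range(n_packages)]

print("spread of default rates, same-week packages :", round(np.std(same_week), 3))
print("spread of default rates, randomized packages:", round(np.std(randomized), 3))
# Same-week packages swing together -- some are fine, some blow up as a block.
```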

Why? Well, we are no longer living in a production economy. You can make money without production. You can make money from the volatility economy. You can make money off of puts and calls and packages of those. That allows you to make money off of your own failures to run a successful business. Just hedge. The volatility economy is a multitude of collections of volatility based on a stochastic system, the stock market. And, with the wrong lessons having been learned about mortgage packages, the regulators want to regulate mortgage packages and other stochastic systems. Or just make them flat-out illegal because they don’t know how to regulate them. I’m not against regulation. Constraints create wealth. I just see the need for stochastic systems.

Too many stories are wrong, yet endlessly repeated on Twitter. Kodak, …. 3M, …. There was only one writer covering Kodak who understood the real story. With 3M, their innovation story was long past and still being told when the new CEO gutted the much-cited program.

From the product manager view, where do stochastic systems fit in? The bowling alley is a risk package akin to a mortgage package. But, if you are an “innovative” company much-cited in the innovation press these days, don’t worry, your innovation is continuous. The only innovations showing up in the bowling alley are discontinuous. Likewise, crossing the chasm, as originally defined by Moore, was for discontinuous innovations. Those other chasms are matters of scale, rather than the behavior of pragmatism slices.

But, back on point, we engage in stochastic systems even beyond the bowling alley. A UI control has a use frequency. When it has a bug, that use frequency changes. Use itself is a finite quantity unless you work at making your users stay in your functionality longer. All of that boils down to probabilities. So we have a stochastic system on our hands. In some cases, we even have a volatility economy on our hands.

Enjoy

A Different View of the TALC Geometries

August 25, 2017

I’ve been trying to convey some intuition about why we underestimate the value of discontinuous innovation. The numbers are always small, so small that the standard financial analysis results in a no-go decision, a decision not to invest. That standard spreadsheet analysis is done in L2, a Euclidean space. This analysis gets done while the innovation is in hyperbolic space, so the underestimation of value would be the normal outcome.

In hyperbolic space, infinity is out at the edge, at a distance. The unit measure appears smaller toward infinity when viewed from Euclidean space. This can be seen in a hyperbolic tiling. But we need to keep something in mind here and throughout this discussion: the tiles all have the same area; the transform, the projection, makes it seem otherwise. That L2 financial analysis assumes Euclidean space while the underlying space is hyperbolic, where small does not mean small.

Hyperbolic Tiling
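As an aside, and not something the figure itself states, the standard Poincaré disk model makes the same point analytically: its length element blows up toward the rim, so tiles of equal hyperbolic area take up less and less of the Euclidean page as they approach the edge.

```latex
ds \;=\; \frac{2\,\sqrt{dx^{2}+dy^{2}}}{1-\left(x^{2}+y^{2}\right)}
```

The scale factor 2/(1 - r^2) grows without bound as r approaches 1, which is exactly the shrinking the tiling shows.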

How many innovations, discontinuous ones, have been killed off by this projection? Uncountably many discontinuous innovations have died at the hands of small numbers. Few put those inventions through the stage-gated innovation process because the numbers were small. The inventors who used different stage gates and pushed on without worrying about the eventual numbers succeeded wildly. But these days, the VCs insist on the orthodox analysis, typical of consumer commodity markets, so nobody hits one out of the ballpark and pays for the rest. The VCs hardly invest at all and insist on the immediate installation of the orthodoxy. This leads us to stasis and much replication of the alike.

I see these geometry changes as smooth, just as I see the Poisson to normal to high-sigma normals as smooth. I haven’t read about differential geometry, but I know it exists. Yet, there is no such thing as differential statistics. We are stuck in data. We can use Markov chain Monte Carlo (MCMC) to generate data fitting some hypothetical distribution, and then fit to and test fitness against that hypothetical distribution. But, in sampling, that would be unethical or frowned upon. Then again, I’m not a statistician, so it just seems that way to me.
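For what it’s worth, a minimal Metropolis sketch of that generate-then-fit idea, assuming a hypothetical skewed target standing in for “some hypothetical distribution”:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
target = stats.skewnorm(a=4)          # hypothetical target distribution

def metropolis(n, step=1.0, x0=0.0):
    """Random-walk Metropolis: propose, accept with probability pdf(x')/pdf(x)."""
    xs, x = [], x0
    for _ in range(n):
        proposal = x + rng.normal(0, step)
        if rng.random() < target.pdf(proposal) / target.pdf(x):
            x = proposal
        xs.append(x)
    return np.array(xs)

draws = metropolis(20_000)[5_000:]    # drop burn-in
print("sample skew:", stats.skew(draws), " target skew:", target.stats(moments="s"))
```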

I discussed geometry change in Geometry and numerous other posts. But, in hunting up things for this post, I ran across this figure.

Geometry Evolution

I usually looked at the two-dimensional view of the underlying geometries. So this three-dimensional view is interesting. Resize each geometry as necessary and put them inside each other. The smallest would be the hyperbolic geometry. The largest geometry, the outermost containment, would be the spherical geometry. That would express the geometries, differentially, in the order that they occur in the technology adoption lifecycle (TALC), working from the inside out. Risk diminishes in this order as well.

Geometry Evolution w TALC

In the above figure, I’ve correlated the TALC with the geometries. I’ve left the technical enthusiasts where Moore put them, rather than in my underlying infrastructural layer below the x-axis. I’ve omitted much of Moore’s TALC elements, focusing on those placing the geometries. The early adopters are part of their vertical. Each early adopter owns their hyperbola, shown in black, and seeds the Euclidean of their vertical, shown in red, or the normal of the vertical (not shown). There would be six early adopter/verticals rather than just the two I’ve drawn. The thick black line represents the aggregation of the verticals needed before one enters the tornado, a narrow phase at the beginning of the horizontal. The center of the Euclidean cylinder is the mean of the aggregate normal representing the entire TALC, aka the category born by that particular TALC. The early phases of the TALC occur before the mean of the TALC. The late phases start immediately after the mean of the TALC.

The Euclidean shown is the nascent seed of the eventual spherical. The Euclidean is realized at a sigma of one. I used to say six, but I’ll go with one for now. Once the sigma is larger than one, the geometry is spherical, and tends to become more so as the sigmas increase.

From the risk point of view, it is said that innovation is risky. Sure, discontinuous innovation (hyperbolic) carries more risk than continuous (Euclidean), and commodity continuous (spherical) less. Quantifying risk, the hyperbolic geometry gives us an evolution towards a singular success. That singular success takes us to the Euclidean geometry. Further data collection takes us to the higher-sigma normals, the spherical space of multiple pathways to numerous successes. The latter, the replications, are hardly risky at all.

Concentric

Nesting these geometries reveals gaps (-) and surpluses (+).

The Donut/Torus Again

In an earlier post, I characterized the overlap of distributions used in statistical inference as a donut, as a torus, and later as a ring cyclide. I looked at a figure that described a torus as having positive and negative curvature.

So the torus exhibits all three geometries. Those geometries transition through the Euclidean.

Torus 2

The underlying distributions lay on the torus as well. The standard normal has a sigma of one. The commodity normal has a sigma greater than one. The saddle and peaks refer to components of a hyperbolic saddle. The statistical process proceeds from the Poisson to the standard normal to the commodity normal. On a torus, the saddle points and peaks are concurrent and highly parallel.

Torus 3

Enjoy.

The Average, or the Core

August 4, 2017

Tonight I ended up reading some of the Wolfram MathWorld discussion of the Heaviside Step Function, among other topics. I only read some of it, as with most things on that site, because I bump into the limits of my knowledge of mathematics. But the Heaviside step function screamed loudly at me. Well, the figure did, this figure.

Cute

Actually, the graph on the left. The Heaviside step function can look like either depending on what one wants to see or show.

The graph on the left is interesting because it illustrates how the average of two numbers might exist while the reality at that value doesn’t. Yes, I know, not quite, but let’s just say the reality is the top and bottom lines, and that H(x)=1/2 value is a calculated mirage. All too often the mean shows up where there is no data value at all. Here, the mean of 0 and 1 is (0+1)/2 = 1/2. When we take the situation to involve the standard normal, we know we are talking about a measure of central tendency, or the core of the distribution. That central tendency or core in our tiny sample is a calculated mirage. “Our average customer …” is mythic, a calculated mirage of a customer in product management speak.
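A tiny numeric restatement of the mirage, with made-up data:

```python
import numpy as np

data = np.array([0, 0, 1, 1, 0, 1])  # made-up two-valued observations
print(np.mean(data))                  # 0.5 -- a value no observation ever takes
```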

Cute w Normal

Here I put a standard normal inside the Heaviside step function. Then, I show the mean at the H(x)=1/2 value of the Heaviside step function. The core is defined by the inflection points of the standard normal.

The distribution would show skew and kurtosis since n=2. A good estimate of the normal cannot be had with only two data points.

More accurately, the normal would look more like the normal shown in red below. The red normal is higher than the standard normal. The height of the standard normal shown in blue is around 4.0. The height of the green normal is about 2.0. The red normal is around 8.0. I’ve shown the curvature circles generated by the kurtosis of the red distribution. And, I’ve annotated the tails. The red distribution should appear more asymmetrical.

More accurately

Notice that the standard deviations of these three distributions drive the heights of the distributions. Kurtosis clearly does not determine the height, the peakedness or flatness, of a distribution, yet too many definitions of kurtosis describe it as peakedness, rather than as the separation between the core and the tails. The inflection points of the curve divide the core from the tails. In some discussions, kurtosis divides the tails from the shoulders, and the inflection points divide the core from the shoulders.
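Both claims follow directly from the normal density: the inflection points sit one standard deviation from the mean, and the peak height is fixed by the standard deviation alone.

```latex
f(x)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}},
\qquad
f''(x)=0 \;\Longleftrightarrow\; x=\mu\pm\sigma,
\qquad
f(\mu)=\frac{1}{\sigma\sqrt{2\pi}}.
```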

To validate a hypothesis, or bias ourselves to our first conclusion, we need tails. We need the donut. But, before we can get there, we need to estimate the normal when n<36 or we assert a normal when n≥36; otherwise, skew and kurtosis risks will jerk our chains. “Yeah, that code is so yesterday.”

And, remember that we assume our data is normal when we take an average. Check to see if it is normal before you come to any conclusions. Take a mean with a grain of salt.

Convolution

Another find was an animation illustrating convolution from Wolfram MathWorld “Convolution.” What caught my eye was how the smaller distribution (blue) travels through the larger distribution (red). That illustrates how a technology flows through the technology adoption lifecycle. Invention of a technology, these days, starts outside the market and only enters a market through the business side of innovation.

The larger distribution (red) could also be a pragmatism slice where the smaller distribution (blue) illustrates the fitness of a product to that pragmatism slice.

convgaus

The distributions are functions. The convolution of the two functions f*g is the green line. The blue area represents “the product g(tau)f(t-tau) as a function of t.” It was the blue area that caught my eye. The green line, the convolution, acts like a belief function from fuzzy logic. Such functions are subsets of the larger function and never exit that larger function. In the technology adoption lifecycle, we eat our way across the population of prospects for an initial sale. You only make that sale once. Only those sales constitute adoption. When we zoom into the pragmatism step, the vendor exits that step and enters the next step. Likewise when we zoom into the adoption phase.
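For reference, the definition behind the animation, plus the special case that makes Gaussians so convenient: convolving two normals gives another normal whose variance is the sum of the two.

```latex
(f*g)(t)=\int_{-\infty}^{\infty} f(\tau)\,g(t-\tau)\,d\tau,
\qquad
\mathcal{N}(\mu_{1},\sigma_{1}^{2}) * \mathcal{N}(\mu_{2},\sigma_{2}^{2})
=\mathcal{N}(\mu_{1}+\mu_{2},\;\sigma_{1}^{2}+\sigma_{2}^{2}).
```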

Foster defined disruption as the interval when a new technology’s s-curve is steeper than the existing s-curve. We can think of a population of s-curves. The convolution would be the lesser s-curves, and the blue area represents the area of disruption. Disruption can be overcome if you can get your s-curve to exceed that of the attacker. Sometimes you just have to realize what was used to attack you. It wasn’t the internet that disrupted the print industry; it was server logs. The internet never competed with the print industry. Foster’s disruptions are accidental happenings when two categories collide. Christensen’s disruptions are something else.

Enjoy.

Notes on the Normal Distribution

July 24, 2017

Pragmatism Slices and Sales

Progress through the technology adoption lifecycle happens in terms of seats and dollars. If you use alternate monetizations, rather than selling your product or service, drop the dollars consideration. Beyond those monetizations, even if you sell your product or service, dollars are flaky in terms of adoption. But the x-axis is about population, aka seats.

Sales drive the rate of adoption in the sense that a sale moves the location of the product or service, the prospect’s organization(s), and the vendor’s organization(s) under the curve. By sales, I mean the entire funnel from SEO to the point where the sales rep throws the retained customer under the bus. But, I also mean initial sales, the point where prospects become customers. That sale moves adoption from left to right, from the early phases towards the late phases, from category birth to category death.

But, there are two kinds of sales: the initial sale, aka the hunter sale, and the upgrade sale, aka the farmer sale. What struck me this week was how the farmer sale does absolutely nothing in regard to progress of the various entities’ locations under the adoption curve. So let’s look at this progress.

Sales

People in a pragmatism slice reference each other. They do not reference people in other pragmatism slices.

In the figure, the hunter sales move the adoption front across the adoption lifecycle from left to right. The hunter sales rep made four sales. The farmer sales rep made four sales as well that generated revenues, but no movement across the lifecycle.

Growth

The size of the normal representing the addressable markets in the technology adoption lifecycle is fixed. It does not grow. A single company has a market allocation that tells us how much of that normal they own. With discontinuous innovations, that allocation to the market leader maxes out at 74%. Beyond that, antitrust laws kick in. Such a market leader would be a near-monopolist. Their market leadership will be the case until they exit the category, or face a Foster disruption. Intel was the market leader until NVIDIA brought a different technology to market. With continuous innovations, we are dealing with many players in a commodity market. The allocations are small. Market leaders can change every quarter.

Growth

In this figure, I started with a standard normal distribution (dark yellow) representing 100% of a category’s market. I represented the near monopolist’s market allocation of 74% as a normal distribution (light blue) inside of the larger normal. Then, I drew the circles (orange and blue) representing the curvature of the kurtoses of these distributions. The light blue distribution cannot get any larger. It is shown centered at the mean of the category’s normal. It could be situated anywhere under the category’s normal. Once a vendor has sold more than 50% of its addressable market, that vendor starts looking for ways to grow, ways to move the convergence of the vendor’s distribution as far to the right as possible. They try to find a way to lengthen the tail on the right. They run into trouble with that.

While a normal distribution represents the technology adoption lifecycle, the probability mass gets consumed as sales are made. The probability mass to the left has been consumed, so there is very little mass to allocate to the tail. In placing those curvature circles, I looked for the inflection points and made the circles tangent to the normals there. For the proposed tail, I drew its curvature circle. The thick black line from the mean to the topmost inflection point doesn’t leave enough probability mass to allocate to the tail, so the tail would be lower and the curvature circle would be larger. The thick red line from the mean to the bottommost inflection point leaves enough probability mass to allocate to the tail. It’s important that the curves represented by the black and red lines be smooth.

The points of convergence for the 74% normal, the 100% normal, and the long tail appear below the x-axis of the distribution. The mass between the convergences of the 100% normal and the long tail is outside the category’s normal distribution. The normal under the normal model used a kurtosis of zero. But, with the long tail, the kurtosis is no longer zero. That growth is coming from something other than the vendor’s product or service. And, the mass in the tail would not come from the normal inside the category’s normal. The normal was deformed when the mass was allocated towards the tail. But, again, that still does not account for the mass beyond the category normal. That mass beyond the category normal is black-swan like and hints at skew risk and kurtosis risk. Look for it in the data. These distributions just show a lifecycle of the category and vendor normals. The data should reflect the behaviors shown in the model. The pragmatism slices move as well. Taking a growth action that extends the tail can dramatically change your phase in the technology adoption lifecycle. Each phase change requires some, possibly massive, work to get the products and services to fit the phase they find themselves addressing.
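To put rough numbers on “very little mass to allocate to the tail,” here is the standard normal’s survival function at one, two, and three sigma; a hedged illustration, not the model in the figure.

```python
from scipy import stats

for k in (1, 2, 3):
    print(f"mass beyond {k} sigma: {stats.norm.sf(k):.4f}")
# Beyond 1 sigma ~15.9%, beyond 2 ~2.3%, beyond 3 ~0.1% -- lengthening a tail
# means finding mass that, by this point in the lifecycle, has mostly been consumed.
```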

Booms stack the populations in the technology adoption lifecycle. See Framing Post For Aug 12 Innochat: The Effects of Booms and Busts on Innovation for that discussion.

I drew my current version of Moore’s adoption lifecycle.

The Technology Adoption Lifecycle

Moore built his technology adoption lifecycle on top of Rogers’ model of the diffusion of innovation. Rogers identified the populations involved in technology adoption: the innovators, early adopters, early and late majorities, and laggards. Moore went further and teased out the technical enthusiasts and the phobics. Moore changed the early majority to vertical markets and the late majority to horizontal markets. Moore identified several structural components like the bowling alley, the chasm, and the tornado.

I’ve made my own modifications to Moore’s model. The figure is too busy. Another instance of my drawing to think, rather than to communicate.

TALC setup

The technology adoption lifecycle provides the basis for the figure. The technology adoption lifecycle is about the birth, life, and death of categories that arise from discontinuous innovation. This leaves aside the categories that can be created via management innovation discussed in an HBJ article over the last year. A category is competed for during the Tornado and birthed when market power selects the market leader. Immediately after the birth of a category, the competing companies consolidate, or exit. Their participation in the category ends. The category can live a long time, but eventually, the category dies. Its ghost disappears into the stack. The horse is still with us. Disruption is a means of killing a category, not about competing in the disrupted category. Disruption happens to adjacencies, not within the category sponsoring the disruptive discontinuous innovation.

The populations are labeled with red text. Most of the phase transitions are shown with red vertical lines. The transition to the early majority is shown with a black line, also labeled “Market Leader Selected.” The vertical labeled with red text consists of the early adopter (EA) and the next phase that Moore called the vertical market. Some technical enthusiasts would be included in the vertical as well, but are not shown here as such.

Notice that I’ve labeled the laggard phase device and the phobic phase cloud. The cloud is the ultimate task sublimation. The device phase is another task sublimation. These are not just form factors. They are simpler interfaces for the same carried use cases. The carrier use cases are different for every form factor. Moving from the early majority to the late majority phases also involved task sublimation, as described by Moore. Laggards need even simpler technology than consumers. Phobics don’t want to use computers at all. The cloud provides admin-free use. The cloud is about the disappearance of both the underlying technology in the carrier layer and the functionality in the carried layer. Notice that after the cloud the category disappears. There are no remaining prospects to sell to.

The technical enthusiasts, as defined by Moore, were a small population at the beginning of the normal. But, there are technical enthusiasts in the Gladwell sense all the way across the lifecycle. They are a layer, highlighted in orange, not a vertical slice, or phase. I’ve shown both views of the technical enthusiasts. The IT horizontal people would show up as technical enthusiasts if the product or service were being sold into the IT horizontal. This distinction is made in my Software as Media model. The technical enthusiasts are concerned with the carrier layer of the product or service.

Moore’s features are shown as brown rectangles. These features include the chasm, the tornado, and the bowling alley. Specific work, tactics, and strategies address the chasm, the tornado, and the bowling alley. These are labeled as pre-chasm, pre-tornado, and keeping the bowling alley full. They show up as blue rectangles. Another feature stems from de-adoption, the “Need (for a) New Category,” and appears as a blue rectangle. This latter feature happens because nothing was done to create a new category before it was needed. Or, such an effort failed. The point of keeping the bowling alley full is to create new categories based on discontinuous innovation on an ongoing basis. I’ve seen a company do this. But, these days discontinuous innovation is very rare. Discontinuous innovations can, but do not always, cause (Foster) disruptions. Christensen’s disruptions happen in the continuous innovation portion of the adoption lifecycle.

The lifecycle takes a discontinuous innovation to market and keeps the category on the market via continuous innovation. Plant the seed (discontinuous), harvest the yield (continuous). This division of the lifecycle is labeled in white text on a black rectangle towards the bottom of the figure. Discontinuous innovation generates economic wealth (inter-). Continuous innovation generates an accumulation of cash (intra-). A firm does not own the economic wealth it generates. That economic wealth is shared across firms. I am unaware of any accounting of such.

At the very top of the lifecycle, the early and late phases are annotated. The early phases constitute the growth phase of the startup. The late phases constitute the decline phase. The decline phase can be stretched out, as discussed in the previous section. When the IPO happens in the early phases, but not before the Tornado, the stock price sells at a premium. When the IPO happens in the late phases, the stock price does not include such a premium. The Facebook IPO bore this out. It’s typical these days, these days of continuous innovation, that no premium is involved.

Founders with discontinuous innovations, at least in carrier businesses, are engineers, not businessmen, so at some point they have to hire businesspeople to put the biz orthodoxy in place. VCs these days require a team that is already orthodox. The hype before the Shake Shack IPO demonstrates that innovation has moved on from software. Orthodox businesses are now seen as innovative, but only in the business model, continuous innovation sense. Shark Tank and VCs don’t distinguish the technology startup from other startups. The innovation press confuses us as well. It used to be that the CFO and one other person had an MBA; now everyone has one. But, in an M&A, the buyer doesn’t want to spend a year integrating the business they just bought. The merger won’t succeed unless the buyer can launch their own tornado and bring in new customers in the numbers they need. The orthodoxy needs to be in place at least a year before the IPO, or the stock price will underperform the IPO price a year after the IPO.

From a statistical point of view, the process of finding a new technology involves doing Levy flights, aka a particular kind of random walk, until that new technology is found. It should not be related to what you are doing now, aka to your install base. You are building a brand new company for your brand new category. Google’s Alphabet does this. Your company would become a holding company. Managing the diversity inherent in the technology adoption lifecycle becomes the problem. “No, that company is in a different phase, so it can’t do what our earlier company does now.” Contact me to find out more.

After the Levy flights, we search for early adopters. Use Poisson games to look at that. The Poisson distributions tend to the normal. Those normals become higher-dimensional normals. The standard normal has six sigma; the later normals in later phases of the lifecycle have more than six sigma. These divisions translate into geometries. The nascent stages of the lifecycle occur in a hyperbolic geometry where the distant is small from a Euclidean perspective, the perspective generated by the inherent L2 geometry of linear algebra. Artists see this distant-is-small reality in perspective drawings. They call it foreshortening. We foreshorten our financial forecasts, and small is bad. But, as the Poisson becomes a normal, those financial forecasts stop foreshortening. The idea we threw away becomes obviously invaluable after the founder builds a market, a technology, a product or service, a company, value chains, … The distributions change, and the geometries change. Once you move beyond six sigma, the geometry becomes spherical. In such a geometry, there are many ways for followers with different strategies to win. We start with a very narrow way to win in the hyperbolic, arrive at the one way to win in the Euclidean, and find ourselves among the many ways to win in the spherical. Or, damn, so many fast followers, geez.
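A small check on “the Poisson distributions tend to the normal,” using hypothetical rates: as the rate grows, the Poisson’s skew (1/sqrt(lambda)) and excess kurtosis (1/lambda) fall toward the normal’s zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
for lam in (2, 10, 50):                       # hypothetical adoption rates
    draws = rng.poisson(lam, size=100_000)
    print(f"lambda={lam:3d}  skew={stats.skew(draws):+.3f}  "
          f"excess kurtosis={stats.kurtosis(draws):+.3f}")
# Both tend toward 0, the normal's values, as lambda grows.
```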

Last but not least, we come to the Software as Media model. Media is composed of carrier layers and carried content layers. The phases of the adoption lifecycle change layers when they change phases. The technical enthusiast is about the carrier layer; the early adopter, the content layer; the vertical, the content layer; the horizontal, the carrier layer; the device, both; and the cloud, the carrier. At the point where you need another category, it could be either. But, these oscillations involve the market and the way the vendor does business. Each phase is vastly different. The past has nothing to do with the present. Yes, the practices were different, but they fit their market. They were not better or worse unless they did not fit their market.

Designers whining about the ’80s were not around then. They take today’s easiness as a given and think the past should have been done their way. The past taught. We learned. And, as we cross the technology adoption lifecycle, the Ito process of that crossing, the memories are deep. We learned our way here. And, when we repeat the cycle, our organizations are not going to start over. They don’t have to if properly structured. Call me on that as well. But, usually, they don’t start over from scratch, though they should, because they forgot the prior phase as they moved to the next.

Enjoy.

The Curvature Donut

July 23, 2017

In last month’s The Cones of Normal Cores, I was visualizing the cones from the curvatures of a skewed normal to the eventual curvatures of a standard normal distribution. The curvatures around a standard normal appear as a donut, or a torus. Those curvatures are the same all the way around the normal in a 3-D view. That same donut around a skewed normal appears as a deformed donut, or a ring cyclide. In the skewed normal, the curvatures differ from one side to the other. These curvatures differ all the way around the donut.

The curvature donut around the standard normal sits flatly on the x-axis and touches the inflection points of the normal curve. Dropping a line from an inflection point down to the x-axis gives us a point; a line rising 45 degrees above the x-axis from that point locates the origin of the circle of that particular curvature.

The curvature donut of a skewed normal would sit flatly on the x-axis, but might be tilted, as the math behind a ring cyclide is symmetrical about another axis running through the centers of the curvatures. In January’s Kurtosis Risk, we looked at how skew is a tilt of the mean by some angle theta. This tilt is much clearer in More On Skew and Kurtosis. That skewness moves the peak and the inflection points, but the curve stays smooth.

So I’m trying to overlay a 2-D view of a skewed distribution on a 3-D view of a ring cyclide.

Ring Cyclide

I’ve used a red line to represent the distribution. The orange areas are the two tails of the 2-D view. The curvatures show up as yellow circles. The inflection points on the distribution are labeled “IP.” The core is likewise labeled although the lines should match that of the tilted mean.

I think as I draw these figures, so this one has a gray area and a black vertical line on the ring cyclide that are meaningless. Further, I have not shown the orientation of the ring cyclide as sitting flat on the x-axis.

The ring cyclide occurs when skewness and kurtosis occur. A normal distribution exhibits skewness and kurtosis when the sample size, N, is less than 36. When N<36, we can use the Poisson to approximate or estimate the normal. Now, here is where my product management kicks in. We use Poisson games in Moore’s bowling alley to model Moore’s process as it moves from the early adopter to the chasm. The chasm is the gateway to the vertical market that the early adopter is a member of. We stage-gated that vertical before we committed to creating the early adopter’s product visualization. We get paid for creating this visualization. It is not our own. The carried component always belongs to the client. The carrier is our technology and ours alone.

So let’s look at this tending to the normal process.

Conics as Distribution Tends to Normal

I was tempted to talk about dN and dt, but statistics kids itself about differentials. Sample size (N) can substitute for time (t). The differentials are directional. But, in statistics, we take snapshots and work with one at a time, because we want to stick to actual data. Skew and kurtosis go to zero as we tend to the standard normal, aka as the sample size gets larger. Similarly, skew risk and kurtosis risk tend to zero as the sample size gets larger.
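A sketch of those snapshots, with N standing in for t, drawing each snapshot from an assumed normal population:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
for n in (5, 15, 36, 200, 5000):
    sample = rng.normal(size=n)          # a snapshot at sample size N
    print(f"N={n:5d}  skew={stats.skew(sample):+.2f}  "
          f"excess kurtosis={stats.kurtosis(sample):+.2f}")
# Small snapshots wander; skew and kurtosis, and their risks, shrink toward 0 as N grows.
```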

The longer conic represents the tending to normal process. The shorter conic tends to work in the inverse direction from the normal to the skewed normal. Here direction is towards the vertex. In a logical proof, direction would be towards the base.

The torus, the donut associated with the standard normal, like its normal, is situated in Euclidean space. However, the ring cyclide is situated in hyperbolic space.

An interesting discussion on Twitter came up earlier this week. The discussion was about some method. The interesting thing is what happens when you take a slice of the standard normal as a sample. The N of that slice might be too small, so skew and kurtosis return, as do their associated risks. This sample should remain inside the envelope of the standard normal, although it is dancing. I’m certain the footprints will. I’m uncertain about the cores in the vertical sense. Belief functions of fuzzy logic do stay inside the envelope of the base distribution.

Another product manager note: that slice of the standard normal happens all the time in the technology adoption lifecycle. Pragmatism orders the adoption process. Person 7 is not necessarily seen as an influencer of person 17. This happens when person 17 sees person 7 as someone that takes more risk than they or their organization does. They are in different pragmatism slices. Person 17 needs different business cases and stories reflecting their lower risk tolerance. These pragmatism slices are a problem in determining who to listen to when defining a product’s future. We like to think that we code for customers, but really, we code for prospects. Retained customers do need to keep up with carrier changes, but the carried content, the use cases and conceptual models of carried content, rarely changes. The problem extends to content marketing, SEO, ancillary services provided by the company, and sales qualifications. Random sales processes will collide with the underlying pragmatism structure. But, hey, pragmatism, aka skew and kurtosis, is at the core of problems with Agile not converging.

In terms of the technology adoption lifecycle, the aggregated normal that it brings to mind is actually a collection of Poisson distributions and a series of normal distributions. The footprint, the population of the aggregated normal, does not change over the life of the category. Provided you are not one of those who leave their economy of scale with a pivot. Our place in the category is determined in terms of seats and dollars. When you’ve sold more than 50% of your addressable population, you are in the late market. The quarter where you left the early market and entered the late market is where you miss the quarter, and where the investors are told various things to paper over our lack of awareness that the lost quarter was predictable.

If you know anything about the ceiling problem, the sample distribution reaching beyond the parent normal, let me know.

I’ve actually seen accounting visualizations showing how the Poissons tend to the normal.

Enjoy.

The Postmodern UI

July 8, 2017

A tweet dragged me over to an article in The New Republic, a journal that I’m allergic to. But the article, America’s First Postmodern President, which I read with my product manager hat on, is about the postmodern world we live in, a world of constant, high-dimensional, directionless change. And, it became obvious to me that I’m not a postmodernist, while Agile is exactly that, postmodernist, so our software products reflect that.

No politics here. The quotes might go that way, but I will annotate the quotes to get us past that. I’ll ignore the politics. Here the discussion will be product, UI, design, Agile.

For Jameson, postmodernism meant the birth of “a society of the image [textual/graphical/use case] or the simulacrum [simulation] and a transformation of the ‘real’ [the carried content] into so many pseudoevents.” Befitting the “postliteracy [Don’t make me read/YouTube it] of the late capitalist world,” the culture of postmodernism would be characterized by “a new kind of flatness or depthlessness [no hierarchy, no long proofs/arguments/logics/data structures/objects], a new kind of superficiality [the now of the recursion, the memorylessness of that recursion’s Markov chain] in the most literal sense” where “depth [cognitive model/coupling width/objects] is replaced by surface [UI/UX/cloud–outsourced depth].” Postmodernism was especially visible in the field of architecture, where it manifested itself as a “populist” revolt “against the elite (and Utopian) austerities of the great architectural modernisms: It is generally affirmed, in other words, that these newer buildings [applications/programs/projects/products/services] are popular works, on the one hand, and that they respect the vernacular of the American city fabric, on the other; that is to say, they no longer attempt, as did the masterworks and monuments of high modernism [no VC-funded, logarithmic hits out of the financial ballpark], to insert a different, a distinct, an elevated, a new Utopian language into the tawdry and commercial sign system [UX as practiced now] of the surrounding city, but rather they seek to speak that very language, using its lexicon and syntax as that has been emblematically ‘learned from Las Vegas [for cash and cash alone, no technological progress/reproduction by other people’s means].’”

And,

For Baudrillard, “the perfect crime” was the murder of reality, which has been covered up with decoys (“virtual reality” and “reality shows” [and UIs]) that are mistaken for what has been destroyed. “Our culture of meaning is collapsing beneath our excess of [meaningless] meaning [and carrier impositions], the culture of reality collapsing beneath the excess of reality, the information culture collapsing beneath the excess of information[multiplicities in the spherical geometry where every model models correctly in the financial/cash sense]—the sign and reality sharing a single shroud,” Baudrillard wrote in The Perfect Crime (1995)…[political cut].

What a mess. It helped that this morning, in those Saturday-morning, light-weight introspective moments, the notion arose that objects are bad and that the reassertion of functional programming is leaving us with data scattered in the stack via recursion, and the now of the current system stack with nothing to see of how we got here. But, hey, no coupling between functions through the data structure, something I never thought about until some mention in the last two weeks. Yes, the alternative to static would do that, no matter how dynamic.

Those gaps, the architecture enabling us to escape those tradeoffs we make in our products, the slowness of feedback from our users, and the feedback from the managers as if they were users–a flattening–all disappear when we go postmodern, when we go flat. That jack in your car becomes worthless when your emergency tire goes flat.

Still, I don’t like surface without depth; the absence of a cognitive model; the painted-on UI; the erasure of the deep UX/CX/BX/MX/EX; the surface of machine learning; and programmers writing up other people’s disciplines as if those disciplines don’t matter, as if those years spent in school learning that discipline don’t matter, as if the epistemic/functional cultures don’t matter–but, of course, they don’t matter, because the programmer knows all the content they encode, and management lays off all the content anyway, ending their Markov chains and filling their resumes so full of cheap-labor jobs that you can’t see the underlying person. Thirty years of doing something, the depth, forgotten because seven years have passed, still leaves depth, but hiring passion over experience gets us to that postmodernist surface. Oh, well. When better is surface, when success is reality TV, when…

The danger of a sweeping theory like postmodernism is that it can produce despair.

But, that’s where we are this morning, sweeping theory, not despair.

The Cones of Normal Cores

June 23, 2017

A few days ago, I drew a quick sketch about constraints, symmetries, and asymmetries. Discontinuous inventions break a physical constraint, change the range of a physical constraint, weaken a physical constraint, or bend a physical constraint. That discontinuous invention goes on to become a discontinuous innovation once it escapes the lab and business people build a business around it. Asymmetries present us with the necessity of learning.

So we start with a rotational symmetry out in infinite space. This is the space we seek in the economic sense, the theory yet faced with the realities of practice, the desired, the sameness, the undifferentiated, the mythical abundance of the commodity. We could rotate that line in infinite space and never change anything.

01 Symmetry

Reality shows up as a constraint deforming the infinite space and the symmetry into an asymmetry, an asymmetry we are not going to understand for a while. Not understanding will lead any learning system through some lessons until we understand. Not understanding makes people fear.

02 Asymmetry

The symmetry generates data supporting a normal distribution. When the symmetry encounters the constraint, the density is reflected at the boundary of the constraint. That increases the probability density, so the distribution exhibits skew and kurtosis.

03 Distributions

The normal distribution of the symmetry is shown in light aqua. The skewed distribution is shown in a darker aqua.

04 Curvatures

The skewed distribution exhibits kurtosis which involves a maximum curvature at the shoulder between the core of the distribution and the long tail of that distribution, and a minimum curvature at the shoulder between the core of the distribution and the short tail of that distribution.

With a discontinuous innovation, we enter the early adopter market via a series of Poisson games. The core of a Poisson distribution, from a top down view, would be a small circle. Those Poisson distributions tend to the normal, aka become a normal distribution.

In the previous figure, we annotated these curvatures with circles having the given curvature. The normal distribution gives us two circles with the same curvature, as the distribution is symmetric. The tail of the normal can be considered to be rotated around the core. The skewed distribution gives us a circle representing the curvature on the long-tail side of the core that is larger than the normal’s, and a circle representing the curvature on the short-tail side that is smaller than the normal’s.

These curvature circles generate conics, aka cones. Similarly, the Poisson distribution is the tip of the cone, and the eventual normal is the base of the cone. The technology adoption process generates a cone that gets larger until we’ve sold fifty percent of our addressable market. The base of the cone gets larger as long as we are in the early phases of the technology adoption lifecycle. Another cone on the same axis and using the same base then gets smaller and comes to a tip as the underlying technology is further adopted in the late phases and finally is deadopted.

05 Cones

The early tip represents the birth of the category; the later tip represents the death of the category. The time between birth and death can be more than fifty years. These days, the continuous innovations we bring to market in the late Main Street phase of the technology adoption lifecycle last only as long as VC funding can be had. Or, no more than ten years beyond the last round of funding. All of that occurs inside the cone that shrinks its way to the death of the category.

06 Birth and Death of a Category

We innovate inside a polygon, so we involve ourselves with more than one constraint. We will look at the distributions involved from the top down, looking at the circles that constitute those distributions. The normal distributions are represented by circles. Poisson distributions are represented by much smaller circles. Technology adoption moves from a small footprint, a small circle, to a large footprint, a large circle.

07 Multiple Constraints

Notice that as time passes on the adoption side of the technology adoption lifecycle, the distribution gets larger. Likewise, on the deadoption side, the distribution gets smaller. Smaller and larger would be relative to sample size and standard deviations. The theta annotated in the diagram indicates the current slope of the technology associated with that constraint and the productivity improvement of the technology’s s-curve, aka price-performance curve, and by price we mean the dollars invested to improve the performance.

Notice that when we pair adoption and deadoption, we are looking at a zero-sum game. The Poisson distribution would represent the entrant. The circle tangent to the Poisson distribution would represent the incumbent in a Foster disruption. The s-curves of both companies’ competing technologies are still critical in determining whether a Foster disruption is actually happening or not, or the duration of such a disruption. Christensen disruptions are beyond the scope of this post.

08 Zero-Sum Game

I annotated a zero-sum game on the left, earlier in time. The pair of circles on the right are not annotated, but are the same zero-sum game. There might be five or more vendors competing with the same technology. They might have entered at different times. Consider Moore’s market share formula that he talked about in his books. The near-monopolist gets 74% and everyone else gets a similar allocation of the remainder.

Notice that I used the terms core and orientation in the previous figure. The orientation would have to be figured out relative to the associated constraint. But, the circles in each zero-sum game represent the curvature of the kurtoses involved, which drive the lengths of the tails of the distribution relative to a core.

That core is much wider than shown in all but the weak-signal context of a Dirac function that indicates some changes to conditional probabilities.

09 Line as Core

The arrow attached to each kurtosis indicates the size of each as the distribution normalizes.

The core is usually wider. As it gets wider, the height of the distribution gets lower. The normalization of the standard normal, or the fact that the area under the distribution will always equal one, is what causes this. I did not change the kurtoses in the figure, but the thicker core implies progress towards the normal and less difference between the two kurtoses. The width of the range should stay the same throughout the life of the distribution once it begins to normalize. Remember that it takes 36 to 50 or so measurements before a sample normalizes. Various approximation methods help us to approximate the normal when we lack adequate data. Skewness and kurtosis will be present in all samples lacking sufficient measurements. Look for skewness and kurtosis in the feedback collected during Agile development efforts. The normal, in those circumstances, will inform us as to whether the functionality is done and deliverable.

10 Rectangle as Core

Core width will change over the adoption lifecycle. I drew this figure thinking in terms of standard deviations. But, the Poisson distribution is what we have at the early adopter phase of the lifecycle. In the vertical, we tend to the normal. In the horizontal, some complex data fusions give us a three-or-more-sigma normal, and in the late phases we are in the six-or-more-sigma range. The core width is correlated with time, but in the lifecycle, time is determined by seats and dollars, and by the lifecycle phase, rather than calendar time. Note that I correlated the underlying geometries with time as well. Our financial analysis tells us to pass on discontinuous technologies, because the future looks small in the hyperbolic geometry we don’t know we are looking at. Euclidean is easy. And, the spherical geometry leaves us in banker numbers, in information (strategy) overload, aka 30 different approaches that all work. No, he wasn’t lucky. He was spherical.

11 Core Width

Enjoy.

Do we gerrymander our product’s market?

April 5, 2017

Notice: I make absolutely no political statements in the following. 

Of course, we gerrymander our product’s market. We don’t intend to represent all of our customers either. When we introduce a product, we pick who we will sell it to. We find some rules. Sales qualify their prospects with respect to those rules. Sales bring outliers to us forcing us to say no to their deal or forcing us to redefine the product.

We prioritize. We tradeoff. We waste code. We waste prospects, customers, and users. All of these are our mechanisms for gerrymandering our product. We become insensitive to our prospects, customers, and users.

The technology adoption lifecycle organizes our prospects, customers, and users. With discontinuous innovations, our focus changes as we cross the technology adoption lifecycle. We start out in carrier or protocol, shift to carried content, then shift to carrier again, and subsequently shift back to carried content. We start out in a single form-factor and end broadly in many different form-factors. We start out with risk takers and end with those that take the least risk possible.

This latter characterization demonstrates the pragmatism scale underlying the entire technology adoption lifecycle.

With continuous innovations, typical these days, we don’t do the whole lifecycle. We jump into the late phases and move to later phases. We act surprised when our offer suddenly has run through all of our addressable prospects. We surprise ourselves when we realize we need something new. Yes, even Apple surprised itself with this many times since the first Apple computer.

But, here I’m talking about the pragmatism scale organizing our business with the phases of the lifecycle, not just phases. The finer we go with this the more likely a release will address prospects different from our consumers, and users with use cases organized in pragmatism slices, not just time slices. We end up with slices at one scale within slices of another scale. We end up with queues. We end up with boundaries.

Not attending to those boundaries results in gerrymandering which in turn leaves us inattentive to opportunities for customization in use cases, and pricing.

Mathematicians are addressing political gerrymandering now. See How to Quantify (and Fight) Gerrymandering.

Gerrymandering our products is a hard problem. The scales we use need to be aligned with our release cycle. Decide on scales. Then, get agreement on where the product and company are on the technology adoption lifecycle. Make a map. Know your pragmatism boundaries.

Moore described the pragmatism boundaries in terms of reference groups. Everyone in a particular slice refers to people, businesses, and business cases in their slice and nearby adjacencies. Each slice has its own evidence. This generates some communications isolations that grant us pricing isolations. Communications channels generate more boundaries, more to map.

The use cases served in the current slice will differ from the use cases in earlier slices. Yes, as time goes by the economic customer becomes more pragmatic, but then, so could the use cases and the marketing content.

To make matters harder, sales consumes each population at a different speed and might sell much more randomly without regard to lifecycle or pragmatism scale or communications channel considerations. Just a warning.

Growth would impact all of this. A prospect once sold is a customer forever after.

And, of course, all the talk of listening to customers et al. becomes a matter of where on our map that customer is speaking from. How does the map bundle that feedback? And, how does that feedback verify efforts?

Quite a mess, a profitable mess.

The Cook’s Customer

March 17, 2017

I was perusing Anthony Bourdain’s Appetites, a cookbook. In it, he asks a few questions about his customers, and he is shockingly honest about the answers to those questions.

What is it that “normal” people do? What makes a “normal” happy family? …

I had little clue how to answer these questions for most of my working life, as I’d been living it on the margins. I didn’t know any normal people. From age seventeen on, normal people have been my customers. They were abstractions, literally shadowy silhouettes in the dining rooms of wherever it was that I was working at the time. I looked at them through the perspective of the lifelong professional cook and chef—which is to say, as someone who did not have a family life, who knew and associated only with fellow restaurant professionals, who worked while normal people played and played when normal people slept.

Do those of us in the software community have this problem? Are our customers still abstractions even if we’ve met them, spoken with them, engaged them in an ethnographic field study? Does their corporate culture look like our culture? Is it true that we work while they sleep?

Do they use the same software we use? No, of course not. Do they seek value where we seek it? No, of course not. Do our customer personas point out the differences between us and them? This gets harder with the technical enthusiasts because they seem much more like us than our users or our economic buyers.

Where do we define the closeness of our abstraction, the gap between an atomic bomb and a hypodermic needle? We go with the atomic bombs too often.

Make no mistake, sure I’m asking product managers, but really, I’m asking the developers because we leave this in their hands. And, when we “fully load” those developers to capture all the effort that we can, are we not failing to leave time to know the customer, know the carried content, or even know the carrier? We do tend to make time for our developers to know the carrier.

Developers don’t come to us experts in our carried content, our users, or our economic buyers. They need to learn those things which reach well beyond the “learning” the Agilists mention and experiment towards: was it used; was it used sooner, rather than later (was it successfully learned); does it deliver the value it was supposed to deliver to the entity it was supposed to be delivered to?

Once those questions get answered, tighten the limit, so the gap becomes a fence, rather than a borderland, and answer the questions again. Find the questions tied to the scale of the gap.

I’m sure, after working with too many developers who thought that their users were just like them, that your answers should surprise you, just as Anthony’s answers surprised him. Enjoy.

Kurtosis Risk

January 2, 2017

In the research for my previous posts on kurtosis, I ran across mentions of kurtosis risk. I wasn’t up to diving into that, and getting too far away from what I was writing about in those posts. mc spacer retweeted More On Skew and Kurtosis. I reread the post and decided to conquer kurtosis risk. The exploration was underway.

One of the things they don’t teach you about in that intro stats class is the logical proof of what we are doing. We take a mean without checking its normality. We go forward with the normal distribution as if it were normal, ordinary, usual, typical, non-problematic. Then, we meet the data, and it’s anything but normal. When meeting the data, we also meet skew risk and kurtosis risk. It’s like meeting your spouse-to-be’s mom. Usually, you meet your spouse-to-be’s dad at the same time. Yeah, they all show up at the same time.

You might get taught various ways to approximate the mean when you have fewer than 30 data points, aka when your sample is too small. That fewer-than-30-data-points space is where skew risk and kurtosis risk happen. The sample statistics drive around a while getting close to the as yet unknown population mean, equaling it a few times, circling it, and finally pulling in and moving in. Our collection of sample means eventually approximates the population mean.
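A small simulation of that driving around, with an assumed population mean the sampler doesn’t get to see:

```python
import numpy as np

rng = np.random.default_rng(4)
population_mean = 10.0                      # assumed, unknown to the sampler
draws = rng.normal(population_mean, 3.0, size=500)

running_means = np.cumsum(draws) / np.arange(1, len(draws) + 1)
for n in (5, 10, 30, 100, 500):
    print(f"n={n:3d}  sample mean={running_means[n - 1]:.3f}")
# Below ~30 the estimate still drives around; with more data it pulls in and moves in.
```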

In artificial intelligence, back in the old days when it was important to think like a human, back in the days of expert systems, we encoded the logic in augmented transition networks. A single transition would look like IF StopSign, THEN Stop. Of course, that’s not a network yet. That would wait until we wrote another, IF YieldSign, THEN Yield. That’s just another transition. Those two transitions would, with some additional infrastructure, become a network; thus they would become an augmented transition network. To make this easier, we used a descriptive language, rather than a procedural one. Prolog gives you the widest infrastructure. Prolog lets you present it with a collection of transitions, and it builds the proof to achieve the goal. It builds a tree and trims the inconsistent branches.
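Prolog itself isn’t shown here, but a rough Python stand-in can convey the idea of handing over transitions and letting the machinery build the proof. The rules, including the extra LookBothWays transition, are invented purely for illustration.

```python
# Toy stand-in for goal-directed proof over IF/THEN transitions.
# Rules are (condition, conclusion) pairs; all of them are made up.
rules = [
    ("StopSign", "Stop"),
    ("YieldSign", "Yield"),
    ("Stop", "LookBothWays"),  # an invented transition to get a chain
]

def prove(goal, facts):
    """Backward-chain from the goal; branches that cannot be proven are trimmed."""
    if goal in facts:
        return True
    return any(cond != goal and prove(cond, facts)
               for cond, conclusion in rules if conclusion == goal)

print(prove("LookBothWays", {"StopSign"}))  # True: StopSign -> Stop -> LookBothWays
print(prove("Yield", {"StopSign"}))         # False: no consistent branch survives
```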

We’ve seen that building the tree and trimming the inconsistent branches before. We use generative grammars to build a decision tree for a potential product, and constraints to trim that decision tree, so we arrive at the product fit for the moment. There is a logical argument to our product.
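As a sketch of that generate-then-trim move, here is a small example using only the standard library; the feature options and the constraints are made-up placeholders, not anyone’s actual product decisions.

```python
# Generate every combination the "grammar" allows, then trim with constraints.
# The options and constraints below are purely illustrative.
from itertools import product

grammar = {
    "platform": ["desktop", "web", "mobile"],
    "pricing":  ["subscription", "perpetual"],
    "buyer":    ["technical enthusiast", "early mainstream"],
}

def consistent(candidate):
    # Constraints trim the branches that do not fit the moment.
    if candidate["buyer"] == "early mainstream" and candidate["pricing"] == "perpetual":
        return False
    if candidate["platform"] == "mobile" and candidate["pricing"] == "perpetual":
        return False
    return True

tree = [dict(zip(grammar, leaf)) for leaf in product(*grammar.values())]
fits = [c for c in tree if consistent(c)]
print(f"{len(tree)} combinations generated, {len(fits)} survive the constraints")
```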

Similarly, there is a logical argument, or a proof, to our statistical analysis. There in that proof of our statistical analysis, our skew and kurtosis risk emerge.

Statistics happen after our data is collected. We think in terms of given (IF, or WhatIF, WIF) this data, then these statistics. We don’t think about that driving around in search of the population mean as a process. Statistics is static, excepting the Bayesian approach. Logic insists. The proof frames everything we do. When computing a mean, the proof is going to insist on normality. But this logical insistence is about the future, which means we are actually doing an AsIf analysis. We imagine that we checked for normality. We imagine that we know what we are doing, since nobody has told us any different yet. An AsIf analysis imagines a future and uses those imagined numbers as the basis for an analysis. In that imagining of the future, we are planning, we are allocating resources, we are taking risks. With samples, those risks are skewness and kurtosis risks.

I’ve delayed defining skewness risk in this post until the very end. Once you understand kurtosis risk, skewness risk is nearly the same thing, so bear with me.

[Figure: valid-distribution]

We will use the triangle model, which represents decision trees as triangles, to represent our proof.

In this figure, the root of the decision tree is at the bottom of the figure. The base of the tree is at the top of the figure. In the triangle model, the base of the triangle represents the artifact resulting from the decision tree, or proof.

Here we paired the distribution with its proof. A valid proof enables us to use the distribution. In some cases, the distributions can be used to test a hypothesis. An invalid proof leads to an invalid distribution which leads to an invalid hypothesis. Validity comes and goes.

OK, enough meta. What is kurtosis risk?

When we assert/imagine/assume (AsIf) that the distribution is normal, but the actual data is not normal, we’ve exposed ourselves to kurtosis risk. We’ve assumed that the sample mean has converged with the population mean. We’ve assumed that we have a legitimate basis for hypothesis testing. Surprise! It hasn’t converged. It does not provide a basis for hypothesis testing.
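Here is a minimal sketch of checking the assumption instead of imagining it, assuming numpy and scipy; the heavy-tailed Student-t draw and the sample size of 200 are just illustrative stand-ins for data that isn’t actually normal.

```python
# Minimal sketch: measure excess kurtosis and test normality rather than assume it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
looks_normal = rng.normal(0, 1, 200)
heavy_tailed = rng.standard_t(df=3, size=200)  # fatter tails than a normal

for name, sample in [("normal draw", looks_normal), ("heavy-tailed draw", heavy_tailed)]:
    excess = stats.kurtosis(sample)      # roughly 0 for a true normal
    _, p = stats.normaltest(sample)      # D'Agostino-Pearson normality test
    print(f"{name:18s} excess kurtosis={excess:5.2f}  normality p-value={p:.3f}")
```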

As an aside, WIFs (What IFs) are what spreadsheets are for. Pick a number, any number to see what the model(s) will do. AsIfs come from scenario planning, a process that is much more textual than numeric. A scenario is an outcome from various qualitative forces.

Back to it. Google sent me to Wikipedia for the above definition of kurtosis risk. I drew the definition and kept on thinking. This picture is the final result of that thinking.

[Figure: kurtosis-risk]

We start with the top-down, footprint view of the normal distribution, a circle. The brown vertical line extends from the green cross on the right, representing the mean, median, and mode, which are the same for distributions that are normal.

Then, we see that our actual data is an ellipse. The blue vertical line extends from the green cross on the left. That line is labeled as being the mode of the skewed normal. In previous discussions of kurtosis, we used kurtosis to describe the tails of the distribution. In some definitions, kurtosis was seen as describing the peakedness of the distribution, whereas we used kurtosis to describe the core of the distribution.

I drew a line through the two means. This line gave us two tails and a core. I should have drawn the core so it actually touched the two means. Then, I projected the two tails onto an x-axis so I would have a pair of lengths, the cosines of the original lengths. That one projection is longer and the other shorter is consistent with previous discussions of kurtosis.

A note on the core: I’ve taken the core to be the most undifferentiated space under the curve. This is where no marketer wants to get caught. The circle that serves as the footprint of the normal is tessellated by some scheme. A shape in that tessellation represents the base of a histogram bar. From that bar, each adjacent histogram bar is exactly one bit different from that bar. The resolution of the shapes can be any given number of bits different, but that gets messy and, in the 3D graphic tessellation sense, patchy. A string “00000000” would allow its adjacent ring of histogram bars to contain up to eight different bars representing eight unique differences. “Ring” here is descriptive, not a reference to group theory. The histogram bars of the normal distribution encode all available differences. Refinements work outward from the undifferentiated mean to the highly differentiated circle of convergences, aka the perimeter of the normal distribution’s footprint. We are somewhere under the curve. So are our competitors. So are our prospects and customers.
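For the “one bit different” neighbors, here is a tiny sketch; the eight-character string is simply the example from the paragraph above.

```python
# The eight one-bit neighbors of the string "00000000".
bar = "00000000"
neighbors = [bar[:i] + ("1" if bar[i] == "0" else "0") + bar[i + 1:]
             for i in range(len(bar))]
print(neighbors)  # eight strings, each differing from bar in exactly one position
```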

An ordinary interpretation of a peak with high peakedness is uniqueness or focus. That’s a high kurtosis value. A peak that’s less peaked, rounded, smoother is less unique, less focused, possibly smudged by averaging, tradeoffs, and gaps. It all shows up in the histogram bars.

The other thing that shows up is the set of differences that constitute our product over the life of the product. A given iteration would have a particular shape. Subsequent iterations would build a path under the histogram bars that constitute the normal. Customers would cluster around different iterations. A retracted feature would show up as defections to competitors with different configurations more consistent with the cognitive processes of the defectors, our “once upon a time” users. Use tells. Differentiation segments.

So I attend to the tessellations and shapes of my histogram bars, to the sense of place, and to movement.

I then projected the core onto the sphere represented by the circle. Yes, the same circle we used to represent the footprint of the normal distribution. The core then appears as an ellipse. It should be closer to the pole; then it would be smaller. This ellipse should be the same shape as the top of the ellipsoid, the one containing the ellipse of the data, that the sphere is topologically deformed into.

Then, I drew a vector along the geodesic from the pole to the elliptical projection of the core to represent the force of topological deformation. I also labeled the circle and ellipse view to show how the deformation would be asymmetrical. One side is much less deformed than the other.

[Figure: summary-view]

Next, I put the kurtosis in the summary view of a box chart using those lengths we found by drawing a line through the two means. This box chart is tied to a view of the tails and kurtoses drawn as curvatures. As for the slopes of the distribution’s actual curve, they are approximations.

So what is kurtosis risk? When your sample means have not yet converged to the population mean, you are exposed to kurtosis risk. Or, as Wikipedia puts it, when you assert that the data is normally distributed but it isn’t, that assertion gives rise to kurtosis risk.

And, what of skew risk? You expose yourself to skew risk when you assert that your data is symmetric when, in fact, it isn’t. In the math sense, skew transforms the symmetric into the asymmetric and injects the asymmetries into the curvatures of the kurtoses constraining the tails along the radiant lines in the x-axis plane.
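The same kind of check works for skew risk. A minimal sketch, again assuming numpy and scipy, with a lognormal draw standing in for data we wrongly asserted to be symmetric:

```python
# Minimal sketch: measure skewness rather than assert symmetry.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
symmetric = rng.normal(0, 1, 200)
asymmetric = rng.lognormal(mean=0, sigma=0.75, size=200)  # clearly skewed

for name, sample in [("symmetric draw", symmetric), ("skewed draw", asymmetric)]:
    print(f"{name:15s} skewness={stats.skew(sample):5.2f}")  # near 0 only when symmetric
```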

This business of the assertion base for statistics involves constant danger and surprise. A single inconsistent assertion in the middle of the proof can invalidate much of the formerly consistent proof of a once useful analysis. Learn more, be more surprised. Those intro classes blunt the pointed sticks archers call arrows. Before they were pointed, they were blunt, dangerous in different ways. Enjoy.