Archive for June, 2018

Point of Value

June 21, 2018

A few days ago, someone tweeted a video where he was saying that it was all about value. We get the idea that the product is the value we are delivering, but that is a vendor-specific view. What we are really doing is providing a tool that the economic buyer purchases so their people can use it to create value beyond the tool. I’ve called this concept projecting the value through the product. It is the business case, the competitive advantage derived from use, that provides the economic buyer with value, not the product itself. This same business case can convince people in the early adopter’s two-degrees-of-separation network to buy the product, moving it across the chasm if the underlying technology involves a chasm.

An XML editor provides no value just because it was installed. The earliest version of Gartner’s total cost of ownership framework classified that install as effort not expended doing work. They called it a negative use cost. The product has not been used yet. The product has not generated any value, and yet costs were accumulating. Clearly, the XML editor did not provide the owner with any value yet.

Once a user tags a document with that XML editor and publishes that document, some value is obtained by someone. The user has a point of view relative to the issue of value. And, the recipient of that value has their own point of view on the value. When the recipient uses the information while writing another report, the value chain moves the point of view on the value again, and more value accumulates.

That led me to think in terms of a value chain, the triangle model, and the projection of value. So I drew a quick diagram and redrew it several times.

In this first figure, the thick black line of the diagram on the left is the product. Different departments use the product. The use of the product is focused, and the value is delivered at the peaks of those downward-facing triangles. The value shown by the black triangles is used within the red triangles. The use inside the red triangles delivers value to the peaks of the red triangles. Notice there is a thick red line, labeled E. This represents the use of the underlying application by users outside the entities represented by the black triangles, users who report to the red entity. The underlying application is doing different things for users in different roles and levels.

All this repeats for the purple entities and values, and the blue entities and values. Value is projected from the interface to a point of value through work. That delivered value is projected again to the next point of value. The projections through work continue to accumulate value as the points of value are traversed.

The diagram on the right, in the top figure, diagrammatically depicts the value chaining and points of value, shown in the diagram on the left. It should be clear that the value is created through work, work enabled by the product. The product is the carrier, and the work is the carried content. The work should be entirely that of the purchaser’s users.

I’ve always thought of product as being the commercialization of some bending or breaking of constraints. I stick with physical constraints. In the figure on the left, we start with the linear programming of some process. Research developed a way to break a constraint across some limited range that I’ve called an accessibility gate. Once we can pass through that gate, we can acquire the value tied up in the accessed area (light blue).

The effort to pass through that gate involved implementing five factors. Those factors are shown as orange triangles that represent five different deliverables. Each of these factors is a different component of the software to be delivered. The order of delivery should increase the customer or client’s willingness to pay for the rest of the effort. Value has to be delivered to someone to achieve this increased willingness. Quickly delivering nothing gets us where? The thin purple curve orders the various points of value in a persuasive delivery order.

Some of the factors are being used, and projecting some value, before they are complete. The projection of value is not strictly linear. The factor on the far left involves code exclusively but is the last of the factors to deliver value. For this factor, it takes three releases to deliver value to three points of value.

The other factors require use by the customer’s or client’s organization to project the desired value.

Further value is accomplished by entities remote from the product. This value is dependent on the value derived by the entities tied to the product. I’ve labeled these earlier entities as being independent. The distant projections of value are dependent on the earlier ones. It remains to be seen if any of it is independent.

The path symmetries tie into the notions of skew and kurtosis as well as projections as being subsets or crosscutting concerns. Organizational structure does not necessarily tell us about where the value accrues.

In the next figure, we take you from the user to the board member. The red rectangle represents the product. The thick black line indicates the work product projected from the user through the product. The thin red arrows represent the various changes in the points of value. The thin light blue lines show the view of the value.

At some point in the value chain, the value becomes a number and later a number in an annual report. The form of the underlying value will change depending on how a given point of value sees things. This is just as much an ethnographic process as requirements elicitation. These ethnographic processes involve implicit knowledge and the gaps associated with that implicit knowledge. Value projection is both explicit and implicit.

Enjoy.

Complex

June 11, 2018

Today, someone out on Twitter mentioned how power users insist on the complex, while the ordinary users stick with the simple. No. It’s more complicated than that. And, these days there is no excuse for the complex.

Lately, I’ve been watching machine learning videos and going to geek meetups. One guy was talking about how machine learning is getting easier as if that was a good thing. And, he is a geek. Easier, simpler happens. And, as it does, the technology can’t generate the income it used to generate. Once the average user can do machine learning without geeks, what will the geeks do to earn a living? Well, not machine learning.

The technology adoption lifecycle is organized by the pragmatism of the managers buying the stuff and the complications and simplicities of the technology. The technology starts out complicated and gets simpler until it vanishes into the stack. It births a category when it’s discontinuous, aka a completely new world, and it kills the category once it has gotten as simple as it can be. The simpler it gets, the less money can be made, so soon enough everybody can do it, and nobody can make any money doing it. We add complications so we can make more money. Actually, we don’t. Things don’t work that way.

So I drew a technology adoption lifecycle (TALC) vertically. I’ve modified the place of the technical enthusiasts in the TALC. They are a layer below the aggregating mean. They span the entire lifecycle. I left Moore’s technical enthusiasts at the front end of the vertical. And, I’ve extended the technical enthusiasts all the way out to the efforts prior to bibliographic maturity.

Complicated

I used the word “Complicated” rather than complex. Complicated is vertically at the top of the figure. Simpler is at the bottom. The left edge of the technical enthusiast slice of the normal is the leading edge of the domain where the complicated, the complex, is encountered. The complex can be thought of like constraints. Once you simplify the complex there is more complexity to simplify. The vertical lines represent consecutive simplifications. Where there are many vertical lines, the complications are those of the people working on the carrier aspects of the complexity. I drew a horizontal line to separate the early and late phases. I did this to ghost the complexity grid. There is more than enough going on in the distribution itself. The vertical lines below that horizontal line are the complexity lines related to the TALC phases on the right side of the TALC, to the right of the mean, to the right of the peak. Or in this figure, instead of the usual left and right, think above and below.

In the diagram, I put “Simpler” above (to the right of) “Complicated.” This is then labeled “Simpler 1.” We are still in the lab. We are still inventing. This simplification represents the first task sublimation insisted on by the TALC. This task sublimation happens as we enter into the late mainstreet, consumer phase. Technical enthusiasts don’t need simpler. But, to move something out of the IT horizontal into broader use, it has to get simpler.

Simpler is like graph paper. “Simpler 1” is distant from the baseline and aligned with the TALC phases, although the diagram separates them for clarity, hopefully.

The device phase, aka the phase for the laggard population, absolutely requires technology that is far simpler than what we had when we moved the underlying technology into the consumer phase, late mainstreet. Devices are actually more complicated because the form factor changes and an additional carrier layer gets added to everything. The orange rectangle on the left of the device phase represents the telco geeks and their issues. The carried content gets rewritten for simpler UI standards. The tasks done on a device shouldn’t be the same as those done on a laptop or a desktop. The device phase presents us with many form factors. Each of those form factors can do things better than other form factors. But, again, the tasks done on each would be limited.

In Argentine tango, when you have a large space in which to dance, you can dance in the large. But, when the crowd shows up or the venue gets tiny, we tighten up the embrace and cut the large moves. Our form factor shrinks, so our dance changes.

How would basketball feel if it was played on a football field?

The cloud phase, aka the phase for the phobic population, requires technology that is totally hidden from them. They won’t administer, install, upgrade, or bother in the least. The carrier has to disappear. So again the UI/UX standards change.

The phase specificity of the TALC should tell us that each phase has its own UI standards. With every phase, the doing has to get simpler. The complexities are pushed off to the technical enthusiasts who have the job of making it all seem invisible to the phobics, or simple to the laggards, or somewhat simpler to consumers.

Task sublimations, simplifications, are essential to taking all the money off the table. If we get too simple too fast, we are leaving money on the table. When we skip the early phases of the TALC and jump into the consumer phase, we are leaving money on the table.

But, with continuous innovations, we don’t bother with creating value chains and careers. They get the technical enthusiasts jobs for a few months. They get some cash. The VCs get their exit. It has to be simple enough for consumers. More simplifications to come. But, the flash in the pan will vanish. Continuous innovations don’t put money on the table. That money is on the floor. Bend your knees when picking it up.

Technical enthusiasts should not cheer when the technology gets simplified. Maybe they need it to get simpler, so they can use it. But, it is going to continue to get simpler. And, real science in the pre-bibliographic-maturity stage will be complex or complicated. It won’t get more complicated. It will get simpler. Simpler happens.

That doesn’t mean that everything has to be in the same simplicity slice. It just means that the simplicity must match the population in the phase we sell into.

One complication that doesn’t show up in the diagram is that the TALC is about the carrier except in the bowling alley. In the bowling alley, the carried content is what the customer is buying. But, that carried content is a technology of its own, so the carrier TALC and the carried TALC meet in the bowling alley. Each of those technologies gets simpler at its own rate. These intersections show up in late mainstreet when you want to capture more of the business from the vertical populations. This is a real option. But, it will take quite an effort to hold on to the domain-knowledgeable people.

The diagram covers much more ground. Today, we just called out the complicated and the simple.

Enjoy!

Fourth Definition of Kurtosis

June 6, 2018

In the Wikipedia topic on moments, kurtosis being the fourth moment, aka the fourth derivative of the moment-generating function, Wikipedia says, “The fourth central moment is a measure of the heaviness [or lightness] of the tail of the distribution, compared to the normal distribution of the same variance.” Notice here, no mention of peakedness.
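
A quick numeric check of that tail-versus-peak point: SciPy’s excess (Fisher) kurtosis scores a normal sample near zero and a heavy-tailed sample well above it, regardless of how peaked either histogram looks. This is a minimal sketch, not anything from the post; the Student t with 4 degrees of freedom is just an assumed example of a heavy-tailed distribution.

```python
# Minimal sketch: excess kurtosis measures tail weight, not peakedness.
# The Student t (df=4) is an assumed stand-in for a heavy-tailed distribution.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
normal_sample = rng.normal(size=100_000)           # normal tails
heavy_tailed = rng.standard_t(df=4, size=100_000)  # heavier tails

print("normal excess kurtosis:", kurtosis(normal_sample))    # ~ 0
print("student-t excess kurtosis:", kurtosis(heavy_tailed))  # clearly > 0
```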

In Yes or No in the Core and Tails II, I included some discussion of mixture models with a two-dimensional graphic that illustrated the summing of two distributions. The sum (red) was said to have a heavy tail. It was interesting to see distributions in a mixture model acting as constraints. I have not been able to confirm that normals in other sums act as constraints. In a mixture model, the weights of the summed normals must add up to 1, so one normal has a weight of p, and the other would have a weight of 1-p. The yellow areas represent the constrained space. The red distribution is sandwiched between the green one and the blue one. The green normal and the blue normal are constraining the red normal.
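
Since the mixture is a convex combination of its components, its density at every point sits between the two component densities, which is one way to read that sandwiched red curve. Here is a minimal sketch of the check; the means, standard deviations, and weight are assumptions, since the figure doesn’t state them.

```python
# Sketch: a two-component Gaussian mixture with weights p and 1-p is pointwise
# bounded by its components -- the "constraint" reading of the figure.
# Component parameters and the weight p are illustrative assumptions.
import numpy as np
from scipy.stats import norm

p = 0.4
x = np.linspace(-6, 8, 1_001)
f1 = norm.pdf(x, loc=-1.0, scale=1.0)   # "green" normal
f2 = norm.pdf(x, loc=2.0, scale=1.5)    # "blue" normal
mix = p * f1 + (1 - p) * f2             # "red" mixture density

# The mixture never escapes the envelope formed by the two components.
assert np.all(mix >= np.minimum(f1, f2) - 1e-12)
assert np.all(mix <= np.maximum(f1, f2) + 1e-12)
print("the mixture stays between the component densities at every x")
```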

In analysis, distribution theory is not about statistics; distributions serve instead as substitutes for functions. In linear programming, constraints are functions, so it should be no surprise that distributions act as constraints. Statistics is full of functions like the moment function. Every time you turn around there is a new function describing the distribution. Those functions serve particular purposes.

Another view of the same underlying graph shows these normals to be events on a timeline, the normal timeline. Statistics lives in fear of p-hacking, or waiting around and continuing to collect data until statistical significance is achieved. But what if you are not doing science? P-hacking wouldn’t pay if the people doing it were trying to make some money selling product, rather than capturing grant money. Statistics takes a batch approach to frequentist statistical inference. Everything is about data sets, aka batches of data, rather than data. But, if we could move from batch to interactive, well, that would be p-hacking. If I’m putting millions on a hypothesis, I won’t be p-hacking. And I won’t use a kurtotic or skewed distribution that will disappear in just a few more data points or the next dataset. That would just be money to lose.

So what is a normal timeline? When n is low, shown by the green line in the figure, labeled A, the normal is tall, skinny ideally; ideally, because it is also skewed and kurtotic, which is not shown in this figure. We’ll ignore the skew and kurtosis for the moment. When n is finally high enough to be normal, shown by the red line, it is no longer tall, and not yet short. It is a standard normal. When n is higher, shown by the blue line, labeled B, the distribution is shorter and wider. So we’ve walked a Markov chain around the event of achieving normality and exceeding it. This illustrates a differential normality.

We achieve normality, then we exceed it. This is the stuff of differentials. I’ve talked about the differential geometry previously. We start out with Poisson games on the technology adoption lifecycle. These have us in a hyperbolic geometry. We pretend we are always in a Euclidean space because that is mathematically easy. But, we really are not achieving the Euclidean until our data achieves normality. Once we achieve normality, we don’t want to leave the Euclidean space, but even if we don’t, the world does, our business does. Once the sigma goes up, we find ourselves in a spherical geometry. How can so many businesses exist that sell the same given commodity in a multiplicity of ways? That’s the nature of the geodesic, the metric of spherical geometry. In a Euclidean space, there is one optimal way; in a hyperbolic space, less than one; and in a spherical space, many. This is the differential geometry that ties itself to the differential normality, the differential normality that batch statistics and datasets hide. A standing question for me is whether we depart the Euclidean at one sigma or six sigma. I don’t know yet.

As a side note on mixture models like the underlying figure for the figures above, these figures show us normals that have a mean of zero, but their standard deviations differ. The first standard deviation is at the inflection point on each side of the normal distribution. The underlying figure is tricky because you would think that all three normals intersect at the same inflection point. That might be true if all three had the same standard deviation. Since that is not the case, the inflection points will be in different places. The figure shows the inflection points on one side of the normal. When the distribution is not skewed, the inflection points on the other side of the mean are mirror images.
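
The inflection-point claim is easy to verify numerically: the second derivative of the normal density changes sign at the mean plus or minus one standard deviation, so components with different sigmas inflect in different places. A minimal sketch, with assumed sigmas:

```python
# Sketch: the normal pdf inflects at mean +/- one standard deviation, so
# normals with different sigmas have inflection points in different places.
import numpy as np
from scipy.stats import norm

def inflection_points(mu, sigma):
    # Find sign changes in a numerical second derivative of the pdf.
    x = np.linspace(mu - 6 * sigma, mu + 6 * sigma, 100_001)
    second = np.gradient(np.gradient(norm.pdf(x, mu, sigma), x), x)
    crossings = np.where(np.diff(np.sign(second)) != 0)[0]
    return x[crossings]

for sigma in (0.5, 1.0, 1.5):   # same mean of zero, assumed spreads
    points = np.round(inflection_points(0.0, sigma), 3)
    print(f"sigma={sigma}: inflection points near {points}")  # ~ [-sigma, +sigma]
```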

Mixture models can involve different distributions, not just normals. Summing is likewise not restricted to distributions having the same mean and standard deviations or being of the same kind of distributions.

Multivariable normals contain data from numerous dimensions. A single measure is tied to a single dimension. A function maps a measurement in a single dimension into another measurement in another dimension. Each variable in a multivariable normal brings its own measure, dimension, and distribution to the party. That multivariable normal sums each of those normals. Back in my statistics classes, adding normals required that they have the same mean and same standard deviation. That was long ago, longer than I think.
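
For the record, summing independent normals does not require matching parameters: the means add and the variances add, whatever they are. A quick Monte Carlo sketch with arbitrary, assumed parameters:

```python
# Sketch: the sum of independent normals is normal with mean mu1 + mu2 and
# variance sigma1^2 + sigma2^2; no matching of means or sigmas is required.
# The parameters below are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=1.0, scale=2.0, size=1_000_000)    # N(1, 2^2)
y = rng.normal(loc=-3.0, scale=0.5, size=1_000_000)   # N(-3, 0.5^2)
s = x + y

print("mean of sum:", round(s.mean(), 3), " expected:", 1.0 + (-3.0))
print("std of sum: ", round(s.std(), 3), " expected:", round(np.sqrt(2.0**2 + 0.5**2), 3))
```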

Enjoy.


Yes or No in the Core and Tails II

June 4, 2018

The ambiguous middle of my decision tree from my last post “Yes or No in the Core and Tails” has bugged me for a few days. I have a hard time thinking that I drive up to a canyon via a few roads, climb down to the river, cross the river, climb up the other side, and select one of many roads before driving off. That is not a reasonable way to deal with a decision tree that doesn’t get entirely covered by my sample space.

So what is this mess hinting at? Do not stop sampling just because you’ve achieved normality! Keep on sampling until you’ve covered the entire sample space. Figure out what power of 2 will produce a decision tree wide enough to contain the sample space, then sample the entire decision tree. Before normality is achieved, not sampling the entire base of the decision tree generates a skewed normal. This exposes you to skew risk. There will also be some excess kurtosis, which brings with it kurtosis risk.

Here is a quick table you can use to find the size of the sample space after you’ve found the number of samples you need to achieve normality. The sample space is a step function. Each step has two constraints.

Given that it takes less than 2048 samples to achieve a normal, that should be the maximum. 2^11 should be the largest binary sample space that you would need, hence the red line. We can’t get more resolution with larger sample spaces.
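
The step function behind that table is just the smallest power of two whose decision tree covers the sample size, capped at 2^11 if, as above, fewer than 2048 samples get you to normality. A minimal sketch of that lookup, under those assumptions:

```python
# Sketch of the table's step function: the binary sample space is the smallest
# power of two covering the number of samples, capped at 2^11 per the post's
# assumption that fewer than 2048 samples are needed to achieve normality.
import math

def binary_sample_space(n_samples: int, cap_exponent: int = 11) -> int:
    exponent = max(1, math.ceil(math.log2(n_samples)))
    return 2 ** min(exponent, cap_exponent)

for n in (31, 144, 1000, 1800, 5000):
    print(n, "->", binary_sample_space(n))
# 31 -> 32, 144 -> 256, 1000 -> 1024, 1800 -> 2048, 5000 -> 2048 (capped)
```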

Note that we are talking about a binary decision in a single dimension. When the number of dimensions increases, the number of nomials will increase. This means that we are summing more than one normal. We will need a Gaussian mixture model when we sum normals. The usual insistence when adding normals is that they need to have the same mean and standard deviation. Well, they don’t, hence the mixture models.
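
When the components don’t share a mean or a standard deviation, a Gaussian mixture model can recover the weights and parameters from the pooled data. A minimal sketch using scikit-learn’s GaussianMixture on synthetic data; the component parameters and weights are assumptions for illustration, and the fitted component order may differ from the comments.

```python
# Sketch: fit a two-component Gaussian mixture to data pooled from two normals
# with different means and standard deviations. Parameters are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
component_a = rng.normal(loc=0.0, scale=1.0, size=7_000)   # weight ~ 0.7
component_b = rng.normal(loc=4.0, scale=2.0, size=3_000)   # weight ~ 0.3
data = np.concatenate([component_a, component_b]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
# Note: the order of the fitted components is arbitrary.
print("weights:", np.round(gmm.weights_, 2))                       # ~ 0.7 and 0.3
print("means:  ", np.round(gmm.means_.ravel(), 2))                 # ~ 0 and 4
print("stds:   ", np.round(np.sqrt(gmm.covariances_).ravel(), 2))  # ~ 1 and 2
```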

I took some notes from the Bionic Turtle’s YouTube on Gaussian mixture models. Watch it here.

Gaussian Mixture Model

Back when I was challenging claims that a distribution was binomial, I wondered where the fill between the normals came from. As I watched a ton of videos last night, I realized that the overlapping probability masses at the base had to go somewhere. I quickly annotated a graph showing the displaced probability mass in dark orange, and the places where the probability mass went in light orange. The areas of the dark orange should sum up to the areas of light orange. The probability masses are moved by a physics.
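
One hedged way to read the dark-orange-equals-light-orange claim numerically, assuming the baseline is one of the original unweighted normal curves: because both the mixture and that normal integrate to 1, the area where the mixture dips below the curve exactly matches the area where it rises above it.

```python
# Sketch: displaced mass equals gained mass when comparing a mixture density
# against one of the original normal curves, since both integrate to 1.
# The baseline choice, weight, and parameters are assumptions here.
import numpy as np
from scipy.stats import norm

p = 0.5
x = np.linspace(-10, 10, 20_001)
dx = x[1] - x[0]
f1 = norm.pdf(x, loc=-1.0, scale=1.0)
f2 = norm.pdf(x, loc=2.0, scale=1.5)
mix = p * f1 + (1 - p) * f2

diff = mix - f1
displaced = np.sum(np.clip(-diff, 0, None)) * dx  # area where the mixture sits below f1
gained = np.sum(np.clip(diff, 0, None)) * dx      # area where it sits above f1
print("displaced:", round(displaced, 4), " gained:", round(gained, 4))  # equal
```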

A 3-D Gaussian mixture model is illustrated next. I noted that there are three saddle points. They are playing three games at once, or three optimizations at once. EM clustering is an alternative to the Gaussian mixture model.

So to sum it all up, do not stop sampling just because you’ve achieved normality! 

Enjoy. 


Yes or No in the Core and Tails

June 2, 2018

Right now, I’m looking at how many data points it takes before the dataset achieves normality. I’m using John Cook’s binary outcome sample size calculator and correlating those results with z-scores. The width of the interval matters. The smaller the interval, the larger the sample needed to resolve a single decision. But, once you make the interval wide enough to reduce the number of samples needed, the decision tree is wider as well. The ambiguities seem to be a constant.

A single bit decision requires a standard normal distribution with an interval centered at some z-score. For the core, I centered at the mean of 0 and began with an interval between a=-0.0001 and b=+0.0001. That gives you a probability of 0.0001. It requires a sample size of 1×10^8, or 100,000,000. So Agile that. How many customers did you talk to? Do you have that many customers? Can you even do a hypothesis test with statistical significance on something so small? No. This is the reality of the meaninglessness of the core of a standard normal distribution.
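
For scale, the standard worst-case margin-of-error formula reproduces that order of magnitude: resolving a yes/no proportion to within plus or minus 0.0001 at 95% confidence needs roughly 10^8 samples. This is a sketch of that calculation, on the assumption that it approximates what the linked calculator does.

```python
# Sketch: rough sample size to resolve a binary outcome to a half-width of
# 0.0001 at 95% confidence, using n = z^2 * p(1-p) / e^2 with worst-case
# p = 0.5. Assumed to approximate the linked calculator; it lands near 1e8.
from scipy.stats import norm

confidence = 0.95
z = norm.ppf(1 - (1 - confidence) / 2)   # ~ 1.96
half_width = 0.0001
p = 0.5                                   # worst-case variance

n = z**2 * p * (1 - p) / half_width**2
print(f"required sample size: {n:.3g}")   # ~ 9.6e7, on the order of 1e8
```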

Exploring the core, I generated the data that I plotted in the following diagram.

Core

With intervals across the mean of zero, the sample size is asymptotic to the mean. The smallest interval required the largest sample size. As the interval gets bigger, the sample size decreases. Bits refers to the bits needed to encode the width of the interval. The sample size can also be interpreted as a binary decision tree. That is graphed as a logarithm, the Log of Binary Decisions. This grows as the sample size decreases. The number of samples required to make a single binary decision is vast, while decisions about subtrees require fewer samples. You can download the Decision Widths and Sample Sizes spreadsheet.

I used this normal distribution calculator to generate the interval data. It has a nice feature that graphs the width of the intervals, which I used as the basis of the dark gray stack of widths.

In the core, we have 2048 binary decisions that we can make with a sample size of 31. We only have probability density for 1800. 248 of those 2048 decisions are empty. Put a different way, we use 11 bits, or binary digits, bbbbbbbbbbb, but we have don’t cares at 2^7, 2^6, 2^5, 2^4, and 2^3. This gives us bbbb*****bb, where each b can be a 0 or 1. The value of each don’t care would be 0 or 1, but its meaning would be indeterminate. The don’t cares let us optimize, but beyond that, they happen because we have a data structure, the standard normal distribution, representing missing but irrelevant data. That missing but irrelevant data still contributes to achieving a normal distribution.

My first hypothesis was that the tail would be more meaningful than the core. This did not turn out to be the case. It might be that I’m not far enough out on the tail.

Tail

Out on the tail, a single bit decision on the same interval centered at x=0.4773 requires a sample size of 36×10^6, or 36,000,000. The peak of the sample size is lower in the tail. Statistical significance can be had at 144 samples.

Core vs Tail

When I graphed the log of the sample sizes for the tail and the core, they were similar, not as different as I had expected.

I went back to the core and drew a binary tree for the sample size, 2^11, and the number of binary decisions required. The black base and initial branches of the tree reflect the definite values, while the gray branches reflect the indefinite values, or don’t cares. The dark orange components demonstrate how a complete tree requires more space than the normal. The light orange components are don’t cares of the excess-space variety. While I segregated the samples from the excess space, they would be mixed in an unbiased distribution.

Decision Tree

The distribution as shown would be a uniform distribution; the data in a normal would occur with different frequencies. They would appear as leaves extending below what is now the base. Those leaves would be moved from the base, leaving holes. Those holes would be filled with orange leaves.

Given the 2^7, 2^6, 2^5, 2^4, and 2^3 don’t cares, there is quite a bit of ambiguity as to how one would get from 2^8 branches to 2^2 branches of the tree. Machine learning will find them. ’80s artificial intelligence would have had problems spanning that space, that ambiguity.

So what does it mean to a product manager? First, avoid the single bit decisions because they will take too long to validate. Second, in a standard normal the data is evenly distributed, so if some number of samples occupies less than the space provided by 2^x bits, they wouldn’t all be in the tail. Third, you cannot sample your way out of ambiguity. Fourth, we’ve taken a frequentist approach here; you probably need to use a Bayesian approach. The Bayesian approach lets you incorporate your prior knowledge into the calculations.
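
As a concrete instance of that Bayesian alternative, a Beta prior on the yes/no rate updates in closed form as observations arrive, so prior knowledge enters the calculation directly. A minimal sketch; the prior counts and the observed outcomes are hypothetical.

```python
# Sketch: Bayesian updating for a yes/no decision. A Beta(a, b) prior on the
# "yes" rate has a closed-form posterior after observing yes/no counts.
# Prior counts and observations below are hypothetical.
from scipy.stats import beta

a, b = 2, 2                  # prior: weak belief that the rate is near 50/50
yes, no = 18, 7              # observed outcomes (hypothetical)

posterior = beta(a + yes, b + no)
print("posterior mean of the yes-rate:", round(posterior.mean(), 3))
print("95% credible interval:", [round(v, 3) for v in posterior.interval(0.95)])
```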

Enjoy.