Convergence and Divergence—Adoption, De-​adoption, Readoption

September 5, 2016

Skewness Risk

This week, I visited the Varsity Bookstore, the off-campus bookstore for Texas Tech. I looked at a statistics book, sorry, no citation, that said skewness is about how much the normal distribution leans to one side or the other. When it leans, the mean stays put, but the mode moves by some angle theta. My last blog post on kurtosis mentions theta relative to one of the figures.

Lean

The notions of skewness risk and kurtosis risk came up during the work on the earlier post. It took this long to find some details hinted at in places like Investopedia. The thick tails dive under the threshold for extreme outcomes. Even with a black swan, there isn’t that much under the threshold. The negative skewness graph shows how the large losses move the convergence with the horizontal asymptote towards the present. The same thing happens with small losses, possibly to the same extent horizontally, since the longer tail magnifies the small loss.

Notice that on the left side of the normal, gains happen; on the right, losses. Moore’s technology adoption lifecycle similarly shows the left to be growth and the right to be decline. What saves the right tail is that an acquisition is supposed to bring a 10x multiple into play, but that requires the acquirer to play the merger tornado game. That game is not played well, if it is played at all. Most acquisitions provide exits to investors tied up with interlocking directors and funds.

The skewness happens because the distribution is tending to the normal, but at the moment captured by the data underlying the distribution, data is still missing, and the data is not yet normal. Once the data is captured, the normal will stand upright and centered without skewness, and without skewness, there is no kurtosis.

S-curves

Since I’m on the road, I’ve left the bookstore behind that had a book by a venture capitalist or strategist, no citation, no way to find this book again. But the author said he didn’t see the relevance of S-curves to the companies in his portfolio. Well, most of those firms are built on commodity software, so they are long past the upsides of that software. Consumer software still commoditizes, and that brings a black swan, a missed quarter, to the stock price. When that commoditization happens, the underlying software has to be replaced with a better technology. Replacing it is an s-curve play by the seller of that technology, not the users of that technology. Most of his portfolio would be users of, rather than makers of, underlying technologies. Simple fact: in the late phases of the technology adoption lifecycle, declines in stock prices, hope for a merger upside, no premium on IPO, and nobody dealing with S-curves is the norm. Oh, and the whole thing being about cash. You get rich in an upper-middle-class way, but it’s too late to create economic wealth. Confusion between early-stage financing and early-phase adoption is rife. Talk of early adopters is not in the Moore sense, but the Gladwell sense. And no chasms exist to be crossed. So yeah, no S-curves.

S-curves confuse disruption in the Foster sense because disruptions can be temporary if the innovation’s s-curve slope slips below that of the incumbents. Foster put causes before effects where Christensen focuses on effects absent cause. In the 80s and early 90s, nobody was overserved. It just turned out that the technology left everyone overserved. The small-disk manufacturers were not competing with the large-disk manufacturers. They just served their markets, and the markets got bigger on their own. Alas, the old days.

Kurtosis, being defined by curvature, hinted at defining s-curves in the same way. Curvature is implicit. Mathematically, the curve defines the curvature. We cheat when we claim curvature is the reciprocal of the radius. We don’t know where the center is, so we don’t know the radius, and thus we don’t know the curvature. There probably is some software somewhere that can find the curvature.
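For what it’s worth, the numeric estimate is straightforward. Here’s a minimal sketch (Python, with a hypothetical logistic s-curve standing in for real data) that recovers curvature from the curve itself using kappa = |y''| / (1 + y'^2)^(3/2); the osculating-circle radius is then just 1/kappa.

```python
import numpy as np

def curvature(x, y):
    """Numerical curvature kappa = |y''| / (1 + y'^2)^(3/2) of a sampled curve y(x)."""
    dy = np.gradient(y, x)     # first derivative
    d2y = np.gradient(dy, x)   # second derivative
    return np.abs(d2y) / (1.0 + dy**2) ** 1.5

# Hypothetical example: a logistic s-curve sampled on a grid.
x = np.linspace(-6, 6, 601)
y = 1.0 / (1.0 + np.exp(-x))
k = curvature(x, y)
print("max curvature", round(k.max(), 4), "at x =", round(x[k.argmax()], 2))
# The osculating-circle radius at any point is simply r = 1 / kappa.
```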

S-curve

The red line is the s-curve. The blue horizontal line shows where rapid improvement gives way to slower improvement. The line also shows where investment is cheap and where it becomes increasingly expensive. The large circle gets larger as we go and shifts its center down, so we get a slower and longer curve. At the top of the large circle, we’ve transitioned to that 10x return, if the merger was actually successful.

The s-curve tells us how much change to expect. If you had the s-curve for every contributing technology, then you would have some notion of the rates of change you could expect. We overstate change in our conversations, particularly when we talk about the s-curves and rate of change of the carried content.
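A rough sketch of that expectation-setting, assuming a logistic s-curve over a made-up investment axis: the derivative is the improvement you get per unit of investment, and the blue horizontal line in the figure corresponds roughly to where that rate starts falling.

```python
import numpy as np

# Hypothetical s-curve: performance as a logistic function of cumulative investment.
investment = np.linspace(0, 10, 1001)
performance = 1.0 / (1.0 + np.exp(-(investment - 5.0)))
rate = np.gradient(performance, investment)   # improvement per unit of investment

inflection = investment[rate.argmax()]
print(f"improvement is cheap up to roughly investment = {inflection:.1f};")
print("past that point each additional unit of investment buys less performance")
```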

Convergence and Divergence

Today’s reading was “Concepts and Fuzzy Logic,” edited by Radim Belohlavek and George J. Klir. As editors, their goal for this book was to foster a return to the use of fuzzy logic within the psychology of concepts and among mathematicians. I’ve always seen ideation as being convergent or divergent, but over the life of a conceptual model, there are several convergences and divergences. The editors here sought to foster a return to a convergent conceptual model that previously converged and later diverged.

So we start with the verbs, with the tokens with which we parse the adoption of the discontinuous innovation. The drivers at this stage are those driving bibliographic maturity. We converge or diverge. In the convergence, we merge separate disciplines. The conceptual model being adopted is the platform technology, the carrier. The disciplines bring their carried content into the mix. The carrier is under adoption, and the newfound applications in the disciplines, the carried, are under adoption as well. Those applications make the business case for those in the current and near-term pragmatism steps. Those applications and the business cases will change as we approach the mid-term and long-term pragmatism steps.

Convergent or Divergent

In a product, care must be taken with the pragmatism steps. Like pricing bifurcations due to communications channel isolation, the business cases are specific, and the reference cases that will be adopted by a population on a pragmatism step are likewise specific. The early adopter’s success will not drive laggards to buy. But that is the macro view of adoption phases, where the pragmatism steps present the micro view.

We start with two populations. Each adopts a conceptualization at its own rate. Each has its own reference base. Once adoption begins, a third population emerges, the adopters. People entering either of the disciplines involved after adoption begins can adopt the idea immediately. This is more pronounced when the conceptualization under adoption is discontinuous. Do students of SEO ever get around to print, or worse, focus groups?

In the case documented in the book, mathematicians (yellow) worked their way towards fuzzy logic. They took the path of the continuous innovation. The psychology of concepts researchers (red) found fuzzy logic and it solved some of their problems, so it was adopted, but they were not working with mathematicians to accelerate the use of fuzzy logic.

Publication in these populations motivates adoption. Those peer-reviewed papers constitute the touchpoints in a content marketing network. Publication is likewise an event. Adoption and de-adoption are fostered by events.

System of Convergences and Divergences

In every adoption, there are collaborators and defectors, in game-theory speak. At some point, a defector succeeded in publishing some claims about how fuzzy logic couldn’t do this or that. These claims were accepted uncritically among psychology of concepts researchers. That led to the de-adoption of fuzzy logic by that population. De-adoption happened only in the psychology of concepts population, driven by the publication of that defector’s claims. This went unnoticed by the mathematicians working in the same space. Again, like price communications isolation providing opportunities, discipline-specific communications channels provided the isolation here.

At least in this convergence, the two disciplines were not putting each other down like the demographers and ethnographers involved in ethnographic demography were. I can’t find that post mentioning that behavior. It doesn’t help that this blog has stretched across three blogging platforms. But, the behavior is typical. Those converging will be some small portion of the contributing domains.

Mathematicians continue to develop fuzzy logic to this day.

After de-adoption, a researcher looked at the claims and found them to be false. This led the editors to realize that they needed an intervention. Their book was part of that intervention. That accelerated readoption.

Realize here that in the readoption, the base population has changed, and the concepts being adopted have changed as well. The mathematicians widened the conceptual model to be readopted while the psychology of concepts researchers were gone.

Looking at the underlying populations, the psychology of concepts population had not completely adopted fuzzy logic, nor did that population completely de-adopt. Those later in the adoption lifecycle never bothered with fuzzy logic. They didn’t go through de-adoption. They did go through readoption eventually.

One of the messy things about the normal distribution representation of the technology adoption lifecycle is that adoption happens in a time series. The population is spread out along that time series. The timeline moves left to right. Each sale, whether counted in seats or dollars, moves us down the timeline. B2B sales are huge moves. The mean becomes the marker where fifty percent of the seats have been sold. The growth side of the curve ends with the seat sitting at the fifty percent mark. This timeline is present regardless of skewness or kurtosis.

The timeline starts with the Dirac function providing the potential energy that drives the lifecycle. After the Dirac function come the Poisson games. Then we move on to the convergence with the normal via sample populations of less than thirty; in statistics, these are Poisson approximations of the normal, which lead us to skewness and kurtosis. Once the sample populations are over thirty, we have a normal that is not skewed. Risks become symmetric. This normal is one of a series of three normals: vertical (carried), horizontal (carrier), and post-merger (whole media, both). The standard normal hides the relative sizes of these normals.
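A minimal sketch of that progression, assuming early adoption events arrive as Poisson counts: a Poisson(lambda) has skewness 1/sqrt(lambda) and excess kurtosis 1/lambda, so both fade as the counts grow and the distribution settles into the symmetric normal.

```python
import numpy as np
from scipy import stats

# Hypothetical Poisson event counts standing in for early adoption data.
for lam in (2, 10, 30, 100):
    sample = stats.poisson.rvs(lam, size=10_000, random_state=0)
    print(f"lambda={lam:>3}  skew={stats.skew(sample):+.3f}  "
          f"excess kurtosis={stats.kurtosis(sample):+.3f}")
# Both summary statistics shrink toward zero as lambda grows, i.e. the
# asymmetric Poisson tends toward the symmetric normal.
```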

The three normals give us a hint towards Moore’s three horizons, which turn the technology adoption lifecycle around. The horizons look at the technology adoption lifecycle in the rear-view mirror as if they are right in front of us. Maybe a backup camera view is a better perspective. The B2B early adopter is barely seen or focused on. It is inconsistent with the present horizon.

Anyway, those two populations are now a third, happily solving psychology of concepts problems with fuzzy logic. The defectors lost. The price-performance or s-curves make the case for adoption. Other things make the case for de-adoption and readoption. The editors here demonstrated the role of the intervenor, or in most cases, the near-monopolistic, market-power-positioned market leader that so many programmers abhor. That market leader does much to make the category happen and thrive.

So what is a product manager to do? Start with understanding the conceptual models that comprise your product. Understand the adopting populations for each. Those populations are not on the same page and don’t adopt at the same rates. Those domains do not inject change into your product at the same rates. Those populations might be deviating away from your product due to de-adoption of the underlying conceptualization. Yes, get someone to stay on top of the changes in each of those domains. Know when a defection is happening. That defection might disrupt you. That’s classic in the sense of how the hell would you, the product manager, have known. It’s not about competition. It’s about conceptualizations. They change. They oscillate. They own you and your product if you’ve taken them into your product or service. They happen in the carrier and the carried of the media we play in.

Likewise know your s-curves, aka your price-performance curves. If they touch your product, know them. Sure, you can’t deal with the fabrication plant investment issue, but it will throttle your product if you need that fab.

A Discontinuity in a Sequence

August 22, 2016

In my last post, The Grid, we looked at how grids imprison sequences. We discovered a discontinuity, a hole, among the sequences laminated into the larger sequence, the sequence of differences between z-score values. I called them out and left much unsaid. We’ll continue that discussion in this post.

In mathematics, we have holes in our graphs. We have holes in what each of us knows about math. In Algebra class, we’re restricted to the reals, so we’re told no solution exists. It turns out many of those solutions are complex numbers, not reals. There are plenty of holes, potholes.

Then, we have asymptotes. We can approach them, but we can’t cross them with a function because they are manifolds, something that falls into that wide category of math we don’t know yet.

I remember stepping into a gopher hole. After that, I kept a close eye on the ground where my feet were stepping. One day a lieutenant colonel stopped his staff car so we could have a conversation about why I didn’t salute his staff car. “Gopher holes, sir.” Not that I had to worry; my colonel would have laughed the incident off. It was one of those days when the graph you live in has a few new nodes and the graph’s normal distribution changes.

The z-score sequence is directed from core to tail–away and towards. Oddly, humans use the same kind of dimension, technically a half of a dimension. We are 2.5-D beings, not 3D beings. But, we round off dimensions for our mathematical convenience.  If it’s not easy, it’s not math–easy being very relative. Consider that z-score sequence to be a vector. Consider the hole to accommodate another intersecting vector that for the moment we will consider orthogonal, or simply perpendicular.

01 Bundle Vector and Orthogonal Vector

Being orthogonal in statistics means that the vectors intersecting in that manner are independent, aka not correlated. The cosine of 90 degrees is zero, so the correlation is zero, and the vectors are not correlated.
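A quick sketch of that statement with made-up vectors: for mean-centered data, the cosine of the angle between two vectors is their Pearson correlation, so roughly orthogonal vectors come out roughly uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=1000)
b = rng.normal(size=1000)            # drawn independently of a

# Center, then compute the cosine of the angle between the vectors.
a_c, b_c = a - a.mean(), b - b.mean()
cosine = a_c @ b_c / (np.linalg.norm(a_c) * np.linalg.norm(b_c))
print("cosine (= correlation):", round(cosine, 3))           # near 0
print("np.corrcoef agrees:    ", round(np.corrcoef(a, b)[0, 1], 3))
```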

The vector passing through the hole in the z-score sequence has its own distribution. In the end, the data comprising that distribution will be added to the z-score sequence’s distribution. For now, that distribution is unknown, and like all unknowns constitutes a source of risk.

02 Bundle Vector and Orthogonal Vector w Risk Entry Point

Now, we can imagine a flow through the subsequences. Imagine each layer as a pipe. That gives us some plumbing, aka some fluidics. No, I’m not going there tonight. But, I did draw it just to assess its probabilities. Of course, I ignored some of the subsequences. In modeling, you put in what you think is important and you leave out the rest.

03 Risk Flows

Just for the Bayesian priors, s, t, and u all started with a probability of 0.50. That gave us the probability of st after the first mix, st=0.25. Then we dealt with the second mix, which had us adjusting the probabilities so they summed to 1.00, leaving us with p(st)=0.333 and p(u)=0.667. Oh, and we’ve crossed an approximation boundary.
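The bookkeeping, spelled out as a small sketch under the same assumptions (independent flows, priors of 0.50 each):

```python
# Priors for the three flows, assumed independent.
p_s = p_t = p_u = 0.50

# First mix: s and t combine, so p(st) = p(s) * p(t).
p_st = p_s * p_t                      # 0.25

# Second mix: st meets u; renormalize so the two branches sum to 1.00.
total = p_st + p_u                    # 0.75
p_st, p_u = p_st / total, p_u / total
print(round(p_st, 3), round(p_u, 3))  # 0.333, 0.667
```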

I finally gave in to reading David Hand’s “The Improbability Principle.” Hand refers to Borel’s theorem about the impossibility of events with sufficiently small probabilities. Borel wanted us to understand that p=1 and not more than 1. It takes a while to get to the point. Borel is modeling via probability, so the impossible events are left out, but due to Borel’s theorem, we are assured that we can simplify the situation via omission and keep going, all things being logically consistent.

We are not leaving the hole out. Everybody else probably has left it out. It’s not in the z-score table screaming out to be seen. We stumbled across it with much labor. But, we will start with the vector being orthogonal. I took a top-down view for the next graphic.

Evolution Top Down

Here we start at the global maximum of the z-score differences sequence, the axis of symmetry or rotation, on the left. The sequence runs to infinity somewhere off the page to the right. The hole appears in light blue. The hole is where the sequence vector intersects the orthogonal vector. The long-term mean will come to rest at the intersection. The r variable is the indicator of correlation. The angle between the sequence vector and the actual vector (shown in red), theta, illustrates a positive correlation. So the distribution will come to rest on the actual vector (red).

We started with a surprise unknown at the hole. Once discovered, we have to find its measure. So we assert the distribution’s existence. This has the effect of putting a Dirac function at the center of the distribution. With more data, we have a Poisson distribution. We can use that Poisson distribution to approximate the normal distribution until we have collected 30 or more data points. The figure is wrong, but I had to make the Poisson distributions large enough to show up. The Poisson distributions would still be inside or under the normal distribution. As the Poisson approaches the normal, the mean moves around until it settles at the core intersection, aka the mean as shown in the diagram, and the distribution would exhibit skewness and kurtosis.

The Process

Here I show the evolution of that hole. The Dirac function generates a line at infinity, here labeled PE, as in potential energy. Potential energy is used here to hint at information physics. Strong writing on information physics puts it as potential energy being position and not some form of energy, just a physics bookkeeping sleight of hand. Next, the Poisson distribution is generated along the line of positive correlation in its continuous form (blue line and blue area). Poisson distributions speak loudly to the myth of deregulation being valuable in a business. The constraint, here a policy constraint (gray), moves the probabilities stretching out to infinity and concentrates them into the histograms inside the constraint, which makes the business more focused and less costly. Beware of this myth. The constraint generates the higher histograms (red volumes with orange tops) in the discrete form and generates the higher curve (dark red) as opposed to the original curve (blue) in the continuous form. Constraints create value.

Last, the normal distribution reaches its equilibrium distant from the Poisson distribution on the timeline (gray). The normal has lost the directional sense that the Poisson distribution provided. The data is close in distance but spread out over time. The potential energy of the assertion that generated the Dirac signal flows down to the normal and beyond as the normal gets wider and loses height, aka becomes flat. The normal here is situated in Euclidean space. The Dirac and Poisson are situated in hyperbolic space. Beyond the normal shown, where the normal becomes flat, those normals find themselves in spherical space. Financial analysis as it is conducted today is carried out in spherical space. In that space, multiple analyses give good answers. In hyperbolic space, no analysis gives good answers.

Think of your data efforts as dynamic undertakings. Statistics uses the static view as the means to honest statistics; dynamics are prohibited. Statisticians take snapshots, but technology adoption is a dynamic proposition.

Standard normals hide much. All normal distributions look the same in the standard normal form. At times, seeing the real normal will tell us much.

The Grid

August 18, 2016

It’s been said of mathematical proofs that they start somewhere and end somewhere else. Grids behave in the same manner. Grids might be rectangular or square.

Grid 4x10

Grids might be laid out on some modulo, which greatly restricts their shape and how they shape the content they contain, or in our verbiage “carry.” In the end, a grid starts somewhere and ends somewhere else.

Mod 10 Grid

Each of the rows could have kept on going, but the rule about row population prevents this, and instead, puts the red numbers on the next line.

A table of z-scores takes an infinite ray and chops it up at decreasing and later increasing intervals. The z-score table in the back of my statistics book gives the wrong impression when it chops the entries up, ten z-scores to a row, modulo 10. The shape of the table controls the shape of the carried z-scores. The z-scores have their own shape, but it is lost here.
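A small sketch of the point, building a hypothetical z-score table and letting a modulo-10 carrier impose its shape on the carried sequence:

```python
import numpy as np
from scipy import stats

# The carried content: cumulative probabilities for z = 0.00 through 3.99.
z = np.round(np.linspace(0.00, 3.99, 400), 2)
table = stats.norm.cdf(z)

# The carrier: a modulo-10 grid, rows of ten entries each.
grid = table.reshape(-1, 10)
print(grid.shape)                 # (40, 10): the grid's shape, not the sequence's
print(np.round(grid[0], 4))       # first row, z = 0.00 .. 0.09
```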

Table As Media

Just to make the table as media reality clearer, I’ve changed the carrier, the grid, as I changed the number of columns. I changed the metadata or meta carrier to change the number of columns. Being a carrier or the carried is a matter of shifting contexts in the stack.

Table Carrier Modified Meta Carrier

Oops! This carrier is smaller than the last. We’ve run out of carrier before we’ve run out of carried content. Those excess numbers fall into a jumble on the floor. Some of the numbers that remained in the table did not move. Others moved. I’ve highlighted the ones that did not move. They remind me of Ito processes, processes with fixed-size memories. A Markov process is an Ito process with zero memory (n=0). In our table, the rows are memories that vary between zero and ten (0 ≤ n ≤ 10). This memory problem is what the Hilbert curve was invented to solve. A value placed on a Hilbert space-filling curve never moves. Hilbert curves forget nothing in our Ito process sense, even as the resolution or densities vary. In terms of the last post, Matrix Composition, the processes never move even as the customers and the products move on.

Table Sequence and Memory

When the carried is a sequence, it remains a sequence. The grid becomes sparse or ceases to be a rectangle or a square when the sequence dances. z-scores are such a sequence. The z-score sequence is really a collection of sequences.

Sequence of Differences Without Modulo 04

Here I’ve put each sequence making up the larger sequence on its own line. We put a parsing rule in place: the first number that is larger than the previous number goes to the next line. Then we add the next numbers of equal value, pushing back to the front indicated by the red vertical line. This works until the new line is longer than the prior lines. Then we add another rule: push the front of the lower-value numbers further to the right and add spacers or holes on the lines above where necessary, so the lower values are aligned at their front. Spacers change the shape of the surface of the curve. Holes run through the solid mass of the curve. Those two rules let the sequences express their “natural” shape. The grid is going where it will. The shape of the curve, the shape the grid will follow, might surprise you.
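A sketch of that first parsing rule, assuming (my reconstruction, not the original layout) that the larger sequence is the run of differences between successive four-decimal z-score table entries, held as integers in units of 0.0001 so comparisons are exact:

```python
import numpy as np
from scipy import stats

z = np.linspace(0.00, 3.99, 400)
table = np.round(stats.norm.cdf(z) * 10_000).astype(int)   # 4-decimal entries as integers
diffs = np.diff(table)                                      # the larger sequence

# Rule: a number larger than the previous one starts the next line.
lines, current = [], [diffs[0]]
for prev, d in zip(diffs, diffs[1:]):
    if d > prev:
        lines.append(current)
        current = []
    current.append(d)
lines.append(current)
print(len(lines), "subsequences; lengths of the first five:", [len(s) for s in lines[:5]])
```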

Iterations and releases would behave similarly. If you put too much in an iteration, you end up pushing the boundary of the next iteration or release. Or you move the current iteration into the next release and ship what you have, a working iteration.

As a product manager, are you imposing a modulo on your roadmaps, or are your roadmaps going where they go without enforcement? Are you mining the shape of your roadmaps for surprise? Yes, we impose some rules about delivering value in each release. We have an upgrade tempo, but the functionality carried by the roadmap dances to its own shape.

Are your carriers clearly separated from your carrieds? Are your populations facing your carriers or your carrieds? Remember that the IT horizontal is carrier facing. Most of what we do these days is likewise carrier facing even though we might be selling to consumers. Are we turning consumers into administrators with this carrier focus?

The push rule provides a new kind of outcome if we were being probabilistic about outcomes. Z-scores have holes in them.

Matrix Composition

August 14, 2016

Watch this first, “Matrix algebra as composition.” A firm is a sequence of matrix multiplications. When we do anything, we are left with a need for each transformation, a sequence of such, and the evolution of that sequence over time. Your fast followers won’t match your evolution, and they won’t match your sequence, your composition. They will start somewhere else, and go directly to the product emerging from your composition. The fast follower will duplicate your output without duplicating your firm.
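A toy sketch of the idea with hypothetical stages: the firm is the composition of its transformations, and copying the output vector copies none of that composition.

```python
import numpy as np

# Hypothetical stages of the firm, each a linear transformation.
build   = np.array([[2.0, 0.0], [0.0, 1.0]])   # double the core capability
package = np.array([[1.0, 0.5], [0.0, 1.0]])   # fold services into the offer
sell    = np.array([[1.0, 0.0], [0.3, 1.0]])   # convert capability to revenue

firm = sell @ package @ build                  # the composition; rightmost applies first
seed = np.array([1.0, 1.0])                    # the initial technology and effort
print("output:   ", firm @ seed)

# A fast follower copying only the output copies the vector, not the sequence;
# reorder the stages and the composition, hence the evolution, changes.
print("reordered:", (build @ package @ sell) @ seed)
```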

In the competition, if you insist on calling it that, your output fits your customers and hopefully it fits your near-term prospects, the prospects on the next pragmatism step. Your output doesn’t fit your competitor’s customers. Notice that your feedback only fits your existing customers, aka your economies of scale. We consume our market allocation at times in seats and at other times in dollars in addition to seats. We do not consume our competitor’s market allocation. We convert our prospects into buyers of the system, then we immediately market to them as repeat, continuing customers. This latter part is where software companies captured their increasing returns. If the marketing does not bifurcate, we’re selling a product with very high upgrade costs. More money, sure, but bad money.

With discontinuous innovations, we start off with a client, just one, but a firm, not a single individual, with a wide width of use cases to cover. We start with a lot of potential. We picked that client with our bowling alley strategy in mind. We pick one in the middle of the industrial classification tree, so we can move up or down the tree as we go. That enables us to span not just the firm, but the whole industry, the whole ecology, the whole value proposition. Eventually, we will be in the simpler place described in the previous paragraph. But our composition in matrix terms is deep. Our fast followers are thin. So keep your cards to yourself and fake the tells, so the competition chases its imaginary illusions instead of you.

The differences across the technology adoption lifecycle are immense. We hire for each function, we tune each function, then we cross a technology phase boundary and change the focus of our functionality. Call this latter thing forgetting. But that means we cannot repeat the function in the future when the demand for another discontinuity requires it. Apple is stuck now. The length of time that a company is stuck is a reflection of how much it forgot. Repeated discontinuous innovation requires remembering, rather than forgetting. Repeated discontinuous innovation requires an organizational structure that can improve its processes and its customer knowledge. Not the stuff of innovation consultants. Even if Christensen suggested it long before his effect-cause confused disruption idea became the rage. The cost accountants couldn’t go there. So the organizational structure required goes unaddressed.

But what of Christensen’s separation, as he called it? Everyone is probably thinking separation as in spin-outs or its cousins. But there is another way to separate. It’s hard work. It doesn’t anchor itself to economies of scale. Discontinuous innovations require new markets that might merge, or not, decades down the road into one of the company’s economies of scale. The company has a tempo modulating continuous innovation with discontinuous innovations. The former serves existing customers. The latter finds new, never before addressed customers.

Software as media provides a hint. In the software as media model, we split the carrier from the carried. The distinction is difficult at times. What is, strictly speaking, about the carrier, the software, and what is about the content of the domain? Addition is a carrier (red) of the carried things being added (blue), so 01+01=10. But if it is something carrier being added, like loop indexes, the whole thing would be carrier, as in 01+01=10.

An organization is also a medium, so it has carrier and carried layers. The carried layer would be focused on the customer. The carrier layer would be focused on things that don’t require customer inputs, like the process of shipping goods to the customer. The staff that had customer relationships would flow through the firm with the customers. The staff that had process knowledge would stay in the phase-specific organizations and keep improving those phase-specific processes.

The technologies would flow through the organization as well. The technology would be productized at the B2B early adopter client engagement. The technology and the product would then flow into the vertical phase, then the IT horizontal phase, and beyond. But when the bowling alley has a free lane, the next technology would take it. The processes across the phase-specific divisions would be fully loaded all the time, as would the staff attending to those processes.

The IT horizontal oscillation switches the focus from the carried to the carrier and the next adoption phase shifts the focus back to the carried. In this situation, the customer specific staff would not be fully loaded, but would have time to gain more in depth knowledge of the domain constituting the carried.

A company organized in such a way would have to manage the separation. Cross talk between the managers in the different phases needs to be suppressed. A best practice in the tornado, “free,” doesn’t work beyond the tornado. Sales reps love tornados, but tornado sales forces are unlike the sales forces serving both retained customers and new prospects. “Free” fails in all other contexts except the merger tornado.

Each phase has its own operational foci. A factor analysis of each would reveal that the organizations in a specific phase are alike, and different from the organizations in all the other phases. Each organization has its own factor analysis: as in factors and factor weights. The parent company would look like a holding company and have holding company problems, like understanding that there are no synergies across the held organizations.

But, I’ve thought about this long enough.

Know where you are. Don’t do what everybody else is doing, particularly those companies that don’t know where they are. Know that funding phases are not synchronized with adoption phases. Many of those so-called technology companies are not technology companies at all. Most of them are technology users, not technology makers. They are coding content, not carrier. They are doing continuous innovation and throwing away the results from discontinuous possibilities because the hyperbolic realities don’t look like the familiar spherical geometries they are used to. Yeah, I know, too much.

More On Skew and Kurtosis

August 9, 2016

After the last blog post, Donuts, I was still puzzled about where skew and kurtosis come from. I’ve chased enough rabbits into their holes with this one. I’m tired of the obsession, so I’ll write this one up and let go of it until I cross paths with it again down the road.

It was stats. It became math. It’s on its way to becoming a set of tools, like the black swan, that can be applied within product management, in the sense of: here is the investment, how do we code down the kurtosis? I found mentions of skew risk and kurtosis risk. They are not a playground yet.

Skew and kurtosis were and still are descriptive. Later came the “summary statistics” that our spreadsheets generate for us, but read that again, “summary statistics.” With kurtosis, one number is describing two things. Well, those two things are connected or are part of one thing, a thing never described anywhere in the literature I’ve cruised through, another donut. Then there is the matter of that angle, which I found a hint for after I found it on my own. The angle accounts for the two kurtoses, and the new donut.

The literature talks about moments. Skew is the third moment; kurtosis, the fourth. Then there is another view that talks integrals, and another that talks derivatives.

For myself, it boils down to derivatives being about inflection points. Three of them: 1) a global maximum, and 2) two concavity-change inflection points. That’s all there is for all of that calculus. There are a few more concavity changes, but no more points. The fifth and sixth derivatives sit on top of the third and fourth. They drive the curve, but don’t present us with any additional inflection points.
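A sketch that checks the count symbolically on the standard normal density: the maximum falls out of the first derivative, the two concavity changes out of the second, and the higher derivatives add shape but no new landmarks of the same kind.

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)      # standard normal density

print(sp.solve(sp.diff(phi, x), x))       # [0]: the global maximum
print(sp.solve(sp.diff(phi, x, 2), x))    # [-1, 1]: concavity changes at +/- 1 sigma
print(sp.solve(sp.diff(phi, x, 3), x))    # 0 and +/- sqrt(3): more concavity changes, no new peak
```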

Inflection Points on the Standard Normal

All the mentions of leptokurtic, platykurtic, and mesokurtic are just terminology from long ago lacking in any numeric definition or reality. Sometimes we are told the data has these characteristics, but we need to keep in mind that we are describing a curve, rather than the data. We use summary statistics and distributions to make the data itself disappear. So whatever is going on is not the result of the data doing anything. The data stands around in lines we call histograms.

One of my early pursuits was a search for slant asymptotes. Well, there are none. There is a horizontal asymptote. It is a cubic rather than a straight line. The cubic crosses the x-axis at the origin. It leaves us wondering where our convergences are with the line “formerly known as the x-axis.” Anyway, when you have a horizontal asymptote, you won’t have any slant asymptotes.

Horizontal Asymptote

Next I looked to extrapolate something I read about setting up bins in regard to a given range of numbers. The binomial approximates the normal when the bins capture the data evenly.

Decision Tree - All bins

The bin widths had to be the same even if the data width doesn’t completely fill those bins. Maybe we only have data to fill the right half of the base of the decision tree.

Decision Tree - Some bins

I didn’t draw a distribution for this decision tree. The distribution will be skewed with a long tail to the left and the short tail to the right. The first box plot below shows what the distribution resulting from the above decision tree will look like. The second box plot is not skewed and is shown for comparison purposes only.

Skewed

When looking at box plots, if the line dividing the box does not divide the box into two equal-size partitions, the distribution is skewed. Likewise, if the tails are not of equal length, even if the box has equal partitions, the distribution is skewed. Likewise, if the outliers, not shown, are not of equal distance from the mode, the distribution is skewed. These outlier skews are sensitive. Measures of coskewness and cokurtosis are about sensitivity in the financial/investment domain. Beware of outliers. I’ve said it before: say no to sales when they present you with deal demands from outliers.
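A minimal sketch of those box-plot checks against a deliberately right-skewed sample (a lognormal, made up for illustration): unequal halves of the box and unequal tail lengths both signal skew.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.6, size=5_000)    # right-skewed by construction

q1, med, q3 = np.percentile(data, [25, 50, 75])
print("upper half of box:", round(q3 - med, 3), " lower half:", round(med - q1, 3))
print("right tail:", round(data.max() - q3, 3), " left tail:", round(q1 - data.min(), 3))
# Both comparisons come out lopsided toward the right, i.e. the distribution is skewed.
```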

The boxplot view gives a hint to the angle driving skew and kurtosis. Keep in mind that without skew, there is no kurtosis, or the kurtosis has a summary statistic value of 3, aka no kurtosis.

The Angle

I ran some lines out from the unskewed mode and the skewed mode. The angle between them ties to kurtosis. I didn’t read this anywhere, but later did find some diagrammatic hints from other writers out on the internet. Notice that the mean never moves and that the vertical line labelled mode is also the mean in the unskewed case. Notice that there are two different kurtosis measures apparent in this view. This is where the summary statistic goes off in the weeds unless it is an index to both kurtoses. Given that we started with the standard normal and deformed it in a consistent manner, the two kurtoses should be correlated and indexed. I’ve not come across such.

Kurtoses are measured by curvature. The kurtosis curves are intrinsic curves. There are no controls off the line as with the Bezier curves we’ve discussed in the past. Curvature comes from a circle generated with a radius whose reciprocal is the curvature, aka 1/r.

The Curvatures

Notice the gap between the blue line and the red one. I couldn’t make that circle big enough. But this two-dimensional view misses that there are kurtoses in every direction around the distribution. Here we’ve shown the largest and the smallest. Those encompassing the distribution would be smaller than the largest and larger than the smallest. Sweeping these kurtoses around would give us a lopsided donut.

Kurtosi.png

I leave it up to your imagination to sweep the ellipse around the core of the distribution to form the donut. I made a mistake by limiting the red lines to the tails of the distribution indicated by the outer circle. The actual radii would extend beyond the circle for the longer tails and not touch the outer circle for the shorter tails.

Have fun with it.

Donuts

June 12, 2016

In Graphs and Distributions, I mentioned that I was struggling with an idea that didn’t pan out. Well, the donut was the troublesome idea. I finally found an explanation of why hypothesis testing doesn’t give us a donut. The null hypothesis contributes the alpha value, a radius of the null, to the test. And the alternative hypothesis contributes the beta value, a radius of the alternative, to the test. You end up with a lense, a math term, hence the spelling. Rotating that lense gives you the donut, as I originally conceived it.

In the process of trying to validate the donut idea, I read and watched many explanations of hypothesis testing. I looked into skew and kurtosis as well. I’ve mashed it up and put it into a single, probably overloaded diagram.

Donut 2

Here we have two normals separated by some distance between their means, as seen from above looking down. We test hypotheses to determine if a correlation is statistically significant. While correlation is not causation, causation would be a vector from the mean of one normal to the mean of another. The distance between the means creates statistical significance. Remember that statistics is all about distance.

In hypothesis testing, you set alpha, but you calculate beta. Alpha controls the probability of a false positive or type I error. Alpha rejects the tail and accepts the shoulder and core, shown in orange. Beta rejects the core and some portion of the shoulder towards the core or center, shown in yellow. Alpha and beta generate the lense shape, shown in green, representing the area where the alternative hypothesis is accepted.
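A sketch of that set-one, calculate-the-other relationship, assuming (my example, not from the post) a one-sided z-test on a mean; the distance between the two means does the work.

```python
from scipy import stats

alpha = 0.05                                   # chosen by the tester
n, sigma = 25, 1.0                             # hypothetical sample size and spread
effect = 0.5                                   # distance between the two means

se = sigma / n ** 0.5
critical = stats.norm.ppf(1 - alpha)           # rejection cutoff, in standard-error units
beta = stats.norm.cdf(critical - effect / se)  # probability of missing a true effect
print(f"alpha = {alpha}, beta = {beta:.3f}, power = {1 - beta:.3f}")
```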

I drew the core touching the lense. This may not be the case. But, two authors/presenters stated that in hypothesis testing, the tails are the focus of the effort and the core is largely undifferentiated, aka not informative.

Then, I went on to skew and kurtosis. Skew moves the core. Kurtosis tells us about the shoulder and tail. The steeper and narrower the shoulder, the shorter the tail. This kurtosis is referred to as light. The shallower and wider the shoulder, the longer the tail. This kurtosis is referred to as heavy. Skewness is about location relative to the x-axis. Since the top-down view is not typical in statistics, the need for a y- or z-axis kurtosis parameter gets lost, at least at the amateur level of statistics, aka the 101 intro class. On the diagram, the brown double-ended arrow should reach across the entire circle representing the footprint of the distribution.

Kurtosis and Tails

The volume under the shoulders and tails sum to the same value. The allocation of the variance is different, but the amount of variance is the same.

One of the papers I read in regards to kurtosis can be found here. This author took on the typical focus of kurtosis as defining core by looking at the actual parameters, parameters about tails, to conclude that kurtosis is about tails.

Notice also that the word shoulder cropped up. I first heard of shoulders in the research into kurtosis. Kurtosis defines the shape of the shoulders. As such, it would have effects on the distribution similar to that of black swans. It changes the shape of the distribution at the shoulders and tails. Tails, further, are not the same when the distribution is skewed, but somehow this is overlooked, because there is only one skew parameter, rather than two or more. This leaves an open question as to what would change the kurtosis over time. The accumulation of data over time would change the skew and kurtosis of the distribution.

Where black swans shorten tails by moving the x-axis up or down the y-axis, kurtosis changes would happen when the probability mass moves to and from the shoulders and tails.

Regression generates a function defined as a line intersecting the mean. In the multivariate normal, there are several regressions contributing to the coverage of the variance under the normal. These regressions convert formerly stochastic variations into deterministic values. Factor analysis and principal component analysis all achieve this conversion of stochastic variation into deterministic or algebraic values. These methods consume variance.

Due to the focus of hypothesis testing being in the tails, core variance is consumed or shifted towards the tails. Alpha defines an epsilon value for the limit of the normal convergence with the x-axis. Alpha essentially says that if the value is smaller than alpha, ignore it, or reject it. Alpha is effectively a black swan.

Since a factor analysis discovers the largest factor first, and increasingly smaller factors as the analysis continues, it constantly pushes variance towards the bottom of the analysis. The factor analysis also acts as an epsilon limiting convergence with the x-axis, again because we typically stop the factor analysis before we’ve consumed all the variance. We are left with a layer of determinism riding on top of a layer of the stochastic, or variance. Bayesian statistics uncovers the deterministic as well.
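A sketch of that largest-factor-first behavior, using principal components as a stand-in for the factor analysis on made-up data with two hidden factors; stopping early leaves the remaining variance unconsumed and still stochastic.

```python
import numpy as np

rng = np.random.default_rng(2)
latent = rng.normal(size=(500, 2))                            # two hidden factors
loadings = rng.normal(size=(2, 6))
data = latent @ loadings + 0.3 * rng.normal(size=(500, 6))    # observed variables plus noise

centered = data - data.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / (singular_values**2).sum()
print(np.round(explained, 3))          # largest share first, tapering toward zero
print("variance left after two components:", round(explained[2:].sum(), 3))
```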

To Tails

A radar is basically a bunch of deterministic plumbing for the stochastic and some mechanisms for converting the shape of the stochastic into deterministic values. This layering of determinism and stochastic is typical.

One term that showed up in the discussion of skewness was direction. Note that they are not talking about direction in the sense of a Markov chain. The Markov chain is a vector representing causation where skewness does not represent causation.

The takeaway here should be that changes in skew and kurtosis will require us to retest our hypotheses just like the retesting caused by black swans. Data collection is more effective in the tails and shoulders than in the core if your intent is to discover impacts, rather than confirm prior conclusions.

Comments are welcome. Questions? What do you think about this? Thanks.

User Stories

May 9, 2016

Before the internet, back when geeks wrote software that they later sold to geeks, there was functionality. Designers today comment about how bad it was. There was no UX. There were no HF people or UI designers or designers of any kind (aka art people). Well, there were software designers (geeks). There were geeks and there were economic buyers. Those economic buyers were not users, but bosses of users usually separated by layers of other bosses.

If you were discontinuous, you didn’t sell to IT, so there was no requirements analyst that later studies found got in the way of collecting requirements, because they insisted that carrier trumped carried.

But somehow software got written and used, software companies sold software, and economic buyers got the competitive advantage they paid for. But, how? Technical writers had to turn functionality into user tasks, trainers had to do task analyses to find out what we now call “the job(s) to be done.” No ethnography was done either. Care was not taken to capture the cognitive model of the users. So, instead, the users were taught how to get from functionality to the tasks or jobs to be done. Users who knew how to think about their jobs and how to do their jobs were taught how devs would think and how devs would do their jobs. Obviously, the mismatch was huge. Unfortunately, the mismatch, the gap, is still there.

Technical writers had to go from a context ID referring to a particular dialog to a task, but only one of the tasks that could be done in the dialog or through it. There was no one dialog one task rule back then. So some tasks fell through the context ID gap. Even today a context ID does not refer to a user story.

Sales applied the feature-advantage-benefit (FAB) framework. Benefit translates to task/user story/job to be done. Sales reps can turn any feature into a FAB statement. Back in the day, everyone compensated for the developer. The developers didn’t notice most of the time.

Technical writers could turn any feature into a task. I remember one particular task in a manual, “Using Independent Disk Pools.” Beware. “Using” is not a task, and this task is a fake task. No user woke up in the morning thinking, “Hey, I get to use the independent disk pools feature today.” No. They woke up thinking, “Hey, I’ve got to set up geo mirroring today.” Setting up is a real task.

User stories can fail just like those feature to task conversions.

Agile succeeded in the internal IT context. But when Agile escaped that context, I can’t say either way. The one vendor I worked with that was Agile failed. There were no communications outside the dev team, and that team was much smaller than the span of control of the VP of Dev. Other people still depend on clear communications about what is getting done. Agile made developers even more reclusive.

So when I encountered the user story tweet, I had had enough of the Agile evangelist. I need to know what the size of the typical deliverable will be, and when it will show up. If you can’t tell me that, you won’t be on my team. Agile, DevOps, this method, that method don’t help me, even if they make Agilists artists uninterested in money.

Somehow Agilists are supposed to be ethnographers, marketers, and managers while keeping their coding skills up to date. The point of all the people making up the rest of the firm is specialization, knowledge, silos, and all those things that make dev hard. Sorry, but I need devs that can code. I don’t believe they can do it all. Just like I don’t believe in the business generalist. Yes, your boss’s boss took your job 101 back in college. He knows it all. Hardly.

But what about using user stories when developing architecture? What about that “ideal architecture?” There is no ideal. There is now and there is yesterday. There are today’s users and yesterday’s users. There are users seen through the lens of the technology adoption lifecycle phases. Then, there are users seen through the lens of the pragmatism steps that fragment the phases into tinier populations.

That pragmatism is a problem for marketing, sales, development and everyone else. When we operate on a scale wider than the pragmatism step, we tend to mix the populations and smudge the addressability and cognitive fitness of the software, marketing, and ultimately, the firm itself. On a pragmatism step, the population references its own members. The early adopters are not in the reference group. They are too weird, too early. The early adopters are on a different step, so their opinions and results don’t matter at all.

People in firms are on different pragmatism steps, so firms are respectively on different pragmatism steps. The people in firms refer to each other to the degree that they are on the same step. Likewise, firms tend to show up at tradeshows with the other firms in their reference base.

This makes the ideal a difficult proposition. A developer could get stuck on a particular pragmatism step. That developer could be very responsive to those users, which just serves to isolate the developer and the application they are developing.

Sales has to address the step of the current prospects. Marketing has to address the steps of the retained, incumbent customer, the step of the installing customer, and the step of the prospect. Way too much is going on. Likewise, the developer can’t just sum the distributions and hope for the best. The segmentations must be preserved.

Those pragmatism steps do give a product manager the ability to price the app differently for each independent population. Each group would never hear what the other group is paying since they don’t reference each other.

Aspect-oriented programming can help with all these segmentations by taking the segmentation into the code itself.

I’m very much a no tradeoffs, no averaging of functionality, no gaps in the cognitive models, and one use case to one feature person. Stop the smudging. Stop the averaging. Stop the trading off. Alas, I want too much.

Architecture is a developer-facing thing. Developers are the users of an architecture. Much of the ease of what developers do these days is due to the underlying architecture. The easier it is to do, the longer it’s been around in the stack. The longer it’s been in the stack, the less likely it’s got a user story written for it.

Much of what developers do these days is about coding carrier functionality. Drawing the lines between carrier and carried is difficult, but it gets harder when you’re drawing the line between carrier and the next layers of the carrier stack. Different populations own different portions of the stack, so there are different terminologies, cultures, perspectives, points of view. The user in the stack is a developer, but a different developer. Who is doing this? The 101 guy or the PhD in this? The developers that think their developer users are just like them are in for a shock. In the old days we could write an API and not worry about it being copied as long as it was easier to use than write or rewrite.

A clear definition of the user is essential. The user story is just part of getting to that clear definition. Keep in mind that form is not the issue.

There is the expert’s cognitive model. It has overcome all the plateaus. Each performance plateau constitutes a segmentation of the population. Not every user has encountered the trick to get beyond this or that plateau. Not everyone is an expert. An application built on that segmented cognitive model will also have to deal with the transitions between those levels of expertise. How will your users get from novice to mid-level performance? Where is the ideal here? The segmentation can help you keep the ideal limited to one particular scope of the cognitive model.

The pragmatism steps get split into carried and carrier as well. Architecture gets split likewise, so the pragmatism segmentation plays here as well, keeping the carried expert clearly separated from the carrier expert. There are two pragmatism dimensions. Have fun with that.

I know I tweeted about other dimensions of the user story as pathway to the ideal architecture. But, it’s been a while.

It’s probably easier today, since the task analysis actually happens earlier before stuff gets written. The 101 ethnography gets done as well. We observe. We interview. But, we are not ethnographers. Spend the money. Encode the cognitive model. We don’t do that today. Instead, we rely on the developer’s idea and hope a population springs up around it. Lean checks that a population actually emerges. Not everything can be lean. Lean is where we are today on the technology adoption lifecycle. Lean would not have gotten us where we are today. We have the stack. We rely on that stack of ole.

Graphs and Distributions

April 28, 2016

Here I am struggling to make some ideas that looked interesting pan out. I’m starting into week three of writing this post when John D. Cook tweets a link to “Random is as Random Does,” where he reminds us that we are modeling deterministic processes with random variables. I’ve hinted towards this in what I had already written, but in some sense I’ve got it inside out. The point of statistical processes is to make deterministic the deterministic system that we modelled via randomness. I suppose I’ll eat a stochastic sandwich.

Having browsed David Hand’s “The Improbability Principle,” I’m trying to find a reason why events beyond the distribution’s convergence with the x-axis happen. The author probably proposed one. I have not read it yet.

Instead, I’m proposing my own.

The distribution’s points of convergence delineate the extent of a world. But even black swans demonstrate why this isn’t so. A black swan moves the x-axis up the y-axis and pulls the rightmost point of convergence closer to the mean, or into the present from some point in the future. If you projected some future payoff near the former convergence, well, that’s toast now. It’s toast not because the underlying asset price just fell, but rather because the future was just pulled into the present.

When the x-axis moves up the y-axis, the information, the bits, below the x-axis are lost. The bits could disappear due to the evaluative function and remain in place as a historical artifact. In the real world, the bits under the x-axis are still present. The stack remains. Replacing those bits with bits having an improved valuation is a key to the future. But the key to getting out beyond the convergence is understanding that there is some determinism that we have not yet modelled with randomness.

While I labeled the area below the x-axis as lost, let’s just say it’s outside consideration. It never just vanishes into smoke. Newtonian physics is still with us.

101 Black Swan

A few weeks ago, somebody tweeted a link to “A Random Walks Perspective on Maximizing Satisfaction and Profit” by M. Brand. Brand startled me when he talked about a graph in graph theory as being a collection of distributions. He goes on to say that an undirected graph amounted to a correlation, and a directed graph amounted to a causation. The problem is that the distributions overlap, but graph theory doesn’t hint at that. Actually, the author didn’t say correlation or causation. He used the verbiage of symmetric and asymmetric distributions.

So that left me wondering what he meant by asymmetric. Well, he said Markov chains. Why was that so hard? The vector on the directed graph is a Poisson distribution from the departure node to the arrival node, a link in a Markov chain. The cumulative distribution would be centered near the mean of the arrival node, but the tails of the cumulative distribution would be at the outward tails of the underlying distributions. The tail over the departure node would be long, and the tail over the arrival node would be more normal, hence the asymmetry.

In the symmetric, or correlation, case, the cumulative distribution is centered between the underlying distributions with its tails at the outward tails of the underlying distributions.
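A rough sketch of the two cases, assuming (my reading, not Brand’s notation) that each node carries a normal and the combined distribution is their mixture: equal weights give the symmetric case centered between the nodes, while weighting the arrival node gives the asymmetric, Markov-like case pulled toward it.

```python
import numpy as np

rng = np.random.default_rng(3)

def mixture_sample(weights, locs, n=100_000):
    """Sample a mixture: pick a node by weight, then draw from that node's normal."""
    choice = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(loc=np.array(locs)[choice], scale=1.0)

symmetric  = mixture_sample([0.5, 0.5], [0.0, 4.0])   # undirected / correlation case
asymmetric = mixture_sample([0.2, 0.8], [0.0, 4.0])   # directed / Markov transition case
print("symmetric mean: ", round(symmetric.mean(), 2))   # ~2.0, centered between the nodes
print("asymmetric mean:", round(asymmetric.mean(), 2))  # ~3.2, pulled toward the arrival node
```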

The following figure shows roughly what both cumulative distributions would look like.

102 Directed and Undirected Graphs and Distributions

The link in the Markov chain is conditional. The cumulative distribution would occur only when the Markov transition happens, so the valuation would oscillate from the blue distribution on the right to the gray cumulative distribution below it. Those oscillations would be black swans or inverse black swans. The swans appear as arrows in the following figures. Different portions of the cumulative distribution with their particular swans or inverse swans are separated by vertical purple lines.

103 Directed Graph Distributions and Their Swans

The conditional nature of the arrival of an event means that the cumulative distribution is short lived. A flood happens causing losses. Insurance companies cover some of those losses. Other losses linger. The arrival event separates into several different signals or distributions.

Brand also asserts that the cumulative distribution is static. For that summative distribution to be static, the graph would have to be static. Surprise! A graph of any real-world system is anything but static.

A single conditional probability could drag a large subgraph into the cumulative distribution, reducing the height of the cumulative distribution and widening it greatly.

104 Subgraphs

In the figure, two subgraphs are combined by a short-lived Markovian transition giving rise to a cumulative distribution represented by the brown surface. Most of the mass accumulates under the arrival subgraph.
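As a back-of-the-envelope check on that height-and-width effect, here is a sketch that pools a large, diffuse subgraph into the arrival subgraph’s distribution; both subgraphs are invented for illustration.

```python
# Back-of-the-envelope sketch: pooling a large, diffuse subgraph into the
# cumulative distribution widens it and lowers its peak. Parameters are
# illustrative assumptions, not from the post.
import numpy as np

rng = np.random.default_rng(1)

arrival_subgraph = rng.normal(loc=0.0, scale=1.0, size=50_000)
dragged_subgraph = rng.normal(loc=-4.0, scale=3.0, size=50_000)  # large, diffuse

before = arrival_subgraph
after = np.concatenate([arrival_subgraph, dragged_subgraph])

for name, sample in (("before transition", before), ("after transition", after)):
    std = sample.std()
    peak = 1.0 / (std * np.sqrt(2 * np.pi))  # peak height of a normal with that spread
    print(f"{name}: std={std:.2f}, approximate peak height={peak:.3f}")
```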

Our takeaways here are that as product managers we need to consider larger graphs when looking for improbable events and effects. Graphs are not static. Look for Markov transitions that give rise to temporary cumulative distributions. Look for those black swans and inverse black swans. And, last, bits don’t just disappear. In information physics, bits hold a position. Information replaces potential energy, so the mousetrap sits waiting for the arrival of some cheese-moving event while other things happen. A distribution envelops the mouse and then vanishes one Fourier component at a time.

But, forgetting the mouse, commoditization of product features is one of those black swans. This graph/distribution stuff really happens to products and product managers.

A Spatiotemporal View

March 10, 2016

A few days ago I pulled Using Business Statistics by Terry Dickey off the shelf of the local public library thinking it would be a quick review. It’s a short book. But, it took a different road through the subject.

Distance, the metric of a geometry, is the key idea under statistics. A z-score, for example, measures distance from the mean in standard deviations. A standard deviation is an interval, a unit measure. Variance is a square, an area. And area is the gateway to probability.
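A small sketch of that chain, from distance to probability; the observation, mean, and standard deviation below are made up for illustration.

```python
# Small sketch of distance as the idea under the statistics: the z-score
# measures distance from the mean in standard deviations, and the area under
# the curve converts that distance into probability.
# The observation, mean, and standard deviation are illustrative assumptions.
from math import erf, sqrt

def z_score(x, mean, sd):
    """Distance from the mean, in units of standard deviation."""
    return (x - mean) / sd

def normal_cdf(z):
    """Area under the standard normal curve to the left of z."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

x, mean, sd = 130.0, 100.0, 15.0          # illustrative values only
z = z_score(x, mean, sd)
print(f"z = {z:.2f} standard deviations from the mean")
print(f"area to the left (probability) = {normal_cdf(z):.3f}")
```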

Using the standard deviation as a unit measure, we can mark off the x-axis beyond the convergences of our distribution, and use that x-axis as the basis of our time series. I’ve used this time series idea under the technology adoption lifecycle (TALC), so looking at our past and our future as fitting under the TALC is typical for me.

That was the idea, so I tried it, but the technology adoption lifecycle is really a system of normal distributions spread out over time. The standard deviations for each of those normal distributions would be different. They would be smaller at first and larger later.  The geometries for each of those normal distributions would be different as well.
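One way to picture that, as a sketch rather than anything from Dickey or Moore: treat the TALC as a sequence of normal distributions whose standard deviations grow as adoption moves from the early phases to the late ones. The phase names, centers, and sigmas are assumptions.

```python
# Sketch: the TALC as a system of normal distributions spread over time,
# with standard deviations that are smaller at first and larger later.
# The phase list and its parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

phases = [
    ("early adopter",  2.0, 0.5),   # (name, center on the timeline, sigma)
    ("early majority", 5.0, 1.0),
    ("late majority",  9.0, 2.0),
    ("laggard",       14.0, 3.0),
]

for name, center, sigma in phases:
    sample = rng.normal(center, sigma, size=10_000)
    low, high = center - 2 * sigma, center + 2 * sigma
    print(f"{name:15s} sigma={sample.std():.2f}  spread (center +/- 2 sd) = [{low:.1f}, {high:.1f}]")
```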

Smaller and larger are relative to the underlying geometry and our point of view. In the early phases of the TALC, the geometry is hyperbolic. The now appears big, and the future appears smaller and smaller, so projections will underestimate the future. Hyperbolic geometries also give us things like a taxicab geometry with its trickier metric, which brings with it much risk, and the world lines of Einstein. Things are definitely not linear in hyperbolic geometries. Across the early phases of the TALC, the Poisson games of the early adopter phase tend to the normal, and the geometry achieves the Euclidean at the mean of the TALC. Moving into the later phases of the TALC, the geometry tends to the spherical. Spherical geometries require great circles, but provide many ways to achieve any result, so analyses proliferate, none of them wrong, which makes things less risky.
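The taxicab metric mentioned above is easy to see in code; here is a quick sketch comparing it with the familiar Euclidean metric. The two points are arbitrary.

```python
# Quick sketch of the taxicab (L1) metric versus the Euclidean (L2) metric,
# to show how the choice of metric changes what "distance" means.
from math import hypot

def taxicab(p, q):
    """L1 distance: blocks walked on a grid."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """L2 distance: straight line, as the crow flies."""
    return hypot(p[0] - q[0], p[1] - q[1])

p, q = (0.0, 0.0), (3.0, 4.0)   # arbitrary points
print("taxicab  :", taxicab(p, q))    # 7.0
print("euclidean:", euclidean(p, q))  # 5.0
```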

All of those geometries affect that unit measure on the x-axis.

Discontinuous populations generate multiple populations over the span of the TALC, so the statistic itself changes as well. That is what drives the proliferation of standard deviations. Our customer population is small and our prospect population large. The customer population grows with each sale, with each seat, and with each dollar, and the prospect population shrinks by the same amount. It’s a zero-sum game. The population under the TALC is fixed. That population is about the underlying enabling technology, not some hot-off-the-presses implementation of a product or a reproduction. Products change as the adoption of the underlying technology moves across the populations of the TALC.
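A toy sketch of that zero-sum bookkeeping; the pool size and conversion count are invented for illustration.

```python
# Toy sketch of the zero-sum point: the population under the TALC is fixed,
# so every conversion moves one prospect into the customer column.
# The pool size and number of conversions are illustrative assumptions.
TOTAL_POPULATION = 1_000          # fixed population under the TALC

customers, prospects = 0, TOTAL_POPULATION

for sale in range(250):           # 250 conversions, purely illustrative
    customers += 1
    prospects -= 1

assert customers + prospects == TOTAL_POPULATION
print(f"customers={customers}, prospects={prospects}, total unchanged")
```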

Big data with its machine learning will have to deal with the population discontinuities of reality. For now we will do it by assuming linearity and ignoring much. We already assume linearity and ignore much.

Across the TALC, pragmatism organizes the populations. That organization extends to organizing the customers as people and companies. Using negative and positive distances from the mean, similar to +/- standard deviations from the mean, we can place companies and their practices under the TALC. We could even go so far as to break an organization down to the individual executive and their personal locations on the TALC. Even an early adopter doesn’t hire a company full of early adopters.

Delivering functionality is an early-phase phenomenon on the negative standard deviation side of the TALC. Design is a late-phase phenomenon on the positive standard deviation side of the TALC.

The B2B early adopter and crossing the chasm are early-phase. But why mention that? Well, I’m tired of seeing them show up on the opposite side of the mean out here on Twitter. The consumer-facing SaaS vendor is not crossing the chasm. And their early adopters are B2C. Confusion ensues. Place gets lost. I should ignore more.

Thanks to Jon Gatrell’s comment on The Gods Must Be Crazy post for pulling me back to this blog. Another recession has intervened in my job search, so I’m still looking, but there’s nothing to find, so there is no reason to focus on that search to the exclusion of writing this blog. Thanks for letting me know that someone is still reading. WordPress stats don’t tell us much.

 

More on Geometry

February 6, 2016

A few days ago, I dropped into B&N for a quick browse through the math section. There wasn’t much new there, so off to the business section. There was a new book about innovation, no I didn’t write down a citation, innovation in the sense of it being a matter of the orthodoxy, aka nothing new in the book. It mentioned that collaborations between companies should create more value than the sum of the individual parts, aka synergy. A former CEO of ITT settled this synergy thing. He called it a myth.

Tonight, I came across another of Alexander Bogomolny’s (@CutTheKnotMath) tweets. This one showed how a cyclic quadrilateral, or two triangles sharing a base, would give rise to a line between the opposite vertices, which in turn gives rise to a point E. See his figure.

I look at the figure and see two organizations, the triangles, sharing bits at the base. Those triangles represent decision trees. The base of such a triangle would represent the outcome of  the current collection of decisions, which I’ve called the NOW line. The past is back towards the root of the decision tree, or the vertex of the triangle opposite the base.

It gave me a tool to apply to this issue of synergy. To get that synergy, the triangles would position themselves on a baseline so that the bases of the individual triangles overlap where they give rise to those synergistic bits. But they only overlap for a few bits, not all of them, as in that cyclic quadrilateral. I built some models in GeoGebra. I found the models surprising. I’m not a sophisticated user yet, so there are too many hidden lines.

I was asking the geometry questions that I mentioned a few posts back, where I drew many figures about what circumstances give rise to non-Euclidean geometries. So as I played with my GeoGebra models, I was always asking where the diameter was, and that was not something GeoGebra does at the click of a button. It does let you draw a circumcircular sector, which looks like a pie with a slice removed, and draw a midpoint of the line opposite a given vertex. That was enough to give me a simple way of seeing the underlying geometry of a triangle. When half the pie is removed, a line between the two points on the circumference is the diameter of the circle, so the triangle is Euclidean. I may have said that a triangle is always Euclidean in earlier posts, but I can see how that was wrong. To be Euclidean, the base of the triangle has to be on a diameter of the circle. A figure will clear this up.
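In the meantime, here is a small numeric sketch of the same diameter test: compute a triangle’s circumcenter and check whether the base passes through it. The coordinates and helper functions are mine, not GeoGebra’s.

```python
# Sketch of the diameter test: a triangle's base lies on a diameter of the
# circumcircle exactly when the circumcenter coincides with the base midpoint.
# The triangle coordinates below are made up for illustration.
import numpy as np

def circumcenter(a, b, c):
    """Circumcenter of triangle ABC via the perpendicular-bisector equations."""
    a, b, c = map(np.asarray, (a, b, c))
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
    uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
    return np.array([ux, uy])

def base_is_diameter(a, b, c, tol=1e-9):
    """True when base BC passes through the circumcenter, i.e. BC is a diameter."""
    center = circumcenter(a, b, c)
    midpoint_bc = (np.asarray(b) + np.asarray(c)) / 2
    return np.allclose(center, midpoint_bc, atol=tol)

# Right triangle: base (-1,0)-(1,0) is a diameter of the unit circumcircle.
print(base_is_diameter((0, 1), (-1, 0), (1, 0)))   # True  -> the "Euclidean" case
# Taller triangle: the same base is now only a chord, not a diameter.
print(base_is_diameter((0, 2), (-1, 0), (1, 0)))   # False
```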

I discussed my hypothesis in the previous post.

Three Geometries - Intuitive

The hypothesis was messy. I had triangles down as being locally Euclidean and globally possibly otherwise.

With the circumcircular sectors, the complications go away.

Three Geometries via Circumcircular Sectors

The new model is so much simpler.

I went on to look at two triangles that were not competitors. I looked at that synergy.

Potential Synergy

The red line represents the shared bits. The yellow shows the potential synergy. The gains from synergy, like the gains from M&As, show up in the analysis, but rarely in the actuals.

I went on playing with this. I was amazed at how decisions far away could have population effects and functionality effects, even when the firms don’t compete on the same vectors of differentiation. But these effects are economic once the other organization is outside your category (macro). We only compete within a category (micro).

Population Effects

In this figure, the populations overlap in the outliers. The triangles don’t overlap. They are not direct competitors. They do not share a vector of differentiation. Point A is not on line DE.

The circles represent their populations. The relative population scale is way off in this figure. The late firm should have a population as much as 10 times larger than the early firm.

The problem with modeling two firms, placing them in space relative to each other, is that it means doing a lot of work on coordinate systems, or using metric tensors. I started drawing a grid. I’ll get that done and look for more things to discover in this model. Enjoy!