Factor Analysis–What’s Important to your Product

June 29, 2015

Earlier in the week John Cook tweeted something about Coxeter circles, so I clicked the link and was surprised by the following figure. The relationship between the diameters, or radii, of the circles is the same as what one would expect from a factor analysis. The first factor is the steepest and longest. The next is less steep and shorter than the first. Each subsequent factor is less steep and shorter than the previous factor. The particular angles and lengths will differ, but a subsequent factor will always be less steep and shorter.

Coxeter_circles

The circle labeled zero is your firm. The circle labeled one would be your category. If you are focused on managing your revenues, the monetization generating those revenues would determine your category. If you are focused on something other than revenues, then place yourself in a category relative to that. The circles labeled two or three, or any number above one, would be macroeconomic considerations.

A factor analysis typically covers 80% of your variance with three factors. They would be labeled with negative numbers. The area of a given circle hints at how much variance that factor covers. The factors shrink: the circles get smaller, or in a line graph, the lines get flatter and shorter. The statistical study of your variance beyond those three factors gets more expensive, so your budget constrains the number of factors your effort can be managed with. The budget is both monetary and a matter of managerial focus. The independence of the variables and the complexity of the data fusions giving rise to each factor would impact managerial focus.
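To make that 80% figure concrete, here's a minimal sketch in Python. The data is synthetic, and scikit-learn's PCA stands in for a factor analysis (scikit-learn's FactorAnalysis doesn't report variance ratios directly); the factor strengths and loadings are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic data: three latent factors of shrinking strength drive ten
# observed variables, plus noise -- the "steepest and longest first"
# pattern described above.
rng = np.random.default_rng(0)
n = 500
factors = rng.normal(size=(n, 3)) * [3.0, 2.0, 1.0]  # shrinking strengths
loadings = rng.normal(size=(3, 10))
X = factors @ loadings + rng.normal(scale=0.5, size=(n, 10))

pca = PCA().fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)
for i, c in enumerate(cumulative[:5], start=1):
    print(f"factors 1..{i} cover {c:.0%} of the variance")
```

Run it and the cumulative coverage climbs steeply for the first factor, less for the second, less again for the third, and barely at all after that, which is the same shape as the circles.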

The Coxeter circles here represent two levels of macroeconomic factors, your category, your firm, and your product. For wider product portfolios there would be more circles with negative numbers. Imagining this in three dimensions, as collections of spheres, would demonstrate some interesting relationships.

In a firm that stretches across the technology adoption lifecycle (TALC), the factors would migrate in an animation, live and die as Ito memories and oscillate between carrier and carried considerations. In such a firm, the population considerations could be a parallel factor analysis anchored around each population’s relevant product. Economies of scale do not allow expression of the TALC.

Factor analyses need not be firm centric. The economic return on a given set of factors places a given firm in a given value chain. In a value chain, the larger, aka steeper and longer, factors may be outside of your managerial focus. A small factor for your customer would be a very large factor for your company. The key reason to outsource is to preserve managerial focus. When you tell your supplier how to do business, you are not preserving managerial focus. I realize a product manager wouldn't do this, but when it happens, it enters into your matrixed product organization.

Factor Analysis of Value Chain

Ad serving might be your only monetization, so you need to get and keep eyeballs, and deal with the standardized ad-serving infrastructure. Your factor analysis would have holes in it; it would have discontinuities in it. Fast followers would have similar factors, whole product factors, and supplier factors.

In the figure, two whole products are shown: one for web, and another for mobile. One fast follower is shown. A fast follower may compete with you on a single factor. All ad serving monetized businesses might use this supplier.

The arrowheads indicate convergences defining the world size of a given value chain. That is similar to convergences in probability distributions. A factor analysis looks like a power law distribution or a long tail.

Where you have discontinuities in your value chain, you will have to establish well-defined interfaces, and decide how soon you want to follow changes to the definitions of those interfaces.

Ito Processes in the Technology Adoption Lifecycle

June 20, 2015

A Markov process has no (zero) memory. An Ito process has a finite memory. A Markov process is an Ito process with a memory size of n=0. All of that is, for our purposes, talking about history, or more specifically, relevant memory.
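Read literally, that gives us a process whose next step depends on its last n steps. Here's a minimal sketch in Python, assuming a binary state and an invented persistence rule, just to show n=0 against n>0:

```python
import random

def next_state(history, n):
    """Draw the next binary state from the last n states.

    n = 0: the past is irrelevant -- the memoryless (Markov) case.
    n > 0: the recent past biases the draw -- finite memory.
    """
    if n == 0:
        return int(random.random() < 0.5)
    recent = history[-n:]
    p_one = 0.1 + 0.8 * (sum(recent) / len(recent))  # invented persistence rule
    return int(random.random() < p_one)

def run(n, steps=20):
    history = [1]  # seed state
    for _ in range(steps):
        history.append(next_state(history, n))
    return history

print("n=0 (Markov):       ", run(0))
print("n=5 (finite memory):", run(5))
```

With n=0 the sequence is coin flips; with n=5, runs persist, because the recent past weighs on the next draw.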

In our ordinary conversations about memory or learning in a firm, the memory is infinite. It is not an Ito process, so it can't be a Markov process. We talk about brand and design as if they will always be relevant, and have always been so. We talk about a whole host of things this way. But, it is the technology adoption lifecycle that makes everything finite. We try very hard to make the late mainstreet market infinite. Sloan's invention of management leads us to the infinite firm and the management practices that make the infinite firm. Blue oceans lead us to find another structure for a category after we can't get any more infinity from our management practices. These notions of infinity invite us to cut costs until there are no more costs to cut. These notions of infinity kill our companies, and kill them fast and faster.

Innovation and management are entirely different. Sloan didn't innovate, except in his creation of the product he called management. He did not innovate cars. He grew his company through M&As. He consolidated his category. Such consolidations are an indicator that the market leaders have been chosen. Those market leaders get a monopoly or near-monopoly position. Everyone else is stuck in promo-spend territory fighting over the scraps. Everyone else is stuck with competing on brand and design, because they have no market power and no differentiation. This is the late mainstreet phase of the technology adoption lifecycle (TALC) out to the laggards (devices) phase. The later you are in the TALC, the more you have to spend on brand and design, and the more you have to manage your costs and processes.

When we talk about the early mainstreet, IT horizontal, geek-facing internet of the 90's as if it didn't have design, we ignore the lesson of the TALC: fit the population you serve. Design is not a characteristic of geek-facing products. Design is a characteristic of consumer-facing products. The geeks that tried to sell dog food, or any consumer product, back in the 90's, in the early mainstreet market, failed. Those same geeks giving something away for free, something technical, something infrastructural, something non-consumer, succeeded. We came into the late mainstreet market knowing that free worked, that customers would not pay for anything, that paywalls were wrong, and so on. We came into the late mainstreet market having learned the wrong lessons. We are finally forgetting those lessons. We are finally learning that consumers pay for stuff.

Alas, we still learn the wrong lesson when we try to sell something to geeks in the late mainstreet market. No, they will not pay… We are learning the wrong lessons from our success with consumers.

The main problem in crossing the TALC is that the TALC structures our memories. We have finite memories and infinite memories. But, we only have one memory. In my prior discussion of software as media, and in my TALC slideshare, I

So back to this Ito process.

The birth of a category begins by finding the B2B early adopters. Yes, lean does not start there. Lean is late mainstreet. Lean is built on other people's whole product. It starts well within a category's life. The birth of a category is 90's-era internet. That's where today's whole product came from. Twitter is probably the only such play we've had in Web 2.0. Even Google is a subsequent generation in an existing category, and a promo spender to boot. And, no, we hear about how B2B needs design these days; sorry, but that is late mainstreet as well. It's consumer and laggard/phobic facing.

The category is born with a Poisson game, aka a Markov process. These vendors have nothing to leverage and face the process of building tech, product, market, and company, all facing the client. Unlike lean, they are stuck with the technology whose adoption they are fostering. Unlike lean, the best practice is to automate the client's product visualization, not your own. Well, lean lets the users provide the product visualization instead. The point is that n=0, aka we have a Poisson process with no memory. But, we do have a nascent finite memory on our hands. Because we intend to repeat this process, we separate our phase processes from our customer-facing people. Usually, companies do not do this. For them, the end of their category leaves them without a memory of the discontinuous innovation processes, so they start over again, disadvantaged by the cost of trying to use their current late mainstreet processes to do what those processes cannot do, and by economies of scale devoid of the needed customer base. Memory problems have costs, but accountants can't tell you how much those problems cost. Memory problems kill innovation. Separation, Christensen's real original concept, failed to gain traction against the cost accountants.

Christensen built his consulting firm with late mainstreet people who did not provide the early mainstreet effort needed to foster adoption of the separation concept.

So we start with a Markov process. With every capability we build in our consultative product implementation processes, we add to that memory. Call it n=20. Then, we start to build our vertical market, chasm-crossing processes, n=21 to n=60. But we partition these two capability collections. We keep our consultative processes going with a brand new discontinuous innovation when the time comes, when the bowling alley ends. Then, we focus on carrier, and build our IT horizontal-facing processes, n=61 to n=90. Within the IT horizontal-facing organization, we build our tornado capabilities, n=91 to n=100. The tornado capabilities will be harder to retain, because they only work in the tornado and in the post-M&A tornado. It is hard to keep them loaded from an HR perspective. Likewise any IPO and further investor relations capabilities, again memory in terms of processes and people. Through it all, our Markov process becomes Ito.
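Purely as bookkeeping, those partitions can be written down; the phase names and n ranges here are just the illustrative numbers from the paragraph above:

```python
# Capability memory as an append-only log, partitioned by TALC phase.
# The ranges are the illustrative n values from the paragraph above.
memory = {
    "consultative (B2B early adopter)": range(1, 21),    # n = 1..20
    "vertical / chasm crossing":        range(21, 61),   # n = 21..60
    "IT horizontal":                    range(61, 91),   # n = 61..90
    "tornado":                          range(91, 101),  # n = 91..100
}

# The partitions stay separate so a phase's processes survive the phase.
for phase, capabilities in memory.items():
    print(f"{phase}: {len(capabilities)} capabilities retained")
print("total memory n =", sum(len(r) for r in memory.values()))
```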

At some point we get to our six sigma normal and all things Markov/Ito become Gaussian. Memory becomes infinite. We move from discovery to enforcement, different types of machine learning. Our geometry changes from hyperbolic to Euclidean and subsequently beyond six sigma, to spherical, Euclidean and spherical being safe for management.

Still, there are events that drive us back to earlier memories. Commodification of core technologies makes us go back to discontinuous innovation in the midst of our continuous innovation efforts. Mass customization forces us to focus deeply on carried, like we did for the B2B early adopters. There will also be processes that we use once and throw away. Before throwing them away, however, you need to think long and hard about reuse and load issues. If you need those people and processes, don't throw them away; find a way to keep them loaded, rather than letting them dissipate in lateral moves.

Outsourcing is another of those late mainstreet methods for managing managerial focus that lead us to dispose of capabilities and learning, memory, that we may need again. Again, think hard. You can’t get these back after they are gone.

The devices phase leads us to gain a hardware capability beyond the software capabilities we already have. Hardware also drives new software capabilities. More memories, more people, more processes will all be required. Cloud, the phobic phase, similarly.

As in my post on incommensurate, the water balloons, or balloon poodles, model will help here. Where does the memory begin? How large does the girth of this memory get? How long does it last? Does it produce cash or wealth or loss? What balloons are inside other balloons? What balloons are outside the others? What are the interfaces? The coupling? The cohesion?

Know that you are managing your company’s memory. Learning is good, but it takes us away from our pasts even as it takes us to our future. Learning can prevent us from going back to the parts of our past that we will need again unless we were built to flip, built to exit. Manage memory.

Comments?

Incommensurate

June 15, 2015

Back in 2009 or so, a reader of this blog asked me to define the term incommensurate. I'll do that again here.

I’ll start with a graph from S. Arbesman’s The Half-Life of Facts. That graph was a surprise to me. It displayed the results of fifty or so experiments about temperature. Some of the experiments intersected with other experiments. Other experiments were parallel to the existing experiments. I’ve drawn a graph here showing the same kinds of things.

Base

The darker lines are the results of a regression of the data contained by the light gray rectangle. Each rectangle represents a single experiment and its replications.

Where the lines intersect, we can call those results commensurate. They result from what Kuhn called normal science. The experiments were designed differently, but reflect a single theory. The measurements within a single experiment reflect a particular apparatus. Changing the apparatus would give you another experiment with potentially different results.

Where the lines don’t intersect, we can call those results incommensurate. I’ll point out the gaps in the next figure. These gaps reveal an inadequacy in the current theory.
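A toy way to state the distinction: treat each experiment as a fitted line over its measured range, and call two results commensurate when the fitted segments meet. A sketch, with invented slopes and ranges:

```python
def segments_intersect(seg_a, seg_b):
    """Each experiment is a fitted line y = m*x + b over a domain (x0, x1).

    Two results are 'commensurate' here if the fitted segments cross (or
    touch) inside their overlapping domain -- a toy stand-in for the
    intersecting regression lines in the figure.
    """
    (m1, b1, x0a, x1a) = seg_a
    (m2, b2, x0b, x1b) = seg_b
    lo, hi = max(x0a, x0b), min(x1a, x1b)
    if lo > hi:
        return False                 # domains don't even overlap: a gap
    if m1 == m2:
        return b1 == b2              # parallel lines: identical or disjoint
    x_cross = (b2 - b1) / (m1 - m2)
    return lo <= x_cross <= hi

exp_1 = (-1.0, 10.0, 0.0, 5.0)       # one cooling curve over one range
exp_2 = (-1.2, 11.0, 4.0, 9.0)       # overlapping range, crossing line
exp_3 = (-1.0, 20.0, 6.0, 12.0)      # disjoint range: a gap
print(segments_intersect(exp_1, exp_2))  # True  -> commensurate
print(segments_intersect(exp_1, exp_3))  # False -> incommensurate
```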

This graph can show us all of the experiments at once. But, that covers up things that would be revealed better in an animation. We don’t know, from this graph, when a particular result showed up. If we attended to the temporal aspects of the underlying data, we’d be able to see other gaps. The experiments characterized the gaps across the ranges and domains of several experiments.

Continuities 00

In this figure I've highlighted the continuities, the intersections, with red squares. I've assumed that all of these intersections exist. The results of one experiment, in the top left, are shown in blue. I've assumed that this experiment was incommensurate and that the experiments that intersect with it did not exist at the time. The experiment that connected it to the chain of experiments to its right happened later.

The experiments shown with red lines are still incommensurate. They exhibit gaps with those experiments to their right. At the bottom right, three experiments exhibit continuity with each other, but exhibit a gap with both the other experiments above and to their right, and the other experiments to their left.

Normal science looks like a well connected network. Extending the range and domain of the existing theory is the job of normal science. A single regression would result in a decreasing function. Where the details differ from that single regression, we have an opportunity for clear functional differentiation.

Each of those commensurate experiments enables continuous innovation that extends the life of a category after a discontinuous innovation gives birth to the category. The technology adoption lifecycle is driven by improvements in a technology's price-performance curve, or S-curve. It is the price-performance curve that delivers on the promises made when the technology was sold and purchased. The demanded performance gets easier and easier to deliver as the ranges and domains of the underlying experiments expand.
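The S-curve is commonly modeled as a logistic; here's a small Python sketch, with the ceiling, midpoint, and rate as invented parameters:

```python
import math

def s_curve(t, ceiling=1.0, midpoint=5.0, rate=1.0):
    """Logistic curve: slow start, rapid middle, saturating finish.

    A common way to model a technology's price-performance improving
    over time; the parameters here are purely illustrative.
    """
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 11, 2):
    print(t, round(s_curve(t), 3))
```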

Discontinuities 00

In the next figure, I've circled the discontinuities, the gaps, the incommensurate experiments. We won't pursue experiments to bridge the gaps labeled G and H. We won't try moving to G, because we can already read that temperature. We might want another way to take that measurement. We could develop a pass-fail thermometer where we are just interested in knowing if we have to make a bigger effort to get a numeric reading. Then, jumping that gap would make sense. The gap H just hasn't been worked on yet.

Discontinuities

Next, I went back and color coded the labeled gaps. The black rectangles show the ranges and domains involved in bridging a given gap. Bridging a gap requires new theory. The gap at A is from the experiment represented by the blue line to the experiment on the right. The gap at E can bridge to any of three branches on the right. Any one branch will do. Continuous paths can get you to the other branches. Think step functions. The gap at F actually gaps with a collection of experiments to its right. The gap at B bridges two large subnets. Bridging this gap is critical. The gap at D can bridge to the left or the right. Either will do. Again, paths exist to get to and from the left and right sides.

Other parameters

In this figure, I've put lines at the bridged gaps indicating the use of new parameters that enable us to bridge the gaps. These parameters are labeled p and q. Their use was described in a new theory. The dark purple lines demonstrate how a continuous path through the network resolves a branch in resolving the gap.

The gaps E and A were resolved via parameter p and the network flow. The three gaps at F were resolved by parameter p as well. The gap at B was resolved by the solution to the gap at F. The gap at G continues to be ignored. The gaps at D and C were resolved via the parameter q and network flows. The gap at H, again, is ignored.

In these experiments, basic research has shown us where our opportunities lie. It has delivered to us the incommensurate seeds of categories, and the commensurate life blood of new growth (dollars, not populations) to lift us slightly from the swamps of the margins from our nominal operations.

Another Explanation

The simplest explanation of what incommensurate means is that every theory is a water balloon. A theory can only hold so much of what it manages to contain. When you want more than a theory can deliver, when continuous improvements run out, you need a new trick to combine two water balloons. Have fun with that.


Where to Invest?

June 4, 2015

Where to invest was the question. My answer has always been: in a company doing discontinuous innovation. But, like most things, finding them is the hard part. Most of what we hear about these days is continuous innovation. What we don't hear about is discontinuous innovation.

I've worked in startups since well before the web came along, and my problem has always been finding them. Truth is that I didn't find them. They found me. But, living in a startup desert, I'm looking for ways to find them. For a job search, watch your sales tax permit applications. That's not much help for an investor, and it's probably way too early. I know from cold calling SIC-coded companies that the SIC classification system is very wide. You'll end up calling a lot of companies that don't do anything remotely like selling software.

The investor alternative is to find VC funds and put your money in one of them. If you’re going with discontinuous innovation, finding that VC fund will be the issue. I don’t know if VC funds mix discontinuous and continuous innovators together in the same portfolio. I do know that the continuous investments are smaller and get less attention from the VCs. Discontinuous innovations take more time, more money, and more VC attention.

You’ll hear about the continuous innovators and more than likely you won’t hear about the discontinuous innovators. Read the journals in the fields where you expect to invest. Read the SBIRs. Take note of the investigator’s names. Check their bibliographic information. When will one of their students bring the investigator’s technology to the market?

Anyway, just a few hints on where to find the discontinuous innovators. Investing in a company that creates a category and gets the near-monopolistic position is a good place to grow your money. The quick flip of the continuous innovators or the fast followers, not so much.

Remember that the technology adoption lifecycle is more than some ordered Markov process transitioning populations. The populations organize the companies serving them. Early phases grow. Late phases decline. We hide that decline in things like cost management and large numbers. Early phases create wealth. Late phases capture cash. Discontinuous innovations begin in the early phases and transition into the late phases. Continuous innovations begin in the late phases and live short lives.
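For the ordered-Markov baseline the lifecycle is more than, here's a toy left-to-right transition matrix; the phase names and probabilities are invented for illustration:

```python
import numpy as np

# A toy left-to-right transition matrix over TALC phases: each period a
# slice of the population moves to the next phase and never moves back.
phases = ["early adopter", "chasm/vertical", "tornado", "late mainstreet", "laggard"]
P = np.array([
    [0.6, 0.4, 0.0, 0.0, 0.0],
    [0.0, 0.7, 0.3, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.0, 0.9, 0.1],
    [0.0, 0.0, 0.0, 0.0, 1.0],  # absorbing end state
])

pop = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # everyone starts at the left
for period in range(12):
    pop = pop @ P
print({ph: round(p, 2) for ph, p in zip(phases, pop)})
```

The point of the post is everything this matrix leaves out: the populations organize the companies, and the phases differ in what they create and capture.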

From a Geometry Proof

June 2, 2015

I was out on Twitter several weeks ago, and Alexander Bogomolny tweeted another of his GeoGebra proofs, shown below.

Raw

The key issue was the similarity of the two blue lines in terms of their angle and length. But, I looked at the center pentagon and thought value chains, Shapley values, and just who was that dot in the center pentagon that manages the interactions with the other pentagons. How central was this person?

To examine the issue of that person’s centrality, I looked for the center of the center pentagon.

Center

From the figure it was clear that the person working where the value chain contributors meet was not some executive in the CXO crowd. Instead, the person was managing at a distance from the CXOs. Sales reps like the CXO sale, but this person, the user, is some distance from the buying decision. Yes, the buying team is constructed. This person, the user, might be on the buying team. But the vision, the value proposition across these distances, will differ. The grandeur of the CXO's value proposition can be very distant from the user thinking about how they can cut and paste these numbers into that equation. So who is the customer, and who is this user or that one, is critical. Those buying personas all have personas in your software.

Are you calling each of them? Do you satisfy all of them? Are you teaching each of them? Are you marketing to each of them? Unlike the blue lines in the proof, those buying personas are not similar. They are in conflict. This even if the CXO has everyone aligned.

What about all those who don’t use your software, but are aligned with the larger, smaller, or intermediary value propositions? How long will they be involved? When will their involvement begin and end? What will your process orchestration look like?

That person at the intersection of that value chain is just one Poisson distribution under the corporate normal. The vector under that Poisson points where?

Comments?

Geometry

May 23, 2015

I was looking for the parameters of an ellipse earlier in the week. I ended up on Wikipedia looking at the definition of eccentricity. The parameter of interest is eccentricity. Right away eccentricity breaks down into four cases: circle (e=0), ellipse (0<e<1), parabola (e=1), and hyperbola (e>1). Notice that this aligns itself with the geometry of the space itself. Relative to the sum of the angles in a triangle we have three cases: hyperbolic (<180), Euclidean (=180), and spherical (>180). Notice also that this aligns itself with the definition of probabilities, as 0 ≤ p ≤ 1. And, the footprints of distributions tie into eccentricity: the normal as a circle, and the Poisson as an ellipse. The distributions also tie into machine learning: Poisson giving us rule discovery, and Gaussian (normal) giving us rule enforcement. Then, there are Ito processes: n = 0 giving us the Markov chain, n > 0 giving us an Ito process. The Markov chain is a special case of the Ito process. The holes in these associations are probably due to my not having been exposed to that math yet. Everything in math is tied to everything else in math.
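The case analysis is mechanical enough to write down directly; a minimal sketch:

```python
def classify_conic(e):
    """The four eccentricity cases named above."""
    if e < 0:
        raise ValueError("eccentricity is non-negative")
    if e == 0:
        return "circle"
    if e < 1:
        return "ellipse"
    if e == 1:
        return "parabola"
    return "hyperbola"

def classify_space(angle_sum_degrees):
    """The three geometries by a triangle's angle sum."""
    if angle_sum_degrees < 180:
        return "hyperbolic"
    if angle_sum_degrees == 180:
        return "Euclidean"
    return "spherical"

print(classify_conic(0.0), classify_conic(0.5), classify_conic(1.0), classify_conic(2.0))
print(classify_space(170), classify_space(180), classify_space(190))
```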

I don’t have a correlation between the parabola and anything else. I’ll have to think about this single case.

The failure of a given innovation is excused by faulting the innovation. But management as an idea was extended to innovation. Management as an idea was exclusive of innovation when Sloan created management. Nobody says management failed when an innovation fails. Christensen makes the case that managers excelling at management failed when their companies were disrupted. Ultimately, what this boils down to is place, under a distribution, in a specific geometry. I will finish this post talking about place, but I need to get back to eccentricity and geometry first.

In the Wikipedia post on eccentricity, there was an animation linking circles with ellipses, parabolas, and hyperbolas. Watch it several times, because I'm going to ask you to imagine the animation happening in a different order.

250px-Ellipse_and_hyperbola

The animation begins with the circle. A blue dot represents the center of that circle. That dot goes on to represent the foci of the ellipse, the parabola, and the hyperbola. You can watch the dot move in each frame of the animation.

So now we can think about it in terms of the technology adoption lifecycle (TALC), or the processes organized by the lifecycle. We'll start simply here. It will get messy as we go deeper. Start with a Poisson game. That's when we are looking for those B2B early adopters in the TALC. That's the second phase, the one adjacent to the technical enthusiasts.

A series of Poisson distributions generates a single Poisson distribution whose footprint is an ellipse. The major axis of the ellipse shows us a Markov process as the major axis grows. The major axis is a vector. We start with this Poisson distribution because we are using a game-theoretic game to represent a game of unknown population, a Poisson game. You can play these games as Gaussian games, but my intuition is to go with discovery learning. Keep in mind that I'm talking about a discontinuous innovation here. Continuous innovations happen elsewhere in the TALC.

Now, this Poisson distribution starts off as a single infinite histogram, aka a point, in other words as a tiny circle. Markov chains are composed of Poisson distributions of arcs, whose pre-choice probabilities are taken from normal distributions of the nodes, small distributions. The Poisson would be external, while the normal would be internal.

We are representing the company and its customer base, as opposed to its prospect base, as a Poisson distribution. Over time, that Poisson distribution tends to the normal. The ellipse gets longer and wider. The ellipse fits inside a rectangle that eventually becomes a square, at which point the ellipse becomes a circle. The eccentricity changes from something between zero and one to zero. I've seen this in the financial results of companies selling products to foster the adoption of discontinuous innovation. I trust this to be reliable.
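That tending-to-normal is standard: a Poisson with rate lam approaches a normal with mean lam and standard deviation sqrt(lam) as lam grows. A quick sketch, assuming scipy, that measures the shrinking gap:

```python
import numpy as np
from scipy import stats

# As the rate grows (read: the customer base accumulates), the Poisson
# looks more and more like a normal with mean lam and sd sqrt(lam).
for lam in (2, 10, 50, 200):
    grid = np.arange(stats.poisson.ppf(0.001, lam), stats.poisson.ppf(0.999, lam))
    poisson_pmf = stats.poisson.pmf(grid, lam)
    normal_pdf = stats.norm.pdf(grid, loc=lam, scale=np.sqrt(lam))
    max_gap = np.max(np.abs(poisson_pmf - normal_pdf))
    print(f"lam={lam:>4}: max pointwise gap {max_gap:.4f}")
```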

The circle represents the vertical. The bowling alley is a collection of approaches to different verticals. The Poisson distributions of those approaches point to their respective verticals, aka they walk to their verticals. Arriving at the chasm is the event that correlates with the onset of the normal. The onset of the normal is also the onset of Euclidean space.

The circle goes on to represent the horizontal market. Consider it to be six sigma wide at the post-tornado. Once it is larger than six sigma, the geometry is spherical. The standard b-school case analysis becomes very reliable in spherical space. But, my focus is on why that same analysis fails us prior to the chasm. I hypothesize that the space prior to the Euclidean is hyperbolic. We'll go back to the animation again, but this time I'll capture the frames.

00 Research Front

The animation ends with the hyperbola. Businesses don't end with the hyperbola. They end in a spherical geometry, usually with a black swan that makes their distribution contract. A category begins with a gap. Consider the space looking outward to the foci to be the gap.

I was going to show that the research front changed and call that period the research effort. But, the animation didn't support that. The directrices moved instead. They do approach each other, but never converge. The distance from one focus to the nearest directrix is equal to the eccentricity, which will be larger than one.

I’m going with the hyperbola, as it is unfamiliar and weird enough to lead to things like taxicab geometry where you can’t go straight there, instead having to stay on the grid. In the other geometries you can go straight there. I imagine linear algebra can make the hyperbolic linear, but I haven’t gotten to that math yet.

The time research takes would happen on a z-axis. The search that is research would happen on the surface of the research front. Notice I didn’t use the term R&D. Research gets us our technology and our s-curve. Products foster adoption of the technology. Technology is adopted. Products are sold.

02 Poisson Game

Once the directrices have converged to their minimum separation, the weak signal is emitted and the Poisson games begin. I had to draw the figure myself, because the ellipse in the animation was too large, since the animation's ellipse starts from a circle. The hyperbola in the figure is there to show the system before the directrices converged. The big bang here is the signed contract with the B2B early adopter. We grow from nothing starting here.

As an aside, Levy flights happen at the find-your-underlying-technology phase, aka before the technical enthusiast phase of the TALC.

Now, we'll go back to the notion of place. In the animation, the blue dots that represent the origin and the foci move across the geometries. In the TALC, a normal of normals, discontinuous technologies undergo adoption from left to right starting at the far left. All other types of innovation start, in the random-access sense, somewhere to the left, aka in a different place. Starting at the left means being a monopolist or exiting the category. Starting to the right means competing on promo-spend dollars against fast followers and other look-alikes. Those are different places. Samsung will never be Apple even if they hire Steve Jobs. Different places. Different times. Different pathways.

I’ll talk about place in a later post. Tweets about design and brand drive me nuts. They are phase specific–place specific.

Comments?

Normal Approximating Whatever

May 13, 2015

I finally got back to a math book, Modeling the Dynamics of Life by Frederick R. Adler. I've had it on hold for a long while. I've been at it for over a year. And, I still haven't done the homework. The homework actually teaches beyond the text in a lot of math books. So I'll be at it for a long time to come, even though I'm starting the final chapter. It's an applied textbook, so the author gets his point across without turning you into a mathematician, or at least tries to. The mathematician thing will happen if you pay attention, but who does that?

In the previous chapter, the book talks about approximating a Poisson distribution with a normal. That's a very small normal, since it fits inside the Poisson distribution it's trying to approximate. It does the same sort of thing for the binomial. And, again, for the exponential. I drew the series of distributions for this latter exercise. It takes a lot of distributions added together to get that normal, something like 30 distributions. The thing that can get lost is the shape of the world holding the distribution.

In approximating the normal from an exponential, the exponential, aka the long tail, looked longer than it was tall. But adding two distributions brought us to a gamma distribution that was a little longer. Adding five distributions got us something that looked normal, but was wider still, and its pdf was taller than the normal. Adding ten distributions, wider again and less tall. Adding 30, wider, practically on top of each other, and shorter. If we kept on adding, it would get shorter and wider, aka it would get tiny, but the approximation and the actual would be close enough that we'd be collecting data and graphing things for entertainment.
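The same exercise is easy to redo numerically. A sketch, assuming numpy, that watches the skew fade as the count of summed exponentials grows:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sum n unit exponentials (a gamma with shape n) and standardize; the
# skew fades as n grows -- the "about 30" rule of thumb at work.
for n in (1, 2, 5, 10, 30):
    sums = rng.exponential(scale=1.0, size=(100_000, n)).sum(axis=1)
    standardized = (sums - n) / np.sqrt(n)   # gamma(n) has mean n, var n
    skew = np.mean(standardized ** 3)
    print(f"n={n:>2}: skewness {skew:+.3f}")
```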

This graph will be too small. But take a look.

Sum of Distributions Tending to Normal

At some point further calculation becomes pointless. Factor analysis shares this property. Does another factor tell you something actionable? Does more accuracy do the same?

Another thing that got talked about was the standard normal. You get to the standard normal from the normal via z-scores. You want all your distributions to have a normal approximation since your tools for approximating probabilities are based on the standard normal and its z-scores. To do hypothesis testing, you need a normal.
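A minimal sketch of that route through the standard normal, assuming scipy and with invented numbers for the sample:

```python
from scipy import stats

def z_score(x, mu, sigma):
    """Map an observation to the standard normal."""
    return (x - mu) / sigma

# A hypothetical test: sample mean 10.4 against a null of mu=10, sd=1.2,
# n=36. The standard error is sigma / sqrt(n).
z = z_score(10.4, 10.0, 1.2 / 36 ** 0.5)
p = 2 * (1 - stats.norm.cdf(abs(z)))  # two-sided p-value
print(f"z = {z:.2f}, p = {p:.3f}")
```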

You can find the formulas for those distributions. They tend to look messy. Try integrating them. Getting to a standard normal is easier. Another author, in another book that I can't cite, said that while the numbers convert via those formulas, the logic does not follow the flow of the calculations. Hypothesis testing in non-normal distributions is an active area of research. As an example of calculation and logic not being consistent, we have machine learning: Markovian approaches discover, while Gaussian approaches enforce. That's not really a matter of application. One is ontological, while the other approach is taxonomic.

Notice that all these approximations and convergences require a lot of data and a lot of distributions. We are using big data to estimate small data.

Enjoy! Comments?

More on the Gap

May 10, 2015

After posting “The Gap,” I kept going. I put the technology adoption lifecycle across the terrain. An idea gains adoption via some apostles in an invisible college, which gets the idea published in a peer-reviewed journal. But, that’s long before the idea shows up in a corporation pushing it out into some productization. That corporation wrestles with the idea. Someone has to convince someone. The idea has to gain adoption internally within the corporation. That corporation is staffed with people drawn from the larger world. The pragmatism scale organizing external adoption is also organizing the internal market. Someone will be the technical enthusiast. Someone will be the early adopter. Not everyone in the corporation has to adopt the idea. Once the corporation starts selling the idea, there will be some internal laggards, some phobics, some non-adopters.  But, before the corporation starts selling, it will have adopted the idea.

Before the corporation sells much, it is faced with external adoption. The forces of external adoption will be with the corporation until it abandons the idea’s  category.

01 09a A Point In A World

Internally, we have an ontology, a hierarchical definition of the idea, a definition delineating how it is different and how it is similar to other ideas. Patent applications are like that, differences and similarities. But patents are really about realizations. Ontologies organize ideas.

Taxonomies organize realities. External adoption uses different species of implementation in different product spaces. The realizations in external adoption get organized around differences and similarities with other products. The idea becomes implicit in the taxonomy.

Since external adoption sequences markets and contexts it also sequences whether the focus is on the vertical or the horizontal, on the carried or the carrier. The external adoption is itself a media that orchestrates the media of software.

Ontologies and taxonomies organize their search spaces. Ontologies are generative. Ontologies diverge. Taxonomies are enforcing. Taxonomies converge. At each taxonomic decision, I am becoming more known. At each ontological decision, I become less known. Ontologies face into the unknown, the more to be known. Taxonomies face into the known.

Ontologies are convex; taxonomies, concave. The book "Antifragile" tells us that concave is safe, while convex is unsafe. Sloan, the longtime head of GM, invented management. He was all about the concave. Sloan was not an innovator. GM bought the innovations it needed. Taxonomies are management. Ontologies are innovation. Innovation is exclusive of management. I've gone so far as to say that management inserts risk into innovation.

01 09b A Point In A World

The ontological spreads out across the search space. To realize an idea, we trim the tree that is the search space. We trim it enough to converge to a solution. That may be a point, or a line, or a shape. The figure is a little off. The solution, the thick dark blue line, occurs before the external technology adoption lifecycle. It should occur inside the lifecycle.

01 09c A Point In A World

One last thing to do was to count the bits involved in crossing the gap. The idea uses 3 bits to document its search space. The realization, likewise, uses 3 bits. Those would be explicit bits. When differentiators become commoditized, their  bits become implicit. The number of bits involved will change as the idea moves through the technology adoption lifecycle.
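The bit count is just the log of the leaf count: 3 explicit bits name one of 2^3 = 8 leaves, with the 8 inferred here from the figure's 3 bits. A one-liner:

```python
import math

def bits_for(leaves):
    """Bits needed to name one leaf: one bit per binary decision down the tree."""
    return math.log2(leaves)

print(bits_for(8))  # 3.0 -- the figure's three explicit bits
```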

01 10 A Point In A World

Enjoy. Comments?

A Point, Unquantifiable Datum

May 4, 2015

Data is made. When you take out your tape measure and measure twice before you cut, you have taken all the bits that it took to make that particular tape measure and projected them onto the tick mark where you will cut. You do that twice. You go even further by taking the bits involved in making the saw and projecting them onto the line that gets cut. Seeing a nice flat surface, a surface that doesn't exist in nature, should remind us that data is made, manmade. Well, more than likely, robot made.

So let's start at a point and look, once again, at a point: a point in some multidimensional space, a point in an argument, a number of bits.

03 00 A Point

So we have the point again.

03 01 A Bit

When there is a point, there is at least one bit. We’ll just call zero bits ground.

03 02 A Bit

Where there is a bit, there is a decision. If such and such. Never mind the behavior associated with that bit for now. If there is a decision, there is always at least one consequence. "There's a spot on this glass." "Then, wash it." Just one point and already you need a dishwasher, a water softener, and a dish towel.

03 03 Two Bits

And, of course, where there is one bit, there will be more. Consider how much money is spent on systems to move bits around that boil down to that single bit, “Hey, are we still in business?”

03 03a N Bits

The real problem with bits is consciousness. We tend to treat explicit bits differently than implicit bits. We talk about assumptions, aka the criminal alias of implicit bits. But consciousness moves around. One bit is important right now. Another bit later on. We have limited focus as humans and those limits demand implicit bits. It’s the world size problem. We put some number of bits inside our world at a given moment and assume the rest.

03 03b N Bits

We focus on the foreground. We let the midground and background slip into the implicit in varying degrees. We let those assumptions fly.

03 03c N Bits

Then, there is the whole mess of carrier and carried, of software as media, of product as media, of company as media, of stock prices as media.

03 03d N Bits

It’s like a cave. You have a floor of implicit bits under you, and a ceiling of implicit bits above you. The space you can stand up in is that of your explicit bits. If you’re ever a coal mine tourist, keep the exits in mind.

03 03e Multiple Carriers

The software as media model comes into this notion of habitable space. There are many carriers. A startup that has its own technology undergoing adoption starts out as two people and three bivectors: company, product, and market. Oh, four: the technology. Whole product people can skip the fourth. To position the point is to build the company, product, and market. So all those bits roll up into that point. Fuse me some data.

03 03f Multiple Carriers FA

Here I simplified the carrier and content aspects. I’ve also applied some hypothetical factor analysis to the system. Each aspect is different in terms of how important it is. The hole is not round. As much as designers dislike radar diagrams, sometimes it takes a radar diagram to illustrate where the point is. Then, again, the point isn’t always in the middle.

51 01 Design

When you have a dimension and you are optimizing it in some way, you have the physical aspect, aka the media, presenting you with some impedance. The ribbon in MS Word does this to me all the time. I'm like, "Where the hell is the control?" I know how to move that impedance; I'm just lost on the topic of finding it. Design is the process of establishing how much the enabler will have to push against the impedance. Bits vs. bits. Design in this definition is general enough to work for software or art. The criteria define the impedances. Design is a point.

51 02 Design

Since we usually design in multidimensional spaces, we end up with a multidimensional surface. That surface has explicit and implicit components: a foreground, a midground, and a background. That is a surface of bits. The red column of enabler bits is the technology that made this product possible, that enabled the work. The rest are context.

51 03 Design

A multidimensional design will be built on a multidimensional analysis having some tiling and some population(s). Change the tiling and the populations and you will need a different design. These are the keys to finding a market for a fast follow. Adding a new technology of your own will get you a different design as well. In the end, they are collections of bits, collections of points.

Oh, why did I say unquantifiable? The implicit bits are not counted. Psychological processes don't count bits. We have no idea how many bits make up our floors or our ceilings. The poet's connotations float, as do we.

Enjoy! Comments?

The Gap

April 29, 2015

In the AI of the '80s, the goal was to solve the problem by various means, but mostly by making the problem small enough to solve. It turned out that most problems were too big. Consider that the point of HTML was to feed knowledge to AI machines without spending the money to encode the world's knowledge on your dime. All this human reading, commerce, and ad service was beside the point. Hell, a server log was an accident.

So we start out looking at the world. Actually, the world is large, so we start by focusing and tightening up our scope until we get to a comprehensible world.

00 A World

Yes, we’ll start where Euclid started. Well, he may have started with a point, instead of a line, but lines and points define each other. To get to a single point, we draw another line, not shown this time.

01 00 A Point In A World

We might think of a point, as being the result of an argument. And, while we are arguing we’ll stick with the real world, no concepts allowed. So the argument is all about taxonomy. “You’re an idiot.” “No, I’m not.” But then, idiot would necessitate that such a thing really existed, and no, not the concept of an idiot. Better to name it a rock, so we can keep our argument simple and non-conceptual.

01 01 A Point In A World

But, somehow, we've admitted the concept of an idiot. So now we are stuck with maintaining a taxonomy and an ontology. We end up with two worlds: the world of ideas, and the world of realizations. Realizations happen long after we get everyone on the same page as to the idea. There are some spatio-temporal notions of distance and time involved in getting everyone on the same page. And, that is pre-idea. Post-idea, post-implementation, that distance and time are tied up in the technology adoption lifecycle, even if we are talking product as opposed to the technology. Getting back to the taxonomy and ontology involved, they are different and separate worlds.

01 02 A Point In A World

Between those two worlds is a gap. We should be glad the gap is there, since it's where economic value comes from. Products reduce the impedance a constraint presents us with. Products might eliminate that impedance in its entirety. But, Goldratt's Theory of Constraints tells us that there is always another constraint. We should be happy about that as well, because we won't run out of work–ever. So why are we unemployed? Well, it's not globalism, robots, computers, or laziness.

01 03 A Point In A World

But, back to the gap. Those lines are not straight. We might use matrix algebra to straighten them out, but really, they curve. We don’t even try to cross a gap until someone can see or imagine the other side.

01 04 A Point In A World

In the gap, we find value. We make the unknown a little more known. We generate a few more bits in the crossing.

01 05 A Point In A World

When we can cross a gap without tossing aside yesterday's world, when we innovate continuously, we capture cash. When the freight-hauling train gets stuff to a port, billing captures some cash, eventually. But, wealth was created when the railroads were built, when railroads were a discontinuous innovation. Railroads might be a bad example, because they were vertically integrated and tended to capture all the cash involved. Today, we are no longer vertically integrated, so the cash is captured by each member of a value chain. Wealth doesn't get captured in a single set of books. No entity gets all the cash.

One of the core jobs when getting an innovation, a discontinuous innovation, adopted is building that value chain and creating that wealth that feeds the coming cash capture. Too much of what we do today is about cashing out on yesterday’s wealth.

Back to those taxonomies and ontologies: they involve decisions. Those decisions define the terrain. The terrain isn't even known. On that map of travel times out of New York, you got out west to where the map went blank. There was terrain there, but nobody had surveyed it, mapped it, defined the features and the data that we encode in our maps. I've drawn the taxonomy and ontology used here with the leaf nodes attaching to the terrain elevation lines. I'm left wondering if the taxons and ontons, the decisions, are a better place to run the terrain. Do we reach a place, or do we go up and down hills? That question seems to be the distinction between discontinuous and continuous innovation. Did we stop somewhere, or did we keep moving? Did we engage in trench warfare, or the war of fluid tank battles with no rear or forward areas? The point here is that you draw your own taxonomies and ontologies and put the terrain features where you want them. Just use a consistent set of rules for doing so.

01 07 A Point In A World

Once you have your map, you can put your value chain on the terrain as well. Here I’m using circles as a Fourier analysis of the value chain. I’ve followed the Styrofoam cup as microphone notion of saying the circles fit the largest area between the constraining elevations of both the taxonomy and the ontology. We end up with the largest possible circles, the highest frequencies you can get. Now, we might not sense that tightly. We might sense smaller. But, sensing larger is a fail in the game theory sense. We’ve gone too far. We won’t notice, except that our gut instinct will tell us something is wrong.

01 08 A Point In A World

In the figure, the purple points represent the points of contact between our sensors and our terrain. The small circle is our peak. Well, hopefully, it is our peak, because it represents the top of the value chain, where you want to be. The circles are eccentric. That means that depending on their direction of approach a competitor might surprise you.

Enjoy. Comments?

