Archive for April, 2011

Geography for Product Managers

April 17, 2011

This post was written in response to this comment on The ordinals we call a clock.

Geography has been my theme for a while now. Functional cultures present us with geographies. Ideas present us with geographies. IT departments present us with geographies. Interfaces present us with geographies. Interactions between minimal marketable functions present us with geographies; likewise, product roadmaps. So what is a geography? My take is that a geography is anything that is better expressed in a GIS system than in a list. Is time an issue? Are the relevant issues spatially organized? Then you have a geography.

When I was working as an ITIL change manager, we were adding the change management component of ITIL to an existing ITIL implementation. I was supposed to track down managers willing to be responsible for improving their change management processes. I was supposed to find the potential conflicts between changes scheduled to be made to code, hardware, and other infrastructural elements across a huge IT shop in any particular time interval. It took me one day to decide that this was a GIS problem. We were trying to get this done via a relational database system and Excel. Oh, hell. Worse, I said that we needed a GIS system on my second day on the job. They were installing an upgrade or a completely new relational system at the time, so ….

Just keep your mouth shut until you’ve wired the joint.

In their system, you had wires, tons of wires, routers, switches, network stuff. All of it was physically located in particular places, so it tied to physical geography quite well. Servers exist in physical space. So drawing the map of their physical system wouldn't be that difficult. Before we make that leap, though, let's explore this geography and what it means in terms of game theory.

I took my son on a spring break trip down Route 66. Route 66 emerged from the efforts of various chambers of commerce along the route and the states involved. The road was a bottom-up business proposition from day one. It is obviously geographic. Services were stretched out along this road. The road is celebrated today as history and as a user experience: the stories of the road.

Several value chains stretched down this road simultaneously. In one sense, the cafe in this town competed with the cafe in the next town. But in the larger sense, the road competed with other roads, so all the competitors on the road collaborated in the competition between their road and the other roads.

So let's look at the road as a value chain and move it into a Shapley value representation.

Route 66 - Discrete

Here we take a length of the road and partition it into sections based on the contribution made by each town along the road. Each town has a geographic reach. Using the frequency-of-use idea would let us build other measures of the value contributed by each town along the road.

Route 66 - To Numbers

Next, we measure each length and normalize it. When you normalize a collection of numbers, the largest number becomes 1.0, and the other numbers, except for zero, become some fraction of that largest number. Normalized values express probabilities. Notice that the existence of the road is taken as a given. If a section of road were to become impassable, the value of much of the road would disappear soon enough. Likewise, if your trunk connection to the internet went down, the value of your enterprise network would disappear while you were screaming into the phone. Ah, but for redundancy.
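To make that normalization concrete, here is a minimal sketch, my own example rather than anything from the figures, using hypothetical road-section lengths for each town:

```python
# A minimal sketch (hypothetical numbers, not from the figures): normalize the
# road-section lengths contributed by each town so the largest becomes 1.0.
lengths = {"town_A": 12.0, "town_B": 30.0, "town_C": 8.0, "town_D": 18.0}

largest = max(lengths.values())
normalized = {town: length / largest for town, length in lengths.items()}

print(normalized)  # town_B maps to 1.0; every other town becomes a fraction of it
```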

If all of this were in a GIS system, we could consider gas station pumps, beds, lunch counter seats, booths, tables, tourist attractions, movie houses. We would be playing many games.

So our next representation is the Shapley value of our road.

Shapley Value

This representation was built from the totals in the previous figure. This representation is optimal; as such, it assumes maximum collaboration. The Shapley value is a number equal to the area of the gray region, the core, in the figure. The white area represents space that cannot be reached due to inefficiencies and the lack of the necessary capabilities.

The Shapley value is typically introduced with a 3-party game, which leads to a triangle. It is also three dimensional, which puts it at the limit of our ability to visualize. Shapley values are usually drawn with regular polygons, polygons where each side is the same length. That just makes the picture easy to draw; the math is not based on shape. Matrices or tables are used to compute the value of each coalition.
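As a minimal sketch of that coalition math, here is the standard permutation-based Shapley computation in Python. The towns and the characteristic function v below are hypothetical, not taken from the figures:

```python
# A minimal sketch (hypothetical coalition values, not from the figures):
# compute each town's Shapley value by averaging its marginal contribution
# over every order in which the coalition of towns could form.
from itertools import permutations

players = ["town_A", "town_B", "town_C"]

# v maps each coalition (subset of towns) to the value it can earn on its own.
v = {
    frozenset(): 0,
    frozenset({"town_A"}): 10,
    frozenset({"town_B"}): 18,
    frozenset({"town_C"}): 8,
    frozenset({"town_A", "town_B"}): 40,
    frozenset({"town_A", "town_C"}): 30,
    frozenset({"town_B", "town_C"}): 35,
    frozenset({"town_A", "town_B", "town_C"}): 60,
}

shapley = {p: 0.0 for p in players}
orderings = list(permutations(players))
for order in orderings:
    coalition = set()
    for p in order:
        before = v[frozenset(coalition)]
        coalition.add(p)
        after = v[frozenset(coalition)]
        shapley[p] += (after - before) / len(orderings)

print(shapley)  # the three values sum to v of the grand coalition, 60
```

The point of the exercise is that each town's share reflects what it adds to every possible coalition, not just what it earns standing alone.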

Shapley Value - Suboptimal

In this figure, the town associated with the value 8 is having some problems and is not contributing to the value chain to an optimal degree. The town associated with the value 18 has not been impacted, but all the other towns in the value chain are seeing their earnings decline.

That's just one value chain. Instead of cities, think about your network infrastructure. So now we'll move on to a server, a server hosting a database. That server is in town, a city unto itself. It connects to another server hosting a data warehouse. There are connections upon connections, layers upon layers, maps upon maps. You can imagine it yourself. Just try mapping out your Twitter experience, or your blog host and RSS feed reader experience. Diligence will get this map done. You will end up with game upon game, onions of games. GIS is the tool that can take you there. Relational can define a single layer, but the toilet map, otherwise known as the sewage system, doesn't relate well to the electrical system.

There I was, working hard to do a GIS analysis with an RDBMS, while wondering why ITIL system vendors hadn't gone GIS. It's probably cultural. If your IT organization hasn't gone GIS yet, why would an IT management system be the first to do so? Ultimately, in the abstract, it all boils down to features and how they relate to each other. All those features end up on a vector. Bivectors relate vectors, but it is work that makes those features work.

In an application, its features comprise a coalition. Each feature contributes, to a different degree, to the value a customer derives from the application. I prefer to deliver features as networks of related functionality, or minimal marketable functions (MMFs), because doing so delivers some value to the customer, the economic buyer, earlier; lets users learn the application over time; and lets a vendor schedule their revenues and cash flows more consistently. So I'll focus on the MMF as the unit of coalition.

In the comment, Kenny Bastani asked about a list of "features" that tend to be layers of crosscutting concerns or aspects, thus leading to Aspect-Oriented Programming (AOP). His considerations were:

  • Integration of CRM, data warehouse extracts, aligning vendors for product integration, designing a support strategy for all third party systems and data integrated into the product?
  • Security risks for centralizing a data layer from so many different systems?

To get our hands around this, we can consider each aspect to be a vector. When I talked about bivectors in the previous posts, I summed them up into a single technology vector, a single product vector, and a single business-components vector. Vectors sum easily enough. But let's take a more expansive look at the base vectors that were summed.

Summed vectors

Back when we sold technology, instead of a webpage, the product was built on top of a technology. Customers bought the product, but they installed and configured that technology before the product would work. Much of that technology was already present in the product; the product wouldn't install otherwise. With webpages, we use a server, a browser, tons of technologies, but we don't sell any technology. Our products are not fostering adoption of some technology. The underlying technology has already been adopted and, for the most part, sits in Moore's late mainstream market, much of it approaching the horizontal asymptote at the top of its S-curve. These technologies that we use, but don't sell per se, are our whole product components. The cloud, likewise, supplies whole product components. In this figure the whole products are shown as one summative vector, instead of as a collection of individual vectors representing each component. Notice that the granularity, and whether you use summative vectors or not, is up to you.

I've shown two technology vectors to illustrate that a vendor that actually sells a technology will always need a second, subsequent, discontinuous technology available to switch to when the initial or prior one commoditizes. In my Slideshare presentation, I summed these vectors and their S-curves, starting with slide 53. But in the Framing Post For Aug 12 Innochat: The Effects of Booms and Busts on Innovation, I came to realize that not only are these technologies discontinuous, the market is likewise discontinuous. Why this surprised me is that I already knew this about discontinuous technologies, you can't sell the next one to your current market, but I had not extrapolated that to the vector and S-curve visualizations.

For both the product and the business components in the offer, I used a shorthand, the eigenvector, to build those vectors. Why shorthand? Well, I'm not showing that each minimal marketable function (MMF) or business capability (BC) need not be aligned; showing them aligned was just quicker to draw. The current representation isn't quite an eigenvector, in that eigenvectors are used as unit vectors. I do think of MMFs that way, because MMFs originated with feature-driven development (FDD), an agile approach, which means that a single MMF is delivered in a single release. That release is timeboxed, as are all releases. Think one per quarter and tie it to the quarterly revenue goals of the subsequent quarter. The BCs are another matter. By capabilities I mean not just processes, abilities, and people, but also policy. Policies arise, or are explicated from implicit expectations, in a mostly reactive, event-driven manner.

A company can be represented by its cost structure and policy structure as vectors.

Aspects as Vectors

This figure takes us back to the comment. Each of those aspects in the comment can be represented by a summative vector, or by that vector's collection of base vectors. When all the vectors for an aspect share an origin, via translation, just because we want them to, we end up looking at what linguists would call a morpheme, or a probability cloud defined by those aspect vectors. When laid end to end, we end up with a road. The road shows us that we can get to a Shapley value with any collection of vectors.

All of this goes to show that we have our vectors of differentiation. Those vectors compete and collaborate. Those vectors have an internal price/cost and an externalizable performance, a performance relative to a market, large or small. Those vectors have populations involved with them, and from the Slideshare presentation that implies Poisson distributions, Markov processes, grammars, and machine learning associated with them. Those vectors have triangle models associated with them, and several other representations I have yet to discuss. Math is a massively distributed, collaborating population of various types, people, ideas, theorems, calculations, all rolled up into a massive geography that no one is entirely familiar with.

So goes any IT system as well. Please, do your ITIL people a favor. Get a GIS system. They need it.

The math under the Shapley value is easy. Google it until you find an accelerator that makes it easy for you. The point behind the Shapley value is that the system has more value in it than is obtainable by its parts alone. This should make it clear that value is not at the interface; it is in the interactions deeper in the space of work, where your feature feeds someone else's, where emergence and radiosity tip their hand.
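For reference, and this is my addition rather than something spelled out in the post, the textbook formula gives player i's Shapley value as the average of i's marginal contribution over every order in which the grand coalition N can form:

```latex
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
            \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
            \Bigl( v\bigl(S \cup \{i\}\bigr) - v(S) \Bigr)
```

Each term weighs the marginal contribution v(S ∪ {i}) − v(S) by the number of orderings in which coalition S forms before i arrives.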

Comments?

The ordinals we call a clock

April 4, 2011

I remember Campy saying that reengineering wasn't about object oriented. This was back in the day when object oriented was new, as was reengineering. It struck me as hilarious that a business person would say that. Campy wasn't a programmer. As for me, I lived through things before the computed GOTO, the computed GOTO, structured programming, recursive COBOL, and information modules that blew up the compiler in one version of IBM PL/I and didn't in another, thankfully. I lived through professors who were not yet caught up in object oriented. The one thing you can say is that structure made programming easier, and object oriented made it easier yet, so I wondered why reengineering wasn't about object oriented.

When I was reading Moore's technology adoption lifecycle (TALC) books, I ran across his claim that the lifecycle is not a clock. Sure it is. This before that. That before that other. That other before this here, …. It told you what to expect the next time your business was in for a big change. So the TALC has been a clock, an asynchronous clock, for as long as I can remember.

An asynchronous clock is a weird clock. Try talking about time in an email thread, or even a twitter thread. Most of us, I'm sure, have had to deal with our machine's clock ignoring the network server's clock. Yes, a mess, but a clock nonetheless.

A person with one watch knows what time it is. A person with two watches never knows. Worse, the person with one watch is turned into a person with two watches whenever they have to meet someone with their own watch. We live on islands of time. It takes a whole lot of trouble to keep everything synchronized, even if we don't leave our timezone.

So here I was reading a lightweight math dictionary, Eula Monroe's Math Dictionary, and somehow the term "clock" was included. How many other math dictionaries have I read that never included "clock"? I didn't really read the definition. It was simple enough. The definition lay in wait until I got to "Clockwise." A clock is an ordered number line. It didn't mention the base thing, base 12.

Ordered! That means ordinals, which I blogged about back in Ordinals for Product Managers. Those ordinals were the expression of your stakeholders' preferences. Ordinals express the relationship between things in terms of one being first, second, third, and so forth. Ordinals lie at the core of game theory. Ordinals act as constraints in linear programming problems: this preference must be met before this one. Pretty much, we need this amount of revenue to assure this amount of cash flow, and maybe some profit to boot.
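As a sketch of what ordinals-as-constraints could look like, here is a hypothetical linear program, my own example and my own numbers, not anything from the post, where the first-ranked goal must receive at least as much effort as the second, and the second at least as much as the third:

```python
# A minimal sketch (hypothetical goals, weights, and budget): ordinal
# preferences expressed as ordering constraints in a linear program,
# using scipy.optimize.linprog.
from scipy.optimize import linprog

# Value per unit of effort for three goals, ranked first, second, third.
values = [5.0, 3.0, 2.0]
budget = 10.0  # total effort available

# linprog minimizes, so negate the objective to maximize total value.
c = [-v for v in values]

# Ordinal constraints: effort on the first-ranked goal must be >= the
# second, and the second >= the third (x2 <= x1, x3 <= x2), plus a budget cap.
A_ub = [
    [-1, 1, 0],   # x2 - x1 <= 0
    [0, -1, 1],   # x3 - x2 <= 0
    [1, 1, 1],    # total effort <= budget
]
b_ub = [0, 0, budget]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(result.x)  # an effort allocation that respects the ordinal ranking
```

The ranking doesn't pick the winners by itself; it just constrains the solution so that lower-ranked goals can never crowd out higher-ranked ones.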

In an offering, we mix minimal marketable functionality from the product with business offer elements from various organizational units in the hope that value emerges and gets paid for. That offering can be visualized as a bivector. I mentioned bivectors at the end of my last blog post, Constraints. But I jumped the gun, because I showed you two bivectors instead of just one.

Two Separate Bivectors

Here I've blown up the two overlapping bivectors from the previous post and shown them as two separate bivectors. We do generate our offer components separately. In the late mainstream market, where the web is today, product managers need to think about the biz components. Every statement asserted in a policy is a biz feature. Invoicing, billing, shipping, reporting, licensing, all of that stuff is a biz feature of an offer if it interacts with the customer or the user. Likewise, customer support and technical support add their features to the biz offer vector. Some of these features migrate to software. In preparation for entry into the late mainstream market from the early mainstream market, if your company is making that move, migrate some of those biz features to the web and get the customer used to using them.

But what about the ordinals we call a clock? That some goal/preference is first doesn't mean it gets done first. It's more likely that it will get done last, but it will be composed of, and be a reflection of, our success in achieving all those other goals/preferences.

Clock and Goals

In this figure the primary goal, expressed in terms of quarterly revenues and profits, doesn’t happen until after the close of the next full quarter after the release. It is the end of the clock when we move counterclockwise, but we have to achieve the underlying technology before we can productize it and turn that product into sales. Unroll the clock. Unroll your calendar as well, and put those preferences on the calendar. The calendar is ordered as much as a clock is ordered.

In game theory, the normal-form game, the table form, represents a simultaneous game. No clock is involved. You play the infinite game to play again. You can find your saddle point, or the strategy mixture that optimizes your outcomes regardless of what your opponent does. Time is not leveraged. You build the capabilities to achieve your strategy, hence your outcomes, then you leave it be until some change forces you to change your strategy. Change happens, so change happens. You don't get to let it be.
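As a minimal sketch of that saddle-point hunt, here is a hypothetical zero-sum payoff matrix for the row player, my numbers rather than anything from the figures, checked by comparing maximin against minimax:

```python
# A minimal sketch (hypothetical payoffs): finding a pure-strategy saddle
# point in a normal-form, zero-sum game given as the row player's payoff matrix.
payoffs = [
    [3, 5, 4],
    [1, 6, 2],
    [0, 7, 8],
]

# Row player: the worst case in each row is the row minimum; maximin is the
# best of those worst cases.
maximin = max(min(row) for row in payoffs)

# Column player: the worst case in each column is the column maximum; minimax
# is the smallest of those.
columns = list(zip(*payoffs))
minimax = min(max(col) for col in columns)

if maximin == minimax:
    print("Saddle point; value of the game =", maximin)
else:
    print("No pure-strategy saddle point; a mixed strategy is needed.")
```

When the two numbers agree, that shared number is the value of the game for both players, which is the same point the corrected graph later in the post makes.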

Game theorists tell you that if you wait to see what your opponent does, you can move from a normal-form game to an extensive-form game, where the players take turns. The temporal network can become messy.

Like our offer bivectors, the vectors are built independently. In the clock-as-ordinals representation, these capabilities are independent games summing up much like a Gantt-chart roll-up.

Ordinals move. You talked to the stakeholders last week. You know what they wanted. You can't deliver all of it. So who will loosen their preferences? Why would they loosen their preferences? Once loosened, are they deliverable? It's not enough to listen. You have to push back. You have to educate, to persuade, and to ensure that the roll-up goal is achieved no matter how far away from it you are.

This leads somewhere.

When I graphed my first normal-form game to find the saddle point, I hadn't read the theorem that insists that your game has the same value as your opponent's game. I had just read that the row player's value of the game is the peak at the bottom of the graph.

That First Game Graphed

I’ll own up right now. This graph is not correct! The following is correct. The red point is the value of the game generated by minimax and maximin. It is the value of the game for both players. Much is lost if all you want to do is find the optimal.

Corrected Graph

The incorrect graph does show something I found interesting. If we were talking about the physical constraints in our offer, the technology, it reads as a temporal map of our technology palette. It's one of those maps that researchers have in their heads expressing the temporal layout of their field. It tells us when a given problem will be overcome. It shows us the pathways. It doesn't, really, but I made the leap. So jump, this creek isn't that wide.

The reason it doesn't is that we built our BI and fusion networks looking at something else.

This map is still important. So let's just use what it tells us. What would our pathway be if we were to compete at the black dot?

Pathways

The red position resulted from your existing or planned capabilities: A, B, and an unnamed one. The unnamed one doesn’t lead further into the future unless you broaden the scope of your offer. Progress in improving capability A stopped a while back, but you still have people working on it. B is still being worked on as well. A and B exist as real options. Further investment in A and B will move your company forward.

If you continue improving A, you will reach C. C would be a totally new space for your company, so you would have to spend money and time to achieve your performance beyond the point where A meets C. C, if you have defined it well and can communicate it clearly, is a good candidate for open development. You don't need to own C. You just need it to exist. The further C is from your core competencies, the less willing you should be to create C yourself.

Likewise, improving B will get you to the point where you need D. C will get you to D as well. Again, another potential open development project.

The way open is talked about today, it ties into the idea of open standards and partnering. Neither is necessary for open development. Culture fit isn't necessary either. Nor is your active management of the open enterprise. If you don't have the time or resources, find someone else who can do it. They are not doing it for you. They are doing it for themselves. It's critical not to waste your managerial focus, or to denigrate their management. They know how to manage their business. You are not a client or customer. You just had an idea. What you need is to get that idea on the market, so that it arrives when you do, so you can leverage it. That's all you need, and that's all you can expect.

I ran across this particular view of open development at an Orange County PDMA meeting. An executive from the Newport corporation presented their approach to open development. A few meetings later, a panel presented the usual "cultural fit" view. In Moore's "Living on the Fault Line," Moore discussed what to outsource, what to keep, and what to wonder about. The core reason to outsource was not cost, but to preserve managerial focus. Unfortunately, managerial focus is implicit, thus unmanaged. I asked panel members if they knew how much they individually added to the cost of their outsourcing. Their response was that they had budgets, but that didn't answer the question. Most employees of corporations, including CEOs, don't clock ideation, or thinking, unlike consultants and people in, say, the advertising business, who sell ideas.

If we could see the boundary as a boundary land, rather than a boundary line, we'd know it was thick. We'd know that in that thickness are options, opportunities, room to maneuver. We'd know that there is a clock. We could look at the S-curves of each of those vectors and decide that we'll route through the traffic to gain speed without being faster. We'd route A>B>C>D, and if B didn't show up, we'd skip it and move on to D with a route of A>D. The route would depend on our estimation of which constraints would yield first. Given enough money, as in we work for "The" market leader, we could undertake both pathways. We could jump to the intersection CD and work out from there with a new organization that works outward towards both the red dot and the black dot.

We would figure out where zero hour was on our clock and move clockwise and counterclockwise to ensure that we hit our financial goals, keep the financial markets happy, and split our stock.

Our clocks need not be linear any more than our product roadmaps need to be linear. So let's be a little more asynchronous, a little more respectful of the managerial skill of those in our value chains, a little more non-linear.

Comments?