A game boils down to a set of rules. The rules define the pieces, the board, the moves, and the outcomes.
Extensive Form Games
Back in artificial intelligence class, we programmed a game. The hardest part was finding a heuristic function to evaluate the outcomes, so we could choose the best position. We used the rules to generate a subtree, then applied the heuristic function to each leaf to determine our direction of play within that subtree.
Generating and evaluating the leaf nodes, the last tier of the subtree generated so far, could be done in either a depth-first or a breadth-first manner. You couldn't generate the whole tree, so you selected one of these approaches to generate a particular portion of it. Typically, time was limited by the interactive nature of the game.
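The subtree generation and heuristic leaf evaluation just described can be sketched as depth-limited minimax. The sketch below assumes a generic game object with `moves`, `apply`, and `heuristic` methods; those names are illustrative, not from any particular library.

```python
# A sketch of depth-limited, depth-first minimax. The game object and its
# moves/apply/heuristic methods are illustrative assumptions.

def minimax(game, state, depth, maximizing):
    """Generate a subtree down to `depth`; score the leaf tier heuristically."""
    moves = game.moves(state)
    if depth == 0 or not moves:        # leaf tier of the subtree we had time for
        return game.heuristic(state), None
    best_score, best_move = None, None
    for move in moves:                 # depth-first expansion of the tier below
        score, _ = minimax(game, game.apply(state, move), depth - 1,
                           not maximizing)
        if (best_score is None
                or (maximizing and score > best_score)
                or (not maximizing and score < best_score)):
            best_score, best_move = score, move
    return best_score, best_move
```

Swapping the recursion for a queue of states would give the breadth-first, tier-at-a-time variant instead.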
We played a game and, in doing so, generated some portion of the game tree. We'd play another game and generate another portion. Eventually, if we played enough games, we generated the whole tree at least once. We also generated the same early positions over and over again. Maybe we had time to get smart, remember our trees between games, and generate only the new positions.
If we generated a tree in a breadth-first manner, we generated an entire tier at one time. Then, we would generate the next tier.
If we had wanted the game to learn, we would have adjusted the probabilities of the moves taken after each game. We never got that far. Adjusting the weights would have made the game more like a neural net, or factor analysis.
There was a hint about probabilities in all this game playing, but in our class we didn't dig into them, because it wasn't a statistics class. There was something interesting in the game tree, but I didn't see it until I came away from my Seattle ProductCamp 09 presentation unhappy.
I decided to go back to a little game we built with matchboxes back in 7th grade. You put beads in each box to drive the move selection probabilities. You added or subtracted beads after each game to change the probabilities going forward, to learn. I started drawing the tree, but it's huge and as yet unfinished. Still, it showed me something interesting.
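The matchbox learner just described can be sketched in code. The class and method names below are illustrative; the only assumptions are that states are hashable and that the caller supplies the legal moves.

```python
import random

# A toy version of the matchbox learner: bead counts in each "box" drive
# move selection; beads are added or removed after each game.

class MatchboxPlayer:
    def __init__(self, initial_beads=3):
        self.boxes = {}              # state -> {move: bead count}
        self.initial = initial_beads
        self.history = []            # (state, move) pairs from this game

    def choose(self, state, moves):
        box = self.boxes.setdefault(state, {m: self.initial for m in moves})
        population = [m for m, n in box.items() for _ in range(n)]
        move = random.choice(population)     # bead counts set the probabilities
        self.history.append((state, move))
        return move

    def learn(self, won):
        delta = 1 if won else -1             # add or remove a bead per move made
        for state, move in self.history:
            self.boxes[state][move] = max(1, self.boxes[state][move] + delta)
        self.history = []
```

The floor of one bead keeps every move playable, so the learner can never paint itself into a corner.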
When you look at each tier across time, each tier, each time slice, has its own distribution. As a whole the game will have a distribution, call it a normal distribution, and each tier is a slice running parallel to the center line dividing that distribution.
A game tree diverges and then converges. The tree starts narrowly, widens in the middle and then narrows again at the bottom. The tree fits under the surface of the 3D normal distribution.
In life, we contemplate, we organize, we decide, and we act. All of these things take time. We loop through this process with each iteration providing a platform for the next loop. Each iteration has us arriving at a tier in the aggregate decision tree that moves us towards realization.
A game generates a tree. A realization, or product, results from a decision tree. We analyze the game at the beginning of the strategic timeframe. We make our best guess given the information on hand and our heuristics. We make a strategic choice. Then, we work towards realizing that strategic choice.
Game theory solutions are computationally intensive. We run our simulation as long as we can. We make our decision before we have complete information. We only have so much time to compute. Then, we decide. We have to commit to the decision. We have to execute. We have to implement.
Actually, the team executes and implements. The decision makers move on to other strategic decisions. The simulation stops once we make the strategic decision.
When you go to Amazon, it presents you with recommendations. Those recommendations were computed the previous evening and have sat there waiting to be displayed to you. The next night, new recommendations are computed. They are computed overnight because, like the game simulation, they are computationally intensive.
So what would happen if the simulation kept on running beyond the moment when we made our strategic choice? We would have a more complete picture. We might even know what to expect, and if our expectations were not being realized, we could ask ourselves why. We could find the surprises before they showed up in our P&L, our P&L being a lagging indicator.
The simulation runs faster than the processes we use to realize our strategic choices. Better answers arrive if we let them.
Social Partitioning of the Game Space
When 56-bit DES was cracked, optimistically in about 2.5 months, the solution space for brute-force key search was socially partitioned. Starting at zero and incrementing up to 2 to the 56th power would take too long, even on today's desktop speed demons that equal or exceed the Cold War supercomputers. Dividing the space up sped up finding the solution.
Your game space is partitionable. You could give different people different starting points, or different paths, so the whole tree is generated faster. Each person would assert a set of starting conditions and generate their subtree. As the number of participants increased, a solution could be found sooner.
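The partitioning idea can be sketched simply: carve the space into chunks and hand each participant a different starting offset. The chunking scheme and the `is_solution` predicate below are illustrative assumptions.

```python
# A sketch of social partitioning: split a search space into near-equal
# chunks so each participant starts from a different offset.

def partition(space_size, workers):
    """Split [0, space_size) into near-equal half-open chunks."""
    step = -(-space_size // workers)            # ceiling division
    return [(lo, min(lo + step, space_size))
            for lo in range(0, space_size, step)]

def search_chunk(lo, hi, is_solution):
    """One participant's share of the brute-force search."""
    for key in range(lo, hi):
        if is_solution(key):
            return key
    return None
```

Run the chunks in parallel and the expected time to a hit shrinks roughly in proportion to the number of participants.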
WIF and AsIF
When we build a model in a spreadsheet, we set some variable values. Then, when we don't like the results, we change the values. This is a breadth-first search: we are working widely and iterating toward the desired result. When most of the model suits our purposes, but we want to test our sensitivity to changes in a single variable, we are doing a depth-first search. Both breadth-first and depth-first searches are "What If," or WIF, explorations.
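A toy spreadsheet-style model makes the two WIF modes concrete. The formula and numbers below are invented purely for illustration.

```python
# A toy spreadsheet model used to contrast the two WIF search modes.

def profit(price, units, unit_cost, fixed_cost):
    """The model: revenue minus variable and fixed costs."""
    return price * units - unit_cost * units - fixed_cost

# Breadth-first WIF: vary several inputs at once, scanning widely.
scenarios = [profit(p, u, 4.0, 1000.0)
             for p in (9.0, 10.0, 11.0) for u in (100, 200)]

# Depth-first WIF: hold everything else fixed, sweep one variable deeply.
sensitivity = [profit(10.0, u, 4.0, 1000.0) for u in range(100, 301, 50)]
```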
Strategy and its linear forecasts are WIFs.
Another mechanism, backwards induction, can be used to quickly obtain the desired solution when you have your game tree completely specified. Here you find a win in a leaf tier and assert that it is the solution. Then you work up the tree to delineate the pathway to that solution. This is the AsIF approach.
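Backwards induction over a fully specified tree can be sketched as follows. The nested-dict tree encoding, with interior nodes as `{move: subtree}` maps and leaves as payoffs, is an assumption made for illustration.

```python
# A sketch of backwards induction: solve from the leaf tier up, then read
# the pathway from the root to the chosen leaf.

def backward_induction(node, maximizing=True):
    """Return (value, path) for a tree of {move: subtree} dicts and payoff leaves."""
    if not isinstance(node, dict):              # a leaf: the outcome is known
        return node, []
    solved = {move: backward_induction(sub, not maximizing)
              for move, sub in node.items()}
    pick = max if maximizing else min
    move = pick(solved, key=lambda m: solved[m][0])
    value, path = solved[move]
    return value, [move] + path
```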
WIFs look for the path to the solution working down the tree. AsIFs look for the path from the solution to the root.
WIFs are analogous to strategy with its forecasted basis asserting linearity. AsIFs are analogous to vision where you seek a future absent the necessary capabilities, an unforecastable nonlinearity.
Normal Form Games
The tree representation of a game is called its extensive form. Such a game also has a table representation, the normal form. The games we typically think of in game theory, Nash games, are normal form games. Tables obscure geography, elevation. We use the tables to find the equilibria, aka the spaces we want to avoid; to discover the "don't cares," those irrelevant subtrees whose outcomes can vary without changing the "best" strategy; and to recover the geography generated in the extensive form, the tree form, of the game.
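Finding the pure-strategy equilibria in a normal form table can be sketched by checking mutual best responses. The payoff table used here is the standard Prisoner's Dilemma, included only as a worked example.

```python
# A sketch of pure-strategy Nash equilibrium search over a two-player
# normal form table; payoffs[r][c] holds (row payoff, column payoff).

def pure_nash(payoffs):
    """Return the cells where neither player gains by deviating unilaterally."""
    rows, cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for r in range(rows):
        for c in range(cols):
            row_best = all(payoffs[r][c][0] >= payoffs[i][c][0] for i in range(rows))
            col_best = all(payoffs[r][c][1] >= payoffs[r][j][1] for j in range(cols))
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

# The classic Prisoner's Dilemma: mutual defection is the lone equilibrium.
dilemma = [[(-1, -1), (-3, 0)],
           [(0, -3), (-2, -2)]]
```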
A little game theory overcomes the computational limits inherent in the extensive generation of the game tree.
Dashboards, Outcomes, Preferences
The normal form is a dashboard. The values in the table are outcomes, either yours, or yours and your opponent's, depending on the format of the table. Those outcomes result from calculations that you must define. An outcome starts with sensor data, gets fused in computation, and is presented in the table. These sensors, fusion computations, and presentations constitute a decision support network. You define it for your decision making, then, like the tree generation earlier, you walk away from it.
You have to define your outcomes. Those definitions begin with the preferences of the stakeholders. Those preferences must be stable for your decision to be rational. Those preferences remain stable beyond your making of the decision. Those preferences remain stable well beyond the realization’s quarterly P&L. Those decision support systems remain stable as well.
Each outcome has its own decision support system.
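The sensor-fusion-presentation path for a couple of outcomes can be sketched as a tiny pipeline. The outcome names, weights, and fusion rule below are all hypothetical.

```python
# An illustrative decision support pipeline: raw sensor readings are fused
# into outcome values, then presented in the dashboard table.

def fuse(readings, weights):
    """Fusion step: combine raw sensor readings into one outcome value."""
    return sum(r * w for r, w in zip(readings, weights))

def present(outcomes):
    """Presentation step: round fused outcomes for display in the table."""
    return {name: round(value, 2) for name, value in outcomes.items()}

raw = {"win_share": [10.0, 4.0], "churn_risk": [6.0, 6.0]}   # sensor feeds
weights = [0.8, 0.2]
table = present({name: fuse(readings, weights)
                 for name, readings in raw.items()})
```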
If your game has equilibria, those decision support systems will tell you when you are taking on too much risk. If your game has "don't cares," they won't tell you much just yet, but they hint at commoditization and the need for a new vector of strategic differentiation. They are already telling you to find new, more relevant dimensions to pursue. Everywhere else, those decision support systems tell you whether you are succeeding or not.
Games are about strategy under conflict. The product manager faces conflict daily. Is it practical to control the battlefield? Does proactivity work? Or would you rather be reactive, in the battle rather than beyond it?