## Trapezoids

I woke up this morning with trapezoids on my mind. What the heck? I’ll be using them to represent generative adversarial networks (GANs). The input to a GAN is the output of another neural network. GANs take that output and minimize the number of incorrect findings in it.

We’ll get there through the triangle model. A triangle represents a decision tree. Back in the waterfall era, you started with the requirements phase. Then you took the requirements into the design phase, where you traded off enablers and constraints against the requirements. That got you an architecture. From there you wrote the functions, did the unit testing, and then shipped it to the testing department. Yes, we don’t do that these days. All of those phases fit into one triangle.

So I started this thing a long way from the triangle model, traversed many triangles, and ended up with a trapezoid before I got to a GAN. And I finished with several GANs. I end with a few notes on “don’t cares.”

A triangle starts somewhere, anywhere, well, where you are. It starts with one point, the origin. That point has to be someplace in space and time, someplace in logic, someplace in a set of assertions. Those assertions start somewhere and end somewhere else. That point sits in a place, a place full of assertions. The circle represents the extent of the place, the extent of the assertions.

In the linear programming sense, a place can be an area defined by a set of constraints expressed as a collection of inequalities. Research in every domain attempts to move or break the constraints limiting our ability to get things done. Once a constraint is broken, we can do something we could not do before, or do it someplace we couldn’t do it before. Once a constraint breaks, we discover new constraints that define the extent of the new area. Either infinity or finiteness limits us.
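As a minimal sketch of that linear-programming sense of a place: the region is the set of points satisfying `A @ x <= b`, and breaking a constraint means relaxing a row of those inequalities. The matrix, bounds, and `feasible` helper below are all illustrative, not anything from a real model.

```python
# A "place" as the feasible region of the inequalities A @ x <= b.
import numpy as np

A = np.array([[1.0, 1.0],    # x + y <= 4
              [-1.0, 0.0],   # x >= 0
              [0.0, -1.0]])  # y >= 0
b = np.array([4.0, 0.0, 0.0])

def feasible(x):
    """True when the point x satisfies every inequality, i.e. lies inside the place."""
    return bool(np.all(A @ np.asarray(x, dtype=float) <= b))
```

Breaking a constraint, say relaxing `x + y <= 4` to `x + y <= 6`, enlarges the region: points that were outside become reachable, and the remaining inequalities define the extent of the new area.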

So here we are in our place looking out from the center of our world to some distant destination. We see a path. We wonder how far it is from here to there. We propose a solution. We propose a definition. We give the line from here to there a distance. But we’ve defined it with things unknown in our place. The term we are defining is not fair and balanced. It is asymmetrical, so we have to learn more. We have to keep searching for a definition more symmetrical than the one we have now.

Notice that we have a triangle formed by the black realization line and the red lines delineating the extent of the decision tree. The definition is a decision tree that is expressed in a generative grammar and built from the edge of the outer circle to the line exhibiting some distance.

Alas, the definition must be within our place. The decision tree must change shape. So with the realization line as the base of the triangle, we change our definition of distance until it is entirely inside our place. We change our definition until it is symmetric. We conduct experiments by adding, subtracting, and changing our assertions. We work outward from the origin in a top-down manner until we reach our goal.

The learning implied by the original asymmetry, and completed once we achieved symmetry, moved us from one definition through a series of additional definitions and finally to a better definition. When we moved on from this better definition, we became asymmetric again. All of this took time. We learn at different rates. Some learned it faster, the thin line at the bottom of the surface. We planned on it taking a certain amount of time to learn, the thick line. And some took longer, the thin line at the top of the surface. Each learner traverses a different distance at a different rate.

A game can be described as a triangle. The game tree begins at some origin, and the game space, where the game is played, explodes generatively outward only to encounter the constraints. Further play focuses toward the eventual win or loss. Here I’ve illustrated a point win.

This game is one of sequential moves. Before a game can be played, the rules and the board must be defined. The rules define moves, applied generatively, and constraints that filter moves and define wins and losses.

A game can also have a line solution, rather than a point solution. Chess is a game with a point solution representing a checkmate. There are other situations like a draw, so chess has a line solution that includes all the alternatives to continued play. While I’ve drawn this line as a continuous line, it could be represented by a collection of intervals occurring at different times.

Here the notion of assertions having a distance let me define distances from the origin. I’ve called these the assertional radii. Each individual assertion has a distance of one, so six assertions would give us an assertional radius of six. Six would be the maximum distance. If two of those six assertions are combined into a single assertion that ANDs them, one assertion is subtracted from the six. In the figure, we have two AND assertions, each eliminating one assertion, so the assertional radius of that branch of the tree is two less than the maximum.
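That arithmetic is small enough to write down. A tiny sketch, assuming each assertion contributes a distance of one and each AND merges two assertions into one (the function name is mine, not a term from the post):

```python
def assertional_radius(num_assertions, num_ands):
    # Every AND replaces two assertions with one,
    # shrinking the branch's radius by one.
    return num_assertions - num_ands
```

Six OR-chained assertions give the maximum radius of six; the branch in the figure, with its two ANDs, comes out two shorter.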

The brown area represents losses; the white area, wins; the yellow area, prohibited play, aka cheating.

So we’ll leave games now.

The triangle model has at times confused me. Which way does it grow? In the waterfall, it grew from requirements to the interface, and use beyond that. In Yes or No in the Core and Tails III, ontologies grew outward from the root to the base, and taxonomies grew from the base to the root. Ontology works towards realization. Taxonomy works off of the realization.

The symmetry in this figure is accidental.

Neural nets work from the examples of realizations. Neural nets work from the base to either a point solution or a line solution. Here the weights are adjusted to generate the line solution or the point solution. Point solutions can be viewed as a time series. In both solutions, we are given a sequence of decisions with varying degrees of correctness. These sequences are the outputs of the machine learning exercise. Line solutions give us trapezoids.

Generative adversarial networks (GANs) are a recent development in machine learning. They classify the outputs of a neural net and try to improve upon them. The red and blue trapezoids improve on the performance of the initial neural net, shown in black. The GANs are dependent on the initial neural net. The GANs are independent of each other. Building a hand recognizer on top of an arm recognizer is one example of an application of a GAN.
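For concreteness, here is a minimal sketch of the standard GAN training loop in a toy 1-D setting: the generator is linear, g(z) = a·z + b, and the discriminator is logistic, d(x) = sigmoid(w·x + c), with the gradients derived by hand. Everything here, the target distribution, learning rate, and step count, is an illustrative assumption, not a production recipe.

```python
# Toy 1-D GAN: a linear generator learns to imitate samples from N(4, 1.25).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters: g(z) = a*z + b
w, c = 0.0, 0.0   # discriminator parameters: d(x) = sigmoid(w*x + c)
lr, steps, batch = 0.05, 3000, 64
target_mu, target_sigma = 4.0, 1.25  # the "realizations" to imitate

for _ in range(steps):
    z = rng.standard_normal(batch)
    real = rng.normal(target_mu, target_sigma, batch)
    fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w -= lr * np.mean((d_real - 1) * real + d_fake * fake)
    c -= lr * np.mean((d_real - 1) + d_fake)

    # Generator step (non-saturating loss): push d(fake) toward 1.
    d_fake = sigmoid(w * fake + c)
    a -= lr * np.mean((d_fake - 1) * w * z)
    b -= lr * np.mean((d_fake - 1) * w)

samples = a * rng.standard_normal(10000) + b
```

After training, the generated samples cluster around the target mean, which is the sense in which the second network’s criticism improves the first network’s output.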

So I’ll end this discussion of GANs with a graphical notation of GANs. The above illustrations of GANs can be simplified to the following figure.

## Notes on Don’t Cares

Here I’ll expand on the discussion of don’t cares in Yes or No in the Core and Tails III.

Twitter had me Googling for the Area Model. Later, while I drew up the assertional radius idea, it became clear to me that ANDing reduces the assertional radius. When you just OR the assertions into a long chain, you get the maximum radius. ANDing yields a shorter distance. By setting that maximum distance as the bottom of the decision tree, the shorter branches make up the difference in the branching of the binary tree by replacing assertions with don’t cares.

Later in the day, it struck me that shortcut math, aka multiplying a long sequence of factors where one of them is zero, means every factor other than the zero becomes a don’t care.
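That shortcut is easy to sketch: once a zero factor appears, the product is zero no matter what the other factors are, so the loop can stop and treat the rest as don’t cares. The function name is mine.

```python
def shortcut_product(factors):
    """Multiply a sequence of factors, stopping at the first zero."""
    total = 1
    for f in factors:
        if f == 0:
            return 0  # every remaining factor is a don't care
        total *= f
    return total
```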

How does today’s post tie into product management?

Design has many definitions. I’d go with an activity that is judged by some critical framework. Different disciplines use different critical frameworks. GANs are how you apply a critical framework to the output of a neural net. GANs can be stacked on top of each other to any depth. Many GANs can be applied to the same output of a neural net.

Earlier in the week, I got into a discussion with a UI designer who insisted that simple was best. I was saying that different points in the technology adoption lifecycle require different degrees of simplification and complexity. Yes, late mainstreet or later requires simplicity, but I’ve seen much simplification just in moving from functionality-type programming to web pages, from web pages to devices, and from devices to the cloud. Form factors force simplicity. Complications arise when the form factor gets in the way of the work. Anyway, simplicity is apparently an ideology. We couldn’t discuss the issue. It was absolute. Fitness to use and fitness to the user, particularly the current user or the next pragmatism slice through our prospects, matter more than absolute simplicity.

During Web 1.0, we were selling consumer goods to geeks. Geez. If it gets too simple and the users are geeks, you’ve made a mistake, a huge mistake. Even geeks make this mistake when we talk up some new machine learning tool that simplifies the effort to apply that technology, because soon enough it will be too simple to make any money doing it.

Asymmetries mean that learning is required. Learning rates differ in a population gradient. Know how much the user is going to have to learn in every release. Is that negative use cost going to be spent by your users?

Enjoy!
