From Time Series to Machine Learning

This post, “Notes and Thoughts on Clustering,” on the Ayasdi blog brought me back to some reading I had done a few weeks ago about clustering. It was my kind of thing, so I took a time series view of the process. Another post on the same blog, “The Trust Challenge–Why Explainable AI is NOT Enough,” boils down to knowing why the machine learning application learned what it did, and where it went wrong. Or, more simply, why the weights changed. Those weights change over time, hence the involvement of time series. Clustering likewise changes in various ways as n changes, and since n changes with time, time series is involved there as well.

Time is what blew up those supposedly random mortgage packages. The mortgages were temporally linked, not random. That was the problem.

In old 80’s-style expert systems, the heuristics were mathematics, so for most of us the rules, the knowledge, were not transparent to the users. When you built one, though, you could test it and read it. It couldn’t explain itself, but you could, or someone could. This situation fit rules 34006 and 32,***. That is what we cannot do today. The learning is statistical, but not so transparent, not even to itself. ML cannot explain why it learned what it did. So now there is an effort to get ML to explain itself.

Lately, I’ve been looking at time series in ordinary statistics. When you have fewer than 36 data points, the normal is a bad representation. The standard deviations expand and contract depending on where the next data point lands, and that same data point moves the mean. Then there is skew and kurtosis; in finance class, there is skew risk and kurtosis risk. I don’t see statistics as necessarily a snapshot thing, done only once you have a mass of data. Acquiring a customer happens one customer at a time in the early days of a discontinuous innovation in the bowling alley. We just didn’t have the computing power in the past to animate distributions over time or data point by data point. We were told to use the Poisson distribution until we were normal. That works very well because the underlying geometry is hyperbolic, which explains why investors won’t put money on those innovations: the projections into the future get smaller and smaller the further out you go. The geometry hides the win.

It turns out there is much to see. See the “Moving Mean” section in the “Normals” post for a normal shifting from n=1 to n=4. Much changes from one data point to the next.
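
To make that concrete, here is a minimal Python sketch, with hypothetical data values of my own, that recomputes the mean and standard deviation as each data point arrives. At small n, both jump around with every new point.

```python
import statistics

# Hypothetical stream of observations arriving one at a time.
stream = [12.0, 12.0, 15.0, 9.0, 14.0, 11.0]

data = []
for x in stream:
    data.append(x)
    n = len(data)
    mean = statistics.mean(data)
    # The sample standard deviation is undefined until n >= 2.
    sd = statistics.stdev(data) if n >= 2 else float("nan")
    print(f"n={n}  x={x:5.1f}  mean={mean:6.2f}  sd={sd:6.2f}")
```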

I haven’t demonstrated how clustering changes from one data point to the next. I’ll do that now.

Clustering DP1

At n=1, we have the first data point, DP1. DP1 is the first center of the first cluster, C1. The radius would be the default radius, before any iterating of that radius toward its eventual value. That radius might be close to the data point, or at r=1.

At the next data point, DP2, it could have the same value as DP1. If so, the cluster will not move; it will remain stationary. The density of the cluster would go up, but the standard deviation would be zero.

Or, DP2 would differ from DP1, so the cluster will move and the radius might change. A cluster can handily contain three data points; don’t expect to have more than one cluster with fewer than four data points.
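
A minimal sketch of those first two steps, assuming a default radius of 1, Euclidean coordinates, and names of my own invention:

```python
import math

DEFAULT_RADIUS = 1.0  # assumed default, echoing the r=1 suggestion above

# n=1: DP1 is the center C1 of the first cluster, CL1.
dp1 = (2.0, 3.0)
center, radius, members = dp1, DEFAULT_RADIUS, [dp1]

# n=2: if DP2 has the same value as DP1, the cluster stays put and just
# gets denser; otherwise the center moves to a calculated point between
# the two, and the radius may grow to keep both points on the circle.
dp2 = (4.0, 5.0)
members.append(dp2)
if dp2 != dp1:
    center = tuple((a + b) / 2 for a, b in zip(dp1, dp2))  # midpoint, CP2
    radius = max(radius, math.dist(dp1, dp2) / 2)  # both on a diameter

print(center, radius)  # (3.0, 4.0) 1.4142...
```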

Clustering DP2

At n=2, both data points would be in the first cluster, and both could be on the perimeter of the circle. The initial radius would be used before that radius is iterated. With two points, the data points might sit on the circle at its widest, which implies that they sit on a line acting as the diameter of the circle, or they could sit closer together, nearer the poles of the circle or sphere. C2 would be a calculated point, CP2, between the two data points, DP1 and DP2. The center of the cluster moves from C1 to C2, or equivalently, from DP1 to CP2. Here the radius did not change; both data points are on a diameter of the circle, which means they are as far apart as possible.

The first cluster, CL1, is erased. The purple arrow indicates the succession of clusters, from cluster CL1 centered at C1 to cluster CL2 centered at C2.

P1 is the perimeter of cluster CL1. P2 is the perimeter of cluster CL2. It takes a radius and a center to define a cluster. I’ve indicated a hierarchy, a data fusion, with a tree defining each cluster.

With two data points, the center, C2 at CP2, would be at the intersection of the lines representing the means of the relevant dimensions. And there would be a standard deviation for each dimension in the cluster.
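
In code, that intersection of means is just the componentwise mean of the members. A small sketch, continuing the two hypothetical points from above:

```python
import statistics

points = [(2.0, 3.0), (4.0, 5.0)]  # DP1 and DP2

# The center CP2 sits at the intersection of the per-dimension means,
# i.e., the componentwise mean of the cluster members.
cp2 = tuple(statistics.mean(dim) for dim in zip(*points))

# One standard deviation per dimension of the cluster.
sds = tuple(statistics.stdev(dim) for dim in zip(*points))

print(cp2)  # (3.0, 4.0)
print(sds)  # (1.4142..., 1.4142...)
```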

New data points inside the cluster can be ignored. The center and radius of the cluster do not need to change to accommodate these subsequent data points. The statistics describing the cluster might change.

A new data point inside the cluster might land on the perimeter of the circle/sphere/cluster. Or, that data point could be made to sit on the perimeter by moving the center and adjusting the radius of the cluster.
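
The containment test behind these last two paragraphs is a distance comparison. A sketch, with a perimeter tolerance of my own choosing:

```python
import math

def classify(point, center, radius, tol=1e-9):
    """Report whether a point is inside, on the perimeter of, or
    outside a cluster, under a Euclidean metric."""
    d = math.dist(point, center)
    if abs(d - radius) <= tol:
        return "perimeter"
    return "inside" if d < radius else "outside"

center, radius = (3.0, 4.0), math.sqrt(2)
print(classify((3.0, 4.0), center, radius))  # inside
print(classify((2.0, 3.0), center, radius))  # perimeter
print(classify((9.0, 9.0), center, radius))  # outside
```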

The new data point inside the cluster could break the cluster into two clusters, both with the same radius. That radius could be smaller than the original cluster’s. Overlapping clusters are to be avoided, and all clusters are supposed to have the same radius. In the n=3 situation, one cluster would contain one data point, and a second cluster would contain two data points.

A new data point outside the current cluster would increase the radius of the cluster, or force a division into two clusters. Again, both clusters would have the same radius, and that radius might be smaller than the original cluster’s.
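
Here is one way to code the grow-or-split choice from the last two paragraphs. The split rule, seeding on the farthest pair and then sharing one radius, is my own substitution for whatever a real implementation does, and max_radius is an assumed threshold:

```python
import math

def centroid(pts):
    """Componentwise mean of a list of points."""
    return tuple(sum(dim) / len(pts) for dim in zip(*pts))

def grow_or_split(members, center, new_point, max_radius):
    """Grow one cluster to cover new_point, or split into two clusters
    sharing a single radius when growing would exceed max_radius."""
    pts = members + [new_point]
    needed = max(math.dist(p, center) for p in pts)
    if needed <= max_radius:
        return [(center, needed, pts)]  # one cluster, enlarged radius
    # Split: seed one cluster on the point farthest from the newcomer.
    far = max(pts, key=lambda p: math.dist(p, new_point))
    a = [p for p in pts if math.dist(p, far) <= math.dist(p, new_point)]
    b = [p for p in pts if p not in a]
    ca, cb = centroid(a), centroid(b)
    # Both clusters get the same radius, per the rule above.
    shared = max(max(math.dist(p, ca) for p in a),
                 max(math.dist(p, cb) for p in b))
    return [(ca, shared, a), (cb, shared, b)]
```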

Clustering DP3

With n=3, the center of the new cluster, C3, is located at CP3. CP3 would be on the perimeter of the cluster formerly associated with the first data point, DP1. The purple arrows indicate the overall movement of the centers; the purple numbers indicate the sequence of the arrows/vectors. We measure the third radius from CP3, the computed center point of the third cluster, CL3, out to that cluster’s perimeter.

Notice that the first cluster no longer exists and was erased, but remains in the illustration in outline form. The data point DP1 of the first cluster and the meta-data associated with that point are still relevant. The second cluster has been superseded as well but was retained in the illustration to show the direction of movement. The second cluster retains its original coloring.

Throughout this sequence of illustrations, I’ve indicated that the definition of distance is left to a metric function in each frame of the sequence. These days, I think of distributions prior to the normal as operating in hyperbolic space; at the normal, the underlying space becomes Euclidean; and beyond the normal, the underlying space becomes spherical. I’m not that deep into clustering yet, but n drives much.
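
That metric function can be made literal in code: distance becomes a parameter, and a hyperbolic metric would slot in the same way. A sketch with two interchangeable metrics:

```python
import math

def euclidean(a, b):
    return math.dist(a, b)

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def nearest_center(point, centers, metric=euclidean):
    """Assign a point to the nearest cluster center under the
    supplied metric function."""
    return min(centers, key=lambda c: metric(point, c))

centers = [(0.0, 0.0), (5.0, 5.0)]
print(nearest_center((1.0, 2.0), centers))             # (0.0, 0.0)
print(nearest_center((4.0, 3.0), centers, manhattan))  # (5.0, 5.0)
```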

Data points DP1 and DP2 did not move when the cluster moved to include DP3. This does not seem possible unless DP1 and DP2 were not on a diameter of the second cluster. I just don’t have the tools to verify this one way or another.

The distance between the original cluster and the second was large. The distance is much smaller between the second and third clusters.

This is the process, in general, used to cluster those large datasets in their snapshot view. Real clustering is very iterative and calculation intensive. Try to do your analysis with data that is normal. Test for normality.
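
For the normality test, a minimal sketch using SciPy’s Shapiro-Wilk test. SciPy and NumPy are added dependencies here, and the 0.05 cutoff is the usual convention, not something from this post:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=50)  # hypothetical data

# Shapiro-Wilk tests the null hypothesis that the sample is normal.
stat, p = stats.shapiro(sample)
if p > 0.05:
    print(f"p={p:.3f}: no evidence against normality")
else:
    print(f"p={p:.3f}: the data do not look normal")
```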

When I got to the fourth data point, our single cluster got divided into two clusters. I ran out of time revising that figure to present the new clusters in another frame of our animation. I’ll revise the post at a later date.

More to the point, an animated view is part of achieving transparency in machine learning. I wouldn’t have enjoyed trying to see the effects of throwing one more assertion into Prolog and then trying to figure out what it concluded after that.

Enjoy.

 
