Estimating/Choosing optimal Hyperparameters for DBSCAN - data-mining

I need to find naturally occurring classes of nouns based on their distribution with different prepositions (agentive, instrumental, time, place, etc.). I tried k-means clustering, but it was of little help: there was a lot of overlap between the classes I was looking for (probably because of the non-globular shape of the classes and the random initialisation in k-means).
I am now working with DBSCAN, but I have trouble understanding the epsilon and minPts values in this clustering algorithm. Can I use arbitrary values, or do I need to compute them? Can anybody help, particularly with epsilon, and at least explain how to compute it if I need to?

Use your domain knowledge to choose the parameters. Epsilon is a neighborhood radius; minPts can roughly be thought of as a minimum cluster size.
Obviously, random values won't work very well. As a heuristic, you can look at a k-distance plot, but that is not automatic either.
The first thing to do either way is to choose a good distance function for your data. And perform appropriate normalization.
As for "minPts" it again depends on your data and needs. One user may want a very different value than another. And of course minPts and Epsilon are coupled. If you double epsilon, you will roughly need to increase your minPts by 2^d (for Euclidean distance, because that is how the volume of a hypersphere increases!)
If you want lots of small, fine-detailed clusters, choose a low minPts. If you want fewer, larger clusters (and more noise), use a larger minPts. If you don't want any clusters at all, choose minPts larger than your data set size...
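If you want to see where the 2^d factor comes from, here is a quick numerical sanity check (a minimal sketch; ball_volume is just an illustrative helper, not a library function):

from math import pi, gamma

def ball_volume(radius, d):
    # Volume of a d-dimensional ball: pi^(d/2) / Gamma(d/2 + 1) * r^d
    return pi ** (d / 2) / gamma(d / 2 + 1) * radius ** d

d, eps = 3, 0.5
print(ball_volume(2 * eps, d) / ball_volume(eps, d))  # 8.0, i.e. 2**d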

It is very important to select the DBSCAN hyperparameters appropriately for your dataset and the domain it belongs to.
eps hyperparameter
In order to determine the best value of eps for your dataset, use the K-Nearest Neighbours approach as explained in these two papers: Sander et al. 1998 and Schubert et al. 2017 (both papers from the original DBSCAN authors).
Here's a condensed version of their approach:
If your data has N dimensions, choose n_neighbors in sklearn.neighbors.NearestNeighbors to be equal to 2xN - 1, and compute the distances to the K nearest neighbors (K being 2xN - 1) of each point in your dataset. Sort these distances and plot them to find the "elbow" that separates noisy points (with a high K-nearest-neighbor distance) from points that will most likely fall into a cluster (with a relatively low K-nearest-neighbor distance). The distance at which this "elbow" occurs is your optimal value of eps.
Here's some Python code to illustrate how to do this:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors

def get_kdist_plot(X=None, k=None, radius_nbrs=1.0):
    nbrs = NearestNeighbors(n_neighbors=k, radius=radius_nbrs).fit(X)

    # For each point, compute distances to its k-nearest neighbors
    distances, indices = nbrs.kneighbors(X)

    distances = np.sort(distances, axis=0)
    distances = distances[:, k - 1]

    # Plot the sorted K-nearest neighbor distance for each point in the dataset
    plt.figure(figsize=(8, 8))
    plt.plot(distances)
    plt.xlabel('Points/Objects in the dataset', fontsize=12)
    plt.ylabel('Sorted {}-nearest neighbor distance'.format(k), fontsize=12)
    plt.grid(True, linestyle="--", color='black', alpha=0.4)
    plt.show()
    plt.close()

k = 2 * X.shape[-1] - 1  # k = 2 * dim(dataset) - 1
get_kdist_plot(X=X, k=k)
In the example k-distance plot produced by this code, the elbow occurred at a distance of around 22, so eps of roughly 22 would be the optimal choice for that particular dataset.
NOTE: I would strongly advise the reader to refer to the two papers cited above (especially Schubert et al. 2017) for additional tips on how to avoid several common pitfalls when using DBSCAN, as well as other clustering algorithms.
There are a few articles online (DBSCAN Python Example: The Optimal Value For Epsilon (EPS), and CoronaVirus Pandemic and Google Mobility Trend EDA) which use essentially the same approach but fail to mention the crucial choice of K (n_neighbors) as 2xN - 1 when performing the above procedure.
min_samples hyperparameter
As for the min_samples hyperparameter, I agree with the suggestions in the accepted answer. Also, a general guideline for choosing this hyperparameter's optimal value is that it should be set to twice the number of features (Sander et al. 1998). For instance, if each point in the dataset has 10 features, a starting point to consider for min_samples would be 20.
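Putting the two rules of thumb together, here is a minimal sketch of how one might run DBSCAN in scikit-learn (X is assumed to be the same feature matrix as in the code above, and the eps value of 22 is just the illustrative elbow reading from that example):

from sklearn.cluster import DBSCAN

eps_from_elbow = 22.0              # distance at the elbow of the k-distance plot
min_samples = 2 * X.shape[-1]      # 2 * number of features (Sander et al. 1998)

db = DBSCAN(eps=eps_from_elbow, min_samples=min_samples).fit(X)
labels = db.labels_                # -1 marks noise points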

Related

Principal component analysis on proportional data

Is it valid to run a PCA on data that is comprised of proportions? For example, I have data on the proportion of various food items in the diet of different species. Can I run a PCA on this type of data or should I transform the data or do something else beforehand?
I had a similar question. You should search for "compositional data analysis". There are transformations to apply to proportions in order to analyze them with multivariate techniques such as PCA. You can also find "robust" PCA algorithms to run your analysis in R. Let us know if you find an appropriate solution to your specific problem.
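To make that concrete, one standard transformation from compositional data analysis is the centred log-ratio (CLR). A minimal sketch in Python (the toy matrix P and the pseudocount used to avoid log(0) are illustrative assumptions):

import numpy as np
from sklearn.decomposition import PCA

# Rows are observations, columns are proportions summing to 1 (toy data).
P = np.array([[0.2, 0.3, 0.5],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

logP = np.log(P + 1e-6)                        # pseudocount guards against zeros
clr = logP - logP.mean(axis=1, keepdims=True)  # centred log-ratio transform

pca = PCA(n_components=2).fit(clr)
scores = pca.transform(clr)                    # PCA in the transformed space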
I don't think so.
PCA will give you "impossible" answers. You might get principal components with values that proportions can't have, like negative values or values greater than 1. How would you interpret this component?
In technical terms, the support of your data is a subset of the support assumed by PCA. Say you have $k$ classes. Then:
the support for PCA vectors is all of $\mathbb{R}^k$;
the support for your proportion vectors is the standard simplex in $\mathbb{R}^k$ (which has dimension $k-1$). By simplex I mean the set of vectors $p$ of length $k$ such that:
$0 \le p_i \le 1$ for $i = 1, \dots, k$
$\sum_{i=1}^k p_i = 1$
One way around this would be a one-to-one mapping from the simplex to all of $\mathbb{R}^k$. If such a mapping exists, you could map your proportions to $\mathbb{R}^k$, do PCA there, then map the PCA vectors back to the simplex.
But the simplex is not a linear space to begin with: if you add two elements of the simplex, the coordinates no longer sum to one, so the result is not an element of the simplex.
A better approach, I think, is clustering, e.g. with Gaussian mixtures or spectral clustering. This is related to PCA. But a nice property of clustering is that you can express any element of your data as a "convex combination" of the clusters. If you analyze your proportion data and find clusters, they (unlike PCA vectors) will lie within the simplex, and any mixture of them will, too.
I also recommend looking into nonnegative matrix factorization (NMF). This is like PCA but, as the name suggests, it constrains both the components and the weights to be nonnegative. It's very useful for inferring structure in strictly positive data, like proportions. But NMF does not give you a basis for the simplex space.
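As a rough illustration of the NMF suggestion (a sketch only; P is any nonnegative data matrix with observations as rows, e.g. the toy proportion matrix from the CLR sketch above):

from sklearn.decomposition import NMF

nmf = NMF(n_components=2, init='nndsvda', max_iter=1000)
W = nmf.fit_transform(P)    # nonnegative weights of each observation
H = nmf.components_         # nonnegative "parts"; W @ H approximates P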

Input one fixed cluster centroid, find N others (python)

I have a table of shipment destinations in lat, long. I have one fixed origination point (also lat, long). I would like to find other optimal origin locations using clustering. In other words, I want to assign one cluster centroid (keep it fixed) and find 1, 2, 3 . . . N other cluster centroids. Is this possible with the scikit learn cluster module?
Rather than recycling clustering for this, treat it as a regular optimization problem: you don't want to "discover structure", you want to minimize cost.
Beware that the earth is not flat, so Euclidean distance (i.e. k-means) is a bad idea: one degree of latitude corresponds to roughly the same distance as one degree of longitude only at the equator. If your data is, for example, in New York, you have a non-negligible distortion, and your solution will not even be a local optimum.
If you absolutely insist on abusing k-means, it's easy to do:
Choose n-1 centers at random plus the predefined one.
Then run only one iteration of k-means, reset that center to the desired location, and repeat with the next iteration.
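A minimal NumPy sketch of that recipe (kmeans_fixed_center is a hypothetical helper; distances are Euclidean, so project lat/long coordinates first, per the caveat above):

import numpy as np

def kmeans_fixed_center(X, fixed_center, n_clusters, n_iter=100, seed=0):
    # Lloyd iterations in which centroid 0 is pinned to a predefined location.
    rng = np.random.default_rng(seed)
    fixed_center = np.asarray(fixed_center, dtype=float)
    centers = X[rng.choice(len(X), n_clusters - 1, replace=False)]
    centers = np.vstack([fixed_center, centers]).astype(float)
    for _ in range(n_iter):
        # Assign each point to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute the free centers; keep index 0 fixed
        for j in range(1, n_clusters):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
        centers[0] = fixed_center
    return centers, labels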

What is the fastest way to calculate position cluster centers constriant by a concave polygon

I have a distribution of weighted 2D pose estimates (position + orientation) that are samples of an unknown PDF of a system's pose. All estimates, as well as the underlying real position, are constrained by a concave polygon.
The picture shows an example distribution. The magenta-colored circles are the estimates; the radius line indicates the estimated direction. The weights are indicated by the circles' diameters. The red dot is the weighted mean; the yellow circle indicates the variance and the direction, but is of no importance for the following problem:
From all estimates I want to derive the most likely position of the system.
Up to now I have evaluated the following approaches:
Using the estimate with the highest weight: Gives poor results since one estimate with a high weight outperforms several coinciding estimates with slightly lower weights.
Weighted Mean: Not applicable since the mean might lie outside the polygon as in the picture (red dot with yellow circle).
Weighted Median: Would work, but neglects potential clusters. E.g. in the image below, two clusters are prominent, of which one is more likely than the other.
Additionally, I have looked into K-Means and K-Medoids. For K-Means I do not know the most efficient way to constrain the centers to the polygon. K-Medoids seems to work, but has poor performance (O(n^2)), which matters since I have a high number of estimates (unlike in the explanatory picture).
What would be the ideal algorithm to solve this kind of problem?
What complexity can be achieved?
Are there readily available algorithms in C++ that solve this problem, or that can easily be adapted to solve it?
k-means may also yield an estimate outside your polygon.
Such constraints are beyond the clustering use case, but nothing prevents you from devising a method to correct the estimates afterwards; a small sketch follows below.
For non-convex data, DBSCAN may be worth a try. You could even incorporate line-of-sight constraints into Generalized DBSCAN fairly easily. But I'm not convinced that clustering will help with your overall objective.
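If you do want to correct an estimate that falls outside the polygon afterwards, here is one simple post-processing sketch using shapely (the example polygon and point are made up):

from shapely.geometry import Point, Polygon
from shapely.ops import nearest_points

def snap_into_polygon(xy, polygon):
    # Move an estimate that lies outside the polygon to the closest point
    # on the polygon; leave estimates inside the polygon unchanged.
    p = Point(xy)
    if polygon.contains(p):
        return xy
    nearest = nearest_points(polygon, p)[0]
    return (nearest.x, nearest.y)

poly = Polygon([(0, 0), (4, 0), (4, 4), (2, 1), (0, 4)])   # a concave polygon
print(snap_into_polygon((2.0, 2.5), poly))                  # point in the notch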

What does eigen value of structure tensor matrix denote?

It is known that a good feature point across two images can be determined properly if the two eigenvalues of the structure tensor matrix are both greater than 0. Can someone explain what it means for both eigenvalues to be greater than 0, and why the feature point is not good if either of them is approximately equal to 0?
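For reference, the matrix in question is presumably the standard second-moment (structure tensor) matrix built from the image derivatives $I_x$ and $I_y$ summed over a window $w$:

$$M = \sum_{(x,y) \in w} \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}$$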
Note that this matrix always has nonnegative eigenvalues. Basically, this rule says that one should favor rapid change in all directions; that is, corners are better features than edges or flat surfaces.
The biggest eigenvalue corresponds to the eigenvector pointing towards the direction of the most significant change in the image at the point u.
If both eigenvalues are small, the image around point u does not change much.
If one of the eigenvalues is large and the other is small, the point might lie on an edge in the image, but it will be difficult to figure out where exactly on that edge it is.
If both are large, the point is corner-like.
There is a nice presentation with examples in the panoramic stitching slide deck from a course taught by Rajesh Rao at the University of Washington.
Here E(u,v) denotes the Euclidean distance between the two image areas in the vicinities of pixels shifted from each other by the vector (u,v). This distance tells you how easy it is to distinguish the two pixels from one another.
Edit: The matrix of image derivatives is denoted H in this illustration, probably because of its relation to the Harris corner detection algorithm.
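To make the interpretation above concrete, here is a small NumPy sketch (the synthetic patches and window size are arbitrary) that computes the two eigenvalues of the structure tensor for an image patch:

import numpy as np

def structure_tensor_eigenvalues(patch):
    # Image derivatives within the patch
    Iy, Ix = np.gradient(patch.astype(float))
    # Second-moment (structure tensor) matrix summed over the window
    M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.linalg.eigvalsh(M)   # ascending: [lambda_min, lambda_max]

rng = np.random.default_rng(0)
flat = np.ones((15, 15))                          # both eigenvalues ~ 0
edge = np.tile(np.linspace(0, 1, 15), (15, 1))    # one large, one ~ 0
textured = rng.random((15, 15))                   # both large (trackable)
for name, patch in [("flat", flat), ("edge", edge), ("textured", textured)]:
    print(name, structure_tensor_eigenvalues(patch))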
This is related to the concept of "texturedness" in the Shi and Tomasi paper "Good features to track".
The idea of texturedness is to provide a rating of texture that makes features (within a window) identifiable and unique. For instance, lines are not good features since they are not unique (see Figure 3.9a).
To solve the optical flow equation, it must be possible to invert J (the Hessian matrix). In practice, the following conditions must be satisfied:
The eigenvalues of J cannot differ by several orders of magnitude.
The eigenvalues of J must exceed the image noise level λnoise, which implies that both eigenvalues must be large.
For the first condition we know that the greatest eigenvalue cannot be arbitrarily large because intensity variations in a window are bounded by the maximum allowable pixel value.
Regarding the second condition, with λ1 and λ2 the two eigenvalues of J, the following situations may arise (see Figure 3.10):
• Both eigenvalues λ1 and λ2 small: a roughly constant intensity profile within the window (pink region); the problem of Figure 3.9-b.
• One large and one small eigenvalue: a unidirectional texture pattern (violet or gray region); the problem of Figure 3.9-a.
• Both λ1 and λ2 large: a corner, salt-and-pepper texture, or any other pattern that can be tracked reliably (green region).
Some references:
1 - Ortiz Cayón, R. J. (2013). Online video stabilization for UAV: motion estimation and compensation for unmanned aerial vehicles.
2 - Shi, J., & Tomasi, C. (1994). Good features to track. In Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '94), pp. 593-600. IEEE.
3 - Szeliski, R. (2006). Image alignment and stitching: a tutorial. Foundations and Trends in Computer Graphics and Vision, 2(1), 1-104.

Finding the spread of each cluster from Kmeans

I'm trying to detect how well an input vector fits a given cluster centre. I can find the best match quite easily (the centre with the minimum Euclidean distance to the input vector is the best); however, I now need to work out how good a match that is.
To do this I need to find the spread (standard deviation?) of the vectors which build up the centroid, then see whether the distance from my input vector to the centre is less than that spread. If it's more than the spread, then I should be able to say that I have no cluster that fits it well (given that even the best centre doesn't fit the input vector well).
I'm not sure how to find the spread per cluster. I have all the centre vectors, and all the training vectors are labelled with their closest cluster, I just can't quite fathom exactly what I need to do to get the spread.
I hope that's clear? If not I'll try to reword it!
TIA
Ian
Use your distance function to compute the distance from the cluster centre to each point labelled with that cluster. The mean of those distances gives you the average spread; if you want something closer to a standard deviation, take the root mean square of those distances instead.
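A minimal sketch, assuming X holds the training vectors and you have the labels and centre vectors, e.g. from scikit-learn's KMeans via labels_ and cluster_centers_:

import numpy as np

def cluster_spreads(X, labels, centers):
    # RMS distance of each cluster's members to its centre, i.e. the analogue
    # of a per-cluster standard deviation.
    spreads = {}
    for j, c in enumerate(centers):
        members = X[labels == j]
        if len(members) == 0:
            spreads[j] = float("nan")
            continue
        d = np.linalg.norm(members - c, axis=1)
        spreads[j] = np.sqrt(np.mean(d ** 2))
    return spreads

# For a new vector v with best-matching centre j:
# fits_well = np.linalg.norm(v - centers[j]) <= spreads[j]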
If you switch to using a different algorithm, such as Mixture of Gaussians, you get the spread (e.g., std. deviation) as part of the model (clustering result).
http://home.deib.polimi.it/matteucc/Clustering/tutorial_html/mixture.html
http://en.wikipedia.org/wiki/Mixture_model
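For completeness, a minimal Gaussian-mixture sketch (X is assumed to be your matrix of training vectors):

from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(n_components=3, covariance_type='full').fit(X)
labels = gmm.predict(X)
# gmm.means_ are the centres, gmm.covariances_ the per-component spreads,
# and gmm.predict_proba(new_vectors) gives soft goodness-of-fit scores.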