Finding the spread of each cluster from Kmeans - c++

I'm trying to detect how well an input vector fits a given cluster centre. I can find the best match quite easily (the centre with the minimum Euclidean distance to the input vector is the best), however, I now need to work out how good a match that is.
To do this I need to find the spread (standard deviation?) of the vectors which build up the centroid, then see if the distance from my input vector to the centre is less than the spread. If it's more than the spread, then I should be able to say that I have no cluster that fits it (given that even the best one doesn't fit the input vector well).
I'm not sure how to find the spread per cluster. I have all the centre vectors, and all the training vectors are labelled with their closest cluster, I just can't quite fathom exactly what I need to do to get the spread.
I hope that's clear? If not I'll try to reword it!
TIA
Ian

Use the distance function and calculate the distance from your centre point to each point labelled with that cluster, then take the square root of the mean of those squared distances. That gives you the cluster's standard deviation (spread) about its centre.
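Here is a minimal C++ sketch of that idea, assuming the training vectors are already grouped by their labelled cluster (the function and container names are mine, not from your code):

#include <cmath>
#include <vector>

// Spread of one cluster: the RMS distance of its member vectors to the centre.
double clusterSpread(const std::vector<std::vector<double>>& members,
                     const std::vector<double>& centre)
{
    if (members.empty()) return 0.0;
    double sumSq = 0.0;
    for (const auto& v : members) {
        double d2 = 0.0;
        for (std::size_t i = 0; i < centre.size(); ++i) {
            const double diff = v[i] - centre[i];
            d2 += diff * diff;
        }
        sumSq += d2;  // squared Euclidean distance from this vector to the centre
    }
    return std::sqrt(sumSq / members.size());
}

An input vector could then be said to "fit" its best-matching cluster when its distance to that centre is at most the cluster's spread (or some multiple of it, e.g. 2-3 spreads, depending on how strict you want to be).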

If you switch to using a different algorithm, such as Mixture of Gaussians, you get the spread (e.g., std. deviation) as part of the model (clustering result).
http://home.deib.polimi.it/matteucc/Clustering/tutorial_html/mixture.html
http://en.wikipedia.org/wiki/Mixture_model


Integrate RANSAC to compute essential matrix

I have calculated the essential matrix using the 5-point algorithm. I'm not sure how to integrate it with RANSAC so that it gives me a better outcome.
Here is the source code. https://github.com/lunzhang/openar/blob/master/src/utils/5point/computeEssential.js
Currently, I am thinking about computing the essential matrix for 5 random points, then converting the essential matrix to a fundamental matrix and checking the error against a threshold using the equation x'Fx = 0. But I'm not sure what to do after that.
How do I know which points to set as outliers? If the error is too big, do I set them as outliers right away? Could one point produce different essential matrices depending on what the other 4 points are?
Well, here is a short explanation, in pseudo-code, of how you can integrate this with RANSAC. Basically, all RANSAC does is compute your model (here the essential matrix) using a subset of the data, and then check whether the rest of the data "is happy" with that result. It keeps the result for which the highest portion of the dataset "is happy".
highest_number_of_happy_points = -1;
best_estimated_essential_matrix = Identity;
for iter = 1 to max_iter_number:
    n_pts = get_n_random_pts(P);  // get a subset of n points from the set of points P. You can use 5, but you can also use more.
    E = compute_essential(n_pts);
    number_of_happy_points = 0;
    for pt in P:
        // we want to know if pt is happy with the computed E
        err = cost_function(pt, E);  // for example x^T F x as you propose, or x^T E x with the essential.
        if (err < some_threshold):
            number_of_happy_points += 1;
    if (number_of_happy_points > highest_number_of_happy_points):
        highest_number_of_happy_points = number_of_happy_points;
        best_estimated_essential_matrix = E;
This should do the trick. Usually, you set some_threshold experimentally to a low value. There are of course more sophisticated RANSAC variants; you can easily find them by googling.
Your idea of using x^TFx is fine in my opinion.
Once this RANSAC completes, you will have best_estimated_essential_matrix. The outliers are those points whose x^TFx value is greater than your chosen threshold.
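As a rough C++ sketch of that final classification step (the types and the residual here are my own stand-ins for whatever matrix representation and cost function you already use, e.g. the algebraic error |x2^T F x1|):

#include <array>
#include <cmath>
#include <vector>

using Mat3 = std::array<std::array<double, 3>, 3>;
struct Match { std::array<double, 3> x1, x2; };  // a correspondence in homogeneous coordinates

// Algebraic residual |x2^T * F * x1| for one correspondence.
double residual(const Match& m, const Mat3& F)
{
    std::array<double, 3> Fx1{0.0, 0.0, 0.0};
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            Fx1[r] += F[r][c] * m.x1[c];
    double v = 0.0;
    for (int r = 0; r < 3; ++r)
        v += m.x2[r] * Fx1[r];
    return std::fabs(v);
}

// Split the matches into inliers and outliers using the best RANSAC estimate.
void classify(const std::vector<Match>& matches, const Mat3& bestF, double threshold,
              std::vector<Match>& inliers, std::vector<Match>& outliers)
{
    for (const auto& m : matches)
        (residual(m, bestF) < threshold ? inliers : outliers).push_back(m);
}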
To answer your final question: yes, a point could produce a different matrix given 4 different other points, because their spatial configuration is different (you can have degenerate situations). In an ideal setting this wouldn't be the case, but we always have noise, matching errors and so on, so what happens in the end is that the equations you obtain from 5 points won't produce exactly the same result as those from 5 other points.
Hope this helps.

Algorithm for calculating the volume of part of a point cloud

I am taking part in project studies associated with point clouds.
We have to create a web application whose task will be displaying a point cloud from a .ply file, then selecting an area and calculating its volume. The algorithm for computing the volume is to be implemented in C++. The only things we have are a file in .ply format and a file with the XYZ coordinates of all points. The point cloud we get is generated from a picture taken by a drone; for example, it is a point cloud representing a mountainous area. Our task is to be able to select one such mountain and calculate its approximate volume, taking into account an error of +/-. The measurement does not have to be perfect, but it has to be close to the real volume of the mountain. The volume has to be calculated from the flat surface at the lowest point of the mountain.
I have two questions for you.
-First, could you give me a clue, a link or anything that would help me find such an algorithm, and the reasons why it is the best.
-Second, do any of you have an idea what would be the best way to select an area from the rendered point cloud?
I have been looking for this information, but I cannot find anything useful enough to use in our project. Any tip or document on the subject would be very useful ;)
"Volume" is not a clearly defined concept for a point cloud. There are very many ways to determine a surface, and there is no single answer. It would depend very much on what constraints were given for defining the surface of the point cloud.
A very simplistic approach would be simply to use the minimum and maximum coordinate values on all three axes, thereby giving the volume of a right rectangular parallelepiped that encloses all the points.
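For illustration, that simplistic bounding-box estimate is only a few lines of C++ (Point here is just a stand-in for whatever structure you parse out of the .ply file):

#include <limits>
#include <vector>

struct Point { double x, y, z; };

// Volume of the axis-aligned bounding box that encloses the whole cloud.
double boundingBoxVolume(const std::vector<Point>& cloud)
{
    if (cloud.empty()) return 0.0;
    const double inf = std::numeric_limits<double>::infinity();
    double minX = inf, minY = inf, minZ = inf;
    double maxX = -inf, maxY = -inf, maxZ = -inf;
    for (const Point& p : cloud) {
        if (p.x < minX) minX = p.x;
        if (p.x > maxX) maxX = p.x;
        if (p.y < minY) minY = p.y;
        if (p.y > maxY) maxY = p.y;
        if (p.z < minZ) minZ = p.z;
        if (p.z > maxZ) maxZ = p.z;
    }
    return (maxX - minX) * (maxY - minY) * (maxZ - minZ);
}

It is of course a very loose over-estimate for something like a single mountain, which is exactly why the definition of "volume" matters so much here.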
A much more complex approach would involve computing a minimum convex envelope. That is a nontrivial problem.
It would get even harder if you were trying to find an envelope that was not necessarily convex.
In any case, it is important to pin down exactly what is meant by "volume" before you can craft an effective algorithm to compute it.
As you are working with point clouds generated "from a picture taken by a drone" (I'm assuming here that you mean something like a photogrammetric process over drone imagery):
First:
Take a look at:
This
Or try to develop some approach yourself based on octrees.
If you go for developing your own approach, and you want it in C++, take a look at:
This
and This
Second:
I'm not sure if I understand the question, but it seems obvious to me that the best way to select the area of interest in order to perform the calculation is through user interaction (let the user select points around the area and compute over the remaining points in between).
Extra:
Just in case you didn't know it yet, I recommend CloudCompare to everyone who is working on something point-cloud-related.
Hope these links help you.

What is the fastest way to calculate position cluster centers constrained by a concave polygon

I have a distribution of weighted 2D pose estimates (position + orientation) that are samples of an unknown PDF of a system's pose. All estimates and the underlying real position are constrained by a concave polygon.
The picture shows an exemplary distribution. The magenta-colored circles are the estimates, and the radius line indicates the estimated direction. The weights are indicated by the circles' diameters. The red dot is the weighted mean, and the yellow circle indicates the variance and the direction, but it is of no importance for the following problem:
From all estimates I want to derive the most likely position of the system.
Up to now I have evaluated the following approaches:
Using the estimate with the highest weight: Gives poor results since one estimate with a high weight outperforms several coinciding estimates with slightly lower weights.
Weighted Mean: Not applicable since the mean might lie outside the polygon as in the picture (red dot with yellow circle).
Weighted Median: Would work but does neglect potential clusters. E.g. in the image below two clusters are prominent of which one is more likely than the other.
Additionally I have looked into K-Means and K-Medoids. For K-Means I do not know the most efficient way to constrain the centers to the polygon. K-Medoids seems to work, but has poor performance (O(n^2)), which matters since I have a high number of estimates (unlike in the explanatory picture).
What would be the ideal algorithm to solve this kind of problem ?
What complexity can be achieved ?
Are there readily available algorithms in c++ that solve this problem, or can be easily adapted to solve it?
k-means may also yield an estimate outside your polygon.
Such constraints are beyond the clustering use case. But nothing prevents you from devising a method to correct the estimates afterwards.
For non-convex data, DBSCAN may be worth a try. You could even incorporate line-of-sight into Generalized DBSCAN easily. But I'm not convinced that clustering will help for your overall objective.
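A minimal sketch of such an after-the-fact correction, assuming you simply snap a centre that falls outside the polygon to its nearest estimate (which lies inside the polygon by construction); the types and names below are illustrative only:

#include <limits>
#include <vector>

struct Pose2D { double x, y; };

// Medoid-style fix: replace an out-of-polygon centre by the closest input estimate.
Pose2D snapToNearestEstimate(const Pose2D& centre, const std::vector<Pose2D>& estimates)
{
    Pose2D best = centre;
    double bestD2 = std::numeric_limits<double>::infinity();
    for (const Pose2D& e : estimates) {
        const double dx = e.x - centre.x;
        const double dy = e.y - centre.y;
        const double d2 = dx * dx + dy * dy;
        if (d2 < bestD2) { bestD2 = d2; best = e; }
    }
    return best;
}

This keeps the cost at O(n) per cluster rather than the O(n^2) of full K-Medoids, at the price of being only a heuristic correction.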

Clustering a list of dates

I have a list of dates I'd like to cluster into 3 clusters. Now, I can see hints that I should be looking at k-means, but all the examples I've found so far are related to coordinates, in other words, pairs of list items.
I want to take this list of dates and append them to three separate lists indicating whether they were before, during or after a certain event. I don't have the time of this event, but that's why I'm guessing it by breaking the date/times into three groups.
Can anyone please help with a simple example on how to use something like numpy or scipy to do this?
k-means is exclusively for coordinates. And more precisely: for continuous and linear values.
The reason is the mean function. Many people overlook the role of the mean for k-means (despite it being in the name...).
On non-numerical data, how do you compute the mean?
There exist some variants for binary or categorical data. IIRC there is k-modes, for example, and there is k-medoids (PAM, partitioning around medoids).
It's unclear to me what you want to achieve overall... your data seems to be 1-dimensional, so you may want to look at the many questions here about 1-dimensional data (as the data can be sorted, it can be processed much more efficiently than multidimensional data).
In general, even if you projected your data into unix time (seconds since 1.1.1970), k-means will likely only return mediocre results for you. The reason is that it will try to make the three intervals have the same length.
Do you have any reason to suspect that "before", "during" and "after" have the same duration? If not, don't use k-means.
You may however want to have a look at kernel density estimation (KDE) and plot the estimated density. Once you have understood the role of density for your task, you can start looking at appropriate algorithms (e.g. take the derivative of your density estimate and look for the largest increase/decrease, or estimate an "average" level and look for the longest above-average interval).
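Picking up the remark above that sorted 1-dimensional data can be processed very efficiently: one very simple alternative (a sketch of the idea, not the KDE approach itself, and in C++ rather than the numpy you asked for) is to sort the timestamps and split them into three groups at the two largest gaps:

#include <algorithm>
#include <vector>

// Split timestamps (e.g. unix seconds) into k groups at the k-1 largest gaps.
std::vector<std::vector<long>> splitAtLargestGaps(std::vector<long> times, std::size_t k = 3)
{
    std::sort(times.begin(), times.end());
    std::vector<std::size_t> cuts;                        // candidate cut positions
    for (std::size_t i = 0; i + 1 < times.size(); ++i)
        cuts.push_back(i);
    std::sort(cuts.begin(), cuts.end(), [&](std::size_t a, std::size_t b) {
        return times[a + 1] - times[a] > times[b + 1] - times[b];  // largest gaps first
    });
    if (cuts.size() > k - 1) cuts.resize(k - 1);
    std::sort(cuts.begin(), cuts.end());

    std::vector<std::vector<long>> groups;
    std::size_t start = 0;
    for (std::size_t c : cuts) {
        groups.emplace_back(times.begin() + start, times.begin() + c + 1);
        start = c + 1;
    }
    groups.emplace_back(times.begin() + start, times.end());
    return groups;
}

Note that, like k-means, this only works well if "before", "during" and "after" are actually separated by noticeable gaps in the data.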
Here are some workaround methods that may not be the best answer but should help.
You can plot the dates as durations from a starting date (such as one week),
by converting each date to a number representing the time in minutes or hours from that starting point.
These values would all plot along a single x-axis, but k-means should still be possible and the clustering still visible on a graph.
Here are more numpy examples: Python k-means algorithm

Generating contour lines from regularly spaced data

I am currently working on a data visualization project. My aim is to produce contour lines, in other words iso-lines, from gridded data. The data can be temperature, weather data or any other kind of environmental parameter; the only condition is that it must be regularly spaced.
I searched the internet, but I could not find a good algorithm, pseudo-code or source code for producing contour lines from grids.
Does anybody know of a library, source code or an algorithm for producing contour lines from gridded data?
It would be good if your suggestion has good run-time performance; I don't want to make my users wait too long :)
Edit: thanks for the responses, but isolines have some constraints, such as that they should not intersect,
so just generating Bezier curves does not accomplish my goal.
See this question: How to approximate a vector contour from an elevation raster?
It's a near duplicate, but uses quite different terminology. You'll find that cartography and computer graphics solve many of the same problems, but use different terminology for them.
There's some reasonably good contouring available in gnuplot; if you're able to use GPL code, that may help.
If your data is placed at regular intervals, this can be done fairly easily (assuming I understand your problem correctly). First you need to determine at what interval you want your contours. Next create the grid you are going to use to store the contour information (I'm assuming just a simple on/off, or elevation-at-this-contour-level, type of data), which should be one interval smaller than the source data.
Now the trick here is to offset the 2 grids by half an interval (this won't actually show up in code, but it's the concept I'm dealing with here) and compare the 4 coordinates surrounding the current point in the contour data grid you are calculating. If any of the 4 points are in a different interval range, then that 'pixel' in the contour grid should be set to true (or to the value of the contour range being crossed).
With this method, there will be a problem when the interval is too fine which will cause several contours to overlap onto each other.
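A rough C++ sketch of that offset-grid test (the names are mine; it just marks which cells of the one-interval-smaller grid a contour passes through, as described above):

#include <cmath>
#include <vector>

// Mark the cells of the (rows-1) x (cols-1) contour grid where the four
// surrounding samples do not all fall within the same contour band.
std::vector<std::vector<bool>>
markContourCells(const std::vector<std::vector<double>>& data, double interval)
{
    const std::size_t rows = data.size();
    const std::size_t cols = rows ? data[0].size() : 0;
    std::vector<std::vector<bool>> contour(rows ? rows - 1 : 0,
                                           std::vector<bool>(cols ? cols - 1 : 0, false));
    for (std::size_t r = 0; r + 1 < rows; ++r) {
        for (std::size_t c = 0; c + 1 < cols; ++c) {
            // Which contour band does each of the four corners fall into?
            const long b0 = static_cast<long>(std::floor(data[r][c] / interval));
            const long b1 = static_cast<long>(std::floor(data[r][c + 1] / interval));
            const long b2 = static_cast<long>(std::floor(data[r + 1][c] / interval));
            const long b3 = static_cast<long>(std::floor(data[r + 1][c + 1] / interval));
            contour[r][c] = !(b0 == b1 && b1 == b2 && b2 == b3);
        }
    }
    return contour;
}

To avoid the overlapping-contours problem just mentioned, the interval has to stay coarse relative to how quickly the data changes between neighbouring samples.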
As the link from Paul Tomblin suggests, Bezier curves (which are a subset of B-splines) are a ripe solution for your problem. If runtime performance is an issue, Bezier curves have the added benefit of being constructable via the very fast de Casteljau algorithm, instead of drawing them according to the parametric equations. On the off chance you're working with DirectX, it has a library function for the de Casteljau, but it should not be challenging to brew one yourself using the 1001 web pages that describe it.
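In case it helps with the performance concern, the de Casteljau evaluation itself is only a few lines; here is a generic C++ sketch (the plain recurrence, not the DirectX call mentioned above):

#include <vector>

struct Pt { double x, y; };

// De Casteljau: evaluate a Bezier curve with the given control points at t in [0, 1]
// by repeatedly interpolating between neighbouring control points.
Pt deCasteljau(std::vector<Pt> ctrl, double t)
{
    if (ctrl.empty()) return Pt{0.0, 0.0};
    for (std::size_t level = ctrl.size(); level > 1; --level) {
        for (std::size_t i = 0; i + 1 < level; ++i) {
            ctrl[i].x = (1.0 - t) * ctrl[i].x + t * ctrl[i + 1].x;
            ctrl[i].y = (1.0 - t) * ctrl[i].y + t * ctrl[i + 1].y;
        }
    }
    return ctrl[0];
}

Sampling t at a handful of values per segment is usually enough to draw a smooth-looking curve.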