I have been trying to check the quality of the clusters in Mahout K-Means, and I was wondering if it is possible to get the Mean Squared Error from the ClusterDumper or using another Mahout command?
With ClusterDumper I was only able to get the points assigned to each cluster, the cluster centers, and the radius values, but not the within-cluster sum of squared errors (MSE).
http://mahout.apache.org/users/clustering/cluster-dumper.html
I am trying to understand how YOLO works and how it detects objects in an image. My question is: what role does k-means clustering play in detecting the bounding box around an object? Thanks.
The k-means clustering algorithm is a very well-known algorithm in data science. It aims to partition n observations into k clusters. Mainly it consists of:
Initialization: k means (i.e., centroids) are generated at random.
Assignment: clusters are formed by associating each observation with its nearest centroid.
Update: the centroid of each cluster is recomputed as the mean of its assigned observations.
The assignment and update steps are repeated until convergence.
The final result is that the sum of squared errors between the points and their respective centroids is minimized.
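Here is a minimal sketch of those three steps on 2-D points (an illustrative implementation rather than any particular library's; it runs a fixed number of iterations instead of testing convergence explicitly):

```cpp
#include <cstdlib>
#include <limits>
#include <vector>

struct Point { double x, y; };

std::vector<int> kmeans(const std::vector<Point>& pts, int k, int iters) {
    // Initialization: pick k random observations as the initial centroids.
    std::vector<Point> centroids;
    for (int i = 0; i < k; ++i) centroids.push_back(pts[std::rand() % pts.size()]);

    std::vector<int> assign(pts.size(), 0);
    for (int it = 0; it < iters; ++it) {
        // Assignment: associate each observation with its nearest centroid.
        for (size_t i = 0; i < pts.size(); ++i) {
            double best = std::numeric_limits<double>::max();
            for (int c = 0; c < k; ++c) {
                double dx = pts[i].x - centroids[c].x, dy = pts[i].y - centroids[c].y;
                double d = dx * dx + dy * dy;
                if (d < best) { best = d; assign[i] = c; }
            }
        }
        // Update: each centroid becomes the mean of its assigned observations.
        std::vector<Point> sum(k, {0, 0});
        std::vector<int> cnt(k, 0);
        for (size_t i = 0; i < pts.size(); ++i) {
            sum[assign[i]].x += pts[i].x;
            sum[assign[i]].y += pts[i].y;
            ++cnt[assign[i]];
        }
        for (int c = 0; c < k; ++c)
            if (cnt[c] > 0) centroids[c] = {sum[c].x / cnt[c], sum[c].y / cnt[c]};
    }
    return assign;
}
```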
EDIT:
Why use k-means?
K-means is computationally faster and more efficient compared to other unsupervised learning algorithms; don't forget its time complexity is linear.
It produces a larger number of clusters than hierarchical clustering, and more clusters help to get a more accurate end result.
Instances can change cluster (move to another cluster) when the centroids are re-computed.
It works well even if some of your assumptions are broken.
What it really does in determining anchor boxes
It creates thousands of anchor boxes (i.e., clusters in k-means) for each predictor, representing different shapes, locations, sizes, etc.
For each anchor box, calculate which object's bounding box overlaps it the most, measured as the area of intersection divided by the area of union of the two boxes. This is called Intersection over Union (IoU); a sketch of the computation follows this list.
If the highest IoU is greater than 50% (this threshold can be customized), the anchor box is told that it should detect the object it has the highest IoU with.
Otherwise, if the IoU is greater than 40%, the neural network is told that the true detection is ambiguous, and it should not learn from that example.
If the highest IoU is less than 40%, then the anchor box should predict that there is no object.
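As a concrete illustration of the IoU computation above, here is a minimal sketch for axis-aligned boxes; the Box type and the (x0, y0, x1, y1) corner convention are illustrative assumptions:

```cpp
#include <algorithm>

struct Box { double x0, y0, x1, y1; };

// Intersection over Union of two axis-aligned boxes.
double iou(const Box& a, const Box& b) {
    // Width and height of the intersection rectangle (zero if the boxes do not overlap).
    double iw = std::max(0.0, std::min(a.x1, b.x1) - std::max(a.x0, b.x0));
    double ih = std::max(0.0, std::min(a.y1, b.y1) - std::max(a.y0, b.y0));
    double inter = iw * ih;
    double areaA = (a.x1 - a.x0) * (a.y1 - a.y0);
    double areaB = (b.x1 - b.x0) * (b.y1 - b.y0);
    return inter / (areaA + areaB - inter);
}
```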
In general, bounding boxes for objects are given by tuples of the form (x0, y0, x1, y1), where (x0, y0) are the coordinates of the lower-left corner and (x1, y1) are the coordinates of the upper-right corner.
You need to extract the width and height from these coordinates and normalize the data with respect to the image width and height, as in the sketch below.
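A minimal sketch of that extraction and normalization step (the function name and parameters are illustrative):

```cpp
#include <utility>

// Convert an (x0, y0, x1, y1) box to a (width, height) pair
// normalized by the image dimensions imgW x imgH.
std::pair<double, double> toNormalizedWH(double x0, double y0, double x1, double y1,
                                         double imgW, double imgH) {
    return { (x1 - x0) / imgW, (y1 - y0) / imgH };
}
```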
Metrics for k-means
Euclidean distance
IoU (Jaccard index)
IoU turns out to work better than Euclidean distance.
Jaccard index = (intersection between the selected box and the cluster head box) / (union between the selected box and the cluster head box)
At initialization we can choose k random boxes as our cluster heads, assign boxes to their respective clusters based on their IoU values, and calculate the mean IoU of each cluster; a sketch of this loop follows.
This process can be repeated until convergence.
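Here is a minimal sketch of that loop on the normalized (width, height) pairs. It follows the common YOLOv2-style variant: instead of a fixed IoU threshold, each box is assigned to the cluster head it overlaps most, and the IoU is computed as if the two boxes shared the same centre. All names are illustrative:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

struct WH { double w, h; };

// IoU of two boxes assumed to share the same centre, so only (w, h) matters.
double iouWH(const WH& a, const WH& b) {
    double inter = std::min(a.w, b.w) * std::min(a.h, b.h);
    return inter / (a.w * a.h + b.w * b.h - inter);
}

std::vector<WH> clusterAnchors(const std::vector<WH>& boxes, int k, int rounds) {
    // Initialization: choose k random boxes as the cluster heads.
    std::vector<WH> heads;
    for (int i = 0; i < k; ++i) heads.push_back(boxes[std::rand() % boxes.size()]);

    std::vector<int> assign(boxes.size(), 0);
    for (int r = 0; r < rounds; ++r) {
        // Assignment: each box joins the cluster head it overlaps most.
        for (size_t i = 0; i < boxes.size(); ++i) {
            double best = -1.0;
            for (int c = 0; c < k; ++c) {
                double v = iouWH(boxes[i], heads[c]);
                if (v > best) { best = v; assign[i] = c; }
            }
        }
        // Update: each head becomes the mean (w, h) of its cluster.
        std::vector<WH> sum(k, {0, 0});
        std::vector<int> cnt(k, 0);
        for (size_t i = 0; i < boxes.size(); ++i) {
            sum[assign[i]].w += boxes[i].w;
            sum[assign[i]].h += boxes[i].h;
            ++cnt[assign[i]];
        }
        for (int c = 0; c < k; ++c)
            if (cnt[c] > 0) heads[c] = {sum[c].w / cnt[c], sum[c].h / cnt[c]};
    }
    return heads;  // the k anchor box shapes
}
```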
I am working with 3D point clouds using PCL. I am using the Fast Point Feature Histogram (FPFH) as a descriptor, which is 33-dimensional for a single point. In my work I want to cluster the point cloud data using FPFH, where the clusters are defined by this feature.
However, I am confused: if I compute the FPFH of a cluster containing, say, 200 points, then the feature matrix of that cluster is 200 x 33. Since two clusters will have different sizes, I cannot use feature matrices of varying size like this. My question is: how can I appropriately compute the features so as to describe a cluster with a single 1 x 33 vector?
I was thinking of using the mean, but the mean does not capture the relative information of all the distinct points.
The FPFH descriptor is calculated around a point (from the points neighbouring that point, typically selected using either a k-nearest-neighbour or a fixed-radius search), not from the point itself. So no matter what the size of the cluster is, the FPFH calculated from it will only be 33-dimensional. For each cluster you just need to feed all the points in the cluster to the FPFH calculation routine and get the 33-dimensional feature vector out. You may also need to specify a point cloud containing the points around which to calculate the feature vector; if you do this per cluster, just pass the centroid of the cluster (a single point), and make sure the radius/k is big enough that all points in the cluster are selected.
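A minimal sketch of that per-cluster computation with PCL's standard FPFHEstimation API (the function name and the exact normal/search-surface handling are assumptions; check against your PCL version):

```cpp
#include <pcl/common/centroid.h>
#include <pcl/features/fpfh.h>
#include <pcl/features/normal_3d.h>
#include <pcl/point_types.h>

// Compute a single 1 x 33 FPFH descriptor for one cluster of points.
pcl::PointCloud<pcl::FPFHSignature33>::Ptr
clusterDescriptor(pcl::PointCloud<pcl::PointXYZ>::Ptr clusterCloud, float radius)
{
    // FPFH needs normals; estimate them over the whole cluster.
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(clusterCloud);
    ne.setRadiusSearch(radius);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    ne.compute(*normals);

    // Wrap the cluster centroid in a one-point cloud.
    Eigen::Vector4f c;
    pcl::compute3DCentroid(*clusterCloud, c);
    pcl::PointCloud<pcl::PointXYZ>::Ptr centroid(new pcl::PointCloud<pcl::PointXYZ>);
    centroid->push_back(pcl::PointXYZ(c[0], c[1], c[2]));

    // Compute FPFH at the centroid only, drawing neighbours from the full cluster.
    pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
    fpfh.setInputCloud(centroid);         // point at which the feature is computed
    fpfh.setSearchSurface(clusterCloud);  // neighbours are taken from the cluster
    fpfh.setInputNormals(normals);        // normals correspond to the search surface
    fpfh.setRadiusSearch(radius);         // make this large enough to cover the cluster
    pcl::PointCloud<pcl::FPFHSignature33>::Ptr out(new pcl::PointCloud<pcl::FPFHSignature33>);
    fpfh.compute(*out);                   // out->points[0].histogram is the 1 x 33 vector
    return out;
}
```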
I want to cluster some (x, y) coordinates based on distance using agglomerative hierarchical clustering, as the number of clusters is not known beforehand. Is there any library that supports this task?
I am doing this in C++ using the OpenCV libraries.
http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_ml/py_kmeans/py_kmeans_opencv/py_kmeans_opencv.html#kmeans-opencv
This is a link for K-Means clustering in OpenCV for Python.
Shouldn't be too hard to convert this to C++ code once you understand the logic; a minimal sketch of the C++ call is given below.
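Since the question asks for C++, here is a minimal sketch of the equivalent cv::kmeans call from OpenCV's C++ API (note this is k-means, as in the linked tutorial, not agglomerative clustering; the sample points are illustrative):

```cpp
#include <opencv2/core.hpp>
#include <vector>

int main() {
    // Pack some (x, y) coordinates into a CV_32F matrix, one row per point.
    std::vector<cv::Point2f> pts = {{1, 1}, {1, 2}, {10, 10}, {10, 11}};
    cv::Mat points(static_cast<int>(pts.size()), 2, CV_32F, pts.data());

    int K = 2;
    cv::Mat labels, centers;
    cv::kmeans(points, K, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
               3 /* attempts */, cv::KMEANS_PP_CENTERS, centers);

    // labels.at<int>(i) is the cluster index of point i; centers holds the K centres.
    return 0;
}
```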
In the Gesture Recognition Toolkit (GRT) there is a simple module for hierarchical clustering. It is the "bottom-up" approach you need, where each observation starts in its own cluster and pairs of clusters are merged as one moves up the hierarchy.
You can train the method by:
UnlabelledData: The only thing you really need to know about the UnlabelledData class is that you must set the number of input dimensions of your dataset before you try and add samples to the training dataset.
ClassificationData:
You must set the number of input dimensions of your dataset before you try to add samples to the training dataset.
You cannot use the class label of 0 when you add a new sample to your dataset, because the class label of 0 is reserved for the special null gesture class.
MatrixDouble: MatrixDouble is the default datatype for storing M by N dimensional data, where M is the number of rows and N is the number of columns.
Furthermore, you can save your models to a file or load them from one, and get the clusters with getClusters().
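A minimal sketch of that workflow, assuming GRT's HierarchicalClustering module together with the MatrixDouble, train(), and getClusters() calls mentioned above (exact signatures vary between GRT versions):

```cpp
#include <GRT/GRT.h>
using namespace GRT;

int main() {
    // An M x N data matrix: 4 samples with N = 2 input dimensions (x, y).
    MatrixDouble data(4, 2);
    data[0][0] = 1;  data[0][1] = 1;
    data[1][0] = 1;  data[1][1] = 2;
    data[2][0] = 10; data[2][1] = 10;
    data[3][0] = 10; data[3][1] = 11;

    // Train the bottom-up hierarchy: every sample starts as its own cluster,
    // and the closest pairs of clusters are merged level by level.
    HierarchicalClustering hc;
    if (!hc.train(data)) return 1;

    // Retrieve the clusters found while building the hierarchy.
    auto clusters = hc.getClusters();
    return 0;
}
```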
I use the K-Means algorithm to create clusters. As you know, the K-Means algorithm needs the cluster count as a parameter. I try cluster counts from two to eight, calculate the C-Index of every cluster in each run, and then take the average of these C-Indexes. Then I compare the C-Index averages and choose the cluster count with the minimum average as the best-quality cluster count. Is that a valid way of detecting the cluster count?
There is no one correct way to detect the cluster count; this is still an active research area. The Wikipedia article says that:
The correct choice of k is often ambiguous, with interpretations depending on the shape and scale of the distribution of points in a data set and the desired clustering resolution of the user.
Only you can determine whether using the C-Index in this way is a good way to determine the number of clusters in your domain. See also the other question on using the C-Index in clustering.
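For what it's worth, here is a minimal sketch of the selection loop the question describes, with the average-C-Index computation left as a caller-supplied function, since its implementation (and whether it suits your domain) is the open question:

```cpp
#include <functional>
#include <limits>
#include <vector>

using Data = std::vector<std::vector<double>>;

// Try k = 2..8 and keep the k whose average C-Index is smallest, as described
// in the question. avgCIndex(data, k) is assumed to run k-means with k clusters
// and return the average C-Index over the resulting clusters.
int bestClusterCount(const Data& data,
                     const std::function<double(const Data&, int)>& avgCIndex) {
    int bestK = 2;
    double bestScore = std::numeric_limits<double>::max();
    for (int k = 2; k <= 8; ++k) {
        double score = avgCIndex(data, k);
        if (score < bestScore) { bestScore = score; bestK = k; }
    }
    return bestK;
}
```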
I have used the SimpleKMeans class in WEKA to cluster instances, but I have a problem with getting the outlier instances.
I assumed that each cluster in this class has a center (or centroid) and a radius, so that I could find outliers by checking each cluster's sphere given its centroid and radius. However, I couldn't find any variable or function that returns a cluster's radius.
Do you know any other way of finding outliers with the SimpleKMeans class in WEKA, or any variable that gives the radius of each cluster?
I couldn't find any radius variable in the SimpleKMeans class either, so I use an alternative solution.
Clustering assigns each instance to the cluster with the minimum Euclidean (or Manhattan, ...) distance, so an instance can be treated as an outlier if its distance to the nearest cluster is greater than a specific threshold. A sketch of this rule is given below.
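A minimal sketch of that threshold rule (written in C++ for illustration; the same logic applies to the centroids you get out of WEKA's SimpleKMeans):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Euclidean distance from an instance to its nearest cluster centroid.
double nearestCentroidDistance(const std::vector<double>& x,
                               const std::vector<std::vector<double>>& centroids) {
    double best = std::numeric_limits<double>::max();
    for (const auto& c : centroids) {
        double s = 0.0;
        for (size_t d = 0; d < x.size(); ++d) s += (x[d] - c[d]) * (x[d] - c[d]);
        best = std::min(best, std::sqrt(s));
    }
    return best;
}

// An instance is an outlier when even its nearest centroid is farther away
// than the chosen threshold.
bool isOutlier(const std::vector<double>& x,
               const std::vector<std::vector<double>>& centroids,
               double threshold) {
    return nearestCentroidDistance(x, centroids) > threshold;
}
```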