I'm using RapidMiner, and I want to use the output of the k-means operator (the attributes of each of the 2 clusters, taken separately) as input for FP-Growth, to discover relations between variables in each of these clusters.
Is there a specific operator for doing that?
The goal is to create a computer-generated news site that aggregates headlines from different news sources around the world:
Taking a look at the centroid table results, I want to understand the following:
https://ibb.co/n1mvnbk
I used K=5, and I am using TF-IDF.
What do those numbers mean?
When an attribute is zero in multiple clusters, what does it mean?
When I sort the centroid table by each cluster in descending order, I find some words (attributes) that have a high value in this cluster but zero values in the other clusters. Does this mean that these words occur more or less frequently in this cluster?
How can I discuss the clustering model?
Do all the clusters make sense and why?
Do you think k=5 is a good choice for this dataset, or should I choose 3? How can I determine that?
I believe K=5 denotes the number of clusters you are looking for in the current dataset. On that basis, 5 centroids will be placed and the data will be grouped around them.
Do you think k=5 is a good choice for this dataset? It's hard to predict this way; it all comes down to mathematical combinations and permutations.
You might use the elbow method to identify the right number of clusters for a given dataset. This approach is based on WCSS (within-cluster sum of squares), the sum of squared distances between each point and the centroid of its cluster.
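A minimal sketch of the elbow method, assuming you can export a numeric (e.g. TF-IDF) feature matrix and use scikit-learn; the random matrix below is only a placeholder for your own data, and `inertia_` is exactly the WCSS described above.

```python
# Elbow method sketch: compute WCSS for a range of k and look for the "elbow".
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
X = rng.random((200, 20))          # placeholder data; replace with your own feature matrix

wcss = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wcss.append(km.inertia_)       # inertia_ = within-cluster sum of squares (WCSS)

for k, w in zip(range(1, 11), wcss):
    print(f"k={k:2d}  WCSS={w:.2f}")
# Plot k against WCSS and pick the k where the decrease flattens out.
```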
Those numbers are the average TF-IDF values of the documents in the cluster. So a 0 means that the word does not occur in the cluster, and the highest-valued words are the most characteristic words for the cluster.
Note that for text you'll want to use spherical k-means rather than regular k-means.
Choosing k is a big problem. Forget the elbow method; it never works except on toy examples. Experiment with different k and choose the one that is most convincing or most useful. None of the usual heuristics for choosing k in k-means will work here, I fear (VRC is IMHO the best).

The main reason is that the data cannot be well partitioned into k clusters. There is no reason to assume there are exactly k topics in the world, nor that every document contains only one topic. Instead, the topics themselves form a complex structure. For example, there is Trump, but there is also the Trump-Erdogan meeting, and there is the impeachment. These are not disjoint. But you will also have articles that don't fit into any of these topics. This leads to the effect that the true best k would likely be very, very large, as large as the number of articles (and hence not useful).
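As a rough sketch of the two points above (not the RapidMiner workflow itself): L2-normalising the TF-IDF vectors before ordinary k-means is a common approximation of spherical k-means, and sorting each centroid gives the most characteristic words per cluster. The sample documents are placeholders built from the topics mentioned above.

```python
# Approximate spherical k-means: unit-length TF-IDF rows + standard k-means,
# then inspect the highest-weighted centroid terms per cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans

docs = [
    "trump erdogan meeting",
    "trump impeachment hearing",
    "world headlines aggregated tonight",
    "news sources around the world",
]

vec = TfidfVectorizer()
X = normalize(vec.fit_transform(docs))           # unit-length rows (cosine geometry)
terms = vec.get_feature_names_out()

for k in (2, 3):                                  # experiment with different k and compare
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    for c, centroid in enumerate(km.cluster_centers_):
        top = centroid.argsort()[::-1][:3]        # highest average tf-idf = most characteristic
        print(k, c, [terms[i] for i in top])
```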
I am trying to find an example to help me cluster some textual data I have. The data is in the form:
A,B,3
C,D,5
A,D,57
The first two entries are the members of a pair, and the number is how often this pair occurs in the dataset. I have over 200,000 unique pairs.
Any tips? Thanks!!
Don't use k-means on such data.
It will not work.
What you have is a similarity matrix, not continuous vectors as needed for k-means. You can try hierarchical clustering (with a sparse similarity, not a distance; no, I won't write the code for you).
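For what it's worth, here is a minimal sketch of one way to do this with SciPy. Since SciPy's `linkage` expects distances rather than similarities, the sketch converts the counts to distances with the arbitrary choice 1/(1+count); that conversion and the cut into 2 clusters are purely illustrative.

```python
# Hierarchical (average-link) clustering from pair co-occurrence counts.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

pairs = [("A", "B", 3), ("C", "D", 5), ("A", "D", 57)]    # sample rows from the question

items = sorted({x for a, b, _ in pairs for x in (a, b)})
idx = {name: i for i, name in enumerate(items)}

dist = np.ones((len(items), len(items)))                   # unseen pairs get maximal distance 1
np.fill_diagonal(dist, 0.0)
for a, b, count in pairs:
    d = 1.0 / (1.0 + count)                                # higher count -> smaller distance
    dist[idx[a], idx[b]] = dist[idx[b], idx[a]] = d

Z = linkage(squareform(dist), method="average")            # average-link hierarchical clustering
labels = fcluster(Z, t=2, criterion="maxclust")            # cut the dendrogram into 2 clusters
print(dict(zip(items, labels)))
```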
I have used the output predictions of the J48 classifier in Weka and got results with prediction probabilities. As I need to use these prediction numbers in my research, I need to know how Weka calculates them. What is the formula? Is it specific to each classifier?
In addition to Jan Eglinger's answer:
The J48 classifier is Weka's implementation of the well-known C4.5 decision tree classifier, a classification algorithm based on ID3 that splits using information entropy.
The training data is a set S = {s_1, s_2, ...} of already classified samples. Each sample s_i consists of a p-dimensional vector (x_{1,i}, x_{2,i}, ..., x_{p,i}), where the x_j represent attribute values or features of the sample, as well as the class to which s_i belongs.
At each node of the tree, C4.5 chooses the attribute of the data that most effectively splits its set of samples into subsets enriched in one class or the other. The splitting criterion is the normalized information gain (difference in entropy). The attribute with the highest normalized information gain is chosen to make the decision. The C4.5 algorithm then recurs on the smaller sublists.
This algorithm has a few base cases.
- All the samples in the list belong to the same class. When this happens, it simply creates a leaf node for the decision tree saying to choose that class.
- None of the features provide any information gain. In this case, C4.5 creates a decision node higher up the tree using the expected value of the class.
- An instance of a previously unseen class is encountered. Again, C4.5 creates a decision node higher up the tree using the expected value.
You can find the information gain and entropy computations in the Weka API packages. For that you need to start debugging the Java Weka API and step through each stage.
In general, if you don't want to worry about how the algorithm works internally at the level of the mathematics, try to calculate the information gain and entropy yourself and explain them in your research; apart from the decision tree itself, there are methods for computing both of these values.
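If it helps, here is a small, self-contained sketch of those two quantities. This is not Weka code, just the standard entropy and information-gain formulas applied to a made-up toy attribute.

```python
# Shannon entropy and information gain, computed by hand.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(labels, attribute_values):
    """Entropy of the parent set minus the weighted entropy of the splits."""
    total = len(labels)
    split = {}
    for value, label in zip(attribute_values, labels):
        split.setdefault(value, []).append(label)
    remainder = sum(len(subset) / total * entropy(subset) for subset in split.values())
    return entropy(labels) - remainder

# Toy example: class "play" vs. attribute "outlook"
play    = ["yes", "yes", "no", "no", "yes", "no"]
outlook = ["sunny", "overcast", "sunny", "rain", "overcast", "rain"]
print(entropy(play))                      # 1.0 bit for a 50/50 split
print(information_gain(play, outlook))    # gain from splitting on "outlook"
```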
What is the formula?
Weka's J48 classifier is an implementation of the C4.5 algorithm.
I need to know how Weka calculates these numbers?
You can find implementation details in J48.java and in the weka.classifiers.trees.j48 package.
I have unstructured Twitter data which was retrieved by Apache Flume and stored in HDFS. Now I want to convert this unstructured data into structured data using MapReduce.
Tasks I want to do using MapReduce:
1. Convert the unstructured data into structured data.
2. Extract only the text part that contains the tweet.
3. Identify the tweets for a particular topic and group them according to their sub-topics.
e.g. I have tweets about Samsung handsets, so I want to make groups according to the handsets, like Samsung Note 4, Samsung Galaxy, etc.
It is my college project and my guide suggested that I use the k-means algorithm. I have searched a lot about k-means but failed to understand how to identify the centroids for this; basically, I fail to understand how to apply k-means to this situation in MapReduce.
Please guide me if I am doing this wrong, as I am new to this concept.
K-means is a clustering algorithm: it groups similar data points and computes a common centroid for each group. You can create time series for the questions you mention above and group the tweets according to their topic. A toy sketch of one k-means iteration in map/reduce terms follows the links below.
K-means implementation in MapReduce:
https://github.com/himank/K-Means
Using k-means on Twitter datasets, you can check the following links:
https://github.com/JulianHill/R-Tutorials/blob/master/r_twitter_cluster.r
http://www.r-bloggers.com/cluster-your-twitter-data-with-r-and-k-means/
http://rstudio-pubs-static.s3.amazonaws.com/5983_af66eca6775f4528a72b8e243a6ecf2d.html
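Since the question is about expressing k-means in MapReduce, here is a toy, Hadoop-free Python sketch of a single k-means iteration viewed as a map step and a reduce step. The 2-D points below are placeholders for whatever feature vectors (e.g. TF-IDF) you build from the tweets.

```python
# One k-means iteration in map/reduce terms, repeated a few times.
import numpy as np

points = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [8.2, 7.9]])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])           # initial centroid guesses

for _ in range(5):
    # "Map": emit (nearest centroid id, point) for every point
    assignments = [(int(np.argmin(np.linalg.norm(centroids - p, axis=1))), p) for p in points]
    # "Reduce": group by centroid id and average the points to get the new centroids
    centroids = np.array([
        np.mean([p for cid, p in assignments if cid == k], axis=0)
        for k in range(len(centroids))
    ])

print(centroids)   # converges to the two group means
```

In Hadoop, the "Map" line becomes the mapper (emit centroid id as the key) and the "Reduce" line becomes the reducer (average all points sharing a key), with the updated centroids fed into the next job.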
I'm trying to use a J48 tree to perform a text categorization task. I have read a lot of papers and websites that explain how to perform classification on datasets whose instances are single-labeled.
In my case I only have multi-labeled data in my training set. What can I do to handle these data in a single decision tree? Or is the only solution to generate as many trees as there are labels?
You can use a tree with an adapted entropy formula. You must define beforehand whether your dataset has hierarchical labels:
papers and code
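The adapted-entropy trees from the linked papers are not available in standard libraries, but as a quick illustration of the multi-label setting itself: scikit-learn's DecisionTreeClassifier accepts a multi-label target matrix directly and fits a single tree over all labels (it simply averages the impurity across the outputs, which is not the adapted entropy the papers describe). The documents and labels below are made up.

```python
# A single decision tree fitted on multi-labeled text (one column per label).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier

docs = [
    "cheap flights and hotel deals",
    "champions league final tonight",
    "election results and live debate coverage",
    "football star books flight to the final",
]
# columns: travel, sport, politics (a document may carry several labels)
Y = [
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
]

X = TfidfVectorizer().fit_transform(docs)
tree = DecisionTreeClassifier(random_state=0).fit(X, Y)
print(tree.predict(X))   # one row of label predictions per document
```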