Sentiment analysis feature extraction - word2vec

I am new to NLP and feature extraction. I wish to create a machine learning model that can determine the sentiment of stock-related social media posts. For feature extraction on my dataset I have opted to use Word2Vec. My question is:
Is it important to train my word2vec model on a corpus of stock-related social media posts? The datasets that are available for this are not very large. Should I just use a much larger set of pretrained word vectors?

The only way to tell what will work better for your goals, within your constraints of data/resources/time, is to try alternate approaches & compare the results on a repeatable quantitative evaluation.
Having training texts that are properly representative of your domain-of-interest can be quite important. You may need your representation of the word 'interest', for example, to reflect its sense in the stock/financial world, rather than the more general sense of the word.
But quantity of data is also quite important. With smaller datasets, none of your words may get great vectors, and words important to evaluating new posts may be missing or of very-poor quality. In some cases taking some pretrained set-of-vectors, with its larger vocabulary & sharper (but slightly-mismatched to domain) word-senses may be a net help.
Because these pull in different directions, there's no general answer. It will depend on your data, goals, limits, & skills. Only trying a range of alternative approaches, and comparing them, will tell you what should be done for your situation.
As this iterative, comparative, experimental pattern repeats endlessly as your projects & knowledge grow – it's what the experts do! – it's also important to learn & practice. There's no authority you can ask for a certain answer to many of these tradeoff questions.
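As a concrete shape for such a comparison, here's a minimal sketch using gensim (parameter names are gensim 4's vector_size/epochs; older releases spell them size/iter). The tiny corpus and the 'glove-twitter-25' pretrained set are just illustrative stand-ins, not recommendations:

import gensim.downloader as api
from gensim.models import Word2Vec

# Your own (small) domain corpus: a list of tokenized posts.
domain_corpus = [
    ['interest', 'rates', 'hurt', 'bank', 'stocks'],
    ['strong', 'earnings', 'beat', 'estimates'],
    # ... ideally many thousands more tokenized posts ...
]

# Option A: train vectors on your own domain texts.
own_model = Word2Vec(domain_corpus, vector_size=100, min_count=1, epochs=20)

# Option B: load a larger, general-domain pretrained set (returns KeyedVectors).
pretrained = api.load('glove-twitter-25')

# Whichever you pick, score both on the same downstream task
# (e.g. classifier accuracy on held-out labeled posts) and keep the winner.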
Other observations on what you've said:
If you don't have a large dataset of posts, and well-labeled 'ground truth' for sentiment, your results may not be good. All these techniques benefit from larger training sets.
Sentiment analysis is often approached as a classification problem (assigning texts to bins of 'positive' or 'negative' sentiment, perhaps at multiple intensities) or a regression problem (assigning texts a value on a numerical scale). There are many simpler ways to create features for such processes that don't involve word2vec vectors – a somewhat more advanced technique, which adds complexity. (In particular, word-vectors only give you features for individual words, not texts of many words, unless you add some other choices/steps.) If new to the sentiment-analysis domain, I would recommend against starting with word-vector features. Only consider adding them later, after you've achieved some initial baseline results without their extra complexity/choices. At that point, you'll also be able to tell whether they're helping.
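If you want a starting point, here's a minimal baseline sketch using scikit-learn (my choice for illustration; any similar toolkit would do), with bag-of-words/TF-IDF features and a linear classifier, and no word-vectors at all. The tiny texts and labels are placeholders; a real run needs a much larger labeled set:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-ins for labeled posts.
texts = [
    'great earnings, buying more',
    'terrible guidance, selling everything',
    'strong quarter, very bullish',
    'missed estimates, bearish outlook',
]
labels = [1, 0, 1, 0]   # 1 = positive sentiment, 0 = negative

# Bag-of-words (here TF-IDF, with unigrams and bigrams) feeding a linear model.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
scores = cross_val_score(baseline, texts, labels, cv=2)
print('baseline accuracy per fold:', scores)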

Related

how to assign weights to articles in the corpus for generating word embedding (e.g. word2vec)?

There are certain articles in the corpus that I find much more important than the others (for instance, I like their wording more). As a result, I would like to increase their "weights" in the entire corpus during the process of generating word vectors. Is there a way to implement this? The current solution I can think of is to copy the more important articles multiple times and add them to the corpus. However, will this work for the word embedding process? And is there a better way to achieve this? Many thanks!
The word2vec library with which I am most familiar, in gensim for Python, doesn't have a feature to overweight certain texts. However, your idea of simply repeating the more important texts should work.
Note though that:
it'd probably work better if the repeated texts don't appear consecutively in your corpus – spread the duplicates out so they're encountered interleaved with other diverse usage examples (as in the sketch below)
the algorithm really benefits from diverse usage examples – repeating the same rare examples 10 times is nowhere near as good as 10 naturally-subtly-contrasting usages, to induce the kinds of continuous gradations-of-meaning that people want from word2vec
you should be sure to test your overweighting strategy, with a quantitative quality score related to your end purpose, to be sure it's helping as you hope. It might be extra code/training-effort for negligible benefit, or even harm some word vectors' quality.
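Putting those notes together, a minimal sketch of the repeat-then-shuffle approach (gensim 4 parameter names vector_size/epochs; older versions used size/iter; the article lists and repeat count are purely hypothetical):

import random
from gensim.models import Word2Vec

ordinary_articles = [['the', 'market', 'closed', 'slightly', 'higher', 'today']]
important_articles = [['interest', 'rates', 'drive', 'equity', 'valuations']]

REPEATS = 3   # arbitrary weighting factor, for illustration only
corpus = list(ordinary_articles) + important_articles * REPEATS
random.shuffle(corpus)   # interleave the duplicates with the other texts

model = Word2Vec(corpus, vector_size=50, min_count=1, epochs=10)
# Then compare against an unweighted baseline on your own quantitative
# evaluation before deciding the extra step is worth keeping.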

Understanding model.similarity in word2vec

Hello, I am fairly new to word2vec. I wrote a small program to teach myself:
import gensim
from gensim.models import Word2Vec

sentence = [['Yellow', 'Banana'], ['Red', 'Apple'], ['Green', 'Tea']]
model = gensim.models.Word2Vec(sentence, min_count=1, size=300, workers=4)
print(model.similarity('Yellow', 'Banana'))
The similarity came out to be:
-0.048776340629810115
My question is: why isn't the similarity between 'Banana' and 'Yellow' closer to 1, like 0.70 or something? What am I missing? Kindly guide me.
Word2Vec doesn't work well on toy-sized examples – it's the subtle push-and-pull of many varied examples of the same words that moves word-vectors to useful relative positions.
But also, and especially, in your tiny example you've given the model 300-dimensional vectors to work with, and only a 6-word vocabulary. With so many parameters, and so little to learn, the model can essentially 'memorize' the training task, quickly becoming nearly perfect at its internal prediction goal – and further, it can do that in many, many alternate ways that may not involve much change from the word-vectors' random initialization. So it is never forced to move the vectors to useful positions that provide generalized info about the words.
You can sometimes get somewhat meaningful results from small datasets by shrinking the vectors, and thus the model's free parameters, and giving the model more training iterations. So you could try size=2, iter=20. But you'd still want more examples than just a few, and more than a single occurrence of each word. (Even in larger datasets, the vectors for words with just a small number of examples tend to be poor - hence the default min_count=5, which should be increased even higher in larger datasets.)
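For example, applying that adjustment to your code, keeping the same (older) gensim parameter names you used – in gensim 4+ they are spelled vector_size= and epochs= instead:

import gensim

sentence = [['Yellow', 'Banana'], ['Red', 'Apple'], ['Green', 'Tea']]

# Far fewer free parameters (2 dimensions) and more passes over the data.
model = gensim.models.Word2Vec(sentence, min_count=1, size=2, iter=20, workers=4)
print(model.similarity('Yellow', 'Banana'))
# Even so, with only six words and one usage example each, don't expect
# the reported similarity to mean much.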
To really see word2vec in action, aim for a training corpus of millions of words.

Do the Mikolov 2014 Paragraph2Vec models assume sentence ordering?

In the Mikolov 2014 paper on paragraph vectors (https://arxiv.org/pdf/1405.4053v2.pdf), do the authors assume, for both PV-DM and PV-DBOW, that the ordering of sentences needs to make sense?
Imagine I am handling a stream of tweets, where each tweet is a paragraph. The paragraphs/tweets do not necessarily have any ordering relations. After training, do the vector embeddings for the paragraphs still make sense?
Each document/paragraph is treated as a single unit for training – and there’s no explicit way that the neighboring documents directly affect a document’s vector. So the ordering of documents doesn’t have to be natural.
In fact, you generally don't want all similar text-examples clumped together – for example, all those on a certain topic, or using a certain vocabulary, at the front or back of the training examples. That would mean those examples are all trained with a similar alpha learning-rate, and affect all related words without interleaved, offsetting examples containing other words. Either of those could make a model slightly less balanced/general across all possible documents. For this reason, it can be good to perform at least one initial shuffle of the text examples before training a gensim Doc2Vec (or Word2Vec) model, if your natural ordering might not spread all topics/vocabulary words evenly through the training corpus.
The PV-DM modes (default dm=1 mode in gensim) do involve sliding context-windows of nearby words, so word proximity within each example matters. (Don’t shuffle the words inside each text!)
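A minimal sketch of the one-time corpus shuffle suggested above, before training a gensim Doc2Vec model, with the word order inside each document left untouched (the tweet tokens and integer tags are purely illustrative):

import random
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

tweets = [
    ['earnings', 'beat', 'expectations'],
    ['new', 'phone', 'launch', 'today'],
    ['rates', 'decision', 'moves', 'markets'],
]
docs = [TaggedDocument(words=tokens, tags=[i]) for i, tokens in enumerate(tweets)]

random.shuffle(docs)   # one initial shuffle; word order inside each doc is untouched

model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=20, dm=1)
new_vec = model.infer_vector(['rates', 'rise', 'again'])   # vector for an unseen tweet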

What is considered a good accuracy for trained Word2Vec on an analogy test?

After training Word2Vec, how high should the accuracy be during testing on analogies? What level of accuracy should be expected if it is trained well?
The analogy test is just an interesting automated way to evaluate models, or compare algorithms.
It might not be the best indicator of how well word-vectors will work for your own project-specific goals. (That is, a model which does better on word-analogies might be worse for whatever other info-retrieval, or classification, or other goal you're really pursuing.) So if at all possible, create an automated evaluation that's tuned to your own needs.
Note that the absolute analogy scores can also be quite sensitive to how you trim the vocabulary before training, or how you treat analogy-questions with out-of-vocabulary words, or whether you trim results at the end to just higher-frequency words. Certain choices for each of these may boost the supposed "correctness" of the simple analogy questions, but not improve the overall model for more realistic applications.
So there's no absolute accuracy rate on these simplistic questions that should be the target. Only relative rates are somewhat indicative - helping to show when more data, or tweaked training parameters, seem to improve the vectors. But even vectors with small apparent accuracies on generic analogies might be useful elsewhere.
All that said, you can review a demo notebook like the gensim "Comparison of FastText and Word2Vec" to see what sorts of accuracies on the Google word2vec.c questions-words.txt analogy set (40-60%) are achieved under some simple defaults and relatively small training sets (100MB-1GB).
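If you want to run that analogy test yourself, here's a minimal sketch with gensim – assuming a recent version, where the KeyedVectors method is evaluate_word_analogies() (older releases offered a similar accuracy() method); the 'glove-wiki-gigaword-100' pretrained set is just an example:

import gensim.downloader as api
from gensim.test.utils import datapath

wv = api.load('glove-wiki-gigaword-100')   # any KeyedVectors would do here

score, sections = wv.evaluate_word_analogies(datapath('questions-words.txt'))
print('overall analogy accuracy:', score)   # useful as a relative indicator, not a target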

dimension reduction in spam filtering

I'm performing an experiment in which I need to compare classification performance of several classification algorithms for spam filtering, viz. Naive Bayes, SVM, J48, k-NN, RandomForests, etc. I'm using the WEKA data mining tool. While going through the literature I came to know about various dimension reduction methods which can be broadly classified into two types-
Feature Reduction: Principal Component Analysis, Latent Semantic Analysis, etc.
Feature Selection: Chi-Square, InfoGain, GainRatio, etc.
I have also read this WEKA tutorial by Jose Maria on his blog: http://jmgomezhidalgo.blogspot.com.es/2013/02/text-mining-in-weka-revisited-selecting.html
In this blog he writes, "A typical text classification problem in which dimensionality reduction can be a big mistake is spam filtering". So, now I'm confused whether dimensionality reduction is of any use in case of spam filtering or not?
Further, I have also read in the literature that Document Frequency and TF-IDF can act as feature reduction techniques. But I'm not sure how they work and come into play during classification.
I know how to use WEKA, chain filters and classifiers, etc. The problem I'm facing is that, since I don't have enough of an idea about feature selection/reduction (including TF-IDF), I am unable to decide which feature selection techniques and classification algorithms I should combine, and how, to make my study meaningful. I also have no idea about the optimal threshold values I should use with chi-square, info gain, etc.
In the StringToWordVector class, I have an IDFTransform option, so does it make sense to set it to TRUE and also use a feature selection technique, say InfoGain?
Please guide me and if possible please provide links to resources where I can learn about dimension reduction in detail and can plan my experiment meaningfully!
Well, Naive Bayes seems to work best for spam filtering, and it doesn't play nicely with dimensionality reduction.
Many dimensionality reduction methods try to identify the features with the highest variance. That of course won't help much with spam detection: you want discriminative features.
Plus, there is not just one type of spam, but many – which is likely why Naive Bayes works better than many other methods that assume there is only one type of spam.
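To make that contrast concrete, here's a sketch in Python/scikit-learn (not WEKA, which the question uses) pairing a discriminative, label-aware selector (chi-square) with Naive Bayes, rather than a variance-driven reduction like PCA. The toy emails and labels are placeholders:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    'cheap meds buy now limited offer',
    'meeting moved to thursday afternoon',
    'win a free prize click here',
    'quarterly report attached for review',
]
labels = [1, 0, 1, 0]   # 1 = spam, 0 = ham

spam_filter = make_pipeline(
    TfidfVectorizer(),
    SelectKBest(chi2, k=10),   # keep the terms most associated with the class label
    MultinomialNB(),
)
spam_filter.fit(emails, labels)
print(spam_filter.predict(['free prize offer inside']))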