Is there a way to calculate semantic distance between adjectives relative to a common noun with word2vec embedding?

I would like to develop a human memory experiment, where I need to manipulate the semantic distance of features of a concept. I would like to acquire close and distant semantic features for the same object.
E.g.: “hot coffee” and “strong coffee” would maybe be closer semantically than “hot coffee” and “creamed coffee”.
I have a 200-dimensional Hungarian word2vec model with a 200k-word vocabulary, built from the Hungarian National Corpus. I know that cosine similarity is a good index of semantic distance, and I understand how to calculate this metric between two word-vectors.
My main problem is that I really need the information from the noun too, and I am not sure how to measure the distance while accounting for that. Simply comparing the adjectives would not be enough, as “hot” and “strong” in the context of “coffee” would definitely be at a different semantic distance than in the context of “furniture”, for instance.
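For illustration, this is the kind of pairwise calculation I already know how to do with gensim (the file name and the English stand-in words below are just placeholders for my Hungarian data):

from gensim.models import KeyedVectors

# Placeholder file name; my real vectors are the 200-dimensional Hungarian ones.
kv = KeyedVectors.load_word2vec_format("hu_word2vec_200d.vec", binary=False)

# Cosine similarity between the two adjectives alone.
# This ignores the noun ("coffee") entirely, which is exactly my problem.
print(kv.similarity("hot", "strong"))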
Thank you for any of your help, I would really appreciate any ideas.

Related

Sentiment analysis feature extraction

I am new to NLP and feature extraction. I wish to create a machine learning model that can determine the sentiment of stock-related social media posts. For feature extraction on my dataset I have opted to use Word2Vec. My question is:
Is it important to train my word2vec model on a corpus of stock-related social media posts? The datasets that are available for this are not very large. Should I just use a much larger set of pretrained word vectors?
The only way to tell what will work better for your goals, within your constraints of data/resources/time, is to try alternate approaches & compare the results on a repeatable quantitative evaluation.
Having training texts that are properly representative of your domain of interest can be quite important. You may need your representation of the word 'interest', for example, to reflect its sense in the stock/financial world, rather than the more general sense of the word.
But quantity of data is also quite important. With smaller datasets, none of your words may get great vectors, and words important to evaluating new posts may be missing or of very poor quality. In some cases, taking some pretrained set of vectors, with its larger vocabulary & sharper (but slightly mismatched-to-domain) word senses, may be a net help.
Because these pull in different directions, there's no general answer. It will depend on your data, goals, limits, & skills. Only trying a range of alternative approaches, and comparing them, will tell you what should be done for your situation.
As this iterative, comparative experimental pattern repeats endlessly as your projects & knowledge grow – it's what the experts do! – it's also an important habit to learn & practice. There's no authority you can ask for a certain answer to many of these tradeoff questions.
Other observations on what you've said:
If you don't have a large dataset of posts, and well-labeled 'ground truth' for sentiment, your results may not be good. All these techniques benefit from larger training sets.
Sentiment analysis is often approached as a classification problem (assigning texts to bins of 'positive' or 'negative' sentiment, perhaps with multiple intensities) or a regression problem (assigning texts a value on a numerical scale). There are many simpler ways to create features for such processes that do not involve word2vec vectors – a somewhat more advanced technique, which adds complexity. (In particular, word-vectors only give you features for individual words, not texts of many words, unless you add some other choices/steps.) If new to the sentiment-analysis domain, I would recommend against starting with word-vector features. Only consider adding them later, after you've achieved some initial baseline results without their extra complexity/choices. At that point, you'll also be able to tell if they're helping or not.
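If and when you do add them, here's a minimal sketch of the most common "extra step" I mean: averaging a post's word-vectors into a single fixed-length feature vector a classifier can consume. (The file name and the example posts are placeholders, not a recommendation of any particular pretrained set.)

import numpy as np
from gensim.models import KeyedVectors

# Placeholder file name; any large word2vec-format pretrained file would do.
kv = KeyedVectors.load_word2vec_format("pretrained_vectors.bin", binary=True)

def post_to_feature(post):
    """Average the vectors of in-vocabulary words: a crude but common way
    to turn a variable-length text into one fixed-length feature vector."""
    words = [w for w in post.lower().split() if w in kv]
    if not words:
        return np.zeros(kv.vector_size)
    return np.mean([kv[w] for w in words], axis=0)

# Hypothetical labeled posts; the resulting matrix feeds any ordinary classifier.
posts = ["great earnings, buying more", "terrible guidance, selling everything"]
X = np.array([post_to_feature(p) for p in posts])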

How does word2vec learn word relations?

Which part of the algorithm specifically makes the embeddings have the king - boy + girl = queen ability? Did they just get this by accident?
Edit :
Take CBOW as an example. I know that they use embeddings instead of one-hot vectors to encode the words, and that the embeddings are trainable, unlike one-hot vectors, where the encoding itself is not trainable. Then the output is a one-hot vector for the target word. They just average all the surrounding word embeddings at some point and then put some lego layers afterwards. So did they find the mentioned property by surprise at the end, or is there a training procedure or network structure that gave the embeddings that property?
The algorithm simply works to train (optimize) a shallow neural-network model that's good at predicting words, from other nearby words.
That's the only internal training goal – subject to the neural network's constraints on how the words are represented (N floating-point dimensions), or combined with the model's internal weights to render an interpretable prediction (forward propagation rules).
There's no other 'coaching' about what words 'should' do in relation to each other. All words are still just opaque tokens to word2vec. It doesn't even consider their letters: the whole-token is just a lookup key for a whole-vector. (Though, the word2vec variant FastText varies that somewhat by also training vectors for subwords – & thus can vaguely simulate the same intuitions that people have for word-roots/suffixes/etc.)
The interesting 'neighborhoods' of nearby words, and relative orientations that align human-interpretable aspects to vague directions in the high-dimensional coordinate space, fall out of the prediction task. And those relative orientations are what gives rise to the surprising "analogical arithmetic" you're asking about.
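For a concrete illustration with gensim (the pretrained file below is just one publicly available example, and I'm using the classic man/woman phrasing of your analogy; the exact results depend entirely on which vectors you load):

from gensim.models import KeyedVectors

# Any large, well-trained set of vectors will do; this file name is an example.
kv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

# The analogical arithmetic is just vector addition/subtraction plus a
# nearest-neighbor lookup over the learned positions:
print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# On well-trained vectors, 'queen' typically appears at or near the top.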
Internally, there's a tiny internal training cycle applied over and over: "nudge this word-vector to be slightly better at predicting these neighboring words". Then, repeat with another word, and other neighbors. And again & again, millions of times, each time only looking at a tiny subset of the data.
But the updates that contradict each other cancel out, and those that represent reliable patterns in the source training texts reinforce each other.
From one perspective, it's essentially trying to "compress" some giant vocabulary – tens of thousands to millions of unique words – into a smaller N-dimensional representation, usually 100-400 dimensions when you have enough training data. The dimensional values that become as good as possible (but never necessarily great) at predicting neighbors turn out to exhibit the other desirable positionings, too.

how to differentiate sentences with antonyms using word2vec

Say I have two sentences, which are similar except there is only one different word with opposite meaning. e.g. "I like her" vs. "I hate her".
word2vec is used in my classification project. As far as I know, word2vec seems unable to figure out differences between antonyms. Is there any way to solve this?
Unfortunately, what we consider 'antonyms' are usually quite similar in word2vec coordinate spaces. That's because such words are quite similar in almost all respects – except for the one contrast they emphasize.
And further, to the extent those contrasts are captured by the word2vec orientations, they will be in many varied directions. The 'hot'-vs-'cold' contrast will point in a different direction from the 'light'-vs-'dark' one, and from the 'small'-vs-'big' one.
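You can see the first point directly with gensim and any reasonably large pretrained set (the file name below is a placeholder):

from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("pretrained_vectors.bin", binary=True)

# Antonyms often score surprisingly high, because they occur in very similar
# contexts and differ mainly along one axis of meaning:
print(kv.similarity("like", "hate"))
print(kv.similarity("hot", "cold"))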
There might be some analytic technique on sets of word-vectors that helps discover antonymic directions/pairs, but I haven't noticed one discussed, especially not anything that's simple/intuitive or applicable to general word-vector sets. (Once you do know words are opposites, as when consulting prior labeled lexicons or analogy questions, then the directions-between-their-word-vectors can be useful in other analysis, like discovering other words that contrast-in-the-same-way, as when solving analogy problems.)
Can you be more specific about your ultimate goal, with more examples of the kinds of input you'll have and what specific results you want the software to report?
The one example you give, "I like her" vs "I hate her", could be more generally seen as sentiment classification, and word2vec-powered classifiers can do OK (though far from perfect) on such challenges. That is, given enough labeled training data, a classifier shown a lot of examples of "positive" and "negative" texts will tend to learn that 'like' (and similar words) are positive and 'hate' (and similar) are negative, and will do OK on other variants of positive/negative statements (excepting more complex constructions, like negations, subtle qualifications, understatement, irony, etc.)
So more info on what exactly you hope to detect/report, and what you've tried and found insufficient, might generate more ideas on how to achieve it.

Understanding model.similarity in word2vec

Hello, I am fairly new to word2vec. I wrote a small program to teach myself:
import gensim
from gensim.models import Word2Vec
sentence = [['Yellow', 'Banana'], ['Red', 'Apple'], ['Green', 'Tea']]
model = gensim.models.Word2Vec(sentence, min_count=1, size=300, workers=4)
print(model.similarity('Yellow', 'Banana'))
The similarity came out to be:
-0.048776340629810115
My question is: why is the similarity between banana and yellow not closer to 1, like .70 or something? What am I missing? Kindly guide me.
Word2Vec doesn't work well on toy-sized examples – it's the subtle push-and-pull of many varied examples of the same words that moves word-vectors to useful relative positions.
But also, especially, in your tiny, tiny example, you've given the model 300-dimensional vectors to work with, and only a 6-word vocabulary. With so many parameters, and so little to learn, it can essentially 'memorize' the training task, quickly becoming nearly perfect at its internal prediction goal – and further, it can do that in many, many alternate ways that may not involve much change from the word-vectors' random initialization. So it is never forced to move the vectors to useful positions that provide generalized info about the words.
You can sometimes get somewhat meaningful results from small datasets by shrinking the vectors, and thus the model's free parameters, and giving the model more training iterations. So you could try size=2, iter=20. But you'd still want more examples than just a few, and more than a single occurrence of each word. (Even in larger datasets, the vectors for words with just a small number of examples tend to be poor - hence the default min_count=5, which should be increased even higher in larger datasets.)
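For instance, a toy-scale sketch along those lines (the repeated sentences are invented just to give the model something to chew on; note that in gensim 4.x the parameters are named vector_size and epochs rather than size and iter):

from gensim.models import Word2Vec

# Invented toy corpus, repeated so each word is seen many times.
sentences = [
    ['yellow', 'banana'], ['ripe', 'yellow', 'banana'], ['yellow', 'banana', 'peel'],
    ['red', 'apple'], ['fresh', 'red', 'apple'], ['red', 'apple', 'juice'],
    ['green', 'tea'], ['hot', 'green', 'tea'], ['green', 'tea', 'leaf'],
] * 50

# Tiny vectors and many passes, so the few words present get pushed into
# meaningful relative positions (gensim 4.x parameter names shown).
model = Word2Vec(sentences, min_count=1, vector_size=2, epochs=20, workers=1)
print(model.wv.similarity('yellow', 'banana'))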
To really see word2vec in action, aim for a training corpus of millions of words.

How many samples are optimal in one class using k-nearest neighbor?

I have implemented the k-nearest neighbor algorithm in my system. My data consists of 26 classes, each with 100 samples. In my case K=7, and it was completely trial and error to get the best classification result.
I know that K should be chosen wisely to reduce the noise on the classification. But what about the number of samples? Is there any general rule such as "the more samples the better result"? Does it depend on something?
Thank you for all your responses.
You could try considering whatever underlying mechanism is generating your data, or whatever background knowledge you have on the problem, which might give you an idea of the relative size of the noise and the true underlying variation. E.g. when predicting favourite sports team from location I would expect more variation than when predicting favourite sport, so I would use a smaller k. However, I don't know of much general guidance, except to use cross-validation.
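For example, with scikit-learn you can let cross-validation pick k instead of trial and error (the synthetic data below just stands in for your 26-class, 100-samples-per-class setup):

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a 26-class dataset with 100 samples per class.
X, y = make_classification(n_samples=2600, n_features=20, n_informative=15,
                           n_classes=26, n_clusters_per_class=1, random_state=0)

# Search over candidate k values with 5-fold cross-validation.
search = GridSearchCV(KNeighborsClassifier(),
                      {'n_neighbors': list(range(1, 30, 2))}, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)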