Algorithm to rank the simplicity of a random name - linguistics

I have been looking for a name for a new project. I want the name to have available domains and social media handles. For months, every name I could think of was already taken.
So I generated a list of names with at least a consonant and a vowel and checked if the domains are available (which is very fast). I have about a million possible names.
I would like to sort them by some rank of simplicity. "Aaazq" would be close to the bottom, "Cawel" would be close to the top. I thought of the CVC structure (Consonant-Vowel-Consonant) and wonder if some more sophisticated algorithm exists. I searched for "sonority" but it has a different meaning in linguistics.
How can I automatically rank the simplicity of a random name?

I assume you would judge simplicity as compared to a target language, say English. Something that is 'simple' in English might not be 'simple' in German or Korean, as these languages have very different phonological structures.
I would recommend the following:
Collect some data in the language you are using. Just get some novels from Project Gutenberg, for example, or newspaper articles. Whatever you can easily get hold of.
Now generate n-grams from this: all sequences of two (bigrams) or three (trigrams) letters. Turn this into a frequency list, so that common n-grams are at the top of the list with a high frequency.
Turn your suggested name into n-grams. Count how many times each of its n-grams occurs in your frequency list, and take the average or median of the result (a short sketch of this scoring follows the examples below).
Your examples would do as follows:
aa aa az zq: "aa" is rare ("aardvark"), "az" is a bit more common ("glaze", "raze"), and "zq" does not occur at all. So, not a very high score.
ca aw we el: all of these are fairly common in English words, so a reasonably high score.
You could also add a dummy # at the beginning and the end, so in your first example you'd get #a, which is fine, as many English words start with "a", but the final q# bombs out, as only words such as "Iraq" end in "q".
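A minimal Python sketch of this bigram scoring, assuming a plain-text corpus in a file (corpus.txt is a placeholder name):

from collections import Counter
import re

# build a bigram frequency list from the corpus, with # marking word boundaries
text = open("corpus.txt", encoding="utf-8").read().lower()
words = re.findall(r"[a-z]+", text)
bigram_freq = Counter(a + b for w in words for a, b in zip("#" + w, w + "#"))

def simplicity(name):
    # average corpus frequency of the name's bigrams (higher = simpler)
    name = name.lower()
    pairs = [a + b for a, b in zip("#" + name, name + "#")]
    return sum(bigram_freq[p] for p in pairs) / len(pairs)

print(simplicity("Cawel") > simplicity("Aaazq"))  # expected: True on English text

Ranking the million candidates is then a single call: sorted(names, key=simplicity, reverse=True).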
You can obviously do the same for other languages.
Also, you can reverse the process in a way, and pick random n-grams from your frequency list to generate names: by picking higher-frequency n-grams you will make sure the name is a good match with the phonological structure of your target language.
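A rough sketch of that generation direction, reusing the bigram_freq counter from the scoring sketch above (walking letter by letter is just one simple way of "picking high-frequency n-grams"):

import random

def generate(bigram_freq, max_len=6):
    # start at the boundary marker '#' and repeatedly pick the next letter
    # in proportion to bigram frequency; stop when '#' is drawn again
    name, prev = "", "#"
    while len(name) < max_len:
        candidates = {b[1]: n for b, n in bigram_freq.items() if b[0] == prev}
        nxt = random.choices(list(candidates), weights=list(candidates.values()))[0]
        if nxt == "#":
            break
        name += nxt
        prev = nxt
    return name.capitalize()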
Note for pedants: I use phonological structure, but it's really its representation in the spelling system that we're dealing with here.

Related

How do I train to find the occurrence of a US state in NLP?

How do I train a model to find occurrences of a US state, when the set is constrained to 50 states but we need a large amount of data (say 1000 rows) to train each label?
I think it depends on the task you're trying to solve here. Do you need to decide whether a given two-letter combination is a US state abbreviation or not? Would a simple set of names work? Or are you trying to build a simple NER system (https://en.wikipedia.org/wiki/Named-entity_recognition) for state names? In that case you can start with simple regex matching, but if you want to train a model later, you have much more than 50 examples: your dataset won't just be "do these two letters represent a state or not", but many sentences which have state names somewhere in them, or not at all.
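For the simple-matching starting point, something along these lines is usually enough (the state lists are truncated here for brevity):

import re

US_STATES = {"Alabama", "Alaska", "Arizona"}   # ... plus the remaining 47 states
STATE_ABBREVIATIONS = {"AL", "AK", "AZ"}       # ... likewise truncated

pattern = re.compile(r"\b(" + "|".join(US_STATES | STATE_ABBREVIATIONS) + r")\b")

def find_states(sentence):
    # return every state name or abbreviation mentioned in the sentence
    return pattern.findall(sentence)

find_states("Flights from AZ to Alaska resume next week")  # ['AZ', 'Alaska']

For training an NER model instead, each sentence would be labelled with the spans of the state mentions rather than a single yes/no label per two-letter string.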

How to reduce semantically similar words?

I have a large corpus of words extracted from the documents. In the corpus are words which might mean the same.
For example, "command" and "order" mean the same, while "apple" and "apply" do not.
I would like to merge the similar words, say "command" and "order" to "command".
I have tried word2vec, but it doesn't check for semantic similarity of words (it outputs a good similarity for "apple" and "apply" since four characters in the words are the same). And when I try WuP similarity, it gives a good similarity score if the words have matching synonyms, but the results are not that impressive.
What could be the best approach to reduce semantically similar words to get rid of redundant data and merge similar data?
I believe one of the options here is using WordNet. It gives you a list of synonyms for a word, so you may merge them together given that you know their part of speech.
However, I'd like to point out that "order" and "command" are not the same, e.g. you do not command food in restaurants, and such homonymy is true for many, many words.
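The lookup itself is a few lines with NLTK's WordNet interface (the merge rule, e.g. mapping every member of an overlapping synset to one canonical word, is up to you):

from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def synonyms(word, pos=wn.NOUN):
    # all WordNet lemma names that share a synset with the word, for a given part of speech
    return {lemma.name() for synset in wn.synsets(word, pos=pos) for lemma in synset.lemmas()}

shared = synonyms("command") & synonyms("order")  # a non-empty overlap suggests a candidate merge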
Also, I'd like to point out that for word2vec spelling is irrelevant and is not taken into consideration at all; the algorithm considers only the contexts in which words are used. I suspect you might be mixing it up with FastText.
That said, there may be some problem with your model, because in a standard set of embeddings the distance between these two words should be large: the MUSE FastText similarity between "apple" and "apply" is only 0.15, which is quite low.
I computed this with Gensim's similarity function:
model.similarity("apply", "apple")
So you might need to fix learning parameters or just use a pretrained model.
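For example, with a pretrained model from Gensim's downloader (the model name below is just one of the available pretrained sets; the exact similarity values will vary by model):

import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")   # pretrained word vectors
model.similarity("apple", "apply")            # expected to be low: spelling overlap is irrelevant
model.similarity("command", "order")          # expected to be higher: related contexts of use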

SAS Fuzzy Lookup

I'm trying to do a fuzzy lookup on two datasets in SAS. I searched on Google and found the link below, which explains the process of doing a fuzzy lookup in SAS.
Link: http://blogs.sas.com/content/sgf/2015/01/27/how-to-perform-a-fuzzy-match-using-sas-functions/
To explain the problem in detail: the two datasets contain hospital names and other additional information. I have to match both datasets based on hospital names. But the main challenge is that in some cases the hospital name appears as follows:
Dataset1(hospital Name): St.Hospital
Dataset2(hospital Name): Saint.Hospital
Likewise "INC" and "Incorporated".
I would like to know whether there is a best way to do the fuzzy lookup in SAS.
Thanks,
VJ
There can't be any single best way to do a fuzzy lookup, as the article you linked to explains. You have to decide on the best approach for your particular problem domain and your particular tolerances for false positives and false negatives, etc.
For your data, I would probably just define a set of 'best guess' transformations on the hospital name in both input data sets, and then do a standard merge on the transformed names (a rough sketch follows the list). The transformations would be something like:
Convert to uppercase
Convert 'ST.' or 'ST ' to 'SAINT' (or should that be 'STREET'??)
Convert 'INC' or 'INC.' to 'INCORPORATED'
Convert any other known common strings as above
Remove any remaining punctuation
Use COMPBL to reduce multiple spaces to a single space
Do the merge
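As an illustration of the normalize-then-merge idea, here is the same logic sketched in Python (in SAS the corresponding steps would use UPCASE, TRANWRD, COMPRESS and COMPBL followed by a DATA step merge, and the exact rules would need tuning for your data):

import re

def normalize(name):
    # 'best guess' transformations applied to the hospital name in both datasets
    name = name.upper()
    name = re.sub(r"[^\w\s]", " ", name)        # punctuation -> space
    name = re.sub(r"\bST\b", "SAINT", name)     # or STREET, as noted above
    name = re.sub(r"\bINC\b", "INCORPORATED", name)
    return re.sub(r"\s+", " ", name).strip()    # collapse multiple blanks

normalize("St.Hospital") == normalize("Saint.Hospital")  # True: both become 'SAINT HOSPITAL'

Replacing punctuation with spaces before expanding the abbreviations is what lets 'St.Hospital' and 'Saint.Hospital' end up with the same merge key.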
You will then have to examine the result and decide if it's good enough for your purposes. There is no general way for a computer to match up two strings that might be arbitrarily badly-spelled, particularly if there are multiple possible 'correct' matches - this is the same problem that spell-checkers have been trying to solve for decades - there's no way of knowing (in isolation) whether a misspelled word like 'falt' was meant to be 'fault', 'fall', 'fast', 'fat' etc.
If your results have to be perfect, you will need a human to review anything that isn't an exact match, and even then some of the exact matches might be misspellings that happen to match another hospital's name (eg, 'Saint Mary's Hospital' vs 'Saint May's Hospital'). That's why the preferred approach would usually be to identify the hospital by an ID number and the name, rather than just the name.

Custom word weights for sentences when calling h2o transform and word2vec, instead of straight AVERAGE of words

I am using H2O machine learning package to do natural language predictions, including the functions h2o.word2vec and h2o.transform. I need sentence level aggregation, which is provided by the AVERAGE parameter value:
h2o.transform(word2vec, words, aggregate_method = c("NONE", "AVERAGE"))
However, in my case I strongly wish to avoid equal weighting of "the" and "platypus" for example.
Here's a scheme I concocted to achieve custom word-weightings. If H2O's word2vec "AVERAGE" option uses all the words including duplicates that might appear, then I could effect a custom word weighting when calling h2o.transform by adding additional duplicates of certain words to my sentences, when I want to weight them more heavily than other words.
Can any H2O experts confirm that the word2vec AVERAGE parameter uses all the words, rather than just the unique words, when computing the AVERAGE of the words in a sentence?
Alternatively, is there a better way? I have tried, but I cannot come up with any correct math to re-weight the sentence average by some factor after it has already been computed.
Yes, h2o.transform will consider each occurrence of a word for the averaging, not just the unique words. Your trick will therefore work.
There is currently no direct way to provide user-defined weights. You could probably do an ugly hack and weight the word embeddings directly, but that won't be a straightforward solution I could recommend.
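To see why the duplication trick is mathematically sound, here is a small numpy sketch with toy vectors: averaging a list in which a word appears k times is exactly a weighted average with integer weights.

import numpy as np

# toy 3-dimensional embeddings, purely for illustration
the_vec      = np.array([0.1, 0.2, 0.0])
platypus_vec = np.array([0.9, 0.4, 0.7])

# plain average with "platypus" duplicated three times ...
dup_avg = np.mean([the_vec, platypus_vec, platypus_vec, platypus_vec], axis=0)

# ... equals a weighted average with weights 1 and 3
w_avg = (1.0 * the_vec + 3.0 * platypus_vec) / (1.0 + 3.0)

np.allclose(dup_avg, w_avg)  # True

The limitation is that duplication only gives integer weight ratios; arbitrary real-valued weights would need the direct weighted average above, computed outside h2o.transform.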
We can add this feature to H2O. I would love to hear what API would work for you (how would you like to provide the weights).

Improving classification results with Weka J48 and Naive Bayes Multinomial classifiers

I have been using Weka's J48 and Naive Bayes Multinomial (NBM) classifiers on frequencies of keywords in RSS feeds to classify the feeds into target categories.
For example, one of my .arff files contains the following data extracts:
@attribute Keyword_1_nasa_Frequency numeric
@attribute Keyword_2_fish_Frequency numeric
@attribute Keyword_3_kill_Frequency numeric
@attribute Keyword_4_show_Frequency numeric
…
@attribute RSSFeedCategoryDescription {BFE,FCL,F,M,NCA,SNT,S}
@data
0,0,0,34,0,0,0,0,0,40,0,0,0,0,0,0,0,0,0,0,24,0,0,0,0,13,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BFE
0,0,0,12,0,0,0,0,0,20,0,0,0,0,0,0,0,0,0,0,25,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BFE
0,0,0,10,0,0,0,0,0,11,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BFE
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BFE
…
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,FCL
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,F
…
20,0,64,19,0,162,0,0,36,72,179,24,24,47,24,40,0,48,0,0,0,97,24,0,48,205,143,62,78,0,0,216,0,36,24,24,0,0,24,0,0,0,0,140,24,0,0,0,0,72,176,0,0,144,48,0,38,0,284,221,72,0,72,0,SNT
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SNT
0,0,0,0,0,0,11,0,0,0,0,0,0,0,19,0,0,0,0,0,0,0,0,0,0,10,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,17,0,0,0,0,0,0,0,0,0,0,0,0,0,20,0,S
And so on: there is a total of 570 rows, where each one contains the frequencies of the keywords in a feed for a day. In this case, there are 57 feeds over 10 days, giving a total of 570 records to be classified. Each keyword is prefixed with a surrogate number and postfixed with 'Frequency'.
I am using 10-fold cross-validation for both the J48 and NBM classifiers on a 'black box' basis. The other parameters used are also defaults, i.e. 0.25 confidence and a minimum number of objects of 2 for J48.
So far, my classification rates for instances with varying numbers of days, date ranges and actual keyword frequencies have been consistently in the 50-60% range for both J48 and NBM. But I would like to improve this if possible.
I have reduced the decision tree confidence level, sometimes as low as 0.1, but the improvements are very marginal.
Can anyone suggest any other way of improving my results?
To give more information, the basic process here involves a diverse collection of RSS feeds where each one belongs to a single category.
For a given date range, e.g. 01 - 10 Sep 2011, the text of each feed's item elements are combined. The text is then validated to remove words with numbers, accents and so on, and stop words (a list of 500 stop words from MySQL is used). The remaining text is then indexed in Lucene to work out the most popular 64 words.
Each of these 64 words is then searched for in the description elements of the feeds for each day within the given date range. As part of this, the description text is also validated in the same way as the title text and again indexed by Lucene. So a popular keyword from the title such as 'declines' is stemmed to 'declin': then if any similar words are found in the description elements which also stem to 'declin', such as 'declined', the frequency for 'declin' is taken from Lucene's indexing of the word from the description elements.
The frequencies shown in the .arff file match on this basis, i.e. on the first line above, 'nasa', 'fish', 'kill' are not found in the description items of a particular feed in the BFE category for that day, but 'show' is found 34 times. Each line represents occurrences in the description items of a feed for a day for all 64 keywords.
So I think that the low frequencies are not due to stemming. Rather, I see it as the inevitable result of some keywords being popular in feeds of one category but not appearing in other feeds at all. Hence the sparseness shown in the results. Generic keywords may also be pertinent here as well.
The other possibilities are differences in the numbers of feeds per category where more feeds are in categories like NCA than S, or the keyword selection process itself is at fault.
You don't mention anything about stemming. In my opinion you could get better results if you performed word stemming and the WEKA evaluation was based on the keyword stems.
For example, let's suppose that your WEKA model is built given a keyword 'surfing', and a new RSS feed contains the word 'surf'. There should be a match between these two words.
There are many free available stemmers for several languages.
For the English language some available options for stemming are:
Porter's stemmer
Stemming based on WordNet's dictionary
In case you would like to perform stemming using WordNet's dictionary, there are libraries and frameworks that integrate with WordNet.
Below you can find some of them:
MIT Java WordNet interface (JWI)
Rita
Java WordNet Library (JWNL)
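As a quick illustration of what stemming buys you (shown here with NLTK's Porter implementation rather than one of the Java libraries above):

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
stemmer.stem("surfing")  # 'surf'
stemmer.stem("surf")     # 'surf' -- the model keyword and the new feed word now match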
EDITED after more information was provided
I believe that the key point in this case is the selection of the "most popular 64 words". The selected words or phrases should be keywords or keyphrases, so the challenge here is keyword or keyphrase extraction.
There are several books, papers and algorithms written about keyword/keyphrase extraction. The University of Waikato has implemented, in Java, a well-known algorithm called the Keyword Extraction Algorithm (KEA). KEA extracts keyphrases from text documents and can be used either for free indexing or for indexing with a controlled vocabulary. The implementation is distributed under the GNU General Public License.
Another issue that should be taken into consideration is part-of-speech (POS) tagging. Nouns carry more information than the other POS tags, so you might get better results if you checked the POS tags and the selected 64 words were mostly nouns.
In addition, according to Anette Hulth's paper Improved Automatic Keyword Extraction Given More Linguistic Knowledge, her experiments showed that keywords/keyphrases mostly have, or are contained in, one of the following five patterns:
ADJECTIVE NOUN (singular or mass)
NOUN NOUN (both sing. or mass)
ADJECTIVE NOUN (plural)
NOUN (sing. or mass) NOUN (pl.)
NOUN (sing. or mass)
In conclusion, a simple step that in my opinion could improve your results is to find the POS tag for each word and select mostly nouns when evaluating the new RSS feeds. You can use WordNet to find the POS tag for each word, and as mentioned above there are many libraries that integrate with WordNet's dictionary. Of course stemming is also essential for the classification process and has to be maintained.
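A minimal sketch of that noun filtering with NLTK's POS tagger (WordNet-based lookups via the libraries above would work just as well):

import nltk  # requires nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')

tokens = nltk.word_tokenize("NASA shows new images of a fish kill")
tagged = nltk.pos_tag(tokens)
nouns = [word for word, tag in tagged if tag.startswith("NN")]  # NN, NNS, NNP, NNPS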
I hope this helps.
Try turning off stemming altogether. The Stanford Intro to IR authors provide a rough justification of why stemming hurts, and at the very least does not help, in text classification contexts.
I have tested stemming myself on a custom multinomial naive Bayes text classification tool (I get accuracies of 85%). I tried the 3 Lucene stemmers available from org.apache.lucene.analysis.en version 4.4.0, which are EnglishMinimalStemFilter, KStemFilter and PorterStemFilter, plus no stemming, and I did the tests on small and larger training document corpora. Stemming significantly degraded classification accuracy when the training corpus was small, and left accuracy unchanged for the larger corpus, which is consistent with the Intro to IR statements.
Some more things to try:
Why only 64 words? I would increase that number by a lot, but preferably you would not have a limit at all.
Try tf-idf (term frequency times inverse document frequency). What you're using now is just tf. If you multiply this by idf you can mitigate problems arising from common and uninformative words like "show". This is especially important given that you're using so few top words (a short sketch combining this with N-grams follows the list).
Increase the size of the training corpus.
Try shingling to bi-grams, tri-grams, etc, and combinations of different N-grams (you're now using just unigrams).
There's a bunch of other knobs you could turn, but I would start with these. You should be able to do a lot better than 60%. 80% to 90% or better is common.
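To make the tf-idf and N-gram suggestions concrete, here is a scikit-learn sketch (the feed texts and labels are hypothetical stand-ins for your combined feed text and the RSSFeedCategoryDescription values; Weka's StringToWordVector filter exposes comparable IDF and N-gram tokenizer options):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

feeds = ["nasa shows new images ...", "fish kill reported ..."]   # raw feed text per feed/day
labels = ["SNT", "BFE"]                                           # category per row

# tf-idf over unigrams and bigrams, with no fixed 64-word limit
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    MultinomialNB(),
)
model.fit(feeds, labels)
model.predict(["new satellite images from nasa"])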