Clojure dictionary of words

I want a dictionary of English words available so I can pick random English words. I have a dictionary text file that I downloaded from the internet which has almost 1 million words. What's the best way to go about using this list in Clojure, given that most of the time I'll only need one randomly selected word?
Edit:
To answer the comments: this is for some tests, which I may turn into load tests, which is why I want a decent number of random words, and I guess access speed is the most important thing. I do not want to use a database for this. I originally thought of a dictionary just because that's the first thing that popped into my mind, but I think a random sequence of letters and numbers would be good enough; perhaps I will just use a UUID as a string.

Read all the words into a vector and then call rand-nth, e.g.
(rand-nth all-words)
rand-nth uses the nth function on the underlying data structure, and Clojure vectors have O(log32 N) performance for index-based retrieval.
Edit: this assumes it is for a test environment, as you described in your question. A more memory-efficient method would be to use RandomAccessFile and seek to a random location in the file of words, read until you find the first word delimiter (e.g. comma, EOL), and then read the following bytes until the next delimiter, which will give you a random word.
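For what it's worth, here is a rough sketch of that seek-and-scan idea (shown in Python for brevity; from Clojure you would do the same thing via Java interop with RandomAccessFile), assuming a newline-delimited word file:

import os
import random

def random_word(path):
    """Seek to a random byte in a newline-delimited word file and return the next complete word."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        f.seek(random.randrange(size))
        f.readline()            # discard the (probably partial) line we landed in
        line = f.readline()
        if not line:            # landed in the last line: wrap around to the start
            f.seek(0)
            line = f.readline()
        return line.decode("utf-8").strip()

Note that this slightly biases toward words that follow long lines, which is usually fine for test data.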

Related

Regex - How can you identify strings which are not words?

Got an interesting one, and can't come up with any solid ideas, so thought maybe someone else may have done something similar.
I want to be able to identify strings of letters in a longer sentence that are not words and remove them. Essentially things like kuashdixbkjshakd
Everything, annoyingly, is in lowercase, which makes it more difficult, but since I only care about English, I'm essentially looking for consonant clusters that don't make phonetically pronounceable sounds.
Has anyone heard of/done something like this before?
EDIT: this is what ChatGPT tells me:
It is difficult to provide a comprehensive list of combinations of consonants that have never appeared in a word in the English language. The English language is a dynamic and evolving language, and new words are being created all the time. Additionally, there are many regional and dialectal variations of the language, which can result in different sets of words being used in different parts of the world.
It is also worth noting that the frequency of use of a particular combination of consonants in the English language is difficult to quantify, as the existing literature on the subject is limited. The best way to determine the frequency of use of a particular combination of consonants would be to analyze a large corpus of written or spoken English.
In general, most combinations of consonants are used in some words in the English language, but some combinations of consonants may be relatively rare. Some examples of relatively rare combinations of consonants in English include "xh", "xw", "ckq", and "cqu". However, it is still possible that some words with these combinations of consonants exist.
You could try passing every single word in the sentence to a function that checks whether the word is listed in a dictionary. There are a good number of dictionary text files on GitHub. To speed up the process, use a hash map :)
You could also use an auto-correction API or a library.
Algorithm to combine both methods:
Run sentence through auto correction
Run every word through dictionary
Delete words that aren't listed in the dictionary
This could remove typos and words that are non-existent.
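A rough sketch of the dictionary check with a hash-based lookup (the wordlist filename is an assumption):

def load_dictionary(path="words.txt"):
    """Load a newline-delimited word list into a set for O(1) membership tests."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def remove_non_words(sentence, dictionary):
    """Drop any token that is not in the dictionary."""
    return " ".join(w for w in sentence.lower().split() if w in dictionary)

dictionary = load_dictionary()
print(remove_non_words("the cat kuashdixbkjshakd sat down", dictionary))
# -> "the cat sat down" (assuming those words are in the list)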
You could train a simple model on sequences of characters which are permitted in the language(s) you want to support, and then flag any which contain sequences which are not in the training data.
The LangId language detector in SpamAssassin implements the Cavnar & Trenkle language-identification algorithm which basically uses a sliding window over the text and examines the adjacent 1 to 5 characters at each position. So from the training data "abracadabra" you would get
a 5
ab 2
abr 2
abra 2
abrac 1
b 2
br 2
bra 2
brac 1
braca 1
:
With enough data, you could build a model which identifies unusual patterns (my suggestion would be to try a window size of 3 or smaller for a start, and train it on several human languages from, say, Wikipedia), but it's hard to predict exactly how precise this will be.
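A toy sketch of that counting with a window of up to 3 characters (training on a single word here only to mirror the example above; in practice you would feed it a large corpus):

from collections import Counter

def char_ngrams(text, max_n=3):
    """Count all character n-grams of length 1..max_n over the text."""
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return counts

model = char_ngrams("abracadabra")

def looks_unpronounceable(word, n=3):
    """Flag a word if it contains any n-gram never seen in the training data."""
    return any(word[i:i + n] not in model for i in range(len(word) - n + 1))

print(looks_unpronounceable("abra"))   # False: every trigram was seen in training
print(looks_unpronounceable("abxa"))   # True: "abx" never appears in the training data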
SpamAssassin is written in Perl and it should not be hard to extract the language identification module.
As an alternative, there is a library called libtextcat which you can run standalone from C code if you like. The language identification in LibreOffice uses a fork which they adapted to use Unicode specifically, I believe (though it's been a while since I last looked at that).
Following Cavnar & Trenkle, all of these truncate the collected data to a few hundred patterns; you would probably want to extend this to cover at least all of the 3-grams you find in your training data.
Perhaps see also Gertjan van Noord's link collection: https://www.let.rug.nl/vannoord/TextCat/
Depending on your test data, you could still get false positives e.g. on peculiar Internet domain names and long abbreviations. Tweak the limits for what you want to flag - I would think that GmbH should be okay even if you didn't train on German, but something like 7 or more letters long should probably be flagged and manually inspected.
This will match words containing a run of more than 5 consecutive consonants (you probably want "y" not to be treated as a consonant, but that's up to you):
\b[a-z]*[b-z&&[^aeiouy]]{6}[a-z]*\b
5 was chosen because I believe witchcraft has the longest chain of consecutive consonants of any English word. You could dial the "6" in the regex back to 5 or even 4 if you don't mind matching some outliers.
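The [b-z&&[^aeiouy]] class intersection is Java regex syntax; in Python you would spell the consonant class out explicitly, e.g.:

import re

# six or more consonants in a row (b-z minus the vowels and "y")
consonant_run = re.compile(r"\b[a-z]*[bcdfghjklmnpqrstvwxz]{6}[a-z]*\b")

print(consonant_run.findall("the kuashdixbkjshakd was witchcraft"))
# -> ['kuashdixbkjshakd']  ("witchcraft" tops out at 5 consonants in a row)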

SpaCy questions about lists of lists in Python 2.7

I think part of my issue has to do with spaCy and part has to do with not understanding the most elegant way to work within python itself.
I am loading a txt file in Python, tokenizing it into sentences and then tokenizing those into words with NLTK:
sent_text = nltk.sent_tokenize(text)
tokenized_text = [nltk.word_tokenize(x) for x in sent_text]
That gives me a list of lists, where each list within the main list is a sentence of tokenized words. So far so good.
I then run it through SpaCy:
text = nlp(unicode(tokenized_text))
Still a list of lists, same thing, but it has all the SpaCy info.
This is where I'm hitting a block. Basically what I want to do is, for each sentence, only retain the nouns, verbs, and adjectives, and within those, also get rid of auxiliaries and conjunctions. I was able to do this earlier by creating a new empty list and appending only what I want:
sent11 = []
for token in sent1:
    if (token.pos_ == 'NOUN' or token.pos_ == 'VERB' or token.pos_ == 'ADJ') and (token.dep_ != 'aux') and (token.dep_ != 'conj'):
        sent11.append(token)
This works fine for a single sentence, but I don't want to be doing it for every single sentence in a book-length text.
Then, once I have these new lists (or whatever the best way to do this is) containing only the pieces I want, I want to use spaCy's "similarity" function to determine which sentence is semantically closest to some other, much shorter text that I've stripped down in the same way (keeping only nouns, adjectives, verbs, etc.).
I've got it working when comparing one single sentence to another by using:
sent1.similarity(sent2)
So I guess my questions are
1) What is the best way to turn a list of lists into a list of lists that only contain the pieces I want?
and
2) How do I cycle through this new list of lists and compare each one to a separate sentence and return the sentence that is most semantically similar (using the vectors that SpaCy comes with)?
You're asking a bunch of questions here so I'm going to try to break them down.
Is nearly duplicating a book-length amount of text by appending each word to a list bad?
How can one eliminate or remove elements of a list efficiently?
How can one compare a sentence to each sentence in the book where each sentence is a list and the book is a list of sentences.
Answers:
Generally yes, but on a modern system it isn't a big deal. Books are just text, and English text encoded as UTF-8 is roughly one byte per character, so even a long book such as War and Peace comes out to under 3.3 MB. If you are using Chrome, Firefox, or IE to view this page, your computer has more than enough memory to fit a few copies of it into RAM.
In Python you can't really do this efficiently with a plain list.
You can do removal using:
l = [1,2,3,4]
del l[-2]
print(l)
[1,2,4]
but in the background Python is shifting every subsequent element of that list over by one, so this is not recommended for large lists. A collections.deque, which is implemented as a doubly linked list, has a bit of extra overhead but allows for efficient removal of elements at either end.
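For example, the operations that are cheap on a deque:

from collections import deque

d = deque([1, 2, 3, 4])
d.popleft()       # O(1) removal from the front (list.pop(0) would be O(n))
d.pop()           # O(1) removal from the back
print(d)          # deque([2, 3])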
If memory is an issue then you can also use generators wherever possible. For example you could probably change:
tokenized_text = [nltk.word_tokenize(x) for x in sent_text]
which creates a list that contains tokens of the entire book, with
tokenized_text = (nltk.word_tokenize(x) for x in sent_text)
which creates a generator that yields tokens of the entire book. Generators have almost no memory overhead and instead compute the next element as they go.
I'm not familiar with SpaCy, and while the question fits on SO you're unlikely to get good answers about specific libraries here.
From the looks of it you can just do something like:
best_match = None
best_similarity_value = 0
for token in parsed_tokenized_text:
    similarity = token.similarity(sent2)
    if similarity > best_similarity_value:
        best_similarity_value = similarity
        best_match = token
And if you wanted to check against multiple sentences (non-consecutive) then you could put an outer loop that goes through those:
for sent2 in other_token_list:
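Putting the two pieces together, here is a rough sketch; it skips the NLTK pre-tokenization and lets spaCy do the sentence splitting, and the model name en_core_web_md is an assumption (any model that ships with word vectors will do):

import spacy

nlp = spacy.load("en_core_web_md")   # assumed model name; needs word vectors for .similarity

KEEP_POS = {"NOUN", "VERB", "ADJ"}
DROP_DEP = {"aux", "conj"}

def strip_tokens(doc_like):
    """Keep only nouns/verbs/adjectives that aren't auxiliaries or conjunctions, re-parsed as a Doc."""
    kept = [tok.text for tok in doc_like if tok.pos_ in KEEP_POS and tok.dep_ not in DROP_DEP]
    return nlp(" ".join(kept))

def most_similar_sentence(book_text, query_text):
    """Return the sentence in book_text whose stripped form is most similar to the stripped query."""
    query = strip_tokens(nlp(query_text))
    best_sent, best_score = None, -1.0
    for sent in nlp(book_text).sents:
        stripped = strip_tokens(sent)
        if not len(stripped):            # nothing left after filtering
            continue
        score = stripped.similarity(query)
        if score > best_score:
            best_sent, best_score = sent, score
    return best_sent, best_score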

Backend challenge in Python

For the past few days I have been working on a challenge: write an algorithm in Python 2.7 to find a passphrase given a few hints.
Specifically:
An anagram of the passphrase is: "poultry outwits ants"
The MD5 hash of the secret phrase is "4624d200580677270a54ccff86b9610e"
A Wordlist
What approaches would you use to find the password? I know that brute-forcing all the possible combinations is the sure way, but it would also take far too long to complete (effectively never; I think it is around 20^20 possible combinations).
What I have come up with is to filter the wordlist based on the letters that exist in the anagram. If I find a word that contains a letter that is not in the passphrase, I discard it. In addition, I wanted to take the character frequencies into consideration, so I removed from the wordlist any word in which some character appears more often than it does in the passphrase. This was done at the word level, meaning I checked each word individually. Eventually, I narrowed the 90k+ words in the wordlist down to about 1,700 unique words that could be used in the passphrase.
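In code, that per-word filter looks roughly like this (the wordlist filename is an assumption):

from collections import Counter

ANAGRAM = "poultry outwits ants"
available = Counter(ANAGRAM.replace(" ", ""))

def fits(word):
    """True if every letter of word occurs no more often than it does in the anagram."""
    return all(available[ch] >= n for ch, n in Counter(word).items())

with open("wordlist.txt") as f:                       # assumed filename
    candidates = sorted({w.strip().lower() for w in f if w.strip() and fits(w.strip().lower())})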
Question 1: Is there anything more that can be done in order to reduce the number of possibilities? Is there a more clever way to take into account the frequencies of the letters?
Next, I thought that since I had narrowed it down to 1.7k words, maybe I could try permutations (itertools.permutations()) of these words that could possibly match the MD5 hash of the passphrase. Since the anagram of the passphrase contains 2 space characters, I assumed that the passphrase is also three words and not just scrambled characters (maybe I am wrong, but at least I should try it first). As a result, I tried checking permutations of three words from the filtered wordlist. As it turned out, that kind of approach is not fast enough for my laptop to get any results either: the program reaches the memory limit and the computer freezes.
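For reference, here is the shape of that check with itertools.permutations used lazily; it avoids building a giant list, though with ~1,700 candidates it is still on the order of five billion ordered triples:

import hashlib
from itertools import permutations

TARGET = "4624d200580677270a54ccff86b9610e"

def find_phrase(candidates):
    """Lazily try every ordered triple of candidate words against the MD5 target."""
    for a, b, c in permutations(candidates, 3):
        phrase = "{} {} {}".format(a, b, c)
        if hashlib.md5(phrase.encode("utf-8")).hexdigest() == TARGET:
            return phrase
    return None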
I also thought about considering pairs of words instead of triplets and somehow matching the letter frequencies in order to filter out some possibilities, but I have not come up with a way to do it yet.
Question 2: Is there a way that I can get any more information about the passphrase without checking all the permutations, since this task is prohibitive for my laptop (at least for 1.7k words and above).
I tried using hashcat but I found it too complicated. I ran a couple of mask attacks on the MD5 hash but with no success. I tried brute-forcing it too, since hashcat can use the GPU, but it was still impossible. The main reason was that I could not understand what kind of arguments I needed to give. I know there is an extensive wiki, but as someone with close to no background in hash cracking it was not really helpful. In addition, I would prefer to do it on my own and use other programs as little as possible.
If you have any suggestions about solving this please let me know. I am doing this for educational purposes, so any input will be greatly appreciated.
Thanks

SQL Hash table for words

I'm trying to solve the "find all possible words for a set of letters" problems. There are some good answers out there, but I still can't figure it out.
In my first test, I put the whole dictionary in an array and then looped through each letter. This is super fast, but it takes forever to load the dictionary into the array, and it requires a huge amount of memory.
So I need to store the dictionary (750,000 words) in a SQL database.
I guess there are two solutions to find all the possible words:
Make an advanced query that returns all the possible words
Make a simple query that returns a fraction of the database with words that might be possible, and then quickly loop through that array and validate the words.
The problem:
It must be super fast. An iPhone 4 needs to be able to get all possible words in under 5-6 seconds so it doesn't hinder the game.
Here's a similar question:
IOS: Sqlite. Find record fast
Sulthan's answer seems like a good idea. Create a hash table, and then:
Bitmask for ASCII letters (ignoring any non-ASCII alphabets). Bit at position 0 means the word contains "a", at position 1 contains "b", etc. If we create the same bitmask for our letters, we can select words such as (wordMask & ~lettersMask) == 0
How do you make the bitmask and the hash table, and how do you construct the SQL query?
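To make the quoted idea concrete, here is roughly what the bitmask computation looks like in Python (illustrative only; the SQL side would store this integer in a column next to each word):

def letter_mask(s):
    """Set bit 0 if the string contains 'a', bit 1 for 'b', and so on; non a-z characters are ignored."""
    mask = 0
    for ch in s.lower():
        if 'a' <= ch <= 'z':
            mask |= 1 << (ord(ch) - ord('a'))
    return mask

letters_mask = letter_mask("retupmoc")            # the rack of letters you were given
word_mask = letter_mask("computer")
is_candidate = (word_mask & ~letters_mask) == 0   # word uses no letter outside the rack

A query along the lines of SELECT word FROM words WHERE (mask & ~?) = 0 (SQLite supports the & and ~ operators) would then return the candidate set, which you still post-check in code because the bitmask ignores duplicate letters.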
Thanks
SQL is probably not the best option. The traditional data structure for storing a collection of words is called a trie. I'm sure there are implementations out there you can find. Someone else will have an answer to that.
The algorithm I envision is to permute the letters you are given, and check each permutation to see if it is in the Trie.
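For illustration, a minimal trie in Python (just to show the shape of the structure; real implementations add memory tricks on top):

class TrieNode:
    def __init__(self):
        self.children = {}    # letter -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

Checking each full permutation works; the same structure also lets you abandon a branch as soon as a prefix has no node, which prunes most of the permutation tree early.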

Selecting a data structure to store words and look up their occurrences?

Let's say I'm given a document with a bunch of words (a poem, for example). I'd then like to be able to store each word so that I can run a command such as "ocean 4" to find where the fourth occurrence of the word "ocean" is within my text. What would be the best data structure to store this in?
I'd like to stay under O(n^2) but I think the solutions I've come up with so far are too inefficient.
Any help getting started would be appreciated.
Thanks
You could try using a hash table where each word is the key and the value is a list; each position within the list stores a location of the word within your text. For example, in Python, the fourth occurrence of "ocean" would be myDict["ocean"][3] (lists are zero-indexed).
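A minimal sketch of that approach (a dict of lists built in one O(n) pass; the sample text is made up):

from collections import defaultdict

def build_index(text):
    """Map each word to the list of its word positions in the text."""
    index = defaultdict(list)
    for position, word in enumerate(text.lower().split()):
        index[word].append(position)
    return index

poem = "the ocean sang and the ocean slept the ocean woke the ocean wept"
index = build_index(poem)
print(index["ocean"][3])   # word position of the fourth occurrence of "ocean" -> 11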