I'm trying to split a string (a person's name) into components: prefix (Dr, Mr, Miss, etc.), given, middle, family, and suffix (Jr, III, etc.).
Prefixes and suffixes can be a known list of options.
Edge cases for double-barrelled family names like 'da Vinci' or 'di Caprio' don't really bother me too much. The da's and di's will just be dropped into the middle name, or, if a middle name is given (i.e. four names are found that don't match a prefix or suffix), everything after the second name is dropped into the family name.
I'm thinking about writing the regex myself... but before I go and reinvent the wheel, I wonder if anyone has something that works I can use?
Thanks.
Here is a proposal in Perl (I did not find a language or regex-flavor requirement in the question).
Perl supports non-capturing groups, e.g. "(?:\w+)", which I consider necessary here to stay below 10 captured groups.
I am using "\w+" almost everywhere, for simplicity. Names can therefore contain "_" and digits. If you do not like that, use "[[:alpha:]]+" instead.
perl -pe"s/(?:(Dr\.|Mr\.) )?(?:(\w+)(?: (\w+(?: \w+)*))? )?(?:(\w+) (Jr\.|I+))|(?:(Dr\.|Mr\.) )?(?:(\w+)(?: (\w+(?: \w+)*))? )?(\w+)/pre\1\6 give\2\7 middle\3\8 fam\4\9 post\5/"
For demonstration purposes, the code performs a replacement, inserting field names.
Please extract the regex you need from it and fill in the missing prefixes and suffixes.
What I consider the trick is to have one big alternation "|", which prefers matches with a suffix.
The fields are filled by using two groups each, one from the first alternative and one from the second. Only one of each pair is non-empty.
I tested with a text file containing combinations of:
prefix present
suffix present
given present
middle present (assuming that more middles work too)
second middle present
All test cases have a family name.
"Superman II" and "Madonna" would both only have a family name, hope that is OK, the super hero movie gets a suffix.
"Dr. Who" has a prefix and a family name.
I.e. I ignored the "Di"s, as you permitted.
I consider the output plausible.
I have a string variable containing school names, and I need to find all the possible combinations of each word in this string variable in Stata:
For example, variations of the word "Academy" would be:
Academy,
acdamey,
aacdemy,
dmcaamy,
aacedmy,
and so on.
I need this to standardize the raw data of school names, which has many typos of each word due to data entry issues, like the ones given above for "academy".
Depending on whether your data is already in Excel sheets or in a file, you can either use a regex that tries to match all possible combinations (and probably fix them when found) or parse the strings before bringing them into Excel. In either case you could make a file (or an Excel list/table/area/etc.) that includes all the common typos and use each typo as a regex match when comparing against your actual input.
Making a regexp that would actually find all possible cases is next to impossible, especially if very similar (but correct) school names exist. In any case, direct regexps would be very messy and complex, so I would advise you to parse the data by first finding the correct form, excluding it, and then using a (greedy) search/regex to find the typoed versions. You can then save the typos to use them as a filter/match/pattern.
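As a minimal sketch of that "collect the typos, then use them as a filter" idea (shown in Python purely for illustration; the typo list and the school name are made up):

import re

canonical = "Academy"
known_typos = ["acdamey", "aacdemy", "aacedmy"]  # collected from earlier passes over the data

# One alternation built from the collected typos, with metacharacters escaped
typo_pattern = re.compile(r"\b(?:" + "|".join(map(re.escape, known_typos)) + r")\b", re.IGNORECASE)

def standardize(school_name):
    # Replace any known typo with the canonical spelling
    return typo_pattern.sub(canonical, school_name)

print(standardize("Lincoln Aacdemy of Arts"))  # Lincoln Academy of Arts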
To get some starting ideas, check these links:
Regex: Search for verb roots
Read text file and extract string into Excel sheet using regex
P.S. You should keep a count of all strings/school names and finally get a list of all names that did not match the correct form or any of your regexp filters, so you can manually insert/correct them.
As stated in the title:
I'm trying to find a regex match for a specific word (like "apple") that may contain a random newline (\r\n) special character.
To illustrate in more detail:
Let's find the word 'apple' in a text file, but we don't know the exact position of the newline (\r\n) in the file, like below...
ap
ple
or
appl
e
I also googled many pages but I couldn't find the answer.
Do I have to write a beginner regex like the one below?
(a\r\npple|ap\r\nple|app\r\nle|appl\r\ne|apple\r\n|)
I need a smarter regex to find the exact word.
Update:
The word can vary, like "ripe apple", "rotten apple" and "brightapple".
In the third case, the whitespace was removed by the writer.
Update:
I have many txt files, and I have to find the string within those.
So removing the \r\n first is not useful and cannot be handled (too much memory and time required).
You have to take the naive approach (the "beginner regex") if you want to use regular expressions, since they belong to the type 3 grammars and cannot express the state needed (see also "The difference between Chomsky type 3 and Chomsky type 2 grammar").
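That said, the naive alternation can at least be generated programmatically instead of being written by hand. A minimal Python sketch (the word and the test string are just examples):

import re

def split_word_pattern(word):
    # The unbroken word, plus one alternative per possible break position
    alternatives = [re.escape(word)]
    for i in range(1, len(word)):
        alternatives.append(re.escape(word[:i]) + r"\r\n" + re.escape(word[i:]))
    return "(?:" + "|".join(alternatives) + ")"

pattern = re.compile(split_word_pattern("apple"))
print(bool(pattern.search("fresh ap\r\nple pie")))  # True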
Is there an easy way to generate a human-readable inflection list from Hunspell/Aspell dictionary data files?
For example, I'd like to generate the following outputs (for different languages):
...
book, books
book, books, booked, booking
...
go, goes, went, gone, going
...
I looked at the Hunspell/Aspell docs, but couldn't find an API call that would do this.
There is a way the command-line tool does this, but it doesn't output quite the format you're looking for. You could also do it manually, though, with some simple scripting and regex.
The format for each set of affixes is
TYPE TAG REMOVE REPLACE MATCH
Where TAG matches what follows the / in a given word in the .dic file, you can do the following (presuming you've already stripped the word of the /...):
if ($word =~ /$match$/) { $word =~ s/$remove$/$replace/; }
Notice the $ there matching the end-of-line/word. Adjust with ^ if it's a prefix.
There are three caveats:
The $match taken directly from the .aff file is in almost all cases equivalent to standard regex. There are minor variations: if the match is something like [abc-gh], you'd be better off changing it to (a|b|c|-|g|h) or [abcgh-] (Hunspell doesn't use the hyphen as a metacharacter), otherwise it will be interpreted as [abcdefgh] (standard regex). For a negated character class, your options are to manually move the - to the end of the expression (e.g. [^a-df] to [^adf-]) or to use negative lookbehinds.
If $replace is 0, then you should change it to an empty string.
If your result ends with /..., you need to reprocess it again because it has a double affix.
Be careful. By my rough calculations, the dictionary I'm working on could have more than 50 million words being formed (and I wouldn't be surprised if it hits beyond 100 million).
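For what it's worth, here is a rough Python sketch of that expansion step (the rule fields are simplified and the example rule is hypothetical; real .aff files also carry counts, cross-product flags and morphology fields, which are ignored here):

import re

def apply_affix(word, kind, remove, replace, match):
    # kind is "SFX" or "PFX"; a REMOVE or REPLACE of "0" means "nothing" in .aff files
    strip = "" if remove == "0" else remove
    add = "" if replace == "0" else replace
    if kind == "SFX":
        if re.search(match + "$", word):
            return re.sub(strip + "$", add, word)
    else:  # PFX
        if re.search("^" + match, word):
            return re.sub("^" + strip, add, word)
    return None  # condition did not match

# e.g. a hypothetical suffix rule "SFX S 0 s ." applied to "book"
print(apply_affix("book", "SFX", "0", "s", "."))  # books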
I have a plain-text document and need to extract company names from it. It's a huge document including company names, financial reports and lots of text. Here are examples of company names:
Big laundry, a.s.
AVERA, s.r.o.
Airoflot Airlines, a.s.
Is it even possible to make a regex like this? I'm a complete beginner with regex and have no idea how to create this one. Thanks for any help.
Example of text:
`There are many competitors of AVERA, s.r.o. the main one is Airflot Airlines, a.s. and Big laundry, s.r.o. These organisations hold main share of market.
Another companies:
a. Big Company, a.s.
b. Smaller company, s.r.o.
c. Huge company, a.s.`
As the question currently stands, no, it is not possible to create a regex for company names.
It would be possible if you are able to create a PATTERN.
Meaning, e.g., a company name always:
starts with an uppercase letter
has a comma
after the comma there is always one of "a.s." or "s.r.o."
So, the difficulties that I see here are:
How many words before the comma belong to the name?
Is there always a comma with a following abbreviation?
Names are always difficult to match because a name can be nearly everything, especially company names.
The examples you give follow this pattern: ([A-Z][A-Za-z]+ ?)+, (\w\.)+
The matching operation will depend on the tool you use.
For example, in JavaScript:
var line = "some name is Airoflot Airlines, a.s. in this line";
var m = line.match(/([A-Z][A-Za-z]+ ?)+, (\w\.)+/);
if (m) console.log(m[0]);
This logs
"Airoflot Airlines, a.s."
But this isn't a very reliable solution: many real company names wouldn't fit and, perhaps more importantly, it would match phrases that aren't company names. So it can only be used as an aid within a solution that also incorporates some kind of validation (human or dictionary based).
I use this
(?:\s*[a-zA-Z0-9,_\.\077\0100\*\+\&\#\'\~\;\-\!\#\;]{2,}\s*)*
It matches a-z, A-Z, 0-9 and some special characters which QuickBooks supports:
https://community.intuit.com/articles/1146006-acceptable-characters-in-the-company-name-in-quickbooks-online
With your given examples, this regexp would match:
Big laundry, a\.s\.|AVERA, s\.r\.o\.|Airoflot Airlines, a\.s\.
The trick is to use the alternation operator | on a set of strings.
You may wish to allow for missing punctuation and whitespace in the company names, too.
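If the known names are kept in a list, the alternation can be generated rather than typed out. A small Python sketch, using the example names from the question:

import re

companies = ["Big laundry, a.s.", "AVERA, s.r.o.", "Airoflot Airlines, a.s."]

# Escape the dots (and any other metacharacters) before joining with |
pattern = re.compile("|".join(re.escape(name) for name in companies))

text = "There are many competitors of AVERA, s.r.o. the main one is Airoflot Airlines, a.s."
print(pattern.findall(text))  # ['AVERA, s.r.o.', 'Airoflot Airlines, a.s.']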
I recently started working with ontologies and I am using Protege to build an ontology which I'd also like to use for automatically classifying strings. The following illustrates a very basic class hierarchy:
String
|_ AlphabeticString
|_ CountryName
|_ CityName
|_ AlphaNumericString
|_ PrefixedNumericString
|_ NumericString
Eventually strings like Spain should be classified as CountryName or UE4564 would be a PrefixedNumericString.
However, I am not sure how to model this knowledge. Would I have to first define whether a character is alphabetic, numeric, etc., and then construct a word from the existing characters, or is there a way to use regexes? So far I have only managed to classify strings based on an exact phrase, like String and hasString value "UE4565".
Or would it be better to save a regex for each class in the ontology and then classify the string in Java using those regexes?
An approach that might be appropriate here, especially if the ontology is large/complicated or might change in the future, and assuming that some errors are acceptable, is machine learning.
An outline of a process utilizing this approach might be:
Define a feature set you can extract from each string, relating to your ontology (some examples below).
Collect a "train set" of strings and their true matching categories.
Extract features from each string, and train some machine-learning algorithm on this data.
Use the trained model to classify new strings.
Retrain or update your model as needed (e.g. when new categories are added).
To illustrate more concretely, here are some suggestions based on your ontology example.
Some boolean features that might be applicable: does the string match a regexp (e.g. the ones Qtax suggests); does the string exist in a prebuilt list of known city names; does it exist in a list of known country names; existence of uppercase letters; string length (not boolean); etc.
So if, for instance, you have a total of 8 features (matches against the 4 regular expressions mentioned above, plus the additional 4 suggested here), then "Spain" would be represented as (1,1,0,0,1,0,1,5) (matching the first 2 regular expressions but not the last two, is a city name but not a country name, has an uppercase letter and length is 5).
This set of features will represent any given string.
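As a rough illustration, a feature-extraction sketch in Python (the regexes mirror the ones suggested in another answer on this page; the city and country lists are hypothetical placeholders):

import re

ALPHABETIC = re.compile(r"^[A-Za-z]+$")
ALPHANUMERIC = re.compile(r"^[A-Za-z0-9]+$")
PREFIXED_NUMERIC = re.compile(r"^[A-Za-z]+[0-9]+$")
NUMERIC = re.compile(r"^[0-9]+$")
CITY_NAMES = {"madrid", "melbourne"}       # hypothetical prebuilt lists
COUNTRY_NAMES = {"spain", "australia"}

def features(s):
    return (
        int(bool(ALPHABETIC.match(s))),
        int(bool(ALPHANUMERIC.match(s))),
        int(bool(PREFIXED_NUMERIC.match(s))),
        int(bool(NUMERIC.match(s))),
        int(s.lower() in CITY_NAMES),
        int(s.lower() in COUNTRY_NAMES),
        int(any(c.isupper() for c in s)),
        len(s),
    )

print(features("Madrid"))  # (1, 1, 0, 0, 1, 0, 1, 6) with these placeholder lists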
To train and test a machine-learning algorithm, you can use WEKA. I would start with rule- or tree-based algorithms, e.g. PART, Ridor, JRip or J48.
The trained models can then be used via Weka, either from within Java or from the external command line.
Obviously, the features I suggest have an almost 1:1 match with your ontology, but assuming your taxonomy is larger and more complex, this approach would probably be one of the best in terms of cost-effectiveness.
I don't know anything about Protege, but you can use regex to match most of those cases. The only problem would be differentiating between country and city names; I don't see how you could do that without a complete list of either one.
Here are some expressions that you could use:
AlphabeticString:
^[A-Za-z]+\z (ASCII) or ^\p{Alpha}+\z (Unicode)
AlphaNumericString:
^[A-Za-z0-9]+\z (ASCII) or ^\p{Alnum}+\z (Unicode)
PrefixedNumericString:
^[A-Za-z]+[0-9]+\z (ASCII) or ^\p{Alpha}+\p{N}+\z (Unicode)
NumericString:
^[0-9]+\z (ASCII) or ^\p{N}+\z (Unicode)
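A minimal classification sketch in Python using these patterns (note that Python's re module uses \Z rather than \z for end-of-string; the most specific class is tried first):

import re

CLASSES = [
    ("PrefixedNumericString", r"^[A-Za-z]+[0-9]+\Z"),
    ("NumericString", r"^[0-9]+\Z"),
    ("AlphabeticString", r"^[A-Za-z]+\Z"),
    ("AlphaNumericString", r"^[A-Za-z0-9]+\Z"),
]

def classify(s):
    # Return the first (most specific) matching class, or None
    for name, pattern in CLASSES:
        if re.match(pattern, s):
            return name
    return None

print(classify("UE4564"))  # PrefixedNumericString
print(classify("Spain"))   # AlphabeticString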
A particular string is an instance, so you'll need some code to make the basic assertions about the particular instance. That code itself might contain the use of regular expressions. Once you've got those assertions, you'll be able to use your ontology to reason about them.
The hard part is that you've got to decide what level you're going to model at. For example, are you going to talk about individual characters? You can, but it's not necessarily sensible. You've also got the challenge that arises from the fact that negative information is awkward (as the underlying logic of such models is intuitionistic, IIRC), which means (for example) that you'll know that a string contains a numeric character but not that it is purely numeric. Yes, you'd know that you don't have an assertion that the instance contains an alphabetic character, but you wouldn't know whether that's because the string doesn't have one or just because nobody's said so yet. This stuff is hard!
It's far easier to write an ontology if you know exactly what problems you intend to solve with it, as that allows you to at least have a go at working out what facts and relations you need to establish in the first place. After all, there's a whole world of possible things that could be said which are true but irrelevant (“if the sun has got his hat on, he'll be coming out to play”).
Responding directly to your question, you start by checking whether a given token is numeric, alphanumeric or alphabetic (you can use regex here) and then you classify it as such. In general, the approach you're looking for is called generalization hierarchy of tokens or hierarchical feature selection (Google it). The basic idea is that you could treat each token as a separate element, but that's not the best approach since you can't cover them all [*]. Instead, you use common features among tokens (for example, 2000 and 1981 are distinct tokens but they share a common feature of being 4 digit numbers and possibly years). Then you have a class for four digit numbers, another for alphanumeric, and so on. This process of generalization helps you to simplify your classification approach.
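A tiny sketch of that generalization step (Python; the class labels are only illustrative):

import re

def generalize(token):
    # Map a raw token to a coarser feature class, as described above
    if re.fullmatch(r"[0-9]{4}", token):
        return "FOUR_DIGIT_NUMBER"   # e.g. 2000, 1981 (possibly years)
    if re.fullmatch(r"[0-9]+", token):
        return "NUMBER"
    if re.fullmatch(r"[A-Za-z]+[0-9]+", token):
        return "PREFIXED_NUMERIC"
    return "WORD"

print([generalize(t) for t in ["2000", "1981", "UE4564", "Spain"]])
# ['FOUR_DIGIT_NUMBER', 'FOUR_DIGIT_NUMBER', 'PREFIXED_NUMERIC', 'WORD']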
Frequently, if you start with a string of tokens, you need to preprocess them (for example, remove punctuation or special symbols, remove words that are not relevant, apply stemming, etc.). But maybe you can use some symbols (say, the punctuation between cities and countries, e.g. "Melbourne, Australia"), so you map that set of useful punctuation symbols to another symbol (#) and use it as context (so the next time you find an unknown word next to a comma next to a known country, you can use that knowledge to assume that the unknown word is a city).
Anyway, that's the general idea behind classification using an ontology (based on a taxonomy of terms). You may also want to read about part-of-speech tagging.
By the way, if you only want to have 3 categories (numeric, alphanumeric, alphabetic), a viable option would be to use edit distance (what is more likely: that UA4E30 belongs to the alphanumeric or to the numeric category, considering that it doesn't correspond to the traditional format of prefixed numeric strings?). So, you assume a cost for each operation (insertion, deletion, substitution) that transforms your unknown token into a known one.
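For reference, a compact Python sketch of edit distance with unit costs for the three operations mentioned:

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Compare an unknown token against known ones and pick the closest category
print(edit_distance("UA4E30", "UE4564"))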
Finally, although you said you're using Protege (which I haven't used) to build your ontology, you may want to look at WordNet.
[*] There are probabilistic approaches that help you to determine a probability for an unknown token, so the probability of such event is not zero. Usually, this is done in the context of Hidden Markov Models. Actually, this could be useful to improve the suggestion given by etov.