extract comma separated strings - regex

I have a data frame as below. This is a sample set of data with uniform-looking patterns, but the whole data set is not very uniform:
locationid address
1073744023 525 East 68th Street, New York, NY 10065, USA
1073744022 270 Park Avenue, New York, NY 10017, USA
1073744025 Rockefeller Center, 50 Rockefeller Plaza, New York, NY 10020, USA
1073744024 1251 Avenue of the Americas, New York, NY 10020, USA
1073744021 1301 Avenue of the Americas, New York, NY 10019, USA
1073744026 44 West 45th Street, New York, NY 10036, USA
I need to find the city and country name from this address. I tried the following:
1) strsplit
This gives me a list, but I cannot access the last or third-from-last element from it.
2) Regular expressions
finding country is easy
str_sub(str_extract(address, "\\d{5},\\s.*"),8,11)
but for city
str_sub(str_extract(address, ",\\s.+,\\s.+\\d{5}"),3,comma_pos)
I cannot find comma_pos as it leads me to the same problem again.
I believe there is a more efficient way to solve this using either of the above approaches.

Try this code:
library(gsubfn)
cn <- c("Id", "Address", "City", "State", "Zip", "Country")
pat <- "(\\d+) (.+), (.+), (..) (\\d+), (.+)"
read.pattern(text = Lines, pattern = pat, col.names = cn, as.is = TRUE)
giving the following data frame, from which it's easy to pick off components:
Id Address City State Zip Country
1 1073744023 525 East 68th Street New York NY 10065 USA
2 1073744022 270 Park Avenue New York NY 10017 USA
3 1073744025 Rockefeller Center, 50 Rockefeller Plaza New York NY 10020 USA
4 1073744024 1251 Avenue of the Americas New York NY 10020 USA
5 1073744021 1301 Avenue of the Americas New York NY 10019 USA
6 1073744026 44 West 45th Street New York NY 10036 USA
Explanation: It uses this pattern (when within quotes the backslashes must be doubled):
(\d+) (.+), (.+), (..) (\d+), (.+)
visualized via a Debuggex railroad diagram (for more, see the Debuggex Demo), and explained in words as follows:
"(\\d+)" - one or more digits (representing the Id) followed by
" " a space followed by
"(.+)" - any non-empty string (representing the Address) followed by
", " - a comma and a space followed by
"(.+)" - any non-empty string (representing the City) followed by
", " - a comma and a space followed by
"(..)" - two characters (representing the State) followed by
" " - a space followed by
"(\\d+)" - one or more digits (representing the Zip) followed by
", " - a comma and a space followed by
"(.+)" - any non-empty string (representing the Country)
It works because regular expressions are greedy: each quantifier first tries to match the longest string it can, backtracking whenever a subsequent portion of the regular expression fails to match.
The advantage of this approach is that the regular expression is simple and straightforward, and the entire code is concise, as one read.pattern statement does it all:
Note: We used this for Lines:
Lines <- "1073744023 525 East 68th Street, New York, NY 10065, USA
1073744022 270 Park Avenue, New York, NY 10017, USA
1073744025 Rockefeller Center, 50 Rockefeller Plaza, New York, NY 10020, USA
1073744024 1251 Avenue of the Americas, New York, NY 10020, USA
1073744021 1301 Avenue of the Americas, New York, NY 10019, USA
1073744026 44 West 45th Street, New York, NY 10036, USA"
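For comparison, the same greedy pattern can be applied in Python's re module (a sketch, not part of the original read.pattern approach):

```python
import re

# The read.pattern regex above, rewritten with named groups for readability.
pat = r"(?P<Id>\d+) (?P<Address>.+), (?P<City>.+), (?P<State>..) (?P<Zip>\d+), (?P<Country>.+)"
line = "1073744025 Rockefeller Center, 50 Rockefeller Plaza, New York, NY 10020, USA"
m = re.match(pat, line)
# Greedy (.+) backtracks until the tail of the pattern can match,
# so the inner comma stays inside the Address group.
print(m.group("Address"))  # Rockefeller Center, 50 Rockefeller Plaza
print(m.group("City"), m.group("Country"))
```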

Split the data
ss <- strsplit(data, ",")
Then
n <- sapply(ss, length)
will give the number of elements (so you can work backward). Then
mapply("[[", ss, n)
gives you the last element. Or you could do
sapply(ss,tail,1)
to get the last element.
To get the second-to-last (or more generally) you need
sapply(ss,function(x) tail(x,2)[1])
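The same backward indexing is a one-liner in Python, for readers outside R (a sketch):

```python
# Working backward from the end of the split list avoids counting fields.
addr = "Rockefeller Center, 50 Rockefeller Plaza, New York, NY 10020, USA"
parts = [p.strip() for p in addr.split(",")]
country = parts[-1]   # last element
city = parts[-3]      # third-from-last element
print(city, country)
```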

Here's an approach using the tidyr package. Personally, I'd just split the whole thing into all the various elements using tidyr's extract. This uses regex, but in a different way than you asked for.
library(tidyr)
extract(x, address, c("address", "city", "state", "zip", "country"),
"([^,]+),\\s([^,]+),\\s+([A-Z]+)\\s+(\\d+),\\s+([A-Z]+)")
## locationid address city state zip country
## 1 1073744023 525 East 68th Street New York NY 10065 USA
## 2 1073744022 270 Park Avenue New York NY 10017 USA
## 3 1073744025 50 Rockefeller Plaza New York NY 10020 USA
## 4 1073744024 1251 Avenue of the Americas New York NY 10020 USA
## 5 1073744021 1301 Avenue of the Americas New York NY 10019 USA
## 6 1073744026 44 West 45th Street New York NY 10036 USA
Here's a visual explanation of the regular expression, taken from http://www.regexper.com/.

I think you want something like this.
> x <- "1073744026 44 West 45th Street, New York, NY 10036, USA"
> regmatches(x, gregexpr('^[^,]+, *\\K[^,]+', x, perl=T))[[1]]
[1] "New York"
> regmatches(x, gregexpr('^[^,]+, *[^,]+, *[^,]+, *\\K[^\n,]+', x, perl=T))[[1]]
[1] "USA"
Regex explanation:
^ Asserts that we are at the start.
[^,]+ Matches any character but not of , one or more times. Change it to [^,]* if your dataframe contains empty fields.
, Matches a literal ,
<space>* Matches zero or more spaces.
\K discards previously matched characters from printing. The characters matched by the pattern following \K will be shown as output.
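Note that Python's built-in re module does not support \K (the third-party regex module does); an ordinary capture group achieves the same effect (a sketch):

```python
import re

x = "1073744026 44 West 45th Street, New York, NY 10036, USA"
# A capture group plays the role of \K: the prefix is matched but not kept.
city = re.match(r"^[^,]+,\s*([^,]+)", x).group(1)
country = re.match(r"^[^,]+,\s*[^,]+,\s*[^,]+,\s*([^\n,]+)", x).group(1)
print(city, country)
```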

How about this pattern:
,\s(?<city>[^,]+?),\s(?<shortCity>[^,]+?)(?i:\d{5},)(?<country>\s.*)
This pattern will match these three groups:
"group": "city", "value": "New York"
"group": "shortCity", "value": "NY "
"group": "country", "value": " USA"
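The pattern above uses .NET-style (?<name>) groups. In Python the same idea reads (?P<name>...), and the (?i:) wrapper can be dropped since \d{5} is unaffected by case (a sketch):

```python
import re

# Lazy quantifiers keep each group to the minimum needed before the
# next literal comma or digit run can match.
pat = re.compile(r",\s(?P<city>[^,]+?),\s(?P<shortCity>[^,]+?)\d{5},(?P<country>\s.*)")
m = pat.search("525 East 68th Street, New York, NY 10065, USA")
print([m.group("city"), m.group("shortCity"), m.group("country")])
```

Note the trailing space in "NY " and leading space in " USA", just as in the groups listed above.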

Using rex to construct the regular expression may make this type of task a little simpler.
x <- data.frame(
  locationid = c(
    1073744023,
    1073744022,
    1073744025,
    1073744024,
    1073744021,
    1073744026
  ),
  address = c(
    '525 East 68th Street, New York, NY 10065, USA',
    '270 Park Avenue, New York, NY 10017, USA',
    'Rockefeller Center, 50 Rockefeller Plaza, New York, NY 10020, USA',
    '1251 Avenue of the Americas, New York, NY 10020, USA',
    '1301 Avenue of the Americas, New York, NY 10019, USA',
    '44 West 45th Street, New York, NY 10036, USA'
  ))
library(rex)
sep <- rex(",", spaces)
re <-
  rex(
    capture(name = "address",
      except_some_of(",")
    ),
    sep,
    capture(name = "city",
      except_some_of(",")
    ),
    sep,
    capture(name = "state",
      uppers
    ),
    spaces,
    capture(name = "zip",
      some_of(digit, "-")
    ),
    sep,
    capture(name = "country",
      something
    ))
re_matches(x$address, re)
#> address city state zip country
#>1 525 East 68th Street New York NY 10065 USA
#>2 270 Park Avenue New York NY 10017 USA
#>3 50 Rockefeller Plaza New York NY 10020 USA
#>4 1251 Avenue of the Americas New York NY 10020 USA
#>5 1301 Avenue of the Americas New York NY 10019 USA
#>6 44 West 45th Street New York NY 10036 USA
This regular expression will also handle 9 digit zip codes (12345-1234) and countries other than USA.
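For readers not using rex, roughly the same pattern can be written out by hand in Python, including the 9-digit ZIP handling (an approximation, not the exact regex rex generates):

```python
import re

# Hand-written equivalent of the rex pattern above: each field is
# "anything but a comma", the state is uppercase letters, and the
# zip allows digits and a hyphen (so 12345-1234 works too).
pat = re.compile(
    r"(?P<address>[^,]+),\s+"
    r"(?P<city>[^,]+),\s+"
    r"(?P<state>[A-Z]+)\s+"
    r"(?P<zip>[\d-]+),\s+"
    r"(?P<country>.+)"
)
m = pat.search("1251 Avenue of the Americas, New York, NY 10020-1104, USA")
print(m.groupdict())
```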


Regular expression working in Pythex.com but not in pandas

I'm having trouble applying a regex function to a column in a Python dataframe. It works fine in the Pythex online editor.
Here is the head of my dataframe:
ID Text
1  UMM SURE THE ADDRESS IS IN 25088 KITTAN DRIVE NORTH CAROLINA 28605
2  IT IS ON 26 W STREET 7TH HIGHWAY ORLANDO FLORIDA 28262
3  COOL 757979 EAST TYRON BLVD NEW YORK NEW YORK 29875
I've tried the following code to create another column which gives us just the address, but the new column is showing up as empty.
df['Address']=df['Text'].str.findall('[0-9]{2,6}(?:\s+\S+){3,8}\s{1,}\b(?:FLORIDA|NORTH CAROLINA|NEW YORK)\b')
The desired output should look like:
ID Text                                                                Address
1  UMM SURE THE ADDRESS IS IN 25088 KITTAN DRIVE NORTH CAROLINA 28605  25088 KITTAN DRIVE NORTH CAROLINA
2  IT IS ON 26 W STREET 7TH HIGHWAY ORLANDO FLORIDA 28262              26 W STREET 7TH HIGHWAY ORLANDO FLORIDA
3  COOL 757979 EAST TYRON BLVD NEW YORK NEW YORK 29875                 757979 EAST TYRON BLVD NEW YORK NEW YORK
Thanks in advance.
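One thing worth checking before reworking the pattern: it is written as a plain (non-raw) string, so Python parses \b as a backspace character rather than a word boundary, which alone can make findall return nothing. A minimal demonstration:

```python
import re

s = "IT IS ON 26 W STREET 7TH HIGHWAY ORLANDO FLORIDA 28262"
plain = '\b(?:FLORIDA|NORTH CAROLINA|NEW YORK)\b'   # '\b' here is a backspace character
raw   = r'\b(?:FLORIDA|NORTH CAROLINA|NEW YORK)\b'  # word boundary, as intended
print(re.findall(plain, s))  # []
print(re.findall(raw, s))    # ['FLORIDA']
```

Prefixing the pattern with r is a good habit for any regex in Python.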
If your text data follow this pattern, you can try the following code:
df['Address']=df['Text'].str.findall(r'[0-9]{2,6}(.*?)(?:\d+$)')
You could use a pattern to extract the values that you want from column Text:
\b([0-9]{2,6}\b.*?(?:FLORIDA|NORTH CAROLINA|NEW YORK)) \d
The pattern matches:
\b A word boundary to prevent a partial word match
( Capture group 1
[0-9]{2,6}\b Match 2-6 digits followed by a word boundary
.*?(?:FLORIDA|NORTH CAROLINA|NEW YORK) Match as few chars as possible until one of the alternatives can match
) \d Close group 1, and match a space and a digit
See a regex demo.
For example
import pandas as pd
items = [
[1, "UMM SURE THE ADDRESS IS IN 25088 KITTAN DRIVE NORTH CAROLINA 28605"],
[2, "IT IS ON 26 W STREET 7TH HIGHWAY ORLANDO FLORIDA 28262"],
[3, "COOL 757979 EAST TYRON BLVD NEW YORK NEW YORK 29875"]
]
df = pd.DataFrame(items, columns=["ID", "Text"])
df["Address"] = df["Text"].str.extract(
r'\b([0-9]{2,6}\b.*?(?:FLORIDA|NORTH CAROLINA|NEW YORK)) \d'
)
print(df)
Output
ID Text Address
0 1 UMM SURE THE ADDRESS IS IN 25088 KITTAN DRIVE ... 25088 KITTAN DRIVE NORTH CAROLINA
1 2 IT IS ON 26 W STREET 7TH HIGHWAY ORLANDO FLORI... 26 W STREET 7TH HIGHWAY ORLANDO FLORIDA
2 3 COOL 757979 EAST TYRON BLVD NEW YORK NEW YORK ... 757979 EAST TYRON BLVD NEW YORK NEW YORK

Pandas - Strip col1 values from column2 values if exists match (regex match with dynamic value)

I have a task to update values in column_1 IF it has a full match to value from column_2.
Like so
name city
Danny London London
Tim Detroit Detroit
Keith New Orleans The city of New Orleans
Mary Jane London
=>
name city
Danny London <- updated
Tim Detroit <- updated
Keith New Orleans The city of New Orleans
Mary Jane London
So far I've tried this
condlist = [df.apply(lambda x: x.name_cleaned.endswith(f"{x.city}"), axis=1)]
choicelist = [df.name_cleaned.str.replace(rf'{df.city}$', '', regex=True)]
df['name_cleaned'] = np.select(condlist, choicelist, default=df.name_cleaned)
But it returns the same df. I've checked and condlist works as expected - returns True/False for values, the problem is in choicelist - not sure how to pass regex with dynamic value. Would really appreciate any help.
Instead of testing with endswith, you can anchor the pattern with $ (end of string) and add \s+ to match the preceding spaces, then replace the match with an empty string in re.sub:
import re
df['name'] = df.apply(lambda x: re.sub(rf"\s+{x.city}$",'',x['name']), axis=1)
print (df)
name city
0 Danny London
1 Tim Detroit
2 Keith New Orleans The city of New Orleans
3 Mary Jane London
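If a city value could contain regex metacharacters, re.escape makes the same idea safer (a sketch without pandas; the data mirrors the question):

```python
import re

rows = [
    ("Danny London", "London"),
    ("Tim Detroit", "Detroit"),
    ("Keith New Orleans", "The city of New Orleans"),
    ("Mary Jane", "London"),
]
# re.escape guards against cities containing characters like '.', '(' or '+'.
cleaned = [re.sub(rf"\s+{re.escape(city)}$", "", name) for name, city in rows]
print(cleaned)  # ['Danny', 'Tim', 'Keith New Orleans', 'Mary Jane']
```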

Regex optional everything separated by space or comma (city, state)

I am trying to get the street, city, state and zip from a non-well-formed list of addresses, everything but the "street" is optional sequentially. (I can have street, street+city, street+city+state, street+city+state+zip). Separators are either a comma + space, or space only.
So far, I have
^(?<STREET>.*?)(?<SEPARATOR1>(?: *-{1,2} *)|(?:, ?))(?<CITY>[a-z-' ]*)?((?<SEPARATOR2>(?: )|(?:, ))(?<STATE>AL|AK|AS|AZ|AR|CA|CO|CT|DE|DC|FM|FL|GA|GU|HI|ID|IL|IN|IA|KS|KY|LA|ME|MH|MD|MA|MI|MN|MS|MO|MT|NE|NV|NH|NJ|NM|NY|NC|ND|MP|OH|OK|OR|PW|PA|PR|RI|SC|SD|TN|TX|UT|VT|VI|VA|WA|WV|WI|WY))?((?<SEPARATOR3>(?: )|(?:, ))(?<ZIP>[0-9]{5}(-[0-9]{4})?))?
I am having trouble to get a capture after the CITY capture if it's only separated by a space.
Test data:
123 Ave Ave - Hoquiam WA 98103
123 Ave Ave - Hoquiam, WA 98103
123 Ave Ave - Hoquiam, WA 98103-1345
123 Ave Ave - Hoquiam
123 Ave Ave - Ocean Shores WA
123 Ave Ave - Ocean Shores, WA
123 Ave Ave - D'ile, WA
123 Ave Ave
What am I doing wrong?
https://regex101.com/r/v476Gx/1
With some tweaking, the following updated regex should work for you:
^(?<STREET>.*?)(?:(?<SEPARATOR1>(?: *-{1,2} *)|(?:, ?))(?<CITY>[a-z-' ]*?)?((?<SEPARATOR2>(?: )|(?:, ))(?<STATE>AL|AK|AS|AZ|AR|CA|CO|CT|DE|DC|FM|FL|GA|GU|HI|ID|IL|IN|IA|KS|KY|LA|ME|MH|MD|MA|MI|MN|MS|MO|MT|NE|NV|NH|NJ|NM|NY|NC|ND|MP|OH|OK|OR|PW|PA|PR|RI|SC|SD|TN|TX|UT|VT|VI|VA|WA|WV|WI|WY))?((?<SEPARATOR3>(?: )|(?:, ))(?<ZIP>[0-9]{5}(?:-[0-9]{4})?))?)?$
Updated RegEx Demo
While you have your answer, this is probably more readable/maintainable:
^
(?P<street>[^-\n]+)
(?:-\h*)?
(?P<town>(?:(?!\b[A-Z]{2}\b).)*)
(?P<state>\b[A-Z]{2}\b)?\h*
(?P<zip>[-\d]*)
$
See a demo on regex101.com. It just needs a bit of cleaning on the town part.
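Translating this readable pattern to Python requires one adjustment: \h (horizontal whitespace) is PCRE-specific, so [ \t] takes its place (a sketch):

```python
import re

# PCRE's \h (horizontal whitespace) becomes [ \t] in Python's re.
pat = re.compile(r"""
    ^
    (?P<street>[^-\n]+)
    (?:-[ \t]*)?
    (?P<town>(?:(?!\b[A-Z]{2}\b).)*)
    (?P<state>\b[A-Z]{2}\b)?[ \t]*
    (?P<zip>[-\d]*)
    $
""", re.VERBOSE)

m = pat.match("123 Ave Ave - Hoquiam, WA 98103-1345")
print(m.group("town"), m.group("state"), m.group("zip"))
```

As the answer notes, the town group still needs trimming of trailing ", ".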

regex to match last 3 strings

I am new to using regular expressions so please pardon me. I need to match only the town, region name and country name using regex. Below is the sample from the dataset I have
1 Cliff Street ; Fremantle, Western Australia ; AUSTRALIA
10 Montpelier Square, London SW7 1JU ;,; UNITED KINGDOM
125 Hay Street ; East Perth, Western Australia ; AUSTRALIA
1395 Brickell Ave 3404, Miami, FL 33131 ;,; USA
14 Save Ljuboje ; Banja Luka,; BOSNIA AND HERZEGOVINA
15 Grosvenor Street ; Beaconsfield, Western Australia ; AUSTRALIA
151 Royal Street, 2nd Floor ; East Perth, Western Australia ; AUSTRALIA
168-170 St Georges Terrace ; Perth, Western Australia ; AUSTRALIA
184 Bennet Street ; East Perth, Western Australia ; AUSTRALIA
189 Royal Street ; East Perth, Western Australia ; AUSTRALIA
197 St Georges Terrace ; Perth, Western Australia ; AUSTRALIA
Example: for 1 Cliff Street ; Fremantle, Western Australia ; AUSTRALIA I would want only Fremantle, Western Australia ; AUSTRALIA, without the address that tags along. This is just a sample of my dataset, and I want only the last 3 strings in each row. It would be great if anyone could help me.
You could use capturing groups for this...
(.*);(.*);(.*)
That regex splits the string into 3 groups. How you access the groups from the match object depends on your language's regex library.
As #sin suggested, a better approach would probably be just splitting the string on ; character. Just google for "String Splitting" to see how it is done in your language. Using regexes overcomplicates this problem.
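In Python, for instance, the split-and-slice version is a one-liner (a sketch):

```python
# The last two ';'-separated fields are the town/region pair and the country.
line = "1 Cliff Street ; Fremantle, Western Australia ; AUSTRALIA"
town_region, country = [p.strip() for p in line.split(";")[-2:]]
print(town_region, "|", country)
```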
If you want to match them use this regex:
[1-9a-zA-Z\s,]+;[1-9a-zA-Z\s]+$
Demo: https://regex101.com/r/cF1gW4/1
EDIT
If you want to leave them and remove first part of the address, using SublimeText replace this:
^[1-9a-zA-Z\s,]+;\s?
by nothing
Demo: https://regex101.com/r/cF1gW4/3

Replace Value & Shift Data Frame If Certain Condition Met

I've scraped data from a source online to create a data frame (df1) with n rows of information pertaining to individuals. It comes in as a single string, and I split the words apart into appropriate columns.
90% of the information is correctly formatted to the proper number of columns in a data frame (6) - however, once in a while there is a row of data with an extra word that is located in the spot of the 4th word from the start of the string. Those lines now have 7 columns and are off-set from everything else in the data frame.
Here is an example:
Num Last-Name First-Name Cat. DOB Location
11 Jackson, Adam L 1982-06-15 USA
2 Pearl, Sam R 1986-11-04 UK
5 Livingston, Steph LL 1983-12-12 USA
7 Thornton, Mark LR 1982-03-26 USA
10 Silver, John RED LL 1983-09-14 USA
df1 = c(" 11 Jackson, Adam L 1982-06-15 USA",
"2 Pearl, Sam R 1986-11-04 UK",
"5 Livingston, Steph LL 1983-12-12 USA",
"7 Thornton, Mark LR 1982-03-26 USA",
"10 Silver, John RED LL 1983-09-14 USA")
You can see item #10 has an extra input added, the color "RED" is inserted into the middle of the string.
I started to write code that used stringr to evaluate how many characters were present in the 4th word; if it was 3 or greater (every value in the Cat. column is 1-2 characters), I created a new column at the end of the data frame and assigned the value to it, with NA when there was no such value (i.e. it evaluated to FALSE). I'm sure I could create a massive nested ifelse statement in a dplyr mutate (my personal comfort zone), but I figure there must be a more efficient way to achieve my desired result:
Num Last-Name First-Name Cat. DOB Location Color
11 Jackson, Adam L 1982-06-15 USA NA
2 Pearl, Sam R 1986-11-04 UK NA
5 Livingston, Steph LL 1983-12-12 USA NA
7 Thornton, Mark LR 1982-03-26 USA NA
10 Silver, John LL 1983-09-14 USA RED
I want to find the instances where the 4th word from the start of the string is 3 characters or longer, assign that word or value to a new column at the end of the data frame, and shift the corresponding values in the row to the left to properly align with the others rows of data.
Here's a simpler way:
input <- gsub("(.*, \\w+) ((?:\\w){3,})(.*)", "\\1 \\3 \\2", input, TRUE)
input <- gsub("([0-9]\\s\\w+)\\n", "\\1 NA\n", input, TRUE)
The first gsub moves colors to the end of the string. The second gsub makes use of the fact that unchanged lines will now end with a date and country code (not a country code and a color), and simply adds an "NA" to them.
IDEone demo
We could use gsub to remove the extra substrings
v1 <- gsub("([^,]+),(\\s+[[:alpha:]]+)\\s*\\S*(\\s+[[:alpha:]]+\\s+\\d{4}-\\d{2}-\\d{2}.*)",
"\\1\\2\\3", trimws(df1))
d1 <- read.table(text=v1, sep="", header=FALSE, stringsAsFactors=FALSE,
col.names = c("Num", "LastName", "FirstName", "Cat", "DOB", "Location"))
d1$Color <- trimws(gsub("^[^,]+,\\s+[[:alpha:]]+|[[:alpha:]]+\\s+\\d{4}-\\d{2}-\\d{2}\\s+\\S+$",
"", trimws(df1)))
d1
# Num LastName FirstName Cat DOB Location Color
#1 11 Jackson Adam L 1982-06-15 USA
#2 2 Pearl Sam R 1986-11-04 UK
#3 5 Livingston Steph LL 1983-12-12 USA
#4 7 Thornton Mark LR 1982-03-26 USA
#5 10 Silver John LL 1983-09-14 USA RED
Using strsplit instead of regex:
# split strings in df1 on commas and spaces not preceded by the start of the line
s <- strsplit(df1, '(?<!^)[, ]+', perl = T)
# iterate over s, transpose the result and make it a data.frame
df2 <- data.frame(t(sapply(s, function(x){
# if number of items in row is 6, insert NA, else rearrange
if (length(x) == 6) {c(x, NA)} else {x[c(1:3, 5:7, 4)]}
})))
# add names
names(df2) <- c("Num", "Last-Name", "First-Name", "Cat.", "DOB", "Location", "Color")
df2
# Num Last-Name First-Name Cat. DOB Location Color
# 1 11 Jackson Adam L 1982-06-15 USA <NA>
# 2 2 Pearl Sam R 1986-11-04 UK <NA>
# 3 5 Livingston Steph LL 1983-12-12 USA <NA>
# 4 7 Thornton Mark LR 1982-03-26 USA <NA>
# 5 10 Silver John LL 1983-09-14 USA RED
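The same split-and-shift logic, sketched in Python for comparison (assuming the fields are space-separated as in df1):

```python
rows = [" 11 Jackson, Adam L 1982-06-15 USA",
        "2 Pearl, Sam R 1986-11-04 UK",
        "10 Silver, John RED LL 1983-09-14 USA"]
records = []
for row in rows:
    parts = row.replace(",", "").split()
    if len(parts) == 6:
        parts.append(None)                           # no extra word: Color is NA
    else:
        parts = parts[:3] + parts[4:] + [parts[3]]   # move the extra 4th word to the end
    records.append(parts)
print(records)
```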