Replace Value & Shift Data Frame If Certain Condition Met - regex

I've scraped data from a source online to create a data frame (df1) with n rows of information pertaining to individuals. It comes in as a single string, and I split the words apart into appropriate columns.
90% of the information is correctly formatted to the proper number of columns in a data frame (6) - however, once in a while there is a row with an extra word located in the spot of the 4th word from the start of the string. Those lines then have 7 columns and are offset from everything else in the data frame.
Here is an example:
Num Last-Name First-Name Cat. DOB Location
11 Jackson, Adam L 1982-06-15 USA
2 Pearl, Sam R 1986-11-04 UK
5 Livingston, Steph LL 1983-12-12 USA
7 Thornton, Mark LR 1982-03-26 USA
10 Silver, John RED LL 1983-09-14 USA
df1 = c(" 11 Jackson, Adam L 1982-06-15 USA",
"2 Pearl, Sam R 1986-11-04 UK",
"5 Livingston, Steph LL 1983-12-12 USA",
"7 Thornton, Mark LR 1982-03-26 USA",
"10 Silver, John RED LL 1983-09-14 USA")
You can see item #10 has an extra input added, the color "RED" is inserted into the middle of the string.
I started writing code that used stringr to evaluate how many characters were present in the 4th word; if it was 3 or greater (every value that will be in the Cat. column is 1-2 characters), I created a new column at the end of the data frame and assigned the value to it, and if there was no such value (i.e. it evaluates to FALSE), input NA. I'm sure I could build a massive nested ifelse statement in a dplyr mutate (my personal comfort zone), but I figure there must be a more efficient way to achieve my desired result:
Num Last-Name First-Name Cat. DOB Location Color
11 Jackson, Adam L 1982-06-15 USA NA
2 Pearl, Sam R 1986-11-04 UK NA
5 Livingston, Steph LL 1983-12-12 USA NA
7 Thornton, Mark LR 1982-03-26 USA NA
10 Silver, John LL 1983-09-14 USA RED
I want to find the instances where the 4th word from the start of the string is 3 characters or longer, assign that word to a new column at the end of the data frame, and shift the corresponding values in the row to the left to properly align with the other rows of data.
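For reference, the length check I describe looks roughly like this (a sketch using base strsplit, assuming single spaces between words after trimming):

```r
df1 <- c(" 11 Jackson, Adam L 1982-06-15 USA",
         "2 Pearl, Sam R 1986-11-04 UK",
         "5 Livingston, Steph LL 1983-12-12 USA",
         "7 Thornton, Mark LR 1982-03-26 USA",
         "10 Silver, John RED LL 1983-09-14 USA")

# 4th space-delimited word of each row
w4 <- sapply(strsplit(trimws(df1), " "), `[`, 4)
# a 4th word of 3+ characters marks the extra value
color <- ifelse(nchar(w4) >= 3, w4, NA)
color
# [1] NA    NA    NA    NA    "RED"
```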

here's a simpler way:
input <- gsub("(.*, \\w+) ((?:\\w){3,})(.*)", "\\1 \\3 \\2", input, perl = TRUE)
input <- gsub("([0-9]\\s\\w+)\\n", "\\1 NA\n", input, perl = TRUE)
The first gsub moves the color to the end of its line. The second gsub makes use of the fact that unchanged lines will now end with a date and a country code (not a country code and a color), and simply appends an "NA" to them. Here input is the rows joined into a single newline-separated string; perl = TRUE is needed so that the (?:...) group is valid and . does not match the newlines between rows.
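The same idea can be applied per row, keeping df1 as a character vector instead of joining it into one string (a sketch; the non-capturing group is simplified to plain \w{3,} so it also runs with base R's default regex engine):

```r
df1 <- trimws(c(" 11 Jackson, Adam L 1982-06-15 USA",
                "2 Pearl, Sam R 1986-11-04 UK",
                "5 Livingston, Steph LL 1983-12-12 USA",
                "7 Thornton, Mark LR 1982-03-26 USA",
                "10 Silver, John RED LL 1983-09-14 USA"))

# move a 3+ character word that follows the first name to the end of the row
out <- gsub("(.*, \\w+) (\\w{3,}) (.*)", "\\1 \\3 \\2", df1)
# untouched rows still end in a digit, a space and a country code: append NA
out <- sub("([0-9]\\s\\w+)$", "\\1 NA", out)
out
# [1] "11 Jackson, Adam L 1982-06-15 USA NA"
# [2] "2 Pearl, Sam R 1986-11-04 UK NA"
# [3] "5 Livingston, Steph LL 1983-12-12 USA NA"
# [4] "7 Thornton, Mark LR 1982-03-26 USA NA"
# [5] "10 Silver, John LL 1983-09-14 USA RED"
```

The result can then be fed to read.table(text = out, sep = "") to build the aligned data frame.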

We could use gsub to remove the extra substrings
v1 <- gsub("([^,]+),(\\s+[[:alpha:]]+)\\s*\\S*(\\s+[[:alpha:]]+\\s+\\d{4}-\\d{2}-\\d{2}.*)",
"\\1\\2\\3", trimws(df1))
d1 <- read.table(text=v1, sep="", header=FALSE, stringsAsFactors=FALSE,
col.names = c("Num", "LastName", "FirstName", "Cat", "DOB", "Location"))
d1$Color <- trimws(gsub("^[^,]+,\\s+[[:alpha:]]+|[[:alpha:]]+\\s+\\d{4}-\\d{2}-\\d{2}\\s+\\S+$",
"", trimws(df1)))
d1
# Num LastName FirstName Cat DOB Location Color
#1 11 Jackson Adam L 1982-06-15 USA
#2 2 Pearl Sam R 1986-11-04 UK
#3 5 Livingston Steph LL 1983-12-12 USA
#4 7 Thornton Mark LR 1982-03-26 USA
#5 10 Silver John LL 1983-09-14 USA RED

Using strsplit instead of regex:
# split strings in df1 on commas and spaces not preceded by the start of the line
s <- strsplit(df1, '(?<!^)[, ]+', perl = T)
# iterate over s, transpose the result and make it a data.frame
df2 <- data.frame(t(sapply(s, function(x){
# if number of items in row is 6, insert NA, else rearrange
if (length(x) == 6) {c(x, NA)} else {x[c(1:3, 5:7, 4)]}
})))
# add names
names(df2) <- c("Num", "Last-Name", "First-Name", "Cat.", "DOB", "Location", "Color")
df2
# Num Last-Name First-Name Cat. DOB Location Color
# 1 11 Jackson Adam L 1982-06-15 USA <NA>
# 2 2 Pearl Sam R 1986-11-04 UK <NA>
# 3 5 Livingston Steph LL 1983-12-12 USA <NA>
# 4 7 Thornton Mark LR 1982-03-26 USA <NA>
# 5 10 Silver John LL 1983-09-14 USA RED


Removing numbers before

I have a dataset of a customer list. The first column, of type factor (Kunden.Nr..Kurzname), always has a number (ranging from 1 to 4 digits) before the actual customer name that I would like to remove. The data set currently looks like this:
Kunden.Nr..Kurzname Name..Vorname Adresse Postfach PLZ
1 1529 33ER TAXI AG 33er Taxi AG Jägerstrasse 5 <NA> 4016
2 2384 4EYES GMBH 4eyes GmbH Grubenweg 25 <NA> 4153
3 1548 A. SCHULMANN AG A. Schulmann AG Kernstrasse 10 <NA> 8004
4 3427 AAA DENT AG AAA Dent AG Die Zahnärzte.ch Centralbahnstrasse 20 4051
5 555 AARE SEELAND MOB Aare Seeland mobil AG Hauptstrasse 93 <NA> 2560
6 856 AASTRA TELECOM S Aastra Telecom Schweiz AG Schulhausgasse 24 <NA> 3113
And I would like to have it like this:
Kunden.Nr..Kurzname Name..Vorname Adresse Postfach PLZ
1 33ER TAXI AG 33er Taxi AG Jägerstrasse 5 <NA> 4016
2 4EYES GMBH 4eyes GmbH Grubenweg 25 <NA> 4153
3 A. SCHULMANN AG A. Schulmann AG Kernstrasse 10 <NA> 8004
4 AAA DENT AG AAA Dent AG Die Zahnärzte.ch Centralbahnstrasse 20 4051
5 AARE SEELAND MOB Aare Seeland mobil AG Hauptstrasse 93 <NA> 2560
6 AASTRA TELECOM S Aastra Telecom Schweiz AG Schulhausgasse 24 <NA> 3113
Basically, I would need to remove everything before and including the first space. Figured out that I probably have to use "gsub", but unfortunately I haven't used R for a long time. Help is highly appreciated.
I would like to suggest making use of groups:
gsub("^(\\d+)([[:space:]])(.+)$","\\3",x)
For example:
> x <- c("1529 33ER TAXI AG", "2384 4EYES GMBH")
> gsub("^(\\d+)([[:space:]])(.+)$","\\3",x)
[1] "33ER TAXI AG" "4EYES GMBH"
The other answers are somewhat overcomplicated. Here is a suggestion that is straightforward and does everything you asked.
DF <- #your data.frame
FindFirstSpace <- regexpr(" ", DF$Kunden.Nr..Kurzname, fixed = TRUE)
DF$Kunden.Nr..Kurzname <- substr(DF$Kunden.Nr..Kurzname, FindFirstSpace + 1, 1000)
regexpr returns the position of the first instance of " " in each element of your character vector. Note that regexpr treats its pattern as a regular expression by default, but fixed = TRUE makes it a literal search.
Then take the substring starting just after the first space. For the stop value you can take any number big enough (or use nchar()).
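For instance, on a single value (using nchar() as the stop instead of an arbitrary large number):

```r
x <- "1529 33ER TAXI AG"
first_space <- regexpr(" ", x, fixed = TRUE)  # position of the first space: 5
substr(x, first_space + 1, nchar(x))
# [1] "33ER TAXI AG"
```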
You can simply do gsub("^[0-9]{1,4}\\s","",df$Kunden.Nr..Kurzname)

Regex extraction of text data between 2 commas in R

I have a bunch of text in a dataframe (df) that usually contains three lines of an address in 1 column and my goal is to extract the district (central part of the text), eg:
73 Greenhill Gardens, Wandsworth, London
22 Acacia Heights, Lambeth, London
Fortunately for me, in 95% of cases the person inputting the data has used commas to separate the text I want, which 100% of the time ends with ", London" (i.e. comma, space, London). To state things clearly, therefore: my goal is to extract the text BEFORE ", London" and AFTER the preceding comma.
My desired output is:
Wandsworth
Lambeth
I can manage to extract the part before:
df$extraction <- sub('.*,\\s*','',address)
and after
df$extraction <- sub('.*,\\s*','',address)
But not the middle part that I need. Can someone please help?
Many Thanks!
You could save yourself the headache of a regular expression and treat the vector like a CSV, using a file reading function to extract the relevant part. We can use read.csv(), taking advantage of the fact that colClasses can be used to drop columns.
address <- c(
"73 Greenhill Gardens, Wandsworth, London",
"22 Acacia Heights, Lambeth, London"
)
read.csv(text = address, colClasses = c("NULL", "character", "NULL"),
header = FALSE, strip.white = TRUE)[[1L]]
# [1] "Wandsworth" "Lambeth"
Or we could use fread(). Its select argument is nice and it strips white space automatically.
data.table::fread(paste(address, collapse = "\n"),
select = 2, header = FALSE)[[1L]]
# [1] "Wandsworth" "Lambeth"
Here are a couple of approaches:
# target ", London" and the start of the string
# up until the first comma followed by a space,
# and replace with ""
gsub("^.+?, |, London", "", address)
#[1] "Wandsworth" "Lambeth"
Or
# target the whole string, but use a capture group
# for the text before ", London" and after the first comma.
# replace the string with the captured group.
sub(".+, (.*), London", "\\1", address)
#[1] "Wandsworth" "Lambeth"
Here are two options that aren't dependent on the city name being the same. The first uses a regex pattern with stringr::str_extract():
raw_address <- c(
"73 Greenhill Gardens, Wandsworth, London",
"22 Acacia Heights, Lambeth, London",
"Street, District, City"
)
df <- data.frame(raw_address, stringsAsFactors = FALSE)
df$district = stringr::str_extract(raw_address, '(?<=,)[^,]+(?=,)')
> df
raw_address district
1 73 Greenhill Gardens, Wandsworth, London Wandsworth
2 22 Acacia Heights, Lambeth, London Lambeth
3 Street, District, City District
The second uses strsplit() and makes getting the other elements of the address easier:
df$address <- sapply(strsplit(raw_address, ',\\s*'), `[`, 1)
df$district <- sapply(strsplit(raw_address, ',\\s*'), `[`, 2)
df$city <- sapply(strsplit(raw_address, ',\\s*'), `[`, 3)
> df
raw_address address district city
1 73 Greenhill Gardens, Wandsworth, London 73 Greenhill Gardens Wandsworth London
2 22 Acacia Heights, Lambeth, London 22 Acacia Heights Lambeth London
3 Street, District, City Street District City
The split is done on ,\\s* in case there is no space or are multiple spaces after a comma.
You could try this
(?<=, )(.+?),
This works with any data set; the location doesn't have to be London.

Substring extraction from vector in R

I am trying to extract substrings from unstructured text. For example, assume a vector of country names:
countries <- c("United States", "Israel", "Canada")
How do I go about passing this vector of character values to extract exact matches from unstructured text?
text.df <- data.frame(ID = c(1:5),
text = c("United States is a match", "Not a match", "Not a match",
"Israel is a match", "Canada is a match"))
In this example, the desired output would be:
ID text
1 United States
4 Israel
5 Canada
So far I have been working with gsub, where I remove all non-matches and then remove rows with empty values. I have also been working with str_extract from the stringr package, but haven't had success getting the arguments for the regular expression correct. Any assistance would be greatly appreciated!
1. stringr
We could first subset 'text.df' using 'indx' (formed by collapsing the 'countries' vector) as the pattern in grep, and then use str_extract to get the pattern elements from the 'text' column, assigning the result to the 'text' column of the subset dataset ('text.df1').
library(stringr)
indx <- paste(countries, collapse="|")
text.df1 <- text.df[grep(indx, text.df$text),]
text.df1$text <- str_extract(text.df1$text, indx)
text.df1
# ID text
#1 1 United States
#4 4 Israel
#5 5 Canada
2. base R
Without using any external packages, we can extract the matching substrings with regmatches and gregexpr (using 'indx' and the subset 'text.df1' from above):
text.df1$text <- unlist(regmatches(text.df1$text,
gregexpr(indx, text.df1$text)))
3. stringi
We could also use the faster stri_extract from stringi
library(stringi)
na.omit(within(text.df, text1<- stri_extract(text, regex=indx)))[-2]
# ID text1
#1 1 United States
#4 4 Israel
#5 5 Canada
Here's an approach with data.table:
library(data.table)
##
R> data.table(text.df)[
sapply(countries, function(x) grep(x,text),USE.NAMES=F),
list(ID, text = countries)]
ID text
1: 1 United States
2: 4 Israel
3: 5 Canada
Create the pattern, p, and use strapply to extract the match to each component of text returning NA for each unmatched component. Finally remove the NA values using na.omit. This is non-destructive (i.e. text.df is not modified):
library(gsubfn)
p <- paste(countries, collapse = "|")
na.omit(transform(text.df, text = strapply(paste(text), p, empty = NA, simplify = TRUE)))
giving:
ID text
1 1 United States
4 4 Israel
5 5 Canada
Using dplyr it could also be written as follows (using p from above):
library(dplyr)
library(gsubfn)
text.df %>%
mutate(text = strapply(paste(text), p, empty = NA, simplify = TRUE)) %>%
na.omit

extract comma separated strings

I have a data frame as below. This is a sample data set with uniform-looking patterns, but the whole data is not very uniform:
locationid address
1073744023 525 East 68th Street, New York, NY 10065, USA
1073744022 270 Park Avenue, New York, NY 10017, USA
1073744025 Rockefeller Center, 50 Rockefeller Plaza, New York, NY 10020, USA
1073744024 1251 Avenue of the Americas, New York, NY 10020, USA
1073744021 1301 Avenue of the Americas, New York, NY 10019, USA
1073744026 44 West 45th Street, New York, NY 10036, USA
I need to find the city and country name from this address. I tried the following:
1) strsplit
This gives me a list, but I cannot access the last or third-to-last element from it.
2) Regular expressions
Finding the country is easy:
str_sub(str_extract(address, "\\d{5},\\s.*"),8,11)
but for the city:
str_sub(str_extract(address, ",\\s.+,\\s.+\\d{5}"),3,comma_pos)
I cannot find comma_pos, as it leads me to the same problem again.
I believe there is a more efficient way to solve this using either of the above approaches.
Try this code:
library(gsubfn)
cn <- c("Id", "Address", "City", "State", "Zip", "Country")
pat <- "(\\d+) (.+), (.+), (..) (\\d+), (.+)"
read.pattern(text = Lines, pattern = pat, col.names = cn, as.is = TRUE)
giving the following data.frame, from which it's easy to pick off components:
Id Address City State Zip Country
1 1073744023 525 East 68th Street New York NY 10065 USA
2 1073744022 270 Park Avenue New York NY 10017 USA
3 1073744025 Rockefeller Center, 50 Rockefeller Plaza New York NY 10020 USA
4 1073744024 1251 Avenue of the Americas New York NY 10020 USA
5 1073744021 1301 Avenue of the Americas New York NY 10019 USA
6 1073744026 44 West 45th Street New York NY 10036 USA
Explanation: it uses this pattern (when within quotes, the backslashes must be doubled):
(\d+) (.+), (.+), (..) (\d+), (.+)
and explained in words as follows:
"(\\d+)" - one or more digits (representing the Id) followed by
" " a space followed by
"(.+)" - any non-empty string (representing the Address) followed by
", " - a comma and a space followed by
"(.+)" - any non-empty string (representing the City) followed by
", " - a comma and a space followed by
"(..)" - two characters (representing the State) followed by
" " - a space followed by
"(\\d+)" - one or more digits (representing the Zip) followed by
", " - a comma and a space followed by
"(.+)" - any non-empty string (representing the Country)
It works because regular expressions are greedy: they always try to match the longest string they can, backtracking whenever a subsequent portion of the regular expression fails to match.
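The Rockefeller row shows this backtracking in action: the Address group first grabs as much as it can, then gives back just enough for the rest of the pattern to match, so the embedded comma stays inside the Address:

```r
pat  <- "(\\d+) (.+), (.+), (..) (\\d+), (.+)"
line <- "1073744025 Rockefeller Center, 50 Rockefeller Plaza, New York, NY 10020, USA"

sub(pat, "\\2", line)  # Address group keeps the embedded comma
# [1] "Rockefeller Center, 50 Rockefeller Plaza"
sub(pat, "\\3", line)  # City group
# [1] "New York"
```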
The advantage of this approach is that the regular expression is quite simple and straightforward, and the entire code is quite concise, as one read.pattern statement does it all.
Note: We used this for Lines:
Lines <- "1073744023 525 East 68th Street, New York, NY 10065, USA
1073744022 270 Park Avenue, New York, NY 10017, USA
1073744025 Rockefeller Center, 50 Rockefeller Plaza, New York, NY 10020, USA
1073744024 1251 Avenue of the Americas, New York, NY 10020, USA
1073744021 1301 Avenue of the Americas, New York, NY 10019, USA
1073744026 44 West 45th Street, New York, NY 10036, USA"
Split the data:
ss <- strsplit(data, ",")
Then
n <- lengths(ss)
will give the number of elements in each row (so you can work backward). Then
mapply("[[", ss, n)
gives you the last element. Or you could do
sapply(ss, tail, 1)
to get the last element.
To get the second-to-last (or more generally) you need
sapply(ss, function(x) tail(x, 2)[1])
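Putting those pieces together on sample addresses (a sketch; data stands for the address column as a character vector, split on ", " so the pieces come back without leading spaces):

```r
data <- c("525 East 68th Street, New York, NY 10065, USA",
          "270 Park Avenue, New York, NY 10017, USA")
ss <- strsplit(data, ", ")

sapply(ss, tail, 1)                    # country: last element
# [1] "USA" "USA"
sapply(ss, function(x) tail(x, 3)[1])  # city: third-to-last element
# [1] "New York" "New York"
```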
Here's an approach using the tidyr package. Personally, I'd just split the whole thing into all the various elements with tidyr's extract. This uses regex, but in a different way than you asked for.
library(tidyr)
extract(x, address, c("address", "city", "state", "zip", "country"),
"([^,]+),\\s([^,]+),\\s+([A-Z]+)\\s+(\\d+),\\s+([A-Z]+)")
## locationid address city state zip country
## 1 1073744023 525 East 68th Street New York NY 10065 USA
## 2 1073744022 270 Park Avenue New York NY 10017 USA
## 3 1073744025 50 Rockefeller Plaza New York NY 10020 USA
## 4 1073744024 1251 Avenue of the Americas New York NY 10020 USA
## 5 1073744021 1301 Avenue of the Americas New York NY 10019 USA
## 6 1073744026 44 West 45th Street New York NY 10036 USA
I think you want something like this.
> x <- "1073744026 44 West 45th Street, New York, NY 10036, USA"
> regmatches(x, gregexpr('^[^,]+, *\\K[^,]+', x, perl=T))[[1]]
[1] "New York"
> regmatches(x, gregexpr('^[^,]+, *[^,]+, *[^,]+, *\\K[^\n,]+', x, perl=T))[[1]]
[1] "USA"
Regex explanation:
^ Asserts that we are at the start.
[^,]+ Matches one or more characters that are not a comma. Change it to [^,]* if your dataframe contains empty fields.
, Matches a literal ,
<space>* Matches zero or more spaces.
\K discards the previously matched characters from the final match; only the characters matched by the pattern following \K are returned as output.
How about this pattern:
,\s(?<city>[^,]+?),\s(?<shortCity>[^,]+?)(?i:\d{5},)(?<country>\s.*)
This pattern will match these three groups:
"group": "city", "value": "New York"
"group": "shortCity", "value": "NY "
"group": "country", "value": " USA"
Using rex to construct the regular expression may make this type of task a little simpler.
x <- data.frame(
locationid = c(
1073744023,
1073744022,
1073744025,
1073744024,
1073744021,
1073744026
),
address = c(
'525 East 68th Street, New York, NY 10065, USA',
'270 Park Avenue, New York, NY 10017, USA',
'Rockefeller Center, 50 Rockefeller Plaza, New York, NY 10020, USA',
'1251 Avenue of the Americas, New York, NY 10020, USA',
'1301 Avenue of the Americas, New York, NY 10019, USA',
'44 West 45th Street, New York, NY 10036, USA'
))
library(rex)
sep <- rex(",", spaces)
re <-
rex(
capture(name = "address",
except_some_of(",")
),
sep,
capture(name = "city",
except_some_of(",")
),
sep,
capture(name = "state",
uppers
),
spaces,
capture(name = "zip",
some_of(digit, "-")
),
sep,
capture(name = "country",
something
))
re_matches(x$address, re)
#> address city state zip country
#>1 525 East 68th Street New York NY 10065 USA
#>2 270 Park Avenue New York NY 10017 USA
#>3 50 Rockefeller Plaza New York NY 10020 USA
#>4 1251 Avenue of the Americas New York NY 10020 USA
#>5 1301 Avenue of the Americas New York NY 10019 USA
#>6 44 West 45th Street New York NY 10036 USA
This regular expression will also handle 9 digit zip codes (12345-1234) and countries other than USA.

How to replace specific characters of a string with tab in R

Having a data frame with a string in each row, I need to replace the nth character with a tab. Moreover, there is an inconstant number of spaces before the mth character that I need to convert to a tab as well.
For instance having following row:
"00001 000 0 John Smith"
I need to replace the 6th character (a space) with a tab, and replace the spaces between John and Smith with a tab as well. For all rows, the last word (Smith) starts at the 75th character. So basically I need to replace all spaces before the 78th character with tabs.
I need the above row as follows:
"00001<Tab>000 0 John<Tab>Smith"
Thanks for the help.
You could use gsub here.
x <- c('00001 000 0 John Smith',
'00002 000 1 Josh Black',
'00003 000 2 Jane Smith',
'00004 000 3 Jeff Smith')
x <- gsub("(?<=[0-9]{5}) |(?<!\\d) +(?=(?i:[a-z]))", "\t", x, perl=T)
Output
[1] "00001\t000 0 John\tSmith" "00002\t000 1 Josh\tBlack"
[3] "00003\t000 2 Jane\tSmith" "00004\t000 3 Jeff\tSmith"
To actually see the \t in output use cat(x)
00001 000 0 John Smith
00002 000 1 Josh Black
00003 000 2 Jane Smith
00004 000 3 Jeff Smith
Here's one solution if the last word always starts at 75. First, some sample data:
#sample data
a <- "00001 000 0 John Smith"
b <- "00001 000 0 John Smith"
Now, since you know the positions, I'll use substr to extract the parts, then trim the trailing spaces from the middle part, and then you can paste in the tabs.
#extract parts
part1 <- substr(c(a, b), 1, 5)
part2 <- gsub("\\s*$", "", substr(c(a, b), 7, 74))
part3 <- substr(c(a, b), 75, 10000L)
#add in tabs
paste(part1, part2, part3, sep = "\t")