What is wrong with my regular expression in R? [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 7 years ago.
I am trying to extract the label, name, address, city, zip, and distance from the following text:
A
Carl's Jr.
308 WESTWOOD PLAZA
LOS ANGELES, CA 90095-8355
0.0 mi.
B
Carl's Jr.
2727 SANTA MONICA
SANTA MONICA, CA 90404-2407
4.8 mi.
...
...
Here is my regular expression pattern and code, but I get a matrix of NA values.
p <- "(^[AZ]\\n)^(\\w+.\\w+\\s\\w+.\\s*\\w*)\\n^(\\d+\\w+\\s*\\w*\\s*\\w*)\\n^(\\w+\\s*\\w*),\\s(CA)\\s(\\d+-*\\d*)\\n^(\\d+.\\d*)\\smi."
matches <- str_match(cj, p)
Do I have a syntax error in my pattern?
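For what it's worth, two concrete problems stand out in the pattern itself (a hedged diagnosis, since the question was closed without an accepted answer): [AZ] is a two-character class matching only a literal A or Z, so a range needs the hyphen, [A-Z]; and ^ only anchors at interior line starts when multiline mode is on. A sketch of a corrected pattern under those assumptions:
library(stringr)
# "[A-Z]" (with the hyphen) is a range; "[AZ]" matches only a literal A or Z.
# "(?m)" turns on multiline mode so "^" anchors at every line start, and
# "\\n+" tolerates blank lines between fields. str_match_all() collects
# every entry rather than just the first.
p <- paste0(
  "(?m)^([A-Z])\\n+",          # single-letter label
  "(.+)\\n+",                  # name
  "(.+)\\n+",                  # street address
  "(.+), (CA) ([-\\d]+)\\n+",  # city, state, zip
  "([\\d.]+) mi\\."            # distance
)
matches <- str_match_all(cj, p)[[1]]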

Maybe try strsplit() instead. See regex101 for an explanation of the regex used below. Afterwards, we can figure out how many rows there will be by counting the single-character elements (the A/B labels).
s <- strsplit(x, "\n+|, | (?=[0-9]+)", perl = TRUE)[[1]]
as.data.frame(matrix(s, sum(nchar(s) == 1), byrow = TRUE))
# V1 V2 V3 V4 V5 V6 V7
# 1 A Carl's Jr. 308 WESTWOOD PLAZA LOS ANGELES CA 90095-8355 0.0 mi.
# 2 B Carl's Jr. 2727 SANTA MONICA SANTA MONICA CA 90404-2407 4.8 mi.
Data:
x <- "A\n\nCarl's Jr.\n\n308 WESTWOOD PLAZA\n\nLOS ANGELES, CA 90095-8355\n\n0.0 mi.\n\nB\n\nCarl's Jr.\n\n2727 SANTA MONICA\n\nSANTA MONICA, CA 90404-2407\n\n4.8 mi."

Here's a way to do it without regular expressions
library(dplyr)
library(tidyr)
text =
"A
Carl's Jr.
308 WESTWOOD PLAZA
LOS ANGELES, CA 90095-8355
0.0 mi.
B
Carl's Jr.
2727 SANTA MONICA
SANTA MONICA, CA 90404-2407
4.8 mi." %>% textConnection %>% readLines
result =
  data_frame(text = text) %>%
  filter(text != "") %>%
  mutate(type = c("ID", "name", "street_address", "city_state_zip", "distance") %>%
           rep_len(n()),
         index = ceiling((1:n()) / 5)) %>%
  spread(type, text) %>%
  separate(city_state_zip, c("city", "state_zip"), sep = ", ") %>%
  separate(state_zip, c("state", "zip"), sep = " ") %>%
  separate(distance, c("distance", "unit"), sep = " ") %>%
  mutate(distance = as.numeric(distance))
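The labelling trick at the heart of this pipeline is worth seeing in isolation (a small sketch):
# rep_len() recycles the five field labels over the rows, and ceiling()
# assigns every block of five rows to one record index:
rep_len(c("ID", "name", "street_address", "city_state_zip", "distance"), 10)
ceiling((1:10) / 5)
# [1] 1 1 1 1 1 2 2 2 2 2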

Related

R - extract all strings matching pattern and create relational table

I am looking for a shorter and prettier solution (possibly in the tidyverse) to the following problem. I have a data.frame "data":
id string
1 A 1.001 xxx 123.123
2 B 23,45 lorem ipsum
3 C donald trump
4 D ssss 134, 1,45
What I want to do is extract all the numbers (no matter whether the decimal delimiter is "." or ","; in this case I assume the string "134, 1,45" can be parsed into two numbers, 134 and 1.45) and create a data.frame "output" looking similar to this:
id string
1 A 1.001
2 A 123.123
3 B 23.45
4 C <NA>
5 D 134
6 D 1.45
I managed to do this (code below), but the solution is pretty ugly and not very efficient (two for-loops). Could someone suggest a better way to do this (preferably using dplyr)?
# data
data <- data.frame(id = c("A", "B", "C", "D"),
                   string = c("1.001 xxx 123.123",
                              "23,45 lorem ipsum",
                              "donald trump",
                              "ssss 134, 1,45"),
                   stringsAsFactors = FALSE)
# creating empty data.frame
len <- length(unlist(sapply(data$string, function(x) gregexpr("[0-9]+[,|.]?[0-9]*", x))))
output <- data.frame(id = rep(NA, len), string = rep(NA, len))
# main solution
start = 0
for (i in 1:dim(data)[1]) {
  tmp_len <- length(unlist(gregexpr("[0-9]+[,|.]?[0-9]*", data$string[i])))
  for (j in (start + 1):(start + tmp_len)) {
    output[j, 1] <- data$id[i]
    output[j, 2] <- regmatches(data$string[i], gregexpr("[0-9]+[,|.]?[0-9]*", data$string[i]))[[1]][j - start]
  }
  start = start + tmp_len
}
# further modifications
output$string <- gsub(",", ".", output$string)
output$string <- as.numeric(ifelse(substring(output$string, nchar(output$string), nchar(output$string)) == ".",
                                   substring(output$string, 1, nchar(output$string) - 1),
                                   output$string))
output
1) Base R: This uses relatively simple regular expressions and no packages.
In the first 2 lines of code, replace any comma followed by a space with a space, and then replace all remaining commas with dots. After these two lines s will be: c("1.001 xxx 123.123", "23.45 lorem ipsum", "donald trump", "ssss 134 1.45")
In the next 4 lines of code, trim whitespace from the beginning and end of each string field and split the field on whitespace, producing a list. Then grep out those elements consisting only of digits and dots. (The regular expression ^[0-9.]*$ anchors at the start and end of each split element and allows only digits and dots in between, so only elements made up entirely of those characters are kept.) Replace any zero-length components with NA. Finally, add data$id as the names. After these 4 lines are run, the list L will be list(A = c("1.001", "123.123"), B = "23.45", C = NA, D = c("134", "1.45")).
In the last line of code, convert the list L to a data frame with the appropriate names.
s <- gsub(", ", " ", data$string)
s <- gsub(",", ".", s)
L <- strsplit(trimws(s), "\\s+")
L <- lapply(L, grep, pattern = "^[0-9.]*$", value = TRUE)
L <- ifelse(lengths(L), L, NA)
names(L) <- data$id
with(stack(L), data.frame(id = ind, string = values))
giving:
id string
1 A 1.001
2 A 123.123
3 B 23.45
4 C <NA>
5 D 134
6 D 1.45
2) magrittr: This variation of (1) writes it as a magrittr pipeline.
library(magrittr)
data %>%
transform(string = gsub(", ", " ", string)) %>%
transform(string = gsub(",", ".", string)) %>%
transform(string = trimws(string)) %>%
with(setNames(strsplit(string, "\\s+"), id)) %>%
lapply(grep, pattern = "^[0-9.]*$", value = TRUE) %>%
replace(lengths(.) == 0, NA) %>%
stack() %>%
with(data.frame(id = ind, string = values))
3) dplyr/tidyr: This is an alternate pipeline solution using dplyr and tidyr. unnest converts to long form, id is made a factor so that we can later use complete to recover ids removed by the filtering, filter drops the junk rows, and complete inserts NA rows for each id that would otherwise not appear.
library(dplyr)
library(tidyr)
data %>%
mutate(string = gsub(", ", " ", string)) %>%
mutate(string = gsub(",", ".", string)) %>%
mutate(string = trimws(string)) %>%
mutate(string = strsplit(string, "\\s+")) %>%
unnest() %>%
mutate(id = factor(id)) %>%
filter(grepl("^[0-9.]*$", string)) %>%
complete(id)
4) data.table
library(data.table)
DT <- as.data.table(data)
DT[, string := gsub(", ", " ", string)][,
string := gsub(",", ".", string)][,
string := trimws(string)][,
string := setNames(strsplit(string, "\\s+"), id)][,
list(string = list(grep("^[0-9.]*$", unlist(string), value = TRUE))), by = id][,
list(string = if (length(unlist(string))) unlist(string) else NA_character_), by = id]
DT
Update: Removed the assumption that junk words contain no digits or dots. Also added (2), (3) and (4) and made some improvements.
We can replace the , between numbers with . (using gsub), extract the numbers with str_extract_all (from stringr) into a list, replace the list elements of length zero with NA, set the names of the list to the 'id' column, stack to convert the list to a data.frame, and rename the columns.
library(stringr)
setNames(
  stack(
    setNames(
      lapply(
        str_extract_all(gsub("(?<=[0-9]),(?=[0-9])", ".", data$string, perl = TRUE),
                        "[0-9.]+"),
        function(x) if (length(x) == 0) NA else as.numeric(x)),
      data$id))[2:1],
  c("id", "string"))
# id string
#1 A 1.001
#2 A 123.123
#3 B 23.45
#4 C NA
#5 D 134
#6 D 1.45
Same idea as Gabor's. I had hoped to use R's built-in parsing of strings (type.convert, used in read.table) rather than writing custom regex substitutions:
sp = setNames(strsplit(data$string, " "), data$id)
spc = lapply(sp, function(x) {
  x = x[grep("[^0-9.,]$", x, invert = TRUE)]
  if (!length(x))
    NA_real_
  else
    mapply(type.convert, x, dec = gsub("[^.,]", "", x), USE.NAMES = FALSE)
})
setNames(rev(stack(spc)), names(data))
id string
1 A 1.001
2 A 123.123
3 B 23.45
4 C <NA>
5 D 134
6 D 1.45
Unfortunately, type.convert is not robust enough to consider both decimal delimiters at once, so we need this mapply malarkey instead of type.convert(x, dec = "[.,]").
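To see the limitation concretely (a quick sketch; dec is taken literally as a single decimal character, not as a pattern):
type.convert("1,45", dec = ",", as.is = TRUE)  # 1.45   (numeric)
type.convert("1,45", dec = ".", as.is = TRUE)  # "1,45" (left as character)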

R: Abbreviate state names in strings

I have strings with state names in them. How do I efficiently abbreviate them? I am aware of state.abb[grep("New York", state.name)] but this works only if "New York" is the whole string. I have, for example, "Walmart, New York". Thanks in advance!
Let's assume this input:
x = c("Walmart, New York", "Hobby Lobby (California)", "Sold in Sears in Illinois")
Edit: desired outputs will be along the lines of "Walmart, NY", "Hobby Lobby (CA)", "Sold in Sears in IL". As you can see, a state name can appear in many ways within a string.
Here's a base R way, using gregexpr(), regmatches(), and regmatches<-():
abbreviateStateNames <- function(x) {
  pat <- paste(state.name, collapse = "|")
  m <- gregexpr(pat, x)
  ff <- function(x) state.abb[match(x, state.name)]
  regmatches(x, m) <- lapply(regmatches(x, m), ff)
  x
}
x <- c("Hobby Lobby (California)",
"Hello New York City, here I come (from Greensboro North Carolina)!")
abbreviateStateNames(x)
# [1] "Hobby Lobby (CA)"
# [2] "Hello NY City, here I come (from Greensboro NC)!"
Alternatively -- and quite a bit more naturally -- you can accomplish the same thing using the gsubfn package:
library(gsubfn)
pat <- paste(state.name, collapse="|")
gsubfn(pat, function(x) state.abb[match(x, state.name)], x)
[1] "Hobby Lobby (CA)"
[2] "Hello NY City, here I come (from Greensboro NC)!"

Use strsplit to get last character in r

I have a file of baby names that I am reading in and then trying to get the last character in the baby name. For example, the file looks like..
Name Sex
Anna F
Michael M
David M
Sarah F
I read this in using
sourcenames = read.csv("babynames.txt", header=F, sep=",")
I ultimately want to end up with my result looking like..
Name Last Initial Sex
Michael l M
Sarah h F
I've managed to split the name into separate characters..
sourceout = strsplit(as.character(sourcenames$Name),'')
But now where I'm stuck is how to get the last letter, so in the case of Michael, how to get 'l'. I thought tail() might work, but it's returning the last few records, not the last character of each Name element.
Any help or advice is greatly appreciated.
Thanks :)
To make your strsplit method work, you can use tail with sapply:
df$LastInit <- sapply(strsplit(as.character(df$Name), ""), tail, 1)
df
# Name Sex LastInit
# 1 Anna F a
# 2 Michael M l
# 3 David M d
# 4 Sarah F h
Alternatively, you can use substring
with(df, substring(Name, nchar(Name)))
# [1] "a" "l" "d" "h"
Try this function from the stringi package:
require(stringi)
x <- c("Ala", "Sarah","Meg")
stri_sub(x, from = -1, to = -1)
This function extracts the substring between the from and to indexes. If an index is negative, characters are counted from the end of the string, so from = -1 and to = -1 means the substring from the last character to the last character :)
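A couple of quick illustrations of the negative indexing (results as comments):
stri_sub("Michael", from = -1)  # "l"   -- the last character
stri_sub("Michael", from = -3)  # "ael" -- the last three characters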
Why use stringi? Just look at these benchmarks :)
require(microbenchmark)
require(stringr)  # for str_extract
x <- sample(x, 1000, replace = TRUE)
microbenchmark(stri_sub(x, -1), str_extract(x, "[a-z]{1}$"), gsub(".*(.)$", "\\1", x),
               sapply(strsplit(as.character(x), ""), tail, 1), substring(x, nchar(x)))
Unit: microseconds
expr min lq median uq max neval
stri_sub(x, -1) 56.378 63.4295 80.6325 85.4170 139.158 100
str_extract(x, "[a-z]{1}$") 718.579 764.4660 821.6320 863.5485 1128.715 100
gsub(".*(.)$", "\\\\1", x) 478.676 493.4250 509.9275 533.8135 673.233 100
sapply(strsplit(as.character(x), ""), tail, 1) 12165.470 13188.6430 14215.1970 14771.4800 21723.832 100
substring(x, nchar(x)) 133.857 135.9355 141.2770 147.1830 283.153 100
Here is another option using data.table (for relatively clean syntax) and stringr (easier grammar).
library(data.table); library(stringr)
df = read.table(text="Name Sex
Anna F
Michael M
David M
Sarah F", header=T)
setDT(df) # convert to data.table
df[, "Last Initial" := str_extract(Name, "[a-z]{1}$") ][]
Name Sex Last Initial
1: Anna F a
2: Michael M l
3: David M d
4: Sarah F h
One-liner: regexpr(".$", x) locates the last character (any single character followed by the end of the string), and regmatches() extracts it:
x <- c("abc","123","Male")
regmatches(x,regexpr(".$", x))
## [1] "c" "3" "e"
You can do it with a regular expression and gsub(): the greedy .* consumes everything up to the last character, which (.) captures and the backreference \\1 returns:
sourcenames$last.letter = gsub(".*(.)$", "\\1", sourcenames$Name)
sourcenames
Name Sex last.letter
1 Anna F a
2 Michael M l
3 David M d
4 Sarah F h
You can try this one: the str_sub() function in the stringr package will help.
library(dplyr)
library(stringr)
library(babynames)
babynames %>%
select(name,sex) %>%
mutate(last_letter = str_sub(name,-1,-1)) %>%
head()
dplyr approach:
sourcenames %>%
  rowwise() %>%
  mutate("Last Initial" = strsplit(as.character(Name), '') %>% unlist() %>% .[length(.)])

How to separate the variables of a particular column in a CSV file and write to a CSV file in R?

I have a CSV file like
Market,CampaignName,Identity
Wells Fargo,Gary IN MetroChicago IL Metro,56
EMC,Los Angeles CA MetroBoston MA Metro,78
Apple,Cupertino CA Metro,68
Desired output, written to a CSV file with the first row as the header:
Market,City,State,Identity
Wells Fargo,Gary,IN,56
Wells Fargo,Chicago,IL,56
EMC,Los Angeles,CA,78
EMC,Boston,MA,78
Apple,Cupertino,CA,68
res <- gsub('(.*) ([A-Z]{2})*Metro (.*) ([A-Z]{2}) .*', '\\1,\\2:\\3,\\4',
            xx$Market)
How should I modify the above regular expression to get this result in R?
I'm new to R; any help is appreciated.
library(stringr)
xx.to.split <- with(xx, setNames(gsub("Metro", "", as.character(CampaignName)), Market))
do.call(rbind, str_match_all(xx.to.split, "(.+?) ([A-Z]{2}) ?"))[, -1]
Produces:
[,1] [,2]
Wells Fargo "Gary" "IN"
Wells Fargo "Chicago" "IL"
EMC "Los Angeles" "CA"
EMC "Boston" "MA"
Apple "Cupertino" "CA"
This should work even if you have a different number of campaign names in each market. Unfortunately I think base options are annoying to implement because, frustratingly, there isn't a gregexec, although I'd be curious if someone comes up with something comparably compact in base.
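(A later note, outside the original answer: base R has since gained gregexec() in R 4.1.0, which makes a comparably compact base version possible. A sketch, reusing xx.to.split from above:)
# gregexec() is the "g" (all matches) version of regexec(); regmatches()
# then returns one matrix per string, with the full match in row 1 and the
# capture groups below it, one column per match. Transpose each matrix and
# drop the full-match column to mirror str_match_all():
m <- regmatches(xx.to.split, gregexec("(.+?) ([A-Z]{2}) ?", xx.to.split))
do.call(rbind, lapply(m, function(g) t(g)[, -1, drop = FALSE]))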
Here is a solution using base R. Split the CampaignName column on the string " Metro", adding sequence numbers as names. stack turns it into a data frame with columns ind and values, which we massage into DF1. Merge that with xx by the sequence numbers of DF1 and the row numbers of xx. Move Market to the front of DF2 and remove ind and CampaignName. Finally, write it out.
xx <- read.csv("Campaign.csv", as.is = TRUE)
s <- strsplit(xx$CampaignName, " Metro")
names(s) <- seq_along(s)
ss <- stack(s)
DF1 <- with(ss, data.frame(ind,
                           City = sub(" ..$", "", values),
                           State = sub(".* ", "", values)))
DF2 <- merge(DF1, xx, by.x = "ind", by.y = 0)
DF <- DF2[ c("Market", setdiff(names(DF2), c("ind", "Market", "CampaignName"))) ]
write.csv(DF, file = "myfile.csv", row.names = FALSE, quote = FALSE)
Revised to handle extra columns after the poster modified the question to include them. Minor improvements.

Question regarding llply or lapply - applying functions to data.frames in a list

Dear R user community,
I have many data.frames in a list, as follows (only one data.frame in the list of 21 shown for convenience):
> str(datal)
List of 21
$ BallitoRaw.DAT :'data.frame': 1083 obs. of 3 variables:
..$ Filename: Factor w/ 21 levels "BallitoRaw.DAT",..: 1 1 1 1 1 1 1 1 1 1 ...
..$ date :Class 'Date' num [1:1083] 7318 7319 7320 7321 7322 ...
..$ temp : num [1:1083] NA 25.8 NA NA NA NA NA NA NA 24.4 ...
If I work on each data.frame in the list individually I can create a zoo object from temp and date, as such:
> BallitoRaw.zoo <- zoo(datal$BallitoRaw.DAT$temp, datal$BallitoRaw.DAT$date)
The zoo object looks like this:
> head(BallitoRaw.zoo)
1990-01-14 1990-01-15 1990-01-16 1990-01-17 1990-01-18 1990-01-19
NA 25.8 NA NA NA NA
How do I use llply or lapply (or similar) to work on the whole list at once?
The output needs to go into a new list of data.frames, or a series of independent data.frames (each one named as in the zoo example above). Note that the date column, although a regular time series (days), contains missing dates (in addition to NAs for temps of existing dates); the missing dates will be filled by the zoo function. The output data.frame with the zoo object will thus be longer than the original one.
Help kindly appreciated.
library(zoo)

makeNamedZoo <- function(dfrm) {
  dfrmname <- deparse(substitute(dfrm))
  zooname <- dfrmname
  assign(zooname, zoo(dfrm$temp, dfrm$date))  # zoo series from temp + date
  return(get(zooname))
}
ListOfZoos <- lapply(dflist, makeNamedZoo)
names(ListOfZoos) <- paste( sub("DAT$", "", names(dflist) ), "zoo", sep="")
Here is a simple test case:
df1 <- data.frame(a= letters[1:10], date=as.Date("2011-01-01")+0:9, temp=rnorm(10) )
df2 <- data.frame(a= letters[1:10], date=as.Date("2011-01-01")+0:9, temp=rnorm(10) )
dflist <- list(dfone.DAT=df1,dftwo.DAT=df2)
ListOfZoos <- lapply(dflist, makeNamedZoo)
names(ListOfZoos) <- paste( sub("DAT$", "", names(dflist) ), "zoo", sep="")
$dfone.zoo
2011-01-01 2011-01-02 2011-01-03 2011-01-04 2011-01-05 2011-01-06 2011-01-07
0.7869056 1.6523928 -1.1131432 1.2261783 1.1843587 0.2673762 -0.4159968
2011-01-08 2011-01-09 2011-01-10
-1.2686391 -0.4135859 -1.4916291
$dftwo.zoo
2011-01-01 2011-01-02 2011-01-03 2011-01-04 2011-01-05 2011-01-06 2011-01-07
0.7356612 -0.1263861 -1.6901240 -0.6441732 -1.4675871 2.3006544 1.0263354
2011-01-08 2011-01-09 2011-01-10
-0.8577544 0.6079986 0.6625564
This is an easier way to achieve what I needed:
tozoo <- function(x) zoo(x$temp, x$date)
data1.zoo <- do.call(merge, lapply(split(data1, data1$Filename), tozoo))
The result is a nice zoo object.
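One note on the split/merge idiom: here data1 is presumably the single stacked data frame (all files combined) rather than the list, so split() on Filename recreates the per-file pieces. To see why merge() produces one aligned object, a tiny sketch with made-up series:
library(zoo)
# merge.zoo() takes the union of the indexes, inserting NA where a series
# has no observation, so every series ends up on a common set of dates:
z1 <- zoo(c(25.8, 24.4), as.Date(c("1990-01-15", "1990-01-23")))
z2 <- zoo(21.2, as.Date("1990-01-17"))
merge(z1, z2)
#              z1   z2
# 1990-01-15 25.8   NA
# 1990-01-17   NA 21.2
# 1990-01-23 24.4   NA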