sapply function in R is not giving me the desired result - regex

I'm trying to use sapply instead of a 'for' loop, but I'm not getting the result I expect. I've tested each line separately and the code works, but when I wrap it in sapply it doesn't. I'm looking for hints on what might be wrong:
event <- c('Astronomical Low Tide', 'Avalanche', 'Blizzard', 'Coastal Flood',
'Cold/Wind Chill', 'Debris Flow', 'Dense Fog', 'Dense Smoke', 'Drought',
'Dust Devil', 'Dust Storm','Excessive Heat', 'Extreme Cold/Wind Chill',
'Flash Flood', 'Flood', 'Frost/Freeze', 'Funnel Cloud', 'Freezing Fog',
'Hail', 'Heat', 'Heavy Rain', 'Heavy Snow', 'High Surf', 'High Wind',
'Hurricane/Typhoon', 'Ice Storm', 'Lake/Effect Snow', 'Lakeshore Flood',
'Lightning', 'Marine Hail', 'Marine High Wind', 'Marine Strong Wind',
'Marine Thunderstorm Wind', 'Rip Current', 'Seiche', 'Sleet',
'Storm Surge/Tide', 'Strong Wind', 'Thunderstorm Wind', 'Tornado',
'Tropical Depression', 'Tropical Storm', 'Tsunami', 'Volcanic Ash',
'Waterspout', 'Wildfire', 'Winter Storm', 'Winter Weather')
replace <- function(dt, x, col) {
  idx <- grep(paste('(?i)', event[x], sep = ''), dt[, col])
  dt[idx, col] <- event[x]
}
sapply(1:length(event), function(x) replace(stormdata, x, 8))
Basically, what I'm trying to do is use every value of the event variable as a pattern for grep inside the custom replace function. grep gives me the indices of the rows that match the pattern, which I store in idx. After that I want to replace the rows of the data frame corresponding to idx with the matching value from event.
I'm trying to use sapply as a loop over every value of event, so it should run 48 times, looking for each pattern in the 8th column of the data frame stormdata and replacing the matches. BUT my code does nothing: after running it the data remain the same, no substitutions. When I run each line separately, without sapply, it works.
I've been looking everywhere and I can't figure out why it isn't working. Help.

Try using global assignment, e.g. stormdata[idx, col] <<- event[x], in your function. Not clean, but it will probably work.
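The underlying issue is that replace modifies its own copy of dt, and sapply just collects the return values. A cleaner alternative is to have the function return the cleaned column and assign it back yourself. Here is a minimal sketch, assuming stormdata exists and its 8th column holds the event-type strings (standardize_events is just an illustrative name, not from the original post):
standardize_events <- function(values, patterns) {
  # for each canonical event name, overwrite any case-insensitive match with it
  for (p in patterns) {
    idx <- grep(p, values, ignore.case = TRUE)
    values[idx] <- p
  }
  values
}
stormdata[, 8] <- standardize_events(stormdata[, 8], event)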

Python: referring to each duplicate item in a list by unique index

I am trying to extract particular lines from a txt output file. The lines I am interested in are a few lines above and a few lines below the key_string that I use to search through the results. The key string is the same for each record.
fi = open('Inputfile.txt')
fo = open('Outputfile.txt', 'a')
lines = fi.readlines()
filtered_list=[]
for item in lines:
    if item.startswith("key string"):
        filtered_list.append(lines[lines.index(item)-2])
        filtered_list.append(lines[lines.index(item)+6])
        filtered_list.append(lines[lines.index(item)+10])
        filtered_list.append(lines[lines.index(item)+11])
fo.writelines(filtered_list)
fi.close()
fo.close()
The output file contains the right lines for the first record, but repeated for every record available. How can I fix the indexing so that it reads each individual record? I've tried to find a solution, but as a novice programmer I struggled to use the enumerate() function or the collections package.
First of all, it would help if you said what exactly goes wrong with your code (a stack trace, it doesn't work at all, etc.). Anyway, here are some thoughts. You can try to divide your problem into subproblems to make it easier to work with. In this case, let's separate finding the relevant lines from collecting them.
First, let's find the indexes of all the relevant lines.
key = "key string"
relevant = []
for i, item in enumerate(lines):
if item.startswith(key):
relevant.append(item)
enumerate is actually quite simple. It takes a list, and returns a sequence of (index, item) pairs. So, enumerate(['a', 'b', 'c']) returns [(0, 'a'), (1, 'b'), (2, 'c')].
What I had written above can be achieved with a list comprehension:
relevant = [i for (i, item) in enumerate(lines) if item.startswith(key)]
So, we have the indexes of the relevant lines. Now, let's collect them. You are interested in the lines 2 before each one and 6, 10, and 11 after it. If one of your first lines contains the key, then you have a problem – you don't really want a negative index like lines[-1]: that's the last item! Also, you need to handle the situation in which an offset would take you past the end of the list; otherwise Python will raise an IndexError.
out = []
for r in relevant:
    for offset in -2, 6, 10, 11:
        index = r + offset
        if 0 <= index < len(lines):
            out.append(lines[index])
You could also catch the IndexError, but that won't save us much typing, as we have to handle negative indexes anyway.
The whole program would look like this:
key = "key string"
with open('Inputfile.txt') as fi:
lines = fi.readlines()
relevant = [i for (i, item) in enumerate(lines) if item.startswith(key)]
out = []
for r in relevant:
for offset in -2, 6, 10, 11:
index = r + offset
if 0 < index < len(lines):
out.append(lines[index])
with open('Outputfile.txt', 'a') as fi:
fi.writelines(out)
To get rid of duplicates you can cast the list to a set; for example:
x = ['a', 'b', 'a']
y = set(x)
print(y)
will result in (order may vary):
{'a', 'b'}

Use of local macro

I want to write six temp data files from my original data keeping the following variables:
temp1: v1-v18
temp2: v1-v5 v19-v31
temp3: v1-v5 v32-v44
temp4: v1-v5 v45-v57
temp5: v1-v5 v58-v70
temp6: v1-v5 v71-v84
I have tried the following:
forvalues i = 1(1)6 {
    preserve
    local j = 6 + (`i'-1)*13
    local k = `j'+12
    keep v1-v18 if `j'==6
    keep v1-v5 v`i'-v`k' if `i'>6 & `j'<71
    keep v1-v5 v71-v84 if `j'==71
    export delimited using temp`i'.csv, delimiter(";") novarnames replace
    restore
}
I get an invalid syntax error. The problem lies with the keep statements. Specifically the if condition with a local macro seems to be against syntax rules.
I think part of your confusion is due to misunderstanding the if qualifier vs the if command.
The if command evaluates an expression: if that expression is true, it executes what follows. The if command should be used to evaluate a single expression, in this case, the value of a macro.
You might use an if qualifier, for example, when you want to regress y x if x > 2 or replace x = . if x <= 2 etc. See here for a short description.
Your syntax has other issues too. You cannot have code following on the same line as the open brace in your forvalues loop, or again on the same line as your closing brace. You also use the local i to condition your keep. I think you mean to use j here, as i simply serves to iterate the loop, not identify a variable suffix.
Further, the logic here seems to work, but doesn't seem very general or efficient. I imagine there is a better way to do this but I don't have time to play around with it at the moment - perhaps an update later.
In any case, I think the correct syntax most analogous to what you have tried is something like the following.
clear *
set more off
set obs 5
forvalues i = 1/84 {
    gen v`i' = runiform()
}
forvalues i = 1/6 {
    preserve
    local j = 6 + (`i'-1)*13
    local k = `j'+12
    if `j' == 6 {
        keep v1-v18
    }
    else if `j' > 6 & `j' < 71 {
        keep v1-v5 v`j'-v`k'
    }
    else keep v1-v5 v71-v84
    ds
    di
    restore
}
I use ds here simply to list the variables in the data, followed by di to display a blank line as a separator, but you could plug your export back in and it should work just fine.
Another thing to consider, if you truly want temporary data files, is using tempfile so that nothing permanent is written to disk. You might use
forvalues i = 1/6 {
    tempfile temp`i'
    // other commands
    save `temp`i''
}
This will create six temporary Stata data files, temp1 through temp6, that exist only until the program terminates.

Need to extract 4 spaces of text before the occurrence of a word that appears in a column in a df, and may occur several times per row

I need to extract text (4 characters) before the occurrence of the word "exception" per row in a column of my dataframe. For example, see two lines of my data below:
MPSA: Original Version (01/16/2015); FMV Exception: Original Version (04/11/2014); MM Exception: 08.19.15 (08/19/2015)
MPSA: Original Version (02/10/2015); FMV Exception: Original Version (12/18/2014); MEI FMV: V3 (12/18/2014); MEI FMV: updated (11/18/2014); Meeting Material exception: Original Version (04/21/2014);
As you can see, "exception" occurs more than once per line, is sometimes capitalized and sometimes not, and is preceded by different text. I need to extract the "FMV", "MM", and "ial" that come before it in each case. The goal is to extract something like the following (comma separation would be fine but not needed):
"FMVMM"
"FMVial"
I am planning on making all text lower case for simplicity, but I cannot find a regex to extract the 4 characters of text I need after that. Any recommendations?
You basically need strsplit, substr and nchar:
t1 <- "1.MPSA: Original Version (01/16/2015); FMV Exception: Original Version (04/11/2014); MM Exception: 08.19.15 (08/19/2015)"
t2 <- "2.MPSA: Original Version (02/10/2015); FMV Exception: Original Version (12/18/2014); MEI FMV: V3 (12/18/2014); MEI FMV: updated (11/18/2014); Meeting Material exception: Original Version (04/21/2014); "
f <- function(x){
  tmp <- strsplit(x, "[Ee]xception")[[1]]
  ret <- array(dim = length(tmp) - 1)
  for(i in 1:length(ret)){
    ret[i] <- substr(tmp[i], start = nchar(tmp[i]) - 3, stop = nchar(tmp[i]))
  }
  return(paste(ret, collapse = ","))
}
f(t1) #gives "FMV , MM "
f(t2) #gives "FMV ,ial "
Avoiding the loop would be better, but for now this should work.
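For what it's worth, here is one possible vectorized sketch of the same idea (base R only, not part of the original answer): the pattern captures the four characters immediately before each "exception"/"Exception", and the explicit loop disappears.
extract4 <- function(x) {
  # find every occurrence of "exception" (either case) together with the 4 preceding characters
  m <- regmatches(x, gregexpr("(.{4})[Ee]xception", x))
  # keep only the leading 4 characters of each match, one comma-separated string per input element
  sapply(m, function(h) paste(substr(h, 1, 4), collapse = ","))
}
extract4(c(t1, t2))  # roughly "FMV , MM " and "FMV ,ial ", as with f() above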
Edit by Qaswed: Improved the function (shorter and does not need tolower any more).
Edit by TigeronFire:
@Qaswed, thank you for your guidance. The answer, however, poses another problem: t1 and t2 are only two lines of a data frame 10,000 rows long. I attempted to add the column logic to the function you built in a few different ways, but I always received the error message:
"Error in strsplit(BOSSMWF_practice$Documents, "[Ee]xception") : non-character argument"
I tried the following with reference to dataframe column BOSSMWF_practice$Documents:
f <- function(x){
  tmp <- strsplit(BOSSMWF_practice$Documents, "[Ee]xception")[[1]]
  ret <- array(dim = length(tmp) - 1)
  for(i in 1:length(ret)){
    ret[i] <- substr(tmp[i], start = nchar(tmp[i]) - 3, stop = nchar(tmp[i]))
  }
  return(paste(ret, collapse = ","))
}
AND:
f <- function(x){
  BOSSMWF_practice$tmp <- strsplit(BOSSMWF_practice$Documents, "[Ee]xception")[[1]]
  BOSSMWF_practice$ret <- array(dim = length(BOSSMWF_practice$tmp) - 1)
  for(i in 1:length(BOSSMWF_practice$ret)){
    BOSSMWF_practice$ret[i] <- substr(BOSSMWF_practice$tmp[i], start = nchar(BOSSMWF_practice$tmp[i]) - 3, stop = nchar(BOSSMWF_practice$tmp[i]))
  }
  return(paste(ret, collapse = ","))
}
I attempted to run the function on my applicable column using both function setups
BOSSMWF_practice$Funct <- f(BOSSMWF_practice$Documents)
But I always received the above error message. Can you take your advice one step further and indicate how to apply this to a dataframe and place the results in a new column?
Edit by Qaswed:
@TigeronFire, you should have added a comment to my answer or edited your own question, not edited my question. To your comment:
#if your dataset looks something like this:
df <- data.frame(variable_name = c(t1, t2))
#...use
apply(df, 1, FUN = f)
#note: there was an error in f. You need strsplit(x, ...) and not strsplit(t1, ...).
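A small additional sketch, not part of the original answers: the "non-character argument" error usually means the column is a factor rather than character, so converting it with as.character before splitting, and using sapply over the column, should give the new column directly. BOSSMWF_practice and Documents are the names from the question; everything else here is illustrative.
# hypothetical frame mirroring the question's structure
BOSSMWF_practice <- data.frame(Documents = c(t1, t2), stringsAsFactors = FALSE)
# apply f to each document; as.character() guards against factor columns
BOSSMWF_practice$Funct <- sapply(as.character(BOSSMWF_practice$Documents), f, USE.NAMES = FALSE)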

Apply function to NA matrix in R

I want to use apply to run a function I wrote that pulls data out via regular expressions, and fill a matrix with the results.
planetdata = function(dline) {
  new_line = unlist(strsplit(as.character(dline), "</td><td>"))
  new_first_value = substring(new_line[1], 9)
  new_last_value = substring(new_line[11], 1, nchar(new_line[11]) - 10)
  new_line[1] <- new_first_value
  new_line[11] <- new_last_value
  new_data <- new_line
  return(new_data)
}
new.dt = dt[21:1912]
exo.mat = matrix(data = NA, nrow=1892, ncol = 11)
colnames(exo.mat) <- c(exo.col.names)
apply(exo.mat,2,function(new.dt) planetdata(new.dt))
However, my matrix does not change and all the values are still NA.
Why is this happening?
Did you mean this? exo.mat[] <- apply(new.dt, 2, planetdata)
R usually passes by value, not by reference. Modifying a variable inside a function will generally not modify it outside. You need to save the value out explicitly.
Also, you were passing the empty matrix to apply(); it just didn't look that way because you made an anonymous function with a new.dt parameter, which is different from the new.dt variable you had in your session.
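To make the "save the value out explicitly" point concrete, here is a minimal sketch, assuming new.dt is a character vector of HTML table rows and planetdata returns 11 fields per row (those assumptions come from the question's code, not from anything verified):
# call planetdata once per line; sapply returns an 11 x length(new.dt) matrix,
# so transpose it to get one record per row, then assign the result explicitly
exo.mat <- t(sapply(new.dt, planetdata, USE.NAMES = FALSE))
colnames(exo.mat) <- exo.col.names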

R Subset Dataset Using Regular Expression

Is there a way to make the R code below run quicker (i.e. vectorized to avoid use of for loops)?
My example contains two data frames. The first has dimension n1*p, and one of its p columns contains names. The second data frame is a column vector (n2*1); it contains names as well. I want to keep all rows of the first data frame where some part of a name from the second data frame's column appears in the corresponding column of the first. Sorry for the brutal explanation.
Example (Data frame 1):
x y
Doggy 1
Hello 2
Hi Dog 3
Zebra 4
Example (Data frame 2)
z
Hello
Dog
So in the above example I want to keep rows 1, 2, 3 but NOT 4, since "Dog" appears in "Doggy" and "Hi Dog", and "Hello" appears in "Hello". Row 4 is excluded because no part of "Hello" or "Dog" appears in "Zebra".
Below is my R code to do this... it runs fine. However, for my real task, data frame 1 has 1 million rows and data frame 2 has 50 items to match on, so it runs pretty slowly. Any suggestions on how to speed this up are appreciated.
x <- c("Doggy", "Hello", "Hi Dog", "Zebra")
y <- 1:4
dat <- as.data.frame(cbind(x,y))
names(dat) <- c("x","y")
z <- as.data.frame(c("Hello", "Dog"))
names(z) <- c("z")
dat$flag <- NA
for(j in 1:length(z$z)){
  for(i in 1:dim(dat)[1]){
    if ( is.na(dat$flag[i])==TRUE ) {
      dat$flag[i] <- length(grep(paste(z[j,1]), dat[i,1], perl=TRUE, value=TRUE))
    } else {
      if (dat$flag[i]==0) {
        dat$flag[i] <- length(grep(paste(z[j,1]), dat[i,1], perl=TRUE, value=TRUE))
      } else {
        if (dat$flag[i]==1) {
          dat$flag[i]==1
        }
      }
    }
  }
}
dat1 <- subset(dat, flag==1)
dat1
Try this:
dat[grep(paste(z$z, collapse = "|"), dat$x), ]
or
subset(dat, grepl(paste(z$z, collapse = "|"), x))
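To see why this is fast, here is a small illustrative sketch using the question's own objects (not part of the original answer): paste(z$z, collapse = "|") builds the single pattern "Hello|Dog", so one vectorized grepl call scans the whole column instead of looping row by row.
pattern <- paste(z$z, collapse = "|")  # "Hello|Dog"
grepl(pattern, dat$x)                  # TRUE TRUE TRUE FALSE
subset(dat, grepl(pattern, x))         # keeps rows 1-3, drops "Zebra"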
This question inspired a boolean text search function (%bs%) in the qdap package, so I thought I'd share that approach to this question as well:
library(qdap)
dat[dat$x %bs% paste(z$z, collapse = "OR"), ]
In this case it is no less typing, but if multiple or/and statements are involved, this may be a useful approach.