split string without loss of characters - regex

I wish to split strings at a certain character while retaining that character at the start of the second resulting string. I can get almost all of the desired operation, except that I lose the characters I specify in strsplit, which I guess is called the delimiter.
Is there a way to ask strsplit to retain the delimiter? Or must I use a regular expression of some kind? Thank you for any advice. This seems like a very basic question; sorry if it is a duplicate. I prefer to use base R.
Here is an example showing what I have so far:
my.table <- read.table(text = '
model npar AICc
AA(~region+state+county+city)BB(~region+state+county+city)CC(~1) 17 11111.11
AA(~region+state+county)BB(~region+state+county)CC(~123) 14 22222.22
AA(~region+state)BB(~region+state)CC(~33) 13 33333.33
AA(~region)BB(~region)CC(~4321) 6 44444.44
', header = TRUE, stringsAsFactors = FALSE)
desired.result <- read.table(text = '
model CC npar AICc
AA(~region+state+county+city)BB(~region+state+county+city) CC(~1) 17 11111.11
AA(~region+state+county)BB(~region+state+county) CC(~123) 14 22222.22
AA(~region+state)BB(~region+state) CC(~33) 13 33333.33
AA(~region)BB(~region) CC(~4321) 6 44444.44
', header = TRUE, stringsAsFactors = FALSE)
split.model <- strsplit(my.table$model, 'CC\\(')
split.models <- matrix(unlist(split.model), ncol=2, byrow=TRUE, dimnames = list(NULL, c("model", "CC")))
desires.result2 <- data.frame(split.models, my.table[,2:ncol(my.table)])
desires.result2
# model CC npar AICc
# 1 AA(~region+state+county+city)BB(~region+state+county+city) ~1) 17 11111.11
# 2 AA(~region+state+county)BB(~region+state+county) ~123) 14 22222.22
# 3 AA(~region+state)BB(~region+state) ~33) 13 33333.33
# 4 AA(~region)BB(~region) ~4321) 6 44444.44

The basic idea is to use regular-expression look-around operations with strsplit to get your desired result. However, it's a bit trickier than that, because strsplit handles positive lookahead in a special way. Read this excellent post from @JoshO'Brien for an explanation.
pattern <- "(?<=\\))(?=CC)"
strsplit(my.table$model, pattern, perl=TRUE)
# [[1]]
# [1] "AA(~region+state+county+city)BB(~region+state+county+city)"
# [2] "CC(~1)"
# [[2]]
# [1] "AA(~region+state+county)BB(~region+state+county)"
# [2] "CC(~123)"
# [[3]]
# [1] "AA(~region+state)BB(~region+state)" "CC(~33)"
# [[4]]
# [1] "AA(~region)BB(~region)" "CC(~4321)"
Of course, I leave the task of do.call(rbind, ...) and cbind to get the final desired.output to you.
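For completeness, that finishing step might look roughly like this (a sketch using the pattern defined above; stringsAsFactors = FALSE just mirrors the question's setup):
split.models <- do.call(rbind, strsplit(my.table$model, pattern, perl = TRUE))
colnames(split.models) <- c("model", "CC")
data.frame(split.models, my.table[, 2:ncol(my.table)], stringsAsFactors = FALSE)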

Almost right after I posted, I thought of using gsub to insert a space and then splitting on the space, although I like Arun's answer better.
my.table <- read.table(text = '
model npar AICc
AA(~region+state+county+city)BB(~region+state+county+city)CC(~1) 17 11111.11
AA(~region+state+county)BB(~region+state+county)CC(~123) 14 22222.22
AA(~region+state)BB(~region+state)CC(~33) 13 33333.33
AA(~region)BB(~region)CC(~4321) 6 44444.44
', header = TRUE, stringsAsFactors = FALSE)
my.table$model <- gsub("CC", " CC", my.table$model)
split.model <- strsplit(my.table$model, ' ')
split.models <- matrix(unlist(split.model), ncol=2, byrow=TRUE, dimnames = list(NULL, c("model", "CC")))
desires.result <- data.frame(split.models, my.table[,2:ncol(my.table)])
desires.result
# model CC npar AICc
# 1 AA(~region+state+county+city)BB(~region+state+county+city) CC(~1) 17 11111.11
# 2 AA(~region+state+county)BB(~region+state+county) CC(~123) 14 22222.22
# 3 AA(~region+state)BB(~region+state) CC(~33) 13 33333.33
# 4 AA(~region)BB(~region) CC(~4321) 6 44444.44

... why not just tack the separator back on afterwards? Would seem to save a lot of trouble fiddling with regexes.
split.model <- lapply(strsplit(my.table$model, 'CC\\('), function(x) {
x[2] <- paste0("CC(", x[2])
x
})
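From there, rebuilding the table is the same matrix/data.frame step as before (a sketch repeating the earlier code):
split.models <- matrix(unlist(split.model), ncol = 2, byrow = TRUE,
                       dimnames = list(NULL, c("model", "CC")))
data.frame(split.models, my.table[, 2:ncol(my.table)])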

Related

R Merging 4 Strings into 1 String

I'm searching for the locations of 4 different substrings in x and trying to merge these four outputs into one cumulative string:
x <- ("AAABBADSJALKACCWIEUADD")
outputA <- gregexpr(pattern = "AAA", x)
outputB <- gregexpr(pattern = "ABB", x)
outputC <- gregexpr(pattern = "ACC", x)
outputD <- gregexpr(pattern = "ADD", x)
I would like to merge these four outputs and output this merged result as a text file with each element separated on new line.
merged_output
# 1
# 3
# 13
# 20
Thank you
Actually you can do it all at once using a lookahead (?=)
gregexpr("A(?=AA|BB|CC|DD)", x, perl=T)[[1]]
# [1] 1 3 13 20
# attr(,"match.length")
# [1] 1 1 1 1
# attr(,"useBytes")
# [1] TRUE
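The question also asks to write the merged result to a text file with one position per line; a minimal sketch (the file name is just an example):
pos <- gregexpr("A(?=AA|BB|CC|DD)", x, perl = TRUE)[[1]]
writeLines(as.character(pos), "merged_output.txt")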
For example
library(stringi)
cat("merged_output",
paste("#",
stri_locate_first_fixed(pattern = c("AAA", "ABB", "ACC", "ADD"), ("AAABBADSJALKACCWIEUADD"))[, "start"]),
file = tf <- tempfile(fileext = ".txt"),
sep = "\n")
Now, the file named in tf contains
merged_output
# 1
# 3
# 13
# 20
Not very automated, but
cat(paste(c(outputA[[1]][1], outputB[[1]][1], outputC[[1]][1], outputD[[1]][1]),
          collapse = "\n"),
    file = "outputfile.txt")
should do it.
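If you want this slightly more automated, the same idea can run over a list of the gregexpr results (a sketch; outputfile.txt is just an example name):
outputs <- list(outputA, outputB, outputC, outputD)
cat(paste(sapply(outputs, function(o) o[[1]][1]), collapse = "\n"),
    file = "outputfile.txt")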

Subset all 3 digit numbers and collapse them with a separator in a data frame. R

I'm formatting a data set so each entry has the adegenet format for codominant markers, such as:
Loci1
###/###
208/210
200/204
198/208
where the # represents any digit (the number is an allele size in base pairs). My data has some homozygous entries (all 3-digit integers with no separator) that have the form of:
Loci1
###
208
198
I intend to paste the 3-digit string to itself with sep='/' to produce the first format. I've tried to use grep to subset these homozygous entries by finding all ###/### entries and negating the match, such as:
a <- grep('\\b\\d{3}?[/]\\d{3}', score$Loci1, value =T ) # Subset all ###/###/
score[!(a %in% 1:nrow(score$Loci1)), ] # works but only on vectors...
After the subset I could paste. The problem arises when I apply this to a data frame. grep seems to treat the data frame as a list (which in part it is) and returns columns that have a match.
So, in short, how can I go from ### to ###/### in a data frame?
Self-contained example of the data:
score2 <- NULL
set.seed(9)
Loci1 <- NULL
Loci2 <- NULL
Loci3 <- NULL
for (i in 1:5) Loci1 <- append(Loci1, paste(sample(seq(from = 230, to=330, by=3), 2, replace = F), collapse = '/'))
for (i in 1:5) Loci2 <- append(Loci2, paste(sample(seq(from = 230, to=330, by=3), 2, replace = F), collapse = '/'))
for (i in 1:5) Loci3 <- append(Loci3, paste(sample(seq(from = 230, to=330, by=3), 2, replace = F), collapse = '/'))
score2 <- data.frame(Loci1, Loci2, Loci3, stringsAsFactors = F)
score2[2,3] <- strsplit(score2[2,3], split = '/')[1]
score2[5,2] <- strsplit(score2[3,3], split = '/')[1]
score2[1,1] <- strsplit(score2[1,1], split = '/')[1]
score2[c(1, 4),c(2,3)] <- NA
score2
You could just replace the 3 digit items with the separator and a copy:
sub("^(...)$", "\\1/\\1", Loci1)
Use lapply with an anonymous function:
data.frame( lapply(score2, function(x) sub("^(...)$", "\\1/\\1", x) ) )
Loci1 Loci2 Loci3
1 251/251 <NA> <NA>
2 251/329 320/257 260/260
3 275/242 278/329 281/320
4 269/266 <NA> <NA>
5 296/326 281/281 326/314
(Not sure what the "paste-part" was supposed to refer to, but I think this was the intent of your question)
If the numeric values could have a varying number of digits then use a pattern argument like "^([0-9]{1,9})$"
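Applied to the example data, that variant might look like this (a sketch; the {1,9} bound is arbitrary):
data.frame(lapply(score2, function(x) sub("^([0-9]{1,9})$", "\\1/\\1", x)))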
An option using grep/paste on the whole table at once (a matrix is a vector with a dim attribute, so grep returns linear indices that can be used directly for replacement):
m1 <- as.matrix(score2)
indx <- grep('^...$', m1)
m1[indx] <- paste(m1[indx], m1[indx], sep="/")
as.data.frame(m1)
# Loci1 Loci2 Loci3
#1 251/251 <NA> <NA>
#2 251/329 320/257 260/260
#3 275/242 278/329 281/320
#4 269/266 <NA> <NA>
#5 296/326 281/281 326/314
Or without converting to matrix, this can be done using lapply
score2[] <- lapply(score2, function(x) ifelse(grepl('^...$', x),
                                              paste(x, x, sep="/"), x))

R match between two comma-separated strings

I am trying to find an elegant way to find matches between the two following character columns in a data frame. The complicated part is that either string can contain a comma-separated list, and if a member of one list is a match for any member of the other list, then that whole entry would be considered a match. I'm not sure how well I've explained this, so here's sample data and output:
Alt1:
AT
A
G
CGTCC,AT
CGC
Alt2:
AA
A
GG
AT,GGT
CG
Expected Match per row:
Row 1 = none
Row 2 = A
Row 3 = none
Row 4 = AT
Row 5 = none
Non-working solutions:
First attempt: merge entire data frames by desired columns, then match up the alt columns shown above:
match1 = data.frame(merge(vcf.df, ref.df, by=c("chr", "start", "end", "ref")))
matches = unique(match1[unlist(sapply(match1$Alt1, grep, match1$Alt2, fixed=TRUE)),])
Second method, using the findOverlaps feature from VariantAnnotation/GRanges:
findOverlaps(ranges(vcf1), ranges(vcf2))
Any suggestions would be greatly appreciated! Thank you!
Solution
Thanks to @Marat Talipov's answer below, the following solution works to compare two comma-separated strings:
> ##read in edited kaviar vcf and human ref
> ref <- readVcfAsVRanges("ref.vcf.gz", humie_ref)
Warning message:
In .vcf_usertag(map, tag, ...) :
ScanVcfParam ‘geno’ fields not present: ‘AD’
> ##rename chromosomes to match with vcf files
> ref <- renameSeqlevels(ref, c("1"="chr1"))
> ##################################
> ## Gather VCF files to process ##
> ##################################
> ##data frame *.vcf.gz files in directory path
> vcf_path <- data.frame(path=list.files(vcf_dir, pattern="*.vcf.gz$", full=TRUE))
> ##read in everything but sample data for speediness
> vcf_param = ScanVcfParam(samples=NA)
> vcf <- readVcfAsVRanges("test.vcf.gz", humie_ref, param=vcf_param)
> #################
> ## Match SNP's ##
> #################
> ##create data frames of info to match on
> vcf.df = data.frame(chr =as.character(seqnames(vcf)), start = start(vcf), end = end(vcf), ref = as.character(ref(vcf)),
+ alt=alt(vcf), stringsAsFactors=FALSE)
> ref.df = data.frame(chr =as.character(seqnames(ref)), start = start(ref), end = end(ref),
+ ref = as.character(ref(ref)), alt=alt(ref), stringsAsFactors=FALSE)
>
> ##merge based on all positional fields except vcf
> col_match = data.frame(merge(vcf.df, ref.df, by=c("chr", "start", "end", "ref")))
> library(stringi)
> ##split each alt column by comma and bind together
> M1 <- stri_list2matrix(sapply(col_match$alt.x,strsplit,','))
> M2 <- stri_list2matrix(sapply(col_match$alt.y,strsplit,','))
> M <- rbind(M1,M2)
> ##compare results
> result <- apply(M,2,function(z) unique(na.omit(z[duplicated(z)])))
> ##add results column to col_match df for checking/subsetting
> col_match$match = result
> head(col_match)
chr start end ref alt.x alt.y match
1 chr1 39998059 39998059 A G G G
2 chr1 39998059 39998059 A G G G
3 chr1 39998084 39998084 C A A A
4 chr1 39998084 39998084 C A A A
5 chr1 39998085 39998085 G A A A
6 chr1 39998085 39998085 G A A A
In the case that the input lists are of equal length and you'd like to compare list elements in a pairwise manner, you could use this solution:
library(stringi)
M1 <- stri_list2matrix(sapply(Alt1,strsplit,','))
M2 <- stri_list2matrix(sapply(Alt2,strsplit,','))
M <- rbind(M1,M2)
result <- apply(M,2,function(z) unique(na.omit(z[duplicated(z)])))
Sample input:
Alt1 <- list('AT','A','G','CGTCC,AT','CGC','GG,CC')
Alt2 <- list('AA','A','GG','AT,GGT','CG','GG,CC')
Output:
# [[1]]
# character(0)
#
# [[2]]
# [1] "A"
#
# [[3]]
# character(0)
#
# [[4]]
# [1] "AT"
#
# [[5]]
# character(0)
#
# [[6]]
# [1] "GG" "CC"
Sticking with the stringi package, you could do something like this, using the Alt1 and Alt2 data from Marat's answer.
library(stringi)
f <- function(x, y) {
ssf <- stri_split_fixed(c(x, y), ",", simplify = TRUE)
if(any(sd <- stri_duplicated(ssf))) ssf[sd] else NA_character_
}
Map(f, Alt1, Alt2)
# [[1]]
# [1] NA
#
# [[2]]
# [1] "A"
#
# [[3]]
# [1] NA
#
# [[4]]
# [1] "AT"
#
# [[5]]
# [1] NA
#
# [[6]]
# [1] "GG" "CC"
Or in base R, we can use scan() to separate the strings with commas.
g <- function(x, y, sep = ",") {
s <- scan(text = c(x, y), what = "", sep = sep, quiet = TRUE)
s[duplicated(s)]
}
Map(g, Alt1, Alt2)
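With the same Alt1 and Alt2 lists as above, this should give the same per-pair matches as the earlier answers, e.g.:
Map(g, Alt1, Alt2)[[4]]
# [1] "AT"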
You could also do something like this:
Alt1 <- list('AT','A','G',c('CGTCC','AT'),'CGC')
Alt2 <- list('AA','A','GG',c('AT','GGT'),'CG')
# make sure you change the lists within in the lists into vectors
matchlist <- list()
for (i in 1:length(Alt1)){
  matchlist[[i]] <- ifelse(Alt1[[i]] %in% Alt2[[i]],
                           paste("Row", i, "=", c(Alt1[[i]], Alt2[[i]])[duplicated(c(Alt1[[i]], Alt2[[i]]))], sep = " "),
                           paste("Row", i, "= none", sep = " "))
}
print(matchlist)

Replace the first N dots of a string revisited

In January I asked how to replace the first N dots of a string: replace the first N dots of a string
DWin's answer was very helpful. Can it be generalized?
df.1 <- read.table(text = '
my.string other.stuff
1111111111111111 120
..............11 220
11.............. 320
1............... 320
.......1........ 420
................ 820
11111111111111.1 120
', header = TRUE)
nn <- 14
# this works:
df.1$my.string <- sub("^\\.{14}", paste(as.character(rep(0, nn)), collapse = ""),
df.1$my.string)
# this does not work:
df.1$my.string <- sub("^\\.{nn}", paste(as.character(rep(0, nn)), collapse = ""),
df.1$my.string)
Using sprintf to build the pattern, you can get the desired output:
nn <- 3
sub(sprintf("^\\.{%s}", nn),
paste(rep(0, nn), collapse = ""), df.1$my.string)
## [1] "1111111111111111" "000...........11" "11.............."
## [4] "1..............." "000....1........" "000............."
## [7] "11111111111111.1"
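On R >= 3.3.0 you could also build the replacement with strrep() instead of paste(rep(...)) (a sketch assigning back to the column, with nn as above):
df.1$my.string <- sub(sprintf("^\\.{%d}", nn), strrep("0", nn), df.1$my.string)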
pattstr <- paste0("\\.", paste0( rep(".",nn), collapse="") )
pattstr
#[1] "\\..............."
df.1$my.string <- sub(pattstr,
                      paste0( rep("0", nn), collapse=""),
                      df.1$my.string)
> df.1
my.string other.stuff
1 1111111111111111 120
2 000000000000001 220
3 11.............. 320
4 100000000000000 320
5 00000000000000. 420
6 00000000000000. 820
7 11111111111111.1 120

Find the location of a character in string

I would like to find the location of a character in a string.
Say: string = "the2quickbrownfoxeswere2tired"
I would like the function to return 4 and 24 -- the character location of the 2s in string.
You can use gregexpr
gregexpr(pattern ='2',"the2quickbrownfoxeswere2tired")
[[1]]
[1] 4 24
attr(,"match.length")
[1] 1 1
attr(,"useBytes")
[1] TRUE
or perhaps str_locate_all from package stringr, which (as of stringr version 1.0) is a wrapper for stringi::stri_locate_all
library(stringr)
str_locate_all(pattern ='2', "the2quickbrownfoxeswere2tired")
[[1]]
start end
[1,] 4 4
[2,] 24 24
Note that you could simply use stringi:
library(stringi)
stri_locate_all(pattern = '2', "the2quickbrownfoxeswere2tired", fixed = TRUE)
Another option in base R would be something like
lapply(strsplit(x, ''), function(x) which(x == '2'))
which should work, given a character vector x.
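For example, with the string from the question:
x <- "the2quickbrownfoxeswere2tired"
lapply(strsplit(x, ''), function(x) which(x == '2'))
# [[1]]
# [1]  4 24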
Here's another straightforward alternative.
> which(strsplit(string, "")[[1]]=="2")
[1] 4 24
You can make the output just 4 and 24 using unlist:
unlist(gregexpr(pattern ='2',"the2quickbrownfoxeswere2tired"))
[1] 4 24
Find the position of the nth occurrence of str2 in str1 (same order of parameters as Oracle SQL INSTR); returns 0 if not found:
instr <- function(str1, str2, startpos = 1, n = 1) {
  # split the remainder of str1 (from startpos) on str2; note strsplit treats str2 as a regex
  aa <- unlist(strsplit(substring(str1, startpos), str2))
  # fewer than n occurrences of str2: not found
  if (length(aa) < n + 1) return(0)
  # position = length of the pieces before the nth occurrence + the earlier separators + the startpos offset
  sum(nchar(aa[1:n])) + startpos + (n - 1) * nchar(str2)
}
instr('xxabcdefabdddfabx','ab')
[1] 3
instr('xxabcdefabdddfabx','ab',1,3)
[1] 15
instr('xxabcdefabdddfabx','xx',2,1)
[1] 0
To only find the first locations, use lapply() with min():
my_string <- c("test1", "test1test1", "test1test1test1")
unlist(lapply(gregexpr(pattern = '1', my_string), min))
#> [1] 5 5 5
# or the readable tidyverse form
my_string %>%
gregexpr(pattern = '1') %>%
lapply(min) %>%
unlist()
#> [1] 5 5 5
To only find the last locations, use lapply() with max():
unlist(lapply(gregexpr(pattern = '1', my_string), max))
#> [1] 5 10 15
# or the readable tidyverse form
my_string %>%
gregexpr(pattern = '1') %>%
lapply(max) %>%
unlist()
#> [1] 5 10 15
You could use grep as well:
grep('2', strsplit(string, '')[[1]])
#4 24
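One note: grep interprets its pattern as a regular expression, so if the character you are searching for happens to be a regex metacharacter (e.g. "."), add fixed = TRUE; with '2' it makes no difference:
grep('2', strsplit(string, '')[[1]], fixed = TRUE)
# 4 24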