String rearrangement in R - regex

I have a long list of city names and their province (state) names. This is a partial list of my data:
data <- c('Ranchi_Capital_State_Jharkhand', 'Bokaro_State_Jharkhand', 'Tata Nagar_State_Jharkhand', 'Ramgarh_State_Jharkhand',
'Pune_State_Maharashtra', 'Mumbai_Capital_State_Maharashtra', 'Nagpur_State_Maharashtra')
I want to rearrange each entry so that the state comes first, like this: State_Jharkhand_Bokaro. If the city is a capital, then: State_Jharkhand_Capital_Ranchi. Also note that a city name or state name may consist of one word or more (e.g. Tata Nagar).
What is the most efficient way to do this (without using any loop)?

You could use gsub as below.
> data <- c('Ranchi_Capital_State_Jharkhand', 'Bokaro_State_Jharkhand', 'Tata Nagar_State_Jharkhand', 'Ramgarh_State_Jharkhand',
+ 'Pune_State_Maharashtra', 'Mumbai_Capital_State_Maharashtra', 'Nagpur_State_Maharashtra')
> gsub("^(?:(.*?)(_Capital))?(.*?)_(State.*)", "\\4\\2_\\1\\3", data)
[1] "State_Jharkhand_Capital_Ranchi" "State_Jharkhand_Bokaro"
[3] "State_Jharkhand_Tata Nagar" "State_Jharkhand_Ramgarh"
[5] "State_Maharashtra_Pune" "State_Maharashtra_Capital_Mumbai"
[7] "State_Maharashtra_Nagpur"

This doesn't really use much regex, but is mostly based on the expected position of the information. Split the strings by "_" and then reorder them as required:
data
# [1] "Ranchi_Capital_State_Jharkhand" "Bokaro_State_Jharkhand"
# [3] "Tata Nagar_State_Jharkhand" "Ramgarh_State_Jharkhand"
# [5] "Pune_State_Maharashtra" "Mumbai_Capital_State_Maharashtra"
# [7] "Nagpur_State_Maharashtra"
A <- strsplit(data, "_", fixed = TRUE)
sapply(A, function(x) {
  if (length(x) == 3) {
    paste(x[c(2, 3, 1)], collapse = "_")
  } else if (length(x) == 4) {
    paste(x[c(3, 4, 2, 1)], collapse = "_")
  } else {
    stop("unexpected length")
  }
})
# [1] "State_Jharkhand_Capital_Ranchi" "State_Jharkhand_Bokaro"
# [3] "State_Jharkhand_Tata Nagar" "State_Jharkhand_Ramgarh"
# [5] "State_Maharashtra_Pune" "State_Maharashtra_Capital_Mumbai"
# [7] "State_Maharashtra_Nagpur"
I don't know if using sapply breaks your requirement of "without using any loop" though.
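If even sapply counts as a loop for your purposes, a fully vectorized alternative is two passes of sub() (a sketch of my own, assuming the state name itself contains no underscore):
data2 <- sub("^(.*)_(State_[^_]+)$", "\\2_\\1", data)          # move "State_<name>" to the front
sub("^(State_[^_]+)_(.*)_Capital$", "\\1_Capital_\\2", data2)  # move "Capital" ahead of the city
This reproduces the same output as the gsub answer above.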

Related

readHTMLTable is not giving me the information I want

I am trying to analyze some Formula 1 data. Wikipedia has a table with the data I want. I am importing the data into R with the code below:
library(XML)
library(RCurl)
url <- "https://en.wikipedia.org/wiki/List_of_Formula_One_drivers"
tabs <- getURL(url)
tabs <- readHTMLTable(tabs, stringsAsFactors=FALSE)
pilots <- tabs[[3]]
pilots <- pilots[-dim(pilots)[1], ]
head(pilots[, 1])
[1] "Abate, CarloCarlo Abate"
[2] "Abecassis, GeorgeGeorge Abecassis"
[3] "Acheson, KennyKenny Acheson"
[4] "Adamich, Andrea deAndrea de Adamich"
[5] "Adams, PhilippePhilippe Adams"
[6] "Ader, WaltWalt Ader"
However, the pilot names come out mangled: the hidden sort key ("Last, First") is prepended to the display name. I'd like them to be like this:
head(pilots[, 1])
[1] "Carlo Abate"
[2] "George Abecassis"
[3] "Kenny Acheson"
[4] "Andrea de Adamich"
[5] "Philippe Adams"
[6] "Walt Ader"
However, it seems I am not able to write a regex that can deal with this problem, or to find an argument for the function readHTMLTable that ignores the sortkey value in the table I am interested in. How can I solve my problem?
Use readHTMLTable with a bespoke elFun argument.
library(XML)
library(RCurl)
url <- "https://en.wikipedia.org/wiki/List_of_Formula_One_drivers"
tabs <- getURL(url)
myFun <- function(x){
  if(length(y <- getNodeSet(x, ".//a")) > 0){
    # the cell contains anchor nodes; title and href are extracted here,
    # but only the link text (value) is used below, which skips the sort key
    title <- xpathSApply(x, ".//a", fun = xmlGetAttr, name = "title")
    href <- xpathSApply(x, ".//a", fun = xmlGetAttr, name = "href")
    value <- xpathSApply(x, ".//a", fun = xmlValue)
    return(paste(value, collapse = ","))
  }
  xmlValue(x, encoding = "UTF-8")
}
tabs <- readHTMLTable(tabs, elFun = myFun, stringsAsFactors=FALSE)
pilots <- tabs[[3]]
pilots <- pilots[-dim(pilots)[1], ]
> head(pilots[, 1])
[1] "Carlo Abate" "George Abecassis" "Kenny Acheson" "Andrea de Adamich"
[5] "Philippe Adams" "Walt Ader"
> pilots[1,]
Name Country Seasons Championships Entries Starts Poles Wins Podiums Fastest laps Points[note]
1 Carlo Abate Italy 1962,1963 0 2 0 0 0 0 0 0
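If you would rather clean the mangled names up after the fact, you can exploit the fact that the display name repeats after the "Last, First" sort key (a sketch of my own; it assumes the last name contains no comma):
x <- c("Abate, CarloCarlo Abate", "Adamich, Andrea deAndrea de Adamich")
sub("^([^,]+), (.*?)\\2 \\1$", "\\2 \\1", x, perl = TRUE)
# [1] "Carlo Abate"       "Andrea de Adamich"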

Lookup table with subset/grepl in R

I'm analyzing a set of URLs and values extracted using a crawler. While I could extract substrings from the URL, I'd really rather not bother with the regex to do so; is there a simple way to do a lookup-table-style replacement using subset/grepl, without resorting to dplyr (a conditional mutate on the variables)?
My current process:
test <- data.frame(
  url = c('google.com/testing/duck', 'google.com/evaluating/dog', 'google.com/analyzing/cat'),
  content = c(1, 2, 3),
  subdir = NA
)
test[grepl('testing', test$url), ]$subdir <- 'testing'
test[grepl('evaluating', test$url), ]$subdir <- 'evaluating'
test[grepl('analyzing', test$url), ]$subdir <- 'analyzing'
Obviously, this is a little clumsy and doesn't scale well. With dplyr, I'd be able to do something with conditionals like:
test %<>% tbl_df() %>%
  mutate(subdir = ifelse(
    grepl('testing', url),
    'test r',
    ifelse(
      grepl('evaluating', url),
      'eval r',
      ifelse(
        grepl('analyzing', url),
        'anal r',
        NA
      ))))
But, again, really goofy and I don't want to incur a package dependency if at all possible. Is there any way to do regex-based subsetting with some sort of lookup table?
Edit: Just a few clarifications:
For extracting subdirectories, yes, regex would be most efficient; however, I was hoping for a more general pattern that could match a dictionary-like struct of strings with other, arbitrary values.
Of course, nested ifelse is ugly and prone to error; I just wanted to get a quick-and-dirty example with dplyr up.
Edit 2: Thought I'd loop back and post what I ended up with based upon BondedDust's approach. Decided to practice some mapping and non-standard eval while at it:
test <- data.frame(
  url = c(
    'google.com/testing/duck',
    'google.com/testing/dog',
    'google.com/testing/cat',
    'google.com/evaluating/duck',
    'google.com/evaluating/dog',
    'google.com/evaluating/cat',
    'google.com/analyzing/duck',
    'google.com/analyzing/dog',
    'google.com/analyzing/cat',
    'banana'
  ),
  content = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
  subdir = NA
)
# List used for key/value lookup, names can be regex
lookup <- c(
  "testing" = "Testing is important",
  "Eval.*" = 'eval in R',
  "analy(z|s)ing" = 'R is fun'
)
# Dumb test for error handling:
# lookup <- c('test', 'hey')
# Defining new lookup function
regexLookup <- function(data, dict, searchColumn, targetColumn, ignore.case = TRUE){
  # Basic check; errors/handling could be separated out
  if(is.null(names(dict)) || is.null(dict[[1]])) {
    stop("Not a valid replacement value; use a key/value store for `dict`.")
  }
  # Non-standard eval for the column names; not sure if I should
  # add safety checks for these
  searchColumn <- eval(substitute(searchColumn), data)
  targetColumn <- deparse(substitute(targetColumn))
  # Define find-and-replace utility
  findAndReplace <- function(key, val){
    data[grepl(key, searchColumn, ignore.case = ignore.case), targetColumn] <- val
    data <<- data
  }
  # Map over the key/value store
  mapply(findAndReplace, names(dict), dict)
  # Return result, with non-matching rows preserved
  return(data)
}
regexLookup(test, lookup, url, subdir, ignore.case = FALSE)
A simple loop over the search targets does the job:
for (target in c('testing', 'evaluating', 'analyzing')) {
  test[grepl(target, test$url), 'subdir'] <- target
}
test
url content subdir
1 google.com/testing/duck 1 testing
2 google.com/evaluating/dog 2 evaluating
3 google.com/analyzing/cat 3 analyzing
The vector of targets could have instead been the name of a vector that is in the workspace.
targets <- c('testing','evaluating','analyzing')
for( target in targets ) { ...}
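The same dictionary idea also works without an explicit for loop (a sketch of my own, reusing targets; rows that match nothing stay NA):
hits <- sapply(targets, grepl, x = test$url)  # logical matrix: one row per url, one column per target
test$subdir <- targets[apply(hits, 1, function(r) which(r)[1])]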
Try this:
test$subdir <- gsub('.*/(.*)/.*', '\\1', test$url)

R match between two comma-separated strings

I am trying to find an elegant way to find matches between the two following character columns in a data frame. The complicated part is that either string can contain a comma-separated list, and if a member of one list is a match for any member of the other list, then that whole entry would be considered a match. I'm not sure how well I've explained this, so here's sample data and output:
Alt1:
AT
A
G
CGTCC,AT
CGC
Alt2:
AA
A
GG
AT,GGT
CG
Expected Match per row:
Row 1 = none
Row 2 = A
Row 3 = none
Row 4 = AT
Row 5 = none
Non-working solutions:
First attempt: merge entire data frames by desired columns, then match up the alt columns shown above:
match1 = data.frame(merge(vcf.df, ref.df, by=c("chr", "start", "end", "ref")))
matches = unique(match1[unlist(sapply(match1$Alt1, grep, match1$Alt2, fixed = TRUE)), ])
Second method, using the findOverlaps feature from VariantAnnotation/GRanges:
findOverlaps(ranges(vcf1), ranges(vcf2))
Any suggestions would be greatly appreciated! Thank you!
Solution
Thanks to @Marat Talipov's answer below, the following solution works to compare two comma-separated strings:
> ##read in edited kaviar vcf and human ref
> ref <- readVcfAsVRanges("ref.vcf.gz", humie_ref)
Warning message:
In .vcf_usertag(map, tag, ...) :
ScanVcfParam ‘geno’ fields not present: ‘AD’
> ##rename chromosomes to match with vcf files
> ref <- renameSeqlevels(ref, c("1"="chr1"))
> ##################################
> ## Gather VCF files to process ##
> ##################################
> ##data frame *.vcf.gz files in directory path
> vcf_path <- data.frame(path=list.files(vcf_dir, pattern="*.vcf.gz$", full=TRUE))
> ##read in everything but sample data for speediness
> vcf_param = ScanVcfParam(samples=NA)
> vcf <- readVcfAsVRanges("test.vcf.gz", humie_ref, param=vcf_param)
> #################
> ## Match SNP's ##
> #################
> ##create data frames of info to match on
> vcf.df = data.frame(chr =as.character(seqnames(vcf)), start = start(vcf), end = end(vcf), ref = as.character(ref(vcf)),
+ alt=alt(vcf), stringsAsFactors=FALSE)
> ref.df = data.frame(chr =as.character(seqnames(ref)), start = start(ref), end = end(ref),
+ ref = as.character(ref(ref)), alt=alt(ref), stringsAsFactors=FALSE)
>
> ##merge based on all positional fields except vcf
> col_match = data.frame(merge(vcf.df, ref.df, by=c("chr", "start", "end", "ref")))
> library(stringi)
> ##split each alt column by comma and bind together
> M1 <- stri_list2matrix(sapply(col_match$alt.x,strsplit,','))
> M2 <- stri_list2matrix(sapply(col_match$alt.y,strsplit,','))
> M <- rbind(M1,M2)
> ##compare results
> result <- apply(M,2,function(z) unique(na.omit(z[duplicated(z)])))
> ##add results column to col_match df for checking/subsetting
> col_match$match = result
> head(col_match)
chr start end ref alt.x alt.y match
1 chr1 39998059 39998059 A G G G
2 chr1 39998059 39998059 A G G G
3 chr1 39998084 39998084 C A A A
4 chr1 39998084 39998084 C A A A
5 chr1 39998085 39998085 G A A A
6 chr1 39998085 39998085 G A A A
In the case that the input lists are of equal length and you'd like to compare list elements in a pairwise manner, you could use this solution:
library(stringi)
M1 <- stri_list2matrix(sapply(Alt1,strsplit,','))
M2 <- stri_list2matrix(sapply(Alt2,strsplit,','))
M <- rbind(M1,M2)
result <- apply(M,2,function(z) unique(na.omit(z[duplicated(z)])))
Sample input:
Alt1 <- list('AT','A','G','CGTCC,AT','CGC','GG,CC')
Alt2 <- list('AA','A','GG','AT,GGT','CG','GG,CC')
Output:
# [[1]]
# character(0)
#
# [[2]]
# [1] "A"
#
# [[3]]
# character(0)
#
# [[4]]
# [1] "AT"
#
# [[5]]
# character(0)
#
# [[6]]
# [1] "GG" "CC"
Sticking with the stringi package, you could do something like this, using the Alt1 and Alt2 data from Marat's answer.
library(stringi)
f <- function(x, y) {
  ssf <- stri_split_fixed(c(x, y), ",", simplify = TRUE)
  if(any(sd <- stri_duplicated(ssf))) ssf[sd] else NA_character_
}
Map(f, Alt1, Alt2)
# [[1]]
# [1] NA
#
# [[2]]
# [1] "A"
#
# [[3]]
# [1] NA
#
# [[4]]
# [1] "AT"
#
# [[5]]
# [1] NA
#
# [[6]]
# [1] "GG" "CC"
Or in base R, we can use scan() to separate the strings with commas.
g <- function(x, y, sep = ",") {
  s <- scan(text = c(x, y), what = "", sep = sep, quiet = TRUE)
  s[duplicated(s)]
}
Map(g, Alt1, Alt2)
You could do something like this:
Alt1 <- list('AT','A','G',c('CGTCC','AT'),'CGC')
Alt2 <- list('AA','A','GG',c('AT','GGT'),'CG')
# make sure you change the lists within the lists into vectors
matchlist <- list()
for (i in 1:length(Alt1)){
  matchlist[[i]] <- ifelse(Alt1[[i]] %in% Alt2[[i]],
                           paste("Row", i, "=", c(Alt1[[i]], Alt2[[i]])[duplicated(c(Alt1[[i]], Alt2[[i]]))], sep = " "),
                           paste("Row", i, "= none", sep = " "))
}
print(matchlist)
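A plain base-R variant of the same pairwise idea uses strsplit() plus intersect(), with Alt1 and Alt2 as in Marat's answer (a sketch; note that intersect() also drops duplicates within a row):
Map(function(a, b) intersect(strsplit(a, ",")[[1]], strsplit(b, ",")[[1]]),
    Alt1, Alt2)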

How to very efficiently extract a specific pattern from character vectors?

I have big data like this:
> Data[1:7,1]
[1] mature=hsa-miR-5087|mir_Family=-|Gene=OR4F5
[2] mature=hsa-miR-26a-1-3p|mir_Family=mir-26|Gene=OR4F9
[3] mature=hsa-miR-448|mir_Family=mir-448|Gene=OR4F5
[4] mature=hsa-miR-659-3p|mir_Family=-|Gene=OR4F5
[5] mature=hsa-miR-5197-3p|mir_Family=-|Gene=OR4F5
[6] mature=hsa-miR-5093|mir_Family=-|Gene=OR4F5
[7] mature=hsa-miR-650|mir_Family=mir-650|Gene=OR4F5
What I want to do is, in every row, select the name after mature= and also the word after Gene=, and then paste them together with
paste(a,b, sep="-")
For example, the expected output for the first two rows would be:
hsa-miR-5087-OR4F5
hsa-miR-26a-1-3p-OR4F9
So, the final implementation is like this:
for(i in 1:nrow(Data)){
  Data[i,3] <- sub("mature=([^|]*).*Gene=(.*)", "\\1-\\2", Data[i,1])
  Name <- strsplit(as.vector(Data[i,2]), "\\|")[[1]][2]
  Data[i,4] <- as.numeric(sub("pvalue=", "", Name))
  print(i)
}
This works well, but it's very slow: Data has about 200,000,000 rows, and this implementation is far too slow for that. How can I speed it up?
If you can guarantee that the format is exactly as you specified, then a regular expression can capture (denoted by the parentheses below) everything from the equals sign up to the pipe symbol, and from Gene= to the end, and paste the captures together with a minus sign:
sub("mature=([^|]*).*Gene=(.*)", "\\1-\\2", Data[,1])
Another option is to use read.table with = as a separator, then paste the 2nd and 4th columns:
txt <- Data[, 1]  # the character vector shown above
res <- read.table(text = txt, sep = '=')
paste(sub('[|].*', '', res$V2),    # get rid of the trailing part here
      sub('^ +| +$', '', res$V4),  # remove extra spaces
      sep = '-')
[1] "hsa-miR-5087-OR4F5" "hsa-miR-26a-1-3p-OR4F9" "hsa-miR-448-OR4F5" "hsa-miR-659-3p-OR4F5"
[5] "hsa-miR-5197-3p-OR4F5" "hsa-miR-5093-OR4F5" "hsa-miR-650-OR4F5"
The simple sub solution already given looks quite nice but just in case here are some other approaches:
1) read.pattern Using read.pattern in the gsubfn package we can parse the data into a data.frame. This intermediate form, DF, can then be manipulated in many ways. In this case we use paste in essentially the same way as in the question:
library(gsubfn)
DF <- read.pattern(text = Data[, 1], pattern = "(\\w+)=([^|]*)")
paste(DF$V2, DF$V6, sep = "-")
giving:
[1] "hsa-miR-5087-OR4F5" "hsa-miR-26a-1-3p-OR4F9" "hsa-miR-448-OR4F5"
[4] "hsa-miR-659-3p-OR4F5" "hsa-miR-5197-3p-OR4F5" "hsa-miR-5093-OR4F5"
[7] "hsa-miR-650-OR4F5"
The intermediate data frame, DF, that was produced looks like this:
> DF
V1 V2 V3 V4 V5 V6
1 mature hsa-miR-5087 mir_Family - Gene OR4F5
2 mature hsa-miR-26a-1-3p mir_Family mir-26 Gene OR4F9
3 mature hsa-miR-448 mir_Family mir-448 Gene OR4F5
4 mature hsa-miR-659-3p mir_Family - Gene OR4F5
5 mature hsa-miR-5197-3p mir_Family - Gene OR4F5
6 mature hsa-miR-5093 mir_Family - Gene OR4F5
7 mature hsa-miR-650 mir_Family mir-650 Gene OR4F5
Here is the regular expression we used:
(\w+)=([^|]*)
1a) names We could make DF look nicer by reading the three columns of data and the three names separately. This also improves the paste statement:
DF <- read.pattern(text = Data[, 1], pattern = "=([^|]*)")
names(DF) <- unlist(read.pattern(text = Data[1,1], pattern = "(\\w+)=", as.is = TRUE))
paste(DF$mature, DF$Gene, sep = "-") # same answer as above
The DF in this section that was produced looks like this. It has 3 instead of 6 columns and remaining columns were used to determine appropriate column names:
> DF
mature mir_Family Gene
1 hsa-miR-5087 - OR4F5
2 hsa-miR-26a-1-3p mir-26 OR4F9
3 hsa-miR-448 mir-448 OR4F5
4 hsa-miR-659-3p - OR4F5
5 hsa-miR-5197-3p - OR4F5
6 hsa-miR-5093 - OR4F5
7 hsa-miR-650 mir-650 OR4F5
2) strapplyc
Another approach using the same package. This extracts the fields coming after an = and not containing a |, producing a list. We then sapply over that list, pasting the first and third fields together:
sapply(strapplyc(Data[, 1], "=([^|]*)"), function(x) paste(x[1], x[3], sep = "-"))
giving the same result.
Here is the regular expression used:
=([^|]*)
Here is one approach:
Data <- readLines(n = 7)
mature=hsa-miR-5087|mir_Family=-|Gene=OR4F5
mature=hsa-miR-26a-1-3p|mir_Family=mir-26|Gene=OR4F9
mature=hsa-miR-448|mir_Family=mir-448|Gene=OR4F5
mature=hsa-miR-659-3p|mir_Family=-|Gene=OR4F5
mature=hsa-miR-5197-3p|mir_Family=-|Gene=OR4F5
mature=hsa-miR-5093|mir_Family=-|Gene=OR4F5
mature=hsa-miR-650|mir_Family=mir-650|Gene=OR4F5
df <- read.table(sep = "|", text = Data, stringsAsFactors = FALSE)
l <- lapply(df, strsplit, "=")
trim <- function(x) gsub("^\\s*|\\s*$", "", x)
paste(trim(sapply(l[[1]], "[", 2)), trim(sapply(l[[3]], "[", 2)), sep = "-")
# [1] "hsa-miR-5087-OR4F5" "hsa-miR-26a-1-3p-OR4F9" "hsa-miR-448-OR4F5" "hsa-miR-659-3p-OR4F5" "hsa-miR-5197-3p-OR4F5" "hsa-miR-5093-OR4F5"
# [7] "hsa-miR-650-OR4F5"
Maybe not the most elegant, but you can try:
sapply(Data[,1], function(x){
  parts <- strsplit(x, "\\|")[[1]]
  y <- paste(gsub("(mature=)|(Gene=)", "", parts[grepl("mature|Gene", parts)]), collapse = "-")
  return(y)
})
Example
Data <- data.frame(col1 = c("mature=hsa-miR-5087|mir_Family=-|Gene=OR4F5",
                            "mature=hsa-miR-26a-1-3p|mir_Family=mir-26|Gene=OR4F9"),
                   col2 = 1:2, stringsAsFactors = FALSE)
> Data[,1]
[1] "mature=hsa-miR-5087|mir_Family=-|Gene=OR4F5" "mature=hsa-miR-26a-1-3p|mir_Family=mir-26|Gene=OR4F9"
> sapply(Data[,1],function(x){
+ parts<-strsplit(x,"\\|")[[1]]
+ y<-paste(gsub("(mature=)|(Gene=)","",parts[grepl("mature|Gene",parts)]),collapse="-")
+ return(y)
+ })
mature=hsa-miR-5087|mir_Family=-|Gene=OR4F5 mature=hsa-miR-26a-1-3p|mir_Family=mir-26|Gene=OR4F9
"hsa-miR-5087-OR4F5" "hsa-miR-26a-1-3p-OR4F9"

Text file to list in R

I have a large text file with a variable number of fields in each row. The first entry in each row corresponds to a biological pathway, and each subsequent entry corresponds to a gene in that pathway. The first few lines might look like this
path1 gene1 gene2
path2 gene3 gene4 gene5 gene6
path3 gene7 gene8 gene9
I need to read this file into R as a list, with each element being a character vector, and the name of each element in the list being the first element on the line, for example:
> pathways <- list(
+ path1=c("gene1","gene2"),
+ path2=c("gene3","gene4","gene5","gene6"),
+ path3=c("gene7","gene8","gene9")
+ )
>
> str(pathways)
List of 3
$ path1: chr [1:2] "gene1" "gene2"
$ path2: chr [1:4] "gene3" "gene4" "gene5" "gene6"
$ path3: chr [1:3] "gene7" "gene8" "gene9"
>
> str(pathways$path1)
chr [1:2] "gene1" "gene2"
>
> print(pathways)
$path1
[1] "gene1" "gene2"
$path2
[1] "gene3" "gene4" "gene5" "gene6"
$path3
[1] "gene7" "gene8" "gene9"
...but I need to do this automatically for thousands of lines. I saw a similar question posted here previously, but I couldn't figure out how to do this from that thread.
Thanks in advance.
Here's one way to do it:
# Read in the data
x <- scan("data.txt", what="", sep="\n")
# Separate elements by one or more whitespace characters
y <- strsplit(x, "[[:space:]]+")
# Extract the first vector element and set it as the list element name
names(y) <- sapply(y, `[[`, 1)
#names(y) <- sapply(y, function(x) x[[1]]) # same as above
# Remove the first vector element from each list element
y <- lapply(y, `[`, -1)
#y <- lapply(y, function(x) x[-1]) # same as above
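Applied to the three example lines from the question, y then has exactly the target structure:
str(y)
# List of 3
#  $ path1: chr [1:2] "gene1" "gene2"
#  $ path2: chr [1:4] "gene3" "gene4" "gene5" "gene6"
#  $ path3: chr [1:3] "gene7" "gene8" "gene9"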
One solution is to read the data in via read.table(), but use the fill = TRUE argument to pad the rows with fewer "entries", convert the resulting data frame to a list and then clean up the "empty" elements.
First, read your snippet of data in:
con <- textConnection("path1 gene1 gene2
path2 gene3 gene4 gene5 gene6
path3 gene7 gene8 gene9
")
dat <- read.table(con, fill = TRUE, stringsAsFactors = FALSE)
close(con)
Next we drop the first column, saving it first so it can supply the names of the list later:
nams <- dat[, 1]
dat <- dat[, -1]
Convert the data frame to a list. Here I just split the data frame on the indices 1,2,...,n where n is the number of rows:
ldat <- split(dat, seq_len(nrow(dat)))
Clean up the empty cells:
ldat <- lapply(ldat, function(x) x[x != ""])
Finally, apply the names
names(ldat) <- nams
Giving:
> ldat
$path1
[1] "gene1" "gene2"
$path2
[1] "gene3" "gene4" "gene5" "gene6"
$path3
[1] "gene7" "gene8" "gene9"
A quick solution based on the linked page...
inlist <- strsplit(readLines("file.txt"), "[[:space:]]+")
pathways <- lapply(inlist, tail, n = -1)
names(pathways) <- lapply(inlist, head, n = 1)
One more solution:
sl <- c("path1 gene1 gene2", "path2 gene1 gene2 gene3") # created by readLines
f <- function(l, s) {
  v <- strsplit(s, " ")[[1]]
  l[[v[1]]] <- v[2:length(v)]
  return(l)
}
res <- Reduce(f, sl, list())
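A compact variant of the same split-then-name idea (a sketch, reusing sl from above):
parts <- strsplit(sl, " +")
pathways <- setNames(lapply(parts, `[`, -1),
                     vapply(parts, `[[`, character(1), 1))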