busan <- subset(influ_busan, select = c(CNT, temp_min, temp_diff, humid_mean, hpa_mean, rad_mean, wind_mean, o3))
new_busan <- mice(busan, seed = 12345, m = 5)   # m = number of imputations
lm_busan <- with(new_busan, lm(CNT ~ temp_min + temp_diff + humid_mean + hpa_mean + rad_mean + wind_mean + o3))
summary(lm_busan)
busan_predict <- data.frame(fitted.values(lm_busan))   # this is the step that fails
This is a simplified version of my code. I use multiple imputation for the NA values, and after the imputation I want to extract the fitted values from the model. However, the last line fails, so I can't get at the fitted values. How can I extract them?
You can do this via the extract_imputations function from my fork of mice; hopefully it will be incorporated into the main mice package shortly:
see: https://github.com/stefvanbuuren/mice/pull/51
devtools::install_github("alexwhitworth/mice")
library(mice)
new_busan <- mice(busan, seed= 12345, m=2)
busan_predict <- extract_imputations(busan, new_busan$imp, j= 1)
busan_predict <- extract_imputations(busan, new_busan$imp, j= 2)
Edit: Apparently I didn't read the mice documentation thoroughly enough. This functionality already exists in mice as mice::complete.
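For the specific goal in the question (fitted values after with()), here is a minimal sketch of one way to do it, reusing new_busan and lm_busan from the question; the $analyses slot of the mira object and mice::complete() are the relevant pieces:
# with() returns a mira object, whose $analyses slot holds the individual lm fits
fits <- lapply(lm_busan$analyses, fitted.values)
busan_predict <- as.data.frame(do.call(cbind, fits))   # one column per imputation
# or pull out a single completed data set with mice::complete()
busan_complete_1 <- complete(new_busan, action = 1)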
I am currently working with genetic data from different patients. So far I have always run PCAs comparing independent groups, for example Sick vs. Control or Treatment vs. Control.
But now I have paired data, meaning there is a relationship between the samples in the different groups. The typical example is having a group of subjects and measuring each of them before and after treatment.
I did this PCA with a Thermofisher program, but I would like to do it in R. This is the output of the ThermoFisher program: B (before treatment), P (post-treatment).
I tried looking for an example on Google, but I didn't find one.
An example would be something like this:
data.matrix <- matrix(nrow = 100, ncol = 10)
colnames(data.matrix) <- c(
  paste("P_BT", 1:5, sep = ""),   # 5 samples before treatment
  paste("P_AT", 1:5, sep = ""))   # the same 5 subjects after treatment
rownames(data.matrix) <- paste("gene", 1:100, sep = "")
for (i in 1:100) {
  # simulate counts for the before and after samples of gene i
  bt.values <- rpois(5, lambda = sample(x = 10:1000, size = 1))
  at.values <- rpois(5, lambda = sample(x = 10:1000, size = 1))
  data.matrix[i, ] <- c(bt.values, at.values)
}
head(data.matrix)
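Since the question is how to do this in R, here is a minimal sketch of a paired PCA on the simulated matrix above using prcomp(); colouring the points by time point and joining each subject's pair with a line is just one way to show the pairing, not something taken from the original thread:
# PCA expects samples in rows, so transpose the gene x sample matrix
pca <- prcomp(t(data.matrix), scale. = TRUE)
scores <- data.frame(pca$x[, 1:2],
                     subject = rep(1:5, times = 2),
                     time = rep(c("Before", "After"), each = 5))
plot(scores$PC1, scores$PC2, col = as.integer(factor(scores$time)), pch = 19,
     xlab = "PC1", ylab = "PC2")
# join each subject's Before and After points to show the paired structure
for (s in 1:5) {
  lines(scores$PC1[scores$subject == s], scores$PC2[scores$subject == s])
}
legend("topright", legend = c("After", "Before"), col = 1:2, pch = 19)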
I have a text file which has information, like so:
product/productId: B000GKXY4S
product/title: Crazy Shape Scissor Set
product/price: unknown
review/userId: A1QA985ULVCQOB
review/profileName: Carleen M. Amadio "Lady Dragonfly"
review/helpfulness: 2/2
review/score: 5.0
review/time: 1314057600
review/summary: Fun for adults too!
review/text: I really enjoy these scissors for my inspiration books that I am making (like collage, but in books) and using these different textures these give is just wonderful, makes a great statement with the pictures and sayings. Want more, perfect for any need you have even for gifts as well. Pretty cool!
product/productId: B000GKXY4S
product/title: Crazy Shape Scissor Set
product/price: unknown
review/userId: ALCX2ELNHLQA7
review/profileName: Barbara
review/helpfulness: 0/0
review/score: 5.0
review/time: 1328659200
review/summary: Making the cut!
review/text: Looked all over in art supply and other stores for "crazy cutting" scissors for my 4-year old grandson. These are exactly what I was looking for - fun, very well made, metal rather than plastic blades (so they actually do a good job of cutting paper), safe ("blunt") ends, etc. (These really are for age 4 and up, not younger.) Very high quality. Very pleased with the product.
I want to parse this into a data frame with productId, title, price, etc. as columns and the records as rows. How can I do this in R?
A quick and dirty approach:
mytable <- read.table(text=mytxt, sep = ":")
mytable$id <- rep(1:2, each = 10)
res <- reshape(mytable, direction = "wide", timevar = "V1", idvar = "id")
There will be issues if there are other colons in the data. It also assumes that there is an equal number (10) of fields for each record.
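If colons inside the review text turn out to be a problem, here is a hedged variant (not part of the answer above) that splits each line only on the first colon; it assumes mytxt is a single string with one "key: value" pair per line and that every record starts with product/productId:
lines <- unlist(strsplit(mytxt, "\n"))
lines <- lines[nzchar(lines)]                    # drop blank lines between records
keys  <- sub(":.*$", "", lines)                  # text before the first colon
vals  <- sub("^[^:]*: ?", "", lines)             # text after the first colon
id    <- cumsum(keys == "product/productId")     # new record at each productId line
long  <- data.frame(id, keys, vals, stringsAsFactors = FALSE)
res   <- reshape(long, direction = "wide", timevar = "keys", idvar = "id")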
I want to parse an access.log in R. It has the following form and I want to get it into a data.frame:
TIME="2013-07-25T06:28:38+0200" MOBILE_AGENT="0" HTTP_REFERER="-" REQUEST_HOST="www.example.com" APP_ENV="envvar" APP_COUNTRY="US" APP_DEFAULT_LOCATION="New York" REMOTE_ADDR="11.222.33.444" SESSION_ID="rstg35tsdf56tdg3" REQUEST_URI="/get/me/something" HTTP_USER_AGENT="Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" REQUEST_METHOD="GET" REWRITTEN_REQUEST_URI="/index.php?url=/get/me/something" STATUS="200" RESPONSE_TIME="155,860ms" PEAK_MEMORY="18965" CPU="99,99"
The logs are 400MB per file and currently I have about 4GB logs so size matters.
Another thing: there are two different log structures (different columns are included), so you cannot assume the columns are always the same, but you can assume that only one kind of structure is parsed at a time.
What I have up to now is a regex for this structure:
(\\w+)[=][\"](.*?)[\"][ ]{0,1}
I can read the data in and somehow fit it into a data.frame using readLines, gsub and read.table, but it is slow and messy.
Any ideas? Thanks!
You can do it like this, for example:
## `text` holds the raw log lines (on the real file, just use readLines on the file)
text <- readLines(textConnection(text))
## since we can't use = as the splitter (it also occurs inside the URLs) I create a new splitter
dd <- read.table(text = gsub('="', '|"', text), sep = ' ')
## use data.table since it is faster to apply operation by columns and bind them again
library(data.table)
DT <- as.data.table(dd)
DT.split <- DT[, lapply(.SD, function(x)
  unlist(strsplit(as.character(x), "|", fixed = TRUE)))]
## keep every second row: the odd rows are the field names, the even rows the values
DT.split[c(FALSE, TRUE)]
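As an alternative sketch (not from the answer above), a regex along the lines of the one in the question can be applied line by line with base R; this is simple but likely slower on 400 MB files, and the file name here is just a placeholder:
lines <- readLines("access.log")               # placeholder path
pat <- '(\\w+)="(.*?)"'
parse_line <- function(l) {
  m <- regmatches(l, gregexpr(pat, l, perl = TRUE))[[1]]   # all KEY="value" pairs
  vals <- sub(pat, "\\2", m, perl = TRUE)
  names(vals) <- sub(pat, "\\1", m, perl = TRUE)
  vals
}
parsed <- lapply(lines, parse_line)
df <- as.data.frame(do.call(rbind, parsed), stringsAsFactors = FALSE)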
I am attempting to extract tables from very large text files (computer logs). Dickoa provided very helpful advice to an earlier question on this topic here: extracting table from text file
I modified his suggestion to fit my specific problem and posted my code at the link above.
Unfortunately I have encountered a complication. One column in the table contains spaces. These spaces generate an error when I try to run the code at the link above. Is there a way to modify that code, specifically the read.table call, so that it recognizes the second column below as a single column?
Here is a dummy table in a dummy log:
> collect.models(, adjust = FALSE)
model npar AICc DeltaAICc weight Deviance
5 AA(~region + state + county + city)BB(~region + state + county + city)CC(~1) 17 11111.11 0.0000000 5.621299e-01 22222.22
4 AA(~region + state + county)BB(~region + state + county)CC(~1) 14 22222.22 0.0000000 5.621299e-01 77777.77
12 AA(~region + state)BB(~region + state)CC(~1) 13 33333.33 0.0000000 5.621299e-01 44444.44
12 AA(~region)BB(~region)CC(~1) 6 44444.44 0.0000000 5.621299e-01 55555.55
>
> # the three lines below count the number of errors in the code above
Here is the R code I am trying to use. This code works if there are no spaces in the second column, the model column:
my.data <- readLines('c:/users/mmiller21/simple R programs/dummy.log')
# markers for the first and last line of the block that contains the table
top <- '> collect.models\\(, adjust = FALSE)'
bottom <- '> # the three lines below count the number of errors in the code above'
my.data <- my.data[grep(top, my.data):grep(bottom, my.data)]
x <- read.table(text = my.data, comment.char = ">")
I believe I must use the variables top and bottom to locate the table in the log, because the log is huge, variable and complex. Also, not every table contains the same number of models.
Perhaps a regular expression could be used here, taking advantage of the AA and the CC(~1) present in every model name, but I do not know how to begin. Thank you for any help, and sorry for the follow-up question; I should have used a more realistic example table in my initial question. I have a large number of logs, otherwise I could just extract and edit the tables by hand. The table itself is an odd object which I have only ever been able to export directly with capture.output, which would probably leave me with the same problem as above.
EDIT:
All spaces seem to come right before and right after a plus sign. Perhaps that information can be used here to fill the spaces or remove them.
Try inserting my.data <- gsub(" *\\+ *", "+", my.data) before the read.table call. At that point my.data is still a character vector of lines, so gsub removes the spaces around every plus sign and the model names no longer contain spaces:
my.data <- my.data[grep(top, my.data):grep(bottom, my.data)]
my.data <- gsub(" *\\+ *", "+", my.data)
x <- read.table(text = my.data, comment.char = ">")
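If removing the spaces is not an option, here is a hedged alternative sketch that uses the AA / CC(~1) anchors mentioned in the question: applied to the my.data character vector (the lines between top and bottom), it pulls the model string out with a regex and reads the remaining numeric fields separately (the column names are taken from the dummy table above):
tbl_lines <- my.data[grepl("AA\\(.*CC\\(~1\\)", my.data)]
# capture the model string, then strip it off to leave only the numeric fields
model <- sub("^ *[0-9]+ +(AA\\(.*CC\\(~1\\)).*$", "\\1", tbl_lines)
rest  <- sub("^ *[0-9]+ +AA\\(.*CC\\(~1\\) +", "", tbl_lines)
nums  <- read.table(text = rest)
names(nums) <- c("npar", "AICc", "DeltaAICc", "weight", "Deviance")
x <- cbind(model, nums)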
This is a repost from the Statistics part of Stack Exchange; I asked the question there and was advised to ask it here instead. So here it is.
I have a list of data frames, each with a similar structure, and only one column in each data frame is numeric. Because of my data requirements, it is essential that the data frames have different lengths (numbers of rows). I want to create a boxplot of the numerical values, categorized over the attributes in another column, but the boxplot should include information from all the data frames.
I hope it is a clear question. I will post sample data soon.
Sam,
I'm assuming this is a follow-up to this question? Maybe your sample data will illustrate the nuances of your needs better (the "categorized over attributes in another column" part), but the same melting approach should work here.
library(ggplot2)
library(reshape2)
#Fake data
a <- data.frame(a = rnorm(10))
b <- data.frame(b = rnorm(100))
c <- data.frame(c = rnorm(1000))
#In a list
myList <- list(a,b,c)
#In a melting pot
df <- melt(myList)
#Separate boxplots for each data.frame
qplot(factor(variable), value, data = df, geom = "boxplot")
#All values plotted together as one boxplot
qplot(factor(1), value, data = df, geom = "boxplot")
a<-data.frame(c(1,2),c("x","y"))
b<-data.frame(c(3,4,5),c("a","b","c"))
boxplot(c(a[1],b[1]))
With the "1"'s i select the column i want out of the data-frame.
A data-frames can not have different column-lengths (has to have same number of rows for each column), but you can tell boxplot to plot multiple datasets in parallel.
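For example, passing a named list to boxplot draws one box per list element even when the vectors have different lengths (the values here are just made up):
boxplot(list(short = rnorm(10), long = rnorm(100)),
        main = "Data sets of different lengths, side by side")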
Using the melt() function (from the reshape2 package) and the base R boxplot:
library(reshape2)
#Fake data
a <- data.frame(a = rnorm(10))
b <- data.frame(b = rnorm(100))
c <- data.frame(c = rnorm(100) + 5)
#In a list
myList <- list(a,b,c)
#In a melting pot
df <- melt(myList)
# plot using base R boxplot function
boxplot(value ~ variable, data = df)