Background
Experiment: a CRD with 4 treatments (N, L, M, and ML) and 3 replicates per treatment, so a total of 12 runs/trials/subjects. The concentration of each run was determined on days 0, 1, 2, 6, 7, 8, 9, 10, and 13, which gives a total of 9 x 12 = 108 observations/data points.
Here is a graph of the data.
Boxplots of all treatment and day combinations are shown below.
The following models were run.
gls without correlation
library(nlme)

conc.gls1 <- gls(Conc ~ factor(Trt)*Day,
                 data = data,
                 method = "REML")
Residuals vs. Fitted and QQ Plots.
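For reference, here is a minimal sketch of how diagnostic plots like these can be drawn in base R for any of the fits (shown for conc.gls1):
# residuals vs. fitted values
plot(fitted(conc.gls1), resid(conc.gls1),
     xlab = "Fitted values", ylab = "Residuals")
abline(h = 0, lty = 2)
# normal QQ plot of the residuals
qqnorm(resid(conc.gls1))
qqline(resid(conc.gls1))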
gls with AR(1), treating each trial/run as the subject
conc.gls2 <- gls(Conc ~ factor(Trt)*Day,
                 data = data,
                 weights = varIdent(form = ~1|factor(Trt)*factor(Day)),
                 correlation = corAR1(form = ~Day|subject))
Residuals vs. Fitted and QQ plot.
lme with day as a random factor
conc.lme <- lme(Conc ~ factor(Trt)*Day,
data = data,
random = ~1|Day,
weights = varIdent(form = ~1|Day))
Residuals vs. fitted and QQ plot
gam with day as a smoothing variable
conc.gam <- mgcv::gam(Conc ~ factor(Trt)+s(Day,k=6),
data=data)
Residuals vs. fitted and QQ plot
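For the gam fit, an equivalent set of diagnostics can be produced with mgcv's built-in checker:
# QQ plot, residuals vs. linear predictor, histogram of residuals, response vs. fitted
mgcv::gam.check(conc.gam)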
Here are the AIC values:
              df      AIC
conc.gls1  9.000000 3799.592
conc.gls2 45.000000 3731.522
conc.lme  18.000000 3533.088
conc.gam   9.936895 3998.234
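(The table above is the kind of output returned by a single AIC() call on all four fits, e.g.:)
# compare the four models; the df column comes from the same call
AIC(conc.gls1, conc.gls2, conc.lme, conc.gam)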
All the models except the first one seem justified.
The lme model seems to be the best: the residuals vs. fitted plot, the QQ plot, and the AIC all look great. But is it justified to treat day as a random factor?
The gls2 model seems to match the experimental design well, but the results were not satisfactory, probably because the time-series structure fits poorly. Does anyone have a better way to model this?
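For concreteness, a subject-level alternative (not fitted above) would keep the run as the grouping factor and combine a random intercept with the AR(1) structure; a sketch only, reusing the subject identifier from the gls fit (the name conc.lme2 is just for illustration):
# sketch: random intercept per run plus AR(1) within run,
# instead of treating Day as the grouping factor
conc.lme2 <- lme(Conc ~ factor(Trt)*Day,
                 data = data,
                 random = ~1|subject,
                 correlation = corAR1(form = ~Day|subject),
                 method = "REML")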
Appreciate any comments/suggestions!
Related
I have used the following statements to calculate predicted values from a logistic model:
proc logistic data=dev descending outest=model;
  class cat_vars;
  model dep = cont_var cat_var / selection=stepwise slentry=0.1 slstay=0.1
                                 stb lackfit;
  output out=tmp p=probofdefault;
  score data=dev out=Logit_File;
run;
I want to know how to interpret the probabilities I get in Logit_File. Are those values odds ratios (exp(y)), or are they probabilities (odds ratio/(1+odds ratio))?
Probabilities cannot be odds ratios: a probability is between 0 and 1, while odds ratios have no upper bound. The output from SCORE consists of probabilities.
If you consider the reason for there being a SCORE option in the first place, this should make sense: SCORE is designed to score new data sets using an old model. It uses the odds ratios and so on of the old model on a new data set.
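As a quick sanity check of the scale: for a binary logistic model, the scored value is the logistic transform of the linear predictor, exp(xb)/(1+exp(xb)), which always lies between 0 and 1. Odds of 3, for example, correspond to a probability of 3/(1+3) = 0.75.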
I have a set of data with a dependent variable that is a count and several independent variables. My primary independent variable is a large dollar value. If I divide the dollar values by 10,000 (to keep the coefficients manageable), the models (negative binomial and zero-inflated negative binomial) run in Stata and I can generate predicted counts with confidence intervals. However, it is theoretically more logical to take the natural log of this variable. When I do that, the models still run, but the predicted counts now range between about 0.22 and 0.77. How do I fix this so that the predicted counts are generated correctly?
Your question does not show any code or data, and it is nearly impossible to know what is going wrong without those two ingredients. Your question reads as "I did some stuff to this other stuff with surprising results." To ask a good question, you should replicate your coding approach with a dataset that everyone has access to, like rod93.
Here's my attempt at that, which shows reasonably similar predictions with nbreg from both models:
webuse rod93, clear
replace exposure = exposure/10000
nbreg deaths exposure age_mos, nolog
margins
predictnl d1 =predict(n), ci(lb1 ub1)
/* Compare the prediction for the first obs by hand */
di exp(_b[_cons]+_b[age_mos]*age_mos[1]+_b[exposure]*exposure[1])
di d1[1]
gen ln_exp = ln(exposure)
nbreg deaths ln_exp age_mos, nolog
margins
predictnl d2 = predict(n), ci(lb2 ub2)
/* Compare the prediction for the first obs by hand */
di exp(_b[_cons]+_b[age_mos]*age_mos[1]+_b[ln_exp]*ln(exposure[1]))
di d2[1]
sum d? lb* ub*, sep(2)
This produces very similar predictions and confidence intervals:
. sum d? lb* ub*, sep(2)

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
          d1 |         21    84.82903    25.44322   12.95853   104.1868
          d2 |         21     85.0432    25.24095   32.87827   105.1733
-------------+---------------------------------------------------------
         lb1 |         21    64.17752    23.19418   1.895858   80.72885
         lb2 |         21    59.80346    22.01917     10.9009   79.71531
-------------+---------------------------------------------------------
         ub1 |         21    105.4805    29.39726   24.02121   152.7676
         ub2 |         21    110.2829    29.16468   51.76427    143.856
In what follows I plot the mean of an outcome of interest (price) by a grouping variable (foreign) for each possible value taken by the fake variable time:
sysuse auto, clear
gen time = rep78 - 3
bysort foreign time: egen avg_p = mean(price)
scatter avg_p time if (foreign==0 & time>=0) || ///
scatter avg_p time if (foreign==1 & time>=0), ///
legend(order(1 "Domestic" 2 "Foreign")) ///
ytitle("Average price") xlab(#3)
What I would like to do is to plot the difference in the two group means over time, not the two separate means.
I am surely missing something, but to me it looks complicated because the information about the averages is stored "vertically" (in avg_p).
Arguably the easiest way to do this is to use linear regression to estimate the differences:
/* Regression Way */
drop if time < 0 | missing(time)
reg price i.foreign##i.time
margins, dydx(foreign) at(time =(0(1)2))
marginsplot, noci title("Foreign vs Domestic Difference in Price")
If regression is hard to wrap your mind around, the other option involves mangling the data with a reshape:
/* Transform the Data */
keep price time foreign
collapse (mean) price, by(time foreign)
reshape wide price, i(time) j(foreign)
gen diff = price1-price0
tw connected diff time
Here is another approach. graph dot will happily plot means.
sysuse auto, clear
set scheme s1color
collapse price if inrange(rep78, 3, 5), by(foreign rep78)
reshape wide price, i(rep78) j(foreign)
rename price0 Domestic
label var Domestic   // no label text: clears any variable label so the name itself is displayed
rename price1 Foreign
label var Foreign
graph dot (asis) Domestic Foreign, over(rep78) vertical ///
marker(1, ms(Oh)) marker(2, ms(+))
I am calculating the Jaccard similarity between each pair of my (m) training examples, each with 6 features (Age, Occupation, Gender, Product_range, Product_cat, and Product), forming an (m*m) similarity matrix.
I get unexpected values in the matrix. I have identified the source of the problem, but I do not have an efficient solution for it.
A sample of the dataset:
ID AGE Occupation Gender Product_range Product_cat Product
1100 25-34 IT M 50-60 Gaming XPS 6610
1101 35-44 Research M 60-70 Business Latitude lat6
1102 35-44 Research M 60-70 Performance Inspiron 5810
1103 25-34 Lawyer F 50-60 Business Latitude lat5
1104 45-54 Business F 40-50 Performance Inspiron 5410
The matrix I get is
Problem Statement:
The value under the red box shows the similarity between rows 1104 and 1101 of the sample dataset. The two rows are not similar if you compare their respective columns; however, the value 0.16 arises because the term "Business" is present in the "Occupation" column of row 1104 and in the "Product_cat" column of row 1101, which counts as a match when the intersection of the rows is taken.
My code simply takes the intersection of the two rows without looking at the columns. How do I change my code to handle this case while keeping the performance equally good?
My code:
import itertools

half_matrix = []
# data_set is a list of sets (one set of feature values per row);
# tot_len is the denominator used for the similarity (defined elsewhere).
for row1, row2 in itertools.combinations(data_set, r=2):
    intersection = row1.intersection(row2)  # matches values regardless of column
    half_matrix.append(float(len(intersection)) / tot_len)
The simplest way out of this is to add a column-specific prefix to all entries. Example of a parsed row:
row = ["ID:1100", "AGE:25-34", "Occupation:IT", "Gender:M", "Product_range:50-60", "Product_cat:Gaming", "Product:XPS 6610"]
There are many other ways around this, including splitting each row into a set of k-mers and applying the Jaccard-based MinHash algorithm to compare those sets, but there is no need for that in your case.
I have a binary outcome variable (disease) and a continuous independent variable (age). There's also a cluster variable clustvar. Logistic regression assumes that the log odds is linear with respect to the continuous variable. To visualize this, I can categorize age as (for example, 0 to <5, 5 to <15, 15 to <30, 30 to <50 and 50+) and then plot the log odds against the category number using:
logistic disease i.agecat, vce(cluster clustvar)
margins agecat, predict(xb)
marginsplot
However, since the categories are not equal width, it would be better to plot the log odds against the mid-point of the categories. Is there any way that I can manually define that the values plotted on the x-axis by marginsplot should be 2.5, 10, 22.5, 40 and (slightly arbitrarily) 60, and have the points spaced appropriately?
If anyone is interested, I achieved the required graph as follows.
I recategorised the age variable slightly differently, using (integer) values that represent the mid-point of each category:
gen agecat = .
replace agecat = 3 if age<6
replace agecat = 11 if age>=6 & age<16
replace agecat = 23 if age>=16 & age<30
replace agecat = 40 if age>=30 & age<50
replace agecat = 60 if age>=50 & age<.
For labelling purposes, I created a value label matching those cut-points:
label define agecat 3 "Less than 6y" 11 "6 to <16y" 23 "16 to <30y" 40 "30 to <50y" 60 "50y and over"
label values agecat agecat
Ran logistic regression as above:
logistic disease i.agecat, vce(cluster clustvar)
Used margins and plotted using marginsplot:
margins agecat, predict(xb)
marginsplot