I'm fitting an LMM (linear mixed model) in SAS, and I would like to get an estimate (and a p-value) for a linear combination of some of the regression coefficients.
Say that the model is:
b0 + b1*Time + b2*X1 + b3*X2 + b4*(Time*X1)
and say that I want an estimate and a p-value for b1 + b4.
What should I do?
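In SAS this kind of linear combination is typically requested with an ESTIMATE statement in PROC MIXED (if Time and X1 are continuous, something like `estimate 'b1+b4' Time 1 Time*X1 1;`). The underlying computation is a Wald test on a contrast c'b. Here is a minimal numpy sketch on made-up data with a plain linear model standing in for the fixed-effect part; the mixed-model case only changes how the coefficient covariance matrix is obtained:

```python
import math
import numpy as np

# Made-up data for the model b0 + b1*Time + b2*X1 + b3*X2 + b4*(Time*X1)
rng = np.random.default_rng(0)
n = 200
time = rng.normal(size=n)
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
X = np.column_stack([np.ones(n), time, x1, x2, time * x1])
beta = np.array([1.0, 0.5, -0.3, 0.2, 0.8])   # true b0..b4, so b1+b4 = 1.3
y = X @ beta + rng.normal(scale=0.5, size=n)

# Fit by least squares (stands in for the fixed-effect estimates of the LMM)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)         # Var(beta_hat)

# Contrast vector picking out b1 + b4 (the Time and Time*X1 coefficients)
c = np.array([0.0, 1.0, 0.0, 0.0, 1.0])
estimate = c @ beta_hat                       # point estimate of b1 + b4
se = math.sqrt(c @ cov @ c)                   # its standard error
z = estimate / se
p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided normal p-value
```

The same contrast vector logic is what the ESTIMATE statement applies internally, using the model's estimated covariance of the fixed effects.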
Related
I am new to generalised linear modelling. I fitted a negative binomial model and then tried to estimate the residuals from it.
Here is what I did:
Run a negative binomial regression model with the nbreg command in Stata 17.
Run the predict command to obtain the predicted values.
Generate the residuals by subtracting the predicted values from the observed values.
Did I do it correctly?
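The steps above produce raw (response) residuals, y − μ̂. For a negative binomial model these are valid to compute, though Pearson residuals, which divide by the model's variance function, are often more useful for diagnostics since the variance grows with the mean. A small numpy sketch with made-up observed counts, fitted means, and an assumed overdispersion parameter (in Stata the fitted means come from `predict` after `nbreg`):

```python
import numpy as np

# Observed counts and model-predicted means (mu_hat would come from the
# fitted model; the values here are made up for illustration)
y = np.array([0, 3, 1, 7, 2], dtype=float)
mu_hat = np.array([0.8, 2.5, 1.4, 5.9, 2.2])
alpha = 0.4  # NB2 overdispersion parameter (assumed, reported by the fit)

raw_resid = y - mu_hat                        # what the question describes
var_nb2 = mu_hat + alpha * mu_hat**2          # NB2 variance function
pearson_resid = raw_resid / np.sqrt(var_nb2)  # scale-adjusted residuals
```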
I am trying to estimate the above nonlinear model in Stata. Unfortunately, I am not comfortable with Stata. Can anyone help me write the above function in Stata?
How can we write regional dummies, time fixed effects, and country fixed effects in the nl command in Stata?
Is there a way to write the summation in the above equation in Stata? Alternatively, would it be easier to estimate the equation separately for each region?
Stata 15 introduced a native command for fitting non-linear panel data models.
https://www.stata.com/new-in-stata/nonlinear-panel-data-models-with-random-effects/
That might help get you started, but you need Stata 15.
We are interested in regression where both the input and output vectors are multivariate, in particular linear regression. We know that Weka's linear regression function only accepts a single value as the output. Although we could train a single-output regressor for each output dimension, this seems very inefficient. We also found an RBFRegressor function that does RBF regression, but again with a single output. Is there any function in Weka for learning linear or nonlinear regression that accepts a vector as the output instead of a single value?

We are new to Weka, so apologies if we are missing something obvious, but we couldn't find it in the Weka documentation, the textbook, or on the web.
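For plain linear regression, one model per output is less wasteful than it may seem: every output column shares the same least-squares factorization of the input matrix, so a multivariate fit is a single stacked solve. A numpy sketch on made-up data, illustrating the math rather than any Weka API:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_out = 100, 4, 3
X = rng.normal(size=(n, d_in))
B_true = rng.normal(size=(d_in, d_out))
Y = X @ B_true + 0.01 * rng.normal(size=(n, d_out))

# One least-squares solve fits all output columns at once:
# B_hat minimizes ||X B - Y||_F, column by column.
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# This is mathematically equivalent to d_out independent single-output
# regressions, but the expensive factorization of X is shared.
```

So wrapping a single-output learner in a per-dimension loop gives exactly the multivariate linear solution; the overhead is in the tooling, not the statistics.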
I'm using the PROC LOGISTIC procedure in SAS with the option SELECTION=SCORE, which gives me a few logistic regression models and their score chi-square values. My question is: which model is better, the one with the smaller chi-square or the one with the larger?
In general, a larger chi-squared statistic corresponds to a smaller p-value (more significance). However, it is important to account for the shape of the chi-squared distribution and the number of degrees of freedom: the relationship between the p-value and the statistic changes as the degrees of freedom change.
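To illustrate, here is the same statistic evaluated at two different degrees of freedom, using the standard closed-form chi-squared tail probabilities for df = 1 and df = 3 (a sketch, not a general-purpose implementation):

```python
import math

def chi2_sf(x, df):
    """Survival function P(X > x) of chi-squared; closed forms for df 1 and 3."""
    if df == 1:
        return math.erfc(math.sqrt(x / 2))
    if df == 3:
        return (math.erfc(math.sqrt(x / 2))
                + math.sqrt(2 * x / math.pi) * math.exp(-x / 2))
    raise ValueError("only df 1 and 3 are implemented in this sketch")

stat = 10.0
# The same statistic is far more significant with fewer degrees of freedom
p1 = chi2_sf(stat, 1)   # ~0.0016
p3 = chi2_sf(stat, 3)   # ~0.019
```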
The model with the larger score chi-square is better, when comparing models with the same number of effects.
I'm estimating the parameters of a GMM using EM.
When I run my Matlab script, the EM code gives a single "log-likelihood" value.
However, in OpenCV the output of EM.train is a matrix containing the log-likelihood of every sample.
How do I get a single log-likelihood value? Should I take the minimum of the per-sample log-likelihoods, or their sum?
You need the sum of the log-probabilities of the data points you used to estimate the probability density function; that sum is the log-likelihood of your estimate.
You can find a good explanation in the book "Pattern Recognition and Machine Learning" (Bishop).
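Concretely, the per-sample values are log p(x_i | θ), and the single number EM reports is their sum, Σ_i log p(x_i | θ). A minimal numpy sketch for a 1-D Gaussian mixture (a made-up helper, not the OpenCV API):

```python
import numpy as np

def gmm_loglik(X, weights, means, variances):
    """Sum of per-sample log-likelihoods under a 1-D Gaussian mixture."""
    X = np.asarray(X, float)[:, None]                 # shape (n, 1)
    weights = np.asarray(weights, float)              # (k,)
    means = np.asarray(means, float)                  # (k,)
    variances = np.asarray(variances, float)          # (k,)
    # log of w_k * N(x | mu_k, var_k) for every sample/component pair: (n, k)
    log_comp = (np.log(weights)
                - 0.5 * np.log(2 * np.pi * variances)
                - 0.5 * (X - means) ** 2 / variances)
    # log-sum-exp over components -> per-sample log-likelihood, shape (n,)
    m = log_comp.max(axis=1, keepdims=True)
    per_sample = m.ravel() + np.log(np.exp(log_comp - m).sum(axis=1))
    return per_sample.sum()                           # the single reported value
```

Summing (not taking the minimum) is what corresponds to the likelihood of the whole data set, since independent samples multiply in probability and therefore add in log-probability.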