I am trying to estimate the marginal effects of my xtlogit model in Stata, which looks like this:
xtlogit onset c.l.log_welfarespending##c.l.ethnic_groups l.gdplog, re vce(robust)
The model itself runs smoothly and there are no complications at all. When I try to estimate the marginal effects:
margins log_welfarespending, at(l.ethnic_groups = (0 6))
I receive the error "log_welfarespending: factor variables may not contain noninteger values". From what I have read so far, Stata sometimes needs to be told whether a variable is categorical or continuous. When I change the code to:
margins c.log_welfarespending, at(l.ethnic_groups = (0 6))
the error changes to "only factor variables and their interactions are allowed". Any suggestions on how to solve the problem?
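One likely fix, sketched below and untested on these data: a bare varlist after margins is treated as a factor variable, so a continuous regressor belongs in the dydx() option instead, with the at() values set on the lagged moderator:
margins, dydx(l.log_welfarespending) at(l.ethnic_groups = (0 6))
marginsplot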
(cross-posted at http://www.statalist.org/forums/forum/general-stata-discussion/general/1370770-margins-plot-of-treatment-effect-rather-than-y-for-values-of-a-covariate)
I'm running a multiple regression (the outcome variable is continuous; it happens to be GPA). The covariate of interest is a dummy variable for treatment status; another of the covariates is a pre-score. We want to look at how the treatment effect differs at various values of the pre-score. The structure of the model is not complicated:
regress GPA treatment pre_score X3 X4 X5...
What I want is a graph that shows what the treatment effect is (values of Beta1) at various values of pre-score (X2). It's straightforward to get a graph with values of the OUTCOME at various values of X2:
margins, at(pre_score= (1(0.25)5)) post
marginsplot
I have consulted an array of resources and tried alternatives using marginscontplot, coefplot with recast, the dydx() option, and so forth, but I remain unsuccessful. This seems like something there must be a way to do; wanting to know whether a treatment effect varies across values of a control (say, income) must be common.
Can anyone direct me to the right command, or options for margins, to output values of Beta1 (the coefficient on the treatment dummy), rather than of Y (GPA), at values of pre_score?
The question was resolved at Statalist. It turns out that margins alone can't do what I was trying to do; the model needs to be run with an interaction term. Then it's simple.
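For reference, a minimal sketch of that resolution, using the variable names from the question:
regress GPA i.treatment##c.pre_score X3 X4 X5
margins, dydx(treatment) at(pre_score = (1(0.25)5)) post
marginsplot   // now plots the treatment effect, rather than GPA, against pre_score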
I have a group of treated firms in a country, and for each firm I would like to find the closest match in terms of industry, size, and profitability in the rest of the country. I am working in Stata. All I need is to form a control group; could anybody guide me with the code? That would be greatly appreciated! I currently have the following, which doesn't get me what I need:
psmatch2 (logpension) (treated sector logassets logebitda), logit ate
Here's how you might match on x1 and x2 using Mahalanobis distance as a metric, to get the effect on y from treatment t:
use http://ssc.wisc.edu/sscc/pubs/files/psm, clear
psmatch2 t, mahalanobis(x1 x2) outcome(y) ate
The variable _n1 stores the observation number of the matched control observation for every treatment observation.
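If the goal is just to pull out the control group, here is one sketch that relies only on the variables psmatch2 itself creates (_treated, _weight, _id, and _n1):
* _weight is nonmissing for every observation used in the match,
* so the matched controls are the untreated observations with a weight
gen byte matched_control = (_treated == 0 & _weight < .)
* _n1 holds the _id of the matched control for each treated observation
list _id _n1 if _treated == 1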
The following is a full set of code you can run to find the average treatment effect on the treated (your most important result) and to check whether the data are balanced (i.e., whether your result is valid). Before you run it, make sure your treated variable is coded so that 0 is the control group and 1 is the experimental/treatment group. neighbor(1) selects nearest-neighbor matching: it pairs each treated observation with the control observation whose propensity score is closest in absolute value.
psmatch2 treated sector logassets logebitda, outcome(logpension) neighbor(1) common
After running psmatch2, you need to make sure your data are balanced, so run this:
pstest sector logassets logebitda, treated(treated)
If the t-test shows significance below 0.05 for any covariate, your data are not balanced. To check the balance of your data visually, you can also run
psgraph
right after your psmatch2 command.
Good luck!
I'm using the Heckman selection model, which consists of two equations: a probit selection equation and a multiple regression outcome equation.
How can I put dummy variables into these equations?
Do the variables have to be in logarithmic form?
How can I create logarithmic variables in Stata?
Thank you.
Here's an example of how you might do what you ask. The example looks at the effect of being a union member on log wages:
webuse union3
gen log_wage = ln(wage)
etregress log_wage age grade i.smsa i.black tenure, treat(union = i.south i.black tenure) twostep
etregress estimates the average treatment effect of an endogenous binary treatment variable. In plain English, that means the "first stage" is a probit. Estimation is by either full maximum likelihood or a two-step consistent estimator; the example above uses the two-step version.
The dummies are created on the fly by putting an i. in front of the covariates. This is called factor-variable notation, and it also makes interactions a breeze; a few examples follow below. You can also do tab race, gen(d_) to create d_1, d_2, and d_3 (three race dummies, one of which you can drop).
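A few illustrative lines, with hypothetical variables y, x, treated, and race:
regress y i.race              // one dummy per level of race, base level omitted
regress y i.treated##c.x      // main effects plus a treatment-by-x interaction
tab race, gen(d_)             // old style: creates the dummies d_1, d_2, d_3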
I am using unbalanced panel data covering 4 years. In trying to decide which panel model (xtgls, xtreg with the re option, or xtgee) is most appropriate for my analysis, I am estimating coefficients for xtgls under both the homoskedasticity and heteroskedasticity assumptions. When I run this model with the panels(hetero) option, I get very high z-scores (>30) and a significant effect on a term that is insignificant in all the other models.
Also, when I attempt to run lrtest comparing the heteroskedastic and homoskedastic models, I get an error that reads "hetero does not contain scalar e(ll)". I read that one way to address this is to add the igls option, which supposedly gives the same coefficients as the model without it. However, my model will not converge with the igls option. I thought these odd results for the heteroskedastic xtgls model could be because some time-invariant variable was miscoded (e.g., a person coded as female = 1 in one year and female = 0 in another). I checked my 2 IVs and this is not the case. I can't figure out what else could be causing this.
So my specific questions are:
Why would I be getting this error ("hetero does not contain scalar e(ll)") for the lrtest comparing the homoskedastic and heteroskedastic models? What does it mean?
Below is my Stata code:
* heteroskedastic error structure
xtgls continuous_DV IV1 IV2 IV1xIV2, i(person_id) panels(hetero)
estimates store hetero
* homoskedastic (default) error structure
xtgls continuous_DV IV1 IV2 IV1xIV2, i(person_id)
local df = e(N_g) - 1    // number of panels minus 1
disp `df'
lrtest hetero ., df(`df')
I ran xttest3, which indicated the errors are heteroskedastic.
Is igls an appropriate workaround for the error I am getting from lrtest ("hetero does not contain scalar e(ll)")? If so, what could be causing the model with the igls option not to converge? Below is the code:
xtgls continuous_DV IV1 IV2 IV1xIV2, i(person_id) panels(hetero) igls
In Stata, the xtgls command does not estimate a log likelihood because it is not maximum likelihood estimation, so you cannot get a likelihood-ratio test out of that model. To get a log likelihood, use the setup you had above but add the igls option. That is an entirely appropriate workaround; I don't think you need to start slashing your dataset.
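A sketch of that workaround, using the variable names from the question (fit both variance structures with igls so each stores e(ll), then compare):
* with igls, xtgls iterates to the maximum likelihood estimates and stores e(ll)
xtgls continuous_DV IV1 IV2 IV1xIV2, i(person_id) panels(hetero) igls
estimates store hetero_igls
xtgls continuous_DV IV1 IV2 IV1xIV2, i(person_id) igls
local df = e(N_g) - 1    // number of panels minus 1
lrtest hetero_igls ., df(`df')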
Alternatively, you can use a different estimator. GLS is appropriate when you have few, wide panels. If you have really short panels (only a couple years per individual), you should probably use something like xtreg.
http://www.stata.com/support/faqs/statistics/xtgls-versus-regress/
I am running Logit Regression in Stata.
How can I assess the explanatory power of the regression (in OLS, I look at R^2)?
Is there a meaningful approach to expanding the regression with other independent variables? (In OLS, I manually keep adding independent variables and look at the adjusted R^2; my guess is that Stata has simplified this manual process.)
The concept of R^2 is meaningless in logit regression, and you should disregard the McFadden pseudo-R^2 in the Stata output altogether.
Hosmer and Lemeshow recommend assessing the significance of an independent variable by comparing the deviance D with and without that variable in the equation, via the likelihood-ratio test statistic G = D(model without the variables [B]) - D(model with the variables [A]).
The Likelihood ratio test (G):
H0: coefficients for eliminated variables are all equal to 0
Ha: at least one coefficient is not equal to 0
When the LR test gives p > .05, do not reject H0, which implies that, statistically speaking, there is no advantage to including the additional IVs in the model.
Example Stata syntax to do this is:
logit DV IV1 IV2     // full model (A)
estimates store A
logit DV IV1         // restricted model (B)
estimates store B
lrtest A B           // LR test; B (the restricted model) is nested in A
Note, however, that many more aspects have to be checked and tested before we can conclude whether or not a logit model is 'acceptable'. For more details, I recommend visiting:
http://www.ats.ucla.edu/stat/stata/topics/logistic_regression.html
and consult:
David W. Hosmer and Stanley Lemeshow, Applied Logistic Regression, ISBN-13: 978-0471356325
I'm worried that you are getting the fundamentals of modelling wrong here:
The explanatory power of a regression model is determined by your substantive interpretation of the coefficients, not by the R-squared. The R^2 represents the share of variance that your linear model predicts, which may or may not be an appropriate benchmark for your model.
Likewise, the presence or absence of an independent variable in your model requires substantive justification. If you want to see how the R-squared changes when adding or subtracting parts of your model, see help nestreg on nested regression; a short example follows.
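For instance, a minimal sketch with made-up variable names:
nestreg: regress outcome (x1 x2) (x3) (x4 x5)   // reports the change in R-squared per block
nestreg, lr: logit DV (IV1) (IV2)               // for logit, likelihood-ratio tests per block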
To summarize: the explanatory power of your model and its variable composition cannot be determined just by crunching the numbers. You first need an adequate theory on which to build your model.
Now, if you are running logit:
Read Long and Freese (Ch. 3) to understand how log likelihood converges (or not) in your model.
Do not expect to find something as straightforward as the R-squared for logit.
Use logit diagnostics on your model, just as you would after running OLS.
You might also want to read up on the likelihood-ratio chi-squared test, or run additional lrtest commands as explained by Eric.
I certainly agree with the posters above that almost any measure of R^2 for a binary model like logit or probit shouldn't be given much weight. There are, however, ways to see how good a job your model does at predicting. For example, check out the following commands:
lroc          // plots the ROC curve and reports the area under it
estat class   // classification table of predicted vs. observed outcomes
Also, here's a good article for further reading:
http://www.statisticalhorizons.com/r2logistic