I am trying to follow along with a simple linear regression example provided by Stata: https://www.youtube.com/watch?v=HafqFSB9x70
It is done using Stata/SE 12 and works perfectly.
I am using Stata/MP 13.
And I am getting the following error:
. predict Predicted Wage, xb
too many variables specified
r(103);
I tried to look this up but couldn't figure it out.
How can I fix this? Does it relate to the version?
predict takes one new variable name, and you gave it two: Predicted and Wage. Try deleting the space between them, making PredictedWage one word.
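For example (wage and education are placeholder names here; substitute whatever variables the video's dataset uses):
. regress wage education
. predict PredictedWage, xb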
I built a pymc3 model using the DensityDist distribution. I have four parameters, of which three use Metropolis and one uses NUTS (this is chosen automatically by pymc3). However, I get two different UserWarnings:
1. Chain 0 contains [number] diverging samples after tuning. If increasing target_accept does not help try to reparameterize.
May I know what reparameterize means here?
2. The acceptance probability in chain 0 does not match the target. It is [value], but should be close to 0.8. Try to increase the number of tuning steps.
Digging through a few examples, I used random_seed, discard_tuned_samples, step = pm.NUTS(target_accept=0.95), and so on, and got rid of these warnings. But I couldn't find details of how these parameter values should be chosen. I am sure this has been discussed in various contexts, but I am unable to find solid documentation. I was using trial and error, as below:
with patten_study:
    #SEED = 61290425  #51290425
    step = pm.NUTS(target_accept=0.95)
    trace = pm.sample(step=step)
    # trace = pm.sample(4000, tune=10000, step=step,
    #                   discard_tuned_samples=False, random_seed=SEED)
I need to run this on different datasets, so I am struggling to fix these parameter values for each dataset I use. Is there a way to set these values up front, or to check the outcome (whether any warnings were raised), try other values, and run it in a loop?
Pardon me if I am asking something stupid!
In this context, reparameterization basically means finding a different but equivalent model that is easier to compute. There are many things you can do, depending on the details of your model:
Instead of using a Uniform distribution, use a Normal distribution with a large variance.
Change from a centered hierarchical parameterization to a non-centered one (see the sketch after this list).
Replace a Gaussian with a Student-t.
Model a discrete variable as a continuous one.
Marginalize variables, as in this example.
Whether these changes make sense or not is something you should decide based on your knowledge of the model and the problem.
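As a concrete illustration of the centered-to-non-centered change, here is a minimal PyMC3 sketch; the group size, priors, and variable names are made up for illustration and are not taken from your model:

import pymc3 as pm

# Centered: theta is drawn directly from Normal(mu, sigma). When sigma
# is small this creates a "funnel" geometry that NUTS struggles with,
# often producing divergences.
with pm.Model() as centered:
    mu = pm.Normal('mu', mu=0, sd=5)
    sigma = pm.HalfNormal('sigma', sd=5)
    theta = pm.Normal('theta', mu=mu, sd=sigma, shape=8)

# Non-centered: draw a standardized offset and rescale it. The model is
# equivalent, but the geometry is usually much easier to sample.
with pm.Model() as non_centered:
    mu = pm.Normal('mu', mu=0, sd=5)
    sigma = pm.HalfNormal('sigma', sd=5)
    offset = pm.Normal('offset', mu=0, sd=1, shape=8)
    theta = pm.Deterministic('theta', mu + sigma * offset)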
I ran SegNet on my own dataset (following the SegNet tutorial) and see great results via test_segmentation.py.
My problem is that I want to see the net's real output, not test_segmentation's own colorization (by class).
For example, if I have trained the net with 2 classes, then after training I want to see not just the 2 class colors, but the net's real-valued segmentation ([0.22, 0.19, 0.3, ...]), lighter and darker as the net sees it.
I hope I have explained myself well. Thanks for helping.
You could use a Python script to achieve what you want. Take a look at this script.
The command out = out['argmax'] extracts the raw output, so you can get a segmentation map with 'lighter and darker' values, as you wanted.
When you say the 'real' net color segmentation, I will assume you mean the probability maps. Effectively, the last layer will have one map for every class. If you check the predict function in inference.py, it takes the argmax, that is, the channel (which represents the class) with the highest probability. If you want to get these maps, you just have to get the data without computing the argmax, something like:
predicted = net.blobs['prob'].data
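For example, a hedged sketch of pulling one class's probability map and saving it as a grayscale image (this assumes a Caffe net whose final softmax blob is named 'prob', and that scipy is available; the blob name and class index are placeholders):

import scipy.misc

# Per-class probability maps, shape: (num_classes, height, width)
prob = net.blobs['prob'].data[0]

# Save the map for class 1; values are probabilities in [0, 1]
scipy.misc.toimage(prob[1], cmin=0.0, cmax=1.0).save('class1_prob.png')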
I solved it. The solution is to set cmin and cmax to range from 0 to 1 in the scipy saving method, for example: scipy.misc.toimage(output, cmin=0.0, cmax=1.0).save('/path/.../image.png')
I'm trying to save output from several hundred eststo calls, storing results of bivariate probability models, into one Excel file using esttab. It works for xtlogit (both ,re and ,pa), xtprobit (both ,re and ,pa), and for the linear probability model xtreg (both standard and ,fe). However, when I use xtreg y x i.year, fe I get the error message too many base levels specified. Google doesn't help me much.
I've been trying for an hour to create a reproducible example, but the Stata datasets all work fine. It does not seem to be due to the number of years or the fact that different specifications have data for different years. Still, the normal xtreg, fe works; the problem only appears with time dummies. The weirdest thing is that it works for all subsets of my variables, but not for the whole list (again, only in the time fixed effects specifications).
Does anyone have an idea how to proceed? Using drop(*.year) works whenever the problem does not arise (so in specifications where it works, I get output without the year dummies) but does not prevent the too many base levels specified error; ,nobaselevels has no apparent effect either. Is there a way to remove the time fixed effects from eststo before I pass the results on to esttab? Any workaround would be appreciated as well.
The problem you might be facing is Stata creating different base levels for the factor variable year in different regressions.
Try fixing the factor variable base level beforehand with fvset:
fvset base <some_number> year
Check help fvset and the manual entry for details. Also, read the source given below, which contains more information.
Source: two posts from Statalist; one from Tim Wade and another by Jeff Pitblado.
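A minimal sketch of that fix (y, x, and the base year 2000 are placeholders; pick a year that is present in every estimation sample):

fvset base 2000 year
eststo clear
eststo: xtreg y x i.year, fe
esttab using results.csv, drop(*.year) replace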
I have a data set of 140,000 observations and am trying to compare the ROC (receiver operating characteristic) curves for two different predictors. However, the roccomp command fails with an r(134) error, which reports too many values.
I am using Stata/MP 12 if that makes a difference.
Is there a work around for this?
I'd be interested in others' comments, but the following seems to work.
Download the somersd package via ssc install somersd. Then, using the c-statistic transform, the c-statistic with confidence intervals is produced very quickly.
somersd truth_var test_var, tr(c)
I am not sure how to construct a significance test to compare the two predictors, but it is immediately obvious whether the confidence intervals overlap.
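One possibility, hedged because I have not verified it on data like yours: somersd accepts several predictors in one call and leaves behind standard estimation results, so lincom may be usable to test the difference of the two c-statistics (truth_var, pred1, and pred2 are placeholders):

somersd truth_var pred1 pred2, tr(c)
lincom pred1 - pred2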
I am using unbalanced panel data for 4 years. In trying to decide which time-variant model (xtgls, xtreg, re, or xtgee) is most appropriate for my analysis, I am estimating coefficients for xtgls under both the homoskedasticity and heteroskedasticity assumptions. When I run this model with the hetero option, I get very high z-scores (>30) and a significant effect on a term that is insignificant in all other models.
Also, when I attempt to run lrtest comparing the hetero and homoskedastic models, I get an error that reads "hetero does not contain scalar e(ll)". I read that one way to address this is to add the option igls, which supposedly gives the same coefficients as the model without the igls option. However, my model will not converge with the igls option. I thought these odd results for the hetero xtgls model could be because some time-invariant variable was miscoded (e.g. a person coded as female = 1 for one year and female = 0 for another year). I checked my 2 IVs and this is not the case. I can't figure out what else could be causing this.
So my specific questions are:
Why would I be getting this error ("hetero does not contain scalar e(ll)") for the lrtest comparing the homoskedastic and hetero models? What does it mean?
Below is my stata code:
xtgls continuous_DV IV1 IV2 IV1xIV2, i(person_id) panels(hetero)
estimates store hetero
xtgls continuous_DV IV1 IV2 IV1xIV2, i(person_id)
local df=e(N_g)-1
disp `df'
lrtest hetero ., df(`df')
I ran xttest3, which indicated the errors are heteroskedastic.
Is igls an appropriate workaround for the error I am getting following the lrtest ("hetero does not contain scalar e(ll)")? If so, what could be causing the model with the igls option not to converge? Below is the code:
xtgls continuous_DV IV1 IV2 IV1xIV2, i(person_id) panels(hetero) igls
In Stata, the xtgls command does not estimate a log likelihood because it is not maximum likelihood estimation, so you cannot get a likelihood-ratio test out of that model. To get a log likelihood, use the setup you had above, but add the igls option. That is an entirely appropriate workaround; I don't think you need to start by slashing your dataset.
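Concretely, a minimal sketch of that LR test with the hetero model fit by igls so that e(ll) exists (this assumes the igls fit converges, which you said is a problem; rescaling variables or simplifying the specification sometimes helps, but that is a judgment call):

xtgls continuous_DV IV1 IV2 IV1xIV2, i(person_id) panels(hetero) igls
estimates store hetero_ml
xtgls continuous_DV IV1 IV2 IV1xIV2, i(person_id)
local df = e(N_g) - 1
lrtest hetero_ml ., df(`df')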
Alternatively, you can use a different estimator. GLS is appropriate when you have few, wide panels. If you have really short panels (only a couple years per individual), you should probably use something like xtreg.
http://www.stata.com/support/faqs/statistics/xtgls-versus-regress/