I am running several simple regressions and I wish to save the value of the significance (P > |t|) of a regression for a given coefficient in a local macro.
For example, I know that:
local consCoeff = _b[_cons]
will save the coefficient for the constant, and that with _se[_cons] I can get the standard error. However, there doesn't seem to be any documentation on how to get the significance.
It would be best if the underscore format worked (like _pt etc.), but anything will do.
There is no need to calculate anything yourself because Stata already does that for you.
For example:
. sysuse auto, clear
(1978 Automobile Data)
. regress price weight mpg
Source | SS df MS Number of obs = 74
-------------+---------------------------------- F(2, 71) = 14.74
Model | 186321280 2 93160639.9 Prob > F = 0.0000
Residual | 448744116 71 6320339.67 R-squared = 0.2934
-------------+---------------------------------- Adj R-squared = 0.2735
Total | 635065396 73 8699525.97 Root MSE = 2514
------------------------------------------------------------------------------
price | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
weight | 1.746559 .6413538 2.72 0.008 .467736 3.025382
mpg | -49.51222 86.15604 -0.57 0.567 -221.3025 122.278
_cons | 1946.069 3597.05 0.54 0.590 -5226.245 9118.382
------------------------------------------------------------------------------
The results are also returned in matrix r(table):
. matrix list r(table)
r(table)[9,3]
weight mpg _cons
b 1.7465592 -49.512221 1946.0687
se .64135379 86.156039 3597.0496
t 2.7232382 -.57468079 .54101802
pvalue .00812981 .56732373 .59018863
ll .46773602 -221.30248 -5226.2445
ul 3.0253823 122.27804 9118.3819
df 71 71 71
crit 1.9939434 1.9939434 1.9939434
eform 0 0 0
So for the p-value of, say, weight, you type:
. matrix A = r(table)
. local pval = A[4,1]
. display `pval'
.00812981
Alternatively, you can compute the p-value yourself. The t statistic for a coefficient is the coefficient divided by its standard error; the p-value can then be calculated using the ttail() function with the residual degrees of freedom. Since you are looking for the two-tailed p-value, the result is multiplied by two.
In your case, the following should do it:
local consPvalue = (2 * ttail(e(df_r), abs(_b[_cons]/_se[_cons])))
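As a quick cross-check against the r(table) route above, the same calculation for the weight coefficient (a minimal sketch using the auto data) should reproduce the value extracted earlier:
sysuse auto, clear
regress price weight mpg
local weightPvalue = 2 * ttail(e(df_r), abs(_b[weight]/_se[weight]))
display `weightPvalue'
* .00812981, matching the pvalue row of r(table)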
Related
I need to compare two different estimation methods and see whether their estimates are statistically the same or not. One of my estimation methods is SUR (Seemingly Unrelated Regression), and I estimated my 11 different models using
sureg (Y1 trend X1 .... X106) (Y2 trend X1 .... X181) ..... (Y11 trend X1 .... X130)
Then I estimated a single OLS model as follows:
glm Y1 trend X1 ...... X106
Now I need to test whether the parameter estimates of X1 to X106 coming from sureg are equal to the glm estimates of the same variables. I need to use the Hausman specification test, but I couldn't figure out how to store the parameter estimates for a specific equation of an SUR system estimation.
I couldn't find what I should add to estimates store XXX in order to keep only a subset of the SUR estimates.
It's not easy to give a working example using my own crowded data, so let me present the same problem using Stata's auto data.
. sysuse auto
(1978 automobile data)
. sureg (price mpg headroom) (trunk weight length) (gear_ratio turn headroom)
Seemingly unrelated regression
------------------------------------------------------------------------------
Equation             Obs   Params        RMSE    "R-squared"       chi2      P>chi2
------------------------------------------------------------------------------
price                 74        2     2576.37        0.2266      21.24      0.0000
trunk                 74        2    2.912933        0.5299      82.93      0.0000
gear_ratio            74        2    .3307276        0.4674      65.12      0.0000
------------------------------------------------------------------------------
------------------------------------------------------------------------------
| Coefficient Std. err. z P>|z| [95% conf. interval]
-------------+----------------------------------------------------------------
price        |
mpg | -258.2886 57.06953 -4.53 0.000 -370.1428 -146.4344
headroom | -419.4592 390.4048 -1.07 0.283 -1184.639 345.7201
_cons | 12921.65 2025.737 6.38 0.000 8951.277 16892.02
-------------+----------------------------------------------------------------
trunk        |
weight | -.0010525 .0013499 -0.78 0.436 -.0036983 .0015933
length | .1735274 .0471176 3.68 0.000 .0811785 .2658762
_cons | -15.6766 5.182878 -3.02 0.002 -25.83485 -5.518345
-------------+----------------------------------------------------------------
gear_ratio   |
turn | -.0652416 .0097031 -6.72 0.000 -.0842594 -.0462238
headroom | -.0601831 .0505198 -1.19 0.234 -.1592001 .0388339
_cons | 5.781748 .3507486 16.48 0.000 5.094293 6.469202
------------------------------------------------------------------------------
. glm price mpg headroom
Iteration 0: log likelihood = -686.17715
Generalized linear models                         Number of obs   =         74
Optimization     : ML                             Residual df     =         71
                                                  Scale parameter =    6912463
Deviance         =  490784895.4                   (1/df) Deviance =    6912463
Pearson          =  490784895.4                   (1/df) Pearson  =    6912463

Variance function: V(u) = 1                       [Gaussian]
Link function    : g(u) = u                       [Identity]

                                                  AIC             =   18.62641
Log likelihood   = -686.1771533                   BIC             =   4.91e+08
------------------------------------------------------------------------------
| OIM
price | Coefficient std. err. z P>|z| [95% conf. interval]
-------------+----------------------------------------------------------------
mpg | -259.1057 58.42485 -4.43 0.000 -373.6163 -144.5951
headroom | -334.0215 399.5499 -0.84 0.403 -1117.125 449.082
_cons | 12683.31 2074.497 6.11 0.000 8617.375 16749.25
------------------------------------------------------------------------------
As you can see, for the price model the GLM estimate of the mpg coefficient is -259.10, while the estimate of the same coefficient in the SUR system is -258.29.
Now I want to test whether the parameter estimates from the GLM and SUR methods are statistically equal or not.
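For what it's worth, a hedged sketch of one way to set up that comparison with the auto-data example above (the stored names est_glm and est_sur are purely illustrative, and which estimator should be treated as consistent versus efficient is left to you):
sysuse auto, clear
sureg (price mpg headroom) (trunk weight length) (gear_ratio turn headroom)
estimates store est_sur
glm price mpg headroom
estimates store est_glm
* hausman expects the consistent-under-the-null estimates first, the efficient ones second;
* equations(1:1) compares only equation 1 of each stored result (here, the price equation)
hausman est_glm est_sur, equations(1:1) constant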
I am running a regression on categorical variables in Stata:
regress y i.age i.birth
Part of the regression results output is below:
coef
age
28 .1
29 -.2
birth
1958 .2
1959 .5
I want the above results to be shown in the reverse order, so that I can export them to Excel using the putexcel command:
coef
age
29 -.2
28 .1
birth
1959 .5
1958 .2
I tried sorting the birth and age variables before regression, but this does not work.
Can someone help?
You cannot directly reverse the factor levels of a variable in the regression output.
However, if your end goal is to create a table in Microsoft Excel, one way to do this is the following:
sysuse auto.dta, clear
estimates clear
keep if !missing(rep78)
tabulate rep78, generate(rep)
regress price mpg weight rep2-rep5
estimates store r1
regress price mpg weight rep5 rep4 rep3 rep2
estimates store r2
Normal results:
esttab r1 using results.csv, label refcat(rep2 "Repair record", nolabel)
------------------------------------
(1)
Price
------------------------------------
Mileage (mpg) -63.10
(-0.72)
Weight (lbs.) 2.093**
(3.29)
Repair record
rep78== 2.0000 753.7
(0.39)
rep78== 3.0000 1349.4
(0.76)
rep78== 4.0000 2030.5
(1.12)
rep78== 5.0000 3376.9
(1.78)
Constant -599.0
(-0.15)
------------------------------------
Observations 69
------------------------------------
t statistics in parentheses
* p<0.05, ** p<0.01, *** p<0.001
Reversed results:
esttab r2 using results.csv, label refcat(rep5 "Repair record", nolabel)
------------------------------------
(1)
Price
------------------------------------
Mileage (mpg) -63.10
(-0.72)
Weight (lbs.) 2.093**
(3.29)
Repair record
rep78== 5.0000 3376.9
(1.78)
rep78== 4.0000 2030.5
(1.12)
rep78== 3.0000 1349.4
(0.76)
rep78== 2.0000 753.7
(0.39)
Constant -599.0
(-0.15)
------------------------------------
Observations 69
------------------------------------
t statistics in parentheses
* p<0.05, ** p<0.01, *** p<0.001
Note that here I am using the community-contributed command esttab to export the results.
You can make further tweaks if you fiddle with its options.
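For instance (a sketch; this particular option set is only illustrative), you could report standard errors instead of t statistics, show three decimals for coefficients, and overwrite the file on re-runs:
esttab r2 using results.csv, label refcat(rep5 "Repair record", nolabel) se b(3) replace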
EDIT:
This solution manually creates dummies for esttab, but you can instead create a new variable with the reverse coding and use the opposite base level, as @NickCox demonstrates in his solution.
You can reverse the coding and apply value labels to control exactly what you will see:
sysuse auto, clear
generate rep78_2 = 6 - rep78
label define new 1 "5" 2 "4" 3 "3" 4 "2" 5 "1"
label values rep78_2 new
regress mpg i.rep78_2
Source | SS df MS Number of obs = 69
-------------+---------------------------------- F(4, 64) = 4.91
Model | 549.415777 4 137.353944 Prob > F = 0.0016
Residual | 1790.78712 64 27.9810488 R-squared = 0.2348
-------------+---------------------------------- Adj R-squared = 0.1869
Total | 2340.2029 68 34.4147485 Root MSE = 5.2897
------------------------------------------------------------------------------
mpg | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
rep78_2 |
4 | -5.69697 2.02441 -2.81 0.006 -9.741193 -1.652747
3 | -7.930303 1.86452 -4.25 0.000 -11.65511 -4.205497
2 | -8.238636 2.457918 -3.35 0.001 -13.14889 -3.32838
1 | -6.363636 4.066234 -1.56 0.123 -14.48687 1.759599
|
_cons | 27.36364 1.594908 17.16 0.000 24.17744 30.54983
------------------------------------------------------------------------------
regress mpg ib5.rep78_2
Source | SS df MS Number of obs = 69
-------------+---------------------------------- F(4, 64) = 4.91
Model | 549.415777 4 137.353944 Prob > F = 0.0016
Residual | 1790.78712 64 27.9810488 R-squared = 0.2348
-------------+---------------------------------- Adj R-squared = 0.1869
Total | 2340.2029 68 34.4147485 Root MSE = 5.2897
------------------------------------------------------------------------------
mpg | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
rep78_2 |
5 | 6.363636 4.066234 1.56 0.123 -1.759599 14.48687
4 | .6666667 3.942718 0.17 0.866 -7.209818 8.543152
3 | -1.566667 3.863059 -0.41 0.686 -9.284014 6.150681
2 | -1.875 4.181884 -0.45 0.655 -10.22927 6.479274
|
_cons | 21 3.740391 5.61 0.000 13.52771 28.47229
------------------------------------------------------------------------------
If you wanted to see the same variable name as before, you could also do the following:
drop rep78
rename rep78_2 rep78
I am running a simple regression of race times against temperature just to develop some basic intuition. My data-set is very large and each observation is the race completion time of a unit in a given race, in a given year.
For starters I am running a very simple regression of race time on temperature bins.
Summary of temp variable:
    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
avg_temp_scc |   8309434        54.3         9.4          0         89
Summary of time variable:
    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
    chiptime |   8309434       267.5        59.6        122       1262
I decided to make 10 degree bins for temperature and regress time against those.
The code is:
egen temp_trial = cut(avg_temp_scc), at(0,10,20,30,40,50,60,70,80,90)
reg chiptime i.temp_trial
The output is
      Source |       SS            df        MS       Number of obs =  8309434
-------------+-----------------------------------    F(8, 8309425)  = 69509.83
       Model |  1.8525e+09          8   231557659     Prob > F      =   0.0000
    Residual |  2.7681e+10    8309425  3331.29368     R-squared     =   0.0627
-------------+-----------------------------------    Adj R-squared  =   0.0627
       Total |  2.9534e+10    8309433  3554.22521     Root MSE      =   57.717

------------------------------------------------------------------------------
chiptime | Coef. Std. Err. t P>|t| [95% Conf. Interval]
----------+----------------------------------------------------------------
temp_trial |
10 | -26.63549 2.673903 -9.96 0.000 -31.87625 -21.39474
20 | 10.23883 1.796236 5.70 0.000 6.71827 13.75939
30 | -16.1049 1.678432 -9.60 0.000 -19.39457 -12.81523
40 | -13.97918 1.675669 -8.34 0.000 -17.26343 -10.69493
50 | -10.18371 1.675546 -6.08 0.000 -13.46772 -6.899695
60 | -.6865365 1.675901 -0.41 0.682 -3.971243 2.59817
70 | 44.42869 1.676883 26.49 0.000 41.14206 47.71532
80 | 23.63064 1.766566 13.38 0.000 20.16824 27.09305
_cons | 273.1366 1.675256 163.04 0.000 269.8531 276.42
So Stata correctly drops one of the temperature bins (in this case 0-10).
Now I manually created the bins and ran the regression again:
gen temp0 = 1 if temp_trial==0
replace temp0 = 0 if temp_trial!=0
gen temp1 = 1 if temp_trial == 10
replace temp1 = 0 if temp_trial != 10
gen temp2 = 1 if temp_trial==20
replace temp2 = 0 if temp_trial!=20
gen temp3 = 1 if temp_trial==30
replace temp3 = 0 if temp_trial!=30
gen temp4=1 if temp_trial==40
replace temp4=0 if temp_trial!=40
gen temp5=1 if temp_trial==50
replace temp5=0 if temp_trial!=50
gen temp6=1 if temp_trial==60
replace temp6=0 if temp_trial!=60
gen temp7=1 if temp_trial==70
replace temp7=0 if temp_trial!=70
gen temp8=1 if temp_trial==80
replace temp8=0 if temp_trial!=80
reg chiptime temp0 temp1 temp2 temp3 temp4 temp5 temp6 temp7 temp8
The output is:
      Source |       SS            df        MS       Number of obs =  8309434
-------------+-----------------------------------    F(9, 8309424)  = 61786.51
       Model |  1.8525e+09          9   205829030     Prob > F      =   0.0000
    Residual |  2.7681e+10    8309424  3331.29408     R-squared     =   0.0627
-------------+-----------------------------------    Adj R-squared  =   0.0627
       Total |  2.9534e+10    8309433  3554.22521     Root MSE      =   57.717
--------------------------------------------------------------------------
chiptime | Coef. Std. Err. t P>|t| [95% Conf. Interval]
---------+----------------------------------------------------------------
temp0 | -54.13245 6050.204 -0.01 0.993 -11912.32 11804.05
temp1 | -80.76794 6050.204 -0.01 0.989 -11938.95 11777.42
temp2 | -43.89362 6050.203 -0.01 0.994 -11902.08 11814.29
temp3 | -70.23735 6050.203 -0.01 0.991 -11928.42 11787.94
temp4 | -68.11162 6050.203 -0.01 0.991 -11926.29 11790.07
temp5 | -64.31615 6050.203 -0.01 0.992 -11922.5 11793.87
temp6 | -54.81898 6050.203 -0.01 0.993 -11913 11803.36
temp7 | -9.703755 6050.203 -0.00 0.999 -11867.89 11848.48
temp8 | -30.5018 6050.203 -0.01 0.996 -11888.68 11827.68
_cons | 327.269 6050.203 0.05 0.957 -11530.91 12185.45
Note that the bins are exhaustive of the entire data set, Stata is including a constant in the regression, and none of the bins is dropped. Is this not incorrect? Given that a constant is included in the regression, shouldn't one of the bins be dropped to serve as the "base case"? I feel as though I am missing something obvious here.
Edit:
Here is a Dropbox link for the data and do-file:
It contains only the two variables under consideration. The file is 129 MB. I also have a picture of my output at the link.
This too is not an answer, but an extended comment, since I'm tired of fighting with the 600-character limit and the freeze on editing after 5 minutes.
In the comment thread on the original post, @user52932 wrote:
Thank you for verifying this. Can you elaborate on what exactly this precision issue is? Does this only cause problems in this multicollinearity issue? Could it be that when I am using factor variables this precision issue may cause my estimates to be wrong?
I want to be unambiguous that the results from the regression using factor variables are as correct as those of any well-specified regression can be.
In the regression using dummy variables, the model was misspecified to include a set of multicollinear variables. Stata is then faulted for failing to detect the multicollinearity.
But there's no magic test for multicollinearity. It's inferred from characteristics of the cross-products matrix. In this case the cross-products matrix represents 8.3 million observations, and despite Stata's use of double-precision throughout, the calculated matrix passed Stata's test and was not detected as containing a multicollinear set of variables. This is the locus of the precision problem to which I referred. Note that by reordering the observations, the accumulated cross-products matrix differed enough so that it now failed Stata's test, and the misspecification was detected.
Now look at the results in the original post obtained from this misspecified regression. Note that if you add 54.13245 to the coefficients on each of the dummy variables and subtract the same amount from the constant, the resulting coefficients and constant are identical to those in the regression using factor variables. This is the textbook definition of the problem with multicollinearity - not that the coefficient estimates are wrong, but that the coefficient estimates are not uniquely defined.
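For instance, taking the 10-degree bin and the constant from the two sets of output above:
display -80.76794 + 54.13245
* -26.63549, the factor-variable coefficient on bin 10
display 327.269 - 54.13245
* 273.1366 (to rounding), the factor-variable constant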
In a comment above, @user52932 wrote:
I am unsure what Stata is using as the base case in my data.
The answer is that Stata used no base case; the results are what are to be expected when a set of multicollinear variables is included among the independent variables.
So this question is a reminder to us that statistical packages like Stata cannot infallibly detect multicollinearity. As it turns out, that's part of the genius of factor variable notation, I realize now. With factor variable notation, you tell Stata to create a set of dummy variables that by definition will be multicollinear, and since it understands that relationship between the dummy variables, it can eliminate the multicollinearity ex ante, before constructing the cross-products matrix, rather than attempt to infer the problem ex post, using the cross-products matrix's characteristics.
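A small sketch of the two equivalent ways factor-variable notation deals with that redundancy before estimation:
regress chiptime i.temp_trial
* one level is chosen as the base and dropped up front
regress chiptime ibn.temp_trial, noconstant
* or: keep every level and drop the constant instead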
We should not be surprised that Stata occasionally fails to detect multicollinearity, but rather gratified that it does as well as it does at doing so. After all, the second model is indeed a misspecification, which constitutes an unambiguous violation of the assumptions of OLS regression on the user's part.
This may not be an "answer" but it's too long for a comment, so I write it here.
My results are different. At the final regression, one variable is dropped:
. clear all
. set obs 8309434
number of observations (_N) was 0, now 8,309,434
. set seed 1
. gen avg_temp_scc = floor(90*uniform())
. egen temp_trial = cut(avg_temp_scc), at(0,10,20,30,40,50,60,70,80,90)
. gen chiptime = rnormal()
. reg chiptime i.temp_trial
Source | SS df MS Number of obs = 8,309,434
-------------+---------------------------------- F(8, 8309425) = 0.88
Model | 7.07729775 8 .884662219 Prob > F = 0.5282
Residual | 8308356.5 8,309,425 .999871411 R-squared = 0.0000
-------------+---------------------------------- Adj R-squared = -0.0000
Total | 8308363.58 8,309,433 .9998713 Root MSE = .99994
------------------------------------------------------------------------------
chiptime | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
temp_trial |
10 | .0010732 .0014715 0.73 0.466 -.0018109 .0039573
20 | .0003255 .0014713 0.22 0.825 -.0025581 .0032092
30 | .0017061 .0014713 1.16 0.246 -.0011776 .0045897
40 | .0003128 .0014717 0.21 0.832 -.0025718 .0031973
50 | .0007142 .0014715 0.49 0.627 -.0021699 .0035983
60 | .0021693 .0014716 1.47 0.140 -.0007149 .0050535
70 | -.0008265 .0014715 -0.56 0.574 -.0037107 .0020577
80 | -.0005001 .0014714 -0.34 0.734 -.0033839 .0023837
|
_cons | -.0006364 .0010403 -0.61 0.541 -.0026753 .0014025
------------------------------------------------------------------------------
. * "qui tab temp_trial, gen(temp)" is more convenient than "forv ..."
. forv k = 0/8 {
2. gen temp`k' = temp_trial==`k'0
3. }
. reg chiptime temp0-temp8
note: temp6 omitted because of collinearity
Source | SS df MS Number of obs = 8,309,434
-------------+---------------------------------- F(8, 8309425) = 0.88
Model | 7.07729775 8 .884662219 Prob > F = 0.5282
Residual | 8308356.5 8,309,425 .999871411 R-squared = 0.0000
-------------+---------------------------------- Adj R-squared = -0.0000
Total | 8308363.58 8,309,433 .9998713 Root MSE = .99994
------------------------------------------------------------------------------
chiptime | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
temp0 | -.0021693 .0014716 -1.47 0.140 -.0050535 .0007149
temp1 | -.0010961 .0014719 -0.74 0.456 -.003981 .0017888
temp2 | -.0018438 .0014717 -1.25 0.210 -.0047282 .0010407
temp3 | -.0004633 .0014717 -0.31 0.753 -.0033477 .0024211
temp4 | -.0018566 .0014721 -1.26 0.207 -.0047419 .0010287
temp5 | -.0014551 .0014719 -0.99 0.323 -.00434 .0014298
temp6 | 0 (omitted)
temp7 | -.0029958 .0014719 -2.04 0.042 -.0058808 -.0001108
temp8 | -.0026694 .0014718 -1.81 0.070 -.005554 .0002152
_cons | .0015329 .0010408 1.47 0.141 -.0005071 .0035729
------------------------------------------------------------------------------
The differences from yours are: (i) different data (I generated random numbers), and (ii) I used a forvalues loop instead of manual variable creation. Yet I see no errors in your code.
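As the comment embedded in the log above notes, tabulate with the generate() option builds the full dummy set in one step; a minimal sketch (the stub bin is illustrative, chosen to avoid clashing with the temp* variables already created):
quietly tabulate temp_trial, generate(bin)
* bin1, bin2, ..., one dummy per level of temp_trial
regress chiptime bin1-bin9
* with a constant included, one of the dummies should again be omitted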
In Stata, the command nlcom employs the delta method to test nonlinear hypotheses about estimated coefficients. The command displays the standard errors in the Results window, though unfortunately it does not save them anywhere.
What is available after estimation is just the matrix r(V), but I cannot figure out how to use it to compute the standard errors.
You need to use the post option, like this:
. sysuse auto
(1978 Automobile Data)
. reg price mpg weight
Source | SS df MS Number of obs = 74
-------------+------------------------------ F( 2, 71) = 14.74
Model | 186321280 2 93160639.9 Prob > F = 0.0000
Residual | 448744116 71 6320339.67 R-squared = 0.2934
-------------+------------------------------ Adj R-squared = 0.2735
Total | 635065396 73 8699525.97 Root MSE = 2514
------------------------------------------------------------------------------
price | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
mpg | -49.51222 86.15604 -0.57 0.567 -221.3025 122.278
weight | 1.746559 .6413538 2.72 0.008 .467736 3.025382
_cons | 1946.069 3597.05 0.54 0.590 -5226.245 9118.382
------------------------------------------------------------------------------
. nlcom ratio: _b[mpg]/_b[weight], post
ratio: _b[mpg]/_b[weight]
------------------------------------------------------------------------------
price | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
ratio | -28.34844 58.05769 -0.49 0.625 -142.1394 85.44254
------------------------------------------------------------------------------
. di _se[ratio]
58.057686
This standard error is the square root of the entry from the variance matrix r(V):
. matrix list r(V)
symmetric r(V)[1,1]
ratio
ratio 3370.6949
. di sqrt(3370.6949)
58.057686
Obviously you need to take square roots of the diagonal elements of r(V). Here's an approach that returns the standard errors as variables in a one-observation data set.
sysuse auto, clear
reg mpg weight turn
nlcom (v1: 1/_b[weight]) (v2: _b[weight]/_b[turn])
mata: se = sqrt(diagonal(st_matrix("r(V)")))'
clear
getmata (se1 se2 ) = se /* supply names as needed */
list
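A variation on the same idea, if you would rather keep the data in memory: push the standard errors back into a Stata matrix instead of into variables (the matrix name se and the local names are illustrative):
sysuse auto, clear
reg mpg weight turn
nlcom (v1: 1/_b[weight]) (v2: _b[weight]/_b[turn])
mata: st_matrix("se", sqrt(diagonal(st_matrix("r(V)")))')
matrix list se
local se1 = se[1,1]
local se2 = se[1,2]
display `se1' "  " `se2'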
I was trying to check whether Stata uses, in the model NormalReg below (a sample model), the initial values that I supplied from a previous reg. However, judging from iteration 0, it does not seem to take my initial values into account. Any help fixing this issue will be highly appreciated.
set seed 123
set obs 1000
gen x = runiform()*2
gen u = rnormal()*5
gen y = 2 + 2*x + u
reg y x
Source | SS df MS Number of obs = 1000
-------------+------------------------------ F( 1, 998) = 52.93
Model | 1335.32339 1 1335.32339 Prob > F = 0.0000
Residual | 25177.012 998 25.227467 R-squared = 0.0504
-------------+------------------------------ Adj R-squared = 0.0494
Total | 26512.3354 999 26.5388743 Root MSE = 5.0227
------------------------------------------------------------------------------
y | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x | 1.99348 .2740031 7.28 0.000 1.455792 2.531168
_cons | 2.036442 .3155685 6.45 0.000 1.417188 2.655695
------------------------------------------------------------------------------
cap program drop NormalReg
program define NormalReg
args lnlk xb sigma2
qui replace `lnlk' = -ln(sqrt(`sigma2'*2*_pi)) - ($ML_y1-`xb')^2/(2*`sigma2')
end
ml model lf NormalReg (reg: y = x) (sigma2:)
ml init reg:x = `=_b[x]'
ml init reg:_cons = `=_b[_cons]'
ml max, iter(1) trace
. ml max, iter(1) trace
initial: log likelihood = -<inf> (could not be evaluated)
searching for feasible values .+
feasible: log likelihood = -28110.03
rescaling entire vector .+.
rescale: log likelihood = -14623.922
rescaling equations ...+++++.
rescaling equations ....
rescale eq: log likelihood = -3080.0872
------------------------------------------------------------------------------
Iteration 0:
Parameter vector:
reg: reg: sigma2:
x _cons _cons
r1 3.98696 1 32
log likelihood = -3080.0872
------------------------------------------------------------------------------
Iteration 1:
Parameter vector:
reg: reg: sigma2:
x _cons _cons
r1 2.498536 1.773872 24.10726
log likelihood = -3035.3553
------------------------------------------------------------------------------
convergence not achieved
Number of obs = 1000
Wald chi2(1) = 86.45
Log likelihood = -3035.3553 Prob > chi2 = 0.0000
------------------------------------------------------------------------------
y | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
reg |
x | 2.498536 .2687209 9.30 0.000 1.971853 3.02522
_cons | 1.773872 .3086854 5.75 0.000 1.16886 2.378885
-------------+----------------------------------------------------------------
sigma2 |
_cons | 24.10726 1.033172 23.33 0.000 22.08228 26.13224
------------------------------------------------------------------------------
Warning: convergence not achieved
Apparently, if you want ml to evaluate the likelihood at the specified initial values at iteration 0, you must also supply an initial value for sigma2. Change the last section of your code to:
matrix rmse = e(rmse)
scalar mse = rmse[1,1]^2
ml model lf NormalReg (reg: y = x) (sigma2:)
ml init reg:x = `=_b[x]'
ml init reg:_cons = `=_b[_cons]'
ml init sigma2:_cons = `=scalar(mse)'
ml maximize, trace
Note that the ML estimate of sigma^2 will differ from the squared root mean squared error because ML does not adjust for degrees of freedom: with n = 1,000 and two regression parameters, sigma2 = (998/1000)*rmse^2.
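A quick numeric check of that relationship (a sketch, run right after ml maximize above):
scalar sigma2_ml = _b[sigma2:_cons]
* the ML estimate of sigma^2
quietly reg y x
display "rmse^2              = " e(rmse)^2
display "(998/1000) * rmse^2 = " e(rmse)^2 * e(df_r)/e(N)
display "ML sigma2           = " sigma2_ml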
Stuff like this is very sensitive. You are trusting that the results from the previous regression are still visible at the exact point the program is defined. That could be undermined directly or indirectly by several different operations. It's best to treat arguments you want to use as arguments to be fed to your program using the program's options at the point it runs.
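A minimal sketch of that advice, assuming the NormalReg evaluator above is already defined (the wrapper name fit_normalreg and its option names are purely illustrative):
capture program drop fit_normalreg
program define fit_normalreg
    // starting values arrive through options, so nothing here relies on
    // results from an earlier estimation still being in memory
    syntax , bx(real 0) bcons(real 0) s2(real 1)
    ml model lf NormalReg (reg: y = x) (sigma2:)
    ml init reg:x = `bx' reg:_cons = `bcons' sigma2:_cons = `s2'
    ml maximize, trace
end

reg y x
fit_normalreg, bx(`=_b[x]') bcons(`=_b[_cons]') s2(`=e(rmse)^2*e(df_r)/e(N)')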