BY processing in PROC NLMIXED; procedure stops due to error

I simulated 500 replications and planned to analyze each in NLMIXED using BY processing. My NLMIXED code is below:
PROC NLMIXED DATA=MELS GCONV=1E-12 QPOINTS=11;
   BY Rep;
   PARMS LMFI=&LMFI.
         SMFI=&SMFI.
         LMRIvar=&LMRIvar.
         SMRIvar=0 TO 0.15 BY 0.005;
   mu = LMFI + b0i;
   evar = EXP(SMFI + t0i);
   MODEL Y ~ NORMAL(mu,evar);
   RANDOM b0i t0i ~ NORMAL([0,0],[LMRIvar,0,SMRIvar]) SUBJECT=PersonID;
   ODS OUTPUT FitStatistics=Fit2 ConvergenceStatus=Conv2 ParameterEstimates=Parm2;
RUN;
For some of these replications, the variance components were sampled to be small, so a non-zero number of convergence failures is expected (note the ConvergenceStatus request on the ODS OUTPUT statement). However, when I hit the warning and error below, NLMIXED quits processing entirely, regardless of how many replications remain to be analyzed.
WARNING: The final Hessian matrix is full rank but has at least one negative eigenvalue. Second-order optimality condition violated.
ERROR: QUANEW Optimization cannot be completed.
Am I missing something? I would think that NLMIXED could acknowledge the error for that replication, but continue with the remaining replications. Thoughts are appreciated!
Best,
Ryan

Here is what I believe is occurring. The requirement that variances be non-negative and the fact that the distribution of variance estimates is long-tailed make variance components troublesome to estimate. An update to the variance component estimates may produce a negative value for one or more of them. The NLMIXED procedure then attempts to compute eigenvalues of the model variance components, and at that point the procedure stops with the error above.
But note that
V[Y] = (sd[Y])^2
V[Y] = exp(ln(V[Y]))
V[Y] = exp(2*ln(sd[Y]))
V[Y] = exp(2*ln_sd_Y)
Now, suppose that we make ln_sd_Y the parameter. References to V[Y] would then be written as the function shown in the last line above. Because the domain of ln_sd_Y is (-infinity, infinity), no lower bound on ln_sd_Y is needed, and the function exp(2*ln_sd_Y) will always produce a non-negative variance estimate. In fact, since a digital computer cannot actually represent negative infinity (only finite values heading toward it), exp(2*ln_sd_Y) will always produce a strictly positive estimate. The estimate may be very, very close to 0, but it always approaches 0 from above. This should keep SAS from ever trying to take the eigenvalue of a negative variance.
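A quick numerical check of that behavior (just an illustration, not part of the model code):
data _null_;
   do ln_sd_Y = 0.5, -5, -50;
      V_Y = exp(2*ln_sd_Y);   /* always strictly positive, no matter how negative ln_sd_Y gets */
      put ln_sd_Y= V_Y= best16.;
   end;
run;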
A slight alteration of your code writes LMRIvar and SMRIvar as functions of ln_sd_LMRIvar and ln_sd_SMRIvar.
PROC NLMIXED DATA=MELS GCONV=1E-12 QPOINTS=11;
   BY Rep;
   PARMS LMFI=&LMFI.
         SMFI=&SMFI.
         ln_sd_LMRIvar=%sysfunc(log(%sysfunc(sqrt(&LMRIvar.))))
         ln_sd_SMRIvar=-5 to -1 by 0.1;
   mu = LMFI + b0i;
   evar = EXP(SMFI + t0i);
   MODEL Y ~ NORMAL(mu,evar);
   RANDOM b0i t0i ~ NORMAL([0,0],
                           [exp(2*ln_sd_LMRIvar), 0,
                            exp(2*ln_sd_SMRIvar)]) SUBJECT=PersonID;
   ODS OUTPUT FitStatistics=Fit2 ConvergenceStatus=Conv2 ParameterEstimates=Parm2;
RUN;
Alternatively, you could employ a BOUNDS statement in an attempt to prevent updates of LMRIvar and/or SMRIvar from going negative. You could keep your original code and simply insert the statement
bounds LMRIvar SMRIvar > 0;
This is simpler than rewriting the model in terms of parameters that are allowed to range over the whole real line. However, my experience has been that employing parameters whose domain is (-infinity, infinity) is the better approach.
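For completeness, here is a sketch of the original call with that BOUNDS statement dropped in (everything else unchanged from the code in the question):
PROC NLMIXED DATA=MELS GCONV=1E-12 QPOINTS=11;
   BY Rep;
   PARMS LMFI=&LMFI.
         SMFI=&SMFI.
         LMRIvar=&LMRIvar.
         SMRIvar=0 TO 0.15 BY 0.005;
   BOUNDS LMRIvar SMRIvar > 0;
   mu = LMFI + b0i;
   evar = EXP(SMFI + t0i);
   MODEL Y ~ NORMAL(mu,evar);
   RANDOM b0i t0i ~ NORMAL([0,0],[LMRIvar,0,SMRIvar]) SUBJECT=PersonID;
   ODS OUTPUT FitStatistics=Fit2 ConvergenceStatus=Conv2 ParameterEstimates=Parm2;
RUN;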

Related

SAS twosamplesurvival sample size question

I am trying to perform a sample size calculation in SAS for a two sample time to event case.
Here is the situation:
Assume both samples follow an exponential distribution.
Assume a given constant hazard ratio under the alternative hypothesis; call it hr (group 2 vs. group 1).
We will use the log-rank test.
Given accrual time a and follow-up time f.
Also given the exponential hazard for group 1; call it exph1.
Assume the sample size ratio between the two groups is 1:1.
The required nominal power is p.
Now my code looks like this:
proc power;
   twosamplesurvival test=logrank
      accrualtime      = a
      followuptime     = f
      refsurvexphazard = exph1
      hazardratio      = hr
      power            = p
      /* eventstotal = .   events total      */
      /* ntotal      = .   total sample size */
   ;
run;
You can uncomment either eventstotal = . or ntotal = ., depending on whether you want to compute the required total number of events or the total sample size.
They should not be the same: if the event has not occurred by the end of follow-up, the subject is right-censored, so the total sample size should be larger than the number of events.
However, I am always getting the same number for events total and total sample size. What did I do wrong here?
I actually know how to compute this by hand, and my hand calculation of the required number of events is very close to the SAS output (SAS gives a slightly larger value, but it is very close); however, my hand-calculated total sample size is much larger than the event number.
I cannot disclose the actual parameter values for confidentiality reasons. Could someone help? I would really appreciate it.
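For what it is worth, here is a minimal sketch of how the two solves can be requested in separate PROC POWER steps; the numeric values are hypothetical placeholders (the real ones are confidential), and the EVENTSTOTAL=/NTOTAL= usage simply mirrors the code above:
proc power;  /* solve for the required total number of events */
   twosamplesurvival test=logrank
      accrualtime      = 2
      followuptime     = 3
      refsurvexphazard = 0.1
      hazardratio      = 0.7
      power            = 0.9
      eventstotal      = .;
run;

proc power;  /* solve for the total sample size */
   twosamplesurvival test=logrank
      accrualtime      = 2
      followuptime     = 3
      refsurvexphazard = 0.1
      hazardratio      = 0.7
      power            = 0.9
      ntotal           = .;
run;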

Evaluating the Fractional Logit Model - McFadden's Adjusted R^2

I am estimating a model where the dependent variable is a fraction (between 0 and 1). I used the commands in Stata 14.1
glm y x, link(logit) family(binomial) robust nolog
as well as
fracreg logit y x, vce(robust)
Both commands deliver the same results.
Now I want to evaluate the outcome, ideally with McFadden's adjusted R^2. Yet neither fitstat nor estat gof seems to work after I run the regressions. I get the error messages "fitstat does not work with the last model estimated" and "not available after fracreg r(321)".
Does any of you know an alternative command for McFadden's adjusted r^2?
Or do I have to use a different evaluation method?
It appears that the pseudo-R-squared reported in the fracreg output is McFadden's pseudo-R-squared. I'm not sure whether this is the same as the McFadden's adjusted R^2 that you mention.
You can see that it is McFadden's pseudo-R-squared by investigating the maximize command, as suggested by Nick Cox's post on Stata.com. The reference manual entry for maximize, page 1478 (Stata 14), says:
Let L1 be the log likelihood of the full model (that is, the log-likelihood value shown on the output), and let L0 be the log likelihood of the “constant-only” model. ... The pseudo-R2 (McFadden 1974) is defined as 1 - L1 / L0. This is simply the log likelihood on a scale where 0 corresponds to the “constant-only” model and 1 corresponds to perfect prediction for a discrete model (in which case the overall log likelihood is 0).
If this is what you are looking for, this value may be pulled out using
fracreg logit y x, vce(robust)
scalar myRsquared = e(r2_p)
To adjust McFadden's R^2, you just need to subtract the number of estimated parameters from the full-model log likelihood in the numerator of the fraction, i.e. adjusted pseudo-R^2 = 1 - (L1 - K)/L0, where K is the number of parameters. Note that you may get negative values.
Here's how you might do that:
set more off
webuse set http://fmwww.bc.edu/repec/bocode/w
webuse wedderburn, clear
/* (1) Fracreg Way */
fracreg logit yield i.site i.variety, nolog
di "Fracreg McFadden's Adj. R^2:" %-9.3f 1-(e(ll)-e(k))/(e(ll_0))
/* (2) GLM Way */
glm yield, link(logit) family(binomial) robust nolog // intercept only model
local ll_0 = e(ll)
glm yield i.site i.variety, link(logit) family(binomial) robust nolog // full model
di "McFadden's Adj. R^2: " %-9.3f 1-(e(ll)-e(k))/`ll_0'
The GLM R^2 will be slightly different because the maximization algorithm is different and so the likelihoods will be different as well. I am not sure how to tweak the ML options so they match exactly.
You can verify that we did things correctly with a command where fitstat works:
sysuse auto, clear
logit foreign price mpg
fitstat
di "McFadden's Adj. R^2: " %-9.3f 1-(e(ll)-e(k))/(e(ll_0))

How SAS computes Ridge values in PROC PHREG

The ITPRINT option in the MODEL statement of SAS PROC PHREG causes the display of the iteration history. This includes a Ridge value, along with the beta values and log likelihoods for each iteration. Ridge is usually zero but is non-zero whenever a log likelihood would otherwise be more negative than the log likelihood for the previous iteration. I need to know how SAS computes that ridge value, and I can find nothing in the Details section for that procedure, or anywhere else.
It appears that, by default, the Ridge value is always 0.0001 * 2^n, and that SAS starts with n=0 and increments n until the log likelihood is less negative than in the previous iteration. But I have tested at least one example where SAS used Ridge=0.4096 when Ridge=0.2048 would have sufficed.
Update: I now think that SAS is iterating 4^n rather than 2^n. With a multiplier of 4 the sequence runs 0.0001, 0.0004, 0.0016, 0.0064, 0.0256, 0.1024, 0.4096, so 0.2048 never appears; that explains the jump straight to 0.4096 and is consistent with my testing so far.
So I think I have answered my own question and would now like academic support for this method. I'll likely seek that at Cross Validated as Robert Penridge and Joe suggest.
When a PHREG iteration fails to improve the fit, that is, when the log likelihood would be more negative than in the previous iteration, the procedure computes a ridge value. This value is RIDGEINIT * 2^n, with n incremented until either the log likelihood value becomes less negative or the ridge value reaches RIDGEMAX.
The default RIDGEINIT is 1e-4.
The default RIDGEMAX is MAX(1, RIDGEINIT) * 2000.
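To make that schedule concrete, here is a small DATA step sketch (purely illustrative, not PHREG internals) that lists the candidate ridge values implied by the description and defaults above:
data ridge_schedule;
   ridgeinit = 1e-4;
   ridgemax  = max(1, ridgeinit) * 2000;   /* = 2000 with the default RIDGEINIT */
   n = 0;
   ridge = ridgeinit;
   do while (ridge <= ridgemax);
      output;
      n = n + 1;
      ridge = ridgeinit * 2**n;            /* double the candidate ridge after each failed step */
   end;
run;
proc print data=ridge_schedule noobs;
   var n ridge;
run;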

SAS ceil/floor issues using big numbers and wanting to ceil/floor to the nearest 10,000

I have one number (203,400) for which I need to find the ceiling and the floor to the nearest 10,000 in order to create a weighted average. From this number I want 200,000 and 210,000, so the code I was using that doesn't work is:
S1CovA_ceil = ceil(S1CovA,10000);
S1CovA_floor = floor(S1CovA,10000);
When I run this program, I get these errors:
ERROR 72-185: The CEIL function call has too many arguments.
ERROR 72-185: The FLOOR function call has too many arguments.
Does anybody know a way around this or different SAS code I could use?
CEIL and FLOOR only round to an integer value; they take no second argument. If you want a number rounded up or down to a multiple of 10,000, you have to do a bit more work:
S1CovA_ceil = ceil(s1covA/10000)*10000;
And the same for floor. Basically, you divide by the desired rounding level, apply ceil or floor, and then multiply back.
Unfortunately, as far as I'm aware, SAS doesn't allow rounding in a particular direction except for straight integer rounding.
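For example, plugging the value from the question into the divide/round/multiply approach (a quick sketch using the variable name from the post):
data _null_;
   S1CovA = 203400;
   S1CovA_ceil  = ceil(S1CovA/10000)*10000;    /* 210000 */
   S1CovA_floor = floor(S1CovA/10000)*10000;   /* 200000 */
   put S1CovA_ceil= comma12. S1CovA_floor= comma12.;
run;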
You can also use the round() function...
%LET ROUNDTO = 10000 ;
data xyz ;
S1CovA_ceil = round(S1CovA+(&ROUNDTO / 2),&ROUNDTO) ;
S1CovA_floor = round(S1CovA-(&ROUNDTO / 2),&ROUNDTO) ;
run ;
Try
S1CovA_ceil = ceil(S1CovA/10000)*10000;
S1CovA_floor = floor(S1CovA/10000)*10000;

Stata seems to be ignoring my starting values in maximum likelihood estimation

I am trying to estimate a maximum likelihood model and it is running into convergence problems in Stata. The actual model is quite complicated, but it converges with no troubles in R when it is supplied with appropriate starting values. I however cannot seem to get Stata to accept the starting values I provide.
I have included a simple example below estimating the mean of a Poisson distribution. This is not the actual model I am trying to estimate, but it demonstrates my problem. I set the trace option, which lets you see the parameter values as Stata searches the likelihood surface.
Although I use init to set a starting value of 0.5, the first iteration still shows that Stata is trying a coefficient of 4.
Why is this? How can I force the estimation procedure to use my starting values?
Thanks!
clear
set obs 1000   // create some observations to simulate (the count here is arbitrary)
generate y = rpoisson(4)
capture program drop mypoisson
program define mypoisson
    args lnf mu
    quietly replace `lnf' = $ML_y1*ln(`mu') - `mu' - lnfactorial($ML_y1)
end
ml model lf mypoisson (mean:y=)
ml init 0.5, copy
ml maximize, iterate(2) trace
Output:
Iteration 0:
Parameter vector:
             mean:
            _cons
r1              4
Added: Stata doesn't ignore the initial value. If you look at the output of the ml maximize command, the first line in the listing will be titled
initial: log likelihood =
Following the equal sign is the value of the log likelihood evaluated at the parameter values set by ml init.
I don't know how the search(off) or search(norescale) solutions affect the subsequent likelihood calculations, so these solutions might still be worthwhile.
Original "solutions":
To force a start at your initial value, add the search(off) option to ml maximize:
ml maximize, iterate(2) trace search(off)
You can also force use of the initial values with search(norescale). See Jeff Pitblado's post at http://www.stata.com/statalist/archive/2006-07/msg00499.html.