Object-Z specification of a credit card using Eclipse - Z notation

I am currently trying to run an example Object-Z specification for a CreditCard class, but I encountered a problem when declaring the visibility list and the INIT schema. Is there any way to fix this? Thank you for reading.
The visibility list reports that the items are not features of the class, and I also get:
undeclared name: balance

I'm not sure CZT copes well with Object-Z.
For an example of a big Z specification, I suggest this recently uploaded project:
https://github.com/vinahradau/finma
For the credit card example, I created these schemas (written in CZT, then converted into LaTeX), which execute well in Jaza.
CZT:
┌ LIMIT
limit: ℤ
|
limit ∈ {1000, 2000, 5000}
└
┌ BALANCE
ΞLIMIT
balance: ℤ
|
balance + limit ≥ 0
└
┌ Init
BALANCE′
|
balance′ = 0
└
┌ SetLimit
ΔLIMIT
limit?: ℕ
|
limit′ = limit?
└
┌ Withdraw
ΔBALANCE
amount?: ℕ
|
amount? ≤ balance + limit
balance′ = balance − amount?
└
┌ Deposit
ΔBALANCE
amount?: ℕ
|
balance′ = balance + amount?
└
┌ WithdrawAvail
ΔBALANCE
amount!: ℕ
|
amount! = balance + limit
balance′ = -limit
└
LaTeX:
\begin{schema}{LIMIT}
limit : \num
\where
limit \in \{ 1000 , 2000 , 5000 \}
\end{schema}
\begin{schema}{BALANCE}
\Xi LIMIT \\
balance : \num
\where
balance + limit \geq 0
\end{schema}
\begin{schema}{Init}
BALANCE~'
\where
balance' = 0
\end{schema}
\begin{schema}{SetLimit}
\Delta LIMIT \\
limit? : \nat
\where
limit' = limit?
\end{schema}
\begin{schema}{Withdraw}
\Delta BALANCE \\
amount? : \nat
\where
amount? \leq balance + limit \\
balance' = balance - amount?
\end{schema}
\begin{schema}{Deposit}
\Delta BALANCE \\
amount? : \nat
\where
balance' = balance + amount?
\end{schema}
\begin{schema}{WithdrawAvail}
\Delta BALANCE \\
amount! : \nat
\where
amount! = balance + limit \\
balance' =~\negate limit
\end{schema}
Jaza:
JAZA> load C:\jaza\creditcard.
Loading 'C:\jaza\creditcard.' ...
Added 7 definitions.
JAZA> do Init
\lblot balance'==0, limit'==1000, limit''==1000 \rblot
JAZA> ; SetLimit
Input limit? = 1000
\lblot balance'==0, limit'==1000 \rblot
JAZA> ; Deposit
Input amount? = 10
\lblot balance'==10, limit'==1000, limit''==1000 \rblot
JAZA> ; Withdraw
Input amount? = 5
\lblot balance'==5, limit'==1000, limit''==1000 \rblot
JAZA> ; WithdrawAvail
\lblot amount!==1005, balance'==-1000, limit'==1000, limit''==1000 \rblot
JAZA>


How can I write a query to carry a remaining balance of hours forward for load leveling a schedule?

I have a query result with the total number of hours scheduled per week, in chronological order without gaps, and a set number of hours that can be processed each week. Any hours not processed should be carried over to one or more following weeks. The following information is available:
Week | Hours | Capacity
   1 |  2000 |      160
   2 |   100 |      160
   3 |     0 |      140
   4 |   150 |      160
   5 |   500 |      160
   6 |  1500 |      160
Each week, the new hours plus any carried-over hours should be reduced by the Capacity, but never go below zero. A positive value should carry into the following week(s).
Week | Hours | Capacity | LeftOver = (Hours + LAG(LeftOver) - Capacity)
   1 |   400 |      160 | 240 (400 +   0 - 160)
   2 |   100 |      160 | 180 (100 + 240 - 160)
   3 |     0 |      140 |  40 (  0 + 180 - 140)
   4 |    20 |      160 |   0 ( 20 +  40 - 160) (no negative, change to zero)
   5 |   500 |      160 | 340 (500 +   0 - 160)
   6 |     0 |      160 | 180 (  0 + 340 - 160)
I'm assuming this can be done with CTE recursion and a running value that doesn't go below zero, but I can't find any specific examples of how this would be written.
Well, you are not wrong: a recursive common table expression is indeed an option for constructing a solution.
Construction of recursive queries can generally be done in steps. Run your query after every step and validate the result.
Define the "anchor" of your recursion: where does the recursion start? Here the start is defined by Week = 1.
Define a recursion iteration: what is the relation between iterations? Here that would be the incrementing week numbers: d.Week = r.Week + 1.
Avoiding negative numbers can be resolved with a case expression.
Sample data
create table data
(
Week int,
Hours int,
Capacity int
);
insert into data (Week, Hours, Capacity) values
(1, 400, 160),
(2, 100, 160),
(3, 0, 140),
(4, 20, 160),
(5, 500, 160),
(6, 0, 160);
Solution
with rcte as
(
select d.Week,
d.Hours,
d.Capacity,
case
when d.Hours - d.Capacity > 0
then d.Hours - d.Capacity
else 0
end as LeftOver
from data d
where d.Week = 1
union all
select d.Week,
d.Hours,
d.Capacity,
case
when d.Hours + r.LeftOver - d.Capacity > 0
then d.Hours + r.LeftOver - d.Capacity
else 0
end
from rcte r
join data d
on d.Week = r.Week + 1
)
select r.Week,
r.Hours,
r.Capacity,
r.LeftOver
from rcte r
order by r.Week;
Result
Week Hours Capacity LeftOver
---- ----- -------- --------
1 400 160 240
2 100 160 180
3 0 140 40
4 20 160 0
5 500 160 340
6 0 160 180
Fiddle to see things in action.
I ended up writing a few CTEs and then a recursive CTE, and got what I needed. The capacity is a static number here, but it will later be replaced with one that takes holidays and vacations into account. I will also need to consider the initial 'LeftOver' value for the first week; I could run this query over an earlier date period to find the most recent date with a zero LeftOver value, use that as a new start date, and then filter out those earlier weeks in the final query.
DECLARE @StartDate date = (SELECT MAX(FirstDayOfWorkWeek) FROM dbo._Calendar WHERE Date <= GETDATE());
DECLARE @EndDate date = DATEADD(week, 12, @StartDate);
DECLARE @EmployeeQty int = (SELECT ISNULL(COUNT(*), 0) FROM Employee WHERE DefaultDepartment IN (4) AND Hidden = 0 AND DateTerminated IS NULL);
WITH hours AS (
/* GRAB ALL NEW HOURS SCHEDULED FOR EACH WEEK IN THE SELECTED PERIOD */
SELECT c.FirstDayOfWorkWeek as [Date]
, SUM(budget.Hours) as hours
FROM dbo.Project_Phase phase
JOIN dbo.Project_Budget_Labor budget on phase.ID = budget.Phase
JOIN dbo._Calendar c on CONVERT(date, phase.Date1) = c.[Date]
WHERE phase.CompletedOn IS NULL AND phase.Project <> 4266
AND phase.Date1 BETWEEN @StartDate AND @EndDate
AND budget.Department IN (4)
GROUP BY c.FirstDayOfWorkWeek
)
, weeks AS (
/* CREATE BLANK ROWS FOR EACH WEEK AND JOIN TO ACTUAL HOURS TO ELIMINATE GAPS */
/* ADD A ROW NUMBER FOR RECURSION IN NEXT CTE */
SELECT cal.[Date]
, ROW_NUMBER() OVER(ORDER BY cal.[Date]) as [rownum]
, ISNULL(SUM(hours.Hours), 0) as Hours
FROM (SELECT FirstDayOfWorkWeek as [Date] FROM dbo._Calendar WHERE [Date] BETWEEN @StartDate AND @EndDate GROUP BY FirstDayOfWorkWeek) as cal
LEFT JOIN hours on cal.[Date] = hours.[Date]
GROUP BY cal.[Date]
)
, spread AS (
/* GRAB FIRST WEEK AND USE RECURSION TO CREATE RUNNING TOTAL THAT DOES NOT DROP BELOW ZERO*/
SELECT TOP 1 [Date]
, rownum
, Hours
, @EmployeeQty * 40 as Capacity
, CONVERT(numeric(9,2), 0.00) as LeftOver
, Hours as running
FROM weeks
ORDER BY rownum
UNION ALL
SELECT curr.[Date]
, curr.rownum
, curr.Hours
, @EmployeeQty * 40 as Capacity
, CONVERT(numeric(9,2), CASE WHEN curr.Hours + prev.LeftOver - (@EmployeeQty * 40) < 0 THEN 0 ELSE curr.Hours + prev.LeftOver - (@EmployeeQty * 40) END) as LeftOver
, curr.Hours + prev.LeftOver as running
FROM weeks curr
JOIN spread prev on curr.rownum = (prev.rownum + 1)
)
SELECT spread.Hours as NewHours
, spread.LeftOver as PrevHours
, spread.Capacity
, spread.running as RunningTotal
, CASE WHEN running < Capacity THEN running ELSE Capacity END as HoursThisWeek
FROM spread

How to extract components of a disorganized string variable in Stata?

I have a text variable showing patient prescriptions that looks quite messy, like this:
PatientRx
ACETAZOLAMIDE 250MG TABLET- 100
ADAPALENE + BENZOYL 0.1% + 2.5% GEL-..
ADRENALINE/EPIPEN 300MCG/0.3ML INJ..
ALENDRONATE + COLECA 70MG + 140MCG TA..
ALLOPURINOL 100MG TABLET- 100
ALUM HYDROX + MAG HY 250+120+120MG/5M..
AMILORIDE + HYDROCHL 5MG + 50MG HCL T..
While I haven't looked through all these values, some patterns emerge:
Often there is more than one drug, and they are separated, for example, by a space and a forward slash.
Drugs can also be separated by a plus sign, but a plus sign is also used between doses.
The rules for spaces are very arbitrary, both at the beginning and in the middle of an entry.
How can I extract only the names of the drugs into new variables? New variables should look like this:
Newvar1 Newvar2
ACETAZOLAMIDE
ADAPALENE BENZOYL
ADRENALINE EPIPEN
ALENDRONATE COLECA
and so on.
Some would reach first for regular expressions, which you might indeed need for the full problem (a small Python sketch of the regex route follows the output below). In addition, note moss, installed by ssc install moss.
But given the information in the example here, which is all we have to go on, it seems easiest to look for the position of the first numeric digit 0 to 9 and then parse what comes before it. I don't know whether drug names ever contain numeric digits.
clear
input str40 sandbox
" ACETAZOLAMIDE 250MG TABLET- 100"
"ADAPALENE + BENZOYL 0.1% + 2.5% GEL-"
" ADRENALINE/EPIPEN 300MCG/0.3ML INJ"
"ALENDRONATE + COLECA 70MG + 140MCG TA"
" ALLOPURINOL 100MG TABLET- 100"
"ALUM HYDROX + MAG HY 250+120+120MG/5M"
" AMILORIDE + HYDROCHL 5MG + 50MG HCL T"
end
gen wherenum = .
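* strpos() is 0 when a digit is absent (hence the if qualifier), and min()
* ignores missing, so wherenum ends up as the position of the first digit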
quietly forval j = 0/9 {
replace wherenum = min(wherenum, strpos(sandbox, "`j'")) if strpos(sandbox, "`j'")
}
gen drug = substr(sandbox, 1, wherenum - 1)
split drug, parse(+ /)
l drug?, sep(0)
+---------------------------+
| drug1 drug2 |
|---------------------------|
1. | ACETAZOLAMIDE |
2. | ADAPALENE BENZOYL |
3. | ADRENALINE EPIPEN |
4. | ALENDRONATE COLECA |
5. | ALLOPURINOL |
6. | ALUM HYDROX MAG HY |
7. | AMILORIDE HYDROCHL |
+---------------------------+
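Outside Stata, the same heuristic (take everything before the first digit, then split on + and /) is easy to sketch with regular expressions. A minimal Python illustration, using sample strings copied from above:

import re

samples = [
    " ACETAZOLAMIDE 250MG TABLET- 100",
    "ADAPALENE + BENZOYL 0.1% + 2.5% GEL-",
    " ADRENALINE/EPIPEN 300MCG/0.3ML INJ",
]

for s in samples:
    # text before the first digit, if any
    head = re.split(r"\d", s, maxsplit=1)[0]
    # split drug names on '+' and '/', trimming stray spaces
    drugs = [p.strip() for p in re.split(r"[+/]", head) if p.strip()]
    print(drugs)

# ['ACETAZOLAMIDE']
# ['ADAPALENE', 'BENZOYL']
# ['ADRENALINE', 'EPIPEN']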

Pandas groupby mean absolute deviation

I have a pandas dataframe like this:
Product Group | Product ID | Units Sold | Revenue | Rev/Unit
A             |        451 |          8 |     $16 |    $2
A             |        987 |         15 |     $40 |    $2.67
A             |        311 |          2 |      $5 |    $2.50
B             |        642 |          6 |     $18 |    $3.00
B             |        251 |          4 |     $28 |    $7.00
I want to transform it to look like this:
Product Group | Units Sold | Revenue | Rev/Unit | Mean Abs Deviation
A             |         25 |     $61 |    $2.44 |              $0.24
B             |         10 |     $46 |    $4.60 |              $2.00
The Mean Abs Deviation column is to be computed from the Rev/Unit column in the first table. The tricky thing is taking into account the respective weights behind the Rev/Unit calculation.
For example, taking a straight MAD of Product Group A's Rev/Unit would yield $0.26. However, after taking weight into consideration, the MAD would be $0.24.
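For reference, the straight figure works out as stated, and one plausible weighted definition (assuming units sold as the weights, which the answer below also assumes) is:

Unweighted, group A: the mean is $(2 + 2.67 + 2.50)/3 = 2.39$, so

$$\mathrm{MAD}_A = \frac{|2 - 2.39| + |2.67 - 2.39| + |2.50 - 2.39|}{3} = \frac{0.78}{3} = 0.26$$

Weighted, per group $g$ with weights $w_i$ = units sold and the group mean taken as total revenue over total units:

$$\mathrm{WMAD}_g = \frac{\sum_{i \in g} w_i\,|x_i - \bar{x}_g|}{\sum_{i \in g} w_i}, \qquad \bar{x}_g = \frac{\sum_{i \in g} \mathrm{Revenue}_i}{\sum_{i \in g} \mathrm{Units}_i}$$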
I know to use groupby to get the simple summation for Units Sold and Revenue, but I'm a bit lost on how to do the more complicated calculations for the next two columns.
Also, while we're giving advice/help: is there any easier way to create/paste tables into SO posts?
UPDATE:
Would a solution like this work? I know it will for the summation fields, but not sure how to implement for the latter 2 fields.
grouped_df = df.groupby("Product Group")
grouped_df.agg({
    'Units Sold': 'sum',
    'Revenue': 'sum',
    'Rev/Unit': 'Revenue'/'Units Sold',
    'MAD': some_function})
You need to clarify what the "weights" are. I assumed the weights are the number of units sold, but that gives a different result from yours:
pv = df.pivot_table(index='Product Group',   # 'rows=' in very old pandas, 'index=' since 0.14
                    values=['Units Sold', 'Revenue'],
                    aggfunc='sum')
pv['Rev/Unit'] = pv.Revenue / pv['Units Sold']
this gives:
Revenue Units Sold Rev/Unit
Product Group
A 61 25 2.44
B 46 10 4.60
As for WMAD:
import numpy as np

def wmad(prod):
    # weighted MAD of Rev/Unit within one product group, weights = units sold
    idx = df['Product Group'] == prod
    w = df['Units Sold'][idx]
    abs_dev = np.abs(df['Rev/Unit'][idx] - pv['Rev/Unit'][prod])
    return sum(abs_dev * w) / sum(w)

pv['Mean Abs Deviation'] = [wmad(idx) for idx in pv.index]
which, as I mentioned, gives a different result:
Revenue Units Sold Rev/Unit Mean Abs Deviation
Product Group
A 61 25 2.44 0.2836
B 46 10 4.60 1.9200
From your suggested solution, you can use a lambda function, e.g.:
'Rev/Unit': lambda x: calculate_revenue_per_unit(x)
Bear in mind that in agg, x is the Series of one column's values for each group (not a per-row tuple), so your calculate_revenue_per_unit function must work on a whole group's column at once; a ratio of two columns, like Rev/Unit, is easier to compute after aggregation, as in the pivot-table code above.
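For completeness, a minimal self-contained sketch of the whole transform (my own illustration, not the OP's pipeline; it assumes units sold are the weights and reuses the rounded Rev/Unit values from the question's table, which is why group A comes out at 0.2836 as above):

import pandas as pd

# data from the question; Rev/Unit taken as given (rounded) there
df = pd.DataFrame({
    'Product Group': ['A', 'A', 'A', 'B', 'B'],
    'Product ID':    [451, 987, 311, 642, 251],
    'Units Sold':    [8, 15, 2, 6, 4],
    'Revenue':       [16, 40, 5, 18, 28],
    'Rev/Unit':      [2.00, 2.67, 2.50, 3.00, 7.00],
})

def summarize(g):
    units = g['Units Sold'].sum()
    revenue = g['Revenue'].sum()
    rev_per_unit = revenue / units
    # weighted mean absolute deviation, weights = units sold
    wmad = (g['Units Sold'] * (g['Rev/Unit'] - rev_per_unit).abs()).sum() / units
    return pd.Series({'Units Sold': units, 'Revenue': revenue,
                      'Rev/Unit': rev_per_unit, 'Mean Abs Deviation': wmad})

print(df.groupby('Product Group').apply(summarize))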

Deleting tables with regular expressions

Not really a specific question, since I don't know enough; more of a question on how to approach this.
Example file can be seen below:
LOADING CONDITION : 4-Homogenous cargo 98% 1.018t/m3, draught 3.35m
- outgoing
ITEMS OF LOADING
-------------------------------------------------------------------------------
CAPA ITEM REFERENCE X1 X2 WEIGHT KG LCG YG FSM
No (m) (m) (t) (m) (m) (m) (t.m)
-------------------------------------------------------------------------------
13 No2 CARGO TK P 1.650 29.400 609.04 2.745 15.525 -3.384 483.49
14 No2 CARGO TK S 1.650 29.400 603.61 2.745 15.525 3.384 483.49
15 No1 CARGO TK P 29.400 56.400 587.23 2.745 42.900 -3.384 470.42
16 No1 CARGO TK S 29.400 56.400 592.45 2.745 42.900 3.384 470.42
17 MGO tank aft 21.150 23.400 23.42 6.531 22.275 -0.500 15.70
18 TO storage tank 21.150 23.400 2.68 7.225 22.275 2.300 0.00
19 MGO fore tank 33.150 35.400 25.90 6.643 34.275 -0.212 0.00
-------------------------------------------------------------------------------
DEADWEIGHT 2444.34 2.828 29.007 -0.005 1923.52
SUMMARY OF LOADING
WEIGHT KG LCG YG FSM
(t) (m) (m) (m) (t.m)
-------------------------------------------------------------------------------
DEADWEIGHT 2444.34 2.828 29.007 -0.005 1923.52
LIGHT SHIP 634.00 3.030 28.654 0.000 0.00
-------------------------------------------------------------------------------
TOTAL WEIGHT 3078.34 2.869 28.935 -0.004 1923.52
LOADING CONDITION : 4-Homogenous cargo 98% 1.018t/m3, draught 3.35m
- outgoing
Damage Case : 1bott: all cargo & void3
Flooding Percentage : 100 %
Flooded Volumes : No.3 Void space P No.3 Void space S No2 CARGO TK P
No2 CARGO TK S No1 CARGO TK P No1 CARGO TK S
-------------------------------------------------------------------------------
WEIGHT KG LCG YG FSM CORR.KG
(t) (m) (m) (m) (t.m) (m)
-------------------------------------------------------------------------------
TOTAL WEIGHT 3078.34 2.869 28.935 -0.004 1923.52 3.494
RUN-OFF WEIGHTS 0.00 0.000 0.000 0.000 0.00 0.000
-------------------------------------------------------------------------------
DAMAGE CONDITION 3078.34 2.869 28.935 -0.004 1923.52 3.494
EQUILIBRIUM NOT FOUND ON STARBOARD
LOADING CASE :
4-Homogenous cargo 98% 1.018t/m3, draught 3.35m - outgoing
-------------------------------------------------------------------------------
WEIGHT KG LCG YG FSM CORR.KG
(t) (m) (m) (m) (t.m) (m)
-------------------------------------------------------------------------------
TOTAL WEIGHT 3078.34 2.869 28.935 -0.004 1923.52 3.494
SUMMARY OF RESULTS OF DAMAGE STABILITY
-------------------------------------------------------------------------------
DAMAGE CASE % R HEEL GM FBmin GZ>0 GZmax Area
(deg) (m) (m) (deg) (m) (m.rad)
-------------------------------------------------------------------------------
1bott: all cargo & void3 100 0 EQUILIBRIUM NOT FOUND
% : Flooding percentage.
R : R=1 if run-off weights considered, R=0 if no run-off.
HEEL : Heel at equilibrium (negative if equilibrium is on port).
GM : GM at equilibrium.
FBmin : Minimum distance of margin line, weathertight or non-weathertight
points from waterline.
GZ>0 : Range of positive GZ limited to immersion of non-weathertight openings.
GZmax : Maximum GZ value.
It is one of many; they can differ a bit, but they all come down to tables in textual form. I need to clean up some items from them before pasting them into a report.
So I was wondering: what would be the best way to delete a certain table? For example, SUMMARY OF LOADING (it starts at the line containing "SUMMARY OF LOADING" and ends at the line containing "TOTAL WEIGHT").
How do I match that table and delete it?
Try the following from within vim
:g/SUMMARY OF LOADING/, /TOTAL WEIGHT/d
sed works in the same way:
sed '/SUMMARY OF LOADING/, /TOTAL WEIGHT/d' input_with_tables.txt
Fredrik Pihl's solution with :g works well if you need to delete all such tables. For more specific edits, you could use my CountJump plugin to create custom motions and text objects by defining start and end patterns (like SUMMARY OF LOADING and TOTAL WEIGHT in your case); you could then quickly jump to the next table and delete it with a quick mapping.
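If you would rather script this than do it inside an editor, here is a minimal Python sketch of the same range deletion (the output file name is a placeholder; the start and end markers match the vim/sed commands above):

def delete_tables(lines, start="SUMMARY OF LOADING", end="TOTAL WEIGHT"):
    # drop every line from one containing `start` through the next line
    # containing `end`, inclusive; repeats for every such table
    out, skipping = [], False
    for line in lines:
        if not skipping and start in line:
            skipping = True
        if not skipping:
            out.append(line)
        elif end in line:
            skipping = False
    return out

with open("input_with_tables.txt") as f:
    kept = delete_tables(f.readlines())
with open("cleaned.txt", "w") as f:
    f.writelines(kept)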

Bioprobit - the covariance matrix of the betas

I am comparing weights of credit rating determinants across Moody's and S&P.
The goal of the bioprobit analysis is then to test whether the beta coefficients are the same between Moody's and S&P.
I want to do this with a Wald test, but I need the covariance matrix of the betas. Could you please help me with the Stata code to get the covariance matrix?
Variables entering the model are S&Prat, Mrat, GDP, Inflation, Ratio, etc.
Thanks in advance.
Based on @Nick Cox:
An example using Stata's auto dataset (you need to install bioprobit, which is a user-written command):
sysuse auto
bioprobit headroom foreign price length mpg turn
. bioprobit headroom foreign price length mpg turn
group(forei |
gn) | Freq. Percent Cum.
------------+-----------------------------------
1 | 52 70.27 70.27
2 | 22 29.73 100.00
------------+-----------------------------------
Total | 74 100.00
initial: log likelihood = -148.5818
rescale: log likelihood = -148.5818
rescale eq: log likelihood = -147.44136
Iteration 0: log likelihood = -147.44136
Iteration 1: log likelihood = -147.43958
Iteration 2: log likelihood = -147.43958
Bivariate ordered probit regression Number of obs = 74
Wald chi2(4) = 22.61
Log likelihood = -147.43958 Prob > chi2 = 0.0002
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
headroom |
price | -.0000664 .0000478 -1.39 0.164 -.00016 .0000272
length | .0347597 .013096 2.65 0.008 .009092 .0604274
mpg | -.0118916 .0354387 -0.34 0.737 -.0813502 .0575669
turn | -.0333833 .0554614 -0.60 0.547 -.1420857 .0753191
-------------+----------------------------------------------------------------
foreign |
price | .0003981 .0001485 2.68 0.007 .0001071 .0006892
length | -.0585548 .0284639 -2.06 0.040 -.114343 -.0027666
mpg | -.0306867 .0543826 -0.56 0.573 -.1372745 .0759012
turn | -.3471526 .1321667 -2.63 0.009 -.6061946 -.0881106
-------------+----------------------------------------------------------------
athrho |
_cons | .053797 .3131717 0.17 0.864 -.5600082 .6676022
-------------+----------------------------------------------------------------
/cut11 | 2.72507 2.451108 -2.079014 7.529154
/cut12 | 3.640296 2.445186 -1.152181 8.432772
/cut13 | 4.227321 2.443236 -.561334 9.015975
/cut14 | 4.792874 2.452694 -.0143182 9.600067
/cut15 | 5.586825 2.480339 .7254488 10.4482
/cut16 | 6.381491 2.505192 1.471404 11.29158
/cut17 | 7.145783 2.529663 2.187735 12.10383
/cut21 | -21.05768 6.50279 -33.80292 -8.312449
-------------+----------------------------------------------------------------
rho | .0537452 .3122671 -.5079835 .5834004
------------------------------------------------------------------------------
LR test of indep. eqns. : chi2(1) = 0.03 Prob > chi2 = 0.8636
* results that bioprobit leaves in Stata's memory
ereturn list
scalars:
e(rc) = 0
e(ll) = -147.4395814769408
e(converged) = 1
e(rank) = 17
e(k) = 17
e(k_eq) = 11
e(k_dv) = 2
e(ic) = 2
e(N) = 74
e(k_eq_model) = 1
e(df_m) = 4
e(chi2) = 22.60944901065799
e(p) = .0001515278365065
e(ll_0) = -147.4543291018424
e(k_aux) = 8
e(chi2_c) = .0294952498030625
e(p_c) = .8636405133599019
macros:
e(chi2_ct) : "LR"
e(depvar) : "headroom foreign"
e(predict) : "bioprobit_p"
e(cmd) : "bioprobit"
e(chi2type) : "Wald"
e(vce) : "oim"
e(opt) : "ml"
e(title) : "Bivariate ordered probit regression"
e(ml_method) : "d2"
e(user) : "bioprobit_d2"
e(crittype) : "log likelihood"
e(technique) : "nr"
e(properties) : "b V"
matrices:
e(b) : 1 x 17
e(V) : 17 x 17
e(gradient) : 1 x 17
e(ilog) : 1 x 20
functions:
e(sample)
* use mat list e(V) to display the variance-covariance matrix
mat list e(V)
symmetric e(V)[17,17]
headroom: headroom: headroom: headroom: foreign: foreign: foreign: foreign:
price length mpg turn price length mpg turn
headroom:price 2.280e-09
headroom:length -1.431e-07 .00017151
headroom:mpg 3.991e-07 .00018914 .0012559
headroom:turn 4.426e-07 -.00050302 .00027186 .00307597
foreign:price 1.124e-10 -4.999e-09 2.093e-08 2.079e-08 2.205e-08
foreign:length -5.846e-09 8.021e-06 9.950e-06 -.0000249 -2.087e-06 .00081019
foreign:mpg 1.712e-08 .00001035 .00006387 .00001352 1.254e-06 .0006546 .00295746
foreign:turn 1.145e-08 -.00002418 .00001022 .00015562 -.00001083 -.00028103 -.0001411 .01746805
athrho:_cons 2.360e-07 -.00004531 .0000684 .00005575 -2.010e-06 .00043717 -.00147713 -.00449239
cut11:_cons .0000134 .01507955 .07578798 .03653671 1.039e-06 .00068972 .00401168 .00211706
cut12:_cons .00001374 .01514192 .07570527 .03630636 9.488e-07 .0007133 .00386727 .00165474
cut13:_cons .00001393 .01520261 .07550433 .03603257 9.668e-07 .0007088 .00386171 .00165557
cut14:_cons .00001363 .01539981 .07532214 .03582323 1.042e-06 .00068687 .00392914 .00189195
cut15:_cons .00001264 .01584186 .07541396 .03541453 1.101e-06 .00068091 .0040106 .00209853
cut16:_cons .00001148 .01611862 .07562328 .03535426 1.052e-06 .00069849 .00401805 .00206701
cut17:_cons .00001055 .01602514 .07547739 .03620485 9.866e-07 .00069868 .00399718 .00207143
cut21:_cons 4.412e-07 .00073781 .00377201 .00190456 -.00058242 .13231539 .18778679 .51179829
athrho: cut11: cut12: cut13: cut14: cut15: cut16: cut17:
_cons _cons _cons _cons _cons _cons _cons _cons
athrho:_cons .09807649
cut11:_cons -.0064343 6.0079319
cut12:_cons .00229188 5.9652808 5.9789347
cut13:_cons .00187855 5.9546524 5.9639617 5.9694026
cut14:_cons -.00310632 5.9724552 5.9793328 5.9820512 6.0157096
cut15:_cons -.00783593 6.0300908 6.03522 6.0360956 6.0667389 6.1520838
cut16:_cons -.00756313 6.0745198 6.0789515 6.0788816 6.1081885 6.1880183 6.275988
cut17:_cons -.00673882 6.0811477 6.0851101 6.0844209 6.1128719 6.1897756 6.2679698 6.3991936
cut21:_cons -.13478036 .30582954 .28918756 .28844026 .29527602 .30401845 .30575462 .30503648
cut21:
_cons
cut21:_cons 42.286275
* to use the variance-covariance matrix of the first four coefficients
mat kk=e(V)
mat kkk=kk[1..4,1..4]
mat list kkk
symmetric kkk[4,4]
headroom: headroom: headroom: headroom:
price length mpg turn
headroom:price 2.280e-09
headroom:length -1.431e-07 .00017151
headroom:mpg 3.991e-07 .00018914 .0012559
headroom:turn 4.426e-07 -.00050302 .00027186 .00307597