Gurobi: used model.write but cannot find the file - C++

I am using Gurobi with C++ and want to save the LP as an .lp file. Therefore, I used
model.write("model.lp");
model.optimize();
This is my output, and no error occurs:
Optimize a model with 105 rows, 58 columns and 186 nonzeros
Coefficient statistics:
  Matrix range     [1e+00, 1e+00]
  Objective range  [1e+00, 1e+00]
  Bounds range     [0e+00, 0e+00]
  RHS range        [1e+00, 6e+00]
Presolve removed 105 rows and 58 columns
Presolve time: 0.00s
Presolve: All rows and columns removed
Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0   -0.0000000e+00   0.000000e+00   0.000000e+00      0s
Solved in 0 iterations and 0.00 seconds
Optimal objective -0.000000000e+00
obj: -0
Status2
Process finished with exit code 0
So there is probably a mistake in my LP, since the optimal objective should not be 0. This is why I want to have a look at the model.lp file. However, I cannot find it; I have searched my whole computer. Am I missing anything?
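One way to rule out a working-directory mix-up is to print the directory the executable actually runs in (under an IDE this is often a build folder rather than the project folder) and to write to an absolute path. A minimal C++17 sketch, with a placeholder path and an empty model, not the actual code above:

#include <filesystem>
#include <iostream>
#include "gurobi_c++.h"

int main() {
    try {
        // model.write() drops the file in the process's current working
        // directory; print it to see where model.lp actually lands.
        std::cout << "working directory: "
                  << std::filesystem::current_path() << std::endl;

        GRBEnv env;
        GRBModel model(env);
        // ... build variables, constraints and the objective here ...

        model.write("/tmp/model.lp");  // absolute path removes the ambiguity
        model.optimize();
    } catch (GRBException e) {
        std::cerr << "Gurobi error " << e.getErrorCode() << ": "
                  << e.getMessage() << std::endl;
    }
    return 0;
}

(Incidentally, the Status2 line looks like a printed status code, and status 2 is GRB_OPTIMAL, so the solve itself succeeded.)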

Pyomo Gurobi solver error - "ERROR: Solver (gurobi) returned non-zero return code (137)"

I have a solution for an MIP problem on graphs which works fine and gives the following output when I run it on smaller graphs. I'm using the Gurobi solver with Pyomo.
Problem:
- Name: x73
  Lower bound: 192.0
  Upper bound: 192.0
  Number of objectives: 1
  Number of constraints: 10
  Number of variables: 37
  Number of binary variables: 36
  Number of integer variables: 36
  Number of continuous variables: 1
  Number of nonzeros: 37
  Sense: minimize
Solver:
- Status: ok
  Return code: 0
  Message: Model was solved to optimality (subject to tolerances), and an optimal solution is available.
  Termination condition: optimal
  Termination message: Model was solved to optimality (subject to tolerances), and an optimal solution is available.
  Wall time: 0.03206682205200195
  Error rc: 0
  Time: 0.09361410140991211
Solution:
- number of solutions: 0
  number of solutions displayed: 0
But I am getting the following error while running the code with larger graphs.
ERROR: Solver (gurobi) returned non-zero return code (137)
ERROR: Solver log:
Using license file /opt/shared/gurobi/gurobi.lic
Set parameter TokenServer to value gurobi.lm.udel.edu
Set parameter TSPort to value 40100
Read LP format model from file /tmp/tmpaud9ogrn.pyomo.lp
Reading time = 0.01 seconds
x1101: 56 rows, 551 columns, 551 nonzeros
Changed value of parameter TimeLimit to 600.0
   Prev: inf  Min: 0.0  Max: inf  Default: inf
Gurobi Optimizer version 9.0.1 build v9.0.1rc0 (linux64)
Optimize a model with 56 rows, 551 columns and 551 nonzeros
Model fingerprint: 0xafe0319a
Model has 15400 quadratic objective terms
Variable types: 1 continuous, 550 integer (550 binary)
Coefficient statistics:
  Matrix range     [1e+00, 1e+00]
  Objective range  [0e+00, 0e+00]
  QObjective range [4e+00, 8e+01]
  Bounds range     [1e+00, 1e+00]
  RHS range        [1e+00, 1e+00]
Found heuristic solution: objective 22880.000000
Presolve removed 1 rows and 1 columns
Presolve time: 0.01s
Presolved: 55 rows, 550 columns, 550 nonzeros
Presolved model has 15400 quadratic objective terms
Variable types: 0 continuous, 550 integer (550 binary)

Root simplex log...

Iteration    Objective       Primal Inf.    Dual Inf.      Time
  130920    9.8490000e+02   1.610955e+03   0.000000e+00      5s
  263917    1.0999000e+03   1.710649e+03   0.000000e+00     10s
  397157    1.0999000e+03   2.243077e+03   0.000000e+00     15s
  529512    1.0999000e+03   1.910603e+03   0.000000e+00     20s
  662404    1.0999000e+03   1.584650e+03   0.000000e+00     25s
  791296    1.0999000e+03   1.812443e+03   0.000000e+00     30s
  906473    1.3475000e+03   0.000000e+00   0.000000e+00     34s

Root relaxation: objective 1.347500e+03, 906473 iterations, 34.32 seconds

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

H    0     0                    1730.0000000    0.00000   100%     -   52s
H    0     0                    1654.0000000    0.00000   100%     -   52s
H    0     0                    1578.0000000    0.00000   100%     -   52s
     0     0 1347.50000    0  137 1578.00000 1347.50000  14.6%     -   52s
     0     0 1347.50000    0  137 1578.00000 1347.50000  14.6%     -   53s
H    0     0                    1540.0000000 1347.50000  12.5%     -   53s
     0     2 1347.50000    0  145 1540.00000 1347.50000  12.5%     -   55s
   101   118 1396.92351   10  140 1540.00000 1347.50000  12.5%   157   61s
   490   591 1416.40484   18  128 1540.00000 1347.50000  12.5%  63.6   65s
  2136  2347 1440.09938   42  100 1540.00000 1347.50000  12.5%  42.9   70s
  3847  3402 1461.55736   81   80 1540.00000 1347.50000  12.5%  37.0   82s

/opt/shared/gurobi/9.0.1/bin/gurobi.sh: line 17: 23890 Killed    $PYTHONHOME/bin/python3.7 "$@"
Traceback (most recent call last):
  File "/home/2925/EdgeColoring/main.py", line 91, in <module>
    qubo_coloring, qubo_time = qubo(G, colors, edge_list, solver)
  File "/home/2925/EdgeColoring/qubo.py", line 59, in qubo
    result = solver.solve(model)
  File "/home/2925/.conda/envs/qubo/lib/python3.9/site-packages/pyomo/opt/base/solvers.py", line 596, in solve
    raise ApplicationError(
pyomo.common.errors.ApplicationError: Solver (gurobi) did not exit normally
Using a TimeLimit of up to 2 minutes stops the solve early without any error, but it doesn't always give an optimal solution for larger graphs. Memory or processing power is not an issue here. I need to run the code without any interruption for at least 10 minutes, if not for hours.
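For context, the solve in qubo.py presumably happens along these lines (a sketch; the model construction is omitted, and TimeLimit is the only option visible in the log):

from pyomo.environ import SolverFactory

# 'gurobi' here is the LP-file interface, matching the
# "Read LP format model from file /tmp/tmpaud9ogrn.pyomo.lp" line
solver = SolverFactory('gurobi')
solver.options['TimeLimit'] = 600        # seconds, as in the log above
result = solver.solve(model, tee=True)   # tee=True echoes the Gurobi log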

How do you rank based on a measure referencing two tables in a star schema?

I have a star schema data model with the tables DimDate, DimBranchName, BranchActual, and BranchBudget.
I have a measure, QVar, that calculates the YTD variance to budget by branch. QVar takes the counts from BranchActual and compares them to BranchBudget between two dates. The visual is controlled by DimBranchName and DimDate.
Current Result:
BranchName  YTDActual  YTDBudget  QVar
A           100        150        (33%)
B           200        200        0.0%
C           25         15         66%
I want a measure to be able to rank DimBranchName[BranchName] by QVar that will interact with the filters I have in place.
Desired result:
BranchName  YTDActual  YTDBudget  QVar   Rank
A           100        150        (33%)  3
B           200        200        0.0%   2
C           25         15         66%    1
What I've tried so far is
R Rank of Actual v Goal =
var V = [QVar]
RETURN
RANKX(ALLSELECTED('BranchActual'),CALCULATE(V),,ASC,Dense)
What I get is all 1's:
BranchName  YTDActual  YTDBudget  QVar   Rank
A           100        150        (33%)  1
B           200        200        0.0%   1
C           25         15         66%    1
Thanks!
When you declare a variable it is computed once and treated as a constant through the rest of your DAX code, so CALCULATE(V) is simply whatever V was when you declared the variable.
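To spell that out against the original measure (comments added for illustration):
R Rank of Actual v Goal =
VAR V = [QVar]  // V is fixed here, in the visual's outer filter context
RETURN
RANKX ( ALLSELECTED ( 'BranchActual' ), CALCULATE ( V ), , ASC, DENSE )
// every row RANKX scans gets the same value of V, and ranking
// identical values with DENSE ties yields 1 for every branch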
This might be closer to what you want:
R Rank of Actual v Goal =
RANKX ( ALLSELECTED ( DimBranchName[BranchName] ), [QVar],, ASC, DENSE )
This way [QVar] is called within the filter context of the BranchName instead of being a constant. (Note that referencing a measure within another measure implicitly wraps it in CALCULATE so you don't need that again.)

Weka displaying weird results for classification - question marks "?"

I am trying to use the ZeroR algorithm in Weka to establish a baseline performance for my classification problem. However, Weka is displaying weird results for precision and F-measure: it shows a question mark '?' instead of a number. Does anyone know how I can fix this?
=== Classifier model (full training set) ===
ZeroR predicts class value: label 1
Time taken to build model: 0 seconds
=== Stratified cross-validation ===
=== Summary ===
Correctly Classified Instances 431 53.607 %
Incorrectly Classified Instances 373 46.393 %
Kappa statistic 0
Mean absolute error 0.4974
Root mean squared error 0.4987
Relative absolute error 100 %
Root relative squared error 100 %
Total Number of Instances 804
=== Detailed Accuracy By Class ===
                 TP Rate  FP Rate  Precision  Recall  F-Measure  MCC  ROC Area  PRC Area  Class
                 0.000    0.000    ?          0.000   ?          ?    0.488     0.457     label 0
                 1.000    1.000    0.536      1.000   0.698      ?    0.488     0.530     label 1
Weighted Avg.    0.536    0.536    ?          0.536   ?          ?    0.488     0.496
=== Confusion Matrix ===
   a   b   <-- classified as
   0 373 |   a = label 0
   0 431 |   b = label 1
It's not wrong. Note that you have no cases classified as "a", so the precision (etc.) is indeterminable for "a". Evidently Weka propagates incalculable values (as Excel does), so the overall precision isn't calculated either.
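Concretely, precision = TP / (TP + FP), and reading the "a" column of the confusion matrix above gives, for label 0:

precision(a) = 0 / (0 + 0)   (undefined)

Any weighted average that includes an undefined term is then undefined as well, hence the "?" entries.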
Your real problem here is that you have a model that is classifying everything as "b", which is unlikely to be useful. But that's ZeroR, so that's just your starting point.

Computing a gradient using a 2-column array from an external .dat file

I have a .dat file with 2 columns and between 14,000 and 36,000 rows, saved like below:
0.00 0.00
2.00 1.00
2.03 1.01
2.05 1.07
.
.
.
79.03 23.01
The 1st column is extension, the 2nd is strain. To compute the gradient in order to determine the Hooke's law slope of the plot, I use the code below.
CCCCCC
      Program gradient
      REAL S(40000),E(40000),GRAD(40000,1)
      open(unit=300, file='Probka1A.dat', status='OLD')
      open(unit=321, file='result.out', status='unknown')
      write(321,400)
 400  format('alfa')
 260  DO 200 i=1, 40000
        read(300,30) S(i),E(i)
 30     format(2F7.2)
        GRAD(i,1)=(S(i)-S(i-1))/(E(i)-E(i-1))
        write(321,777) GRAD(i,1)
 777    Format(F7.2)
 200  Continue
      END
But when I execute it, I get the following runtime error:
PGFIO-F-231/formatted read/unit=300/error on data conversion.
File name = Probka1A.dat formatted, sequential access record = 1
In source file gradient1.f, at line number 9
What can I do to compute the gradient this way, or another way, in Fortran 77?
You are reading from the file without checking for the end of the file, so the loop runs to 40000 regardless of how many records there are. In addition, the fixed-width 2F7.2 edit descriptor fails as soon as the values don't line up exactly with the field widths, which is why the conversion error occurs already on record 1; a list-directed read (*) avoids that. Your loop should look like this (using label 500 for the jump target, since 400 is already taken by your format statement):
 260  DO 200 i=1, 40000
        read(300,*,ERR=500,END=500) S(i),E(i)
        if (i .gt. 1) then
          GRAD(i-1,1)=(S(i)-S(i-1))/(E(i)-E(i-1))
          write(321,777) GRAD(i-1,1)
        end if
 777    Format(F7.2)
 200  Continue
 500  continue

Logistic Discrete-Time Hazard Model Parameter Estimates Interpretation

I am using PROC GLIMMIX, and I'm curious as to why my parameter estimates are behaving strangely.
proc glimmix data=blah pconv=1e-3;
   class strata1;
   model event(event=LAST) = time1--time20 /
         noint solution link=logit dist=binary;
   nloptions tech=nrridg;
   covtest 'var(strata1)=0' / WALD;
   random intercept / subject=strata1;
run;
Since I'm using a logistic discrete-time hazard model (without any censored observations), I have my dataset constructed in 'person-period' format. Here is an example of what a person-period dataset looks like:
id time1 time2 time3 time4 event
100 1 0 0 0 0
100 0 1 0 0 0
100 0 0 1 0 1
101 1 0 0 0 1
102 1 0 0 0 0
102 0 1 0 0 0
102 0 0 1 0 0
102 0 0 0 1 0
Essentially, each 'time' variable indicates whether that period is the current one: time1=1 during the first period and 0 otherwise, time2=1 during the second period and 0 otherwise, and so on. I am modelling the probability that the event occurs during each of these periods. When I use PROC LOGISTIC, I get sensible parameter estimates.
proc logistic data=blah;
model event (event=LAST)=time1--time20 /noint;
run;
This code delivers a parameter estimate for time1 of -3.0052, which gives a probability of the event occurring in time period 1 of .047. These estimates slowly get smaller for each time[i] variable, which is what I would expect. However, when I run my GLIMMIX code and add in the random effect for strata1, it blows up my model: the parameter estimates for time flip their sign. time1=2.84, time2=2.67, time3=2.41, and they consistently get smaller. I'm really confused as to why; this model is telling me that the probability of the event occurring is over 90% in this period, which I know to be untrue. Does anyone have any idea why this is? I would have expected these estimates to keep their negative sign.
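For reference, those probabilities follow from inverting the logit link:

p(time1, LOGISTIC) = 1 / (1 + exp(-(-3.0052))) = 1 / (1 + exp(3.0052)) ≈ 0.047
p(time1, GLIMMIX)  = 1 / (1 + exp(-2.84))                              ≈ 0.945, i.e. the "over 90%"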
Thanks.