How to increase the verbosity level of the SCIP solver log

I have the following SCIP solver log
time | node | left |LP iter|LP it/n| mem |mdpt |frac |vars |cons |cols |rows |
0.0s| 1 | 0 | 4 | - | 6k| 0 | 0 | 6 | 200 | 6 | 200 |
0.0s| 1 | 0 | 7 | - | 6k| 0 | 0 | 8 | 200 | 8 | 200 |
0.0s| 1 | 0 | 10 | - | 6k| 0 | 0 | 9 | 200 | 9 | 200 |
0.0s| 1 | 0 | 10 | - | 6k| 0 | 0 | 9 | 200 | 9 | 200 |
0.0s| 1 | 0 | 10 | - | 6k| 0 | 0 | 9 | 200 | 9 | 200 |
0.0s| 1 | 0 | 10 | - | 6k| 0 | 0 | 9 | 200 | 9 | 200 |
I want the log to be more verbose, i.e., display a new line at each LP iteration. So far I have only come across
SCIP_CALL( SCIPsetIntParam(scip, "display/verblevel", 5));
This increases verbosity, but not as much as I want and not where I want. Essentially I would like to have lines at LP iterations 4, 5, 6, 7, 8, 9, and 10 too.

You cannot print a line of SCIP output at every LP iteration. You can set display/freq to 1; SCIP will then display a line at every node.
Additionally, you can set display/lpinfo to true, so that the LP solver prints additional information. I don't think any LP solver will print a line for every LP iteration, though. Do you use SCIP with SoPlex?
Edit: I looked, and you can set the SoPlex display frequency to 1 with the parameter "--int:displayfreq". I don't think you can set this through the SCIP API, though. If you only want to solve the LP, you could do it directly in SoPlex, or you would have to edit the lpi_spx2 source code.
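For reference, the two display parameters mentioned above can also be collected in a SCIP settings file (a sketch; the file name is mine, the parameter names are SCIP's):

```
# scip.set -- load in the interactive shell with "set load scip.set",
# or from C with SCIPreadParams(scip, "scip.set")
display/freq = 1
display/lpinfo = TRUE
```

The same can be done programmatically with SCIPsetIntParam and SCIPsetBoolParam, mirroring the SCIPsetIntParam call in the question.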

Conditional count Measure

I have data looking like this:
| ID |OpID|
| -- | -- |
| 10 | 1 |
| 10 | 2 |
| 10 | 4 |
| 11 |null|
| 12 | 3 |
| 12 | 4 |
| 13 | 1 |
| 13 | 2 |
| 13 | 3 |
| 14 | 2 |
| 14 | 4 |
Here, OpID 4 means both 1 and 2.
I would like to count the occurrences of 1, 2, and 3 in OpID across distinct IDs.
The count for OpID 1 would then be 4, for 2 it would be 4, and for 3 it would be 2.
If an ID has OpID 4 but already has rows with 1 and 2, the 4 wouldn't be counted. But if 4 exists and only 1 (or 2) is present, the count for 2 (or 1) would be incremented.
The expected output would be:
|OpID|Count|
| -- | --- |
| 1 | 4 |
| 2 | 4 |
| 3 | 2 |
(Going to be using the results in a column chart)
Hope this makes sense...
edit: there are other columns too, and an ID/OpID pair can be duplicated, hence the need for a group-by first.

How to sum all working days for each month, restarting from 0 every month, in Power BI DAX

I would like to know how I could get the sum of all working days for a specific month, with the running total restarting at the beginning of each month.
This is my DateTable. Currently I use this expression for Work Days Sum:
Work Days Sum =
CALCULATE (
    SUM ( 'DateTable'[Is working Day] ),
    ALL ( 'DateTable' ),
    'DateTable'[Date] <= EARLIER ( 'DateTable'[Date] )
)
Date | Month Order | Is working day | Work Days Sum |
January - 21 331
2022/01/01 | 1 | 0 | |
2022/01/02 | 1 | 0 | |
2022/01/03 | 1 | 1 | 1 |
2022/01/04 | 1 | 1 | 2 |
2022/01/05 | 1 | 1 | 3 |
2022/01/06 | 1 | 1 | 4 |
.....
2022/01/27 | 1 | 1 | 19 |
2022/01/28 | 1 | 1 | 20 |
2022/01/29 | 1 | 0 | 20 |
2022/01/30 | 1 | 0 | 20 |
2022/01/31 | 1 | 1 | 21 |
February 20 890
2022/02/01 | 2 | 1 | 22 |
2022/02/02 | 2 | 1 | 23 |
2022/02/03 | 2 | 1 | 24 |
2022/02/04 | 2 | 1 | 25 |
|
|
V
Date | Month Order | Is working day | Work Days Sum |
January - 21 21
2022/01/01 | 1 | 0 | |
2022/01/02 | 1 | 0 | |
2022/01/03 | 1 | 1 | 1 |
2022/01/04 | 1 | 1 | 2 |
2022/01/05 | 1 | 1 | 3 |
2022/01/06 | 1 | 1 | 4 |
.....
2022/01/27 | 1 | 1 | 19 |
2022/01/28 | 1 | 1 | 20 |
2022/01/29 | 1 | 0 | 20 |
2022/01/30 | 1 | 0 | 20 |
2022/01/31 | 1 | 1 | 21 |
February 20 41
2022/02/01 | 2 | 1 | 1 |
2022/02/02 | 2 | 1 | 2 |
2022/02/03 | 2 | 1 | 3 |
2022/02/04 | 2 | 1 | 4 |
2022/02/05 | 2 | 0 | 4 |
.....
Any idea on how I can change my DAX expression to achieve the output of the second table (below the down arrow) would be much appreciated.
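One common pattern for a per-group running total in a calculated column (a sketch, not tested against your model; it assumes Work Days Sum is a calculated column and that 'DateTable'[Month Order] together with the year identifies a month) is to replace the plain ALL with a FILTER that keeps only rows of the same month:

```dax
Work Days Sum =
CALCULATE (
    SUM ( 'DateTable'[Is working Day] ),
    FILTER (
        ALL ( 'DateTable' ),
        YEAR ( 'DateTable'[Date] ) = YEAR ( EARLIER ( 'DateTable'[Date] ) )
            && 'DateTable'[Month Order] = EARLIER ( 'DateTable'[Month Order] )
            && 'DateTable'[Date] <= EARLIER ( 'DateTable'[Date] )
    )
)
```

The YEAR comparison is only needed if the table spans more than one year; with a single year of data, matching Month Order alone restarts the sum each month.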

Maximum overlapping of arithmetic sequences with intervals

I have n arithmetic sequences with intervals. I need to find the first point at which the most of these intervals overlap. The sequences are infinite. Let me give an example with finite sequences, where I have 8 sequences in total.
Given,
N=8. So, there are 8 sequences. The sequences are as follows:
seq-1: [{1,2,3,4,5,6,7,8}..{17,18,19,20,21,22,23,24}]
seq-2: [{9,10,11,12,13,14,15,16}..{25,26,27,28,29,30,31,32}]
seq-3: [{1,2,3,4}..{9,10,11,12}..{17,18,19,20}..{25,26,27,28}]
seq-4: [{5,6,7,8}..{13,14,15,16}..{21,22,23,24}..{29,30,31,32}]
seq-5: [{5}..{13}..{21}..{29}]
seq-6: [{4}..{8}..{12}..{16}..{20}..{24}..{28}..{32}]
seq-7: [{9,11,13,15}..{25,27,29,31}]
seq-8: [{2}..{18}]
Here, points 13 and 29 have the maximum overlap, with 4 overlaps each, and the first such point is 13.
Can I solve this using some efficient algorithm, e.g. O(n), O(n log n), O(n^2), O(n^3), or O(n^4)?
Here, the value of n is 8.
I suggest applying the distribution counting algorithm.
Below is a simple demo for the algorithm with 3 sample sequences:
seq-1: [{1,2}..{9,10}]
seq-2: [{1,2,3}..{5,7,8}]
seq-3: [{2,3,4}..{6,7}..{9,10}]
First, find the maximum value across all sequences, which is 10 in this case.
Create an int array of 11 elements, indexed 0 to 10.
i | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
--------------------------------------------------
A[i]| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Now we count the appearance of each element across all sequences by increasing its counter by 1.
seq-1: [{1,2}..{9,10}]
This sequence contains 1, 2, 9, and 10.
Increase the values at indices 1, 2, 9, and 10 by 1.
i | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
--------------------------------------------------
A[i]| 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
seq-2: [{1,2,3}..{5,7,8}]
This sequence contains 1, 2, 3, 5, 7, and 8.
Increase the values at indices 1, 2, 3, 5, 7, and 8 by 1.
i | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
--------------------------------------------------
A[i]| 0 | 2 | 2 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 1 |
seq-3: [{2,3,4}..{6,7}..{9,10}]
This sequence contains 2, 3, 4, 6, 7, 9, and 10.
Increase the values at indices 2, 3, 4, 6, 7, 9, and 10 by 1.
i | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
--------------------------------------------------
A[i]| 0 | 2 | 3 | 2 | 1 | 1 | 1 | 2 | 1 | 2 | 2 |
In the end, it is clear that the number 2 has the maximum overlap count, 3, across all the sequences.
Hope my suggestion helps!
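The counting steps above can be sketched in Python (the function and variable names are mine; each sequence is given as a list of interval sets, as in the walkthrough):

```python
from collections import Counter

def first_max_overlap(sequences):
    """Count how many sequences each point appears in, then return
    (first point with the maximum overlap, that overlap count)."""
    counts = Counter()
    for seq in sequences:
        # A point is counted at most once per sequence, hence the set().
        for point in set(p for interval in seq for p in interval):
            counts[point] += 1
    best = max(counts.values())
    return min(p for p, c in counts.items() if c == best), best

# The three demo sequences from the walkthrough:
seqs = [
    [{1, 2}, {9, 10}],
    [{1, 2, 3}, {5, 7, 8}],
    [{2, 3, 4}, {6, 7}, {9, 10}],
]
print(first_max_overlap(seqs))  # → (2, 3)
```

Building the counter is linear in the total number of points, which matches the array-update steps shown above.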

Empty disks in Redshift Cluster

I have a two-node 8xl cluster. Today I decided to take a look at some of the metrics that Amazon provides, and I noticed that some disks are empty.
From Amazon docs:
capacity (integer): Total capacity of the partition in 1 MB disk blocks.
SQL:
select owner, used, tossed, capacity, trim(mount) as mount
from stv_partitions
where capacity < 1;
owner | used | tossed | capacity | mount
-------+------+--------+----------+-----------
0 | 0 | 1 | 0 | /dev/xvdo
1 | 0 | 1 | 0 | /dev/xvdo
(2 rows)
Can someone explain why I am seeing this? Is this expected behaviour?
Update:
owner | host | diskno | part_begin | part_end | used | tossed | capacity | reads | writes | seek_forward | seek_back | is_san | failed | mbps | mount
-------+------+--------+---------------+---------------+------+--------+----------+-------+--------+--------------+-----------+--------+--------+------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 | 1 | 13 | 0 | 1000126283776 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | /dev/xvdo
0 | 1 | 13 | 1000126283776 | 2000252567552 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | /dev/xvdo
It is because the device has failed (failed = 1); hence the disk capacity is reported as 0.
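You can cross-check this from the same system table (a sketch; it uses only columns already shown in the stv_partitions output above):

```sql
-- Failed devices report zero capacity; the failed flag confirms it.
select owner, host, diskno, failed, capacity, trim(mount) as mount
from stv_partitions
where failed = 1;
```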

Stata: Cumulative number of new observations

I would like to check whether a value has appeared in some previous row of the same column.
In the end I would like to have a cumulative count of the number of distinct observations.
Is there any solution other than concatenating all _n rows and using regular expressions? I am getting there with concatenation, but given the limit of 244 characters for string variables (in Stata <13), this is sometimes not applicable.
Here's what I'm doing right now:
gen tmp=x
replace tmp = tmp[_n-1]+ "," + tmp if _n > 1
gen cumu=0
replace cumu=1 if regexm(tmp[_n-1],x+"|"+x+",|"+","+x+",")==0
replace cumu= sum(cumu)
Example
+-----+
| x |
|-----|
1. | 12 |
2. | 32 |
3. | 12 |
4. | 43 |
5. | 43 |
6. | 3 |
7. | 4 |
8. | 3 |
9. | 3 |
10. | 3 |
+-----+
becomes
+-------------------------------+
| x | tmp |
|-----|--------------------------
1. | 12 | 12 |
2. | 32 | 12,32 |
3. | 12 | 12,32,12 |
4. | 43 | 3,32,12,43 |
5. | 43 | 3,32,12,43,43 |
6. | 3 | 3,32,12,43,43,3 |
7. | 4 | 3,32,12,43,43,3,4 |
8. | 3 | 3,32,12,43,43,3,4,3 |
9. | 3 | 3,32,12,43,43,3,4,3,3 |
10. | 3 | 3,32,12,43,43,3,4,3,3,3|
+--------------------------------+
and finally
+-----------+
| x | cumu|
|-----|------
1. | 12 | 1 |
2. | 32 | 2 |
3. | 12 | 2 |
4. | 43 | 3 |
5. | 43 | 3 |
6. | 3 | 4 |
7. | 4 | 5 |
8. | 3 | 5 |
9. | 3 | 5 |
10. | 3 | 5 |
+-----------+
Any ideas how to avoid the 'middle step'? (For me this becomes very important when x contains strings instead of numbers.)
Thanks!
Regular expressions are great, but here as often elsewhere simple calculations suffice. With your sample data
. input x
x
1. 12
2. 32
3. 12
4. 43
5. 43
6. 3
7. 4
8. 3
9. 3
10. 3
11. end
end of do-file
you can identify first occurrences of each distinct value:
. gen long order = _n
. bysort x (order) : gen first = _n == 1
. sort order
. l
+--------------------+
| x order first |
|--------------------|
1. | 12 1 1 |
2. | 32 2 1 |
3. | 12 3 0 |
4. | 43 4 1 |
5. | 43 5 0 |
|--------------------|
6. | 3 6 1 |
7. | 4 7 1 |
8. | 3 8 0 |
9. | 3 9 0 |
10. | 3 10 0 |
+--------------------+
The number of distinct values seen so far is then just a cumulative sum of first, using sum(). This works with string variables too. In fact, this problem is one of several discussed in
http://www.stata-journal.com/sjpdf.html?articlenum=dm0042
which is accessible to all as a PDF. search distinct in Stata would have pointed you to this article.
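Continuing the example above, the final step is then simply:

```stata
gen cumu = sum(first)
```

In generate, sum() computes a running sum, so cumu gives exactly the desired cumulative count of distinct values.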
Becoming fluent with what you can do with by:, sort, _n and _N is an important skill in Stata. See also
http://www.stata-journal.com/sjpdf.html?articlenum=pr0004
for another article accessible to all.