I have a two-node 8xl cluster. Today I decided to take a look at some of the metrics Amazon provides, and I noticed that some disks appear to be empty.
From the Amazon docs:
capacity (integer): Total capacity of the partition in 1 MB disk blocks.
SQL:
select owner, used, tossed, capacity, trim(mount) as mount
from stv_partitions
where capacity < 1;
owner | used | tossed | capacity | mount
-------+------+--------+----------+-----------
0 | 0 | 1 | 0 | /dev/xvdo
1 | 0 | 1 | 0 | /dev/xvdo
(2 rows)
Can someone explain why I am seeing this? Is this expected behaviour?
Update:
owner | host | diskno | part_begin | part_end | used | tossed | capacity | reads | writes | seek_forward | seek_back | is_san | failed | mbps | mount
-------+------+--------+---------------+---------------+------+--------+----------+-------+--------+--------------+-----------+--------+--------+------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 | 1 | 13 | 0 | 1000126283776 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | /dev/xvdo
0 | 1 | 13 | 1000126283776 | 2000252567552 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | /dev/xvdo
It is due to the fact that the device has failed (failed = 1), and hence the disk capacity is reported as 0.
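To check this on a live cluster, here is a query sketch against the same stv_partitions system view, using only the columns shown in the output above:

```sql
-- Sketch: list partitions that Redshift has marked as failed
select owner, host, diskno, capacity, failed, trim(mount) as mount
from stv_partitions
where failed = 1;
```

Any rows returned correspond to partitions whose capacity will be reported as 0, as in the output above.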
I would like to know how I could get the sum of all working days for a specific month, with the running sum restarting at the beginning of each month.
This is my DateTable, together with the query I currently use for Work Days Sum:
Work Days Sum =
CALCULATE (
SUM ( 'DateTable'[Is working Day] ),
ALL ( 'DateTable' ),
'DateTable'[Date] <= EARLIER ( 'DateTable'[Date] )
)
Date | Month Order | Is working day | Work Days Sum |
January - 21 331
2022/01/01 | 1 | 0 | |
2022/01/02 | 1 | 0 | |
2022/01/03 | 1 | 1 | 1 |
2022/01/04 | 1 | 1 | 2 |
2022/01/05 | 1 | 1 | 3 |
2022/01/06 | 1 | 1 | 4 |
.....
2022/01/27 | 1 | 1 | 19 |
2022/01/28 | 1 | 1 | 20 |
2022/01/29 | 1 | 0 | 20 |
2022/01/30 | 1 | 0 | 20 |
2022/01/31 | 1 | 1 | 21 |
February 20 890
2022/02/01 | 2 | 1 | 22 |
2022/02/02 | 2 | 1 | 23 |
2022/02/03 | 2 | 1 | 24 |
2022/02/04 | 2 | 1 | 25 |
|
|
V
Date | Month Order | Is working day | Work Days Sum |
January - 21 21
2022/01/01 | 1 | 0 | |
2022/01/02 | 1 | 0 | |
2022/01/03 | 1 | 1 | 1 |
2022/01/04 | 1 | 1 | 2 |
2022/01/05 | 1 | 1 | 3 |
2022/01/06 | 1 | 1 | 4 |
.....
2022/01/27 | 1 | 1 | 19 |
2022/01/28 | 1 | 1 | 20 |
2022/01/29 | 1 | 0 | 20 |
2022/01/30 | 1 | 0 | 20 |
2022/01/31 | 1 | 1 | 21 |
February 20 41
2022/02/01 | 2 | 1 | 1 |
2022/02/02 | 2 | 1 | 2 |
2022/02/03 | 2 | 1 | 3 |
2022/02/04 | 2 | 1 | 4 |
2022/02/05 | 2 | 0 | 4 |
.....
Any idea how I can change my DAX query to achieve the output of the second table (below the down arrow) would be much appreciated.
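For what it's worth, the target behaviour is a cumulative sum that restarts at each month boundary; in DAX, one common approach is to replace ALL('DateTable') with a filter that keeps the current month's rows (e.g. ALLEXCEPT over the month column), though I can't verify DAX here. A minimal Python sketch of the intended logic, with the sample rows abbreviated:

```python
# Sketch: cumulative working-day count that resets each month.
# Illustrates the desired "Work Days Sum" column, not actual DAX.
rows = [
    ("2022/01/01", 1, 0), ("2022/01/02", 1, 0), ("2022/01/03", 1, 1),
    ("2022/01/04", 1, 1), ("2022/01/31", 1, 1),
    ("2022/02/01", 2, 1), ("2022/02/02", 2, 1), ("2022/02/05", 2, 0),
]

def work_days_sum(rows):
    result = []
    running, current_month = 0, None
    for date, month, is_working in rows:
        if month != current_month:  # month boundary: restart the running sum
            running, current_month = 0, month
        running += is_working
        result.append((date, running))
    return result

for date, total in work_days_sum(rows):
    print(date, total)
```

The reset on the month-boundary check is exactly what the second table shows: February starts counting again from 1.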
I have the following SCIP solver log
time | node | left |LP iter|LP it/n| mem |mdpt |frac |vars |cons |cols |rows |
0.0s| 1 | 0 | 4 | - | 6k| 0 | 0 | 6 | 200 | 6 | 200 |
0.0s| 1 | 0 | 7 | - | 6k| 0 | 0 | 8 | 200 | 8 | 200 |
0.0s| 1 | 0 | 10 | - | 6k| 0 | 0 | 9 | 200 | 9 | 200 |
0.0s| 1 | 0 | 10 | - | 6k| 0 | 0 | 9 | 200 | 9 | 200 |
0.0s| 1 | 0 | 10 | - | 6k| 0 | 0 | 9 | 200 | 9 | 200 |
0.0s| 1 | 0 | 10 | - | 6k| 0 | 0 | 9 | 200 | 9 | 200 |
I want the log to be more verbose, i.e. to display a new line at each LP iteration. So far I have only come across
SCIP_CALL( SCIPsetIntParam(scip, "display/verblevel", 5));
This increases verbosity, but not as much as I want and not where I want it. Essentially, I would like to have lines at LP iterations 4, 5, 6, 7, 8, 9 and 10 too.
You cannot print a line of SCIP output at every LP iteration. You can set display/freq to 1; SCIP will then display a line at every node.
Additionally, you can set display/lpinfo to true; the LP solver will then print additional information. I don't think any LP solver will print a line for every LP iteration, though. Do you use SCIP with SoPlex?
Edit: I looked, and you can set the SoPlex display frequency to 1 with the parameter "--int:displayfreq". I don't think you can set this through the SCIP API, though. If you only want to solve the LP, you could do it directly in SoPlex; otherwise you would have to edit the lpi_spx2 source code.
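For reference, the two SCIP parameters mentioned above can also be placed in a SCIP settings (.set) file instead of being set through the C API; a sketch:

```
# SCIP settings file: one display line per node, plus extra LP solver output
display/freq = 1
display/lpinfo = TRUE
```

Load it with `set load <file>.set` in the interactive shell, or via SCIPreadParams from code.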
This configuration is no longer valid in version 2.0:
<!-- Enable off-heap storage with unlimited size. -->
<property name="offHeapMaxMemory" value="0"/>
Error:
WARNING: Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'grid.cfg' defined in URL [file:/home/ignite/sample-cache.xml]: Cannot create inner bean
'org.apache.ignite.configuration.CacheConfiguration#4cc0edeb' of type [org.apache.ignite.configuration.CacheConfiguration] while setting bean
property 'cacheConfiguration' with key [0]; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with
name 'org.apache.ignite.configuration.CacheConfiguration#4cc0edeb' defined in URL [file:/home/ignite/sample-cache.xml]: Error setting property
values; nested exception is org.springframework.beans.NotWritablePropertyException: Invalid property 'offHeapMaxMemory' of bean class
[org.apache.ignite.configuration.CacheConfiguration]: Bean property 'offHeapMaxMemory' is not writable or has an invalid setter method. Does the
parameter type of the setter match the return type of the getter?
Visor SnapShot
Time of the snapshot: 07/07/17, 16:54:35
+===========================================================================================================================+
| Name(#) | Mode | Nodes | Entries (Heap / Off-heap) | Hits | Misses | Reads | Writes |
+===========================================================================================================================+
| txnCache(#c0) | PARTITIONED | 1 | min: 2917681 (2917681 / 0) | min: 0 | min: 0 | min: 0 | min: 0 |
| | | | avg: 2917681.00 (2917681.00 / 0.00) | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |
| | | | max: 2917681 (2917681 / 0) | max: 0 | max: 0 | max: 0 | max: 0 |
+---------------------------------------------------------------------------------------------------------------------------+
Cache 'txnCache(#c0)':
+--------------------------------------------------------------+
| Name(#) | txnCache(#c0) |
| Nodes | 1 |
| Total size Min/Avg/Max | 2917681 / 2917681.00 / 2917681 |
| Heap size Min/Avg/Max | 2917681 / 2917681.00 / 2917681 |
| Off-heap size Min/Avg/Max | 0 / 0.00 / 0 |
+--------------------------------------------------------------+
Nodes for: txnCache(#c0)
+============================================================================================================+
| Node ID8(#), IP | CPUs | Heap Used | CPU Load | Up Time | Size | Hi/Mi/Rd/Wr |
+============================================================================================================+
| 924C5A56(#n0), 10.0.2.55 | 2 | 8.93 % | 93.83 % | 00:12:31:969 | Total: 2917681 | Hi: 0 |
| | | | | | Heap: 2917681 | Mi: 0 |
| | | | | | Off-Heap: 0 | Rd: 0 |
| | | | | | Off-Heap Memory: 0 | Wr: 0 |
+------------------------------------------------------------------------------------------------------------+
'Hi' - Number of cache hits.
'Mi' - Number of cache misses.
'Rd' - Number of cache reads.
'Wr' - Number of cache writes.
Aggregated queries metrics:
Minimum execution time: 00:00:00:000
Maximum execution time: 00:00:00:000
Average execution time: 00:00:00:000
Total number of executions: 0
Total number of failures: 0
The Visor snapshot shows Off-Heap/Off-Heap Memory as 0. The documentation says that off-heap storage is enabled by default. Is there a threshold before data is stored off-heap? How can I configure that?
There is no offHeapMaxMemory property in CacheConfiguration since 2.0.
Yes, since version 2.0 caches store data off-heap by default.
You can check it with:
cache.size(CachePeekMode.OFFHEAP)
Also, Visor does not count the "off-heap entries count" metric correctly, but this has already been fixed, and the fix will be available in version 2.1.
I already asked a question here (https://stackoverflow.com/questions/28658283/c-getslotlisttokenpresent-pslotlist-pulcount-return-pulcount-0) about my smart card (https://en.wikipedia.org/wiki/Universal_electronic_card), but I would like to know: is it possible to get a specific record from a smart card, knowing the PIN code and where the record is located?
The card layout follows ISO 7816, so the APDU command must be based on the following scheme:
[CLA] [INS] [P1] [P2] [Lc field] [Data field] [Le field]
What should the APDU command look like, and which library is best to use from C++/C#, if I need the data from the field 5F20?
P.S.: here is the data from the file sectors.ini:
[Sector1_11]
Icon = "IDENTIFICATION SECTOR"
BlockDescr1 = "0 | 0 | The data block for sharing"
BlockDescr2 = "0 | 0 | block public access to the PIN"
DataDescr21 = "DF27 | 1 | 6 | 0,0,0 | 1 | SNILS"
DataDescr22 = "DF2B | 4 | 8 | 0,0,0 | 1 | Number of MHI"
DataDescr23 = "5F20 | 0 | 26 | 0,0,0 | 1 | Name"
DataDescr24 = "DF23 | 0 | 100 | 0,0,0 | 1 | Address of the issuer"
DataDescr25 = "5F2B | 4 | 4 | 0,0,0 | 1 | Born"
DataDescr26 = "DF24 | 0 | 100 | 0,0,0 | 1 | Birthplace"
DataDescr27 = "5F35 | 3 | 1 | 0,0,0 | 1 | Paul"
DataDescr28 = "DF2D | 0 | 40 | 0,0,0 | 1 | Last"
DataDescr29 = "DF2E | 0 | 40 | 0,0,0 | 1 | Name"
DataDescr210 = "DF2F | 0 | 40 | 0,0,0 | 1 | Middle"
I only know that the third number indicates the amount of data in bytes.
I don't understand what my professor means when he says the write flag and the read flag. Does 0 mean the flag is triggered?
He wants us to draw a state transition diagram, but I think I can do that myself once I understand what is going on.
+---------+------------+-----------+----------------+
| Counter | Write flag | Read flag | Interpretation |
+---------+------------+-----------+----------------+
| 0 | 0 | 0 | Write locked |
| 0 | 0 | 1 | Invalid |
| 0 | 1 | 0 | Invalid |
| 0 | 1 | 1 | Available |
| N | 0 | 0 | Write request |
| N | 0 | 1 | Read locked |
| N | 1 | 0 | Invalid |
| N | 1 | 1 | Invalid |
+---------+------------+-----------+----------------+
The write flag and the read flag are each a boolean value, meaning each can hold a 0 or a 1. The state appears to be defined by the value of the counter together with the two flags. I think your professor is asking you to draw a state diagram that shows the transitions between the different counter/flag combinations. (My guess is that the intent is to collapse all the counter > 0 sub-states into a single sub-state labeled counter = N.)
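To make the collapsing concrete, here is a minimal Python sketch that maps each (counter, write flag, read flag) combination to the interpretation given in the table, treating the counter as just 0 versus N (> 0), as suggested above:

```python
# Sketch: map each (counter, write flag, read flag) combination to the
# interpretation from the table. The counter is collapsed to 0 vs N (> 0).
def interpret(counter, write_flag, read_flag):
    states = {
        (False, 0, 0): "Write locked",
        (False, 1, 1): "Available",
        (True, 0, 0): "Write request",
        (True, 0, 1): "Read locked",
    }
    return states.get((counter > 0, write_flag, read_flag), "Invalid")

print(interpret(0, 1, 1))  # Available
print(interpret(3, 0, 1))  # Read locked
print(interpret(0, 0, 1))  # Invalid
```

The four named entries are the only valid states; every other combination falls through to "Invalid", which is why a state diagram over these four states (plus the collapsed counter = N) covers the whole table.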