My queryset:
Status.objects.filter(date__gte='2017-07-05', date__lt='2017-07-09', type='X').update(value=F('value') + 1)
My database:
date | value | value1 | value2 | type
2017-07-05 | 0 | 0 | 0 | X
2017-07-06 | 0 | 0 | 0 | X
2017-07-07 | 0 | 0 | 0 | X
2017-07-08 | 0 | 0 | 0 | X
2017-07-09 | 0 | 0 | 0 | X
2017-07-10 | 0 | 0 | 0 | X
I have two questions, and my queryset above doesn't work.
1 - How do I update the field "value" in a date range?
2 - How do I replace "value" with a variable?
update(value=F('value') + 1)
I need to dynamically select a field (value1, value2, value3) from the database to change its value.
You can pass a field name with a variable using this:
somename = 'some_field'  # 'value', 'value1', ... in your case
Status.objects.filter(Q(date__gte='2017-07-05'), Q(date__lt='2017-07-09'), Q(type='X')).update(**{somename: F(somename) + 1})
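The **{...} part is plain Python keyword-argument unpacking: a dict whose key is the field name becomes a keyword argument. A minimal sketch outside Django, with a stand-in update function, shows the mechanics:

```python
def update(**fields):
    """Stand-in for QuerySet.update(); just echoes the keyword arguments."""
    return fields

somename = 'value1'  # field name chosen at runtime

# update(**{somename: 10}) is exactly equivalent to update(value1=10)
print(update(**{somename: 10}))
```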
I have the table below:
+------+------+------+------+------+-----+
| Yr | col1 | col2 | col3 | col4 | PQR |
+------+------+------+------+------+-----+
| 2012 | 1 | 0 | 1 | 1 | 2 |
| 2012 | 0 | 1 | 0 | 0 | 4 |
| 2013 | 1 | 1 | 1 | 1 | 6 |
| 2014 | 0 | 0 | 0 | 0 | 8 |
| 2012 | 1 | 0 | 1 | 1 | 7 |
| 2013 | 0 | 1 | 0 | 0 | 3 |
| 2014 | 1 | 0 | 1 | 1 | 2 |
| 2012 | 0 | 1 | 0 | 0 | 10 |
| 2014 | 0 | 0 | 1 | 0 | 12 |
| 2014 | 0 | 0 | 0 | 0 | 5 |
+------+------+------+------+------+-----+
The output I want is as below
+------+-------+------+------+------+
| | Total | 2012 | 2013 | 2014 |
+------+-------+------+------+------+
| col1 | 17 | 9 | 6 | 2 |
| col2 | 23 | 14 | 9 | 0 |
| col3 | 29 | 9 | 6 | 14 |
| col4 | 17 | 9 | 6 | 2 |
+------+-------+------+------+------+
For row col1 in my output table:
The column `Total` is `SUM(PQR)` over the rows where `col1` is 1 in my input table; that gives the value `17`.
The column `2012` is `SUM(PQR)` over the rows where `col1` is 1 and `Yr = 2012` in my input table; that gives the value `9`.
Similarly, the `6` in column `2013` is `SUM(PQR)` where `col1` is 1 and `Yr` is 2013.
I hope the process to get the output table is clear.
I want to achieve the above result with SAS.
Any help will be really appreciated
Transpose the data into a categorical form and use PQR as a weight in your aggregating sum. Proc TABULATE is very adept at creating such tabulations.
data have;
infile datalines dlm='|'; input
Yr col1 col2 col3 col4 PQR ; datalines;
| 2012 | 1 | 0 | 1 | 1 | 2 |
| 2012 | 0 | 1 | 0 | 0 | 4 |
| 2013 | 1 | 1 | 1 | 1 | 6 |
| 2014 | 0 | 0 | 0 | 0 | 8 |
| 2012 | 1 | 0 | 1 | 1 | 7 |
| 2013 | 0 | 1 | 0 | 0 | 3 |
| 2014 | 1 | 0 | 1 | 1 | 2 |
| 2012 | 0 | 1 | 0 | 0 | 10 |
| 2014 | 0 | 0 | 1 | 0 | 12 |
| 2014 | 0 | 0 | 0 | 0 | 5 |
run;
data have_row_id / view=have_row_id;
set have;
rowid+1;
run;
proc transpose data=have_row_id out=have_categorical;
by rowid yr pqr;
run;
proc tabulate data=have_categorical;
class yr _name_;
var col1;
weight pqr;
table _name_='', col1='' * sum=''*f=8. * (all='Total' yr='') / nocellmerge;
run;
The ='' parts remove the label cells and compact the output.
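As a cross-check of the expected numbers (an aside, not part of the SAS solution), the same weighted tabulation can be sketched in pandas:

```python
import pandas as pd

# The input table from the question.
rows = [
    (2012, 1, 0, 1, 1, 2),
    (2012, 0, 1, 0, 0, 4),
    (2013, 1, 1, 1, 1, 6),
    (2014, 0, 0, 0, 0, 8),
    (2012, 1, 0, 1, 1, 7),
    (2013, 0, 1, 0, 0, 3),
    (2014, 1, 0, 1, 1, 2),
    (2012, 0, 1, 0, 0, 10),
    (2014, 0, 0, 1, 0, 12),
    (2014, 0, 0, 0, 0, 5),
]
df = pd.DataFrame(rows, columns=['Yr', 'col1', 'col2', 'col3', 'col4', 'PQR'])

# Weight each 0/1 indicator column by PQR, then sum per year.
cols = ['col1', 'col2', 'col3', 'col4']
weighted = df[cols].multiply(df['PQR'], axis=0)
by_year = weighted.groupby(df['Yr']).sum().T
by_year.insert(0, 'Total', by_year.sum(axis=1))
print(by_year)
```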
Let's say I have two pandas DataFrames, X and Y:
X =
+---+----------+---------+
| | Value1 | Value2 |
+---+----------+---------+
| A | 1 | NaN |
| B | 0 | 0 |
+---+----------+---------+
Y =
+---+----------+---------+
| | Value1 | Value2 |
+---+----------+---------+
| A | 2 | NaN |
| C | 30 | NaN |
+---+----------+---------+
I want to merge / join them based on the index (row name) resulting in this:
+---+----------+---------+
| | Value1 | Value2 |
+---+----------+---------+
| A | 1 | 2 |
| B | 0 | 0 |
| C | 30 | NaN |
+---+----------+---------+
Using merge with 'outer', the resulting table has one set of columns per input table instead of combining them. I need something that appends new rows at the end, but also appends new columns for a matching index.
This is the result of an 'outer' merge:
+---+----------+---------+----------+---------+
| | Value1_X | Value2_X| Value1_Y | Value2_Y|
+---+----------+---------+----------+---------+
| A | 1 | NaN | 2 | NaN |
| B | 0 | 0 | NaN | NaN |
| C | NaN | NaN | 30 | NaN |
+---+----------+---------+----------+---------+
This is almost what I want, but it ignores the original column labels...
On the result of the 'outer' merge:
X =
+---+----------+---------+----------+---------+
| | Value1_X | Value2_X| Value1_Y | Value2_Y|
+---+----------+---------+----------+---------+
| A | 1 | NaN | 2 | NaN |
| B | 0 | 0 | NaN | NaN |
| C | NaN | NaN | 30 | NaN |
+---+----------+---------+----------+---------+
run X = X.apply(lambda x: pd.Series(x.dropna().values), axis=1)
which will give:
0 1
A 1.0 2.0
B 0.0 0.0
C 30.0 NaN
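Putting both steps together in a runnable sketch (X and Y reconstructed from the tables above, with np.nan standing in for the NaN cells):

```python
import numpy as np
import pandas as pd

X = pd.DataFrame({'Value1': [1, 0], 'Value2': [np.nan, 0]}, index=['A', 'B'])
Y = pd.DataFrame({'Value1': [2, 30], 'Value2': [np.nan, np.nan]}, index=['A', 'C'])

# Outer merge on the index, then shift the non-NaN values left in each row.
merged = X.merge(Y, how='outer', left_index=True, right_index=True,
                 suffixes=('_X', '_Y'))
result = merged.apply(lambda row: pd.Series(row.dropna().values), axis=1)
print(result)
```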
I am looking at the following code at the following link:
https://www.geeksforgeeks.org/divide-and-conquer-set-2-karatsuba-algorithm-for-fast-multiplication/
// The main function that adds two bit sequences and returns the addition
string addBitStrings( string first, string second )
{
    string result;  // To store the sum bits

    // make the lengths same before adding
    int length = makeEqualLength(first, second);

    int carry = 0;  // Initialize carry

    // Add all bits one by one
    for (int i = length-1 ; i >= 0 ; i--)
    {
        int firstBit = first.at(i) - '0';
        int secondBit = second.at(i) - '0';

        // boolean expression for sum of 3 bits
        int sum = (firstBit ^ secondBit ^ carry) + '0';
        result = (char)sum + result;

        // boolean expression for 3-bit addition
        carry = (firstBit & secondBit) | (secondBit & carry) | (firstBit & carry);
    }

    // if overflow, then add a leading 1
    if (carry) result = '1' + result;

    return result;
}
I am having difficulty understanding the following expression:
// boolean expression for sum of 3 bits
int sum = (firstBit ^ secondBit ^ carry) + '0';
and this other expression:
// boolean expression for 3-bit addition
carry = (firstBit & secondBit) | (secondBit & carry) | (firstBit & carry);
What is the difference between the two? What are they trying to achieve?
Thanks
To understand this, a table with all possible combinations may help. (Luckily for us, the number of combinations is very limited for bits.)
Starting with AND (&), OR (|), XOR (^):
 a   b    a & b    a | b    a ^ b
----------------------------------
 0   0      0        0        0
 0   1      0        1        1
 1   0      0        1        1
 1   1      1        1        0
Putting it together:
 a  b  carry   a+b+carry   a^b^carry   a&b   b&carry   a&carry   (a&b)|(b&carry)|(a&carry)
-------------------------------------------------------------------------------------------
 0  0    0        00           0        0       0         0                  0
 0  0    1        01           1        0       0         0                  0
 0  1    0        01           1        0       0         0                  0
 0  1    1        10           0        0       1         0                  1
 1  0    0        01           1        0       0         0                  0
 1  0    1        10           0        0       0         1                  1
 1  1    0        10           0        1       0         0                  1
 1  1    1        11           1        1       1         1                  1
Please note how the last digit of a + b + carry is exactly the result of a ^ b ^ carry, and how (a & b) | (a & carry) | (b & carry) is exactly the first digit of a + b + carry.
The last detail: adding '0' (the ASCII code of the digit 0) to the respective result (0 or 1) translates it to the corresponding ASCII character ('0' or '1') again.
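The same full-adder logic, ported to a runnable Python sketch (an illustration, not the original C++; str.zfill plays the role of makeEqualLength):

```python
def add_bit_strings(first, second):
    # make the lengths the same before adding
    n = max(len(first), len(second))
    first, second = first.zfill(n), second.zfill(n)

    result, carry = '', 0
    for i in range(n - 1, -1, -1):
        a, b = int(first[i]), int(second[i])
        s = a ^ b ^ carry                            # sum bit: XOR of the three inputs
        carry = (a & b) | (b & carry) | (a & carry)  # carry-out: 1 when at least two inputs are 1
        result = str(s) + result

    # if overflow, then add a leading 1
    if carry:
        result = '1' + result
    return result

print(add_bit_strings('101', '11'))  # 5 + 3 = 8 -> '1000'
```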
I already asked a question here (https://stackoverflow.com/questions/28658283/c-getslotlisttokenpresent-pslotlist-pulcount-return-pulcount-0) about my SmartCard (https://en.wikipedia.org/wiki/Universal_electronic_card), but I would like to know: is it possible to read a specific record from a smart card, knowing the PIN code and where the record is located?
The card is designed around ISO 7816, so the APDU command must follow this scheme:
[CLA] [INS] [P1] [P2] [Lc field] [Data field] [Le field]
What should the APDU command look like, and which library is best to use from C++/C#, if I need the data from field 5F20?
P.S.: here is the data from the file sectors.ini:
[Sector1_11]
Icon = "IDENTIFICATION SECTOR"
BlockDescr1 = "0 | 0 | The data block for sharing"
BlockDescr2 = "0 | 0 | block public access to the PIN"
DataDescr21 = "DF27 | 1 | 6 | 0,0,0 | 1 | SNILS"
DataDescr22 = "DF2B | 4 | 8 | 0,0,0 | 1 | Number of MHI"
DataDescr23 = "5F20 | 0 | 26 | 0,0,0 | 1 | Name"
DataDescr24 = "DF23 | 0 | 100 | 0,0,0 | 1 | Address of the issuer"
DataDescr25 = "5F2B | 4 | 4 | 0,0,0 | 1 | Born"
DataDescr26 = "DF24 | 0 | 100 | 0,0,0 | 1 | Birthplace"
DataDescr27 = "5F35 | 3 | 1 | 0,0,0 | 1 | Paul"
DataDescr28 = "DF2D | 0 | 40 | 0,0,0 | 1 | Last"
DataDescr29 = "DF2E | 0 | 40 | 0,0,0 | 1 | Name"
DataDescr210 = "DF2F | 0 | 40 | 0,0,0 | 1 | Middle"
I only know that the third number indicates the amount of data in bytes.
I have a two-node 8xl cluster. Today I decided to take a look at some metrics that Amazon provides, and I noticed that some disks appear empty.
From the Amazon docs:
capacity (integer): Total capacity of the partition in 1 MB disk blocks.
SQL:
select owner, used, tossed, capacity, trim(mount) as mount
from stv_partitions
where capacity < 1;
owner | used | tossed | capacity | mount
-------+------+--------+----------+-----------
0 | 0 | 1 | 0 | /dev/xvdo
1 | 0 | 1 | 0 | /dev/xvdo
(2 rows)
Can someone explain why I am seeing this? Is this expected behaviour?
Updated:
owner | host | diskno | part_begin | part_end | used | tossed | capacity | reads | writes | seek_forward | seek_back | is_san | failed | mbps | mount
-------+------+--------+---------------+---------------+------+--------+----------+-------+--------+--------------+-----------+--------+--------+------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 | 1 | 13 | 0 | 1000126283776 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | /dev/xvdo
0 | 1 | 13 | 1000126283776 | 2000252567552 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | /dev/xvdo
It is because the device has failed (failed = 1), and hence the disk capacity is reported as 0.