Django ORM calculations between records

Is it possible to perform calculations between records in a Django query?
I know how to perform calculations across records (e.g. data_a + data_b). Is there a way to perform, say, the percent change between data_a row 0 and row 4 (i.e. 09-30-17 and 09-30-16)?
+----------+--------+--------+
|   date   | data_a | data_b |
+----------+--------+--------+
| 09-30-17 |    100 |    200 |
| 06-30-17 |     95 |    220 |
| 03-31-17 |     85 |    205 |
| 12-31-16 |     80 |    215 |
| 09-30-16 |     75 |    195 |
+----------+--------+--------+
I am currently using Pandas to perform these types of calculations, but would like to eliminate this additional step if possible.

I would go with a database cursor and raw SQL (see https://docs.djangoproject.com/en/2.0/topics/db/sql/), combined with a LAG() window function, like so:
result = cursor.execute("""
    select date,
           data_a - lag(data_a) over (order by date) as data_change
    from foo;""")
This is the general idea; you might need to adapt it to your needs.
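For completeness, here is a minimal sketch of how that raw query might be run and read back through Django's cursor API, assuming the table is called foo (adjust the table and column names to your schema):

from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("""
        select date,
               data_a - lag(data_a) over (order by date) as data_change
        from foo
        order by date;""")
    rows = cursor.fetchall()  # list of (date, data_change) tuples; data_change is None for the earliest row

Since Django 2.0 the same LAG() can also be expressed through the ORM's window expressions (django.db.models.Window with django.db.models.functions.Lag), if you prefer to avoid raw SQL.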

There is no row 0 in a Django database, so we'll assume rows 1 and 5.
The general formula for percentage change, expressed in Python, is:
((b - a) / a) * 100
where a is the starting number and b is the ending number. So in your example:
>>> a = 100
>>> b = 75
>>> ((b - a) / a) * 100
-25.0
If your model is called Foo, the query you want is:
(a, b) = Foo.objects.filter(id__in=[id_1, id_2]).values_list('data_a', flat=True)
values_list says "get just these fields", and flat=True means you want a simple list of values rather than a list of one-element tuples. By assigning the result to the (a, b) tuple and using the __in= lookup, you get to do this as a single query rather than as two.
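A quick illustration of the difference, as hypothetical shell output (assuming the Foo rows with ids 1 and 5 hold the data_a values 100 and 75):

>>> Foo.objects.filter(id__in=[1, 5]).values_list('data_a')
<QuerySet [(100,), (75,)]>
>>> Foo.objects.filter(id__in=[1, 5]).values_list('data_a', flat=True)
<QuerySet [100, 75]>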
I would wrap it all up into a standalone function or model method:
def pct_change(id_1, id_2):
    # Get a single column from two rows and return the percentage change
    (a, b) = Foo.objects.filter(id__in=[id_1, id_2]).values_list('data_a', flat=True)
    return ((b - a) / a) * 100
And then if you know the row IDs in the db for the two rows you want to compare, it's just:
print(pct_change(233, 8343))
If you'd like to calculate the change progressively, between row 1 and row 2, then between row 2 and row 3, and so on, you'd just run this function sequentially for each row in a queryset. Because row IDs might have gaps, we can't just use n+1 to find the next row. Instead, start by getting a list of all the row IDs in a queryset:
rows = [r.id for r in Foo.objects.all().order_by('date')]
Which evaluates to something like
rows = [1,2,3,5,6,9,13]
Now, for each element in the list and the next element in the list, run our function:
for (index, row) in enumerate(rows):
    if index < len(rows) - 1:  # the last row has no following row to compare against
        current, next_ = row, rows[index + 1]
        print(current, next_)
        print(pct_change(current, next_))
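As a small aside, the same pairing of each ID with its successor can be written with zip, which avoids the index bookkeeping; just a sketch, equivalent to the loop above:

# Pair each ID with the next one and report the change for the pair
for current, next_ in zip(rows, rows[1:]):
    print(current, next_, pct_change(current, next_))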

Related

How to repeatedly insert arguments from a list into a function until the list is empty?

Using R, I am simulating the outcome of an experiment where participants choose between two options (A or B), defined by their outcomes (x) and probabilities of winning the outcome (p). I have a function "f" that collects its arguments in a matrix with the columns "x" (outcome) and "p" (probability):
f <- function(x, p) {
  t <- matrix(c(x, p), ncol = 2)
  colnames(t) <- c("x", "p")
  t
}
I want to use this function to compile a big list of all the trials in the experiment. One way to do this is:
t1 <- list(`1A` = f(x = c(10), p = c(0.8)),
           `1B` = f(x = c(5),  p = c(1)))
t2 <- list(`2A` = f(x = c(11), p = c(0.8)),
           `2B` = f(x = c(7),  p = c(1)))
.
.
.
tn <- list(`nA` = f(x = c(3), p = c(0.8)),
           `nB` = f(x = c(2), p = c(1)))
Big_list <- list(t1 = t1, t2 = t2, ... tn = tn)
rm(t1, t2, ... tn)
However, I have very many trials, and they may change in future simulations, which is why repeating myself in this way is intractable. I have my trials in an Excel document with the following structure:
| Option | x  | p   |
|--------|----|-----|
| A      | 10 | 0.8 |
| B      | 7  | 1   |
| A      | 9  | 0.8 |
| B      | 5  | 1   |
| ...    | ...| ... |
I am trying to write some kind of loop that takes "x" and "p" from each "A" and "B" and inserts them into the function f, skipping two rows ahead after each iteration (so that each option is only inserted once). This way, I want to get a set of lists t1 to tn while not having to hardcode everything. This is my best (but still not very good) attempt to explain it in pseudocode:
TRIALS <- read.excel(file_with_trials)
for n = 1 to n = (nrows(TRIALS) - 1) {
    t(*PRINT 'n' HERE*) <- list(
        (*PRINT 'n' HERE*)A =
            f(x = c(*INSERT COLUMN 1, ROW n FROM "TRIALS"*),
              p = c(*INSERT COLUMN 2, ROW n FROM "TRIALS"*)),
        (*PRINT 'n' HERE*)B =
            f(x = c(*INSERT COLUMN 1, ROW n+1 FROM "TRIALS"*),
              p = c(*INSERT COLUMN 2, ROW n+1 FROM "TRIALS"*)))
}
Big_list <- list(t1=t1, t2=t2, ... tn=tn)
That is, I want the code to create a numbered set of lists by drawing x and p from each pair of rows until my Excel file is empty.
Any help (and feedback on how to improve this question) is greatly appreciated!

How do I find the most frequent element in a list in pyspark?

I have a PySpark dataframe with two columns, ID and Elements. The "Elements" column holds a list in each row. It looks like this:
ID | Elements
_______________________________________
X |[Element5, Element1, Element5]
Y |[Element Unknown, Element Unknown, Element_Z]
I want to add a column with the most frequent element from the 'Elements' column. The output should look like this:
ID | Elements | Output_column
__________________________________________________________________________
X |[Element5, Element1, Element5] | Element5
Y |[Element Unknown, Element Unknown, Element_Z] | Element Unknown
How can I do that using pyspark?
Thanks in advance.
We can use higher-order functions here (available from Spark 2.4+).
First, use transform and aggregate to get counts for each distinct value in the array.
Then sort the array of structs in descending order of count and take the first element.
from pyspark.sql import functions as F

temp = (df.withColumn("Dist", F.array_distinct("Elements"))
          .withColumn("Counts", F.expr("""transform(Dist, x ->
                          aggregate(Elements, 0, (acc, y) -> IF(y = x, acc + 1, acc))
                      )"""))
          .withColumn("Map", F.arrays_zip("Dist", "Counts"))
        ).drop("Dist", "Counts")

out = temp.withColumn("Output_column",
          F.expr("""element_at(array_sort(Map, (first, second) ->
                    CASE WHEN first['Counts'] > second['Counts'] THEN -1 ELSE 1 END), 1)['Dist']"""))
Output:
Note that I have added an empty array for ID Z to test. Also, you can drop the Map column by adding .drop("Map") to the output.
out.show(truncate=False)
+---+---------------------------------------------+--------------------------------------+---------------+
|ID |Elements |Map |Output_column |
+---+---------------------------------------------+--------------------------------------+---------------+
|X |[Element5, Element1, Element5] |[{Element5, 2}, {Element1, 1}] |Element5 |
|Y |[Element Unknown, Element Unknown, Element_Z]|[{Element Unknown, 2}, {Element_Z, 1}]|Element Unknown|
|Z |[] |[] |null |
+---+---------------------------------------------+--------------------------------------+---------------+
For lower Spark versions, you can use a udf with statistics.mode:
from pyspark.sql import functions as F, types as T
from statistics import mode

u = F.udf(lambda x: mode(x) if len(x) > 0 else None, T.StringType())
df.withColumn("Output", u("Elements")).show(truncate=False)
+---+---------------------------------------------+---------------+
|ID |Elements |Output |
+---+---------------------------------------------+---------------+
|X |[Element5, Element1, Element5] |Element5 |
|Y |[Element Unknown, Element Unknown, Element_Z]|Element Unknown|
|Z |[] |null |
+---+---------------------------------------------+---------------+
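If you would rather not rely on statistics.mode (which, before Python 3.8, raises StatisticsError when there is a tie), a collections.Counter-based udf is one alternative sketch; ties resolve to the first-seen element:

from collections import Counter
from pyspark.sql import functions as F, types as T

# Counter.most_common(1) returns [(element, count)] for the most frequent element
u = F.udf(lambda x: Counter(x).most_common(1)[0][0] if x else None, T.StringType())
df.withColumn("Output", u("Elements")).show(truncate=False)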
You can use PySpark SQL functions to achieve that (Spark 2.4+).
Here is a generic function that adds a new column containing the most common element in another array column:
import pyspark.sql.functions as sf


def add_most_common_val_in_array(df, arraycol, drop=False):
    """Takes a spark df column of ArrayType() and returns the most common element
    in the array in a new column of the df called f"MostCommon_{arraycol}"

    Args:
        df (spark.DataFrame): dataframe
        arraycol (ArrayType()): array column in which you want to find the most common element
        drop (bool, optional): Drop the arraycol after finding most common element. Defaults to False.

    Returns:
        spark.DataFrame: df with additional column containing most common element in arraycol
    """
    dvals = f"distinct_{arraycol}"
    dvalscount = f"distinct_{arraycol}_count"
    startcols = df.columns
    df = df.withColumn(dvals, sf.array_distinct(arraycol))
    df = df.withColumn(
        dvalscount,
        sf.transform(
            dvals,
            lambda uval: sf.aggregate(
                arraycol,
                sf.lit(0),
                lambda acc, entry: sf.when(entry == uval, acc + 1).otherwise(acc),
            ),
        ),
    )
    countercol = f"ReverseCounter{arraycol}"
    df = df.withColumn(countercol, sf.map_from_arrays(dvalscount, dvals))
    mccol = f"MostCommon_{arraycol}"
    df = df.withColumn(mccol, sf.element_at(countercol, sf.array_max(dvalscount)))
    df = df.select(*startcols, mccol)
    if drop:
        df = df.drop(arraycol)
    return df
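A quick usage sketch with toy data mirroring the question (hypothetical spark session; note that sf.transform and sf.aggregate accepting Python lambdas require Spark 3.1+):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("X", ["Element5", "Element1", "Element5"]),
     ("Y", ["Element Unknown", "Element Unknown", "Element_Z"])],
    ["ID", "Elements"],
)
# Adds a MostCommon_Elements column holding Element5 and Element Unknown respectively
add_most_common_val_in_array(df, "Elements").show(truncate=False)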

Convert list to dataframe and then join with different dataframe in pyspark

I am working with pyspark dataframes.
I have a list of date type values:
date_list = ['2018-01-19', '2018-01-20', '2018-01-17']
Also I have a dataframe (mean_df) that has only one column (mean).
+----+
|mean|
+----+
|67 |
|78 |
|98 |
+----+
Now I want to convert date_list into a column and join with mean_df:
expected output:
+------------+----+
|dates |mean|
+------------+----+
|2018-01-19 | 67|
|2018-01-20 | 78|
|2018-01-17 | 98|
+------------+----+
I tried converting the list to a dataframe (date_df):
date_df = spark.createDataFrame([(l,) for l in date_list], ['dates'])
and then used monotonically_increasing_id() to add a new column "idx" to both date_df and mean_df, and joined on it:
date_df = mean_df.join(date_df, mean_df.idx == date_df.idx).drop("idx")
I get a broadcast timeout error, so I changed the default broadcastTimeout from 300s to 6000s:
spark.conf.set("spark.sql.broadcastTimeout", 6000)
But it did not work at all. Also, I am working with a really small sample of data right now; the actual data is much larger.
Snippet of code:
date_list = ['2018-01-19', '2018-01-20', '2018-01-17']

mean_list = []
for d in date_list:
    h2_df1, h2_df2 = hypo_2(h2_df, d, 2)
    mean1 = h2_df1.select(_mean(col('count_before')).alias('mean_before'))
    mean_list.append(mean1)
mean_df = reduce(DataFrame.unionAll, mean_list)
You can use withColumn and lit to add the date to the dataframe:
from functools import reduce

import pyspark.sql.functions as F
from pyspark.sql import DataFrame

date_list = ['2018-01-19', '2018-01-20', '2018-01-17']

mean_list = []
for d in date_list:
    h2_df1, h2_df2 = hypo_2(h2_df, d, 2)
    mean1 = (h2_df1.select(F.mean(F.col('count_before')).alias('mean_before'))
                   .withColumn('date', F.lit(d)))
    mean_list.append(mean1)
mean_df = reduce(DataFrame.unionAll, mean_list)
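As a self-contained illustration of the same idea (toy means standing in for the real hypo_2 output, so the numbers are just the ones from the question):

from functools import reduce

import pyspark.sql.functions as F
from pyspark.sql import DataFrame, SparkSession

spark = SparkSession.builder.getOrCreate()

# One single-row dataframe per date, each tagged with its date via F.lit, then unioned
parts = [spark.createDataFrame([(m,)], ['mean']).withColumn('dates', F.lit(d))
         for d, m in zip(['2018-01-19', '2018-01-20', '2018-01-17'], [67, 78, 98])]
mean_df = reduce(DataFrame.unionAll, parts)
mean_df.select('dates', 'mean').show()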

How to calculate number of non blank rows based on the value using dax

I have a table with numeric values and blank records. I'm trying to calculate the number of rows that are not blank and are greater than 20.
+--------+
| VALUES |
+--------+
| 2 |
| 0 |
| 13 |
| 40 |
| |
| 1 |
| 200 |
| 4 |
| 135 |
| |
| 35 |
+--------+
I've tried different options but keep getting the following error: "Cannot convert value '' of type Text to type Number". I understand that blank cells are treated as text, and thus my filter (> 20) doesn't work. Converting blanks to "0" is not an option, as I need to use the same values later to calculate the average and median.
CALCULATE(
    COUNTROWS(Table3),
    VALUE(Table3[VALUES]) > 20
)
Or this, which returns "10":
=CALCULATE(
    COUNTROWS(ALLNOBLANKROW(Table3[VALUES])),
    VALUE(Table3[VALUES]) > 20
)
The final result in the example table should be: 4
Would be grateful for any help!
First, the VALUE function expects a string. It converts strings like "123" into the integer 123, so let's not use that.
The easiest approach is with an iterator function like COUNTX.
CountNonBlank = COUNTX(Table3, IF(Table3[Values] > 20, 1, BLANK()))
Note that we don't need a separate case for BLANK() (null) here since BLANK() > 20 evaluates as False.
There are tons of other ways to do this. Another iterator solution would be:
CountNonBlank = COUNTROWS(FILTER(Table3, Table3[Values] > 20))
You can use the same FILTER inside of a CALCULATE, but that's a bit less elegant.
CountNonBlank = CALCULATE(COUNT(Table3[Values]), FILTER(Table3, Table3[Values] > 20))
Edit
I don't recommend the CALCULATE version. If you have more columns with more conditions, just add them to your FILTER. E.g.
CountNonBlank =
COUNTROWS(
    FILTER(
        Table3,
        Table3[Values] > 20
            && Table3[Text] = "xyz"
            && Table3[Number] <> 0
            && Table3[Date] <= DATE(2018, 12, 31)
    )
)
You can also do OR logic with || instead of the && for AND.

Retrieving a single row of a truth table with a non-constant number of variables

I need to write a function that takes as arguments an integer, which represents a row in a truth table, and a boolean array, into which it stores the values for that row of the truth table.
Here is an example truth table
Row | A | B | C |
 1  | T | T | T |
 2  | T | T | F |
 3  | T | F | T |
 4  | T | F | F |
 5  | F | T | T |
 6  | F | T | F |
 7  | F | F | T |
 8  | F | F | F |
Please note that a given truth table could have more or fewer rows than this table, since the number of possible variables can change.
A function prototype could look like this
getRow(int rowNum, bool boolArr[]);
If this function was called, for example, as
getRow(3, boolArr)
It would need to return an array with the following elements
|1|0|1| (or |T|F|T|)
The difficulty for me arises because the number of variables can change, therefore increasing or decreasing the number of rows. For instance, the list of variables could be A, B, C, D, E, and F instead of just A, B, and C.
I think the best solution would be to write a loop that counts up to the row number, essentially changing the elements of the array as if it were counting in binary. So that
1st loop iteration, array elements are 0|0|...|0|1|
2nd loop iteration, array elements are 0|0|...|1|0|
I can't for the life of me figure out how to do this, and can't find a solution elsewhere on the web. Sorry for all the confusion, and thanks for the help.
OK, now that you have rewritten your question to be much clearer: First, getRow needs to take an extra argument: the number of bits. Row 1 with 2 bits produces a different result than row 1 with 64 bits, so we need a way to differentiate that. Second, C++ is typically zero-indexed, so I am going to shift your truth table down one row so that row "0" returns all trues.
The key here is to realize that the row number in binary is already what you want. Take this row (row 4 in your table, shifted down to 3):
3 | T | F | F |
3 in binary is 011, which inverted is 100, i.e. {true, false, false}: exactly what you want. We can express that with a bitwise AND and a logical NOT as the array:
{!(3 & 0x4), !(3 & 0x2), !(3 & 0x1)}
So it's just a matter of writing that as a loop:
void getRow(int rowNum, bool* arr, int nbits)
{
    int mask = 1 << (nbits - 1);
    for (int i = 0; i < nbits; ++i, mask >>= 1) {
        arr[i] = !(rowNum & mask);
    }
}