Reference a column by a variable - powerbi

I want to reference a table column by a variable while creating another column, but I can't get the syntax right:
let
    t0 = Table.FromRecords({[a = 1, b = 2]}),
    c0 = "a",
    c1 = "b",
    t1 = Table.AddColumn(t0, "c", each([c0] + [c1]))
in
    t1
I get the error "the record's field 'c0' was not found". It is treating c0 as a literal field name, but I want the text value contained in c0. How can I do this?
Edit
I used this, inspired by the accepted answer:
let
    t0 = Table.FromRecords({[a = 1, b = 2]}),
    c0 = "a",
    c1 = "b",
    t1 = Table.AddColumn(t0, "c", each Record.Field(_, c0) + Record.Field(_, c1))
in
    t1

Another way:
let
    t0 = Table.FromRecords({[a = 1, b = 2]}),
    f = {"a", "b"},
    t1 = Table.AddColumn(t0, "sum", each List.Sum(Record.ToList(Record.SelectFields(_, f))))
in
    t1

Try using an index, as below:
let
    t0 = Table.FromRecords({[a = 1, b = 2]}),
    #"Added Index" = Table.AddIndexColumn(t0, "Index", 0, 1),
    c0 = "a",
    c1 = "b",
    t1 = Table.AddColumn(#"Added Index", "c", each Table.Column(#"Added Index", c0){[Index]} + Table.Column(#"Added Index", c1){[Index]})
in
    t1

Expression.Evaluate is another possibility:
= Table.AddColumn(t0, "c", each Expression.Evaluate("["&c0&"] + ["&c1&"]", [_=_]) )
Please refer to this article to understand the [_=_] context argument:
Expression.Evaluate() In Power Query/M
The article explains that argument specifically:
Inside a table, the underscore _ represents the current row when working with row-by-row operations. The error can be fixed by adding [_=_] to the environment of the Expression.Evaluate() function. This adds the current row of the table in which the formula is evaluated to the environment of the statement that is evaluated inside Expression.Evaluate().

subset of a list by first match of a part of the column's name

I have a list (L) of data frames containing several forms of the AB variable (like AB_1, AB_1_1, ...). Can I get a subset of the list keeping only the first column that matches the AB pattern?
The list (L) and the desired result as a list (R) are as follows:
L1 = data.frame(AB_1 = c(1:4) , AB_1_1 = c(1:4) , C1 = c(1:4))
L2 = data.frame(AB_1_1 = c(1:4) , AB_2 = c(1:4), D = c(1:4) )
L=list(L1,L2)
R1 = data.frame(AB_1 = c(1:4) , C1 = c(1:4))
R2 = data.frame(AB_1_1 = c(1:4) , D = c(1:4))
R=list(R1,R2)
It is not the best answer, but it is a solution:
First rename all columns whose names start with AB to AB, then remove the columns with duplicated names from each data frame in the list (L).
for (i in seq_along(L)) {
  # rename every column whose name starts with "AB" to plain "AB"
  colnames(L[[i]])[grepl('^AB', colnames(L[[i]]))] <- 'AB'
  # keep only the first occurrence of each column name
  L[[i]] <- L[[i]][, !duplicated(colnames(L[[i]]))]
}

Pinescript - "Compare" Indicator with Percentage Change Function takes only last bar data

I need help, please.
In TradingView I use "Compare" to view BTCUSDT vs. ETHUSDT on Binance, and it's basically OK. But the lines on the chart are too jagged, and I want to see the SMA or EMA for those tickers.
I'm trying to do it step by step, but I can't get past the issue that my code only takes the last calculated value into consideration, so the "percentage change line" starts from 0 with each new bar, which makes no sense. My latest value doesn't build on the prior value; it always starts from zero.
So the value that comes out is good (the same as when I put the same tickers into TradingView's "Compare"), but TradingView's "Compare" accumulates over the historical data, while my code starts from 0.
Here is the Pine script code:
//@version=4
study(title="Compare", shorttitle="Compare", overlay=false, max_bars_back=0)
perc_change = (((close[0] - open[0]) / open[0]) * 100)
sym1 = "BINANCE:BTCUSDT"
res1 = "30"
source1 = perc_change
plot(security(sym1, res1, source1), color=color.orange, linewidth=2)
sym2 = "BINANCE:ETHUSDT"
res2 = "30"
source2 = perc_change
plot(security(sym2, res2, source2), color=color.blue, linewidth=2)
Sounds like the delta between the two ROCs is what you are looking for. With this you can show not only the two ROCs but also columns representing the delta between them. You can also change the ROC period:
//@version=4
study(title="Compare", shorttitle="Compare")
rocPeriod = input(1, minval = 1)
showLines = input(true)
showDelta = input(true)
perc_change = roc(close, rocPeriod)
sym1 = "BINANCE:BTCUSDT"
sym2 = "BINANCE:ETHUSDT"
res = "30"
s1 = security(sym1, res, perc_change)
s2 = security(sym2, res, perc_change)
delta = s1 - s2
plot(showLines ? s1 : na, "s1", color.orange)
plot(showLines ? s2 : na, "s2", color.blue)
hline(0)
plot(showDelta ? delta : na, "delta", delta > 0 ? color.lime : color.red, 1, plot.style_columns)

Is there an M function to create a table type?

In the M language, there is the Type.ForRecord function, which can be used to create a record type dynamically.
// These two expressions are interchangeable
type [A = Int64.Type, optional B = text]
Type.ForRecord(
    [
        A = [Type = Int64.Type, Optional = false],
        B = [Type = type text, Optional = true]
    ],
    false
)
Using this, we can create a new record type based on an existing type, with an additional field, as below.
// These two expressions are interchangeable
type [A = Int64.Type, optional B = text, optional C = date]
let
    existingType = type [A = Int64.Type, optional B = text],
    newFieldName = "C",
    newFieldType = type date
in
    Type.ForRecord(
        Record.AddField(
            Type.RecordFields(existingType),
            newFieldName,
            [Type = newFieldType, Optional = true]
        ),
        false
    )
Now I am looking for a way to do a similar thing with table types. I want to be able to modify an existing table type by adding a new column. The name and the type of the new column are determined dynamically at runtime.
// I want this result
type table [A = Int64.Type, B = text, C = date]
// How can I create a new table type adding column `C = date`?
let
    existingType = type table [A = Int64.Type, B = text],
    newColumnName = "C",
    newColumnType = type date
in
    ...
I expected there to be a function like Type.ForTable, but I could not find one. Any ideas?
The trivial way to do this is as follows:
newType = type table [A = Int64.Type, B = text, newFieldName = newFieldType]
But that isn't going to work if your existingType is dynamic.
If you keep your existingType definition as a record type (rather than a table type), you can get the new table type by casting it to a table type after adding the new record field.
let
    existingType = type /*no table here*/ [A = Int64.Type, B = text],
    newFieldName = "C",
    newFieldType = type date,
    newType =
        Type.ForRecord(
            Record.AddField(
                Type.RecordFields(existingType),
                newFieldName,
                [Type = newFieldType, Optional = false]
            ),
            false
        ),
    newTableType = type table newType
in
    newTableType
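For completeness: if you only have the table type itself (not a separate record type), Type.TableRow returns the row type of a table type as a record type, which you can then extend and cast back. A sketch along the same lines as above, assuming Type.TableRow behaves as documented:

let
    existingType = type table [A = Int64.Type, B = text],
    // Type.TableRow gives the row type of a table type as a record type
    rowType = Type.TableRow(existingType),
    newRowType =
        Type.ForRecord(
            Record.AddField(
                Type.RecordFields(rowType),
                "C",
                [Type = type date, Optional = false]
            ),
            false
        )
in
    type table newRowType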

Count multiple terms

class Term(models.Model):
    created = models.DateTimeField(auto_now_add=True)
    term = models.CharField(max_length=255)
Hey guys,
I'm trying to count duplicate terms in my DB table, but I still get a list of all items ({term: a, count: 1, term: a, count: 1, term: b, count: 1, ...}) instead of something like {term: a, count: 12, term: b, count: 1}.
Does anyone have an idea?
EDIT:
ee = Term.objects.annotate(Count("term")).values("term", "term__count")
Result:
[{'term': u'tes', 'term__count': 1}, {'term': u'tes', 'term__count': 1},
What I expected:
[{'term': u'tes', 'term__count': 2}, {'term': 'b', 'term__count': 1}
https://docs.djangoproject.com/en/dev/topics/db/aggregation/
says that the order of the values() and annotate() clauses is important. Also, if the model has a default ordering (Meta.ordering), that will affect the result.
How about ...
ee = Term.objects.values("term").annotate(Count("term")).order_by()
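Putting values("term") before annotate() makes the ORM group by term before counting. Outside the ORM, the grouped result has the same shape as a plain collections.Counter over the terms — a standalone sketch with made-up data, not Django code:

```python
from collections import Counter

# Made-up terms, as if fetched via Term.objects.values_list("term", flat=True)
terms = ["tes", "tes", "b"]

# Group identical terms and count them, mirroring values("term").annotate(Count("term"))
counts = Counter(terms)
result = [{"term": term, "term__count": n} for term, n in counts.items()]
print(result)  # → [{'term': 'tes', 'term__count': 2}, {'term': 'b', 'term__count': 1}]
```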
In SQL you cannot do that in only one query; you need a subquery. I guess it is the same with Django, so try this:
ee = Term.objects.extra(select={'count': "SELECT COUNT(term) FROM appname_term AS subtable WHERE subtable.term = appname_term.term"})
It should add a count attribute to every term in ee with the number of rows sharing that term. It applies a subquery on the same relation as the main query. It's the equivalent of this SQL:
SELECT *, (
SELECT COUNT(term)
FROM appname_term AS subtable
WHERE subtable.term = appname_term.term
) AS count
FROM appname_term
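Note the difference from the grouped query: the correlated subquery returns one row per original row, with the count repeated. A standalone sqlite3 sketch with made-up data (the table name appname_term is kept from the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE appname_term (term TEXT)")
conn.executemany("INSERT INTO appname_term (term) VALUES (?)",
                 [("tes",), ("tes",), ("b",)])

# Correlated subquery: for each row, count the rows sharing its term
rows = conn.execute("""
    SELECT term, (
        SELECT COUNT(term)
        FROM appname_term AS subtable
        WHERE subtable.term = appname_term.term
    ) AS count
    FROM appname_term
""").fetchall()
print(rows)  # → [('tes', 2), ('tes', 2), ('b', 1)]
```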

Bind specific parameter (one) in JPQL

In the database I have a table partitioned by the column 'status' for better performance. My database administrator asked me to put the value for that column directly into the SQL (not bound as a parameter).
I can disable binding by setting the hint QueryHints.BIND_PARAMETERS to false, but then all parameters are inlined into the SQL.
Can I disable binding for only the 'status' parameter?
Example result when BIND_PARAMETERS = true
SELECT t0.* FROM S_JOBS_ORG_UNIT_CFG t0
WHERE ((((t0.ORG_ID = ?) AND (t0.SCHEDULER_NEXT_ACTIVATION < SYSDATE)) AND (t0.ACTIVE = ?))
AND NOT EXISTS (SELECT ? FROM S_JOBS t1 WHERE (((t1.ORDER_ID = t0.ORDER_ID) AND (t1.ORG_ID = t0.ORG_ID)) AND NOT ((t1.STATUS = ?)))) )
bind => [472100, Y, 1, E]
and result when BIND_PARAMETERS = false
SELECT t0.* FROM S_JOBS_ORG_UNIT_CFG t0
WHERE ((((t0.ORG_ID = 472100) AND (t0.SCHEDULER_NEXT_ACTIVATION < SYSDATE)) AND (t0.ACTIVE = Y))
AND NOT EXISTS (SELECT 1 FROM S_JOBS t1 WHERE (((t1.ORDER_ID = t0.ORDER_ID) AND (t1.ORG_ID = t0.ORG_ID)) AND NOT ((t1.STATUS = E)))) )
Code:
Query jobOrgUnitCfgQuery = entityManager.createQuery(
"SELECT c FROM JobOrgUnitCfg c WHERE c.orgId = :orgId and c.schedulerNextActivation < current_timestamp and c.active = :active and " +
" not exists (SELECT j FROM Job j WHERE j.orderId = c.orderId and j.orgId = c.orgId and j.status <> 'E')");
jobOrgUnitCfgQuery.setParameter("orgId", orgId);
jobOrgUnitCfgQuery.setParameter("active", Boolean.TRUE);
return jobOrgUnitCfgQuery.getResultList();
I think your best bet is just to programmatically build your query like you're doing, with a hard-coded status, and keep binding the other parameters to avoid SQL injection.
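A minimal sketch of that idea (the class name and the allowed status values are hypothetical): validate the status against an allowlist of known codes, inline it into the JPQL string, and leave every other value as a bound parameter.

```java
import java.util.Set;

public class JobQueryBuilder {

    // Hypothetical allowlist of status codes that are safe to inline
    private static final Set<String> ALLOWED_STATUSES = Set.of("E", "A", "P");

    /** Builds the JPQL with the status literal inlined; other values stay as named parameters. */
    public static String buildJpql(String status) {
        if (!ALLOWED_STATUSES.contains(status)) {
            throw new IllegalArgumentException("Unknown status: " + status);
        }
        return "SELECT c FROM JobOrgUnitCfg c"
                + " WHERE c.orgId = :orgId"
                + " AND c.schedulerNextActivation < current_timestamp"
                + " AND c.active = :active"
                + " AND NOT EXISTS (SELECT j FROM Job j"
                + " WHERE j.orderId = c.orderId AND j.orgId = c.orgId"
                + " AND j.status <> '" + status + "')";
    }

    public static void main(String[] args) {
        System.out.println(buildJpql("E"));
    }
}
```

Because the status is a literal in the generated SQL, the partition pruning the DBA wants can happen at parse time, while orgId and active remain bound exactly as in the original code.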