Power BI (DAX) | Getting rows with the same value

I need some help getting a final status for each ID. The table is sorted by time, and IDs are composed of random letters and numbers. Currently the final status is a string like below. I created a lookup table that converts each status to the numerical value I want it prioritized by. What I want is: if the operations for an ID include at least one "Completed", the table should say "yes", and otherwise (no "Completed" at all) "no". For example, ID "K304R" below appears with Statuses "Completed" and "Canceled", and thus the result I want for it is "yes".
My intuition was: 1) ALLEXCEPT the original table with ID and Status, 2) somehow get the rows with the same ID (e.g. "K304R"), 3) somehow get the Status of each of those rows, 4) somehow connect Status back to the lookup table, 5) get the max StatusVal, and 6) return "yes" if the max value is 100, otherwise "no".
Any help would be really appreciated. Thanks in advance!
OriginalTable

Time              | ID    | Status
2022/10/4 10:47AM | 1ZT56 | Error
2022/10/4 9:47AM  | K304R | Completed
2022/10/4 7:47AM  | K304R | Canceled
2022/10/3 10:47PM | 1ZT56 | Completed
2022/10/3 7:47AM  | PQ534 | Canceled
2022/10/3 4:47AM  | 12PT3 | Error
2022/10/2 10:40PM | 12PT3 | Error
2022/10/2 7:47PM  | 1ZT56 | Canceled
2022/10/1 10:47AM | U73RL | Completed
LookupTable

Status    | StatusVal
Completed | 100
Canceled  | 0
Error     | 0
Result I want

Time              | ID    | Status    | FinalStatus
2022/10/4 10:47AM | 1ZT56 | Error     | yes
2022/10/4 9:47AM  | K304R | Completed | yes
2022/10/4 7:47AM  | K304R | Canceled  | yes
2022/10/3 10:47PM | 1ZT56 | Completed | yes
2022/10/3 7:47AM  | PQ534 | Canceled  | no
2022/10/3 4:47AM  | 12PT3 | Error     | no
2022/10/2 10:40PM | 12PT3 | Error     | no
2022/10/2 7:47PM  | 1ZT56 | Canceled  | yes
2022/10/1 10:47AM | U73RL | Completed | yes

This calculated column works:
FinalStatus =
IF (
    CALCULATE (
        COUNTROWS ( OriginalTable ),
        FILTER (
            OriginalTable,
            OriginalTable[ID] = EARLIER ( OriginalTable[ID] )
                && OriginalTable[Status] = "Completed"
        )
    ) > 0,
    "Yes", "No"
)
The idea is to filter the table to the rows whose ID matches the ID of the current row and whose Status is "Completed", and then count the remaining rows.
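The same per-ID logic can be sketched outside DAX; here is an illustrative Python version using the sample data from the question (plain Python, not how the DAX engine evaluates the column):

```python
# Sketch of the FinalStatus logic: a row gets "Yes" if any row with the
# same ID has Status "Completed". (Illustrative Python, not DAX.)
rows = [
    ("1ZT56", "Error"), ("K304R", "Completed"), ("K304R", "Canceled"),
    ("1ZT56", "Completed"), ("PQ534", "Canceled"), ("12PT3", "Error"),
    ("12PT3", "Error"), ("1ZT56", "Canceled"), ("U73RL", "Completed"),
]

# IDs that have at least one "Completed" row
completed_ids = {id_ for id_, status in rows if status == "Completed"}

final_status = ["Yes" if id_ in completed_ids else "No" for id_, _ in rows]
```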

Related

Redshift Result size exceeds LISTAGG limit on svl_statementtext

I'm trying to reconstruct my query history from svl_statementtext using listagg.
I'm getting this error:
Result size exceeds LISTAGG limit (limit: 65535)
However, I cannot see how or where I have exceeded the limit.
My failing query:
SELECT pid,xid, min(starttime) AS starttime,
pg_catalog.listagg(
CASE WHEN (len(rtrim(("text")::text)) = 0) THEN ("text")::text ELSE rtrim(("text")::text) END
, ''::text
) WITHIN GROUP(ORDER BY "sequence")
AS query_statement
FROM svl_statementtext
GROUP BY pid,xid
HAVING min(starttime) >= '2022-06-27 10:00:00';
After the failure, I checked to see where the excessive size was coming from:
SELECT pid,xid, min(starttime) AS starttime,
SUM(OCTET_LENGTH(
CASE WHEN (len(rtrim(("text")::text)) = 0) THEN ("text")::text ELSE rtrim(("text")::text) END
)) as total_bytes
FROM svl_statementtext
GROUP BY pid,xid
HAVING min(starttime) >= '2022-06-27 10:00:00'
ORDER BY total_bytes desc;
However, the largest size that this query reports is 2962.
So how/why is listagg complaining about 65535?
I have seen some other posts mentioning listagg(distinct ...) and handling nulls in the aggregated value, but neither seems to change my problem.
Any guidance appreciated :)
The longest string that Redshift can hold is 64K bytes. Listagg() is likely generating a string longer than this. The "text" column in svl_statementtext is 200 characters so if you have more than 319 segments you can overflow this string size.
The other issue I see is that your query will combine multiple statements into one string. You are only grouping by xid and pid, which will give you all statements for a transaction. Add starttime to your GROUP BY list and this will break different statements into different results.
Also remember that xid and pid values repeat every few days, so having a date-range limit can help prevent a lot of confusion.
You need to add
where sequence < 320
to your query and also group by starttime.
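A toy Python sketch of why grouping by (xid, pid) alone merges statements while adding starttime separates them (the column layout is made up but mirrors svl_statementtext; the data is invented):

```python
# Toy reproduction of the grouping issue: segments of two different
# statements share the same (xid, pid), so grouping only by (xid, pid)
# concatenates them into one string (and their sequence numbers collide,
# so the result is also interleaved). Adding starttime to the group key
# keeps each statement separate. Rows are (xid, pid, starttime, sequence, text).
segments = [
    (10, 7, "2022-06-27 10:00:00", 0, "SELECT * "),
    (10, 7, "2022-06-27 10:00:00", 1, "FROM t1;"),
    (10, 7, "2022-06-27 10:00:05", 0, "SELECT * FROM t2;"),
]

def listagg(rows, key):
    """Concatenate segment text per group, ordered by sequence."""
    groups = {}
    for row in sorted(rows, key=lambda r: r[3]):
        groups.setdefault(key(row), []).append(row[4])
    return {k: "".join(v) for k, v in groups.items()}

by_xid_pid = listagg(segments, key=lambda r: (r[0], r[1]))
by_xid_pid_start = listagg(segments, key=lambda r: (r[0], r[1], r[2]))
```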
Here's a query I have used to put together statements in Redshift:
select xid, pid, starttime,
       max(datediff('sec', starttime, endtime)) as runtime,
       type,
       listagg(regexp_replace(text, '\\\\n*', ' ')) within group (order by sequence) || ';' as querytext
from svl_statementtext
where pid = (select pg_backend_pid())  -- current session
  and sequence < 320
  and starttime > getdate() - interval '24 hours'
group by starttime, 1, 2, "type"
order by starttime, 1 asc, "type" desc;

How do I fix this lookup value formula error?

I received this error:
A table of multiple values was supplied where a single value was expected
I'm trying to pull in multiple contract statuses (Next 7, Next 15, Next 30, Next 90, Next 120, Next 180, > 180, and Expired) and return them to the site master table you see below.
These statuses flow over as a formula, derived from the contract end date, on Sales Force Contract Status.
Here is the formula:
Z - Status Flag =
LOOKUPVALUE (
    'Sales Force_Contract'[Z - Contract Status],
    'Sales Force_Contract'[Z Sage ID], viw_SiteMaster[JobNo]
)
What am I doing wrong?
I ended up trying this formula:
Z - Status Flag =
CALCULATE (
    FIRSTNONBLANK ( 'Sales Force_Contract'[Z - Contract Status], 1 ),
    FILTER (
        ALL ( 'Sales Force_Contract' ),
        'Sales Force_Contract'[Z Sage ID] = viw_SiteMaster[JobNo]
    )
)
and it worked! It had to do with blanks.
I feel like a lot of my errors have to do with blanks.
Thanks team!
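For reference, the usual cause of that error is that LOOKUPVALUE found more than one matching row for a key, whereas the FIRSTNONBLANK workaround simply takes one of the matches. A rough Python sketch of the difference (toy data and function names; not how DAX actually evaluates these functions):

```python
# Sketch of the difference between the two DAX patterns (toy Python):
# LOOKUPVALUE requires exactly one distinct match per key and errors out
# otherwise; the FIRSTNONBLANK workaround just returns one match.
contracts = [
    # (sage_id, contract_status) -- made-up rows
    ("J100", "Next 30"),
    ("J200", "Expired"),
    ("J200", "Next 7"),   # duplicate key: this is what breaks LOOKUPVALUE
]

def lookupvalue(rows, key):
    matches = {status for sid, status in rows if sid == key}
    if len(matches) > 1:
        raise ValueError("A table of multiple values was supplied "
                         "where a single value was expected")
    return matches.pop() if matches else None

def firstnonblank(rows, key):
    for sid, status in rows:
        if sid == key and status:
            return status  # first non-blank match wins
    return None
```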

Siddhi externalTime window issue

I'm trying to calculate some stats over a sliding window interval:
from CsvInputFileStreamWithConvertedTimestamp#window.externalTime(time, 250 milliseconds)
select time as timeslice,
       time:dateFormat(time, 'yyyy-MM-dd') as date,
       time:dateFormat(time, 'HH:mm:ss') as time,
       instrument,
       sum(fin) / sum(quantity) as vwap,
       max(price) * min(price) - 1 as prange,
       max(price) as prangemax,
       min(price) as prangemin,
       sum(quantity) as totalquant,
       avg(quantity) as avgquant,
       0 as medquant,
       sum(fin) as totalfin,
       avg(fin) as avgfin,
       count() as trades,
       distinctCount(buyer) as nofbuy,
       distinctCount(seller) as nofsell,
       cast(distinctCount(seller), 'double') / distinctCount(buyer) as bsratio,
       count(buyer) as buyaggr,
       count(seller) as sellaggr,
       sum(quantity) as totalblockquant
insert expired events into OutputStream;
But in the output most values are null
This is my input data
Any idea what I'm doing wrong here?
Solved.
I was using "insert expired events into" instead of "insert all events into".

broker.backtesting [DEBUG] Not enough cash to fill 600800 order [1684] for 998 share/s

I am using optimizer in Pyalgotrade to run my strategy to find the best parameters. The message I get is this:
2015-04-09 19:33:35,545 broker.backtesting [DEBUG] Not enough cash to fill 600800 order [1681] for 888 share/s
2015-04-09 19:33:35,546 broker.backtesting [DEBUG] Not enough cash to fill 600800 order [1684] for 998 share/s
2015-04-09 19:33:35,547 server [INFO] Partial result 7160083.45 with parameters: ('600800', 4, 19) from worker-16216
2015-04-09 19:33:36,049 server [INFO] Best final result 7160083.45 with parameters: ('600800', 4, 19) from client worker-16216
This is just part of the message. You can see that only the parameter combination ('600800', 4, 19) produces a result; for the other combinations of parameters I get the message: broker.backtesting [DEBUG] Not enough cash to fill 600800 order [1684] for 998 share/s.
I think this message means that I have created a buy order but do not have enough cash to buy the shares. However, from my script below:
shares = self.getBroker().getShares(self.__instrument)
if bars[self.__instrument].getPrice() > up and shares == 0:
    sharesToBuy = int(self.getBroker().getCash() / bars[self.__instrument].getPrice())
    self.marketOrder(self.__instrument, sharesToBuy)
if shares != 0 and bars[self.__instrument].getPrice() > up_stop:
    self.marketOrder(self.__instrument, -1 * shares)
if shares != 0 and bars[self.__instrument].getPrice() < up:
    self.marketOrder(self.__instrument, -1 * shares)
The logic of my strategy is: if the current price is larger than up, we buy; and if, after we buy, the current price is larger than up_stop or smaller than up, we sell. So from the code, there should be no way to generate an order I cannot pay for, because the order size is calculated from my current cash.
So where did I go wrong?
You calculate the order size based on the current price, but the price for the next bar may have gone up. The order is not filled in the current bar, but starting from the next bar.
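A toy numeric sketch of that failure mode (made-up cash and prices, assuming the order fills at the next bar's price):

```python
# Toy numbers: the order is sized with the current bar's price, but
# market orders fill starting at the NEXT bar, where the price may be
# higher -- the sized order then costs more than the available cash.
cash = 10000.0
current_price = 10.0
shares_to_buy = int(cash / current_price)   # 1000 shares look affordable now

next_bar_price = 10.25                      # price gaps up before the fill
cost_at_fill = shares_to_buy * next_bar_price

enough_cash = cost_at_fill <= cash          # False -> "Not enough cash to fill"
```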
With respect to the 'Partial result' and 'Best final result' messages: how many combinations of parameters are you trying? Note that if you are using 10 different combinations, you won't get 10 different 'Partial result' lines, because combinations are evaluated in batches of 200 and only the best partial result for each batch of 200 combinations gets printed.

C++ SQLite3 returns an empty query immediately after an update

I am having an issue with SQLite 3 version 3.7.14.1: if I do an update and then immediately try to do a query, it returns 0 results, as if the row is not there.
Here is a sample of my log:
SQLiteDB::beginTransaction: 0
SQLiteDB::prepareStatement: 0
SQLiteDB::executeStatement -- Executing SQL query:SELECT ... ID LIKE '2' ...
SQLiteDB::executeStatement: 100
SqlHandler::queryEvent -- Bound results.
SqlHandler::retrieveEvents -- Result set has 1 entries
SQLiteDB::releaseStatement: 0
SQLiteDB::prepareStatement: 0
SQLiteDB::executeStatement -- Executing SQL query:UPDATE ... WHERE ID = '2'
SQLiteDB::executeStatement: 101
SQLiteDB::releaseStatement: 0
SQLiteDB::endTransaction: 0
SQLiteDB::beginTransaction: 0
SQLiteDB::prepareStatement: 0
SQLiteDB::executeStatement -- Executing SQL query:SELECT ... ID LIKE '2' ...
SQLiteDB::executeStatement: 100
SqlHandler::queryEvent -- Bound results.
SqlHandler::retrieveEvents -- Result set has 1 entries
SQLiteDB::releaseStatement: 0
SQLiteDB::prepareStatement: 0
SQLiteDB::executeStatement -- Executing SQL query:UPDATE id = '2'
SQLiteDB::executeStatement: 101
SQLiteDB::releaseStatement: 0
SQLiteDB::endTransaction: 0
SQLiteDB::beginTransaction: 0
SQLiteDB::prepareStatement: 0
SQLiteDB::executeStatement -- Executing SQL query:SELECT ... ID LIKE '2' ...
SQLiteDB::executeStatement: 101
SqlHandler::queryEvent -- Nothing found.
SQLiteDB::releaseStatement: 0
Accessor::query -- Unable to perform query. queryEvent failed.
SQLiteDB::endTransaction: 0
If I do an update on '3' and select '2' there is no problem; it only fails when I update '2' and then select '2' (an UPDATE and a SELECT on the same record).
All of the updated information is correct, but the select fails unless I place a small sleep or break between the operations (they can't run back to back).
I am using multithreading, a shared cache, and multiple connections:
sqlite3_config(SQLITE_CONFIG_SERIALIZED);
sqlite3_enable_shared_cache(true);
sqlite3_open("file::memory:?cache=shared", &hDBC_);
Thanks in advance for any help.
Edit:
For the begin and end transactions I am using
sqlite3_exec
For the prepare statements for Query, Insert, Delete, Update I am using
sqlite3_prepare_v2
For the execute statements I am using
sqlite3_step
For the release statements I am using
sqlite3_finalize
Thank You for the help everyone.
The problem was in my query. I have a trigger that updates a "last_modified" field any time there is an update on a given record. My query has a WHERE clause that says ... last_modified < current_timestamp. So if any updates occurred during the same 1-second interval as the query, those rows were dropped. The issue was that I did not understand/realize that the precision of the timestamps was 1 second.
Once again, thank you for all the help
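A small Python sketch that reproduces the effect with sqlite3 (toy schema; the table, trigger, and column names are made up, and the stored timestamp stands in for CURRENT_TIMESTAMP evaluated in the same second, to keep the sketch deterministic):

```python
import sqlite3

# Toy schema: a trigger stamps last_modified with CURRENT_TIMESTAMP on
# every payload update, as in the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT,
                         last_modified TEXT DEFAULT CURRENT_TIMESTAMP);
    CREATE TRIGGER touch AFTER UPDATE OF payload ON events
    BEGIN
        UPDATE events SET last_modified = CURRENT_TIMESTAMP
        WHERE id = NEW.id;
    END;
    INSERT INTO events (id, payload) VALUES (2, 'old');
""")

conn.execute("UPDATE events SET payload = 'new' WHERE id = 2")

# CURRENT_TIMESTAMP has one-second precision: 'YYYY-MM-DD HH:MM:SS'.
stamped = conn.execute(
    "SELECT last_modified FROM events WHERE id = 2").fetchone()[0]

# A SELECT issued in the same second evaluates CURRENT_TIMESTAMP to this
# same value, so a strict "last_modified < current_timestamp" filter
# drops the freshly updated row even though the update succeeded.
rows = conn.execute(
    "SELECT id FROM events WHERE id = 2 AND last_modified < ?",
    (stamped,)).fetchall()
```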