Replace null value with previous value - SQL Server 2008 R2

I'm posting this question again with the full code. Last time I didn't include all of it, which resulted in answers I could not use.
I have the query below and want to replace the latest NULL value with the previous value for that currency. Sometimes there are several NULL values on the same date and sometimes there is only one.
I guess I have to do something with the left join on cteB? Any ideas?
See the current result and desired result below the query.
With cte as (
    SELECT
        PositionDate,
        c.Currency,
        DepositLclCcy
    FROM
        [Static].[tbl_DateTable] dt
        CROSS JOIN (Values ('DKK'), ('EUR'), ('SEK')) as c (Currency)
        Left join
        (
            SELECT
                BalanceDate,
                Currency,
                'DepositLclCcy' = Sum(Case when Activity = 'Deposit' then BalanceCcy else 0 END)
            FROM
                [Position].[vw_InternalBank]
            Group By
                BalanceDate,
                Currency
        ) ib
            on dt.PositionDate = ib.BalanceDate
            and c.Currency = ib.Currency
    Where
        WeekDate = 'Yes')
Select
    *
From cte cteA
Left join
    ( Select ... from Cte ) as cteB
    on .....
Order by
    cteA.PositionDate desc,
    cteA.Currency
Current Result
PositionDate Currency DepositLclCcy
2017-04-11 SEK 1
2017-04-11 DKK 3
2017-04-11 EUR 7
2017-04-10 SEK NULL
2017-04-10 DKK 3
2017-04-10 EUR 5
2017-04-07 SEK 5
2017-04-07 DKK 3
2017-04-07 EUR 5
Desired Result
PositionDate Currency DepositLclCcy
2017-04-11 SEK 1
2017-04-11 DKK 3
2017-04-11 EUR 7
2017-04-10 SEK 5
2017-04-10 DKK 3
2017-04-10 EUR 5
2017-04-07 SEK 5
2017-04-07 DKK 3
2017-04-07 EUR 5
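For reference, here is a rough sketch of how the cteA/cteB self-join hinted at in the question's query might be completed, joining every row to the latest earlier weekday for the same currency. Only the cteA/cteB names come from the question; the join condition and select list are an assumption about the intended approach, and the answers below take a different route with outer apply().
Select
    cteA.PositionDate,
    cteA.Currency,
    -- fall back to the previous weekday's value when the current one is NULL (assumed intent)
    DepositLclCcy = Coalesce(cteA.DepositLclCcy, cteB.DepositLclCcy)
From cte cteA
Left join cte cteB
    on  cteB.Currency = cteA.Currency
    and cteB.PositionDate = (Select Max(p.PositionDate)
                             From cte p
                             Where p.Currency = cteA.Currency
                               and p.PositionDate < cteA.PositionDate)
Order by
    cteA.PositionDate desc,
    cteA.Currency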

Using outer apply() to get the previous value for DepositLclCcy, and replacing null values using coalesce():
with cte as (
select
PositionDate
, c.Currency
, DepositLclCcy
from [Static].[tbl_DateTable] dt
cross join (values ('DKK') , ('EUR') , ('SEK')) as c(Currency)
left join (
select
BalanceDate
, Currency
, DepositLclCcy = Sum(case when Activity = 'Deposit' then BalanceCcy else 0 end)
from [Position].[vw_InternalBank]
group by BalanceDate, Currency
) ib
on dt.PositionDate = ib.BalanceDate
and c.Currency = ib.Currency
where WeekDate = 'Yes'
)
select
cte.PositionDate
, cte.Currency
, DepositLclCcy = coalesce(cte.DepositLclCcy,x.DepositLclCcy)
from cte
outer apply (
select top 1 i.DepositLclCcy
from cte as i
where i.PositionDate < cte.PositionDate
and i.Currency = cte.Currency
order by i.PositionDate desc
) as x
Skipping the initial left join and using outer apply() there instead:
with cte as (
select
dt.PositionDate
, c.Currency
, ib.DepositLclCcy
from [Static].[tbl_DateTable] dt
cross join (values ('DKK'), ('EUR'), ('SEK')) as c(Currency)
outer apply (
select top 1
DepositLclCcy = sum(BalanceCcy)
from [Position].[vw_InternalBank] as i
where i.Activity = 'Deposit'
and i.Currency = c.Currency
and i.BalanceDate <= dt.PositionDate
group by i.BalanceDate, i.Currency
order by i.BalanceDate desc
) as ib
where dt.WeekDate = 'Yes'
)
select *
from cte
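As a side note: on SQL Server 2012 or later the same gap fill could be written with the LAG() window function instead of outer apply(). This is only a sketch for comparison, reusing the cte from the first query in place of its final select; the question targets 2008 R2, where LAG() is not available.
select
      cte.PositionDate
    , cte.Currency
    -- take the previous weekday's value for the same currency when the current one is NULL
    , DepositLclCcy = coalesce(
          cte.DepositLclCcy
        , lag(cte.DepositLclCcy) over (partition by cte.Currency order by cte.PositionDate)
      )
from cte
Like the outer apply() in the first query, this only reaches one weekday back per currency, so a run of several consecutive NULL dates would not be fully filled by either version.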


Combination of DATESYTD but keep the months separate to get an average of distinct employees over the accumulated months

This data
Row_ID EmployeeID Date
1 1 2023-02-13
2 2 2023-02-13
3 3 2023-02-13
4 1 2023-01-13
5 8 2023-01-13
6 7 2023-01-13
7 4 2023-01-13
8 5 2023-01-13
9 6 2023-01-13
and a DATE table.
I use a measure:
_DistinctEmployee = DISTINCTCOUNT(tblTestEmployee[EmployeeID])
And a measure for the DATESYTD:
_DATESTYD = CALCULATE([_DistinctEmployee],DATESYTD(vwDimDatum[Datum2]))
Image for date choice and table
The end result I want from the measure (without dragging MONTH into the table) is that it takes the distinct number of employees for each month (because in the real scenario there can be more than one employee in a month), adds that month to the next month and so on for every accumulated month, and then divides that sum of values by the number of the month chosen.
Example with the data above:
I have chosen the month February.
My measure gives me the number 8, which is the accumulated distinct count of employees in JAN and FEB, and if I divide that by the month number I get 4, which is wrong; that is not the average.
There are 6 distinct employees in January
and there are 3 distinct employees in February.
(6 + 3) / 2 (February) = 4.5
That is the average number of employees per month over these months.
Try the following measure:
=
VAR DateChosen =
    MIN( vwDimDatum[Datum2] )
VAR T1 =
    SUMMARIZE(
        FILTER( ALL( vwDimDatum ), vwDimDatum[Datum2] <= DateChosen ),
        vwDimDatum[Datum2],
        "Distinct Employees", [_DistinctEmployee]
    )
RETURN
    AVERAGEX( T1, [Distinct Employees] )

DAX column to count or flag the latest record for each group

I want to get the latest record for each group in a table of rows. For example, I want Column C to identify the latest record by count, like this:
Column A  Column B             Column C
1         09-11-2022 15:46:33  2
1         09-11-2022 21:16:33  4
1         09-11-2022 15:09:40  1
1         09-11-2022 20:39:40  3
2         09-11-2022 15:46:33  1
2         09-11-2022 21:16:33  2
OR
Column A  Column B             Column C
1         09-11-2022 15:46:33
1         09-11-2022 21:16:33  True
1         09-11-2022 15:09:40
1         09-11-2022 20:39:40
2         09-11-2022 15:46:33
2         09-11-2022 21:16:33  True
I want a flag for the latest record in Column C; either of the result sets above would work for me.
Thanks in advance.
I have tried this:
LastById =
VAR modifiedon = 'Table'[Column C]
RETURN
    COUNTROWS(
        FILTER(
            ALL( 'Table' ),
            'Table'[Column C] < modifiedon
        )
    )
The first alternative:
Column C rank =
RANKX (
    CALCULATETABLE ( 'Table' , ALLEXCEPT ( 'Table' , 'Table'[Column A] ) ) ,
    [Column B] ,, ASC
)
The second alternative:
Column C bool =
VAR _max =
    CALCULATE (
        MAX ( 'Table'[Column B] ) ,
        ALLEXCEPT ( 'Table' , 'Table'[Column A] )
    )
RETURN
    IF ( [Column B] = _max , "True" )
Column C =
VAR latest =
    CALCULATE ( MAX ( 'Table'[Column B] ), ALLEXCEPT ( 'Table', 'Table'[Column A] ) )
RETURN
    IF ( 'Table'[Column B] = latest, "true" )

Create a countif measure not impacted by filters in a visual in Power BI

I need help creating measures that will reflect the actual count of rows in the table even when the visual is filtered.
Example:
ID    RankC  RankA  Avg Diff            RankC_count  RankA_count  Avg Diff_count
1000  AAA    XYZ    +01.00 to +01.25    5            6            4
1001  AAA    ZY1    +01.5.00 to +01.75  5            1            5
1002  AAB    XYZ    +01.5.00 to +01.75  3            6            5
1003  AAB    ZY2    +01.5.00 to +01.75  3            1            5
1004  AAB    XYZ    +01.00 to +01.25    3            6            4
1005  AAA    XYZ    +01.00 to +01.25    5            6            4
1006  AAA    ZY3    +01.00 to +01.25    5            1            4
1007  AAC    XYZ    +01.25.00 to +01.5  1            6            2
1008  AAA    ZY4    +01.25.00 to +01.5  5            2            2
1009  AAZ    ZY4    +01.5.00 to +01.75  1            2            5
1010  ABY    XYZ    +01.5.00 to +01.75  1            6            5
The last 3 columns represent the count of each entry. If I use a measure such as the one below, it provides the correct count. However, when I use it in the visual and filter by ID, say ID 1000, I want it to show line 1 with 5, 6, and 4 in the count columns, instead of all 1s.
Questions:
Is there any measure that gives me the correct result? Say, summarize the table first and then do a lookup?
Is creating a column the only choice? I cannot create columns, since I would need 1000 of these calculated columns, whereas with measures I can create 1000 in one go.
Thanks for any help.
AverageDiff_Count =
CALCULATE (
    COUNTROWS (
        FILTER ( '28Jun_1973', [Average Diff] = '28Jun_1973'[Average Diff] )
    )
)
The ALL function is useful here. It removes filter context so that it uses the whole table instead of just the part in the current filter context.
AvgDiff_Count =
VAR CurrAvgDiff = SELECTEDVALUE ( '28Jun_1973'[Avg Diff] )
RETURN
    COUNTROWS (
        FILTER ( ALL ( '28Jun_1973' ), '28Jun_1973'[Avg Diff] = CurrAvgDiff )
    )

Fetch text between delimiters using regex in Oracle

I'm getting a text in Oracle enclosed between delimiters. If possible, please help me create a regex for it. Here is an example of the text:
12322ABCD124A||!!123!!word1 !!word2!! word3!!||!!789!!word4!!word5 !! word6!!||!!2345 !!word7!!word8!! 890!!||
So far I'm only able to fetch:
||!!123!!word1 !!word2!! word3!!||!!789!!word4!!word5 !! word6!!||!!2345 !!word7!!word8!! 890!!
using this: (\|\|(.*))+([^\|\|]).
But I need this data to be separated on || and then split on !!, after which I need to save it into an array like this:
array[1]= (123,word1 ,word2, word3)
array[2]=(789,word4,word5 , word6)
array[3]=(2345 ,word7,word8, 890)
This one should work:
with v1 as
(
select '12322ABCD124A||!!123!!word1 !!word2!! word3!!||!!789!!word4!!word5 !! word6!!||!!2345 !!word7!!word8!! 890!!||' t from dual
)
select level -1 id, trim(',' from regexp_replace(regexp_substr(t,'[^\|]+',1,level),'!!',',')) array from v1
where level > 1
connect by level <= regexp_count(t,'\|\|');
Output:
ID ARRAY
---------- --------------------------
1 123,word1 ,word2, word3
2 789,word4,word5 , word6
3 2345 ,word7,word8, 890
And if the number of parts is constant (4) and you want them in separate columns:
with v1 as
(
select '12322ABCD124A||!!123!!word1 !!word2!! word3!!||!!789!!word4!!word5 !! word6!!||!!2345 !!word7!!word8!! 890!!||' t from dual
), v2 as
(
select level -1 id, trim(',' from regexp_replace(regexp_substr(t,'[^\|]+',1,level),'!!',',')) array
from v1
where level > 1
connect by level <= regexp_count(t,'\|\|')
)
select id,
regexp_substr(array,'[^,]+',1,1) val1,
regexp_substr(array,'[^,]+',1,2) val2,
regexp_substr(array,'[^,]+',1,3) val3,
regexp_substr(array,'[^,]+',1,4) val4
from v2;
Output:
ID VAL1 VAL2 VAL3 VAL4
---------- ---------- ---------- ---------- ----------
1 123 word1 word2 word3
2 789 word4 word5 word6
3 2345 word7 word8 890
PL/SQL style:
declare
    type t_text_array is table of varchar2(4000);
    v_text_array t_text_array := t_text_array();
    val varchar2(4000);
    cursor c1 is
        select '12322ABCD124A||!!123!!word1 !!word2!! word3!!||!!789!!word4!!word5 !! word6!!||!!2345 !!word7!!word8!! 890!!||' t from dual;
begin
    open c1;
    fetch c1 bulk collect into v_text_array;
    for i in 1..v_text_array.count loop
        for j in 2..regexp_count(v_text_array(i),'\|\|') loop
            val := trim(',' from regexp_replace(regexp_substr(v_text_array(i),'[^\|]+',1,j),'!!',','));
            for k in 1..regexp_count(val,',')+1 loop
                -- display to console or further process...
                dbms_output.put_line(regexp_substr(val,'[^,]+',1,k));
            end loop;
        end loop;
    end loop;
end;
/
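One note on running the PL/SQL block: dbms_output.put_line only shows anything once server output is enabled in the client, e.g. in SQL*Plus or SQL Developer:
SET SERVEROUTPUT ON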
The one below returns the expected results:
with x as
(select '2322ABCD124A||!!123!!word1 !!word2!! word3!!||!!789!!word4!!word5 !! word6!!||!!2345 !!word7!!word8!! 890!!||' str
from dual),
y as (
select regexp_substr(str,'[^||]+[!!]*', 1, level) str from x
where level > 1
connect by regexp_substr(str, '[^||]+[!!]*', 1, level) is not null
)
select
regexp_replace (
regexp_replace (
regexp_replace(str, '^!!', '(') ,
'!!$', ')'),
'[ ]*!![ ]*', ',') str
from y
You need to apply the split on the delimiter twice, as described here.
Then get the values (words) flat again using LISTAGG and finish with some string concatenation.
I'm providing a complete example with two input records, so it scales to any number of parsed lines.
You may need to adjust the T2 table that limits the number of splits. Some special handling is additionally needed if you can have NULL values in your keywords.
The query (commented below):
WITH t1 AS
(SELECT 1 id,
'12322ABCD124A||!!123!!word1 !!word2!! word3!!||!!789!!word4!!word5 !! word6!!||!!2345 !!word7!!word8!! 890!!|| ' col
FROM dual
UNION ALL
SELECT 2 id,
'22222ACCCC12Y||!!567!!word21 !!word22!! word23!!||!!789!!word24!!word25 !! word26!!||!!2345 !!word27!!word28!! 890!!|| ' col
FROM dual
),
t2 AS
(SELECT rownum colnum
FROM dual
CONNECT BY level < 10
/* (max) number of columns */
),
t3 AS
(SELECT t1.id,
t2.colnum,
regexp_substr(t1.col,'[^|]+', 1, t2.colnum) col
FROM t1,
t2
WHERE regexp_substr(t1.col, '[^|]+', 1, t2.colnum) IS NOT NULL
),
first_split AS
( SELECT id, colnum, col FROM t3 WHERE col LIKE '%!!%'
),
second_split AS
(SELECT t1.id,
t1.colnum linenum,
t2.colnum,
regexp_substr(t1.col,'[^!]+', 1, t2.colnum) col
FROM first_split t1,
t2
WHERE regexp_substr(t1.col, '[^!]+', 1, t2.colnum) IS NOT NULL
),
agg_values AS
(SELECT id,
linenum,
LISTAGG(col, ',') WITHIN GROUP (
ORDER BY colnum) val_lst
FROM second_split
GROUP BY id,
linenum
)
SELECT id,
'array['
|| row_number() over (partition BY ID order by linenum)
|| ']= ('
||val_lst
||')' array_text
FROM agg_values
ORDER BY 1,2
Yields as requested
ID ARRAY_TEXT
1 array[1]= (123, word1, word2, word3)
1 array[2]= (789, word4, word5, word6)
1 array[3]= (2345, word7, word8, 890)
2 array[1]= (567, word21, word22, word23)
2 array[2]= (789, word24, word25, word26)
2 array[3]= (2345, word27, word28, 890)
This is the result of the first_split query. It breaks the data into lines.
ID COLNUM COL
---------- ---------- ------------------------------------------
1 2 !!123!!word1 !!word2!! word3!!
1 3 !!789!!word4!!word5 !! word6!!
1 4 !!2345 !!word7!!word8!! 890!!
2 2 !!567!!word21 !!word22!! word23!!
2 3 !!789!!word24!!word25 !! word26!!
2 4 !!2345 !!word27!!word28!! 890!!
The second_split query breaks the lines into words.
ID LINENUM COLNUM COL
---------- ---------- ---------- --------------------------------------------------------------------------------------------------------------------------
1 2 1 123
1 2 2 word1
1 2 3 word2
1 2 4 word3
1 3 1 789
1 3 2 word4
1 3 3 word5
.....
The rest is LISTAGG to get the CSV keyword list and a ROW_NUMBER function to get nice sequential array IDs.
If you want to extract the values into separate columns, use PIVOT instead of LISTAGG. The drawback is that you must adjust the query to the actual number of values.
WITH t1 AS
(SELECT 1 id,
'12322ABCD124A||!!123!!word1 !!word2!! word3!!||!!789!!word4!!word5 !! word6!!||!!2345 !!word7!!word8!! 890!!|| ' col
FROM dual
UNION ALL
SELECT 2 id,
'22222ACCCC12Y||!!567!!word21 !!word22!! word23!!||!!789!!word24!!word25 !! word26!!||!!2345 !!word27!!word28!! 890!!|| ' col
FROM dual
),
t2 AS
(SELECT rownum colnum
FROM dual
CONNECT BY level < 10
/* (max) number of columns */
),
t3 AS
(SELECT t1.id,
t2.colnum,
regexp_substr(t1.col,'[^|]+', 1, t2.colnum) col
FROM t1,
t2
WHERE regexp_substr(t1.col, '[^|]+', 1, t2.colnum) IS NOT NULL
),
first_split AS
( SELECT id, colnum, col FROM t3 WHERE col LIKE '%!!%'
),
--select * from first_split order by 1,2,3;
second_split AS
(SELECT t1.id,
t1.colnum linenum,
t2.colnum,
regexp_substr(t1.col,'[^!]+', 1, t2.colnum) col
FROM first_split t1,
t2
WHERE regexp_substr(t1.col, '[^!]+', 1, t2.colnum) IS NOT NULL
),
pivot_values AS
(SELECT *
FROM second_split PIVOT (MAX(col) col FOR (colnum) IN (1 AS "K1", 2 AS "K2", 3 AS "K3", 4 AS "K4"))
)
SELECT id,
row_number() over (partition BY ID order by linenum) AS array_id,
K1_COL,
K2_COL,
K3_COL,
K4_COL
FROM pivot_values
ORDER BY 1,2;
gives the relational view
ID ARRAY_ID K1_COL K2_COL K3_COL K4_COL
---------- ---------- -------- -------- -------- --------
1 1 123 word1 word2 word3
1 2 789 word4 word5 word6
1 3 2345 word7 word8 890
2 1 567 word21 word22 word23
2 2 789 word24 word25 word26
2 3 2345 word27 word28 890
Oracle Setup:
CREATE TABLE table_name ( id, value ) AS
SELECT 1, '12322ABCD124A||!!123!!word1 !!word2!! word3!!||!!789!!word4!!word5 !! word6!!||!!2345 !!word7!!word8!! 890!!||' FROM DUAL UNION ALL
SELECT 2, '12322ABCD124A||!!321!!word1a !!word2a!! word3a!!||!!987!!word4a!!word5a !! word6a!!||!!5432 !!word7a!!word8a!! 098!!||' FROM DUAL;
Query 1:
SELECT id,
grp_no,
CAST(
MULTISET(
SELECT REGEXP_SUBSTR( t.text, '!\s*([^!]+?)\s*!', 1, LEVEL, NULL, 1 )
FROM DUAL
CONNECT BY LEVEL <= REGEXP_COUNT( t.text, '!\s*([^!]+?)\s*!' )
)
AS SYS.ODCIVARCHAR2LIST
) AS words
FROM (
SELECT id,
COLUMN_VALUE AS grp_no,
REGEXP_SUBSTR( value, '\|([^|]+)\|', 1, COLUMN_VALUE, NULL, 1 ) AS text
FROM table_name t,
TABLE(
CAST(
MULTISET(
SELECT LEVEL
FROM DUAL
CONNECT BY LEVEL <= REGEXP_COUNT( t.value, '\|([^|]+)\|' )
)
AS SYS.ODCINUMBERLIST
)
)
) t;
Results:
ID GRP_NO WORDS
---------- ---------- --------------------------------------------------------
1 1 SYS.ODCIVARCHAR2LIST('123','word1','word2','word3')
1 2 SYS.ODCIVARCHAR2LIST('789','word4','word5','word6')
1 3 SYS.ODCIVARCHAR2LIST('2345','word7','word8','890')
2 1 SYS.ODCIVARCHAR2LIST('321','word1a','word2a','word3a')
2 2 SYS.ODCIVARCHAR2LIST('987','word4a','word5a','word6a')
2 3 SYS.ODCIVARCHAR2LIST('5432','word7a','word8a','098')
Query 2:
SELECT id,
grp_no,
REGEXP_SUBSTR( t.text, '!\s*([^!]+)!', 1, 1, NULL, 1 ) AS Word1,
REGEXP_SUBSTR( t.text, '!\s*([^!]+)!', 1, 2, NULL, 1 ) AS Word2,
REGEXP_SUBSTR( t.text, '!\s*([^!]+)!', 1, 3, NULL, 1 ) AS Word3,
REGEXP_SUBSTR( t.text, '!\s*([^!]+)!', 1, 4, NULL, 1 ) AS Word4
FROM (
SELECT id,
COLUMN_VALUE AS grp_no,
REGEXP_SUBSTR( value, '\|([^|]+)\|', 1, COLUMN_VALUE, NULL, 1 ) AS text
FROM table_name t,
TABLE(
CAST(
MULTISET(
SELECT LEVEL
FROM DUAL
CONNECT BY LEVEL <= REGEXP_COUNT( t.value, '\|([^|]+)\|' )
)
AS SYS.ODCINUMBERLIST
)
)
) t;
Results:
ID GRP_NO WORD1 WORD2 WORD3 WORD4
---- ------ ------- ------- ------- -------
1 1 123 word1 word2 word3
1 2 789 word4 word5 word6
1 3 2345 word7 word8 890
2 1 321 word1a word2a word3a
2 2 987 word4a word5a word6a
2 3 5432 word7a word8a 098
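If the collection column from Query 1 then needs to be expanded back into one row per word, a table collection expression can do that. A minimal, self-contained sketch using a literal collection (the values are simply the first group of the sample data):
SELECT w.COLUMN_VALUE AS word
FROM   TABLE( SYS.ODCIVARCHAR2LIST( '123', 'word1', 'word2', 'word3' ) ) w;
The same TABLE( ... ) construct can be applied to the WORDS column of Query 1 by left-correlating it in the FROM clause, e.g. FROM ( query 1 ) t, TABLE( t.words ) w.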

Extract strings from a large string with Oracle regexp

I have a string as below:
select b.col1,a.col2,lower(a.col3) from table1 a inner join table2 b on a.col = b.col and a.col = b.col
inner join (select col1, col2, col3,col4 from tablename ) c on a.col1=b.col2
where
a.col = 'value'
The output needs to be table1, table2 and tablename from the above string. Please let me know the regex to get this result.
Should be a simple one :-)
WITH DATA AS(
  select q'[select b.col1,a.col2,lower(a.col3) from table1 a inner join table2 b on
a.col = b.col and a.col = b.col inner join (select col1, col2, col3,col4 from tablename )
c on a.col1=b.col2 where a.col = 'value']' str
  FROM DUAL)
SELECT LISTAGG(TABLE_NAMES, ' , ') WITHIN GROUP (
         ORDER BY val) table_names
FROM
  (SELECT 1 val,
          regexp_substr(str,'table[[:alnum:]]+',1,level) table_names
   FROM DATA
   CONNECT BY level <= regexp_count(str,'table')
  )
/
TABLE_NAMES
--------------------------------------------------------------------------------
table1 , table2 , tablename
Brief explanation, so that the OP and others might find it useful:
The REGEXP_SUBSTR looks for the word 'table', which may be followed by a number or a string like 1, 2, name, etc.
To find all such occurrences, I used the connect by level technique, but it gives the output in separate rows.
Finally, to put them in a single row as comma-separated values, I used LISTAGG.
Oh yes, and that q'[]' is the string literal technique.
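And for anyone unfamiliar with the connect by level splitting idiom used above, here is a minimal standalone sketch on a made-up comma-separated string:
SELECT regexp_substr('a,b,c', '[^,]+', 1, LEVEL) AS part
FROM DUAL
CONNECT BY LEVEL <= regexp_count('a,b,c', ',') + 1;
Each LEVEL picks the next match of the pattern, so the query returns one row per piece: a, b, c.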