I have a table that looks like this
(MAIN TABLE)
+----------------+--------------------+-------------------+------------------+
| TRANSACTION_ID | TRADE_PARTY_PREFIX | TRADE_PARTY_VALUE | TRADE_PARTY_TYPE |
+----------------+--------------------+-------------------+------------------+
| 1              | EXAMPLE            | 123456789         | Dealer           |
| 2              | EXAMPLE            | 123456789         |                  |
| 3              | EXAMPLE_2          | 123456789         | Non-Dealer       |
| 4              | EXAMPLE            | 987654321         |                  |
+----------------+--------------------+-------------------+------------------+
I have another table that looks like this
(LOOKUP TABLE)
+--------------------+-------------------+------------------+
| TRADE_PARTY_PREFIX | TRADE_PARTY_VALUE | TRADE_PARTY_TYPE |
+--------------------+-------------------+------------------+
| EXAMPLE            | 123456789         | Dealer           |
| EXAMPLE            | 987654321         | Dealer           |
+--------------------+-------------------+------------------+
I want a SAS script that fills in all the missing values of TRADE_PARTY_TYPE by looking up TRADE_PARTY_PREFIX and TRADE_PARTY_VALUE in the lookup table (but it should not touch a row where that column is already filled in).
Something along the lines of (in pseudo-code):
for row in main_table:
    if row["trade_party_type"] is not null:
        print(row)
    else:
        row["trade_party_type"] = lookup_table[row["trade_party_prefix"], row["trade_party_value"]]
Not sure how to do this in SAS.
I would use the COALESCE function in a PROC SQL step like so (here dset1 is your main table and dset2 is the lookup table):
PROC SQL;
    CREATE TABLE WANT AS
    SELECT a.transaction_id,
           a.trade_party_prefix,
           a.trade_party_value,
           COALESCE(a.trade_party_type, b.trade_party_type) AS trade_party_type
    FROM dset1 a
    LEFT JOIN dset2 b
        ON  a.trade_party_prefix = b.trade_party_prefix
        AND a.trade_party_value  = b.trade_party_value;
QUIT;
The COALESCE function evaluates its arguments in order and returns the value of the first argument that is not missing (not NULL).
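As a minimal standalone sketch of that behavior (not tied to the tables above): PROC SQL's COALESCE accepts both numeric and character arguments, while in a DATA step the character version is COALESCEC.

data _null_;
    num_a = .;
    num_b = 42;
    chr_a = '';
    chr_b = 'Dealer';
    first_num = coalesce(num_a, num_b);     /* numeric version: returns 42       */
    first_chr = coalescec(chr_a, chr_b);    /* character version: returns Dealer */
    put first_num= first_chr=;
run;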
I have 2 tables in SSAS / Power BI:
Table1:
| ValueName| ValueKey |
|:---- |:------: |
| abc | 1,2,3 |
Table2:
| ID | ValueKey | Value |
|:---- |:------: |:------: |
| ID1 | 1 | 87,8 |
| ID2 | 85 | 14 |
| ID3 | 90 | 95,8 |
| ID4 | 3 | 13,4 |
I need to retrieve (into a temp table, over which I will later run calculations) ID and Value, but only those rows whose ValueKey is 1, 2, or 3.
I need to do it with DAX. In SQL we have the STRING_SPLIT function for this kind of situation. Is there some way I can achieve this with DAX? My ValueKey column (Table1) is comma-separated text and the ValueKey column (Table2) is an INT.
Thanks in advance
Like @Jeroen Mostert suggests, you can do this by abusing the PATHCONTAINS function, like this:
FilteredTable2 =
VAR CurrKey = SELECTEDVALUE ( Table1[ValueKey] )
VAR PathFromKey = SUBSTITUTE ( CurrKey, ",", "|" ) /* Paths use | as separator. */
RETURN
FILTER ( Table2, PATHCONTAINS ( PathFromKey, Table2[ValueKey] ) )
However, this is not best practice for relating tables. In general, you don't want multiple keys in a single field.
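If you would rather normalize the model instead, one option (just a sketch, assuming a calculated table named Table1Keys is acceptable) is to expand the comma-separated text into one row per key using the PATH functions, and then create an ordinary relationship from Table1Keys[SingleKey] to Table2[ValueKey]:

Table1Keys =
VAR WithPath =
    // Convert the comma-separated list into a | separated path string
    ADDCOLUMNS ( Table1, "@Path", SUBSTITUTE ( Table1[ValueKey], ",", "|" ) )
RETURN
    GENERATE (
        WithPath,
        VAR CurrentPath = [@Path]
        RETURN
            SELECTCOLUMNS (
                GENERATESERIES ( 1, PATHLENGTH ( CurrentPath ) ),
                // PATHITEM returns text, so convert back to a whole number to match Table2[ValueKey]
                "SingleKey", VALUE ( PATHITEM ( CurrentPath, [Value] ) )
            )
    )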
Before we get to my question, please note that I purposefully did not include example data in this post, as my problem occurs both on my full dataset and on subsets of it. I have two datasets with client data in the following format.
Have_1
+------------+------------+------+
| dt | dt_next | id |
+------------+------------+------+
| 30.09.2010 | 31.10.2010 | 0001 |
+------------+------------+------+
| 31.10.2010 | 30.11.2010 | 0001 |
+------------+------------+------+
| 30.11.2010 | 31.12.2010 | 0001 |
+------------+------------+------+
| 31.12.2010 | 31.01.2011 | 0001 |
+------------+------------+------+
Have_2
+------+-------+------------+------------+
| id | event | start_date | end_date |
+------+-------+------------+------------+
| 0001 | 1 | 31.10.2010 | 30.11.2010 |
+------+-------+------------+------------+
| 0001 | 2 | 31.10.2010 | 31.12.2010 |
+------+-------+------------+------------+
I am trying to use the IFN function to create 1/0 flags in my dataset using the following logic:
Proc SQL;
Create table want as
Select a.*
,ifn(a.id in (select id from have_2 where a.dt <= end_date and start_date <= a.dt_next), 1, 0) as flg_1
,ifn(a.id in (select id from have_2 where a.dt <= end_date and start_date <= a.dt), 1, 0) as flg_2
From have_1 as a;
Quit;
The code works fine if I take only one client; however, if I take the full dataset (or even a small subset of it, such as only 10 clients), the process starts without error but simply never finishes. I tried adding indexes to both of my input datasets, without success.
Are there any peculiarities to the IFN function, which can make it behave that way?
So why not just join and take the MAX, across all of a client's events, of whether an event's dates fall into each period? That eliminates the need to run two correlated subqueries for every observation in HAVE_1.
proc sql;
    create table want2 as
    select a.id
         , a.dt
         , a.dt_next
         , max(a.dt <= b.end_date and b.start_date <= a.dt_next) as flg1
         , max(a.dt <= b.end_date and b.start_date <= a.dt) as flg2
    from have_1 a
    left join have_2 b
        on a.id = b.id
    group by 1, 2, 3
    ;
quit;
Note that the issue is with the correlated subqueries, not with the IFN() function call. Also, there is no need for IFN() here: SAS evaluates boolean expressions to 1 or 0, so the expression a=b returns the same result as IFN(a=b,1,0).
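A minimal standalone illustration of that (not tied to the tables above):

data _null_;
    a = 5;
    b = 3;
    flag1 = (a > b);             /* boolean expression resolves to 1 (true) or 0 (false) */
    flag2 = ifn(a > b, 1, 0);    /* same result, just more typing                        */
    put flag1= flag2=;
run;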
Can someone help me convert this SQL expression to DAX?
row_number() over (partition by date, customer, type order by day)
The row number is my desired output.
Assuming that your data looks like this table:
Sample
+------------+----------+---------+--------+
| Date | Customer | Product | Gender |
+------------+----------+---------+--------+
| 01/01/2018 | 1234 | P2 | F |
| 01/01/2018 | 1234 | P2 | M |
| 03/01/2018 | 1235 | P1 | F |
| 03/01/2018 | 1235 | P2 | F |
+------------+----------+---------+--------+
I have created a calculated column called Rank, using the RANKX and FILTER functions.
The first part of the calculation creates variables outside the scope of the FILTER function. The second part uses RANKX, which takes an expression - in this case Gender - to order the values.
Rank =
VAR _currentdate = 'Sample'[Date]
VAR _customer = 'Sample'[Customer]
VAR _product = 'Sample'[Product]
RETURN
    RANKX (
        FILTER (
            'Sample',
            [Date] = _currentdate &&
            [Customer] = _customer &&
            [Product] = _product
        ),
        [Gender],
        ,
        ASC
    )
The output matches the result of the SQL equivalent I used as a check:
select
*,
row_number() over(partition by Date,Customer,Product order by Gender)
from (
select '2018-01-01' as Date,1234 as CUSTOMER,'P2' AS PRODUCT, 'M' Gender union
select '2018-01-01' as Date,1234,'P2','F' UNION
select '2018-01-03' as Date,1235,'P1','F' UNION
select '2018-01-03' as Date,1235,'P2','F'
)t1
I am trying to avoid a long nested-IF formula by using INDEX/MATCH against a lookup table instead; however, what I need to match is both a condition and a string.
Lookup table:
+---+------------------+-------------------+-------+
| | A | B | C |
+---+------------------+-------------------+-------+
| 1 | Current to Prior | Portfolio Comment | Error |
| 2 | =0 | "" | 1 |
| 3 | <>0 | "" | -1 |
| 4 | >0 | OK – Losses | 0 |
| 5 | <0 | OK – Losses | 1 |
| 6 | <0 | OK – New Sales | 0 |
| 7 | >0 | OK – New Sales | 1 |
+---+------------------+-------------------+-------+
Column A: Lookup Condition
Column B: Lookup string
Column C: Return value
Data example with the correct hard-coded output (column C):
+---+------------------+-------------------+-------+
| | A | B | C |
+---+------------------+-------------------+-------+
| 1 | Current to Prior | Portfolio comment | Error |
| 2 | 0 | | 1 |
| 3 | -100 | OK – Losses | 1 |
| 4 | 50 | | -1 |
| 5 | 200 | OK – Losses | 0 |
| 6 | 0 | | 1 |
| 7 | -400 | OK – New Sales | 0 |
| 8 | 0 | | 1 |
+---+------------------+-------------------+-------+
Column A: Data value
Column B: Data string
Column C: Output formula
I need a formula that matches the data value with the lookup condition, the data string with the lookup string and outputs the return value.
I know you weren't necessarily asking for a VBA solution, but I (and many others) prefer using UDFs because, in my opinion, they make formulas easier and cleaner to read - plus you can do without the helper cells.
We start your UDF with a Select Case statement. We could use either the numerical value or the string for the cases; I decided to go with the string.
Within each case, the numerical value passed in the lngCondition parameter is compared, which ultimately determines the value returned by the function.
Since your conditions don't specify what happens when a textual value occurs with lngCondition = 0, I made the function return the worksheet error code #VALUE!, just as you'd expect from any other Excel formula. This is why the UDF has a Variant return type.
Public Function ReturnErrorCode(lngCondition As Long, strComment As String) As Variant
    Select Case strComment
        Case ""
            If lngCondition = 0 Then
                ReturnErrorCode = 1
            Else
                ReturnErrorCode = -1
            End If
        Case "OK - Losses"
            If lngCondition > 0 Then
                ReturnErrorCode = 0
            ElseIf lngCondition < 0 Then
                ReturnErrorCode = 1
            Else
                ' Your conditions don't specify that 'OK - Losses'
                ' can have a 0 value
                ReturnErrorCode = CVErr(xlErrValue)
            End If
        Case "OK - New Sales"
            If lngCondition < 0 Then
                ReturnErrorCode = 0
            ElseIf lngCondition > 0 Then
                ReturnErrorCode = 1
            Else
                ' Your conditions don't specify that 'OK - New Sales'
                ' can have a 0 value
                ReturnErrorCode = CVErr(xlErrValue)
            End If
        Case Else
            ReturnErrorCode = CVErr(xlErrValue)
    End Select
End Function
You would then use the function in the worksheet (e.g. in cell C2, next to the first data row) like so:
=ReturnErrorCode(A2, B2)
Great! But I have no knowledge of VBA and don't know how to add a UDF.
First, you need to open the VBA Editor. You can do this by simultaneously pressing Alt + F11.
Next, you need to create a standard code module. In the VBE, click Insert then select Module (NOT Class module!).
Then copy the code above, and paste it into the new code module you just created.
Since you have added VBA code to your workbook, you will need to save it as a macro-enabled workbook (.xlsm) the next time you save.
I have a set of multiple choice responses from a survey with 45 questions, and I've placed the correct responses as my first observation in the dataset.
In my DATA step I would like to set values to 0 or 1 depending on whether the variable in each observation matches the same variable in the first observation. I want to replace the response letter (A-D) with the 0 or 1 in the dataset; how do I go about doing that comparison?
I'm not doing any grouping, so I believe I can access the first row using first.x, but I'm not sure how to compare that across each variable (answer1-answer45).
| Id  | answer1 | answer2 | ... through answer45 |
|-----|---------|---------|----------------------|
| KEY | A       | B       |                      |
| 2   | A       | C       |                      |
| 3   | C       | D       |                      |
| 4   | A       | B       |                      |
| 5   | D       | C       |                      |
| 6   | B       | B       |                      |
Should become:
| Id  | answer1 | answer2 | ... through answer45 |
|-----|---------|---------|----------------------|
| KEY | A       | B       |                      |
| 2   | 1       | 0       |                      |
| 3   | 0       | 0       |                      |
| 4   | 1       | 1       |                      |
| 5   | 0       | 0       |                      |
| 6   | 0       | 1       |                      |
Current code for reading in the data:
DATA TEST(drop=name fill answer0);
INFILE SCORES DSD firstobs=2;
length id $4;
length answer1-answer150 $1;
INPUT name $ fill id $ (answer0-answer150) ($);
RUN;
Thanks in advance!
Here's how I might do it. Create a data set with PROC COMPARE that compares the KEY row to the observed responses; in that output you get X where an answer does not match the key and missing where it does. You can then use PROC TRANSREG to score those X/missing values to 0/1. PROC TRANSREG also creates macro variables that contain the names of the new variables and their number.
From the log: NOTE: _TRGINDN=2 _TRGIND=answer1D answer2D
data questions;
input id:$3. (answer1-answer2)(:$1.);
cards;
KEY A B
2 A C
3 C D
4 A B
5 D C
6 B B
;;;;
run;
data key;
if _n_ eq 1 then set questions(obs=1);
set questions(keep=id firstobs=2);
run;
proc compare base=key compare=questions(firstobs=2) out=comp outdiff noprint;
id id;
run;
options validvarname=v7;
proc transreg design data=comp(drop=_type_ type=data);
id id;
model class(answer:) / noint;
output out=scored(drop=intercept _:);
run;
%put NOTE: &=_TRGINDN &=_TRGIND;
I don't have my SAS license here at home, so I can't actually test this code. I'll give it my best shot, though ...
First, I'd keep my correct answers in a separate table, and then merge it with the answers from the respondents. That also makes the solution scalable, should you have more multiple choice solutions and answers in the same table, since you'd be joining on the assignment ID as well.
Now, import all your correct answers to a table answers_correct with column names answer_correct1-answer_correct45.
Then, merge the two tables and determine the outcome for each question.
DATA outcome;
    MERGE answers answers_correct;
    * We will not be using any BY.;
    * If you later add more questionnaires, merge BY the questionnaire ID.;
    ARRAY answer(*) answer1-answer45;
    ARRAY answer_correct(*) answer_correct1-answer_correct45;
    LENGTH result1-result45 $1;
    ARRAY result(*) result1-result45;
    DROP i;
    DO i = 1 TO DIM(answer);
        IF answer(i) = answer_correct(i) THEN result(i) = '1';
        ELSE result(i) = '0';
    END;
RUN;