SAS append to existing option

I used the Table Loader transformation (append to existing) and received the following error:
"Decryption/Decompression failure".
Is there another option to add rows to the current dataset?
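One alternative worth knowing about is appending the rows yourself with PROC APPEND from a user-written code node. A minimal sketch, with placeholder library and table names:

proc append base=targetlib.master_table   /* existing dataset to extend */
            data=work.new_rows            /* rows to add */
            force;                        /* tolerate differing column attributes */
run;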

Related

Converting a target interval variable into class variable i.e., 0 and 1 in SAS Enterprise Miner

I have a target variable called profit, which has positive and negative values. I want to convert it into a binary variable such that negative profit is 0 and positive profit is 1. So far I have been unable to do this in SAS Enterprise Miner.
You can modify your data by connecting your input data to a SAS Code Node. Let's use sashelp.class as an example, converting the variable sex into a 1/0 binary variable.
Add the following nodes to your diagram:
[Data] ---> [SAS Code] ---> [Metadata] ---> [Rest of your diagram]
Select the SAS Code Node and go to the Code Editor. Click the ellipsis (...) on the left side of the screen under the "Train" menu. Add the following code:
data &em_export_train.;          /* data going out of the node */
    set &em_import_data.;        /* data coming into the node */
    sex_binary = (sex = 'M');    /* boolean expression: 1 when sex is 'M', 0 otherwise */
run;
&em_export_train and &em_import_data are special macro variables that are shown above in the "Macro" menu. All data is treated as training data until it is partitioned: &em_import_data resolves to the data coming into the node, and &em_export_train resolves to the data going out of the node.
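Applied to the original question, the same pattern produces the binary target (assuming the variable is named profit and positive profit should map to 1):

data &em_export_train.;
    set &em_import_data.;
    profit_binary = (profit > 0);   /* 1 for positive profit, 0 otherwise */
run;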
Now that we've modified our data, we need to modify the metadata to tell Enterprise Miner to ignore the original variable and use our binary variable instead. Click the Metadata node and select the Train ellipses (...) under the "Variables" section on the left side of the screen. Modify your metadata as follows:
Sex: New Role --> Rejected
sex_binary: New Role --> Target
sex_binary: New Level --> Binary
sex_binary is now your target variable that you can use for predictive modeling.
Note that you can avoid all of this if you modify your data before bringing it in. The method described here effectively treats both a SAS Code Node and a Metadata Node as the new Data Node. This might be necessary if you're working with an immutable database, for example. Enterprise Miner can run all SAS code as well as R code, so you have multiple ways to ETL your data within it.

Informatica update strategy and unconnected lookup not working as expected

I am working on a table with two columns, PAYER_NM and ST_CD, and developing an update strategy with an unconnected lookup. I am using both columns as keys: PAYER_NM is marked as an L port and ST_CD as an O/L/R port, and the lookup condition is on both columns. When I modify an existing record, keeping the same PAYER_NM but changing only the ST_CD value, I want only the ST_CD value of the matching record in the target table to be updated. Instead, the mapping keeps inserting a new record into the target table rather than updating the existing one. What changes do I need to make to get the behavior described above?
I tried using only PAYER_NM as the key column and making ST_CD a non-key column, and vice versa.
Please check the session property to see if it is set to 'Insert' only. If not, check the target properties and see if they are set to insert only as well.
You can also use an Update Strategy transformation to ensure the update works correctly.

Variable in a Power BI query

I have a SQL query to get the data into Power BI. For example:
select a,b,c,d from table1
where a in ('1111','2222','3333' etc.)
However, the list of values ('1111','2222','3333' etc.) changes every day, so I would like the SQL statement to be updated before the data is refreshed. Is this possible?
Ideally, I would like to keep a spreadsheet with a list of a values (in this example) so that before each refresh it feeds those parameters into this script.
Another problem is that the list will have a different number of parameters each time, so the last value needs to be written without a trailing comma.
Another option I considered is to run the script without the where a in ('1111','2222','3333' etc.) clause, then load the spreadsheet with the list of a values and filter the report down based on that list; however, that would be a lot of data to import into Power BI.
It's my first post ever, although I have been sourcing help from Stack Overflow for years, so hopefully it's all clear.
I would create a new Query to read the "a values" from your spreadsheet. I would set the Load To / Import Data option to Only Create Connection (to avoid duplicating the data).
Then in your SQL query I would remove the where clause. With that gone you actually don't need to write custom SQL at all - just select the table/view from the Navigation UI.
Then, from the "table1" query, I would add a Merge Queries step, connecting to the "a values" query on the "a" column, using Join Kind: Inner. The resulting rows will be only those with a matching "a" column value (similar to your current SQL where clause).
Power Query won't be able to send this to your SQL Server as a single query, so it will first select all the rows from table1. But it is still fairly quick and efficient.
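For intuition, the merge produces the same rows as this inner join (a_values here stands in for the query over your spreadsheet; Power Query performs the match itself rather than sending this SQL to the server):

select t.a, t.b, t.c, t.d
from table1 t
inner join a_values v
    on t.a = v.a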

SAS Data Integration - Create a physical table from metadata structure

I need to use an Append object after a series of joins that have a conditional run... So a join step may not execute if its condition is not verified, and its physical WORK dataset will not be created.
The problem is that the Append step raises an error if one or more of its input physical datasets have not been created.
Is there a smart way to create an empty physical table from the metadata structure of the joins' work tables, or to use the Append with non-created datasets?
Creating the table with the list of all fields is not a real solution, because I would have to replicate it for 8 different joins and then replicate the job 10 times...
Thanks to all
Roberto
Thank you for your comments.
What you should do:
Amend your conditional node so that, on the positive condition, it creates a global macro variable with the value MAX, and on the negative condition it creates the same variable with the value 0.
Replace the offending SQL step with a "Create Table" node.
In the options for "Create Table", specify the macro variable for "Maximum output rows (OUTOBS)".
So now, when your condition is not met, you will always end up with an empty table. When the condition is met, the step executes normally.
I must say my version of DI Studio is a bit old. In my version the SQL node doesn't allow passing macro variables to SQL options; only integers can be typed in. Check if your version allows it, because if it does, you can amend the existing SQL step and avoid replacing it with another node.
One more thing: you will get a warning whenever the OUTOBS value is less than the number of rows the step would otherwise produce.
Let me know if you have any questions.
In the end I created another step that extracts 0 rows from the source table using the condition 1=0 in the WHERE tab. This way I get an empty table that I can use with a DATA/SET step in the post-SQL of the conditional run if the join's work table does not exist.
This is not a solution, but a valid workaround.
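A minimal sketch of that workaround in code form (library and table names are placeholders):

proc sql;
    create table work.join_result as
    select * from srclib.source_table
    where 1 = 0;   /* matches no rows, so only the column structure is copied */
quit;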

SAS Enterprise Miner split Dataset by binary variable

I am using SAS Enterprise Miner 13.2.
I have a SAS table as a data source. In this table I have a binary variable D_TYP ("I" and "P") and other categorical variables.
I want to split the data by D_TYP so that I get two tables, one with all the "I" rows and the other with the "P" rows. The problem is I don't know how.
I have been looking in the taskbar and I tried Filter and Data Partition. I could probably use SAS Code to split the data, but I think there is another way with the tasks.
You could use two Filter nodes to do the job, one filtering out "I" and the other filtering out "P". Each resulting data set will then contain only one value of the binary variable. In case you are not familiar with the Filter node: click the option Class Variables in the properties panel and apply a user-specified filter. You have to manually select the group to drop by clicking on its corresponding bar.
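Alternatively, if you do go the SAS Code route mentioned in the question, a minimal sketch (the source table name is a placeholder):

data work.typ_i work.typ_p;
    set work.source_table;                        /* your incoming data set */
    if D_TYP = 'I' then output work.typ_i;        /* all "I" rows */
    else if D_TYP = 'P' then output work.typ_p;   /* all "P" rows */
run;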