I am working on a project where I'm reading raw census data into SAS Enterprise Guide to be processed into a merged output. The first few columns are character fields serving as geographic identifiers.
The rest of the raw data contains numeric fields, all named like "HD01_VD01" and so on up through "HD01_VD78". However, census data occasionally suppresses numbers, and some observations have "*****" in the raw data. Whenever that happens, SAS reads the numeric field in as character.
What would be a good way to ensure that any "HD01_VD(whatevernumber)" field is always numeric, converting "*****" to a missing value (".") so the field stays numeric?
I don't want to hard-code the conversion back to numeric for every field that gets read in as character, because my code works with many different census tables. Would a macro variable be the way to do this? An if statement in each census table's data step?
Using arrays and looping over them would be the best option, as mentioned in the comment by david25272.
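A minimal sketch of that array approach, assuming every HD field was imported as character (the dataset names "have" and "want" and the numeric target names n01-n78 are placeholders; the VD01-VD78 range is taken from the question):

data want;
    set have;
    array raw {*} $ HD01_VD01-HD01_VD78;   /* the fields as imported (character) */
    array num {*} n01-n78;                 /* numeric versions to create         */
    do i = 1 to dim(raw);
        /* the ?? modifier suppresses the invalid-data note, so "*****"
           quietly becomes a numeric missing value (.) */
        num{i} = input(raw{i}, ?? best12.);
    end;
    drop i HD01_VD01-HD01_VD78;
run;

If only some of the fields come in as character (and which ones varies by table), you would first need to build the list of character variables dynamically, for example from dictionary.columns, before setting up the array.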
Another option is to change the format of the fields in Enterprise Guide, either in:
the Import Task that reads the files: change the field to numeric
or
a Query Builder task: create a calculated field and use the advanced expression input(HD02_V36,11.)
I am trying to build a binary classifier based on a tabular dataset that is rather sparse, but training is failing with the following message:
Training pipeline failed with error message: Too few input rows passed validation. Of 1169548 inputs, 194 were valid. At least 50% of rows must pass validation.
My understanding was that tabular AutoML should be able to handle null values, so I'm not sure what's happening here, and I would appreciate any suggestions. The documentation explicitly mentions reviewing each column's nullability, but I don't see any way to set or check a column's nullability on the dataset tab (perhaps the documentation is out of date?). Additionally, the documentation explicitly mentions that missing values are treated as null, which is how I've set up my CSV. The documentation for numeric columns, however, does not explicitly list support for missing values, just NaN and inf.
The dataset is 1 million rows and 34 columns, and only 189 rows are null-free. My sparsest column has data in 5,000 unique rows; the next two rarest have data in 72k and 274k rows, respectively. Columns are a mix of categorical and numeric, with only a handful of columns without nulls.
The data is stored as a CSV, and the dataset import seems to run without issue. Generate statistics ran on the dataset, but for some reason the missing % column failed to populate. What might be the best way to address this? I'm not sure if this is a case where I need to change my null representation in the CSV, change some dataset/training setting, or if it's an AutoML bug (less likely). Thanks!
To allow invalid and null values during training and prediction, you have to explicitly set the "allow invalid values" flag to Yes during training. You can find this setting under the model training settings on the dataset page. The flag has to be set on a column-by-column basis.
I tried @Kabilan Mohanraj's suggestion and it resolved my issue. What I had to do was click the dropdown to allow invalid values into training. After making this change, all rows passed validation and my model was able to train without issue. I'd initially assumed that missing values would not count as invalid, which was incorrect.
So, I have 3 xlsx files full of data that has already been treated, so I pretty much just have to display the data using graphs. The problem is that Power BI aggregates all numeric data (using count, sum, etc.). In their community they suggest creating new measures; the thing is, in that case I would have to create a LOT of measures... Also, I tried converting the data to text, and even so, Power BI counts it!
Any help, please?
There are several ways to tackle this:
When you pull a field into the field well for a visualisation, you can click the drop down in the field well and select "Don't summarize"
In the data model, select the column and, on the ribbon, select "Don't summarize" as the summarization option in the Properties group.
Both options work for numeric as well as text fields.
And yes, you never want to use implicit measures, i.e. the automatic calculations that Power BI creates. If you want to keep on top of what is being calculated, create your own measures, and yes, there will be many.
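For reference, an explicit measure is just a named DAX expression you define once and reuse; a minimal example (the Sales table and Amount column here are made-up names):

Total Amount = SUM ( Sales[Amount] )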
Edit: If by "aggregating" you are referring to the fact that text values will be grouped in a table (you don't see any duplicates), then you need to add a column with unique values to the table so all the duplicates of the text values show up. This can be done in the data source by adding an Index column, then using that Index column in the table and setting it to a very narrow width to make it invisible.
I am trying to load a SAS data file together with its variable and value labels, but I can't seem to make it work.
I have 3 SAS files:
the SAS data ("data_final.sas7bdat")
a SAS format dictionary that contains the format name, variable names/labels, etc. ("formats.sas7bdat")
a SAS format library that contains the format name, value names/labels, etc. ("format_library.sas7bdat")
I am trying to load this into SPSS using the following code, but it doesn't work. It loads the data and the variable labels but not the value labels.
GET SAS DATA='\data_final.sas7bdat'
/FORMATS='\formats.sas7bdat'
/FORMATS='\format_library.sas7bdat'.
Any help is greatly appreciated.
Thank you!
The FORMATS= option wants the name of the SAS format catalog, not another SAS dataset. Catalogs use sas7bcat as the extension.
GET SAS DATA='\data_final.sas7bdat'
/FORMATS='\formats.sas7bcat'.
If you really cannot get it to work, then read in format_library.sas7bdat and look at the FMTNAME, TYPE, START, END and LABEL variables, and use those to generate the SPSS code you need to attach value labels to your SPSS data.
FMTNAME is the name of the format. TYPE determines whether it applies to character or numeric values (or whether it is in fact an INFORMAT instead of a FORMAT). START and END mark the range of values (frequently they will be the same), and LABEL is the decoded value (aka the value label). Unlike in SPSS, in SAS you only have to define the code/decode mapping once and can then apply it to as many variables as you want.
The dataset you show as being named formats.sas7bdat looks like the variable-level metadata. That should list each variable (NAME) and what format, if any, has been attached to it (FORMAT). So if that shows a variable named FRED with the format YESNO attached, look for records in format_library where FMTNAME='YESNO' and see what values it maps. If FRED is numeric with values 1 and 2, then format YESNO might have one record with START='1' and LABEL='YES' and another with START='2' and LABEL='NO'.
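For that hypothetical FRED/YESNO example, the generated SPSS syntax would look something like:

* Attach the decodes found in format_library to the variable FRED.
VALUE LABELS FRED
  1 'YES'
  2 'NO'.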
I am using the SAS Enterprise Miner 13.2.
I have a SAS table as a data source. In this table I have a binary variable D_TYP ("I" and "P") and other categorical variables.
I want to split the data by D_TYP so I get two tables, one with all "I" and the other with all "P". The problem is I don't know how.
I have been looking in the taskbar and I tried Filter and Data Partition. I could probably use SAS code to split the data, but I think there is another way with the tasks.
You could use two Filter nodes to do the job, one filtering out "I" and the other filtering out "P". Each resulting data set should then contain only one level of the binary variable. In case you are not familiar with the Filter node: click the Class Variables option in the properties panel and apply a user-specified filter. You have to manually select the group by clicking on its corresponding bar.
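If you do fall back to a SAS Code node instead, the split is a single data step; a sketch, where the library and dataset names are placeholders and only D_TYP comes from your question:

data work.typ_i work.typ_p;
    set work.source;                        /* your imported data source */
    if D_TYP = 'I' then output work.typ_i;
    else if D_TYP = 'P' then output work.typ_p;
run;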
I am trying to create a table where it only counts the attendees of one type of training (rows) if they attended another particular training (column) AFTER the first one. I think I need to recreate a COUNTIF function that compares the dates of the trainings, but I'm not sure how to set this up so that it compares the dates of the row trainings and column trainings. Any ideas?
Edit 3/23
Alex, your solution would work if I had different variables for the dates of each type of training. Is there a way to construct this without having to create new variables for each type of training that I want to compare? Put another way, is there a way to refer to the rows and columns of the table in the formula that would compare the dates? So, something like "count if the start date of this column exceeds the start date of this row." (basically, is there something like the Excel index function in Tableau?)
It may help to see how my data is structured -- here is a scrubbed version: https://docs.google.com/spreadsheets/d/1YR1Wz-pfGHhBxDQDGYgmemLGoCK0cSvKOeE8w33ZI3s/edit?usp=sharing
The "table" tab shows the table that I'm trying to create in Tableau.
Define a calculated field for your condition, called, say, trained_after, as:
training_b_date > training_a_date
trained_after will be true or false for each data row, depending on whether the B training was dated later than the A training.
If you want more precise control over the difference between the dates, use the date_diff function, say date_diff("hour", training_a_date, training_b_date) > 24 to insist on a waiting period of at least 24 hours.
That field may be all you need. You can put trained_after on the filter shelf to filter only to see data rows meeting the condition. Or put it on another shelf to partition the data according to that condition. Or use your field to create other calculated fields.
Realize that if either of your date fields is null, then your calculated field will evaluate to null in that case. Aggregate functions like Sum(), Count() etc ignore null values.
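If you would rather have the comparison come out false instead of null when a date is missing, you could guard the calculation with ISNULL; a sketch using the same hypothetical field names as above:

NOT ISNULL(training_a_date)
AND NOT ISNULL(training_b_date)
AND training_b_date > training_a_date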