I am new to SAS and I need some help here.
The question below:
So far, I have done this:
data Purchase;
infile ‘c:\temp\PurchaseRecords.dat’ dlm=’,’ DSD;
input id $8 visit_no @ unitpurchased @;
keep id unitpurchased;
run;
What do I need to add in my statement to make those orders look like this?
Just an example.
Thank you.
You can use the INFILE statement option COLUMN= in conjunction with the INPUT statement's trailing @ (held input) modifier to determine when held input has run past a trailing comma, which here indicates a missing value that is to be interpreted as zero units_purchased. The automatic variable _INFILE_ is used to check when an INPUT statement has positioned the pointer beyond the length of the data line.
data want;
infile datalines dsd dlm=',' column=p;
attrib id length=$8 units_purchased length=8 ;
input id @; * held input record;
* loop over held input record;
do while (p <= length(_infile_)+1); * +1 for dealing with trailing comma;
input units_purchased @; * continue to hold the record;
if missing(units_purchased) then units_purchased = 0;
output;
end;
datalines;
C005,3,15,,39
D2356,4,11,,5
A323,3,10,15,20
F123,1,
run;
The sometimes easier to use @@ (double trailing @) modifier wouldn't work in this case, because a missing value is to be considered valid input and thus can't be used to assert a 'no more data' condition.
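For reference, here is a minimal sketch (with made-up data, unrelated to the question) of how @@ is typically used, reading several observations' worth of values from a single line:
data pairs;
input x y @@; * double trailing @ holds the line across iterations of the data step;
datalines;
1 2 3 4 5 6
;
run;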
Since the data includes the number of values, use that to control a DO loop that reads the values. I am not sure why you would want to lose the information on the order of the values, so I have commented out the KEEP statement. To convert the missing values to zeros I used a sum statement; you could also use an IF/THEN statement, a COALESCE() function call, or other methods (two examples follow the step below).
data Purchase;
infile 'c:\temp\PurchaseRecords.dat' dsd truncover ;
length id $8 ;
input id visit_no @ ;
do visit=1 to visit_no ;
input unitpurchased @;
unitpurchased+0; * sum statement: replaces a missing value with zero;
output;
end;
* keep id unitpurchased;
run;
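For instance, the sum statement line could be replaced by either of these statements; each turns a missing value into zero:
if missing(unitpurchased) then unitpurchased=0;
unitpurchased=coalesce(unitpurchased,0);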
Your original program had a few errors (a corrected sketch follows this list):
Wrong quote characters. Use normal ASCII single or double quote characters.
It is reading the value of ID from column 8 only. I find it better to use a LENGTH statement to define the variables instead of forcing SAS to guess at how to define them.
The INPUT statement is improperly trying to use the column pointer control, @nnn. Plus the variable whose value would give the column to move the pointer to, unitpurchased, has not yet been given a value.
No attempt was made to read more than one value from the line.
You did not include the TRUNCOVER (or even the older MISSOVER) option on your INFILE statement.
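Putting those fixes together, a corrected version of the original step might look something like this (a sketch only; it assumes the same file layout as in the example above and keeps the KEEP statement from the question):
data Purchase;
infile 'c:\temp\PurchaseRecords.dat' dsd dlm=',' truncover;
length id $8;
input id visit_no @; * hold the line;
do visit=1 to visit_no;
input unitpurchased @; * keep holding the line;
if missing(unitpurchased) then unitpurchased=0;
output;
end;
keep id unitpurchased;
run;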
I am trying to export SAS data into CSV. The SAS dataset name here is abc and its contents are:
LINE_NUMBER DESCRIPTION
524JG 24PC AMEFA VINTAGE CUTLERY SET "DUBARRY"
I am using the following code:
filename exprt "C:/abc.csv" encoding="utf-8";
proc export data=abc
outfile=exprt
dbms=tab;
run;
The output is:
LINE_NUMBER DESCRIPTION
524JG "24PC AMEFA VINTAGE CUTLERY SET ""DUBARRY"""
So there are double quotes before and after the description, and additional double quotes appear before and after the word DUBARRY. I have no clue what's happening. Can someone help me resolve this and help me understand what exactly is happening here?
Expected result:
LINE_NUMBER DESCRIPTION
524JG 24PC AMEFA VINTAGE CUTLERY SET "DUBARRY"
There is no need to use PROC EXPORT to create a delimited file; you can write it with a simple DATA step. If you want to create your example file, just do not use the DSD option on the FILE statement. But note that, depending on the data you are writing, you could create a file that cannot be properly parsed because of extra unprotected delimiters. You will also have trouble representing missing values.
Let's make a sample dataset we can use to test.
data have ;
input id value cvalue $ name $20. ;
cards;
1 123 A Normal
2 345 B Embedded|delimiter
3 678 C Embedded "quotes"
4 . D Missing value
5 901 . Missing cvalue
;
Essentially PROC EXPORT is writing the data using the DSD option, like this:
data _null_;
set have ;
file 'myfile.txt' dsd dlm='09'x ;
put (_all_) (+0);
run;
This will yield a file like the following (with pipes shown in place of the tabs so you can see them):
1|123|A|Normal
2|345|B|"Embedded|delimiter"
3|678|C|"Embedded ""quotes"""
4||D|Missing value
5|901||Missing cvalue
If you just remove the DSD option then you get a file like this instead:
1|123|A|Normal
2|345|B|Embedded|delimiter
3|678|C|Embedded "quotes"
4|.|D|Missing value
5|901| |Missing cvalue
Notice how the second line looks like it has 5 values instead of 4, making it impossible to know how to split it into 4 values. Also notice how the missing values take up at least one character each.
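For reference, the step that produces this second file is identical to the one above except that the DSD option is removed:
data _null_;
set have ;
file 'myfile.txt' dlm='09'x ; * no DSD, so values are not quoted and missing values are written as . or blank;
put (_all_) (+0);
run;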
Another way would be to run a data step to convert the normal file that PROC EXPORT generates into the variant format that you want. This might also give you a place to add escape characters to protect special characters if your target format requires them.
data _null_;
infile normal dsd dlm='|' truncover ;
file abnormal dlm='|';
do i=1 to 4 ;
if i>1 then put '|' @; * write the delimiter and hold the output line;
input field :$32767. @; * read the next field and hold the input line;
field = tranwrd(field,'\','\\'); * escape backslashes;
field = tranwrd(field,'|','\|'); * escape embedded delimiters;
len = lengthn(field);
put field $varying32767. len @; * write exactly LEN characters of the field;
end;
put;
run;
You could even make this datastep smart enough to count the number of fields on the first row and use that to control the loop so that you wouldn't have to hard code it.
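Here is a sketch of that idea, reusing the same filerefs (normal and abnormal) as above; it assumes COUNTW with the m and q modifiers so that empty fields are counted and delimiters inside quotes are ignored:
data _null_;
infile normal dsd dlm='|' truncover ;
file abnormal dlm='|';
input @; * load the record into _infile_ and hold it;
retain nfields;
if _n_=1 then nfields=countw(_infile_,'|','mq'); * count the fields on the first row;
do i=1 to nfields ;
if i>1 then put '|' @;
input field :$32767. @;
field = tranwrd(field,'\','\\');
field = tranwrd(field,'|','\|');
len = lengthn(field);
put field $varying32767. len @;
end;
put;
run;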
I am reading a .csv file into SAS where some of the fields are populated mainly by null values (.) and a handful are populated by 5-digit SAS dates. I need SAS to recognise the field as a date field (or at the very least a numeric field), instead of reading it in as text as it is doing at the minute.
A simplified version of my code is as follows:
data test;
informat mydate date9.;
infile myfile dsd dlm ',' missover;
input
myfirstval
mydate
;
run;
With this code all values are read in as . and the field data type is text. Can anyone tell me what I need to change in the above code to get the output I need?
Thanks
If you write a data step to read a CSV file, SAS will create each variable with the data type that you specify. If you tell it that MYDATE is numeric, it will NOT convert it to a character variable.
data test;
infile cards dsd dlm=',' TRUNCOVER ;
length myfirstval 8 mydate 8 mythirdval 8;
input myfirstval mydate mythirdval;
format mydate date9.;
cards;
1,1234,5
2,.,6
;
Note that the data step compiler will define the type of a variable at the first opportunity it gets. For example, if the first reference is in a statement like IF MYDATE='.' then MYDATE will be defined as character with length one, to match the type of the value it is being compared to. That is why it is best to start with a LENGTH or ATTRIB statement to clearly define your variables.
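A minimal sketch (with made-up data) of that pitfall: here the IF statement mentions MYDATE before the INPUT statement and compares it to the character literal '.', so the compiler defines MYDATE as character with length 1, and the value read later is truncated to a single character instead of becoming a usable number:
data pitfall;
infile cards dsd dlm=',' truncover;
if mydate='.' then delete; * first reference: MYDATE becomes character, length 1;
input myfirstval mydate;
put mydate=; * shows mydate=1 because 19345 was truncated to one character;
cards;
1,19345
;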
Is it possible to input the following with a single input statement without producing any erroneous missing values? I believe I've got the right format for the first 19 characters of each of the datetime variables below, but I can't seem to find a way to make SAS ignore the extraneous characters and skip to the next delimiter before trying to input the next variable.
data _null_;
infile datalines dlm=',' dsd missover;
input a is8601dt19. b is8601dt19. c $4.;
format a b is8601dt.;
put a= b= c=;
datalines;
2013-01-19T09:40:39.812+0000,2013-01-19T09:40:39.812+0000,text
,2013-01-19T09:40:39.812+0000,text
,,text
;
run;
My workaround for the time being is to initially input as $28. and then use the substr and input functions, but I suspect that there may be a more direct/efficient way.
I don't see a clear way to do this. The problem is that these are not actually ISO8601 values, at least according to SAS.
SAS recognizes two versions of ISO: Basic (B8601DZ.) and Extended (E8601DZ.). Basic has no colons/dashes/etc., and Extended has all possible ones.
Basic: 20130119T094039812+0000
Extended: 2013-01-19T09:40:39.812+00:00
(see the doc page on ISO date/times for more information)
Yours are an amalgamation of the two, and SAS doesn't seem to like that.
Add to that the fact that you're reading this from a delimited file, and I don't see a good single pass solution. I think your method is fine. You can probably skip the substring, but otherwise you will be stuck.
Your input above doesn't work because you can't just put informats after the variables in list input like that; if you prepend a : then the informat will be used, but unfortunately you can't actually use it to limit the incoming text to the informat's width (not sure why - it can in other contexts), i.e.:
input a :e8601dz19. b :e8601dz19. c :$4.;
That's legal, but it doesn't help you, as it still hands the full 28-character field to the informat (I'm not sure whether it is right-aligning it, but it's definitely not left-aligning and truncating it the way formatted input would). Your original statement is formatted input, but what you need here is modified list input, hence the issue.
You could do this, if you didn't have all that missing data, for example:
data _null_;
infile datalines dlm=',' dsd missover;
informat a b e8601dt19.;
input
@1 a e8601dt19.
@"," b e8601dt19.
@"," c $4.;
format a b is8601dt.;
put a= b= c=;
datalines;
2013-01-19T09:40:39.812+0000,2013-01-19T09:40:39.812+0000,text
,2013-01-19T09:40:39.812+0000,text
, ,text
;
run;
That works for the first line, basically reading the first 19 characters into a, then skipping to the next comma and reading b. But notice that it fails for every other row, because it eats up too many characters for a. Anything you do to adapt this to work (which probably could be done) is going to be far more work than just substringing.
I would do this:
data _null_;
infile datalines dlm=',' dsd missover;
informat a b e8601dt19.;
length a_c b_c $28;
input
a_c $ b_c $ c $;
a = input(a_c,??e8601dt19.); * ?? suppresses the error messages for blank values;
b = input(b_c,??e8601dt19.);
format a b is8601dt.;
put a= b= c=;
datalines;
2013-01-19T09:40:39.812+0000,2013-01-19T09:40:39.812+0000,text
,2013-01-19T09:40:39.812+0000,text
, ,text
;
run;
No substring is necessary; the informat width of 19 simply stops reading after the seconds. Or insert the missing colon into the offset programmatically (turning +0000 into +00:00) if you would like the time zone information used.
This seems like it should be straightforward, but I can't find how to do this in the documentation. I want to read in a comma-delimited file, but it's very wide, and I just want to read a few columns.
I thought I could do this, but the @ pointer seems to point to columns of the text rather than the column numbers defined by the delimiter:
data tmp;
infile 'results.csv' delimiter=',' MISSOVER DSD lrecl=32767 firstobs=2;
input
@1 id
@5 name $ ;
run;
In this example, I want to read just what is in the 1st and 5th columns based on the delimiter, but SAS is reading what is in positions 1 and 5 of the text file. So if the first line of the input file starts like this:
1234567, "x", "y", "asdf", "bubba", ... more variables ...
I want id=1234567 and name=bubba, but I'm getting name=567, ".
I realize that I could read in every column and drop the ones I don't want, but there must be a better way.
Indeed, @ does point to a column of the text, not the delimited column. The only method using standard input I've ever found is to read into a throwaway variable (blank), i.e.:
input
id
blank $
blank $
blank $
name $
;
and then drop blank.
However, there is a better solution if you don't mind writing your input differently.
data tmp;
infile datalines;
input @; * load the line into _infile_ and hold it;
id = scan(_INFILE_,1,',');
name = scan(_INFILE_,5,',');
put _all_;
datalines;
12345,x,y,z,Joe
12346,x,y,z,Bob
;;;;
run;
It makes formatting slightly messier, as you need PUT or INPUT function calls for each variable that you do not want left as plain character, but it might be easier depending on your needs.
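For example, to get ID as a number rather than a character value, replace the first SCAN line in the step above with an INPUT function call (the plain 32. numeric informat is just one reasonable choice):
id = input(scan(_INFILE_,1,','),32.);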
You can skip fields fairly efficiently if you know a bit of INPUT statement syntax; note the use of (3*dummy)(:$1.). Reading just one byte should also improve performance slightly.
data tmp;
infile cards DSD firstobs=2;
input id $ (3*dummy)(:$1.) name $;
drop dummy;
cards;
id,x,y,z,name
1234567, "x", "y", "asdf", "bubba", ... more variables
1234567, "x", "y", "asdf", "bubba", ... more variables
run;
proc print;
run;
One more option that I thought of when answering a related question from another user.
filename tempfile temp;
data _null_;
set sashelp.cars;
file tempfile dlm=',' dsd lrecl=32767;
put (Make--Wheelbase) ($);
run;
data mydata;
infile tempfile dlm=',' dsd truncover lrecl=32767;
length _tempvars1-_tempvars100 $32;
array _tempvars[100] $;
input (_tempvars[*]) ($);
make=_tempvars[1];
type=_tempvars[3];
MSRP=input(_tempvars[6],dollar8.);
keep make type msrp;
run;
Here we use an array of effectively temporary (can't actually BE temporary, unfortunately) variables, and then grab just what we want specifying the columns. This is probably overkill for a small file - just read in all the variables and deal with it - but for 100 or 200 variables where you want just 15, 18, and 25, this might be easier, as long as you know which column you want exactly. (I could see using this in dealing with census data, for example, if you have it in CSV form. It's very common to just want a few columns most of which are way down 100 or 200 columns from the starting column.)
You have to take some care with the lengths for the temporary array (the length has to be as long as the longest column that you care about!), and you have to be careful not to mix up the column positions, since you won't find out about a mistake unless it's obvious from the data.