Reading a delimited text file with blank column - sas

I'm trying to read a pipe-delimited text file in SAS with the following code:
Data MyData;
Infile MyFile Dsd Dlm= '|' Firstobs= 2 Termstr = CRLF Truncover;
Input A: $30.
B: 2.
C: $30.
D: $30.
E: 2.;
Run;
Columns A to C are definitely present for each record, but columns D and E may or may not be present. The file is delimited such that there is a pipe between two values but not after the end of a line.
An example is shown below.
A1|4|C1|D1|5A2|7|C2A3|3|C3|D3|1A4 ...
How do I read this file where the last two inputs are optional? I don't want to use Proc Import because it's a large file and the columns A, B and C have a range of values that Proc Import isn't able to handle very well (as per my experience).
My current code causes some of the values from column A to be pulled into column D when there are missing values.

Usually, there's some indication of when E ends. Some EOL character (maybe one you can't see). If so, then you can use that as a delimiter.
If there's no way to tell when E ends, then you will need to figure it out from business logic (what kind of value exists in E and in A). If E is always 2 characters long, then you can process the field using the _INFILE_ variable. Something like this might work, if the total line length is <= 32767:
data want;
infile 'h:\test.txt' dlm='|'; *infile with dlm statement as usual;
input @@; *open input pointer;
call scan(_infile_,5*_N_,pos,len,'|'); *find where the (5*N)th field is;
_infile_ = cat(substr(_infile_,1,pos+1),'|',substr(_infile_,pos+2));
*Insert a | there;
input a: $30.
b: 11.
c: $5.
d: $5.
e: 2.
@@
; *note the @@ holding the input pointer;
run;

Related

SAS Infile Statement Not Getting Observations

I need to use the INFILE statement to read a file called np_traffic.csv, name the table traffic2, and only import a column called ReportingDate as a character.
My current code is giving me the error:
"The data set WORK.TRAFFIC2 may be incomplete. When this step was
stopped there were 0 observations and 1 variables."
DATA traffic2;
INFILE "E:/Documents/Week 2/np_traffic.csv"
dsd firstobs=2;
INPUT ReportingDate $;
RUN;
Let's assume that you really have a delimited text file, which is what a CSV file is, instead of the spreadsheet you pictured in the photograph in your post. To read the 6th field in a line you need to first read the first 5 fields. That does not mean you need to use the values read from those fields.
data traffic2;
infile "E:/Documents/Week 2/np_traffic.csv"
dsd firstobs=2
;
length dummy $1 ReportingDate $12;
input 5*dummy ReportingDate ;
drop dummy;
run;
I would suggest trying it this way:
data traffic2;
drop a b c d e g;
infile 'E:\Documents\Week 2\np_traffic.csv' dsd dlm='<Insert your delimiter>' firstobs=2;
input a $ b $ c $ d $ e $ ReportingDate :$12. g $;
run;
https://documentation.sas.com/?docsetId=lestmtsref&docsetTarget=n1rill4udj0tfun1fvce3j401plo.htm&docsetVersion=9.4&locale=en

SAS Export Issue as it is giving additional double quote

I am trying to export SAS data into CSV. The SAS dataset name is abc and its contents are:
LINE_NUMBER DESCRIPTION
524JG 24PC AMEFA VINTAGE CUTLERY SET "DUBARRY"
I am using following code.
filename exprt "C:/abc.csv" encoding="utf-8";
proc export data=abc
outfile=exprt
dbms=tab;
run;
The output is:
LINE_NUMBER DESCRIPTION
524JG "24PC AMEFA VINTAGE CUTLERY SET ""DUBARRY"""
So there are double quotes before and after the description, and additional double quotes appear before and after the word DUBARRY. I have no clue what's happening. Can someone help me resolve this and make me understand what exactly is happening here?
expected result:
LINE_NUMBER DESCRIPTION
524JG 24PC AMEFA VINTAGE CUTLERY SET "DUBARRY"
There is no need to use PROC EXPORT to create a delimited file. You can write it with a simple DATA step. If you want to create your example file then just do not use the DSD option on the FILE statement. But note that, depending on the data you are writing, you could create a file that cannot be properly parsed because of extra unprotected delimiters. Also, you will have trouble representing missing values.
Let's make a sample dataset we can use to test.
data have ;
input id value cvalue $ name $20. ;
cards;
1 123 A Normal
2 345 B Embedded|delimiter
3 678 C Embedded "quotes"
4 . D Missing value
5 901 . Missing cvalue
;
Essentially PROC EXPORT is writing the data using the DSD option. Like this:
data _null_;
set have ;
file 'myfile.txt' dsd dlm='09'x ;
put (_all_) (+0);
run;
Which will yield a file like this (with pipes replacing the tabs so you can see them).
1|123|A|Normal
2|345|B|"Embedded|delimiter"
3|678|C|"Embedded ""quotes"""
4||D|Missing value
5|901||Missing cvalue
If you just remove the DSD option from the FILE statement, you get the file shown below instead.
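The step is identical apart from the FILE statement; a minimal sketch:
data _null_;
set have ;
file 'myfile.txt' dlm='09'x ; *no DSD: values are written unquoted and unprotected;
put (_all_) (+0);
run;
It produces: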
1|123|A|Normal
2|345|B|Embedded|delimiter
3|678|C|Embedded "quotes"
4|.|D|Missing value
5|901| |Missing cvalue
Notice how the second line looks like it has 5 values instead of 4, making it impossible to know how to split it into 4 values. Also notice how the missing values now take up at least one character each.
Another way would be to run a data step to convert the normal file that PROC EXPORT generates into the variant format that you want. This might also give you a place to add escape characters to protect special characters if your target format requires them.
data _null_;
infile normal dsd dlm='|' truncover ;
file abnormal dlm='|';
do i=1 to 4 ;
if i>1 then put '|' @;
input field :$32767. @;
field = tranwrd(field,'\','\\');
field = tranwrd(field,'|','\|');
len = lengthn(field);
put field $varying32767. len @;
end;
put;
run;
You could even make this datastep smart enough to count the number of fields on the first row and use that to control the loop so that you wouldn't have to hard code it.
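A sketch of that refinement, assuming the first row carries the full set of fields: COUNTW with the 'm' and 'q' modifiers counts empty and quoted fields, and the count taken from row 1 drives the loop.
data _null_;
infile normal dsd dlm='|' truncover ;
file abnormal dlm='|';
retain nfields;
input @; *load the line and hold the pointer;
if _n_=1 then nfields=countw(_infile_,'|','mq'); *count fields once on the first row;
do i=1 to nfields ;
if i>1 then put '|' @;
input field :$32767. @;
field = tranwrd(field,'\','\\');
field = tranwrd(field,'|','\|');
len = lengthn(field);
put field $varying32767. len @;
end;
put;
run;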

Construct SAS dataset based on file containing metadata

I have two text files, one containing raw data with no headers and another containing the associated column names and lengths. I'd like to use these two files to construct a single SAS dataset containing the data from one file with the column names and lengths from the other.
The file containing the data is a fixed-width text file. That is, each column of data is aligned to a particular column of the text file, padded with spaces to ensure alignment.
datafile.txt:
John   45   Has two kids
Marge  37   Likes books
Sally  29   Is an astronaut
Bill   60   Drinks coffee
The file containing the metadata is tab-delimited with two columns: one with the name of the column in the data file and one with the character length of that column. The names are listed in the order in which they appear in the data file.
metadata.txt:
Name 7
Age 5
Comments 15
My goal is to have a SAS dataset that looks like this:
Name | Age | Comments
-------+------+-----------------
John | 45 | Has two kids
Marge | 37 | Likes books
Sally | 29 | Is an astronaut
Bill | 60 | Drinks coffee
I want every column to be character with the length specified in the metadata file.
There has to be a better way than my naive approach, which is to construct a length statement and an input statement using the imported metadata, like so:
/* Import metadata */
data meta;
length colname $ 50 collen 8;
infile 'C:\metadata.txt' dsd dlm='09'x;
input colname $ collen;
run;
/* Construct LENGTH and INPUT statements */
data _null_;
length lenstmt inptstmt $ 1000;
retain lenstmt inptstmt '' colstart 1;
set meta end=eof;
call catx(' ', lenstmt, colname, '$', collen);
call catx(' ', inptstmt, cats('@', colstart), colname, '$ &');
colstart + collen;
if eof then do;
call symputx('lenstmt', lenstmt);
call symputx('inptstmt', inptstmt);
end;
run;
/* Import data file */
data datafile;
length &lenstmt;
infile 'C:\datafile.txt' dsd dlm='09'x;
input &inptstmt;
run;
This gets me what I need, but there has to be a cleaner way. One could run into trouble with this approach if insufficient space is allocated to the variables storing the length and input statements, or if the statement lengths exceed the maximum macro variable length.
Any ideas?
What you're doing is a fairly standard method of doing this. Yes, you could check things a bit more carefully; I would allocate $32767 for the two statements, for example, just to be cautious.
There are some ways you can improve this, though, that may take some of your worries away.
First off, a common solution is to build this at the row level (as you do) and then use PROC SQL to create the macro variable. This has a larger maximum length limitation than the data step method (the data step maximum is $32767 if you don't use multiple variables; SQL's is double that, at 64 KiB).
proc sql;
select catx(' ',colname,'$',collen)
into :lenstmt separated by ' '
from meta; *and similar for inputstmt;
quit;
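A hedged sketch of the companion query for the INPUT statement: META carries no column offsets, so derive a running COLSTART first (META2 and COLSTART are names invented here).
data meta2;
set meta;
retain colstart 1;
output; *output before incrementing, so row 1 starts at column 1;
colstart + collen;
run;
proc sql;
select catx(' ', cats('@',colstart), colname, '$', '&')
into :inptstmt separated by ' '
from meta2;
quit;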
Second, you can surpass the 64k limit by writing to a file instead of to a macro variable. Take your data step, and instead of accumulating and then using call symput, write each line out to a temp file (or two). Then %include those files instead of using the macro variable in the input datastep - yes, you can %include in the middle of a datastep.
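A minimal sketch of that %INCLUDE variant, assuming the same META layout (the filerefs LENF and INPTF are invented here). One generated clause is written per row, and the two temp files are spliced into the middle of the final step:
filename lenf temp;
filename inptf temp;
data _null_;
set meta;
retain colstart 1;
file lenf;
put colname '$ ' collen; *one LENGTH clause per row, e.g. Name $ 7;
file inptf;
put '@' colstart colname '$ &'; *one INPUT clause per row, e.g. @1 Name $ &;
colstart + collen;
run;
data datafile;
infile 'C:\datafile.txt' truncover;
length
%include lenf;
;
input
%include inptf;
;
run;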
There are other methods, but these two are the most common and should work for most use cases. Some other methods include call execute, run_macro, or using file open commands to work with the file directly. In general, those are either more complicated or less useful than the most common two, although certainly they are also acceptable solutions and not uncommon to see in practice.
call execute should be able to help.
data _null_;
retain start 0;
infile 'c:\metadata.txt' dlm='09'x missover end=eof;
if _n_=1 then do;
start=1;
call execute('data final_output; infile "c:\datafile.txt" truncover; input ');
end;
input colname :$8.
collen :8.
;
call execute( '@'|| put(start,8. -l) || ' ' || colname || ' $'|| put(collen,8. -r) ||'. ' );
start=sum(start,collen);
if eof then do;
call execute(';run;');
end;
run;
proc contents data=final_output;run;

Input delimited is8601 datetimes in SAS

Is it possible to input the following with a single input statement without producing any erroneous missing values? I believe I've got the right informat for the first 19 characters of each of the datetime variables below, but I can't seem to find a way to make SAS ignore the extraneous characters and skip to the next delimiter before trying to input the next variable.
data _null_;
infile datalines dlm=',' dsd missover;
input a is8601dt19. b is8601dt19. c $4.;
format a b is8601dt.;
put a= b= c=;
datalines;
2013-01-19T09:40:39.812+0000,2013-01-19T09:40:39.812+0000,text
,2013-01-19T09:40:39.812+0000,text
,,text
;
run;
My workaround for the time being is to initially input as $28. and then use the substr and input functions, but I suspect that there may be a more direct/efficient way.
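For reference, a sketch of that workaround, assuming the datetime always occupies the first 19 characters of the field when it is present (E8601DT19. is used here, matching the answer below):
data _null_;
infile datalines dlm=',' dsd missover;
length a_c b_c $28;
input a_c $ b_c $ c $;
a = input(substr(a_c,1,19),??e8601dt19.); *?? suppresses notes on blank fields;
b = input(substr(b_c,1,19),??e8601dt19.);
format a b is8601dt.;
put a= b= c=;
datalines;
2013-01-19T09:40:39.812+0000,2013-01-19T09:40:39.812+0000,text
,2013-01-19T09:40:39.812+0000,text
,,text
;
run;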
I don't see a clear way to do this. The problem is that these are not actually ISO8601 values, at least according to SAS.
SAS recognizes two versions of ISO: Basic (B8601DZ.) and Extended (E8601DZ.). Basic has no colons/dashes/etc., and Extended has all possible ones.
Basic: 20130119T094039812+0000
Extended: 2013-01-19T09:40:39.812+00:00
(see the doc page on ISO date/times for more information)
Yours are an amalgamation of the two, and SAS doesn't seem to like that.
Add to that the fact that you're reading this from a delimited file, and I don't see a good single pass solution. I think your method is fine. You can probably skip the substring, but otherwise you will be stuck.
Your input above doesn't work because you can't use informats in list input like that; if you prepend a : then the informat will be used, but unfortunately you can't actually use it to limit the incoming text to the informat's width (not sure why; it can in other contexts). That is:
input a :e8601dz19. b :e8601dz19. c :$4.;
That's legal, but doesn't help you, as it tries to stick the 28-character value into that 19-wide informat (I'm not sure if it's right-aligning it, but it's definitely not left-aligning it like it would in formatted input). You're using formatted input but mean to use modified list input, hence the issue.
You could do this, if you didn't have all that missing data, for example:
data _null_;
infile datalines dlm=',' dsd missover;
informat a b e8601dt19.;
input
@1 a e8601dt19.
@"," b e8601dt19.
@"," c $4.;
format a b is8601dt.;
put a= b= c=;
datalines;
2013-01-19T09:40:39.812+0000,2013-01-19T09:40:39.812+0000,text
,2013-01-19T09:40:39.812+0000,text
, ,text
;
run;
That works for the first line, basically reading the first 19 into a and then skipping to the next comma and reading the b. But notice it fails for every other row, because it eats up too many characters for a. Anything you do to adapt this to work (which probably could be done) is going to be far more than you'd do just substringing.
I would do this:
data _null_;
infile datalines dlm=',' dsd missover;
informat a b e8601dt19.;
length a_c b_c $28;
input
a_c $ b_c $ c $;
a = input(a_c,??e8601dt19.);
b = input(b_c,??e8601dt19.);
format a b is8601dt.;
put a= b= c=;
datalines;
2013-01-19T09:40:39.812+0000,2013-01-19T09:40:39.812+0000,text
,2013-01-19T09:40:39.812+0000,text
, ,text
;
run;
No substring necessary; the informat width of 19 already shortens the value. Or splice the : into the time-zone offset programmatically if you would like the TZ information used.
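A sketch of that second suggestion, assuming the values always have the fixed 28-character layout shown above, so the colon belongs between positions 26 and 27 (the extended informat E8601DZ. can then read the full value, offset included):
data _null_;
infile datalines dlm=',' dsd missover;
length a_c $28;
input a_c $;
a_fix = cats(substr(a_c,1,26),':',substr(a_c,27)); *+0000 becomes +00:00;
a = input(a_fix,??e8601dz29.);
format a is8601dt.;
put a=;
datalines;
2013-01-19T09:40:39.812+0000
;
run;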

How to read only select columns in infile statement [duplicate]

This seems like it should be straightforward, but I can't find how to do this in the documentation. I want to read in a comma-delimited file, but it's very wide, and I just want to read a few columns.
I thought I could do this, but the @ pointer seems to point to columns of the text rather than the column numbers defined by the delimiter:
data tmp;
infile 'results.csv' delimiter=',' MISSOVER DSD lrecl=32767 firstobs=2;
input @1 id @5 name $;
run;
In this example, I want to read just what is in the 1st and 5th columns based on the delimiter, but SAS is reading what is in positions 1 and 5 of the text file. So if the first line of the input file starts like this
1234567, "x", "y", "asdf", "bubba", ... more variables ...
I want id=1234567 and name=bubba, but I'm getting name=567, ".
I realize that I could read in every column and drop the ones I don't want, but there must be a better way.
Indeed, @ does point to a column of the text, not the delimited column. The only method using standard input I've ever found is to read into a junk variable (blank below), i.e.
input
id
blank $
blank $
blank $
name $
;
and then drop blank.
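Put together for the example above, a sketch (the junk variable only needs a length of 1):
data tmp;
infile 'results.csv' dlm=',' missover dsd lrecl=32767 firstobs=2;
length blank $1;
input id blank $ blank $ blank $ name $;
drop blank;
run;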
However, there is a better solution if you don't mind writing your input differently.
data tmp;
infile datalines;
input @; *open input pointer;
id = scan(_INFILE_,1,',');
name = scan(_INFILE_,5,',');
put _all_;
datalines;
12345,x,y,z,Joe
12346,x,y,z,Bob
;;;;
run;
It makes formatting slightly messier, as you need PUT or INPUT function calls for each variable that you do not want in plain character format, but it might be easier depending on your needs.
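For example (a sketch), to store id as a number rather than text, wrap the SCAN call in the INPUT function:
id = input(scan(_INFILE_,1,','),best12.);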
You can skip fields fairly efficiently if you know a bit of INPUT statement syntax; note the use of (3*dummy)(:$1.). Reading just one byte per skipped field should also improve performance slightly.
data tmp;
infile cards DSD firstobs=2;
input id $ (3*dummy)(:$1.) name $;
drop dummy;
cards;
id,x,y,z,name
1234567, "x", "y", "asdf", "bubba", ... more variables
1234567, "x", "y", "asdf", "bubba", ... more variables
run;
proc print;
run;
One more option that I thought of when answering a related question from another user.
filename tempfile temp;
data _null_;
set sashelp.cars;
file tempfile dlm=',' dsd lrecl=32767;
put (Make--Wheelbase) ($);
run;
data mydata;
infile tempfile dlm=',' dsd truncover lrecl=32767;
length _tempvars1-_tempvars100 $32;
array _tempvars[100] $;
input (_tempvars[*]) ($);
make=_tempvars[1];
type=_tempvars[3];
MSRP=input(_tempvars[6],dollar8.);
keep make type msrp;
run;
Here we use an array of effectively temporary (can't actually BE temporary, unfortunately) variables, and then grab just what we want specifying the columns. This is probably overkill for a small file - just read in all the variables and deal with it - but for 100 or 200 variables where you want just 15, 18, and 25, this might be easier, as long as you know which column you want exactly. (I could see using this in dealing with census data, for example, if you have it in CSV form. It's very common to just want a few columns most of which are way down 100 or 200 columns from the starting column.)
You have to take some care with the lengths for the temporary array (they have to be as long as the longest column you care about!), and you have to make sure you pick the right column numbers, since you won't find out about a mistake unless it's obvious from the data.