SAS - Unzip all .gz files in a folder - sas

SAS Program - I am trying to unzip all .gz files in a folder and read them into a dataset using a FILENAME statement. However, I am not able to make it work.
I have .gz files named as follows in a folder:
EQY_US_ALL_TRADE_20210701
EQY_US_ALL_TRADE_20210702
EQY_US_ALL_TRADE_20210705
EQY_US_ALL_TRADE_20210706
EQY_US_ALL_TRADE_20210707
.....
.....
EQY_US_ALL_TRADE_20210729
EQY_US_ALL_TRADE_20210730
and so on.
Note that the folder does NOT have files for all 31 days; there are files only for business days.
See my code below:
/* Change working directory to where all the files are located */
data _null_;
rc=dlgcdir("C:\EQY_US_ALL_TRADE_202107");
put rc=;
run;
/* using filename statement unzip all files and read them into "f1" */
filename f1 zip "EQY_US_ALL_TRADE_202107*" gzip lrecl=500;
/* This code worked when I used the actual name of one of the files (e.g. "EQY_US_ALL_TRADE_20210702"), but it does not work when I use the wildcard to run through all of them */

You can read the file names in the folder using the DREAD function, and then use a dynamic INFILE statement with the FILEVAR= option to specify the gunzip stream that will be INPUT from.
Example:
All gzipped files are presumed to be data only, with no header row. The compressed files are all in a single folder and have the .gz file extension.
data want(keep=filename a b c);
  length folderef $8 filename $256 fullname $512;
  rc = filename(folderef, 'c:\temp\trade_data');
  did = dopen(folderef);

  do _n_ = 1 to dnum(did);
    filename = dread(did, _n_);
    if scan(filename, -1, '.') ne 'gz' then continue;
    fullname = catx('/', pathname(folderef), filename);

    do while (1);
      infile archive zip filevar=fullname gzip dlm=',' eof=nextfile;
      input a b c;
      output;
    end;
    nextfile: ;
  end;

  rc = dclose(did);
  stop;
run;
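For comparison, the same pattern (enumerate the .gz files in a folder and stream-decompress each one into a single table, tagging rows with their source file) can be sketched in Python. The folder path and the three-column CSV layout are assumptions carried over from the SAS example above, not part of the original question:

```python
import csv
import glob
import gzip
import os

def read_gz_folder(folder):
    """Decompress every .gz file in `folder` and collect its CSV rows,
    tagging each row with the source file name (like the SAS example)."""
    rows = []
    for path in sorted(glob.glob(os.path.join(folder, "*.gz"))):
        # gzip.open in text mode streams the decompressed content
        with gzip.open(path, "rt", newline="") as fh:
            for a, b, c in csv.reader(fh):
                rows.append((os.path.basename(path), a, b, c))
    return rows
```

Because the wildcard expansion happens in glob rather than in the file handle itself, missing business days are simply never visited, mirroring the DREAD loop.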


Modifying the datatype in SAS XML Map File

I need to control the data type when reading XML data in SAS. The XML data are written and accessed using the XML libname engine in SAS.
XML File :
<Test>
<origin>YYYY</origin>
<NumToUse>50503</NumToUse>
<AcctNum>3-219HHJLJ</AcctNum>
<Status>1</Status>
<TADIG>AUSVF</TADIG>
<LocationNumber>1234567891011</LocationNumber>
<Phnumber>1234567890</Phnumber>
<ReferenceNumber>0044E71146</ReferenceNumber>
Map File :
<COLUMN name="LocationNumber">
<PATH syntax="XPath">/Test/LocationNumber</PATH>
<TYPE>character</TYPE>
<DATATYPE>string</DATATYPE>
<LENGTH>11</LENGTH>
</COLUMN>
<COLUMN name="PhNumber">
<PATH syntax="XPath">/Test/PhNumber</PATH>
<TYPE>character</TYPE>
<DATATYPE>string</DATATYPE>
<LENGTH>15</LENGTH>
</COLUMN>
<COLUMN name="ReferenceNumber">
<PATH syntax="XPath">/Test/ReferenceNumber</PATH>
<TYPE>numeric</TYPE>
<DATATYPE>double</DATATYPE>
</COLUMN>
Since ReferenceNumber is treated as numeric, I am not able to get the value for that particular column. It gives me:
ERROR: Data contains invalid content for float datatype. Invalid content is 0044E71146
How can I read the data into a SAS dataset? Suggestions please.
The map file created by the XMLV2 library can be modified before using the libref to copy data into your SAS session.
There are many ways to process the engine-generated map file (which is an XML file itself):
XSL transform (Proc XSL)
Someone (not me) well versed in XSLT language could likely write a short program to perform the modifications
Programmatic manipulation of the parsed xml document
Textual manipulation of the xml file
Hand editing
Text processing program
The map file :
defines which xml nodes are to become data set columns
defines the type, length, format, label, etc, of the column
defines which tables are to be created
defines which tables are to contain which columns
For the case of wanting to change the XMLV2 engine's interpretation of numeric columns to character columns, the mapfile needs to be modified (aka transformed).
<COLUMN> nodes have children nodes that, at a minimum, need to be changed
from
<TYPE>numeric</TYPE>
to
<TYPE>character</TYPE>
Considerations
There are other considerations you might need to make when transforming numbers to characters en masse, such as:
Is this numeric a date?
Should the number be rendered through its format first?
SAS users are comfortable with the concept of a single 'data set' which contains both data and metadata (in the header). For XML data, the data is in one file and the metadata (the map file) is in another.
When a SAS data set is exported and only the data XML is kept, the metadata gets lost. Making the round trip back to SAS means the metadata must be guessed at via automap.
Example code for "Programmatic manipulation of the parsed xml document"
This example changes all column definitions for numeric columns to character. The XPath /SXLEMAP/TABLE/COLUMN[not(@class='ORDINAL') and ./TYPE[text()='numeric']] is used to identify the column definitions that will be changed. No further special considerations are made.
Create xml data file to be processed
%macro createXmlV2(data=,folder=);
%local lib mem;
%let syslast = &data;
%let lib = %scan(&syslast,1,.);
%let mem = %scan(&syslast,2,.);
FILENAME XMLOUT "&folder./&data..xml";
LIBNAME XMLOUT XMLV2;
proc copy in=&lib out=xmlout;
select &mem;
run;
LIBNAME XMLOUT clear;
FILENAME XMLOUT clear;
%mend;
%* Something to play with;
%* Create an XMLV2 generated xml file containing the data set;
%createXmlV2 (data=sashelp.citiday, folder=/temp)
%createXmlV2 (data=sashelp.citimon, folder=/temp)
%createXmlV2 (data=sashelp.baseball, folder=/temp)
%*;
Transform automap file so numeric columns become character columns
%macro prepXmlRefsFor(file=);
FILENAME XMLFILE "&file";
FILENAME MAPOUT "&file..map";
FILENAME MAPOUT2 "&file..map.transformed";
%mend;
%prepXmlRefsFor(file=/temp/sashelp.citiday.xml)
%prepXmlRefsFor(file=/temp/sashelp.baseball.xml)
%prepXmlRefsFor(file=/temp/sashelp.citimon.xml)
%*;
%* create an automap file using XMLV2 library engine;
LIBNAME XMLFILE XMLV2 XMLTYPE=XMLMAP XMLMAP=MAPOUT AUTOMAP=REPLACE ;
%* parse and rewrite the generated map file ;
%* change ALL non-ordinal, non-character COLUMN nodes to indicate character type wanted;
proc groovy;
submit
"%sysfunc(pathname(MAPOUT))"
"%sysfunc(pathname(MAPOUT2))"
"20"
;
import javax.xml.parsers.*;
import javax.xml.xpath.*;
import javax.xml.transform.*;
import javax.xml.transform.dom.*;
import javax.xml.transform.stream.*;
// get parameter from submit line;
map_in=args[0]; // the automap
map_out=args[1]; // the automap transformed
length=args[2]; // length of character value for columns previously considered numeric
// parse mapfile;
doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(map_in);
xPath = XPathFactory.newInstance().newXPath();
// select the set of automap nodes that define non-ordinal, numeric columns
columns = xPath.evaluate(
"/SXLEMAP/TABLE/COLUMN" +
"[" +
"not(#class='ORDINAL')" +
" and " +
"./TYPE[text()='numeric']" +
"]", doc, XPathConstants.NODESET);
for (column in columns) {
type = xPath.evaluate("TYPE", column, XPathConstants.NODE);
dtyp = xPath.evaluate("DATATYPE", column, XPathConstants.NODE);
leng = xPath.evaluate("LENGTH", column, XPathConstants.NODE);
type.setTextContent("character");
dtyp.setTextContent("string");
if (leng == null)
column.appendChild(leng = doc.createElement("LENGTH"));
leng.setTextContent(length);
}
// rewrite mapfile with updated nodes
TransformerFactory.newInstance().newTransformer().transform(
new DOMSource(doc), new StreamResult(new File(map_out))
);
println "Programmatic transformation of mapfile completed.";
endsubmit;
quit;
* resubmit libname so libref uses transformed mapfile;
LIBNAME XMLFILE XMLV2 XMLTYPE=XMLMAP XMLMAP=MAPOUT2 AUTOMAP=REUSE;
proc copy in=xmlfile out=work;
run;
LIBNAME XMLFILE clear;
FILENAME XMLFILE clear;
FILENAME MAPOUT clear;
FILENAME MAPOUT2 clear;
One thing that became obvious after examining the 'round-trip' outcome is that xml files created by XMLV2, when re-read, will create separate column-named tables for any columns that contain missing values. These tables would have to be merged to recreate the original data set.
You might understand that the auto-mapping feature built into the XMLV2 engine chooses to define ReferenceNumber as numeric, instead of character, because the only value the parser examines is 0044E71146, and it presumes the E marks scientific (exponent) notation for a number.
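That exponent-notation guess is easy to reproduce outside SAS; most numeric parsers will happily accept the string rather than reject it as text. Python is used here purely as an illustration of the parsing behavior:

```python
import math

# "0044E71146" looks like 44 x 10^71146 to a numeric parser, so it is
# accepted as a float (overflowing to infinity) instead of being
# rejected as character data.
value = float("0044E71146")
assert math.isinf(value)

# A smaller exponent shows the same interpretation without overflow.
assert float("0044E7") == 440000000.0
```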
The solution is to let the libname automap the data xml file and then update the map file xml to meet your requirements.
Example code:
XMLV2 engine creates MAPFILE, and Proc GROOVY is used to XML parse and rewrite the mapfile.
FILENAME XMLFILE "/temp/test.xml" ;
FILENAME MAPFILE "/temp/test.xml.map" ;
* parse data test.xml and write mapfile test.xml.map;
LIBNAME XMLFILE XMLV2 XMLTYPE=XMLMAP XMLMAP=MAPFILE AUTOMAP=REPLACE ;
* parse and rewrite mapfile;
* change desired column nodes to be string/character of a specified length;
proc groovy;
submit "%sysfunc(pathname(mapfile))";
import java.io.File;
import java.io.IOException;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.XPathFactory;
import javax.xml.xpath.XPathConstants;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NamedNodeMap;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.SAXException;
// get parameter from submit line
mapfile=args[0];
// parse mapfile
doc = DocumentBuilderFactory
.newInstance()
.newDocumentBuilder()
.parse(
mapfile
)
;
xPath = XPathFactory
.newInstance()
.newXPath()
;
void setCharacter(column,length) {
// find column node and child nodes important to XMLV2 mapfile usage
node = xPath.evaluate("/SXLEMAP/TABLE/COLUMN[@name='"+column+"']", doc, XPathConstants.NODE);
type = xPath.evaluate("TYPE", node, XPathConstants.NODE);
dtyp = xPath.evaluate("DATATYPE", node, XPathConstants.NODE);
leng = xPath.evaluate("LENGTH", node, XPathConstants.NODE);
if (type != null && !type.getTextContent().equals("character")) { type.setTextContent("character") }
if (dtyp != null && !dtyp.getTextContent().equals("string")) { dtyp.setTextContent("string") }
if (leng == null) {
leng = doc.createElement("LENGTH");
leng.setTextContent(length.toString());
node.appendChild(leng);
}
else
if (!leng.getTextContent().equals(length.toString())) {
leng.setTextContent(length.toString());
}
}
// Make sure these two columns will be character, if not already
setCharacter("ReferenceNumber",25);
setCharacter("Phnumber", 20);
// rewrite mapfile with updated nodes
TransformerFactory
.newInstance()
.newTransformer()
.transform(
new DOMSource(doc),
new StreamResult(new File(mapfile))
);
endsubmit;
quit;
* resubmit libname so libref uses now updated mapfile;
LIBNAME XMLFILE XMLV2 XMLTYPE=XMLMAP XMLMAP=MAPFILE;
proc copy in=xmlfile out=work;
run;
Note: You could textually parse and rewrite the map file; however, there is a small chance the map file will not meet your text-parsing expectations.
Let SAS make the map file.
FILENAME XMLFILE "/v/temp/test.xml" ;
FILENAME MAPFILE "/v/temp/test.xml.map" ;
LIBNAME XMLFILE XMLV2 XMLTYPE=XMLMAP XMLMAP=MAPFILE AUTOMAP=REUSE ;
Edit the file and fix the definition
<COLUMN name="ReferenceNumber">
<PATH syntax="XPath">/Test/ReferenceNumber</PATH>
<TYPE>character</TYPE>
<DATATYPE>string</DATATYPE>
<LENGTH>15</LENGTH>
</COLUMN>
You probably will want to save this in permanent and not a temporary location. Now use the fixed file to re-define the libref.
FILENAME XMLFILE "/v/temp/test.xml" ;
FILENAME MAPFILE "/v/permanent/fixed_xml.map" ;
LIBNAME XMLFILE XMLV2 XMLTYPE=XMLMAP XMLMAP=MAPFILE AUTOMAP=REUSE ;
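If hand-editing the map file is impractical, the same fix can also be scripted outside SAS. This is a minimal sketch with Python's standard ElementTree, assuming the generated map has the COLUMN/TYPE/DATATYPE/LENGTH layout shown above (the function name is my own, not part of any SAS tooling):

```python
import xml.etree.ElementTree as ET

def force_character(mapfile, column, length):
    """Rewrite one COLUMN definition in an XMLV2 map file so the
    column is read as character/string with the given length."""
    tree = ET.parse(mapfile)
    for col in tree.getroot().iter("COLUMN"):
        if col.get("name") != column:
            continue
        col.find("TYPE").text = "character"
        col.find("DATATYPE").text = "string"
        leng = col.find("LENGTH")
        if leng is None:
            # LENGTH is absent for numeric columns; add it
            leng = ET.SubElement(col, "LENGTH")
        leng.text = str(length)
    tree.write(mapfile)
```

As with the Groovy version, the edited map is then fed back to the XMLV2 libname with AUTOMAP=REUSE so the engine does not overwrite it.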

How to send a .JSON file through API (POST) on EG?

I've been trying to send a JSON file through SAS Enterprise Guide, but I believe I'm making a mistake in the code.
Below is my code:
>> GENERATE JSON FILE:
data teste30;
set MATABLES.TEMPJSON;
RESP=cat(CUSTOMERID,"|",RESPTRACKING_CD);
CODSMS="WXS";
MOBILE="5511111111";
DATAPARA="09/05/2019";
DATALIMI="10/05/2019";
REMETENTE="TF";
run;
filename code temp;
data _null_;
set teste30;
file code ;
put 'WRITE OPEN OBJECT;'
/ 'WRITE VALUES "TP_SMS" ' CODSMS :$quote. ';'
/ 'WRITE VALUES "NM_REMETENTESMS" ' REMETENTE :$quote. ';'
/ 'WRITE VALUES "NR_TELEFONECELULARSMS" ' MOBILE :$quote. ';'
/ 'WRITE VALUES "TX_MENSAGEMSMS" ' msgtext :$quote. ';'
/ 'WRITE VALUES "DT_PARAENVIOSMS" ' DATAPARA :$quote. ';'
/ 'WRITE VALUES "DT_LIMITEENVIOSMS" ' DATALIMI :$quote. ';'
/ 'WRITE VALUES "DS_CHAVEORIGEMSMS" ' RESP :$quote. ';'
/ 'WRITE CLOSE;'
;
run;
proc json out="%sysfunc(getoption(WORK))/TEST.json" pretty nokeys nosastags;
write open array; /* container for all the data */
%include code;
write close; /* container for all the data */
run;
My JSON seems OK. The problem occurs when I try to send it by calling the API:
>> CALLING API POST
FILENAME POSTA "C:\TEMP\POSTA.TXT";
FILENAME code2 "%sysfunc(getoption(WORK))/test.json";
PROC HTTP
URL="HTTPS://*********/SMS/INCLUISMS"
CT="APPLICATION/JSON"
IN=code2
METHOD="POST"
OUT=POSTA;
HEADERS
"HOST"="*****"
"AUTHORIZATION"="BEARER xxxxxxxxxxxx"
"CONTENT-TYPE"="APPLICATION/JSON"
"CONTENT-LENGTH"="xx"
RUN;
%echoFile(fn=code2);
%PUT HTTP STATUS CODE = &SYS_PROCHTTP_STATUS_CODE. : &SYS_PROCHTTP_STATUS_PHRASE.;
The log on EG return this error:
ERROR: The tcpSockRead call failed. The system error is 'The connection was reset by a peer.'.
ERROR: Connection has been closed.
I tried putting the physical path and filename on IN= (for example, IN="C:\TEMP\Test.json") without success.
Has anyone had a similar experience? How can I send the JSON file using IN=? Is it possible?
Thanks guys!
When I send a body with only one parameter (via datalines), it works normally and the API call succeeds. Below is the code:
/* Put the body of the JSON content in external file */
filename json_in temp;
data _null_;
file json_in;
input;
put _infile_;
datalines;
{ "Tp_Email":"CCA001" }
;
run;
filename posta "c:\temp\posta.txt";
filename post1 temp;
proc http
url="https://***********/*****/***/ObterEmailTexto"
method="POST"
ct="application/x-www-form-urlencoded"
in=json_in
out=posta;
headers
"Host"="****"
"Authorization"="bearer &TOKEN"
"Content-Type"="application/json"
"Content-Length"="**"
"CD_Login"="*****";
Run;
/*
data _null_;
infile posta;
input;
put _infile_;
run;
*/
%put HTTP Status code = &SYS_PROCHTTP_STATUS_CODE. : &SYS_PROCHTTP_STATUS_PHRASE.;
When I tried to pass the JSON file as the body, the error occurred:
FILENAME POSTA "C:\TEMP\POSTA.TXT";
FILENAME code2 "%sysfunc(getoption(WORK))/test.json";
PROC HTTP java_http
URL="HTTPS://**********/*****/SMS/INCLUISMS"
ct="application/x-www-form-urlencoded"
IN=code2
METHOD="POST"
OUT=POSTA;
HEADERS
"HOST"="*****"
"AUTHORIZATION"="BEARER &TOKEN"
"CONTENT-TYPE"="APPLICATION/JSON"
"CONTENT-LENGTH"="**"
"CD_LOGIN"="******";
RUN;
%PUT HTTP STATUS CODE = &SYS_PROCHTTP_STATUS_CODE. : &SYS_PROCHTTP_STATUS_PHRASE.;
Without the java_http option: ERROR: The tcpSockRead call failed. The system error is 'The connection was reset by a peer.'. ERROR: Connection has been closed.
With java_http options: ERROR: Statement HEADERS is not supported in Java execution mode.
Tks. again!!!
TF
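For what it's worth, the same request shape (a JSON file as the raw body, a bearer token, an explicit Content-Type) can be sketched with Python's standard library; the URL and token below are placeholders, not the real endpoint. One thing the sketch makes explicit: the Content-Length header is derived from the body, and a hard-coded placeholder value like "xx" is exactly the kind of malformed header that can make a server reset the connection:

```python
import urllib.request

def build_post(url, json_path, token):
    """Build a POST request whose body is the raw bytes of a JSON file.
    urllib computes Content-Length from len(body); never hard-code it."""
    with open(json_path, "rb") as fh:
        body = fh.read()
    return urllib.request.Request(
        url,
        data=body,          # file content becomes the request body
        method="POST",
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/json",
        },
    )

# req = build_post("https://example.invalid/SMS/INCLUISMS", "test.json", "xxx")
# urllib.request.urlopen(req)  # would actually send the request
```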

How do I concatenate two different files into one file using Python

Input: I have more than 100 sample files. Each sample has two files, one with a *.column extension and one with a *.datatypes extension.
The *.column file holds the column names; the *.datatypes file holds the datatype descriptions.
What I need is one output file per sample.
The output file should have the column names along with their datatypes.
Currently I am getting the data of all 100 files merged and saved into one file.
Eg: file_1:
column names datatypes
id int
name string
Eg: file_2:
column names datatypes
id int
name string
I got the output with the column names and datatypes of all files merged into one single file.
What I need is to get each sample's files merged separately.
for name in os.listdir("C:\Python27"):
if name.endswith(".column"):
for file in name:
file = os.path.join(name)
joined = file+ ".joined"
with open(joined,"w") as fout:
filenames = glob.glob('*.column')
for filename in filenames:
with open(filename) as f1:
file_names = glob.glob('*.datatypes')
for filename in file_names:
with open(filename) as f2:
for line1,line2 in zip(f1,f2):
x = ("{0} {1} \n".format(line1.rstrip(),line2.rstrip()))
y = x.strip()
fout.write(y.strip() + ',\n')
Please assist me.
Hopefully the below will work. This is on the understanding that each *.column file has a corresponding *.datatypes file; if not, the code will throw a FileNotFoundError.
import os

for colname in os.listdir(r"C:\Python27"):
if colname.endswith(".column"):
print('Processing:' + colname)
file = os.path.splitext(colname)[0]
joined = file+ ".joined"
with open(joined,"w") as fout:
with open(colname) as f1:
datname = file+'.datatypes'
with open(datname) as f2:
for line1,line2 in zip(f1,f2):
x = ("{0} {1}".format(line1.rstrip(),line2.rstrip()))
y = x.strip()
fout.write(y.strip() + ',\n')
print('Finished writing to :'+joined)
I test ran this with a few sample input files, as below:
file1.column
date_sev
pos
file1.datatypes
timestamp
date
file2.column
id
name
file2.datatypes
int
string
file3.column
id
name
file3.datatypes
int
string
When I run the file I get the below output in the console
Processing:file1.column
Finished writing to :file1.joined
Processing:file2.column
Finished writing to :file2.joined
Processing:file3.column
Finished writing to :file3.joined
And the output files I get are:
file1.joined
date_sev timestamp,
pos date,
file2.joined
id int,
name string,
file3.joined
id int,
name string,
Also, if you want to improve the output syntax of the files, I would make the changes below...
From
x = ("{0} {1}".format(line1.rstrip(),line2.rstrip()))
To
x = ("{0},{1}".format(line1.rstrip(),line2.rstrip()))
From
fout.write(y.strip() + ',\n')
To
fout.write(y.strip() + '\n')
I left the formatting as is from your initial version in my original solution posted in the beginning.
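The same pairing can also be written a little more defensively with pathlib, checking for the partner file instead of letting open() raise; `C:\Python27` stays as the folder from the question, and the function name is my own:

```python
from pathlib import Path

def join_pairs(folder):
    """For each *.column file, write a *.joined file pairing each column
    name with the datatype on the same line of the *.datatypes partner."""
    for colfile in sorted(Path(folder).glob("*.column")):
        datfile = colfile.with_suffix(".datatypes")
        if not datfile.exists():
            print("Skipping " + colfile.name + ": no matching .datatypes file")
            continue
        pairs = zip(colfile.read_text().splitlines(),
                    datfile.read_text().splitlines())
        colfile.with_suffix(".joined").write_text(
            "".join("{0} {1}\n".format(c.strip(), d.strip()) for c, d in pairs))

# join_pairs(r"C:\Python27")
```

Note that zip stops at the shorter of the two files, so a truncated .datatypes file silently drops trailing columns; that behavior is inherited from the original code.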

Read HDF files into MATLAB from a list.dat file containing the names of these files

I have a list.dat file that contains the names, in order, of about 1000 hdf files. I need to read these into MATLAB one by one in order and input the data contained in them into a matrix. How do I make MATLAB read in the hdf files? I know how to make MATLAB read one file, but when it's only the filenames in a list (in the same directory as the actual files), I don't know how to make it read in the variable.
Here's what I have so far:
% Read in sea ice concentrations
% AMSR-E data format: 'asi-s6250-20110101-v5.hdf';
% AMSR2 data format: 'asi-AMSR2-s6250-20120724-v5.hdf';
% SSMI data format: 'asi-SSMIS17-s6250-20111001-v5.hdf';
fname = 'list.dat';
data = double(hdfread(fname, 'ASI Ice Concentration'));
This currently does not work. It throws an error saying,
??? Error using ==> hdfquickinfo>findInsideVgroup at 156
HDF file '/home/AMSR_SeaIceData_Antarctic/list.dat' may be invalid or corrupt.
Error in ==> hdfquickinfo at 34
[found, hinfo] = findInsideVgroup ( filename, dataname );
Error in ==> hdfread>dataSetInfo at 363
hinfo = hdfquickinfo(filename,dataname);
Error in ==> hdfread at 210
[hinfo,subsets] = dataSetInfo(varargin{:});
The code works when I just put in the actual filename of the hdf file for fname.
Thanks.

Registering a SAS library in Metadata - programmatically

I am writing a deployment script, and would like to programmatically register a simple (and empty) BASE library, such as the one below, in Metadata.
libname MYLIB 'C:\temp';
Sample XML syntax can be found here. I'm just not sure how to combine that with proc metadata to perform the update (e.g. how do the metadata IDs get generated?)
@user2173800 Did you ever receive a solution to the question above?
Here is what I came up with:
The code below creates a SAS library called BASE_Metalib under the metadata folder /Shared Data/Libraries/BASE_Metalib (this folder is assumed to already exist in metadata). The code also registers all tables under the directory defined for this library. It uses metadata DATA step functions to interface with the metadata server.
/*Creating a Metadata Library with BASE Engine and register all the tables under it */
options metaserver="taasasf2"
metaport=8561
metauser="testuser"
metapass="test123"
metarepository="Foundation";
%Let MetaLibName=BASE_Metalib; /* Name of the SAS Library with BASE Engine to be created */
data _null_;
length luri uri muri $256;
rc=0;
Call missing(luri,uri,muri);
/* Create a SASLibrary object in the Shared Data folder. */
rc=metadata_newobj("SASLibrary",
luri,
"&MetaLibname.",
"Foundation",
"omsobj:Tree?#Name=%bquote('&Metalibname.')",
"Members");
put rc=;
put luri=;
/* Add PublicType,UsageVersion,Engine,Libref,IsDBMSLibname attribute values. */
rc=metadata_setattr(luri,
"PublicType",
"Library");
put rc=;
put luri=;
rc=metadata_setattr(luri,
"UsageVersion",
"1000000.0");
put rc=;
put luri=;
rc=metadata_setattr(luri,
"Engine",
"BASE");
put rc=;
put luri=;
rc=metadata_setattr(luri,
"Libref",
"SASTEST");
put rc=;
put luri=;
rc=metadata_setattr(luri,
"IsDBMSLibname",
"0");
put rc=;
put luri=;
/* Set Directory Object via UsingPackages Association for the SAS Library Object */
rc=metadata_newobj("Directory",
uri,
"");
put uri=;
rc=metadata_setassn(luri,
"UsingPackages",
"Replace",
uri);
put rc=;
rc=metadata_setattr(uri,"DirectoryName","/shrproj/files/ANA_AR2_UWCRQ/data");
put rc=;
/* Set Server Context Object via DeployedComponents Association for the SAS Library Object */
rc=metadata_getnobj("omsobj:ServerContext?@Name='SASApp'",1,muri);
put muri=;
rc=metadata_setassn(luri,
"DeployedComponents",
"Append",
muri);
put rc=;
Run;
proc metalib;
omr (library="&Metalibname.");
report;
run;
I finally got around to this - there are a few things to consider!
1) Making sure all the necessary objects exist (to avoid orphan metadata data)
2) Checking to ensure that objects are successfully created
3) Checking to avoid creating the library twice (idempotence)
4) General preference to avoid data step metadata functions and the corresponding risk of infinite loops
The XML part of the program looks like this:
/**
* Prepare the XML and create the library
*/
data _null_;
file &frefin;
treeuri=quote(symget('treeuri'));
serveruri=quote(symget('serveruri'));
directoryuri=quote(symget('directoryuri'));
libname=quote(symget('libname'));
libref=quote(symget('libref'));
IsPreassigned=quote(symget('IsPreassigned'));
prototypeuri=quote(symget('prototypeuri'));
/* escape description so it can be stored as XML */
libdesc=tranwrd(symget('libdesc'),'&','&amp;');
libdesc=tranwrd(libdesc,'<','&lt;');
libdesc=tranwrd(libdesc,'>','&gt;');
libdesc=tranwrd(libdesc,"'",'&apos;');
libdesc=tranwrd(libdesc,'"','&quot;');
libdesc=tranwrd(libdesc,'0A'x,'&#10;');
libdesc=tranwrd(libdesc,'0D'x,'&#13;');
libdesc=quote(trim(libdesc));
put "<AddMetadata><Reposid>$METAREPOSITORY</Reposid><Metadata> "/
'<SASLibrary Desc=' libdesc ' Engine="BASE" IsDBMSLibname="0" '/
' IsHidden="0" IsPreassigned=' IsPreassigned ' Libref=' libref /
' UsageVersion="1000000" PublicType="Library" name=' libname '>'/
' <DeployedComponents>'/
' <ServerContext ObjRef=' serveruri "/>"/
' </DeployedComponents>'/
' <PropertySets>'/
' <PropertySet Name="ModifiedByProductPropertySet" '/
' SetRole="ModifiedByProductPropertySet" UsageVersion="0" />'/
' </PropertySets>'/
" <Trees><Tree ObjRef=" treeuri "/></Trees>"/
' <UsingPackages> '/
' <Directory ObjRef=' directoryuri ' />'/
' </UsingPackages>'/
' <UsingPrototype>'/
' <Prototype ObjRef=' prototypeuri '/>'/
' </UsingPrototype>'/
'</SASLibrary></Metadata><NS>SAS</NS>'/
'<Flags>268435456</Flags></AddMetadata>';
run;
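The entity substitution done in that DATA step is the standard XML escaping of the five special characters. For reference, Python's standard library performs the same mapping; this is shown only to illustrate the transformation, not as part of the SAS program:

```python
from xml.sax.saxutils import escape

def escape_desc(text):
    """Apply the same five entity substitutions as the DATA step:
    & < > ' " become &amp; &lt; &gt; &apos; &quot;."""
    # escape() handles & < > by default; the extra dict adds the quotes
    return escape(text, {"'": "&apos;", '"': "&quot;"})

print(escape_desc('R&D library, "daily" <batch>'))
```

The important detail either way is that `&` must be escaped first, so that already-produced entities are not escaped a second time.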
For full code, check out the github repo.