I am running a huge program in SAS and am using proc printto to save the log elsewhere. This works for about ten hours, and then SAS switches itself back to the normal log window. It was printing to a text file, which maxed out at only 6718 KB.
Anyone know why SAS reverts to printing in the log window? Is it just me?
Thanks!!
One workaround for this would be to execute your program in batch. In batch mode there are no log limits, as the log is automatically written to a file (by default, a file in the same location as your .sas program, with the same name but a .log extension). The listing output similarly goes to the same location with a .lst extension.
This would avoid log length limitations. If you want your log printed to a specific place, you can use the -altlog startup option.
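For example (paths and names hypothetical), a UNIX batch invocation with an explicit log copy might look like:
sas /home/me/bigjob.sas -altlog /shared/logs/bigjob.log
The regular .log file is still written as usual; -altlog just writes an additional copy of the log to the location you choose.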
I have more than 50 tables in WORK. This used to run fine, but recently I have been getting errors like:
ERROR: An I/O error has occurred on file WORK.'SASTMP-000000030'n.UTILITY.
ERROR: File WORK.'SASTMP-000000030'n.UTILITY is damaged. I/O processing did not complete.
NOTE: Error was encountered during utility-file processing. You may be able to execute the SQL statement successfully if you allocate more space to the WORK library.
ERROR: There is not enough WORK disk space to store the results of an internal sorting phase.
ERROR: An error has occurred.
Does anyone know how to solve this error?
Your disk is full. If this is running on a server, ask your system administrator to investigate the problem.
If this is your desktop, find and delete unneeded files to free up space.
Clean out old SAS Work Folders
Often, old SAS Work folders do not get cleared when SAS closes. You can get back a lot of disk space by going to the path defined for SAS Work, and deleting all the old folders.
In SAS
%put %sysfunc(pathname(work));
will show you where the current WORK library is located. One level up is where all SAS Work folders are created.
On my system, that returns:
C:\Users\dpazzula\AppData\Local\Temp\SAS Temporary Files\_TD9512_GXM2L12-PAZZULA_
That means that I should look in "C:\Users\dpazzula\AppData\Local\Temp\SAS Temporary Files\" to find old folders to delete.
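If you'd rather have SAS work out that parent folder for you, here is a minimal sketch (not part of the original tip; it only uses macro functions):
%let workpath = %sysfunc(pathname(work));
%let sep = %sysfunc(findc(&workpath, /\, b)); /* position of the last slash or backslash */
%put Old WORK folders live under: %substr(&workpath, 1, %eval(&sep - 1));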
Your work space is full.
Your SAS server uses a dedicated directory where all SAS sessions store their temporary files: all files in the WORK libraries, as well as utility files used while sorting, joining, etc.
Solutions:
Have more space allocated.
Make certain to put only necessary files into WORK, clean up as you go, and close old sessions.
Run fewer processes.
Replace interim datasets with views instead, especially if you're using large source datasets:
data master / view=master;
   set lib.monthlydata20: ; /* all datasets since Jan 2000 */
run;

proc sql;
   create table want as
   select *
   from master
   where ID in (select ID from lookup);
quit;
Try compressing all datasets using this option:
OPTIONS COMPRESS=YES REUSE=YES;
This should go at the very beginning of your code. Depending on the data, it can shrink datasets dramatically (highly repetitive character data compresses best). It also tends to make code run faster: it consumes more CPU but saves I/O and disk space.
In some cases, this might not help if the compressed data sets exceed the hard disk space.
Also, move your WORK directory to the drive with the most free disk space.
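For example (drive and folder hypothetical), you can point WORK at the bigger drive at startup, either on the command line or with the same -WORK option in your sasv9.cfg:
sas.exe -work "D:\sastemp"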
Study your code.
Create a data flow diagram to determine WHEN each file is created and where it is used downstream. Find out when a data set is no longer needed and DELETE it (a sketch follows below). If you have 50 data sets, chances are that many of them are 'value-added' by a subsequent step and can go away, freeing up your work space. A cute trick is to reuse some of the data set names to keep the number of unneeded data sets in check.
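A minimal sketch of that DELETE step (dataset names hypothetical):
proc datasets library=work nolist;
   delete interim1 interim2; /* intermediates no longer needed downstream */
quit;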
Rule of thumb: leave the environment the way you found it. If there were no files in WORK to start, manually clean up after yourself. The exception is a Stored Process, which starts a completely new SAS job and will clean up after itself upon completion.
SAS can take a very long time to load an output table after you click on it in the process flow. Is there a way to cancel it? I've wasted hours waiting for tables to load, and I'm hoping there is a way to exit the "Input Data" or "Output Data" tabs and return to the process flow window.
I don't know of a way to directly stop the table from loading; in my brief tests, Ctrl+Break and Esc (the usual possibilities) don't stop it, though both do work in some places in EG. I will say that this is somewhat of a weakness in EG in general: it's not perfect at handling things in the background and allowing you to interrupt, though it has improved greatly over the years.
What you can do to avoid this being a problem, at least possibly, is to go to the Tools->Options->Data->Performance option screen and limit the "Maximum dimensions to display in the data grid" value to something well under the default 1,000,000. Change it to 50,000 or something else that is a reasonable compromise between your needs and how long it takes to connect to datasets.
Alternatively, you can prevent datasets from appearing on the process flow by setting Tools->Options->Results->Results General->"Maximum number of output data sets to add to the project" to zero. That doesn't prevent you from browsing datasets; you would just have to do it through the Servers tab on the lower left (in the default setup).
If I understand correctly, you do not want the input and output datasets to open when the process is run. In that case, go to Tools->Options->Results->Results General and uncheck the corresponding options.
I have the following code that runs inside a macro. When it is run in interactive mode, it runs absolutely fine, with no errors or warnings. That has been the case for the last two years.
The same code has now been deployed in batch mode, where it generates the warning WARNING: Apparent symbolic reference FIRSTRECCOUNT not resolved. and no value is assigned to the macro variable.
My question is, does anyone have any ideas why batch mode and interactive mode would behave differently?
Here is some more information:
The dataset is being created and it is in work library.
The dataset does get opened by the data step.
firstreccount doesn't get initialised anywhere else in the program.
I have searched the SAS community. There is a topic here, but I don't have the same errors in batch initialisation as described in the answer.
There is detailed information on the warning, but it doesn't explain why it would work in interactive mode but not in batch mode.
1735 %LET FIRSTSET = work.dataset1;
1744 DATA _NULL_;
1745 IF 0 THEN
1746 SET &FIRSTSET NOBS=X;
1747 CALL SYMPUT('FIRSTRECCOUNT' ,X);
1748 STOP;
1749 RUN;
1755 DATA _NULL_;
1756 IF 0 THEN
1757 SET &SECONDSET NOBS=X;
1758 CALL SYMPUT('SECONDRECOUNT' ,X);
1759 STOP;
1760 RUN;
WARNING: Apparent symbolic reference FIRSTRECCOUNT not resolved.
Update:
So I have attempted to replicate the error by copying the code with the warning into a separate scheduled flow, but it didn't cause any errors at all.
By the way, the original job was deployed from SAS DI Studio. I checked all lines in the user-written code nodes and made sure their length was within 80 characters, as recommended by #RawFocus and #RobertPentridge, but it didn't solve the issue.
As recommended by #data_null_, I checked VALIDVARNAME; it differed between interactive (value of "any") and batch mode (value of "V7"), but changing it didn't make any difference.
I rewrote the logic to get the number of observations by calling attrn on an open dataset (see the sketch below). This eliminated the warning, but the program would still fail, with the warning popping up in different places. That made me think Robert Partridge is correct. At the same time, I got an error about a macro not being resolved. The macro had been inserted by DI Studio to collect performance MI, even though the job wasn't meant to be collecting MI. This made me think that SAS DI Studio does not generate code correctly when deploying it, so I manually edited the deployed code to remove the offending macro call. I also spotted one line with an MD5 function call that was too long because of the number of parameters being passed to it, so I inserted some white space. And finally the problem was fixed!!
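For reference, a minimal sketch of that open/attrn approach (macro context assumed, &FIRSTSET as in the code above):
%let dsid = %sysfunc(open(&firstset));
%let firstreccount = %sysfunc(attrn(&dsid, nlobs)); /* number of logical observations */
%let rc = %sysfunc(close(&dsid));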
I still need to do something about the job, because when it gets redeployed from SAS DI it will generate the same errors again. I don't have time to look into this further at the moment.
Conclusion: what you write in SAS DI and what gets deployed can be slightly different, which can cause the syntax parser to throw errors in random places. So I will mark Robert's answer as correct, because it got me closer to solving the problem than any other answer.
The problem could be happening above the code snippet you pasted. The parser got into a funk earlier and ended up issuing a warning about code that is perfectly fine.
Check to make sure that no code within a macro is longer than ~160 characters on a single line. I try to keep my code well below that, but long lines of code can run fine interactively and fail in batch, particularly when inside a macro.
I expect your program has some small error above that does not cause SAS to go into syntax check mode when run interactively but does cause SAS to set obs to 0 and enter syntax check mode when run in batch.
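If you need the batch run to keep executing past such an error while you track it down, syntax check mode can be switched off at invocation; note this hides the underlying problem rather than fixing it:
sas myprog.sas -nosyntaxcheck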
One possibility is the limit (in batch mode) of the length of a line in your submitted SAS program:
See: http://support.sas.com/kb/15/883.html
Which version of SAS are you running?
I have 1000+ .sas files that I am trying to run in batch (all the code in each file is on one line), but SAS truncates the line to 256 characters and the code fails. Running each file individually (outside the batch) works fine.
Is there a way around this that would not force me to open each file and manually change the length of each line to something SAS can handle?
Prior to SAS 9.2, you can't extend the line limit for a batch-submitted file. You can extend the line limit in an %include, though, using the LRECL option; one option is to make your batch submission a single driver file containing a bunch of %includes.
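For example (filenames hypothetical), that driver file could be nothing but %includes, each with the LRECL option:
%include "/code/prog0001.sas" / lrecl=32767;
%include "/code/prog0002.sas" / lrecl=32767;
/* ...one %include per program... */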
If you are running SAS 9.2 or later, you have the LRECL system option, which allows up to 32767 characters per line. See the SAS documentation on the LRECL system option for more information.
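For example (program name hypothetical), set it when the batch job starts:
sas -sysin allonelines.sas -lrecl 32767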
In Enterprise Guide 4.2, is there a way to refresh your view of a file short of deleting it from the Process Flow then reopening it?
My Google-fu has failed to provide an answer (one way or the other) and my SAS admin has said he's not aware of a way (but to let him know if I find one).
A definitive "no" (from documentation) or a "yes" with example would be much appreciated.
I have a log file that's updated when I run my SAS program from the command line (outside of EG). I edit my code within EG, and I'd like to peek at the log file to see the results. Currently I have to delete the log file from my Process Flow then reopen it to see the updated log.
From your last comment on your question, it sounds like you are running a non-interactive SAS program on a server (from a PuTTY session) and looking at the log file with your EG client, is that correct? If so, there are much easier ways to watch the log file.
When you mention PuTTY, I'll assume your server is UNIX. If so, use the tail command with the -f option. For example, if your SAS program is named "myprog.sas", it will create a log file named "myprog.log", so try this command at your UNIX prompt:
tail -f myprog.log
The -f option means to continue writing output to your terminal window as lines are written to the log. When you get tired of watching (or you see the SAS "end of job" message), press Ctrl+C to quit.
EG is intended to be the application that you use to actually execute your SAS program. Running things from the UNIX prompt is outside the design (and you lose all those cool EG features), and you miss out on any site features that have been set up for you in the metadata environment.
If I'm completely off-base, please clarify your question.
When using SAS EG or SAS Studio on a SAS platform where the compute nodes are hosted on Linux machines, I always use code to see the contents of an output file created by SAS. The only requirements are that you know the full path of the file you want to browse and that you have the privileges to read it.
The simple idea is to use a generic DATA step to:
Access the file
Read line by line
Filter the data by contents, position, line number, etc.
Print the data to the SAS log so you can see it there
Here is a simple example to get you going:
First I create a file for the test. You already have one, so use yours.
/* create a test file */
data _null_;
file '/folders/myshortcuts/test/file'; /* Note I'm using fullpath */
put "The begining, :)";
put "line 2 in my file is shy and likes to hide.";
put "line 3, all good so far.";
put "line 4 in my file is to remain private.";
put "Finally the last line in my file!";
run;
Then, here is the code to read its data
data _null_;
/*--------
references which input file will be read
setting variable 'theEnd'=1 when
reaches end-of-file
Note I'm using fullpath
--------*/
infile '/folders/myshortcuts/test/file' end=theEnd;
/*--------
reads one line at a time from input file
storing it in variable _infile_
--------*/
input;
/*--------
contents of the file will be written to the log; to keep it readable,
mark where the contents of the file will follow
--------*/
if _n_=1 then
put "-----start file ----";
/*--------
filter out shy, private or unwanted data
--------*/
if _n_ ne 4; /* continue only if row number is not 4 */
if indexw(_infile_,"shy") le 0; /* continue only if the line does not contain 'shy' */
/*--------
write the data you want, complete line read in this case
--------*/
put _N_= "->" _infile_;
/*--------
mark where the data in the file has ended
--------*/
if theEnd then put "-----end file ----";
run;
Your SAS log will look like this:
NOTE: The infile '/folders/myshortcuts/test/file' is:
Filename=/folders/myshortcuts/test/file,
Owner Name=sasdemo,Group Name=sas,
Access Permission=-rw-rw-r--,
Last Modified=11Jan2017:22:42:56,
File Size (bytes)=160
-----start file ----
_N_=1 ->The begining, :)
_N_=3 ->line 3, all good so far.
_N_=5 ->Finally the last line in my file!
-----end file ----
NOTE: 5 records were read from the infile '/folders/myshortcuts/test/file'.
The minimum record length was 16.
The maximum record length was 43.
NOTE: DATA statement used (Total process time):
real time 0.02 seconds
cpu time 0.03 seconds