SAS EG: including a local file while running on a server

How can I use an %INCLUDE statement to reference a local file when I'm connected to a server?
When I run the code I get an error saying that there is no such file on the server.
How do I tell EG to look for the file locally?
I have ways to get around this, like telling the code to run on the 'local' server and then
copying the results over, or adding the code I need as a code node in EG.
Neither of those is very practical when working with old code that uses %INCLUDE heavily.

In Enterprise Guide, programs (code nodes) have an option to choose the execution environment. Choose LOCAL for the program that executes the %INCLUDE of the local program. Then move the resulting data set(s) to the remote server via SAS/CONNECT (RSUBMIT) and put them in a permanent location. The subsequent programs can have your remote server chosen as the environment and will be able to act on the data you previously moved up to the remote server.
If this seems plausible... I can expand the answer.
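A rough sketch of that approach, assuming your site licenses SAS/CONNECT as described above (the paths, the connection name MYSERVER, and the data set names are all placeholders):

/* Code node set to run on LOCAL, so %INCLUDE resolves against your PC's file system */
%include 'C:\projects\legacy\build_dataset.sas';   /* hypothetical local program */

/* Move the result to the remote server via SAS/CONNECT */
signon myserver;                                    /* hypothetical connection name */
rsubmit myserver;
   libname perm '/sasdata/project';                 /* hypothetical permanent server location */
   proc upload data=work.result out=perm.result;
   run;
endrsubmit;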

If you are looking to reference local SAS data sets (sas7bdat files), you can use the Tasks->Data->Upload Data Files to Server task. This will capture the step of copying your data set files to the SAS workspace as part of your process flow.

Related

Changing SAS working directory on Citrix machine

I'm running SAS EG 7.1 on a Citrix machine at work. I'm trying to do a very large SQL pull and keep running out of disk space. Our server drive has plenty of storage, so I assume temporarily switching the working directory to a folder on that drive would avoid this error?
The error says "insufficient disk space... file is damaged. I/O processing did not complete. You may be able to execute the SQL statement successfully if you allocate more space to the WORK library." I don't seem to have access to the config file, and I've tried to change it programmatically with no luck. I am already saving the resulting data set to a server folder with a LIBNAME statement, but I think the temporary files created in WORK during the process are too much to handle. Any help?
I've tried both:
x 'cd "Q:\folder"';
and
data _null_;
   rc = system('cd "Q:\folder"');
   if rc = 0 then putlog 'Command successful';
   else putlog 'Command failed';
run;
These run fine, but in the log it still says my working directory is unchanged:
SYMBOLGEN: Macro variable SASWORKLOCATION resolves to "C:\Users\user\AppData\Local\Temp\SEG5432\SAS Temporary
Files\citrixMachineDrive\Prc2/"
The current directory is relevant for file references and such, but it's not related to your Work or Util directories.
WORK and UTIL are only settable at startup, and are either set in the arguments for sas.exe or in the sasv9.cfg configuration file. How you solve your particular problem depends on whether EG is connecting to a SAS server, or if it's just running locally. If it's running locally, you may be able to modify the startup options. If it's connecting to a server, you will have to talk to your SAS administrator.
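For reference, if it turns out you do control a local install, the startup options mentioned above look roughly like this, either on the sas.exe command line or in sasv9.cfg (the paths here are placeholders):

-WORK "D:\SASWork"
-UTILLOC "D:\SASUtil"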
However, it's highly unlikely you would want to use the network folder as your WORK folder. SAS expects high speed disk connected directly to the machine for WORK; if you set it to a network drive, your performance would be extremely poor.
Also note that "C:" is on the machine the SAS server is running on - it might be the same machine, if you're running SAS locally (on Citrix), but if it's a remote SAS server then the C:\ is on that server.

How to mount a file via CloudFoundry manifest similar to Kubernetes?

With Kubernetes, I used to mount a file containing feature-flags as key/value pairs. Our UI would then simply get the file and read the values.
Like this: What's the best way to share/mount one file into a pod?
Now I want to do the same with the manifest file for CloudFoundry. How can I mount a file so that it will be available in /dist folder at deployment time?
To add more information: when we mount a file, the UI can later download the file and read its contents. We are using React, and any call to the server has to go through an Apigee layer.
The typical approach to mounting files into a CloudFoundry application is called Volume Services. This takes a remote file system like NFS or SMB and mounts it into your application container.
I don't think that's what you want here. It would probably be overkill to mount in a single file. You totally could go this route though.
That said, CloudFoundry does not have a built-in equivalent of the Kubernetes approach where you take your configuration and mount it as a file. With CloudFoundry you do have a few similar options. They are not exactly the same, though, so you'll have to determine whether one of them works for your needs.
You can pass config through environment variables (or through user-provided service bindings, but that comes through an environment variable VCAP_SERVICES as well). This won't be a file, but perhaps you can have your UI read that instead (You didn't mention how the UI gets that file, so I can't comment further. If you elaborate on that point like if it's HTTP or reading from disk, I could perhaps expand on this option).
If it absolutely needs to be a file, your application could read the environment variable contents and write them to disk when it starts. If your application isn't able to do that (like if you're using Nginx), you could include a .profile script at the root of your application that reads the variable and generates the file. For example: echo "$CFG_VAR" > /dist/file, or whatever you need to do to generate that file.
A couple more notes on environment variables. There are limits to how much information can go in them (sorry, I don't know the exact value off the top of my head, but I think it's around 128K). They are also not great for binary configuration, in which case you'd need to base64-encode your data first.
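As a minimal sketch of that idea (the variable name FEATURE_FLAGS and the file name are made up; adjust to whatever your UI expects):

# manifest.yml
applications:
- name: my-ui
  env:
    FEATURE_FLAGS: '{"newCheckout": true, "betaSearch": false}'

# .profile at the root of the pushed app, runs before the app starts
echo "$FEATURE_FLAGS" > dist/feature-flags.json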
You can pull the config file from a config server and cache it locally. This can be pretty simple. The first thing your app does when it starts is to reach out and download the file, place it on the disk and the file will persist there for the duration of your application's lifetime.
If you don't have a server-side application like if you're running Nginx, you can include a .profile script (can be any executable script) at the root of your application which can use curl or another tool to download and set up that configuration.
You can replace "config server" with an HTTP server, Git repository, Vault server, CredHub, database, or really any place you can durably store your data.
Not recommended, but you can also push your configuration file with the application. This would be as simple as including it in the directory or archive that you push. This has the obvious downside of coupling your configuration to the application bits that you push. Depending on where you work, the policies you have to follow, and the tools you use this may or may not matter.
There might be other variations you could use as well. Loading the file in your application when it starts or through a .profile script is very flexible.

Perforce (AWS Lightsail Windows Server Instance) - Unreal Engine Source Control - Move Perforce Depot

I'll give a bit of background on the setup we have and why. Currently my friend and I want to collaborate on an Unreal Engine project. To do this I've set up an Amazon Lightsail instance running Windows Server. I've then installed Perforce onto this server and added two users. Both of us are able to connect to this server from our local machines (great, I thought!). Our goal was to attach two 'virtual' disks of 32GB to this server via Lightsail's storage option. I've formatted these disks and they are detected as disks D and E on the server. We wanted two depots, one on disk E and one on disk D, the reason being that the C disk was only 20GB (12GB free after Windows).
I have tried multiple things (not got much hair left after this) to try and map the depots created to each HDD but have had little success and need your wisdom!
I've followed both the process indicated in this support guide (https://community.perforce.com/s/article/2559) via CMD as well as changing the depot storage location in P4Admin on the Server via RDP to the virtual disks D and E respectively.
An example change is from //UE_WIP/... to D:/UE_WIP/... (I have created folders UE_WIP and UE_LIVE on each HDD).
When I open up P4V on my local machine I'm able to happily connect (as per screenshot) and set the workspace to my local machine (it detects both depots). This is where we're getting stuck. I then open up a new Unreal Engine project and save it to the following local directory, E:/DELETE/Perforce/Test/, and open up source control (see image 04). This is great; it detects the workspace and everything connects to the server.
When I click submit to source control I get 'Failed Checking Source Control'. When I try adding via P4V by manually marking the new content folder for add, I get 'file(s) not in client view'.
All we want is the ability to send an Unreal Engine project up to either the WIP drive depot or the Live drive depot. To resolve this, does it require:
Two different workspaces (one set up for LIVE and one for WIP)?
Do we need to add some local folders to our directory, e.g. E:/DELETE/Perforce/UE_WIP & E:/DELETE/Perforce/UE_LIVE?
Do we need to tweak something on the Perforce Server?
Do we need to tweak something in Unreal Engine?
Any and all help would be massively appreciated.
Best,
Ben
https://imgur.com/a/aaMPTvI - Image gallery of issues
Your screenshots don't show how (or if?) you set up your local workspace (i.e. the thing that tells Perforce where the files are on your local workstation).
See: https://www.perforce.com/perforce/r13.1/manuals/p4v/Defining_a_client_view.html
The Perforce server acts as a layer of abstraction between the backend storage (i.e. the depots you've set up) and the client machines where you actually do your work. The location of the depot files doesn't matter at all to the client (any more than, say, the backend filesystem of a web server matters to your web browser); all that matters is how you set up the workspace, which is a simple matter of "here's where my local files are" (the Root) and "here's how my local paths map to depot paths" (the View).
You get the "file not in view" error if you try to add a local file to the depot and it's not in the View you've defined. The fix there is generally to simply fix the Root and/or View to accurately describe where you local files are. One View can easily map to multiple depots (as long as they're on a single server).
(edit)
Specifically, in your case, all of the files you're trying to add are under the path:
E:\DELETE\Perforce\Test\Saved\...
Since you've set up your workspace as:
Client: bsmith
Root: E:\DELETE\Perforce\bsmith
View:
//WIP/... //bsmith/WIP/...
//LIVE/... //bsmith/LIVE/...
then your bsmith workspace consists of these two local paths:
E:\DELETE\Perforce\bsmith\WIP\...
E:\DELETE\Perforce\bsmith\LIVE\...
The files you're trying to add aren't even under your Root, much less under either of the View mappings. That's what the "not in client view" error messages mean.
If you want to add the files where they are, modify your Root and View so that you define your workspace as being where your files are; if you want to have the files in one of the local directories above that you've already defined as being where your workspace lives, you'll have to move them there. If you put your files in bsmith\WIP, then when you add them they'll go to the WIP depot; if you put them in bsmith\LIVE, then they'll go to the LIVE depot, per your View.
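For instance, one possible fix, assuming this particular project should go to the WIP depot and stay where it is on disk (a sketch; adjust paths to taste):

Client: bsmith
Root: E:\DELETE\Perforce\Test
View:
//WIP/... //bsmith/...

With that spec, everything under E:\DELETE\Perforce\Test\ (including Saved\...) maps into the WIP depot; you could add a second View line pointing LIVE at a separate subfolder if you want both depots reachable from one workspace.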
Either way, once they're in your workspace, you can add them to the depot. Simple as that!

Use a Windows share in a Libname statement with SAS

When running SAS through EGuide locally I can successfully declare a libname as follows:
libname winlib '\\pc\folder\';
When using a SAS server this is not possible and I have to resort to using a Copy Files task.
For interest:
I believe this is because the SAS server runs on Unix; is this correct?
What I've tried:
libname test '//pc/folder/';
libname test2 'smb://pc/folder/';
The other option I can think of is mounting the drive on the SAS server, but this isn't viable for me as this is for ad-hoc cases.
The question:
How would I correctly declare a libname to \\pc\folder for the SAS server?
A few notes:
I cannot run locally as I have to connect to a few DBs, and I don't want to use a PROC UPLOAD or DOWNLOAD for this.
If you want SAS to read a directory then the SAS process needs to be able to see the directory.
What most companies do is create a shared directory that can be mounted by both the SAS machine and your PC then you can reference the files directly from both, just using different paths.
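To illustrate the shared-directory suggestion, the same hypothetical share is simply referenced by the path each machine sees:

/* submitted to the Unix SAS server */
libname proj '/mnt/team/project';

/* submitted to local SAS on your PC */
libname proj '\\fileserver\team\project';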
Otherwise if you want SAS to use a file that EG can see but SAS cannot then I suggest asking EG to upload the file. There are custom tasks available for EG to upload binary files.
Another method would be to create SAS code to connect to a machine that can see the files and pull the files over. Perhaps using FTP or SFTP protocol.
Unfortunately there is no way to do this in the manner I wish (directly using the remote path in the libname statement in a Unix environment).
You should be able to do this with a Windows SAS server, and you can do it with the local Windows SAS server.
This is due to how Unix works, meaning one would have to mount the share.
That isn't feasible as an ad-hoc method.
I do wish Unix had a more direct way of accessing remote directories.
That being said, alternatively one can do one of the following:
Write the data to a server-local directory, even WORK or HOME, then copy the data to a local directory (by using the Copy Files task in Enterprise Guide, for example, or copying the files manually if you have access to the location from your local PC).
Do the SAS processing locally and fetch the needed data over the network (this isn't feasible if you need DB access, which can't be done from the local server).
Get whoever is in charge of your SAS servers to set up a mount that's accessible from both your machine and the SAS server.
Use SAS PC Files Server to accomplish this for MS Office files.
Set up an FTP server on your local machine and use a filename with the FTP option to read/write to it; see How do I read raw data via FTP in SAS? for an idea.
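A minimal sketch of that last option (host, credentials, and file names are placeholders):

filename src ftp 'flags.csv' cd='/shared/folder'
         host='mypc.example.com' user='me' pass='XXXXXXXX';

data work.flags;
   infile src dsd truncover;
   input flag :$32. value :$8.;
run;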
Thanks to @Tom for the suggestions.

SAS Enterprise Guide: Set a prompt both in Local and Metadata Server

I have a prompt that generates a macro variable (SET_ENVIRONMENT) used across my whole project.
I then run my programs one by one in my process flow.
The only problem is that some of them are local (when I want to upload data), and some of them are remote (using the SAS Metadata Server).
A solution would be to run my SET_PROMPT program twice, once on my local server and once on my SAS Metadata Server.
I was wondering if it is possible to set both at the same time?
If this works like a SAS/CONNECT session, then what you might do is link your prompt to a program in the local session. Have that program be responsible for using %SYSLPUT to assign the variable in the server session, and have that program always be the first program you execute. That way you don't need two prompts (which is annoying for the user) but get the variable assigned in two places at once.
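A minimal sketch of that first program, assuming a SAS/CONNECT session named MYSERVER and that &SET_ENVIRONMENT has already been populated by the prompt:

/* Runs in the LOCAL session, where the prompt has populated &SET_ENVIRONMENT */
signon myserver;                                        /* hypothetical connection name */

/* Push the local value into the remote session's macro symbol table */
%syslput set_environment=&set_environment. / remote=myserver;

/* Verify it arrived on the server */
rsubmit myserver;
   %put NOTE: SET_ENVIRONMENT on the server is &set_environment.;
endrsubmit;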