Determining Server Context (Workspace Server vs Stored Process Server)

I'd like to conditionally execute code depending on whether I'm in a Workspace or Stored Process server context.
I could do this by testing the existence of an automatic STP variable, e.g. _METAPERSON, but this wouldn't be very robust.
Assuming I already have a metadata connection, what is the best way to check my server type?

A bulletproof way would be to create a macro variable that is initialised by the autoexec or config file in the required server context.
Of course, this would only work if you have access and permission to modify files stored in the SAS configuration folder.
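As a hypothetical sketch (the SERVERCONTEXT variable name is invented for illustration), the Stored Process Server autoexec could set a flag that application code then tests:

/* in the Stored Process Server autoexec (assumes you may edit it) */
%let servercontext=STP;

/* in application code */
%macro where_am_i;
  %if %symexist(servercontext) %then %put NOTE: Running in the &servercontext context;
  %else %put NOTE: Not running on the Stored Process Server;
%mend where_am_i;
%where_am_i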

Hurrah - there is, in fact, an automatic macro variable that does just this: SYSPROCESSMODE (available since SAS 9.4).
Extract from the documentation:
SYSPROCESSMODE is a read-only automatic macro variable, which contains
the name of the current SAS session run mode or server type, such as
the following:
SAS DMS Session
SAS Batch Mode
SAS Line Mode
SAS/CONNECT Session
SAS Share Server
SAS IntrNet Server
SAS Workspace Server
SAS Pooled Workspace Server
SAS Stored Process Server
SAS OLAP Server
SAS Table Server
SAS Metadata Server
Being an automatic variable, it is of course read only.
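A minimal sketch of conditional logic driven by it (assuming the documented values above):

%macro run_by_context;
  %if %superq(SYSPROCESSMODE) = SAS Stored Process Server %then %do;
    %put NOTE: Stored Process Server logic goes here;
  %end;
  %else %if %superq(SYSPROCESSMODE) = SAS Workspace Server %then %do;
    %put NOTE: Workspace Server logic goes here;
  %end;
  %else %put NOTE: Running as: &sysprocessmode;
%mend run_by_context;
%run_by_context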

The Stored Process Server will preset the _PROGRAM macro variable with the name of the program that is running. I do not know if this macro variable is read only in the STP execution context.
But as you say, a program in the workspace context could set a _PROGRAM macro variable of its own.
For workspace sessions, look for the _CLIENTAPP macro variable.
I am unaware of a function to call or an immutable system option that can be examined. Try PROC OPTIONS in both contexts and see what pops out. An OBJECTSERVERPARMS value, if reported, is a list of name=value pairs; one of them would be server= and may differentiate.
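A quick sketch of that comparison (run it in each context and diff the logs):

/* print the OBJECTSERVERPARMS option value to the log */
proc options option=objectserverparms;
run;

/* check for context-specific macro variables before relying on them */
%put _CLIENTAPP exists: %symexist(_CLIENTAPP);
%put _PROGRAM exists: %symexist(_PROGRAM);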

Related

What does "retain persistent mapping variable value" mean in Informatica when importing a mapping (XML file) into the repository

I'm trying to import a mapping from a folder to my DEV environment. I already have a mapping of the same name in the DEV environment, so I chose Replace in the conflicts window that popped up. After that, it asks me to check a box that says "retain persistent mapping variable value". Does it mean:
1. It will retain persistent variable values from the XML import file that I'm trying to import into DEV, or
2. It will retain persistent variable values from the same-named mapping that is already in the DEV repository?
Which is it? Please help.
I have tried the import described above.
If you check it, Informatica will retain the persistent variable values already in the target repository/folder.
For example, suppose you are migrating a mapping that has a sequence generator with an initial value of 5000 in production and 10 in dev. If you check "retain persistent mapping variable value" while migrating from Prod to Dev, the sequence generator's initial value will still be 10 after the migration, because the value in Dev was retained.
This is mostly used when migrating from a lower environment to a higher one. If the value in prod is higher, it is always a good idea to retain that higher, correct value, so we check this option.
In your scenario I would say not to retain the dev values, because they will not be consistent with prod.

How can I use RStudio data in Shiny Server

On my local machine, RStudio + Shiny work properly.
Now I have Shiny Server installed on Linux, but I do not know how to make the data generated in RStudio available to it.
How can I get Shiny Server to read it?
I don't even know what keywords to search for.
Thanks
Importing data on the server
As I see it, there are two ways to supply data in this situation.
The first one is to upload the data to the server where your shiny apps are hosted. This can be done via ssh (wget) or something like FileZilla. You can put your data in the same folder as the app and then access it with relative paths. For example, if you have:
- app-folder
  - app.R
  - data.rds
  - more_data.csv
You can use readRDS("data.rds") or readr::read_csv2("more_data.csv") in app.R to use the data in the app.
The second option is to use fileInput inside your app. This gives you the option to upload data from your local machine in the GUI. The data will then be put onto the server temporarily. See ?shiny::fileInput.
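A minimal sketch of the fileInput approach (the input IDs and the CSV assumption are illustrative):

library(shiny)

ui <- fluidPage(
  fileInput("upload", "Choose a CSV file"),
  tableOutput("preview")
)

server <- function(input, output) {
  output$preview <- renderTable({
    req(input$upload)                  # wait until a file is uploaded
    read.csv(input$upload$datapath)    # read the temporary copy on the server
  })
}

shinyApp(ui, server)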
Exporting data from RStudio
There are numerous ways to do this. You can use save to write your whole workspace to disk. If you just want to save single objects, saveRDS is quite handy. If you want to save datasets (for example data.frames), you can also use readr::write_csv or similar functions.
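For instance, a sketch using the built-in mtcars data:

# in RStudio, before uploading to the server
saveRDS(mtcars, "data.rds")                # single object
readr::write_csv(mtcars, "more_data.csv")  # plain-text alternative

# later, in app.R on the server
dat <- readRDS("data.rds")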

SAS stored process to a web service link

I have created a SAS Stored Process and I need to attach it to a web service link, which I intend to use as an input to a Python program.
I would really appreciate help in creating a web service from a SAS Stored Process.
Thank you,
Nishant
You need to create a Stored Process in SAS Management Console and assign it to use the Stored Process Server (not the Workspace Server). Ensure it has the 'streaming output' checkbox selected.
The SAS code behind this Stored Process should then send the output (that you wish to receive in your Python program) to the _webout fileref, e.g.:
data _null_;
file _webout;
put 'Hello python!';
run;
The %stpbegin and %stpend macros should NOT be used.
To reference the Stored Process just call the URL with your Stored Process name & path in the _program parameter, as follows:
http://[yourMachineName]:8080/SASStoredProcess/do?_PROGRAM=/Your/MetadataPath/YourSTPName
The easiest way is to use the SAS Stored Process Web Application, which allows you to call a stored process via a URL. You should read http://support.sas.com/documentation/cdl/en/stpug/68399/HTML/default/viewer.htm#n0mbwll43n6sw3n1jhcfnx51i8ze.htm.
From there, use the Python requests library.
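A minimal Python sketch of that call (host, metadata path, and credentials are placeholders; the authentication scheme depends on how your environment is configured):

import requests

# placeholders: substitute your own host, metadata path, and credentials
url = "http://yourMachineName:8080/SASStoredProcess/do"
params = {"_program": "/Your/MetadataPath/YourSTPName"}

resp = requests.get(url, params=params, auth=("sasdemo", "password"))
resp.raise_for_status()
print(resp.text)  # whatever the STP wrote to _webout, e.g. 'Hello python!'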

DI studio: SAS attempting to remote signon to local for transformations

Whenever I use a transformation directly on an Oracle table in DI Studio, the transformation auto-generates a piece of code as below (a remote sign-on to local):
options comamid=tcp;
%let local=<servername> 7551;
data _null_;
signon local authdomain="DefaultAuth";
run;
It throws errors similar to the below:
User authentication failed at metadata server
A communication link was not set up...
My question is: earlier, when I used the Table Loader or Delete transform, such remote sign-on-to-local code was not there. Why is SAS trying to remote connect using the DefaultAuth metadata object on this server?
How do I stop SAS from doing this?
Your library is probably assigned to a different application server than the one you are using in DI Studio. Try switching application servers (bottom right of DI Studio) or assign a different application server to the library (library > properties > assign tab).

Why does LOAD DATA LOCAL INFILE work from the CLI but not in my application?

The problem:
My C++ application connects to a MySQL server, reads the first/header line of each db export .txt file, builds a CREATE TABLE statement to prepare for the import, and executes it against the database (no problem with that, the table appears just as intended). But when I try to execute the LOAD DATA LOCAL INFILE to import the data into the newly created table, I get the error "The used command is not allowed with this MySQL version". Yet this works on the CLI! When I execute the command using mysql -u <user> -p<password> -e "LOAD DATA LOCAL INFILE 'myfile.txt' INTO TABLE mytable FIELDS TERMINATED BY '|' LINES TERMINATED BY '\r\n';" it works flawlessly.
The Situation:
My company receives a large quantity of database exports (160 files / 10 GB of .txt files that are '|' delimited) from our vendors on a monthly basis, which have to replace the old vendor lists. I am working on a smallish C++ app to deal with it on my work desktop. The application is meant to set up the required tables, import the data, then execute a series of intermediate queries against multiple tables to assemble information into a series of final tables, which are then exported and uploaded to the production environment for use in the company's e-commerce website.
My Setup:
Ubuntu 12.04
MySQL Server v. 5.5.29 + MySQL Command Line client
Linux GNU C++ Compiler
libmysqlcppconn is installed and I have the required mysqlconn library linked in.
I have already overcome/tried the following issues/combinations:
1.) I have already discovered (the hard way) that LOAD DATA [LOCAL] INFILE statements must be enabled in the config -- I have the "local-infile" option set in the configuration files for both client and server (fixed by updating /etc/mysql/my.cnf with "local-infile" entries for the client and server; see the config sketch after this list. NOTE: I could have used --local-infile=1 when restarting the mysql server, but this is my local dev environment, so I just wanted it turned on permanently).
2.) LOAD DATA LOCAL INFILE seems to fail to perform the import (from the CLI) if the target import file does not have execute permissions enabled (fixed with chmod +x target_file.txt)
3.) I am using the mysql root account in my application code (because its my localhost, not production and this particular program will never run on a production server.)
4.) I have tried executing my compiled binary program using the sudo command (no change, same error "The used command is not allowed with this MySQL version")
5.) I have tried changing the ownership of the binary file from my normal login to root (no change, same error "The used command is not allowed with this MySQL version")
6.) I know libmysqlcppconn is working because I am able to connect and perform the CREATE TABLE call without a problem, and I can run other queries and execute statements.
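For reference, a sketch of the my.cnf entries described in item 1 (option group names follow standard MySQL configuration):

# /etc/mysql/my.cnf
[mysql]
local-infile=1

[mysqld]
local-infile=1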
What am I missing? Any suggestions? Thanks in advance :)
After much diligent trial and error with the /etc/mysql/my.cnf file (I know this is a permissions issue because it works on the command line but not from the connector), and after much googling and finding some back-alley tech support posts, I've come to conclude that the MySQL C++ connector did not (for whatever reason) implement the ability for developers to enable the local-infile=1 option from the C++ connector.
Apparently some people have been able to hack/fork the MySQL C++ connector to expose the functionality, but no one posted their source code -- they only said it worked. Apparently there is a workaround in the MySQL C API: after you initialize the connection, you would use this:
mysql_options( &mysql, MYSQL_OPT_LOCAL_INFILE, 1 );
which apparently allows LOAD DATA LOCAL INFILE statements to work with the MySQL C API.
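A minimal C sketch of that workaround (host, credentials, table, and file names are placeholders; the option must be set before connecting):

#include <mysql.h>
#include <stdio.h>

int main(void) {
    MYSQL mysql;
    unsigned int enable_local_infile = 1;

    mysql_init(&mysql);
    /* must be set before mysql_real_connect() */
    mysql_options(&mysql, MYSQL_OPT_LOCAL_INFILE, &enable_local_infile);

    if (!mysql_real_connect(&mysql, "localhost", "root", "password",
                            "mydb", 0, NULL, 0)) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(&mysql));
        return 1;
    }

    if (mysql_query(&mysql,
            "LOAD DATA LOCAL INFILE 'myfile.txt' INTO TABLE mytable "
            "FIELDS TERMINATED BY '|' LINES TERMINATED BY '\\r\\n'"))
        fprintf(stderr, "load failed: %s\n", mysql_error(&mysql));

    mysql_close(&mysql);
    return 0;
}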
Here are some reference articles that lead me to this conclusion:
1.) How can I get the native C API connection structure from MySQL Connector/C++?
2.) Mysql 5.5 LOAD DATA INFILE Permissions
3.) http://osdir.com/ml/db.mysql.c++/2004-04/msg00097.html
Essentially, if you want the LOAD DATA LOCAL INFILE functionality from a programmatic connector API, you have to use the MySQL C API or hack/fork the existing MySQL C++ API to expose the connection structure. Or just stick to executing LOAD DATA LOCAL INFILE from the command line :(