Transfer data from server to server in SAS without a SAS/CONNECT license

We need to transfer a SAS dataset file (.sas7bdat) from one server to another. The constraint is that we cannot use SAS/CONNECT.
Is it possible to perform this task with SAS code?
Is it possible to perform this task with FTP/SFTP?

Server-to-server transfer can be performed, but it requires host-side actions. SAS servers have the NOXCMD option active by default, which means SAS code running on the server cannot perform host-side actions.
You should coordinate with your SAS admins to perform a server-to-server transfer (e.g. host-side scp or WinSCP).
The alternative is to have a client machine that downloads from host A and then uploads to host B. The client would need access to both hosts and the security rights to download and upload the data set in question. Enterprise Guide can serve as that client application.
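If plain FTP is open between the two servers, the FILENAME FTP access method offers a pure-SAS route that does not rely on XCMD, because SAS itself performs the FTP transfer. A minimal sketch, assuming SAS 9.4 (for FCOPY); the host name, credentials, and paths are placeholders, and the dataset must not be in use while it is copied:

```sas
/* Source file on the local server (path is hypothetical) */
filename src "/sasdata/mylib/mydata.sas7bdat" recfm=n;

/* Destination on the remote server via the FTP access method.
   host=, user=, and pass= are placeholders; no XCMD is required
   because SAS speaks the FTP protocol itself. */
filename dest ftp "mydata.sas7bdat"
    cd="/sasdata/mylib"
    host="hostb.example.com"
    user="ftpuser" pass="secret"
    binary;

/* Byte-for-byte copy of the physical file */
%let rc = %sysfunc(fcopy(src, dest));
%put FCOPY return code: &rc;
```

Note that copying the physical .sas7bdat file is only safe between hosts with compatible encodings and architectures; otherwise a PROC CPORT/CIMPORT transport file is the more portable choice.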

Related

SymmetricDS unidirectional replication

I'm implementing SymmetricDS (version 3.9.4) with one-way replication (server => client), and I have some questions. The server and client are Oracle 12c databases on two different CentOS 7 machines.
On the client, do I only need to install and start the SymmetricDS service, right?
Do I need to create the SYM tables on the client? Since this replication is only from server to client, I think it is not necessary. Right?
How does the client communicate with the server? Just based on the sync.url property in the engine file?
Thanks
On the client you'll need to install the SymmetricDS service.
You can create the sym_* tables yourself or let SymmetricDS create them; the sym_* tables are necessary for SymmetricDS to function, on the client as well.
Yes, the client will use sync.url to connect to the server, register, and request the initial load. Then the server will either push new data to the client or the client will pull new data from the server, depending on how this communication is configured.
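For reference, a client-side engine properties file for this kind of setup might look roughly like the following. All values are placeholders; the group id, external id, and registration URL must match the node groups and engine configured on the server:

```properties
# client engine file, e.g. engines/store-001.properties (names are examples)
engine.name=store-001
db.driver=oracle.jdbc.driver.OracleDriver
db.url=jdbc:oracle:thin:@client-host:1521:ORCL
db.user=symmetric
db.password=secret
# node identity; must match the node groups defined on the server
group.id=client
external.id=001
# URL of the server engine this node registers with and syncs against
registration.url=http://server-host:31415/sync/server-000
```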

SAS stored process server vs workspace server

SAS has a stored process server that runs stored processes and a workspace server that runs SAS code. But a stored process is nothing but a combination of SAS code statements, so why can't the workspace server run SAS code?
I am trying to understand why SAS developers came up with the concept of a separate server just for stored processes.
A stored process server reuses the SAS process between runs. It is a stateless server meant to run small pre-written programs and return results. The server maintains a pool of processes and allocates requests to that pool. This minimizes the time to run a job, as there is no process startup/shutdown overhead.
A workspace server is a SAS process started for a single user: every client connection gets a new SAS process on the server. This server is meant for more interactive work, where a user runs something, looks at the output, and then runs something else. Code does not have to be pre-written and stored on the server, and in that scenario startup time is not a limiting factor.
Also, a workspace server can provide additional access to the server. A programmer can use this server to access SAS data sets (via ADO in .NET or JDBC in Java) as well as files on the server.
So there are 2 use cases and these servers address them.
From a developers perspective, the two biggest differences are:
Identity. The stored process server runs under the system account (&SYSUSERID) configured in the SAS General Servers group, sassrv by default. This affects permissions (e.g. database access) at the OS level. Workspace sessions always run under the credentials of the client (logged-in) account.
Sessions. The option to retain 'state' by leaving your session alive for a set time period (and accessing the same session again using a session id) is available only on the Stored Process server. However, avoid this pattern at all costs! Such a session ties up one of your multibridge ports and plays havoc with load balancing. It is also a poor design choice.
Both stored process and workspace servers can be configured to provide pooled sessions (generic sessions kept alive to be re-used, avoiding startup cost for frequent requests).
To further address your points: a Stored Process is a metadata object that points to (or can also contain) raw SAS code. A stored process can run on either type of server (stored process or workspace). The choice depends on your functional needs above, plus performance considerations as per your pooling and load-balancing configuration.
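As a concrete illustration, the code behind a stored process is ordinary SAS code; when registered to return ODS output it is conventionally wrapped in the %STPBEGIN/%STPEND macros (the dataset here is just an example):

```sas
*ProcessBody;      /* marker historically required for workspace-server stored processes */
%stpbegin;         /* initializes ODS output back to the client */
proc means data=sashelp.class;
    var height weight;
run;
%stpend;           /* closes the ODS destinations */
```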

Which SAS servers are involved in servicing requests from Enterprise Guide and DI Studio

Behind the scenes, SAS has the following servers:
1. Metadata Server
2. Workspace Server
3. Stored Process Server
4. OLAP Server
When we run a macro or a stored process in Enterprise Guide, does it use the Workspace Server, which internally uses the Metadata Server and Stored Process Server?
When we run an ETL job on DI Studio, which servers service the request?
When you start EG or DI you initially connect to your metadata server. The metadata server knows who the users are, where the data resides, and how to connect to SAS workspace servers and SAS Stored Process servers.
When you hit the submit button in a project or job from EG or DI, the client connects to an Object Spawner (daemon), which launches a SAS workspace where your SAS code is executed. The Stored Process server is not involved; the metadata server is only involved in checking permissions and helping the client application find its object spawner.
There are a couple of cases where you can touch a Stored Process server. This typically happens when you ask to run a stored process or convert a job or program to a stored process. Unfortunately, SAS has made this a bit complicated by allowing a "stored process" to run on either a Stored Process server or a SAS Workspace server. This is a regrettable name choice, but something we all need to deal with when using this software stack.

Cache data on Multiple Hosts in AppFabric

Let me first explain that I am very new to using AppFabric to improve the responsiveness of an application. I am trying to configure a server cluster with 2 nodes, using the XML provider on a network shared location.
My requirement is that the cached data should exist on both hosts, so that if one host is down the other host in the cluster can serve the request and provide the cached data. As I said, I have 2 hosts in my cluster and one of them is defined as lead host. When I save data to the cache I cannot see it on both hosts (I am not sure whether there is a specific command to see the data on a specific host). So what I want to test is this: I'll stop one of the cache hosts and see if I can still get the data from the second cache host.
thanks in advance
-Nitin
What you're talking about here is High Availability. To enable it, you'll need to be running Windows Server Enterprise Edition; on Standard Edition you just can't do it. You also really need a minimum of three hosts, so that if one goes down there are still two copies of your cached data to provide failover. If you can meet these requirements, the only extra step to create a highly-available cache is to set the Secondaries flag when you call New-Cache, e.g.
New-Cache myHACache -Secondaries 1
There's no programmatic way to query what data is held on a specific host, because you only ever address the logical cache, not an individual physical host.
From our experience, SQL authentication to the database does not work: it is clearly stated that only the Integrated Security option is supported. We also faced issues running the service with Integrated Security, since our SQL cluster was running under a domain account while AppFabric needs to run under Network Service, and we couldn't successfully connect to the SQL cluster from the AppFabric service.
This was a painful experience, and I hope AppFabric caching improves its error messages and error codes, and also lets us decide how we want to connect to SQL. Kind of stupid having to undergo the pain of "has to run as Network Service" and "no SQL authentication".

Filtering data with Microsoft Sync Framework

Context: I'm working on a project that uses an offline application architecture. Our client program has two modes: connected and disconnected. When users are in disconnected mode, they use a local database (SQL CE) for retrieving and storing data. When a user connects to the application server again, the local database is synchronized with the central database. The transport layer in this project is WCF; we implement a proxy class to expose a SqlSyncProvider on the client for Sync Framework to sync data.
Question: How can I implement data filtering with MSF? In our project each client has a role, and each role has access to a different set of tables, as well as different rows within those tables. As far as I know, MSF allows us to filter data with a parameter column; however, the provisioning is then the same for every user. In my case the provisioning differs per user, because it depends on the user's role.
Thanks.
You can use adapter filters on the server side and send a parameter from each client, so the server fetches data on a per-client basis.
Client

this.Configuration.SyncParameters.Add(
    new SyncParameter("@CustomerName", "Sharp Bikes"));

Server (the original snippet was cut off; a typical SqlSyncAdapterBuilder filter looks like this, with table and column names as examples)

SqlSyncAdapterBuilder builder = new SqlSyncAdapterBuilder(serverConn);
builder.TableName = "Sales.Customer";
builder.FilterClause = "CustomerName = @CustomerName";
builder.FilterParameters.Add(new SqlParameter("@CustomerName", SqlDbType.NVarChar));