SAS 9.3 Giving access to a particular table on one server to a user on another server - sas

I have two SAS 9.3 servers, Server A and Server B, which have separate libraries, datasets and users.
There are five tables in a library on Server A called LIB_A:
A1
A2
A3
A4
A5
I want to share only dataset A3 with a user from Server B.
A3 is a dynamic table, so I decided to make a SAS view out of it called V_A3 and store it in a new library called LIB_B.
PROC SQL;
CREATE VIEW LIB_B.V_A3 AS
SELECT * FROM LIB_A.A3;
QUIT;
How can I share this V_A3 SAS view, which is on Server A in library LIB_B, with a user from Server B?
I have tried creating a new library on Server B, assigning as its path the network share of LIB_B's folder (\\SERVER B\LIB_B).
The user sees the library and the view, but the view produces an ERROR and the data is not shown, probably because the code which created the view references outside sources, like
SELECT * FROM LIB_A.A3
Without creating views, how can I keep the data source dynamic?
On Server B, if I create a SAS Base library which uses the path of the original folder on Server A, i.e.
\\Server A\LIB_A
the user sees all five tables: A1, A2, A3, A4, A5.
But I want him to see only table A3. If I can manage this, there will be no need to create the view.
How can I make it work? My servers are on Windows with SAS 9.3 TS1M1.

The simple answer is to permission things at the table level, rather than at the folder level. This is true whether you are using SAS Metadata Server to manage the permissions, or Windows.
In SAS Metadata Server, you could create an ACT deny (no read access) for ServerBUsers, and then you could separately create another ACT permit (read access) for ServerBUsers. Then, apply the deny ACT to \ServerA\LibA, and apply the permit ACT to \ServerA\LibA\TableA3.
Windows permissions work roughly the same way, and your Windows permissions must be at least as permissive as your SAS permissions, or the user will get annoying messages. If you are comfortable handling the permissions in SAS, just give read access to the whole folder. If not, you could create Windows AD groups akin to the ACTs above, then have SAS use those AD groups to define SAS user groups that do or do not have permission to the various files and folders.
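As a side note on the view error in the question: a PROC SQL view can carry its own LIBNAME definition via the USING clause, so the embedded libref is resolved wherever the view is read, including from Server B. A sketch (the UNC path is an assumption):

```sas
PROC SQL;
  CREATE VIEW LIB_B.V_A3 AS
    SELECT *
      FROM LIB_A.A3
      USING LIBNAME LIB_A '\\SERVERA\LIB_A'; /* embedded libref, assigned each time the view executes */
QUIT;
```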

Related

Attach multiple query tabs in BigQuery to the same BQ Session

I cannot find a way to do this in the UI: I'd like to have distinct query tabs in the BigQuery UI attached to the same session (i.e. so they share the same ##session_id and _SESSION variables). For example, I'd like to create a temporary table (session-scoped) in one tab, then in a separate query tab be able to refer to that temp table.
As far as I can tell, when I put a query tab in Session Mode, it always creates a new session, which is precisely what I don't want :-\
Is this doable in BQ's UI?
There is a 3rd-party IDE for BigQuery supporting such a feature (namely, joining tab(s) into an existing session).
This is Goliath - part of the Potens.io suite, available on the Marketplace.
Let's see how it works there:
Step 1 - create a tab with a new session and run some query to actually initiate the session.
Step 2 - create new tab(s) and join them to the existing session (either using the session_id or simply the respective tab name).
Now both tabs (Tab 2 and Tab 3) share the same session, with all the expected perks.
You can add as many tabs to that session as you want, to comfortably organize your workspace.
As you can see, tabs that belong to the same session are colored in a user-defined color, so it is easy to navigate between them.
Note: another tool in this suite is Magnus - Workflow Automator. It supports all of BigQuery, Cloud Storage and most Google APIs, as well as many simple utility-type tasks (BigQuery Task, Export to Storage Task, Loop Task and many more), along with advanced scheduling, triggering, etc. It supports GitHub as source control as well.
Disclosure: I am a GDE for Google Cloud, creator of those tools, and leader of the Potens team.
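For completeness, outside the UI the session can also be shared programmatically: BigQuery lets one job create a session and later jobs attach to it by session ID. A sketch with the google-cloud-bigquery Python client (requires a GCP project and credentials; the queries here are made up):

```
from google.cloud import bigquery

client = bigquery.Client()

# Start a session with the first query (creates the session-scoped temp table)
job = client.query(
    "CREATE TEMP TABLE t AS SELECT 1 AS x",
    job_config=bigquery.QueryJobConfig(create_session=True),
)
job.result()
session_id = job.session_info.session_id

# Attach a second, independent query to the same session
attach_cfg = bigquery.QueryJobConfig(
    connection_properties=[bigquery.ConnectionProperty("session_id", session_id)]
)
rows = list(client.query("SELECT x FROM t", job_config=attach_cfg).result())
```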

Connect Power BI to AS400

I tried to connect to ODBC with Power BI using this connection string:
Driver={Client Access ODBC Driver (32-bit)};System=xxxxx.xxx.xxxxx;libraries=XXXXXX;naming=system;transaction isolation=read committed;
The connection succeeds, but I cannot see the right tables; I see 3 folders:
EXPLOIT, INSTAL, QGPL
with different tables inside that are not the tables I see when I connect with the SQuirreL client, for example.
I know there are a few elements to understand here. Does someone have any ideas?
UPDATE
I found these three catalogs (EXPLOIT, INSTAL and QGPL) in SQuirreL too, but I cannot see all the other catalogs that I see in SQuirreL. Could they be limited views? The user is always the same.
It seems that the default library list for your server does not include QSYS2. You can access most of the DB2 for i catalog files in QSYS2. Try this:
Select * from qsys2.systables;
That should show you all the tables your heart desires, as well as the schema (library) that the table resides in.
BTW, I don't think EXPLOIT, INSTAL, and QGPL are catalogs. They are likely libraries; QGPL, at least, is definitely a system library supplied by IBM. The other two seem to be provided by some 3rd-party app.
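To narrow that down, SYSTABLES can be filtered by schema and object type; a sketch (in the DB2 for i catalog, TABLE_TYPE 'T' marks physical tables):

```sql
-- List only real tables, with the library (schema) each one lives in
SELECT TABLE_SCHEMA, TABLE_NAME
  FROM QSYS2.SYSTABLES
 WHERE TABLE_TYPE = 'T'
 ORDER BY TABLE_SCHEMA, TABLE_NAME;
```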

Restrict access to a table in SQL Lab in Superset

I have database with many tables. Users have full access to this database and tables to create various charts and dashboards. They use SQL Lab extensively to write custom queries.
However, I added sensitive data in a separate table that needs to be accessible to only a few users. How can I achieve this?
I tried the ROW LEVEL SECURITY feature.
However, this applies only to virtual tables created by Superset. I want to restrict access during direct SQL Lab use as well.
Possible Solution:
Create an ACL at the database level and create a separate connection in Superset.
Cons - this requires duplicating the connection to the same database.
Ideal solution:
Restrict SQL Lab access to specific tables at the Superset level, i.e. Superset should check user roles and ACLs and decide whether a table can be queried or not.
Is this possible?
Maybe consider implementing proper access control to your data with Apache Ranger, and having Superset impersonate the logged-in user.
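The idea is that with "Impersonate the logged in user" enabled on the Superset database connection, SQL Lab queries run under each user's own database account, so ordinary database-level grants are enforced. Roughly, in PostgreSQL syntax (table and role names are assumptions):

```sql
-- Deny the sensitive table to the general analyst role,
-- and allow it only for the restricted group
REVOKE ALL PRIVILEGES ON sensitive_table FROM analysts;
GRANT SELECT ON sensitive_table TO restricted_users;
```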

Reuse a previously published datasource in a Power BI report

I have developed a Power BI report using Power BI Desktop, pointing to a private on premise development database as the datasource so that I was able to develop and test it easily. Then, I published it from my Power BI Desktop pbix to the work area of my customer.
As a result, the work area contains the published report and the dataset. Later, my customer has changed the dataset so that it now points to the correct on premise production database of their own. It works perfectly.
Now, I want to publish a new report for my customer using the previously published and reconfigured dataset. The problem is that I can't see any option in Power BI Desktop to point the report to the published dataset, nor any option to avoid creating a new dataset each time I publish a report, nor any way to reconfigure, from the web portal, the newly published report to point to the same dataset as the first one.
Is there any way to do this or any work around for this scenario? I think the most reasonable solution would be to be able to change the dataset of any report, so that the datasets of any report could be interchangeable.
Update:
I had already used connection-specific parameters, but I am not given rights to change the published dataset, so that's a dead end.
Another thing I have come up against: in Power BI Desktop you cannot change the connection parameter values to those of the production environment and publish the report if you can't reach the target database from your computer. Power BI Desktop asks you to apply the changes first, and when it applies the values it tries to connect to the corresponding database, which obviously ends with a network or timeout error, cancelling the changes and returning you to the starting point.
It's always good practice to use connection-specific parameters to define the data source. This means that you do not enter the server name directly, but specify it indirectly using a parameter. The same goes for the database name, if applicable.
If you are about to make a new report, cancel the Get Data dialog, define the parameters as described below, and then in Get Data specify the data source using these parameters.
To modify an existing report, open Power Query Editor by clicking Edit Queries, and in Manage Parameters define two new text parameters; let's name them ServerName and DatabaseName:
Set their current values to point to one of your data sources, e.g. SQLSERVER2016 and AdventureWorks2016. Then right click your query in the report and open Advanced Editor. Find the server name and database name in the M code:
and replace them with the parameters defined above, so the M code will look like this:
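The screenshots from the original answer are not reproduced here; as a sketch, the relevant M line before and after parameterization looks roughly like this (server and database names are examples):

```m
// Before: hard-coded data source
Source = Sql.Database("SQLSERVER2016", "AdventureWorks2016")

// After: driven by the ServerName and DatabaseName parameters
Source = Sql.Database(ServerName, DatabaseName)
```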
Now you can close and apply changes and your report should work as before. But now when you want to change the data source, do it using Edit Parameters:
and change the server and/or database name to point to the other data source, that you want to use for your report:
After changing parameter values, Power BI Desktop will ask you to apply the changes and reload the data from the new data source. To change the parameter values (i.e. the data source) of a report published in Power BI Service, go to dataset's settings and enter new server and/or database name:
If the server is on-premise, check the Gateway connection too, to make sure that it is configured properly to use the right gateway. You may also want to check the available gateways in Manage gateways:
After changing the data source, refresh your dataset to get the data from the new data source. With Power BI Pro account you can do this 8 times per 24 hours, while if the dataset is in a dedicated capacity, this limit is raised to 48 times per 24 hours.
This is an easy way to make your reports "switchable", e.g. for switching a report from a DEV or QA to a PROD environment, or, as part of your disaster recovery plan, to automate switching all reports in some workspace to another DR server. In your case, this will allow you (or your customers) to easily switch the data source of the report.
I think the only correct answer is that it cannot be done, at least at this moment.
The closest way of achieving this is with live connections:
https://learn.microsoft.com/en-us/power-bi/desktop-report-lifecycle-datasets
But if you have already designed your report without a live connection, against your own development environment and its connection parameters, then you are stuck: your only options are to redo the whole report with a live connection, or, the oddest workaround, to use an alias in your configuration matching the server name and database name of the target production environment.

Coldfusion: Move data from one datasource to another

I need to move a series of tables from one datasource to another. Our hosting company doesn't give shared passwords amongst the databases so I can't write a SQL script to handle it.
The best option is just to write a little ColdFusion script that takes care of it.
Ordinarily I would do something like:
SELECT * INTO database.table FROM database.table
The only problem with this is that cfquery doesn't allow you to use two datasources in the same query.
I don't think I could use a QoQ either, because you can't tell it to use the second datasource; it has to have a dbType of 'Query'.
Can anyone think of any intelligent ways of getting this done? Or is the only option to just loop over each line in the first query adding them individually to the second?
My problem with that is that it will take much longer. We have a lot of tables to move.
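The row-by-row fallback mentioned above would look roughly like this in CFML (datasource, table, and column names are assumptions):

```cfm
<!--- Pull from the first datasource --->
<cfquery name="src" datasource="dsn_one">
    SELECT id, name FROM source_table
</cfquery>

<!--- Insert each row into the second datasource --->
<cfloop query="src">
    <cfquery datasource="dsn_two">
        INSERT INTO target_table (id, name)
        VALUES (
            <cfqueryparam value="#src.id#" cfsqltype="cf_sql_integer">,
            <cfqueryparam value="#src.name#" cfsqltype="cf_sql_varchar">
        )
    </cfquery>
</cfloop>
```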
Ok, so you don't have a shared password between the databases, but you do seem to have the passwords for each individual database (since you have datasources set up). So, can you create a linked server definition from database 1 to database 2? User credentials can be saved against the linked server, so they don't have to be the same as the source DB. Once that's set up, you can definitely move data between the two DBs.
We use this all the time to sync data from our live database into our test environment. I can provide more specific SQL if this would work for you.
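A linked-server setup along those lines might look like this in T-SQL (server, database, and credential names are placeholders):

```sql
-- Define the remote server and the credentials to use against it
EXEC sp_addlinkedserver
     @server = 'REMOTESRV', @srvproduct = '',
     @provider = 'SQLNCLI', @datasrc = 'remote-host-name';
EXEC sp_addlinkedsrvlogin
     @rmtsrvname = 'REMOTESRV', @useself = 'FALSE',
     @rmtuser = 'remote_user', @rmtpassword = 'remote_password';

-- Then copy data with a four-part name
INSERT INTO REMOTESRV.TargetDb.dbo.some_table
SELECT * FROM SourceDb.dbo.some_table;
```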
You CAN access two databases, but not two datasources in the same query.
I wrote something a few years ago called "DataSynch" for just this sort of thing.
http://www.bryantwebconsulting.com/blog/index.cfm/2006/9/20/database_synchronization
Everything you need for this to work is included in my free "com.sebtools" package:
http://sebtools.riaforge.org/
I haven't actually used this in a few years, but I can't think of any reason why it wouldn't still work.
Henry - why do any of this? Why not just use SQL Server Management Studio to move over the selected tables using the "import data" function? (Right-click on your DB and choose "Import", then use the native client and the permissions of the "other" database to specify the tables.) Your Management Studio will need access to both DBs, but the DB servers themselves do not need access to each other; Management Studio serves as a conduit.