Adding new SAS users to existing groups in SAS metadata

I am trying a similar approach to add new SAS users to SAS metadata and assign them to existing groups. Do you have to pass object reference values to the %mm_adduser2group(user=,group=) macro provided in the GitHub link? Passing the object values would be a bit of a stretch, considering we would have to fetch them from the SAS application. Could passing the plain values work instead, e.g. (user=xyz, group=sasstudio)? We were facing issues while assigning new users to an existing group using this macro. Any suggestions on how I can resolve this issue?
Reference question: adding a meta user to a meta group in SAS
GitHub link for the macro:
https://github.com/sasjs/core/blob/main/meta/mm_adduser2group.sas

The values to the macro should be passed unquoted, e.g.:
%mm_adduser2group(user=xyz,group=sasstudio)
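For completeness, a minimal end-to-end sketch (assuming the sasjs/core macros are compiled from its all.sas bundle and the session already has a valid metadata connection; the user and group names are placeholders):

/* hedged sketch: compile the sasjs/core macros, then add the user by name.
   Assumes metaserver/metaport/metauser/metapass options are already set. */
filename mc url "https://raw.githubusercontent.com/sasjs/core/main/all.sas";
%inc mc;
%mm_adduser2group(user=xyz,group=sasstudio)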
You say:
We were facing issues while assigning new users to existing group using this macro. Any suggestions on how can I resolve this issue
However, you don't actually say what the issue was. Does it work when you add users to groups manually using SAS Management Console? Perhaps the users are already in those groups, or have an inherited membership of those groups?

Give access to BigQuery tables with specific table names, to be created in future, across all datasets in a GCP project?

I've searched the documentation a lot but couldn't find anything that allows me to do the following:
Allow creating a role which grants full table access to tables with certain names only (e.g. "table1") that will be created in the future. This should work across all available datasets in a GCP project, including datasets that will be created in the future.
Is this possible? If not directly, maybe indirectly?
Thanks.
The simplest way to do that would be to create a dataset for housing such tables, and set its access to what you need. Tables requiring a different set of policies should be housed in other datasets.
More information here: https://cloud.google.com/bigquery/docs/dataset-access-controls
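As a hedged sketch of that approach (project, dataset, and principal names below are placeholders), access can be granted at the dataset level with BigQuery's SQL DCL:

-- grant read access on just the dedicated dataset (names are placeholders)
GRANT `roles/bigquery.dataViewer`
ON SCHEMA `my-project.restricted_tables`
TO 'user:analyst@example.com';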

GCP Cloud SQL denies permission for pre-aggregation

I am trying to use pre-aggregations over Cloud SQL on Google Cloud Platform, but the database is denying access with the error "Statement violates GTID consistency".
Any help is appreciated.
Cube.js builds pre-aggregations with CREATE TABLE ... SELECT, but you are running MySQL on top of Cloud SQL with --enforce-gtid-consistency, which has limitations.
Since only transactionally safe statements can be logged, CREATE TABLE ... SELECT (and some other SQL) is disallowed, because that statement is actually logged as two separate events.
There are two ways to solve this issue:
1. Use pre-aggregations to an external database (the recommended way):
https://cube.dev/docs/pre-aggregations/#read-only-data-source-pre-aggregations
2. Use the undocumented flag loadPreAggregationWithoutMetaLock.
Attention: this flag is experimental and can be removed or changed in the future.
Take a look at the source code.
You can pass it directly in the driver constructor. This will produce two SQL statements to get around the limitation:
CREATE TABLE
INSERT INTO
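A minimal sketch of what that could look like in a cube.js server configuration (assuming the MySQL driver; since the flag is undocumented, treat this as experimental):

// cube.js config sketch: pass the experimental flag in the driver constructor
const MySqlDriver = require('@cubejs-backend/mysql-driver');

module.exports = {
  driverFactory: () =>
    new MySqlDriver({
      // undocumented/experimental: splits CREATE TABLE ... SELECT into
      // CREATE TABLE followed by INSERT INTO to satisfy GTID consistency
      loadPreAggregationWithoutMetaLock: true,
    }),
};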
Thanks

BigQuery: how to remove inherited access to a dataset

I have been providing access to datasets in BigQuery using the Share Dataset option for some time now. No problem.
But now I have a specific requirement: I need to provide access to specific people/accounts/groups, but I don't want inherited access to work on this dataset.
I mean, I really need to provide access to specific people only, so that not even inherited access works.
Is that possible? And if so, how can I do that?
To add more context: there is a dataset which should be available only to one service account (the one populating it) and a specific consumer account (HR), as it will contain sensitive data.
The problem is that our project already contains a couple of BigQuery Admin accounts, and they of course inherit permissions on the dataset.
I don't think that is possible, as project-level roles are inherited automatically. Creating a new project may be the way to go.

Executing Named Queries in Athena

We want to execute a parameterized query in Athena using the JavaScript SDK from AWS.
It seems Athena's named queries may be the way to do this, but the documentation is very cryptic about how to go about it.
It would be great if someone could help us do the following:
1. Create a parameterized query like SELECT c FROM Country c WHERE c.name = :name
2. Pass the name parameter's value
3. Execute this query
What is the recommended way to avoid SQL injection in Athena?
Edit: this answer was written before Athena supported prepared statements.
Named queries are a weird feature of Athena that is not really useful for anything, unfortunately.
Athena does not support prepared statements like many RDBMSs do. There are SQL libraries that support doing parameter expansion client side (Sequel for Ruby is one I have experience with); unfortunately I can't give you a suggestion for JavaScript.
Escaping in Athena's SQL dialect isn't very complicated, however. In identifiers, double quotes need to be escaped as two double quotes, and in string literals, single quotes need to be escaped as two single quotes. Other data types just need to be clean, e.g. only digits for integers.
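A rough sketch of that client-side escaping in JavaScript (the helper names are made up, not part of any SDK):

// hedged sketch: escape values before interpolating them into Athena SQL
function escapeIdentifier(name) {
  // identifiers: double up embedded double quotes
  return '"' + String(name).replace(/"/g, '""') + '"';
}
function escapeString(value) {
  // string literals: double up embedded single quotes
  return "'" + String(value).replace(/'/g, "''") + "'";
}
// usage: build the full query text before passing it to StartQueryExecution
const name = "O'Brien";
const sql = "SELECT c FROM Country c WHERE c.name = " + escapeString(name);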
Also, keep in mind that in Athena, the dangers of SQL injection are different than in an RDBMS: Athena can't delete your data. If you set up your IAM permissions correctly the user can't even drop tables, and even if you for some reason run queries with a user that is allowed to drop tables, tables are just metadata and can easily be set up again.

ColdFusion: move data from one datasource to another

I need to move a series of tables from one datasource to another. Our hosting company doesn't allow shared credentials across the databases, so I can't write a SQL script to handle it.
The best option seems to be writing a little ColdFusion script that takes care of it.
Ordinarily I would do something like:
SELECT * INTO database.table FROM database.table
The only problem with this is that a cfquery doesn't allow you to use two datasources in the same query.
I don't think I could use a query of queries (QoQ) either, because you can't tell it to use the second datasource; it has to have a dbtype of 'query'.
Can anyone think of an intelligent way of getting this done? Or is the only option to loop over each row of the first query, inserting them individually into the second, as in the sketch below?
My problem with that is that it will take much longer. We have a lot of tables to move.
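(For reference, that row-by-row fallback would look something like this sketch; datasource, table, and column names are made up:)

<!--- hedged sketch: copy rows one at a time between two datasources --->
<cfquery name="src" datasource="dsn_source">
    SELECT id, name FROM mytable
</cfquery>
<cfloop query="src">
    <cfquery datasource="dsn_target">
        INSERT INTO mytable (id, name)
        VALUES (
            <cfqueryparam value="#src.id#" cfsqltype="cf_sql_integer">,
            <cfqueryparam value="#src.name#" cfsqltype="cf_sql_varchar">
        )
    </cfquery>
</cfloop>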
Ok, so you don't have a shared password between the databases, but you do seem to have the passwords for each individual database (since you have datasources set up). So, can you create a linked server definition from database 1 to database 2? User credentials can be saved against the linked server, so they don't have to be the same as the source DB. Once that's set up, you can definitely move data between the two DBs.
We use this all the time to sync data from our live database into our test environment. I can provide more specific SQL if this would work for you.
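For example (SQL Server; the server, database, and login names below are placeholders):

-- hedged sketch: create the linked server and map credentials for it
EXEC sp_addlinkedserver
    @server = N'REMOTE_DB',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'remote-host';
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'REMOTE_DB',
    @useself = 'false',
    @locallogin = NULL,
    @rmtuser = N'remote_user',
    @rmtpassword = N'secret';
-- then data can be copied across in one statement:
INSERT INTO dbo.target_table
SELECT * FROM REMOTE_DB.other_database.dbo.source_table;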
You CAN access two databases, but not two datasources in the same query.
I wrote something a few years ago called "DataSynch" for just this sort of thing.
http://www.bryantwebconsulting.com/blog/index.cfm/2006/9/20/database_synchronization
Everything you need for this to work is included in my free "com.sebtools" package:
http://sebtools.riaforge.org/
I haven't actually used this in a few years, but I can't think of any reason why it wouldn't still work.
Henry, why do any of this? Why not just use SQL Server Management Studio to move over the selected tables using the "import data" function? (Right-click on your DB and choose "import", then use the native client and the permissions for the "other" database to specify the tables.) Your Management Studio will need access to both DBs, but the DB servers themselves do not need access to each other; Management Studio serves as a conduit.