Does anyone know if Superset allows multiple users to connect to a Hive datasource with their own credentials? Or will you have to create a separate datasource for each user?
Since the cloud-sql-proxy already forces individual user authentication with the database through a user's IAM account, and allows specifying read/write permissions, it seems potentially pointless to also have an individual database account for each user.
For security, is it necessary to have a database user per dev when using cloud-sql-proxy, or is it fine to have just one shared database user, since developers are already authenticated by the time they can enter a database username/password anyway? I'm not a server dev or a DBA, so I thought it best to ask.
In fact, there are two levels of permissions:
Cloud IAM controls whether you can access the Cloud SQL product at all.
Database user management controls who can log in to the database engine and which engine-level permissions they get (for instance, access to a specific schema, one schema per developer, on the same SQL instance).
The hosted database engines are based on MySQL, PostgreSQL, or SQL Server. All of those databases have their legacy user authentication in place, and you have to deal with it.
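To make the two levels concrete, here is a minimal sketch, assuming cloud-sql-proxy is already listening on 127.0.0.1:5432 and a hypothetical per-developer database user dev_alice exists:

import java.sql.Connection;
import java.sql.DriverManager;

public class ProxyConnect {
    public static void main(String[] args) throws Exception {
        // Level 1 (Cloud IAM) was already enforced when cloud-sql-proxy
        // authenticated this machine; the engine-level check happens here.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://127.0.0.1:5432/appdb", "dev_alice", "s3cret")) {
            System.out.println("Connected as " + conn.getMetaData().getUserName());
        }
    }
}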
When I create a new database instance in Google Cloud SQL, it creates a default user called postgres. I created another user, and when I tried to remove the default postgres user I received a message: "Can not remove a System user."
Some months ago I could remove the default user without problems. Did Google change anything in Cloud SQL? How can I remove the default user?
The postgres user is part of the cloudsqlsuperuser role. Because Cloud SQL for PostgreSQL is a managed service, it restricts access to certain system procedures and tables that require advanced privileges. In Cloud SQL, customers cannot create or have access to users with superuser attributes, including the postgres user. This is documented on the PostgreSQL users documentation page.
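You can see this from any client; a minimal sketch, assuming an open JDBC Connection conn to the instance:

try (java.sql.ResultSet rs = conn.createStatement().executeQuery(
        "SELECT rolname, rolsuper FROM pg_roles WHERE rolname IN ('postgres', 'cloudsqlsuperuser')")) {
    while (rs.next()) {
        // On Cloud SQL, rolsuper is false even for postgres: it is a managed
        // account rather than a true superuser, which is why it cannot be dropped.
        System.out.println(rs.getString("rolname") + " superuser=" + rs.getBoolean("rolsuper"));
    }
}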
We connect to AWS through Office 365 accounts provided by our company. Since we assume a role in the UI, how would I connect to Athena from a JDBC tool like DataGrip?
If you want to use JDBC for Athena, you can use one of the connection strings shown below, with your access key ID and secret access key as the user name and password.
jdbc:awsathena://athena.{REGION}.amazonaws.com:443
Or
jdbc:awsathena://AwsRegion={REGION}
Please refer to this and this, which walk you through configuring the connection and running queries from DataGrip.
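In DataGrip that's just the URL plus the user/password fields; in plain Java it would look roughly like this (a sketch, assuming the Simba Athena driver is on the classpath; the region and bucket are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class AthenaConnect {
    public static void main(String[] args) throws Exception {
        Properties info = new Properties();
        // The access key pair doubles as the driver's user name / password.
        info.setProperty("user", System.getenv("AWS_ACCESS_KEY_ID"));
        info.setProperty("password", System.getenv("AWS_SECRET_ACCESS_KEY"));
        // Athena needs an S3 location for query results (placeholder bucket).
        info.setProperty("S3OutputLocation", "s3://your-bucket/athena-results/");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:awsathena://athena.us-east-1.amazonaws.com:443", info)) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}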
Use "com.simba.athena.amazonaws.auth.DefaultAWSCredentialsProviderChain". This class trying to load credential from different provider one by one until it get success.
More detail about Simba athena lib - simba athena lib
Sample code for reference:

import java.util.Properties;
import org.apache.tomcat.jdbc.pool.PoolConfiguration;
import org.springframework.jdbc.core.JdbcTemplate;

// Attach Athena driver properties to the Tomcat pool backing the DataSource.
PoolConfiguration configuration = ((org.apache.tomcat.jdbc.pool.DataSource) dataSource).getPoolProperties();
Properties properties = new Properties();
properties.setProperty("S3OutputLocation", "s3://bucket-name/folder/");
// The default chain checks env vars, system properties, profile files, and instance roles.
properties.setProperty("AwsCredentialsProviderClass", "com.simba.athena.amazonaws.auth.DefaultAWSCredentialsProviderChain");
configuration.setDbProperties(properties);
JdbcTemplate template = new JdbcTemplate(dataSource);
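Once the pool carries those properties, queries go through JdbcTemplate as ordinary JDBC, for example:

// Hypothetical query just to exercise the connection.
java.util.List<java.util.Map<String, Object>> rows = template.queryForList("SELECT 1");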
I'm using WSO2 API Manager v2.2.0 as an API gateway, and I have a problem backing up users and roles. At the moment I'm using api-import-export-2.1.0, but it only backs up APIs. Any advice for my problem?
Users and roles are in the user database, therefore you need to back up your user database (or whatever other user store you use).
By default, WSO2AM comes with an embedded H2 database, which is not really recommended (or suited) for production deployments; you should set up your own databases on a supported DB system.
If you still use the embedded H2 database, the user database file is by default located at repository/database/WSO2CARBON_DB.h2.db. However, I don't recommend backing up/copying the file while it is open (while wso2am is running).
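If you do need a consistent SQL dump of the embedded store, one option is H2's SCRIPT command, run while the server is stopped. A sketch, assuming the stock WSO2 file path and the default wso2carbon/wso2carbon credentials (adjust both to your setup):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DumpCarbonDb {
    public static void main(String[] args) throws Exception {
        // Embedded H2 file; connect only while wso2am is NOT running.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:/opt/wso2am/repository/database/WSO2CARBON_DB", "wso2carbon", "wso2carbon");
             Statement st = conn.createStatement()) {
            st.execute("SCRIPT TO 'carbon-backup.sql'");  // full SQL dump, including the user and role tables
        }
    }
}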
I wanted to set up an RDS instance to store data for reporting. I have scripts that run different REST calls against certain sites; they require bulk admin privileges on the back end because they dump their REST call data into a CSV and then do a bulk CSV insert into SQL Server SE. In my local environment, setting up a user with bulk admin privileges for my scripts was easy. However, I couldn't figure out how to do it in RDS. I opened a ticket with Amazon and they suggested writing a policy for it. So I figured I would ask here whether this is possible, and about possible alternatives. If bulk/system admin privileges are out of the question in RDS, I guess I will just have to use an AWS EC2 instance with SQL Server set up on it.
Bulk insert is not possible with RDS. The data_file parameter of the BULK INSERT command must refer to a file accessible by the machine running SQL Server.
Also, RDS does not support the bulkadmin server role.
Supported SQL Server Roles and Permissions
Importing and Exporting SQL Server Data
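One workaround is to have the script host stream the CSV and use JDBC batch inserts instead of BULK INSERT. A minimal sketch; the endpoint, table, and column names are placeholders:

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class CsvBatchLoad {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sqlserver://my-rds-endpoint:1433;databaseName=reports", "app_user", "secret");
             BufferedReader in = new BufferedReader(new FileReader("data.csv"));
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO report_data (col_a, col_b) VALUES (?, ?)")) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split(",", -1);   // naive split; no quoted-field handling
                ps.setString(1, f[0]);
                ps.setString(2, f[1]);
                ps.addBatch();                      // accumulate rows, send in one round trip
            }
            ps.executeBatch();
        }
    }
}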