Google Cloud SQL shared or individual database user accounts when using cloud-sql-proxy - google-cloud-platform

Since the cloud-sql-proxy already forces individual user authentication with the database through a user's IAM account, and allows specifying read/write permissions, it seems potentially pointless to also have an individual database account for each user.
For security, is it necessary to have a database user per dev when using cloud-sql-proxy, or is it fine to have just one database user, since users are already authenticated by the time they can enter a database username/password anyway? I'm not a server dev or a DBA, so I thought it best to ask.

In fact, you have two levels of permissions:
Cloud IAM controls whether you can access the Cloud SQL product at all.
Database user management lets you log into the database engine and grants engine-level permissions (for example, access to a specific schema: one schema per developer on the same SQL instance).
The hosted database engines are based on MySQL, PostgreSQL, or SQL Server. All of those databases keep their legacy user authentication in place, and you have to deal with it.
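The "one schema per developer" idea above can be sketched as plain SQL generation. This is a minimal illustration, assuming PostgreSQL and a `dev_<name>` schema naming convention (both assumptions, not something the answer prescribes):

```python
# Sketch: one schema per developer on a shared PostgreSQL instance.
# Developer names and the dev_<name> convention are assumptions.

def grants_for_developer(dev: str) -> list[str]:
    """Return the DDL giving one developer their own schema."""
    schema = f"dev_{dev}"
    return [
        f"CREATE SCHEMA IF NOT EXISTS {schema};",
        f"GRANT USAGE, CREATE ON SCHEMA {schema} TO {dev};",
        # Nothing is granted to other developers; PostgreSQL denies
        # access to a schema by default, so they stay locked out.
    ]

for stmt in grants_for_developer("alice"):
    print(stmt)
```

This keeps the IAM layer (who can reach the instance) and the engine layer (who can touch which schema) cleanly separated.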

Related

Password rotation strategy for snowflake when using multi-tenant tables

We are using the Snowflake database and have created multi-tenant tables.
We have created a username and password for each tenant.
These usernames and passwords are used in applications built on AWS QuickSight and the Microsoft Power BI tool.
The usernames and passwords are NOT directly accessible by tenants; only our own application teams have access to them.
We can't use key-pair authentication because QuickSight does not support it yet.
Question:
We are looking for a pattern for rotating these passwords without downtime. We want to rotate them on a fixed schedule, such as every 6 months.
We decided to go with a two-user strategy and alternate between the two. We manage these users ourselves.
Consider the "Snowflake Database Secrets Engine" by Hashicorp Vault:
https://www.vaultproject.io/docs/secrets/databases/snowflake
"This plugin generates database credentials dynamically based on configured roles for Snowflake-hosted databases and supports Static Roles"
For example, you can configure it to rotate passwords every 24 hours, and it gives you an endpoint to retrieve the latest password.
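The two-user alternation described in the question can be sketched as a small scheduling function. This is an illustration only; the user names and the epoch date are assumptions:

```python
from datetime import date

# Sketch of the two-user rotation strategy: two database users
# (names are an assumption) take turns being "active", swapping
# every 6 months. Applications always fetch the active user's
# credentials, so the idle user's password can be rotated at any
# time without downtime.

USERS = ("tenant_app_a", "tenant_app_b")
ROTATION_MONTHS = 6

def active_user(today: date, epoch: date = date(2020, 1, 1)) -> str:
    """Return which of the two users is currently active."""
    months = (today.year - epoch.year) * 12 + (today.month - epoch.month)
    return USERS[(months // ROTATION_MONTHS) % 2]
```

With Vault's static roles, the same effect is achieved for you: Vault rotates the password on schedule and the application simply reads the latest credential from the endpoint.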

Google CloudSQL - How to remove the default postgres user?

When I create a new database instance in Google CloudSQL, it creates a default user called postgres. I created another user and when I tried to remove the default postgres user I received a message: Can not remove a System user.
Some months ago I could remove the default user without problems. Did Google change anything in Cloud SQL? How can I remove the default user?
The postgres user is part of the cloudsqlsuperuser role. Because Cloud SQL for PostgreSQL is a managed service, it restricts access to certain system procedures and tables that require advanced privileges. In Cloud SQL, customers cannot create or have access to users with superuser attributes, including the postgres user. This is documented on the PostgreSQL users documentation page.

How can I limit the access to a database in google cloud import interface?

I have a Google Cloud SQL server running MySQL that is used by users with low technical knowledge to import CSVs into a MySQL database. They use the Import function built into Cloud SQL. However, there are several databases, and I would like to limit each user's access to a single database.
Here is the menu that I refer to: https://i.imgur.com/LyX7Wbk.png
I already tried assigning an IAM role with less access, but everything except SQL Admin greys out the Import option. SQL Admin gives complete access, enough even to delete the instance, so it's definitely not an option.
Any help would be greatly appreciated.
You can't. IAM roles control access to GCP components; your requirement here is to manage which users can access a specific database schema, which is the job of the DBA, not of IAM.
The only solution is to build something on top of the Google APIs to limit and control access.
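"Something on top of the Google APIs" could be as simple as a thin wrapper that checks a user-to-database allow-list before triggering an import. The sketch below is a hypothetical illustration: the mapping, user emails, and the `trigger_import` callable are all assumptions; a real version would call the Cloud SQL Admin API behind that callable.

```python
# Sketch of an allow-list wrapper in front of the import operation.
# ALLOWED, the emails, and trigger_import are hypothetical examples.

ALLOWED = {
    "alice@example.com": {"sales_db"},
    "bob@example.com": {"hr_db", "sales_db"},
}

def can_import(user: str, database: str) -> bool:
    """Is this user allowed to import into this database?"""
    return database in ALLOWED.get(user, set())

def guarded_import(user: str, database: str, csv_uri: str, trigger_import) -> None:
    """Refuse the import unless the allow-list permits it."""
    if not can_import(user, database):
        raise PermissionError(f"{user} may not import into {database}")
    trigger_import(database, csv_uri)
```

Users then get no Cloud SQL IAM permissions at all; only the wrapper's service account can run imports, so the allow-list is the single point of control.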

Backup users and roles WSO2 API Manager

I'm using WSO2 API Manager v2.2.0 as an API gateway, and I have a problem backing up users and roles. At the moment I'm using api-import-export-2.1.0, but it only backs up APIs. Any advice for my problem?
Users and roles are stored in the user database, so you need to back up your user database (or whatever other user store you use).
By default, WSO2 AM ships with an embedded H2 database, which is not really recommended (or suited) for production deployment; you should set up your own databases on any supported DB system.
If you still use the embedded H2 database, the user database file is located by default at repository/database/WSO2CARBON_DB.h2.db. However, I don't recommend backing up/copying the file while it is open (while wso2am is running).
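A backup along those lines can be sketched as a timestamped file copy. The default path comes from the answer above; the backup directory layout is an assumption, and, as the answer warns, this should only run while the server is stopped:

```python
import shutil
import time
from pathlib import Path

# Sketch: copy the embedded H2 user database to a timestamped backup.
# Only run this while WSO2 API Manager is stopped; copying the file
# while it is open risks a corrupt backup.

def backup_h2(db_file: str, backup_dir: str) -> Path:
    """Copy db_file into backup_dir with a timestamp in the name."""
    src = Path(db_file)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}.{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copies content and metadata
    return dest

# Example (path from the answer above):
# backup_h2("repository/database/WSO2CARBON_DB.h2.db", "backups/")
```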

Enabling bulk admin privilege in amazon rds?

I wanted to set up an RDS instance to store data for reporting. I have scripts that run different REST calls against certain sites; they require the bulk admin privilege on the back end because they dump their REST call data into a CSV and then do a bulk CSV insert into SQL Server SE. In my local environment, setting up a user with bulk admin privileges for my scripts was easy; however, I couldn't figure out how to do it in RDS. I opened a ticket with Amazon and they suggested writing a policy for it. So I figured I would ask here whether this is possible, and about possible alternatives. If bulk/system admin privileges are out of the question in RDS, I guess I will just have to use an AWS EC2 instance with SQL Server set up on it.
Bulk insert is not possible with RDS. The data_file parameter of the BULK INSERT command must refer to a file accessible by the machine running SQL Server.
Also, RDS does not support the bulkadmin server role.
Supported SQL Server Roles and Permissions
Importing and Exporting SQL Server Data
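One alternative that works without the bulkadmin role is to stream the CSV and load it with batched parameterized INSERTs. The sketch below uses sqlite3 purely as a stand-in for SQL Server (an assumption for the sake of a runnable example); with pyodbc against RDS the pattern is the same `executemany` loop:

```python
import csv
import io
import sqlite3

# Sketch: load a CSV with batched parameterized INSERTs instead of
# BULK INSERT (which RDS does not support). sqlite3 stands in for
# SQL Server here; the table name and batch size are examples.

def load_csv(conn, table: str, csv_text: str, batch_size: int = 500) -> int:
    """Insert all CSV rows into table in batches; return the row count."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    placeholders = ", ".join("?" for _ in header)
    sql = f"INSERT INTO {table} ({', '.join(header)}) VALUES ({placeholders})"
    total, batch = 0, []
    for row in reader:
        batch.append(row)
        if len(batch) >= batch_size:
            conn.executemany(sql, batch)
            total += len(batch)
            batch = []
    if batch:
        conn.executemany(sql, batch)
        total += len(batch)
    conn.commit()
    return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report (site TEXT, hits INTEGER)")
n = load_csv(conn, "report", "site,hits\na.com,10\nb.com,20\n")
```

It is slower than a true bulk load, but it needs only ordinary INSERT permission, which RDS grants freely.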