How to secure a service account key for an application that is NOT running on Google Cloud - google-cloud-platform

I have a request from users to be able to connect to my datasets and tables in BigQuery to fetch and manipulate the data programmatically outside of GCP.
The situation now is that I created a service account with credentials to view the data, and I share the JSON key of this service account with users by email.
I want to avoid users embedding the key inside their code.
What is the best way to securely share this key with them?

The best way to authenticate an application running outside Google Cloud is Workload Identity Federation. Although creating public/private key pairs is also a secure way to use and share a user-managed service account, a key can still pose a security risk if it is not managed correctly.
Walk through this documentation and use IAM external identities to impersonate a service account: you avoid the security issues around service account keys entirely, because there are no keys to maintain.
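As a rough sketch of the federation setup (all names and the issuer URL below are placeholders, not from the question; adapt the attribute mapping to your identity provider):

```shell
# Create a workload identity pool and an OIDC provider for the external
# identity provider your users authenticate with.
gcloud iam workload-identity-pools create my-pool \
    --location="global" \
    --display-name="External BigQuery readers"

gcloud iam workload-identity-pools providers create-oidc my-provider \
    --location="global" \
    --workload-identity-pool="my-pool" \
    --issuer-uri="https://idp.example.com/" \
    --attribute-mapping="google.subject=assertion.sub"

# Allow identities from the pool to impersonate the viewer service
# account, instead of handing out its JSON key by email.
gcloud iam service-accounts add-iam-policy-binding \
    bq-viewer@my-project.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="principalSet://iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/my-pool/*"
```

Users then exchange their own identity provider's token for short-lived Google credentials at runtime, so no long-lived key ever leaves your project.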

Related

Amazon Marketplace Web Services in Azure Data Factory - Error multiple values AWSAccessKeyId?

We are struggling to connect with Azure Data Factory to Amazon Marketplace Web Services.
It seems that we have all information required, however, we are getting the following error:
Parameter AWSAccessKeyId cannot have multiple values.
All data seems to be correct. However, we find it strange that an Access Key ID and Secret Access Key are needed to connect to the Marketplace Web Services. Both keys come from the AWS environment, which is currently not connected to anything.
Any help is appreciated.
Kind regards,
Jens
Yes, you need an Access Key ID and Secret Access Key when creating the Amazon Marketplace Web Service linked service in Azure Data Factory. There should only be one access key assigned per user in AWS Marketplace. Apart from these, other properties are also required; some of them are mandatory and others are not.
To allow people in your company to sign in to the AWS Marketplace Management Portal, create an IAM user for each person who needs access.
To create IAM users
Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Users and then choose Create New Users.
In the numbered text boxes, enter a name for each user that you want to create.
Clear the Generate an access key for each user check box and then choose Create.
You will then pass this access key in the linked service in ADF.
Also, for better security, you can store the Secret Access Key in Azure Key Vault and use an Azure Key Vault linked service to access it. Refer to Store credentials in Azure Key Vault.
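As a rough sketch (names are placeholders, and the property names follow the ADF Amazon Marketplace Web Service connector schema), the linked service definition can reference the secret from Key Vault instead of embedding it:

```json
{
  "name": "AmazonMWSLinkedService",
  "properties": {
    "type": "AmazonMWS",
    "typeProperties": {
      "endpoint": "mws.amazonservices.com",
      "marketplaceID": "<marketplace-id>",
      "sellerID": "<seller-id>",
      "accessKeyId": "<access-key-id>",
      "secretKey": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "AzureKeyVaultLinkedService",
          "type": "LinkedServiceReference"
        },
        "secretName": "mws-secret-key"
      }
    }
  }
}
```

Note that `accessKeyId` appears exactly once here; supplying it both inline and via a parameter is one way to end up with the "cannot have multiple values" error.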

How an app deployed on GKE can deploy other app on same GCP project without authentication

I have a java application that is deployed on GKE cluster. Let's call it the "orchestrator"
The application should be able to deploy other applications in the same GCP project where the "orchestrator" app is running (the same GKE cluster or a different one), using Helm CLI commands.
We were able to do that using Google Service Account authentication, where the JSON key is provided to the "orchestrator" and we could use it to generate tokens.
My question is: since both the "orchestrator" and the other apps are running in the same GCP project (sometimes in the same GKE cluster), is there a way to use some default credentials auto-discovered by GCP, instead of generating and providing a service account JSON key to the "orchestrator" app?
That way, the customer won't need to expose this key to our system, and authentication will happen behind the scenes, without our app's intervention.
Is there something a GCP admin can do which make this use case work seamlessly?
I will elaborate on my comment.
When you use a service account, you have to use keys to authenticate: each service account is associated with a public/private RSA key pair. Since you are working on a GKE cluster, did you consider using Workload Identity, as mentioned in Best practices for using and managing SA?
According to Best practices for using and managing service accounts, all non-human accounts should be represented by service accounts:
Service accounts represent non-human users. They're intended for scenarios where a workload, such as a custom application, needs to access resources or perform actions without end-user involvement.
So in general, whenever you want to grant permissions to applications, you should use a service account.
In Types of keys for service accounts you can find the information that every service account is associated with an RSA key pair:
Each service account is associated with a public/private RSA key pair. The Service Account Credentials API uses this internal key pair to create short-lived service account credentials, and to sign blobs and JSON Web Tokens (JWTs). This key pair is known as the Google-managed key pair.
In addition, you can create multiple public/private RSA key pairs, known as user-managed key pairs, and use the private key to authenticate with Google APIs. This private key is known as a service account key.
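To make the distinction concrete, here is what a user-managed key file actually contains, sketched with placeholder values (this is not a real key):

```shell
# A user-managed service account key is just a JSON file. Anyone holding
# "private_key" can authenticate as "client_email", which is why these
# files must be protected or, better, avoided via Workload Identity.
cat <<'EOF' > sa-key-example.json
{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "0123abcd",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "my-sa@my-project.iam.gserviceaccount.com",
  "token_uri": "https://oauth2.googleapis.com/token"
}
EOF

# The client_email is the identity the key authenticates as.
python3 -c "import json; print(json.load(open('sa-key-example.json'))['client_email'])"
```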
You could also think about Workload Identity, but I am not sure if this would fulfill your needs as there are still many unknowns about your environment.
As additional information: there used to be something called Basic Authentication, which could have been an option for you, but for security reasons it is no longer supported as of GKE 1.19. This was mentioned in another Stack Overflow case: We have discouraged Basic authentication in Google Kubernetes Engine (GKE).
To sum up:
The best practice for granting permissions to non-human accounts is to use a service account. Each service account is associated with a Google-managed RSA key pair, and you can create multiple user-managed keys in addition.
Good Practice is also to use Workload Identity if you have this option, but due to lack of details it is hard to determine if this would work in your scenario.
Additional links:
Authenticating to the Kubernetes API server
Use the Default Service Account to access the API server
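The Workload Identity option mentioned above can be sketched roughly as follows (cluster, namespace, and account names are placeholders; see the GKE Workload Identity docs for the full prerequisites):

```shell
# Enable Workload Identity on an existing cluster.
gcloud container clusters update my-cluster \
    --region=us-central1 \
    --workload-pool=my-project.svc.id.goog

# Allow the Kubernetes service account used by the "orchestrator" pods
# to impersonate a Google service account, so pods obtain tokens
# without any JSON key file.
gcloud iam service-accounts add-iam-policy-binding \
    orchestrator@my-project.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="serviceAccount:my-project.svc.id.goog[default/orchestrator-ksa]"

# Annotate the Kubernetes service account with its Google counterpart.
kubectl annotate serviceaccount orchestrator-ksa \
    --namespace default \
    iam.gke.io/gcp-service-account=orchestrator@my-project.iam.gserviceaccount.com
```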
One way to achieve this is to use the default credentials approach mentioned in Finding credentials automatically. Instead of exposing the service account key to our app, the GCP admin can attach the same service account to the GKE cluster resource, and the default credentials mechanism will use that service account's credentials to access APIs and resources (depending on the service account's roles and permissions).
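Attaching the service account can be sketched as follows (names are placeholders): specify it when creating the node pool, and workloads on those nodes pick it up automatically via Application Default Credentials.

```shell
# Create a node pool whose nodes run as the orchestrator service account.
# Client libraries on these nodes find the credentials automatically,
# with no key file involved.
gcloud container node-pools create orchestrator-pool \
    --cluster=my-cluster \
    --region=us-central1 \
    --service-account=orchestrator@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform
```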

Rotate service accounts with Vault in GCP

I'm in process of implementing Vault in my organization. We run our services on GCP on compute engine instances as docker containers.
Each compute node can run multiple services, and hence we use JSON service account keys to authenticate against other Google services (Dataproc, Google Cloud Storage, etc.).
One of the challenges we are facing right now is that we generate these JSON keys using Terraform, and they are baked into the machines when the infrastructure is provisioned.
Once provisioned, these keys live on forever, which is a bad way of handling keys: if any key gets compromised, we are at high risk.
To reduce the attack surface, we are planning to put key rotation in place, for which we are looking into Vault. Vault will also help us centralize secrets (instead of keeping them in GitLab variables) and issue dynamic database credentials for MySQL.
From reading Vault's documentation, I understand the architecture as follows:
You authenticate with vault using a service account.
Based on membership of the service account in a group you have different policies assigned to you.
Those policies have role-sets based on which ephemeral service accounts are generated.
You use the ephemeral service account which has a lease and can be revoked centrally.
Now, from what I understand, you need a service account to authenticate with Vault so that you can get a service account from Vault. This seems to me like a chicken-and-egg problem:
I want a service account from Vault, but to get it I need a service account to authenticate.
So how will I get my first service account? Let's say I bake in the first service account via Terraform; I couldn't find a way to rotate it.
Am I missing something in my understanding of Vault ?
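The four steps above can be sketched with the Vault CLI, assuming the GCP auth method and GCP secrets engine are mounted at their default paths (role and roleset names here are placeholders):

```shell
# 1. Authenticate to Vault. With the GCP auth method's "gce" login type,
#    a compute instance proves its identity using a signed token from the
#    metadata server -- no long-lived service account key is needed to
#    bootstrap, which is one common answer to the chicken-and-egg problem.
vault login -method=gcp role=my-node-role

# 2-4. Policies attached to that role permit reading a roleset; Vault then
#      returns a short-lived, leased, centrally revocable service account
#      key generated from that roleset.
vault read gcp/roleset/my-roleset/key
```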

Share Google group permissions with GCP service account

A Google group of which I'm a Manager has been granted certain permissions to access certain BigQuery tables. Effectively, all users in the group can access those tables using their personal credentials.
I would like to share those permissions with a service account and access the tables using service account credentials.
Is this possible? How to configure it?
A service account is generally used for server-to-server communication (between applications). With that in mind, a service account has an associated email address, just like the ones associated with your personnel. So you can assign roles/permissions to a service account using its email, just as you assigned them to your group.
I hope that the following steps help you in some manner:
Create a service account.
Assign predefined BigQuery roles (Admin, DataEditor, User, etc).
Download its JSON key file, which contains the credentials.
Use those credentials to authenticate and authorize your application.
To grant a specific permission (owner, editor, or viewer) on a specific dataset, you can use the service account's email.
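The steps above can be sketched with gcloud (project and account names are placeholders):

```shell
# Create the service account.
gcloud iam service-accounts create bq-reader \
    --display-name="BigQuery reader"

# Assign a predefined BigQuery role at the project level.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:bq-reader@my-project.iam.gserviceaccount.com" \
    --role="roles/bigquery.dataViewer"

# Download the JSON key file the application will authenticate with.
gcloud iam service-accounts keys create key.json \
    --iam-account=bq-reader@my-project.iam.gserviceaccount.com
```

For dataset-level access instead of a project-wide role, add the service account's email to the dataset's access controls, just as you would add a user or group.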

Amazon Redshift single sign-on or service account approach

Does anybody know if it is possible to access Amazon Redshift via single sign-on or service accounts? Our specific need is to map domain users to Redshift users and then grant access to specific objects to these mapped users, so that if a user wants to query Redshift via some SQL client or some Excel connector (for example), he can use his domain credentials without having to store or type passwords in every connector. I know of AWS Identity and Access Management (IAM), but from my understanding this works only for SSO to the Management Console; am I right?
AWS has recently announced support for federated authentication with single sign-on for Redshift.
Using IAM Authentication to Generate Database User Credentials for AWS Redshift
I am currently trying to implement it and will update this answer once I am done with the setup.
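As a rough sketch of what that setup enables (cluster, user, and database names below are placeholders): once federation is configured, clients can generate temporary database credentials through IAM instead of storing a Redshift password in every connector.

```shell
# Request short-lived database credentials for a mapped user; the caller's
# IAM identity (e.g. assumed via the federated role) must be allowed the
# redshift:GetClusterCredentials action on this cluster.
aws redshift get-cluster-credentials \
    --cluster-identifier my-cluster \
    --db-user analyst \
    --db-name analytics \
    --duration-seconds 900
```

The returned temporary password is then used in the SQL client's connection settings in place of a stored one.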