I'm in the process of implementing Vault in my organization. We run our services as Docker containers on Compute Engine instances in GCP.
Each compute node can run multiple services, so we use JSON service account keys to authenticate against other Google services (Dataproc, Google Cloud Storage, etc.).
One of the challenges we are facing right now is that we generate these JSON keys with Terraform and bake them into the machines when the infrastructure is provisioned.
Once provisioned, these keys live on forever, which is a bad way to handle keys: if any key is compromised, we are at high risk.
To reduce the attack surface, we are planning to put key rotation in place, for which we are looking into Vault. Vault will also help us centralize secrets (instead of keeping them in GitLab variables) and issue dynamic database credentials for MySQL.
From reading Vault's documentation, the architecture is as follows:
You authenticate with Vault using a service account.
Based on the service account's membership in a group, you have different policies assigned to you.
Those policies reference role sets, from which ephemeral service accounts are generated.
You use the ephemeral service account, which has a lease and can be revoked centrally.
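To illustrate, the role-set step might be configured roughly like this with Vault's GCP secrets engine (the project, role set, and role names here are hypothetical examples, not our actual setup):

```shell
# Enable the GCP secrets engine (a sketch; paths and names are examples)
vault secrets enable gcp

# Define a role set that generates ephemeral service account keys
vault write gcp/roleset/my-key-roleset \
    project="my-project" \
    secret_type="service_account_key" \
    bindings='resource "//cloudresourcemanager.googleapis.com/projects/my-project" {
      roles = ["roles/storage.objectViewer"]
    }'

# Each read returns a new, leased service account key that Vault can revoke
vault read gcp/key/my-key-roleset
```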
Now, from what I understand, you need a service account to authenticate with Vault so that you can get a service account from Vault. This seems like a chicken-and-egg problem.
I want a service account from Vault, but to get it I need a service account to authenticate.
So how will I get my first service account? Let's say I bake in the first service account via Terraform; I couldn't find a way to rotate it.
Am I missing something in my understanding of Vault?
I have a request from users to be able to connect to my datasets and tables in BigQuery to fetch the data and manipulate it programmatically outside of GCP.
The situation now is that I created a service account with credentials to view the data, and I share the JSON key of this service account with the users by email.
I want to avoid users having to use the key inside their code.
What is the best way to securely share this key with them?
The best way to share access from outside Google Cloud is through Workload Identity Federation. Although creating public/private key pairs is also a secure way to use and share your user-managed service account, it can still pose a threat and security risk if not correctly managed.
Just run through this documentation and use IAM external identities to impersonate a service account, which avoids any security issues with service account keys without you even maintaining them.
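For illustration, the federation setup looks roughly like this with gcloud (the pool, provider, issuer, and account names are hypothetical, and the provider flags depend on your external identity source):

```shell
# Create a workload identity pool and an OIDC provider for the external IdP
gcloud iam workload-identity-pools create my-pool --location="global"
gcloud iam workload-identity-pools providers create-oidc my-provider \
    --location="global" \
    --workload-identity-pool="my-pool" \
    --issuer-uri="https://idp.example.com" \
    --attribute-mapping="google.subject=assertion.sub"

# Allow a federated identity to impersonate the BigQuery-reader service account
gcloud iam service-accounts add-iam-policy-binding \
    bq-reader@my-project.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/my-pool/subject/some-user"
```

Users then exchange their own credentials for short-lived GCP tokens, so no JSON key ever leaves your project.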
I have a java application that is deployed on GKE cluster. Let's call it the "orchestrator"
The application should be able to deploy other applications in the same GCP project where the "orchestrator" app is running (the same GKE cluster or a different one), using Helm CLI commands.
We were able to do that using Google service account authentication, where a JSON key is provided to the "orchestrator" and we can use it to generate tokens.
My question is: since both the "orchestrator" and the other apps run in the same GCP project (sometimes in the same GKE cluster), is there a way to use default credentials auto-discovered by GCP, instead of generating and providing a service account JSON key to the "orchestrator" app?
That way, the customer won't need to expose this key to our system, and authentication will happen behind the scenes, without our app's intervention.
Is there something a GCP admin can do to make this use case work seamlessly?
I will elaborate on my comment.
When you use a service account, you have to use keys to authenticate: each service account is associated with a public/private RSA key pair. As you are working on a GKE cluster, did you consider using Workload Identity, as mentioned in Best practices for using and managing SA?
According to Best practices for using and managing service accounts, all non-human accounts should be represented by a service account:
Service accounts represent non-human users. They're intended for scenarios where a workload, such as a custom application, needs to access resources or perform actions without end-user involvement.
So in general, whenever you want to grant permissions to applications, you should use a service account.
In Types of keys for service accounts you can find the information that every service account needs an RSA key pair:
Each service account is associated with a public/private RSA key pair. The Service Account Credentials API uses this internal key pair to create short-lived service account credentials, and to sign blobs and JSON Web Tokens (JWTs). This key pair is known as the Google-managed key pair.
In addition, you can create multiple public/private RSA key pairs, known as user-managed key pairs, and use the private key to authenticate with Google APIs. This private key is known as a service account key.
You could also think about Workload Identity, but I am not sure whether it would fulfill your needs, as there are still many unknowns about your environment.
As additional information, there was something called Basic Authentication which could have been an option for you, but for security reasons it has not been supported since GKE 1.19. This was mentioned in another Stack case: We have discouraged Basic authentication in Google Kubernetes Engine (GKE).
To sum up:
The best practice for granting permissions to non-human accounts is to use a service account. Each service account requires an RSA key pair, and you can create multiple user-managed keys.
It is also good practice to use Workload Identity if you have the option, but due to the lack of details it is hard to determine whether it would work in your scenario.
Additional links:
Authenticating to the Kubernetes API server
Use the Default Service Account to access the API server
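If Workload Identity does fit your environment, the wiring is roughly the following (the cluster, namespace, and account names are hypothetical):

```shell
# Allow the Kubernetes service account to impersonate a Google service account
gcloud iam service-accounts add-iam-policy-binding \
    orchestrator@my-project.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="serviceAccount:my-project.svc.id.goog[default/orchestrator-ksa]"

# Annotate the Kubernetes service account so pods using it get the GSA's identity
kubectl annotate serviceaccount orchestrator-ksa \
    --namespace default \
    iam.gke.io/gcp-service-account=orchestrator@my-project.iam.gserviceaccount.com
```

Pods running under `orchestrator-ksa` then call Google APIs as the GSA with no JSON key mounted anywhere.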
One way to achieve that is to use the default credentials approach mentioned here:
Finding credentials automatically. Instead of exposing the SA key to our app, the GCP admin can attach the same SA to the GKE cluster resource (see attached screenshot), and the default credentials mechanism will use that SA's credentials to access the APIs and resources (depending on the SA's roles and permissions).
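As a sketch (cluster and account names hypothetical), the admin attaches the SA at cluster creation time, and Application Default Credentials then pick it up with no key file involved:

```shell
# Run the cluster's nodes as the orchestrator's service account
gcloud container clusters create my-cluster \
    --service-account=orchestrator@my-project.iam.gserviceaccount.com

# Inside a pod, client libraries find these credentials automatically;
# e.g. GoogleCredentials.getApplicationDefault() in Java needs no JSON key.
```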
I created a service account mycustomsa@myproject.iam.gserviceaccount.com.
Following the GCP best practices, I would like to use it in order to run a GCE VM named instance-1 (not yet created).
This VM has to be able to write logs and metrics for Stackdriver.
I identified:
roles/monitoring.metricWriter
roles/logging.logWriter
However:
Do you advise any additional role I should use (e.g., instance admin)?
How should I set up the IAM policy binding at the project level to restrict the usage of this service account to just GCE and instance-1?
For writing logs and metrics to Stackdriver, those roles are appropriate; beyond that, you need to define what kinds of activities the instance will be doing. However, as John pointed out in his comment, using a conditional role binding might be useful, as these can be added to new or existing IAM policies to further control access to Google Cloud resources.
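A sketch of both ideas with gcloud (the member, role, and condition values are examples; check which services honor `resource.name` conditions before relying on them):

```shell
# Conditional binding: grant a role only when the accessed resource is instance-1
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:mycustomsa@myproject.iam.gserviceaccount.com" \
    --role="roles/compute.instanceAdmin.v1" \
    --condition='title=only-instance-1,expression=resource.name.endsWith("/instances/instance-1")'

# Restrict who may attach or act as this service account
gcloud iam service-accounts add-iam-policy-binding \
    mycustomsa@myproject.iam.gserviceaccount.com \
    --member="user:admin@example.com" \
    --role="roles/iam.serviceAccountUser"
```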
As for best practices on SAs, I would recommend making the SA as secure as possible with the following:
-Specify who can act as the service account. Users who are service account users for a service account can indirectly access all the resources the service account has access to. Therefore, be cautious when granting the serviceAccountUser role to a user.
-Grant the service account only the minimum set of permissions required to achieve its goal. Learn about granting roles to all types of members, including service accounts.
-Create service accounts for each service with only the permissions required for that service.
-Use the display name of a service account to keep track of the service accounts. When you create a service account, populate its display name with the purpose of the service account.
-Define a naming convention for your service accounts.
-Implement processes to automate the rotation of user-managed service account keys.
-Take advantage of the IAM service account API to implement key rotation.
-Audit service accounts and keys using either the serviceAccount.keys.list() method or the Logs Viewer page in the console.
-Do not delete service accounts that are in use by running instances on App Engine or Compute Engine unless you want those applications to lose access to the service account.
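The rotation and audit points above can be scripted with gcloud, for example (the account name and key ID are placeholders):

```shell
SA=app-sa@my-project.iam.gserviceaccount.com

# Audit: list existing user-managed keys and their creation dates
gcloud iam service-accounts keys list --iam-account="$SA"

# Rotate: create a new key, roll it out to consumers, then delete the old one
gcloud iam service-accounts keys create new-key.json --iam-account="$SA"
gcloud iam service-accounts keys delete OLD_KEY_ID --iam-account="$SA"
```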
Does anyone know in which cases to choose Kubernetes Secrets instead of Google Secret Manager, and vice versa? What are the differences between the two?
With Kubernetes Secrets (K8S Secrets), you use a built-in feature of K8S. You load your secrets into Secret objects and mount them on the pods that require them.
PRO
If one day you want to deploy on AWS, Azure, or on-prem, still on K8S, the behavior will be the same: no updates to perform in your code.
CONS
The secrets are only accessible from inside the K8S cluster; it is impossible to reuse them with other GCP services.
Note: With GKE there is no problem: the etcd component is automatically encrypted with a key from the KMS service to keep the secrets encrypted at rest. But it's not the same for every K8S installation, especially on-premise, where the secrets may be kept in plain text. Be aware of this aspect of security.
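For example, a secret can be created and inspected like this (the names are illustrative):

```shell
# Create a Secret object in the cluster
kubectl create secret generic db-creds \
    --from-literal=DB_PASSWORD='s3cr3t'

# Pods then reference it via env.valueFrom.secretKeyRef or a volume mount
kubectl get secret db-creds -o yaml
```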
Secret Manager is a vault managed by Google. You have APIs to read and write secrets, and the IAM service checks authorization.
PRO
It's a Google Cloud service, and you can access it from any GCP service (Compute Engine, Cloud Run, App Engine, Cloud Functions, GKE, ...) as long as you are authorized.
CONS
It's a Google Cloud-specific product; you are locked in.
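A minimal Secret Manager round trip with gcloud looks like this (the secret name is an example):

```shell
# Create a secret and add a version
gcloud secrets create db-password --replication-policy="automatic"
echo -n 's3cr3t' | gcloud secrets versions add db-password --data-file=-

# Any authorized GCP workload can then read it through the API
gcloud secrets versions access latest --secret=db-password
```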
You can use them together via this sync service: https://external-secrets.io/
From AWS docs:
When to Create an IAM User (Instead of a Role)
...
You want to use the command-line interface (CLI) to work with AWS.
When to Create an IAM Role (Instead of a User)
- You're creating an application that runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance and that application makes requests to AWS.
- You're creating an app that runs on a mobile phone and that makes requests to AWS.
- Users in your company are authenticated in your corporate network and want to be able to use AWS without having to sign in again—that is, you want to allow users to federate into AWS.
But it seems like companies heavily use roles for everything:
Roles for groups, by creating roles with specific policies and creating custom policies to apply to groups.
Assuming a role to use the CLI.
Switching roles to use different accounts.
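The CLI cases above can be sketched as follows (the ARN, profile, and session names are hypothetical):

```shell
# One-off: exchange your current credentials for temporary role credentials
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/my-role \
    --role-session-name my-session

# Or configure a profile in ~/.aws/config that assumes the role automatically:
# [profile my-role]
# role_arn = arn:aws:iam::123456789012:role/my-role
# source_profile = default
aws s3 ls --profile my-role
```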
Is that excessive, or a real-world solution?
Is that excessive, or a real-world solution?
Based on my own experience with AWS, heavily using roles is a real-world solution because, in my company, we use only roles to give users access (yes, we have 0 users registered in our AWS environments). I'll list the reasons why we chose this approach:
We are using AWS Control Tower.
This service enables AWS Organizations with at least 3 AWS accounts to manage your organization. It would be a mess if we had to create a user for each AWS account. Also, AWS Control Tower enables AWS Single Sign-On.
We're using AWS Single Sign-On.
This service correlates multiple AWS accounts with multiple roles and multiple users. Description:
AWS Single Sign-On (SSO) is a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business applications. With just a few clicks, you can enable a highly available SSO service without the upfront investment and on-going maintenance costs of operating your own SSO infrastructure. With AWS SSO, you can easily manage SSO access and user permissions to all of your accounts in AWS Organizations centrally. AWS SSO also includes built-in SAML integrations to many business applications, such as Salesforce, Box, and Office 365. Further, by using the AWS SSO application configuration wizard, you can create Security Assertion Markup Language (SAML) 2.0 integrations and extend SSO access to any of your SAML-enabled applications. Your users simply sign in to a user portal with credentials they configure in AWS SSO or using their existing corporate credentials to access all their assigned accounts and applications from one place.
Please check out some of the features offered by this service. There are a lot of benefits to using roles instead of users. From my point of view, with AWS SSO, AWS itself facilitates the use of roles.
The only disadvantage I found is that every time I need to use the AWS CLI, I have to access the AWS SSO portal, copy the credentials, and paste them into my terminal, because the credentials expire after some time. But in the end, this disadvantage is small compared to the security this process offers: if my computer is stolen, the AWS CLI can't be used because the credentials will have expired.
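As a side note, AWS CLI v2 can drive the SSO credential flow itself, which may remove the copy-and-paste step (the profile name is hypothetical):

```shell
# One-time interactive setup of an SSO profile
aws configure sso

# Re-authenticate through the browser when credentials expire
aws sso login --profile my-sso-profile
aws s3 ls --profile my-sso-profile
```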