GKE Secrets or Google Secret Manager - google-cloud-platform

Does anyone know in which cases to choose Kubernetes Secrets instead of Google Secret Manager, and vice versa? What are the differences between the two?

With Kubernetes Secrets (K8s Secrets), you use a built-in feature of K8s. You load your secrets into Secret objects (a dedicated resource, similar to ConfigMaps but intended for sensitive data) and mount them on the pods that require them; a short sketch follows the note below.
PRO
If one day you want to deploy on AWS, Azure, or on-premises, still on K8s, the behavior will be the same, with no updates to perform in your code.
CONS
The secrets are only accessible from within the K8s cluster; it's impossible to reuse them with other GCP services.
Note: with GKE this is not a problem, because the etcd component is automatically encrypted with a key from the KMS service, keeping the secrets encrypted at rest. But it's not the same for every K8s installation, especially on-premises, where secrets may be kept in plain text. Be aware of this part of the security.
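For illustration, here is a minimal sketch of creating and reading a K8s Secret with the official kubernetes Python client (the secret name, key, and namespace are hypothetical):

```python
import base64

from kubernetes import client, config

# Load kubeconfig from the local machine; inside a pod you would call
# config.load_incluster_config() instead.
config.load_kube_config()
v1 = client.CoreV1Api()

# Create a Secret; string_data lets the API server handle base64 encoding.
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="api-credentials"),  # hypothetical name
    string_data={"api-key": "s3cr3t-value"},
)
v1.create_namespaced_secret(namespace="default", body=secret)

# Read it back; values in .data come back base64-encoded.
stored = v1.read_namespaced_secret(name="api-credentials", namespace="default")
print(base64.b64decode(stored.data["api-key"]).decode())
```

In practice a pod would consume the same Secret as a mounted volume or environment variable rather than through the API.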
Secret Manager is a vault managed by Google. You have APIs to read and write secrets, and the IAM service checks the authorization (see the sketch below).
PRO
It's a Google Cloud service, and you can access it from any GCP service (Compute Engine, Cloud Run, App Engine, Cloud Functions, GKE, ...) as long as you are authorized to.
CONS
It's a Google Cloud-specific product, so you are locked in.
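As a sketch of what that access looks like in code, here is reading a secret version with the google-cloud-secret-manager Python client (the project and secret IDs are hypothetical):

```python
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()

# Fully qualified name of the version to read; "latest" is an alias for
# the most recently added secret version.
name = "projects/my-project/secrets/db-password/versions/latest"

response = client.access_secret_version(request={"name": name})
print(response.payload.data.decode("UTF-8"))
```

The same call works from Compute Engine, Cloud Run, Cloud Functions, or GKE, as long as the caller's identity has the secretmanager.secretAccessor role.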

You can use them together via this sync service: https://external-secrets.io/

Related

How an app deployed on GKE can deploy other apps in the same GCP project without authentication

I have a Java application that is deployed on a GKE cluster. Let's call it the "orchestrator".
The application should be able to deploy other applications in the same GCP project where the "orchestrator" app is running (on the same GKE cluster or a different one), using helm CLI commands.
We were able to do that using Google Service Account authentication, where a JSON key is provided to the "orchestrator" and used to generate tokens.
My question is: since both the "orchestrator" and the other apps are running in the same GCP project (sometimes on the same GKE cluster), is there a way to use default credentials auto-discovered by GCP, instead of generating and providing a Service Account JSON key to the "orchestrator" app?
That way, the customer won't need to expose this key to our system, and the authentication will happen behind the scenes, without our app's intervention.
Is there something a GCP admin can do to make this use case work seamlessly?
I will elaborate on my comment.
When you are using a Service Account, you have to use keys to authenticate: each service account is associated with a public/private RSA key pair. As you are working on a GKE cluster, did you consider using Workload Identity, as mentioned in Best practices for using and managing service accounts?
According to Best practices for using and managing service accounts, all non-human accounts should be represented by service accounts:
Service accounts represent non-human users. They're intended for scenarios where a workload, such as a custom application, needs to access resources or perform actions without end-user involvement.
So in general, whenever you want to grant some permissions to applications, you should use a Service Account.
In Types of keys for service accounts you can find the information that every Service Account needs an RSA key pair:
Each service account is associated with a public/private RSA key pair. The Service Account Credentials API uses this internal key pair to create short-lived service account credentials, and to sign blobs and JSON Web Tokens (JWTs). This key pair is known as the Google-managed key pair.
In addition, you can create multiple public/private RSA key pairs, known as user-managed key pairs, and use the private key to authenticate with Google APIs. This private key is known as a service account key.
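To make the short-lived-credentials part concrete, here is a hedged sketch of impersonating a service account with the google-auth library, which calls the Service Account Credentials API under the hood (the target principal is hypothetical):

```python
import google.auth
from google.auth import impersonated_credentials

# Start from whatever credentials the environment already provides (ADC).
source_credentials, _ = google.auth.default()

# Mint short-lived credentials for the target service account; no
# downloaded key file for the target account is involved.
target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal="deployer@my-project.iam.gserviceaccount.com",  # hypothetical
    target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    lifetime=3600,  # seconds
)
```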
You could also think about Workload Identity, but I am not sure if this would fulfill your needs as there are still many unknowns about your environment.
Just as additional information, there was something called Basic Authentication which could have been an option for you, but for security reasons it has not been supported since GKE 1.19. This was mentioned in another Stack case: We have discouraged Basic authentication in Google Kubernetes Engine (GKE).
To sum up:
The best practice for granting permissions to non-human accounts is to use a Service Account. Each service account is associated with a pair of RSA keys, and you can create multiple user-managed keys.
It is also good practice to use Workload Identity if you have that option, but due to the lack of details it is hard to determine whether it would work in your scenario.
Additional links:
Authenticating to the Kubernetes API server
Use the Default Service Account to access the API server
One way to achieve that is to use the default credentials approach mentioned here:
Finding credentials automatically. Instead of exposing the SA key to our app, the GCP admin can attach the same SA to the GKE cluster resource, and the default credentials mechanism will use that SA's credentials to access the APIs and resources (depending on the SA's roles and permissions).
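As a minimal sketch of what the app-side code then looks like (the scope is illustrative; nothing here is specific to the orchestrator):

```python
import google.auth
from google.auth.transport.requests import Request

# ADC discovers the service account attached to the GKE node (or bound via
# Workload Identity); no JSON key file is read anywhere.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)

# Refresh to obtain a short-lived OAuth2 access token, which can then be
# handed to tools such as the helm CLI when talking to GKE.
credentials.refresh(Request())
print(credentials.token)
```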

How to store an AWS Access Key and Secret Key securely in a .NET Core API

In a work environment with several AWS environments, say for example development, staging, and production, what is the best way to store the AWS Access Key and Secret Key in .NET Core, other than in the appsettings.json files? I know there is Secrets Manager, but I'm not sure whether that is the best way to store these two values. I'm looking for someone who may have done this specifically for production and can share how they handled it within their organization. Thanks for any information.
I believe that your application is running outside of AWS and that it needs to make API calls to AWS services, for example SQS. To make those API calls, your application needs AWS credentials.
Here are some approaches for authenticating external applications in a machine-to-machine scenario. In your case, your client seems to need to make arbitrary AWS service requests, and that means using AWS Signature v4 requests signed with AWS credentials, which should ideally be temporary, rotated credentials from STS rather than persistent credentials (such as IAM user credentials).
Typically, you would configure your application with a base set of IAM credentials that allow the application to assume an IAM role. That role itself, rather than the base credentials, would then give your application the permissions it needs to make SQS API calls, etc.
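A hedged sketch of that assume-role flow, shown with boto3 for brevity since the same STS operations exist in every AWS SDK, including the .NET one (the role ARN and names are hypothetical):

```python
import boto3

# The base credentials (from the environment or a credentials file) are
# only permitted to assume the role; the role grants the real permissions.
sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/app-worker-role",  # hypothetical
    RoleSessionName="app-session",
)
creds = assumed["Credentials"]  # temporary credentials that auto-expire

# Use the temporary credentials for the actual service calls, e.g. SQS.
sqs = boto3.client(
    "sqs",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```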
The issue you face is how to securely store the base set of credentials. This is a problem that on-premise applications have had since day one, well before the cloud era, and there are various solutions, depending on the technology you're using.
Typically these credentials would be encrypted, not committed to code repos, and populated on the relevant, locked down application servers in some secure fashion. Some potentially useful resources:
Encrypting sections of a configuration file for an ASP.NET application
Use AWS Secrets Manager to store & read passwords in .Net Core apps
Securely store and retrieve sensitive info in .NET Core apps with Azure Key Vault
AWS Secrets Manager securely stores your secrets until you retrieve them at runtime. If you're going to be running your ASP.NET Core app in AWS, then AWS Secrets Manager is a great option, as it allows you to finely control the permissions associated with the AWS IAM roles running your apps.
Here are some FAQs provided by AWS for the Secrets Manager service, which should also clear up your doubts.
Here is an article you can refer to for implementing secure secrets storage for .NET Core with AWS Secrets Manager.
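For reference, retrieving a secret at runtime looks roughly like this (sketched with boto3; the .NET AWSSDK exposes the same GetSecretValue operation, and the secret name is hypothetical):

```python
import boto3

client = boto3.client("secretsmanager", region_name="us-east-1")

# Fetch the secret at runtime; the permission to do so comes from the IAM
# role running the app, so no access keys live in appsettings.json.
response = client.get_secret_value(SecretId="prod/my-app/api-keys")  # hypothetical
secret_string = response["SecretString"]
```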

Accessing Google Secrets from an application running on a Google Cloud VM instance - Assigning Cloud APIs to VM

I'm using Google Secrets to store API keys and other application specific "secrets". When I deploy my application to a VM instance using a docker container, I want it to access the Google Secrets using the VM's associated service account.
I have gotten this working using the following:
Assigning the "Secret Manager Secret Accessor" permission to the Service Account.
Giving my VM access to all APIs:
From a security perspective and recommended best practice, I don't want to give access to all APIs. The default access option doesn't work and I can't figure out from the list which options to enable to allow access to Google Secrets from my VM.
TLDR - Which Cloud API(s) do I need to give my Google Compute VM instance access to so that it can access Google Secrets without giving it access to all of the Cloud APIs?
According to the Secret Manager documentation, you need the cloud-platform OAuth scope.
Note: To access the Secret Manager API from within a Compute Engine instance or a Google Kubernetes Engine node (which is also a Compute Engine instance), the instance must have the cloud-platform OAuth scope. For more information about access scopes in Compute Engine, see Service account permissions in the Compute Engine documentation.
I'm not sure you can set this scope in the web UI, though you can set it through the command line.
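One way to check which scopes a VM actually has is to ask the metadata server from inside the instance; a small sketch using the standard Compute Engine metadata endpoint:

```python
import requests

# Lists the OAuth scopes granted to the VM's default service account;
# "https://www.googleapis.com/auth/cloud-platform" must be present for
# Secret Manager access to work.
scopes = requests.get(
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/scopes",
    headers={"Metadata-Flavor": "Google"},
).text
print(scopes)
```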
Possible Alternative
What I do, rather than setting scopes on VMs, is create a service account specifically for the VMs (instead of using the default service account) and then give this service account access to specific resources (like the specific secrets it should have access to, rather than all of them). When you do this in the web UI, the access scopes disappear and you are instructed to use IAM roles to control VM access instead.
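That per-secret grant can also be scripted; a hedged sketch using the Secret Manager client's IAM helpers (the project, secret, and service account are hypothetical):

```python
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
resource = client.secret_path("my-project", "db-password")  # hypothetical

# Read-modify-write the IAM policy of this one secret, adding an accessor
# binding for the VM's service account instead of a project-wide grant.
policy = client.get_iam_policy(request={"resource": resource})
policy.bindings.add(
    role="roles/secretmanager.secretAccessor",
    members=["serviceAccount:vm-sa@my-project.iam.gserviceaccount.com"],
)
client.set_iam_policy(request={"resource": resource, "policy": policy})
```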

How to set credentials to use GCP APIs from a Dataproc instance

I am trying to access some credentials stored in Google Secret Manager. To access them, credentials need to be set up on the cluster machine where the jar is running.
I have SSHed into the master instance and seen that nothing is configured for GOOGLE_APPLICATION_CREDENTIALS.
I am curious to know how to assign GOOGLE_APPLICATION_CREDENTIALS, or any other alternative that allows the use of GCP APIs that require credentials.
If you are running on Dataproc clusters, the default GCE service account should already be configured for you. If your clusters are running outside the GCP environment, you want to follow these instructions to manually set up a service account that has an editor/owner role for Google Secret Manager, then download the credential key file and point GOOGLE_APPLICATION_CREDENTIALS at it.
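As a small sketch, the key file can also be loaded explicitly instead of going through the environment variable (the path is hypothetical; on a Dataproc VM inside GCP the default service account makes this unnecessary):

```python
from google.cloud import secretmanager
from google.oauth2 import service_account

# Load credentials from the downloaded key file directly, instead of relying
# on the GOOGLE_APPLICATION_CREDENTIALS environment variable.
credentials = service_account.Credentials.from_service_account_file(
    "/path/to/key.json"  # hypothetical path
)
client = secretmanager.SecretManagerServiceClient(credentials=credentials)
```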

Rotate service accounts with Vault in GCP

I'm in the process of implementing Vault in my organization. We run our services on GCP on Compute Engine instances as Docker containers.
Each compute node can run multiple services, and hence we use JSON service account keys to authenticate against other Google services (Dataproc, Google Cloud Storage, etc.).
One of the challenges we are facing right now is that we generate these JSON keys using Terraform, and they are baked into the machines when the infrastructure is provisioned.
Once provisioned, these keys live on forever, which is a bad way of handling keys: if any key gets compromised, we are at high risk.
To reduce the attack surface, we are planning to put key rotation in place, for which we are looking into Vault. Vault will also help us centralize secrets (instead of keeping them in GitLab variables) and provide dynamic database credentials for MySQL.
From reading Vault's documentation, Vault's architecture is as follows:
You authenticate with Vault using a service account.
Based on the membership of the service account in a group, you have different policies assigned to you.
Those policies have role-sets, based on which ephemeral service accounts are generated.
You use the ephemeral service account, which has a lease and can be revoked centrally.
Now, from what I understand, you need a service account to authenticate with Vault so that you can get a service account from Vault. This seems like a chicken-and-egg problem.
I want a service account from Vault, but to get one I need a service account to authenticate with.
So how will I get my first service account? Let's say I bake in the first service accounts via Terraform; I couldn't find a way to rotate those.
Am I missing something in my understanding of Vault?
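For what it's worth, here is a heavily hedged sketch of the flow described above using the hvac Python client, assuming Vault's GCP auth method is enabled with a GCE-type role: the VM's identity token from the metadata server serves as the bootstrap credential, so no key has to be baked in by Terraform (the Vault URL, role, and roleset names are hypothetical, and the token path follows the GCP secrets engine convention):

```python
import requests
import hvac

# Fetch a signed identity token for this VM from the metadata server; this
# plays the role of the bootstrap credential instead of a baked-in key.
jwt = requests.get(
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/identity",
    params={"audience": "vault/gce-role", "format": "full"},  # hypothetical role
    headers={"Metadata-Flavor": "Google"},
).text

# Authenticate to Vault with the GCP auth method, then request ephemeral
# service account credentials from a pre-configured roleset.
client = hvac.Client(url="https://vault.example.com:8200")  # hypothetical URL
client.auth.gcp.login(role="gce-role", jwt=jwt)
token = client.read("gcp/roleset/my-roleset/token")  # path per GCP secrets engine docs
```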