I have a CLI tool that interacts with Google KMS. In order for it to work, I fetch the user credentials as a JSON file which is stored on disk. Now a new requirement came along. I need to make a web app out of this CLI tool. The web app will be protected via Google Cloud IAP. Question is, how do I run the CLI tool on behalf of the authenticated user?
You don't. Use a service account instead and assign it the required role. If you truly need to act on behalf of users, a service account can be granted domain-wide delegation (which lets it impersonate any user in the domain, a known risk).
Running CLI tools from a web application should probably be avoided anyway. It might be better to convert the CLI tool into a Cloud Function and call it via an HTTP trigger from within the web application, so that access to the service account is limited as much as possible.
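As a sketch of the web-app side: calling the function's HTTP trigger is just an authenticated POST. The URL and token below are placeholders; in a real deployment the ID token would be minted for the function's URL (for example via the metadata server when the web app itself runs on GCP).

```python
import urllib.request

# Hypothetical Cloud Function trigger URL; not a real endpoint.
FUNCTION_URL = "https://us-central1-my-project.cloudfunctions.net/my-function"

def build_invoke_request(id_token: str, payload: bytes) -> urllib.request.Request:
    """Build an authenticated POST to the function's HTTP trigger.

    `id_token` is assumed to be a Google-signed ID token with the
    function URL as audience (placeholder here).
    """
    return urllib.request.Request(
        FUNCTION_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {id_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_invoke_request("fake-token-for-illustration", b'{"operation": "encrypt"}')
print(req.get_header("Authorization"))  # Bearer fake-token-for-illustration
```

Sending the request (for example with `urllib.request.urlopen`) then happens under the function's own service account identity, not the end user's credentials file.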
This might also be something to reconsider, security-wise:
I fetch the user credentials as a JSON file which is stored on disk.
Even if that was required for the CLI tool, with a service account it wouldn't be.
Related
I am the owner of a google project, and also one of the developers. At times I would like to give my local machine access to run code for various configuration, testing, and maintenance operations via google client libraries on my production environment, e.g.:
from google.cloud import storage
storage.Client()
...
There are two ways I've been doing this:
running gcloud auth application-default login, or setting GOOGLE_APPLICATION_CREDENTIALS after creating a service account and downloading its JSON private key.
Both make me nervous about accidentally running code that could damage my production environment.
I'm not sure how to give myself least privileges when I'm also the project Owner, and how to carefully turn on/off privileges. Perhaps I shouldn't be doing anything on Production from my local machine, and only running code in cloud instances that are more controlled?
What do people typically do to manage both development and production google projects? I'm leaning towards creating a service account that I manually add/remove from my production IAM as needed, but I've read that the use of service accounts and local private key json files is also risky.
Are there ways to use OAuth that use my personal credentials but restrict scope/access for a specific session?
I assume you don't have access to Google Workspace.
You could create a(nother) Google user (consumer) account solely for project ownership.
NOTE You needn't get another Gmail address. The signup flow includes the option to use an existing email address. In this case, your you@not-gmail.com address gets wrapped with a Google account (with its own distinct password and 2FA).
Service Account Keys carry risk because they're bearer credentials and so you need to be judicious with their management. It's good practice to only create keys when you must and to delete|cycle them promptly. However, a Service Account Key is generally (!) more secure than using gcloud auth application-default because generally (!) Service Accounts are granted fewer permissions than user (e.g. Owner) accounts. See Workload Identity Federation for another approach.
It used to be that Service Accounts were not fully interchangeable with User accounts: Service Accounts were once not permitted to be Project Owners (this is no longer true). I think there are still cases where Service Accounts are not equivalent, e.g. when GCP requires sending terms-of-service emails.
Is it possible to do local development without using a Google service account key in minikube?
Currently I have a service account key which I use to authenticate google services inside the pod in minikube.
I want to avoid using service account key and instead use IAM role.
IAM user: XXX@XX.com
This IAM user has been granted the required roles/permissions.
After running gcloud auth login, I can access Google services. Is it possible to do this in a similar way in k8s/minikube pods?
I think you can cheat. Use this only in a development environment, never in production.
Firstly, locate your own user credentials created with gcloud auth application-default login. The created file is
on Linux: ~/.config/gcloud/application_default_credentials.json
on Windows: %appdata%\gcloud\application_default_credentials.json
Then mount this file (or copy it) into minikube
You can define your GOOGLE_APPLICATION_CREDENTIALS env var to reference this file.
That's all: your user credentials will be used. Be careful, there are some limitations; I wrote an article on this.
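The two well-known locations above can also be computed in code. A small sketch (paths as listed in this answer, with macOS assumed to behave like Linux):

```python
import os

def adc_well_known_path() -> str:
    """Return the platform-specific path of the ADC file written by
    `gcloud auth application-default login`."""
    if os.name == "nt":  # Windows: %APPDATA%\gcloud\...
        return os.path.join(os.environ["APPDATA"], "gcloud",
                            "application_default_credentials.json")
    # Linux / macOS: ~/.config/gcloud/...
    return os.path.expanduser(os.path.join(
        "~", ".config", "gcloud", "application_default_credentials.json"))

print(adc_well_known_path())
```

The returned path is what you would mount or copy into the minikube pod and point GOOGLE_APPLICATION_CREDENTIALS at.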
I think you are mixing things up. You can never use a key instead of a role; in most cases you need both. You need a key to authenticate to the Google Cloud Platform, and you need a certain IAM role to access services within GCP. Authentication means confirming your own identity, whereas authorization means being allowed access to the system.
In your specific case, I think you are referring to the process of letting your application/code use your own identity to authenticate to the Google Cloud Platform. There are two possibilities here:
Download a service account key file. This is prone to security leaks, because those key files are not rotated automatically.
As @guillaume blaquiere explains below, you could also generate a credentials file using your own identity. The specifics are well explained here and here. For local development, this is preferred over the other option.
If you want to know how your SDK works with key files, I would recommend you take a look inside the SDK for the programming language you are using. There will always be a reference to GOOGLE_APPLICATION_CREDENTIALS: the location of the key file you are using.
I have a Cloud Function that interacts with Cloud Storage and BigQuery and they all belong to the same project. The usual way that I have followed when deploying Cloud Function from the command line is this:
$ gcloud functions deploy my_function ... --set-env-vars GOOGLE_APPLICATION_CREDENTIALS=my_project_credentials.json
Where my_project_credentials.json is a json key file that contains service account and key to allow access to Cloud Storage and BigQuery.
As this is the way I have always done it, what I need is another way to avoid this JSON credentials file altogether (since the interacting services belong to the same Google Cloud project anyway). Is there such a way? I am a bit new to Google Cloud, so I am not familiar with the ins and outs of IAM.
(An additional reason I need this is that I have a client who is not comfortable with me as a developer having access to that JSON key, and who also doesn't want the key deployed alongside the function code. Kindly provide some details on how to do this in IAM, particularly for BigQuery and Cloud Storage, as I don't have control over IAM either.)
When you can, and at least when your application runs on GCP, you shouldn't use a service account key file, for two reasons:
It's a plain file used for authentication: it can easily be copied, sent by email, or even committed to a code repository, possibly a public one!
It's a secret, so you have to store it securely and rotate it frequently (Google recommends at least every 90 days). That's hard to manage: do you really want to redeploy your function every 90 days with a new key file?
So my peers Gabe and Kolban are right. Use the function's identity:
Either specify the service account email when deploying the function (the --service-account flag of gcloud functions deploy),
Or the default service account will be used (the Compute Engine default service account, which has the Editor role by default; not really safe, so prefer the first solution).
In your code, use the default-credentials mechanism (the function name changes slightly from language to language, but the meaning is the same). If you look into the source code, you will see that the library does the following:
Check whether the GOOGLE_APPLICATION_CREDENTIALS env var exists. If so, use it.
Check whether the "well-known file" exists. When you run gcloud auth application-default login, the credentials are stored in a different place depending on the OS, and the library looks for them there.
Check whether the metadata server exists. This link references Compute Engine, but other environments follow the same principle.
There is no magic. The metadata server knows the identity of the function and can generate access and identity tokens on demand. The libraries call it when your code runs on GCP. That's why you never need a service account key file: the metadata server is there to serve you this information.
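The three-step lookup can be sketched as plain logic. This is only an illustration of the order, not the real google-auth implementation:

```python
import os

def resolve_adc_source(env=os.environ,
                       well_known_exists=os.path.exists,
                       metadata_reachable=lambda: False):
    """Mimic the Application Default Credentials lookup order:
    env var, then well-known file, then metadata server."""
    # 1. An explicit GOOGLE_APPLICATION_CREDENTIALS path wins.
    path = env.get("GOOGLE_APPLICATION_CREDENTIALS")
    if path:
        return ("env", path)
    # 2. The well-known file from `gcloud auth application-default login`.
    wk = os.path.expanduser(
        "~/.config/gcloud/application_default_credentials.json")
    if well_known_exists(wk):
        return ("well_known_file", wk)
    # 3. The metadata server, only reachable when running on GCP.
    if metadata_reachable():
        return ("metadata_server", "http://metadata.google.internal")
    raise RuntimeError("could not determine default credentials")

print(resolve_adc_source({"GOOGLE_APPLICATION_CREDENTIALS": "/tmp/key.json"}))
# ('env', '/tmp/key.json')
```

The injectable `well_known_exists` and `metadata_reachable` parameters are just for illustration and testing; the real library probes the filesystem and the network directly.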
What Kolban said. When you deploy your Cloud Function you can define a service account to use, and then any API calls that use Application Default Credentials will automatically use that service account, without the need for a service account key (the JSON file). Check out the docs here:
https://cloud.google.com/docs/authentication/production#auth-cloud-implicit-nodejs
In order to limit the number of service accounts to manage as well as handling their keys, I'm exploring other ways of accessing GCP resources from a developer laptop or desktop so I can run ad-hoc scripts or interactive programs (e.g. Jupyter notebook) that access GCP services.
Using gcloud auth application-default login generates, after authenticating via a web browser, a refresh token that can be used to get and renew access tokens that can be used to interact with GCP services.
The workflow I'm following is this:
Run gcloud auth application-default login. This generates a JSON file on my disk that
contains the refresh token.
Export the JSON file location as GOOGLE_APPLICATION_CREDENTIALS env variable
GOOGLE_APPLICATION_CREDENTIALS=/Users/my.username/.config/gcloud/application_default_credentials.json
Use that file to authenticate via Google auth library and interact with different GCP services.
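The downloaded file is an `authorized_user` credential carrying the refresh token. The field names below match that format; the values are fake:

```python
import json

# Sample shaped like the file written by
# `gcloud auth application-default login` (all values are fake).
sample = json.dumps({
    "client_id": "123-abc.apps.googleusercontent.com",
    "client_secret": "fake-secret",
    "refresh_token": "fake-refresh-token",
    "type": "authorized_user",
})

creds = json.loads(sample)
# The long-lived refresh token is stored in cleartext, which is why
# this file must be protected like a service account key.
assert creds["type"] == "authorized_user"
assert "refresh_token" in creds
```

Anyone who reads this file can mint access tokens until the refresh token is revoked, which is the concern raised below.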
This is convenient, as it reduces the need to circulate, secure and, if needed, share service account key files among team members. However, I have noticed that the refresh token does not expire and remains valid.
Unless I'm missing something here, this makes application_default_credentials.json file as sensitive as a service account key. If it gets lost or compromised it can be used to get access tokens without the need to re-authenticate, which is fairly insecure, IMO.
We're aware that GCP security best practices recommend using service accounts (and their keys) for service-to-service workloads. The scenario I'm describing is ad-hoc development/testing of code from
a developer's or engineer's laptop. We think that forcing users to interactively authenticate via the web to get new tokens every few hours would be more secure and convenient than using long-lived service account keys stored on the hard drive.
I have read through [1] but I could not find a definitive answer.
Does anyone know if there is an expiration for these refresh tokens?
Is there a way of controlling and limiting their lifetimes (ideally to hours or minutes)?
What is the best/common practice for this scenario? Using a single service account (and key) per individual user?
[1] https://developers.google.com/identity/protocols/OAuth2#expiration
Note: User Credentials have Refresh Tokens too.
Does anyone know if there is an expiration for these refresh tokens?
Google OAuth Refresh Tokens do not expire. They can be revoked.
Is there a way of controlling and limiting their lifetimes (ideally to
hours or minutes)?
You could periodically revoke the Refresh Token which will invalidate the Access and Client ID tokens. This means that you are handling the Refresh Tokens which adds another security issue to manage.
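A sketch of such periodic revocation using Google's public OAuth2 revocation endpoint. This only builds the request; actually sending it requires network access. (The simpler interactive alternative is gcloud auth application-default revoke.)

```python
import urllib.parse
import urllib.request

REVOKE_ENDPOINT = "https://oauth2.googleapis.com/revoke"

def build_revoke_request(refresh_token: str) -> urllib.request.Request:
    """Build the form-encoded POST that revokes a Google OAuth2 token.

    Revoking the refresh token also invalidates access tokens derived
    from it, so the user must log in again afterwards.
    """
    body = urllib.parse.urlencode({"token": refresh_token}).encode()
    return urllib.request.Request(
        REVOKE_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )
```

You would read the refresh token out of application_default_credentials.json and pass the built request to `urllib.request.urlopen` on whatever schedule your policy requires.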
What is the best/common practice for this scenario? Using a single
service account (and key) per individual user?
If you use User Credentials (the method where you log in to Google) you will receive SDK warnings, and if you make a lot of API calls you may be blocked. Google does not want you to use User Credentials in place of Service Account credentials. The verification process for User Credentials requires more effort on Google's backend systems. User Credentials are assumed to be created in an insecure environment (web browsers), whereas Service Account credentials are assumed to be used in a secure environment.
Best practices are to issue service account JSON key files to an individual application with only the required permissions for that application to operate. For example, if you create a tool that only needs to read Cloud Storage objects, create a service account with only read permissions. Periodically the service account keys should be rotated and new keys downloaded and old keys deleted. Each application should have its own service account JSON key file. I wrote an article on how to securely store JSON key files on Cloud Storage. This helps with rotating keys as your application just downloads the latest key when needed. (link). My article discusses Google Cloud Run, but the same principles apply.
I want to deploy a node application on a google cloud compute engine micro instance from a source control repo.
As part of this deployment I want to use KMS to store database credentials rather than having them in my source control. To get the credentials from KMS, I first need to authenticate on the instance with gcloud.
Is it safe to just install the gcloud CLI as part of a startup script and let the default service account handle the authentication, then use this to pull in the decrypted details and save them to a file?
The docs walk through development examples, but I've not found anything about how this should work in production, especially as I obviously don't want to store the gcloud credentials in source control either.
Yes, this is exactly what we recommend: use the default service account to authenticate to KMS and decrypt a file with the credentials in it. You can store the resulting data in a file, but I usually either pipe it directly to the service that needs it or put it in tmpfs so it's only stored in RAM.
You can check the encrypted credentials file into your source repository, store it in Google Cloud Storage, or elsewhere. (You create the encrypted file using a different account, such as your personal account or another service account, which has encrypt but not decrypt access on the KMS key.)
If you use this method, you have a clean line of control:
Your administrative user authentication gates the ability to run code as the trusted service account.
Only that service account can decrypt the credentials.
There is no need to store a secret in cleartext anywhere.
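The "pipe it directly to the service" pattern above can be sketched as follows. The KMS decrypt step itself is stubbed out (`plaintext` stands in for its output), and the consuming service is a trivial child process that just echoes its stdin:

```python
import subprocess
import sys

def hand_off_secret(plaintext: bytes) -> str:
    """Pipe decrypted material straight into a child process so the
    secret never touches disk.

    `plaintext` is a stand-in for the output of a KMS decrypt call;
    the child command is a placeholder for the real service.
    """
    child = [sys.executable, "-c",
             "import sys; sys.stdout.write(sys.stdin.read())"]
    # The secret travels over a pipe (stdin), not through a temp file.
    proc = subprocess.run(child, input=plaintext,
                          capture_output=True, check=True)
    return proc.stdout.decode()

print(hand_off_secret(b"db_password=example"))  # db_password=example
```

If the consumer insists on reading a file, a tmpfs mount (RAM-backed, e.g. /dev/shm on Linux) is the closest equivalent, since the plaintext is still never written to persistent storage.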
Thank you for using Google Cloud KMS!