App Engine – Restrict deploy to specific service - google-cloud-platform

We have two services running on Google App Engine.
We would like to restrict deployment to the default (prod) service to specific users only, but allow any dev to deploy to the dev service.
I can't figure out the IAM conditions for it.
App Engine doesn't seem to be an official resource type here: https://cloud.google.com/iam/docs/conditions-resource-attributes#resource-name
and that's consistent with the service filter dropdown.
I've tried using the name I get from gcloud app services describe dev:
resource.name == 'apps/my-project/services/dev'
But that doesn't seem to work either; it just gives access denied, so I'm guessing that's not the right resource name filter.
Is there a way to limit this as described above?

App Engine permissions are granted at the project level and cannot be scoped to individual services of the application.
There is an open feature request (https://issuetracker.google.com/115904598) to allow restricting deployment of specific versions, which I recommend you star and follow.
Separating your prod and dev environments into different GCP projects (I understand this can be inconvenient sometimes) may be the only viable alternative for the time being.
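Under that setup, deploy rights are simply granted per project; as a hedged sketch (project IDs, user, and group below are placeholders, and your deploy flow may need additional roles):

# In the prod project, grant deploy rights only to selected users
gcloud projects add-iam-policy-binding my-prod-project \
  --member=user:release-manager@example.com \
  --role=roles/appengine.deployer

# In the dev project, grant the same role to the whole dev group
gcloud projects add-iam-policy-binding my-dev-project \
  --member=group:devs@example.com \
  --role=roles/appengine.deployer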

AFAIK, you can't restrict permission to deploy to a specific service, because users with deploy access can create arbitrary services in the project.
Two options that I can suggest:
Create a separate GCP project for prod. If you're using the CLI, the prod devs simply switch GCP projects and deploy.
Use CI/CD with Cloud Build, and only grant merge access to the prod branch to prod devs. No dev would need direct access to your GCP projects in this case (see the sketch below).
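If you go the Cloud Build route, a minimal cloudbuild.yaml for the prod deployment could look like the sketch below; the file layout is an assumption, not from the answer. Combined with a trigger that only fires on the protected prod branch, only users who can merge to that branch can reach production:

# cloudbuild.yaml (sketch) - run by a trigger restricted to the prod branch
steps:
  # Deploy the app.yaml at the repository root to App Engine
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy', 'app.yaml', '--quiet']

Note that the Cloud Build service account itself then needs App Engine deploy permissions, as discussed in the gcloud.app.deploy question further down.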

For anyone still looking for options, it is possible to accomplish deployment isolation via gcloud config, which allows creating a named configuration for each environment.
A configuration governs the behavior of the gcloud CLI.
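In isolation, named configurations work roughly like this; the configuration and project names below are placeholders, not taken from the pipeline that follows:

gcloud config configurations create dev                        # create a configuration named "dev"
gcloud config set project my-dev-project --configuration=dev   # point it at a (hypothetical) dev project
gcloud config configurations activate dev                      # subsequent gcloud commands now use the dev settings
gcloud config configurations list                              # show all configurations and which one is active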
Usage example:
Deploy to DEV (default) environment:
deploy:
  image: google/cloud-sdk:alpine
  stage: deploy
  environment: Development
  script:
    - cp $GAE_ENV_VARIABLES ./env_variables.yaml
    - echo $GAE_SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud app deploy app.yaml --project $GCP_PROJECT_ID --version $CI_COMMIT_SHORT_SHA --image-url=us.gcr.io/$GCP_PROJECT_ID/$IMAGE_NAME:$CI_COMMIT_REF_NAME-$CI_COMMIT_SHORT_SHA
Deploy to TEST environment:
You can make it a requirement to use a specific service account to create/activate this configuration, and even a dedicated app.yaml such as app_test.yaml with a different service name (a sketch of such a file follows the job definition below).
deploy_test:
  image: google/cloud-sdk:alpine
  stage: deploy
  environment: Test
  only:
    - master
  script:
    - cp $GAE_ENV_VARIABLES ./env_variables.yaml
    - echo $GAE_SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud config configurations create test --quiet --project $GCP_PROJECT_ID
    - gcloud config configurations activate test --quiet --project $GCP_PROJECT_ID
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json # activate the service account under the new configuration
    - gcloud app deploy app_test.yaml --project $GCP_PROJECT_ID --version $CI_COMMIT_SHORT_SHA --promote --image-url=us.gcr.io/$GCP_PROJECT_ID/$IMAGE_NAME:$CI_COMMIT_REF_NAME-$CI_COMMIT_SHORT_SHA
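For reference, a dedicated app_test.yaml might look something like the sketch below; the runtime and environment variable are assumptions, not taken from the answer:

# app_test.yaml (sketch) - same app, deployed as a separate App Engine service
runtime: python39          # hypothetical runtime; use whatever your app actually runs on
service: test              # deploys to the "test" service instead of "default"
env_variables:
  ENVIRONMENT: test        # hypothetical variable to distinguish the environment at runtime

Because the service name differs, deployments from this job never touch the default (prod) service.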

Related

403 trying to run terraform from Gitlab without json file

After a pile of troubleshooting, I managed to get my GitLab CI/CD pipeline to connect to GCP without requiring my service account to use a JSON key. However, I'm unable to do anything with Terraform in my pipeline using a remote state file because of the following error:
Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403: Insufficient Permission, insufficientPermissions
My gitlab-ci.yml file is defined as follows:
stages:
  - auth
  - validate

gcp-auth:
  stage: auth
  image: google/cloud-sdk:slim
  script:
    - echo ${CI_JOB_JWT_V2} > .ci_job_jwt_file
    - gcloud iam workload-identity-pools create-cred-config ${GCP_WORKLOAD_IDENTITY_PROVIDER}
      --service-account="${GCP_SERVICE_ACCOUNT}"
      --output-file=.gcp_temp_cred.json
      --credential-source-file=.ci_job_jwt_file
    - gcloud auth login --cred-file=`pwd`/.gcp_temp_cred.json
    - gcloud auth list

tf-stuff:
  stage: validate
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  before_script:
    - export TF_LOG=DEBUG
    - cd terraform
    - rm -rf .terraform
    - terraform --version
    - terraform init
  script:
    - terraform validate
My gcp-auth job is running successfully from what I can see:
Authenticated with external account credentials for: [[MASKED]].
I've also gone as far as adding a gsutil cp command inside the gcp-auth job to make sure I can access the desired bucket as expected, which I can: I can successfully edit the contents of the bucket where my Terraform state file is stored.
I'm fairly new to GitLab CI/CD pipelines. Is there something I need to do to tie the gcp-auth job to the tf-stuff job? It's as if that job does not know the pipeline was previously authenticated using the service account.
Thanks!
As mentioned by other posters, GitLab jobs run independently and don't share environment variables or a filesystem, so to preserve login state between jobs you have to persist that state somehow.
I wrote a blog post with a working example: https://ael-computas.medium.com/gcp-workload-identity-federation-on-gitlab-passing-authentication-between-jobs-ffaa2d51be2c
I have done it the way GitHub Actions does it, by storing (temporary) credentials as artifacts. By setting the correct environment variables you should be able to "keep" the logged-in state (GCP will implicitly refresh your token) without having to create a base image containing everything. All jobs must run the gcp_auth_before steps, or extend the auth job, for this to work, and the _auth/ artifacts must be preserved between jobs.
In the sample below you can see that login state is preserved across two jobs, but the actual sign-in only happens in the first one. I have used this together with Terraform images for further steps and it works like a charm so far.
This is very early, so there might be hardening required for production.
Hope this example gives you some ideas on how to solve this!
.gcp_auth_before: &gcp_auth_before
  - export GOOGLE_APPLICATION_CREDENTIALS=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json
  - export CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json
  - export GOOGLE_GHA_CREDS_PATH=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json
  - export GOOGLE_CLOUD_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)
  - export CLOUDSDK_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)
  - export CLOUDSDK_CORE_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)
  - export GCP_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)
  - export GCLOUD_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)

.gcp-auth:
  artifacts:
    paths:
      - _auth/
  before_script:
    *gcp_auth_before

stages:
  - auth
  - debug

auth:
  stage: auth
  image: "google/cloud-sdk:slim"
  variables:
    SERVICE_ACCOUNT_EMAIL: "... service account email ..."
    WORKLOAD_IDENTITY_PROVIDER: "projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL/providers/PROVIDER"
    GOOGLE_CLOUD_PROJECT: "... project id ..."
  artifacts:
    paths:
      - _auth/
  script:
    - |
      mkdir -p _auth
      echo "$CI_JOB_JWT_V2" > $CI_PROJECT_DIR/_auth/.ci_job_jwt_file
      echo "$GOOGLE_CLOUD_PROJECT" > $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT
      gcloud iam workload-identity-pools create-cred-config \
        $WORKLOAD_IDENTITY_PROVIDER \
        --service-account=$SERVICE_ACCOUNT_EMAIL \
        --service-account-token-lifetime-seconds=600 \
        --output-file=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json \
        --credential-source-file=$CI_PROJECT_DIR/_auth/.ci_job_jwt_file
      gcloud config set project $GOOGLE_CLOUD_PROJECT
    - "export GOOGLE_APPLICATION_CREDENTIALS=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json"
    - "gcloud auth login --cred-file=$GOOGLE_APPLICATION_CREDENTIALS"
    - gcloud auth list # DEBUG!!

debug:
  extends: .gcp-auth
  stage: debug
  image: "google/cloud-sdk:slim"
  script:
    - env
    - gcloud auth list
    - gcloud storage ls
Your two GitLab jobs run in separate pods with the Kubernetes runner.
The tf-stuff job therefore doesn't see the authentication done in the gcp-auth job.
To solve this issue, you can put the authentication logic in a separate shell script and reuse that script in the two GitLab jobs, for example:
Authentication shell script gcp_authentication.sh:
echo ${CI_JOB_JWT_V2} > .ci_job_jwt_file
gcloud iam workload-identity-pools create-cred-config ${GCP_WORKLOAD_IDENTITY_PROVIDER} \
  --service-account="${GCP_SERVICE_ACCOUNT}" \
  --output-file=.gcp_temp_cred.json \
  --credential-source-file=.ci_job_jwt_file
gcloud auth login --cred-file=`pwd`/.gcp_temp_cred.json
gcloud auth list
# Check if you need to set the GOOGLE_APPLICATION_CREDENTIALS env var to `pwd`/.gcp_temp_cred.json
For the tf-stuff job, you can create a custom Docker image containing both Terraform and gcloud, because the hashicorp/terraform image doesn't contain the gcloud CLI natively.
Your Docker image can then be added to the GitLab registry.
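A minimal Dockerfile for such an image could look like the sketch below; the base image and Terraform version are assumptions, so adjust them to your setup:

# Dockerfile (sketch) - gcloud CLI plus Terraform in one image
FROM google/cloud-sdk:slim

# Hypothetical Terraform version; pin whatever version your project uses
ARG TERRAFORM_VERSION=1.3.9

RUN apt-get update && apt-get install -y --no-install-recommends unzip curl \
    && curl -fsSL -o /tmp/terraform.zip \
       https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
    && unzip /tmp/terraform.zip -d /usr/local/bin \
    && rm /tmp/terraform.zip \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

Build and push it to your GitLab registry (for example as yourgitlabregistry/your-custom-image:1.0.0) so the tf-stuff job can use it.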
Your GitLab yml file:
stages:
  - auth
  - validate

gcp-auth:
  stage: auth
  image: google/cloud-sdk:slim
  script:
    - . ./gcp_authentication.sh

tf-stuff:
  stage: validate
  image:
    name: yourgitlabregistry/your-custom-image:1.0.0
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  before_script:
    - . ./gcp_authentication.sh
    - export TF_LOG=DEBUG
    - cd terraform
    - rm -rf .terraform
    - terraform --version
    - terraform init
  script:
    - terraform validate
Some explanations:
The same shell script, gcp_authentication.sh, is used in the two GitLab jobs.
A custom Docker image containing Terraform and the gcloud CLI is used for the job handling the Terraform part. This image can be added to the GitLab registry.
In the authentication shell script, check whether you need to set the GOOGLE_APPLICATION_CREDENTIALS env var to `pwd`/.gcp_temp_cred.json.
You have to give the needed permission to your service account to use GitLab with Workload Identity: roles/iam.workloadIdentityUser (see the sketch after this list).
You can check this example project and the documentation.
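As a hedged sketch of that binding (the project number, pool ID, and attribute mapping below are assumptions about your Workload Identity setup, not values from the question):

# Allow identities from the GitLab workload identity pool to impersonate the service account
gcloud iam service-accounts add-iam-policy-binding "${GCP_SERVICE_ACCOUNT}" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/attribute.project_path/YOUR_GITLAB_GROUP/YOUR_GITLAB_PROJECT"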

Google cloud build with Compute Engine

I want to use Cloud Build with a trigger on commit to automatically fetch the updated repo and run sudo supervisorctl restart on a Compute Engine instance.
On the Cloud Build settings page there is an option to connect Compute Engine, but so far I have only found examples covering Kubernetes Engine and App Engine here.
Is it possible to accomplish this? Is it the right way to make updates, or should I instead restart the instance(s) with a startup script?
There's a repo on GitHub from the cloud-builders-community that may be what you are looking for.
As specified in the aforementioned link, it connects Cloud Build to Compute Engine with the following steps:
A temporary SSH key will be created in your Container Builder workspace.
An instance will be launched with your configured flags.
The workspace will be copied to the remote instance.
Your command will be run inside that instance's workspace.
The workspace will be copied back to your Container Builder workspace.
You will need to grant the Cloud Build service account appropriate IAM roles, including permission to create and destroy Compute Engine instances:
export PROJECT=$(gcloud info --format='value(config.project)')
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT --format 'value(projectNumber)')
export CB_SA_EMAIL=$PROJECT_NUMBER@cloudbuild.gserviceaccount.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable compute.googleapis.com
# add-iam-policy-binding takes a single --role flag, so bind each role separately
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/iam.serviceAccountUser'
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/compute.instanceAdmin.v1'
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/iam.serviceAccountActor'
And then you can configure your build step with something similar to this:
steps:
- name: gcr.io/$PROJECT_ID/remote-builder
  env:
  - COMMAND=sudo supervisorctl restart
You can also find more information in the examples section of the Github repo.

Service account does not have storage.buckets.lists access to project while pushing images to GCR via Gitlab CI

I am using GitLab CI to build Docker images and push them to GCR. My script goes like this:
build:
  image: google/cloud-sdk
  services:
    - docker:dind
  stage: build
  cache:
  script:
    - echo "$GCP_SERVICE_KEY" > gcloud-service-key.json # Google Cloud service account key
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud auth configure-docker --quiet
    - gcloud config set project $GCP_PROJECT_ID
    - echo ${IMAGE_NAME}:${IMAGE_TAG}
    - PYTHONUNBUFFERED=1 gcloud builds submit -t ${IMAGE_NAME}:${IMAGE_TAG} .
  only:
    - master
and I am getting this error:
ERROR: (gcloud.builds.submit) HTTPError 403: <service account name>@<projectname>.iam.gserviceaccount.com does not have storage.buckets.list access to project <projectid>.
After granting the service account Editor permissions, I am getting the error:
ERROR: (gcloud.builds.submit) User [<service account name>@<projectname>.iam.gserviceaccount.com] does not have permission to access b [<bucket_name>] (or it may not exist): <service account name>@<projectname>.iam.gserviceaccount.com does not have storage.buckets.get access to <bucket_name>
What permissions do I have to give the service account to achieve this?
From the error:
<service account name>@<projectname>.iam.gserviceaccount.com
does not have storage.buckets.list access to project <projectid>
I suspect that <projectname> and <projectid> refer to 2 different projects.
The project that owns the service account (<projectname>) may well have storage.[buckets|objects].* permissions, but these apply to the GCS resources controlled by <projectname> and not to those controlled by <projectid>.
NB Yes, it's confusing to see projects referenced by different types of keys but, to confirm, compare the project ID of <projectname> with <projectid>. Replace <projectname> with its value in the command below to retrieve its project ID:
gcloud projects list --filter="name=<projectname>" --format="value(projectId)"
There are 2 approaches to granting identities access to GCS resources. The first is (as above) to create the bindings at the project level. The second is to apply them to specific buckets (see the sketch below).
See the link below for guidance. It's for Cloud Build's service account but the principle is the same: the service account (in project <projectname>) needs access to the GCS resources in <projectid>:
https://cloud.google.com/cloud-build/docs/securing-builds/set-service-account-permissions#push_private_images_to_others
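As a hedged illustration of the bucket-level approach (the bucket name and role below are placeholders; gcloud builds submit needs at least get/list and object read/write on its staging bucket):

# Grant the service account admin on one specific bucket rather than on the whole project
gsutil iam ch \
  serviceAccount:<service account name>@<projectname>.iam.gserviceaccount.com:roles/storage.admin \
  gs://<bucket_name>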

gcloud - ERROR: (gcloud.app.deploy) Permissions error fetching application

I am trying to deploy a Node.js app on Google Cloud but I get the following error:
Step #1: ERROR: (gcloud.app.deploy) Permissions error fetching application [apps
/mytest-240512]. Please make sure you are using the correct project ID and that
you have permission to view applications on the project.
I am running the following command:
gcloud builds submit . --config cloudbuild.yaml
My cloudbuild.yaml file looks like this:
steps:
# install
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
# deploy
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
The default Cloud Build service account is not allowed to deploy to App Engine. You need to grant the Cloud Build service account the roles required to perform actions such as deploy (a CLI equivalent of these steps is sketched below).
The Cloud Build service account is formatted like this:
[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com
Go to the Google Cloud Console -> IAM & admin -> IAM.
Locate the service account and click the pencil icon.
Add the role "App Engine Deployer" to the service account.
Wait a couple of minutes for the service account to update globally and then try again.
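If you prefer the CLI, a sketch of the equivalent binding follows; the project ID is a placeholder, and depending on your app you may also need roles such as App Engine Service Admin or Service Account User:

export PROJECT_ID=my-project            # placeholder project ID
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")

# Give Cloud Build's service account the App Engine Deployer role
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
  --role=roles/appengine.deployer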
I had this same error today and the way I resolved it was by running $ gcloud auth login in the console.
This opens a new browser tab for you to log in with the credentials that have access to the project you're trying to deploy.
I was able to deploy to gcloud after that.
P.S.: I'm not sure this is the best approach, but I'm leaving it as a possible solution since this is how I usually get around this problem. Worst case, I'll stand corrected and learn something new.
The most common way to deploy an app to App Engine is to use gcloud app deploy ....
When you use gcloud app deploy against App Engine Flex, the service uses Cloud Build under the hood.
It's entirely possible (and reasonable) to use Cloud Build to do your deployments too; it's just more involved.
I've not tried this, but I think that if you wish to use Cloud Build to perform the deployment, you will need to ensure that the Cloud Build service account has permission to deploy to App Engine.
Here's an example of what you would need to do, specifically granting Cloud Build's service account the correct roles.
Two commands can handle the permissions needed (run them in your terminal if you have the gcloud SDK installed and authenticated, or run them in Cloud Shell for your project):
export PROJECT_ID=[[put your project id here]]
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")

gcloud iam service-accounts add-iam-policy-binding ${PROJECT_ID}@appspot.gserviceaccount.com \
  --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser \
  --project=${PROJECT_ID}

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
  --role=roles/appengine.appAdmin

Google Compute Engine: Required 'compute.instances.get' permission error

I am trying to connect to GCloud using CircleCI and deploy my code. I can successfully authenticate my service account user using:
gcloud --quiet auth activate-service-account --key-file=${HOME}/gcloud-service-key.json
gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
gcloud --quiet config set compute/zone ${GOOGLE_COMPUTE_ZONE}
However, when I try to use the code below to deploy:
gcloud --quiet compute scp --recurse /[Folder_name] [Instance_Name]:/var/www/test --zone=northamerica-northeast1-a --project [Project_Name]
I get the following error:
ERROR: (gcloud.compute.scp) Could not fetch resource:
- Required 'compute.instances.get' permission for [Instance_name]
I have looked at the permissions and it appears the service account has admin rights, which should include the instances.get permission according to this.
I think my question is similar to this one; however, the solution proposed there is not working, which is why I am asking a separate question. How can I solve this problem?
Just create a new service account in the console rather than using the default Compute Engine service account (a CLI sketch follows).
Export the JSON key file and follow this guide - it works for me:
https://circleci.com/docs/2.0/google-auth/
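A minimal gcloud sketch of that approach, under the assumption that roles/compute.instanceAdmin.v1 plus roles/iam.serviceAccountUser are enough for gcloud compute scp (the account and project names below are placeholders):

# Create a dedicated deploy service account
gcloud iam service-accounts create circleci-deploy --display-name="CircleCI deploy"

# Grant it the permissions needed to reach the instance (includes compute.instances.get)
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member=serviceAccount:circleci-deploy@MY_PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/compute.instanceAdmin.v1
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member=serviceAccount:circleci-deploy@MY_PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser

# Export a JSON key and store it in the CircleCI environment as described in the guide above
gcloud iam service-accounts keys create gcloud-service-key.json \
  --iam-account=circleci-deploy@MY_PROJECT_ID.iam.gserviceaccount.com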