When I try to delete my Cloud Composer environment, the operation gets stuck and complains about insufficient permissions. I have already deleted the storage bucket, the GKE cluster and the deployment according to this post:
Cannot delete Cloud Composer environment
And the service account is the standard Compute Engine default SA.
DELETE operation on this environment failed 33 minutes ago with the following error message:
Could not configure workload identity: Permission iam.serviceAccounts.getIamPolicy is required to perform this operation on service account projects/-/serviceAccounts/"project-id"-compute@developer.gserviceaccount.com.
Even though I temporarily made the compute service account a Project Owner and IAM Security Admin, it does not work.
I've tried deleting it through the GUI, the gcloud CLI and Terraform without success. Any advice or things to try out will be appreciated :)
I got help from Google support: instead of the SA projects/-/serviceAccounts/"project-id"-compute@developer.gserviceaccount.com,
it was apparently the default service agent, which has the format
service-"project-nr"@cloudcomposer-accounts.iam.gserviceaccount.com and needs the
Cloud Composer v2 API Service Agent Extension role.
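For reference, granting that role to the service agent typically looks like the sketch below (PROJECT_ID and PROJECT_NUMBER are placeholders for your own values):
# Grant the Cloud Composer v2 API Service Agent Extension role
# to the Composer service agent
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:service-PROJECT_NUMBER@cloudcomposer-accounts.iam.gserviceaccount.com" \
  --role="roles/composer.ServiceAgentV2Ext"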
Thank you for the kind replies!
The iam.serviceAccounts.getIamPolicy issue seems to be more related to the credentials: your machine is having trouble retrieving the credential data.
You should set up your credentials path variable again:
export GOOGLE_APPLICATION_CREDENTIALS=fullpath.json
Another option is to activate the service account explicitly:
gcloud auth activate-service-account SERVICE_ACCOUNT_EMAIL --key-file=KEY_FILE
You can also add the credentials to your Terraform configuration:
provider "google" {
credentials = file(var.service_account_file_path)
project = var.project_id
}
Don't forget that you need the correct roles to delete the Composer environment.
For more details about it you can check:
https://cloud.google.com/composer/docs/delete-environments#gcloud
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/composer_environment
https://cloud.google.com/composer/docs/how-to/access-control?hl=es_419
Related
I am trying to create a basic Composer environment:
image version: 1.17.8/2.1.4
using a service account with the Composer Worker (composer.worker) role
my own user has the Project Owner role
public IP
All my attempts failed with following error:
Http error status code: 400
Http error message: BAD REQUEST
Errors in: [Web server]; Error messages:
The caller does not have permission
Required 'deploymentmanager.typeProviders.create' permission for 'projects/<my-project>/global/typeProviders/europe-west2-<name-id>-addons-gke-typer'
deploymentmanager.typeProviders.create is covered by the Deployment Manager Type Editor role, so I added that role to both my account and the service account, but the error remains the same.
The Cloud Composer Service Agent account is present in the project without any modifications to its permissions.
Is there anything else I can check or something that I missed during the set up?
For an account (whether a user account or a service account) to be able to create a Composer environment, the account must have the composer.environments.create permission.
And according to Google Cloud's documentation on Cloud Composer Access Control,
The Composer Worker role provides the permissions necessary to run a Cloud Composer environment VM and is intended for service accounts.
The Composer Worker role is not intended for the creation of environments; thus, it does not have the composer.environments.create permission.
If you want your service account to be able to create a Composer environment, you will need to assign it the Composer Administrator role, which includes the composer.environments.create permission.
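For illustration, granting that role at the project level looks roughly like this (PROJECT_ID and SA_NAME are placeholders for your own values):
# Grant the Composer Administrator role to the service account
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/composer.admin"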
You may refer to Access Control for Cloud Composer for the complete list of permissions for Composer Worker, Composer Administrator and other Composer-related roles.
I am trying to deploy some GCP resources using terraform.
Executed gcloud auth login (authenticated with my GCP account, assigned the GCP project).
Executed gcloud auth application-default login.
Assigned roles to my user account (user99@gmail.com) at the project level and to the Terraform Service Account at the organisation level.
Now, when I run terraform scripts from my CLI on my local machine, I get the "Error 403: The caller does not have permissions" error.
My question is:
When running terraform commands from my local machine's CLI, which account is Terraform using to deploy resources (user99@gmail.com or the Terraform Service Account)?
Is Terraform complaining about missing permissions for my user99@gmail.com or for the Terraform Service Account?
Is there a way to check which account is being used to deploy resources on GCP?
Without changing the project on gcloud auth login, can we deploy resources in other GCP projects?
If you're running on a (e.g. local) host (i.e. one that's not on GCP):
if you're using gcloud and have run gcloud auth application-default login, then Terraform should be using that user's credentials (gcloud config get-value account);
if the environment exports GOOGLE_APPLICATION_CREDENTIALS (and it correctly points to a service account's key), then the Service Account will be used.
If you're running Terraform on GCP (e.g. on Compute Engine), then the Compute Engine instance's service account will be determined automatically by ADCs (see below).
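A quick way to check which of these cases applies on your machine (plain gcloud and shell, nothing project-specific is assumed):
# Show the active gcloud account (used when no key file overrides it)
gcloud config get-value account

# Show whether a service account key file is set to override ADC
echo "$GOOGLE_APPLICATION_CREDENTIALS"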
Google Provider Configuration Authentication
Application Default Credentials (ADCs) Finding Credentials Automatically
Is there a way to check which account is being used to deploy resources on GCP?
This simple snippet will show you which account is being used to provision resources:
data "google_client_openid_userinfo" "me" {
}
output "my-email" {
value = data.google_client_openid_userinfo.me.email
}
Without changing the project on gcloud auth login, can we deploy resources in other GCP projects?
Actually, you can rely on self_links, which contain the full path including the project, so you can reference different projects. However, the proper way would be to use a different provider alias for each project, as in the sketch below.
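A minimal sketch of that pattern (the project IDs and bucket name are placeholders):
# Default provider for project A
provider "google" {
  project = "project-a-id"
  region  = "us-central1"
}

# Aliased provider for project B
provider "google" {
  alias   = "project_b"
  project = "project-b-id"
  region  = "us-central1"
}

# This resource is created in project B via the aliased provider
resource "google_storage_bucket" "example" {
  provider = google.project_b
  name     = "example-bucket-in-project-b"
  location = "US"
}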
Reference: https://www.terraform.io/language/providers/configuration#alias-multiple-provider-configurations
I tried to deploy an OpenVPN Access Server to Google Compute Engines and received the following error message:
openvpn-access-server-1-vm: {"ResourceType":"compute.v1.instance","ResourceErrorCode":"EXTERNAL_RESOURCE_NOT_FOUND","ResourceErrorMessage":"The resource 'PROJECT_ID-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found."}
PROJECT_ID is just a placeholder for my own PROJECT_ID.
In the Cloud Console, I can't find the "Compute Engine default service account" (I think I accidentally deleted it last year). In log files from 2020 I found its ACCOUNT_ID, so I tried to undelete it with the following command:
gcloud beta iam service-accounts undelete ACCOUNT_ID
I had no success; I received:
ERROR: (gcloud.beta.iam.service-accounts.undelete) NOT_FOUND: Not found; Not found AccountDataType for <numeric_id>
<numeric_id> was a 12-digit number.
I tried to disable and enable the Compute Engine API to restore the default service account, but it wasn't successful either; I received:
response:
'@type': type.googleapis.com/google.iam.admin.v1.ServiceAccount
serviceName: iam.googleapis.com
status:
code: 6
message: ALREADY_EXISTS
receiveTimestamp: '2021-08-05T06:45:55.798772716Z'
Because of this error, I tried to delete it, but this didn't work either.
Now I don't know what to do, to get the default service account back.
Is it still existing or not?
Why isn't it working?
Keep in mind, I'm talking about PROJECT_ID-compute@developer.gserviceaccount.com.
service-PROJECT_ID@compute-system.iam.gserviceaccount.com exists and is recreated each time I disable and enable the Compute Engine API again.
Thanks for helping.
Since the Service Account was deleted a year ago, it cannot be undeleted using the following command:
gcloud beta iam service-accounts undelete ACCOUNT_ID
This only works for Service Accounts deleted fewer than 30 days ago. See Undeleting a service account for more information.
Instead, we can create a new Service Account and grant it the Editor role, since the default Compute Engine Service Account has that role by default. See Compute Engine default service account for more information.
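For illustration, creating such an account and granting it the Editor role could look like this (the account name new-compute-sa and PROJECT_ID are placeholders):
# Create a replacement service account
gcloud iam service-accounts create new-compute-sa \
  --display-name="Replacement compute service account"

# Grant it the Editor role, matching the default Compute Engine SA
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:new-compute-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/editor"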
Now, we can create a new Compute Engine VM using the new Service Account. See Setting up a new instance to run as a service account for more information.
If we already have a running VM and its Service Account got deleted then, as @John Hanley suggested, we can edit the VM instance in the Google Cloud Console and assign the new Service Account to the instance. See Changing the service account and access scopes for an instance for more information.
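The same change can also be made from the command line; a sketch with placeholder names (the instance must be stopped first):
# Stop the instance, attach the new service account, then restart it
# (INSTANCE_NAME, ZONE, PROJECT_ID and new-compute-sa are placeholders)
gcloud compute instances stop INSTANCE_NAME --zone=ZONE
gcloud compute instances set-service-account INSTANCE_NAME \
  --zone=ZONE \
  --service-account=new-compute-sa@PROJECT_ID.iam.gserviceaccount.com \
  --scopes=cloud-platform
gcloud compute instances start INSTANCE_NAME --zone=ZONE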
To set the new Service Account as the Compute Engine Default Service Account on the project, we can use the following command,
gcloud alpha compute project-info set-default-service-account
But since the command is in the ‘alpha’ launch stage, it is not available for everyone.
Another workaround would be creating a new project and deploying our instance there.
This is a bit of a newbie question, but I've just gotten started with GCP provisioning using Terraform / Terragrunt, and I find the workflow for obtaining GCP credentials quite confusing. I've come from using AWS exclusively, where obtaining credentials and configuring them in the AWS CLI was quite straightforward.
Basically, the Google Cloud Provider documentation states that you should define a provider block like so:
provider "google" {
credentials = "${file("account.json")}"
project = "my-project-id"
region = "us-central1"
zone = "us-central1-c"
}
This credentials field suggests I (apparently) must generate a service account key and keep the JSON somewhere on my filesystem.
However, if I run the command gcloud auth application-default login, this generates a token located at ~/.config/gcloud/application_default_credentials.json; alternatively I can also use gcloud auth login <my-username>. From there I can access the Google API (which is what Terraform is doing under the hood as well) from the command line using a gcloud command.
So why does the Terraform provider require a JSON file of a service account? Why can't it just use the credentials that the gcloud CLI tool is already using?
By the way, if I configure Terraform to point to the application_default_credentials.json file, I get the following errors:
Initializing modules...
Initializing the backend...
Error: Failed to get existing workspaces: querying Cloud Storage
failed: Get
https://www.googleapis.com/storage/v1/b/terraform-state-bucket/o?alt=json&delimiter=%2F&pageToken=&prefix=projects%2Fsomeproject%2F&prettyPrint=false&projection=full&versions=false:
private key should be a PEM or plain PKCS1 or PKCS8; parse error:
asn1: syntax error: sequence truncated
if I configure Terraform to point to the application_default_credentials.json file, I get the following errors:
The credentials field in the provider config expects a path to a service account key file, not a user account credentials file. If you want to authenticate with your user account, try omitting credentials and then running gcloud auth application-default login; if Terraform doesn't find your credentials file you can set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to ~/.config/gcloud/application_default_credentials.json.
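For example, a minimal provider block that relies on those ambient credentials (the project and region values are placeholders) might look like:
# No credentials argument: the provider falls back to
# GOOGLE_APPLICATION_CREDENTIALS or the gcloud application-default credentials
provider "google" {
  project = "my-project-id"
  region  = "us-central1"
}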
Read here for more on the topic of service accounts vs user accounts. For what it's worth, Terraform docs explicitly advise against using application-default login:
This approach isn't recommended- some APIs are not compatible with credentials obtained through gcloud
Similarly GCP docs state the following:
Important: For almost all cases, whether you are developing locally or in a production application, you should use service accounts, rather than user accounts or API keys.
Change the credentials to point directly to the file location. Everything else looks good.
Example: credentials = "/home/scott/gcp/FILE_NAME"
Still, it is not recommended to use gcloud auth application-default login; the best approaches are described here:
https://www.terraform.io/docs/providers/google/guides/provider_reference.html#credentials-1
I want to compare Google Cloud Run to both Google App Engine and Google Cloud Functions. The Cloud Run Quickstart: Build and Deploy seems like a good starting point.
My Application Default Credentials are too broad to use during development. I'd like to use a service account, but I struggle to configure one that can complete the quickstart without error.
The question:
What is the least privileged set of predefined roles I can assign to a service account that must execute these commands without errors:
gcloud builds submit --tag gcr.io/{PROJECT-ID}/helloworld
gcloud beta run deploy --image gcr.io/{PROJECT-ID}/helloworld
The first command fails with a (seemingly spurious) error when run via a service account with two roles: Cloud Build Service Account and Cloud Run Admin. I haven't run the second command.
Edit: the error is not spurious. The command builds the image and copies it to the project's container registry, then fails to print the build log to the console (insufficient permissions).
Edit: I ran the second command. It fails with Permission 'iam.serviceaccounts.actAs' denied on {service-account}. I could resolve this by assigning the Service Account User role. But that allows the deploy command to act as the project's runtime service account, which has the Editor role by default. Creating a service account with (effectively) both Viewer and Editor roles isn't much better than using my Application Default Credentials.
So I should change the runtime service account permissions. The Cloud Run Service Identity docs have this to say about least privileged access configuration:
This changes the permissions for all services in a project, as well
as Compute Engine and Google Kubernetes Engine instances. Therefore,
the minimum set of permissions must contain the permissions required
for Cloud Run, Compute Engine, and Google Kubernetes Engine in a
project.
Unfortunately, the docs don't say what those permissions are or which set of predefined roles covers them.
What I've done so far:
Use the dev console to create a new GCP project
Use the dev console to create a new service account with the Cloud Run Admin role
Use the dev console to create (and download) a key for the service account
Create (and activate) a gcloud configuration for the project
$ gcloud config list
[core]
account = {service-account-name}@{project-id}.iam.gserviceaccount.com
disable_usage_reporting = True
project = {project-id}
[run]
region = us-central1
Activate the service account using the downloaded key
Use the dev console to enable the Cloud Run API
Use the dev console to enable Container Registry→Settings→Container Analysis API
Create a sample application and Dockerfile as instructed by the quickstart documentation
Run gcloud builds submit --tag gcr.io/[PROJECT-ID]/helloworld
...fails due to missing cloud build permissions
Add the Cloud Build Editor role to service account and resubmit build
...fails due to missing storage permissions. I didn't pay careful attention to what was missing.
Add the Storage Object Admin role to service account and resubmit build
...fails due to missing storage bucket permissions
Replace service account's Storage Object Admin role with the Storage Admin role and resubmit build
...fails with
Error: (gcloud.builds.submit) HTTPError 403:
<?xml version='1.0' encoding='UTF-8'?>
<Error>
<Code>AccessDenied</Code>
<Message>Access denied.</Message>
<Details>
{service-account-name} does not have storage.objects.get access to
{number}.cloudbuild-logs.googleusercontent.com/log-{uuid}.txt.</Details>
</Error>
Examine the set of available roles and the project's automatically created service accounts. Realize that the Cloud Build Service Account role has many more permissions than the Cloud Build Editor. This surprised me; the legacy Editor role has "Edit access to all resources".
Remove the Cloud Build Editor and Storage Admin roles from service account
Add the Cloud Build Service Account role to service account and resubmit build
...fails with the same HTTP 403 error (missing get access for a log file)
Check Cloud Build→History in the dev console; find successful builds!
Check Container Registry→Images in the dev console; find images!
At this point I think I could finish Google Cloud Run Quickstart: Build and Deploy. But I don't want to proceed with (seemingly spurious) error messages in my build process.
Cloud Run PM here:
We can break this down into the two sets of permissions needed:
# build a container image
gcloud builds submit --tag gcr.io/{PROJECT_ID}/helloworld
You'll need:
Cloud Build Editor and Cloud Build Viewer (as per @wlhee)
# deploy a container image
gcloud beta run deploy --image gcr.io/{PROJECT_ID}/helloworld
You need to do two things:
Grant your service account the Cloud Run Developer role (roles/run.developer, as used in the command below); if you want to change the IAM policy, say to deploy the service publicly, you'll need Cloud Run Admin.
Follow the Additional Deployment Instructions to grant that service account the ability to act as the runtime service account.
#1
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:{service-account-name}@{project-id}.iam.gserviceaccount.com" \
--role="roles/run.developer"
#2
gcloud iam service-accounts add-iam-policy-binding \
PROJECT_NUMBER-compute@developer.gserviceaccount.com \
--member="serviceAccount:{service-account-name}@{project-id}.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser"
EDIT: As noted, the latter grants your service account the ability to actAs the runtime service account. What role this runtime service account has depends on what it needs to access: if the only thing Run/GKE/GCE accesses is GCS, then give it something like Storage Object Viewer instead of Editor (see the sketch below). We are also working on per-service identities, so you can create a service account and "override" the default with something that has least privilege.
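For instance, narrowing the runtime (default compute) service account to read-only GCS access could look like this (PROJECT_ID and PROJECT_NUMBER are placeholders; adjust the role to whatever your service actually needs):
# Give the runtime service account read-only access to Cloud Storage
# (you would also remove the broad Editor binding separately)
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/storage.objectViewer"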
According to https://cloud.google.com/cloud-build/docs/securing-builds/set-service-account-permissions
"Cloud Build Service Account" - Cloud Build executes your builds using a service account, a special Google account that executes builds on your behalf.
In order to call
gcloud builds submit --tag gcr.io/path
Edit:
you need to grant "Cloud Build Editor" and "Viewer" to the service account that starts the build; this is due to the current Cloud Build authorization model.
Sorry for the inconvenience.
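A rough sketch of granting those roles to the build-submitting service account (PROJECT_ID and SA_NAME are placeholders; following the other answer, "Viewer" is taken here to mean Cloud Build Viewer):
# Grant Cloud Build Editor and Cloud Build Viewer to the account that runs the build
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/cloudbuild.builds.editor"

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/cloudbuild.builds.viewer"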