This is a bit of a newbie question, but I've just gotten started with GCP provisioning using Terraform/Terragrunt, and I find the workflow for obtaining GCP credentials quite confusing. I've come from using AWS exclusively, where obtaining credentials and configuring them in the AWS CLI was quite straightforward.
Basically, the Google Cloud Provider documentation states that you should define a provider block like so:
provider "google" {
credentials = "${file("account.json")}"
project = "my-project-id"
region = "us-central1"
zone = "us-central1-c"
}
This credentials field implies that I (apparently) must generate a service account key and keep the JSON file somewhere on my filesystem.
However, if I run the command gcloud auth application-default login, this generates a token located at ~/.config/gcloud/application_default_credentials.json; alternatively, I can use gcloud auth login <my-username>. From there I can access the Google API (which is what Terraform is doing under the hood as well) from the command line using gcloud commands.
So why does the Terraform provider require a JSON file of a service account? Why can't it just use the credentials that the gcloud CLI tool is already using?
By the way, if I configure Terraform to point to the application_default_credentials.json file, I get the following errors:
Initializing modules...
Initializing the backend...
Error: Failed to get existing workspaces: querying Cloud Storage
failed: Get
https://www.googleapis.com/storage/v1/b/terraform-state-bucket/o?alt=json&delimiter=%2F&pageToken=&prefix=projects%2Fsomeproject%2F&prettyPrint=false&projection=full&versions=false:
private key should be a PEM or plain PKCS1 or PKCS8; parse error:
asn1: syntax error: sequence truncated
if I configure Terraform to point to the application_default_credentials.json file, I get the following errors:
The credentials field in the provider config expects a path to a service account key file, not a user account credentials file. If you want to authenticate with your user account, try omitting credentials and then running gcloud auth application-default login; if Terraform doesn't find your credentials file, you can set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to ~/.config/gcloud/application_default_credentials.json.
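For example, a minimal sketch of that setup (project ID and region are placeholders):

gcloud auth application-default login
export GOOGLE_APPLICATION_CREDENTIALS=~/.config/gcloud/application_default_credentials.json

provider "google" {
  # no credentials argument: the provider falls back to ADC
  project = "my-project-id"
  region  = "us-central1"
}

The export line is only needed if Terraform doesn't pick the file up on its own.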
Read here for more on the topic of service accounts vs. user accounts. For what it's worth, the Terraform docs explicitly advise against using application-default login:
This approach isn't recommended - some APIs are not compatible with credentials obtained through gcloud
Similarly GCP docs state the following:
Important: For almost all cases, whether you are developing locally or in a production application, you should use service accounts, rather than user accounts or API keys.
Change the credentials to point directly to the file location. Everything else looks good.
Example: credentials = "/home/scott/gcp/FILE_NAME"
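A slightly fuller sketch of that provider block (path and project ID are placeholders):

provider "google" {
  # the provider accepts either a path to the key file or its contents
  credentials = "/home/scott/gcp/FILE_NAME"
  project     = "my-project-id"
}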
Still, it is not recommended to use gcloud auth application-default login. The best approaches are described here:
https://www.terraform.io/docs/providers/google/guides/provider_reference.html#credentials-1
When trying to delete my Cloud Composer environment, it gets stuck complaining about insufficient permissions. I have deleted the storage bucket, GKE cluster, and the deployment according to this post:
Cannot delete Cloud Composer environment
And the service account is the standard compute SA.
DELETE operation on this environment failed 33 minutes ago with the following error message:
Could not configure workload identity: Permission iam.serviceAccounts.getIamPolicy is required to perform this operation on service account projects/-/serviceAccounts/"project-id"-compute@developer.gserviceaccount.com.
Even though I temporarily made the compute account a Project Owner and IAM Security Admin, it does not work.
And I've tried to delete it through the GUI, the gcloud CLI, and Terraform, without success. Any advice or things to try out will be appreciated :)
I got help from Google support: instead of addressing the SA projects/-/serviceAccounts/"project-id"-compute@developer.gserviceaccount.com, it was apparently the default service agent, which has the format service-"project-nr"@cloudcomposer-accounts.iam.gserviceaccount.com, that needed the Cloud Composer v2 API Service Agent Extension role.
Thank you for the kind replies!
The iam.serviceAccounts.getIamPolicy issue seems to be related to the credentials: your machine is having trouble retrieving the credential data.
You should set your credentials path variable again:
export GOOGLE_APPLICATION_CREDENTIALS=fullpath.json
Another option you can try is to run:
gcloud auth activate-service-account
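For instance (the account name and key path are placeholders):

export GOOGLE_APPLICATION_CREDENTIALS=/home/user/keys/terraform-sa.json
gcloud auth activate-service-account terraform-sa@my-project-id.iam.gserviceaccount.com --key-file=/home/user/keys/terraform-sa.json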
You can also add it to your script:
provider "google" {
credentials = file(var.service_account_file_path)
project = var.project_id
}
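Matching variable declarations would look roughly like this (descriptions are illustrative):

variable "service_account_file_path" {
  type        = string
  description = "Absolute path to the service account key file"
}

variable "project_id" {
  type        = string
  description = "ID of the GCP project"
}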
Don't forget that you need the correct roles to delete the Composer environment.
For more details, you can check:
https://cloud.google.com/composer/docs/delete-environments#gcloud
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/composer_environment
https://cloud.google.com/composer/docs/how-to/access-control?hl=es_419
I am trying to deploy some GCP resources using Terraform.
Executed gcloud auth login (authenticated with my GCP account, assigned GCP project).
Executed gcloud auth application-default login.
Assigned roles to my user account (user99@gmail.com) at project level and the Terraform service account at organisation level.
Now, when I run terraform scripts from my CLI on my local machine, I get the "Error 403: The caller does not have permissions" error.
My question is:
When running Terraform commands from my local machine's CLI, which account is Terraform using to deploy resources (user99@gmail.com or the Terraform service account)?
Is Terraform complaining about missing permissions for my user99@gmail.com or for the Terraform service account?
Is there a way to check which account is being used to deploy resources on GCP?
Without changing the project on gcloud auth login, can we deploy resources in other GCP projects?
If you're running on a (e.g. local) host (i.e. one that's not on GCP):
with gcloud, and you've run gcloud auth application-default login, then Terraform should be using that user's credentials (check with gcloud config get-value account);
and if the environment exports GOOGLE_APPLICATION_CREDENTIALS (and this correctly points to a service account's key), then the service account will be used instead.
If you're running Terraform on GCP (e.g. on Compute Engine) then the Compute Engine's service account will be automatically determined by ADCs (see below).
References:
Google Provider Configuration: Authentication
Application Default Credentials (ADCs): Finding Credentials Automatically
Is there a way to check which account is being used to deploy resources on GCP?
This simple code will show you which account is used to terraform resources:
data "google_client_openid_userinfo" "me" {
}
output "my-email" {
value = data.google_client_openid_userinfo.me.email
}
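After terraform apply, running terraform output my-email prints the email of the identity the provider is actually using.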
Without changing the project on gcloud auth login, can we deploy resources in other GCP projects?
Actually, you can rely on self_links, which contain the full path including the project, so you can reference different projects. However, the proper way is to use a different provider alias for each project, as sketched below.
Reference: https://www.terraform.io/language/providers/configuration#alias-multiple-provider-configurations
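A minimal sketch of the alias pattern (project IDs and the bucket name are placeholders):

provider "google" {
  project = "project-a"
}

provider "google" {
  alias   = "project_b"
  project = "project-b"
}

# resources opt in to the aliased provider explicitly
resource "google_storage_bucket" "in_project_b" {
  provider = google.project_b
  name     = "example-bucket-in-project-b"
  location = "US"
}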
While creating a backend in GCP using Terraform, I am getting the errors below.
Error loading state: Failed to open state file at gs://tf-state-demo/demo-terraform.state/default.tfstate: googleapi: got HTTP response code 403 with body: AccessDenied: Access denied. The service account does not have storage.objects.get access to the Google Cloud Storage object.
I have given the full Storage Admin role to the service account used for creating the bucket.
It's an issue with your environment configuration. Terraform uses the application default credentials (ADC), therefore you need to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the absolute path of your service account key file.
If you want to avoid using a service account key file (and you are right to, for security reasons), you can use your own credentials by running gcloud auth application-default login.
Note: the environment variable takes precedence over any other ADC mode.
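Concretely, the two options look like this (the key path is a placeholder):

# option 1: service account key file, via the environment variable
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/terraform-sa-key.json

# option 2: your own user credentials, with no key file on disk
unset GOOGLE_APPLICATION_CREDENTIALS
gcloud auth application-default login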
I am trying to push a Docker image to Google Cloud Container Registry.
Turns out, this is more difficult than actually developing the app.
I did everything the authentication page mentions (https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud-helper), but then I had to set up a key.
I followed this guide (https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating_service_account_keys) and downloaded my JSON key file.
I could not use Cloud Shell, so I installed gcloud on my local machine from Snap.
Finally issued this command:
gcloud auth activate-service-account ACCOUNT --key-file=KEY-FILE
Where:
ACCOUNT is the service account name in the format [USERNAME]@[PROJECT-ID].iam.gserviceaccount.com. You can view existing service accounts on the Service Accounts page of the Cloud Console or with the command gcloud iam service-accounts list.
KEY-FILE is the service account key file. See the Identity and Access Management (IAM) documentation for information about creating a key.
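For illustration only, a filled-in invocation would look like this (both values are hypothetical):

gcloud auth activate-service-account my-sa@my-project.iam.gserviceaccount.com --key-file=/home/user/my-project-key.json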
However, I get this error:
ERROR: (gcloud.auth.activate-service-account) Invalid value for [ACCOUNT]: The given account name does not match the account name in the key file. This argument can be omitted when using .json keys.
I don't know what is going on or why I am getting this error, since I am doing everything by the book.
Some help would be much appreciated.
For some reason, Packer fails to authenticate to AWS; using the plain aws CLI works, though, and my environment variables are correctly set:
AWS_ROLE_SESSION_NAME=...
AWS_SESSION_TOKEN=...
AWS_SECRET_ACCESS_KEY=...
AWS_ROLE=...
AWS_ACCESS_KEY_ID=...
AWS_CLI=...
AWS_ACCOUNT=...
AWS_SECURITY_TOKEN=...
I am authenticating using aws-saml, and Packer gives me the following:
Error querying AMI: AWS was not able to validate the provided access credentials (AuthFailure)
The problem lies in the way Packer authenticates with AWS.
Packer is written in Go and uses goamz for authentication. When creating a config using aws-saml, a couple of files are generated in ~/.aws: config and credentials.
It turns out this credentials file takes precedence over the environment variables, so if these credentials are incorrect and you rely on your environment variables, you will get the same error.
Since aws-saml needs aws_access_key_id and aws_secret_access_key to be defined, deleting the credentials file does not suffice in this case.
We had to copy these values into ~/.aws/config and delete the credentials file; then Packer was happy to use our environment variables.
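For reference, a sketch of what the resulting ~/.aws/config might look like (region and keys are AWS's documented example placeholders):

[default]
region = us-east-1
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY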
A ticket has been raised on GitHub for goamz so that the AWS CLI and Packer can have the same authentication behavior; feel free to vote it up if you have this issue too: https://github.com/mitchellh/goamz/issues/171