Terraform GCP Backend

While creating a backend in GCP using Terraform, I am getting the error below:
Error loading state: Failed to open state file at gs://tf-state-demo/demo-terraform.state/default.tfstate: googleapi: got HTTP response code 403 with body: AccessDenied: Access denied. Service account does not have storage.objects.get access to the Google Cloud Storage object.
I have given the full Storage Admin role to the service account used for creating the bucket.

It's an issue with your environment configuration. Terraform uses the Application Default Credentials (ADC), so you need to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the absolute path of your service account key file.
If you want to avoid using a service account key file (and you are right to, for security reasons), you can use your own credentials by running gcloud auth application-default login.
Note: the environment variable takes precedence over all other ADC modes.
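For example, a minimal sketch of the two options (the key file path here is a placeholder):
# Option 1: point ADC at a service account key file (illustrative path)
export GOOGLE_APPLICATION_CREDENTIALS="/home/me/keys/terraform-sa.json"
# Option 2: use your own user credentials instead of a key file;
# unset the variable first, since it takes precedence over other ADC modes
unset GOOGLE_APPLICATION_CREDENTIALS
gcloud auth application-default login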

Related

How to stop running my terraform commands with a service account

gcloud auth list
Credentialed Accounts
ACTIVE  ACCOUNT
*       xxxx@xxxx.com
To set the active account, run:
$ gcloud config set account `ACCOUNT`
The credentials listed here are correct and make sense, but when I run Terraform hoping it will use that user account, the error says that a service account does not have the permissions needed in this environment. Which makes total sense! But I can't figure out how to stop Terraform from running as this service account instead of as my own active user account.
Error: Error loading state: Failed to open state file at
gs://dxxxxdefault.tfstate: googleapi: got HTTP response code 403 with
body: AccessDenied: Access denied.
terraform-win@xxxx.iam.gserviceaccount.com
does not have storage.objects.get access to the Google Cloud Storage
object.
The error message says that your service account terraform-win@xxxx.iam.gserviceaccount.com cannot access the Terraform state in Google Cloud Storage; you just need to grant the service account the storage.objects.get permission to fix the error.
If you don't want Terraform to use this service account, remove the impersonation settings in your Terraform configuration and unset the environment variable GOOGLE_APPLICATION_CREDENTIALS.
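As a sketch, one way to grant that access on the state bucket (the bucket name is a placeholder, and the role choice is an assumption):
# roles/storage.objectViewer would cover storage.objects.get, but Terraform
# also writes state, so roles/storage.objectAdmin is the usual choice
gsutil iam ch \
  serviceAccount:terraform-win@xxxx.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://your-state-bucket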

"The caller does not have permission" when signing GCP storage file URL from AWS lambda

I am trying to sign URLs in GCP Storage from AWS EC2 or Lambda. I have generated a JSON file for permissions, providing my AWS account ID and the role that is given to EC2 or Lambda. When I call the signed URL, even with Storage Admin or Owner permission, I get: Error: The caller does not have permission.
I used the code provided by GCP documentation.
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

const options = {
  version: 'v4',
  action: 'read',
  expires: Date.now() + 15 * 60 * 1000, // 15 minutes
};

// Get a v4 signed URL for reading the file
const [url] = await storage
  .bucket(bucketName)
  .file(fileName)
  .getSignedUrl(options);
Can anybody tell me what I missed? What is wrong?
*** Update:
I am creating a service account, granting this service account Storage Admin on my project, then creating a pool in Workload Identity Pools, setting AWS and my AWS account ID, then granting access to my AWS identities matching the role, downloading the JSON, and setting the environment variables GOOGLE_APPLICATION_CREDENTIALS (path to my JSON file) and GOOGLE_CLOUD_PROJECT (my project ID). How do I correctly load that clientLibraryConfig.json file to run the functions I need?
*** Update 2:
My clientLibraryConfig JSON has the following content:
{
  "type": "external_account",
  "audience": "..",
  "subject_token_type": "..",
  "service_account_impersonation_url": "..",
  "token_url": "..",
  "credential_source": {
    "environment_id": "aws1",
    "region_url": "..",
    "url": "..",
    "regional_cred_verification_url": ".."
  }
}
How can I generate an access token with the Node.js SDK from this config file to access GCP Storage from AWS EC2?
You have to set up the following permissions for the IAM service account:
Storage Object Creator: this is to create signed URLs.
Service Account Token Creator: this role enables impersonation of service accounts to create OAuth2 access tokens, sign blobs, or sign JWTs (see the sketch below).
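As a sketch, the Token Creator role can be granted on the service account itself (all names here are placeholders):
# Allow the calling identity to mint tokens / sign blobs as the signer service account
gcloud iam service-accounts add-iam-policy-binding \
    signer-sa@my-project.iam.gserviceaccount.com \
    --member="serviceAccount:caller-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/iam.serviceAccountTokenCreator"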
Also, you can try signing the URL locally with the service account.
You can use an existing private key for a service account. The key can be in JSON or PKCS12 format.
Use the gsutil signurl command and pass the path to the private key from the previous step, along with the name of the bucket and object.
For example, if you use a key stored in the Desktop folder, the following command generates a signed URL for users to view the object cat.jpeg for 10 minutes.
gsutil signurl -d 10m Desktop/private-key.json gs://example-bucket/cat.jpeg
If successful, the response should look like this:
URL HTTP Method Expiration Signed URL
gs://example-bucket/cat.jpeg GET 2018-10-26 15:19:52 https://storage.googleapis.
com/example-bucket/cat.jpeg?x-goog-signature=2d2a6f5055eb004b8690b9479883292ae74
50cdc15f17d7f99bc49b916f9e7429106ed7e5858ae6b4ab0bbbdb1a8ccc364dad3a0da2caebd308
87a70c5b2569d089ceb8afbde3eed4dff5116f0db5483998c175980991fe899fbd2cd8cb813b0016
5e8d56e0a8aa7b3d7a12ee1baa8400611040f05b50a1a8eab5ba223fe5375747748de950ec7a4dc5
0f8382a6ffd49941c42498d7daa703d9a414d4475154d0e7edaa92d4f2507d92c1f7e811a7cab64d
f68b5df4857589259d8d0bdb5dc752bdf07bd162d98ff2924f2e4a26fa6b3cede73ad5333c47d146
a21c2ab2d97115986a12c28ff37346d6c2ca83e5618ec8ad95632710b489b75c35697d781c38e&
x-goog-algorithm=GOOG4-RSA-SHA256&x-goog-credential=example%40example-project.
iam.gserviceaccount.com%2F20181026%2Fus%2Fstorage%2Fgoog4_request&x-goog-date=
20201026T211942Z&x-goog-expires=3600&x-goog-signedheaders=host
The signed URL is the string that starts with https://storage.googleapis.com, and it is likely to span multiple lines. Anyone can use the URL to access the associated resource (in this case, cat.jpeg) during the designated time frame (in this case, 10 minutes).
So if this works locally, then you can start configuring Workload Identity Federation to impersonate your service account. In this link, you will find a guide to deploy it.
To access resources from AWS using Workload Identity Federation, you will need to review whether the following requirements have already been configured:
The workload identity pool has been created.
AWS has been added as an identity provider in the workload identity pool (the Google organization policy needs to allow federation from AWS).
The permissions to impersonate a service account have been granted to the external account.
I will add this guide to configure Workload Identity Federation.
Once the previous requirements have been completed, you will need to generate the service account credential file. This file contains only non-sensitive metadata that instructs the library on how to retrieve external subject tokens and exchange them for service account tokens. As you mentioned, the file could be a config.json, generated by running the following command:
# Generate an AWS configuration file.
gcloud iam workload-identity-pools create-cred-config \
    projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$POOL_ID/providers/$AWS_PROVIDER_ID \
    --service-account $SERVICE_ACCOUNT_EMAIL \
    --aws \
    --output-file /path/to/generated/config.json
Where the following variables need to be substituted:
$PROJECT_NUMBER: the Google Cloud project number.
$POOL_ID: the workload identity pool ID.
$AWS_PROVIDER_ID: the AWS provider ID.
$SERVICE_ACCOUNT_EMAIL: the email of the service account to impersonate.
Once you generate the JSON credentials configuration file for your external identity, you can store the path at the GOOGLE_APPLICATION_CREDENTIALS environment variable.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/config.json
So, with this, the library can automatically choose the right type of client and initialize the credentials from the configuration file. Please note that the service account will also need the roles/browser role when using external identities with Application Default Credentials in Node.js, or you can pass the project ID explicitly to avoid the need to grant roles/browser to the service account, as shown in the code below:
const {GoogleAuth} = require('google-auth-library');

async function main() {
  const auth = new GoogleAuth({
    scopes: 'https://www.googleapis.com/auth/cloud-platform',
    // Pass the project ID explicitly to avoid the need to grant `roles/browser`
    // to the service account or enable the Cloud Resource Manager API on the project.
    projectId: 'CLOUD_RESOURCE_PROJECT_ID',
  });
  const client = await auth.getClient();
  const projectId = await auth.getProjectId();
  // List all buckets in the project.
  const url = `https://storage.googleapis.com/storage/v1/b?project=${projectId}`;
  const res = await client.request({url});
  console.log(res.data);
}

main().catch(console.error);
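If the configuration file and the impersonation permissions are set up correctly, res.data should list the buckets in the project, confirming that the exchanged token can access Cloud Storage from AWS.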

Terraform permissions issue when deploying from GCP gcloud

I am trying to deploy some GCP resources using terraform.
Executed gcloud auth login (authenticated with my GCP account, assigned the GCP project).
Executed gcloud auth application-default login.
Assigned roles to my user account (user99@gmail.com) at the project level and to the Terraform service account at the organisation level.
Now, when I run terraform scripts from my CLI on my local machine, I get the "Error 403: The caller does not have permissions" error.
My questions are:
When running terraform commands from my local machine's CLI, which account is Terraform using to deploy resources (user99@gmail.com or the Terraform service account)?
Is Terraform complaining about missing permissions for my user99@gmail.com or for the Terraform service account?
Is there a way to check which account is being used to deploy resources on GCP?
Without changing the project in gcloud auth login, can we deploy resources in other GCP projects?
If you're running on a (e.g. local) host (i.e. one that's not on GCP):
with gcloud, and you've run gcloud auth application-default login, then Terraform should be using that user's credentials (gcloud config get-value account);
and if the environment exports GOOGLE_APPLICATION_CREDENTIALS (and this correctly points to a service account's key), then the service account will be used instead.
If you're running Terraform on GCP (e.g. on Compute Engine), then the Compute Engine service account will be determined automatically by ADC (see below).
Google Provider Configuration Authentication
Application Default Credentials (ADCs) Finding Credentials Automatically
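To quickly check from a shell which identity is likely to win, a minimal sketch:
# The user account gcloud itself is configured with
gcloud config get-value account
# If this is set and points to a service account key,
# it takes precedence over the user's ADC
echo "$GOOGLE_APPLICATION_CREDENTIALS"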
Is there a way to check which account is being used to deploy resources on GCP?
This simple code will show you which account Terraform is using to provision resources:
data "google_client_openid_userinfo" "me" {
}
output "my-email" {
value = data.google_client_openid_userinfo.me.email
}
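After terraform apply, the my-email output shows the email of the identity Terraform authenticated as, i.e. whether your user account or the service account is actually in effect.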
Without changing the project on gcloud auth login, can we deploy resources in other GCP projects?
Actually, you can rely on self_links, which contain the full path including the project, so you can reference resources in different projects. However, the proper way is to use a different provider alias for each project (i.e. a separate provider "google" block per project, each with its own alias and project).
Reference: https://www.terraform.io/language/providers/configuration#alias-multiple-provider-configurations

Google Cloud credentials with Terraform

This is a bit of a newbie question, but I've just gotten started with GCP provisioning using Terraform/Terragrunt, and I find the workflow for obtaining GCP credentials quite confusing. I've come from using AWS exclusively, where obtaining credentials and configuring them in the AWS CLI was quite straightforward.
Basically, the Google Cloud Provider documentation states that you should define a provider block like so:
provider "google" {
credentials = "${file("account.json")}"
project = "my-project-id"
region = "us-central1"
zone = "us-central1-c"
}
This credentials field shows that I (apparently) must generate a service account key and keep the JSON somewhere on my filesystem.
However, if I run the command gcloud auth application-default login, this generates a token located at ~/.config/gcloud/application_default_credentials.json; alternatively, I can also use gcloud auth login <my-username>. From there I can access the Google API (which is what Terraform is doing under the hood as well) from the command line using gcloud commands.
So why does the Terraform provider require a JSON file with a service account key? Why can't it just use the credentials that the gcloud CLI tool is already using?
By the way, if I configure Terraform to point to the application_default_credentials.json file, I get the following errors:
Initializing modules...
Initializing the backend...
Error: Failed to get existing workspaces: querying Cloud Storage
failed: Get
https://www.googleapis.com/storage/v1/b/terraform-state-bucket/o?alt=json&delimiter=%2F&pageToken=&prefix=projects%2Fsomeproject%2F&prettyPrint=false&projection=full&versions=false:
private key should be a PEM or plain PKCS1 or PKCS8; parse error:
asn1: syntax error: sequence truncated
if I configure Terraform to point to the application_default_credentials.json file, I get the following errors:
The credentials field in the provider config expects a path to a service account key file, not a user account credentials file. If you want to authenticate with your user account, try omitting credentials and then running gcloud auth application-default login; if Terraform doesn't find your credentials file, you can set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to ~/.config/gcloud/application_default_credentials.json.
Read here for more on the topic of service accounts vs user accounts. For what it's worth, the Terraform docs explicitly advise against using application-default login:
This approach isn't recommended: some APIs are not compatible with credentials obtained through gcloud.
Similarly GCP docs state the following:
Important: For almost all cases, whether you are developing locally or in a production application, you should use service accounts, rather than user accounts or API keys.
Change the credentials to point directly to the file location. Everything else looks good.
Example: credentials = "/home/scott/gcp/FILE_NAME"
Still, it is not recommended to use gcloud auth application-default login. The best approaches are described here:
https://www.terraform.io/docs/providers/google/guides/provider_reference.html#credentials-1

Vault GCP Project Level Role Binding

I am trying to apply the role binding below to grant the Storage Admin Role to a GCP roleset in Vault.
resource "//cloudresourcemanager.googleapis.com/projects/{project_id_number}" {
roles = [
"roles/storage.admin"
]
}
I want to grant access at the project level, not on a specific bucket, so that the GCP roleset can access and read/write to the Google Container Registry.
When I try to create this roleset in Vault, I get this error:
Error writing data to gcp/roleset/my-roleset: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/gcp/roleset/my-roleset
Code: 400. Errors:
* unable to set policy: googleapi: Error 403: The caller does not have permission
My Vault cluster is running in a GKE cluster which has OAuth Scopes for all Cloud APIs, I am the project owner, and the service account Vault is using has the following permissions:
Cloud KMS CryptoKey Encrypter/Decrypter
Service Account Actor
Service Account Admin
Service Account Key Admin
Service Account Token Creator
Logs Writer
Storage Admin
Storage Object Admin
I have tried giving the service account both Editor and Owner roles, and I still get the same error.
Firstly, am I using the correct resource to create a roleset for the Storage Admin Role at the project level?
Secondly, if so, what could be causing this permission error?
I had previously recreated the cluster and skipped this step:
vault write gcp/config credentials=@credentials.json
Adding the key file fixed this.
There is also a chance that following the steps to create a custom role here and adding that custom role played a part.
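For reference, once gcp/config has credentials, a roleset like the one in the question can be written as follows (a sketch; the project name, token scopes, and bindings file are placeholders, with bindings.hcl holding the resource block from the question):
# Write the roleset with a project-level binding
vault write gcp/roleset/my-roleset \
    project="my-project" \
    secret_type="access_token" \
    token_scopes="https://www.googleapis.com/auth/cloud-platform" \
    bindings=@bindings.hcl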