Error 403: Storage objects forbidden in GCP - google-cloud-platform

I’m trying to create a brand new sandbox project in GCP to make deployment and upgrades easier. Using Terraform I am creating a GKE cluster. The issue is that the Terraform scripts are written for the service accounts of a project named, let’s say, NP-H. Now I am trying to create clusters using the same scripts in a project named, let’s say, NP-S.
I ran terraform init and got the following error:
Error 403: XXX.serviceaccount does not have storage.objects.create access to Google Cloud Storage objects., forbidden.
storage: object doesn’t exist.
Now, is the problem with the Terraform script or with the service account permissions?
If it is the Terraform script, what changes do I need to make?
PS: I was able to create buckets and upload objects to Cloud Storage…

There are two ways you can provide credentials:
provider "google" {
credentials = file("service-account-file.json")
project = "PROJECT_ID"
region = "us-central1"
zone = "us-central1-c"
}
or
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"
Make sure the service account is from project NP-S (Menu > IAM & Admin > Service Accounts) and has the proper permissions (Menu > IAM & Admin > IAM > ...).
cat service-account-file.json
and make sure the client_email is from the correct project. You can do a quick test with the Owner/Editor role to isolate the issue if need be, as those roles have most permissions.
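If the 403 is coming from the GCS backend bucket, granting the NP-S service account object-level access on that bucket is usually enough. A minimal sketch, assuming hypothetical bucket and service account names:
# Hypothetical names: replace the bucket and member with your own.
resource "google_storage_bucket_iam_member" "tf_state_writer" {
  bucket = "np-s-terraform-state"
  role   = "roles/storage.objectAdmin" # covers storage.objects.create
  member = "serviceAccount:terraform@np-s.iam.gserviceaccount.com"
}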

If you're using service account impersonation, do this:
terraform {
  backend "gcs" {
    bucket                      = "<your-bucket>"
    prefix                      = "<prefix>/terraform.tfstate"
    impersonate_service_account = "<your-service-account>@<your-project-id>.iam.gserviceaccount.com"
  }
}
Source: Updating remote state files with a service account
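Note that for impersonation to work, the identity running Terraform also needs the Service Account Token Creator role on the service account being impersonated. A minimal sketch of that binding, with hypothetical names:
# Hypothetical names: replace the service account and member with your own.
resource "google_service_account_iam_member" "allow_impersonation" {
  service_account_id = "projects/<your-project-id>/serviceAccounts/<your-service-account>@<your-project-id>.iam.gserviceaccount.com"
  role               = "roles/iam.serviceAccountTokenCreator"
  member             = "user:you@example.com"
}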

Related

How To Solve: Error when reading or editing Project Service Foo/container.googleapis.com: googleapi: Error 403

I'm new to GCP and I'm trying to enable a number of APIs via Terraform.
variable "gcp_service_list" {
description ="Projectof apis"
type = list(string)
default = [
"cloudresourcemanager.googleapis.com",
"serviceusage.googleapis.com"
]
}
resource "google_project_service" "gcp" {
for_each = toset(var.gcp_service_list)
project = "project-id"
service = each.key
}
but I keep running into the error
Error when reading or editing Project Service Foo/compute.googleapis.com: googleapi: Error 403: The caller does not have permission, forbidden
What permissions do I need to grant my service account in order for it to be able to do this please?
In order to enable service APIs in GCP, your user or service account which is being used to run Terraform needs to have the following role:
roles/serviceusage.serviceUsageAdmin
So you will either have to grant the user or SA the role above from the console, or, if you manage IAM bindings with Terraform, a role-binding resource can be used as well (see the sketch below).
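A minimal sketch of that binding, assuming a hypothetical project ID and service account email:
# Hypothetical names: replace the project and member with your own.
resource "google_project_iam_member" "service_usage_admin" {
  project = "project-id"
  role    = "roles/serviceusage.serviceUsageAdmin"
  member  = "serviceAccount:terraform@project-id.iam.gserviceaccount.com"
}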
From a Terraform authentication perspective, if you are using a user account, make sure you are properly authenticating to GCP from the terminal using the following command:
gcloud auth application-default login
If you are using a service account, you will need to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to point at the JSON key file.
For Terraform authentication reference: https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference

Sharing custom images within an organization in GCP

I am trying to share a custom image in GCP between projects in the organization:
1) Project A
2) Project B
All my custom images are in Project A, and I would like to share the images from Project A with Project B.
As per the documentation, I ran the following command to share the images with Project B:
gcloud projects add-iam-policy-binding projecta --member serviceAccount:xxxxxx@cloudservices.gserviceaccount.com --role roles/compute.imageUser
I am using Terraform to provision the instances. In Terraform, I am specifying that the image should come from Project A:
boot_disk {
  initialize_params {
    image = "projects/project_A/global/images/custom_image"
  }
}
I am getting the below error
Error: Error creating instance: googleapi: Error 403: Required 'compute.images.useReadOnly' permission for 'projects/project_A/global/images/custom_image', forbidden
Can someone please help me out....
I guess the documentation you followed is for Deployment Manager, not for Terraform. The command you ran granted the role to the service account xxxxxx@cloudservices.gserviceaccount.com, but Terraform is not using that account by default.
You need to make sure Terraform has enough permission. You may supply xxxxxx@cloudservices.gserviceaccount.com to Terraform, or create a new service account for Terraform and grant roles/compute.imageUser to it, as in the sketch below.
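A minimal sketch of that grant, assuming a hypothetical Terraform service account and using the project names from the question as placeholders:
# Hypothetical member: replace it with the service account Terraform actually runs as.
resource "google_project_iam_member" "image_user" {
  project = "project_A"
  role    = "roles/compute.imageUser"
  member  = "serviceAccount:terraform@project_B.iam.gserviceaccount.com"
}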
You've just done the first step, which is granting a service account the proper permissions to share your images across your organisation. The roles/compute.imageUser role is required to do it.
Your Terraform config also looks OK; you just have to make sure the self_link to your image is correct (refer to this documentation to make sure the image value in the Terraform config is OK, or resolve it with a data source as sketched below).
Also make sure you're providing proper service account credentials to Terraform, as stated in @Ken Hung's answer.
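A minimal sketch of that lookup, assuming the image name from the question; the data source resolves the image in Project A and the instance reuses its self_link:
# Hypothetical example: names are placeholders taken from the question.
data "google_compute_image" "custom" {
  name    = "custom_image"
  project = "project_A"
}

resource "google_compute_instance" "vm" {
  name         = "example-instance"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = data.google_compute_image.custom.self_link
    }
  }

  network_interface {
    network = "default"
  }
}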

Need clarification on using Terraform to manage Google Cloud projects

I read this article on using Terraform with GCP:
https://cloud.google.com/community/tutorials/managing-gcp-projects-with-terraform
I almost have it working, but I ran into some issues and wanted some clarification.
I made a terraform admin project, and made a service account in that project with the roles/viewer and roles/storage.admin roles. I then made a bucket in the admin project and use that as the terraform backend storage.
terraform {
  backend "gcs" {
    bucket      = "test-terraform-admin-1"
    prefix      = "terraform/state"
    credentials = "service-account.json"
  }
}
I then use that service account to create another project and provision resources in that project:
provider "google" {
alias = "company_a"
credentials = "./service-account.json"
region = "us-east4"
zone = "us-east4-c"
version = "~> 2.12"
}
resource "google_project" "project" {
name = var.project_name
project_id = "${random_id.id.hex}"
billing_account = "${var.billing_account}"
org_id = "${var.org_id}"
}
I thought that it would be sufficient to enable services for the project created with terraform like this:
resource "google_project_service" "container_service" {
project = "${google_project.project.project_id}"
service = "container.googleapis.com"
}
However, I got an error when terraform tried to create my gke cluster:
resource "google_container_cluster" "primary" {
project = "${google_project.project.project_id}"
name = "main-gke-cluster"
node_pool {
....
}
network = "${google_compute_network.vpc_network.self_link}"
}
It said that the container service was not enabled yet for my project, and it referenced the terraform admin project ID (not the project created with the google_project resource!). It seems that I have to enable the services on the terraform admin project in order for the service account to access those services on any projects created by the service account.
In fact, I can get it working without ever enabling the container, servicenetworking, etc. services on the created project, as long as they are enabled on the terraform admin project.
Is there some parent/child relationship between the projects where services in one project are inherited by projects created from a service account in the parent project? This seems to be the case, but I cannot find any documentation about this anywhere.
Thanks for listening!
In my company, we created a folder and a service account in that folder. Then we created a project for Terraform, and each Terraform job uses the folder-level service account for creating projects and resources in this folder.
As roles and permissions are inherited from the folder down to lower levels (folders or projects), we don't have issues creating resources.
I don't know if it helps your specific issue, but for us it solved a lot and simplified service account management.
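A minimal sketch of such a folder-level grant, assuming a hypothetical folder ID and service account:
# Hypothetical IDs: replace the folder and service account with your own.
resource "google_folder_iam_member" "terraform_project_creator" {
  folder = "folders/123456789012"
  role   = "roles/resourcemanager.projectCreator"
  member = "serviceAccount:terraform@tf-admin-project.iam.gserviceaccount.com"
}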

Error while configuring Terraform S3 Backend

I am configuring an S3 backend through Terraform for AWS.
terraform {
  backend "s3" {}
}
On providing the values for the S3 backend (bucket name, key & region) when running the terraform init command, I get the following error:
"Error configuring the backend "s3": No valid credential sources found for AWS Provider. Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error
then run this command again."
I have declared the access & secret keys as variables in providers.tf. While running the terraform init command it didn't prompt for any access key or secret key.
How do I resolve this issue?
When running terraform init you have to add -backend-config options for your credentials (AWS keys). So your command should look like:
terraform init -backend-config="access_key=<your access key>" -backend-config="secret_key=<your secret key>"
I also had the same issue. The easiest and most secure way to fix it is to configure an AWS profile. Even if you have properly set AWS_PROFILE in your project, you have to mention it again in your backend.tf.
My problem was that I had already set up the AWS provider in the project as below, and it was working properly:
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}"
}
But at the end of the project I was trying to configure the S3 backend, so I ran terraform init and got the same error message:
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
Note that the provider configuration is not enough for the Terraform backend configuration; you have to mention the AWS_PROFILE in the backend file as well.
Full Solution
I'm using the latest Terraform version at this moment, v0.13.5.
Please see the provider.tf:
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}" # lets say profile is my-profile
}
For example, if your AWS_PROFILE is my-profile, then your backend.tf should be as below:
terraform {
  backend "s3" {
    bucket  = "my-terraform--bucket"
    encrypt = true
    key     = "state.tfstate"
    region  = "ap-southeast-2"
    profile = "my-profile" # you have to give the profile name here, not the variable ("${var.AWS_PROFILE}")
  }
}
Then run terraform init.
I faced a similar problem after renaming a profile in my AWS credentials file. Deleting the .terraform folder and running terraform init again resolved the problem.
If you have already set up a custom AWS profile, use the option below:
terraform init -backend-config="profile=your-profile-name"
If there is no custom profile, then make sure to add access_key and secret_key to the default profile and try again.
Don't - add variables for secrets. It's a really, really bad practice and unnecessary.
Terraform will pick up your default AWS profile, or use whatever AWS profile you set AWS_PROFILE to. If this is running in AWS, you should be using an instance profile. Roles can be used too.
If you hardcode the profile into your tf code, then you have to have the same profile names wherever you want to run this script and change it for every different account it's run against.
Don't - do all this cmdline stuff, unless you like wrapper scripts or typing.
Do - add yourself a remote_state.tf that looks like:
terraform {
  backend "s3" {
    bucket = "WHAT-YOU-CALLED-YOUR-STATEBUCKET"
    key    = "mykey/terraform.tfstate"
    region = "eu-west-1"
  }
}
Now when you run terraform init:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
The values in the provider aren't relevant to the permissions for the remote_state and could even be for different AWS accounts (or even another cloud provider); see the sketch below.
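A minimal sketch of that separation, assuming a hypothetical cross-account role; the backend authenticates with whatever the environment provides, while the provider targets a different account:
# Backend: uses the default profile, AWS_PROFILE, or an instance profile.
terraform {
  backend "s3" {
    bucket = "WHAT-YOU-CALLED-YOUR-STATEBUCKET"
    key    = "mykey/terraform.tfstate"
    region = "eu-west-1"
  }
}

# Provider: can deploy into a completely different account via a role (hypothetical ARN).
provider "aws" {
  region = "eu-west-1"
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/deploy-role"
  }
}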
I had the same issue, and I was using export AWS_PROFILE as I always had. I checked my credentials, which were correct.
Re-running aws configure fixed it for some reason.
I had the same issue; below is my use case.
AWS account 1: Management account (an IAM user is created here; this user will assume roles into the Dev and Prod accounts)
AWS account 2: Dev environment account (a role is created here trusting the Management account user)
AWS account 3: Prod environment account (a role is created here trusting the Management account user)
So I created dev-backend.conf and prod-backend.conf files with the content below. The main point that fixed this issue was passing the role_arn value in the S3 backend configuration.
Content of the dev-backend.conf and prod-backend.conf files:
bucket = "<your bucket name>"
key = "< your key path>"
region = "<region>"
dynamodb_table = "<db name>"
encrypt = true
profile = "< your profile>" # this profile has access key and secret key of the IAM user created in Management account
role_arn = "arn:aws:iam::<dev/prod account id>:role/<dev/prod role name >"
Initialise Terraform with the dev S3 bucket config (moving from local state to S3 state):
$ terraform init -reconfigure -backend-config="dev-backend.conf"
Apply using the dev environment variables file:
$ terraform apply --var-file="dev-app.tfvars"
Re-initialise with the prod S3 bucket config (switching the state from the dev S3 bucket to the prod S3 bucket):
$ terraform init -reconfigure -backend-config="prod-backend.conf"
Apply using the prod environment variables file:
$ terraform apply --var-file="prod-app.tfvars"
I decided to put an end to this issue once and for all, since there are a bunch of different topics about this same issue. It mainly arises because of the different forms of authentication used while developing locally versus running a CI/CD pipeline. People tend to mix different authentication options together without taking the order of precedence into account.
When running locally you should definitely use the AWS CLI, since you don't want to have to set access keys every time you run a build. If you happen to work with multiple accounts locally, you can tell the AWS CLI to switch profiles:
export AWS_PROFILE=my-profile
When you want to run (the same code) in a CI/CD pipeline (e.g. Github Actions, CircleCI), all you have to do is export the required environment variables within your build pipeline:
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_REGION="eu-central-1"
This only works if you do not set any hard-coded configuration within the provider block, because the AWS Terraform provider documentation describes the order of authentication: parameters in the provider configuration are evaluated first, then environment variables.
Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {}

terraform {
  backend "s3" {}
}
Before you plan or apply this, you'll have to initialize the backend:
terraform init \
  -backend-config="bucket=${TFSTATE_BUCKET}" \
  -backend-config="key=${TFSTATE_KEY}" \
  -backend-config="region=${TFSTATE_REGION}"
Best practices:
When running locally use the aws cli to authenticate. When running in a build pipeline, use environment variables to authenticate.
Keep your Terraform configuration as clean as possible, so try to avoid hard-coded settings and keep the provider block empty, so that you'll be able to authenticate dynamically.
Preferably also keep the S3 backend configuration empty and initialize it from environment variables or a configuration file (see the sketch after this list).
The Terraform documentation recommends including .terraform.lock.hcl in your version control so that you can discuss potential changes to your external dependencies via code review.
Setting AWS_PROFILE in a build pipeline is basically useless. Most of the time you do not have the AWS CLI installed during runtime. If you somehow need it, then you should probably think about splitting this into separate build pipelines.
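A minimal sketch of that file-based initialization, assuming a hypothetical backend.hcl kept next to the Terraform code and passed in with terraform init -backend-config=backend.hcl:
# backend.hcl (hypothetical file with a partial backend configuration)
bucket = "my-terraform-state-bucket"
key    = "my-app/terraform.tfstate"
region = "eu-central-1"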
Personally, I like to use Terragrunt as a wrapper around Terraform. One of the main reasons is that it enables you to dynamically set the backend configuration. This is not possible in plain Terraform.
If you are using LocalStack, the only thing that worked for me was this tip: https://github.com/localstack/localstack/issues/3982#issuecomment-1107664517
backend "s3" {
bucket = "curso-terraform"
key = "terraform.tfstate"
region = "us-east-1"
endpoint = "http://localhost:4566"
skip_credentials_validation = true
skip_metadata_api_check = true
force_path_style = true
dynamodb_table = "terraform_state"
dynamodb_endpoint = "http://localhost:4566"
encrypt = true
}
And don't forget to add the endpoint in provider:
provider "aws" {
region = "us-east-1"
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
ec2 = "http://localhost:4566"
s3 = "http://localhost:4566"
dynamodb = "http://localhost:4566"
}
}
In my credentials file, two profile names listed one after another caused the error for me. When I removed the second profile name, the issue was resolved.
I experienced this issue when trying to apply some Terraform changes to an existing project. The Terraform commands had been working fine, and I had even worked on the project a couple of hours before the issue started.
I was encountering the following errors:
❯ terraform init
Initializing modules...
Initializing the backend...
╷
│ Error: error configuring S3 Backend: IAM Role (arn:aws:iam::950456587296:role/MyRole) cannot be assumed.
│
│ There are a number of possible causes of this - the most common are:
│ * The credentials used in order to assume the role are invalid
│ * The credentials do not have appropriate permission to assume the role
│ * The role ARN is not valid
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I had my organization's VPN turned on when running the Terraform commands, and this caused the commands to fail.
Here's how I fixed it: my VPN caused the issue (this may not apply to everyone), and turning off the VPN fixed it.

Vault GCP Project Level Role Binding

I am trying to apply the role binding below to grant the Storage Admin Role to a GCP roleset in Vault.
resource "//cloudresourcemanager.googleapis.com/projects/{project_id_number}" {
roles = [
"roles/storage.admin"
]
}
I want to grant access at the project level, not on a specific bucket, so that the GCP roleset can read from and write to the Google Container Registry.
When I try to create this roleset in Vault, I get this error:
Error writing data to gcp/roleset/my-roleset: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/gcp/roleset/my-roleset
Code: 400. Errors:
* unable to set policy: googleapi: Error 403: The caller does not have permission
My Vault cluster is running in a GKE cluster which has OAuth scopes for all Cloud APIs; I am the project owner, and the service account Vault is using has the following roles:
Cloud KMS CryptoKey Encrypter/Decrypter
Service Account Actor
Service Account Admin
Service Account Key Admin
Service Account Token Creator
Logs Writer
Storage Admin
Storage Object Admin
I have tried giving the service account both Editor and Owner roles, and I still get the same error.
Firstly, am I using the correct resource to create a roleset for the Storage Admin Role at the project level?
Secondly, if so, what could be causing this permission error?
I had previously recreated the cluster and skipped this step:
vault write gcp/config credentials=@credentials.json
Adding the key file fixed this.
There is also a chance that following the steps to create a custom role here and adding that custom role played a part.
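For reference, the same project-level binding can also be managed with the Vault Terraform provider once gcp/config has credentials. A minimal sketch, assuming the GCP secrets engine is mounted at gcp and reusing the placeholders from the question:
# Hypothetical sketch: replace the project ID and project number with your own values.
resource "vault_gcp_secret_roleset" "my_roleset" {
  backend      = "gcp"
  roleset      = "my-roleset"
  secret_type  = "access_token"
  project      = "{project_id}"
  token_scopes = ["https://www.googleapis.com/auth/cloud-platform"]

  binding {
    resource = "//cloudresourcemanager.googleapis.com/projects/{project_id_number}"
    roles    = ["roles/storage.admin"]
  }
}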