When I run terraform apply, I get the error below.
{"error":{"code":403,"message":"The billing account for the owning project is disabled in state absent","errors":[{"message":"The billing account for the owning project is disabled in state absent","domain":"global","reason":"accountDisabled","locationType":"header","location":"Authorization"}]}}: timestamp=2022-12-31T00:04:43.690-0500
I have added a billing account to the project.
I am able to run gcloud commands from the shell without any errors using the same service account.
terraform_gcp % gcloud auth activate-service-account --key-file=sakey.json
Activated service account credentials for: [gcp-terraform@saproject.iam.gserviceaccount.com]
terraform_gcp % gsutil ls
gs://mygcptfstatebucket/
terraform_gcp % gcloud compute instances list
Listed 0 items.
My main.tf is:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.47.0"
    }
  }
}

provider "google" {
  project = "my-gcp-project"
  region  = "us-east1"
  zone    = "us-east1-b"
}
Any insights into this error?
Update: I tried adding the block below and it's working, so the issue seems to be with my configuration.
data "google_project" "project_name" {
  project_id = "projectid"
}
and referencing project_id in the resource block.
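For reference, here is a sketch of that workaround in full; the storage bucket is a hypothetical resource added only to show how the looked-up project_id gets referenced:

```hcl
# Look the project up explicitly instead of relying on the provider default.
data "google_project" "project_name" {
  project_id = "projectid"
}

# Hypothetical resource illustrating the reference; any resource with a
# project argument can point at the data source the same way.
resource "google_storage_bucket" "example" {
  name     = "example-bucket"
  location = "US"
  project  = data.google_project.project_name.project_id
}
```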
Can we automate GCP billing export into BQ through Terraform?
I tried the Terraform code below, but it's not working, so I'm not sure whether exporting GCP billing data into BigQuery is possible through Terraform.
resource "google_logging_billing_account_sink" "billing-sink" {
  name                   = "billing-sink"
  description            = "Billing export"
  billing_account        = "**********"
  unique_writer_identity = true
  destination            = "bigquery.googleapis.com/projects/${var.project_name}/datasets/${google_bigquery_dataset.billing_export.dataset_id}"
}

resource "google_project_iam_member" "log_writer" {
  project = var.project_name
  role    = "roles/bigquery.dataEditor"
  member  = google_logging_billing_account_sink.billing-sink.writer_identity
}
Unfortunately, there is no such option. This concern has already been raised on GitHub and is tracked as an enhancement; currently there is no ETA available. The closest things I can see in Terraform are google_logging_billing_account_sink and Automating logs export to BigQuery with Terraform.
We're in the process of setting up some greenfield AWS infrastructure.
At the organisation account level we have an IAM user that Terraform authenticates as using access keys. We've then set up our Terraform code to "assume role" into our sub-accounts within their respective git repos (we have one git repo per account). Something like:
provider "aws" {
  assume_role {
    role_arn = "arn:aws:iam::XXXXXXXXXX:role/TerraformCloudRole"
  }
}
We're running into issues setting up an EKS cluster using the terraform-aws-modules/eks/aws module. The cluster creates fine, but we've set manage_aws_auth_configmap = true so we can supply IAM roles/users and manage what they can authenticate against. We're actually seeing multiple errors depending on whether we do creates or updates, and on some subtle changes to the code. Essentially they are:
Error: The configmap "aws-auth" does not exist
with module.eks_main.module.eks.kubernetes_config_map_v1_data.aws_auth[0]
on .terraform/modules/eks_main.eks/main.tf line 470, in resource "kubernetes_config_map_v1_data" "aws_auth":
Or
Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused
with module.eks_main.module.eks.kubernetes_config_map_v1_data.aws_auth[0]
on .terraform/modules/eks_main.eks/main.tf line 470, in resource "kubernetes_config_map_v1_data" "aws_auth":
We did some Googling and found this issue. We added a provider and this seemed to solve some problems, specifically using this approach. The reason was that the exec route didn't work for us: it appeared to be trying to execute the AWS CLI command using the base access keys and not the assumed role. But the errors are back when we make updates to the cluster or try to run a destroy! It doesn't appear to be picking up the provider for some reason. The latter error above occurs during the plan phase, not apply.
So to my question: how do we set up Terraform to connect to and manage an EKS cluster properly when AWS assume-role/cross-account is involved?
Maybe this will help:
provider "aws" {
  alias  = "main"
  region = var.aws_region
}

data "aws_eks_cluster" "selected" {
  provider = aws.main
  name     = local.eks_cluster_name
}

data "aws_eks_cluster_auth" "selected" {
  provider = aws.main
  name     = local.eks_cluster_name
}

provider "kubernetes" {
  host                   = element(concat(data.aws_eks_cluster.selected[*].endpoint, tolist([""])), 0)
  cluster_ca_certificate = base64decode(element(concat(data.aws_eks_cluster.selected[*].certificate_authority.0.data, tolist([""])), 0))

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    args        = ["token", "-i", element(concat(data.aws_eks_cluster.selected[*].id, tolist([""])), 0)]
    command     = "aws-iam-authenticator"
  }
}

provider "helm" {
  kubernetes {
    host                   = element(concat(data.aws_eks_cluster.selected[*].endpoint, tolist([""])), 0)
    cluster_ca_certificate = base64decode(element(concat(data.aws_eks_cluster.selected[*].certificate_authority.0.data, tolist([""])), 0))

    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      args        = ["token", "-i", element(concat(data.aws_eks_cluster.selected[*].id, tolist([""])), 0)]
      command     = "aws-iam-authenticator"
    }
  }
}
I'm not sure this is up to date, but I believe it should help you a lot. You can change the command too, as this uses the aws-iam-authenticator binary.
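One caveat: client.authentication.k8s.io/v1alpha1 was removed in newer Kubernetes clients, and the AWS CLI can mint the token under the assumed role directly. A variant of the provider above along those lines, as a sketch (the role ARN is a placeholder, and this assumes AWS CLI v2 is on the PATH):

```hcl
provider "kubernetes" {
  host                   = data.aws_eks_cluster.selected.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.selected.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # Passing --role-arn makes the token reflect the cross-account role
    # rather than the base access keys.
    args = [
      "eks", "get-token",
      "--cluster-name", local.eks_cluster_name,
      "--role-arn", "arn:aws:iam::XXXXXXXXXX:role/TerraformCloudRole",
    ]
  }
}
```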
I'm trying to create an all-new sandbox project in GCP for easy deployment and upgrading. Using Terraform, I am creating a GKE cluster. The issue is that the Terraform scripts were written for the service accounts of a project named, let's say, NP-H. Now I am trying to create clusters using the same scripts in a project named, let's say, NP-S.
I ran terraform init and am experiencing this error:
error 403: XXX.serviceaccount does not have storage.object.create access to google cloud storage objects., forbidden.
storage: object doesn’t exist.
Now, is the problem with the Terraform script or with the service account permissions?
If it is the Terraform script, what changes do I need to make?
PS: I was able to create buckets and upload them to Cloud Storage…
There are two ways you can provide credentials:
provider "google" {
  credentials = file("service-account-file.json")
  project     = "PROJECT_ID"
  region      = "us-central1"
  zone        = "us-central1-c"
}
or
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"
Make sure the service account is from project NP-S (Menu > IAM & admin > Service accounts) and has the proper permissions: Menu > IAM & admin > IAM > ...
Run
cat service-account-file.json
and make sure the email is from the correct project ID. You can do a quick test with the Owner/Editor role to isolate the issue if need be, as those roles have the most permissions.
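A quick way to see which project a key belongs to is to read its client_email field. A self-contained sketch, using a made-up demo key file rather than a real one:

```shell
# Create a stand-in key file for demonstration only; a real key comes
# from IAM & admin > Service accounts > Keys in the console.
cat > /tmp/sa-demo.json <<'EOF'
{
  "type": "service_account",
  "project_id": "np-s",
  "client_email": "terraform@np-s.iam.gserviceaccount.com"
}
EOF

# The domain suffix of client_email names the owning project.
grep '"client_email"' /tmp/sa-demo.json
```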
If you're using service account impersonation, do this:
terraform {
  backend "gcs" {
    bucket                      = "<your-bucket>"
    prefix                      = "<prefix>/terraform.tfstate"
    impersonate_service_account = "<your-service-account>@<your-project-id>.iam.gserviceaccount.com"
  }
}
Source: Updating remote state files with a service account
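Impersonation can also be set on the provider block itself, not just the backend. A sketch, assuming the placeholders are filled in and the caller has roles/iam.serviceAccountTokenCreator on the target service account:

```hcl
provider "google" {
  project = "<your-project-id>"
  region  = "us-central1"
  # All API requests are made as this service account
  # instead of as the caller's own credentials.
  impersonate_service_account = "<your-service-account>@<your-project-id>.iam.gserviceaccount.com"
}
```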
I am planning to use Terraform to deploy to GCP, and I have read the instructions on how to set it up:
provider "google" {
  project = "{{YOUR GCP PROJECT}}"
  region  = "us-central1"
  zone    = "us-central1-c"
}
It requires a project in the provider configuration, but I am planning to create the project via Terraform with code like the below:
resource "google_project" "my_project" {
  name       = "My Project"
  project_id = "your-project-id"
  org_id     = "1234567"
}
How can I use Terraform without a pre-created project?
Take a look at this tutorial (from the Community):
Creating Google Cloud projects with Terraform
This tutorial assumes that you already have a Google Cloud account set up for your organization and that you are allowed to make organization-level changes in the account
The first step, for example, is to set up your environment variables with your organization ID and your billing account ID, which will allow you to create the projects using Terraform:
export TF_VAR_org_id=YOUR_ORG_ID
export TF_VAR_billing_account=YOUR_BILLING_ACCOUNT_ID
export TF_ADMIN=${USER}-terraform-admin
export TF_CREDS=~/.config/gcloud/${USER}-terraform-admin.json
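Those TF_VAR_ values then feed the project resource itself; a sketch along the tutorial's lines (variable and resource names are illustrative):

```hcl
variable "org_id" {}
variable "billing_account" {}

# The project is created here, so the provider does not need a
# pre-existing project; the admin credentials do the work.
resource "google_project" "my_project" {
  name            = "My Project"
  project_id      = "your-project-id"
  org_id          = var.org_id
  billing_account = var.billing_account
}
```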
TLDR
I'm just trying to start up a simple VM, and Terraform tells me I don't have sufficient permissions. Keep in mind I have a trial account (and like $290 left of sweet sweet free money).
Details
provider.tf
provider "google" {
  project = "My First Project"
  region  = "us-east1"
}
resource.tf
resource "google_compute_instance" "vm_instance" {
  name         = "terraform-instance"
  machine_type = "f1-micro"
  zone         = "us-east1-c"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    # A default network is created for all GCP projects
    network = "default"
    access_config {
    }
  }
}
Error
Error: Error loading zone 'us-east1-c': googleapi: Error 403: Permission denied on resource project My First Project., forbidden
My Troubleshooting
I tried switching us-east1-c for other zones, and tried us-central1 with different zones. I always got the same error.
I'm passing the credentials with the GOOGLE_APPLICATION_CREDENTIALS environment variable, and I know I'm passing it in correctly because when I change the filename it breaks and says something like "that filename doesn't exist".
I've tried different server types (n1-standard-1, n1-highcpu-16)
I've tried so many different IAM permissions; of particular note, I tried Compute Admin, Compute Admin with Service Account User, and Compute Instance Admin and Service Account Admin.
Concerning the last point, I used:
gcloud projects get-iam-policy <PROJECT NAME> \
  --flatten="bindings[].members" \
  --format='table(bindings.role)' \
  --filter="bindings.members:<KEY NAME>"
and got this output:
ROLE
roles/compute.admin
roles/compute.instanceAdmin
roles/compute.instanceAdmin.v1
roles/compute.instanceAdmin.v1
roles/iam.serviceAccountUser
But wait there's more
I went through this link and added all the permissions they suggested using the aforementioned key (except for the ones that have to do with billing, because the organization is my school). When I checked the roles again, I saw that it had added roles/storage.admin. It produced the same error, though.
Update
A billing account is linked to my account, and the roles are now as follows. AND IT STILL doesn't work:
roles/billing.projectManager
roles/compute.admin
roles/compute.instanceAdmin
roles/compute.instanceAdmin.v1
roles/compute.instanceAdmin.v1
roles/iam.serviceAccountUser
roles/storage.admin
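As an aside, worth checking: the 403 above quotes "My First Project", which looks like a display name, while the provider's project argument expects the project ID. Listing both is one way to confirm (standard gcloud, nothing here is specific to this setup):

```
# Shows project IDs alongside display names; the provider wants projectId.
gcloud projects list --format="table(projectId, name)"
```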