I am working on a Terraform script to create a GKE cluster (tf code below). It gets about 90% done and then errors out. When I try to apply another change or delete the cluster, I run into permission errors. I have made every account in the project an Owner and still have the issue. How do I clear this out?
Error:
Google Compute Engine: Required 'compute.instanceGroupManagers.delete' permission for 'projects/gke-eval-319218/zones/us-east4-a/instanceGroupManagers/gke-hello-default-pool-6e16e226-grp'
Google Compute Engine: Required 'compute.instanceGroupManagers.delete' permission for 'projects/gke-eval-319218/zones/us-east4-b/instanceGroupManagers/gke-hello-default-pool-a00f72b6-grp'
Google Compute Engine: Required 'compute.instanceGroupManagers.delete' permission for 'projects/gke-eval-319218/zones/us-east4-c/instanceGroupManagers/gke-hello-default-pool-ea0634bc-grp'
Google Compute Engine: Required 'compute.projects.get' permission for 'projects/gke-eval-319218'
retry budget exhausted (5 attempts): Google Compute Engine: Required 'compute.routes.list' permission for 'projects/gke-eval-319218'
Google Compute Engine: Required 'compute.firewalls.delete' permission for 'projects/gke-eval-319218/global/firewalls/gke-hello-c4849243-all'
Google Compute Engine: Required 'compute.firewalls.delete' permission for 'projects/gke-eval-319218/global/firewalls/gke-hello-c4849243-ssh'
Google Compute Engine: Required 'compute.firewalls.delete' permission for 'projects/gke-eval-319218/global/firewalls/gke-hello-c4849243-vms'
Google Compute Engine: Required 'compute.subnetworks.get' permission for 'projects/gke-eval-319218/regions/us-east4/subnetworks/default'
Script that created this mess:
variable project_id {}
variable zones {}
variable region {}
variable name {}
variable network {}
variable subnetwork {}
variable ip_range_pods { default = null }
variable ip_range_services { default = null }
locals {
service_account = "${var.name}-sa"
}
resource "google_service_account" "service_account" {
project = var.project_id
account_id = local.service_account
display_name = "${var.name} cluster service account"
}
resource "google_project_iam_binding" "service_account_iam" {
project = var.project_id
role = "roles/container.admin"
members = [
"serviceAccount:${local.service_account}#${var.project_id}.iam.gserviceaccount.com",
]
}
module "gke" {
source = "terraform-google-modules/kubernetes-engine/google"
project_id = var.project_id
name = var.name
region = var.region
zones = var.zones
network = var.network
subnetwork = var.subnetwork
ip_range_pods = var.ip_range_pods
ip_range_services = var.ip_range_services
http_load_balancing = true
horizontal_pod_autoscaling = false
network_policy = false
service_account = "${local.service_account}#${var.project_id}.iam.gserviceaccount.com"
node_pools = [
{
name = "default-pool"
machine_type = "e2-medium"
min_count = 3
max_count = 20
local_ssd_count = 0
disk_size_gb = 100
auto_repair = true
auto_upgrade = true
preemptible = false
initial_node_count = 10
},
]
node_pools_oauth_scopes = {
all = []
default-pool = [
"https://www.googleapis.com/auth/cloud-platform",
]
}
node_pools_labels = {
all = {}
default-pool = {
default-pool = true
}
}
node_pools_metadata = {
all = {}
default-pool = {
node-pool-metadata-custom-value = "my-node-pool"
}
}
node_pools_taints = {
all = []
default-pool = [
{
key = "default-pool"
value = true
effect = "PREFER_NO_SCHEDULE"
},
]
}
node_pools_tags = {
all = []
default-pool = [
"default-pool",
]
}
}
You might need to enable any APIs you have forgotten, for example:
gcloud services enable container.googleapis.com
Also, make sure the service account you are using has the required role or policy attached, e.g.
--role roles/compute.admin
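If you prefer to keep this in Terraform, here is a minimal sketch of the same idea (it assumes the grant should go to the node service account created above and that roles/compute.admin is the role you want; a narrower role may be enough for your case):
# Enable the APIs the cluster depends on (adjust to whatever is missing in your project)
resource "google_project_service" "container" {
  project = var.project_id
  service = "container.googleapis.com"
}
resource "google_project_service" "compute" {
  project = var.project_id
  service = "compute.googleapis.com"
}
# Grant the node service account the compute role mentioned above
resource "google_project_iam_member" "node_sa_compute" {
  project = var.project_id
  role    = "roles/compute.admin"
  member  = "serviceAccount:${google_service_account.service_account.email}"
}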
Related
I have been trying to test and implement self-hosted runners, but the issue is that instances are not starting even though everything is connected. I have followed every step (GitHub Apps and everything), but the instances still are not coming up. CloudWatch registers the request, but for some reason the request to create instances is not working. I have attached the main.tf file; could anyone please help sort this issue out?
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "4.44.0"
}
}
}
provider "aws" {
region = var.aws_region
access_key = "Akey"
secret_key = "skey"
}
data "aws_caller_identity" "current" {}
resource "random_id" "random" {
byte_length = 20
}
resource "aws_iam_service_linked_role" "spot" {
aws_service_name = "spot.amazonaws.com"
}
module "github-runner" {
create_service_linked_role_spot = true
source = "philips-labs/github-runner/aws"
version = "2.0.0-next.1"
aws_region = var.aws_region
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
prefix = var.prefix
github_app = {
key_base64 = "key"
id = "id"
webhook_secret = random_id.random.hex
}
webhook_lambda_zip = "lambdas-download/webhook.zip"
runner_binaries_syncer_lambda_zip = "lambdas-download/runner-binaries-syncer.zip"
runners_lambda_zip = "lambdas-download/runners.zip"
enable_organization_runners = true
runner_extra_labels = "default,example"
# enable access to the runners via SSM
enable_ssm_on_runners = true
instance_types = ["m5.large", "c5.large"]
# Uncomment to enable ephemeral runners
runner_run_as = "ubuntu"
enable_ephemeral_runners = false
# enabled_userdata = true
delay_webhook_event = 0
runners_maximum_count = 20
idle_config = [{
cron = "* * o-23 * * *"
timeZone = "Europe/Amsterdam"
idleCount = 3
}]
# fifo_build_queue = true
enable_job_queued_check = true
# override scaling down
scale_down_schedule_expression = "cron(* * * * ? *)"
}
I have tried enabling and disabling ephemeral and idle configs, the FIFO queue, and most of the other toggles, but nothing seems to work.
I'm trying to deploy a cluster with self-managed node groups. No matter what config options I use, I always end up with the following error:
Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused
with module.eks-ssp.kubernetes_config_map.aws_auth[0]
on .terraform/modules/eks-ssp/aws-auth-configmap.tf line 19, in resource "kubernetes_config_map" "aws_auth":
resource "kubernetes_config_map" "aws_auth" {
The .tf file looks like this:
module "eks-ssp" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"
# EKS CLUSTER
tenant = "DevOpsLabs2"
environment = "dev-test"
zone = ""
terraform_version = "Terraform v1.1.4"
# EKS Cluster VPC and Subnet mandatory config
vpc_id = "xxx"
private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]
# EKS CONTROL PLANE VARIABLES
create_eks = true
kubernetes_version = "1.19"
# EKS SELF MANAGED NODE GROUPS
self_managed_node_groups = {
self_mg = {
node_group_name = "DevOpsLabs2"
subnet_ids = ["xxx","xxx", "xxx", "xxx"]
create_launch_template = true
launch_template_os = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
custom_ami_id = "xxx"
public_ip = true # Enable only for public subnets
pre_userdata = <<-EOT
yum install -y amazon-ssm-agent \
systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \
EOT
disk_size = 20
instance_type = "t2.small"
desired_size = 2
max_size = 10
min_size = 2
capacity_type = "" # Optional Use this only for SPOT capacity as capacity_type = "spot"
k8s_labels = {
Environment = "dev-test"
Zone = ""
WorkerType = "SELF_MANAGED_ON_DEMAND"
}
additional_tags = {
ExtraTag = "t2x-on-demand"
Name = "t2x-on-demand"
subnet_type = "public"
}
create_worker_security_group = false # Creates a dedicated sec group for this Node Group
},
}
}
module "eks-ssp-kubernetes-addons" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"
eks_cluster_id = module.eks-ssp.eks_cluster_id
# EKS Addons
enable_amazon_eks_vpc_cni = true
enable_amazon_eks_coredns = true
enable_amazon_eks_kube_proxy = true
enable_amazon_eks_aws_ebs_csi_driver = true
#K8s Add-ons
enable_aws_load_balancer_controller = true
enable_metrics_server = true
enable_cluster_autoscaler = true
enable_aws_for_fluentbit = true
enable_argocd = true
enable_ingress_nginx = true
depends_on = [module.eks-ssp.self_managed_node_groups]
}
Providers:
terraform {
backend "remote" {}
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 3.66.0"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.6.1"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.4.1"
}
}
}
Based on the example provided in the Github repo [1], my guess is that the provider configuration blocks are missing for this to work as expected. Looking at the code provided in the question, it seems that the following needs to be added:
data "aws_region" "current" {}
data "aws_eks_cluster" "cluster" {
name = module.eks-ssp.eks_cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks-ssp.eks_cluster_id
}
provider "aws" {
region = data.aws_region.current.id
alias = "default" # this should match the named profile you used if at all
}
provider "kubernetes" {
experiments {
manifest_resource = true
}
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
}
If helm is also required, I think the following block [2] needs to be added as well:
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.cluster.endpoint
token = data.aws_eks_cluster_auth.cluster.token
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}
}
Provider argument reference for kubernetes and helm is in [3] and [4] respectively.
[1] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-self-managed-node-groups/main.tf#L23-L47
[2] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L49-L55
[3] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#argument-reference
[4] https://registry.terraform.io/providers/hashicorp/helm/latest/docs#argument-reference
The above answer from Marko E seems to fix this; I just ran into the same issue. After applying the above code, all together in a separate providers.tf file, Terraform now makes it past the error. I will post later as to whether the deployment makes it all the way through.
For reference, I was able to go from 65 resources created down to 42 resources created before I hit this error. This was using the exact best-practice / sample configuration recommended at the top of the README from AWS Consulting here: https://github.com/aws-samples/aws-eks-accelerator-for-terraform
In my case, I was trying to deploy to a Kubernetes cluster (GKE) using Terraform. I replaced the kubeconfig path with the kubeconfig file's absolute path.
From
provider "kubernetes" {
config_path = "~/.kube/config"
#config_context = "my-context"
}
To
provider "kubernetes" {
config_path = "/Users/<username>/.kube/config"
#config_context = "my-context"
}
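Alternatively, a small sketch (not specific to GKE): Terraform's built-in pathexpand() function resolves "~" to the home directory of the user running Terraform, which avoids hardcoding a username:
provider "kubernetes" {
  # pathexpand() turns "~/.kube/config" into an absolute path
  config_path = pathexpand("~/.kube/config")
  #config_context = "my-context"
}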
I've been trying to deploy a self-managed node EKS cluster for a while now, with no success. The errors I'm stuck on now concern EKS add-ons:
Error: error creating EKS Add-On (DevOpsLabs2b-dev-test--eks:kube-proxy): InvalidParameterException: Addon version specified is not supported, AddonName: "kube-proxy", ClusterName: "DevOpsLabs2b-dev-test--eks", Message_: "Addon version specified is not supported" }
with module.eks-ssp-kubernetes-addons.module.aws_kube_proxy[0].aws_eks_addon.kube_proxy
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/aws-kube-proxy/main.tf line 19, in resource "aws_eks_addon" "kube_proxy":
This error repeats for coredns as well, but ebs_csi_driver throws:
Error: unexpected EKS Add-On (DevOpsLabs2b-dev-test--eks:aws-ebs-csi-driver) state returned during creation: timeout while waiting for state to become 'ACTIVE' (last state: 'DEGRADED', timeout: 20m0s)
[WARNING] Running terraform apply again will remove the kubernetes add-on and attempt to create it again effectively purging previous add-on configuration
My main.tf looks like this:
terraform {
backend "remote" {}
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 3.66.0"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.7.1"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.4.1"
}
}
}
data "aws_eks_cluster" "cluster" {
name = module.eks-ssp.eks_cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks-ssp.eks_cluster_id
}
provider "aws" {
access_key = "xxx"
secret_key = "xxx"
region = "xxx"
assume_role {
role_arn = "xxx"
}
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
}
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.cluster.endpoint
token = data.aws_eks_cluster_auth.cluster.token
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}
}
My eks.tf looks like this:
module "eks-ssp" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"
# EKS CLUSTER
tenant = "DevOpsLabs2b"
environment = "dev-test"
zone = ""
terraform_version = "Terraform v1.1.4"
# EKS Cluster VPC and Subnet mandatory config
vpc_id = "xxx"
private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]
# EKS CONTROL PLANE VARIABLES
create_eks = true
kubernetes_version = "1.19"
# EKS SELF MANAGED NODE GROUPS
self_managed_node_groups = {
self_mg = {
node_group_name = "DevOpsLabs2b"
subnet_ids = ["xxx","xxx", "xxx", "xxx"]
create_launch_template = true
launch_template_os = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
custom_ami_id = "xxx"
public_ip = true # Enable only for public subnets
pre_userdata = <<-EOT
yum install -y amazon-ssm-agent \
systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \
EOT
disk_size = 10
instance_type = "t2.small"
desired_size = 2
max_size = 10
min_size = 0
capacity_type = "" # Optional Use this only for SPOT capacity as capacity_type = "spot"
k8s_labels = {
Environment = "dev-test"
Zone = ""
WorkerType = "SELF_MANAGED_ON_DEMAND"
}
additional_tags = {
ExtraTag = "t2x-on-demand"
Name = "t2x-on-demand"
subnet_type = "public"
}
create_worker_security_group = false # Creates a dedicated sec group for this Node Group
},
}
}
module "eks-ssp-kubernetes-addons" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"
eks_cluster_id = module.eks-ssp.eks_cluster_id
# EKS Addons
enable_amazon_eks_vpc_cni = true
enable_amazon_eks_coredns = true
enable_amazon_eks_kube_proxy = true
enable_amazon_eks_aws_ebs_csi_driver = true
#K8s Add-ons
enable_aws_load_balancer_controller = true
enable_metrics_server = true
enable_cluster_autoscaler = true
enable_aws_for_fluentbit = true
enable_argocd = true
enable_ingress_nginx = true
depends_on = [module.eks-ssp.self_managed_node_groups]
}
What exactly am I missing?
K8s is hard to get right sometimes. The examples on GitHub are for Kubernetes version 1.21 [1]. Because of that, if you leave only this:
enable_amazon_eks_vpc_cni = true
enable_amazon_eks_coredns = true
enable_amazon_eks_kube_proxy = true
enable_amazon_eks_aws_ebs_csi_driver = true
#K8s Add-ons
enable_aws_load_balancer_controller = true
enable_metrics_server = true
enable_cluster_autoscaler = true
enable_aws_for_fluentbit = true
enable_argocd = true
enable_ingress_nginx = true
The images that will be downloaded by default will be the ones for K8s version 1.21, as shown in [2]. If you really need to use K8s version 1.19, then you will have to find the corresponding add-on versions for that release. Here's an example of how you can configure the images you need [3]:
amazon_eks_coredns_config = {
addon_name = "coredns"
addon_version = "v1.8.4-eksbuild.1"
service_account = "coredns"
resolve_conflicts = "OVERWRITE"
namespace = "kube-system"
service_account_role_arn = ""
additional_iam_policies = []
tags = {}
}
However, the CoreDNS version here (addon_version = v1.8.4-eksbuild.1) is used with K8s 1.21. To check the version you would need for 1.19, go here [4]. TL;DR: the CoreDNS version you would need to specify is 1.8.0. In order to make the add-on work for 1.19, for CoreDNS (and other add-ons based on the image version), you would have to have a code block like this:
enable_amazon_eks_coredns = true
# followed by
amazon_eks_coredns_config = {
addon_name = "coredns"
addon_version = "v1.8.0-eksbuild.1"
service_account = "coredns"
resolve_conflicts = "OVERWRITE"
namespace = "kube-system"
service_account_role_arn = ""
additional_iam_policies = []
tags = {}
}
For other EKS add-ons, you can find more information here [5]. If you click on the links in the Name column, they will lead you straight to the AWS EKS documentation with the add-on image versions supported for the EKS versions currently supported by AWS (1.17 - 1.21).
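Since the original error was about kube-proxy, the same pattern should apply there as well. This is only a sketch, assuming the kubernetes-addons module exposes an amazon_eks_kube_proxy_config input analogous to the CoreDNS one; the addon_version below is illustrative and must be checked against the kube-proxy table in [5] for your exact 1.19 cluster:
enable_amazon_eks_kube_proxy = true
# followed by
amazon_eks_kube_proxy_config = {
  addon_name               = "kube-proxy"
  addon_version            = "v1.19.6-eksbuild.2"  # assumption: verify this version in the AWS docs
  service_account          = "kube-proxy"
  resolve_conflicts        = "OVERWRITE"
  namespace                = "kube-system"
  service_account_role_arn = ""
  additional_iam_policies  = []
  tags                     = {}
}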
Last, but not least, a piece of friendly advice: never configure the AWS provider by hard-coding the access key and secret access key in the provider block. Use named profiles [6] or just use the default one. Instead of the block you currently have:
provider "aws" {
access_key = "xxx"
secret_key = "xxx"
region = "xxx"
assume_role {
role_arn = "xxx"
}
}
Switch to:
provider "aws" {
region = "yourdefaultregion"
profile = "yourprofilename"
}
[1] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L62
[2] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/modules/kubernetes-addons/aws-kube-proxy/local.tf#L5
[3] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L148-L157
[4] https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
[5] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/docs/add-ons/managed-add-ons.md
[6] https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
I have spent four days already testing all the configurations from the Kubernetes Terraform GCP module, and I can't see the metrics of my workloads. It never shows me CPU or memory (even the standard Kubernetes cluster created by default in the GUI has this activated).
Here's my code:
resource "google_container_cluster" "default" {
provider = google-beta
name = var.name
project = var.project_id
description = "Vectux GKE Cluster"
location = var.zonal_region
remove_default_node_pool = true
initial_node_count = var.gke_num_nodes
master_auth {
#username = ""
#password = ""
client_certificate_config {
issue_client_certificate = false
}
}
timeouts {
create = "30m"
update = "40m"
}
logging_config {
enable_components = ["SYSTEM_COMPONENTS", "WORKLOADS"]
}
monitoring_config {
enable_components = ["SYSTEM_COMPONENTS", "WORKLOADS"]
}
}
resource "google_container_node_pool" "default" {
name = "${var.name}-node-pool"
project = var.project_id
location = var.zonal_region
node_locations = [var.zonal_region]
cluster = google_container_cluster.default.name
node_count = var.gke_num_nodes
node_config {
preemptible = true
machine_type = var.machine_type
disk_size_gb = var.disk_size_gb
service_account = google_service_account.default3.email
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/cloud-platform",
"compute-ro",
"storage-ro",
"service-management",
"service-control",
]
metadata = {
disable-legacy-endpoints = "true"
}
}
management {
auto_repair = true
auto_upgrade = true
}
}
resource "google_service_account" "default3" {
project = var.project_id
account_id = "terraform-vectux-33"
display_name = "tfvectux2"
provider = google-beta
}
Here's some info on the cluster (when I compare it against the standard one with metrics enabled, I see no differences):
And here's the workload view, without the metrics that I'd like to see:
As I mentioned in the comments, to solve your issue you must add a google_project_iam_binding resource and grant your service account a specific role: roles/monitoring.metricWriter. In the comments I also mentioned that you can grant roles/compute.admin, but after another test I ran, it's not necessary.
Below is a Terraform snippet I've used to create a test cluster with a service account called sa. I've changed some fields in the node config. In your case, you would need to add the whole google_project_iam_binding resource.
Terraform Snippet
### Creating Service Account
resource "google_service_account" "sa" {
project = "my-project-name"
account_id = "terraform-vectux-2"
display_name = "tfvectux2"
provider = google-beta
}
### Binding Service Account with IAM
resource "google_project_iam_binding" "sa_binding_writer" {
project = "my-project-name"
role = "roles/monitoring.metricWriter"
members = [
"serviceAccount:${google_service_account.sa.email}"
### in your case it will be "serviceAccount:${google_service_account.your-serviceaccount-name.email}"
]
}
resource "google_container_cluster" "default" {
provider = google-beta
name = "cluster-test-custom-sa"
project = "my-project-name"
description = "Vectux GKE Cluster"
location = "europe-west2"
remove_default_node_pool = true
initial_node_count = "1"
master_auth {
#username = ""
#password = ""
client_certificate_config {
issue_client_certificate = false
}
}
timeouts {
create = "30m"
update = "40m"
}
logging_config {
enable_components = ["SYSTEM_COMPONENTS", "WORKLOADS"]
}
monitoring_config {
enable_components = ["SYSTEM_COMPONENTS", "WORKLOADS"]
}
}
resource "google_container_node_pool" "default" {
name = "test-node-pool"
project = "my-project-name"
location = "europe-west2"
node_locations = ["europe-west2-a"]
cluster = google_container_cluster.default.name
node_count = "1"
node_config {
preemptible = "true"
machine_type = "e2-medium"
disk_size_gb = 50
service_account = google_service_account.sa.email
###service_account = google_service_account.your-serviceaccount-name.email
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/cloud-platform",
"compute-ro",
"storage-ro",
"service-management",
"service-control",
]
metadata = {
disable-legacy-endpoints = "true"
}
}
management {
auto_repair = true
auto_upgrade = true
}
}
My screens: whole workload and node workload.
Additional Information
If you added just roles/compute.admin, you might see the workload for the whole application, but you wouldn't be able to see each node's workload. With roles/monitoring.metricWriter you are able to see both the whole application workload and each node's workload. To achieve what you want, i.e. to see the workloads on each node, you just need roles/monitoring.metricWriter.
You need to use google_project_iam_binding because, without it, your newly created service account will not appear under any IAM role and will lack permissions. In short, your new SA will be visible in IAM & Admin > Service Accounts, but there will be no entry for it in IAM & Admin > IAM.
If you want more information about IAM and bindings in Terraform, please check the Terraform documentation; a non-authoritative alternative is also sketched after these notes.
As a last thing, please remember that the OAuth scope "https://www.googleapis.com/auth/cloud-platform" gives access to all GCP resources.
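For completeness, here is a minimal sketch of the non-authoritative alternative mentioned above: google_project_iam_member manages only a single membership in the role rather than the role's entire member list, which is safer if other identities in the project already hold roles/monitoring.metricWriter.
resource "google_project_iam_member" "sa_metric_writer" {
  project = "my-project-name"
  role    = "roles/monitoring.metricWriter"
  # adds only this one member; existing members of the role are left untouched
  member  = "serviceAccount:${google_service_account.sa.email}"
}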
I can't create a VM on GCP using Terraform. I want to attach a KMS key via the attribute "kms_key_self_link", but while the machine is being created, time passes and after two minutes of waiting (in every case) a 503 error appears. I'm sharing my script below; it's worth mentioning that with the attribute "kms_key_self_link" disabled, the script runs fine.
data "google_compute_image" "tomcat_centos" {
name = var.vm_img_name
}
data "google_kms_key_ring" "keyring" {
name = "keyring-example"
location = "global"
}
data "google_kms_crypto_key" "cmek-key" {
name = "crypto-key-example"
key_ring = data.google_kms_key_ring.keyring.self_link
}
data "google_project" "project" {}
resource "google_kms_crypto_key_iam_member" "key_user" {
crypto_key_id = data.google_kms_crypto_key.cmek-key.id
role = "roles/owner"
member = "serviceAccount:service-${data.google_project.project.number}#compute-system.iam.gserviceaccount.com"
}
resource "google_compute_instance" "vm-hsbc" {
name = var.vm_name
machine_type = var.vm_machine_type
zone = var.zone
allow_stopping_for_update = true
can_ip_forward = false
deletion_protection = false
boot_disk {
kms_key_self_link = data.google_kms_crypto_key.cmek-key.self_link
initialize_params {
type = var.disk_type
#GCP-CE-CTRL-22
image = data.google_compute_image.tomcat_centos.self_link
}
}
network_interface {
network = var.network
}
#GCP-CE-CTRL-2-...-5, 7, 8
service_account {
email = var.service_account_email
scopes = var.scopes
}
#GCP-CE-CTRL-31
shielded_instance_config {
enable_secure_boot = true
enable_vtpm = true
enable_integrity_monitoring = true
}
}
And this is the complete error:
Error creating instance: googleapi: Error 503: Internal error. Please try again or contact Google Support. (Code: '5C54C97EB5265.AA25590.F4046F68'), backendError
I solved this issue by granting the Compute Engine service agent the encrypter/decrypter role through this resource:
resource "google_kms_crypto_key_iam_binding" "key_iam_binding" {
crypto_key_id = data.google_kms_crypto_key.cmek-key.id
role = "roles/cloudkms.cryptoKeyEncrypter"
members = [
"serviceAccount:service-${data.google_project.gcp_project.number}#compute-system.iam.gserviceaccount.com",
]
}