I'm trying to create a reserved subnet for a regional load balancer. It's my first time using the google-beta provider, and when I try to create the subnet with the following script...:
resource "google_compute_subnetwork" "proxy-subnet" {
  provider      = google-beta
  project       = "proyecto-pegachucho"
  name          = "website-net-proxy"
  ip_cidr_range = "10.10.50.0/24"
  region        = "us-central1"
  network       = google_compute_network.HSBC_project_network.self_link
  purpose       = "INTERNAL_HTTPS_LOAD_BALANCER"
  role          = "ACTIVE"
}
... this error appears:
Error: Error creating Subnetwork: googleapi: Error 403: Required 'compute.subnetworks.create' permission for 'projects/proyecto-pegachucho/regions/us-central1/subnetworks/website-net-proxy'
More details:
Reason: forbidden, Message: Required 'compute.subnetworks.create' permission for 'projects/proyecto-pegachucho/regions/us-central1/subnetworks/website-net-proxy'
Reason: forbidden, Message: Required 'compute.networks.updatePolicy' permission for 'projects/proyecto-pegachucho/global/networks/hsbc-vpc-project'
on .terraform\modules\networking\networking.tf line 18, in resource "google_compute_subnetwork" "proxy-subnet":
18: resource "google_compute_subnetwork" "proxy-subnet" {
It doesn't make any sense, because my service account has the Owner role and those permissions are enabled. What could I do?
EDIT: I resolved it by adding the provider directly in the modules, like this:
provider "google-beta" {
  project     = var.project
  region      = var.region
  credentials = "./mario.json"
}
resource "google_compute_health_check" "lb-health-check-global" {
  name                = var.healthckeck_name
  check_interval_sec  = var.check_interval_sec
  timeout_sec         = var.timeout_sec
  healthy_threshold   = var.healthy_threshold
  unhealthy_threshold = var.unhealthy_threshold # 50 seconds

  tcp_health_check {
    port = var.healthckeck_port
  }
}

resource "google_compute_region_health_check" "lb-health-check-regional" {
  provider            = google-beta
  region              = var.region
  project             = var.project
  name                = "healthcheck-regional"
  check_interval_sec  = var.check_interval_sec
  timeout_sec         = var.timeout_sec
  healthy_threshold   = var.healthy_threshold
  unhealthy_threshold = var.unhealthy_threshold # 50 seconds

  tcp_health_check {
    port = var.healthckeck_port
  }
}
I resolved this by putting the provider block inside the Terraform module instead of only in the root module (you can also configure two providers):
provider "google-beta" {
  project     = var.project
  region      = var.region
  credentials = var.credentials
}
terraform plan shows the correct result when run locally, but the resource defined in the module is not created when run on GitHub Actions. The other resources in the root main.tf (S3) are created fine.
Root project:
terraform {
  backend "s3" {
    bucket = "sd-tfstorage"
    key    = "terraform/backend"
    region = "us-east-1"
  }
}

locals {
  env_name         = "sandbox"
  aws_region       = "us-east-1"
  k8s_cluster_name = "ms-cluster"
}

# Network Configuration
module "aws-network" {
  source = "github.com/<name>/module-aws-network"

  env_name     = local.env_name
  vpc_name     = "msur-VPC"
  cluster_name = local.k8s_cluster_name
  aws_region   = local.aws_region

  main_vpc_cidr         = "10.10.0.0/16"
  public_subnet_a_cidr  = "10.10.0.0/18"
  public_subnet_b_cidr  = "10.10.64.0/18"
  private_subnet_a_cidr = "10.10.128.0/18"
  private_subnet_b_cidr = "10.10.192.0/18"
}

# EKS Configuration
# GitOps Configuration
# EKS Configuration
# GitOps Configuration
Module:
provider "aws" {
  region = var.aws_region
}

locals {
  vpc_name     = "${var.env_name} ${var.vpc_name}"
  cluster_name = "${var.cluster_name}-${var.env_name}"
}

## AWS VPC definition
resource "aws_vpc" "main" {
  cidr_block           = var.main_vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    "Name"                                        = local.vpc_name,
    "kubernetes.io/cluster/${local.cluster_name}" = "shared",
  }
}
When you run it locally, you are using your default AWS profile to plan it.
Have you set up your GitHub environment with the correct AWS credentials to perform those actions?
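In CI there is no local AWS profile, so the provider has to resolve credentials from its default chain (static config, then environment variables, then shared profile). One way to keep a single configuration that works both locally and in GitHub Actions is to rely on that chain and have the workflow export `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` from repository secrets. A sketch; the role ARN is a hypothetical example:

```hcl
provider "aws" {
  region = var.aws_region

  # No profile named here: locally this falls back to your default profile,
  # while in GitHub Actions the job must export AWS_ACCESS_KEY_ID and
  # AWS_SECRET_ACCESS_KEY (e.g. from repo secrets) for authentication.

  # Optionally pin an assumed role so local and CI runs act as the same identity:
  # assume_role {
  #   role_arn = "arn:aws:iam::123456789012:role/terraform-ci" # hypothetical ARN
  # }
}
```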
I'm trying to deploy my service to a newly available region (Jakarta). It looks like CodePipeline is not available there, so I have to create the pipeline in the nearest region (Singapore) and deploy to the Jakarta region from it. This is also my first time setting up CodePipeline in Terraform, so I'm not sure whether I'm doing it right.
P.S. The default region for all of this infrastructure is the Jakarta region. I'll exclude the deploy stage, since the issue shows up without it.
resource "aws_codepipeline" "pipeline" {
  name     = local.service_name
  role_arn = var.codepipeline_role_arn

  artifact_store {
    type     = "S3"
    region   = var.codepipeline_region
    location = var.codepipeline_artifact_bucket_name
  }

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeStarSourceConnection"
      version          = "1"
      output_artifacts = ["SourceArtifact"]
      region           = var.codepipeline_region

      configuration = {
        ConnectionArn        = var.codestar_connection
        FullRepositoryId     = "${var.team_name}/${local.repo_name}"
        BranchName           = local.repo_branch
        OutputArtifactFormat = "CODEBUILD_CLONE_REF" // NOTE: Full clone
      }
    }
  }

  stage {
    name = "Build"

    action {
      name             = "Build"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      version          = "1"
      input_artifacts  = ["SourceArtifact"]
      output_artifacts = ["BuildArtifact"]
      run_order        = 1
      region           = var.codepipeline_region

      configuration = {
        "ProjectName" = local.service_name
      }
    }
  }

  tags = {
    Name        = "${local.service_name}-pipeline"
    Environment = local.env
  }
}
Above is the Terraform configuration that I created, but it gives me an error like this:
│ Error: region cannot be set for a single-region CodePipeline
If I try to remove the region from the root block, Terraform will try to use the default region, which is Jakarta (and it will fail, since CodePipeline is not available there):
│ Error: Error creating CodePipeline: RequestError: send request failed
│ caused by: Post "https://codepipeline.ap-southeast-3.amazonaws.com/": dial tcp: lookup codepipeline.ap-southeast-3.amazonaws.com on 103.86.96.100:53: no such host
You need to set up a provider alias with a different region. For example:
provider "aws" {
  alias  = "singapore"
  region = "ap-southeast-1"
}
Then you deploy your pipeline to that region using the alias:
resource "aws_codepipeline" "pipeline" {
  provider = aws.singapore
  name     = local.service_name
  role_arn = var.codepipeline_role_arn
  # ...
}
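Since the original error complains that `region` cannot be set on a single-region pipeline, you will likely also need to drop the `region` arguments from the artifact store and the stage actions once everything runs in the aliased region. A sketch, reusing the names from the question:

```hcl
resource "aws_codepipeline" "pipeline" {
  provider = aws.singapore
  name     = local.service_name
  role_arn = var.codepipeline_role_arn

  # No `region` here: for a single-region pipeline the artifact store
  # implicitly lives in the provider's region (ap-southeast-1).
  artifact_store {
    type     = "S3"
    location = var.codepipeline_artifact_bucket_name
  }

  # ...stages as before, with their `region` arguments removed...
}
```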
So I have a GCP service account that is a Kubernetes Admin and Kubernetes Cluster Admin in the GCP Cloud Console.
I am now trying to give this Terraform service account the cluster-admin ClusterRole in GKE, so it can manage all namespaces, via the following Terraform configuration:
data "google_service_account" "terraform" {
  project    = var.project_id
  account_id = var.terraform_sa_email
}

# Terraform needs to manage the cluster
resource "google_project_iam_member" "terraform-gke-admin" {
  project = var.project_id
  role    = "roles/container.admin"
  member  = "serviceAccount:${data.google_service_account.terraform.email}"
}

# Terraform needs to manage K8S RBAC
# https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#iam-rolebinding-bootstrap
resource "kubernetes_cluster_role_binding" "terraform_clusteradmin" {
  depends_on = [
    google_project_iam_member.terraform-gke-admin,
  ]

  metadata {
    name = "cluster-admin-binding-terraform"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "User"
    name      = data.google_service_account.terraform.email
  }

  # must also create a binding on the unique ID of the SA
  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "User"
    name      = data.google_service_account.terraform.unique_id
  }
}
However, this always returns the following error:
Error: clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "client" cannot create resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope
│
│ with module.kubernetes[0].kubernetes_cluster_role_binding.terraform_clusteradmin,
│ on kubernetes/terraform_role.tf line 15, in resource "kubernetes_cluster_role_binding" "terraform_clusteradmin":
│ 15: resource "kubernetes_cluster_role_binding" "terraform_clusteradmin" {
Any idea what's going wrong here?
Could this be related to using Google Groups for RBAC?
authenticator_groups_config {
  security_group = "gke-security-groups#${var.acl_group_domain}"
}

data "google_client_config" "provider" {}

provider "kubernetes" {
  cluster_ca_certificate = module.google.cluster_ca_certificate
  host                   = module.google.cluster_endpoint
  token                  = data.google_client_config.provider.access_token
}
I'm stuck on this script to deploy an image in GCP with Terraform. The idea is to launch a VM instance with ports 443 and 80 open for HTTP requests. When I run "terraform validate" it is reported as valid:
provider "google" {
  project     = "terraform-packer-xxxxxx"
  region      = "us-central1"
  zone        = "us-central1-a"
  credentials = "C:/.../path"
}

data "google_compute_image" "test" {
  name = "packer-08022021-1"
}

resource "google_compute_instance" "myVM" {
  name         = "test"
  machine_type = "e2-micro"
  zone         = "us-central1-a"
  tags         = ["http-server"]

  boot_disk {
    initialize_params {
      image = data.google_compute_image.test.self_link
    }
  }

  network_interface {
    # A default network is created for all GCP projects
    network = "default"
    access_config {
    }
  }
}
resource "google_compute_firewall" "allow-http" {
  name    = "http-firewall"
  network = "default"

  # `ports` can only be set for a specific protocol such as "tcp",
  # not for protocol = "all"
  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_tags = ["http-server"]
}
# resource "google_compute_network" "default" {
#   name = "test-network"
# }

output "ip" {
  value = google_compute_instance.myVM.network_interface.0.access_config.0.nat_ip
}
But when I run "terraform apply", this error appears:
Error: Error creating Firewall: googleapi: Error 403: Required 'compute.firewalls.create' permission for 'projects/terraform-packer-303806/global/firewalls/http-firewall'
More details:
Reason: forbidden, Message: Required 'compute.firewalls.create' permission for 'projects/terraform-packer-303806/global/firewalls/http-firewall'
Reason: forbidden, Message: Required 'compute.networks.updatePolicy' permission for 'projects/terraform-packer-303806/global/networks/default'
I have double-checked the permissions on my service account and it has the following roles:
Compute Instance Admin,
Service Account User,
Network Admin,
Firewall Admin.
I don't know what I'm doing wrong.
From the error message provided, it seems the service account doesn't have the compute.firewalls.create permission assigned. This permission is required in order to create firewall rules, as can be seen here.
Here you will find a list of roles that include the permission by searching for compute.firewalls. If none of the roles with the permission suits your needs, you can create a custom role following the steps in the official GCP documentation.
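If you manage IAM with Terraform as well, granting the deploying service account a role that carries both compute.firewalls.create and compute.networks.updatePolicy (for example roles/compute.securityAdmin) could look like this; the project ID and service-account email below are placeholders:

```hcl
# Grant the deploying service account a role that includes
# compute.firewalls.create and compute.networks.updatePolicy.
resource "google_project_iam_member" "firewall_admin" {
  project = "terraform-packer-xxxxxx" # placeholder
  role    = "roles/compute.securityAdmin"
  member  = "serviceAccount:deployer@terraform-packer-xxxxxx.iam.gserviceaccount.com" # placeholder
}
```

Also make sure the credentials file referenced in the provider block actually belongs to the service account you granted these roles to.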
I want to create an aws_ses_domain_identity resource in multiple regions, but as far as I can see this is only possible by changing the region of the AWS provider.
I've attempted to use a for_each with no luck. I then want to create an aws_route53_record from the verification tokens; I suspect this also won't work.
Ultimately, I'm aiming to create an SES domain identity and the corresponding Route 53 verification records for the regions specified in a variable (ses_regions).
Code:
provider "aws" {
  alias  = "eu-central-1"
  region = "eu-central-1"
}

provider "aws" {
  alias  = "us-west-2"
  region = "us-west-2"
}

variable "ses_regions" {
  description = "The aws region in which to operate"
  default = {
    region1 = "us-west-2"
    region2 = "eu-central-1"
  }
}

resource "aws_ses_domain_identity" "example" {
  for_each = var.ses_regions
  provider = each.value
  domain   = var.ses_domain
}

resource "aws_route53_record" "example_amazonses_verification_record" {
  for_each = aws_ses_domain_identity.example.verification_token
  zone_id  = var.zone_id
  name     = "_amazonses.${var.ses_domain}"
  type     = "TXT"
  ttl      = "600"
  records  = each.value
}
Error:
Error: Invalid provider configuration reference
on .terraform/modules/ses/main.tf line 8, in resource "aws_ses_domain_identity" "example":
8: provider = aws.each.value
The provider argument requires a provider type name, optionally followed by a
period and then a configuration alias.
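That error is telling you that the `provider` meta-argument must be a static reference such as `aws.us-west-2`; it cannot be interpolated from `each.value`. A common workaround is to drop the for_each and declare one identity per region against each aliased provider, then collect the verification tokens into a single TXT record (Route 53 allows multiple values per record set). A sketch using the aliases defined above:

```hcl
# One identity per region; the provider reference must be static.
resource "aws_ses_domain_identity" "us_west_2" {
  provider = aws.us-west-2
  domain   = var.ses_domain
}

resource "aws_ses_domain_identity" "eu_central_1" {
  provider = aws.eu-central-1
  domain   = var.ses_domain
}

# Both regions' tokens can share the one _amazonses TXT record,
# since a TXT record set may hold multiple values.
resource "aws_route53_record" "example_amazonses_verification_record" {
  zone_id = var.zone_id
  name    = "_amazonses.${var.ses_domain}"
  type    = "TXT"
  ttl     = "600"
  records = [
    aws_ses_domain_identity.us_west_2.verification_token,
    aws_ses_domain_identity.eu_central_1.verification_token,
  ]
}
```

If the region list must stay variable, the alternative is to wrap the SES resources in a module and instantiate it once per region with `providers = { aws = aws.<alias> }`.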