Terraform GCP - NAT Gateway not creating

Using Terraform to create a NAT gateway with this module:
https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/1.1.3
using this code:
module "nat" {
  source       = "GoogleCloudPlatform/nat-gateway/google"
  region       = "${var.gcloud-region}"
  network      = "${google_compute_network.vpc-network.name}"
  subnetwork   = "${google_compute_subnetwork.vpc-subnetwork-public.name}"
  machine_type = "${var.vm-type-nat-gateway}"
}
Other snippets:
variable "gcloud-region" { default = "europe-west1" }
variable "vm-type-nat-gateway" { default = "n1-standard-2" }
resource "google_compute_network" "vpc-network" {
  name                    = "foobar-vpc-network"
  auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "vpc-subnetwork-public" {
  name                     = "foobar-vpc-subnetwork-public"
  ip_cidr_range            = "10.0.1.0/24"
  network                  = "${google_compute_network.vpc-network.self_link}"
  region                   = "${var.gcloud-region}"
  private_ip_google_access = false
}
================
module.nat.google_compute_route.nat-gateway: 1 error(s) occurred:
module.nat.google_compute_route.nat-gateway: element: element() may not be used with an empty list in:
${element(split("/", element(module.nat-gateway.instances[0], 0)),
10)}
The above error comes up and the whole Terraform script stops; I am unable to run
terraform apply or terraform destroy for any changes.
Any possible issue causing this?

The subnet does not exist error in terraform

I want to create Elastic Beanstalk with tf. Here is the main.tf:
resource "aws_elastic_beanstalk_application" "elasticapp" {
  name = var.elasticapp
}
resource "aws_elastic_beanstalk_environment" "beanstalkappenv" {
  name                = var.beanstalkappenv
  application         = aws_elastic_beanstalk_application.elasticapp.name
  solution_stack_name = var.solution_stack_name
  tier                = var.tier
  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = var.vpc_id
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = var.public_subnets
  }
  setting {
    namespace = "aws:elasticbeanstalk:environment:process:default"
    name      = "MatcherHTTPCode"
    value     = "200"
  }
  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "LoadBalancerType"
    value     = "application"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = "t2.micro"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "ELBScheme"
    value     = "internet facing"
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = 1
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MaxSize"
    value     = 2
  }
  setting {
    namespace = "aws:elasticbeanstalk:healthreporting:system"
    name      = "SystemType"
    value     = "enhanced"
  }
}
I have variables defined in vars.tf.
This is the provider.tf:
provider "aws" {
  region = "eu-west-3"
}
When I try to apply I get the following message:
Error: ConfigurationValidationException: Configuration validation exception: Invalid option value: 'subnet-xxxxxxxxxxxxxxx' (Namespace: 'aws:ec2:vpc', OptionName: 'ELBSubnets'): The subnet 'subnet-xxxxxxxxxxxxxxx' does not exist.
│ status code: 400, request id: be485042-a653-496b-8510-b310d5796eef
│
│ with aws_elastic_beanstalk_environment.beanstalkappenv,
│ on main.tf line 9, in resource "aws_elastic_beanstalk_environment" "beanstalkappenv":
│ 9: resource "aws_elastic_beanstalk_environment" "beanstalkappenv" {
I created the subnet inside the vpc that I provided in main.tf.
EDIT: I have only one subnet.
EDIT: adding vars.tf
variable "elasticapp" {
  default = "pos-eb"
}
variable "beanstalkappenv" {
  type    = string
  default = "pos-eb-env"
}
variable "solution_stack_name" {
  type    = string
  default = "64bit Amazon Linux 2 v3.2.0 running Python 3.8"
}
variable "tier" {
  type    = string
  default = "WebServer"
}
variable "vpc_id" {
  default = "vpc-xxxxxxxxxxx"
}
variable "public_subnets" {
  type    = string
  default = "subnet-xxxxxxxxxxxxxxx"
}
OK, so first, check if the error message is correct.
As mentioned above, there is a chance you are working in the wrong account/region.
So check if Terraform can find that subnet by using a data source:
data "aws_subnet" "selected" {
  id = var.public_subnets # based on your code above, this is a single subnet_id
}
output "subnet_detail" {
  value = data.aws_subnet.selected
}
If the above code fails, that means Terraform is not able to use/find that subnet.
So, if the subnet was created by Terraform, there is a chance regions/alias/account got mixed up on the way to this module.
If it was manually created and you are only using the ID as a manually input string, then the chances are that you copied the wrong subnet_id or vpc_id, or that you are working in the wrong account/region.
If the above returns data, and Terraform can indeed find that subnet, check if it belongs to the VPC you are using on elastic_beanstalk.
If all the above is correct, then the issue may be in the "aws_elastic_beanstalk_environment" definition.
As you have an ELBScheme but you don't have the rest of the fields related to that ELB, it could be throwing an error.
Since ELBSubnets was not provided in the "aws_elastic_beanstalk_environment" definition, it may be trying to use a default subnet from the default VPC.
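The two checks from this answer combined into one configuration, as an added example that is not part of the original answer: the subnet lookup fails at plan time if the subnet is not in the expected VPC (needs Terraform 1.2 or later for the postcondition), and ELBSubnets is set explicitly so nothing falls back to the default VPC. It reuses the question's var.vpc_id, var.public_subnets (a single subnet ID) and the aws_elastic_beanstalk_application.elasticapp resource.

```hcl
# Added example, not part of the original answer. It reuses the question's
# var.vpc_id and var.public_subnets (a single subnet ID) and the existing
# aws_elastic_beanstalk_application.elasticapp resource.

data "aws_subnet" "selected" {
  id = var.public_subnets

  lifecycle {
    # Fail at plan time with a readable message instead of the
    # ConfigurationValidationException returned by Elastic Beanstalk.
    postcondition {
      condition     = self.vpc_id == var.vpc_id
      error_message = "Subnet ${self.id} is not in VPC ${var.vpc_id}; check the provider's region and account."
    }
  }
}

resource "aws_elastic_beanstalk_environment" "beanstalkappenv" {
  name                = var.beanstalkappenv
  application         = aws_elastic_beanstalk_application.elasticapp.name
  solution_stack_name = var.solution_stack_name
  tier                = var.tier

  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = data.aws_subnet.selected.vpc_id
  }

  # Subnets for the EC2 instances.
  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = data.aws_subnet.selected.id
  }

  # Subnets for the load balancer. Set explicitly so Elastic Beanstalk
  # does not fall back to a subnet in the default VPC.
  setting {
    namespace = "aws:ec2:vpc"
    name      = "ELBSubnets"
    value     = data.aws_subnet.selected.id
  }

  # ELBScheme accepts "public" or "internal", not "internet facing".
  setting {
    namespace = "aws:ec2:vpc"
    name      = "ELBScheme"
    value     = "public"
  }
}
```

An application load balancer needs at least two subnets in different Availability Zones, so with the single subnet mentioned in the EDIT the ELBSubnets value will still be rejected until a second subnet is added.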

Google Cloud Platform instance in MIG cannot access Artifact Registry

I'm trying to deploy a managed instance group with a load balancer which will be running a web server container.
The container is stored in the Google Artifact Registry.
If I manually create a VM and define the container usage, it is successfully able to pull and activate the container.
When I try to create the managed instance group via Terraform, the VM does not pull nor activate the container.
When I ssh to the VM and try to manually pull the container, I get the following error:
Error response from daemon: Get https://us-docker.pkg.dev/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
The only notable difference between the VM I created manually and the VM created by Terraform is that the manual VM has an external IP address. Not sure if this matters, and not sure how to add one to the Terraform file.
Below is my main.tf file. Can anyone tell me what I'm doing wrong?
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.53.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 4.0"
    }
  }
}
provider "google" {
  credentials = file("compute_lab2-347808-dab33a244827.json")
  project     = "lab2-347808"
  region      = "us-central1"
  zone        = "us-central1-f"
}
locals {
  google_load_balancer_ip_ranges = [
    "130.211.0.0/22",
    "35.191.0.0/16",
  ]
}
module "gce-container" {
  source  = "terraform-google-modules/container-vm/google"
  version = "~> 2.0"
  cos_image_name = "cos-stable-77-12371-89-0"
  container = {
    image = "us-docker.pkg.dev/lab2-347808/web-server-repo/web-server-image"
    volumeMounts = [
      {
        mountPath = "/cache"
        name      = "tempfs-0"
        readOnly  = false
      },
    ]
  }
  volumes = [
    {
      name = "tempfs-0"
      emptyDir = {
        medium = "Memory"
      }
    },
  ]
  restart_policy = "Always"
}
resource "google_compute_firewall" "rules" {
  project     = "lab2-347808"
  name        = "allow-web-ports"
  network     = "default"
  description = "Opens the relevant ports for the web server"
  allow {
    protocol = "tcp"
    ports    = ["80", "8080", "5432", "5000", "443"]
  }
  source_ranges = ["0.0.0.0/0"]
  #source_ranges = local.google_load_balancer_ip_ranges
  target_tags = ["web-server-ports"]
}
resource "google_compute_autoscaler" "default" {
  name   = "web-autoscaler"
  zone   = "us-central1-f"
  target = google_compute_instance_group_manager.default.id
  autoscaling_policy {
    max_replicas    = 10
    min_replicas    = 1
    cooldown_period = 60
    cpu_utilization {
      target = 0.5
    }
  }
}
resource "google_compute_instance_template" "default" {
  name           = "my-web-server-template"
  machine_type   = "e2-medium"
  can_ip_forward = false
  tags           = ["ssh", "http-server", "https-server", "web-server-ports"]
  disk {
    #source_image = "cos-cloud/cos-73-11647-217-0"
    source_image = module.gce-container.source_image
  }
  network_interface {
    network = "default"
  }
  service_account {
    #scopes = ["userinfo-email", "compute-ro", "storage-ro"]
    scopes = ["cloud-platform"]
  }
  metadata = {
    gce-container-declaration = module.gce-container.metadata_value
  }
}
resource "google_compute_target_pool" "default" {
  name = "web-server-target-pool"
}
resource "google_compute_instance_group_manager" "default" {
  name = "web-server-igm"
  zone = "us-central1-f"
  version {
    instance_template = google_compute_instance_template.default.id
    name              = "primary"
  }
  target_pools       = [google_compute_target_pool.default.id]
  base_instance_name = "web-server-instance"
}
Your VM templates don't have public IPs; therefore, you can't reach a public IP.
However, you have 3 ways to solve that issue:
1. Add a public IP on the VM template (bad idea)
2. Add a Cloud NAT on your VM private IP range to allow outgoing traffic to the internet (good idea)
3. Activate Private Google Access in the subnet that hosts the VM private IP range. It creates a bridge to access Google services without having a public IP (my preferred idea) -> https://cloud.google.com/vpc/docs/configure-private-google-access
Apparently I was missing the following access_config inside network_interface of the google_compute_instance_template, as follows:
network_interface {
  network = "default"
  access_config {
    network_tier = "PREMIUM"
  }
}
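Options 2 and 3 from the first answer written out as Terraform, as an added example that is not part of either answer: a dedicated subnet with Private Google Access turned on, plus a Cloud NAT for non-Google destinations, so instances from the template can pull from us-docker.pkg.dev without any external IP. The resource names (web-vpc, web-subnet, web-router, web-nat) and the 10.10.0.0/24 range are placeholders; the region matches the question.

```hcl
# Added example, not from the original answers. Resource names (web-vpc,
# web-subnet, web-router, web-nat) and the 10.10.0.0/24 range are
# placeholders chosen for this example.

resource "google_compute_network" "web" {
  name                    = "web-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "web" {
  name          = "web-subnet"
  region        = "us-central1"
  network       = google_compute_network.web.id
  ip_cidr_range = "10.10.0.0/24"

  # Option 3 from the answer: Google APIs, Artifact Registry included,
  # become reachable from instances that have only a private IP.
  private_ip_google_access = true
}

# Option 2 from the answer: Cloud NAT for outbound traffic that is not a
# Google API (third-party registries, OS package mirrors). Instances are
# still not reachable from the internet; NAT is outbound only.
resource "google_compute_router" "web" {
  name    = "web-router"
  region  = google_compute_subnetwork.web.region
  network = google_compute_network.web.id
}

resource "google_compute_router_nat" "web" {
  name                               = "web-nat"
  router                             = google_compute_router.web.name
  region                             = google_compute_router.web.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"

  # Only the web subnet is NATed, not every subnet in the VPC.
  subnetwork {
    name                    = google_compute_subnetwork.web.id
    source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
  }

  # Keep NAT logs for troubleshooting failed pulls.
  log_config {
    enable = true
    filter = "ERRORS_ONLY"
  }
}
```

In the question's google_compute_instance_template, replace `network = "default"` inside network_interface with `subnetwork = google_compute_subnetwork.web.id` and leave access_config out; the MIG instances then keep a private IP only.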

Error creating a VM in Terraform for GCP with KMS key (Error creating instance: googleapi: Error 503)

I can't create a VM on GCP using Terraform. I want to attach a KMS key in the attribute "kms_key_self_link", but when the machine is being created, time goes by and after 2 minutes of waiting (in every case) the error 503 appears. I'm going to share my script; it is worth saying that with the attribute "kms_key_self_link" disabled, the script runs OK.
data "google_compute_image" "tomcat_centos" {
  name = var.vm_img_name
}
data "google_kms_key_ring" "keyring" {
  name     = "keyring-example"
  location = "global"
}
data "google_kms_crypto_key" "cmek-key" {
  name     = "crypto-key-example"
  key_ring = data.google_kms_key_ring.keyring.self_link
}
data "google_project" "project" {}
resource "google_kms_crypto_key_iam_member" "key_user" {
  crypto_key_id = data.google_kms_crypto_key.cmek-key.id
  role          = "roles/owner"
  member        = "serviceAccount:service-${data.google_project.project.number}@compute-system.iam.gserviceaccount.com"
}
resource "google_compute_instance" "vm-hsbc" {
  name                      = var.vm_name
  machine_type              = var.vm_machine_type
  zone                      = var.zone
  allow_stopping_for_update = true
  can_ip_forward            = false
  deletion_protection       = false
  boot_disk {
    kms_key_self_link = data.google_kms_crypto_key.cmek-key.self_link
    initialize_params {
      type = var.disk_type
      #GCP-CE-CTRL-22
      image = data.google_compute_image.tomcat_centos.self_link
    }
  }
  network_interface {
    network = var.network
  }
  #GCP-CE-CTRL-2-...-5, 7, 8
  service_account {
    email  = var.service_account_email
    scopes = var.scopes
  }
  #GCP-CE-CTRL-31
  shielded_instance_config {
    enable_secure_boot          = true
    enable_vtpm                 = true
    enable_integrity_monitoring = true
  }
}
And this is the complete error:
Error creating instance: googleapi: Error 503: Internal error. Please try again or contact Google Support. (Code: '5C54C97EB5265.AA25590.F4046F68'), backendError
I solved this issue by granting my compute service account the role of encrypter/decrypter through this resource:
resource "google_kms_crypto_key_iam_binding" "key_iam_binding" {
  crypto_key_id = data.google_kms_crypto_key.cmek-key.id
  role          = "roles/cloudkms.cryptoKeyEncrypter"
  members = [
    "serviceAccount:service-${data.google_project.project.number}@compute-system.iam.gserviceaccount.com",
  ]
}

What is the latest Go solution stack name for elasticbeanstalk deployments?

I am using terraform to create an elastic beanstalk environment and application. The solution stack name seen on the aws doc website does not list a solution name for the current time frame [8th September 2020].
Consequently, I keep getting an error No Solution Stack named '64bit Amazon Linux 2 v3.1.2 running Go 1' found. when I try to run terraform apply.
Also, I am using a module provided by cloud posse to get my infrastructure up and running but I doubt that cloud posse is at fault here. Any help is highly appreciated. Thank you
Update: Here's the Terraform source code to create Elastic Beanstalk resources.
provider "aws" {
  region = var.region
}
module "vpc" {
  source     = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=master"
  namespace  = var.namespace
  stage      = var.stage
  name       = var.name
  cidr_block = "172.16.0.0/16"
}
module "subnets" {
  source               = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=master"
  availability_zones   = ["us-east-1a"]
  namespace            = var.namespace
  stage                = var.stage
  name                 = var.name
  vpc_id               = module.vpc.vpc_id
  igw_id               = module.vpc.igw_id
  cidr_block           = module.vpc.vpc_cidr_block
  nat_gateway_enabled  = false
  nat_instance_enabled = false
}
module "elastic_beanstalk_application" {
  source      = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application.git?ref=master"
  namespace   = var.namespace
  stage       = var.stage
  name        = var.name
  description = "Sentinel Staging"
}
module "elastic_beanstalk_environment" {
  source                             = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=master"
  namespace                          = var.namespace
  stage                              = var.stage
  name                               = var.name
  description                        = "Test elastic_beanstalk_environment"
  region                             = var.region
  availability_zone_selector         = "Any 2"
  dns_zone_id                        = var.dns_zone_id
  elastic_beanstalk_application_name = module.elastic_beanstalk_application.elastic_beanstalk_application_name
  instance_type                      = "t3.small"
  autoscale_min                      = 1
  autoscale_max                      = 2
  updating_min_in_service            = 0
  updating_max_batch                 = 1
  loadbalancer_type                  = "application"
  vpc_id                             = module.vpc.vpc_id
  loadbalancer_subnets               = module.subnets.public_subnet_ids
  application_subnets                = module.subnets.public_subnet_ids
  allowed_security_groups            = [module.vpc.vpc_default_security_group_id]
  // https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html
  // https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.docker
  solution_stack_name = "64bit Amazon Linux 2 v3.1.2 running Go 1"
  additional_settings = [
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "DB_URI"
      value     = var.db_uri
    },
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "USER"
      value     = var.user
    },
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "PASSWORD"
      value     = var.password
    },
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "PORT"
      value     = "5000"
    }
  ]
}
And here's the exact error taken from the console when I try terraform apply with suitable variables passed as command line arguments.
module.subnets.aws_network_acl.public[0]: Refreshing state... [id=acl-0a62395908205c288]
module.subnets.data.aws_availability_zones.available[0]: Reading... [id=2020-09-07 06:34:08.002137232 +0000 UTC]
module.subnets.data.aws_availability_zones.available[0]: Read complete after 0s [id=2020-09-07 06:34:12.416467455 +0000 UTC]
module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: Creating...
Error: InvalidParameterValue: No Solution Stack named '64bit Amazon Linux 2 v3.1.2 running Go 1' found.
status code: 400, request id: 631d5204-f667-4355-aa65-d617536a00b7
on .terraform/modules/elastic_beanstalk_environment/main.tf line 505, in resource "aws_elastic_beanstalk_environment" "default":
505: resource "aws_elastic_beanstalk_environment" "default" {
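Elastic Beanstalk retires platform versions, so a hard-coded solution stack name breaks exactly as shown above. As an added example that is not part of the question or of the Cloud Posse modules, the aws_elastic_beanstalk_solution_stack data source from the hashicorp/aws provider resolves the current Go stack name at plan time; the module block below is the question's, with only the changed argument shown and a note on pinning the module source.

```hcl
# Added example, not from the question or the Cloud Posse modules. The
# data source and its arguments are the real ones from the hashicorp/aws
# provider; the regex is anchored so only Amazon Linux 2 Go stacks match.

data "aws_elastic_beanstalk_solution_stack" "go" {
  most_recent = true
  # "v[0-9.]+" stands in for the platform version (v3.1.2 in the error),
  # which is the part AWS changes between releases.
  name_regex = "^64bit Amazon Linux 2 v[0-9.]+ running Go 1$"
}

output "go_solution_stack_name" {
  # The currently supported stack name, e.g. "64bit Amazon Linux 2 vX.Y.Z
  # running Go 1"; the value changes as AWS publishes new platform versions.
  value = data.aws_elastic_beanstalk_solution_stack.go.name
}

module "elastic_beanstalk_environment" {
  # ?ref=master tracks an unreviewed moving target; pin to a release tag
  # or commit of the Cloud Posse module before relying on it.
  source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=master"

  # All other arguments stay as in the question; only this one changes.
  solution_stack_name = data.aws_elastic_beanstalk_solution_stack.go.name
}
```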

Error message while deploying a Composer resource (GCP) with Terraform

I am having an error with Terraform code while deploying a GCP Composer resource:
google_composer_environment.composer-beta: googleapi: Error 400: Property key must be of the form section-name. The section may not contain opening square brackets, closing square brackets or hyphens, and the name may not contain a semicolon or equals sign. The entire property key may not contain periods., badRequest
The issue arises while this GCP resource is being deployed: https://www.terraform.io/docs/providers/google/r/composer_environment.html
This is my code:
Variables.tf file:
variable "composer_airflow_version" {
  type = "map"
  default = {
    image_version = "composer-1.6.1-airflow-1.10.1"
  }
}
variable "composer_python_version" {
  type = "map"
  default = {
    python_version = "3"
  }
}
my-composer.tf file:
resource "google_composer_environment" "composer-beta" {
  provider = "google-beta"
  project  = "my-proyect"
  name     = "${var.composer_name}"
  region   = "${var.region}"
  config {
    node_count = "${var.composer_node_count}"
    node_config {
      zone         = "${var.zone}"
      machine_type = "${var.composer_machine_type}"
      network      = "${google_compute_network.network.self_link}"
      subnetwork   = "${lookup(var.vpc_subnets_01[0], "subnet_name")}"
    }
    software_config {
      airflow_config_overrides = "${var.composer_airflow_version}"
      airflow_config_overrides = "${var.composer_python_version}"
    }
  }
  depends_on = [
    "google_service_account.comp-py3-dev-worker",
    "google_compute_subnetwork.subnetwork",
  ]
}
According to the error message, the root cause of the error seems to be related to the software_config section in the Terraform code. I understand that the variables "composer_airflow_version" and "composer_python_version" should be of type "map"; therefore, I set them up in map format.
I'd really appreciate it if someone could identify the cause of the error and tell me the adjustment to apply. It is likely that I should apply a change in the variables, but I don't know what it is. :-(
Thanks in advance,
Jose
Based on the documentation, airflow_config_overrides, pypi_packages, env_variables, image_version and python_version should be directly under software_config.
Variables.tf file:
variable "composer_airflow_version" {
  default = "composer-1.6.1-airflow-1.10.1"
}
variable "composer_python_version" {
  default = "3"
}
my-composer.tf file:
resource "google_composer_environment" "composer-beta" {
  provider = "google-beta"
  project  = "my-proyect"
  name     = "${var.composer_name}"
  region   = "${var.region}"
  config {
    node_count = "${var.composer_node_count}"
    node_config {
      zone         = "${var.zone}"
      machine_type = "${var.composer_machine_type}"
      network      = "${google_compute_network.network.self_link}"
      subnetwork   = "${lookup(var.vpc_subnets_01[0], "subnet_name")}"
    }
    software_config {
      image_version  = "${var.composer_airflow_version}"
      python_version = "${var.composer_python_version}"
    }
  }
  depends_on = [
    "google_service_account.comp-py3-dev-worker",
    "google_compute_subnetwork.subnetwork",
  ]
}