I am getting an error with my Terraform code while deploying a GCP Composer resource:
google_composer_environment.composer-beta: googleapi: Error 400: Property key must be of the form section-name. The section may not contain opening square brackets, closing square brackets or hyphens, and the name may not contain a semicolon or equals sign. The entire property key may not contain periods., badRequest
The issue arises while this GCP resource is being deployed: https://www.terraform.io/docs/providers/google/r/composer_environment.html
This is my code:
Variables.tf file:
variable "composer_airflow_version" {
type = "map"
default = {
image_version="composer-1.6.1-airflow-1.10.1"
}
}
variable "composer_python_version" {
type = "map"
default = {
python_version="3"
}
}
my-composer.tf file:
resource "google_composer_environment" "composer-beta" {
provider= "google-beta"
project = "my-proyect"
name = "${var.composer_name}"
region = "${var.region}"
config {
node_count = "${var.composer_node_count}"
node_config {
zone = "${var.zone}"
machine_type = "${var.composer_machine_type}"
network = "${google_compute_network.network.self_link}"
subnetwork = "${lookup(var.vpc_subnets_01[0], "subnet_name")}"
}
software_config {
airflow_config_overrides="${var.composer_airflow_version}",
airflow_config_overrides="${var.composer_python_version}",
}
}
depends_on = [
"google_service_account.comp-py3-dev-worker",
"google_compute_subnetwork.subnetwork",
]
}
According to the error message, the root cause seems to be related to the software_config section in the Terraform code. I understood that the variables "composer_airflow_version" and "composer_python_version" should be of type "map", so I set them up in map format.
I would really appreciate it if someone could identify the cause of the error and tell me what adjustment to apply. It is likely that I need to change the variables, but I don't know how. :-(
Thanks in advance,
Jose
Based on the documentation, airflow_config_overrides, pypi_packages, env_variables, image_version and python_version should sit directly under software_config. Since image_version and python_version are plain strings rather than maps, the variables and the resource become:
Variables.tf file:
variable "composer_airflow_version" {
default = "composer-1.6.1-airflow-1.10.1"
}
variable "composer_python_version" {
default = "3"
}
my-composer.tf file:
resource "google_composer_environment" "composer-beta" {
provider= "google-beta"
project = "my-proyect"
name = "${var.composer_name}"
region = "${var.region}"
config {
node_count = "${var.composer_node_count}"
node_config {
zone = "${var.zone}"
machine_type = "${var.composer_machine_type}"
network = "${google_compute_network.network.self_link}"
subnetwork = "${lookup(var.vpc_subnets_01[0], "subnet_name")}"
}
software_config {
image_version = "${var.composer_airflow_version}",
python_version = "${var.composer_python_version}",
}
}
depends_on = [
"google_service_account.comp-py3-dev-worker",
"google_compute_subnetwork.subnetwork",
]
}
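If you do still need actual Airflow overrides, airflow_config_overrides takes a map whose keys follow the section-name pattern from the error message: the airflow.cfg section and option joined by a hyphen. A minimal sketch, using the example key from the provider docs:
software_config {
  image_version  = "${var.composer_airflow_version}"
  python_version = "${var.composer_python_version}"

  airflow_config_overrides = {
    "core-load_example" = "True" # sets load_example in the [core] section of airflow.cfg
  }
}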
I want to create an Elastic Beanstalk environment with Terraform. Here is the main.tf:
resource "aws_elastic_beanstalk_application" "elasticapp" {
name = var.elasticapp
}
resource "aws_elastic_beanstalk_environment" "beanstalkappenv" {
name = var.beanstalkappenv
application = aws_elastic_beanstalk_application.elasticapp.name
solution_stack_name = var.solution_stack_name
tier = var.tier
setting {
namespace = "aws:ec2:vpc"
name = "VPCId"
value = var.vpc_id
}
setting {
namespace = "aws:ec2:vpc"
name = "Subnets"
value = var.public_subnets
}
setting {
namespace = "aws:elasticbeanstalk:environment:process:default"
name = "MatcherHTTPCode"
value = "200"
}
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "LoadBalancerType"
value = "application"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "InstanceType"
value = "t2.micro"
}
setting {
namespace = "aws:ec2:vpc"
name = "ELBScheme"
value = "internet facing"
}
setting {
namespace = "aws:autoscaling:asg"
name = "MinSize"
value = 1
}
setting {
namespace = "aws:autoscaling:asg"
name = "MaxSize"
value = 2
}
setting {
namespace = "aws:elasticbeanstalk:healthreporting:system"
name = "SystemType"
value = "enhanced"
}
}
I have variables defined in vars.tf.
This is the provider.tf
provider "aws" {
region = "eu-west-3"
}
When I try to apply, I get the following message:
Error: ConfigurationValidationException: Configuration validation exception: Invalid option value: 'subnet-xxxxxxxxxxxxxxx' (Namespace: 'aws:ec2:vpc', OptionName: 'ELBSubnets'): The subnet 'subnet-xxxxxxxxxxxxxxx' does not exist.
│ status code: 400, request id: be485042-a653-496b-8510-b310d5796eef
│
│ with aws_elastic_beanstalk_environment.beanstalkappenv,
│ on main.tf line 9, in resource "aws_elastic_beanstalk_environment" "beanstalkappenv":
│ 9: resource "aws_elastic_beanstalk_environment" "beanstalkappenv" {
I created the subnet inside the VPC that I provided in main.tf.
EDIT: I have only one subnet.
EDIT: adding vars.tf
variable "elasticapp" {
default = "pos-eb"
}
variable "beanstalkappenv" {
type = string
default = "pos-eb-env"
}
variable "solution_stack_name" {
type = string
default = "64bit Amazon Linux 2 v3.2.0 running Python 3.8"
}
variable "tier" {
type = string
default = "WebServer"
}
variable "vpc_id" {
default = "vpc-xxxxxxxxxxx"
}
variable "public_subnets" {
type = string
default = "subnet-xxxxxxxxxxxxxxx"
}
Ok, so first, check if the error message is correct.
As mentioned above, there is a chance you are working in the wrong account/region.
So check if terraform can find that subnet by using a datasource:
data "aws_subnet" "selected" {
id = var.public_subnets # based on your code above, this is a single subnet_id
}
output "subnet_detail" {
value = data.aws_subnet.selected
}
If the above code fails, that means Terraform is not able to use or find that subnet.
So, if the subnet was created by Terraform, there is a chance that regions, provider aliases, or accounts got mixed up on the way to this module.
If it was created manually and you are only using the ID as a manually entered string, then the chances are that you copied the wrong subnet_id or vpc_id, or that you are working in the wrong account/region.
If the above returns data, and Terraform can indeed find that subnet, check whether it belongs to the VPC you are using for Elastic Beanstalk.
If all of the above is correct, then the issue may be in the "aws_elastic_beanstalk_environment" definition.
Since you set ELBScheme but none of the other fields related to that ELB, it could be throwing an error.
Since ELBSubnets was not provided in the "aws_elastic_beanstalk_environment" definition, it may be trying to use a default subnet from the default VPC.
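For example, an explicit ELBSubnets setting that reuses your existing var.public_subnets (a sketch, assuming the load balancer should live in that same subnet):
setting {
  namespace = "aws:ec2:vpc"
  name      = "ELBSubnets"
  value     = var.public_subnets # comma-separated list of subnet IDs for the ELB
}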
I'm using Terraform with GCP ... I have a groups variable that I have not been able to get to work. Here are the definitions:
resource "google_compute_instance_group" "vm_group" {
name = "vm-group"
zone = "us-central1-c"
project = "myproject-dev"
instances = [google_compute_instance.east_vm.id, google_compute_instance.west_vm.id]
named_port {
name = "http"
port = "8080"
}
named_port {
name = "https"
port = "8443"
}
lifecycle {
create_before_destroy = true
}
}
data "google_compute_image" "debian_image" {
family = "debian-9"
project = "debian-cloud"
}
resource "google_compute_instance" "west_vm" {
name = "west-vm"
project = "myproject-dev"
machine_type = "e2-micro"
zone = "us-central1-c"
boot_disk {
initialize_params {
image = data.google_compute_image.debian_image.self_link
}
}
network_interface {
network = "default"
}
}
resource "google_compute_instance" "east_vm" {
name = "east-vm"
project = "myproject-dev"
machine_type = "e2-micro"
zone = "us-central1-c"
boot_disk {
initialize_params {
image = data.google_compute_image.debian_image.self_link
}
}
network_interface {
network = "default"
}
}
And here are the variables:
http_forward   = true
https_redirect = true
create_address = true
project        = "myproject-dev"

backends = {
  "yobaby" = {
    description             = "my app"
    enable_cdn              = false
    security_policy         = ""
    custom_request_headers  = null
    custom_response_headers = null

    iap_config = {
      enable               = false
      oauth2_client_id     = ""
      oauth2_client_secret = ""
    }

    log_config = {
      enable      = false
      sample_rate = 0
    }

    groups = [{ group = "google_compute_instance_group.vm_group.id" }]
  }
}
... this is my latest attempt to get a group value that works, but this one won't work for me either; I still get
Error 400: Invalid value for field 'resource.backends[0].group': 'google_compute_instance_group.vm_group.id'. The URL is malformed., invalid
I've tried this with DNS FQDNs and variations on the syntax above; still no go.
Thanks much for any advice whatsoever!
There are a couple of clues that point in this direction, based on the error message reported by Terraform (Error 400: Invalid value for field 'resource.backends[0].group': 'google_compute_instance_group.vm_group.id'. The URL is malformed., invalid):
Error code 400 means the request was actually sent to the server, which rejected it as malformed (HTTP error code 400 is for client-side errors); this implies that Terraform itself has no problem with the syntax, i.e., the configuration file is correct and actionable from Terraform's point of view.
The value of the field resource.backends[0].group is reported as being literally 'google_compute_instance_group.vm_group.id', which strongly suggests that a variable substitution did not take place.
The quotes around the expression make it a literal string value instead of a resource reference. The solution is to change this:
groups = [{group = "google_compute_instance_group.vm_group.id"}]
To this:
groups = [{group = google_compute_instance_group.vm_group.id}]
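One caveat (an assumption, since the surrounding file isn't shown): if that backends map is assigned in a .tfvars file, resource references are not allowed there at all, because tfvars files only accept literal values. The unquoted reference has to live in a .tf file, for example as a local value:
locals {
  backends = {
    "yobaby" = {
      description = "my app"
      groups      = [{ group = google_compute_instance_group.vm_group.id }]
      # ... remaining attributes as in the original map
    }
  }
}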
I gave up on Terraform and used gcloud scripts to do what I needed to do, based on this posting.
I can't create a VM on GCP using Terraform. I want to attach a KMS key via the "kms_key_self_link" attribute, but while the machine is being created, time passes and after two minutes of waiting (in every case) a 503 error appears. I'll share my script; it's worth noting that with the "kms_key_self_link" attribute disabled, the script runs fine.
data "google_compute_image" "tomcat_centos" {
name = var.vm_img_name
}
data "google_kms_key_ring" "keyring" {
name = "keyring-example"
location = "global"
}
data "google_kms_crypto_key" "cmek-key" {
name = "crypto-key-example"
key_ring = data.google_kms_key_ring.keyring.self_link
}
data "google_project" "project" {}
resource "google_kms_crypto_key_iam_member" "key_user" {
crypto_key_id = data.google_kms_crypto_key.cmek-key.id
role = "roles/owner"
member = "serviceAccount:service-${data.google_project.project.number}#compute-system.iam.gserviceaccount.com"
}
resource "google_compute_instance" "vm-hsbc" {
name = var.vm_name
machine_type = var.vm_machine_type
zone = var.zone
allow_stopping_for_update = true
can_ip_forward = false
deletion_protection = false
boot_disk {
kms_key_self_link = data.google_kms_crypto_key.cmek-key.self_link
initialize_params {
type = var.disk_type
#GCP-CE-CTRL-22
image = data.google_compute_image.tomcat_centos.self_link
}
}
network_interface {
network = var.network
}
#GCP-CE-CTRL-2-...-5, 7, 8
service_account {
email = var.service_account_email
scopes = var.scopes
}
#GCP-CE-CTRL-31
shielded_instance_config {
enable_secure_boot = true
enable_vtpm = true
enable_integrity_monitoring = true
}
}
And this is the complete error:
Error creating instance: googleapi: Error 503: Internal error. Please try again or contact Google Support. (Code: '5C54C97EB5265.AA25590.F4046F68'), backendError
I solved this issue by granting my compute service account the encrypter/decrypter role through this resource:
resource "google_kms_crypto_key_iam_binding" "key_iam_binding" {
crypto_key_id = data.google_kms_crypto_key.cmek-key.id
role = "roles/cloudkms.cryptoKeyEncrypter"
members = [
"serviceAccount:service-${data.google_project.gcp_project.number}#compute-system.iam.gserviceaccount.com",
]
}
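For what it's worth, Google's CMEK documentation points the Compute Engine service agent at the combined encrypter/decrypter role, so a variant of the same binding (a sketch, reusing the data sources from the question) would be:
resource "google_kms_crypto_key_iam_binding" "key_iam_binding" {
  crypto_key_id = data.google_kms_crypto_key.cmek-key.id
  role          = "roles/cloudkms.cryptoKeyEncrypterDecrypter" # allows both encrypt and decrypt on the disk key
  members = [
    "serviceAccount:service-${data.google_project.project.number}@compute-system.iam.gserviceaccount.com",
  ]
}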
I am trying to pass the nat_ip of Google Compute instances created in module "microservice-instance" to another module, "database". Since I am creating more than one instance, I am getting the following error for the output variable in module "microservice-instance".
Error: Missing resource instance key

  on modules/microservice-instance/ms-outputs.tf line 3, in output "nat_ip":
   3: value = google_compute_instance.apps.network_interface[*].access_config[0].nat_ip

Because google_compute_instance.apps has "count" set, its attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
  google_compute_instance.apps[count.index]
I have looked at the following and I am accessing the attribute the same way, but it's not working. Here is the code:
main.tf
provider "google" {
credentials = "${file("../../service-account.json")}"
project = var.project
region =var.region
}
# Include modules
module "microservice-instance" {
  count           = var.appserver_count
  source          = "./modules/microservice-instance"
  appserver_count = var.appserver_count
}

module "database" {
  count              = var.no_of_db_instances
  source             = "./modules/database"
  nat_ip             = module.microservice-instance.nat_ip
  no_of_db_instances = var.no_of_db_instances
}
./modules/microservice-instance/microservice-instance.tf
resource "google_compute_instance" "apps" {
count = var.appserver_count
name = "apps-${count.index + 1}"
# name = "apps-${random_id.app_name_suffix.hex}"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "ubuntu-os-cloud/ubuntu-1804-lts"
}
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
}
./modules/microservice-instance/ms-outputs.tf
output "nat_ip" {
value = google_compute_instance.apps.network_interface[*].access_config[0].nat_ip
}
./modules/database/database.tf
resource "random_id" "db_name_suffix" {
byte_length = 4
}
resource "google_sql_database_instance" "postgres" {
name = "postgres-instance-${random_id.db_name_suffix.hex}"
database_version = "POSTGRES_11"
settings {
tier = "db-f1-micro"
ip_configuration {
dynamic "authorized_networks" {
for_each = var.nat_ip
# iterator = ip
content {
# value = ni.0.access_config.0.nat_ip
value = each.key
}
}
}
}
}
You are creating var.appserver_count number of google_compute_instance.apps resources. So you will have:
google_compute_instance.apps[0]
google_compute_instance.apps[1]
...
google_compute_instance.apps[var.appserver_count - 1]
Therefore, in your output, instead of:
output "nat_ip" {
value = google_compute_instance.apps.network_interface[*].access_config[0].nat_ip
}
you have to reference individual apps resources or all of them using [*], for example:
output "nat_ip" {
value = google_compute_instance.apps[*].network_interface[*].access_config[0].nat_ip
}
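Note that the chained splats return a nested list (one list of interfaces per instance). If the database module expects a flat list of IP strings, a for expression produces one directly; a minimal sketch, assuming each instance has exactly one network interface with one access_config:
output "nat_ip" {
  # One public IP per instance, as a flat list of strings
  value = [for app in google_compute_instance.apps : app.network_interface[0].access_config[0].nat_ip]
}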
I have a new issue with setting up a GCP instance template; I presume there was an update to the Terraform GCP provider.
resource "google_compute_instance_template" "backend-template" {
name = "${var.platform_name}-backend-instance-template"
description = "Template used for backend instances"
instance_description = "backend Instance"
machine_type = "n1-standard-1"
metadata_startup_script = "${lookup(var.startup_scripts,"backend-server")}"
disk {
boot = "true"
source_image = "backend-packer-image"
}
metadata {
APP_SETTINGS = "${var.app_settings}"
URL_STAGING = "${var.url_staging}"
API_URL_STAGING = "${var.api_url_staging}"
URL_PRODUCTION = "${var.url_production}"
API_URL_PRODUCTION = "${var.api_url_production}"
LOGIN_URL = "${var.login_url}"
API_URL = "${var.api_url}"
vault_server_IP = "${lookup(var.static_ips, "vault-server")}"
environment = "${var.environment}"
}
network_interface {
subnetwork = "${google_compute_subnetwork.private-fe-be.self_link}"
}
lifecycle {
create_before_destroy = true
}
tags = ["no-ip", "backend-server"]
service_account {
scopes = ["cloud-platform"]
}
}
This is the current error after running the script, even though the image backend-packer-image was already created and exists on GCP:
* google_compute_instance_template.backend-template: 1 error(s) occurred:
* google_compute_instance_template.backend-template: error flattening disks: Error getting relative path for source image: String was not a self link: global/images/backend-packer-image
I had the exact same problem today; I had to go look directly into the pull request to find a way to use this correctly.
So, what I came up with is this:
You must first make sure you are in the right project before typing this command, or you won't find the image you are looking for if it's a custom one:
gcloud compute images list --uri | grep "your image name"
This gives you the URI of your image; use it in full and it will work.
Replace the image name with the URI in source_image:
resource "google_compute_instance_template" "backend-template" {
name = "${var.platform_name}-backend-instance-
template"
description = "Template used for backend instances"
instance_description = "backend Instance"
machine_type = "n1-standard-1"
metadata_startup_script = "${lookup(var.startup_scripts,"backend-server")}"
disk {
boot = "true"
source_image = "https://www.googleapis.com/compute/v1/projects/<project-name>/global/images/backend-packer-image"
}
metadata {
APP_SETTINGS = "${var.app_settings}"
URL_STAGING = "${var.url_staging}"
API_URL_STAGING = "${var.api_url_staging}"
URL_PRODUCTION = "${var.url_production}"
API_URL_PRODUCTION = "${var.api_url_production}"
LOGIN_URL = "${var.login_url}"
API_URL = "${var.api_url}"
vault_server_IP = "${lookup(var.static_ips, "vault-server")}"
environment = "${var.environment}"
}
network_interface {
subnetwork = "${google_compute_subnetwork.private-fe-be.self_link}"
}
lifecycle {
create_before_destroy = true
}
tags = ["no-ip", "backend-server"]
service_account {
scopes = ["cloud-platform"]
}
}
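Alternatively, you can resolve the full self link at plan time with a google_compute_image data source instead of hardcoding the URL; a minimal sketch, assuming the image lives in the project referenced by a var.gcloud_project variable:
data "google_compute_image" "backend" {
  name    = "backend-packer-image"
  project = "${var.gcloud_project}" # assumed: the project that owns the custom image
}

# then, inside the disk block:
#   source_image = "${data.google_compute_image.backend.self_link}"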
It is also possible to pin the Terraform scripts to a previous provider version:
provider "google"{
version = "<= 1.17"
credentials = "${var.service_account_path}"
project = "${var.gcloud_project}"
region = "${var.gcloud_region}"
}
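On Terraform 0.13 and later, the same pin is expressed in a required_providers block rather than in the provider block:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "<= 1.17" # same constraint as above
    }
  }
}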