I used this module to create a security group inside a VPC. One of the outputs is the security_group_id, but I'm getting this error:
│ Error: Unsupported attribute
│
│ on ecs.tf line 39, in resource "aws_ecs_service" "hello_world":
│ 39: security_groups = [module.app_security_group.security_group_id]
│ ├────────────────
│ │ module.app_security_group is a object, known only after apply
│
│ This object does not have an attribute named "security_group_id".
I need the security group for an ECS service:
resource "aws_ecs_service" "hello_world" {
name = "hello-world-service"
cluster = aws_ecs_cluster.container_service_cluster.id
task_definition = aws_ecs_task_definition.hello_world.arn
desired_count = 1
launch_type = "FARGATE"
network_configuration {
security_groups = [module.app_security_group.security_group_id]
subnets = module.vpc.private_subnets
}
load_balancer {
target_group_arn = aws_lb_target_group.loadbalancer_target_group.id
container_name = "hello-world-app"
container_port = 3000
}
depends_on = [aws_lb_listener.loadbalancer_listener, module.app_security_group]
}
I understand that I can only know the security group ID after it is created. That's why I added the depends_on clause to the ECS stanza, but it keeps returning the same error.
Update
I specified count = 1 on the app_security_group module, and this is the error I'm getting now:
│ Error: Unsupported attribute
│
│ on ecs.tf line 39, in resource "aws_ecs_service" "hello_world":
│ 39: security_groups = module.app_security_group.security_group_id
│ ├────────────────
│ │ module.app_security_group is a list of object, known only after apply
│
│ Can't access attributes on a list of objects. Did you mean to access an attribute for a specific element of the list, or across all elements of the list?
Update II
This is the module declaration:
module "app_security_group" {
source = "terraform-aws-modules/security-group/aws//modules/web"
version = "3.17.0"
name = "${var.project}-web-sg"
description = "Security group for web-servers with HTTP ports open within VPC"
vpc_id = module.vpc.vpc_id
# ingress_cidr_blocks = module.vpc.public_subnets_cidr_blocks
ingress_cidr_blocks = ["0.0.0.0/0"]
}
I took a look at that module. The problem is that version 3.17.0 of the module simply does not have a security_group_id output; you are using a really old version.
The latest published version is 4.7.0, and that is the one you want to upgrade to. In fact, any version from 4.0.0 onward exposes security_group_id, so you need at least 4.0.0.
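A minimal sketch of the bumped declaration, assuming the inputs you already pass carry over unchanged to the 4.x line of the module:

module "app_security_group" {
  source  = "terraform-aws-modules/security-group/aws//modules/web"
  version = "~> 4.0"

  name        = "${var.project}-web-sg"
  description = "Security group for web-servers with HTTP ports open within VPC"
  vpc_id      = module.vpc.vpc_id

  ingress_cidr_blocks = ["0.0.0.0/0"]
}

After bumping the version, run terraform init -upgrade so the newer module version is actually downloaded.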
As you are using count, try the following:
network_configuration {
  security_groups = [module.app_security_group[0].security_group_id]
  subnets         = module.vpc.private_subnets
}
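With count set, module.app_security_group is a list of module instances, so the [0] index selects the single instance. If you drop count again, your original module.app_security_group.security_group_id reference will work, provided you are on a module version that actually exposes that output.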
Related
I'm trying to upgrade the AWS provider to version 4, but I'm getting the following error in the RDS module:
Error: Conflicting configuration arguments
│
│ with module.my-instance-mysql-eu[0].module.rds.module.db_instance.aws_db_instance.this[0],
│ on .terraform/modules/my-instance-mysql-eu.rds/modules/db_instance/main.tf line 47, in resource "aws_db_instance" "this":
│ 47: db_name = var.db_name
│
│ "db_name": conflicts with replicate_source_db
The error states that the db_name argument conflicts with the replicate_source_db argument; you cannot specify both, it has to be one or the other. This is also mentioned in the Terraform documentation.
If you are replicating an existing RDS database, the database name will be the same as the name of the source. If this is a new database, do not set the replicate_source_db attribute at all.
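For illustration, a read replica would be declared roughly like this sketch (the resource names and identifier are hypothetical), with db_name simply left unset:

resource "aws_db_instance" "replica" {
  identifier          = "app-db-replica"  # hypothetical identifier
  instance_class      = "db.t3.micro"
  replicate_source_db = aws_db_instance.primary.identifier
  # db_name is intentionally omitted; the replica inherits the database name from the source
}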
I encountered a similar issue with the engine & engine_version variables:
│ Error: Conflicting configuration arguments
│
│ with module.production.module.replica_app_db_production.aws_db_instance.db,
│ on modules/rds/postgres/main.tf line 36, in resource "aws_db_instance" "db":
│ 36: engine = var.engine
│
│ "engine": conflicts with replicate_source_db
╵
╷
│ Error: Conflicting configuration arguments
│
│ with module.production.module.replica_app_db_production.aws_db_instance.db,
│ on modules/rds/postgres/main.tf line 37, in resource "aws_db_instance" "db":
│ 37: engine_version = var.engine_version
│
│ "engine_version": conflicts with replicate_source_db
╵
I found a good example of a solution here: https://github.com/terraform-aws-modules/terraform-aws-rds/blob/v5.2.2/modules/db_instance/main.tf
And I managed to solve this with the below conditions:
# Replicas will use source metadata
username = var.replicate_source_db != null ? null : var.username
password = var.replicate_source_db != null ? null : var.password
engine = var.replicate_source_db != null ? null : var.engine
engine_version = var.replicate_source_db != null ? null : var.engine_version
If var.replicate_source_db is not null, then username/password/engine/engine_version are set to null (which is what we need, since these arguments cannot be specified for a replica). And if it is not a replica, the variables are passed through as usual :)
You can add the same for the db_name parameter:
db_name = var.replicate_source_db != null ? null : var.db_name
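For this conditional pattern to work, replicate_source_db has to default to null in the wrapping module; a minimal variable sketch would be:

variable "replicate_source_db" {
  description = "Identifier of the source DB instance to replicate; leave null for a standalone instance"
  type        = string
  default     = null
}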
The module I'm working on represents one app which is deployed to a VPC. The VPC is declared elsewhere.
The relevant data path includes these resources:
variable "vpc_id" { }
data "aws_subnets" "private" {
filter {
name = "vpc-id"
values = [data.aws_vpc.vpc.id]
}
filter {
name = "tag:Visibility"
values = ["private"]
}
}
data "aws_subnet" "private" {
for_each = toset(data.aws_subnets.private.ids)
vpc_id = data.aws_vpc.vpc.id
id = each.value
}
resource "aws_rds_cluster" "database" {
availability_zones = data.aws_subnet.private.*.availability_zones
}
That feels like the correct syntax, though it is a verbose chain of data retrieval. However, when I run terraform plan I get:
│ Error: Unsupported attribute
│
│ on ../../../../../appmodule/rds_postgres.tf line 23, in resource "aws_rds_cluster" "webapp":
│ 23: availability_zones = data.aws_subnet.private.*.availability_zone_id
│
│ This object does not have an attribute named "availability_zone_id".
I'm using AWS provider 4.18.0 and Terraform v1.1.2. The documentation for the aws_subnet data source lists availability_zone_id as an exported attribute.
Am I missing something obvious here?
As mentioned in the comments, you can get the list of AZs by using the values built-in function [1]. This is necessary because the data source you are relying on for the AZs returns a map of objects, due to the for_each meta-argument:
data "aws_subnet" "private" {
for_each = toset(data.aws_subnets.private.ids)
vpc_id = data.aws_vpc.vpc.id
id = each.value
}
The change you need to make is:
resource "aws_rds_cluster" "database" {
availability_zones = values(data.aws_subnet.private)[*].availability_zone
}
A test with an output and a default VPC shows the following result:
+ subnet_azs = [
+ "us-east-1b",
+ "us-east-1c",
+ "us-east-1d",
+ "us-east-1a",
+ "us-east-1f",
+ "us-east-1e",
]
As you can see, it is already a list so you can use it as is.
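For reference, the output used for that test looks along these lines (the name subnet_azs matches the plan result above):

output "subnet_azs" {
  value = values(data.aws_subnet.private)[*].availability_zone
}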
Note that the documentation also explains why you should use the availability_zone attribute instead:
availability_zone_id - (Optional) ID of the Availability Zone for the subnet. This argument is not supported in all regions or partitions. If necessary, use availability_zone instead
[1] https://www.terraform.io/language/functions/values
I'm completely new to Databricks and trying to deploy an E2 workspace using the sample Terraform code provided by Databricks. I've just started with the VPC part:
data "aws_availability_zones" "available" {}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
# version = "3.2.0"
name = local.prefix
cidr = var.cidr_block
azs = data.aws_availability_zones.available.names
enable_dns_hostnames = true
enable_nat_gateway = true
single_nat_gateway = true
create_igw = true
private_subnets = [cidrsubnet(var.cidr_block, 3, 1),
cidrsubnet(var.cidr_block, 3, 2)]
manage_default_security_group = true
default_security_group_name = "${local.prefix}-sg"
default_security_group_egress = [{
cidr_blocks = "0.0.0.0/0"
}]
default_security_group_ingress = [{
description = "Allow all internal TCP and UDP"
self = true
}]
}
When I run terraform plan I get this error:
│ Error: Error in function call
│
│ on .terraform/modules/vpc/main.tf line 1090, in resource "aws_nat_gateway" "this":
│ 1090: subnet_id = element(
│ 1091: aws_subnet.public.*.id,
│ 1092: var.single_nat_gateway ? 0 : count.index,
│ 1093: )
│ ├────────────────
│ │ aws_subnet.public is empty tuple
│ │ count.index is 0
│ │ var.single_nat_gateway is true
│
│ Call to function "element" failed: cannot use element function with an empty list.
Would really appreciate any pointers on what is going wrong here.
You set create_igw = true because you want an internet gateway, but you haven't specified any public_subnets. You must have public_subnets if you have an IGW, and the single NAT gateway you enabled has to be placed in one of them, which is why aws_subnet.public shows up as an empty tuple.
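A sketch of the fix, assuming the public subnets are carved out of the same CIDR block (the indices 3 and 4 are only illustrative; pick ranges that do not overlap your private subnets):

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  # ... existing arguments as above ...

  private_subnets = [cidrsubnet(var.cidr_block, 3, 1),
                     cidrsubnet(var.cidr_block, 3, 2)]
  public_subnets  = [cidrsubnet(var.cidr_block, 3, 3),
                     cidrsubnet(var.cidr_block, 3, 4)]
}

With public_subnets present, the module can attach the internet gateway and place the NAT gateway in one of the public subnets.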
I'm trying to provision GCP resources through Terraform, but it's timing out while also throwing errors saying that resources already exist (I've looked in GCP and through the CLI, and the resources do not exist).
Error: Error waiting to create Image: Error waiting for Creating Image: timeout while waiting for state to become 'DONE' (last state: 'RUNNING', timeout: 15m0s)
│
│ with google_compute_image.student-image,
│ on main.tf line 29, in resource "google_compute_image" "student-image":
│ 29: resource "google_compute_image" "student-image" {
│
╵
╷
│ Error: Error creating Firewall: googleapi: Error 409: The resource 'projects/**-****-**********-******/global/firewalls/*****-*********-*******-*****-firewall' already exists, alreadyExists
│
│ with google_compute_firewall.default,
│ on main.tf line 46, in resource "google_compute_firewall" "default":
│ 46: resource "google_compute_firewall" "default" {
Some (perhaps salient) details:
I have previously provisioned these resources successfully using this same approach.
My billing account has since changed.
At another point, it was saying that the machine image existed (which, if it does, I can't see either in the console or the CLI).
I welcome any insights/suggestions.
EDIT
Including HCL; variables are defined in variables.tf and terraform.tfvars
provider "google" {
  region = var.region
}

resource "google_compute_image" "student-image" {
  name    = var.google_compute_image_name
  project = var.project

  raw_disk {
    source = var.google_compute_image_source
  }

  timeouts {
    create = "15m"
    update = "15m"
    delete = "15m"
  }
}

resource "google_compute_firewall" "default" {
  name    = "cloud-computing-project-image-firewall"
  network = "default"
  project = var.project

  allow {
    protocol = "tcp"

    # 22: SSH
    # 80: HTTP
    ports = [
      "22",
      "80",
    ]
  }

  source_ranges = ["0.0.0.0/0"]
}
source = "./vm"
name = "workspace-vm"
project = var.project
image = google_compute_image.student-image.self_link
machine_type = "n1-standard-1"
}
There is a vm subdirectory with main.tf:
resource "google_compute_instance" "student_instance" {
name = var.name
machine_type = var.machine_type
zone = var.zone
project = var.project
boot_disk {
initialize_params {
image = var.image
size = var.disk_size
}
}
network_interface {
network = "default"
access_config {
}
}
labels = {
project = "machine-learning-on-the-cloud"
}
}
...and variables.tf:
variable "name" {}
variable "project" {}

variable "zone" {
  default = "us-east1-b"
}

variable "image" {}
variable "machine_type" {}

variable "disk_size" {
  default = 20
}
It sounds like the resources were provisioned with Terraform, but perhaps someone deleted them manually, so your state file and what actually exists no longer match. terraform refresh might solve your problem.
I am trying to spin up an AWS bastion host on EC2 using the Terraform module provided by Guimove. I am getting stuck on the bastion_host_key_pair field. I need to provide a key pair that can be used to launch the EC2 template, but the bucket (aws_s3_bucket.bucket) that needs to contain the public key of that key pair is created inside the module, so the key isn't there when the module tries to launch the instance, and it fails. It feels like a chicken-and-egg scenario, so I am obviously doing something wrong. What am I doing wrong?
Error:
╷
│ Error: Error creating Auto Scaling Group: AccessDenied: You are not authorized to use launch template: lt-004b0af2895c684b3
│ status code: 403, request id: c6096e0d-dc83-4384-a036-f35b8ca292f8
│
│ with module.bastion.aws_autoscaling_group.bastion_auto_scaling_group,
│ on .terraform\modules\bastion\main.tf line 300, in resource "aws_autoscaling_group" "bastion_auto_scaling_group":
│ 300: resource "aws_autoscaling_group" "bastion_auto_scaling_group" {
│
╵
Terraform:
resource "tls_private_key" "bastion_host" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "bastion_host" {
key_name = "bastion_user"
public_key = tls_private_key.bastion_host.public_key_openssh
}
resource "aws_s3_bucket_object" "bucket_public_key" {
bucket = aws_s3_bucket.bucket.id
key = "public-keys/${aws_key_pair.bastion_host.key_name}.pub"
content = aws_key_pair.bastion_host.public_key
kms_key_id = aws_kms_key.key.arn
}
module "bastion" {
source = "Guimove/bastion/aws"
bucket_name = "${var.identifier}-ssh-bastion-bucket-${var.env}"
region = var.aws_region
vpc_id = var.vpc_id
is_lb_private = "false"
bastion_host_key_pair = aws_key_pair.bastion_host.key_name
create_dns_record = "false"
elb_subnets = var.public_subnet_ids
auto_scaling_group_subnets = var.public_subnet_ids
instance_type = "t2.micro"
tags = {
Name = "SSH Bastion Host - ${var.identifier}-${var.env}",
}
}
I had the same issue. The fix was to go into the AWS Marketplace, accept the EULA, and subscribe to the AMI I was trying to use.