I need to add the same route to multiple route tables in AWS, and I want to use Terraform for this. For a single table I can use something like:
resource "aws_route" "route" {
  route_table_id         = "${var.routetableid}"
  destination_cidr_block = "${var.destcidrblock}"
  instance_id            = "${aws_instance.vpn.id}"
}
However, I'd like to add the route for every route_table_id that the user specifies as a list. Is this possible?
Terraform lets you loop over a resource using the count meta parameter.
In your case you could do something like this:
variable "route_tables" {
  type = "list"
}

resource "aws_route" "route" {
  count                  = "${length(var.route_tables)}"
  route_table_id         = "${element(var.route_tables, count.index)}"
  destination_cidr_block = "${var.destcidrblock}"
  instance_id            = "${aws_instance.vpn.id}"
}
Here route_tables is a list of route table IDs.
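For example, the list could be supplied from a tfvars file (the IDs here are placeholders):

```hcl
# terraform.tfvars
route_tables  = ["rtb-0123456789abcdef0", "rtb-0123456789abcdef1"]
destcidrblock = "10.0.0.0/16"
```

Terraform will then create one aws_route per entry in the list.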
I am trying to create a VPC module, and I am facing an issue with the private subnets. We have multiple resources such as RDS, Redshift, and Cassandra, and I want to create a subnet for each of these resources in each AZ from a single block of code. However, I am unable to figure out how to assign the tags in that case.
resource "aws_subnet" "packages_subnet" {
  count                   = "${length(var.packages_subnet)}"
  vpc_id                  = "${aws_vpc.vpc.id}"
  cidr_block              = "${element(var.packages_subnet, count.index)}"
  availability_zone       = "${element(var.availability_zones, count.index)}"
  map_public_ip_on_launch = false

  tags = {
    Name = "${var.env_name}-${element(var.test, count.index)}-${element(var.availability_zones, count.index)}"
  }
}
This is how my vars.tf looks:
variable "test" {
  type    = list
  default = ["rds", "redshift", "lambda", "emr", "cassandra", "redis"]
}
With the above approach, the rds subnet is always created in 1a, and the redshift subnet in 1b.
module "Networking" {
  source          = "../modules/Networking"
  packages_subnet = ["10.3.4.0/24", "10.3.5.0/24", "10.3.6.0/24", "10.3.10.0/24", "10.3.7.0/24", "10.3.8.0/24", "10.3.9.0/24", "10.3.11.0/24", "10.3.12.0/24", "10.3.13.0/24", "10.3.14.0/24", "10.3.15.0/24", "10.3.16.0/24", "10.3.17.0/24", "10.3.18.0/24"]
}
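One way to get every resource type into every AZ is to create length(test) × length(availability_zones) subnets and derive both parts of the Name tag from count.index. This is a sketch assuming Terraform 0.12+ syntax and that packages_subnet contains one CIDR block per subnet; integer division groups consecutive subnets under the same resource name while the modulo cycles through the AZs:

```hcl
resource "aws_subnet" "packages_subnet" {
  count                   = length(var.test) * length(var.availability_zones)
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = element(var.packages_subnet, count.index)
  # count.index % len(AZs) cycles 0,1,2,0,1,2,... through the AZ list
  availability_zone       = element(var.availability_zones, count.index % length(var.availability_zones))
  map_public_ip_on_launch = false

  tags = {
    # floor(count.index / len(AZs)) stays on the same resource name for one full AZ cycle,
    # so "rds" gets a subnet in each AZ before moving on to "redshift", and so on
    Name = "${var.env_name}-${element(var.test, floor(count.index / length(var.availability_zones)))}-${element(var.availability_zones, count.index % length(var.availability_zones))}"
  }
}
```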
ERROR: no matching VPC Endpoint found
(error referring to data code block)
I am trying to retrieve multiple endpoints with the data "aws_vpc_endpoint" source. I created locals to hold the service name shared by multiple endpoints: they all start with the same first few characters, and after that prefix each endpoint has unique characters that identify it individually.
I want the data source to loop through the endpoints and retrieve each one that shares that prefix, then grab each endpoint ID for the aws_route resource. FYI: the endpoints are being created by resource "aws_networkfirewall_firewall". The main things to look at in this snippet are the locals, the data block, and the last line of resource "aws_route". How can I express in locals that the service_name does not end there, and that the rest of the string is unique to each endpoint, without hard-coding each service_name?
locals {
  endpoints = {
    service_name = "com.amazonaws.vpce.us-east-1.vpce-svc-"
  }
}

data "aws_vpc_endpoint" "firewall-endpoints" {
  for_each     = local.endpoints
  vpc_id       = aws_vpc.vpc.id
  service_name = each.value

  #filter {
  #  name   = "tag:AWSNetworkFirewallManaged"
  #  values = [true]
  #}
}
resource "aws_route" "tgw_route" {
  count                  = var.number_azs
  route_table_id         = aws_route_table.tgw_rt[count.index].id
  destination_cidr_block = var.tgw_aws_route[0]
  vpc_endpoint_id        = data.aws_vpc_endpoint.firewall-endpoints["service_name"].id
}
I can't test this, but I think what you want to do is something like this:
resource "aws_route" "tgw_route" {
  for_each               = aws_networkfirewall_firewall.firewall_status.sync_states
  route_table_id         = aws_route_table.tgw_rt[???].id
  destination_cidr_block = var.tgw_aws_route[0]
  vpc_endpoint_id        = each.value.attachment.endpoint_id
}
I'm not clear on the structure of the firewall_status output, so that may need to change slightly. The main question is how to get the appropriate route table ID per subnet: can you access the tgw_rt route tables in some way other than by index? Unfortunately, I have no experience with setting up an AWS firewall, just with Terraform, so I don't know how to solve that part of the puzzle.
I'm trying to deploy an EKS cluster, and everything seems to be fine except for one thing!
The facade module looks like this:
module "eks" {
  source = "../../../../infrastructure_modules/eks"

  ## EKS ##
  create_eks      = var.create_eks
  cluster_version = var.cluster_version
  cluster_name    = local.cluster_name
  vpc_id          = data.aws_vpc.this.id
  subnets         = data.aws_subnet_ids.this.ids

  # note: either pass worker_groups or node_groups
  # this is for (EKSCTL API) unmanaged node group
  worker_groups = var.worker_groups
  # this is for (EKS API) managed node group
  node_groups = var.node_groups

  ## Common tag metadata ##
  env      = var.env
  app_name = var.app_name
  tags     = local.eks_tags
  region   = var.region
}
The VPC ID is retrieved through the following block:
data "aws_vpc" "this" {
  tags = {
    Name = "tagName"
  }
}
which is then used to retrieve the subnet IDs:
data "aws_subnet_ids" "this" {
  vpc_id = data.aws_vpc.this.id
}
Nevertheless, deploying this results in an error stating:
Error: error creating EKS Cluster (data-layer-eks):
UnsupportedAvailabilityZoneException: Cannot create cluster
'data-layer-eks' because us-east-1e, the targeted availability zone,
does not currently have sufficient capacity to support the cluster.
This is a well-known error that you can run into even with plain EC2 instances.
I could solve it by simply hardcoding the subnet values, but that is undesirable and hard to maintain.
So the question is: how can I filter the subnet IDs down to availability zones that have sufficient capacity?
First, you need to collect the subnets with all of their attributes:
data "aws_subnets" "this" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.this.id]
  }
}

data "aws_subnet" "this" {
  for_each = toset(data.aws_subnets.this.ids)
  id       = each.value
}
data.aws_subnet.this is now a map(object) with all of the subnets and their attributes. You can now filter by availability zone accordingly:
subnets = [for subnet in data.aws_subnet.this : subnet.id if subnet.availability_zone != "us-east-1e"]
You can also filter by truthy conditionals if that condition is easier for you:
subnets = [for subnet in data.aws_subnet.this : subnet.id if contains(["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d"], subnet.availability_zone)]
It depends on your personal use case.
I'm using the code below to assign the default subnets to an ASG:
resource "aws_autoscaling_group" "ecs_spot_asg" {
  for_each = data.aws_subnet_ids.all_subnets.ids
  .... etc...
The subnets are fetched via:
data "aws_subnet_ids" "all_subnets" {
  vpc_id = data.aws_vpc.default.id
}
Below I have an aws_autoscaling_policy, and I'm stuck on how to relate one to the other:
resource "aws_autoscaling_policy" "ecs_cluster_scale_policy" {
  autoscaling_group_name = aws_autoscaling_group.ecs_spot_asg.name
I'm getting this error:
Because aws_autoscaling_group.ecs_spot_asg has "for_each" set, its
attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
aws_autoscaling_group.ecs_spot_asg[each.key]
How should this be modified?
My mistake was wrapping the subnet IDs in brackets when setting vpc_zone_identifier. The ids attribute is already a set of strings, so instead of vpc_zone_identifier = [data.aws_subnet_ids.all_subnets.ids] it should be vpc_zone_identifier = data.aws_subnet_ids.all_subnets.ids.
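Putting it together, the ASG no longer needs for_each at all, since a single ASG can span all of the default subnets. A sketch, with placeholder names and sizes and the launch details elided:

```hcl
resource "aws_autoscaling_group" "ecs_spot_asg" {
  name     = "ecs-spot-asg" # placeholder
  min_size = 1              # placeholder
  max_size = 3              # placeholder

  # the ids attribute is already a set of strings, so no brackets
  vpc_zone_identifier = data.aws_subnet_ids.all_subnets.ids

  # ... launch template / configuration as before ...
}

resource "aws_autoscaling_policy" "ecs_cluster_scale_policy" {
  name                   = "ecs-cluster-scale-policy" # placeholder
  # without for_each, the ASG is a single instance and .name works directly
  autoscaling_group_name = aws_autoscaling_group.ecs_spot_asg.name

  # ... policy settings as before ...
}
```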
When I add the routing for multiple subnets like this for cross-account VPC peering, it forces a new resource on every apply:
resource "aws_route" "route" {
  count                     = "${var.first_route_table_count}"
  route_table_id            = "${element(var.first_route_table_ids, count.index)}"
  destination_cidr_block    = "${data.aws_vpc.second_vpc.cidr_block}"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.peer.id}"
}

resource "aws_route" "second_account_route" {
  provider                  = "aws.second_account"
  count                     = "${var.second_route_table_count}"
  route_table_id            = "${element(var.second_route_table_ids, count.index)}"
  destination_cidr_block    = "${data.aws_vpc.first_vpc.cidr_block}"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.peer.id}"
}
Here is the solution if anyone comes across this Terraform quirk in the future.
I've come to realise that because I was defining the route table and its routes together (inline), you cannot add another route later without forcing a new resource.
The solution is to create the route table with no inline routes, then add every route as a separate resource.
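In other words, drop the inline route blocks from aws_route_table and manage each route as its own aws_route resource. A sketch with placeholder names, reusing the variables from the question:

```hcl
# Route table defined with no inline routes, so routes can be added later
# without Terraform wanting to replace the table
resource "aws_route_table" "first" {
  vpc_id = data.aws_vpc.first_vpc.id
}

# Each route is its own resource, managed independently of the table
resource "aws_route" "route" {
  count                     = "${var.first_route_table_count}"
  route_table_id            = "${element(var.first_route_table_ids, count.index)}"
  destination_cidr_block    = "${data.aws_vpc.second_vpc.cidr_block}"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.peer.id}"
}
```

Mixing inline routes and standalone aws_route resources on the same table causes Terraform to fight itself over the route set; pick one style per table, and standalone routes are the flexible choice.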