CIDR = 10.50.0.0/16
variable "region" {
default = "us-east-1"
description = "AWS region"
}
data "aws_availability_zones" "available" {}
us-east-1 has 6 AZs:
["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d", "us-east-1e", "us-east-1f"]
I want to create 1 public and 1 private subnet per configured AZ.
I have 3 environments (dev/stage/prod):
For the dev env, I want to create subnets in 3 availability zones;
for the stage env, in 4 availability zones;
for the prod env, in all availability zones (this us-east-1 region has 6).
local.tf
locals {
  selected_azs = map(data.avaialbility_zones.name[3])
}
vpc.tf
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = var.vpc_name
cidr = var.vpc_cidr
azs = data.aws_availability_zones.available.names
private_subnets = var.ath_private_subnet_block
public_subnets = var.ath_public_subnet_block
enable_nat_gateway = local.natgw_states[var.natgw_configuration].enable_nat_gateway
single_nat_gateway = local.natgw_states[var.natgw_configuration].single_nat_gateway
one_nat_gateway_per_az = local.natgw_states[var.natgw_configuration].one_nat_gateway_per_az
tags = var.resource_tags
}
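The natgw_states local referenced above isn't shown; purely as a hypothetical illustration, it could be a map of objects keyed by natgw_configuration:
locals {
  # hypothetical example only: maps a configuration name to the three
  # NAT gateway flags consumed by the vpc module above
  natgw_states = {
    none       = { enable_nat_gateway = false, single_nat_gateway = false, one_nat_gateway_per_az = false }
    single     = { enable_nat_gateway = true, single_nat_gateway = true, one_nat_gateway_per_az = false }
    one_per_az = { enable_nat_gateway = true, single_nat_gateway = false, one_nat_gateway_per_az = true }
  }
}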
variable.tf
variable "az_throttle_limit" {
type = number
default = 0
description = "number of AZs to limit to, 0 for all"
}
Any advice on reading availability zones? How can I control this from a local value? By default this will create a subnet in every availability zone of the current region.
Summary:
Target AZs: all "opt-in-not-required" AZs
(us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1e, us-east-1f)
The AZs should not be a static list; they should be fetched automatically from AWS.
Configurable: limit the number of AZs to reduce the resources used (especially in non-production environments).
You can add a new variable called env, with a local value that maps each env to its AZs:
variable "env" {
type = string
default = "dev"
}
locals {
selected_azs = {
"dev" = [for i in range(3): data.aws_availability_zones.available.names[i]]
"stage" = [for i in range(4): data.aws_availability_zones.available.names[i]]
"prod" = data.aws_availability_zones.available.names
}
}
Then use it:
azs = local.selected_azs[var.env]
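Alternatively, if you prefer to keep the numeric az_throttle_limit variable from your variable.tf, a minimal sketch using slice() (assuming 0 means "use all AZs"):
locals {
  all_azs = data.aws_availability_zones.available.names
  # take the first N AZs when a limit is set, otherwise use all of them
  selected_azs = var.az_throttle_limit > 0 ? slice(local.all_azs, 0, min(var.az_throttle_limit, length(local.all_azs))) : local.all_azs
}
and in the vpc module: azs = local.selected_azs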
Main two questions, with Terraform code:
Is the ALB for ECS Fargate for routing to other availability zones, or for routing to containers?
If I create subnets based on the number of availability zones (us-east-2a, 2b, 2c, so the number is 3 and I create 3 subnets) and map them to an ECS cluster with an ALB, do the availability zones apply?
I'm trying to build infra like the diagram below.
resource "aws_vpc" "cluster_vpc" {
tags = {
Name = "ecs-vpc"
}
cidr_block = "10.30.0.0/16"
}
data "aws_availability_zones" "available" {
}
resource "aws_subnet" "cluster" {
vpc_id = aws_vpc.cluster_vpc.id
count = length(data.aws_availability_zones.available.names)
cidr_block = "10.30.${10 + count.index}.0/24"
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = {
Name = "ecs-subnet"
}
}
resource "aws_internet_gateway" "cluster_igw" {
vpc_id = aws_vpc.cluster_vpc.id
tags = {
Name = "ecs-igw"
}
}
resource "aws_route_table" "public_route" {
vpc_id = aws_vpc.cluster_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.cluster_igw.id
}
tags = {
Name = "ecs-route-table"
}
}
resource "aws_route_table_association" "to-public" {
count = length(aws_subnet.cluster)
subnet_id = aws_subnet.cluster[count.index].id
route_table_id = aws_route_table.public_route.id
}
resource "aws_ecs_cluster" "staging" {
name = "service-ecs-cluster"
}
resource "aws_ecs_service" "staging" {
name = "staging"
cluster = aws_ecs_cluster.staging.id
task_definition = aws_ecs_task_definition.service.arn
desired_count = 1
launch_type = "FARGATE"
network_configuration {
security_groups = [aws_security_group.ecs_tasks.id]
subnets = aws_subnet.cluster[*].id
assign_public_ip = true
}
load_balancer {
target_group_arn = aws_lb_target_group.staging.arn
container_name = var.app_name
container_port = var.container_port
}
}
resource "aws_lb" "staging" {
name = "alb"
subnets = aws_subnet.cluster[*].id
load_balancer_type = "application"
security_groups = [aws_security_group.lb.id]
access_logs {
bucket = aws_s3_bucket.log_storage.id
prefix = "frontend-alb"
enabled = true
}
tags = {
Environment = "staging"
Application = var.app_name
}
}
... (omitting components such as the lb_target_group and other specifics)
Is the ALB for ECS Fargate for routing to other availability zones, or for routing to containers?
Not really. It is there to provide a single, fixed endpoint (URL) for your ECS service. The ALB will automatically distribute incoming connections from the internet across your ECS tasks. They can be in one or multiple AZs. In your case it is only 1 AZ, since you are using desired_count = 1. This means that you will have only 1 ECS task in a single AZ.
If I create subnets based on the number of availability zones (us-east-2a, 2b, 2c, so the number is 3 and I create 3 subnets) and map them to an ECS cluster with an ALB, do the availability zones apply?
Yes, because your ALB is enabled for the same subnets as your ECS service through aws_subnet.cluster[*].id. But, as explained in the first question, you will have only 1 task in one AZ.
My intent is to build infra which has three availability zones and also to deploy AWS Fargate tasks across the three availability zones.
As explained before, your desired_count = 1, so you will not have ECS tasks running across 3 AZs.
Also, you are creating only public subnets, while your schematic diagram shows that the ECS services should be in private ones.
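A rough, untested sketch of the changes that would spread tasks across the AZs (the private subnets here are hypothetical; your current config only creates the public aws_subnet.cluster subnets):
resource "aws_ecs_service" "staging" {
  # ... other arguments unchanged ...
  desired_count = 3 # Fargate spreads tasks across the AZs of the given subnets

  network_configuration {
    security_groups  = [aws_security_group.ecs_tasks.id]
    subnets          = aws_subnet.cluster_private[*].id # hypothetical private subnets, per the diagram
    assign_public_ip = false
  }
  # load_balancer block unchanged; the ALB itself stays in the public subnets
}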
I have been trying to deploy an EKS cluster in the us-east-1 region, and one of the availability zones, us-east-1e, does not support the setup, which makes my cluster creation fail.
Please see the error below and let me know if there is a way to skip the us-east-1e AZ within the Terraform deployment.
Plan: 26 to add, 0 to change, 0 to destroy.
This plan was saved to: development.tfplan
To perform exactly these actions, run the following command to apply:
terraform apply "development.tfplan"
(base) _C0DL:deploy-eks-cluster-using-terraform-master snadella001$ terraform apply "development.tfplan"
data.aws_availability_zones.available_azs: Reading... [id=2020-12-04 22:10:40.079079 +0000 UTC]
data.aws_availability_zones.available_azs: Read complete after 0s [id=2020-12-04 22:10:47.208548 +0000 UTC]
module.eks-cluster.aws_eks_cluster.this[0]: Creating...
Error: error creating EKS Cluster (eks-ha):
UnsupportedAvailabilityZoneException: Cannot create cluster 'eks-hia'
because us-east-1e, the targeted availability zone, does not currently
have sufficient capacity to support the cluster. Retry and choose from
these availability zones: us-east-1a, us-east-1b, us-east-1c,
us-east-1d, us-east-1f { RespMetadata: {
StatusCode: 400,
RequestID: "0f2ddbd1-107f-490e-b45f-6985e1c7f1f8" }, ClusterName: "eks-ha", Message_: "Cannot create cluster 'eks-hia'
because us-east-1e, the targeted availability zone, does not currently
have sufficient capacity to support the cluster. Retry and choose from
these availability zones: us-east-1a, us-east-1b, us-east-1c,
us-east-1d, us-east-1f", ValidZones: [
"us-east-1a",
"us-east-1b",
"us-east-1c",
"us-east-1d",
"us-east-1f" ] }
on .terraform/modules/eks-cluster/cluster.tf line 9, in resource
"aws_eks_cluster" "this": 9: resource "aws_eks_cluster" "this" {
Please find the EKS cluster listed below:
# create EKS cluster
module "eks-cluster" {
source = "terraform-aws-modules/eks/aws"
version = "12.1.0"
cluster_name = var.cluster_name
cluster_version = "1.17"
write_kubeconfig = false
availability-zones = ["us-east-1a", "us-east-1b", "us-east-1c"]## tried but does not work
subnets = module.vpc.private_subnets
vpc_id = module.vpc.vpc_id
worker_groups_launch_template = local.worker_groups_launch_template
# map developer & admin ARNs as kubernetes Users
map_users = concat(local.admin_user_map_users, local.developer_user_map_users)
}
# get EKS cluster info to configure Kubernetes and Helm providers
data "aws_eks_cluster" "cluster" {
name = module.eks-cluster.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks-cluster.cluster_id
}
#################
# Private subnet
#################
resource "aws_subnet" "private" {
count = var.create_vpc && length(var.private_subnets) > 0 ? length(var.private_subnets) : 0
vpc_id = local.vpc_id
cidr_block = var.private_subnets[count.index]
# availability_zone = ["us-east-1a", "us-east-1b", "us-east-1c"]
availability_zone = length(regexall("^[a-z]{2}-", element(var.azs, count.index))) > 0 ? element(var.azs, count.index) : null
availability_zone_id = length(regexall("^[a-z]{2}-", element(var.azs, count.index))) == 0 ? element(var.azs, count.index) : null
assign_ipv6_address_on_creation = var.private_subnet_assign_ipv6_address_on_creation == null ? var.assign_ipv6_address_on_creation : var.private_subnet_assign_ipv6_address_on_creation
ipv6_cidr_block = var.enable_ipv6 && length(var.private_subnet_ipv6_prefixes) > 0 ? cidrsubnet(aws_vpc.this[0].ipv6_cidr_block, 8, var.private_subnet_ipv6_prefixes[count.index]) : null
tags = merge(
{
"Name" = format(
"%s-${var.private_subnet_suffix}-%s",
var.name,
element(var.azs, count.index),
)
},
var.tags,
var.private_subnet_tags,
)
}
variable "azs" {
description = "A list of availability zones names or ids in the region"
type = list(string)
default = []
#default = ["us-east-1a", "us-east-1b","us-east-1c","us-east-1d"]
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.44.0"
name = "${var.name_prefix}-vpc"
cidr = var.main_network_block
# azs = data.aws_availability_zones.available_azs.names
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
private_subnets = [
# this loop will create a one-line list as ["10.0.0.0/20", "10.0.16.0/20", "10.0.32.0/20", ...]
# with a length depending on how many Zones are available
for zone_id in data.aws_availability_zones.available_azs.zone_ids :
cidrsubnet(var.main_network_block, var.subnet_prefix_extension, tonumber(substr(zone_id, length(zone_id) - 1, 1)) - 1)
]
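One possible way to skip us-east-1e while keeping the AZ list dynamic (a sketch, not tested against this config): the aws_availability_zones data source supports an exclude_names argument, so the list fed into the vpc module simply never contains that zone:
data "aws_availability_zones" "available_azs" {
  state         = "available"
  # skip the zone EKS rejects
  exclude_names = ["us-east-1e"]
}
The vpc module's azs and the derived private_subnets would then never land in us-east-1e, and the EKS module, which takes its zones from module.vpc.private_subnets, should no longer hit the UnsupportedAvailabilityZoneException.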
Is there a better way to optimize the code below so I don't have to look up the availability zones again and again and can instead do it once? The region is a variable, so I can't hardcode the availability zones. I also want my public subnets to be /24.
provider "aws" {
region = var.region
}
resource "aws_vpc" "app_vpc" {
cidr_block = var.vpc_cidr
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = var.vpc_name
}
}
data "aws_availability_zones" "available" {
state = "available"
}
#provision public subnet
resource "aws_subnet" "public_subnet_01" {
vpc_id = aws_vpc.app_vpc.id
cidr_block = var.public_subnet_01
availability_zone = data.aws_availability_zones.available.names[0]
tags = {
Name = "public_subnet_01"
}
depends_on = [aws_vpc_dhcp_options_association.dns_resolver]
}
resource "aws_subnet" "public_subnet_02" {
vpc_id = aws_vpc.app_vpc.id
cidr_block = var.public_subnet_02
availability_zone = data.aws_availability_zones.available.names[1]
tags = {
Name = "public_subnet_02"
}
depends_on = [aws_vpc_dhcp_options_association.dns_resolver]
}
resource "aws_subnet" "public_subnet_03" {
vpc_id = aws_vpc.app_vpc.id
cidr_block = var.public_subnet_03
availability_zone = data.aws_availability_zones.available.names[2]
tags = {
Name = "public_subnet_03"
}
depends_on = [aws_vpc_dhcp_options_association.dns_resolver]
}
An important hazard to consider with the aws_availability_zones data source is that the set of available zones can change over time, and so it's important to write your configuration so that you don't find yourself trapped in a situation where Terraform thinks you intend to replace a subnet that you are currently using and therefore cannot destroy.
A key part of that is ensuring that Terraform understands that each of the subnets belongs to a specific availability zone, so that when the set of availability zones changes Terraform can either add a new subnet for a new availability zone or remove an existing subnet for a now-removed availability zone, without affecting the others that haven't changed. The easiest way to achieve that is to use resource for_each with the set of availability zones:
resource "aws_subnet" "public" {
for_each = aws_avaiability_zones.available.names
# ...
}
The above will declare subnet instances with addresses that each include the availability zone name, like this:
aws_subnet.public["eu-west-1a"]
aws_subnet.public["eu-west-1b"]
aws_subnet.public["eu-west-1e"]
Because they are identified by the availability zone name, Terraform can see that each subnet belongs to a particular availability zone.
For subnets in particular there is an additional challenge: we must assign each subnet its own CIDR block, which means we need a systematic way of allocating IP address space to availability zones so that the networks won't get renumbered by future changes to the set of availability zones.
The documentation for the aws_availability_zone data source includes an example of declaring a mapping table that assigns each region and each availability zone a number between 1 and 14 which is then used to populate one of the octets of the IP address to create a separate prefix per (region, AZ) pair. That example creates only a single VPC and a single subnet, but we can expand on that by using for_each to do it for each of the availability zones, as long as we update the mapping tables whenever we use a new region or a new availability zone suffix letter is assigned (up to 14 of each):
variable "region_number" {
# Arbitrary mapping of region name to number to use in
# a VPC's CIDR prefix.
default = {
us-east-1 = 1
us-west-1 = 2
us-west-2 = 3
eu-central-1 = 4
ap-northeast-1 = 5
}
}
variable "az_number" {
# Assign a number to each AZ letter used in our configuration
default = {
a = 1
b = 2
c = 3
d = 4
e = 5
f = 6
# and so on, up to n = 14 if that many letters are assigned
}
}
data "aws_region" "current" {}
# Determine all of the available availability zones in the
# current AWS region.
data "aws_availability_zones" "available" {
state = "available"
}
# This additional data source determines some additional
# details about each VPC, including its suffix letter.
data "aws_availability_zone" "all" {
for_each = toset(data.aws_availability_zones.available.names)
name = each.key
}
# A single VPC for the region
resource "aws_vpc" "example" {
cidr_block = cidrsubnet("10.1.0.0/16", 4, var.region_number[data.aws_region.current.name])
}
# A subnet for each availability zone in the region.
resource "aws_subnet" "example" {
for_each = data.aws_availability_zone.all
vpc_id = aws_vpc.example.id
availability_zone = each.key
cidr_block = cidrsubnet(aws_vpc.example.cidr_block, 4, var.az_number[each.value.name_suffix])
}
For example, if we were working in us-west-2 and there were availability zones us-west-2a and us-west-2c, the above would declare:
A single aws_vpc.example with CIDR block 10.1.48.0/20, where 48 is the decimal representation of hex 0x30, where 3 is the number for us-west-2.
A subnet aws_subnet.example["us-west-2a"] in us-west-2a with CIDR block 10.1.49.0/24, where 49 is the decimal representation of hex 0x31.
A subnet aws_subnet.example["us-west-2c"] in us-west-2c with CIDR block 10.1.51.0/24, where 51 is the decimal representation of hex 0x33.
Notice that there is no subnet for 10.1.50.0/24, because 50 (hex 0x32) is reserved for a hypothetical us-west-2b. By allocating these addresses statically by subnet letter we can ensure that they will not change over time as availability zones are added and removed.
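The CIDR arithmetic in that example can be verified in terraform console (region number 3 for us-west-2; AZ numbers 1 and 3 for the "a" and "c" suffixes):
> cidrsubnet("10.1.0.0/16", 4, 3)
"10.1.48.0/20"
> cidrsubnet("10.1.48.0/20", 4, 1)
"10.1.49.0/24"
> cidrsubnet("10.1.48.0/20", 4, 3)
"10.1.51.0/24"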
You can automate the creation of subnets using count and cidrsubnet.
An example would be:
resource "aws_subnet" "public_subnet" {
count = length(data.aws_availability_zones.available.names)
vpc_id = aws_vpc.app_vpc.id
cidr_block = cidrsubnet(aws_vpc.app_vpc.cidr_block, 8, count.index)
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = {
Name = "public_subnet_${count.index}"
}
depends_on = [aws_vpc_dhcp_options_association.dns_resolver]
}
The above will automatically create a subnet in each AZ and assign each one a CIDR block (/24, assuming that the VPC is /16).
I'm using Terraform to set up an EKS cluster. I need to make sure that my worker nodes will be placed in private subnets and that my public subnets will be used for my load balancers, but I don't actually know how to inject public and private subnets into my cluster, because I'm only using private ones.
resource "aws_eks_cluster" "master_node" {
name = "my-cluster"
role_arn = aws_iam_role.master_iam_role.arn
version = "1.14"
vpc_config {
security_group_ids = [aws_security_group.master_security_group.id]
subnet_ids = var.private_subnet_eks_ids
}
depends_on = [
aws_iam_role_policy_attachment.main-cluster-AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.main-cluster-AmazonEKSServicePolicy,
]
}
resource "aws_autoscaling_group" "eks_autoscaling_group" {
desired_capacity = var.desired_capacity
launch_configuration = aws_launch_configuration.eks_launch_config.id
max_size = var.max_size
min_size = var.min_size
name = "my-autoscaling-group"
vpc_zone_identifier = var.private_subnet_eks_ids
depends_on = [
aws_efs_mount_target.efs_mount_target
]
}
Give only private subnets to your EKS cluster but, before that, make sure you've tagged the public subnets like so:
Key: kubernetes.io/role/elb
value: 1
as described here: https://aws.amazon.com/premiumsupport/knowledge-center/eks-vpc-subnet-discovery/
EKS will discover the public subnets in which to place the load balancer by querying for these tags.
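If the subnets are created with the terraform-aws-modules vpc module, a minimal sketch of adding those tags (plus the internal-elb tag commonly used on private subnets for internal load balancers) would be:
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  # ... name, cidr, azs, private_subnets, public_subnets ...

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }
  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }
}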
I usually create both public and private subnets in the VPC using the vpc module. Then I create the EKS cluster using the eks module and refer to the VPC data.
Example
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "my-vpc"
cidr = "10.0.0.0/16"
azs = ["eu-north-1a", "eu-north-1b", "eu-north-1c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
enable_nat_gateway = true
enable_vpn_gateway = true
}
And then the EKS cluster, where I refer to the VPC subnets using module.vpc.private_subnets and module.vpc.vpc_id:
module "eks-cluster" {
source = "terraform-aws-modules/eks/aws"
cluster_name = "my-eks-cluster"
cluster_version = "1.17"
subnets = module.vpc.private_subnets
vpc_id = module.vpc.vpc_id
worker_groups = [
{
instance_type = "t3.small"
asg_max_size = 2
}
]
}
In my application I am using an AWS Auto Scaling group managed with Terraform. I launch the Auto Scaling group giving it a number of instances in a region, but only 20 instances are allowed per region. I want to launch an Auto Scaling group that creates instances across multiple regions so that I can launch more. I had this configuration:
# ---------------------------------------------------------------------------------------------------------------------
# THESE TEMPLATES REQUIRE TERRAFORM VERSION 0.8 AND ABOVE
# ---------------------------------------------------------------------------------------------------------------------
terraform {
required_version = ">= 0.9.3"
}
provider "aws" {
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "us-east-1"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
}
provider "aws" {
alias = "us-west-2"
region = "us-west-2"
}
provider "aws" {
alias = "eu-west-1"
region = "eu-west-1"
}
provider "aws" {
alias = "eu-central-1"
region = "eu-central-1"
}
provider "aws" {
alias = "ap-southeast-1"
region = "ap-southeast-1"
}
provider "aws" {
alias = "ap-southeast-2"
region = "ap-southeast-2"
}
provider "aws" {
alias = "ap-northeast-1"
region = "ap-northeast-1"
}
provider "aws" {
alias = "sa-east-1"
region = "sa-east-1"
}
resource "aws_launch_configuration" "launch_configuration" {
name_prefix = "${var.asg_name}-"
image_id = "${var.ami_id}"
instance_type = "${var.instance_type}"
associate_public_ip_address = true
key_name = "${var.key_name}"
security_groups = ["${var.security_group_id}"]
user_data = "${data.template_file.user_data_client.rendered}"
lifecycle {
create_before_destroy = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE AN AUTO SCALING GROUP (ASG)
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_autoscaling_group" "autoscaling_group" {
name = "${var.asg_name}"
max_size = "${var.max_size}"
min_size = "${var.min_size}"
desired_capacity = "${var.desired_capacity}"
launch_configuration = "${aws_launch_configuration.launch_configuration.name}"
vpc_zone_identifier = ["${data.aws_subnet_ids.default.ids}"]
lifecycle {
create_before_destroy = true
}
tag {
key = "Environment"
value = "production"
propagate_at_launch = true
}
tag {
key = "Name"
value = "clj-${var.job_id}-instance"
propagate_at_launch = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# THE USER DATA SCRIPT THAT WILL RUN ON EACH CLIENT NODE WHEN IT'S BOOTING
# ---------------------------------------------------------------------------------------------------------------------
data "template_file" "user_data_client" {
template = "${file("./user-data-client.sh")}"
vars {
company_location_job_id = "${var.job_id}"
docker_login_username = "${var.docker_login_username}"
docker_login_password = "${var.docker_login_password}"
}
}
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CLUSTER IN THE DEFAULT VPC AND SUBNETS
# Using the default VPC and subnets makes this example easy to run and test, but it means Instances are
# accessible from the public Internet. In a production deployment, we strongly recommend deploying into a custom VPC
# and private subnets.
# ---------------------------------------------------------------------------------------------------------------------
data "aws_subnet_ids" "default" {
vpc_id = "${var.vpc_id}"
}
But this configuration does not work: it only launches instances in a single region and throws an error once they reach 20.
How can we create instances across multiple regions in an Auto Scaling group?
You correctly instantiate multiple aliased providers, but are not using any of them.
If you really need to create resources in different regions from one configuration, you must pass the alias of the provider to the resource:
resource "aws_autoscaling_group" "autoscaling_group_eu-central-1" {
provider = "aws.eu-central-1"
}
And repeat this block as many times as needed (or, better, extract it into a module and pass the providers to the module).
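A minimal sketch of that module approach, assuming the launch configuration and ASG are extracted into a hypothetical ./modules/asg module (the quoted provider reference matches the 0.11-era syntax of the snippet above; current Terraform uses an unquoted aws = aws.eu-central-1):
module "asg_eu_central_1" {
  source = "./modules/asg" # hypothetical module wrapping the launch configuration + ASG

  providers = {
    "aws" = "aws.eu-central-1"
  }

  # ... asg_name, ami_id, subnet IDs for that region, etc. ...
}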
But, as mentioned in a comment, if all you want to achieve is to have more than 20 instances, you can increase your limit by opening a ticket with AWS support.