Attach Auto-Scaling Policy to ECS service from CLI

I have a service running on ECS deployed with Fargate. I am using ecs-cli compose to launch this service. Here is the command I currently use:
ecs-cli compose service up --cluster my_cluster --launch-type FARGATE
I also have an ecs-params.yml to configure this service. Here is the content:
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  task_role_arn: arn:aws:iam::XXXXXX:role/MyExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 2GB
    cpu_limit: 1024
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-XXXXXXXXXXXXXXXXX"
        - "subnet-XXXXXXXXXXXXXXXXX"
      security_groups:
        - "sg-XXXXXXXXXXXXXX"
      assign_public_ip: ENABLED
Once the service is created, I have to log into the AWS console and attach an auto-scaling policy through the AWS GUI. Is there an easier way to attach an auto-scaling policy, either through the CLI or in my YAML configuration?

While you can use the AWS CLI itself (see the application-autoscaling commands in the docs), I think it is much better for the entire operation to be performed in one deployment, and for that you have tools such as Terraform.
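For reference, the pure-CLI route is essentially two application-autoscaling calls: register the service as a scalable target, then attach a policy to it. A minimal sketch, assuming the compose service ends up named my_service and using an illustrative target-tracking configuration (both are assumptions, not taken from the setup above):
# Register the ECS service as a scalable target (service name is a placeholder)
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my_cluster/my_service \
  --min-capacity 1 \
  --max-capacity 4

# Hypothetical target-tracking configuration: keep average CPU around 75%
cat > scaling-policy.json <<'EOF'
{
  "TargetValue": 75.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleInCooldown": 60,
  "ScaleOutCooldown": 60
}
EOF

# Attach the policy to the scalable target
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my_cluster/my_service \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration file://scaling-policy.json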
For the Terraform route, you can use the terraform-ecs module written by arminc on GitHub, or you can do it yourself! Here's a quick (and really dirty) example for the entire cluster, but you can also just grab the autoscaling part and use it on its own if you don't want to have the entire deployment in one place:
provider "aws" {
region = "us-east-1" # insert your own region
profile = "insert aw cli profile, should be located in ~/.aws/credentials file"
# you can also use your aws credentials instead
# access_key = "insert_access_key"
# secret_key = "insert_secret_key"
}
resource "aws_ecs_cluster" "cluster" {
name = "my-cluster"
}
resource "aws_ecs_service" "service" {
name = "my-service"
cluster = "${aws_ecs_cluster.cluster.id}"
task_definition = "${aws_ecs_task_definition.task_definition.family}:${aws_ecs_task_definition.task_definition.revision}"
network_configuration {
# These can also be created with Terraform and applied dynamically instead of hard-coded
# look it up in the Docs
security_groups = ["SG_IDS"]
subnets = ["SUBNET_IDS"] # can also be created with Terraform
assign_public_ip = true
}
}
resource "aws_ecs_task_definition" "task_definition" {
family = "my-service"
execution_role_arn = "ecsTaskExecutionRole"
task_role_arn = "INSERT_ARN"
network_mode = "awsvpc"
container_definitions = <<DEFINITION
[
{
"name": "my_service"
"cpu": 1024,
"environment": [{
"name": "exaple_ENV_VAR",
"value": "EXAMPLE_VALUE"
}],
"essential": true,
"image": "INSERT IMAGE URL",
"memory": 2048,
"networkMode": "awsvpc"
}
]
DEFINITION
}
#
# Application AutoScaling resources
#
resource "aws_appautoscaling_target" "main" {
service_namespace = "ecs"
resource_id = "service/${var.cluster_name}/${aws_ecs_service.service.name}"
scalable_dimension = "ecs:service:DesiredCount"
# Insert Min and Max capacity here
min_capacity = "1"
max_capacity = "4"
depends_on = [
"aws_ecs_service.main",
]
}
resource "aws_appautoscaling_policy" "up" {
name = "scaling_policy-${aws_ecs_service.service.name}-up"
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
scalable_dimension = "ecs:service:DesiredCount"
step_scaling_policy_configuration {
adjustment_type = "ChangeInCapacity"
cooldown = "60" # In seconds
metric_aggregation_type = "Average"
step_adjustment {
metric_interval_lower_bound = 0
scaling_adjustment = 1 # you can also use negative numbers for scaling down
}
}
depends_on = [
"aws_appautoscaling_target.main",
]
}
resource "aws_appautoscaling_policy" "down" {
name = "scaling_policy-${aws_ecs_service.service.name}-down"
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
scalable_dimension = "ecs:service:DesiredCount"
step_scaling_policy_configuration {
adjustment_type = "ChangeInCapacity"
cooldown = "60" # In seconds
metric_aggregation_type = "Average"
step_adjustment {
metric_interval_upper_bound = 0
scaling_adjustment = -1 # scale down example
}
}
depends_on = [
"aws_appautoscaling_target.main",
]
}
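Note that step scaling policies like these only act when something triggers them, typically a CloudWatch alarm. As a minimal sketch (the metric, threshold, and period are illustrative assumptions, not part of the original setup), an alarm that fires the scale-up policy on high average service CPU could look like:
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "cpu-high-${aws_ecs_service.service.name}"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/ECS"
  period              = "60"
  statistic           = "Average"
  threshold           = "75"

  dimensions = {
    ClusterName = "${aws_ecs_cluster.cluster.name}"
    ServiceName = "${aws_ecs_service.service.name}"
  }

  # Invoke the scale-up policy defined above when the alarm fires
  alarm_actions = ["${aws_appautoscaling_policy.up.arn}"]
}
A mirror-image alarm on low CPU would point its alarm_actions at the "down" policy.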

Related

ResourceInitializationError with Fargate ECS deployment

I'm fairly new to AWS. I am trying to deploy a docker container to ECS but it fails with the following error:
ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post "https://api.ecr.us-east-1.amazonaws.com/": dial tcp 52.46.146.144:443: i/o timeout
This was working perfectly fine until I tried to add a load balancer, at which point this error began occurring. I must have changed something, but I'm not sure what.
The ECS instance is in a public subnet
The security group has in/out access on all ports/ips (0.0.0.0/0)
The VPC has an internet gateway
Clearly something is wrong with my config, but I'm not sure what. Google and other Stack Overflow posts haven't helped so far.
Terraform ECS file:
resource "aws_ecs_cluster" "solmines-ecs-cluster" {
name = "solmines-ecs-cluster"
}
resource "aws_ecs_service" "solmines-ecs-service" {
name = "solmines"
cluster = aws_ecs_cluster.solmines-ecs-cluster.id
task_definition = aws_ecs_task_definition.solmines-ecs-task-definition.arn
launch_type = "FARGATE"
desired_count = 1
network_configuration {
security_groups = [aws_security_group.solmines-ecs.id]
subnets = ["${aws_subnet.solmines-public-subnet1.id}", "${aws_subnet.solmines-public-subnet2.id}"]
assign_public_ip = true
}
load_balancer {
target_group_arn = aws_lb_target_group.solmines-lb-tg.arn
container_name = "solmines-api"
container_port = 80
}
depends_on = [aws_lb_listener.solmines-lb-listener]
}
resource "aws_ecs_task_definition" "solmines-ecs-task-definition" {
family = "solmines-ecs-task-definition"
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
memory = "1024"
cpu = "512"
execution_role_arn = "${aws_iam_role.solmines-ecs-role.arn}"
container_definitions = <<EOF
[
{
"name": "solmines-api",
"image": "${aws_ecr_repository.solmines-ecr-repository.repository_url}:latest",
"memory": 1024,
"cpu": 512,
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
]
}
]
EOF
}

I'm not able to join EKS node into EKS cluster (Terraform)

I'm using Terraform v0.14.2, and I'm trying to create an EKS cluster, but I'm having a problem when the nodes are joining the cluster. The status stays stuck in "Creating" until I get an error:
Error: error waiting for EKS Node Group (EKS_SmartSteps:EKS_SmartSteps-worker-node-uk) creation: NodeCreationFailure: Instances failed to join the kubernetes cluster. Resource IDs: [i-00c4bac08b3c42225]
My code to deploy is:
resource "aws_eks_node_group" "managed_workers" {
for_each = local.ob
cluster_name = aws_eks_cluster.cluster.name
node_group_name = "${var.cluster_name}-worker-node-${each.value}"
node_role_arn = aws_iam_role.managed_workers.arn
subnet_ids = aws_subnet.private.*.id
scaling_config {
desired_size = 1
max_size = 1
min_size = 1
}
launch_template {
id = aws_launch_template.worker-node[each.value].id
version = aws_launch_template.worker-node[each.value].latest_version
}
depends_on = [
kubernetes_config_map.aws_auth_configmap,
aws_iam_role_policy_attachment.eks-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.eks-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.eks-AmazonEC2ContainerRegistryReadOnly,
]
lifecycle {
create_before_destroy = true
ignore_changes = [scaling_config[0].desired_size, scaling_config[0].min_size]
}
}
resource "aws_launch_template" "worker-node" {
for_each = local.ob
image_id = data.aws_ssm_parameter.cluster.value
name = "${var.cluster_name}-worker-node-${each.value}"
instance_type = "t3.medium"
block_device_mappings {
device_name = "/dev/xvda"
ebs {
volume_size = 20
volume_type = "gp2"
}
}
tag_specifications {
resource_type = "instance"
tags = {
"Instance Name" = "${var.cluster_name}-node-${each.value}"
Name = "${var.cluster_name}-node-${each.value}"
}
}
}
In fact, in EC2 and EKS I can see the nodes attached to the EKS cluster, but with this status error:
"Instances failed to join the kubernetes cluster"
I can't pin down where the error is because the error messages don't show more info.
Any idea?
Thanks
So others can follow, you need to include a user data script to get the nodes to join the cluster. Something like:
userdata.tpl
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="
--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
set -ex
/etc/eks/bootstrap.sh ${CLUSTER_NAME} --b64-cluster-ca ${B64_CLUSTER_CA} --apiserver-endpoint ${API_SERVER_URL}
--==MYBOUNDARY==--\
You would render it like so:
locals {
  user_data_values = {
    CLUSTER_NAME   = var.cluster_name
    B64_CLUSTER_CA = var.cluster_certificate_authority
    API_SERVER_URL = var.cluster_endpoint
  }
}

resource "aws_launch_template" "cluster" {
  image_id  = "ami-XXX" # Make sure the AMI is an EKS worker
  user_data = base64encode(templatefile("userdata.tpl", local.user_data_values))
  ...
}
Aside from that, make sure the node group uses the worker security group and has the required IAM role policies attached, and you should be fine.
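For reference, here is a minimal sketch of that IAM side, reusing the role and policy-attachment names the question's depends_on already refers to; the worker security group reference in the final comment is an assumption about your setup:
resource "aws_iam_role" "managed_workers" {
  name = "${var.cluster_name}-worker-node"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.managed_workers.name
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.managed_workers.name
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.managed_workers.name
}

# In the launch template, also pass the worker security group so the kubelet
# can reach the control plane, e.g.:
#   vpc_security_group_ids = [aws_security_group.worker.id]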

How to deploy a minimalistic EKS cluster with terraform?

Friends,
I am completely new to Terraform, but I am trying to learn here. At the moment I am reading the book Terraform: Up & Running, but I need to spin up an EKS cluster to deploy one of my learning projects. For this, I am following this [tutorial][1] from HashiCorp.
My main questions are the following: Do I really need all of this (see the Terraform code for AWS below) to deploy a cluster on AWS? How could I reduce the code below to the minimum necessary to spin up a cluster with a master and one worker that are able to communicate with each other?
On Google Cloud I could spin up a cluster with just these few lines of code:
provider "google" {
credentials = file(var.credentials)
project = var.project
region = var.region
}
resource "google_container_cluster" "primary" {
name = var.cluster_name
network = var.network
location = var.region
initial_node_count = var.initial_node_count
}
resource "google_container_node_pool" "primary_preemtible_nodes" {
name = var.node_name
location = var.region
cluster = google_container_cluster.primary.name
node_count = var.node_count
node_config {
preemptible = var.preemptible
machine_type = var.machine_type
}
}
Can I do something similar to spin up an EKS cluster? The code below is working, but I feel like I am biting off more than I can chew.
provider "aws" {
region = "${var.AWS_REGION}"
secret_key = "${var.AWS_SECRET_KEY}"
access_key = "${var.AWS_ACCESS_KEY}"
}
# ----- Base VPC Networking -----
data "aws_availability_zones" "available_zones" {}
# Creates a virtual private network which will isolate
# the resources to be created.
resource "aws_vpc" "blur-vpc" {
#Specifies the range of IP adresses for the VPC.
cidr_block = "10.0.0.0/16"
tags = "${
map(
"Name", "terraform-eks-node",
"kubernetes.io/cluster/${var.cluster-name}", "shared"
)
}"
}
resource "aws_subnet" "subnet" {
count = 2
availability_zone = "${data.aws_availability_zones.available_zones.names[count.index]}"
cidr_block = "10.0.${count.index}.0/24"
vpc_id = "${aws_vpc.blur-vpc.id}"
tags = "${
map(
"Name", "blur-subnet",
"kubernetes.io/cluster/${var.cluster-name}", "shared",
)
}"
}
# The component that allows communication between
# the VPC and the internet.
resource "aws_internet_gateway" "gateway" {
# Attaches the gateway to the VPC.
vpc_id = "${aws_vpc.blur-vpc.id}"
tags = {
Name = "eks-gateway"
}
}
# Determines where network traffic from the gateway
# will be directed.
resource "aws_route_table" "route-table" {
vpc_id = "${aws_vpc.blur-vpc.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.gateway.id}"
}
}
resource "aws_route_table_association" "table_association" {
count = 2
subnet_id = "${aws_subnet.subnet.*.id[count.index]}"
route_table_id = "${aws_route_table.route-table.id}"
}
# -- Resources required for the master setup --

# The block below (IAM role + policy) allows the EKS service to
# manage or retrieve data from other AWS services.
# Similar to an IAM user, but not uniquely associated with one person.
# A role can be assumed by anyone who needs it.
resource "aws_iam_role" "blur-iam-role" {
  name = "eks-cluster"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

# Attaches the policy "AmazonEKSClusterPolicy" to the role created above.
resource "aws_iam_role_policy_attachment" "blur-iam-role-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = "${aws_iam_role.blur-iam-role.name}"
}
# Master security group
# A security group acts as a virtual firewall to control inbound and outbound traffic.
# This security group will control networking access to the K8S master.
resource "aws_security_group" "blur-cluster" {
  name        = "eks-blur-cluster"
  description = "Allows the communication with the worker nodes"
  vpc_id      = "${aws_vpc.blur-vpc.id}"

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "blur-cluster"
  }
}

# The actual master node
resource "aws_eks_cluster" "blur-cluster" {
  name = "${var.cluster-name}"

  # Attaches the IAM role created above.
  role_arn = "${aws_iam_role.blur-iam-role.arn}"

  vpc_config {
    # Attaches the security group created for the master.
    # Attaches also the subnets.
    security_group_ids = ["${aws_security_group.blur-cluster.id}"]
    subnet_ids         = "${aws_subnet.subnet.*.id}"
  }

  depends_on = [
    "aws_iam_role_policy_attachment.blur-iam-role-AmazonEKSClusterPolicy",
    # "aws_iam_role_policy_attachment.blur-iam-role-AmazonEKSServicePolicy"
  ]
}
# -- Resources required for the worker nodes setup --

# IAM role for the workers. Allows worker nodes to manage or retrieve data
# from other services; it is required for the workers to join the cluster.
resource "aws_iam_role" "iam-role-worker" {
  name = "eks-worker"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

# Allows Amazon EKS worker nodes to connect to Amazon EKS clusters.
resource "aws_iam_role_policy_attachment" "iam-role-worker-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = "${aws_iam_role.iam-role-worker.name}"
}

# This permission is required to modify the IP address configuration of worker nodes.
resource "aws_iam_role_policy_attachment" "iam-role-worker-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = "${aws_iam_role.iam-role-worker.name}"
}

# Allows listing repositories and pulling images.
resource "aws_iam_role_policy_attachment" "iam-role-worker-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = "${aws_iam_role.iam-role-worker.name}"
}

# An instance profile represents an EC2 instance (Who am I?)
# and assumes a role (What can I do?).
resource "aws_iam_instance_profile" "worker-node" {
  name = "worker-node"
  role = "${aws_iam_role.iam-role-worker.name}"
}
# Security group for the worker nodes
resource "aws_security_group" "security-group-worker" {
  name        = "worker-node"
  description = "Security group for worker nodes"
  vpc_id      = "${aws_vpc.blur-vpc.id}"

  egress {
    cidr_blocks = ["0.0.0.0/0"]
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
  }

  tags = "${
    map(
      "Name", "blur-cluster",
      "kubernetes.io/cluster/${var.cluster-name}", "owned"
    )
  }"
}

resource "aws_security_group_rule" "ingress-self" {
  description              = "Allow communication among nodes"
  from_port                = 0
  to_port                  = 65535
  protocol                 = "-1"
  security_group_id        = "${aws_security_group.security-group-worker.id}"
  source_security_group_id = "${aws_security_group.security-group-worker.id}"
  type                     = "ingress"
}

resource "aws_security_group_rule" "ingress-cluster-https" {
  description              = "Allow worker to receive communication from the cluster control plane"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.security-group-worker.id}"
  source_security_group_id = "${aws_security_group.blur-cluster.id}"
  type                     = "ingress"
}

resource "aws_security_group_rule" "ingress-cluster-others" {
  description              = "Allow worker to receive communication from the cluster control plane"
  from_port                = 1025
  to_port                  = 65535
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.security-group-worker.id}"
  source_security_group_id = "${aws_security_group.blur-cluster.id}"
  type                     = "ingress"
}

# Worker Access to Master
resource "aws_security_group_rule" "cluster-node-ingress-http" {
  description              = "Allows pods to communicate with the cluster API server"
  from_port                = 443
  to_port                  = "443"
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.blur-cluster.id}"
  source_security_group_id = "${aws_security_group.security-group-worker.id}"
  type                     = "ingress"
}
# --- Worker autoscaling group ---

# This data will be used to filter and select an AMI which is compatible
# with the specific k8s version being deployed.
data "aws_ami" "eks-worker" {
  filter {
    name   = "name"
    values = ["amazon-eks-node-${aws_eks_cluster.blur-cluster.version}-v*"]
  }

  most_recent = true
  owners      = ["602401143452"]
}

data "aws_region" "current" {}

locals {
  node-user-data = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.blur-cluster.endpoint}'
USERDATA
}

# To spin up an auto scaling group an "aws_launch_configuration" is needed.
# This ALC requires an "image_id" as well as a "security_group".
resource "aws_launch_configuration" "launch_config" {
  associate_public_ip_address = true
  iam_instance_profile        = "${aws_iam_instance_profile.worker-node.name}"
  image_id                    = "${data.aws_ami.eks-worker.id}"
  instance_type               = "t2.micro"
  name_prefix                 = "terraform-eks"
  security_groups             = ["${aws_security_group.security-group-worker.id}"]
  user_data_base64            = "${base64encode(local.node-user-data)}"

  lifecycle {
    create_before_destroy = true
  }
}

# Actual autoscaling group
resource "aws_autoscaling_group" "autoscaling" {
  desired_capacity     = 2
  launch_configuration = "${aws_launch_configuration.launch_config.id}"
  max_size             = 2
  min_size             = 1
  name                 = "terraform-eks"
  vpc_zone_identifier  = "${aws_subnet.subnet.*.id}"

  tag {
    key                 = "Name"
    value               = "terraform-eks"
    propagate_at_launch = true
  }

  # The "kubernetes.io/cluster/*" tag allows EKS and K8S to discover and manage compute resources.
  tag {
    key                 = "kubernetes.io/cluster/${var.cluster-name}"
    value               = "owned"
    propagate_at_launch = true
  }
}
[1]: https://registry.terraform.io/providers/hashicorp/aws/2.33.0/docs/guides/eks-getting-started#preparation
Yes, you should create most of them because, as you can see in the Terraform AWS documentation, a VPC configuration is required to deploy an EKS cluster. But you don't have to set up a security group rule for the workers to access the master. Also, try to use the aws_eks_node_group resource to create the worker node group; it will save you from creating a launch configuration and autoscaling group separately (see the sketch after the link below).
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_node_group
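As a rough illustration only, a managed node group that replaces the launch configuration and autoscaling group could look like this (it reuses the question's cluster, worker IAM role, and subnets as placeholders; the instance type and sizes are arbitrary):
resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.blur-cluster.name
  node_group_name = "blur-workers"
  node_role_arn   = aws_iam_role.iam-role-worker.arn
  subnet_ids      = aws_subnet.subnet[*].id
  instance_types  = ["t3.medium"]

  scaling_config {
    desired_size = 2
    max_size     = 2
    min_size     = 1
  }

  # The three worker policies still need to be attached to the node role
  depends_on = [
    aws_iam_role_policy_attachment.iam-role-worker-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.iam-role-worker-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.iam-role-worker-AmazonEC2ContainerRegistryReadOnly,
  ]
}
EKS then provisions and bootstraps the worker instances for you, so the custom AMI lookup and user data script are no longer needed.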

Terraform AWS EKS Cluster Deployment Error

I have been trying to deploy an EKS cluster in the us-east-1 region, and I see that one of the availability zones, us-east-1e, does not support the setup, which causes my cluster creation to fail.
Please see the error below and let me know if there is a way to skip the us-east-1e AZ within the Terraform deployment.
Plan: 26 to add, 0 to change, 0 to destroy.
This plan was saved to: development.tfplan
To perform exactly these actions, run the following command to apply:
terraform apply "development.tfplan"
(base) _C0DL:deploy-eks-cluster-using-terraform-master snadella001$
terraform apply "development.tfplan"
data.aws_availability_zones.available_azs: Reading... [id=2020-12-04 22:10:40.079079 +0000 UTC]
data.aws_availability_zones.available_azs: Read complete after 0s [id=2020-12-04 22:10:47.208548 +0000 UTC]
module.eks-cluster.aws_eks_cluster.this[0]: Creating...
Error: error creating EKS Cluster (eks-ha):
UnsupportedAvailabilityZoneException: Cannot create cluster 'eks-hia'
because us-east-1e, the targeted availability zone, does not currently
have sufficient capacity to support the cluster. Retry and choose from
these availability zones: us-east-1a, us-east-1b, us-east-1c,
us-east-1d, us-east-1f { RespMetadata: {
StatusCode: 400,
RequestID: "0f2ddbd1-107f-490e-b45f-6985e1c7f1f8" }, ClusterName: "eks-ha", Message_: "Cannot create cluster 'eks-hia'
because us-east-1e, the targeted availability zone, does not currently
have sufficient capacity to support the cluster. Retry and choose from
these availability zones: us-east-1a, us-east-1b, us-east-1c,
us-east-1d, us-east-1f", ValidZones: [
"us-east-1a",
"us-east-1b",
"us-east-1c",
"us-east-1d",
"us-east-1f" ] }
on .terraform/modules/eks-cluster/cluster.tf line 9, in resource
"aws_eks_cluster" "this": 9: resource "aws_eks_cluster" "this" {
Please find the EKS cluster listed below:
# create EKS cluster
module "eks-cluster" {
  source           = "terraform-aws-modules/eks/aws"
  version          = "12.1.0"
  cluster_name     = var.cluster_name
  cluster_version  = "1.17"
  write_kubeconfig = false

  availability-zones = ["us-east-1a", "us-east-1b", "us-east-1c"] ## tried but does not work

  subnets = module.vpc.private_subnets
  vpc_id  = module.vpc.vpc_id

  worker_groups_launch_template = local.worker_groups_launch_template

  # map developer & admin ARNs as kubernetes Users
  map_users = concat(local.admin_user_map_users, local.developer_user_map_users)
}

# get EKS cluster info to configure Kubernetes and Helm providers
data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}
#################
# Private subnet
#################
resource "aws_subnet" "private" {
  count = var.create_vpc && length(var.private_subnets) > 0 ? length(var.private_subnets) : 0

  vpc_id     = local.vpc_id
  cidr_block = var.private_subnets[count.index]

  # availability_zone = ["us-east-1a", "us-east-1b", "us-east-1c"]
  availability_zone    = length(regexall("^[a-z]{2}-", element(var.azs, count.index))) > 0 ? element(var.azs, count.index) : null
  availability_zone_id = length(regexall("^[a-z]{2}-", element(var.azs, count.index))) == 0 ? element(var.azs, count.index) : null

  assign_ipv6_address_on_creation = var.private_subnet_assign_ipv6_address_on_creation == null ? var.assign_ipv6_address_on_creation : var.private_subnet_assign_ipv6_address_on_creation

  ipv6_cidr_block = var.enable_ipv6 && length(var.private_subnet_ipv6_prefixes) > 0 ? cidrsubnet(aws_vpc.this[0].ipv6_cidr_block, 8, var.private_subnet_ipv6_prefixes[count.index]) : null

  tags = merge(
    {
      "Name" = format(
        "%s-${var.private_subnet_suffix}-%s",
        var.name,
        element(var.azs, count.index),
      )
    },
    var.tags,
    var.private_subnet_tags,
  )
}

variable "azs" {
  description = "A list of availability zones names or ids in the region"
  type        = list(string)
  default     = []
  # default = ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d"]
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.44.0"
name = "${var.name_prefix}-vpc"
cidr = var.main_network_block
# azs = data.aws_availability_zones.available_azs.names
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
private_subnets = [
# this loop will create a one-line list as ["10.0.0.0/20", "10.0.16.0/20", "10.0.32.0/20", ...]
# with a length depending on how many Zones are available
for zone_id in data.aws_availability_zones.available_azs.zone_ids :
cidrsubnet(var.main_network_block, var.subnet_prefix_extension, tonumber(substr(zone_id, length(zone_id) - 1, 1)) - 1)
]

How to prioritize terraform execution priority

After RDS and ElastiCache are created in Terraform, I would like to adjust the ordering so that EC2 is set up afterwards.
Is this feasible with Terraform?
To be precise, I am running Docker on EC2, and I would like to pass the endpoints of the ElastiCache and RDS instances created by Terraform to Docker as environment variables.
Thank you for reading my question.
It is feasible with Terraform's implicit and explicit dependencies, so you can define which resource should be created first and which one comes after.
Explicit ordering is expressed with the following construct, which takes a list of resources:
depends_on = [
  "", "",
]
Here is an example:
resource "aws_db_instance" "rds_example" {
allocated_storage = 10
storage_type = "gp2"
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t1.micro"
name = "mydb"
username = "foo"
password = "bar"
db_subnet_group_name = "my_database_subnet_group"
parameter_group_name = "default.mysql5.6"
}
resource "aws_instance" "ec2_example" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t2.micro"
tags {
Name = "HelloWorld"
}
depends_on = [
"aws_db_instance.rds_example",
]
}
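For the environment-variable part of the question, note that simply referencing a resource's attribute creates an implicit dependency, so an explicit depends_on is often unnecessary. A minimal sketch, assuming Docker is already installed on the AMI and using a hypothetical my-app image (neither comes from the question):
resource "aws_instance" "ec2_example" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  # Referencing the RDS endpoint below creates an implicit dependency, so
  # Terraform creates the database first; the value is then handed to the
  # container as an environment variable.
  user_data = <<-EOF
    #!/bin/bash
    docker run -d \
      -e DB_ENDPOINT="${aws_db_instance.rds_example.endpoint}" \
      my-app:latest
  EOF
}
The same pattern works for the ElastiCache endpoint attributes.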