I'm currently trying to expose two services from my ECS cluster through a load balancer, with an HTTP listener redirecting to an HTTPS listener (which forwards to the target groups of my services, depending on the host header).
I registered the certificates with AWS Certificate Manager, but unfortunately I get timeouts when I try to hit the URLs in my browser.
I am deploying these resources with Terraform:
resource "aws_lb_target_group" "nginx-tg" {
  name        = "nginx-tg"
  target_type = "ip"
  protocol    = "HTTP"
  port        = "80"
  vpc_id      = var.vpc-id
}

resource "aws_lb_target_group" "apache2-tg" {
  name        = "apache2-tg"
  target_type = "ip"
  protocol    = "HTTP"
  port        = "80"
  vpc_id      = var.vpc-id
}

resource "aws_lb_listener_certificate" "nginx-certificate" {
  certificate_arn = var.nginx-certificate-arn # subdomain1.domain.com
  listener_arn    = aws_lb_listener.test-cluster-lb-https-listener.arn
}

resource "aws_lb_listener_certificate" "apache2-certificate" {
  certificate_arn = var.apache2-certificate-arn # subdomain2.domain.com
  listener_arn    = aws_lb_listener.test-cluster-lb-https-listener.arn
}

resource "aws_lb_listener" "test-cluster-lb-https-listener" {
  load_balancer_arn = aws_lb.test-cluster-elb.arn
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = var.listener-certificate-arn

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
      host        = "home.domain.com"
      path        = "/"
    }
  }
}

resource "aws_lb_listener_rule" "nginx-listener-rule" {
  listener_arn = aws_lb_listener.test-cluster-lb-https-listener.arn

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.nginx-tg.arn
  }

  condition {
    host_header {
      values = [var.nginx-url]
    }
  }

  priority = 1
}

resource "aws_lb_listener_rule" "apache2-listener-rule" {
  listener_arn = aws_lb_listener.test-cluster-lb-https-listener.arn

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.apache2-tg.arn
  }

  condition {
    host_header {
      values = [var.apache2-url]
    }
  }

  priority = 2
}
And here are the ECS service declarations:
resource "aws_ecs_service" "nginx" {
  name            = "nginx"
  launch_type     = "FARGATE"
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = aws_ecs_task_definition.nginx_task.arn
  desired_count   = 2

  network_configuration {
    subnets          = [var.private-sn-az-1, var.private-sn-az-2]
    security_groups  = [aws_security_group.service-sg.id]
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.nginx-tg.arn
    container_name   = "nginx"
    container_port   = 80
  }
}

resource "aws_ecs_service" "apache2" {
  name            = "apache2"
  launch_type     = "FARGATE"
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = aws_ecs_task_definition.apache2_task.arn
  desired_count   = 2

  network_configuration {
    subnets          = [var.private-sn-az-1, var.private-sn-az-2]
    security_groups  = [aws_security_group.service-sg.id]
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.apache2-tg.arn
    container_name   = "apache2"
    container_port   = 80
  }
}
The security group of the services allows all requests from my load balancer on port 80!
It's my first post, so I may have been unclear on some points; if you need the rest of the resource declarations, feel free to ask me :)
Thank you in advance!
What I tried: the target group was exposing port 8080, so I switched it to 80.
What I expected: I thought my service was only listening on port 80, so changing the port would make the timeout go away.
What actually happened: I still get a timeout when trying to access the URLs (subdomain1.domain.com & subdomain2.domain.com)...
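For reference, the HTTP-to-HTTPS redirect listener mentioned at the start of the question (but not shown in the snippets above) would look roughly like this sketch; the resource name and the reference to aws_lb.test-cluster-elb are assumptions based on the existing config:

```hcl
# Hypothetical port-80 listener that redirects every request to HTTPS.
# The resource name is made up; the load balancer reference is taken
# from the HTTPS listener above.
resource "aws_lb_listener" "test-cluster-lb-http-listener" {
  load_balancer_arn = aws_lb.test-cluster-elb.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}
```

Note that for either listener to be reachable, the load balancer's own security group must allow inbound 80 and 443 from the internet; if it doesn't, the browser times out before a listener is ever hit.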
Related
I have a terraform-defined ECS cluster with fargate task, service, target group and lb.
I'm trying to send requests to the fargate cluster but it's timing out. I've tried to add an attachment as follows:
resource "aws_lb_target_group_attachment" "websocket-server" {
  target_group_arn = aws_lb_target_group.websocket-server.arn
  target_id        = aws_ecs_cluster.websocket-server-cluster.id
  port             = 443
}
But unfortunately this throws:
Error registering targets with target group: ValidationError: The IP address 'arn:aws:ecs:eu-west-2:xxxxxx:cluster/websocket-server-cluster' is not a valid IPv4 address
My LB/target group/ECS definitions:
resource "aws_ecs_cluster" "websocket-server-cluster" {
  name = "websocket-server-cluster"
}

resource "aws_ecs_service" "websocket-server-service" {
  name                               = "websocket-server-service"
  cluster                            = aws_ecs_cluster.websocket-server-cluster.arn
  deployment_maximum_percent         = 200
  deployment_minimum_healthy_percent = 0
  launch_type                        = "FARGATE"
  task_definition                    = aws_ecs_task_definition.websocket-server-task.arn

  load_balancer {
    target_group_arn = aws_lb_target_group.websocket-server.arn
    container_name   = "websocket-server"
    container_port   = 443
  }

  network_configuration {
    assign_public_ip = true
    security_groups  = [aws_security_group.public.id, aws_security_group.private.id]
    subnets          = [aws_subnet.public.id, aws_subnet.private.id]
  }
}

module "websocket-server" {
  source           = "git::https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=tags/0.58.1"
  container_name   = "websocket-server"
  container_image  = "${aws_ecr_repository.websocket-server.repository_url}:latest"
  container_cpu    = "256"
  container_memory = "512"

  port_mappings = [
    {
      containerPort = 443
      hostPort      = 443
      protocol      = "tcp"
    }
  ]

  environment = []
}

resource "aws_ecs_task_definition" "websocket-server-task" {
  family                   = "websocket-server"
  requires_compatibilities = ["FARGATE"]
  memory                   = "512"
  cpu                      = "256"
  task_role_arn            = aws_iam_role.ecs-container-role.arn
  execution_role_arn       = aws_iam_role.ecs-container-role.arn
  network_mode             = "awsvpc"
  container_definitions    = module.websocket-server.json_map_encoded_list

  lifecycle {
    ignore_changes = [
      tags, tags_all
    ]
  }
}

resource "aws_lb" "main" {
  name                       = "main"
  internal                   = false
  load_balancer_type         = "application"
  security_groups            = [aws_security_group.public.id, aws_security_group.private.id]
  enable_deletion_protection = false
  subnets                    = [aws_subnet.public.id, aws_subnet.public-backup.id]
}

resource "aws_lb_target_group" "websocket-server" {
  name        = "websocket-server"
  port        = 443
  protocol    = "HTTPS"
  vpc_id      = aws_vpc.main.id
  target_type = "ip"

  health_check {
    enabled             = true
    healthy_threshold   = 3
    unhealthy_threshold = 3
    timeout             = 10
    protocol            = "HTTPS"
    path                = "/apis/websocket-server/health"
    interval            = "100"
    matcher             = "200"
  }

  depends_on = [
    aws_lb.main
  ]
}

resource "aws_lb_listener" "websocket-server" {
  load_balancer_arn = aws_lb.main.arn
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = aws_acm_certificate.main.arn

  default_action {
    target_group_arn = aws_lb_target_group.websocket-server.arn
    type             = "forward"
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.main.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

resource "aws_lb_listener_certificate" "main" {
  listener_arn    = aws_lb_listener.websocket-server.arn
  certificate_arn = aws_acm_certificate.main.arn
}
The attachment definition is not necessary at all. Keep in mind, containers for Fargate services do not use network interfaces of the underlying EC2 instances in the cluster (you don't see the instances at all for that matter). They use AWS VPC networking mode only -- independent network interfaces in the VPC are attached to the containers.
The target group attachment happens automatically and is configured through the load_balancer block in the aws_ecs_service resource. As ECS starts the containers, they get registered with the target group automatically. There is no static attachment to define in the case of Fargate ECS services.
Just remove the target group attachment resource from your Terraform file altogether.
Check out this resource for a decent reference implementation with Terraform.
As a completely separate side note, you probably also do not want assign_public_ip = true in your service configuration. That would allow access to your containers directly without going through the load balancer which is almost never what you want when you're using a load balancer.
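Putting both points together, the service's networking would end up looking something like this sketch (attribute values carried over from the question's config; whether the private subnet and security group suffice depends on the rest of the VPC setup):

```hcl
resource "aws_ecs_service" "websocket-server-service" {
  # ... name, cluster, launch_type, task_definition as before ...

  # Registration with the target group happens here automatically as
  # tasks start; no separate aws_lb_target_group_attachment is needed
  # for Fargate services.
  load_balancer {
    target_group_arn = aws_lb_target_group.websocket-server.arn
    container_name   = "websocket-server"
    container_port   = 443
  }

  network_configuration {
    # Tasks are only reachable through the load balancer.
    assign_public_ip = false
    security_groups  = [aws_security_group.private.id]
    subnets          = [aws_subnet.private.id]
  }
}
```

One caveat: with assign_public_ip = false, the tasks need a route to pull their container image (a NAT gateway or VPC endpoints for ECR), otherwise they will fail to start.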
I am running Tableau Server 2021-1-2 on an EC2 instance.
I can connect using the default public IP on port 80, and also on port 8050 for the Tableau TSM UI, and the same using the hostname I defined. The only issue I have is that, despite following several guidelines, I can't connect using HTTPS.
I set up the ports on the security group, the load balancer, and the certificate. I waited for hours, as I read that the SSL certificate could take more than half an hour, and nothing.
I can connect using:
http://my_domain.domain
But not:
https://my_domain.domain
I receive the following error in the browser: Can't connect to the server https://my_domain.domain.
I run curl -i https://my_domain.domain
It returns:
curl: (7) Failed to connect to my_domain.domain port 443: Connection refused
The security group of my instance has the following ports (you can see it in the Terraform config too):
Here is my Terraform setup.
I did the EC2 setup with:
resource "aws_instance" "tableau" {
  ami                         = var.ami
  instance_type               = var.instance_type
  associate_public_ip_address = true
  key_name                    = var.key_name
  subnet_id                   = compact(split(",", var.public_subnets))[0]
  vpc_security_group_ids      = [aws_security_group.tableau-sg.id]

  root_block_device {
    volume_size = var.volume_size
  }

  tags = {
    Name = var.namespace
  }
}
I created the load balancer setup using:
resource "aws_lb" "tableau-lb" {
  name                             = "${var.namespace}-alb"
  load_balancer_type               = "application"
  internal                         = false
  subnets                          = compact(split(",", var.public_subnets))
  security_groups                  = [aws_security_group.tableau-sg.id]
  ip_address_type                  = "ipv4"
  enable_cross_zone_load_balancing = true
  idle_timeout                     = 300

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_alb_listener" "https" {
  depends_on        = [aws_alb_target_group.target-group]
  load_balancer_arn = aws_lb.tableau-lb.arn
  protocol          = "HTTPS"
  port              = "443"
  ssl_policy        = "my_ssl_policy"
  certificate_arn   = "arn:xxxx"

  default_action {
    target_group_arn = aws_alb_target_group.target-group.arn
    type             = "forward"
  }

  lifecycle {
    ignore_changes = [
      default_action.0.target_group_arn,
    ]
  }
}

resource "aws_alb_target_group" "target-group" {
  name        = "${var.namespace}-group"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "instance"

  health_check {
    healthy_threshold   = var.health_check_healthy_threshold
    unhealthy_threshold = var.health_check_unhealthy_threshold
    timeout             = var.health_check_timeout
    interval            = var.health_check_interval
    path                = var.path
  }

  tags = {
    Name = var.namespace
  }

  lifecycle {
    create_before_destroy = false
  }

  depends_on = [aws_lb.tableau-lb]
}

resource "aws_lb_target_group_attachment" "tableau-attachment" {
  target_group_arn = aws_alb_target_group.target-group.arn
  target_id        = aws_instance.tableau.id
  port             = 80
}
The security group:
resource "aws_security_group" "tableau-sg" {
  name_prefix = "${var.namespace}-sg"
  vpc_id      = var.vpc_id

  tags = {
    Name = var.namespace
  }

  # SSH access from anywhere
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # TSM UI access from anywhere
  ingress {
    from_port   = 8850
    to_port     = 8850
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP access from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # 443 secure access from anywhere
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}
I also set up a hostname using:
resource "aws_route53_record" "tableau-record-dns" {
  zone_id = var.route_53_zone_id
  name    = "example.hostname"
  type    = "A"
  ttl     = "300"
  records = [aws_instance.tableau.public_ip]
}

resource "aws_route53_record" "tableau-record-dns-https" {
  zone_id = var.route_53_zone_id
  name    = "asdf.example.hostname"
  type    = "CNAME"
  ttl     = "300"
  records = ["asdf.acm-validations.aws."]
}
Finally solved the issue: it was related to the A record. I was assigning the instance's IP there, and with a static IP the traffic never goes through the load balancer. I pointed the record at the ALB instead and it worked fine.
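A sketch of what that change might look like in the Terraform above (resource and load balancer names are taken from the question's config; an alias record is the standard way to point Route 53 at an ALB):

```hcl
# Alias A record pointing the hostname at the ALB rather than the
# instance's public IP, so requests reach the HTTPS listener on 443.
resource "aws_route53_record" "tableau-record-dns" {
  zone_id = var.route_53_zone_id
  name    = "example.hostname"
  type    = "A"

  alias {
    name                   = aws_lb.tableau-lb.dns_name
    zone_id                = aws_lb.tableau-lb.zone_id
    evaluate_target_health = true
  }
}
```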
I am running a spring cloud config server on aws which is just a docker container running a spring boot app. It is reading properties from a git repo. Our client applications read config from the server on startup and intermittently at runtime. About a third of the time, the client apps will timeout when pulling config at startup, causing the app to crash. At runtime, the apps seem to succeed 4 out of 5 times, though they will just use existing config if a request fails.
I am using an EC2 instance behind an ALB which handles SSL termination. I was originally using a t3.micro, but upgraded to an m5.large, guessing that the t3 class may not support continuous availability.
The ALB required 2 subnets, so I created a second one with nothing in it initially. I am unsure if the ALB will attempt to route to the second subnet at some point, which could be causing the failures. The target group is using a health check which returns correctly, but I don't know enough about ALBs to rule out round-robining to an empty subnet. I attempted to create a second EC2 instance to parallel my first config server in the second subnet; however, I was unable to SSH into the second instance even though it's using the same security group and config as the first. I'm not sure why that failed, but I'm guessing there is something else wrong with my setup.
All infrastructure was deployed with terraform, which I have included below.
resources.tf
provider "aws" {
  region  = "us-east-2"
  version = ">= 2.38.0"
}

data "aws_ami" "amzn_linux" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-2.0.*-x86_64-gp2"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["137112412989"]
}

resource "aws_vpc" "config-vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
}

resource "aws_security_group" "config_sg" {
  name        = "config-sg"
  description = "http, https, and ssh"
  vpc_id      = aws_vpc.config-vpc.id

  ingress {
    from_port   = 9000
    to_port     = 9000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_subnet" "subnet-alpha" {
  cidr_block        = cidrsubnet(aws_vpc.config-vpc.cidr_block, 3, 1)
  vpc_id            = aws_vpc.config-vpc.id
  availability_zone = "us-east-2a"
}

resource "aws_subnet" "subnet-beta" {
  cidr_block        = cidrsubnet(aws_vpc.config-vpc.cidr_block, 3, 2)
  vpc_id            = aws_vpc.config-vpc.id
  availability_zone = "us-east-2b"
}

resource "aws_internet_gateway" "config-vpc-ig" {
  vpc_id = aws_vpc.config-vpc.id
}

resource "aws_route_table" "config-vpc-rt" {
  vpc_id = aws_vpc.config-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.config-vpc-ig.id
  }
}

resource "aws_route_table_association" "subnet-association-alpha" {
  subnet_id      = aws_subnet.subnet-alpha.id
  route_table_id = aws_route_table.config-vpc-rt.id
}

resource "aws_route_table_association" "subnet-association-beta" {
  subnet_id      = aws_subnet.subnet-beta.id
  route_table_id = aws_route_table.config-vpc-rt.id
}

resource "aws_alb" "alb" {
  name            = "config-alb"
  subnets         = [aws_subnet.subnet-alpha.id, aws_subnet.subnet-beta.id]
  security_groups = [aws_security_group.config_sg.id]
}

resource "aws_alb_target_group" "alb_target_group" {
  name     = "config-tg"
  port     = 9000
  protocol = "HTTP"
  vpc_id   = aws_vpc.config-vpc.id

  health_check {
    enabled  = true
    path     = "/actuator/health"
    port     = 9000
    protocol = "HTTP"
  }
}

resource "aws_instance" "config_server_alpha" {
  ami                         = data.aws_ami.amzn_linux.id
  instance_type               = "m5.large"
  vpc_security_group_ids      = [aws_security_group.config_sg.id]
  key_name                    = "config-ssh"
  subnet_id                   = aws_subnet.subnet-alpha.id
  associate_public_ip_address = true
}

resource "aws_instance" "config_server_beta" {
  ami                         = data.aws_ami.amzn_linux.id
  instance_type               = "m5.large"
  vpc_security_group_ids      = [aws_security_group.config_sg.id]
  key_name                    = "config-ssh"
  subnet_id                   = aws_subnet.subnet-beta.id
  associate_public_ip_address = true
}

resource "aws_alb_target_group_attachment" "config-target-alpha" {
  target_group_arn = aws_alb_target_group.alb_target_group.arn
  target_id        = aws_instance.config_server_alpha.id
  port             = 9000
}

resource "aws_alb_target_group_attachment" "config-target-beta" {
  target_group_arn = aws_alb_target_group.alb_target_group.arn
  target_id        = aws_instance.config_server_beta.id
  port             = 9000
}

resource "aws_alb_listener" "alb_listener_80" {
  load_balancer_arn = aws_alb.alb.arn
  port              = 80

  default_action {
    type = "redirect"

    redirect {
      port        = 443
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

resource "aws_alb_listener" "alb_listener_8080" {
  load_balancer_arn = aws_alb.alb.arn
  port              = 8080

  default_action {
    type = "redirect"

    redirect {
      port        = 443
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

resource "aws_alb_listener" "alb_listener_https" {
  load_balancer_arn = aws_alb.alb.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = "arn:..."

  default_action {
    target_group_arn = aws_alb_target_group.alb_target_group.arn
    type             = "forward"
  }
}
config server
@SpringBootApplication
@EnableConfigServer
public class ConfigserverApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigserverApplication.class, args);
    }
}
application.yml
spring:
  profiles:
    active: local
---
spring:
  profiles: local, default, cloud
  cloud:
    config:
      server:
        git:
          uri: ...
          searchPaths: '{application}/{profile}'
          username: ...
          password: ...
security:
  user:
    name: admin
    password: ...
server:
  port: 9000
management:
  endpoint:
    health:
      show-details: always
info:
  git:
    mode: FULL
bootstrap.yml
spring:
  application:
    name: config-server
encrypt:
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
I am facing baffling behavior from an AWS load balancer created by Terraform. After creating the target groups, which report healthy, I create the load balancer as follows:
resource "aws_alb" "jira_elb" {
  name                             = "${data.vault_generic_secret.atlassian_datacenter_aws_jira.data["jira_elb_name"]}"
  internal                         = "${local.elb_internal}"
  load_balancer_type               = "application"
  idle_timeout                     = 600
  security_groups                  = ["${aws_security_group.jira_elb_sg.id}"]
  subnets                          = "${local.elb_internal == "true" ? local.private_subnet_ids : local.public_subnet_ids}" // Set the subnets based on a local variable
  enable_deletion_protection       = false # CHANGE!!
  enable_cross_zone_load_balancing = true

  access_logs {
    bucket  = "${data.vault_generic_secret.atlassian_datacenter_aws_jira.data["jira_elb_s3_logs_bucket"]}"
    prefix  = "jira-elb"
    enabled = true
    # interval = 20 // The publishing interval in minutes. Default: 60 minutes.
  }
}
And the https listener:
resource "aws_alb_listener" "jira_https_elb_listener" {
  load_balancer_arn = "${aws_alb.jira_elb.arn}"
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = "${data.aws_acm_certificate.jira_ssl_certificate.arn}"

  default_action {
    target_group_arn = "${aws_lb_target_group.jira_target_group.arn}"
    type             = "forward"
  }
}
After the load balancer is created and I try to access it through the browser, I get connection refused. The strange part is that if I delete the listener by hand and recreate the same listener with the same certificate, port, and forwarding, the DNS works through the browser. Any idea what may be happening?
AWS Load Balancer with an https listener created with terraform
resource "aws_lb" "internal_alb" {
  name                       = "INTERNAL-ALB"
  internal                   = true
  load_balancer_type         = "application"
  security_groups            = ["${aws_security_group.ecs_sg.id}"]
  subnets                    = ["subxxxx", "subnet-dcxxxx", "subnet-fxxxx"]
  enable_deletion_protection = false

  access_logs {
    bucket  = "bucket_name"
    enabled = true
  }

  tags = {
    Name = "INTERNAL-ALB"
  }
}

resource "aws_alb_target_group" "web_alb_target_group" {
  name     = "WEB-TG"
  port     = "80"
  protocol = "HTTP"
  vpc_id   = "${aws_lb.internal_alb.vpc_id}"

  health_check {
    healthy_threshold   = "5"
    unhealthy_threshold = "2"
    interval            = "30"
    matcher             = "200"
    path                = "/heartbeat"
    port                = "traffic-port"
    protocol            = "HTTP"
    timeout             = "5"
  }

  tags = {
    Name = "WEB-TG"
  }
}

resource "aws_lb_listener" "internal_alb_http" {
  load_balancer_arn = "${aws_lb.internal_alb.id}"
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = "arn:aws:elasticloadbalancing:us-east-1:48xxxxxxx:targetgroup/WEB-TG/4ad42b3dadxxxxxx66"
  }
}

resource "aws_lb_listener" "internal_alb_https" {
  load_balancer_arn = "${aws_lb.internal_alb.id}"
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS-1-2-2017-01"
  certificate_arn   = "arn:aws:iam::48xxxxxxx:server-certificate/certifcate"

  default_action {
    type             = "forward"
    target_group_arn = "arn:aws:elasticloadbalancing:us-east-1:48xxxxxxx:targetgroup/WEB-TG/4ad42b3dadxxxxxx66"
  }
}

resource "aws_route53_record" "node" {
  zone_id = "ZSxxxxxxx"
  name    = "www.example.com"
  type    = "A"

  alias {
    name                   = "${aws_lb.internal_alb.dns_name}"
    zone_id                = "${aws_lb.internal_alb.zone_id}"
    evaluate_target_health = true
  }
}
I am currently working through the beta book "Terraform Up & Running, 2nd Edition". In chapter 2, I created an auto scaling group and a load balancer in AWS.
Now I made my backend server HTTP ports configurable. By default they listen on port 8080.
variable "server_port" {
  …
  default = 8080
}

resource "aws_launch_configuration" "example" {
  …
  user_data = <<-EOF
              #!/bin/bash
              echo "Hello, World" > index.html
              nohup busybox httpd -f -p ${var.server_port} &
              EOF
  …
}

resource "aws_security_group" "instance" {
  …
  ingress {
    from_port = var.server_port
    to_port   = var.server_port
    …
  }
}
The same port also needs to be configured in the application load balancer's target group.
resource "aws_lb_target_group" "asg" {
  …
  port = var.server_port
  …
}
When my infrastructure is already deployed, for example with the configuration for the port set to 8080, and then I change the variable to 80 by running terraform apply --var server_port=80, the following error is reported:
> Error: Error deleting Target Group: ResourceInUse: Target group
> 'arn:aws:elasticloadbalancing:eu-central-1:…:targetgroup/terraform-asg-example/…'
> is currently in use by a listener or a rule status code: 400,
How can I refine my Terraform infrastructure definition to make this change possible? I suppose it might be related to a lifecycle option somewhere, but I didn't manage to figure it out yet.
For your reference I attach my whole infrastructure definition below:
provider "aws" {
  region = "eu-central-1"
}

output "alb_location" {
  value       = "http://${aws_lb.example.dns_name}"
  description = "The location of the load balancer"
}

variable "server_port" {
  description = "The port the server will use for HTTP requests"
  type        = number
  default     = 8080
}

resource "aws_lb_listener_rule" "asg" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 100

  condition {
    field  = "path-pattern"
    values = ["*"]
  }

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.asg.arn
  }
}

resource "aws_lb_target_group" "asg" {
  name     = "terraform-asg-example"
  port     = var.server_port
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id

  health_check {
    path                = "/"
    protocol            = "HTTP"
    matcher             = "200"
    interval            = 15
    timeout             = 3
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.example.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "fixed-response"

    fixed_response {
      content_type = "text/plain"
      message_body = "404: page not found"
      status_code  = 404
    }
  }
}

resource "aws_lb" "example" {
  name               = "terraform-asg-example"
  load_balancer_type = "application"
  subnets            = data.aws_subnet_ids.default.ids
  security_groups    = [aws_security_group.alb.id]
}

resource "aws_security_group" "alb" {
  name = "terraform-example-alb"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_autoscaling_group" "example" {
  launch_configuration = aws_launch_configuration.example.name
  vpc_zone_identifier  = data.aws_subnet_ids.default.ids
  target_group_arns    = [aws_lb_target_group.asg.arn]
  health_check_type    = "ELB"
  min_size             = 2
  max_size             = 10

  tag {
    key                 = "Name"
    value               = "terraform-asg-example"
    propagate_at_launch = true
  }
}

resource "aws_launch_configuration" "example" {
  image_id        = "ami-0085d4f8878cddc81"
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.instance.id]

  user_data = <<-EOF
              #!/bin/bash
              echo "Hello, World" > index.html
              nohup busybox httpd -f -p ${var.server_port} &
              EOF

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group" "instance" {
  name = "terraform-example-instance"

  ingress {
    from_port   = var.server_port
    to_port     = var.server_port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

data "aws_subnet_ids" "default" {
  vpc_id = data.aws_vpc.default.id
}

data "aws_vpc" "default" {
  default = true
}
From the issue link in the comment on Cannot rename ALB Target Group if Listener present:
Add a lifecycle rule to your target group so it becomes:
resource "aws_lb_target_group" "asg" {
  name     = "terraform-asg-example"
  port     = var.server_port
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id

  health_check {
    path                = "/"
    protocol            = "HTTP"
    matcher             = "200"
    interval            = 15
    timeout             = 3
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }

  lifecycle {
    create_before_destroy = true
  }
}
However, you will need to choose a method for changing the name of your target group as well; there is further discussion and there are suggestions on how to do this.
One possible solution is to simply use a GUID but ignore changes to the name:
resource "aws_lb_target_group" "asg" {
  name     = "terraform-asg-example-${substr(uuid(), 0, 3)}"
  port     = var.server_port
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id

  health_check {
    path                = "/"
    protocol            = "HTTP"
    matcher             = "200"
    interval            = 15
    timeout             = 3
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }

  lifecycle {
    create_before_destroy = true
    ignore_changes        = [name]
  }
}
Slightly simpler than @FGreg's solution: add a lifecycle policy and switch from name to name_prefix, which will prevent naming collisions.
resource "aws_lb_target_group" "asg" {
  name_prefix = "terraform-asg-example"
  port        = var.server_port
  protocol    = "HTTP"
  vpc_id      = data.aws_vpc.default.id

  lifecycle {
    create_before_destroy = true
  }

  health_check {
    path                = "/"
    protocol            = "HTTP"
    matcher             = "200"
    interval            = 15
    timeout             = 3
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}
No need for uuid or ignore_changes settings.
Change name = "terraform-asg-example" to name_prefix = "asg-" (target group name_prefix values are limited to 6 characters, so a long prefix like the one above will be rejected).
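Putting the two suggestions together, a version of the target group that survives port changes might look like this sketch (the short name_prefix reflects the 6-character limit noted above):

```hcl
resource "aws_lb_target_group" "asg" {
  # Short prefix: target group name_prefix is capped at 6 characters.
  name_prefix = "asg-"
  port        = var.server_port
  protocol    = "HTTP"
  vpc_id      = data.aws_vpc.default.id

  # Create the replacement group before destroying the old one, so the
  # listener rule is never left pointing at a deleted target group.
  lifecycle {
    create_before_destroy = true
  }

  health_check {
    path                = "/"
    protocol            = "HTTP"
    matcher             = "200"
    interval            = 15
    timeout             = 3
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}
```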