Terraform aws-lb-target-group problems - amazon-web-services

My situation is as follows:
I’m creating an AWS ECS Fargate setup (an NGINX container handling SFTP traffic) with Terraform, with a network load balancer in front of it. I’ve got most parts set up and the current configuration works. But now I want to add more target groups so I can expose additional ports on the container. My variable definition is as follows:
variable "sftp_ports" {
  type = map(object({
    port = number
  }))
  default = {
    test1 = {
      port = 50003
    }
    test2 = {
      port = 50004
    }
  }
}
and the actual deployment is as follows:
resource "aws_alb_target_group" "default-target-group" {
  name        = local.name
  port        = var.sftp_test_port
  protocol    = "TCP"
  target_type = "ip"
  vpc_id      = data.aws_vpc.default.id

  depends_on = [
    aws_lb.proxypoc
  ]
}
resource "aws_alb_target_group" "test" {
  for_each    = var.sftp_ports
  name        = "sftp-target-group-${each.key}"
  port        = each.value.port
  protocol    = "TCP"
  target_type = "ip"
  vpc_id      = data.aws_vpc.default.id

  depends_on = [
    aws_lb.proxypoc
  ]
}
resource "aws_alb_listener" "ecs-alb-https-listenertest" {
  for_each          = var.sftp_ports
  load_balancer_arn = aws_lb.proxypoc.id
  port              = each.value.port
  protocol          = "TCP"

  default_action {
    type = "forward"
    # Forward each listener to its matching per-port target group,
    # not the shared default one.
    target_group_arn = aws_alb_target_group.test[each.key].arn
  }
}
This deploys the listeners and the target groups just fine; the only problem is how to configure the registered targets. The aws_ecs_service resource only seems to allow one target group ARN, so I have no clue how to attach the additional target groups in order to reach my goal. I've been wrapping my head around this problem and scoured the internet, but so far... nothing. So is it possible to configure the ECS service with multiple target group ARNs, or am I supposed to configure a single target group with multiple ports? (As far as I know, that is not supported out of the box; I checked the docs as well. But it is possible to add multiple registered targets in the GUI, so I guess it is a possibility.)
I’d like to hear from you guys,
Thanks!
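For reference, the aws_ecs_service resource does accept multiple load_balancer blocks (ECS allows up to five target groups per service), so the for_each target groups above can be attached with a dynamic block. A minimal sketch, assuming the variable and target groups from the question; the service name, cluster, task definition, and container name are illustrative placeholders:

```hcl
resource "aws_ecs_service" "proxypoc" {
  name            = "sftp-proxy"                     # placeholder
  cluster         = aws_ecs_cluster.main.id          # placeholder
  task_definition = aws_ecs_task_definition.main.arn # placeholder
  launch_type     = "FARGATE"
  desired_count   = 1

  # One load_balancer block per entry in var.sftp_ports, matching the
  # aws_alb_target_group.test resources created with for_each.
  dynamic "load_balancer" {
    for_each = var.sftp_ports
    content {
      target_group_arn = aws_alb_target_group.test[load_balancer.key].arn
      container_name   = "nginx" # must match the container name in the task definition
      container_port   = load_balancer.value.port
    }
  }
}
```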

Related

Update the existing listener in ALB using terraform

I am trying to fetch an already created ALB using a Terraform data source and then update its listener for port 443, but when I apply, it says the listener is already created. The problem is that I am creating a new listener and can't really figure out how to update or overwrite the previous one (the ALB was not created with Terraform). Any help would be appreciated.
data "aws_lb" "alb" {
  arn  = var.alb.lb_arn
  name = var.alb.lb_name
}

data "aws_lb_target_group" "tg" {
  arn  = var.alb.lb_tg_arn
  name = var.alb.lb_tg_name
}

module "alb" {
  source            = "./modules/alb"
  load_balancer_arn = data.aws_lb.alb.arn
  port              = var.alb.port
  protocol          = var.alb.protocol
  certificate_arn   = module.route53-acm.acm_output.arn
  default_action    = var.alb.default_action
}
main.tf
resource "aws_lb_listener" "front_end" {
  load_balancer_arn = var.load_balancer_arn
  port              = var.port
  protocol          = var.protocol
  certificate_arn   = var.certificate_arn

  default_action {
    type = var.default_action.type

    fixed_response {
      content_type = var.default_action.fixed_response.content_type
      message_body = var.default_action.fixed_response.message_body
      status_code  = var.default_action.fixed_response.status_code
    }
  }
}
can't really figure out how to update the listener or overwrite the previous one (ALB is not created using the terraform previously).
You can't. This is not how Terraform works. Your ALB must be managed by Terraform for Terraform to be able to modify it. You can import it into Terraform if you want.
The only other way would be through a local-exec provisioner, where you would use the AWS CLI to modify the existing ALB.
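If you do want Terraform to take over the existing listener, it can be imported into state so that aws_lb_listener.front_end manages it instead of trying to create a duplicate. On Terraform 1.5+ the import can be declared in configuration; the ARN below is a placeholder, not the real one:

```hcl
# Declarative import (Terraform 1.5+): adopt the existing listener into
# state under aws_lb_listener.front_end on the next terraform apply.
import {
  to = aws_lb_listener.front_end
  id = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/0123456789abcdef/0123456789abcdef" # placeholder ARN
}
```

On older Terraform versions the equivalent is the `terraform import` CLI command with the same resource address and listener ARN.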

AWS - ECS EC2 cluster running container with ALB and more than 5 ports forwarded - in Terraform

I am running an ECS cluster with about 20 containers. I have a big monolith application running in one container which needs to listen on 10 ports.
However, AWS allows a maximum of 5 load balancer target group attachments per ECS service.
Any ideas how to overcome this (if possible)? Here's what I've tried:
Defining 10+ target groups with 1 listener each. Doesn't work, since AWS allows a maximum of 5 load_balancer definitions in aws_ecs_service - see the 2nd bullet under "Service load balancing considerations" here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
Defining 10+ listeners with 1 target group - however, all listeners then forward to a single port on the container...
Tried omitting port in the load_balancer definition in aws_ecs_service, but AWS complains about a missing argument.
Tried omitting port in aws_lb_target_group, but AWS complains that because the target type is "ip", port is required...
Here's my current code:
resource "aws_ecs_service" "service_app" {
  name                 = "my_service_name"
  cluster              = var.ECS_CLUSTER_ID
  task_definition      = aws_ecs_task_definition.task_my_app.arn
  desired_count        = 1
  force_new_deployment = true
  ...

  load_balancer { # Note: I've stripped the for_each to simplify reading
    target_group_arn = var.tga
    container_name   = var.n
    container_port   = var.p
  }
}

resource "aws_lb_target_group" "tg_mytg" {
  name        = "Name"
  protocol    = "HTTP"
  port        = 3000
  target_type = "ip"
  vpc_id      = aws_vpc.my_vpc.id
}

resource "aws_lb_listener" "ls_3303" {
  load_balancer_arn = aws_lb.my_lb.id
  port              = "3303"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.tg_mytg.arn
  }
}
...
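The for_each the comment says was stripped would presumably expand into something like the dynamic block below (placed inside the aws_ecs_service resource; the variable shape is an assumption). It is exactly this expansion that hits the cap once it produces more than five load_balancer blocks:

```hcl
# Hypothetical reconstruction of the stripped for_each: one load_balancer
# block per target group. ECS rejects the service once this expands to
# more than 5 blocks.
dynamic "load_balancer" {
  for_each = var.target_groups # assumed map of { tga, n, p } objects
  content {
    target_group_arn = load_balancer.value.tga
    container_name   = load_balancer.value.n
    container_port   = load_balancer.value.p
  }
}
```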

GCP SSL Proxy LoadBalancer via Terraform

We are creating infrastructure on GCP for our application, which uses an SSL Proxy Load Balancer. We use Terraform for our deployments and are struggling to create the SSL Proxy Load Balancer via Terraform.
Could anyone point me to sample code, or to resources on creating this load balancer?
You can try with the following example:
resource "google_compute_target_ssl_proxy" "default" {
  name             = "test-proxy"
  backend_service  = google_compute_backend_service.default.id
  ssl_certificates = [google_compute_ssl_certificate.default.id]
}

resource "google_compute_ssl_certificate" "default" {
  name        = "default-cert"
  private_key = file("path/to/private.key")
  certificate = file("path/to/certificate.crt")
}

resource "google_compute_backend_service" "default" {
  name          = "backend-service"
  protocol      = "SSL"
  health_checks = [google_compute_health_check.default.id]
}

resource "google_compute_health_check" "default" {
  name               = "health-check"
  check_interval_sec = 1
  timeout_sec        = 1

  tcp_health_check {
    port = "443"
  }
}
Take into consideration that the health check points at port 443/TCP; if you want a different port, change it there.
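Note that the example above stops at the target proxy; for the load balancer to actually receive traffic, the SSL proxy still needs a forwarding rule in front of it. A minimal sketch (the resource name is illustrative):

```hcl
# Global forwarding rule sending TCP/443 traffic to the SSL proxy above.
resource "google_compute_global_forwarding_rule" "default" {
  name       = "ssl-proxy-forwarding-rule" # illustrative name
  target     = google_compute_target_ssl_proxy.default.id
  port_range = "443"
}
```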

AWS Registering Multiple Target Groups with a ECS service

I need to create multiple target groups for an ECS service. Does anyone have an example of how I can do this via AWS CLI or API?
As it is recent functionality, I have not found many examples.
This is a relatively new ECS feature. I have not had the opportunity to test it in a project, but just from reading the documentation it looks pretty straightforward: just add multiple load balancer (target group) definitions inside the service. For example, if you're using Terraform, add multiple load_balancer blocks:
resource "aws_ecs_service" "my_service" {
  name            = "my_service"
  cluster         = "${aws_ecs_cluster.foo.id}"
  task_definition = "${aws_ecs_task_definition.my_task.arn}"
  ... # other arguments

  ordered_placement_strategy {
    ...
  }

  load_balancer {
    target_group_arn = "${aws_lb_target_group.one.arn}"
    container_name   = "my_container_name"
    container_port   = 1234
  }

  load_balancer {
    target_group_arn = "${aws_lb_target_group.two.arn}"
    container_name   = "my_container_name"
    container_port   = 4321
  }
}

ALB Health checks Targets Unhealthy

I am trying to provision an ECS cluster using Terraform along with an ALB. The targets come up as unhealthy. The error code in the console is 502: "Health checks failed with these codes: [502]".
I went through the AWS troubleshooting guide and nothing there helped.
EDIT: I have no services/tasks running on the EC2 container instances. It's a vanilla ECS cluster.
Here is my relevant code for the ALB:
# Target Group declaration
resource "aws_alb_target_group" "lb_target_group_somm" {
  name                 = "${var.alb_name}-default"
  port                 = 80
  protocol             = "HTTP"
  vpc_id               = "${var.vpc_id}"
  deregistration_delay = "${var.deregistration_delay}"

  health_check {
    path     = "/"
    port     = 80
    protocol = "HTTP"
  }

  lifecycle {
    create_before_destroy = true
  }

  tags = {
    Environment = "${var.environment}"
  }

  depends_on = ["aws_alb.alb"]
}

# ALB Listener with default forward rule
resource "aws_alb_listener" "https_listener" {
  load_balancer_arn = "${aws_alb.alb.id}"
  port              = "80"
  protocol          = "HTTP"

  default_action {
    target_group_arn = "${aws_alb_target_group.lb_target_group_somm.arn}"
    type             = "forward"
  }
}

# The ALB has a security group with ingress rules on TCP port 80 and egress rules to anywhere.
# There is a security group rule for the EC2 instances that allows ingress traffic to the ECS cluster from the ALB:
resource "aws_security_group_rule" "alb_to_ecs" {
  type = "ingress"
  /*from_port = 32768 */
  from_port                = 80
  to_port                  = 65535
  protocol                 = "TCP"
  source_security_group_id = "${module.alb.alb_security_group_id}"
  security_group_id        = "${module.ecs_cluster.ecs_instance_security_group_id}"
}
Has anyone hit this error and know how to debug/fix this ?
It looks like you're trying to register the ECS cluster instances with the ALB target group directly. This isn't how you're meant to send traffic to an ECS service via an ALB.
Instead, you should have the service join the tasks to the target group. This means that if you are using host networking, only the instances with the task deployed will be registered. If you are using bridge networking, it will add the ephemeral ports used by your task to the target group (allowing multiple targets on a single instance). And if you are using awsvpc networking, it will register the ENIs of every task that the service spins up.
To do this you should use the load_balancer block in the aws_ecs_service resource. An example might look something like this:
resource "aws_ecs_service" "mongo" {
  name            = "mongodb"
  cluster         = "${aws_ecs_cluster.foo.id}"
  task_definition = "${aws_ecs_task_definition.mongo.arn}"
  desired_count   = 3
  iam_role        = "${aws_iam_role.foo.arn}"

  load_balancer {
    target_group_arn = "${aws_lb_target_group.lb_target_group_somm.arn}"
    container_name   = "mongo"
    container_port   = 8080
  }
}
If you were using bridge networking this would mean that the tasks are accessible on the ephemeral port range on the instances so your security group rule would need to look like this:
resource "aws_security_group_rule" "alb_to_ecs" {
  type                     = "ingress"
  from_port                = 32768 # ephemeral port range for bridge networking tasks
  to_port                  = 60999 # cat /proc/sys/net/ipv4/ip_local_port_range
  protocol                 = "TCP"
  source_security_group_id = "${module.alb.alb_security_group_id}"
  security_group_id        = "${module.ecs_cluster.ecs_instance_security_group_id}"
}
It looks like http://ecsInstanceIp:80 is not returning HTTP 200 OK; I would check that first. That is easy to verify if the instance is public, which won't be the case most of the time. Otherwise, I would create an EC2 instance in the same VPC and make a curl request from there to confirm.
You may also check the container logs to see if the health check responses are being logged.
Hope this helps. Good luck.