Set timeout for different targets in AWS Application load balancer - amazon-web-services

There is an Application Load Balancer (ALB) with an HTTP listener.
There are 3 target groups: TG1, TG2, TG3.
In the HTTP listener, there are rules:
- path /foo --> TG1
- path /bar --> TG2
- default --> TG3
The ALB applies its default idle timeout of 60 seconds to all incoming requests. How can different timeouts be set at the listener level, for example 30 seconds for /foo and 40 seconds for /bar?
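For reference, as far as I know the idle timeout on an ALB is a load-balancer-wide attribute rather than a per-listener or per-rule setting, so it can only be changed for the whole ALB. A minimal sketch with the AWS CLI, where the load balancer ARN is a placeholder:

```shell
# Hedged sketch: idle_timeout.timeout_seconds applies to the entire ALB,
# not to individual listener rules. The ARN below is a placeholder.
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123def456 \
  --attributes Key=idle_timeout.timeout_seconds,Value=60
```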

Related

Have to define a listener on port 4000 while port 80's listener forwards requests to a target group on port 4000

I'm learning about deploying microservices to AWS ECS.
My project has deployed successfully, but I'm confused about the relationship between ALB listeners and target groups.
Let's say I have two services, serviceA and serviceB, one running on port 4000 and one on port 4001.
I created 3 target groups: one listening on port 80 (default-trg), one on port 4000 (serviceA-trg), and one on port 4001 (serviceB-trg).
Then I created the ALB to do path-based routing, with only one listener, on port 80. I then edited the rules for this listener to forward requests based on the path:
If the path is /serviceA, forward to serviceA-trg
If the path is /serviceB, forward to serviceB-trg
Otherwise, forward to default-trg
This configuration didn't work. My tasks stopped because "ELB can't do a health check".
I had to create 2 more listeners: one listening on port 4000 with serviceA-trg as the target, and one listening on port 4001 with serviceB-trg as the target.
Could you please explain why I have to do that to make my app work?
Thanks in advance for your explanation!
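One thing worth checking in a setup like the one above: when the target group is attached to the ECS service itself at creation time, ECS registers each task's address with the group and the ALB health-checks the registered port directly, independently of any listener, so extra listeners on 4000/4001 should not be needed for health checks. A hedged sketch with the AWS CLI, where the cluster, service, task definition, and target group ARN are all placeholders:

```shell
# Hedged sketch: attach the target group to the ECS service so ECS registers
# tasks with it automatically; health checks then hit the registered port.
# All names and the ARN below are placeholders.
aws ecs create-service \
  --cluster my-cluster \
  --service-name serviceA \
  --task-definition serviceA-task \
  --desired-count 1 \
  --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/serviceA-trg/abc123,containerName=serviceA,containerPort=4000
```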

Application Load Balancer Target Group Register/Deregister Infinite Loop

Setup
Security Groups
ALB (inbound rules)
HTTPS:443 from 0.0.0.0/0 & ::/0
HTTP:80 from 0.0.0.0/0 & ::/0
Cluster (inbound rules)
All traffic from ALB security group
Cluster
instance is t2.micro (only running 1 instance in subnets us-east-1<a,b,c> under default VPC with public IP enabled)
client → 0.375 vCPU/0.25 GB, 1 task, bridge network, 0:3000 (host:container)
server → 0.25 vCPU/0.25 GB, 2 tasks, bridge network, 0:5000 (host:container)
ALB
availability zones: us-east-1<a,b,c>, same default VPC
listeners:
HTTP:80 → redirect to HTTPS://#{host}:443/#{path}?#{query}
HTTPS:443 (/) → forward to client target group
HTTPS:443 (/api) → forward to server target group
Target Groups
client → HTTP:3000 with default health check of HTTP, /, Traffic Port, 5 healthy, 2 unhealthy, 5s timeout, 30s interval, 200 OK
server → HTTP:5000 with health check of HTTP, /api/health, Traffic Port, 5 healthy, 2 unhealthy, 5s timeout, 30s interval, 200 OK
Both Docker images (client and server) work properly locally, and the client service seems to work well in AWS ECS. However, the server service keeps cycling between registering and deregistering (draining) its targets, seemingly without ever becoming unhealthy.
Here is what I see in the service Deployments and events tab:
5/12/2022, 8:43:04 PM service server registered 2 targets in target-group <...>
5/12/2022, 8:42:54 PM service server has started 2 tasks: task <...> task <...>. <...>
5/12/2022, 8:42:51 PM service server deregistered 1 targets in target-group <...>
5/12/2022, 8:42:51 PM service server has begun draining connections on 1 tasks. <...>
5/12/2022, 8:42:51 PM service server deregistered 1 targets in target-group <...>
5/12/2022, 8:42:17 PM service server registered 2 targets in target-group <...>
5/12/2022, 8:42:07 PM service server has started 2 tasks: task <...> task <...>. <...>
5/12/2022, 8:42:04 PM service server deregistered 1 targets in target-group <...>
5/12/2022, 8:42:04 PM service server has begun draining connections on 1 tasks. <...>
5/12/2022, 8:42:04 PM service server deregistered 1 targets in target-group <...>
Any ideas?
After enabling AWS CloudWatch logs in my task definition's container specs, I was able to see that the issue was actually with an AWS RDS instance.
The RDS instance's SG was only accepting traffic from an old cluster SG (which no longer exists), which explains why the health check was never passing and the registered targets were drained immediately.
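For anyone hitting a similar loop, the CloudWatch logging mentioned above can be enabled with the awslogs driver in the container definition. A hedged sketch, assuming placeholder names, image, and region:

```shell
# Hedged sketch: register a task definition with the awslogs log driver so
# container output reaches CloudWatch Logs. Names/region are placeholders.
aws ecs register-task-definition --cli-input-json '{
  "family": "server",
  "containerDefinitions": [{
    "name": "server",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/server:latest",
    "memory": 256,
    "portMappings": [{"containerPort": 5000, "hostPort": 0}],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/server",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "ecs"
      }
    }
  }]
}'
```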

Application Load Balancer for multiple applications on port 80

I am trying to configure an AWS Application Load Balancer to load-balance four applications running across two EC2 instances.
My application architecture is as follows:
instance 1 (4 applications running on ports: 8080, 8081, 8082, 8083)
instance 2 (4 applications running on ports: 8080, 8081, 8082, 8083)
I would like to use HTTP port 80 and create an entry for each of these ports.
I previously tried the Classic Load Balancer, but it does not support more than one listener on HTTP port 80.
I have never used the Application Load Balancer before, but have tried configuring a target group rule to test it out. My rule checks for the path /applicationName, and the port to check is 8081, the idea being that my application URL would be http://<ipaddress>:8081/applicationName.
Ideally I would like to create a rule for each application.
Does anyone have any insight as to whether this type of load balancer can be used for this setup and, if so, how to set it up properly?
You would have to create a target group for each application, like so:
Target Group A --> Instances 1 and 2, port 8080
Target Group B --> Instances 1 and 2, port 8081
Target Group C --> Instances 1 and 2, port 8082
Target Group D --> Instances 1 and 2, port 8083
Then on the ALB you would create 4 rules for port 80, like so:
Path /application1 --> Target Group A
Path /application2 --> Target Group B
Path /application3 --> Target Group C
Path /application4 --> Target Group D
Each application would need to be configured to serve the appropriate content at the specified path, i.e. application1 would need to serve content at http://domain-name:8080/application1, etc.
You also have to configure a default action for port 80 on the ALB. I'm not sure what you would want that to be in this instance; perhaps point it to one of your applications as the "default" when no path is specified.
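The target-group-per-port setup above can be sketched with the AWS CLI roughly as follows; the VPC ID, listener ARN, and target group ARN are placeholders, and one create-target-group/create-rule pair would be repeated per application:

```shell
# Hedged sketch: one target group per backend port, plus a path rule on the
# port-80 listener. All IDs and ARNs below are placeholders.
aws elbv2 create-target-group \
  --name app1-tg --protocol HTTP --port 8080 --vpc-id vpc-0abc123

aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc123/def456 \
  --priority 10 \
  --conditions Field=path-pattern,Values='/application1*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app1-tg/abc123
```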

AWS Load Balancer - All HTTPS requests are resulting in 503 error, but SSL has a cert?

I've recently added an SSL cert to my Load Balancer using an HTTPS listener.
I've updated the Security Group for the Load Balancer to allow HTTPS traffic through on port 443, from sources 0.0.0.0/0 and ::/0, and I've also set the Security Group for the instances themselves to allow HTTPS traffic from the Load Balancer's Security Group.
However, any requests to our server over HTTPS fail, while HTTP requests get through fine.
In EC2->LoadBalancers I see a Listeners tab, and have the following in it:
LB Protocol - LB Port - Ins Protocol - Ins Port - Cipher - SSL Certificate
HTTP - 80 - HTTP - 80 - N/A - N/A
HTTPS - 443 - HTTPS - 443 - Change - myCertName (IAM) Change
Clicking Change on the Cipher shows I have the latest Security Policy selected (ELBSecurityPolicy-2016-08), and the SSL certificate is the one I generated yesterday.
If I go into EC2 -> Security Groups I see the following:
For the security group that the 2 instances are using:
Type - Protocol - Range - Source
HTTP - TCP - 80 - sg-123456
HTTPS - TCP - 443 - sg-123456
(where sg-123456 is the name of the security group the load balancer is using).
For the security group that the LoadBalancer is using:
Type - Protocol - Range - Source
HTTPS - TCP - 443 - 0.0.0.0/0
HTTPS - TCP - 443 - ::/0
HTTP - TCP - 80 - 0.0.0.0/0
HTTP - TCP - 80 - ::/0
I also tried Elastic Beanstalk -> my app - > my env - > Configuration - > Network Tier -> Load Balancing:
It had Secure listener port set to OFF. I set this to 443, the Protocol to HTTPS and then set the SSL certificate ID dropdown to the same certificate I uploaded to the Load Balancer listener.
I hit save, it started to update the environment, and then gave this error:
Updating load balancer named: failed Reason: A listener already exists for with LoadBalancerPort 443, but with a different InstancePort, Protocol, or SSLCertificateId
I feel like there is probably a single step I've missed somewhere along the way, can anyone see what that step could be?
You are forwarding SSL traffic to port 443 on your EC2 instances. This isn't going to work unless you also have an SSL certificate installed on the EC2 instances themselves. Changing the HTTPS listener to use Instance Protocol HTTP and Instance Port 80 will most likely clear up your issue.
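That change (terminating TLS at the load balancer and forwarding plain HTTP to the instances) can be sketched with the Classic Load Balancer CLI along these lines; the load balancer name and certificate ARN are placeholders:

```shell
# Hedged sketch: replace the HTTPS:443 -> HTTPS:443 listener with
# HTTPS:443 -> HTTP:80 so TLS terminates at the ELB.
# Name and certificate ARN are placeholders.
aws elb delete-load-balancer-listeners \
  --load-balancer-name my-elb --load-balancer-ports 443

aws elb create-load-balancer-listeners \
  --load-balancer-name my-elb \
  --listeners Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/myCertName
```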

AWS load balancer for Mean stack

I am learning about load balancers. I have 2 instances attached to my load balancer, but I always get an OutOfService error.
Node is running in port 3000
my port configuration: 80 (HTTP) forwarding to 80 (HTTP)
health check: HTTP:3000/
When you use the "HTTP" ping protocol, the instance has to actually return 200 OK at the health-check path; you cannot just put "/" in the path field if nothing is served there. Upload a test file at that path, point the health check at it, and it will work.
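A hedged sketch of the corresponding health-check configuration for a Classic Load Balancer, pointing at the Node app on port 3000; the load balancer name and threshold values are placeholders:

```shell
# Hedged sketch: health-check the Node app on port 3000 at "/".
# The path must return 200 OK. Name and thresholds are placeholders.
aws elb configure-health-check \
  --load-balancer-name my-elb \
  --health-check Target=HTTP:3000/,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=5
```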