AWS Application Load Balancer with HTTP/2

I have a RESTful app deployed on a number of EC2 instances sitting behind a Load Balancer.
Authentication is handled in part by a custom request header called "X-App-Key".
I have just migrated my Classic Load Balancers to Application Load Balancers and I'm seeing intermittent issues where some valid requests (tested with curl) fail authentication for some users. It looks like the custom request header is only intermittently being passed through. Using Apache Bench, roughly 100 of 500 requests failed.
If I test with a Classic Load Balancer, all 500 succeed.
I looked into this a bit more and found that the users it is failing for are using a slightly newer version of curl, and specifically that their requests are using HTTP/2. If I add "--http1.1" to the curl request, they all pass fine.
So the issue seems to be specific to using a custom request header with the new-generation Application Load Balancers and HTTP/2.
Am I doing something wrong?!

I found the answer in this post:
AWS Application Load Balancer transforms all headers to lower case
It seems the headers come through from the ALB in lowercase (HTTP/2 requires lowercase header field names). I needed to update my backend to treat header names case-insensitively.
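The backend fix is to stop assuming a particular header-name casing. A minimal sketch of a case-insensitive lookup (the dict-of-headers shape is an assumption about the backend; the header name is the one from the question):

```python
# Sketch: look up a custom header case-insensitively, so it works whether
# the load balancer forwards "X-App-Key" (HTTP/1.1) or "x-app-key" (HTTP/2).
def get_header(headers, name):
    """Return the header value regardless of the case the proxy used."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return lowered.get(name.lower())

# The ALB lowercases header names on HTTP/2 requests:
http2_headers = {"x-app-key": "secret123", "host": "example.com"}
assert get_header(http2_headers, "X-App-Key") == "secret123"

# The same lookup still works for HTTP/1.1-style casing:
http1_headers = {"X-App-Key": "secret123"}
assert get_header(http1_headers, "X-App-Key") == "secret123"
```

Most web frameworks already expose headers through a case-insensitive mapping, so often the fix is just to use that mapping instead of raw dict access.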

You probably have to enable sticky sessions on your load balancer.
They keep each client's session linked to the same instance.
Note, however, that needing to keep a session pinned is an application-level concern. It is not useful for every kind of service and, depending on the nature of your system, is not really recommended, as it reduces performance in stateless REST-style systems.
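For reference, ALB stickiness is configured as target-group attributes. A hedged sketch of the key/value pairs you would pass to the elbv2 ModifyTargetGroupAttributes API (e.g. boto3's `modify_target_group_attributes`); the one-hour duration is just an example:

```python
# Sketch: attribute payload that turns on load-balancer-cookie stickiness
# for an ALB target group. Pass this list as the Attributes parameter of
# the elbv2 ModifyTargetGroupAttributes call along with your target group
# ARN; the duration below is a placeholder.
def stickiness_attributes(duration_seconds=3600):
    return [
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds",
         "Value": str(duration_seconds)},
    ]

attrs = stickiness_attributes()
assert {"Key": "stickiness.enabled", "Value": "true"} in attrs
```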

Related

AWS Application Load Balancer rule not working for cookies? What could have gone wrong?

I work in the DevOps team at my company. Recently, we shifted to AWS's Application Load Balancer and we are forwarding requests based on a cookie's value. For some reason, the rule isn't working, and AWS doesn't provide logs with information on why a rule failed.
There could be two reasons for this that we can think of:
1. The load balancer isn't able to read the cookie. We don't think this is the issue, as the applications behind this load balancer are able to read and also print the cookies.
2. The load balancer doesn't read cookies on subsequent requests after the first. We have raised this concern with AWS and they have yet to get back to us.
Meanwhile, can anyone point to any possible issues which we might be overlooking?
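One thing worth double-checking is how the rule condition is expressed: as far as I know, the ALB matches cookies via an "http-header" condition on the Cookie header, and the Cookie header carries all cookies in one string, so the match needs surrounding wildcards. A hedged sketch of the condition payload you would pass to the elbv2 CreateRule API (cookie name and value are placeholders):

```python
# Sketch: an ALB rule condition that matches a cookie value. Because the
# Cookie header looks like "a=1; session=abc; b=2", the pattern needs
# leading and trailing wildcards or it will never match.
def cookie_rule_condition(name, value):
    return {
        "Field": "http-header",
        "HttpHeaderConfig": {
            "HttpHeaderName": "Cookie",
            "Values": [f"*{name}={value}*"],
        },
    }

cond = cookie_rule_condition("variant", "beta")
assert cond["HttpHeaderConfig"]["Values"] == ["*variant=beta*"]
```

A condition written without the wildcards would only match a request whose entire Cookie header equals that one pair, which could explain a rule that works in isolated tests but not with real browsers sending several cookies.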

Use AWS Beanstalk Hooks to do an HTTP GET request

Is there a way to do an HTTP GET call during an AWS Beanstalk deployment and make it roll back to the previous version in case of an error response, even though the application has been deployed successfully?
You can use the ELB health check to customise which of your application endpoints is checked for application health; see the docs.
The settings differ slightly depending on which kind of load balancer you're using, e.g. Classic or Application.
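One way to get the "do an HTTP GET and fail the deployment on error" behaviour is to make the health-check endpoint itself perform that GET. A minimal stdlib sketch, assuming a hypothetical `/health` path and a placeholder downstream URL (a real setup would point the ELB health check at this path):

```python
# Sketch: a /health endpoint that performs an HTTP GET against a dependency
# and reports unhealthy (503) when it fails, so the ELB health check (and a
# deployment gated on it) fails too. The downstream URL is a placeholder.
from http.server import BaseHTTPRequestHandler
import urllib.request

DOWNSTREAM_URL = "http://localhost:8081/ping"  # placeholder dependency

def downstream_ok(url=DOWNSTREAM_URL):
    """Return True when the dependency answers with a 2xx response."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return 200 <= resp.status < 300
    except OSError:  # covers URLError: refused connections, timeouts, DNS
        return False

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health" and downstream_ok():
            self.send_response(200)
        else:
            self.send_response(503)
        self.end_headers()
```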

Having load balancer before Akka Http multiple applications

I have multiple identical Scala Akka HTTP applications, each installed on a dedicated server (around 10 apps), responding to HTTP requests on port 80. In front of this setup I am using a single HAProxy instance that receives all the incoming traffic and balances the workload across these 10 servers.
We would like to replace HAProxy (we suspect it is causing us latency problems) with a different load balancer. The requirement is to adopt a different third-party load balancer, or to develop a simple one in Scala that round-robins each HTTP request to the backend Akka HTTP apps and proxies back the response.
Is there another recommended (open-source) load balancer I can use to balance/proxy the incoming HTTP requests to the multiple apps, other than HAProxy (maybe Apache httpd)?
Does it make sense to write a simple Akka HTTP application route as the load balancer, register the backend hosts in some configuration file, and round-robin the requests to them?
Maybe I should consider an Akka cluster for that purpose? The thing is, the applications are already standalone Akka HTTP services with no cluster support, and clustering might be too much. (I would like to keep it simple.)
What is the best practice for load balancing requests to HTTP apps (especially Akka HTTP Scala apps)? I might be missing something here.
Note: back pressure is something we would also like to have, meaning that if the servers are busy, we would like to respond with a 204 or some other status code so our clients won't get timeouts when the backend is busy.
Although Akka HTTP performance is quite impressive, I would not use it for writing a simple reverse proxy, since there are tons of others out there in the community.
I am not sure where you deploy your app, but the best (and most secure) approach is to use a load balancer provided by your cloud provider. Most of them have one, and it usually has a good cost-benefit ratio.
If your cloud provider does not provide one, or you are hosting your app yourself, then you should first take a look at your HAProxy. Did you run tests on HAProxy in isolation to see whether it still has the same latency issues? Are you sure the config is optimised for what you want? Does your HAProxy have enough resources (CPU and memory) to operate? Is your HAProxy in the same data center as your deployed app?
If you check all of these questions and are still having latency issues, then I would recommend choosing another one. There are tons out there, such as Envoy and NGINX. I really like Envoy and I've been using it at work for a few months now without any complaints.
Hope I could help.
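For completeness, the round-robin-plus-back-pressure idea from the question can be sketched in a few lines (illustration only; hosts and the capacity limit are placeholders, and a real proxy would also forward the request and stream back the response, which is exactly the part that makes an off-the-shelf proxy worthwhile):

```python
# Sketch: cycle through backend hosts round-robin and shed load with a
# 204/503-style "busy" signal when every backend is saturated, instead of
# queueing requests until clients time out.
import itertools

BACKENDS = ["app1:80", "app2:80", "app3:80"]  # placeholder hosts
MAX_IN_FLIGHT = 100                           # per-backend capacity guess

in_flight = {b: 0 for b in BACKENDS}
rotation = itertools.cycle(BACKENDS)

def pick_backend():
    """Return the next non-saturated backend, or None to signal 'busy'."""
    for _ in range(len(BACKENDS)):
        backend = next(rotation)
        if in_flight[backend] < MAX_IN_FLIGHT:
            return backend
    return None  # caller answers with 204/503 rather than queueing

# Successive requests rotate across the healthy backends:
first, second = pick_backend(), pick_backend()
assert first != second
```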

ECS container routing with an application load balancer in AWS

I know Application Load Balancers are new in AWS, and discussions (help) have been scarce up to now.
I have a few API containers (Docker) running in EC2 Container Service (ECS). I can take advantage of Application Load Balancers to manage routing at an application level rather than a network level. This is exactly what ECS has lacked until now.
Getting to the point...
I'm trying to get to a point where the load balancer will detect the pattern in the request url and route the request to the correct container, but route the request without the pattern included.
For example:
http://elb.eu-west-1.elb.amazonaws.com/app1/ping
Should route request '/ping' to the app1 container
http://elb.eu-west-1.elb.amazonaws.com/app2/ping
Should route request '/ping' to the app2 container
etc...
Each app has its own target group and a corresponding pattern: /app1*, /app2*
The problem
I can successfully get a request to '/app1/ping' routed to the app1 container; however, the request hits the container as '/app1/ping' (obviously), but I only need '/ping' to hit the container. '/app1' is irrelevant to the container.
Any ideas how I can achieve this?
Application Load Balancers do a couple of things very well, but there's an awful lot they do not do. This is true for a lot of AWS services (e.g. SQS only recently, after almost a decade, got FIFO support) and you can either love or hate this.
Your use case seems to fit the AWS API Gateway very well, which is a service that can be used to map certain external endpoints to certain internal endpoints (and a lot more...). There's even a blog post on the AWS blog about how to use Application Load Balancing with the EC2 Container Service and the API Gateway together.
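Another common workaround, since the ALB forwards the path unchanged, is to strip the routing prefix inside the container itself. A minimal WSGI-style middleware sketch (the "/app1" prefix matches the example above; the inner app is a stand-in):

```python
# Sketch: rewrite "/app1/ping" to "/ping" before the application sees it,
# so the routing prefix used by the load balancer never reaches app code.
class StripPrefix:
    def __init__(self, app, prefix):
        self.app, self.prefix = app, prefix

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "")
        if path.startswith(self.prefix):
            # "/app1/ping" becomes "/ping"; a bare "/app1" becomes "/".
            environ["PATH_INFO"] = path[len(self.prefix):] or "/"
        return self.app(environ, start_response)

# Example: wrap a trivial app that echoes the path it received.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ["PATH_INFO"].encode()]

wrapped = StripPrefix(app, "/app1")
body = wrapped({"PATH_INFO": "/app1/ping"}, lambda status, headers: None)
assert body == [b"/ping"]
```

Most web servers and frameworks have an equivalent built in (e.g. nginx location rewrites), so check yours before adding middleware.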

How to carry out performance testing on a sticky-enabled, load-balanced web application?

Hi,
I have read a lot of blogs and tutorials, but I cannot figure out how to carry out performance testing on a cookie-based sticky web application that sits behind a reverse-proxy load balancer. I have three back-end application servers serving the same instance of a shopping cart. A load balancer sits in front of them and directs the traffic.
Problem: when I send an HTTP request for performance analysis, the load balancer (which tracks the client via a cookie) redirects the HTTP request to the same back-end server it was assigned to. I have the option of using IP spoofing, but it won't work when the back-end servers are distributed across a WAN rather than a LAN. Moreover, each back-end server has its own public IP address and sits behind a firewall.
Question: is there a way JMeter can be configured to load test in this scenario, or is there another, better solution?
I'd much appreciate your thoughts and contributions.
Regards
Here are a few possible workarounds:
Point different JMeter instances directly at different backend hosts, bypassing the load balancer.
Use Distributed Testing with JMeter nodes somewhere in the cloud; for instance, Amazon Micro Instances are free. You can use the JMeter ec2 Script to simplify installation, configuration and execution.
Try using the DNS Cache Manager, which enables individual DNS resolution for each JMeter thread.
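The underlying principle behind all three workarounds is that each simulated client must look distinct to the balancer. With cookie-based stickiness that means each virtual user needs its own cookie jar, so the stickiness cookie set for one user never pins the others. A stdlib Python sketch of that isolation (no request is actually sent here; any HTTP client or load tool with per-user cookie stores works the same way):

```python
# Sketch: a cookie-based sticky balancer pins a client via a cookie, so
# each simulated user gets an isolated cookie jar and is balanced
# independently instead of all test traffic landing on one backend.
import http.cookiejar
import urllib.request

def make_virtual_user():
    """Build an opener whose cookies are isolated from other users."""
    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(jar))
    return opener, jar

user_a, jar_a = make_virtual_user()
user_b, jar_b = make_virtual_user()
# Separate jars: a stickiness cookie set for one user never leaks to another.
assert jar_a is not jar_b
```

In JMeter terms this is what the HTTP Cookie Manager's per-thread cookie store gives you when "Clear cookies each iteration" is configured appropriately.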