Does Nginx become redundant if we have an AWS Application Load Balancer for a Node application?

I have a Node application running on AWS. When we move into production, there is a very high chance it will receive a very high volume of requests. I'm planning to host it on AWS ECS, with an AWS Application Load Balancer in front of the application.
When I looked into how to deploy a Node application in production, I saw that everybody suggests putting Nginx in front of the Node application.
My question is: if we have the ALB in the architecture, do we still need to add Nginx? Is there any advantage to using Nginx if we need to host the application for 1 million users?

It depends on how you are using NGINX for load balancing. An Application Load Balancer certainly brings a lot of features that can make NGINX redundant in your architecture, but it is not as advanced as NGINX. For example, the ALB only uses round-robin load balancing, while you can configure nginx for round robin, least connections, etc. The ALB has no caching capabilities, while nginx provides static content caching. The ALB only supports path-based routing, while nginx can route on request headers, cookies, or arguments, as well as the request URL.
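To make those differences concrete, here is a minimal nginx sketch (the hosts, ports, and the X-Api-Version header are made up for illustration) showing least-connections balancing, static caching, and header-based routing:

    # Hypothetical hosts and ports, for illustration only.
    upstream node_app {
        least_conn;              # least connections instead of round robin
        server 10.0.1.10:3000;
        server 10.0.1.11:3000;
    }

    upstream node_app_v2 {
        server 10.0.1.20:3000;
    }

    # Route on a request header, which ALB's path-based rules cannot do.
    map $http_x_api_version $app_backend {
        default http://node_app;
        "v2"    http://node_app_v2;
    }

    # Cache for static content, which ALB does not provide at all.
    proxy_cache_path /var/cache/nginx keys_zone=static_cache:10m max_size=1g;

    server {
        listen 80;

        location /static/ {
            proxy_cache       static_cache;
            proxy_cache_valid 200 10m;
            proxy_pass        http://node_app;
        }

        location / {
            proxy_pass $app_backend;
        }
    }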
For further reading and the source: https://www.nginx.com/blog/aws-alb-vs-nginx-plus/
Note: one other important benefit of nginx is that it is cloud agnostic, so if you plan to switch cloud providers, you can take your nginx settings with you.

It depends on the rest of your architecture. If the ALB can handle everything for you, you probably don't need nginx. Also, nginx has a learning curve if you are a first-time user.

Related

ASP.NET Core on AWS Fargate with Reverse Proxy and ALB

We are looking to migrate our .NET Core applications to AWS. For some background information: at the moment we host our applications on VMs behind IIS, which, with the .NET Core Hosting module, is very straightforward. Our applications are a combination of both intranet and externally facing applications, nothing with very high traffic demand.
After some research it seems like AWS ECS Fargate is a good option. The plan at this point is to Dockerize our applications and deploy them to ECS Fargate.
My concern is mainly about the topic of reverse proxies.
For now I have an IdentityServer application successfully running on ECS Fargate behind an Application Load Balancer. The ALB does TLS termination and forwards traffic to the container running under ECS Fargate over HTTP. It's a very straightforward setup, but I worry I am missing something, as this really is not my field of expertise.
My question is: does the above setup sound sufficient? My current headache is whether it would be worth adding Nginx (or a similar reverse proxy) to the pipeline. In that case we'd have two scenarios as I understand it:
Keep the ALB and add another reverse proxy (say Nginx). The ALB still does TLS termination and forwards the traffic to Nginx, which in turn forwards the traffic to the container running the application itself. I may not be seeing the benefits of this, but I fear I might be wrong; it feels like it adds unnecessary complexity to the setup.
Skip the ALB altogether and expose Nginx (or another reverse proxy) publicly. The Nginx instance would handle TLS termination, load balancing and so on. While I can see the benefit of more control with this scenario, again, the additional setup makes me think it might not be worth it, seeing as we are a small team with limited hosting experience.
So, my main question is whether the original setup sounds plausible for a production environment. Any other feedback is of course also highly appreciated.
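For what it's worth, if you did go with the first scenario, the extra nginx hop would typically be a small pass-through proxy along these lines (a sketch only; the listen and container ports are assumptions):

    server {
        listen 8080;

        location / {
            proxy_pass       http://127.0.0.1:5000;    # Kestrel in the same task (assumed port)
            proxy_set_header Host              $host;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;  # the ALB already terminated TLS
        }
    }

Whether that hop earns its keep usually comes down to whether you need things the ALB doesn't do, such as buffering, static file serving, or header rewriting.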

Having a load balancer in front of multiple Akka HTTP applications

I have multiple identical Scala Akka HTTP applications, each one installed on a dedicated server (around 10 apps), responding to HTTP requests on port 80. In front of this setup I am using a single HAProxy instance that receives all the incoming traffic and balances the workload across these 10 servers.
We would like to replace HAProxy (we suspect it is causing latency problems) with a different load balancer. The requirement is either to adopt a different third-party load balancer or to develop a simple one in Scala that round-robins each HTTP request to the backend Akka HTTP apps and proxies the response back.
Is there another recommended open-source load balancer that I can use to load balance / proxy the incoming HTTP requests to the multiple apps, other than HAProxy (maybe Apache httpd)?
Does it make sense to write a simple Akka HTTP application route as the load balancer, register the backend app hosts in some configuration file, and round-robin the requests to them?
Maybe I should consider Akka Cluster for that purpose? The thing is, the applications are already standalone Akka HTTP services with no cluster support, and going for clustering might be too much. (I would like to keep it simple.)
What is the best practice for load balancing requests to HTTP apps (especially Akka HTTP Scala apps)? I might be missing something here.
Note: back pressure is something we would also like to have, meaning that if the servers are busy, we would like to respond with a 204 or some other status code so our clients won't hit timeouts while the backend is busy.
Although Akka HTTP performance is quite impressive, I would not use it for writing a simple reverse proxy since there are tons of others out there in the community.
I am not sure where you deploy your app, but the best (and most secure) approach is to use a load balancer provided by your cloud provider. Most of them have one, and it usually has a good cost-benefit ratio.
If your cloud provider does not provide one, or you are hosting your app yourself, then first you should take a look at your HAProxy. Did you test HAProxy in isolation to see whether it still has the same latency issues? Are you sure the config is optimised for what you want? Does your HAProxy have enough resources (CPU and memory) to operate? Is your HAProxy in the same data center as your deployed app?
If you check all of these questions and still have latency issues, then I would recommend choosing another one. There are tons out there, such as Envoy and NGINX. I really like Envoy and I've been using it at work for a few months now without any complaints.
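If you do try nginx, a minimal sketch of the round-robin front you describe might look like the following (hostnames and the connection threshold are made-up values; note that limit_conn can only return a 4xx/5xx code such as 503, not the 204 mentioned in the question):

    upstream akka_backends {
        # Round robin is nginx's default balancing method.
        server app1.internal:80;
        server app2.internal:80;
        # ...list the remaining app servers here...
    }

    # Crude back pressure: cap concurrent connections and answer with a
    # status code instead of letting clients time out.
    limit_conn_zone $server_name zone=perserver:10m;

    server {
        listen 80;
        limit_conn        perserver 1000;  # threshold is an assumption
        limit_conn_status 503;

        location / {
            proxy_pass http://akka_backends;
        }
    }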
Hope I could help.
[]'s

AWS Application Load Balancer with HTTP/2

I have a RESTful app deployed on a number of EC2 instances sitting behind a Load Balancer.
Authentication is handled in part by a custom request header called "X-App-Key".
I have just migrated my Classic Load Balancers to Application Load Balancers and I'm starting to experience intermittent issues where some valid requests (tested with curl) fail authentication for some users. It looks like the custom request header is only intermittently being passed through. Using Apache Bench, approximately 100 of 500 requests failed.
If I test with a Classic Load Balancer, all 500 succeed.
I looked into this a bit more and found that the users this is failing for are using a slightly newer version of curl; specifically, the requests coming from these users use HTTP/2. If I add "--http1.1" to the curl request, they all pass fine.
So the issue seems to be specific to using a custom request header with the new-generation Application Load Balancers and HTTP/2.
Am I doing something wrong?!
I found the answer in this post:
AWS Application Load Balancer transforms all headers to lower case
It turns out the headers come through from the ALB in lowercase (HTTP/2 requires header field names to be lowercase, which is why this only shows up for clients speaking HTTP/2). I needed to update my backend to read the header case-insensitively.
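As an aside, if there happens to be an nginx hop between the ALB and a backend you cannot change, one possible workaround (an untested sketch; the header name is taken from the question, the port is an assumption) is to re-emit the header with the casing the backend expects, since nginx's $http_* variables match header names case-insensitively:

    server {
        listen 80;

        location / {
            # $http_x_app_key matches the header however it is cased on the
            # wire; proxy_set_header re-sends it with the spelling written here.
            proxy_set_header X-App-Key $http_x_app_key;
            proxy_pass       http://127.0.0.1:8080;  # your app (assumed port)
        }
    }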
You probably have to enable sticky sessions in your load balancer.
They are needed to keep a session linked to the same instance.
But the need to keep a session active lives at the application level, and stickiness is not really useful for some kinds of services (depending on the nature of your system it is not really recommended), as it reduces performance in REST-like systems.

How to carry out performance testing on a sticky-enabled, load-balanced web application?

Hi,
I have read a lot of blogs and tutorials, but I cannot figure out how to carry out performance testing on a cookie-based sticky web application that sits behind a reverse-proxy load balancer. I have 3 backend application servers serving the same instance of a shopping cart. A load balancer sits in front of them and directs the traffic.
Problem: when I send HTTP requests for performance analysis, the load balancer (which tracks the client via a cookie) directs every request to the same backend server the client was originally assigned to. I have the option of using IP spoofing, but it won't work when the backend servers are distributed across a WAN rather than a LAN. Moreover, each backend server has its own public IP address and sits behind a firewall.
Question: is there a way JMeter can be configured to load test in this scenario, or is there another, better solution?
I much appreciate your thoughts and contributions.
Regards
Here are a few possible workarounds:
Point different JMeter instances directly at different backend hosts, bypassing the load balancer.
Use Distributed Testing with JMeter nodes somewhere in the cloud, e.g. Amazon micro instances are free. You can use the JMeter ec2 Script to simplify the installation, configuration and execution.
Try using the DNS Cache Manager; it enables individual DNS resolution for each JMeter thread.
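For reference, the pinning the question describes is typically produced by a cookie- or IP-hash-keyed upstream. A rough nginx-style sketch of the two common variants (hostnames and cookie name are made up, and your balancer may not be nginx at all):

    upstream shop_backends_ip {
        ip_hash;                 # pin each client IP to one server
        server app1.example.com;
        server app2.example.com;
        server app3.example.com;
    }

    upstream shop_backends_cookie {
        hash $cookie_sessionid consistent;   # pin by a session cookie instead
        server app1.example.com;
        server app2.example.com;
        server app3.example.com;
    }

Because the hash key is the client IP or the session cookie, every request from one test machine reusing one cookie lands on the same backend, which is why the workarounds above either bypass the balancer or vary the client side.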

Should SSL terminate at the Nginx proxy buffer or at the Amazon Elastic Load Balancer?

I have two Ubuntu machines running Django, with Gunicorn as my Python HTTP WSGI server. I currently have an ELB sitting in front of these two machines.
Many sources claim I should add NGINX to my stack for proxy buffering. However, I don't know where Nginx should be placed or how it can be configured to point to the ELB, which in turn points to the app servers.
NGINX (proxy buffering, prevents DDoS attacks) -> ELB (load balances between the two app servers) -> my two Django/Gunicorn app servers
Is this setup appropriate? If so, how can I configure it?
NGINX sort of becomes a single point of failure there. Unless there is a reason to do otherwise, I would put the ELB in front of nginx and run nginx on both app servers (it could run on separate servers if needed).
The web server can also take care of static requests, which it would probably handle more efficiently than your app stack.
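As a rough sketch of that layout (the static path and Gunicorn port are assumptions), nginx on each app server could look something like:

    server {
        listen 80;

        # Serve static files directly, bypassing Django/Gunicorn.
        location /static/ {
            alias /srv/myproject/static/;   # assumed path
        }

        location / {
            proxy_set_header Host            $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass       http://127.0.0.1:8000;   # Gunicorn bound locally (assumed)
        }
    }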
Since the ELB is inherently scalable and fault tolerant, it is general practice to have it at the front. You can attach your web servers to the ELB. By putting Nginx in front of it you would be introducing a single point of failure.