I've been searching for a way to configure an nginx proxy server to work with websockets. I have already found a bunch of solutions for that, but those scripts patch nginx.conf on the instances, and my instances don't have nginx running; nginx runs on the balancer.
So my question is how to patch the nginx config on the balancer.
Your question is confusing, because you say you are using ELB but you want Nginx. You can't get websockets through Nginx with a normal ELB, and you probably don't need Nginx with ELB except in specific situations.
You have two choices:
1) Continue to use ELB and Elastic Beanstalk. The problem is that ELB doesn't support websockets at all. See this article. You'll need to stop using ELB as an HTTP proxy and start using it as a TCP proxy. The downside is that your app is now exposed to your servers going up and down. (With an HTTP proxy, each request can go to a different server. With a TCP proxy, the connection stays alive for the whole session, so when the server goes down, your client must 'deal with it.')
2) Run your own load balancer. Best practice is EIP + Nginx + HAProxy. This is quite a different question.
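If you do run your own nginx (option 2), a minimal websocket-capable proxy sketch looks like the following; the upstream name and port are assumptions, not taken from the question:

    # Sketch only: upstream name and port are placeholders.
    upstream app {
        server 127.0.0.1:5000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;
            # These two headers let nginx pass the websocket Upgrade handshake through.
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }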
I have my server application deployed in AWS with Beanstalk.
I'm using Beanstalk with Application Loadbalancer.
Beanstalk is very handy in auto-configuring everything for me and I like using it, but every Beanstalk instance currently contains NGINX to proxy requests. Since I already have a load balancer that routes requests to my server and is responsible for the SSL certificates, I don't see why I need NGINX, and I want to remove it from the configuration (or at least not use it between the load balancer and the application server).
Moreover, during my load testing under high load, NGINX causes me trouble (it takes a lot of CPU time and complains about worker_connections).
But I can't find any option to use Beanstalk with a load balancer and without NGINX.
I fixed my problem by configuring the load balancer in my EBS. My application was listening on port 5000 (Java), NGINX redirected from 80 to 5000, and the load balancer sent all requests to 80.
So I had the following configuration by default:
LB -> 80:NGINX -> 5000:Java server
I changed the process port in the LB from 80 to 5000, so the configuration now looks like this: LB -> 5000:Java server, and the LB forwards all requests directly to my service.
You can see the configuration details in the documentation (#processes paragraph).
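If you prefer to keep that setting in source control, the same change can be expressed in an .ebextensions file; a sketch, assuming an Application Load Balancer, the default process, and this thread's port 5000:

    # .ebextensions/process-port.config
    # Point the environment's default process straight at the app's port,
    # bypassing the on-instance NGINX proxy (5000 is this thread's example).
    option_settings:
      aws:elasticbeanstalk:environment:process:default:
        Port: '5000'
        Protocol: HTTP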
I have a Node application running on AWS. When moving into production, there is a very high chance that it will get a very high number of requests. I'm planning to host it using AWS ECS and there will be an AWS Application load balancer in front of the application.
When I looked up "How to deploy a Node application in production", I saw that everybody suggests putting Nginx in front of the Node application.
My doubt is: if we have the ALB in the architecture, do we need to add Nginx as well? Is there any advantage to using Nginx if we need to host the application for a million users?
It depends on how you are using NGINX. The Application Load Balancer surely brings a lot of features that can make NGINX redundant in your architecture, but it is not exactly as advanced as NGINX. For example, ALB only uses round-robin load balancing, while you can configure nginx for round-robin, least connections, etc. ALB does not have any caching capabilities, while nginx provides static content caching. ALB only does path-based routing, while nginx can route on request headers, cookies, or arguments, as well as the request URL.
For further reading and source: https://www.nginx.com/blog/aws-alb-vs-nginx-plus/
Note: one other important point in favor of nginx is that it is cloud agnostic, so if you plan to switch cloud providers, you can take your nginx settings with you.
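To illustrate the kind of configuration ALB can't express (and that travels with you between clouds), here is a hedged nginx sketch; the backend addresses, port, and cache path are placeholders invented for the example:

    # Example only: addresses, port, and cache path are placeholders.
    proxy_cache_path /var/cache/nginx keys_zone=static_cache:10m;

    upstream node_app {
        least_conn;                    # least-connections balancing
        server 10.0.1.10:3000;
        server 10.0.1.11:3000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://node_app;
            proxy_cache static_cache;  # static content caching
            proxy_cache_valid 200 10m;
        }
    }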
It depends on the rest of your architecture. If ALB can handle everything for you, you probably don't need nginx. Also, nginx has a learning curve if you are a first-time user.
I am currently configuring some web servers running Apache2 and a PHP-based web app. The servers run the same PHP codebase on the same system configuration and will be placed behind a load balancer on AWS. The LB accepts and terminates HTTPS connections and forwards them as HTTP traffic to the web servers, so in theory the event MPM should work and make sense.
Now, since the servers sit behind an LB, my question is: are the connections between the LB and the web servers kept alive ("keepalive") in this scenario? Also, do the TLS connections cause the event MPM to behave like the worker MPM, even though the HTTPS connections are terminated by the LB and forwarded as unencrypted HTTP traffic?
Ref: https://serverfault.com/questions/383526/how-do-i-select-which-apache-mpm-to-use?answertab=votes#tab-top
With the help of AWS support, I was able to find an answer to the question:
The AWS LB opens an unlimited number of connections to the servers behind it, so the Apache settings have to be configured so that the number of worker threads makes optimal use of the underlying system's resources. If you see that neither your servers' memory nor CPU load comes anywhere near capacity (even during a stress test), you might want to increase the number of worker threads/processes in the Apache config.
Also: if the LB terminates HTTPS connections and forwards them as HTTP traffic, the event MPM works as intended, and it is apparently also the optimal MPM for Apache behind an AWS LB, unless you use HTTPS between the LB and the servers. In that case, the worker MPM will do just fine.
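For reference, those worker and thread limits live in the event MPM's configuration; a sketch with placeholder numbers that would need tuning against your own stress tests, not recommendations:

    # /etc/apache2/mods-available/mpm_event.conf (values are examples only)
    <IfModule mpm_event_module>
        StartServers             2
        ServerLimit              8
        ThreadsPerChild         64
        MaxRequestWorkers      512   # must not exceed ServerLimit * ThreadsPerChild
        MaxConnectionsPerChild   0
    </IfModule>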
I have tried unsuccessfully to configure SSL for my project.
My AWS load balancer is configured correctly and accepts the certificate keys. I have configured the listeners to route both port 80 and port 443 traffic to port 80 on the instance.
I would imagine that no further modification is necessary on the instance (Nginx and Puma), since everything is routed to port 80 on the instance. I have seen examples where the certificate is installed on the instances, but I understand the load balancer is the SSL termination point, so this is not necessary.
When accessing via http://www.example.com everything works fine. However, accessing via https://www.example.com times out.
I would appreciate some help with the proper high-level setup.
Edit: I have not received any response to this question. I assume it is too general?
I would appreciate confirmation that my high-level reasoning is right: I should install the certificate on the load balancer only, and configure the load balancer to accept connections on port 443 BUT route everything internally to port 80 on the web server instances.
I just stumbled over this question as I had the same problem: all requests to https://myapp.com timed out and I could not figure out why. Here, in short, is how I achieved (forced) HTTPS in a Rails app on AWS:
My app:
Rails 5 with config.force_ssl = true enabled (in production.rb), so all connections coming in over HTTP get redirected to HTTPS by the Rails app. No need to set up complicated nginx rules. The same app used the gem 'rack-ssl-enforcer' when it was still on Rails 4.2.
Side note: AWS load balancers used to check the health of the instances with plain HTTP GET requests (today they support HTTPS). Therefore an exception rule had to be defined for the SSL enforcement. Rails 5: config.ssl_options = { redirect: { exclude: -> request { request.path =~ /health-check/ } } } (in production.rb), with a corresponding route to a controller in the Rails app.
Side note to the side note: in Rails 5, the initializer new_framework_defaults.rb already defines "ssl_options". Make sure to deactivate that before using the "ssl_options" rule in production.rb.
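Putting those two settings together, a minimal production.rb sketch (the /health-check/ path is just this answer's example, not a Rails default):

    # config/environments/production.rb
    Rails.application.configure do
      # Redirect all plain-HTTP traffic to HTTPS at the Rails layer.
      config.force_ssl = true

      # Let the load balancer's plain-HTTP health check through without
      # a redirect (the path is an example; match your own health route).
      config.ssl_options = {
        redirect: { exclude: ->(request) { request.path =~ /health-check/ } }
      }
    end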
AWS:
Elastic Beanstalk set up on AWS with a valid cert on the load balancer, using two listener rules:
HTTP 80 requests on the LB get directed to HTTP 80 on the instances
HTTPS 443 requests on the LB get directed to HTTP 80 on the instances (this is where the certificate is applied; see the sketch below)
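For an Elastic Beanstalk environment with a classic ELB, that HTTPS listener can also be declared in an .ebextensions file; a sketch, with a placeholder certificate ARN:

    # .ebextensions/https-listener.config (certificate ARN is a placeholder)
    option_settings:
      aws:elb:listener:443:
        ListenerProtocol: HTTPS
        InstancePort: 80
        InstanceProtocol: HTTP
        SSLCertificateId: arn:aws:acm:us-east-1:123456789012:certificate/example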
As you can see, the load balancer is the SSL termination point. All requests coming in over HTTP pass through the LB and are then redirected to HTTPS automatically by the Rails app.
The thing no one tells you
With all this in place, the HTTPS requests will still time out (here I spent days figuring out why). In the end it was an extremely simple issue: the security group of the load balancer (in the AWS console -> EC2 -> Security Groups) only accepted requests on port 80 (HTTP). Just activate port 443 (HTTPS) as well. It should then work (at least it did for me).
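For reference, the same ingress rule can be added from the AWS CLI; a sketch, where the security group ID is a placeholder for your load balancer's group:

    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 443 --cidr 0.0.0.0/0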
I don't know if you have solved your problem yet, but for whoever finds this question, here is what I did to get it working.
I've been reading all day and found a mix of two configurations that are working at the moment.
Basically you need to configure nginx to redirect to https, but some of the recommended configurations do nothing to the nginx config.
Basically I'm using this gist configuration:
https://gist.github.com/petelacey/e35c98f9a35063a89fa9
But from this configuration I added the command to restart the nginx server:
https://gist.github.com/KeithP/f8534c04d20c2b4e4b1d
My take on this is that by the time the eb deploy process copies the config files, nginx has already started(?), making those changes useless. Hence the need to restart it manually. If someone has a better approach, let us know.
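A hedged sketch of how that restart can be baked into the deploy itself via .ebextensions (the file name is arbitrary, and this is similar in spirit to, not copied from, the second gist):

    # .ebextensions/restart-nginx.config
    container_commands:
      01_restart_nginx:
        command: "service nginx restart"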
Michael Fehr's answer worked and should be the accepted answer. I had the same problem; adding config.force_ssl = true was what I was missing. One remark: you don't need to add the EBS configuration file that they say you have to add when using the load balancer. That can be misleading, and the docs do not spell it out.
I have two Ubuntu machines running Django, with Gunicorn as my Python HTTP WSGI server. I currently have an ELB sitting in front of these two machines.
Many sources claim I should add NGINX to my stack for proxy buffering. However, I don't know where Nginx should be placed, or how it should be configured to point to the ELB, which in turn points to the app servers.
NGINX (proxy buffering, prevents DDoS attacks) -------> ELB (load balances between the two app servers) ------> 2 Django/Gunicorn servers (my two app servers)
Is this setup appropriate? If so, how can I configure it?
NGINX sort of becomes a single point of failure there. Unless there is a reason to do otherwise, I would put the ELB in front and run nginx on both app servers (it could run on separate servers if needed).
The web server can also take care of static requests, which it will probably handle more efficiently than your app stack.
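A hedged sketch of the per-app-server nginx config in that layout; the gunicorn port and static path are assumptions, not taken from the question:

    # Example nginx vhost on each app server, sitting behind the ELB.
    server {
        listen 80;

        # Serve static files directly, bypassing Django/Gunicorn entirely.
        location /static/ {
            alias /srv/myproject/static/;   # placeholder path
        }

        location / {
            proxy_pass http://127.0.0.1:8000;   # local gunicorn (placeholder port)
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }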
Since the ELB is inherently scalable and fault tolerant, it is general practice to put it at the front. You can attach your web servers to the ELB. By adding Nginx on top of it, you would be introducing a single point of failure.