AWS ELB fails healthcheck with http ports other than 80 - amazon-web-services

We've been using AWS for a while and have set up many ELBs. Our situation is that we have multiple sites running on multiple servers: every IIS 7.5 site is deployed on each of 3 web servers. We use ports 80 and 443, with all site bindings set up correctly for their domains/subdomains, and we have an ELB for each site.
The problem is that each ELB is currently configured with a health check of HTTP:80/, so no ELB is really checking the health of its respective site.
What we'd like to do is set up each site to listen on a different extra port (e.g., 8082, 8083, etc.) and have each ELB's health check target that site's extra port (e.g., HTTP:8082/, HTTP:8083/, etc.). The ports are opened correctly on the firewalls and in the security groups, and we can hit each site on its respective extra port (e.g., http://web1.mysite.com:8082/).
AWS's documentation says you should be able to do what we're trying to do, but the instances' health checks don't pass. I've even gone so far as to define a listener for the respective port. Just to confirm: with the check set to HTTP:80/ the instance comes up "InService", but as soon as I change it to HTTP:8082/ it immediately goes out of service. This is driving me nuts, so any help would be greatly appreciated.

We determined that we can fix this by binding each site's extra port with a blank host name, and by ensuring that the "Default Site" is turned off (in our case it was). The binding host name had previously been set to the site's name (e.g., www.mysite.com), and the ELB's health check requests don't carry that host header, so they never matched the binding.
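For anyone scripting the same change, the health check target can be pointed at the extra port through the classic ELB API. A minimal boto3 sketch, assuming a hypothetical load balancer name (mysite-elb) and extra port 8082:

```python
import boto3

elb = boto3.client("elb")  # classic ELB API

# Point the health check at the site's extra binding instead of port 80.
# "mysite-elb" and 8082 are placeholder values.
elb.configure_health_check(
    LoadBalancerName="mysite-elb",
    HealthCheck={
        "Target": "HTTP:8082/",
        "Interval": 30,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 10,
    },
)

# See whether the instances pass the new check.
print(elb.describe_instance_health(LoadBalancerName="mysite-elb")["InstanceStates"])
```

Note that the check will still fail unless the extra port's binding accepts requests without the site's host name, as described above.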

Related

HTTP ERROR 408 after setting up Kubernetes along with AWS ELB and NGINX Ingress

I am finding it extremely hard to debug this issue. I have a Kubernetes cluster set up along with services for the pods; they are connected to the NGINX ingress, which sits behind an AWS Classic ELB, which in turn is connected to the AWS Route 53 DNS my domain name points to. Everything works fine on that front, but I am facing an issue where my domains do not behave the way I would like them to.
My domains in the NGINX ingress rules are connected to a service that returns an "alive" page when hit with a domain; however, when I do that I get this page instead.
Please help me figure out what to do to resolve this quickly, thanks in advance!
Talk to you soon
When you run web servers behind an ELB, you should know that the ELB health checks generate a lot of 408 responses.
Possible solutions:
1. Set RequestReadTimeout header=0 body=0
This disables the 408 responses if a request times out.
2. Disable logging for the ELB IP addresses with:
SetEnvIf Remote_Addr "10\.0\.0\.5" exclude_from_log
CustomLog logs/access_log common env=!exclude_from_log
3. Set up a different port for the ELB health check.
4. Raise your request timeout above 60 seconds.
5. Make sure that the idle timeout configured on the Elastic Load Balancer is slightly lower than the idle timeout configured for the Apache httpd running on each of the instances (a boto3 sketch of this follows below).
Take a look: amazon-aws-http-408, haproxy-elb, 408-http-elb.
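For item 5, the idle timeout of a classic ELB can also be lowered through the API. A minimal boto3 sketch, assuming a hypothetical load balancer name and an Apache timeout of 60 seconds on the instances:

```python
import boto3

elb = boto3.client("elb")  # classic ELB API

# Keep the ELB idle timeout (55 s here) just below the Apache httpd timeout
# (assumed to be 60 s) so the backend never closes an idle connection before
# the load balancer does. "my-elb" is a placeholder name.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-elb",
    LoadBalancerAttributes={"ConnectionSettings": {"IdleTimeout": 55}},
)
```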

How to make a specific port publicly available within AWS

I have my React website hosted in AWS over HTTPS using a Classic Load Balancer and CloudFront, but I now need to have port 1234 opened as well. When I currently browse my domain with port 1234, the page cannot be displayed. The reason I want port 1234 opened is that this is where my Node.js web server runs, which React communicates with.
I tried adding port 1234 to my load balancer listener settings, although it made no difference. Noticeably, the load balancer health check panel seems to hold only one value, which is currently HTTP:80/index.html. I assume the load balancer can listen on both port 80 and port 1234 (even though it can only perform a health check on one port number)?
Do I need to use action groups or something else to open up the port? Please help, any advice much appreciated.
Many thanks,
Load balancer settings
Infrastructure
I am using the following
EC2 (free tier) with the two code projects installed (React website and node server on the same machine in different directories)
Certificate created (using Certificate Manager)
I have created a CloudFront distribution and verified it using email. My certificate was selected in CloudFront as the custom SSL certificate
I have a Classic Load Balancer (the instance points to my only EC2) and the status is InService. When I visit the load balancer's DNS name I see my React website. The load balancer listens on HTTP port 80; I've added port 1234, but this didn't help
Note:
Please note this project is to learn AWS, React and Node.js, so if things look strange please point them out
[Screenshots: EC2 instance, security group, load balancer, target group, and an attempt to register a target group]
Thank you for having clarified your architecture.
I would keep CloudFront out of the game for now and make sure your setup works with just the load balancer. Once everything is configured correctly, you can easily add CloudFront as a next step. In general, for all things in IT, it is easier to build a simple system that works and add complexity one step at a time than to debug a complex system that does not work.
The idea is to have an Application Load Balancer with two listeners, one for the web (TCP 80) and one for the API (TCP 1234). The ALB will have two target groups (one for each port on your EC2 instance) and you will create listener rules to forward the correct port to the correct target group. Please read "Application Load Balancer components" to understand how ALBs work.
Here are a couple of things to check:
be sure you have two listeners and two target groups on your Application Load Balancer
the load balancer must be in a security group allowing TCP 80 and TCP 1234 from anywhere (0.0.0.0/0) (let's say SG-001)
the EC2 instance must be in a security group allowing TCP connections on port 1234 (for the API) and 80 (for the web site) only from source SG-001 (just the load balancer)
After having written all this, I realise you are using a Classic Load Balancer. This should work as well; just be sure your EC2 instance has the correct security group (two rules, one for each port).
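If you do go the ALB route, here is a minimal boto3 sketch of the two-listener / two-target-group setup described above (the ALB ARN, VPC ID, instance ID and target group names are placeholder values, not anything from your account):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder identifiers -- replace with your own resources.
ALB_ARN = "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef"
VPC_ID = "vpc-0123456789abcdef0"
INSTANCE_ID = "i-0123456789abcdef0"

def make_target_group(name, port):
    """Create a target group for one port and register the EC2 instance in it."""
    tg = elbv2.create_target_group(
        Name=name,
        Protocol="HTTP",
        Port=port,
        VpcId=VPC_ID,
        HealthCheckProtocol="HTTP",
        HealthCheckPath="/",
    )["TargetGroups"][0]
    elbv2.register_targets(
        TargetGroupArn=tg["TargetGroupArn"],
        Targets=[{"Id": INSTANCE_ID, "Port": port}],
    )
    return tg["TargetGroupArn"]

web_tg = make_target_group("react-web", 80)    # React site
api_tg = make_target_group("node-api", 1234)   # Node.js API

# One listener per port, each forwarding to its own target group.
for port, tg_arn in [(80, web_tg), (1234, api_tg)]:
    elbv2.create_listener(
        LoadBalancerArn=ALB_ARN,
        Protocol="HTTP",
        Port=port,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```

The same separation of concerns applies however you create the resources: one listener and one target group per port, plus the two security group rules listed above.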

VM instance group configured to listen on ports 80 and 8080

I have configured my VM in such a way that I have two applications running on one VM.
The first app listens on ip:80
The second app listens on ip:8080
I have enabled the ports on the VM instance group like this.
I have my load balancer configured with two frontend rules like this.
I want to map ip1:80 to my port 80 application and ip2:8080 to my port 8080 application.
When I try accessing my application using the load balancer's IP address, it always shows me the port 8080 application.
I have two backend services running.
Help me here, Google team. I'm a newbie.
If you want to use IP addresses rather than URLs/domain(s) to reach your web applications, then URL maps cannot help implement your design, as a URL map forwards the request to the correct backend service using the host value (example.com) and path value (/path) in the destination URL.
That being said, you can add one more target proxy to your LB resources to route incoming requests directly to the desired backend services. This will allow you to keep your minimum number of instances at one VM.
For more information, visit this article.
I had a similar problem and I had to add a second backend.
So I have two backends: one for port 80, the other for 8080. And I have one managed instance group.

Enabling SSL on Rails 4 with AWS Load Balancer, Nginx and Puma

I have tried unsuccessfully to configure SSL for my project.
My AWS load balancer is configured correctly and accepts the certificate keys. I have configured the listeners to route both port 80 traffic and port 443 traffic to my port 80 on the instance.
I would imagine that no further modification is necessary on the instance (Nginx and Puma) since everything is routed to port 80 on the instance. I have seen examples where the certificate is installed on the instances, but I understand the load balancer is the SSL termination point, so this should not be necessary.
When accessing via http://www.example.com everything works fine. However, accessing via https://www.example.com times out.
I would appreciate some help with the proper high-level setup.
Edit: I have not received any response to this question. I assume it is too general?
I would appreciate confirmation that the high-level reasoning I am using is the right one: install the certificate on the load balancer only and configure the load balancer to accept connections on port 443, BUT route everything internally to port 80 on the web server instances.
I just stumbled over this question as I had the same problem: all requests to https://myapp.com timed out and I could not figure out why. Here, in short, is how I achieved (forced) HTTPS in a Rails app on AWS:
My app:
Rails 5 with config.force_ssl = true enabled (production.rb), so all connections coming in over HTTP get re-routed to HTTPS by the Rails app. No need to set up complicated nginx rules. The same app used the gem 'rack-ssl-enforcer' when it was on Rails 4.2.
Side note: AWS load balancers used to issue plain HTTP GET requests to check the health of the instances (today they also support HTTPS), so an exception rule had to be defined for the SSL enforcement. Rails 5: config.ssl_options = { redirect: { exclude: -> request { request.path =~ /health-check/ } } } (in production.rb), with a corresponding route to a controller in the Rails app.
Side note to side note: In Rails 5, the initializer new_framework_defaults.rb has already defined "ssl_options". Make sure to deactivate this before using the "ssl_options" rule in production.rb.
AWS:
Elastic Beanstalk set up on AWS with a valid certificate on the load balancer, using two listener rules:
HTTP 80 requests on LB get directed to HTTP 80 on the instances
HTTPS 443 requests on LB get directed to HTTP 80 on the instances (here the certificate needs to be applied)
You see that the Load Balancer is the termination point of SSL. All requests coming from HTTP will go through the LB and will then be redirected by the Rails App to HTTPS automatically.
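For reference, those two listener rules can also be created on a Classic Load Balancer programmatically. A minimal boto3 sketch, with a hypothetical ELB name and ACM certificate ARN:

```python
import boto3

elb = boto3.client("elb")  # classic ELB API

# "my-rails-elb" and the certificate ARN below are placeholder values.
elb.create_load_balancer_listeners(
    LoadBalancerName="my-rails-elb",
    Listeners=[
        # HTTP 80 on the LB -> HTTP 80 on the instances
        {"Protocol": "HTTP", "LoadBalancerPort": 80,
         "InstanceProtocol": "HTTP", "InstancePort": 80},
        # HTTPS 443 on the LB (SSL terminates here) -> HTTP 80 on the instances
        {"Protocol": "HTTPS", "LoadBalancerPort": 443,
         "InstanceProtocol": "HTTP", "InstancePort": 80,
         "SSLCertificateId": "arn:aws:acm:us-east-1:123456789012:certificate/abcd-ef12-3456"},
    ],
)
```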
The thing no one tells you
With this in place, the HTTPS requests will still time out (here I spent days figuring out why). In the end it was an extremely simple issue: the security group of the load balancer (in the AWS console -> EC2 -> Security Groups) only accepted requests on port 80 (HTTP). Just allow port 443 (HTTPS) as well; it should then work (at least it did for me).
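The missing rule can be added in the console as described, or through the API. A minimal boto3 sketch, with a placeholder security group ID for the load balancer:

```python
import boto3

ec2 = boto3.client("ec2")

# "sg-0123456789abcdef0" is a placeholder -- use the LB's security group ID.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
```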
I don't know if you've solved your problem, but for whoever may find this question, here is what I did to get it working.
I've been reading all day and found a mix of two configurations that are working at the moment.
Basically you need to configure nginx to redirect to HTTPS, but some of the recommended configurations do nothing to the nginx config.
Basically I'm using this gist configuration:
https://gist.github.com/petelacey/e35c98f9a35063a89fa9
But from this configuration I added the command to restart the nginx server:
https://gist.github.com/KeithP/f8534c04d20c2b4e4b1d
My take on this is that by the time the eb deploy process copies the config files, nginx has already started(?), making those changes useless. Hence the need to restart it manually; if someone has a better approach, let us know.
Michael Fehr's answer worked and should be the accepted answer. I had the same problem; adding config.force_ssl = true is what I had missed. One remark: you don't need to add the Elastic Beanstalk configuration file they say you have to add if you are using the load balancer. That can be misleading, and they do not specify it in the docs.

Amazon EC2 Elastic IP redirecting not working

I've registered a domain with bigrock.in
Created an EC2 instance in AWS
Created an Elastic IP
Registered with Route 53 and gave my domain name
Changed the name servers in bigrock to the ones provided by Route 53
SSH'd to the EC2 instance using the Elastic IP
Ran the Node.js app with forever
with the following environment variable:
export ROOT_URL="www.domain.com"
During the Route 53 setup, I created an A record pointing the www subdomain to the Elastic IP
But I'm not seeing anything at domain.com or at the Elastic IP xxx.xxx.xxx.xxx
Did I miss any steps? Is there anything wrong with what I did, or do I need to do anything else to make this work?
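For reference, the www A record described above can also be created (or corrected) through the Route 53 API. A minimal boto3 sketch; the hosted zone ID, record name and IP below are placeholders:

```python
import boto3

route53 = boto3.client("route53")

# UPSERT creates the record if it is missing or overwrites it if it exists.
# Hosted zone ID, record name and IP address are placeholder values.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.domain.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```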
EDIT
I haven't added any A or CNAME records in bigrock, just changed the name servers to the ones provided by Route 53.
Edit 2
Those are my security group outbound details; my app is running on port 80.
Are those settings correct?
EDIT 3
My inbound rules
You've got a rule allowing all traffic from anywhere in your INBOUND security group, so it's not that (make sure you fix this later once you get it working; as it is, it's a bit of a security hole).
The next thing I would normally say is that it's a DNS problem, but since you say you've tried going to the EIP as well as the domain name, it's not that either.
Next likely candidates are:
The server isn't listening: it may be that it hasn't started properly, so try checking the logs. Or the machine's firewall is blocking connections (try turning it off, but only for a VERY short time; it's a huge risk in combination with your security group settings).
Or your server is not listening on port 80, e.g. it might be listening on 8080 or 443. Check the server config; by default browsers assume port 80 for HTTP, so if it's not listening on that port you will have to specify the port in the address bar as well, e.g. http://example.com:8080.
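If you want a quick way to test that last point from your own machine, a short Python probe like the one below (the address is a placeholder for your Elastic IP) tells you whether anything is accepting TCP connections on a given port:

```python
import socket

HOST, PORT = "203.0.113.10", 80  # replace with your Elastic IP and the port to test

try:
    # Attempt a plain TCP connection; success means *something* is listening.
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"Something is listening on {HOST}:{PORT}")
except OSError as exc:
    print(f"Could not connect to {HOST}:{PORT}: {exc}")
```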