Site becomes unavailable when one instance under the load balancer becomes unhealthy - amazon-web-services

We are using Auto Scaling and Elastic Load Balancing from Amazon AWS. We are running three Linux (Ubuntu) servers under a load balancer. When one of the three instances becomes unhealthy (its status check fails), our site becomes unavailable, even though the other two instances are healthy at the time.

We figured out the actual cause of this problem: Apache was consuming all of the instance's memory. At some point memory was exhausted on the machine, so Apache was no longer able to serve responses.
The following is another question where the problem is described:
apache2 processes stuck in sending reply - W
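A quick way to check whether Apache workers are eating the instance's memory, and to size a worker cap before RAM runs out (a diagnostic sketch assuming Ubuntu's apache2 process name and the prefork MPM; adjust for your setup):

    # Average resident memory per apache2 worker, in MB
    ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%d workers, avg %.1f MB each\n", n, sum/n/1024}'

    # Memory still free on the instance, in MB
    free -m

    # Rough rule of thumb: MaxRequestWorkers ~ (RAM you can give Apache) / (avg MB per worker),
    # set in /etc/apache2/mods-available/mpm_prefork.conf when the prefork MPM is in use

Capping MaxRequestWorkers this way keeps Apache from swapping the instance to death, so the health check can keep passing.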

Related

Wordpress running on EC2 t3.small becomes unavailable (ELB Error 504) after X amount of time, needs rebooting

I have a problem with my Amazon EC2 instances (one that did not happen when I was using DigitalOcean).
I have several EC2 instances that I manage. My personal EC2 instance runs about 5 WordPress sites on a t2.micro, and since the traffic is not high, page load speed is fine.
I also have another two instances for one of my clients: a t2.micro (running only one WordPress site) and a t3a.micro (running 4 WordPress sites). The issue occurs on all 3 instances (mine and both of my client's).
I have a CloudWatch alarm that notifies me by email when a 504 error happens. Once I get the alarm, the website becomes unavailable (Cloudflare shows me a 504 error), but I can still get in via SSH or Webmin. I run service nginx status and all seems to be fine, and the same goes for service php7.2-fpm. I run pkill nginx && pkill php* and then service nginx start && service php7.2-fpm start successfully, but when I try to open the site, the 504 error is still there.
To test, I decided to install and configure Apache both with and without PHP-FPM enabled: same problem. The instance runs well and the websites are fast, but after X amount of hours it becomes inaccessible via the web and the only solution is rebooting...
What's the only thing that solves the issue? Rebooting the instance... After it boots, the websites are available again. Please note that I moved from DigitalOcean to AWS because it is more useful for me, but I can't understand why the problem happens here and not there, since I had a similar instance configured very similarly...
All of the instances have this setup:
OS: Ubuntu 18.04
Types: Two t2.micro and one t3a.micro
ELB: Enabled
Security Groups: only allow ports 80 and 443 from all sources.
Database: In RDS, not on the same instance.
I can provide logs for anything you might ask about, but I have reviewed all of the Nginx and PHP-FPM logs and can't see any anomalies. The same goes for syslog and kern.log; I can provide them if it helps.
Hope you can give me a hand. Thanks for your advice!
EDIT:
I found the origin of the issue. The problem wasn't in EC2. All my headache came from the RDS instance having only one Security Group attached, which allowed access from my IP (for remote management of the databases) and from the public IPs of the EC2 instances that run WordPress. It turned out I also needed to whitelist the private IPs of those EC2 instances... a really noob mistake, but that was the solution.
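For reference, a hedged sketch of what that security-group change can look like with the AWS CLI (the group IDs, port, and private IP below are placeholders; referencing the EC2 instances' security group with --source-group is usually easier than whitelisting private IPs one by one):

    # Allow MySQL traffic to the RDS security group from one EC2 private IP (placeholder values)
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 3306 \
        --cidr 172.31.10.20/32

    # Or allow the whole security group that the WordPress EC2 instances belong to
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 3306 \
        --source-group sg-0fedcba9876543210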

How to prevent the Google Cloud load balancer from forwarding traffic to a newly created autoscaled instance before it is ready?

I need to host a PHP Laravel application on Google Cloud Compute Engine with autoscaling and load balancing. I set up and configured the following:
Created an instance template with a startup script that installs apache2 and PHP, clones my project's git repository, configures the Cloud SQL proxy, and applies all the settings required to run this Laravel project.
Created an instance group with an autoscaling rule: when CPU reaches a certain percentage, it starts creating additional instances.
Created a Cloud SQL instance.
Created a Storage bucket; all of the application's public content, such as images, is uploaded to the bucket and served from there.
Created a load balancer, assigned it a public IP, and configured its frontend and backend correctly.
With the above configuration everything works fine: when an instance reaches the defined CPU percentage, autoscaling starts creating another instance and the load balancer starts routing traffic to it.
The issue is that configuring and setting up my environment (the instance template's startup script) takes about 20-30 minutes before the newly created instance is ready to serve content. But as soon as the load balancer detects that the new machine is up and running, it starts routing traffic to the new VM instance, even though it is not yet ready to serve content.
As a result, when the load balancer routes traffic to a machine that is not ready, it obviously returns 404 and other errors.
How can I prevent this from happening? Is there any way for an instance created by the autoscaling service to signal the load balancer once it is ready to serve content, so that only then does the load balancer route traffic to the newly created instance?
How to prevent the Google Cloud load balancer from forwarding traffic to a newly created autoscaled instance before it is ready?
Google load balancers use the Cool Down parameter to determine how long to wait for a new instance to come online and be 100% available. However, this means that if your instance is not available within that time, errors will be returned.
The above answers your question. However, taking 20 or 30 minutes for a new instance to come online defeats a lot of the benefits of autoscaling. You want instances to come online immediately.
Best practice is to create an instance, configure it with all the required software, applications, etc., and then create an image of that instance. Then, in your template, specify this image as your baseline image. Now your instances will not have to wait for software downloads, installs, configuration, etc. All you need to do is run a script that does the final configuration, if needed, to bring an instance online. Your goal should be 30-180 seconds from launch to being online and serving for a new instance. Rethink / redesign anything that takes longer than 180 seconds. This will also save you money.
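A hedged gcloud sketch of that baseline-image workflow (the image, disk, and template names, the zone, and the machine type are placeholders, and flags may vary slightly between gcloud versions):

    # 1) After configuring a VM by hand (apache2, PHP, the Laravel code, etc.), image its boot disk
    gcloud compute images create laravel-baseline-v1 \
        --source-disk=laravel-config-vm \
        --source-disk-zone=us-central1-a

    # 2) Point the instance template at that image so new instances boot already configured
    gcloud compute instance-templates create laravel-template-v1 \
        --image=laravel-baseline-v1 \
        --machine-type=e2-small \
        --metadata-from-file=startup-script=final-config.sh   # only light, last-mile configuration here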
John Hanley's answer is pretty good; I'm just completing it a bit.
You should take a look at Packer to create your preconfigured Google images; it will help when you need to add new configuration or apply updates.
The cooldown is a useful knob, but in your case you can't really be sure that your installation won't sometimes take a bit more time, for example due to updates: since you should run apt-get update && apt-get upgrade at instance startup to stay up to date, it will only take more and more time...
Load balancers should normally have a health check configured and should not route traffic until an instance is detected as healthy. In your case, since you have apache2 installed, I suppose you have a health check on port 80 or 443 (depending on your configuration) against a /healthz path.
A way to use the health check correctly would be to create a specific vhost for it and set a fake host name in the health check, say health.test. That gives you a vhost listening for health.test and returning a 200 response on the /healthz path.
This way you don't have to change your existing configuration: just activate the health vhost last, so the load balancer doesn't start routing traffic before the server is really up...
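A minimal sketch of that approach, assuming apache2 on the instance and an HTTP health check on the load balancer (the host name health.test, the paths, and the health-check name are made up for illustration):

    # On the instance, as the LAST step of the startup script: a vhost that answers the
    # health check's fake host name with a 200 on /healthz
    printf '%s\n' \
        '<VirtualHost *:80>' \
        '    ServerName health.test' \
        '    DocumentRoot /var/www/health' \
        '</VirtualHost>' > /etc/apache2/sites-available/health.conf
    mkdir -p /var/www/health/healthz
    echo OK > /var/www/health/healthz/index.html
    a2ensite health.conf && systemctl reload apache2

    # On the load balancer side, an HTTP health check that sends that host header
    gcloud compute health-checks create http laravel-hc \
        --host=health.test --port=80 --request-path=/healthz

Because the health vhost is only enabled at the very end of the startup script, the backend stays unhealthy (and receives no traffic) until everything else is in place.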

With AWS I removed elastic IP and can no longer access the instance

I have a load balancer and an EC2 instance with AWS. I had problems with e-mail restrictions and was recommended to use an Elastic IP. I then read somewhere that you can't use an Elastic IP with a load balancer, so I removed the Elastic IP. I can no longer access my instance, even after rebooting it and waiting 2 hours. I can ping it (after enabling ICMP in the network security settings) but I can't SSH in or reach the web server. All the network settings remain, including the rules allowing TCP for HTTP and SSH. Does anyone know what has happened to make ports 80 and 22 no longer accessible? This is a real nightmare for me: I ran a bit of a marketing campaign, got increased traffic, noticed emails weren't getting sent, and then in an attempt to fix that I've broken the server completely, so the website is down at the worst possible time :(
I fixed it all up. This isn't a direct solution to the problem, more of a workaround. I couldn't connect to that server no matter what, so I created a new instance and that worked. It was as if the Linux server itself was corrupt, not the AWS settings. I detached the volume from the old instance and attached it as a secondary volume on the new instance. When I logged into the new instance I was able to mount the secondary volume as a new drive, and I just copied the files over that way. I don't have a bloated server, so this wasn't really a big deal to pull off. Anyway, if you can't log in to a server anymore, you can always mount its volume on a new instance and access it via the filesystem.
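For anyone in the same spot, a hedged sketch of that rescue workflow with the AWS CLI (the volume and instance IDs are placeholders, and on the new instance the device may show up as /dev/xvdf or /dev/nvme1n1 depending on instance type):

    # Detach the volume from the broken instance and attach it to a healthy one
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0fedcba9876543210 --device /dev/sdf

    # On the new instance: find the partition, mount it read-only, and copy the files off
    lsblk
    sudo mkdir -p /mnt/oldroot
    sudo mount -o ro /dev/xvdf1 /mnt/oldroot
    sudo rsync -a /mnt/oldroot/var/www/ /var/www/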

Elastic Beanstalk reports 5xx errors even though instances are in perfect health

I need to set up an API application for gathering event data to be used in a recommendation engine. This is my setup:
Elastic Beanstalk env with a load balancer and autoscaling group.
I have 2x t2.medium instances running behind a load balancer.
The Elastic Beanstalk configuration is 64bit Amazon Linux 2016.03 v2.1.1 running Tomcat 8 Java 8.
Additionally, I have 8x t2.micro instances that I use for load testing the API, sending thousands of requests/sec for it to handle.
I'm using Locust (http://locust.io/) as my load testing tool.
Each t2.micro instance running Locust can send up to about 500 req/sec.
Everything works fine while the request rate is below 1000, maybe 1200 req/sec. Once over that, my load balancer reports that some of the instances behind it are returning 5xx errors (attached). I've also tried with 4 instances behind the load balancer, and although things start out well at up to 3000 req/sec, soon after, the Elastic Beanstalk health tool and Locust both report 503s and 504s, while all of the instances are in perfect health according to the actual numbers in the Elastic Beanstalk Health Overview, showing only 10%-20% CPU utilization.
Is there something I'm missing in configuring the environment? It seems like no matter how many machines I have behind the load balancer, the environment handles no more than 1000-2000 requests per second.
EDIT:
Now I know for sure that it's the ELB that is causing the problems, not the instances.
I ran a load test with simulated users. Each user sends about 1 req/sec and the load ramps up by 10 users/sec to 4000 users, which should equal about 4000 req/sec. Still, it doesn't seem to handle any request rate over 3.5k req/sec (attachment1).
As you can see from attachment2, the 4 instances behind the load balancer are in perfect health, but I still keep getting 503 errors. It's the load balancer itself causing the problems. Look how SurgeQueueLength and SpilloverCount increase rapidly at some point (attachment3). I'm trying to figure out why.
I also completely removed the load balancer and tested with just one instance alone; it can handle up to about 3k req/sec (attachment4 and attachment5), so it's definitely the load balancer.
Maybe I'm missing some crucial default limit that load balancers have, like the surge queue size of 1024? What is a normal request rate for one load balancer? Should I be adding more load balancers? Could it be related to availability zones, e.g. ELB nodes in one zone trying to route to instances in a different zone?
attachment1 - attachment5: (screenshots referenced above; images not included)
UPDATE:
Cross zone load balancing is enabled
UPDATE:
maybe this helps more:
The message says that "9.8% of the requests to the ELB are failing with HTTP 5xx (6 minutes ago)". This does not mean that your instances are returning HTTP 5xx responses; the requests are failing at the ELB itself. This can happen when your backend instances are at capacity (e.g. their connections are saturated and they are rejecting new connections from the ELB).
Your requests are spilling over at the ELB. They never make it to the instances. If they were failing at the EC2 instances, then the cause would be different and the data for the environment would match the data for the instances.
Also note that the cause says this was the state "6 minutes ago". Elastic Beanstalk has multiple data sources: one is the data coming from the instances, which shows the requests per second and HTTP status codes in the table shown. Another data source is the CloudWatch metrics for your ELB. Since CloudWatch metrics for ELB have one-minute resolution, this data is slightly delayed, and the cause tells you how old the information is.
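If you want to watch those spillover metrics directly instead of through the Beanstalk console, a hedged AWS CLI sketch (the load balancer name and time window are placeholders; AWS/ELB is the namespace for classic load balancers):

    # Surge queue depth per minute for the load balancer
    aws cloudwatch get-metric-statistics \
        --namespace AWS/ELB --metric-name SurgeQueueLength \
        --dimensions Name=LoadBalancerName,Value=awseb-my-env-elb \
        --statistics Maximum --period 60 \
        --start-time 2016-06-01T12:00:00Z --end-time 2016-06-01T13:00:00Z

    # Requests rejected because the surge queue (max 1024 entries) was full
    aws cloudwatch get-metric-statistics \
        --namespace AWS/ELB --metric-name SpilloverCount \
        --dimensions Name=LoadBalancerName,Value=awseb-my-env-elb \
        --statistics Sum --period 60 \
        --start-time 2016-06-01T12:00:00Z --end-time 2016-06-01T13:00:00Z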

Load balancer for php application

Questions about load balancers if you have time.
So I've been using AWS for some time now. Super basic instances, using them to do some tasks whenever I needed something done.
I have a task that needs to be load balanced now. It's not a public service though. It's pretty much a giant cron job that I don't want running on the same servers as my website.
I set up an AWS load balancer, but it doesn't do what I expected it to do.
It gets stuck on one server and doesn't load balance at all. I've read why it does this, and that's all fine and well, but I need it to be a proper round-robin load balancer.
edit:
I've set up the instances in different zones, but no matter how many instances I add to the ELB, it just uses one. If I take that instance down, it switches to a different one, so I know the ELB is working. But I would really like it to always rotate across the instances, under every circumstance.
I know there are alternatives. Here's my question(s):
Would a custom PHP load balancer be an OK option for now?
i.e., have a list of servers and have PHP randomly select an EC2 instance. It wouldn't be scalable at all, but at least I could set it up in 2 minutes and it would work for now (a rough sketch of this idea appears after the structure below).
or
Should I take the time to learn how HAProxy works, and set that up in place of the AWS ELB?
or
Or am I doing it wrong, and AWS's ELB does do round-robin and I just have something configured incorrectly?
edit:
Structure:
1) Web server finds a task to do.
2) If it's too large, it sends the task off to AWS (to the load balancer).
3) Do the job on EC2
4) Report back via curl to an API
5) Rinse and repeat
Everything works great. But because the connection always comes from my server (one IP), it gets stuck to a single EC2 machine.
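For what it's worth, a rough shell sketch of the "pick a worker at random" idea from the question (the worker IPs and the /run-task endpoint are hypothetical, and it is not health-aware, which is exactly the gap the ELB or HAProxy would normally fill):

    #!/usr/bin/env bash
    # Hand the job to a randomly chosen worker instead of going through the ELB
    WORKERS=("10.0.1.10" "10.0.1.11" "10.0.1.12")   # hypothetical private IPs of the EC2 workers
    TARGET=${WORKERS[RANDOM % ${#WORKERS[@]}]}
    curl -s -X POST "http://${TARGET}/run-task" --data "task_id=${1}"   # worker reports back via the API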
ELB works well for sites whose load increases gradually. If you are expecting an uncommon and sudden increase in load, you can ask AWS to pre-warm it for you.
I can tell you that I have used ELB in different scenarios and it has always worked well for me. Since you didn't provide much information about your architecture, I would bet that ELB can work for you. Regarding the case where all connections are hitting only one server, I would ask:
1) Did you check the ELB to see how many instances are behind it?
2) Are all of the instances behind the ELB alive and passing health checks?
3) Are you accessing your application through the ELB DNS?
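A hedged AWS CLI sketch for checking the first two points on a classic ELB (the load balancer name is a placeholder):

    # Which instances are registered, and which are passing health checks
    aws elb describe-instance-health --load-balancer-name my-elb

    # The DNS name you should be hitting (rather than any single instance's address)
    aws elb describe-load-balancers --load-balancer-names my-elb \
        --query 'LoadBalancerDescriptions[0].DNSName'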
Anyway, here is an excerpt from an excellent article that makes a very good comparison between ELB and HAProxy: http://harish11g.blogspot.com.br/2012/11/amazon-elb-vs-haproxy-ec2-analysis.html
ELB provides Round Robin and Session Sticky algorithms based on EC2 instance health status. HAProxy provides a variety of algorithms like Round Robin, Static-RR, Least connection, source, uri, url_param, etc.
Hope this helps.
This point comes as a surprise to many users of Amazon ELB. Amazon ELB behaves a little strangely when incoming traffic originates from a single or specific IP range: it does not do round robin efficiently and instead sticks the requests. Under such conditions, Amazon ELB starts favoring a single EC2 instance, or EC2 instances in a single Availability Zone in Multi-AZ deployments. For example: if you have Application A (a customer company) and Application B, and Application B is deployed inside AWS infrastructure with an ELB front end, all the traffic generated from Application A (a single host) is sent to Application B in AWS. In this case the ELB of Application B will not efficiently round robin the traffic to the Web/App EC2 instances deployed under it. This is because the entire incoming traffic from Application A comes from a single firewall/NAT or a specific IP range of servers, and ELB will start unevenly sticking the requests to a single EC2 instance or to EC2 instances in a single AZ.
Note: Users usually encounter this during load tests, so it is ideal to load test AWS infrastructure from multiple distributed agents.
More info at point 9 in the following article: http://harish11g.blogspot.in/2012/07/aws-elastic-load-balancing-elb-amazon.html
HAProxy is not hard to learn and is tremendously lightweight yet flexible. I actually use HAProxy behind ELB for the best of both worlds -- the hardened, managed, hands-off reliability of ELB facing the Internet and unwrapping SSL, and the flexible configuration of HAProxy to let me fine-tune how things hit my servers. I've never lost an HAProxy instance yet, but if I do, ELB will just take that one out of rotation... as I have seen happen when the back-end servers all became inaccessible, which (because of the way it's configured) makes ELB think the HAProxy is unhealthy, but that's by design in my setup.