EC2 Auto Scaling

I have one EC2 instance running a Tomcat service. I know how to configure auto scaling for when CPU usage goes up, but I'm not sure how to configure auto scaling to launch a new instance when my Tomcat service goes down even though the EC2 instance is up. Also, how do I configure auto scaling for when Tomcat is hung even though the Tomcat process is up and running?
If this is not possible with EC2 Auto Scaling, is it possible with ELB and Elastic Beanstalk?

If you go to the Auto Scaling page in the web console and click Edit, you can choose either the EC2 or the ELB health check type. The EC2 check monitors instance-level status; the ELB health check can be used to monitor the server's response. As the name implies, the Auto Scaling health status is then controlled by the response given to the load balancer. This can range from a TCP check on port 80 that only confirms the server is there, listening, and responding, up to a custom HTTP check against a page you define. For example, you could point it at hostname/myserverstatus and have a script at that page check server status, database availability, etc., then return either a success or an error. See http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-add-elb-healthcheck.html
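As a rough illustration, switching an Auto Scaling group from EC2 to ELB health checks can also be scripted with boto3; the group name and grace period below are placeholders:

import boto3

autoscaling = boto3.client('autoscaling')

# Trust the load balancer's health check instead of the default
# EC2 instance status check.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName='my-asg',
    HealthCheckType='ELB',
    HealthCheckGracePeriod=300,  # seconds to wait before checking a new instance
)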
Good Luck!

There are standard Unix tools that do that for you. Upstart will watch your server process and restart it if it goes down; I don't know whether it helps with a hung process. If you run on Beanstalk, you can set up a call that the load balancer makes to check whether your app is responsive, and it can then message you to let you know there is a problem. You can probably also set it up to reboot the box or restart the process.
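A minimal sketch of such an Upstart job, assuming Tomcat 7 installed under /usr/share/tomcat7 (paths are placeholders):

# /etc/init/tomcat.conf
description "Tomcat service"
start on runlevel [2345]
stop on runlevel [016]
# Restart the process automatically if it dies.
respawn
respawn limit 10 5
exec /usr/share/tomcat7/bin/catalina.sh run

Note that respawn only covers a dead process; detecting a hung-but-running Tomcat still needs an HTTP-level check like the ELB health check described above.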

Related

AWS ELB Load Sharing Configuration

We have 3 EC2 instances (Apache web server) running under an AWS ELB. It shares load correctly, but whenever one of the web servers goes down, e.g. Web1 has an issue such as a full disk or an Apache crash, the ELB still tries to send requests to that server, which is no longer responding or has no capacity to respond, so users who are connected to that server get errors.
Question: Is there a way to identify the failed server and force the ELB to stop passing requests to it?
FYI: Auto Scaling is not enabled.
You need to configure health checks for your ELB. When the checks fail, the ELB stops forwarding traffic to the unhealthy instance.
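Once health checks are in place, you can also identify failed servers programmatically; a small boto3 sketch (the load balancer name is a placeholder):

import boto3

elb = boto3.client('elb')

# List the health state of every instance registered with the classic
# load balancer; failing instances show up as OutOfService.
response = elb.describe_instance_health(LoadBalancerName='my-load-balancer')
for state in response['InstanceStates']:
    print(state['InstanceId'], state['State'], state.get('Description', ''))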

AWS ECS Fargate ALB Error (Request Timed Out)

I have set up a Docker container running on port 5566 with a small Django application. The Docker image is uploaded to ECR and later used by Fargate container(s).
I have set up an ECS cluster with a VPC.
After creating the Task Definition and Service, the Service starts up 2 tasks (as it is supposed to):
Here's the Service's Network Access (with health check grace period on 300s):
I also set up an Application Load Balancer (with DNS) with a target group for the service, but the health checks seem to be failing:
Here's the health check configuration:
Because the health checks are failing, the tasks are terminated and new ones are started roughly every 5 minutes.
Here's the container's port mapping:
As one cannot access the Fargate container (via SSH for example) and the logs are empty, how should I troubleshoot the issue?
I have tried to follow every step in the Troubleshoot Your Application Load Balancer guide.
Feel free to ask for additional information.
Can you confirm that your application is working on port 5566 inside Docker?
You can check the logs in CloudWatch. You'll find the link under cluster -> service -> tasks -> your task.
Can you post your ALB configuration, and your target group's port?
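On the CloudWatch point above: if the task definition has no log configuration, there is nothing for CloudWatch to show. A hedged sketch of a container definition with the awslogs driver and the 5566 port mapping (account ID, names, and region are placeholders):

{
  "name": "django-app",
  "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/django-app:latest",
  "portMappings": [
    { "containerPort": 5566, "protocol": "tcp" }
  ],
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/django-app",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "ecs"
    }
  }
}

Also make sure the target group's health check port and path match what the Django container actually serves on 5566.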

How to prevent the Google Cloud load balancer from forwarding traffic to a newly created autoscaled instance before it is ready?

I need to host a PHP Laravel application on Google Cloud Compute Engine with auto scaling and load balancing. I tried to set up and configure the following:
Created an instance template, with a startup script that installs apache2 and PHP, clones the git repository of my project, configures the Cloud SQL proxy, and applies all settings required to run this Laravel project.
Created an instance group, with an autoscaling rule that starts creating additional instances when CPU reaches a certain percentage.
Created a Cloud SQL instance.
Created a storage bucket; all of the application's public content, like images, is uploaded to and served from the bucket.
Created a load balancer, assigned it a public IP, and configured the frontend and backend correctly.
With the above configuration everything works: when an instance reaches the defined CPU percentage, autoscaling starts creating another instance and the load balancer starts routing traffic to the new instance.
The issue I'm getting: the startup script of the instance template takes about 20-30 minutes to configure the environment before a newly created instance is ready to serve content. But as soon as the load balancer detects that the new machine is up and running, it starts routing traffic to the new VM instance, which is not yet ready to serve content.
As a result, when the load balancer routes traffic to a machine that isn't ready, it obviously sends me 404 and other errors.
How can I prevent this from happening? Is there a way for an instance created by the autoscaling service to tell the load balancer that it is ready to serve content, so that the load balancer only routes traffic to the newly created instance after that point?
How to prevent the Google Cloud load balancer from forwarding traffic to
a newly created autoscaled instance before it is ready?
Google Load Balancers use the parameter Cool Down to determine how long to wait for a new instance to come online and be 100% available. However, this means that if your instance is not available at that time, errors will be returned.
The above answers your question. However, taking 20 or 30 minutes for a new instance to come online defeats a lot of the benefits of autoscaling. You want instances to come online immediately.
Best practices mean that you should create an instance. Configure the instance with all the required software applications, etc. Then create an image of this instance. Then in your template specify this image as your baseline image. Now your instances will not have to wait for software downloads and installs, configuration, etc. All you need to do is run a script that does the final configuration, if needed, to bring an instance online. Your goal should be 30 - 180 seconds from launch to being online and running for a new instance. Rethink / redesign anything that takes longer than 180 seconds. This will also save you money.
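For reference, baking and using such an image might look like this with gcloud (all names and zones are placeholders):

# Create a reusable image from the disk of a fully configured instance.
gcloud compute images create laravel-base-image \
    --source-disk=configured-instance \
    --source-disk-zone=us-central1-a

# Point a new instance template at the baked image.
gcloud compute instance-templates create laravel-template \
    --image=laravel-base-image \
    --image-project=my-project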
John Hanley's answer is pretty good; I'm just completing it a bit.
You should take a look at Packer to create your preconfigured Google images; it will help when you need to add new configuration or do updates.
The cooldown is a great knob, but in your case you can't really be sure the installation won't sometimes take longer: if you run apt-get update && apt-get upgrade at instance startup to stay up to date, it will only take more and more time.
Load balancers should normally have a health check configured and should not route traffic until the instance is detected as healthy. In your case, since apache2 is installed, I suppose you have a health check on port 80 or 443, depending on your configuration, against a /healthz path.
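For reference, such a health check could be created and attached along these lines (names and path are placeholders):

# HTTP health check that probes /healthz on port 80.
gcloud compute health-checks create http laravel-hc \
    --port=80 \
    --request-path=/healthz

# Attach it to the backend service behind the load balancer.
gcloud compute backend-services update laravel-backend \
    --health-checks=laravel-hc \
    --global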
A way to use the health check correctly would be to create a specific vhost for it: add a fake domain to the health check, say health.test, and create a vhost that listens for health.test and returns a 200 response on the /healthz path.
This way you don't have to change your application config; just enable the health vhost last, so the load balancer doesn't start routing traffic before the server is really up.
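A minimal sketch of such a vhost, assuming Apache 2.4 and the hypothetical health.test domain (enable it last, once the app is ready):

# /etc/apache2/sites-available/health.conf
<VirtualHost *:80>
    ServerName health.test
    # Serve a static 'ok' file so /healthz returns 200 without
    # touching the Laravel application.
    Alias /healthz /var/www/health/healthz.html
    <Directory /var/www/health>
        Require all granted
    </Directory>
</VirtualHost>

Create the file with something like echo ok > /var/www/health/healthz.html, then run a2ensite health as the final step of the startup script.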

Purposefully make an instance attached to an ELB unhealthy

Is there any way to purposefully make an instance attached to an ELB unhealthy using boto?
I have tried a few methods and none of them have worked so far.
Thanks for any help!
No, this is not possible. There is no AWS API call that can change the health status of an instance. (Auto Scaling has this capability, but not Load Balancing).
You could use the deregister_instances() API call, which would effectively achieve the same result.
The Register or Deregister EC2 Instances for Your Classic Load Balancer documentation says:
Deregistering an EC2 instance removes it from your load balancer. The load balancer stops routing requests to an instance as soon as it is deregistered. If demand decreases, or you need to service your instances, you can deregister instances from the load balancer. An instance that is deregistered remains running, but no longer receives traffic from the load balancer, and you can register it with the load balancer again when you are ready.
When you deregister an instance, Elastic Load Balancing waits until in-flight requests have completed if connection draining is enabled.
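For illustration, the deregistration call in boto3 might look like this (the instance ID is a placeholder):

import boto3

elb = boto3.client('elb')

# Remove the instance from the classic load balancer; it keeps running
# but stops receiving traffic until it is registered again.
elb.deregister_instances(
    LoadBalancerName='myloadbalancer',
    Instances=[{'InstanceId': 'i-0123456789abcdef0'}],
)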
Actually, we can do that in the scenario below.
Let's assume you have a load balancer (myloadbalancer) with an instance attached and a ping configuration such as the one below:
Ping Protocol: HTTP
Ping Port: 80
Ping Path: /
Just use the boto3 code below to edit the health check configuration so the target points at a path that doesn't exist, and you can see the magic happen (the instance goes OutOfService):

import boto3

client = boto3.client('elb')

# Point the health check at a non-existent path; every check fails,
# so the instance is marked OutOfService after UnhealthyThreshold checks.
client.configure_health_check(
    LoadBalancerName='myloadbalancer',
    HealthCheck={
        'Target': 'HTTP:80/hjkx',
        'Interval': 30,
        'Timeout': 5,
        'UnhealthyThreshold': 5,
        'HealthyThreshold': 3
    }
)
Two other options:
1. Temporarily disable the web server / process that responds to the health check. In our case, we were running Java webapps with an nginx proxy in front of them; shutting down the nginx proxy made the health check fail while the Java app kept running.
2. Temporarily firewall the port that the ELB uses to perform the health check. You could do this via a call to the AWS API.
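As an illustrative sketch of option 2, you could revoke the security group rule that admits health check traffic (group ID, port, and CIDR are placeholders):

import boto3

ec2 = boto3.client('ec2')

# Remove the inbound rule that lets the ELB reach the health check port;
# checks start failing until the rule is added back.
ec2.revoke_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpProtocol='tcp',
    FromPort=80,
    ToPort=80,
    CidrIp='0.0.0.0/0',
)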

When AWS Elastic Beanstalk scales to another server it seems to make it available before it is ready?

When my Java application is deployed to Tomcat on Elastic Beanstalk it takes a while (11 minutes), because it has to copy large data files from S3 and unzip them. That is okay, because this is all done in .ebextensions and the instance doesn't report itself ready until it has completed.
However, I have it configured for autoscaling, and it seems that when it decides to start a new instance, there is a period before the new instance has fully deployed during which Elastic Beanstalk directs some application requests to it; because it is not ready, it returns a 503 error.
Surely all calls should go only to the original instance until the second one is ready. Has anyone else noticed this?
Whether requests are directed to the new instance or not is decided by the Elastic Load Balancer (ELB). Your autoscaled instances sit behind the ELB, and the ELB performs periodic health checks on your EC2 instances to decide whether to route traffic to them. By default the health check is a TCP connect on port 80. So if the ELB can establish a connection to port 80 on the Tomcat server, it will start sending traffic to the instance even before it is actually "ready".
The solution is to use a custom HTTP health check instead of the default TCP check. Set up your web app to return a 200 OK on a special path, say '/health_ping'. Then configure the "Application Healthcheck URL" option to "/health_ping". You can do this using the following ebextension.
Create a file called .ebextensions/01-health-check.config in your app source with the following contents. Then deploy it to your environment.
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /health_ping
Read more about this option setting in the Elastic Beanstalk option settings documentation.
You can also configure this in the web console or using the aws cli.
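For instance, a hedged boto3 equivalent for a running environment (the environment name is a placeholder):

import boto3

eb = boto3.client('elasticbeanstalk')

# Same effect as the ebextension above: point the ELB health check
# at /health_ping.
eb.update_environment(
    EnvironmentName='my-env',
    OptionSettings=[{
        'Namespace': 'aws:elasticbeanstalk:application',
        'OptionName': 'Application Healthcheck URL',
        'Value': '/health_ping',
    }],
)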