I set up an Elastic Load Balancer on AWS that routes to a target group of 3 EC2 instances in 3 different Availability Zones.
In CloudWatch I can view load balancer metrics, target group metrics, or per-instance EC2 metrics. I'd like to know if there is some kind of plugin to display metrics for all the hosts in the target group in one place, like Grafana/Prometheus.
In addition, I'd like to know if there are best practices for gathering application logs from the EC2 instances so I can consult them when an error occurs.
Thank you very much
It depends on what kind of monitoring you want, but assuming you just want to gather logs, you can do the following:
Pre-bake an AMI for your OS with the CloudWatch Logs agent installed.
Specify the log group name in the agent configuration, and enable the agent on startup.
Launch your instance group from that AMI.
This way, logs from the different instances are collected in one log group, under separate streams corresponding to each instance.
You can also use third-party services, like the ELK stack, but the idea is the same: an AMI with a log agent.
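The pre-bake steps above can be sketched as follows. This is a minimal, hedged example using the legacy `awslogs` agent on Amazon Linux; the log file path, log group name, and date format are assumptions you would adapt to your application:

```shell
# Install the (legacy) CloudWatch Logs agent -- Amazon Linux package name;
# other distros use the Python setup script instead.
sudo yum install -y awslogs

# Point the agent at the application log; one shared log group for the fleet,
# one stream per instance. Paths and names here are illustrative assumptions.
sudo tee /etc/awslogs/awslogs.conf > /dev/null <<'EOF'
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/myapp/app.log]
file = /var/log/myapp/app.log
log_group_name = my-app-logs
log_stream_name = {instance_id}
datetime_format = %Y-%m-%d %H:%M:%S
EOF

# Enable the agent on boot, then create the AMI from this instance.
sudo systemctl enable awslogsd
```

With `log_stream_name = {instance_id}`, every instance launched from the AMI writes to its own stream inside the shared group, which is exactly the layout described above.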
Can someone help me with GCP autoscaling? I want to achieve autoscaling without using a load balancer in GCP, because the service running on the VM does not need any endpoint; it is more like a Kafka consumer that only fetches data from the cluster and sends it to a DB, so there is no load balancing.
So far I have successfully created an instance template and defined the minimum and maximum size there, but that only maintains the running state and does not perform autoscaling.
You can use instance groups, which are collections of virtual machine (VM) instances that you can manage as a single entity.
For autoscaling, use managed instance groups, which will autoscale as required according to one of the following policies:
CPU Usage: the size of the group is adjusted to keep the average CPU load of the VMs in the group at the required level.
HTTP Load Balancing Usage: the size of the group is adjusted to keep the load on the HTTP load balancer at the required level.
Stackdriver Monitoring Metric: the size of the group is adjusted to keep a selected Stackdriver Monitoring metric at the required level.
Multiple Metrics: the decision to change the size of the group is made on the basis of several metrics.
Select your required policy and create a managed instance group, which will autoscale your VMs. This document describes the steps to create scaling based on CPU usage; you can create a group with another policy in the same way.
For more background, refer to the attached blog post.
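A CPU-based setup like the one the document describes can be sketched with `gcloud`. The group name, template name, zone, and thresholds below are hypothetical; it assumes your instance template already exists:

```shell
# Create a managed instance group from an existing instance template.
# No load balancer is required -- the group works standalone.
gcloud compute instance-groups managed create kafka-consumer-mig \
  --template=kafka-consumer-template \
  --size=2 \
  --zone=us-central1-a

# Enable autoscaling on the group, targeting 60% average CPU utilization.
gcloud compute instance-groups managed set-autoscaling kafka-consumer-mig \
  --zone=us-central1-a \
  --min-num-replicas=2 \
  --max-num-replicas=6 \
  --target-cpu-utilization=0.6 \
  --cool-down-period=90
```

Because the autoscaling signal here is CPU (not load balancer traffic), this fits the Kafka-consumer case where there is no serving endpoint.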
I am using EC2 with Elastic Beanstalk to deploy a Spring Boot application. The deployment connects to an RDS MySQL instance, and an assigned default security group allows the communication.
For the 3rd time, I have found that the security group has been dropped from the EC2 instance's list of groups, leaving Spring Boot degraded and stuck in a startup loop (I am not sure what removed it).
A separate Spring Boot/Elastic Beanstalk deployment uses this same group for RDS connectivity and has never experienced this.
Has anyone else experienced this? Logs reveal nothing other than connection timeout to RDS.
To troubleshoot this issue, you can use AWS CloudTrail to trace who detached the security group from the EC2 instance. This kind of event is logged as ModifyNetworkInterfaceAttribute, with event source ec2.amazonaws.com.
Here you can find AWS CloudTrail user guide.
Note: CloudTrail typically delivers an event within 15 minutes of the API call.
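You can search for these events from the CLI as well. A minimal sketch, assuming a suitable region is configured and adjusting the time window to your incident:

```shell
# Look up recent ModifyNetworkInterfaceAttribute events and show who made
# each call. The start time below is an illustrative placeholder.
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=ModifyNetworkInterfaceAttribute \
  --start-time 2023-01-01T00:00:00Z \
  --max-results 20 \
  --query 'Events[].{Time:EventTime,User:Username,Id:EventId}' \
  --output table
```

The `Username` column usually points directly at the IAM principal (or AWS service) that changed the instance's network interface attributes.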
I believe the problem is that you are attaching the security groups to the instance through the EC2 console instead of through the EB environment's configuration.
Go to the EB console, choose your environment, and click Configuration.
Click Edit on the Instances section and add the security groups from there. Doing so ensures that all your security groups are applied whenever EB creates instances, for example when it scales out.
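The same change can be made from the CLI via the environment's option settings. The environment name and security group ID below are placeholders:

```shell
# Attach the security group through Elastic Beanstalk itself, so it is
# reapplied to every instance EB launches (e.g. on scale-out or replacement).
aws elasticbeanstalk update-environment \
  --environment-name my-boot-env \
  --option-settings \
    Namespace=aws:autoscaling:launchconfiguration,OptionName=SecurityGroups,Value=sg-0123456789abcdef0
```

Because the group is now part of the environment configuration rather than an ad-hoc EC2 console edit, it survives instance replacement.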
I'm a little confused about these terms and their usage. Can you please help me understand how they are used with load balancers?
I referred to the AWS docs in vain.
Target groups are just groups of EC2 instances. Target groups are closely associated with ELBs, not with ASGs.
ELB -> TG -> Group of Instances
You can use an ELB and a target group alone to route requests to EC2 instances. With this setup there is no autoscaling, which means instances cannot be added or removed automatically when your load increases or decreases.
ELB -> TG -> ASG -> Group of Instances
If you want autoscaling, you can attach the TG to an ASG, which in turn gets associated with the ELB. With this setup, you get request routing and autoscaling together; real-world use cases usually follow this pattern. If you detach the target group from the Auto Scaling group, the instances are automatically deregistered from the target group.
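The ELB -> TG -> ASG wiring above can be done with a single CLI call. The group name and target group ARN are hypothetical:

```shell
# Attach an existing target group to an existing Auto Scaling group.
# From now on, instances launched by the ASG are registered with the
# target group automatically, and deregistered when terminated.
aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name my-asg \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef
```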
Hope this helps.
What is a target group?
A target group contains EC2 instances to which a load balancer distributes workload.
A load balancer paired with a target group does NOT yet have auto scaling capability.
What is an Auto Scaling Group (ASG)?
This is where auto scaling comes in. An auto scaling group (ASG) can be attached to a load balancer.
We can attach auto scaling rules to an ASG. Then, when thresholds are met (e.g. CPU utilization), the number of instances will be adjusted programmatically.
How to attach an ASG to a load balancer?
For a Classic Load Balancer, link the ASG with the load balancer directly.
For an Application Load Balancer, link the ASG with the target group (which is itself attached to the load balancer).
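Both attachment styles have a dedicated CLI command. Names and ARNs below are placeholders:

```shell
# Classic Load Balancer: attach the ASG to the load balancer directly.
aws autoscaling attach-load-balancers \
  --auto-scaling-group-name my-asg \
  --load-balancer-names my-classic-elb

# Application Load Balancer: attach the ASG to the ALB's target group instead.
aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name my-asg \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef
```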
An Auto Scaling Group is just a group of identical instances that AWS can scale out (add one) or in (remove one) automatically based on configuration you've specified. You use this to ensure that at any point in time there is a specific number of instances running your application, and when a threshold is reached (like CPU utilization), it scales up or down.
A Target Group is a way of routing network traffic, via specified protocols and ports, to specified instances. It's basically load balancing at the port level. This is mostly used to expose many applications running on different ports of the same instance.
Then there are the Classic Load Balancers, where network traffic is simply routed between instances.
The doc you referred to is about attaching load balancers (either Classic Load Balancers or target groups) to an Auto Scaling group. This is done so instances can be scaled automatically (by the Auto Scaling group) while network traffic is still routed to them through the load balancer.
Target groups
They receive HTTP/S requests from a load balancer.
They are the load balancer's targets, which handle HTTP/S requests from any kind of client (browser, mobile, Lambda, etc.). A target group has a specific purpose, like mobile API processing or web app processing. Further, these target groups can contain instances with any kind of characteristics.
AWS Docs
Each target group is used to route requests to one or more registered targets. When you create each listener rule, you specify a target group and conditions. When a rule condition is met, traffic is forwarded to the corresponding target group. You can create different target groups for different types of requests. For example, create one target group for general requests and other target groups for requests to the microservices for your application. Reference
So, a Target Group provides a set of instances to process specific HTTP/S requests.
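The listener-rule routing quoted above can be sketched with the CLI. The listener and target group ARNs and the path pattern are hypothetical:

```shell
# Forward requests under /api/* to a dedicated target group (e.g. a
# microservice), while other rules handle general requests.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2 \
  --priority 10 \
  --conditions Field=path-pattern,Values='/api/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api-tg/0123456789abcdef
```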
AutoScaling groups
They are a set of instances started up to handle a specific workload, i.e. HTTP requests, SQS messages, jobs processing any kind of task, etc.
These groups are sets of instances that were started because a metric exceeded a specific threshold and triggered an alarm. The main difference is that an Auto Scaling group's instances are temporary: they can process anything from HTTP/S requests to SQS messages, and they can be terminated at any time according to the configured metric. Likewise, the instances in an Auto Scaling group share the same characteristics, because they follow a common Launch Configuration.
AWS Docs
An Auto Scaling group contains a collection of EC2 instances that share similar characteristics and are treated as a logical grouping for the purposes of instance scaling and management. For example, if a single application operates across multiple instances, you might want to increase the number of instances in that group to improve the performance of the application or decrease the number of instances to reduce costs when demand is low. Reference
So, an Auto Scaling group can not only process HTTP/S requests but can also handle backend work, like jobs to send emails, jobs to process tasks, etc.
As I understand it, target groups are the connection between the ELB and the EC2 instances, a kind of service-discovery rule. This layer is what makes target groups usable for ECS services, for instance, where there can be more than one container per instance.
Auto Scaling groups are an abstraction for aggregating EC2 metrics and taking actions based on that data.
Also, bear in mind that the option of attaching Auto Scaling groups directly to an ELB comes from the previous generation of ELBs. You can compare the first and second generations in the CloudFormation docs.
Trying to create an Amazon CloudWatch alarm to monitor an Elastic Beanstalk deployment of a public-facing website. The alarm options for Elastic Beanstalk don't seem to allow monitoring the specific instances that fail Beanstalk's health check URL query. I need to identify the specific unhealthy INSTANCE and terminate it. From there, my autoscaling policy will automatically replace the terminated instance.
Some background
Setup: Elastic Beanstalk deployment running LAMP for a public facing site.
Purpose: For additional failsafe security, I've added a daemon that monitors the state of the file system at /var/www. If the timestamp or size of the filesystem changes (i.e., an unwanted file introduction or change), the monitor fires a script that deletes the PHP file located at Elastic Beanstalk's health check URL (a random URL in the /var/www dir) and forces an "unhealthy" state at the ELB monitoring level.
All is working fine except I can't seem to find a way to get Amazon to identify the specific instance which has caused the health check to fail and let me terminate only that instance.
The AWS docs for creating alarms for this specific functionality and initiating instance termination are unclear. I've tried setting up health monitoring at the Beanstalk level, which identifies an unhealthy state but not the specific instance. Not new to AWS, but relatively new to CloudWatch metrics.
Thanks for suggestions.
So it looks like your base use case is this:
Something is wrong in the /var/www dir and your script deleted the health check script.
Instance fails the health check
The instance gets terminated then replaced by Autoscaling
One option is to use Elastic Beanstalk's Scaling Triggers setting to configure your Auto Scaling group to replace hosts immediately on the UnhealthyHostCount trigger measurement. If you are using the API, you can set the triggers with these option settings.
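A sketch of those option settings via the CLI, assuming a hypothetical environment name and aggressive thresholds (trigger as soon as any host turns unhealthy); tune the statistic and breach duration for your environment:

```shell
# Drive Elastic Beanstalk's scaling trigger off the ELB's UnhealthyHostCount
# metric instead of the default, so unhealthy hosts cause a scaling action.
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings \
    Namespace=aws:autoscaling:trigger,OptionName=MeasureName,Value=UnhealthyHostCount \
    Namespace=aws:autoscaling:trigger,OptionName=Statistic,Value=Maximum \
    Namespace=aws:autoscaling:trigger,OptionName=Unit,Value=Count \
    Namespace=aws:autoscaling:trigger,OptionName=UpperThreshold,Value=0 \
    Namespace=aws:autoscaling:trigger,OptionName=BreachDuration,Value=1
```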
My AWS auto-scaled instances are not picked up by the load balancer, and the auto-scaled instances are recreated frequently.
Also, is there any problem with using auto-scaled instances and static instances at the same time behind an AWS ELB?
What precautions should be taken when doing so, if it is possible?
Are there any disadvantages to doing so?
You need to make sure that your Auto Scaling group is registered with the load balancer appropriately, and that you have the appropriate policies. More details are really needed to answer this, though.
Don't do it. If you need instances running all the time, configure your group with a minimum equal to the number of "static" instances. If you need to run a "static" instance alongside a scaling group, you're probably thinking about the problem the wrong way.
One reason could be that your Auto Scaling group is configured for multiple Availability Zones, but those zones are not added to the associated load balancer. In the Management Console, go to Load Balancers -> Instances and verify the Availability Zones.
I would go with @Peter H. Modify your design so you don't depend on any particular instance for persistent data. Store persistent data externally, in a database or on S3.