I am using Elastic Beanstalk for my microservices architecture. I would like to set up a load balancer that spins up another instance once memory is 100% exhausted, but I am unable to see any memory metrics.
Another question: how long does it take for a new instance to be spun up?
Let me know if there are other ways around this problem.
When you set up your Beanstalk environment you can choose it to be Load Balanced or Single instance (you can also change this at any time). In the Configuration of the environment you can choose the scale-up and scale-down metrics; however, you won't see any memory-related metrics in the list.
You can set up a CloudWatch alarm to trigger the Auto Scaling scale-up and scale-down events, but as mentioned by @Stefan you will first need to publish memory-related metrics to do this, as they don't exist by default.
You can do this with a config file in your .ebextensions folder within your deployment package, which writes the metrics to CloudWatch every n minutes.
You have to give your Beanstalk IAM role permissions to write to CloudWatch using an inline policy.
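For example, a minimal inline policy could be attached to the instance role with the AWS CLI; this is just a sketch (the policy name is arbitrary, and the role name assumes the default aws-elasticbeanstalk-ec2-role referenced in the config below):
aws iam put-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-name cloudwatch-put-metric-data \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["cloudwatch:PutMetricData"],
      "Resource": "*"
    }]
  }'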
You will then start to see memory metrics in CloudWatch and can therefore use them to scale up and down.
The YAML for the .config file will look like this (you may not want all of these metrics):
packages:
  yum:
    perl-DateTime: []
    perl-Sys-Syslog: []
    perl-LWP-Protocol-https: []
    perl-Switch: []
    perl-URI: []
    perl-Bundle-LWP: []

sources:
  /opt/cloudwatch: https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.1.zip

container_commands:
  01-setupcron:
    command: |
      echo '*/5 * * * * root perl /opt/cloudwatch/aws-scripts-mon/mon-put-instance-data.pl `{"Fn::GetOptionSetting" : { "OptionName" : "CloudWatchMetrics", "DefaultValue" : "--mem-util --disk-space-util --disk-path=/" }}` >> /var/log/cwpump.log 2>&1' > /etc/cron.d/cwpump
  02-changeperm:
    command: chmod 644 /etc/cron.d/cwpump
  03-changeperm:
    command: chmod u+x /opt/cloudwatch/aws-scripts-mon/mon-put-instance-data.pl

option_settings:
  "aws:autoscaling:launchconfiguration":
    IamInstanceProfile: "aws-elasticbeanstalk-ec2-role"
  "aws:elasticbeanstalk:customoption":
    CloudWatchMetrics: "--mem-util --mem-used --mem-avail --disk-space-util --disk-space-used --disk-space-avail --disk-path=/ --auto-scaling"
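Once the metrics are flowing, you can wire them to scaling yourself. A rough CLI sketch (the Auto Scaling group name, thresholds and policy name are placeholders; with the --auto-scaling flag the monitoring scripts publish MemoryUtilization under the System/Linux namespace, aggregated per Auto Scaling group):
# simple scale-up policy on the environment's Auto Scaling group (prints a PolicyARN)
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name <your-beanstalk-asg> \
  --policy-name memory-scale-up \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1
# alarm on the custom memory metric, pointing at that policy ARN
aws cloudwatch put-metric-alarm \
  --alarm-name memory-high \
  --namespace System/Linux \
  --metric-name MemoryUtilization \
  --dimensions Name=AutoScalingGroupName,Value=<your-beanstalk-asg> \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions <PolicyARN-from-previous-command>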
Links:
Troubleshooting CPU and Memory Issues in Beanstalk
How to monitor memory on Beanstalk
How to add permissions to IAM role
Example .config file which includes setting up CloudWatch triggers
Background: I'm running docker-compose ecs locally and need to ensure I use Spot instances due to my hobbyist budget.
Question: How do I determine and guarantee that instances are running as Fargate Spot instances?
Evidence:
I have set up the default capacity provider strategy as FARGATE_SPOT.
I have both the default-created capacity providers, 'FARGATE' and 'FARGATE_SPOT'.
(screenshots: capacity providers, default strategy)
You can see this in the web console when you view a specific task.
To find this page, click on your cluster from within ECS, then go to the "Tasks" tab and click on the task id.
You can also see this through the aws cli:
aws ecs describe-tasks --cluster <your cluster name> --tasks <your task id> | grep capacityProviderName
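If the task is running on Spot capacity, that should print something like the following (illustrative output; values depend on your setup):
"capacityProviderName": "FARGATE_SPOT",
You can also confirm the cluster-level configuration, which lists capacityProviders and defaultCapacityProviderStrategy:
aws ecs describe-clusters --clusters <your cluster name>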
I'm using Elastic Beanstalk autoscaling for an application, and I recently added a new SQS queue as an environment variable.
I can see the variable using eb printenv *my-env* | grep "SQS":
SQS_QUEUE = https://sqs.xxxxxx.amazonaws.com/xxxxxxx/xxxxxxx
When I SSH to a newly created instance I can't see that config using sudo /opt/elasticbeanstalk/bin/get-config environment --output YAML | grep "SQS".
If I make a code change and deploy, the instance gets the config, but this isn't a practical solution when the scaling goes up and down multiple times a day.
I'm using a custom AMI, but I can't find any AWS documentation around environment variables - adding new ones should just work, shouldn't it?
What am I missing?
Please double-check; I think --output YAML is incorrect. The queue should be accessible through:
/opt/elasticbeanstalk/bin/get-config environment -k SQS_QUEUE
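For reference, a quick way to check on the instance itself (assuming the standard Amazon Linux platform path) is to dump everything and then fetch the single key:
# print all environment properties
sudo /opt/elasticbeanstalk/bin/get-config environment
# print just the queue URL
sudo /opt/elasticbeanstalk/bin/get-config environment -k SQS_QUEUE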
I am implementing Amazon EC2 Auto Scaling and AWS CodeDeploy (blue green deployment). I have assigned a baked AMI to the Auto Scaling group.
Auto Scaling works with no problem without CodeDeploy.
AWS CodeDeploy for blue green deployment works with no problem. I have assigned the autoscaling group in the deployment group.
In order to test the blue-green deployment, I terminate one of the instances manually so that Auto Scaling launches a replacement. However, the new instance starts and then terminates abruptly.
I see that AWS CodeDeploy has an error:
The deployment failed because a specified file already exists at this location: webserver/server.js
The AWS CodeDeploy deployment configuration I am using is OneAtATime, and the content option is set to "Overwrite the content".
I only have 1 deployment group for the application.
Currently, I have removed the Auto Scaling group from AWS CodeDeploy by changing "Automatically copy Amazon EC2 Auto Scaling group" to "Manually provision instances", which has stopped the instances from terminating. However, the new instance created by Auto Scaling does not have the new code. Does CodeDeploy not update or replace the AMI with the new code?
Questions:
Why do I get the error "The deployment failed because a specified file already exists at this location: webserver/server.js"?
Why does the EC2 instance created by Auto Scaling not have the latest deployment code?
Is there a better approach to blue-green deployment with autoscaling, or are there any issues with the above approach?
I have read the AWS CodeDeploy tutorial but must have missed something.
overwrite: true will not work on its own; you need to make the changes below in the appspec.yml configuration.
files:
  - source: /
    destination: /my/sample/code
    overwrite: true
file_exists_behavior: OVERWRITE
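If you trigger deployments from the CLI instead, the same behaviour can also be set per deployment with the --file-exists-behavior option; a minimal sketch with placeholder names:
aws deploy create-deployment \
  --application-name <your-application> \
  --deployment-group-name <your-deployment-group> \
  --file-exists-behavior OVERWRITE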
The reason the instances are terminating: the instance starts and terminates abruptly because rollback is enabled in the deployment group. In a blue-green deployment, every time a new instance launches, CodeDeploy tries to deploy the latest code onto it and replace the old install; but in this case, because of the misconfiguration in appspec.yml, the deployment fails every time and things get stuck in a loop.
Disable rollback in the deployment group, configure the appspec.yml as above, and then set up the rollback again.
You can use a custom script to clean up the destination folder and run it from the AppSpec hooks section; for example, the BeforeInstall lifecycle event looks like a good place to run such a script.
You could create an appspec hook called "BeforeInstall" that cleans up the directory that needs to be deployed.
For example:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: ./cleanup.sh
and the content of cleanup.sh looks something like this:
#!/bin/bash -xe
# cleanup folder /var/www/html
rm -rf /var/www/html
With this hook in place, the BeforeInstall script will run before your code is deployed and clean up the directory.
More information about appspec hook can be found here: http://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
I just followed this tutorial to learn how to use the eb command.
One thing I want to do is modify the Health Check Type of the Auto Scaling group created by Elastic Beanstalk to ELB, but I just can't find out how to do it.
Here's what I have done:
Change the Health Check Type of the environment dev-env to ELB through the AWS console.
Use eb config save dev-env --cfg my-configuration to save the configuration file locally.
The ELB health check type doesn't appear inside .elasticbeanstalk/saved_configs/my-configuration.cfg.yml file. This means that I must specify the health check type somewhere else.
Then I find another article saying that you can put the health check type inside .ebextensions folder.
So I made a modification to eb-python-flask, which is the example used in the tutorial.
Here's my modification of eb-python-flask.
I thought that running eb config put prod and eb create prod2-env --cfg prod with my eb-python-flask would create an environment whose Auto Scaling group health check type is ELB. But I was wrong. The health check type of the group created by the eb commands is still EC2.
Anyone know how to set the health check type programmatically?
I don't want to set it through AWS console. It's inconvenient.
An ebextension like the one below will do it:
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: ELB
      HealthCheckGracePeriod: 300
I use the path .ebextensions/autoscaling.config
The eb create prod3-env --cfg prod command uses the git HEAD version to create the zip file that is uploaded to Elastic Beanstalk.
This can be seen by running eb create --verbose prod3-env --cfg prod, which shows verbose output.
The reason my own configuration failed to apply is that I didn't commit the config file to git before running eb create prod3-env --cfg prod.
After committing the change, I successfully deployed an Auto Scaling group whose Health Check Type is ELB.
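To double-check that the setting actually landed on the Auto Scaling group, you can query it directly (the group name is a placeholder; assumes the AWS CLI is configured):
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names <your-asg-name> \
  --query "AutoScalingGroups[].HealthCheckType"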
I am currently migrating my config management on AWS to Terraform to make it more pluggable. What I like is the possibility to manage rolling updates to an Autoscaling Group where Terraform waits until the new instances are in service before it destroys the old infrastructure.
This works fine with the "bare" infrastructure. But I ran into a problem when updating the actual app instances.
The code is deployed via AWS CodeDeploy, and I can tell Terraform to use the generated name of the new Auto Scaling group as the deployment target, but it doesn't deploy the code to the new instances on startup. When I manually select "Deploy changes to the deployment group", the deployment starts successfully.
Any ideas how to automate this step?
A local-exec provisioner (https://www.terraform.io/docs/provisioners/local-exec.html) might be able to do this. A couple of assumptions:
You've got something like the AWS CLI installed where you're running Terraform.
You've got your dependencies set up so that your CodeDeploy step would be one of the last things executed. If that's not the case, you can play with depends_on: https://www.terraform.io/intro/getting-started/dependencies.html#implicit-and-explicit-dependencies
Once your code has been posted, you would just add a
resource "something" "some_name" {
# Whatever config you've setup for the resource
provisioner "local-exec" {
command = "aws deploy create-deployment"
}
}
FYI, the aws deploy create-deployment command above is not complete, so you'll have to play with it in your environment until you've got the values needed to trigger the rollout, but hopefully this is enough to get you started.
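As a rough sketch of what a more complete command could look like (the application, deployment group and S3 revision are placeholders you would replace with your own values):
aws deploy create-deployment \
  --application-name <my-app> \
  --deployment-group-name <my-deployment-group> \
  --deployment-config-name CodeDeployDefault.OneAtATime \
  --s3-location bucket=<my-bucket>,key=<my-revision.zip>,bundleType=zip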
You can trigger the deployment directly from the user_data of your launch configuration:
resource "aws_launch_configuration" "my-application" {
name = "my-application"
...
user_data = "${data.template_file.node-init.rendered}"
}
data "template_file" "node-init" {
template = "${file("${path.module}/node-init.yaml")}"
}
Content of my node-init.yaml, following recommendations of this documentation: https://aws.amazon.com/premiumsupport/knowledge-center/codedeploy-agent-launch-configuration/
#cloud-config
write_files:
  - path: /root/configure.sh
    content: |
      #!/usr/bin/env bash
      REGION=$(curl 169.254.169.254/latest/meta-data/placement/availability-zone/ | sed 's/[a-z]$//')
      yum update -y
      yum install ruby wget -y
      cd /home/ec2-user
      wget https://aws-codedeploy-$REGION.s3.amazonaws.com/latest/install
      chmod +x ./install
      ./install auto
      # Add the following line for your node to update itself
      aws deploy create-deployment --application-name=<my-application> --region=ap-southeast-2 --deployment-group-name=<my-deployment-group> --update-outdated-instances-only
runcmd:
  - bash /root/configure.sh
In this implementation the node is responsible for triggering its own deployment. This has been working perfectly for me so far, but it can result in deployment failures if the ASG creates several instances at the same time (in that case the failed instances are terminated quickly because they are unhealthy).
Of course, you need to grant the role associated with your nodes sufficient permissions to trigger the deployment.
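A minimal sketch of such an inline policy attached to the node instance role via the CLI (the role and policy names are placeholders, and the action list is an assumption of what create-deployment with --update-outdated-instances-only typically needs; scope the resources down for production):
aws iam put-role-policy \
  --role-name <your-node-instance-role> \
  --policy-name allow-self-deployment \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "codedeploy:CreateDeployment",
        "codedeploy:GetDeployment",
        "codedeploy:GetDeploymentConfig",
        "codedeploy:GetApplicationRevision",
        "codedeploy:RegisterApplicationRevision"
      ],
      "Resource": "*"
    }]
  }'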
This is still a workaround, and if someone knows of a solution that behaves the same way as cfn-init, I would be interested.