I am implementing Amazon EC2 Auto Scaling and AWS CodeDeploy (blue green deployment). I have assigned a baked AMI to the Auto Scaling group.
Auto Scaling works with no problem without CodeDeploy.
AWS CodeDeploy for blue green deployment works with no problem. I have assigned the autoscaling group in the deployment group.
To test the blue/green deployment, I terminate one of the instances manually so that Auto Scaling launches a replacement instance. However, the new instance starts and then terminates abruptly.
I see that AWS CodeDeploy has an error:
The deployment failed because a specified file already exists at this location: webserver/server.js
The AWS CodeDeploy configuration I am using is OneAtATime with the content option "Overwrite the content".
I only have 1 deployment group for the application.
Currently, I have removed the Auto Scaling group from AWS CodeDeploy by changing "Automatically copy Amazon EC2 Auto Scaling group" to "Manually provision instances", which has stopped the instances from being terminated. However, the new instance created by Auto Scaling does not have the new code. Does CodeDeploy not update or replace the AMI with the new code?
Questions:
Why do I get the error "The deployment failed because a specified file already exists at this location: webserver/server.js"?
Why does the EC2 instance created by Auto Scaling not have the latest deployment code?
Is there a better approach for combining blue/green deployment with Auto Scaling, or are there any issues with the approach above?
I have read the AWS CodeDeploy tutorial but must have missed something.
overwrite: true alone will not work; you need to make the changes below in the configuration.
files:
  - source: /
    destination: /my/sample/code
    overwrite: true
file_exists_behavior: OVERWRITE
The reason the instances are being terminated is that rollback is enabled in the deployment group. When a new instance launches in the blue/green deployment, CodeDeploy tries to deploy the latest code to it and replace the old installation, but because of the misconfiguration in appspec.yml the deployment fails every time, and things get stuck in a loop.
Disable rollback in the deployment group, configure the appspec.yml as above, and then set up rollback again.
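If you also trigger deployments from the AWS CLI, the same behaviour can be set per deployment with the --file-exists-behavior flag. A minimal sketch, assuming hypothetical application, deployment group, and revision names:

# Hypothetical names; replace with your own application, deployment group and revision bundle.
aws deploy create-deployment \
  --application-name my-application \
  --deployment-group-name my-deployment-group \
  --s3-location bucket=my-bucket,key=my-app.zip,bundleType=zip \
  --file-exists-behavior OVERWRITE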
You can use a custom script to clean up the destination folder and run it from the AppSpec hooks section; for example, the BeforeInstall lifecycle event looks like a good place to run such a script.
You could create an appspec hook called "BeforeInstall" that cleans up the directory that needs to be deployed.
For example:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: ./cleanup.sh
and the content of cleanup.sh looks similar to this:
#!/bin/bash -xe
# cleanup folder /var/www/html
rm -rf /var/www/html
With this hook in place, the BeforeInstall event runs before your code is deployed and the script cleans up the directory.
More information about AppSpec hooks can be found here: http://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
Related
I set up the pipeline three months ago and everything has been running fine with the same appspec.yml. But now AWS CodeDeploy suddenly gives an error that it can't find the appspec.yml, although it is there. It fails at the very first ApplicationStop event itself. The error is as follows:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
Then when I looked into the details, this is what it said:
My appspec.yml is as follows:
version: 0.0
os: windows
files:
  - source: \
    destination: c:\home\afb
file_exists_behavior: OVERWRITE
I also have the folder (c:\home\afb) created already on my EC2 instance. The health of the EC2 instance is fine as I can see on the Dashboard and also access it via RDP. The CodeDeploy agent is also running fine on EC2.
Please help. Thanks in advance for any advice!
I have recently changed the AMI on which my ECS EC2 instances run from Amazon Linux to Amazon Linux 2 (in both cases using ECS-optimized images). I am deploying my instances using CloudFormation and having a real headache: the new instances sometimes come up successfully and sometimes do not (same stack, no updates, same code).
On the failed instances there is an issue with the ECS service itself: after running ecs-logs-collector.sh I see in the ecs log file "warning: The Amazon ECS Container Agent is not running". Also, the directory /var/log/ecs doesn't even exist!
I have the correct IAM role attached to the instance.
As mentioned, it is the same code being run, and roughly 75% of attempts fail with this ECS service issue. I have no more ideas where else to look for issues, logs, or errors.
AMI: ami-0650e7d86452db33b (eu-central-1)
Solved. If someone else runs into this issue, adding this to my user data helped:
cp /usr/lib/systemd/system/ecs.service /etc/systemd/system/ecs.service
sed -i '/After=cloud-final.service/d' /etc/systemd/system/ecs.service
systemctl daemon-reload
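For context, a minimal sketch of where that workaround could sit in the instance user data, assuming the ECS-optimized Amazon Linux 2 AMI and a hypothetical cluster name:

#!/bin/bash
# Hypothetical cluster name; replace with your own.
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config

# Workaround from above: drop the ordering on cloud-final.service so the
# ECS agent does not wait for cloud-init user data to finish before starting.
cp /usr/lib/systemd/system/ecs.service /etc/systemd/system/ecs.service
sed -i '/After=cloud-final.service/d' /etc/systemd/system/ecs.service
systemctl daemon-reload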
I just followed this tutorial to learn how to use the eb command.
One thing I want to do is change the health check type of the Auto Scaling group created by Elastic Beanstalk to ELB, but I just can't find out how to do it.
Here's what I have done:
Change the Health Check Type of the environment dev-env to ELB through the AWS console.
Use eb config save dev-env --cfg my-configuration to save the configuration file locally.
The ELB health check type doesn't appear inside the .elasticbeanstalk/saved_configs/my-configuration.cfg.yml file, which means I must specify the health check type somewhere else.
Then I found another article saying that you can put the health check type inside the .ebextensions folder.
So I made a modification to eb-python-flask, which is the example app from the tutorial.
Here's my modification of eb-python-flask.
I thought that running eb config put prod and eb create prod2-env --cfg prod with my eb-python-flask would create an environment whose Auto Scaling group health check type is ELB. But I was wrong: the health check type of the group created by the eb commands is still EC2.
Anyone know how to set the health check type programmatically?
I don't want to set it through AWS console. It's inconvenient.
An ebextension like the below will do it:
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: ELB
      HealthCheckGracePeriod: 300
I use the path .ebextensions/autoscaling.config
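To confirm the setting was applied after the environment update, you can list the groups with their health check types and find the one Elastic Beanstalk created for your environment (a sketch, assuming the AWS CLI is configured; the group name typically starts with awseb):

# Lists every Auto Scaling group in the region with its health check type.
aws autoscaling describe-auto-scaling-groups \
  --query 'AutoScalingGroups[].[AutoScalingGroupName,HealthCheckType]' \
  --output table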
The eb create prod3-env --cfg prod command uses the git HEAD version to create the zip file that is uploaded to Elastic Beanstalk.
This can be seen by running eb create --verbose prod3-env --cfg prod, which shows verbose output.
The reason my own configuration failed to apply is that I didn't commit the config file to git before running eb create prod3-env --cfg prod.
After committing the changes, I successfully deployed an Auto Scaling group whose health check type is ELB.
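Putting the two findings together, a sketch of the full sequence, assuming the saved configuration is named prod:

# The ebextension must be committed, because eb create zips git HEAD.
git add .ebextensions/autoscaling.config
git commit -m "Set ASG health check type to ELB"

# Upload the saved configuration and create the environment from it.
eb config put prod
eb create prod3-env --cfg prod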
I am currently migrating my config management on AWS to Terraform to make it more pluggable. What I like is the possibility to manage rolling updates to an Autoscaling Group where Terraform waits until the new instances are in service before it destroys the old infrastructure.
This works fine with the "bare" infrastructure, but I ran into a problem when updating the actual app instances.
The code is deployed via AWS CodeDeploy, and I can tell Terraform to use the generated name of the new Auto Scaling group as the deployment target, but the code is not deployed to the new instances on startup. When I manually select "deploy changes to the deployment group", the deployment starts successfully.
Any ideas how to automate this step?
A local-exec provisioner (https://www.terraform.io/docs/provisioners/local-exec.html) might be able to do this. A couple of assumptions:
You've got something like the AWS CLI installed where you're running Terraform.
You've got your dependencies set up so that the CodeDeploy step is one of the last things executed. If that's not the case, you can play with depends_on: https://www.terraform.io/intro/getting-started/dependencies.html#implicit-and-explicit-dependencies
Once your code has been posted, you would just add something like:
resource "something" "some_name" {
# Whatever config you've setup for the resource
provisioner "local-exec" {
command = "aws deploy create-deployment"
}
}
FYI, the aws deploy create-deployment command above is not complete, so you'll have to experiment in your environment until you've got the values needed to trigger the rollout, but hopefully this is enough to get you started.
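For reference, a fuller shape of the command as it might appear in the provisioner; this is a sketch only, with placeholder names and an S3 revision location you would replace with your own:

aws deploy create-deployment \
  --application-name <my-application> \
  --deployment-group-name <my-deployment-group> \
  --deployment-config-name CodeDeployDefault.OneAtATime \
  --s3-location bucket=<my-bucket>,key=<my-app.zip>,bundleType=zip \
  --region <my-region>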
You can trigger the deployment directly from the user data in the aws_launch_configuration resource:
resource "aws_launch_configuration" "my-application" {
name = "my-application"
...
user_data = "${data.template_file.node-init.rendered}"
}
data "template_file" "node-init" {
template = "${file("${path.module}/node-init.yaml")}"
}
Content of my node-init.yaml, following recommendations of this documentation: https://aws.amazon.com/premiumsupport/knowledge-center/codedeploy-agent-launch-configuration/
#cloud-config
write_files:
  - path: /root/configure.sh
    content: |
      #!/usr/bin/env bash
      REGION=$(curl 169.254.169.254/latest/meta-data/placement/availability-zone/ | sed 's/[a-z]$//')
      yum update -y
      yum install ruby wget -y
      cd /home/ec2-user
      wget https://aws-codedeploy-$REGION.s3.amazonaws.com/latest/install
      chmod +x ./install
      ./install auto
      # Add the following line for your node to update itself
      aws deploy create-deployment --application-name=<my-application> --region=ap-southeast-2 --deployment-group-name=<my-deployment-group> --update-outdated-instances-only
runcmd:
  - bash /root/configure.sh
In this implementation the node is responsible for triggering the deployment itself. This has been working perfectly for me so far, but it can result in deployment failures if the ASG creates several instances at the same time (in that case the failed instances are terminated quickly because they are unhealthy).
Of course, you need to add sufficient permissions to the role associated with your nodes to trigger the deployment.
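A minimal sketch of granting that with an inline role policy, assuming a hypothetical role name and a plausible set of CodeDeploy actions for the create-deployment call above (verify the exact actions against your own deployment):

# Hypothetical role name; use the instance role attached to your launch configuration.
aws iam put-role-policy \
  --role-name my-application-node-role \
  --policy-name allow-self-deploy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "codedeploy:CreateDeployment",
        "codedeploy:GetDeployment",
        "codedeploy:GetDeploymentConfig",
        "codedeploy:GetApplicationRevision",
        "codedeploy:RegisterApplicationRevision"
      ],
      "Resource": "*"
    }]
  }'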
This is still a workaround, and if someone knows a solution that behaves the same way as cfn-init, I am interested.
What's the best way to send logs from Auto Scaling groups (of EC2 instances) to Logentries?
I previously used the EC2 platform to create log monitoring for all of my EC2 instances created by an Auto Scaling group. However, under the Auto Scaling rules, a new instance will spin up if a current one is destroyed.
Now, how do I automate Logentries creating a new host and starting to collect its logs? I've read this: https://logentries.com/doc/linux-agent-with-chef/#updating-le-agent. I'm stuck at override['le']['pull-server-side-config'] = false since I don't know anything about Chef (I just took the training from their site).
For an Auto Scaling group, you need to get this baked into an AMI or scripted to run on startup. You can get an EC2 instance to run commands on startup once you've figured out which script to run.
The Logentries Linux Agent installation docs have setup instructions for an Amazon AMI (under Installation > Select your distro below > Amazon AMI).
Run the following commands one by one in your terminal:
You will need to provide your Logentries credentials to link the agent to your account.
sudo -s
tee /etc/yum.repos.d/logentries.repo <<EOF
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon\$releasever/\$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF
yum update
yum install logentries
le register
yum install logentries-daemon
I recommend trying that script once and seeing if it works properly for you; then you could include it in the user data for your Auto Scaling launch configuration.
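A minimal sketch of what that could look like as launch-configuration user data, with non-interactive package installs added; note that le register prompts for your credentials, so check the agent documentation for an unattended registration option before baking this in:

#!/bin/bash
# Set up the Logentries yum repository (same definition as above).
tee /etc/yum.repos.d/logentries.repo <<'EOF'
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon$releasever/$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF

yum update -y
yum install -y logentries
# NOTE: `le register` links the host to your account and asks for credentials;
# automate this step per the Logentries agent docs before using it in user data.
le register
yum install -y logentries-daemon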