I think I've done something wrong while designing my AWS infrastructure.
Currently I have one Auto Scaling group with one EC2 instance.
On this instance there are 6 Laravel projects, each associated with its own application in AWS CodeDeploy, so when I want to release a new version I simply deploy it through CodeDeploy.
The issue comes when the Auto Scaling group scales out: all 6 CodeDeploy applications try to deploy to the newly created instance at the same time, and the deployment fails with this message:
One or more lifecycle events did not run and the deployment was unsuccessful. Possible causes include:
(1) Multiple deployments are attempting to run at the same time on an instance;
So... what's the best way to get this to work?
AWS recommends associating a single deployment group with an ASG and consolidating deployments into a single deployment so that scale-out works properly. Each deployment group attaches a lifecycle hook to the ASG, through which the ASG notifies the deployment group when scale-out events occur. Parallel deployments (6 in your case) are prone to CodeDeploy timeouts (5–60 min), and the CodeDeploy agent running on the EC2 instance can only process one command at a time.
If your apps deploy quickly enough (under 60 minutes in total), you may want to consolidate them into a single application and handle each project via the CodeDeploy lifecycle hooks. Otherwise, I would suggest using a separate ASG per app.
Refer: https://aws.amazon.com/blogs/devops/under-the-hood-aws-codedeploy-and-auto-scaling-integration/
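If you do consolidate everything into one application, the single deployment group can be associated with the ASG when it is created, so that CodeDeploy installs exactly one lifecycle hook on the group. A minimal sketch with the AWS CLI (the application, deployment group, ASG and role names below are placeholders, not values from your setup):

# One deployment group tied to the ASG; new instances then receive a single deployment
aws deploy create-deployment-group \
  --application-name consolidated-laravel-app \
  --deployment-group-name production \
  --auto-scaling-groups my-asg-name \
  --service-role-arn arn:aws:iam::123456789012:role/CodeDeployServiceRole \
  --region eu-west-1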
To list the lifecycle hooks:
aws autoscaling describe-lifecycle-hooks --auto-scaling-group-name <asg_name> --region <region>
If a new EC2 instance gets stuck in an infinite loop of launching and terminating, you can remove the lifecycle hooks:
aws autoscaling delete-lifecycle-hook --lifecycle-hook-name <lifecycleName> --auto-scaling-group-name <asg_name> --region <region>
Related
I have AWS CodeDeploy deploying to a Deployment Group that targets an Auto Scaling group of EC2 instances, which can have anywhere between its minimum and maximum number of instances.
CodeDeploy hooks can be specified on individual instances to launch scripts on those instances at various stages of the deployment process.
Is there a way to launch a script, Lambda function, etc... after CodeDeploy successfully finishes deploying to the final instance in the ASG? In other words, is there an "All Done With Everything" hook that I can use? How are others tackling and solving this problem?
If you're using CodePipeline, how about adding another stage after the CodeDeploy stage?
Alternatively, you can have AWS CodeDeploy publish deployment status notifications to an SNS topic, or react to deployment state changes via CloudWatch Events.
Here: https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-cloudwatch-events.html
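For example, a CloudWatch Events rule can match deployments that reach a final state and invoke a Lambda function (or publish to SNS) once the whole deployment, i.e. every instance in the deployment group, has finished. A rough sketch with the AWS CLI; the rule name, Lambda ARN and the exact detail fields are assumptions you should verify against the event format documented above:

# Fire when a CodeDeploy deployment finishes successfully (deployment-level, not per-instance)
aws events put-rule \
  --name codedeploy-deployment-succeeded \
  --event-pattern '{"source":["aws.codedeploy"],"detail-type":["CodeDeploy Deployment State-change Notification"],"detail":{"state":["SUCCESS"]}}'

# Point the rule at whatever should run afterwards (a Lambda here; an SNS topic works the same way).
# For a Lambda target you also need to grant invoke permission with "aws lambda add-permission".
aws events put-targets \
  --rule codedeploy-deployment-succeeded \
  --targets 'Id=post-deploy,Arn=arn:aws:lambda:us-east-1:123456789012:function:post-deploy-hook'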
I am able to launch an ECS cluster and an Auto Scaling group that attaches EC2 instances to the cluster.
I can launch new EC2 instances that connect to the cluster using the launch template's and Auto Scaling group's web interfaces.
However, I cannot launch new EC2 instances that connect to the cluster using the ECS web interface via the Scale ECS Instances button mentioned here:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scale_cluster.html
The Scale ECS Instances button shows up when I create the ECS cluster through the web console. However, I cannot get the Scale ECS Instances button to show up when I create the ECS cluster through Terraform.
I'm hypothesizing that the web console goes through a first run experience that I cannot mimic using Terraform: https://aws.amazon.com/blogs/compute/amazon-ecs-console-first-run-troubleshoot-docker-errors/
But I cannot find any documentation to prove or disprove my hypothesis.
Is it possible to use Terraform (or CloudFormation or the AWS CLI) to get the Scale ECS Instances button to show up in the ECS web console?
Thank you for your time :)
That console experience uses CloudFormation under the covers, so when you click that button it modifies the CloudFormation stack to increase the desired number of instances in your ASG:
If your cluster was created with the console first-run experience after November 24, 2015, then the Auto Scaling group associated with the AWS CloudFormation stack created for your cluster can be scaled up or down to add or remove container instances. You can perform this scaling operation from within the Amazon ECS console.
To make the same change in Terraform, you should modify the min_size or desired_capacity (depending on whether you are actually using scaling policies or not) of your Auto Scaling group and allow it to scale appropriately.
This is also a better approach anyway (and I'd recommend this approach even if you were using CloudFormation to create your ECS cluster) as it means that all of your changes are defined in code directly rather than a combination of code and people clicking around in the AWS console.
I'm new to AWS.
About 30 minutes ago, I launched an ECS cluster to deploy my Docker container.
Everything looked fine.
After finishing my work, I deleted the cluster and the task definition.
But in my EC2 console, a new EC2 instance launches every 2 minutes, indefinitely.
I deleted every resource related to it.
Why do they keep launching automatically?
Is there any way to clean up the AWS ECS configuration completely?
Thanks.
As per your confirmation, recreating the associated Auto Scaling group, which was responsible for spinning up the instances, solved your problem.
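For anyone else who hits this loop: the instances usually come from an Auto Scaling group left behind by the ECS setup, so deleting the cluster alone is not enough. A rough cleanup sketch with the AWS CLI (substitute the group name that describe-auto-scaling-groups reports for your cluster):

# Find the leftover Auto Scaling group that keeps launching instances
aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[].AutoScalingGroupName'

# Stop it from launching anything new, then remove it
aws autoscaling update-auto-scaling-group --auto-scaling-group-name <asg_name> --min-size 0 --max-size 0 --desired-capacity 0
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name <asg_name> --force-delete

If the cluster was created through the console first-run wizard, deleting the CloudFormation stack that owns the Auto Scaling group achieves the same result.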
We have set up our infrastructure for a project using Terraform, including CodeDeploy, the ALB, and the Auto Scaling groups. So far we have been doing in-place deployments, but now we're trying to switch to blue/green deployments.
Since a CodeDeploy blue/green deployment replaces the entire Auto Scaling group on a successful deployment, the old Auto Scaling group recorded in the Terraform state file becomes stale and no longer reflects the new Auto Scaling group created by the CodeDeploy service.
Is there any known way to overcome this?
Depending on how you're triggering your CodeDeploy deployment, you could run a terraform import as a post-deployment hook in your deployment scripts to update the Terraform state to point at the new Auto Scaling group. You would need to fetch the name of the new ASG via one of the many client libraries or the CLI (a sketch of the CLI route follows the import command below).
terraform import aws_autoscaling_group.some_asg_identifier name-of-your-replacement-asg
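For example, with the AWS CLI you could ask CodeDeploy which ASG the deployment group now points at and feed that into the import. A rough sketch; the application name, deployment group name and Terraform resource address are placeholders:

# Ask CodeDeploy which ASG is currently registered on the deployment group
NEW_ASG=$(aws deploy get-deployment-group \
  --application-name my-app \
  --deployment-group-name production \
  --query 'deploymentGroupInfo.autoScalingGroups[0].name' \
  --output text)

# If the old ASG is still tracked at this address, remove the stale entry first, then import the new one
terraform state rm aws_autoscaling_group.some_asg_identifier
terraform import aws_autoscaling_group.some_asg_identifier "$NEW_ASG"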
You can use
lifecycle {
ignore_changes = [autoscaling_groups]
}
in the aws_codedeploy_deployment_group.
You also have to set autoscaling_groups to [] in the aws_codedeploy_deployment_group, since the newly created Auto Scaling group will be a different one (created by CodeDeploy) each time CodeDeploy deploys a new green environment.
With that in place, the lifecycle block above ignores the changes that happen when the Auto Scaling group is deleted and recreated. This is because CodeDeploy takes control of Auto Scaling group creation once blue/green deployment is in use.
We have an Auto Scaling group that runs containers (using ECS). When we add or replace EC2 instances in the ASG, they don't have the Docker images we want on them, so we run a few docker pull commands using cloud-init to fetch the images when they boot up.
However, the ASG thinks that the new instance is ready, and terminates an old instance. But in reality, this new instance isn't ready until all docker images have been pulled.
E.g.
Let's say my ASG's desired count is 5, and I have to pull 5 containers using cloud-init. Now, I want to replace all EC2 instances in my ASG.
As new instances start to boot up, the ASG will terminate old instances. But because of the docker pull lag, there will be a window during the deploy when the number of instances that are actually ready drops to 3 or 2, or even fewer.
How can I "mark an instance ready" only when cloud-init is finished?
Note: I think CloudFormation can bridge this communication gap using cfn-bootstrap, but I'm not using CloudFormation.
What you're looking for is Auto Scaling lifecycle hooks. You can keep an instance in the Pending:Wait state until your docker pull has completed, and then move the instance to InService. All of this can be done with the AWS CLI, so it should be achievable with an Auto Scaling command before and after your docker commands.
The Auto Scaling lifecycle hooks documentation explains this feature in detail and provides good examples of how to use it.
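A rough sketch of how that could look, assuming a hook named wait-for-docker-pull on an ASG called my-asg, and that your cloud-init script knows its own instance ID (for example from the instance metadata service, held in INSTANCE_ID below):

# One-time setup: hold newly launched instances in Pending:Wait until they report in,
# or abandon them if they never do within the heartbeat timeout (in seconds)
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name wait-for-docker-pull \
  --auto-scaling-group-name my-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
  --heartbeat-timeout 900 \
  --default-result ABANDON

# At the end of cloud-init, after all docker pull commands succeed, release the instance into service
aws autoscaling complete-lifecycle-action \
  --lifecycle-hook-name wait-for-docker-pull \
  --auto-scaling-group-name my-asg \
  --lifecycle-action-result CONTINUE \
  --instance-id "$INSTANCE_ID"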