How to use CodeDeploy with ECS, ALB and Auto Scaling - amazon-web-services

I'm trying to use CodeDeploy and ECS with an Application Load Balancer, plus Auto Scaling with a scaling policy based on the number of requests to this ALB.
I'm using this URL as some sort of tutorial, but I don't really understand how CodeDeploy integrates with ECS and the other pieces.
First, as far as I can see, I need two target groups on my ALB. But I only have one (containing the instance(s) that are / will be created by Auto Scaling).
So what do I need to do? Does creating an empty target group and telling CodeDeploy to use both work?
What will it do, deploy instances into this target group and redirect part of or the whole traffic to it once it's working? As stated in the same link:
"During deployment, CodeDeploy installs your update into a new, replacement task set."
So it seems to create new tasks, but on which instances then?

So what do I need to do? Does creating an empty target group and telling CodeDeploy to use both work?
It can't stay empty. Your new target group can be configured the same as your first one, so at this stage you will have two target groups that are identical except for their names.
The second target group is specified when you create your ECS deployment group in CodeDeploy (see the sketch at the end of this answer).
What will it do, deploy instances into this target group and redirect part of or the whole traffic to it once it's working? As stated in the same link
They will be the same instances as the ones running your current ECS tasks.
So it seems to create new tasks, but on which instances then?
The same instances as those that run your current tasks.
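To make this concrete, here is a minimal boto3 sketch of the setup described above. Every name, ARN and ID in it is a hypothetical placeholder; the point is only that the second target group mirrors the first and that both are listed in the deployment group's target group pair together with the production listener.

    # Sketch only - every name, ARN and ID below is a placeholder.
    import boto3

    elbv2 = boto3.client("elbv2")
    codedeploy = boto3.client("codedeploy")

    # Second target group, configured like the first one (only the name differs).
    elbv2.create_target_group(
        Name="my-service-tg-2",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",
        TargetType="instance",        # use "ip" for awsvpc/Fargate tasks
    )

    # ECS blue/green deployment group that references both target groups.
    codedeploy.create_deployment_group(
        applicationName="my-ecs-app",
        deploymentGroupName="my-ecs-dg",
        serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployECSRole",
        deploymentStyle={
            "deploymentType": "BLUE_GREEN",
            "deploymentOption": "WITH_TRAFFIC_CONTROL",
        },
        ecsServices=[{"clusterName": "my-cluster", "serviceName": "my-service"}],
        loadBalancerInfo={
            "targetGroupPairInfoList": [{
                "targetGroups": [{"name": "my-service-tg-1"}, {"name": "my-service-tg-2"}],
                "prodTrafficRoute": {"listenerArns": ["arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/my-alb/abc/def"]},
            }]
        },
        blueGreenDeploymentConfiguration={
            "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
            "terminateBlueInstancesOnDeploymentSuccess": {
                "action": "TERMINATE",
                "terminationWaitTimeInMinutes": 5,
            },
        },
    )

During a deployment, CodeDeploy starts the replacement task set, registers it with whichever of the two target groups is currently idle, and shifts the listener's traffic to it once it passes the health checks.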

Related

CloudFormation, CodeDeploy, ELB & Auto-Scaling Group

I am trying to build a stack with an ELB, an Auto-Scaling Group and a Pipeline (with CodeBuild and CodeDeploy).
I can't understand how it is supposed to work:
the Auto Scaling group starts two instances and waits X minutes before starting to check the instances' state
the CodeDeploy application deployment group is waiting for the Auto-Scaling group to be created and ready
the pipeline takes about 10 minutes to start deploying the application
My issue is that when I create the stack, it looks like there is a loop: the ASG requires an application from CodeDeploy, and CodeDeploy requires a stabilized ASG. To be clear, by the time the application is ready to deploy, my Auto Scaling group is already terminating instances and starting new ones, so the CodeDeploy deployment tries to deploy to instances that are already terminated or terminating.
I don't really want to configure HealthCheckGracePeriod and PauseTime to be ~10-15 minutes... it is way too long.
Are there any best practices for CloudFormation + ELB + AG + CodeDeploy via a Pipeline?
What should be the steps to achieve that?
Thank you!
This stopping/starting of the instances is most probably linked to the deployment type: in-place vs. blue/green.
I have tried both in my setup, and I will try to summarize how they work.
Let's say that for this example you have an Auto Scaling group that has 2 running instances at the time of deploying the application, and the deployment configuration is OneAtATime. Traffic is controlled by the Elastic Load Balancer. Then:
In-place deployment:
CodeDeploy gets notified of a new revision available.
It tells the ELB to stop directing traffic to the 1st instance.
Once traffic to that instance is stopped, it starts the deployment process: stop the application, download the bundle, etc.
If the deployment is successful (the ValidateService hook returned 0), it tells the ELB to resume traffic to that instance.
At this point, 1 instance is running the old code and 1 is running the new code.
Right after that, the ELB stops traffic to the 2nd instance and the deployment process repeats there.
Important note:
With the ELB enabled, the time it takes to block traffic to an instance before the deployment, and the time it takes to allow traffic to it afterwards, depend directly on your health check: time = healthy threshold * interval (for example, a healthy threshold of 5 checks at a 30-second interval means roughly 150 seconds each way).
Blue/green deployment:
CodeDeploy gets notified of a new revision available.
It copies your Auto Scaling group: the same configuration of the group (including scaling policies, scheduled actions, etc.) and the same number of instances (using the same AMI as your original AG) that were there at the start of the deployment - in our case 2.
At this point, there is no traffic going to the new AG.
CodeDeploy performs all the usual installation steps to one machine.
If successful, it deploys to the second machine too.
It directs traffic from the instances in your old AG to the new AG.
Once traffic is completely re-routed, it deletes the old AG and terminates all its instances (after a period specified in Deployment Settings - this option is only available if you select Blue/Green)
Now ELB is serving only the new AG.
From experience:
Blue/green deployment is a bit slower, since you need to wait for the machines to boot up, but you get a much safer and more fail-proof deployment.
In general I would stick with blue/green, with the load balancer enabled and the deployment configuration AllAtOnce (if it fails, customers won't be affected since the instances won't be receiving traffic, and it will be 2x as fast since you deploy in parallel rather than sequentially).
If your health checks and ValidateService hook are thorough enough, you can probably delete the original AG with minimal waiting time (5 minutes at the time of writing this post). A sketch of such a deployment group follows below.
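As a hedged illustration of that recommendation, the following boto3 sketch creates an EC2 blue/green deployment group that copies the original Auto Scaling group, uses AllAtOnce, and terminates the old ("blue") instances 5 minutes after traffic is re-routed. All names and ARNs are hypothetical.

    # Sketch only - adjust names/ARNs to your own application and ASG.
    import boto3

    codedeploy = boto3.client("codedeploy")

    codedeploy.create_deployment_group(
        applicationName="my-app",
        deploymentGroupName="my-app-bluegreen",
        serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",
        autoScalingGroups=["my-app-asg"],              # the "blue" group to copy
        deploymentConfigName="CodeDeployDefault.AllAtOnce",
        deploymentStyle={
            "deploymentType": "BLUE_GREEN",
            "deploymentOption": "WITH_TRAFFIC_CONTROL",
        },
        loadBalancerInfo={"targetGroupInfoList": [{"name": "my-app-tg"}]},
        blueGreenDeploymentConfiguration={
            "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
            "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
            "terminateBlueInstancesOnDeploymentSuccess": {
                "action": "TERMINATE",
                "terminationWaitTimeInMinutes": 5,
            },
        },
    )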

How can I create and deploy applications to an EC2 instance via the AWS API?

I'm looking to see if I can create an instance and deploy applications to this instance dynamically via the API. I only want these instances to be created when my application needs them, or when I request for them to be created.
I have two applications that I need to deploy to each created instance which require some set up and installation of dependencies prior to their launch. When I am finished with this application, I want to terminate the instance.
Am I able to do this? If so, could someone please point me to the right section of the documentation. I have searched on the documentation and found some information about creating images but I am unsure as to what exactly I will need to achieve this task.
Yes. Using an Auto Scaling group, you can create a launch configuration that will launch your instances. Using CodeDeploy, you would link your deployment group to the Auto Scaling group.
See Integrating AWS CodeDeploy with Auto Scaling
AWS CodeDeploy supports Auto Scaling, an AWS service that can launch Amazon EC2 instances automatically according to conditions you define. These conditions can include limits exceeded in a specified time interval for CPU utilization, disk reads or writes, or inbound or outbound network traffic. Auto Scaling terminates the instances when they are no longer needed. For more information, see What Is Auto Scaling?
Assuming you set your desired/minimum instances to 0, then the default state of the ASG will be to have no instances.
When your application needs an instance spun up, it would simply change the desired instance count to 1. When your application is finished with the instance, it would set the desired count back to 0, thereby terminating that instance.
To develop this setup, start by running your instance normally (manually) and get the application deployment working. When that works, then create your auto scaling group. Finally, update your deployment group to refer to the ASG so that your code is deployed when you have scaling events.
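A minimal boto3 sketch of the scale-up / scale-down part, assuming a hypothetical group name my-worker-asg whose minimum size is 0:

    import boto3

    autoscaling = boto3.client("autoscaling")

    def spin_up_instance(asg_name="my-worker-asg"):
        # Ask the Auto Scaling group for one instance; CodeDeploy deploys to it on launch.
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=asg_name,
            DesiredCapacity=1,
            HonorCooldown=False,
        )

    def tear_down_instance(asg_name="my-worker-asg"):
        # Return the group to zero instances, terminating the one it launched.
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=asg_name,
            DesiredCapacity=0,
            HonorCooldown=False,
        )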

AWS CodeDeploy Deployment Order

I haven't been able to find anywhere to see what order a deployment goes out. We have a primary instance, and then 3-4 autoscaling instances on an ELB. We selected the deployment by tags (for the AS instances) and then the primary instance by name. We then deploy half at a time. We were hoping the AS instances would always deploy first so if a deployment failed we could just terminate those instances and it was easier to fix. (Fixing the primary would be more manual work since we can't just terminate it for other reasons.)
Is there a way to specify the order in which a deployment should go out?
You cannot specify the order in which the instances will be deployed within a deployment group. AWS CodeDeploy sorts the instances in a deployment group based on the instance AZ and tries to do best-effort striping across AZs. If you specifically want the Auto Scaling instances to go first, one workaround is to use a separate deployment group containing just the Auto Scaling group.
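If you go the two-deployment-group route, you can enforce the order yourself by deploying to the Auto Scaling group first and only deploying to the primary instance once that succeeds. A hedged boto3 sketch, with hypothetical application, group and bucket names:

    import boto3

    codedeploy = boto3.client("codedeploy")

    revision = {
        "revisionType": "S3",
        "s3Location": {"bucket": "my-deploy-bucket", "key": "app.zip", "bundleType": "zip"},
    }

    # 1. Deploy to the Auto Scaling instances first.
    asg_deployment = codedeploy.create_deployment(
        applicationName="my-app",
        deploymentGroupName="asg-instances",
        revision=revision,
    )["deploymentId"]
    codedeploy.get_waiter("deployment_successful").wait(deploymentId=asg_deployment)

    # 2. Only then deploy to the primary instance.
    codedeploy.create_deployment(
        applicationName="my-app",
        deploymentGroupName="primary-instance",
        revision=revision,
    )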

AWS Codedeploy when Autoscaling Group set to 0 instances

I'm using CodeDeploy to push to my EC2 instances within an Auto Scaling group. At times, that Auto Scaling group doesn't have any existing instances running. When I deploy in that situation, CodeDeploy ALWAYS fails, even though I've set the minimum healthy hosts to 0 instances.
Is there any way I can get CodeDeploy to say "success" when there are 0 instances?
It appears that when CodeDeploy fails, it doesn't update the revision. This is a real pain in my situation.
You need to have at least a single instance in your deployment group for the deployment to succeed. After you hook the Auto Scaling group (containing at least 1 instance) up to CodeDeploy, you should do a successful deployment to update the target revision of the deployment group. After that, any new instance scaled up should automatically pick up the target revision.
You could also set the :min property of your Auto Scaling group to 1 to always keep a single instance in it.
I know it's been over two years, but I faced this same issue. My workaround was creating my own lifecycle hook for my Auto Scaling group, plus an SNS topic and a Lambda function for deploying my revisions.
The catch is, you should first register a revision for the application without deploying it. As soon as a new instance is created by the Auto Scaling group, the hook sends an SNS message to the Lambda, and then you can (based on the message received plus environment variables) look up the already registered revisions and deploy them to the new instances.
I've wired all of this up using CloudFormation, which I strongly recommend for this workaround and all other AWS-related services.
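A sketch of the Lambda from that workaround, with hypothetical environment variables and an S3 revision: it is triggered by the SNS notification from the Auto Scaling lifecycle hook, deploys the previously registered revision, and then lets the instance launch continue.

    import json
    import os
    import boto3

    codedeploy = boto3.client("codedeploy")
    autoscaling = boto3.client("autoscaling")

    def handler(event, context):
        message = json.loads(event["Records"][0]["Sns"]["Message"])
        # Ignore the test notification and anything that is not a launch event.
        if message.get("LifecycleTransition") != "autoscaling:EC2_INSTANCE_LAUNCHING":
            return

        # Deploy the revision that was registered (but not deployed) earlier.
        codedeploy.create_deployment(
            applicationName=os.environ["APPLICATION_NAME"],
            deploymentGroupName=os.environ["DEPLOYMENT_GROUP_NAME"],
            revision={
                "revisionType": "S3",
                "s3Location": {
                    "bucket": os.environ["REVISION_BUCKET"],
                    "key": os.environ["REVISION_KEY"],
                    "bundleType": "zip",
                },
            },
        )

        # Let the Auto Scaling group finish launching the instance.
        autoscaling.complete_lifecycle_action(
            LifecycleHookName=message["LifecycleHookName"],
            AutoScalingGroupName=message["AutoScalingGroupName"],
            LifecycleActionToken=message["LifecycleActionToken"],
            LifecycleActionResult="CONTINUE",
        )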

AWS Automatically Generating a New Instance After I Terminate It

OK, an odd thing is happening on AWS.
I downloaded the AWS .NET developer tools and created an Elastic Beanstalk default instance.
I then, for one reason or another, created another instance via the Visual Studio interface, and that instance is where all the client's code / configuration resides. I then returned to the default instance created by Elastic Beanstalk and terminated it. An hour later, I logged back on and another default instance was up and running. It seems that AWS detected that I terminated the instance and spawned another. Some sort of check seems to be in place.
Can somebody tell me what is going on here and how to completely remove the default instance (and its termination protection behavior)?
Thanks.
I've experienced something similar. If the instance was created through Elastic Beanstalk, you need to go to the Elastic Beanstalk screen in the AWS console and remove the application from there first. If you just terminate the instance from the EC2 screen, Elastic Beanstalk probably thinks that the instance crashed and launches a new one.
If Elastic Beanstalk is not in play, then the instance is most probably being created by Auto Scaling, which is part of the EC2 service itself.
Go to Auto Scaling and first delete the Auto Scaling group and the launch configuration associated with that instance.
As described here, it is caused by the Auto Scaling group's desired/minimum setting of 1 instance. What that means is that the Auto Scaling group will always keep one instance running; if you delete the only one, it will create another. To prevent this, go to the EC2 dashboard, scroll down the left sidebar, and click Auto Scaling Groups under the AUTO SCALING menu. You will see the list of groups and can click on each one to see which instances it contains. Find the group that your instance is in and either delete it (that happens when an environment is deleted as well) or change its desired and minimum instance counts from 1 to 0 and save. That is it.
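Both options can also be done with a couple of boto3 calls; the group name below is a hypothetical Elastic Beanstalk-style placeholder.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Option 1: keep the group but stop it from replacing terminated instances.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="awseb-e-xxxxxxxxxx-stack-AWSEBAutoScalingGroup",
        MinSize=0,
        DesiredCapacity=0,
    )

    # Option 2: remove the group entirely, terminating its instances.
    autoscaling.delete_auto_scaling_group(
        AutoScalingGroupName="awseb-e-xxxxxxxxxx-stack-AWSEBAutoScalingGroup",
        ForceDelete=True,
    )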
Also check your Auto Scaling groups if you have created node groups for an EKS cluster; managed node groups are backed by Auto Scaling groups as well.