I am performing an AWS CodeDeploy blue/green deployment using the "copy the Auto Scaling group" method. For this, I created one Auto Scaling group with two instances. Next I created two target groups, originaltargetgroup and replacementtargetgroup. Then I created an Application Load Balancer whose listener forwards to originaltargetgroup (100% of traffic) and replacementtargetgroup (0% of traffic). When I initiated the blue/green deployment in CodeDeploy with replacementtargetgroup as the target group, it created a copy of the original Auto Scaling group with two new replacement instances.
My problem is that I was unable to reach the two new green instances through the ELB DNS name. I figured out that this is because the green instances were placed in replacementtargetgroup, which is serving 0% of the traffic.
Why didn't the ELB switch all the traffic to replacementtargetgroup? Or am I doing something wrong?
Basically I am confused about how this architecture works. Do I need one target group or two for blue/green deployments? I can't figure it out.
A blue/green deployment with CodeDeploy does not need two ASGs and two target groups.
You only have to provide your existing Auto Scaling group and your existing Elastic Load Balancer as input.
When you trigger a blue/green deployment, the following sequence runs (a configuration sketch follows the list):
CodeDeploy creates a new Auto Scaling group that is an exact replica of your existing ASG.
Once that step completes, you have new EC2 instances: if the existing ASG had two EC2 servers, the new ASG will also have two EC2 servers running.
When the new EC2 servers are provisioned, a deployment is carried out on them so that the application is updated to the new version.
After the deployment completes, the new servers are registered with the existing target group.
Once the new instances are registered and healthy, traffic is rerouted from the old servers to the new servers.
After this, the old servers are deregistered.
Once the old servers are deregistered, CodeDeploy can terminate them based on your deployment group configuration.
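This whole flow is driven by the deployment group's blue/green settings, so you only ever point CodeDeploy at one target group. A rough boto3 sketch of such a deployment group (the application, ASG, role, and target group names below are placeholders, not taken from the question):

    import boto3

    codedeploy = boto3.client("codedeploy")

    # Placeholder names/ARNs -- replace with your own.
    codedeploy.create_deployment_group(
        applicationName="my-app",
        deploymentGroupName="my-app-bluegreen",
        serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",
        autoScalingGroups=["my-app-asg"],  # the existing (blue) ASG
        deploymentStyle={
            "deploymentType": "BLUE_GREEN",
            "deploymentOption": "WITH_TRAFFIC_CONTROL",
        },
        blueGreenDeploymentConfiguration={
            # Clone the blue ASG to provision the green fleet.
            "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
            # Reroute traffic as soon as the green instances are ready.
            "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
            # Terminate the old (blue) instances after they are deregistered.
            "terminateBlueInstancesOnDeploymentSuccess": {
                "action": "TERMINATE",
                "terminationWaitTimeInMinutes": 5,
            },
        },
        # A single target group: CodeDeploy registers the green instances here
        # and deregisters the blue ones, so no second target group is needed.
        loadBalancerInfo={"targetGroupInfoList": [{"name": "my-app-tg"}]},
    )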
Related
I use AWS CodeDeploy with Auto Scaling groups, i.e. blue/green deployment.
I have separately created an Elastic Load Balancer and a target group that point to the EC2 instances, not to the above ASG.
A new deployment seems to be able to add/remove instances from the Elastic Load Balancer perfectly fine, without any relation between the Elastic Load Balancer/target group and the ASG.
How is this possible?
I created an NLB and a Fargate service.
Then I created a target group with target type "ip" for my ECS tasks.
When I manually add a Fargate IP to my target group, it works, but how does scaling work? Suppose ECS has to scale out; I would have to register another IP, but I want that to happen automatically.
Let's say one task is added. How does the Network Load Balancer learn the new task's IP without me adding it manually?
I don't understand what the link between the NLB and the ECS service is. Does Amazon add targets implicitly?
Instead of manually registering the IP of your Fargate task with the target group, you are supposed to configure the ECS service with knowledge of the load balancer you want to use. The ECS service will then automatically register every task that it creates as part of deployments and auto-scaling.
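As a rough boto3 sketch (every name and ARN below is a placeholder): when the service is created with a loadBalancers entry pointing at your "ip"-type target group, ECS keeps the target registrations in sync for you.

    import boto3

    ecs = boto3.client("ecs")

    # Placeholder cluster, task definition, networking and target group values.
    ecs.create_service(
        cluster="my-cluster",
        serviceName="my-service",
        taskDefinition="my-task:1",
        desiredCount=2,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",
            }
        },
        # This is the link between the NLB and the ECS service: ECS registers
        # the IP of every task it starts with this target group and
        # deregisters it when the task stops, so scaling out/in needs no
        # manual target management.
        loadBalancers=[
            {
                "targetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-tg/0123456789abcdef",
                "containerName": "web",
                "containerPort": 8080,
            }
        ],
    )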
From what I've read so far:
An EC2 Auto Scaling group is a simple way to scale your server by running more copies of it, with a load balancer in front of the EC2 instance pool.
ECS is more like Kubernetes: it is used when you need to deploy multiple services in Docker containers that work with each other to form an application, and auto scaling is a feature of ECS itself.
Are there any differences I'm missing? Because if they work the way I understand them, ECS is almost always the superior choice.
You are right: in a very simple sense, an EC2 Auto Scaling group is a way to add/remove (register/deregister) EC2 instances behind a Classic Load Balancer or target groups (ALB/NLB).
ECS, like any container orchestration platform, has two types of scaling:
Cluster auto scaling: add/remove EC2 instances in a cluster when tasks are pending to run.
Service auto scaling: add/remove tasks in a service based on demand; it uses the Application Auto Scaling service behind the scenes, as sketched below.
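To make the second point concrete, a rough boto3 sketch of service auto scaling (the cluster and service names are placeholders):

    import boto3

    aas = boto3.client("application-autoscaling")

    # Placeholder cluster/service names. Service auto scaling is configured
    # through the Application Auto Scaling API, not through an EC2 ASG.
    aas.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=10,
    )

    aas.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            # Keep average service CPU around 60% by adding/removing tasks.
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    )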
I need help achieving a blue/green deployment.
What I currently have:
One Blue environment hosted on Elastic Beanstalk.
One Green environment hosted on Elastic Beanstalk.
Both environments are created by a CloudFormation template, and each has its own ELB.
What I am looking for:
I need to switch traffic from Blue to Green.
First, I need to know which environment is currently live so that I can plan my app deployment to the other environment (see the sketch below).
Once I know the current environment (Blue in this case), I deploy my app to the Green environment, which is then ready to accept traffic.
I need to shift 25% of the traffic to Green and do a health check; if the health check is okay, I shift another 25% and check again, and so on.
If the health check fails at any point, I should be able to route all traffic back to the Blue environment.
I need to implement this in my CI/CD pipeline. My CI job builds the package and uploads it to S3. My CD job provisions the infrastructure (Elastic Beanstalk) and deploys the package to the newly created environment.
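For the "which environment is live" part, a rough boto3 sketch, assuming the environments follow Elastic Beanstalk's CNAME-swap pattern (the application name, environment names, and production CNAME below are made up):

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Placeholder production CNAME and names.
    PROD_CNAME = "myapp-prod.eu-west-1.elasticbeanstalk.com"

    envs = eb.describe_environments(
        ApplicationName="myapp",
        EnvironmentNames=["myapp-blue", "myapp-green"],
    )["Environments"]

    # With the CNAME-swap pattern, the environment whose CNAME currently
    # matches the production CNAME is live; the other one is the idle target.
    live = next(e for e in envs if e["CNAME"] == PROD_CNAME)
    idle = next(e for e in envs if e["CNAME"] != PROD_CNAME)
    print("live:", live["EnvironmentName"], "idle:", idle["EnvironmentName"])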
You can't control a deployment on AWS Elastic Beanstalk like that, since blue/green there means having two live environments and doing a CNAME swap. Not exactly what you're trying to achieve, but something close to it is called immutable deployments, which are available out of the box.
From the documentation:
To perform an immutable environment update, Elastic Beanstalk creates a second, temporary Auto Scaling group behind your environment's load balancer to contain the new instances. First, Elastic Beanstalk launches a single instance with the new configuration in the new group. This instance serves traffic alongside all of the instances in the original Auto Scaling group that are running the previous configuration.
When the first instance passes health checks, Elastic Beanstalk launches additional instances with the new configuration, matching the number of instances running in the original Auto Scaling group. When all of the new instances pass health checks, Elastic Beanstalk transfers them to the original Auto Scaling group, and terminates the temporary Auto Scaling group and old instances.
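A minimal sketch of enabling this per environment with boto3 (the environment name is a placeholder; the same option can also be set via .ebextensions or the console):

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Placeholder environment name. With DeploymentPolicy set to "Immutable",
    # Elastic Beanstalk follows the temporary-ASG flow quoted above on every
    # application deployment.
    eb.update_environment(
        EnvironmentName="myapp-prod",
        OptionSettings=[
            {
                "Namespace": "aws:elasticbeanstalk:command",
                "OptionName": "DeploymentPolicy",
                "Value": "Immutable",
            },
        ],
    )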
After CodeDeploy clones the Auto Scaling group, it leaves the load balancer field empty. This leads to the following problem: when an instance's web server dies, the ASG never learns from the ELB that the instance is "down", so the instance is not replaced automatically.
However, if I set the load balancer manually, everything works fine afterwards.
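Scripted, that manual fix looks roughly like this with boto3 (the ASG name and target group ARN are placeholders for my real ones):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Placeholder names/ARN. Once CodeDeploy has finished the deployment,
    # attach the cloned (green) ASG to the target group again.
    autoscaling.attach_load_balancer_target_groups(
        AutoScalingGroupName="CodeDeploy_my-deployment-group_d-ABCDEF123",
        TargetGroupARNs=[
            "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-tg/0123456789abcdef"
        ],
    )

    # Optionally have the ASG replace instances that fail the ELB health check.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="CodeDeploy_my-deployment-group_d-ABCDEF123",
        HealthCheckType="ELB",
        HealthCheckGracePeriod=300,
    )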
I watched how the new ASG is cloned. It is possible to suspend some processes while an instance is booting, so as I understand it, CodeDeploy suspends all actions related to the ELB because it uses its own automation to detach the old instances from the ELB and attach the new ones.
[Screenshot: the fresh ASG]
I don't use any custom attach or detach scripts myself.
Otherwise the deployment runs fine, and the new instances are created correctly.
I spoke with the team, and apparently what's going on is that CodeDeploy now manages the load balancer for you. It is confusing for customers not to see the ELB associated with that Auto Scaling group. This lets CodeDeploy control the process and make sure that the deployment finishes before it binds the instances to the load balancer.
-Asaf