After CodeDeploy clones the Auto Scaling group, it leaves the LoadBalancer field empty. This leads to the following problem: when an instance's web server dies, the ELB never sees the instance as "down", so the instance is not replaced automatically.
However, if I set the load balancer manually, it works fine afterwards.
I watched how the new ASG is cloned. There is an option to suspend some processes while instances are booting, so as I understand it, CodeDeploy suspends all ELB-related processes because it uses its own automation to detach the old instances from the ELB and attach the new ones.
[screenshot: fresh ASG]
I don't use any custom attach or detach scripts myself.
Otherwise the deployment runs fine, and the new instances are created correctly.
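For reference, this is roughly what the manual fix looks like with boto3; the group and load balancer names below are placeholders, not my real ones:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Inspect the cloned group: LoadBalancerNames / TargetGroupARNs come back empty,
    # and SuspendedProcesses shows what CodeDeploy paused during the copy.
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["my-cloned-asg"]
    )["AutoScalingGroups"][0]
    print(group["LoadBalancerNames"], group["TargetGroupARNs"], group["SuspendedProcesses"])

    # Re-attach the load balancer manually. For a Classic ELB:
    autoscaling.attach_load_balancers(
        AutoScalingGroupName=group["AutoScalingGroupName"],
        LoadBalancerNames=["my-classic-elb"],
    )
    # Or, for an ALB/NLB target group:
    # autoscaling.attach_load_balancer_target_groups(
    #     AutoScalingGroupName=group["AutoScalingGroupName"],
    #     TargetGroupARNs=["arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/my-tg/0123456789abcdef"],
    # )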
I spoke with the team, and apparently what's going on is that CodeDeploy now manages the load balancer for you. I agree it is very confusing for customers not to see the ELB associated with that Auto Scaling group. This behaviour lets CodeDeploy make sure the deployment finishes before binding the instances to the load balancer.
-Asaf
I'm having an issue with AWS. I deployed an application using Terraform, but when I try to destroy it, the process doesn't finish because of a subnet. That subnet was related to an EC2 instance that doesn't exist anymore.
If I try to remove it via the AWS console, it says there is a network interface using that subnet. OK, but when I try to remove the network interface, it says it is in use, even though the one thing that could be using it, the EC2 instance, was terminated. Would you know how I can get rid of this network interface?
Thanks in advance!
I tried to remove the components individually in the AWS console without success.
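In case it helps, this is roughly how the leftover interfaces can be inspected with boto3 (the subnet ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # List every network interface still sitting in the stuck subnet.
    # The Description / RequesterId fields usually reveal which AWS service
    # (an ELB/ALB, a NAT gateway, a Lambda, ...) actually owns the ENI.
    enis = ec2.describe_network_interfaces(
        Filters=[{"Name": "subnet-id", "Values": ["subnet-0123456789abcdef0"]}]
    )["NetworkInterfaces"]

    for eni in enis:
        print(eni["NetworkInterfaceId"], eni["Status"],
              eni.get("Description"), eni.get("Attachment", {}).get("InstanceId"))

    # An ENI that is merely "available" can be deleted directly; one that is
    # "in-use" by a service (Description like "ELB app/...") only goes away
    # when that service itself is deleted.
    for eni in enis:
        if eni["Status"] == "available":
            ec2.delete_network_interface(NetworkInterfaceId=eni["NetworkInterfaceId"])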
I think I figured out what happened. When I first ran terraform apply, I had set up two availability zones. Then I decided to use just one availability zone, because I only wanted to run one instance of the application. The catch is that an ALB requires at least two subnets in different availability zones, so the load balancer could not follow the single-AZ setup. When I ran terraform apply with the new configuration, it applied the change only partially, leaving an ALB behind.
After removing the ELB from the Terraform configuration, everything worked fine!
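For anyone hitting the same thing, the leftover ALB can also be found and deleted outside Terraform, roughly like this (the subnet ID is a placeholder):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Find the load balancer that is still holding ENIs in the subnet, then
    # delete it; its network interfaces are released automatically.
    for lb in elbv2.describe_load_balancers()["LoadBalancers"]:
        if "subnet-0123456789abcdef0" in [az["SubnetId"] for az in lb["AvailabilityZones"]]:
            print("deleting", lb["LoadBalancerArn"])
            elbv2.delete_load_balancer(LoadBalancerArn=lb["LoadBalancerArn"])

Deleting it outside Terraform leaves the state file out of sync, so a terraform state rm or refresh may be needed afterwards.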
My main issue is trying to work out why my health checks are failing on ECS.
My setup
I have successfully set up an ECS cluster using an EC2 Auto Scaling group. All the EC2 instances are in private subnets with NAT gateways.
I have a load balancer connected to the target group, which is linked to ECS.
When I try to get an HTTP response from the load balancer from my local machine, it times out, so I am obviously not getting responses back from the containers.
I have been able to SSH into the EC2 instances and confirmed the following:
ECS is deploying containers onto the EC2 instances, then after some time killing them and firing them up again
I can curl the health-check endpoint from the EC2 instance (localhost) and it responds successfully
I can reach the internet from the EC2 instance, e.g. curl google.com returns an HTML response
My question: there seem to be two different types of health check going on, and I can't figure out which is which.
ELB health-checks
The ELB seems, as far as I can tell, to use the health-checks defined in the target group.
The target group is defined as a list of EC2 instances. So does that mean the ELB is sending requests to the instances to see if they are running?
This would of course fail because we cannot guarantee that ECS will have deployed a container to each instance.
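One way to see exactly what the load balancer thinks of each registered target is to dump the target health, e.g. with boto3 (the target group ARN is a placeholder):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Each registered target (instance + port) comes back with a state and,
    # when unhealthy, a reason such as Target.Timeout or Target.FailedHealthChecks.
    health = elbv2.describe_target_health(
        TargetGroupArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/my-tg/0123456789abcdef"
    )
    for desc in health["TargetHealthDescriptions"]:
        print(desc["Target"]["Id"], desc["Target"].get("Port"),
              desc["TargetHealth"]["State"], desc["TargetHealth"].get("Reason"))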
ECS health-checks
ECS however is responsible for deploying containers into these instances, in what could turn out to be a many-to-many relationship.
So surely ECS would be querying the actual running containers to find out if they are healthy and then killing them if required.
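For context, ECS container health checks (if used at all) are defined in the task definition and run inside the container, separately from whatever the target group probes from the outside. A minimal sketch of what that looks like when registering a task definition with boto3; the image, port, and endpoint are placeholders:

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="my-task",
        containerDefinitions=[{
            "name": "web",
            "image": "111122223333.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest",
            "memory": 512,
            "portMappings": [{"containerPort": 8080, "hostPort": 0, "protocol": "tcp"}],
            # Container-level health check, executed inside the container by the ECS agent.
            "healthCheck": {
                "command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
                "interval": 30,
                "timeout": 5,
                "retries": 3,
                "startPeriod": 60,
            },
        }],
    )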
My confusion / question
I don't really understand what role the ELB has in managing the EC2 instances in this context.
It doesn't seem like the EC2 instances are being stopped and started. However, from reading the docs it sounds as if the ASG / ELB will manage the EC2 instances and replace them if they fail the health check.
Does ECS somehow override this default behaviour and take responsibility for running the healthchecks instead of the ELB?
And if not, won't the health check just fail on any EC2 instance that happens not to have a container running on it?
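Regarding whether the Auto Scaling group replaces instances that fail the load-balancer health check: that depends on the group's health check type, which can be inspected and changed roughly like this (the group name is a placeholder):

    import boto3

    autoscaling = boto3.client("autoscaling")

    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["my-ecs-asg"]
    )["AutoScalingGroups"][0]
    # "EC2"  = replace only instances that fail EC2 status checks;
    # "ELB"  = also replace instances the load balancer reports as unhealthy.
    print(group["HealthCheckType"], group["HealthCheckGracePeriod"])

    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="my-ecs-asg",
        HealthCheckType="EC2",          # a common choice for ECS container instances
        HealthCheckGracePeriod=300,
    )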
I am performing an AWS CodeDeploy blue/green deployment using the "swap the Auto Scaling group" method. For this, I created one Auto Scaling group with two instances. Next, I created two target groups, originaltargetgroup and replacementtargetgroup. Then I created an Application Load Balancer with a listener forwarding to originaltargetgroup (100% of traffic) and replacementtargetgroup (0% of traffic). When I initiated the blue/green deployment in CodeDeploy with the target group set to replacementtargetgroup, it created a copy of the original Auto Scaling group with two new replacement instances.
My problem is that I was unable to access the two new green instances through the ELB DNS name. I figured out that this is because the green instances were placed in replacementtargetgroup, which is serving 0% of the traffic.
Why didn't the ELB switch all the traffic to replacementtargetgroup, or am I doing something wrong?
Basically, I am confused about how this architecture works. Do I have to create one target group or two target groups for blue/green deployments? I can't figure it out.
A blue/green deployment with CodeDeploy does not need two ASGs and two target groups.
You only have to provide your existing Auto Scaling group and existing Elastic Load Balancer as input (see the sketch after the steps below).
When you trigger a blue/green deployment, the following sequence takes place:
A new Auto Scaling group is created by CodeDeploy; it is an exact replica of your existing ASG.
Once that step is complete, you have new EC2 instances. If the existing ASG had two EC2 servers, the new ASG will also have two EC2 servers running.
When the new EC2 servers are provisioned, a deployment is carried out on them so that the application is updated to the new version.
After the deployment is complete, the new servers are registered with the existing target group.
Once the new instances are registered and healthy, traffic is rerouted from the old servers to the new servers.
After this, the old servers are deregistered.
When the old servers are deregistered, CodeDeploy can terminate them (or keep them running) based on the deployment group's configuration.
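A minimal boto3 sketch of such a deployment group, with a single target group and CodeDeploy copying the ASG for the green fleet; all names and ARNs are placeholders:

    import boto3

    codedeploy = boto3.client("codedeploy")

    codedeploy.create_deployment_group(
        applicationName="my-app",
        deploymentGroupName="my-app-bg",
        serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
        autoScalingGroups=["my-existing-asg"],          # the blue fleet
        deploymentStyle={
            "deploymentType": "BLUE_GREEN",
            "deploymentOption": "WITH_TRAFFIC_CONTROL",
        },
        loadBalancerInfo={
            # One target group: CodeDeploy swaps instances in and out of it.
            "targetGroupInfoList": [{"name": "originaltargetgroup"}],
        },
        blueGreenDeploymentConfiguration={
            "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
            "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
            "terminateBlueInstancesOnDeploymentSuccess": {
                "action": "TERMINATE",
                "terminationWaitTimeInMinutes": 5,
            },
        },
    )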
We have started using ECS, and we are not quite sure whether the behaviour we are experiencing is correct and, if it is, how to work around it.
We set up a Beanstalk Docker multicontainer environment, which uses ECS behind the scenes to manage everything, and that has been working just fine. Yesterday we created a standalone ECS cluster "ecs-int", a task definition "ecs-int-task" and a service "ecs-int-service" associated with a load balancer "ecs-int-lb", and we added one instance to the cluster.
When the service first ran, it worked fine and we were able to reach the Docker service through the load balancer. While we were playing with the security group of the instance in the "ecs-int" cluster, we mistakenly removed the port rule the container was using; the LB health check started failing and the instance was drained out of the load balancer. When that happened, to our surprise, the service "ecs-int-service" and the task "ecs-int-task" automatically moved to the Beanstalk cluster and started running there, creating an issue for our Beanstalk app.
While setting up the service, we set the placement strategy to "AZ Balanced Spread".
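For completeness, this is roughly what that placement strategy looks like when the service is created with boto3 (simplified, load balancer wiring omitted):

    import boto3

    ecs = boto3.client("ecs")

    # "AZ Balanced Spread" in the console corresponds to spreading tasks
    # across Availability Zones first, then across container instances.
    ecs.create_service(
        cluster="ecs-int",
        serviceName="ecs-int-service",
        taskDefinition="ecs-int-task",
        desiredCount=1,
        placementStrategy=[
            {"type": "spread", "field": "attribute:ecs.availability-zone"},
            {"type": "spread", "field": "instanceId"},
        ],
    )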
Should the service move between clusters? Shouldn't the service be attached only to the cluster it was originally created in? And if this is the normal behaviour, how can we set a rule so that the service stays within the same cluster even if its instances fail the health check for some reason?
Thanks
I have re-created all the infrastructure and the problem went away. As I suspected, services created in one cluster should not move to a different cluster when instances fail.
I have an Amazon EC2 instance with Auto Scaling and a load balancer.
I deployed an application and configured Apache.
Everything went fine, but Amazon for some reason terminated my instance and started a new one, and I lost all the code and configuration that was on it.
What should I do?
Maybe attach an EBS volume and deploy everything there? But my Apache server is installed on the root volume.
Can anyone help me?
If you are using Auto Scaling, instances will be terminated if they become unhealthy. In order to use Auto Scaling effectively, you should not keep any persistent data on the instance itself. This is called a shared-nothing architecture.
What you want to do is create an AMI that has your application and/or the tools to bootstrap it. You then use this AMI in the launch configuration for your Auto Scaling group, so that if a new instance gets launched, whether due to failure or scaling, your application comes back up without any interaction from you.
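A rough boto3 sketch of that flow, using a launch template (which plays the same role as a launch configuration); all names and IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")
    autoscaling = boto3.client("autoscaling")

    # 1. Bake an AMI from an instance that already has Apache, the app code, etc.
    #    (in practice, wait for the image to become available before using it).
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",
        Name="my-app-v1",
        NoReboot=True,
    )

    # 2. Put that AMI in a launch template.
    ec2.create_launch_template(
        LaunchTemplateName="my-app-lt",
        LaunchTemplateData={
            "ImageId": image["ImageId"],
            "InstanceType": "t3.small",
        },
    )

    # 3. Point the Auto Scaling group at it, so replacement instances
    #    come up fully configured with no manual work.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="my-app-asg",
        LaunchTemplate={"LaunchTemplateName": "my-app-lt", "Version": "$Latest"},
    )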