I need help achieving a blue-green deployment.
What I have:
One blue environment hosted on Elastic Beanstalk.
One green environment hosted on Elastic Beanstalk.
Both environments are created by a CloudFormation template, and each has its own ELB.
What I am looking for:
I need to switch traffic from blue to green.
First, I need to know which environment is currently live so that I can plan my app deployment to the other environment.
Once I know the current environment (blue in this case), I deploy my app to the green environment, which is then ready to accept traffic.
I need to shift 25% of the traffic to green and run a health check; if the health check passes, I shift another 25%, and so on.
If a health check fails at any point, I should be able to route all traffic back to the blue environment.
I need to implement this in my CI/CD pipeline. My CI job builds the package and uploads it to S3. My CD job provisions the infrastructure (Elastic Beanstalk) and deploys the package to the newly created environment.
You can't control traffic shifting on AWS Elastic Beanstalk like that, since its blue-green model involves keeping two live environments and doing a CNAME swap, which moves all traffic at once. Not exactly what you're trying to achieve, but something close, called immutable deployments, is available out of the box.
From the documentation:
To perform an immutable environment update, Elastic Beanstalk creates
a second, temporary Auto Scaling group behind your environment's load
balancer to contain the new instances. First, Elastic Beanstalk
launches a single instance with the new configuration in the new
group. This instance serves traffic alongside all of the instances in
the original Auto Scaling group that are running the previous
configuration.
When the first instance passes health checks, Elastic Beanstalk
launches additional instances with the new configuration, matching the
number of instances running in the original Auto Scaling group. When
all of the new instances pass health checks, Elastic Beanstalk
transfers them to the original Auto Scaling group, and terminates the
temporary Auto Scaling group and old instances.
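For the first two steps of the question (find the live environment, then swap), a minimal boto3 sketch might look like the following; the CNAME and environment names are hypothetical. Note that swap_environment_cnames moves all traffic at once, so the 25% increments are not achievable this way:

import boto3

eb = boto3.client("elasticbeanstalk")

def live_environment(prod_cname, candidates):
    """Return whichever candidate environment currently owns the production CNAME."""
    envs = eb.describe_environments(EnvironmentNames=candidates)["Environments"]
    for env in envs:
        if env["CNAME"] == prod_cname:
            return env["EnvironmentName"]
    raise RuntimeError("No candidate environment owns " + prod_cname)

# Hypothetical names; substitute your own.
PROD_CNAME = "myapp-prod.us-east-1.elasticbeanstalk.com"
blue, green = "myapp-blue", "myapp-green"

current = live_environment(PROD_CNAME, [blue, green])
target = green if current == blue else blue

# Deploy the new application version to `target` here and wait for it to
# report healthy, then swap CNAMEs; all traffic moves to `target` at once.
eb.swap_environment_cnames(
    SourceEnvironmentName=current,
    DestinationEnvironmentName=target,
)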
I am performing an AWS CodeDeploy blue/green deployment using the "copy the Auto Scaling group" method. For this, I created one Auto Scaling group with two instances. Next, I created two target groups, originaltargetgroup and replacementtargetgroup. Then I created an Application Load Balancer whose listener forwards to originaltargetgroup (100% of traffic) and replacementtargetgroup (0% of traffic). When I initiated the blue/green deployment in CodeDeploy with replacementtargetgroup as the target group, it created a copy of the original Auto Scaling group with two new replacement instances.
My question is that I was unable to reach the two new green instances through the load balancer's DNS name. I figured out that this is because the green instances were placed in replacementtargetgroup, which serves 0% of the traffic.
Why didn't the load balancer switch the traffic to replacementtargetgroup? Or maybe I am doing something wrong.
Basically, I am confused about how this architecture works. Do I need one target group or two for blue/green deployments? I can't figure it out.
A blue/green deployment with CodeDeploy does not need two Auto Scaling groups and two target groups.
You only have to provide your existing Auto Scaling group and existing load balancer as input.
When you trigger a blue/green deployment, the following sequence runs:
1. CodeDeploy creates a new Auto Scaling group that is an exact replica of your existing one.
2. Once that step completes, you have new EC2 instances: if the existing group had two servers, the new group also runs two servers.
3. When the new servers are provisioned, a deployment runs on them so that the application is updated to the new version.
4. After the deployment completes, the new servers are registered with the existing target group.
5. Once the new instances are registered and healthy, traffic is rerouted from the old servers to the new ones.
6. After that, the old servers are deregistered.
7. Once the old servers are deregistered, CodeDeploy can terminate them, depending on the deployment group's configuration (see the sketch below).
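As a rough illustration of "one Auto Scaling group and one load balancer as the only inputs", here is a hedged boto3 sketch of such a deployment group; the application, deployment group, Auto Scaling group, and target group names are hypothetical:

import boto3

cd = boto3.client("codedeploy")

cd.update_deployment_group(
    applicationName="myapp",                  # hypothetical
    currentDeploymentGroupName="myapp-dg",    # hypothetical
    autoScalingGroups=["myapp-asg"],          # the one existing ASG
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    # One target group is enough: CodeDeploy registers the green instances
    # here and deregisters the blue ones after rerouting.
    loadBalancerInfo={"targetGroupInfoList": [{"name": "originaltargetgroup"}]},
    blueGreenDeploymentConfiguration={
        # Copy the existing ASG to provision the green fleet (step 1 above).
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
        # Reroute as soon as the green instances are registered and healthy.
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        # Terminate the deregistered blue instances after a short wait (step 7).
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 5,
        },
    },
)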
My Elastic Beanstalk environment has a "Scaling Trigger" on "CPUUtilization", and it works well.
The problem is that I cannot combine this with a mechanism that automatically reboots (or terminates) instances that have been considered "OutOfService" for a certain amount of time.
Under "Scaling > Scaling Trigger > Trigger measurement" there is an "UnHealthyHostCount" option, but that won't solve my problem well: it creates new instances as long as one is unhealthy, which makes the environment grow to its limit for no real reason. Also, I cannot combine two trigger measurements, and I need the CPU one.
The problem becomes critical when there is only one instance in the environment and it goes OutOfService: the whole environment dies, and the trigger measurement never fires.
If you use a Classic Load Balancer with your Elastic Beanstalk environment, you can go to EC2 -> Auto Scaling Groups
and change the Health Check Type of the Auto Scaling group from EC2 to ELB.
By doing this, instances in your Elastic Beanstalk environment will be terminated once they stop responding, and a new instance will be created to replace each terminated one.
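If you prefer to script the same change, a minimal boto3 sketch (the group name is hypothetical; Elastic Beanstalk generates one per environment, typically starting with "awseb-"):

import boto3

asg = boto3.client("autoscaling")

asg.update_auto_scaling_group(
    AutoScalingGroupName="awseb-e-xxxxxxxxxx-stack-AWSEBAutoScalingGroup-XXXXXXXX",  # hypothetical
    HealthCheckType="ELB",       # replace instances that fail ELB health checks
    HealthCheckGracePeriod=300,  # give new instances time to boot before checking
)

Keep in mind that changes made directly to Beanstalk-managed resources can be overwritten on the next environment update; the .ebextensions example further down persists the setting.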
AWS Elastic Beanstalk uses AWS Auto Scaling to manage the creation and termination of instances, including the replacement of unhealthy instances.
AWS Auto Scaling can integrate with the ELB (load balancer), also automatically created by Elastic Beanstalk, for health checks. ELB has a health check functionality. If the ELB detects that an instance is unhealthy, and if Auto Scaling has been configured to rely on ELB health checks (instead of the default EC2-based health checks), then Auto Scaling automatically replaces that instance that was deemed unhealthy by ELB.
So all you have to do is configure the ELB health check properly (you seem to have it correctly configured already, since you mentioned that you can see the instance being marked as OutOfService), and you also have to configure the Auto Scaling Group to use the ELB health check.
For more details on this subject, including the specific steps to configure all this, check these 2 links from the official documentation:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.healthstatus.html#using-features.healthstatus.understanding
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environmentconfig-autoscaling-healthchecktype.html
This should solve the problem. If you have trouble with that, please add a comment with any additional info that you might have after trying this.
Cheers!
You can set up a CloudWatch alarm that reboots the unhealthy instance, using the StatusCheckFailed_Instance metric.
For detailed information on each step, go through the "Adding Reboot Actions to Amazon CloudWatch Alarms" section of the AWS documentation.
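A minimal boto3 sketch of such an alarm (the instance ID and region are hypothetical; the reboot action ARN must name the region the instance runs in):

import boto3

cw = boto3.client("cloudwatch")

cw.put_metric_alarm(
    AlarmName="reboot-on-failed-status-check",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_Instance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=3,  # the status check must fail for three consecutive minutes
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],  # built-in reboot action
)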
If you want Auto Scaling to replace instances whose application has stopped responding, you can use a configuration file to configure the Auto Scaling group to use Elastic Load Balancing health checks. The following example sets the group to use the load balancer's health checks, in addition to the Amazon EC2 status check, to determine an instance's health.
Example .ebextensions/autoscaling.config
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: ELB
      HealthCheckGracePeriod: 300
See: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environmentconfig-autoscaling-healthchecktype.html
I have two machines running in an Elastic Beanstalk environment.
One of them has been down since the last deployment.
I was hoping that the Auto Scaling configuration would launch a new machine, since only a single machine is left available.
That didn't happen, and I'm trying to figure out what's wrong with my Auto Scaling configuration:
The first thing I see is that your rules contradict each other: if the number of unhealthy hosts is above 0, add a single host; if it is below 2, remove a single host. That may explain why you aren't seeing anything happen with your trigger.
Scaling triggers add or remove EC2 instances in your Auto Scaling group. That is useful for bringing in additional instances to maintain the same amount of computational power for your application while you investigate what caused the bad instance to fail, but it will not replace the instance.
To set up your instances to terminate after being unhealthy for a certain period, you can follow the documentation here.
By default the ELB pings port 80 over TCP; that, along with the on-host EC2 instance status check, determines the "health" of the EC2 instance. You can specify an Application health check URL to set up a customized health check that your application answers. Check out the more detailed customization of Beanstalk ELBs here.
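Under the hood, the Application health check URL maps onto the load balancer's health check target. If you want to set the check directly on a Classic Load Balancer, a hedged boto3 sketch (the load balancer name and path are hypothetical):

import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

elb.configure_health_check(
    LoadBalancerName="awseb-myapp-elb",  # hypothetical
    HealthCheck={
        "Target": "HTTP:80/health",  # an HTTP 200 from this path marks the instance healthy
        "Interval": 30,
        "Timeout": 5,
        "HealthyThreshold": 3,
        "UnhealthyThreshold": 5,
    },
)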
I created an Elastic Beanstalk environment, and it created an EC2 instance. Then I thought I didn't actually need it yet, so I stopped the EC2 instance, but it seemed to start another one.
So my question is: if I have an EB environment, will I be charged by the hour for the underlying EC2 instance all the time, or only when the service it provides is being accessed via the public Elastic IP? And if I'm charged all the time, is there a way to halt an Elastic Beanstalk application, or can I only delete it and instantiate a new environment later?
The auto scaling feature of Elastic Beanstalk will automatically start another instance if a current instance continues to fail a health check. Stopping individual instances outside of the environment will cause failed health checks and trigger a new instance to be spun up.
You will be charged when the components within the environment are running as stated by Amazon here:
There is no additional charge for Elastic Beanstalk – you only pay for the underlying AWS resources (e.g. Amazon EC2, Amazon S3) that your application consumes.
You can completely stop an environment through the CLI. I gave this answer to a previous question about starting and stopping Elastic Beanstalk:
The EB command line interface has an eb stop command. Here is a little bit about what the command actually does:
The eb stop command deletes the AWS resources that are running your application (such as the ELB and the EC2 instances). It however leaves behind all of the application versions and configuration settings that you had deployed, so you can quickly get started again. eb stop is ideal when you are developing and testing your application and don't need the AWS resources running overnight. You can get going again by simply running eb start.
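Note that eb stop comes from an older generation of the EB CLI; newer versions expose this as eb terminate. Programmatically, the rough equivalent is a minimal boto3 sketch like this (the environment name is hypothetical):

import boto3

eb = boto3.client("elasticbeanstalk")

# Tears down the ELB, Auto Scaling group, and EC2 instances, but leaves the
# application versions and saved configurations in place for a later restart.
eb.terminate_environment(EnvironmentName="myapp-dev")  # hypothetical name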
I keep killing the default instance and it keeps coming back. Why?
This answer is based on the assumption that you are facing a specific issue I've seen several users stumble over, but your question is a bit short on detail, so I might be misinterpreting your problem.
Background
The AWS Toolkit for Visual Studio allows you to deploy applications to AWS Elastic Beanstalk, which is a Platform as a Service (PaaS) offering allowing you to quickly deploy and manage applications in the AWS cloud:
You simply upload your application, and Elastic Beanstalk
automatically handles the deployment details of capacity provisioning,
load balancing, auto-scaling, and application health monitoring.
You deploy an application to Elastic Beanstalk into an Environment comprised of an Elastic Load Balancer and corresponding Auto Scaling policies, which together ensure your application will keep running even if the EC2 instance is having trouble servicing the requests for whatever reason (see Architectural Overview for an explanation and illustration of how these components work together).
That is, your Amazon EC2 instances are managed by default, so you don't need to administer the infrastructure yourself, but the specific characteristic of this AWS PaaS variation is that you still can do that:
At the same time, with Elastic Beanstalk, you retain full control over
the AWS resources powering your application and can access the
underlying resources at any time.
Now that's exactly what you unintentionally did by terminating the EC2 instance via a mechanism outside of the Elastic Beanstalk service: the load balancer detects the missing instance and, driven by those Auto Scaling policies, triggers the creation of a replacement.
Solution
Long story short, you need to terminate the Elastic Beanstalk environment instead, as illustrated in section Step 6: Clean Up within the AWS Elastic Beanstalk Walkthrough (there is a dedicated section for the Elastic Beanstalk service within the AWS Management Console).
You can also do this via Visual Studio, as explained in step 11 at the bottom of How to Deploy the PetBoard Application Using AWS Elastic Beanstalk:
To delete the deployment, expand the Elastic Beanstalk node in AWS
Explorer, and then right-click the subnode for the deployment. Click
Delete. AWS Elastic Beanstalk will begin the deletion process, which
may take a few minutes. If you specified a notification email address
for the deployment, AWS Elastic Beanstalk will send status notifications
for the delete process to this address.