AWS CodeDeploy shows successful even when deployment fails - amazon-web-services

I have an AWS CodeDeploy deployment group that deploys to 3 instances. No matter which deployment configuration I set (OneAtATime, HalfAtATime, AllAtOnce), or even with a custom one (HOST_COUNT, minimum healthy hosts = 2; I cannot set 3, because that is not how CodeDeploy works), sometimes the deployment succeeds even though only 2 of the 3 instances deployed successfully.
I have talked to the AWS support center. They said it is expected, and I understand why. It looks like their calculation only makes sense when there are many instances to deploy.
But in my case, it does not make sense that 2 successes out of 3 counts as success. Is anybody else unhappy with this behavior, and is there any workaround?

CodeDeploy appears to be designed to favor successful overall deployments, so if you want the overall deployment to fail because a single instance deployment failed, then CodeDeploy may not be what you are looking for. For reference, this is the math behind the deployment configurations and overall deployment failure for 3 instances:
AllAtOnce: The overall deployment fails only if ALL 3 instance deployments fail; one successful instance deployment is enough to make the overall deployment successful.
HalfAtATime: The overall deployment fails if 2 instance deployments fail; 2 successful instance deployments make the overall deployment successful.
OneAtATime: The overall deployment fails if the first or the second instance deployment fails; if only the third (last) one fails, the overall deployment still succeeds.
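These rules can be modeled as a single threshold check. The sketch below assumes the thresholds stated above for a 3-instance fleet (minimum healthy hosts of 0 for AllAtOnce, 2 for HalfAtATime and OneAtATime); note it ignores ordering, so it does not capture that OneAtATime stops as soon as a non-last instance fails.

```python
def overall_result(total, failed, min_healthy):
    """Simplified model of the success rule described above (a sketch,
    not CodeDeploy's official algorithm): a deployment is reported
    successful when the number of healthy (successfully deployed)
    instances stays at or above the minimum healthy hosts value, and
    at least one instance deployed successfully."""
    healthy = total - failed
    return healthy >= max(min_healthy, 1)

# Thresholds for a 3-instance fleet, per the math above:
assert overall_result(3, failed=2, min_healthy=0) is True   # AllAtOnce: 1 success is enough
assert overall_result(3, failed=3, min_healthy=0) is False  # AllAtOnce: all 3 fail
assert overall_result(3, failed=1, min_healthy=2) is True   # HalfAtATime, or OneAtATime when only the last fails
assert overall_result(3, failed=2, min_healthy=2) is False  # 2 failures always fail these configs
```

In other words, with 3 instances every built-in configuration tolerates at least one failed instance, which is why 2 out of 3 successes is reported as an overall success.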

Related

Error trying to use CodeDeploy to load code onto an Auto Scaling group of EC2 instances

When I try to run CodeDeploy, I get the following error:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems.
When I look at the EC2 instances created by the Auto Scaling group, they are both running, with a green "2/2 checks passed" status. I wonder whether this is just a catch-all error, since it is apparently thrown whether one, some, or all instances are not running.
It turned out CodeDeploy wanted an appspec.yml. I am not sure why an appspec.yml is needed: if you create the Auto Scaling group and the CodeDeploy application, why is one still required? I am using Jenkins to run the deployment. I gave it an essentially empty appspec.yml, and that seems to work.
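For anyone hitting the same wall: the CodeDeploy EC2/on-premises agent looks for an appspec.yml at the root of the revision bundle to know what to copy where and which lifecycle scripts to run. A minimal sketch (the destination path is just a placeholder):

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/myapp   # placeholder path; adjust to your application
```

Even when there is nothing special to copy or run, the agent still expects the file to exist with at least the version and os keys, which is presumably why an essentially empty appspec gets past the error.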

Codedeploy rollback not picking the previous successful build version

For some reason, the AWS CodeDeploy rollback always seems to pick up the latest version and fails:
Deployment 1 succeeds, and a revision is created in the S3 bucket.
Deployment 2 fails, and the automatic CodeDeploy rollback kicks in.
Deployment 3 (the rollback) also fails, for the same reason as Deployment 2.
The expected CodeDeploy behaviour is that Deployment 3 picks up the Deployment 1 S3 build version.
I am not sure whether there are any missing links between the S3 bucket and CodeDeploy. Any thoughts are much appreciated.
Thank you
Not sure if this applies to your situation specifically, but "strange" rollback behavior of CodeDeploy is documented:
However, if the deployment that failed was configured to overwrite, instead of retain files, an unexpected result can occur during the rollback.
So it is possible that you are observing these "unexpected results", which can occur when a deployment over existing content fails.
You can read up more on that in:
Rollback behavior with existing content
After a bit of investigation with the AWS CLI, I can see the version being rewritten. Things are much clearer through the AWS CLI than in the console.
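As an illustration of the kind of check the CLI makes visible, the sketch below compares the S3 revision recorded for each deployment. The field names mirror the revision block of the `aws deploy get-deployment` response, but the deployment labels and version strings are made-up sample data:

```python
# Hypothetical revisions as reported by `aws deploy get-deployment`
# for three consecutive deployments (sample data, not real output).
deployments = {
    "d-1-success":  {"s3Location": {"bucket": "my-bucket", "key": "app.zip", "version": "v111"}},
    "d-2-failed":   {"s3Location": {"bucket": "my-bucket", "key": "app.zip", "version": "v222"}},
    "d-3-rollback": {"s3Location": {"bucket": "my-bucket", "key": "app.zip", "version": "v222"}},
}

def revision_version(revision):
    """Return the S3 object version a deployment's revision points at."""
    return revision["s3Location"]["version"]

# The symptom described above: the rollback re-used the failed
# deployment's object version instead of the last successful one.
assert revision_version(deployments["d-3-rollback"]) == revision_version(deployments["d-2-failed"])
assert revision_version(deployments["d-3-rollback"]) != revision_version(deployments["d-1-success"])
```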
Thank you for taking the time to post back with a possible answer

AWS CodeDeploy: How to check if the current instance is the last in the deployment and create Cloudfront invalidation?

Hello, I am unfamiliar with shell scripting, but I was wondering: is it possible for the current instance to check whether it is the last instance in the deployment, and create a CloudFront invalidation in one of the hooks?
There is no native way to find out whether the current instance is the last in the deployment, and such a method would be flaky and error-prone anyway. Several deployment configurations govern how the CodeDeploy service deploys code (AllAtOnce/HalfAtATime) [1], so there is not always a single last instance; sometimes a batch of instances is the last to deploy.
So a better engineering approach is to make your CodeDeploy deployment a stage in CodePipeline. As a subsequent stage (after deployment), add a Build stage (CodeBuild) and run the invalidation command from the buildspec in CodeBuild. This ensures that your Build stage (i.e. the invalidation command) runs only after a successful deployment to all instances.
Ref:
[1] https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html
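A sketch of the buildspec for that post-deployment CodeBuild stage (the distribution ID is a placeholder, and the CodeBuild service role would need cloudfront:CreateInvalidation permission):

```yaml
version: 0.2
phases:
  build:
    commands:
      # Invalidate everything; narrow the paths if the deployment only
      # touches part of the distribution. EXXXXXXXXXXXX is a placeholder.
      - aws cloudfront create-invalidation --distribution-id EXXXXXXXXXXXX --paths "/*"
```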

Amazon CodeDeploy with AllAtOnce fails to execute on new CloudFormation stack with only 1 EC2 instance

My CloudFormation stack produces an Auto Scaling group with MinSize and MaxSize set to 1. It also creates a DeploymentGroup that targets this group.
When the deploymentgroup is configured with Configuration name CodeDeployDefault.OneAtATime then the deployment starts successfully.
When the deployment group is configured with the configuration name CodeDeployDefault.AllAtOnce, then upon creation of the stack, CodeDeploy does nothing and there are no events or log traces on the EC2 instance; the only clue is the error code HEALTH_CONSTRAINTS. If I terminate the instance, the Auto Scaling group launches a new one, but CodeDeploy again does nothing. If I manually start a deployment with CodeDeployDefault.OneAtATime, then it works.
From what I've read in the documentation, this should not happen:
There should be no health checks, because no instances exist in the deployment group yet.
CodeDeployDefault.AllAtOnce has a minimum healthy hosts percentage of 0. So perhaps the error is raised because the healthy host count equaled 0 and CodeDeploy decided not to continue, with the health error code.
My understanding of CodeDeployDefault.AllAtOnce is that it should not perform a health check at all, since conceptually there is no point: all instances are configured at the same time.
Is my expectation correct, or am I doing something wrong?
The error code HEALTH_CONSTRAINTS means the CodeDeploy deployment failed and the configured healthy hosts ratio is not satisfied. You might want to go to the AWS CodeDeploy console, click on the deployment that was created during the CloudFormation run, and check why the deployment failed. The difference between CodeDeployDefault.OneAtATime and CodeDeployDefault.AllAtOnce is documented here: http://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html
CodeDeployDefault.AllAtOnce Attempts to deploy an application revision to as many instances as possible at once. The status of the overall deployment will be displayed as Succeeded if the application revision is deployed to one or more of the instances. The status of the overall deployment will be displayed as Failed if the application revision is not deployed to any of the instances. Using an example of nine instances, CodeDeployDefault.AllAtOnce will attempt to deploy to all nine instances at once. The overall deployment will succeed if deployment to even a single instance is successful; it will fail only if deployments to all nine instances fail.
CodeDeployDefault.OneAtATime Deploys the application revision to only one instance at a time.
For deployment groups that contain more than one instance:
The overall deployment succeeds if the application revision is deployed to all of the instances. The exception to this rule is if deployment to the last instance fails, the overall deployment still succeeds. This is because AWS CodeDeploy allows only one instance at a time to be taken offline with the CodeDeployDefault.OneAtATime configuration.
The overall deployment fails as soon as the application revision fails to be deployed to any but the last instance.
In an example using nine instances, it will deploy to one instance at a time. The overall deployment succeeds if deployment to the first eight instances is successful; the overall deployment fails if deployment to any of the first eight instances fails.
For deployment groups that contain only one instance, the overall deployment is successful only if deployment to the single instance is successful.
Since your deployment group contains only a single instance, the overall deployment is successful only if deployment to that instance is successful, regardless of the configuration. Please check the deployment details on the AWS CodeDeploy console to see why the instance deployment failed.

Can I use AWS Codedeploy along with Jenkins for this use case?

I am exploring the approach of using Jenkins to trigger the build process and push the required git branch to an Amazon S3 bucket, and then trigger an AWS CodeDeploy deployment to take it from there to the relevant instances.
Architecture and use cases
We have multiple EC2 instances behind a load balancer.
Sometimes, some of the instances may need to be deployed with a different git branch (to test a feature before rolling it out on all instances), and this may need to be preserved during subsequent deployments.
While multiple git branches are deployed across multiple sets of instances, we may need to deploy other branches on them, depending on their current branches.
Features to be supported
During deployment, a provision to run checks on each individual instance of a deployment group, display the instances on which the checks failed, and then ask for manual confirmation before proceeding. I assume that one or more instances may differ in some way and a check made by one of the scripts (referenced from the appspec file) may fail. I would not want that to cause a build failure; instead I would like to see a report of it, preferably in the AWS deployment dashboard, and have the deployment wait for manual intervention to decide whether to proceed.
A provision to have intervals between deployments to batches of instances within a single deployment group, with manual confirmation to proceed. I already know about the "Deployment Config" feature, which specifies how many instances to deploy at a time (e.g. HalfAtATime). However, our process is to wait a few minutes after deploying to a batch of, say, 10 boxes, manually monitor load, and proceed if everything is fine. This is done manually.
Sorry for getting back so late.
Some of the features requested are not directly available at the moment. However, there are indirect ways of getting around them.
"Sometimes, some of the instances may need to be deployed with a different git branch (to test some feature, before rolling out on all instances and this may need to be kept during subsequent deployments)."
You can have different deployment groups under the same application for the test and production instances.
There is no facility to pause a deployment in between steps to ask for manual confirmation. However, if you want failed checks not to stop the deployment, you can set a safely low minimum healthy hosts criterion and emit logs from all instances to CloudWatch to see detailed per-instance results.
There is no facility to pause a deployment after a batch has completed either. However, you can manually introduce a bake period as part of the deployment and abort the deployment if necessary.
I am sorry I could not help more with your use case. However I hope this helps.
Thanks