AWS CodeDeploy deployment tracking - amazon-web-services

I would like to know if it's possible to track the deployment status of CodeDeploy using the CLI.
Currently, I'm using Bamboo to trigger the CodeDeploy deployment via the CLI using: aws deploy create-deployment ... My Bamboo plan shows green the moment the deployment is triggered instead of checking whether the actual deployment succeeded. Is there a way to let Bamboo/the command line verify that the deployment actually succeeded?
Many thanks!

Your create-deployment call will return a deployment ID. Use it with aws deploy get-deployment --deployment-id XXX to see the status and details of the deployment:
http://docs.aws.amazon.com/cli/latest/reference/deploy/get-deployment.html
You can use aws deploy wait deployment-successful --deployment-id XXX to wait for completion:
http://docs.aws.amazon.com/cli/latest/reference/deploy/wait/deployment-successful.html.
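Putting that together, here is a minimal sketch of what a Bamboo script task could run; the application, deployment group and bucket/key names are placeholders. The wait command blocks until the deployment finishes and exits non-zero on failure, which is what makes the Bamboo build go red:
#!/bin/bash
set -euo pipefail

# Trigger the deployment and capture the deployment ID (placeholder names and bundle location)
DEPLOYMENT_ID=$(aws deploy create-deployment \
  --application-name MyApp \
  --deployment-group-name MyDeploymentGroup \
  --s3-location bucket=my-bundle-bucket,key=app.zip,bundleType=zip \
  --query deploymentId --output text)

# Block until the deployment succeeds; a non-zero exit code fails the Bamboo task
aws deploy wait deployment-successful --deployment-id "$DEPLOYMENT_ID"

# Optional: print the final status for the build log
aws deploy get-deployment --deployment-id "$DEPLOYMENT_ID" \
  --query deploymentInfo.status --output text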

I suggest you use the AWS CodeDeploy task for Bamboo to manage these deployments. It manages the entire process and reports the actual deployment status.
With the AWS CodeDeploy task for Bamboo you can deploy applications to EC2 instances automatically, reliably, and rapidly. Additionally, AWS CodeDeploy keeps track of the whole deployment process.
See https://confluence.atlassian.com/bamboo/using-the-aws-codedeploy-task-750396059.html

Related

AWS CodeDeploy Hook After Finishing Deployment to All Instances?

I have AWS CodeDeploy deploying to a deployment group that targets an Auto Scaling group of EC2 instances, which can scale between a minimum and maximum number of instances.
CodeDeploy hooks can be specified on individual instances to launch scripts on those instances at various stages of the deployment process.
Is there a way to launch a script, Lambda function, etc... after CodeDeploy successfully finishes deploying to the final instance in the ASG? In other words, is there an "All Done With Everything" hook that I can use? How are others tackling and solving this problem?
If you're using CodePipeline, how about adding another stage after the CodeDeploy stage?
Alternatively, you can have AWS CodeDeploy publish deployment status to an SNS topic, or react to its CloudWatch Events.
Here: https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-cloudwatch-events.html
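For example, here is a hedged sketch of a CloudWatch Events rule that fires when a deployment reaches SUCCESS and forwards the event to an SNS topic; the rule name and topic ARN are placeholders:
# Rule matching CodeDeploy deployment state-change events that end in SUCCESS
aws events put-rule \
  --name codedeploy-deployment-succeeded \
  --event-pattern '{
    "source": ["aws.codedeploy"],
    "detail-type": ["CodeDeploy Deployment State-change Notification"],
    "detail": { "state": ["SUCCESS"] }
  }'

# Send matching events to an SNS topic (or a Lambda function) - placeholder ARN
aws events put-targets \
  --rule codedeploy-deployment-succeeded \
  --targets "Id"="1","Arn"="arn:aws:sns:us-east-1:123456789012:deployment-done"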

What's the easiest way to deploy a multi-service Spring/Python project on AWS?

I have created a multi-service Spring/Python project. What's the easiest way to deploy it on the AWS cloud with 4 machines?
You can use several AWS services to achieve this:
Elastic Beanstalk: Upload your code to Elastic Beanstalk; for each newer version, just upload it again and choose a deployment method, and it will automatically be deployed to your machines. You can choose however many instances you want to spin up, along with a load balancer and more (see the EB CLI sketch after this list).
Documentation here
CodePipeline: Push your code to CodeCommit, GitHub or S3 and let the pipeline use CodeBuild and CodeDeploy to deploy it to your EC2 servers.
Documentation here
CloudFormation: Use this service to spin up your infrastructure entirely through code (Infrastructure as Code). Write a template and spin up the instances.
Documentation here
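As a rough illustration of the Elastic Beanstalk route, here is a sketch using the EB CLI, assuming you containerize the services; the application/environment names, platform and instance type are placeholders and the exact options depend on your setup:
# One-time setup: create the application and a load-balanced environment with 4 instances
eb init my-app -p docker --region us-east-1
eb create my-env --instance_type t3.small --scale 4

# For every new version: package and deploy whatever is in the current directory
eb deploy my-env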

AWS CodePipeline deploy process

I am building a CI pipeline with AWS CodePipeline. I'm using CodeBuild to fetch my code from a repo, build a docker image and push the image to ECR. The source for my CodePipeline is my ECR repo and is triggered when an image is updated.
Now, here's the functionality I am looking for. When a new image is pushed to ECR, I want to create an EC2 instance and then deploy the new image to that instance. When the app in the image has completed its task, i.e. done something and pushed the results to S3, I want to terminate the instance. It could take hours to days before the task is complete.
Is CodeDeploy the right tool to use to deploy the ECR image to an EC2 instance for this use case? I see from the docs that CodeDeploy requires an already running instance to deploy to. I need to create one on the fly before CodeDeploy is initiated. Should I add a step in the CodePipeline to trigger a lambda that creates an instance before CodeDeploy gets run?
Any guidance would be much appreciated!
CloudTrail logs a PutImage event that you can use to drive your pipeline. I prefer producing artifacts after specific steps in the build pipeline and then having a Lambda function that reacts to an object-created event. Your Lambda function could then make the necessary calls to spin up EC2 instances. Your instance could run the job and then call Lambda again, which could tear it down. It sounds like you need an on-demand worker; services like AWS Batch or ECS might be able to provide this functionality out of the box.
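A hedged sketch of the "spin up, run the job, clean up" idea, using a self-terminating instance instead of a second Lambda call; the AMI ID, instance profile and ECR image URI are placeholders:
# Launched by the Lambda (or a pipeline step); the instance cleans itself up when done
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.medium \
  --iam-instance-profile Name=my-worker-profile \
  --instance-initiated-shutdown-behavior terminate \
  --user-data file://run-job.sh

# run-job.sh (assumes Docker and the AWS CLI are baked into the AMI):
#   aws ecr get-login-password --region us-east-1 | docker login --username AWS \
#       --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
#   docker run --rm 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
#   shutdown -h now   # with shutdown behavior "terminate", this terminates the instance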

AWS ECS: Monitoring the status of a service update

I am trying to migrate a set of microservices from Docker Swarm, to AWS ECS using Fargate.
I have created an ECS cluster. Moreover, I have initialized repositories using the ECR, each of which contains an image of a microservice.
I have successfully come up with a way to create new images and push them to the ECR. In fact, with each change in the code, a new Docker image is built, tagged, and pushed.
I have also created a task definition that is linked to a service. This task definition contains one container and all the necessary information. Its service specifies that the task will run in a VPC, is linked to a load balancer, and has a target group. I am assuming that every new deployment uses the image with the "latest" tag.
So far with what I have explained, everything is clear and is working well.
Below is the part that is confusing me. After every new build, I would like to update the service so that new tasks with the updated image get deployed. I am using the CLI to do so with the following command:
aws ecs update-service --cluster <cluster-name> --service <service-name>
Typically, after performing the command, I am monitoring the deployment logs, under the event tab, and checking the state of the service using the following command:
aws ecs describe-services --cluster <cluster-name> --service <service-name>
Finally, I tried to simulate a case where the newly created image contains bad code, so the new tasks will not be able to get deployed. What I have witnessed is that Fargate will keep trying (without stopping) to deploy the new tasks. Moreover, aside from the event logs, the describe-services command does not contain relevant information other than what Fargate is doing (e.g., registering/deregistering tasks). I am surprised that I could not find any mechanism that instructs Fargate, or the service, to stop the deployment and roll back to the already existing one.
I found this article (https://aws.amazon.com/blogs/compute/automating-rollback-of-failed-amazon-ecs-deployments/ ), which provides a solution. However, it is a fairly complicated one, and assumes that each new deployment is triggered by a new task definition, which is not what I want.
Therefore, considering what I have described above, I hope you can answer the following questions:
1) Using CLI commands (for automation purposes), is there a way to instruct Fargate to automatically stop the current deployment after it fails to deploy the new tasks a few times?
2) Using CLI commands, is there a way to monitor the current status of the deployment? For instance, when performing a service update on Docker Swarm, the terminal generates live logs of the update process.
3) After a failed deployment, is there a way for Fargate to signal an error code, or flag, or message?
At the moment, ECS does not expose deployment status directly. Once you issue a deployment, there is no way to determine its status other than to continually poll for updates until you have enough information to infer from them. Also, unexpected container exits are not logged anywhere; you have to search through failed tasks. The way I get them is with a CloudWatch Events rule that triggers a Lambda function upon task state changes.
I recommend you read: https://medium.com/@aaron.kaz.music/monitoring-the-health-of-ecs-service-deployments-baeea41ae737
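For reference, a hedged sketch of that kind of rule, forwarding stopped-task events to a Lambda function; the rule name, cluster ARN and function ARN are placeholders:
# Fire whenever a task in the cluster reaches STOPPED (covers unexpected container exits)
aws events put-rule \
  --name ecs-task-stopped \
  --event-pattern '{
    "source": ["aws.ecs"],
    "detail-type": ["ECS Task State Change"],
    "detail": {
      "lastStatus": ["STOPPED"],
      "clusterArn": ["arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster"]
    }
  }'

# Point the rule at the Lambda that inspects the stopped task
# (the function also needs a resource-based permission allowing events.amazonaws.com to invoke it)
aws events put-targets \
  --rule ecs-task-stopped \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:inspect-stopped-task"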
As of now, you have a way to do this:
aws ecs wait services-stable --cluster MyCluster --services MyService
The previous command pauses and returns only once it can confirm that the service running on the cluster is stable. It returns a 255 exit code after 40 failed checks.
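In a deployment script that could look something like this (cluster and service names are placeholders; --force-new-deployment starts a new rolling deployment even though the task definition is unchanged, so the "latest" tag is pulled again):
# Kick off a new rolling deployment of the same task definition
aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment

# Block until the service is stable again; exits with 255 after 40 failed checks (about 10 minutes)
if aws ecs wait services-stable --cluster my-cluster --services my-service; then
  echo "Deployment reached a stable state"
else
  echo "Service did not stabilize - check the service events" >&2
  exit 1
fi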
To cancel a deployment, enable ECS Circuit Breaker when creating your service:
aws ecs create-service \
--service-name MyService \
--deployment-configuration "deploymentCircuitBreaker={enable=true,rollback=true}" \
{...}
References:
Service deployment check.
Circuit Breaker
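If the service already exists, I believe the same deployment configuration can be applied with update-service, so you don't have to recreate it (cluster and service names are placeholders):
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --deployment-configuration "deploymentCircuitBreaker={enable=true,rollback=true}"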

What is a good way to deploy a distributed application using CodeDeploy and a CI tool?

When using AWS, it seems a nice way to deploy an application to a newly created instance is via AWS CodeDeploy. This works as follows:
Set up an auto-scaling group for the application
Write a user-data bash script for the auto-scaling group which pulls the CodeDeploy agent from S3, installs it and starts it
Set up a CodeDeploy deployment group which deploys to the auto-scaling group
Now, when an application bundle (e.g. jar or debian package) is deployed to the deployment group, it will be deployed automatically to new instances launched in the auto-scaling group.
My question is: how can this deployment strategy fit with a CI tool like Travis CI?
Specifically:
How can CodeDeploy pick up a package built by a CI tool like Travis CI? Does the build job need to upload the package to S3?
How can CodeDeploy be used to deploy the application gradually (e.g. one instance at a time)?
Does this deployment strategy require each running instance to be shut down and replaced, or is the new version of the application deployed on the existing instances? If it is the former, machine IP addresses would change during deployment, so how can other services discover the newly deployed application (i.e. without hardcoded IP addresses)?
tl;dr version:
The build job needs to upload the package to S3.
Use the one at a time deployment config.
The new version of the application is deployed on the existing instances.
Ok, here's the long version:
I recommend you try the Deployment Walkthrough or take a look at Concepts in the documentation. It should help you get familiar with CodeDeploy faster.
You don't have to use an AutoScaling group with CodeDeploy if you don't want to. CodeDeploy with AutoScaling integration allows you to manage fleets that need to change in size dynamically separately from the code that is deployed to them, but that is not a requirement to use CodeDeploy. You can also launch some EC2 instances manually, install the host agent, and then tag them into a deployment group - but they won't get deployed to automatically on launch like the AutoScaling instances would. In either case, you can always create fleet wide deployments.
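For completeness, installing the host agent on a manually launched instance looks roughly like this on Amazon Linux; the S3 bucket name is region-specific, so check the install instructions for your OS and region:
# On the instance (Amazon Linux / RHEL style)
sudo yum install -y ruby wget
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent status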
You'll have to do some work to integrate it with your CI tool. CodeDeploy doesn't directly manage your build artifacts, so your build process will need to do that. To have automatic deployments, you will need to (see the sketch after this list):
Create an archive bundle with an appspec.yml, any scripts you need to handle the install/upgrade, and your build artifacts.
Upload the bundle to S3.
Create a deployment in CodeDeploy.
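Here is a sketch of those three steps from a CI job, using aws deploy push to do the zipping and uploading in one go; the bucket, application and deployment-group names are placeholders, and BUILD_NUMBER is whatever your CI tool provides:
# Steps 1 & 2: bundle the build output (which must contain appspec.yml) and upload it to S3
aws deploy push \
  --application-name MyApp \
  --s3-location s3://my-deploy-bucket/myapp-${BUILD_NUMBER}.zip \
  --source ./build-output

# Step 3: create the deployment from that revision
aws deploy create-deployment \
  --application-name MyApp \
  --deployment-group-name MyDeploymentGroup \
  --deployment-config-name CodeDeployDefault.OneAtATime \
  --s3-location bucket=my-deploy-bucket,key=myapp-${BUILD_NUMBER}.zip,bundleType=zip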
You might want to look at CodePipeline as an example of a continuous delivery system that's integrated with CodeDeploy.
CodeDeploy uses deployment configs to control how aggressively it deploys to the instances in your fleet. (This config gets ignored for automatic deployments, since each instance is handled separately.) CodeDeploy will fail your deployment and stop deploying to new instances if it cannot risk failing another instance without violating the constraints in the deployment config.
There are three built-in deployment configs, and you can create your own via the CLI or API if you need a different one. To deploy to only one instance at a time, you can use the CodeDeployDefault.OneAtATime deployment config, which allows at most one unhealthy host at any given time.
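If the built-in configs don't fit, a custom one can be created from the CLI; for example, a sketch of a config requiring at least 75% of the fleet to stay healthy during a deployment (the config name is a placeholder):
aws deploy create-deployment-config \
  --deployment-config-name AtLeast75PercentHealthy \
  --minimum-healthy-hosts type=FLEET_PERCENT,value=75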
For anyone else (like me) looking for an example on how to actually integrate Travis-CI with CodeDeploy:
Configure the Application, DeploymentGroups and instances, as explained in the CodeDeploy walkthrough.
Use aws-cli commands to deploy your first revision successfully to the CodeDeploy target instance.
After you have the application deployed and running, configure Travis to trigger the deployments.
The CodeDeploy appspec.yml file and any scripts used for the deployment should be packaged inside your application bundle (latest.zip in the below example).
The following .travis.yml config worked for me:
script: npm run build
before_deploy:
  - zip -r latest dist/*
  - mkdir -p dpl_cd_upload
  - mv latest.zip dpl_cd_upload/latest.zip
deploy:
  - provider: s3
    access_key_id: "XXXX"
    secret_access_key: "YYYYY"
    bucket: "deployments-bucket-name"
    local_dir: dpl_cd_upload
    skip_cleanup: true
  - provider: codedeploy
    access_key_id: "ZZZZZ"
    secret_access_key: "WWWW"
    bucket: "deployments-bucket-name"
    key: latest.zip
    bundle_type: zip
    application: CodeDeployAppName
    deployment_group: CodeDeployDeploymentGroupName
This example was really useful:
https://github.com/travis-ci/cat-party/blob/master/.travis.yml