The command cdk deploy ... deploys one or more CloudFormation stacks. While it executes, it prints progress messages for the various stacks, and this can take some time.
The deploy command supports the --notification-arns parameter, which takes a list of ARNs of SNS topics that CloudFormation will notify with stack-related events.
Is it possible to execute cdk deploy without having it report its progress to the console (i.e. the command exits immediately after uploading the new CloudFormation assets) and rely solely on the SNS topics as a means of getting feedback on the progress of a deployment?
A quick and dirty way (untested) would be to use nohup
$ nohup cdk ... --require-approval never 1>/dev/null
The --require-approval never flag simply means it won't stop to ask for approval of security-sensitive changes, and nohup allows the command to keep running even if the terminal is closed.
It's the only quick solution I can think of.
Another, longer-term solution would be to use the CdkToolkit to create your own deployment script. Take a look at the cdk command to get an idea. This has been something I've wanted from aws-cdk for a while - I want custom deploy scripts rather than using the shell.
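A slightly fuller variant of the same idea (still untested; the stack name is just a placeholder) would background the process and capture its output so the shell can be closed right away:
$ nohup cdk deploy StackX --require-approval never > cdk-deploy.log 2>&1 &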
I found a solution to asynchronously deploy a cdk app via the --no-execute flag:
cdk deploy StackX --no-execute
change_set_name=$(aws cloudformation list-change-sets --stack-name StackX --query "Summaries[0].ChangeSetId" --output text)
aws cloudformation execute-change-set --change-set-name $change_set_name
For my case this works, because I use this method to deploy new stacks only, so there will only ever be exactly one change set for this stack and I can retrieve it with the query for the entry at index 0. If you wish to update existing stacks, you will have to select the correct change set from the list returned by the list-change-sets command.
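For updates, a sketch along these lines (untested; it assumes the newest change set is the one just created by cdk deploy --no-execute) could pick the latest change set by creation time instead of index 0:
change_set_name=$(aws cloudformation list-change-sets --stack-name StackX --query "sort_by(Summaries, &CreationTime)[-1].ChangeSetId" --output text)
aws cloudformation execute-change-set --change-set-name $change_set_name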
I had a similar issue - I didn't want to keep a console open while waiting for a long-running init script on an EC2 instance to finish. I hit Ctrl-C after publishing was complete and the change set had been created. The deployment kept running and I checked its status in the CloudFormation console. Not perfect and not automation-friendly, but I didn't need to keep the process running on my machine.
I ran this command:
sls deploy function --function [myFunction] -s production
And I got an error saying the function I want to update is not yet deployed.
What could be the problem here?
I was trying to deploy a Lambda function.
I was expecting the function to deploy
Unfortunately you have to create the entire CloudFormation stack first. Run sls deploy -s production to create the Lambda functions, IAM roles, HTTP/EventBridge events, etc.
Once the lambda function and all associated resources are deployed, you can then simply update the lambda function with the command you posted. In general I do like to use the sls deploy function command if I'm only making code changes, because the lambda is a lot quicker to update, but there are situations where you need to deploy the entire service.
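In other words (the commands are taken from the question and this answer; the function name is just an example):
# First full deploy: creates the CloudFormation stack, Lambda functions, IAM roles, events, etc.
sls deploy -s production
# Subsequent code-only changes: faster, updates just the function code
sls deploy function --function myFunction -s production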
I have a Bitbucket pipeline that creates AWS resources using CloudFormation and deploys a website to them. But the deployment fails even though CloudFormation creates the stack correctly. What I think the issue is: when the deployment happens, the CloudFormation S3 bucket creation may not have finished.
I have a Hugo website and I have created a Bitbucket pipeline to deploy it to a server. What it does is create an S3 bucket using CloudFormation to host the website and then upload the Hugo website to it. When I ran the steps in the pipeline manually in a terminal with a delay between each step, everything succeeded. But when it runs in the Bitbucket pipeline it gives an error saying the S3 bucket that I'm trying to upload content to is not available. When I checked in AWS, that bucket is actually there, which means CloudFormation worked correctly. But when the files start to copy, the bucket may not yet be available for uploading. That's my assumption. Is there a workaround for this? When doing it locally I can wait between the two commands of CloudFormation creation and file copying. But how do I handle it in the Bitbucket pipeline environment? Following is my pipeline code.
pipelines:
  pull-requests:
    '**':
      - step:
          script:
            - aws cloudformation create-stack --stack-name demo-web --template-body file://cloudformation.json --parameters ParameterKey=S3BucketName,ParameterValue=demo-web
            - hugo
            - aws s3 cp public/ s3://demo-web/ --recursive
How do I handle this scenario in the correct way? Is there a workaround for this situation, or is the problem that I have identified not the actual problem?
First, to wait in Bitbucket Pipelines you should be able to just use sleep x, where x is the number of seconds you want to sleep.
On a different note - bear in mind that after the first run of this, deployment will potentially fail the next time, as you are using create-stack, which fails if the stack already exists...
Using the AWS CloudFormation API for this is best practice.
There is a wait command just as there is a create-stack command; the wait command and its options halt your processing until the stack reaches CREATE_COMPLETE status before continuing.
Option 1:
Put your stack creation command into a shell script, and as a second step in that shell script invoke the wait stack-create-complete command.
Then invoke that shell script in the pipeline instead of the direct command (a sketch of such a script is at the end of this answer).
Option 2:
In your Bitbucket pipeline, right after your create command, invoke the aws cloudformation wait stack-create-complete command before the hugo/upload commands.
I prefer option one because it allows you to manipulate and trace the creation as you wish.
See documentation: https://docs.aws.amazon.com/cli/latest/reference/cloudformation/wait/stack-create-complete.html
Using the CloudFormation API is great because you don't have to guess how long to wait; once the wait command returns, the stack (and its bucket) is guaranteed to be available.
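A minimal sketch of the Option 1 script, using the stack name and template from the question:
#!/bin/bash
set -e
# Create the stack that provisions the S3 bucket
aws cloudformation create-stack --stack-name demo-web --template-body file://cloudformation.json --parameters ParameterKey=S3BucketName,ParameterValue=demo-web
# Block until the stack reaches CREATE_COMPLETE (exits non-zero if creation fails)
aws cloudformation wait stack-create-complete --stack-name demo-web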
I am using Ansible with CloudFormation
https://docs.ansible.com/ansible/2.4/cloudformation_module.html
But I could not find any way to execute change sets using Ansible.
You would probably need to use the AWS CLI in an ansible command.
Something like:
- name: Execute a specific changeset
  command: aws cloudformation execute-change-set --change-set-name arn:aws:cloudformation:us-east-1:123456789012:changeSet/SampleChangeSet/1a2345b6-0000-00a0-a123-00abc0abc000
Note that you would need to set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables appropriately for the CLI call.
Also keep in mind the following, from the CloudFormation documentation:
After you execute a change set, AWS CloudFormation deletes all change sets that are associated with the stack because they aren't valid for the updated stack. If an update fails, you need to create a new change set.
So you may find it more helpful to create or update the CloudFormation stack directly via the Ansible cloudformation module, rather than create a change set elsewhere that would just be run once from Ansible and subsequently deleted.
I am using the following AWS CLI CloudFormation commands to create and then execute a change set:
aws cloudformation create-change-set --change-set-name change-set-1
aws cloudformation execute-change-set --change-set-name change-set-1
However, the first command returns before the change set has been created, so if I execute the second command immediately it fails.
Solutions I have considered:
Adding a delay between the two commands.
Repeating the second command until it succeeds.
Both of these have their problems.
Ideally there would be an option on the create-change-set command to execute immediately, or to run synchronously and not return until the change set has been created.
Has anyone ever tried this and come up with a better solution than me?
I haven't personally tried it, but maybe you could use the list-change-sets command to loop until your change set has a status of CREATE_COMPLETE, and then execute your second command.
Hope this helps.
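Something along these lines (an untested sketch; the stack and change set names are just examples):
# Poll until the change set reports CREATE_COMPLETE, then execute it
# (a real script should also bail out if the status becomes FAILED)
status=""
while [ "$status" != "CREATE_COMPLETE" ]; do
  sleep 5
  status=$(aws cloudformation list-change-sets --stack-name myStack --query "Summaries[?ChangeSetName=='change-set-1'].Status" --output text)
done
aws cloudformation execute-change-set --stack-name myStack --change-set-name change-set-1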
I solved this issue by using the following sequence:
aws cloudformation create-change-set
aws cloudformation wait change-set-create-complete
aws cloudformation execute-change-set
aws cloudformation wait stack-create-complete
Hope it helps.
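Spelled out with parameters, the same sequence could look like this (a sketch only; the stack, change set, and template names are placeholders, and --change-set-type CREATE plus the stack-create-complete wait assume a brand-new stack):
aws cloudformation create-change-set --stack-name myStack --change-set-name change-set-1 --change-set-type CREATE --template-body file://template.yml
aws cloudformation wait change-set-create-complete --stack-name myStack --change-set-name change-set-1
aws cloudformation execute-change-set --stack-name myStack --change-set-name change-set-1
aws cloudformation wait stack-create-complete --stack-name myStack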
If you don't require the intermediate step of creating a change set and then executing it (as we didn't), then use the update-stack subcommand.
aws cloudformation update-stack --stack-name myStack --template-url ...
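If you also need the command to block until the update finishes (not mentioned in the original answer, but it pairs naturally with update-stack):
aws cloudformation wait stack-update-complete --stack-name myStack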
The use case is: a developer makes some code changes and the following things happen automatically -
the build runs, an application artifact is created, a Docker image is generated with the artifact, the image is pushed to the Docker registry, and the AWS ECS tasks and ECS services are updated.
I want to know the ways to achieve the above automation of updating AWS ECS services. So far I have implemented the AWS ECS update from a Jenkins build using:
1. running post-build AWS CLI scripts from Jenkins to update ECS
2. a post-build action or pipeline step to invoke an AWS Lambda function. I have created a Lambda function in Java to implement that.
Please let me know the other ways we can achieve the above. Thanks.
I'm continuously deploying Docker containers from CircleCI to AWS ECS.
The outline of the deployment flow is as follows:
Build and tag a new Docker image
Login to AWS ECR and push the image
Update task definitions and services of ECS with ecs-deploy
ecs-deploy is a useful script that updates Docker images in ECS.
https://github.com/silinternational/ecs-deploy
You could use a shell script that calls AWS CLI commands to create CloudFormation stacks, or directly call the create commands in the AWS CLI for the ECR repository, task definition, and Events rule and target (for scheduling).
Then you just call this script in your terminal using ./setup.sh and it executes all your commands at once.
aws ecr create-repository \
  --repository-name tasks-${TASK_NAME}-${TASK_ENV}
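The task definition step could look something like this (taskdef.json is a hypothetical file containing the task definition JSON):
aws ecs register-task-definition \
  --cli-input-json file://taskdef.json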
Or, if you want to set up your resources via CloudFormation templates, you can launch them using this command, as long as the template exists at file://name.yml:
aws cloudformation create-stack \
  --stack-name stack-name \
  --capabilities CAPABILITY_IAM \
  --template-body file://name.yml \
  --parameters ParameterKey=ParamName,ParameterValue=${PARAM_NAME}
Take a look at Codefresh - https://docs.codefresh.io/docs/amazon-ecs
You can build your pipeline:
Build Step
Push to Registry
Deploy to ECS
It's that easy.
While there are a ton of CI/CD tools out there, since I am early in my rollout, I decided to write a small script instead of having CI/CD pipelines do it.
Here is a one-click deploy script I wrote using the ecs-deploy script as a dependency to achieve a rolling deploy of a docker image to ECS.
You can run this locally from your dev or build/deployment box or use Jenkins or some local build tool.
#!/bin/bash
# automatically login to AWS
eval $(aws ecr get-login)
# build local docker image and push repo to AWS
docker build -t <yourlocaldockerimagetag> .
docker tag <yourlocaldockerimagetag>:latest <yourECSRepoURL>:latest
docker -D -l debug push <yourECSRepoURL>:latest
# deploy to ECS
ecs-deploy/ecs-deploy -m 50 -k <access-key> -s <secret-key> -r <aws-region> -c <cluster-name> -n <service-name> -i <yourECSRepoURL>:latest
Parameters:
cluster-name: Your cluster name in ECS
service-name: Your service name that you had created in ECS
yourECSRepoURL: ECS Repository URL
yourlocaldockerimagetag: Any local image tag name
access-key: your AWS access key for deployments
secret-key: your AWS secret key
Make sure you install ecs-deploy before this script.
The -m 50 tells it that it can deploy even if the number of nodes drops to 50%. Ideally you would have an extra node to do deployments, but if you can't afford that, this setting would ensure that deployments continue to happen.
If you are also using an ELB (load balancer), then the default deregistration delay for target groups is 5 minutes, which is a bit excessive. The deregistration delay is the time to wait for existing requests to complete BEFORE ECS sends a SIGTERM or SIGINT to your Docker container. You should lower this by going to Target Groups in the EC2 dashboard and clicking Edit Attributes. Otherwise your deployments may take forever.
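If you prefer the CLI over the console for that change, something like the following should work (the target group ARN is a placeholder):
aws elbv2 modify-target-group-attributes \
  --target-group-arn <your-target-group-arn> \
  --attributes Key=deregistration_delay.timeout_seconds,Value=30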
I think nobody has mentioned CodePipeline from AWS; it integrates really easily with many AWS services, including ECS and CodeCommit:
Push commit to CodeCommit Repo, triggering the pipeline execution.
(Optional) Configure a Manual Approval step that needs you to take an action before Build.
Run a CodeBuild Project that builds your Dockerfile and pushes the image to an ECR Repo (a sketch of these build commands follows at the end of this answer).
Run a "Deploy" step that deploys to a specific ECS Service. It updates the services with a new Task Definition that points to the new ECR Image.
I have used this flow with BitBucket also, just configure a BitBucket pipeline that pushes all new code to a CodeCommit Repo as a previous step.
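A rough sketch of what the CodeBuild step from point 3 might run (account ID, region, and image name are placeholders):
# Log in to ECR, then build, tag, and push the image
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t my-app .
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest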
Exactly as #minamiyojo's and #astav's answers suggest, we ended up gluing ecs-deploy with a template engine to power up our CD pipeline with some reusable components, which we just open-sourced as well:
https://github.com/GuccioGucci/yoke
Please refer to the Motivation section in the README; hope this helps your scenario too.