I have 2 CDK stacks that I use to deploy my Lambda function. One of them is the CodePipeline stack, which gets triggered by GitHub commits and builds, tests, and deploys using the "CloudFormationDeployAction". Because I'm deploying my Lambda through CodePipeline, I don't want it deployed through the terminal.
But when I execute "cdk diff" or "cdk deploy *" in the terminal for the rest of my stacks, cdk will try to diff both stacks and output potential changes for the Lambda stack, even though it's already deployed. And "cdk deploy *" will make cdk try to deploy it and fail (due to missing parameters that are supplied through CodePipeline).
Is there a way to make the terminal commands ignore this LambdaStack?
A way I use to achieve a similar result is the following:
In package.json, add under scripts: "cdk-diff": "tsc && cdk diff YourStackYouWantToDiff"
Then run npm run cdk-diff
This executes cdk diff only on the stacks you specify, here YourStackYouWantToDiff.
Note that with declared dependencies between stacks, such as aStack.addDependency(bStack), cdk diff is executed on both aStack and bStack (see the note below).
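If you also want to skip those declared dependencies, the CDK CLI has an --exclusively flag for this; check cdk diff --help and cdk deploy --help on your CDK version to confirm it is supported there:

cdk diff YourStackYouWantToDiff --exclusively
cdk deploy YourStackYouWantToDiff --exclusively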
I ran this command:
sls deploy function --function [myFunction] -s production
and I got an error saying that the function I want to update is not yet deployed.
What could be the problem here?
I was trying to deploy a Lambda function and was expecting it to deploy.
Unfortunately, you have to create the entire CloudFormation stack first. Run sls deploy -s production to create the Lambda functions, IAM roles, http/EventBridge events, etc.
Once the Lambda function and all associated resources are deployed, you can then simply update the function with the command you posted. In general, I like to use the sls deploy function command when I'm only making code changes, because the Lambda is a lot quicker to update, but there are situations where you need to deploy the entire service.
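As a minimal sketch of that workflow (the function name is a placeholder):

# first deployment: create the whole stack (functions, roles, events, ...)
sls deploy -s production
# afterwards, for code-only changes: update just the one function
sls deploy function --function myFunction -s production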
Following the guide in the AWS SAM documentation, I created the pipeline step by step as instructed, up to the final step, where I copied the command from step 4 to connect to CodeCommit and ran:
sam deploy -t codepipeline.yaml --stack-name prod --capabilities=CAPABILITY_IAM
I can see the CloudFormation events being generated in the shell (ending with Successfully created/updated stack - prod in None), as well as the CodePipeline being created and running its deployment stages.
However, as soon as the deployment is done, that pipeline is missing from the AWS Developer Tools console.
Shouldn't the pipeline be retained, so that whenever a new commit is merged to the branch, it automatically runs? Why did my pipeline get removed right after the deployment finished?
After digging into the logic of the deploy command and its internals, I have managed to retain the CodePipeline in the AWS console to run my automated CI/CD pipeline.
Just putting it out there so anyone facing a similar issue can refer to this. I'd also suggest an update to the guide to reflect it.
Here is what I found.
sam pipeline init
defines the stack names for the two stages that will be used when sam deploy is executed
in this case, I named them "prod" and "stage"
these stacks are not created yet; they will be created by the execution of sam deploy
sam deploy -t codepipeline.yaml
the pipeline it creates will later generate two CloudFormation stacks, one for each stage you defined in sam pipeline init, this time from template.yaml
the command itself generates a CloudFormation stack with the name you pass to sam deploy
in this case, I named it "prod" as well
therefore, when sam deploy is called with a name that already exists, it finds that CloudFormation stack and modifies it
When sam deploy runs with the codepipeline.yaml template, a CloudFormation stack is created on the fly from your terminal, containing the AWS CodePipeline, the AWS CodeBuild project, and any other resources required for CI/CD, with the stack name you defined in sam deploy.
Once that stack is successfully created/updated, the CodePipeline runs, which includes creating/updating the CloudFormation stacks you defined during sam pipeline init.
However, at this step, if you already have a CloudFormation stack with the same name, it gets modified instead.
In my case, it modified the stack I had created for the CI/CD itself; since the CodePipeline resource is mostly deleted last, the pipeline is removed as soon as the deployment is done.
The moral of this story is: do not give the stack for sam pipeline init and the stack for sam deploy the same name!
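A minimal sketch of the fix, with hypothetical, deliberately distinct stack names (the stage stacks "stage" and "prod" were chosen during sam pipeline init):

# bootstrap the CI/CD resources under a name that no stage stack uses
sam deploy -t codepipeline.yaml --stack-name my-app-pipeline --capabilities=CAPABILITY_IAM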
The desired behavior is as follows:
Push code change
Run unit tests for each Serverless component
Provided all tests are successful, deploy the components into the Staging environment and mark the build as successful
Listen for this change and run the acceptance test suite using Gherkin
Provided all tests are successful, deploy the components into the UAT/Prod environment and mark the build as successful
The desired solution would have two pipelines, the second one triggered by the first one's success.
If you have any other ideas, I'd be delighted to hear!
Thanks in advance
Assuming both CodePipelines are running in the same account, you can add a "post_build" phase to your buildspec.yml.
In the post_build phase you can trigger the second CodePipeline with an AWS CLI command.
version: 0.2
phases:
  build:
    commands:
      # npm pack --dry-run is not needed but helps show what is going to be published
      - npm publish
  post_build:
    commands:
      # post_build runs after the build phase; kick off the downstream pipeline here
      - aws codepipeline start-pipeline-execution --name <codepipeline_name>
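One usage note (an assumption worth verifying in your setup): the CodeBuild service role needs IAM permission for the codepipeline:StartPipelineExecution action on the target pipeline, otherwise the CLI call in post_build will fail.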
The solution I propose for triggering a second pipeline is the following:
Give the second pipeline an S3 source (not CodeCommit). This ensures that the pipeline starts only when a specifically named file (object key) is pushed to Amazon S3.
At the end of the first CodePipeline, add a Lambda function; by the time it runs, everything upstream must have succeeded for it to be triggered.
Have that Lambda copy the artifact built by your first pipeline into the bucket, under the key referenced by the second pipeline's source.
To keep things clean, use a separate bucket for each pipeline.
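The copy the Lambda performs boils down to the following (shown as the equivalent CLI call; the bucket names and object key are hypothetical placeholders):

# copy the first pipeline's build artifact to the key watched by the
# second pipeline's S3 source, which starts that pipeline
aws s3 cp s3://pipeline-one-artifacts/output/artifact.zip s3://pipeline-two-source/trigger/artifact.zip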
The cdk deploy ... command is used to deploy one or more CloudFormation stacks. While it executes, it displays messages resulting from the deployment of the various stacks, and this can take some time.
The deploy command supports the --notification-arns parameter, an array of ARNs of SNS topics that CloudFormation will notify with stack-related events.
Is it possible to execute cdk deploy without having it report its progress to the console (i.e. the command exits immediately after uploading the new CloudFormation assets), relying instead on the SNS topic for feedback on the progress of the deployment?
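For reference, the flag mentioned above is passed like this (the stack name and topic ARN are placeholders):

cdk deploy StackX --notification-arns arn:aws:sns:us-east-1:123456789012:deploy-events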
A quick and dirty way (untested) would be to use nohup and background the process:
$ nohup cdk deploy ... --require-approval never 1>/dev/null &
The --require-approval never flag simply means it won't stop to ask for permission for security-sensitive changes, and nohup plus the trailing & lets the command keep running without tying up your terminal.
It's the only quick solution I can think of.
A longer-term solution would be to use the CdkToolkit to create your own deployment script. Take a look at the cdk command to get an idea. This has been something I've wanted from aws-cdk for a while: custom deploy scripts rather than using the shell.
I found a solution to asynchronously deploy a cdk app via the --no-execute flag:
cdk deploy StackX --no-execute
change_set_name=$(aws cloudformation list-change-sets --stack-name StackX --query "Summaries[0].ChangeSetId" --output text)
aws cloudformation execute-change-set --change-set-name $change_set_name
For my case this works because I use this method only to deploy new stacks, so there will only ever be exactly one change set for the stack, and I can retrieve it with the query for the entry at index 0. If you wish to update existing stacks, you will have to select the correct change set from the list returned by the list-change-sets command.
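For the update case, one way to pick the right change set (a sketch; it assumes the change set you just created is the most recent one for the stack) is to sort the summaries by creation time and take the last entry:

# take the ChangeSetId of the most recently created change set
change_set_name=$(aws cloudformation list-change-sets --stack-name StackX --query "sort_by(Summaries, &CreationTime)[-1].ChangeSetId" --output text)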
I had a similar issue: I didn't want to keep a console open while waiting for a long-running init script on an EC2 instance to finish. I hit Ctrl-C after the publishing was completed and the change set had been created; the deployment kept running, and I checked its status in the CloudFormation view. Not perfect and not automation-friendly, but I didn't need to keep the process running on my machine.
I have a Bitbucket pipeline that creates AWS resources using CloudFormation and deploys a website to them. But the deployment fails even though CloudFormation creates the stack correctly. What I think is happening: when the deployment runs, the CloudFormation S3 bucket creation may not have finished yet.
I have a Hugo website, and I have created a Bitbucket pipeline to deploy it to a server. The pipeline creates an S3 bucket using CloudFormation to host the website and then uploads the Hugo site to it. When I ran the steps of the pipeline manually in a terminal, with a delay between each step, everything succeeded. But when it runs on the Bitbucket pipeline, I get an error saying the S3 bucket I'm trying to upload content to is not available. When I checked in AWS, that bucket is actually there, which means CloudFormation worked correctly; my assumption is that when the files started to copy, the bucket was not yet available for uploads. When doing it locally I can wait between the two commands (CloudFormation creation and file copying), but how do I handle that in the Bitbucket pipeline environment? Following is my pipeline code.
pipelines:
  pull-requests:
    '**':
      - step:
          script:
            - aws cloudformation create-stack --stack-name demo-web --template-body file://cloudformation.json --parameters ParameterKey=S3BucketName,ParameterValue=demo-web
            - hugo
            - aws s3 cp public/ s3://demo-web/ --recursive
How do I handle this scenario in the correct way? Is there a workaround for this situation, or is the problem I have identified not the actual problem?
First, to wait in Bitbucket Pipelines you should be able to just use sleep x, where x is the number of seconds you want to sleep.
On a different note: bear in mind that subsequent runs of this will potentially fail, because you are using create-stack, which fails if the stack already exists.
Using the AWS CloudFormation API for this is best practice.
Just as there is a create-stack command, there is a wait command; wait stack-create-complete and its options make the CLI halt your processing until the stack is in CREATE_COMPLETE status before continuing.
Option 1:
Put your stack-creation command into a shell script, and as a second step in that shell script invoke the wait stack-create-complete command.
Then invoke that shell script in the pipeline instead of the direct command.
Option 2:
In your Bitbucket pipeline, right after your create command, invoke aws cloudformation wait stack-create-complete before the hugo/upload commands.
I prefer option 1 because it allows you to manipulate and trace the creation as you wish.
See documentation: https://docs.aws.amazon.com/cli/latest/reference/cloudformation/wait/stack-create-complete.html
Using the CloudFormation API for this is great because you don't have to guess how long to wait; the stack, and with it the bucket, is guaranteed to be available before you continue.
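A minimal sketch of option 2, reusing the stack and bucket names from the question (see the wait documentation linked above):

- aws cloudformation create-stack --stack-name demo-web --template-body file://cloudformation.json --parameters ParameterKey=S3BucketName,ParameterValue=demo-web
# block until the stack, and the bucket it creates, is fully available
- aws cloudformation wait stack-create-complete --stack-name demo-web
- hugo
- aws s3 cp public/ s3://demo-web/ --recursive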