I am using CodePipeline as my CI/CD pipeline, defined with the CDK.
I'd like the pipeline to update itself, so I tried to add a stage to the pipeline that updates the pipeline itself by running the cdk deploy command.
To make the self-update work, I need to set the selfMutation property on the CodePipeline construct: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.pipelines.CodePipeline.html#selfmutation
However, this property only exists in CDK v2, not v1 (I am using CDK 1.84.0). What is the equivalent property in v1?
The CDK pipelines package also exists in v1.
CDK Pipelines is different from aws-codepipeline.Pipeline: it is built on top of it and lets you deploy CDK apps with CodePipeline. A minimal v1 example follows after the links below.
More information about CDK pipelines and what they are: https://aws.amazon.com/blogs/developer/cdk-pipelines-continuous-delivery-for-aws-cdk-applications/
Documentation for CDK pipelines in CDKv1: https://docs.aws.amazon.com/cdk/api/v1/docs/pipelines-readme.html
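On 1.84.0 the construct exposed by @aws-cdk/pipelines is CdkPipeline rather than the newer CodePipeline class, and its counterpart to selfMutation is the selfMutating property, which defaults to true (check your exact 1.x release, since the module evolved quickly during its preview). A minimal sketch, assuming a TypeScript app with a GitHub source and an npm-based synth; the repository, secret, and construct names are placeholders:

import * as cdk from '@aws-cdk/core';
import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as codepipeline_actions from '@aws-cdk/aws-codepipeline-actions';
import * as pipelines from '@aws-cdk/pipelines';

export class SelfMutatingPipelineStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const sourceArtifact = new codepipeline.Artifact();
    const cloudAssemblyArtifact = new codepipeline.Artifact();

    new pipelines.CdkPipeline(this, 'Pipeline', {
      cloudAssemblyArtifact,
      // v1 counterpart of v2's selfMutation; controls the self-mutate
      // ("UpdatePipeline") stage and is already true by default.
      selfMutating: true,
      sourceAction: new codepipeline_actions.GitHubSourceAction({
        actionName: 'GitHub',
        output: sourceArtifact,
        owner: 'my-org',                                              // placeholder
        repo: 'my-repo',                                              // placeholder
        oauthToken: cdk.SecretValue.secretsManager('github-token'),   // placeholder secret
      }),
      synthAction: pipelines.SimpleSynthAction.standardNpmSynth({
        sourceArtifact,
        cloudAssemblyArtifact,
      }),
    });
  }
}

Even if you never set selfMutating explicitly, CdkPipeline adds the self-mutate stage by default, so the pipeline redeploys itself whenever its own definition changes; you only need the property to turn that behavior off.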
Related
I am very new to AWS CodeBuild, CodePipeline, and SAM. I know that I can set up CodeBuild to bring in two git repositories, similar to this example. I was wondering: is there a way to update a SAM project's CodePipeline to use a secondary git source?
Following the guide in the AWS SAM documentation, I created the pipeline step by step as instructed, up to the final step where I copied the command from step 4 to connect to CodeCommit. I ran:
sam deploy -t codepipeline.yaml --stack-name prod --capabilities=CAPABILITY_IAM
I can see CloudFormation events being generated in the shell (ending with Successfully created/updated stack - prod in None), and I can see the CodePipeline being created and running its deployment stages.
However, as soon as the deployment is done, that pipeline is missing from the AWS Developer Tools console.
Shouldn't the pipeline be retained, so that every time a new commit is merged to the branch the pipeline runs automatically? Why does my pipeline get removed right after the deployment is done?
After digging into the logic of the deploy command and its internals, I have managed to retain the CodePipeline in the AWS console so it runs as my automated CI/CD pipeline.
I'm putting this out there so that anyone facing a similar issue can refer to it. I'd also suggest updating the guide to reflect this.
Here is what I found.
sam pipeline init
defines the stack names for the two stages that the pipeline will deploy when it later runs sam deploy
in this case, I named them "prod" and "stage"
these stacks are not created yet; they are created once the pipeline actually executes its deploy stages
sam deploy -t codepipeline.yaml
creates a CloudFormation stack, named by the --stack-name you pass, containing the AWS CodePipeline, AWS CodeBuild projects, and any other resources required for CI/CD
once that stack is successfully created/updated, the CodePipeline runs, and its deploy stages create/update the stage stacks you defined during sam pipeline init, this time using template.yaml
however, if a CloudFormation stack with the same name already exists at that point, it is modified instead of a new one being created
In this case, I had also used "prod" as the --stack-name for sam deploy, so the pipeline's deploy stage modified the very stack that hosted the CI/CD resources. Because the CodePipeline resource is typically deleted last, the pipeline disappears as soon as its deployment finishes.
The moral of this story: do not use the same stack name for sam pipeline init and sam deploy!
I am trying to use Terraform to create a CodePipeline in AWS that automatically deploys a Lambda function.
I have already created two stages that get the code from GitHub, build the artifact using CodeBuild, and store the artifact in S3.
But I can't seem to find a Terraform configuration for CodeDeploy to deploy the artifact from S3 to Lambda. I do see there is a deployment setting in the console where I can specify the details of the deployment.
I'm kind of lost in the documentation.
I want to push Python code to a repo and use CodePipeline to deploy Lambdas.
I have a CodeCommit repo and a CodePipeline; so far this works and I can create/update a CloudFormation stack to create supplementary resources.
I know AWS SAM can be used to deploy the functions using a CloudFormation template, but how can I connect SAM with CodePipeline/CodeDeploy? The code should be taken from a 'source' pipeline action and then deployed as a Lambda function.
If SAM isn't the best automated solution here, what should I use instead? A pipeline is the key requirement, so that we don't have to run something like aws cloudformation update-stack manually; we just push the code.
CodePipeline doesn't support deploying Lambda through CodeDeploy, so the approach is to use a CodeBuild build action to generate a change set from the SAM template and feed it into a CloudFormation deploy action. You can find a detailed walkthrough in the following doc, and a rough CDK-flavored sketch of the same pattern follows below the link.
https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
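For illustration only: if you define the pipeline with the CDK (as in the question at the top of this page) rather than through the console, the same pattern looks roughly like the hypothetical TypeScript sketch below. The stage names, stack names, bucket variable, and file paths are all placeholders, not anything prescribed by the linked doc.

import * as cdk from '@aws-cdk/core';
import * as codebuild from '@aws-cdk/aws-codebuild';
import * as codepipeline from '@aws-cdk/aws-codepipeline';
import * as actions from '@aws-cdk/aws-codepipeline-actions';

// Assumed to exist already: the enclosing stack, a pipeline with a Source stage,
// and the artifact produced by that Source stage.
declare const stack: cdk.Stack;
declare const pipeline: codepipeline.Pipeline;
declare const sourceOutput: codepipeline.Artifact;

const buildOutput = new codepipeline.Artifact('BuildOutput');

// CodeBuild packages the SAM template into a plain CloudFormation template.
const packageProject = new codebuild.PipelineProject(stack, 'SamPackage', {
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      build: {
        commands: [
          'sam build',
          // PACKAGING_BUCKET is a placeholder environment variable you would define yourself.
          'sam package --s3-bucket $PACKAGING_BUCKET --output-template-file packaged.yaml',
        ],
      },
    },
    artifacts: { files: ['packaged.yaml'] },
  }),
});

pipeline.addStage({
  stageName: 'Build',
  actions: [
    new actions.CodeBuildAction({
      actionName: 'PackageSam',
      project: packageProject,
      input: sourceOutput,
      outputs: [buildOutput],
    }),
  ],
});

// CloudFormation creates a change set from the packaged template, then executes it.
pipeline.addStage({
  stageName: 'Deploy',
  actions: [
    new actions.CloudFormationCreateReplaceChangeSetAction({
      actionName: 'CreateChangeSet',
      stackName: 'my-lambda-stack',            // placeholder
      changeSetName: 'my-lambda-changeset',    // placeholder
      templatePath: buildOutput.atPath('packaged.yaml'),
      adminPermissions: true,
      runOrder: 1,
    }),
    new actions.CloudFormationExecuteChangeSetAction({
      actionName: 'ExecuteChangeSet',
      stackName: 'my-lambda-stack',
      changeSetName: 'my-lambda-changeset',
      runOrder: 2,
    }),
  ],
});

The CodeBuild project's role would also need write access to the packaging bucket; IAM wiring is omitted here for brevity.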
If you use SAM to deploy your Lambdas, CodeDeploy is used automatically for gradual deployments. For reference (a brief sketch follows after these links):
Gradual Code Deployment
Safe Lambda deployments
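In a SAM template this is what AutoPublishAlias and DeploymentPreference on AWS::Serverless::Function give you. Since the rest of this page is CDK-centric, here is a hypothetical TypeScript sketch of the same idea; the names, runtime, and asset path are placeholders, not anything required by SAM:

import * as cdk from '@aws-cdk/core';
import * as codedeploy from '@aws-cdk/aws-codedeploy';
import * as lambda from '@aws-cdk/aws-lambda';

export class GradualDeployStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const fn = new lambda.Function(this, 'Handler', {
      runtime: lambda.Runtime.PYTHON_3_8,
      handler: 'app.handler',
      code: lambda.Code.fromAsset('src'),     // placeholder path
    });

    // Publish a version and point a "live" alias at it.
    const alias = new lambda.Alias(this, 'LiveAlias', {
      aliasName: 'live',
      version: fn.currentVersion,
    });

    // CodeDeploy shifts traffic to the new version 10% per minute
    // and can roll back automatically if alarms fire.
    new codedeploy.LambdaDeploymentGroup(this, 'DeploymentGroup', {
      alias,
      deploymentConfig: codedeploy.LambdaDeploymentConfig.LINEAR_10PERCENT_EVERY_1MINUTE,
    });
  }
}

Either way, the alias update is handed to CodeDeploy, which shifts traffic gradually instead of switching all invocations to the new version at once.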
I followed this tutorial https://aws.amazon.com/blogs/devops/aws-building-a-secure-cross-account-continuous-delivery-pipeline/ to set up cross-account deployments of our Lambdas, using CloudFormation as my automation tool.
I'm using the pipeline in this repo: https://github.com/awslabs/aws-refarch-cross-account-pipeline/blob/master/ToolsAcct/code-pipeline.yaml (the pipeline starts at line 207), and the pipeline in question is in the ToolsAcct/ directory.
I am able to successfully deploy the first Lambda; however, any subsequent deployment replaces the old one, and I want both lambda_1 and lambda_2 present in the console, not just the latest one.
To deploy the second Lambda, out of the tutorial's 6 steps I rerun steps 4 and 5, like below:
4. In the Tools account, which hosts AWS CodePipeline, execute this CloudFormation template. This creates a pipeline, but does not add permissions for the cross accounts (Dev, Test, and Prod):
aws cloudformation deploy --stack-name sample-lambda-pipeline \
--template-file ToolsAcct/code-pipeline.yaml \
--parameter-overrides DevAccount=ENTER_DEV_ACCT TestAccount=ENTER_TEST_ACCT \
ProductionAccount=ENTER_PROD_ACCT CMKARN=FROM_1st_STEP \
S3Bucket=FROM_1st_STEP --capabilities CAPABILITY_NAMED_IAM
5. In the Tools account, execute this CloudFormation template, which gives access to the role created in step 4. This role will be assumed by AWS CodeBuild to decrypt artifacts in the S3 bucket. This is the same template that was used in step 1, but with different parameters:
aws cloudformation deploy --stack-name pre-reqs \
--template-file ToolsAcct/pre-reqs.yaml \
--parameter-overrides CodeBuildCondition=true
After running both of these steps to deploy the second Lambda, it deploys successfully but replaces the Lambda that was deployed earlier.
How can I keep the existing Lambda while deploying new ones, so that all Lambdas are present in the console and not just the latest one that was deployed?
My guess is that by rerunning steps 4 and 5, I'm creating a change set against the previously deployed Lambda's stack, and thus it keeps replacing the old Lambda in the console.
If my guess is correct, how can I reuse the same pipeline to deploy different Lambdas without replacing the previously deployed ones? Is there an attribute of the CloudFormation pipeline resource that I'm missing?
It sounds like you're trying to use a single pipeline to deploy multiple independent services/projects. This will cause problems when you "switch" projects, because the template won't contain the resources from the other project, and CloudFormation will therefore think those resources need to be removed.
You can either:
Add all the Lambda functions together in a single template (see the sketch after this list)
Set up a separate pipeline per set of functions
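The templates in that tutorial are plain CloudFormation, but to illustrate the first option in the CDK terms used elsewhere on this page: keep every function in the same template, so each deployment describes all of them and none is ever seen as removed. A hypothetical TypeScript sketch; the names, runtimes, and asset paths are placeholders:

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';

export class LambdasStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Both functions live in one stack, so deploying this stack never
    // "forgets" one of them the way two competing templates would.
    new lambda.Function(this, 'Lambda1', {
      runtime: lambda.Runtime.PYTHON_3_8,
      handler: 'lambda_1.handler',
      code: lambda.Code.fromAsset('src/lambda_1'),   // placeholder path
    });

    new lambda.Function(this, 'Lambda2', {
      runtime: lambda.Runtime.PYTHON_3_8,
      handler: 'lambda_2.handler',
      code: lambda.Code.fromAsset('src/lambda_2'),   // placeholder path
    });
  }
}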