I am currently developing an AppSync-based API in a domain-driven manner, so we need to add a function to an already created Pipeline Resolver. Does anybody know if there is any chance of doing this via CloudFormation without using a custom resource?
Thanks in advance, Sven
Terraform can do this neatly if you can build this pull request yourself or vote in this issue to get it merged into a public release.
The new syntax is described here.
The build process is actually quite simple. It took me about 30 min end-to-end.
Install GoLang.
Clone the repo with the changes and sync it with the main (upstream) repo.
Make sure you cloned it into the go\src\github.com\terraform-providers\terraform-provider-aws folder.
Run go build from go\src\github.com\terraform-providers\terraform-provider-aws.
Replace the .terraform\plugins\...\terraform-provider-aws-* executable with the one you compiled.
Run terraform init
Test by trying to import a function: terraform import aws_appsync_function.example xxxxx-yyyyy
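On a Unix-like shell, the whole sequence above boils down to roughly the following (all paths, the project location, and the provider version suffix are placeholders for your own setup):

    # Assumes Go and git are installed and GOPATH points at your Go workspace.
    cd "$GOPATH/src/github.com/terraform-providers/terraform-provider-aws"
    go build                                   # produces a terraform-provider-aws binary in this folder
    # Overwrite the provider binary Terraform previously downloaded into your project
    # (the exact .terraform/plugins/... path and version suffix depend on your OS and setup).
    cp terraform-provider-aws /path/to/your-project/.terraform/plugins/.../terraform-provider-aws-vX.Y.Z
    cd /path/to/your-project
    terraform init
    terraform import aws_appsync_function.example xxxxx-yyyyy   # smoke test for the new resource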
I hope the pull request gets merged by the time you read this.
Background
I want to create the following CI/CD flow in AWS and Github, for a react app using Amplify:
A single main branch, with short-lived feature branches and PRs into main.
Each PR triggers its own test environment in Amplify, with its own temporary subdomain, which gets torn down when the PR is merged, as described here.
Merging into main does not automatically trigger a deploy to production.
Instead, there is a separate mechanism (a web page, or amplify command, or even triggers based on git tags) for manually selecting a commit from main to deploy to production.
Questions
It's not clear to me if...
Support for this flow is already built into Amplify (based on the docs I've read, I think the answer is "no", but I'm not sure).
Support for this flow is already built into AWS CodePipeline, or if it can be configured there.
There is another AWS tool that solves this.
I'm looking for answers to those questions, or specific references in the docs which address them.
The answers for Amplify are Yes, Yes, Yes, Partially.
(1) A single main branch, with short-lived feature branches and PRs into main.
Yes. Feature branch deploys. You can define which branch patterns, such as feature/*, you wish to auto-deploy.
(2) Each PR triggers its own test environment in Amplify, with its own temporary subdomain,
Yes. Web Previews for PRs. "A web preview deploys every pull request made to your GitHub repository to a unique preview URL which is completely different from the URL your main site uses."
(3) Merging into main does not automatically trigger a deploy to production.
Yes. Disable automatic builds on main.
(4) Instead, there is a separate mechanism ... for manually selecting a commit from main to deploy to production.
Partially (HEAD only?). Call the StartJob API to manually trigger a build from, say, Lambda. The job type RELEASE starts a new job with the latest change from the specified branch. I am not sure if jobType: MANUAL with a commitId starts a job from an arbitrary commit hash.
Another workaround for 3+4 is to skip the build for an arbitrary commit. Amplify will skip building if [skip-cd] appears at the end of a commit message.
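For (3) and (4), a minimal sketch with the AWS CLI (the app id, branch name, and commit hash are placeholders; as noted above, I have not verified whether the MANUAL/commitId variant builds an arbitrary commit):

    # (3) Turn off automatic builds on main so merges no longer deploy to production.
    aws amplify update-branch --app-id d1example --branch-name main --no-enable-auto-build

    # (4) Manually release the latest commit on main (jobType RELEASE builds the head of the branch).
    aws amplify start-job --app-id d1example --branch-name main --job-type RELEASE

    # Possibly pin an arbitrary commit with a MANUAL job -- unverified, see above.
    aws amplify start-job --app-id d1example --branch-name main --job-type MANUAL --commit-id abc1234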
In my experience, I don't think there is any easy way to meet your requirement.
If you are using Gitlab, you can try Gitlab Review Apps to achieve that (I tried it before with some scripts).
Support for this flow is already built into Amplify (based on the docs I've read, I think the answer is "no", but I'm not sure).
Check the links below, if they help:
https://www.youtube.com/watch?v=QV2WS535nyI
https://dev.to/rajandmr/deploying-react-app-using-aws-amplify-with-ci-cd-pipeline-setup-3lid
Support for this flow is already built into AWS CodePipeline, or if it can be configured there.
For this, you need to create your own full pipeline. Yes, you can configure it there.
There is another AWS tool that solves this.
If you are okay with Jenkins, then Jenkins will help you to achieve this.
You can deploy the Jenkins Docker image on AWS EC2 and create your pipeline. You can also use parameterised builds for selecting your environment and git branch.
I am using the AWS CLI task to deploy a Lambda layer. The build pipeline upstream looks like this:
It zips up the code, publishes the artifact and then downloads that artifact.
Now in the release pipeline I'm deploying that artifact via an AWS CLI command. The release pipeline looks like this:
I'm trying to figure out a way to dynamically get the current working directory so I don't need to hardcode it. In the options and parameters section you can see I'm trying to use $(Pipeline.Workspace) but it doesn't resolve correctly.
Is this possible?
Correct me if I am wrong, but it looks like you are running this in Azure Release? Not Pipelines?
If that is the case, I think the variable you are looking for is $(Release.PrimaryArtifactSourceAlias).
See the section of the document that talks about release specific variables: https://learn.microsoft.com/en-us/azure/devops/pipelines/release/variables?view=azure-devops&tabs=batch#default-variables---release
Yes. This is completely achievable.
From your screenshot, you are using the Release Pipeline to deploy the Artifacts.
In your situation, $(Pipeline.Workspace) can only be used in the Build Pipeline.
Refer to this doc: Classic release and artifacts variables
You can use the variable: $(System.ArtifactsDirectory) or $(System.DefaultWorkingDirectory)
The directory to which artifacts are downloaded during deployment of a release. The directory is cleared before every deployment if it requires artifacts to be downloaded to the agent. Same as Agent.ReleaseDirectory and System.DefaultWorkingDirectory.
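For example, in the AWS CLI task's options and parameters you could reference the downloaded artifact along these lines (the layer name, artifact alias, and zip file name are made-up placeholders for your own values):

    # System.ArtifactsDirectory is where the release downloads artifacts to on the agent.
    aws lambda publish-layer-version \
      --layer-name my-layer \
      --zip-file fileb://$(System.ArtifactsDirectory)/_my-build-artifact/drop/layer.zip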
So we have a pipeline that zips the code for our Lambda functions, uploads it to S3, and rebuilds every Lambda we have with the new version of the zipped code.
Now, the problem is that every single Lambda is built on every pipeline run, even if there are no changes to its code, e.g. only 1 of 10 Lambdas has a code change.
What would be the best approach, or what check do we need to add to our pipeline, to build only the Lambdas that have code changes? I'm open to any suggestions, even creating new pipelines and breaking these Lambdas up into pieces.
I think the best way is to add a stage before the zip step that checks which files changed in the last merge.
Simply take those file names and work out which Lambdas were affected.
Then pass along the list of Lambdas that need to be redeployed; a rough sketch follows.
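A rough sketch of such a stage, assuming each Lambda lives in its own top-level folder and that folder name matches the function name (the bucket and the layout are assumptions):

    # List files changed by the last merge, reduced to their top-level folders.
    changed=$(git diff --name-only HEAD~1 HEAD | cut -d/ -f1 | sort -u)

    for fn in $changed; do
      # Skip anything that is not a Lambda folder (assumed: one folder per function).
      [ -d "$fn" ] || continue
      (cd "$fn" && zip -r "../$fn.zip" .)
      aws s3 cp "$fn.zip" "s3://my-artifact-bucket/$fn.zip"
      aws lambda update-function-code --function-name "$fn" \
        --s3-bucket my-artifact-bucket --s3-key "$fn.zip"
    done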
What we do at our company is have a single pipeline per Lambda / repo. We have a few mono repos that, when they deploy, deploy all the Lambdas in that repo at once, but still through a single pipeline. If you're concerned about the cost of the pipelines sticking around, you could always delete them and then have another job recreate them when you need to deploy a new change.
We've got everything done through CloudFormation scripts, so it's all simple scripts running here and there to create pipelines.
Curious what is the reason to have one pipeline deploy all 10 lambdas?
I want to automate the process of deploying my Alexa skill and want my pipeline to do the following for me:
ask deploy --target lambda if my lambda has been modified
ask deploy --target model if my models have been modified.
I know I can put an IF condition in Jenkins to check either the git log or the changeset, and that would solve my purpose, but since my skill is already in production I don't want to risk the skill getting modified by mistake and having to send it for re-certification again.
I figured out the solution, thanks to this link. I was under the misconception that Amazon discards my production version of the skill as soon as I deploy new changes to it. It is rather different.
Now this is what I am doing:
I am using ask deploy to deploy my Lambda as well as the skill.
I can send it for re-certification whenever I wish, and until then develop on my current dev version.
I am using aliases for the prod deployment of the Lambda, and I can keep re-deploying my Lambda to the default $LATEST version until I decide to publish it to PROD.
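A minimal sketch of that alias flow with the AWS CLI (the function and alias names are placeholders):

    # Day-to-day development: ask deploy keeps updating $LATEST only.
    ask deploy --target lambda

    # When the skill is ready for production: freeze the current code as a new
    # numbered version and point the PROD alias (which the live skill uses) at it.
    version=$(aws lambda publish-version --function-name my-skill-handler \
      --query Version --output text)
    aws lambda update-alias --function-name my-skill-handler \
      --name PROD --function-version "$version"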
I am trying to make a CodePipeline pipeline which will build my branch when I make a pull request to the master branch in AWS. I have many developers working in my organisation and all the developers work on their own branches. I am not very familiar with creating Lambda functions. Hoping for a solution.
You can dynamically create pipelines every time a new pull request is created. Look for the CodeCommit triggers (in the old CodePipeline UI); you need Lambda for this.
Basically it works like this: copy an existing pipeline and update the source branch; a rough sketch of that step is below.
It is not the best, but AFAIK it is the only way to do what you want.
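A rough sketch of the copy-and-update step from such a Lambda, shown here with the AWS CLI for brevity (the pipeline and branch names are placeholders):

    # Pull the definition of an existing pipeline to use as a template.
    aws codepipeline get-pipeline --name base-pipeline --query pipeline > pipeline.json
    # Edit pipeline.json: change "name" and the Source action's branch configuration
    # to the feature branch (e.g. with jq or sed), then create the copy.
    aws codepipeline create-pipeline --pipeline file://pipeline.json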
I was there and would not recommend it for the following reasons:
I hit this limit of 20 in my region: "Maximum number of pipelines with change detection set to periodically checking for source changes" - but, you definitely want this feature ( https://docs.aws.amazon.com/codepipeline/latest/userguide/limits.html )
The branch-deleted trigger does not work correctly, so you cannot delete the created pipeline when the branch has been merged into master.
I would recommend using GitHub.com if you need a workflow like the one you described. Sorry for this.
I have recently implemented an approach that uses CodeBuild GitHub webhook support to run initial unit tests and build, and then publish the source repository and built artefacts as a zipped archive to S3.
You can then use the S3 archive as a source in CodePipeline, where you can then transition your PR artefacts and code through Integration testing, Staging deployments etc...
This is quite a powerful pattern, although one trap here is that if you have a lot of pull requests being created at a single time, you can get CodePipeline executions being superseded, given only one execution can proceed through a given stage at a time (this is actually a really important property, especially if your integration tests run against shared resources and you don't want multiple instances of your application running data setup/teardown tasks at the same time). To overcome this, I publish an S3 notification to an SQS FIFO queue when CodeBuild publishes the S3 artefact, and then poll the queue, copying each artefact to a different S3 location that triggers CodePipeline, but only if there are currently no executions waiting to execute after the first CodePipeline source stage.
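Roughly, the gating poller looks like this (the queue URL, bucket names, object keys, and pipeline name are placeholders, and the "is anything still in flight" check is deliberately simplified):

    # Take the next artefact notification off the FIFO queue.
    msg=$(aws sqs receive-message --queue-url "$QUEUE_URL" \
      --max-number-of-messages 1 --wait-time-seconds 20)

    # Count stages (other than Source) that still have an execution in progress.
    in_flight=$(aws codepipeline get-pipeline-state --name pr-pipeline \
      --query "length(stageStates[?stageName!='Source' && latestExecution.status=='InProgress'])" \
      --output text)

    if [ "$in_flight" -eq 0 ] && [ -n "$msg" ]; then
      # Promote the artefact to the location CodePipeline watches, starting a new execution.
      aws s3 cp s3://pr-staging-bucket/pr-1234/source.zip s3://pipeline-source-bucket/source.zip
      aws sqs delete-message --queue-url "$QUEUE_URL" \
        --receipt-handle "$(echo "$msg" | jq -r '.Messages[0].ReceiptHandle')"
    fi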
We can very well have dynamic branching support with the following approach.
One of the limitations in AWS CodePipeline is that we have to specify branch names while creating the pipeline. We can, however, overcome this issue using the architecture shown below.
flow diagram
Create a Lambda function which takes the GitHub webhook data as input; using boto3, integrate it with the AWS pipeline (pull the pipeline definition and update it); put API Gateway in front so the Lambda function can be invoked as a REST call; and finally create a webhook on the GitHub repository. A rough sketch of the pull-and-update step is below.
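The approach above uses boto3 inside the Lambda; the pull-and-update step it refers to looks roughly like this when expressed with the AWS CLI instead (the pipeline name and branch are placeholders):

    # Pull the current pipeline definition.
    aws codepipeline get-pipeline --name my-pipeline --query pipeline > pipeline.json
    # Point the Source action at the branch from the webhook payload. The key is
    # "Branch" for the GitHub (version 1) source action; other source types differ.
    jq '.stages[0].actions[0].configuration.Branch = "feature/my-branch"' pipeline.json > updated.json
    # Push the changed definition back.
    aws codepipeline update-pipeline --pipeline file://updated.json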
External links:
https://aws.amazon.com/quickstart/architecture/git-to-s3-using-webhooks/
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codepipeline.html
Related thread: Dynamically change branches on AWS CodePipeline