Is there a way to run ask deploy through Jenkins? - amazon-web-services

I want to automate the process of deploying an Alexa skill through Jenkins, and I want my pipeline to do the following for me:
ask deploy --target lambda if my lambda has been modified
ask deploy --target model if my models have been modified.
I know I can put an IF condition in Jenkins that checks the git log or the changeset, and that would solve my purpose. But since my skill is already in production, I don't want to risk the skill being modified by mistake and having to send it for re-certification again.
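For illustration, the kind of conditional I have in mind would be a shell step along these lines (the lambda/ and models/ paths are just placeholders for my repo layout):

```bash
# Hypothetical Jenkins shell step: deploy only the part of the skill that changed.
CHANGED=$(git diff --name-only HEAD~1 HEAD)

if echo "$CHANGED" | grep -q '^lambda/'; then
  ask deploy --target lambda
fi

if echo "$CHANGED" | grep -q '^models/'; then
  ask deploy --target model
fi
```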

I figured out the solution, thanks to this link. I was under the misconception that Amazon discards my production version of the skill as soon as I deploy new changes to it. It actually works differently.
Now this is what I am doing:
I am using ask deploy to deploy my lambda as well as the skill.
I can send it for re-certification whenever I wish, and until then keep developing on my current dev version.
I am using aliases for the prod deployment of my lambda, so I can keep re-deploying the default $LATEST version until I decide to publish it to PROD.
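For reference, a minimal sketch of that alias flow with the AWS CLI (the function name and alias name are just examples) looks like this:

```bash
# Day-to-day dev deploys keep updating $LATEST; the PROD alias stays pinned to the old version.
aws lambda update-function-code --function-name my-skill-handler --zip-file fileb://build.zip

# When publishing to PROD: freeze $LATEST as a new version and move the alias to it.
VERSION=$(aws lambda publish-version --function-name my-skill-handler --query Version --output text)
aws lambda update-alias --function-name my-skill-handler --name PROD --function-version "$VERSION"
```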

Related

Automate Apigee API Proxy Deployment in CI/CD Pipeline

Is there any recommended method to create and deploy the Apigee API Proxy Bundle via a CI/CD pipeline (I'm using Azure DevOps)?
I want to avoid excessive API Proxy Bundles being created and deployed when there are no changes to be made. I've already tested, and identical bundles still create a new revision.
So far, my own solution is to write a PowerShell script to use apigeecli to download the current bundle and compare it against the apiproxy that I have locally in my repo. If it differs, I create and deploy a new API Proxy Bundle.
Has anyone seen anything better?
I have mainly automated with GitLab, but I will share my ideas; they may help with your specific case.
We use version control to manage our Apigee repos. I have set up a GitLab pipeline that checks the diff any time we push to the repository, and only if there are changes do we redeploy the proxy to Apigee. When the pipeline is triggered, we check whether there are any changes to target servers, proxies and shared flows, and if changes are detected, we check the deployed revision and environments.
Through my deployment script, I am able to get a list of these changes and pass them to the pipeline as a CHANGES variable. This means that only the modified proxies will be deployed.
In my pipeline I can do something like git diff --name-only $CI_COMMIT_SHA..$CI_COMMIT_BEFORE_SHA > changes.txt and pass the content of the changes file into CHANGES to be deployed.
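As a rough sketch only (the directory layout and the deploy script are assumptions; substitute whatever Apigee deploy tooling you use, e.g. apigeecli or a Maven plugin):

```bash
# List the files changed by this push.
git diff --name-only $CI_COMMIT_SHA..$CI_COMMIT_BEFORE_SHA > changes.txt

# Keep only the proxy folders that were touched; assumes proxies live under apiproxies/<name>/.
CHANGES=$(grep '^apiproxies/' changes.txt | cut -d/ -f2 | sort -u)

for proxy in $CHANGES; do
  echo "Redeploying $proxy"
  ./deploy.sh "$proxy"   # placeholder for your apigeecli / Maven deploy step
done
```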

An AWS CI/CD Pipeline that allows manual deploy by commit

Background
I want to create the following CI/CD flow in AWS and Github, for a react app using Amplify:
A single main branch, with short-lived feature branches and PRs into main.
Each PR triggers its own test environment in Amplify, with its own temporary subdomain, which gets torn down when the PR is merged, as described here.
Merging into main does not automatically trigger a deploy to production.
Instead, there is a separate mechanism (a web page, or amplify command, or even triggers based on git tags) for manually selecting a commit from main to deploy to production.
Questions
It's not clear to me if...
Support for this flow is already built into Amplify (based on the docs I've read, I think the answer is "no", but I'm not sure).
Support for this flow is already built into AWS CodePipeline, or if it can be configured there.
There is another AWS tool that solves this.
I'm looking for answers to those questions, or specific references in the docs which address them.
The answers for Amplify are Yes, Yes, Yes, Partially.
(1) A single main branch, with short-lived feature branches and PRs into main.
Yes. Feature branch deploys. You can define which branch patterns, such as feature/*, you wish to auto-deploy.
(2) Each PR triggers its own test environment in Amplify, with its own temporary subdomain,
Yes. Web Previews for PRs. "A web preview deploys every pull request made to your GitHub repository to a unique preview URL which is completely different from the URL your main site uses."
(3) Merging into main does not automatically trigger a deploy to production.
Yes. Disable automatic builds on main.
(4) Instead, there is a separate mechanism ... for manually selecting a commit from main to deploy to production.
Partially (HEAD only?). Call the StartJob API to manually trigger a build from, say, Lambda. The job type RELEASE starts a new job with the latest change from the specified branch. I am not sure if jobType: MANUAL with a commitId starts a job from an arbitrary commit hash.
Another workaround for 3+4 is to skip the build for an arbitrary commit. Amplify will skip building if [skip-cd] appears at the end of a commit message.
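To make (3) and (4) concrete, a hedged AWS CLI sketch (the app id is a placeholder, and the MANUAL variant is exactly the part I am unsure about):

```bash
# (3) Stop merges to main from auto-deploying.
aws amplify update-branch --app-id d1a2b3c4d5e6 --branch-name main --no-enable-auto-build

# (4) Manually release the latest commit on main when you are ready.
aws amplify start-job --app-id d1a2b3c4d5e6 --branch-name main --job-type RELEASE

# Untested: whether MANUAL plus --commit-id really builds an arbitrary older commit.
# aws amplify start-job --app-id d1a2b3c4d5e6 --branch-name main --job-type MANUAL --commit-id <sha>
```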
In my experience, I don't think there is any easy way to meet your requirement.
If you are using GitLab, you can try GitLab Review Apps to achieve that (I tried this before with some scripts).
Support for this flow is already built into Amplify (based on the docs I've read, I think the answer is "no", but I'm not sure).
Check the links below to see if they help:
https://www.youtube.com/watch?v=QV2WS535nyI
https://dev.to/rajandmr/deploying-react-app-using-aws-amplify-with-ci-cd-pipeline-setup-3lid
Support for this flow is already built into AWS CodePipeline, or if it can be configured there.
For this, you need to create your own full pipeline. Yes, you can configure it in CodePipeline.
There is another AWS tool that solves this.
If you are okay with Jenkins, then Jenkins will help you achieve this.
You can deploy Jenkins as a Docker container on AWS EC2 and create your pipeline. You can also use parameterized builds for selecting your environment and git branch.

How to get SonarQube results back to CodeBuild

I've seen many discussions on-line about Sonar web-hooks to send scan results to Jenkins, but as a CodePipeline acolyte, I could use some basic help with the steps to supply Sonar scan results (e.g., quality-gate pass/fail status) to the pipeline.
Is the Sonar web-hook the right way to go, or is it possible to use Sonar's API to fetch the status of a scan for a given code-project?
Our code is in BitBucket. I'm working with the AWS admin who will create the CodePipeline that fires when code is attempted to be pushed into the repo. sonar-scanner will be run, and then we'd like the pipeline to stop if the quality does not pass the Quality Gate.
If I were to use a Sonar webhook, I imagine the value for host would be, what, the AWS instance running CodeBuild?
Any pointers, references, examples welcome.
I created a PowerShell script to use with Azure DevOps that could possibly be migrated to a shell script that runs in the CodeBuild step:
https://github.com/michaelcostabr/SonarQubeBuildBreaker
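A rough shell equivalent that could run as a CodeBuild step (it relies on the report-task.txt file the scanner writes and the standard SonarQube Web API; the host, token and paths are assumptions):

```bash
# sonar-scanner writes the background-task URL into report-task.txt after the scan.
CE_TASK_URL=$(grep '^ceTaskUrl=' .scannerwork/report-task.txt | cut -d= -f2-)

# Wait for SonarQube to finish processing the analysis report.
while true; do
  STATUS=$(curl -s -u "$SONAR_TOKEN:" "$CE_TASK_URL" | jq -r '.task.status')
  [ "$STATUS" != "PENDING" ] && [ "$STATUS" != "IN_PROGRESS" ] && break
  sleep 5
done

# Look up the quality gate result for that analysis and fail the build if it is not OK.
ANALYSIS_ID=$(curl -s -u "$SONAR_TOKEN:" "$CE_TASK_URL" | jq -r '.task.analysisId')
GATE=$(curl -s -u "$SONAR_TOKEN:" \
  "$SONAR_HOST/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" \
  | jq -r '.projectStatus.status')

[ "$GATE" = "OK" ] || { echo "Quality gate failed: $GATE"; exit 1; }
```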

Code pipeline to build a branch on pull request

I am trying to make a CodePipeline which will build my branch when I make a pull request to the master branch in AWS. I have many developers working in my organisation and all the developers work on their own branch. I am not very familiar with creating Lambda functions. Hoping for a solution.
You can dynamically create pipelines every time a new pull request is created. Look for CodeCommit triggers (in the old CodePipeline UI); you need Lambda for this.
Basically it works like this: copy the existing pipeline and update the source branch.
It is not the best, but AFAIK the only way to do what you want.
I was there and would not recommend it for the following reasons:
I hit this limit of 20 in my region: "Maximum number of pipelines with change detection set to periodically checking for source changes" - but you definitely want this feature (https://docs.aws.amazon.com/codepipeline/latest/userguide/limits.html).
The branch-deleted trigger does not work correctly, so you can not delete the created pipeline, when the branch has been merged into master.
I would recommend using GitHub.com if you need a workflow like the one you described. Sorry for this.
I have recently implemented an approach that uses CodeBuild GitHub webhook support to run initial unit tests and build, and then publish the source repository and built artefacts as a zipped archive to S3.
You can then use the S3 archive as a source in CodePipeline, where you can then transition your PR artefacts and code through Integration testing, Staging deployments etc...
This is quite a powerful pattern, although one trap here is that if a lot of pull requests are created at the same time, CodePipeline executions can be superseded, since only one execution can proceed through a given stage at a time (this is actually a really important property, especially if your integration tests run against shared resources and you don't want multiple instances of your application running data setup/teardown tasks at the same time). To overcome this, I publish an S3 notification to an SQS FIFO queue when CodeBuild publishes the S3 artifact, and then poll the queue, copying each artifact to a different S3 location that triggers CodePipeline, but only if there are currently no executions waiting to execute after the first CodePipeline source stage.
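To give a flavour of that "only release the next artifact when nothing is queued" check, the poller can ask CodePipeline for its current state before copying (the pipeline name, bucket names and stage filter below are assumptions about your setup):

```bash
# Count stages after the Source stage that currently have an execution in progress.
IN_PROGRESS=$(aws codepipeline get-pipeline-state --name my-pr-pipeline \
  | jq '[.stageStates[1:][] | select(.latestExecution.status == "InProgress")] | length')

if [ "$IN_PROGRESS" -eq 0 ]; then
  # Nothing is waiting behind the source stage, so release the next PR artifact.
  aws s3 cp "s3://pr-artifacts/$NEXT_KEY" s3://pipeline-source/source.zip
fi
```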
We can very well have dynamic branching support with the following approach.
One of the limitations in AWS code-pipeline is that we have to specify branch names while creating the pipeline. We can however overcome this issue using the architecture shown below.
[flow diagram]
Create a Lambda function that takes the GitHub webhook data as input and uses boto3 to integrate with the pipeline (pull the pipeline definition and update it), put an API Gateway in front of it so the Lambda function can be called as a REST endpoint, and finally create a webhook on the GitHub repository.
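The CodePipeline part of that Lambda is essentially a get/patch/update of the pipeline definition; the same idea with the AWS CLI, as a sketch only (the pipeline name and the stage/action indexes are assumptions about your pipeline), looks like this:

```bash
# Pull the current pipeline definition.
aws codepipeline get-pipeline --name my-pipeline --query pipeline > pipeline.json

# Point the source action at the branch named in the GitHub webhook payload.
# Assumes the source action is the first action of the first stage; the key is
# "Branch" for a GitHub source action (it is "BranchName" for CodeCommit).
jq --arg branch "$WEBHOOK_BRANCH" \
  '.stages[0].actions[0].configuration.Branch = $branch' \
  pipeline.json > updated-pipeline.json

# Push the modified definition back.
aws codepipeline update-pipeline --pipeline file://updated-pipeline.json
```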
External links:
https://aws.amazon.com/quickstart/architecture/git-to-s3-using-webhooks/
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codepipeline.html
Related thread: Dynamically change branches on AWS CodePipeline

Dynamically append function to AppSync Pipeline Resolver via CloudFormation

I am currently developing an AppSync based API in a domain driven manner, so we need to add a function to an already created pipeline resolver. Does anybody know if there is any chance of doing this via CloudFormation without using a custom resource?
Thanks in advance, Sven
Terraform can do this neatly if you can build this pull request yourself or vote in this issue to get it merged into a public release.
The new syntax is described here.
The build process is actually quite simple. It took me about 30 min end-to-end.
Install GoLang.
Clone the repo with the changes and sync it with the main (upstream) repo.
Make sure you cloned it into go\src\github.com\terraform-providers\terraform-provider-aws folder.
Run go build from go\src\github.com\terraform-providers\terraform-provider-aws
Replace .terraform\plugins\...\terraform-provider-aws-* executable with the one you compiled.
Run terraform init
Test by trying to import a function: terraform import aws_appsync_function.example xxxxx-yyyyy
I hope the pull request gets merged by the time you read this.