AWS: Is it possible to setup a CloudWatch event to run a pipeline at a specific time but only if there are changes on my codecommit repository?
I don't think this is possible out of the box.
An approach could be having a Lambda function execute on a regular schedule (e.g. 3 a.m.).
The function would then compare the latest CodePipeline release against the latest revision committed, and trigger the pipeline accordingly.
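A minimal sketch of such a scheduled Lambda (e.g. invoked by an EventBridge cron rule), assuming placeholder pipeline/repo/branch names and the boto3 CodeCommit and CodePipeline APIs:

```python
PIPELINE = "my-pipeline"   # assumed pipeline name
REPO = "my-repo"           # assumed CodeCommit repository name
BRANCH = "main"            # assumed branch name

def needs_run(latest_commit, last_run_commit):
    # Trigger only when the branch head differs from the last execution's revision.
    return latest_commit is not None and latest_commit != last_run_commit

def handler(event, context):
    import boto3  # imported here so needs_run stays unit-testable without AWS deps
    codecommit = boto3.client("codecommit")
    codepipeline = boto3.client("codepipeline")

    # Latest commit on the watched branch.
    branch = codecommit.get_branch(repositoryName=REPO, branchName=BRANCH)
    latest_commit = branch["branch"]["commitId"]

    # Revision the most recent pipeline execution ran against.
    summaries = codepipeline.list_pipeline_executions(
        pipelineName=PIPELINE, maxResults=1
    )["pipelineExecutionSummaries"]
    last_run_commit = None
    if summaries and summaries[0].get("sourceRevisions"):
        last_run_commit = summaries[0]["sourceRevisions"][0].get("revisionId")

    if needs_run(latest_commit, last_run_commit):
        codepipeline.start_pipeline_execution(name=PIPELINE)
        return {"started": True, "commit": latest_commit}
    return {"started": False, "commit": latest_commit}
```

This is a sketch, not a hardened implementation: the execution role needs codecommit:GetBranch, codepipeline:ListPipelineExecutions, and codepipeline:StartPipelineExecution.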
Related
I have two AWS accounts. I develop code in one CodeCommit repository. Once it is done, I need to clone that code into the other account's CodeCommit repository. Is there a way to automate the process using a Lambda function or any other method?
Please help me; this has been a headache for more than a month. :)
There are several ways of doing that. Essentially, what you'll need is a trigger that kicks off the replication process into the other account after each commit. Two possible documented approaches are described below.
Lambda + Fargate
The first one uses Lambda, with CodeCommit configured as its trigger. The Lambda function then runs a Fargate task, which in turn replicates the repository using git clone --mirror. Fargate is used here because replicating larger repositories might exceed the temporary storage that Lambda can allocate.
https://aws.amazon.com/blogs/devops/replicate-aws-codecommit-repository-between-regions-using-aws-fargate/
CodePipeline + CodeBuild
This is probably the "cleaner" variant as it uses native CI/CD tooling in AWS, making it easier to set up as compared to ECS/Fargate, amongst other advantages.
Here you're setting up AWS CodePipeline, which will monitor the CodeCommit repository for any changes. When a commit is detected, it will trigger CodeBuild, which in turn runs the same git command outlined earlier.
https://medium.com/geekculture/replicate-aws-codecommit-repositories-between-regions-using-codebuild-and-codepipeline-39f6b8fcefd2
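Both variants boil down to the same two git invocations: a bare mirror clone followed by a mirror push. A minimal sketch of that replication step in Python, with placeholder repository URLs (CodeCommit authentication via the git credential helper is assumed to be configured):

```python
import subprocess
import tempfile

def mirror_commands(source_url, target_url, mirror_dir):
    # A bare mirror clone copies every ref; a mirror push replays them all
    # (including deletions) onto the target repository.
    return (
        ["git", "clone", "--mirror", source_url, mirror_dir],
        ["git", "push", "--mirror", target_url],
    )

def replicate(source_url, target_url):
    with tempfile.TemporaryDirectory() as workdir:
        mirror_dir = f"{workdir}/mirror.git"
        clone_cmd, push_cmd = mirror_commands(source_url, target_url, mirror_dir)
        subprocess.run(clone_cmd, check=True)
        # The push runs from inside the mirror clone.
        subprocess.run(push_cmd, cwd=mirror_dir, check=True)

if __name__ == "__main__":
    # Placeholder CodeCommit clone URLs -- substitute your own repositories.
    replicate(
        "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/repo-1",
        "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/repo-2",
    )
```

In the CodeBuild variant, the same two commands typically live in the buildspec instead of a script.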
Assuming you have repo 1 in account A and repo 2 in account B, and you want to sync repo 1 -> repo 2.
The easiest way is to do the following:
create an SNS topic in account A
enable notifications for repo 1, and send all events to the SNS topic
create a Lambda function subscribed to the SNS topic
make sure you follow this guide https://docs.aws.amazon.com/codecommit/latest/userguide/cross-account.html to grant the Lambda function cross-account CodeCommit permissions
write a Python function to decide which git events you want to replicate. If you just want to sync the main branch and ignore all other branches, you can check something like if event["source_ref"].endswith("main"), then use the boto3 CodeCommit API https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codecommit.html (take a look at batch_get_commits) to commit the change to the remote CodeCommit repository.
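A sketch of what such a subscriber could look like. The message layout below follows the shape of CodeCommit repository trigger messages, but the exact field paths are an assumption here; inspect a real message delivered to your topic and adjust accordingly:

```python
import json

def should_replicate(message):
    # Assumed layout: each record lists the git references that changed.
    # Only pushes to the main branch are replicated in this sketch.
    for record in message.get("Records", []):
        for ref in record.get("codecommit", {}).get("references", []):
            if ref.get("ref", "").endswith("main"):
                return True
    return False

def handler(event, context):
    replicated = 0
    for record in event["Records"]:
        # SNS wraps the CodeCommit payload as a JSON string in "Message".
        message = json.loads(record["Sns"]["Message"])
        if should_replicate(message):
            # Replicate here via the boto3 CodeCommit API, using the
            # cross-account permissions from the guide linked above.
            replicated += 1
    return {"replicated": replicated}
```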
However, do you really need to do this? How about just dumping the entire git history as a zip to S3 in your remote account, and importing it every time you see changes? I believe your remote account is mostly READ ONLY and just serves as a backup. If you only need a backup, you can just dump to S3 and skip the import entirely.
Background:
I'm planning on creating a CodePipeline that has multiple source actions within the initial source stage. Each source action is a GitHub repo that will have its own AWS CodePipeline webhook. Within the pipeline's next stage, I want to have an invoke action that will identify which webhook triggered the pipeline execution and set the input artifact for the downstream build stage to the source action associated with that webhook. For example, if repo A's webhook caused pipeline execution #1, then the invoke action will somehow identify that repo A's webhook was the trigger and pass repo A's output artifact to the downstream build stage.
Problem:
I haven't found a solution to get the CodePipeline webhook that triggered the pipeline run. Looking at the boto3 CodePipeline docs, the closest I've gotten is list_webhooks, which identifies which pipeline a webhook is associated with, but nothing about whether that webhook triggered CodePipeline execution ID 123.
The list_pipeline_executions command should help you in this case. It provides you with CodePipeline execution summaries, where the first result is the latest execution ID. Each summary has a trigger attribute with information about how the execution was triggered. For a webhook it looks like this:
"trigger": {
"triggerType": "Webhook",
"triggerDetail": "arn:aws:codepipeline:<region>:<account-id>:webhook:<webhook-id>"
}
If your pipeline is likely to be running concurrently, make sure you get the current execution ID first so things do not get mixed up. You can do this with a one-liner in CodeBuild as suggested here.
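A sketch of that lookup with boto3; the pipeline name is a placeholder, and the helper is split out so the matching logic is easy to test:

```python
def find_trigger(summaries, execution_id):
    # Pick the summary for the execution we care about and return its
    # trigger block (triggerType plus triggerDetail, e.g. the webhook ARN).
    for summary in summaries:
        if summary["pipelineExecutionId"] == execution_id:
            return summary.get("trigger")
    return None

def trigger_for_execution(pipeline_name, execution_id):
    import boto3  # imported here so find_trigger stays unit-testable
    client = boto3.client("codepipeline")
    summaries = client.list_pipeline_executions(
        pipelineName=pipeline_name
    )["pipelineExecutionSummaries"]
    return find_trigger(summaries, execution_id)
```

Usage would be trigger_for_execution("my-pipeline", current_execution_id), with the execution ID taken from the running pipeline as described above.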
I'm trying to set up my CI/CD process with Bitbucket, CodeBuild, and CodePipeline. So far, I have a webhook on Bitbucket that runs CodeBuild on a custom Docker image from ECR and uploads the artifacts to an Amazon S3 bucket with versioning enabled. The new upload triggers the CodePipeline via a CloudWatch event and, after manual approval, runs another CodeBuild that deploys to a new S3 bucket, which works fine so far. The CodePipeline has 3 stages: Source (S3) -> Manual Approve -> CodeBuild.
The problem is: when I push multiple branches/new commits, the first CodeBuild runs and uploads artifacts to S3 with different versions as expected. However, each upload should trigger its own CodePipeline execution. If I make 3 different pushes to the Bitbucket repository, there should be three pipeline executions waiting for manual approval to be deployed.
But this isn't the case.
I have three artifacts uploaded in S3 with different VersionID based on the commits:
1st commit's artifacts VersionID: OKIBabVQQX80aAuARAne8jnClsTpJGXu
2nd commit's artifacts VersionID: YNsgp9rodnAx7du1Te1OQD2UO0t5IZc
3rd commit's artifacts VersionID: VN7pwVS5zpYNTmzJrLbFGKBupogpgtHN
In CodePipeline:
Stage: Manual Approve is waiting for approval which has S3 Source Version: OKIBabVQQX80aAuARAne8jnClsTpJGXu
Stage: Source is triggered from S3 with VersionID: VN7pwVS5zpYNTmzJrLbFGKBupogpgtHN
Here I am completely missing the 2nd artifacts: (s3 VersionID: YNsgp9rodnAx7du1Te1OQD2UO0t5IZc )
I would expect all three artifacts to trigger the CodePipeline one after another so that I can deploy all 3 of my pushes.
Many many thanks!!!
When a pipeline execution starts, it runs a revision through every stage and action in the pipeline. See Start a Pipeline Execution in CodePipeline.
A CodePipeline stage shows the last trigger for that particular stage.
In your case you have:
Artifact1 triggered Source and Manual Approve.
Artifact2 triggered Source and waiting to trigger Manual Approval.
Artifact3 triggered Source and waiting to trigger Manual Approval.
Since Artifact3 comes after Artifact2, the Source stage shows it (the last trigger).
Artifact2 is not shown but is still waiting for its turn. Once you finish with Artifact1 at the Manual Approve stage, Artifact2 will appear at Manual Approve.
After that, Artifact3 will move to the Manual Approve stage, and so on.
I have an AWS Lambda function that cannot accept downtime, even for the short time it takes to rebuild the stack.
I want to update the lambda trigger by adding a new SNS trigger to it. How do I do that with aws cli?
Check Using AWS Lambda with Amazon Simple Notification Service for a CLI example.
However, best practice would be to use Lambda Versioning and Aliases.
Create a new version of your function with the updated configuration, test it, pre-warm it, and shift traffic to the new version without any downtime.
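Adding the trigger itself takes two calls: grant SNS permission to invoke the function, then subscribe the function to the topic. A sketch via boto3 (the same operations exist in the CLI as aws lambda add-permission and aws sns subscribe); all names and ARNs below are placeholders:

```python
def permission_params(function_name, topic_arn, statement_id="sns-invoke"):
    # Parameters for lambda add_permission: allow the SNS service principal,
    # scoped to this one topic, to invoke the function.
    return {
        "FunctionName": function_name,
        "StatementId": statement_id,
        "Action": "lambda:InvokeFunction",
        "Principal": "sns.amazonaws.com",
        "SourceArn": topic_arn,
    }

def add_sns_trigger(function_name, function_arn, topic_arn):
    import boto3  # imported here so permission_params stays unit-testable
    boto3.client("lambda").add_permission(
        **permission_params(function_name, topic_arn)
    )
    boto3.client("sns").subscribe(
        TopicArn=topic_arn, Protocol="lambda", Endpoint=function_arn
    )
```

Neither call touches the function's code or existing triggers, so there is no downtime while the new trigger is attached.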
In my CodePipeline, I am creating a CloudFormation ChangeSet and then executing it to deploy Lambda functions. It doesn't seem like CloudFormation saves the old ChangeSets so that I can revert to an old version. Am I wrong?
CloudFormation does automatically roll back when it fails to create/execute the ChangeSet due to IAM permission issues and such, but I want the ability to manually roll back in case I deploy a buggy function.
You could use rollback triggers in AWS CloudFormation to detect failed tests in your code, via Amazon CloudWatch metrics and alarms, and perform an automated rollback.
Your application code would need to be modified to perform the tests upon deployment, and then write the metric values into Amazon CloudWatch.
There are a couple of limits you'll want to be aware of:
Maximum of five (5) rollback configurations per CloudFormation stack
Monitoring time: 0 - 180 minutes (3 hours)
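A sketch of attaching such a rollback configuration to a stack update with boto3; the stack name, template, and alarm ARNs are placeholders:

```python
def rollback_configuration(alarm_arns, monitoring_minutes=15):
    # Enforce the limits noted above: at most 5 triggers, 0-180 minutes.
    if len(alarm_arns) > 5:
        raise ValueError("at most 5 rollback triggers per stack")
    if not 0 <= monitoring_minutes <= 180:
        raise ValueError("monitoring time must be 0-180 minutes")
    return {
        "RollbackTriggers": [
            {"Arn": arn, "Type": "AWS::CloudWatch::Alarm"} for arn in alarm_arns
        ],
        "MonitoringTimeInMinutes": monitoring_minutes,
    }

def update_with_rollback(stack_name, template_body, alarm_arns):
    import boto3  # imported here so rollback_configuration stays unit-testable
    # If any listed alarm fires during the monitoring window after the
    # update, CloudFormation rolls the stack back automatically.
    boto3.client("cloudformation").update_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        RollbackConfiguration=rollback_configuration(alarm_arns),
        Capabilities=["CAPABILITY_IAM"],
    )
```

Your deployed function would write its post-deployment test results as CloudWatch metrics backing those alarms, as described above.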