I have two different sources in my CodePipeline, ECR and S3. My deployment pipeline uploads a zip to S3, and then an image to ECR.
I need CodePipeline to detect just the ECR push, which happens last, and then trigger the S3 source action. However, whichever source is detected first starts the pipeline, which leads to a race condition where the image for the new version hasn't been uploaded yet.
How can I resolve this? I cannot move S3 out of the Source stage, as per CodePipeline limitations. I've tried moving this S3 download to a Lambda function, but I can't seem to pass the zip back to CodePipeline as an output artifact.
As the trigger for the pipeline, define the CloudWatch Events rule for ECR only, not for S3 changes, and disable the pipeline's built-in change detection/polling on the S3 source.
https://docs.aws.amazon.com/codepipeline/latest/userguide/create-cwe-ecr-source-console.html
This will make sure only ECR triggers a pipeline execution.
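For reference, here is roughly what that rule looks like when created with boto3 instead of the console; the repository, pipeline, and role names below are placeholders, and the exact event pattern to use is covered in the linked guide:

import json
import boto3

events = boto3.client("events")

# Placeholder names -- replace with your own repository, pipeline, and IAM role.
pattern = {
    "source": ["aws.ecr"],
    "detail-type": ["ECR Image Action"],
    "detail": {
        "action-type": ["PUSH"],
        "result": ["SUCCESS"],
        "repository-name": ["my-repo"],
        "image-tag": ["latest"],
    },
}

# Rule that fires only on successful ECR image pushes.
events.put_rule(Name="ecr-push-starts-pipeline", EventPattern=json.dumps(pattern))

# The role must allow events.amazonaws.com to call codepipeline:StartPipelineExecution.
events.put_targets(
    Rule="ecr-push-starts-pipeline",
    Targets=[{
        "Id": "my-pipeline",
        "Arn": "arn:aws:codepipeline:us-east-1:111111111111:my-pipeline",
        "RoleArn": "arn:aws:iam::111111111111:role/start-pipeline-role",
    }],
)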
If the intent is to process the ECR image together with whatever was uploaded to S3, and assuming the upload is a new version of the same object key, you can pull that object into the Build stage of your pipeline, fetching the latest version as you tried with the Lambda function, or use some other way to identify the new S3 object.
If you're uploading the zip to S3 and then pushing the image to ECR, you can certainly move the S3 bucket out of the Source stage and keep it independent as far as the pipeline and its triggers are concerned. Fetching the zip then becomes just another step in your Build project at the appropriate phase.
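For example, a small step in the build can pull the newest copy of the zip itself; the bucket and key names below are placeholders, and the buildspec equivalent is a one-line "aws s3 cp" in the relevant phase:

import boto3

s3 = boto3.client("s3")

# Fetch the latest version of the deployment zip during the Build stage
# instead of declaring it as a second pipeline source (placeholder names).
s3.download_file("my-artifact-bucket", "release/app.zip", "/tmp/app.zip")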
Related
Has anyone run into this issue where you have a CodePipeline with two input sources, say two S3 buckets, and you want to run the buildspec off the second input source but only want the pipeline to trigger on the first one? I don't want it to trigger if there are updates to the second bucket.
Is there any way I can prevent that? Or at least prevent anything from being run in the buildspec if the second bucket gets updated?
Do not add the second S3 bucket as a source to your CodePipeline. Instead, use "aws s3 cp" in a CodeBuild action to gather the files from that bucket and use them as intended.
A CI/CD pipeline is supposed to run when code changes. The fact that you don't want the pipeline to run on the second source means it is more like helper content that just needs to be available for the build, so it should simply be provisioned while the pipeline is running, using a copying mechanism such as "git clone" or "aws s3 cp".
I would like to prevent some types of commits from triggering an AWS CodePipeline, but I can't find any configuration for this in the Source stage.
However, if AWS CodeBuild is not linked to AWS CodePipeline, I do have access to more trigger options.
How can I configure trigger options when using AWS CodePipeline?
You can do this by editing the CloudWatch Event for the pipeline. Using a Lambda function, you can look for a specific type of change in the commit. The example in the link below looks for changes to specific files, so if you only change the readme.md file, for example, it doesn't deploy.
https://aws.amazon.com/blogs/devops/adding-custom-logic-to-aws-codepipeline-with-aws-lambda-and-amazon-cloudwatch-events/
You could take this example further and, for instance, look for specific flags in your commit message.
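As a rough outline, assuming a CodeCommit source and following the pattern in that post (the repository fields, ignore list, and pipeline name below are only illustrations), the Lambda could look like this, with the pipeline's own change detection disabled so that only the Lambda starts executions:

import boto3

codecommit = boto3.client("codecommit")
codepipeline = boto3.client("codepipeline")

IGNORED_PREFIXES = ("readme.md", "docs/")   # illustrative: changes here should not deploy
PIPELINE_NAME = "my-pipeline"               # placeholder pipeline name

def handler(event, context):
    # Fields follow the CodeCommit "Repository State Change" event; adjust for your source.
    detail = event["detail"]
    diffs = codecommit.get_differences(
        repositoryName=detail["repositoryName"],
        beforeCommitSpecifier=detail["oldCommitId"],
        afterCommitSpecifier=detail["commitId"],
    )["differences"]

    # Collect the paths touched by this commit.
    changed = {(d.get("afterBlob") or d.get("beforeBlob"))["path"] for d in diffs}

    # Start the pipeline only if something outside the ignore list changed.
    if any(not p.lower().startswith(IGNORED_PREFIXES) for p in changed):
        codepipeline.start_pipeline_execution(name=PIPELINE_NAME)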
I'm trying to set up my CI/CD process with Bitbucket, CodeBuild, and CodePipeline. So far, I have a webhook on Bitbucket that runs CodeBuild on a custom Docker image from ECR and uploads the artifacts to an Amazon S3 bucket with versioning enabled. The new upload triggers the CodePipeline via a CloudWatch event and, after a manual approval, runs another CodeBuild that deploys to a new S3 bucket; this works fine so far. The CodePipeline has 3 stages: Source (S3) -> Manual Approve -> CodeBuild.
The problem is: when I push multiple branches/new commits, the first CodeBuild runs and uploads artifacts to S3 with different versions, as expected. However, each upload should trigger its own CodePipeline execution. If I make 3 different pushes to the Bitbucket repository, there should be three pipeline executions waiting for manual approval to be deployed.
But this isn't the case.
I have three artifacts uploaded in S3 with different VersionIDs based on the commits:
1st commit's artifacts VersionID: OKIBabVQQX80aAuARAne8jnClsTpJGXu
2nd commit's artifacts VersionID: YNsgp9rodnAx7du1Te1OQD2UO0t5IZc
3rd commit's artifacts VersionID: VN7pwVS5zpYNTmzJrLbFGKBupogpgtHN
In CodePipeline:
Stage: Manual Approve is waiting for approval and has S3 Source Version: OKIBabVQQX80aAuARAne8jnClsTpJGXu
Stage: Source is triggered from S3 with VersionID: VN7pwVS5zpYNTmzJrLbFGKBupogpgtHN
Here I am completely missing the 2nd artifact (S3 VersionID: YNsgp9rodnAx7du1Te1OQD2UO0t5IZc).
I would expect all three artifacts to trigger the pipeline one after another so that I can deploy all 3 of my pushes.
Many many thanks!!!
When a pipeline execution starts, it runs a revision through every stage and action in the pipeline (see "Start a Pipeline Execution in CodePipeline" in the documentation).
The CodePipeline console shows, for each stage, the last revision that triggered that particular stage.
In your case you have:
Artifact1 triggered Source and Manual Approve.
Artifact2 triggered Source and is waiting to enter Manual Approve.
Artifact3 triggered Source and is waiting to enter Manual Approve.
Since Artifact3 came in after Artifact2, the Source stage shows it (the last trigger).
Artifact2 is not shown but is still waiting its turn. Once you finish with Artifact1 in the Manual Approve stage, Artifact2 will appear at Manual Approve.
After that, Artifact3 will move to the Manual Approve stage, and so on.
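If you want to confirm which revision each stage is currently holding, a quick check with boto3 (the pipeline name is a placeholder) looks roughly like this:

import boto3

codepipeline = boto3.client("codepipeline")

# Print the revision currently sitting at each stage/action of the pipeline.
state = codepipeline.get_pipeline_state(name="my-pipeline")
for stage in state["stageStates"]:
    for action in stage.get("actionStates", []):
        revision = action.get("currentRevision", {}).get("revisionId", "-")
        print(stage["stageName"], action["actionName"], revision)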
The benefits of this feature are not clear to me (I couldn't find any good documentation):
Is it just faster when you reuse the same zip for many Lambda functions, because you upload it only once and simply give the S3 link URL to each Lambda function?
If you use an S3 link, will all your Lambda functions automatically be updated with the latest code when you re-upload the zip file? In other words, is the zip file on S3 a "reference" that is used on each invocation of a Lambda function?
Thank you.
EDIT:
I have been asked "Why do you want the same code for multiple Lambda functions anyway?"
Because I use AWS Lambda with AWS API Gateway, I have one project containing all my handlers, which are the actual "endpoints" of my RESTful API.
EDIT #2:
I confirm that uploading a modified version of the zip file to S3 does not change the behavior of the existing Lambda functions.
If someone from AWS reads this: it would be great to have a kind of batch-update feature that updates a set of selected Lambda functions from one zip file on S3 in one click (or even an "automatic update" feature that detects when the file has been updated ;-)).
Say you have 50 handlers in one project and you modify something global that impacts all of them; currently you have to go through every Lambda function and update the zip file manually...
The code is imported from the zip to Lambda. It is exactly the same as uploading the zip file through the Lambda console or API. However, if your Lambda function is big (they say >10MB), they recommend uploading to S3 and then using the S3 import functionality because that is more stable than directly uploading from the Lambda page. Other than that, there is no benefit.
So for question 1: no. Why do you want the same code for multiple Lambda functions anyway?
Question 2: If you overwrite the zip you will not update the Lambda function code.
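If you do want several functions to pick up a newly uploaded zip, you currently have to call update-function-code for each of them yourself; a minimal boto3 sketch, where the function names and the S3 location are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Hypothetical list of functions that share the same deployment package.
FUNCTIONS = ["api-users", "api-orders", "api-billing"]

for name in FUNCTIONS:
    # Point each function at the zip that was just uploaded to S3.
    lambda_client.update_function_code(
        FunctionName=name,
        S3Bucket="my-deploy-bucket",
        S3Key="release/handlers.zip",
    )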
To add to other people's use cases, having the ability to update a Lambda function from S3 is extremely useful within an automated deployment / CI process.
The instructions under New Deployment Options for AWS Lambda include a simple Lambda function that can be used to copy a ZIP file from S3 to Lambda itself, as well as instructions for triggering its execution when a new file is uploaded.
As an example of how easy this can make development and deployment, my current workflow is:
I update my Node lambda application on my local machine, and git commit it to a remote repository.
A Jenkins instance picks up the commit, pulls down the appropriate files, adds them into a ZIP file and uploads this to an S3 bucket.
The LambdaDeployment function then automatically deploys this new version for me, without me needing to even leave my development environment.
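The deployment function itself boils down to something like the following sketch; the convention of naming the zip after the target function is my assumption for illustration, and the blog post has the full version:

import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    # Triggered by the S3 "object created" notification on the deployment bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Assumption: the zip is named <function-name>.zip.
        function_name = key.rsplit("/", 1)[-1].removesuffix(".zip")

        lambda_client.update_function_code(
            FunctionName=function_name,
            S3Bucket=bucket,
            S3Key=key,
        )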
To answer what I think is the essence of your question, AWS allows you to use S3 as the origin for your Lambda zip file because uploading large files via your browser can sometimes time out. Also, storing your code on S3 lets you keep it centrally, rather than on your computer, and I'm sure there is a CodeCommit tie-in there as well.
Using the S3 method of uploading your code to Lambda also allows you to upload larger files (AWS has a 10MB limit when uploading via web browser).
#!/bin/bash
cd /your/workspace
# Zip up the new code, excluding git metadata, binaries, and old zip files
zip -FSr yourzipfile.zip . -x '*.git*' '*bin/*' '*.zip'
# Push the new zip file to S3 (placeholder bucket) so CloudFormation's lambda CodeUri can point at it
aws s3 cp yourzipfile.zip s3://yourS3Bucket/yourzipfile.zip
# Update the Lambda function code directly from the new zip
aws lambda update-function-code --function-name arn:aws:lambda:us-west-2:YOURID:function:YOURFUNCTIONNAME --zip-file file://yourzipfile.zip
This depends on the AWS CLI being installed and an AWS profile being set up:
aws --profile yourProfileName configure
I am running a continuous code deployment with Jenkins that will automatically compile and upload binaries to S3 in parallel for multiple targets.
The final step in my deployment mechanism is to detect that all the binaries for a particular build have been uploaded, and then deploy them together.
S3 has event notifications that can trigger when objects have been pushed, but do they have anything more sophisticated that can trigger when multiple objects have been pushed?
Example:
Build machine on Windows uploads binary to S3.
Build machine on OS X uploads binary to S3.
S3 detects that both binaries are now uploaded and triggers an event.
Build machine takes both binaries and releases them together.
Right now the only solution I can think of is to set up AWS Lambda and have the event handler manually check for the existence of the other binary, which may not even be feasible if S3 has special race conditions.
Any ideas?
The short answer is no. There is no mechanism that would let you trigger an action only once all of the objects have been uploaded. There is no conditional notification, just simple per-object events.
But you can build something else. Create a DynamoDB table for build records and write a row to it when a build succeeds on any build machine, before you upload any files. Then give the row a separate attribute for each build target. Have S3 publish its notification to a Lambda, and have that Lambda look up and update the row; once all the attributes are in the desired state, that same Lambda can perform the release.
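A sketch of that Lambda, with the table name, key layout, and build targets invented for illustration:

import boto3

dynamodb = boto3.client("dynamodb")

EXPECTED_TARGETS = {"windows", "osx"}   # illustrative build targets
TABLE = "build-records"                 # hypothetical table name

def handler(event, context):
    for record in event["Records"]:
        # Assumption: object keys look like builds/<build_id>/<target>/binary.zip
        key = record["s3"]["object"]["key"]
        _, build_id, target, _ = key.split("/", 3)

        # Mark this target as uploaded on the build's row and read the row back.
        row = dynamodb.update_item(
            TableName=TABLE,
            Key={"build_id": {"S": build_id}},
            UpdateExpression="SET #t = :done",
            ExpressionAttributeNames={"#t": target},
            ExpressionAttributeValues={":done": {"S": "uploaded"}},
            ReturnValues="ALL_NEW",
        )["Attributes"]

        uploaded = {k for k, v in row.items() if v.get("S") == "uploaded"}
        if EXPECTED_TARGETS <= uploaded:
            release(build_id)

def release(build_id):
    pass  # kick off your release step for this build here

If both uploads can land at nearly the same moment, add a conditional update so the release only fires once.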
Amazon S3 is a "base" system upon which many things can be built (eg DropBox!). As such, the functionality of Amazon S3 is limited (but very scalable and reliable).
Thus, you'll have to build your own logic on top of Amazon S3 to implement your desired solution.
One option would be to trigger an AWS Lambda function when an object is created. This Lambda function could then implement any logic you desire, such as your step #3.
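For example, a minimal version of the existence check described in the question, with an invented bucket layout and key names:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

EXPECTED_KEYS = ["build/windows/app.zip", "build/osx/app.zip"]   # illustrative keys

def handler(event, context):
    bucket = event["Records"][0]["s3"]["bucket"]["name"]

    # Proceed only if every expected binary for this build is already present.
    for key in EXPECTED_KEYS:
        try:
            s3.head_object(Bucket=bucket, Key=key)
        except ClientError:
            return   # the other binary is not there yet; wait for its own event
    # Both binaries exist: kick off the release from here.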