CodeBuild with SAM CLI Cross-Account S3 issue

I have a multi-account setup at my current company. In Account B, I run a CodeBuild job whose buildspec uploads a transformed SAM template to an S3 bucket in Account A: the 'aws cloudformation package' step uploads the SAM build of the code to "account-a-bucket". The job also successfully uploads the transformed template, as defined in the artifacts section, to a bucket in Account B. The problem comes when trying to deploy the template in Account C. CodeBuild creates the Lambda code artifact during the build step and writes the object to a bucket outside its own account, and when you look at the actual Lambda artifact deposited in the bucket (s3://account-a-bucket/e309uofijokasjdfokajsllsk), the object permissions show that it does not belong to any account. Therefore no one can access it. How do I make CodeBuild create the object in the other account's bucket so that it is owned by an account?
To note as well, I have already configured the bucket policy on account-a-bucket to grant access to all accounts in my organization, and I have added the accounts' canonical IDs for permissions. So I know for a fact that the Lambda code artifact is created with no canonical account owner, whereas the artifact that is uploaded via the artifacts section of the buildspec is created under the account's canonical ID.
I know that if, for example, I were uploading the object myself in the build phase with something like
aws s3api copy-object --copy-source source_awsexamplebucket/myobject --bucket destination_awsexamplebucket --key myobject --acl bucket-owner-full-control
I could use --acl bucket-owner-full-control, but that flag is not supported by aws cloudformation package.
env:
  variables:
    RELEASE_NUMBER: ""
    MINOR_NUMBER: "value"
phases:
  install:
    runtime-versions:
      docker: 18
  build:
    commands:
      - echo Building Release Version $RELEASE_NUMBER
      - pip install --user aws-sam-cli
      - USER_BASE_PATH=$(python -m site --user-base)
      - export PATH=$PATH:$USER_BASE_PATH/bin
      - sam build -t template.yaml
      - aws cloudformation package --template-file template.yaml --s3-bucket account-a-bucket --output-template-file TransformedTemplate.yaml
artifacts:
  files:
    - TransformedTemplate.yaml
  discard-paths: yes

The 'aws cloudformation package' command does not have an '--acl' option, which is the cause of the issue you are facing. This is an open issue [1], but one that has not gotten much traction.
For now, I am thinking you can parse the S3 object key out of 'TransformedTemplate.yaml' and then run the following command in your buildspec to put an ACL on the S3 object:
$ aws s3api put-object-acl --bucket account-a-bucket --key keyname --acl bucket-owner-full-control
For parsing a JSON file, 'jq' is probably the best utility. Since you are using YAML, yq [2] seems to be an option, though I have never tested it myself.
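For illustration, here is a minimal post_build sketch, under the assumption that the packaged template contains a line like 'CodeUri: s3://account-a-bucket/<hash>'; the grep/sed parsing is just a hypothetical stand-in for jq/yq:

  post_build:
    commands:
      # Pull the first s3:// URI out of the packaged template and strip the bucket prefix (assumed layout)
      - ARTIFACT_KEY=$(grep -o 's3://account-a-bucket/[a-zA-Z0-9]*' TransformedTemplate.yaml | head -n 1 | sed 's|s3://account-a-bucket/||')
      # Grant the bucket owner (Account A) full control over the uploaded code artifact
      - aws s3api put-object-acl --bucket account-a-bucket --key "$ARTIFACT_KEY" --acl bucket-owner-full-control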
Ref:
[1] https://github.com/aws/aws-cli/issues/2681
[2] https://yq.readthedocs.io/en/latest/

Related

AWS Cloudformation | How to Configure Lambda to Use Latest Code in S3 Bucket

Tests3bucketLambda:
  Type: "AWS::Lambda::Function"
  Properties:
    Code:
      S3Bucket: TestS3Bucket
      S3Key: Tests3.zip
    FunctionName: "test-lambda-function"
    Handler: lambda-function-s3.lambda_handler
    Role: !GetAtt LambdaExecutionRole.Arn
    Runtime: python3.6
Issue: I update the code, which is zipped and uploaded to the S3 bucket during the CodeBuild run, but the change is not deployed to the existing Lambda function.
If you deploy new code to an object with the same key, CloudFormation will not treat it as a change, since the template itself hasn't been modified. There are a few ways to mitigate this.
Use bucket versioning and provide the object version along with the object key: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html
Code:
  S3Bucket: TestS3Bucket
  S3Key: Tests3.zip
  S3ObjectVersion: blablabla....
Modify your object key on each deployment, for example by appending a timestamp (see the sketch after this list)
Code:
  S3Bucket: TestS3Bucket
  S3Key: Tests3_2021-05-06T17:15:55+00:00.zip
Use automated tools like Terraform or AWS CDK to take care of these things
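As a rough sketch of the second option, assuming the key is passed in as a template parameter at deploy time (the LambdaCodeKey parameter name and stack name are made up; the role reference comes from the question's template):

Parameters:
  LambdaCodeKey:
    Type: String
Resources:
  Tests3bucketLambda:
    Type: "AWS::Lambda::Function"
    Properties:
      Code:
        S3Bucket: TestS3Bucket
        S3Key: !Ref LambdaCodeKey
      Handler: lambda-function-s3.lambda_handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Runtime: python3.6

The matching upload and deploy commands could then look like this:

# Upload the artifact under a timestamped key, then deploy with that key
CODE_KEY="Tests3_$(date +%s).zip"
aws s3 cp Tests3.zip "s3://TestS3Bucket/$CODE_KEY"
aws cloudformation deploy --stack-name test-lambda-stack \
  --template-file template.yaml \
  --parameter-overrides LambdaCodeKey="$CODE_KEY" \
  --capabilities CAPABILITY_IAM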
If you want the Lambda function to automatically pick up the latest code, that is not possible using CloudFormation alone.
To do that you could sync the code file to an S3 bucket and then try the approach mentioned here: How can AWS Lambda pick the latest versions of the script from S3. I was able to achieve it and have described the solution there.
Expanding on Oleksii's answer, I'll just add that I use a Makefile and an S3 bucket with versioning to handle this issue. A version-enabled S3 bucket creates a new object and a new version number every time a modified file is uploaded (keeping all the old versions and their version numbers). If you don't want a dependency on make in your build/deploy process this won't be of interest to you.
Make can examine the filesystem and look for updated files, triggering a target action based on an updated file (as a dependency).
Here's a Makefile for a simple stack with one lambda function. The relevant parts of the Cloudformation (CFN) file will be shown below.
.DEFAULT_GOAL := deploy

# Bucket must exist and be versioning-enabled
lambda_bucket = lambda_uploads

# Note: recipe lines below must be indented with tab characters
deploy: lambda.zip
        @set -e ;\
        lambda_version=$$(aws s3api list-object-versions \
          --bucket $(lambda_bucket) --prefix lambda.zip \
          --query 'Versions[?IsLatest == `true`].VersionId | [0]' \
          --output text) ;\
        echo "Running aws cloudformation deploy with ZIP version $$lambda_version..." ;\
        aws cloudformation deploy --stack-name zip-lambda \
          --template-file test.yml \
          --parameter-overrides ZipVersionId=$$lambda_version \
          --capabilities CAPABILITY_NAMED_IAM

lambda.zip: lambda/lambda_func.js
        @zip -r lambda.zip lambda
        @aws s3 cp lambda.zip s3://$(lambda_bucket)
The deploy target has a dependency on the lambda.zip target, which itself has a dependency on lambda_func.js. This means that the rule for lambda.zip must be up to date before the rule for deploy can be run.
So, if lambda_func.js has a timestamp newer than the lambda.zip file, an updated zip file is created and uploaded. If not, the rule is not executed because the lambda function has not been updated.
Now the deploy rule can be run. It:
Uses the AWS CLI to get the version number of the latest (or newest) version of the zip file.
Passes that version number as a parameter to Cloudformation as it deploys the stack, again using the AWS CLI.
Some quirks in the Makefile:
The backslashes and semicolons are required to run the deploy rule as one shell invocation. This is needed to capture the lambda_version variable for use when deploying the stack.
The --query bit is an AWS CLI capability used to extract information from the JSON data returned by the command. jq could also be used here.
The relevant parts of the Cloudformation (YAML) file look like this:
AWSTemplateFormatVersion: 2010-09-09
Description: Test new lambda ZIP upload
Parameters:
  ZipVersionId:
    Type: String
Resources:
  ZipLambdaRole: ...
  ZipLambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: zip-lambda
      Role: !GetAtt ZipLambdaRole.Arn
      Runtime: nodejs16.x
      Handler: index.handler
      Code:
        S3Bucket: lambda_uploads
        S3Key: lambda.zip
        S3ObjectVersion: !Ref ZipVersionId
      MemorySize: 128
      Timeout: 3
The zip file is uniquely identified by S3Bucket, S3Key, and S3ObjectVersion. Note that, and this is important, if the zip file was not updated (the version ID remains the same as in previous deploys), CloudFormation will not generate a change set; it requires a new version ID to do that. This is the desired behavior: there is no new deploy unless the Lambda code has been updated.
Finally you'll probably want to put a lifecycle policy on the S3 bucket so that old versions of the zip file are periodically deleted.
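For example, a hedged sketch of such a lifecycle rule via the CLI (the rule ID and the 30-day retention period are arbitrary placeholders):

aws s3api put-bucket-lifecycle-configuration --bucket lambda_uploads \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "expire-old-lambda-zips",
        "Filter": { "Prefix": "lambda.zip" },
        "Status": "Enabled",
        "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
      }
    ]
  }'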
These answers to other questions informed this answer.
This is a bit old, but it needs a concrete answer for those who are starting out.
Oleksii's answer is a correct guideline. However, the way to implement option 2 would be as follows.
I used Java, but the same logic can apply to Python too.
In your case, imagine the CloudFormation template for the Lambda function that you pasted is named cloud_formation_lambda.yml.
Now, in the CodeBuild stage where you prepare the artifact you mention (Tests3 in your case), prepare it with a unique identifier appended, such as the epoch.
Then all you need to do, in either the build or post_build phase, is use a couple of simple Linux commands to accommodate those name changes:
Rename the built artifact to append the unique value, such as the epoch.
Use a sed command to replace the occurrences of Tests3 in your CloudFormation template.
The buildspec.yml that implements this is as follows:
phases:
  install:
    runtime-versions:
      java: corretto17
  build:
    commands:
      - echo $(date +%s) > epoch.txt
      - mvn package
  post_build:
    commands:
      - mv target/Tests3.jar target/Tests3-$(cat epoch.txt).jar
      - sed -i "s/Tests3.jar/Tests3-$(cat epoch.txt).jar/g" cloud_formation_lambda.yml
artifacts:
  files:
    # glob pattern, since shell substitution is not evaluated in the artifacts section
    - target/Tests3-*.jar
    - cloud_formation_lambda.yml

How can I upload build artifacts to s3 bucket from codepipeline?

I have a CodePipeline which triggers a few CodeBuild projects in different stages. In my CodeBuild project I have this configuration:
#codebuild.yml
Artifacts:
  Type: CODEPIPELINE
...
Source:
  Type: CODEPIPELINE
  BuildSpec: buildspec.yml
# buildspec.yml
version: 0.2
phases:
  ...
artifacts:
  name: test-result
  files:
    - '**/*'
    - '*'
In this configuration the artifact is specified and CODEPIPELINE is used as the artifact type. So, in CodePipeline, how can I upload the artifacts to an S3 bucket?
What I can think of is to write another CodeBuild project and use the aws s3 command line to upload the files, but that is too manual. Is there an automatic way to do the job?
The build artifact includes test results, and I'd like to upload the test results regardless of whether a previous stage failed or not. Is it possible to achieve this in CodePipeline?
how can I upload them to s3 bucket?
There are two ways. One you already pointed out: use a CodeBuild action (the same one or a different one) to copy files to S3 using the AWS CLI.
The second is to use the S3 deploy action. This allows you to deploy a zipped or unzipped artifact to your destination bucket, as sketched below.
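For reference, a rough sketch of what the S3 deploy action could look like in a CloudFormation-defined pipeline; the bucket name is a placeholder, and the input artifact name must match whatever your build action declares as its output artifact:

- Name: Deploy
  Actions:
    - Name: UploadTestResults
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: S3
        Version: '1'
      InputArtifacts:
        - Name: test-result
      Configuration:
        BucketName: my-destination-bucket
        Extract: 'true'
      RunOrder: 1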

CodeBuild upload build artifact to S3 with ACL

I have 2 AWS accounts. Let's say A and B.
Account A uses CodeBuild to build and upload artifacts to an S3 bucket owned by B. Account B has set an ACL on the bucket in order to give write permissions to A.
The artifact file is successfully uploaded to the S3 bucket. However, account B doesn't have any permission over the file, since the file is owned by A.
Account A can change the ownership by running
aws s3api put-object-acl --bucket bucket-name --key key-name --acl bucket-owner-full-control
But this has to be run manually after every build from account A. How can I grant permissions to account B through the CodeBuild procedure? Or how can account B get around this ownership permission error?
CodeBuild starts automatically via webhooks, and my buildspec is this:
version: 0.2
env:
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      - echo Entered the install phase...
  build:
    commands:
      - echo Entered the build phase...
  post_build:
    commands:
      - echo Entered the post_build phase...
artifacts:
  files:
    - 'myFile.txt'
CodeBuild does not natively support writing artifacts to a different account, as it does not set a proper ACL on the cross-account object. This is the reason the following limitation is called out in the CodePipeline documentation:
Cross-account actions are not supported for the following action types:
Jenkins build actions
CodeBuild build or test actions
https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html
One workaround is setting the ACL on the artifact yourself in CodeBuild:
version: 0.2
phases:
  post_build:
    commands:
      - aws s3api list-objects --bucket testingbucket --prefix CFNtest/OutputArti >> $CODEBUILD_SRC_DIR/objects.json
      - |
        for i in $(jq -r '.Contents[]|.Key' $CODEBUILD_SRC_DIR/objects.json); do
          echo $i
          aws s3api put-object-acl --bucket testingbucket --key $i --acl bucket-owner-full-control
        done
I did it using aws cli commands from the build phase.
version: 0.2
phases:
  build:
    commands:
      - mvn install...
      - aws s3 cp my-file s3://bucketName --acl bucket-owner-full-control
I am using the build phase, since post_build will be executed even if the build was not successful.
edit: updated answer with a sample.

Codepipeline: Insufficient permissions Unable to access the artifact with Amazon S3 object key

Hello, I created a CodePipeline project with the following configuration:
Source code in S3, pulled from Bitbucket.
Build with CodeBuild, generating a Docker image and storing it in an Amazon ECR repository.
Deployment provider Amazon ECS.
The whole process works fine until it tries to deploy; for some reason I am getting the following error during deployment:
Insufficient permissions Unable to access the artifact with Amazon S3
object key 'FailedScanSubscriber/MyAppBuild/Wmu5kFy' located in the
Amazon S3 artifact bucket 'codepipeline-us-west-2-913731893217'. The
provided role does not have sufficient permissions.
During the build phase, it is even able to push a new Docker image to the ECR repository.
I tried everything: changed IAM roles and policies, added full access to S3, and even set the S3 bucket as public; nothing worked. I am out of options. If someone could help me that would be wonderful; I have little experience with AWS, so any help is appreciated.
I was able to find a solution. The real issue is that when the deployment provider is set to Amazon ECS, we need to generate an output artifact indicating the name of the task definition and the image URI, for example:
post_build:
  commands:
    - printf '[{"name":"your.task.definition.name","imageUri":"%s"}]' $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json
This happens when AWS CodeDeploy cannot find the build artifact from AWS CodeBuild. If you go into the S3 bucket and check the path, you would actually see that the artifact object is NOT THERE!
Even though the error talks about a permission issue, it can happen due to the absence of the artifact object.
Solution: properly configure the artifacts section in buildspec.yml, and configure the AWS CodePipeline stages properly, specifying input and output artifact names.
artifacts:
  files:
    - '**/*'
  base-directory: base_dir
  name: build-artifact-name
  discard-paths: no
Refer to this article: https://medium.com/@shanikae/insufficient-permissions-unable-to-access-the-artifact-with-amazon-s3-247f27e6cdc3
For me the issue was that my CodeBuild step was encrypting the artifacts using the default AWS-managed S3 key.
My deploy step uses a cross-account role, so it couldn't retrieve the artifact. Once I changed the CodeBuild encryption key to my CMK, as it should have been originally, my deploy step succeeded.
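For illustration, in a CloudFormation-defined CodeBuild project the key is set via the EncryptionKey property; the ARN below is a placeholder and the other required properties are elided:

MyBuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    # Source, Artifacts, Environment and ServiceRole omitted for brevity
    EncryptionKey: arn:aws:kms:us-west-2:111111111111:key/11111111-2222-3333-4444-555555555555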

Importing AWS information from CloudFormation to CodeBuild

I have a pipeline in AWS with CodeStar, CodeBuild, CloudFormation, etc.
I am trying to figure out how to get information from the CloudFormation step returned to the CodeBuild step. Let me break it down:
I have a buildspec.yml for CodeBuild
# buildspec.yml
...
phases:
  ...
  build:
    commands:
      - aws cloudformation package --region $REGION --template template.yml --s3-bucket $S3_BUCKET --output-template $OUTPUT_TEMPLATE
The above kicks off a CloudFormation build using our template.yml
# template.yml
...
S3BucketAssets:
  Type: AWS::S3::Bucket
  ...
At this point, it creates a unique name for an S3 bucket. Awesome. Now, for step 2 in my buildspec.yml for CodeBuild, I want to push items to the S3 bucket just created by the CloudFormation template. BUT, I don't know how to get the dynamically created name of the S3 bucket out of the CloudFormation template. I want something similar to:
# buildspec.yml
...
phases:
  ...
  build:
    commands:
      # this is the same step as shown above
      - aws cloudformation package --region $REGION --template template.yml --s3-bucket $S3_BUCKET --output-template $OUTPUT_TEMPLATE
      # this is the new step
      - aws s3 sync dist_files/ s3://{NAME_OF_THE_NEW_S3_BUCKET}
How can I accomplish getting the dynamically named S3 bucket so that I can push to it?
I am aware that within a CloudFormation template you can reference the S3 bucket with something like !GetAtt [ClientWebAssets, WebsiteURL]. But I do not know how to get that information out of the CloudFormation template and back into the CodeBuild step.
You could move to using CodePipeline. Stage one would be deploying the application via CloudFormation, with an output artifact being the stack creation output (sketched below).
http://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements
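As a hedged sketch, the CloudFormation deploy action in such a pipeline can write the stack outputs to a JSON file in its output artifact via OutputFileName, assuming template.yml declares the bucket name in an Outputs section; all names below are placeholders:

- Name: DeployInfra
  Actions:
    - Name: CreateStack
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: '1'
      Configuration:
        ActionMode: CREATE_UPDATE
        StackName: assets-stack
        TemplatePath: SourceOutput::template.yml
        OutputFileName: stack-outputs.json
        Capabilities: CAPABILITY_IAM
        RoleArn: !GetAtt CloudFormationRole.Arn
      InputArtifacts:
        - Name: SourceOutput
      OutputArtifacts:
        - Name: StackOutputs
      RunOrder: 1

A later CodeBuild action that takes StackOutputs as an input artifact could then read the bucket name with something like jq -r '.AssetsBucketName' stack-outputs.json, assuming the template exports an output named AssetsBucketName.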