I have two AWS accounts, let's say A and B.
Account A uses CodeBuild to build and upload artifacts to an S3 bucket owned by B. Account B has set an ACL on the bucket to give write permission to A.
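For reference, the grant on B's side looks roughly like this (a sketch; the canonical IDs are placeholders):
aws s3api put-bucket-acl --bucket bucket-name --grant-full-control id=CANONICAL_ID_OF_ACCOUNT_B --grant-write id=CANONICAL_ID_OF_ACCOUNT_A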
The artifact file is uploaded to the S3 bucket successfully. However, account B doesn't have any permission on the file, since the file is owned by A.
Account A can fix this by granting the bucket owner full control over the object:
aws s3api put-object-acl --bucket bucket-name --key key-name --acl bucket-owner-full-control
But this has to be run manually after every build from account A. How can I grant the permissions to account B as part of the CodeBuild procedure? Or how can account B get around this ownership problem?
The CodeBuild project starts automatically via webhooks, and my buildspec is this:
version: 0.2
env:
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      - echo Entered the install phase...
  build:
    commands:
      - echo Entered the build phase...
  post_build:
    commands:
      - echo Entered the post_build phase...
artifacts:
  files:
    - 'myFile.txt'
CodeBuild does not natively support writing artifacts to a different account, as it does not set the proper ACL on the cross-account object. This is why the following limitation is called out in the CodePipeline documentation:
Cross-account actions are not supported for the following action types:
Jenkins build actions
CodeBuild build or test actions
https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html
One workaround is to set the ACL on the artifacts yourself from CodeBuild:
version: 0.2
phases:
  post_build:
    commands:
      - aws s3api list-objects --bucket testingbucket --prefix CFNtest/OutputArti >> $CODEBUILD_SRC_DIR/objects.json
      - |
        for i in $(jq -r '.Contents[]|.Key' $CODEBUILD_SRC_DIR/objects.json); do
          echo $i
          aws s3api put-object-acl --bucket testingbucket --key $i --acl bucket-owner-full-control
        done
I did it using AWS CLI commands from the build phase:
version: 0.2
phases:
  build:
    commands:
      - mvn install...
      - aws s3 cp my-file s3://bucketName --acl bucket-owner-full-control
I am using the build phase, since post_build will be executed even if the build was not successful.
edit: updated answer with a sample.
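If you would rather keep the upload in post_build, one option (a sketch, not something I have verified here) is to guard the copy on the built-in CODEBUILD_BUILD_SUCCEEDING variable, which is 1 only while the build is passing:
version: 0.2
phases:
  post_build:
    commands:
      # Only hand the object over to the bucket owner if the build phase actually succeeded
      - |
        if [ "$CODEBUILD_BUILD_SUCCEEDING" = "1" ]; then
          aws s3 cp my-file s3://bucketName --acl bucket-owner-full-control
        fi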
I am trying to access an S3 object's user-defined metadata inside CodeBuild and set it as an environment variable.
As per the docs, the S3 source action only outputs ETag and VersionId, so I am assuming that user-defined metadata is not passed to CodePipeline by default when S3 is the source action.
I am thinking of using an AWS CLI command and then setting the result as an environment variable for CodeBuild. Is there a better way?
aws s3api head-object --bucket bucket-name --profile profile --key xxxx.zip
You are right: the only way to get the object metadata is a head-object CLI call. You can use the buildspec below in your CodeBuild stage to get the object metadata for a pipeline with an S3 source action.
version: 0.2
phases:
  build:
    commands:
      - BUCKET_PATH=$(echo $CODEBUILD_SOURCE_VERSION | cut -d ':' -f 6)
      - BUCKET=$(echo $BUCKET_PATH | cut -d '/' -f 1)
      - KEY=$(echo $BUCKET_PATH | cut -d '/' -f 2,3,4)
      - aws s3api head-object --bucket $BUCKET --key $KEY --query Metadata
Please note that updating metadata on the S3 source object will also trigger a pipeline that uses the S3 source action.
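To actually read a specific metadata value into a variable for later commands, you could extend the commands above like this (a sketch; 'mykey' is a placeholder for whatever user-defined metadata key is set on the object):
      - MY_METADATA=$(aws s3api head-object --bucket $BUCKET --key $KEY --query 'Metadata.mykey' --output text)
      - echo "Metadata value available to later commands as $MY_METADATA"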
I am trying to deploy and update code in multiple Lambdas at the same time, but when I push to my branch and CodeBuild deploys, I get the following error:
An error occurred (InvalidParameterValueException) when calling the UpdateFunctionCode operation: Unzipped size must be smaller than 350198 bytes
[Container] 2021/04/24 00:09:31 Command did not exit successfully aws lambda update-function-code --function-name my_lambda_03 --zip-file fileb://my_lambda_03.zip exit status 254
[Container] 2021/04/24 00:09:31 Phase complete: POST_BUILD State: FAILED
[Container] 2021/04/24 00:09:31 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: aws lambda update-function-code --function-name my_lambda_03 --zip-file fileb://my_lambda_03.zip. Reason: exit status 254
This is the buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.x
    commands:
      - echo "Installing dependencies..."
  build:
    commands:
      - echo "Zipping all my functions....."
      - cd my_lambda_01/
      - zip -r9 ../my_lambda_01.zip .
      - cd ..
      - cd my_lambda_02/
      - zip -r9 ../my_lambda_02.zip .
      - cd ..
      - cd my_lambda_03/
      - zip -r9 ../my_lambda_03.zip .
      ...
      - cd my_lambda_09/
      - zip -r9 ../my_lambda_09.zip .
      - cd ..
  post_build:
    commands:
      - echo "Updating all lambda functions"
      - aws lambda update-function-code --function-name my_lambda_01 --zip-file fileb://my_lambda_01.zip
      - aws lambda update-function-code --function-name my_lambda_02 --zip-file fileb://my_lambda_02.zip
      - aws lambda update-function-code --function-name my_lambda_03 --zip-file fileb://my_lambda_03.zip
      ...
      - aws lambda update-function-code --function-name my_lambda_09 --zip-file fileb://my_lambda_09.zip
      - echo "Done"
Thanks for any help.
The error means that one of your Lambda archives is too big, although 350198 bytes seems rather low and doesn't match the advertised limits.
AWS limits the size of deployment packages uploaded directly in the request, so it may be better to upload the archive to S3 first and then run update-function-code against it. Doing this should allow a larger archive, up to the 250 MB (unzipped) limit:
- aws s3 cp my_lambda_01.zip s3://my-deployment-bucket/my_lambda_01.zip
- aws lambda update-function-code --function-name my_lambda_01 --s3-bucket my-deployment-bucket --s3-key my_lambda_01.zip
Another option would be to try to reduce your archive size. What kinds of data or libraries are you using? Make sure you aren't including extraneous files in your Lambda archives (virtual environment files, temp build files, tests and test data, etc.). Can some things be offloaded to S3 and loaded into memory or onto disk at runtime?
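For example, zip's -x flag can keep common clutter out of each archive (the patterns below are only illustrative):
      - zip -r9 ../my_lambda_01.zip . -x "tests/*" "*.pyc" "__pycache__/*" ".git/*"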
If what you are deploying genuinely needs to be that large, you'll need to package it as a Docker image. Container image support for Lambda was released at re:Invent 2020 and allows images up to 10 GB.
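Once the function is image-based, the update step would look roughly like this (a sketch; the ECR repository URI is a placeholder):
      - aws lambda update-function-code --function-name my_lambda_03 --image-uri 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-lambda-repo:latest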
References:
https://acloudguru.com/blog/engineering/packaging-aws-lambda-functions-as-container-images
https://aws.amazon.com/blogs/compute/optimizing-lambda-functions-packaged-as-container-images/
I am new to YAML files. I want to append a timestamp to the S3 folder every time so that each build is unique. In post_build I append the timestamp to the S3 path as follows: s3://${S3_BUCKET}/Inhouse/${'date'}. When CodePipeline is triggered, all files are stored to the bucket's Inhouse folder, but the folder with the timestamp is not getting generated.
version: 0.2
env:
  variables:
    S3_BUCKET: Inhouse-market-dev
phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - npm install
      - npm install -g @angular/cli
  build:
    commands:
      - echo Build started on `date`
  post_build:
    commands:
      - aws s3 cp . s3://${S3_BUCKET}/Inhouse/${'date'} --recursive --acl public-read --cache-control "max-age=${CACHE_CONTROL}"
      - echo Build completed on `date`
I think your use of ${'date'} is incorrect. I would recommend trying the following to actually get a Unix timestamp:
post_build:
  commands:
    - current_timestamp=$(date +"%s")
    - aws s3 cp . s3://${S3_BUCKET}/Inhouse/${current_timestamp} --recursive --acl public-read --cache-control "max-age=${CACHE_CONTROL}"
    - echo Build completed on `date` which is ${current_timestamp}
I have a multi-account setup at my current company. I am using CodeBuild in account B, running a buildspec that uploads a transformed SAM template to an S3 bucket in account A. So aws cloudformation package uploads the SAM build of the code to account-a-bucket, and the job also successfully uploads the transformed template, as defined in the artifacts section, to a bucket in account B. The problem comes when trying to deploy the template in account C. Because CodeBuild creates the Lambda code artifact in the build step and writes the object to a bucket outside its own account, when you look at the actual Lambda artifact deposited in the bucket, e.g. s3://account-a-bucket/e309uofijokasjdfokajsllsk, you will see in the object permissions that it does not belong to any account, so nobody can access it. How do I make CodeBuild create the object in the other account's bucket so that it is owned by an account?
To note as well, I have already configured the bucket policy on account-a-bucket to grant access to all accounts in my organization, and additionally added the accounts' canonical IDs for permissions. So I know for a fact that the Lambda artifact is created with no canonical account owner, whereas the artifact that is uploaded (in the artifacts section of the buildspec) is created under the account's canonical ID.
I know that if, for example, I were uploading in the build phase with
aws s3api copy-object --bucket destination_awsexamplebucket --key source_awsexamplebucket/myobject --acl bucket-owner-full-control
I could use --acl bucket-owner-full-control, but that is not a supported flag for aws cloudformation package.
env:
  variables:
    RELEASE_NUMBER: ""
    MINOR_NUMBER: "value"
phases:
  install:
    runtime-versions:
      docker: 18
  build:
    commands:
      - echo Building Release Version $RELEASE_NUMBER
      - pip install --user aws-sam-cli
      - USER_BASE_PATH=$(python -m site --user-base)
      - export PATH=$PATH:$USER_BASE_PATH/bin
      - sam build -t template.yaml
      - aws cloudformation package --template-file template.yaml --s3-bucket account-a-bucket --output-template-file TransformedTemplate.yaml
artifacts:
  files:
    - TransformedTemplate.yaml
  discard-paths: yes
The 'aws cloudformation package' command does not have an "--acl" option, which is the cause of the issue you are facing. This is an open issue [1], but one that has not gotten any traction.
For now, I am thinking you can parse out the S3 object key from 'TransformedTemplate.yaml', and then run the following command in your buildspec to put an ACL on the S3 object:
$ aws s3api put-object-acl --bucket account-a-bucket --key keyname --acl bucket-owner-full-control
For parsing a JSON file, 'jq' is probably the best utility. Since you are using YAML, yq [2] seems to be an option, though I have never tested it myself.
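As a rough sketch of that idea (using plain grep instead of yq, since the packaged template simply embeds s3:// URIs; adjust the bucket name and pattern to your template):
  post_build:
    commands:
      - |
        # Pull every s3://account-a-bucket/<key> reference out of the packaged template
        for uri in $(grep -oE 's3://account-a-bucket/[A-Za-z0-9]+' TransformedTemplate.yaml | sort -u); do
          key=${uri#s3://account-a-bucket/}
          aws s3api put-object-acl --bucket account-a-bucket --key "$key" --acl bucket-owner-full-control
        done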
Ref:
[1] https://github.com/aws/aws-cli/issues/2681
[2] https://yq.readthedocs.io/en/latest/
I am trying to run the following command:
aws s3 cp --region ap-south-1 --acl public-read my.exe s3://bucket/binaries/my.exe
upload failed: ./my.exe to s3://bucket/binaries/my.exe A client error (InvalidRequest) occurred when calling the PutObject operation: You are attempting to operate on a bucket in a region that requires Signature Version 4. You can fix this issue by explicitly providing the correct region location using the --region argument, the AWS_DEFAULT_REGION environment variable, or the region variable in the AWS CLI configuration file. You can get the bucket's location by running "aws s3api get-bucket-location --bucket BUCKET".
How do I fix this error? I also tried
AWS_DEFAULT_REGION=ap-south-1 aws s3 cp --acl public-read my.exe s3://bucket/binaries/my.exe
but with no luck.
# aws --version
aws-cli/1.10.28 Python/2.7.9 Linux/3.16.0-4-amd64 botocore/1.4.19
It started working after upgrading the AWS CLI:
pip install --upgrade awscli
aws --version
aws-cli/1.10.43 Python/2.7.9 Linux/3.16.0-4-amd64 botocore/1.4.33
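If upgrading is not an option in your environment, another thing worth trying (untested here) is forcing Signature Version 4 in the CLI configuration before the copy:
aws configure set default.s3.signature_version s3v4
aws s3 cp --region ap-south-1 --acl public-read my.exe s3://bucket/binaries/my.exe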