AWS CodeBuild: Unzipped size must be smaller than 350198 bytes

I am trying to deploy and update code in multiple Lambdas at the same time, but when I push to my branch and CodeBuild runs, I get the following error:
An error occurred (InvalidParameterValueException) when calling the UpdateFunctionCode operation: Unzipped size must be smaller than 350198 bytes

[Container] 2021/04/24 00:09:31 Command did not exit successfully aws lambda update-function-code --function-name my_lambda_03 --zip-file fileb://my_lambda_03.zip exit status 254
[Container] 2021/04/24 00:09:31 Phase complete: POST_BUILD State: FAILED
[Container] 2021/04/24 00:09:31 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: aws lambda update-function-code --function-name my_lambda_03 --zip-file fileb://my_lambda_03.zip. Reason: exit status 254
This is the buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.x
    commands:
      - echo "Installing dependencies..."
  build:
    commands:
      - echo "Zipping all my functions....."
      - cd my_lambda_01/
      - zip -r9 ../my_lambda_01.zip .
      - cd ..
      - cd my_lambda_02/
      - zip -r9 ../my_lambda_02.zip .
      - cd ..
      - cd my_lambda_03/
      - zip -r9 ../my_lambda_03.zip .
      ...
      - cd my_lambda_09/
      - zip -r9 ../my_lambda_09.zip .
      - cd ..
  post_build:
    commands:
      - echo "Updating all lambda functions"
      - aws lambda update-function-code --function-name my_lambda_01 --zip-file fileb://my_lambda_01.zip
      - aws lambda update-function-code --function-name my_lambda_02 --zip-file fileb://my_lambda_02.zip
      - aws lambda update-function-code --function-name my_lambda_03 --zip-file fileb://my_lambda_03.zip
      ...
      - aws lambda update-function-code --function-name my_lambda_09 --zip-file fileb://my_lambda_09.zip
      - echo "Done"
Thanks for any help.

The error means that one of your Lambda archives is too big. That said, 350198 bytes seems oddly low and doesn't match up with the advertised limits.
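As a sanity check, you can ask Lambda what limits your account actually reports; the CodeSizeUnzipped value is the unzipped ceiling the service enforces:

aws lambda get-account-settings --query 'AccountLimits'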
AWS limits the size of deployment packages uploaded directly in the API request, so it is better to upload the archive to S3 first and point update-function-code at the S3 object. Deploying from S3 allows an unzipped package of up to 250 MB.
- aws s3 cp my_lambda_01.zip s3://my-deployment-bucket/my_lambda_01.zip
- aws lambda update-function-code --function-name my_lambda_01 --s3-bucket my-deployment-bucket --s3-key my_lambda_01.zip
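If you'd rather not repeat that pair of commands nine times, the same pattern can be wrapped in a loop in post_build (a sketch that reuses the hypothetical my-deployment-bucket above; fill in the full list of your function names):

- |
  for fn in my_lambda_01 my_lambda_02 my_lambda_03 my_lambda_09; do   # list all nine function names here
    aws s3 cp "${fn}.zip" "s3://my-deployment-bucket/${fn}.zip"
    aws lambda update-function-code --function-name "$fn" --s3-bucket my-deployment-bucket --s3-key "${fn}.zip"
  done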
Another option is to reduce your archive size. What kinds of data or libraries are you bundling? Make sure you aren't including extraneous files in your Lambda archives (virtual environment files, temp build files, tests and test data, etc.). Can some things be offloaded to S3 and loaded into memory or onto disk at runtime?
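For example, zip's -x flag can keep the usual junk out of each archive (the exclude patterns below are only illustrative; adjust them to your project layout):

zip -r9 ../my_lambda_01.zip . -x "*.pyc" -x "__pycache__/*" -x "tests/*" -x ".venv/*"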
If what you are deploying genuinely needs to be very large, you'll need to package it as a container image. Container image support was released at re:Invent 2020 and allows images up to 10 GB.
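A rough sketch of that flow with the CLI (the account ID, region, and ECR repository my-lambda-repo are placeholders; the function must already have been created from a container image for update-function-code to accept --image-uri):

docker build -t my-lambda-repo .
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com
docker tag my-lambda-repo:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-lambda-repo:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-lambda-repo:latest
aws lambda update-function-code --function-name my_lambda_03 --image-uri 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-lambda-repo:latest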
References:
https://acloudguru.com/blog/engineering/packaging-aws-lambda-functions-as-container-images
https://aws.amazon.com/blogs/compute/optimizing-lambda-functions-packaged-as-container-images/

Related

Error while creating lambda function via CLI using gitlab CI (Error parsing parameter '--zip-file': Unable to load paramfile)

I am deploying AWS handler scripts as zip files to an S3 bucket. Now I want to use the deployed zip file in a Lambda function. I am doing all of this via GitLab CI.
I have the following CI:
image: ubuntu:18.04

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  AWS_DEFAULT_REGION: eu-central-1
  S3_BUCKET: $BUCKET_TRIAL

stages:
  - deploy

.before_script_template: &before_script_definition
  stage: deploy
  before_script:
    - apt-get -y update
    - apt-get -y install python3-pip python3.7 zip
    - python3.7 -m pip install --upgrade pip
    - python3.7 -V
    - pip3.7 install virtualenv

.after_script_template: &after_script_definition
  after_script:
    # Upload package to S3
    # Install AWS CLI
    - pip install awscli --upgrade # --user
    - export PATH=$PATH:~/.local/bin # Add to PATH
    # Configure AWS connection
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws sts get-caller-identity --output text --query 'Account'
    - aws s3 cp ~/forlambda/archive.zip $BUCKET_TRIAL/${LAMBDA_NAME}-deployment.zip
    - aws lambda create-function --function-name "${LAMBDA_NAME}-2" --runtime python3.7 --role arn:aws:iam::xxxxxxxxxxxx:role/abc_scripts --handler ${HANDLER_SCRIPT_NAME}.${HANDLER_FUNCTION} --memory-size 1024 --zip-file "fileb://$BUCKET_TRIAL/${LAMBDA_NAME}-deployment.zip"

my_job:
  variables:
    LAMBDA_NAME: my_lambda
    HANDLER_SCRIPT_NAME: my_aws_handler
    HANDLER_FUNCTION: my_handler_function
  <<: *before_script_definition
  script:
    # - move scripts around and install requirements and zip the file for lambda deployment
  <<: *after_script_definition
For the CI, I have added the environment variable $BUCKET_TRIAL, which is of the form s3://my-folder.
When the CI file is run, it throws the following error at the end:
Error parsing parameter '--zip-file': Unable to load paramfile fileb://s3://my-folder/my_lambda-deployment.zip: [Errno 2] No such file or directory: 's3://my-folder/my_lambda-deployment.zip'
I also tried changing --zip-file in the last line of the after_script to:
- aws lambda create-function --function-name "${LAMBDA_NAME}-2" --runtime python3.7 --role arn:aws:iam::xxxxxxxxxxxx:role/abc_scripts --handler ${HANDLER_SCRIPT_NAME}.${HANDLER_FUNCTION} --memory-size 1024 --zip-file "fileb:///my-folder/${LAMBDA_NAME}-deployment.zip"
But it still throws the same error.
Am I missing something here?
Based on the discussion in chat.
The solution was to use
--zip-file fileb:///root/forlambda/archive.zip
instead of
--zip-file "fileb://$BUCKET_TRIAL/${LAMBDA_NAME}-deployment.zip"
The reason is that --zip-file requires a local path to a zip deployment package, not a remote location in S3.
From docs:
--zip-file (blob): The path to the zip file of the code you are uploading. Example: fileb://code.zip
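If you do want Lambda to fetch the package straight from S3 instead of uploading a local file, create-function accepts an S3 location through the --code parameter instead (a sketch using this question's names; note the bucket is given without the s3:// prefix):

aws lambda create-function --function-name "${LAMBDA_NAME}-2" --runtime python3.7 --role arn:aws:iam::xxxxxxxxxxxx:role/abc_scripts --handler ${HANDLER_SCRIPT_NAME}.${HANDLER_FUNCTION} --memory-size 1024 --code S3Bucket=my-folder,S3Key=${LAMBDA_NAME}-deployment.zip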

CodeBuild upload build artifact to S3 with ACL

I have two AWS accounts. Let's say A and B.
Account A uses CodeBuild to build and upload artifacts to an S3 bucket owned by B. Account B has set an ACL on the bucket to give write permissions to A.
The artifact file is successfully uploaded to the S3 bucket. However, account B doesn't have any permissions on the file, since the file is owned by A.
Account A can change the ownership by running
aws s3api put-object-acl --bucket bucket-name --key key-name --acl bucket-owner-full-control
but this has to be run manually after every build from account A. How can I grant permissions to account B as part of the CodeBuild procedure? Or how can account B get around this ownership problem?
The CodeBuild starts automatically via webhooks, and my buildspec is this:
version: 0.2
env:
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      - echo Entered the install phase...
  build:
    commands:
      - echo Entered the build phase...
  post_build:
    commands:
      - echo Entered the post_build phase...
artifacts:
  files:
    - 'myFile.txt'
CodeBuild does not natively support writing artifacts to a different account, because it does not set the proper ACL on cross-account objects. This is the reason the following limitation is called out in the CodePipeline documentation:
Cross-account actions are not supported for the following action types:
Jenkins build actions
CodeBuild build or test actions
https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html
One workaround is to set the ACL on the artifacts yourself from within CodeBuild:
version: 0.2
phases:
  post_build:
    commands:
      - aws s3api list-objects --bucket testingbucket --prefix CFNtest/OutputArti >> $CODEBUILD_SRC_DIR/objects.json
      - |
        for i in $(jq -r '.Contents[]|.Key' $CODEBUILD_SRC_DIR/objects.json); do
          echo $i
          aws s3api put-object-acl --bucket testingbucket --key $i --acl bucket-owner-full-control
        done
I did it using AWS CLI commands from the build phase.
version: 0.2
phases:
  build:
    commands:
      - mvn install...
      - aws s3 cp my-file s3://bucketName --acl bucket-owner-full-control
I am using the build phase, since post_build will be executed even if the build was not successful.
edit: updated answer with a sample.
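To confirm the grant took effect, account B can inspect the object's ACL after a build (the bucket and key here are the sample names from the answer above):

aws s3api get-object-acl --bucket bucketName --key my-file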

Cloudformation deploy --parameter-overrides doesn't accept a file - workaround

I am setting up a pipeline using CodeBuild, with cloudformation package and cloudformation deploy to spin up a stack that provisions a Lambda function. I know that cloudformation deploy can't use a parameters file with --parameter-overrides; that feature request is still open with AWS (https://github.com/aws/aws-cli/issues/2828). So I am trying a workaround using jq, described in https://github.com/aws/aws-cli/issues/3274#issuecomment-529155262, like below:
PARAMETERS_FILE="parameters.json" && PARAMS=($(jq -r '.Parameters[] | [.ParameterKey, .ParameterValue] | "\(.[0])=\(.[1])"' ${PARAMETERS_FILE}))
aws cloudformation deploy --template-file /codebuild/output/packaged.yaml --region us-east-2 --stack-name InitialSetup --capabilities CAPABILITY_IAM --parameter-overrides "${PARAMS[@]}"
This workaround works well when tested via the CLI. I also tried the workaround inside a container, since buildspec.yaml spins up a container in the background to run these commands, but CodeBuild doesn't execute the aws cloudformation deploy step and fails with the error "aws: error: argument --parameter-overrides: expected at least one argument". I even tried copying the two workaround steps into a shell script and executing that, but then I run into "[Container] 2020/01/21 09:19:14 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: ./test.sh. Reason: exit status 255".
Can someone please guide me here? My buildspec.yaml file is below:
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto8
    commands:
      - wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
      - chmod +x ./jq
      - cp jq /usr/bin
      - jq --version
  pre_build:
    commands:
      # - echo "[Pre-Build phase]"
  build:
    commands:
      - aws cloudformation package --template-file master.yaml --s3-bucket rtestbucket --output-template-file packaged.yaml
      - aws s3 cp ./packaged.yaml s3://rtestbucket/packaged.yaml
      - aws s3 cp s3://rtestbucket/packaged.yaml /codebuild/output
  post_build:
    commands:
      - PARAMETERS_FILE="parameters.json" && PARAMS=($(jq -r '.Parameters[] | [.ParameterKey, .ParameterValue] | "\(.[0])=\(.[1])"' ${PARAMETERS_FILE}))
      - ls
      - aws cloudformation deploy --template-file /codebuild/output/packaged.yaml --region us-east-2 --stack-name InitialSetup --capabilities CAPABILITY_IAM --parameter-overrides "${PARAMS[@]}"
artifacts:
  type: zip
  files:
    - packaged.yaml
CodeBuild buildspec commands do not run in a bash shell, and I think the syntax
${PARAMS[@]}
... is bash-specific.
As per the answer here: https://stackoverflow.com/a/44811491/12072431
Try to wrap your commands in a script file with a shebang specifying the shell you'd like the commands to execute with.
The expression ${PARAMS[@]} is not returning any value, which causes the error "aws: error: argument --parameter-overrides: expected at least one argument". Review the code to resolve that, or remove the parameter.
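Depending on the build image, you may also be able to ask CodeBuild for bash directly: newer buildspec agents support a top-level shell setting (a sketch; verify that your image's CodeBuild agent honors env/shell):

version: 0.2
env:
  shell: bash
phases:
  ...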
I was able to resolve this issue by executing all the required steps in a shell script and giving the build execute access to the script.
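A minimal sketch of that approach (the script name deploy.sh is made up; the jq filter and deploy command are the ones from the question):

#!/bin/bash
# deploy.sh - run the jq extraction and the deploy in the same bash process
set -e
PARAMETERS_FILE="parameters.json"
PARAMS=($(jq -r '.Parameters[] | [.ParameterKey, .ParameterValue] | "\(.[0])=\(.[1])"' ${PARAMETERS_FILE}))
aws cloudformation deploy --template-file /codebuild/output/packaged.yaml --region us-east-2 --stack-name InitialSetup --capabilities CAPABILITY_IAM --parameter-overrides "${PARAMS[@]}"

and invoke it from the buildspec with something like: - chmod +x deploy.sh && ./deploy.sh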

Problem creating CodePipeline, Deploy stage for AWS SAM application

I have created a working CodePipeline for my AWS SAM application.
It uses only Source and Build stages, with the following buildspec.yaml file:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - pip install --user aws-sam-cli
      - USER_BASE_PATH=$(python -m site --user-base)
      - export PATH=$PATH:$USER_BASE_PATH/bin
  build:
    commands:
      - sam build
  post_build:
    commands:
      - sam package --s3-bucket deploy-bucket --output-template-file deployment.yaml
      # finally:
      # - sam deploy --template-file deployment.yaml --stack-name MyStackSAM --region us-east-1 --capabilities CAPABILITY_IAM
As you can see, I have commented out the last two lines because I want to move that action to a Deploy stage in CodePipeline.
My Deploy step is configured in the console (screenshot omitted here).
My CloudFormationPipelineServiceRole has full admin permissions at this point; nevertheless, I'm still getting the following error as the result of executing this stage.
Action execution failed
Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: XXXXFFFFFXXXX; S3 Extended Request ID: XXXXFFFFFXXXXFFFFFXXXXX=)
I am stuck as to why I'm getting this error. Any help would be greatly appreciated.
First, sam package expects a source template file passed via the --template-file flag. I don't see that template file anywhere in your code. Which template file are you trying to package?
Second, you are not uploading the necessary artifacts to the S3 bucket. The only thing you are uploading is the zipped code, but as you can see from the command you commented out:
sam deploy --template-file deployment.yaml --stack-name MyStackSAM --region us-east-1 --capabilities CAPABILITY_IAM
you also need the file deployment.yaml, and you didn't specify that in your code. There is no way for CodeBuild to guess which artifacts you want to preserve.
You will need to add an artifacts section at the bottom of your buildspec file and list those files:
artifacts:
  type: zip
  files:
    - template.yaml        # (where do you have this file?)
    - outputtemplate.yaml  # (deployment.yaml in your case)
Note that the artifacts section needs to be on the same level as version and phases
version: 0.2
phases:
  ...
artifacts:
  ...
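Putting the two points together, the tail of the buildspec might look like this (a sketch that assumes your source template is template.yaml and reuses deploy-bucket and deployment.yaml from the question):

  post_build:
    commands:
      - sam package --template-file template.yaml --s3-bucket deploy-bucket --output-template-file deployment.yaml
artifacts:
  files:
    - deployment.yaml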

How to deploy to AWS Lambda quickly, alternative to the manual upload?

I am getting started with writing an Alexa skill. My skill requires uploading a .zip file, as it includes the alexa-sdk dependency stored in the node_modules folder.
Is there a more efficient way to upload a new version of my Lambda function and files from my local machine, without zipping and manually uploading the same files over and over again? Something like git push, or another way to deploy from the terminal with a single command?
You can use the update-function-code CLI command.
Note that this operation must only be used on an existing Lambda function; it cannot be used to update the function configuration.
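For example, zipping and pushing in two commands (the function and file names are placeholders):

zip -r function.zip index.js node_modules
aws lambda update-function-code --function-name myAlexaSkill --zip-file fileb://function.zip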
To add to Khalid's answer, I recently created this rudimentary script to ease a particular Lambda function's deployment. This example is for a Node.js Lambda function that has its dependencies in the node_modules folder.
Prerequisites:
Have 7zip installed.
Have it available on the command line (on the system PATH variable).
Have your local aws-cli set up with valid credentials that are allowed to update the Lambda function code.
rm -rf target                                       # start from a clean staging folder
mkdir -p target
cp -r index.js package.json node_modules/ target/   # copy only what the function needs
pushd target
7z a zip_file_name.zip -r                           # zip the staging folder's contents
popd
aws lambda update-function-code \
    --function-name YOUR_FUNCTION_NAME \
    --zip-file fileb://target/zip_file_name.zip \
    --region us-east-1
My one-liner for bash:
zip -u f.zip f.py; aws lambda update-function-code --zip-file fileb://f.zip --function-name f