CloudFormation deploy --parameter-overrides doesn't accept a file: workaround - amazon-web-services

I am in the process of setting up a pipeline using CodeBuild, with cloudformation package and cloudformation deploy spinning up a stack that creates a Lambda function. I know that cloudformation deploy can't take a parameters file with --parameter-overrides, and this feature request is still open with AWS: https://github.com/aws/aws-cli/issues/2828 . So I am trying a workaround using jq, described at https://github.com/aws/aws-cli/issues/3274#issuecomment-529155262 , like below:
PARAMETERS_FILE="parameters.json" && PARAMS=($(jq -r '.Parameters[] | [.ParameterKey, .ParameterValue] | "\(.[0])=\(.[1])"' ${PARAMETERS_FILE}))
aws cloudformation deploy --template-file /codebuild/output/packaged.yaml --region us-east-2 --stack-name InitialSetup --capabilities CAPABILITY_IAM --parameter-overrides "${PARAMS[@]}"
This workaround works well when tested via the CLI. I also tried it inside a container, since the buildspec.yaml commands run in a container in the background, but CodeBuild does not execute the aws cloudformation deploy step and fails with the error "aws: error: argument --parameter-overrides: expected at least one argument". I even tried copying the two workaround steps into a shell script and executing that, but then I run into the error "[Container] 2020/01/21 09:19:14 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: ./test.sh. Reason: exit status 255".
Can someone please guide me here? My buildspec.yaml file is as below:
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto8
    commands:
      - wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
      - chmod +x ./jq
      - cp jq /usr/bin
      - jq --version
  pre_build:
    commands:
      # - echo "[Pre-Build phase]"
  build:
    commands:
      - aws cloudformation package --template-file master.yaml --s3-bucket rtestbucket --output-template-file packaged.yaml
      - aws s3 cp ./packaged.yaml s3://rtestbucket/packaged.yaml
      - aws s3 cp s3://rtestbucket/packaged.yaml /codebuild/output
  post_build:
    commands:
      - PARAMETERS_FILE="parameters.json" && PARAMS=($(jq -r '.Parameters[] | [.ParameterKey, .ParameterValue] | "\(.[0])=\(.[1])"' ${PARAMETERS_FILE}))
      - ls
      - aws cloudformation deploy --template-file /codebuild/output/packaged.yaml --region us-east-2 --stack-name InitialSetup --capabilities CAPABILITY_IAM --parameter-overrides ${PARAMS[@]}
artifacts:
  type: zip
  files:
    - packaged.yaml

CodeBuild buildspec commands do not run in a bash shell, and the syntax:
${PARAMS[@]}
... is bash-specific (POSIX sh has no arrays).
As per the answer here: https://stackoverflow.com/a/44811491/12072431
Try to wrap your commands in a script file with a shebang specifying the shell you'd like the commands to execute with.
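A minimal sketch of such a wrapper script, using the file names and values from the question (the script name deploy.sh is an assumption):

#!/bin/bash
# deploy.sh -- hypothetical wrapper; the shebang guarantees bash, so the
# array expansion "${PARAMS[@]}" below works as intended.
set -euo pipefail

PARAMETERS_FILE="parameters.json"
PARAMS=($(jq -r '.Parameters[] | [.ParameterKey, .ParameterValue] | "\(.[0])=\(.[1])"' "${PARAMETERS_FILE}"))

aws cloudformation deploy \
  --template-file /codebuild/output/packaged.yaml \
  --region us-east-2 \
  --stack-name InitialSetup \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides "${PARAMS[@]}"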

The expression ${PARAMS[@]} is not returning any value, which causes the error "aws: error: argument --parameter-overrides: expected at least one argument". Review the code and resolve that, or remove the parameter.
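For illustration, here is the difference under a POSIX-only shell such as dash versus bash (a sketch; whether /bin/sh is dash or bash depends on the build image):

# dash (Ubuntu's /bin/sh) has no arrays; the assignment is a syntax error
dash -c 'PARAMS=(a b); echo "${PARAMS[@]}"'   # fails: Syntax error: "(" unexpected
# bash handles the array assignment and the expansion
bash -c 'PARAMS=(a b); echo "${PARAMS[@]}"'   # prints: a b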

I was able to resolve this issue by executing all the required steps in a shell script and making the script executable.
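For reference, invoking such a script from the buildspec could look like this (the script name deploy.sh is hypothetical):

post_build:
  commands:
    - chmod +x ./deploy.sh
    - ./deploy.sh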

Related

AWS CodeBuild Unzipped size must be smaller than 350198 bytes

I am trying to deploy and update code in multiple Lambdas at the same time, but when I push to my branch and CodeBuild deploys, I get the following error:
An error occurred (InvalidParameterValueException) when calling the UpdateFunctionCode operation: Unzipped size must be smaller than 350198 bytes
[Container] 2021/04/24 00:09:31 Command did not exit successfully aws lambda update-function-code --function-name my_lambda_03 --zip-file fileb://my_lambda_03.zip exit status 254
[Container] 2021/04/24 00:09:31 Phase complete: POST_BUILD State: FAILED
[Container] 2021/04/24 00:09:31 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: aws lambda update-function-code --function-name my_lambda_03 --zip-file fileb://my_lambda_03.zip. Reason: exit status 254
This is the buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.x
    commands:
      - echo "Installing dependencies..."
  build:
    commands:
      - echo "Zipping all my functions....."
      - cd my_lambda_01/
      - zip -r9 ../my_lambda_01.zip .
      - cd ..
      - cd my_lambda_02/
      - zip -r9 ../my_lambda_02.zip .
      - cd ..
      - cd my_lambda_03/
      - zip -r9 ../my_lambda_03.zip .
      ...
      - cd my_lambda_09/
      - zip -r9 ../my_lambda_09.zip .
      - cd ..
  post_build:
    commands:
      - echo "Updating all lambda functions"
      - aws lambda update-function-code --function-name my_lambda_01 --zip-file fileb://my_lambda_01.zip
      - aws lambda update-function-code --function-name my_lambda_02 --zip-file fileb://my_lambda_02.zip
      - aws lambda update-function-code --function-name my_lambda_03 --zip-file fileb://my_lambda_03.zip
      ...
      - aws lambda update-function-code --function-name my_lambda_09 --zip-file fileb://my_lambda_09.zip
      - echo "Done"
Thanks for any help.
The error means that one of your Lambda archives is too big. That said, 350198 bytes seems low and doesn't match up with the advertised limits.
AWS limits the size of direct uploads in the request, so it may be better to upload to S3 first and then run update-function-code against the S3 object. Doing this should allow an archive of up to 250 MB.
- aws s3 cp my_lambda_01.zip s3://my-deployment-bucket/my_lambda_01.zip
- aws lambda update-function-code --function-name my_lambda_01 --s3-bucket my-deployment-bucket --s3-key my_lambda_01.zip
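Since the nine functions share a naming pattern, the upload-and-update pairs could also be collapsed into a loop; a sketch in plain shell (e.g. inside a script the buildspec calls), reusing the hypothetical bucket name from above:

# Loop over my_lambda_01 .. my_lambda_09 (bash brace expansion);
# bucket name is an assumption carried over from the example above.
for i in 0{1..9}; do
  aws s3 cp "my_lambda_${i}.zip" "s3://my-deployment-bucket/my_lambda_${i}.zip"
  aws lambda update-function-code \
    --function-name "my_lambda_${i}" \
    --s3-bucket my-deployment-bucket \
    --s3-key "my_lambda_${i}.zip"
done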
Another option would be to reduce your archive size. What kinds of data or libraries are you including? Make sure you aren't packaging extraneous files in your Lambda archives (virtual environment files, temp build files, tests and test data, etc.). Can some things be offloaded to S3 and loaded into memory or onto disk at runtime?
If what you are deploying genuinely needs to be very large, you'll need to package it as a Docker image. Container image support was released at re:Invent 2020 and allows images up to 10 GB.
References:
https://acloudguru.com/blog/engineering/packaging-aws-lambda-functions-as-container-images
https://aws.amazon.com/blogs/compute/optimizing-lambda-functions-packaged-as-container-images/

AWS SAM CLI ignoring my Python dependencies during build, package, and deploy

I'm trying to deploy an AWS Lambda function with the SAM CLI tool, from MacOS, not using Docker containers.
SAM CLI version 0.4.0
Python 3.8 runtime for Lambda function
Python 3.7 installed locally on MacOS
I have a requirements.txt file, in the same directory as my Lambda function file
Requirements.txt
boto3
botostubs
Deploy Script (PowerShell)
sam build --template-file $InputTemplate
sam package --region $AWSRegion --template-file $InputTemplate --profile $ProfileName --s3-bucket $BucketName --output-template-file $OutputTemplate
sam deploy --region $AWSRegion --profile $ProfileName --template-file $OutputTemplate --stack-name $StackName --capabilities CAPABILITY_NAMED_IAM
Actual Behavior
SAM CLI is ignoring my requirements.txt file, and only deploying my source code. This results in the following error when I test my function.
{
  "errorMessage": "Unable to import module 'xxxxxxxxxxxxxx': No module named 'botostubs'",
  "errorType": "Runtime.ImportModuleError"
}
Expected Behavior
SAM CLI packages up the declared Python dependencies, in requirements.txt, along with my source code.
Question: How can I ensure that the SAM CLI downloads and packages my Python dependencies, along with my source code? I followed the documentation, to the best of my knowledge.
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-build.html
Just figured it out after reading about the sam build command in a bit more depth. I didn't realize that it creates a subfolder called .aws-sam/build/ and stores the modified template there.
I updated my commands and paths, and it is working just fine now.
$InputTemplate = "$PSScriptRoot/cloudformation.yml"
$BuiltTemplate = "$PSScriptRoot/.aws-sam/build/template.yaml"
$BucketName = 'xxxxxxx'
$AWSRegion = 'xxxxxx'
$StackName = 'xxxxxx'
# Package and deploy the application
sam build --template-file $InputTemplate
sam package --region $AWSRegion --template-file $BuiltTemplate --profile $ProfileName --s3-bucket $BucketName
sam deploy --region $AWSRegion --profile $ProfileName --template-file $BuiltTemplate --stack-name $StackName --capabilities CAPABILITY_NAMED_IAM --s3-bucket $BucketName
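For context, sam build writes its output under .aws-sam/build/, roughly like this (a sketch; the function directory name comes from the template's logical ID, and the names here are hypothetical):

.aws-sam/
  build/
    template.yaml        # modified template with CodeUri rewritten to the build dir
    MyFunction/          # source code plus dependencies installed from requirements.txt
      app.py
      boto3/ ...
      botostubs/ ...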
I had a similar problem, and the root cause of my failure was that I was specifying --template-file template.yml. As per this issue, https://github.com/awslabs/aws-sam-cli/issues/1252 , SAM CLI then looks at the CodeUri specified in my template.yml and uploads only the function code.
So my solution was to:
- specify --template-file in sam build
- run sam deploy without the --template-file option
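In command form, that solution looks roughly like this (a sketch; without --template-file, recent SAM CLI versions pick up .aws-sam/build/template.yaml by default, and the remaining flags are illustrative):

sam build --template-file template.yml
sam deploy --stack-name my-stack --s3-bucket my-bucket --capabilities CAPABILITY_NAMED_IAM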
Had a similar problem. I resolved it by changing directory into the build folder, so the commands don't need the directory shell variables:
sam build --use-container
cd .aws-sam/build/
sam package --template-file template.yaml --s3-bucket sdd-s3-basebucket --output-template-file packaged.yaml
sam deploy --template-file ./packaged.yaml --stack-name prod --capabilities CAPABILITY_IAM --region eu-central-1
Make sure your requirements.txt file sits directly under the path specified in the CodeUri attribute of the Lambda function in your template file.
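In other words, the layout should look something like this (an excerpt; the resource name and paths are hypothetical):

# template.yaml (excerpt)
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/        # sam build installs src/requirements.txt dependencies
      Handler: app.handler
      Runtime: python3.8

# directory layout
src/
  app.py
  requirements.txt         # boto3, botostubs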
My solution was to:
- specify --template-file in sam build
- run sam deploy without the --template-file option
It works, but every time the script runs, sam deploy asks for confirmation before deploying the changeset. That is not a problem when I run the script manually, but it is a problem when the script is executed by CI/CD.
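If the interactive confirmation is the blocker in CI/CD, newer SAM CLI versions accept a flag to skip it (check sam deploy --help in your version; stack name and capabilities below are illustrative):

sam deploy --stack-name my-stack --capabilities CAPABILITY_NAMED_IAM --no-confirm-changeset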

Problem creating CodePipeline, Deploy stage for AWS SAM application

I have created a working CodePipeline for my AWS SAM application.
It is using only Source and Build stages, with the following buildspec.yaml file:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - pip install --user aws-sam-cli
      - USER_BASE_PATH=$(python -m site --user-base)
      - export PATH=$PATH:$USER_BASE_PATH/bin
  build:
    commands:
      - sam build
  post_build:
    commands:
      - sam package --s3-bucket deploy-bucket --output-template-file deployment.yaml
      # finally:
      # - sam deploy --template-file deployment.yaml --stack-name MyStackSAM --region us-east-1 --capabilities CAPABILITY_IAM
As you can see, I have commented out the last two lines, as I want to move that action to a Deploy stage in CodePipeline.
My Deploy step looks like this: [screenshot of the Deploy stage configuration omitted]
My CloudFormationPipelineServiceRole has full admin permissions at this point; nevertheless, I'm still getting the following error as the result of executing this stage.
Action execution failed
Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: XXXXFFFFFXXXX; S3 Extended Request ID: XXXXFFFFFXXXXFFFFFXXXXX=)
I am stuck as to why I'm getting this error. Any help would be greatly appreciated.
First, sam package expects a source template, passed via the --template-file flag. I don't see that template file anywhere in your code; which template are you trying to package?
Second, you are not uploading the necessary artifacts to the S3 bucket. The only thing you are uploading is the zipped code, but as you can see from the command you commented out:
sam deploy --template-file deployment.yaml --stack-name MyStackSAM --region us-east-1 --capabilities CAPABILITY_IAM
you also need the deployment.yaml file, and you didn't specify it in your code. There is no way for CodeBuild to guess which artifacts you want to preserve.
You will need to add an artifacts section at the bottom of your buildspec file and specify those artifacts:
artifacts:
  type: zip
  files:
    - template.yaml       # (where do you have this file?)
    - outputtemplate.yaml # (deployment.yaml in your case)
Note that the artifacts section needs to be at the same level as version and phases:
version: 0.2
phases:
  ...
artifacts:
  ...
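Putting both fixes together with the commands from the question, the buildspec might look like this (a sketch; using template.yaml as the source template is an assumption):

version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - pip install --user aws-sam-cli
      - USER_BASE_PATH=$(python -m site --user-base)
      - export PATH=$PATH:$USER_BASE_PATH/bin
  build:
    commands:
      - sam build
  post_build:
    commands:
      - sam package --template-file template.yaml --s3-bucket deploy-bucket --output-template-file deployment.yaml
artifacts:
  files:
    - template.yaml
    - deployment.yaml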

Reference CodeCommit Filename in CodeBuild buildspec

I am trying to create a CloudFormation stack from a CodeCommit repository.
I have created a build project in CodeBuild.
My build command is like this:
build:
  commands:
    - aws cloudformation create-stack
        --stack-name SGStack
        --template-body file://security_groups.template
        --parameters ParameterKey=VPCID,ParameterValue=vpc-77092d1
I think I have a problem with the '--template-body' argument.
How can I reference a file from the CodeCommit repo in a CodeBuild build command?
In your buildspec, try accessing the template using $(pwd) or the CODEBUILD_SRC_DIR environment variable, e.g.:
build:
  commands:
    - aws cloudformation create-stack
        --stack-name SGStack
        --template-body file://$(pwd)/security_groups.template
        --parameters ParameterKey=VPCID,ParameterValue=vpc-77092d1
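Or, equivalently, with the CODEBUILD_SRC_DIR variable mentioned above (a sketch):

build:
  commands:
    - aws cloudformation create-stack
        --stack-name SGStack
        --template-body file://$CODEBUILD_SRC_DIR/security_groups.template
        --parameters ParameterKey=VPCID,ParameterValue=vpc-77092d1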

AWS sam deploy returns error: invalid choice

I'm following the instructions here to use AWS CodeDeploy to push code from GitHub to AWS.
I run into this error:
$ sam deploy -template-file packaged.yaml –stack-name mySafeDeployStack –capabilities CAPABILITY_IAM
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

aws: error: argument subcommand: Invalid choice, valid choices are:

    push | register
    deregister | install
    uninstall
I have previously run this command successfully:
$ sam package --template-file template.yaml --s3-bucket my-bucket --output-template-file packaged.yaml
Uploading to ... (100.00%)
Successfully packaged artifacts and wrote output template to file packaged.yaml.
Execute the following command to deploy the packaged template
aws cloudformation deploy --template-file .../packaged.yaml --stack-name <YOUR STACK NAME>
$ sam --version
SAM CLI, version 0.6.0
I've tried the recommended command:
aws cloudformation deploy ...
but it returns the same error.
It looks like you're using single dashes for the flags when they require two. The sam package command succeeded since you used two dashes for it.
This should work:
sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM