I'm following the hello-world example from the aws-sam-local repo:
aws-sam-local\samples\hello-world\python
But here is my template.yaml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function
Resources:
  MyLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: lambda_function.lambda_handler
      Runtime: python3.6
      CodeUri: .
      Description: ''
      MemorySize: 128
      Timeout: 3
      Role: 'arn:aws:iam::123345:role/myrole'
I package it up:
sam package --template-file template.yaml --s3-bucket BUCKET --output-template-file packaged-template.yaml
And deploy!
sam deploy --template-file packaged-template.yaml --stack-name test-sam-local --capabilities CAPABILITY_IAM --region REGION
And it works, so that's great, but here is the name of the Lambda it created:
test-sam-local-MyLambda-SOME_GUID
Do I have control over that name? I want the name of the function to be statically defined and clobbered whenever this is redeployed (to a function with that same name).
Use FunctionName: MyLambda under Properties (see the FunctionName property in the AWS::Serverless::Function reference).
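A minimal sketch of how that looks in the template above (the value can be any name you want; with a static FunctionName, redeploying the stack updates that same function in place):
Resources:
  MyLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: MyLambda            # statically defined name, reused on every redeploy
      Handler: lambda_function.lambda_handler
      Runtime: python3.6
      CodeUri: .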
I am deploying a serverless application to AWS. I have an environment variable in my SAM template, ENV: 'DEV'. When I deploy to AWS, I specify a template parameter override to change the value to PROD. I can see in the SAM deploy log that the parameter override was picked up, but when I look at the function in the Lambda console it still shows DEV, as in the template.
How do I make it override the value on deploy?
Template YAML:
Resources:
  GetWeatherFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: get-weather
      CodeUri: get-weather/
      Handler: app.lambda_handler
      Runtime: python3.7
      Timeout: 30
      Architectures:
        - x86_64
      Policies: AWSLambdaBasicExecutionRole
      Environment:
        Variables:
          ENV: 'DEV'
Deploy Window (screenshot):
Deploy Log (some information changed for privacy, none of it relevant to the issue):
"C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd" deploy --template-file C:\Users\User\PycharmProjects\Company\.aws-sam\build\packaged-template.yaml --stack-name MyProject --s3-bucket my-lambda-functions --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM --no-execute-changeset --parameter-overrides \"ENV\"=\"PROD\"
Deploying with following values
===============================
Stack name : MyProject
Region : us-east-1
Confirm changeset : False
Disable rollback : False
Deployment s3 bucket : my-lambda-functions
Capabilities : ["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"]
Parameter overrides : {"ENV": "PROD"}
Signing Profiles : {}
Lambda Console (screenshot):
The "Template Parameters" field maps to the CloudFormation template parameters rather than an individual Lambda's environment variables.
You'll need to add a Parameter definition to the top of your template:
Parameters:
  EnvironmentName:
    Type: String
    Default: DEV
And then you can refer to it anywhere in your template, for example:
Resources:
  GetWeatherFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: get-weather
      ...
      Environment:
        Variables:
          ENV:
            Ref: EnvironmentName
Then, in the deploy window shown above, you'll need to supply the EnvironmentName parameter; it should automatically detect that a parameter has been defined in the template.
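If you deploy from the command line rather than from the deploy window, the same thing can be done with --parameter-overrides; a sketch reusing the stack and bucket names from the log above (note the parameter name is now EnvironmentName, not ENV):
sam deploy --template-file packaged-template.yaml --stack-name MyProject --s3-bucket my-lambda-functions --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM --parameter-overrides EnvironmentName=PROD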
We have pipeline code that does CodeCommit, CodeBuild, and create-and-deploy-changeset steps to deploy to other AWS accounts. This codebase has Lambda functions defined in template.yml along with a Java codebase. During the build phase it creates 3 jar files. How do I package these files when producing the exported template? It works fine with a single Lambda function in template.yml; the challenge here is that we have multiple Lambda functions inside the template.yml.
aws cloudformation package --template-file template.yml --s3-bucket $S3Bucket --s3-prefix packages-$EnvironmentName --output-template-file template-export.yml
How do I set the CodeUri for each function? It should end up like s3://bucketname/packages-/pack1/
Transform: AWS::Serverless-2016-10-31
Description: Outputs the time
Resources:
  TimeFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: firstsample/firstsample.handler # firstsample.js file is in the firstsample directory
      Role: !GetAtt BasicAWSLambdaRole.Arn
      Runtime: java11
      CodeUri: s3://test/packages/jar1/
  SecondSampleFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: secondsample.handler # didn't have to include the secondsample directory
      Role: !GetAtt BasicAWSLambdaRole.Arn
      Runtime: java11
      CodeUri: s3://test/packages/jar2/
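For what it's worth, aws cloudformation package only uploads and rewrites CodeUri values that point at local paths; hard-coded s3:// URIs are left untouched. A sketch assuming each function's CodeUri points at the jar produced during the build phase (the target/ paths are placeholders for wherever your build writes the jars):
Resources:
  TimeFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: firstsample/firstsample.handler
      Role: !GetAtt BasicAWSLambdaRole.Arn
      Runtime: java11
      CodeUri: target/firstsample.jar   # local artifact; package uploads it and rewrites this to an S3 location
  SecondSampleFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: secondsample.handler
      Role: !GetAtt BasicAWSLambdaRole.Arn
      Runtime: java11
      CodeUri: target/secondsample.jar  # each function can point at its own local jar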
I'm trying to provide my Lambda function with the S3FullAccessPolicy policy. Note the target bucket is not configured within the template.yaml - it already exists. Considering the syntax examples from this documentation I have three options:
1. An AWS managed policy name:
Policies:
  - S3FullAccessPolicy
2. An AWS SAM policy template (the docs example uses SQSPollerPolicy) with parameters:
Policies:
  - S3FullAccessPolicy:
      BucketName: abc-bucket-name
3. Or an inline policy document:
Policies:
  - Statement:
      ...
Trying #1, I get an error that seems to suggest I need to provide an ARN. If this is the case, where would I provide it? The error:
1 validation error detected: Value 'S3FullAccessPolicy' at 'policyArn' failed to satisfy constraint:
Member must have length greater than or equal to 20
For #2 I provide the bucket name, but it says that the policy is 'invalid'. I've tried adding quotes and replacing the name with an ARN, but no luck.
And for #3, I can find the code for the policy here, but that's in YAML, so I wonder if that's even what I'm supposed to be using.
What am I missing here? I'm open to using any one of these options but right now I'm 0/3.
The full Lambda function:
  testFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: lambda/testFunction/
      Handler: app.lambda_handler
      Runtime: python3.8
      Timeout: 900
      Policies:
        - S3FullAccessPolicy
I used the template below without any issues.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./hello_world/
      Handler: app.lambda_handler
      Runtime: python3.8
      Tracing: Active
      Policies:
        - S3FullAccessPolicy:
            BucketName: existingbucketname # bucket name without arn
Ran it using the command below and it deployed successfully.
sam deploy --stack-name sample-stack --s3-bucket bucket-to-deploy --capabilities CAPABILITY_IAM
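If you'd rather go with option #3, an inline policy document also works against an existing bucket; a minimal sketch (the bucket name is a placeholder, and the actions can be trimmed to what the function actually needs):
Policies:
  - Statement:
      - Effect: Allow
        Action:
          - s3:GetObject
          - s3:PutObject
          - s3:ListBucket
        Resource:
          - arn:aws:s3:::existingbucketname
          - arn:aws:s3:::existingbucketname/*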
I'm trying to use CloudFormation to package and deploy a simple "hello world" serverless app that uses a single Lambda layer. The issue I'm having is that the LayerVersion section in my CF template doesn't seem to like the fact that I'm using a !Ref to specify the S3Bucket and S3Key values. I don't want to hard-code these; nothing I've found in the documentation suggests that what I'm trying to do won't work, but it doesn't work :(
Here's the output of the deploy command that's failing:
aws cloudformation deploy --template-file out.yml --stack-name cftest-lambda --parameter-overrides S3BucketNameParameter=cftest-0eddf3f0b289f2c2 S3LambdaLayerNameParameter=cftest-lambda-layer-1602434332.zip --capabilities CAPABILITY_NAMED_IAM
Waiting for changeset to be created..
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [libs] is invalid. property Content not defined for resource of type AWS::Serverless::LayerVersion
Here is the full CF template file:
cat template.yml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Lambda application
Parameters:
  S3BucketNameParameter:
    Type: String
    Description: Bucket name for deployment artifacts
  S3LambdaLayerNameParameter:
    Type: String
    Description: Object name for lambda layer deployment artifact
Resources:
  helloworldfunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: lambda_function.lambda_handler
      Runtime: python3.8
      CodeUri: hello-world-with-layer/.
      Description: Hello world function to test cf using layers
      Timeout: 10
      # Function's execution role
      Policies:
        - AWSLambdaBasicExecutionRole
        - AWSLambdaReadOnlyAccess
        - AWSXrayWriteOnlyAccess
      Tracing: Active
      Layers:
        - !Ref libs
  libs:
    Type: AWS::Serverless::LayerVersion
    Properties:
      Content:
        S3Bucket: !Ref S3BucketNameParameter
        S3Key: !Ref S3LambdaLayerNameParameter
      CompatibleRuntimes:
        - python3.8
      LayerName: hello-world-lib
      Description: Dependencies for the hello-world-with-layer app.
Any suggestions on how to approach this correctly?
The correct properties for LayerContent are:
  Bucket: String
  Key: String
  Version: String
However, you are using different names:
  S3Bucket: String
  S3Key: String
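For reference, a corrected layer resource might look roughly like this; in SAM the layer content is supplied via ContentUri (which accepts either an S3 URI string or a LayerContent object with the names above), not Content:
  libs:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: hello-world-lib
      Description: Dependencies for the hello-world-with-layer app.
      ContentUri:
        Bucket: !Ref S3BucketNameParameter
        Key: !Ref S3LambdaLayerNameParameter
      CompatibleRuntimes:
        - python3.8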
I want to create a continuous delivery pipeline for a Lambda function.
As shown in the docs, the custom environment variables of an AWS::CodeBuild::Project can be used in buildspec.yaml like:
aws cloudformation package --template-file template.yaml --s3-bucket $MYEVVARKEY --output-template-file outputtemplate.yaml
I also wanted to use those CodeBuild project environment variables in the repository's SAM template. As shown below, I tried dollar signs, but they were treated as literal text rather than substituted:
# template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  TimeFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: $MY_FN_NAME_ENV_VAR
      Role: $MY_ROLE_ARN_ENV_VAR
      Handler: index.handler
      Runtime: nodejs8.10
      CodeUri: ./
So, is it possible to use CodeBuild project environment variables in a SAM template, and if so, what notation is required to achieve that?
CloudFormation can't reference environment variables, whether it's SAM or plain CloudFormation. What you can do is pass environment variables as parameters via the shell in your CodeBuild buildspec.yaml file (--parameters ParameterKey=name,ParameterValue=${MY_ENV_VAR}).
Remember to add a corresponding parameter to your Parameters section.
If you use aws cloudformation deploy, then you should use --parameter-overrides, which has a slightly simpler form:
--parameter-overrides \
YourParam=${YOUR_ENV_VAR} \
Foo=Bar \
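Putting it together, a minimal sketch of the template Parameters plus references (the parameter names here are illustrative), and the matching deploy call from buildspec.yaml:
Parameters:
  FnName:
    Type: String
  RoleArn:
    Type: String
Resources:
  TimeFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Ref FnName
      Role: !Ref RoleArn
      Handler: index.handler
      Runtime: nodejs8.10
      CodeUri: ./
# in buildspec.yaml, after the package step:
aws cloudformation deploy --template-file outputtemplate.yaml --stack-name my-stack --capabilities CAPABILITY_IAM --parameter-overrides FnName=${MY_FN_NAME_ENV_VAR} RoleArn=${MY_ROLE_ARN_ENV_VAR}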