I want to use Graviton with my AWS Lambda function (Python), so I read the official AWS docs: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html
Type: AWS::Serverless::Function
Properties:
Architectures: List
My AWS Lambda function also uses a layer, so I read the official AWS docs: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-layerversion.html
Type: AWS::Serverless::LayerVersion
Properties:
CompatibleArchitectures: List
My CloudFormation template:
MyBulkUploadFunction:
Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
Properties:
FunctionName: !Sub ${Project}-my-bulk-upload-${Environment}
Role: !Sub ${RoleLambda}
CodeUri: lambdas/bulk_upload/
Handler: app.lambda_handler
Layers:
- !Ref MyDataLayer
Runtime: python3.9
Architectures:
- arm64
VpcConfig: # For accessing RDS instance
SecurityGroupIds:
- !Ref LambdaSecurityGroup
SubnetIds:
- !Ref privateLambdaSubnet1
- !Ref privateLambdaSubnet2
Environment:
Variables:
RDS_HOST: !GetAtt DatabasePrimaryInstance.Endpoint.Address
RDS_USERNAME: AWS::NoValue
RDS_PASSWORD: AWS::NoValue
RDS_SECRET_NAME: !Ref DatabasePrimaryInstanceSecret
RDS_DB_NAME: !Ref RDSName
BULK_UPLOAD_S3_BUCKET: !Sub ${Project}-my-bulk-upload-${Environment}
Events:
UploadFile:
Type: S3
Properties:
Bucket: !Ref MyBulkUploadS3
Events: s3:ObjectCreated:*
MyDataLayer:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: !Sub ${Project}-my-data-layer-${Environment}
Description: Common
ContentUri: lambdas/my_common/
CompatibleRuntimes:
- python3.9
CompatibleArchitectures:
- arm64
RetentionPolicy: Retain
Metadata:
BuildMethod: makefile
Console error output:
samcli.commands.validate.lib.exceptions.InvalidSamDocumentException: [InvalidResourceException('MyBulkUploadFunction', 'property Architectures not defined for resource of type AWS::Serverless::Function'), InvalidResourceException('MyDataLayer', 'property CompatibleArchitectures not defined for resource of type AWS::Serverless::LayerVersion')] ('MyBulkUploadFunction', 'property Architectures not defined for resource of type AWS::Serverless::Function') ('MyDataLayer', 'property CompatibleArchitectures not defined for resource of type AWS::Serverless::LayerVersion')
AWS Lambda on Graviton needs AWS SAM CLI version 1.33.0 or later:
sam --version
SAM CLI, version 1.33.0
Related
I have tried to use two different endpoints for different purposes. It works fine when I use only one endpoint, but the deployment fails with an error when I use two endpoints. Here is my code in the template.yml file:
DataConditionApiLambdaFunction:
Type: AWS::Serverless::Function
Properties:
FunctionName: !Sub ${Environment}-${Application}-condition-api
Description: Lambda function behind the POST, GET, DELETE schedule api endpoints
Timeout: 90
Runtime: nodejs14.x
MemorySize: 128
Handler: !Sub src/handlers/condition/api.eventHandler
Policies:
- AWSLambdaBasicExecutionRole
- DynamoDBCrudPolicy:
TableName:
!Ref DataConditionTable
- SSMParameterReadPolicy:
ParameterName: 'acf/*'
Environment:
Variables:
Environment: !Ref Environment
conditionTableName: !Ref DataConditionTable
GreenhouseApiKeySSMPath: !Ref GreenhouseApiKeySSMPath
ConsensysKeySSMPath: !Ref ConsensysKeySSMPath
ConsensysBaseUri: !Ref ConsensysBaseUri
Tags:
Environment: !Ref Environment
Application: !Ref Application
CodePath: !Ref CodePath
Events:
HttpApiEvent:
Type: HttpApi
Properties:
- Path: /datafilter/{dataConditionId}
Method: GET
- Path: /datafilter/condition
Method: POST
Can anyone suggest a better solution for implementing multiple endpoints?
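For reference, a minimal sketch of how multiple routes are usually declared in SAM: each route gets its own event with a single Path and Method (the event names below are only illustrative):
Events:
  GetDataFilter:
    Type: HttpApi
    Properties:
      Path: /datafilter/{dataConditionId}
      Method: GET
  PostDataFilterCondition:
    Type: HttpApi
    Properties:
      Path: /datafilter/condition
      Method: POST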
I'm pretty new to AWS Lambda functions, and this is my first time getting my hands dirty. I got the error in the title when I tried to docker build my function. Here is how I configured it:
PitchAiIngest:
Type: AWS::Serverless::Function
Properties:
FunctionName: !Sub pitch-ai-ingest-${Environment}
Handler: lambda_function.lambda_handler
Runtime: python3.7
CodeUri: pitchai_ingest/
Description: get pitchai information from API and publish to dynamodb
MemorySize: 128
Timeout: 900
Role: !GetAtt LambdaRole.Arn
Environment:
Variables:
LOGGING_LEVEL: INFO
APP_NAME: pitch-ai-ingest
APP_ENV: !Ref Environment
DYNAMO_DB: !Ref PitchAiEventDynamoDBTable
PLAYER_DB: !Ref PitchAiPlayerDynamoDBTable
PITCH_SQS: !Ref PitchAiIngestQueue
Tags:
env: !Ref Environment
service: pitch-ai-service
function_name: !Sub pitch-ai-ingest-${Environment}
Roughly speaking, the snippet above lives in the file cfn-tempate.yml, in the same directory as the pitchai_ingest folder (which contains the Lambda handler).
What should I do to fix it?
I had mistakenly set AWS_ACCESS_KEY_ID as AWS_ACCESS_KEY. That's why the credentials weren't found.
So far I have this template.yml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
My Lambda for doing something
Resources:
FirstLayer:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: FirstLayer
Description: First layer of dependencies
ContentUri: layers/first-layer/
CompatibleRuntimes:
- nodejs14.x
Metadata:
BuildMethod: nodejs14.x
SecondLayer:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: SecondLayer
Description: Second layer of dependencies
ContentUri: layers/second-layer/
CompatibleRuntimes:
- nodejs14.x
Metadata:
BuildMethod: nodejs14.x
MyLambda:
Type: AWS::Serverless::Function
Properties:
FunctionName: "MyLambda"
Policies:
- AmazonS3FullAccess
CodeUri: src/
Handler: lambda.handler
Timeout: 30
MemorySize: 2048 # Chrome will require higher memory
Runtime: nodejs14.x
Layers:
- !Ref FirstLayer
- !Ref SecondLayer
With this template I am able to start and invoke MyLambda locally and also deploy it to AWS. The problem is that I would like to reuse these same layers in other Lambdas as well. To do that, I could simply extract the layers into another yml file, deploy them separately, and then put the layer ARNs in the Layers property of my Lambda. But then, how can I run it locally with sam? I wouldn't like to have two template.yml files for my Lambda: one with the layers in its Resources (like the one I already have) to run locally, and another with references to the actual layer ARNs to deploy to AWS. That is the only solution I am seeing right now, though.
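For illustration, referencing already-deployed layers by ARN in the function definition would look roughly like this (the ARNs below are made-up placeholders):
Layers:
  - arn:aws:lambda:us-east-1:123456789012:layer:FirstLayer:1
  - arn:aws:lambda:us-east-1:123456789012:layer:SecondLayer:1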
The first question you need to ask is whether those Lambdas belong to the same application. If that's not the case, you should use different templates, in order to deploy different stacks and keep the environments isolated.
However, if you want to share resources, you have two very similar options:
Configure the layer in the parent template and pass the ARN as a parameter.
template.yml
Resources:
SharedLayer:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: shared_layer
Description: Some code to share with the other lambda functions
ContentUri: ./layer
CompatibleRuntimes:
- nodejs14.x
RetentionPolicy: Delete
Application:
Type: "AWS::Serverless::Application"
Properties:
Location: "./app.template.yml"
Parameters:
SharedLayer: !Ref SharedLayer
app.template.yml
Parameters:
  SharedLayer:
    Type: String
    Description: ARN of the SharedLayer
Resources:
  LambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./
      Handler: index.handler
      Layers:
        - !Ref SharedLayer
Configure the layers in a nested template, set the ARN as an output, and then pass its output as a parameter to the other templates.
layers.template.yml
Resources:
SharedLayer:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: shared_layer
Description: Some code to share with the other lambda functions
ContentUri: ./layer
CompatibleRuntimes:
- nodejs14.x
RetentionPolicy: Delete
Outputs:
SharedLayerARN:
Description: ARN of the Shared Layer
Value: !Ref SharedLayer
template.yml
Resources:
  Layer:
    Type: "AWS::Serverless::Application"
    Properties:
      Location: "./layers.template.yml"
  Application:
    Type: "AWS::Serverless::Application"
    Properties:
      Location: "./app.template.yml"
      Parameters:
        SharedLayer: !GetAtt Layer.Outputs.SharedLayerARN
Both scenarios are supported by AWS SAM.
The following AWS CloudFormation template gives a circular dependency error. My understanding is that the dependencies flow like this: rawUploads -> generatePreview -> previewPipeline -> rawUploads. Although it doesn't seem like rawUploads depends on generatePreview, I guess CloudFormation needs to know which Lambda to trigger when creating the bucket, even though the trigger is defined in the Lambda part of the template.
I've found some resources online that talk about a similar issue, but they don't seem to apply here: https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-circular-dependency-cloudformation/
What are my options for breaking this circular dependency chain? Scriptable solutions are viable, but multiple deployments with manual changes are not for my use case.
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
rawUploads:
Type: 'AWS::S3::Bucket'
previewAudioFiles:
Type: 'AWS::S3::Bucket'
generatePreview:
Type: AWS::Serverless::Function
Properties:
Handler: generatePreview.handler
Runtime: nodejs6.10
CodeUri: .
Environment:
Variables:
PipelineId: !Ref previewPipeline
Events:
BucketrawUploads:
Type: S3
Properties:
Bucket: !Ref rawUploads
Events: 's3:ObjectCreated:*'
previewPipeline:
Type: Custom::ElasticTranscoderPipeline
Version: '1.0'
Properties:
ServiceToken:
Fn::Join:
- ":"
- - arn:aws:lambda
- Ref: AWS::Region
- Ref: AWS::AccountId
- function
- aws-cloudformation-elastic-transcoder-pipeline-1-0-0
Name: transcoderPipeline
InputBucket:
Ref: rawUploads
OutputBucket:
Ref: previewAudioFiles
One way is to give the S3 buckets explicit names so that later, instead of relying on Ref: bucketname, you can simply use the bucket name. That's obviously problematic if you want auto-generated bucket names, and in those cases it's prudent to generate the bucket name from some prefix plus the (unique) stack name, for example:
InputBucket: !Join ["-", ['rawuploads', Ref: 'AWS::StackName']]
Another option is to use a single CloudFormation template in two stages: the first stage creates the base resources (and whatever refs are not circular), then you add the remaining refs to the template and do a stack update. Not ideal, obviously, so I would prefer the first approach.
You can also use the first technique in cases where you need a reference to an ARN, for example:
!Join ['/', ['arn:aws:s3:::logsbucket', 'AWSLogs', Ref: 'AWS::AccountId', '*']]
When using this technique, you may also want to add a DependsOn, because you have removed an implicit dependency, which can sometimes cause problems.
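For example, once the pipeline no longer Refs the bucket, the dependency can be stated explicitly. A sketch based on the resources above (ServiceToken omitted for brevity):
previewPipeline:
  Type: Custom::ElasticTranscoderPipeline
  DependsOn: rawUploads # explicit, since the !Ref that implied the dependency is gone
  Properties:
    Name: transcoderPipeline
    InputBucket: !Join ["-", ['rawuploads', Ref: 'AWS::StackName']]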
This post helped me out in the end: https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-destination-s3/
I ended up configuring an SNS topic in CloudFormation. The bucket pushes events to this topic, and the Lambda function listens to it. This way the dependency graph is as follows:
S3 bucket -> SNS topic -> SNS topic policy
Lambda function -> SNS topic
Lambda function -> transcoder pipeline
Something along the lines of this (some policies omitted)
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
SNSTopic:
Type: AWS::SNS::Topic
SNSTopicPolicy:
Type: AWS::SNS::TopicPolicy
Properties:
PolicyDocument:
Id: MyTopicPolicy
Version: '2012-10-17'
Statement:
- Sid: Statement-id
Effect: Allow
Principal:
AWS: "*"
Action: sns:Publish
Resource:
Ref: SNSTopic
Condition:
ArnLike:
aws:SourceArn:
!Join ["-", ['arn:aws:s3:::rawuploads', Ref: 'AWS::StackName']]
Topics:
- Ref: SNSTopic
rawUploads:
Type: 'AWS::S3::Bucket'
DependsOn: SNSTopicPolicy
Properties:
BucketName: !Join ["-", ['rawuploads', Ref: 'AWS::StackName']]
NotificationConfiguration:
TopicConfigurations:
- Topic:
Ref: "SNSTopic"
Event: 's3:ObjectCreated:*'
previewAudioFiles:
Type: 'AWS::S3::Bucket'
generatePreview:
Type: AWS::Serverless::Function
Properties:
FunctionName: !Join ["-", ['generatepreview', Ref: 'AWS::StackName']]
Handler: generatePreview.handler
Runtime: nodejs6.10
CodeUri: .
Environment:
Variables:
PipelineId: !Ref previewPipeline
Events:
BucketrawUploads:
Type: SNS
Properties:
Topic: !Ref "SNSTopic"
previewPipeline:
Type: Custom::ElasticTranscoderPipeline
DependsOn: 'rawUploads'
Version: '1.0'
Properties:
ServiceToken:
Fn::Join:
- ":"
- - arn:aws:lambda
- Ref: AWS::Region
- Ref: AWS::AccountId
- function
- aws-cloudformation-elastic-transcoder-pipeline-1-0-0
Name: transcoderPipeline
InputBucket:
!Join ["-", ['arn:aws:s3:::rawuploads', Ref: 'AWS::StackName']]
OutputBucket:
Ref: previewAudioFiles
I am trying to deploy my Lambda functions to multiple AWS accounts and regions using CloudFormation StackSets, but the deployment failed because of the error below:
ResourceLogicalId:OfficeHoursAutoScalingStart, ResourceType:AWS::Lambda::Function, ResourceStatusReason:Error occurred while GetObject. S3 Error Code: AuthorizationHeaderMalformed. S3 Error Message: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'ap-southeast-1'
It seems like it's a permissions thing? How do I resolve this?
My template:
AWSTemplateFormatVersion : '2010-09-09'
Description: 'Skynet. AWS Management Assistant'
Parameters:
AppName:
Type: String
Description: Prefix for resources
Default: skynet-lambda-stackset
ArtifactsBucket:
Type: String
Description: S3 bucket storing lambda function zip
ArtifactZipPath:
Type: String
Description: Path to lambda function zip
CostCenter:
Type: String
Description: Cost center
Default: Admin
Owner:
Type: String
Description: Owner
Default: Jiew Meng
Resources:
LambdaRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Sub '${AppName}-lambda'
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service:
- lambda.amazonaws.com
- apigateway.amazonaws.com
Action:
- sts:AssumeRole
ManagedPolicyArns:
- 'arn:aws:iam::aws:policy/AmazonEC2FullAccess'
- 'arn:aws:iam::aws:policy/AWSLambdaFullAccess'
- 'arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess'
- 'arn:aws:iam::aws:policy/AmazonAPIGatewayInvokeFullAccess'
- 'arn:aws:iam::aws:policy/CloudWatchLogsFullAccess'
NewEc2AutoTag:
Type: AWS::Lambda::Function
Properties:
Code:
S3Bucket: !Ref ArtifactsBucket
S3Key: !Ref ArtifactZipPath
Handler: ec2/newEc2_autoTag.handler
Runtime: nodejs6.10
FunctionName: 'NewEC2_AutoTag'
Description: 'Auto tag new EC2 instances with Owner tag'
Timeout: 30
Role: !GetAtt LambdaRole.Arn
Tags:
- Key: Cost Center
Value: !Ref CostCenter
- Key: Owner
Value: !Ref Owner
NewEc2Event:
Type: AWS::Events::Rule
Properties:
Name: !Sub ${AppName}-newEc2
Description: On new EC2 instance created
EventPattern:
source:
- 'aws.ec2'
detail-type:
- 'AWS API Call via CloudTrail'
detail:
eventName:
- RunInstances
Targets:
- !Ref NewEc2AutoTag
AfterhoursEc2Shutdown:
Type: AWS::Lambda::Function
Properties:
Code:
S3Bucket: !Ref ArtifactsBucket
S3Key: !Ref ArtifactZipPath
Handler: ec2/afterHours_shutdown.handler
Runtime: nodejs6.10
FunctionName: 'Afterhours_Shutdown'
Description: 'Shutdown instances tagged Auto Shutdown: true'
Timeout: 30
Role: !GetAtt LambdaRole.Arn
Tags:
- Key: Cost Center
Value: !Ref CostCenter
- Key: Owner
Value: !Ref Owner
AfterHoursEvent:
Type: AWS::Events::Rule
Properties:
Name: !Sub ${AppName}-afterHours
Description: Triggered on weekdays 2400 SGT
ScheduleExpression: cron(0 16 ? * MON,TUE,WED,THUR,FRI *)
Targets:
- !Ref AfterhoursEc2Shutdown
- !Ref AfterhoursAutoScalingShutdown
OfficeHoursEc2Start:
Type: AWS::Lambda::Function
Properties:
Code:
S3Bucket: !Ref ArtifactsBucket
S3Key: !Ref ArtifactZipPath
Handler: ec2/officeHours_start.handler
Runtime: nodejs6.10
FunctionName: 'OfficeHours_Start'
Description: 'Starts instances with Auto Shutdown: true'
Timeout: 30
Role: !GetAtt LambdaRole.Arn
Tags:
- Key: Cost Center
Value: !Ref CostCenter
- Key: Owner
Value: !Ref Owner
OfficeHoursEvent:
Type: AWS::Events::Rule
Properties:
Name: !Sub ${AppName}-officeHours
Description: Triggered on 7AM SGT weekdays
ScheduleExpression: cron(0 23 ? * SUN,MON,TUE,WED,THU *)
Targets:
- !Ref OfficeHoursEc2Start
- !Ref OfficeHoursAutoScalingStart
StartedEc2ConfigureDns:
Type: AWS::Lambda::Function
Properties:
Code:
S3Bucket: !Ref ArtifactsBucket
S3Key: !Ref ArtifactZipPath
Handler: ec2/started_configureDns.handler
Runtime: nodejs6.10
FunctionName: 'StartedEc2_ConfigureDns'
Description: 'When EC2 started, configure DNS if required'
Timeout: 30
Role: !GetAtt LambdaRole.Arn
Tags:
- Key: Cost Center
Value: !Ref CostCenter
- Key: Owner
Value: !Ref Owner
Ec2StartedEvent:
Type: AWS::Events::Rule
Properties:
Name: !Sub ${AppName}-ec2-started
Description: Triggered on EC2 starts
EventPattern:
source:
- 'aws.ec2'
detail-type:
- 'EC2 Instance State-change Notification'
detail:
state:
- running
Targets:
- !Ref StartedEc2ConfigureDns
AfterhoursAutoScalingShutdown:
Type: AWS::Lambda::Function
Properties:
Code:
S3Bucket: !Ref ArtifactsBucket
S3Key: !Ref ArtifactZipPath
Handler: autoscaling/afterHours_shutdown.handler
Runtime: nodejs6.10
FunctionName: 'Afterhours_AutoScalingShutdown'
Description: 'Scales down autoscaling groups tagged Auto Shutdown: true'
Timeout: 30
Role: !GetAtt LambdaRole.Arn
Tags:
- Key: Cost Center
Value: !Ref CostCenter
- Key: Owner
Value: !Ref Owner
OfficeHoursAutoScalingStart:
Type: AWS::Lambda::Function
Properties:
Code:
S3Bucket: !Ref ArtifactsBucket
S3Key: !Ref ArtifactZipPath
Handler: autoscaling/officeHours_start.handler
Runtime: nodejs6.10
FunctionName: 'OfficeHours_AutoScalingStart'
Description: 'Scales up auto scaling groups that are scaled down to 0 and tagged autostart: true'
Timeout: 30
Role: !GetAtt LambdaRole.Arn
Tags:
- Key: Cost Center
Value: !Ref CostCenter
- Key: Owner
Value: !Ref Owner
NewAutoScalingGroupEvent:
Type: AWS::Events::Rule
Properties:
Name: !Sub ${AppName}-autoscaling-new
Description: Triggered when new autoscaling group created
EventPattern:
source:
- 'aws.autoscaling'
detail-type:
- 'AWS API Call via CloudTrail'
detail:
eventName:
- CreateAutoScalingGroup
Targets:
- !Ref NewAutoScalingGroupAutoTag
NewAutoScalingGroupAutoTag:
Type: AWS::Lambda::Function
Properties:
Code:
S3Bucket: !Ref ArtifactsBucket
S3Key: !Ref ArtifactZipPath
Handler: autoscaling/new_autoTag.handler
Runtime: nodejs6.10
FunctionName: 'NewAutoScalingGroup_AutoTag'
Description: 'Tags new autoscaling groups with owner and autoshutdown tags if not existing'
Timeout: 30
Role: !GetAtt LambdaRole.Arn
Tags:
- Key: Cost Center
Value: !Ref CostCenter
- Key: Owner
Value: !Ref Owner
It looks like you have created the S3 bucket (referenced by the ArtifactsBucket parameter in your template) in the AWS region ap-southeast-1.
Using AWS StackSets, you have selected us-east-1 as one of the regions in the deployment order.
A StackSet passes the same parameters to all the stacks it tries to create across the target regions/accounts.
So when it tries to create the Lambda function OfficeHoursAutoScalingStart in the us-east-1 region, it tries to access the S3 bucket (GetObject request) in us-east-1 itself, using the same bucket name.
That is, it assumes the S3 bucket whose name is passed via the ArtifactsBucket parameter is present in us-east-1 itself. But since the source code of the Lambda function is actually in the bucket in region ap-southeast-1, the malformed-header error is thrown. In this case the bucket name matches, but the region does not.
Currently, when you create a Lambda function using CloudFormation, there is a restriction that the S3 bucket containing your Lambda function's source code must be in the same region as the stack you are creating. Doc Reference Link
If this is the issue, then as a fix you can create S3 buckets in the required regions (adding the region name as a prefix to the bucket name) and use the right one in the template based on the region, as shown in the sketch after the example names below.
Example:
us-east-1-lambdabkt
us-east-2-lambdabkt
ap-southeast-1-lambdabkt
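A rough sketch of how the template could then pick the per-region bucket automatically via the AWS::Region pseudo parameter, assuming buckets named as in the example above (the 'lambdabkt' suffix is just the example name):
NewEc2AutoTag:
  Type: AWS::Lambda::Function
  Properties:
    Code:
      # Resolves to us-east-1-lambdabkt, ap-southeast-1-lambdabkt, etc.,
      # depending on the region the stack instance is deployed to
      S3Bucket: !Sub '${AWS::Region}-lambdabkt'
      S3Key: !Ref ArtifactZipPath
    Handler: ec2/newEc2_autoTag.handler
    Runtime: nodejs6.10
    Role: !GetAtt LambdaRole.Arn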