I'm currently learning the Serverless Framework with AWS, but in my serverless.yml I'm getting an error with some imported files. Is there any guidance on importing from a different file?
The import is AuctionsTable: ${file(resources/AuctionsTable.yml):AuctionsTable}, and the resources directory is located at the same level as serverless.yml.
serverless.yml:
service:
  name: auction-service

plugins:
  - serverless-bundle
  - serverless-pseudo-parameters

provider:
  name: aws
  runtime: nodejs12.x
  memorySize: 256
  stage: ${opt:stage, 'dev'}
  region: eu-west-1
  iamRoleStatements:
    - ${file(iam/AuctionsTableIAM.yml):AuctionsTableIAM}

resources:
  Resources:
    AuctionsTable: ${file(resources/AuctionsTable.yml):AuctionsTable}

functions:
  createAuction:
    handler: src/handlers/createAuction.handler
    events:
      - http:
          method: POST
          path: /auction

custom:
  bundle:
    linting: false
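For reference, a ${file(path):key} import resolves to the value of that top-level key inside the referenced file, so resources/AuctionsTable.yml must itself define an AuctionsTable key (and, likewise, iam/AuctionsTableIAM.yml an AuctionsTableIAM key). A minimal sketch of the table file; the table properties here are assumptions for illustration, not taken from the question:

```yaml
# resources/AuctionsTable.yml
# The top-level key must match the key named after the colon in
# ${file(resources/AuctionsTable.yml):AuctionsTable}.
AuctionsTable:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: AuctionsTable-${self:provider.stage}
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
    KeySchema:
      - AttributeName: id
        KeyType: HASH
```

If the key inside the file does not match the one referenced after the colon, the import silently resolves to nothing, which is a common cause of the error described.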
I have two AWS Lambda functions and three stages: dev, test, and prod. I want the trial Lambda function to be deployed only to the dev and test stages, not to prod.
How can I achieve that? Here is my serverless.yml:
service:
  name: demo-app

# Add the serverless-webpack plugin
plugins:
  - serverless-webpack
  - serverless-offline

provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  region: us-west-2
  profile: serverless-admin

custom:
  region: ${self:provider.region}
  stage: ${opt:stage, self:provider.stage}
  prefix: ${self:service}-${self:custom.stage}
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true

functions:
  toggle:
    handler: src/functions/unleash-toggle/handler.main
    timeout: 900
    events:
      - http:
          path: /toggle
          method: POST
  trial:
    handler: src/functions/city/handler.main
    timeout: 900
    events:
      - http:
          path: /trial
          method: POST

resources:
  Resources:
    taskTokenTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:service}-${self:custom.stage}-tokenTable
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
After doing some research I found out this can be done with the serverless-plugin-ifelse plugin, by defining your conditions under the custom block. You can see the same in the Serverless Docs, and the plugin is available on npm.
custom:
  serverlessIfElse:
    - If: '"${self:custom.stage}" == "prod"'
      Exclude:
        - functions.<functionName>
Complete serverless.yml file
service:
  name: demo-app

# Add the serverless-webpack plugin
plugins:
  - serverless-webpack
  - serverless-plugin-ifelse
  - serverless-offline

provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  region: us-west-2
  profile: serverless-admin

custom:
  region: ${self:provider.region}
  stage: ${opt:stage, self:provider.stage}
  prefix: ${self:service}-${self:custom.stage}
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
  serverlessIfElse:
    - If: '"${self:custom.stage}" == "prod"'
      Exclude:
        - functions.trial

functions:
  toggle:
    handler: src/functions/unleash-toggle/handler.main
    timeout: 900
    events:
      - http:
          path: /toggle
          method: POST
  trial:
    handler: src/functions/city/handler.main
    timeout: 900
    events:
      - http:
          path: /trial
          method: POST
The same thing can be achieved with another plugin, serverless-plugin-conditional-functions.
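With that plugin, each function gets an enabled flag whose string expression is evaluated at deploy time. A sketch based on the plugin's documented usage, reusing the stage variable from the config above (verify the exact expression syntax against the plugin's README):

```yaml
plugins:
  - serverless-plugin-conditional-functions

functions:
  trial:
    handler: src/functions/city/handler.main
    # deploy this function only when the stage is not prod
    enabled: '"${self:custom.stage}" != "prod"'
```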
I am playing with serverless and trying to figure out how to rewrite this serverless.yml file so I don't duplicate the environment variables for each function. Is there a way to set environment variables globally?
service: test-api
frameworkVersion: ">=1.2.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  memorySize: 2048
  region: us-east-2
  logRetentionInDays: 21

functions:
  doCreate:
    handler: functions/do-create.handler
    environment:
      DB_PORT: ${ssm:/${self:custom.stage}/db_port}
      DB_URL: ${ssm:/${self:custom.stage}/db_url}
      API_KEY: ${ssm:/${self:custom.stage}/api_key}
      ENV: "${self:custom.stage}"
      SEARCH_ARN: ${ssm:/${self:custom.stage}/search_arn}
  doUpdate:
    handler: functions/do-update.handler
    environment:
      DB_PORT: ${ssm:/${self:custom.stage}/db_port}
      DB_URL: ${ssm:/${self:custom.stage}/db_url}
      API_KEY: ${ssm:/${self:custom.stage}/api_key}
      ENV: "${self:custom.stage}"
      SEARCH_ARN: ${ssm:/${self:custom.stage}/search_arn}
Simply move them into the provider section; they will then apply to every function in the same service.
service: test-api
frameworkVersion: ">=1.2.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  memorySize: 2048
  region: us-east-2
  logRetentionInDays: 21
  environment:
    DB_PORT: ${ssm:/${self:custom.stage}/db_port}
    DB_URL: ${ssm:/${self:custom.stage}/db_url}
    API_KEY: ${ssm:/${self:custom.stage}/api_key}
    ENV: "${self:custom.stage}"
    SEARCH_ARN: ${ssm:/${self:custom.stage}/search_arn}

functions:
  doCreate:
    handler: functions/do-create.handler
  doUpdate:
    handler: functions/do-update.handler
Use the Globals section in the SAM template
Globals:
  Function:
    Runtime: nodejs12.x
    Timeout: 180
    Handler: index.handler
    Environment:
      Variables:
        TABLE_NAME: data-table
For more details, see https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-specification-template-anatomy-globals.html
I'm trying to deploy a Lambda@Edge function with Serverless, but it's not working; I can't see any logs in CloudWatch.
service: image-compress
frameworkVersion: '2'

plugins:
  - serverless-bundle

provider:
  name: aws
  runtime: nodejs12.x
  memorySize: 128
  lambdaHashingVersion: '20201221'
  stage: ${opt:stage, 'staging'}
  environment:
    ENV: ${self:provider.stage}
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource:
        - "arn:aws:s3:::*"

functions:
  image-reduce:
    handler: handler.reducer
    events:
      - cloudFront:
          eventType: origin-response
          origin: s3://cropped-images-test2.s3.amazonaws.com/
          isDefaultOrigin: true

custom:
  bundle:
    linting: false
    packagerOptions:
      scripts:
        - npm install --arch=x64 --platform=linux sharp
This is the serverless.yml. I can see this configuration in CloudFront, but I can't seem to get any logs when requesting images. Any help?
I am currently using the serverless-iam-roles-per-function plugin to give each of my Lambda functions its own IAM role. But when I deploy, it still seems to create a default lambdaRole that covers all the functions. I did not define iamRoleStatements or vpc in the provider section of serverless.yml. Am I missing something? I would like to have only per-function roles. Any feedback would be appreciated.
Snippet of yml:
provider:
  name: aws
  runtime: go1.x
  stage: ${env:SLS_STAGE}
  region: ${env:SLS_REGION}
  environment:
    DB_HOSTS: ${env:SLS_DB_HOSTS}
    DB_NAME: ${env:SLS_DB_NAME}
    DB_USERNAME: ${env:SLS_DB_USERNAME}
    DB_PASSWORD: ${env:SLS_DB_PASSWORD}
    TYPE: ${env:SLS_ENV_TYPE}

functions:
  function1:
    package:
      exclude:
        - ./**
      include:
        - ./bin/function_1
    handler: bin/function_1
    vpc: ${self:custom.vpc}
    iamRoleStatements: ${self:custom.iamRoleStatements}
    events:
      - http:
          path: products
          method: get
          private: true
          cors: true
          authorizer: ${self:custom.authorizer.function_1}

custom:
  vpc:
    securityGroupIds:
      - sg-00000
    subnetIds:
      - subnet-00001
      - subnet-00002
      - subnet-00003
  iamRoleStatements:
    - Effect: Allow
      Action:
        - lambda:InvokeFunction
        - ssm:GetParameter
        - ssm:GetParametersByPath
        - ssm:PutParameter
      Resource: "*"
I'm attempting to deploy a Lambda function with an API Gateway Lambda integration. My API specification is written in OpenAPI 3 in an external .yml file.
I would like to pass the ARN of the Lambda into the API specification.
My serverless.yml:
service: my-test-service

provider:
  name: aws
  runtime: java8

functions:
  mylambda-test:
    handler: com.sample.MyHandler
    name: mylambda-test
    description: test lambda with api gateway
    package:
      artifact: myexample-1.0-SNAPSHOT-jar-with-dependencies.jar
      individually: true

resources:
  Resources:
    ApiGatewayRestApi:
      Type: AWS::ApiGateway::RestApi
      Properties:
        Name: test-api
        Body: ${file(api.yml)}
in the api.yml:
openapi: "3.0.1"
info:
  title: "test-api"
  version: "0.0.1-oas3"
paths:
  /test:
    get:
      ...
      x-amazon-apigateway-integration:
        uri: {arn of mylambda-test}
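One approach worth trying (a sketch, not from the question): because the Body is embedded into the generated CloudFormation template, CloudFormation intrinsic functions can be used inside api.yml. The logical ID below is an assumption based on how Serverless normalizes function names (mylambda-test typically becomes MylambdaDashtestLambdaFunction); confirm it against the compiled template in the .serverless/ directory.

```yaml
# api.yml (fragment) -- hypothetical integration block
paths:
  /test:
    get:
      x-amazon-apigateway-integration:
        type: aws_proxy
        httpMethod: POST
        uri:
          Fn::Sub: >-
            arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${MylambdaDashtestLambdaFunction.Arn}/invocations
```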
How about using the endly e2e runner to deploy and set up your API Gateway?
The deployment workflow may look like the following:
pipeline:
  setupFunction1:
    action: aws/lambda:deploy
    credentials: $awsCredentials
    functionname: $functionName1
    runtime: go1.x
    handler: loginfo
    code:
      zipfile: $LoadBinary(${codeZip})
    rolename: lambda-loginfo-executor
    define:
      - policyname: s3-${functionName}-role
        policydocument: $Cat('${privilegePolicy}')
    attach:
      - policyarn: arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  setupFunction2:
    action: aws/lambda:deploy
    credentials: $awsCredentials
    functionname: $functionName2
    runtime: go1.x
    handler: loginfo
    code:
      zipfile: $LoadBinary(${codeZip})
    rolename: lambda-loginfo-executor
    define:
      - policyname: s3-${functionName}-role
        policydocument: $Cat('${privilegePolicy}')
    attach:
      - policyarn: arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  setupAPI:
    action: aws/apigateway:setupRestAPI
    credentials: aws
    '#name': loginfoAPI
    resources:
      - path: /path1
        methods:
          - httpMethod: ANY
            functionname: $functionName1
      - path: /path2
        methods:
          - httpMethod: ANY
            functionname: $functionName2
    sleepTimeMs: 15000
    post:
      endpointURL: ${setupAPI.EndpointURL}
Here is an example deployment workflow.
You can also check out actual examples implementing e2e testing with Lambda, including API Gateway.