Serverless Lambda global environment variables - amazon-web-services

I am experimenting with Serverless, and I am trying to figure out how to rewrite this serverless.yml so I don't duplicate the environment variables for each function. Is there a way to set environment variables globally?
service: test-api
frameworkVersion: ">=1.2.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  memorySize: 2048
  region: us-east-2
  logRetentionInDays: 21

functions:
  doCreate:
    handler: functions/do-create.handler
    environment:
      DB_PORT: ${ssm:/${self:custom.stage}/db_port}
      DB_URL: ${ssm:/${self:custom.stage}/db_url}
      API_KEY: ${ssm:/${self:custom.stage}/api_key}
      ENV: "${self:custom.stage}"
      SEARCH_ARN: ${ssm:/${self:custom.stage}/search_arn}
  doUpdate:
    handler: functions/do-update.handler
    environment:
      DB_PORT: ${ssm:/${self:custom.stage}/db_port}
      DB_URL: ${ssm:/${self:custom.stage}/db_url}
      API_KEY: ${ssm:/${self:custom.stage}/api_key}
      ENV: "${self:custom.stage}"
      SEARCH_ARN: ${ssm:/${self:custom.stage}/search_arn}

Simply move them into the provider section; they are then applied to every function in the service. A function-level environment block still works, and its keys override the provider-level values of the same name.
service: test-api
frameworkVersion: ">=1.2.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  memorySize: 2048
  region: us-east-2
  logRetentionInDays: 21
  environment:
    DB_PORT: ${ssm:/${self:custom.stage}/db_port}
    DB_URL: ${ssm:/${self:custom.stage}/db_url}
    API_KEY: ${ssm:/${self:custom.stage}/api_key}
    ENV: "${self:custom.stage}"
    SEARCH_ARN: ${ssm:/${self:custom.stage}/search_arn}

functions:
  doCreate:
    handler: functions/do-create.handler
  doUpdate:
    handler: functions/do-update.handler

If you are using AWS SAM rather than the Serverless Framework, use the Globals section of the SAM template:
Globals:
  Function:
    Runtime: nodejs12.x
    Timeout: 180
    Handler: index.handler
    Environment:
      Variables:
        TABLE_NAME: data-table
For more details, see https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-specification-template-anatomy-globals.html
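Globals supply defaults that individual functions can still override. A hypothetical function overriding the global Timeout and a global environment variable (the function name, path, and values below are illustrative, not from the answer):

```yaml
Resources:
  SlowWorker:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: slow-worker/        # illustrative path
      Timeout: 600                 # overrides the global Timeout of 180
      Environment:
        Variables:
          TABLE_NAME: other-table  # overrides the global value
```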

Related

How can I assign an environment variable that references a conditional resource in serverless.yml?

I want to put a resource's attributes into an environment variable.
The problem is that the resource is not always created; it is only created in production.
Because of this, when I try to deploy to another stage, such as dev, I get the error: Template format error: Unresolved resource dependencies [MyQueue] in the Resources block of the template.
Is there any way to set an environment variable conditionally per stage?
service: condition-test

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
  environment:
    TEST_QUEUE_URL:
      Ref: MyQueue

resources:
  Conditions:
    MyCondition: !Equals ['prod', '${self:provider.stage}'] # false for any stage other than prod
  Resources:
    MyQueue:
      Type: AWS::SQS::Queue
      Condition: MyCondition
      Properties:
        QueueName: 'my-test-queue-${self:provider.stage}'
You can set the environment variables per stage, like below:
service: condition-test

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
  environment: ${self:custom.env_vars.${self:provider.stage}}

custom:
  env_vars:
    dev:
      KEY_1: Val
      KEY_2: Val2
      KEY_3: Val3
    prod:
      TEST_QUEUE_URL:
        Ref: MyQueue
      KEY_2: Val_5
      KEY_3: Val_6

resources:
  Conditions:
    MyCondition: !Equals ['prod', '${self:provider.stage}'] # false for any stage other than prod
  Resources:
    MyQueue:
      Type: AWS::SQS::Queue
      Condition: MyCondition
      Properties:
        QueueName: 'my-test-queue-${self:provider.stage}'
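If the current stage has no entry in the env_vars map, variable resolution fails at deploy time. The Serverless Framework's fallback syntax can supply a default map instead (a sketch; the "default" key is an assumption you would add to env_vars yourself):

```yaml
provider:
  environment: ${self:custom.env_vars.${self:provider.stage}, self:custom.env_vars.default}

custom:
  env_vars:
    default:
      KEY_1: Val
    prod:
      TEST_QUEUE_URL:
        Ref: MyQueue
```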

Assign the iamRoleStatements field from a file (Serverless Framework)

I'm currently learning the Serverless Framework with AWS. However, in serverless.yml I am getting an error with some imported files. Is there any guidance on how to import from a different file?
The file referenced in AuctionsTable: ${file(resources/AuctionsTable.yml):AuctionsTable} is located at the same level as serverless.yml.
serverless.yml
service:
  name: auction-service

plugins:
  - serverless-bundle
  - serverless-pseudo-parameters

provider:
  name: aws
  runtime: nodejs12.x
  memorySize: 256
  stage: ${opt:stage, 'dev'}
  region: eu-west-1
  iamRoleStatements:
    - ${file(iam/AuctionsTableIAM.yml):AuctionsTableIAM}

resources:
  Resources:
    AuctionsTable: ${file(resources/AuctionsTable.yml):AuctionsTable}

functions:
  createAuction:
    handler: src/handlers/createAuction.handler
    events:
      - http:
          method: POST
          path: /auction

custom:
  bundle:
    linting: false
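For ${file(path):Key} to resolve, each external file must expose the referenced key at its top level. Hypothetical contents for the two files (the DynamoDB schema and IAM actions below are assumptions for illustration, not from the question):

```yaml
# resources/AuctionsTable.yml -- top-level key must match ":AuctionsTable"
AuctionsTable:
  Type: AWS::DynamoDB::Table
  Properties:
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
    KeySchema:
      - AttributeName: id
        KeyType: HASH

# iam/AuctionsTableIAM.yml -- top-level key must match ":AuctionsTableIAM"
AuctionsTableIAM:
  Effect: Allow
  Action:
    - dynamodb:PutItem
    - dynamodb:GetItem
  Resource:
    Fn::GetAtt: [AuctionsTable, Arn]
```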

How can I deploy a specific AWS Lambda function to a specific stage?

I have two AWS Lambda functions and 3 stacks: dev, test, and prod.
I want the trial Lambda function to be deployed only to the dev and test stages, but not to prod.
How can I achieve that? Here is my serverless.yml:
service:
  name: demo-app

# Add the serverless-webpack plugin
plugins:
  - serverless-webpack
  - serverless-offline

provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  region: us-west-2
  profile: serverless-admin

custom:
  region: ${self:provider.region}
  stage: ${opt:stage, self:provider.stage}
  prefix: ${self:service}-${self:custom.stage}
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true

functions:
  toggle:
    handler: src/functions/unleash-toggle/handler.main
    timeout: 900
    events:
      - http:
          path: /toggle
          method: POST
  trial:
    handler: src/functions/city/handler.main
    timeout: 900
    events:
      - http:
          path: /trial
          method: POST

resources:
  Resources:
    taskTokenTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:service}-${self:custom.stage}-tokenTable
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
After doing some research I found that this can be done with the serverless-plugin-ifelse plugin, by defining your conditions under the custom block.
You can see the same in the Serverless docs; the plugin is available on npm.
custom:
  serverlessIfElse:
    - If: '"${self:custom.stage}" == "prod"'
      Exclude:
        - functions.<functionName>
Complete serverless.yml file
service:
  name: demo-app

# Add the serverless-webpack plugin
plugins:
  - serverless-webpack
  - serverless-plugin-ifelse
  - serverless-offline

provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  region: us-west-2
  profile: serverless-admin

custom:
  region: ${self:provider.region}
  stage: ${opt:stage, self:provider.stage}
  prefix: ${self:service}-${self:custom.stage}
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
  serverlessIfElse:
    - If: '"${self:custom.stage}" == "prod"'
      Exclude:
        - functions.trial

functions:
  toggle:
    handler: src/functions/unleash-toggle/handler.main
    timeout: 900
    events:
      - http:
          path: /toggle
          method: POST
  trial:
    handler: src/functions/city/handler.main
    timeout: 900
    events:
      - http:
          path: /trial
          method: POST
The same thing can be achieved with another plugin, serverless-plugin-conditional-functions.

Old task definitions become inactive when creating new definitions with Serverless

I am using Serverless to create ECS task definitions and use them with Step Functions.
When I update a task definition in Serverless and deploy it, the previous task definition revisions become inactive and can no longer be used.
How can I keep previous task definitions from becoming inactive?
Below is my serverless.yml:
service: my-service
useDotenv: true

provider:
  name: aws
  runtime: python3.8
  stage: ${opt:stage, 'alpha'}
  region: ${opt:region, 'ap-south-1'}
  profile: ${opt:profile, 'default'}
  memorySize: 1280
  timeout: 600

plugins:
  - serverless-step-functions

package:
  individually: true
  patterns:
    - '!./**'

stepFunctions:
  stateMachines:
    ...
  validate: true # enable pre-deployment definition validation (disabled by default)

resources:
  Resources:
    fargateProjectCluster:
      Type: AWS::ECS::Cluster
      Properties:
        ClusterName: ${self:service}-${self:provider.stage}
    fargateTaskDefinition:
      Type: AWS::ECS::TaskDefinition
      Properties:
        ContainerDefinitions:
          - Name: new_definition
            Image: ...
            Privileged: false
            ReadonlyRootFilesystem: false
            Essential: true
            LogConfiguration:
              ...
        Cpu: ${env:CPU_UNITS}
        ExecutionRoleArn: !GetAtt ecsTaskExecutionRole.Arn
        Family: "${self:service}-${self:provider.stage}"
        Memory: ${env:MEMORY_UNITS}
        NetworkMode: "awsvpc"
        RuntimePlatform:
          OperatingSystemFamily: LINUX
          CpuArchitecture: X86_64
        RequiresCompatibilities:
          - FARGATE
        TaskRoleArn: !GetAtt fargateTaskRole.Arn
    fargateCloudWatchLogsGroup: ...

Error: Unable to upload artifact PitchAiIngest referenced by CodeUri parameter of PitchAiIngest resource

I'm pretty new to AWS Lambda functions, and this is my first time getting my hands dirty. I got the error in the title when I tried to docker build my function. Here is how I configured it:
PitchAiIngest:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: !Sub pitch-ai-ingest-${Environment}
    Handler: lambda_function.lambda_handler
    Runtime: python3.7
    CodeUri: pitchai_ingest/
    Description: get pitchai information from API and publish to dynamodb
    MemorySize: 128
    Timeout: 900
    Role: !GetAtt LambdaRole.Arn
    Environment:
      Variables:
        LOGGING_LEVEL: INFO
        APP_NAME: pitch-ai-ingest
        APP_ENV: !Ref Environment
        DYNAMO_DB: !Ref PitchAiEventDynamoDBTable
        PLAYER_DB: !Ref PitchAiPlayerDynamoDBTable
        PITCH_SQS: !Ref PitchAiIngestQueue
    Tags:
      env: !Ref Environment
      service: pitch-ai-service
      function_name: !Sub pitch-ai-ingest-${Environment}
Roughly speaking, I put the snippet above in the file cfn-tempate.yml, which sits in the same directory as the pitchai_ingest folder (the folder containing the Lambda handler).
What should I do to fix it?
It turned out I had mistakenly set AWS_ACCESS_KEY_ID as AWS_ACCESS_KEY; that's why the credentials weren't found.
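A quick local sanity check can catch this class of mistake: the AWS SDK and CLI read AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, so a variable misspelled as AWS_ACCESS_KEY is silently ignored and credentials appear to be missing. This sketch (illustrative, not from the answer) reports which of the two required names is absent:

```javascript
// The credential variable names the AWS SDK/CLI actually look for.
const REQUIRED_CREDENTIAL_VARS = ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'];

// Return the required credential variables that are unset or empty
// in the given environment object.
function missingCredentialVars(env) {
  return REQUIRED_CREDENTIAL_VARS.filter((name) => !env[name]);
}

const missing = missingCredentialVars(process.env);
if (missing.length > 0) {
  console.log(`Missing credential variables: ${missing.join(', ')}`);
}
```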