Serverless - Local path and Lambda Path - amazon-web-services

I'm wondering how I can force my local serverless deployment to mimic what happens when I deploy it to AWS.
Here is my serverless yaml file:
service: payment # NOTE: update this with your service name
# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
# frameworkVersion: "=X.X.X"
environment:
  SLS_DEBUG: "*"
provider:
  name: aws
  runtime: nodejs8.10
  stage: production
  region: ca-central-1
  timeout: 60
  role: ${file(../config/prod.env.json):ROLE}
  vpc:
    securityGroupIds:
      - ${file(../config/prod.env.json):SECURITY_GROUP}
    subnetIds:
      - ${file(../config/prod.env.json):SUBNET}
  apiGateway:
    apiKeySourceType: HEADER
    apiKeys:
      - ${file(../config/prod.env.json):APIKEY}
package:
  include:
    - ../lib/**
functions:
  - '${file(src/handlers/payment.serverless.yml)}'
plugins:
  - serverless-offline
My file structure looks like this:
root
--- node_modules
--- lib
    - models
--- payment
    - serverless.yml
When I deploy it to AWS, the lib folder gets put into the Lambda function's folder, but locally I need to define its path, which usually is ../.../../
How can I make it so that locally or deployed I don't have to change the paths?

There is a Docker container that is extremely close to the AWS Lambda environment. You can deploy your serverless service onto the container and iterate by trial and error until it does what you want.
You can also move the shared code into a Lambda layer, which the Serverless Framework supports.
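For the layer route, a minimal sketch in serverless.yml (the layer key, layer name, and function below are assumptions, not from the question); the Serverless Framework references the layer through a generated CloudFormation ref, and the layer contents land under /opt in the Lambda runtime:

```yaml
# serverless.yml -- package the shared lib folder as a Lambda layer (hypothetical names)
layers:
  sharedLib:
    path: lib                 # directory that gets zipped into the layer
    name: payment-shared-lib
functions:
  pay:
    handler: handler.pay
    layers:
      # Serverless generates the logical ID as <TitleCasedLayerKey>LambdaLayer
      - { Ref: SharedLibLambdaLayer }
```

This keeps the code in one deployable artifact, so the relative paths no longer differ between local and deployed runs.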

Related

How to fix Serverless error "Invalid API Key identifier specified" when using 2 stages with api keys on AWS?

I am using the following configuration to deploy a couple of Lambda functions to different stages, prod and dev, on AWS. Both stages should be protected with an API key, which is stored in SSM.
serverless.yml
service: my-service
frameworkVersion: "3"
provider:
  name: aws
  runtime: nodejs16.x
  region: eu-central-1
  apiGateway:
    apiKeys:
      - name: my-apikey
        value: ${ssm:my-apikey}
functions:
  v1_myfunc:
    handler: src/api/myfunc/get_func.get
    events:
      - http:
          path: /v1/myfunc
          method: get
          private: true
plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-dotenv-plugin
My deployment scripts look like this:
package.json
"scripts": {
  "deploy:dev": "serverless deploy --stage dev",
  "deploy:prod": "serverless deploy --stage prod"
}
The problem:
When I deploy one of the stages then everything works fine. But if I deploy the other one afterwards, I always get the following error (in this case I deployed prod first, and then dev):
Deploying my-service to stage dev (eu-central-1)
✖ Stack my-service-dev failed to deploy (46s)
Environment: darwin, node 16.15.0, framework 3.23.0, plugin 6.2.2, SDK 4.3.2
Credentials: Local, "default" profile
Error:
Invalid API Key identifier specified
error Command failed with exit code 1.
Looking into AWS console, I noticed that the generated api key has the same id for both stacks (dev and prod). So, I'm guessing this is where the problem is: Both stacks sharing the same api key instance.
So, I tried to fix this by setting different api key names for each stage:
- name: my-apikey-${self:provider.stage}
  value: ${ssm:my-apikey}
But this doesn't solve the problem, as I'm still getting this error:
Invalid API Key identifier specified
Question: How do I have to change my serverless.yml config to fix the issue?
The api key values for both stages need to be different:
service: my-service
frameworkVersion: "3"
provider:
  name: aws
  runtime: nodejs16.x
  region: eu-central-1
  apiGateway:
    apiKeys: ${self:custom.apiTest.${sls:stage}}
functions:
  v1_myfunc:
    handler: src/api/myfunc/get_func.get
    events:
      - http:
          path: /v1/myfunc
          method: get
          private: true
plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-dotenv-plugin
custom:
  apiTest:
    dev:
      - name: api-key-dev
        value: 123 # Needs to be different vs prod!
    prod:
      - name: api-key-prod
        value: 456 # Needs to be different vs dev!

Using Serverless Framework to create AWS Lambda Function. Event not working

I'm trying out the Serverless Framework to deploy an AWS Lambda function that processes Kafka messages and writes the result to a database.
The trigger is Kafka messages from a SelfManagedKafka cluster, and I specify them in serverless.yml:
frameworkVersion: '1'
provider:
  name: aws
  runtime: go1.x
  region: eu-central-1
package:
  exclude:
    - ./**
  include:
    - ./bin/**
functions:
  hello:
    handler: bin/handlerFunc
    vpc:
      securityGroupIds:
        - <Id>
      subnetIds:
        - <subnet1>
        - <subnet2>
        - <subnet3>
    events:
      - kafka:
          topic: my_topic
          consumerGroupId: my_group
          bootstrapServers:
            - host:port
          accessConfigurations:
            saslScram256Auth: <URI to secretsManager>
            serverRootCaCertificate: <URI to secretsManager>
            vpcSubnet:
              - subnet1
              - subnet1
              - subnet1
            vpcSecurityGroup: <Id>
          enabled: true
When I deploy this with serverless deploy, the AWS Console for Lambda shows no trigger configured, and there is no error and no logs.
serverless deploy shows a warning:
Configuration warning at 'functions.hello.events[0]': unsupported function event
I'm following the documentation here https://www.serverless.com/framework/docs/providers/aws/events/kafka
and don't understand what I'm missing.
Any suggestions on where/what to look for?
Turns out the kafka event type is not supported in Serverless Framework version 1.
I had to update to version 3 and update serverless.yml to refer to frameworkVersion: '3'.
Then I was able to deploy the Lambda with Apache Kafka as the trigger.
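As a minimal sketch, the correction comes down to pinning the framework version in serverless.yml (note the key is frameworkVersion with a lowercase f); the rest of the configuration stays as above:

```yaml
# serverless.yml -- the kafka event type requires Serverless Framework v3
frameworkVersion: '3'
```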

Serverless framework deployment error: You're not authorized to access this resource

When I deploy my serverless framework project using AWS as provider I get:
You're not authorized to access this resource. - Please contact support and provide this identifier to reference this issue BLAHBLAH
I am logged into Serverless framework with serverless login
My serverless.yaml:
org: vladimirorg
app: vladimirapp
service: backend-rest
provider:
  name: aws
  runtime: nodejs12.x
  apiGateway: {
    shouldStartNameWithService: true
  }
  environment:
    DYNAMODB_TABLE: ${self:service}-${opt:stage, self:provider.stage}
    DYNAMODB_LOCAL_PORT: 9000
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:#{AWS::Region}:#{AWS::AccountId}:table/${self:provider.environment.DYNAMODB_TABLE}"
functions:
  create:
    handler: src/handlers/create.create
    events:
      - http:
          path: todos
          method: post
          cors: true
          request:
            schema:
              application/json: ${file(src/schemas/create.json)}
...
...
I have found the root cause: if you wish to deploy a Serverless Framework application, you must use the exact same organization (org) and application name (app) that you registered with the Serverless Framework.
To find out your current app/org name, change them, or create a new app/org, log into the Serverless Framework dashboard at https://app.serverless.com/ using the same credentials you use for deploying, and make sure you are using the exact org and app in your serverless.yaml file:
org: orgname <---
app: appname <---
service: backend-rest
...
So you can't just use any arbitrary org/app name, you must use exact org/app registered with Serverless framework.
I had to remove org: <org> in order for it to ask me again the next time sls is run.
Try using serverless logout, or delete the ~/.serverlessrc file, then run serverless login again and retry your command.
You need to specify an AWS profile in your serverless.yml and set your AWS account credentials in ~/.aws/credentials, like below:
[your_preferred_profile_name]
aws_access_key_id=AKIAZEIJOWEFJOIWEF
aws_secret_access_key=siAOEIF4+TdifOHeofoe+iJR8yFokT7uBmV4DEZ
And specify this profile in your serverless.yml file like this:
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
  profile: your_preferred_profile_name
The error you are getting says the Serverless Framework couldn't access your AWS resources.
It means you didn't set up this AWS account's credentials in your local environment and in the Serverless Framework.

How can I nest a serverless Step Function / State Machine / Lambda build into an existing AWS CloudFormation ElasticBeanstalk Application?

I have written a service using AWS Step Functions. I would like to integrate this into our application's existing Elastic Beanstalk development process, wherein we have distinct dev, staging, and production applications. Each of these stages has app-specific environment variables, which I would like to pull into my Lambda functions as well.
I am not presently using SAM but I can port over if necessary to accomplish this.
The following is a simplified configuration mirroring my serverless.yml file.
service:
  name: small-service
plugins:
  - serverless-webpack
  - serverless-step-functions
  - serverless-pseudo-parameters
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-2
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: { "Fn::Join": ["", ["arn:aws:s3:::S3-bucket-name", "/*" ] ] }
functions:
  connect:
    handler: handler.connect
stepFunctions:
  stateMachines:
    smallService:
      name: small-service-${self:provider.stage}
      definition:
        Comment: Service that connects to things
        StartAt: Connect
        States:
          Connect:
            Type: Task
            Resource: arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:${self:service}-${self:provider.stage}-connect
            End: true
How can I dynamically deploy the step functions into different beanstalk applications? How can I access ElasticBeanstalk environment properties from within Step Functions?
Is there a better way to import environment .env variables dynamically into a Serverless application outside of EB? We are integrating the service into a larger AWS application's development workflow; is there a more "serverless" way of doing this?
Move your environment variables into SSM Parameter Store. Then you can either:
- reference SSM parameters in your serverless.yaml, or
- fetch SSM parameters at the beginning of each Lambda invocation (see e.g. here)
Note that the former method requires re-deploying your Lambda to receive the latest SSM parameters, whereas the latter always fetches the latest parameter values.
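A sketch of the first option, assuming hypothetical parameter names under a /small-service/<stage>/ path (the paths and variable names are assumptions, not from the question):

```yaml
# serverless.yml -- resolve SSM parameters at deploy time
provider:
  name: aws
  environment:
    # Re-deploy to pick up new parameter values
    DB_HOST: ${ssm:/small-service/${self:provider.stage}/db-host}
    DB_USER: ${ssm:/small-service/${self:provider.stage}/db-user}
```

Because ${self:provider.stage} is part of the parameter path, each deployed stage resolves its own set of values.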

How to automatically deploy api gateway in AWS codepipeline

Currently I am able to deploy a Lambda by pushing to GitHub. I also automatically deploy an API Gateway, but only because the API Gateway is an event in the Lambda's YAML file:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Identifies paragraphs in documents and links to the law
Resources:
  LambdaParagraphLinker:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: LambdaParagraphLinker.lambda_handler
      Runtime: python3.6
      CodeUri: ./
      Description: Identifies paragraphs in documents and links to the law
      MemorySize: 512
      Timeout: 10
      Events:
        Api:
          Type: Api
          Properties:
            Path: /LambdaParagraphLinker
            Method: ANY
How can I deploy an api gateway using a swagger file ?
Hands down the best way to do this in codepipeline is by using https://serverless.com/ framework. This replaces every super complicated hack-job and workaround I've previously used. Way less complicated IMO.
Create a codepipeline, link it to src & a codebuild project, set a few permissions, done.
//serverless.yml
service: my-api
provider:
  name: aws
  runtime: python2.7
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: api/v1/message
          method: post
//buildspec.yml
version: 0.2
phases:
  install:
    commands:
      #BUILD
      - sudo apt-get update -y
  build:
    commands:
      - echo $environment
      - serverless package --stage $environment --region us-east-1
      - serverless deploy --stage $environment --region us-east-1
Or torture yourself by doing one of the options below...
You can do this in CloudFormation from within CodePipeline. Export the Swagger spec from the API Gateway console and place it in the CloudFormation template.
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  PlayersAPI:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: MyApi
      Description: API Description
      Body:
        "SWAGGER HERE"
Hooking this up to lambda is a little bit cumbersome but I can describe the steps. First create a codepipeline project with source, build, and deploy steps.
src should be standard from github or codecommit
build should output a zip file and use buildspec.yml. Something like this...
//buildspec.yml
version: 0.1
phases:
  install:
    commands:
      #BUILD
      - zip -r lambda.zip . -x *.git*
artifacts:
  files:
    - '**/*.zip'
    - '**/*.yml'
  discard-paths: no
Have the build step export an artifact MyAppBuild (or whatever you want to call it)
The final pipeline step is creating the Lambda function in this repo as a standalone function through the console (it's reusable):
https://github.com/tkntobfrk/codepipeline-lambda-s3
This lambda function downloads the pipeline artifact/zipped lambda function and updates it using boto.
After these steps you could add another step as a cloudformation deploy step. Connect it to the lambda function that you've just deployed.
If you are dealing with multiple environments, you could create Lambda functions and an API Gateway CloudFormation template for each environment, then run them in sequence:
stage 1: src
stage 2: build
stage 3: deploy lambda test, deploy gateway api cloudformation test
stage 4: validate test
stage 5: deploy lambda prod, deploy gateway api cloudformation prod
Using straight AWS serverless like this works too. However, you need to use a standard artifact location for the URIs. The DefinitionUri: for the API can be the Swagger exported from the API Gateway console.
//cloudformation.yml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  MySimpleFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python2.7
      CodeUri: s3://somebucket/somezip.zip
  MyAPI:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      DefinitionUri: s3://somebucket/somezip.zip
AWS::Serverless::Api
https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessapi
You can find Swagger docs all over the place, and docs on the API Gateway extensions are in the developer guide. I would start by going into the API Gateway console and looking at the API that Lambda creates for you. On the 'Stages' page, for any stage, you can export the API as Swagger.
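The same export is also available from the AWS CLI (the REST API ID and output file name below are placeholders):

```shell
# Export the 'prod' stage of a REST API as a Swagger (OpenAPI 2.0) definition
aws apigateway get-export \
  --rest-api-id abc123def4 \
  --stage-name prod \
  --export-type swagger \
  swagger.json
```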