I have a locally installed Python package that I use in my Lambda function. How should I include this local package when deploying to AWS? I am using the Serverless Framework.
Here is my serverless.yml:
service: lambda-daily-emails
frameworkVersion: '3'

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: 'non-linux'

package:
  individually: true
  patterns:
    - '!.git'
    - '!README.md'
    - '!*.html'
    - '!rough.py'
    - '!sample-questions.json'

layers:
  layerOne:
    path: /absolute/location/of/package/repo
    name: saral-utils-${opt:stage}
    description: Saral utility package
    include:
      - ./**

provider:
  name: aws
  runtime: python3.8
  stage: ${opt:stage}
  region: ${env:MY_REGION}
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - ses:SendEmail
            - ses:SendRawEmail
            - dynamodb:Get*
            - dynamodb:Query
            - dynamodb:Scan
          Resource: "*"

functions:
  emailer:
    handler: handler.emailer
    environment:
      MY_ENV: ${env:MY_ENV}
      MY_REGION: ${env:MY_REGION}

resources:
  Resources:
    invokeLambda:
      Type: AWS::Lambda::Permission
      Properties:
        Action: lambda:invokeFunction
        FunctionName:
          "Fn::GetAtt": [EmailerLambdaFunction, Arn]
        Principal: events.amazonaws.com
I have tried providing the absolute path of the package repo in serverless.yml, but it throws an error while deploying.
Error:
No file matches include / exclude patterns
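For reference, the layer path is resolved relative to the service directory (it is zipped from there), and a Python package in a layer is only importable if it sits under a python/ folder inside that directory. A minimal sketch of that layout, assuming the package is vendored into a hypothetical layers/saral-utils/ folder inside the service (the folder name and the pip command are illustrative, not from the original config):

# Vendor the local package into the service tree first, e.g.:
#   pip install /absolute/location/of/package/repo -t layers/saral-utils/python
layers:
  layerOne:
    path: layers/saral-utils  # relative to the service root, not an absolute path
    name: saral-utils-${opt:stage}
    description: Saral utility package

functions:
  emailer:
    handler: handler.emailer
    layers:
      # CloudFormation Ref the framework generates for a layer named "layerOne"
      - { Ref: LayerOneLambdaLayer }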
I am trying to deploy a serverless template from a Bitbucket pipeline. The deploy step runs:
sls deploy --stage ${environmentName} --cloudformationStackDynamoDB $dynamodbEventStoreStackName --componentName ${componentName} --partName ${partName} --eventBusName $eventBusName --region ${awsRegion} || exit
But when the pipeline reaches the step that deploys the serverless.yml file, it fails with: Error: Cannot resolve serverless.yml: "service" property is not accessible (configured behind variables which cannot be resolved at this stage)
Below is the content of serverless.yml:
service:
  name: ${opt:stage}-${opt:componentName}-${opt:partName}-stream

plugins:
  - "@hewmen/serverless-plugin-typescript"
  - serverless-plugin-resource-tagging
  #- serverless-plugin-optimize
  #- serverless-offline
  # - serverless-plugin-warmup

# custom:
#   # Enable warmup on all functions (only for production and staging)
#   warmup:
#     enabled: true

provider:
  name: aws
  runtime: nodejs16.x
  stackName: ${self:service.name}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - logs:Create*
        - logs:Get*
      Resource: "*"
    - Effect: Allow
      Action:
        - dynamodb:*
      Resource: "*"
    - Effect: Allow
      Action:
        - events:PutEvents
      Resource: "*" # TODO - apply pattern */eventbus-name
  environment:
    ENVIRONMENT: ${opt:stage}
    COMPONENT_NAME: ${opt:componentName}
    PART_NAME: ${opt:partName}
    EVENTBUS_NAME: ${opt:eventBusName} # TODO - supply this as part of the CICD build
  stackTags:
    COMPONENT_NAME: ${opt:componentName}
    STAGE: ${opt:stage}

functions:
  default:
    handler: src/lambda.streamHandler
    name: ${self:provider.stackName}
    events:
      - stream: ${cf:${opt:cloudformationStackDynamoDB}.DynamoDBTableEventsStreamArn}
    timeout: 30
I don't understand why the error says the "service" property is not accessible, since it is defined in the template file itself.
Any help would be appreciated.
Thanks!
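For context on the error itself: newer versions of the Serverless Framework need to read the service name before CLI options are parsed, which is why a service name built from ${opt:...} variables "cannot be resolved at this stage". A hedged sketch of one workaround, assuming Serverless v3, where custom values are passed with --param instead of arbitrary CLI flags (the literal service name below is illustrative):

service: dynamodb-stream  # must resolve to a plain string up front

provider:
  name: aws
  runtime: nodejs16.x
  # the dynamic, per-deployment name can still live on the stack:
  stackName: ${sls:stage}-${param:componentName}-${param:partName}-stream

invoked along the lines of: sls deploy --stage test --param="componentName=orders" --param="partName=stream".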
I have two serverless functions that run in Lambda@Edge, attached to two CloudFront events: viewer-request and origin-response. I want to use the Node.js sharp library to scale images on the fly, based on parameters received in the query string.
The problem is that when I install that dependency, the deployment package grows beyond 9 MB, while according to the Lambda@Edge documentation, viewer-trigger functions must be <= 1 MB.
Is there anything I can do to fix this?
This is my serverless.yml file.
service: scale-otf

provider:
  name: aws
  timeout: 5
  memorySize: 128
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
  iam:
    role: LambdaEdgeRole

plugins:
  - serverless-lambda-edge-pre-existing-cloudfront

functions:
  request:
    handler: dist/request.handler
    events:
      - preExistingCloudFront:
          distributionId: <id-of-my-existing-distribution>
          eventType: viewer-request
          pathPattern: '*'
          includeBody: false
  response:
    handler: dist/response.handler
    events:
      - preExistingCloudFront:
          distributionId: <id-of-my-existing-distribution>
          eventType: origin-response
          pathPattern: '*'
          includeBody: false

package:
  patterns:
    - '!**/**'
    - 'dist/**'

# Create a Lambda@Edge function via the wizard in the Lambda console
# and then copy its role and paste it here
resources:
  Resources:
    LambdaEdgeRole:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - edgelambda.amazonaws.com
                  - lambda.amazonaws.com
              Action:
                - sts:AssumeRole
        Policies:
          - PolicyName: LambdaEdgeExecutionRole
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: Allow
                  Action:
                    - logs:CreateLogGroup
                    - logs:CreateLogStream
                    - logs:PutLogEvents
                  Resource: "arn:aws:logs:*:*:*"
I am currently using the serverless-iam-roles-per-function plugin to give each of my Lambda functions its own IAM role. But when I deploy, the framework still seems to create a default lambdaRole that is shared by all the functions. I did not define iamRoleStatements or vpc in the provider section of serverless.yml. Am I missing something? I would like to have only the per-function roles. Any feedback would be appreciated.
Snippet of the serverless.yml:
provider:
  name: aws
  runtime: go1.x
  stage: ${env:SLS_STAGE}
  region: ${env:SLS_REGION}
  environment:
    DB_HOSTS: ${env:SLS_DB_HOSTS}
    DB_NAME: ${env:SLS_DB_NAME}
    DB_USERNAME: ${env:SLS_DB_USERNAME}
    DB_PASSWORD: ${env:SLS_DB_PASSWORD}
    TYPE: ${env:SLS_ENV_TYPE}

functions:
  function1:
    package:
      exclude:
        - ./**
      include:
        - ./bin/function_1
    handler: bin/function_1
    vpc: ${self:custom.vpc}
    iamRoleStatements: ${self:custom.iamRoleStatements}
    events:
      - http:
          path: products
          method: get
          private: true
          cors: true
          authorizer: ${self:custom.authorizer.function_1}

custom:
  vpc:
    securityGroupIds:
      - sg-00000
    subnetIds:
      - subnet-00001
      - subnet-00002
      - subnet-00003
  iamRoleStatements:
    - Effect: Allow
      Action:
        - lambda:InvokeFunction
        - ssm:GetParameter
        - ssm:GetParametersByPath
        - ssm:PutParameter
      Resource: "*"
I am starting with AWS and want to do it right from the beginning. What would be the current state-of-the-art approach to a complete CI pipeline?
Our idea is to keep everything in a local Git repository at the company and trigger deployments into the different AWS stages from there. And by everything we mean everything, so that we can automate it all and live completely without the AWS web interface.
I went through some tutorials and they all seem to do it differently; tools like Apex, Amplify, CloudFormation, SAM, etc. came up, and some of them seem to be very old or deprecated. So we are trying to get a clear picture of the current technologies and which ones should no longer be used.
Which editors would be good? Are there any that support deployment via plugins, directly from the IDE?
Also, if there is a sample project out there that does all (or most) of this, it would be a real help!
My personal "state of the art" is as follows:
For each (micro)service I create a separate Git repository in AWS CodeCommit.
Each repository has its own CI/CD pipeline in AWS CodePipeline, with the following stages:
Source (AWS CodeCommit)
Build (AWS CodeBuild)
Deploy Staging[*] (AWS CloudFormation)
Approval (Manual Approval)
Deploy Production[*] (AWS CloudFormation)
The whole infrastructure and pipeline is written in CloudFormation (in AWS SAM syntax), and I highly recommend doing the same. This also helps with your requirement to live "[...] completely without the AWS web interface."
[*]: Both deploy stages use the same(!) CloudFormation template; I just pass a different Environment parameter into the infrastructure template so it can vary its behavior per environment.
Most of my services are written in TypeScript; to develop them I use WebStorm with a plugin that helps with writing AWS CloudFormation templates.
Example of pipeline.yml:
AWSTemplateFormatVersion: 2010-09-09

Parameters:
  RepositoryName:
    Type: String
  ArtifactStoreBucket:
    Type: String

Resources:
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt CodePipelineRole.Arn
      ArtifactStore:
        Location: !Ref ArtifactStoreBucket
        Type: S3
      Stages:
        - Name: Source
          Actions:
            - Name: CodeCommit
              ActionTypeId:
                Category: Source
                Owner: AWS
                Version: 1
                Provider: CodeCommit
              Configuration:
                RepositoryName: !Ref RepositoryName
                BranchName: master
              InputArtifacts: []
              OutputArtifacts:
                - Name: SourceOutput
              RunOrder: 1
        - Name: Build
          Actions:
            - Name: Build
              ActionTypeId:
                Category: Build
                Owner: AWS
                Version: 1
                Provider: CodeBuild
              Configuration:
                ProjectName: !Ref Build
              InputArtifacts:
                - Name: SourceOutput
              OutputArtifacts:
                - Name: BuildOutput
              RunOrder: 1
        - Name: Staging
          Actions:
            - Name: Deploy
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: 1
                Provider: CloudFormation
              Configuration:
                ActionMode: CREATE_UPDATE
                Capabilities: CAPABILITY_IAM,CAPABILITY_AUTO_EXPAND
                StackName: !Sub ${AWS::StackName}-Infrastructure-Staging
                RoleArn: !GetAtt CloudFormationRole.Arn
                TemplatePath: BuildOutput::infrastructure-packaged.yml
                ParameterOverrides: !Sub |
                  {
                    "Environment": "Staging"
                  }
              InputArtifacts:
                - Name: BuildOutput
              OutputArtifacts: []
              RunOrder: 1
        - Name: Approval
          Actions:
            - Name: Approval
              ActionTypeId:
                Category: Approval
                Owner: AWS
                Version: 1
                Provider: Manual
              InputArtifacts: []
              OutputArtifacts: []
              RunOrder: 1
        - Name: Production
          Actions:
            - Name: Deploy
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: 1
                Provider: CloudFormation
              Configuration:
                ActionMode: CREATE_UPDATE
                Capabilities: CAPABILITY_IAM,CAPABILITY_AUTO_EXPAND
                StackName: !Sub ${AWS::StackName}-Infrastructure-Production
                RoleArn: !GetAtt CloudFormationRole.Arn
                TemplatePath: BuildOutput::infrastructure-packaged.yml
                ParameterOverrides: !Sub |
                  {
                    "Environment": "Production"
                  }
              InputArtifacts:
                - Name: BuildOutput
              OutputArtifacts: []
              RunOrder: 1

  Build:
    Type: AWS::CodeBuild::Project
    Properties:
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        ComputeType: BUILD_GENERAL1_MEDIUM
        Image: aws/codebuild/nodejs:10.14.1
        Type: LINUX_CONTAINER
      Name: !Sub ${AWS::StackName}-Build
      ServiceRole: !Ref CodeBuildRole
      Source:
        Type: CODEPIPELINE
        BuildSpec: !Sub |
          version: 0.2
          phases:
            build:
              commands:
                - npm install
                - npm run lint
                - npm run test:unit
                - npm run build
                - aws cloudformation package --s3-bucket ${ArtifactStoreBucket} --template-file ./infrastructure.yml --output-template-file infrastructure-packaged.yml
          artifacts:
            files:
              - infrastructure-packaged.yml

  BuildLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      RetentionInDays: 14
      LogGroupName: !Sub /aws/codebuild/${Build}

  CodePipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: codepipeline.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: CodePipelineRolePolicy
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - iam:PassRole
                Resource:
                  - !GetAtt CloudFormationRole.Arn
              - Effect: Allow
                Action:
                  - s3:*
                Resource:
                  - !Sub arn:aws:s3:::${ArtifactStoreBucket}/*
              - Effect: Allow
                Action:
                  - codecommit:*
                Resource:
                  - !Sub arn:aws:codecommit:${AWS::Region}:${AWS::AccountId}:${RepositoryName}
              - Effect: Allow
                Action:
                  - codebuild:*
                Resource:
                  - !Sub arn:aws:codebuild:${AWS::Region}:${AWS::AccountId}:project/${AWS::StackName}-Build
              - Effect: Allow
                Action:
                  - cloudformation:*
                Resource:
                  - !Sub arn:aws:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/${AWS::StackName}-Infrastructure-Staging/*
                  - !Sub arn:aws:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/${AWS::StackName}-Infrastructure-Production/*

  CodeBuildRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: CodeBuildRolePolicy
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - s3:*
                Resource:
                  - !Sub arn:aws:s3:::${ArtifactStoreBucket}/*
              - Effect: Allow
                Action:
                  - codecommit:*
                Resource:
                  - !Sub arn:aws:codecommit:${AWS::Region}:${AWS::AccountId}:${RepositoryName}
              - Effect: Allow
                Action:
                  - logs:*
                Resource:
                  - !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/${AWS::StackName}-Build*

  CloudFormationRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: cloudformation.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AdministratorAccess
Example of infrastructure.yml:
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31

Parameters:
  Environment:
    Type: String
    AllowedValues:
      - Staging
      - Production

Resources:
  Api:
    Type: AWS::Serverless::Function
    Properties:
      Handler: api.handler
      CodeUri: .build
      Runtime: nodejs10.x
      MemorySize: 128
      Timeout: 10
      Events:
        ProxyApi:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: ANY
      Environment:
        Variables:
          ENVIRONMENT: !Ref Environment
      DeploymentPreference:
        Enabled: false

  ApiLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      RetentionInDays: 14
      LogGroupName: !Sub /aws/lambda/${Api}

Outputs:
  ApiEndpoint:
    Value: !Sub https://${ServerlessRestApi}.execute-api.${AWS::Region}.${AWS::URLSuffix}/${ServerlessRestApiProdStage}
I need to provide my application with the name of the S3 bucket that Serverless creates for me. Here is a simplified version of my serverless.yml file:
service: dummy-service
app: dummy-service

custom:
  bucket: "I have no idea what to write here!"

provider:
  name: aws
  runtime: nodejs10.x
  region: eu-central-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:GetObject
      Resource:
        - "Fn::Join":
            - ""
            - - "arn:aws:s3:::"
              - Ref: DummyBucket
              - "*"
  environment:
    BUCKET: ${self:custom.bucket}

resources:
  Resources:
    DummyBucket:
      Type: AWS::S3::Bucket

functions:
  createOrUpdate:
    handler: handler.dummy
    events:
      - http:
          path: dummy
          method: POST
I have figured out how to make a reference in the iamRoleStatements section, but I can't work out how to get the bucket name as a string for the environment variable.
Any help is welcome. Thanks.
You can use Ref to get the bucket name; Serverless passes it through to CloudFormation, which resolves it at deploy time:
service: dummy-service
app: dummy-service

custom:
  bucket:
    Ref: DummyBucket

provider:
  name: aws
  runtime: nodejs10.x
  region: eu-central-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:GetObject
      Resource:
        - "Fn::Join":
            - ""
            - - "arn:aws:s3:::"
              - Ref: DummyBucket
              - "*"
  environment:
    BUCKET: ${self:custom.bucket}

resources:
  Resources:
    DummyBucket:
      Type: AWS::S3::Bucket

functions:
  createOrUpdate:
    handler: handler.dummy
    events:
      - http:
          path: dummy
          method: POST
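As a side note on the design choice: the Ref only resolves at deploy time, which is fine for a Lambda environment variable. If other tooling also needs the bucket name up front, an alternative is to name the bucket yourself; a sketch, assuming a hypothetical stage-suffixed name (remember bucket names must be globally unique):

custom:
  bucket: dummy-service-uploads-${opt:stage, 'dev'}  # hypothetical explicit name

provider:
  environment:
    BUCKET: ${self:custom.bucket}

resources:
  Resources:
    DummyBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.bucket}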