Serverless WSGI not working in pipelines (works locally) - django

I have the following steps configured in our Bitbucket pipeline:
# Configure Serverless
- cp requirements.txt src/requirements.txt
- serverless config credentials --provider aws --key ${AWS_KEY} --secret ${AWS_SECRET}
- cd src
- sls plugin install -n serverless-python-requirements
- sls plugin install -n serverless-wsgi
- sls plugin install -n serverless-dotenv-plugin
# Perform the Deployment
- sls deploy --stage ${LOCAL_SERVERLESS_STAGE}
- sls deploy list functions
# Prep the environment
- sls wsgi manage --command "migrate"
- sls wsgi manage --command "collectstatic --noinput"
When the pipeline runs, the sls deploy works perfectly fine and the deployed function operates exactly as expected. The sls wsgi manage commands throw the following error, however:
Serverless Error ----------------------------------------
Function "undefined" doesn't exist in this Service
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: linux
Node Version: 11.13.0
Framework Version: 2.35.0
Plugin Version: 4.5.3
SDK Version: 4.2.2
Components Version: 3.8.2
I can run the sls wsgi manage commands locally (with the same AWS keys) without issue. The problem only seems to happen in the pipeline.
Thoughts?
Additional Information
serverless.yml
service: service-api
plugins:
  - serverless-python-requirements
  - serverless-wsgi
  - serverless-dotenv-plugin
custom:
  wsgi:
    app: service.wsgi.application
    packRequirements: false
  pythonRequirements:
    dockerFile: ./serverless-dockerfile
    dockerizePip: non-linux
    pythonBin: python3
    useDownloadCache: false
    useStaticCache: false
provider:
  name: aws
  runtime: python3.6
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - s3:GetObject
        - s3:PutObject
      Resource: "arn:aws:s3:::*"
  # vpc:
  #   securityGroupIds:
  #     -
  #   subnetIds:
  #     -
  #     -
  #     -
  #     -
functions:
  app:
    handler: wsgi.handler
    events:
      - http: ANY /
      - http: "ANY {proxy+}"
    timeout: 600
Output of Serverless deploy list functions
+ sls deploy list functions
(node:1127) ExperimentalWarning: queueMicrotask() is experimental.
(node:1127) ExperimentalWarning: The fs.promises API is experimental
(node:1127) ExperimentalWarning: The dns.promises API is experimental
Serverless: Deprecation warning: Detected ".env" files. In the next major release variables from ".env" files will be automatically loaded into the serverless build process. Set "useDotenv: true" to adopt that behavior now.
More Info: https://www.serverless.com/framework/docs/deprecations/#LOAD_VARIABLES_FROM_ENV_FILES
Serverless: DOTENV: Loading environment variables from .env:
...STUFF HERE...
Serverless: Deprecation warning: CLI options definitions were upgraded with "type" property (which could be one of "string", "boolean", "multiple"). Below listed plugins do not predefine type for introduced options:
- ServerlessWSGI for "port", "host", "disable-threading", "num-processes", "ssl", "command", "file"
Please report this issue in plugin issue tracker.
Starting with next major release, this will be communicated with a thrown error.
More Info: https://www.serverless.com/framework/docs/deprecations/#CLI_OPTIONS_SCHEMA
Serverless: Configuration warning at 'functions.app.events[1].http': value 'ANY {proxy+}' does not satisfy pattern /^(?:\*|(GET|POST|PUT|PATCH|OPTIONS|HEAD|DELETE|ANY) (\/\S*))$/i
Serverless:
Serverless: Learn more about configuration validation here: http://slss.io/configuration-validation
Serverless:
Serverless: Deprecation warning: Starting with next major, Serverless will throw on configuration errors by default. Adapt to this behavior now by adding "configValidationMode: error" to service configuration
More Info: https://www.serverless.com/framework/docs/deprecations/#CONFIG_VALIDATION_MODE_DEFAULT
Serverless: Deprecation warning: Starting with version 3.0.0, following property will be replaced:
"provider.iamRoleStatements" -> "provider.iam.role.statements"
More Info: https://www.serverless.com/framework/docs/deprecations/#PROVIDER_IAM_SETTINGS
Serverless: Deprecation warning: Resolution of lambda version hashes was improved with better algorithm, which will be used in next major release.
Switch to it now by setting "provider.lambdaHashingVersion" to "20201221"
More Info: https://www.serverless.com/framework/docs/deprecations/#LAMBDA_HASHING_VERSION_V2
Serverless: Listing functions and their last 5 versions:
Serverless: -------------
Serverless: app: $LATEST, 1, 2, 3

The answer was pretty mundane: Serverless 2.32+ introduced a change that broke the pipeline. Reverting to version 2.31.0 resolved the issue.
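If you hit the same problem, pinning the framework version in the pipeline is an easy guard against an unintended upgrade. A minimal sketch of a Bitbucket Pipelines step, assuming npm is available in the build image (the 2.31.0 pin is simply the version that worked here):

  # Pin the Serverless Framework before configuring credentials
  - npm install -g serverless@2.31.0
  - serverless --version
  - serverless config credentials --provider aws --key ${AWS_KEY} --secret ${AWS_SECRET}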

Related

Terraform fails in GitLab due to cache

I am trying to deploy my AWS infrastructure using Terraform from GitLab CI CD Pipeline.
I am using the GitLab managed image and its default Terraform template.
I have configured S3 backend and it's pointing to the S3 bucket used to store the tf state file.
I had stored CI/CD variables in GitLab for 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY' and 'S3_BUCKET'.
Everything was working fine until I changed 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY' and 'S3_BUCKET' to point to a different AWS account.
Now I am getting the following error:
$ terraform init
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Error: Error loading state:
AccessDenied: Access Denied
status code: 403, request id: XXXXXXXXXXXX,
host id: XXXXXXXXXXXXXXXXXXXXX
Terraform failed to load the default state from the "s3" backend.
State migration cannot occur unless the state can be loaded. Backend
modification and state migration has been aborted. The state in both the
source and the destination remain unmodified. Please resolve the
above error and try again.
Cleaning up file based variables 00:00
ERROR: Job failed: exit code 1
Since this issue happened because I changed the access_key and secret_key (it was working fine from my local VS Code), I commented out the 'cache:' block in the .gitlab-ci.yml file and it worked!
The following is my .gitlab-ci.yml file:
.gitlab-ci.yml
stages:
  - validate
  - plan
  - apply
  - destroy
image:
  name: registry.gitlab.com/gitlab-org/gitlab-build-images:terraform
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
# Default output file for Terraform plan
variables:
  PLAN: plan.tfplan
  JSON_PLAN_FILE: tfplan.json
  STATE: dbrest.tfstate
cache:
  paths:
    - .terraform
before_script:
  - alias convert_report="jq -r '([.resource_changes[]?.change.actions?]|flatten)|{\"create \":(map(select(.==\"create\"))|length),\"update\":(map(select(.==\"update\"))|length),\"delete\":(map(select(.==\"delete\"))|length)}'"
  - terraform --version
  - terraform init
validate:
  stage: validate
  script:
    - terraform validate
  only:
    - tags
plan:
  stage: plan
  script:
    - terraform plan -out=plan_file
    - terraform show --json plan_file > plan.json
  artifacts:
    paths:
      - plan.json
    expire_in: 2 weeks
    when: on_success
    reports:
      terraform: plan.json
  only:
    - tags
  allow_failure: true
apply:
  stage: apply
  extends: plan
  environment:
    name: production
  script:
    - terraform apply --auto-approve
  dependencies:
    - plan
  only:
    - tags
  when: manual
terraform destroy:
  extends: apply
  stage: destroy
  script:
    - terraform destroy --auto-approve
  needs: ["plan", "apply"]
  when: manual
  only:
    - tags
The issue clearly happens if I don't comment out the block below. However, it used to work before I made changes to the AWS access_key and secret_key.
#cache:
#  paths:
#    - .terraform
When the cache block was not commented out, the pipeline failed with the error shown above.
Is the cache being stored anywhere? And how do I clear it?
I think it's related to GitLab.
It seems that the runner cache can be cleared from the GitLab UI itself.
Go to GitLab -> CI/CD -> Pipelines and hit the 'Clear Runner Cache' button to clear the cache.
It actually works!
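If you do not want to rely on clearing the cache by hand, scoping the cache key can also keep a stale .terraform directory from being reused after the backend credentials change. A minimal sketch, assuming the predefined CI_COMMIT_REF_SLUG variable is an acceptable cache key for your workflow:

  cache:
    key: "$CI_COMMIT_REF_SLUG"   # one cache per branch/tag instead of a single shared cache
    paths:
      - .terraform

Running terraform init -reconfigure in before_script is another option when the backend configuration is changed deliberately.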

Errors when trying to run an AWSPowerShellModuleScript@1 task in an Azure DevOps pipeline

I currently have an Azure DevOps pipeline to build and deploy a Next.js application via the Serverless Framework.
Upon reaching the AWSPowerShellModuleScript@1 task I get these errors:
[warning]MSG:UnableToDownload «https:...» «»
[warning]Unable to download the list of available providers. Check
your internet connection.
[warning]Unable to download from URI 'https:...' to ''.
[error]No match was found for the specified search criteria for the
provider 'NuGet'. The package provider requires 'PackageManagement'
and 'Provider' tags. Please check if the specified package has the
tags.
[error]No match was found for the specified search criteria and
module name 'AWSPowerShell'. Try Get-PSRepository to see all
available registered module repositories.
[error]The specified module 'AWSPowerShell' was not loaded because no
valid module file was found in any module directory.
I do have the AWS Toolkit installed, and it's visible when I go to manage extensions within Azure DevOps.
My pipeline:
trigger: none
stages:
  - stage: develop_build_deploy_stage
    pool:
      name: Default
      demands:
        - msbuild
        - visualstudio
    jobs:
      - job: develop_build_deploy_job
        steps:
          - checkout: self
            clean: true
          - task: NodeTool@0
            displayName: Install Node
            inputs:
              versionSpec: '12.x'
          - script: |
              npm install
              npx next build
            displayName: Install Dependencies and Build
          - task: CopyFiles@2
            inputs:
              Contents: 'build/**'
              TargetFolder: '$(Build.ArtifactStagingDirectory)'
          - task: PublishBuildArtifacts@1
            displayName: Publish Artifact
            inputs:
              pathtoPublish: $(Build.ArtifactStagingDirectory)
              artifactName: dev_artifacts
          - task: AWSPowerShellModuleScript@1
            displayName: Deploy to Lambda@Edge
            inputs:
              awsCredentials: '###'
              regionName: '###'
              scriptType: 'inline'
              inlineScript: 'npx serverless --package dev_artifacts'
I know I can use the ubuntu vmImage and then make use of the AWSShellScript task, but the build agent available to me doesn't support Bash.
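The errors suggest the agent cannot bootstrap the NuGet package provider or download AWSPowerShell from the PowerShell Gallery. One possible workaround (a sketch, not a verified fix) is to pre-install the module in a plain PowerShell step before the AWS task runs, assuming the agent has outbound access to the gallery or to an internal mirror registered with Register-PSRepository:

  # Hypothetical pre-step before AWSPowerShellModuleScript@1
  - task: PowerShell@2
    displayName: Bootstrap AWSPowerShell module
    inputs:
      targetType: 'inline'
      script: |
        Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force -Scope CurrentUser
        Install-Module -Name AWSPowerShell -Force -Scope CurrentUser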

Unzipped size must be smaller than 262144000 bytes - AWS Lambda Error

I have tried to upload my application as a serverless AWS Lambda function, but I got this issue:
An error occurred: AppLambdaFunction - Unzipped size must be smaller than 262144000 bytes (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: 8ea0d887-5743-4db1-96cd-6c5efa57b081).
What is the best way to resolve it?
Here are my dependencies:
"dependencies": {
"ethereumjs-tx": "^1.3.7",
"aws-sdk": "^2.4.52",
"body-parser": "^1.18.3",
"compression": "^1.7.4",
"consign": "^0.1.6",
"cors": "^2.8.5",
"express": "^4.16.4",
"helmet": "^3.16.0",
"moment": "^2.24.0",
"openzeppelin-solidity": "^2.3.0",
"serverless": "^1.48.2",
"serverless-http": "^1.9.1",
"serverless-offline": "^4.9.4",
"truffle": "^5.1.9",
"truffle-hdwallet-provider": "^1.0.17",
"web3": "^1.2.5-rc.0"
},
Serverless.yml:
provider:
  name: aws
  runtime: nodejs8.10
  stage: v1
  region: us-east-1
  timeout: 30
  memorySize: 512
package:
  excludeDevDependencies: true
  exclude:
    - .git/**
    - .vscode/**
    - venv/**
functions:
  app:
    handler: handler.run
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
plugins:
  - serverless-offline
Use the exclude directive in your serverless.yml file. In the case of Python, I've used it as follows:
package:
  exclude:
    - node_modules/**
    - venv/**
The build process will exclude them from the build before sending it to AWS.
A tip I got in this issue on GitHub. The documentation for this directive is detailed here.
You can use module bundlers to package the code.
Using module bundlers such as webpack
You can consider using plugins like serverless-webpack. The serverless-webpack plugin uses webpack to build the project and will only include the bare minimum files required to run your application; it will not include the entire node_modules directory, so your deployment package will be smaller. A minimal configuration sketch is shown below.
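For illustration only, a serverless-webpack setup might look like the following (the webpackConfig path and the forceExclude entry are assumptions about a typical project, not part of the original answer):

  plugins:
    - serverless-webpack

  custom:
    webpack:
      webpackConfig: webpack.config.js   # path to your own webpack config
      packager: npm
      includeModules:
        forceExclude:
          - aws-sdk                      # already provided by the Lambda Node.js runtime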
A note about using Lambda layers
As others mentioned, you can use layers and move some of the libraries and code into a layer. Layers are mainly used to share code between functions. The unzipped deployed package, including layers, cannot exceed 250 MB. A declaration sketch follows.
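A minimal sketch of declaring a layer and attaching it to a function in serverless.yml (the layer name and path are placeholders):

  layers:
    commonLibs:
      path: layers/common        # directory holding the layer contents, e.g. nodejs/node_modules

  functions:
    app:
      handler: handler.run
      layers:
        - { Ref: CommonLibsLambdaLayer }   # CloudFormation reference generated from the layer name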
Hope this helps.
References:
https://github.com/serverless-heaven/serverless-webpack
https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html#configuration-layers-path
I've had success resolving this error message using the serverless-esbuild plugin, and configuring it as follows in serverless.yml:
service: service_name
frameworkVersion: '3'
provider:
  name: aws
  runtime: nodejs12.x
plugins:
  - serverless-esbuild
custom:
  esbuild:
    bundle: true
    minify: false
    sourcemap: true
    exclude: 'aws-sdk'
    target: node14
    define:
      'require.resolve': undefined
    platform: node
    concurrency: 10
You can load larger packages into AWS Lambda indirectly using S3:
Load your package into a bucket/key on S3
In the Lambda console choose Function Code -> Code Entry Type -> Upload a file from S3
See my answer here
You can deploy a Lambda function using a Docker image, which bypasses this problem and allows a function with its dependencies to be as large as 10 GB.
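For reference, a minimal sketch of a container-image function in serverless.yml (the image name and Dockerfile path are placeholders, not from the original answer):

  provider:
    name: aws
    ecr:
      images:
        appimage:
          path: ./                 # directory containing the Dockerfile

  functions:
    app:
      image:
        name: appimage             # refers to provider.ecr.images.appimage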
Adding exclude under package is deprecated. We can use patterns to remove node_modules.
Example of removing files in serverless.yml:
...remaining props
package:
  patterns:
    - '!.git/**'
    - '!test/**'
    - '!e2e/**'
    - '!src/**'
    - '!node_modules/**'
Deprecation for exclude and Pattern
Recently I faced the same issue: my total package size was more than 40 MB, and it also included the venv (Python virtual environment) folder that resides in the project directory. I excluded it, the build size was reduced to 16 MB, and the project was deployed successfully. I added the following in serverless.yaml:
package:
  patterns:
    - '!node_modules/**'
    - '!venv/**'
    - '!apienv/**'
    - '!__pycache__/**'

Serverless Offline undefined module when loaded from lambda layer

I have a project tree where the nodejs folder is a Lambda layer, defined in the following serverless.yaml:
service: aws-nodejs # NOTE: update this with your service name
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
plugins:
  - serverless-offline
layers:
  layer1:
    path: nodejs # required, path to layer contents on disk
    name: ${self:provider.stage}-layerName # optional, Deployed Lambda layer name
functions:
  hello:
    handler: handler.hello
    layers:
      - {Ref: Layer1LambdaLayer}
    events:
      - http:
          path: /dev
          method: get
The layer1 layer only contains the uuid package.
When I try to run the Lambda locally using the serverless-offline plugin, it says it can't find the module uuid.
But when I deploy the code to AWS, it runs like a charm.
Is there any way to get Lambda layers running locally, for testing purposes and for speeding up development?
Or is there a way to dynamically set the node_modules path to point to the layer folder during development, and switch it back to the proper one when I push to production?
OK, after many trials I figured out a working solution.
I added an npm run command which exports a temporary node_modules path to the list of paths:
"scripts": {
"offline": "export NODE_PATH=\"${PWD}/nodejs/node_modules\" && serverless offline"
},
This way, Node can look up the node modules inside the subfolders.
I got around this by running serverless-offline in a container and copying my layers into the /opt/ directory with gulp. I set a gulp watch to monitor any layer changes and to copy them to the /opt/ directory.
I use layers in serverless-offline by installing the layer from the local file system as a dev dependency:
npm i <local_path_to_my_layer_package> --save-dev
BTW, this issue was fixed in sls 1.49.0.
Just run:
sudo npm i serverless
Then you should specify a package include in the layers section of serverless.yml:
service: aws-nodejs # NOTE: update this with your service name
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
plugins:
  - serverless-offline
layers:
  layer1:
    path: nodejs # required, path to layer contents on disk
    package:
      include:
        - node_modules/**
    name: ${self:provider.stage}-layerName # optional, Deployed Lambda layer name
functions:
  hello:
    handler: handler.hello
    layers:
      - {Ref: Layer1LambdaLayer}
    events:
      - http:
          path: /dev
          method: get
Tested on nodejs10.x runtime

How to set up Sentry for AWS Lambda functions? Many errors

Trying to set up Sentry for a service with Lambda functions on AWS.
I have followed the instructions for serverless-sentry-lib, and it works in the local environment but not in prod.
1 - I have installed raven and serverless-sentry-lib:
npm install --save raven
npm install --save serverless-sentry-lib
Basically this is what I have in my serverless.yml:
provider:
  environment:
    # SLS_DEBUG: "*"
    SENTRY_ENVIRONMENT: "${opt:stage, self:provider.stage}"
    SENTRY_DSN: "https://xxxxxx@sentry.io/xxxxxx"
plugins:
  - serverless-delete-loggroups
  - serverless-plugin-typescript
  - serverless-plugin-existing-s3
  # - serverless-sentry
  - serverless-sentry-lib
  # - serverless-plugin-optimize
And this is how I send the error:
myFunction.ts
import * as Raven from 'raven';
Raven.config('https://xxxxxxxx@sentry.io/xxxxxxx').install();
That code works fine locally, but when I try to deploy with Serverless to AWS Lambda I get the following error:
Serverless: s3 --> initiate requests ...
Error --------------------------------------------------
... Unable to validate the following destination configurations
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Checking this post and setting SLS_DEBUG to "*":
SLS_DEBUG: "*"
SENTRY_ENVIRONMENT: "${opt:stage, self:provider.stage}"
SENTRY_DSN: "https://xxxxxx@sentry.io/xxxxxx"
And I still have the same error. It did not disappear.
Does anyone have any clue what is going on with these settings?