Serverless framework - New variables resolver

When I run sls offline, I get a deprecation warning:
Serverless: Deprecation warning: Variables resolver reports following resolution errors:
- Variable syntax error at "functions.Test.environment.TEST_URL": Invalid variable type at index 20 in "${file(./env.yml):${'${self:provider.stage}.TEST_URL'}}"
From a next major this will be communicated with a thrown error.
Set "variablesResolutionMode: 20210326" in your service config, to adapt to new behavior now
The documentation is not clear about this.
env.yml
dev:
  TEST_URL: https://example.com/
serverless.yml
frameworkVersion: '2'
...
functions:
  Test:
    handler: handler.test
    environment:
      TEST_URL: ${file(./env.yml):${'${self:provider.stage}.TEST_URL'}} # <-------
It works correctly with frameworkVersion (>=1.1.0 <2.0.0).
What is the new approach to get data from another file?

This is the new approach to get data from another file:
environment:
  TEST_URL: ${file(./env.yml):${self:provider.stage}.TEST_URL}
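For completeness, here is a minimal sketch of the whole setup (hedged: the provider block is illustrative, and the variablesResolutionMode line is what the deprecation warning itself asks you to add to opt in to the new resolver now):
# serverless.yml -- minimal sketch
frameworkVersion: '2'
variablesResolutionMode: 20210326 # adopt the new resolver behavior now

provider:
  name: aws
  stage: dev

functions:
  Test:
    handler: handler.test
    environment:
      # resolves <stage>.TEST_URL from env.yml, e.g. dev.TEST_URL
      TEST_URL: ${file(./env.yml):${self:provider.stage}.TEST_URL}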

Related

How to access custom environment variables in an AWS Batch job

I'm setting an environment variable in my AWS Batch job like so:
BatchJobDef:
  Type: 'AWS::Batch::JobDefinition'
  Properties:
    Type: container
    JobDefinitionName: xxxxxxxxxx
    ContainerProperties:
      Environment:
        - Name: 'PROC_ENV'
          Value: 'dev'
When I look at my job definition, I can see it listed in the Environment variables configuration.
Then I'm trying to access it in my job's python code like this:
env = os.environ['PROC_ENV']
but there is no PROC_ENV variable set, and I get the following error when I run my job:
raise KeyError(key) from None
KeyError: 'PROC_ENV'
Can anyone tell me what I'm missing here? Am I accessing this environment variable the correct way?
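Not a full answer, but a minimal diagnostic sketch (only PROC_ENV and the 'dev' value come from the question; everything else is illustrative): print the environment the container actually receives, and read the variable with a fallback so the job doesn't die on KeyError while you investigate:
import os

# Dump everything the job actually received, to confirm whether
# PROC_ENV made it into the container at all.
for key in sorted(os.environ):
    print(key, "=", os.environ[key])

# os.environ.get returns a default instead of raising KeyError.
proc_env = os.environ.get("PROC_ENV", "dev")
print("PROC_ENV resolved to:", proc_env)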

Why does `serverless` always try to resolve the default value?

I am using the Serverless framework to deploy to AWS. Below is the version:
$ sls --version
Framework Core: 2.64.1 (standalone)
Plugin: 5.5.0
SDK: 4.3.0
Components: 3.17.1
The problem I have is how to make sls skip the default value for a variable. Below is my configuration. It has a variable dbStreamArn which is taken from the command-line parameter dbStreamArn if one is given; if not, it is read from a CloudFormation output.
However, when I run sls deploy --dbStreamArn xxxx, I get the error Trying to request a non exported variable from CloudFormation.. It seems that sls tries to resolve the default value even when the command-line parameter is present. Is there a way to make sls skip parsing the default value?
custom:
  dbStreamArn: ${opt:dbStreamArn, "${cf:dbs-stack.DynamoDBTableEventsSignalStreamArn}"}
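No accepted answer is reproduced here, but one hedged thing to try: opt in to the new variables resolver, which resolves fallback sources lazily, so the cf: lookup should only run when --dbStreamArn is absent. A sketch, assuming that behavior:
variablesResolutionMode: 20210326

custom:
  # with the new resolver the cf: fallback should only be queried
  # when --dbStreamArn is not passed on the command line
  dbStreamArn: ${opt:dbStreamArn, cf:dbs-stack.DynamoDBTableEventsSignalStreamArn}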

How to set up yaml file with gitlab to deploy when a specific file changes?

I'm trying to set up a YAML file for GitLab that will deploy to my QA server only when a specific folder has a change in it.
This is what I have, but it doesn't work; the syntax doesn't register any errors.
deploy to qa:
  script: **aws scripts**
  only:
    refs:
      - master
    changes:
      - directory/*
  stage: deploy
  environment:
    name: qa
    url: **aws bucket url**
The problem seems to be with this section; the rest works without it. The documentation talks about using rules as a replacement when only and changes are used together, but I couldn't get that to work either.
only:
  refs:
    - master
  changes:
    - directory/*
The issue you're running into is the refs section of your "only" rule. Per GitLab's documentation on "changes": "If you use refs other than branches, external_pull_requests, or merge_requests, changes can’t determine if a given file is new or old and always returns true." Since you're using master as your ref, you are running into this issue.
As you've ascertained, the correct answer to this is to use a rules keyword instead. The equivalent rules setup should be as follows:
deploy to qa:
  script: **aws scripts**
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      changes:
        - directory/*
      when: on_success
    - when: never
  stage: deploy
  environment:
    name: qa
    url: **aws bucket url**
Essentially, the rule is saying "If the commit you're building from exists on your default branch (master in your case), and you have changes in directory/*, then run this job when previous jobs have succeeded. ELSE, never run this job"
Note: Technically the when: never is implied if no clauses match, but I prefer including it because it explicitly states your expectation for the next person who has to read your CI/CD file.
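One more note on the pattern itself: in GitLab's glob syntax, directory/* matches only files directly inside directory. If the job should also trigger for changes in subfolders, use a recursive glob:
changes:
  - directory/**/* # recursive; directory/* matches only the top level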

Serverless invoke returns "Unable to marshal response: OSError(30, 'Read-only file system')" for my Python lambda

When running my Python-based AWS Lambda, I get a read-only file system error.
But I'm not doing any logging; it looks like serverless is.
{
  "errorMessage": "Unable to marshal response: OSError(30, 'Read-only file system') is not JSON serializable",
  "errorType": "Runtime.MarshalError"
}
Error --------------------------------------------------
Error: Invoked function failed
at AwsInvoke.log (/usr/local/Cellar/serverless/1.50.0/libexec/lib/node_modules/serverless/lib/plugins/aws/invoke/index.js:101:31)
Here is my serverless.yml
provider:
  name: aws
  runtime: python3.7
functions:
  main:
    handler: main.handler
    package:
      include:
        - src/main.py
    layers:
      - {Ref: PythonRequirementsLambdaLayer}
    environment:
      REGION_NAME: us-west-2
custom:
  pythonRequirements:
    dockerFile: ./Dockerfile
    layer: true
plugins:
  - serverless-python-requirements
I've wrapped my handler in a try-catch, but it doesn't even get to my code.
I expect my lambda to run my code without error.
The error message Unable to marshal response {repr} is not JSON serializable occurs when the handler returns a value to Lambda that is not JSON serializable.
At some point, something is trapping an exception and returning the exception object.
Let's then look at OSError(30, 'Read-only file system'). This is also a common issue with Lambda: writing is only permitted under /tmp. Your code is mounted in a read-only volume.
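A minimal repro sketch (a hypothetical handler, not the asker's code) that produces exactly this pair of symptoms:
def handler(event, context):
    try:
        # /var/task (your code bundle) is mounted read-only in Lambda
        open("/var/task/out.txt", "w")
    except OSError as exc:
        # returning the exception object itself is not JSON serializable,
        # which surfaces as Runtime.MarshalError with the OSError repr inside
        return exc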
I've wrapped my handler in a try-catch but it doesn't even get to my code.
Strictly, it doesn't get to your handler.
Importing a module runs it from the top; those def and class blocks are just glorified assignments into the module namespace. And when you import other modules, they can run arbitrary code, and they can absolutely try to write to the filesystem.
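For example, a dependency like this hypothetical module fails the moment it is imported, long before any try-catch inside the handler can run:
# some_dependency.py (hypothetical)
import os

# runs at import time: the package directory is read-only in Lambda, so
# this raises OSError(30, 'Read-only file system') before the handler
# is ever called; writing under /tmp instead would succeed
CACHE_DIR = os.path.join(os.path.dirname(__file__), ".cache")
os.makedirs(CACHE_DIR)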
One simple way to diagnose this further would be to:
- Unpack the zip file into its own directory.
- Add a simple helper that imports your handler and calls it (see the sketch below).
- Mark everything read-only with chmod (or the equivalent on Windows).
- Run your helper and see what breaks.
Since you're pulling in PythonRequirementsLambdaLayer directly, you could also unpack that into your test rig to figure out where it's breaking.
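A sketch of that helper (hedged: it assumes the bundle was unpacked into ./build and the handler is main.handler, per the serverless.yml above):
# run_local.py (hypothetical helper)
# prepare with: unzip .serverless/*.zip -d build && chmod -R a-w build
import sys

sys.path.insert(0, "./build")  # make the unpacked bundle importable

import main                    # importing runs all top-level code
print(main.handler({}, None))  # then exercise the handler itself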

How to set up Sentry for AWS Lambda functions? Many errors

Trying to set up Sentry for a service with AWS Lambda functions.
I have followed the instructions for serverless-sentry-lib, and it works in the local environment but not in prod.
First, I installed raven and serverless-sentry-lib:
npm install --save raven
npm install --save serverless-sentry-lib
Basically this is what I have in my serverless.yml:
provider:
  environment:
    # SLS_DEBUG: "*"
    SENTRY_ENVIRONMENT: "${opt:stage, self:provider.stage}"
    SENTRY_DSN: "https://xxxxxx@sentry.io/xxxxxx"
plugins:
  - serverless-delete-loggroups
  - serverless-plugin-typescript
  - serverless-plugin-existing-s3
  # - serverless-sentry
  - serverless-sentry-lib
  # - serverless-plugin-optimize
And this is how I send the error:
myFunction.ts
import * as Raven from 'raven';
Raven.config('https://xxxxxxxx@sentry.io/xxxxxxx').install();
That code works fine locally, but when I deploy it to AWS Lambda with Serverless I get the following error:
Serverless: s3 --> initiate requests ...
Error --------------------------------------------------
... Unable to validate the following destination configurations
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Checking this post and setting SLS_DEBUG to "*":
SLS_DEBUG: "*"
SENTRY_ENVIRONMENT: "${opt:stage, self:provider.stage}"
SENTRY_DSN: "https://xxxxxx@sentry.io/xxxxxx"
I still have the same error; it did not disappear.
Does anyone have any clue what is going on with these settings?