How can I use conditional configuration in serverless.yml for lambda?

I need to configure a Lambda via serverless.yml to use different provisioned concurrency for different environments. Below is my Lambda configuration:
myLambda:
  handler: src/lambdas
  name: myLambda
  provisionedConcurrency: ${self:custom.pc}
...
custom:
  pc: ${env:PC}
The value PC is loaded from an environment variable. It works for values greater than 0, but I can't set a value of 0 in one environment. What I want to do is disable provisioned concurrency in the dev environment.
I have read through this doc https://forum.serverless.com/t/conditional-serverless-yml-based-on-stage/1763/3 but it doesn't seem to help in my case.
How can I set provisionedConcurrency conditionally based on the environment?

Method 1: Stage-based variables via default values
This is a fairly simple trick using cascading variables: the first value is the one you want, and the second is a default, or fallback, value.
# serverless.yml
provider:
  stage: "dev"

custom:
  provisionedConcurrency:
    live: 100
    staging: 50
    other: 10

myLambda:
  handler: src/lambdas
  name: myLambda
  provisionedConcurrency: ${self:custom.provisionedConcurrency.${self:provider.stage}, self:custom.provisionedConcurrency.other}
With the stage set to dev, the above falls back to the "other" value of 10 (there is no dev key in the map), but if you set the stage via serverless deploy --stage live, it will use the live value of 100.
See here for more details: https://www.serverless.com/framework/docs/providers/aws/guide/variables#syntax
Method 2: Asynchronous value via JavaScript
You can include a JavaScript file and put your conditional logic there; this is called "asynchronous value support". It allows you to put logic in a JavaScript file which you include, and that file can return different values depending on various conditions (like which AWS account you're on, or whether certain variables are set). Basically, it allows you to do this...
provisionedConcurrency: ${file(./detect_env.js):get_provisioned_concurrency}
This works if you create a JavaScript file called detect_env.js in the same folder, with contents similar to...
// detect_env.js
module.exports.get_provisioned_concurrency = () => {
  // Hypothetical check: replace with logic that detects which environment
  // you are deploying to, e.g. an environment variable set by your CI for live
  if (process.env.DEPLOY_ENV === 'live') {
    return Promise.resolve('100');
  } else {
    // Otherwise fall back to 10
    return Promise.resolve('10');
  }
};
For more info see: https://www.serverless.com/framework/docs/providers/aws/guide/variables#with-a-new-variables-resolver
I felt I had to reply here even though this was asked months ago because none of the answers were even remotely close to the right answer and I really felt sorry for the author or anyone who lands here.

For really sticky problems, I find it's useful to go to the CloudFormation template instead and use the CloudFormation intrinsic functions.
For this case, if you know all the environments, you could use Fn::FindInMap:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-findinmap.html
Or, if it's JUST production which needs 0, you could use the conditional Fn::If together with a boolean Condition in the CloudFormation template: test whether the environment equals production, and if so use 0, else use the templated value from SLS.
Potential SLS:
resources:
  Conditions:
    UseZero: !Equals ["production", ${self:provider.stage}]
  Resources:
    myLambda:
      ProvisionedConcurrency: !If [UseZero, 0, ${self:custom.pc}]
You can explicitly remove the ProvisionedConcurrency property as well if you want:
resources:
  Conditions:
    UseZero: !Equals ["production", ${self:provider.stage}]
  Resources:
    myLambda:
      ProvisionedConcurrency: !If [UseZero, !Ref "AWS::NoValue", ${self:custom.pc}]
Edit: You can still use SLS to deploy; it simply compiles into a CloudFormation JSON template, which you can explicitly modify with the SLS resources field.

The Serverless Framework provides a really useful dashboard tool with a feature called Parameters. Essentially, it lets you connect your service to the dashboard, set different values for different stages, and then use those values in your serverless.yml with syntax like ${param:VARIABLE_NAME_HERE}; each one gets replaced at deploy time with the right value for whatever stage you are currently deploying. Super handy. There are also a bunch of other features in the dashboard, such as monitoring and troubleshooting.
You can find out more about Parameters at the official documentation here: https://www.serverless.com/framework/docs/guides/parameters/
And how to get started with the dashboard here: https://www.serverless.com/framework/docs/guides/dashboard#enabling-the-dashboard-on-existing-serverless-framework-services
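For example, with the service connected to the dashboard and a per-stage parameter defined there, usage could look like this (a minimal sketch; the parameter name pc is hypothetical):

functions:
  myLambda:
    handler: src/lambdas
    provisionedConcurrency: ${param:pc}  # resolved per stage at deploy time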

Just use a variable with a null value for dev environments during deploy/package, and SLS will skip this property:
provisionedConcurrency: ${self:custom.variables.provisionedConcurrency}
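A minimal sketch of what that can look like (the stage map and its names are assumptions; the point is that the variable resolves to null for dev, so the property is omitted from the compiled template):

custom:
  pcByStage:
    prod: 100
    dev: null  # resolves to null, so provisionedConcurrency is skipped for dev
  variables:
    provisionedConcurrency: ${self:custom.pcByStage.${self:provider.stage}}

functions:
  myLambda:
    handler: src/lambdas
    provisionedConcurrency: ${self:custom.variables.provisionedConcurrency}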

Related

How do I solve this Serverless.yml ssm dynamic path creation problem?

Fairly new to Serverless and having problems creating a dynamic path to an SSM parameter. I have tried a fair few ideas and am sure that this is really close, but it's not quite there.
I'm trying to generate an ssm path as a custom variable that will then be used to populate a value for a Lambda function.
Here's the custom variable code
custom:
  securityGroupSsmPath:
    dev: "${self:service}/${self:custom.stage}/rds/lambdasecuritygroup"
    other: "${self:service}/${env:SHARED_INFRASTRUCTURE_ENV}/rds/lambdasecuritygroup"
  securityGroupId: ${ssm:, "${self:custom.securityGroupSsmPath.${env:SHARED_INFRASTRUCTURE_ENV}, self:custom.securityGroupSsmPath.other}"}
And here is where it is referenced in the function
functions:
  someLambda:
    handler: build/handlers/someLambda/handler.handler
    timeout: 60
    memorySize: 256
    vpc:
      securityGroupIds:
        - ${self:custom.securityGroupId}
And here is the error output. It seems like it is not resolving the SSM parameter:
Serverless Error ----------------------------------------
Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "custom.securityGroupId": Parameter name: can't be prefixed with "ssm" (case-insensitive). If formed as a path, it can consist of sub-paths divided by slash symbol; each sub-path can be formed as a mix of letters, numbers and the following 3 symbols .-_
All help much appreciated,
Thanks!
Sam
In the end we tried numerous implementations, and the issue seemed to boil down to trying to both retrieve the SSM value for securityGroupId and parse/default the second variable within the same expression.
The solution was to remove the parsing/default variable from within the ssm: step. Additionally, we had to remove some of the double quotes on the custom vars:
custom:
  securityGroupSsmPath:
    dev: ${self:service}/${self:custom.stage}/rds/lambdasecuritygroup
    other: ${self:service}/${env:SHARED_INFRASTRUCTURE_ENV}/rds/lambdasecuritygroup
  securityGroupId: ${self:custom.securityGroupSsmPath.${env:SHARED_INFRASTRUCTURE_ENV}, self:custom.securityGroupSsmPath.other}
functions:
  someLambda:
    handler: build/handlers/someLambda/handler.handler
    timeout: 60
    memorySize: 256
    vpc:
      securityGroupIds:
        - ${ssm:/${self:custom.securityGroupId}}

Referencing serverless stack name after pseudo parameters plugin deprecation

I'm wondering what the correct way is to reference AWS CloudFormation pseudo parameters in a serverless.yml now that the pseudo parameters plugin has been deprecated.
Not all pseudo parameters are available with the dollar-sign syntax (e.g. ${aws:stackName} is not) in the way that ${aws:region} is, for example. The Serverless documentation on pseudo parameters is very short and I am not sure I fully understand it. I have tried to use Ref: "AWS::StackName", but when I try to generate an output
Fn::Sub:
  - "${Stack}-someOutputResourceName"
  - Stack:
      Ref: "AWS::StackName"
I get an error: [...]/Fn::Sub/1/Stack] 'null' values are not allowed in templates.
The pseudo-parameters plugin page claims that:
All functionalities as provided by this plugin are now supported by Serverless Framework natively
If this is true, how should I go about using pseudo-parameters?
It seems that while the above method does not work, I am able to use the pseudo parameter directly without using Ref:
Name: !Sub "${AWS::StackName}-someOutputResourceName"
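In context, generating the output can then look something like this (a sketch; SomeOutput, SomeResource, and the export name are illustrative):

resources:
  Outputs:
    SomeOutput:
      Value: !Ref SomeResource
      Export:
        Name: !Sub "${AWS::StackName}-someOutputResourceName"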

How to deploy an AWS Lambda Function conditionally, using Serverless Framework, only to prd stage

Thank you for taking the time to read this post. I'm trying to deploy an AWS Lambda Function conditionally, based on stage (only for the "prd" stage).
This lambda has a Role, which deploys conditionally too. I already achieved this by using CloudFormation Conditions in the resources block, as shown below:
However, I don't know how to make it work for the lambda function: since it is in the functions block, I have no idea how to reference the condition. From the serverless.yml reference I decided to do what is shown below, and it doesn't work:
Can someone help me to understand what am I doing wrong? And also what would be the solution to make this work? Thanks in advance
This can be achieved using the serverless if-else plugin
https://www.serverless.com/plugins/serverless-plugin-ifelse
You can use the plugin by adding it to the plugins section of your serverless.yml
plugins:
- serverless-plugin-ifelse
and set up conditions to update values in the serverless.yml for the functions and to exclude them.
An include option isn't available, so your condition would be something like:
custom:
  currentStage: ${opt:stage, self:provider.stage}
  serverlessIfElse:
    - If: '"${self:provider.stage}" == "prd"'
      Set:
        functions.startXtractUniversalInstance.role: <custom role for prod>
      ElseExclude:
        - functions.startXtractUniversalInstance
If you check the serverless.yml reference, there's no support for a "conditions" key on the lambda function.
Serverless Framework definitions ARE NOT a 1:1 mapping to CloudFormation.
You can, however, override the AWS CloudFormation resource generated by Serverless to apply your own options (link here),
which more or less would look like this:
functions:
  startXtractUniversalInstance:
    ...
resources:
  extensions:
    StartXtractUniversalInstanceFunction:
      Condition: ...
Make sure to double-check the name generated for your function; the above StartXtractUniversalInstanceFunction could be wrong.
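For illustration, the override could end up looking like this (a sketch with two assumptions: that Serverless generates the logical ID with its usual LambdaFunction suffix, and that a Condition named DeployOnlyInPrd is defined under resources.Conditions; verify the actual logical ID in the compiled template under .serverless/):

resources:
  Conditions:
    DeployOnlyInPrd: !Equals ["prd", ${self:provider.stage}]
  extensions:
    StartXtractUniversalInstanceLambdaFunction:
      Condition: DeployOnlyInPrd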

Force AWS account numbers that start with "00" to string

Does anybody know a work-around for converting account numbers that start with "00" to string? I am using Mappings in a CFn template to assign values based on the account number. I put the account number in quotes to convert it to a string; this works well if it does not start with a zero, but I get the following error when it does:
[/Mappings/EnvMap] map keys must be strings; received numeric [1.50xxx028E9]
Mappings:
  EnvMap:
    "8727xxxx0":
      env: "dev"
    "707xxxx78":
      env: "test"
    "00150xxx280":
      env: "prod"
Resources:
  rS3Stack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL: "https://s3.amazonaws.com/some_bucket/nested_cfn/s3.yaml"
      Parameters:
        pEnvironment: !FindInMap
          - EnvMap
          - !Ref 'AWS::AccountId'
          - env
Your problem is caused by a bug in PyYAML, which results from some ambiguity in the YAML 1.1 specification. According to YAML 1.1, an integer must not start with 0, and numbers starting with 0 are considered octal numbers. So when PyYAML parses the account id, it considers it not to be an integer, because it starts with 0, but also not an octal number, because it contains an 8. As it's neither an integer nor an octal number, PyYAML considers it a string which is safe to be dumped without surrounding quotes.
A minimal example to reproduce this looks like this:
>>> import sys
>>> import yaml
>>> yaml.dump(["1", "8", "01", "08"], sys.stdout)
- '1'
- '8'
- '01'
- 08
Now you might wonder why a PyYAML bug is mentioned here, when you just want to deploy a CloudFormation stack:
Depending on how you deploy a CloudFormation stack the template might get transformed locally, before it gets deployed. That happens for example when using aws cloudformation package, sam package or sam build to replace local code locations with paths in S3. As reading and writing the template during those transformations is done using PyYAML, it triggers the PyYAML bug mentioned above. There are bug reports for the AWS CLI and the AWS SAM CLI regarding this problem.
As the account id causing the problem is used as a key in your case, your options to work around the problem are limited, since you can't utilize CloudFormation's intrinsic functions. However, there are still possible workarounds:
If you're using the AWS CLI, you can switch to using the AWS CLI v2, which doesn't suffer from this bug as it uses ruamel instead of PyYAML. ruamel handles numbers as one would expect, as it implements YAML 1.2, which doesn't contain the ambiguity in its specification.
Regardless of whether you're using the AWS SAM CLI or the AWS CLI, you can convert the transformed template from YAML to JSON and back to YAML, which "fixes" this bug as well, as it results in the problematic numbers being quoted again. There is a tool from AWS called cfn-flip to do so. You'd have to run this double-flip between packaging and deployment. For the AWS SAM CLI, that would for example look like:
sam build
cfn-flip .aws-sam/build/template.yaml | cfn-flip > .aws-sam/build/template.tmp.yaml
mv .aws-sam/build/template.tmp.yaml .aws-sam/build/template.yaml
sam deploy
With this said, I personally would suggest a completely different workaround, and that's to remove that mapping from the template. Hardcoding account ids and environments makes a template less portable, as it limits the accounts/environments the template can be used for. I'd instead provide the environment as a parameter to the CloudFormation template, so it doesn't need to be aware of account ids at all.
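A minimal sketch of that parameter-based approach, reusing the pEnvironment name from the question (the AllowedValues list is an assumption):

Parameters:
  pEnvironment:
    Type: String
    AllowedValues: ["dev", "test", "prod"]  # assumed set of environments
Resources:
  rS3Stack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL: "https://s3.amazonaws.com/some_bucket/nested_cfn/s3.yaml"
      Parameters:
        pEnvironment: !Ref pEnvironment  # no account-id mapping needed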

Does Deployment Manager have Cloud Functions support (and support for having multiple cloud functions)?

I'm looking at this repo and am very confused about what's happening here: https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/cloud_functions
In other Deployment Manager examples, I see the "type" set to the type of resource being deployed, but in this example I see this:
resources:
- name: function
  type: cloud_function.py # why not "type: cloudfunctions"?
  properties:
    # All the files that start with this prefix will be packed in the Cloud Function
    codeLocation: function/
    codeBucket: mybucket
    codeBucketObject: function.zip
    location: us-central1
    timeout: 60s
    runtime: nodejs8
    availableMemoryMb: 256
    entryPoint: handler
"type" is pointing to a python script (cloud_function.py) instead of a resource type. The script is over 100 lines long and does a whole bunch of stuff.
This looks like a hack, like its just scripting the GCP APIs? The reason I'd ever want to use something like Deployment Manager is to avoid a mess of deployment scripts but this looks like it's more spaghetti.
Does Deployment Manager not support Cloud Functions and this is a hacky workaround or is this how its supposed to work? The docs for this example are bad so I don't know what's happening
Also, I want to deploy multiple function into a single Deployment Manager stack- will have to edit the cloud_function.py script or can I just define multiple resources and have them all point to the same script?
Edit
I'm also confused about what these two imports are for at the top of the cloud_function.yaml:
imports:
# The function code will be defined for the files in function/
- path: function/index.js
- path: function/package.json
Why is it importing the actual code of the function it's deploying?
Deployment Manager simply interacts with the different kinds of Google APIs. This documentation gives you a list of resource types supported by Deployment Manager. I would recommend running gcloud deployment-manager types list | grep function, and you will find that the cloudfunctions.v1beta2.function resource type is also supported by DM.
The template is using a gcp-type (that is in beta). The cloud_function.py file is a template. If you use a template, you can reuse it for multiple resources, as you can see in this example. For better understanding, and something easier to read/follow, you can check this example of Cloud Functions through a gcp-type.
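For the multiple-functions part of the question: since cloud_function.py is a template, it can be instantiated once per function, so defining several resources that point at the same script should work, along these lines (a sketch reusing the property names from the question; the resource names and zip paths are illustrative):

resources:
- name: function-one
  type: cloud_function.py
  properties:
    codeLocation: function_one/
    codeBucket: mybucket
    codeBucketObject: function_one.zip
    location: us-central1
    timeout: 60s
    runtime: nodejs8
    availableMemoryMb: 256
    entryPoint: handler
- name: function-two
  type: cloud_function.py  # same template, second instance
  properties:
    codeLocation: function_two/
    codeBucket: mybucket
    codeBucketObject: function_two.zip
    location: us-central1
    timeout: 60s
    runtime: nodejs8
    availableMemoryMb: 256
    entryPoint: handler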
I want to add to the answer by Aarti S that gcloud deployment-manager types list | grep function didn't work for me, but I found how to list all resource types, including resources that are in alpha:
gcloud beta deployment-manager types list --project gcp-types
Or just gcloud beta deployment-manager types list | grep function helps.