AWS Lambda - One function with multiple parameter sets or multiple functions?

I have a lambda function which is pretty general. It queries an Athena Database table, and writes the output to an S3 Bucket. The function is currently set up to accept environment variables for the Database name, table name, output filename and output bucket.
Currently, I call this function with 4 different sets of environment variables, so I have 4 different lambdas, whose code is the same, but whose environment variables are different. Do I need to have 4 different lambdas, or is there a way of having one lambda with 4 environment variable 'sets'?
Thanks!

Here's one option: to handle 4 sets of configuration with a single lambda, send a variable (e.g. type: Foo) as part of the lambda invocation (1). As @Marcin suggests, the lambda then uses the type value to fetch the config variables from the SSM Parameter Store at runtime with the GetParametersByPath API. Parameters support hierarchies, so you can store your config using nested names like /Foo/db, /Foo/table, /Bar/table, etc.
(1) For example, send type in the event detail if the function is event-triggered, or in the SDK Invoke command payload.
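A minimal sketch of that pattern in Python, assuming the event carries a type field and the parameters live under /&lt;type&gt;/ (all names here are illustrative; a production version would also paginate the GetParametersByPath results):

```python
def params_to_config(parameters):
    """Flatten a GetParametersByPath result into {leaf_name: value}."""
    return {p["Name"].rsplit("/", 1)[-1]: p["Value"] for p in parameters}

def load_config(config_type):
    import boto3  # imported lazily so params_to_config stays usable without the AWS SDK
    ssm = boto3.client("ssm")
    resp = ssm.get_parameters_by_path(
        Path=f"/{config_type}/", Recursive=True, WithDecryption=True
    )
    return params_to_config(resp["Parameters"])

def lambda_handler(event, context):
    # e.g. event = {"type": "Foo"} selects the /Foo/... parameter set
    config = load_config(event["type"])
    # config["db"], config["table"], ... then drive the Athena query and S3 write
    return config
```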

Related

Add new environment variables to Lambda using Cloud Formation Template

I have a nested Cloud Formation Template (multiple templates within a root template) to create a complete web application.
The Lambda is created in the first template and a few environment variables are added to it.
The later templates also produce some values that have to be added as environment variables.
Is there a way to attach these environment variables to the existing lambda function?
I don't think so, but there are a few options. If you could change the stack dependency order, you could build the stacks so that the values being depended upon are created first. If you cannot, you can store your environment variables in SSM Parameter Store as mentioned in this knowledge center article.
So you set the environment variable to a path where the value can be expected; then, when creating the stack that knows the value, you store it at that path. When the lambda runs, it simply calls GetParameter.
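As a sketch of that runtime lookup (the environment variable name DB_PARAM_PATH and the default path are made up):

```python
import os

def config_path(env_var="DB_PARAM_PATH", default="/myapp/prod/db-endpoint"):
    """The stacks only need to agree on the SSM path; the value arrives later."""
    return os.environ.get(env_var, default)

def resolve_parameter(path):
    import boto3  # imported lazily so config_path stays usable without the AWS SDK
    ssm = boto3.client("ssm")
    return ssm.get_parameter(Name=path, WithDecryption=True)["Parameter"]["Value"]

def lambda_handler(event, context):
    # Resolved at invocation time, so the later stack can store the value
    # after this function was created.
    endpoint = resolve_parameter(config_path())
    return endpoint
```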

Set global parameters in aws cloudformation

I'm building a complex application in AWS using Cloudformation.
My setup is the following: I'm going to use YAML files to define the stacks and corresponding JSON files which contain the stack parameters. However, some parameters are the same in multiple JSON files, and I'd like to define them globally in one file/stack instead of having to update them in multiple files every time they change.
What is the recommended way to set such global parameters using cloudformation?
Help would be highly appreciated.
You could possibly create one stack with common parameters, and export their values from this stack. Then, in the other stacks, the parameter values would be accessed using Fn::ImportValue.
An alternative could be to store common parameters in SSM Parameter Store, and then use dynamic references in your template to access them.
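Fn::ImportValue resolves exports inside templates; if you also need one of those shared values from a script or a Lambda, the same exports can be read through the CloudFormation ListExports API. A sketch, with a hypothetical export name:

```python
def find_export(exports, name):
    """Pick one output value out of a ListExports result."""
    for export in exports:
        if export["Name"] == name:
            return export["Value"]
    raise KeyError(name)

def read_stack_export(name):
    import boto3  # imported lazily so find_export stays usable without the AWS SDK
    cf = boto3.client("cloudformation")
    exports = []
    # ListExports is paginated, so collect every page before searching.
    for page in cf.get_paginator("list_exports").paginate():
        exports.extend(page["Exports"])
    return find_export(exports, name)
```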

How do I configure My AWS Lambda to take input parameters

I want my AWS Lambda function to take an input parameter. For example, the parameter is a date and I want to update a field in the database using that date. How do I get that date to my Lambda?
Lambdas can get params from 2 sources:
triggering event
env variables
Adding environment variables is independent of the triggering event, but their values don't change between invocations.
Adding fields to the triggering event depends heavily on the type of event and should be handled on the side of the event's source.
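A minimal handler showing both sources (the date field and the TABLE_NAME variable are just illustrative names):

```python
import os

def lambda_handler(event, context):
    # Per-invocation input travels in the triggering event ...
    target_date = event.get("date", "1970-01-01")  # hypothetical event field
    # ... while deployment-level config comes from environment variables.
    table_name = os.environ.get("TABLE_NAME", "my-table")  # hypothetical variable
    return {"date": target_date, "table": table_name}
```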

How to access last n parameters in an AWS Lambda function

I am receiving sensory data on AWS IoT and passing these values to a Lambda function using a rule. In the Lambda function which is coded in Python, I need to make a calculation based on the latest n values.
What is the best way of accessing previous parameters?
Each Lambda invocation is supposed to be stateless and unaware of previous invocations (there is container reuse, but you cannot rely on it).
If you need previous values, you have to persist those parameters somewhere else, like DynamoDB or Redis on ElastiCache.
Then, when you need to do your calculations, you can retrieve the past n-1 values and do your calculations.
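A sketch of the DynamoDB variant, assuming a hypothetical SensorReadings table with a sensor_id partition key and a timestamp sort key:

```python
def last_n_values(items, n):
    """Given items already sorted newest-first, keep the latest n readings."""
    return [item["value"] for item in items[:n]]

def fetch_latest(sensor_id, n):
    import boto3  # imported lazily so last_n_values stays usable without the AWS SDK
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("SensorReadings")  # hypothetical table
    resp = table.query(
        KeyConditionExpression=Key("sensor_id").eq(sensor_id),
        ScanIndexForward=False,  # newest first, via the timestamp sort key
        Limit=n,
    )
    return last_n_values(resp["Items"], n)
```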

Can I parameterize AWS lambda functions differently for staging and release resources?

I have a Lambda function invoked by S3 put events, which in turn needs to process the objects and write to a database on RDS. I want to test things out in my staging stack, which means I have a separate bucket, different database endpoint on RDS, and separate IAM roles.
I know how to configure the lambda function's event source and IAM stuff manually (in the Console), and I've read about lambda aliases and versions, but I don't see any support for providing operational parameters (like the name of the destination database) on a per-alias basis. So when I make a change to the function, right now it looks like I need a separate copy of the function for staging and production, and I would have to keep them in sync manually. All of the logic in the code would be the same, and while I get the source bucket and key as a parameter to the function when it's invoked, I don't currently have a way to pass in the destination stuff.
For the destination DB information, I could have a switch statement in the function body that checks the originating S3 bucket and makes a decision, but I hate making every function have to keep that mapping internally. That wouldn't work for the DB credentials or IAM policies, though.
I suppose I could automate all or most of this with the SDK. Has anyone set something like this up for a continuous integration-style deployment with Lambda, or is there a simpler way to do it that I've missed?
I found a workaround using Lambda function aliases. Given the context object, I can get the invoked_function_arn property, which has the alias (if any) at the end.
arn_string = context.invoked_function_arn
alias = arn_string.split(':')[-1]
Then I just use the alias as an index into a dict in my config.py module, and I'm good to go.
config[alias].host
config[alias].database
One thing I'm not crazy about is that I have to invoke my function from an alias every time, and now I can't use aliases for any other purpose without affecting this scheme. It would be nice to have explicit support for user parameters in the context object.
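The workaround above can be sketched end to end; the CONFIG values and the "production" default are hypothetical, and a dict of dicts stands in for the config.py module:

```python
# Hypothetical per-alias settings; the real project keeps these in config.py.
CONFIG = {
    "staging": {"host": "db-staging.example.com", "database": "app_staging"},
    "production": {"host": "db.example.com", "database": "app"},
}

def alias_from_arn(arn, default="production"):
    """Return the trailing alias of an invoked-function ARN.

    Unaliased ARNs have 7 colon-separated segments and end at the function
    name; aliased ones carry the alias as an 8th segment.
    """
    parts = arn.split(":")
    return parts[7] if len(parts) > 7 else default

def lambda_handler(event, context):
    settings = CONFIG[alias_from_arn(context.invoked_function_arn)]
    # Connect using settings["host"] / settings["database"] ...
    return settings
```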