Lambda ValueFrom Environment Variable Like in Task Definition - amazon-web-services

Is there a way to have a ValueFrom feature in Lambda's environment variable similar to what we have in Task Definition?
Here is how it works in ECS.
We have a key-value pair in Parameter Store: /dev/db/host = localhost.
In the container definition inside the ECS task definition, we add a new environment variable DB_HOST whose ValueFrom points to /dev/db/host. When a new instance of the container is run, it picks up the value localhost from Parameter Store.
I tried this on Lambda, but the feature does not seem to be available. Is there another way to do this? I also wonder whether there is a feature request for it.
PS: I'm aware that this can be done via Terraform or CloudFormation, but those only evaluate and copy the values from Parameter Store into Lambda environment variables when the infrastructure is built. The problem is that some of the values are secrets, like the DB password, so they cannot simply be copied or they would be exposed.
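For reference, the ECS side of this looks roughly like the sketch below (boto3; the family, image, account ID and region are made up). The container definition's secrets list maps an environment variable name to a Parameter Store entry via valueFrom:
import boto3

ecs = boto3.client("ecs")

# Sketch only: family, image and the parameter ARN are illustrative values.
ecs.register_task_definition(
    family="my-app",
    containerDefinitions=[
        {
            "name": "web",
            "image": "my-app:latest",
            "memory": 512,
            "secrets": [
                {
                    "name": "DB_HOST",
                    "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/dev/db/host",
                }
            ],
        }
    ],
)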

Related

Add new environment variables to Lambda using a CloudFormation template

I have a nested CloudFormation template (multiple templates within a root template) to create a complete web application.
The Lambda function is created in the first template, and a few environment variables are added to it.
The later templates also produce some values that have to be added as environment variables.
Is there a way to attach these environment variables to the existing Lambda function?
I don't think so, but there are a few options. If you can change the stack dependency order, you could build the stack that creates the depended-upon values first. If you cannot, you can store your environment variables in SSM Parameter Store, as mentioned in this knowledge center article.
So you set the environment variable to the path where the value is expected; then, when creating the stack that knows the value, you store it at that path. When the Lambda runs, it just calls GetParameter.
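As a rough sketch of that pattern (Python with boto3; the DB_HOST_PARAM variable name and the handler are illustrative, not from the question):
import os
import boto3

ssm = boto3.client("ssm")

def handler(event, context):
    # The environment variable holds the SSM path (e.g. /dev/db/host),
    # not the value itself; the value is resolved at run time.
    db_host = ssm.get_parameter(
        Name=os.environ["DB_HOST_PARAM"],
        WithDecryption=True,  # needed for SecureString parameters such as passwords
    )["Parameter"]["Value"]
    # ... connect using db_host ...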

I need a strategy for handling optional SSM Parameter Store parameters in CDK

In my stack definition I pull in a number of parameters from SSM Parameter Store...
const p1 = ssm.StringParameter.fromStringParameterAttributes( ... )
const p2 = ssm.StringParameter.fromStringParameterAttributes( ... )
I then pass them along to the relevant lambdas as environment vars...
environment: {
  PARAM_ONE: p1.stringValue,
  PARAM_TWO: p2.stringValue
}
However I don't want all of those parameters to be mandatory. I would like the ones that exist to be passed in as env vars, and the ones that don't to just remain undefined, as my app has defaults for them anyway. However, trying to inspect the value of p1.stringValue just gives me a Token, not a value, so I can't do any logic based on its presence or absence: https://docs.aws.amazon.com/cdk/latest/guide/tokens.html
If I ask for a parameter that is not defined in SSM Parameter Store, I get an error that I can't catch or ignore when it tries to build the changeset, and the deployment fails...
MyApp: creating CloudFormation changeset...
❌ MyAppStack failed: Error [ValidationError]: Unable to fetch parameters [/myapp/param1,/myapp/param2] from parameter store for this account.
So how can I deal with SSM parameters which may or may not exist at deploy time?
I assume you are only grabbing the parameter reference in your import, not the actual values inside your secrets. If this is the case, then your best bet is to leverage the SDK to do this for you: a simple call using the SDK (which will run during the synth stage of a cdk deploy or cdk synth) to see whether the SSM fields/groups exist. If they do, go ahead and import them.
I do something very similar with Layers: the from methods for layers require the version number, which may change at any time. So I have a small function that gets the latest version number of a given layer using the SDK, and I can then use that to import the layer definition into my stack.
If you are trying to get the actual secret inside the Secrets Manager parameter ... that is better handled outside the CDK for most scenarios, done in the exact location where you need the secrets, so you don't end up with a secret value in plain text somewhere.
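A sketch of that idea, shown here in Python with boto3 for brevity (the question's CDK app is TypeScript, where the same check can be made with the JavaScript SDK; the mapping of env var names to parameter paths below is illustrative):
import boto3

ssm = boto3.client("ssm")

def optional_param(name):
    # Runs at synth time, before the changeset is built.
    try:
        return ssm.get_parameter(Name=name)["Parameter"]["Value"]
    except ssm.exceptions.ParameterNotFound:
        return None

# Only pass along the environment variables that actually exist in Parameter Store.
environment = {}
for env_name, ssm_name in [("PARAM_ONE", "/myapp/param1"), ("PARAM_TWO", "/myapp/param2")]:
    value = optional_param(ssm_name)
    if value is not None:
        environment[env_name] = value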

Global environment variables for AWS CloudFormation

Is there a way to have global environment variables in an AWS CloudFormation YAML file for Lambdas?
Currently we are using the SSM Parameter Store for global variables, but we don't want to use that anymore.
I am looking for something like this:
Environment:
  Variables:
    variable1: xxx                     # local variables
    variable2: xxx
    ...
    ${file(./globalvariables.yml)}     # global variables
Or, even better: every Lambda would include the global environment variables by default, without explicitly referencing them.
Is this possible? Or what approach would you suggest? Thanks in advance!
Sadly, I'm not aware of a way to predefine default environment variables for Lambdas through CloudFormation. However, one possible option is, instead of using env variables in CloudFormation, to add a Lambda layer with all the config and pull the values from there.
The benefit of this is that if a value changes, you only have to update your layer once and then update the Lambdas to use the new layer version, which could be a single parameter, instead of manually updating every single one.
Docs here: https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
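A minimal sketch of reading such a layer from a function, assuming the layer ships a JSON file (the path and key names are made up; layer contents are extracted under /opt at run time):
import json

# Loaded once per execution environment, outside the handler.
with open("/opt/config/globalvariables.json") as f:
    GLOBAL_CONFIG = json.load(f)

def handler(event, context):
    variable1 = GLOBAL_CONFIG["variable1"]
    # ...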
Another option would be to use AWS Secrets Manager or SSM Parameter Store, as ServerMonkey suggested.

CronJob, Django and environment variables

I have built a Django application on OpenShift v3 Pro with the django-ex template. It works great. I'm using PostgreSQL with persistent storage.
I need a scheduled cron job that fires every hour to run some Django management commands. I'm using the CronJob resource for this.
My problem is this: I need to create the CronJob with the same environment variables that the Django pod was created with (DATABASE_, DJANGO_, and others), but I don't see an easy way to do this.
Any help would be appreciated.
You should be able to include a list of environment variables to set as part of the containers section of the template spec for the job. I can't properly extract the resource definition for a CronJob using oc explain in OpenShift 3.6 because of the way it is registered, but I would expect the field to be similar to:
CronJob.spec.jobTemplate.spec.template.spec.containers.env
RESOURCE: env <[]Object>

DESCRIPTION:
     List of environment variables to set in the container. Cannot be updated.

     EnvVar represents an environment variable present in a Container.

FIELDS:
   name <string> -required-
     Name of the environment variable. Must be a C_IDENTIFIER.

   value <string>
     Variable references $(VAR_NAME) are expanded using the previous defined
     environment variables in the container and any service environment
     variables. If a variable cannot be resolved, the reference in the input
     string will be unchanged. The $(VAR_NAME) syntax can be escaped with a
     double $$, ie: $$(VAR_NAME). Escaped references will never be expanded,
     regardless of whether the variable exists or not. Defaults to "".

   valueFrom <Object>
     Source for the environment variable's value. Cannot be used if value is not
     empty.

Can I parameterize AWS lambda functions differently for staging and release resources?

I have a Lambda function invoked by S3 put events, which in turn needs to process the objects and write to a database on RDS. I want to test things out in my staging stack, which means I have a separate bucket, different database endpoint on RDS, and separate IAM roles.
I know how to configure the lambda function's event source and IAM stuff manually (in the Console), and I've read about lambda aliases and versions, but I don't see any support for providing operational parameters (like the name of the destination database) on a per-alias basis. So when I make a change to the function, right now it looks like I need a separate copy of the function for staging and production, and I would have to keep them in sync manually. All of the logic in the code would be the same, and while I get the source bucket and key as a parameter to the function when it's invoked, I don't currently have a way to pass in the destination stuff.
For the destination DB information, I could have a switch statement in the function body that checks the originating S3 bucket and makes a decision, but I hate making every function have to keep that mapping internally. That wouldn't work for the DB credentials or IAM policies, though.
I suppose I could automate all or most of this with the SDK. Has anyone set something like this up for a continuous integration-style deployment with Lambda, or is there a simpler way to do it that I've missed?
I found a workaround using Lambda function aliases. Given the context object, I can get the invoked_function_arn property, which has the alias (if any) at the end.
arn_string = context.invoked_function_arn
alias = arn_string.split(':')[-1]
Then I just use the alias as an index into a dict in my config.py module, and I'm good to go.
config[alias].host
config[alias].database
One thing I'm not crazy about is that I have to invoke my function from an alias every time, and now I can't use aliases for any other purpose without affecting this scheme. It would be nice to have explicit support for user parameters in the context object.
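For illustration, the config.py dict can be as simple as the sketch below (the hosts, database names and the SimpleNamespace wrapper are made up; any object exposing host and database attributes works):
from types import SimpleNamespace

config = {
    "staging": SimpleNamespace(host="staging-db.example.com", database="myapp_staging"),
    "production": SimpleNamespace(host="prod-db.example.com", database="myapp_prod"),
}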