Can you use environment variables with the Sub intrinsic function? - amazon-web-services

I am using the Sub intrinsic function to define a resource in an AWS IAM policy.
Resource:
- !Sub 'arn:aws:s3:::example-${TEST1}-${AWS::Region}-test'
${TEST1}: an environment variable set in my Java project.
${AWS::Region}: a pseudo parameter.
I want to know whether !Sub can read the environment variable and, if it can't, whether there is some other way to do this, even without this function.

No, you cannot reference environment variables within CloudFormation.
You'll have to wrap the CloudFormation deployment with a script that reads the environment variables and feeds them in as parameters, which can then be referenced in the template.
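As a sketch of such a wrapper in Python (the stack name, template file, and the Test1 parameter name are illustrative; the template would need a matching Test1 parameter declared, which !Sub can then substitute):

```python
import os

def build_deploy_command(stack_name, template_file):
    # Read the environment variable that CloudFormation itself cannot see,
    # and hand it to the stack as an ordinary template parameter.
    test1 = os.environ.get("TEST1", "default")
    return [
        "aws", "cloudformation", "deploy",
        "--stack-name", stack_name,
        "--template-file", template_file,
        "--parameter-overrides", "Test1=" + test1,
    ]
```

The command list can then be executed with `subprocess.run(build_deploy_command("mystack", "template.yml"), check=True)`, and inside the template `!Sub 'arn:aws:s3:::example-${Test1}-${AWS::Region}-test'` resolves against the parameter rather than the (invisible) environment variable.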

Make the environment variable a CloudFormation Parameter and set the value. Then !Sub will work as expected. You can create a parameter file if you need to massage the values with sed or jq.

Related

CloudFormation Parameter Dynamic Regex

I have a CloudFormation template with 2 parameters. I want to set an AllowedPattern value for the 2nd parameter such that its regex is dynamically generated based on the value of the first parameter. Something like this:
CloudFormation Parameters
WebsiteDomain:
  Type: String
EmailAddress:
  Type: String
  AllowedPattern: !Sub '.*@${WebsiteDomain}'
Allowed Input
WebsiteDomain: google.com
EmailAddress: test@google.com
Disallowed Input
WebsiteDomain: google.com
EmailAddress: test@yahoo.com
Is there any way to accomplish this?
You would have to use a CloudFormation macro for pre-processing of your template before actual deployment. Otherwise, you can't do what you want: intrinsic functions such as !Sub are not evaluated inside parameter declarations.
CloudFormation templates are not that dynamic, but you could write your stack with the AWS CDK, which lets you write logic to perform this check yourself and then either deploy directly to the AWS environment or synthesize a CloudFormation template from the resources defined in your CDK app.
Unfortunately, no. CloudFormation templates are not that dynamic.
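As a sketch of what such a pre-processing step (whether a macro or a CDK app) would actually check, the intended pattern can be built and tested in plain Python (the function name is illustrative):

```python
import re

def email_matches_domain(email, website_domain):
    # Reproduce the AllowedPattern the question is after:
    # any local part, then '@' and exactly the given domain.
    pattern = r".*@" + re.escape(website_domain) + r"$"
    return re.fullmatch(pattern, email) is not None
```

A macro would run this check against the submitted parameter values and fail the transform when it returns False.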

How to pass a parameter from CloudFormation to an AWS Lambda function stored in an S3 bucket

My Lambda code is stored in an S3 bucket. I use CloudFormation to deploy it into the child account.
Lambda code snippet:
def lambda_handler(event, context):
    ids = ["${id}"]
The CloudFormation template takes in the id as a parameter:
Parameters:
  id:
    Type: String
    Description: Name of id.
    Default: abc
However, the id doesn't get populated in the Lambda function because the code is stored in the S3 bucket.
How to go about it?
From your description, it seems your requirement is:
- A CloudFormation template takes an input parameter
- You wish to make the value of that input parameter available to all invocations of an AWS Lambda function that is defined in the same template
You could achieve this by passing the parameter to the Lambda function via an Environment Variable.
When defining the Lambda function in the CloudFormation template, environment variables can be provided:
Type: AWS::Lambda::Function
Properties:
  Code:
    Code
  Environment:
    Variables:
      id:
        Ref: PARAMETER-NAME
  ...
In this case, the value of the variable is coming from a parameter.
Within your Lambda code, you would then access the environment variable. Here's a Python example:
import os

def lambda_handler(event, context):
    print("environment variable: " + os.environ['id'])
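If the handler may also run outside Lambda (e.g. in local tests), a more defensive variant uses os.environ.get with a fallback (a sketch; the "abc" fallback mirrors the parameter's Default in the template above):

```python
import os

def lambda_handler(event, context):
    # .get avoids a KeyError when the variable is not set,
    # falling back to the same default the template declares.
    resource_id = os.environ.get("id", "abc")
    return {"id": resource_id}
```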
References:
Dynamically passing in ENV vars to Lambda functions created by Cloudformation | Christopher Diehl's blog
How to access an AWS Lambda environment variable from Python
Another approach is to store the values in AWS Systems Manager Parameter Store, which can also be managed from the CloudFormation template.

Is there a way to read environment variables from CodeCommit inside Serverless?

I have a serverless.yml file where I need to expose some variables for Lambda.
These are environment variables declared within CodeCommit on AWS.
functions:
  graphql:
    handler: handler.test
    environment:
      DB_HOST: ${environment.DB_HOST}
I'm not sure how exactly to get the DB_HOST from the set environment variables.
It's stored as Plaintext.
Thanks.
You are close. Try ${env:DB_HOST}.
Relevant documentation: Referencing Environment Variables.
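Putting it together, the corrected serverless.yml fragment would look like this (assuming DB_HOST is exported in the shell that runs serverless deploy):

```yaml
functions:
  graphql:
    handler: handler.test
    environment:
      # ${env:DB_HOST} is resolved by the Serverless Framework at deploy
      # time from the shell environment, not by CloudFormation.
      DB_HOST: ${env:DB_HOST}
```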

Access AWS auto-generated URL for deployed resources

Is there a way to access auto-generated URLs for deployed resources before the deployment is finished? (like db host, lambda function URL, etc.)
I can access them after the deployment is finished, but sometimes I need to access them while building my stack. (E.g. use them in other resources).
What is a good solution to handle this use-case? I was thinking about outputting them into the SSM parameter store from CloudFormation template, but I'm not sure if this is even possible.
Thanks for any suggestion or guidance!
If "use them in other resources" means another serverless service or another CloudFormation stack, then use CloudFormation Outputs to export the values you are interested in. Then use CloudFormation ImportValue function to reference that value in another stack.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html and https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html
Within Serverless Framework, you can access a CloudFormation Output value using https://serverless.com/framework/docs/providers/aws/guide/variables/#reference-cloudformation-outputs
If you want to use the autogenerated value within the same stack, then just use CloudFormation GetAtt function. See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getatt.html.
For example, I have a CloudFormation stack that outputs the URL for an ElasticSearch cluster.
Resources:
  Search:
    Type: AWS::Elasticsearch::Domain
    Properties: <redacted>
Outputs:
  SearchUrl:
    Value: !GetAtt Search.DomainEndpoint
    Export:
      Name: myapp:search-url
Assuming that the CloudFormation stack name is "mystack", then in my Serverless service, I can reference the SearchUrl by:
custom:
  searchUrl: ${cf:mystack.SearchUrl}
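For completeness, another plain CloudFormation stack (not Serverless) could consume the same export with Fn::ImportValue; a sketch, using an SSM parameter as an illustrative consumer:

```yaml
Resources:
  SearchUrlParam:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /myapp/search-url   # illustrative name
      Type: String
      # Pulls the value exported as myapp:search-url by the other stack
      Value: !ImportValue myapp:search-url
```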
To add to bwinant's answer, ${cf:<stack name>.<output name>} does not work if you want to reference a variable in another stack which is located in another region. There is a plugin to achieve this called serverless-plugin-cloudformation-cross-region-variables. You can use it like so
plugins:
  - serverless-plugin-cloudformation-cross-region-variables

custom:
  myVariable: ${cfcr:ca-central-1:my-other-stack:MyVariable}

How can I use environmental variables on AWS Lambda?

I'm writing an application which I want to run as an AWS Lambda function but also adhere to the Twelve-Factor app guidelines. In particular Part III. Config which requires the use of environmental variables for configuration.
However, I cannot find a way to set environmental variables for AWS Lambda instances. Can anyone point me in the right direction?
If it isn't possible to use environmental variables can you please recommend a way to use environmental variables for local development and have them transformed to a valid configuration system that can be accessed using the application code in AWS.
Thanks.
As of November 18, 2016, AWS Lambda supports environment variables.
Environment variables can be specified using both the AWS console and the AWS CLI. This is how you would create a Lambda function with an LD_LIBRARY_PATH environment variable using the AWS CLI:
aws lambda create-function \
  --region us-east-1 \
  --function-name myTestFunction \
  --zip-file fileb://path/package.zip \
  --role role-arn \
  --environment Variables={LD_LIBRARY_PATH=/usr/bin/test/lib64} \
  --handler index.handler \
  --runtime nodejs4.3 \
  --profile default
Perhaps the 'custom environment variables' feature of node-lambda would address your concerns:
https://www.npmjs.com/package/node-lambda
https://github.com/motdotla/node-lambda
"AWS Lambda doesn't let you set environment variables for your function, but in many cases you will need to configure your function with secure values that you don't want to check into version control, for example a DB connection string or encryption key. Use the sample deploy.env file in combination with the --configFile flag to set values which will be prepended to your compiled Lambda function as process.env environment variables before it gets uploaded to S3."
There is no way to configure environment variables for Lambda execution, since each invocation is disjoint and no state information is stored. However, there are ways to achieve what you want.
AWS credentials: you can avoid storing these in environment variables. Instead, grant the privileges to your Lambda execution (LambdaExec) role. In fact, AWS recommends using roles instead of AWS credentials.
Database details: one suggestion is to store them in a well-known file in a private bucket. Lambda can download that file when it is invoked and read the contents, which can contain database details and other information. Since the bucket is private, others cannot access the file. The LambdaExec role needs IAM privileges to access the private bucket.
AWS just added support for configuration of Lambda functions via environment variables.
Take a look here
We also had this requirement for our Lambda function and we "solved" it by generating an env file on our CI platform (in our case CircleCI). This file gets included in the archive that gets deployed to Lambda.
Now in your code you can include this file and use the variables.
The script that I use to generate a JSON file from CircleCI environment variables is:
cat >dist/env.json <<EOL
{
  "CLIENT_ID": "$CLIENT_ID",
  "CLIENT_SECRET": "$CLIENT_SECRET",
  "SLACK_VERIFICATION_TOKEN": "$SLACK_VERIFICATION_TOKEN",
  "BRANCH": "$CIRCLE_BRANCH"
}
EOL
I like this approach because this way you don't have to include environment specific variables in your repository.
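On the Lambda side, the generated file can then be read at cold start; a minimal sketch in Python (the dist/env.json path matches the CI script above):

```python
import json

def load_config(path="dist/env.json"):
    # Parse the JSON file that CI generated and bundled into the
    # deployment archive alongside the function code.
    with open(path) as f:
        return json.load(f)
```

Calling `CONFIG = load_config()` at module level runs it once per container, after which values are available as `CONFIG["CLIENT_ID"]` and so on.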
I know it has been a while, but I didn't see a solution that works from the AWS Lambda console.
STEPS:
1. In your AWS Lambda function page, look for "Environment variables" and click "Edit".
2. For the "Key", type "LD_LIBRARY_PATH".
3. For the "Value", type "/opt/python/lib".
Step 3 assumes that you are using Python as your runtime environment, and also that your uploaded Layer has its "lib" folder in the following structure:
python/lib
This solution works for the error:
/lib/x86_64-linux-gnu/libz.so.1: version 'ZLIB_1.2.9' not found
assuming the correct libray file is put in the "lib" folder and that the environment variable is set like above.
PS: If you are unsure about the path in step 3, just look for the error in your console, and you will be able to see where your layer's "lib" folder is at runtime.