I'm writing an application which I want to run as an AWS Lambda function but also adhere to the Twelve-Factor App guidelines, in particular Part III (Config), which requires configuration to be supplied through environment variables.
However, I cannot find a way to set environment variables for AWS Lambda instances. Can anyone point me in the right direction?
If it isn't possible, can you recommend a way to use environment variables for local development and have them transformed into some valid configuration mechanism that the application code can read once it is running in AWS?
As of November 18, 2016, AWS Lambda supports environment variables.
Environment variables can be specified using both the AWS console and the AWS CLI. This is how you would create a Lambda function with an LD_LIBRARY_PATH environment variable using the AWS CLI:
aws lambda create-function \
  --region us-east-1 \
  --function-name myTestFunction \
  --zip-file fileb://path/package.zip \
  --role role-arn \
  --environment Variables={LD_LIBRARY_PATH=/usr/bin/test/lib64} \
  --handler index.handler \
  --runtime nodejs4.3 \
  --profile default
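Inside the function, the variable is then read the usual way. For example, in a Python handler (the CLI example above uses the Node.js runtime, where it would be process.env.LD_LIBRARY_PATH; Python is used here just for illustration):

import os

def handler(event, context):
    # The variable set at creation time is exposed to the runtime as a
    # normal process environment variable.
    lib_path = os.environ.get("LD_LIBRARY_PATH", "")
    return {"LD_LIBRARY_PATH": lib_path}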
Perhaps the 'custom environment variables' feature of node-lambda would address your concerns:
https://www.npmjs.com/package/node-lambda
https://github.com/motdotla/node-lambda
"AWS Lambda doesn't let you set environment variables for your function, but in many cases you will need to configure your function with secure values that you don't want to check into version control, for example a DB connection string or encryption key. Use the sample deploy.env file in combination with the --configFile flag to set values which will be prepended to your compiled Lambda function as process.env environment variables before it gets uploaded to S3."
There is no way to configure environment variables for Lambda execution, since each invocation is disjoint and no state information is stored. However, there are ways to achieve what you want.
AWS credentials: you can avoid storing these in environment variables. Instead, grant the necessary privileges to your Lambda execution (LambdaExec) role. In fact, AWS recommends using roles instead of stored AWS credentials.
Database details: one suggestion is to store them in a well-known file in a private bucket. Lambda can download that file when it is invoked and read the contents, which can contain database details and other information. Since the bucket is private, others cannot access the file. The LambdaExec role needs IAM privileges to access the private bucket.
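A minimal sketch of that pattern for a Python function; the bucket and key names are hypothetical placeholders, and the execution role needs s3:GetObject on the object:

import json
import boto3

# The client authenticates via the Lambda execution role; no stored credentials.
s3 = boto3.client("s3")

def handler(event, context):
    # Hypothetical bucket/key holding the configuration file.
    obj = s3.get_object(Bucket="my-private-config-bucket", Key="config.json")
    config = json.loads(obj["Body"].read())
    # config can now carry database details, e.g. config["db_host"].
    return {"loaded_keys": list(config)}

In practice you would cache the parsed config in a module-level variable so the file is fetched once per container rather than on every invocation.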
AWS just added support for configuring Lambda functions via environment variables.
Take a look here
We also had this requirement for our Lambda function, and we "solved" it by generating an env file on our CI platform (in our case CircleCI). This file gets included in the archive that is deployed to Lambda.
Now in your code you can include this file and use the variables.
The script that I use to generate a JSON file from CircleCI environment variables is:
cat >dist/env.json <<EOL
{
  "CLIENT_ID": "$CLIENT_ID",
  "CLIENT_SECRET": "$CLIENT_SECRET",
  "SLACK_VERIFICATION_TOKEN": "$SLACK_VERIFICATION_TOKEN",
  "BRANCH": "$CIRCLE_BRANCH"
}
EOL
I like this approach because this way you don't have to include environment specific variables in your repository.
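For instance, if the function runtime were Python (the same idea works in Node.js with require('./env.json')), reading the generated file is a few lines; the path assumes dist/ is the root of the deployed archive:

import json

# env.json was written into the archive by the CircleCI step above.
with open("env.json") as f:
    ENV = json.load(f)

CLIENT_ID = ENV["CLIENT_ID"]  # keys match those emitted by the CI script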
I know it has been a while, but I didn't see a solution that works from the AWS Lambda console.
STEPS:
1. In your AWS Lambda function's configuration, look for "Environment variables" and click on "Edit";
2. For the "Key", type "LD_LIBRARY_PATH";
3. For the "Value", type "/opt/python/lib".
Step 3 assumes that you are using Python as your runtime environment and that your uploaded Layer has its "lib" folder in the following structure:
python/lib
This solution works for the error:
/lib/x86_64-linux-gnu/libz.so.1: version 'ZLIB_1.2.9' not found
assuming the correct library file is put in the "lib" folder and that the environment variable is set as above.
PS: If you are unsure about the path in step 3, just look for the error in your console, and you will be able to see where your layer's "lib" folder is at runtime.
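If you prefer to check the path programmatically instead, a throwaway diagnostic handler like this sketch lists what the layer actually mounts under /opt at runtime:

import os

def handler(event, context):
    # Layers are extracted under /opt; walk it to see where "lib" ends up.
    files = []
    for root, _dirs, names in os.walk("/opt"):
        files.extend(os.path.join(root, n) for n in names)
    print("\n".join(files))
    print("LD_LIBRARY_PATH =", os.environ.get("LD_LIBRARY_PATH"))
    return {"file_count": len(files)}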
As I understand it, the boto3 module has to be configured (to specify aws_access_key_id and aws_secret_access_key) before I can use it to access any AWS service.
According to the documentation, the three ways of configuring it are:
1. A Config object that's created and passed as the config parameter when creating a client
2. Environment variables
3. The ~/.aws/config file
However, in the examples I have read, there is no need to configure anything when the code runs directly on AWS Lambda. Moreover, there are no environment variables and I could not find the config file. How is boto3 configured on AWS Lambda?
"there are no environment variables"
Yes, there are. They are listed here. Each function has access to many environment variables, including:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN – The access keys obtained from the function's execution role.
So boto3 takes its credentials from these environment variables, which are populated from the execution role that your function assumes.
When you create an AWS Lambda function, you select an IAM Role that the function will use.
Your code within the function will be automatically supplied the credentials associated with that IAM Role. There is no need to provide any credentials. (Think of it as being the same way that software running on an Amazon EC2 instance receives credentials from an IAM Role.)
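Concretely, that means a handler can create clients with no explicit configuration at all; a minimal sketch (assuming the role allows s3:ListAllMyBuckets):

import boto3

def handler(event, context):
    # No keys and no config file: boto3 reads AWS_ACCESS_KEY_ID,
    # AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN, which Lambda populates
    # from the function's execution role.
    s3 = boto3.client("s3")
    return {"buckets": [b["Name"] for b in s3.list_buckets()["Buckets"]]}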
AWS Lambda functions have an option to upload the code as a file from S3. I have a successfully running Lambda function whose code is taken from a zip file in an S3 bucket. However, any time you want to update this code, you need to either manually edit the code inline within the Lambda console, or upload a new zip file to S3 and then go into the Lambda console and manually re-upload the file from S3. Is there any way to link the Lambda function to a file in S3 so that it automatically updates its function code when you update the code file (or zip file) in S3?
Lambda doesn't actually reference the S3 code when it runs, only when it sets up the function. It effectively takes a copy of the code in your bucket and then runs that copy. So while there isn't a direct way to get the Lambda function to automatically run the latest code in your bucket, you can write a small script that updates the function code using SDK methods. I don't know which language you might want to use, but, for example, you could write a script that calls the AWS CLI to update the function code. See https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-code.html
Updates a Lambda function's code.
The function's code is locked when you publish a version. You can't
modify the code of a published version, only the unpublished version.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
Synopsis
update-function-code
  --function-name <value>
  [--zip-file <value>]
  [--s3-bucket <value>]
  [--s3-key <value>]
  [--s3-object-version <value>]
  [--publish | --no-publish]
  [--dry-run | --no-dry-run]
  [--revision-id <value>]
  [--cli-input-json <value>]
  [--generate-cli-skeleton <value>]
You could do similar things using Python or PowerShell as well, such as using
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.update_function_code
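A minimal boto3 version of such an update script might look like this; the function, bucket, and key names are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Placeholder names: substitute your own function, bucket, and zip key.
response = lambda_client.update_function_code(
    FunctionName="myTestFunction",
    S3Bucket="my-code-bucket",
    S3Key="sourcecode.zip",
)
print("Updated:", response["LastModified"], response["CodeSha256"])

One way to automate this fully would be to put such a script in a second Lambda function triggered by an S3 event notification on the bucket, so the target function is updated whenever the zip is overwritten.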
You can set up an AWS CodeDeploy pipeline to get your code built and deployed on commit to your code repository (GitHub, Bitbucket, etc.).
CodeDeploy is a deployment service that automates application
deployments to Amazon EC2 instances, on-premises instances, serverless
Lambda functions, or Amazon ECS services.
Also, I wanted to add: if you want to go a more unattended route for deploying your updated code to Lambda, use this flow in your CodePipeline:
Source -> Code Build (npm installs, zipping, etc.) -> S3 Upload (sourcecode.zip in S3 bucket) -> Code Build (another build just for aws lambda update-function-code)
Make sure the role for the last stage has both the s3:GetObject and lambda:UpdateFunctionCode permissions attached to it.
Is it possible to provide the credentials in each request, something like
aws sns create-topic my_topic --ACCESS-KEY XXXX --SECRET-KEY XXXX
instead of doing aws configure before I make the call?
I know that credential management can be done using --profile (see Using multiple profiles), but that requires me to save the credentials, which I cannot do. I depend on the user to provide the keys as parameter input. Is that possible?
I believe the closest option to what you are looking for would be to set the credentials as environment variables before invoking the AWS CLI.
One option is to export the environment variables that control the credentials and then call the desired CLI. The following works for me in bash:
$ export AWS_ACCESS_KEY_ID=AKIXXXXXXXXXXXXXXXX AWS_SECRET_ACCESS_KEY=YhTYxxxxxxxxxxxxxxVCSi; aws sns create-topic my_topic
You may also want to take a look at: Configuration Settings and Precedence
There is another way. Instead of exporting, just prefix the command with the variables:
AWS_ACCESS_KEY_ID=AAAA AWS_SECRET_ACCESS_KEY=BBB aws ec2 describe-regions
This will ensure that the credentials are set only for the command.
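For completeness, the SDK analogue: if the user-supplied keys arrive in code rather than in a shell, boto3 accepts them per session. A sketch with obviously fake keys:

import boto3

# Keys supplied by the user at runtime; never hard-code real credentials.
session = boto3.session.Session(
    aws_access_key_id="AKIXXXXXXXXXXXXXXXX",
    aws_secret_access_key="YhTYxxxxxxxxxxxxxxVCSi",
)
sns = session.client("sns")
sns.create_topic(Name="my_topic")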
Your best bet would be to use an IAM role for the Amazon EC2 instance. That way you don't need to worry about the credentials at all, and the keys are rotated periodically.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
I am trying to create a Lambda function in a particular region using the AWS CLI, but I am not sure how. Looking at this doc, I couldn't find any parameter related to region: http://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html
The region is a common option to all AWS CLI commands. If you want to explicitly include the region in your command, simply include --region us-east-1, for example, to run your command in the us-east-1 region.
If this parameter is not specified explicitly, it will be implicitly derived from your configuration. This could be environment variables, your CLI's config file, or even inherited from an IAM instance profile.
A safe command to verify this is aws lambda list-functions. This is a read-only command that lists your functions; it will only list functions in the region that was implicitly supplied via your configuration. You can explicitly supply a region and observe that the results change if you have functions in one region but not the other.
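The same precedence applies in the SDKs; for example, boto3 accepts an explicit region per client and otherwise falls back to the environment or config file (a sketch):

import boto3

# Explicit region, the boto3 analogue of --region us-east-1 on the CLI.
client = boto3.client("lambda", region_name="us-east-1")
print([f["FunctionName"] for f in client.list_functions()["Functions"]])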
Further Reading
AWS Documentation - Configuring the AWS Command Line Interface
AWS Documentation - Configuration and Credential Files
AWS Documentation - AWS CLI Options
I recently moved my app to Elastic Beanstalk. I am running Symfony3, which has a mandatory parameters.yml file that has to be populated with environment variables.
I'd like to wget the parameters.yml from a private S3 bucket, limiting access to my instances only.
I know I can set the environment variables directly on the environment, but I have some very sensitive values there, and environment variables get leaked into my logging system, which is very bad.
I also have multiple environments, such as workers, using the same environment variables, and copy-pasting them is quite annoying.
So I am wondering if it's possible to have the app wget the file on deploy. I know how to do that part, but I can't seem to configure the S3 bucket to only allow access from my instances.
Yep, that definitely can be done; there are different ways, depending on what approach you want to take. I would suggest using .ebextensions to create an IAM role, grant that role access to your bucket, and then, after the package is unzipped on the instance, copy the object from S3 using the instance role:
1. Create a custom IAM role using the AWS console or .ebextensions custom resources, and grant that role access to the objects in your bucket.
2. Using the above-mentioned .ebextensions, set aws:autoscaling:launchconfiguration in option_settings to specify the instance profile you created before.
3. Again using .ebextensions, use the container_commands option to run an aws s3 cp command.