The following is how I'm connecting. Is there any way in Lambda to set environment variables that aren't visible in plain text and would ideally be shared across several Lambdas? Ideally something in another AWS service that I could access with the SDK and use across all AWS services?
var MYSQL = require('mysql');
var AWS = require('aws-sdk');

exports.handler = (event, context, callback) => {
    // Credentials are currently hard-coded here, which is what I want to avoid.
    var connection = MYSQL.createConnection({
        host     : '127.0.0.1',
        port     : '3306',
        user     : 'myuser',
        password : 'mypass',
        database : 'mydb'
    });
    connection.connect();
    // ... queries go here ...
};
I'm afraid the earlier answers are somewhat out of date. The AWS-recommended way of storing this sort of data for Lambda, such as connection strings, is Systems Manager Parameter Store:
AWS Systems Manager provides a centralized store to manage your
configuration data, whether plain-text data such as database strings
or secrets such as passwords. This enables you to separate your
secrets and configuration data from your code.
See also: https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/
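For a Node.js Lambda like the one in the question, a minimal sketch of reading the password from Parameter Store might look like the following. The parameter name /myapp/db/password is made up for illustration, and the function's execution role would need ssm:GetParameter plus kms:Decrypt permissions:

var AWS = require('aws-sdk');
var MYSQL = require('mysql');
var ssm = new AWS.SSM();

exports.handler = (event, context, callback) => {
    // Fetch the SecureString parameter and have SSM decrypt it with KMS in one call.
    ssm.getParameter({ Name: '/myapp/db/password', WithDecryption: true }, (err, data) => {
        if (err) return callback(err);
        var connection = MYSQL.createConnection({
            host     : '127.0.0.1',
            port     : '3306',
            user     : 'myuser',
            password : data.Parameter.Value,
            database : 'mydb'
        });
        connection.connect();
        // ... run queries, then connection.end() and callback()
    });
};

Because the parameter lives outside the function, the same value can be read by as many Lambdas as you like.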
I would store that in either a DynamoDB table or an S3 bucket. You can assign an IAM role to the Lambda to allow access to these - maybe read-only.
Alternatively, Lambdas now have environment variables, like Elastic Beanstalk, and you could set it that way. They can be encrypted, though that adds some complexity too.
You can now use encrypted environment variables.
Behind the scenes they will use KMS to encrypt the values and then will allow your Lambda functions to use them as if they were plain text within the container context.
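As a rough illustration of that approach, a handler can decrypt an encrypted environment variable once per container and cache the result. DB_PASSWORD is an assumed variable name holding the base64 ciphertext produced by the encryption helpers:

var AWS = require('aws-sdk');
var kms = new AWS.KMS();

var decryptedPassword; // cached for the lifetime of the container

exports.handler = (event, context, callback) => {
    if (decryptedPassword) {
        return doWork(event, callback);
    }
    kms.decrypt({ CiphertextBlob: Buffer.from(process.env.DB_PASSWORD, 'base64') }, (err, data) => {
        if (err) return callback(err);
        decryptedPassword = data.Plaintext.toString('ascii');
        doWork(event, callback);
    });
};

function doWork(event, callback) {
    // connect to the database (or whatever needs the secret) using decryptedPassword
    callback(null, 'done');
}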
I am connecting my app to a third-party email service using the registered API key.
Since it is sensitive information, I would like to store it somewhere encrypted and retrieve it from there.
As I am already using AWS Lambda, is it better for this use case to store the API key in DynamoDB or in an S3 bucket?
Parameter Store is also a good option. It can store encrypted data and is easier to manage than Secrets Manager.
https://aws.amazon.com/en/systems-manager/features/
For just storing an API key, neither S3 nor DynamoDB is the best option.
The simplest solution is a SecureString parameter in Parameter Store.
Alternatively, you can use an encrypted Lambda environment variable if you want to encrypt with a specific KMS key. Then, in your Lambda code, you decrypt the environment variable.
If you take the second approach in many Lambdas, consider putting the decryption code in a Lambda layer.
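As a sketch of that layer idea (the module name secret-helper and the variable THIRD_PARTY_API_KEY are made up), the layer could export a small decryption helper that every function requires:

// In the layer, e.g. nodejs/node_modules/secret-helper/index.js (name is made up):
var AWS = require('aws-sdk');
var kms = new AWS.KMS();

exports.decryptEnv = function (name) {
    return kms.decrypt({ CiphertextBlob: Buffer.from(process.env[name], 'base64') })
        .promise()
        .then(function (data) { return data.Plaintext.toString('ascii'); });
};

// In each Lambda that has the layer attached:
var secretHelper = require('secret-helper');

exports.handler = async (event) => {
    var apiKey = await secretHelper.decryptEnv('THIRD_PARTY_API_KEY');
    // ... call the email service with apiKey
};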
For my future projects, I would store secrets in the SSM Parameter Store and make them available to my Lambdas, still encrypted, during the deployment phase. The Lambdas can then use the KMS key to decrypt them at runtime.
Parameter Store has a limit of 120 requests per second; resolving the secrets at deploy time rather than on every invocation keeps us from hitting that limit.
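If you do resolve parameters at runtime instead, another common way to stay under that limit is to fetch the value once per container and cache it. A minimal sketch, with a made-up parameter name:

var AWS = require('aws-sdk');
var ssm = new AWS.SSM();

var cachedSecret; // survives across invocations of a warm container

async function getSecret() {
    if (!cachedSecret) {
        var result = await ssm.getParameter({ Name: '/myapp/secret', WithDecryption: true }).promise();
        cachedSecret = result.Parameter.Value;
    }
    return cachedSecret;
}

exports.handler = async (event) => {
    var secret = await getSecret();
    // ... use the secret
};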
I am using AWS Amplify to set up an AppSync GraphQL API. I have a schema with an @model annotation, and I am trying to write a Lambda resolver that will read/write to the DynamoDB table that @model generates. However, when I attempt to test locally using amplify mock, my JS function throws
error { UnknownEndpoint: Inaccessible host: `dynamodb.us-east-1-fake.amazonaws.com'. This service may not be available in the `us-east-1-fake' region.
I can't seem to find much documentation around this use case at all (most examples of Lambda resolvers read from other tables / APIs that are not part of the Amplify app), so any pointers are appreciated. Is running this type of setup even supported, or do I have to push to AWS in order to test?
New Answer:
Amplify now has documentation on this use case: https://docs.amplify.aws/cli/usage/mock#connecting-to-a-mock-model-table
You can set environment variables for mock that will point the DDB client in the mock Lambda to the local DDB instance.
=====================================================================
Original Answer:
After some digging into the Amplify CLI code, I have found a solution that will work for now.
Here is where amplify mock initializes DynamoDB Local. As you can see, it does not set the --sharedDb flag, which, based on the docs, means that the created database files will be prefixed with the access key id of the request and then the region. The access key id of requests from Amplify will be "fake" and the region is "us-fake-1", as defined here. Furthermore, the port of the DynamoDB Local instance started by Amplify is 62224, defined here.
Therefore, to connect to the tables that are created by Amplify, the following DynamoDB config is needed:
const AWS = require('aws-sdk');

const ddb = new AWS.DynamoDB({
    region: 'us-fake-1',
    endpoint: "http://172.16.123.1:62224/",
    accessKeyId: "fake",
    secretAccessKey: "fake"
});
If you want to use the AWS CLI with the tables created by Amplify, you'll have to create a new profile with the region and access keys above.
I'll still need to do some additional work to figure out a good way to have those config values switch between the local mock values and the actual ones, but this unblocks local testing for now.
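One possible way to do that switching, sketched here with a made-up MOCK_DYNAMODB_ENDPOINT variable that you would only set when running amplify mock:

var AWS = require('aws-sdk');

// MOCK_DYNAMODB_ENDPOINT is a made-up name: set it (e.g. to http://172.16.123.1:62224/)
// only when running under amplify mock, and leave it unset in the cloud.
var dynamoDb = process.env.MOCK_DYNAMODB_ENDPOINT
    ? new AWS.DynamoDB.DocumentClient({
          region: 'us-fake-1',
          endpoint: process.env.MOCK_DYNAMODB_ENDPOINT,
          accessKeyId: 'fake',
          secretAccessKey: 'fake'
      })
    : new AWS.DynamoDB.DocumentClient();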
As for my other question about where the AWS::Region of "us-east-1-fake" was being set: it gets set here, but it does not appear to be used anywhere else. That is, it is set as a placeholder value when running amplify mock, but using it as a region elsewhere for local testing doesn't seem to work.
Please try the settings below; they're working fine for me:
const AWS = require('aws-sdk');

// Local
const dynamoDb = new AWS.DynamoDB.DocumentClient({
    region: 'us-fake-1',
    endpoint: "http://localhost:62224/",
    accessKeyId: "fake",
    secretAccessKey: "fake"
});

// Live
// const dynamoDb = new AWS.DynamoDB.DocumentClient();
Your DynamoDB host is incorrect: dynamodb.us-east-1-fake is not a valid host. Please update it with a real DynamoDB host name.
If you are running locally, set up aws configure on the CLI first.
I was able to connect to RDS successfully, like any other database connection.
I use Spring Data JPA (repositories) to do CRUD operations on a Postgres DB.
Currently I provide the DB URL and the credentials in the properties file:
spring:
  datasource:
    url: jdbc:postgresql://<rds-endpoint>:5432/<dbschema>
    username: <dbuser>
    password: <dbpassword>
However, this is not an option when connecting to production or preproduction.
What is the best practice here?
Does AWS provide any built-in mechanism to read these details from an endpoint, as in the case of accessing S3?
My intention is to not expose the password.
Several options are available to you:
Use the recently announced IAM access to Postgres RDS
Use Systems Manager Parameter Store to store the password
Use Secrets Manager to store the password and automatically rotate credentials
For 2 and 3, look up the password on application start in Spring using a PropertyPlaceholderConfigurer and the AWSSimpleSystemsManagement client (GetParameter request). Systems Manager can proxy requests to Secrets Manager so you keep a single interface in your code for accessing parameters.
IAM authentication is more secure in that:
If using EC2 instance profiles, access to the database uses short lived temporary credentials.
If not on EC2 you can generate short lived authentication tokens.
The password is not stored in your configuration.
If you have a separate database team they can manage access independent of the application user.
Removing access can be done via IAM.
Another generic option I found was to use AWS Secrets Manager
(doc link)
An RDS-specific solution is to connect to the DB instance using the AWS SDK with IAM database authentication (IAMDBAuth).
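For illustration, here is roughly what generating such a short-lived authentication token looks like with the Node.js SDK (the Java SDK has an equivalent token generator); the endpoint, port, user and region are placeholders:

var AWS = require('aws-sdk');

var signer = new AWS.RDS.Signer({
    region: 'us-east-1',
    hostname: '<rds-endpoint>',
    port: 5432,
    username: '<dbuser>'
});

signer.getAuthToken({}, function (err, token) {
    if (err) throw err;
    // Use `token` as the password in your Postgres connection (SSL is required).
});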
My AWS Lambda function is written in Java. I am getting data from DynamoDB by providing static credentials, like below:
new BasicAWSCredentials(ACCESSKEY, SECRETKEY)
However, when I try to define my services in AWS CloudFormation, I could not find any way to change these access key and secret key credentials. What is the best way to manage these credentials, given that they are specific to each account and embedded in the Java code?
Although you could use the context or download a file to pass credentials at runtime, you should not use explicit hard-coded credentials, as those are harder to handle and rotate securely.
It is easier and safer to use roles, as described in the lambda permission model: http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html
Use explicit credentials only outside AWS (on your dev machine, for example), and even then do not hard-code them; use environment variables or CLI profiles.
I have a lambda function configured through the API Gateway that is supposed to hit an external API via Node (ex: Twilio). I don't want to store the credentials for the functions right in the lambda function though. Is there a better place to set them?
The functionality to do this was probably added to Lambda after this question was posted.
AWS documentation recommends using environment variables to store sensitive information. They are encrypted at rest by default using the AWS-managed key (aws/lambda) when you create a Lambda function using the AWS Lambda console.
This leverages AWS KMS and allows you to either use the key determined by AWS or select your own KMS key (by selecting Enable encryption helpers); you need to have created that key in advance.
From AWS DOC 1...
"When you create or update Lambda functions that use environment variables, AWS Lambda encrypts them using the AWS Key Management Service. When your Lambda function is invoked, those values are decrypted and made available to the Lambda code.
The first time you create or update Lambda functions that use environment variables in a region, a default service key is created for you automatically within AWS KMS. This key is used to encrypt environment variables. However, should you wish to use encryption helpers and use KMS to encrypt environment variables after your Lambda function is created, then you must create your own AWS KMS key and choose it instead of the default key. The default key will give errors when chosen."
The default key certainly does 'give errors when chosen' - which makes me wonder why they put it into the dropdown at all.
Sources:
AWS Doc 1: Introduction: Building Lambda Functions » Environment Variables
AWS Doc 2: Create a Lambda Function Using Environment Variables To Store Sensitive Information
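Applied to the Twilio-style case in the question, the handler can then simply read the values from process.env (the variable names below are assumptions; if you used the encryption helpers, decrypt the values with KMS first, as described above):

exports.handler = async (event) => {
    // TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN are assumed names configured on the function.
    var accountSid = process.env.TWILIO_ACCOUNT_SID;
    var authToken = process.env.TWILIO_AUTH_TOKEN;
    // ... call the external API with these credentials
};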
While I haven't done it myself yet, you should be able to leverage AWS KMS to encrypt/decrypt API keys from within the function, granting the Lambda role access to the KMS keys.
Any storage or database service on AWS will be able to solve your problem here. The question is what you are already using in your current AWS Lambda function. Based on that, and on the following considerations:
If you need it fast and cost is not an issue, use Amazon DynamoDB
If you need it fast and mind the cost, use Amazon ElastiCache (Redis or Memcache)
If you are already using some relational database, use Amazon RDS
If you are not using anything and don't need it fast, use Amazon S3
In any case, you need to create some security policy (either IAM role or S3 bucket policy) to allow exclusive access between Lambda and your choice of storage / database.
Note: Amazon VPC support for AWS Lambda is around the corner, therefore any solution you choose, make sure it's in the same VPC with your Lambda function (learn more at https://connect.awswebcasts.com/vpclambdafeb2016/event/event_info.html)
I assume you're not referring to AWS credentials, but rather the external API credentials?
I don't know that it's a great place, but I have found posts on the AWS forums where people are putting credentials on S3.
It's not your specific use-case, but check out this forum thread.
https://forums.aws.amazon.com/thread.jspa?messageID=686261
If you put the credentials on S3, just make sure that you secure it properly. Consider making it available only to a specific IAM role that is only assigned to that Lambda function.
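If you go that route, a minimal sketch of reading a credentials file from S3 might look like this (bucket and key names are placeholders):

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = async (event) => {
    // Scope the Lambda's role to s3:GetObject on this object only.
    var obj = await s3.getObject({ Bucket: 'my-config-bucket', Key: 'credentials.json' }).promise();
    var creds = JSON.parse(obj.Body.toString('utf-8'));
    // ... use creds.apiKey, creds.apiSecret, etc.
};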
As of 2022, we have AWS Secrets Manager for storing sensitive data like database credentials, API tokens, auth keys, etc.
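A minimal sketch of reading such a secret from a Node.js Lambda (the secret name prod/myapp/db is a placeholder; the function's role needs secretsmanager:GetSecretValue):

var AWS = require('aws-sdk');
var secretsManager = new AWS.SecretsManager();

exports.handler = async (event) => {
    var result = await secretsManager.getSecretValue({ SecretId: 'prod/myapp/db' }).promise();
    var secret = JSON.parse(result.SecretString);
    // ... use secret.username / secret.password
};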