In AWS Lambda, where can I securely store API Credentials?

I have a Lambda function configured through API Gateway that is supposed to hit an external API via Node (e.g. Twilio). I don't want to store the credentials right in the Lambda function, though. Is there a better place to set them?

The functionality to do this was probably added to Lambda after this question was posted.
AWS documentation recommends using environment variables to store sensitive information. They are encrypted at rest (by default) using the AWS-managed key (aws/lambda) when you create a Lambda function using the AWS Lambda console.
This leverages AWS KMS and allows you either to use the AWS-managed key or to select your own KMS key (by selecting Enable encryption helpers); you need to have created that key in advance.
From AWS DOC 1...
"When you create or update Lambda functions that use environment variables, AWS Lambda encrypts them using the AWS Key Management Service. When your Lambda function is invoked, those values are decrypted and made available to the Lambda code.
The first time you create or update Lambda functions that use environment variables in a region, a default service key is created for you automatically within AWS KMS. This key is used to encrypt environment variables. However, should you wish to use encryption helpers and use KMS to encrypt environment variables after your Lambda function is created, then you must create your own AWS KMS key and choose it instead of the default key. The default key will give errors when chosen."
The default key certainly does 'give errors when chosen' - which makes me wonder why they put it into the dropdown at all.
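For the default setup nothing special is needed in code; the decrypted values are simply present in process.env when the handler runs. A minimal sketch, using hypothetical variable names for a Twilio-style integration:

```javascript
// index.js - minimal sketch; TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN are
// hypothetical environment variable names set in the Lambda configuration.
// With the default at-rest encryption, Lambda decrypts them before the
// handler runs, so they can be read directly from process.env.
exports.handler = async (event) => {
  const accountSid = process.env.TWILIO_ACCOUNT_SID;
  const authToken = process.env.TWILIO_AUTH_TOKEN;

  if (!accountSid || !authToken) {
    throw new Error('Missing API credentials in environment variables');
  }

  // ... call the external API with these credentials ...
  return { statusCode: 200 };
};
```

If you instead use the console's encryption helpers to encrypt a value with your own key, the variable holds ciphertext and your code has to decrypt it with KMS (see the KMS sketch in the next answer).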
Sources:
AWS Doc 1: Introduction: Building Lambda Functions » Environment Variables
AWS Doc 2: Create a Lambda Function Using Environment Variables To Store Sensitive Information

While I haven't done it myself yet, you should be able to leverage AWS KMS to encrypt/decrypt API keys from within the function, granting the Lambda role access to the KMS keys.
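A rough sketch of that approach, assuming the ciphertext was produced out of band (for example with the aws kms encrypt CLI), is stored base64-encoded in a hypothetical ENCRYPTED_API_KEY environment variable, the execution role is allowed kms:Decrypt on the key, and AWS SDK v3 is available (it is bundled with recent Node.js runtimes):

```javascript
// Sketch: decrypting a KMS-encrypted API key inside the function.
// ENCRYPTED_API_KEY is a hypothetical env var holding base64 ciphertext.
const { KMSClient, DecryptCommand } = require('@aws-sdk/client-kms');

const kms = new KMSClient({});

exports.handler = async (event) => {
  // For symmetric keys the key ID is embedded in the ciphertext,
  // so only the ciphertext blob is required here.
  const { Plaintext } = await kms.send(new DecryptCommand({
    CiphertextBlob: Buffer.from(process.env.ENCRYPTED_API_KEY, 'base64'),
  }));
  const apiKey = Buffer.from(Plaintext).toString('utf8');

  // ... use apiKey to call the external API ...
  return { statusCode: 200 };
};
```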

Any storage or database service on AWS can solve your problem here. The question is what you are already using in your current AWS Lambda function. Based on that, consider the following:
If you need it fast and cost is not an issue, use Amazon DynamoDB
If you need it fast and mind the cost, use Amazon ElastiCache (Redis or Memcached)
If you are already using some relational database, use Amazon RDS
If you are not using anything and don't need it fast, use Amazon S3
In any case, you need to create some security policy (either IAM role or S3 bucket policy) to allow exclusive access between Lambda and your choice of storage / database.
Note: Amazon VPC support for AWS Lambda is around the corner, so whichever option you choose, make sure it's in the same VPC as your Lambda function (learn more at https://connect.awswebcasts.com/vpclambdafeb2016/event/event_info.html)
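Whichever store you choose, the function reads the credential at runtime with the AWS SDK under a narrowly scoped role. As an illustration of the DynamoDB option, a sketch assuming a hypothetical app-secrets table with a name partition key and a value attribute, and AWS SDK v3:

```javascript
// Sketch: fetching an API credential from a DynamoDB table at invocation time.
// "app-secrets" and the item names are placeholders; the execution role
// should be limited to dynamodb:GetItem on this table.
const { DynamoDBClient, GetItemCommand } = require('@aws-sdk/client-dynamodb');

const ddb = new DynamoDBClient({});

async function getSecret(name) {
  const { Item } = await ddb.send(new GetItemCommand({
    TableName: 'app-secrets',
    Key: { name: { S: name } },
  }));
  return Item ? Item.value.S : undefined;
}

exports.handler = async () => {
  const apiKey = await getSecret('twilio-api-key');
  // ... use apiKey to call the external API ...
  return { statusCode: 200 };
};
```

DynamoDB encrypts tables at rest, but anyone who can read the table can read the key, so keep the GetItem permission tight (or additionally encrypt the value with KMS).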

I assume you're not referring to AWS credentials, but rather the external API credentials?
I don't know that it's a great place, but I have found posts on the AWS forums where people are putting credentials on S3.
It's not your specific use-case, but check out this forum thread.
https://forums.aws.amazon.com/thread.jspa?messageID=686261
If you put the credentials on S3, just make sure that you secure it properly. Consider making it available only to a specific IAM role that is only assigned to that Lambda function.
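If you go this route, a sketch of reading a credentials file from the bucket, assuming AWS SDK v3, a hypothetical bucket my-app-config with an object twilio.json, and an execution role allowed s3:GetObject on that object only:

```javascript
// Sketch: reading a credentials JSON file from a private S3 bucket.
// Bucket and key names are placeholders.
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({});

exports.handler = async () => {
  const { Body } = await s3.send(new GetObjectCommand({
    Bucket: 'my-app-config',
    Key: 'twilio.json',
  }));
  // transformToString() is available on the response body in recent SDK v3 versions.
  const creds = JSON.parse(await Body.transformToString());

  // ... use creds.accountSid / creds.authToken ...
  return { statusCode: 200 };
};
```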

As of 2022, we have AWS Secrets Manager for storing sensitive data like database credentials, API tokens, auth keys, etc.
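A minimal sketch of reading such a secret at runtime, assuming AWS SDK v3, a hypothetical secret named prod/external-api stored as a JSON string, and an execution role allowed secretsmanager:GetSecretValue on it:

```javascript
// Sketch: fetching an API token from AWS Secrets Manager.
const { SecretsManagerClient, GetSecretValueCommand } = require('@aws-sdk/client-secrets-manager');

const sm = new SecretsManagerClient({});
let cached; // parsed once, then reused across warm invocations

exports.handler = async () => {
  if (!cached) {
    const { SecretString } = await sm.send(new GetSecretValueCommand({
      SecretId: 'prod/external-api',
    }));
    cached = JSON.parse(SecretString);
  }
  // ... use cached.apiToken to call the external service ...
  return { statusCode: 200 };
};
```

Caching the parsed value at module scope means warm invocations don't call Secrets Manager again.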

Related

Best practice - how to pass SES IAM User credentials to a Fargate Task?

I've got a question regarding the handling of "hardcoded" secrets in AWS. Here is my setup:
We are running a Fargate cluster consisting of three services. One of the services (the backend) needs credentials for our SES account (mail sending) and an S3 bucket (file storage). At the moment we use AWS Secrets Manager, which holds the credentials of the ses-smtp user and the s3 user and injects the secret values as environment variables at container startup. The entire stack (except SES, which is in another region) is created from CloudFormation templates, and the required secrets are also created by a template.
And there is my "problem": at the moment the secrets template contains the hardcoded SES and S3 credentials, which is bad, since the template gets pushed to the templates bucket (it's not public, but anyway) and could potentially get committed to version control, where it would be exposed to anyone with read access to the project.
The question is: what is the best practice for passing the SES and S3 credentials to the container without exposing them anywhere?
Thanks in advance,
Al
Use IAM for SES and S3 Access
Your ECS cluster has a task role.
It is assumed by your underlying containers, which then have all of the access defined in the policies attached to that role.
AWS services access
For your SES and S3 access, create policies with restricted access and attach them to the role.
Regular credentials
For any plain-text credentials that remain, I would recommend creating them in Parameter Store and mapping them into the container in your task definition, as sketched below. That passes the secrets safely.
You should not create your Parameter Store secrets in the CloudFormation template; it is better to add them manually and reference them from wherever you need them.
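A hedged sketch of that mapping in the task definition; the parameter names, region, and account ID are placeholders. The container sees each value as an ordinary environment variable, nothing sensitive appears in the template, and the task execution role needs permission to read (and, for SecureStrings, decrypt) the parameters:

```json
{
  "containerDefinitions": [
    {
      "name": "backend",
      "secrets": [
        {
          "name": "SES_SMTP_PASSWORD",
          "valueFrom": "arn:aws:ssm:eu-central-1:123456789012:parameter/backend/ses-smtp-password"
        },
        {
          "name": "S3_USER_SECRET_KEY",
          "valueFrom": "arn:aws:ssm:eu-central-1:123456789012:parameter/backend/s3-secret-key"
        }
      ]
    }
  ]
}
```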

What is standard practice for storing private SSH keys for AWS Lambda

My Lambda function is responsible for SSH connecting to some of our EC2 instances. Currently I just have our key file stored in the Lambda's deployment package, but this is obviously not a desirable solution for production. I have already researched a couple of ways, such as storing the key in a private S3 bucket and storing it as an encrypted environment variable. However, I'm not thrilled about pulling the key from the S3 bucket all the time, and the encrypted environment variable doesn't seem like something that would carry over to future Lambda functions either. What are some other industry-standard ways of storing private keys for Lambda use?
You can store encrypted secrets in Secrets Manager or in Parameter Store. For certain types of secrets, you can have them auto-rotated in Secrets Manager. Limit which IAM roles have access to the secrets and you can reduce potential misuse.
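For example, a sketch that pulls the private key from Secrets Manager and opens a connection, assuming AWS SDK v3, a hypothetical secret ec2/deploy-key holding the PEM text, and the ssh2 npm package bundled with the deployment package (the SSH client itself is not part of the AWS SDK):

```javascript
// Sketch: fetch an SSH private key from Secrets Manager and connect to a host.
const { SecretsManagerClient, GetSecretValueCommand } = require('@aws-sdk/client-secrets-manager');
const { Client } = require('ssh2');

const sm = new SecretsManagerClient({});

exports.handler = async (event) => {
  // "ec2/deploy-key" is a placeholder secret name holding the PEM-encoded key.
  const { SecretString: privateKey } = await sm.send(
    new GetSecretValueCommand({ SecretId: 'ec2/deploy-key' })
  );

  await new Promise((resolve, reject) => {
    const conn = new Client();
    conn
      .on('ready', () => {
        // ... run commands over the connection here ...
        conn.end();
        resolve();
      })
      .on('error', reject)
      .connect({ host: event.host, username: 'ec2-user', privateKey });
  });

  return { statusCode: 200 };
};
```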
Also, be aware of options available to avoid the need to SSH to EC2 instances:
SSM Run Command
EC2 Instance Connect
SSM Session Manager

Store third-party API key in S3 bucket or DynamoDB?

I am connecting my app to a third-party email service using the registered API key.
Since it is sensitive information, I would like to store it somewhere encrypted and retrieve it from there.
As I am already using AWS Lambda, is it better for this use case to store the API key in DynamoDB or in an S3 bucket?
Parameter Store is also a good option. It can store encrypted data and is easier to manage than Secrets Manager.
https://aws.amazon.com/en/systems-manager/features/
For just storing an API key, neither S3 nor DynamoDB is the best option.
The simplest solution is a SecureString in Parameter Store.
Alternatively, you can use an encrypted Lambda environment variable if you want to encrypt with a specific KMS key, and then decrypt the variable in your Lambda code.
If you take the second approach in many Lambdas, consider putting the decryption code in a Lambda layer.
For my future projects, I would store secrets in the SSM Parameter Store and then make them available to my Lambdas as encrypted values during the deployment phase. The Lambdas can then use the KMS key to decrypt them at runtime.
Parameter Store has a limit of 120 requests per second, so resolving the values at deployment time also keeps us from hitting that limit.
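If the values are fetched at runtime instead, a common variation is to read the parameter once per container and cache it at module scope, which keeps Parameter Store traffic far below that limit. A minimal sketch, assuming AWS SDK v3 and a hypothetical parameter name:

```javascript
// Sketch: read a SecureString from Parameter Store once per container and
// cache it, so warm invocations don't call the API again.
const { SSMClient, GetParameterCommand } = require('@aws-sdk/client-ssm');

const ssm = new SSMClient({});
let apiKeyPromise; // resolved once, then reused across warm invocations

function getApiKey() {
  if (!apiKeyPromise) {
    apiKeyPromise = ssm
      .send(new GetParameterCommand({ Name: '/myapp/prod/api-key', WithDecryption: true }))
      .then((res) => res.Parameter.Value);
  }
  return apiKeyPromise;
}

exports.handler = async () => {
  const apiKey = await getApiKey();
  // ... call the third-party email service with apiKey ...
  return { statusCode: 200 };
};
```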

Is there a way to integrate S3 Bucket with Appsync without using an extra AWS Lambda for checking authorization?

My Stack:
AWS AppSync + AWS Lambda (to MongoDB) + custom OpenID Connect provider (also Lambda)
What I am trying to achieve
I would like to store some of the user data to the S3 bucket. I am hoping I can access the stored data directly from appsync, instead of calling a lambda for authorization checking every time I have to access the data.
There are mainly two types of information I want to store. For instance, userPicture and userSecret.
userPictures, which can be accessed by anyone.
As there is no access-control checking, this part can be achieved by issuing a presigned URL. No further authorization has to be done.
userSecret, which can be accessed only by the owner and by admins.
This is the part where I am trying to avoid calling another Lambda just for the authorization check. I already have the userId and role stored in $context.identity, but I still haven't figured out a way to actually perform the check.
Is there a way I can avoid the lambda overhead?
AppSync recently launched support for multiple authorization providers. So, for example, you can secure your userSecret with an OpenID Connect provider and your userPictures with an API key. Does that satisfy your use case?

AWS Lambda - What is the best way to encrypt/decrypt text and store in DynamoDB

I have a node.js Lambda function which stores items in DynamoDB.
I would like to encrypt one of the item properties before it is stored
in DynamoDB and then decrypt it in another Lambda function that retrieves items.
Is there a simple way to do this using an AWS service or a module that already exists in Lambda?
Or should I upload an external module such as CryptoJS and use that?
AWS KMS (Key Management Service) should be just right for this. You won't need any additional modules because the full AWS SDK is already available in Lambda, and you don't need to worry about exposing your keys to the Lambda code.
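A sketch of that approach with AWS SDK v3 (bundled in recent Node.js runtimes): the writer Lambda encrypts the property with a KMS key before the PutItem call, and the reader Lambda decrypts after GetItem. The key alias and table name are placeholders, and each role only needs kms:Encrypt or kms:Decrypt plus the matching DynamoDB permission:

```javascript
// Sketch: encrypt one item property with KMS before writing to DynamoDB,
// and decrypt it after reading. "alias/my-app-key" and "items" are placeholders.
const { KMSClient, EncryptCommand, DecryptCommand } = require('@aws-sdk/client-kms');
const { DynamoDBClient, PutItemCommand, GetItemCommand } = require('@aws-sdk/client-dynamodb');

const kms = new KMSClient({});
const ddb = new DynamoDBClient({});

// Writer Lambda: encrypt the sensitive property and store the ciphertext as base64.
async function putItem(id, secretValue) {
  const { CiphertextBlob } = await kms.send(new EncryptCommand({
    KeyId: 'alias/my-app-key',
    Plaintext: Buffer.from(secretValue, 'utf8'),
  }));
  await ddb.send(new PutItemCommand({
    TableName: 'items',
    Item: {
      id: { S: id },
      secret: { S: Buffer.from(CiphertextBlob).toString('base64') },
    },
  }));
}

// Reader Lambda: fetch the item and decrypt the property.
async function getItem(id) {
  const { Item } = await ddb.send(new GetItemCommand({
    TableName: 'items',
    Key: { id: { S: id } },
  }));
  const { Plaintext } = await kms.send(new DecryptCommand({
    CiphertextBlob: Buffer.from(Item.secret.S, 'base64'),
  }));
  return Buffer.from(Plaintext).toString('utf8');
}
```

Note that KMS Encrypt only accepts up to 4 KB of plaintext; for larger values the usual pattern is envelope encryption with GenerateDataKey.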