Set custom environment variables in AWS

I am using AWS SageMaker, and I have some secret keys and access keys for third-party APIs that I don't want to expose directly in code.
What options are there, such as environment variables, that let me hide these keys and use them securely, and how do I set them up?

AWS Systems Manager (SSM) Parameter Store is designed to store keys and tokens securely.
Depending on how your notebook is defined, you could use the 'env' property directly or in your training job configuration, or you could access SSM directly from SageMaker. For example, this Snowflake KB article explains how to fetch auth info from SSM: https://community.snowflake.com/s/article/Connecting-a-Jupyter-Notebook-Part-3
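A minimal boto3 sketch of the second option (assuming a SecureString parameter named /myapp/api-key already exists in Parameter Store and the notebook's execution role is allowed to read it):

```python
import boto3

# Assumes the SageMaker execution role has ssm:GetParameter permission
# on this parameter and that it was stored as a SecureString.
ssm = boto3.client("ssm")

response = ssm.get_parameter(
    Name="/myapp/api-key",   # hypothetical parameter name
    WithDecryption=True,     # decrypt the SecureString value
)
api_key = response["Parameter"]["Value"]

# Keep the key in memory; avoid printing it or writing it to disk.
```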

Related

How to securely manage environment variables for AWS Elastic Beanstalk using Terraform?

We are managing an Elastic Beanstalk application via Terraform, and are unsure of the best way to handle sensitive environment variables for our application.
Currently, we are storing these sensitive values in an AWS Secrets Manager secret. During the terraform apply step, we use an aws_secretsmanager_secret data source to load the secret. We then iterate over the key/value pairs in the secret, and create setting blocks within our aws_elastic_beanstalk_environment resource.
There are a couple of concerns we have with this approach:
We have to mark our sensitive values as nonsensitive, because Terraform does not allow the use of sensitive values as arguments to for_each. This means that the plaintext values are logged as part of our terraform plan and terraform apply steps. This is an issue in our CD pipeline, and our workaround is to redirect all logs to /dev/null.
Our sensitive values appear in plaintext in our tfstate file. We keep this file in an S3 bucket, whose access is restricted to administrators and the deployment user. This is probably not a huge issue. The values are accessible via the Secrets Manager console anyway, and access is restricted in a similar way.
Is there a better solution that others are using to manage environment variables for an elastic beanstalk app managed via terraform?
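One alternative, sketched under assumptions rather than as a definitive pattern: keep only the secret's name or ARN in the Beanstalk environment settings and have the application resolve the values at startup, so the plaintext never passes through Terraform's plan, apply, or state. Assuming a secret named myapp/eb-env whose value is a JSON map of variables:

```python
import json
import os

import boto3

# The Beanstalk setting managed by Terraform carries only a non-sensitive
# pointer to the secret, not the secret values themselves.
secret_id = os.environ.get("APP_SECRET_ID", "myapp/eb-env")  # hypothetical name

secrets = boto3.client("secretsmanager")
value = secrets.get_secret_value(SecretId=secret_id)

# Assumes the secret's value is a JSON object of key/value pairs.
for key, val in json.loads(value["SecretString"]).items():
    os.environ.setdefault(key, val)
```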

AWS amplify environment variables vs AWS secrets

I have a React app which is deployed using AWS Amplify. I'm using Google Maps in my application and I wanted to know the right place to put the Google Maps API key. I have read about AWS Amplify environment variables, where we can save the API key in key-value pairs. Also, I know that we have AWS Secrets, which is for saving private data.
What is the right approach to save the API key in my use case? Is saving the API key in Amplify environment variables safe enough? Or should I go for AWS Secrets?
The Google Maps API best practices include (depending on exactly what you are using):
Store API keys and signing secrets outside of your application’s source code.
Store API keys or signing secrets in files outside of your application's source tree.
Amplify environment variables are suited to storing:
Third-party API keys
since:
As a best practice, you can use environment variables to expose these configurations. All environment variables that you add are encrypted to prevent rogue access, so you can use them to store secret information.
So you can use them, as they are native to Amplify. AWS Secrets Manager is not natively supported by Amplify, and you would have to add extra code to your backend to make use of it.
The important thing to note is that these Amplify environment variables are only to be used by your backend service, not by front-end code.
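For a backend function, that is the difference in practice: the variable is read from the process environment on the server side and never shipped to the browser. A minimal sketch for a Python Lambda handler (GOOGLE_MAPS_API_KEY is a hypothetical variable name you would define in the Amplify console):

```python
import os

def handler(event, context):
    # Read the Amplify-provided variable from the process environment;
    # it is available to backend code only, never to browser code.
    api_key = os.environ["GOOGLE_MAPS_API_KEY"]  # hypothetical variable name
    # ... call the Maps web service server-side using api_key ...
    return {"statusCode": 200}
```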

How to store GOOGLE_APPLICATION_CREDENTIALS in an AWS ECS environment on Fargate?

We have an API app that uses Firebase Admin to send messages to devices.
Earlier, we used to specify the service account key using an environment variable like GOOGLE_APPLICATION_CREDENTIALS="path_to_file.json".
But now, since we are shifting to AWS Elastic Container Service on Fargate, I am unable to figure out how to put this file in the container for AWS ECS.
Any advice highly appreciated.
Thanks
Solved it by storing the service key as a JSON-stringified environment variable and using admin.credential.cert() instead of the default application credentials.
Refer: https://firebase.google.com/docs/reference/admin/node/admin.credential#cert
I would suggest AWS Secrets Manager instead, which is purpose-built for storing secrets. Take a look at this blog post:
https://aws.amazon.com/blogs/compute/securing-credentials-using-aws-secrets-manager-with-aws-fargate/
Even better than using environment variables, which have their own downsides, you can leverage AWS Systems Manager Parameter Store, which is a secure way to manage secrets in the AWS environment (secrets are encrypted both in transit and at rest).
You'd need to create an IAM role for Amazon ECS for your code to have access to the Parameter Store.
You may want to check this article: https://aws.amazon.com/blogs/compute/managing-secrets-for-amazon-ecs-applications-using-parameter-store-and-iam-roles-for-tasks/
Use the specific method from_service_account_info, as described here. You then pass the content of the credentials JSON file as a dictionary.
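A small Python sketch of that approach (FIREBASE_SERVICE_ACCOUNT is a hypothetical variable name; ideally the value is injected into the container from Secrets Manager or Parameter Store via the task definition rather than stored as a plain environment variable):

```python
import json
import os

from google.oauth2 import service_account

# The variable holds the JSON-stringified contents of the key file.
info = json.loads(os.environ["FIREBASE_SERVICE_ACCOUNT"])  # hypothetical name

credentials = service_account.Credentials.from_service_account_info(
    info,
    scopes=["https://www.googleapis.com/auth/firebase.messaging"],
)
```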

Automatic rotation of AWS access keys

I am looking for ways to automate the rotation of access keys (AWS credentials) for a set of users. There is a separate process that creates the access keys. I need to be able to rotate the keys in an automated way. This link explains a way to do this for a specific user. How would I be able to achieve this for a list of users? Any thoughts or recommendations?
You can use AWS Config to mark old access keys as non-compliant (https://docs.aws.amazon.com/config/latest/developerguide/access-keys-rotated.html) and then use CloudWatch Events (my article on how to do this) to run a Lambda function that deletes the old key, creates a new one, and then sends it to the user.
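A boto3 sketch of the rotation step for a list of users (the user names are hypothetical, and delivering the new key to each user is left out since it depends on your channel):

```python
import boto3

iam = boto3.client("iam")
users = ["alice", "bob"]  # hypothetical list of IAM user names

for user in users:
    # Create the replacement key first (IAM allows at most two keys per
    # user), then delete the old ones once the new key has been delivered.
    old_keys = iam.list_access_keys(UserName=user)["AccessKeyMetadata"]
    new_key = iam.create_access_key(UserName=user)["AccessKey"]

    # ... send new_key["AccessKeyId"] / new_key["SecretAccessKey"] securely ...

    for key in old_keys:
        iam.delete_access_key(UserName=user, AccessKeyId=key["AccessKeyId"])
```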
Access keys are generally used for programmatic access by applications. If these applications are running in, say, EC2, you should use IAM roles for EC2. This installs temporary credentials on the instance that are rotated automatically for you. The AWS CLI and SDKs know how to retrieve these credentials automatically, so you don't need to add them to the application either.
Other compute solutions (Lambda, ECS/EKS) also have ways to provision roles for applications.

How to manage environment-specific files in AWS

I have properties files specific to dev, test, and other environments. I have to store these files in some secure place in AWS. I am using AWS-native tools for build and deployment. Please let me know how to store these files in AWS.
There are many ways to deal with a secret in AWS, but one thing is clear: it depends on the service that will use and consume the secret.
You can explore these options:
Environment variables (the simplest way)
AWS Secrets Manager
S3 (for keeping files)
One common approach is to pass your secrets as environment variables, but in the case of AWS I would recommend going with AWS Secrets Manager.
AWS Secrets Manager is an AWS service that makes it easier for you to manage secrets. Secrets can be database credentials, passwords, third-party API keys, and even arbitrary text. You can store and control access to these secrets centrally by using the Secrets Manager console, the Secrets Manager command line interface (CLI), or the Secrets Manager API and SDKs.
Basic Secrets Manager Scenario
The following diagram illustrates the most basic scenario. It shows how you can store credentials for a database in Secrets Manager, and then use those credentials in an application that needs to access the database.
Compliance with Standards
AWS Secrets Manager has undergone auditing for these standards and can be part of your solution when you need to obtain compliance certification.
You can explore this article to learn how to read and write secrets.
If you need to maintain whole files, not just key/value secrets, you can store them in S3 and pull the files during deployment, but it is better to enable server-side encryption.
I would prefer Secrets Manager over S3 and environment variables.
You can read more about S3 here and here.
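For the read-and-write part, a minimal boto3 sketch (the secret name myapp/dev/config is hypothetical; you would create one secret per environment):

```python
import boto3

secrets = boto3.client("secretsmanager")

# Write: create the secret once for a given environment (dev/test/prod).
secrets.create_secret(
    Name="myapp/dev/config",  # hypothetical name
    SecretString='{"db_user": "app", "db_password": "example"}',
)

# Read: fetch it at deployment or application start-up time.
value = secrets.get_secret_value(SecretId="myapp/dev/config")
config = value["SecretString"]
```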
Bajjuri,
As Adil said in his answer:
AWS Secrets Manager -- if you want to store data as key/value pairs.
AWS S3 -- if you want to store files securely.
Adding to his answer, you can use AWS CodeDeploy environment variables to fetch the files according to your environment.
Let's say you have a CodeDeploy deployment group for the dev environment named "DEV" and a deployment group for the prod environment named "PROD". You can use this variable in a bash script called from the lifecycle hooks of the appspec file to fetch the files or secrets accordingly (see the sketch below).
I've been using this technique in production for a long time and it works like a charm.
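The hook script can be any executable; here is a Python sketch of the idea (DEPLOYMENT_GROUP_NAME is provided by the CodeDeploy agent to lifecycle hook scripts, while the bucket name and key layout are hypothetical):

```python
import os

import boto3

# CodeDeploy exposes the deployment group name to lifecycle hook scripts.
group = os.environ.get("DEPLOYMENT_GROUP_NAME", "DEV")  # e.g. "DEV" or "PROD"

s3 = boto3.client("s3")

# Hypothetical layout: one properties file per environment in a private bucket.
s3.download_file(
    Bucket="my-config-bucket",
    Key=f"properties/{group.lower()}.properties",
    Filename="/opt/myapp/application.properties",
)
```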