Why do people put a .env file to store all their secrets in a server? If someone hacks it, isn't the .env equally accessible as all the other files? Thanks!
You are correct that storing secrets in a .env file risks exposing them in plain text to any third party who gains access to the raw code.
As with other sensitive material, there are ways around this. A common approach is to use a secrets management system, which replaces the plain values in a .env file with values retrieved via an authenticated request.
AWS supports a couple of official services that can do this:
Secrets Manager - This service is built specifically for this purpose: you define a secret and give it either a string or JSON value, which is then retrieved via a call using the SDK. All values are encrypted with a KMS key.
Systems Manager Parameter Store - Similar to Secrets Manager: you provide a key name and give it a value. It supports both unencrypted and encrypted values (use the SecureString type for the latter).
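As a rough sketch of how an application would pull such a value at runtime instead of reading it from a .env file (the secret name `my-app/db-password` and the boto3 wiring in the comments are illustrative assumptions, not from the thread):

```python
# Sketch: read a secret from AWS Secrets Manager at runtime.
# The client is passed in so the logic can be exercised without AWS access.
def read_secret(client, secret_id):
    """Return the decrypted string value of a Secrets Manager secret."""
    response = client.get_secret_value(SecretId=secret_id)
    return response["SecretString"]

# Typical wiring (assumes boto3 is installed and credentials are configured):
#   import boto3
#   sm = boto3.client("secretsmanager", region_name="us-west-2")
#   db_password = read_secret(sm, "my-app/db-password")
```

Injecting the client also makes the code easy to unit-test with a stub, which is a nice side effect of not hard-coding the secret source.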
In addition, other services such as HashiCorp Vault provide similar functionality.
For environmental configuration a .env file can still be appropriate (e.g. enabling a feature flag), but if you want to reduce the blast radius of your application, storing secrets outside a plain text file will help reduce that risk.
That is not the main reason for using environment variables. They are, however, secure enough for storing secret values too, especially when combined with hashing methods.
Environment variables are most useful at the real production level of programming. Your application must run in different environments:
Development - the host is local; as a developer you need to test your code and set the debug variable to true to get detailed errors, which is not something you want in the production environment.
Production - the host is your domain or server IP, and you need different middleware than in the development stage.
For bigger projects there are also staging and test environments.
Plenty of things should be handled differently per environment; the database is a great example. Besides, environment variables are useful when more than one person works on the code base, since each person can configure the project for their own machine/OS using environment variables.
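A minimal Python sketch of that idea, assuming a hypothetical `APP_ENV` variable selects the environment (the names and defaults here are illustrative, not from any particular framework):

```python
import os

def load_config(env=None):
    """Choose settings based on the current environment name."""
    env = env or os.environ.get("APP_ENV", "development")
    debug = env == "development"  # verbose errors locally, never in production
    return {
        "env": env,
        "debug": debug,
        # the database is a classic per-environment difference
        "database_url": "sqlite:///dev.db" if debug else "postgresql://db.internal/app",
    }
```

Each developer (or each deployment stage) sets `APP_ENV` for their own machine, and the code base itself stays identical everywhere.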
Related
I am a beginner to AWS.
The AWS keys stored on my PC in the folder "C:\Users\USERNAME\.aws\credentials" are in a simple text file.
Q1) Why is it stored in an easily readable format?
Q2) How can I protect these credentials, or is there any way to store them encrypted?
Q2) One option is to create an environment variable within your code's operating environment to store the key. Environment variables are managed by the operating system and can be accessed via system libraries or the AWS SDK for your preferred programming language.
Q1) That's the way it is stored when you run aws configure after installing the AWS CLI.
Ways to secure it further:
Follow the access key rotation practice: even if your access key falls into the wrong hands, rotating it and creating a new access key protects you from mishaps.
Use AWS Secrets Manager to store your secrets; it also gives you the option to rotate secret values.
The user folder is generally considered private to the user. In a typical OS setup, permissions are set so that only the user has access to their home directory. Of course, anyone who gains access to that folder also has access to the keys, but that is no different from other common scenarios, such as storing SSH keys in the so-called 'hidden' .ssh/ folder.
In any case, if you are not comfortable with that, the other option is to store them wherever you feel safe, then retrieve them and temporarily add them to your user environment profile.
The environment variables are documented here: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
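For illustration, those documented variables can be set from Python before the SDK or CLI is invoked (the values below are obvious placeholders, not real keys):

```python
import os

# The AWS SDKs and CLI check these documented environment variables
# before falling back to the ~/.aws/credentials file.
os.environ["AWS_ACCESS_KEY_ID"] = "AKIAEXAMPLEKEYID"
os.environ["AWS_SECRET_ACCESS_KEY"] = "example-secret-access-key"
```

In practice you would export these in your shell profile or inject them from a secrets manager rather than hard-coding them in source.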
I'm trying to understand why having parameters stored in AWS Parameter Store (SSM) encrypted using KMS is a better solution than saving parameters in .env file on a nginx server.
As far as the nginx server setup goes, the .env file is not exposed to the public, so it can't be viewed unless someone breaks into the server.
nginx config has public folder set as a root
root /home/user/app/public;
The consensus seemed to be that if someone manages to break into the server, they will be able to see all the parameters stored in the .env file in plain text, making it less secure than Parameter Store.
But isn't that the same for AWS Parameter Store? (Main question)
In my PHP file, I load parameters from the Parameter Store using the SSM client.
e.g.
use Aws\Ssm\SsmClient;

$client = new SsmClient([
    'version' => 'latest',
    'region' => 'us-west-2',
]);
$credentials = $client->getParameters([
    'Names' => ['MY_KEY', 'MY_SECRET'],
    'WithDecryption' => true,
]);
$key = $credentials['Parameters'][0]['Value'];
$secret = $credentials['Parameters'][1]['Value'];
If someone breaks into the server, they will be able to perform these and retrieve any parameters.
So what makes SSM more secure than .env?
SSM makes it easy to coordinate values across multiple machines. How does that .env file get onto your servers? What happens if you have to change a value in it? SSM helps make that process easier. And when it's easy to replace secrets, it's more likely you will rotate them on a regular basis. AWS Secrets Manager makes that process even simpler by automating rotation. It runs a Lambda function that modifies both the secret value and what uses it (for example it can change the database password for you). So even if your secret does get leaked somehow, it's only good for a limited time.
Another reason having secrets on a separate server can be more secure is that breaking into a server doesn't always mean full control. Sometimes hackers can only access files through a misconfigured service. If there are no files containing secret information, there is nothing for them to see. Other times hackers can reach other servers in your network through a misconfigured proxy. If all your internal servers require some sort of authentication, the hackers won't be able to hack those too.
This is where the concept of defense in depth comes in. You need multiple layers of security. If you just assume "once they're in, they're in", you are actually making it easier on hackers: they can exploit any small opening and get complete control of your system. This becomes even more important when you factor in the weakest link in your security -- people. You, your developers, your operators, and everyone in your company will make mistakes eventually. If every little mistake gave complete access to the system, you'd be in pretty bad shape.
Adding to the above answer:
With KMS, secrets are stored in AWS-managed HSM devices, which is surely more secure than storing them in plain text on regular servers we manage.
Storing them as SSM parameters with KMS encryption gives us fine-grained access control along with CloudTrail audits.
I have a WordPress site that is going to be hosted using ECS on AWS.
To make management even more flexible, I plan not to store service configurations (i.e. php.ini, nginx.conf) inside the Docker image itself. I found that Docker Swarm offers "docker configs" for this. Are there any equivalent tools that do the same thing? (I know AWS Secrets Manager can handle Docker secrets.)
Any advice or alternative approaches? thank you all.
The most similar thing you could use is probably AWS SSM Parameter Store.
You will need some logic to retrieve the values when you run the image.
If you don't want the files inside the running containers either, you pull from Parameter Store and add the values to the environment. You will then probably need some work in the application to read from the environment (the application stays decoupled from the actual source of the config), or you can read directly from Parameter Store in the application (easier, but your image is coupled to Parameter Store).
If your concern is only about not having the values in the image, but it is fine if they are inside the running container, then you can read from Parameter Store and inject the values into the container at the usual file locations, so it is transparent to the application.
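A rough Python sketch of that injection step, with the SSM client passed in and a hypothetical `/myapp/...` parameter naming convention:

```python
import os

def inject_parameters(ssm_client, names, env=None):
    """Fetch SSM parameters and copy them into the process environment."""
    env = os.environ if env is None else env
    response = ssm_client.get_parameters(Names=names, WithDecryption=True)
    for param in response["Parameters"]:
        # e.g. the parameter "/myapp/DB_PASSWORD" becomes env var DB_PASSWORD
        env[param["Name"].rsplit("/", 1)[-1]] = param["Value"]
```

An entrypoint script would call this before starting the application process, so the app only ever reads plain environment variables.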
As additional approaches:
Especially for php.ini and nginx.conf I like a simple approach: a separate Git repo with different config files per environment.
You have a common Docker image regardless of the environment.
At build time, you pull the proper file for the environment and either save the values as environment variables or inject them into the container.
And lastly, I need to mention classic tools like Chef, Puppet, and Ansible. They are more complex and maybe overkill here.
The two ways I store configs and secrets for most services are:
Credstash, which is a combination of KMS and DynamoDB, and
Parameter Store, which has already been mentioned.
The aws command line tool can be used to fetch from Parameter Store and from S3 (for configs), while credstash is its own utility (quite useful and easy to use) and needs to be installed separately.
I'm trying to develop a Django app on GAE, and using CloudBuild for CI/CD. I'm wondering what's the best way to pass secrets to my app (DB credentials, etc).
I was able to follow the instructions at https://cloud.google.com/cloud-build/docs/securing-builds/use-encrypted-secrets-credentials to read the secret in my build step and pass it to my app as an environment variable. It's a bit hacky, but it works:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      TEST_PW=$(gcloud secrets versions access latest --secret=test-key)
      echo "TEST_PASSWORD=$${TEST_PW}" >> env_vars
      unset TEST_PW
However, I'm not sure if this practice is safe. I dumped the environment variables running in my app (using print(dict(os.environ))), and the only sensitive values there are the secrets I passed in (all other GAE-related values are non-sensitive data).
So questions:
1) Is storing secrets in env variables safe in an App Engine app, i.e. can they be stolen by "somehow" dumping them through print(dict(os.environ))?
2) Or is the better option to fetch them from Secret Manager in Django (for e.g. in settings.py)? (I'm worried about restarts or version switches here, and if they'll affect this option)
3) Or is there an even better option?
Thanks.
The security issue with what you are doing is not the environment variable itself, but the fact that you are storing the secret's decrypted value in it in plain text, making it accessible through os.environ while your instance is running.
A simpler solution would be to dump that sensitive information to a file and store it in a Cloud Storage bucket that only your App Engine service account has access to, like this:
TEST_PW=$(gcloud secrets versions access latest --secret=test-key)
echo "TEST_PASSWORD=$${TEST_PW}" >> [YOUR_FILE_URL]
unset TEST_PW
If you want to keep using environment variables, you can keep the data encrypted with Cloud KMS; you can find a how-to here, in a different section of the same documentation you shared in your question.
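For completeness, option 2 from the question (fetching the secret in settings.py) can be sketched roughly like this; the client is injected and the project/secret names are placeholders:

```python
def access_secret(client, project_id, secret_id, version="latest"):
    """Read one secret version via a Secret Manager client."""
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version}"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("UTF-8")

# In settings.py (assumes google-cloud-secret-manager is installed):
#   from google.cloud import secretmanager
#   client = secretmanager.SecretManagerServiceClient()
#   TEST_PASSWORD = access_secret(client, "my-project", "test-key")
```

This avoids the plain-text value ever sitting in the environment; the trade-off is a Secret Manager call at instance startup (so restarts and version switches just re-fetch the value).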
Imagine you create a docker-compose.yml with Django and a bunch of code and use an environment variable to configure the SECRET_KEY in settings.py.
If you distribute that Docker image you won't share the SECRET_KEY.
What SECRET_KEY should someone use when they deploy your Docker image?
They can't make up their own SECRET_KEY right?
According to the documentation, changing the secret key will invalidate:
All sessions if you are using any other session backend than django.contrib.sessions.backends.cache, or are using the default get_session_auth_hash().
All messages if you are using CookieStorage or FallbackStorage.
All PasswordResetView tokens.
Any usage of cryptographic signing, unless a different key is provided.
What's the best way to regenerate a secret key after deploying a Docker container with a bunch of code that you created?
(I have searched and searched but feel like I'm searching for the wrong things or completely missed something :-)
Everybody who deploys the service independently should have their own SECRET_KEY. (You actively do not want the things you describe to be shared across installations: if I’ve logged into my copy of your service, I shouldn’t be able to reuse my session cookie on your copy.) A command I typically use for this is
dd if=/dev/urandom bs=60 count=1 | base64
which will generate an 80-character high-quality random key.
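If you'd rather generate the key from Python than from dd, a rough equivalent of that one-liner is below (60 random bytes base64-encode to exactly 80 characters, with no padding):

```python
import base64
import os

def make_secret_key(nbytes=60):
    """Mimic `dd if=/dev/urandom bs=60 count=1 | base64`: an 80-char key."""
    return base64.b64encode(os.urandom(nbytes)).decode("ascii")
```

Either way, the point is the same: each installation generates its own key at deploy time, outside the image.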
The corollary to this is that you can't distribute encrypted data with your image. That's usually not a problem (it is difficult to distribute a prepopulated relational database in Docker anyway), and if you run database migration and seed jobs at first startup, they should be able to use whatever key you set when you do the installation.
This solution is platform agnostic because it uses the original Django key generator.
from django.core.management.utils import get_random_secret_key

print(get_random_secret_key())
It can be used standalone without initializing a Django project.
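If you don't want to install Django just for the generator, here is a standard-library sketch that produces the same shape of key (50 characters drawn from the alphabet get_random_secret_key uses; the helper name is my own):

```python
import secrets

# The character set Django's get_random_secret_key() draws from.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)"

def random_secret_key(length=50):
    """Generate a Django-style SECRET_KEY using only the standard library."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

The secrets module uses a cryptographically secure random source, which is the important property here.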