Docker+Django+SECRET_KEY: regenerate?

Imagine you create a docker-compose.yml with Django and a bunch of code and use an environment variable to configure the SECRET_KEY in settings.py.
If you distribute that Docker image you won't share the SECRET_KEY.
What SECRET_KEY should someone use when they deploy your Docker image?
They can't make up their own SECRET_KEY, right?
According to the documentation, changing the secret key will invalidate:
All sessions if you are using any other session backend than django.contrib.sessions.backends.cache, or are using the default get_session_auth_hash().
All messages if you are using CookieStorage or FallbackStorage.
All PasswordResetView tokens.
Any usage of cryptographic signing, unless a different key is provided.
What's the best way to 'regenerate' a secret key after deploying a Docker container with a bunch of code that you created?
(I have searched and searched but feel like I'm searching for the wrong things or completely missed something :-)

Everybody who deploys the service independently should have their own SECRET_KEY. (You actively do not want the things you describe to be shared across installations: if I’ve logged into my copy of your service, I shouldn’t be able to reuse my session cookie on your copy.) A command I typically use for this is
dd if=/dev/urandom bs=60 count=1 | base64
which will generate an 80-character high-quality random key.
The corollary to this is that you can't distribute encrypted data with your image. That's usually not a problem (it is difficult to distribute a prepopulated relational database in Docker anyway), and if you instead populate data by running database migration and seed jobs at first startup, those jobs can use whatever key you set when you do the installation.
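In settings.py this typically comes down to a line or two. Here is a minimal sketch, assuming the key arrives in an environment variable named DJANGO_SECRET_KEY (the name is a choice, not something Django mandates), set per deployment, e.g. under the environment: key of your docker-compose.yml:

# settings.py -- minimal sketch; DJANGO_SECRET_KEY is an assumed name,
# set per deployment (e.g. in docker-compose.yml's environment: section)
import os

SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]  # fail loudly if it is unset

Using os.environ[...] rather than os.getenv(...) means a missing key stops the app at startup instead of silently running with an empty secret.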

This solution is platform-agnostic because it uses Django's own key generator.
from django.core.management.utils import get_random_secret_key

print(get_random_secret_key())
It can be used standalone without initializing a Django project.
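For example (assuming Django is installed in the environment), it can be run as a one-liner:

python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"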

Related

Why do people use a .env file on a server?

Why do people put all their secrets in a .env file on a server? If someone hacks it, isn't the .env just as accessible as all the other files? Thanks!
You are correct that storing secrets in a .env file poses the risk of plain-text secrets being exposed to a third party who gains access to the raw code.
As with other areas involving sensitive material, there are ways around this. A common approach is to use a secrets management system, which replaces the plain values in a .env file with secrets retrieved via an authenticated request.
AWS supports a couple of official services that can do this:
Secrets Manager - This service is built specifically for this purpose: you define a secret and give it either a string or JSON value, which is then retrieved via a call using the SDK. All values are encrypted using a KMS key.
Systems Manager Parameter Store - Similar to Secrets Manager: you provide a key name and give it a value. It supports both unencrypted and encrypted values (use the SecureString type).
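For instance, a minimal sketch of fetching a secret at startup with boto3 (the secret and parameter names here are hypothetical, and AWS credentials are assumed to be available through the usual mechanisms):

import boto3

# Secrets Manager: fetch a secret stored under an assumed name
secrets = boto3.client("secretsmanager")
secret_key = secrets.get_secret_value(SecretId="myapp/django-secret-key")["SecretString"]

# Parameter Store: fetch a SecureString parameter, decrypted via KMS
ssm = boto3.client("ssm")
secret_key = ssm.get_parameter(
    Name="/myapp/django-secret-key", WithDecryption=True
)["Parameter"]["Value"]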
In addition there are other services such as Hashicorp Vault that provide similar functionality.
For environment configuration a .env file can still be appropriate (e.g. "enable this feature flag"), but if you want to reduce the blast radius of your application, storing secrets outside a plain-text file will help reduce this risk.
That is not the main reason for using environment variables. However, they are secure enough for storing secret values too, especially when combined with hashing methods.
Environment variables are most useful at the production level of programming. Your application must have different environments to run in.
Development: your host is local, and as a developer you need to test your code with the debug variable set to true to get detailed errors, which is not something you want in the production environment.
Production: your host is your domain or server IP, and you need different middleware than in the development stage.
There are also staging and test environments for bigger projects. A lot of things should be handled differently in different environments; the database is a great example. Besides, environment variables are useful when more than one person is working with the code base, since people can configure the project for their machine/OS using environment variables.
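As an illustration, a settings.py might branch on a single variable; this is only a sketch, and the DJANGO_ENV name and database settings are assumptions:

import os

ENV = os.getenv("DJANGO_ENV", "development")  # assumed variable name

DEBUG = ENV == "development"  # detailed error pages only while developing

if ENV == "production":
    DATABASES = {"default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ["DB_NAME"],
        "USER": os.environ["DB_USER"],
        "PASSWORD": os.environ["DB_PASSWORD"],
        "HOST": os.environ["DB_HOST"],
        "PORT": os.getenv("DB_PORT", "5432"),
    }}
else:
    DATABASES = {"default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "db.sqlite3",
    }}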

How to handle private configuration file when deploying?

I am deploying a Django application using the following steps:
Push updates to git
Log into AWS
Pull updates from git
The issue I am having is with my production.py settings file. I have it in my .gitignore so it does not get uploaded to GitHub, for security. This, of course, means it is not available when I pull updates onto my server.
What is a good approach for making this file available to my app on the server without uploading it to GitHub, where it would be exposed?
It is definitely a good idea not to check secrets into your repository. However, there's nothing wrong with checking in configuration that is not secret if it's an intrinsic part of your application.
In large scale deployments, typically one sets configuration using a tool for that purpose like Puppet, so that all the pieces that need to be aware of a particular application's configuration can be generated from one source. Similarly, secrets are usually handled using a secret store like Vault and injected into the environment when the process starts.
If you're just running a single server, it's probably fine to adjust your configuration or application to read secrets from the environment (or possibly a separate file) and set those values on the server. You can then include the other configuration settings (secrets excluded) as a file in the repository. If you need more flexibility later, you can pick up other tools then.
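As a sketch of that layout (the file and variable names here are assumptions, not a fixed convention):

# settings/production.py -- committed to the repository, secrets excluded
import os

from .base import *  # shared, non-secret settings, also in the repository

ALLOWED_HOSTS = ["example.com"]  # not secret, fine to commit

# Secrets come only from the environment, set on the server itself
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]
EMAIL_HOST_PASSWORD = os.environ["EMAIL_HOST_PASSWORD"]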

Manage sqlite database with git

I have this small project that specifies sqlite as the database choice.
For this particular project, the framework is Django, and the server is hosted by Heroku. In order for the database to work, it must be set up with migration commands and credentials whenever the project is deployed to a continuous-integration tool or a development site.
The question is that many of these environments do not actually use the my_project.sqlite3 file that comes with the source repository, which we version control with git. How do I incorporate changes into the deployed database? Is a script that sets up the database suitable for this scenario? Meanwhile, it is worth noting that there are security credentials that should not appear in a script unencrypted, which makes the situation tricky.
that many of these environments do not actually use the my_project.sqlite3 file that comes with the source repository
If your deployment platform does not support your chosen database, then your development environment should probably be moved to one of the databases they do support. It is possible to run different databases in development and production, but that just seems like a source of headaches.
I have found a number of articles that state that Heroku just doesn't support SQLite in production and instead recommends Postgres.
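On Heroku the Postgres connection details arrive through the DATABASE_URL environment variable; a common way to consume it (assuming the third-party dj-database-url package) looks roughly like:

# settings.py -- assumes the third-party dj-database-url package is installed
import dj_database_url

# Uses DATABASE_URL if set (as on Heroku), falls back to SQLite locally
DATABASES = {"default": dj_database_url.config(default="sqlite:///db.sqlite3")}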
How do I incorporate changes into the deployed database? Is a script that sets up the database suitable for this scenario?
I assume that you are just extracting data from one database to give to another, so yes, as long as that script is a one-time batch operation each time the code is updated, it should be fine. You will want something else if you are adding/manipulating data in production and then exporting it back to your git repository.
Meanwhile, it is worth noting that there are security credentials that should not appear in a script unencrypted
An environment variable should solve that. You set environment variables with your credentials on the host machine and then just retrieve them within the script. You are looking for something like this:
import os

# Setting environment variables from Python (in practice you would set these
# on the host, e.g. in the shell profile, rather than hard-coding them here):
os.environ['USER'] = 'username'
os.environ['PASSWORD'] = 'password'

# Reading environment variables back within the script:
USER = os.getenv('USER')                # returns None if the variable is unset
PASSWORD = os.environ.get('PASSWORD')   # equivalent to os.getenv

How to create new user when configuration is locked using "Configuration Read-only"

We have a Drupal 8 site hosted at Pantheon and the site configuration is locked via the "Configuration Read-only" module.
I created a local clone of the site using git and added a new user, but when I do a git status it shows my branch as being in sync with master. That said, it doesn't look like the newly added user was written to any of the config YAML files.
So, I suspect that I will need to export the database from my local and import it to Pantheon - but this doesn't seem like the correct process or safest method. Can someone please confirm as I haven't found any resources applicable to this scenario and want to ensure that I'm following best practice?
Users are Entities and as such are stored in the database, not in configuration.
If you want to synchronize your users across different environments, then you'll have to look into a way to retrieve db backups from Pantheon and import them into a different environment, or look into a module to sync the User entities. I found the content_sync module from a quick Google search, but I have not used it and cannot guarantee that it will work or fulfill your requirements.

Persist ASP.NET Core auth cookies between docker image launches

Each time a Docker image containing a .NET Core MVC web application starts up, all authentication cookies are invalidated, presumably due to a fresh machine key (which is used when signing the cookies) being generated.
This could traditionally be set via the <machineKey/> element in the web.config of a .NET app.
This link suggests that the DataProtection package would fit the bill, but the package seems to require the full .NET Framework.
What would be the correct way to ensure that every time a Docker image restarts it doesn't invalidate existing auth cookies?
You want to put the keys for data protection into a persistent and shareable location.
If you're on AWS, AspNetCore.DataProtection.Aws lets you put the keyring on S3 with just a few lines of configuration code. Additionally, you can leverage AWS KMS to encrypt the keys, which is especially useful for achieving consistent encryption algorithms, allowing you to reuse the same key across different operating systems, which have different default encryption algorithms. The KMS option is also part of the same library.
If you're on a platform other than AWS, you'll need another library, or you can mount a shared drive. But the concept of sharing the same location for the keys remains the same.