My Django deployment runs a variable number of pods (currently 3) serving a Django backend REST API server. We're still in the development/staging phase. I wanted to ask for advice regarding DB migrations. Right now the pods simply start by launching the web server, assuming the database is already migrated and ready. That assumption can of course be wrong.
Can I simply run python manage.py migrate before starting the server? What happens if 2 or 3 pods start at the same time and all run migrations simultaneously? Could that cause any damage or problems? Is there a best-practice pattern to follow here that ensures every pod starts the server against a healthy, fully migrated database?
I was thinking about this:
During the initial deployment, define a Kubernetes Job object that runs once, after the database pod is ready. It uses the same Django container I already have and simply runs python manage.py migrate. The deployment script will kubectl wait for that Job's pod to finish and then apply the YAML that creates the full Django deployment. This ensures all Django pods "wake up" with a fully migrated database.
For subsequent updates, I would run the same Job again before re-applying the upgraded Django deployment.
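Roughly, the deploy script would do something like this (the Job/Deployment names and manifest files below are placeholders):

# run the migration Job, wait for it to complete, then roll out the Django pods
kubectl apply -f migrate-job.yaml
kubectl wait --for=condition=complete job/django-migrate --timeout=300s
kubectl apply -f django-deployment.yaml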
Now there is a chicken-and-egg question about maintaining 100% uptime during migrations, but that's a question for another post: how do you apply data migrations that BREAK the existing container version X when the code that works with the new schema only ships in container version X+1? Do you take the entire service offline for the duration of the update, or is there a pattern that keeps the service up and running?
Well, you are right that multiple migrate commands will run against your database when multiple pods start at once.
But this will not cause any problems. Django records which migrations have been applied, so changes that are already applied are skipped. Say 3 pods start at the same time and all run the migrate command: only one of them will actually apply changes to the database. Migrations normally need to take locks on the database for certain operations (this depends heavily on your DBMS). One of the migrate commands (one of the pods) acquires the lock, and the other commands wait until the first one is done. Once the first one finishes, the others see the migrations already recorded and skip them automatically, so each migration is applied exactly once.
You can, however, change your deployment strategy and have Kubernetes first spin up only one pod; once that pod's health check succeeds, the others spin up too. In that case you can be sure the migration lock is taken only once, and the remaining pods simply check that the migrations are already applied and skip them automatically.
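One way to approximate this by hand during a rollout (the deployment name below is illustrative) is to scale down to a single replica, wait until it is Ready, and only then scale back out:

# one pod runs migrate and becomes Ready before the rest start
kubectl scale deployment/django --replicas=1
kubectl rollout status deployment/django
kubectl scale deployment/django --replicas=3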
You can use Kubernetes init containers, which are specialized containers that run before the app containers in a Pod. An init container exits after its command completes successfully, so it won't keep occupying resources.
Here is the official link:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
The best thing you can do is to use kind: Job and run it before the rollout.
Please check: Django migrations by Kubernetes Job and persistent Volume Claim
As I understand it, in K8s you can't control the start-up order of pods.
Say your DB host is a Docker container (dev environment) rather than an RDS/static host: the migrate command cannot be executed before the DB is up and ready.
I tried using the ready() hook in apps.py to execute run_migrations.py, which runs a shell script that applies the migrations, but running the migrations triggers apps.py again and you end up in an infinite loop.
In my organization we use CircleCI, which builds the image first and then pushes it to AWS ECR with tags before deploying it to EKS. At build time I can't assume the DB pod is ready, so running migrations there would cause an error.
When the DB for Django is also a K8s pod, the best way to run migrations is a Job.
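If you do run migrate from such a Job, it helps to wait for the database to accept connections first; a minimal sketch (the host/port variables are placeholders):

#!/bin/sh
# block until Postgres answers, then apply migrations
until pg_isready -h "$DB_HOST" -p "${DB_PORT:-5432}"; do
  echo "waiting for database..."
  sleep 2
done
python manage.py migrate --noinput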
I am developing a Django Wagtail application on my local machine, connected to a local Postgres server.
I have a test server and a production server.
However, when I develop locally and then try to deploy, there is always some issue with makemigrations and migrate, e.g. KeyError, etc.
What are the best practices for avoiding these issues? Which files do I need to port across?
So I'll tell you what I do and what most of the companies I've worked at as a Django developer did, and I can tell you from experience that it works pretty well.
First, containerize your application. This will make your life much easier, remove external influences on your code, and give you an easy way to reproduce your environment.
Your Dockerfile should start from a Python base image and do three basic things (see the sketch after this list):
Install your requirements/dependencies
Run the python manage.py migrate --noinput command
Run an HTTP server such as gunicorn with gunicorn -c /gunicorn.py wsgi:application
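A minimal sketch of what the image build and container start-up end up running (paths are illustrative):

# at image build time
pip install -r requirements.txt
# at container start-up
python manage.py migrate --noinput
gunicorn -c /gunicorn.py wsgi:application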
You will run makemigrations on your local machine and make sure everything works before committing the migration files to the repo.
In your gunicorn.py you put the settings needed to run the app, such as the number of workers (based on CPU count), the binding port, and the folder your app is in, something like:
import os
import multiprocessing

# Chdir to the specified directory before apps loading.
# https://docs.gunicorn.org/en/stable/settings.html#chdir
chdir = '/app/'

# Bind the application on port 8000 on all IPv4 interfaces.
# https://docs.gunicorn.org/en/stable/settings.html#bind
bind = '0.0.0.0:8000'

# Number of worker processes, derived from the CPU count.
# https://docs.gunicorn.org/en/stable/settings.html#workers
workers = multiprocessing.cpu_count() * 2 + 1
Second, containerize your other services as well, for example the Postgres database, Redis (for caching), and a connection pooler for the database, depending on the size of your application.
It's highly recommended that you have a step in the pipeline to run tests; they need to run before everything else, maybe just after lint.
OK, what now? Now you need a way to deploy that stuff. The best option for this scenario is to push your image to the GitHub Container Registry, adding a tag to it, for example:
IMAGE_ID=ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
# Change all uppercase to lowercase
IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
docker tag $IMAGE_NAME $IMAGE_ID:staging
docker push $IMAGE_ID:staging
This can be added to a GitHub Action, in the build step for example.
After your new code is available as a new image on GitHub, you just need to update the running container. This can be done by creating a script on the server and running it from a GitHub Action; it's something like:
docker pull ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
echo 'Restarting Application...'
docker stop {YOUR_CONTAINER} && docker rm {YOUR_CONTAINER}
docker run -d --name {YOUR_CONTAINER} ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
sudo systemctl restart nginx
echo 'Cleaning old images'
sudo docker system prune -af
You can see that I tag the image with a staging tag. You can create a rule in GitHub Actions to trigger that workflow when you create a new release, for example, and create another workflow triggered on every new commit that builds/deploys a dev tag.
For the migration problem, the first thing is: when your application goes live, squash every migration into the first one (you can drop the database and all the migration files, then recreate the database and run the makemigrations command again to get there), so you start with clean migrations on the server. Never create unnecessary relations between tables, prefer cached properties over adding new columns, use UUIDs for unique ids, and try not to make breaking changes to the database. It's hard, but if you plan the database ahead of time it's not so difficult.
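A rough sketch of that "reset the migrations" step (only do this before the first production release, and back up the database first):

# remove every migration module except each app's __init__.py, then regenerate them
find . -path "*/migrations/*.py" -not -name "__init__.py" -delete
find . -path "*/migrations/*.pyc" -delete
python manage.py makemigrations
python manage.py migrate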
Hit me up if you have any questions. A lot of what I said can be done on other platforms such as GitLab, Travis, or CircleCI, but I used GitHub Actions in the examples because I think it is simpler to picture.
EDIT:
I forgot to tell you to have a cron job on your server backing up your databases. The migrate command only applies changes after verification, but if something else breaks the database, a backup can save your life.
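For example, a daily crontab entry along these lines (the connection string and backup path are placeholders):

# 03:00 daily compressed dump; % must be escaped inside crontab
0 3 * * * pg_dump "$DATABASE_URL" | gzip > /backups/db-$(date +\%F).sql.gz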
I need your help!
I started using Docker this week and launched all the containers for a new Django project. The project has several databases, the Python/Django web server, Redis, Celery, etc. Each of these is served by a separate Docker container, and they are all launched with the docker-compose up command.
This is my problem: when I type docker-compose up in the console, it starts all the services. Then I need to restore my database dumps for each database (which takes about an hour). But when I use PyCharm's docker-compose tooling, it recreates some containers, and it also recreates all my Postgres databases, wiping ALL MY DATA!
Sometimes it doesn't recreate the containers and I can do my job, but if I make one wrong move, docker-compose erases my databases! I am tired of restoring them!
Is there a way to protect the containers from being erased, or to forbid recreating my Postgres containers?
PS: I've also tried exporting the Postgres containers to a .tar file. When I import it back, the database inside the container is fine and importing the container is faster than restoring the data from SQL, but the metadata of the Docker image is different, so I can't use it.
Please, give me any ideas)
Try using volumes to store your data. Volumes keep data after containers are recreated.
https://docs.docker.com/storage/volumes/
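For example, mounting a named volume at Postgres's data directory keeps the data even if the container is recreated (the names and version below are illustrative; in docker-compose you would declare the same mount under the service's volumes: key):

# the named volume "pgdata" survives container removal and recreation
docker volume create pgdata
docker run -d --name db -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data postgres:13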
I have a rather complex K8s environment in which one of the Deployments is a Django application. Currently we are having a very hard time whenever I need to update a model that has already been migrated to the PostgreSQL database.
Let's say, for instance, that I create an application named Sample that has a simple table in models.py. My development process (skaffold) builds the Docker image and applies it locally on minikube. After this is done, I connect to the pod via kubectl exec and execute python manage.py makemigrations and python manage.py migrate; so far so good.
After some time, let's say I need to add a new table to the models.py file of the Sample application. Skaffold builds the image, kills the old pod, and creates a new one. So I connect as usual via kubectl exec and try to execute the makemigrations and migrate commands, and lo and behold, there are no migrations to apply. And of course no change is made in PostgreSQL.
Upon further searching, I believe the reason for this is that the image is built without the Sample/migrations folder, and the original table already exists in PostgreSQL. When I run makemigrations, it creates only the 0001_initial.py file, which contains all the tables; and since the first table already exists, when I execute migrate, Django believes the migration has already been applied and therefore won't apply it.
If what I found out is true, how can I keep these files on a PVC so that they are preserved across pod recreations?
Thank you.
We are running our stack on Heroku, using Django 2.2 with a Postgres 11 DB. Our build pipeline (GitHub Actions) pushes to Heroku (git push https://git.heroku.com...) and immediately afterwards runs the migrations (heroku run python manage.py migrate --app heroku-app-name). All of that was working with a Postgres 9.6 database and is still working in our staging environment (Postgres 11). Now, with production on Postgres 11, the Django migrate command just hangs and doesn't produce any output, even though there are no actual migrations to apply.
The only differences between our production and staging setups are a follower/replica attached to the master DB in production, and the production workload.
In order to fix the deployment I have to run:
heroku pg:killall -a heroku-app-name
heroku restart -a heroku-app-name
At that point the migration task in the build pipeline fails, but afterwards migrations can be applied manually without problems:
heroku run python manage.py migrate --app heroku-app-name
So for some reason the migrate command is "waiting" for something, some database lock or whatever, yet I cannot put my finger on it. What is especially odd is that it also hangs when there are no migrations to apply. Why would it be stuck then?
We found the solution. There were actually three things coming together:
We trigger a DB backup before running any migrations. We only do so in production and not on staging, which is why our staging environment had no issues while production did.
A DB migration run (even when it looks like there is nothing to apply) actually executes some commands besides the regular SELECTs, UPDATEs, and INSERTs. E.g. in our case a CREATE EXTENSION ... IF NOT EXISTS was always executed at the beginning.
While Postgres 9.6 allowed a backup job to run in parallel (I don't know what Heroku uses under the hood, but I assume a normal pg_dump), on Postgres 11 (and possibly others) the backup job now takes a more exclusive lock for some operations. I assume that CREATE EXTENSION ... IF NOT EXISTS (even when the extension already exists) cannot be executed while a backup job is running in parallel.
(I am sure some Postgres internals are missing to explain this more precisely.)
As a result of these three things, the DB blocks the migrate operation while waiting for the backup job to finish. I have moved the daily backup job to a different time and reconfigured our pipeline to wait for the pre-deploy backup to finish first.
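If you need to confirm this kind of blocking yourself, inspecting the live sessions is usually enough; a query along these lines via heroku pg:psql shows what each backend is waiting on (the app name is a placeholder):

heroku pg:psql -a heroku-app-name -c "SELECT pid, state, wait_event_type, wait_event, query FROM pg_stat_activity WHERE state <> 'idle';"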
I have an app deployed to Heroku, and I'd like to be able to run the test suite post-deployment in the target environment. I am using the Heroku Postgres add-on, which means I have access to a single database only. I have no rights to create new databases, which in turn means the standard Django test command fails, as it can't create the test_* database.
$ heroku run python manage.py test
Running `python manage.py test` attached to terminal... up, run.9362
Creating test database for alias 'default'...
Got an error creating the test database: permission denied to create database
Is there any way around this?
Turns out I was wrong. I was not testing what I thought I was testing... Since Heroku's routing mesh was sending requests to different servers, LiveServerTestCase was starting a web server on one machine while Selenium was connecting to other machines altogether.
By updating the Heroku Procfile to:
web: python src/manage.py test --liveserver=0.0.0.0:$PORT
overriding the DATABASES setting to point to the test database, and customizing the test suite runner linked to below (the same idea still holds: override setup_databases so that it only drops/re-creates tables, not the entire database), I was able to run remote tests. But this is even more hacky/painful/inelegant. Still looking for something better! Sorry about the confusion.
(updated answer below)
Here are the steps that worked for me:
Create an additional, free Postgres database using the Heroku toolbelt
heroku addons:add heroku-postgresql:dev
Use the HerokuTestSuiteRunner class which you'll find here.
This custom test runner requires that you define a TEST_DATABASES setting which follows the typical DATABASES format. For instance:
import dj_database_url

TEST_DATABASES = {
    'default': dj_database_url.config(env='TEST_DATABASE_URL')
}
Then, have the TEST_RUNNER setting be a Python path to wherever HerokuTestSuiteRunner can be found.
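For instance, assuming you saved the runner in a module of your own (the module path below is purely illustrative):

# point Django's test command at the custom runner; adjust the path to wherever you put it
TEST_RUNNER = 'myproject.test_runners.HerokuTestSuiteRunner'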
You should now be able to run Django tests on Heroku using the given database. This is very much a quick hack... Let me know how it could be improved / made less hackish. Enjoy!
(original answer below)
A few relevant solutions have been discussed here. As you can read in the Django docs, "[w]hen using the SQLite database engine, the tests will by default use an in-memory database".
Although this doesn't thoroughly test the database engine you're using on Heroku (I'm still on the lookout for a solution that does that), setting the database engine to SQLite will at least allow you to run your tests.
See the above-linked Stack Overflow question for some pointers. There are at least two ways out: testing whether 'test' is in sys.argv before forcing SQLite as the database engine, or having a dedicated settings file used for testing, which you can then pass to manage.py test using the --settings option.
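A minimal sketch of the first approach, placed in settings.py (assuming the regular DATABASES setting is defined above it):

import sys

# when running `manage.py test`, swap in an in-memory SQLite database
if 'test' in sys.argv:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': ':memory:',
        }
    }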
Starting with version 1.8, Django has an option called --keepdb, which allows the same database to be reused across test runs.
The --keepdb option can be used to preserve the test database between test runs.
This has the advantage of skipping both the create and destroy actions which can greatly decrease the time to run tests, especially those in a large test suite.
If the test database does not exist, it will be created on the first run and then preserved for each subsequent run.
Any unapplied migrations will also be applied to the test database before running the test suite.
Since it also allows the test database to exist prior to running the tests, you can simply attach an additional Heroku Postgres database to your app and configure the tests to use that particular database.
Bonus: you can also use the --failfast option, which exits as soon as your first test fails, so that you don't have to wait for all tests to complete.
However, if you are deploying to Heroku and using Heroku Pipelines, an even better option is available: Heroku CI.
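Put together, the post-deployment test run could look something like this (the app name is a placeholder):

heroku run python manage.py test --keepdb --failfast --app heroku-app-name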