Deploying Django to production: what is the correct way to do it? - django

I am developing a Django Wagtail application on my local machine, connected to a local Postgres server.
I have a test server and a production server.
However, when I develop locally and then try to upload it, there is always some issue with makemigrations and migrate, e.g. a KeyError.
What are the best practices for ensuring I do not run into these issues? What files do I need to port across?

I'll tell you what I do and what most of the companies I worked for as a Django developer did, and I can tell you from experience that it has worked pretty well.
First, containerize your application. This will make your life much easier, remove external influences on your code, and give you an easy way to reproduce your environment.
Your Dockerfile should be based on a Python image and should basically do 3 things (see the sketch after this list):
Install your requirements/dependencies
Run the python manage.py migrate --noinput command
Run an HTTP server such as gunicorn with gunicorn -c /gunicorn.py wsgi:application
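A minimal sketch of such a Dockerfile (the base image tag, file names, and paths are assumptions, not something prescribed here):
FROM python:3.11-slim
WORKDIR /app

# Install the requirements/dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and the gunicorn config
COPY . /app/
COPY gunicorn.py /gunicorn.py

# Apply migrations, then start the HTTP server
CMD python manage.py migrate --noinput && gunicorn -c /gunicorn.py wsgi:application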
You will run makemigrations on your local machine and make sure that everything is working before committing the migrations to the repo.
In your gunicorn.py you will put the settings to run the app, such as the number of workers (based on CPU count), the binding port, and the folder your app is in, something like:
import os
import multiprocessing

# Chdir to specified directory before apps loading.
# https://docs.gunicorn.org/en/stable/settings.html#chdir
chdir = '/app/'

# Bind the application to all IPv4 interfaces on port 8000.
# https://docs.gunicorn.org/en/stable/settings.html#bind
bind = '0.0.0.0:8000'

# Number of worker processes, derived from the CPU count.
# https://docs.gunicorn.org/en/stable/settings.html#workers
workers = multiprocessing.cpu_count() * 2 + 1
Second, containerize your other stuff: for example the Postgres database, Redis (for cache), and a connection pooler for the database, depending on the size of your application.
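As a rough illustration (the service names, image versions, and credentials below are placeholder assumptions, not something prescribed by this answer), a docker-compose.yml for those pieces could look like:
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
volumes:
  pgdata: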
It's highly recommended to have a step in the pipeline that runs your tests; they need to run before everything else, maybe just after lint.
OK, what now? Now you need a way to deploy that stuff. The best option for this scenario is to push your image to the GitHub registry, and you can add a tag to it, for example:
IMAGE_ID=ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
# Change all uppercase to lowercase
IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
docker tag $IMAGE_NAME $IMAGE_ID:staging
docker push $IMAGE_ID:staging
This can be added to a GitHub Action in the build step, for example; a sketch of such a workflow follows.
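As a rough sketch of what that workflow could look like (the workflow name, branch, test step, and IMAGE_NAME value are assumptions, not part of the original setup):
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    env:
      IMAGE_NAME: my-django-app
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: |
          pip install -r requirements.txt
          python manage.py test
      - name: Log in to the GitHub container registry
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - name: Build, tag and push the image
        run: |
          docker build -t $IMAGE_NAME .
          IMAGE_ID=ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
          IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
          docker tag $IMAGE_NAME $IMAGE_ID:staging
          docker push $IMAGE_ID:staging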
After having your new code in a new image on GitHub, you just need to update the current one. This can be done by creating a script on the server and running that script from a GitHub Action; it is something like:
docker pull ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
echo 'Restarting Application...'
docker stop {YOUR_CONTAINER} && docker rm {YOUR_CONTAINER}
# recreate the container from the freshly pulled image (adjust ports/volumes to your setup)
docker run -d --name {YOUR_CONTAINER} -p 8000:8000 ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
sudo systemctl restart nginx
echo 'Cleaning old images'
sudo docker system prune -af
You can see that I create the image with a staging tag. You can create a rule in GitHub Actions to trigger that action when you create a new release, for example, and create another action that is triggered on every new commit and builds/deploys a dev tag.
For the migration problem, the first thing is: when your application goes live, squash every migration into the first one (you can drop the database and all the migrations, then recreate the database and run the makemigrations command again to reach this), so you have a clean migration history on the server. Never create unnecessary relations between tables, prefer cached properties over adding new columns, use UUIDs for unique ids, and try not to make breaking changes to the database. It's hard, but if you plan the database beforehand it is not so difficult to do.
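To illustrate those last points, here is a small sketch (the model and field names are made up for the example, not taken from the question):
import uuid

from django.db import models
from django.utils.functional import cached_property


class Order(models.Model):
    # UUID primary key instead of a sequential integer id
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    created_at = models.DateTimeField(auto_now_add=True)

    @cached_property
    def total(self):
        # Derived value computed from related rows instead of adding a new column
        return sum(item.price for item in self.items.all())


class OrderItem(models.Model):
    order = models.ForeignKey(Order, related_name='items', on_delete=models.CASCADE)
    price = models.DecimalField(max_digits=10, decimal_places=2)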
Hit me up if you have any questions. A lot of the stuff I mentioned can be done on plenty of other platforms such as GitLab, Travis, or CircleCI, but I used GitHub Actions in the examples because I think it is simpler to picture.
EDIT:
I forgot to tell you to have a cron job on your server doing backups of your databases. The migrate command will apply the changes only after verification, but if something else breaks the database, this can save your life.
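As a minimal sketch (the paths, database name, and schedule are assumptions; credentials are expected to come from ~/.pgpass), a crontab entry using pg_dump could look like:
# Daily 03:00 backup; note that % must be escaped as \% inside a crontab
0 3 * * * pg_dump -U app -d app -F c -f /var/backups/app-$(date +\%F).dump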

Related

How to run Django migrations in Google App Engine Flexible deployment step?

I have a Django app up and running in Google App Engine flexible. I know how to run migrations using the cloud proxy or by setting the DATABASES value but I would like to automate running migrations by doing it in the deployment step. However, there does not seem to be a way to run a custom script before or after the deployment.
The only way I've come up with is by doing it in the entrypoint command which you can set in the app.yaml:
entrypoint: bash -c 'python3 manage.py migrate --noinput && gunicorn -b :$PORT app.wsgi'
This feels a lot like doing it wrong. A lot of Googling didn't provide a better answer.
Defining the python3 manage.py migrate command in your app.yaml file will make it run every time a new instance is spawned and set up to serve traffic. Although technically this may not be an issue (no migration will happen if the database schema hasn't changed), this isn't the right place to declare it.
You'd want this command to run once on every new version's code push. This fits perfectly into a CI/CD approach. There are several tutorials in the Google Cloud online documentation using Bitbucket Pipelines or Travis CI, for example, but you can use many other CI/CD solutions.
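For instance, with Cloud Build (a sketch only; the step images, dependency handling, and database connectivity are assumptions, not something from the original answer), a cloudbuild.yaml could run the migration once and then deploy:
steps:
  # Run migrations once per code push (database credentials would come from
  # substitutions or Secret Manager in a real setup)
  - name: 'python:3.11'
    entrypoint: bash
    args:
      - '-c'
      - 'pip install -r requirements.txt && python3 manage.py migrate --noinput'
  # Deploy the new version to App Engine flexible
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['app', 'deploy', 'app.yaml', '--quiet']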

Run django migrate in docker

I am building a Python+Django development environment using docker. I defined Dockerfile files and services in docker-compose.yml for the web server (nginx) and database (postgres) containers, and a container that will run our app using uwsgi. Since this is a dev environment, I am mounting the app code from the host system so I can easily edit it in my IDE.
The question I have is where/how to run migrate command.
In case you don't know Django, the migrate command creates the database structure and later changes it as needed by the project. I have seen people run migrate as part of the compose command directive (command: python manage.py migrate && uwsgi --ini app.ini), but I do not want migrations to run on every container restart. I only want them to run once when I create the containers and never again unless I rebuild.
Where/how would I do that?
Edit: there is now an open issue with the compose team. With any luck, one-time command containers will get supported by compose. https://github.com/docker/compose/issues/1896
You cannot use RUN because as you mentioned in the comments your source is mounted during running of the container.
You cannot use CMD either, since you don't want it to run every time you restart the container.
I recommend using docker exec manually after running the container. I do not think there is a way to automate this inside a dockerfile or docker-compose because of the two reasons I gave above.
It sounds like what you need is a tool for managing project tasks. dobi is a tool designed to handle these tasks (disclaimer: I am the author of this tool).
You can see an example of how to run a migration here: https://github.com/dnephin/dobi/tree/master/examples/init-db-with-rails. The example uses rails, but it's basically the same idea as django.
You could setup a task called migrate which would run the command in a container and write the data to a volume. Then when you start your docker-compose containers, use that volume as the source for your database service.
https://github.com/docker/compose/issues/1896 is finally resolved by the new service profiles introduced in docker-compose 1.28.0. With profiles you can mark services to be started only in specific profiles:
services:
  nginx:
    # ...
  postgres:
    # ...
  uwsgi:
    # ...
  migrations:
    profiles: ["cli-only"]  # profile name chosen freely
    # ...
docker-compose up # start only your app services, no migrations
docker-compose run migrations # run migrations on-demand
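A possible shape for that migrations service (the image, command, and dependency names are assumptions based on the services above):
services:
  migrations:
    profiles: ["cli-only"]
    build: .                                   # same image as the uwsgi app service
    command: python manage.py migrate --noinput
    depends_on:
      - postgres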
docker exec -it container-name bash
Then you will be inside the container and you can run any command you normally do when you develop without using docker.

Django Deployment Process to Webfaction.com

I'm trying to streamline a deployment process to webfaction.com for my Django application. I have a master (working copy) branch and a development branch.
currently I'm doing the following:
Make changes to my development branch in my local dev environment
When changes are working, test with run local server, then merge with my master branch
git push so the code is in my remote repo (this has other issues such as passwords, keys etc. which I've not quite solved yet) (also, I don't believe it's possible to scp code to WebFaction, and I'm not really a fan of any of the FTP services I've used so far)
SSH into my webfaction server and do a git pull and git merge
Test to see if everything is still working (it never is)
Make any changes required to get everything working again
commit any changes I've had to do to fix everything then push back to the remote repo
Go back to my development environment and sync the code up with the production code
Rinse Repeat for the next feature
Obviously I've missed the efficient development train; for the record, I've only been working with Django for a couple of months on a hobby project.
Can anyone suggest a django deployment process that would be more conducive to sane development?
I would strongly suggest Fabric to handle your deployments to WebFaction:
http://docs.fabfile.org/en/1.11/tutorial.html
By using Fabric you can deploy code and do other server side operations from your local terminal with no need to manually ssh to the server. First install Fabric:
pip install Fabric
Create fabfile.py in your project root folder. Here is an example fabfile that can get you started:
from fabric.api import task, env, run, cd
from fabric.context_managers import prefix

env.hosts = ('wf_username@wf_username.webfactional.com',)
env.forward_agent = True

MANAGEPY = '~/webapps/my_project/code/my_project/manage.py'
PY = '~/webapps/my_project/env/bin/python2.7'


@task
def deploy():
    with cd('~/webapps/my_project/code/'):
        with prefix('source production'):
            run('git pull --rebase origin master')
            run('pip install -r requirements.txt')
            run('{} {} migrate'.format(PY, MANAGEPY))
            run('{} {} collectstatic --noinput'.format(PY, MANAGEPY))
            run('touch my_project/my_project/wsgi.py')
You can run the fab task from your terminal with:
fab deploy
In my opinion, making code changes directly on the server is bad practice. Fabric can improve your development flow so that you make code edits only locally, then quickly deploy and test them.
The best and shortest way
In settings.py:
try:
    from production_settings import *
except ImportError as e:
    pass
You can override whatever is needed in production_settings.py; it should stay out of your version control, and you can use git resourcefully.
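For example, a production_settings.py (the values below are placeholders, not from the original answer) might override:
# production_settings.py -- kept out of version control
DEBUG = False
ALLOWED_HOSTS = ['example.com']
SECRET_KEY = 'replace-with-a-real-secret'
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'app',
        'USER': 'app',
        'PASSWORD': 'change-me',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}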

Multiple Django sites with shared codebase and DB

I have created a Django project with 20 sites (one different domain per site) for 20 different countries. The sites share everything: codebase, database, urls, templates, etc.
The only things they don't share are small customizations (logo, background color of the CSS theme, language code, etc.) that I set in each site's settings file (each site has one settings file, and all of these files import a global settings file with the common stuff). Right now, in order to run the sites in development mode I do:
django-admin.py runserver 8000 --settings=config.site_settings.site1
django-admin.py runserver 8001 --settings=config.site_settings.site2
...
django-admin.py runserver 8020 --settings=config.site_settings.site20
I have a couple of questions:
I've read that it is possible to create a virtual host for each site (domain) and pass it the site's settings.py file. However, I am afraid this would create one Django instance per site. Am I right?
Is there a more efficient way of doing the deployment? I've read about django-dynamicsites but I am not sure if it is the right way to go.
If I decide to deploy using Heroku, it seems that Heroku expects only one settings file per app, so I would need to have 20 apps. Is there a solution for that?
Thanks!
So, I recently did something similar, and found that the strategy below is the best option. I'm going to assume that you are familiar with git branching at this point, as well as Heroku remotes. If you aren't, you should read this first: https://devcenter.heroku.com/articles/git#multiple-remotes-and-environments
The main strategy I'm taking is to have a single codebase (a single Git repo) with:
A master branch that contains all your shared code: templates, views, URLs.
Many site branches, based on master, which contain all site-specific customizations: css, images, settings files (if they are vastly different).
The way this works is like so:
First, make sure you're on the master branch.
Second, create a new git branch for one of your domains, eg: git checkout -b somedomain.com.
Third, customize your somedomain.com branch so that it looks the way you want.
Next, deploy somedomain.com live to Heroku, by running heroku create somedomain.com --remote somedomain.com.
Now, push your somedomain.com branch code to your new Heroku application: git push somedomain.com somedomain.com:master. This will deploy your code on Heroku.
Now that you've got your somedomain.com branch deployed with its own Heroku application, you can do all normal Heroku stuff by adding --remote somedomain.com to your normal Heroku commands, eg:
heroku pg:info --remote somedomain.com
heroku addons:add memcache:5mb --remote somedomain.com
etc.
So, now you've basically got two branches: a master branch, and a somedomain.com branch.
Go back to your master branch, and make another new branch for your next domain: git checkout master; git checkout -b anotherdomain.com. Then customize it to your liking (css, site-specific stuff), and deploy the same way we did above.
Now I'm sure you can see where this is going. We've got one git branch for each of our custom domains, and each domain has its own Heroku app. The benefit (obviously) is that each of these project customizations is based off the master branch, which means you can easily make updates to all sites at once.
Let's say you update one of your views in your master branch--how can you deploy it to all your custom sites at once? Easily!
Just run:
git checkout somedomain.com
git merge master
git push somedomain.com somedomain.com:master # deploy the changes
And repeat for each of your domains. In my environment, I wrote a script that does this, but it's easy enough to do manually if you'd like.
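A rough sketch of such a script (the DOMAINS list is a placeholder for your actual branch/app names):
#!/usr/bin/env bash
# Merge master into every site branch and deploy each one to its Heroku app.
set -e
DOMAINS="somedomain.com anotherdomain.com"
for domain in $DOMAINS; do
    git checkout "$domain"
    git merge master
    git push "$domain" "$domain:master"   # deploys the merged changes to Heroku
done
git checkout master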
Anyhow, hopefully that helps.

Google App Engine Development and Production Environment Setup

Here is my current setup:
GitHub repository, a branch for dev.
myappdev.appspot.com (not real url)
myapp.appspot.com (not real url)
App written on GAE Python 2.7, using django-nonrel
Development is performed on a local dev server. When I'm ready to release to dev, I increment the version, commit, and run "manage.py upload" to the myappdev.appspot.com
Once testing is satisfactory, I merge the changes from dev to main repo. I then run "manage.py upload" to upload the main repo code to the myapp.appspot.com domain.
Is this setup good? Here are a few issues I've run into.
1) I'm new to git, so sometimes I forget to add files, and the commit doesn't notify me. So I deploy code to dev that works, but does not match what is in the dev branch. (This is bad practice).
2) The datastore file in the git repo causes issues. Merging binary files? Is it ok to migrate this file between local machines, or will it get messed up?
3) Should I be using "manage.py upload" for each release to the dev or prod environment, or is there a better way to do this? Heroku looks like it can pull right from GitHub. The way I'm doing it now seems like there is too much room for human error.
Any overall suggestions on how to improve my setup?
Thanks!
I'm on a pretty similar setup, though I'm still running on py2.5, django-nonrel.
1) I usually use 'git status' or 'git gui' to see if I forgot to check in files.
2) I personally don't check in my datastore. Are you familiar with .gitignore? It's a text file in which you list files for git to ignore when you run 'git status' and other functions. I put in .gaedata as well as .pyc and backup files.
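A .gitignore covering the files mentioned could look like this (the backup-file patterns are just a guess at typical names):
# local datastore file
.gaedata
# compiled python files
*.pyc
# editor backup files
*~
*.bak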
To manage the database I use "python manage.py dumpdata > file", which dumps the database to a JSON-encoded file. Then I can reload it using "python manage.py loaddata".
I don't know of any way to deploy from git. You can probably write a little Python script to check whether git is up to date before you deploy. Personally though, I deploy stuff to test to make sure it's working before I check it in.