Trying to streamline a deployment process to webfaction.com for my Django application, I have a master (working copy) and a development branch.
Currently I'm doing the following:
Make changes to my development branch in my local dev environment
When changes are working, test with the local dev server, then merge into my master branch
Git push so the code is in my remote repo (this has other issues such as passwords, keys, etc. which I've not quite solved yet) (also, I don't believe it's possible to scp code to WebFaction, and I'm not really a fan of any of the FTP services I've used so far)
SSH into my WebFaction server and do a git pull and git merge
Test to see if everything is still working (it never is)
Make any changes required to get everything working again
Commit any changes I've had to make to fix everything, then push back to the remote repo
Go back to my development environment and sync the code up with the production code
Rinse and repeat for the next feature
Obviously I've missed the efficient-development train; for the record, I've only been working with Django for a couple of months as a hobby project.
Can anyone suggest a Django deployment process that would be more conducive to sane development?
I would strongly suggest Fabric to handle your deployments to WebFaction:
http://docs.fabfile.org/en/1.11/tutorial.html
By using Fabric you can deploy code and do other server-side operations from your local terminal, with no need to manually SSH into the server. First install Fabric (the example below uses the Fabric 1.x API, so pin to a 1.x release):
pip install 'Fabric<2'
Create fabfile.py in your project root folder. Here is an example fabfile that can get you started:
from fabric.api import task, env, run, cd
from fabric.context_managers import prefix

env.hosts = ('wf_username@wf_username.webfactional.com',)
env.forward_agent = True

MANAGEPY = '~/webapps/my_project/code/my_project/manage.py'
PY = '~/webapps/my_project/env/bin/python2.7'

@task
def deploy():
    with cd('~/webapps/my_project/code/'):
        # 'production' is assumed to be a script on the server that activates the virtualenv
        with prefix('source production'):
            run('git pull --rebase origin master')
            run('pip install -r requirements.txt')
            run('{} {} migrate'.format(PY, MANAGEPY))
            run('{} {} collectstatic --noinput'.format(PY, MANAGEPY))
            # Touch wsgi.py so the app server reloads the new code
            run('touch my_project/my_project/wsgi.py')
You can run the fab task from your terminal with:
fab deploy
In my opinion, making code changes directly on the server is bad practice. Fabric can improve your development flow so that you make code edits only locally, then quickly deploy and test them.
The best and shortest way
In settings.py:
try:
    from production_settings import *
except ImportError as e:
    pass
You can override whatever is needed in production_settings.py; it should stay out of your version control, so you can use git freely for everything else.
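For example, a minimal production_settings.py might look like this (all values below are placeholders); because it is imported at the end of settings.py, anything defined here overrides the development defaults:

# production_settings.py -- kept out of version control (e.g. via .gitignore)
DEBUG = False
ALLOWED_HOSTS = ['example.com']          # placeholder domain

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myapp',                 # placeholder
        'USER': 'myapp',                 # placeholder
        'PASSWORD': 'change-me',         # placeholder
        'HOST': 'localhost',
        'PORT': '5432',
    }
}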
I am developing a Django Wagtail application on my local machine, connected to a local Postgres server.
I have a test server and a production server.
However, when I develop locally and try to deploy, there is always some issue with makemigrations and migrate, e.g. a KeyError.
What are the best practices for ensuring I do not run into these issues? What files do I need to port across?
I'll tell you what I do and what most of the companies I've worked at as a Django developer did, and I can tell you from experience that it works pretty well.
First, containerize your application. This will make your life much easier, remove external influences on your code, and give you an easy way to reproduce your environment.
Your Dockerfile should start from a Python base image and do three basic things:
Install your requirements/dependencies
Run the python manage.py migrate --noinput command
Run an HTTP server such as gunicorn with gunicorn -c /gunicorn.py wsgi:application
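A minimal Dockerfile sketch along those lines, running migrate at container start (the base image tag, paths, and file locations are assumptions):

# Assumes requirements.txt, manage.py and wsgi.py sit at the root of the build context
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt

COPY . /app/
COPY gunicorn.py /gunicorn.py

# Apply migrations, then serve the app with gunicorn
CMD python manage.py migrate --noinput && gunicorn -c /gunicorn.py wsgi:application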
You'll run makemigrations on your local machine and make sure that everything is working before committing to the repo.
In your gunicorn.py you'll put the settings to run the app, such as the number of workers (based on the CPU count), the binding port, and the folder your app is in, something like:
import os
import multiprocessing

# Chdir to specified directory before apps loading.
# https://docs.gunicorn.org/en/stable/settings.html#chdir
chdir = '/app/'

# Bind the application to port 8000 on all IPv4 interfaces.
# https://docs.gunicorn.org/en/stable/settings.html#bind
bind = '0.0.0.0:8000'

# Scale the number of workers with the CPU count.
# https://docs.gunicorn.org/en/stable/settings.html#workers
workers = multiprocessing.cpu_count() * 2 + 1
Second, containerize your other services, for example the Postgres database, Redis (for caching), and, depending on the size of your application, a connection pooler for the database.
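A rough docker-compose sketch of that second step (image tags, credentials, and service names are placeholders):

version: "3.8"
services:
  web:
    build: .                      # the Dockerfile from the first step
    ports:
      - "8000:8000"
    depends_on:
      - db
      - redis
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: myapp          # placeholder
      POSTGRES_USER: myapp        # placeholder
      POSTGRES_PASSWORD: secret   # placeholder
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
volumes:
  pgdata: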
It's highly recommended that you have a step in the pipeline that runs the tests; they need to run before everything else, maybe just after lint.
OK, what now? Now you need a way to deploy that stuff. The best option for this scenario is to push your image to the GitHub Container Registry, and you can add a tag to it, for example:
IMAGE_ID=ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
# Change all uppercase to lowercase
IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
docker tag $IMAGE_NAME $IMAGE_ID:staging
docker push $IMAGE_ID:staging
This can be added to a GitHub Action in the build step, for example.
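A rough GitHub Actions workflow sketch showing where that fits, with tests running before the build (job names, the Python version, and the image name are assumptions):

name: build-and-push

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: python manage.py test          # tests run before anything is built

  build:
    needs: test                             # only build/push if tests pass
    runs-on: ubuntu-latest
    env:
      IMAGE_NAME: my_app                    # placeholder
    steps:
      - uses: actions/checkout@v4
      - name: Log in to GHCR
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - name: Build, tag and push
        run: |
          docker build -t $IMAGE_NAME .
          IMAGE_ID=ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
          IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
          docker tag $IMAGE_NAME $IMAGE_ID:staging
          docker push $IMAGE_ID:staging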
After your new code is in a new image on GitHub, you just need to update the running one. This can be done by creating a script on the server and running that script from a GitHub Action; it is something like:
docker pull ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
echo 'Restarting Application...'
# Recreate the app container from the freshly pulled image
# (assumes the services are defined in a docker-compose.yml on the server)
docker stop {YOUR_CONTAINER} && docker compose up -d
sudo systemctl restart nginx
echo 'Cleaning old images'
sudo docker system prune -af
You can see that I create the image with a staging tag. You can create a rule in GitHub Actions, for example, to trigger that action when you publish a new release, and create another action triggered on every new commit that builds/deploys a dev tag.
For the migration problem, the first thing is: when your application goes live, squash every migration into the first one (you can drop the database and all the migrations, then recreate the database and run the makemigrations command again to get there), so you have a clean migration history on the server. Never create unnecessary relations between tables, prefer cached properties over adding new columns, use UUIDs for unique ids, and try not to make breaking changes in the database. It's hard, but if you plan the database beforehand it is not so difficult to do.
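To illustrate those last two points, a hypothetical model sketch (model and field names are made up):

import uuid

from django.db import models
from django.utils.functional import cached_property


class Order(models.Model):
    # UUID primary key instead of an auto-incrementing integer
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    created_at = models.DateTimeField(auto_now_add=True)

    @cached_property
    def total(self):
        # Derived value computed (and cached per instance) instead of
        # stored in an extra column that would require a new migration
        return sum(item.price for item in self.items.all())


class OrderItem(models.Model):
    order = models.ForeignKey(Order, related_name='items', on_delete=models.CASCADE)
    price = models.DecimalField(max_digits=8, decimal_places=2)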
Hit me up if you have any questions. A lot of what I said can be done on plenty of other platforms such as GitLab, Travis, or CircleCI, but I used GitHub Actions in the examples because I think it is simpler to picture.
EDIT:
I forgot to tell you to have a cron job on your server doing backups of your database. The migrate command only applies changes after verification, but if something else breaks the database, this can save your life.
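For example, a hypothetical crontab entry that dumps a Postgres database nightly (database name, user, and paths are placeholders; assumes a .pgpass file for authentication):

# Nightly backup at 02:00, keeping one file per day of the month
0 2 * * * pg_dump -U myapp myapp | gzip > /backups/myapp_$(date +\%d).sql.gz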
I have a Django app up and running in Google App Engine flexible. I know how to run migrations using the cloud proxy or by setting the DATABASES value but I would like to automate running migrations by doing it in the deployment step. However, there does not seem to be a way to run a custom script before or after the deployment.
The only way I've come up with is by doing it in the entrypoint command which you can set in the app.yaml:
entrypoint: bash -c 'python3 manage.py migrate --noinput && gunicorn -b :$PORT app.wsgi'
This feels a lot like doing it wrong. A lot of Googling didn't provide a better answer.
Defining the python3 manage.py migrate command in your app.yaml file will make it run every time a new instance is spawned and set up to serve traffic. Although technically this may not be an issue (no migration will happen if the database schema hasn't changed), this isn't the right place to declare it.
You'd want this command to run once per new code push. This fits perfectly in a CI/CD approach. There are several tutorials in the Google Cloud online documentation using Bitbucket Pipelines or Travis CI, for example, but you can use many other CI/CD solutions.
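For instance, a deploy job in the CI tool of your choice could run roughly these commands once per push (the Cloud SQL Proxy connection name and the gcloud app deploy step are assumptions about your setup):

# 1. Open a tunnel to the Cloud SQL instance (placeholder connection name)
./cloud_sql_proxy -instances=my-project:us-central1:my-db=tcp:5432 &
sleep 5
# 2. Apply migrations once, from the CI runner
python3 manage.py migrate --noinput
# 3. Deploy the new version to App Engine
gcloud app deploy app.yaml --quiet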
I have an app that is deployed to Heroku, and I'd like to be able to run the test suite post-deployment on the target environment. I am using the Heroku Postgres add-on, which means that I have access to a single database only. I have no rights to create new databases, which in turn means that the standard Django test command fails, as it can't create the test_* database.
$ heroku run python manage.py test
Running `python manage.py test` attached to terminal... up, run.9362
Creating test database for alias 'default'...
Got an error creating the test database: permission denied to create database
Is there any way around this?
Turns out I was wrong. I was not testing what I thought was being tested... Since Heroku's routing mesh was sending requests to different servers, LiveServerTestCase was starting a web server on one machine while Selenium was connecting to other machines altogether.
By updating the Heroku Procfile to:
web: python src/manage.py test --liveserver=0.0.0.0:$PORT
overriding the DATABASES setting to point to the test database, and customizing the test suite runner linked to below (the same idea still holds: override setup_databases so that it only drops/re-creates tables, not the entire database), I was able to run remote tests. But this is even more hacky/painful/inelegant. Still looking for something better! Sorry about the confusion.
(updated answer below)
Here are the steps that worked for me:
Create an additional, free Postgres database using the Heroku toolbelt
heroku addons:add heroku-postgresql:dev
Use the HerokuTestSuiteRunner class which you'll find here.
This custom test runner requires that you define a TEST_DATABASES setting which follows the typical DATABASES format. For instance:
TEST_DATABASES = {
'default': dj_database_url.config(env='TEST_DATABASE_URL')
}
Then, have the TEST_RUNNER setting be a Python path to wherever HerokuTestSuiteRunner can be found.
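For instance (the module path below is hypothetical; adjust it to wherever you placed the class):

TEST_RUNNER = 'myproject.test_runner.HerokuTestSuiteRunner'  # hypothetical path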
You should now be able to run Django tests on Heroku using the given database. This is very much a quick hack... Let me know how it could be improved / made less hackish. Enjoy!
(original answer below)
A few relevant solutions have been discussed here. As you can read in the Django docs, "[w]hen using the SQLite database engine, the tests will by default use an in-memory database".
Although this doesn't thoroughly test the database engine you're using on Heroku (I'm still on the lookout for a solution that does that), setting the database engine to SQLite will at least allow you to run your tests.
See the above-linked StackOverflow question for some pointers. There are at least two ways out: testing if 'test' in sys.argv before forcing SQLite as the database engine, or having a dedicated settings file used in testing, which you can then pass to manage.py test using the --settings option.
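A sketch of the first approach, placed at the bottom of settings.py (a simple check; adjust to your project):

import sys

if 'test' in sys.argv:
    # Use an in-memory SQLite database when running the test suite
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': ':memory:',
        }
    }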
Starting with version 1.8, Django now has an option called keepdb, which allows for the same database to be reused during tests.
The --keepdb option can be used to preserve the test database between test runs.
This has the advantage of skipping both the create and destroy actions which can greatly decrease the time to run tests, especially those in a large test suite.
If the test database does not exist, it will be created on the first run and then preserved for each subsequent run.
Any unapplied migrations will also be applied to the test database before running the test suite.
Since it also allows for the test database to exist prior to running tests, you can simply add a new Postgres Heroku instance to your dyno and configure the tests to use that particular database.
Bonus: you can also use the --failfast option, which exits as soon as your first test fails, so that you don't have to wait for all tests to complete.
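For example, assuming the extra Heroku Postgres database is wired up as the test database:

heroku run python manage.py test --keepdb --failfast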
However, if you are deploying things to Heroku and you are using Heroku Pipelines, an even better option is available : Heroku CI.
Here is my current setup:
GitHub repository, a branch for dev.
myappdev.appspot.com (not real url)
myapp.appspot.com (not real url)
App written on GAE Python 2.7, using django-nonrel
Development is performed on a local dev server. When I'm ready to release to dev, I increment the version, commit, and run "manage.py upload" to deploy to myappdev.appspot.com.
Once testing is satisfactory, I merge the changes from dev to main repo. I then run "manage.py upload" to upload the main repo code to the myapp.appspot.com domain.
Is this setup good? Here are a few issues I've run into.
1) I'm new to git, so sometimes I forget to add files, and the commit doesn't notify me. So I deploy code to dev that works, but does not match what is in the dev branch. (This is bad practice).
2) The datastore file in the git repo causes issues. Merging binary files? Is it ok to migrate this file between local machines, or will it get messed up?
3) Should I be using "manage.py upload" for each release to the dev or prod environment, or is there a better way to do this? Heroku looks like it can pull right from GitHub. The way I'm doing it now seems like there is too much room for human error.
Any overall suggestions on how to improve my setup?
Thanks!
I'm on a pretty similar setup, though I'm still running on py2.5 and django-nonrel.
1) I usually use 'git status' or 'git gui' to see if I forgot to check in files.
2) I personally don't check in my datastore. Are you familiar with .gitignore? It's a text file in which you list files for git to ignore when you run 'git status' and other commands. I put .gaedata in it, as well as .pyc and backup files.
To manage the database I use "python manage.py dumpdata > file", which dumps the database to a JSON-encoded file. Then I can reload it using "python manage.py loaddata".
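For example (the fixture file name is arbitrary):

python manage.py dumpdata > datadump.json   # dump the datastore contents to JSON
python manage.py loaddata datadump.json     # reload them on another machine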
3) I don't know of any deploy from git. You can probably write a little python script to check whether git is up to date before you deploy. Personally though, I deploy stuff to test to make sure it's working, before I check it in.
I'm new to Git. I need to set up Git to deploy a Django website to the production server. My question here is to know the best way of doing this.
For now I only have a master branch. My problem is that the development environment is not the same as the production environment. How can I have the two environments (development and production) in Git? Should I use two new branches (development and production)? Please give me a clue on this.
Another question... when I finish uploading/pushing the code to the production server, I need to restart Gunicorn (which serves the Django website). How can I do this?
And the most important question... should I use Git to do this, or are there better options?
Best Regards,
The first question you must solve is your project structure. Usually the difference between the development and production environments is settings.py and urls.py, so why not separate those first? :) For example, you can have one main settings.py where you define all the default settings that are in common. Then at the end of the file you just import settings_dev.py and settings_prod.py, for example:
try:
    from settings_prod import *
except ImportError:
    pass

try:
    from settings_dev import *
except ImportError:
    pass
Then you can simply override any settings you want and have custom settings per environment (for example, installed apps). The same logic can be used for the urls.py file.
Then you can simply avoid adding the *_dev files to the repo, and on the server side you can just check out the code from the repo and restart the HTTP server. To automate this I can't name the right tool to use off the top of my head; sometimes a simple script is the solution, for example one that watches whether a file's timestamp has changed and, if so, runs the restart command for the HTTP server.
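One possible sketch is a git post-receive hook in a bare repository on the server that checks out the pushed code and restarts Gunicorn (the work-tree path and the service name are assumptions):

#!/bin/sh
# hooks/post-receive in the bare repo on the production server
GIT_WORK_TREE=/srv/mysite git checkout -f master    # placeholder path
sudo systemctl restart gunicorn                     # placeholder service name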
Hope that helped.
Ignas
You can follow this branching model: http://nvie.com/posts/a-successful-git-branching-model/
And git is OK, but use Fabric for deployment.