I just created a test application in Heroku so that I can stay in the same Django project but quickly switch back and forth between connecting to my production database and my testing app database. I created an environment variable on my laptop using export TEST_DATABASE_URL="...", but even with the code below I am still connected to my production database when I run my Django project on localhost. Does anyone know how I can accomplish this?
# ~~~ PROD SETTINGS ~~~
# DATABASE_URL = os.environ['DATABASE_URL']
# DEBUG = False
# ~~~ TEST SETTINGS ~~~
DATABASE_URL = os.environ['TEST_DATABASE_URL']
DEBUG = True  # use a real boolean; the string 'True' is always truthy
# tried commenting this code out so it doesn't use the local sqlite file
# DATABASES = {  # Use this to use local test DB  # todo: prod doesn't have access to django_session...
#     'default': {
#         'ENGINE': 'django.db.backends.sqlite3',
#         'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
#     }
# }
Procfile:
release: python3 manage.py migrate
web: daphne django_project.asgi:application --port $PORT --bind 0.0.0.0 -v2
worker: python3 manage.py runworker channels --settings=django_project.settings -v2
I found the answer. Assigning DATABASE_URL = os.environ['DATABASE_URL'] in settings.py had no effect on its own: Django never reads a module-level DATABASE_URL variable, only the DATABASES setting. When running the app locally I had to run export DATABASE_URL={my database credential} in my Ubuntu terminal for localhost to use my test database.
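For anyone else hitting this, here is a minimal sketch of wiring the toggle up explicitly with dj-database-url (the USE_TEST_DB flag is a hypothetical name, not part of the original setup):
import os

import dj_database_url  # assumed dependency: pip install dj-database-url

# Hypothetical flag: `export USE_TEST_DB=1` locally to target the test app's DB
url = os.environ['TEST_DATABASE_URL'] if os.environ.get('USE_TEST_DB') else os.environ['DATABASE_URL']

DATABASES = {
    # parse() turns a postgres:// URL into Django's DATABASES dict format
    'default': dj_database_url.parse(url, conn_max_age=600),
}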
I have a webapp which is not yet complete, but I recently deployed it to Heroku. It uses:
Django
Rest-framework
Reactjs
Now, I have deployed the deploy-heroku branch of my project to Heroku's master.
The only difference between my project's master branch and the deploy-heroku branch is that I have made additional changes to settings.py (adding the PostgreSQL settings and so on) in the deploy-heroku branch.
I want to add more features to my webapp, so should I work on master and later copy-paste those changes to deploy-heroku? That seems redundant! Is there a better way to do this?
You could just let Heroku auto-deploy from master and use a ".env" file with django-environ (https://github.com/joke2k/django-environ) to vary your settings.py. That way you can have a local Django setting and a Heroku prod setting.
Example :
.env:
DEBUG=on
SECRET_KEY=your-secret-key
DATABASE_URL=psql://user:un-githubbedpassword@127.0.0.1:8458/database
SQLITE_URL=sqlite:///my-local-sqlite.db
settings.py:
import environ

env = environ.Env(
    # set casting, default value
    DEBUG=(bool, False)
)

# reading .env file
environ.Env.read_env()

# False if not in os.environ
DEBUG = env('DEBUG')

# Raises Django's ImproperlyConfigured exception if SECRET_KEY is not in os.environ
SECRET_KEY = env('SECRET_KEY')

# Parse database connection url strings like psql://user:pass@127.0.0.1:8458/db
DATABASES = {
    # read os.environ['DATABASE_URL'] and raise ImproperlyConfigured exception if not found
    'default': env.db(),
    # read os.environ['SQLITE_URL']
    'extra': env.db('SQLITE_URL', default='sqlite:////tmp/my-tmp-sqlite.db'),
}
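Note that env.db() with no arguments reads DATABASE_URL by default, so the same settings.py works both locally (where the .env file supplies the value) and on Heroku (where the config var takes over).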
Don't forget to add the .env file to your .gitignore and to update your Heroku environment variables in your app -> settings -> Reveal config vars
You can merge branches.
Here is a good explanation of how it works
I'm having trouble deploying my GeoDjango application on Heroku (using a free dyno, but I'm able to change that if necessary). When I execute git push heroku master --force I get the following error:
Try using 'django.db.backends.XXX', where XXX is one of:
'mysql', 'oracle', 'postgresql', 'sqlite3'
Error was: cannot import name 'GDALRaster'
I already installed PostGIS:
$ heroku pg:psql
create extension postgis;
Configured buildpacks:
heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
Created a .buildpacks file in my project with these links:
https://github.com/cyberdelia/heroku-geo-buildpack.git#1.1
https://github.com/heroku/heroku-buildpack-python.git#v29
Updated Procfile:
web: python manage.py collectstatic --noinput; gunicorn projectname.wsgi
My settings.py is configured:
INSTALLED_APPS = [
    ....
    'django.contrib.gis',
]

# config is assumed to come from python-decouple, and dburl from dj_database_url.parse
default_dburl = 'sqlite:///' + os.path.join(BASE_DIR, 'db.sqlite3')

DATABASES = {
    'default': config('DATABASE_URL', default=default_dburl, cast=dburl),
}
DATABASES['default']['ENGINE'] = config('DB_ENGINE')
My DB_ENGINE is in the .env file:
DB_ENGINE=django.contrib.gis.db.backends.postgis
References I already read:
Installing postgis
Buildpacks
Buildpacks 2
Configuring GeoDjango
I can't figure out a solution. Thanks in advance for any help.
You need to install GDAL in your Heroku instance. Here is the buildpack for Heroku:
https://github.com/mojodna/heroku-buildpack-gdal
A friend helped me on another forum; he told me to change the buildpack URL on Heroku to:
git://github.com/dulaccc/heroku-buildpack-geodjango.git#1.1
And I added these lines to settings.py:
from os import environ

GEOS_LIBRARY_PATH = environ.get('GEOS_LIBRARY_PATH')
GDAL_LIBRARY_PATH = environ.get('GDAL_LIBRARY_PATH')
That solved the deployment problem. Thanks.
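For reference, the pieces above combine into settings along these lines (a sketch; it assumes config comes from python-decouple and dburl is dj_database_url.parse, matching the question's usage):
import os
from decouple import config
from dj_database_url import parse as dburl

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Fall back to local SQLite when DATABASE_URL is not set
default_dburl = 'sqlite:///' + os.path.join(BASE_DIR, 'db.sqlite3')

DATABASES = {
    'default': config('DATABASE_URL', default=default_dburl, cast=dburl),
}
# Use the PostGIS backend so GeoDjango works (DB_ENGINE from .env or config vars)
DATABASES['default']['ENGINE'] = config('DB_ENGINE', default='django.contrib.gis.db.backends.postgis')

# Library paths exported by the GeoDjango buildpack
GEOS_LIBRARY_PATH = os.environ.get('GEOS_LIBRARY_PATH')
GDAL_LIBRARY_PATH = os.environ.get('GDAL_LIBRARY_PATH')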
I am writing my first 'self-made deployment', with the deploy script written using Fabric. I have added an export to .bashrc on my production machine to export a key:value pair {'DIGITAL_OCEAN': True} so I can add conditions to my settings and use different databases for local and production environments.
SETTINGS.PY
import os

if 'DIGITAL_OCEAN' in os.environ:
    ON_DO = True
else:
    ON_DO = False

if ON_DO:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'user',
            'USER': 'user',
            'PASSWORD': 'pass',
            'HOST': 'localhost',
            'PORT': '',
        }
    }
else:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'localuser',
            'USER': 'localuser',
            'PASSWORD': 'localpass',
            'HOST': 'localhost',
            'PORT': '',
        }
    }
NOW... If I run an SSH command like $ python manage.py migrate, all goes well: ON_DO is discovered. But in my deploy script, listed below, ON_DO comes through as False. This happened spontaneously before and then corrected itself (maybe with a Gunicorn or Nginx restart), so I tried adding some restarts to the script, but no luck so far and I am out of ideas.
def server():
    '''IDK'''
    env.host_string = 'ip.ip.ip.ip'
    env.user = 'root'

def pull_deploy():
    '''Makes the server pull it from the git repo at Bitbucket'''
    path = '/home/django/'
    print(red('BEGINNING PULL DEPLOY'))
    with cd('%s' % path):
        run('pwd')
        print(green('Pulling master from Bitbucket'))
        run('git pull origin master')
        print(green('SKIPPING installing requirements'))
        run('source %spyenv/bin/activate && pip install -r langalang/requirements.txt' % path)
        print('Collecting static files')
        run('source %spyenv/bin/activate && python langalang/manage.py collectstatic' % path)
        print('Restarting Gunicorn')
        run('sudo service gunicorn restart')
        print('Restarting Nginx')
        run('nginx -s reload')
        print('Making migrations')
        run('source %spyenv/bin/activate && python langalang/manage.py makemigrations' % path)
        print('Migrating DB')
        run('source %spyenv/bin/activate && python langalang/manage.py migrate' % path)
        print('Restarting Gunicorn')
        run('sudo service gunicorn restart')
        print('Restarting Nginx')
        run('nginx -s reload')
        print(red('DONE'))
The problem was that I had declared my environment variable 'ON_DO' in ~/.bashrc or ~/.profile, and those only export the variable in a login shell. I guess Django doesn't count as a login shell when it runs by itself. I had to export the variables from Django's .wsgi file instead.
That file only runs in production as far as I can tell, so it only sets the variables on the production system.
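A minimal sketch of that wsgi.py approach (the DIGITAL_OCEAN name comes from this question; 'myproject' is a placeholder):
# wsgi.py
import os

from django.core.wsgi import get_wsgi_application

# Mark this process as running on the production box; setdefault keeps
# any value that was already exported in the environment
os.environ.setdefault('DIGITAL_OCEAN', 'true')
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

application = get_wsgi_application()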
@deltaskelta Why would you set the variable in the wsgi.py file? Doesn't it defeat the purpose, as this variable will also be set in your development environment?
Here is the shell script that I wrote.
#!/bin/sh
# Check whether a python process is already running; the [p] trick
# stops grep from matching its own process entry
ps aux | grep usr/bin/[p]ython
if [ $? != 0 ]
then
    # No server running: set the variable and start one
    export UNIQUE_KEY='value'
    python ~/project_name/manage.py runserver 0:8000
    exit 0
else
    exit 1
fi
What this does is set the UNIQUE_KEY environment variable whenever it executes and unset it as soon as it stops. It also works with crontab, because shell scripts executed via crontab are run in a non-interactive, non-login shell session.
An in-depth understanding of the distinct shell session types will probably help here.
The Difference between Login, Non-Login, Interactive, and Non-Interactive Shell Sessions
The bash shell reads different configuration files depending on how the session is started.
One distinction between different sessions is whether the shell is being spawned as a "login" or "non-login" session.
A login shell is a shell session that begins by authenticating the user. If you are signing into a terminal session or through SSH and authenticate, your shell session will be set as a "login" shell.
If you start a new shell session from within your authenticated session, like we did by calling the bash command from the terminal, a non-login shell session is started. You were not asked for your authentication details when you started your child shell.
Another distinction that can be made is whether a shell session is interactive, or non-interactive.
An interactive shell session is a shell session that is attached to a terminal. A non-interactive shell session is one that is not attached to a terminal session.
Check this link for details - Digital Ocean Tutorials
I am using Robot Framework for my acceptance testing.
To start the Django server I run the python manage.py runserver command from Robot Framework. After that, I call python manage.py migrate. But the tests are slow because they are not using an in-memory database.
I tried creating a new settings file called testing and I am using the following configuration:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    }
}
and setting the environment variable DJANGO_SETTINGS_MODULE at runtime to this config.
I run a migrate followed by a runserver, but the database does not exist. I can't figure out why; if I use a physical database it works.
I tried to check how Django's TestCase runs and creates the database, but I could not find it.
I simulate make migrate before running the test cases by calling execute_from_command_line() manually.
The command I use is (drop poetry run if you don't use Poetry):
poetry run ./manage.py test --settings=myproject.settings_test
I put this command into a GNUmakefile so I can run make test.
Then inside myproject/settings_test.py:
from .settings import *

ALLOWED_HOSTS = ['127.0.0.1']

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    }
}

from django.core.management import execute_from_command_line

# Run migrate at settings-import time, in the same process that serves requests
execute_from_command_line(['./manage.py', 'migrate'])
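This works because a SQLite :memory: database only lives inside the connection that created it: a separate python manage.py migrate invocation creates, migrates, and then discards its own in-memory database, so the runserver process never sees it. Running migrate at settings-import time keeps everything in one process.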
You might be able to use the django.db.connection.creation.create_test_db function documented here at the start of your tests and then call django.db.connection.creation.destroy_test_db after completing your tests.
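A sketch of that approach (untested; both methods are part of Django's database-creation API, and the old database name has to be saved up front so it can be restored):
from django.conf import settings
from django.db import connection

# Remember the real database name so the connection can be restored later
old_name = settings.DATABASES['default']['NAME']

# Create and migrate a fresh test database before the acceptance tests start
connection.creation.create_test_db(verbosity=1)

# ... run the Robot Framework tests against the test database ...

# Drop the test database and reconnect to the original one
connection.creation.destroy_test_db(old_name, verbosity=1)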
I have a locally running Django 1.7 app with some tests, connecting to MySQL.
I configured Travis CI with this repo
Question:
I want to have a separate database for Travis that is different from the one I use for development.
I tried adding separate settings in settings.py: default (for use with tests) and development (for use on dev boxes), and I thought .travis.yml would use 'default' when it ran the migrate tasks.
But Travis CI errors out with: django.db.utils.OperationalError: (1045, "Access denied for user 'sajay'@'localhost' (using password: YES)")
I have no idea why it is trying to use my development DB settings. I checked the Django 1.7 docs and googled around, but no luck.
Appreciate any help,
Thanks
My settings.py database section looks like this:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'expenses_db',
        'USER': 'root',
        'PASSWORD': '',
        'HOST': '127.0.0.1',
        'PORT': '3306',
    },
    # 'development': {
    #     'ENGINE': 'django.db.backends.mysql',
    #     'NAME': 'myapp_db',
    #     'USER': 'sajay',
    #     'PASSWORD': 'secret',
    #     'HOST': '127.0.0.1',
    #     'PORT': '3306',
    # },
}
Note: when the 'development' section is commented out, the Travis CI build is green.
My .travis.yml is pasted below:
language: python
services:
  - mysql
python:
  - "2.7"
env:
  - DJANGO_VERSION=1.7 DB=mysql
install:
  - pip install -r requirements.txt
  - pip install mysql-python
before_script:
  - mysql -e 'create database IF NOT EXISTS myapp_db;' -uroot
  - mysql -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost';" -uroot
  - python manage.py migrate
script:
  - python manage.py test
The problem you are getting is because you haven't got the right database name and settings for Travis CI. First you will need to separate your settings between Travis and your project. To do that I use an environment variable called BUILD_ON_TRAVIS (alternatively you can use a different settings file if you prefer).
settings.py:
import os

# Use the following live settings to build on Travis CI
if os.getenv('BUILD_ON_TRAVIS', None):
    SECRET_KEY = "SecretKeyForUseOnTravis"
    DEBUG = False
    TEMPLATE_DEBUG = True
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'travis_ci_db',
            'USER': 'travis',
            'PASSWORD': '',
            'HOST': '127.0.0.1',
        }
    }
else:
    # Non-Travis DB configuration goes here
    pass
Then, in the before_script section of your .travis.yml file, use the same database name as in the DATABASES settings. We then just have to set the environment variable in the .travis.yml file like so:
env:
  global:
    - BUILD_ON_TRAVIS=true
  matrix:
    - DJANGO_VERSION=1.7 DB=mysql
EDIT:
There is now an environment variable set by default when building on Travis.
Using this environment variable, we can solve the problem more simply:
settings.py:
import os

# Use the following live settings to build on Travis CI
if os.getenv('TRAVIS', None):
    # Travis DB configuration goes here
    pass
else:
    # Non-Travis DB configuration goes here
    pass
Doing it this way is preferable, as we no longer have to define the environment variable ourselves in the .travis.yml file.
I followed the approach suggested in http://www.slideshare.net/jacobian/the-best-and-worst-of-django to separate the settings for different environments.
I checked out How to manage local vs production settings in Django? but did not get a clear answer. Thanks.