Version number in Django applications - django

I'm working on a Django application and I want to display the version of the application, so that people who find bugs know which version they are running and can provide better bug reports.
Is there a universally accepted way to store a version number in Django (I mean the version of my application, not of Django)?

I was looking for this exact same question, and found your question. The answer you accepted is not quite satisfactory to me.
I am working with django-debug-toolbar, which can also show the versions of all the apps used. I was wondering how to get the versions of my custom applications to show there as well.
Looking a bit further I found this question and answer:
How to check the version of an installed application in Django at runtime?
This answer however does not tell me where to put this __version__
So I looked into an open application which does show up in the django toolbar.
I looked into the django-rest-framework code, and there I found out:
the version is put in the __init__.py file
(see https://github.com/tomchristie/django-rest-framework/blob/master/rest_framework/__init__.py)
and it is put here as:
__version__ = '2.2.7'
VERSION = __version__ # synonym
And after this, in his setup.py, he gets the version from this __init__.py:
see: https://github.com/tomchristie/django-rest-framework/blob/master/setup.py
like this:
import os
import re

def get_version(package):
    """
    Return package version as listed in `__version__` in `__init__.py`.
    """
    init_py = open(os.path.join(package, '__init__.py')).read()
    return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)

version = get_version('rest_framework')
When using buildout and zest.releaser:
By the way, I am using buildout and zest.releaser for building and versioning.
In this case, above is a bit different (but basically the same idea):
see http://zestreleaser.readthedocs.org/en/latest/versions.html#using-the-version-number-in-setup-py-and-as-version
The version in setup.py is automatically managed by zest.releaser, so in __init__.py you do:
import pkg_resources
__version__ = pkg_resources.get_distribution("fill in yourpackage name").version
VERSION = __version__ # synonym

There are many places where you can store your app version number and a few methods that allow you to show it in django templates. A lot depends on the release tool you're using and your own preferences.
Below is the approach I'm using in my current project.
Put the version number into version.txt
I'm storing the app version number in the version.txt file. It's one of the locations the zest.releaser release tool (that I'm using) takes into account while doing a release.
The whole content of version.txt is just the app version number, for example: 1.0.1.dev0
Read the number to a variable in settings.py
...
with open(version_file_path) as v_file:
    APP_VERSION_NUMBER = v_file.read()
...
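For reference, a slightly fuller sketch of that settings.py snippet; the version.txt location (next to manage.py) and the .strip() call are my assumptions:
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# assumed: version.txt sits in the project root, next to manage.py
version_file_path = os.path.join(BASE_DIR, 'version.txt')

with open(version_file_path) as v_file:
    # strip the trailing newline so templates don't render it
    APP_VERSION_NUMBER = v_file.read().strip()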
Create a custom context processor
This paragraph and the following ones are based on the wonderful answer by bcchun to Can I access constants in settings.py from templates in Django?
A custom context processor will allow you to add the app version number to the context of every rendered template. You won't have to add it manually every time you render a template (and usually you'll want to have the version number somewhere in the footer of every page).
Create context_processors.py file in your app directory:
from django.conf import settings

def selected_settings(request):
    # return the version value as a dictionary
    # you may add other values here as well
    return {'APP_VERSION_NUMBER': settings.APP_VERSION_NUMBER}
Add the context processor to settings.py
TEMPLATES = [{
    ...
    'OPTIONS': {
        'context_processors': [
            ...
            'your_app.context_processors.selected_settings'
        ],
    },
}]
Use RequestContext or render in views
RequestContext and render populate the context with variables supplied by context_processors you set in settings.py.
Example:
def some_view(request):
    return render(request, 'content.html')
Use it in a template
{% load i18n %}
...
<div>{% trans 'App version' %}: {{ APP_VERSION_NUMBER }}</div>
...

For me the best result/approach is to use the __init__.py in the project folder, such as
.
├── project_name
│   ├── __init__.py
and later check it using the standard way, as described in PEP 396:
>>> import project_name
>>> project_name.__version__
'1.0.0'
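For reference, the project's __init__.py here would hold nothing more than the version string, in the PEP 396 style shown in the earlier answers:
# project_name/__init__.py
__version__ = '1.0.0'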

I solved this by adding a templatetag to my django project:
in proj/templatetags, added version.py:
from django import template
import time
import os

register = template.Library()

@register.simple_tag
def version_date():
    return time.strftime('%m/%d/%Y', time.gmtime(os.path.getmtime('../.git')))
Then, in my base.html (or whichever template), adding:
{% load version %}
<span class='version'>Last Updated: {% version_date %}</span>
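One assumption worth spelling out: the templatetags directory must be a Python package inside an app (or project package) that is listed in INSTALLED_APPS, roughly:
proj/
    __init__.py
    templatetags/
        __init__.py   # required so Django can discover the tag library
        version.py
Otherwise {% load version %} will raise a TemplateSyntaxError about an unregistered tag library.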

If using GIT for source versioning, you might want manual promotion of stable
releases, and automatic numbering for development commits.
One way to achieve this in a Django project is:
In "PROJECT/_ init _.py" define:
__version__ = '1.0.1'
__build__ = ''
Then in settings.py do:
import os
import subprocess

import PROJECT

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

try:
    PROJECT.__build__ = subprocess.check_output(["git", "describe", "--tags", "--always"], cwd=BASE_DIR).decode('utf-8').strip()
except Exception:
    PROJECT.__build__ = PROJECT.__version__ + " ?"
Thus, PROJECT.__build__ will show:
v1.0.1 in stable releases
and
v1.0.1-N-g8d2ec45
when the most recent tag doesn't point to the last commit (where N counts the number of additional commits after the tag, followed by the abbreviated commit hash)

It seems the settings file would be a reasonable location to store the version number. I don't believe there is any Django accepted way to store a version number of your personal application. It seems like an application specific variable that you should define.
For more information on getting the version number out of svn: Getting SVN revision number into a program automatically

Not for Django applications, per se, but for Python modules, yes. See PEP 396, PEP 386 and the verlib library (easy_install verlib).
(I’d elaborate, but I just now discovered this, myself.)

Version info is typically maintained in git commit tags. Otherwise, the last git commit and the last-updated time are good indicators of which version is running and when it was deployed.
For those using django-rest-framework and only having an API, you can return both of these, "last updated" as well as "last git commit", using an /api/version endpoint:
In views.py:
import os
import time
import subprocess
import json

from rest_framework.response import Response
from rest_framework.viewsets import ViewSet


class VersionViewSet(ViewSet):
    def list(self, request):
        # ['git', 'describe', '--tags']           # use this for named tags (version etc.)
        # ['git', 'describe', '--all', '--long']  # use this for any commit
        # git log -1 --pretty=format:"Last commit %h by %an, %ar ("%s")"
        # {"commit_hash": "%h", "full_commit_hash": "%H", "author_name": "%an", "commit_date": "%aD", "comment": "%s"}
        FILE_DIR = os.path.dirname(os.path.abspath(__file__))
        git_command = ['git', 'log', '-1', '--pretty={"commit_hash": "%h", "full_commit_hash": "%H", "author_name": "%an", "commit_date": "%aD", "comment": "%s"}']
        git_identifier = subprocess.check_output(git_command, cwd=FILE_DIR).decode('utf-8').strip()
        git_identifier = json.loads(git_identifier)
        last_updated = time.strftime('%a, %-e %b %Y, %I:%M:%S %p (%Z)', time.localtime(os.path.getmtime('.git'))).strip()
        return Response({
            "last_updated": last_updated,
            "git_commit": git_identifier
        }, status=200)
In urls.py:
from rest_framework import routers

from myapp.views import VersionViewSet

router = routers.DefaultRouter()
...
# note: newer DRF versions use basename= instead of base_name=
router.register(r'version', VersionViewSet, base_name='version')
This creates the endpoint in line with the other endpoints in your API.
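For completeness, the router still has to be wired into the urlconf; a minimal sketch (my assumption, using the Django 2+ path syntax) that puts the endpoint at /api/version/:
from django.urls import include, path

urlpatterns = [
    # mounts the DRF router from above under /api/
    path('api/', include(router.urls)),
]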
Output will be seen like this at http://www.example.com/api/version/:
HTTP 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept
{
    "last_updated": "Mon, 6 May 2019, 11:19:58 PM (IST)",
    "git_commit": {
        "commit_hash": "e265270",
        "full_commit_hash": "e265270dda196f4878f4fa194187a3748609dde0",
        "author_name": "Authorname",
        "commit_date": "Mon, 6 May 2019 23:19:51 +0530",
        "comment": "The git commit message or subject or title here"
    }
}

I use this option __import__('project').VERSION or __import__('project').__version__. The version is put in the __init__.py file as everybody said, for example:
project_name
|   __init__.py
# __init__.py file
VERSION = '1.0.0' # or __version__ = '1.0.0'
Now you can get it from anywhere:
# Error tracking settings
import sentry_sdk

sentry_sdk.init(
    ...
    release=__import__('cjvc_project').VERSION
)

I used a context processor and it looks like this:
import sys

sys.path.append('..')
from content_services import __version__


def get_app_version(request):
    """
    Get the app version
    :param request:
    :return:
    """
    return {'APP_VERSION': __version__}
Since the project name is content_services I have to change the sys path up 1 level so I can import it.
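As in the earlier answer, the context processor still has to be registered in settings.py; the dotted path below assumes the function lives in content_services/context_processors.py:
TEMPLATES = [{
    ...
    'OPTIONS': {
        'context_processors': [
            ...
            'content_services.context_processors.get_app_version',
        ],
    },
}]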

In case you use Git and version tagging you can display the application version in admin site header.
Create a version.py file in the project or any app module:
import os
import subprocess

FILE_DIR = os.path.dirname(os.path.abspath(__file__))


def get_version_from_git():
    try:
        return subprocess.check_output(['git', 'describe', '--tags'],
                                       cwd=FILE_DIR).decode('utf-8').strip()
    except Exception:
        return '?'


VERSION = get_version_from_git()
Add the version to admin site header in urls.py:
from django.contrib import admin
from django.utils.safestring import mark_safe

from utils import version

...

admin.site.site_header = mark_safe('MyApp admin <span style="font-size: x-small">'
                                   f'({version.VERSION})</span>')
If you need to provide the version to external tools like Django Debug Toolbar, you can expose the version in project __init__.py as suggested above:
from utils import version
__version__ = version.VERSION
VERSION = __version__ # synonym

Related

Django REST framework backend filtering issues with React.js frontend

So I'm trying to put together a not-so-simple Todo application, using some perhaps superfluous practices and technologies to showcase development skills. The application is meant to be a sort of Trello replica. The backend is Django utilizing the django REST framework for RESTful api requests from the frontend which is React.js. In short, I'm having trouble with my django_backend, and maybe specifically django-rest-framework, which doesn't seem to appreciate being asked to filter its querysets. I've tried several django packages as well as the method described in the DRF documentation, which I will elaborate, but my attempts aren't working and I would appreciate some guidance. Thank you!
Backend Setup
I've just completed implementing redux-saga and I have a few very basic sagas that get EVERY instance of each of the models. But I obviously don't want to request every instance, so we should filter the response.
Django REST framework uses built-in classes like Serializers and ViewSets to return lists of model instances that the framework can respond to a request with. These were my initial viewsets for testing requests to the backend, and they all returned the appropriate JSON object to my frontend:
viewsets.py
from rest_framework import viewsets
from corkboard.models import Card, Stage, Board
from .serializers import CardSerializer, StageSerializer, BoardSerializer

# Card is the model for a "Todo" instance

# Todo Cards viewset
class CardViewSet(viewsets.ModelViewSet):
    queryset = Card.objects.all()
    serializer_class = CardSerializer

# Stages of completion model viewset
class StageViewSet(viewsets.ModelViewSet):
    queryset = Stage.objects.all()
    serializer_class = StageSerializer

# Board that "houses" all stages and cards
class BoardViewSet(viewsets.ModelViewSet):
    queryset = Board.objects.all()
    serializer_class = BoardSerializer
viewseturls.py:
from rest_framework import routers
from .views import CardViewSet, StageViewSet, BoardViewSet

# registering router paths for each of the viewsets
router = routers.DefaultRouter()
router.register('cards', CardViewSet, 'cards')
router.register('stages', StageViewSet, 'stages')
router.register('boards', BoardViewSet, 'boards')

urlpatterns = router.urls
and these urls are used in the calls from the frontend via axios. Initially I tried simply changing the url in the request from something like
axios.get(`http://localhost:8000/api/cards`)
to
axios.get(`http://localhost:8000/api/cards/?stage=1`)
This different url responded with the same JSON object (ALL Card instances, regardless of stage)
Looking into the DRF filtering docs revealed some filtering methods that seem helpful. They involve creating another view class to override the get_queryset method as such:
from rest_framework import generics

class CardsFilteredByStageViewSet(generics.ListAPIView):
    serializer_class = TodoSerializer

    def get_queryset(self):
        """
        This view should return a list of all the todo cards for
        the stage as determined by the stage portion of the URL.
        """
        stage = self.kwargs['stage']
        return Card.objects.filter(stage=stage)
Note that this involves using the
rest_framework.generics.ListAPIView
and that I also tried inheriting from
views.ViewSets
After both, I go edit the router url in urls.py:
from .views import StageCardViewSet
router.register('stage_cards', StageCardViewSet, 'stage_cards')
and then try
http://localhost:8000/api/stage_cards/
and
http://localhost:8000/api/stage_cards/?stage=1
but got a KeyError for both:
Request Method: GET
Request URL: http://localhost:8000/api/stage_cards/?stage=1
Django Version: 3.0.5
Exception Type: KeyError
Exception Value: 'stage'
The DRF docs also provide sources for packages like django-filter, and installation seems to go fine. I used
pipenv install django-filter
As you can see later on in this post, this package appears in my requirements.txt as well as my Pipfile, so it is certainly installed. It's also been added to django_backend.settings. Yes, the 's' is supposed to be in 'django_filters' below:
INSTALLED_APPS = [
    # confusing pluralization: the package is django-filter, the app is django_filters
    'django_filters',
    ...
]

REST_FRAMEWORK = {
    'DEFAULT_FILTER_BACKENDS': [
        'django_filters.rest_framework.DjangoFilterBackend',
    ],
    ...
}
This should give an error if it's not added as a package, correct?
After it's added as an installed app, extend the existing viewsets (or generic lists, depending on what you're using):
from django_filters.rest_framework import DjangoFilterBackend  # added

class CardViewSet(viewsets.ModelViewSet):
    queryset = Card.objects.all()
    serializer_class = CardSerializer
    filter_backends = [DjangoFilterBackend]  # added
    filterset_fields = ['stage', 'id']  # added
But I get an "Unable to import" Error in the IDE. Exploring this error revealed this article which suggested addition of django-rest-framework-filters, but immediately after installation, I began getting errors about not having django.utils.six, which I found has been removed from django > 3, but is required for django-rest-framework-filters. Though some other documentation mentions a lot of d-r-r-f features being added back into the django-filter. Removing/uninstalling d-r-f-f fixes the errors and allows the server to run, but I'm still getting red underlines about being unable to import django-filter to my viewsets:
Unable to import 'django_filters.rest_framework'pylint(import-error)
in the updated views:
from django_filters.rest_framework import DjangoFilterBackend
#^^^ here is the unable to import error underline
I just want to filter the instances of my models so I can return relevant information to the frontend. Please help.
Extra, maybe relevant info
Though my project also contains a Pipfile for my local VDE, it also has a requirements.txt file used in the installation of dependencies for Docker builds. I'm including both.
requirements.txt:
appdirs==1.4.3
asgiref==3.2.7
astroid==2.3.3
certifi==2019.11.28
distlib==0.3.0
Django==3.0.5
django-environ==0.4.5
django-filter==2.2.0
djangorestframework==3.11.0
filelock==3.0.12
isort==4.3.21
lazy-object-proxy==1.4.3
mccabe==0.6.1
pipenv==2018.11.26
pylint==2.4.4
pylint-django==2.0.15
pylint-plugin-utils==0.6
pytz==2019.3
six==1.14.0
sqlparse==0.3.1
virtualenv==20.0.2
virtualenv-clone==0.5.3
wrapt==1.11.2
Pipfile:
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
appdirs = "==1.4.3"
asgiref = "==3.2.7"
astroid = "==2.3.3"
certifi = "==2019.11.28"
distlib = "==0.3.0"
djangorestframework = "==3.11.0"
filelock = "==3.0.12"
isort = "==4.3.21"
lazy-object-proxy = "==1.4.3"
mccabe = "==0.6.1"
pipenv = "==2018.11.26"
pylint = "==2.4.4"
pytz = "==2019.3"
six = "==1.14.0"
sqlparse = "==0.3.1"
virtualenv = "==20.0.2"
virtualenv-clone = "==0.5.3"
wrapt = "==1.11.2"
Django = "==3.0.5"
django-environ = "*"
django-filter = "*"
[requires]
python_version = "3.8"
The Dockerfile for django_backend:
# Use an official Python runtime as a parent image
FROM python:latest
# Adding backend directory to make absolute filepaths consistent across services
WORKDIR /usr/src/app/django-backend
# Install Python dependencies
ADD requirements.txt .
RUN pip3 install --upgrade pip -r requirements.txt
# Add the rest of the code
ADD . .
# Make port 8000 available for the app
EXPOSE 8000
# Be sure to use 0.0.0.0 for the host within the Docker container,
# otherwise the browser won't be able to find it
CMD python3 manage.py runserver 0.0.0.0:8000
Frontend
The frontend is React.js with redux and redux-saga. I don't believe it's relevant to this post as everything was working well until I started changing code in the django_backend
Of course ask any questions if I wasn't clear enough, and thanks in advance.
My first post ended up being because of a silly mistake. Answering it myself, because why not.
The pylint error is a big flag. It may just be an IDE configuration issue: if the project runs locally with python manage.py runserver without any errors, then it's possible the filtering was implemented correctly. If all you're getting is a pylint error, try running it anyway. As noted above, it ran with no errors right after removing django-rest-framework-filters.
The code sections I gave seemed like the correct implementation of django-filter as the DRF docs do a decent job of explaining how to use it for basic filtering. They also recommend another package called django-rest-framework-filters as mentioned, but I understand there are issues with django.utils.six being removed from Django >3. One of that thread's comments includes a fix via installing the six package that allegedly works with Django 3.0.4, so I would explore that if instance interrelationships are important for filtering.
Indeed, I had implemented it correctly. My guess is that I missed it because of some other Error overlap along the process, and the pylint error threw me off the scent. I simply missed that it was in fact running; I just didn't test its functionality. I also wanted to start using Stackoverflow as a resource for development, but going through the process brought me to the answer. Funny.
Reiterating correct implementation procedure:
pipenv install django-filter
Add to django_backend.settings:
INSTALLED_APPS = [
    # confusing pluralization: the package is django-filter, the app is django_filters
    'django_filters',
    ...
]

REST_FRAMEWORK = {
    'DEFAULT_FILTER_BACKENDS': [
        'django_filters.rest_framework.DjangoFilterBackend',
    ],
    ...
}
and refactor the viewsets:
from django_filters.rest_framework import DjangoFilterBackend  # added

# Todo Card viewset now filterable via filterset_fields
class CardViewSet(viewsets.ModelViewSet):
    queryset = Card.objects.all()
    serializer_class = CardSerializer
    filter_backends = [DjangoFilterBackend]  # added
    filterset_fields = ['stage', 'title', 'id']  # added
Now, after python manage.py runserver, go to the right URL (http://localhost:8000/api/cards/?stage=1) and you should see only cards in stage 1. The same URL structure will hopefully also be callable from the frontend, but the Django REST framework UI already returns a filtered list. Success.
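If you want a quick check outside the browsable UI, DRF's test client can hit the same URL; the assertions below assume the CardSerializer exposes stage as a plain field and that pagination is off:
from rest_framework.test import APIClient

client = APIClient()
response = client.get('/api/cards/', {'stage': 1})
assert response.status_code == 200
# with DjangoFilterBackend wired up, only stage-1 cards come back
assert all(card['stage'] == 1 for card in response.json())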
The pylint error was easily solved by going to the settings.json file of my IDE (which happens to be VSCode, though I'm considering alternatives) and adjusting as such:
{
    "python.pythonPath": "/user/local/bin", // added
    ...
}
The import error disappeared.

Access python script from one project to another

Main folder
|-project1
|-project2
I have the above structure for django projects.
When I am in project1, in a script I used os.chdir() to switch to project2.
I want to access project2's settings.py and fetch some attributes. Is it possible?
# You need to point to the directory (e.g. project2)
import os
os.chdir('project2')
cwd = os.getcwd()
# print the current directory
print("Current directory:", cwd)

# To access the Django settings attributes
import django
from django.conf import settings
You can use all of the above code from one of your Python files, e.g. something.py.
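A sketch that points django.conf.settings at project2 explicitly, which is what actually makes the attributes readable; the path and the project2.settings module name are assumptions about your layout:
import os
import sys

import django
from django.conf import settings

# assumed layout: "Main folder"/project2/settings.py importable as project2.settings
sys.path.insert(0, '/path/to/Main_folder')
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project2.settings')
django.setup()

# settings now exposes project2's attributes
print(settings.INSTALLED_APPS)
print(settings.DATABASES['default']['NAME'])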

Override default Django translations

I have a template with this:
{% trans "Log out" %}
This is translated automatically by Django to Spanish as Terminar sesión. However I would like to translate it as Cerrar sesión.
I have tried to add this literal to the .po file, however I get an error saying this literal is duplicated when I compile the messages.
Is there a way to change/override default Django translations?
Thanks.
This is what worked for me:
create a file in your app folder which will hold django messages for which translations need to be overridden, e.g. django_standard_messages.py
in the django lib folder or in the django .po files, find the message (string) that needs to be overridden, e.g. django/forms/fields.py has the message _(u"This field is required.") which we want to translate to German differently
in django_standard_messages.py add all such messages like this:
# coding: utf-8
_ = lambda s: s

django_standard_messages_to_override = [
    _("This field is required."),
    ...
]
Translate the file (makemessages, compilemessages): makemessages will add the listed Django standard messages to your application's .po file; find them and translate them, then run compilemessages to update the .mo file.
Try it out.
The logic behind it (I think ;)): when the ugettext function searches for the translation of a message (string), there are several .po/.mo files that need to be searched through, and the first match is used. So, if our local app's .po/.mo comes first in that order, our translations will override all others (e.g. the Django defaults).
Alternative
When you need to translate all or most of the Django default messages, the other possibility (which I didn't try) is to copy the default Django .po file into our locale folder or some other special folder, fix the translations, and register the folder (if new) in the LOCALE_PATHS Django setting as the first entry in the list.
The logic behind it is very similar to the previous section.
Based on Robert Lujo's answer, his alternative totally works, and IMO it is simpler (keep the overridden locales in a special .po file only). Here are the steps:
Add an extra path to the LOCALE_PATHS Django settings.
LOCALE_PATHS = (
    # the default one, where the makemessages command will generate the files
    os.path.join(BASE_DIR, 'myproject', 'locale'),

    # our new, extended locale dir
    os.path.join(BASE_DIR, 'myproject', 'locale_extra'),
)
find the original Django (or 3rd party) string to be translated
ex.: "recent actions" for the Django admin 'recent actions' block
Add the new .po file "myproject/locale_extra/en/LC_MESSAGES/django.po" with the alternative translation :
msgid "Recent actions"
msgstr "Last actions"
Compile your messages as usual
The easiest way is to copy the .po file found in the django.contrib.admin locale folder, change the translations, and re-compile it (you can use POEdit for doing so).
You could also override the django.contrib.admin templates by putting them in your project's templates folder (for example: yourproject/templates/admin/change_form.html) and then running makemessages from the project root (although this is no longer supported as of Django 1.4 alpha, if I'm correct).
edit: Robert Lujo's answer is the clean method
This is another solution we deployed. It involved monkey patching the _add_installed_apps_translations method of the DjangoTranslation class to prioritize the translations of the project apps over the translations of the Django apps.
# patches.py
from __future__ import absolute_import, unicode_literals

import os

from django.apps import apps
from django.core.exceptions import AppRegistryNotReady
from django.utils.translation.trans_real import DjangoTranslation


def patchDjangoTranslation():
    """
    Patch Django to prioritize the project's app translations over
    its own. Fixes GitLab issue #734 for Django 1.11.
    Might need to be updated for future Django versions.
    """
    def _add_installed_apps_translations_new(self):
        """Merges translations from each installed app."""
        try:
            # Django apps
            app_configs = [
                app for app in apps.get_app_configs() if app.name.startswith('django.')
            ]
            # Non Django apps
            app_configs = [
                app for app in apps.get_app_configs() if not app.name.startswith('django.')
            ]
            app_configs = reversed(app_configs)
        except AppRegistryNotReady:
            raise AppRegistryNotReady(
                "The translation infrastructure cannot be initialized before the "
                "apps registry is ready. Check that you don't make non-lazy "
                "gettext calls at import time.")
        for app_config in app_configs:
            localedir = os.path.join(app_config.path, 'locale')
            if os.path.exists(localedir):
                translation = self._new_gnu_trans(localedir)
                self.merge(translation)

    DjangoTranslation._add_installed_apps_translations = _add_installed_apps_translations_new
Then in the .ready() method of your main app, call patchDjangoTranslation:
from .patches import patchDjangoTranslation


class CommonApp(MayanAppConfig):
    app_namespace = 'common'
    app_url = ''
    has_rest_api = True
    has_tests = True
    name = 'mayan.apps.common'
    verbose_name = _('Common')

    def ready(self):
        super(CommonApp, self).ready()
        patchDjangoTranslation()  # Apply patch
The main change is in these lines:
# Django apps
app_configs = [
    app for app in apps.get_app_configs() if app.name.startswith('django.')
]
# Non Django apps
app_configs = [
    app for app in apps.get_app_configs() if not app.name.startswith('django.')
]
app_configs = reversed(app_configs)
The original is:
app_configs = reversed(list(apps.get_app_configs()))
Instead of interpreting the translations of the apps in the order they appear in the INSTALLED_APPS setting, this block outputs the list of apps placing the project apps before the Django apps. Since this only happens when determining the translation to use, it doesn't affect any other part of the code and no other changes are necessary.
It works on Django version 1.11 up to 2.2.

Detect django testing mode

I'm writing a reusable django app and I need to ensure that its models are only sync'ed when the app is in test mode. I've tried to use a custom DjangoTestRunner, but I found no examples of how to do that (the documentation only shows how to define a custom test runner).
So, does anybody have an idea of how to do it?
EDIT
Here's how I'm doing it:
#in settings.py
import sys
TEST = 'test' in sys.argv
Hope it helps.
I think the answer provided here https://stackoverflow.com/a/7651002/465673 is a much cleaner way of doing it:
Put this in your settings.py:
import sys
TESTING = sys.argv[1:2] == ['test']
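Either way, the flag can then be used further down in settings.py; the password-hasher tweak below is just a hypothetical example of the kind of switch you might make:
if TESTING:
    # use a fast, insecure hasher so tests that create users run quickly
    PASSWORD_HASHERS = ['django.contrib.auth.hashers.MD5PasswordHasher']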
The selected answer is a massive hack. :)
A less-massive hack would be to create your own TestSuiteRunner subclass and change a setting or do whatever else you need to for the rest of your application. You specify the test runner in your settings:
TEST_RUNNER = 'your.project.MyTestSuiteRunner'
In general, you don't want to do this, but it works if you absolutely need it.
from django.conf import settings
from django.test.simple import DjangoTestSuiteRunner

class MyTestSuiteRunner(DjangoTestSuiteRunner):
    def __init__(self, *args, **kwargs):
        settings.IM_IN_TEST_MODE = True
        super(MyTestSuiteRunner, self).__init__(*args, **kwargs)
NOTE: As of Django 1.8, DjangoTestSuiteRunner has been deprecated.
You should use DiscoverRunner instead:
from django.conf import settings
from django.test.runner import DiscoverRunner

class MyTestSuiteRunner(DiscoverRunner):
    def __init__(self, *args, **kwargs):
        settings.IM_IN_TEST_MODE = True
        super(MyTestSuiteRunner, self).__init__(*args, **kwargs)
Not quite sure about your use case, but one way I've seen to detect when the test suite is running is to check whether django.core.mail has an outbox attribute, such as:
from django.core import mail

if hasattr(mail, 'outbox'):
    # We are in test mode!
    pass
else:
    # Not in test mode...
    pass
This attribute is added by the Django test runner in setup_test_environment and removed in teardown_test_environment. You can check the source here: https://code.djangoproject.com/browser/django/trunk/django/test/utils.py
Edit: If you want models defined for testing only then you should check out Django ticket #7835 in particular comment #24 part of which is given below:
Apparently you can simply define models directly in your tests.py.
Syncdb never imports tests.py, so those models won't get synced to the
normal db, but they will get synced to the test database, and can be
used in tests.
I'm using settings.py overrides. I have a global settings.py, which contains most stuff, and then I have overrides for it. Each settings file starts with:
from myproject.settings import *
and then goes on to override some of the settings.
prod_settings.py - Production settings (e.g. overrides DEBUG=False)
dev_settings.py - Development settings (e.g. more logging)
test_settings.py
And then I can define UNIT_TESTS=False in the base settings.py, and override it to UNIT_TESTS=True in test_settings.py.
Then whenever I run a command, I need to decide which settings to run against (e.g. DJANGO_SETTINGS_MODULE=myproject.test_settings ./manage.py test). I like that clarity.
Well, you can just simply use environment variables in this way:
export MYAPP_TEST=1 && python manage.py test
then in your settings.py file:
import os

TEST = os.environ.get('MYAPP_TEST')

if TEST:
    # Do something
    pass
Although there are lots of good answers on this page, I think there is another way to check whether your project is in test mode (in case you can't use sys.argv[1:2] == ["test"]).
As you may know, the database name changes to something like "test_*" (the default database name is prefixed with test) when you are in test mode (you can simply print it out to find your database name while running tests). Since I used pytest in one of my projects, I couldn't use
sys.argv[1:2] == ["test"]
because this argument wasn't there. So I simply used this as my shortcut to check whether I'm in the test environment (your database name is prefixed with test; if not, just change test to the prefix you actually use):
1) Anywhere other than the settings module
from django.conf import settings
TESTING_MODE = "test" in settings.DATABASES["default"]["NAME"]
2) Inside the settings module
TESTING_MODE = "test" in DATABASES["default"]["NAME"]
or
TESTING_MODE = DATABASES["default"]["NAME"].startswith("test") # for more strict checks
And if this solution is doable, you don't even need to import sys for checking this mode inside your settings.py module.
I've been using Django class based settings. I use the 'switcher' from the package and load a different config/class for testing=True:
switcher.register(TestingSettings, testing=True)
In my configuration, I have a BaseSettings, ProductionSettings, DevelopmentSettings, TestingSettings, etc. They subclass off of each other as needed. In BaseSettings I have IS_TESTING=False, and then in TestingSettings I set it to True.
It works well if you keep your class inheritance clean. But I find it works better than the import * method Django developers usually use.

How to Unit test with different settings in Django?

Is there any simple mechanism for overriding Django settings for a unit test? I have a manager on one of my models that returns a specific number of the latest objects. The number of objects it returns is defined by a NUM_LATEST setting.
This has the potential to make my tests fail if someone were to change the setting. How can I override the settings on setUp() and subsequently restore them on tearDown()? If that isn't possible, is there some way I can monkey patch the method or mock the settings?
EDIT: Here is my manager code:
class LatestManager(models.Manager):
    """
    Returns a specific number of the most recent public Articles as defined by
    the NEWS_LATEST_MAX setting.
    """
    def get_query_set(self):
        num_latest = getattr(settings, 'NEWS_NUM_LATEST', 10)
        return super(LatestManager, self).get_query_set().filter(is_public=True)[:num_latest]
The manager uses settings.NEWS_LATEST_MAX to slice the queryset. The getattr() is simply used to provide a default should the setting not exist.
EDIT: This answer applies if you want to change settings for a small number of specific tests.
Since Django 1.4, there are ways to override settings during tests:
https://docs.djangoproject.com/en/stable/topics/testing/tools/#overriding-settings
TestCase will have a self.settings context manager, and there will also be an @override_settings decorator that can be applied to either a test method or a whole TestCase subclass.
These features did not exist yet in Django 1.3.
If you want to change settings for all your tests, you'll want to create a separate settings file for test, which can load and override settings from your main settings file. There are several good approaches to this in the other answers; I have seen successful variations on both hspander's and dmitrii's approaches.
You can do anything you like to the UnitTest subclass, including setting and reading instance properties:
import unittest

from django.conf import settings

class MyTest(unittest.TestCase):
    def setUp(self):
        self.old_setting = settings.NUM_LATEST
        settings.NUM_LATEST = 5  # value tested against in the TestCase

    def tearDown(self):
        settings.NUM_LATEST = self.old_setting
Since the django test cases run single-threaded, however, I'm curious about what else may be modifying the NUM_LATEST value? If that "something else" is triggered by your test routine, then I'm not sure any amount of monkey patching will save the test without invalidating the veracity of the tests itself.
You can pass --settings option when running tests
python manage.py test --settings=mysite.settings_local
Although overriding the settings configuration at runtime might help, in my opinion you should create a separate file for testing. This saves a lot of configuration for testing and ensures that you never end up doing something irreversible (like cleaning the staging database).
Say your testing file exists in 'my_project/test_settings.py', add
settings = 'my_project.test_settings' if 'test' in sys.argv else 'my_project.settings'
in your manage.py. This will ensure that when you run python manage.py test you use test_settings only. If you are using some other testing client like pytest, you could as easily add this to pytest.ini
Update: the solution below is only needed on Django 1.3.x and earlier. For >1.4 see slinkp's answer.
If you change settings frequently in your tests and use Python ≥2.5, this is also handy:
from contextlib import contextmanager

class SettingDoesNotExist:
    pass

@contextmanager
def patch_settings(**kwargs):
    from django.conf import settings
    old_settings = []
    for key, new_value in kwargs.items():
        old_value = getattr(settings, key, SettingDoesNotExist)
        old_settings.append((key, old_value))
        setattr(settings, key, new_value)
    yield
    for key, old_value in old_settings:
        if old_value is SettingDoesNotExist:
            delattr(settings, key)
        else:
            setattr(settings, key, old_value)
Then you can do:
with patch_settings(MY_SETTING='my value', OTHER_SETTING='other value'):
    do_my_tests()
You can override setting even for a single test function.
from django.test import TestCase, override_settings

class SomeTestCase(TestCase):
    @override_settings(SOME_SETTING="some_value")
    def test_some_function(self):
        ...
Or you can override the setting for every function in the class:
@override_settings(SOME_SETTING="some_value")
class SomeTestCase(TestCase):
    def test_some_function(self):
        ...
@override_settings is great if you don't have many differences between your production and testing environment configurations.
Otherwise you'd better just have different settings files. In that case your project will look like this:
your_project
    your_app
        ...
    settings
        __init__.py
        base.py
        dev.py
        test.py
        production.py
    manage.py
So you need to keep most of your settings in base.py, and then in the other files you import everything from there and override some options. Here's what your test.py file will look like:
from .base import *

DEBUG = False

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'app_db_test'
    }
}

PASSWORD_HASHERS = (
    'django.contrib.auth.hashers.MD5PasswordHasher',
)

LOGGING = {}
And then you either need to specify the --settings option as in @MicroPyramid's answer, or specify the DJANGO_SETTINGS_MODULE environment variable, and then you can run your tests:
export DJANGO_SETTINGS_MODULE=settings.test
python manage.py test
For pytest users.
The biggest issue is:
override_settings doesn't work with pytest.
Subclassing Django's TestCase will make it work but then you can't use pytest fixtures.
The solution is to use the settings fixture documented here.
Example
def test_with_specific_settings(settings):
    settings.DEBUG = False
    settings.MIDDLEWARE = []
    ...
And in case you need to update multiple fields
def override_settings(settings, kwargs):
    for k, v in kwargs.items():
        setattr(settings, k, v)

new_settings = dict(
    DEBUG=True,
    INSTALLED_APPS=[],
)

def test_with_specific_settings(settings):
    override_settings(settings, new_settings)
I created a new settings_test.py file which imports everything from the settings.py file and modifies whatever is different for testing purposes.
In my case I wanted to use a different cloud storage bucket when testing.
settings_test.py:
from project1.settings import *
import os
CLOUD_STORAGE_BUCKET = 'bucket_name_for_testing'
manage.py:
import os
import sys


def main():
    # use separate settings for tests
    if 'test' in sys.argv:
        print('using settings_test.py')
        os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project1.settings_test')
    else:
        os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project1.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)
Found this while trying to fix some doctests... For completeness I want to mention that if you're going to modify the settings when using doctests, you should do it before importing anything else...
>>> from django.conf import settings
>>> settings.SOME_SETTING = 20
>>> # Your other imports
>>> from django.core.paginator import Paginator
>>> # etc
I'm using pytest.
I managed to solve this the following way:
import django
import app.settings
import modules.that.use.setting

# do some stuff with the default setting
app.settings.VALUE = "some value"
django.setup()

import importlib
importlib.reload(app.settings)
importlib.reload(modules.that.use.setting)

# do some stuff with the setting's new value
You can override settings in test in this way:
from django.test import TestCase, override_settings

test_settings = override_settings(
    DEFAULT_FILE_STORAGE='django.core.files.storage.FileSystemStorage',
    PASSWORD_HASHERS=(
        'django.contrib.auth.hashers.UnsaltedMD5PasswordHasher',
    )
)

@test_settings
class SomeTestCase(TestCase):
    """Your test cases in this class"""
And if you need these same settings in another file you can just directly import test_settings.
If you have multiple test files placed in a subdirectory (a Python package), you can override settings for all these files based on the presence of the 'test' string in sys.argv:
app
    tests
        __init__.py
        test_forms.py
        test_models.py
__init__.py:
import sys
from project import settings

if 'test' in sys.argv:
    NEW_SETTINGS = {
        'setting_name': value,
        'another_setting_name': another_value
    }
    settings.__dict__.update(NEW_SETTINGS)
Not the best approach. Used it to change Celery broker from Redis to Memory.
One setting for all tests in a testCase
class TestSomething(TestCase):
    def setUp(self):
        # enable the override for every test in this TestCase
        overrides = self.settings(SETTING_BAR={'ALLOW_FOO': True})
        overrides.enable()
        self.addCleanup(overrides.disable)
override one setting in the testCase
from django.test import override_settings

@override_settings(SETTING_BAR={'ALLOW_FOO': False})
def i_need_other_setting(self):
    ...
Important
Even though you are overriding these settings, the override will not apply to anything your server already initialized with those settings at startup, because it is already initialized; for that you will need to start Django with a different settings module.
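To illustrate that caveat with a hypothetical module that reads a setting at import time:
# myapp/limits.py (hypothetical)
from django.conf import settings

# evaluated once, when the module is first imported; an override applied
# later by override_settings or self.settings() will not change this value
MAX_ITEMS = settings.SOME_SETTING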