mock or override values in apps.py (Django) for testing

Typically I'd set a value in settings.py and then use @override_settings in tests to override it.
Is there a similar mechanism for overriding values defined in apps.py when using an AppConfig?
For instance:
# apps.py
class MyConfig(AppConfig):
    SOME_VAR = "SOME_VAR"

# some .py file
from django.apps import apps
apps.get_app_config('some_app').SOME_VAR

# some test.py
# How to change the value that apps.get_app_config('some_app').SOME_VAR returns?
It's strange this isn't covered in the docs, since it seems like a common use case; or maybe I'm missing something?
Thanks,
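For what it's worth, a minimal sketch of one way to do this, assuming the 'some_app' label from the snippet above and using unittest.mock to patch the attribute on the AppConfig instance (apps.get_app_config() returns the same instance everywhere):
from unittest import mock

from django.apps import apps
from django.test import TestCase

class SomeAppConfigTests(TestCase):
    def test_with_overridden_some_var(self):
        config = apps.get_app_config('some_app')
        # patch the attribute for the duration of the with block only
        with mock.patch.object(config, 'SOME_VAR', 'OVERRIDDEN'):
            self.assertEqual(apps.get_app_config('some_app').SOME_VAR, 'OVERRIDDEN')
        # outside the block the original value is restored
        self.assertEqual(apps.get_app_config('some_app').SOME_VAR, 'SOME_VAR')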

Related

Django app_label is wrong on models

I have a Django DRF application. Here is my project structure.
myproject/
    myproject/
    apps/
        myApp1/
            __init__.py
            apps.py
            admin.py
            models.py
            urls.py
            views.py
        myApp2/
            __init__.py
    static/
    manage.py
and my INSTALLED_APPS contains:
INSTALLED_APPS = [
    'apps.myApp1.apps.AppOneConfig',
    'apps.myApp2.apps.AppTwoConfig',
]
When I open ./manage.py shell_plus and run:
SomeModel._meta.label
I see myApp1 or myApp2 instead of apps.myApp1 and apps.myApp2. Even in migrations the models are referred to as myApp1.Model or myApp2.Model, not as apps.myApp1.Model or apps.myApp2.Model.
I have also specified the AppConfig:
from django.apps import AppConfig

class AppOneConfig(AppConfig):
    name = 'apps.myApp1'
    verbose_name = 'My App One'
Is that expected? I am pretty new to Django. Can anyone point out my mistake?
Is that expected?
Yes, that is expected. By default, the app label uses the last part of the "python path". You can change it by specifying this in the AppConfig [Django-doc]. It is the .label attribute [Django-doc] of this AppConfig that determines the app label, and:
(…) It defaults to the last component of name. It should be a valid Python identifier. (…)
The .name attribute [Django-doc], on the other hand, is:
Full Python path to the application, e.g. 'django.contrib.admin'.
You can set this by first registering the AppConfig in the __init__.py file of your myApp1 directory:
# apps/myApp1/__init__.py
default_app_config = 'apps.myApp1.apps.App1Config'
then you make a file apps.py in the myApp1 directory, and write:
# apps/myApp1/apps.py
from django.apps import AppConfig

class App1Config(AppConfig):
    label = 'apps_myapp1'
Note: Python package directories normally use snake_case, so it might be better to rename your myApp1 to myapp1 or my_app1.
EDIT: You thus need to set the label attribute of your AppOneConfig to:
class AppOneConfig(AppConfig):
    name = 'apps.myApp1'
    label = 'apps_myapp1'
    verbose_name = 'My App One'
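To illustrate the effect (a sketch, assuming a model named SomeModel in myApp1), the new label then shows up in _meta and in apps.get_model() lookups:
# sketch: after setting label = 'apps_myapp1' on AppOneConfig
from django.apps import apps

SomeModel = apps.get_model('apps_myapp1', 'SomeModel')  # look up by the new label
print(SomeModel._meta.app_label)  # 'apps_myapp1'
print(SomeModel._meta.label)      # 'apps_myapp1.SomeModel'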

How is Django apps.py supposed to be used?

I'm using django 1.10.5
It seems that the apps.py file in app1 is not imported unless I explicitly set default_app_config = 'app1.apps.App1Config' in __init__.py for that module.
Yet, in the docs I'm reading "New applications should avoid default_app_config. Instead they should require the dotted path to the appropriate AppConfig subclass to be configured explicitly in INSTALLED_APPS."
I'm reading that as including the module in INSTALLED_APPS like
INSTALLED_APPS = (
    '...',
    'app1',
)
And I do have that.
Maybe I'm confused by the language "dotted path to the appropriate AppConfig subclass" and maybe there's more to it than listing the main module?
My specific use case is that I want handlers.py to be imported when the application loads, because it contains some signal receivers that need to be listening.
To do that, I followed the advice in the docs which says "In practice, signal handlers are usually defined in a signals submodule of the application they relate to. Signal receivers are connected in the ready() method of your application configuration class. If you’re using the receiver() decorator, simply import the signals submodule inside ready()."
# apps.py
from django.apps import AppConfig

class App1Config(AppConfig):
    name = 'app1'

    def ready(self):
        import app1.handlers

# handlers.py
from django.dispatch import receiver
from django.db.models.signals import post_save
from app1.models import App1

@receiver(post_save, sender=App1)
def say_you_did_something(sender, instance, **kwargs):
    print("Action has been taken.")
But that does absolutely nothing...
Until I also add
# __init__.py
default_app_config = 'app1.apps.App1Config'
Which is supposed to be avoided except for < 1.7?
So to restate the question in practical terms, what is the recommended way to make the project aware of the handlers.py file?
You've misunderstood the instruction. As it says, you need to include the dotted path to the AppConfig class itself in INSTALLED_APPS, not the app.
INSTALLED_APPS = (
    '...',
    'app1.apps.App1Config',
)
Replace
INSTALLED_APPS = (
    '...',
    'app1',
)
with
INSTALLED_APPS = (
    '...',
    'app1.apps.App1Config',
)
There is no need to add default_app_config in __init__.py.
The Django 3.0 applications documentation describes how to include the dotted path. Below is an excerpt from https://docs.djangoproject.com/en/3.0/ref/applications/#django.apps.AppConfig.ready:
For application authors
If you’re creating a pluggable app called “Rock ’n’ roll”, here’s how you would provide a proper name for the admin:
# rock_n_roll/apps.py
from django.apps import AppConfig

class RockNRollConfig(AppConfig):
    name = 'rock_n_roll'
    verbose_name = "Rock ’n’ roll"
You can make your application load this AppConfig subclass by default as follows:
# rock_n_roll/__init__.py
default_app_config = 'rock_n_roll.apps.RockNRollConfig'
That will cause RockNRollConfig to be used when INSTALLED_APPS contains 'rock_n_roll'. This allows you to make use of AppConfig features without requiring your users to update their INSTALLED_APPS setting. Besides this use case, it’s best to avoid using default_app_config and instead specify the app config class in INSTALLED_APPS as described next.
Of course, you can also tell your users to put 'rock_n_roll.apps.RockNRollConfig' in their INSTALLED_APPS setting. You can even provide several different AppConfig subclasses with different behaviors and allow your users to choose one via their INSTALLED_APPS setting.
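As a sketch of that last point (the second config and its behaviour are made up, not from the docs):
# rock_n_roll/apps.py
from django.apps import AppConfig

class RockNRollConfig(AppConfig):
    name = 'rock_n_roll'
    verbose_name = "Rock ’n’ roll"

class QuietRockNRollConfig(RockNRollConfig):
    # hypothetical variant with different behaviour in ready()
    verbose_name = "Rock ’n’ roll (quiet)"

    def ready(self):
        pass  # e.g. skip connecting the noisy signal handlers

# users pick one in their settings:
# INSTALLED_APPS = [..., 'rock_n_roll.apps.QuietRockNRollConfig']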

Detect django testing mode

I'm writing a reusable django app and I need to ensure that its models are only sync'ed when the app is in test mode. I've tried to use a custom DjangoTestRunner, but I found no examples of how to do that (the documentation only shows how to define a custom test runner).
So, does anybody have an idea of how to do it?
EDIT
Here's how I'm doing it:
#in settings.py
import sys
TEST = 'test' in sys.argv
Hope it helps.
I think the answer provided here https://stackoverflow.com/a/7651002/465673 is a much cleaner way of doing it:
Put this in your settings.py:
import sys
TESTING = sys.argv[1:2] == ['test']
The selected answer is a massive hack. :)
A less-massive hack would be to create your own TestSuiteRunner subclass and change a setting or do whatever else you need to for the rest of your application. You specify the test runner in your settings:
TEST_RUNNER = 'your.project.MyTestSuiteRunner'
In general, you don't want to do this, but it works if you absolutely need it.
from django.conf import settings
from django.test.simple import DjangoTestSuiteRunner

class MyTestSuiteRunner(DjangoTestSuiteRunner):
    def __init__(self, *args, **kwargs):
        settings.IM_IN_TEST_MODE = True
        super(MyTestSuiteRunner, self).__init__(*args, **kwargs)
NOTE: As of Django 1.8, DjangoTestSuiteRunner has been deprecated.
You should use DiscoverRunner instead:
from django.conf import settings
from django.test.runner import DiscoverRunner

class MyTestSuiteRunner(DiscoverRunner):
    def __init__(self, *args, **kwargs):
        settings.IM_IN_TEST_MODE = True
        super(MyTestSuiteRunner, self).__init__(*args, **kwargs)
Not quite sure about your use case, but one way I've seen to detect when the test suite is running is to check whether django.core.mail has an outbox attribute:
from django.core import mail

if hasattr(mail, 'outbox'):
    # We are in test mode!
    pass
else:
    # Not in test mode...
    pass
This attribute is added by the Django test runner in setup_test_environment and removed in teardown_test_environment. You can check the source here: https://code.djangoproject.com/browser/django/trunk/django/test/utils.py
Edit: If you want models defined for testing only, then you should check out Django ticket #7835, in particular comment #24, part of which is given below:
Apparently you can simply define models directly in your tests.py.
Syncdb never imports tests.py, so those models won't get synced to the
normal db, but they will get synced to the test database, and can be
used in tests.
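A minimal sketch of that idea (app and model names are made up; it relies on the syncdb-style table creation described in the quote, i.e. it works for apps without migrations):
# myapp/tests.py
from django.db import models
from django.test import TestCase

class TestOnlyThing(models.Model):
    # never imported outside the test run, so no table in the normal db;
    # the test database setup does create it (for apps without migrations)
    name = models.CharField(max_length=50)

    class Meta:
        app_label = 'myapp'  # hypothetical app

class TestOnlyThingTests(TestCase):
    def test_create(self):
        obj = TestOnlyThing.objects.create(name='x')
        self.assertEqual(obj.name, 'x')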
I'm using settings.py overrides. I have a global settings.py, which contains most of the configuration, and then I have overrides for it. Each override file starts with:
from myproject.settings import *
and then goes on to override some of the settings.
prod_settings.py - Production settings (e.g. overrides DEBUG=False)
dev_settings.py - Development settings (e.g. more logging)
test_settings.py
And then I can define UNIT_TESTS=False in the base settings.py, and override it to UNIT_TESTS=True in test_settings.py.
Then whenever I run a command, I need to decide which settings to run against (e.g. DJANGO_SETTINGS_MODULE=myproject.test_settings ./manage.py test). I like that clarity.
Well, you can just simply use environment variables in this way:
export MYAPP_TEST=1 && python manage.py test
then in your settings.py file:
import os

TEST = os.environ.get('MYAPP_TEST')
if TEST:
    # Do something
    pass
Although there are lots of good answers on this page, I think there is also another way to check whether your project is in test mode, for cases where you can't use sys.argv[1:2] == ["test"].
As you may know, the database NAME changes to something like "test_*" when you are in test mode (the default database name gets a test prefix); you can simply print it out while running tests to confirm. Since I used pytest in one of my projects, I couldn't use sys.argv[1:2] == ["test"] because that argument wasn't there. So I simply check the database name as a shortcut for detecting the test environment (if your prefix differs, adjust the check accordingly):
1) Anywhere other than the settings module:
from django.conf import settings

TESTING_MODE = "test" in settings.DATABASES["default"]["NAME"]
2) Inside the settings module:
TESTING_MODE = "test" in DATABASES["default"]["NAME"]
or
TESTING_MODE = DATABASES["default"]["NAME"].startswith("test")  # for a stricter check
And if this solution is doable, you don't even need to import sys for checking this mode inside your settings.py module.
I've been using Django class-based settings. I use the 'switcher' from that package and load a different config class for testing=True:
switcher.register(TestingSettings, testing=True)
In my configuration, I have a BaseSettings, ProductionSettings, DevelopmentSettings, TestingSettings, etc. They subclass off of each other as needed. In BaseSettings I have IS_TESTING=False, and then in TestingSettings I set it to True.
It works well if you keep your class inheritance clean. But I find it works better than the import * method Django developers usually use.

django - how to detect test environment (check / determine if tests are being run)

How can I detect whether a view is being called in a test environment (e.g., from manage.py test)?
# pseudo-code
def my_view(request):
    if not request.is_secure() and not TEST_ENVIRONMENT:
        return HttpResponseForbidden()
Put this in your settings.py:
import sys
TESTING = len(sys.argv) > 1 and sys.argv[1] == 'test'
This tests whether the second command-line argument (after ./manage.py) was test. Then you can access this variable from other modules, like so:
from django.conf import settings

if settings.TESTING:
    ...
There are good reasons to do this: suppose you're accessing some backend service, other than Django's models and DB connections. Then you might need to know when to call the production service vs. the test service.
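For example (a sketch; the service classes here are made up), a module could pick its backend based on that flag:
# sketch: switching backends on settings.TESTING
from django.conf import settings

class RealSearchClient:  # hypothetical client for the production service
    pass

class FakeSearchClient:  # hypothetical in-memory stub used in tests
    pass

def get_search_client():
    # call the fake service in tests, the real one otherwise
    if getattr(settings, 'TESTING', False):
        return FakeSearchClient()
    return RealSearchClient()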
Create your own TestSuiteRunner subclass and change a setting or do whatever else you need to for the rest of your application. You specify the test runner in your settings:
TEST_RUNNER = 'your.project.MyTestSuiteRunner'
In general, you don't want to do this, but it works if you absolutely need it.
from django.conf import settings
from django.test.simple import DjangoTestSuiteRunner

class MyTestSuiteRunner(DjangoTestSuiteRunner):
    def __init__(self, *args, **kwargs):
        settings.IM_IN_TEST_MODE = True
        super(MyTestSuiteRunner, self).__init__(*args, **kwargs)
Just look at request.META['SERVER_NAME']:
def my_view(request):
    if request.META['SERVER_NAME'] == "testserver":
        print("This is test environment!")
There's also a way to temporarily overwrite settings in a unit test in Django. This might be an easier/cleaner solution for certain cases.
You can do this inside a test:
with self.settings(MY_SETTING='my_value'):
    # test code
Or add it as a decorator on the test method:
@override_settings(MY_SETTING='my_value')
def test_my_test(self):
    # test code
You can also set the decorator for the whole test case class:
@override_settings(MY_SETTING='my_value')
class MyTestCase(TestCase):
    # test methods
For more info check the Django docs: https://docs.djangoproject.com/en/1.11/topics/testing/tools/#django.test.override_settings
I think the best approach is to run your tests using their own settings file (i.e. settings/tests.py). That file can look like this (the first line imports settings from a local.py settings file):
from local import *
TEST_MODE = True
Then use duck typing to check whether you are in test mode:
try:
    if settings.TEST_MODE:
        print('foo')
except AttributeError:
    pass
If you have multiple settings files for different environments, all you need to do is create one settings file for testing.
For instance, your setting files are:
your_project/
  |_ settings/
     |_ __init__.py
     |_ base.py     <-- your original settings
     |_ testing.py  <-- for testing only
In your testing.py, add a TESTING flag:
from .base import *
TESTING = True
In your application, you can access settings.TESTING to check if you're in testing environment.
To run tests, use:
python manage.py test --settings your_project.settings.testing
While there's no official way to see whether we're in a test environment, django actually leaves some clues for us.
By default Django’s test runner automatically redirects all Django-sent email to a dummy outbox. This is accomplished by replacing EMAIL_BACKEND in a function called setup_test_environment, which in turn is called by a method of DiscoverRunner. So, we can check whether settings.EMAIL_BACKEND is set to 'django.core.mail.backends.locmem.EmailBackend'; if it is, we're in a test environment.
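A sketch of that check (this infers test mode from a clue Django leaves, it is not an official API):
# sketch: setup_test_environment() swaps EMAIL_BACKEND to the locmem backend
from django.conf import settings

def in_test_environment():
    return settings.EMAIL_BACKEND == 'django.core.mail.backends.locmem.EmailBackend'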
A less hacky solution would be to follow the devs' lead by adding our own setting: subclass DiscoverRunner and override its setup_test_environment method.
Piggybacking off of @Tobia's answer, I think it is better implemented in settings.py like this:
import sys

try:
    TESTING = 'test' == sys.argv[1]
except IndexError:
    TESTING = False
This will prevent it from catching things like ./manage.py loaddata test.json or ./manage.py i_am_not_running_a_test
I wanted to exclude some data migrations from being run in tests, and came up with this solution on a Django 3.2 project:
class Migration(migrations.Migration):

    def apply(self, project_state, schema_editor, collect_sql=False):
        import inspect
        if 'create_test_db' in [i.function for i in inspect.stack()]:
            return project_state
        else:
            return super().apply(project_state, schema_editor, collect_sql=collect_sql)
I haven't seen this suggested elsewhere, and for my purposes it's pretty clean. Of course, it might break if Django changes the name of the create_test_db method (or the return value of the apply method) at some point in time, but modifying this to work should be reasonably simple, since it's likely that some method exists in the stack that doesn't exist during non-test migration runs.

How to Unit test with different settings in Django?

Is there any simple mechanism for overriding Django settings for a unit test? I have a manager on one of my models that returns a specific number of the latest objects. The number of objects it returns is defined by a NUM_LATEST setting.
This has the potential to make my tests fail if someone were to change the setting. How can I override the settings on setUp() and subsequently restore them on tearDown()? If that isn't possible, is there some way I can monkey patch the method or mock the settings?
EDIT: Here is my manager code:
class LatestManager(models.Manager):
    """
    Returns a specific number of the most recent public Articles as defined by
    the NEWS_LATEST_MAX setting.
    """
    def get_query_set(self):
        num_latest = getattr(settings, 'NEWS_NUM_LATEST', 10)
        return super(LatestManager, self).get_query_set().filter(is_public=True)[:num_latest]
The manager uses settings.NEWS_LATEST_MAX to slice the queryset. The getattr() is simply used to provide a default should the setting not exist.
EDIT: This answer applies if you want to change settings for a small number of specific tests.
Since Django 1.4, there are ways to override settings during tests:
https://docs.djangoproject.com/en/stable/topics/testing/tools/#overriding-settings
TestCase will have a self.settings context manager, and there will also be an @override_settings decorator that can be applied to either a test method or a whole TestCase subclass.
These features did not exist yet in Django 1.3.
If you want to change settings for all your tests, you'll want to create a separate settings file for test, which can load and override settings from your main settings file. There are several good approaches to this in the other answers; I have seen successful variations on both hspander's and dmitrii's approaches.
You can do anything you like in the unittest.TestCase subclass, including setting and reading instance properties:
from django.conf import settings

class MyTest(unittest.TestCase):
    def setUp(self):
        self.old_setting = settings.NUM_LATEST
        settings.NUM_LATEST = 5  # value tested against in the TestCase

    def tearDown(self):
        settings.NUM_LATEST = self.old_setting
Since the Django test cases run single-threaded, however, I'm curious about what else may be modifying the NUM_LATEST value. If that "something else" is triggered by your test routine, then I'm not sure any amount of monkey patching will save the test without invalidating the veracity of the test itself.
You can pass the --settings option when running tests:
python manage.py test --settings=mysite.settings_local
Although overriding the settings configuration at runtime might help, in my opinion you should create a separate settings file for testing. This saves a lot of test configuration, and it ensures that you never end up doing something irreversible (like cleaning the staging database).
Say your test settings file is 'my_project/test_settings.py'; add
settings = 'my_project.test_settings' if 'test' in sys.argv else 'my_project.settings'
to your manage.py. This ensures that when you run python manage.py test you use test_settings only. If you are using another test runner like pytest, you could just as easily add this to pytest.ini.
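With pytest-django, for instance, that would be a sketch along these lines in pytest.ini (assuming the same 'my_project.test_settings' module):
# pytest.ini
[pytest]
DJANGO_SETTINGS_MODULE = my_project.test_settings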
Update: the solution below is only needed on Django 1.3.x and earlier. For 1.4 and newer, see slinkp's answer.
If you change settings frequently in your tests and use Python ≥2.5, this is also handy:
from contextlib import contextmanager

class SettingDoesNotExist:
    pass

@contextmanager
def patch_settings(**kwargs):
    from django.conf import settings
    old_settings = []
    for key, new_value in kwargs.items():
        old_value = getattr(settings, key, SettingDoesNotExist)
        old_settings.append((key, old_value))
        setattr(settings, key, new_value)
    yield
    for key, old_value in old_settings:
        if old_value is SettingDoesNotExist:
            delattr(settings, key)
        else:
            setattr(settings, key, old_value)
Then you can do:
with patch_settings(MY_SETTING='my value', OTHER_SETTING='other value'):
    do_my_tests()
You can override a setting even for a single test function:
from django.test import TestCase, override_settings

class SomeTestCase(TestCase):
    @override_settings(SOME_SETTING="some_value")
    def test_some_function(self):
        ...
or you can override the setting for every function in the class:
@override_settings(SOME_SETTING="some_value")
class SomeTestCase(TestCase):
    def test_some_function(self):
        ...
@override_settings is great if you don't have many differences between your production and testing environment configurations.
In other case you'd better just have different settings files. In this case your project will look like this:
your_project/
    your_app/
        ...
    settings/
        __init__.py
        base.py
        dev.py
        test.py
        production.py
    manage.py
So you need to have most of your settings in base.py; the other files import everything from it and override some options. Here's what your test.py file will look like:
from .base import *

DEBUG = False

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'app_db_test',
    }
}

PASSWORD_HASHERS = (
    'django.contrib.auth.hashers.MD5PasswordHasher',
)

LOGGING = {}
And then you either need to specify the --settings option as in @MicroPyramid's answer, or set the DJANGO_SETTINGS_MODULE environment variable; then you can run your tests:
export DJANGO_SETTINGS_MODULE=settings.test
python manage.py test
For pytest users.
The biggest issue is:
override_settings doesn't work with pytest.
Subclassing Django's TestCase will make it work but then you can't use pytest fixtures.
The solution is to use the settings fixture documented here.
Example:
def test_with_specific_settings(settings):
    settings.DEBUG = False
    settings.MIDDLEWARE = []
    ...
And in case you need to update multiple fields:
def override_settings(settings, kwargs):
    for k, v in kwargs.items():
        setattr(settings, k, v)

new_settings = dict(
    DEBUG=True,
    INSTALLED_APPS=[],
)

def test_with_specific_settings(settings):
    override_settings(settings, new_settings)
I created a new settings_test.py file which would import everything from settings.py file and modify whatever is different for testing purpose.
In my case I wanted to use a different cloud storage bucket when testing.
settings_test.py:
from project1.settings import *
import os
CLOUD_STORAGE_BUCKET = 'bucket_name_for_testing'
manage.py:
def main():
    # use a separate settings module for tests
    if 'test' in sys.argv:
        print('using settings_test.py')
        os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project1.settings_test')
    else:
        os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project1.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)
Found this while trying to fix some doctests... For completeness I want to mention that if you're going to modify the settings when using doctests, you should do it before importing anything else...
>>> from django.conf import settings
>>> settings.SOME_SETTING = 20
>>> # Your other imports
>>> from django.core.paginator import Paginator
>>> # etc
I'm using pytest.
I managed to solve this in the following way:
import django
import app.settings
import modules.that.use.setting

# do some stuff with the default setting
app.settings.VALUE = "some value"
django.setup()

import importlib
importlib.reload(app.settings)
importlib.reload(modules.that.use.setting)

# do some stuff with the setting's new value
You can override settings in tests this way:
from django.test import TestCase, override_settings

test_settings = override_settings(
    DEFAULT_FILE_STORAGE='django.core.files.storage.FileSystemStorage',
    PASSWORD_HASHERS=(
        'django.contrib.auth.hashers.UnsaltedMD5PasswordHasher',
    ),
)

@test_settings
class SomeTestCase(TestCase):
    """Your test cases in this class"""
And if you need these same settings in another file you can just directly import test_settings.
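For instance (the importing module path is hypothetical):
# another test module, reusing the same overrides
from django.test import TestCase

from myapp.tests.test_storage import test_settings  # wherever the snippet above lives

@test_settings
class AnotherTestCase(TestCase):
    """More tests that need the same settings overrides"""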
If you have multiple test files placed in a subdirectory (a Python package), you can override settings for all of those files based on the presence of the 'test' string in sys.argv:
app/
    tests/
        __init__.py
        test_forms.py
        test_models.py
tests/__init__.py:
import sys
from project import settings

if 'test' in sys.argv:
    NEW_SETTINGS = {
        'setting_name': value,
        'another_setting_name': another_value,
    }
    settings.__dict__.update(NEW_SETTINGS)
Not the best approach. Used it to change Celery broker from Redis to Memory.
One setting for all tests in a TestCase:
class TestSomething(TestCase):
    def setUp(self):
        # self.settings() returns an override_settings instance; enable it for
        # the duration of each test and restore the original value afterwards
        overridden = self.settings(SETTING_BAR={'ALLOW_FOO': True})
        overridden.enable()
        self.addCleanup(overridden.disable)
override one setting in the TestCase:
from django.test import override_settings

@override_settings(SETTING_BAR={'ALLOW_FOO': False})
def i_need_other_setting(self):
    ...
Important
Even though you are overriding these settings, the override will not apply to anything your server initialized with them at startup, because that code has already run; to change those you need to start Django with a different settings module.