Django executing tests for app not in INSTALLED_APPS - django

Under my Django project there are a few apps, and all of them have unit tests. The one I'm working on right now is supposed to be included only in dev/stage environments, so I enable it with an environment variable.
When this variable is present, the app is added to INSTALLED_APPS and everything works fine. The problem is that Django executes the tests for this app even when it is not in INSTALLED_APPS, and it fails with the following message:
ImportError: Failed to import test module: debug.tests.unit.test_services
...(traceback information)...
RuntimeError: Model class debug.models.Email doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.
When I define app_label in the Meta class of the models in this app, the error is different: it says it can't find a table. I assume that is because the app is not in INSTALLED_APPS, so its migrations are not run.
OperationalError: no such table: debug_email
I'm not sure why Django executes the tests for all apps but not their migrations.
Am I missing something from Django configuration for tests?

https://docs.python.org/3/library/unittest.html#unittest.TestLoader.discover says:
If load_tests exists then discovery does not recurse into the package, load_tests is responsible for loading all tests in the package.
So, in the app-level __init__.py of the app you don't always want run:
from os.path import dirname, abspath

from django.apps import apps


def load_tests(loader, tests, pattern):
    if apps.is_installed("your_dev_app"):
        # Actually load the tests - thanks to @barney-szabolcs
        return loader.discover(start_dir=dirname(abspath(__file__)), pattern=pattern)

You need to return the discovered tests in load_tests.
So, adding to @DaveLawrence's answer, the complete code is:
# your_dev_app/__init__.py
from os.path import dirname, abspath

from django.apps import apps


def load_tests(loader, tests, pattern):
    """Loads tests for your_dev_app if it is installed."""
    if apps.is_installed("your_dev_app"):
        return loader.discover(start_dir=dirname(abspath(__file__)), pattern=pattern)

When you run:
python manage.py test
the command will by default look recursively for all files matching the pattern test*.py in the working directory. It is not affected by INSTALLED_APPS in settings.py.
You can specify a certain app to test:
python manage.py test app_label
or specify a path:
python manage.py test myapp/tests
If you want to exclude some tests you can tag them and use the option --exclude-tag.
Run python manage.py test --help to get information on all options.
The official documentation gives a lot of information on the different ways to run tests.
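For example, tagging a test and then excluding it looks roughly like this (a sketch; the tag name dev-only is illustrative, and test tags require a reasonably recent Django version):
from django.test import TestCase, tag


@tag('dev-only')
class DevOnlyTestCase(TestCase):
    def test_something(self):
        self.assertTrue(True)
Then run everything except the tagged tests with:
python manage.py test --exclude-tag=dev-only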
EDIT:
If you have apps that are required only in the development environment, but not in production, you can split your settings. One possible solution is to move all development settings into a file local_settings.py and exclude it from version control or from the production branch, i.e. don't push it to the production environment.
local_settings.py
DEBUG = True

INSTALLED_APPS += (
    # Django Debug Toolbar, for example, is
    # used only in development
    'debug_toolbar',
    'your_dev_app',
)
settings.py
try:
    from .local_settings import *
except ImportError:
    pass
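Alternatively, sticking with the environment-variable approach from the question, you can toggle the app directly in settings.py (a sketch; the variable name ENABLE_DEBUG_APP and the app name debug are illustrative):
import os

# Only enable the dev-only app when the environment variable is set.
if os.environ.get('ENABLE_DEBUG_APP'):
    INSTALLED_APPS += ('debug',)
Combined with the load_tests trick above, both the app and its tests are then only picked up when the variable is set.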

Related

Django admin.autodiscover fails to discover "primary" admin.py

My models won't show up in the admin panel, and I suspect it's because admin.py isn't being discovered.
Also, all the models work as expected in the application; I can import two.MyModelB and use it in the shell, in my web-app, etc.
root/settings.py:
INSTALLED_APPS = [
    'django.contrib.admin',
    'one',
    'two',
    # etc.
]
root/urls.py:
from django.contrib import admin
print("### Running admin-autodiscover ###")
admin.autodiscover()
Note that I don't usually use admin.autodiscover(), but it was a suggestion I came across when I was searching for solutions to this. Including it (or not) makes no difference either way.
root/admin.py:
from django.contrib import admin
from django.conf import settings
from one.models import MyModelA
from two.models import MyModelB

print("### This should be discovered! ###")

class MyModelAdmin(admin.ModelAdmin):
    readonly_fields = ('id',)

# Register your models here.
admin.site.register(MyModelA, MyModelAdmin)
admin.site.register(MyModelB, MyModelAdmin)
Superuser confirmed (screenshot omitted). Console output:
(env) PS E:\Django> python manage.py runserver
Watching for file changes with StatReloader
Performing system checks...
### Running admin-autodiscover ###
System check identified no issues (0 silenced).
Django version 4.1.5, using settings 'root.settings'
Starting development server at http://127.0.0.1:8000/
The statement from urls.py is output, but not the one from admin.py. I assume that indicates there's some malfunction with django.contrib.admin, but it's far beyond my knowledge.
Curiously, making admin.py files elsewhere ... causes them to be discovered:
root/random/folder/admin.py:
from django.contrib import admin
print("Is this one discovered?")
console:
(env) PS E:\Django> python manage.py runserver
Is this one discovered?
Is this one discovered?
Watching for file changes with StatReloader
Performing system checks...
### Running admin-autodiscover ###
System check identified no issues (0 silenced).
Django version 4.1.5, using settings 'root.settings'
Starting development server at http://127.0.0.1:8000/
So Django finds that admin.py - twice, once automagically and once because I explicitly call admin.autodiscover(). But the admin.py sitting literally right next to settings.py goes ignored.
I can presumably work around this by moving my model declarations into some other random folder's admin.py ... but I would really like to keep it where it is, because my project is large and I don't want to put core settings in obscure locations (if I can help it).
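For reference, admin.autodiscover() only imports an admin submodule from each app listed in INSTALLED_APPS (and since Django 1.7 django.contrib.admin does this automatically at startup); it does not scan the project package that holds settings.py unless that package is itself listed in INSTALLED_APPS. A sketch of the layout it expects, reusing the names from the question (the rest is illustrative):
# one/admin.py  (inside the installed app "one")
from django.contrib import admin

from one.models import MyModelA


class MyModelAdmin(admin.ModelAdmin):
    readonly_fields = ('id',)


admin.site.register(MyModelA, MyModelAdmin)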

No fixture named 'X' found

I'm using Django 1.6 and I use South to handle migrations.
In most of my application I used to have initial_data.json files. I converted them to be loaded by migrations rather than automatically by Django (as recommended in the documentation).
I was using version 0.8.2 of South when I ran into a weird behavior / bug where loading fixtures is done according to the model code and not the state of the migration. I saw that the newest version (0.8.4) has since added some bug fixes related to loaddata, so I upgraded to it.
Now I get the following error on all the migration that load fixtures:
UserWarning: No fixture named 'X' found.
When I use Django's loaddata it works fine. Any ideas on how to solve this?
South and fixtures
South simply patches syncdb to skip the fixture loading for models with migrations, and actually loads them after the final migration for an app has been run.
Make sure your initial_data file is located in the correct place
Loading initial_data does not require you to actually do anything other than place the fixtures in the correct place, as explained in Django's documentation.
To quote the docs:
By default, Django looks in the fixtures directory inside each app for
fixtures. You can set the FIXTURE_DIRS setting to a list of additional
directories where Django should look.
This means that if you have an app called "myapp", you'd create a "fixtures" dir inside it and place the json there, e.g.: myproject/myapp/fixtures.
Django 1.7 and newer
Django 1.7 introduced built-in migrations. These have an interface similar to South; management commands to create migrations makemigrations, run them migrate, and others.
However, initial_data fixtures are no longer auto-loaded on syncdb, unless the app is an existing app without migrations. This is mentioned in the release notes.
The docs now recommend creating a data migration to handle fixture loading. Luckily, this is pretty easy to do; here's how I usually do it:
1. create an empty data migration
$ python manage.py makemigrations --empty myapp
If you had only the initial migration, you end up with these files (note that I renamed migration 0002 for clarity):
myapp/
├── __init__.py
├── fixtures
│   └── my_initial_data.json
└── migrations
    ├── 0001_initial.py
    ├── 0002_load_fixture.py
    └── __init__.py
2. update the code of 0002_load_fixture.py to run loaddata
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations
from django.core.management import call_command


def load_my_initial_data(apps, schema_editor):
    call_command("loaddata", "my_initial_data.json")


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(load_my_initial_data),
    ]
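If you also want this data migration to be reversible (so migrating back to 0001 does not fail on this step), a small optional addition, not part of the original answer, is an explicit no-op reverse function:
def unload_my_initial_data(apps, schema_editor):
    # Intentionally a no-op; deleting the loaded rows here would also work.
    pass


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(load_my_initial_data, unload_my_initial_data),
    ]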
This error also occurs when the filename does not include the '.json' extension.
In order to load a fixture from current working directory it has to end with '.json'.
Why don't you run the loaddata command from your migration then?
import datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models


class Migration(DataMigration):

    def forwards(self, orm):
        from django.core.management import call_command
        call_command("loaddata", "my_fixture.json")
If you're still getting the error:
No fixture named 'X' found
Try using the path to the json file, as manage.py is probably not being run from the project directory (it may even be run from the root directory). Try something like this:
import os
app_path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
json_path = os.path.join(app_path, 'fixtures', 'my_fixture.json')
call_command("loaddata", json_path)

Django ImportError for models.py

(Updating my question with more information.)
My django app is running fine on my dev server.
I have a view that pulls from the database using the below line that works fine:
from myapp.models import MyTable
However, if I add the above 'from/import' to another module (see below structure, it's the module named 'problem_module.py') I'm writing where I want to pull from the sqlite3 database, I get this error.
raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
ImportError: Could not import settings 'myfolder.settings' (Is it on sys.path?): No module named myfolder.settings
I've read and tried various solutions recommended when people get this error, but I'm missing something because I'm unable to resolve it.
I'm using Django 1.4 and have the lay-out as recommended.
mysite/
    manage.py
    mysite/
        __init__.py
        settings.py
        urls.py
        wsgi.py
    myapp/
        __init__.py
        models.py
        admin.py
        views.py
    indevelopment/
        __init__.py
        problem_module.py
I figured out what was happening and why after going through the traceback carefully and looking at the django source code. Here is what happens.
When you run:
python manage.py runserver
the environment variable gets set properly, either because you already adjusted this small file or simply because Django 1.4 configures it for you automatically:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings")
However, because this call to os.environ.setdefault sits under an
if __name__ == "__main__":
block, it only gets run if you execute that file directly, as you do with:
python manage.py runserver
Otherwise, if you are running a file that needs that environment variable - say, testing a module in Eclipse - the variable needs to be set somewhere else (the shell, the IDE's run configuration, etc.).
All the answers that I got generally pointed to this, but I needed the context.
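For example, a standalone script can set the variable itself before importing any models (a sketch based on the layout above; on Django 1.4 setting the variable is enough, while newer versions also need django.setup() before touching the ORM):
# indevelopment/problem_module.py
import os

# Must be set before any Django model is imported.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

from myapp.models import MyTable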
But as a little adjustment (yes, not a good idea, as it couples your code to Django's internals) you can also hardcode it manually in /django/conf/__init__.py.
Specifically, to see where it happens, the change below works:
# in module: /django/conf/__init__.py
class LazySettings(LazyObject):
    def _setup(self):
        try:
            # Comment out the os.environ lookup and hardcode in your app settings
            # settings_module = os.environ[ENVIRONMENT_VARIABLE]
            # WARNING: bad practice to do this.
            settings_module = "myapp.settings"
Have you changed/set DJANGO_SETTINGS_MODULE?
Try export DJANGO_SETTINGS_MODULE=mysite.settings and start your dev server.
modify your manage.py:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
PyCharm sometimes overrides DJANGO_SETTINGS_MODULE to an empty string. Try to debug your manage.py and see if it really changes after the setdefault() call.
If it's not set, either change the PyCharm settings or use os.environ['DJANGO_SETTINGS_MODULE'] = 'my_settings',
or hack the files at .idea/. In this case .idea/workspace.xml contains
env name="DJANGO_SETTINGS_MODULE" value=""

Fixtures not loaded during testing

I wrote a unit test checking whether initial data is loaded correctly. However, Node.objects.all().count() always returns 0, so it seems the fixtures are not loaded at all. There is no output or error message on the command line saying that the fixtures were not loaded.
import unittest

from core.models import Node


class NodeTableTestCase(unittest.TestCase):
    fixtures = ['core/core_fixture.json']

    def setUp(self):
        print "nothing to prepare..."

    def testFixture(self):
        """Check if initial data can be loaded correctly"""
        self.assertEqual(Node.objects.all().count(), 14)
The fixture core_fixture.json contains 14 nodes, and I'm using it as an initial data load into the db with the following command:
python manage.py loaddata core/core_fixture.json
They are located in the folder I provided in the settings.py setting FIXTURE_DIRS.
Found the solution in another thread, answer from John Mee
# Import the TestCase from django.test:
# Bad: import unittest
# Bad: import django.utils.unittest
# Good: import django.test
from django.test import TestCase

class test_something(TestCase):
    fixtures = ['one.json', 'two.json']
    ...
Doing this I got a proper error message, saying that a foreign key was violated, and that I also had to include the fixtures for the app "auth". I exported the needed data with this command:
manage.py dumpdata auth.User auth.Group > usersandgroups.json
Using unittest I only got the message that loading the fixture data failed, which was not very helpful.
Finally my working test looks like this:
from django.test import TestCase

from core.models import Node


class NodeTableTestCase2(TestCase):
    fixtures = ['auth/auth_usersandgroups_fixture.json', 'core/core_fixture.json']

    def setUp(self):
        # Test definitions as before.
        print "welcome in setup: while..nothing to setup.."

    def testFixture2(self):
        """Check if initial data can be loaded correctly"""
        self.assertEqual(Node.objects.all().count(), 11)
When loading fixtures in test cases, I don't think Django allows you to include the directory name. Try changing your fixtures setting to:
fixtures = ['core_fixture.json',]
You might have to change your FIXTURE_DIRS setting as well, to include the core directory.
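For example (a sketch; the path is illustrative and should point at the directory that actually contains core_fixture.json):
# settings.py
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

FIXTURE_DIRS = [
    os.path.join(BASE_DIR, 'fixtures', 'core'),
]
With that in place, a plain file name like 'core_fixture.json' is enough.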
If you run your tests in verbose mode, you will see the fixture files that Django attempts to load. This should help you debug your configuration.
python manage.py test -v 2
Make sure you have your app listed in INSTALLED_APPS and that your app contains a models.py file.

Emulating an app with models in a django unittest

I'm writing some code which retrieves info about installed apps, especially their defined models, and then does stuff based on that information, but I'm having some problems writing a clean, nice unit test. Is there a way to emulate or add an app in unit tests without having to run manage.py startproject and manage.py startapp in my tests folder just to have a test app available?
Sure, try this on for size:
from django.conf import settings
from django.core.management import call_command
from django.test.testcases import TestCase
from django.db.models import loading


class AppTestCase(TestCase):
    '''
    Adds apps specified in `self.apps` to `INSTALLED_APPS` and
    performs a `syncdb` at runtime.
    '''
    apps = ()
    _source_installed_apps = ()

    def _pre_setup(self):
        super(AppTestCase, self)._pre_setup()
        if self.apps:
            self._source_installed_apps = settings.INSTALLED_APPS
            settings.INSTALLED_APPS = settings.INSTALLED_APPS + self.apps
            loading.cache.loaded = False
            call_command('syncdb', verbosity=0)

    def _post_teardown(self):
        super(AppTestCase, self)._post_teardown()
        if self._source_installed_apps:
            settings.INSTALLED_APPS = self._source_installed_apps
            self._source_installed_apps = ()
            loading.cache.loaded = False
Your test case would look something like this:
class SomeAppTestCase(AppTestCase):
    apps = ('someapp',)
In case you were wondering why, I did an override of _pre_setup() and _post_teardown() so I don't have to bother with calling super() in setUp() and tearDown() in my final test case. Otherwise, this is what I pulled out of Django's test runner. I whipped it up and it worked, although I'm sure that, with closer inspection, you can further optimize it and even avoid calling syncdb every time if it won't conflict with future tests.
EDIT:
So I seem to have gone out of my way, thinking you needed to dynamically add new models. If you've created an app for testing purposes only, here's what you can do to have it discovered during your tests.
In your project directory, create a test.py file that will contain your test settings. It should look something like this:
from settings import *
# registers test app for discovery
INSTALLED_APPS += ('path.to.test.app',)
You can now run your tests with python manage.py test --settings=myproject.test and your app will be in the installed apps.
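As an aside (an addition, not from the original answers): on Django 1.7 and later the app registry replaced django.db.models.loading, and the built-in modify_settings helper can append a test-only app for a single test case. Note that it does not create database tables for the appended app, so it suits apps without models or tests that never touch those tables:
from django.test import TestCase, modify_settings


@modify_settings(INSTALLED_APPS={'append': 'path.to.test.app'})
class SomeAppRegistryTestCase(TestCase):
    def test_app_is_installed(self):
        from django.apps import apps
        self.assertTrue(apps.is_installed('path.to.test.app'))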