Is there a decorator in Django that would allow running a test method without applying the class's fixtures?
Something like:
from django.test import TestCase

class TestSomething(TestCase):
    fixtures = ['test_fixture.json']

    def test_with_fixture(self):
        # test something with fixtures
        ...

    @do_not_use_fixtures
    def test_without_fixtures(self):
        # test something without fixtures
        ...
Django's TestCase loads the fixtures once per class for performance reasons, so there is no built-in way to run a single test method in that class without them.
It might be possible with TransactionTestCase, but you'd have to dive into Django's internals to do it, so I wouldn't recommend that.
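One simple workaround (not part of the original answer, just a sketch): move the fixture-free tests into their own TestCase subclass, so only the class that declares fixtures pays for loading them.

from django.test import TestCase

class TestWithFixtures(TestCase):
    fixtures = ['test_fixture.json']

    def test_with_fixture(self):
        # runs with the fixture data loaded
        ...

class TestWithoutFixtures(TestCase):
    # no `fixtures` attribute, so nothing is loaded for these tests
    def test_without_fixtures(self):
        ...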
Related
I'm running this with Django 2.2.5 and Postgres 11
It appears that the Django test framework's LiveServerTestCase reloads fixture data between tests without deleting the previously loaded data. I run into this with a consistent error:
> ./manage.py test functional_tests
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
.EE
======================================================================
The first test passes but the subsequent tests fail.
The error I get is:
django.db.utils.IntegrityError: Problem installing fixture '.../fixtures/unit_test.json': Could not load app.Configuration(pk=1): duplicate key value violates unique constraint "app_configuration_site_id_3124a87d_uniq"
DETAIL: Key (site_id)=(1) already exists.
If I comment out the fixtures and don't load them, the three tests pass fine.
There is only one record with the PK in question in the fixtures, which tells me that the test reloads the data and fails because it finds an existing record with the same PK.
Is there a way to tell it to either overwrite the data with the same PKs or ignore (a-la get_or_create)?
Is there a better practice to follow with LiveServerTestCase?
It's frustrating because I can't seem to use this test class with fixtures when I need to.
Any insight is appreciated.
RB
Here is the test setup:
from django.contrib.staticfiles.testing import LiveServerTestCase
from selenium.webdriver.chrome.webdriver import WebDriver

class TestHeaderNavigation(LiveServerTestCase):
    fixtures = ['unit_test']

    def setUp(self):
        super().setUp()
        self.selenium = WebDriver()
        self.selenium.implicitly_wait(5)
        self.selenium.get('%s' % (self.live_server_url))

    def tearDown(self):
        super().tearDown()
        self.selenium.quit()
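To pick up the "a-la get_or_create" idea from the question: one hedged workaround is to drop the fixtures attribute entirely and create the data idempotently in setUp, so re-running it before every test updates the existing row instead of re-inserting it. This is only a sketch; the Configuration import path and its fields are assumptions based on the error message above.

from django.contrib.sites.models import Site
from django.contrib.staticfiles.testing import LiveServerTestCase

from app.models import Configuration  # hypothetical import path, guessed from the error message

class TestHeaderNavigation(LiveServerTestCase):
    # no `fixtures` attribute; create the data idempotently instead

    def setUp(self):
        super().setUp()
        site = Site.objects.get_current()
        # Keyed on the unique site FK, so running this again between tests
        # updates the existing row instead of violating the unique constraint.
        Configuration.objects.update_or_create(
            site=site,
            defaults={},  # fill in the remaining Configuration fields from the fixture here
        )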
We use py.test to test our Django system.
We use unmanaged DBs in our system:
class Example(models.Model):
    class Meta:
        managed = False
When we run tests that need access to the existing DBs with existing data, they fail with
ConnectionDoesNotExist: The connection my_connection_name doesn't exist
This is what our tests look like:
@pytest.mark.django_db
def test_example_access():
    from example.models import Example
    Example.objects.count()
We've tried adding the following to conftest.py, as described in the top answer to "Django test tables are not being created", but it did not change anything (despite being executed).
@pytest.fixture(autouse=True, scope='session')
def __make_unmanaged_managed():
    from django.apps import apps
    unmanaged_models = [m for m in apps.get_models() if not m._meta.managed]
    for m in unmanaged_models:
        m._meta.managed = True
Thanks a lot for your ideas!
PS: "Testing django application with several legacy databases" describes a workaround for unit tests; however, I would not know how to apply it when using py.test specifically.
I had the same problem here, and as far as I can tell, this is the snippet that worked for me.
I had to put this in the conftest.py file, in the root of my django project, to override the normal django_test_environment fixture in pytest-django.
import pytest

@pytest.fixture(autouse=True, scope='session')
def django_test_environment(request):
    """
    This is an override of the django_test_environment fixture in pytest-django,
    to run through the unmanaged models and, just for the tests, set them to be
    managed, so we can create their tables in our tests even if we don't manage
    them directly in Django.
    """
    from pytest_django.plugin import django_settings_is_configured
    from pytest_django.plugin import _setup_django
    from django.apps import apps

    unmanaged_models = [m for m in apps.get_models() if not m._meta.managed]
    for m in unmanaged_models:
        m._meta.managed = True

    if django_settings_is_configured():
        _setup_django()

        from django.conf import settings as dj_settings
        from django.test.utils import (setup_test_environment,
                                       teardown_test_environment)

        dj_settings.DEBUG = False
        setup_test_environment()
        request.addfinalizer(teardown_test_environment)
What this does
When pytest-django sets up the test database, it does so with the django_db_setup fixture. That fixture itself depends on django_test_environment, which we override in conftest.py to update the managed state of the unmanaged models.
When setup_test_environment is called as part of our overridden django_test_environment fixture, the necessary tables get created for the models we reference in our tests.
I'd love to hear whether this works for you; it seems to have resolved the issue on my end. I ended up having to work with many, many unmanaged models, and manually creating their tables with SQL seemed like a terrible idea.
I need to do some unit tests in my Django project. The problem is that almost every use case depends on prepopulated database objects.
For example, I want to create a product and test whether all the pre_save signals completed successfully.
from django.contrib.auth.models import User
from django.test import TestCase

from .models import Product

class ProductTestCase(TestCase):
    def setUp(self):
        self.user = User.objects.create(username='test_user')
        self.product = Product.objects.create(name='Test product', user=self.user)

    def test_product_exists(self):
        self.assertIsNotNone(self.product)

    def product_is_active_by_default(self):
        ...
I can't do that, because a Product has to have a related User object, and I can't create a User object because a User has to have a related Plan object. There are multiple plans in my production database, one of which is the default, but there are no plans in the test database.
So to be able to run unit tests I need to prepopulate the test database with multiple objects from multiple apps.
How can I do that?
You can simply use Django fixtures for that :-)
First populate a sample DB with data, then export the data with python manage.py dumpdata.
Then, in one of your apps, create a directory named fixtures and put the exported JSON file there (named tests.json or something else).
In your test class, load the fixtures like this:
class ProductTestCase(TestCase):
    fixtures = ['tests.json', ]
Check out the Django docs on fixtures.
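For reference, the dumpdata step can also be scripted; a minimal sketch (the app label 'myapp' is hypothetical):

from django.core.management import call_command

# Dump the auth users plus the app's models into the app's fixtures directory.
with open('myapp/fixtures/tests.json', 'w') as fh:
    call_command('dumpdata', 'auth.user', 'myapp', indent=2, stdout=fh)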
PS: check out factory_boy too (@Gabriel Muj's answer).
I don't recommend using fixtures, since you will need to maintain them each time you change your models. A better approach for creating objects in tests is the factory_boy library (https://factoryboy.readthedocs.io/en/latest/), which is more flexible.
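A minimal factory_boy sketch for the Product example above (only a sketch; anything beyond the name/user fields shown in the question is an assumption):

import factory
from django.test import TestCase

from .models import Product  # same import as in the question

class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = 'auth.User'

    username = factory.Sequence(lambda n: 'test_user_%d' % n)

class ProductFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Product

    name = 'Test product'
    user = factory.SubFactory(UserFactory)

class ProductTestCase(TestCase):
    def test_product_exists(self):
        # no fixtures needed; the factory builds the related objects on demand
        self.assertIsNotNone(ProductFactory())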
I need to add a many-to-many relationship in my setUpTestData sequence so that my tests will run correctly.
According to the docs, Many-to-Many relationships cannot be created until an object has been saved, as the primary key must first exist. This means that I cannot set the relationship in setUpTestData with Model.objects.create() as I do with other variables.
Is there any way to include a Many-to-Many relationship in setUpTestData?
As a partial answer, a solution I found in the docs seems to work halfway. Here is my code:
(I'm linking a social app to Django's Sites framework so that my tests will be able to run).
SetupTestData.py
from django.test import TestCase
from django.contrib.sites.models import Site
from allauth.socialaccount.models import SocialApp

class TestData(TestCase):
    @classmethod
    def setUpTestData(cls):
        cls.current_site = Site.objects.get_current()
        cls.SocialApp1 = cls.current_site.socialapp_set.create(
            provider="facebook",
            name="facebook",
            client_id="123456789",
            secret="0987654321",
        )
test_sandbox.py
from MyApp.tests.SetupTestData import TestData
from django.contrib.sites.models import Site
from allauth.socialaccount.models import SocialApp

class TestHomePage(TestData):
    def test_sandbox(self):
        print(Site.objects.all())
        print(Site.objects.get_current())
        print(Site.objects.get_current().socialapp_set.all())
        print(self.SocialApp1)
        print(self.SocialApp1.sites)
        print(SocialApp.objects.get(provider='facebook').sites)
Test Output
Creating test database for alias 'default'...
[<Site: example.com>]
example.com
[<SocialApp: facebook>]
facebook
sites.Site.None
sites.Site.None
Destroying test database for alias 'default'...
You can see from the results that I can locate the app via the Site, but it doesn't seem to work in the reverse direction. I suppose if I somehow reversed the relationship in setUpTestData I could make it work the opposite way instead. I'll leave the question open for a better solution.
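One hedged way to get the reverse direction working is to create the SocialApp first and then add the Site to its sites many-to-many field; by the time setUpTestData runs that line, both objects have been saved and have primary keys. A sketch based on the code above:

from django.test import TestCase
from django.contrib.sites.models import Site
from allauth.socialaccount.models import SocialApp

class TestData(TestCase):
    @classmethod
    def setUpTestData(cls):
        cls.current_site = Site.objects.get_current()
        cls.SocialApp1 = SocialApp.objects.create(
            provider="facebook",
            name="facebook",
            client_id="123456789",
            secret="0987654321",
        )
        # Both objects are saved at this point, so the M2M can be populated directly.
        cls.SocialApp1.sites.add(cls.current_site)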
I am using Django 1.5b1 and South migrations, and life has generally been great. I have some schema migrations which create my database, with a User table among others. I then load a fixture for ff.User (my custom user model):
def forwards(self, orm):
    from django.core.management import call_command
    fixture_path = "/absolute/path/to/my/fixture/load_initial_users.json"
    call_command("loaddata", fixture_path)
All was working great until I added another field to my ff.User model, much further down the migration line. Now my fixture load breaks:
DatabaseError: Problem installing fixture 'C:\<redacted>create_users.json':
Could not load ff.User(pk=1): (1054, "Unknown column 'timezone_id' in 'field list'")
Timezone is the field (ForeignKey) which I added to my user model.
The current ff.User model differs from what is in the database at that point in the migration history, so the Django ORM gives up with a DB error. Unfortunately, I cannot refer to my model in the fixture as orm['ff.User'], which seems to be the South way of doing things.
How should I load fixtures properly with South so that they do not break once the models they are for get modified?
I found a Django snippet that does the job!
https://djangosnippets.org/snippets/2897/
It loads the data according to the models frozen in the migration rather than the actual model definition in your app's code! Works perfectly for me.
I proposed a solution that might interest you too:
https://stackoverflow.com/a/21631815/797941
Basically, this is how I load my fixture:
from south.v2 import DataMigration
import json

class Migration(DataMigration):

    def forwards(self, orm):
        json_data = open("path/to/your/fixture.json")
        items = json.load(json_data)
        for item in items:
            # Be careful, this lazy line won't resolve foreign keys
            obj = orm[item["model"]](**item["fields"])
            obj.save()
        json_data.close()
This was a frustrating part of using fixtures for me as well. My solution was to make a few helper tools. One creates fixtures by sampling data from a database and includes the South migration history in the fixtures.
There's also a tool to add South migration history to existing fixtures.
The third tool checks out the commit where the fixture was last modified, loads the fixture, then checks out the most recent commit, runs a South migration, and dumps the migrated DB back into the fixture. This is done in a separate database so your default DB doesn't get stomped on.
The first two can be considered beta code, and please treat the third as usable alpha, but they're already being quite helpful to me.
Would love to get some feedback from others:
git@github.com:JivanAmara/django_fixture_tools.git
Currently, it only supports projects using git as the RCS.
The most elegant solution I've found is here, whereby Django's models.get_model function is switched out to supply the model from the supplied orm instead. It's then set back after the fixture is applied.
from django.db import models
from django.core.management import call_command

def load_fixture(file_name, orm):
    original_get_model = models.get_model

    def get_model_southern_style(*args):
        try:
            return orm['.'.join(args)]
        except:
            return original_get_model(*args)

    models.get_model = get_model_southern_style
    call_command('loaddata', file_name)
    models.get_model = original_get_model
You call it with load_fixture('my_fixture.json', orm) from within your forwards definition.
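For completeness, a minimal sketch of a data migration wired up to the helper above (the fixture name is just a placeholder, and load_fixture is assumed to live in this migration module or be imported from a shared helper):

from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        # `load_fixture` is the helper defined above
        load_fixture('my_fixture.json', orm)

    def backwards(self, orm):
        pass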
Generally, South handles migrations using forwards() and backwards() functions. In your case you should either:
- alter the fixtures to contain proper data, or
- import the fixture before the migration that breaks it (or within the same migration, but before altering the schema).
In the second case, before the migration that adds (or, as in your case, removes) the column, you should run a migration that explicitly loads the fixtures, similar to this (docs):
def forwards(self, orm):
    from django.core.management import call_command
    call_command("loaddata", "create_users.json")
I believe this is the easiest way to accomplish what you needed. Also make sure you do not make simple mistakes, like trying to import data with the new structure before applying the older migrations.
Reading the following two posts has helped me come up with a solution:
http://andrewingram.net/2012/dec/common-pitfalls-django-south/#be-careful-with-fixtures
http://news.ycombinator.com/item?id=4872596
Specifically, I rewrote my data migrations to use the output from 'dumpscript'.
I needed to modify the resulting script a bit to work with South. Instead of doing
from ff.models import User
I do
User = orm['ff.User']
This works exactly like I wanted it to. Additionally, it has the benefit of not hard-coding IDs, like fixtures require.
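For illustration, a stripped-down dumpscript-style data migration adapted this way might look like the following; the user fields here are purely hypothetical.

from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        # Use the frozen model instead of `from ff.models import User`
        User = orm['ff.User']
        user = User(username='admin', email='admin@example.com')
        user.save()

    def backwards(self, orm):
        orm['ff.User'].objects.filter(username='admin').delete()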