Django testing of multi-db with automatic routing

Simple problem - I'm using multi-db successfully with automatic routing set up as documented, on a legacy db (whose tables are unmanaged). Now I want to test it. I've already set up a test runner to get around the managed problem, and I can confirm that the test databases are being created as expected.
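(For context, a common pattern for such a runner, sketched here with illustrative names, is to temporarily flip unmanaged models to managed so the test databases get their tables:)

from django.db.models.loading import get_models
from django.test.simple import DjangoTestSuiteRunner

class ManagedModelTestRunner(DjangoTestSuiteRunner):
    """Treat unmanaged models as managed for the duration of the test
    run, so tables for the legacy models are created."""

    def setup_test_environment(self, *args, **kwargs):
        self.unmanaged_models = [m for m in get_models()
                                 if not m._meta.managed]
        for m in self.unmanaged_models:
            m._meta.managed = True
        super(ManagedModelTestRunner, self).setup_test_environment(*args, **kwargs)

    def teardown_test_environment(self, *args, **kwargs):
        super(ManagedModelTestRunner, self).teardown_test_environment(*args, **kwargs)
        # Restore the original managed flags
        for m in self.unmanaged_models:
            m._meta.managed = False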
My problem is that the database routing is still trying to look at the non-test database. How can I set up my routers.py file to look at the test_ database when in test mode, and the non-test database at any other time?
Should be simple, but I'm beating my head against the wall over this one...
FWIW:
class PmCatalogRouter(object):
    """A router to control all database operations on models in
    the PmCatalog application"""

    def db_for_read(self, model, **hints):
        "Point all operations on pmCatalog models to 'catalog'"
        if model._meta.app_label == 'pmCatalog':
            return 'catalog'
        return None

    def db_for_write(self, model, **hints):
        "Point all operations on pmCatalog models to 'catalog'"
        if model._meta.app_label == 'pmCatalog':
            return 'catalog'
        return None

    def allow_syncdb(self, db, model):
        "Make sure the pmCatalog app only appears on the 'catalog' db"
        if db == 'catalog':
            return model._meta.app_label == 'pmCatalog'
        elif model._meta.app_label == 'pmCatalog':
            return False
        return None
Much appreciate the additional eyeballs on this ;)
Thanks

OK - so here's what happened. Turns out it was completely working all along, but two separate issues kept my tests from passing. In this case I am testing the Django query methods against the legacy methods. My tests weren't passing because the legacy methods were looking at the original database rather than the test database. I fixed that problem, and then I realized that the stored procedures were not getting created in the test runner.
Once these two problems were corrected everything magically fell together...
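For anyone hitting the same procedure problem: one option is a runner that creates the procedures right after the test databases are built. A rough sketch, with an illustrative statement and the 'catalog' alias from above:

from django.db import connections
from django.test.simple import DjangoTestSuiteRunner

# Illustrative only -- substitute your real procedure definitions.
CREATE_PROCEDURES = [
    """CREATE PROCEDURE catalog_item_count()
       BEGIN
           SELECT COUNT(*) FROM pmcatalog_item;
       END""",
]

class CatalogTestRunner(DjangoTestSuiteRunner):
    def setup_databases(self, **kwargs):
        # Let Django create the test databases first...
        old_config = super(CatalogTestRunner, self).setup_databases(**kwargs)
        # ...then create the stored procedures in the test 'catalog' db.
        cursor = connections['catalog'].cursor()
        for statement in CREATE_PROCEDURES:
            cursor.execute(statement)
        return old_config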
HTH someone.


Multiple Django apps, shared authentication

Note: there are two answers to this question, depending on whether sharing is across different sites or across different subdomains.
A user goes to site1.com and logs in. Now, if he goes to site2.com, then he should already be logged in (authenticated) at that site.
site1.com and site2.com are handled by different Django apps on the same server.
I get that the sites can share the database containing the authentication tables. What I don't get is how the session data is handled. After logging in to site1, the user goes to site2. There, request.user is always "AnonymousUser: AnonymousUser" instead of the logged-in user.
I have set it up as described here: https://docs.djangoproject.com/en/dev/topics/db/multi-db/ :
site1's settings has a single database that contains the auth models as well as some other data tables. site2's settings has 2 databases: one with its own data tables, and also the one used by site1. I essentially copied the AuthRouter class and set up the database router.
Is it not possible what I am trying to do? I don't actually understand how the two sites can share session data. Do I need something special outside of Django? Or should this work? I can include my code here, but don't want to confuse the issue if my basic thinking about this is wrong.
EDIT: here is my setup. I am trying this on localhost.
Site1 is running on localhost:8080
site2 is running on localhost:8000
SITE2 APP:
db_router.py:
class AuthRouter(object):
    def db_for_read(self, model, **hints):
        if model._meta.app_label == 'auth':
            return 'the_ui'
        return None

    # same for write

    def allow_syncdb(self, db, model):
        if db == 'the_ui':
            return model._meta.app_label == 'auth'
        elif model._meta.app_label == 'auth':
            return False
        return None
class OtherRouter(object):
    def db_for_read(self, model, **hints):
        return "default"

    # same for write, relation, syncdb
settings.py:
DATABASE_ROUTERS = ['site2_app.db_router.AuthRouter', 'site2_app.db_router.OtherRouter']
SESSION_COOKIE_DOMAIN = 'http://localhost:8080'
SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"
DATABASES = {
    'default': {
        # ...
    },
    'the_ui': {
        # ...
    }
}
SITE 1 APP:
# no router
# only a single database, the same one used as 'the_ui' in site2
SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"
The marked answer is correct based on the initial question of using different sites.
Here is the answer for different subdomains, eg www.site.com and shop.site.com
Use the shared database authentication as described in the question. And then, in both settings.py:
SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"
SESSION_COOKIE_DOMAIN = '.site.com'  # notice the period
SESSION_COOKIE_NAME = 'my_cookie'
SECRET_KEY = '...'  # must be the same in both settings.py
There might be some issue about what happens if you have other subdomains that should NOT share this information. Or, maybe not, if you give their cookies different names??
Not sure if this can work on localhost.
As you said, the two sites can have the same authentication data by sharing the database or syncing the Users table between their respective databases.
This will ensure any user of site1.com will automatically become a member of site2.com and vice versa.
But your requirement that any user who logs into site1.com should automatically be logged in at site2.com is a bit tricky. What you really need is Single Sign-On (SSO).
The reason it can't be achieved merely by sharing the database (including session data) is that site2.com can never gain access to a cookie set by site1.com on the browser, because of cross-domain restrictions.
There are many SSO solutions using Django. Have a look at this SO question. Though I have never used it, django-openid seems a good option.
You can use database routers to specify which database should be used for the auth backend. Here is an example router:
class UserSessionRouter(object):
    def db_for_read(self, model, **hints):
        if model._meta.app_label == 'auth':
            return 'usersandsessions'
        elif model._meta.app_label == 'accounts':
            return 'usersandsessions'
        elif model._meta.app_label == 'sessions':
            return 'usersandsessions'
        return None

    def db_for_write(self, model, **hints):
        if model._meta.app_label == 'auth':
            return 'usersandsessions'
        elif model._meta.app_label == 'accounts':
            return 'usersandsessions'
        elif model._meta.app_label == 'sessions':
            return 'usersandsessions'
        return None
Then specify the router with the DATABASE_ROUTERS setting and set SESSION_COOKIE_DOMAIN, as shown below:
DATABASE_ROUTERS = ['site2.routers.UserSessionRouter']
SESSION_COOKIE_DOMAIN = 'site1.com'
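On a syncdb-era Django you would likely also want an allow_syncdb method, following the same pattern as the AuthRouter in the question. A sketch, assuming the same 'usersandsessions' alias:

ROUTED_APPS = ('auth', 'accounts', 'sessions')

class UserSessionRouter(object):
    # ... db_for_read / db_for_write as above ...

    def allow_syncdb(self, db, model):
        # Keep auth/accounts/sessions tables only on 'usersandsessions'.
        if db == 'usersandsessions':
            return model._meta.app_label in ROUTED_APPS
        elif model._meta.app_label in ROUTED_APPS:
            return False
        return None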
As Sudipta mentioned, openid is one way to accomplish SSO.
Another way is to use SAML directly (there are some tools out there for this), or a hosted service like Stormpath (https://stormpath.com), which does the SSO work for you and provides direct support for Django's auth system: https://github.com/stormpath/stormpath-django
I work at Stormpath, so I'm pretty biased, but figured I'd chime in as there's quite a lot of confusion around SSO + Django solutions.

How do I combine django queries to allow me to save in one go?

As it stands, I have a deserialiser that can import any model I throw at it and put it into my database. Unfortunately, it hits the database with every single model, and I want to stop this.
Is there any way for me to lump lots of short saves into one big one?
Example code:
def deserialise(xml):
    for x in model_list:
        # do work to make instance...
        instance.save()
    return True
is there any way to move the saving of instance out of the for loop?
You can use transaction.commit_manually().
from django.db import transaction

@transaction.commit_manually
def deserialise(xml):
    for x in model_list:
        # do work to make instance...
        instance.save()
    transaction.commit()
    return True
Or transaction.commit_on_success(), which will automatically commit the saves when and if the function returns successfully.

from django.db import transaction

@transaction.commit_on_success
def deserialise(xml):
    for x in model_list:
        # do work to make instance...
        instance.save()
    return True
Alternatively, in Django 1.4 there's also bulk_create(), but do note the caveats listed in the documentation.
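For example, assuming all the instances belong to one (hypothetical) model MyModel, the loop could collect them and issue a single query:

from myapp.models import MyModel  # hypothetical model

def deserialise(xml):
    instances = []
    for x in model_list:
        # do work to make instance...
        instances.append(instance)
    # One multi-row INSERT instead of one query per save(). Caveats:
    # save() is not called, pre/post_save signals are not sent, and
    # primary keys are not set on the created objects.
    MyModel.objects.bulk_create(instances)
    return True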

django test client gets 404 for all urls

I am doing my first experiments with Django testing, and I am having the problem that I always get the 404 template regardless of which URL (even /) I use.
If I run the very same code in the Django shell, it works as expected and always presents me with the contents of the requested URL.
class SimpleTest(TestCase):
    def setUp(self):
        self.user = User.objects.create_user('test', 'test', 'test')
        self.user.is_staff = True
        self.user.save()
        self.client = Client()

    def test_something(self):
        self.assertTrue(self.client.login(username='test', password='test'))
        self.client.get("/")
The login returns True, but the get() fails. Any hints what I am doing wrong here?
Keep in mind that most views use something like get_object_or_404, get_list_or_404, or simply raise Http404 when there's a problem accessing some object or another. You'll need to make sure that your test database is populated with sufficient objects to fulfill all these requirements to make the view not return a 404.
Remember, when running tests, the database is rolled back after each test (using transactions), so each test method must stand on its own or the setUp method must populate the database with any required dependencies.
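For instance, if the view behind / looks up objects with get_object_or_404, the test must create them first. A minimal sketch with a hypothetical Article model:

from django.test import TestCase
from myapp.models import Article  # hypothetical model

class FrontPageTest(TestCase):
    def setUp(self):
        # Create whatever the view under test expects to find;
        # without it, the view would raise Http404.
        Article.objects.create(title='First post', slug='first-post')

    def test_front_page(self):
        response = self.client.get('/')
        self.assertEqual(response.status_code, 200)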

django unit tests without a db

Is there a way to write Django unit tests without setting up a database? I want to test business logic which doesn't require the database. And while it is fast to set up a database, I really don't need it in some situations.
You can subclass DjangoTestSuiteRunner and override the setup_databases and teardown_databases methods so they do nothing.
Create a new settings file and set TEST_RUNNER to the new class you just created. Then, when running your tests, specify your new settings file with the --settings flag.
Here is what I did:
Create a custom test suit runner similar to this:
from django.test.simple import DjangoTestSuiteRunner

class NoDbTestRunner(DjangoTestSuiteRunner):
    """ A test runner to test without database creation """

    def setup_databases(self, **kwargs):
        """ Override the database creation defined in parent class """
        pass

    def teardown_databases(self, old_config, **kwargs):
        """ Override the database teardown defined in parent class """
        pass
Create a custom settings file:
from mysite.settings import *
# Test runner with no database creation
TEST_RUNNER = 'mysite.scripts.testrunner.NoDbTestRunner'
When running your tests, set the --settings flag to your new settings file:
python manage.py test myapp --settings='no_db_settings'
UPDATE: April/2018
Since Django 1.8, django.test.simple.DjangoTestSuiteRunner has been replaced by django.test.runner.DiscoverRunner.
For more info, check the official doc section about custom test runners.
Generally, tests in an application can be classified into two categories:
Unit tests: these test individual snippets of code in isolation and do not need to touch the database.
Integration tests: these actually go to the database and test the fully integrated logic.
Django supports both unit and integration tests.
Unit tests do not require the database to be set up and torn down, so these should inherit from SimpleTestCase.
from django.test import SimpleTestCase

class ExampleUnitTest(SimpleTestCase):
    def test_something_works(self):
        self.assertTrue(True)
For integration test cases, inherit from TestCase, which in turn inherits from TransactionTestCase; it will set up and tear down the database around each test.
from django.test import TestCase

class ExampleIntegrationTest(TestCase):
    def test_something_works(self):
        # do something with the database
        self.assertTrue(True)
This strategy ensures that the database is created and destroyed only for the test cases that access it, so your tests will be more efficient.
From django.test.simple:

warnings.warn(
    "The django.test.simple module and DjangoTestSuiteRunner are deprecated; "
    "use django.test.runner.DiscoverRunner instead.",
    RemovedInDjango18Warning)
So override DiscoverRunner instead of DjangoTestSuiteRunner.
from django.test.runner import DiscoverRunner

class NoDbTestRunner(DiscoverRunner):
    """ A test runner to test without database creation/deletion """

    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
Use it like this:
python manage.py test --testrunner=app.filename.NoDbTestRunner app
I chose to inherit from django.test.runner.DiscoverRunner and make a couple of additions to the run_tests method.
My first addition checks to see if setting up a db is necessary and allows the normal setup_databases functionality to kick in if a db is necessary. My second addition allows the normal teardown_databases to run if the setup_databases method was allowed to run.
My code assumes that any TestCase that inherits from django.test.TransactionTestCase (and thus django.test.TestCase) requires a database to be set up. I made this assumption because the Django docs say:
If you need any of the other more complex and heavyweight Django-specific features like ... Testing or using the ORM ... then you should use TransactionTestCase or TestCase instead.
https://docs.djangoproject.com/en/1.6/topics/testing/tools/#django.test.SimpleTestCase
mysite/scripts/settings.py
from django.test import TransactionTestCase
from django.test.runner import DiscoverRunner

class MyDiscoverRunner(DiscoverRunner):
    def run_tests(self, test_labels, extra_tests=None, **kwargs):
        """
        Run the unit tests for all the test labels in the provided list.

        Test labels should be dotted Python paths to test modules, test
        classes, or test methods.

        A list of 'extra' tests may also be provided; these tests
        will be added to the test suite.

        If any of the tests in the test suite inherit from
        ``django.test.TransactionTestCase``, databases will be set up.
        Otherwise, databases will not be set up.

        Returns the number of tests that failed.
        """
        self.setup_test_environment()
        suite = self.build_suite(test_labels, extra_tests)
        # ----------------- First Addition --------------
        need_databases = any(isinstance(test_case, TransactionTestCase)
                             for test_case in suite)
        old_config = None
        if need_databases:
            # --------------- End First Addition ------------
            old_config = self.setup_databases()
        result = self.run_suite(suite)
        # ----------------- Second Addition -------------
        if need_databases:
            # --------------- End Second Addition -----------
            self.teardown_databases(old_config)
        self.teardown_test_environment()
        return self.suite_result(suite, result)
Finally, I added the following line to my project's settings.py file.
mysite/settings.py
TEST_RUNNER = 'mysite.scripts.settings.MyDiscoverRunner'
Now, when running only non-db-dependent tests, my test suite runs an order of magnitude faster! :)
Update: also see this answer about using the third-party tool pytest.
@Cesar is right. After accidentally running ./manage.py test --settings=no_db_settings without specifying an app name, my development database was wiped out.
For a safer manner, use the same NoDbTestRunner, but in conjunction with the following mysite/no_db_settings.py:
from mysite.settings import *

# Test runner with no database creation
TEST_RUNNER = 'mysite.scripts.testrunner.NoDbTestRunner'

# Use an alternative database as a safeguard against accidents
DATABASES['default']['NAME'] = '_test_mysite_db'
You need to create a database called _test_mysite_db using an external database tool. Then run the following command to create the corresponding tables:
./manage.py syncdb --settings=mysite.no_db_settings
If you're using South, also run the following command:
./manage.py migrate --settings=mysite.no_db_settings
OK!
You can now run unit tests blazingly fast (and safe) by:
./manage.py test myapp --settings=mysite.no_db_settings
As an alternative to modifying your settings to make NoDbTestRunner "safe", here's a modified version of NoDbTestRunner that closes the current database connection and removes the connection information from settings and the connection object. Works for me, test it in your environment before relying on it :)
class NoDbTestRunner(DjangoTestSuiteRunner):
    """ A test runner to test without database creation """

    def __init__(self, *args, **kwargs):
        # Hide/disconnect databases to prevent tests that *do*
        # require a database, and accidentally get run, from
        # altering your data.
        from django.db import connections
        from django.conf import settings
        connections.databases = settings.DATABASES = {}
        connections._connections['default'].close()
        del connections._connections['default']
        super(NoDbTestRunner, self).__init__(*args, **kwargs)

    def setup_databases(self, **kwargs):
        """ Override the database creation defined in parent class """
        pass

    def teardown_databases(self, old_config, **kwargs):
        """ Override the database teardown defined in parent class """
        pass
Another solution would be to have your test class simply inherit from unittest.TestCase instead of any of Django's test classes. The Django docs (https://docs.djangoproject.com/en/2.0/topics/testing/overview/#writing-tests) contain the following warning about this:
Using unittest.TestCase avoids the cost of running each test in a transaction and flushing the database, but if your tests interact with the database their behavior will vary based on the order that the test runner executes them. This can lead to unit tests that pass when run in isolation but fail when run in a suite.
However, if your test doesn't use the database, this warning needn't concern you and you can reap the benefits of not having to run each test case in a transaction.
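A minimal sketch (the apply_discount function under test is illustrative):

import unittest

def apply_discount(price, fraction):
    # Pure business logic -- no database involved.
    return price * (1 - fraction)

class DiscountTest(unittest.TestCase):
    # Plain unittest.TestCase: no per-test transaction, no db flush.
    def test_ten_percent_off(self):
        self.assertAlmostEqual(apply_discount(100, 0.1), 90)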
The above solutions are fine too, but the following will also reduce database creation time when there are many migrations.
During unit testing, running syncdb instead of running all the South migrations is much faster.

SOUTH_TESTS_MIGRATE = False  # disable migrations and use syncdb instead
My web host only allows creating and dropping databases from their Web GUI, so I was getting a "Got an error creating the test database: Permission denied" error when trying to run python manage.py test.
I'd hoped to use the --keepdb option to django-admin.py, but it doesn't seem to be supported as of Django 1.7.
What I ended up doing was modifying the Django code in .../django/db/backends/creation.py, specifically the _create_test_db and _destroy_test_db functions.
For _create_test_db I commented out the cursor.execute("CREATE DATABASE ... line and replaced it with pass so the try block wouldn't be empty.
For _destroy_test_db I just commented out cursor.execute("DROP DATABASE - I didn't need to replace it with anything because there was already another command in the block (time.sleep(1)).
After that my tests ran fine - though I did set up a test_ version of my regular database separately.
This isn't a great solution of course, because it will break if Django is upgraded, but I had a local copy of Django due to using virtualenv so at least I have control over when/if I upgrade to a newer version.
Another solution not mentioned: this was easy for me to implement because I already have multiple settings files (for local / staging / production) that inherit from base.py. So, unlike other people, I did not have to overwrite DATABASES['default'], since DATABASES isn't set in base.py.
SimpleTestCase still tried to connect to my test database and run migrations. Once I made a config/settings/test.py file that didn't set DATABASES at all, my unit tests ran without the database. It allowed me to use models that have foreign key and unique constraint fields. (Reverse foreign key lookups, which require a db query, fail.)
(Django 2.0.6)
PS: code snippets
PROJECT_ROOT_DIR/config/settings/test.py:
from .base import *

# other test settings

# DATABASES = {
#     'default': {
#         'ENGINE': 'django.db.backends.sqlite3',
#         'NAME': 'PROJECT_ROOT_DIR/db.sqlite3',
#     }
# }
CLI, run from PROJECT_ROOT_DIR:
./manage.py test path.to.app.test --settings config.settings.test
path/to/app/test.py:
from django.test import SimpleTestCase
from .models import *
# ^ assume models.py imports User and defines Classified and UpgradePrice

class TestCaseWorkingTest(SimpleTestCase):
    def test_case_working(self):
        self.assertTrue(True)

    def test_models_ok(self):
        obj = UpgradePrice(title='test', price=1.00)
        self.assertEqual(obj.title, 'test')

    def test_more_complex_model(self):
        user = User(username='testuser', email='hi@hey.com')
        self.assertEqual(user.username, 'testuser')

    def test_foreign_key(self):
        user = User(username='testuser', email='hi@hey.com')
        ad = Classified(user=user, headline='headline', body='body')
        self.assertEqual(ad.user.username, 'testuser')

    # fails with error:
    def test_reverse_foreign_key(self):
        user = User(username='testuser', email='hi@hey.com')
        ad = Classified(user=user, headline='headline', body='body')
        print(user.classified_set.first())
        self.assertTrue(True)  # throws exception and never gets here
When using the nose test runner (django-nose), you can do something like this:
my_project/lib/nodb_test_runner.py:
from django_nose import NoseTestSuiteRunner

class NoDbTestRunner(NoseTestSuiteRunner):
    """
    A test runner to test without database creation/deletion
    Used for integration tests
    """

    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
In your settings.py you can specify the test runner:

TEST_RUNNER = 'lib.nodb_test_runner.NoDbTestRunner'  # was 'django_nose.NoseTestSuiteRunner'
OR
I wanted it for running specific tests only, so I run it like so:
python manage.py test integration_tests/integration_* --noinput --testrunner=lib.nodb_test_runner.NoDbTestRunner
You can set databases to an empty list inside the normal TestCase from django.test.
from django.test import TestCase

class NoDbTestCase(TestCase):
    databases = []

How do you put a file in a fixture in Django?

I can easily fill a FileField or ImageField in a Django fixture with a file name, but that file doesn't exist, and when I try to test my application it fails for that reason.
How do I correctly populate a FileField or ImageField in a Django fixture so that the file itself is available too?
I'm afraid the short answer is that you can't do this using the FileField or ImageField classes; they just store a file path and have no real concept of the file's actual data. The long answer, however, is that anything is possible if you leverage the Django API for writing your own custom model fields.
At a minimum, you'll want to implement the value_to_string method to convert the data for serialization (there's an example in the django docs at the link above). Note that the examples at the URL link above also include mention of subclassing FileField and ImageField, which is helpful for your situation!
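For illustration, a minimal sketch of the value_to_string idea on a FileField subclass, serializing the file's contents as base64 (names are illustrative, and the matching deserialization side is left out):

import base64
from django.db import models

class InlineFileField(models.FileField):
    """Sketch: serialize the file's bytes, not just its path."""

    def value_to_string(self, obj):
        value = self._get_val_from_obj(obj)
        if not value:
            return ''
        value.open('rb')
        try:
            # Base64 keeps the binary payload fixture-safe.
            return base64.b64encode(value.read())
        finally:
            value.close()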
You'll also have to decide whether the data should be stored in the database or on the file system. If the former, you will have to implement your custom class as a blob field, including customization for every DB you wish to support; you'll also have to provide some support for how the data should be returned to the user out of the database when the HTML requests a .gif/.jpg/.png/whatever URL. If the latter, which is the smarter way to go IMHO, you'll have to implement methods for serializing and de-serializing binary data to the filesystem. Either way, if you implement these as subclasses of FileField and ImageField, you should still be able to use the Admin tools and other modules that expect such Django features.
If and only if you elect to use the more involved blob approach, here's a snippet of code from an old project of mine (back when I was learning Django) that handles blobs for MySQL and PostgreSQL; you'll probably be able to find a number of improvements, as I haven't touched it since :-) It does not handle serialization, though, so you'll have to add that using the method above.
import psycopg2  # needed for the PostgreSQL branch below

from django.db import models
from django.conf import settings

class BlobValueWrapper(object):
    """Wrap the blob value so that we can override the unicode method.

    After the query succeeds, Django attempts to record the last query
    executed, and at that point it attempts to force the query string
    to unicode. This does not work for binary data and generates an
    uncaught exception.
    """

    def __init__(self, val):
        self.val = val

    def __str__(self):
        return 'blobdata'

    def __unicode__(self):
        return u'blobdata'

class BlobField(models.Field):
    """A field for persisting binary data in databases that we support."""
    __metaclass__ = models.SubfieldBase

    def db_type(self):
        if settings.DATABASE_ENGINE == 'mysql':
            return 'LONGBLOB'
        elif settings.DATABASE_ENGINE == 'postgresql_psycopg2':
            return 'bytea'
        else:
            raise NotImplementedError

    def to_python(self, value):
        if settings.DATABASE_ENGINE == 'postgresql_psycopg2':
            if value is None:
                return value
            return str(value)
        else:
            return value

    def get_db_prep_save(self, value):
        if value is None:
            return None
        if settings.DATABASE_ENGINE == 'postgresql_psycopg2':
            return psycopg2.Binary(value)
        else:
            return BlobValueWrapper(value)
There's no way to "include" the files in the serialized fixture. If creating a test fixture, you just need to do it yourself; make sure that some test files actually exist in locations referenced by the FileField/ImageField values. The values of those fields are paths relative to MEDIA_ROOT: if you need to, you can set MEDIA_ROOT in your test setUp() method in a custom test_settings.py to ensure that your test files are found wherever you put them.
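For example, a setUp/tearDown pair along these lines (the fixture name and test_media path are illustrative):

import os
from django.conf import settings
from django.test import TestCase

class MyFixtureTest(TestCase):
    fixtures = ['files.json']  # fixture whose FileFields hold relative paths

    def setUp(self):
        self._old_media_root = settings.MEDIA_ROOT
        # Point MEDIA_ROOT at a directory that really contains the
        # files the fixture refers to.
        settings.MEDIA_ROOT = os.path.join(os.path.dirname(__file__), 'test_media')

    def tearDown(self):
        settings.MEDIA_ROOT = self._old_media_root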
EDIT: If you want to do it in your setUp() method, you can also monkeypatch default_storage directly:
from django.core.files.storage import default_storage

class MyTest(TestCase):
    def setUp(self):
        self._old_default_storage_location = default_storage.location
        default_storage.location = '/some/other/place'

    def tearDown(self):
        default_storage.location = self._old_default_storage_location
That seems to work. default_storage is a documented public API, so this should be reliable.