Django Tests Pass with SQLite, Fail with Other DBs - django

My Django tests all pass when I use the built-in SQLite as my database. However, when I switch to the built-in MySQL and PostgreSQL backends, they fail. (All database servers are local.)
The connections to the DB are live enough: the database is created and migrations are applied. But the tests aren't even hitting views.py before the test server returns a 500, with no explanation beyond "Server Error", even in debug mode.
I'm using Django with Django REST Framework, and testing with REST Framework's RequestsClient class. My test fixtures are generated with Factory Boy.
Any ideas? I have a vague feeling it's to do with Factory Boy but nothing concrete. The fact that it fails with multiple databases, and successfully builds the tables, makes me suspect it's not the database itself.
Versions tested:
Django 1.11.6 and 1.11.9
Django REST Framework 3.7.1
Postgres 10.1
MySQL 5.7.21
Factory Boy 2.9.2

I had a similar problem (not sure if it's the same) going from SQLite to PostgreSQL. Are you using a setup method to set up your initial database objects?
Example:
def setUp(self):
    user = User.objects.create_user(username='john', password='abbeyRd', email='john@thebeatles.com')
    user.save()
I found that this would work fine on SQLite, but on PostgreSQL I was getting database connection failures (it would still build the database like you said, but couldn't run the tests without a connection error).
Turns out, the fix is to run your setup method as follows:
@classmethod
def setUpTestData(cls):
    user = User.objects.create_user(username='john', password='abbeyRd', email='john@thebeatles.com')
    user.save()
I don't remember the exact reason why this works and the other one didn't, but it had something to do with the database connection: with setUp(self) it reconnects for every single test, whereas with setUpTestData(cls) it only connects once when the class is set up. I might not have the details perfect; all I really know is that it works!
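If it helps, here's the working version as a complete, minimal test class (the class and test names are mine):

from django.contrib.auth.models import User
from django.test import TestCase

class UserSetupTests(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Runs once for the whole class; TestCase then rolls each test
        # back to this state instead of recreating the data per test.
        cls.user = User.objects.create_user(
            username='john', password='abbeyRd', email='john@thebeatles.com')

    def test_user_was_created(self):
        self.assertTrue(User.objects.filter(username='john').exists())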
Hope that helps, not sure if it addresses your exact issue, but it took me a good few hours to figure out, so I hope it helps you and maybe others in the future!

It turned out to be because of the testing class I was using. Apparently, although it is not currently stated in the documentation, the Django REST Framework RequestsClient class can only be used with DRF's APILiveServerTestCase. That it worked at all with SQLite is a weird fluke.
Link: https://github.com/encode/django-rest-framework/issues/5801
Edited to add, here's the reason for the "weird fluke", per the Django documentation:
When using an in-memory SQLite database to run the tests, the same database connection will be shared by two threads in parallel: the thread in which the live server is run and the thread in which the test case is run.
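For anyone who hits this later, a minimal sketch of the working combination (the URL and test name are placeholders):

from rest_framework.test import APILiveServerTestCase, RequestsClient

class SmokeTest(APILiveServerTestCase):
    def test_api_root(self):
        client = RequestsClient()
        # live_server_url is provided by the live-server test case and
        # points at the HTTP server the test spins up in its own thread.
        response = client.get(self.live_server_url + '/')
        self.assertEqual(response.status_code, 200)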

I don't have the exact answer to this question, but I had a similar problem where PostgreSQL complained about not finding a foreign key with id=1. This happened because in some tests I had hardcoded the object under test to be the one with id=1. This worked fine on SQLite because IDs are assigned sequentially starting from 1, but that is not guaranteed on PostgreSQL. I solved it by using Model.objects.last().pk or Model.objects.first().pk, or choosing another instance in between, whenever I need a primary key.
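To illustrate (the model, field, and URL here are made up):

from django.test import TestCase
from myapp.models import MyModel  # hypothetical app and model

class DetailViewTests(TestCase):
    def test_detail_page(self):
        obj = MyModel.objects.create(name='example')
        # Use the pk of the object you actually created rather than a
        # hard-coded pk=1, which only holds under SQLite's fresh
        # sequential numbering.
        response = self.client.get('/mymodels/%d/' % obj.pk)
        self.assertEqual(response.status_code, 200)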

Related

Cakephp Fixtures automatic truncate table not working properly

I recently switched from CakePHP 2.2.5 to 2.3, and the automatic truncation of tables no longer works in 2.3.
In 2.2.5 I was just testing the framework with some small tables that had no relations (i.e. no foreign key constraints), and the fixture import and auto-truncation worked flawlessly.
Once I confirmed that I wanted to use CakePHP and started developing my application with CakeTestCase again, it stopped working. Data remains in the test database after it is imported the first time, so on the second run fixture loading fails because rows with the same IDs already exist.
So I started to suspect it was mainly because of the foreign key constraints in my current tables.
I searched the web and noticed that quite a lot of people have the same issue, but there doesn't seem to be a real solution yet.
The only one that seems to be a solution is here:
http://cakephp.lighthouseapp.com/projects/42648/tickets/2905-tests-fixture-table-ar-not-truncate-when-droptable-false
However, I tried turning on $dropTables = true and it didn't work. I also tried modifying CakeFixtureManager.php as suggested, and that didn't work either.
Does anybody know how to fix this issue?

How to have Django test case and Selenium server use same database?

I have a Django (v1.4, using Postgresql) project which I've written a bunch of working unittests for. These use FactoryBoy to generate most of their data.
I'm now starting to write some integration tests using LiveServerTestCase with Selenium. I've just realised that my tests and the live test server use different databases, which means that data created by factories in my tests isn't available to Selenium.
I'm not sure of the best way to progress. I think I could use fixtures to supply data that would work, although this is a pain having got this far using factories instead.
Is there a way I can continue to use factories to generate data that will work for my Selenium tests? Really I'd like my tests and LiveServerTestCase to use the same database.
I found out why this happened to me, and some possible workarounds, including Ilya Baryshev's answer above.
If your test descends from Django's TestCase, and if your database supports transactions, then each test runs in its own transaction, and nobody outside (no other thread, external process, or other test) can see the objects created in the database by your test.
LiveServerTestCase uses threads, so it would suffer from this problem. So the designers made it inherit from TransactionTestCase instead of TestCase, which disables these transactions, so that changes are globally visible.
What happened to me was that I added some mixins to my test class, and one of them pulled in TestCase. This doesn't cause an error, but it silently puts TestCase ahead of LiveServerTestCase in the method resolution order, which enables per-test transactions again, causing the problem that you describe.
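A stripped-down illustration of that trap (the mixin and class names are invented):

from django.test import LiveServerTestCase, TestCase

class HelperMixin(TestCase):  # innocent-looking, but note the base class
    def log_in(self):
        self.client.login(username='john', password='abbeyRd')

class SeleniumTests(HelperMixin, LiveServerTestCase):
    # TestCase now precedes LiveServerTestCase in the MRO, so every test
    # runs inside a transaction again and the live server thread cannot
    # see objects the test creates. The fix: make HelperMixin inherit
    # from object instead of TestCase.
    pass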
Ilya's SQLite memory database workaround works because Django contains special-case code for SQLite :memory: databases that shares the same connection between the two threads, so you see your test's objects in the LiveServerThread because they're inside the same transaction. However, this comes with some caveats:
It’s important to prevent simultaneous database queries via this shared connection by the two threads, as that may sometimes randomly cause the tests to fail. So you need to ensure that the two threads don’t access the database at the same time. In particular, this means that in some cases (for example, just after clicking a link or submitting a form), you might need to check that a response is received by Selenium and that the next page is loaded before proceeding with further test execution. Do this, for example, by making Selenium wait until the HTML tag is found in the response (requires Selenium > 2.13)...
https://docs.djangoproject.com/en/1.4/topics/testing/#live-test-server
In my case, once we identified that autocommit was being turned off when the test started, and tracked down why (we had pulled in TestCase code that we shouldn't have), we were able to fix the inheritance hierarchy to avoid pulling in TestCase, and then the same database was visible from both the live server thread and the test.
This also works with Postgres databases, so it would provide a solution for velotron.
Have you tried using sqlite as your database backend for tests?
When using an in-memory SQLite database to run the tests, the same database connection will be shared by two threads in parallel: the thread in which the live server is run and the thread in which the test case is run.
from Django docs
If you're not using anything beyond regular ORM, you might benefit from test speedups as well.
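If you go that route, a common pattern is to switch backends only for the test run; a minimal sketch for the bottom of settings.py (assumes you launch tests via manage.py test and that DATABASES is defined above):

import sys

if 'test' in sys.argv:
    DATABASES['default'] = {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    }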

TestCase To Detect DatabaseError: no such column

I recently added a new field to one of my models and forgot to add the corresponding column to the table in the database. I have test cases that test adding a new instance of this model and changing an existing instance. Neither of these test cases failed. Yet when I try to change an instance on the live site I get:
DatabaseError: no such column
I have made some attempts to detect this error from within a TestCase but no such luck.
Any help is greatly appreciated.
The problem is that you (I really, really hope) don't use your production database for testing with, so all that you could detect is that the column doesn't exist on your test database, which is (presumably) recreated from scratch based on your model definitions, not that the column is missing from your production database.
A better approach to this problem is to use a migration tool like South, and automate the deployment process so that migrations are run as new code is deployed.
This will only work for small(ish) sites - you might find naively running migrations causes pain if you've got a high-traffic site. If you're in that situation, you may find David Cramer's write-up on schema changes informative.
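If you really want a belt-and-braces check against the database your settings actually point at (production, not the test database), something along these lines should work on reasonably recent Django versions with the app registry; treat it as a sketch, not a drop-in:

from django.apps import apps
from django.db import connection

def missing_columns():
    """Report model fields whose columns are absent from the live database."""
    problems = []
    with connection.cursor() as cursor:
        tables = connection.introspection.table_names(cursor)
        for model in apps.get_models():
            table = model._meta.db_table
            if table not in tables:
                problems.append((table, '<entire table>'))
                continue
            columns = {
                col.name
                for col in connection.introspection.get_table_description(cursor, table)
            }
            for field in model._meta.local_fields:
                if field.column not in columns:
                    problems.append((table, field.column))
    return problems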
Unfortunately, running the syncdb command is not enough to update the database.
You can add the fields manually with your database management system, or use a solution like South.
The following link explains this problem very well:
http://amionrails.wordpress.com/2013/11/17/django-databaseerror-no-such-column-error/

Django test to use existing database

I'm having a hard time customizing the test database setup behavior. I would like to achieve the following:
The test suites need to use an existing database
The test suite shouldn't erase or recreate the database; instead, it should load the data from a MySQL dump
Since the db is populated from a dump, no fixtures should be loaded
Upon finishing tests the database shouldn't be destroyed
I'm having a hard time getting the test suite runner to bypass database creation.
Fast forward to 2016 and the ability to retain the database between tests has been built into Django. It's available in the form of the --keepdb flag to manage.py test.
New in Django 1.8. Preserves the test database between test runs. This has the advantage of skipping both the create and destroy actions which can greatly decrease the time to run tests, especially those in a large test suite. If the test database does not exist, it will be created on the first run and then preserved for each subsequent run. Any unapplied migrations will also be applied to the test database before running the test suite.
This pretty much fulfills all the criteria you mentioned in your question. In fact, it even goes one step further: there is no need to import the dump before each and every run.
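To combine this with a database you loaded from the dump yourself, you can also pin the test database name in settings so --keepdb reuses it (the names below are examples):

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'myproject',
        'TEST': {
            'NAME': 'myproject_test',  # pre-loaded from your mysql dump
        },
    },
}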
This TEST_RUNNER works in Django 1.3:

from django.test.simple import DjangoTestSuiteRunner as TestRunner

class DjangoTestSuiteRunner(TestRunner):
    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
You'll need to provide a custom test runner.
The bits you're interested in overriding on the default django.test.runner.DiscoverRunner are the DiscoverRunner.setup_databases and DiscoverRunner.teardown_databases methods. These two methods are responsible for creating and destroying test databases and are executed only once. You'll want to provide test-specific project settings that use your existing test database by default, and override these two methods so that the dump data is loaded and the test database isn't destroyed.
Depending on the size and contents of the dump, a safe bet might be to just create a subprocess that will pipe the dump to your database's SQL command-line interface, otherwise you might be able to obtain a cursor and execute queries directly.
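A sketch of that runner (the dump path and database name are placeholders; assumes a MySQL dump and the mysql CLI on the PATH):

import subprocess

from django.test.runner import DiscoverRunner

class ExistingDBRunner(DiscoverRunner):
    def setup_databases(self, **kwargs):
        # Load the dump into the pre-existing test database instead of
        # letting Django create one.
        with open('/path/to/dump.sql') as dump:
            subprocess.check_call(['mysql', 'myproject_test'], stdin=dump)
        return []

    def teardown_databases(self, old_config, **kwargs):
        # Deliberately leave the database in place.
        pass

You'd then point TEST_RUNNER at it from your test-specific settings, e.g. TEST_RUNNER = 'myproject.runners.ExistingDBRunner'.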
If you're looking to get rid of fixture loading completely, you can provide a custom base test case that extends Django's default django.test.testcases.TestCase, with the TestCase._fixture_setup and TestCase._fixture_teardown methods overridden to be no-ops.
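For example (these are private Django hooks, so pin your Django version if you rely on them):

from django.test import TestCase

class NoFixtureTestCase(TestCase):
    def _fixture_setup(self):
        pass  # skip fixture loading (and TestCase's per-test transaction setup)

    def _fixture_teardown(self):
        pass  # skip the matching rollback/flush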
Caveat emptor: this runner will make it impossible to facilitate tests for anything but your application's sources. It's possible to customize the runner to create a specific alias for a connection to your existing database and load the dump, then provide a custom test case that overrides TestCase._database_names to point to its alias.

How do I test a Django site that uses UDFs in the database?

I have a Django project that uses a Postgres DB with a number of UDFs. The UDFs are written in plpythonu. Since plpythonu is an untrusted language, only database superusers can use it to create UDFs. This poses a serious problem in that I have no way of programmatically creating them within the test suite. Right now I see two options:
Modify django.db.backends.creation._create_test_db to create the test db from a template, which has my UDFs already loaded. This seems hacky and laborious to maintain.
Create a superuser with MD5 authentication and load the UDFs in the test suite with psycopg2. This seems insecure.
Are there less terrible ways I can do this?
Thanks.
I don't know the PG UDF model, only the MySQL and SQLite ones. A few other work-arounds might be:
Create a PG instance which you use just for testing, isolated so that potential security problems won't be a concern. Databases are cheap after all.
If the UDFs are simple (or the test data size makes them simple) then recreate them in SQLite and use that database for testing. This assumes that you don't need to test the plpython functionality as part of the project.
If the UDF functionality is much more stable than the rest of the project, then split the UDF code into its own subproject, and for the rest of the code assume the UDFs will be present. At the very least this will make it so most of the code can be automated, and only the UDF code needs manual intervention to specify the password.
I went with #1. It's not ideal but it works ok.
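For what it's worth, newer Django versions make the template approach less hacky: the test database can be cloned from a PostgreSQL template via the TEST settings dict, so the UDFs only need to be installed in the template once. A sketch (the names below are examples):

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'myproject',
        'TEST': {
            # PostgreSQL-specific: create the test db from this template,
            # which already has the plpythonu UDFs installed.
            'TEMPLATE': 'template_with_udfs',
        },
    },
}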