I am new to testing in Django. I am using django-nose 1.2 in my virtual environment for TDD. I referred to the link below when creating my tests.
http://kokoko.fluxionary.net/testing-django-part-1-nose
Currently I need to test the query that I am going to write in my views, i.e. to check whether the query output is correct. I used the code below, but the test fails:
import nose.tools as nt

# Check that the view put the queryset into the template context.
nt.assert_true('obj_list' in resp.context)
# Compare the primary keys of the returned objects against the expected list.
nt.assert_equal([obj.pk for obj in resp.context['obj_list']], [1])
Any help will be much appreciated. Thanks in advance.
It looks like you don't have any objects in your database, so the test fails. When you run your tests, a new database is created, so data from your development database is not transferred into your isolated test environment.
Choose one of the available solutions:
Create a fixture file, so it will hold data for all of your tests:
https://docs.djangoproject.com/en/dev/howto/initial-data/
Create objects in a setUp method or in the test method, and then do your asserts (a sketch follows below).
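For the second option, a minimal sketch (the app, model, and URL names are placeholders for your own project):

from django.test import TestCase
from myapp.models import MyModel  # placeholder app and model


class ObjListViewTest(TestCase):
    def setUp(self):
        # Create the object the view is expected to return.
        self.obj = MyModel.objects.create()

    def test_obj_list_in_context(self):
        resp = self.client.get('/objects/')  # placeholder URL
        self.assertIn('obj_list', resp.context)
        self.assertEqual(
            [obj.pk for obj in resp.context['obj_list']],
            [self.obj.pk],
        )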
Read this first if you don't have experience with testing in Django:
https://docs.djangoproject.com/en/1.6/topics/testing/overview/
Related
My Django tests all pass when I use the built-in SQLite as my database. However, when I switch to MySQL or PostgreSQL, they fail. (All local database servers.)
The connections to the DB are live enough -- the database is created and migrations applied. But the tests aren't even hitting views.py before the test server returns a 500 (and no additional explanation beyond "Server Error", even in debug mode).
I'm using Django with Django REST Framework, and testing with REST Framework's RequestsClient class. My test fixtures are generated with Factory Boy.
Any ideas? I have a vague feeling it's to do with Factory Boy but nothing concrete. The fact that it fails with multiple databases, and successfully builds the tables, makes me suspect it's not the database itself.
Versions tested:
Django 1.11.6 and 1.11.9
Django REST Framework 3.7.1
Postgres 10.1
MySQL 5.7.21
Factory Boy 2.9.2
I had a similar problem (not sure if it's the same) going from SQLite to PostgreSQL. Are you using a setUp method to set up your initial database objects?
Example:

def setUp(self):
    user = User.objects.create_user(username='john', password='abbeyRd', email='john@thebeatles.com')
    user.save()
I found that this would work fine on SQLite, but on PostgreSQL I was getting database connection failures (it would still build the database like you said, but couldn't run the tests without a connection error).
Turns out, the fix is to run your setup method as follows:
@classmethod
def setUpTestData(cls):
    user = User.objects.create_user(username='john', password='abbeyRd', email='john@thebeatles.com')
    user.save()
I don't remember the exact reason why this works and the other one didn't, but it had something to do with the database connection: setUp(self) runs (and reconnects) before every single test, while setUpTestData(cls) runs only once for the whole test class. I might not have the details perfect; all I really know is that it works!
Hope that helps, not sure if it addresses your exact issue, but it took me a good few hours to figure out, so I hope it helps you and maybe others in the future!
It turned out to be because of the testing class I was using. Apparently, although it is not currently stated in the documentation, the Django REST Framework RequestsClient class can only be used with the DRF APILiveServerTestCase. That it worked at all with SQLite is a weird fluke.
Link: https://github.com/encode/django-rest-framework/issues/5801
Edited to add, here's the reason for the "weird fluke", per the Django documentation:
When using an in-memory SQLite database to run the tests, the same database connection will be shared by two threads in parallel: the thread in which the live server is run and the thread in which the test case is run.
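A minimal sketch of the fix, switching the test to a live server test case (the endpoint and assertion are illustrative, not from the original question):

from rest_framework.test import APILiveServerTestCase, RequestsClient


class SmokeTest(APILiveServerTestCase):
    def test_root(self):
        # RequestsClient talks to the live server over real HTTP,
        # which is why it needs a live server test case.
        client = RequestsClient()
        response = client.get(self.live_server_url + '/')  # placeholder endpoint
        assert response.status_code == 200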
I don't have the exact answer to this question, but I had a similar problem where PostgreSQL complained about not finding a foreign key with id=1. This happened because, in some tests, I had hardcoded the object under test to be the one with id=1. That worked fine on SQLite because the IDs were assigned sequentially starting from 1; that is not guaranteed on PostgreSQL. I solved it by using Model.objects.last().pk or Model.objects.first().pk, or choosing another instance in between, whenever I needed a primary key.
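A small illustration of the difference inside a test (the model and URL are placeholders):

from django.test import TestCase
from myapp.models import MyModel  # placeholder


class DetailViewTest(TestCase):
    def test_detail(self):
        obj = MyModel.objects.create()
        # Brittle: assumes the new row got id=1. That happens to hold on a
        # fresh SQLite test database, but is not guaranteed on PostgreSQL.
        # resp = self.client.get('/objects/1/')
        # Robust: use the primary key of the instance you actually created.
        resp = self.client.get('/objects/%d/' % obj.pk)  # placeholder URL
        self.assertEqual(resp.status_code, 200)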
I'm trying to get started with Symfony2 and have been trying to set up automated testing for the model layer of my application. The Symfony2 book talks about unit testing for controllers but I can't find many examples of model testing.
I would like to have a clean data set to work with before each test runs and found these articles:
http://blog.sznapka.pl/fully-isolated-tests-in-symfony2/
http://symfony.com/doc/current/cookbook/doctrine/doctrine_fixtures.html
Based on the sznapka.pl article I have a test actually running without errors, but although the test schema is created, the fixtures don't load. I can't see why, or even how to debug this.
Background: I've previously worked with CakePHP where the loading of fixtures is largely handled automatically, maybe I have the wrong approach for Symfony/Doctrine?
Yes, DoctrineFixtures are a good choice.
To test models: you don't really need to load fixtures into the database; you can create objects with the data you want (by injecting it with setters).
To test controllers: load Doctrine fixtures and use Doctrine transactions so the state of your database is the same before each test case; begin a transaction in setUp() and roll back in tearDown(). (If your controller uses transactions too, I haven't found a good solution yet.)
As for the fixtures error: if you don't get any error but your fixtures aren't loaded, maybe you have missed a naming convention. Can you show us some code?
Have a look at this solution. I don't think using transactions is the best idea, since chances are that you'll use transactions in your code. That solution suggests loading fixtures manually in each of your tests.
There is a very handy LiipFunctionalTestBundle that simplifies working with fixtures in tests. The basic idea is to create the database each time you run the tests and then load the fixtures; you can then save and delete models as you like, and the data will be the same for every test.
I'm having a hard time customizing the test database setup behavior. I would like to achieve the following:
The test suites need to use an existing database
The test suite shouldn't erase or recreate the database; instead it should load the data from a MySQL dump
Since the db is populated from a dump, no fixtures should be loaded
Upon finishing tests the database shouldn't be destroyed
I'm having a hard time getting the test suite runner to bypass creation.
Fast forward to 2016, and the ability to retain the database between tests has been built into Django. It's available in the form of the --keepdb flag to manage.py test:
New in Django 1.8. Preserves the test database between test runs. This
has the advantage of skipping both the create and destroy actions
which can greatly decrease the time to run tests, especially those in
a large test suite. If the test database does not exist, it will be
created on the first run and then preserved for each subsequent run.
Any unapplied migrations will also be applied to the test database
before running the test suite.
This pretty much fulfills all the criteria you have mentioned in your question. In fact, it even goes one step further: there is no need to import the dump before each and every run.
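For example:

./manage.py test --keepdb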
This TEST_RUNNER works in Django 1.3
from django.test.simple import DjangoTestSuiteRunner as TestRunner


class DjangoTestSuiteRunner(TestRunner):
    def setup_databases(self, **kwargs):
        # Skip test database creation; use the existing database as-is.
        pass

    def teardown_databases(self, old_config, **kwargs):
        # Skip destruction so the database survives the test run.
        pass
You'll need to provide a custom test runner.
The bits you're interested in overriding on the default django.test.runner.DiscoverRunner are the DiscoverRunner.setup_databases and DiscoverRunner.teardown_databases methods. These two methods are responsible for creating and destroying test databases and are executed only once. You'll want to provide test-specific project settings that use your existing test database by default, and override these methods so that the dump data is loaded and the test database isn't destroyed.
Depending on the size and contents of the dump, a safe bet might be to just create a subprocess that will pipe the dump to your database's SQL command-line interface; otherwise you might be able to obtain a cursor and execute queries directly.
If you're looking to get rid of fixture loading completely, you can provide a custom base test case that extends Django's default django.test.testcases.TestCase, with the TestCase._fixture_setup and TestCase._fixture_teardown methods overridden to be no-ops.
Caveat emptor: this runner will make it impossible to facilitate tests for anything but your application's sources. It's possible to customize the runner to create a specific alias for a connection to your existing database and load the dump, then provide a custom test case that overrides TestCase._database_names to point to its alias.
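A rough sketch of such a runner, assuming a MySQL dump and the default connection (the dump path and the mysql CLI invocation are placeholders you'd adapt to your setup):

import subprocess

from django.conf import settings
from django.test.runner import DiscoverRunner


class ExistingDBTestRunner(DiscoverRunner):
    def setup_databases(self, **kwargs):
        # Don't let Django create a test database; pipe the dump into
        # the existing one instead.
        db = settings.DATABASES['default']
        with open('/path/to/dump.sql') as dump:  # placeholder path
            subprocess.check_call(
                ['mysql', '--user=%s' % db['USER'],
                 '--password=%s' % db['PASSWORD'], db['NAME']],
                stdin=dump,
            )

    def teardown_databases(self, old_config, **kwargs):
        # Leave the database in place after the run.
        pass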
I want to define a model only for use in my test suite. It would be nice not to create its table in production. Is there any variable I can test against to check if I'm in test mode?
If you're running your tests using the Django testing framework (python manage.py test) then it will automatically create all of the tables for your models in a completely different database, and then populate those tables from your application fixtures, prior to running your tests. Once the tests have completed, the database will be dropped. (If your production database is named foo, the test database will be named test_foo, unless you specify differently.)
If you have models that you need only for tests, then all you have to do is to place your test models in the same directory structure as your test code, instead of intermingled with your production models. That will ensure that they are not inadvertently mixed into your production database.
If you use a recent version of Django (I can confirm versions 1.4 through 1.6) and use django.test, you can put all your test model definitions in tests/__init__.py. This way, you will have the test models available in unit tests without them polluting the production database.
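A minimal sketch of such a test-only model (the app label is a placeholder; on these older Django versions a model defined outside models.py needs an explicit app_label):

# tests/__init__.py
from django.db import models


class TestOnlyModel(models.Model):
    # Lives with the test code, so its table is created only in the
    # test database and never in the production schema.
    name = models.CharField(max_length=100)

    class Meta:
        app_label = 'myapp'  # placeholder app label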
Running Django unit tests is far too slow. Especially when I just want to run one test but the test runner wants to create the entire database and destroy the whole thing just for that one test.
In the case where I have not changed any of my models, I could save oodles of time if Django would not bother trying to create and destroy the entire database, and instead saved it for next time. Better yet, it would be great if the test runner was capable of being able to see which models have changed and only replacing those prior to running tests.
I'd prefer not to have to subclass the test runner myself, but that's what I'm going to have to do if I don't find a solution soon. Is there anything like this already in existence?
Django 1.8 added a new parameter for the manage.py test command: --keepdb
./manage.py test --keepdb
Have you tried using an in-memory SQLite database for tests? It's much faster than using a disk-based database.
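A minimal sketch of a settings override that switches tests to an in-memory SQLite database (guarding on sys.argv is one common convention, not the only one):

# settings.py
import sys

if 'test' in sys.argv:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': ':memory:',
        }
    }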
I'm using django-nose. If you set an env var REUSE_DB=1, it will not destroy the DB after running tests and will reuse that same DB for the next run. Whenever your schema changes, set REUSE_DB=0 and do one 'full' run. After that, reset it to 1 and you're good to go.
https://github.com/django-nose/django-nose
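For example:

REUSE_DB=1 ./manage.py test

and, after a schema change:

REUSE_DB=0 ./manage.py test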