Django testing execution order and tables

If a test runs and changes the test database tables, do the tables return to their original state after each test? If not, how can I know in what order the tests are executed, so that I can predict the state of the database tables? For example:
from django.test import TestCase

class SimpleTest(TestCase):
    def test_insert(self):
        # test whether data is correctly added to the database
        ...

    def test_other_thing(self):
        # is the data inserted above still available here?
        ...

The database is rolled back at the end of every test.

For proper test isolation, when tests touch the database you need to inherit from django.test.TestCase, which handles resetting database state between one test execution and the next.
Never, ever depend on test execution order: if you need to, you are doing it wrong, because you are violating test isolation.
Remember that you don't need to use only unittest.TestCase or only django.test.TestCase: you can mix them as needed (you don't need the latter if your test does not touch the database).
Note that django.test.TestCase uses transactions to speed up database cleanup after each test, so if you need to actually test a database transaction you need to use django.test.TransactionTestCase (see https://docs.djangoproject.com/en/dev/topics/testing/#testcase).
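A minimal sketch of how these base classes can be mixed (class and test names are illustrative, not from the question):

import unittest
from django.test import TestCase, TransactionTestCase

class PureLogicTests(unittest.TestCase):
    # no database access, so plain unittest is enough (and fastest)
    def test_arithmetic(self):
        self.assertEqual(1 + 1, 2)

class OrmTests(TestCase):
    # each test runs inside a transaction that is rolled back afterwards
    def test_insert(self):
        ...

class CommitTests(TransactionTestCase):
    # use this when the code under test commits or rolls back itself;
    # cleanup happens by flushing tables instead of rolling back
    def test_commit_behaviour(self):
        ...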

Related

How to test that nothing in a django-accessed database has changed?

I'm refactoring some code and adding a transaction.atomic block. We have some code that performs mass auto-updates after the atomic block. (It can't be inside the atomic block, because it traverses some many-related tables, which requires queries, since the ORM doesn't store the distinct relations.)
According to my reading of the code, it seems impossible that any changes could occur, but having a test to prove it would be nice in case someone makes a bad code edit in the future.
So how would I go about writing a test that proves nothing in the database has changed between the start of the test and the end of the test?
The database is Postgres. We currently have it configured for two databases (the database proper and a "validation" database, which I'd created after we discovered the loading code had side effects in dry-run mode). I'm in the middle of a refactor that has fixed the side-effects issue, and I just added the atomic transaction block. So I would like to write a test like this:
def test_no_side_effects_in_dry_run_mode(self):
    orig_db_state = self.get_db_state()  # How do I write this?
    call_command(
        "load_animals_and_samples",
        animal_and_sample_table_filename=("data.xlsx"),
        dry_run=True,
    )
    post_db_state = self.get_db_state()  # How do I write this?
    self.assertNoDatabaseChange(orig_db_state, post_db_state, msg="Oops, database changed unexpectedly.")  # How do I write this?
I have previously written a test that saves the counts of all the records in every table in a dict and then compares the dicts, but that doesn't account for fields changed within a record, which is something I'd like to check. Besides that, a complementary insert/delete pair would be missed.
So is there something like a file's last-modified date that I can check to see whether anything at all in the database has changed?
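One possible way to write those helpers (a sketch, assuming it's acceptable to snapshot full table contents at this data size, and that the helpers live on the test class): capture every row of every model via Django's app registry. Unlike row counts, this catches changed fields and complementary insert/delete pairs.

from django.apps import apps

def get_db_state(self):
    # snapshot the full contents of every model, ordered by primary key
    # so that two snapshots are directly comparable
    return {
        model._meta.label: list(model.objects.order_by("pk").values())
        for model in apps.get_models()
    }

def assertNoDatabaseChange(self, before, after, msg=None):
    # any field change, insert, or delete shows up as a dict diff
    self.assertEqual(before, after, msg)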

How "--keepdb" affects setUpTestData?

From the Django documentation I see that a Django TestCase wraps its tests within two nested atomic() blocks: one for the whole class and one for each test.
Then I see that the setUpTestData method "allows the creation of initial data at the class level, once for the whole TestCase": so we're talking about the class-wide atomic block.
This means that the --keepdb flag should not affect this behaviour, because all this flag does is skip the database create/destroy step.
But I noticed that if I run the tests with the --keepdb flag, the data I create in the setUpTestData method is preserved.
I'm fine with this, but I want to understand what --keepdb exactly does, because I can't understand why this happens. I tried to look directly at the Django TestCase source code, but I don't see any check for the --keepdb option.
Why, if I run my tests with the --keepdb option, is the data I create in the setUpTestData method preserved?
OK, I was right: the --keepdb option doesn't affect the setUpTestData behaviour.
I deleted my test database, so now I'm starting from scratch.
Now I see that all the data I create in setUpTestData is rolled back after the TestCase ends, also with the --keepdb option.
I probably just messed up my test database before.
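For reference, a minimal illustration of that behaviour (assuming the stock django.contrib.auth User model):

from django.contrib.auth.models import User
from django.test import TestCase

class KeepDbDemo(TestCase):
    @classmethod
    def setUpTestData(cls):
        # runs once per class, inside the class-wide atomic block;
        # rolled back when the class finishes, even with --keepdb,
        # since --keepdb only skips creating/destroying the database itself
        cls.user = User.objects.create_user(username="demo")

    def test_user_exists(self):
        self.assertTrue(User.objects.filter(username="demo").exists())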

Is there a way to unit test the arguments passed to a Model or Q object?

In my unit tests, I understand how I can mock objects per context, to avoid interacting with any kind of persistent datastore.
I can even mock the Q object to test how many times it has been called, which is really useful.
But I'm still uncomfortable with the fact that while I'm mocking my interaction with the datastores, I'm still assuming that my code works™, i.e. that the datastore (or the ORM in this case) is receiving the data correctly, through the "proper channels" so to speak.
Case in point:
# code to test
def related_stuff():
    return Stuff.objects.filter(
        parent__user__city_name="Las Vegas"
    )

# more code...

# testing the above
from unittest import mock

@mock.patch(f"{path_to}.Stuff.objects")
def test_related_stuff(stuff_mock):
    stuff_mock.filter.return_value = stuff_mock
    related_stuff()  # the code under test must actually run
    stuff_mock.filter.assert_called_once_with(parent__user__city_name="Las Vegas")
How can I actually test that the parent__user__city_name lookup pattern is correct and won't result in an error? I'm assuming there's no way to test this without touching the datastore, but any opinions are appreciated.
You could either ensure the database connection(s) point to, e.g., an in-memory SQLite instance, or maybe write a Django database adapter that outright errors (or always returns an empty dataset) when a query is attempted.
With an adapter that always returns nothing, you can at least test that a query would work.
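A related trick, as a sketch (the test class and name are made up, and related_stuff is assumed importable from the question's module): Django resolves the lookup path when filter() is called, before any SQL is sent, so an invalid path such as parent__user__city_name raises FieldError without any datastore access at all.

from django.core.exceptions import FieldError
from django.test import SimpleTestCase

class RelatedStuffLookupTest(SimpleTestCase):
    # SimpleTestCase forbids database queries, proving no datastore is hit
    def test_lookup_path_is_valid(self):
        try:
            # filter() validates the lookup against the model graph here;
            # no query runs until the queryset is evaluated
            related_stuff()
        except FieldError as exc:
            self.fail(f"Invalid lookup in related_stuff(): {exc}")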

JPA - How to truncate tables between unit tests

I want to cleanup the database after every test case without rolling back the transaction. I have tried DBUnit's DatabaseOperation.DELETE_ALL, but it does not work if a deletion violates a foreign key constraint. I know that I can disable foreign key checks, but that would also disable the checks for the tests (which I want to prevent).
I'm using JUnit 4, JPA 2.0 (Eclipselink), and Derby's in-memory database. Any ideas?
Thanks,
Theo
The simplest way to do this is probably using the nativeQuery JPA method.

@After
public void cleanup() {
    EntityManager em = entityManagerFactory.createEntityManager();
    em.getTransaction().begin();
    em.createNativeQuery("truncate table person").executeUpdate();
    em.createNativeQuery("truncate table preferences").executeUpdate();
    em.getTransaction().commit();
}
Simple: Before each test, start a new transaction and after the test, roll it back. That will give you the same database that you had before.
Make sure the tests don't create new transactions; instead reuse the existing one.
I am a bit confused as DBUnit will reinitialize the database to a known state before every test.
They also recommend as a best practice not to cleanup or otherwise change the data after the test.
So if it is cleanup you're after, to prepare the DB for the next test, I would not bother.
Yes, in-transaction testing would make your life much easier, but if committing is your thing, then you need to implement compensating transaction(s) during cleanup (in @After). It sounds laborious, and it might be, but if properly approached you may end up with a set of helper methods (in tests) that compensate for (clean up) data accumulated during @Before and the tests themselves (using JPA or straight JDBC, whatever makes sense).
For example, if you use JPA and call create methods on entities during tests, you may utilize (using AOP if you fancy, or just helper test methods like us) a pattern across all tests to:
track the IDs of all entities that have been created during the test
accumulate them in creation order
replay entity deletes for these entities in reverse order in @After
My setup is quite similar: it's Derby (embedded) + OpenJPA 1.2.2 + DBUnit. Here's how I handle integration tests for my current task: in every @Before method I run 3 scripts:
Drop DB: an SQL script that drops all tables.
Create DB: an SQL script that recreates them.
A test-specific DBUnit XML script to populate the data.
My database has only 12 tables and the test data set is not very big, either (about 50 records). Each script takes about 500 ms to run, and I maintain them manually when tables are added or modified.
This approach is probably not recommended for testing big databases, and perhaps it cannot even be considered good practice for small ones; however, it has one important advantage over rolling back the transaction in the @After method: you can actually detect what happens at commit (like persisting detached entities or optimistic lock exceptions).
Better late than never...
I just had the same problem and came across a pretty simple solution:
set the property "...database.action" to the value "drop-and-create" in your persistence-unit config
close the entity manager and the entity manager factory after each test
persistence.xml
<persistence-unit name="Mapping4" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <class>...</class>
    <class>...</class>
    <properties>
        ...
        <property name="javax.persistence.schema-generation.database.action" value="drop-and-create" />
        ...
    </properties>
</persistence-unit>
unit-test:
...
@Before
public void setup() {
    factory = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT_NAME);
    entityManager = factory.createEntityManager();
}

@After
public void tearDown() {
    entityManager.clear();
    entityManager.close();
    factory.close();
}
...
I delete the DB file after each run:
boolean deleted = Files.deleteIfExists(Paths.get("pathToDbFile"));
A little dirty, but it works for me.
Regards
Option 1: You can disable foreign key checks before truncating tables, and enable them again after truncation. You will still have checks in tests in this way.
Option 2: The H2 database destroys the in-memory database when the last connection is closed. I guess Derby supports something similar, or you could switch to H2.
See also: I wrote code to truncate tables before each test using Hibernate in a related question: https://stackoverflow.com/a/63747005/471214

Unit testing style question: should the creation and deletion of data be in the same method?

I am writing unit tests for a PHP class that maintains users in a database. I now want to test if creating a user works, but also if deleting a user works. I see multiple possibilities to do that:
I only write one method that creates a user and deletes it afterwards
I write two methods. The first one creates the user and saves its ID. The second one deletes that user via the saved ID.
I write two methods. The first one only creates a user. The second one creates a user itself, so that there is one that can afterwards be deleted.
I have read that every test method should be independent of the others, which means the third possibility is the way to go, but that also means every method has to set up its test data by itself (e.g. if you want to test whether it's possible to add a user twice).
How would you do it? What is good unit testing style in this case?
Two different things = Two tests.
Test_DeleteUser() could be in a different test fixture as well, because it has different Setup() code: ensuring that a user already exists.
[SetUp]
public void SetUp()
{
    CreateUser("Me");
    Assert.IsTrue(User.Exists("Me"), "Setup failed!");
}

[Test]
public void Test_DeleteUser()
{
    DeleteUser("Me");
    Assert.IsFalse(User.Exists("Me"));
}
This means that if Test_CreateUser() passes and Test_DeleteUser() doesn't, you know that there is a bug in the section of the code that is responsible for deleting users.
Update: I was just giving some thought to Charlie's comments on the dependency issue, by which I mean that if creation is broken, both tests fail even though deletion may be fine. The best I could do was to move a guard check into SetUp so that setup failures show up in the Errors and Failures tab, to distinguish them from test failures. (In general, setup failures should be easy to spot by an entire test fixture showing red.)
How you do this depends on how you utilize mocks and stubs. I would go for the more granular approach, so having two different tests.
Test A:
    CreateUser("testuser")
    assertTrue(CheckUserInDatabase("testuser"))

Test B:
    LoadUserIntoDB("testuser2")
    DeleteUser("testuser2")
    assertFalse(CheckUserInDatabase("testuser2"))

TearDown:
    RemoveFromDB("testuser")
    RemoveFromDB("testuser2")

CheckUserInDatabase(string user)
    ... // access DAL and check the item is in the DB
If you utilize mocks and stubs, you don't need to access the DAL until you do your integration testing, so you won't need as much work on the asserts and on setting up the data.
Usually you should have two methods, but reality still wins over text on paper in the following case:
You need a lot of expensive setup code to create the object under test. This is a code smell and should be fixed, but sometimes you really have no choice (think of some code that aggregates data from several places: you really need all those places). In this case, I write mega tests (where a test case can have thousands of lines of code spread over many methods). It creates the database and all tables, fills them with defined data, runs the code step by step, and verifies each step.
This should be a rare case. If you need one, you must actively ignore the rule "tests should be fast". This scenario is so complex that you want to check as many things as possible. I had a case where I would dump the contents of 7 database tables to files and compare them for each of the 15 SQL updates (which gave me 105 files to compare in a single test), plus about a million asserts that would run.
The goal here is to make the test fail in such a way that you notice the source of the problem right away. It's like pouring all the constraints into code and making them fail early, so you know which line of app code to check. The main drawback is that these test cases are hell to maintain: every change of the app code means that you'll have to update many of the 105 "expected data" files.