Doctrine + ZF + PHPUnit (Doctrine 1.2)

I have ZF 1.11 integrated with Doctrine 1.2 and MySQL 5. I created some PHPUnit tests in a few files. Every test creates a database and populates it (using Zend_Db), then performs some actions through Doctrine's models, and finally drops the database (again using Zend_Db). I put them all in a directory called "tests". When I go to the "tests" directory and run the phpunit command, some of the tests return errors like "SQLSTATE[42S02]: Base table or view not found: 1146 Table 'here_db_name.here_table_name' doesn't exist" - but the table exists, I checked! The funny thing is that when I run each test separately, absolutely everything is OK. So my question is: what's going on? Sorry, I can't provide code.

This is tricky to say without any code, but here's a wild guess: since every test creates and populates the database, you may be experiencing some sort of "race condition", because each test starts by cleaning up the database and then setting it up again.

Related

How to unittest a django database migration?

We've changed our database using Django migrations (Django 1.7+).
The data that exists in the database is no longer valid.
Basically I want to test a migration by, inside a unittest, constructing the pre-migration database, adding some data, applying the migration, then confirming everything went smoothly.
How does one:
1) Hold back the new migration when loading the unittest?
I found some stuff about overriding settings.MIGRATION_MODULES but couldn't work out how to use it. When I inspect executor.loader.applied_migrations it still lists everything. The only way I could prevent the new migration was to actually remove the file; not a solution I can use.
2) Create a record in the unittest database (using the old model)?
If we can prevent the migration then this should be pretty straightforward: myModel.objects.create(...)
3) Apply the migration?
I think I can probably work this out now that I've found the test_executor: set a plan pointing to the migration file and execute it? Um, right? Got any code for that :-D
4) Confirm the old data in the database now matches the new model?
Again, I expect this should be pretty easy: just fetch the instance created before the migration and confirm it has changed in all the right ways.
So the challenge is really just working out how to prevent the unittest from applying the latest migration script and then applying it when we're ready?
Perhaps I have the wrong approach? Should I create fixtures, and just confirm that they're all good at the end? Do fixtures get loaded before the migrations are applied, or after they're all done?
By using the MigrationExecutor and picking out specific migrations with .migrate, I've been able to (maybe?) roll the database back to a specific state and then roll forward one by one. But that is raising doubts: I'm currently chasing down some SQLite fudging caused by the lack of an actual ALTER TABLE instruction. Jury still out.
I wasn't able to prevent the unittest from starting with the current database schema, but I did find it is quite easy to revert to earlier points in the migration history:
Where "0014_nulls_permitted" is a file in the migrations directory...
from django.db import connection
from django.db.migrations.executor import MigrationExecutor
executor = MigrationExecutor(connection)  # executor bound to the default database connection
executor.migrate([("workflow_engine", "0014_nulls_permitted")])
executor.loader.build_graph()
NB: running executor.loader.build_graph() between invocations of executor.migrate() seems to be a very important part of completing the migration and making things behave as one might expect.
The migrations which are currently applicable to the database can be checked with something like:
print [x[1] for x in sorted(executor.loader.applied_migrations)]
[u'0001_initial', u'0002_fix_foreignkeys', ... u'0014_nulls_permitted']
I created a model instance via the ORM then ensured the database was in the old state by running some SQL directly:
job = Job.objects.create(....)
from django.db import connection
cursor = connection.cursor()
cursor.execute('UPDATE workflow_engine_job SET next_job_state=NULL')
Great. Now I know I have a database in the old state, and can test the forwards migration. So where 0016_nulls_banished is a migration file:
executor.migrate([("workflow_engine", "0016_nulls_banished")])
executor.loader.build_graph()
Migration 0015 goes through the database converting all the NULL fields to a default value. Migration 0016 alters the schema. You can scatter some print statements around to confirm things are happening as you think they should be.
And now the test can confirm that the migration has worked. In this case by ensuring there are no nulls left in the database.
jobs = Job.objects.all()
self.assertTrue(all([j.next_job_state is not None for j in jobs]))
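Putting the pieces above together, the whole flow can live in a single test. A rough sketch (the app label and migration names come from the example above; the class name, method name, and the elided steps are placeholders):
from django.db import connection
from django.db.migrations.executor import MigrationExecutor
from django.test import TransactionTestCase

class NullsBanishedMigrationTest(TransactionTestCase):
    def test_nulls_are_converted(self):
        executor = MigrationExecutor(connection)
        # Roll back to the old schema.
        executor.migrate([("workflow_engine", "0014_nulls_permitted")])
        executor.loader.build_graph()  # rebuild state after each migrate

        # ... create rows here and force them into the old state with raw SQL ...

        # Roll forward through the migrations under test.
        executor.migrate([("workflow_engine", "0016_nulls_banished")])
        executor.loader.build_graph()

        # ... fetch the rows back and assert they were converted ...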
We have used the following code in settings_test.py to ignore migrations for the tests:
MIGRATION_MODULES = dict(
    (app.split('.')[-1], '.'.join([app, 'nonexistent_django_migrations_module']))
    for app in INSTALLED_APPS
)
The idea here is that none of the apps have a nonexistent_django_migrations_module module, so Django will simply find no migrations.
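For what it's worth, Django 1.9+ builds this in: mapping an app label to None in MIGRATION_MODULES makes Django treat the app as having no migrations, so the fake-module trick becomes unnecessary:
# Django 1.9+ only: None disables migrations for the app entirely.
MIGRATION_MODULES = {app.split('.')[-1]: None for app in INSTALLED_APPS}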

Django TestCase with fixtures causes IntegrityError due to duplicate keys

I'm having trouble moving away from django_nose.FastFixtureTestCase to django.test.TestCase (or even the more conservative django.test.TransactionTestCase). I'm using Django 1.7.11 and I'm testing against Postgres 9.2.
I have a TestCase class that loads three fixture files. The class contains two tests. If I run each test individually as a single run (manage test test_file:TestClass.test_name), they each work. If I run them together (manage test test_file:TestClass), I get
IntegrityError: Problem installing fixture '<path>/data.json': Could not load <app>.<Model>(pk=1): duplicate key value violates unique constraint "<app_model_field>_49810fc21046d2e2_uniq"
To me it looks like the db isn't actually getting flushed or rolled back between tests since it only happens when I run the tests in a single run.
I've stepped through the Django code and it looks like they are getting flushed or rolled back -- depending on whether I'm trying TestCase or TransactionTestCase.
(I'm moving away from FastFixtureTestCase because of https://github.com/django-nose/django-nose/issues/220)
What else should I be looking at? This seems like it should be a simple matter, and it's right within what django.test.TestCase and django.test.TransactionTestCase are designed for.
Edit:
The test class more or less looks like this:
class MyTest(django.test.TransactionTestCase):  # or django.test.TestCase
    fixtures = ['data1.json', 'data2.json', 'data3.json']

    def test1(self):
        return  # I simplified it to just this for now.

    def test2(self):
        return  # I simplified it to just this for now.
Update:
I've managed to reproduce this a couple of times with a single test, so I suspect something in the fixture loading code.
One of my basic assumptions was that my db was clean for every TestCase. Tracing into the django core code I found instances where an object (in one case django.contrib.auth.User) already existed.
I temporarily overrode _fixture_setup() to assert the db was clean prior to loading fixtures. The assertion failed.
I was able to narrow the problem down to code that was in a TestCase.setUpClass() instead of TestCase.setUp(), and so the object was leaking out of the test and conflicting with other TestCase fixtures.
What I don't completely understand is that I thought the db was dropped and recreated between TestCases -- but perhaps that is not correct.
Update: recent versions of Django include setUpTestData(), which should be used instead of setUpClass().
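A minimal sketch of the safer pattern (setUpTestData() is available from Django 1.8 onward; the auth User model is used here purely for illustration):
from django.contrib.auth.models import User
from django.test import TestCase

class MyTest(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Runs once per class, inside a transaction that is rolled back
        # after the class finishes, so nothing leaks into other TestCases.
        cls.user = User.objects.create(username='fixture-user')

    def test_user_exists(self):
        self.assertTrue(User.objects.filter(pk=self.user.pk).exists())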

neo4j functionality & unit testing - resetting the database

I've created an application in node.js and have Mocha tests to perform automated unit and functionality testing.
I'm now trying to test database functionality, and want the database to be reset between each test for consistency.
Solution 1
Before each test I was running:
MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n,r
and then populating the database with Cypher queries obtained via the neo4j-shell dump command. However, the problem with this is that those Cypher queries use the internal neo4j ids to create links between nodes and relationships, and because the delete query above doesn't reset the internal neo4j id counter to 0, it all goes wrong when you try to run it!
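For reference, the reset-before-each-test hook itself is tiny. A sketch using the official Neo4j Python driver for brevity (the question's stack is node.js, but the pattern is the same; DETACH DELETE needs Neo4j 2.3+, and the connection details are placeholders):
from neo4j import GraphDatabase

driver = GraphDatabase.driver('bolt://localhost:7687', auth=('neo4j', 'password'))

def reset_database():
    # Wipe every node and relationship. Note this does NOT reset the
    # internal id counter, which is exactly why id-based dump scripts break.
    with driver.session() as session:
        session.run('MATCH (n) DETACH DELETE n')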
Solution 2
I then looked at physically shutting down the neo4j server, removing the database directory and then rebooting it and populating it. This works, but it takes around 15 seconds, which is useless when I've got 200+ unit tests to run!
Solution 3
I've also looked at transactions in order to be able to roll the database back once the test had completed, but it seems that all queries have to go through the transaction endpoint. I don't think this is feasible.
Are there any other ways of doing this? I think solution 1 shows the most promise, but it'd mean going through and changing all my exported cypher queries to avoid using the internal neo4j ids.
For example I'd have to change:
create (_113:`User` {`firstname`:"John", `lastname`:"Smith", `uuid`:"f843c210-26e3-11e5-af31-297c662c0848"})
create (_114:`Instrument` {`name`:"Drums", `uuid`:"f84521a0-26e3-11e5-af31-297c662c0848"})
create _113-[:`PLAYS`]->_114
To:
create (_113:`User` {`firstname`:"John", `lastname`:"Smith", `uuid`:"f843c210-26e3-11e5-af31-297c662c0848"})
create (_114:`Instrument` {`name`:"Drums", `uuid`:"f84521a0-26e3-11e5-af31-297c662c0848"})
MATCH (a:User),(b:Instrument) WHERE a.uuid = 'f843c210-26e3-11e5-af31-297c662c0848' AND b.uuid = 'f84521a0-26e3-11e5-af31-297c662c0848' CREATE UNIQUE (a)-[r:`PLAYS`]->(b) RETURN r
Which is a real pain with a large dataset...
Any thoughts?
As FrobberOfBits kindly suggested, have a look at GraphAware RestTest, which was built precisely for this purpose.

Keep table data during Django tests

I have some tables containing a lot of data (imported from geonames using django-cities: https://github.com/coderholic/django-cities) that I want to preserve in tests, since loading them via fixtures would be very slow… how can I keep those tables and their data during tests?
I think I have to write a custom TestRunner, but I have no idea about where to start :P
It's possible; here is a way:
1) Define your own test runner (look here to see how).
2) For your custom test runner, look at the default test runner: you can copy and paste its code and just comment out this line: connection.creation.destroy_test_db(old_name, verbosity), which is responsible for destroying the test database. I also think you should put the connection.creation.create_test_db(..) line in a try/except, something like this maybe:
try:
    # Create the test database the first time.
    connection.creation.create_test_db(verbosity, autoclobber=not interactive)
except Exception:  # narrow this to whatever error is raised when the database already exists
    # Test database already created; reuse it.
    pass
3) Bind TEST_RUNNER in settings.py to your test runner.
4) Now run your tests like this: ./manage.py test
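For what it's worth, on Django 1.8+ none of this is needed: ./manage.py test --keepdb reuses the test database between runs. On older versions, steps 1) and 3) might look roughly like this (the module path and class name are made up; setup_databases would still need the try/except from step 2):
# myproject/test_runner.py
from django.test.simple import DjangoTestSuiteRunner  # the pre-1.8 default runner

class KeepDBTestRunner(DjangoTestSuiteRunner):
    def teardown_databases(self, old_config, **kwargs):
        # The default implementation calls
        # connection.creation.destroy_test_db(old_name, verbosity);
        # doing nothing here keeps the test database for the next run.
        pass

# settings.py
TEST_RUNNER = 'myproject.test_runner.KeepDBTestRunner'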

Django 1.3 testing without recreating database / loading fixtures for every run of the test

I'm using Django 1.3 and writing some Selenium tests & Django unit tests. I want to know if it's possible to run the tests without creating the databases & loading fixtures every time.
I stumbled upon this SO thread, which gives a good way to test without creating a database, but that still flushes the database fixtures & reloads them every time. I don't want even that to happen. I just want the tests to read/write the database I have set up once. I do not want it to create a database or load fixtures every time I run any test.
I'd be glad to provide any further info if required to sort this out.
Thanks in advance! :)
I was able to do this by hacking into some django code. The parts that need to be edited are:
FILE: django/db/backends/sqlite3/creation.py
Change the code as follows:
1) set confirm = 'yes' in line 55
2) comment out all occurrences of os.remove(test_database_name)
FILE: django/db/backends/creation.py
Change the code as follows:
1) comment out lines 359 to 376 (the syncdb & flush part of the create_test_db function)
2) comment out almost everything in _create_test_db (almost everything == the code that does the undesired stuff we are trying to eliminate)
3) comment out almost everything in _destroy_test_db
4) comment out almost everything in destroy_test_db
Hope that helps!
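An alternative to editing Django's source, in the same spirit: a custom runner that skips test-database setup and teardown entirely, so tests read and write whatever database settings.py points at. A sketch against the Django 1.3-era runner API (the module path and class name are made up, and note that this exposes your real data to the tests):
# myapp/test_runner.py
from django.test.simple import DjangoTestSuiteRunner

class NoDBTestSuiteRunner(DjangoTestSuiteRunner):
    def setup_databases(self, **kwargs):
        # Skip test-database creation; tests use the configured database as-is.
        return None

    def teardown_databases(self, old_config, **kwargs):
        # Nothing was created, so there is nothing to destroy.
        pass

# settings.py
TEST_RUNNER = 'myapp.test_runner.NoDBTestSuiteRunner'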