Django doesn't save data to Postgres during test case execution - django

I have a webapp with a lot of heavy calculation logic. To stabilize it and spend less time on regression testing, I decided to create some unit test cases in Django.
My goal: don't create a new test DB (I use PostgreSQL, which is important) for every test run, and keep the DB with all its test data after the test cases finish. I need that test data for analysis and bug hunting.
I created a custom TestRunner, inherited from DiscoverRunner, with a few parameters changed:
keepdb = True
debug_mode = True
verbosity = 2
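A minimal sketch of such a runner (the module path and the TEST_RUNNER hookup are assumptions, not the asker's actual code):

# myproject/test_runner.py  (hypothetical path)
from django.test.runner import DiscoverRunner

class KeepDBRunner(DiscoverRunner):
    """DiscoverRunner that reuses one test database across runs."""

    def __init__(self, **kwargs):
        kwargs["keepdb"] = True        # don't drop/recreate the test DB
        kwargs["debug_mode"] = True
        kwargs["verbosity"] = 2
        super().__init__(**kwargs)

# settings.py
# TEST_RUNNER = "myproject.test_runner.KeepDBRunner"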
This works: I reuse one DB for all test cases.
My problem is that I don't see the test data created during the test cases in my test DB. Whether some test cases fail or all of them pass, nothing changes.
Interesting facts:
During the test cases, queries to the DB for the newly created test data return the right results.
During the test cases I create new objects, and the sequences in the DB advance as if the test data existed.
What am I doing wrong?

You might be confusing the database with the data in the database.
I would guess that the keepdb flag only keeps the database itself (schema etc.).
Some test case classes remove the data at the end of the run, e.g. TransactionTestCase:
TestCase running on a database that does not support rollback (e.g. MySQL with the MyISAM storage engine), and all instances of TransactionTestCase, will roll back at the end of the test by deleting all data from the test database.
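To illustrate the difference (Account is a hypothetical model; neither class leaves rows behind, just for different reasons):

from django.test import TestCase, TransactionTestCase
from myapp.models import Account  # hypothetical model

class RolledBackTest(TestCase):
    def test_create(self):
        # TestCase wraps each test in a transaction and rolls it back,
        # so this row never survives the test.
        Account.objects.create(domain="example.com")

class FlushedTest(TransactionTestCase):
    def test_create(self):
        # Really committed while the test runs, but the tables are
        # flushed when the test finishes, so the row is gone afterwards.
        Account.objects.create(domain="example.com")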

Related

How to query data from real Database created by django-pytest db fixture

I'm writing db-related tests for my Django project using pytest. I use the db fixture to create records in the database. The tests work fine, but how do I select these records from the real Postgres db? When I try to select these records via pgAdmin I get an empty list. Can you please clarify how the db fixture works and where it stores records?
pytest.mark.django_db wraps the whole test in a transaction (which is rolled back at the end of the test), so the data is not visible from pgAdmin.
You can try @pytest.mark.django_db(transaction=True) to enable testing transactions in your tests. However, it is slower and flushes the database after the test.
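For example (Account is a hypothetical model):

import pytest
from myapp.models import Account  # hypothetical model

@pytest.mark.django_db(transaction=True)
def test_account_is_committed():
    # With transaction=True the test is not wrapped in a rolled-back
    # transaction, so the row is really committed while the test runs
    # and is visible from another connection; the database is flushed
    # afterwards, which is what makes this mode slower.
    Account.objects.create(domain="example.com")
    assert Account.objects.filter(domain="example.com").exists()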
When running tests, Django creates a second, temporary database and runs the tests inside it, so they won't affect your production/development database and you'll always get the same, predictable outcome.
On top of that, depending on how each test is configured, Django can wrap the test inside a transaction that will be rolled back after a specific test has finished, so tests won't affect each other.
If you want to see this data in your real database after the test has finished, you have to configure the tests whose data you want to persist so that they are not wrapped in a transaction. As you're using pytest-django, there are two possible ways to handle that, depending on which "style" of test suite you're using:
If you're using TestCase or TransactionTestCase from django.test, then switch to SimpleTestCase (see the sketch below). Just be aware that along with the transaction rollback you'll lose some other database-related features.
If you're using pytest.mark.django_db, simply remove it.
As for the separate database, you can either just connect to it during the tests or add --reuse-db to the command line when running the tests, so pytest-django won't destroy it after testing has completed. This separate database will just have a test_ prefix added to your regular database name.
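A rough sketch of the first option combined with --reuse-db (Account and the databases opt-in are assumptions; the databases attribute exists since Django 2.2):

from django.test import SimpleTestCase
from myapp.models import Account  # hypothetical model

class PersistingRegistrationTest(SimpleTestCase):
    # SimpleTestCase blocks DB access by default; opting in like this
    # allows queries without wrapping them in a rolled-back transaction,
    # so rows created here survive the test run.
    databases = "__all__"

    def test_account_created(self):
        Account.objects.create(domain="example.com")

Run it with pytest --reuse-db and you can then inspect the rows in the test_<regular_db_name> database.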

Am I required to use django data migrations?

I migrated a database that was manipulated via SQL to use Django's migration module: https://docs.djangoproject.com/en/3.0/topics/migrations/.
Initially I thought of using migrations only for changes to the models, such as changed columns or new tables, while performing deletes, inserts and updates through SQL, since those are constant and writing a migration for each case would be a little impractical.
Could using the migration module without using data migrations cause me some kind of problem or inconsistency in the future?
You can think about a data migration if you make a change that also requires a manual fix to the data.
Maybe you decided to normalize something in the database, e.g. to split a name column into a first name and a last name. If you have only one instance of the application with one database and you are the only developer, then you will probably not write a data migration either. But if you want to make the change on a live production site with 24x7 traffic, or you cooperate with other developers, then you will probably prepare a data migration for their databases, or you will thoroughly test the migration on a copy of live data so that the update works correctly on the production site, without issues and with minimal downtime. If you don't write a data migration and you had no problem immediately, then it is OK and no worse than a possibly ill-conceived data migration.
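A sketch of what such a data migration could look like for the name-splitting example (the app, model and field names are made up):

# people/migrations/0002_split_name.py  (hypothetical app and fields)
from django.db import migrations

def split_name(apps, schema_editor):
    # Use the historical model, not a direct import from models.py.
    Person = apps.get_model("people", "Person")
    for person in Person.objects.all():
        first, _, last = person.name.partition(" ")
        person.first_name = first
        person.last_name = last
        person.save(update_fields=["first_name", "last_name"])

class Migration(migrations.Migration):
    dependencies = [("people", "0001_initial")]
    operations = [
        migrations.RunPython(split_name, migrations.RunPython.noop),
    ]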

Cross-database in-memory unit testing

Objective: unit testing the DAO layer using any in-memory database (HSQL/Derby?).
My end database is Sybase, so the query uses cross-database joins.
An example of the query:
select table1.col1, table2.col2
from db1..table1, db2..table2
where table1.col1=table2.col2
Obviously, table1 is a table in Database db1 and table2 is a table in Database db2.
Is there any way to test this out?
If you use a DB-specific feature (cross-DB joins), then you have two options to prevent failure on another DB:
Teach the other DB to understand the query. For example, you can register new functions in H2. This probably won't work in your case, and this way you don't test anything; you just prevent failure and/or allow your application to work on the other DB.
Don't test it on the other DB. Face it: even if your queries run on H2, you still have to run them on Sybase; otherwise you simply don't know whether they work.
So one way or another you need to run your tests on Sybase. If you don't plan to support another database, then what's the point in fighting to make this specific test work on H2? It would be completely irrelevant to your production code.
And answering your question: Is there any way to test this out?
On Sybase? Yes, just run your query and clear your database afterwards. You can use Vagrant or a dedicated DB.
On H2/HSQLDB? Probably not.

Django model query result is not accurate

Hi, I have a problem that has been bothering me for a week. I am running Selenium testing scripts on my dev machine, and in my test I call a simple script to delete accounts by their subdomain name:
for a in Account.objects.filter(domain=sub_domain):
    a.delete()
The problem is that the query to find all such accounts does not return correct results after the first time it is run (I use this query to clean up the database before each test). When I set a breakpoint at this point, I can see the query return 0 records, even though the database has one matching record. I also set up the MySQL query log to see the actual query Django sends to MySQL; the query looks good and returns the correct result if I copy and paste it into the MySQL command shell.
What am I missing? Why doesn't the Django model query give me the correct result? MySQL is using the InnoDB engine, in case that makes any difference.
Transactions. Do a COMMIT in the shell.
This is a recurring problem, so I'm doing a shameless plug with a question in which I described the details of the problem:
How do I deal with this race condition in django?
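A hedged sketch of the same fix from the Python side, assuming InnoDB's default REPEATABLE READ isolation, where a long-lived open transaction keeps reading its original snapshot (Account and the helper function are placeholders based on the question's code):

from django.db import transaction
from myapp.models import Account  # hypothetical import path

def purge_accounts(sub_domain):
    # End the connection's current transaction (closing the connection
    # works too) so the next query takes a fresh snapshot and sees rows
    # committed by other processes, e.g. the server driven by Selenium.
    transaction.commit()
    for a in Account.objects.filter(domain=sub_domain):
        a.delete()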

Django unit testing: problems loading fixtures for several dependent applications

I'm now writing unit tests for already existing code. I've run into the following problem:
After running syncdb to create the test database, Django automatically fills several tables such as django_content_type or auth_permissions.
Then imagine I need to run a complex test, like checking user registration, that needs a lot of data tables and the connections between them.
If I try to use my whole existing database for making fixtures (which would be rather convenient for me), I receive an error like the one here. This happens because Django has already filled tables like django_content_type.
The next possible way is to use the django dumpdata --exclude option for the tables already filled by syncdb. But this doesn't work well either, because if I take User and User Group objects from my db together with the User Permissions table that was automatically created by syncdb, I can get errors, because the primary keys connecting them now point to the wrong rows. This is better described here in the 'fixture hell' part, but the solution shown there doesn't look good.
The next possible scheme I see is this:
I run my tests; Django creates the test database, runs syncdb and creates all those tables.
In my test setup I drop this database and create a new blank database.
Load a data dump from the existing database, also in the test setup.
Here's how the problem was solved:
After syncdb has created the test database, in the setUp part of the tests I use os.system to access the shell from my code. Then I just load the dump of the database I want to use for the tests.
So it works like this: syncdb fills contenttype and some other tables with data. Then, in the setUp part of the tests, loading the SQL dump clears all the previously created data and I get a clean database.
Maybe not the best solution, but it works =)
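A rough sketch of that setUp (the psql invocation, database name and dump path are assumptions; adjust for your own database):

import os
from django.test import TransactionTestCase

class RegistrationTest(TransactionTestCase):
    def setUp(self):
        # Replace whatever syncdb put into the test database with a
        # known snapshot dumped from the real database.
        os.system("psql test_mydb -U myuser -f /path/to/dump.sql")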
My approach would be to first use South to make DB migrations easy (which doesn't help at all, but is nice), and then use a module of model creation methods.
When you run
$ manage.py test my_proj
Django with South installed will create the test DB and run all your migrations to give you a completely up-to-date test db.
To write tests, first create a Python module called test_model_factory.py. In it, create functions that create your objects:
from django.contrib.auth.models import User

def mk_user():
    return User.objects.create(...)
Then in your tests you can import your test_model_factory module, and create objects for each test.
def test_something(self):
    test_user = test_model_factory.mk_user()
    self.assertTrue(test_user ...)