Django model query result is not accurate

Hi, I have a problem that has been bothering me for a week. I am running Selenium test scripts on my dev machine, and in my tests I call a simple script to delete accounts by their subdomain name:
for a in Account.objects.filter(domain=sub_domain):
    a.delete()
The problem is that the query to find all such accounts does not return correct results after the first time it is run (I use this query to clean up the database before each test). When I set a breakpoint at that point, I can see the query return 0 records, even though the database has one record. I also set up the MySQL query log to see the actual query Django sends to MySQL; the query looks fine, and returns the correct result if I copy and paste it into the MySQL command shell.
What am I missing? Why doesn't the Django model query give me the correct result? MySQL is using the InnoDB engine, in case that makes any difference.

Transactions. Do a COMMIT in the shell.
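The mechanics behind this one-line answer can be sketched without Django or MySQL at all. Below, two sqlite3 connections stand in for the mysql shell session and the test script's connection (the table and value are invented for the demo): a row inserted but not yet committed on one connection is invisible to the other until COMMIT.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

shell = sqlite3.connect(path)    # stands in for the mysql command shell
script = sqlite3.connect(path)   # stands in for the test script's connection

shell.execute("CREATE TABLE account (domain TEXT)")
shell.commit()

shell.execute("INSERT INTO account VALUES ('sub')")  # not committed yet

# The other connection cannot see the uncommitted row...
before = script.execute("SELECT * FROM account").fetchall()
print(before)   # []

# ...until the writer commits.
shell.commit()
after = script.execute("SELECT * FROM account").fetchall()
print(after)    # [('sub',)]
```

InnoDB adds a further wrinkle: under its default REPEATABLE READ isolation, a connection with a long-lived open transaction keeps reading its original snapshot even after other sessions commit, so ending the lingering transaction on either side is what makes the row appear.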

This is a recurring problem, so I'm doing a shameless plug with a question in which I described details of the problem:
How do I deal with this race condition in django?

Related

How can I find out which line of my Django code is creating a particular postgres query?

In my Django code, some Postgres queries are created by the Django queryset machinery. One query, which could be created by either of two different view functions on the home page, is being run at runtime, and I don't know which one is issuing it. For debugging purposes, is there any way to trace a Postgres query back to the line of code that generated it at runtime?
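A generic way to do this, independent of Django, is to capture a stack trace at the moment each query runs; the wrapper name and the stub below are invented for illustration. Newer Django versions offer connection.execute_wrapper() as the place to install such a hook, and setting the 'django.db.backends' logger to DEBUG also logs every query.

```python
import traceback

query_log = []   # (sql, filename, lineno) of whoever issued the query

def run_traced(execute, sql):
    """Run execute(sql) and record which source line asked for it."""
    caller = traceback.extract_stack()[-2]   # the frame that called us
    query_log.append((sql, caller.filename, caller.lineno))
    return execute(sql)

# Demo with a stub in place of a real cursor:
result = run_traced(lambda s: "ok", "SELECT * FROM home_widget")
# query_log[0] now records the file and line this call came from
```

Installed inside execute_wrapper(), the same three lines of bookkeeping map every Postgres query back to the view that triggered it.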

What happens when using `django-admin dumpdata` on a 'hot' or 'live' database?

When using the django-admin dumpdata command, what happens if the database is modified while the data is being exported? Is it possible for the data dump to be transactional?
It is just running SELECT queries. It will give whatever the database returns at the moment each SELECT query is run. There is no transaction and no ability to add one. I don't think you would want to lock your site up anyway. You can see the source code here: https://github.com/django/django/blob/master/django/core/management/commands/dumpdata.py.
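The practical consequence is easy to demonstrate with sqlite3 in WAL mode (schema invented for the example): autocommit SELECTs like dumpdata's each see whatever happens to be committed at that instant, so two reads of one "dump" can disagree, while a single enclosing transaction pins one consistent snapshot. Wrapping the export in one transaction is the usual route to a consistent dump when the backend supports it.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "live.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")   # lets readers hold snapshots
writer.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY)")
writer.execute("INSERT INTO entry DEFAULT VALUES")
writer.commit()

dumper = sqlite3.connect(path)

def count():
    return dumper.execute("SELECT COUNT(*) FROM entry").fetchall()[0][0]

# dumpdata-style autocommit reads: each SELECT sees the latest commit,
# so a concurrent write shows up between two reads of one "dump".
first = count()                                                      # 1
writer.execute("INSERT INTO entry DEFAULT VALUES"); writer.commit()
second = count()                                                     # 2

# One explicit transaction: both reads come from the same snapshot.
dumper.execute("BEGIN")
a = count()                                                          # 2
writer.execute("INSERT INTO entry DEFAULT VALUES"); writer.commit()
b = count()                                                          # still 2
dumper.execute("COMMIT")
```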

Sitecore fast query is not returning results from master db while Sitecore query is returning results

I'm using a Sitecore fast query to get results, but I'm not getting any results when I run the query against the master database; I tried running it in the XPath viewer as well. I do get results for the equivalent Sitecore query, and the same fast query works against the web database.
fast://#sitecore#/#content#/#Something#/#Something#/#AU#/#Website#/ancestor-or-self::*[##templateid='{463D7680-BF52-49DF-B7D5-88E97416A1FA}']/Configuration/Navigation/Footer Bottom Right/*
Instead of ancestor-or-self::, use //* :
fast://#sitecore#/#content#/#Something#/#Something#/#AU#/#Website#//*[##templateid='{463D7680-BF52-49DF-B7D5-88E97416A1FA}']/Configuration/Navigation/Footer Bottom Right/*
I still don't really understand the cause of the issue, but cleaning up all the databases solved it.

Cross-database in-memory unit testing

Objective: unit testing the DAO layer using an in-memory database (HSQL/Derby?).
My end database is Sybase, so the query uses cross-database joins.
Example query:
select table1.col1, table2.col2
from db1..table1, db2..table2
where table1.col1=table2.col2
Obviously, table1 is a table in Database db1 and table2 is a table in Database db2.
Is there any way to test this out?
If you use a db-specific feature (cross-db joins), then you have two options to prevent failure on another db:
Teach the other db to understand the query. For example, you can register new functions in H2. This probably won't work in your case, and this way you don't really test anything; you just prevent failure and/or allow your application to run on the other db.
Don't test it on the other db. Face it: even if your queries run on H2, you still have to run them on Sybase; otherwise you simply don't know whether they work.
So one way or another you need to run your tests on Sybase. If you don't plan to support another database, then what's the point of fighting to make this specific test work on H2? It will be completely irrelevant to your production code.
And answering your question, "Is there any way to test this out?":
On Sybase? Yes: just run your query and clear your database afterwards. You can use Vagrant or a dedicated db.
On H2/HSQLDB? Probably not.
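To make the first option (teaching the in-memory db to run the query) concrete, here is its flavor sketched with Python's sqlite3, where ATTACH provides a second database to join against (tables and values invented). Note that the query text still has to change from Sybase's db1..table1 form to db2.table2, which is exactly the catch: you end up testing a rewritten query, not the one production runs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS db2")   # second database

conn.execute("CREATE TABLE table1 (col1 TEXT)")     # lives in 'main'
conn.execute("CREATE TABLE db2.table2 (col2 TEXT)")
conn.execute("INSERT INTO table1 VALUES ('x')")
conn.execute("INSERT INTO db2.table2 VALUES ('x')")

# Cross-database join, spelled in sqlite's schema.table notation:
rows = conn.execute(
    "SELECT table1.col1, db2.table2.col2 "
    "FROM table1, db2.table2 "
    "WHERE table1.col1 = db2.table2.col2"
).fetchall()
# rows == [('x', 'x')]
```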

Django unit testing: problems loading fixtures for several dependent applications

I'm now writing unit tests for already existing code. I faced the following problem:
After running syncdb to create the test database, Django automatically fills several tables like django_content_type or auth_permissions.
Then, imagine I need to run a complex test, like checking user registration, that needs a lot of data tables and connections between them.
If I try to use my whole existing database to make fixtures (which would be rather convenient for me), I will receive an error like the one here. This happens because Django has already filled tables like django_content_type.
The next possible way is to use the dumpdata --exclude option for the tables already filled by syncdb. But this doesn't work well either, because if I take User and User Group objects from my db along with the User Permissions table that was automatically created by syncdb, I can get errors, because the primary keys connecting them now point to the wrong rows. This is better described here in the 'fixture hell' part, but the solution shown there doesn't look good.)
The next possible scheme I see is this:
I run my tests; Django creates the test database, runs syncdb and creates all those tables.
In my test setUp I drop this database and create a new blank one.
Also in setUp, I load a data dump from the existing database.
That's how the problem was solved:
After syncdb has created the test database, in the setUp part of the tests I use os.system to access the shell from my code. Then I just load the dump of the database which I want to use for the tests.
So it works like this: syncdb fills contenttype and some other tables with data. Then, in the setUp part of the tests, loading the SQL dump clears all the previously created data and I get a clean database.
Maybe not the best solution, but it works =)
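The shape of that setUp can be sketched with only the standard library (schema and dump text invented); in the real version, the executescript call is replaced by the os.system call that pipes the SQL dump into the mysql client.

```python
import sqlite3
import unittest

# A stand-in for the SQL dump file; DROP-then-CREATE mirrors how
# reloading a mysqldump wipes the rows syncdb created.
DUMP = """
DROP TABLE IF EXISTS auth_user;
CREATE TABLE auth_user (id INTEGER PRIMARY KEY, username TEXT);
INSERT INTO auth_user (username) VALUES ('alice');
"""

class RegistrationTest(unittest.TestCase):
    def setUp(self):
        # Fresh database, then replay the dump, so every test starts
        # from exactly the data the dump describes.
        self.db = sqlite3.connect(":memory:")
        self.db.executescript(DUMP)

    def test_user_loaded(self):
        rows = self.db.execute("SELECT username FROM auth_user").fetchall()
        self.assertEqual(rows, [("alice",)])
```

Because setUp replays the whole dump, the tests stay order-independent: no test inherits leftover state from another.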
My approach would be to first use South to make DB migrations easy (which doesn't help here at all, but is nice), and then use a module of model-creation methods.
When you run
$ manage.py test my_proj
Django, with South installed, will create the test DB and run all your migrations to give you a completely up-to-date test db.
To write tests, first create a Python module called test_model_factory.py. In it, create functions that create your objects:
def mk_user():
    return User.objects.create(...)
Then in your tests you can import your test_model_factory module and create objects for each test:
def test_something(self):
    test_user = test_model_factory.mk_user()
    self.assertTrue(test_user ...)