The tests I have written for my Django application worked perfectly during initial development, where I was using SQLite. Now that I am getting ready to deploy, I have set up a MySQL server (as that is what I'll be deploying to), but some of my tests have started failing.
What's more, the tests that are failing don't fail when I test the functionality manually.
What could be going on?
I'm not doing anything unusual: all of the views do some database shenanigans and return a response. There isn't anything timing-related (no threading or anything).
The tests all inherit from django.test.TestCase and I'm not using any fixtures.
Here is an example of a test that fails.
class BaseTest(TestCase):
    def setUp(self):
        super(BaseTest, self).setUp()
        self.userCreds = dict(username='test', password='a')
        # Create an admin user
        admin = models.User.objects.create_superuser(
            email='', username='admin', password='a')
        # Create a user and grant them a licence
        user = models.User.objects.create_user(
            email='some@address.com', first_name="Mister",
            last_name="Testy", **self.userCreds)
        profile = models.getProfileFor(user)
        node = profile.createNode(
            '12345', 'acomputer', 'auser',
            'user@email.com', '0456 987 123')
        self.node = node

class TestClientUIViews(BaseTest):
    def test_toggleActive(self):
        url = reverse('toggleActive') + '?nodeId=%s' % self.node.nodeId
        self.assertFalse(self.node.active)
        # This should fail because only authenticated users can toggle a node active
        resp = self.client.get(url)
        self.assertEqual(resp.status_code, 403)
        self.assertFalse(self.node.active)
        # Login and make sure visiting the url toggles the active state
        self.client.login(**self.userCreds)
        resp = self.client.get(url)
        self.assertEqual(resp.status_code, 200)
        self.assertTrue(self.node.active)
        resp = self.client.get(url)
        self.assertEqual(resp.status_code, 200)
        self.assertFalse(self.node.active)
And here is what the model looks like:
class Node(models.Model):
    @property
    def active(self):
        '''
        Activation state gets explicitly tracked in its own table but is
        exposed as a property for the sake of convenience
        '''
        activations = NodeActivation.objects \
            .filter(node=self) \
            .order_by('-datetime')
        try:
            return activations[0].active
        except IndexError:
            return False

    @active.setter
    def active(self, state):
        if self.active != state:
            NodeActivation.objects.create(node=self, active=state)

class NodeActivation(models.Model):
    node = models.ForeignKey("Node")
    datetime = models.DateTimeField(default=datetimeM.datetime.now)
    active = models.BooleanField(default=False)
My local MySQL is 5.5.19 (so it's using InnoDB), but I get the same failures on the deployment server, which is running 5.1.56. The tests fail regardless of the storage engine.
And as I mentioned at the beginning, if I switch back to use a SQLite database, all the tests go back to passing.
With more of the actual code now revealed, I'll offer the following hypothesis as to why this test is failing.
The NodeActivation model is incorrect. The datetime field should be:
datetime = models.DateTimeField(auto_now=True)
A default written as datetime.datetime.now() (with parentheses) is evaluated only once, when the model class is first loaded. Every NodeActivation record the setter creates would then carry the same date/time, i.e. the date/time at which the NodeActivation model was first evaluated.
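To illustrate the pitfall being described (a sketch, not the OP's exact code; note that the model as posted passes the callable without parentheses, which Django evaluates per row):

# Evaluated once, when the module is imported -- every row gets the
# same timestamp:
datetime = models.DateTimeField(default=datetimeM.datetime.now())

# Evaluated for each new row, because the callable itself is passed:
datetime = models.DateTimeField(default=datetimeM.datetime.now)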
Your getter only ever returns a single result. But when both activation records end up with the same date/time, the ordering depends on the database back end: there will be two NodeActivation records in your database, and which one is returned first is indeterminate.
By changing the active property on the Node model class to this:
#property
def active(self):
'''
Activation state gets explictly tracked in its own table but is
exposed as a property for the sake of convenience
'''
activations = NodeActivation.objects \
.filter(node=self) \
.order_by('-id')
try:
return activations[0].active
except IndexError:
return False
the problem goes away.
Note the change to the order_by call.
The records were being created so quickly that ordering by datetime wasn't deterministic, hence the erratic behaviour. The underlying reason is precision rather than speed: SQLite stores timestamps with microsecond precision, whereas MySQL's DATETIME columns (before MySQL 5.6.4) store whole seconds only, so two activations created within the same second compare as equal and their relative order is undefined. That's why the problem never showed up with SQLite as the backing database.
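A quick illustration of the precision difference (a sketch with plain datetime objects, independent of Django):

import datetime

a = datetime.datetime(2012, 5, 1, 10, 30, 15, 123456)
b = datetime.datetime(2012, 5, 1, 10, 30, 15, 654321)

# SQLite keeps the microseconds, so the two rows sort deterministically.
print(a < b)  # True

# A pre-5.6.4 MySQL DATETIME column stores whole seconds, so both rows
# are saved as '2012-05-01 10:30:15' and ORDER BY datetime may return
# either one first.
print(a.replace(microsecond=0) == b.replace(microsecond=0))  # True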
NOTE: Thanks to Austin Phillips for the tip (check out comments in his answer)
I had a similar problem (not sure if it's the same) going from SQLite to PostgreSQL, using a setUp method like yours for the initial database objects.
Example:
def setUp(self):
    user = User.objects.create_user(username='john', password='abbeyRd', email='john@thebeatles.com')
    user.save()
I found that this would work fine on SQLite, but on PostgreSQL I was getting database connection failures (it would still build the database like you said, but couldn't run the tests without a connection error).
Turns out, the fix is to run your setup method as follows:
@classmethod
def setUpTestData(cls):
    user = User.objects.create_user(username='john', password='abbeyRd', email='john@thebeatles.com')
    user.save()
I don't remember the exact reason why this works and the other one didn't, but it has to do with how often the database is touched: setUp(self) runs again (along with all of its database work) before every single test method, whereas setUpTestData(cls) runs only once for the whole test class. I might not have the details perfect; all I really know is it works!
Hope that helps, not sure if it addresses your exact issue, but it took me a good few hours to figure out, so I hope it helps you and maybe others in the future!
When I fetch a client, I want to also prefetch all related probes and probe channels; and when I fetch a probe, I also want to prefetch all related probe channels.
But it seems that when I get a client object, or view the admin, Django runs the get_queryset methods from both the client and the probe manager, so it prefetches the probe channels twice. This can be seen both in the number of SQL queries in the debug toolbar and by adding print statements in the get_queryset methods.
So I've got:
class ClientManager(models.Manager):
    def get_queryset(self, *args, **kwargs):
        return super(ClientManager, self).get_queryset(*args, **kwargs).prefetch_related('probe_set__probechannel_set')

class Client(models.Model):
    objects = ClientManager()

class ProbeManager(models.Manager):
    def get_queryset(self, *args, **kwargs):
        return super(ProbeManager, self).get_queryset(*args, **kwargs).prefetch_related('probechannel_set')

class Probe(models.Model):
    Client = models.ForeignKey('Client')
    objects = ProbeManager()

class ProbeChannel(models.Model):
    Probe = models.ForeignKey('Probe')
Is there a way to avoid this? I have read about _base_manager being used for related objects, but the _base_manager here is just models.Manager. I'm using Django 1.11 and this trendy new thing called Python 2.7.
I'm still not sure whether this is intended Django behaviour, but the prefetches are definitely firing twice. The fix I've worked out to stop the ProbeManager prefetch from firing as well was simple: avoid using that manager.
So in the Probe class I've added
objects_without_prefetch = models.Manager()
and then used a Prefetch object, Prefetch('probe_set', Probe.objects_without_prefetch.all()), instead of simply using 'probe_set', which was triggering its own prefetch.
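For reference, a sketch of what ClientManager might look like with this change (assuming the models above; Prefetch is importable from django.db.models in Django 1.11):

from django.db import models
from django.db.models import Prefetch

class ClientManager(models.Manager):
    def get_queryset(self, *args, **kwargs):
        qs = super(ClientManager, self).get_queryset(*args, **kwargs)
        # Route the inner queryset through the plain manager so that
        # ProbeManager.get_queryset (and its own prefetch) never runs.
        return qs.prefetch_related(
            Prefetch(
                'probe_set',
                queryset=Probe.objects_without_prefetch.prefetch_related('probechannel_set'),
            )
        )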
Thanks anyway those that took the time to think about this and answer!
I have a test class with two methods, and want to share a saved model instance between both methods.
My fixtures:
@pytest.fixture(scope='class')
def model_factory():
    class ModelFactory(object):
        def get(self):
            x = Model(email='test@example.org',
                      name='test')
            x.save()
            return x
    return ModelFactory()

@pytest.fixture(scope='class')
def model(model_factory):
    m = model_factory.get()
    return m
My expectation is to receive just the model fixture in (both of) my test methods and have it be the same instance, persisted in the database:
@pytest.mark.django_db
class TestModel(object):
    def test1(self, model):
        assert model.pk is not None
        Model.objects.get(pk=model.pk)  # Works, instance is in the db

    def test2(self, model):
        assert model.pk is not None  # model.pk is the same as in test1
        Model.objects.get(pk=model.pk)  # Fails:
        # *** DoesNotExist: Model matching query does not exist
I've verified using --pdb that at the end of test1, running Model.objects.all() returns the single instance I created. Meanwhile, psql shows no record:
test_db=# select * from model_table;
 id | ··· fields
(0 rows)
Running the Model.objects.all() in pdb at the end of test2 returns an empty list, which is presumably right considering that the table is empty.
Why isn't my model being persisted, while the query still returns an instance anyway?
Why isn't the instance returned by the query in the second test, if my model fixture is marked scope='class' and saved? (This was my original question until I found out saving the model didn't do anything on the database)
Using Django 1.6.1, pytest-django 2.9.1, pytest 2.8.5.
Thanks
Tests must be independent of each other. To ensure this, Django - like most frameworks - clears the db after each test. See the documentation.
By looking at the postgres log I've found that pytest-django by default does a ROLLBACK after each test to keep things clean (which makes sense, as tests shouldn't depend on state possibly modified by earlier tests).
By decorating the test class with django_db(transaction=True) I could indeed see, from psql, the data committed at the end of each test, which answers my first question.
Same as before, the test runner ensures no state is kept between tests, which is the answer to my second point.
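For reference, the decoration described above looks like this (pytest-django's django_db mark applied to the test class from the question):

@pytest.mark.django_db(transaction=True)
class TestModel(object):
    def test1(self, model):
        assert model.pk is not None

The trade-off is speed: with transaction=True, pytest-django flushes the tables after each test instead of simply rolling back a transaction.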
The scope argument is in this case a bit misleading; however, if you were to write your code like this:
@pytest.fixture(scope='class')
def model_factory(db, request):
    # body
then you would get an error saying, in essence, that the database fixture has to be implemented with 'function' scope.
I would like to add that this is currently being worked on and might be a killer feature in the future ;) github pull request
I am currently writing a wrapper Django app for the Google Calendar API. (My plan is to make it open source, if I ever finish it properly.)
The Django site's users can be added as guests to events, and this action is to be synchronized to Google Calendar. Here are the relevant pieces of code:
class Event(models.Model):
    def _convert_to_gcal_data(self):
        # removed some code here ...
        # and the interesting bit is
        for attendee in self.attendee_set.all():
            data['attendees'].append({
                'displayName': attendee.user.name,
                'email': attendee.user.email,
                'responseStatus': attendee.response_status,
            })
        return data

    def update_to_gcal(self):
        """
        Updates the remote event db from local data
        :return:
        """
        service = cal_utils.get_service(settings.GOOGLE_CAL_USER, settings.GOOGLE_CAL_CERT)
        data = self._convert_to_gcal_data()
        return service.events().update(calendarId=self.calendar.cal_id, eventId=self.ev_id, sendNotifications=False, body=data).execute()

    def register_user(self, user):
        self.attendee_set.create(user=user, response_status='accepted')
        self.update_to_gcal()
Despite the create method adding the attendee to attendee_set, the call to attendee_set.all() in _convert_to_gcal_data returns an empty queryset. I guess the create has not been committed yet.
How can I work around this behaviour of Django?
Update
Checking my tests more carefully, I've found that the described behaviour is present only if I call the register_user method from the view, not when I call it directly as I do in my tests.
The problem was in the view. I was doing a prefetch_related on attendee_set, and this caused it not to be re-evaluated after the attendee set had changed.
Removing the prefetch_related call from the view code solved the issue.
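To make the failure mode concrete, here is a sketch of the kind of view code that triggers it (the actual view wasn't posted, so the names here are hypothetical):

# Hypothetical view code that triggers the stale cache:
event = Event.objects.prefetch_related('attendee_set').get(pk=event_id)
event.register_user(request.user)
# Inside update_to_gcal(), self.attendee_set.all() is answered from the
# prefetch cache that was filled *before* register_user() inserted the
# new attendee, so the new guest never reaches Google Calendar.

prefetch_related caches the related rows on the model instance at query time, and calling create() on the related manager inserts a row in the database without invalidating that cache.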
class dbview(models.Model):
    # field definitions omitted for brevity
    class Meta:
        db_table = 'read_only_view'

def main(request):
    result = dbview.objects.all()
Caught an exception while rendering: (1054, "Unknown column 'read_only_view.id' in 'field list'")
There is no primary key I can see in the view. Is there a workaround?
Comment:
I have no control over the view I am accessing with Django. MySQL browser shows columns there but no primary key.
When you say 'I have no control over the view I am accessing with Django. MySQL browser shows columns there but no primary key.'
I assume you mean that this is a legacy table and you are not allowed to add or change columns?
If so, and there really isn't a primary key (even a string or non-integer column), then the table hasn't been set up very well and performance may well stink.
It doesn't matter to you, though. All you need is a column that is guaranteed to be unique for every row. Set primary_key=True on that field in your model and Django will be happy.
There is one other possibility that would be problematic. If there is no column that is guaranteed to be unique, then the table might be using composite primary keys. That is, it is specifying that two columns taken together provide a unique primary key. This is perfectly valid relational modelling, but unfortunately unsupported by Django. In that case you can't do much besides raw SQL, unless you can get another column added.
I have this issue all the time. I have a view that I can't or don't want to change, but I want to have a page to display composite information (maybe in the admin section). I just override the save and raise a NotImplementedError:
def save(self, **kwargs):
    raise NotImplementedError()
(although this is probably not needed in most cases, but it makes me feel a bit better)
I also set managed to False in the Meta class.
class Meta:
    managed = False
Then I just pick any field and tag it as the primary key. It doesn't matter whether it's really unique if you are just doing filters for displaying information on a page, etc.
Seems to work fine for me. Please comment if there are any problems with this technique that I'm overlooking.
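Putting those pieces together, a minimal sketch of such a read-only view model might look like this (the field name is hypothetical; pick whichever column is unique enough for your filtering):

from django.db import models

class DbView(models.Model):
    # Any column that is unique enough for your read-only use case.
    code = models.CharField(max_length=32, primary_key=True)

    class Meta:
        managed = False        # Django won't try to create or migrate the view
        db_table = 'read_only_view'

    def save(self, **kwargs):
        # Guard against accidental writes to the underlying view.
        raise NotImplementedError()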
If there really is no primary key in the view, then there is no workaround.
Django requires each model to have exactly one field with primary_key=True.
There should have been an auto-generated id field when you ran syncdb (if there is no primary key defined in your model, then Django will insert an AutoField for you).
This error means that Django is asking your database for the id field, but none exists. Can you run python manage.py dbshell and then DESCRIBE read_only_view; and post the result? This will show all of the columns that are in the database.
Alternatively, can you include the model definition you excluded? (and confirm that you haven't altered the model definition since you ran syncdb?)
I know this post is over a decade old, but I ran into this recently and came to SO looking for a good answer. I had to come up with a solution that addresses the OP's original question and additionally lets us add new objects to the model for unit-testing purposes, which was a problem I still had with all of the provided solutions.
main.py
from django.db import models

def in_unit_test_mode():
    """some code to detect if you're running unit tests with a temp SQLite DB, like..."""
    import sys
    return "test" in sys.argv

"""You wouldn't want to actually implement it with the import inside here. We have a setting in our django.conf.settings that tests to see if we're running unit tests when the project starts."""

class AbstractReadOnlyModel(models.Model):
    class Meta(object):
        abstract = True
        managed = in_unit_test_mode()

    """This is just to help you fail fast in case a new developer, or future you, doesn't realize this is a database view and not an actual table and tries to update it."""
    def save(self, *args, **kwargs):
        if not in_unit_test_mode():
            raise NotImplementedError(
                "This is a read only model. We shouldn't be writing "
                "to the {0} table.".format(self.__class__.__name__)
            )
        else:
            super(AbstractReadOnlyModel, self).save(*args, **kwargs)

class DbViewBaseModel(AbstractReadOnlyModel):
    not_actually_unique_field = models.IntegerField(primary_key=True)
    # the rest of your field definitions

    class Meta(AbstractReadOnlyModel.Meta):
        abstract = True
        db_table = 'read_only_view'

if in_unit_test_mode():
    class DbView(DbViewBaseModel):
        not_actually_unique_field = models.IntegerField()
    """This line removes the primary key property from 'not_actually_unique_field' when running unit tests, so Django will create an AutoField named 'id' on the table it creates in the temp DB it builds for running unit tests."""
else:
    class DbView(DbViewBaseModel):
        pass

class MainClass(object):
    @staticmethod
    def main_method(request):
        return DbView.objects.all()
test.py
from django.test import TestCase
from main import DbView
from main import MainClass

class TestMain(TestCase):
    @classmethod
    def setUpTestData(cls):
        cls.object_in_view = DbView.objects.create(
            # Enter fields here to create the test data you expect to be
            # returned from your method.
        )

    def testMain(self):
        # main_method doesn't use the request, so None works for the test.
        objects_from_view = MainClass.main_method(None)
        returned_ids = [obj.id for obj in objects_from_view]
        self.assertIn(self.object_in_view.id, returned_ids)
I am writing an application where I will be accessing the database from Django and from a standalone application. Both need to do session verification, and the session should be the same for both of them. Django has built-in authentication/session verification, which is what I am using; now I need to figure out how to reuse the same session in my standalone application.
My question is how can I look up a session_key for a particular user?
From the looks of it, there is nothing that ties auth_user and django_session together.
This answer is being posted five years after the original question, but this SO thread is one of the top Google results when searching for a solution to this problem (and it's still something that isn't supported out of the box with Django).
I've got an alternate solution for the use case where you're only concerned with logged-in user sessions, which uses an additional UserSession model to map users to their sessions, something like this:
from django.conf import settings
from django.db import models
from django.contrib.sessions.models import Session

class UserSession(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL)
    session = models.ForeignKey(Session)
Then you can simply save a new UserSession instance any time a user logs in:
from django.contrib.auth.signals import user_logged_in

def user_logged_in_handler(sender, request, user, **kwargs):
    UserSession.objects.get_or_create(user=user, session_id=request.session.session_key)

user_logged_in.connect(user_logged_in_handler)
And finally when you'd like to list (and potentially clear) the sessions for a particular user:
from .models import UserSession

def delete_user_sessions(user):
    user_sessions = UserSession.objects.filter(user=user)
    for user_session in user_sessions:
        user_session.session.delete()
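Usage is then a one-liner (a sketch; the username here is hypothetical):

from django.contrib.auth import get_user_model

spammer = get_user_model().objects.get(username='spammer')
delete_user_sessions(spammer)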
That's the nuts and bolts of it; if you'd like more detail, I have a blog post covering it.
I found this code snippet
from django.contrib.sessions.models import Session
from django.contrib.auth.models import User
session_key = '8cae76c505f15432b48c8292a7dd0e54'
session = Session.objects.get(session_key=session_key)
uid = session.get_decoded().get('_auth_user_id')
user = User.objects.get(pk=uid)
print user.username, user.get_full_name(), user.email
here: http://scottbarnham.com/blog/2008/12/04/get-user-from-session-key-in-django/
I have not verified it yet, but it looks pretty straightforward.
This is somewhat tricky to do, because not every session is necessarily associated with an authenticated user; Django's session framework supports anonymous sessions as well, and anyone who visits your site will have a session, regardless of whether they're logged in.
This is made trickier still by the fact that the session object itself is serialized -- since Django has no way of knowing which data exactly you want to store, it simply serializes the dictionary of session data into a string (using Python's standard "pickle" module) and stuffs that into your database.
If you have the session key (which will be sent by the user's browser as the cookie value "sessionid"), the easiest way to get at the data is simply to query the Session table for the session with that key, which returns a Session object. You can then call that object's "get_decoded()" method to get the dictionary of session data. If you're not using Django, you can look at the source code (django/contrib/sessions/models.py) to see how the session data is deserialized.
If you have the user id, however, you'll need to loop through all of the Session objects, deserializing each one and looking for one which has a key named "_auth_user_id", and for which the value of that key is the user id.
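A minimal sketch of that brute-force scan (assuming the standard django.contrib.sessions setup; the helper name is made up):

from django.contrib.sessions.models import Session

def sessions_for_user_id(user_id):
    # Decode every stored session and keep the ones belonging to user_id.
    matches = []
    for session in Session.objects.all():
        data = session.get_decoded()
        if str(data.get('_auth_user_id')) == str(user_id):
            matches.append(session)
    return matches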
Modifying the django_session table to add an explicit user_id can make life a lot easier. Assuming you do that (or something similar), here are four approaches to munging things to your liking:
Fork the django.contrib.sessions code. I know, I know, that's a horrible thing to suggest. But it's only 500 lines including all backends and minus the tests. It's pretty straightforward to hack. This is the best route only if you are going to do some serious rearranging of things.
If you don't want to fork, you could try connecting to the Session.post_save signal and munge there.
Or you could monkey-patch contrib.sessions.models.Session.save(). Just wrap the existing method (or create a new one), break out or synthesize whatever values you need, store them in your new fields, and then call super(Session, self).save().
Yet another way of doing this is to put in 2 (yes, two) middleware classes -- one before and one after SessionMiddleware in your settings.py file. This is because of the way middleware is processed. The one listed after SessionMiddleware will get, on the inbound request, a request with the session already attached to it. The one listed before can do any processing on the response and/or change/resave the session.
We used a variation on this last technique to create pseudo-sessions for search engine spiders to give them special access to material that is normally member-only. We also detect inbound links where the REFERER field is from the associated search engine and we give the user full access to that one article.
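A rough sketch of that middleware sandwich (old-style middleware classes, to match the Django versions under discussion; the class names and session keys are hypothetical):

# settings.py -- order matters:
# MIDDLEWARE_CLASSES = (
#     'myapp.middleware.AfterSessionSaveMiddleware',   # listed before SessionMiddleware
#     'django.contrib.sessions.middleware.SessionMiddleware',
#     'myapp.middleware.WithSessionMiddleware',        # listed after SessionMiddleware
# )

class WithSessionMiddleware(object):
    def process_request(self, request):
        # Listed after SessionMiddleware, so on the inbound request
        # request.session is already attached and usable here.
        request.session.setdefault('first_seen_path', request.path)

class AfterSessionSaveMiddleware(object):
    def process_response(self, request, response):
        # The response phase runs in reverse list order, so this runs
        # after SessionMiddleware has saved the session and can change
        # and explicitly resave it.
        if hasattr(request, 'session') and request.session.get('needs_resave'):
            del request.session['needs_resave']
            request.session.save()
        return response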
Update:
My answer is now quite ancient, although it still is mostly correct. See @Gavin_Ballard's much more recent answer (9/29/2014) above for yet another approach to this problem.
Peter Rowell, thanks for your response. It was a tremendous help. This is what I did to get it working. I only had to change one file in django.contrib.sessions.
In django/contrib/sessions/models.py, add a user_id column to the table (add it to the DB table manually, or drop the table and run manage.py syncdb).
class Session(models.Model):
    ...
    user_id = models.IntegerField(_('user_id'), null=True)
    ...
    def save(self, *args, **kwargs):
        user_id = self.get_decoded().get('_auth_user_id')
        if user_id is not None:
            self.user_id = user_id
        # Call the "real" save() method.
        super(Session, self).save(*args, **kwargs)
Now, in the view where you perform login (if you use Django's built-in login view, you will have to override it):
# On login, destroy all previous sessions
# This disallows multiple logins from different browsers
dbSessions = Session.objects.filter(user_id=request.user.id)
for dbSession in dbSessions:
    if dbSession.session_key != request.session.session_key:
        dbSession.delete()
This worked for me.
I ran across this problem when I wanted to kick out a spammer. It seems setting their account to "inactive" isn't enough, because they can still come in on their previous session. So - how to delete a session for a specific user, or how to deliberately expire a session for a specific user?
The answer is to use the last_login field to work out when the session was created: by default a session expires two weeks after login, so the expire_date will be roughly last_login plus two weeks, which lets you perform a useful filter on the sessions table:
from django.contrib.sessions.models import Session
from django.contrib.auth.models import User
from dateutil.relativedelta import relativedelta

baduser = User.objects.get(username="whoever")
two_weeks = relativedelta(weeks=2)
two_hours = relativedelta(hours=2)
expiry = baduser.last_login + two_weeks

sessions = Session.objects.filter(
    expire_date__gt=expiry - two_hours,
    expire_date__lt=expiry + two_hours
)
print(sessions.count())  # hopefully a manageable number

for s in sessions:
    if s.get_decoded().get('_auth_user_id') == baduser.id:
        print(s)
        s.delete()