Get update query when DEBUG is False, without affecting code execution - django

I would like to view the queries run inside a block of code, ideally getting it as a list of strings.
Of course there are similar SO questions and answers, but they do not address my three specific requirements:
Works for queries other than SELECT.
Works when the code is not in DEBUG mode.
The code executes normally, i.e. any production code runs as production code.
What I have so far is a transaction inside a DEBUG=True override, which is instantly rolled back after the queries are collected.
from contextlib import contextmanager
from django.conf import settings
from django.db import connections
from django.db import transaction
from django.test.utils import override_settings
@contextmanager
@override_settings(DEBUG=True)
def print_query():
    class OhNo(Exception):
        pass
    queries = []
    try:
        with transaction.atomic():
            yield
            for connection in connections:
                queries.extend(connections[connection].queries)
            print(queries)
            raise OhNo
    except OhNo:
        pass
def do_special_debug_thing():
    print('haha yes')

with print_query():
    Foo.objects.update(bar=1)
    if settings.DEBUG:
        do_special_debug_thing()
There are two problems with that snippet:
That DEBUG override doesn't do anything. The context manager prints out [].
If the DEBUG override is effective, then do_special_debug_thing is called, which I do not want to happen.
So, as far as I know, there is no way to collect all queries made inside a block of code, including those that are SELECT statements, while DEBUG is off. What ways are there to achieve this?

If you only need to do this once, getting the query for each queryset separately and putting them in a list could help you.
update = Foo.objects.filter(bar=1)
query = str(update.query)
print(query)
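Be aware that str(queryset.query) on a filter() queryset shows the SELECT it would run, not the UPDATE. If you need the statements actually executed, Django also ships django.test.utils.CaptureQueriesContext, which forces the debug cursor on one connection for the duration of a block, so all statements (including UPDATEs) are recorded even with DEBUG=False and without touching the DEBUG setting your production code checks. A minimal sketch, with Foo standing in for the model from the question:

from django.db import connection
from django.test.utils import CaptureQueriesContext

# force_debug_cursor is set on the connection for the duration of
# the block, so queries are logged regardless of settings.DEBUG
with CaptureQueriesContext(connection) as ctx:
    Foo.objects.update(bar=1)

# captured_queries is a list of dicts with 'sql' and 'time' keys
print([q['sql'] for q in ctx.captured_queries])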

Related

Exception in celery task

tasks.py
@shared_task(bind=True, default_retry_delay=60, max_retries=3)
def index_city(self, pk):
    from .models import City
    try:
        city = City.objects.get(pk=pk)
    except City.ObjectDoesNotExist:
        self.retry()
    # Do stuff here with City
When I call the above task without .delay, it works without issue. When I call the task with .delay on my dev environment with celery running, it also works fine. However, in production, the following exception is thrown:
type object 'City' has no attribute 'ObjectDoesNotExist'
I added time.sleep(10) to rule out any race conditions, but this had no effect and the exception was still thrown. The object does in fact exist, so it seems like the inline import of City is not happening (the inline import is done to prevent circular import issues). Any ideas on how to fix this would be appreciated.
Stack
Django 1.8.5
Python 2.7.10
sqlite on dev and postgresql on production
You should use City.DoesNotExist or django.core.exceptions.ObjectDoesNotExist instead of City.ObjectDoesNotExist. The reason it appears to work in dev is that the expression in an except clause is only evaluated when an exception actually propagates to it; in dev the get() succeeds, so the broken attribute lookup never runs.
See https://docs.djangoproject.com/en/1.9/ref/exceptions/#objectdoesnotexist
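A corrected sketch of the task, changing only the exception class:

@shared_task(bind=True, default_retry_delay=60, max_retries=3)
def index_city(self, pk):
    from .models import City
    try:
        city = City.objects.get(pk=pk)
    except City.DoesNotExist:  # evaluated only when get() raises
        self.retry()
    # Do stuff here with city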

Is it possible to disable django haystack for some tests?

We use django-haystack as our search index. It is generally great, but during tests it adds overhead to every model object creation and save, and most tests don't need it, so I would like to avoid it. I thought I'd use override_settings to swap in a dummy that does nothing, but I've now tried both the BaseSignalProcessor and the SimpleEngine and I can still see our search index (Elasticsearch) getting hit a lot.
The two versions I have tried are:
First, using the SimpleEngine, which does no data preparation:
from django.test import TestCase
from django.test.utils import override_settings
HAYSTACK_DUMMY_INDEX = {
    'default': {
        'ENGINE': 'haystack.backends.simple_backend.SimpleEngine',
    }
}

@override_settings(HAYSTACK_CONNECTIONS=HAYSTACK_DUMMY_INDEX)
class TestAllTheThings(TestCase):
    # ...
and then using the BaseSignalProcessor, which should mean that the save signals are not hooked up at all:
from django.test import TestCase
from django.test.utils import override_settings
@override_settings(HAYSTACK_SIGNAL_PROCESSOR='haystack.signals.BaseSignalProcessor')
class TestAllTheThings(TestCase):
    # ...
I am using pytest as the test runner in case that matters.
Any idea if there is something I am missing?
The settings are only accessed once, when Haystack initializes, so overriding them after the fact won't change anything.
Instead, you can subclass the signal processor and stick in some logic to conditionally disable it like so:
from django.conf import settings
from haystack.signals import RealtimeSignalProcessor

class TogglableSignalProcessor(RealtimeSignalProcessor):
    # Subclass RealtimeSignalProcessor rather than BaseSignalProcessor:
    # the base class never connects the save/delete signals, so its
    # handlers would never fire and the toggle would have no effect.
    settings_key = 'HAYSTACK_DISABLE'

    def handle_save(self, sender, instance, **kwargs):
        if not getattr(settings, self.settings_key, False):
            super().handle_save(sender, instance, **kwargs)

    def handle_delete(self, sender, instance, **kwargs):
        if not getattr(settings, self.settings_key, False):
            super().handle_delete(sender, instance, **kwargs)
Now if you configure that as your signal processor then you can easily disable it in tests. The settings key can be set with an environment variable if you're just using manage.py test and not a custom runner. Otherwise you should know where to stick it.
import os
HAYSTACK_DISABLE = 'IS_TEST' in os.environ
And run it with
IS_TEST=1 python manage.py test
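For completeness, Haystack still needs to be pointed at the subclass; the dotted path below is an assumption about where you put the class:

# settings.py - adjust the path to wherever you defined the class
HAYSTACK_SIGNAL_PROCESSOR = 'myapp.signal_processors.TogglableSignalProcessor'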
And for the few tests where you want indexing enabled, use override_settings() like you have already tried. Note it has to target HAYSTACK_DISABLE, the key the processor actually checks, not a HAYSTACK_ENABLE flag:

class MyTest(TestCase):
    @override_settings(HAYSTACK_DISABLE=False)
    def that_one_test_where_its_needed(self):
        pass
Of course you can go even further and have conditional settings for the signal processor class so if you have a busy site then my conditional checks don't slow it down when it's running live.
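A sketch of that, again with hypothetical module paths:

# settings.py - skip the toggle checks entirely outside of tests
import os

if 'IS_TEST' in os.environ:
    HAYSTACK_SIGNAL_PROCESSOR = 'myapp.signal_processors.TogglableSignalProcessor'
else:
    # plain realtime processor in production, no per-save settings lookup
    HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor'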

Custom Django Signals Not Working

I realize there are many other questions related to custom Django signals that don't work and, believe me, I have read all of them several times with no luck getting my particular situation to work.
Here's the deal: I'm using django-rq to manage a lengthy background process that is set off by a particular http request. When that background process is done, I want it to fire off a custom Django signal so that the django-rq can be checked for any job failure/exceptions.
Two applications, both on the INSTALLED_APPS list, are at the same level. Inside of app1 there is a file:
signals.py
import django.dispatch
file_added = django.dispatch.Signal(providing_args=["issueKey", "file"])
fm_job_done = django.dispatch.Signal(providing_args=["jobId"])
and also a file jobs.py
from app1 import signals
from django.conf import settings
jobId = 23
issueKey = "fake"
fileObj = "alsoFake"
try:
    pass
finally:
    signals.file_added.send(sender=settings.SIGNAL_SENDER, issueKey=issueKey, fileName=fileObj)
    signals.fm_job_done.send(sender=settings.SIGNAL_SENDER, jobId=jobId)
then inside of app2, in views.py
from app1.signals import file_added, fm_job_done
from django.conf import settings
import logging

# Setup signal handlers
def fm_job_done_callback(sender, **kwargs):
    print "hellooooooooooooooooooooooooooooooooooo"
    logging.info("file manager job done signal fired")

def file_added_callback(sender, **kwargs):
    print "hellooooooooooooooooooooooooooooooooooo"
    logging.info("file added signal fired")

file_added.connect(file_added_callback, sender=settings.SIGNAL_SENDER, weak=False)
fm_job_done.connect(fm_job_done_callback, sender=settings.SIGNAL_SENDER, weak=False)
I don't get any feedback whatsoever, though, and am at a total loss. I know for a fact that jobs.py is executing, and therefore that the block of code that should be firing the signals is executing as well, since it is in a finally block. (No, the try is not actually empty; I just put pass there for simplicity.) Please feel free to ask for more information; I'll respond asap.
Here is the solution for Django 2.0+.
settings.py:
Change the entry for your app in INSTALLED_APPS from 'app2' to
'app2.apps.App2Config'
app2 -> apps.py:

from django.apps import AppConfig
from app1.signals import file_added, fm_job_done

class App2Config(AppConfig):
    name = 'app2'

    def ready(self):
        # ready() runs once the app registry is fully loaded,
        # so it is a safe place to connect signal handlers
        from .views import fm_job_done_callback, file_added_callback
        file_added.connect(file_added_callback)
        fm_job_done.connect(fm_job_done_callback)
Use Django's receiver decorator:

from django.dispatch import receiver
from app1.signals import file_added, fm_job_done

@receiver(fm_job_done)
def fm_job_done_callback(sender, **kwargs):
    print "helloooooooooooooo"

@receiver(file_added)
def file_added_callback(sender, **kwargs):
    print "helloooooooooooooo"
Also, I prefer to handle signals in models.py

Django global queryset

I would like to have a global variable in my Django app that stores the resulting list of objects, which I then use in several functions. I don't want to evaluate the queryset more than once, so I do it like this:
from app.models import StopWord
a = list(StopWord.objects.values_list('word', flat=True))
...
def some_func():
... (using a variable) ...
This seems ok to me but the problem is that syncdb and test command throw an exception:
django.db.utils.DatabaseError: (1146, "Table 'app_stopword' doesn't exist")
I don't know how to get rid of this; maybe I am going about it the wrong way?
It sounds like the app that StopWord is a part of is either not in your installed apps setting, or you haven't run syncdb to generate the table.
Storing a 'global value' can be simulated by using the django cache framework.
# there is more to it than this - read the documentation;
# settings.py needs to be configured.
from django.core.cache import cache

class StopWord(models.Model):
    ...  # field definitions

    @classmethod
    def get_all_words(cls):
        key = 'StopWord.AllCachedWords.Key'
        words = cache.get(key)
        if words is None:
            words = list(StopWord.objects.values_list('word', flat=True))
            cache.set(key, words)
        return words

# elsewhere
from app.models import StopWord

for word in StopWord.get_all_words():
    pass  # do something
The above also handles a sort of cache invalidation. Your settings should set a default timeout, or you can set your own timeout as a 3rd parameter to cache.set(). This ensures that while you avoid most database calls, the cache will be refreshed every so often so new stopwords can be used without restarting the application.
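For instance, passing an explicit timeout as the third parameter (the value here is arbitrary):

# refresh the cached word list every 15 minutes
cache.set(key, words, 60 * 15)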
Don't run queries at global (module) scope. Bind None to the name, then write a function that first checks whether the value is None, generates the data if so, and returns the value, as sketched below.
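A minimal sketch of that lazy pattern, with illustrative names:

from app.models import StopWord

_stop_words = None  # nothing is queried at import time

def get_stop_words():
    global _stop_words
    if _stop_words is None:
        # first call: run the query once the tables exist, then reuse
        _stop_words = list(StopWord.objects.values_list('word', flat=True))
    return _stop_words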

Django error 'Signal' object has no attribute 'save'

I've been struggling with this problem for 5 hours and I have a feeling it's a simple solution that I'm just overlooking.
I'm trying to tie in a third-party module (Django Activity Stream) that uses a series of senders and receivers to post data about user activity to a database table. Everything is set up and installed correctly, but I get a "'Signal' object has no attribute 'save'" error when I try to run it.
I suspect the problem is in my syntax somewhere. I'm just getting started with Signals, so am probably overlooking something a veteran will spot immediately.
In views.py I have:
from django.db.models.signals import pre_save
from actstream import action ##This is the third-party app
from models import Bird
def my_handler(sender, **kwargs):
    action.save(sender, verb='was saved')
    # return HttpResponse("Working Great")

pre_save.connect(my_handler, sender=Bird)

def animal(request):
    animal = Bird()
    animal.name = "Douglas"
    animal.save()
The Django Activity Stream app has this signals.py file:
from django.dispatch import Signal
action = Signal(providing_args=['actor','verb','target','description','timestamp'])
And then this models.py file:
from datetime import datetime
from operator import or_
from django.db import models
from django.db.models.query import QuerySet
from django.core.urlresolvers import reverse
from django.utils.translation import ugettext_lazy as _
from django.utils.timesince import timesince as timesince_
from django.contrib.contenttypes import generic
from django.contrib.contenttypes.models import ContentType
from django.contrib.auth.models import User
from actstream import action
...
def action_handler(verb, target=None, **kwargs):
    actor = kwargs.pop('sender')
    kwargs.pop('signal', None)
    action = Action(actor_content_type=ContentType.objects.get_for_model(actor),
                    actor_object_id=actor.pk,
                    verb=unicode(verb),
                    public=bool(kwargs.pop('public', True)),
                    description=kwargs.pop('description', None),
                    timestamp=kwargs.pop('timestamp', datetime.now()))
    if target:
        action.target_object_id = target.pk
        action.target_content_type = ContentType.objects.get_for_model(target)
    action.save()

action.connect(action_handler, dispatch_uid="actstream.models")
Your main problem is a lack of discipline in coding style: using the same name to refer to multiple things within the same module makes problems hard to spot. You will find it easier to identify bugs if you give each object a unique, meaningful name and refer to it using only that name.
The bottom line here is that the docs for that project contain bad code. This line:
action.save(sender, verb='was saved')
isn't ever going to work. The from actstream import action ultimately imports a signal from actstream.signals, and signals do not and never have had a save method. Especially not with such an odd signature of sender, verb.
At first I thought maybe the author had done something odd with subclassing Signal, but after looking at the rest of the codebase, that's just not the case. I'm not entirely sure what the intention of those docs was supposed to be, but the right thing to do in your handler will either be to save a new Action (imported from actstream.models) instance, or to do something with your model.
Sadly, the project's repository has a pretty sorry set of tests/examples, so without downloading and trying the app myself, I can't tell you what needs to happen there. You might try contacting the author or simply try finding a better-documented/better-maintained Activity Streams app.
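For what it's worth, here is a sketch of the first option, inferred from the action.connect(action_handler, ...) line in the app's models.py quoted above: sending the action signal (rather than calling a nonexistent save on it) triggers action_handler, which builds and saves the Action row itself. The verb string is illustrative.

from django.db.models.signals import pre_save
from actstream import action  # a Signal, so it has send(), not save()
from models import Bird

def my_handler(sender, instance, **kwargs):
    # sender is the Bird class; the instance becomes the actor.
    # The actor needs a pk, so post_save may be a safer hook for new objects.
    action.send(instance, verb='was saved')

pre_save.connect(my_handler, sender=Bird)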