Django signal post_save is not working while DEBUG is False

I've implemented a Django signal for a model. It works fine while DEBUG is True, but the post_save signal does not fire when DEBUG is False in production. Any idea how to fix this?
Here is my code in models.py:

from django.db.models.signals import post_save
from django.dispatch import receiver
from django.utils import timezone

@receiver(post_save, sender=PackageSubscription)
def update_balance(sender, instance, **kwargs):
    current_balance = Balance.objects.get(user=instance.user)
    last_balance = current_balance.last_balance
    get_package = Package.objects.filter(title=instance.package)
    for i in get_package:
        get_coin = i.coin
        update_current_balance = last_balance + get_coin
        if instance.payment_status is True:
            Balance.objects.filter(user=instance.user).update(
                last_balance=update_current_balance,
                updated_at=timezone.now(),
            )
FYI - I'm using Django 3.2.4.
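An aside worth checking (my note; no answer is attached to this question): this symptom is often caused by the receivers module never being imported by the production entry point, so the @receiver decorator never runs. The conventional safeguard is to import the signal handlers in AppConfig.ready(); a minimal sketch, assuming a hypothetical app named subscriptions with the receiver in subscriptions/signals.py:

# subscriptions/apps.py (hypothetical app name, for illustration only)
from django.apps import AppConfig

class SubscriptionsConfig(AppConfig):
    name = 'subscriptions'

    def ready(self):
        # Importing the module executes the @receiver decorators,
        # which is what actually connects the handlers.
        from . import signals  # noqa: F401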

Related

Why doesn't use_bulk work in django-import-export?

I use django-import-export 2.8.0 with Oracle 12c.
Line-by-line import via import_data() works without problems, but when I turn on the use_bulk=True option, it stops importing and does not throw any errors.
Why doesn't it work?
resources.py

from import_export import resources
from .models import Clients

class ClientsResources(resources.ModelResource):
    class Meta:
        model = Clients
        fields = ('id', 'name', 'surname', 'age', 'is_active')
        batch_size = 1000
        use_bulk = True
        raise_errors = True
views.py

from django.http import HttpResponseRedirect
from tablib import Dataset
from .resources import ClientsResources

def import_data(request):
    if request.method == 'POST':
        file_format = request.POST['file-format']
        new_employees = request.FILES['importData']
        clients_resource = ClientsResources()
        dataset = Dataset()
        imported_data = dataset.load(new_employees.read().decode('utf-8'), format=file_format)
        result = clients_resource.import_data(imported_data, dry_run=True, raise_errors=True)
        if not result.has_errors():
            clients_resource.import_data(imported_data, dry_run=False)
    return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
data.csv
id,name,surname,age,is_active
18,XSXQAMA,BEHKZFI,89,Y
19,DYKNLVE,ZVYDVCX,20,Y
20,GPYXUQE,BCSRUSA,73,Y
21,EFHOGJJ,MXTWVST,93,Y
22,OGRCEEQ,KJZVQEG,52,Y
--UPD--
I used django-debug-toolbar and saw very strange behavior with the import queries.
Importing through the Admin Panel doesn't work either: I see all the importing rows, and it then reports "Import finished, with 5 new and 0 updated clients.", yet I see these strange queries [screenshot].
Then I import through my own form, and the situation is the same:
use_bulk queries from django-import-export [screenshot]
And, for comparison, my hand-written create_bulk() [screenshot]
--UPD2--
I've tried to trace the import logic, and look what I found in
import_export/resources.py:
def bulk_create(self, using_transactions, dry_run, raise_errors, batch_size=None):
    """
    Creates objects by calling ``bulk_create``.
    """
    print(self.create_instances)
    try:
        if len(self.create_instances) > 0:
            if not using_transactions and dry_run:
                pass
            else:
                self._meta.model.objects.bulk_create(self.create_instances, batch_size=batch_size)
    except Exception as e:
        logger.exception(e)
        if raise_errors:
            raise e
    finally:
        self.create_instances.clear()
This print() showed an empty list.
This issue appears to be due to a bug in the 2.x versions of django-import-export. It is fixed in v3.
The bug is present when running in bulk mode (use_bulk=True): the logic in save_instance() finds that 'new' instances have pk values set, and then incorrectly treats them as updates rather than creates.
I cannot determine how this would happen. It's possible this is related to using Oracle (though I cannot see how).
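If upgrading is an option, moving to the 3.x line should therefore resolve it (my suggestion, not something the original answer spells out):

pip install --upgrade "django-import-export>=3.0"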

Issue with using Django Celery when Django signals are being used to send email

I have used the default Django admin panel as my backend. I have a BlogPost model. What I am trying to do is whenever an admin user saves a BlogPost object in the Django admin, I need to send an email to the newsletter subscribers notifying them that there is a new blog on the website.
I have to send mass emails, so I am using django-celery. Also, I am using Django signals to trigger the send-email function.
But right now I am sending the emails without Celery, and it is too slow.
class Subscribers(models.Model):
    email = models.EmailField(unique=True)
    date_subscribed = models.DateField(auto_now_add=True)

    def __str__(self):
        return self.email

    class Meta:
        verbose_name_plural = "Newsletter Subscribers"

# binding signal:
@receiver(post_save, sender=BlogPost)
def send_mails(sender, instance, created, **kwargs):
    subscribers = Subscribers.objects.all()
    if created:
        blog = BlogPost.objects.latest('date_created')
        for abc in subscribers:
            emailad = abc.email
            send_mail('New Blog Post', f"Checkout our new blog with title {blog.title}",
                      EMAIL_HOST_USER, [emailad],
                      fail_silently=False)
    else:
        return
Using the Celery documentation I have written the following files.
My celery.py:
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'travel_crm.settings')
app = Celery('travel_crm')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
My __init__.py file:
from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ('celery_app',)
Tasks file from the docs:

from time import sleep
import logging

from celery import shared_task

logger = logging.getLogger(__name__)

@shared_task
def my_first_task(duration):
    subject = 'Celery'
    message = 'My task done successfully'
    receivers = ['receiver_mail@gmail.com']
    is_task_completed = False
    error = ''
    try:
        sleep(duration)
        is_task_completed = True
    except Exception as err:
        error = str(err)
        logger.error(error)
    if is_task_completed:
        send_mail_to(subject, message, receivers)
    else:
        send_mail_to(subject, error, receivers)
    return 'first_task_done'
This task doesn't work on its own because I am using a Django signal to trigger the send-email function. How do I employ this in tasks.py?
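A minimal sketch of the wiring (my illustration, using the question's own models; the answer below shows a fuller version): the receiver only enqueues a task, and the task does the slow loop inside the worker. Passing the title rather than the model instance keeps the task arguments easily serializable.

# tasks.py -- sketch under the assumptions above
from celery import shared_task
from django.conf import settings
from django.core.mail import send_mail

@shared_task
def send_new_blog_mails(blog_title):
    from .models import Subscribers  # local import to avoid app-loading issues
    for subscriber in Subscribers.objects.all():
        send_mail('New Blog Post',
                  f"Checkout our new blog with title {blog_title}",
                  settings.EMAIL_HOST_USER, [subscriber.email],
                  fail_silently=False)

# signals.py -- the receiver now returns immediately
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=BlogPost)
def send_mails(sender, instance, created, **kwargs):
    if created:
        send_new_blog_mails.delay(instance.title)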
I think I understand your question ... I was recently faced with a similar challenge, which included the complexity of a multi-tenant [schema] database [that proved to be an issue with Redis]. I also tried django-celery, but it depends on a much older version of Celery. In addition, I wanted to send mass mail initiated by a model post_save signal ... using EmailMultiAlternatives with 'bcc' and 'reply-to' features.
So now I am using the latest [as of this post] Django, the latest Celery with Redis ... running on macOS localhost with Poetry as virtual env & package manager. The following worked for me:
Celery: I spent several hours searching for tutorials and advice ... among others, this one added value for me: Celery w Django. Good practice to dig deeper into Celery anyway if you have not done so already.
Redis: This will depend on your OS and on whether you are developing locally or remotely. The Redis website will guide you through setup. I also tried RabbitMQ but found it [personally] more complex to set up.
The code fragments: There are four ... myapp/signals.py, myapp/tasks.py, myproj/celery.py, myproj/settings.py. Disclaimer: I'm a hobby programmer ... more experienced engineers may well improve on my code ... I've done some minor testing and all seems to work.
# myapp/signals.py
@receiver(post_save, sender=MyModel)
def post_save_handler(sender, instance, **kwargs):
    if instance.some_field == True:
        recipient_list = list(get_user_model().objects.filter(...))  # your filter conditions
        from_email = SomeModel.objects.first().site_email
        to_email = SomeModel.objects.first().admin_email
        # call the async Celery task
        task_send_mail.delay(instance.some_field, instance.someother_field, from_email, to_email, recipient_list)
# myapp/tasks.py
@shared_task(name='task_sendmail')
def task_send_mail(some_field, someother_field, from_email, to_email, recipient_list):
    template_name = 'emails/sometemplate.html'
    html_message = render_to_string(template_name, {'body': some_field})  # variables passed into the email template
    plain_message = strip_tags(html_message)
    subject = f'Do Not Reply : {someother_field}'
    connection = get_connection()
    connection.open()
    message = EmailMultiAlternatives(
        subject,
        plain_message,
        from_email,
        [to_email],
        bcc=recipient_list,
        reply_to=[to_email],
    )
    message.attach_alternative(html_message, "text/html")
    try:
        message.send()
        connection.close()
    except SMTPException as e:
        print('There was an error sending email: ', e)
        connection.close()
# myproj/celery.py
import os
from celery import Celery

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproj.settings')
app = Celery('myproj')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django apps.
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
# myproj/settings.py
...
## Celery Configuration Options
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_TIMEZONE = "Africa/SomeCity"
CELERY_TASK_TRACK_STARTED = True
CELERY_TASK_TIME_LIMIT = 30 * 60
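With those pieces in place, the worker runs as a separate process alongside the Django server (standard Celery CLI; not part of the original answer):

celery -A myproj worker -l info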

Why are integration tests failing on updates to model objects when the function is run on Django q-cluster?

I am running some django integration tests of code that passes some functions to Django-Q for processing. This code essentially updates some fields on a model object.
The app uses signals.py to listen for a post_save change of the status field of the Foo object. Assuming it's not a newly created object and it has the correct status, the Foo.prepare_foo() function is called. This is handled by services.py, which hands it off to the q-cluster. Assuming this executes without error, hooks.py changes the status field of Foo to published, or keeps it at prepare if it fails. If it passes, it also sets prepared to True. (I know this sounds convoluted, with overlapping variables; part of the desire to get tests running is to be able to refactor.)
The code runs correctly in all environments, but when I try to run tests to assert that the fields have been updated, they fail.
(If I tweak the code to bypass the cluster and have the functions run in-memory then the tests pass.)
Given this, I'm assuming the issue lies in how I've written the tests, but I cannot diagnose the issue.
tests_prepare_foo.py

import time
from django.test import TestCase, override_settings, TransactionTestCase
from app.models import Foo

class CreateCourseTest(TransactionTestCase):
    reset_sequences = True

    @classmethod
    def setUp(cls):
        cls.foo = Foo(status='draft',
                      prepared=False,
                      )
        cls.foo.save()

    def test_foo_prepared(self):
        self.foo.status = 'prepare'
        self.foo.save()
        time.sleep(15)  # to allow the cluster to finish processing the request
        self.assertEquals(self.foo.prepared, True)
models.py

import uuid
from django.db import models
from model_utils.fields import StatusField
from model_utils import Choices

class Foo(models.Model):
    ref = models.UUIDField(default=uuid.uuid4,
                           editable=False,
                           )
    STATUS = Choices('draft', 'prepare', 'published', 'archived')
    status = StatusField(null=True,
                         blank=True,
                         )
    prepared = models.BooleanField(default=False)

    def prepare_foo(self):
        """
        ...
        do stuff
        """
signals.py

from django.dispatch import receiver
from django.db.models.signals import post_save
from django_q.tasks import async_task
from app.models import Foo

@receiver(post_save, sender=Foo)
def make_foo(sender, instance, **kwargs):
    if not kwargs.get('created', False) and instance.status == 'prepare' and not instance.prepared:
        async_task('app.services.prepare_foo',
                   instance,
                   hook='app.hooks.check_foo_prepared',
                   )
services.py

def prepare_foo(foo):
    foo.prepare_foo()
hooks.py

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def check_foo_prepared(task):
    foo = task.args[0]
    if task.success:
        logger.info("Foo Prepared: Successful")
        foo.status = 'published'
        foo.prepared = True
        foo.save()
        logger.info("Foo Status: %s", foo.status)
        logger.info("Foo Prepared: %s", foo.prepared)
    else:
        logger.info("Foo Prepared: Unsuccessful")
        foo.status = 'draft'
        foo.save()
Finally, here are the logs when the test is run with the cluster on:
q-cluster server
INFO:app.hooks:Foo Prepared: Successful
INFO:app.hooks: Foo Status: published
INFO:app.hooks: Foo Prepared: True
django server
h:m:s [Q] INFO Enqueued 1
F
======
FAIL
self.assertEquals(self.foo.prepared, True)
AssertionError: False != True
I think I'm either missing something obvious in my tests, or something really subtle, and I can't work out which. I've tried setting the cluster to run synchronously (sync=True), and reloading Foo in my test just before the assertion:
self.foo.save()
time.sleep(15)
test_foo = Foo.objects.get(pk=1)
self.assertEquals(test_foo.prepared, True)
But this also fails with:
self.assertEquals(test_foo.prepared, True)
AssertionError: False != True
This leads me to believe that either the cluster is not updating the object under test (unlikely), or the assertion is being checked before the cluster has updated the object (more likely).
This is the first time I've written tests that require hand-offs to a cluster, so any pointers, suggestions gratefully received!
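No answer is attached to this question, but two observations may help (my notes, not from the original thread). First, Django-Q has a documented synchronous mode for exactly this situation: setting 'sync': True in the Q_CLUSTER settings dict makes async_task() run tasks (and their hooks) inline, removing the timing race. Second, the hook saves a different Python object than the one held by the test, so the test instance must be reloaded from the database before asserting. A sketch, assuming settings can be switched for the test run:

# settings used for tests -- Django-Q's synchronous mode
Q_CLUSTER = {
    'name': 'myproject',  # hypothetical cluster name
    'sync': True,         # run async_task() calls inline
}

# in the test, reload the row the hook updated before asserting
def test_foo_prepared(self):
    self.foo.status = 'prepare'
    self.foo.save()                # fires the post_save receiver
    self.foo.refresh_from_db()     # pull the hook's writes into this instance
    self.assertTrue(self.foo.prepared)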

Enable integrity checking with SQLite in Django

In my Django project, I use a MySQL db for production and SQLite for tests.
The problem is that some of my code relies on model integrity checking. It works well with MySQL, but integrity errors are not thrown when the same code is executed in tests.
I know that foreign key checking must be activated in SQLite:
PRAGMA foreign_keys = 1;
However, I don't know where the best place is to do this activation (same question here).
Moreover, the following code won't work:

def test_method(self):
    from django.db import connection
    cursor = connection.cursor()
    cursor.execute('PRAGMA foreign_keys = ON')
    c = cursor.execute('PRAGMA foreign_keys')
    print c.fetchone()
>>> (0,)
Any ideas?
So, I finally found the correct answer. All I had to do was add this code to the __init__.py file of one of my installed apps:
from django.db.backends.signals import connection_created

def activate_foreign_keys(sender, connection, **kwargs):
    """Enable integrity constraint with sqlite."""
    if connection.vendor == 'sqlite':
        cursor = connection.cursor()
        cursor.execute('PRAGMA foreign_keys = ON;')

connection_created.connect(activate_foreign_keys)
You could use Django signals, listening to post_syncdb:

from django.db.models.signals import post_syncdb

def set_pragma_on(sender, **kwargs):
    "your code here"

post_syncdb.connect(set_pragma_on)

This ensures that whenever syncdb is run (and syncdb is run when creating the test database), your SQLite database has the pragma set to on. You should check which database you are using inside set_pragma_on.
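A note for anyone reading this on a current stack (my addition, not part of the original answers): post_syncdb was deprecated in Django 1.7 and removed in 1.9 in favor of post_migrate, and since Django 2.2 the SQLite backend enforces foreign key constraints on its own. The equivalent modern hook would look roughly like this:

from django.db.models.signals import post_migrate

def set_pragma_on(sender, **kwargs):
    # enable the pragma here, as in the connection_created answer above
    ...

post_migrate.connect(set_pragma_on)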

Django signal m2m_changed not triggered

I recently started to use signals in my Django project (v. 1.3), and they all work fine, except that I just can't figure out why the m2m_changed signal never gets triggered on my model. The Section instance is edited by adding/deleting PageChild inline instances on a Django admin form.
I tried to register the callback function both ways described in the documentation, but I don't get any result.
Excerpt from my models.py:

from django.db import models
from django.db.models import Q
from django.db.models.signals import m2m_changed
from django.dispatch import receiver

class Section(models.Model):
    name = models.CharField(unique=True, max_length=100)
    pages = models.ManyToManyField(Page, through='PageChild')

class PageChild(models.Model):
    section = models.ForeignKey(Section)
    page = models.ForeignKey(Page, limit_choices_to=Q(is_template=False, is_background=False))

@receiver(m2m_changed, sender=Section.pages.through)
def m2m(sender, **kwargs):
    print "m2m changed!"

m2m_changed.connect(m2m, sender=Section.pages.through, dispatch_uid='foo', weak=False)
Am I missing something obvious?
This is an open bug: https://code.djangoproject.com/ticket/16073
I wasted hours on it this week.
You are connecting it twice: once with m2m_changed.connect, and a second time with the receiver decorator.
Not sure if it will help, but the following is working for me:

from django.db.models.signals import post_save, pre_delete, m2m_changed

class Flow(models.Model):
    datalist = models.ManyToManyField(Data)

def handle_flow(sender, instance, *args, **kwargs):
    logger.debug("Signal caught!")

m2m_changed.connect(handle_flow, sender=Flow.datalist.through)
I'm not sure if this will help, but are you sure you should use Section.pages.through for this special case? Perhaps try @receiver(m2m_changed, sender=PageChild).
Note: if you use @receiver, you do not need m2m_changed.connect(...), as @receiver already performs the connect operation.
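One more note that may explain the behavior (my addition, consistent with the ticket linked above): with an explicit through model, admin inlines create and delete PageChild rows directly rather than calling add()/remove() on the relation, so the signals that reliably fire are post_save and post_delete on the through model itself. A sketch:

from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver

@receiver(post_save, sender=PageChild)
@receiver(post_delete, sender=PageChild)
def page_membership_changed(sender, instance, **kwargs):
    # Fires whenever a PageChild row is added or removed,
    # including through admin inlines.
    print("section/page membership changed!")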