I have a field in one of my Django models that I want to reset every hour
(i.e. at the top of each hour its value becomes zero).
How can I do this? Can I schedule a function in Django?
I know we can define EVENTs and TRIGGERs in MySQL and other database backends, and I am familiar with signals in Django, but neither fits my needs: database events live outside of Django and bring their own problems, and signals don't seem to make this possible at all.
You could use schedule; it's very easy to apply to your problem.
import schedule
import time

def job():
    print("I'm working...")

schedule.every().hour.do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
Here is a thread that shows how to execute a task periodically. You could then add some conditions to fit your scenario.
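For example, here is a rough sketch of the same idea applied to the hourly reset, assuming a model MyModel with an integer field counter (both names are placeholders) and that this loop runs in its own process or management command where Django is already configured:

import time

import schedule

from myapp.models import MyModel  # placeholder app/model names

def reset_counters():
    # Zero out the field for every row with a single UPDATE
    MyModel.objects.update(counter=0)

# Runs roughly once an hour; align it to the top of the hour if the reset
# must happen exactly at o'clock.
schedule.every().hour.do(reset_counters)

while True:
    schedule.run_pending()
    time.sleep(1)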
We have a system consisting of a Django server connected to a PostgreSQL database, and some AWS Lambda functions which need to access the database. When an admin saves a model (PremiumUser - contains premium plan information that needs to be read by the Lambdas), we want to set up a schedule of CloudWatch events based on the saved information. Those events then trigger other Lambdas which also need to directly access the database, as the database state may change at any time and they should work off the most recent state of the database.
The issue is that Django seems to think it has saved the values, but when the Lambdas read the database the expected values aren't there. We have tried using Django's post_save signal, calling the Lambdas inside the triggered function; we have tried overriding Django's default PremiumUser.save method to perform super(PremiumUser, self).save(*args, **kwargs) and only then call the Lambdas (in case the post_save signal was getting triggered too early); and we have tried overriding the PremiumUser.save method and calling super(PremiumUser, self).save(*args, **kwargs) in the context of an atomic transaction (i.e. with transaction.atomic():).
When we call the Lambdas a few seconds after the admin dashboard has updated, they can find the values as expected and work properly, which suggests that somehow Django considers the model as having been 'saved' to the database, while the database has not yet been updated.
Is there a way to force Django to write to the database immediately? This would be the preferred solution, as it would keep Django's model and the database consistent.
An alternative solution we have considered, but would prefer not to resort to, would be to put a sleep in the Lambdas and call them asynchronously, so that Django's save is able to complete before the Lambda functions access the database. Obviously this would be a race condition, so we don't want to do this if it can at all be avoided.
Alright, after spending more time poring over Django's documentation we found a solution: using on_commit. It seems that post_save is triggered after Django's model is updated, but before the database has been written to. on_commit, on the other hand, is triggered after the current database transaction has been committed (completed).
For us, the solution involved setting up some code like this:
from django.db import models, transaction

class PremiumUser(models.Model):
    # Normal setup code...

    def save(self, *args, **kwargs):
        # Do some necessary things before the actual save occurs
        super(PremiumUser, self).save(*args, **kwargs)

        # Set up a callback for the on_commit
        def create_new_schedule():
            # Call lambda to create new schedule...
            pass

        # Register the callback with the current transaction's on_commit
        transaction.on_commit(create_new_schedule)
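The same effect can be had with the post_save signal the question mentions, by deferring the Lambda call with transaction.on_commit so it only fires once the row is actually committed. A minimal sketch, where create_new_schedule_for is a hypothetical helper wrapping the Lambda invocation:

from django.db import transaction
from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import PremiumUser  # wherever the model actually lives

@receiver(post_save, sender=PremiumUser)
def schedule_after_commit(sender, instance, **kwargs):
    # Defer the Lambda call until the surrounding transaction commits
    transaction.on_commit(lambda: create_new_schedule_for(instance))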
I have a query in my Django app that I needed to hand-optimize. But getting the query to run fast means that I need to be able to tell Postgres "don't use parallelism on this".
What I thought would work was:
from django.db import connection
cursor = connection.cursor()
# start a transaction so that PGBouncer runs the next statements
# on the same connection
cursor.execute("begin")
# turn off parallelism for this next query
cursor.execute("set max_parallel_workers_per_gather = 0")
# run my query
cursor.execute("HAND-TUNED SELECT QUERY GOES HERE")
# process the cursor results
# Put this connection back in the PGBouncer pool, and reset
# max_parallel_workers_per_gather.
cursor.execute("rollback")
But it does not seem to be working. My query continues to show up in my "slow query" logs when I run it through the Django site, and the performance remains lousy (4+ seconds with parallelism, 0.5 seconds without).
Is there a way to do what I need to do?
First, you should use SET LOCAL so that the effect is limited to the transaction.
Then I recommend that you use auto_explain to find out the actual plan that was used for the query. Maybe there is a different reason for the slowdown.
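For illustration, here is a minimal sketch of the SET LOCAL approach using Django's transaction API (the SELECT is the placeholder from the question):

from django.db import connection, transaction

# One transaction, so SET LOCAL only affects these statements and
# PgBouncer keeps them on the same server connection.
with transaction.atomic():
    with connection.cursor() as cursor:
        cursor.execute("SET LOCAL max_parallel_workers_per_gather = 0")
        cursor.execute("HAND-TUNED SELECT QUERY GOES HERE")
        rows = cursor.fetchall()
# The setting reverts automatically when the transaction ends.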
I have a model Task with a DateTimeField deadline. How can I monitor the current time and execute some actions (like sending notifications to the user who owns that task) when there is, for example, 1 hour left until the deadline, or the deadline has already passed?
I'm using Django 2.0.
I've already read about Celery, but I'm not sure whether it is an appropriate solution for such a task.
I'll be glad to hear your opinions.
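For what it's worth, a rough sketch of what a Celery-based approach might look like, assuming a working Celery setup, an owner field on Task, and a hypothetical notify_user helper; check_deadlines would be scheduled to run every few minutes via Celery beat:

from datetime import timedelta

from celery import shared_task
from django.utils import timezone

from myapp.models import Task  # assumed import path

def notify_user(user, message):
    # Hypothetical helper; send an email, push or in-app notification here
    ...

@shared_task
def check_deadlines():
    now = timezone.now()
    in_one_hour = now + timedelta(hours=1)
    # Deadlines coming up within the next hour
    for task in Task.objects.filter(deadline__gt=now, deadline__lte=in_one_hour):
        notify_user(task.owner, "Less than an hour left until the deadline.")
    # Deadlines that have already passed
    for task in Task.objects.filter(deadline__lte=now):
        notify_user(task.owner, "The deadline has passed.")

A real implementation would also record which notifications have already been sent, so users aren't notified on every run.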
I have a view that reads an Excel sheet and saves the data. I need a way to ensure that if any error (500) happens in that view, the database transactions are not committed and are rolled back instead.
I use the following code, but it saves the data before the error comes. My requirement is: if there is any error in the view, the database should roll back.
from django.db import transaction

@transaction.commit_on_success
def upload_data(request):
    # ..... and so on .....
    obj.save()
    # Error happens on this line.
    # Want to roll the database back to the state it
    # was in before this view was called.
    obj1.save()
    # If the error is here, nothing should be saved.
Thanks
Per Django's documentation on transactions, if you're using Django 1.6, I'd wrap the whole view in @transaction.atomic:
from django.db import transaction

@transaction.atomic
def upload_data(request):
    ...
If you want this behavior for your whole app, set ATOMIC_REQUESTS=True in your database configuration, as described in that documentation.
Otherwise, if you're on 1.5 and not getting the behavior you expect, you could switch to @transaction.commit_manually, wrap the whole view in a try block, and do a commit() or rollback() explicitly. It's not elegant, but it might work if you want fine-grained control over exactly when commits happen.
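A rough sketch of that manual fallback, reusing the upload_data view and the obj/obj1 placeholders from the question (commit_manually only exists in old Django versions; it was deprecated in 1.6):

from django.db import transaction

@transaction.commit_manually
def upload_data(request):
    try:
        # ..... parse the uploaded sheet and build obj / obj1 .....
        obj.save()
        obj1.save()
    except Exception:
        transaction.rollback()
        raise
    else:
        transaction.commit()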
Try putting @transaction.commit_on_success right at the top of your view. If you get an error within that view function it will roll back, otherwise it will commit your work.
I have a Django application.
One of my models looks like this:
from django.db import models

class MyModel(models.Model):
    def house_cleaning(self):
        # cleaning up data of the model instance
        ...
Every time I update an instance of MyModel, I need to clean up its data N days later. So I'd like to schedule a job to call
this_instance.house_cleaning()
N days from now.
Is there any job queue that would allow me to:
Integrate well with Django - allow me to call a method of individual model instances
Only run jobs that are scheduled to run today
Ideally handle failures gracefully
Thanks
django-chronograph might be good for your use case. If you write your cleanup jobs as Django management commands, you can then schedule them to run at some time. It runs using unix cron behind the scenes.
Is there any reason why a cron job wouldn't work? Or something like django-cron that behaves the same way? It's pretty easy to write stand-alone Django scripts. If you want to trigger house cleaning on some change to your model after a certain number of days, why not create a date field in the model which is set to N days in the future when the job needs to be scheduled? You could run a script on a daily basis which pulls all records where the date is <= today, calls the instance's house_cleaning() method and then clears the date field. If an exception is raised during the process, it's easy enough to log it or dispatch an email.
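For example, a minimal sketch of that daily script as a management command, assuming a hypothetical DateField named house_cleaning_due on MyModel (the field name and app path are placeholders, not part of the original model); drop it under management/commands/ and invoke it once a day from cron via manage.py:

import logging
from datetime import date

from django.core.management.base import BaseCommand

from myapp.models import MyModel  # placeholder app path

logger = logging.getLogger(__name__)

class Command(BaseCommand):
    help = "Run house_cleaning() on instances whose cleanup date has arrived"

    def handle(self, *args, **options):
        for instance in MyModel.objects.filter(house_cleaning_due__lte=date.today()):
            try:
                instance.house_cleaning()
                instance.house_cleaning_due = None
                instance.save(update_fields=["house_cleaning_due"])
            except Exception:
                # Log the failure and carry on with the remaining records
                logger.exception("house_cleaning failed for pk=%s", instance.pk)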