I'm developing a Django app where many DB updates could or should be deferred to a later time.
What would be a good way to update the DB in a background batch job?
One way I could think of is to have a message queue that would contain raw SQL statements.
The django app would fill the queue with raw SQLs when the update should be done asynchronously.
A simple background job, in a different unrelated process, would just dequeue and execute the SQL statements at its own pace.
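A rough sketch of what I have in mind (illustrative only; I'm assuming Redis as the queue and Django's own DB connection in the worker):

```python
import json
import redis

r = redis.Redis()

# In the Django app: enqueue the SQL instead of executing it immediately.
def defer_sql(sql, params=None):
    r.rpush("deferred_sql", json.dumps({"sql": sql, "params": params or []}))

# In the separate worker process (it would still need Django settings
# configured, or a plain DB driver instead of django.db):
def run_worker():
    from django.db import connection
    while True:
        _key, raw = r.blpop("deferred_sql")   # blocks until an item arrives
        item = json.loads(raw)
        with connection.cursor() as cursor:
            cursor.execute(item["sql"], item["params"])
```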
What do you think?
Celery is often used for this.
Start with these related questions: https://stackoverflow.com/questions/tagged/celery.
I've found a good review on the subject. It recommends Gearman.
It seems a lighter solution than Celery. I think I will give it a try.
I have been looking at using MassTransit's Quartz.Net (using AdoJobStore) implementation to schedule message send/publish for the future, and all of this works fairly smoothly.
The bit where I am stuck is: as part of the production deployment, I need to set up a lot of "Scheduled Messages" to be issued at various times over the next year or so.
Is there a mechanism available to pre-populate the Quartz SQL store with Triggers/Jobs externally?
I finally figured a way to do this; posting here, so it might help others if needed.
The Quartz SQL DB is nothing but simple data serialised into its tables,
e.g. varbinary for JOB_DATA and ticks for times. The other values are fairly simple.
I ended up creating a sample app to set up some schedules and then reverse-engineered the database to work out the format.
It was all quite simple in the end, and now I have a plain SQL insert script which inserts the schedules as part of the CD pipeline.
I'm making a web app using Django.
I'd like events to trigger 'background' tasks that run parallel to the Django application. (By parallel I just mean that they don't impact the speed of the user's experience.)
Types of tasks I'm talking about:
a user logs in and an event is triggered to start populating that user's cache in anticipation of future requests based on their usage habits.
a user posts some data to the database, but that post triggers an API call to another website where the returned data will be parsed, aggregated and used to supplement that user's post.
rolling updates of data used in the app through api calls to other websites
aggregating data and running general maintenance tasks.
After a few days of research I'm thinking that I should use Twisted to accomplish this, which has led me to my question:
Is Twisted overkill for what I'm trying to accomplish?
Many of these tasks are far more i/o bound than cpu bound. So I'm thinking asynchronous is best.
Any advice would be appreciated.
Thank you
Yes, I think it's overkill.
Rather than folding in a full async framework such as Twisted, with all the technical overhead that brings, you might be better off using a task queue to do what you want as a background process.
When your app needs to do a background task (anything that would otherwise block the request/response cycle), put the task in the queue and let a separate worker process pick things off the queue and deal with them as fast as it can. (You can always add more workers).
Two of the most popular queue libraries for Python/Django are celery and rq. They're especially good with Redis as a backend, but there are other backend options, too.
Personally, I much prefer rq over celery, in terms of its API and its clean setup, but both are used by a lot of people.
And both are definitely easier to get your head around than something like Twisted, IMO.
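For example, the rq pattern is roughly this (a sketch only, assuming a local Redis; `fetch_and_aggregate` stands in for whatever your real task is):

```python
# tasks.py -- an ordinary function; rq only needs it to be importable
import requests

def fetch_and_aggregate(url):
    # e.g. call another site's API and post-process the result
    data = requests.get(url, timeout=10).json()
    return len(data)

# views.py -- enqueue the work instead of doing it in the request/response cycle
from django.http import HttpResponse
from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())

def submit(request):
    queue.enqueue(fetch_and_aggregate, "https://example.com/api/items")
    return HttpResponse("Working on it...")  # a separate `rq worker` process runs the job
```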
I am doing data migration which deals with images/videos and such being downloaded and then sent to dropbox by using its api.
I'm using Python/Django for the entire web app, but I imagine this will take a lot of bandwidth and there might be a lot of issues; a failure to save one image shouldn't stop the entire migration.
Thus, is celery a good idea? Or Twisted?
I'm a bit confused about how this would help me. What I have in mind is to spawn a server/thread to deal with a single image or a small set of images, and thus be able to process them on multiple threads.
The short answer to your question "is Celery a good idea?" is "Yes". I've used Celery to achieve a similar process whereby user submission of a form initiates, amongst other things, asynchronous calls to the Twitter API which then write back to saved objects in my database. I've found Celery outstanding for this task (no pun intended).
Celery would allow you to initiate pre-defined tasks (which, in part, can be thought of as "normal" Python functions with a @task decorator added to them) each time a user indicates they'd like to download an image or images. Celery gives you granular, per-task control over errors and retries, and tasks can be submitted singly or as chains, chords, or groups, all of which means you can definitely achieve your requirement of the migration continuing even when a single image fails to download.
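As a minimal sketch (assuming a configured Celery app; `upload_to_dropbox` is an illustrative stand-in for your Dropbox API call):

```python
# tasks.py
import requests
from celery import shared_task

@shared_task(bind=True, max_retries=3, default_retry_delay=60)
def migrate_image(self, image_url):
    try:
        response = requests.get(image_url, timeout=30)
        response.raise_for_status()
        upload_to_dropbox(response.content)   # illustrative helper, not a real API
    except Exception as exc:
        # Retry just this image; other tasks in the migration are unaffected.
        raise self.retry(exc=exc)

# Kicking off the migration, one task per image:
# for url in image_urls:
#     migrate_image.delay(url)
```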
I would recommend spending some time with the Celery tutorial here and the Celery-Django tutorial here, which will give you an introduction to the basic work flow with Celery and Django.
I can't speak to the merits of Twisted, but if you are looking for opinions on the relative strengths and weaknesses of each, these look like a good start:
Twisted or Celery? Which is right for my application with lots of SOAP calls?
sync spawing of processes: design question - Celery or Twisted
I'm building a Django app which lists the hot (according to a specific algorithm) Twitter trending topics.
I'd like to run some processes indefinitely to make Twitter API calls and update the database (Postgres) with the new information. This way the hot trending topic list gets updated asynchronously.
At first it seemed to me that celery+rabbitmq were the solution to my problem, but from what I understand they are used within django to launch scheduled or user triggered tasks, not indefinitely running tasks.
The solution that comes to my mind is to write a .py file that continually puts trending topics into a queue, and independent, continually running .py files that pull from the queue and save the data into the DB used by Django, with raw SQL or SQLAlchemy. I think this could work, but I'm pretty sure there is a much better way to do it.
If you just need to keep some processes running continually, supervisor is a nice solution.
You can combine it with any queuing technology you like to push things into your queues.
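For example, a worker kept alive by supervisor can be nothing more than a small loop. A sketch only: it assumes Redis as the queue and an illustrative `TrendingTopic` model, and the script would need Django settings configured before importing models.

```python
# worker.py -- run this under supervisor so it is restarted if it ever dies
import json

import redis

from myapp.models import TrendingTopic   # illustrative model

r = redis.Redis()

def main():
    while True:
        item = r.blpop("trending_topics", timeout=5)   # (key, value) or None on timeout
        if item is None:
            continue
        topic = json.loads(item[1])
        TrendingTopic.objects.update_or_create(
            name=topic["name"], defaults={"score": topic["score"]}
        )

if __name__ == "__main__":
    main()
```

supervisor then just needs a [program:...] entry pointing at this script.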
One of my view functions is a very long processing job and clearly needs to be handled differently.
Instead of making the user wait for a long time, it would be best if I were able to launch the processing job, which would email the results, and without waiting for completion notify the user that their request is being processed and let them browse on.
I know I can use os.fork, but I was wondering if there is a 'right way' in terms of Django. Perhaps I can return the HTTP response, and then go on with this job somehow?
There are a couple of solutions to this problem, and the best one depends a bit on how heavy your workload will be.
If you have a light workload you can use the approach used by django-mailer, which is to define a "jobs" model, save new jobs into the database, then have cron run a stand-alone script every so often to process the jobs stored in the database (deleting them when done). You can use something like django-chronograph to manage the job scheduling more easily.
If you need help understanding how to write a script to process the job see James Bennett's article Standalone Django Scripts for help.
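A minimal sketch of this light-workload approach (model and helper names are illustrative):

```python
# models.py
from django.db import models

class Job(models.Model):
    payload = models.TextField()                      # whatever the task needs
    created = models.DateTimeField(auto_now_add=True)

# process_jobs.py -- stand-alone script that cron runs every few minutes
def process_jobs():
    for job in Job.objects.all():
        do_the_work(job.payload)   # illustrative: your actual processing
        job.delete()               # delete when done, as django-mailer does
```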
If you have a very high workload, meaning you'll need more than a single server to process the jobs, then you want to use a real distributed task queue. There is a lot of competition here so I can't really detail all the options, but a good one to use for Django apps is celery.
Why not simply start a thread to do the processing and then go on to send the response?
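Something like this, as a sketch (`long_processing_job` is whatever function does the heavy work and emails the results):

```python
import threading

from django.http import HttpResponse

def my_view(request):
    # Start the long-running work in the background and return immediately.
    # Note: the thread lives inside the web server process, so this only
    # suits fairly light jobs.
    threading.Thread(
        target=long_processing_job, args=(request.user.pk,), daemon=True
    ).start()
    return HttpResponse("Your request is being processed; we'll email the results.")
```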
Before you select a solution, you need to determine how the process will be run. I.e., is it the same process for every single user, with the same data, which can be scheduled regularly? Or does each user request something and the results are slightly different?
As an example, if the data will be the same for every single user and can be run on a schedule you could use cron.
See: http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/
or
http://docs.djangoproject.com/en/dev/howto/custom-management-commands/
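For the cron route, a custom management command is only a few lines (a sketch; `refresh_data` and `update_shared_data` are illustrative names):

```python
# myapp/management/commands/refresh_data.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Refresh the shared data on a schedule (run from cron)."

    def handle(self, *args, **options):
        update_shared_data()   # illustrative: whatever your scheduled processing is

# crontab entry, e.g.:
# */15 * * * * /path/to/venv/bin/python /path/to/manage.py refresh_data
```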
However, if the requests will be ad hoc and you need something scalable that can handle high load and is asynchronous, what you are actually looking for is a message queuing system. Your view will add a request to the queue which will then get acted upon.
There are a few options to implement this in Django:
Django Queue Service is pure Django & Python and simple, though the last commit was in April and the project seems to have been abandoned.
http://code.google.com/p/django-queue-service/
The second option, if you need something that scales, is distributed and makes use of open-source message queuing servers: celery is what you need.
http://ask.github.com/celery/introduction.html
http://github.com/ask/celery/tree