I am building an app where I need to fetch some data from an API and update all the models with that data every few minutes.
What would be a clean way to accomplish something like this?
Well, it's quite an open question.
You'll need to create a task that runs every few minutes; you can do this with Celery. Celery has a task scheduler (http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html) which will launch a given function on a configured schedule, similar to a crontab.
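For illustration, here is a minimal sketch of such a schedule (Celery 4+ style; the project name, broker URL, and task path are assumptions, not from the question):

    from celery import Celery
    from celery.schedules import crontab

    app = Celery('myproject', broker='redis://localhost:6379/0')

    # Run the (hypothetical) refresh task every five minutes
    app.conf.beat_schedule = {
        'refresh-every-five-minutes': {
            'task': 'myapp.tasks.refresh_data',  # assumed task path
            'schedule': crontab(minute='*/5'),   # or simply 300.0 seconds
        },
    }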
The task would then fetch the data; http://docs.python-requests.org/en/master/ is a very good library for making HTTP requests.
Last but not least, you would need to deserialize the fetched data and save it to your models. Django REST Framework's serialization capabilities are a great starting point, but if the data structure is simple enough you can just use Python's built-in json library (json.loads(data)) and write a function that translates the fields of the API to the fields of the model.
By the way, I'm assuming a REST API.
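Putting the three pieces together, a task along these lines could work (the endpoint URL, the Item model, and the field mapping are all assumptions):

    import requests
    from celery import shared_task

    from myapp.models import Item  # hypothetical model

    @shared_task
    def refresh_data():
        # Fetch from the (assumed) REST endpoint
        response = requests.get('https://api.example.com/items/')
        response.raise_for_status()
        for entry in response.json():  # equivalent to json.loads(response.text)
            # Translate API field names to model field names (mapping is illustrative)
            Item.objects.update_or_create(
                external_id=entry['id'],
                defaults={'name': entry['name'], 'price': entry['price']},
            )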
You can use a task management tool that supports running periodic tasks at the intervals you specify, like Celery's Periodic Tasks.
Also, if you run your code on a Unix-like system, you can stick with core Django functionality: write your logic as a Django management command and set up a cronjob to run it at your preferred interval.
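A minimal sketch, assuming the command lives at myapp/management/commands/refresh_data.py:

    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = 'Fetch data from the API and update the models'

        def handle(self, *args, **options):
            # ... fetch-and-save logic goes here ...
            self.stdout.write('Data refreshed')

and a crontab entry to run it every five minutes (paths are assumptions):

    */5 * * * * /path/to/venv/bin/python /path/to/project/manage.py refresh_data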
The challenge is that I need to recompute all the data I have in the DB after saving a new instance. The computation takes no more than 2 minutes, which is fine for my problem. I have a custom save method, and all I need to do is go through all the items and call item.save(), but as I said it takes more than 30 seconds, so I run into 'request timeout' errors (I'm using Heroku, btw). Any ideas on how to deal with this?
@Uladzislau Malinouski, you can use an asynchronous tool like Celery.
The way it works is that the task that takes a significant amount of time is wrapped as an asynchronous task, and the computation is done in the background, outside Heroku's 30-second request window. You can set up Celery on Heroku by following this guide: https://devcenter.heroku.com/articles/celery-heroku.
If you are using a free account and have not provided your card details on Heroku, it will not be possible for you to add the add-on for a broker that is used along with Celery, such as Redis or RabbitMQ.
In such cases, you may follow this guide to schedule an asynchronous task on Heroku - https://devcenter.heroku.com/articles/clock-processes-python
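For the concrete problem above, a minimal sketch would be to move the save loop into a Celery task (the Item model name is an assumption standing in for your actual model):

    from celery import shared_task

    from myapp.models import Item  # hypothetical model with the custom save()

    @shared_task
    def recompute_all():
        # Re-run every item's custom save() outside the request/response cycle
        for item in Item.objects.all():
            item.save()

The view that saves the new instance can then call recompute_all.delay() and return immediately, keeping the request itself well under the timeout.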
I have a Django application that manages the model logic and data handling through the admin.
I also have, in the same project, a Python file (scriptcl.py) that uses the model data to perform heavy calculations that take some time, for example 5 seconds, to process.
I have migrated the project to the cloud, and now I need an API to call this file (scriptcl.py), passing parameters, run the computation according to the parameters and the data in the DB (maintained in the admin), and then respond back.
All the examples of Django REST Framework (DRF) that I've seen so far only cover authentication and data handling (create, read, update, delete).
Could anyone suggest an idea to approach this?
In my opinion, the correct approach would be to use Celery to perform these calculations asynchronously.
Write a class that inherits from the DRF APIView (which handles authentication), put in whatever logic you want or call whichever function you need, get the final result, and send back a JsonResponse. But, as you mentioned, if the API takes too long to respond, you might have to think of something else, like handing back a request_id and having the client hit the server with that request_id every 5 seconds to get the data.
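A rough sketch of that request_id pattern with DRF and Celery (heavy_calculation is a hypothetical Celery task wrapping scriptcl.py):

    from celery.result import AsyncResult
    from rest_framework.permissions import IsAuthenticated
    from rest_framework.response import Response
    from rest_framework.views import APIView

    from myapp.tasks import heavy_calculation  # hypothetical task

    class CalculationView(APIView):
        permission_classes = [IsAuthenticated]

        def post(self, request):
            # Kick off the computation and hand back an id the client can poll
            result = heavy_calculation.delay(request.data)
            return Response({'request_id': result.id}, status=202)

        def get(self, request):
            # The client polls with ?request_id=... every few seconds
            result = AsyncResult(request.query_params['request_id'])
            if result.ready():
                return Response({'status': 'done', 'result': result.result})
            return Response({'status': 'pending'})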
Just to give some feedback on this: the approach I took was to build another API using Flask and plain Python scripts.
I also used SQLAlchemy to access the database and retrieve the necessary data.
I have a Django form which collects user responses. I also have a TensorFlow sentence-classification model. What is the best/standard way to put these two together?
Details:
The TensorFlow model was trained on the Movie Review data from Rotten Tomatoes.
Every time a new row is added to my response model, I want the TensorFlow code to classify it (+ or -).
Basically, I have a Django project directory and two .py files for classification. Before going ahead myself, I wanted to know the standard way to implement machine learning algorithms in a web app.
It'd be awesome if you could suggest a tutorial or a repo.
Thank you!
Asynchronous processing
If you don't need the classification result from the ML code to be returned immediately to the user (e.g. as a response to the same POST request that submitted the data), then you can always queue the classification job to be run in the background, or even on a different server with more CPU/memory resources (e.g. with django-background-tasks or Celery).
A queued task would be, for example, to populate the field UserResponse.class_name (positive, negative) on the database rows that have that field blank (not yet classified).
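A minimal sketch of such a queued task with Celery (classify_sentence is a hypothetical wrapper around the TensorFlow model, and the blank-field convention is illustrative):

    from celery import shared_task

    from myapp.models import UserResponse   # the response model from the question
    from myapp.ml import classify_sentence  # hypothetical TF wrapper

    @shared_task
    def classify_pending_responses():
        # Populate class_name on rows that have not been classified yet
        for row in UserResponse.objects.filter(class_name=''):
            row.class_name = classify_sentence(row.text)  # 'positive' or 'negative'
            row.save(update_fields=['class_name'])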
Real time notification
If the ML code is slow and you want to return the result to the user as soon as it is available, you can use the asynchronous approach described above, paired with a real-time notification to the browser (e.g. socket.io; this can be triggered from the queued task).
This becomes necessary if the ML execution time is so long that it might time out the HTTP request in the synchronous approach described below.
Synchronous processing, if the ML code is not CPU-intensive (fast enough)
If you need the classification result returned immediately, and the ML classification is fast enough*, you can do so within the HTTP request-response cycle (the POST request returns after the ML code is done, synchronously).
* Fast enough here means it wouldn't time out the HTTP request/response, and the user wouldn't lose patience.
Well, I had to develop the same solution myself. In my case, I used Theano. If you are using TensorFlow or Theano, you can save the model you have built. So first, train the model with your training dataset, then save it using the library you have chosen. You need to deploy into your Django web application only the part of your code that handles the prediction. Using a simple POST, you can then give the user the predicted class of the sentence quickly enough. Also, if you think it is needed, you can run a job periodically to retrain your model with the new input patterns and save it once more.
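As a rough sketch of that flow (load_saved_model and model.predict are hypothetical placeholders for whatever save/load and prediction API your library provides):

    from django.http import JsonResponse

    from myapp.ml import load_saved_model  # hypothetical wrapper around your library's loader

    # Load the trained model once at import time, not on every request
    model = load_saved_model('sentiment.model')

    def classify_view(request):
        sentence = request.POST['sentence']
        label = model.predict(sentence)  # hypothetical: returns the predicted class
        return JsonResponse({'class': label})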
I would suggest not using Django, since it will add execution time to the solution.
Instead, you could use Node to serve a React frontend that interacts with the TensorFlow REST API, which runs as a standalone server.
As the answer above this post suggests, it would be better to use WebSockets; you could use a React WebSocket module so it refreshes your components once the state of the component changes.
Hope this helps.
I'm building a Django app which lists the hot (according to a specific algorithm) Twitter trending topics.
I'd like to run some processes indefinitely, making Twitter API calls and updating the database (Postgres) with the new information. This way the hot trending topic list gets updated asynchronously.
At first it seemed to me that Celery + RabbitMQ were the solution to my problem, but from what I understand they are used within Django to launch scheduled or user-triggered tasks, not indefinitely running tasks.
The solution that comes to my mind is to write a .py file that continually puts trending topics into a queue, and independent, continually running .py files that make get requests on the queue and save the data into the DB used by Django, with raw SQL or SQLAlchemy. I think this could work, but I'm pretty sure there is a much better way to do it.
If you just need to keep some processes running continually, Supervisor is a nice solution.
You can combine it with any queuing technology you like to push things into your queues.
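For example, a Supervisor program entry along these lines keeps a worker script running and restarts it if it dies (all paths and names are assumptions):

    [program:trending_worker]
    command=/path/to/venv/bin/python /path/to/worker.py
    directory=/path/to/project
    autostart=true
    autorestart=true
    stdout_logfile=/var/log/trending_worker.out.log
    stderr_logfile=/var/log/trending_worker.err.log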
One of my view functions runs a very long processing job and clearly needs to be handled differently.
Instead of making the user wait for a long time, it would be best if I were able to launch the processing job (which would email the results), and, without waiting for completion, notify the user that their request is being processed and let them browse on.
I know I can use os.fork, but I was wondering if there is a 'right way' in terms of Django. Perhaps I can return the HTTP response, and then go on with this job somehow?
There are a couple of solutions to this problem, and the best one depends a bit on how heavy your workload will be.
If you have a light workload, you can use the approach taken by django-mailer, which is to define a "jobs" model, save new jobs into the database, then have cron run a stand-alone script every so often to process the jobs stored in the database (deleting them when done). You can use something like django-chronograph to manage the job scheduling more easily.
If you need help understanding how to write a script to process the jobs, see James Bennett's article Standalone Django Scripts.
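A bare-bones sketch of that jobs-model pattern (the model fields and the run_job function are illustrative, not django-mailer's actual schema):

    from django.db import models

    class Job(models.Model):
        payload = models.TextField()                       # e.g. serialized parameters
        created = models.DateTimeField(auto_now_add=True)

    # Stand-alone script run from cron every few minutes
    # (assumes Django settings are already configured, per Bennett's article):
    def process_pending_jobs():
        for job in Job.objects.all():
            run_job(job.payload)  # hypothetical function doing the real work
            job.delete()          # remove the job once it is done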
If you have a very high workload, meaning you'll need more than a single server to process the jobs, then you want to use a real distributed task queue. There is a lot of competition here, so I can't really detail all the options, but a good one to use with Django apps is Celery.
Why not simply start a thread to do the processing and then go on to send the response?
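A bare-bones version of that idea (with the usual caveat that a thread started from a request dies with the server process and is invisible to Django, so it only suits best-effort work):

    import threading

    from django.http import HttpResponse

    def long_job(data):
        # ... heavy processing here, then email the results ...
        pass

    def my_view(request):
        # Start the work in the background and respond immediately
        threading.Thread(target=long_job, args=(request.POST.dict(),)).start()
        return HttpResponse('Your request is being processed; the results will be emailed to you.')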
Before you select a solution, you need to determine how the process will be run. That is: is it the same process for every single user, where the data is the same and can be scheduled regularly? Or does each user request something, with slightly different results each time?
As an example, if the data will be the same for every single user and can be run on a schedule, you could use cron.
See: http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/
or
http://docs.djangoproject.com/en/dev/howto/custom-management-commands/
However, if the requests will be ad hoc and you need something scalable that can handle high load and is asynchronous, then what you are actually looking for is a message queuing system. Your view will add a request to the queue, which will then get acted upon.
There are a few options to implement this in Django:
The Django Queue Service is purely Django & Python and simple, though the last commit was in April and it seems the project has been abandoned.
http://code.google.com/p/django-queue-service/
The second option, if you need something that scales, is distributed, and makes use of open-source message queuing servers: Celery is what you need.
http://ask.github.com/celery/introduction.html
http://github.com/ask/celery/tree