Django setting up a scheduled task without Cron

I know there are many questions asking about this, especially this one: Django - Set Up A Scheduled Job?.
But what I want to understand is: how does a scheduled task inside Django actually work?
My simplistic way to think about it is that there's an infinite loop somewhere, something like this (runs every 60 seconds),
import time

interval = 60  # run every 60 seconds
while True:
    some_method()
    time.sleep(interval)
Question: where do you put this infinite loop? Is there some part of the Django app that just runs in the background alongside the rest of the app?
Thanks!

Django doesn't do scheduled tasks. If you want scheduled tasks, you need a daemon that runs all the time and can launch your task at the appropriate time.
Django only runs when an HTTP request is made. If no one makes an HTTP request for a month, Django doesn't run for a month. If there are 45 HTTP requests this second, Django will run 45 times this second (in the absence of caching).
You can write scripts within the Django framework (called management commands) that get called from some outside service (like cron). That's as close as you'll get to what you want. If that's the case, then the question/answer you reference is the place to get the how-tos.
On a Unix-like system, cron is probably the simplest outside service to work with. On recent Linux systems, cron has a directory, /etc/cron.d, into which you can drop your app's cron config file, and it will not interfere with any other cron jobs on the system. No editing of existing files necessary.
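To make that concrete, a minimal management command might look like this (the app name myapp and command name do_report are hypothetical):

# myapp/management/commands/do_report.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Runs the scheduled work"

    def handle(self, *args, **options):
        # the scheduled work itself goes here
        self.stdout.write("done")

The file you drop into /etc/cron.d then only needs one line to fire it, e.g. 30 2 * * * appuser /path/to/venv/bin/python /path/to/project/manage.py do_report to run it nightly at 02:30 (paths and user here are illustrative).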

Related

Options for running on-demand asynchronous / background tasks in Django?

My Django app generates a complex report that can take up to 5 minutes to create. Therefore it runs once a night using a scheduled management command.
That's been ok, except I now want the user to be able to select the date range for the report, which means the report needs to be created while the user waits.
What are my options for running the task in the background? So far I've found these:
Celery - might work but is complex
django-background-tasks looks like the right tool for the job, but it hasn't been updated for years; the last supported Django version is 2.2
The report/background task could be generated by AWS Lambda, basically as a microservice: Django calls the microservice, which executes the background task and then calls the Django app back once finished. This is what I did last time, but I'm not sure it would work now, as I'd need to send the microservice 10 MB of data to process.
Use subprocess.Popen, which someone here said worked for them, but other reports say it doesn't work from Django (see the sketch below).
EDIT: It looks like Django 3.1 onwards supports async views, which may be the simple solution for this.
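For what it's worth, a minimal sketch of the subprocess.Popen option, assuming a management command named generate_report exists (the command name and arguments are hypothetical):

import subprocess
import sys

def start_report(start_date, end_date):
    # Fire and forget: run the report in a separate process so the
    # request/response cycle isn't blocked while it generates
    subprocess.Popen(
        [sys.executable, "manage.py", "generate_report", start_date, end_date],
        close_fds=True,
    )

The caveat is that the web process's host has to be allowed to spawn child processes and keep them alive, which is why reports differ on whether this works in any given deployment.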

What is the best way to transfer large files through a Django app hosted on Heroku?

Heroku gives me an H12 error when transferring a file to an API from my Django application (I understand it's a long-running process, and I guess there is some memory/worker trade-off involved). I am on a single hobby dyno right now.
The function runs smoothly for files up to around 50 MB. The file itself comes from a different source (fetched with the Python requests package).
The idea is to build a file transfer utility as a Django app on Heroku. The file does not get stored on my side; it just comes from point A and is sent to point B.
I went through multiple discussions along with the standard Heroku documentation, but I am struggling with some of the concepts:
Will this problem really be solved by background tasks? (If yes, I am looking for an explanation of the process rather than just the direct way to do it, so that I can optimize my flow.)
As mentioned in the standard docs, they recommend background tasks using the RQ package for Python. I am using PostgreSQL at the moment. Will I need to install and manage a Redis database as well for this? Is this even related to the database?
Some recommend using an extra worker other than the web worker we have by default. How does this relate to my problem?
Some say to add multiple workers; I'm not sure how this solves it. Let's say today it starts working for large files using background tasks. What if the load of simultaneous users increases? How will this impact my solution, and how should I plan mitigation around the risks?
If someone here has a strong understanding of the architecture, I am here to listen to your experiences and thoughts. Also, let me know if there is another way than Heroku, from a solution standpoint, that would make this easier for me.
Have you looked at using Celery to run this as a background task?
This is a very standard way of dealing with requests that take a long time to complete.
Will this problem really be solved by background tasks? (If yes, I am looking for an explanation of the process rather than just the direct way to do it, so that I can optimise my flow.)
Yes, it can be solved by background tasks. If you use something like Celery, which has direct support for Django, you will be running another instance of your Django application, but with a different startup command for Celery. It then keeps polling for new tasks to execute, reads the task name from the Redis queue (or RabbitMQ, whichever you use as the broker), executes that task, and updates the status back to the broker.
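For orientation, the Celery wiring itself is small; here is a minimal sketch assuming Redis as the broker (the project name myproject is illustrative):

# myproject/celery.py
import os
from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

app = Celery("myproject", broker="redis://localhost:6379/0")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()  # finds tasks.py in each installed app

The worker is then started as a separate process, typically with celery -A myproject worker, alongside your normal web process.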
You can also use Flower along with Celery so that you have a dashboard showing how many tasks are being executed, what their statuses are, and so on.
As mentioned in the standard docs, they recommend background tasks using the RQ package for Python. I am using PostgreSQL at the moment. Will I need to install and manage a Redis database as well for this? Is this even related to the database?
To use background tasks with Celery, you will need to set up some sort of message broker, like Redis or RabbitMQ.
Some recommend using an extra worker other than the web worker we have by default. How does this relate to my problem?
I don't think that would help in your use case.
Some say to add multiple workers; I'm not sure how this solves it. Let's say today it starts working for large files using background tasks. What if the load of simultaneous users increases? How will this impact my solution, and how should I plan mitigation around the risks?
When you use Celery, you will have to start a few workers for that Celery instance; these workers are the ones that execute your background tasks. The Celery documentation will help you calculate the exact worker count based on your instance's CPU, memory, and so on.
If someone here has a strong understanding of the architecture, I am here to listen to your experiences and thoughts. Also, let me know if there is another way than Heroku, from a solution standpoint, that would make this easier for me.
I have worked on a few projects where we used Celery with background tasks to upload large files. It has worked well for our use cases.
Here is my final take on this after full evaluation, trials, and the earlier recommendations made here; thanks #arun.
Heroku needs a web dyno to serve the website; a hobby dyno holds 512 MB of memory, and operations you perform below this limit should be fine.
Beyond that, let's say you have a scenario like the one mentioned above, where a large file comes from one source API and goes into another target API via the Django app. You will have to:
First, run the file upload function as a background process, since it will take more than the 30 seconds within which Heroku expects a response; if not, an H12 error is waiting for you. The solution is implementing Django background tasks; Celery worked in my case. Here, Celery is your same Django app functionality running as a background handler, which needs its own app dyno (the worker). This can be scaled as needed in the future.
To make your Django WSGI app (the frontend) talk to Celery (the background app), you need a message broker in between, which can be Heroku Redis, RabbitMQ, etc.
Second, the problem doesn't get solved there. Even though you have a new worker dedicated to the Celery app, the memory limits still apply, as it is also a dyno with its own memory.
To overcome this, your Python requests call should download the file as a stream instead of downloading the complete file into a single memory buffer. Iterate over the stream and send the file to the target endpoint in chunks.
Even the chunk size plays an important role here. I won't put an exact number, since it depends on various factors:
It should not be too small, or the transfer will take more time.
It should not be too big for either the source or target endpoint servers to handle.
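Putting the two points together, here is a minimal sketch of a streaming relay implemented as a Celery task (the URLs, task name, and chunk size are all illustrative):

# tasks.py
import requests
from celery import shared_task

CHUNK_SIZE = 1024 * 1024  # 1 MB; tune for your endpoints

@shared_task
def relay_file(source_url, target_url):
    # Stream from the source so the whole file never sits in memory
    with requests.get(source_url, stream=True) as src:
        src.raise_for_status()
        # Hand the chunk generator to requests as the request body;
        # it is sent on with chunked transfer encoding
        resp = requests.post(
            target_url,
            data=src.iter_content(chunk_size=CHUNK_SIZE),
        )
        resp.raise_for_status()

The view then only calls relay_file.delay(source_url, target_url) and returns immediately, so the web dyno never hits the 30-second limit.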

Django: can functions within views run continuously even as other requests are made?

I'm trying to create a function that, when called, will extract information from an external source at irregular (and undefined) intervals. This data will then be placed in a database for later retrieval. I want this to then run in the background even as other page requests are made. Is this possible?
The best way to run a Django function outside the request/response cycle is to implement it as a custom management command, which you can then set to run periodically using cron.
If you're already using it, Celery supports periodic tasks via celerybeat, but this requires configuring and running the celerybeat daemon, which can be a headache. Celery also supports long-running tasks (things started in a view but completing in their own time), as described in your question title.
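For reference, in recent Celery versions a periodic task is declared in a beat schedule, roughly like this (the task path and interval are illustrative):

from celery.schedules import crontab

app.conf.beat_schedule = {
    "poll-external-source": {
        "task": "myapp.tasks.fetch_external_data",
        "schedule": crontab(minute="*/15"),  # every 15 minutes
    },
}

The schedule is then driven by a separate celery beat process running next to the workers.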
Since you seem to need the function to be called when a page is loaded, you can put it inside your view as
def my_view(request):
    # Call the long-running function
    long_running_function()
    # Do view logic and return
    return HttpResponse(...)
To handle long_running_function, you could use Celery and create a tasks.py that implements your external data source logic. Creating tasks, adding them to the queue, and configuring Celery are summarized here
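As a rough sketch of that split (the task body is hypothetical):

# tasks.py
from celery import shared_task

@shared_task
def long_running_function():
    # fetch data from the external source and store it in the database
    ...

In the view you would then call long_running_function.delay() instead of long_running_function(), so the work is queued for a worker and the response returns immediately.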
If you just need a simpler solution for trying it out, take a look at the subprocess module.
There is a very similar answer here: Django: start a process in a background thread?

Interacting with a program in the background of Django

I have a program that classifies text, and I would like to make it interactive with a user on the front end of my Django site. The problem is that it takes 20 seconds for the program to load the training set and get going, and that's not feasible every time someone enters input.
Instead, I'd like Django to load the program once when the server starts, and have all user input interact with it via a view.
I looked at launching subprocesses, but if I'm not mistaken, a subprocess would only be launched when a view is called, which is undesirable here.
Any ideas?
Thanks.
It's possible that Celery would be appropriate here. There is Django integration available with django-celery.
As Jim noted, Celery is one of the best options you have for asynchronous task management, but if you want to avoid Celery and its dependency overhead, you could just add a status field on the model the processing acts on (e.g. a text_processed boolean field with default=False) and create a management command that handles the processing of the created DB entries.
Add the command to cron and you are done.
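A minimal sketch of that pattern, with hypothetical model and classifier names:

# myapp/management/commands/process_texts.py
from django.core.management.base import BaseCommand
from myapp.models import TextEntry  # hypothetical model

class Command(BaseCommand):
    help = "Classify any entries that haven't been processed yet"

    def handle(self, *args, **options):
        for entry in TextEntry.objects.filter(text_processed=False):
            classify(entry)  # your classifier; assumed to exist
            entry.text_processed = True
            entry.save(update_fields=["text_processed"])

Note that this doesn't remove the 20-second model-loading cost per run; it only amortizes it over a batch of entries instead of paying it per request.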

Rather than using crontab, can Django execute something automatically at a predefined time

How do I make Django execute something automatically at a particular time?
For example, my Django application has to upload files via FTP to remote servers at predefined times. The FTP server addresses, usernames, passwords, times, days, and frequencies are defined in a Django model.
I want to run a file upload automatically based on the values stored in the model.
One way to do this is to write a Python script and add it to the crontab. This script would run every minute and keep an eye on the time values defined in the model.
The other thing I can roughly think of is Django signals. I'm not sure if they can handle this; is there a way to generate signals at predefined times? (I haven't read about them in depth yet.)
Just for the record: there is also Celery, which allows you to schedule messages for future dispatch. It is, however, a different beast than cron, as it requires/uses RabbitMQ and is meant for message queues.
I have been thinking about this recently and have found django-cron, which seems as though it would do what you want.
Edit: also, if you are not specifically looking for a Django-based solution, I have recently used scheduler.py, a small single-file script which works well and is simple to use.
I've had really good experiences with django-chronograph.
You need to set up one crontab task: calling the chronograph Python management command, which then runs your other custom management commands based on an admin-tweakable schedule.
The problem you're describing is best solved using cron, not Django directly. Since it seems that you need to store data about your FTP uploads in your database (using Django to access it for logs, graphs, or whatever), you can write a Python script that uses Django and runs via cron.
James Bennett wrote a great article on how to do this which you can read in full here: http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/
The main gist of it is that you can write standalone Django scripts that cron can launch and run periodically, and these scripts can fully utilize your Django database, models, and anything else they want to. This gives you the flexibility to run whatever code you need and populate your database, while not trying to make Django do something it wasn't meant to do (Django is a web framework, and is event-driven, not time-driven).
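As a rough sketch of such a standalone script (the model and method names are hypothetical; on modern Django, django.setup() does the settings bootstrapping the article describes):

# run_uploads.py -- launched from cron every minute
import os
import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
django.setup()

from django.utils import timezone
from myapp.models import FtpJob  # hypothetical model holding the schedule

for job in FtpJob.objects.filter(next_run__lte=timezone.now()):
    job.upload()             # your FTP logic; assumed to live on the model
    job.schedule_next_run()  # compute and save the next due time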
Best of luck!