Django: Run a Constantly Running Background Task on an IIS-Hosted Application

I have a Django web application hosted on IIS. A subprocess should ALWAYS be running alongside the web application. When I run the application locally using
python manage.py runserver
the background task runs perfectly while the application is running. However, when hosted on IIS, the background task does not appear to run. How do I make the task run even when the application is hosted on IIS?
In the manage.py file of Django I have the following code:
import subprocess

def run_background():
    # creationflags (plural) is the correct keyword argument
    return subprocess.Popen(["python", "background.py"], creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)

run_background()
execute_from_command_line(sys.argv)
I do not know how to resolve this issue.
Would something like Celery work to run a task indefinitely? How would I do this? Please give step-by-step instructions.

You could set the application to auto-start by following the steps below:
Select the site -> Advanced Settings -> set Preload Enabled = "true".
Select the application pool -> Advanced Settings -> set Start Mode = "AlwaysRunning". Under the Process Model section, set the Idle Time-out (minutes) option to 0, and under the Recycling section, set the Regular Time Interval (minutes) option to 0.
Run the iisreset command from the command prompt.
Also, check that your FastCGI settings are correct.
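If you prefer to script the first two settings instead of clicking through IIS Manager, here is a hedged sketch using appcmd (the site and application pool names are assumptions; adjust them to your setup and verify the attribute names against your IIS version):
%windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/" /preloadEnabled:true
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /startMode:AlwaysRunning
The idle time-out and recycling interval can still be set through the GUI as described above.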

Related

Running django-q using Elastic Beanstalk on AWS Amazon Linux 2 instances

I use Elastic Beanstalk on AWS to host my web app, which needs a task runner like django-q. I need to run it on my instance and am facing difficulty doing that. I found this script https://gist.github.com/codeSamuraii/0e11ce6d585b3290b15a9ad163b9aa06 which does what I need, but it's for the older version of EC2 instances. So far I know I must run django-q post-deployment, but is it possible to add the process to the Procfile along with starting the WSGI server?
Any help that could point me in the right direction will be greatly appreciated.
You can create a "Procfile" at the root of your bundle with the following content:
web: gunicorn --bind 127.0.0.1:8000 --workers=1 --threads=15 mysite.config.wsgi:application
qcluster: python3 manage.py qcluster
Obviously, replace "mysite.config.wsgi" with the path to your wsgi.
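For what it's worth, here is a minimal hedged sketch of how the web process would hand work to that qcluster process once both are running (the dotted task path is a made-up example):
from django_q.tasks import async_task

# Queue the function for execution by the qcluster worker process.
async_task("myapp.services.sync_data")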
I ended up not finding a solution; I chose a different approach altogether to fulfill the requirements. It was a crontab making curl requests to the Django server. On the Django admin I would create task routes linking them to modules in the file storage, then paste the route info into the crontab settings and set the appropriate time interval.
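For illustration, a hedged sketch of what such a crontab entry might look like (the URL and interval are made up):
# Hit a Django view that triggers the task, every 15 minutes.
*/15 * * * * curl -s https://example.com/tasks/sync-data/ > /dev/null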

How to have a Django app running on Heroku do a scheduled job using Heroku Scheduler

I am developing a Django app running on Heroku.
In order to update my database with some data coming from a certain API service, I need to periodically run a certain script (let's say myscript).
How can I use Heroku Scheduler to do it?
As already explained here, a quick and simple way to answer this question is to ask yourself how you would run that script periodically if you were the scheduler yourself.
Now, the best way to run a script in your Django app at any moment is to create a custom management command and run it from your command prompt when you need it, like this:
python manage.py some_custom_command
Then, if you were the scheduler, you would run that command from your command prompt at every time written in the schedule.
So, a good idea would be to make Heroku Scheduler behave the same. Thus, the aim here is to have Heroku Scheduler run python manage.py some_custom_command at scheduled times.
Here is how you can do it:
In the your_app directory, create a folder management, then inside it create another folder commands, and finally, inside that, create a file some_custom_command.py.
So, just to be clear, the resulting path is:
your_app/management/commands/some_custom_command.py
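(A hedged note: depending on your Django and Python versions, you may also need empty __init__.py files in both new folders, i.e. your_app/management/__init__.py and your_app/management/commands/__init__.py, so that Django discovers the command.)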
Then, inside some_custom_command.py insert:
from django.core.management.base import BaseCommand
from your_app.path_to_myscript_file import myscript

class Command(BaseCommand):
    def handle(self, *args, **options):
        # Put here some script to get the data from the API service and store it into your models.
        myscript()
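For context, here is a minimal hedged sketch of what myscript itself might look like; the API URL and the DataPoint model are made-up placeholders:
import requests

from your_app.models import DataPoint  # hypothetical model


def myscript():
    # Fetch data from the external API and upsert it into the database.
    response = requests.get("https://api.example.com/data")  # hypothetical endpoint
    response.raise_for_status()
    for item in response.json():
        DataPoint.objects.update_or_create(
            external_id=item["id"],
            defaults={"value": item["value"]},
        )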
Then go to Heroku > your_app > Resources.
In the Add-ons section select Heroku Scheduler, click on it so that its window opens, then click on Add Job, select the time you want, insert the command python manage.py some_custom_command and save.

What is an efficient way to develop Airflow plugins? (without restarting the webserver for each change)

I am looking for an efficient way to develop plugins within Airflow.
Current behavior: I change something in a Python file, e.g. test_plugin.py, reload the page in the browser, and nothing happens until I restart the webserver. This is most annoying and time-consuming.
Desired behavior: I change something in Python files and the change is reflected after reloading the app in the browser.
Airflow is based on Flask, and in Flask the desired behavior is achievable by running Flask in debug mode (export FLASK_DEBUG=1, then start the Flask app). Can I achieve the same behavior somehow in Airflow?
It turns out that this was indeed a bug with the Airflow CLI's webserver --debug mode; future versions will have the expected behavior.
Issue: https://issues.apache.org/jira/browse/AIRFLOW-5867
PR: https://github.com/apache/airflow/pull/6517
In order to run Airflow with live reloading, run the following command (Airflow 1.10.7+):
$ airflow webserver --debug
In contrast to the code modification suggested by #herrjeh42, make sure that your configuration does not include unit_test_mode = True in order to enable reloading.
You can force reloading of the Python code by starting the Airflow webserver in debug & reload mode. As of Airflow 1.10.5 I had to modify airflow/bin/cli.py (in my opinion the line is buggy).
old:
app.run(debug=True, use_reloader=False if app.config['TESTING'] else True,
new:
app.run(debug=True, use_reloader=True if json.loads(app.config['TESTING'].lower()) else False,
Change in airflow.cfg
unit_test_mode = True
Start the webserver with
airflow webserver -d

Run program on server forever without manually starting it

I have created an application which resides on a server. The application is built with Django. So, if I want to access the web page, I have to run the following command to start the server:
python manage.py runserver <ip address>:<port number>
What is the way to keep it running all the time even after shutting down my computer?
I also want to save the application's logs so that I can look at them later to debug, or just check on the running of the program whenever I want to.
I managed to solve the issue by running the following command -
nohup python manage.py runserver ip:port > Output.txt &
The log is getting saved in the Output.txt file.
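A couple of hedged follow-ups, assuming a typical Linux server: you can watch the log live with
tail -f Output.txt
and, because nohup detaches the process from your terminal, you can stop it later by finding its PID with
pgrep -f "manage.py runserver"
and killing that process. Keep in mind that runserver is Django's development server; for a long-running deployment the Django docs recommend a production server such as Gunicorn or uWSGI.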

Gunicorn is creating workers every second

I am running Django using Gunicorn behind Nginx. In one of my installations, when I run the gunicorn process, I keep getting debug output, and it looks like workers are being created every second (I assume this because Django is loading very slowly; note the message "[20205] [DEBUG] 3 workers"). You can check the detailed output at this gist.
In a similar setup, I am running 3 more installations without any such issues, and the respective sites load almost instantly.
Any idea why this is happening? Thanks.
The polling of the workers every second on --log-level debug was introduced in gunicorn==19.2.
Change the log level to info.
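For example, a hedged sketch of the corresponding command line (replace the module path, bind address, and worker count with your own):
gunicorn --log-level info --workers 3 --bind 127.0.0.1:8000 mysite.wsgi:application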