We have a Django app deployed on an IIS server. The app has been running smoothly without any problems. However, we want to schedule a job that will run every night at 02:00.
We are using APScheduler, which works perfectly fine on the Django development server but never runs in production.
Here is the code I am using to run the jobs.
myapp/scheduler.py
import time
from datetime import datetime
from os.path import join

import pandas as pd
from apscheduler.schedulers.background import BackgroundScheduler
from django.conf import settings


def schedule():
    scheduler = BackgroundScheduler()
    scheduler.add_job(daily_schedules, 'interval', minutes=5)
    # scheduler.add_job(daily_schedules, trigger='cron', hour='2')
    scheduler.start()


def daily_schedules():
    time_now = time.perf_counter()  # time.clock() was removed in Python 3.8
    parse_api()  # my function

    # Keeping logs
    path = join(settings.FILES_DIR, 'schedulled/logs.csv')
    logs = pd.read_csv(path, encoding='utf-8')
    time_end = time.perf_counter() - time_now
    logs.loc[len(logs)] = [
        datetime.now().strftime('%Y-%m-%d %H:%M:%S'), time_end
    ]
    logs.to_csv(path, encoding='utf-8', index=False)
    print(logs)
myapp/apps.py
from django.apps import AppConfig


class MyAppConfig(AppConfig):
    name = 'MyApp'

    def ready(self):
        from myapp import scheduler
        scheduler.schedule()
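Note that ready() can run more than once (for example, once per worker process, or twice under the development server's autoreloader), so the scheduler is often guarded so it only starts a single time. A minimal sketch of such a guard, assuming a hypothetical environment flag named SCHEDULER_AUTOSTART:

import os

from django.apps import AppConfig


class MyAppConfig(AppConfig):
    name = 'MyApp'

    def ready(self):
        # Hypothetical guard: start the scheduler only in the process where
        # this flag is set, so several workers don't each schedule the job.
        if os.environ.get('SCHEDULER_AUTOSTART') == '1':
            from myapp import scheduler
            scheduler.schedule()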
Is there any particular reason why the job is not being run? Do I need to do something else, or does this method not work with IIS? Since the server is used by many developers at the same time, I would like to run the jobs as part of the Django application and not outside it on a separate server.
P.S.: I have read all the Stack Overflow questions on this matter, but none seem to answer my question.
Related
I'm using Flask on a project on an embedded system and I'm having performance issues. I'm running gunicorn with one eventlet worker by running:
gunicorn -b 0.0.0.0 --worker-class eventlet -w 1 'app:create_app()'
The problem I'm facing is that when the MQTT messages start to arrive at a higher rate, the application starts to use almost all of the available CPU. My initial thought was that I was handling the messages inefficiently, but even after I removed my handler and simply received the messages, the problem persisted.
I have another Python application that subscribes to the same topics with the paho client and does not have this issue, so I'm assuming the problem lies in my Flask application and not in the data itself.
My code is:
import eventlet
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager, current_user
from flask_socketio import SocketIO
from flask_mqtt import Mqtt
eventlet.monkey_patch()
#USERS DB
db_alchemy = SQLAlchemy()
#socketIO
socketio = SocketIO(cors_allowed_origins="*", async_mode='eventlet')
# MQTT
mqtt_client = Mqtt()
'''
APPLICATION CREATION
'''
def create_app():
    app = Flask(__name__)
    if app.config["ENV"] == "production":
        app.config.from_object("config.ProductionConfig")
    else:
        app.config.from_object("config.DevelopmentConfig")

    # USERS DB
    db_alchemy.init_app(app)

    # LoginManager
    login_manager = LoginManager()
    login_manager.login_view = "auth.login"
    login_manager.init_app(app)

    # SOCKETIO
    socketio.init_app(app)

    # FLASK-MQTT
    app.config['MQTT_BROKER_URL'] = 'localhost'
    app.config['MQTT_BROKER_PORT'] = 1883
    app.config['MQTT_KEEPALIVE'] = 20
    app.config['MQTT_TLS_ENABLED'] = False
    mqtt_client.init_app(app)

    return app
# MQTT
@mqtt_client.on_connect()
def mqtt_on_connect():
    mqtt_client.subscribe('testTopic/#', 0)


@mqtt_client.on_disconnect()
def mqtt_on_disconnect():
    loggerMqtt.warning(' > Disconnected from broker')


@mqtt_client.on_subscribe()
def mqtt_on_subscribe(client, obj, mid, granted_qos):
    pass


@mqtt_client.on_message()
def mqtt_on_message(client, userdata, message):
    pass
    # mqtt_topicSplitter(client, userdata, message)
As you can see, my handler mqtt_topicSplitter is commented out, but I'm still having performance issues. I've tried adding a sleep call [eventlet.sleep(0.1)] in the on_message handler, which solved the CPU consumption problem but resulted in my application being constantly kicked from the broker.
I also tried other workers (gevent, asyncio, ...) without success. Using the Flask development server is not an option, since it is not recommended for production.
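For clarity, the sleep experiment mentioned above looked roughly like this (a sketch, not my exact code); sleeping per message keeps the CPU down but also delays the message callback, which may be why the broker drops the connection:

@mqtt_client.on_message()
def mqtt_on_message(client, userdata, message):
    # Experiment: yield to the eventlet hub for 100 ms on every message.
    eventlet.sleep(0.1)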
I'm sorry if I wasn't clear; I'm not an expert, so please feel free to ask me any questions if needed.
Thanks in advance.
I have a Django web app set up on the DigitalOcean App Platform. I want to update my Django app daily with content from external URLs. Unfortunately, cron jobs are not available on the App Platform.
Specifically, I want to fetch images from external URLs, attempt to download the images, and update them in my Django app if the download was successful.
You can consider configuring Celery and using Celery beat with Django; it gives you many options for setting up scheduled tasks without the need for a cron job.
Documentation here
Daemonize Celery here
Example of how you would use celery beat:
from celery import Celery
from celery.schedules import crontab

app = Celery()


@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls test('hello') every 10 seconds.
    sender.add_periodic_task(10.0, test.s('hello'), name='add every 10')

    # Calls test('world') every 30 seconds
    sender.add_periodic_task(30.0, test.s('world'), expires=10)

    # Executes every Monday morning at 7:30 a.m.
    sender.add_periodic_task(
        crontab(hour=7, minute=30, day_of_week=1),
        test.s('Happy Mondays!'),
    )


@app.task
def test(arg):
    print(arg)


@app.task
def add(x, y):
    z = x + y
    print(z)
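To actually run the schedule you start a worker plus the beat scheduler alongside your app, for example (the project name proj here is a placeholder):

celery -A proj worker -l info
celery -A proj beat -l info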
I have a website with an API to which customers send their POST calls. These calls have attachments, in the form of PDFs or similar files, that get stored in a folder /MEDIA/Storage/. The app is written in Django.
The API call gets stored in a model through DRF and serializers. After the data is stored, some logic is run: emails are sent, lookups are done, results are stored in data tables, etc. Since this takes so much time, I implemented Celery (with Azure Cache for Redis as the broker) in my app, so that only the initial save to the model is done as usual. The rest is queued up through Celery.
This works well on my local machine (macOS), but not in production (Azure/Linux).
I have tried git hooks, but I cannot get them working.
I have tried running commands over SSH on the Azure VM, but no luck...
I have looked into daemonization, but it was complicated.
settings.py
CELERY_BROKER_URL = 'redis://:<password>=#<appname>.redis.cache.windows.net:6379/0'
CELERY_RESULT_BACKEND = 'django-db'
CELERY_CACHE_BACKEND = 'django-cache'
celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'hapionline.settings')

app = Celery('hapionline')
app.config_from_object('django.conf:settings', namespace="CELERY")
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
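For completeness, the standard Celery/Django layout also loads this app in hapionline/__init__.py, roughly as in the Celery documentation (shown here as a reminder, not taken from the question):

from __future__ import absolute_import, unicode_literals

# Ensure the Celery app is loaded when Django starts, so shared_task uses it.
from .celery import app as celery_app

__all__ = ('celery_app',)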
views.py
class ProcSimpleList(generics.CreateAPIView):  # Create only, used to create a proc
    serializer_class = ProcSimpleSerializer
    permission_classes = (IsAdminOrReadOnly,)
    lookup_url_kwarg = 'proc_id'

    def perform_create(self, serializer):
        q = serializer.save()
        # Queue from starting worker. Queue created when starting Celery.
        transaction.apply_async(queue='high_priority', args=(q.proc_id, self.request.user.pk))
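The transaction task itself is not shown in the question; presumably it is a shared task along these lines (a hypothetical sketch, names are illustrative only):

# tasks.py (hypothetical sketch of the task referenced in views.py)
from celery import shared_task


@shared_task(name='transaction')
def transaction(proc_id, user_pk):
    # Heavy post-processing: send emails, do lookups, store results, etc.
    ...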
Local machine: All works well with the command: celery -A hapionline worker -l info -Q high_priority
Production: I do not know where to run the command on the production server.
If the worker is started on the local machine, it talks to the Azure cache and calling the production environment API works. But since the worker is started locally, the paths to attached files in the API are local and incorrect, not production-like: /User/../Media/.. instead of /wwwroot/../media/..
Any ideas? How do I start a worker on the production VM? Is there a way to run the start-worker "script" after the git push azure master?
I skipped Azure and moved the app to Heroku. This worked like a charm.
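On Heroku the worker is typically declared as a separate process type in the Procfile; a minimal sketch, reusing the queue name from the question (the web line is an assumption about the WSGI entry point):

web: gunicorn hapionline.wsgi
worker: celery -A hapionline worker -l info -Q high_priority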
I am trying to deploy a machine learning model on an AWS EC2 instance using Flask. These are scikit-learn fitted Random Forest models that are pickled using joblib. When I host Flask on localhost and load them into memory, everything runs smoothly. However, when I deploy it on the Apache2 server using mod_wsgi, joblib works sometimes (i.e. the models are loaded) and at other times the server just hangs. There is no error in the logs. Any ideas would be appreciated.
Here is the relevant code that I am using:
# In[49]:

from flask import Flask, jsonify, request, render_template
from datetime import datetime
from sklearn.externals import joblib
import pickle as pkl
import os

# In[50]:

app = Flask(__name__, template_folder="/home/ubuntu/flaskapp/")

# In[51]:

log = lambda msg: app.logger.info(msg, extra={'worker_id': "request.uuid"})

# Logger
import logging
handler = logging.FileHandler('/home/ubuntu/app.log')
handler.setLevel(logging.ERROR)
app.logger.addHandler(handler)

# In[52]:

@app.route('/')
def host_template():
    return render_template('Static_GUI.html')

# In[53]:

def load_models(path):
    model_arr = [0] * len(os.listdir(path))
    for filename in os.listdir(path):
        f = open(path + "/" + filename, 'rb')
        model_arr[int(filename[2:])] = joblib.load(f)
        print("Classifier ", filename[2:], " added.")
        f.close()
    return model_arr

# In[54]:

partition_limit = 30

# In[55]:

print("Dictionaries being loaded.")
dict_file_path = "/home/ubuntu/Dictionaries/VARR"
dictionaries = pkl.load(open(dict_file_path, "rb"))
print("Dictionaries Loaded.")

# In[56]:

print("Begin loading classifiers.")
model_path = "/home/ubuntu/RF_Models/"
classifier_arr = load_models(model_path)
print("Classifiers Loaded.")

if __name__ == '__main__':
    log("/home/ubuntu/print.log")
    print("Starting API")
    app.run(debug=True)
I was stuck with this for quite some time. Posting the answer in case someone runs into the same problem. Using print statements and looking at the logs, I narrowed the problem down to the joblib.load statement. I found this awesome blog: http://blog.rtwilson.com/how-to-fix-flask-wsgi-webapp-hanging-when-importing-a-module-such-as-numpy-or-matplotlib
The idea of using the global application group fixed the problem: it forces the app to run in the main Python interpreter, just as the top comment on that blog page mentions.
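For reference, the mod_wsgi configuration for this fix usually looks roughly like the following (the paths and names here are placeholders, not taken from the question):

# Apache virtual host snippet (illustrative paths)
WSGIDaemonProcess flaskapp python-home=/home/ubuntu/venv threads=5
WSGIScriptAlias / /home/ubuntu/flaskapp/flaskapp.wsgi

<Directory /home/ubuntu/flaskapp>
    WSGIProcessGroup flaskapp
    # Run in the main interpreter; avoids hangs when C-extension modules
    # such as numpy/scikit-learn are imported in a sub-interpreter.
    WSGIApplicationGroup %{GLOBAL}
    Require all granted
</Directory>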
I'm a beginner in the Twisted world, so first I'm trying to get my working Django project configured under Twisted; currently it works well on the Django test server and on Apache via mod_wsgi.
I followed this link and this too to configure the setup, and based on that I have the server.py file given below.
So, in order to integrate the Django app with Twisted, I used the following code:
import sys
import os
from twisted.application import internet, service
from twisted.web import server, resource, wsgi, static
from twisted.python import threadpool
from twisted.internet import reactor
from django.conf import settings
import twresource # This file hold implementation of "Class Root".
class ThreadPoolService(service.Service):
    def __init__(self, pool):
        self.pool = pool

    def startService(self):
        service.Service.startService(self)
        self.pool.start()

    def stopService(self):
        service.Service.stopService(self)
        self.pool.stop()
class Root(resource.Resource):
    def __init__(self, wsgi_resource):
        resource.Resource.__init__(self)
        self.wsgi_resource = wsgi_resource

    def getChild(self, path, request):
        path0 = request.prepath.pop(0)
        request.postpath.insert(0, path0)
        return self.wsgi_resource
PORT = 8080
# Environment setup for your Django project files:
# Insert it first so our project gets first priority.
sys.path.insert(0,"django_project")
sys.path.insert(0,".")
os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings'
from django.core.handlers.wsgi import WSGIHandler
def wsgi_resource():
    pool = threadpool.ThreadPool()
    pool.start()
    # Allow Ctrl-C to get you out cleanly:
    reactor.addSystemEventTrigger('after', 'shutdown', pool.stop)
    wsgi_resource = wsgi.WSGIResource(reactor, pool, WSGIHandler())
    return wsgi_resource
# Twisted Application Framework setup:
application = service.Application('twisted-django')
# WSGI container for Django, combine it with twisted.web.Resource:
# XXX this is the only 'ugly' part: see the 'getChild' method in twresource.Root
wsgi_root = wsgi_resource()
root = Root(wsgi_root)
#multi = service.MultiService()
#pool = threadpool.ThreadPool()
#tps = ThreadPoolService(pool)
#tps.setServiceParent(multi)
#resource = wsgi.WSGIResource(reactor, tps.pool, WSGIHandler())
#root = twresource.Root(resource)
#Admin Site media files
#staticrsrc = static.File(os.path.join(os.path.abspath("."), "/usr/haridas/eclipse_workplace/skgargpms/django/contrib/admin/media/"))
#root.putChild("admin/media", staticrsrc)
# Serve it up:
main_site = server.Site(root)
#internet.TCPServer(PORT, main_site).setServiceParent(multi)
internet.TCPServer(PORT, main_site).setServiceParent(application)
#EOF.
Using the above code, it worked well from the command line with "twistd -ny server.py", but when we run it as a daemon with "twistd -y server.py" it hangs, although the app is listening on port 8080. I can access it using telnet.
I found some fixes for this hanging issue on Stack Overflow itself. They led me to the code sections given below, which are commented out in the server.py file above:
multi = service.MultiService()
pool = threadpool.ThreadPool()
tps = ThreadPoolService(pool)
tps.setServiceParent(multi)
resource = wsgi.WSGIResource(reactor, tps.pool, WSGIHandler())
root = twresource.Root(resource)
and:
internet.TCPServer(PORT, main_site).setServiceParent(multi)
instead of using:
wsgi_root = wsgi_resource()
root = Root(wsgi_root)
and:
internet.TCPServer(PORT, main_site).setServiceParent(application)
The modified method also didn't help me avoid the hanging issue. Has anybody out there successfully run Django apps under Twisted in daemon mode?
Did I make any mistakes while combining these code sections? Currently I'm only starting to learn the Twisted architecture in detail. Please help me solve this problem.
I'm looking for a Twisted Application Configuration (TAC) file which integrates a Django app with Twisted and runs without any problems in daemon mode as well.
Thanks and Regards,
Haridas N.
I think you are almost there. Just add one more line at the very end:
multi.setServiceParent(application)
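In other words, with the MultiService variant the end of server.py would look roughly like this (a sketch assembled from the commented-out code in the question):

multi = service.MultiService()
pool = threadpool.ThreadPool()
tps = ThreadPoolService(pool)
tps.setServiceParent(multi)

resource = wsgi.WSGIResource(reactor, tps.pool, WSGIHandler())
root = twresource.Root(resource)

main_site = server.Site(root)
internet.TCPServer(PORT, main_site).setServiceParent(multi)

# The missing piece: attach the MultiService to the application so that
# twistd actually starts it (and its thread pool) in daemon mode.
multi.setServiceParent(application)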