How to set up autoreload with Flask+uWSGI?

I am looking for something like uWSGI + Django autoreload mode for Flask.

I am running uWSGI version 1.9.5, and the option
uwsgi --py-autoreload 1
works great.

If you're configuring uwsgi with command arguments, pass --py-autoreload=1:
uwsgi --py-autoreload=1
If you're using a .ini file to configure uwsgi and using uwsgi --ini, add the following to your .ini file:
py-autoreload = 1
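For reference, a minimal uwsgi.ini might look like this (a sketch only; the module name, port, and worker count here are assumptions, not from the question):

[uwsgi]
# assumes a Flask object named "app" inside app.py (hypothetical names)
module = app:app
http = :5000
master = true
processes = 2
# rescan interval in seconds
py-autoreload = 1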

For a development environment you can try using uWSGI's
--python-autoreload parameter.
Looking at the source code, it may work only in threaded mode (--enable-threads).
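For example, a full invocation might look like this (the file name, callable, and port are assumptions for illustration):
uwsgi --http :5000 --wsgi-file app.py --callable app --enable-threads --python-autoreload 1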

You could try using supervisord as a manager for your uWSGI app. It also has a watch function that auto-reloads a process when a file or folder has been "touched"/modified.
You will find a nice tutorial here: Flask+NginX+Uwsgi+Supervisord

The auto-reloading functionality of development-mode Flask is actually provided by the underlying Werkzeug library. The relevant code is in werkzeug/serving.py -- it's worth taking a look at. But basically, the main application spawns the WSGI server as a subprocess that stats every active .py file once per second, looking for changes. If it sees any, the subprocess exits, and the parent process starts it back up again -- in effect reloading the changes.
There's no reason you couldn't implement a similar technique at the layer of uWSGI. If you don't want to use a stat loop, you can try using underlying OS file-watch commands. Apparently (according to Werkzeug's code), pyinotify is buggy, but perhaps Watchdog works? Try a few things out and see what happens.
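As a rough, untested sketch of the Watchdog route (it assumes the watchdog package is installed, that this code runs inside the uWSGI process so uwsgi.reload() is available, and that --enable-threads is on for the observer thread):

import uwsgi
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class Reloader(FileSystemEventHandler):
    def on_any_event(self, event):
        # Gracefully reload the whole uWSGI stack when a .py file changes
        if event.src_path.endswith('.py'):
            uwsgi.reload()

observer = Observer()
observer.schedule(Reloader(), path='.', recursive=True)
observer.start()  # watches from a background thread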
Edit:
In response to the comment, I think this would be pretty easy to reimplement. Building on the example provided from your link, along with the code from werkzeug/serving.py:
""" NOTE: _iter_module_files() and check_for_modifications() are both
copied from Werkzeug code. Include appropriate attribution if
actually used in a project. """
import uwsgi
from uwsgidecorators import timer
import sys
import os
def _iter_module_files():
for module in sys.modules.values():
filename = getattr(module, '__file__', None)
if filename:
old = None
while not os.path.isfile(filename):
old = filename
filename = os.path.dirname(filename)
if filename == old:
break
else:
if filename[-4:] in ('.pyc', '.pyo'):
filename = filename[:-1]
yield filename
#timer(3)
def check_for_modifications():
# Function-static variable... you could make this global, or whatever
mtimes = check_for_modifications.mtimes
for filename in _iter_module_files():
try:
mtime = os.stat(filename).st_mtime
except OSError:
continue
old_time = mtimes.get(filename)
if old_time is None:
mtimes[filename] = mtime
continue
elif mtime > old_time:
uwsgi.reload()
return
check_for_modifications.mtimes = {} # init static
It's untested, but should work.
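To actually use it you would need to load the snippet into your app's process; one way (an assumption on my part, not from the original answer) is to save it as, say, reloader.py and start uWSGI with something like uwsgi --ini myapp.ini --import reloader. Note that uWSGI timers ride on its signal framework, which needs the master process enabled (master = true).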

py-autoreload=1
in the .ini file does the job

import gevent.wsgi
import werkzeug.serving

# app is your Flask application object, created/imported elsewhere
@werkzeug.serving.run_with_reloader
def runServer():
    gevent.wsgi.WSGIServer(('', 5000), app).serve_forever()
(You can use an arbitrary WSGI server)

I am afraid that Flask is really too bare-bones to have an implementation like this bundled by default.
Dynamically reloading code in production is generally a bad thing, but if you are concerned about a dev environment, take a look at this bash shell script: http://aplawrence.com/Unixart/watchdir.html
Just change the sleep interval to whatever suits your needs and substitute the echo command with whatever you use to reload uWSGI. I run uwsgi in master mode and just send a killall uwsgi command.
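The same idea fits in a few lines of Python if you'd rather not use bash; a rough sketch (the watched path is a placeholder, and -HUP is the graceful-reload variant of the killall above):

import glob
import os
import subprocess
import time

# Poll the source tree once per second; on any mtime change,
# tell the uWSGI master to gracefully reload its workers.
mtimes = {}
while True:
    for path in glob.glob('/srv/myapp/*.py'):
        mtime = os.stat(path).st_mtime
        if mtimes.setdefault(path, mtime) < mtime:
            mtimes[path] = mtime
            subprocess.call(['killall', '-HUP', 'uwsgi'])
    time.sleep(1)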

Related

Receiving back string of length 0 from os.popen('cmd').read()

I am working with a command line tool called 'ideviceinfo' (see https://github.com/libimobiledevice) to help me to quickly get back serial, IMEI and battery health information from the iOS device I work with daily. It executes much quicker than Apple's own 'cfgutil' tools.
Up to now I have been able to develop a more complicated script than the one shown below in PyCharm (my main IDE) to assign specific values etc. to individual variables, and then to use something like pyclip and pyautogui to help automatically paste these into the fields of the database app we work with. I have also been able to use the simplified version of the script both in the Mac OS X terminal and in the Python shell without any hiccups.
I am looking to use AppleScript to help make running the script as easy as possible.
When I try to use AppleScript's "do shell script 'python script.py'" I just get back a string of length zero when I call 'ideviceinfo'. The exact same thing happens when I try to build an Automator app with a 'Run Shell Script' component for "python script.py".
I have tried my best to isolate the problem. When other more basic commands such as 'date' are called within the script, they return valid strings.
#!/usr/bin/python
import os

ideviceinfoOutput = os.popen('ideviceinfo').read()
print ideviceinfoOutput
print len(ideviceinfoOutput)

boringExample = os.popen('date').read()
print boringExample
print len(boringExample)
I am running Mac OS X 10.11 and am on Python 2.7
Thanks.
I think I've managed to fix it on my own. I just needed to be far more explicit about where the 'ideviceinfo' binary (I hope that's the correct term) is stored on the computer.
Changed one line of code to
ideviceinfoOutput = os.popen('/usr/local/bin/ideviceinfo').read()
and all seems to be OK again.
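The likely root cause: do shell script runs with a minimal PATH (typically /usr/bin:/bin:/usr/sbin:/sbin), which doesn't include /usr/local/bin, so ideviceinfo can't be found by name. Besides hard-coding the full path, another sketch is to extend PATH yourself (this uses subprocess rather than os.popen; both the approach and the path are assumptions, not from the original post):

import os
import subprocess

# Extend the minimal PATH that AppleScript's shell provides so tools
# installed under /usr/local/bin can be found by name.
env = dict(os.environ)
env['PATH'] = env.get('PATH', '') + ':/usr/local/bin'
print(subprocess.check_output(['ideviceinfo'], env=env))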

How to output to command line when running one python script from another python script

I have multiple python scripts, each with print statements and prompts for input. I run these scripts from a single python script as below.
os.system('python script1.py ' + sys.argv[1])
os.system('python script2.py ' + sys.argv[1]).....
The run completes successfully; however, when I run all the scripts from a single file, I no longer see any print statements or prompts for input on the run console. I have researched and attempted many different ways to get this to work without success. Help would be much appreciated. Thanks.
If I understand correctly you want to run multiple python scripts synchronously, i.e. one after another.
You could use a bash script instead of python, but to answer your question of starting them from python...
Check out the subprocess module: https://docs.python.org/3.4/library/subprocess.html
In particular the call method; it accepts stdin and stdout arguments, which you can pass sys.stdin and sys.stdout to.
import sys
import subprocess
subprocess.call(['python', 'script1.py', sys.argv[1]], stdin=sys.stdin, stdout=sys.stdout)
subprocess.call(['python', 'script2.py', sys.argv[1]], stdin=sys.stdin, stdout=sys.stdout)
This will work in Python 2.7 and 3. Another way of doing this is by importing your file (module) and calling the methods in it. The difference here is that you're no longer running the code in a separate process.
subroutine.py
def run_subroutine():
    name = input('Enter a name: ')
    print(name)
master.py
import subroutine
subroutine.run_subroutine()
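If the scripts are mostly top-level code rather than importable functions, the standard-library runpy module offers a middle ground; a sketch, reusing the script names above, that executes a file in the current process so its prompts and prints share the parent's console:

import runpy
import sys

# Runs script1.py as if it were started as __main__, in this process.
sys.argv = ['script1.py', sys.argv[1]]  # right side reads the original argv
runpy.run_path('script1.py', run_name='__main__')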

Python/Scrapy wait until complete

Trying to get a project I'm working on to wait on the results of the Scrapy crawls. I'm pretty new to Python, but I'm learning quickly and I have liked it thus far. Here's my remedial function to refresh my crawls:
def refreshCrawls():
    os.system('rm JSON/*.json')
    os.system('scrapy crawl TeamGameResults -o JSON/TeamGameResults.json --nolog')
    # I do this same call for 4 other crawls also
This function gets called in a for loop in my 'main function' while I'm parsing args:
for i in xrange(1, len(sys.argv)):
    arg = sys.argv[i]
    if arg == '-r':
        pprint('Refreshing Data...')
        refreshCrawls()
This all works and does update the JSON files; however, the rest of my application does not wait on it as I foolishly expected. I didn't really have a problem with this until I moved the app over to a Pi, and now the poor little guy can't refresh soon enough. Any suggestions on how to resolve this?
My quick and dirty answer is to split it into a separate automated script and just run it an hour or so before I run my automated 'main function', or to use a sleep timer, but I'd rather go about this properly if there's some low-hanging fruit that can solve it for me. I do like being able to enter the refresh arg on my command line.
Instead of os, use subprocess:
from subprocess import Popen
import shlex
import os
import sys
from pprint import pprint

def refreshCrawls():
    os.system('rm JSON/*.json')
    cmd = shlex.split('scrapy crawl TeamGameResults -o JSON/TeamGameResults.json --nolog')
    p = Popen(cmd)
    # I do this same call for 4 other crawls also
    p.wait()

for i in xrange(1, len(sys.argv)):
    arg = sys.argv[i]
    if arg == '-r':
        pprint('Refreshing Data...')
        refreshCrawls()
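And since the comment mentions four more crawls: if they're independent, you could start them all at once and still block until every one has finished; a sketch (the command list is a placeholder you'd fill in):

from subprocess import Popen
import shlex

cmds = [
    'scrapy crawl TeamGameResults -o JSON/TeamGameResults.json --nolog',
    # ...the four other crawl commands go here
]

# Launch every crawl concurrently, then wait for all of them.
procs = [Popen(shlex.split(cmd)) for cmd in cmds]
for p in procs:
    p.wait()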

Need to return a python-made webpage, without waiting for a subprocess to complete

OK, so I am trying to run one python script (test1.py) and print something into a webpage at the end. I also want a subprocess (test2.py) to begin during this script. However, as it happens, test2.py is going to take longer to execute than test1.py. The problem I am having is that test1.py is held up until test2.py completes. I have seen numerous people post similar questions, but none of the solutions I've seen have fixed my issue.
Here's a heavily simplified version of my code that demonstrates the issue. I am running it on a local server and displaying it in Firefox.
test1.py:
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import cgi
import cgitb; cgitb.enable()
import subprocess
form = cgi.FieldStorage()
print("Content-Type: text/html") # HTTP header to say HTML is following
print # blank line, end of headers
p = subprocess.Popen(['python','test2.py'])
print "script1 ended"
test2.py:
from time import sleep
print "script2 sleeping...."
sleep(60)
print "script2 ended"
Essentially I want to execute test1.py and have it say "script1 ended" in Firefox, without having to wait 60 seconds for the subprocess to exit. I don't want anything from the test2.py subprocess to show up.
EDIT: For anyone interested.. I solved the problem in windows by using os.startfile(), and as it turns out subprocess.Popen([...], stderr=subprocess.STDOUT, stdout=subprocess.PIPE) works in Linux. The scripts run in the background without delaying anything in the parent script when I do a wget on it.
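For reference, a sketch of test1.py with that fix folded in (untested; Python 2, mirroring the code above):

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import cgi
import cgitb; cgitb.enable()
import subprocess

form = cgi.FieldStorage()

print("Content-Type: text/html")  # HTTP header to say HTML is following
print  # blank line, end of headers

# Redirecting the child's output keeps the response from being
# held open until test2.py exits.
p = subprocess.Popen(['python', 'test2.py'],
                     stderr=subprocess.STDOUT,
                     stdout=subprocess.PIPE)
print "script1 ended"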

Is it possible to have Flask/Werkzeug's auto reloader respect the -O optimisation flag

Basically, I have a small portion of my Flask-based application which spawns a background process to do some work. In a production environment I simply want to subprocess.Popen and 'ignore' what happens to that subprocess. However, during development I want to use check_output instead so that in case something does go wrong I have a better chance of catching it.
In order to determine whether or not to use check_output I just wrap it in an if __debug__, which more or less translates into:
def spawn_process():
    if __debug__:
        subprocess.check_output(args, stderr=subprocess.STDOUT)
    else:
        subprocess.Popen(args)
I was under the impression that by doing this I could simply use the -O Python flag to get the alternate behavior during development -- in production I was planning on using mod_wsgi's WSGIPythonOptimize directive for the same effect. However, it appears that Flask/Werkzeug's auto reloader ignores the Python flags when it spawns its own subprocess. A simple print __debug__ in the debugger showed that it was indeed set to True and sys.flags was all zero.
So my question is: is there any way to force Flask/Werkzeug's auto reloader to respect the flags initially passed to Python?
Disabling auto-reload does mean the -O flag gets used, but that is a small inconvenience I'd rather not deal with if there's a better way.
I don't believe you can have the autoreloader respect the -O flag. You could, however, check the debug flag in your application to decide how to spawn your subprocess:
import subprocess
from flask import current_app

def spawn_process():
    # args is the command list, defined elsewhere as in the question
    if current_app.debug:
        subprocess.check_output(args, stderr=subprocess.STDOUT)
    else:
        subprocess.Popen(args)
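One more workaround worth noting: environment variables do survive the reloader's re-exec, so you could key the decision off a variable you set yourself; a sketch (the variable name is made up):

import os
import subprocess

# MYAPP_DEV is a hypothetical variable you'd export before starting Flask.
DEV_MODE = os.environ.get('MYAPP_DEV') == '1'

def spawn_process():
    # args is the command list, defined elsewhere as in the question
    if DEV_MODE:
        subprocess.check_output(args, stderr=subprocess.STDOUT)
    else:
        subprocess.Popen(args)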