Why is python-pdfkit hanging when printing a page with OpenLayers 3 content under uWSGI and NGINX? - django

I'm using Django served by uWSGI and NGINX.
Ubuntu 14.04.1 LTS 64-bit
Python 3.4
Django 1.7.4
uWSGI 1.9.17.1-debian (64bit)
NGINX 1.4.6
python-pdfkit 0.5.0
wkhtmltopdf 0.12.2.1
OpenLayers v3.0.0
When I try running pdfkit.from_url(...) to print a map to PDF, the request times out.
More specifically, it hangs in Python's subprocess.py, in communicate / self._communicate:
with _PopenSelector() as selector:
    if self.stdin and input:
        selector.register(self.stdin, selectors.EVENT_WRITE)
    if self.stdout:
        selector.register(self.stdout, selectors.EVENT_READ)
    if self.stderr:
        selector.register(self.stderr, selectors.EVENT_READ)

    while selector.get_map():
        ...
selector.get_map() always returns a non-empty map, so the loop never exits.
If I run this in the Django development server (instead of uWSGI+NGINX) everything runs fine.
In my view:
wkhtmltopdfBinLocationString = '/usr/local/bin/wkhtmltopdf'
wkhtmltopdfBinLocationBytes = wkhtmltopdfBinLocationString.encode('utf-8')
# this works around some leftover Python 2 assumptions about strings
config = pdfkit.configuration(wkhtmltopdf=wkhtmltopdfBinLocationBytes)
pdfkit.from_url(reportPdfUrl, reportPdfFile, configuration=config, options={
    'javascript-delay': 1500
})
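For completeness, here is a minimal sketch of calling wkhtmltopdf directly with a communicate() timeout, so the worker fails fast instead of hanging indefinitely; the direct invocation and the 60-second timeout are my own assumptions, not what pdfkit does by default:

import subprocess

# same URL/output variables as in the view above; 60 s is an arbitrary choice
proc = subprocess.Popen(
    ['/usr/local/bin/wkhtmltopdf', '--javascript-delay', '1500',
     reportPdfUrl, reportPdfFile],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    out, err = proc.communicate(timeout=60)
except subprocess.TimeoutExpired:
    # give up and reap the child instead of blocking the worker forever
    proc.kill()
    out, err = proc.communicate()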
In several places I have seen answers along the lines of "set the close-on-exec flag on the socket" for similar issues.
Is this something I can set from my "from_url" options (wkhtmltopdf does not accept an option by that name), or can I configure uWSGI to assume close-on-exec? I have not been able to make either of these work, but maybe I just need help changing my uWSGI configuration file:
[uwsgi]
workers = 1
chdir = [...]
plugins = python34
wsgi-file = [...]/wsgi.py
pythonpath = [...]
I tried something like
close-on-exec = true
but that didn't seem to do anything.
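For reference, my understanding of what "set the close-on-exec flag on the socket" means at the Python level is the following sketch; the socket here is just a stand-in, since the real descriptor would be the connection uWSGI hands to the worker:

import fcntl
import socket

# hypothetical socket standing in for the descriptor the worker leaks to the child
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# set the close-on-exec flag so the descriptor is closed automatically in any
# child process started with exec (e.g. the wkhtmltopdf subprocess)
flags = fcntl.fcntl(sock.fileno(), fcntl.F_GETFD)
fcntl.fcntl(sock.fileno(), fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)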
NOTE: the wsgi.py file is simple:
"""
WSGI config for dst project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/
"""
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "[my_project].settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
Any thoughts?

Related

Flask environment ignored in config.py

I'm trying to build a basic Flask app for learning purposes. Everything flows smoothly, but there's an issue I don't understand. In my run.py file, I have the following line:
app.config.from_object('config.prodConfig')
This loads config.py in the root, which contains the following code:
class Config:
    SECRET_KEY = '1234567890'
    STATIC_FOLDER = 'static'
    TEMPLATES_FOLDER = 'templates'

class devConfig(Config):
    FLASK_ENV = 'development'
    DEBUG = True
    TESTING = True

class prodConfig(Config):
    FLASK_ENV = 'production'
    DEBUG = False
    TESTING = False
My understanding is that Config contains a few "default" settings. devConfig and prodConfig are based on Config, so they will always contain those values, but each will have different env, debug, and testing values. Though I don't get any errors and debug seems to be activated, when I run my instance of Flask it tells me I'm running in production, regardless of what I do.
* Serving Flask app "run" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
I'm not interested in setting the env variable in the terminal/environment; I know how to do that. What I'm interested in is why this doesn't work. After all, the FLASK_ENV directive is a valid one and it should load when instructed to.
What am I doing wrong?
With Chase's comment in mind about not setting FLASK_ENV in the app: pass a dict of your configurations from your config.py and select the appropriate config based on the FLASK_ENV set outside your app, e.g. with a .env file.
config.py
class BaseConfig:
    ...

class DevConfig(BaseConfig):
    ...

class ProdConfig(BaseConfig):
    ...

configs = {"development": DevConfig, "production": ProdConfig}
app.py
import os
from flask import Flask
from config import configs
...
app.config.from_object(configs[os.environ.get("FLASK_ENV", "development")])
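A minimal sketch of the .env loading step this assumes, using the python-dotenv package; the package choice and the file contents are illustrative assumptions, not something from the question:

# .env (kept next to app.py, outside version control):
#   FLASK_ENV=development

import os
from dotenv import load_dotenv  # pip install python-dotenv (assumed)

load_dotenv()  # copies entries from .env into os.environ if not already set
print(os.environ.get("FLASK_ENV"))  # -> "development" with the .env above present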

python ConfigParser.NoSectionError - not working on server

Python 2.7
Django 1.10
settings.ini file (located at "/opts/myproject/settings.ini"):
[settings]
DEBUG: True
SECRET_KEY: '5a88V*GuaQgAZa8W2XgvD%dDogQU9Gcc5juq%ax64kyqmzv2rG'
In my Django settings file I have:
import os
from ConfigParser import RawConfigParser
config = RawConfigParser()
config.read('/opts/myproject/settings.ini')
SECRET_KEY = config.get('settings', 'SECRET_KEY')
DEBUG = config.get('settings', 'DEBUG')
The setup works fine locally, but when I deploy to my server I get the following error if I try to run any Django management commands:
ConfigParser.NoSectionError: No section: 'settings'
If I go into a Python shell locally, type in the above imports, and read the file, I get back:
['/opts/myproject/settings.ini']
On the server I get back:
[]
I have tried changing "config.read()" to "config.readfp()" as suggested on here, but it didn't work.
Any help or advice is appreciated.
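A small diagnostic sketch, not from the question: ConfigParser.read() silently skips files it cannot open, so checking its return value, or opening the file directly, usually surfaces the real problem (wrong path or permissions on the server):

from ConfigParser import RawConfigParser

CONFIG_PATH = '/opts/myproject/settings.ini'

config = RawConfigParser()
parsed = config.read(CONFIG_PATH)   # returns the list of files it could parse
if not parsed:
    # read() swallows IOErrors; opening the file directly raises the
    # underlying error (bad path, permissions, ...) instead
    with open(CONFIG_PATH) as fp:
        config.readfp(fp)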

uwsgi and flask - cannot import name "appl"

I have created several servers, without any issue, with the nginx - uwsgi - flask stack using virtualenv.
With the current one, uWSGI is throwing the error: cannot import name "appl".
Here is the myapp directory structure:
/srv/www/myapp
+ run.py
+ venv/            # virtualenv
+ myapp/
  + __init__.py
  + other modules/
+ logs/
Here is the /etc/uwsgi/apps-available/myapp.ini:
[uwsgi]
# Variables
base = /srv/www/myapp
app = run
# Generic Config
# plugins = http, python
# plugins = python
home = %(base)/venv
pythonpath = %(base)
socket = /tmp/%n.sock
module = %(app)
callable = appl
logto = %(base)/logs/uwsgi_%n.log
And this is run.py:
#!/usr/bin/env python
from myapp import appl

if __name__ == '__main__':
    DEBUG = True if appl.config['DEBUG'] else False
    appl.run(debug=DEBUG)
appl is defined in myapp/__init__.py as an instance of Flask().
I checked the Python code carefully, and indeed if I manually activate the virtualenv and execute run.py, everything works like a charm, but uWSGI keeps throwing the import error.
Any suggestions on what I should check next?
Fixed it; it was just a read-permissions issue. The whole Python app was readable by my user but not by the group, so uWSGI could not find it.
This was a bit tricky because I had deployed successfully many times with the same script and never had permission issues.
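For anyone hitting the same thing, a small sketch of how the permission problem can be confirmed from Python; the path is the one from this question, and the check only reflects the uWSGI user's access when run as that user:

import os

path = '/srv/www/myapp/myapp/__init__.py'
print(oct(os.stat(path).st_mode & 0o777))  # e.g. 0640: group/other have no read bit
print(os.access(path, os.R_OK))            # False when run as the uwsgi user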

uwsgi: no app loaded. going in full dynamic mode

In my uwsgi config, I have these options:
[uwsgi]
chmod-socket = 777
socket = 127.0.0.1:9031
plugins = python
pythonpath = /adminserver/
callable = app
master = True
processes = 4
reload-mercy = 8
cpu-affinity = 1
max-requests = 2000
limit-as = 512
reload-on-as = 256
reload-on-rss = 192
no-orphans
vacuum
My app structure looks like this:
/adminserver
    app.py
    ...
My app.py has these bits of code:
app = Flask(__name__)

...

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5003, debug=True)
The result is that when I try to curl my server, I get this error:
Wed Sep 11 23:28:56 2013 - added /adminserver/ to pythonpath.
Wed Sep 11 23:28:56 2013 - *** no app loaded. going in full dynamic mode ***
Wed Sep 11 23:28:56 2013 - *** uWSGI is running in multiple interpreter mode ***
What do the module and callable options do? The docs say:
module, wsgi Argument: string
Load a WSGI module as the application. The module (sans .py) must be
importable, ie. be in PYTHONPATH.
This option may be set with -w from the command line.
callable Argument: string Default: application
Set default WSGI callable name.
Module
A module in Python maps to a file on disk - when you have a directory like this:
/some-dir
module1.py
module2.py
If you start up a python interpreter while the current working directory is /some-dir you will be able to import each of the modules:
some-dir$ python
>>> import module1, module2
# Module1 and Module2 are now imported
Python searches sys.path (and a few other things, see the docs on import for more information) for a file that matches the name you are trying to import. uwsgi uses Python's import process under the covers to load the module that contains your WSGI application.
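Roughly, the pythonpath and module options reduce to something like this sketch (the path is taken from the question; the exact mechanics inside uWSGI differ):

import sys

# `pythonpath = /adminserver/` effectively does this before the import:
sys.path.append('/adminserver')

# and a `module = app` option would then amount to:
import app          # finds /adminserver/app.py on sys.path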
Callable
The WSGI PEPs (333 and 3333) specify that a WSGI application is a callable that takes two arguments and returns an iterable that yields bytestrings:
# simple_wsgi.py
# The simplest WSGI application
HELLO_WORLD = b"Hello world!\n"

def simple_app(environ, start_response):
    """Simplest possible application object"""
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return [HELLO_WORLD]
uwsgi needs to know the name of a symbol inside of your module that maps to the WSGI application callable, so it can pass in the environment and the start_response callable - essentially, it needs to be able to do the following:
wsgi_app = getattr(simple_wsgi, 'simple_app')
TL;PC (Too Long; Prefer Code)
A simple parallel of what uwsgi is doing:
# Use `module` to know *what* to import
import simple_wsgi
# construct request environment from user input
# create a callable to pass for start_response
# and then ...
# use `callable` to know what to call
wsgi_app = getattr(simple_wsgi, 'simple_app')
# and then call it to respond to the user
response = wsgi_app(environ, start_response)
For anyone else having this problem, if you are sure your configuration is correct, you should check your uWSGI version.
Ubuntu 12.04 LTS provides 1.0.3. Removing that and using pip to install 2.0.4 resolved my issues.
First, check whether your configuration is correct.
my uwsgi.ini configuration:
[uwsgi]
chdir=/home/air/repo/Qiy
uid=nobody
gid=nobody
module=Qiy.wsgi:application
socket=/home/air/repo/Qiy/uwsgi.sock
master=true
workers=5
pidfile=/home/air/repo/Qiy/uwsgi.pid
vacuum=true
thunder-lock=true
enable-threads=true
harakiri=30
post-buffering=4096
daemonize=/home/air/repo/Qiy/uwsgi.log
Then use uwsgi --ini uwsgi.ini to run uWSGI.
If that doesn't work, rm -rf the venv directory, re-initialize the venv, and retry the steps above.
Re-initializing the venv solved my issue. It seems the problem appeared when I pip3-installed some packages from requirements.txt, upgraded pip, and then installed the uwsgi package; so I deleted the venv and re-initialized my virtual environment.

django with twisted web - wsgi and vhost

I have a project which has a directory setup like:
myproject
    someapp
    sites
        foo
            settings.py - site specific
    settings.py - global
I am using Twisted's web.wsgi to serve this project. The problem I am running into is setting up the correct environment.
import sys
import os

from twisted.application import internet, service
from twisted.web import server, resource, wsgi, static, vhost
from twisted.python import threadpool
from twisted.internet import reactor

from django.core.handlers.wsgi import WSGIHandler
from django.core.management import setup_environ, ManagementUtility

sys.path.append(os.path.abspath("."))
sys.path.append(os.path.abspath("../"))

DIRNAME = os.path.dirname(__file__)
SITE_OVERLOADS = os.path.join(DIRNAME, 'sites')

def import_module(name):
    mod = __import__(name)
    components = name.split('.')
    for comp in components[1:]:
        mod = getattr(mod, comp)
    return mod

def buildServer():
    hosts = [d for d in os.listdir(SITE_OVERLOADS) if not os.path.isfile(d) and d != ".svn"]
    root = vhost.NameVirtualHost()
    pool = threadpool.ThreadPool()
    pool.start()
    reactor.addSystemEventTrigger('after', 'shutdown', pool.stop)
    for host in hosts:
        settings = os.path.join(SITE_OVERLOADS, "%s/settings.py" % host)
        if os.path.exists(settings):
            sm = "myproject.sites.%s.settings" % host
            settings_module = import_module(sm)
            domain = settings_module.DOMAIN
            setup_environ(settings_module)
            utility = ManagementUtility()
            command = utility.fetch_command('runserver')
            command.validate()
            wsgi_resource = wsgi.WSGIResource(reactor, pool, WSGIHandler())
            root.addHost(domain, wsgi_resource)
    return root

root = buildServer()
site = server.Site(root)

application = service.Application('MyProject')
sc = service.IServiceCollection(application)
i = internet.TCPServer(8001, site)
i.setServiceParent(sc)
I am trying to set up vhosts for each site that has a settings module in the "sites" subdirectory. However, it appears that the settings are being shared across all of the sites.
Django projects within the same Python process will share the same settings. You will need to spawn them as separate processes in order for them to use separate settings modules.
Since your goal is a bunch of shared-nothing virtual hosts, you probably won't benefit from trying to set up your processes in anything but the simplest way. So, how about changing your .tac file to just launch a server for a single virtual host, starting up a lot of instances (manually, with a shell script, with another simple Python script, etc), and then putting a reverse proxy (nginx, Apache, even another Twisted Web process) in front of all of those processes?
You could do this all with Twisted, and it might even confer some advantages, but for just getting started you would probably rather focus on your site than on minor tweaks to your deployment process. If it becomes a problem that things aren't more integrated, then that would be the time to revisit the issue and try to improve on your solution.
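To make that concrete, here is a rough per-process sketch built only from the pieces already in the question's .tac file; the settings module name and the port are placeholders, and each instance would get its own, started with something like twistd -y single_site.tac behind the reverse proxy:

# single_site.tac -- one process serving exactly one settings module
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.sites.foo.settings")

from twisted.application import internet, service
from twisted.web import server, wsgi
from twisted.python import threadpool
from twisted.internet import reactor
from django.core.handlers.wsgi import WSGIHandler

pool = threadpool.ThreadPool()
pool.start()
reactor.addSystemEventTrigger('after', 'shutdown', pool.stop)

# one WSGI resource, one site, one port per process
site = server.Site(wsgi.WSGIResource(reactor, pool, WSGIHandler()))

application = service.Application('foo')
internet.TCPServer(8001, site).setServiceParent(
    service.IServiceCollection(application))

The reverse proxy in front would then route by Host header to whichever port each instance listens on.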