I have a project which has a directory setup like:
myproject
    someapp
    sites
        foo
            settings.py  - site specific
    settings.py  - global
I am using twisted web.wsgi to serve this project. The problem I am running into is setting up the correct environment.
import sys
import os

from twisted.application import internet, service
from twisted.web import server, resource, wsgi, static, vhost
from twisted.python import threadpool
from twisted.internet import reactor

from django.core.handlers.wsgi import WSGIHandler
from django.core.management import setup_environ, ManagementUtility

sys.path.append(os.path.abspath("."))
sys.path.append(os.path.abspath("../"))

DIRNAME = os.path.dirname(__file__)
SITE_OVERLOADS = os.path.join(DIRNAME, 'sites')

def import_module(name):
    mod = __import__(name)
    components = name.split('.')
    for comp in components[1:]:
        mod = getattr(mod, comp)
    return mod

def buildServer():
    hosts = [d for d in os.listdir(SITE_OVERLOADS) if not os.path.isfile(d) and d != ".svn"]
    root = vhost.NameVirtualHost()
    pool = threadpool.ThreadPool()
    pool.start()
    reactor.addSystemEventTrigger('after', 'shutdown', pool.stop)
    for host in hosts:
        settings = os.path.join(SITE_OVERLOADS, "%s/settings.py" % host)
        if os.path.exists(settings):
            sm = "myproject.sites.%s.settings" % host
            settings_module = import_module(sm)
            domain = settings_module.DOMAIN
            setup_environ(settings_module)
            utility = ManagementUtility()
            command = utility.fetch_command('runserver')
            command.validate()
            wsgi_resource = wsgi.WSGIResource(reactor, pool, WSGIHandler())
            root.addHost(domain, wsgi_resource)
    return root

root = buildServer()
site = server.Site(root)
application = service.Application('MyProject')
sc = service.IServiceCollection(application)
i = internet.TCPServer(8001, site)
i.setServiceParent(sc)
I am trying to set up vhosts for each site that has a settings module in the "sites" subdirectory. However, it appears that the settings are being shared across all of the sites.
Django projects within the same Python process will share the same settings. You will need to spawn them as separate processes in order for them to use separate settings modules.
Since your goal is a bunch of shared-nothing virtual hosts, you probably won't benefit from trying to set up your processes in anything but the simplest way. So, how about changing your .tac file to just launch a server for a single virtual host, starting up a lot of instances (manually, with a shell script, with another simple Python script, etc), and then putting a reverse proxy (nginx, Apache, even another Twisted Web process) in front of all of those processes?
You could do this all with Twisted, and it might even confer some advantages, but for just getting started you would probably rather focus on your site than on minor tweaks to your deployment process. If it becomes a problem that things aren't more integrated, then that would be the time to revisit the issue and try to improve on your solution.
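For illustration, here is a minimal sketch of what a per-site .tac file might look like, built from the same pieces as the code in the question. The SITE_SETTINGS_MODULE and SITE_PORT environment variable names are made up here; each instance would be handed its own settings module and port.

# single_site.tac - a minimal per-site sketch (SITE_SETTINGS_MODULE and
# SITE_PORT are hypothetical environment variables, not Twisted or Django names)
import os

from twisted.application import internet, service
from twisted.python import threadpool
from twisted.internet import reactor
from twisted.web import server, wsgi

# point Django at this site's settings before the WSGI handler is imported
os.environ['DJANGO_SETTINGS_MODULE'] = os.environ.get(
    'SITE_SETTINGS_MODULE', 'myproject.sites.foo.settings')

from django.core.handlers.wsgi import WSGIHandler

pool = threadpool.ThreadPool()
pool.start()
reactor.addSystemEventTrigger('after', 'shutdown', pool.stop)

resource = wsgi.WSGIResource(reactor, pool, WSGIHandler())
site = server.Site(resource)

application = service.Application('MySite')
sc = service.IServiceCollection(application)
internet.TCPServer(int(os.environ.get('SITE_PORT', '8001')), site).setServiceParent(sc)

Each copy could then be launched with something like SITE_SETTINGS_MODULE=myproject.sites.foo.settings SITE_PORT=8001 twistd -y single_site.tac, with the reverse proxy mapping each domain to the matching port.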
I'm running into a problem when using Flask with a gremlin database (it's an Amazon Neptune database) and using uWSGI. Everything works fine in my unit tests which use the test_client provided by Flask. However, in production we use uWSGI and there I get the following error:
There is no current event loop in thread 'uWSGIWorker4Core1'.
My app code is creating a connection to the database before a request and assigning it to the Flask g object. During teardown, the database connection is removed. The error happens when the app is trying to close the connection.
from flask import Flask, g
from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

app = Flask(__name__, instance_relative_config=True)

@app.before_request
def _db_connect():
    if not hasattr(g, 'graph_conn'):
        g.graph_conn = DriverRemoteConnection(app.config['DATABASE_HOST'], 'g')
        g.gg = traversal().withRemote(g.graph_conn)

# This hook ensures that the connection is closed when we've finished
# processing the request.
@app.teardown_appcontext
def _db_close(exc):
    if hasattr(g, 'graph_conn'):
        g.graph_conn.close()  # <- ERROR THROWN AT THIS LINE
        del g.graph_conn
The uWSGI config does use multiple threads:
[uwsgi]
http = 0.0.0.0:3031
manage-script-name = true
module = dogmaserver:app
processes = 4
threads = 2
offload-threads = 2
stats = 0.0.0.0:9191
But my understanding of Flask's g object was that it stays on a single thread. Can anyone let me know what I'm missing?
I'm using Flask 1.0.2, gremlinpython 3.4.11 and uWSGI 2.0.17.1.
I worked around it by removing the threads configuration option from uWSGI, so that each process runs only a single thread.
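For reference, the config from the question with the threads option dropped would look roughly like this (everything else left as it was):

[uwsgi]
http = 0.0.0.0:3031
manage-script-name = true
module = dogmaserver:app
processes = 4
offload-threads = 2
stats = 0.0.0.0:9191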
I want my Django project to be accessible at many different endpoints. For one app, I want it accessible at app.domain.com and for another app I want it accessible at dashboard.domain.com. How can I achieve this? I am using AWS Elastic Beanstalk and Route 53.
I tried looking at Django's djangoproject.com and their GitHub repo, since they do this. However, I couldn't figure it out. Thanks!
You can define two settings.py files, each with its own associated urls.py file:
app_settings.py
from my_project.settings import *
ROOT_URLCONF = 'my_project.app_urls'
ALLOWED_HOSTS = ['app.domain.com']
dashboard_settings.py
from my_project.settings import *
ROOT_URLCONF = 'my_project.dashboard_urls'
ALLOWED_HOSTS = ['dashboard.domain.com']
Define the URLs for each website in my_project/app_urls.py and my_project/dashboard_urls.py respectively.
Then start two instances of your Django project (with uwsgi, gunicorn or whatever you use) with those two distinct settings files (using the DJANGO_SETTINGS_MODULE environment variable, for example).
This way, both instances share the same codebase but expose distinct URLs.
For example, using uWSGI, you could have these two files (with distinct ports):
app.ini
[uwsgi]
http = 127.0.0.1:8001
module = my_project.wsgi
processes = 4
threads = 2
pidfile = app.pid
env = DJANGO_SETTINGS_MODULE=my_project.app_settings
dashboard.ini
[uwsgi]
http = 127.0.0.1:8002
module = my_project.wsgi
processes = 4
threads = 2
pidfile = dashboard.pid
env = DJANGO_SETTINGS_MODULE=my_project.dashboard_settings
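Assuming uWSGI is installed on the host, the two instances can then be started with something like:

uwsgi --ini app.ini
uwsgi --ini dashboard.ini

with a reverse proxy or load balancer in front mapping each hostname to the matching port.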
I'm trying to use the Flask CLI to start my application, i.e. flask run. I use the FLASK_APP environment variable to point to my application, i.e. export FLASK_APP=package_name.wsgi:app
In my wsgi.py file, I create the app with a factory function, i.e. app = create_app(config) and my create_app method looks like this:
def create_app(config_object=LocalConfig):
    app = connexion.App(config_object.API_NAME,
                        specification_dir=config_object.API_SWAGGER_DIR,
                        debug=config_object.DEBUG)
    app.app.config.from_object(config_object)
    app.app.json_encoder = JSONEncoder
    app.add_api(config_object.API_SWAGGER_YAML,
                strict_validation=config_object.API_SWAGGER_STRICT,
                validate_responses=config_object.API_SWAGGER_VALIDATE)
    app = register_extensions(app)
    app = register_blueprints(app)
    return app
However, the application doesn't start; I get the error:
A valid Flask application was not obtained from "package_name.wsgi:app".
Why is this?
I can start my app normally when I use gunicorn, i.e. gunicorn package_name.wsgi:app
My create_app function didn't return an object of class flask.app.Flask but an object of class connexion.apps.flask_app.FlaskApp, because I am using the connexion framework.
In my wsgi.py file, I could simply set:
application = create_app(config)
app = application.app
I didn't even have to do export FLASK_APP=package_name.wsgi:app anymore; autodiscovery worked as long as the flask run command was executed in the folder containing the wsgi.py file.
application = create_app(config)
app = application.app
App = app.app
In my case, I just created another variable (App) pointing to the underlying Flask application object and set the environment variable as below:
export FLASK_APP=__main__:App
flask shell
I'm using Django served by uWSGI and NGINX.
Ubuntu 14.04.1 LTS 64-bit
Python 3.4
Django 1.7.4
uWSGI 1.9.17.1-debian (64bit)
NGINX 1.4.6
python-pdfkit 0.5.0
wkhtmltopdf 0.12.2.1
OpenLayers v3.0.0
When I try running pdfkit.from_url(...) to print a map to PDF, the request times out.
More specifically, it hangs in Python's subprocess.py communicate (self._communicate):
with _PopenSelector() as selector:
    if self.stdin and input:
        selector.register(self.stdin, selectors.EVENT_WRITE)
    if self.stdout:
        selector.register(self.stdout, selectors.EVENT_READ)
    if self.stderr:
        selector.register(self.stderr, selectors.EVENT_READ)

    while selector.get_map():
        ...
selector.get_map() always returns a non-empty result, so the loop never exits.
If I run this in the Django development server (instead of uWSGI+NGINX) everything runs fine.
in my view:
wkhtmltopdfBinLocationString = '/usr/local/bin/wkhtmltopdf'
wkhtmltopdfBinLocationBytes = wkhtmltopdfBinLocationString.encode('utf-8')
# this fixes some leftover Python 2 assumptions about strings
config = pdfkit.configuration(wkhtmltopdf=wkhtmltopdfBinLocationBytes)
pdfkit.from_url(reportPdfUrl, reportPdfFile, configuration=config, options={
    'javascript-delay': 1500
})
In several places I have seen answers along the lines of "set the close-on-exec flag on the socket" to solve similar issues.
Is this something I can set from my "from_url" options (wkhtmltopdf does not accept it by that name) or can I configure uWSGI to assume 'close-on-exec'? I have not been able to make either of these work, but maybe I just need help with changing my uWSGI customization file:
[uwsgi]
workers = 1
chdir = [...]
plugins = python34
wsgi-file = [...]/wsgi.py
pythonpath = [...]
I tried something like
close-on-exec = true
but that didn't seem to do anything.
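For reference, the Python-level way to set that flag on a file descriptor is via fcntl; the sketch below is generic and assumes you can obtain the relevant socket's file descriptor (how to get it out of uWSGI is left open here):

import fcntl

def set_cloexec(fd):
    # set the close-on-exec flag on an already-open file descriptor
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)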
NOTE: the wsgi.py file is simple:
"""
WSGI config for dst project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/
"""
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "[my_project].settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
Any thoughts?
I'm following the project structure as laid out by Zachary Voase, but I'm struggling with one specific issue.
I'd very much like to have a custom settings boolean variable (let's call it SEND_LIVE_MAIL) that I would be using in the project. Basically, I'd like to use this settings variable in my code and if SEND_LIVE_MAIL is True actually send out a mail, whereas when it is set to False just print its contents out to the console. The latter would apply to the dev environment and when running unittests.
What would be a good way of implementing this? Currently, depending on the environment, the Django server uses dev, staging or prd settings, but for custom settings variables I believe these need to be imported 'literally'. In other words, I'd be using in my views something like
from settings.development import SEND_LIVE_MAIL
which of course isn't what I want. I'd like to be able to do something like:
from settings import SEND_LIVE_MAIL
and depending on the environment, the correct value is assigned to the SEND_LIVE_MAIL variable.
Thanks in advance!
You shouldn't be importing directly from your settings files anyway. Use:
>>> from django.conf import settings
>>> settings.SEND_LIVE_MAIL
True
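In application code you would then branch on that flag; here is a minimal sketch (the notify helper and the sender address are made up for illustration):

from django.conf import settings
from django.core.mail import send_mail

def notify(subject, body, recipients):
    if settings.SEND_LIVE_MAIL:
        # actually send the mail (the sender address is a placeholder)
        send_mail(subject, body, 'noreply@example.com', recipients)
    else:
        # dev / test environments: just print the contents to the console
        print('Would have sent mail: %s -- %s' % (subject, body))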
The simplest solution is to have this at the bottom of your settings file:
try:
    from local_settings import *
except ImportError:
    pass
And in local_settings.py specify all your environment-specific overrides. I generally don't commit this file to version control.
There are more advanced ways of doing it, where you end up with a default settings file and a per-environment override.
This article by David Cramer covers the various approaches, including both of the ones I've mentioned: http://justcramer.com/2011/01/13/settings-in-django/
import os

PROJECT_PATH = os.path.dirname(__file__)

try:
    execfile(os.path.join(PROJECT_PATH, 'local_settings.py'))
except IOError:
    pass
Then you can have your local_settings.py behave as if it was pasted directly into your settings.py:
$ cat local_settings.py
INSTALLED_APPS += ['foo']
You can do something like this for a wide variety of environment-based settings, but here's an example for just SEND_LIVE_MAIL.
settings_config.py
import re
import socket
class Config:
    def __init__(self):
        fqdn = socket.getfqdn()
        env = re.search(r'(devhost|stagehost|prodhost)', fqdn)
        env = env and env.group(1)
        env = env or 'devhost'
        if env == 'devhost':
            self.SEND_LIVE_MAIL = # whatever
        elif env == 'stagehost':
            self.SEND_LIVE_MAIL = # whatever
        elif env == 'prodhost':
            self.SEND_LIVE_MAIL = # whatever

config = Config()
settings.py
from settings_config import config
SEND_LIVE_MAIL = config.SEND_LIVE_MAIL