How to serve the HTTP/2 protocol with Django

I am planning to deploy my Django application over the HTTP/2 protocol, but I'm unable to find a proper solution. How can I serve my Django web application with HTTP/2? The only thing I have found is hyper-h2.
I read its documentation but was unable to set up the connections.

You can do this with an Nginx reverse proxy.
If you have an existing Nginx config, you enable it simply by adding the word http2 to the listen directive:
listen 443 ssl http2 default_server;
A full walkthrough is available at
https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-with-http-2-support-on-ubuntu-16-04
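As a hedged sketch, a full server block might look like this, assuming the Django app is already running behind a local WSGI server such as Gunicorn on port 8000; the host name, certificate paths and port are placeholders:
server {
    listen 443 ssl http2 default_server;
    server_name example.com;                          # placeholder host name
    ssl_certificate /etc/ssl/certs/example.crt;       # placeholder certificate
    ssl_certificate_key /etc/ssl/private/example.key; # placeholder key

    location / {
        proxy_pass http://127.0.0.1:8000;             # Gunicorn/uWSGI serving the Django app
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Note that Nginx terminates HTTP/2 at the proxy; the connection to the Django backend stays plain HTTP/1.1.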

One option is to use the Apache httpd server with mod_wsgi. Apache supports terminating HTTP/2. The link to your Django application is still via the WSGI API, so you don't really get access to HTTP/2-specific features in your application. You can, though, configure Apache to do things like server push on your behalf.
https://httpd.apache.org/docs/2.4/howto/http2.html
https://httpd.apache.org/docs/2.4/mod/mod_http2.html
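As an illustration only, a minimal sketch of an Apache virtual host that terminates HTTP/2 in front of a Django app via mod_wsgi; the project name and all paths are placeholders:
<VirtualHost *:443>
    ServerName example.com
    Protocols h2 http/1.1                      # enable HTTP/2 (needs mod_http2 and TLS)

    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key

    # mod_wsgi hands requests to the Django WSGI application
    WSGIDaemonProcess myproject python-home=/srv/myproject/venv python-path=/srv/myproject
    WSGIProcessGroup myproject
    WSGIScriptAlias / /srv/myproject/myproject/wsgi.py
</VirtualHost>
Server push, if you want it, is configured on the Apache side (see the mod_http2 documentation above), not in Django.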

To support HTTP/2, you can deploy Django apps on web servers like Daphne using ASGI (which is the spiritual successor to WSGI).
You can read more about deploying Django with ASGI in the official documentation.
To read more about what ASGI is, see the introduction to ASGI.
To read more about the Daphne server, see its official repository.
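As a minimal sketch, assuming Django 3.0+ and a project named myproject (a placeholder), the standard ASGI entry point and a Daphne invocation look roughly like this:
# myproject/asgi.py
import os
from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
application = get_asgi_application()
Then start the server with:
daphne -b 0.0.0.0 -p 8000 myproject.asgi:application
Note that, as far as I know, Daphne only terminates HTTP/2 over TLS and requires the Twisted HTTP/2 extras (pip install -U "Twisted[tls,http2]").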

Related

Configuring Django Channels for Windows in production

Please, I need to configure Django Channels with a Redis channel layer on Windows IIS in production. It is running very well in development.
I have installed Redis and Daphne. I have set up IIS as a proxy server with URL Rewrite pointing inbound requests to the Redis channel layer on localhost:6379. I used python manage.py runworker, and also started the Daphne server with the daphne command.
They all ran very well, but there is no WebSocket handshake for my URL.

Deploy Django/Tornado on Heroku

I want to deploy an app on Heroku; it's Django with a Tornado (Tornadio2) server to implement WebSockets with the socket.io protocol.
Right now it's working fine on my VPS server,
where I use Nginx with location sections to route requests to Django or Tornado.
The Nginx config looks like this:
location /socket.io {
    # Tornado app
    proxy_pass http://localhost:8088;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    ...
}
location / {
    # Django app
    ...
}
So, what is a simple way to route requests to Django/Tornado on Heroku?
It looks like I need to use a custom buildpack to install Nginx?
Or maybe there is a good way to implement async socket.io in Django to avoid having to route requests?
Let me start with your last question:
Or maybe there is a good way to implement async socket.io in Django to avoid having to route requests?
Django is essentially a library for turning HTTP requests into appropriate HTTP responses. It does not provide an execution context like uWSGI, Apache mod_wsgi, Tornado, Flask, gunicorn, etc. So you cannot really use Django by itself to serve WebSockets; there will always be an execution context around Django.
When you deploy a Django site to Heroku, it will normally use gunicorn as the execution environment. Performance-wise this is not so cool (see this comparison of the performance of Python servers). Because gunicorn's poor performance has to do with blocking I/O, some people install nginx as a non-blocking layer in front of gunicorn. This has led to the nginx buildpack.
I don't think this fits your needs. Nginx is an awesome web server, but does not contain a python execution environment. So you end up introducing a third server into your stack:
nginx for HTTP
gunicorn for django
server #3 for the websockets
My best suggestion is to drop gunicorn and nginx and bring everything together in Tornado: web server, WSGI execution context for Django, and async context for WebSockets.
This link shows how to run Tornado on Heroku, and the next link shows how to run Django inside Tornado.
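As a rough sketch of that combination (module names, the port handling and the WebSocket handler below are placeholders; Tornadio2 would register its own routes instead):
import os
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.wsgi
from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

# Wrap the Django WSGI app so Tornado can serve it
django_app = tornado.wsgi.WSGIContainer(get_wsgi_application())

class EchoSocketHandler(tornado.websocket.WebSocketHandler):
    # Placeholder async WebSocket handler living in the same process
    def on_message(self, message):
        self.write_message(message)

application = tornado.web.Application([
    (r"/socket.io/.*", EchoSocketHandler),
    # Everything else falls back to Django
    (r".*", tornado.web.FallbackHandler, dict(fallback=django_app)),
])

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 8000))   # Heroku supplies $PORT
    tornado.httpserver.HTTPServer(application).listen(port)
    tornado.ioloop.IOLoop.current().start()
With a Procfile line such as web: python run_tornado.py, Heroku routes all traffic to this single process.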

How can I serve a Django application using the SPDY protocol?

What is the best way to serve a Django application over the SPDY [1] protocol?
[1] http://www.chromium.org/spdy
One way is to run Django on Jython with Jetty - http://www.evonove.it/blog/en/2012/12/28/django-jetty-spdy-blazing-fast/
Also, nginx has a (draft) SPDY module.
It works with nginx > 1.5.10 and Django running as a FastCGI server.
Recent versions of Chrome and Firefox dropped support for SPDY v2. So you need at least SPDY3 support on the server side. Nginx versions higher than 1.5.10 support version 3 of the protocol.
Nginx Mainline Installation
Currently (as of Feb 2014) Nginx > 1.5.10 is only available from the mainline branch, not from stable. On most Linux distributions, it is easiest to install mainline packages provided by the nginx project.
Nginx and Django configuration
The Django documentation explains how to run Django with Nginx through fastcgi. The configuration that is provided there can be used as a starting point.
In addition, you need SSL certificates for your host and have to extend the Nginx configuration in the following ways:
The listen configuration option needs to be changed from listen 80; to listen 443 ssl spdy;.
You need to add basic SSL configuration options, most importantly a certificate and key.
With both modifications combined, the configuration may look as follows:
server {
    listen 443 ssl spdy;
    server_name yourhost.example.com;

    ssl_certificate <yourhostscertificate>.pem;
    ssl_certificate_key <yourhostskey>.key;
    ssl_prefer_server_ciphers on;

    location / {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:8080;
    }
}
Then run Django in FastCGI mode, as follows:
python ./manage.py runfcgi host=127.0.0.1 port=8080
Testing your setup
Point your browser to https://yourhost.example.com
You should be able to verify that the connection uses SPDY:
Chrome: Look for an active SPDY session in chrome://net-internals/#spdy
Firefox: Check the Firebug Network tab and look for the X-Firefox-Spdy:"3.1" response header.

Visit a web page hosted on an Ubuntu server in a local network

I have an Ubuntu server hosting a web page driven by Python/Django. I can access that page using the following command: elinks http://127.0.0.1:8000.
Now, if I want to access that same web page on a MacBook sharing the same home router with my Ubuntu server (local IP: 10.0.0.9), how would I do it? Typing elinks http://10.0.0.9:8000 doesn't work.
Thanks a lot,
ZZ
Are you running the development server using manage.py?
If so, you should start the server using:
python manage.py runserver 0.0.0.0:8000
This will allow the development server to be reached from all of the machine's network interfaces instead of just localhost.
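One hedged note: on newer Django versions you may also need to add the server's LAN IP to ALLOWED_HOSTS in settings.py, otherwise the request is rejected with a DisallowedHost error even though the server is listening:
# settings.py (10.0.0.9 is the server address from the question)
ALLOWED_HOSTS = ["localhost", "127.0.0.1", "10.0.0.9"]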
You need to serve it. There are a number of ways to do this, but my preferred method is to use Nginx as a reverse proxy server for gunicorn. This is a good tutorial for that.

What is the purpose of NGINX and Gunicorn running in parallel?

A lot of Django app deployments on Amazon's EC2 use the HTTP servers NGINX and Gunicorn.
I was wondering what they actually do and why both are used in parallel. What is the purpose of running them both in parallel?
They aren't used in parallel. NGINX is a reverse proxy. It's first in line. It accepts incoming connections and decides where they should go next. It also (usually) serves static media such as CSS, JS and images. It can also do other things such as encryption via SSL, caching etc.
Gunicorn is the next layer and is an application server. NGINX sees that the incoming connection is for www.domain.com and knows (via configuration files) that it should pass that connection on to Gunicorn. Gunicorn is a WSGI server, which is basically a:
simple and universal interface between web servers and web applications or frameworks
Gunicorn's job is to manage and run the Django instance(s), similar to using django-admin runserver during development.
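For example, a typical invocation (the project name is a placeholder) that Nginx would proxy to might be:
gunicorn myproject.wsgi:application --bind 127.0.0.1:8000 --workers 3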
The contrast to this setup is to use Apache with the mod_wsgi module. In this situation, the application server is actually a part of Apache, running as a module.
Nginx and Gunicorn are not used in parallel.
Gunicorn is a Web Server Gateway Interface (WSGI) server implementation that is commonly used to run Python web applications.
NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server.
Nginx is responsible for serving static content, gzip compression, SSL, proxy buffers and other HTTP concerns, while Gunicorn is a Python HTTP server that interfaces with both Nginx and your actual Python web-app code to serve dynamic content.
In short, the flow is: client → Nginx (reverse proxy, static files, SSL) → Gunicorn (WSGI server) → Django application.
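To make that division of labour concrete, a minimal sketch of the Nginx side, assuming Gunicorn is bound to 127.0.0.1:8000 and static files have been collected to /srv/myproject/static (both placeholders):
server {
    listen 80;
    server_name example.com;

    # Nginx serves static media directly
    location /static/ {
        alias /srv/myproject/static/;
    }

    # Everything else is handed to Gunicorn, which runs the Django app
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}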