Is it possible to configure Heroku where one has two processes that accept HTTP requests?
I would like to run one traditional request/response process (perhaps a Django Gunicorn process), and also run a NodeJS service that provides WebSockets. It would be nice if I could configure Heroku to route requests according to a pattern like this:
ws/ # NodeJS websocket process
* # Django Process
Where any request with a URL beginning with ws/ gets routed to the NodeJS websocket process, and everything else gets routed to Django.
Heroku gives an example of something similar.
https://devcenter.heroku.com/articles/realtime-polyglot-app-node-ruby-mongodb-socketio
But I really do not like this approach and will only consider it as a last resort. The issue is that the NodeJS process and the Rails process are in separate Heroku apps, which will cause hassle when it comes to billing, versioning, and staging.
Related
Setting up Flask with uWSGI and Nginx can be difficult. I tried following this DigitalOcean tutorial and still had trouble. Even with buildout scripts it takes time, and I need to write instructions to follow next time.
If I don't expect a lot of traffic, or the app is private, does it make sense to run it without uWSGI? Flask can listen to a port. Can Nginx just forward requests?
Does it make sense to not use Nginx either, just running bare Flask app on a port?
When you "run Flask" you are actually running Werkzeug's development WSGI server, and passing your Flask app as the WSGI callable.
The development server is not intended for use in production. It is not designed to be particularly efficient, stable, or secure. It does not support all the possible features of an HTTP server.
Replace the Werkzeug dev server with a production-ready WSGI server such as Gunicorn or uWSGI when moving to production, no matter where the app will be available.
The answer to "do I need a web server?" is similar. WSGI servers happen to include HTTP servers, but they will not be as good as a dedicated production HTTP server such as Nginx or Apache.
Flask documents how to deploy in various ways. Many hosting providers also have documentation about deploying Python or Flask.
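To make the "WSGI callable" idea above concrete, here is a minimal sketch using only the standard library's wsgiref and no Flask at all; a Flask app object implements this same callable interface, which is why any WSGI server can run it:

```python
from wsgiref.util import setup_testing_defaults

# A WSGI application is just a callable that takes the request environ
# and a start_response function and returns an iterable of bytes.
def app(environ, start_response):
    body = b"Hello from a bare WSGI callable\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# The callable can be exercised without any server at all:
environ = {}
setup_testing_defaults(environ)  # fills in a minimal fake request
chunks = app(environ, lambda status, headers: None)
# In production, a WSGI server such as Gunicorn would import `app`
# and call it like this for every incoming request.
```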
First create the app:
import flask
app = flask.Flask(__name__)
Then set up the routes, and then when you want to start the app:
import gevent.pywsgi

# host and port are whatever address you want to serve on,
# e.g. host = "0.0.0.0" and port = 5000
app_server = gevent.pywsgi.WSGIServer((host, port), app)
app_server.serve_forever()
Run this script to start the application, rather than telling Gunicorn or uWSGI to run it.
I wanted the utility of Flask to build a web application, but had trouble composing it with other elements. I eventually found that gevent.pywsgi.WSGIServer was what I needed. After the call to app_server.serve_forever(), call app_server.stop() when you want to exit the application.
In my deployment, my application is listening on localhost:port using Flask and gevent, and then I have Nginx reverse-proxying HTTPS requests to it.
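As an illustration of that setup, an Nginx server block along these lines terminates HTTPS and proxies to the local Flask/gevent process; the domain, certificate paths, and port 5000 are placeholders to adapt:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                       # placeholder domain
    ssl_certificate     /etc/ssl/example.com.crt;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:5000;          # the gevent WSGIServer port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```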
You definitely need a production WSGI server such as Gunicorn, because Flask's development server is meant for ease of development and offers little in the way of fine-tuning and optimization.
For example, Gunicorn offers a variety of configuration options depending on the use case you are trying to solve, but the Flask development server has no such capabilities. In addition, development servers show their limitations as soon as you try to scale and handle more requests.
As for whether you need a reverse proxy server such as Nginx, it depends on your use case.
If you are deploying your application behind a modern AWS load balancer such as an Application Load Balancer (not a Classic Load Balancer), that by itself will suffice for most use cases. There is no need to put effort into setting up Nginx if you have that option.
One purpose of a reverse proxy is to handle slow clients, meaning clients that take a long time to send their request. The reverse proxy buffers each request until it has been fully received from the client, then hands it to Gunicorn asynchronously. This improves the performance of your application considerably.
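To illustrate the configurability mentioned above, here is a minimal gunicorn.conf.py sketch; the worker count is the commonly cited rule of thumb, not a tuned value, and the bind address assumes a reverse proxy in front:

```python
# gunicorn.conf.py : a minimal configuration sketch
import multiprocessing

bind = "127.0.0.1:8000"          # listen only on localhost, behind the proxy
workers = multiprocessing.cpu_count() * 2 + 1  # common rule of thumb
worker_class = "sync"            # default; swap for "gevent" etc. for async workers
timeout = 30                     # seconds before a hung worker is killed
```

Run with `gunicorn -c gunicorn.conf.py myapp:app` to pick up these settings.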
I am planning to deploy an application with a React frontend that calls a Python backend. My plan is to deploy React on a Linux box under a Node.js server, and the Python backend on Django behind Apache.
Could someone suggest whether this would be the right architecture from a production-grade perspective?
If the application is expected to get 1000 requests per hour, will this architecture work, or should I replace or add components or layers?
Generally, you run both servers on different ports, point Apache at the Django server for /api/ calls (or whatever URLs need to go to the Django API), and point the rest of the regular requests at Node.js, which serves your JavaScript frontend application.
1000 requests per hour is next to nothing; depending, of course, on how much work the backend does, in general any web server should handle that without a problem.
Apache has the following settings for this:
ProxyPass "/api" "http://127.0.0.1:8000/api"
ProxyPassReverse "/api" "http://127.0.0.1:8000/api"
You can read more in the documentation here: https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html
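Following the same idea, the non-API traffic can be proxied to the Node.js frontend; the port 3000 here is an assumption about where the Node app listens. ProxyPass directives are matched in configuration order, so the more specific /api rules must come first:

```apache
# /api requests go to Django, everything else to the Node.js frontend
ProxyPass        "/api" "http://127.0.0.1:8000/api"
ProxyPassReverse "/api" "http://127.0.0.1:8000/api"
ProxyPass        "/"    "http://127.0.0.1:3000/"
ProxyPassReverse "/"    "http://127.0.0.1:3000/"
```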
I read somewhere that Django is blocking code. Does that mean that when I deploy it, the server will only be able to serve one request at a time? Do I need to use another framework like Tornado for this? Is Django only meant for development and debugging? If it does not suit deployment, why not use Node.js or some other framework?
It means each thread or process can only serve one request at a time, but your server is normally configured to spin up multiple worker processes.
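The effect can be sketched outside of any web framework: each worker below blocks while "handling" a request, yet several workers together still make progress concurrently. This is a toy model of what a pool of sync workers does, not actual Django or Gunicorn code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    # A blocking handler: nothing else runs in this worker while it sleeps,
    # just as a sync Django view occupies its worker for the whole request.
    time.sleep(0.1)
    return f"response {n}"

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:  # four "workers"
    responses = list(pool.map(handle_request, range(8)))
elapsed = time.monotonic() - start

# Eight 0.1 s requests across four workers overlap in time,
# finishing far faster than the 0.8 s a single blocking worker would need.
```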
We are currently developing an application with django and django-omnibus (websockets).
We need to send regularly updates from the server (django) to all connected clients via websockets.
The problem is that we can't use cron or anything related to do the work. I've written a manage.py command, but due to some limitations it seems omnibus can't send the message to the websocket if launched by python manage.py updateclients.
I know that django is not designed for this kind of stuff but is there a way to send the updates within the running django instance itself?
Thanks for help!
Is the reason you can't use cron because your hosting environment doesn't have cron? ...or because "it seems omnibus can't send the message to the websocket if launched by python manage.py"?
If you simply don't have cron, then you need to find an alternative such as APScheduler (https://apscheduler.readthedocs.org/en/latest/); Celery also provides scheduled tasks.
But if the problem is the other side: "a way to send the updates within the running django instance itself" then I would suggest a simple option is to add an HTTP API to your Django app.
For example:
# views.py
from django.core.management import call_command
from django.http import HttpResponse

def update_clients(request):
    call_command('updateclients')
    return HttpResponse(status=204)
Then in your crontab you can do something like:
curl 127.0.0.1/internalapi/update_clients
...and this way your updateclients code can run within the Django instance that has the active connection to the omnibus tornado server.
You probably want some access control on this url, either via your webserver or something like this:
https://djangosnippets.org/snippets/2095/
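The snippet linked above restricts a view by client IP; the idea can be sketched framework-free like this, where `internal_only`, `is_internal`, and the permitted network are illustrative names and values you would adapt (in a real Django view you would return HttpResponseForbidden rather than raise):

```python
import ipaddress

# Networks allowed to call the internal API; adjust to your deployment.
INTERNAL_NETWORKS = [ipaddress.ip_network("127.0.0.0/8")]

def is_internal(remote_addr):
    """Return True if the address falls inside an allowed network."""
    try:
        addr = ipaddress.ip_address(remote_addr)
    except ValueError:
        return False
    return any(addr in net for net in INTERNAL_NETWORKS)

def internal_only(view):
    # Decorator for a Django-style view: reject requests whose
    # REMOTE_ADDR is outside the allowed networks.
    def wrapped(request, *args, **kwargs):
        if not is_internal(request.META.get("REMOTE_ADDR", "")):
            raise PermissionError("forbidden")  # HttpResponseForbidden in Django
        return view(request, *args, **kwargs)
    return wrapped
```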
We have two django applications running on the same server that interact with an API that uses oauth. They function as expected, communicating with each other, when run under the django development server. However, when deployed using apache/wsgi they don't work together.
(To be more specific, one application is an instance of the Indivo server; the other one is a custom application that interacts with Indivo.)
What is the best way to troubleshoot this?
Make sure that the Django instances are working by themselves first. For example, start one app under Apache and the other using ./manage.py runserver. Then swap which one runs under Apache and verify that everything still works as expected.
Use the Apache error logs to look for errors such as failed requests.
Since one of your apps appears to implement a web API, use something like the Google Chrome Postman App to exercise the site from a web browser.
Learn how to use the Django logging framework to log information about your apps as they execute.
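As a starting point, Django's LOGGING setting uses the standard dictConfig schema; a minimal sketch like this (the "myapp" logger name is a placeholder for your own package) writes your app's messages to the console, where Apache's error log will pick them up:

```python
# settings.py fragment: a minimal LOGGING configuration (dictConfig schema)
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "verbose": {"format": "%(asctime)s %(levelname)s %(name)s %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "verbose"},
    },
    "loggers": {
        # "myapp" is a placeholder for your application's package name
        "myapp": {"handlers": ["console"], "level": "DEBUG"},
    },
}

# The same schema is accepted directly by the standard library:
import logging.config
logging.config.dictConfig(LOGGING)
```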