How to deploy Flask-SocketIO on a server? - flask

I've made a project that uses the flask_socketio extension for user notifications, and I want to deploy it on a server.
I've already finished reading the Flask-SocketIO documentation and followed the Deployment section to the letter.
If I run the application, everything works fine at first: in the console I log a message when a user logs in that says User Connected!, just to make sure it works.
Now, I am logged in and the site works just fine, but if I navigate to other pages of the site it hangs for a while, say 30 seconds, and then logs me out.
Here is some code snippets:
base.html, the base template extended by all pages:
<script type="text/javascript">
    var socket = io.connect('http://' + document.domain + ':' + location.port + '/notifications-{{session.get("client_logged_in")}}-{{session.get("client_family")}}');
    var appointments_received = [];
    socket.on('connect', function(){
        console.log('User Connected!');
    });
    socket.on('new_message', function(msgChat){
        $('<li class="notification">\
            <div class="notification_client_ava_border">\
                {% if '+msgChat.image+' %}\
                <img src="/static/img/'+msgChat.image+'" alt="myproject" class="notification_client_ava">\
                {% else %}\
                <img src="/static/img/main.png" alt="myproject" class="notification_client_ava">\
                {% endif %}\
            </div>\
            <div class="notification_info">\
                <div class="client_info">\
                    <p class="user_name">\
                        You have new message from: '+msgChat.from+'\
                        '+shortName+'\
                    </p>\
                </div>\
                <br>\
                <div class="service_info">\
                    {% if '+actionArray[0] == +' "Message:" %}\
                    <span>'+msgChat.message+'</span></br>\
                    {% endif %}\
                </div>\
                <br><br>\
                <div class="">\
                    <span class="date">\
                        '+moment().fromNow()+'\
                    </span>\
                </div>\
            </div>\
            <div class="notification_action">\
                <a href="/user/cat-{{g.current_directory}}/chat/'+msgChat.to_url_+'?current_user={{session.get("client_logged_in")}}+{{session.get("client_family")}}" id="noty_read" class="btn btn-primary btn-block" data-get='+msgChat.noty_id+'>read it</a>\
            </div>\
        </li>'
        ).appendTo('ul#messages_list');
    });
</script>
myapp.service, the systemd unit that runs the Gunicorn instance serving my project:
[Unit]
Description=Gunicorn
After=network.target
[Service]
User=gard
WorkingDirectory=/home/gard/myproject
Environment="PATH=/home/gard/myproject/venv/bin"
ExecStart=/home/gard/myproject/venv/bin/gunicorn --bind 0.0.0.0:5000 manage:app
[Install]
WantedBy=multi-user.target
Here is also the Nginx config file used to serve my project:
server {
    listen 80;
    server_name 123.45.678.901;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://127.0.0.1:5000;
            break;
        }
    }

    location /socket.io {
        include proxy_params;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://127.0.0.1:5000/socket.io;
    }
}
That's all the code I can provide, and I repeat: the site works just fine and I can see the User Connected! message in the console when a user logs in on my home page, but when I navigate to other pages the hang happens and I get logged out of the site.
I forgot to show the error that I ended up with in the console:
http://123.45.678.901/socket.io/?EIO=3&transport=polling&t=LtFziAg&sid=636e939b538f4b0ca6df5cd521c2e187 400 (BAD REQUEST)

The problem is solved: I had to define which async_mode to use in the SocketIO constructor.
By default, Flask-SocketIO gives first choice to Eventlet; the second choice goes to Gevent. For more information, read issues/294.
To resolve this kind of problem, just add this to your code:
socketio = SocketIO(async_mode='gevent')
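For context, here is a minimal sketch of what the full fix could look like, assuming gevent is installed in the virtualenv; the Gunicorn worker class shown follows the Flask-SocketIO deployment docs and must match the chosen async_mode:
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# Pin the async mode instead of letting Flask-SocketIO auto-detect it;
# auto-detection prefers eventlet whenever that package is importable.
socketio = SocketIO(app, async_mode='gevent')
The ExecStart line in myapp.service then needs a matching worker class, for example:
ExecStart=/home/gard/myproject/venv/bin/gunicorn --worker-class gevent -w 1 --bind 0.0.0.0:5000 manage:app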

Related

Docker + Django if_debug template tag is not working

I am running a docker-compose with nginx routing requests to my Django server, with this config:
upstream django {
    server backend:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://django;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
To use the {% if debug %} template tag I know I need both the correct internal IP and the DEBUG setting turned on. I get the correct internal IPs by running the following code snippet in my Django settings.py:
import socket
hostname, _, ips = socket.gethostbyname_ex(socket.gethostname())
INTERNAL_IPS = [ip for ip in ips] + ['127.0.0.1']
When I docker exec -it backend sh, and import the DEBUG and INTERNAL_IPS settings, I get that the values are ['172.19.0.4', '127.0.0.1'] and True, respectively. Then, when I run docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq) to check the containers' IP address, I see that the backend container that runs Django has an IP of 172.19.0.4, same as the internal IP.
Despite all this, in the Django template I get a Django debug mode error (further proving that debug is turned on) saying that an error was triggered on line 61:
57 {% if debug %}
58 <!-- This url will be different for each type of app. Point it to your main js file. -->
59 <script type="module" src="http://frontend:3000/src/main.jsx"></script>
60 {% else %}
61 {% render_vite_bundle %}
62 {% endif %}
Can someone help me understand what I'm doing wrong and why this template tag won't work? Is there an easier way to do an if statement based on the Django debug setting? Thanks in advance!
I figured it out! I was getting the IP of the backend container when I called hostname, _, ips = socket.gethostbyname_ex(socket.gethostname()), but what I really wanted was to mark Nginx's forwarding as "internal". So I changed the line to hostname, _, ips = socket.gethostbyname_ex("nginx") and the request got forwarded correctly. It might be worth considering whitelisting all of the Docker containers' internal IPs so messages can be received freely.
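For reference, a minimal sketch of the resulting settings.py snippet, assuming the proxy's docker-compose service is literally named nginx:
import socket

# Resolve the nginx container instead of this one, so the requests it
# forwards count as coming from an "internal" IP for {% if debug %}
hostname, _, ips = socket.gethostbyname_ex("nginx")
INTERNAL_IPS = ips + ["127.0.0.1"]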

Request timeout/delay Issue

My application posts to Instagram using the Django REST framework. Posting to Instagram is a two-step process: first you send the content to a media container, then you wait until Instagram finishes processing the photo/video. Once the media is done being processed, you send the creation ID to Instagram to make the post public. To ensure that Instagram has enough time to process the media before the creation ID is sent, I use Python's time.sleep function to pause the request for 60 seconds. The problem is that this works on my desktop, but on my EC2 instance the same code fails when I pause the request for longer than 10 seconds. To run my application I am using Amazon EC2, Gunicorn and nginx.
My Django code is this:
import json
import time

import requests
from rest_framework.response import Response

# print('posting photo to facebook')
video_url = data.get('video_url')
container_url = "https://graph.facebook.com/" + page_id + "/media?media_type=VIDEO&\
video_url=" + video_url + "&caption=" + text + "&access_token=" + access_token
res = requests.post(container_url)
data = res.json()
# Pause so Instagram can finish processing the media before publishing
time.sleep(60)
publish_url = "https://graph.facebook.com/" + page_id + "/media_publish?\
creation_id=" + data['id'] + "&access_token=" + access_token
res = requests.post(publish_url)
data = res.json()
print('result of video posting attempt')
print(json.dumps(data))
a = res.status_code
b = 200
# print(res.text)
if a != b:
    return Response(data={"data": {"id": "", "post_id": ""}})
return Response(data=data, status=res.status_code)
My gunicorn.service file:
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu/xxxxx-xxxxxx-xxxxxx
ExecStart=/home/ubuntu/xxxxx-xxxxxx-xxxxxx/xxxxxxx/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
xxxxxxxxxx.wsgi:application
TimeoutStopSec=120
[Install]
WantedBy=multi-user.target
My nginx config file:
server {
    listen 80;
    listen [::]:80;
    server_name xxxxxxxxx.com *.xxxxxxxxx.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name xxxxxxxxx.com *.xxxxxxxxx.com;

    # SSL/TLS settings
    ssl_certificate /etc/letsencrypt/live/xxxxxxxxx.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xxxxxxxxx.com/privkey.pem;

    client_max_body_size 100M;
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }
    location /static/ {
        root /home/ubuntu/xxxxxxxxx-xxxxxxxxx-xxxxxxxxx;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
I recommend using Celery for delayed tasks. For example:
from datetime import timedelta

import requests
from celery import shared_task
from django.utils import timezone

@shared_task
def task_make_public(publish_url):
    res = requests.post(publish_url)
    ...

res = requests.post(container_url)
data = res.json()
publish_url = ...
now = timezone.now()
task_make_public.apply_async((publish_url,), eta=now + timedelta(seconds=60))
A simpler setup is django-background-tasks. It doesn't need a Celery worker, Celery beat, or Redis to run tasks the way Celery does.
Install from PyPI
pip install django-background-tasks
Add to INSTALLED_APPS:
INSTALLED_APPS = (
# ...
'background_task',
# ...
)
Migrate database
python manage.py migrate
Code example
from datetime import timedelta

import requests
from background_task import background

@background
def task_make_public(publish_url):
    res = requests.post(publish_url)
    ...

res = requests.post(container_url)
data = res.json()
publish_url = ...
task_make_public(publish_url, schedule=timedelta(seconds=60))  # 60 seconds from now
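Note that django-background-tasks only executes queued tasks while its worker command is running, so this has to run alongside the web server (under systemd, supervisor, or similar):
python manage.py process_tasks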

nginx - multiple Django apps, same domain, different URLs

I want to serve multiple Django projects (actually Django REST API apps) on one domain, but serve each of them on a separate URL, like this:
http://test.com/app1/...
http://test.com/app2/...
and so on. I will be using nginx to set this up, but I'm facing some problems where I'd like your help:
These apps should have different cookies from each other, because they have different auth systems, so a token or cookie for one is not valid for another. How do I handle this?
What nginx configs do you recommend?
Note:
I don't want full detail because I know the concepts; just some hints and useful commands will do.
Update:
For example, I have a Django app which has a URL test, and I want this path to be served at /app1/test. The problem is that when I send a request to /app1/test, Django doesn't recognize it as /test but as /app1/test, and because /app1 is not registered in urls.py it returns a 404 error.
Here is a sample of my nginx config:
server {
    listen 80;
    server_name test.com;

    location /qpp1/ {
        include uwsgi_params;
        proxy_pass http://unix://home//app1.sock;
    }
    location /qpp2/ {
        include uwsgi_params;
        proxy_pass http://unix://home//app2.sock;
    }
}
You can try playing with the proxy_cookie_path directive:
server {
    ...
    location /app1/ {
        proxy_cookie_path / /app1/;
        proxy_pass http://backend1/;
    }
    location /app2/ {
        proxy_cookie_path / /app2/;
        proxy_pass http://backend2/;
    }
}
Update
Here is another variant of the configuration to test. Note the trailing slash in proxy_pass http://qpp1/; when proxy_pass includes a URI part, nginx replaces the matched location prefix with it, so the upstream app receives /test rather than /qpp1/test, which addresses the 404 described in the update above.
upstream qpp1 {
    server unix:/home/.../app1.sock;
}
upstream qpp2 {
    server unix:/home/.../app2.sock;
}

server {
    listen 80;
    server_name test.com;

    location /qpp1/ {
        include uwsgi_params;
        proxy_cookie_path / /qpp1/;
        proxy_pass http://qpp1/;
    }
    location /qpp2/ {
        include uwsgi_params;
        proxy_cookie_path / /qpp2/;
        proxy_pass http://qpp2/;
    }
}
Because I am not using nginx, Django's SESSION_COOKIE_PATH setting was my solution.
https://docs.djangoproject.com/en/3.1/ref/settings/#session-cookie-path
In your example, you could set it to:
app1:
SESSION_COOKIE_PATH = "/app1/"
app2:
SESSION_COOKIE_PATH = "/app2/"
Afterwards, clear the cookies for the domain in your browser if you've logged in before.

Django+gunicorn+nginx upload large file 502 error

Problem
Uploading 1-2 MB files works fine.
When I attempt to upload a 16 MB file, I get a 502 error after several seconds.
In more detail:
I click "Upload"
Google Chrome uploads the file (the upload status changes from 0% to 100% in the bottom left corner)
The status changes to "Waiting for HOST", where HOST is my site's hostname
After half a minute, the server returns "502 Bad Gateway"
My view:
def upload(request):
    if request.method == 'POST':
        f = File(data=request.FILES['file'])
        f.save()
        return redirect(reverse(display), f.id)
    else:
        return render('filehosting_upload.html', request)
render(template, request[, data]) is my own shorthand that deals with some ajax stuff.
The filehosting_upload.html:
{% extends "base.html" %}
{% block content %}
<h2>File upload</h2>
<form action="{% url nexus.filehosting.views.upload %}" method="post" enctype="multipart/form-data">
    {% csrf_token %}
    <input type="file" name="file">
    <button type="submit" class="btn">Upload</button>
</form>
{% endblock %}
Logs & specs
There is nothing informative in the logs that I can find.
Versions:
Django==1.4.2
Nginx==1.2.1
gunicorn==0.17.2
Command line parameters
command=/var/www/ernado/data/envs/PROJECT_NAME/bin/gunicorn -b localhost:8801 -w 4 PROJECT_NAME:application
Nginx configuration for related location:
location /files/upload {
    client_max_body_size 100m;
    proxy_pass http://HOST;
    proxy_connect_timeout 300s;
    proxy_read_timeout 300s;
}
Nginx log entry (changed MY_IP and HOST)
2013/03/23 19:31:06 [error] 12701#0: *88 upstream prematurely closed connection while reading response header from upstream, client: MY_IP, server: HOST, request: "POST /files/upload HTTP/1.1", upstream: "http://127.0.0.1:8801/files/upload", host: "HOST", referrer: "http://HOST/files/upload"
Django log
2013-03-23 19:31:06 [12634] [CRITICAL] WORKER TIMEOUT (pid:12829)
2013-03-23 19:31:06 [12634] [CRITICAL] WORKER TIMEOUT (pid:12829)
2013-03-23 19:31:06 [13854] [INFO] Booting worker with pid: 13854
Question(s)
How do I fix this?
Is it possible to fix it without the nginx upload module?
Update 1
Tried the suggested config:
gunicorn --workers=3 --worker-class=tornado --timeout=90 --graceful-timeout=10 --log-level=DEBUG --bind localhost:8801 --debug
It works fine for me now.
I run my gunicorn with these parameters; try:
python manage.py run_gunicorn --workers=3 --worker-class=tornado --timeout=90 --graceful-timeout=10 --log-level=DEBUG --bind 127.0.0.1:8151 --debug
Or if you run it differently, you may run it with those options.
For large file handling you should use an async worker class. Also, I had some trouble using gevent with Python 3.7; it's better to use 3.6.
Django, Python 3.6 example:
Install:
pip install gevent
Run:
gunicorn --chdir myApp myApp.wsgi --workers 4 --worker-class=gevent --bind 0.0.0.0:80 --timeout=90 --graceful-timeout=10
You need to use another worker class, an async one like gevent or tornado; see these excerpts from the Gunicorn docs for more explanation.
First explanation:
You may also want to install Eventlet or Gevent if you expect that your application code may need to pause for extended periods of time during request processing
Second one:
The default synchronous workers assume that your application is resource bound in terms of CPU and network bandwidth. Generally this means that your application shouldn’t do anything that takes an undefined amount of time. For instance, a request to the internet meets this criteria. At some point the external network will fail in such a way that clients will pile up on your servers.

Empty docstrings when running django-piston docs under nginx

I'm using django-piston for my REST JSON API, and I have it all set up for documentation through piston's built-in generate_doc function. Under the Django runserver it works great: the template that loops over the doc objects successfully lists the docstrings for both the class and each method.
When I serve the site via nginx and uwsgi, the docstrings are empty. At first I thought this was a problem with the Django markup filter and restructuredtext formatting, but when I turned that off and simply tried to see the raw docstring values in the template, they were None.
I don't see any issues in the logs, and I can't understand why nginx/uwsgi is the factor here; it honestly does work great under the dev runserver. I'm stuck on how to start debugging this through nginx/uwsgi. Has anyone run into this situation, or does anyone have a suggestion of where I can start to look?
My doc view is pretty simple:
views.py
def ApiDoc(request):
    docs = [
        generate_doc(handlers.UsersHandler),
        generate_doc(handlers.CategoryHandler),
    ]
    c = {
        'docs': docs,
        'model': 'Users'
    }
    return render_to_response("api/docs.html", c, RequestContext(request))
And my template is almost identical to the stock piston template:
api/docs.html
{% load markup %}
...
{% for doc in docs %}
    <h5>top</h5>
    <h3><a id="{{doc.name}}">{{ doc.name|cut:"Handler" }}:</a></h3>
    <p>
        {{ doc.doc|default:""|restructuredtext }}
    </p>
    ...
    {% for method in doc.get_all_methods %}
        {% if method.http_name in doc.allowed_methods %}
            <dt><a id="{{doc.name}}_{{method.http_name}}">request</a> <i>{{ method.http_name }}</i></dt>
            {% if method.doc %}
                <dd>
                    {{ method.doc|default:""|restructuredtext }}
                </dd>
            {% endif %}
The rendered result of this template under nginx would be that doc.doc and method.doc are None. I have tried removing the filter and just checking the raw value to confirm this.
I'm guessing the problem has to be somewhere in the uwsgi layer and its environment. I'm running uwsgi with a config like this:
/etc/init/uwsgi.conf
description "uWSGI starter"
start on (local-filesystems
and runlevel [2345])
stop on runlevel [016]
respawn
exec /usr/sbin/uwsgi \
    --uid www-data \
    --socket /opt/run/uwsgi.sock \
    --master \
    --logto /opt/log/uwsgi_access.log \
    --logdate \
    --optimize 2 \
    --processes 4 \
    --harakiri 120 \
    --post-buffering 8192 \
    --buffer-size 8192 \
    --vhost \
    --no-site
And my nginx server entry location snippet looks like this:
sites-enabled/mysite.com
server {
    listen 80;
    server_name www.mysite.com mysite.com;
    set $home /var/www/mysite.com/projects/mysite;
    set $pyhome /var/www/mysite.com/env/mysite;
    root $home;
    ...
    location ~ ^/(admin|api)/ {
        include uwsgi_params;
        uwsgi_pass uwsgi_main;
        uwsgi_param UWSGI_CHDIR $home;
        uwsgi_param UWSGI_SCRIPT wsgi_app;
        uwsgi_param UWSGI_PYHOME $pyhome;
        expires epoch;
    }
    ...
}
Edit: Configuration Info
Server: Ubuntu 11.04
uWSGI version 1.0
nginx version: nginx/1.0.11
django non-rel 1.3.1
django-piston latest pypi 0.2.3
python 2.7
uWSGI is starting up the interpreter in a way equivalent to passing the -OO option on the Python command line. This second level of optimisation removes docstrings:
-OO : remove doc-strings in addition to the -O optimizations
Change:
--optimize 2
to:
--optimize 1
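A quick way to verify the behaviour is a throwaway script (docstring_check.py is a hypothetical name), since piston's generate_doc ultimately reads the handlers' __doc__ attributes:
# docstring_check.py -- run "python docstring_check.py" and then
# "python -OO docstring_check.py" to compare
def handler():
    """Returns the list of users."""

print(handler.__doc__)  # prints the docstring normally, None under -OO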