Request timeout/delay issue - Django

My application posts to Instagram using the Django REST framework. Posting to Instagram is a two-step process: first you send the content to a media container, then you wait until Instagram finishes processing the photo/video. Once the media is done processing, you send the creation ID to Instagram to make the post public. To ensure that Instagram has enough time to process the media before the creation ID is sent, I use Python's time.sleep to pause the request for 60 seconds. The problem is that this works on my desktop, but on my EC2 instance the same code fails whenever I pause the request for longer than 10 seconds. To run my application I am using Amazon EC2, Gunicorn and nginx.
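As an aside, a fixed sleep bakes in a worst-case wait. The Graph API exposes a status_code field on the media container (GET /{container-id}?fields=status_code, reporting values such as IN_PROGRESS, FINISHED and ERROR); verify the field name against your API version. A minimal polling sketch, with the Graph API call abstracted behind a callable so the retry logic stays independent of the HTTP details:

```python
import time

def wait_until_finished(get_status, timeout=300, interval=5):
    """Poll get_status() until the media container reports FINISHED.

    get_status: a callable returning the container's status_code string
    (e.g. "IN_PROGRESS", "FINISHED", "ERROR" per the Graph API docs).
    Returns True on FINISHED, False on ERROR or timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "FINISHED":
            return True
        if status == "ERROR":
            return False
        time.sleep(interval)  # back off between polls
    return False
```

In the view, get_status would be a small lambda wrapping a requests.get against the container ID. Note this still blocks the request worker, so it does not by itself fix the timeout; it only shortens the typical wait.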
My Django code is this:
```
# print('posting photo to facebook')
video_url = data.get('video_url')
container_url = (
    "https://graph.facebook.com/" + page_id + "/media"
    "?media_type=VIDEO"
    "&video_url=" + video_url +
    "&caption=" + text +
    "&access_token=" + access_token
)
res = requests.post(container_url)
data = res.json()
time.sleep(60)  # give Instagram time to process the media
publish_url = (
    "https://graph.facebook.com/" + page_id + "/media_publish"
    "?creation_id=" + data['id'] +
    "&access_token=" + access_token
)
res = requests.post(publish_url)
data = res.json()
print('result of video posting attempt')
print(json.dumps(data))
# print(res.text)
if res.status_code != 200:
    return Response(data={"data": {"id": "", "post_id": ""}})
return Response(data=data, status=res.status_code)
```
My gunicorn.service file:
```
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu/xxxxx-xxxxxx-xxxxxx
ExecStart=/home/ubuntu/xxxxx-xxxxxx-xxxxxx/xxxxxxx/bin/gunicorn \
    --access-logfile - \
    --workers 3 \
    --bind unix:/run/gunicorn.sock \
    xxxxxxxxxx.wsgi:application
TimeoutStopSec=120

[Install]
WantedBy=multi-user.target
```
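One thing worth ruling out given the symptoms: the ExecStart line above passes no --timeout, and gunicorn's default sync workers are killed and restarted when a request takes longer than 30 seconds, regardless of the nginx proxy_*_timeout values. If the worker timeout turns out to be the culprit, it can be raised in the service file (120 here is an illustrative value, not a recommendation):

```
ExecStart=/home/ubuntu/xxxxx-xxxxxx-xxxxxx/xxxxxxx/bin/gunicorn \
    --access-logfile - \
    --workers 3 \
    --timeout 120 \
    --bind unix:/run/gunicorn.sock \
    xxxxxxxxxx.wsgi:application
```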
My nginx config file:
```
server {
    listen 80;
    listen [::]:80;
    server_name xxxxxxxxx.com *.xxxxxxxxx.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name xxxxxxxxx.com *.xxxxxxxxx.com;

    # SSL/TLS settings
    ssl_certificate /etc/letsencrypt/live/xxxxxxxxx.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xxxxxxxxx.com/privkey.pem;

    client_max_body_size 100M;
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }
    location /static/ {
        root /home/ubuntu/xxxxxxxxx-xxxxxxxxx-xxxxxxxxx;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
```

I recommend using Celery for delayed tasks. For example:
```
from datetime import timedelta

import requests
from celery import shared_task
from django.utils import timezone

@shared_task
def task_make_public(publish_url):
    res = requests.post(publish_url)
    ...

res = requests.post(container_url)
data = res.json()
publish_url = ...
task_make_public.apply_async((publish_url,), eta=timezone.now() + timedelta(seconds=60))
```
A simpler setup is django-background-tasks. It doesn't need a Celery worker, Celery beat and Redis to run tasks the way Celery does.
Install from PyPI:
```
pip install django-background-tasks
```
Add to INSTALLED_APPS:
```
INSTALLED_APPS = (
    # ...
    'background_task',
    # ...
)
```
Migrate the database:
```
python manage.py migrate
```
Code example:
```
from datetime import timedelta

import requests
from background_task import background
from django.contrib.auth.models import User

@background
def task_make_public(publish_url):
    res = requests.post(publish_url)
    ...

res = requests.post(container_url)
data = res.json()
publish_url = ...
task_make_public(publish_url, schedule=timedelta(seconds=60))  # 60 seconds from now
```
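Note that django-background-tasks only stores the task in the database when it is scheduled; a separate process has to be running to actually execute queued tasks:

```
python manage.py process_tasks
```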

Related

Nginx not serving Django 3 through uWSGI

I am trying to serve a Django 3 build from nginx through uWSGI and I cannot seem to get past the last stage. I can browse to the Django site via runserver and by launching uWSGI with the uwsgi.ini, but when I try to browse to the site through nginx I get a 502 error (bad gateway) in Firefox. If I try from a remote site it renders the nginx home page.
I have a build of Apache on the same server; I had to spoof an IP and use a unique port to get them running side by side.
The nginx error.log does not register any problem.
Below is the uwsgi.ini:
```
[uwsgi]
# variables
projectname = website
base = /opt/website/

# configuration
master = true
http = :8000
uid = nginx
virtualenv = /opt/website/djangoprojectenv
pythonpath = %(base)
chdir = %(base)
env = DJANGO_SETTINGS_MODULE=%(projectname).settings.pro
#module = %(projectname).wsgi:application
module = website.wsgi:application
socket = /tmp/%(projectname).new.sock
chown-socket = %(uid):nginx
chmod-socket = 666
```
And below is the conf file from nginx/conf.d:
```
server {
    listen 192.168.1.220:81;
    server_name willdoit.com;
    access_log off;
    error_log /var/log/nginx_error.log;
    location / {
        uwsgi_pass unix:/tmp/website.sock;
        include /etc/nginx/uwsgi_params;
        uwsgi_read_timeout 300s;
        client_max_body_size 32m;
    }
}
```
The /tmp/website.sock file is owned by nginx:nginx.
If there are additional details I need to post, please advise. Any help you can provide would be welcome.
The socket was defined incorrectly: the uwsgi.ini creates /tmp/website.new.sock (socket = /tmp/%(projectname).new.sock) while nginx points at /tmp/website.sock. Closing.

nginx - multiple Django apps, same domain, different URLs

I want to serve multiple Django projects (actually Django REST API apps) on one domain, but serve each of them on a separate URL, like this:
http://test.com/app1/...
http://test.com/app2/...
and so on. I will be using nginx to configure it, but I'm facing some problems and need your help:
These apps should have different cookies from each other, because they have different auth systems, so a token or cookie valid in one is not valid for another. How do I handle this?
What nginx configs do you recommend?
Note:
I don't want full detail because I know the concepts; some hints and useful commands will do.
Update:
For example, I have a Django app which has a URL test, and I want this path to be served on the server as /app1/test. The problem is that when I send a request to /app1/test, Django doesn't recognize it as /test but as /app1/test, and because /app1 is not registered in urls.py it gives a 404 error.
Here is a sample of my nginx config:
```
server {
    listen 80;
    server_name test.com;
    location /qpp1/ {
        include uwsgi_params;
        proxy_pass http://unix://home//app1.sock;
    }
    location /qpp2/ {
        include uwsgi_params;
        proxy_pass http://unix://home//app2.sock;
    }
}
```
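For the 404 problem from the update (Django seeing /app1/test instead of /test), one common approach when serving the apps through uWSGI is to keep the /app1/ prefix out of urls.py and instead mount the app under the prefix in the uWSGI config, letting uWSGI split SCRIPT_NAME from PATH_INFO so Django sees /test. A hedged sketch, where app1_project is a placeholder module name (Django alternatively offers the FORCE_SCRIPT_NAME setting for the same purpose):

```
; uwsgi ini: mount the app under the prefix and let uWSGI
; rewrite SCRIPT_NAME/PATH_INFO accordingly
manage-script-name = true
mount = /app1=app1_project.wsgi:application
```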
You can try to play with the proxy_cookie_path directive:
```
server {
    ...
    location /app1/ {
        proxy_cookie_path / /app1/;
        proxy_pass http://backend1/;
    }
    location /app2/ {
        proxy_cookie_path / /app2/;
        proxy_pass http://backend2/;
    }
}
```
Update
Here is another variant of configuration to test:
```
upstream qpp1 {
    server unix:/home/.../app1.sock;
}
upstream qpp2 {
    server unix:/home/.../app2.sock;
}
server {
    listen 80;
    server_name test.com;
    location /qpp1/ {
        include uwsgi_params;
        proxy_cookie_path / /qpp1/;
        proxy_pass http://qpp1/;
    }
    location /qpp2/ {
        include uwsgi_params;
        proxy_cookie_path / /qpp2/;
        proxy_pass http://qpp2/;
    }
}
```
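A detail worth double-checking in both variants: include uwsgi_params pairs with uwsgi_pass (the binary uwsgi protocol), while proxy_pass and proxy_cookie_path speak HTTP. If the backends are uWSGI sockets, either uWSGI must expose HTTP on those sockets (e.g. via its http-socket option), or the locations can use uwsgi_pass, sketched here for the first app:

```
location /qpp1/ {
    include uwsgi_params;
    uwsgi_pass unix:/home/.../app1.sock;
}
```

With uwsgi_pass, proxy_cookie_path no longer applies, so cookie paths would need to be set on the Django side instead (e.g. SESSION_COOKIE_PATH).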
Because I am not using nginx, Django's SESSION_COOKIE_PATH setting was my solution.
https://docs.djangoproject.com/en/3.1/ref/settings/#session-cookie-path
In your example, you could set it to:
app1:
```
SESSION_COOKIE_PATH = "/app1/"
```
app2:
```
SESSION_COOKIE_PATH = "/app2/"
```
Afterwards, clear the cookies for the domain in your browser if you've logged in before.

uwsgi ImportError: No module named os

I'm teaching myself how to set up an Ubuntu server to run my Django application. I want to use Nginx + uWSGI. I know this question may be very easy for experts, but I've spent six days searching the internet without success (in any case, forgive me if there is a link with the answer). I've followed a lot of tutorials and posts but didn't find a solution.
I describe my file structure below:
My django project is located in /usr/local/projects/myproject
My virtualenv is in /root/.virtualenvs/myproject
My uwsgi config file myproject.ini is in /etc/uwsgi/apps-available/ and correctly symlinked in /etc/uwsgi/apps-enabled/:
```
[uwsgi]
plugins = python
socket = /tmp/myproject.sock
chmod-socket = 644
uid = www-data
gid = www-data
master = true
enable-threads = true
processes = 2
no-site = true
virtualenv = /root/.virtualenvs/myproject
chdir = /usr/local/projects/myproject
module = myproject.wsgi:application
pidfile = /usr/local/projects/myproject/myproject.pid
logto = /var/log/uwsgi/myproject_uwsgi.log
vacuum = true
```
My nginx config file myproject.conf is in /etc/nginx/sites-available/ and correctly symlinked in /etc/nginx/sites-enabled/:
```
# the upstream component nginx needs to connect to
upstream django {
    server unix:///tmp/myproject.sock; # for a file socket
}
server {
    listen 80;
    server_name dev.myproject.com www.dev.myproject.com;
    access_log /var/log/nginx/myproject_access.log;
    error_log /var/log/nginx/myproject_error.log;
    location / {
        uwsgi_pass unix:///tmp/myproject.sock;
        include /etc/nginx/uwsgi_params;
        uwsgi_param UWSGI_SCRIPT myproject.wsgi;
    }
    location /media/ {
        alias /usr/local/projects/myproject/media/;
    }
    location /static/ {
        alias /usr/local/projects/myproject/static/;
    }
}
```
When I try to access dev.myproject.com I get an Internal Server Error. Then I take a look at my uwsgi log:
```
Traceback (most recent call last):
  File "./myproject/wsgi.py", line 9, in <module>
    import os
ImportError: No module named os
Sat Jul 26 17:39:16 2014 - unable to load app 0 (mountpoint='') (callable not found or import error)
Sat Jul 26 17:39:16 2014 - --- no python application found, check your startup logs for errors ---
[pid: 8559|app: -1|req: -1/8] 79.148.138.10 () {40 vars in 685 bytes} [Sat Jul 26 17:39:16 2014] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 1 headers in 57 bytes (0 switches on core 0)
```
I need your help because I'm not able to find a solution, despite the possibility of it being very simple.
If you need to know anything else, let me know and I will update my question as soon as possible.
I finally found a solution. I followed kchan's suggestion about not putting any of the contents in the /root/ directory. Basically, I made some small changes to my myproject.conf and myproject.ini files, created a user, and structured everything as below.
uwsgi config file myproject.ini in /etc/uwsgi/apps-available/, correctly symlinked in /etc/uwsgi/apps-enabled/:
```
[uwsgi]
plugins = python
socket = /tmp/myproject.sock
chmod-socket = 644
uid = www-data
gid = www-data
master = true
enable-threads = true
processes = 2
virtualenv = /home/user/.virtualenvs/myproject
chdir = /home/user/projects/myproject
module = myproject.wsgi:application
pidfile = /home/user/projects/myproject/myproject.pid
daemonize = /var/log/uwsgi/myproject_uwsgi.log
vacuum = true
```
nginx config file myproject.conf in /etc/nginx/sites-available/, correctly symlinked in /etc/nginx/sites-enabled/:
```
# the upstream component nginx needs to connect to
upstream django {
    server unix:///tmp/myproject.sock; # for a file socket
}
server {
    listen 80;
    server_name dev.myproject.com www.dev.myproject.com;
    access_log /var/log/nginx/myproject_access.log;
    error_log /var/log/nginx/myproject_error.log;
    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params;
    }
    location /media/ {
        alias /home/user/projects/myproject/media/;
    }
    location /static/ {
        alias /home/user/projects/myproject/static/;
    }
}
```
I must say that I think the real problem was trying to set up my DB configuration in the postactivate file of my virtualenv. Hope this helps someone else.

uWSGI not releasing memory

I tried my hand at an extremely small Django app that serves mainly HTML + static content with no DB operations. The app runs on nginx and uWSGI. I also have Postgres installed, but for this issue I did not do any DB operations.
I find that memory is not being released by the uwsgi process. In this chart from New Relic, you will see that the memory occupied by the uwsgi process remains stagnant at ~100MB, even though during that stagnancy there has been absolutely no activity on the website/app.
Also FYI: the app/uwsgi process consumed only 56MB when it started. It reached ~100MB while I was testing with ab (Apache Benchmark), hitting it with -n 1000 -c 10 or around that range.
Nginx conf:
```
server {
    listen 80;
    server_name <ip_address>;
    root /var/www/mywebsite.com/;
    access_log /var/www/logs/nginx_access.log;
    error_log /var/www/logs/nginx_error.log;
    charset utf-8;
    default_type application/octet-stream;
    tcp_nodelay off;
    gzip on;
    location /static/ {
        alias /var/www/mywebsite.com/static/;
        expires 30d;
        access_log off;
    }
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/var/www/mywebsite.com/django.sock;
    }
}
```
app_uwsgi.ini:
```
[uwsgi]
plugins = python

; define variables to use in this script
project = myapp
base_dir = /var/www/mywebsite.com
app = reloc

uid = www-data
gid = www-data

; process name for easy identification in top
procname = %(project)

no-orphans = true
vacuum = true
master = true
harakiri = 30
processes = 2

pythonpath = %(base_dir)/
pythonpath = %(base_dir)/src
pythonpath = %(base_dir)/src/%(project)

logto = /var/www/logs/uwsgi.log
chdir = %(base_dir)/src/%(project)
module = reloc.wsgi:application

socket = /var/www/mywebsite.com/django.sock
chmod-socket = 666
chown-socket = www-data
```
Update 1: So it looks like it's not uwsgi, but the Python process that retains certain data structures for faster processing.
It is common for web frameworks to load their code into memory. This is not generally a problem, but it is not a bad idea to put a cap on each worker's total memory consumption since, over the course of several requests, an individual worker's memory use may grow.
When a worker reaches or exceeds the cap, it restarts itself once the current request is served. This is done via the reload-on-rss option; what you set it to depends on the memory available on your server and the number of workers you are running.
You might also limit the maximum number of requests per worker with the max-requests option in your .ini file. This will kill a worker once it has handled the specified number of requests and spawn a new one.
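As a concrete sketch, both caps go in the [uwsgi] section of the .ini; the numbers below are illustrative, not recommendations:

```
; restart a worker once its RSS exceeds this value (in megabytes)
reload-on-rss = 200
; recycle each worker after this many handled requests
max-requests = 1000
```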

Nginx + uWSGI basic configuration

I'm new to both. I got two Django skeleton apps running (they just show the "It works!" page) using Emperor, but I want to try it without Emperor, to better understand how it works.
My nginx.conf:
# snipped...
server {
listen 92;
server_name example.com;
access_log /home/john/www/example.com/logs/access.log;
error_log /home/john/www/example.com/logs/error.log;
location / {
include uwsgi_params;
uwsgi_pass 127.0.0.1:8001;
}
}
# snipped...
And I start uWSGI with:
```
$ uwsgi --ini /home/john/www/example.com/uwsgi.ini
```
With uwsgi.ini being:
```
[uwsgi]
http = :8001
chdir = /home/john/www/example.com/example
module = example.wsgi
master = True
home = /home/john/Envs/example.com
```
Once uwsgi and nginx are running, I can access localhost:8001, but not localhost:92.
What am I missing?
Thanks in advance.
You are telling the uWSGI process to serve the application over the HTTP protocol. This feature is meant mainly for developer convenience. You should instead tell it to use the uwsgi protocol:
```
[uwsgi]
protocol = uwsgi
socket = 127.0.0.1:8001
chdir = /home/john/www/example.com/example
module = example.wsgi
master = True
home = /home/john/Envs/example.com
```