Gunicorn doesn't log the real IP from nginx - Django

I run a Django app via gunicorn, supervisor and nginx as a reverse proxy, and I'm struggling to make my gunicorn access log show the actual client IP instead of 127.0.0.1.
Log entries look like this at the moment:
127.0.0.1 - - [09/Sep/2014:15:46:52] "GET /admin/ HTTP/1.0" ...
supervisord.conf
[program:gunicorn]
command=/opt/middleware/bin/gunicorn --chdir /opt/middleware -c /opt/middleware/gunicorn_conf.py middleware.wsgi:application
stdout_logfile=/var/log/middleware/gunicorn.log
gunicorn_conf.py
from os import environ
from gevent import monkey
import multiprocessing
monkey.patch_all()
bind = "0.0.0.0:9000"
x_forwarded_for_header = "X-Real-IP"
policy_server = False
worker_class = "socketio.sgunicorn.GeventSocketIOWorker"
accesslog = '-'
My nginx site conf:
server {
    listen 80;
    root /opt/middleware;
    index index.html index.htm;
    client_max_body_size 200M;
    server_name _;
    location / {
        proxy_pass http://127.0.0.1:9000; # forward to gunicorn (bound to 0.0.0.0:9000 above)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        real_ip_header X-Real-IP;
    }
}
I tried all sorts of combinations in the location {} block, but none of them seem to make any difference. Any hint is appreciated.

The problem is that you need to configure gunicorn's logging, because it will (by default) not display any custom headers.
From the documentation, we find out that the default access log format is controlled by access_log_format and is set to the following:
"%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
where:
h - the remote address
l - '-' (not used)
u - '-' (not used, reserved)
t - the timestamp of the request
r - the status line (e.g. GET /admin/ HTTP/1.0)
s - the status code of the response
b - the length of the response
f - the referer
a - the user agent
You can also customize it with the following extra variables that are not used by default:
T - request time (in seconds)
D - request time (in microseconds)
p - the process id
{Header}i - request header (custom)
{Response}o - response header (custom)
To gunicorn, all requests are coming from nginx, so it will show nginx's address as the remote IP. To get it to log the custom headers you are sending from nginx, you'll need to adjust this parameter and add the appropriate variables; in your case you would set it to the following:
%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" "%({X-Real-IP}i)s"

Note that headers containing - should be referred to here by replacing - with _, thus X-Forwarded-For becomes %({X_Forwarded_For}i)s.
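Putting it together in the gunicorn_conf.py from the question, the change could look like this (a sketch; whether the header key needs the dash or underscore form depends on your gunicorn version, per the note above):
accesslog = '-'
# default format plus the client IP that nginx forwards in X-Real-IP
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" "%({X-Real-IP}i)s"'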

Related

How to fix 504 error caused by Docker image update

I have a Django project. It runs with nginx, uWSGI and Google Cloud Run.
The project uses Docker with the python:3.9 image. I have been getting this error since Aug 17:
2021-10-13 17:22:29.654 JST  GET  504  717 B  899.9 s  GoogleStackdriverMonitoring-UptimeChecks(https://cloud.google.com/monitoring)  https://xxxx/
The request has been terminated because it has reached the maximum request timeout. To change this limit, see https://cloud.google.com/run/docs/configuring/request-timeout
This error occurs on all my pages, yet when I open the pages myself they load fine, so I never see the 504 directly; I can only tell it happens from the server log.
I added a line to admin.py on Aug 17, but I don't think that line is related to this error, because the change only affects the admin page. I rolled my code back to before the error appeared, and I still can't fix it.
The built Docker image differs in size from before the error, and its reported vulnerability count has decreased, so I think this was caused by some small change in the python base image. In this case, how can I solve the problem?
What I tried
I changed the Docker image to python:3.8 and to python:3.9.6-buster; neither fixed the error.
I solved this problem by changing the uWSGI connection from a socket to an HTTP port.
These are my settings:
uwsgi.ini
[uwsgi]
# this config will be loaded if nothing specific is specified
# load base config from below
ini = :base
# %d is the dir this configuration file is in
http = 127.0.0.1:8000
master = true
processes = 4
max-requests = 1000 ; Restart workers after this many requests
max-worker-lifetime = 3600 ; Restart workers after this many seconds
reload-on-rss = 512 ; Restart workers after this much resident memory
threaded-logger = true
[dev]
ini = :base
# socket (uwsgi) is not the same as http, nor http-socket
socket = :8001
[local]
ini = :base
http = :8000
# set the virtual env to use
home = /Users/you/envs/env
[base]
# chdir to the folder of this config file, plus app/website
chdir = %dapp/
# load the module from wsgi.py, it is a python path from
# the directory above.
module = website.wsgi:application
# allow anyone to connect to the socket. This is very permissive
chmod-socket = 666
nginx-app.conf
# the upstream component nginx needs to connect to
upstream django {
    # server unix:/code/app.sock; # for a file socket
    server 127.0.0.1:8000; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on; default_server indicates that this server block
    # is the block to use if no blocks match the server_name
    listen 8080;
    # the domain name it will serve for
    server_name MY_DOMAIN.COM; # substitute your machine's IP address or FQDN
    charset utf-8;
    # max upload size
    client_max_body_size 10M; # adjust to taste
    # set timeouts
    uwsgi_read_timeout 900;
    proxy_read_timeout 900;
    # Django media
    location /media {
        alias /code/app/media; # your Django project's media files - amend as required
    }
    location /static {
        alias /code/app/static; # your Django project's static files - amend as required
    }
    # Finally, send all non-media requests to the Django server.
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        include /code/uwsgi_params; # the uwsgi_params file you installed
    }
}
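The distinction the uwsgi.ini comment hints at: socket speaks uWSGI's binary protocol and must be paired with nginx's uwsgi_pass, http puts a small HTTP router in front of the workers and pairs with proxy_pass (the setup above), while http-socket makes the workers themselves speak HTTP without the extra router. A sketch of the two nginx pairings, using the ports from the configs above:
# pairing for "http = 127.0.0.1:8000" (plain HTTP, as used above)
location / {
    proxy_pass http://127.0.0.1:8000;
}
# pairing for "socket = :8001" (uwsgi binary protocol, the [dev] section)
location / {
    include /code/uwsgi_params;
    uwsgi_pass 127.0.0.1:8001;
}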

Deploying Django Channels with nginx

I deployed Django with nginx following the DigitalOcean tutorials, then followed the "Example Setup" section of the Channels documentation after installing Channels.
My points of confusion:
When setting up the configuration file for supervisor, it says to set the directory as
directory=/my/app/path
Should I put the path where manage.py is, or the path where settings.py is?
When I reload nginx after changing the nginx configuration file, I get an error saying
host not found in upstream "channels-backend" in
/etc/nginx/sites-enabled/mysite:18 nginx: configuration file
/etc/nginx/nginx.conf test failed
I did replace "mysite" with the name of my website. Earlier I had another error saying
no live upstreams while connecting to upstream
but I could not reproduce it.
I am new to Channels, so any additional information on upstreams would be helpful. Please let me know if I need to provide more information.
Edit:
Here is the nginx.conf file. I replaced some sensitive data with <> placeholders.
upstream channels-backend {
    server localhost:8000;
}
server {
    listen 80;
    server_name <domain name> <ip address>;
    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root <root to static>;
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
        try_files $uri @proxy_to_app;
    }
    location @proxy_to_app {
        proxy_pass http://channels-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
This passes nginx -t. The error message I get in error.log:
connect() failed (111: Connection refused) while connecting to upstream, client: <some ip>, server: <domain name>, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8000/", host: "<domain name>"
The problem was actually in the supervisor configuration file.
[fcgi-program:asgi]
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8000
# Directory where your site's project files are located
directory=/my/app/path
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command=daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=4
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/your/log/asgi.log
redirect_stderr=true
To check if supervisor was running correctly, I ran
sudo supervisorctl status
This gave me a FATAL status. The problem was that I am using a virtual environment, and daphne was only installed inside the virtual environment. The command should therefore use the virtualenv's daphne, something like
command= /my/project/virtualenv/path/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application
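After pointing the command at the virtualenv's daphne, reloading supervisor and re-checking should show the processes as RUNNING (a typical sequence; the group name asgi comes from the [fcgi-program:asgi] section above):
sudo supervisorctl reread   # pick up the edited config
sudo supervisorctl update   # apply it (restarts the program group)
sudo supervisorctl status   # the asgi processes should now be RUNNING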

django: nginx: HTTP_HOST does not show port number

I have the following setup running in production.
The Nginx config is as follows:
# first we declare our upstream server, which is our Gunicorn application
upstream hello_server {
    # docker will automatically resolve this to the correct address
    # because we use the same name as the service: "djangoapp"
    server webapp:8888;
}
# now we declare our main server
server {
    listen 8558;
    server_name localhost;
    location / {
        # everything is passed to Gunicorn
        proxy_pass http://hello_server;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
The nginx server has port forwarding 8555:8558.
And the gunicorn command running is
gunicorn --bind :8888 basic_django.wsgi:application
Now in my browser I open this URL:
http://127.0.0.1:8555/login_register_password/user_login_via_otp_form_email
Now the code in one of my views is:
prev_url = request.META['HTTP_REFERER']
# e.g. prev_url = "http://127.0.0.1:8555/login_register_password/user_login_via_otp_form_email"
# We want to build the URL from the namespace, so we use reverse(),
# but that gives a relative URL, not the full URL with the domain:
login_form_email_url_reverse = reverse("login_register_password_namespace:user_login_via_otp_form_email")
# e.g. login_form_email_url_reverse = "/login_register_password/user_login_via_otp_form_email"
# To get the full URL we have to do the following:
login_form_email_url_reverse_full = request.build_absolute_uri(login_form_email_url_reverse)
# e.g. login_form_email_url_reverse_full = "http://127.0.0.1/login_register_password/user_login_via_otp_form_email"
I am expecting prev_url and login_form_email_url_reverse_full to be the same, but they differ:
the prev_url domain is http://127.0.0.1:8555, whereas the login_form_email_url_reverse_full domain is http://127.0.0.1.
Why is this happening?
It does not happen with the development server (runserver):
"HTTP_HOST": "127.0.0.1:8555",
"HTTP_REFERER": "http://127.0.0.1:8555/login_register_password/user_login_via_otp_form_email",
Whereas with the nginx server, HTTP_HOST changes, i.e. it now lacks the port number:
"HTTP_HOST": "127.0.0.1",
"HTTP_REFERER": "http://127.0.0.1:8555/login_register_password/user_login_via_otp_form_email",
I solved the problem by changing
proxy_set_header Host $host;
to
proxy_set_header Host $http_host;
in the server {} block of nginx's local.conf.
Got the answer from https://serverfault.com/a/916736/565479
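The underlying difference, as I understand it: nginx's $host variable is the host name with any port stripped, while $http_host is the client's Host request header passed through verbatim, port included. So with the original config Django saw Host: 127.0.0.1 and built absolute URIs without the port:
# with proxy_set_header Host $host;       Django sees HTTP_HOST = "127.0.0.1"
# with proxy_set_header Host $http_host;  Django sees HTTP_HOST = "127.0.0.1:8555"
proxy_set_header Host $http_host;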

Nginx: 403 Forbidden nginx/1.12.1 (Ubuntu)

I've never configured a production server before. I'm trying to configure nginx and keep getting a 403 Forbidden error, and I can't figure out why it's happening.
Here is a complete error report:
[crit] 25145#25145: *1 connect() to unix:/home/albert/deploy_test/django_env/run/gunicorn.sock failed (13: Permission denied) while connecting to upstream, client: 192.168.1.118, server: 192.168.1.118, request: "GET / HTTP/1.1", upstream: "http://unix:/home/albert/deploy_test/django_env/run/gunicorn.sock:/", host: "192.168.1.118"
Here is my /etc/nginx/sites-available/deployproject.conf:
(I removed the default config and created a symlink as follows: sudo ln -s /etc/nginx/sites-available/deployproject.conf /etc/nginx/sites-enabled/deployproject.conf)
upstream sample_project_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/home/albert/deploy_test/django_env/run/gunicorn.sock fail_timeout=0;
}
server {
    listen 80;
    server_name 192.168.1.118;
    client_max_body_size 4G;
    access_log /home/albert/logs/nginx-access.log;
    error_log /home/albert/logs/nginx-error.log;
    location /static/ {
        alias /home/albert/static/;
    }
    location /media/ {
        alias /home/albert/media/;
    }
    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # enable this if and only if you use HTTPS; this helps Rack
        # set the proper protocol for doing redirects:
        # proxy_set_header X-Forwarded-Proto https;
        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects; we set the Host: header above already.
        proxy_redirect off;
        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff. It's also safe to set if you're
        # only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;
        # Try to serve static files from nginx; no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://sample_project_server;
            break;
        }
    }
    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/albert/static/;
    }
}
Here is the complete tutorial I'm using to deploy my app. Here I'm just trying to deploy the most primitive, default Django app, but in my real app I use Django only as the server side, so there seems to be no need for nginx to serve static files and all that.
File permissions: incorrect file permissions are another cause of the "403 Forbidden" error. The standard setting of 755 for directories and 644 for files is recommended for use with NGINX. The NGINX user also needs to be the owner of the files.
Try changing the permissions on your web dir:
sudo chown -R albert:www-data /webdirectory
sudo chmod -R 0755 /webdirectory
Move all your sites inside the web directory; do not leave the directories and files in your home root.
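Also, since the error is a Permission denied on the unix socket itself, it's worth checking that the nginx worker user (typically www-data) can traverse every directory leading to the socket; home directories are often 0700 or 0750, which blocks it. Two quick diagnostics (paths taken from the error message above):
# list the permissions of every component along the socket's path
namei -l /home/albert/deploy_test/django_env/run/gunicorn.sock
# try to access the socket as the nginx worker user
sudo -u www-data stat /home/albert/deploy_test/django_env/run/gunicorn.sock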
Have you taken a look at the gunicorn docs here, which have an example of how to configure nginx?
http://docs.gunicorn.org/en/stable/deploy.html
Can you try running gunicorn via TCP instead of a unix socket? In your upstream sample_project_server, replace the server line with:
server 192.168.0.7:8000 fail_timeout=0;
What are the settings in gunicorn? You can bind to localhost via TCP with the following, to check that it isn't a problem with your unix socket:
--bind 127.0.0.1:8000
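Put together, the TCP variant would pair the two sides like this (a sketch; the WSGI module name sample_project.wsgi is assumed, and the upstream must point at whatever address gunicorn binds to):
# gunicorn, bound to localhost TCP instead of the unix socket
gunicorn --bind 127.0.0.1:8000 sample_project.wsgi:application
# nginx upstream, matching the bind address
upstream sample_project_server {
    server 127.0.0.1:8000 fail_timeout=0;
}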

nginx doesn't serve static assets in Rails 4.1.8 + Spree

I have a problem with an Nginx + Unicorn + Rails 4.1 + Spree production setup, built according to this tutorial.
The site shows up at the IP address (I have yet to get a domain), but it seems the assets are not readable. This is the error log from /var/log/nginx/spree_zaza_error.log:
2014/12/21 23:06:22 [error] 13598#0: *12 open() "/home/user/workplace/spree_zaza/public/assets/spree.js" failed (2: No such file or directory), client: 213.230.83.135, server: , request: "GET /assets/spree.js?body=1 HTTP/1.1", host: "212.111.40.25", referrer: "http://212.111.40.25/t/brand/apache"
2014/12/21 23:06:22 [error] 13598#0: *11 open() "/home/user/workplace/spree_zaza/public/assets/spree/frontend/checkout.js" failed (2: No such file or directory), client: 213.230.83.135, server: , request: "GET /assets/spree/frontend/checkout.js?body=1 HTTP/1.1", host: "212.111.40.25", referrer: "http://212.111.40.25/t/brand/apache"
2014/12/21 23:06:22 [error] 13598#0: *11 open() "/home/user/workplace/spree_zaza/public/assets/logo/spree_50.png" failed (2: No such file or directory), client: 213.230.83.135, server: , request: "GET /assets/logo/spree_50.png HTTP/1.1", host: "212.111.40.25", referrer: "http://212.111.40.25/t/brand/apache"
Although I ran rake assets:precompile and there are a bunch of hashed and gzipped files, some of the requested files don't exist, but assets/logo/spree_50, for example, is there.
This is my /etc/nginx/sites-enabled/spree_zaza file:
upstream spree_zaza {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    server unix:/tmp/spree_zaza.socket fail_timeout=0;
}
server {
    # if you're running multiple servers, instead of "default" you should
    # put your main domain name here
    listen 80 default;
    # you could put a list of other domain names this application answers
    #server_name [your server's address];
    root /home/user/workplace/spree_zaza/public;
    access_log /var/log/nginx/spree_zaza_access.log;
    error_log /var/log/nginx/spree_zaza_error.log;
    rewrite_log on;
    location / {
        # all requests are sent to the UNIX socket
        proxy_pass http://spree_zaza;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
    # if the request is for a static resource, nginx should serve it directly
    # and add a far-future expires header to it, making the browser
    # cache the resource and navigate the website faster
    location ~ ^/(system|assets|spree)/ {
        root /home/user/workplace/spree_zaza/public;
        expires max;
        break;
    }
}
And the following is /home/user/workplace/spree_zaza/config/unicorn.rb:
# config/unicorn.rb
# Set environment to development unless something else is specified
#env = ENV["RAILS_ENV"] || "development"
#env = ENV["RAILS_ENV"] || "production"
env = "production"
# See http://unicorn.bogomips.org/Unicorn/Configurator.html for complete documentation.
worker_processes 3
# listen on both a Unix domain socket and a TCP port;
# we use a shorter backlog for quicker failover when busy
listen "/tmp/spree_zaza.socket", backlog: 64
# Preload our app for more speed
preload_app true
# nuke workers after 30 seconds instead of 60 seconds (the default)
timeout 30
pid "/tmp/unicorn.spree_zaza.pid"
# Production specific settings
if env == "production"
  # Help ensure your application will always spawn in the symlinked
  # "current" directory that Capistrano sets up.
  working_directory "/home/user/workplace/spree_zaza"
  # feel free to point this anywhere accessible on the filesystem
  shared_path = "/home/user/workplace/spree_zaza"
  stderr_path "#{shared_path}/log/unicorn.stderr.log"
  stdout_path "#{shared_path}/log/unicorn.stdout.log"
end
before_fork do |server, worker|
  # the following is highly recommended for Rails + "preload_app true"
  # as there's no need for the master process to hold a connection
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end
  # Before forking, kill the master process that belongs to the .oldbin PID.
  # This enables 0-downtime deploys.
  old_pid = "/tmp/unicorn.spree_zaza.pid.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end
after_fork do |server, worker|
  # the following is *required* for Rails + "preload_app true"
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end
  # if preload_app is true, then you may also want to check and
  # restart any other shared sockets/descriptors such as Memcached
  # and Redis. TokyoCabinet file handles are safe to reuse
  # between any number of forked children (assuming your kernel
  # correctly implements pread()/pwrite() system calls)
end
Also, I uncommented the following switch in config/environments/production.rb
config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'
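For what it's worth, that setting only affects responses Rails serves through send_file: Rails sets an X-Accel-Redirect header and nginx then serves the file itself, which requires a matching (usually internal) location; it does not change how /assets are served. A minimal sketch, with an illustrative location name not taken from the question:
# nginx: serve files that Rails hands off via X-Accel-Redirect
location /files/ {
    internal;                                     # reachable only via X-Accel-Redirect
    alias /home/user/workplace/spree_zaza/files/; # illustrative path
}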
Thanks for your ideas.