OAuth Callback URL incompatible with nginx proxy server behavior - django

I have spent a good part of the last 3 days trying every solution I could find online, and I'm getting desperate. Here's the problem statement:
I have a Dockerized app with three services:
A django application with gunicorn (web)
A Nginx server (nginx)
PostgreSQL (db)
My web application requires users to log in with their GitHub account through a fairly standard OAuth process. This has always worked without nginx: the user clicks the "log in with GitHub" button, gets sent to GitHub to authorize the application, and is redirected back to a completion page.
I have the "Authorization callback URL" filled in as http://localhost:8000. Without Nginx I can navigate to the app on localhost, click the button, and, upon authorization, get redirected back to my app on localhost.
With Nginx, it always fails with this error (from the nginx console):
GET /auth/complete/github/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-authorization-request-errors%2F%23redirect-uri-mismatch&state=nmegLb41b959su31nRU4ugFOzAqE8Cbl HTTP/1.1
This is my Nginx configuration:
upstream webserver {
    # docker will automatically resolve this to the correct address
    # because we use the same name as the service: "web"
    server web:8000;
}

# now we declare our main server
server {
    listen 80;
    server_name localhost;

    location / {
        # everything is passed to Gunicorn
        proxy_pass http://webserver;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
    }

    location /static {
        autoindex off;
        alias /static_files/;
    }

    location /media {
        alias /opt/services/starport/media;
    }
}
This is my docker-compose.yml:
version: '3.7'

services:
  web:
    build: .
    command: sh -c "cd starport && gunicorn starport.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - static_files:/static_files  # <-- bind the static volume
    networks:
      - nginx_network

  nginx:
    image: nginx
    ports:
      - 8000:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_files:/static_files  # <-- bind the static volume
    depends_on:
      - web
    networks:
      - nginx_network

networks:
  nginx_network:
    driver: bridge

volumes:
  static_files:
My hunch is that the reason it works without Nginx but fails behind the Nginx http server has to do with the redirect, since Nginx listens on one port and then forwards to a different one. GitHub's documentation specifically says that the redirect URI needs to be exactly the same as the registered callback URL. I've also used the browser inspector tools, and these are my request headers:
GET /accounts/login/ HTTP/1.1
Cookie: ...
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Upgrade-Insecure-Requests: 1
Host: localhost:8000
User-Agent: Mozilla/5.0
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: keep-alive
The error message I get with Nginx (again, stressing that it works like a charm, without error, 10 out of 10 times without nginx) is:
http://localhost:8000/auth/complete/github/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-authorization-request-errors%2F%23redirect-uri-mismatch
As an additional detail, I'm using the social-auth-app-django package but this should not matter.
Further troubleshooting
After countless hours of probing, I ran this locally in debug mode and this time closely monitored my request information. When the user hits the link to authorize with GitHub via OAuth, this is the request along with all the header information:
CSRF_COOKIE 'abc'
HTTP_ACCEPT 'text/html,application/xhtml+xml'
HTTP_ACCEPT_ENCODING 'gzip, deflate'
HTTP_ACCEPT_LANGUAGE 'en-us'
HTTP_CONNECTION 'close'
HTTP_COOKIE ('csrftoken=...; ')
HTTP_HOST 'localhost'
HTTP_REFERER 'http://localhost:8000/accounts/login/?next=/'
HTTP_USER_AGENT ('Mozilla/5.0')
HTTP_X_FORWARDED_FOR '172.26.0.1'
HTTP_X_FORWARDED_PROTO 'http'
PATH_INFO '/auth/complete/github/'
SERVER_NAME '0.0.0.0'
SERVER_PORT '8000'
QUERY_STRING
'error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-authorization-request-errors%2F%23redirect-uri-mismatch&state=NwUhVqfOCNb51zpvoFbXVvm1Cr7k3Fda'
RAW_URI
'/auth/complete/github/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-authorization-request-errors%2F%23redirect-uri-mismatch&state=NwUhVqfOCNb51zpvoFbXVvm1Cr7k3Fda'
What immediately stood out to me were the values of HTTP_HOST, HTTP_REFERER and SERVER_NAME. What's also interesting to me is that the error message says:
http://localhost/auth/complete/github/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-authorization-request-errors%2F%23redirect-uri-mismatch&state=NwUhVqfOCNb51zpvoFbXVvm1Cr7k3Fda
where instead of http://localhost:8000 it only has http://localhost, which looks like a big hint that I am not configuring things correctly. Any leads or assistance would help!
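If the stripped port really is the culprit, the direction I've been considering (untested on my side, so treat it as a sketch rather than a confirmed fix) is to forward the Host header the browser actually sent, port included, instead of nginx's $host variable, which drops the port:

    location / {
        proxy_pass http://webserver;
        # $http_host keeps the port the browser used (localhost:8000),
        # whereas $host strips it, matching the HTTP_HOST 'localhost' seen above
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
    }

Alternatively, publishing nginx on host port 80 (80:80) and registering http://localhost as the callback URL should also make the generated redirect URI and the registered one agree, if I understand the port handling correctly.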
Resources I've tried
Stack Overflow threads like this seem promising, but similar questions like this receive no meaningful response except to explain the error.

Related

can access django admin through nginx port 80 but not other ports

I can access the Django admin by redirecting traffic from nginx port 80 to Django port 8000. However, when I change the nginx listen port to 81, after signing in to the Django admin I receive:
Forbidden (403)
CSRF verification failed. Request aborted.
nginx.conf
server {
    listen 81;
    server_name localhost;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        include /etc/nginx/mime.types;
        alias /static/;
    }

    location / {
        proxy_pass http://backend:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
docker-compose file
version: '3.9'

services:
  backend:
    image: thequy/resume_builder_django:2.0
    build:
      context: ./backend
      dockerfile: ./docker/django/Dockerfile
    env_file:
      - .env
    command: gunicorn resume_builder.wsgi -w ${GUNICORN_WORKER_COUNT} -b 0.0.0.0:${DJANGO_PORT}
    networks:
      - resume_builder_network

  backend_nginx:
    image: thequy/resume_builder_django_nginx:1.0
    build: ./backend/docker/nginx
    ports:
      - "${BACKEND_DJANGO_PORT}:${BACKEND_DJANGO_PORT}"
    depends_on:
      - backend
    networks:
      - resume_builder_network

networks:
  resume_builder_network:
I have changed BACKEND_DJANGO_PORT=81.
I tried adding CORS_ALLOW_ALL_ORIGINS=True and CSRF_TRUSTED_ORIGINS=["http://backend_nginx:81"], but it doesn't help.
Edit: I tried changing the ports of backend_nginx to different values and realized that the host port must be 80; the nginx port doesn't matter.
Since Django 4.0, origin checking has been added to the CSRF middleware, as mentioned here: https://docs.djangoproject.com/en/4.1/ref/csrf/.
So if the origin of a request doesn't match any of the trusted origins, it raises Forbidden (403): CSRF verification failed.
In your case, you need to set the following in settings.py (I assume you are running this locally):
CSRF_TRUSTED_ORIGINS = ["http://localhost:81"]
Now the question arises why it works on port 80 without setting CSRF_TRUSTED_ORIGINS. I assume the default port 80 is always trusted, although I can't find any documentation for it.
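To make it concrete, a minimal settings.py sketch for a local setup, assuming nginx is published on host port 81 (adjust to whatever host port you actually map):

# Django 4.0+ requires the scheme in each trusted origin,
# and any non-default port has to be spelled out explicitly.
CSRF_TRUSTED_ORIGINS = [
    "http://localhost:81",
    "http://127.0.0.1:81",
]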

Replacing Nginx with Traefik caused Django to use admin files via HTTP instead of HTTPS, breaking functionality

I had a perfectly fine Django CMS 3.4.1 setup running behind Nginx as an edge-server with SSL termination. The complete chain was:
nginx (SSL) → nginx (django server) → gunicorn → django
All I did was replace the first nginx reverse proxy with Traefik, for better integration of other services. Everything is run with Docker (Compose).
The issue is that Django now wants to redirect HTTPS calls to admin files (and AJAX calls) to HTTP, breaking functionality because those files are blocked by the browser.
I did not change anything in the django installation. In fact, it is even the same docker image as before.
Because it worked with the old setup, I don't think it is an issue with the Django CMS code using hardcoded http:// URLs; SSL was terminated before the django reverse proxy back then as well.
Does anyone see something I am missing?
Here are some configuration files, from top to bottom:
traefik.yml:
global:
  sendAnonymousUsage: false

api:
  dashboard: true
  insecure: true

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    watch: true
    exposedByDefault: false

log:
  level: INFO
  format: common

entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      email: ***
      storage: /etc/acme/acme.json
      httpChallenge:
        entryPoint: http
relevant parts of django-server docker-compose file:
# ...
services:
  cms-nginx:
    build: "./nginx"
    depends_on:
      - postgres
    networks:
      - proxy
      - cms
    volumes:
      - cms_static:/usr/src/app/static
      - cms_media:/usr/src/app/media
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.cms.rule=Host(`***`)"
      - "traefik.http.routers.cms.tls=true"
      - "traefik.http.routers.cms.tls.certresolver=letsencrypt"

  cms:
    restart: always
    build: ./cms
    links:
      - postgres:postgres
      - static:static
    expose:
      - "8000"
    volumes:
      - ./cms:/usr/src/app
      - static_out:/usr/src/app/data/generated
      - cms_static:/usr/src/app/data/static
      - cms_media:/usr/src/app/data/media
    depends_on:
      - static
    env_file:
      - .env
      - ./cms/.env
    command: /bin/sh -c "./docker-init.sh"
    networks:
      - cms
django server nginx conf:
server {
    listen 80;
    server_name *** default_server;
    charset utf-8;
    client_max_body_size 75M;

    location ^~ /static/ {
        alias /usr/src/app/static/;
    }

    location ^~ /media/ {
        alias /usr/src/app/media/;
    }

    location / {
        proxy_pass http://cms:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
    }

    error_log /var/log/nginx/deckel_error.log;
}
gunicorn start command:
/usr/local/bin/gunicorn cms.wsgi:application -w 2 -b :8000
django settings part:
SESSION_COOKIE_SECURE = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
CSRF_COOKIE_SECURE = True
SECURE_SSL_REDIRECT = True
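For completeness, here is the part of the chain I now suspect: the django-server nginx sets X-Forwarded-Protocol from its own $scheme, which is plain http now that Traefik terminates TLS one hop earlier, so SECURE_PROXY_SSL_HEADER never sees "https". One direction I'm considering, though I have not verified it yet, is to pass Traefik's forwarded scheme through instead of the inner hop's $scheme:

    location / {
        proxy_pass http://cms:8000;
        proxy_set_header Host $host;
        # Traefik sets X-Forwarded-Proto to the scheme the client used (https);
        # forward that instead of this hop's own $scheme, which is always http here.
        proxy_set_header X-Forwarded-Protocol $http_x_forwarded_proto;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    }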

NGINX docker-compose - Host not found in upstream nuxt:3000

I'm trying to configure a deployed app on an EC2 instance, but I'm not able to visit the application when it's up on the EC2 public IP. I've checked the security groups and allowed all inbound traffic to the ports, just to see if I can reach the homepage or admin page of Django.
Say my EC2 IP address is 34.245.202.112. How do I map my application so nginx serves:
The frontend (nuxt) at 34.245.202.112
The backend (django) at 34.245.202.112/admin
The API (DRF) at 34.245.202.112/api
When I try to do this, the error I get from nginx is:
nginx | 2020-11-14T14:15:35.511973183Z 2020/11/14 14:15:35 [emerg] 1#1: host not found in upstream "nuxt:3000" in /etc/nginx/conf.d/autobets_nginx.conf:9
This is my config
docker-compose
version: "3.4"
services:
db:
restart: always
image: postgres
volumes:
- pgdata:/var/lib/postgresql/data
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
ports:
- "5432:5432"
expose:
- 5432
networks:
- random_name
django:
container_name: django
build:
context: ./backend
env_file: .env
environment:
- DEBUG=True
command: >
sh -c "./wait-for-it.sh db:5432 &&
./autobets/manage.py collectstatic --no-input &&
./autobets/manage.py makemigrations &&
./autobets/manage.py migrate --no-input &&
./autobets/manage.py runserver_plus 0.0.0.0:8001
"
- "8001"
volumes:
- ./backend:/app
depends_on:
- db
restart: on-failure
nginx:
image: nginx
container_name: nginx
ports:
- "80:80"
restart: always
depends_on:
- nuxt
- django
volumes:
- ./nginx/conf:/etc/nginx/conf.d
- ./nginx/uwsgi_params:/etc/nginx/uwsgi_params
- ./backend/static:/static
networks:
- random_name
nuxt:
build:
context: ./frontend
environment:
- API_URI=http://django:8001/api
command: sh -c "npm install && npm run dev"
volumes:
- ./frontend:/app
ports:
- "3000:3000"
depends_on:
- django
networks:
- random_name
volumes:
pgdata:
networks:
random_name:
nginx.conf
# the upstream component nginx needs to connect to
upstream django {
    ip_hash;
    server django:8001;
}

upstream nuxt {
    ip_hash;
    server nuxt:3000;
}

# configuration of the server
server {
    # the port your site will be served on
    listen 8000;
    # the domain name it will serve for
    server_name 34.245.202.112; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    location /static/ {
        alias /static/;
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        proxy_pass django;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Host $host;
    }
}
Look at this minimal example:
server {
    listen 80;
    listen 8000; # from your config, remove if unnecessary
    server_name 34.245.202.112;

    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Host $host;

    location / {
        # 'the frontend (nuxt) at 34.245.202.112'
        # This is the default route. Requests get here when there's no
        # better match to go to.
        proxy_pass http://nuxt:3000;
    }

    location /api/ {
        # This location will trigger when the path in the URI begins with '/api/'
        # e.g. http://yourserver.org/api/v1/hello/world
        proxy_pass http://django:8001;
    }

    location /admin/ {
        # exactly as /api/
        proxy_pass http://django:8001;
    }

    location /static/ {
        # same as /api/ but local files instead of proxy
        alias /static/;
    }
}
As you can see from the example, each location has a URI prefix. NGINX tests all these prefixes against the path of each incoming HTTP request, finding the best match. Once the best match is found, NGINX does whatever you wrote inside that block. In the example above, all requests starting with /api/ or /admin/ go to the django backend, requests starting with /static/ are served from local files, and everything else goes to the nuxt backend.
I'm not sure if I got your intentions right, probably because I'm missing the original config you've edited, so you'll have to pick up from here. Just remember that you are not limited to URI prefixes for locations (you may use regex or exact matches) and that you can nest locations. Check out this great beginner's guide from NGINX for more: http://nginx.org/en/docs/beginners_guide.html
UPDATE: After looking at the other answers here, I thought I owed an answer to the question in the title instead of just a basic configuration. The reason you got the host not found in upstream error is that you didn't specify a resolver directive. It is necessary when using DNS names in upstream blocks, and for NGINX in Docker you may use this: resolver 127.0.0.11 ipv6=off;. Put it in the http block, that is, outside of the server block.
127.0.0.11 is the Docker DNS. It resolves service and container names as well as 'normal' DNS records (for those it uses the host's DNS configuration). You don't have to assign an alias to a service or set a container_name, because the service name is a DNS record on its own; it resolves to all containers of that service. Using a resolver wasn't necessary in the basic configuration I posted because I didn't use upstream blocks.
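As a rough illustration of the placement (a sketch only, reusing the upstream names from the question, not a complete nginx.conf):

http {
    # Docker's embedded DNS; lets nginx resolve service/container names
    resolver 127.0.0.11 ipv6=off;

    upstream django {
        server django:8001;
    }
    upstream nuxt {
        server nuxt:3000;
    }

    # server { ... } blocks from the example above go here
}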
You are missing the aliases section in the networks block of your docker-compose file. The aliases you define will automatically update the /etc/hosts file of the containers, and therefore your nginx container will be aware of the nuxt host.
services:
  nuxt:
    networks:
      some-network:
        aliases:
          - nuxt
More info here (Ctrl-F for "aliases"): https://docs.docker.com/compose/compose-file/
The container name "nuxt" is not defined in the docker-compose file, so the hostname cannot be resolved by the nginx container.
Try to fix the nginx error by adding container_name:nuxt to the nuxt service in the docker-compose file.
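Roughly like this (only a sketch of the relevant lines; the rest of the service stays as in the question):

services:
  nuxt:
    container_name: nuxt  # fixed name, resolvable as "nuxt" by other containers on the network
    build:
      context: ./frontend
    # ...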

Docker NGINX Proxy not Forwarding Websockets

NGINX proxy is passing HTTP GET requests instead of WebSocket handshakes to my django application.
Facts:
The rest of the non-WebSocket proxying to the django app works great.
I can get WebSockets to work if I connect to the django application container directly. (Relevant log entries below.)
The nginx configuration works on localhost on my development machine (no containers). (Log example below.)
Relevant Logs:
Daphne log when connecting through containerized nginx proxy:
xxx.xxx.xxx.xxx:40214 - - [24/May/2017:19:16:03] "GET /flight/all_flight_updates" 404 99
Daphne log when bypassing the containerized proxy and connecting directly to the server:
xxx.xxx.xxx.xxx:6566 - - [24/May/2017:19:17:02] "WSCONNECTING /flight/all_flight_updates" - -
xxx.xxx.xxx.xxx:6566 - - [24/May/2017:19:17:02] "WSCONNECT /flight/all_flight_updates" - -
localhost testing of nginx (non-containerized) configuration works:
[2017/05/24 14:24:19] WebSocket HANDSHAKING /flight/all_flight_updates [127.0.0.1:65100]
[2017/05/24 14:24:19] WebSocket CONNECT /flight/all_flight_updates [127.0.0.1:65100]
Configuration files:
My docker-compose.yml:
version: '3'

services:
  db:
    image: postgres

  redis:
    image: redis:alpine

  web:
    image: nginx
    ports:
      - '80:80'
    volumes:
      - ./deploy/proxy.template:/etc/nginx/conf.d/proxy.template
    links:
      - cdn
      - app
    command: /bin/bash -c "envsubst '' < /etc/nginx/conf.d/proxy.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"

  cdn:
    image: nginx
    volumes:
      - ./cdn_static:/usr/share/nginx/static
      - ./deploy/cdn.template:/etc/nginx/conf.d/cdn.template
    command: /bin/bash -c "envsubst '' < /etc/nginx/conf.d/cdn.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"

  app:
    build: .
    image: app
    ports:
      - '8000:8000'
    links:
      - redis
      - db
    volumes:
      - ./cdn_static:/var/static
My proxy.template NGINX configuration template:
upstream cdn_proxy {
    server cdn:80;
}

upstream daphne {
    server app:8000;
    keepalive 100;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    location /static {
        proxy_pass http://cdn_proxy;
    }

    location / {
        proxy_buffering off;
        proxy_pass http://daphne;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
UPDATE
I have built a more compact example of the problem using the tutorial on the NGINX website and put it on github at https://github.com/c0yote/nginx-websocket-issue.
You get a 426 instead of a 404, but I believe this is because the simple server doesn't know how to handle the GET that NGINX is sending. I'm reinforced in this thought by the fact that if you issue a GET (from a browser, for example) directly against port 8000, you get the same 426.
Therefore the core problem is still that NGINX is sending a GET.
MORE INFO:
tcpdump shows that the GET to the websocket server has an Upgrade field, but the GET against NGINX does not. This is confusing since the wscat command is identical with the exception of the target port.
GIGA UPDATE:
If I move the NGINX proxy off port 80 to, say, 8080, it works. My only guess is that the JS client makes some assumption about port 80. If anyone knows why this is the case, I'd love to know.
It was my organization's firewall.
It was stripping the Connection: Upgrade header out of the GET request on port 80. When I changed to a different port, it worked fine.

uWsgi nginx integration error

I am using uWSGI to deploy my django site. Here is my uwsgi.ini:
[uwsgi]
socket=/var/run/uwsgi.sock
virtualenv=/root/edupalm/env/
chdir=/root/edupalm/edupalm
master=True
workers=8
pidfile=/var/run/uwsgi-master.pid
max-requests=5000
module=edupalm.wsgi:application
and nginx in front of it; here is my configuration:
server {
    listen 9000;
    server_name 162.243.146.127;

    access_log /var/log/nginx/edupalm_access.log;
    error_log /var/log/nginx/edupalm_error.log;

    location /static/ {
        alias /root/edupalm/edupalm/static/;
    }

    location / {
        uwsgi_pass unix:///var/run/uwsgi.sock;
    }
}
but I am getting a 502 Bad Gateway.
Here are the logs:
nginx:
2013/11/26 08:31:09 [error] 1758#0: *57 upstream prematurely closed connection while reading response header from upstream, client: 197.160.112.183, server: 162.243.146.127, request: "GET /admin HTTP/1.1", upstream: "uwsgi://unix:///var/run/uwsgi.sock:", host: "162.243.146.127:9000"
uwsgi:
-- unavailable modifier requested: 0 --
nginx is running as user www-data and uwsgi is running as root.
It's advisable to use a new user for your project, not root.
The problem is in the configuration; you need to add:
plugin=python
For permissions, it's better to use the www-data user/group:
uid = www-data
gid = www-data
chmod-socket = 777
chown-socket = www-data
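Put together, the uwsgi.ini from the question with these additions might look roughly like this (a sketch; paths, worker count and module are the asker's originals, only the plugin and socket-permission lines are new):

[uwsgi]
; load the python plugin (needed with distribution packages)
plugin = python
socket = /var/run/uwsgi.sock
; run as www-data and make the socket reachable by nginx
uid = www-data
gid = www-data
chown-socket = www-data
chmod-socket = 777
; note: with uid = www-data the /root/... paths below must be readable by that user
virtualenv = /root/edupalm/env/
chdir = /root/edupalm/edupalm
module = edupalm.wsgi:application
master = True
workers = 8
pidfile = /var/run/uwsgi-master.pid
max-requests = 5000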
It looks like you are using a distribution package instead of the official uWSGI sources. Just load the python plugin (after having installed it) with plugin = python in your config:
http://uwsgi-docs.readthedocs.org/en/latest/WSGIquickstart.html
location / {
    uwsgi_pass unix:///var/run/uwsgi.sock;
    include uwsgi_params;
    uwsgi_param SCRIPT_NAME '';
}
I similarly had this problem for a combination of Django, uWSGI and nginx running behind CloudFront. It turned out for me that the routing table in CloudFront didn't behave as expected, so some of the callbacks weren't received.
Specifically, route "/" stole traffic from "*".
There was another issue where my Django server was running unexpected code: a user logging in caused their User model to be changed, which I hadn't anticipated. So yeah, don't rule out that your Django server might be legitimately busy, causing a socket timeout.