I am trying to expose two different services as APIs, one in Django and the other in Flask; both run as Docker Compose containers.
I need to configure Nginx to expose the two containers on two different subdomains.
Here is my nginx.conf:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024; ## Default: 1024, increase if you have lots of clients
}

http {
    include /etc/nginx/mime.types;
    # fallback in case we can't determine a type
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;

    upstream app {
        server django:5000;
    }

    upstream app_server {
        server flask:5090;
    }

    server {
        listen 5090;
        location / {
            proxy_pass http://app_server;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Scheme $scheme;
        }
    }

    server {
        listen 5000;
        location / {
            proxy_pass http://app;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Scheme $scheme;
        }
    }
}
And my production.yml:
Nginx:
  build: ./compose/production/nginx
  image: *image
  ports:
    - 80:80
  depends_on:
    - flask
    - django
My containers are all up.
I use proxy_pass:
server {
listen <port>;
location / {
proxy_pass http://<container-host-name>:<port>;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Scheme $scheme;
}
}
Your nginx container is only mapped to port 80 on the machine and port 80 in the container, but your nginx server blocks listen on ports 5000 and 5090, so nothing from outside can reach them :)
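Since only port 80 is published, the usual fix is to have both server blocks listen on 80 and route by hostname instead of by port. A minimal sketch, assuming two hypothetical subdomains api-django.example.com and api-flask.example.com that both resolve to this host:

server {
    listen 80;
    server_name api-django.example.com;  # hypothetical subdomain for the Django container
    location / {
        proxy_pass http://app;  # upstream "app" -> django:5000 over the compose network
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name api-flask.example.com;  # hypothetical subdomain for the Flask container
    location / {
        proxy_pass http://app_server;  # upstream "app_server" -> flask:5090
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

With this, the compose file only needs the existing 80:80 mapping, because nginx reaches the app containers over the internal Docker network.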
I have been following several different tutorials on how to set up gunicorn and daphne in parallel, so that gunicorn serves HTTP for my Django apps and daphne serves my Django Channels app. However, I am now stuck on the "Welcome to nginx" homepage and I cannot figure out what the problem is.
supervisor.conf
[program:example]
directory=/home/user/example/example
command=/home/user/envs/example/bin/gunicorn example.wsgi:application
user=user
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/user/envs/example/bin/gunicorn-error.log

[program:serverinterface]
directory=/home/user/example/example
command=/home/user/envs/example/bin/daphne -b 0.0.0.0 -p 8001 example.asgi:application
autostart=true
autorestart=true
stopasgroup=true
user=user
stdout_logfile = /home/user/example/bin/gunicorn-error.log
nginx/sites-available/example.com
upstream app_server {
    server unix:/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://example.com$request_uri;
}

server {
    listen [::]:443 ssl ipv6only=on;
    listen 443 ssl;
    server_name example.com www.example.com;

    # Let's Encrypt parameters
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        try_files $uri @proxy_to_app;
    }

    location /ws/ {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    location @proxy_to_ws {
        proxy_pass http://0.0.0.0:8001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
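One thing that stands out in the config above: the @proxy_to_app named location sets headers but contains no proxy_pass, so requests that fall through try_files are never handed to gunicorn. For reference, a minimal sketch of how that block usually looks, assuming gunicorn really is listening on /run/gunicorn.sock via the app_server upstream:

location @proxy_to_app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app_server;  # forward non-static requests to the gunicorn socket
}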
I need to edit the nginx configuration of an AWS Elastic Beanstalk environment so that it accepts payloads (POST request bodies) of up to 50 MB.
The default nginx payload limit is 1 MB.
I researched many questions and answers, and found this:
https://stackoverflow.com/a/40745569
But I'm not sure how to access the nginx configuration file behind the Elastic Beanstalk environment.
I also tried this AWS doc:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-se-nginx.html
But I still couldn't get responses for POST requests with payloads larger than 1 MB. (The server is a Node.js server.)
So, please let me know how to change the maximum nginx payload size from the default 1 MB to 50 MB. Please note that nginx is running in an AWS Elastic Beanstalk environment.
Appendix 1: Here's the .ebextensions/nginx/nginx.conf file code I used:
user nginx;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 33282;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    include conf.d/*.conf;

    map $http_upgrade $connection_upgrade {
        default "upgrade";
    }

    server {
        listen 80 default_server;
        root /var/app/current/public;

        location / {
        }

        location /api {
            proxy_pass http://127.0.0.1:5000;
            proxy_http_version 1.1;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        access_log /var/log/nginx/access.log main;

        client_header_timeout 60;
        client_body_timeout 60;
        keepalive_timeout 60;
        gzip off;
        gzip_comp_level 4;

        # Include the Elastic Beanstalk generated locations
        include conf.d/elasticbeanstalk/01_static.conf;
        include conf.d/elasticbeanstalk/healthd.conf;

        client_max_body_size 100M; # 100 MB
    }

    client_max_body_size 100M; # 100 MB
}
Appendix 2. I already added these 2 lines to the Node.js Express app:
app.use(bodyParser.json({limit: '50mb'}));
app.use(bodyParser.urlencoded({limit: "50mb", extended: true, parameterLimit:50000}));
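If the environment runs one of the newer Elastic Beanstalk platforms (Amazon Linux 2), the documented way to extend nginx is a drop-in file in the application bundle rather than replacing nginx.conf wholesale. A minimal sketch, assuming a hypothetical file .platform/nginx/conf.d/proxy.conf, which the platform includes at http level after deployment:

# .platform/nginx/conf.d/proxy.conf (hypothetical filename)
# Raise the request-body limit from the 1 MB default.
client_max_body_size 50M;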
I tried to run the tutorial from the Channels docs on my production server, using SSL.
After a few hours I managed to get a connection, but it instantly disconnects:
None - - [12/Mar/2018:17:42:22] "WSCONNECTING /ws/chat/bibou/" - -
None - - [12/Mar/2018:17:42:22] "WSCONNECT /ws/chat/bibou/" - -
None - - [12/Mar/2018:17:42:23] "WSDISCONNECT /ws/chat/bibou/" - -
My stack is:
ubuntu 16.04
nginx 1.10.3
channels==2.0.2
daphne==2.1.0
channels-redis==2.1.0
Twisted==17.9.0
I have an exact copy-paste of the code from the tutorial, except for this part in room.html:
var chatSocket = new WebSocket(
    'wss://' + window.location.host +
    ':8443/ws/chat/' + roomName + '/');
And here is my nginx conf:
server {
    # http
    listen 80;
    server_name domain.com;
    root /usr/share/nginx/html;
    include /etc/nginx/default.d/*.conf;

    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    # https
    listen 443 ssl;
    listen 8443 ssl;
    server_name domain.com;
    root /usr/share/nginx/html;

    ssl_certificate "/etc/letsencrypt/live/domain.com/fullchain.pem";
    ssl_certificate_key "/etc/letsencrypt/live/domain.com/privkey.pem";
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    add_header Strict-Transport-Security "max-age=31536000";

    include /etc/nginx/default.d/*.conf;

    location /static/ {
        root /home/ubuntu;
    }

    location /media/ {
        root /home/ubuntu;
    }

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://unix:/home/ubuntu/tlebrize/Project.sock;
    }

    location /ws/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://unix:/home/ubuntu/tlebrize/Daphne.sock;
    }
}
I run daphne with daphne -u Daphne.sock Project.asgi:application -v 3.
I also tried bypassing nginx and using sudo daphne -e ssl:8443:privateKey=/etc/letsencrypt/live/domain.co/privkey.pem:certKey=/etc/letsencrypt/live/domain.co/fullchain.pem Project.settings:CHANNEL_LAYERS, but I had the same results.
The front end breaks with the message Chat socket closed unexpectedly, error code 1011 (Internal Error), and no reason given.
I managed to make it work; it was an issue with nginx and/or with using ReconnectingWebSocket. Here's my whole working conf:
nginx
server {
    # http
    listen 80;
    server_name domain.co;
    root /usr/share/nginx/html;
    include /etc/nginx/default.d/*.conf;

    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    # https
    listen 443 ssl;
    server_name domain.com;
    root /usr/share/nginx/html;

    ssl_certificate "/etc/letsencrypt/live/domain.com/fullchain.pem";
    ssl_certificate_key "/etc/letsencrypt/live/domain.com/privkey.pem";
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    add_header Strict-Transport-Security "max-age=31536000";

    include /etc/nginx/default.d/*.conf;

    location /static/ {
        root /home/ubuntu;
    }

    location /media/ {
        root /home/ubuntu;
    }

    location /ws/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:8443;
    }

    location / { ... }
}
daphne
sudo /home/ubuntu/venv/bin/daphne -e ssl:8443:privateKey=/etc/letsencrypt/live/domain.com/privkey.pem:certKey=/etc/letsencrypt/live/domain.com/fullchain.pem Project.asgi:application -v 3
js
var chatSocket = new ReconnectingWebSocket(
    'wss://' + window.location.host +
    ':8443/ws/chat/' + roomName + '/');
I had this problem because I had forgotten to add CHANNEL_LAYERS to settings.py.
The server was even able to send one or two messages before disconnecting.
This resulted in error 1011 when connecting through nginx, and 1006 when connecting directly without https/wss. I tried both uvicorn and daphne.
I have a Django app set up with nginx + gunicorn + supervisor, and it's working fine. But I need to create a subdomain for staging or development, like dev.domain.com. I added another server block in nginx.conf for my subdomain, but the subdomain URL always pointed at the main domain site, so I changed the port number in proxy_pass as suggested in other posts. Because of gunicorn and supervisord I also needed to add another conf file for this subdomain at /etc/supervisord/conf.d/subdomain.conf, but when I reload supervisord it is not able to start my subdomain program. Below are my nginx.conf, subdomain.conf, and script.sh.
nginx.conf
http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    gzip on;
    gzip_static on;
    gzip_types application/x-javascript text/css text/html application/json text/css text/json;

    server {
        listen 80;
        server_name domain_name;
        # no security problem here, since / is always passed to upstream
        root /home/path/to/project/base;

        # serve directly - analogous for static/staticfiles
        location /static/ {
            # if asset versioning is used
            if ($query_string) {
                expires max;
            }
            autoindex off;
            root /home/path/to/static/;
        }

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_connect_timeout 10;
            proxy_read_timeout 10;
            proxy_pass http://localhost:8000/;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

    server {
        listen 80;
        server_name subdomain_name;
        # no security problem here, since / is always passed to upstream
        # the root below is a completely different project, which I want to run as the development site
        root /home/path/to/subdomain_directory;

        # serve directly - analogous for static/staticfiles
        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_connect_timeout 10;
            proxy_read_timeout 10;
            proxy_pass http://localhost:9000/;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
script.sh
#!/bin/bash
set -e
NUM_WORKERS=4
# user/group to run as
USER=user_name
#GROUP=your_unix_group
cd /home/path/to/subdomain_base
source subdomain_virtualenv_activation
LOGFILE=log_file_path
LOGDIR=$(dirname $LOGFILE)
test -d $LOGDIR || mkdir -p $LOGDIR
exec virtualenvironment/bin/gunicorn_django -w $NUM_WORKERS \
    --user=$USER --log-level=debug \
    --log-file=$LOGFILE 2>>$LOGFILE
subdomain.conf
[program:programname]
directory = /home/path/to/subdomainbase/
user = user_name
command = /home/path/to/script.sh
stdout_logfile = /home/path/to/log
stderr_logfile = /home/path/to/log
I have a Procfile too, as suggested for gunicorn, which is in the base directory:
Procfile
./manage.py runserver_plus 0.0.0.0:$PORT
OK, so these are my configurations. Please check where I am going wrong. I just want to run my development server as a separate project, but under the same domain as a subdomain. Through all these changes, the main domain keeps working fine with the same process. Please let me know if you need more info on this error.
EDIT
I am reading your post again, and... shouldn't you set ADDRESS in your gunicorn script? gunicorn uses port 8000 by default, so maybe your subdomain is trying to use the same port?
END EDIT
I have two Django applications running with nginx, gunicorn and supervisor, much as you want to do (well, not exactly the same, but very similar: I have two domains and a subdomain). I don't see where your mistake is; I think it must be in the nginx configuration. Maybe the "root" line?
Does supervisord return an error when you try to start the program using the "supervisorctl" command?
I can show you my configuration and you can compare it:
I have two .conf files for nginx:
domain1.conf:
server {
    listen 80;
    server_name domain1.net;
    return 301 $scheme://www.domain1.net$request_uri;
}

server {
    listen 80;
    server_name www.domain1.net;
    access_log /var/log/nginx/domain1.log;

    location /static {
        alias /var/www/domain1/media/;
        autoindex on;
        access_log off;
    }

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 10;
        proxy_read_timeout 10;
        proxy_pass http://127.0.0.1:8000/;
    }
}
and domain2.conf:
server {
    listen 80;
    server_name subdomain.domain2.es;
    access_log /var/log/nginx/domain2.log;

    location /static {
        alias /var/www/dev/domain2/domain2/static/;
        autoindex on;
        access_log off;
    }

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 10;
        proxy_read_timeout 10;
        proxy_pass http://127.0.0.1:8005/;
    }
}
My two gunicorn scripts are the same, just changing the paths and the ADDRESS in one of them:
#!/bin/bash
set -e
LOGFILE=/var/log/gunicorn/domain1.log
LOGDIR=$(dirname $LOGFILE)
NUM_WORKERS=1
# user/group to run as
USER=user
GROUP=user
ADDRESS=127.0.0.1:8005
cd /var/www/dev/domain1
source /path/to/venv/domain1/bin/activate
test -d $LOGDIR || mkdir -p $LOGDIR
exec gunicorn_django -w $NUM_WORKERS --bind=$ADDRESS \
--user=$USER --group=$GROUP --log-level=debug \
--log-file=$LOGFILE 2>>$LOGFILE
My two supervisor scripts are the same too:
[program:domain1]
directory = /var/www/dev/domain1/
user = user
command = /path/to/bin/gunicorn_domain1.sh
stdout_logfile = /var/log/nginx/domain1.log
stderr_logfile = /var/log/nginx/domain1.log
I hope you found this helpful.
This is my first time using django + nginx + gunicorn, and I can't make server_name work. With the following configs I am able to see the Django admin panel at localhost/admin. But shouldn't I also be able to see the admin panel when I access local-example/admin?
Starting gunicorn:
gunicorn web_wsgi_local:application
2012-10-14 19:45:50 [16532] [INFO] Starting gunicorn 0.14.6
2012-10-14 19:45:50 [16532] [INFO] Listening at: http://127.0.0.1:8000 (16532)
2012-10-14 19:45:50 [16532] [INFO] Using worker: sync
2012-10-14 19:45:50 [16533] [INFO] Booting worker with pid: 16533
nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /Users/ruixia/www/x/project/logs/nginx_access.log main;
    error_log /Users/ruixia/www/x/project/logs/nginx_error.log debug;

    autoindex on;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;
    gzip on;

    include /usr/local/etc/nginx/sites-enabled/*;
}
sites-enabled/x config
server {
    listen 80;
    server_name local-example;
    root /Users/ruixia/www/x/project;

    location /static/ {
        alias /Users/ruixia/www/x/project/static/;
        expires 30d;
    }

    location /media/ {
        alias /Users/ruixia/www/x/project/media/;
    }

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 10;
        proxy_read_timeout 10;
        proxy_pass http://localhost:8000/;
    }
}
um... I solved it by adding local-example to /etc/hosts.
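For anyone hitting the same thing: server_name only matters once the request actually reaches nginx, so a made-up hostname like local-example first has to resolve to the local machine. A minimal sketch of the /etc/hosts entry, assuming nginx runs locally:

127.0.0.1   local-example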