This is my first time using Django + nginx + gunicorn, and I can't get server_name to work. With the following configs I can see the Django admin panel at localhost/admin, but shouldn't I also be able to see it when I access local-example/admin?
I start gunicorn:
gunicorn web_wsgi_local:application
2012-10-14 19:45:50 [16532] [INFO] Starting gunicorn 0.14.6
2012-10-14 19:45:50 [16532] [INFO] Listening at: http://127.0.0.1:8000 (16532)
2012-10-14 19:45:50 [16532] [INFO] Using worker: sync
2012-10-14 19:45:50 [16533] [INFO] Booting worker with pid: 16533
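For reference, gunicorn's default bind is 127.0.0.1:8000, which is what the log above shows and what the proxy_pass below points at; the explicit equivalent would be:
gunicorn web_wsgi_local:application --bind 127.0.0.1:8000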
nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /Users/ruixia/www/x/project/logs/nginx_access.log main;
    error_log /Users/ruixia/www/x/project/logs/nginx_error.log debug;

    autoindex on;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;
    gzip on;

    include /usr/local/etc/nginx/sites-enabled/*;
}
sites-enabled/x config
server {
    listen 80;
    server_name local-example;
    root /Users/ruixia/www/x/project;

    location /static/ {
        alias /Users/ruixia/www/x/project/static/;
        expires 30d;
    }

    location /media/ {
        alias /Users/ruixia/www/x/project/media/;
    }

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 10;
        proxy_read_timeout 10;
        proxy_pass http://localhost:8000/;
    }
}
um... I solved it by adding local-example to /etc/hosts
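For reference, the /etc/hosts entry looks something like this (assuming nginx runs on the same machine):
127.0.0.1   local-example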
Related
I am trying to expose two different addresses used as APIs, one in Django and the other in Flask; they are Docker Compose containers.
I need to configure Nginx to expose the two containers on two different subdomains.
This is my nginx.conf:
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024; ## Default: 1024, increase if you have lots of clients
}

http {
    include /etc/nginx/mime.types;
    # fallback in case we can't determine a type
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;

    upstream app {
        server django:5000;
    }

    upstream app_server {
        server flask:5090;
    }

    server {
        listen 5090;

        location / {
            proxy_pass http://app_server;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Scheme $scheme;
        }
    }

    server {
        listen 5000;

        location / {
            proxy_pass http://app;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Scheme $scheme;
        }
    }
}
And my production.yml:
Nginx:
  build: ./compose/production/nginx
  image: *image
  ports:
    - 80:80
  depends_on:
    - flask
    - django
My containers are all up.
I use proxy_pass:
server {
    listen <port>;

    location / {
        proxy_pass http://<container-host-name>:<port>;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Scheme $scheme;
    }
}
Your nginx container is only connected on port 80 (host port 80 mapped to container port 80), but your nginx server listens on ports 5000 and 5090 :)
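If the goal really is two subdomains rather than two ports, another option is to keep nginx on port 80 (matching the existing 80:80 mapping) and route by server_name; a sketch, where django.example.com and flask.example.com are assumed hostnames that must resolve to the host:
server {
    listen 80;
    server_name django.example.com;

    location / {
        proxy_pass http://app;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name flask.example.com;

    location / {
        proxy_pass http://app_server;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
With that layout the single 80:80 port mapping in production.yml is enough.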
I've installed nginx on an EC2 instance to test how it would perform against another EC2 instance without nginx.
It works just fine on port 80, but I'm having trouble enabling SSL: when I go to my domain using https, it just times out.
I have already installed the SSL certificate on the AWS load balancer. How do I fix this issue?
Here is the /etc/nginx/nginx.conf file:
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_names_hash_bucket_size 128;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        listen 443 ssl default_server;

        access_log /var/log/nginx/agori.access.log main;
        error_log /var/log/nginx/agori.error.log;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://unix:/home/ec2-user/src/project.sock;
        }
    }
}
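For context: a timeout on https usually means port 443 never reaches nginx at all (security group or load balancer listener), and even when it does, listen 443 ssl needs a certificate before the handshake can complete. Two common layouts, sketched with hypothetical certificate paths:
# Option A: terminate TLS at the load balancer. Point the ELB's 443 listener at the
# instance's port 80 and drop the `listen 443 ssl` line; nginx then speaks plain HTTP
# and X-Forwarded-Proto tells the app the original scheme.

# Option B: terminate TLS in nginx itself.
server {
    listen 443 ssl;

    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/home/ec2-user/src/project.sock;
    }
}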
I have a Django Gunicorn Nginx setup that is working without errors but the nginx access logs contains the following line every 5 seconds:
10.112.113.1 - - [09/Jan/2019:05:02:21 +0100] "HEAD / HTTP/1.1" 302 0 "-" "-"
The amount of information in this logging event is quite scarce, but a 302 every 5 seconds has to be something related to the nginx configuration, right?
My nginx configuration is as follows:
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/.conf;

    upstream app_server {
        server unix:/path_to/gunicorn.sock fail_timeout=0;
    }

    server {
        server_name example.com;
        listen 80;
        return 301 https://example.com$request_uri;
    }

    server {
        listen 443;
        listen [::]:443;
        server_name example.com;

        ssl on;
        ssl_certificate /path/cert.crt;
        ssl_certificate_key /path/cert.key;

        keepalive_timeout 5;
        client_max_body_size 4G;

        access_log /var/log/nginx/nginx-access.log;
        error_log /var/log/nginx/nginx-error.log;

        location /static/ {
            alias /path_to/static/;
        }

        location /media/ {
            alias /path_to/media/;
        }

        include /etc/nginx/mime.types;

        # checks for static file, if not found proxy to app
        location / {
            try_files $uri @proxy_to_app;
        }

        location @proxy_to_app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_pass http://app_server;
        }
    }
}
I need to edit the nginx configuration of an AWS ELB environment so that it accepts payloads (POST request bodies) up to 50MB.
The default nginx payload limit is 1MB.
I researched many questions and answers, and found this:
https://stackoverflow.com/a/40745569
But I'm not sure how to access the nginx configuration file behind the ELB environment.
I also tried with this AWS doc:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-se-nginx.html
But I still couldn't get responses for POST requests with payloads larger than 1MB. (The server is a Node.js server.)
So, please let me know how to change the max payload size of nginx from the default 1MB to 50MB. Please note that nginx is running in an AWS ELB environment.
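For what it's worth, on Amazon Linux 2 Elastic Beanstalk platforms the usual route is a small drop-in file rather than replacing the whole nginx.conf; a minimal sketch (the file name is arbitrary, and older platforms rely on the .ebextensions mechanism you already tried):
# .platform/nginx/conf.d/client_max_body_size.conf
client_max_body_size 50M;
Elastic Beanstalk copies files from .platform/nginx/conf.d/ into /etc/nginx/conf.d/, which the default nginx.conf includes inside the http block, so the directive applies to every server.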
Appendix 1: Here's the .ebextensions/nginx/nginx.conf file code I used:
user nginx;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 33282;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    include conf.d/*.conf;

    map $http_upgrade $connection_upgrade {
        default "upgrade";
    }

    server {
        listen 80 default_server;
        root /var/app/current/public;

        location / {
        }

        location /api {
            proxy_pass http://127.0.0.1:5000;
            proxy_http_version 1.1;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        access_log /var/log/nginx/access.log main;

        client_header_timeout 60;
        client_body_timeout 60;
        keepalive_timeout 60;
        gzip off;
        gzip_comp_level 4;

        # Include the Elastic Beanstalk generated locations
        include conf.d/elasticbeanstalk/01_static.conf;
        include conf.d/elasticbeanstalk/healthd.conf;

        client_max_body_size 100M; #100mb
    }

    client_max_body_size 100M; #100mb
}
Appendix 2. I already added these 2 lines to the Node.js Express app:
app.use(bodyParser.json({limit: '50mb'}));
app.use(bodyParser.urlencoded({limit: "50mb", extended: true, parameterLimit:50000}));
Why does nginx serve its default page? How do I make it proxy to my Django server?
First, inside the sites-available folder I created the example.com file, then:
[root#instance-4 sites-available]# ls -al /etc/nginx/sites-enabled/example.com
lrwxrwxrwx. 1 root root 21 Dec 22 11:03 /etc/nginx/sites-enabled/example.com -> example.com
/etc/nginx/sites-available/example.com
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Then I ran gunicorn example.wsgi in my app folder and visited example.com, but I am still getting the nginx default page.
What am I missing here?
Updated:
This time I created the example.com file in my Django root folder and then created the symlink:
[root#instance-4 Staging]# ln -s example.com /etc/nginx/sites-enabled/
After restarting nginx it's still the same...
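For comparison, a symlink created with an absolute source path avoids the self-referencing link shown in the ls output above; a sketch, assuming the file really lives in sites-available:
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
sudo nginx -t && sudo systemctl restart nginx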
Updated 2:
nginx.conf file
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    include /etc/nginx/sites-enabled/*;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
Check for a default in /etc/nginx/sites-enabled/ and remove it if it's there, then reload or restart your nginx server.
You can also check that gunicorn is serving requests by visiting example.com:8000.
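A minimal sketch of that check (the exact path depends on the distro; on a setup like this the stock server block may live in /etc/nginx/conf.d/default.conf instead):
sudo rm -f /etc/nginx/sites-enabled/default /etc/nginx/conf.d/default.conf
sudo nginx -t && sudo systemctl reload nginx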
It's worth noting that you'll probably also want nginx to serve your static files, so add a /static/ block:
location /static/ {
    alias /path/to/your/app/static/;

    if ($query_string) {
        # If using GET params to control versions, set to max expiry.
        expires max;
    }

    access_log off;
}
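For the alias to actually have files to serve, the Django side needs STATIC_ROOT pointed at that directory (the path below is assumed to match the alias above) and the files collected into it:
STATIC_ROOT = "/path/to/your/app/static/"  # in settings.py
and then run python manage.py collectstatic before reloading nginx.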
From what I remember of nginx, there are two places where you can find nginx's default index.html. Try running "find / -name index.html"; you will probably find the second index.html I am talking about, and by checking that path you should be able to fix this.