Hi, I deployed my website and everything worked fine. Then I switched from HTTP to HTTPS, and now I get a blank white page or a 502 Bad Gateway. I think the problem is in my nginx.conf.
I deploy my frontend and backend in the same task, in the same service, on AWS ECS.
Here are my ports:
HTTP -> 80
HTTPS -> 443
Client port -> 8080
Backend port -> 4000
This is my original nginx.conf, before I changed to HTTPS (it worked):
worker_processes auto;
events {
worker_connections 60000;
multi_accept on;
use epoll;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay off;
gzip on;
gzip_http_version 1.0;
gzip_comp_level 5;
gzip_min_length 256;
gzip_proxied any;
gzip_vary on;
gzip_types
application/atom+xml
application/javascript
application/json
application/rss+xml
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/svg+xml
image/x-icon
text/css
text/plain
text/x-component;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format compression '$remote_addr - $remote_user [$time_local] '
'"$request" $status $upstream_addr '
'"$http_referer" "$http_user_agent" "$gzip_ratio"';
server {
listen 8080;
server_name mydomain.com;
access_log /var/log/nginx/access.log compression;
root /usr/share/nginx/html;
index index.html index.htm;
location ~* \.(?:manifest|appcache|html?|xml|json)$ {
expires -1;
}
location / {
try_files $uri $uri/ /index.html;
}
location /graphql {
proxy_pass http://localhost:4000/graphql;
}
location /subscriptions {
proxy_pass http://localhost:4000/subscriptions;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /refresh_token {
proxy_pass http://localhost:4000/refresh_token;
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
}
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
expires 1M;
access_log off;
add_header Cache-Control "public";
}
location ~* \.(?:css|js)$ {
try_files $uri =404;
expires 1y;
access_log off;
add_header Cache-Control "public";
}
location ~ ^.+\..+$ {
try_files $uri =404;
}
location /static/ {
root /var/www;
}
}
}
I made many changes to my nginx.conf and nothing worked. I verified my domain with AWS.
Here are some of the changes I made:
server {
listen 80;
listen [::]:80;
server_name mydomain.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443;
listen [::]:443;
server_name mydomain.com;
access_log /var/log/nginx/access.log compression;
root /usr/share/nginx/html;
index index.html index.htm;
location / {
proxy_pass http://localhost:8080;
try_files $uri $uri/ /index.html;
}
}
I'm using a load balancer to terminate SSL.
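For reference, when a load balancer terminates TLS in front of the container, a common pattern is to keep nginx on plain HTTP and redirect based on the X-Forwarded-Proto header the balancer sets. A minimal sketch, assuming an AWS ALB that sets that header (the domain is a placeholder):
server {
    listen 8080;
    server_name mydomain.com;

    # The load balancer terminates TLS, so nginx only ever sees plain HTTP.
    # Redirect only when the original client request came in over HTTP.
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    root /usr/share/nginx/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html;
    }
}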
I focused on the nginx.conf, but the conf was fine. The real problem was the WebSocket URL I use on the client:
const host = window.location.host;
`ws://${host}/subscriptions`
so I added an s:
`wss://${host}/subscriptions`
For two days I tried everything in the nginx.conf, and in the end all I had to do was add an s.
I feel super stupid because I didn't check the browser console for errors.
I would like to download folders from the /opt directory as a zip. I use Nginx as a web server. I found that mod_zip can handle such a task, and I installed and configured it as described on GitHub.
From my understanding, I created this nginx.conf:
uwsgi_intercept_errors on;
upstream geoserver_proxy {
server 10.0.14.84:8080;
}
upstream ziplist {
server 10.0.14.84:5555;
}
# Expires map
map $sent_http_content_type $expires {
default off;
text/html epoch;
text/css max;
application/javascript max;
~image/ max;
}
server {
listen 5555;
location /reports/ {
alias /opt/geonode/;
add_header X-Archive-Files 'zip';
# this line sets the name of the zip that the user gets
add_header Content-Disposition 'attachment; filename=example.zip';
}
}
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name _;
charset utf-8;
etag on;
expires $expires;
proxy_read_timeout 600s;
# set the maximum client body size
client_max_body_size 50000M;
location / {
etag off;
uwsgi_pass 127.0.0.1:8000;
uwsgi_read_timeout 600s;
include uwsgi_params;
}
location /static/ {
alias /opt/geonode/geonode/static_root/;
}
location /uploaded/ {
alias /opt/geonode/geonode/uploaded/;
}
location /reports/ {
alias /opt/geonode/;
# hide the X-Archive-Files header from the client
proxy_hide_header X-Archive-Files;
# keep the upstream response uncompressed (I'm not sure this is needed)
proxy_set_header Accept-Encoding "";
# do not forward the client's request headers upstream
proxy_pass_request_headers off;
# pass the request to server B
proxy_pass http://ziplist;
}
location /geoserver {
proxy_pass http://geoserver_proxy;
include proxy_params;
}
}
Unfortunately, when I try localhost/reports/folder I get a 301 Moved Permanently error.
What is wrong with my config?
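As an aside, not a full answer: the 301 is likely nginx's automatic trailing-slash redirect, since /reports/folder resolves through the alias to a directory and nginx redirects to /reports/folder/. Also, my understanding of mod_zip is that the upstream should return the X-Archive-Files: zip header together with a plain-text manifest listing the files, one "crc32 size location name" line each, rather than the directory contents themselves. A hypothetical sketch of such an upstream location (paths and sizes are placeholders; sizes must match the real files):
location = /reports/manifest {
    default_type text/plain;
    add_header X-Archive-Files 'zip';
    add_header Content-Disposition 'attachment; filename=example.zip';
    # One file per line: crc32 size location name ("-" = CRC unknown).
    return 200 "- 1024 /files/a.txt a.txt\n- 2048 /files/b.txt b.txt\n";
}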
I get a CSRF token error when trying to log in to the Django admin in production after adding SSL.
If I use the configuration below without SSL, everything works fine:
upstream app_server {
server unix:/home/app/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
# add here the ip address of your server
# or a domain pointing to that ip (like example.com or www.example.com)
server_name 107.***.28.***;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /home/app/logs/nginx-access.log;
error_log /home/app/logs/nginx-error.log;
location /static/ {
alias /home/app/static/;
}
# checks for static file, if not found proxy to app
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
But if I change the configuration to listen with SSL, I get the csrf_token error when submitting any form on the page. My nginx configuration using SSL:
upstream app_server {
server unix:/home/app/run/gunicorn.sock fail_timeout=0;
}
server {
#listen 80;
# add here the ip address of your server
# or a domain pointing to that ip (like example.com or www.example.com)
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /home/app/logs/nginx-access.log;
error_log /home/app/logs/nginx-error.log;
# Compression config
gzip on;
gzip_min_length 1000;
gzip_buffers 4 32k;
gzip_proxied any;
gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
gzip_vary on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
location /static/ {
alias /home/app/static/;
}
# checks for static file, if not found proxy to app
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
server {
listen 80;
server_name example.com;
return 301 https://$host$request_uri;
}
server {
listen 80;
server_name www.example.com;
return 301 https://example.com$request_uri;
}
server {
listen 443 ssl;
server_name www.example.com;
return 301 https://example.com$request_uri;
}
How can I fix the error, or where should I look for the bug? I tried clearing cookies, using different browsers, and resetting the server and its configuration, all without result.
In Django ≥ 4 it is now necessary to specify CSRF_TRUSTED_ORIGINS in settings.py:
CSRF_TRUSTED_ORIGINS = [
'https://your-domain.com',
'https://www.your-domain.com'
]
See the documentation.
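On the nginx side, it can also help to make sure Django knows the original request was HTTPS; otherwise its CSRF check may compare an http:// origin against an https:// one. A minimal sketch, assuming Django is also configured with SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https'):
location @proxy_to_app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Tell Django the client-facing scheme was HTTPS; pair this with
    # Django's SECURE_PROXY_SSL_HEADER so request.is_secure() is true.
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app_server;
}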
I have a Django/Gunicorn/Nginx setup that works without errors, but the nginx access logs contain the following line every 5 seconds:
10.112.113.1 - - [09/Jan/2019:05:02:21 +0100] "HEAD / HTTP/1.1" 302 0 "-" "-"
The amount of information in this log event is quite scarce, but a 302 every 5 seconds has to be something related to the nginx configuration, right?
My nginx configuration is as follows:
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
upstream app_server {
server unix:/path_to/gunicorn.sock fail_timeout=0;
}
server {
server_name example.com;
listen 80;
return 301 https://example.com$request_uri;
}
server {
listen 443;
listen [::]:443;
server_name example.com;
ssl on;
ssl_certificate /path/cert.crt;
ssl_certificate_key /path/cert.key;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /var/log/nginx/nginx-access.log;
error_log /var/log/nginx/nginx-error.log;
location /static/ {
alias /path_to/static/;
}
location /media/ {
alias /path_to/media/;
}
include /etc/nginx/mime.types;
# checks for static file, if not found proxy to app
location / {
try_files $uri #proxy_to_app;
}
location #proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
}
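For what it's worth, a HEAD / request with an empty referer and user agent every few seconds usually comes from a load-balancer health check or a monitoring probe rather than from nginx itself. If that is the case, the log noise can be filtered with a conditional access_log; a sketch, assuming the probes really do send no User-Agent (the "-" in the log line):
# In the http context: log only requests that carry a User-Agent.
map $http_user_agent $loggable {
    default 1;
    ""      0;
}
access_log /var/log/nginx/access.log main if=$loggable;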
How do I configure Nginx so that it serves my Django/Django-Rest-Framework media resources?
On my remote CentOS 7 server, after deploying my Django/Django-Rest-Framework project, I cannot access the media and static resources through my API.
How can I configure Nginx so that I can access them?
I tried the following in nginx's vhosts_backend.conf, but it did not work:
server {
listen 8000;
server_name 103.20.12.76;
access_log /data/ldl/logs/103.20.12.76.access.log main;
location / {
root /var/www/html/website/backend/;
index index.html index.htm;
}
location ~ /media/*\.(jpg|png|jpeg|bmp|gif|swf)$
{
access_log off;
expires 30d;
root /var/www/html/python_backend/myProject;
break;
}
location /media/ {
root /data/ldl/repo/myProject/;
}
location /static/ {
root /data/ldl/repo/myProject/;
}
}
EDIT-1
My Django/Django-Rest-Framework project only provides APIs, not template views, and it uses port 8000.
So I am looking for a way in Nginx to access the media and static resources like this:
http://103.20.12.76:8000/media/images/qiyun_admin_websitemanage/logo/logo_01_YGE3YKm.png
You need to use alias. Here is an example:
location /media {
alias /data/ldl/repo/myProject/media;
access_log off;
expires 30d;
}
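The difference matters: with root, the location prefix stays part of the filesystem path, while alias replaces it. A quick comparison with a hypothetical /srv/files directory (shown side by side for illustration; nginx would not accept two identical locations in one server):
# root keeps the prefix: /media/x.png -> /srv/files/media/x.png
location /media/ {
    root /srv/files;
}

# alias replaces it: /media/x.png -> /srv/files/x.png
location /media/ {
    alias /srv/files/;
}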
And here is a full working example of a live site:
# Expires map
map $sent_http_content_type $expires {
default off;
text/html epoch;
text/css max;
application/javascript max;
~image/ max;
}
server {
listen 80;
server_name www.server.example server.example;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
server_name server.example;
ssl_certificate /etc/letsencrypt/live/server.example/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/server.example/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
charset utf-8;
client_max_body_size 100M;
expires $expires;
access_log /var/log/nginx/server_access.log timed;
error_log /var/log/nginx/server_error.log;
location /media {
alias /home/proj/media;
}
location /static {
alias /home/proj/static;
access_log off;
expires 30d;
## No need to bleed constant updates. Send the whole shebang in one
## fell swoop.
tcp_nodelay off;
## Set the OS file cache.
open_file_cache max=3000 inactive=120s;
open_file_cache_valid 45s;
open_file_cache_min_uses 2;
open_file_cache_errors off;
}
location / {
uwsgi_pass unix:///run/server.sock;
include /etc/nginx/uwsgi_params;
}
}
I'm currently trying to deploy a Django app on a RHEL 7.4 server using Nginx. I've followed these tutorials:
https://simpleisbetterthancomplex.com/tutorial/2017/05/23/how-to-deploy-a-django-application-on-rhel.html
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04
The virtualenv and the nginx server seem to be all right. However, I'm struggling with two errors:
Either I get a 500 error because of the worker_connections parameter value (logs below):
13494#0: *1021 1024 worker_connections are not enough while connecting to upstream, client: 192.168.1.33, server: 192.168.1.33, request: "GET /Syc/login HTTP/1.0", upstream: "http://192.168.1.33:80/Syc/login", host: "192.168.1.33"
Or I increase the worker_connections value above 4096 and get a 400 error like in the thread "400 Bad Request - request header or cookie too large".
Below are my nginx.conf and app.conf. Please let me know if there are configuration mistakes, and thanks in advance for any help.
nginx.conf:
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
# set open fd limit to 30000
worker_rlimit_nofile 30000;
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
large_client_header_buffers 4 32k;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
app.conf:
upstream app_server {
server unix:/opt/sycoma/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name 192.168.1.33; # <- insert here the ip address/domain name
large_client_header_buffers 4 16k;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /opt/sycoma/logs/nginx-access.log;
error_log /opt/sycoma/logs/nginx-error.log;
location /static/ {
alias /opt/sycoma/venv/Sycoma/Syc/static/;
}
location /media/ {
alias /opt/sycoma/venv/Sycoma/media/;
}
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://192.168.1.33;
}
}
Try to remove/comment the line:
proxy_set_header Host $http_host;
or increase large_client_header_buffers.
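A hedged note on what the logs suggest is the underlying cause: in app.conf, proxy_pass http://192.168.1.33 points back at this same server block (the failing upstream in the 500 error is http://192.168.1.33:80), so each request re-enters nginx until worker_connections is exhausted, and $proxy_add_x_forwarded_for appends another address on every pass, which would also explain the oversized-header 400. Pointing the named location at the gunicorn upstream already defined at the top of app.conf should break the loop:
location @proxy_to_app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    # Use the unix-socket upstream instead of the server's own IP,
    # which loops the request straight back into nginx.
    proxy_pass http://app_server;
}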