nginx file upload freezes - django

I'm testing a Django website deployment. The site works without any issues when I connect directly to the Gunicorn localhost and run it in debug mode (so that Django handles file uploads itself). When I access the site through nginx with debug mode turned off (nginx proxies to the same Gunicorn localhost), everything works just as well, except file uploads. Whenever I try to upload a file larger than 1 MB, the upload freezes at some point (with a 1.3 MB file, my browser freezes at 70%).
I've installed nginx into a conda virtual environment (conda install --no-update-dependencies -c anaconda nginx). Here is the etc/nginx.conf file:
# nginx Configuration File
# https://www.nginx.com/resources/wiki/start/topics/examples/full/
# http://nginx.org/en/docs/dirindex.html
# https://www.nginx.com/resources/wiki/start/
# Run as a unique, less privileged user for security.
# user nginx www-data; ## Default: nobody
# If using the supervisord init system, do not run in daemon mode.
# Bear in mind that non-stop upgrade is not an option with "daemon off".
# daemon off;
# Sets the worker threads to the number of CPU cores available in the system
# for best performance.
# Should be > the number of CPU cores.
# Maximum number of connections = worker_processes * worker_connections
worker_processes auto; ## Default: 1
# Maximum number of open files per worker process.
# Should be > worker_connections.
# http://blog.martinfjordvald.com/2011/04/optimizing-nginx-for-high-traffic-loads/
# http://stackoverflow.com/a/8217856/2127762
# Each connection needs a filehandle (or 2 if you are proxying).
worker_rlimit_nofile 8192;
events {
# If you need more connections than this, you start optimizing your OS.
# That's probably the point at which you hire people who are smarter than
# you as this is *a lot* of requests.
# Should be < worker_rlimit_nofile.
worker_connections 8000;
}
# Log errors and warnings to this file
# This is only used when you don't override it on a server{} level
#error_log logs/error.log notice;
#error_log logs/error.log info;
error_log var/log/nginx/error.log warn;
# The file storing the process ID of the main process
pid var/run/nginx.pid;
http {
# Log access to this file
# This is only used when you don't override it on a server{} level
access_log var/log/nginx/access.log;
# Hide nginx version information.
server_tokens off;
# Controls the maximum length of a virtual host entry (ie the length
# of the domain name).
server_names_hash_bucket_size 64;
# Specify MIME types for files.
include mime.types;
default_type application/octet-stream;
# How long to allow each connection to stay idle.
# Longer values are better for each individual client, particularly for SSL,
# but means that worker connections are tied up longer.
keepalive_timeout 20s;
# Speed up file transfers by using sendfile() to copy directly
# between descriptors rather than using read()/write().
# For performance reasons, on FreeBSD systems w/ ZFS
# this option should be disabled as ZFS's ARC caches
# frequently used files in RAM by default.
sendfile on;
# Don't send out partial frames; this increases throughput
# since TCP frames are filled up before being sent out.
tcp_nopush on;
# Enable gzip compression.
gzip on;
# Compression level (1-9).
# 5 is a perfect compromise between size and CPU usage, offering about
# 75% reduction for most ASCII files (almost identical to level 9).
gzip_comp_level 5;
# Don't compress anything that's already small and unlikely to shrink much
# if at all (the default is 20 bytes, which is bad as that usually leads to
# larger files after gzipping).
gzip_min_length 256;
# Compress data even for clients that are connecting to us via proxies,
# identified by the "Via" header (required for CloudFront).
gzip_proxied any;
# Tell proxies to cache both the gzipped and regular version of a resource
# whenever the client's Accept-Encoding capabilities header varies;
# Avoids the issue where a non-gzip capable client (which is extremely rare
# today) would display gibberish if their proxy gave them the gzipped version.
gzip_vary on;
# Compress all output labeled with one of the following MIME-types.
gzip_types
application/atom+xml
application/javascript
application/json
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/bmp
image/svg+xml
image/x-icon
text/cache-manifest
text/css
text/plain
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy;
# text/html is always compressed by gzip module
# This should be turned on if you are going to have pre-compressed copies (.gz) of
# static files available. If not it should be left off as it will cause extra I/O
# for the check. It is best if you enable this in a location{} block for
# a specific directory, or on an individual server{} level.
# gzip_static on;
include conf.d/*.conf;
}
This is the original version of my server's configuration file (conf.d/test.conf).
server {
    server_name localhost;
    listen 8081;
    access_log on;
    client_max_body_size 32M;
    send_timeout 100s;

    location /static/ {
        alias /Users/user/static/;
        autoindex on;
        error_log /Users/user/.nginx/labsite.static.error.log warn;
    }

    location /media/ {
        alias /Users/user/media/;
        autoindex on;
        error_log /Users/user/.nginx/labsite.media.error.log warn;
    }

    location / {
        proxy_pass http://localhost:8001;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }

    access_log /Users/user/.nginx/labsite.access.log combined;
    error_log /Users/user/.nginx/labsit.error.log warn;
}
I've found several related posts:
Nginx PHP Failing with Large File Uploads (Over 6 GB)
https://serverfault.com/questions/626817/nginx-file-upload-pauses-stalls-in-the-middle-uploads-only-258kb-and-stops
https://easyengine.io/tutorials/php/increase-file-upload-size-limit/
They led me to make the following modifications:
server {
    server_name localhost;
    listen 8081;
    access_log on;
    client_max_body_size 32M;
    send_timeout 300s;
    gzip_static off;

    location /static/ {
        alias /Users/user/static/;
        autoindex on;
        error_log /Users/user/.nginx/labsite.static.error.log warn;
    }

    location /media/ {
        alias /Users/user/media/;
        client_body_temp_path /Users/user/media;
        autoindex on;
        error_log /Users/user/.nginx/labsite.media.error.log warn;
    }

    location / {
        proxy_pass http://localhost:8001;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }

    access_log /Users/user/.nginx/labsite.access.log combined;
    error_log /Users/user/.nginx/labsit.error.log warn;
}
I've also tried setting sendfile off in my config file, because that's recommended for FreeBSD (and Mac OS X is based on FreeBSD), but to no avail. Am I missing something?

It seems I've figured this out. I had to change the temporary directory (I'm not entirely sure why, because there were no permission-related issues) and set/increase the client_body_timeout parameter.
server {
    listen 8081;
    server_name localhost;
    client_max_body_size 32M;
    client_body_timeout 300s;
    send_timeout 300s;
    client_body_temp_path /Users/user/media;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /Users/user;
    }

    location /media/ {
        root /Users/user;
    }

    location / {
        proxy_pass http://localhost:8001;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }
}
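For context on why the temporary directory is involved at all: with default proxy settings, nginx reads the whole upload from the client first, spilling any body larger than client_body_buffer_size into client_body_temp_path, and only then forwards it to the proxied Gunicorn backend. A minimal sketch of spelling that out, reusing this post's paths; the 128k buffer size is illustrative and not taken from the original config:
# Sketch only: standard nginx directives, values illustrative except the paths above.
client_body_buffer_size 128k;              # bodies above this size are buffered to disk
client_body_temp_path /Users/user/media;   # the temp dir must exist and be writable by the nginx worker
client_body_timeout 300s;                  # max delay allowed between two successive reads of the body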

Related

Unable to implement websocket with Nginx and Daphne

I am trying to set up websockets on my Django application using Daphne and Nginx. On my local setup everything works as expected, but after deploying to the server the websockets do not respond. This is the nginx.conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
client_max_body_size 10M;
}
and this is my sites-available file which is accessed by Nginx:
server {
server_name 139.59.9.118 newadmin.aysle.tech;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/django/AysleServer/src;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
location /wss/ {
proxy_pass http://0.0.0.1:8001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/newadmin.aysle.tech/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/newadmin.aysle.tech/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = newadmin.aysle.tech) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name 139.59.9.118 newadmin.aysle.tech;
listen 80;
return 404; # managed by Certbot
}
and this is my daphne.service file:
[Unit]
Description=WebSocket Daphne Service
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/home/django/AysleServer/src
#ExecStart=/home/django/AysleServer/MyEnv/bin/python /home/django/AysleServer/MyEnv/bin/daphne -b 0.0.0.0 -p 8001 adminPanel.asgi:application
ExecStart=/home/django/AysleServer/MyEnv/bin/python /home/django/AysleServer/MyEnv/bin/daphne -e ssl:8001:privateKey=/etc/letsencrypt/live/newadmin.aysle.tech/privkey.>
Restart=on-failure
[Install]
WantedBy=multi-user.target
I tried sending a websocket request like this:
ws://newadmin.aysle.tech/ws/test/
ws://newadmin.aysle.tech:8001/ws/test/
But I do not get any response back. I checked the log files for errors, but there are none. My guess is that Nginx is not forwarding the request to Daphne, probably a configuration issue, but I do not know what to change. Please help me with this. Thanks for your time in advance. Note that I am also using Gunicorn to handle the HTTP requests, and those work as expected.
Since you are using SSL in your nginx config, you also have to use wss instead of ws as the scheme.
Also, your location is /wss/, so your URI should use that path too.
Try this for a request from the client:
wss://newadmin.aysle.tech/wss/test/
If this doesn't work, you could also check whether your host even allows WebSockets, or whether you have to activate them. For example, I used a Djangoeurope server and had to activate WebSockets for the URI.
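Alternatively, if the client code should keep requesting .../ws/test/, the nginx location has to match that path instead of /wss/. A minimal sketch, assuming Daphne is bound to plain TCP on port 8001 (as in the commented-out ExecStart line) rather than terminating TLS itself:
location /ws/ {
    proxy_pass http://127.0.0.1:8001;        # assumes Daphne on plain TCP, not the ssl: endpoint
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;  # WebSocket handshake headers
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
The client would then connect to wss://newadmin.aysle.tech/ws/test/, with nginx still terminating TLS on 443.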

Nginx + uWSGI + Django too slow

I have Django running behind nginx and uWSGI. The cached response loads very fast, but at other times the website takes more than 30s to load. I am unable to diagnose the root cause of the slowdown. Here's what I can provide as info to help narrow down the issue -
GTmetrix - From the waterfall report, I conclude that the waiting time for static files is too high, along with the initial server response time. Here is a more detailed breakdown:
Link to the Lighthouse parameters and the waterfall report
nginx.conf - Here is the nginx config file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 75;
types_hash_max_size 2048;
client_max_body_size 5M;
sendfile_max_chunk 512;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format upstream_time '$remote_addr - $remote_user [$time_local] '
    '"$request" $status $body_bytes_sent '
    '"$http_referer" "$http_user_agent" '
    'rt="$request_time" uct="$upstream_connect_time" uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/access.log upstream_time;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable msie6;
# And all the gzip mime types here
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
proxy_cache_path /data/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;
server {
location ~* \.(jpg|jpeg|png|gif|ico|css|js){
proxy_cache my_cache;
proxy_cache_revalidate on;
proxy_cache_min_uses 3;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503
http_504;
proxy_cache_lock on;
expires 365d;
proxy_pass http://example.net;
}
}
}
Nginx Project Config -
map $sent_http_content_type $expires{
default on;
text/html epoch;
text/css max;
application/javascript max;
~image/ max;
}
server{
listen 80;
server_name example.com;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/mysite/project_dir/app_dir;
expires $expires;
}
location /images/ {
expires $expires;
root /home/mysite/project_dir/app_dir/static/images/;
}
location /media/ {
expires $expires;
root /home/mysite/project_dir/;
}
location / {
include uwsgi_params;
uwsgi_pass unix:/run/uwsgi/mysite.sock;
gzip_static on;
proxy_buffering off;
proxy_cache my_cache;
proxy_cache_revalidate on;
proxy_cache_min_uses 3;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503
http_504;
proxy_cache_lock on;
expires 365d;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_set_header Connection "";
}
listen 443 ssl http2;#Managed by certbot
#All the subsequent certbot settings not tampered with
}
Logs - So, when I log nginx using the above config, the access logs show upstream_response_time correctly only when the website loads from cache. When it takes >30s to load, all the upstream_* timing parameters (everything except the request time itself) show a hyphen '-'.
UPDATE:
django-debug-toolbar - Resource Usage:
User CPU time: 964.000 msec
System CPU time: 52.000 msec
Total CPU time: 1016.000 msec
Elapsed time: 1019.185 msec
All the SQL queries are taking minimal time (10.78 ms). The logger also shows 0 errors.
I would highly appreciate it if anyone could help me diagnose the root cause of this slowdown. Thank you!
Phew! So I figured out the solution. I used https://www.webpagetest.org and came to the conclusion that the initial connection time was very high (~30s). When that happens, it is most likely a DNS/firewall issue. My issue was DNS-based: I had 2 IPs added as A records to my domain, and one of them was a private IP. The browser actually spent ~30s trying that IP, and once the website loaded, the browser cached the response, so subsequent response times were low. Simply removing the private IP worked for me.

(nginx + gunicorn) small server instance drops/times out connections at 60+ simple API requests/second. Can it be improved?

I'm setting up the first production architecture for my Django-based app, using an nginx + gunicorn + remote Postgres database setup.
After performing simple API load tests with https://loader.io, I've found that when the number of clients sending API requests exceeds 60 clients/second during a 30-second test, the tool reports that the connections time out.
When using a two-server setup with a load balancer I can double the clients/second number, but I would expect a single 3 vCPU / 1 GB RAM setup to be able to handle more than 30 requests/second - am I right?
I've tried a lot of different gunicorn / nginx config parameters, but nothing seems to help.
This is the content of my /etc/nginx/nginx.conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
events {
worker_connections 4000;
multi_accept on;
use epoll;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_names_hash_bucket_size 512;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
reset_timedout_connection on;
keepalive_requests 100000;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
This is the content of my /etc/nginx/sites-available/MY_DOMAIN file:
server {
listen 80;
listen [::]:80;
server_name MY_DOMAIN www.MY_DOMAIN;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
ssl on;
client_max_body_size 5M;
server_name MY_DOMAIN www.MY_DOMAIN;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /var/www/backend;
}
location /loaderio-b061bddf86a67379411d4ef54f7ee430/ {
root /var/www/backend;
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/backend/MY_SOCKET.sock;
}
location /ws/ {
include proxy_params;
proxy_pass http://unix:/var/www/backend/ws.sock;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
send_timeout 300;
}
ssl_certificate /etc/letsencrypt/live/MY_DOMAIN/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/MY_DOMAIN/privkey.pem; # managed by Certbot
}
This is the content of my supervisor file:
[program:gunicorn]
directory=/var/www/backend
command=/root/.pyenv/versions/VENV_NAME/bin/gunicorn --workers 5 --keep-alive 15 --worker-class gevent --bind unix:/var/www/backend/SOCK_NAME.sock config.wsgi:application
autostart=true
autorestart=true
log_level=debug
stderr_logfile=/var/log/gunicorn/gunicorn.out.log
stdout_logfile=/var/log/gunicorn/gunicorn.err.log
user=root
group=www-data
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
[program:daphne]
directory=/var/www/backend
command=/root/.pyenv/versions/VENV_NAME/bin/daphne -u /var/www/backend/ws.sock config.asgi:application
autostart=true
autorestart=true
stderr_logfile=/var/log/daphne/daphne.out.log
stdout_logfile=/var/log/daphne/daphne.err.log
user=root
group=www-data
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
[group:GROUP_NAME]
programs=gunicorn,daphne
When performing the load test, the CPU vCores are at 10-18% load and the RAM usage is around 70%.
Is it possible to push this single server instance above 60 req/sec, or is it just a hardware limitation? I've already tried a DigitalOcean 16 vCPU / 8 GB RAM droplet and the results were pretty much the same, no matter whether 5 or 15 workers were used.

Gunicorn and Django for 500 concurrent requests with limited resources

I was tasked with creating a Django-Gunicorn demo app. In this task, I need to be able to handle 500 concurrent login requests in 1 second.
I have to deploy the app in a VM with 2GB RAM and 2 core CPUs (using Vagrant and VirtualBox, Ubuntu 16.04). I already tried the following for deployment.
gunicorn --workers 5 --bind "0.0.0.0:8000" --worker-class "gevent" --keep-alive 5 project.wsgi
Using a JMeter test from the host machine, the test always takes around 7-10 seconds. Even if the login endpoint only returns an empty response without any database access, the time is almost the same. Can you tell me what's wrong with this?
I use the default settings at /etc/nginx/nginx.conf.
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
And here are my reverse proxy settings, which I put in the sites-available folder.
server {
listen 80;
location /static {
autoindex on;
alias /vagrant/static/;
}
location /media {
autoindex on;
alias /vagrant/uploads/;
}
location / {
proxy_redirect http://127.0.0.1:8000/ http://127.0.0.1:8080/;
proxy_pass http://127.0.0.1:8000;
}
}
Thanks
The short answer is that you are missing worker connections in gunicorn, so it cannot handle more concurrent requests.
For 500 concurrent login requests, the number of concurrent connections the database can handle is also important. If the database cannot handle the load, you are going to fail too. If you're using PostgreSQL, you have to raise max_connections and use a connection pool.
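To illustrate the first point: with the gevent worker class, gunicorn's --worker-connections setting caps how many simultaneous connections each worker will handle, so it can be set explicitly alongside the worker count. A sketch based on the command from the question; the numbers are placeholders rather than a tested recommendation:
gunicorn --workers 5 --worker-class gevent --worker-connections 1000 --keep-alive 5 --bind "0.0.0.0:8000" project.wsgi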

Force WWW behind an AWS EC2 Load Balancer

I've run into a small issue: we're using a load balancer for a new project, but we cannot force the www. prefix without causing a redirect loop between requests.
We're currently using NGINX, and the snippet to redirect is the following:
LOAD BALANCER NGINX CONFIG
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/before/*;
# FORGE CONFIG (DOT NOT REMOVE!)
include upstreams/mywebsite.com;
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name .mywebsite.com;
if ($host !~* ^www\.){
rewrite ^(.*)$ https://www.mywebsite.com$1;
}
# FORGE SSL (DO NOT REMOVE!)
ssl_certificate /etc/nginx/ssl/mywebsite.com/225451/server.crt;
ssl_certificate_key /etc/nginx/ssl/mywebsite.com/225451/server.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
charset utf-8;
access_log off;
error_log /var/log/nginx/mywebsite.com-error.log error;
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/server/*;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://370308_app/;
proxy_redirect off;
# Handle Web Socket Connections
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/after/*;
HTTP SERVER NGINX CONFIG
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/before/*;
server {
listen 80;
listen [::]:80;
server_name .mywebsite.com;
root /home/forge/mywebsite.com/public;
if ($host !~* ^www\.){
rewrite ^(.*)$ https://www.mywebsite.com$1;
}
# FORGE SSL (DO NOT REMOVE!)
# ssl_certificate;
# ssl_certificate_key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparams.pem;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.htm index.php;
charset utf-8;
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/server/*;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
access_log off;
error_log /var/log/nginx/mywebsite.com-error.log error;
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
}
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/after/*;
Thing is, with this config I'm only getting redirect loops from the server.
Help please :D <3
After writing the prior general-purpose answer, I Googled "FORGE CONFIG (DOT NOT REMOVE!)", and this was the first result:
https://laracasts.com/discuss/channels/forge/forge-how-to-disable-nginx-default-redirection
inside nginx/forge-conf/be106.net/before/redirect.conf file there is this simple config:
…
server_name www.my-domain.net;
return 301 $scheme://my-domain.net$request_uri;
…
is there a simple way of removing this without altering the file itself(as it look like bad idea).
So, it appears that the redirect is being caused by the application you're using, so we've found the most likely cause of the loop!
In turn, the appropriate way to configure your application to avoid said loop would be outside of the scope of StackOverflow.
However, as a workaround:
consider whether you actually need all those forge-conf include directives at the load-balancer level; you could then fake the appropriate Host header passed to the backend so that it does not trigger a redirect (provided you remove your own redundant redirects):
- proxy_set_header Host $http_host;
+ proxy_set_header Host example.com;
note that the reason the forge-conf/example.com/before/redirect.conf directive takes precedence over your own configuration for .example.com is the order of the directive — you could potentially move the /before/* include to be after your own configuration, if such a move would otherwise make sense.
I don't think the nginx snippets you provided would cause a redirect loop by themselves.
First, you have to figure out whether it's an actual redirect — very often in these questions, the 301 Moved Permanently response gets cached in your browser, and subsequently you see a cached version, instead of a fresh one.
Subsequently, you'd have to figure out what is causing the redirect loop:
Try adding unique strings to each redirect directive, to see which one would be causing the loop.
if ($host !~* ^www\.) { return 301 $scheme://www.$host/levelX$request_uri; }
Ask yourself why do you have so many redirect directives in the first place — there doesn't seem to be much of a valid reason to have redirect directives both at the front-end load balancer, as well as the backend.
If the above doesn't resolve the issue, then you know that the redirect loop is not coming from the files you've provided, and you have to dig deeper — it's possible for it to come from some other files, perhaps one of your include directives, or perhaps a default server of www.example.com is defined elsewhere, which redirects to example.com, or perhaps the redirect is done at the application layer.
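On the point about keeping redirects in one place: a minimal sketch of doing the canonical-www redirect only at the load balancer and dropping the backend's rewrite, reusing the upstream name from the load-balancer config above (certificates and forge-conf includes omitted for brevity):
server {
    listen 443 ssl;
    server_name mywebsite.com;                       # bare domain only: redirect once, here
    # ssl_certificate / ssl_certificate_key as in the original server block
    return 301 https://www.mywebsite.com$request_uri;
}

server {
    listen 443 ssl;
    server_name www.mywebsite.com;                   # canonical host: no further redirects
    # ssl_certificate / ssl_certificate_key as in the original server block
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://370308_app/;
    }
}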