Show the Real IP in the Logs of a Keter Managed App - yesod

I would like to display the actual IP of the request as opposed to the localhost in my log files. Since Keter manages the Nginx config I am not sure what I need to change to get the real IP.
This is what I see now:
127.0.0.1 - - [11/Jan/2014:09:25:08 +0000] "GET /favicon.ico HTTP/1.1" 200 - "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:27.0) Gecko/20100101 Firefox/27.0"

Keter hasn't been based on nginx for quite a while now. Recent versions of Keter set the X-Real-IP request header to contain the client's IP address (see issue #8), which you can use in wai-extra via IPAddrSource.
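Several posts on this page sit behind a reverse proxy on the same host (Keter here, nginx in the questions below) that forwards the client address in X-Real-IP. This added Python example, not code from the question or from wai-extra, shows the same choice wai-extra's IPAddrSource offers between the socket peer and the header as a small WSGI helper and emits an Apache-style log line like the one above; the trusted-proxy check and the X-Forwarded-For fallback are this example's own assumptions.

```python
import ipaddress
from datetime import datetime, timezone

# Assumption of this example: the proxy (Keter or nginx) runs on the same host,
# so the app sees loopback as the TCP peer and trusts X-Real-IP only from it.
TRUSTED_PROXIES = (ipaddress.ip_network("127.0.0.0/8"), ipaddress.ip_network("::1"))
SAMPLE_TIME = datetime(2014, 1, 11, 9, 25, 8, tzinfo=timezone.utc)

def from_trusted_proxy(peer):
    try:
        return any(ipaddress.ip_address(peer) in net for net in TRUSTED_PROXIES)
    except ValueError:
        return False

def client_ip(environ, source="header"):
    """source="socket": TCP peer only (the 127.0.0.1 seen in the question).
    source="header": X-Real-IP set by the proxy, else first X-Forwarded-For hop,
    else the peer; headers count only when the peer is a trusted proxy."""
    peer = environ.get("REMOTE_ADDR", "-")
    if source == "socket" or not from_trusted_proxy(peer):
        return peer
    real_ip = environ.get("HTTP_X_REAL_IP", "").strip()
    if real_ip:
        return real_ip
    forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
    return forwarded.split(",")[0].strip() if forwarded else peer

def apache_log_line(environ, status, length, when, source="header"):
    """Combined log format, matching the sample line in the question."""
    ts = when.strftime("%d/%b/%Y:%H:%M:%S %z")
    size = "-" if length is None else str(length)
    return (f'{client_ip(environ, source)} - - [{ts}] '
            f'"{environ["REQUEST_METHOD"]} {environ["PATH_INFO"]} '
            f'{environ["SERVER_PROTOCOL"]}" {status} {size} '
            f'"{environ.get("HTTP_REFERER", "")}" "{environ.get("HTTP_USER_AGENT", "")}"')

# Constructed request: the browser reached Keter, which proxied it over loopback.
PROXIED = {
    "REMOTE_ADDR": "127.0.0.1", "HTTP_X_REAL_IP": "203.0.113.7",
    "REQUEST_METHOD": "GET", "PATH_INFO": "/favicon.ico",
    "SERVER_PROTOCOL": "HTTP/1.1", "HTTP_USER_AGENT": "Mozilla/5.0",
}
# Constructed request that reached the app port directly; its header is ignored.
DIRECT = dict(PROXIED, REMOTE_ADDR="198.51.100.4")

if __name__ == "__main__":
    print(apache_log_line(PROXIED, 200, None, SAMPLE_TIME, source="socket"))  # 127.0.0.1 ...
    print(apache_log_line(PROXIED, 200, None, SAMPLE_TIME))  # 203.0.113.7 ...
    print(apache_log_line(DIRECT, 200, None, SAMPLE_TIME))   # 198.51.100.4 ...
```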

Related

Nginx 10 minute timeout occurring

I have an AWS-hosted web application that initiates a long-running server process (more than 10 minutes). An Nginx reverse proxy server sits between the application load balancer (ALB) and the service. Both the Nginx server and the service reside within separate Kubernetes pods running on an EC2 instance.
I'm experiencing an issue with a connection being closed. The Nginx logs show an HTTP 499 error:
[05/Dec/2022:12:02:27 +0000] "POST -------------- HTTP/1.1" 499 0 "https://------------.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"
The issue is repeatable and occurs exactly 10 minutes after the request was initiated. Despite my having set the ALB, Nginx, and SQL Alchemy timeouts to be much longer than 10 minutes, I suspect a timeout is occurring with a default value of 10 minutes, but I can't figure out where.
Nginx is the product I'm least familiar with and so I suspect that I have failed to make the necessary timeout configs in its conf file. I have set this:
proxy_read_timeout 20m;
Can anyone suggest where in the system the default timeout is occurring?

DDoS crashes my site that uses Gunicorn in a docker container, nginx throws connection refused errors, yet Gunicorn is still running?

I am running a Django site with Gunicorn inside a docker container.
Requests are forwarded to this container by nginx, which is running non-dockerized as a regular Ubuntu service.
My site sometimes comes under heavy DDoS attacks that cannot be prevented. I have implemented a number of measures, including Cloudflare, nginx rate limits, gunicorn's own rate limits, and fail2ban as well. Ultimately, these attacks manage to get through due to the sheer number of IP addresses that appear to be in the botnet.
I'm not running anything super-critical, and I will later be looking into load balancing and other options. However, my main issue is that the DDoS attacks do not just take down my site - it's that the site doesn't restore availability when the attack is over.
Somehow, the sheer number of requests is breaking something, and I cannot figure it out. The only way to bring the site back is to restart the container. Nginx service is running just fine, and shows the following in the error logs every time:
2022/08/02 18:03:07 [error] 2115246#2115246: *72 connect() failed (111: Connection refused) while connecting to upstream, client: 172.104.109.161, server: examplesite.com, request: "GET / HTTP/2.0", upstream: "http://127.0.0.1:8000/", host: "examplesite.com"
From this, I thought that somehow the DDoS was crashing the docker container with gunicorn and the django app. Hence, I implemented a health check in the Dockerfile:
HEALTHCHECK --interval=60s --timeout=5s --start-period=5s --retries=3 \
    CMD curl -I --fail http://localhost:8000/ || exit 1
I used Docker Autoheal to monitor the health of the container; however, the container never turns "unhealthy". Manually running the command curl http://localhost:8000/ returns the website's home page, which is why the container never turns unhealthy.
Despite this, the container does not appear to be accepting any more requests from nginx, as this is the only output from gunicorn (indicating that it receives the healthcheck curl, but nothing else):
172.17.0.1 - - [02/Aug/2022:15:34:49 +0000] "GET / HTTP/1.0" 403 135 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3599.0 Safari/537.36"
[2022-08-02 15:34:49 +0000] [1344] [INFO] Autorestarting worker after current request.
[2022-08-02 15:34:49 +0000] [1344] [INFO] Worker exiting (pid: 1344)
172.17.0.1 - - [02/Aug/2022:15:34:49 +0000] "GET / HTTP/1.0" 403 135 "-" "Mozilla/5.0 (iPad; CPU OS 8_1_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B435 Safari/600.1.4"
[2022-08-02 15:34:50 +0000] [1447] [INFO] Booting worker with pid: 1447
[2022-08-02 15:34:50 +0000] [1448] [INFO] Booting worker with pid: 1448
[2022-08-02 15:34:51 +0000] [1449] [INFO] Booting worker with pid: 1449
127.0.0.1 - - [02/Aug/2022:15:35:31 +0000] "HEAD / HTTP/1.1" 200 87301 "-" "curl/7.74.0"
127.0.0.1 - - [02/Aug/2022:15:36:31 +0000] "HEAD / HTTP/1.1" 200 87301 "-" "curl/7.74.0"
127.0.0.1 - - [02/Aug/2022:15:37:31 +0000] "HEAD / HTTP/1.1" 200 87301 "-" "curl/7.74.0"
127.0.0.1 - - [02/Aug/2022:15:51:33 +0000] "HEAD / HTTP/1.1" 200 87301 "-" "curl/7.74.0"
[2022-08-02 15:51:54 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:1449)
[2022-08-02 15:51:54 +0000] [1449] [INFO] Worker exiting (pid: 1449)
127.0.0.1 - - [02/Aug/2022:15:52:33 +0000] "HEAD / HTTP/1.1" 200 87301 "-" "curl/7.74.0"
127.0.0.1 - - [02/Aug/2022:15:53:34 +0000] "HEAD / HTTP/1.1" 200 87301 "-" "curl/7.74.0"
As you can see, no more non-curl requests are received by gunicorn after 15:34:49. Nginx continues to show the upstream connection refused error. What can I do about this? Manually restarting the docker container is simply not feasible - the health check should work, but for some reason the site still works internally, but the docker container is not receiving the outside requests from nginx.
I've tried varying the gunicorn workers and number of requests per worker, but nothing works. The site works perfectly fine normally; I am just completely stuck on where the DDoS is breaking something. From my observation, nginx is functioning fine, and the issue is somewhere with the dockerised gunicorn instance, but I don't know how, given it responds to internal curl commands perfectly fine. If it was broken, the health check wouldn't be able to access the site!
Edit, extract of my nginx config:
server {
    listen 443 ssl http2;
    server_name examplesite.com www.examplesite.com;
    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;
    ssl_client_certificate /etc/ssl/cloudflare.pem;
    ssl_verify_client on;
    client_body_timeout 5s;
    client_header_timeout 5s;
    location / {
        limit_conn limitzone 15;
        limit_req zone=one burst=10 nodelay;
        proxy_pass http://127.0.0.1:8000/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect http://127.0.0.1:8000/ https://examplesite.com;
    }
}
Update: No unreasonable use of system resources from the container, still unsure where in the pipeline it's breaking

Internal Server Error on access via HTTPS to a Django application

I'm trying to access my Django application over HTTPS.
In the project's config.py I have set the key
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
My setup serves everything through Apache 2.4, so I created the right entry in the "sites-available" directory, obviously followed by a2ensite; I generated my own SSL key (.crt and .key files), changed ownership and permissions, and restarted apache2.
When I try to access the site I get the following error message
"""
Internal Server Error
The server encountered an internal error or misconfiguration and was
unable to complete your request.
Please contact the server administrator at webmaster#localhost to
inform them of the time this error occurred, and the actions you
performed just before this error.
More information about this error may be available in the server error log.
Apache/2.4.7 (Ubuntu) Server at inno Port 9443
"""
I've taken a look into /var/log/apache2/error.log but there's nothing; in /var/log/apache2/access.log I've found
xxx.xxx.xxx.xxx - - [25/May/2016:12:29:36 +0200] "GET /favicon.ico HTTP/1.1" 500 996 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:46.0) Gecko/20100101 Firefox/46.0"
Is there someone who can give me a hint?
Thanks a lot
Carlos
It looks like your virtual host within Apache isn't properly configured, so it's not handling PHP files the way it should. I guess you should use a handler directive in order to tell Apache to handle all .php files, and most probably your issue will be fixed. This is the maximum that can be done since you didn't provide much info about the error.

Production Django Application Throwing/Not Throwing 500 Error based on Debug = Value

I have a production Django application that runs fine with Debug = True but doesn't with Debug=False.
If I load the domain, it shows my urls.py file, which is really bad.
I want to get my application to use Debug=False and TEMPLATE_DEBUG=False instead of Debug=True and TEMPLATE_DEBUG=True, since using the True value exposes the application.
If I view my error.log under nginx with DEBUG=True:
2013/10/25 11:35:34 [error] 2263#0: *5 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.xx.xxx, server: *.myapp.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8001/", host: "www.myapp.com"
If I view my access.log under nginx with DEBUG=True:
xx.xxx.xx.xxx - - [25/Oct/2013:11:35:33 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
So my question is: why, when I set DEBUG=True and TEMPLATE_DEBUG=True, does it load successfully, showing the application, and when I set DEBUG=False and TEMPLATE_DEBUG=False does it show the custom HTTP 500 error page (which I created to handle HTTP 500 errors)?
Thanks to Toad013 and Dmitry for their suggestions.
It appears the issue might have been with how nginx and gunicorn were being started and not a configuration issue, thus, I ended up using the following to start my app:
/usr/local/bin/gunicorn -c /home/ubuntu/virtualenv/gunicorn_config.py myapp.wsgi
sudo nginx -c /etc/nginx/nginx.conf

Nginx Bad Gateway with Django Social Auth and uwsgi

My site is running correctly locally (using the built in runserver), but when running with nginx and uwsgi, I'm getting a Bad Gateway (502) during the django-social-auth redirect.
The relevant nginx error_log:
IPREMOVED - - [11/Oct/2012:12:10:18 +1100] "GET /complete/google/? ..snip .. HTTP/1.1" 502 574 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.26 Safari/537.11"
The uwsgi log:
invalid request block size: 4204 (max 4096)...skip
Thu Oct 11 12:16:46 2012 - error parsing request
Refreshing the Bad Gateway response redirects and logs in correctly. This happens every single time. The nginx and uwsgi logs here have different timing as they were separate requests. The logs are consistent.
This is the first time deploying django to nginx for me, so I'm at a loss as to where to start.
Have you tried increasing the size of the uwsgi buffer:
-b 32768
http://comments.gmane.org/gmane.comp.python.wsgi.uwsgi.general/1171
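The "invalid request block size: 4204 (max 4096)" line is uwsgi comparing the size of the request variables nginx forwarded against its buffer size, which the -b option raises. This added Python example, not code from the post or from uwsgi, encodes a constructed request in the uwsgi protocol's variable-block format to show how a long OAuth callback query string plus cookies exceeds the default; all sample values are constructed.

```python
import struct

# uwsgi's default --buffer-size; the "max 4096" in the question's log line.
DEFAULT_BUFFER_SIZE = 4096

def encode_vars(environ):
    """Serialize WSGI/CGI variables as the uwsgi protocol does (modifier1 = 0):
    <uint16 LE key length><key><uint16 LE value length><value>, repeated."""
    out = bytearray()
    for key, value in environ.items():
        k, v = key.encode("latin-1"), value.encode("latin-1")
        out += struct.pack("<H", len(k)) + k + struct.pack("<H", len(v)) + v
    return bytes(out)

def block_size(environ):
    """The 'request block size' uwsgi compares against its buffer."""
    return len(encode_vars(environ))

def encode_packet(environ):
    """Full packet: 4-byte header (modifier1, datasize, modifier2) plus vars."""
    body = encode_vars(environ)
    return struct.pack("<BHB", 0, len(body), 0) + body

def decode_packet(packet):
    """Parse a packet back into a dict, as the server side has to."""
    _, datasize, _ = struct.unpack("<BHB", packet[:4])
    body, environ, pos = packet[4:4 + datasize], {}, 0
    while pos < len(body):
        klen = struct.unpack_from("<H", body, pos)[0]
        key = body[pos + 2:pos + 2 + klen].decode("latin-1")
        pos += 2 + klen
        vlen = struct.unpack_from("<H", body, pos)[0]
        environ[key] = body[pos + 2:pos + 2 + vlen].decode("latin-1")
        pos += 2 + vlen
    return environ

def rejected(environ, buffer_size=DEFAULT_BUFFER_SIZE):
    """True when uwsgi would log 'invalid request block size: N (max M)...skip'."""
    return block_size(environ) > buffer_size

# Constructed request resembling the OAuth callback in the question: a long
# query string plus session cookies push the variable block past 4096 bytes.
CALLBACK_REQUEST = {
    "REQUEST_METHOD": "GET",
    "PATH_INFO": "/complete/google/",
    "QUERY_STRING": "state=" + "s" * 1500 + "&code=" + "c" * 1500,
    "HTTP_COOKIE": "sessionid=" + "x" * 900 + "; csrftoken=" + "y" * 64,
    "HTTP_HOST": "example.invalid",
    "REMOTE_ADDR": "192.0.2.10",
}
# block_size(CALLBACK_REQUEST) -> 4133: rejected at 4096, accepted with -b 32768.
```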