*10 upstream timed out (110: Connection timed out) while reading response header from upstream with uwsgi - django

I currently have a server set up with nginx and uwsgi running a Django app.
This error didn't start happening until I changed my RDS instance.
My full error message is:
*10 upstream timed out (110: Connection timed out) while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: xxx.xxx.xxx.xxx, request: "GET /load/ HTTP/1.1", upstream: "uwsgi://unix:/tmp/load.sock", host: "example.com", referrer: "https://example.com/"
I was using AWS RDS (Postgres), which worked perfectly fine. The only change I made was switching from the regular Postgres service to Aurora Postgres. I didn't upgrade the existing DB from regular to Aurora; I created a new Aurora Postgres instance, got everything set up, and changed the host and everything else in my Django DB settings. Running runserver locally works fine: it connects to the DB with read and write access and works perfectly. But when I deploy to the server and open up my domain, anything UI-related looks fine, but anything DB-related does not. It takes a while and then, of course, the 504 gateway timeout. I went to check the nginx error log, and that's the error message I found. I googled and tried a few settings other Stack Overflow posts suggested, such as adding single-interpreter = true to the uwsgi.ini file. No luck.
Can someone please give me an idea of where I should look for this problem?
Thanks in advance.

Try going to your RDS instance and checking its security group settings. This happened to me once, and it took me a while to figure out that the security group was the problem. I didn't recall setting up the security group, but it was restricted to my local IP.
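For example, you can inspect the inbound rules on the Aurora cluster's security group and, if needed, allow PostgreSQL traffic from the security group your app server uses. A rough sketch with the AWS CLI; the group IDs below are placeholders, not real values:

# list the current inbound rules on the DB's security group
aws ec2 describe-security-groups --group-ids sg-0aaaaaaaaaaaaaaaa

# allow PostgreSQL (5432) from the app server's security group
aws ec2 authorize-security-group-ingress --group-id sg-0aaaaaaaaaaaaaaaa --protocol tcp --port 5432 --source-group sg-0bbbbbbbbbbbbbbbb

The same check can be done in the AWS console under the cluster's "Connectivity & security" tab.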

Related

Nginx (13: Permission denied) while connecting to upstream

I'm deploying my Django application on a VPS and I'm following the steps in the link below to configure my app with Gunicorn and Nginx.
How To Set Up Django with Postgres, Nginx, and Gunicorn on Ubuntu 16.04
Everything went well with the tutorial (gunicorn and nginx are running), but the issue is that when I'm visiting the VPS through the static IP it's showing a white screen that keeps reloading.
After checking nginx log I found the following:
(13: Permission denied) while connecting to upstream, client: <client_ip>, server: <server_ip>, request: "GET / HTTP/1.1", upstream: "http://unix:/root/myproject/myproject.sock:/", host: "<server_ip>", referrer: "http://<server_ip>/"
After searching for roughly 7 hours, I was finally able to find a solution to this issue in the Nginx forum:
Nginx connet to .sock failed (13:Permission denied) - 502 bad gateway
What I did was simply change the user name on the first line of the /etc/nginx/nginx.conf file.
In my case the default user was www-data, and I changed it to my machine's root username.
At the top of the nginx.conf file there is a user directive (e.g. user nginx;). Just add that user to the same group your site or project belongs to, whether that's www-data or whatever yours is.
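To illustrate both approaches, the relevant pieces look roughly like this; the user and group names are placeholders for whatever actually owns your project and its socket:

# /etc/nginx/nginx.conf -- first line: the account the nginx worker processes run as
user myuser;

# or keep the default nginx user and add it to the group that owns the project/socket
sudo usermod -aG mygroup www-data

Either way, nginx needs permission to reach the socket file and every directory above it.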

Django Production Site keeps going down after 3 weeks

I am running a website using Django, Gunicorn and Nginx off a DigitalOcean droplet. The website runs fine for about three weeks and then it randomly stops connecting users to the webpage. When I visit the webpage in my browser it returns:
This site can’t be reached ERR_CONNECTION_RESET
Restarting the droplet fixes the issue, but it will likely happen again after three weeks.
When I check the Gunicorn worker it says it's running fine, and my Django logs are clean.
Nginx only reported this:
2019/08/31 23:11:56 [error] 28183#28183: *36352 open() "/home/projects/server/mysite/static/img/icon.jpg" failed (2: No such file or directory), client: 66.249.64.149, server: removedurl.com, request: "GET /static/img/icon.jpg HTTP/1.1", host: "removedurl.com.com"
It seems that restarting Nginx fixes the problem.
EDIT:
I just remembered that I have a crontab set up that renews the Let's Encrypt SSL certificate, which could be the problem. Here is the crontab command:
0 0 1 * * /etc/init.d/nginx stop && /opt/letsencrypt/letsencrypt-auto renew && /etc/init.d/nginx restart
EDIT2:
The command above is not a great way to go about this. Instead, I deleted my original Let's Encrypt certificate and used the Nginx webserver plugin for Certbot, which allowed me to renew the certificate through my existing webserver instead of letsencrypt --standalone trying to start up a new webserver on port 80 (which it couldn't).
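For reference, a setup using the Certbot nginx plugin looks roughly like this (the domain is the placeholder one from the log above); certbot reloads nginx itself, so no stop/start is needed:

# issue (or re-issue) the certificate using the running nginx as the webserver
sudo certbot --nginx -d removedurl.com

# crontab: attempt renewal twice a day; certbot only renews when the cert is close to expiry
0 3,15 * * * certbot renew --quiet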

aws elastic beanstalk with spring boot app

I deployed my Spring Boot app following this tutorial https://aws.amazon.com/pt/blogs/devops/deploying-a-spring-boot-application-on-aws-using-aws-elastic-beanstalk/, set the server port to 5000 with an environment variable, and it works fine.
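For context, Elastic Beanstalk's nginx proxies to port 5000, so the app has to listen there. The port setting boils down to something like this; the property file below is a sketch, not my exact config (the tutorial uses a SERVER_PORT environment property instead):

# src/main/resources/application.properties
# listen on the port Elastic Beanstalk's nginx proxies to
server.port=5000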
But some time later, after a period without any requests, when I try to POST or GET some resource I get a timeout error:
2017/08/26 02:19:24 [error] 12955#0: *15 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.2.223, server: , request: "GET /api/motoristas HTTP/1.1", upstream: "http://127.0.0.1:5000/api/motoristas", host: "vaptuberjjaerp-env.e5y5w4fa2q.sa-east-1.elasticbeanstalk.com"
And if I try to access the API documentation link, it works fine: http://vaptuberjjaerp-env.e5y5w4fa2q.sa-east-1.elasticbeanstalk.com/swagger-ui.html
What is happening?
I don't know for certain, but this sounds like your documentation might just be static content that is generated at build time. If that's the case, it would still be accessible even if the Java process for your app died, while the rest of your app would be unavailable. I recommend checking that the process your app runs in is still active and running.
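A quick way to test that theory from the instance itself, using the path and port from the log above (just a sketch):

# does the Java process still exist?
ps aux | grep [j]ava

# does anything answer on the port nginx proxies to?
curl -i http://127.0.0.1:5000/api/motoristas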

nginx, uwsgi, DJango, 502 when DEBUG=False, "upstream prematurely closed connection"

I have a working nginx production server running a Django app, using uwsgi (set up with this tutorial).
nginx and uwsgi are communicating through a UNIX socket.
However, as soon as I turn DEBUG = False in my Django settings, I get a 502 error. The nginx error log tells me:
2015/09/08 10:37:51 [error] 940#0: *4 upstream prematurely closed connection while reading response header from upstream, client: myIP, server: mydomain.ca, request: "GET /quests/ HTTP/1.1", upstream: "uwsgi://unix:///tmp/hackerspace.sock:", host: "myDomain"
How can I prevent the socket connection from timing out, and why is DEBUG = False making this difference?
Thanks!
I found the solution that works for me. I had to specify hosts in the ALLOWED_HOSTS list in Django's settings.py:
ALLOWED_HOSTS = ['example.com', 'example.dev']
The "ALLOWED_HOSTS" answer also solved my problem. One thing to elaborate on, since it was not immediately clear to me anyways, the values you put here are the potential domain names (IPs, etc) that your site will be accessed by.
If your site is http://mysite.here/ then you NEED to put "mysite.here" in the ALLOWED_HOSTS list. Apparently, with Debug=True there is no HOST validation, once it is switched to False the system starts rejecting any request where the value of HOST: header does not appear in the list. For further reading:
https://docs.djangoproject.com/en/1.10/ref/settings/
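So for the hypothetical site above, the settings would look something like this:

# settings.py
DEBUG = False
# every hostname/IP the site is served under must be listed once DEBUG is False
ALLOWED_HOSTS = ['mysite.here']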

502 error after adding application to a Django project running on nginx and gunicorn

I am trying to add an application to an existing Django project, but once I have done so I get a 502 error. The server is running Ubuntu. I don't think it has to do with the application's code, because I got it running on the Django development server. The error goes away when I take the app's name out of settings.py and restart gunicorn.
Here's a part of the log
2011/07/15 01:24:45 [error] 16136#0: *75593 connect() failed (111: Connection refused) while connecting to upstream, client: 24.17.8.152, server: staging.site.org, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8020/", host: "staging.site.org"
Here's the nginx config file.
Nginx Config File
I'm not sure what other information is needed. Not sure where the gunicorn logs are located. My server admin skills are kind of lacking.
Nginx isn't able to connect to your backend (gunicorn) or gunicorn is refusing the connection. You provided no details about the configuration so that's all the help you'll get. You are correct that the application code has nothing to do with it. It's a configuration error on your part.
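As a rough illustration of what has to line up (the names here are placeholders, not taken from the actual config): the address nginx proxies to must match the address gunicorn is actually bound to and listening on.

# nginx server block: forward requests to the gunicorn address from the log
location / {
    proxy_pass http://127.0.0.1:8020;
}

# gunicorn must be bound to that same address (project module is a placeholder)
gunicorn myproject.wsgi:application --bind 127.0.0.1:8020

If gunicorn failed to start after the new app was added (for example, an import error at startup), nothing is listening on 8020 and nginx reports exactly this "Connection refused" error, so check gunicorn's output/logs after restarting it.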