django: Localhost stopped working suddenly?

I'm trying to run a Django server and all of a sudden I'm not able to go
to localhost:8000. I was able to a few seconds ago, but now it's just freezing up and saying "waiting for localhost".
I'm on Mac OS X.
How do I debug this?

Some links:
Waiting for localhost : getting this message on all browsers
Waiting for localhost, forever!
Why does my machine keeps waiting for localhost forever?
To summarise: in general it means that 1) the server is waiting for something (e.g. not returning a response), 2) some other service is running on the same port, or 3) there is no DB connection.
That said, a restart should sort all of these out by killing any processes that might have taken the port and by restarting the DB and reconnecting properly.
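On macOS you can also check the second cause (another process holding the port) without a full restart; a small sketch, using port 8000 from the question:
# list any process listening on port 8000
lsof -nP -iTCP:8000 -sTCP:LISTEN
# if a stale runserver (or anything else) shows up, stop it by its PID
kill <PID>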

Related

A Django application error crashed nginx and the server

My Django application ran well for a while, then I got 502 Bad Gateway. After a few hours I was unable to ping the domain or use SSH to connect to my server (on Amazon Lightsail). My other application served by nginx was also unavailable at that point, whereas if I didn't start the Django application, the other application served by nginx ran steadily. So I guess an error in my Django application crashed nginx and the server.
After rebooting the server several times, the server seems to recover and I can ping the domain and use SSH to connect to it again. But after a while, the same problem occurs again. I wonder how to fix this.
Some diagnostic information, covering the period from the start of the Django application to the point where the Nginx server went down, is provided below.
The RAM usage is high throughout this period.
The uwsgi log. https://bpa.st/FP7Q
The Nginx error log. https://bpa.st/35EQ
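Not from the thread, but since the RAM usage climbs until the instance becomes unreachable, one common mitigation to experiment with is capping uWSGI worker memory and request time in the .ini. The values below are illustrative assumptions, not taken from the logs, and need tuning for the Lightsail instance size:
# excerpt of a possible uwsgi .ini (values are illustrative)
# keep the worker count small on a low-memory instance
workers = 2
# recycle any worker whose resident memory grows past ~256 MB
reload-on-rss = 256
# kill requests that take longer than 60 seconds
harakiri = 60
# recycle workers after 1000 requests to limit slow leaks
max-requests = 1000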

How can I troubleshoot PCs not seeing each other over my LAN?

I'm a complete newbie when it comes to networking. I have two PCs on my LAN, both running Manjaro. My main aim is to test functionality on a Django server running on one PC from the other. I am running the Django server on the PC with IP address 192.168.1.138 using the command
python manage.py runserver 192.168.1.138:8000
and in settings.py
ALLOWED_HOSTS = ['localhost', '192.168.1.138']
I can ping 192.168.1.138 from the client PC, and ping the client PC from the server PC. But if I enter the IP address and port into the browser, it fails with
took too long to respond
I don't know if this is a separate problem or a manifestation of the first, but when I run NitroShare I am able to 'see' the PC running the Django server from the PC acting as the client; if I try to transfer a file, though, it again times out. I am unable to see the client from the server in NitroShare.
Any suggestions or help gratefully received
Ensure you don't have a firewall running (or that it allows connections to port 8000). Manjaro's docs imply there might be no firewall by default, but in case there is, see https://wiki.manjaro.org/index.php?title=Firewalls
Set ALLOWED_HOSTS = ['*'] and don't bother limiting them.
Run with python manage.py runserver 0:8000; the 0 stands for 0.0.0.0, i.e. it makes the server listen on all network interfaces.
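A rough way to check the points above, assuming ufw happens to be the firewall in use (it may not be installed at all on Manjaro):
# on the server PC: see whether a ufw firewall is active and what it allows
sudo ufw status verbose
# on the server PC: have Django listen on all interfaces
python manage.py runserver 0:8000
# on the client PC: check that the port answers at all
curl -v http://192.168.1.138:8000/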
First I would scan the open ports of your "server" PC from the other PC; you can do that with a tool like Nmap. Make sure you have opened the ports of your "server" PC in your router's interface. Another option could be launching the Django app in a Docker container. Here's the link to the official Docker image on Docker Hub:
https://hub.docker.com/_/django

504 Gateway Timeout with Flask-SocketIO

I am working on a flask-socketio server which is getting stuck in a state where only 504s (gateway timeout) are returned. We are using AWS ELB in front of the server. I was wondering if anyone wouldn't mind giving some tips as to how to debug this issue.
Other symptoms:
This problem does not occur consistently, but once it begins happening, only 504s are received from requests. Restarting the process seems to fix the issue.
When I run netstat -nt on the server, I see many entries with Recv-Q values of over 100 stuck in the CLOSE_WAIT state
When I run strace on the process, I only see select and clock_gettime
When I run tcpdump on the server, I can see the valid requests coming into the server
AWS health checks are coming back successfully
EDIT:
I should also add two things:
flask-socketio's built-in server is used in production (not gunicorn or uWSGI)
Python's daemonize function is used for daemonizing the app
It seemed that switching to gunicorn as the WSGI server fixed the problem. This might legitimately be an issue with flask-socketio's built-in WSGI server.
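For reference, a minimal sketch of running Flask-SocketIO under gunicorn with an async worker, along the lines of the Flask-SocketIO deployment docs (the module name app, app object app, and port are assumptions; eventlet must be installed):
# single eventlet worker, as the Flask-SocketIO deployment docs recommend
gunicorn --worker-class eventlet -w 1 --bind 0.0.0.0:5000 app:app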

uWSGI downtime when restarting

I have a problem with uWSGI every time I restart the server when I have code updates.
When I restart uWSGI using "sudo restart accounting", there's a small gap between stopping and starting the instance that results in downtime and kills all the current requests.
When I try "sudo reload accounting", it works, but my memory usage doubles. When I run the command "ps aux | grep accounting", it shows that I have 10 running processes (accounting.ini) instead of 5, and it freezes up my server when the memory hits the limit.
accounting.ini
I am running
Ubuntu 14.04
Django 1.9
nginx 1.4.6
uwsgi 2.0.12
This is how uWSGI does a graceful reload: it keeps the old processes until their requests are served and creates new ones that will take over incoming requests.
Read "Things that could go wrong":
Do not forget, your workers/threads that are still running requests
could block the reload (for various reasons) for more seconds than
your proxy server could tolerate.
And this
Another important step of graceful reload is to avoid destroying
workers/threads that are still managing requests. Obviously requests
could be stuck, so you should have a timeout for running workers (in
uWSGI it is called the “worker’s mercy” and it has a default value of
60 seconds).
So I would recommend trying worker-reload-mercy.
The default is to wait 60 seconds; lower it to something that your server can handle.
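A minimal sketch of what that could look like in your accounting.ini (30 seconds is just an illustrative value):
# wait at most 30 seconds for a busy worker before it is killed during reload
worker-reload-mercy = 30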
Tell me if it worked.
uWSGI chain reload
This is another attempt to fix your issue. As you mentioned, your uWSGI workers are restarting in the manner described below:
Send a SIGHUP signal to the master.
Wait for running workers.
Close all of the file descriptors except the ones mapped to sockets.
Call exec() on itself.
One of the cons of this kind of reload might be stuck workers.
Additionally, you report that your server crashes when uWSGI maintains 10 processes (5 old and 5 new ones).
I propose trying chain reload. A direct quote from the documentation explains this kind of reload best:
When triggered, it will restart one worker at time, and the following worker is not reloaded until the previous one is ready to accept new requests.
It means that you will not have 10 processes on your server but only 5.
Config that should work:
# your .ini file
lazy-apps = true
touch-chain-reload = /path/to/reloadFile
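With that config, the chain reload is then triggered by touching the file whenever you deploy new code (the path is the placeholder from the config above):
touch /path/to/reloadFile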
Some resources on chain reloading and the other reload types are linked below:
Chain reloading uwsgi docs
uWSGI graceful Python code deploy

WildFly 10 unexpectedly shuts down often

Currently I have an issue:
the application server (WildFly 10) unexpectedly shuts down often.
I have checked the server log and there is nothing there about the shutdown.
The application runs in AWS (Amazon Web Services).
The application server is WildFly 10.
I am using PuTTY to start the application (remote session).
Server start command: /usr/share/wildfly/bin/standalone.sh &
Once I start it, I close PuTTY.
I put the & symbol in the start command so it runs in the background, but after a few days the server unexpectedly shuts down and there is nothing in the server log.
Thanks in advance
When you start an application from the command line on any Unix-based OS, it will be terminated automatically when the terminal session is closed, unless you tell the OS not to do that:
some-aws-prompt$ nohup /usr/share/wildfly/bin/standalone.sh &
The nohup command is an abbreviation for "don't hang up (the phone)" or "no hang up" when the user logs out.
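A hedged example of the full start command, redirecting the console output to a file so there is still something to inspect after the next shutdown (the log path is arbitrary):
# start WildFly detached from the terminal, keeping its console output
nohup /usr/share/wildfly/bin/standalone.sh > ~/wildfly-console.log 2>&1 &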