I have a Django application running in Gunicorn behind Nginx. Everything works fine, except for one strange thing: I have a "download" view and a RESTful JSON API. When I call the download view, I use urllib2 to access the JSON API to get information. And exactly when I try to do this HTTP GET request to the JSON API, the request times out with the error HTTP Error 504: Gateway Time-out.
When I run the code with ./manage.py runserver everything works fine. The HTTP GET request to the JSON API also only takes a few milliseconds, so there is no danger of running into a timeout.
Here is the situation in pseudo code:
myproject/views.py: (accessible as: http://myproject.com/download)
1 def download(request, *args, **kwargs):
2     import urllib2
3     opener = urllib2.build_opener()
4     opener.open('http://myproject.com/api/get_project_stats')
The opener.open() call in line four runs into a timeout when running in Gunicorn; when running with ./manage.py runserver everything works fine (and the API call only takes a few milliseconds).
Has anyone had the same problem? And more importantly: how have you solved it?
I had the same issue using Gunicorn, Nginx, Django and Requests.
Every time I did:
response = requests.get('http://my.url.com/here')
the workers would time out.
I solved the problem by switching from synchronous (sync) workers to asynchronous (eventlet) workers.
If you are launching from the command line, add:
-k 'eventlet'
If you are using a config file, add:
worker_class = "eventlet"
Related
I have a Django application deployed inside a container on top of Nginx using Dokku. One of the view functions of the Django application includes a request:
views.py
import requests

def foo(request):
    ...
    response = requests.get(url)
    ...
It is probably noteworthy that url is the URL of the Django application itself, so the request is from the application to itself. The request is to one of the API endpoints (the reasons for doing this are historical). When the view is called, the request to url fails with a 504 gateway timeout.
I cannot reproduce this in any other context. Specifically:
There is no error when running on localhost with the development server, where url is then the url of the development app (localhost to itself works).
There is no error when running on localhost with the development server, where I manually make the url the production url (localhost to production works).
There is no error when running this request on the production server but outside of the view. Specifically, I did a docker exec into the container, started the Django environment (manage.py shell), and ran the exact request that the view was making, and it worked! (production to production works)
It seems that I only get an issue when the request is made in the context of a view that is itself answering another request.
Any ideas?
Is there a way to save the response of a Flask API that is running locally on my machine?
It may not make much sense, as I have the logic locally and there is no need to get the response again from the local URL... but in my case, I have another webhook which runs locally, which means I need to run both Flask and my webhook locally.
I am looking for a way to get around this.
You can save your response on your machine using pickle. However, it is not recommended, because websites can change their content at any time.
import requests
import pickle

resp = requests.get("https://github.com")

# serialize the Response object to disk
with open("test", "wb") as fd:
    pickle.dump(resp, fd)

# later, load it back; the body and metadata survive,
# but the underlying network connection does not
with open("test", "rb") as fd:
    resp_ = pickle.load(fd)

print(resp_.url)
I have a Django server that communicates with a NodeJS server on another address (REMOTE_SOCKET_ADDRESS).
In Django, I have a line of code that goes like this:
requests.post(settings.REMOTE_SOCKET_ADDRESS, params=query_params)
I would like my Django server not to wait for the response from the NodeJS server before proceeding with the code. Just send the POST and go on, so that even if NodeJS needs 10 minutes to do whatever it is doing, it won't affect the Django server.
How can I achieve this "fire and forget" behavior?
Additional info: I am on a shared hosting, so I cannot use workers.
I achieved this by using the requests-futures package: https://github.com/ross/requests-futures
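For reference, a minimal sketch of what that looks like, reusing settings.REMOTE_SOCKET_ADDRESS and query_params from the question. FuturesSession runs each request in a background thread pool, so the view can move on without waiting:

from django.conf import settings
from requests_futures.sessions import FuturesSession

session = FuturesSession()
# the POST is executed in a background thread; we get a future back
future = session.post(settings.REMOTE_SOCKET_ADDRESS, params=query_params)
# deliberately never call future.result() -- that is what would block
# until the NodeJS server answers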
I am sending a POST request from a method inside a web application running on Django + nginx + gunicorn. I have no issues receiving a 200 response from the same code when executed on Django's development server (using runserver).
try:
    response = requests.post(post_url, data=some_data)
    # OK is the 200 status constant (e.g. httplib.OK in Python 2)
    if response.status_code == OK and response.content == '':
        logger.info("Request successful")
    else:
        logger.info("Request failed with response({}): {}".format(response.status_code, response.content))
    return response.status_code == OK and response.content == ''
except requests.RequestException as e:
    logger.info("Request failed with exception: {}".format(e.message))
    return False
I checked the server logs at post_url; it is indeed returning a 200 response with this data. However, when I run the app behind gunicorn and nginx, I am not able to receive the response (though the request is being sent). The code gets stuck at the first line after the try block, and the gunicorn worker times out (after 30 seconds).
This is the apache server log at the post_url:
[14/Sep/2016:13:19:20 +0000] "POST POST_URL_PATH HTTP/1.0" 200 295 "-" "python-requests/2.9.1"
UPDATE:
I forgot to mention, this request takes less than a second to execute, so it is not a timeout issue. Is something wrong with the configuration? I have the standard nginx + gunicorn setup, where gunicorn is set as the proxy_pass in nginx. Since I am behind an nginx proxy, should I be doing something different when sending a POST request from the application?
In my gunicorn settings, setting workers=2 solved this issue.
When I was sending a request to the external URL, the external application would send a request back. This new request would occupy the one and only worker in the application. The original request that I sent out is left without a worker, and so it gets stuck.
With 2 workers, I am able to simultaneously send out a request and receive another request.
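For reference, the relevant setting as a sketch (the same value can be passed on the command line as -w 2):

# gunicorn.conf.py
# at least two sync workers, so a view that triggers a callback request
# to this same application cannot deadlock the only available worker
workers = 2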
You could have issues with Nginx, where the problem could be an nginx "request entity too large" error. If you send too much data to the server, Nginx might reject the request. We had issues with Nginx causing problems when we were trying to upload too big an image. We are also using nginx + gunicorn with Django, so I suspect this might be the same issue.
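If that is the cause, the usual fix is raising client_max_body_size in the nginx config. A sketch, where 20M is just an example value:

# in the server (or location) block that proxies to gunicorn
server {
    ...
    client_max_body_size 20M;  # nginx default is 1M; larger bodies get a 413 error
}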
This is a gunicorn timeout issue. You can increase the timeout of gunicorn by specifying the additional flag --timeout 60 in the command you're using to execute gunicorn. Of course, you can customise the timeout length depending on your needs. The argument is in seconds.
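For example (a sketch; myproject.wsgi:application stands in for your actual WSGI entry point):

gunicorn --timeout 60 myproject.wsgi:application

The same setting can also go in a gunicorn config file as timeout = 60.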
I've tried using both CherryPy 3.5 and Tornado 4 to host my Flask WSGI application. Regardless of the WSGI server, every 2 seconds my browser tries to make a CONNECT/GET request to myserver:9485.
Chrome's and Firefox's network views both show the repeated requests; IE's network view doesn't show anything, but I do see related errors in its console. (Screenshots omitted.)
Everything about the site works fine, but it kills me knowing this is happening in the background. How do I stop the madness?
Oddly enough, I don't see these messages when I run the code shown below on my development machine.
Additional details:
Python 2.7.6, Flask 0.10.1, Angular 1.2.19
CherryPy code:
# imports assumed for this snippet (CherryPy 3.x); app is the Flask
# application created elsewhere
from cherrypy import wsgiserver, tools

static_handler = tools.staticdir.handler(section='/', dir=app.static_folder)
d = wsgiserver.WSGIPathInfoDispatcher({'/': app, '/static': static_handler})
server = wsgiserver.CherryPyWSGIServer(('127.0.0.1', 9000), d)
server.start()
Probably unrelated, but I am using an EventSource polyfill on one of my pages (I see these messages regardless of whether I hit that page or not).