I have a Django server that communicates with a NodeJS server on another address (REMOTE_SOCKET_ADDRESS).
In Django, I have a line of code that goes like this:
requests.post(settings.REMOTE_SOCKET_ADDRESS, params=query_params)
I would like my Django server not to wait for the response from the NodeJS server before proceeding with the rest of the code. Just send the POST and move on, so that even if NodeJS needs 10 minutes to do whatever it is doing, it won't hold up the Django server.
How can I achieve this "fire and forget" behavior?
Additional info: I am on a shared hosting, so I cannot use workers.
I achieved this by using the requests-futures package: https://github.com/ross/requests-futures
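For reference, a minimal sketch of what that looks like with requests-futures (the URL setting is from the question; query_params here is an assumed example payload):

from django.conf import settings
from requests_futures.sessions import FuturesSession

session = FuturesSession()
query_params = {"task": "sync"}  # assumed example payload

# Sends the POST on a background thread and returns a Future immediately.
# Nothing ever calls .result(), so Django does not wait for NodeJS ("fire and forget").
session.post(settings.REMOTE_SOCKET_ADDRESS, params=query_params)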
Related
I was using a Jupyter notebook and was wondering how it works offline. Where is the server? How is the TCP connection made? How is the HTTP request sent?
Similarly, when we are working on some website project (e.g. making a website in Django), when you run that project in your terminal it gives you an output with an IP address, and when you open that IP address in your browser, the browser will show you your website. So how does this work, and how is that IP address generated? Can anybody please explain?
The browser sends an HTTP request to the server.
The server does its magic and hands the request off via WSGI to Django.
Some part of Django receives the request and turns it into a Django request object.
The request object wanders on some nebulous paths through the middleware which does strange things with it.
The request object finally ends up in some function which looks at the urls, takes the patterns out of urls.py and calls up a view function.
The view functions do their magic (with models and templates as partners); this is probably where I have the strongest illusion of understanding (well, apart from the database abstraction magic, that is... ;)
The view function returns an HttpResponse object, which I guess travels back along some nebulous paths to the WSGI layer.
The web server takes over again and sends the HTTP response to the client.
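To make the server part less nebulous: the IP address your terminal prints is just the development server binding a socket and speaking WSGI to your application. A minimal sketch with Python's built-in wsgiref (the response text is made up):

from wsgiref.simple_server import make_server

def app(environ, start_response):
    # environ carries the parsed HTTP request (path, method, headers, ...)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a minimal WSGI app']

# Bind to 127.0.0.1:8000 -- this is the address the terminal prints
with make_server('127.0.0.1', 8000, app) as server:
    server.serve_forever()

Django's runserver does essentially this, with Django itself standing in for app.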
I'm testing deploying my first Django project using Apache.
I use Django's test client to perform an "internal" GET from my own server, which worked OK locally, but not running on the actual server.
The client ends up getting Django error messages, like
Page not found (404)
Request Method: GET
Request URL: http://testserver/polls/forms/test1/
How can I get the client's GET to work on the actual server, having it be performed against the actual http://my_actual_server_name.something/polls/forms/test1 instead of "testserver"?
I tried setting SERVER_NAME = 'my_actual_server_name.something' in the settings.py file, but that's not it.
Django's test client doesn't actually make HTTP requests; it just builds a request object and passes it to your middleware/views.
If your goal is to make an HTTP request to your own server, an easy way is to install requests and do something like
# Some server on the network
requests.get("http://myserver.com/polls/forms/test1/")
# or some server running on the same machine
requests.get("http://12.0.0.1:8000/polls/forms/test1/")
If you just want to use the functionality of some view, you should move that logic into a function and call that from both the view and your other code.
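A minimal sketch of that refactor (the module, function, and view names are hypothetical):

# polls/logic.py -- shared logic, callable without an HTTP round trip
def compute_test1_results():
    return {"status": "ok"}

# polls/views.py -- thin wrapper that exposes the same logic over HTTP
from django.http import JsonResponse
from polls.logic import compute_test1_results

def test1(request):
    return JsonResponse(compute_test1_results())

# anywhere else in your code, call compute_test1_results() directly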
Very tangential side note:
If you're curious about how the test client avoids making HTTP requests, you can look at the test client's code in the Django source: client.get() calls client.generic(), which calls client.request(), which instantiates a WSGIRequest and then passes that object to your app; that is the request you receive in your views.
We added the ability for admin users to change the server date & time through the portal. Changing the date & time backward works fine, but changing it forward (by more than fastcgi_read_timeout) returns a '504 Gateway Timeout', even though the server time is successfully changed behind the scenes.
Please advise how to handle this.
Thanks.
I had a very similar issue with another project. Maybe it is best to submit the date & time settings (I assume you would be using NTP server IPs to do this) through the portal asynchronously via a JavaScript AJAX request, and then let the server do its thing with the date & time.
Meanwhile, have the client-side JavaScript continuously probe the server with interval AJAX requests (perhaps every 5 seconds) to get back a response message with the server time, as in the sketch below. That way each subsequent AJAX request initiates a new Nginx session; if the first fails or times out, try a second time, if that fails, try a third, and so on.
This worked on our system. However, I do not know whether your product has login/authentication credentials. If it does, the user may have to log back in once all is said and done, because a change in time may also expire their login session. I don't think this is a big deal, though, because in theory they should only need to change the date/time once in a while, if not just once. So it shouldn't have much of an impact on the user experience.
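A minimal sketch of the status endpoint such polling could hit (assuming Django on the server; the view name is hypothetical):

# views.py -- cheap endpoint the client polls every few seconds;
# a successful response tells the client the server is reachable again
from django.http import JsonResponse
from django.utils import timezone

def server_time(request):
    return JsonResponse({"server_time": timezone.now().isoformat()})

Each poll is an independent request, so a 504 on the original time-change request doesn't matter; the client just keeps asking until this endpoint answers.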
tags: nginx, NTP, timeout, 504
I have a Django application running in Gunicorn behind Nginx. Everything works fine, except for one strange thing: I have a "download" view and a RESTful JSON API. When I call the download view, I use urllib2 to access the JSON API to get information. And exactly when I try to make this HTTP GET request to the JSON API, the request times out with the error HTTP Error 504: Gateway Time-out.
When I run the code with ./manage.py runserver, everything works fine. The HTTP GET request to the JSON API also takes only a few milliseconds, so there is no danger of running into a timeout.
Here is the situation in pseudocode:
myproject/views.py: (accessible as: http://myproject.com/download)
1 def download(request, *args, **kwargs):
2 import urllib2
3 opener = urllib2.build_opener()
4 opener.open('http://myproject.com/api/get_project_stats')
The opener.open() call in line four runs into a timeout when running in Gunicorn; when running with ./manage.py runserver everything works fine (and the API call only takes a few milliseconds).
Has anyone had the same problem? And more important: How have you solved it?
I had the same issue using Gunicorn, Nginx, Django and Requests; every time I did:
response = requests.get('http://my.url.com/here')
the workers would time out.
I solved the problem by switching from synchronous (sync) workers to asynchronous (eventlet) workers. With sync workers, the request that calls back into the same server occupies one worker while it waits for another worker to answer the API call, so once all workers are busy the inner request is never served and everything stalls until the timeout.
If you are launching from the command line, add:
-k 'eventlet'
If you are using a config file, add:
worker_class = "eventlet"
I made an application using Qt/C++ that reads some values every 5-7 seconds and sends them to a website.
My approach is very simple: I just read the values I want to send and then make an HTTP POST to the website. I also send the username and password to the website.
The problem is that I cannot find out whether the request was successful. I mean that if I send the request and the server gets it, I will always get an HTTP 200 back. For example, if the password is not correct, there is no way to know it. That is the way HTTP works.
Now I think I will need some kind of protocol to handle the communication between the application and the website.
The question is: what protocol should I use?
If the action performed completes before the response header is sent, you have the option of adding a custom status to it. If your website is built on PHP, you can call header() to add a custom status for the operation.
header('XAppRequest-Status: complete');
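For illustration, here is how the client could check that custom header, sketched with Python's requests for brevity (the asker's app is Qt/C++, where QNetworkReply::rawHeader() would play the same role; the URL and form fields are made up):

import requests

resp = requests.post("https://example.com/submit",
                     data={"username": "user", "password": "secret"})

# The transport-level status is still 200; the application-level
# result lives in the custom header the server set
if resp.headers.get("XAppRequest-Status") == "complete":
    print("operation succeeded")
else:
    print("operation failed or header missing")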
If you can modify the server-side script, you could do the following.
On one end:
Make the HTTP POST request via AJAX and evaluate the result of the AJAX request.
On the server side:
When the HTTP request comes in, do your processing, and if everything goes accordingly, send data back in the response to the script that called it.
Does that solve your problem?