Firefox requests are stopped after some time with an empty status code - Django

I have a Django fetch request that takes some time to finish. It works fine on Chrome but fails on Firefox.
To narrow it down, I tried a different view in Firefox and made it wait with time.sleep():
import time
from django.http import HttpResponse

def test_time(request):
    for i in range(70):
        print(i)
        time.sleep(1)
    return HttpResponse("done")
If I make the wait shorter than 20 seconds, the view works fine. Any longer than that and, somewhere around the 25-36 second mark, the page just stops loading and the request shows no status code, a timing of 0 ms, and the response "No response data available for this request".

Related

Set a timeout for a specific Django Rest Framework view

I'm running Django 4.0.5 + Django Rest Framework + Nginx + Gunicorn
Sometimes, I'm going to need to handle some POST requests with a lot of data to process.
The user waits for an "ok" or "fail" response plus a list of ids resulting from the process.
Everything works fine so far for mid-size request bodies (this is subjective), but on big ones the process takes 1 min+.
It's in these cases that I get a 500 error response from DRF, while my process keeps running in the background until the end (but the user will never know it finished successfully).
I did some investigation and raised the Gunicorn timeout parameter (to 180), but it didn't change the service's behavior.
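For reference, the change was along these lines, in a gunicorn.conf.py (only timeout is the knob relevant to the question; the other values are illustrative):

# gunicorn.conf.py - illustrative values except for timeout
bind = "127.0.0.1:8000"
workers = 3
timeout = 180  # seconds; raised from the 30 s default

(Nginx, which sits in front, has its own proxy timeouts that default to 60 s, which might be relevant given the 60 s cutoff.)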
Is there a way to set a timeout larger than 60 s at the @api_view level, or somewhere else?
Use celery async tasks to process such requests as background tasks.
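A minimal sketch of that approach, assuming Celery is already wired into the Django project (the task and view names here are made up for illustration):

# tasks.py
from celery import shared_task

@shared_task
def process_payload(payload):
    # the long-running work now happens on a worker, outside the request cycle
    return heavy_processing(payload)  # hypothetical stand-in for the real processing

# views.py
from rest_framework.decorators import api_view
from rest_framework.response import Response

from .tasks import process_payload

@api_view(['POST'])
def big_import(request):
    task = process_payload.delay(request.data)  # enqueues and returns immediately
    return Response({'status': 'accepted', 'task_id': task.id}, status=202)

The client then polls for (or is notified of) the result instead of holding an HTTP connection open past the Gunicorn/Nginx timeouts.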

What happens if a response takes longer than the delay set in Postman Runner

I'm running tests with a delay set to 2 seconds on the Test Runner on Postman.
I'm just wondering: if a response took longer than this, let's say 5 seconds, does the next request get sent straight away after that response arrives (because it's already gone past the 2 seconds)?
The delay applies between each request: once the response is received, the delay starts.
https://learning.postman.com/docs/running-collections/intro-to-collection-runs/#starting-a-collection-run
This isn't the same as the request timeout setting.
https://learning.postman.com/docs/getting-started/settings/#request
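As a rough Python analogy of the Runner's behavior (not Postman's actual implementation), the delay sits after each response, while the request timeout is a separate per-request limit:

import time
import requests

DELAY_SECONDS = 2     # Runner-style delay between requests
REQUEST_TIMEOUT = 30  # a separate per-request limit

urls = ["https://example.com/a", "https://example.com/b"]  # placeholders

for url in urls:
    response = requests.get(url, timeout=REQUEST_TIMEOUT)
    # ... tests would run against `response` here ...
    time.sleep(DELAY_SECONDS)  # starts only once the response has arrived

So a 5-second response simply pushes the next request back to roughly the 7-second mark; nothing is sent "straight away".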

Do ColdFusion Scheduled Tasks have a built-in request timeout?

I have several scheduled tasks that essentially perform the same type of functionality:
Request JSON data from an external API
Parse the data
Save the data to a database
The "Timeout (in seconds)" field in the Scheduled Task form is empty for each task.
Each CFM template has the following line of code at the top of the page:
<cfscript>
setting requesttimeout=299;
</cfscript>
However, I consistently see the following entries in the scheduled.log file:
"Information","DefaultQuartzScheduler_Worker-8","04/24/19","12:23:00",,"Task
default - Data - Import triggered."
"Error","DefaultQuartzScheduler_Worker-8","04/24/19","12:24:00",,"The
request has exceeded the allowable time limit Tag: cfhttp "
Notice that there is only a one-minute difference between the start of the task and its timing out.
I know that, according to Charlie Arehart, the timeout error messages that get logged are usually not indicative of the actual cause/point of the timeout. In fact, I have run tests and confirmed that the CFHTTP calls generally complete in 1-10 seconds.
Lastly, when I make the same request in a browser, it runs until the requesttimeout set in the CFM page is reached.
This leads me to believe that there is some "forced"/"built-in"/"unalterable" request timeout for Scheduled Tasks, or that they use the default timeout value for the server and/or application (which is set to 60 seconds for this server/application), yet I cannot find this documented anywhere.
If that is the case, is it possible to schedule a task in ColdFusion that runs longer than the forced request timeout?

GAE: long-running mapreduce calculation gives 500 error (Python)

I got my code to work on GAE but am struggling with a 500 error, which looks like it is due to the long run time.
I am doing the following:
Read the user given info
Run some mapreduce method to calculate some stats and send this as email
(Re)direct the user to a thank you page, since the results will be emailed
The code works fine on the App Engine SDK since there is no time limit there. However, I keep getting the 500 error when I run the code on GAE itself. If I do not perform the calculations in step 2, the code works again (redirects to a new page and sends the email). I tried doing step 2 after step 3, but keep getting the same error.
Is there any easy way to fix this? I am thinking of something like: get the user info, tell them the results will be emailed (or redirect them to the main page), and in the meantime run mapreduce in the background and email the completed results, so the time limit does not abort my code.
import logging

import webapp2
from google.appengine.runtime import DeadlineExceededError

class Guestbook(webapp2.RequestHandler):
    def post(self):
        # get info provided in form by user (code not shown here)
        # send them to a new page or the main page
        self.response.write('<html><body>You wrote:<pre>')
        self.response.write("thanks")
        self.response.write('</pre></body></html>')
        #self.redirect('/')
        dump_content = 'Error'
        try:
            dump_content = long_time_taking_mapreduce_method(user_given_info)
        except DeadlineExceededError:
            logging.warning("Deadline error")
        send_results_as_email(OUTPFILE, dump_content)

app = webapp2.WSGIApplication([
    ('/', MainPage),
    ('/sign', Guestbook),
], debug=True)
The whole point of mapreduce is that it runs offline, taking as many tasks and as long as necessary. It's defeating the whole purpose to try and run it within your handler function.
Instead, your mapreduce task itself should call the send_results_as_email method once it has a result.
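One way to get the work out of the handler is App Engine's deferred library (a sketch, assuming the deferred builtin is enabled in app.yaml; the helper names come from the question's code):

from google.appengine.ext import deferred

def run_and_email(user_given_info):
    # runs on the task queue, which allows a far longer deadline than
    # the ~60 s limit on a user-facing request
    dump_content = long_time_taking_mapreduce_method(user_given_info)
    send_results_as_email(OUTPFILE, dump_content)

class Guestbook(webapp2.RequestHandler):
    def post(self):
        deferred.defer(run_and_email, user_given_info)  # queue it and return
        self.response.write('thanks - results will be emailed')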

Generating a PDF from a Django HTML view hangs intermittently using wkhtmltopdf with IIS on Windows Server 2008

I'm returning PDF files generated by wkhtmltopdf from an HTML page in Django using the following code:
currentSite = request.META['HTTP_HOST']
params = {'idOrganisation': idOrganisation, 'idMunicipalite': idMunicipalite, 'nomMunicipalite': nomMunicipalite, 'idUe': idUe, 'dateEvenement': dateEvenement}
command_args = "wkhtmltopdf -s A4 http://%s/geocentralis/fiche-role/propriete/?%s -" % (currentSite, urlencode(params))
process = Popen(command_args.split(' '), stdout=PIPE, stderr=PIPE)
rtn_comm = process.communicate()  # communicate() is better than wait(): it returns the output and avoids pipe deadlocks
pdf_contents = rtn_comm[0]  # index 1 holds stderr if you need to debug
r = HttpResponse(pdf_contents, mimetype='application/pdf')
r['Content-Disposition'] = 'filename=fiche-de-propriete.pdf'
return r
The code works and the PDF is generated after 2-3 seconds, but very often (intermittently) it hangs for around 30-60 seconds before producing the PDF, and Firebug shows me a "NetworkError: 408 Request Timeout". During this hang, my Django site does not respond to any request.
I'm using Django with IIS on Windows server 2008.
I'm looking for any clue on how to solve that issue...
The reason it hangs is that the server runs into racing/concurrency issues and hits a deadlock (and you're probably using a relatively-linked asset or two in your HTML).
You request a PDF, so the server fires up wkhtmltopdf, which begins churning out your PDF file. When it reaches an asset (image, CSS or JS file, font, etc), wkhtmltopdf attempts loading it from that server... which happens to be the same server wkhtmltopdf is running on. If the server cannot handle multiple requests concurrently (or just doesn't handle concurrency well), then it enters a deadlock: wkhtmltopdf is awaiting on an asset on a server that is waiting for wkhtmltopdf to finish up processing, so that it can serve the asset to wkhtmltopdf which is awaiting on an asset...
To fix this in dev, just Base64-embed your assets into the HTML being converted to PDF, or temporarily serve those files from another machine (e.g. a temporary AWS bucket). This should not be a problem in production environments, as your live server is (hopefully) capable of handling multiple GET requests and threads.
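A quick sketch of the Base64-embedding workaround (the file path is just an example):

import base64

# Inline the asset so wkhtmltopdf never issues a request back to the
# same blocked server.
with open('static/img/logo.png', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode('ascii')

img_tag = '<img src="data:image/png;base64,%s" alt="logo" />' % encoded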