I have some code working on GAE, but I am struggling with a 500 error, which appears to be caused by the long run time.
I am doing the following:
Read the user-provided info
Run a mapreduce method to calculate some stats and send them as an email
(Re)direct the user to a thank you page, since the results will be emailed
The code works fine on the App Engine SDK since there is no time limit there. However, I keep getting the 500 error when I run the code on GAE. If I do not perform the calculations in step 2, the code works again (redirects to a new page and sends the email). I tried doing step 2 after step 3, but I keep getting the same error.
Is there an easy way to fix this? I am thinking of something like this: get the user info and let them know the results will be emailed to them (or redirect them to the main page), and in the meantime (or afterwards) run the mapreduce in the background and email the completed results, so the request time limit does not abort my code.
class Guestbook(webapp2.RequestHandler):
    def post(self):
        # get info provided in form by user (code not shown here)
        # send them to new page or main page
        self.response.write('<html><body>You wrote:<pre>')
        self.response.write("thanks")
        self.response.write('</pre></body></html>')
        #self.redirect('/')
        dump_content = 'Error'
        try:
            dump_content = long_time_taking_mapreduce_method(user_given_info)
        except DeadlineExceededError:
            logging.warning("Deadline error")
        send_results_as_email(OUTPFILE, dump_content)

app = webapp2.WSGIApplication([
    ('/', MainPage),
    ('/sign', Guestbook),
], debug=True)
The whole point of mapreduce is that it runs offline, using as many tasks and as much time as necessary. Trying to run it within your handler function defeats that purpose entirely.
Instead, your mapreduce task itself should call the send_results_as_email method once it has a result.
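For illustration, here is a minimal sketch of that pattern using App Engine's deferred library to push the slow work onto a task queue, where requests get a 10-minute deadline instead of the 60-second frontend limit. It reuses the names from the question (long_time_taking_mapreduce_method, send_results_as_email, OUTPFILE); the 'info' form field is a placeholder, and it assumes the deferred builtin is enabled in app.yaml.

from google.appengine.ext import deferred

def run_and_email(user_given_info):
    # Runs in a task queue worker, outside the original request,
    # then emails the finished result to the user.
    dump_content = long_time_taking_mapreduce_method(user_given_info)
    send_results_as_email(OUTPFILE, dump_content)

class Guestbook(webapp2.RequestHandler):
    def post(self):
        user_given_info = self.request.get('info')  # placeholder form field
        deferred.defer(run_and_email, user_given_info)  # enqueue and return immediately
        self.response.write('<html><body>Thanks! The results will be emailed to you.</body></html>')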
Related
I am executing an external job using DBMS_SCHEDULER from an APEX page by clicking a button, in the manner below (Dynamic Action => Execute PL/SQL):
dbms_scheduler.run_job(job_name => 'APEXDATA.myJobName', use_current_session=> TRUE);
It executes the external job correctly (taking 1-2 minutes). My issue is that while it is executing I cannot access any other page or log in with a new session; every task I attempt shows the error below.
503 Service Unavailable
The connection pool named: |apex|| is not correctly configured, due to the following error(s):
Exception occurred while getting connection: oracle.ucp.UniversalConnectionPoolException:
All connections in the Universal Connection Pool are in use
Is this a general or known issue? If yes, how can I resolve it? At the same time other users also need to perform other tasks or log in.
Thank You.
I think you're mixing two things that are hard to combine:
Dynamic actions are designed to run code from the page without a page submit, so the user can continue to work on the page after doing something (e.g. running PL/SQL code).
Running a process in the database with use_current_session => TRUE takes up the database session until it is completed. Your dbms_scheduler.run_job process will run in the current session, and as long as that job is running no other operations can be run in that database session (the connection is in use, as shown in the error message).
Solutions:
Use use_current_session => FALSE so the job runs in the background.
In the dynamic action, set "Wait for result" to true, so the user is forced to wait until the job completes.
Execute the job on page submit which will also force the user to wait for the job to be completed.
Since your job takes 1-2 minutes to complete, options 2 and 3 are probably not feasible because the user experience is not optimal. If you execute the job in the background, you will probably need some additional code to prevent the user from clicking several times and submitting the job multiple times. You could do that by checking whether the job is already running before you submit it, and skipping the submit if it is.
I have a Django app where the user sends a request, the server does an SQL lookup followed by computation on the results, and finally shows the results to the user.
The SQL lookup and the computation afterwards can take a long time, maybe 30+ minutes. I have seen some webpages ask for an email address in such cases and then send you a URL later. But I'm not sure how this can be done in Django, or whether there are other options for this situation. Any pointer would be very helpful.
(I'm sorry, but as I said it's a rather general question; I don't know how I could provide minimal runnable code for this.)
One way to accomplish this would be to use something like Celery, which is a distributed task queue. The processing task would go into the queue (synchronously or asynchronously), and when it completes it would call a function that sends an email alerting the user that the result is ready.
Documentation: https://docs.celeryproject.org/en/stable/django/first-steps-with-django.html
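A minimal sketch of that pattern, assuming Celery is already wired into the Django project as described in the docs above; run_lookup_and_compute and the email addresses are placeholders for the slow work and details described in the question:

# tasks.py
from celery import shared_task
from django.core.mail import send_mail

@shared_task
def process_and_notify(query_params, user_email):
    # Placeholder for the 30+ minute SQL lookup and computation.
    result_url = run_lookup_and_compute(query_params)
    # Email the user a link to the finished results.
    send_mail(
        "Your results are ready",
        "View them here: %s" % result_url,
        "noreply@example.com",
        [user_email],
    )

The view then calls process_and_notify.delay(query_params, email) and immediately renders a "we will email you" page, while a Celery worker picks the task up and runs it outside the request cycle.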
I have a web page where I need to run a long sql process (up to 20 mins or so) when the user clicks on a certain button. The script runs, but the user is then unable to continue browsing the rest of the website.
I would like to have it so that when the button is clicked, it goes into a queue that runs in the background.
I have looked into django-background-tasks, but the problem is that it does not seem to be possible to start the queued tasks without running python manage.py process_tasks.
I have heard of Celery, but I am using a Windows system and it does not seem to be suitable.
I am new to Django and website infrastructure, and am not sure if this is feasible. I have also seen in older responses that the threading package can be used for this, but I am unsure if that approach is outdated.
You can use create_task, provided by asyncio, to run a background task for you without blocking the view for clients.
Python 3.7+
Asyncio create_task
Disclaimer: I'm not sure myfunc() needs to be async unless you are performing genuinely asynchronous work.
You could also have a while loop in myfunc() for periodic repeated operations.
import asyncio

async def myfunc():
    # the long-running work goes here; sleep stands in for it
    await asyncio.sleep(5)
    print("Hi, after 5 seconds.")

# create_task must be called from code already running inside an event loop
# (e.g. an async view); it schedules myfunc() without blocking the caller
task = asyncio.create_task(myfunc())
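To connect this to the question, here is a hypothetical async Django view using the task above (this assumes Django 3.1+ running under an ASGI server, since create_task needs a running event loop; the view name and response text are made up):

from django.http import HttpResponse

async def start_job(request):
    # Schedule myfunc() on the running event loop and respond immediately,
    # so the user can keep browsing while the work runs in the background.
    asyncio.create_task(myfunc())
    return HttpResponse("Job started")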
I am making a Django REST framework based server, and in one of the requests I get an audio file from the front end, on which I need to run an ML-based algorithm (I have a script for this) and respond to the user with the result. The problem is that this request might take 5-10 seconds to execute. I am trying to understand the following things:
Will Celery help me reduce the workload on the server, given that in any case I need to wait for the result of the ML algorithm and respond to the user?
Should I create a different server to handle this type of request? Would that be a better approach?
Also, is my flow correct? First, upload the file to some cloud platform for storage and serialize the instance to get the URL of the file. Second, run the script using Celery and wait for the result. Third, respond with the result.
Thanks for helping.
I have a series of scheduled tasks that all run at various times of the day. Since migrating from ColdFusion 7 to 10, these tasks have stopped running.
When I check the box that outputs the results to a file, I get a text file that says nothing more than "Connection Failure". I have tried everything imaginable regarding the username and password for the task; it makes no difference. When I run the CFM page in my browser, the page works correctly and generates an email just like it should. I just can't make it run as a scheduled event.
Does the scheduled task folder have any check for the session or anything? I mean, is the scheduled task folder accessible without logging in? Please try removing all the redirect rules for the application. That might work.
For me the requests were timing out. I increased the timeout in the administrator and that solved it. Doing a cfhttp in a test file and dumping the results helped me troubleshoot it.