"Error: error - Service Unavailable" while running any job via oracle Apex Page - oracle-apex

I am executing an external job using DBMS_SCHEDULER from an APEX page, triggered by a button click, in the manner below (Dynamic Action => Execute PL/SQL):
dbms_scheduler.run_job(job_name => 'APEXDATA.myJobName', use_current_session=> TRUE);
It executes the external job correctly (taking 1-2 minutes). My issue is that while it is executing, I cannot access any other page or log in with a new session. Every task I perform shows the error below:
503 Service Unavailable
The connection pool named: |apex|| is not correctly configured, due to the following error(s):
Exception occurred while getting connection: oracle.ucp.UniversalConnectionPoolException:
All connections in the Universal Connection Pool are in use
Is this a general or known issue? If yes, how can I resolve it? Other users also need to perform tasks or log in at the same time.
Thank You.

I think you're mixing two things that are hard to combine:
Dynamic actions are designed to run code from the page without a page submit, so the user can continue to work on the page after triggering something (e.g. running PL/SQL code).
Running a process in the database that takes up the database session until it is completed (use_current_session => TRUE). Your dbms_scheduler.run_job process will run in the current session, and as long as that job is running, no other operations can be run in that database session (the connection is in use, as shown in the error message).
Solutions:
1. use_current_session => FALSE, so the job runs in the background.
2. In the dynamic action, set "Wait for Result" to true, so the user is forced to wait until the job completes.
3. Execute the job on page submit, which will also force the user to wait for the job to be completed.
Since your job takes 1-2 minutes to complete, options 2 and 3 are probably not feasible because the user experience is not optimal. If you execute the job in the background, you will probably need to write some additional code to prevent the user from clicking the button a couple of times and submitting the job multiple times. You could do that by checking whether the job is already running before you submit it, and not submitting it if it is (see the sketch below).
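A minimal sketch of that check, assuming the job name from the question and a background run via use_current_session => FALSE (all_scheduler_running_jobs is a standard Oracle dictionary view; adjust ownership and privileges to your setup):

declare
  l_running number;
begin
  -- Is the job currently running? The dictionary stores unquoted
  -- job names in uppercase, so 'myJobName' is looked up as MYJOBNAME.
  select count(*)
    into l_running
    from all_scheduler_running_jobs
   where owner = 'APEXDATA'
     and job_name = 'MYJOBNAME';

  if l_running = 0 then
    -- Option 1: run in the background so the APEX session is not blocked.
    dbms_scheduler.run_job(
      job_name            => 'APEXDATA.MYJOBNAME',
      use_current_session => false);
  end if;
end;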

Related

Running Task In The Background

What is the technology that allows a web application to process a task in the background, without making the user wait until the task finishes?
Example, as a user:
1. I want to submit a form that requires heavy processing (assume it requires checks or actions, uploading documents, etc.).
2. After submitting the form, the task runs in the background, and I can go to another page and do something else.
2.1. At the same time, I might submit another form to the server. The requests can be processed concurrently or queued in a queue system.
3. I will receive a notification from the system whenever the server returns a response (regardless of whether it is a success or a failure).
This feature is similar to what Google Cloud Platform offers.
Try Kue or any other similar library. The term to google is "[language] task queue".
You can of course roll your own, though it will be much easier if you make use of an existing server such as Redis or RabbitMQ, so that the queuing part is handled for you by the server and you can concentrate on your business logic.
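Kue is a Node.js library; to illustrate the same pattern in Python, here is a minimal sketch using the RQ library on top of Redis (process_form and the form id are made-up stand-ins for the heavy work):

# tasks.py -- the heavy work lives in a plain importable function.
import time

def process_form(form_id):
    time.sleep(60)                      # stand-in for the heavy processing
    return "form %s processed" % form_id

# web side -- enqueue and return to the user immediately.
# A worker process started with "rq worker" consumes the queue.
from redis import Redis
from rq import Queue
from tasks import process_form

q = Queue(connection=Redis())           # assumes Redis on localhost:6379
job = q.enqueue(process_form, 42)       # returns at once
print(job.id, job.get_status())         # poll later to notify the user

The web request only pays the cost of putting a message on the queue; the worker picks it up, and the job id lets you check status or notify the user later.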

AWS Beanstalk: Cannot retrieve logs in degraded state

After serving for some time, my app died and went into a "degraded" state. I have no idea what happened, because no one was using it. Maybe it was hibernated and did not wake up?
Now I am trying to check the logs, but I am not able to. Requesting logs takes ages, and from time to time I get timeouts. When I click Request Logs (100 lines or full logs), I get this message:
Elastic Beanstalk is updating your environment.
To cancel this operation select Abort Current Operation from the Actions dropdown.
This takes some time, and in the end nothing happens. Moreover, I cannot abort the operation as instructed, because:
Error
Could not abort the current environment operation for MY_APP_NAME: Environment named MY_ENV_NAME is in an invalid state for this operation. Must be pending deployment.

Break out of loop in AWS SWF activity

I'm running a permanent loop in an SWF activity, say a web crawler crawling the website www.example1.com. However, I don't want to wait until it finishes crawling; at a certain time I want to terminate the activity and switch it to crawling www.example2.com instead.
I have tried 'try-cancel' and 'terminate' on the workflow by workflow ID. It seems this just sends a signal to SWF to indicate that the task is finished in the AWS console, but the activity process on the worker is still running.
Any solution for this?
When an activity is cancelled, a heartbeat call returns a flag that indicates it. So your activity loop should include heartbeating code to support cancellation (a sketch follows). See the "activity heartbeat" section of the "error handling" page of the AWS Flow Framework for Java Developer Guide for an example.
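The answer refers to the AWS Flow Framework for Java; as an illustration of the same heartbeat-and-cancel pattern, here is a minimal sketch using boto3's low-level SWF client in Python (fetch_page, the region, and the URL list are stand-ins):

import boto3

swf = boto3.client("swf", region_name="us-east-1")   # region is a stand-in

def crawl(task_token, urls):
    # Activity loop that heartbeats, so a cancellation request can reach it.
    for url in urls:
        fetch_page(url)   # hypothetical: one unit of crawling work
        # Each heartbeat tells SWF the activity is alive; the response
        # carries the flag that is set when cancellation was requested.
        resp = swf.record_activity_task_heartbeat(taskToken=task_token)
        if resp["cancelRequested"]:
            # Acknowledge the cancellation so the workflow can move on,
            # e.g. schedule a new activity for www.example2.com.
            swf.respond_activity_task_canceled(taskToken=task_token)
            return
    swf.respond_activity_task_completed(taskToken=task_token)

Without the heartbeat call, the worker never learns about the cancellation, which matches the symptom described in the question.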

django celery rabbitmq execute delay

I use Django-Celery + RabbitMQ to execute some async tasks. I define a queue 'sendmail' to execute the send-email task; sending mail is triggered by a specific task (which has its own queue). But now I encounter a problem: after the specific task finishes, the mail is sometimes sent at once, but sometimes takes 5-20 minutes. I want to know what causes this.
Django-Celery packages the task name and parameters as a message to RabbitMQ when task.delay() is called.
I want to know when the message reaches RabbitMQ, but with the web management tool I can only see the total number of messages, not each message's details, in particular the time it arrived. In the Django-Celery log I can only see when the worker got the task from the broker and when it executed the task. I want to know all the related time points so I can tell which step consumes most of the time.
Django-Celery does (I believe) report task data on a per-task basis. When you sync your database, it creates a bunch of monitoring tables which are accessible via the admin. However, in order for tasks to be recorded in these tables, you need to run the celerycam program in the Django context (python ./manage.py celerycam). The celerycam program takes "snapshots" of your tasks every second or so (by default) and records information about them. Another useful tool for monitoring is the celerymon program (which also has to run in the Django context). This is a command-line ncurses program that reports real-time information about tasks as they occur. Finally, rabbitmqctl has a bunch of options that might help with monitoring.
This is a particularly useful page in the docs:
http://celery.github.com/celery/userguide/monitoring.html
Anyway, this is what I use to monitor my tasks when using celery.
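Beyond celerycam snapshots, one way to capture the time points the question asks about is to log them from Celery's task signals. A minimal sketch, assuming a Celery version where the task_sent signal exists (it fires in the publishing process; task_prerun/task_postrun fire in the worker):

import logging
from celery.signals import task_sent, task_prerun, task_postrun

logging.basicConfig(format="%(asctime)s %(message)s", level=logging.INFO)
log = logging.getLogger("task.timing")

@task_sent.connect
def log_sent(task_id=None, task=None, **kwargs):
    log.info("published %s id=%s", task, task_id)      # message handed to RabbitMQ

@task_prerun.connect
def log_prerun(task_id=None, task=None, **kwargs):
    log.info("started %s id=%s", task.name, task_id)   # worker begins executing

@task_postrun.connect
def log_postrun(task_id=None, task=None, **kwargs):
    log.info("finished %s id=%s", task.name, task_id)  # task body returned

Comparing the "published" and "started" timestamps for the same task id shows how long the message sat in the queue, which is where a 5-20 minute gap would show up.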

Operation "timing out" during new item creation in Sitecore Editor

I've created a command in the Sitecore Editor that automatically builds out up to 25 items at a time. The problem that I'm experiencing is that the operation just "hangs" and does not complete. I don't think it's an error because I've added error handling and logging.
I'm getting the following error message "The operation could not be completed. Your session may have been lost due to a time-out or a server failure. Try again."
How can I increase the "time-out" duration (if this is a setting somewhere) - or is there another solution to this problem?
Long-running operations will eventually time out depending on your IIS settings, usually after 20 minutes. Instead, you should run your commands as a scheduled task; scheduled tasks run in the background, with no IIS request left waiting (see the sketch below).
However, it seems strange that inserting 25 items is such a long operation that the browser times out. You might have another issue in your code.
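As a rough sketch of the scheduled-task approach: Sitecore lets you register a background agent in configuration, which it invokes on an interval outside of any IIS request. The type name, assembly, and interval below are made-up placeholders:

<!-- In web.config / Sitecore.config, under <sitecore><scheduling> -->
<scheduling>
  <!-- Sitecore instantiates the class and calls the given method
       on the given interval, with no browser request waiting on it. -->
  <agent type="MyProject.Tasks.BuildItemsAgent, MyProject" method="Run" interval="00:05:00" />
</scheduling>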