Operation "timing out" during new item creation in Sitecore Editor - sitecore

I've created a command in the Sitecore Editor that automatically builds out up to 25 items at a time. The problem I'm experiencing is that the operation just "hangs" and never completes. I don't think it's throwing an error, because I've added error handling and logging and nothing is reported.
I'm getting the following error message "The operation could not be completed. Your session may have been lost due to a time-out or a server failure. Try again."
How can I increase the "time-out" duration (if this is a setting somewhere) - or is there another solution to this problem?

Long-running operations will eventually time out depending on your IIS settings, usually after 20 minutes. Instead, you should run your commands as a scheduled task: scheduled tasks run in the background, so nothing is waiting on the IIS request.
However, it seems strange that inserting 25 items is such a long operation that the browser times out. You might have another issue in your code.
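One way to do that in Sitecore is a scheduling agent registered under <scheduling> in web.config, which runs inside the worker process with no browser request waiting on it. This is only a hedged sketch: the class name, interval, paths and item-creation code are placeholders, not your command's actual logic:

// In web.config, under <sitecore><scheduling>:
//   <agent type="MyProject.Tasks.BulkItemCreator, MyProject" method="Run" interval="00:15:00" />
using Sitecore.Data;
using Sitecore.Data.Items;
using Sitecore.SecurityModel;

namespace MyProject.Tasks
{
    public class BulkItemCreator
    {
        public void Run()
        {
            var db = Sitecore.Configuration.Factory.GetDatabase("master");
            Item parent = db.GetItem("/sitecore/content/Home");
            // Agents run outside a user session, so disable security checks.
            using (new SecurityDisabler())
            {
                for (int i = 0; i < 25; i++)
                {
                    parent.Add("Item" + i, new TemplateID(Sitecore.TemplateIDs.StandardTemplate));
                }
            }
        }
    }
}

Since the agent runs on the server's own schedule, no client connection is held open while the items are created.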

Related

"Error: error - Service Unavailable" while running any job via oracle Apex Page

I am executing an external job using DBMS_SCHEDULER through an APEX page, by clicking a button in the following manner (Dynamic Action => Execute PL/SQL):
dbms_scheduler.run_job(job_name => 'APEXDATA.myJobName', use_current_session=> TRUE);
It executes the external job correctly (taking 1-2 minutes). My issue is that while it is executing I cannot access any other page or log in with a new session; every task I perform shows the error below.
503 Service Unavailable
The connection pool named: |apex|| is not correctly configured, due to the following error(s):
Exception occurred while getting connection: oracle.ucp.UniversalConnectionPoolException:
All connections in the Universal Connection Pool are in use
Is this a general or known issue? If yes, how do I resolve it? At the same time, other users also have to be able to perform tasks or log in.
Thank You.
I think you're mixing two things that are hard to combine:
Dynamic actions are designed to run code from the page without a page submit, so the user can continue to work on the page after triggering something (e.g. running PL/SQL code).
Running a process in the database that takes up the database session until it is completed (use_current_session => TRUE). Your dbms_scheduler.run_job process will run in the current session, and as long as that job is running no other operations can be run in that database session (the connection is in use, as shown in the error message).
Solutions:
Use use_current_session => FALSE so the job runs in the background.
In the dynamic action, set "Wait for result" to true, so the user is forced to wait until the job completes.
Execute the job on page submit which will also force the user to wait for the job to be completed.
Since your job takes 1-2 minutes to complete, options 2 and 3 are probably not feasible, because the user experience is not optimal. If you execute the job in the background, then you probably need to write some additional code to prevent the user from clicking a couple of times and submitting the job multiple times. You could do that by checking whether the job is already running before you submit it, and not submitting it if it is (see the sketch below).
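A minimal sketch of option 1, reusing the job name from the question and assuming the block runs as the job's owner (otherwise query ALL_SCHEDULER_RUNNING_JOBS with an owner filter); the running-job check is one way of several to guard against double submission:

DECLARE
  l_running NUMBER;
BEGIN
  -- Unquoted job names are stored in upper case, hence 'MYJOBNAME'.
  SELECT COUNT(*)
    INTO l_running
    FROM user_scheduler_running_jobs
   WHERE job_name = 'MYJOBNAME';

  IF l_running = 0 THEN
    -- FALSE detaches the job from this session: the page returns at once
    -- and the job runs in its own background session.
    dbms_scheduler.run_job(job_name            => 'APEXDATA.myJobName',
                           use_current_session => FALSE);
  END IF;
END;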

AWS Beanstalk: Cannot retrieve logs in degraded state

After serving for some time, my app died and went into a "degraded" state. I have no idea what happened, because no one was using it. Maybe it was hibernated and did not wake up?
Now I am trying to check the logs, but I am not able to. Requesting logs takes ages, and from time to time I get timeouts. When I click Request logs (100 lines or full logs) I get this message:
Elastic Beanstalk is updating your environment.
To cancel this operation select Abort Current Operation from the Actions dropdown.
This takes some time, and finally nothing happens. Moreover, I cannot abort the operation as instructed, because:
Error
Could not abort the current environment operation for MY_APP_NAME: Environment named MY_ENV_NAME is in an invalid state for this operation. Must be pending deployment.

"The operator or administrator has refused the request" in Task Scheduler

I have scheduled a C# console application in the Task Scheduler of Windows 2012 R2. The application runs when executed manually, or when I right-click the scheduled task and click Run, but it fails when triggered by the Task Scheduler, with the error below.
The operator or administrator has refused the request (0x800710E0)
After a Google search I have also tried the following steps:
Selected "Run whether user logged in or not"
Unchecked "Start the task only if the computer is on AC power"
In my case, the error message "The operator or administrator has refused the request" meant that a previous instance of the task was still running, and the task was configured not to start a new instance if it was already running (the default configuration), so the Task Scheduler refused to start a new instance when the task was triggered.
You can find that option in a select box on the task's Settings tab, under the caption "If the task is already running, then the following rule applies". The default value is "Do not start a new instance".
But that error message is pretty confusing. From the other answers, you can see that it may mean many completely distinct errors, as is usual in Microsoft's products.
Tip
It's helpful to check the History tab of a task. That's where I found out what was actually going on: there was an event "Launch request ignored, instance already running".
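The same information is available from PowerShell on Server 2012 and later, if you prefer the command line; the task name here is a hypothetical placeholder:

# State shows "Running" if a previous instance never finished.
Get-ScheduledTask -TaskName 'MyNightlyJob' | Select-Object TaskName, State
# A LastTaskResult of 0x800710E0 corresponds to this error; NumberOfMissedRuns counts missed schedules.
Get-ScheduledTaskInfo -TaskName 'MyNightlyJob' | Select-Object LastRunTime, LastTaskResult, NumberOfMissedRuns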
In my case, I had to redo the permissions on the task. Somehow it had lost the domain portion of the username: instead of 'DOMAIN\joeuser' it was just 'joeuser'. After a reset, it worked correctly as it had for the previous year.
In my case, having a job set up with the Task Scheduler as written about in "Prevent a Task Scheduler Task from Executing on Setting Updates", I had a job set up to run every X minutes, indefinitely.
Upon seeing the dreaded "The operator or administrator has refused the request" for the Last Run Result, I looked over the History tab and saw detail indicating that the task had "missed its schedule".
The Solution
From the Settings tab of the job properties, I simply checked the option "Run task as soon as possible after a scheduled start is missed", and the problem was resolved, although I did have to type in the credentials again as well.
Note: this started occurring once the server was moved back to the original hardware from a redundant backup server, after hardware repair was completed. The OS was Server 2012 R2, and it had been moved to other hardware while repair was done on the production server, but I didn't notice the issue there (maybe an oversight on my part, I'm not sure).
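That checkbox can also be ticked from PowerShell, for what it's worth. Note that New-ScheduledTaskSettingsSet builds a fresh settings object, so any other non-default settings on the task would have to be re-specified; the task name is a placeholder:

# "Run task as soon as possible after a scheduled start is missed"
$settings = New-ScheduledTaskSettingsSet -StartWhenAvailable
Set-ScheduledTask -TaskName 'MyNightlyJob' -Settings $settings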
I know that @Sushmit-Patil found a solution, but I wanted to add a solution to my similar problem:
It turns out a prior process never exited (it was hanging around in memory because of a defect I had in my code). By default, Windows Task Scheduler won't run the process again if it's already running.
In addition to fixing the defect, in the Task Scheduler, under the Settings tab, I set "If the task is already running, then the following rule applies:" to "Run a new instance in parallel".
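For completeness, the same setting scripted in PowerShell (hypothetical task name again; the default corresponds to -MultipleInstances IgnoreNew):

# "If the task is already running ..." set to "Run a new instance in parallel".
$settings = New-ScheduledTaskSettingsSet -MultipleInstances Parallel
Set-ScheduledTask -TaskName 'MyTask' -Settings $settings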
In my case the error occurred due to folder permissions: I was creating a CSV from my application, which required permission to be granted on the target folder. After giving Full Control on the folder, the error got resolved.
For me, the solution was to check Run with highest privileges in the properties.
In my case my task launches a PowerShell script, and it produced the "The operator or administrator has refused the request (0x800710E0)" error message as seen in the Task Scheduler's task-entry grid. My user name was correct, but when I dropped to a command prompt and simulated the task by running PowerShell against my .ps1 file, I saw an Avast prompt that flagged my script as suspicious and wasn't allowing it to run. I created an Avast exception and now the task runs without any issue.
After turning on history, I also had the error "Missed task start rejected: Task Scheduler did not launch task as it missed its schedule." But I didn't want the task to start when I woke up the computer; I wanted to figure out why the computer didn't wake up.
This answer helped me out: by default, Windows was waking for "Important Wake Timers Only" (system updates, but not my scheduled task).
Under Power Options > Edit Plan Settings > Change advanced power settings > Sleep > Allow wake timers, change the option to "Enabled", and then your computer will wake up to run the task.
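The same change can be scripted with powercfg from an elevated prompt; to the best of my knowledge, RTCWAKE is the alias of the "Allow wake timers" setting in the Sleep subgroup:

rem 1 = Enabled; set the value for AC and battery, then re-apply the plan.
powercfg /setacvalueindex SCHEME_CURRENT SUB_SLEEP RTCWAKE 1
powercfg /setdcvalueindex SCHEME_CURRENT SUB_SLEEP RTCWAKE 1
powercfg /setactive SCHEME_CURRENT

A related task-level setting is "Wake the computer to run this task" (-WakeToRun in New-ScheduledTaskSettingsSet), which lets the task itself wake the machine.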
You can also do this from Settings. Probably an earlier instance was already running and launching a new instance failed.
In my case, the error message "The operator or administrator has refused the request" appeared because the computer was in standby at the scheduled time (and the options "Wake the computer to run this task" and "Run task as soon as possible after a scheduled start was missed" were unchecked).
I had previously chosen "Enable All Tasks History" and a more useful error message appeared in the History tab: "Missed task start rejected: Task Scheduler did not launch task as it missed its schedule. Consider using the configuration option to start the task when available, if schedule is missed."
I have found what I believe to be a bizarre bug in Windows Server 2016 scheduler and maybe other Windows Server versions that produces the OP's error (and a workaround):
Here are the conditions:
You're using the "Monthly" option trigger in your task (I currently have all months selected and just a couple days chosen, e.g. 1st and 15th)
You have the "Synchronize across time zones" selected.
This was originally an issue I found back in November 2020, when my tasks suddenly started running twice after the DST time change (a widely reported bug, but with no obvious solution). I never would have known, except that users started receiving duplicate emails from one of my tasks. In the history you would simply see the task running twice at what appeared to be exactly the same time. It worked fine before the time change. I forget all the troubleshooting I did then, but my end theory was that it was somehow confusing the time after the time change. The workaround was to set the option "Synchronize across time zones", and all seemed well...
Fast forward to March, when DST changed back again, and now I get the following every time a task with the Monthly option runs:
The operator or administrator has refused the request
The History tab on the task is also blank. If you change options and save, the History tab starts logging again and then sometimes stops if the task errors again. Weird.
One workaround is to simply turn off the "Synchronize across time zones" option (tested). However, I don't recommend that option, as I assume you'll have the duplicate-run issue again when DST changes again in November.
The one time I got an error to show in the History tab it stated:
Task Scheduler did not launch task "\EmailCampaign" as it missed its
schedule. Consider using the configuration option to start the task
when available, if schedule is missed.
Therefore, I went and set that option to start the task if the schedule is missed, and all seems well. I figured I'd see the original error and then subsequently the task running, but there's no error anymore either. It all just works.
I know this solution was reported above, but in those cases it was because most people's computers were asleep or something to that effect. My issue is on a production internet-facing server that doesn't go to sleep or hibernate, and it only happens under specific conditions related to the Monthly trigger option. All my other tens of scheduled tasks work flawlessly.
I wrote a PowerShell script to do a task. I was getting this error and landed here (as well as on other lower-ranked search results). The task would run manually, and the first time it was triggered, but not on repeat, even though I had it set up to end the task if it ran longer than a minute.
My problem was caused by not providing an exit code in my PowerShell script. The Task Scheduler simply did not know the task had finished and considered it still running. I could have simply allowed the next instance of the task to be started if the previous one was not finished, but using the exit code is the 'right way'.
So I simply added a new line at the end of my PS1:
exit
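A slightly fuller sketch of the same idea is to exit with an explicit code, so the scheduler can also tell success from failure (the work itself is elided here):

try {
    # ... the actual work of the script ...
    exit 0   # success: recorded as 0x0 in the Last Run Result
}
catch {
    Write-Error $_
    exit 1   # any non-zero code shows up as a failed run
}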
This topic is old, but I had the same problem on Windows Server 2016.
My task executes a BAT script that zips a folder and uploads it to an external backup.
The task never ended because there was a "pause" at the end of my script, and my task was configured with the "Do not start a new instance" setting.
I solved my problem by removing the "pause". I don't know if it will be useful to anyone else.

Django/Postgres performance worsening after repeatedly processing the same query

I am running Django on Apache. I have several client computers which call urllib2.urlopen() and send over some data, which my server processes before immediately sending back a reply. However, while testing this I found a very tricky issue: I have one client repeatedly send the same data to be processed. The first time it takes around 20 seconds, the second time about 40 seconds, and the third time I get a 504 (gateway timeout) error. If I try to send data some more, 504 errors randomly pop up. I am pretty sure this is an issue with Postgres, as the function that processes the information makes many database calls; however, I do not know why the performance of Postgres would decline so much. I have tried several database optimization tricks, including this one: http://stackoverflow.com/questions/1125504/django-persistent-database-connection, to no avail.
Thanks in advance.
Edit: The requests are not coming in concurrently. They are coming in back to back, and each query involves a lot of SELECTs and JOINs, with a few INSERTs and UPDATEs as well. The Apache error logs show that it is just a simple timeout, where the function that processes the client-posted data takes over 90 seconds.
If it's really Postgres, then you should turn on the logging of slow statements in the Postgres configuration to find out which statement exactly is taking so much time.
This can be done by setting the configuration property log_min_duration_statement.
Details are in the manual:
http://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT
You say the function makes "many database calls", so I'd start with a very low number, or even 0 to log the duration of all statements; then you should be able to identify the slow ones.
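For example, in postgresql.conf followed by a configuration reload; the 200 ms threshold is only an illustration:

# postgresql.conf: log every statement running longer than 200 ms (0 = log all)
log_min_duration_statement = 200

On PostgreSQL 9.4 and later the same can be done without editing the file:

ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();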
It could also be a locking issue. Maybe the first call does not end its transaction properly, and subsequent calls run into a timeout while waiting for a resource.
You can verify this by checking the system view pg_locks after the first call.
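For example, a sketch that lists sessions waiting on ungranted locks (the pg_stat_activity column names are the newer ones; very old versions used procpid and current_query instead):

-- Sessions waiting on a lock that another session still holds.
SELECT l.pid, l.locktype, l.mode, l.granted, a.state, a.query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.pid = l.pid
 WHERE NOT l.granted;

A session that stays "idle in transaction" while still holding locks after the first call would point to a transaction that was never committed or rolled back.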
Have you checked the Apache error logs? Have you set Django DEBUG = True or ADMINS = ('email@addr.com',) so you can get a detailed error report about the actual cause of the issue? If so, how about pasting some of that information here?
Why are you certain that it's Postgres? Have you done diagnostics to come to that conclusion? If so, please let us know.
Are you running apache with mod_wsgi? How many processes and threads have you allocated to your django application?
Also, 20 seconds to process the first transaction is a huge amount of time. Perhaps you could show us the view code that is causing the timeout. We may be able to help there.
I sincerely doubt that Postgres alone is causing the issue. It probably has something to do with the application code or server configuration.

ColdFusion 8 scheduler not rescheduling task

I have just done a clean install of CF8 on a Windows 2000 machine. I have a scheduled task I need to run every 15 minutes on this machine, and the machine does little else.
The task is set up as normal through the CF admin, but for some reason, when the task takes about 5 minutes to run, it completes fine (I can see this from debug output and from cfstat) but the scheduler will not reschedule the task.
The scheduling log shows that the task started to execute, but there is no entry that it was rescheduled. E.g.:
[ProcessRecords] Executing at Wed May 20 10:30:00 BST 2009
I have been over my server timeouts. I have NO timeout in CF admin and this particular script has a <cfsetting requesttimeout="43200" /> tag set. There are no exceptions in the console logging. The last bit of console logging is the very last debug statement in my .cfm template.
I do notice that tasks that run in a shorter time, say for example under a minute, will reschedule as normal.
Has anyone come across a problem like this before?
I'm baffled. Any and all replies are appreciated!
Cheers,
Ciaran
Not for nothing, but I've never seen anything like this with CF8. Are you sure that you have the latest hotfix and JVM installed? This might have been something in CF8 that was fixed in 8.01.
hotfix 2 for cf8.01
list of all hotfixes and updates for cf8.01
hotfix 3 for cf8
list of all hotfixes and updates for cf8
latest jvm
upgrade instruction for jvm
If you suspect that an uncaught exception is causing the issue, then might I suggest logging portions of the process. Case in point: I had a similar problem with a scheduled task where it would just bottom out for no reason (I never had the reschedule problem, though). What I ended up doing to diagnose the problem was use cflog to write out portions of the process as they completed. This particular task took about 4 minutes to complete but ran through about 200 portions (it was a mass emailer for a bunch of clients).
I logged when each portion started and completed, along with how long it took. By doing so, I could see which portion would trip up the whole process and knew where to focus my attention.
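A minimal sketch of that pattern (the clients array, loop body and log file name are hypothetical):

<cfloop from="1" to="#arrayLen(clients)#" index="i">
    <cfset portionStart = getTickCount() />
    <!--- ... process one client's portion here ... --->
    <cflog file="ProcessRecords" text="Portion #i# completed in #getTickCount() - portionStart# ms" />
</cfloop>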