I'm seeing a change that really just started yesterday afternoon, Wednesday April 6th, approximately 14:00 GMT. I used to be able to trigger a Lambda function (in my case via API Gateway) and I could normally see the relevant log entries show up in the AWS CloudWatch Console within a few seconds. As of yesterday, the logs are taking maybe up to 30 minutes to show up, and I'm wondering if anyone else has experienced anything similar or could point me in the right direction to check things out. Thank you in advance!
I'm going to answer (close) this question out ... a few hours ago the delay stopped. I'm now able to see logs appear in CloudWatch within a few seconds of the Lambda function completing. Thanks all
I am running AWS Glue jobs using PySpark. They have a set Timeout (as visible in the screenshot) of 1440 minutes, which is 24 hours. Nevertheless, the jobs keep running past those 24 hours.
When this particular job had been running for over 5 days, I stopped it manually (clicking the stop icon in the "Run status" column of the GUI visible in the screenshot). However, since then (it has been over 2 days) it still hasn't stopped: the "Run status" is Stopping, not Stopped.
Additionally, after about 4 hours of running, new logs (column "Logs") in CloudWatch regarding this Job Run stopped appearing (my PySpark script has print() statements which regularly and often log extra data). Also, the last error log in CloudWatch (column "Error logs") was written 24 seconds after the date of the newest log in "Logs".
This behaviour repeats across multiple jobs.
My questions are:
What could be the reasons for Job Runs not obeying the set Timeout value? How can I fix that?
Why is the newest log from about 4 hours after the Job Run started, when logs should appear regularly throughout the (desired) 24-hour duration of the Job Run?
Why don't the Job Runs stop when I try to stop them manually? How can they be stopped?
Thank you in advance for your advice and hints.
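For question 3, one thing worth trying besides the console is issuing the stop through the Glue API (the console's stop icon presumably calls the same BatchStopJobRun operation underneath). A minimal sketch using the AWS CLI from PowerShell; the job name and run ID are placeholders:

# Placeholders: substitute your job name and the stuck run's ID.
$jobName = 'my-glue-job'
$runId   = 'jr_0123456789abcdef'

# Ask Glue to stop the run.
aws glue batch-stop-job-run --job-name $jobName --job-run-ids $runId

# Poll the run state to see whether it moves from STOPPING to STOPPED.
aws glue get-job-run --job-name $jobName --run-id $runId --query 'JobRun.JobRunState'

If the run stays in STOPPING even after the API call, that generally isn't something you can force from the client side, and a support case is the usual next step.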
I have scheduled a C# console application in Task Scheduler on Windows 2012 R2. The application runs when I execute it manually, or when I right-click the scheduled task and click Run, but it fails when triggered by Task Scheduler, with the error below.
The operator or administrator has refused the request(0x800710E0)
After a Google search I have also followed the steps below:
Selected "Run whether user logged in or not"
Unchecked "Start the task only if the computer is on AC power"
In my case, the error message "The operator or administrator has refused the request" meant that a previous instance of the task was still running, and the task was configured not to start a new instance if it was already running (the default configuration), so Task Scheduler refused to start a new instance when the task was triggered.
You can find that option in a select box on the task's Settings tab, under the caption "If the task is already running, then the following rule applies". The default value is "Do not start a new instance".
But that error message is pretty confusing. From the other answers, you can see that it may mean many completely distinct errors, as is usual in Microsoft's products.
Tip
It's helpful to check the History tab of a task. That's where I found out what was actually going on: there was an event "Launch request ignored, instance already running".
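If you prefer to flip that setting without the GUI, here is a minimal sketch using the built-in ScheduledTasks PowerShell cmdlets; the task name is a placeholder, and note that New-ScheduledTaskSettingsSet builds a fresh settings object, so any other non-default settings need to be respecified on it:

# 'MyTask' is a placeholder; -MultipleInstances also accepts Queue and IgnoreNew.
$settings = New-ScheduledTaskSettingsSet -MultipleInstances Parallel
Set-ScheduledTask -TaskName 'MyTask' -Settings $settings

# The last run's outcome can also be inspected without opening the GUI:
Get-ScheduledTaskInfo -TaskName 'MyTask' | Select-Object LastRunTime, LastTaskResult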
In my case, I had to redo the permissions on the task. Somehow it had lost the domain portion of the username: instead of 'DOMAIN\joeuser' it was just 'joeuser'. After a reset, it worked correctly as it had for the previous year.
In my case, as per the job setup described in "Prevent a Task Scheduler Task from Executing on Setting Updates", I had a job set up to run every X minutes, indefinitely.
Upon seeing the dreaded "The operator or administrator has refused the request" for the Last Run Result, I looked over the History tab and saw detail indicating that the task had "missed its schedule".
The Solution
From the Settings tab of the job properties, I simply checked the option "Run task as soon as possible after a scheduled start is missed", and the problem was resolved, although I did have to type in the credentials again as well.
Note: This started occurring once the server was moved back to its original hardware after a hardware repair was completed (it had been running on a redundant backup server in the meantime). The OS was Server 2012 R2, and while the production server was being repaired the OS ran on the other hardware; I didn't notice this issue there, though perhaps that was an oversight on my part.
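The same checkbox maps to a switch on New-ScheduledTaskSettingsSet, so a sketch like this would set it from PowerShell (the task name is a placeholder, and the earlier caveat about the settings object replacing other settings applies):

$settings = New-ScheduledTaskSettingsSet -StartWhenAvailable
Set-ScheduledTask -TaskName 'MyJob' -Settings $settings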
I know that @Sushmit-Patil found a solution, but I wanted to add a solution to my similar problem:
It turns out a prior process never exited (it was hanging around in memory because of a defect I had in my code). By default, Windows Task Scheduler won't run the process again if it's already running.
In addition to fixing the defect, in Task Scheduler, under the Settings tab, I set "If the task is already running, then the following rule applies" to "Run a new instance in parallel".
The error occurred due to folder permissions: my application was creating a CSV, which required permission to be granted on the folder. After giving Full Control on the folder, the error was resolved.
For me, the solution was to check "Run with highest privileges" in the task's properties.
In my case my task launches a PowerShell script, and it produced the "The operator or administrator has refused the request (0x800710E0)" error message as seen in the Task Scheduler's task-entry grid. My user name was correct, but when I dropped to a command prompt and simulated the task by running PowerShell against my .ps1 file, I saw an Avast prompt that flagged my script as suspicious and wasn't allowing it to run. I created an Avast exception and now the task runs without any issue.
After turning on history I also had the error "Missed task start rejected: Task Scheduler did not launch task as it missed its schedule." But I didn't want the task to start when I woke up the computer; I wanted to figure out why the computer didn't wake up.
This answer helped me out: by default Windows was waking for "Important Wake Timers Only" (system updates, but not my scheduled task).
Under Power Options > Edit Plan Settings > Change advanced power settings > Sleep > Allow wake timers, change the option to "Enabled", and then your computer will wake up to run the task.
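That power-plan setting can also be changed from an elevated prompt. A sketch, assuming rtcwake is the powercfg alias for the "Allow wake timers" setting (check powercfg /aliases on your system):

# Enable wake timers for the active power scheme on AC power; 1 = Enabled.
powercfg /setacvalueindex scheme_current sub_sleep rtcwake 1
powercfg /setactive scheme_current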
You can also change this from the task's Settings tab. Probably an earlier instance was already running and launching a new instance failed.
In my case, the error message "The operator or administrator has refused the request" appeared because the computer was in standby at the scheduled time (and the options "Wake the computer to run this task" and "Run task as soon as possible after a scheduled start is missed" were unchecked).
I had previously chosen "Enable All Tasks History" and a more useful error message appeared in the History tab: "Missed task start rejected: Task Scheduler did not launch task as it missed its schedule. Consider using the configuration option to start the task when available, if schedule is missed."
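Both of those checkboxes correspond to switches on New-ScheduledTaskSettingsSet, so a PowerShell sketch of this fix would be (task name again a placeholder):

$settings = New-ScheduledTaskSettingsSet -WakeToRun -StartWhenAvailable
Set-ScheduledTask -TaskName 'MyTask' -Settings $settings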
I have found what I believe to be a bizarre bug in Windows Server 2016 scheduler and maybe other Windows Server versions that produces the OP's error (and a workaround):
Here are the conditions:
You're using the "Monthly" trigger option in your task (I currently have all months selected and just a couple of days chosen, e.g. the 1st and 15th).
You have "Synchronize across time zones" selected.
This was originally an issue I found back in November 2020, when my tasks suddenly started running twice after the DST time change (a widely reported bug, but with no obvious solution). I never would have known, except that users started receiving duplicate emails from one of my tasks. In the history you would simply see the task running twice at what appeared to be exactly the same time. It worked fine before the time change. I forget all the troubleshooting I did then, but my end theory was that it was somehow confusing the time after the change. The workaround was to set the option "Synchronize across time zones", and all seemed well...
Fast forward to March, when DST just changed back again, and now every time a task with the Monthly option runs I get:
The operator or administrator has refused the request
The History tab on the task is also blank. If you change options and save, the History tab starts logging again, and then sometimes stops if the task errors again. Weird.
One workaround is to simply turn off the "Synchronize across time zones" option (tested). However, I don't recommend that option, as I assume you'll have the duplicate-run issue again when DST changes back in November.
The one time I got an error to show in the History tab it stated:
Task Scheduler did not launch task "\EmailCampaign" as it missed its
schedule. Consider using the configuration option to start the task
when available, if schedule is missed.
Therefore, I went and set that option to start the task if the schedule is missed, and all seems well. I figured I'd see the original error and then subsequently the task running, but there is no error any more either. It all just works.
I know this solution was reported above, but in those cases it was because most people's computers were asleep or something to that effect. My issue is on a production internet-facing server that doesn't sleep, hibernate or anything related, and it only happens under the specific conditions related to the Monthly trigger option. All my other tens of scheduled tasks work flawlessly.
I wrote a PowerShell script to do a task. I was getting this error and landed here (as well as at other lower-ranked search results). The task would run manually, and the first time it was triggered, but not on repeat, even though I had it set up to end the task if it took longer than a minute.
My problem was caused by not providing an exit code in my PowerShell script. Task Scheduler simply did not know the task had finished and would consider it still running. I could have simply allowed the next instance of the task to be started if the previous one was not finished, but using the exit code is the 'right way'.
So I simply added a new line at the end of my PS1:
exit
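A bare exit returns code 0; if the script can fail, returning a distinct non-zero code lets Task Scheduler surface the failure in Last Run Result. A short sketch of a script ending ($failed is a hypothetical error flag, not part of the original script):

if ($failed) { exit 1 }   # non-zero signals failure to Task Scheduler
exit 0                    # explicit success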
This topic is old, but I had the same problem on Windows Server 2016.
My task executes a BAT script that zips a folder and uploads it to an external backup.
The task never ended because there was a "pause" at the end of my script, and the task was configured with the "Do not start a new instance" setting.
I solved my problem by removing the "pause". I don't know if this will be useful to anyone else.
We have been using AWS ElastiCache for about 6 months now without any issues. Every night a Java app of ours runs, flushes DB 0 of our Redis cache, and then repopulates it with updated data. However, we had 3 instances between July 31 and August 5 where the DB was successfully flushed but we were then unable to write the new data to it.
We were getting the following exception in our application:
redis.clients.jedis.exceptions.JedisDataException:
redis.clients.jedis.exceptions.JedisDataException: READONLY You can't write against a read only slave.
When we look at the cache events in ElastiCache we can see:
Failover from master node prod-redis-001 to replica node prod-redis-002 completed
We have not been able to diagnose the issue, and since the app had been running fine for the previous 6 months, I am wondering if it is related to a recent ElastiCache release from the 30th of June.
https://aws.amazon.com/releasenotes/Amazon-ElastiCache
We have always been writing to our master node and we only have 1 replica node.
If someone could offer any insight it would be much appreciated.
EDIT: This seems to be an intermittent problem. Some days it will fail other days it runs fine.
We have been in contact with AWS support for the past few weeks and this is what we have found.
Most Redis commands are synchronous, including the flush, so it blocks all other requests. In our case we are actually flushing 19M keys, and it takes more than 30 seconds.
ElastiCache performs a health check periodically, and since the flush is running, the health check is blocked, thus causing a failover.
We have been asking the support team how often the health check is performed, so we can get an idea of why our flush only causes a failover 3-4 times a week. The best answer we could get is "We think it's every 30 seconds". However, our flush consistently takes more than 30 seconds and doesn't consistently fail.
They said that they may implement the ability to configure the timing of the health check, but that this would not be done anytime soon.
The best advice they could give us is:
1) Create a completely new cluster for loading the new data onto, and instead of flushing the previous cluster, re-point your application(s) to the new cluster and remove the old one.
2) If the data that you are flushing is an updated version of the existing data, consider not flushing, but updating and overwriting the keys instead.
3) Instead of flushing the data, set the expiry of the items to when you would normally flush, and let the keys be reclaimed (possibly with a random offset to avoid thundering-herd issues), and then reload the data.
Hope this helps :)
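To illustrate option 3: a minimal sketch from PowerShell using redis-cli (the host name and key pattern are placeholders, and it assumes redis-cli is on PATH and the relevant keys share a scannable prefix):

# Instead of one blocking FLUSHDB, give every key a TTL with random jitter
# so Redis reclaims them gradually; then reload the data afterwards.
$redisHost = 'my-redis-host'   # placeholder endpoint
$keys = redis-cli -h $redisHost --scan --pattern 'data:*'
foreach ($key in $keys) {
    $ttl = 3600 + (Get-Random -Minimum 0 -Maximum 600)   # 1 h base + up to 10 min jitter
    redis-cli -h $redisHost EXPIRE $key $ttl | Out-Null
}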
For Redis versions 6.2 and later, AWS ElastiCache has a new thread-monitoring feature, so the health check no longer happens in the same thread as all other Redis actions. Redis can keep processing a long command or Lua script and will still be considered healthy. Because of this new feature, failovers should happen less often.
I have just done a clean install of CF8 on a Windows 2000 machine. I have a scheduled task I need to run every 15 minutes on this machine, and the machine does little else.
The task is set up as normal through CF admin, but for some reason, when the task takes about 5 minutes to run, it completes fine (I can see this from the debug output and from cfstat) but the scheduler will not reschedule it.
The scheduling log shows that the task started to execute, but no entry that it was rescheduled. E.g.:
[ProcessRecords] Executing at Wed May 20 10:30:00 BST 2009
I have been over my server timeouts. I have NO timeout in CF admin and this particular script has a <cfsetting requesttimeout="43200" /> tag set. There are no exceptions in the console logging. The last bit of console logging is the very last debug statement in my .cfm template.
I do notice that tasks that run in a shorter time, say under a minute, will reschedule as normal.
Has anyone come across a problem like this before?
I'm baffled. Any and all replies are appreciated!
Cheers,
Ciaran
Not for nothing, but I've never seen anything like this with CF8. Are you sure that you have the latest hotfix and JVM installed? This might have been something in CF8 that was fixed in 8.0.1.
Hotfix 2 for CF8.0.1
List of all hotfixes and updates for CF8.0.1
Hotfix 3 for CF8
List of all hotfixes and updates for CF8
Latest JVM
Upgrade instructions for the JVM
If you suspect that it's an uncaught exception causing the issue, then might I suggest logging portions of the process. Case in point: I had a similar problem with a scheduled task where it would just bottom out for no reason (never had the reschedule problem, though). What I ended up doing to diagnose the problem was use cflog to write out portions of the process as they completed. This particular task took about 4 minutes to complete but ran through about 200 portions (it was a mass emailer for a bunch of clients).
I logged when each portion started and completed, along with how long it took. By doing so, I could see which portion would trip up the whole process and knew where to focus my attention.