Is there a best way to interrupt a running SAS EG process? The only way I know is to press Cancel:
Pressing Cancel only works some of the time, maybe 15% of the time. Usually I am not even able to press the button, and when I can, it takes a long time to cancel the process.
The second way I know is to shut down SAS EG, but this feels like an unnecessarily drastic way to interrupt a running process. If I have forgotten to save, I have to do everything over again when I use this method.
To sum it up: is there a proper way to stop my SAS EG process from running when I make a mistake?
Related
I have a performance-sensitive program that I would like to run as stably as possible, so I want to disable/suspend MsMpEng.exe, among a few others, on Windows 10 when my program starts. When the program finishes, I'd like to restore normal function.
I have tried directly suspending the process using resmon.exe (Resource Monitor), and it suspends... but 20-30 seconds later, the entire system just stops. I assume this is some form of self-protection... so at the very least, I'd have to suspend and resume in a timed loop.
Thoughts? Is it even worth the trouble?
EDIT: I gave it some thought and ran some test cases; just adjusting process priority isn't quite enough, but it's better than nothing. I'll just recommend that people disable their virus protection if they encounter slowdowns, unless anyone else has suggestions.
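If it helps anyone trying the priority route, here is a minimal sketch of raising your own process's priority on Windows. I use HIGH_PRIORITY_CLASS rather than REALTIME_PRIORITY_CLASS, since the latter can itself starve the system; the work section is a placeholder:

#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* Raise our own priority for the duration of the run. */
    if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS))
        printf("SetPriorityClass failed: %lu\n", GetLastError());

    /* ... performance-sensitive work ... */

    return 0;
}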
I have embedded Lua in a C/C++ application. I want to be able to set a timeout value to prevent getting trapped by badly written scripts that can result in infinite loops (or even string searches that take a practically infinite time to complete).
Basically, I want to be able to set a time interval and if the script fails to complete running at the end of that time interval, I want to be able to kill the Lua script engine (gracefully, if possible).
Does anyone know of a best-practice way to do this?
One way to control the amount of time a script takes is to set a count hook and then raise an error in the hook. But this does not work if the script can call C functions that may take a long time.
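For reference, a minimal sketch of that count-hook approach using the Lua 5.x C API. The instruction count and error message are arbitrary, run_with_timeout is an illustrative helper, and as noted it cannot interrupt a long-running C function:

#include <time.h>
#include "lua.h"
#include "lauxlib.h"

static time_t deadline;

/* Called every 10000 VM instructions; raises an error past the deadline. */
static void timeout_hook(lua_State *L, lua_Debug *ar)
{
    (void)ar;
    if (time(NULL) > deadline)
        luaL_error(L, "script timed out");
}

int run_with_timeout(lua_State *L, const char *script, int seconds)
{
    deadline = time(NULL) + seconds;
    lua_sethook(L, timeout_hook, LUA_MASKCOUNT, 10000);
    return luaL_dostring(L, script);   /* nonzero on error, incl. timeout */
}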
I'm trying to test a Bash script which copies files individually and does some stuff to each file. It is meant to be resumable, so I'd like to make sure to test this properly. What is an elegant solution to kill or otherwise abort the script which does the copying from the test script, making sure it does not have time to copy and process all the files?
I have the PID of the child process, I can change the source code of both scripts, and I can create arbitrarily large files to test on.
Clarification: I start the script in the background with &, get the PID as $!, then I have a loop which checks that there is at least one file in the target directory (the test script copies three files). At that point I run kill -9 $PID, but the process is not interrupted: the files are all copied successfully. This happens even if the files are big enough that creating them (with dd and /dev/urandom) takes a couple of seconds.
Could it be that the files only become visible to the shell once cp has finished? That would be a bit strange, but it would explain why the kill command arrives too late.
Also, the idea is not to test resuming the same process, but cutting off the first process (simulate a system crash) and resuming with another invocation.
Send a KILL signal to the child process:
kill -KILL $childpid
You can try and play the timing game by using large files and sleeps, but you may have issues with the repeatability of the test.
You can add throttling code to the script you're testing and then just throttle it all the way down. You can implement the throttling by passing in a value which is:
a sleep value for sleeping in the loop
the number of files to process
the number of seconds after which the script will die
a nice value to execute the script at
Some of these may work better or worse from a testing point of view. nice'ing may give you variable results, as will setting up a background process to kill your script after N seconds. You can also try more than one of these at the same time, which may give you the control you want. For example, accepting both a sleep value and the kill-after seconds could give you fine-grained throttling control.
Actually, this is what I want to do:
When a condition appears, my program will close itself, and after five minutes it will reopen.
Is it possible with only one .exe, by using some OS facility?
Currently I do it with two .exes:
if (close_condition) {
    // call the secondary program, which will restart us
    system("secondary.exe");
    return 0;
}
and my secondary program just waits for five minutes and calls the primary one.
#include <stdlib.h>
#include <windows.h>

int main() {
    Sleep(300000);   // sleep for five minutes
    system("primary.exe");
    return 0;
}
I want to do it without the secondary program.
(Sorry for my poor English.)
You can do it with one application that simply has different behaviour if a switch is given (say myapp.exe /startme).
system() is a synchronous call, by the way: it only returns when the command it runs has finished. In Win32, CreateProcess() is what you are looking for.
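A rough sketch of the single-exe idea combining both points: the program relaunches itself via CreateProcess() with the /startme switch, and the restarted instance sleeps five minutes first. Here close_condition stands in for your own logic, an ANSI build is assumed, and a real version should quote the path if it can contain spaces:

#include <string.h>
#include <windows.h>

int main(int argc, char *argv[])
{
    if (argc > 1 && strcmp(argv[1], "/startme") == 0)
        Sleep(300000);   /* restarted instance: wait five minutes first */

    /* ... normal program work ... */

    int close_condition = 1;   /* placeholder for your actual condition */
    if (close_condition) {
        char cmdline[MAX_PATH + 16];
        STARTUPINFO si = { sizeof(si) };
        PROCESS_INFORMATION pi;

        GetModuleFileName(NULL, cmdline, MAX_PATH);   /* our own path */
        strcat(cmdline, " /startme");

        /* Unlike system(), CreateProcess returns immediately. */
        if (CreateProcess(NULL, cmdline, NULL, NULL, FALSE, 0,
                          NULL, NULL, &si, &pi)) {
            CloseHandle(pi.hProcess);
            CloseHandle(pi.hThread);
        }
    }
    return 0;
}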
You can also just follow Jay's suggestion of letting the OS schedule your job using NetScheduleJobAdd().
But, depending on what you're trying to achieve, a better solution might be to simply hide your application for 5 minutes.
I think you'd have to use the system task scheduler to schedule the re-launch, which in a sense is using another application, but one that is part of the OS.
I'm sure this can be done, but frankly I think you should just stick with your current setup.
I am facing a strange issue on Windows CE:
I am running 3 EXEs:
1) The first EXE does some work every 8 minutes unless an exit event is signaled.
2) The second EXE does some work every 5 minutes unless the exit event is signaled.
3) The third EXE runs a while loop, and inside that loop it does some work at random times. This while loop continues until the exit event is signaled.
Now this exit event is a global event and can be signaled by any process.
The problem is:
When I run the first EXE alone, it works fine;
run the second alone, it works fine;
run the third alone, it works fine.
But when I run all the EXEs together, only the third EXE runs and no instructions get executed in the first and second.
As soon as the third EXE terminates, the first and second start processing.
Can it be the case that the while loop in the third EXE is taking all the CPU cycles?
I haven't tried putting in a Sleep, but I think that could do the trick.
But the OS should give CPU time to all processes...
Any thoughts?
Put the while loop in the third EXE to sleep each time through the loop and see what happens. Even if it doesn't fix this particular problem, it is never good practice to poll with a while loop, and even using Sleep inside a loop is a poor substitute for a proper timer.
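Something like this, with the work function and exit flag standing in for whatever the question's third EXE actually does:

#include <windows.h>

volatile BOOL g_exit = FALSE;   /* stands in for the question's exit event */

void do_work(void)
{
    /* ... the third EXE's work, done at its random times ... */
}

int main(void)
{
    while (!g_exit) {
        do_work();
        Sleep(100);   /* yield the CPU so the other two EXEs can run */
    }
    return 0;
}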
On MSDN, I also read that CE allows at most 32 simultaneous processes. (However, the context switches are lightning fast...) Some of those slots are already taken by system services.
(From memory) Processes in Windows CE run until completion if there are no higher-priority processes running, or they run for their time slice (100 ms) if there are other processes of equal priority running. I'm not sure whether Windows CE gives the process with the active/foreground window a small priority boost (just like desktop Windows) or not.
In your situation the first two processes are starved of processor time so they never run until the third process exits. Some ways to solve this are:
Make the third process wait/block on some cross-process primitive (mutex, semaphore, etc.) with a short timeout, using WaitForSingleObject/WaitForMultipleObjects etc.; see the sketch after this list.
Make the third process wait using a call to Sleep every time around the processing loop.
Boost the priority of the other processes so that when they need to run they will preempt the third process and actually run. I would probably give the least often run process the highest priority of the three.
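Here is what the first option might look like, assuming the exit event is a named event. The name "ExitEvent" is an assumption; substitute whatever name your processes actually share:

#include <windows.h>

int main(void)
{
    /* Open (or create) the named event that any process can signal. */
    HANDLE hExit = CreateEvent(NULL, TRUE, FALSE, TEXT("ExitEvent"));

    for (;;) {
        /* Block for up to 100 ms instead of spinning; this hands the
           CPU back to the first and second EXEs. */
        if (WaitForSingleObject(hExit, 100) == WAIT_OBJECT_0)
            break;   /* exit event was signaled by some process */

        /* ... do the third EXE's work at its random times ... */
    }

    CloseHandle(hExit);
    return 0;
}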
The other thing to check is that the third process does actually complete its tasks in time, and does not peg the CPU trying to do its thing normally.
Yeah, I think that is not a good solution. I may try to use a timer and see the results.