Why does the Windows Service Control Manager take minutes to process the request? - c++

In my source code, the following lines issue "sc stop myservice" and "sc start myservice":
system(stopcmd.c_str());
Sleep(10*1000);
system(startcmd.c_str());
Sleep(10*1000);
But I notice that the service (myservice in the commands) actually receives the SCM event several minutes after the code above has executed.
What can cause this situation? Does it mean the SCM is busy and cannot respond to the request in a timely manner, or could something else be the problem?

Your Sleep is only 10 seconds. My guess is that you have a do/while loop somewhere. First kill all the services, then trace your code; be suspicious of that do/while.
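To rule out the extra cmd/sc.exe layer, one option is to talk to the SCM directly and poll the reported state instead of sleeping a fixed ten seconds. A minimal sketch, assuming the service is really named myservice and the caller has the required rights (error handling abbreviated):

// Minimal sketch: stop and restart a service through the SCM API directly,
// polling the reported state instead of sleeping a fixed interval.
#include <windows.h>
#include <cstdio>
#pragma comment(lib, "advapi32.lib")

static bool WaitForState(SC_HANDLE svc, DWORD desired, DWORD timeoutMs)
{
    SERVICE_STATUS_PROCESS ssp = {};
    DWORD bytes = 0;
    const DWORD start = GetTickCount();
    while (QueryServiceStatusEx(svc, SC_STATUS_PROCESS_INFO,
                                reinterpret_cast<LPBYTE>(&ssp), sizeof(ssp), &bytes))
    {
        if (ssp.dwCurrentState == desired) return true;
        if (GetTickCount() - start > timeoutMs) return false;
        Sleep(250);
    }
    return false;
}

int main()
{
    SC_HANDLE scm = OpenSCManagerW(nullptr, nullptr, SC_MANAGER_CONNECT);
    if (!scm) { std::printf("OpenSCManager failed: %lu\n", GetLastError()); return 1; }

    SC_HANDLE svc = OpenServiceW(scm, L"myservice",
                                 SERVICE_STOP | SERVICE_START | SERVICE_QUERY_STATUS);
    if (!svc) { std::printf("OpenService failed: %lu\n", GetLastError()); CloseServiceHandle(scm); return 1; }

    SERVICE_STATUS status = {};
    if (ControlService(svc, SERVICE_CONTROL_STOP, &status))
        WaitForState(svc, SERVICE_STOPPED, 30 * 1000);   // wait until it really reports stopped

    if (StartServiceW(svc, 0, nullptr))
        WaitForState(svc, SERVICE_RUNNING, 30 * 1000);   // wait until it reports running

    CloseServiceHandle(svc);
    CloseServiceHandle(scm);
    return 0;
}

If the delay still shows up even with the direct API calls, that would point at the service's own control handler rather than at how the commands are issued.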

log4cpp stops working properly after sometime

I have a log4cpp implementation in a multi-process environment. The logger is configured once during initialization and then shared among forked processes which serve HTTP requests.
During the first minute or so, I see the log roll perfectly fine at the query-per-second load (say it runs at 100 QPS).
After that, logging slows down dramatically. So I logged the PID as well and noticed that only one process gets to write to the log for a stretch (around 10-15 seconds), then another process starts writing, and so on. The processes don't die; they just don't get a chance to write.
This is different from what happens when the server starts; at that time, every other log line is written by a different process. (Also, I write one log line per process at the end of serving each request.)
At this point, I can't think of what could be going wrong.
This is how my log4cpp conf file looks:
log4cpp.rootCategory=DEBUG,rootAppender
log4cpp.appender.rootAppender=org.apache.log4cpp.RollingFileAppender
log4cpp.appender.rootAppender.fileName=/tmp/mylogfile.log
log4cpp.appender.rootAppender.layout=org.apache.log4cpp.PatternLayout
log4cpp.appender.rootAppender.layout.ConversionPattern=%d|%p|%m%n
log4cpp.category.http.server.main=INFO,MAIN
log4cpp.additivity.http.server.main=false
log4cpp.appender.MAIN=org.apache.log4cpp.RollingFileAppender
log4cpp.appender.MAIN.maxBackupIndex=10
log4cpp.appender.MAIN.maxFileAge=1
log4cpp.appender.MAIN.append=true
log4cpp.appender.MAIN.fileName=/tmp/mylogfile.log
log4cpp.appender.MAIN.layout=org.apache.log4cpp.PatternLayout
log4cpp.appender.MAIN.layout.ConversionPattern=%d|%p|%m%n
Edit: more updates. Thanks @Botje for your time.
I see that whenever a new child process is created, only that process gets to write to the log. That tells me that all the references the other processes were holding have become invalid.
I also tried setting the additivity property to true. With that, the server starts out writing properly into /tmp/myfile.log, then switches to writing into /tmp/myfile.log.1 within a minute, and then stops writing after a minute.
At that point the logs get directed to stderr, which is redirected to another log file.
Also,
I did notice that the log4cpp FileAppender uses seek to determine the file size before writing log entries. If the file handle is shared between processes, that will cause writes to end up at the start of the file instead of the end. Even if you fix that, you still have multiple processes that think they are in charge of log file rotation.
I suggest you have all processes write to a common UDP/TCP/Unix socket and designate one process that collects all log entries and actually writes them to a file. You don't have to reinvent the wheel: you can use the syslog protocol and either the system syslog or a copy running in userspace.
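A minimal sketch of that single-writer idea, using the standard syslog(3) interface so the system's syslog daemon owns the file and its rotation (the program name "httpserver" and the fork layout are just placeholders for your setup):

// Forked workers log through syslog(3) instead of a shared RollingFileAppender,
// so exactly one daemon writes the log file and handles rotation.
#include <syslog.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    // LOG_PID stamps every line with the writing process's pid,
    // which replaces the hand-rolled pid logging described above.
    openlog("httpserver", LOG_PID, LOG_DAEMON);

    for (int i = 0; i < 4; ++i) {
        if (fork() == 0) {                       // child: pretend to serve requests
            for (int req = 0; req < 3; ++req)
                syslog(LOG_INFO, "served request %d", req);
            closelog();
            _exit(0);
        }
    }

    while (wait(nullptr) > 0) {}                 // parent: reap the children
    closelog();
    return 0;
}

Whether the collector is the system syslog or a small userspace process listening on a Unix datagram socket, the point is the same: the forked workers never touch the log file or its rotation state directly.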

Azure Batch Job sequential execution not working

We are using an Azure WebJob for batch processing; the job is triggered when there is a message in the storage queue.
We have configured the job to process the messages one by one:
JobHostConfiguration config = new JobHostConfiguration();
config.Queues.BatchSize = 1;
config.Queues.MaxDequeueCount = 1;
Even so, the job is taking multiple messages from the storage queue and executing them in parallel.
Please help.
"taking multiple messages from the storage queue and executing them in parallel"
How did you judge that it takes multiple messages and executes them in parallel? Do you have multiple instances?
I tested the code in different situations:
1) The normal situation, without setting the batch size: it will pull all the messages in the queue. I think it still runs them one by one, but from the result it looks like it won't wait for the previous run to finish completely. Here is the result.
2) Set the batch size to 1: if you debug the code or refresh the queue frequently, you will find it does pull one message at a time and run it. And here is the result.
3) Set the batch size to three and debug: it just changes the number of messages pulled. Each time it will pull 3 messages, then run as in the normal case without setting the batch size. Here is the result. And I found that if you just run without debugging, the order shown in the console is very organized.
So if you don't have another instance running, I think this is working in sequential mode.
If this doesn't match your requirements or you still have questions, please let me know.
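One more thing worth checking, since the answer above asks about multiple instances: if the App Service plan is scaled out, each instance runs its own copy of the WebJobs SDK listener, so messages can be processed in parallel across instances even with BatchSize = 1. For a continuous WebJob, Kudu's settings.job file can pin the job to a single instance; as far as I recall the key is is_singleton, but verify the exact name against the Kudu documentation before relying on it:

{
  "is_singleton": true
}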

Collect dumps during Windows power events

My application is a Win32 service which is registered for OS power events. When the application gets the event callback, operations are performed depending on the type of power event. During the suspend/sleep event, our application tries to do specific networking jobs within 10 seconds before acknowledging to the OS that it can continue with the suspend operation.
Under normal circumstances, my application does its job successfully within 10 seconds. But sometimes the operations take too long and fail to complete within 10 seconds. The application uses various Win32 API functions, and I want to check which specific operation is causing the delay. Is there any way to capture a dump of the process when it hangs completing these jobs while the system is going to suspend?
I tried ProcMon, ProcExplorer and ProcDump, but they didn't help. Any ideas on troubleshooting the issue other than adding logs to the code? Thanks.
The WinDbg Preview tool, available in the Microsoft Store, is very helpful for capturing traces of any running process. My use case is to capture a trace of my process while the OS hibernates. The trace can then be used to time-travel debug my application at the point the OS was actually hibernating. A very useful tool from Microsoft; there are tutorials on Channel 9 on how to use it. Happy learning :)
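If attaching an external tool right at suspend time proves impractical, another option is to have the service write its own minidump from a watchdog thread when the 10-second window is about to expire. A rough sketch using DbgHelp's MiniDumpWriteDump; the dump path, the deadline and the simulated work are illustrative only:

// Rough sketch: a watchdog thread writes a minidump of the current process
// if the suspend-time work has not signalled completion within a deadline.
#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

static HANDLE g_workDone;

static DWORD WINAPI Watchdog(LPVOID)
{
    if (WaitForSingleObject(g_workDone, 8 * 1000) == WAIT_TIMEOUT) {
        HANDLE file = CreateFileW(L"C:\\dumps\\suspend_hang.dmp", GENERIC_WRITE, 0,
                                  nullptr, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file != INVALID_HANDLE_VALUE) {
            MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                              MiniDumpWithThreadInfo, nullptr, nullptr, nullptr);
            CloseHandle(file);
        }
    }
    return 0;
}

int main()
{
    // In the real service this would run inside the power-event callback.
    g_workDone = CreateEventW(nullptr, TRUE, FALSE, nullptr);
    HANDLE wd = CreateThread(nullptr, 0, Watchdog, nullptr, 0, nullptr);

    Sleep(12 * 1000);        // stand-in for the networking jobs overrunning the window
    SetEvent(g_workDone);    // too late here, so the watchdog has already written a dump

    WaitForSingleObject(wd, INFINITE);
    CloseHandle(wd);
    CloseHandle(g_workDone);
    return 0;
}

The resulting .dmp file can then be opened in WinDbg after the machine resumes to see which call each thread was sitting in when the deadline passed.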

Calabash: Increase wait timeout for predefined step

Then I wait to see "Choose a drink:#0"
# calabash-cucumber-0.18.0/features/step_definitions/calabash_steps.rb:154
execution expired (Calabash::Cucumber::WaitHelpers::WaitError)
While using predefined steps, it works fine locally. However, it fails in CI. There are no network calls made; it's just a transition to a new screen.
Any ideas on how to increase the wait timeout for the predefined steps will be appreciated.
The best thing to do would be to write your own step.
That said, if you look at the predefined step you can see that it honors WAIT_TIMEOUT, so you can increase the wait time from the environment:
$ WAIT_TIMEOUT=120 bundle exec cucumber
Still, I think writing your own step is the better option.

What happens to running processes on a continuous Azure WebJob when website is redeployed?

I've read about graceful shutdowns here using the WEBJOBS_SHUTDOWN_FILE and here using cancellation tokens, so I understand the premise of graceful shutdowns; however, I'm not sure how they will affect WebJobs that are in the middle of processing a queue message.
So here's the scenario:
I have a WebJob with functions listening to queues.
Message is added to Queue and job begins processing.
While processing, someone pushes to develop, triggering a redeploy.
Assuming I have my WebJobs hooked up to deploy on git pushes, this deploy will also trigger the WebJobs to be updated, which (as far as I understand) will kick off some sort of shutdown workflow in the jobs. So I have a few questions stemming from that.
Will jobs in the middle of processing a queue message finish processing the message before the job quits? Or is any shutdown notification essentially treated as "this bitch is about to shutdown. If you don't have anything to handle it, you're SOL."
If we are SOL, is our best option for handling shutdowns essentially to wrap anything you're doing in the equivalent of DB transactions and implement your shutdown handler in such a way that all changes are rolled back on shutdown?
If a queue message is in the middle of being processed and the WebJob shuts down, will that message be requeued? If not, does that mean that my shutdown handler needs to handle requeuing that message?
Is it possible for functions listening to queues to grab any more queue messages after the Job has been notified that it needs to shutdown?
Any guidance here is greatly appreciated! Also, if anyone has any other useful links on how to handle job shutdowns besides the ones I mentioned, it would be great if you could share those.
After no small amount of testing, I think I've found the answers to my questions and I hope someone else can gain some insight from my experience.
NOTE: All of these scenarios were tested using .NET Console Apps and Azure queues, so I'm not sure how blobs or table storage, or different types of Job file types, would handle these different scenarios.
1) After a Job has been marked to exit, the triggered functions that are running get the configured grace period (5 seconds by default, but I think that is configurable via a settings.job file; see the sketch after this list) to finish before they are exited. If they do not finish within the grace period, the function quits. Main() (or wherever you declared host.RunAndBlock()), however, will keep running any code after host.RunAndBlock() for up to the amount of time remaining in the grace period (I'm not sure how that would work if you used an infinite loop instead of RunAndBlock). As far as handling the quit in your functions goes, you can "listen" to the CancellationToken that you can pass into your triggered functions, check IsCancellationRequested, and handle it accordingly. Also, you are not SOL if you don't handle the quits yourself. Huzzah! See point #3.
2) While you are not SOL if you don't handle the quit (see point #3), I do think it is a good idea to wrap all of your jobs in transactions that you don't commit until you're absolutely sure the job has run its course. That way, if your function exits mid-process, you're less likely to have to worry about corrupted data. I can think of a couple of scenarios where you might want to commit transactions as you go (batch jobs, for instance), but then you would need to structure your data or logic so that previously processed entities aren't reprocessed after the job restarts.
3) You are not in trouble if you don't handle job quits yourself. My understanding of what's going on under the covers is virtually non-existent; however, I am quite sure of the results. If a function is in the middle of processing a queue message and is forced to quit before it can finish, HAVE NO FEAR! When the job grabs a message to process, it essentially hides it on the queue for a certain amount of time. If your function quits while processing the message, that message will "become visible" again after x amount of time, and it will be re-grabbed and run against the potentially updated code that was just deployed.
4) I have about 90% confidence in my findings for this one, because testing it involved quick-switching between windows while not being totally sure what was going on with certain pieces. But here's what I found: on the off chance that a queue has a new message added to it during the grace period before a job quits, I THINK one of two things can happen. If the function doesn't poll that queue before the job quits, the message stays on the queue and is grabbed when the job restarts. If the function DOES grab the message, it is treated the same as any other interrupted message: it will "become visible" on the queue again and be rerun after the job restarts.
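For reference, here is roughly what the settings.job file mentioned in point #1 could look like. As far as I remember, the key that extends the graceful shutdown window is stopping_wait_time, given in seconds, but verify the exact name against the Kudu WebJobs documentation before relying on it:

{
  "stopping_wait_time": 60
}

The file sits in the WebJob's own folder, next to its executable.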
That pretty much sums it up. I hope other people will find this useful. Let me know if you want any of this expounded on and I'll be happy to try. Or if I'm full of it and you have lots of corrections, those are probably more welcome!