The MSDN documentation says:
"The ServiceMain function should create a global event, call the RegisterWaitForSingleObject function on this event, and exit. This will terminate the thread that is running the ServiceMain function, but will not terminate the service..."
So the question is: should a new thread be created inside the ServiceMain function to execute the service code, or can I simply set the service to the RUNNING state and use the ServiceMain thread to run the service code? If the ServiceMain thread is used to run the service code, will the SCM remain locked, even though the service state is set to RUNNING?
I do not think the way of implementing services described by that statement from MSDN is the only possible way. That would contradict the MSDN service example at http://msdn.microsoft.com/en-us/library/windows/desktop/bb540476(v=vs.85).aspx . In that example the service waits for events in the same thread that called ServiceMain. This way is probably better for simple services that work just fine with a single thread.
If you choose the RegisterWaitForSingleObject way, you do not have to create threads explicitly. The MSDN page for RegisterWaitForSingleObject says: "New wait threads are created automatically when required." You do have to open the I/O channels your service is going to monitor and bind their handles to the thread pool before exiting ServiceMain.
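To make that concrete, here is a minimal sketch of the RegisterWaitForSingleObject pattern under stated assumptions: the service name MyService, the handler SvcCtrlHandler, and the callback OnStopSignaled are placeholder names rather than anything from the question, and error handling is omitted.

#include <windows.h>

static SERVICE_STATUS_HANDLE g_StatusHandle;  // returned by RegisterServiceCtrlHandler
static HANDLE g_StopEvent;                    // the global event from the MSDN quote
static HANDLE g_WaitHandle;                   // returned by RegisterWaitForSingleObject

// Thread-pool callback: runs when g_StopEvent is signaled.
static VOID CALLBACK OnStopSignaled(PVOID /*context*/, BOOLEAN /*timedOut*/)
{
    SERVICE_STATUS status = { SERVICE_WIN32_OWN_PROCESS, SERVICE_STOPPED };
    SetServiceStatus(g_StatusHandle, &status);
}

// Control handler: signals the event when the SCM sends SERVICE_CONTROL_STOP.
static VOID WINAPI SvcCtrlHandler(DWORD control)
{
    if (control == SERVICE_CONTROL_STOP)
        SetEvent(g_StopEvent);
}

VOID WINAPI ServiceMain(DWORD /*argc*/, LPWSTR * /*argv*/)
{
    g_StatusHandle = RegisterServiceCtrlHandlerW(L"MyService", SvcCtrlHandler);
    g_StopEvent = CreateEventW(NULL, TRUE, FALSE, NULL);

    // Bind the event to the thread pool; no worker thread is created explicitly.
    RegisterWaitForSingleObject(&g_WaitHandle, g_StopEvent,
                                OnStopSignaled, NULL, INFINITE, WT_EXECUTEONLYONCE);

    SERVICE_STATUS status = { SERVICE_WIN32_OWN_PROCESS, SERVICE_RUNNING, SERVICE_ACCEPT_STOP };
    SetServiceStatus(g_StatusHandle, &status);

    // ServiceMain returns here. Its thread ends, but the service keeps running
    // because the wait registration (and any I/O bound to the pool) keeps working.
}

int wmain()
{
    SERVICE_TABLE_ENTRYW table[] = { { const_cast<LPWSTR>(L"MyService"), ServiceMain },
                                     { NULL, NULL } };
    StartServiceCtrlDispatcherW(table);  // blocks until every service in the process stops
    return 0;
}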
MSDN says: "The Service Control Manager (SCM) waits until the service reports a status of SERVICE_RUNNING. It is recommended that the service reports this status as quickly as possible, as other components in the system that require interaction with SCM will be blocked during this time."
The control dispatcher creates a new thread to execute the ServiceMain function for the service. The ServiceMain function should perform the following tasks.
5. Perform the service tasks, or, if there are no pending tasks, return control to the caller. Any change in the service state warrants a call to SetServiceStatus to report new status information.
From this it follows that you can perform more complex initialization tasks inside the ServiceMain function, such as creating additional threads.
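Under the same assumptions as the sketch above (placeholder names, dispatcher entry point omitted), the worker-thread variant of those steps could look roughly like this: report SERVICE_START_PENDING, initialize, start a worker thread, and report SERVICE_RUNNING as quickly as possible so the SCM is not kept waiting.

#include <windows.h>

static SERVICE_STATUS_HANDLE g_StatusHandle;
static HANDLE g_StopEvent;

// Helper (illustrative): report a state change to the SCM.
static void ReportState(DWORD state, DWORD waitHintMs = 0)
{
    SERVICE_STATUS status = {};
    status.dwServiceType      = SERVICE_WIN32_OWN_PROCESS;
    status.dwCurrentState     = state;
    status.dwControlsAccepted = (state == SERVICE_START_PENDING) ? 0 : SERVICE_ACCEPT_STOP;
    status.dwWaitHint         = waitHintMs;
    SetServiceStatus(g_StatusHandle, &status);
}

// Worker thread (illustrative): the actual service work lives here.
static DWORD WINAPI WorkerThread(LPVOID)
{
    while (WaitForSingleObject(g_StopEvent, 5000) == WAIT_TIMEOUT)
    {
        // ... periodic service work ...
    }
    return 0;
}

static VOID WINAPI SvcCtrlHandler(DWORD control)
{
    if (control == SERVICE_CONTROL_STOP)
    {
        ReportState(SERVICE_STOP_PENDING, 10000);
        SetEvent(g_StopEvent);  // the worker notices the event and exits
    }
}

VOID WINAPI ServiceMain(DWORD, LPWSTR *)
{
    g_StatusHandle = RegisterServiceCtrlHandlerW(L"MyService", SvcCtrlHandler);

    ReportState(SERVICE_START_PENDING, 3000);  // tell the SCM we are still initializing

    g_StopEvent = CreateEventW(NULL, TRUE, FALSE, NULL);
    HANDLE worker = CreateThread(NULL, 0, WorkerThread, NULL, 0, NULL);

    ReportState(SERVICE_RUNNING);              // unblocks the SCM and anything waiting on it

    WaitForSingleObject(worker, INFINITE);     // here the ServiceMain thread simply waits
    CloseHandle(worker);
    CloseHandle(g_StopEvent);
    ReportState(SERVICE_STOPPED);
}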
Guidance for creating Multithreaded Services.
I am using Camunda workflows to automate various processes. I have come across a scenario where the process is not moving from a service task. Usually, we call the task/{taskid}/complete to complete the task, but since the process is stuck on a service task, I am not able to complete that task. Can anybody help me find a way to complete the service task?
You are using a service task. That basically means "a machine should do something". The "normal" implementation is to provide code (a Java delegate or a connector endpoint) that is called by the process engine to execute this task.
The alternative is to use the "external task" pattern. Think of external tasks as "user tasks for computers". So the process waits, tells subscribed clients that a job is to be done, and waits for their completion.
I suppose your process uses the second option? (you can check in the modeler under "Implementation"). So completion can be done through the external task API, see docs.
/external-task/{id}/complete
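As an illustration only, a call to that endpoint from C++ with libcurl might look like the sketch below. The base URL, task id, and worker id are placeholders, and the workerId has to match the one used when the task was fetched and locked; Camunda answers 204 No Content on success.

#include <curl/curl.h>
#include <string>

// Completes one locked external task through Camunda's REST API.
// Call curl_global_init(CURL_GLOBAL_DEFAULT) once at program start.
bool completeExternalTask(const std::string &baseUrl,   // e.g. "http://localhost:8080/engine-rest"
                          const std::string &taskId,
                          const std::string &workerId)
{
    CURL *curl = curl_easy_init();
    if (!curl) return false;

    const std::string url  = baseUrl + "/external-task/" + taskId + "/complete";
    const std::string body = "{\"workerId\": \"" + workerId + "\"}";

    struct curl_slist *headers = curl_slist_append(nullptr, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());  // makes this a POST

    const CURLcode rc = curl_easy_perform(curl);

    long httpStatus = 0;
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &httpStatus);

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);

    return rc == CURLE_OK && httpStatus == 204;
}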
If it is a connector, then when checking the log you will likely see that retries have occurred and that the transaction rolled back. After addressing the underlying issue, the service task (email) should run without you explicitly triggering it, and the following user task (Approval) should be created.
I'm trying to create a simple Camunda BPM workflow with a parallel gateway and compensating actions like this:
All the Service Tasks are configured as external tasks that are executed by a C# program. This program calls the fetchAndLock method on the API to get a list of tasks to execute and then executes these tasks in parallel in the background. I'm experiencing some problems with this approach:
The lock from the fetchAndLock method doesn't seem to do anything, and whenever one of the tasks is completed with a bpmnError the workflow engine does not seem to wait until all the fetched tasks are handled. Instead it immediately plans the execution of the compensating actions for the tasks for which it has already received a complete call, and deletes the instances of all the other planned tasks without waiting for their results.
This results in the following problems:
The C# program continues to execute the unfinished tasks and when they complete it tries to call the complete method on the API, but that fails with a 404 error because the Camunda engine already deleted these task instances.
The compensating actions for these tasks are never called, which leaves the business process in an invalid state.
What am I doing wrong? I am not very familiar with BPMN so maybe there is a better way to design this process. Or is this a major bug in Camunda?
My assumption is that after the parallel gateway two errors are thrown, which triggers the event subprocess twice. You can try using a terminate end event in the event subprocess.
I created a C++ service application and installed the service. When I try to stop the service, a warning is shown.
The service contains just one implemented method:
void __fastcall TService1::ServiceExecute(TService *Sender)
{
    bool a = true;
    while (a) {
        sleep(5);
        writeLog("test \n");
    }
}
How can I stop the service by brute force, so that it does not show the warning?
The correct way to stop a service program is to implement a Service Control Handler Function, which can respond to the SERVICE_CONTROL_STOP control code. Usually the handler will set a stop event, which will cause the ServiceMain loop to exit (example here).
If you used a template to generate your service application, you may already have the stub interfaces implemented to do this. If not, you would need to implement them.
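For C++Builder's TService specifically, the framework already installs the control handler for you; what the ServiceExecute loop in the question is missing is a check of the Terminated flag (together with a call to ServiceThread->ProcessRequests) so the loop can actually exit when the stop request arrives. A rough sketch, reusing the question's writeLog helper:

void __fastcall TService1::ServiceExecute(TService *Sender)
{
    // Loop until the SCM asks the service to stop.
    while (!Terminated)
    {
        ServiceThread->ProcessRequests(false);  // let pending control requests be handled
        Sleep(5);                               // Win32 Sleep (milliseconds); the question's
                                                // lowercase sleep(5) may be a project wrapper
        writeLog("test \n");
    }
}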
Implementing a native Windows Service is not trivial, and cannot be done with just a single executable method. A service application must implement the interfaces used by the Service Control Manager (SCM), such as the Service Entry Point, the Service ServiceMain Function, and a Service Control Handler Function. It must also report status and state changes to the SCM. See complete examples from Microsoft here and here.
MSDN reading reference: Service Programs
If you don't want to implement all of the required SCM interfaces and logic, there are options available from both Microsoft and third parties to run any executable as a service. See Microsoft's srvany.exe method, or Iain Patterson's NSSM for example.
I am trying to control a service from within an application. Starting the service via StartService (MSDN) works fine; the service needs about 10 seconds to start, but StartService returns control to the main application immediately.
However, when stopping the service via ControlService (MSDN) - AFAIK there is no StopService - it blocks the main application for the entire time until the service is stopped, which takes about 10 seconds.
Start: StartServiceW( handle, 0, NULL)
Stop: ControlService( handle, SERVICE_CONTROL_STOP, status )
Is there a way to stop a Windows service in a non-blocking / asynchronous way?
I would probably look at stopping the service in a new thread. That will eliminate the blocking of your main thread.
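A minimal sketch of that approach, assuming the caller knows the service name (MyService below is a placeholder) and that a helper like StopServiceAsync is something you add yourself:

#include <windows.h>
#include <string>
#include <thread>

// Issues the stop request on a background thread so the caller is not blocked
// while the SCM waits for the service's control handler to finish.
void StopServiceAsync(const wchar_t *serviceName)
{
    std::thread([name = std::wstring(serviceName)]() {
        SC_HANDLE scm = OpenSCManagerW(NULL, NULL, SC_MANAGER_CONNECT);
        if (!scm) return;

        SC_HANDLE svc = OpenServiceW(scm, name.c_str(), SERVICE_STOP | SERVICE_QUERY_STATUS);
        if (svc)
        {
            SERVICE_STATUS status = {};
            ControlService(svc, SERVICE_CONTROL_STOP, &status);  // may block, but only this thread
            CloseServiceHandle(svc);
        }
        CloseServiceHandle(scm);
    }).detach();  // fire and forget
}

StopServiceAsync(L"MyService") returns immediately; if you need to know when the stop actually finished, keep the thread joinable and poll QueryServiceStatusEx from it instead of detaching.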
The SCM processes control requests in a serialized manner. If any service is busy processing a control request, ControlService() will be blocked until the SCM can process the new request. This is stated as much in the documentation:
The SCM processes service control notifications in a serial fashion—it will wait for one service to complete processing a service control notification before sending the next one. Because of this, a call to ControlService will block for 30 seconds if any service is busy handling a control code. If the busy service still has not returned from its handler function when the timeout expires, ControlService fails with ERROR_SERVICE_REQUEST_TIMEOUT.
The service is doing its cleanup in its control handler routine. That's OK for a service that will only take a fraction of a second to exit, but a service that's going to take ten seconds should definitely be setting a status of STOP_PENDING and then cleaning up asynchronously.
If this is your own service, you should correct that problem. I'd start by making sure that all of the cleanup is really necessary; for example, there's no need to free memory before stopping (unless the service is sharing a process with other services). If the cleanup really can't be made fast enough, launch a separate thread (or signal your main thread) to perform the service shutdown and set the service status to STOP_PENDING.
If this is someone else's service, the only solution is to issue the stop request from a separate thread or in a subprocess.
Is there a way to kill / re-start a long running task in AWS SWF? Sometimes some of our tasks run for a longer duration and we would like to manually kill a certain task (either via UI or programmatically) and re-start the task if possible. How to achieve this?
The console is an option for manually killing a workflow.
You can also set timeouts on the whole workflow execution or on individual activities. These can be set when you register your activity or when you start it (defaultTaskStartToCloseTimeoutSecond).
It's not clear what language you're using.
If you're using Java, then you should look into exponential retry in the Flow Framework. This makes the SDK restart your activity if it fails.
A long-running activity is expected to heartbeat using RecordActivityTaskHeartbeat. If the activity process hangs or crashes, this leads to a timeout failure after the short heartbeat interval instead of after the long task execution timeout.
The workflow code (decider) can always request activity cancellation through the RequestCancelActivityTask decision. The cancellation request is returned as the output of the RecordActivityTaskHeartbeat call. The activity implementation should cancel itself and report back to the service using the RespondActivityTaskCanceled API call.
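As a sketch of that heartbeat-and-cancel flow using the AWS SDK for C++ SWF client: the task token and the unit of work are placeholders, and the class and accessor names below follow the SDK's generated naming conventions, so verify them against the SDK version you use.

#include <aws/core/Aws.h>
#include <aws/swf/SWFClient.h>
#include <aws/swf/model/RecordActivityTaskHeartbeatRequest.h>
#include <aws/swf/model/RespondActivityTaskCanceledRequest.h>
#include <chrono>
#include <thread>

// Worker loop (illustrative): heartbeats between chunks of work and honours a
// cancellation request returned by the heartbeat call.
void runActivity(const Aws::String &taskToken)
{
    Aws::SWF::SWFClient swf;        // default credentials/region configuration

    bool done = false;              // would be set by the real work when it finishes
    while (!done)
    {
        // ... do one unit of the real work here (placeholder) ...
        std::this_thread::sleep_for(std::chrono::seconds(10));

        Aws::SWF::Model::RecordActivityTaskHeartbeatRequest heartbeat;
        heartbeat.SetTaskToken(taskToken);

        auto outcome = swf.RecordActivityTaskHeartbeat(heartbeat);
        if (!outcome.IsSuccess())
            return;                 // e.g. the task already timed out; stop working

        if (outcome.GetResult().GetCancelRequested())
        {
            // The decider asked for cancellation: stop and acknowledge it.
            Aws::SWF::Model::RespondActivityTaskCanceledRequest cancel;
            cancel.SetTaskToken(taskToken);
            cancel.SetDetails("cancelled on request");
            swf.RespondActivityTaskCanceled(cancel);
            return;
        }
    }
    // On normal completion the worker would call RespondActivityTaskCompleted instead.
}

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    runActivity("task-token-from-PollForActivityTask");  // placeholder token
    Aws::ShutdownAPI(options);
    return 0;
}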
See the Error Handling section of the AWS Flow Framework Developer Guide for the AWS Flow Framework way of cancelling activities.
Sometimes an activity implementation cannot support heartbeating and self-cancellation. The solution is to execute another "kill" activity that terminates the first activity's execution. For example, under Unix such a kill activity could issue a "kill -9" command against the process that implements the first one.
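Such a kill activity can be as small as a single POSIX call; the process id would have to be recorded somewhere (for example by the first activity when it starts) and passed in as activity input. A hypothetical sketch:

#include <signal.h>      // kill, SIGKILL (POSIX)
#include <sys/types.h>   // pid_t

// Body of a hypothetical "kill activity": terminates the process that runs the
// stuck activity, the equivalent of "kill -9 <pid>" from the shell.
bool killActivityProcess(pid_t pid)
{
    return kill(pid, SIGKILL) == 0;
}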