I've found some tricky behavior when calling SetEvent multiple times with RegisterWaitForSingleObjectEx().
#include <windows.h>
#include <iostream>
using namespace std; // for cout
using namespace System;
using namespace System::Drawing;
using namespace System::Threading;
VOID CALLBACK Callback(PVOID lpParameter, BOOLEAN TimerOrWaitFired)
{
String^ string = gcnew String("");
Monitor::Enter(string->GetType());
//wait for 2 seconds
for(int i=1; i<=2;i++) {
Sleep(1000);
cout << i << " seconds \n";
}
Monitor::Exit(string->GetType());
}
void main()
{
HANDLE eventhandle = CreateEvent(
NULL, // default security attributes
FALSE, // auto-reset event (FALSE means auto-reset, not manual-reset)
FALSE, // initial state is nonsignaled
TEXT("WriteEvent") // object name
);
//register the callback for the event
RegisterWaitForSingleObjectEx(eventhandle, Callback, nullptr, -1, WT_EXECUTELONGFUNCTION);
BOOL bEvented[3];
bEvented[0] = SetEvent(eventhandle);
//Sleep(10);
bEvented[1] = SetEvent(eventhandle);
//Sleep(10);
bEvented[2] = SetEvent(eventhandle);
cout << "event0 = " << bEvented[0] << ", event1 = " << bEvented[1] << ", event2 = " << bEvented[2] << " \n";
}
I set the event 3 times, so I expect the callback to be called 3 times (please correct me if I am wrong).
But I get only 2 callbacks.
If I uncomment the //Sleep(10); lines, I get 3 callbacks.
What is happening here?
I am using Windows 7 64-bit.
UPDATE:
Can you please give an example of how to achieve this using a semaphore?
Actual scenario:
I have a third-party library where I have to register a HANDLE to get notified about the occurrence of an event. Most of the time I get the notification (the HANDLE is signalled). Sometimes, though, I do not get the expected number of signals.
I pass a HANDLE created with CreateEvent() and register a callback for it using RegisterWaitForSingleObjectEx().
I suspect that this race condition is the reason for the behavior.
How can I overcome this?
SetEvent on an event that's already signalled is a no-op. You have a race condition between the main thread that calls SetEvent, and the worker thread that waits on it (and resets it automatically when the wait is satisfied).
Most likely, you manage to call SetEvent twice while the worker is still running the first callback.
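Regarding the update: if you can use a semaphore instead of an event, nothing gets lost, because every ReleaseSemaphore adds one count and every satisfied wait consumes exactly one count. A minimal sketch of that idea (the names, the final Sleep and the MAXLONG maximum are illustrative, not taken from your code):
#include <windows.h>
#include <iostream>

HANDLE g_sem; // hypothetical global, kept simple for the sketch

VOID CALLBACK SemCallback(PVOID /*lpParameter*/, BOOLEAN /*TimerOrWaitFired*/)
{
    // Each invocation corresponds to exactly one consumed semaphore count.
    std::cout << "callback fired\n";
}

int main()
{
    // Counting semaphore: initial count 0, large maximum count.
    g_sem = CreateSemaphore(NULL, 0, MAXLONG, NULL);

    HANDLE hWait = NULL;
    RegisterWaitForSingleObject(&hWait, g_sem, SemCallback, NULL,
                                INFINITE, WT_EXECUTELONGFUNCTION);

    // Three releases add three counts, so three callbacks run,
    // even without any Sleep between them.
    for (int i = 0; i < 3; ++i)
        ReleaseSemaphore(g_sem, 1, NULL);

    Sleep(5000); // crude: give the thread-pool callbacks time to run
    UnregisterWaitEx(hWait, INVALID_HANDLE_VALUE); // blocks until pending callbacks finish
    CloseHandle(g_sem);
    return 0;
}
This only works if the third-party library releases the handle as a semaphore (or lets you signal it yourself); if it insists on calling SetEvent on the handle you registered, a semaphore handle cannot be used.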
Related
I am using this really simple code to try to create a mutex
int main(){
HANDLE hMutex = ::CreateMutex(nullptr, FALSE, L"SingleInstanceMutex");
if(!hMutex){
wchar_t buff[1000];
_snwprintf(buff, sizeof(buff) / sizeof(buff[0]), L"Failed to create mutex (Error: %d)", ::GetLastError()); // size is in characters, not bytes
::MessageBox(nullptr, buff, L"Single Instance", MB_OK);
return 0x1;
} else {
::MessageBox(nullptr, L"Mutex Created", L"Single Instance", MB_OK);
}
return 0x0;
}
And I get the message "Mutex Created" as if it were created correctly, but when I search for it with the Sysinternals WinObj tool I can't find it.
Also, if I start the program again while another instance is still running, I always get the "Mutex Created" message and never an error telling me the mutex already exists.
I'm trying it on a Windows 7 VM.
What am I doing wrong?
Ah, I'm cross-compiling on Linux using:
i686-w64-mingw32-g++ -static-libgcc -static-libstdc++ Mutex.cpp
Thank you!
In order to use a Windows mutex (whether a named one like yours or an unnamed one), you need to use the following Win32 APIs:
CreateMutex - to obtain a handle to the mutex Windows kernel object. In the case of a named mutex (like yours), multiple processes can successfully get this handle. The first one causes the OS to create a new named mutex, and the others get a handle referring to that same mutex.
If the function succeeds and you get a valid handle to the named mutex, you can determine whether the mutex already existed (i.e. another process had already created it) by checking whether GetLastError returns ERROR_ALREADY_EXISTS.
WaitForSingleObject - to lock the mutex for exclusive access. This function is not specific to mutexes and is used for many kernel objects. See the link above for more info about Windows kernel objects.
ReleaseMutex - to unlock the mutex.
CloseHandle - to release the acquired mutex handle (as usual with Windows handles). The OS will automatically close the handle when the process exits, but it is good practice to do it explicitly.
A complete example:
#include <Windows.h>
#include <iostream>
int main()
{
// Create the mutex handle:
HANDLE hMutex = ::CreateMutex(nullptr, FALSE, L"SingleInstanceMutex");
if (!hMutex)
{
std::cout << "Failed to create mutex handle." << std::endl;
// Handle error: ...
return 1;
}
bool bAlreadyExisted = (GetLastError() == ERROR_ALREADY_EXISTS);
std::cout << "Succeeded to create mutex handle. Already existed: " << (bAlreadyExisted ? "YES" : "NO") << std::endl;
// Lock the mutex:
std::cout << "Atempting to lock ..." << std::endl;
DWORD dwRes = ::WaitForSingleObject(hMutex, INFINITE);
if (dwRes != WAIT_OBJECT_0)
{
std::cout << "Failed to lock the mutex" << std::endl;
// Handle error: ...
return 1;
}
std::cout << "Locked." << std::endl;
// Do something that required the lock: ...
std::cout << "Press ENTER to unlock." << std::endl;
std::getchar();
// Unlock the mutex:
if (!::ReleaseMutex(hMutex))
{
std::cout << "Failed to unlock the mutex" << std::endl;
// Handle error: ...
return 1;
}
std::cout << "Unlocked." << std::endl;
// Free the handle:
if (!CloseHandle(hMutex))
{
std::cout << "Failed to close the mutex handle" << std::endl;
// Handle error: ...
return 1;
}
return 0;
}
Error handling:
As you can see in the documentation links above, when CreateMutex, ReleaseMutex and CloseHandle fail, you should call GetLastError to get more info about the error. WaitForSingleObject returns a specific value upon error (see the documentation link above). This should be done as part of the // Handle error: ... sections.
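For instance, a // Handle error: ... section could look something like the following sketch (the exact formatting and logging are up to you; it assumes the includes from the example above):
DWORD err = ::GetLastError();
wchar_t msg[256] = {};
// Ask the system for a human-readable description of the error code.
::FormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                 nullptr, err, 0, msg, 256, nullptr);
std::wcerr << L"Error " << err << L": " << msg << std::endl;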
Note:
Using a named mutex for IPC (interprocess communication) might be the only good use case for native Windows mutexes.
For a regular unnamed mutex it's better to use one of the standard library mutex types: std::mutex, std::recursive_mutex, std::recursive_timed_mutex (the last two support repeated locking by the same thread, similar to a Windows mutex).
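For comparison, here is a minimal in-process sketch with std::mutex and std::lock_guard (a generic illustration, unrelated to the single-instance scenario above):
#include <mutex>
#include <thread>
#include <iostream>

std::mutex m;
int counter = 0;

void work()
{
    for (int i = 0; i < 1000; ++i)
    {
        std::lock_guard<std::mutex> lock(m); // locks here, unlocks at end of scope
        ++counter;
    }
}

int main()
{
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::cout << counter << std::endl; // always 2000
}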
I am looking to put in some code (temporary at the moment, but as a thought experiment I am considering working it in long term) that would accomplish two things:
Allow toggling of logs via signals.
Trap all catchable signals and output pid/thread/signal info to the logs, followed by returning to default behavior.
Here is some compiling pseudo-code which shows what I would like to achieve.
#include <iostream>
#include <csignal>
#include <thread> // for this_thread
#include <unistd.h> // For getpid() and fork()
#include <cstdlib>
// What is a safe way of determining the number of
// available signals?
// ** Externally defined values
const int verbose_debug_signal = 30;
const int trace_logging_enabled_default = 0;
// ** Component Control Class Definition Start
volatile sig_atomic_t trace_logging_enabled
= trace_logging_enabled_default;
void handle_signal(int sig)
{
std::cout << "Process PID: " << std::dec << getpid()
<< " Caught signal " << std::dec
<< sig << " on thread 0x" << std::hex
<< std::this_thread::get_id() << "\n";
if(sig == verbose_debug_signal)
{
trace_logging_enabled = ~trace_logging_enabled;
std::cout << "Trace Logging "
<< (!!trace_logging_enabled ? "enabled\n" : "disabled\n");
}
else
{
signal(sig, SIG_DFL);
raise(sig);
}
}
class ComponentControlClass
{
public:
ComponentControlClass()
: trace_logging_enabled_ptr(&trace_logging_enabled)
{}
~ComponentControlClass(){}
private:
volatile sig_atomic_t *trace_logging_enabled_ptr;
};
int main()
{
for(int i = 1; i < NSIG; ++i)
{
signal(i, handle_signal);
}
// BAH! stupid SIP strikes again. I think I would
// have to sign this to make it work.
// pid_t children[3];
// for(auto child : children)
// children[child] = fork();
while(1);
return 0;
}
Compiled with the -O3 flag to make sure the volatile fields don't get clobbered by the optimizer. The result of running the code in window 1 and sending "kill -30 <pid>" from window 2 exactly 4 times:
Process PID: 46008 Caught signal 30 on thread 0x0x11325f600
Trace Logging enabled
Process PID: 46008 Caught signal 30 on thread 0x0x11325f600
Trace Logging disabled
Process PID: 46008 Caught signal 30 on thread 0x0x11325f600
Trace Logging enabled
Process PID: 46008 Caught signal 30 on thread 0x0x11325f600
Trace Logging disabled
This is exactly what I hoped would happen. In this example, the class represents a component control class, which is the entry point for our shared objects in a homegrown framework. In other words, this is where I would need to pass external elements into the scheme, such as a pointer to the volatile memory indicating the logging state.
My question(s)
I've never seen anyone do anything like this, so I am fairly certain there is a good reason not to. I am trying to understand the implications.
I found a few articles on complex signal handling, but I had trouble determining what applied here. For every signal other than the one I use for toggling, I trap it, log it, and then re-raise it with the default behavior restored, so I wouldn't think this would be an issue. Is it?
When the toggling signal is sent, I would want to propagate the logging state to child processes. I haven't been able to simulate this easily on my Mac because of SIP, and I haven't quite figured out how I would do it. I have only confirmed that it won't happen by itself, because signals are delivered per process.
Now that I'm bouncing between Linux and macOS, I am seeing that signal.h is quite different. My main concern is that different macros are used for the maximum signal number, and Apple seems to have only 32 signals where Linux uses 64. Is there a boilerplate way of approaching the signals (specifically the max/min values) that is more portable? Something like this, but hopefully more robust (an alternative sketch follows the snippet):
int sigmax = 0;
#if defined(__linux__)
sigmax = _NSIG;
#elif defined(__APPLE__)
sigmax = NSIG;
#endif
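A sketch of an alternative (my suggestion, not from the original post): skip the maximum-signal macro entirely and let sigaction reject the numbers the platform doesn't support, which sidesteps the NSIG/_NSIG spelling differences. The probe limit below is an arbitrary assumption, not a standard value:
#include <signal.h> // for sigaction (POSIX)

// Install `handler` for every catchable signal by probing signal numbers
// upward; sigaction simply fails with EINVAL for numbers the OS does not
// support, so invalid ones are skipped without any platform-specific macro.
void register_all_signals(void (*handler)(int))
{
    const int probe_limit = 128; // arbitrary upper bound
    for (int sig = 1; sig < probe_limit; ++sig)
    {
        if (sig == SIGKILL || sig == SIGSTOP)
            continue; // these can never be caught
        struct sigaction sa = {};
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sigaction(sig, &sa, nullptr); // returns -1/EINVAL for unsupported numbers
    }
}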
I am developing a C++ class that makes calls to the Windows API C libraries.
I am using semaphores for a task; let's say I have two processes:
ProcessA has two semaphores:
Global\processA_receiving_semaphore
Global\processA_waiting_semaphore
ProcessB has two semaphores:
Global\processB_receiving_semaphore
Global\processB_waiting_semaphore
I have two threads in each process:
Sending thread in processA:
Wait on "Global\processB_waiting_semaphore"
// do something
Signal "Global\processB_receiving_semaphore"
Receiving thread on processB:
Wait on "Global\processB_receiving_semaphore"
// do something
Signal "Global\processB_waiting_semaphore
I removed ALL code that releases "Global\processB_waiting_semaphore", but it can still be acquired. Calling WaitForSingleObject on that semaphore always returns a successful wait, and it returns immediately. I tried setting the timeout period to 0, and it still acquires the semaphore while NOTHING is releasing it.
The receiving semaphore has initial count = 0 and max count = 1 while the waiting semaphore has initial count = 1 and max count = 1.
Calling WaitForSingleObject on the receiving semaphore works great and blocks until it is released by the other process. The problem is with the waiting semaphore and I cannot figure out why. The code is very big and I have made sure the names of the semaphores are set correctly.
Is this a common issue? If you need more explanation please comment and I will modify the post.
EDIT: CODE ADDED:
Receiver semaphores:
bool intr_process_comm::create_rcvr_semaphores()
{
std::cout << "\n Creating semaphore: " << "Global\\" << this_name << "_rcvr_sem";
rcvr_sem = CreateSemaphore(NULL, 0, 1, ("Global\\" + this_name + "_rcvr_sem").c_str());
std::cout << "\n Creating semaphore: " << "Global\\" << this_name << "_wait_sem";
wait_sem = CreateSemaphore(NULL, 1, 1, ("Global\\" + this_name + "_wait_sem").c_str());
return (rcvr_sem && wait_sem);
}
Sender semaphores:
// this sender connects to the wait semaphore in the target process
sndr_sem = OpenSemaphore(SEMAPHORE_MODIFY_STATE, FALSE, ("Global\\" + target_name + "_wait_sem").c_str());
// this target connects to the receiver semaphore in the target process
trgt_sem = OpenSemaphore(SEMAPHORE_MODIFY_STATE, FALSE, ("Global\\" + target_name + "_rcvr_sem").c_str());
DWORD intr_process_locking::wait(unsigned long period)
{
return WaitForSingleObject(sndr_sem, period);
}
void intr_process_locking::signal()
{
ReleaseSemaphore(trgt_sem, 1, 0);
}
Receiving thread function:
void intr_process_comm::rcvr_thread_proc()
{
while (conn_state == intr_process_comm::opened) {
try {
// wait on rcvr_semaphore for an infinite time
WaitForSingleObject(rcvr_sem, INFINITE);
if (inner_release) // if the semaphore was released within this process
return;
// once signaled by another process, get the message
std::string msg_str((LPCSTR)hmf_mapview);
// signal one of the waiters that want to put messages
// in this process's memory area
//
// this doesn't change ANYTHING in execution, commented or not..
//ReleaseSemaphore(wait_sem, 1, 0);
// put this message in this process's queue
Msg msg = Msg::from_xml(msg_str);
if (msg.command == "connection")
process_connection_message(msg);
in_messages.enQ(msg);
//std::cout << "\n Message: \n"<< msg << "\n";
}
catch (const std::exception& e) {
std::cout << "\n Ran into trouble getting the message. Details: " << e.what();
}
}
}
Sending thread function:
void intr_process_comm::sndr_thread_proc()
{
while (conn_state == intr_process_comm::opened ||
(conn_state == intr_process_comm::closing && out_messages.size() > 0)
) {
// pull a message out of the queue
Msg msg = out_messages.deQ();
if (connections.find(msg.destination) == connections.end())
connections[msg.destination].connect(msg.destination);
if (connections[msg.destination].connect(msg.destination)
!= intr_process_locking::state::opened) {
blocked_messages[msg.destination].push_back(msg);
continue;
}
// THIS ALWAYS GETS THE WAIT_OBJECT_0 RESULT
DWORD wait_result = connections[msg.destination].wait(wait_timeout);
if (wait_result == WAIT_TIMEOUT) { // <---- THIS IS NEVER TRUE
out_messages.enQ(msg);
continue;
}
// do things here
// release the receiver semaphore in the other process
connections[msg.destination].signal();
}
}
To clarify some things:
trgt_sem in the sender is the rcvr_sem in the receiver.
sndr_sem in the sender is the wait_sem in the receiver.
To call WaitForSingleObject with some handle:
The handle must have the SYNCHRONIZE access right.
But you open the semaphore with SEMAPHORE_MODIFY_STATE access only. With that access it is possible to call ReleaseSemaphore (this handle must have the SEMAPHORE_MODIFY_STATE access right), but a call to WaitForSingleObject fails with the result WAIT_FAILED, and a call to GetLastError() afterwards returns ERROR_ACCESS_DENIED.
So if we want to call both ReleaseSemaphore and any wait function, we need SEMAPHORE_MODIFY_STATE | SYNCHRONIZE access on the handle, so the semaphore needs to be opened with
OpenSemaphore(SEMAPHORE_MODIFY_STATE | SYNCHRONIZE, ...)
And of course, always checking API return values and error codes can save a lot of time.
If you set the timeout to 0, WaitForSingleObject always returns immediately. A successful wait returns WAIT_OBJECT_0 (which happens to have the value 0); WaitForSingleObject is not like most APIs where success is indicated by a non-zero return value.
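To make failures like the missing SYNCHRONIZE right visible, it helps to check every possible outcome of the wait rather than only WAIT_TIMEOUT. A generic sketch (the handle and timeout names are placeholders, not the original class members):
#include <windows.h>
#include <iostream>

void check_wait(HANDLE hSemaphore, DWORD timeoutMs)
{
    DWORD res = WaitForSingleObject(hSemaphore, timeoutMs);
    switch (res)
    {
    case WAIT_OBJECT_0:
        std::cout << "acquired a count\n";
        break;
    case WAIT_TIMEOUT:
        std::cout << "nothing available within the timeout\n";
        break;
    case WAIT_FAILED:
        // e.g. ERROR_ACCESS_DENIED when the handle lacks SYNCHRONIZE
        std::cout << "wait failed, error " << GetLastError() << "\n";
        break;
    default:
        // WAIT_ABANDONED only applies to mutexes
        break;
    }
}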
My program has a shared queue and is largely divided into two parts:
one part pushes instances of class request onto the queue, and the other accesses request objects in the queue and processes them. request is a very simple class (just for testing) with a string req field.
I am working on the second part, and in doing so, I want to keep one scheduling thread, and multiple (in my example, two) executing threads.
The reason I want a separate scheduling thread is to reduce the number of lock and unlock operations the executing threads need in order to access the queue.
I am using pthread library, and my scheduling and executing function look like the following:
void * sched(void* elem) {
queue<request> *qr = static_cast<queue<request>*>(elem);
pthread_t pt1, pt2;
if(pthread_mutex_lock(&mut) == 0) {
if(!qr->empty()) {
int result1 = pthread_create(&pt1, NULL, execQueue, &(qr->front()));
if (result1 != 0) cout << "error sched1" << endl;
qr->pop();
}
if(!qr->empty()) {
int result2 = pthread_create(&pt2, NULL, execQueue, &(qr->front()));
if (result2 != 0) cout << "error sched2" << endl;
qr->pop();
}
pthread_join(pt1, NULL);
pthread_join(pt2, NULL);
pthread_mutex_unlock(&mut);
}
return 0;
}
void * execQueue(void* elem) {
request *r = static_cast<request*>(elem);
cout << "req is: " << r->req << endl; // req is a string field
return 0;
}
Simply put, each call to execQueue runs on its own thread and just outputs the request passed to it through the void* elem parameter.
sched is called from main() on its own thread (in case you're wondering how, it is called like this):
pthread_t schedpt;
int schresult = pthread_create(&schedpt, NULL, sched, &q);
if (schresult != 0) cout << "error sch" << endl;
pthread_join(schedpt, NULL);
The sched function itself creates multiple (here, two) executing threads, pops requests from the queue, and executes the requests by calling execQueue on those threads (pthread_create and then pthread_join).
The problem is the program's weird behavior.
When I checked the size and the elements of the queue without creating the threads, they were exactly what I expected.
However, when I ran the program with multiple threads, it printed
1 items are in the queue.
2 items are in the queue.
req is:
req is: FIRST! �(x'�j|1��rj|p�rj|1����FIRST!�'�j|!�'�j|�'�j| P��(�(��(1���i|p��i|
with the last line constantly varying.
The desired output is
1 items are in the queue.
2 items are in the queue.
req is: FIRST
req is: FIRST
I guess either the way I call execQueue on multiple threads or the way I pop() is wrong, but I could not figure out the problem, nor could I find any reference showing correct usage.
Please help me with this. Bear with my clumsy use of pthreads, as I am a beginner.
Your queue holds objects, not pointers to objects. You can take the address of the object at the front of the queue via operator &() as you are doing, but as soon as you pop the queue that object is gone and the address is no longer valid. Of course, sched doesn't care, but the execQueue function you sent that address to certainly does.
The most immediate fix for your code is this:
Change this:
pthread_create(&pt1, NULL, execQueue, &(qr->front()));
To this:
// send a dynamic *copy* of the front queue node to the thread
pthread_create(&pt1, NULL, execQueue, new request(qr->front()));
And your thread proc should be changed to this:
void * execQueue(void* elem)
{
request *r = static_cast<request*>(elem);
cout << "req is: " << r->req << endl; // req is a string field
delete r;
return nullptr;
}
That said, I can think of better ways to do this, but this should address your immediate problem, assuming your request object class is copy-constructible, and if it has dynamic members, follows the Rule Of Three.
And here's your mildly sanitized C++11 version, just because I needed a simple test thingie for an MSVC 2013 installation :)
See it Live On Coliru
#include <iostream>
#include <thread>
#include <future>
#include <mutex>
#include <queue>
#include <string>
struct request { std::string req; };
std::queue<request> q;
std::mutex queue_mutex;
void execQueue(request r) {
std::cout << "req is: " << r.req << std::endl; // req is a string field
}
bool sched(std::queue<request>& qr) {
std::thread pt1, pt2;
{
std::lock_guard<std::mutex> lk(queue_mutex);
if (!qr.empty()) {
pt1 = std::thread(&execQueue, std::move(qr.front()));
qr.pop();
}
if (!qr.empty()) {
pt2 = std::thread(&execQueue, std::move(qr.front()));
qr.pop();
}
}
if (pt1.joinable()) pt1.join();
if (pt2.joinable()) pt2.join();
return true;
}
int main()
{
auto fut = std::async(sched, std::ref(q));
if (!fut.get())
std::cout << "error" << std::endl;
}
Of course it doesn't actually do much right now (because there are no tasks in the queue).
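For example, to see it actually print something, you could push a couple of requests under the mutex before launching sched (a sketch reusing the names from the snippet above):
int main()
{
    {
        // Fill the queue before the scheduler runs.
        std::lock_guard<std::mutex> lk(queue_mutex);
        q.push(request{"FIRST"});
        q.push(request{"SECOND"});
    }
    auto fut = std::async(sched, std::ref(q));
    if (!fut.get())
        std::cout << "error" << std::endl;
}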
I am designing a server that accepts incoming connections; the clients occasionally send requests the server needs to respond to, but mostly the server detects events and broadcasts them to all connected clients. Basically what I have is this:
#include <ace/Acceptor.h>
#include <ace/INET_Addr.h>
#include <ace/Reactor.h>
#include <ace/SOCK_Acceptor.h>
#include <ace/SOCK_Stream.h>
#include <ace/Svc_Handler.h>
#include <iostream>
#include <thread>
// XXX: for simplicity
HANDLE const hEvent = ::CreateEvent(NULL, FALSE, FALSE, NULL);
class MyService
: public ACE_Svc_Handler<ACE_SOCK_STREAM, ACE_NULL_SYNCH>
{
public:
MyService() : signalCount_(0) { }
int open(void*) override
{
ACE_Reactor::instance()->register_handler(
this,
ACE_Event_Handler::READ_MASK);
ACE_Reactor::instance()->register_handler(
this,
hEvent);
return 0;
}
int handle_input(ACE_HANDLE) override
{
// Handle stuff coming in from clients.
return 0;
}
int handle_signal(int, siginfo_t*, ucontext_t*) override
{
// handle the detected event, send to client.
std::cout
<< signalCount_++ << " "
<< this << " "
<< __FUNCTION__
<< std::endl;
return 0;
}
unsigned signalCount_;
};
typedef ACE_Acceptor<MyService, ACE_SOCK_ACCEPTOR> MyAcceptor;
int main()
{
WSADATA wsData;
WSAStartup(MAKEWORD(2, 0), &wsData);
std::thread thr([=]()
{
// simulate the events.
Sleep(1000);
SetEvent(hEvent);
});
auto r = ACE_Reactor::instance();
MyAcceptor acceptor(ACE_INET_Addr(1234), r);
r->run_reactor_event_loop();
}
The problem here is that whenever hEvent is set, only the first instance of MyService gets its handle_signal called. It looks like only one handler is allowed per event, but a handler can handle multiple events. How can I make multiple handlers handle a single event?
If I make the event a manual-reset event, then all the handlers get their handle_signal called for as long as the event stays set. But that is really not what I want: I don't want a client to be notified of the same event multiple times.
I kind of achieved my goal by using a semaphore instead of an event:
HANDLE const hEvent = ::CreateSemaphore(NULL, 0, 16, NULL);
And I made the constructor and destructor of MyService count the number of connected clients so that I could release the semaphore the correct number of times:
std::thread thr([=]()
{
while (true)
{
Sleep(1000);
ReleaseSemaphore(hEvent, clientCount, nullptr);
}
});
This seems wrong and smells a lot like a hack. Is there a proper way of doing this with ACE?
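One alternative I would consider (a sketch in the spirit of the code above, not a tested ACE recipe): give each MyService its own auto-reset event, register that per-instance handle in open() the same way hEvent is registered now, keep the handles in a shared container, and have the detector thread call SetEvent on every handle. Each connection then sees each event exactly once, without tracking a semaphore count:
// Hypothetical sketch: one auto-reset event per connection instead of a shared one.
#include <windows.h>
#include <mutex>
#include <vector>

std::mutex g_handlesMutex;
std::vector<HANDLE> g_clientEvents;

// In MyService::open() one would do something along these lines:
//   hEvent_ = ::CreateEvent(NULL, FALSE, FALSE, NULL);        // auto-reset, per instance
//   ACE_Reactor::instance()->register_handler(this, hEvent_);
//   { std::lock_guard<std::mutex> lk(g_handlesMutex); g_clientEvents.push_back(hEvent_); }
// (and remove the handle again when the connection closes)

// The detector thread broadcasts by signalling every per-client event:
void broadcast_event()
{
    std::lock_guard<std::mutex> lk(g_handlesMutex);
    for (HANDLE h : g_clientEvents)
        ::SetEvent(h);
}
Whether this is more "proper" than the semaphore is debatable, but it avoids keeping a client count in one place, and each handle_signal fires at most once per detected event.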