Allow native C++ components to raise CLR asynchronous events

In my native code I need to raise an asynchronous event to C#, using C++/CLI as a bridge. I have tried passing a function pointer from C++/CLI to the native side, but it did not work properly.
I need to know: 1. What is wrong? 2. How can I make the event asynchronous so that it does not block processing?
In this example I want to raise an event each time the counter reaches 1000. Here is my code:
Count.h
typedef void pointertofunc(bool IsOverFlowOccured);
typedef pointertofunc* pointertofuncdelegate;

class count
{
public:
    void startCounting(pointertofuncdelegate);
    count();
    ~count();
};
Count.cpp
void count::startCounting(pointertofuncdelegate callback)
{
    for (int i = 0; i < 100000; i++)
    {
        //printf("%d \n", i);
        if (i == 1000)
        {
            callback(true);  // invoke the function pointer supplied by the caller
            printf("%d \n", i);
            i = 0;
        }
    }
}
CLR(.h file)
public delegate void manageddelegate(bool isworkflowoccured);

ref class CounterRaiseAsynchronousEvent
{
public:
    event manageddelegate^ managedEventHandler;
    CounterRaiseAsynchronousEvent();
    void initialize();
    void raiseEvent(bool eoverFlow);
private:
    count* wrapperObject;
};
CLR(.cpp file)
void CounterRaiseAsynchronousEvent::initialize()
{
    // Create a new delegate and point it to the member function
    manageddelegate^ prDel = gcnew manageddelegate(this, &CounterRaiseAsynchronousEvent::raiseEvent);
    // Keep the delegate alive so the function pointer below stays valid
    GCHandle gch = GCHandle::Alloc(prDel);
    // Convert the delegate to a function pointer
    IntPtr ip = Marshal::GetFunctionPointerForDelegate(prDel);
    // ... and cast it to the appropriate type
    pointertofuncdelegate fp = static_cast<pointertofuncdelegate>(ip.ToPointer());
    // Set the native function pointer on the native class
    wrapperObject->startCounting(fp);
}

void CounterRaiseAsynchronousEvent::raiseEvent(bool isOverflowOccurred)
{
    // Do anything
}

It's not usually a good idea for the event source to implement an asynchronous handling model -- that forces the client code to be multi-threaded and do synchronization, and causes complications when different components have chosen different models.
Instead, call events synchronously, and let the subscriber queue the callback and return immediately. This is compatible with whatever asynchronous model the client code prefers, such as native PostMessage, Control.BeginInvoke, the TPL (async/await), or a worker thread pool.
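For illustration, here is a minimal native sketch of that subscriber-side pattern (the OverflowSubscriber class and its names are assumptions, not part of the question's code): the handler merely enqueues the notification and returns, and a worker owned by the subscriber drains the queue on whatever schedule it likes.
#include <condition_variable>
#include <mutex>
#include <queue>

// Hypothetical subscriber: the event handler just records the fact and returns,
// so the source is never blocked and needs no threading model of its own.
class OverflowSubscriber {
public:
    void onOverflow(bool value) {           // called synchronously by the source
        std::lock_guard<std::mutex> lock(m_);
        pending_.push(value);
        cv_.notify_one();                   // wake the subscriber's own worker
    }

    void processLoop() {                    // runs on the subscriber's thread
        for (;;) {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !pending_.empty(); });
            bool value = pending_.front();
            pending_.pop();
            lock.unlock();
            handle(value);                  // do the slow work outside the handler
        }
    }

private:
    void handle(bool /*value*/) { /* application logic */ }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<bool> pending_;
};
The source stays single-threaded and synchronous; all threading decisions live with the subscriber.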

Related

how to implement node-nan callback using node-addon-api

Until now I've only implemented synchronous node-addon-api methods, i.e., a JavaScript function makes a call, work is done, and the addon returns. I have big gaps in knowledge when it comes to the inner workings of v8, libuv, and node, so please correct any obvious misconceptions.
The goal is to call a JavaScript callback when C++ garbage collection callbacks are called from v8. I originally just called the JavaScript callback from the v8 garbage collection callback, but that ended up with a segv after a couple of calls. It seems that making a call into JavaScript while being called from a v8 callback has some problems (the v8 docs say the callbacks shouldn't allocate objects). So I looked around and found a nan-based example that uses libuv and Nan's AsyncResource to make the callback. The following approach works using nan:
NAN_GC_CALLBACK(afterGC) {
    uint64_t et = uv_hrtime() - gcStartTime;
    // other bookkeeping for GCData_t raw.
    if (doCallbacks) {
        uv_async_t* async = new uv_async_t;
        GCData_t* data = new GCData_t;
        *data = raw;
        data->gcTime = et;
        async->data = data;
        uv_async_init(uv_default_loop(), async, asyncCB);
        uv_async_send(async);
    }
}
class GCResponseResource : public Nan::AsyncResource {
public:
    GCResponseResource(Local<Function> callback_)
        : Nan::AsyncResource("nan:gcstats.DeferredCallback") {
        callback.Reset(callback_);
    }
    ~GCResponseResource() {
        callback.Reset();
    }
    Nan::Persistent<Function> callback;
};
static GCResponseResource* asyncResource;

static void closeCB(uv_handle_t* handle) {
    delete handle;
}

static void asyncCB(uv_async_t* handle) {
    Nan::HandleScope scope;
    GCData_t* data = static_cast<GCData_t*>(handle->data);
    Local<Object> obj = Nan::New<Object>();
    Nan::Set(obj, Nan::New("gcCount").ToLocalChecked(),
             Nan::New<Number>(data->gcCount));
    Nan::Set(obj, Nan::New("gcTime").ToLocalChecked(),
             Nan::New<Number>(data->gcTime));
    Local<Object> counts = Nan::New<v8::Object>();
    for (int i = 0; i < maxTypeCount; i++) {
        if (data->typeCounts[i] != 0) {
            Nan::Set(counts, i, Nan::New<Number>(data->typeCounts[i]));
        }
    }
    Nan::Set(obj, Nan::New("gcTypeCounts").ToLocalChecked(), counts);
    Local<Value> arguments[] = {obj};
    Local<Function> callback = Nan::New(asyncResource->callback);
    v8::Local<v8::Object> target = Nan::New<v8::Object>();
    asyncResource->runInAsyncScope(target, callback, 1, arguments);
    delete data;
    uv_close((uv_handle_t*) handle, closeCB);
}
My question is: how would I do this using node-addon-api instead of nan?
It's not clear to me what the node-addon-api equivalents of uv_async_init, uv_async_send, etc. are. This is partly because it's not clear to me which underlying N-API (as opposed to node-addon-api) functions are required.
I have been unable to find an example like this. The callback example is completely synchronous. The async pi example uses a worker thread to perform a task, but that seems like overkill compared to the nan-based code's approach using the uv primitives.
Your example is not really asynchronous, because the GC callbacks run in the main thread. However, the fact that the JS world is stopped for GC does not mean it is stopped in a way that allows a callback to run - the GC can stop it in the middle of a function.
You need a ThreadSafeFunction to do this. Look here for an example:
https://github.com/nodejs/node-addon-api/blob/main/doc/threadsafe_function.md
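As a rough illustration (not taken from the linked document), the node-addon-api shape of this is: create a Napi::ThreadSafeFunction once on the main thread, then queue calls to it from the GC hook, where no JS may run. The Start/QueueCallback names and the GCData payload below are placeholders.
#include <napi.h>

// Illustrative payload; mirrors the GCData_t idea from the nan version.
struct GCData {
    uint64_t gcTime;
};

static Napi::ThreadSafeFunction tsfn;

// Called once from JS with the callback to invoke later.
Napi::Value Start(const Napi::CallbackInfo& info) {
    Napi::Env env = info.Env();
    tsfn = Napi::ThreadSafeFunction::New(
        env,
        info[0].As<Napi::Function>(),  // JS callback to fire later
        "gcstats",                     // resource name (for async_hooks diagnostics)
        0,                             // unlimited queue
        1);                            // initial thread count using it
    return env.Undefined();
}

// Safe to call from the GC hook: it only queues; no JS is touched here.
void QueueCallback(uint64_t gcTime) {
    GCData* data = new GCData{gcTime};
    tsfn.NonBlockingCall(data, [](Napi::Env env, Napi::Function jsCallback, GCData* data) {
        // This lambda runs later on the main thread, where JS calls are legal.
        Napi::Object obj = Napi::Object::New(env);
        obj.Set("gcTime", Napi::Number::New(env, static_cast<double>(data->gcTime)));
        jsCallback.Call({obj});
        delete data;
    });
}
// When no further calls will be made, release it with tsfn.Release().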

Readable node stream to native c++ addon InputStream

Conceptually what I'm trying to do is very simple. I have a Readable stream in node, and I'm passing that to a native C++ addon, where I want to connect it to an IInputStream.
The native library that I'm using works like many C++ (or Java) streaming interfaces that I've seen. The library provides an IInputStream interface (technically an abstract class), which I inherit from and override the virtual functions. It looks like this:
class JsReadable2InputStream : public IInputStream {
public:
    // Constructor takes a js v8 object, makes a stream out of it
    JsReadable2InputStream(const v8::Local<v8::Object>& streamObj);
    ~JsReadable2InputStream();

    /**
     * Blocking read. Blocks until the requested amount of data has been read. However,
     * if the stream reaches its end before the requested amount of bytes has been read
     * it returns the number of bytes read thus far.
     *
     * @param begin     memory into which read data is copied
     * @param byteCount the requested number of bytes
     * @return the number of bytes actually read. Is less than byteCount iff
     *         end of stream has been reached.
     */
    virtual int read(char* begin, const int byteCount) override;
    virtual int available() const override;
    virtual bool isActive() const override;
    virtual void close() override;

private:
    Nan::Persistent<v8::Object> _stream;
    bool _active;
    JsEventLoopSync _evtLoop;
};
Of these functions, the important one here is read. The native library will call this function when it wants more data, and the function must block until it is able to return the requested data (or the stream ends). Here's my implementation of read:
int JsReadable2InputStream::read(char* begin, const int byteCount) {
    if (!this->_active) { return 0; }
    int read = -1;
    while (read < 0 && this->_active) {
        this->_evtLoop.invoke(
            (voidLambda)[this, &read, begin, byteCount]() {
                v8::Local<v8::Object> stream = Nan::New(this->_stream);
                const v8::Local<v8::Function> readFn = Nan::To<v8::Function>(Nan::Get(stream, JS_STR("read")).ToLocalChecked()).ToLocalChecked();
                v8::Local<v8::Value> argv[] = { Nan::New<v8::Number>(byteCount) };
                v8::Local<v8::Value> result = Nan::Call(readFn, stream, 1, argv).ToLocalChecked();
                if (result->IsNull()) {
                    // Somewhat hacky/brittle way to check if stream has ended, but it's the only option
                    v8::Local<v8::Object> readableState = Nan::To<v8::Object>(Nan::Get(stream, JS_STR("_readableState")).ToLocalChecked()).ToLocalChecked();
                    if (Nan::To<bool>(Nan::Get(readableState, JS_STR("ended")).ToLocalChecked()).ToChecked()) {
                        // End of stream, all data has been read
                        this->_active = false;
                        read = 0;
                        return;
                    }
                    // Not enough data available, but stream is still open.
                    // Set a flag for the c++ thread to go to sleep
                    // This is the case that it gets stuck in
                    read = -1;
                    return;
                }
                v8::Local<v8::Object> bufferObj = Nan::To<v8::Object>(result).ToLocalChecked();
                int len = Nan::To<int32_t>(Nan::Get(bufferObj, JS_STR("length")).ToLocalChecked()).ToChecked();
                char* buffer = node::Buffer::Data(bufferObj);
                if (len < byteCount) {
                    this->_active = false;
                }
                // copy the data out of the buffer
                if (len > 0) {
                    std::memcpy(begin, buffer, len);
                }
                read = len;
            }
        );
        if (read < 0) {
            // Give js a chance to read more data
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }
    return read;
}
The idea is that the C++ code keeps a reference to the node stream object. When the native code wants to read, it has to synchronize with the node event loop, then attempt to invoke read on the node stream. If the node stream returns null, the data isn't ready, so the native thread sleeps, giving the node event-loop thread a chance to run and fill its buffers.
This solution works perfectly for a single stream, or even 2 or 3 streams running in parallel. Then for some reason when I hit the magical number of 4+ parallel streams, this totally deadlocks. None of the streams can successfully read any bytes at all. The above while loop runs infinitely, with the call into the node stream returning null every time.
It is behaving as though node is getting starved, and the streams never get a chance to populate with data. However, I've tried adjusting the sleep duration (to much larger values, and randomized values) and that had no effect. It is also clear that the event loop continues to run, since my lambda function continues to get executed there (I put some printfs inside to confirm this).
Just in case it might be relevant (I don't think it is), I'm also including my implementation of JsEventLoopSync. This uses libuv to schedule a lambda to be executed on the node event loop. It is designed such that only one can be scheduled at a time, and other invocations must wait until the first completes.
#include <nan.h>
#include <functional>

// simplified type declarations for the lambda functions
using voidLambda = std::function<void ()>;

// Synchronize with the node v8 event loop. Invokes a lambda function on the event loop,
// where access to js objects is safe.
// Blocks execution of the invoking thread until execution of the lambda completes.
class JsEventLoopSync {
public:
    JsEventLoopSync() : _destroyed(false) {
        // register on the default (same as node) event loop, so that we can execute callbacks in that context
        // This takes a function pointer, which only works with a static function
        this->_handles = new async_handles_t();
        this->_handles->inst = this;
        uv_async_init(uv_default_loop(), &this->_handles->async, JsEventLoopSync::_processUvCb);
        // mechanism for passing this instance through to the native uv callback
        this->_handles->async.data = this->_handles;
        // mutex has to be initialized
        uv_mutex_init(&this->_handles->mutex);
        uv_cond_init(&this->_handles->cond);
    }

    ~JsEventLoopSync() {
        uv_mutex_lock(&this->_handles->mutex);
        // prevent access to deleted instance by callback
        this->_destroyed = true;
        uv_mutex_unlock(&this->_handles->mutex);
        // NOTE: Important, this->_handles must be a dynamically allocated pointer because uv_close() is
        // async, and still has a reference to it. If it were statically allocated as a class member, this
        // destructor would free the memory before uv_close was done with it (leading to asserts in libuv)
        uv_close(reinterpret_cast<uv_handle_t*>(&this->_handles->async), JsEventLoopSync::_asyncClose);
    }

    // called from the native code to invoke the function
    void invoke(const voidLambda& fn) {
        if (v8::Isolate::GetCurrent() != NULL) {
            // Already on the event loop, process now
            return fn();
        }
        // Need to sync with the event loop
        uv_mutex_lock(&this->_handles->mutex);
        if (this->_destroyed) {
            uv_mutex_unlock(&this->_handles->mutex);  // don't leave the mutex held on early return
            return;
        }
        this->_fn = fn;
        // this will invoke processUvCb, on the node event loop
        uv_async_send(&this->_handles->async);
        // wait for it to complete processing
        uv_cond_wait(&this->_handles->cond, &this->_handles->mutex);
        uv_mutex_unlock(&this->_handles->mutex);
    }

private:
    // pulls data out of uv's void* to call the instance method
    static void _processUvCb(uv_async_t* handle) {
        if (handle->data == NULL) { return; }
        auto handles = static_cast<async_handles_t*>(handle->data);
        handles->inst->_process();
    }

    inline static void _asyncClose(uv_handle_t* handle) {
        auto handles = static_cast<async_handles_t*>(handle->data);
        handle->data = NULL;
        uv_mutex_destroy(&handles->mutex);
        uv_cond_destroy(&handles->cond);
        delete handles;
    }

    // Invokes the stored lambda, then signals the waiting thread.
    // Must be run on the node event loop!
    void _process() {
        if (v8::Isolate::GetCurrent() == NULL) {
            // This is unexpected!
            throw std::logic_error("Unable to sync with node event loop for callback!");
        }
        uv_mutex_lock(&this->_handles->mutex);
        if (this->_destroyed) {
            uv_mutex_unlock(&this->_handles->mutex);  // don't leave the mutex held on early return
            return;
        }
        Nan::HandleScope scope; // looks unused, but this is very important
        // invoke the lambda
        this->_fn();
        // signal that we're done
        uv_cond_signal(&this->_handles->cond);
        uv_mutex_unlock(&this->_handles->mutex);
    }

    typedef struct async_handles {
        uv_mutex_t mutex;
        uv_cond_t cond;
        uv_async_t async;
        JsEventLoopSync* inst;
    } async_handles_t;

    async_handles_t* _handles;
    voidLambda _fn;
    bool _destroyed;
};
So, what am I missing? Is there a better way to wait for the node thread to get a chance to run? Is there a totally different design pattern that would work better? Does node have some upper limit on the number of streams that it can process at once?
As it turns out, the problems that I was seeing were actually client-side limitations. Browsers (and seemingly also node) have a limit on the number of open TCP connections to the same origin. I worked around this by spawning multiple node processes to do my testing.
If anyone is trying to do something similar, the code I shared is totally viable. If I ever have some free time, I might make it into a library.

Can POSIX timers safely modify C++ STL objects?

I'm attempting to write a C++ "wrapper" for the POSIX timer system on Linux, so that my C++ program can set timeouts for things (such as waiting for a message to arrive over the network) using the system clock, without dealing with POSIX's ugly C interface. It seems to work most of the time, but occasionally my program will segfault after several minutes of running successfully. The problem seems to be that my LinuxTimerManager object (or one of its member objects) gets its memory corrupted, but unfortunately the problem refuses to appear if I run the program under Valgrind, so I'm stuck staring at my code to try to figure out what's wrong with it.
Here's the core of my timer-wrapper implementation:
LinuxTimerManager.h:
namespace util {

using timer_id_t = int;

class LinuxTimerManager {
private:
    timer_id_t next_id;
    std::map<timer_id_t, timer_t> timer_handles;
    std::map<timer_id_t, std::function<void(void)>> timer_callbacks;
    std::set<timer_id_t> cancelled_timers;
    friend void timer_signal_handler(int signum, siginfo_t* info, void* ucontext);
public:
    LinuxTimerManager();
    timer_id_t register_timer(const int delay_ms, std::function<void(void)> callback);
    void cancel_timer(const timer_id_t timer_id);
};

void timer_signal_handler(int signum, siginfo_t* info, void* ucontext);

}
LinuxTimerManager.cpp:
namespace util {

LinuxTimerManager* tm_instance;

LinuxTimerManager::LinuxTimerManager() : next_id(0) {
    tm_instance = this;
    struct sigaction sa = {0};
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = timer_signal_handler;
    sigemptyset(&sa.sa_mask);
    int success_flag = sigaction(SIGRTMIN, &sa, NULL);
    assert(success_flag == 0);
}

void timer_signal_handler(int signum, siginfo_t* info, void* ucontext) {
    timer_id_t timer_id = info->si_value.sival_int;
    auto cancelled_location = tm_instance->cancelled_timers.find(timer_id);
    // Only fire the callback if the timer is not in the cancelled set
    if(cancelled_location == tm_instance->cancelled_timers.end()) {
        tm_instance->timer_callbacks.at(timer_id)();
    } else {
        tm_instance->cancelled_timers.erase(cancelled_location);
    }
    tm_instance->timer_callbacks.erase(timer_id);
    timer_delete(tm_instance->timer_handles.at(timer_id));
    tm_instance->timer_handles.erase(timer_id);
}

timer_id_t LinuxTimerManager::register_timer(const int delay_ms, std::function<void(void)> callback) {
    struct sigevent timer_event = {0};
    timer_event.sigev_notify = SIGEV_SIGNAL;
    timer_event.sigev_signo = SIGRTMIN;
    timer_event.sigev_value.sival_int = next_id;
    timer_t timer_handle;
    int success_flag = timer_create(CLOCK_REALTIME, &timer_event, &timer_handle);
    assert(success_flag == 0);
    timer_handles[next_id] = timer_handle;
    timer_callbacks[next_id] = callback;
    struct itimerspec timer_spec = {0};
    timer_spec.it_interval.tv_sec = 0;
    timer_spec.it_interval.tv_nsec = 0;
    timer_spec.it_value.tv_sec = 0;
    timer_spec.it_value.tv_nsec = delay_ms * 1000000;
    timer_settime(timer_handle, 0, &timer_spec, NULL);
    return next_id++;
}

void LinuxTimerManager::cancel_timer(const timer_id_t timer_id) {
    if(timer_handles.find(timer_id) != timer_handles.end()) {
        cancelled_timers.emplace(timer_id);
    }
}

}
When my program crashes, the segfault always comes from timer_signal_handler(), usually the lines tm_instance->timer_callbacks.erase(timer_id) or tm_instance->timer_handles.erase(timer_id). The actual segfault is thrown from somewhere deep in the std::map implementation (i.e. stl_tree.h).
Could my memory corruption be caused by a race condition between different timer signals modifying the same LinuxTimerManager? I thought only one timer signal was delivered at a time, but maybe I misunderstood the man pages. Is it just generally unsafe to make a Linux signal handler modify a complex C++ object like std::map?
The signal can occur in the middle of e.g. malloc or free and thus most calls which do interesting things with containers could result in reentering the memory allocation support while its data structures are in an arbitrary state. (As pointed out in the comments, most functions are not safe to call in asynchronous signal handlers. malloc and free are just examples.) Reentering a component in this fashion leads to pretty much arbitrary failure.
Libraries cannot be made safe against this behavior without blocking signals for the entire process during any operations within the library. Doing that is prohibitively expensive, both in the overhead of managing the signal mask and in the amount of time signals would be blocked. (It has to be for the entire process as a signal handler should not block on locks. If a thread handling a signal calls into a library protected by mutexes while another thread holds a mutex the signal handler needs, the handler will block. It is very hard to avoid deadlock when this can happen.)
Designs which work around this typically have a thread which listens for the specific event and then does the processing. You can use a semaphore to synchronize between the thread and the signal handler, since sem_post is one of the few calls that is async-signal-safe.
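As a minimal sketch of that design (the timer_sem and timer_worker names are assumptions, and the single-slot mailbox is a deliberate simplification - a real implementation would queue pending ids): the handler does only async-signal-safe work, and an ordinary thread runs the callbacks.
#include <signal.h>
#include <semaphore.h>

// sem_init(&timer_sem, 0, 0) must be called once at startup.
static sem_t timer_sem;
static volatile sig_atomic_t pending_timer_id;

// The handler does only async-signal-safe work: record the id, post, return.
void timer_signal_handler(int, siginfo_t* info, void*) {
    pending_timer_id = info->si_value.sival_int;
    sem_post(&timer_sem);                      // async-signal-safe per POSIX
}

// An ordinary thread: here std::map, std::function, malloc, etc. are all fine.
void* timer_worker(void*) {
    for (;;) {
        while (sem_wait(&timer_sem) == -1) {}  // retry on EINTR
        int id = pending_timer_id;
        // look up the callback for `id` in the manager's maps and run it here
    }
    return nullptr;
}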

Get class object pointer from inside a static method called directly

I have the following class, for example in a header:
class Uart
{
public:
    Uart (int ch, int bd = 9600, bool doinit = false);
    ......
    static void isr (void);
};
The idea is that this class represents the USART hardware, the same way as SPI, RTC, etc., and I set the address of the static member isr as the interrupt vector routine at runtime.
For example, like this:
extern "C"
{
void
Uart::isr (void)
{
if ( USART1->SR & USART_SR_RXNE) //receive
{
short c = USART2->DR;
USART1->DR = c;
USART1->SR &= ~USART_SR_RXNE;
;
}
else if ( USART1->SR & USART_SR_TC) //transfer
{
USART1->SR &= ~USART_SR_TC;
}
}
}
And I set it as an interrupt vector, for example:
_vectors_[USART1_IRQn + IRQ0_EX] = (word) &dbgout.isr;
So each time this "callback" routine is called by CPU I want to get access to it's "parent" object to save and/or manipulate the received data in userfriendly manner.
Is it possible at all? Maybe somehow organize the class or whatever.
The architecture is strictly 32bit (ARM, gcc)
Static methods know nothing about the object.
You need a different approach:
// Create interrupt handler method (non-static!)
void Uart::inthandler() {
    // whatever is needed here
}

// Create object
Uart* p = new Uart(...);

// Create interrupt handler function
void inthandler() {
    if (p != NULL) {
        p->inthandler();
    }
}

// Install the interrupt handler function
InstallIntHandler(IRQ, inthandler);
It's just a principle that has to be adapted to your specific environment.
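One way to adapt it while keeping everything inside the class is to store the object pointer in a static member, so the static isr can recover it. A minimal sketch, where handleIrq and instance_ are illustrative names:
class Uart
{
public:
    Uart (int ch, int bd = 9600, bool doinit = false)
    {
        instance_ = this;            // remember the "parent" object
    }
    static void isr (void)           // plain function address, installable in the vector table
    {
        if (instance_ != nullptr)
            instance_->handleIrq();  // forward to the member function
    }
private:
    void handleIrq ()
    {
        // full access to member state here
    }
    static Uart* instance_;          // one per peripheral in a real design
};
Uart* Uart::instance_ = nullptr;
With one static pointer per peripheral (or a small array indexed by channel), each vector slot still receives a plain function address while the handler works on object state.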

Any simple and fast callback mechanism?

I am implementing event-driven message-processing logic for a speed-sensitive application. I have various pieces of business logic wrapped into a number of Reactor classes:
class TwitterSentimentReactor {
    void on_new_post(PostEvent&);
    void on_new_comment(CommentEvent&);
};

class FacebookSentimentReactor {
    void on_new_post(PostEvent&);
    void on_new_comment(CommentEvent&);
};

class YoutubeSentimentReactor {
    void on_new_post(PostEvent&);
    void on_new_comment(CommentEvent&);
    void on_new_plus_one(PlusOneEvent&);
};
Let's say there are 8 such event types, and each Reactor responds to a subset of them.
The core program has 8 'entry points' for the messages, which are hooked up with some low-level socket-processing library, for instance:
void on_new_post(PostEvent& pe) {
    youtube_sentiment_reactor_instance->on_new_post(pe);
    twitter_sentiment_reactor_instance->on_new_post(pe);
    facebook_sentiment_reactor_instance->on_new_post(pe);
}
I am thinking about using std::function and std::bind to build a std::vector<std::function<>>, then looping through the vector to call each callback.
However, when I tried it, std::function proved not to be fast enough. Is there a fast yet simple solution here? As I mentioned earlier, this is VERY speed sensitive, so I want to avoid virtual functions and inheritance, to cut the v-table lookup.
Comments are welcome. Thanks.
I think that in your case it is easier to use an interface, as you know you are going to call simple member functions that match the expected parameters exactly:
struct IReactor {
    virtual void on_new_post(PostEvent&) = 0;
    virtual void on_new_comment(CommentEvent&) = 0;
    virtual void on_new_plus_one(PlusOneEvent&) = 0;
};
And then make each of your classes inherit from and implement this interface.
You can keep a simple std::vector<IReactor*> to manage the callbacks (see the wiring sketch below).
And remember that in C++, interfaces are just ordinary classes, so you can even write default implementations for some or all of the functions:
struct IReactor {
    virtual void on_new_post(PostEvent&) {}
    virtual void on_new_comment(CommentEvent&) {}
    virtual void on_new_plus_one(PlusOneEvent&) {}
};
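A possible wiring of that vector, assuming the reactor classes from the question now derive from IReactor (the instance names here are illustrative):
#include <vector>

// Hypothetical wiring; the instances are whatever the program already owns.
TwitterSentimentReactor twitter_reactor;
FacebookSentimentReactor facebook_reactor;
YoutubeSentimentReactor youtube_reactor;

std::vector<IReactor*> reactors{ &twitter_reactor, &facebook_reactor, &youtube_reactor };

// Each low-level entry point collapses into one loop: a single virtual
// call per reactor, with no std::function in the hot path.
void on_new_post(PostEvent& pe) {
    for (IReactor* r : reactors)
        r->on_new_post(pe);  // reactors that ignore posts inherit the empty default
}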
std::function's main performance issue is that whenever it needs to store some context (such as bound arguments, or the state of a lambda), memory is required, which often translates into a memory allocation. Also, the current library implementations may not be optimized to avoid this allocation.
That being said:
is it too slow? You will have to measure it for yourself, in your context
are there alternatives? Yes, plenty!
As an example, why don't you use a base class Reactor which has all the required callbacks defined (doing nothing by default), and then derive from it to implement the required behavior? You could then easily have a std::vector<std::unique_ptr<Reactor>> to iterate over!
Also, depending on whether the reactors need state (or not), you may gain a lot by avoiding allocating objects for them and just using functions instead.
It really, really, depends on the specific constraints of your projects.
If you need fast delegates and an event system, take a look at Offirmo:
It is as fast as the "Fastest Possible Delegates", but it has 2 major advantages:
1) It is a ready and well-tested library (no need to write your own from an article).
2) It does not rely on compiler hacks (it is fully compliant with the C++ standard).
https://github.com/Offirmo/impossibly-fast-delegates
If you need a managed signal/slot system, I have developed my own (C++11 only).
It is not as fast as Offirmo, but it is fast enough for any real scenario; most importantly, it is an order of magnitude faster than Qt or Boost signals and is simple to use.
A Signal is responsible for firing events.
Slots are responsible for holding callbacks.
Connect as many Slots as you wish to a Signal.
Don't worry about lifetime (everything auto-disconnects).
Performance considerations:
The overhead of a std::function is quite low (and improving with every compiler release). Actually, it is just a bit slower than a regular function call. My own signal/slot library, which uses std::function, is capable of 250 million callbacks/second (I measured the pure overhead) on a 2 GHz processor.
Since your code has to do with network stuff, you should keep in mind that your main bottleneck will be the sockets.
The second bottleneck is the latency of the instruction cache. It does not matter whether you use Offirmo (a few assembly instructions) or std::function: most of the time is spent fetching instructions from the L1 cache. The best optimization is to keep all the callback code compiled in the same translation unit (same .cpp file) and, if possible, in (mostly) the same order in which the callbacks are called. After you do that, you'll see only a very tiny improvement from using Offirmo (seriously, you CAN'T BE faster than Offirmo) over std::function.
Keep in mind that any function doing something really useful will be at least a few dozen instructions (especially when dealing with sockets: you'll have to wait for the completion of system calls and processor context switches), so the overhead of the callback system will be negligible.
I can't comment on the actual speed of the method you are using, other than to say:
Premature optimization does not usually give you what you expect.
You should measure the performance contribution before you start slicing and dicing. If you know beforehand that it won't work, you can search now for something better, or go "suboptimal" for now but encapsulate it so it can be replaced.
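As a sketch of that encapsulation idea (the PostHandler alias is an assumption, not from the question): funnel registration and dispatch through a single alias, so that swapping std::function for a faster delegate later is a one-line change.
#include <functional>
#include <vector>

struct PostEvent { /* fields from the question */ };

// Single point of change: replace this alias with a faster delegate type
// later without touching registration or dispatch sites.
using PostHandler = std::function<void(PostEvent&)>;

std::vector<PostHandler> post_handlers;

void on_new_post(PostEvent& pe) {
    for (auto& h : post_handlers)
        h(pe);  // the call site never names the concrete mechanism
}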
If you are looking for a general event system that does not use std::function (but does use virtual methods), you can try this one:
Notifier.h
/*
 The Notifier is a singleton implementation of the Subject/Observer design
 pattern. Any class/instance which wishes to participate as an observer
 of an event can derive from the Notified base class and register itself
 with the Notifier for enumerated events.

 Notifier derived classes implement variants of the Notify function:

   bool Notify(const NOTIFIED_EVENT_TYPE_T& event, variants ....)

 There are many variants possible. Register for the message
 and create the interface to receive the data you expect from
 it (for type safety).

 All the variants return true if they process the event, and false
 if they do not. Returning false will be considered an exception/
 assertion condition in debug builds.

 Classes derived from Notified do not need to deregister (though it may
 be a good idea to do so) as the base class destructor will attempt to
 remove itself from the Notifier system automatically.

 The event type is an enumeration and not a string as it is in many
 "generic" notification systems. In practical use, this is for a closed
 application where the messages will be known at compile time. This allows
 us to increase the speed of the delivery by NOT having a
 dictionary keyed lookup mechanism. Some loss of generality is implied
 by this.

 This class/system is NOT thread safe, but could be made so with some
 mutex wrappers. It is safe to call Attach/Detach as a consequence
 of calling Notify(...).
 */
/* This is the base class for anything that can receive notifications.
*/
typedef enum
{
    NE_MIN = 0,
    NE_SETTINGS_CHANGED,
    NE_UPDATE_COUNTDOWN,
    NE_UDPATE_MESSAGE,
    NE_RESTORE_FROM_BACKGROUND,
    NE_MAX,
} NOTIFIED_EVENT_TYPE_T;
class Notified
{
public:
    virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const uint32& value)
    { return false; };
    virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const bool& value)
    { return false; };
    virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const string& value)
    { return false; };
    virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const double& value)
    { return false; };
    virtual ~Notified();
};
class Notifier : public SingletonDynamic<Notifier>
{
private:
    typedef vector<NOTIFIED_EVENT_TYPE_T> NOTIFIED_EVENT_TYPE_VECTOR_T;
    typedef map<Notified*, NOTIFIED_EVENT_TYPE_VECTOR_T> NOTIFIED_MAP_T;
    typedef map<Notified*, NOTIFIED_EVENT_TYPE_VECTOR_T>::iterator NOTIFIED_MAP_ITER_T;
    typedef vector<Notified*> NOTIFIED_VECTOR_T;
    typedef vector<NOTIFIED_VECTOR_T> NOTIFIED_VECTOR_VECTOR_T;

    NOTIFIED_MAP_T _notifiedMap;
    NOTIFIED_VECTOR_VECTOR_T _notifiedVector;
    NOTIFIED_MAP_ITER_T _mapIter;

    // This vector keeps a temporary list of observers that have completely
    // detached since the current "Notify(...)" operation began. This is
    // to handle the problem where a Notified instance has called Detach(...)
    // because of a Notify(...) call. The removed instance could be a dead
    // pointer, so don't try to talk to it.
    vector<Notified*> _detached;
    int32 _notifyDepth;

    void RemoveEvent(NOTIFIED_EVENT_TYPE_VECTOR_T& orgEventTypes, NOTIFIED_EVENT_TYPE_T eventType);
    void RemoveNotified(NOTIFIED_VECTOR_T& orgNotified, Notified* observer);

public:
    virtual void Reset();
    virtual bool Init() { Reset(); return true; }
    virtual void Shutdown() { Reset(); }

    void Attach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType);
    // Detach for a specific event
    void Detach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType);
    // Detach for ALL events
    void Detach(Notified* observer);

    // This template function (defined in the header file) allows you to
    // add interfaces to Notified easily and call them as needed. Variants
    // will be generated at compile time by this template.
    template <typename T>
    bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const T& value)
    {
        if(eventType < NE_MIN || eventType >= NE_MAX)
        {
            throw std::out_of_range("eventType out of range");
        }
        // Keep a copy of the list. If it changes while iterating over it because of a
        // deletion, we may miss an object to update. Instead, we keep track of Detach(...)
        // calls during the Notify(...) cycle and ignore anything detached because it may
        // have been deleted.
        NOTIFIED_VECTOR_T notified = _notifiedVector[eventType];
        // If a call to Notify leads to a call to Notify, we need to keep track of
        // the depth so that we can clear the detached list when we get to the end
        // of the chain of Notify calls.
        _notifyDepth++;
        // Loop over all the observers for this event.
        // NOTE that the size of the notified vector may change if
        // a call to Notify(...) adds/removes observers. This should not be a
        // problem because the list is a simple vector.
        bool result = true;
        for(int idx = 0; idx < notified.size(); idx++)
        {
            Notified* observer = notified[idx];
            if(_detached.size() > 0)
            {   // Instead of doing the search for all cases, let's try to speed it up a little
                // by only doing the search if more than one observer dropped off during the call.
                // This may be overkill or unnecessary optimization.
                switch(_detached.size())
                {
                    case 0:
                        break;
                    case 1:
                        if(_detached[0] == observer)
                            continue;
                        break;
                    default:
                        if(std::find(_detached.begin(), _detached.end(), observer) != _detached.end())
                            continue;
                        break;
                }
            }
            result = result && observer->Notify(eventType, value);
            assert(result == true);
        }
        // Decrement this each time we exit.
        _notifyDepth--;
        if(_notifyDepth == 0 && _detached.size() > 0)
        {   // We reached the end of the Notify call chain. Remove the temporary list
            // of anything that detached while we were Notifying.
            _detached.clear();
        }
        assert(_notifyDepth >= 0);
        return result;
    }

    /* Used for CPPUnit. Could create a Mock...maybe...but this seems
     * like it will get the job done with minimal fuss. For now.
     */
    // Return all events that this object is registered for.
    vector<NOTIFIED_EVENT_TYPE_T> GetEvents(Notified* observer);
    // Return all objects registered for this event.
    vector<Notified*> GetNotified(NOTIFIED_EVENT_TYPE_T event);
};
Notifier.cpp
#include "Notifier.h"
void Notifier::Reset()
{
_notifiedMap.clear();
_notifiedVector.clear();
_notifiedVector.resize(NE_MAX);
_detached.clear();
_notifyDepth = 0;
}
void Notifier::Attach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType)
{
    if(observer == NULL)
    {
        throw std::out_of_range("observer == NULL");
    }
    if(eventType < NE_MIN || eventType >= NE_MAX)
    {
        throw std::out_of_range("eventType out of range");
    }
    _mapIter = _notifiedMap.find(observer);
    if(_mapIter == _notifiedMap.end())
    {   // Registering for the first time.
        NOTIFIED_EVENT_TYPE_VECTOR_T eventTypes;
        eventTypes.push_back(eventType);
        // Register it with this observer.
        _notifiedMap[observer] = eventTypes;
        // Register the observer for this type of event.
        _notifiedVector[eventType].push_back(observer);
    }
    else
    {
        NOTIFIED_EVENT_TYPE_VECTOR_T& events = _mapIter->second;
        bool found = false;
        for(int idx = 0; idx < events.size() && !found; idx++)
        {
            if(events[idx] == eventType)
            {
                found = true;
                break;
            }
        }
        if(!found)
        {
            events.push_back(eventType);
            _notifiedVector[eventType].push_back(observer);
        }
    }
}
void Notifier::RemoveEvent(NOTIFIED_EVENT_TYPE_VECTOR_T& eventTypes, NOTIFIED_EVENT_TYPE_T eventType)
{
    int foundAt = -1;
    for(int idx = 0; idx < eventTypes.size(); idx++)
    {
        if(eventTypes[idx] == eventType)
        {
            foundAt = idx;
            break;
        }
    }
    if(foundAt >= 0)
    {
        eventTypes.erase(eventTypes.begin() + foundAt);
    }
}

void Notifier::RemoveNotified(NOTIFIED_VECTOR_T& notified, Notified* observer)
{
    int foundAt = -1;
    for(int idx = 0; idx < notified.size(); idx++)
    {
        if(notified[idx] == observer)
        {
            foundAt = idx;
            break;
        }
    }
    if(foundAt >= 0)
    {
        notified.erase(notified.begin() + foundAt);
    }
}
void Notifier::Detach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType)
{
    if(observer == NULL)
    {
        throw std::out_of_range("observer == NULL");
    }
    if(eventType < NE_MIN || eventType >= NE_MAX)
    {
        throw std::out_of_range("eventType out of range");
    }
    _mapIter = _notifiedMap.find(observer);
    if(_mapIter != _notifiedMap.end())
    {   // Was registered
        // Remove it from the map.
        RemoveEvent(_mapIter->second, eventType);
        // Remove it from the vector
        RemoveNotified(_notifiedVector[eventType], observer);
        // If there are no events left, remove this observer completely.
        if(_mapIter->second.size() == 0)
        {
            _notifiedMap.erase(_mapIter);
            // If this observer was being removed during a chain of operations,
            // cache it temporarily so we know the pointer is "dead".
            _detached.push_back(observer);
        }
    }
}

void Notifier::Detach(Notified* observer)
{
    if(observer == NULL)
    {
        throw std::out_of_range("observer == NULL");
    }
    _mapIter = _notifiedMap.find(observer);
    if(_mapIter != _notifiedMap.end())
    {
        // These are all the event types this observer was registered for.
        NOTIFIED_EVENT_TYPE_VECTOR_T& eventTypes = _mapIter->second;
        for(int idx = 0; idx < eventTypes.size(); idx++)
        {
            NOTIFIED_EVENT_TYPE_T eventType = eventTypes[idx];
            // Remove this observer from the Notified list for this event type.
            RemoveNotified(_notifiedVector[eventType], observer);
        }
        _notifiedMap.erase(_mapIter);
    }
    // If this observer was being removed during a chain of operations,
    // cache it temporarily so we know the pointer is "dead".
    _detached.push_back(observer);
}
Notified::~Notified()
{
    Notifier::Instance().Detach(this);
}

// Return all events that this object is registered for.
vector<NOTIFIED_EVENT_TYPE_T> Notifier::GetEvents(Notified* observer)
{
    vector<NOTIFIED_EVENT_TYPE_T> result;
    _mapIter = _notifiedMap.find(observer);
    if(_mapIter != _notifiedMap.end())
    {
        // These are all the event types this observer was registered for.
        result = _mapIter->second;
    }
    return result;
}

// Return all objects registered for this event.
vector<Notified*> Notifier::GetNotified(NOTIFIED_EVENT_TYPE_T event)
{
    return _notifiedVector[event];
}
NOTES:
You must call Init() on the class before using it.
You don't have to use it as a singleton, or use the singleton template I used here. That is just to get a reference/init/shutdown mechanism in place.
This is from a larger code base. You can find some other examples on github here.
There was a topic on SO where virtually all the mechanisms available in C++ were enumerated, but I can't find it.
It had a list something like this:
function pointers
functors: objects with an overloaded operator(), typically wrapping a member function pointer together with a this pointer
Fast Delegates
Impossibly Fast Delegates
boost::signals
Qt signal-slots
Fast delegates and boost::function performance comparison article: link
Oh, by the way, premature optimization..., profile first then optimize, 80/20-rule, blah-blah, blah-blah, you know ;)
Happy coding!
Unless you can parameterize your handlers statically and get them inlined, std::function<...> is your best option. When the exact type needs to be erased, or you need to call a function specified at run time, you'll have an indirection and, hence, an actual function call without the ability to get things inlined. std::function<...> does exactly this, and you won't do better.
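To make the distinction concrete, here is a small sketch contrasting the two forms (dispatch_static and dispatch_erased are illustrative names): with a template parameter the compiler sees the concrete callable and can inline it, while behind std::function there is always one indirect call.
#include <functional>

struct PostEvent { /* fields from the question */ };

// Statically parameterized: the concrete handler type is visible here,
// so the call below is a candidate for inlining.
template <typename Handler>
void dispatch_static(Handler&& h, PostEvent& pe) {
    h(pe);
}

// Type-erased: whatever the handler is, this compiles to an indirect call.
void dispatch_erased(const std::function<void(PostEvent&)>& h, PostEvent& pe) {
    h(pe);
}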