I want my program to wait for something to read from a FIFO, but if the read (I use std::fstream) takes more than 5 seconds, I want it to exit.
Is this possible, or do I absolutely have to use alarm?
Thank you.
I do not believe there is a clean way to accomplish this with a portable, C++-only solution. Your best option is to use poll or select on *nix-based systems and WaitForSingleObject or WaitForMultipleObjects on Windows.
You can do this transparently by creating a proxy streambuffer class that forwards calls to a real streambuffer object. This will allow you to call the appropriate wait function before doing the actual read. It might look something like this...
class MyStreamBuffer : public std::basic_streambuf<char>
{
public:
    MyStreamBuffer(std::fstream& streamBuffer, int timeoutValue)
        : timeoutValue_(timeoutValue),
          streamBuffer_(streamBuffer)
    {
    }

protected:
    virtual std::streamsize xsgetn(char_type* s, std::streamsize count)
    {
        if (!wait(timeoutValue_))
        {
            return 0;
        }
        // Forward to the real buffer; sgetn() is the public interface to xsgetn()
        return streamBuffer_.rdbuf()->sgetn(s, count);
    }

private:
    bool wait(int timeout) const
    {
        // Not entirely complete, but you get the idea
        return (WAIT_OBJECT_0 == WaitForSingleObject(...));
    }

    const int timeoutValue_;
    std::fstream& streamBuffer_;
};
You would need to do this for every call you want to intercept. It might get a little tedious, but it would provide a transparent way to add timeouts even where client code does not explicitly support them.
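As a rough sketch of how the proxy could be wired up (assuming a completed MyStreamBuffer; the FIFO path and 5-second timeout are placeholders):

#include <fstream>
#include <istream>
#include <string>

int main()
{
    std::fstream fifo("myfifo", std::ios::in);  // hypothetical FIFO path
    MyStreamBuffer buf(fifo, 5000);             // wait at most 5 seconds
    std::istream in(&buf);                      // reads now go through the proxy

    std::string line;
    if (!std::getline(in, line))
    {
        // timed out or the stream ended
    }
}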
For those interested in how I resolved my problem, here is the function that reads from my stream. In the end I couldn't use std::fstream, so I replaced it with C system calls.
std::string
NamedPipe::readForSeconds(int seconds)
{
    fd_set readfs;
    struct timeval t = { seconds, 0 };

    FD_ZERO(&readfs);
    FD_SET(this->_stream, &readfs);
    // select() blocks until the descriptor is readable or the timeout expires
    if (select(this->_stream + 1, &readfs, NULL, NULL, &t) < 0)
        throw std::runtime_error("Invalid select");
    if (FD_ISSET(this->_stream, &readfs))
        return this->read();
    throw NamedPipe::timeoutException();
}
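Usage would look something like this (hypothetical; it assumes _stream holds the pipe's file descriptor and that read() drains whatever data is available):

try {
    NamedPipe pipe("/tmp/myfifo");  // hypothetical constructor
    std::string data = pipe.readForSeconds(5);
    // process data ...
} catch (const NamedPipe::timeoutException&) {
    // nothing arrived within 5 seconds
}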
I have a decision to make regarding the way I code something that runs on an embedded platform, and I am hoping there is a general "rule of thumb" that can be applied in this case. Coding both my ideas and then benchmarking would obviously be the best way to go, but getting meaningful, or rather accurate, results out of this platform would, in my particular case, be quite tricky. I'm also sure there may be others asking the same question on their respective platforms, so I decided to ask it here. Please be kind, as I'm not very familiar with the threading library, so constructive feedback would be useful.
I have many threads (well, about 10-20 at maximum) all wanting to write to this hardware device. So I decided on using a simple ring buffer consisting of 2 buffers (primary/secondary) of 8k each. This way each incoming thread could be dealt with in a timely fashion. An arriving thread obtains a mutex, writes into the primary buffer, and then releases the mutex ready for the next thread. When the primary buffer is full, new incoming threads switch to using the secondary buffer, and you start writing the primary buffer out to the hardware device.
So the question really is: how best to write to the hardware device? I'm thinking that there are two choices:
As soon as the buffer is full, create a new thread that does the write operation.
Signal a pre-created waiting worker-thread to do the write operation.
Both of the options come with their respective pros and cons. Option 1 is the simplest to code, and there are a number of ways to do it, but its effectiveness depends on how expensive it is to create/start the thread. The thread would be created, it would perform the write operation, and then it would die. Option 2, however, seems the more performant, but if you're going to have a reusable thread, you need a mutex and a couple of condition variables to control it: one to notify the thread that data is ready, and another to ask the thread to terminate when the program ends. Add to that a sprinkle of atomics for spurious wake-ups/missed notifications etc., and you've got quite an intricate solution to get right.
So what is the best method here? Are threads in general heavy to create/start or is this something that is completely platform dependent and benchmarking is the only way to know? Is there any benefit to using one method over the other that I've not thought about?
-- This is for the people not suffering from TL;DR syndrome --
I'm sure some of you have already wondered what happens if the secondary buffer becomes full before the write operation has finished? The answer in my case is fairly simple: this should never happen! Although the write operation is slow, it would never be slow enough such that the secondary buffer is filled before the write is complete. However, if someone is going to use this ring-buffer method, they must be prepared for this contingency. The way I thought about tackling this is to have a second mutex that is held during the write operation. This would mean that the thread that was due to write to the buffer would block until the write completed and the mutex was released.
Here's what I roughly ended up with after going with Option 2, but it seems awfully messy. I actually wanted to use promise/futures to avoid the spin-lock predicates on the condition variable, but couldn't think of a good way of moving a promise to an already created thread. Anyway... nice feedback is appreciated, bad-feedback, well, I'm not overly familiar with the threading library.
// Assumed includes for this code:
#include <algorithm>
#include <atomic>
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

class Bar
{
public:
    Bar(const size_t size) : buffer(new uint8_t[size]), buffer_size(size), used_size(0) {}
    size_t GetRemainingBufferSize(void) const { return buffer_size - used_size; }
    size_t GetUsedBufferSize(void) const { return used_size; }
    const uint8_t* GetBuffer(void) const { return buffer.get(); }
    size_t GetBufferSize() const { return buffer_size; }
    void ResetBuffer(void) { used_size = 0; }
    void WriteIntoBuffer(const std::vector<uint8_t>& data)
    {
        std::copy(data.begin(), data.end(), buffer.get() + used_size);
        used_size += data.size();
    }
private:
    std::unique_ptr<uint8_t[]> buffer;
    size_t buffer_size;
    size_t used_size;
};

class Foo
{
public:
    Foo(const size_t buffer_size = 8192)
        : bar_buffers{ buffer_size, buffer_size },
          primary_buffer(&bar_buffers[0]), secondary_buffer(&bar_buffers[1]),
          write_predicate(false), quit_predicate(false), write_buffer(primary_buffer)
    {
        foo_thread = std::thread(&Foo::WriteHWThread, this);
    }
    ~Foo()
    {
        quit_predicate = true;
        begin_write.notify_one();
        if (foo_thread.joinable())
            foo_thread.join();
    }
    Foo(const Foo&) = delete;
    Foo& operator=(const Foo&) = delete;
    void WriteData(const std::vector<uint8_t>& data)
    {
        // Hold the lock for the whole call: with an if-initializer the
        // lock_guard dies at the end of the if statement, which would leave
        // WriteIntoBuffer racing against other producer threads.
        std::lock_guard<std::mutex> foo_lk(foo_lock);
        if (primary_buffer->GetRemainingBufferSize() < data.size())
        {
            std::unique_lock<std::mutex> write_lk(write_lock);
            write_buffer = primary_buffer;
            write_lk.unlock();
            std::swap(primary_buffer, secondary_buffer);
            primary_buffer->ResetBuffer();
            write_predicate = true;
            begin_write.notify_one();
        }
        primary_buffer->WriteIntoBuffer(data);
    }
    void WriteHWThread(void)
    {
        do
        {
            std::unique_lock<std::mutex> write_lk(write_lock);
            begin_write.wait(write_lk, [&]() -> bool { return write_predicate.load() || quit_predicate.load(); });
            write_predicate = false;
            if (write_buffer.load()->GetUsedBufferSize())
                <<< WRITE TO DEDICATED HARDWARE >>>
            write_lk.unlock();
        } while (!quit_predicate);
    }
private:
    Bar bar_buffers[2];
    Bar* primary_buffer, *secondary_buffer;
    std::atomic<bool> write_predicate, quit_predicate;
    std::atomic<Bar*> write_buffer;
    std::mutex foo_lock, write_lock;
    std::thread foo_thread;
    std::condition_variable begin_write;
};
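A minimal usage sketch (the payload and loop count are hypothetical; the hardware write inside WriteHWThread remains a placeholder):

int main()
{
    Foo writer;                              // two 8 KiB buffers by default
    std::vector<uint8_t> sample(100, 0xAB);  // hypothetical payload
    for (int i = 0; i < 1000; ++i)
        writer.WriteData(sample);            // occasionally triggers a buffer swap
    return 0;
}   // ~Foo signals the worker thread and joins it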
Conceptually what I'm trying to do is very simple. I have a Readable stream in node, and I'm passing that to a native c++ addon where I want to connect that to an IInputStream.
The native library that I'm using works like many c++ (or Java) streaming interfaces that I've seen. The library provides an IInputStream interface (technically an abstract class), which I inherit from and override the virtual functions. Looks like this:
class JsReadable2InputStream : public IInputStream {
public:
    // Constructor takes a js v8 object, makes a stream out of it
    JsReadable2InputStream(const v8::Local<v8::Object>& streamObj);
    ~JsReadable2InputStream();

    /**
     * Blocking read. Blocks until the requested amount of data has been read. However,
     * if the stream reaches its end before the requested amount of bytes has been read,
     * it returns the number of bytes read thus far.
     *
     * @param begin     memory into which read data is copied
     * @param byteCount the requested number of bytes
     * @return the number of bytes actually read. Is less than byteCount iff
     *         end of stream has been reached.
     */
    virtual int read(char* begin, const int byteCount) override;
    virtual int available() const override;
    virtual bool isActive() const override;
    virtual void close() override;

private:
    Nan::Persistent<v8::Object> _stream;
    bool _active;
    JsEventLoopSync _evtLoop;
};
Of these functions, the important one here is read. The native library will call this function when it wants more data, and the function must block until it is able to return the requested data (or the stream ends). Here's my implementation of read:
// requires <cstring> (memcpy) and <thread>/<chrono> (sleep_for)
int JsReadable2InputStream::read(char* begin, const int byteCount) {
    if (!this->_active) { return 0; }
    int read = -1;
    while (read < 0 && this->_active) {
        this->_evtLoop.invoke(
            (voidLambda)[this,&read,begin,byteCount](){
                v8::Local<v8::Object> stream = Nan::New(this->_stream);
                const v8::Local<v8::Function> readFn = Nan::To<v8::Function>(Nan::Get(stream, JS_STR("read")).ToLocalChecked()).ToLocalChecked();
                v8::Local<v8::Value> argv[] = { Nan::New<v8::Number>(byteCount) };
                v8::Local<v8::Value> result = Nan::Call(readFn, stream, 1, argv).ToLocalChecked();
                if (result->IsNull()) {
                    // Somewhat hacky/brittle way to check if stream has ended, but it's the only option
                    v8::Local<v8::Object> readableState = Nan::To<v8::Object>(Nan::Get(stream, JS_STR("_readableState")).ToLocalChecked()).ToLocalChecked();
                    if (Nan::To<bool>(Nan::Get(readableState, JS_STR("ended")).ToLocalChecked()).ToChecked()) {
                        // End of stream, all data has been read
                        this->_active = false;
                        read = 0;
                        return;
                    }
                    // Not enough data available, but stream is still open.
                    // Set a flag for the c++ thread to go to sleep.
                    // This is the case that it gets stuck in.
                    read = -1;
                    return;
                }
                v8::Local<v8::Object> bufferObj = Nan::To<v8::Object>(result).ToLocalChecked();
                int len = Nan::To<int32_t>(Nan::Get(bufferObj, JS_STR("length")).ToLocalChecked()).ToChecked();
                char* buffer = node::Buffer::Data(bufferObj);
                if (len < byteCount) {
                    this->_active = false;
                }
                // copy the data out of the buffer
                if (len > 0) {
                    std::memcpy(begin, buffer, len);
                }
                read = len;
            }
        );
        if (read < 0) {
            // Give js a chance to read more data
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }
    return read;
}
The idea is, the c++ code keeps a reference to the node stream object. When the native code wants to read, it has to synchronize with the node event loop, then attempt to invoke read on the node stream. If the node stream returns null, this indicates that the data isn't ready, so the native thread sleeps, giving the node event loop thread a chance to run and fill its buffers.
This solution works perfectly for a single stream, or even 2 or 3 streams running in parallel. Then for some reason when I hit the magical number of 4+ parallel streams, this totally deadlocks. None of the streams can successfully read any bytes at all. The above while loop runs infinitely, with the call into the node stream returning null every time.
It is behaving as though node is getting starved, and the streams never get a chance to populate with data. However, I've tried adjusting the sleep duration (to much larger values, and randomized values) and that had no effect. It is also clear that the event loop continues to run, since my lambda function continues to get executed there (I put some printfs inside to confirm this).
Just in case it might be relevant (I don't think it is), I'm also including my implementation of JsEventLoopSync. This uses libuv to schedule a lambda to be executed on the node event loop. It is designed such that only one can be scheduled at a time, and other invocations must wait until the first completes.
#include <nan.h>
#include <functional>
#include <stdexcept>

// simplified type declarations for the lambda functions
using voidLambda = std::function<void ()>;

// Synchronize with the node v8 event loop. Invokes a lambda function on the event loop, where access to js objects is safe.
// Blocks execution of the invoking thread until execution of the lambda completes.
class JsEventLoopSync {
public:
    JsEventLoopSync() : _destroyed(false) {
        // register on the default (same as node) event loop, so that we can execute callbacks in that context
        // This takes a function pointer, which only works with a static function
        this->_handles = new async_handles_t();
        this->_handles->inst = this;
        uv_async_init(uv_default_loop(), &this->_handles->async, JsEventLoopSync::_processUvCb);
        // mechanism for passing this instance through to the native uv callback
        this->_handles->async.data = this->_handles;
        // mutex has to be initialized
        uv_mutex_init(&this->_handles->mutex);
        uv_cond_init(&this->_handles->cond);
    }

    ~JsEventLoopSync() {
        uv_mutex_lock(&this->_handles->mutex);
        // prevent access to deleted instance by callback
        this->_destroyed = true;
        uv_mutex_unlock(&this->_handles->mutex);
        // NOTE: Important, this->_handles must be a dynamically allocated pointer because uv_close() is
        // async, and still has a reference to it. If it were statically allocated as a class member, this
        // destructor would free the memory before uv_close was done with it (leading to asserts in libuv)
        uv_close(reinterpret_cast<uv_handle_t*>(&this->_handles->async), JsEventLoopSync::_asyncClose);
    }

    // called from the native code to invoke the function
    void invoke(const voidLambda& fn) {
        if (v8::Isolate::GetCurrent() != NULL) {
            // Already on the event loop, process now
            return fn();
        }
        // Need to sync with the event loop
        uv_mutex_lock(&this->_handles->mutex);
        if (this->_destroyed) {
            uv_mutex_unlock(&this->_handles->mutex);  // don't leave the mutex held
            return;
        }
        this->_fn = fn;
        // this will invoke processUvCb, on the node event loop
        uv_async_send(&this->_handles->async);
        // wait for it to complete processing
        uv_cond_wait(&this->_handles->cond, &this->_handles->mutex);
        uv_mutex_unlock(&this->_handles->mutex);
    }

private:
    // pulls data out of uv's void* to call the instance method
    static void _processUvCb(uv_async_t* handle) {
        if (handle->data == NULL) { return; }
        auto handles = static_cast<async_handles_t*>(handle->data);
        handles->inst->_process();
    }

    inline static void _asyncClose(uv_handle_t* handle) {
        auto handles = static_cast<async_handles_t*>(handle->data);
        handle->data = NULL;
        uv_mutex_destroy(&handles->mutex);
        uv_cond_destroy(&handles->cond);
        delete handles;
    }

    // Creates the js arguments (populated by invoking the lambda), then invokes the js function
    // Invokes resultLambda on the result
    // Must be run on the node event loop!
    void _process() {
        if (v8::Isolate::GetCurrent() == NULL) {
            // This is unexpected!
            throw std::logic_error("Unable to sync with node event loop for callback!");
        }
        uv_mutex_lock(&this->_handles->mutex);
        if (this->_destroyed) {
            uv_mutex_unlock(&this->_handles->mutex);  // don't leave the mutex held
            return;
        }
        Nan::HandleScope scope; // looks unused, but this is very important
        // invoke the lambda
        this->_fn();
        // signal that we're done
        uv_cond_signal(&this->_handles->cond);
        uv_mutex_unlock(&this->_handles->mutex);
    }

    typedef struct async_handles {
        uv_mutex_t mutex;
        uv_cond_t cond;
        uv_async_t async;
        JsEventLoopSync* inst;
    } async_handles_t;

    async_handles_t* _handles;
    voidLambda _fn;
    bool _destroyed;
};
So, what am I missing? Is there a better way to wait for the node thread to get a chance to run? Is there a totally different design pattern that would work better? Does node have some upper limit on the number of streams that it can process at once?
As it turns out, the problems that I was seeing were actually client-side limitations. Browsers (and seemingly also node) have a limit on the number of open TCP connections to the same origin. I worked around this by spawning multiple node processes to do my testing.
If anyone is trying to do something similar, the code I shared is totally viable. If I ever have some free time, I might make it into a library.
I am implementing event-driven message-processing logic for a speed-sensitive application. I have various pieces of business logic wrapped into a lot of Reactor classes:
class TwitterSentimentReactor {
    void on_new_post(PostEvent&);
    void on_new_comment(CommentEvent&);
};

class FacebookSentimentReactor {
    void on_new_post(PostEvent&);
    void on_new_comment(CommentEvent&);
};

class YoutubeSentimentReactor {
    void on_new_post(PostEvent&);
    void on_new_comment(CommentEvent&);
    void on_new_plus_one(PlusOneEvent&);
};
Let's say there are 8 such event types, and each Reactor responds to a subset of them.
The core program has 8 'entry points' for the messages, which are hooked up to some low-level socket-processing library. For instance:
void on_new_post(PostEvent& pe){
    youtube_sentiment_reactor_instance->on_new_post(pe);
    twitter_sentiment_reactor_instance->on_new_post(pe);
    facebook_sentiment_reactor_instance->on_new_post(pe);
}
I am thinking about using std::function and std::bind to build a std::vector<std::function<...>>, then looping through the vector to call each callback function, as in the sketch below.
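A sketch of that approach (the reactor instances and registration code are hypothetical, taken from the snippets above):

#include <functional>
#include <vector>

std::vector<std::function<void(PostEvent&)>> post_callbacks;

void register_reactors() {
    // lambdas (or std::bind expressions) wrap each reactor's handler
    post_callbacks.push_back(
        [](PostEvent& pe) { twitter_sentiment_reactor_instance->on_new_post(pe); });
    post_callbacks.push_back(
        [](PostEvent& pe) { youtube_sentiment_reactor_instance->on_new_post(pe); });
}

void on_new_post(PostEvent& pe) {
    for (auto& cb : post_callbacks)
        cb(pe);  // each call goes through std::function's indirection
}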
However, when I tried it, std::function proved not to be fast enough. Is there a fast yet simple solution here? As I mentioned earlier, this is VERY speed sensitive, so I want to avoid virtual functions and inheritance, to cut the v-table lookup.
Comments are welcome. Thanks.
I think that in your case it is easier to use an interface, as you know you are going to call simple member functions that exactly match the expected parameters:
struct IReactor {
    virtual void on_new_post(PostEvent&) = 0;
    virtual void on_new_comment(CommentEvent&) = 0;
    virtual void on_new_plus_one(PlusOneEvent&) = 0;
};
And then make each of your classes inherit and implement this interface.
You can have a simple std::vector<IReactor*> to manage the callbacks.
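The dispatch then becomes a plain loop of virtual calls, something like this sketch (registration at startup is assumed):

#include <vector>

std::vector<IReactor*> reactors;  // filled once, at startup

void on_new_post(PostEvent& pe) {
    for (IReactor* r : reactors)
        r->on_new_post(pe);  // one virtual call per reactor
}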
And remember that in C++, interfaces are just ordinary classes, so you can even write default implementations for some or all of the functions:
struct IReactor {
    virtual void on_new_post(PostEvent&) {}
    virtual void on_new_comment(CommentEvent&) {}
    virtual void on_new_plus_one(PlusOneEvent&) {}
};
std::function's main performance issue is that whenever you need to store some context (such as bound arguments, or the state of a lambda), memory is required, which often translates into a memory allocation. Also, current library implementations may not have been optimized to avoid this allocation.
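To illustrate where the allocation comes from (a sketch; the exact small-object buffer size varies by implementation):

#include <cstdio>
#include <functional>

int main() {
    // A captureless lambda converts to a plain function pointer:
    // no context to store, no allocation.
    void (*fp)(int) = [](int x) { std::printf("%d\n", x); };
    fp(1);

    // A lambda with a large capture must store that state inside the
    // std::function; once it exceeds the implementation's small-object
    // buffer, construction typically heap-allocates.
    int big[64] = {0};
    std::function<void(int)> f = [big](int x) { std::printf("%d\n", x + big[0]); };
    f(2);
}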
That being said:
Is it too slow? You will have to measure it for yourself, in your context.
Are there alternatives? Yes, plenty!
As an example, why don't you use a base class Reactor which has all the required callbacks defined (doing nothing by default), and then derive from it to implement the required behavior? You could then easily have a std::vector<std::unique_ptr<Reactor>> to iterate over!
Also, depending on whether the reactors need state (or not), you may gain a lot by avoiding allocating objects for them and using plain functions instead.
It really, really, depends on the specific constraints of your projects.
If you need fast delegates and an event system, take a look at Offirmo:
It is as fast as the "Fastest possible delegates", but it has 2 major advantages:
1) it is a ready and well-tested library (no need to write your own from an article)
2) it does not rely on compiler hacks (it is fully compliant with the C++ standard)
https://github.com/Offirmo/impossibly-fast-delegates
If you need a managed signal/slot system, I have developed my own (C++11 only).
It is not as fast as Offirmo, but it is fast enough for any real scenario; most importantly, it is an order of magnitude faster than Qt or Boost signals, and it is simple to use.
A Signal is responsible for firing events.
Slots are responsible for holding callbacks.
Connect as many Slots as you wish to a Signal.
Don't worry about lifetime (everything auto-disconnects).
Performance considerations:
The overhead of a std::function is quite low (and improving with every compiler release). It is actually just a bit slower than a regular function call. My own signal/slot library, which uses std::function, is capable of 250 million callbacks/second (I measured the pure overhead) on a 2 GHz processor.
Since your code has to do with network I/O, keep in mind that your main bottleneck will be the sockets.
The second bottleneck is instruction-cache latency. It does not matter whether you use Offirmo (a few assembly instructions) or std::function: most of the time is spent fetching instructions from the L1 cache. The best optimization is to keep all callback code compiled in the same translation unit (same .cpp file), and ideally in the same order in which the callbacks are called (or mostly the same order). After you do that, you'll see only a very tiny improvement using Offirmo (seriously, you CAN'T be faster than Offirmo) over std::function.
Keep in mind that any function doing something really useful will be at least a few dozen instructions (especially when dealing with sockets: you'll have to wait for system calls to complete and for processor context switches), so the overhead of the callback system will be negligible.
I can't comment on the actual speed of the method that you are using, other than to say:
Premature optimization does not usually give you what you expect.
You should measure the performance contribution before you start slicing and dicing. If you know beforehand that it won't work, then you can search now for something better, or go "suboptimal" for now but encapsulate it so it can be replaced.
If you are looking for a general event system that does not use std::function (but does use virtual methods), you can try this one:
Notifier.h
/*
    The Notifier is a singleton implementation of the Subject/Observer design
    pattern. Any class/instance which wishes to participate as an observer
    of an event can derive from the Notified base class and register itself
    with the Notifier for enumerated events.

    Notified-derived classes implement variants of the Notify function:

    bool Notify(const NOTIFIED_EVENT_TYPE_T& event, variants ....)

    There are many variants possible. Register for the message
    and create the interface to receive the data you expect from
    it (for type safety).

    All the variants return true if they process the event, and false
    if they do not. Returning false will be considered an exception/
    assertion condition in debug builds.

    Classes derived from Notified do not need to deregister (though it may
    be a good idea to do so) as the base class destructor will attempt to
    remove itself from the Notifier system automatically.

    The event type is an enumeration and not a string as it is in many
    "generic" notification systems. In practical use, this is for a closed
    application where the messages will be known at compile time. This allows
    us to increase the speed of the delivery by NOT having a
    dictionary keyed lookup mechanism. Some loss of generality is implied
    by this.

    This class/system is NOT thread safe, but could be made so with some
    mutex wrappers. It is safe to call Attach/Detach as a consequence
    of calling Notify(...).
*/
/* This is the base class for anything that can receive notifications.
 */
typedef enum
{
    NE_MIN = 0,
    NE_SETTINGS_CHANGED,
    NE_UPDATE_COUNTDOWN,
    NE_UPDATE_MESSAGE,
    NE_RESTORE_FROM_BACKGROUND,
    NE_MAX,
} NOTIFIED_EVENT_TYPE_T;
class Notified
{
public:
    virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const uint32& value)
    { return false; }
    virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const bool& value)
    { return false; }
    virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const string& value)
    { return false; }
    virtual bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const double& value)
    { return false; }

    virtual ~Notified();
};
class Notifier : public SingletonDynamic<Notifier>
{
private:
    typedef vector<NOTIFIED_EVENT_TYPE_T> NOTIFIED_EVENT_TYPE_VECTOR_T;
    typedef map<Notified*, NOTIFIED_EVENT_TYPE_VECTOR_T> NOTIFIED_MAP_T;
    typedef map<Notified*, NOTIFIED_EVENT_TYPE_VECTOR_T>::iterator NOTIFIED_MAP_ITER_T;
    typedef vector<Notified*> NOTIFIED_VECTOR_T;
    typedef vector<NOTIFIED_VECTOR_T> NOTIFIED_VECTOR_VECTOR_T;

    NOTIFIED_MAP_T _notifiedMap;
    NOTIFIED_VECTOR_VECTOR_T _notifiedVector;
    NOTIFIED_MAP_ITER_T _mapIter;

    // This vector keeps a temporary list of observers that have completely
    // detached since the current "Notify(...)" operation began. This is
    // to handle the problem where a Notified instance has called Detach(...)
    // because of a Notify(...) call. The removed instance could be a dead
    // pointer, so don't try to talk to it.
    vector<Notified*> _detached;
    int32 _notifyDepth;

    void RemoveEvent(NOTIFIED_EVENT_TYPE_VECTOR_T& orgEventTypes, NOTIFIED_EVENT_TYPE_T eventType);
    void RemoveNotified(NOTIFIED_VECTOR_T& orgNotified, Notified* observer);

public:
    virtual void Reset();
    virtual bool Init() { Reset(); return true; }
    virtual void Shutdown() { Reset(); }

    void Attach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType);
    // Detach for a specific event
    void Detach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType);
    // Detach for ALL events
    void Detach(Notified* observer);

    // This template function (defined in the header file) allows you to
    // add interfaces to Notified easily and call them as needed. Variants
    // will be generated at compile time by this template.
    template <typename T>
    bool Notify(NOTIFIED_EVENT_TYPE_T eventType, const T& value)
    {
        if(eventType < NE_MIN || eventType >= NE_MAX)
        {
            throw std::out_of_range("eventType out of range");
        }

        // Keep a copy of the list. If it changes while iterating over it because of a
        // deletion, we may miss an object to update. Instead, we keep track of Detach(...)
        // calls during the Notify(...) cycle and ignore anything detached because it may
        // have been deleted.
        NOTIFIED_VECTOR_T notified = _notifiedVector[eventType];

        // If a call to Notify leads to a call to Notify, we need to keep track of
        // the depth so that we can clear the detached list when we get to the end
        // of the chain of Notify calls.
        _notifyDepth++;

        // Loop over all the observers for this event.
        // NOTE that the size of the notified vector may change if
        // a call to Notify(...) adds/removes observers. This should not be a
        // problem because the list is a simple vector.
        bool result = true;
        for(int idx = 0; idx < notified.size(); idx++)
        {
            Notified* observer = notified[idx];
            if(_detached.size() > 0)
            {   // Instead of doing the search for all cases, let's try to speed it up a little
                // by only doing the search if more than one observer dropped off during the call.
                // This may be overkill or unnecessary optimization.
                switch(_detached.size())
                {
                    case 0:
                        break;
                    case 1:
                        if(_detached[0] == observer)
                            continue;
                        break;
                    default:
                        if(std::find(_detached.begin(), _detached.end(), observer) != _detached.end())
                            continue;
                        break;
                }
            }
            result = result && observer->Notify(eventType, value);
            assert(result == true);
        }

        // Decrement this each time we exit.
        _notifyDepth--;
        if(_notifyDepth == 0 && _detached.size() > 0)
        {   // We reached the end of the Notify call chain. Remove the temporary list
            // of anything that detached while we were Notifying.
            _detached.clear();
        }
        assert(_notifyDepth >= 0);
        return result;
    }

    /* Used for CPPUnit. Could create a Mock...maybe...but this seems
     * like it will get the job done with minimal fuss. For now.
     */
    // Return all events that this object is registered for.
    vector<NOTIFIED_EVENT_TYPE_T> GetEvents(Notified* observer);
    // Return all objects registered for this event.
    vector<Notified*> GetNotified(NOTIFIED_EVENT_TYPE_T event);
};
Notifier.cpp
#include "Notifier.h"
void Notifier::Reset()
{
_notifiedMap.clear();
_notifiedVector.clear();
_notifiedVector.resize(NE_MAX);
_detached.clear();
_notifyDepth = 0;
}
void Notifier::Attach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType)
{
if(observer == NULL)
{
throw std::out_of_range("observer == NULL");
}
if(eventType < NE_MIN || eventType >= NE_MAX)
{
throw std::out_of_range("eventType out of range");
}
_mapIter = _notifiedMap.find(observer);
if(_mapIter == _notifiedMap.end())
{ // Registering for the first time.
NOTIFIED_EVENT_TYPE_VECTOR_T eventTypes;
eventTypes.push_back(eventType);
// Register it with this observer.
_notifiedMap[observer] = eventTypes;
// Register the observer for this type of event.
_notifiedVector[eventType].push_back(observer);
}
else
{
NOTIFIED_EVENT_TYPE_VECTOR_T& events = _mapIter->second;
bool found = false;
for(int idx = 0; idx < events.size() && !found; idx++)
{
if(events[idx] == eventType)
{
found = true;
break;
}
}
if(!found)
{
events.push_back(eventType);
_notifiedVector[eventType].push_back(observer);
}
}
}
void Notifier::RemoveEvent(NOTIFIED_EVENT_TYPE_VECTOR_T& eventTypes, NOTIFIED_EVENT_TYPE_T eventType)
{
int foundAt = -1;
for(int idx = 0; idx < eventTypes.size(); idx++)
{
if(eventTypes[idx] == eventType)
{
foundAt = idx;
break;
}
}
if(foundAt >= 0)
{
eventTypes.erase(eventTypes.begin()+foundAt);
}
}
void Notifier::RemoveNotified(NOTIFIED_VECTOR_T& notified, Notified* observer)
{
int foundAt = -1;
for(int idx = 0; idx < notified.size(); idx++)
{
if(notified[idx] == observer)
{
foundAt = idx;
break;
}
}
if(foundAt >= 0)
{
notified.erase(notified.begin()+foundAt);
}
}
void Notifier::Detach(Notified* observer, NOTIFIED_EVENT_TYPE_T eventType)
{
if(observer == NULL)
{
throw std::out_of_range("observer == NULL");
}
if(eventType < NE_MIN || eventType >= NE_MAX)
{
throw std::out_of_range("eventType out of range");
}
_mapIter = _notifiedMap.find(observer);
if(_mapIter != _notifiedMap.end())
{ // Was registered
// Remove it from the map.
RemoveEvent(_mapIter->second, eventType);
// Remove it from the vector
RemoveNotified(_notifiedVector[eventType], observer);
// If there are no events left, remove this observer completely.
if(_mapIter->second.size() == 0)
{
_notifiedMap.erase(_mapIter);
// If this observer was being removed during a chain of operations,
// cache them temporarily so we know the pointer is "dead".
_detached.push_back(observer);
}
}
}
void Notifier::Detach(Notified* observer)
{
if(observer == NULL)
{
throw std::out_of_range("observer == NULL");
}
_mapIter = _notifiedMap.find(observer);
if(_mapIter != _notifiedMap.end())
{
// These are all the event types this observer was registered for.
NOTIFIED_EVENT_TYPE_VECTOR_T& eventTypes = _mapIter->second;
for(int idx = 0; idx < eventTypes.size();idx++)
{
NOTIFIED_EVENT_TYPE_T eventType = eventTypes[idx];
// Remove this observer from the Notified list for this event type.
RemoveNotified(_notifiedVector[eventType], observer);
}
_notifiedMap.erase(_mapIter);
}
// If this observer was being removed during a chain of operations,
// cache them temporarily so we know the pointer is "dead".
_detached.push_back(observer);
}
Notified::~Notified()
{
Notifier::Instance().Detach(this);
}
// Return all events that this object is registered for.
vector<NOTIFIED_EVENT_TYPE_T> Notifier::GetEvents(Notified* observer)
{
vector<NOTIFIED_EVENT_TYPE_T> result;
_mapIter = _notifiedMap.find(observer);
if(_mapIter != _notifiedMap.end())
{
// These are all the event types this observer was registered for.
result = _mapIter->second;
}
return result;
}
// Return all objects registered for this event.
vector<Notified*> Notifier::GetNotified(NOTIFIED_EVENT_TYPE_T event)
{
return _notifiedVector[event];
}
NOTES:
You must call Init() on the class before using it.
You don't have to use it as a singleton, or use the singleton template I used here. That is just to get a reference/init/shutdown mechanism in place.
This is from a larger code base. You can find some other examples on github here.
There was a topic on SO where virtually all the mechanisms available in C++ were enumerated, but I can't find it.
It had a list something like this:
function pointers
functors: objects with an overloaded operator(), e.g. a member function pointer wrapped together with a this pointer
Fast Delegates
Impossibly Fast Delegates
boost::signals
Qt signal-slots
Fast delegates and boost::function performance comparison article: link
Oh, by the way, premature optimization..., profile first then optimize, 80/20-rule, blah-blah, blah-blah, you know ;)
Happy coding!
Unless you can parameterize your handlers statically and get them inlined, std::function<...> is your best option. When the exact type needs to be erased, or you need to call a run-time-specified function, you'll have an indirection and, hence, an actual function call without the ability to get things inlined. std::function<...> does exactly this, and you won't do better.
Having multiple processes all writing to the same output stream (e.g. with std::cout), is there a way to lock the stream so that, when a process starts writing its own message, it can do so until the end (e.g. with std::endl)?
I need a portable way of doing it.
It's not clear if it would fit the parameters of your situation, but you could potentially funnel all data to a separate worker process that aggregates the data (with its own internal locking) before dumping them to stdout.
You are out of luck. You will have to use whatever your target OS provides. This means using global/system-wide mutexes or lockf()-like functions. You could use a 3rd-party library to satisfy the portability requirement, like Boost.Interprocess.
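With Boost.Interprocess, for example, a mutex shared across processes might look like this (a sketch; the mutex name is arbitrary and error handling is omitted):

#include <boost/interprocess/sync/named_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <iostream>

int main()
{
    using namespace boost::interprocess;
    // Every cooperating process opens the same system-wide mutex by name.
    named_mutex mutex(open_or_create, "my_stdout_mutex");
    {
        scoped_lock<named_mutex> lock(mutex);
        std::cout << "one complete message" << std::endl;
    }   // lock released here; other processes may write now
}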
If you are on a UNIX like OS, then you may be able to mimic the behavior you want with a stringstream adapter. This may not be the best way to accomplish it, but the idea is to trigger a single write call whenever std::endl is encountered.
// Assume fd is in blocking mode
#include <sstream>
#include <string>
#include <unistd.h>  // for ::write

class fdostream : public std::ostringstream {
    typedef std::ostream & (*manip_t) (std::ostream &);
    struct fdbuf : public std::stringbuf {
        int fd_;
        fdbuf (int fd) : fd_(fd) {}
        // On flush (e.g. std::endl), emit the accumulated text with one write call
        int sync () {
            int r = ::write(fd_, str().data(), str().size());
            str(std::string());
            return (r > 0) ? 0 : -1;
        }
    } buf_;
    std::ostream & os () { return *this; }
public:
    fdostream (int fd) : buf_(fd) { os().rdbuf(&buf_); }
};

fdostream my_cout(1);  // fd 1 == stdout

my_cout << "Hello," << " world!" << std::endl;
This should achieve the effect of synchronized writes, at the cost of buffering output into a stringstream and then clearing the internal string after each flush.
For greater portability, you could modify the code to use fwrite, and specify unbuffered writes with setvbuf. But the atomicity of fwrite would depend on the C library's implementation of that function.
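A variant of the same idea using stdio (a sketch; whether concurrent fwrite calls interleave depends on the C library):

#include <cstdio>
#include <sstream>
#include <string>

// Build the whole message first, then emit it with a single fwrite call.
void write_message(std::FILE* out, const std::ostringstream& msg)
{
    const std::string s = msg.str();
    std::fwrite(s.data(), 1, s.size(), out);
    std::fflush(out);
}

int main()
{
    std::setvbuf(stdout, nullptr, _IONBF, 0);  // unbuffered, as suggested above
    std::ostringstream msg;
    msg << "Hello," << " world!" << '\n';
    write_message(stdout, msg);
}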
My critical section code does not work!
Backgrounder::Run is able to modify the MESSAGE_QUEUE g_msgQueue even though LockSection's destructor hasn't been called yet!
Extra code:
typedef std::vector<int> MESSAGE_LIST; // SHARED OBJECT .. MUST LOCK!

class MESSAGE_QUEUE : MESSAGE_LIST {
public:
    MESSAGE_LIST* m_pList;

    MESSAGE_QUEUE(MESSAGE_LIST* pList){ m_pList = pList; }
    ~MESSAGE_QUEUE(){ }

    /* This class will be shared between threads that means any
     * attempt to access it MUST be inside a critical section.
     */
    void Add( int messageCode ){ if(m_pList) m_pList->push_back(messageCode); }

    int getLast()
    {
        if(m_pList){
            if(m_pList->size() == 1){
                Add(0x0);
            }
            m_pList->pop_back();
            return m_pList->back();
        }
    }

    void removeLast()
    {
        if(m_pList){
            m_pList->erase(m_pList->end()-1, m_pList->end());
        }
    }
};

class Backgrounder {
public:
    MESSAGE_QUEUE* m_pMsgQueue;

    static void __cdecl Run( void* args ){
        MESSAGE_QUEUE* s_pMsgQueue = (MESSAGE_QUEUE*)args;
        if(s_pMsgQueue->getLast() == 0x45) printf("It's a success!");
        else printf("It's a trap!");
    }

    Backgrounder(MESSAGE_QUEUE* pMsgQueue)
    {
        m_pMsgQueue = pMsgQueue;
        _beginthread(Run, 0, (void*)m_pMsgQueue);
    }
    ~Backgrounder(){ }
};

int main(){
    MESSAGE_LIST g_List;
    CriticalSection crt;
    ErrorHandler err;
    LockSection lc(&crt, &err); // Does not work, see question #2
    MESSAGE_QUEUE g_msgQueue(&g_List);
    g_msgQueue.Add(0x45);
    printf("%d", g_msgQueue.getLast());
    Backgrounder back_thread(&g_msgQueue);
    while(!kbhit());
    return 0;
}
#ifndef CRITICALSECTION_H
#define CRITICALSECTION_H
#include <windows.h>
#include "ErrorHandler.h"

class CriticalSection {
    long m_nLockCount;
    long m_nThreadId;
    typedef CRITICAL_SECTION cs;
    cs m_tCS;
public:
    CriticalSection(){
        ::InitializeCriticalSection(&m_tCS);
        m_nLockCount = 0;
        m_nThreadId = 0;
    }
    ~CriticalSection(){ ::DeleteCriticalSection(&m_tCS); }
    void Enter(){ ::EnterCriticalSection(&m_tCS); }
    void Leave(){ ::LeaveCriticalSection(&m_tCS); }
    void Try();
};

class LockSection {
    CriticalSection* m_pCS;
    ErrorHandler* m_pErrorHandler;
    bool m_bIsClosed;
public:
    LockSection(CriticalSection* pCS, ErrorHandler* pErrorHandler){
        m_bIsClosed = false;
        m_pCS = pCS;
        m_pErrorHandler = pErrorHandler;
        // 0x1AE is code prefix for critical section header
        if(!m_pCS) m_pErrorHandler->Add(0x1AE1);
        if(m_pCS) m_pCS->Enter();
    }
    ~LockSection(){
        if(!m_pCS) m_pErrorHandler->Add(0x1AE2);
        if(m_pCS && m_bIsClosed == false) m_pCS->Leave();
    }
    void ForceCSectionClose(){
        if(!m_pCS) m_pErrorHandler->Add(0x1AE3);
        if(m_pCS){ m_pCS->Leave(); m_bIsClosed = true; }
    }
};

/*
Safe class basic structure;

class SafeObj
{
    CriticalSection m_cs;
public:
    void SafeMethod()
    {
        LockSection myLock(&m_cs);
        //add code to implement the method ...
    }
};
*/
#endif
Two questions in one. I don't know about the first, but the critical section part is easy to explain. The background thread isn't trying to claim the lock and so, of course, is not blocked. You need to make the critical section object crt visible to the thread so that it can lock it.
The way to use this lock class is that each section of code that you want serialised must create a LockSection object and hold on to it until the end of the serialised block:
Thread 1:

{
    LockSection lc(&crt, &err);
    //operate on shared object from thread 1
}

Thread 2:

{
    LockSection lc(&crt, &err);
    //operate on shared object from thread 2
}
Note that it has to be the same critical section instance crt that is used in each block of code that is to be serialised.
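Concretely, one way to give the background thread access to the same lock is to pass it alongside the queue, for example (a sketch based on the question's classes; ThreadArgs is a hypothetical helper):

struct ThreadArgs {
    MESSAGE_QUEUE* queue;
    CriticalSection* cs;
    ErrorHandler* err;
};

static void __cdecl Run(void* args){
    ThreadArgs* a = (ThreadArgs*)args;
    LockSection lock(a->cs, a->err);  // now the thread competes for the same lock
    if(a->queue->getLast() == 0x45) printf("It's a success!");
    else printf("It's a trap!");
}

Note that main would likewise need to scope its own LockSection to just the statements that touch the queue, rather than holding it for the program's entire lifetime.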
This code has a number of problems.
First of all, deriving from the standard containers is almost always a poor idea. In this case you're using private inheritance, which reduces the problems, but doesn't eliminate them entirely. In any case, you don't seem to put the inheritance to much (any?) use anyway. Even though you've derived your MESSAGE_QUEUE from MESSAGE_LIST (which is actually std::vector<int>), you embed a pointer to an instance of a MESSAGE_LIST into MESSAGE_QUEUE anyway.
Second, if you're going to use a queue to communicate between threads (which I think is generally a good idea) you should make the locking inherent in the queue operations rather than requiring each thread to manage the locking correctly on its own.
Third, a vector isn't a particularly suitable data structure for representing a queue anyway, unless you're going to make it fixed size, and use it roughly like a ring buffer. That's not a bad idea either, but it's quite a bit different from what you've done. If you're going to make the size dynamic, you'd probably be better off starting with a deque instead.
Fourth, the magic numbers in your error handling (0x1AE1, 0x1AE2, etc.) are quite opaque. At the very least, you need to give these meaningful names. The one comment you have does not make their use anywhere close to clear.
Finally, if you're going to go to all the trouble of writing code for a thread-safe queue, you might as well make it generic so it can hold essentially any kind of data you want, instead of dedicating it to one specific type.
Ultimately, your code doesn't seem to save the client much work or trouble over using the Windows functions directly. For the most part, you've just provided the same capabilities under slightly different names.
IMO, a thread-safe queue should handle almost all the work internally, so that client code can use it about like it would any other queue.
// Warning: untested code.
// Assumes: `T::T(T const &) throw()`
//
template <class T>
class queue {
    std::deque<T> data;
    CRITICAL_SECTION cs;
    HANDLE semaphore;
public:
    queue() {
        InitializeCriticalSection(&cs);
        semaphore = CreateSemaphore(NULL, 0, 2048, NULL);
    }
    ~queue() {
        DeleteCriticalSection(&cs);
        CloseHandle(semaphore);
    }
    void push(T const &item) {
        EnterCriticalSection(&cs);
        data.push_back(item);
        LeaveCriticalSection(&cs);
        ReleaseSemaphore(semaphore, 1, NULL);
    }
    T pop() {
        WaitForSingleObject(semaphore, INFINITE);
        EnterCriticalSection(&cs);
        T item = data.front();
        data.pop_front();
        LeaveCriticalSection(&cs);
        return item;
    }
};
HANDLE done;
typedef queue<int> msgQ;
enum commands { quit, print };

void backgrounder(void *qq) {
    // I haven't quite puzzled out what your background thread
    // was supposed to do, so I've kept it really simple, executing only
    // the two commands listed above.
    msgQ *q = (msgQ *)qq;
    int command;
    while (quit != (command = q->pop()))
        printf("Print\n");
    SetEvent(done);
}

int main() {
    msgQ q;
    done = CreateEvent(NULL, false, false, NULL);
    _beginthread(backgrounder, 0, (void*)&q);
    for (int i=0; i<20; i++)
        q.push(print);
    q.push(quit);
    WaitForSingleObject(done, INFINITE);
    return 0;
}
Your background thread needs access to the same CriticalSection object and it needs to create LockSection objects to lock it -- the locking is collaborative.
You are trying to return the last element after popping it.
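In other words, getLast should read the element before removing it, something like this sketch (the fallback return for a null m_pList is hypothetical; the original had no defined return value on that path):

int getLast()
{
    if(m_pList){
        if(m_pList->size() == 1){
            Add(0x0);
        }
        int last = m_pList->back();  // read first...
        m_pList->pop_back();         // ...then remove
        return last;
    }
    return 0x0;  // hypothetical fallback to avoid undefined behavior
}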