I will begin with an example. Suppose I need to guard some code inside a function with a mutex. There are two ways of implementing this.
#include <iostream>
#include <vector>
#include <pthread.h>
pthread_mutex_t myMutex = PTHREAD_MUTEX_INITIALIZER;
std::vector<float> myVec;
void threadfunc(int i, float value)
{
pthread_mutex_lock(&myMutex);
if(i <= 0 || i > myVec.size())
{
pthread_mutex_unlock(&myMutex);
return;
}
if(value < 0)
{
pthread_mutex_unlock(&myMutex);
return;
}
myVec[i] += value;
pthread_mutex_unlock(&myMutex);
return;
}
class AUTOMUTEX
{
private:
pthread_mutex_t *mMutex;
public:
AUTOMUTEX(pthread_mutex_t *mutex): mMutex(mutex)
{
pthread_mutex_lock(mMutex);
}
~AUTOMUTEX()
{
pthread_mutex_unlock(mMutex);
}
};
void threadfunc_autolock(int i, float value)
{
AUTOMUTEX autoMutex(&myMutex);
if(i <= 0 || i > myVec.size())
{
return;
}
if(value < 0)
{
return;
}
myVec[i] += value;
return;
}
int main()
{
threadfunc_autolock(5, 10);
threadfunc(0, 7);
return 1;
}
As is clear from the example, threadfunc_autolock is the better implementation, because calling pthread_mutex_unlock before every return is taken care of by the destructor of AUTOMUTEX. (The C++11 thread library has support for this, so we don't need our own implementation of AUTOMUTEX if we are using it.)
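For reference, a minimal sketch of the C++11 equivalent using std::mutex and std::lock_guard (the names threadfunc_cpp11, myMutex11 and myVec11 are purely illustrative); the guard releases the mutex on every return path automatically:
#include <cstddef>
#include <mutex>
#include <vector>
std::mutex myMutex11;
std::vector<float> myVec11;
void threadfunc_cpp11(int i, float value)
{
    std::lock_guard<std::mutex> lock(myMutex11); // locked here, unlocked in the destructor
    if(i < 0 || static_cast<std::size_t>(i) >= myVec11.size())
        return;                                  // unlocked automatically
    if(value < 0)
        return;                                  // unlocked automatically
    myVec11[i] += value;                         // unlocked automatically at normal exit
}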
Is there a way we can achieve this without implementing a wrapper class each time we need such a "set/reset" pair of functions? Does Boost or C++11 have some predefined template class with which we can achieve the behaviour of AUTOMUTEX for any such "set/reset" sort of function? This would be really helpful for functions with multiple points of return.
In other words, does Boost/C++ provide a class with the following behaviour?
//sample code not compilable.
template <class T, class Y>
class myAuto
{
myAuto()
{
T();
}
~myAuto()
{
Y();
}
};
You may write your own generic RAII class, something like:
#include <functional>
class Finally
{
public:
explicit Finally(std::function<void()> f) : mF(f) {}
~Finally() noexcept {
try
{
mF();
} catch (...) {
// Handle error.
}
}
Finally(const Finally&) = delete;
Finally(Finally&&) = delete;
Finally& operator=(const Finally&) = delete;
Finally& operator=(Finally&&) = delete;
private:
std::function<void()> mF;
};
Usage:
{
pthread_mutex_lock(&myMutex);
Finally finally([&](){ pthread_mutex_unlock(&myMutex); });
//..
}
That said, a dedicated RAII object may be more appropriate in some cases (such as a mutex).
There is a proposal for a generic scope guard to be included in the next C++ standard, and I think it is accepted. You can find an implementation here, together with a link to the reference paper.
In principle, it is similar to the classical ScopeGuard, but it also provides some special cases e.g. for C-like file APIs.
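Assuming the proposal referred to is the one that became std::experimental::scope_exit in the Library Fundamentals TS, usage would look roughly like this sketch (header name and availability depend on your standard library, so treat it as illustrative only):
#include <experimental/scope> // not shipped by every standard library yet
#include <pthread.h>
pthread_mutex_t someMutex = PTHREAD_MUTEX_INITIALIZER;
void threadfunc_scope_exit()
{
    pthread_mutex_lock(&someMutex);
    // Runs its callable when the enclosing scope is left, on any return path.
    auto guard = std::experimental::scope_exit([] { pthread_mutex_unlock(&someMutex); });
    // ... work under the lock, early returns allowed ...
}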
You could use something like ScopeGuard. (Now somewhat old-fashioned.)
But given how easy and clear it is to construct a specific RAII wrapper for each resource type I would normally do that.
(I don't think boost ever adopted anything like ScopeGuard. With std::function, lambdas and so on it's easy to do your own.)
What's wrong with writing your own generic resource wrapper?
#include <functional>
#include <iostream>
template <typename Res, typename Fn = std::function<void(Res*)>>
class resource_mgr
{
Res* resource;
Fn initialize, finalize;
public:
resource_mgr (Res* r, Fn i, Fn f)
: resource(r),
initialize(i),
finalize(f)
{
initialize(resource);
}
resource_mgr (resource_mgr const&) = delete;
resource_mgr (resource_mgr&&) = delete;
resource_mgr const& operator = (resource_mgr const&) = delete;
resource_mgr const& operator = (resource_mgr&&) = delete;
~resource_mgr()
{
try
{
finalize(resource);
}
catch(...)
{
std::cerr << "Uh-oh!";
}
}
};
You can keep it simple or go wild on something like this -- use smart pointers, define move operations, add support for custom error handlers, etc. You might use it like this:
void threadfunc_autolock(int i, float value)
{
resource_mgr<pthread_mutex_t> autoMutex (
&myMutex,
[](auto* p) { if (pthread_mutex_lock(p) != 0) throw Something(); },
[](auto* p) { if (pthread_mutex_unlock(p) != 0) throw Something(); }
);
/* . . . */
}
Here's an example using Boost.ScopeExit (untested):
#include <boost/scope_exit.hpp>
...
void threadfunc_autolock(int i, float value)
{
pthread_mutex_lock(&myMutex);
BOOST_SCOPE_EXIT(&myMutex) {
pthread_mutex_unlock(&myMutex);
} BOOST_SCOPE_EXIT_END
if(i <= 0 || i > myVec.size())
{
return;
}
if(value < 0)
{
return;
}
myVec[i] += value;
}
Related
I would like to be able to 'forward' a member function call of a class to every member variable of the class:
class MyObject {
X_Behavior v1;
X_Behavior v2;
...
Y_Behavior v10;
Z_Behavior v11;
...
public:
void clear() { v1.clear(); v2.clear(); ... v10.clear(); v11.clear(); }
void hide() { v1.hide(); v2.hide(); ... v10.hide(); v11.hide(); }
void show() { v1.show(); v2.show(); ... v10.show(); v11.show(); }
};
All these functions are implemented in every component class,
according to the expected 'behavior'.
e.g.
class X_Behavior {
public:
void clear();
void hide();
void show();
...
};
Manual copying of these iterations
void clear() { v1.clear(); v2.clear(); ... v10.clear(); v11.clear(); }
void hide() { v1.hide(); v2.hide(); ... v10.hide(); v11.hide(); }
void show() { v1.show(); v2.show(); ... v10.show(); v11.show(); }
... more similar members here ...
is hard to maintain and review.
There are many classes like MyObject, each with many member variables.
Many developers edit them.
Also, you cannot tell whether an omitted call or a mixed-up order was intentional or not.
Can you propose a compiler-generated construct that allows me to implement these functions once and not touch them again?
void MyObject::clear() { /* call clear() for every (_Behavior) member of this class */ }
void MyObject::hide() { /* call hide() for every (_Behavior) member of this class */ }
void MyObject::show() { /* call show() for every (_Behavior) member of this class */ }
I do not wish to increase the size of MyObject.
The *_Behavior classes should stay as they are.
Not to be tied to a base class.
I want to do this without employing the Preprocessor.
Can you propose a C++11/17/20 solution for this?
Ideally, I would like to see if this could be done with minimal code, just like
the compiler generated default implementations for constructor, copy constructor, assignments, destructor.
1. std::tuple + std::apply
A simple C++17 solution to your problem would be to add a method that returns references to all behaviors; then you can use std::apply with a variadic generic lambda to expand it into the individual calls.
e.g.: godbolt example
class MyObject {
BehaviorA v1;
BehaviorA v2;
BehaviorB v3;
BehaviorB v4;
constexpr auto behaviors() { return std::tie(v1, v2, v3, v4); }
public:
void clear() {
std::apply(
[](auto&&... behavior) { (behavior.clear(), ...); },
behaviors()
);
}
};
Pros:
Easily optimizable by compilers; will mostly result in the same code as the manual function calls
Cons:
You have to remember to add each new behavior to behaviors().
2. boost::pfr::for_each_field
If you don't mind using Boost, you can improve on this by putting all the behaviors into an aggregate struct. Since C++14 (and much more conveniently since C++17) you can sort-of reflect the members of aggregates by using aggregate initialization - this is often called the "magic tuple" trick.
e.g.: godbolt example
struct BehaviorA { void clear() { std::cout << "CLEAR A" << std::endl; } };
struct BehaviorB { void clear() { std::cout << "CLEAR B" << std::endl; } };
class MyObject {
struct MyObjectBehaviours {
BehaviorA v1;
BehaviorA v2;
BehaviorB v3;
BehaviorB v4;
} behaviors;
public:
void clear() {
boost::pfr::for_each_field(behaviors, [](auto&& behavior) {
behavior.clear();
});
}
};
Pros:
Very hard to mess up with this one
Can be optimized very well
Cons:
Needs boost
2.1 magic tuples without boost
You can also do the same without using boost, you'll have to write quite a bit of code though:
godbolt example
template<class T>
concept aggregate = std::is_aggregate_v<T>;
struct any_type {
template<class T>
operator T() {}
};
template<aggregate T>
consteval std::size_t count_members(auto ...members) {
if constexpr (requires { T{ members... }; } == false)
return sizeof...(members) - 1;
else
return count_members<T>(members..., any_type{});
}
template<aggregate T>
constexpr auto tie_struct(T& data) {
constexpr std::size_t fieldCount = count_members<T>();
if constexpr(fieldCount == 0) {
return std::tie();
} else if constexpr (fieldCount == 1) {
auto& [m1] = data;
return std::tie(m1);
} else if constexpr (fieldCount == 2) {
auto& [m1, m2] = data;
return std::tie(m1, m2);
} else if constexpr (fieldCount == 3) {
auto& [m1, m2, m3] = data;
return std::tie(m1, m2, m3);
} else if constexpr (fieldCount == 4) {
auto& [m1, m2, m3, m4] = data;
return std::tie(m1, m2, m3, m4);
} else {
static_assert(fieldCount!=fieldCount, "Too many fields for tie_struct! add more if statements!");
}
}
template<aggregate T, class Callable>
constexpr void for_each_field(T& data, Callable&& callable) {
std::apply([&callable](auto&&... members){
(callable(members), ...);
}, tie_struct(data));
}
struct BehaviorA { void clear() { std::cout << "CLEAR A" << std::endl; } };
struct BehaviorB { void clear() { std::cout << "CLEAR B" << std::endl; } };
class MyObject {
struct MyObjectBehaviours {
BehaviorA v1;
BehaviorA v2;
BehaviorB v3;
BehaviorB v4;
} behaviors;
public:
void clear() {
for_each_field(behaviors, [](auto&& behavior) {
behavior.clear();
});
}
};
Pros:
Same as above
Cons:
Needs a lot of boilerplate code (but that can also be used for other things - structure reflection is always useful :D )
3. std::variant array
With std::variant you can combine all your behaviors into a single array (a variant is basically a tagged union of all possible behaviors); then you can use a simple for-loop with std::visit to access the individual behaviors:
e.g.: godbolt example
struct BehaviorA { BehaviorA(int) {} void clear() { std::cout << "CLEAR A" << std::endl; } };
struct BehaviorB { BehaviorB(float) {} void clear() { std::cout << "CLEAR B" << std::endl; } };
class MyObject {
using Behavior = std::variant<BehaviorA, BehaviorB>;
Behavior behaviors[4];
public:
MyObject() : behaviors {
Behavior{std::in_place_type<BehaviorA>, 1},
Behavior{std::in_place_type<BehaviorA>, 2},
Behavior{std::in_place_type<BehaviorB>, 1.0f},
Behavior{std::in_place_type<BehaviorB>, 2.0f}
} {
}
void clear() {
for(auto& b : behaviors)
std::visit([](auto& behavior) {
behavior.clear();
}, b);
}
};
Pros:
Easy to use, no allocations
Cons:
If you want to access only a single element it gets hairy, e.g.:
auto& b = std::get<BehaviorA>(behaviors[0]);
No names for the individual behaviors, only array indices
Potentially wastes a lot of memory (if some behaviors are a lot larger than others)
I think that your best option is to create a base class for the Behavior classes. If you really want to avoid that, you could store them as unions (or std::variants), but it would make the code needlessly more complicated and less readable.
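For completeness, a minimal sketch of that base-class approach (the Behavior interface and the members_ vector are illustrative names, not from the question); note that it adds a vtable pointer per member plus one pointer per entry, which works against the original size constraint:
#include <vector>
struct Behavior {                              // common interface for every *_Behavior class
    virtual void clear() = 0;
    virtual void hide() = 0;
    virtual void show() = 0;
    virtual ~Behavior() = default;
};
struct X_Behavior : Behavior {
    void clear() override {}
    void hide() override {}
    void show() override {}
};
class MyObject {
    X_Behavior v1, v2;
    std::vector<Behavior*> members_{&v1, &v2}; // each member registered once, in one place
public:
    void clear() { for (auto* m : members_) m->clear(); }
    void hide()  { for (auto* m : members_) m->hide(); }
    void show()  { for (auto* m : members_) m->show(); }
};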
I have introduced a FreeGuard class that cleans up a resource if initialization fails:
struct Resource {...};
int freeResource(Resource* r); // declared up front so FreeGuard's destructor can call it
class FreeGuard {
public:
FreeGuard(Resource* r) : resource(r) {};
~FreeGuard() {
if (!dismissed) {
freeResource(resource);
}
}
void dismiss() { dismissed = true; }
private:
bool dismissed = false;
Resource* resource;
};
int init(Resource* r) {
FreeGuard guard(r);
if (...)
return -1;
if (...)
return -2;
...
if (...)
return -1000;
guard.dismiss();
return 0;
}
int freeResource(Resource* r) {...}
How can I achieve the same with std smart pointers so that I do not have to keep writing FreeGuard classes?
You can use the release() function of unique_ptr. This is a common pattern for exception-safe code when dealing with non-RAII resources (like C library handles):
#include <memory>
int freeResource(Resource* r) {...}
int init(Resource* r) {
std::unique_ptr<Resource, decltype(&freeResource)> guard(r, freeResource);
if (...)
return -1;
if (...)
return -2;
...
if (...)
return -1000;
guard.release(); // releases ownership, deleter will not be called
return 0;
}
Just rename your free() function to something else (here, freeResource()) to avoid conflict with the standard free() function.
I have designed a simple callback KeyListener "interface" with the help of a pure virtual function. I also used a shared_ptr to express the ownership and to be sure that the listener is always available in the handler. That works like a charm, but now I want to implement the same functionality with the help of std::function, because with std::function I am able to use lambdas/functors and I do not need to derive from "interface" classes.
I tried to implement the std::function-variant in the second example and it seems to work, but I have two questions related to example 2:
Why does this example still work, although the listener is out of scope? (It seems that we are working with a copy of the listener instead of the original listener?)
How can I modify the second example to achieve the same functionality as in the first example (working on the original listener)? (A member pointer to std::function seems not to work! How can we handle the case where the listener goes out of scope before the handler?)
Example 1: With a virtual function
#include <memory>
struct KeyListenerInterface
{
virtual ~KeyListenerInterface(){}
virtual void keyPressed(int k) = 0;
};
struct KeyListenerA : public KeyListenerInterface
{
void virtual keyPressed(int k) override {}
};
struct KeyHandler
{
std::shared_ptr<KeyListenerInterface> m_sptrkeyListener;
void registerKeyListener(std::shared_ptr<KeyListenerInterface> sptrkeyListener)
{
m_sptrkeyListener = sptrkeyListener;
}
void pressKey() { m_sptrkeyListener->keyPressed(42); }
};
int main()
{
KeyHandler oKeyHandler;
{
auto sptrKeyListener = std::make_shared<KeyListenerA>();
oKeyHandler.registerKeyListener(sptrKeyListener);
}
oKeyHandler.pressKey();
}
Example 2: With std::function
#include <functional>
#include <memory>
struct KeyListenerA
{
void operator()(int k) {}
};
struct KeyHandler
{
std::function<void(int)> m_funcKeyListener;
void registerKeyListener(const std::function<void(int)> &funcKeyListener)
{
m_funcKeyListener = funcKeyListener;
}
void pressKey() { m_funcKeyListener(42); }
};
int main()
{
KeyHandler oKeyHandler;
{
KeyListenerA keyListener;
oKeyHandler.registerKeyListener(keyListener);
}
oKeyHandler.pressKey();
}
std::function<Sig> implements value semantic callbacks.
This means it copies what you put into it.
In C++, things that can be copied or moved should, well, behave a lot like the original. The thing you are copying or moving can carry with it references or pointers to an external resource, and everything should work fine.
How exactly to adapt to value semantics depends on what state you want in your KeyListener; in your case, there is no state, and copies of no state are all the same.
I'll assume we want to care about the state it stores:
struct KeyListenerA {
int* last_pressed = 0;
void operator()(int k) {if (last_pressed) *last_pressed = k;}
};
struct KeyHandler {
std::function<void(int)> m_funcKeyListener;
void registerKeyListener(std::function<void(int)> funcKeyListener) {
m_funcKeyListener = std::move(funcKeyListener);
}
void pressKey() { m_funcKeyListener(42); }
};
int main() {
KeyHandler oKeyHandler;
int last_pressed = -1;
{
KeyListenerA keyListener{&last_pressed};
oKeyHandler.registerKeyListener(keyListener);
}
oKeyHandler.pressKey();
std::cout << last_pressed << "\n"; // prints 42
}
or
{
oKeyHandler.registerKeyListener([&last_pressed](int k){last_pressed=k;});
}
here we store a reference or pointer to the state in the callable. This gets copied around, and when invoked the right action occurs.
The problem I have with listeners is the double-lifetime issue; a listener link is only valid as long as both the broadcaster and receiver exist.
To this end, I use something like this:
#include <algorithm>
#include <functional>
#include <memory>
#include <mutex>
#include <vector>
using token = std::shared_ptr<void>;
template<class...Message>
struct broadcaster {
using reciever = std::function< void(Message...) >;
token attach( reciever r ) {
return attach(std::make_shared<reciever>(std::move(r)));
}
token attach( std::shared_ptr<reciever> r ) {
auto l = lock();
targets.push_back(r);
return r;
}
void operator()( Message... msg ) {
decltype(targets) tmp;
{
// do a pass that filters out expired targets,
// so we don't leave zombie targets around forever.
auto l = lock();
targets.erase(
std::remove_if( begin(targets), end(targets),
[](auto&& ptr){ return ptr.expired(); }
),
end(targets)
);
tmp = targets; // copy the targets to a local array
}
for (auto&& wpf:tmp) {
auto spf = wpf.lock();
// If in another thread, someone makes the token invalid
// while it still exists, we can do an invalid call here:
if (spf) (*spf)(msg...);
// (There is no safe way around this issue; to fix it, you
// have to either restrict which threads invalidation occurs
// in, or use the shared_ptr `attach` and ensure that final
// destruction doesn't occur until shared ptr is actually
// destroyed. Aliasing constructor may help here.)
}
}
private:
std::mutex m;
auto lock() { return std::unique_lock<std::mutex>(m); }
std::vector< std::weak_ptr<reciever> > targets;
};
which converts your code to:
struct KeyHandler {
broadcaster<int> KeyPressed;
};
int main() {
KeyHandler oKeyHandler;
int last_pressed = -1;
token listen;
{
listen = oKeyHandler.KeyPressed.attach([&last_pressed](int k){last_pressed=k;});
}
oKeyHandler.KeyPressed(42);
std::cout << last_pressed << "\n"; // prints 42
listen = {}; // detach
oKeyHandler.KeyPressed(13);
std::cout << last_pressed << "\n"; // still prints 42
}
Imagine you have the following class:
#include <functional>
#include <vector>
template<typename T1> class Signaler
{
public:
typedef std::function<void (T1)> Func;
public:
Signaler()
{
}
void Call(T1 arg)
{
for(Int32 i = (Int32)_handlers.size() - 1; i > -1; i--)
{
Func handler = _handlers[i];
handler(arg);
}
}
Signaler& operator+=(Func f)
{
_handlers.push_back( f );
return *this;
}
Signaler& operator-=(Func f)
{
for(auto i = _handlers.begin(); i != _handlers.end(); i++)
{
if ( (*i).template target<void (T1)>() == f.template target<void (T1)>() )
{
_handlers.erase( i );
break;
}
}
return *this;
}
private:
std::vector<Func> _handlers;
};
And I use it in the following way:
Signaler<SelectionChangedEventArgs*> Global::Signal_SelectionChanged;
class C1
{
public:
void Register()
{
Global::Signal_SelectionChanged += [&](SelectionChangedEventArgs* e) { this->selectionChangedEvent_cb(e); };
}
void Unregister()
{
Global::Signal_SelectionChanged -= [&](SelectionChangedEventArgs* e) { this->selectionChangedEvent_cb(e); };
}
void selectionChangedEvent_cb(SelectionChangedEventArgs* e) {}
};
class C2
{
public:
void Register()
{
Global::Signal_SelectionChanged += [&](SelectionChangedEventArgs* e) { this->selectionChangedEvent_cb(e); };
}
void Unregister()
{
Global::Signal_SelectionChanged -= [&](SelectionChangedEventArgs* e) { this->selectionChangedEvent_cb(e); };
}
void selectionChangedEvent_cb(SelectionChangedEventArgs* e) {}
};
Now, the problem that I have is that when I call 'Unregister' from the class C2, it removes the wrong version of the lambda expression, because the lambdas look similar.
How can I solve this problem ?
Any idea ?
Thanks
The problem is that you are using std::function::target with a type that is not the type of the object stored in the std::function, so it is returning a null pointer. That is, you need to know the actual type of the object stored in the std::function to be able to call target.
Even if you were to call target with the lambda closure type used to add the callback, this wouldn't work for two reasons: first, lambda closure types are unique (5.1.2p3) so the += and -= lambdas have different types even if they are syntactically identical; second, the closure type for a lambda-expression is not defined to have an operator== (5.1.2p3-6, 19-20), so your code would not even compile.
Switching from lambdas to std::bind wouldn't help, as bind types are also not defined to have operator==.
Instead, consider using an id to register/unregister callbacks. You could also use your own functor which defines operator==, but that would be a lot of work.
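A minimal sketch of that id-based registration (the IdSignaler, Add and Remove names are illustrative, not part of the original Signaler):
#include <functional>
#include <map>
#include <utility>
template <typename Arg>
class IdSignaler
{
public:
    typedef std::function<void(Arg)> Func;
    typedef int Key;
    Key Add(Func f)                              // the caller keeps the returned key
    {
        Key k = _nextKey++;
        _handlers[k] = std::move(f);
        return k;
    }
    void Remove(Key k) { _handlers.erase(k); }   // no comparison of callables needed
    void Call(Arg arg)
    {
        for (auto& kv : _handlers)
            kv.second(arg);
    }
private:
    Key _nextKey = 0;
    std::map<Key, Func> _handlers;
};
C1::Register would then store the Key returned by Add and hand it back in Unregister.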
With the changes made in C++11 (such as the inclusion of std::bind), is there a recommended way to implement a simple single-threaded observer pattern without dependence on anything external to the core language or standard library (like boost::signal)?
EDIT
If someone could post some code showing how dependence on boost::signal could be reduced using new language features, that would still be very useful.
I think that bind makes it easier to create slots (cfr. the 'preferred' syntax vs. the 'portable' syntax - that's all going away). The observer management, however, is not becoming less complex.
But as #R. Martinho Fernandes mentions: a std::vector<std::function< r(a1) > > is now easily created without the hassle of an (artificial) 'pure virtual' interface class.
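As a sketch of that no-frills idea (names are illustrative; no connection management yet):
#include <functional>
#include <utility>
#include <vector>
struct subject {
    std::vector<std::function<void(int, int)>> observers;
    void subscribe(std::function<void(int, int)> f) { observers.push_back(std::move(f)); }
    void notify(int x, int y) {
        for (auto& o : observers) o(x, y); // no 'pure virtual' interface class required
    }
};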
Upon request: an idea on connection management - probably full of bugs, but you'll get the idea:
// note that the Func parameter is something
// like std::function< void(int,int) > or whatever, greatly simplified
// by the C++11 standard
template<typename Func>
struct signal {
typedef int Key;
Key nextKey = 0;
std::map<Key,Func> connections;
// note that connection management is the same in C++03 or C++11
// (until a better idea arises)
template<typename FuncLike>
Key connect( FuncLike f ) {
Key k=nextKey++;
connections[k]=f;
return k;
}
void disconnect(Key k){
connections.erase(k);
}
// note: variadic forwarding (not the main focus of this post)
template<typename... Args>
void call(Args... args){
// supposing no subscription changes within call:
for(auto &connection: connections){
connection.second(std::forward<Args>(args)...);
}
}
};
Usage:
signal<function<void(int,int)>> xychanged;
void dump(int x, int y) { cout << x << ", " << y << endl; }
struct XY { int x, y; } xy;
auto dumpkey=xychanged.connect(dump);
auto lambdakey=xychanged.connect([&xy](int x, int y){ xy.x=x; xy.y=y; });
xychanged.call(1,2);
Since you're asking for code, my blog entry Performance of a C++11 Signal System contains a single-file implementation of a fully functional signal system based on C++11 features without further dependencies (albeit single-threaded, which was a performance requirement).
Here is a brief usage example:
Signal<void (std::string, int)> sig2;
sig2() += [] (std::string msg, int d) { /* handler logic */ };
sig2.emit ("string arg", 17);
More examples can be found in this unit test.
I wrote my own light weight Signal/Slot classes which return connection handles. The existing answer's key system is pretty fragile in the face of exceptions. You have to be exceptionally careful about deleting things with an explicit call. I much prefer using RAII for open/close pairs.
One notable lack of support in my library is the ability to get a return value from your calls. I believe boost::signal has methods of calculating the aggregate return values. In practice usually you don't need this and I just find it cluttering, but I may come up with such a return method for fun as an exercise in the future.
One cool thing about my classes is the Slot and SlotRegister classes. SlotRegister provides a public interface which you can safely link to a private Slot. This protects against external objects calling your observer methods. It's simple, but nice encapsulation.
I do not believe my code is thread safe, however.
//"MIT License + do not delete this comment" - M2tM : http://michaelhamilton.com
#ifndef __MV_SIGNAL_H__
#define __MV_SIGNAL_H__
#include <memory>
#include <utility>
#include <functional>
#include <vector>
#include <set>
#include <limits>
#include "Utility/scopeGuard.hpp"
namespace MV {
template <typename T>
class Signal {
public:
typedef std::function<T> FunctionType;
typedef std::shared_ptr<Signal<T>> SharedType;
static std::shared_ptr< Signal<T> > make(std::function<T> a_callback){
return std::shared_ptr< Signal<T> >(new Signal<T>(a_callback, ++uniqueId));
}
template <class ...Arg>
void notify(Arg... a_parameters){
if(!isBlocked){
callback(std::forward<Arg>(a_parameters)...);
}
}
template <class ...Arg>
void operator()(Arg... a_parameters){
if(!isBlocked){
callback(std::forward<Arg>(a_parameters)...);
}
}
void block(){
isBlocked = true;
}
void unblock(){
isBlocked = false;
}
bool blocked() const{
return isBlocked;
}
//For sorting and comparison (removal/avoiding duplicates)
bool operator<(const Signal<T>& a_rhs){
return id < a_rhs.id;
}
bool operator>(const Signal<T>& a_rhs){
return id > a_rhs.id;
}
bool operator==(const Signal<T>& a_rhs){
return id == a_rhs.id;
}
bool operator!=(const Signal<T>& a_rhs){
return id != a_rhs.id;
}
private:
Signal(std::function<T> a_callback, long long a_id):
id(a_id),
callback(a_callback),
isBlocked(false){
}
bool isBlocked;
std::function< T > callback;
long long id;
static long long uniqueId;
};
template <typename T>
long long Signal<T>::uniqueId = 0;
template <typename T>
class Slot {
public:
typedef std::function<T> FunctionType;
typedef Signal<T> SignalType;
typedef std::shared_ptr<Signal<T>> SharedSignalType;
//No protection against duplicates.
std::shared_ptr<Signal<T>> connect(std::function<T> a_callback){
if(observerLimit == std::numeric_limits<size_t>::max() || cullDeadObservers() < observerLimit){
auto signal = Signal<T>::make(a_callback);
observers.insert(signal);
return signal;
} else{
return nullptr;
}
}
//Duplicate Signals will not be added. If std::function ever becomes comparable this can all be much safer.
bool connect(std::shared_ptr<Signal<T>> a_value){
if(observerLimit == std::numeric_limits<size_t>::max() || cullDeadObservers() < observerLimit){
observers.insert(a_value);
return true;
}else{
return false;
}
}
void disconnect(std::shared_ptr<Signal<T>> a_value){
if(!inCall){
observers.erase(a_value);
} else{
disconnectQueue.push_back(a_value);
}
}
template <typename ...Arg>
void operator()(Arg... a_parameters){
inCall = true;
SCOPE_EXIT{
inCall = false;
for(auto& i : disconnectQueue){
observers.erase(i);
}
disconnectQueue.clear();
};
for (auto i = observers.begin(); i != observers.end();) {
if (i->expired()) {
observers.erase(i++);
} else {
auto next = i;
++next;
i->lock()->notify(std::forward<Arg>(a_parameters)...);
i = next;
}
}
}
void setObserverLimit(size_t a_newLimit){
observerLimit = a_newLimit;
}
void clearObserverLimit(){
observerLimit = std::numeric_limits<size_t>::max();
}
int getObserverLimit(){
return observerLimit;
}
size_t cullDeadObservers(){
for(auto i = observers.begin(); i != observers.end();) {
if(i->expired()) {
observers.erase(i++);
} else {
++i;
}
}
return observers.size();
}
private:
std::set< std::weak_ptr< Signal<T> >, std::owner_less<std::weak_ptr<Signal<T>>> > observers;
size_t observerLimit = std::numeric_limits<size_t>::max();
bool inCall = false;
std::vector< std::shared_ptr<Signal<T>> > disconnectQueue;
};
//Can be used as a public SlotRegister member for connecting slots to a private Slot member.
//In this way you won't have to write forwarding connect/disconnect boilerplate for your classes.
template <typename T>
class SlotRegister {
public:
typedef std::function<T> FunctionType;
typedef Signal<T> SignalType;
typedef std::shared_ptr<Signal<T>> SharedSignalType;
SlotRegister(Slot<T> &a_slot) :
slot(a_slot){
}
//no protection against duplicates
std::shared_ptr<Signal<T>> connect(std::function<T> a_callback){
return slot.connect(a_callback);
}
//duplicate shared_ptr's will not be added
bool connect(std::shared_ptr<Signal<T>> a_value){
return slot.connect(a_value);
}
void disconnect(std::shared_ptr<Signal<T>> a_value){
slot.disconnect(a_value);
}
private:
Slot<T> &slot;
};
}
#endif
Supplemental scopeGuard.hpp:
#ifndef _MV_SCOPEGUARD_H_
#define _MV_SCOPEGUARD_H_
//Lifted from Alexandrescu's ScopeGuard11 talk.
namespace MV {
template <typename Fun>
class ScopeGuard {
Fun f_;
bool active_;
public:
ScopeGuard(Fun f)
: f_(std::move(f))
, active_(true) {
}
~ScopeGuard() { if(active_) f_(); }
void dismiss() { active_ = false; }
ScopeGuard() = delete;
ScopeGuard(const ScopeGuard&) = delete;
ScopeGuard& operator=(const ScopeGuard&) = delete;
ScopeGuard(ScopeGuard&& rhs)
: f_(std::move(rhs.f_))
, active_(rhs.active_) {
rhs.dismiss();
}
};
template<typename Fun>
ScopeGuard<Fun> scopeGuard(Fun f){
return ScopeGuard<Fun>(std::move(f));
}
namespace ScopeMacroSupport {
enum class ScopeGuardOnExit {};
template <typename Fun>
MV::ScopeGuard<Fun> operator+(ScopeGuardOnExit, Fun&& fn) {
return MV::ScopeGuard<Fun>(std::forward<Fun>(fn));
}
}
#define SCOPE_EXIT \
auto ANONYMOUS_VARIABLE(SCOPE_EXIT_STATE) \
= MV::ScopeMacroSupport::ScopeGuardOnExit() + [&]()
#define CONCATENATE_IMPL(s1, s2) s1##s2
#define CONCATENATE(s1, s2) CONCATENATE_IMPL(s1, s2)
#ifdef __COUNTER__
#define ANONYMOUS_VARIABLE(str) \
CONCATENATE(str, __COUNTER__)
#else
#define ANONYMOUS_VARIABLE(str) \
CONCATENATE(str, __LINE__)
#endif
}
#endif
An example application making use of my library:
#include <iostream>
#include <string>
#include "signal.hpp"
class Observed {
private:
//Note: This is private to ensure not just anyone can spawn a signal
MV::Slot<void (int)> onChangeSlot;
public:
typedef MV::Slot<void (int)>::SharedSignalType ChangeEventSignal;
//SlotRegister is public, users can hook up signals to onChange with this value.
MV::SlotRegister<void (int)> onChange;
Observed():
onChange(onChangeSlot){ //Here is where the binding occurs
}
void change(int newValue){
onChangeSlot(newValue);
}
};
class Observer{
public:
Observer(std::string a_name, Observed &a_observed){
connection = a_observed.onChange.connect([=](int value){
std::cout << a_name << " caught changed value: " << value << std::endl;
});
}
private:
Observed::ChangeEventSignal connection;
};
int main(){
Observed observed;
Observer observer1("o[1]", observed);
{
Observer observer2("o[2]", observed);
observed.change(1);
}
observed.change(2);
}
Output of the above would be:
o[1] caught changed value: 1
o[2] caught changed value: 1
o[1] caught changed value: 2
As you can see, the slot disconnects dead signals automatically.
Here's what I came up with.
This assumes no need to aggregate results from the listeners of a broadcast signal.
Also, the "slot" or Signal::Listener is the owner of the callback.
This ought to live with the object that your (I'm guessing...) lambda is probably capturing so that when that object goes out of scope, so does the callback, which prevents it from being called anymore.
You could also use methods described in other answers to store the Listener owner objects in a way you can look them up.
#include <forward_list>
#include <functional>
#include <memory>
template <typename... FuncArgs>
class Signal
{
using fp = std::function<void(FuncArgs...)>;
std::forward_list<std::weak_ptr<fp> > registeredListeners;
public:
using Listener = std::shared_ptr<fp>;
Listener add(const std::function<void(FuncArgs...)> &cb) {
// passed in by reference, then copied into the Listener, which owns it.
Listener result(std::make_shared<fp>(cb));
registeredListeners.push_front(result);
return result;
}
void raise(FuncArgs... args) {
registeredListeners.remove_if([&args...](std::weak_ptr<fp> e) -> bool {
if (auto f = e.lock()) {
(*f)(args...);
return false;
}
return true;
});
}
};
Usage:
Signal<int> bloopChanged;
// ...
Signal<int>::Listener bloopResponse = bloopChanged.add([](int i) { ... });
// or
decltype(bloopChanged)::Listener bloopResponse = ...
// let bloopResponse go out of scope.
// or re-assign it
// or reset the shared_ptr to disconnect it
bloopResponse.reset();
I have made a gist for this too, with a more in-depth example:
https://gist.github.com/johnb003/dbc4a69af8ea8f4771666ce2e383047d
I have had a go at this myself also. My efforts can be found at this gist, which will continue to evolve . . .
https://gist.github.com/4172757
I use a different style, more similar to the change notifications in JUCE than BOOST signals. Connection management is done using some lambda syntax that does some capture by copy. It is working well so far.