I have two classes
class PopulationMember
{
public:
void operationOnThisMember1();
void operationOnThisMember2();
...
private:
Population* population_;
};
class Population
{
public:
void operationOnAllMembers1();
void operationOnAllMembers2();
...
void operationOnAllMembers100();
void sortAllMembersCriterium1();
void sortAllMembersCriterium2();
...
void sortAllMembersCriterium100();
private:
QVector<PopulationMember*> members_;
};
I would like to implement SELECT-like functionality in my framework, that is, to be able to perform operations only on those members which share a certain combination of properties.
So far I have thought out two approaches:
Implement a method that would return a new Population object composed of Members that satisfy a certain condition.
Population Population::select(bool (*predicate)(PopulationMember*));
Add a
bool selected_;
flag to each PopulationMember.
If I do 1, there is no way to implement sorting of the selected data and deletion. If I do 2, there is overhead in checking for selectedness, and I would have to reimplement sorting and other algorithms to operate only on selected members.
Is there a third, better way?
The approach I would take is to expose an iterator interface to the entire collection. To implement some sort of selection I would then use iterator adapters, e.g. one taking a unary predicate, which provides a filtered view of the range. This way there is neither an impact on the stored objects nor any overhead in creating a separate collection. If you look at Boost's iterator adapters you may already get pretty much what is needed.
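For illustration, here is a minimal sketch of that idea using boost::filter_iterator; the satisfiesCriterium() accessor on PopulationMember is made up for the example and not part of the question's interface:
#include <boost/iterator/filter_iterator.hpp>
#include <QVector>

// Hypothetical predicate: satisfiesCriterium() is an assumed accessor.
struct SatisfiesCriterium
{
    bool operator()(const PopulationMember* m) const { return m->satisfiesCriterium(); }
};

void operateOnSelection(QVector<PopulationMember*>& members)
{
    typedef QVector<PopulationMember*>::iterator BaseIt;
    typedef boost::filter_iterator<SatisfiesCriterium, BaseIt> FilterIt;

    FilterIt it  = boost::make_filter_iterator(SatisfiesCriterium(), members.begin(), members.end());
    FilterIt end = boost::make_filter_iterator(SatisfiesCriterium(), members.end(),   members.end());

    // [it, end) is a filtered view: no copy of the collection, no flag stored in the members.
    for (; it != end; ++it)
        (*it)->operationOnThisMember1();
}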
I have never looked, but I expect it will be method 1. See the MySQL source code to confirm my expectation. :-)
This is a proposal based on something similar I had to do once; it is an extended form of your first approach.
The advantage is the use of STL concepts and the freedom to implement either many small functors or a few parametrizable ones.
class All
{
public:
bool operator()(const PopulationMember* entity) const
{
return true;
}
};
class AscByID
{
public:
bool operator()(const PopulationMember* a, const PopulationMember* b) const
{
return a->getId() < b->getId();
}
};
template<typename Entity, class Predicate, class SortComparator>
class Query
{
public:
typedef std::set<Entity, SortComparator> ResultSet;
Query(const Predicate& predicate = Predicate(), const SortComparator& cmp = SortComparator()) :
predicate(predicate), resultSet(cmp)
{
}
bool operator()(const Entity& entity)
{
if (predicate(entity))
{
resultSet.insert(entity);
return true;
}
return false;
}
const ResultSet& getResult(void) const
{
return resultSet;
}
void clearResult(void)
{
resultSet.clear();
}
private:
Predicate predicate;
ResultSet resultSet;
};
int main()
{
Query<const PopulationMember*, All, AscByID> query;
Population::execute(query);
//do something with the result
query.getResult();
//clear the result
query.clearResult();
//query again
Population::execute(query);
//do something useful again
return 0;
}
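Population::execute used above is not part of the shown code; here is a minimal sketch of what it could look like, assuming it is an ordinary member template that feeds every stored member to the query functor (main() would then call it on a Population instance, and the corresponding declaration would have to be added to Population):
template<typename QueryT>
void Population::execute(QueryT& query) const
{
    // Hand every member to the query object; it filters and collects internally.
    for (int i = 0; i < members_.size(); ++i)
        query(members_[i]);
}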
My problem comes from a project that I'm supposed to finish. I have to create a std::unordered_map<T, unsigned int> where T is a pointer to a polymorphic base class. After a while, I figured that it would also be good practice to use a std::unique_ptr<T> as the key, since my map is meant to own the objects. Let me introduce some backstory:
Consider a class hierarchy with polymorphic sell_obj as the base class, and book and table inheriting from that class. We now know that we need to create a std::unordered_map<std::unique_ptr<sell_obj*>, unsigned int>. Therefore, erasing a pair from that map will automatically free the memory pointed to by the key. The whole idea is to have keys pointing to books/tables, and the value of each key will represent the amount of that product that our shop contains.
As we are dealing with std::unordered_map, we should specify hashes for all three classes. To simplify things, I specified them in main like this:
namespace std{
template <> struct hash<book>{
size_t operator()(const book& b) const
{
return 1; // simplified
}
};
template <> struct hash<table>{
size_t operator()(const table& b) const
{
return 2; // simplified
}
};
// The standard provides a specialization so that std::hash<unique_ptr<T>> is the same as std::hash<T*>.
template <> struct hash<sell_obj*>{
size_t operator()(const sell_obj *s) const
{
const book *b_p = dynamic_cast<const book*>(s);
if(b_p != nullptr) return std::hash<book>()(*b_p);
else{
const table *t_p = static_cast<const table*>(s);
return std::hash<table>()(*t_p);
}
}
};
}
Now let's look at the implementation of the map. We have a class called Shop which looks like this:
#include "sell_obj.h"
#include "book.h"
#include "table.h"
#include <unordered_map>
#include <memory>
class Shop
{
public:
Shop();
void add_sell_obj(sell_obj&);
void remove_sell_obj(sell_obj&);
private:
std::unordered_map<std::unique_ptr<sell_obj>, unsigned int> storeroom;
};
and implementation of two, crucial functions:
void Shop::add_sell_obj(sell_obj& s_o)
{
std::unique_ptr<sell_obj> n_ptr(&s_o);
storeroom[std::move(n_ptr)]++;
}
void Shop::remove_sell_obj(sell_obj& s_o)
{
std::unique_ptr<sell_obj> n_ptr(&s_o);
auto target = storeroom.find(std::move(n_ptr));
if(target != storeroom.end() && target->second > 0) target->second--;
}
in my main I try to run the following code:
int main()
{
book *b1 = new book("foo", "bar", 10);
sell_obj *ptr = b1;
Shop S_H;
S_H.add_sell_obj(*ptr); // works fine I guess
S_H.remove_sell_obj(*ptr); // usually (not always) crashes [SIGSEGV]
return 0;
}
My question is: where does my logic fail? I heard that it's fine to use std::unique_ptr in STL containers since C++11. What's causing the crash? The debugger does not provide any information besides the crash occurrence.
If more information about the project is needed, please point it out. Thank you for reading.
There are quite a few problems with the logic in the question. First of all:
Consider class hierarchy with polymorphic sell_obj as base class. book and table inheriting from that class. We now know that we need to create a std::unordered_map<std::unique_ptr<sell_obj*>, unsigned int>.
In such cases std::unique_ptr<sell_obj*> is not what we would want. We would want std::unique_ptr<sell_obj>. Without the *. std::unique_ptr is already "a pointer".
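In other words, the map should be declared with the object type, exactly as the Shop class in the question already does:
std::unordered_map<std::unique_ptr<sell_obj>, unsigned int> storeroom; // sell_obj, not sell_obj*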
As we are dealing with std::unordered_map, we should specify hashes for all three classes. To simplify things, I specified them in main like this: [...]
This is also quite an undesirable approach. It would require changing that part of the code every time we add another subclass to the hierarchy. It would be best to delegate the hashing (and comparing) polymorphically to avoid such problems, exactly as #1201programalarm suggested.
[...] implementation of two, crucial functions:
void Shop::add_sell_obj(sell_obj& s_o)
{
std::unique_ptr<sell_obj> n_ptr(&s_o);
storeroom[std::move(n_ptr)]++;
}
void Shop::remove_sell_obj(sell_obj& s_o)
{
std::unique_ptr<sell_obj> n_ptr(&s_o);
auto target = storeroom.find(std::move(n_ptr));
if(target != storeroom.end() && target->second > 0) target->second--;
}
This is wrong for a couple of reasons. First of all, taking an argument by non-const reference suggests modification of the object. Second of all, creating n_ptr from a pointer obtained by applying & to an argument is incredibly risky. It assumes that the object is allocated on the heap and is unowned, a situation that generally should not take place and is incredibly dangerous. In case the passed object is on the stack and/or is already managed by some other owner, this is a recipe for disaster (like a segfault).
What's more, it is more or less guaranteed to end in disaster, since both add_sell_obj() and remove_sell_obj() create std::unique_ptrs to potentially the same object. This is exactly the case in the original question's main(). Two std::unique_ptrs pointing to the same object result in a double delete.
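A minimal illustration of that failure mode, with a stand-in widget type instead of sell_obj:
#include <memory>

struct widget { };

int main()
{
    widget* raw = new widget;
    std::unique_ptr<widget> a(raw); // first owner
    std::unique_ptr<widget> b(raw); // second "owner" of the very same object
    return 0;
} // both a and b delete the same pointer here: double delete, undefined behavior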
While it's not necessarily the best approach for this problem if one uses C++ (as compared to Java), there are a couple of interesting tools that can be used for this task. The code below assumes C++20.
The class hierarchy
First of all, we need a base class that will be used when referring to all the objects stored in the shop:
struct sell_object { };
And then we need to introduce classes that will represent concrete objects:
class book : public sell_object {
std::string title;
public:
book(std::string title) : title(std::move(title)) { }
};
class table : public sell_object {
int number_of_legs = 0;
public:
table(int number_of_legs) : number_of_legs(number_of_legs) { }
};
For simplicity (but to still have some distinction) I chose to give each of them just one field (title and number_of_legs, respectively).
The storage
The shop class that will represent storage for any sell_object needs to somehow store, well, any sell_object. For that we either need to use pointers or references to the base class. You can't have a container of references, so it's best to use pointers. Smart pointers.
Originally the question suggested the usage of std::unordered_map. Let us stick with it:
class shop {
std::unordered_map<
std::unique_ptr<sell_object>, int
> storage;
public:
auto add(...) -> void {
...
}
auto remove(...) -> void {
...
}
};
It is worth mentioning that we chose std::unique_ptr as the key for our map. That means the storage is going to copy the passed objects and use the copies it owns to compare against the elements we query (add or remove). At most one copy of each distinct object will be kept, though.
The fixed version of storage
There is a problem, however. std::unordered_map uses hashing and we need to provide a hash strategy for std::unique_ptr<sell_object>. Well, there already is one and it uses the hash strategy for T*. The problem is that we want to have custom hashing. Those particular std::unique_ptr<sell_object>s should be hashed according to the associated sell_objects.
Because of this, I opted for a different approach than the one proposed in the question. Instead of providing a global specialization in the std namespace, I will use a custom hashing object and a custom comparator:
class shop {
struct sell_object_hash {
auto operator()(std::unique_ptr<sell_object> const& object) const -> std::size_t {
return object->hash();
}
};
struct sell_object_equal {
auto operator()(
std::unique_ptr<sell_object> const& lhs,
std::unique_ptr<sell_object> const& rhs
) const -> bool {
return (*lhs <=> *rhs) == 0;
}
};
std::unordered_map<
std::unique_ptr<sell_object>, int,
sell_object_hash, sell_object_equal
> storage;
public:
auto add(...) -> void {
...
}
auto remove(...) -> void {
...
}
};
Notice a few things. First of all, the type of storage has changed. It is no longer a std::unordered_map<std::unique_ptr<T>, int>, but a std::unordered_map<std::unique_ptr<T>, int, sell_object_hash, sell_object_equal>. This indicates that we are using a custom hasher (sell_object_hash) and a custom comparator (sell_object_equal).
The lines we need to pay extra attention to are:
return object->hash();
return (*lhs <=> *rhs) == 0;
Onto them:
return object->hash();
This is a delegation of hashing. Instead of having an outside "observer" type that implements different hashing for each and every possible type derived from sell_object, we require that those objects supply the hashing themselves. In the original question, the std::hash specialization was that observer. It certainly did not scale as a solution.
In order to achieve the aforementioned, we modify the base class to impose the listed requirement:
struct sell_object {
virtual auto hash() const -> std::size_t = 0;
};
Thus we also need to change our book and table classes:
class book : public sell_object {
std::string title;
public:
book(std::string title) : title(std::move(title)) { }
auto hash() const -> std::size_t override {
return std::hash<std::string>()(title);
}
};
class table : public sell_object {
int number_of_legs = 0;
public:
table(int number_of_legs) : number_of_legs(number_of_legs) { }
auto hash() const -> std::size_t override {
return std::hash<int>()(number_of_legs);
}
};
return (*lhs <=> *rhs) == 0;
This is a C++20 feature called the three-way comparison operator, sometimes called the spaceship operator. I opted to use it, since starting with C++20 most types that want to be comparable will be using this operator. That means we also need our concrete classes to implement it. What's more, we need to be able to call it through base references (sell_object&). Yet another virtual function (operator, actually) needs to be added to the base class:
struct sell_object {
virtual auto hash() const -> std::size_t = 0;
virtual auto operator<=>(sell_object const&) const -> std::partial_ordering = 0;
};
Every subclass of sell_object is going to be required to be comparable with other sell_objects. The main reason is that we need to compare sell_objects in our storage map. For completeness, I used std::partial_ordering, since we require every sell_object to be comparable with every other sell_object. While comparing two books or two tables yields strong ordering (total ordering where two equivalent objects are indistinguishable), we also - by design - need to support comparing a book to a table. This is somewhat meaningless (always returns false). Fortunately, C++20 helps us here with std::partial_ordering::unordered. Those elements are not equal and neither of them is greater or less than the other. Perfect for such scenarios.
Our concrete classes need to change accordingly:
class book : public sell_object {
std::string title;
public:
book(std::string title) : title(std::move(title)) { }
auto hash() const -> std::size_t override {
return std::hash<std::string>()(title);
}
auto operator<=>(book const& other) const {
return title <=> other.title;
};
auto operator<=>(sell_object const& other) const -> std::partial_ordering override {
if (auto book_ptr = dynamic_cast<book const*>(&other)) {
return *this <=> *book_ptr;
} else {
return std::partial_ordering::unordered;
}
}
};
class table : public sell_object {
int number_of_legs = 0;
public:
table(int number_of_legs) : number_of_legs(number_of_legs) { }
auto hash() const -> std::size_t override {
return std::hash<int>()(number_of_legs);
}
auto operator<=>(table const& other) const {
return number_of_legs <=> other.number_of_legs;
};
auto operator<=>(sell_object const& other) const -> std::partial_ordering override {
if (auto table_ptr = dynamic_cast<table const*>(&other)) {
return *this <=> *table_ptr;
} else {
return std::partial_ordering::unordered;
}
}
};
The overridden operator<=>s are required due to the base class' requirements. They are quite simple: if the other object (the one we are comparing this object to) is of the same type, we delegate to the <=> version that uses the concrete type. If not, we have a type mismatch and report the unordered ordering.
For those of you who are curious why the <=> implementation that compares two identical types is not = defaulted: it would use the base-class comparison first, which would delegate to the sell_object version. That would dynamic_cast again and delegate to the defaulted implementation, which would compare the base class first and... result in infinite recursion.
add() and remove() implementation
Everything seems great, so we can move on to adding and removing items to and from our shop. However, we immediately arrive at a hard design decision. What arguments should add() and remove() accept?
std::unique_ptr<sell_object>? That would make their implementation trivial, but it would require the user to construct a potentially useless, dynamically allocated object just to call a function.
sell_object const&? That seems correct, but there are two problems with it: 1) we would still need to construct an std::unique_ptr with a copy of passed argument to find the appropriate element to remove; 2) we wouldn't be able to correctly implement add(), since we need the concrete type to construct an actual std::unique_ptr to put into our map.
Let us go with the second option and fix the first problem. We certainly do not want to construct a useless and expensive object just to look for it in the storage map. Ideally we would like to find a key (std::unique_ptr<sell_object>) that matches the passed object. Fortunately, transparent hashers and comparators come to the rescue.
By supplying additional overloads for hasher and comparator (and providing a public is_transparent alias), we allow for looking for a key that is equivalent, without needing the types to match:
struct sell_object_hash {
auto operator()(std::unique_ptr<sell_object> const& object) const -> std::size_t {
return object->hash();
}
auto operator()(sell_object const& object) const -> std::size_t {
return object.hash();
}
using is_transparent = void;
};
struct sell_object_equal {
auto operator()(
std::unique_ptr<sell_object> const& lhs,
std::unique_ptr<sell_object> const& rhs
) const -> bool {
return (*lhs <=> *rhs) == 0;
}
auto operator()(
sell_object const& lhs,
std::unique_ptr<sell_object> const& rhs
) const -> bool {
return (lhs <=> *rhs) == 0;
}
auto operator()(
std::unique_ptr<sell_object> const& lhs,
sell_object const& rhs
) const -> bool {
return (*lhs <=> rhs) == 0;
}
using is_transparent = void;
};
Thanks to that, we can now implement shop::remove() like so:
auto remove(sell_object const& to_remove) -> void {
if (auto it = storage.find(to_remove); it != storage.end()) {
it->second--;
if (it->second == 0) {
storage.erase(it);
}
}
}
Since our comparator and hasher are transparent, we can find() an element that is equivalent to the argument. If we find it, we decrement the corresponding count. If it reaches 0, we remove the entry completely.
Great, onto the second problem. Let us list the requirements for the shop::add():
we need the concrete type of the object (merely a reference to the base class is not enough, since we need to create matching std::unique_ptr).
we need that type to be derived from sell_object.
We can achieve both with a constrained* template:
template <std::derived_from<sell_object> T>
auto add(T const& to_add) -> void {
if (auto it = storage.find(to_add); it != storage.end()) {
it->second++;
} else {
storage[std::make_unique<T>(to_add)] = 1;
}
}
This is, again, quite simple.
*References: {1} {2}
Correct destruction semantics
There is only one more thing that separates us from a correct implementation: if we delete an object through a pointer (smart or not) to its base class, the base class destructor needs to be virtual.
This leads us to the final version of the sell_object class:
struct sell_object {
virtual auto hash() const -> std::size_t = 0;
virtual auto operator<=>(sell_object const&) const -> std::partial_ordering = 0;
virtual ~sell_object() = default;
};
See full implementation with example and additional printing utilities.
I have two classes, for example:
struct point{
int data;
};
class data{
...
public:
point left;
point right;
..... // more than 50 members of type point
point some_other_point;
}example;
Is it possible to use something like "for each point in example" in this situation?
Because right now I need to modify many functions whenever I add one more point to data.
Or maybe there is another way to approach this.
No, you cannot enumerate members of a type, because C++ does not have the concept of reflection.
This is a common use case for an array, a vector or a map.
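For instance, a minimal sketch of the map variant (the key strings are made up for the example):
#include <map>
#include <string>

struct point { int data; };

struct data
{
    std::map<std::string, point> points; // "left", "right", ... instead of named members
};

int main()
{
    data example;
    example.points["left"]  = point();
    example.points["right"] = point();

    // "for each point in example" is now an ordinary loop:
    for (std::map<std::string, point>::iterator it = example.points.begin();
         it != example.points.end(); ++it)
    {
        it->second.data = 0;
    }
    return 0;
}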
Yes, and there are two ways to do so:
an iterator class, for external iteration
a visitation method, for internal iteration
Then the iteration logic is encapsulated in either of those classes and all the code just uses them.
Using the iterator class.
Pros:
can easily be combined with existing STL algorithms, as well as for and while loops
can suspend (and resume) iteration
Cons:
requires polymorphism of attributes iterated over
Example:
class DataIterator;
class Data {
public:
friend class DataIterator;
Data(Point a, Point b, Point c): _one(a), _two(b), _three(c) {}
DataIterator begin();
DataIterator end();
private:
Point _one;
Point _two;
Point _three;
}; // class Data
class DataIterator:
public std::iterator<std::forward_iterator_tag, Point>
{
public:
struct BeginTag{};
struct EndTag{};
DataIterator(): _data(0), _member(0) {}
DataIterator(Data& data, BeginTag): _data(&data), _member(0) {}
DataIterator(Data& data, EndTag): _data(&data), _member(N) {}
reference operator*() const {
this->ensure_valid();
MemberPtr const ptr = Pointers[_member];
return _data->*ptr;
}
pointer operator->() const { return std::addressof(*(*this)); }
DataIterator& operator++() { this->ensure_valid(); ++_member; return *this; }
DataIterator operator++(int) { DataIterator tmp(*this); ++*this; return tmp; }
friend bool operator==(DataIterator const& left, DataIterator const& right) {
return left._data == right._data and left._member == right._member;
}
friend bool operator!=(DataIterator const& left, DataIterator const& right) {
return not (left == right);
}
private:
typedef Point Data::*MemberPtr;
static size_t const N = 3;
static MemberPtr const Pointers[N];
void ensure_valid() const { assert(_data and _member < N); }
Data* _data;
size_t _member;
}; // class DataIterator
//
// Implementation
//
DataIterator Data::begin() { return DataIterator(*this, DataIterator::BeginTag{}); }
DataIterator Data::end() { return DataIterator(*this, DataIterator::EndTag{}); }
size_t const DataIterator::N;
DataIterator::MemberPtr const DataIterator::Pointers[DataIterator::N] = {
&Data::_one, &Data::_two, &Data::_three
};
And in case you wonder: yes, it really works.
Using the visitation method, though, is easier.
Pros:
can easily accommodate variance in the attribute types iterated over
Cons:
cannot be combined with existing STL algorithms or existing loops
pretty difficult to suspend the iteration part-way
Example:
class Data {
public:
Data(Point a, Point b, Point c): _one(a), _two(b), _three(c) {}
template <typename F>
void apply(F&& f) {
f(_one);
f(_two);
f(_three);
}
private:
Point _one;
Point _two;
Point _three;
}; // class Data
And of course, it works too.
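For example, a short usage sketch; CollectSum is hypothetical and assumes Point exposes the question's int data member:
#include <iostream>

struct CollectSum
{
    int total;
    CollectSum() : total(0) { }
    void operator()(Point const& p) { total += p.data; }
};

int main()
{
    Point a, b, c;
    Data d(a, b, c);

    CollectSum sum;
    d.apply(sum); // apply takes F&&, so the lvalue functor is passed by reference
                  // and keeps the total it accumulates
    std::cout << sum.total << "\n";

    d.apply([](Point& p) { p.data = 0; }); // a lambda works just as well
    return 0;
}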
Do it like this:
class data{
public:
enum POINTS {LEFT=0,RIGHT,SOME_OTHER_POINT};
std::array<point,50> points; // or just point points[50];
}example;
And use it like this:
example.points[data::LEFT]=point{};
Then you can iterate over points array with standard techniques.
You can convert your current fields into:
private:
point left;
// ..
public:
point& left() { return points[LEFT]; }
// ..
where points might be an array of points (as in the other answers), and LEFT is a constant index into the array. This should allow for a relatively quick and painless transition: you will only have to add (), and the compiler will output errors wherever fixes need to be applied.
Then you can convert your code to iterate your point values.
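Putting the pieces together, a minimal sketch of how the converted class might look (the enum values mirror the other answer; POINT_COUNT is added just for the example):
struct point { int data; };

class data
{
public:
    enum PointIndex { LEFT = 0, RIGHT, SOME_OTHER_POINT, POINT_COUNT };

    // Accessors keep the old call sites working; only a () has to be added.
    point& left()             { return points_[LEFT]; }
    point& right()            { return points_[RIGHT]; }
    point& some_other_point() { return points_[SOME_OTHER_POINT]; }

    // Iteration over all points becomes trivial.
    point* begin() { return points_; }
    point* end()   { return points_ + POINT_COUNT; }

private:
    point points_[POINT_COUNT];
};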
You can write a for each function without modifying your example like this:
template<class Func>
void for_each_point(example& e, Func&& f){
f(e.pt1);
f(e.pt2);
f(e.blah);
....
f(e.last_pt);
}
and then call it like:
for_each_point(exam, [&](point & pt){
std::cout<<pt.data<<"\n";
});
or do whatever in the body.
This function could also be a member function, if you prefer.
Changing the point storage to an array or std::array and exposing begin and end of the array also works.
Finally, you could write a custom iterator that walks the points, but that is probably unwise.
Is there a common pattern OR a ready-to-use Boost class for "cached calculation"/"cached getter"?
I mean something like this:
class Test{
public:
Value getValue() const;
protected:
Value calculateValue() const;//REALLY expensive operation.
mutable bool valueIsDirty;
mutable Value cachedValue;
};
Value Test::getValue() const{
if (valueIsDirty){
cachedValue = calculateValue();
valueIsDirty = false;
}
return cachedValue;
}
I can use std::pair<Value, bool> and turn getValue/calculateValue into a macro, but this doesn't really help if the value depends on other values (stored in other classes) and those values can also be cached.
Is there a ready-to-use solution for this kind of "pattern"? At the moment I handle such cached values manually, but this isn't "pretty".
Restrictions:
C++03 standard. Boost is allowed.
The Proxy design pattern can help with this.
A typical implementation will define a class ValuePtr that behaves just like an ordinary Value*, i.e. it has an overloaded operator-> and operator*. But instead of directly accessing the underlying Value object, these operators also contain the logic of deciding to load or recompute the actual value. This extra level of indirection will encapsulate the proxy logic.
If you need to count references to other objects, maybe std::shared_ptr<Value> is useful as the underlying data type inside ValuePtr.
See this site for a code example. Boost.Flyweight might also help.
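A rough sketch of such a proxy, assuming the value is recomputed by a Loader functor supplied by the owner (names other than ValuePtr/Value are made up):
// Lazy proxy: dereferencing recomputes the value only when it is marked dirty (C++03).
template<typename Value, typename Loader>
class ValuePtr
{
public:
    explicit ValuePtr(Loader loader) : loader_(loader), dirty_(true) { }

    void markDirty() { dirty_ = true; }

    Value& operator*()  { refresh(); return value_; }
    Value* operator->() { refresh(); return &value_; }

private:
    void refresh()
    {
        if (dirty_)
        {
            value_ = loader_(); // the REALLY expensive operation
            dirty_ = false;
        }
    }

    Loader loader_; // functor that recomputes the value
    bool   dirty_;
    Value  value_;
};
A Test-like class would then hold a ValuePtr member, forward getValue() to operator*, and call markDirty() whenever one of the inputs changes.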
This is what I ended up using:
template<typename T, typename Owner> class CachedMemberValue{
public:
typedef T (Owner::*Callback)() const;
T get(){
if (dirty){
cachedValue = (owner->*calculateCallback)();
dirty = false;
}
return cachedValue;
}
const T& getRef(){
if (dirty){
cachedValue = (owner->*calculateCallback)();
dirty = false;
}
return cachedValue;
}
void markDirty(){
dirty = true;
}
CachedMemberValue(Owner* owner_, Callback calculateCallback_)
:owner(owner_), calculateCallback(calculateCallback_), dirty(true){
}
protected:
Owner *owner;
Callback calculateCallback;
bool dirty;
T cachedValue;
private:
CachedMemberValue(const CachedMemberValue<T, Owner>&){
}
CachedMemberValue<T, Owner>& operator=(const CachedMemberValue<T, Owner>&){
return *this;
}
};
usage:
class MyClass{
public:
int getMin() const{
return cachedMin.get();
}
void modifyValue() { /*... calculation/modification*/ cachedMin.markDirty();}
MyClass(): cachedMin(this, &MyClass::noncachedGetMin){}
private:
int noncachedGetMin() const{ /*expensive operation here*/ ... }
mutable CachedMemberValue<int, MyClass> cachedMin;
};
I have a container (C++) on which I need to operate in two ways, from different threads: 1) add and remove elements, and 2) iterate through its members. Clearly, removing an element while iteration is happening = disaster. The code looks something like this:
class A
{
public:
...
void AddItem(const T& item, int index) { /*Put item into my_stuff at index*/ }
void RemoveItem(const T& item) { /*Take item out of m_stuff*/ }
const list<T>& MyStuff() { return my_stuff; } //*Hate* this, but see class C
private:
Mutex mutex; //Goes in the *Item methods, but is largely worthless in MyStuff()
list<T> my_stuff; //Just as well a vector or deque
};
extern A a; //defined in the .cpp file
class B
{
...
void SomeFunction() { ... a.RemoveItem(item); }
};
class C
{
...
void IterateOverStuff()
{
const list<T>& my_stuff(a.MyStuff());
for (list<T>::const_iterator it=my_stuff.begin(); it!=my_stuff.end(); ++it)
{
...
}
}
};
Again, B::SomeFunction() and C::IterateOverStuff() are getting called asynchronously. What's a data structure I can use to ensure that during the iteration, my_stuff is 'protected' from add or remove operations?
Sounds like a reader/writer lock is needed. Basically, the idea is that you may have one or more readers OR a single writer; you can never hold a read and a write lock at the same time.
EDIT: An example of usage which I think fits your design involves making a small change. Add an "iterate" function to the class which owns the list and make it templated so you can pass a function/functor to define what to do for each node. Something like this (quick and dirty pseudo code, but you get the point...):
class A {
public:
...
void AddItem(const T& item, int index) {
rwlock.lock_write();
// add the item
rwlock.unlock_write();
}
void RemoveItem(const T& item) {
rwlock.lock_write();
// remove the item
rwlock.unlock_write();
}
template <class P>
void iterate_list(P pred) {
rwlock.lock_read();
std::for_each(my_stuff.begin(), my_stuff.end(), pred);
rwlock.unlock_read();
}
private:
rwlock_t rwlock;
list<T> my_stuff; //Just as well a vector or deque
};
extern A a; //defined in the .cpp file
class B {
...
void SomeFunction() { ... a.RemoveItem(item); }
};
class C {
...
void read_node(const T &element) { ... }
void IterateOverStuff() {
a.iterate_list(boost::bind(&C::read_node, this, _1));
}
};
Another option would be to make the reader/writer lock publicly accessible and have the caller responsible for correctly using the lock. But that's more error prone.
IMHO it is a mistake to have a private mutex in a data structure class and then write the class methods so that the whole thing is thread safe no matter what the code that calls the methods does. The complexity that is required to do this completely and perfectly is way over the top.
The simpler way is to have a public (or global) mutex which the calling code is responsible for locking when it needs to access the data.
Here is my blog article on this subject.
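A minimal sketch of that caller-managed variant, reusing the Mutex and list<T> from the question; ScopedLock is an assumed RAII wrapper around Mutex:
class A
{
public:
    Mutex mutex;      // public on purpose: callers lock it around any access
    list<T> my_stuff;
};

extern A a;

void C::IterateOverStuff()
{
    ScopedLock guard(a.mutex); // assumed RAII lock type
    for (list<T>::const_iterator it = a.my_stuff.begin(); it != a.my_stuff.end(); ++it)
    {
        // ... safe to read while the lock is held
    }
}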
When you return the list, return it enclosed in a class that locks/unlocks the mutex in its constructor/destructor. Something along the lines of
class LockedIterable {
public:
LockedIterable(const list<T> &l, Mutex &mutex) : list_(l), mutex_(mutex)
{lock(mutex);}
LockedIterable(const LockedIterable &other) : list_(other.list_), mutex_(other.mutex_) {
// may be tricky - may be wrap mutex_/list_ in a separate structure and manage it via shared_ptr?
}
~LockedIterable(){unlock(mutex);}
list<T>::const_iterator begin(){return list_.begin();}
list<T>::const_iterator end(){return list_.end();}
private:
const list<T> &list_;
Mutex &mutex_;
};
class A {
...
LockedIterable MyStuff() { return LockedIterable(my_stuff, mutex); }
};
The tricky part is writing the copy constructor so that your mutex does not have to be recursive. Or just use auto_ptr.
Oh, and reader/writer lock is indeed a better solution than mutex here.
I'm wondering whether there is a good design pattern or idiom to realize the following:
You have an existing class that provides only a visitor interface, as follows
class Visitor {
public:
virtual ~Visitor() { }
virtual void visit(Node *n) = 0;
};
class Tree {
public:
void accept(Visitor *v);
};
And you want to have an interface that can be used as follows, which should iterate through the tree in the same order in which the visitor would have its visit function called.
for(iterator it(...), ite(...); it != ite; ++it) {
/* process node */
}
The problem appears to be that once we call visit, we are no longer in control and can't temporarily "go back" to the loop body to execute the action for one node. This looks like something that should occur regularly in real-world programs. Any idea how to solve it?
In the general case, I don't think it's possible, at least not cleanly.
At least as it's usually defined, an iterator expects to deal with a homogeneous collection. I.e., an iterator is normally defined something like:
template <class Element>
class iterator // ...
...so a specific iterator can only work with elements of one specific type. The most you can do to work with differing types is create an iterator to (a pointer/reference to) a base class, and let it deal with objects of derived classes.
By contrast, it's pretty easy to write a visitor like this:
class MyVisitor {
public:
void VisitOneType(OneType const *element);
void VisitAnotherType(AnotherType const *element);
};
This can visit nodes of either OneType or AnotherType, even if the two are completely unrelated. Basically, you have one Visit member function in your Visitor class for every different type of class that it will be able to visit.
Looked at from a slightly different direction, an iterator is basically a specialized form of visitor that only works for one type of object. You get a little more control over the visitation pattern in exchange for losing the ability to visit unrelated types of objects.
If you only need to deal with one type (though that one type may be a base class, and the visited objects are of various derived types), then the obvious method would be to build a "bridge" class that visits objects (Tree nodes, in your example), and when its visit is called, it just copies the address of the node it's visiting into some collection that supports iterators:
template <class T>
class Bridge {
std::vector<T *> nodes;
public:
virtual void visit(T *n) {
nodes.push_back(n);
}
typedef typename std::vector<T *>::iterator iterator;
iterator begin() { return nodes.begin(); }
iterator end() { return nodes.end(); }
};
Using this would be a two-step process: first visit the nodes like a visitor normally would, then having collected together the nodes of interest you can iterate through them just like you would any other collection that provides iterators. At that point, your visitation pattern is limited only by the class of iterator provided by the collection you use in your bridge.
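A hedged sketch of that two-step use in terms of the question's Visitor/Tree interface; the CollectingVisitor glue class is assumed, since the Bridge above is not itself derived from Visitor:
#include <vector>

// Assumed glue: a collecting visitor derived from the question's Visitor interface.
class CollectingVisitor : public Visitor
{
    std::vector<Node*> nodes_;
public:
    virtual void visit(Node* n) { nodes_.push_back(n); }
    std::vector<Node*>::iterator begin() { return nodes_.begin(); }
    std::vector<Node*>::iterator end()   { return nodes_.end(); }
};

void process(Tree& tree)
{
    CollectingVisitor collector;
    tree.accept(&collector); // step 1: let the tree drive the visitation as usual

    for (std::vector<Node*>::iterator it = collector.begin(); it != collector.end(); ++it)
    {
        /* step 2: process *it in an ordinary loop */
    }
}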
I had this problem in a real-world setting with an R-tree implementation that provided a visitor interface, whereas I needed an iterator interface. The suggestion by Jerry above works only if you can accept storing all the results in a collection. That may result in high memory consumption if your result set is huge and you don't really need to store them.
One solution that will work for sure is to launch the visitor in a separate thread and wait on a condition variable for the results. When a visit call is made, you store the current result in a shared temp location and wait on another condition variable for the next request. You signal the caller (main) thread's condition variable before you wait on your own. The caller, which implements the iterator interface, can then return the value stored at the temp location. During the next iteration it signals the visitor thread's condition variable and waits on its own for the next item. Unfortunately, this is somewhat costly if you do it on a per-item basis. You can buffer some items to improve the performance.
What we really need is an extra stack and to alternate between two contexts. This abstraction is provided by coroutines. In C++, boost::coroutine provides a clean implementation. Below I include a full example of how visitor pattern can be adapted into an iterator pattern.
#include <iostream>
#include <boost/bind.hpp>
#include <boost/coroutine/coroutine.hpp>
template<typename Data>
class Visitor
{
public:
virtual ~Visitor() { }
virtual bool visit(Data const & data) = 0;
};
template <typename Data>
class Visitable
{
public:
virtual ~Visitable() {}
virtual void performVisits(Visitor<Data> & visitor) = 0;
};
// Assume we cannot change the code that appears above
template<typename Data>
class VisitableIterator : public Visitor<Data>
{
private:
typedef boost::coroutines::coroutine<void()> coro_t;
public:
VisitableIterator(Visitable<Data> & visitable)
: valid_(true), visitable_(visitable)
{
coro_ = coro_t(boost::bind(&VisitableIterator::visitCoro, this, _1));
}
bool isValid() const
{
return valid_;
}
Data const & getData() const
{
return *data_;
}
void moveToNext()
{
if(valid_)
coro_();
}
private:
void visitCoro(coro_t::caller_type & ca)
{
ca_ = & ca;
visitable_.performVisits(*static_cast<Visitor<Data> *>(this));
valid_ = false;
}
bool visit(Data const & data)
{
data_ = &data;
(*ca_)();
return false;
}
private:
bool valid_;
Data const * data_;
coro_t coro_;
coro_t::caller_type * ca_;
Visitable<Data> & visitable_;
};
// Example use below
class Counter : public Visitable<int>
{
public:
Counter(int start, int end)
: start_(start), end_(end) {}
void performVisits(Visitor<int> & visitor)
{
bool terminated = false;
for (int current=start_; !terminated && current<=end_; ++current)
terminated = visitor.visit(current);
}
private:
int start_;
int end_;
};
class CounterVisitor : public Visitor<int>
{
public:
bool visit(int const & data)
{
std::cerr << data << std::endl;
return false; // not terminated
}
};
int main(void)
{
{ // using a visitor
Counter counter(1, 100);
CounterVisitor visitor;
counter.performVisits(visitor);
}
{ // using an iterator
Counter counter(1, 100);
VisitableIterator<int> iter(static_cast<Visitable<int>&>(counter));
for (; iter.isValid(); iter.moveToNext()) {
int data = iter.getData();
std::cerr << data << std::endl;
}
}
return EXIT_SUCCESS;
}
Building the traversal logic into the visitor implementations is indeed not flexible. A usable way to cleanly separate traversal of composite structures from visitation is visitor combinators (there are other papers, feel free to google for them).
These slides about the same topic may also be of interest. They explain how to get clean syntax à la boost::spirit rules.