Basically I need to do reference counting on certain resources (like an integer index) that are not immediately equivalent to pointer/address semantics; I need to pass the resource around and call a certain custom function when the count reaches zero. Also, read/write access to the resource is not a simple pointer dereference but something more complex. I don't think boost::shared_ptr will fit the bill here, but maybe I'm missing some other boost equivalent class I might use?
Example of what I need to do:
struct NonPointerResource
{
    NonPointerResource(int a) : rec(a) {}
    int rec;
};

int createResource()
{
    data BasicResource("get/resource");
    boost::shared_resource< NonPointerResource > r( BasicResource.getId(),
                boost::function< BasicResource::RemoveId >() );
    TypicalUsage( r );
}

//when r goes out of scope, it will call BasicResource::RemoveId( NonPointerResource& ) or something similar

int TypicalUsage( boost::shared_resource< NonPointerResource > r )
{
    data* d = access_object( r );
    // do something with d
}
Allocate NonPointerResource on the heap and just give it a destructor as normal.
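A minimal sketch of that suggestion, assuming boost::shared_ptr does the counting (removeId here is a hypothetical stand-in for BasicResource::RemoveId from the question):

#include <boost/shared_ptr.hpp>

// Hypothetical cleanup call standing in for BasicResource::RemoveId.
void removeId(int /*id*/) { /* release the underlying id here */ }

struct NonPointerResource
{
    explicit NonPointerResource(int a) : rec(a) {}
    ~NonPointerResource() { removeId(rec); }   // the "custom function" runs here
    int rec;
};

boost::shared_ptr<NonPointerResource> createResource(int id)
{
    // heap-allocate; the last shared_ptr to go away runs ~NonPointerResource()
    return boost::shared_ptr<NonPointerResource>(new NonPointerResource(id));
}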
Maybe boost::intrusive_ptr could fit the bill. Here's a RefCounted base class and ancillary functions that I'm using in some of my code. Instead of delete ptr you can specify whatever operation you need.
struct RefCounted {
    int refCount;
    RefCounted() : refCount(0) {}
    virtual ~RefCounted() { assert(refCount==0); }
};

// boost::intrusive_ptr expects the following functions to be defined:
inline
void intrusive_ptr_add_ref(RefCounted* ptr) { ++ptr->refCount; }
inline
void intrusive_ptr_release(RefCounted* ptr) { if (!--ptr->refCount) delete ptr; }
With that in place you can then have
boost::intrusive_ptr<DerivedFromRefCounted> myResource = ...
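A small usage sketch, assuming the RefCounted machinery above is in scope (TextureHandle is a made-up derived type wrapping an integer id):

#include <boost/intrusive_ptr.hpp>

struct TextureHandle : RefCounted
{
    int id;
    explicit TextureHandle(int i) : id(i) {}
    ~TextureHandle() { /* release the underlying id here instead of a plain delete */ }
};

void useIt()
{
    boost::intrusive_ptr<TextureHandle> tex(new TextureHandle(42));
    boost::intrusive_ptr<TextureHandle> copy = tex;   // refCount == 2
}   // both copies gone: intrusive_ptr_release() hits zero and runs the cleanup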
Here is a small example of using shared_ptr<void> as a counted handle.
Preparing proper create/delete functions lets us use shared_ptr<void> as a handle to any resource, in a sense.
However, as you can see, since this is weakly typed, using it is somewhat inconvenient...
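Since the linked example isn't reproduced here, a small sketch of the idea (open_handle/close_handle are hypothetical C-style functions):

#include <boost/shared_ptr.hpp>

typedef int Handle;
Handle open_handle()        { return 42; }
void   close_handle(Handle) { /* release the OS resource here */ }

struct HandleCloser {
    void operator()(void* p) const {
        Handle* h = static_cast<Handle*>(p);
        close_handle(*h);
        delete h;
    }
};

boost::shared_ptr<void> make_handle()
{
    return boost::shared_ptr<void>(new Handle(open_handle()), HandleCloser());
}

// Reading the handle back needs a cast -- the "weakly typed" inconvenience:
// Handle h = *static_cast<Handle*>(p.get());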
I've these plain C functions from a library:
struct SAlloc;
SAlloc *new_salloc();
void free_salloc(SAlloc *s);
Is there any way I can wrap this in C++ to a smart pointer (std::unique_ptr), or otherwise a RAII wrapper ?
I'm mainly curious about the possibilities of the standard library without creating my own wrapper/class.
Yes, you can reuse unique_ptr for this. Just make a custom deleter.
struct salloc_deleter {
    void operator()(SAlloc* s) const {
        free_salloc(s);
    }
};
using salloc_ptr = std::unique_ptr<SAlloc, salloc_deleter>;
I like R. Martinho Fernandes' answer, but here's a shorter (but less efficient) alternative:
auto my_alloc = std::shared_ptr<SAlloc>(new_salloc(), free_salloc);
Is there any way I can wrap this in C++ to a smart pointer (std::unique_ptr), or otherwise a RAII wrapper ?
Yes. What you need here is a factory function that creates objects with the smart pointer correctly initialized (and ensures you always construct pointer instances correctly):
std::shared_ptr<SAlloc> make_shared_salloc()
{
return std::shared_ptr<SAlloc>(new_salloc(), free_salloc);
}
// Note: this doesn't work (see comment from #R.MartinhoFernandes below)
std::unique_ptr<SAlloc> make_unique_salloc()
{
return std::unique_ptr<SAlloc>(new_salloc(), free_salloc);
}
You can assign the result of calling these functions to other smart pointers (as needed) and the pointers will be deleted correctly.
Edit:
Alternately, you could particularize std::make_shared for your SAlloc.
Edit 2:
The second function (make_unique_salloc) doesn't compile; a separate deleter functor would need to be implemented to make it work.
Another variation:
#include <memory>
struct SAlloc {
int x;
};
SAlloc *new_salloc() { return new SAlloc(); }
void free_salloc(SAlloc *s) { delete s; }
struct salloc_freer {
void operator()(SAlloc* s) const { free_salloc(s); }
};
typedef std::unique_ptr<SAlloc, salloc_freer> unique_salloc;
template<typename... Args>
unique_salloc make_salloc(Args&&... args) {
auto retval = unique_salloc( new_salloc() );
if(retval) {
*retval = SAlloc{std::forward<Args>(args)...};
}
return retval;
}
int main() {
unique_salloc u = make_salloc(7);
}
I included a body for SAlloc and the various functions to make it an SSCCE (http://sscce.org/) -- the implementation of those doesn't matter.
So long as you can see the members of SAlloc, the above will let you construct them like in an initializer list at the same time as you make the SAlloc, and if you don't pass in any arguments it will zero the entire SAlloc struct.
Trying to learn something new every day, I'd be interested to hear whether the following is good or bad design.
I'm implementing a class A that caches objects of itself in a static private member variable std::map<> cache. The user of A should only have access to pointers to elements in the map, because a full copy of A is expensive and not needed. A new A is only created if it is not yet available in the map, as construction of A needs some heavy lifting. Ok, here's some code:
class B;

class A {
public:
    static A* get_instance(const B & b, int x) {
        int hash = A::hash(b, x);
        map<int, A>::iterator found = cache.find(hash);
        if (found == cache.end())
            found = cache.insert(make_pair(hash, A(b, x))).first;
        return &(found->second);
    }

    static int hash(const B & b, int x) {
        // unique hash function for combination of b and x
    }
    // ...

private:
    A(const B & b, int x) : _b(b), _x(x) {
        // do some heavy computation, store plenty of results
        // in private members
    }

    static map<int, A> cache;

    B _b;
    int _x; // added, so A::hash() makes sense (instead of B::hash())
    // ...
};
Is there anything that is wrong with the code above? Are there any pitfalls,
do I miss memory management problems or anything else?
Thank you for your feedback!
The implementation is intended to only allow you to create items via get_instance(). You should ideally make your copy-constructor and assignment operator private.
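In C++03 that is typically done by declaring them private and leaving them undefined. A sketch (note that the map shown stores A by value, so the insert path still needs copying unless the cache is changed to hold pointers):

class A {
    // ... as above ...
private:
    A(const A&);             // declared but never defined:
    A& operator=(const A&);  // accidental copies fail to compile (or link)
};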
It would not be thread-safe. You can use the following instead:
const boost::once_flag BOOST_ONCE_INIT_CONST = BOOST_ONCE_INIT;

struct AControl
{
    boost::once_flag onceFlag;
    boost::shared_ptr<A> aInst;

    void create( const B& b, int x )
    {
        aInst.reset( new A(b, x) );
    }

    AControl() : onceFlag( BOOST_ONCE_INIT_CONST )
    {
    }

    A& get( const B& b, int x )
    {
        boost::call_once( onceFlag, boost::bind( &AControl::create, this, b, x ) );
        return *aInst;
    }
};
Change the map to a map<int, AControl>.
Have a mutex and use it thus:
AControl * ctrl;
{
    mutex::scoped_lock lock(mtx);
    ctrl = &cache[hash];
}
return ctrl->get(b,x);
Ideally only get_instance() will be static in your class. Everything else is private implementation detail and goes into the compilation unit of your class, including AControl.
Note that you could do this much more simply by just locking across the entire process of looking up in the map and constructing, but then you would hold the lock for longer while the expensive construction runs. As written, this gives record-level locking once the item has been inserted. A later thread may find the item uninitialised, but the boost::once logic ensures it is created exactly once.
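Putting the pieces together, the lookup path might look roughly like this (a sketch; cache and mtx are assumed to be defined alongside A's implementation):

static std::map<int, AControl> cache;
static boost::mutex mtx;

A& get_instance(const B& b, int x)
{
    int hash = A::hash(b, x);
    AControl* ctrl;
    {
        boost::mutex::scoped_lock lock(mtx);   // held only for the map lookup/insert
        ctrl = &cache[hash];                   // default-constructs an AControl on first use
    }
    return ctrl->get(b, x);                    // boost::call_once guards the expensive A(b, x)
}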
Any time you use globals (in this case the static map) you have to worry about concurrency issues if this is used across multiple threads. For example, if two threads were trying to get a particular instance at once, they could both create an object resulting in duplicates. Even worse, if they both tried to update the map at the same time it could get corrupted. You'd have to use mutexes to control access to the container.
If it's single-threaded only then there's no issue until someone decides it needs to be made multi-threaded in the future.
Also, as a style note: while names starting with an underscore followed by a lower-case letter are technically legal, avoiding any symbols that start with underscores saves you from accidentally breaking the reserved-name rules and getting weird behavior.
I think these are 3 separate things that you mix together inside A:
the class A itself (what its instances are supposed to do),
pooling of instances for cache purposes,
having such a static singleton pool for a certain type.
I think they should be separate in the code, not all together inside A.
That means:
write your class A without any consideration of how it should be allocated.
write a generic module that performs pool caching of objects, along the lines of:
template< typename T > class PoolKey { ... };

template< typename T > class PoolCache
{
    //data
    private: std::map< .... > map_;

    //methods
    public: template< typename B > PoolKey< T > get_instance( B b );
    public: void release_instance( PoolKey< T > );
    // notice that these aren't static function members
};
create a singleton instance of PoolCache somewhere and use it:
PoolCache<A>& myAPool()
{
    static PoolCache<A> s;
    return s;
    //you should use some safe singleton idiom.
}

int main()
{
    B b;
    PoolKey<A> const aKey( myAPool().get_instance( b ) );
    A* const a( aKey.get() );
    //...
    myAPool().release_instance( aKey ); //not using it anymore
    /*or else the destructor of PoolKey<A> should probably do some reference count and let the pool know this instance isn't needed anymore*/
}
I'm doing a linear genetic programming project, where programs are bred and evolved by means of natural evolution mechanisms. Their "DNA" is basically a container (I've used arrays and vectors successfully) which contains function pointers to a set of available functions.
Now, for simple problems, such as mathematical problems, I could use one type-defined function pointer which could point to functions that all return a double and all take as parameters two doubles.
Unfortunately this is not very practical. I need to be able to have a container which can have different sorts of function pointers, say a function pointer to a function which takes no arguments, or a function which takes one argument, or a function which returns something, etc (you get the idea)...
Is there any way to do this using any kind of container ?
Could I do that using a container which contains polymorphic classes, which in their turn have various kinds of function pointers?
I hope someone can direct me towards a solution because redesigning everything I've done so far is going to be painful.
A typical idea for virtual machines is to have a separate stack that is used for argument and return value passing.
Your functions can still all be of type void fn(void), but you do argument passing and returning manually.
You can do something like this:
class ArgumentStack {
public:
void push(double ret_val) { m_stack.push_back(ret_val); }
double pop() {
double arg = m_stack.back();
m_stack.pop_back();
return arg;
}
private:
std::vector<double> m_stack;
};
ArgumentStack stack;
...so a function could look like this:
// Multiplies two doubles on top of the stack.
void multiply() {
// Read arguments.
double a1 = stack.pop();
double a2 = stack.pop();
// Multiply!
double result = a1 * a2;
// Return the result by putting it on the stack.
stack.push(result);
}
This can be used in this way:
// Calculate 4 * 2.
stack.push(4);
stack.push(2);
multiply();
printf("2 * 4 = %f\n", stack.pop());
Do you follow?
You cannot put a polymorphic function in a class, since functions that take (or return) different things cannot be used in the same way (with the same interface), which is something required by polymorphism.
The idea of having a class providing a virtual function for any possible function type you need would work, but (without knowing anything about your problem!) its usage feels weird to me: what functions would a derived class override? Aren't your functions uncorrelated?
If your functions are uncorrelated (if there's no reason to group them as members of the same class, or if they would be static functions since they don't need member variables), you should opt for something else... If you pick your functions at random you could just have several different containers, one per function type, and pick a container at random and then a function within it (see the sketch below).
Could you make some examples of what your functions do?
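For example, a rough sketch of the "one container per signature" idea mentioned above (the primitive set here is invented):

#include <cstdlib>
#include <vector>

// Made-up primitive set, grouped into one container per signature.
double add(double a, double b) { return a + b; }
double mul(double a, double b) { return a * b; }
double neg(double a)           { return -a; }

std::vector<double (*)(double, double)> binaryOps;
std::vector<double (*)(double)>         unaryOps;

void setup()
{
    binaryOps.push_back(&add);
    binaryOps.push_back(&mul);
    unaryOps.push_back(&neg);
}

void pickRandomPrimitive()
{
    if (std::rand() % 2 == 0) {
        double (*f)(double, double) = binaryOps[std::rand() % binaryOps.size()];
        // ...emit or execute a two-argument instruction using f...
    } else {
        double (*g)(double) = unaryOps[std::rand() % unaryOps.size()];
        // ...emit or execute a one-argument instruction using g...
    }
}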
What you describe can probably be implemented with a container of std::function or of a discriminated union such as boost::variant.
For example:
#include <functional>
#include <cstdio>
#include <iostream>
struct F {
virtual ~F() {}
};
template< class Return, class Param = void >
struct Func : F {
std::function< Return( Param ) > f;
Func( std::function< Return( Param ) > const& f ) : f( f ) {}
Return operator()( Param const& x ) const { return f( x ); }
};
template< class Return >
struct Func< Return, void > : F {
std::function< Return() > f;
Func( std::function< Return() > const& f ) : f( f ) {}
Return operator()() const { return f(); }
};
static void f_void_void( void ) { puts("void"); }
static int f_int_int( int x ) { return x; }
int main()
{
F *f[] = {
new Func< void >( f_void_void ),
new Func< int, int >( f_int_int ),
};
for ( F **a = f, **e = f + 2; a != e; ++ a ) {
if ( auto p = dynamic_cast< Func< void >* >( *a ) ) {
(*p)();
}
else if ( auto p = dynamic_cast< Func< int, int >* >( *a ) ) {
std::cout<< (*p)( 1 ) <<'\n';
}
}
}
But I'm not sure this is really what you want...
What do you think about Alf P. Steinbach's comment?
This sort of thing is possible with a bit of work. First it's important to understand why something simpler is not possible: in C/C++, the exact mechanism by which arguments are passed to functions and how return values are obtained from the function depends on the types (and sizes) of the arguments. This is defined in the application binary interface (ABI) which is a set of conventions that allow C++ code compiled by different compilers to interoperate. The language also specifies a bunch of implicit type conversions that occur at the call site. So the short and simple answer is that in C/C++ the compiler cannot emit machine code for a call to a function whose signature is not known at compile time.
Now, you can of course implement something like Javascript or Python in C++, where all values (relevant to these functions) are typed dynamically. You can have a base "Value" class that can be an integer, float, string, tuples, lists, maps, etc. You could use std::variant, but in my opinion this is actually syntactically cumbersome and you're better off doing it yourself:
enum class Type {integer, real, str, tuple, map};
struct Value
{
// Returns the type of this value.
virtual Type type() const = 0;
// Put any generic interfaces you want to have across all Value types here.
};
struct Integer: Value
{
int value;
Type type() const override { return Type::integer; }
};
struct String: Value
{
std::string value;
Type type() const override { return Type::str; }
};
struct Tuple: Value
{
    std::vector<Value*> value;
    Type type() const override { return Type::tuple; }
};
// etc. for whatever types are interesting to you.
Now you can define a function as anything that takes a single Value* and returns a single Value*. Multiple input or output arguments can be passed in as a Tuple, or a Map:
using Function = Value* (*)(Value*);
All your function implementations will need to get the type and do something appropriate with the argument:
Value* increment(Value* x)
{
    switch (x->type())
    {
        case Type::integer:
            return new Integer(((Integer*) x)->value + 1);
        case Type::real:
            return new Real(((Real*) x)->value + 1.0);
        default:
            throw TypeError("expected an integer or real argument.");
    }
}
increment is now compatible with the Function type and can be stored in mFuncs. You can now call a function of unknown type on arguments of unknown type and you will get an exception if the arguments don't match, or a result of some unknown type if the arguments are compatible.
Most probably you will want to store the function signature as something you can introspect, i.e. dynamically figure out the number and type of arguments that a Function takes. In this case you can make a base Function class with the necessary introspection functions and provide it an operator () to make it look something like calling a regular function. Then you would derive and implement Function as needed.
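As a sketch of that last idea, building on the Value/Type/increment definitions above (the names arity, argType, and IncrementFn are illustrative, not from any library):

struct Function
{
    virtual ~Function() {}
    virtual std::size_t arity() const = 0;          // how many values the function expects
    virtual Type argType(std::size_t i) const = 0;  // expected type of argument i
    virtual Value* call(Value* args) = 0;           // args is a single Value* or a Tuple*
    Value* operator()(Value* args) { return call(args); }  // makes invocation look ordinary
};

struct IncrementFn : Function
{
    std::size_t arity() const override { return 1; }
    Type argType(std::size_t) const override { return Type::integer; } // also accepts real at runtime
    Value* call(Value* args) override { return increment(args); }      // reuses increment() above
};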
This is a sketch, but hopefully contains enough pointers to show the way. There are also more type-safe ways to write this code (I like C-style casts when I've already checked the type, but some people might insist you should use dynamic_cast instead), but I figured that is not the point of this question. You will also have to figure out how Value* objects lifetime is managed and that is an entirely different discussion.
Here is my issue.
I have a class to create timed events. It takes in:
A function pointer of void (*func)(void* arg)
A void* to the argument
A delay
The issue is that I may want to create on-the-fly variables that I don't want to be static variables in the class, or globals. Without one of those, I can't do something like:
void doStuff(void *arg)
{
somebool = *(bool*)arg;
}
void makeIt()
{
bool a = true;
container->createTimedEvent(doStuff,(void*)&a,5);
}
That won't work because the bool gets destroyed when the function returns, so I'd have to allocate these on the heap. The issue then becomes who allocates and who deletes. What I'd like to do is be able to take in anything, then copy its memory and manage it in the timed event class. But I don't think I can do memcpy since I don't know the type.
What would be a good way to achieve this, where the timed event is responsible for memory management?
Thanks
I do not use boost
class AguiTimedEvent {
    void (*onEvent)(void* arg);
    void* argument;
    AguiWidgetBase* caller;
    double timeStamp;
public:
    void call() const;
    bool expired() const;
    AguiWidgetBase* getCaller() const;
    AguiTimedEvent();
    AguiTimedEvent(void(*Timefunc)(void* arg), void* arg, double timeSec, AguiWidgetBase* caller);
};
void AguiWidgetContainer::handleTimedEvents()
{
    for (std::vector<AguiTimedEvent>::iterator it = timedEvents.begin(); it != timedEvents.end();)
    {
        if (it->expired())
        {
            it->call();
            it = timedEvents.erase(it);
        }
        else
            it++;
    }
}
void AguiWidgetBase::createTimedEvent( void (*func)(void* data),void* data,double timeInSec )
{
if(!getWidgetContainer())
return;
getWidgetContainer()->addTimedEvent(AguiTimedEvent(func,data,timeInSec,this));
}
void AguiWidgetContainer::addTimedEvent( const AguiTimedEvent &timedEvent )
{
timedEvents.push_back(timedEvent);
}
Why would you not use boost::shared_ptr?
It offers the storage duration you require, since the underlying object is destroyed only when all shared_ptrs pointing to it have been destroyed.
Its reference counting is also thread-safe.
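For instance, if createTimedEvent were changed to take a boost::shared_ptr<void> instead of a raw void*, the question's example might look like this (a sketch only; container and createTimedEvent are the question's names):

#include <boost/shared_ptr.hpp>

void doStuff(const boost::shared_ptr<void>& arg)
{
    bool somebool = *boost::static_pointer_cast<bool>(arg);
    // ... use somebool ...
    (void)somebool;
}

void makeIt()
{
    boost::shared_ptr<bool> a(new bool(true));
    container->createTimedEvent(doStuff, a, 5);  // the event stores a copy of the shared_ptr
}   // 'a' goes out of scope here, but the bool stays alive as long as the event holds it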
Using C++0x unique_ptr is perfect for the job. This is a future standard, but unique_ptr is already supported under G++ and Visual Studio. For C++98 (current standard), auto_ptr works like a harder to use version of unique_ptr... For C++ TR1 (implemented in Visual Studio and G++), you can use std::tr1::shared_ptr.
Basically, you need a smart pointer. Here's how unique_ptr would work:
unique_ptr<bool> makeIt(){ // More commonly, called a "source"
    unique_ptr<bool> a(new bool(true));
    container->createTimedEvent(doStuff, a.get(), 5);
    return a; // ownership is moved out to the caller
}
When you use the code later...
void someFunction(){
unique_ptr<bool> stuff = makeIt();
} // stuff is deleted here, because unique_ptr deletes
// things when they leave their scope
You can also use it as a function "sink"
void sink(unique_ptr<bool> ptr){
// Use the pointer somehow
}
void somewhereElse(){
    unique_ptr<bool> stuff = makeIt();
    sink(std::move(stuff));
    // stuff is now deleted! stuff points to null now
}
Aside from that, you can use unique_ptr like a normal pointer, apart from its move-only semantics. There are many smart pointers; unique_ptr is just one of them. shared_ptr is implemented in both Visual Studio and G++ and is the more typical one. I personally like to use unique_ptr as often as possible, however.
If you can't use boost or tr1, then what I'd do is write my own function that behaves like auto_ptr. In fact that's what I've done on a project here that doesn't have any boost or tr1 access. When all of the events who care about the data are done with it it automatically gets deleted.
You can just change your function definition to take in an extra parameter that represents the size of the object passed in. Then just pass the size down. So your new function declaration looks like this:
void (*func)(void* arg, size_t size)
void doStuff(void *arg, size_t size)
{
    somebool = *(bool*)arg;
    memcpy( myStorage, arg, size ); // copy the caller's data into storage owned by the event
}
void makeIt()
{
bool a = true;
container->createTimedEvent(doStuff,(void*)&a,sizeof(bool), 5);
}
Then you can pass variables that are still on the stack and memcpy them in the timed event class. The only problem is that you don't know the type any more... but that's what happens when you cast to void*
Hope that helps.
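One way to make the event own the copy (a sketch; OwningTimedEvent and its members are made up, and memcpy is only safe for trivially copyable data):

#include <cstdlib>
#include <cstring>

class OwningTimedEvent {
    void (*onEvent)(void* arg, size_t size);
    void* argument;     // heap copy owned by the event
    size_t argSize;
public:
    OwningTimedEvent(void (*func)(void*, size_t), const void* arg, size_t size)
        : onEvent(func), argSize(size)
    {
        argument = std::malloc(size);
        std::memcpy(argument, arg, size);   // copy the caller's (possibly stack) data
    }
    ~OwningTimedEvent() { std::free(argument); }
    void call() const { onEvent(argument, argSize); }
private:
    OwningTimedEvent(const OwningTimedEvent&);            // copying would double-free,
    OwningTimedEvent& operator=(const OwningTimedEvent&); // so it is forbidden in this sketch
};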
You should re-work your class to use inheritance, not a function pointer.
class AguiEvent {
public:
    virtual void Call() = 0;
    virtual ~AguiEvent() {}
};
class AguiTimedEvent {
std::auto_ptr<AguiEvent> event;
double timeSec;
AguiWidgetBase* caller;
public:
AguiTimedEvent(std::auto_ptr<AguiEvent> ev, double time, AguiWidgetBase* base)
: event(ev)
, timeSec(time)
, caller(base) {}
void call() { event->Call(); }
// All the rest of it
};
void MakeIt() {
    class someclass : public AguiEvent {
        bool MahBool;
    public:
        someclass() { MahBool = false; }
        void Call() {
            // access to MahBool through this.
        }
    };
something->somefunc(AguiTimedEvent(new someclass())); // problem solved
}
I just got burned by a bug that is partially due to my lack of understanding, and partially due to what I think is suboptimal design in our codebase. I'm curious as to how my 5-minute solution can be improved.
We're using ref-counted objects, where we have AddRef() and Release() on objects of these classes. One particular object is derived from the ref-count object, but a common function to get an instance of these objects (GetExisting) hides an AddRef() within itself without advertising that it is doing so. This necessitates doing a Release at the end of the functional block to free the hidden ref, but a developer who didn't inspect the implementation of GetExisting() wouldn't know that, and someone who forgets to add a Release at the end of the function (say, during a mad dash of bug-fixing crunch time) leaks objects. This, of course, was my burn.
void SomeFunction(ProgramStateInfo *P)
{
ThreadClass *thread = ThreadClass::GetExisting( P );
// some code goes here
bool result = UseThreadSomehow(thread);
// some code goes here
thread->Release(); // Need to do this because GetExisting() calls AddRef()
}
So I wrote up a little class to avoid the need for the Release() at the end of these functions.
class ThreadContainer
{
private:
ThreadClass *m_T;
public:
ThreadContainer(ThreadClass *T){ m_T = T; }
~ThreadContainer() { if(m_T) m_T->Release(); }
ThreadClass * Thread() const { return m_T; }
};
So that now I can just do this:
void SomeFunction(ProgramStateInfo *P)
{
ThreadContainer ThreadC(ThreadClass::GetExisting( P ));
// some code goes here
bool result = UseThreadSomehow(ThreadC.Thread());
// some code goes here
// Automagic Release() in ThreadC Destructor!!!
}
What I don't like is that to access the thread pointer, I have to call a member function of ThreadContainer, Thread(). Is there some clever way that I can clean that up so that it's syntactically prettier, or would anything like that obscure the meaning of the container and introduce new problems for developers unfamiliar with the code?
Thanks.
Use boost::shared_ptr.
It is possible to define your own destructor function, as in the following example: http://www.boost.org/doc/libs/1_38_0/libs/smart_ptr/sp_techniques.html#com
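Applied to the question's code, that technique might look like this (a sketch following the custom-deleter pattern from the linked page):

#include <boost/shared_ptr.hpp>
#include <boost/bind.hpp>

void SomeFunction(ProgramStateInfo *P)
{
    // The deleter calls Release() instead of delete, balancing the hidden AddRef().
    boost::shared_ptr<ThreadClass> thread(ThreadClass::GetExisting(P),
                                          boost::bind(&ThreadClass::Release, _1));
    bool result = UseThreadSomehow(thread.get());
    // some code goes here
    // Release() runs automatically when 'thread' goes out of scope.
}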
Yes, you can implement operator ->() for the class, which will recursively call operator ->() on whatever you return:
class ThreadContainer
{
private:
ThreadClass *m_T;
public:
ThreadContainer(ThreadClass *T){ m_T = T; }
~ThreadContainer() { if(m_T) m_T->Release(); }
ThreadClass * operator -> () const { return m_T; }
};
It's effectively using smart pointer semantics for your wrapper class:
ThreadClass *t = new ThreadClass();
...
ThreadContainer tc(t);
...
tc->SomeThreadFunction(); // invokes tc->t->SomeThreadFunction() behind the scenes...
You could also write a conversion function to enable your UseThreadSomehow(ThreadContainer tc) type calls in a similar way.
If Boost is an option, I think you can set up a shared_ptr to act as a smart reference as well.
Take a look at ScopeGuard. It allows syntax like this (shamelessly stolen from that link):
{
FILE* topSecret = fopen("cia.txt", "r");
ON_BLOCK_EXIT(std::fclose, topSecret);
... use topSecret ...
} // topSecret automagically closed
Or you could try Boost::ScopeExit:
void World::addPerson(Person const& aPerson) {
bool commit = false;
m_persons.push_back(aPerson); // (1) direct action
BOOST_SCOPE_EXIT( (&commit)(&m_persons) )
{
if(!commit)
m_persons.pop_back(); // (2) rollback action
} BOOST_SCOPE_EXIT_END
// ... // (3) other operations
commit = true; // (4) turn all rollback actions into no-op
}
I would recommend following bb's advice and using boost::shared_ptr<>. If boost is not an option, you can take a look at std::auto_ptr<>, which is simple and probably addresses most of your needs. Take into consideration that std::auto_ptr has special move semantics that you probably don't want to mimic.
The approach is to provide both the * and -> operators, together with a getter (for the raw pointer) and a release operation in case you want to give up control of the inner object.
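Spelled out, that interface might look roughly like this (ThreadHandle is a made-up name; copy semantics are deliberately disabled in this sketch):

class ThreadHandle
{
    ThreadClass* m_T;
    ThreadHandle(const ThreadHandle&);            // copying intentionally not supported here
    ThreadHandle& operator=(const ThreadHandle&);
public:
    explicit ThreadHandle(ThreadClass* T) : m_T(T) {}
    ~ThreadHandle() { if (m_T) m_T->Release(); }

    ThreadClass& operator*()  const { return *m_T; }
    ThreadClass* operator->() const { return m_T; }
    ThreadClass* get()        const { return m_T; }
    ThreadClass* release()          { ThreadClass* t = m_T; m_T = 0; return t; } // give up ownership
};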
You can add an automatic type-cast operator to return your raw pointer. This approach is used by Microsoft's CString class to give easy access to the underlying character buffer, and I've always found it handy. There might be some unpleasant surprises to be discovered with this method, as in any time you have an implicit conversion, but I haven't run across any.
class ThreadContainer
{
private:
ThreadClass *m_T;
public:
ThreadContainer(ThreadClass *T){ m_T = T; }
~ThreadContainer() { if(m_T) m_T->Release(); }
operator ThreadClass *() const { return m_T; }
};
void SomeFunction(ProgramStateInfo *P)
{
ThreadContainer ThreadC(ThreadClass::GetExisting( P ));
// some code goes here
bool result = UseThreadSomehow(ThreadC);
// some code goes here
// Automagic Release() in ThreadC Destructor!!!
}