I have a somewhat complex situation. It could probably be solved easily by inheritance, but I am curious now, and for some other reasons I would like to solve it this way.
I have a class which represents an algorithm, and over time a different solution has been implemented. Currently I have the original class, and the new one is a member of it.
I would like to keep both algorithms, with a switch so I can use either of them depending on the situation.
#include "B.h"

class A {
public:
    typedef void ( A::*FooFunction )( float, float );

    A::FooFunction m_fooFunction;
    B m_b;

    A( WhichUseEnum algorithm ) : m_b( B() )
    {
        switch( algorithm ) {
            case ALG_A:
                m_fooFunction = &A::FooFunctionOfA;
                break;
            case ALG_B:
                m_fooFunction = ??? // something like: m_b.FooFunctionOfB
                break;
        }
    }

    void FooFunction( float a, float b )
    {
        ( this->*m_fooFunction )( a, b );
    }

    void FooFunctionOfA( float, float ); // implementation at the .cpp
};
class B {
public:
    void FooFunctionOfB( float, float );
};
As you can see, I want to save a pointer to a function of the member m_b and call it the way FooFunction does. With the class's own function ( FooFunctionOfA() ) I was already successful, but the other case is much harder. I tried several ideas, but I could not find a version that the compiler accepted. :)
I found a similar question where the solution looked like this: &m_b.*m_b.FooFunctionOfB, and at that point I gave up.
If anybody has an idea, please do not hesitate to share it with me.
I am using C++ but not C++0x, AND I am forced to avoid the STL and Boost.
You need to use std::tr1::function. This class is built for exactly this purpose: it can wrap any function, member function, etc.
#include <tr1/functional> // just <functional> with MSVC

class A {
public:
    std::tr1::function<void(float, float)> m_fooFunction;
    B m_b;

    A( WhichUseEnum algorithm ) : m_b( B() )
    {
        using namespace std::tr1::placeholders;
        switch( algorithm ) {
            case ALG_A:
                m_fooFunction = std::tr1::bind(&A::FooFunctionOfA, this, _1, _2);
                break;
            case ALG_B:
                m_fooFunction = std::tr1::bind(&B::FooFunctionOfB, &m_b, _1, _2);
                break;
        }
    }

    void FooFunction( float a, float b )
    {
        m_fooFunction( a, b );
    }

    void FooFunctionOfA( float, float ); // implementation at the .cpp
};
class B {
public:
    void FooFunctionOfB( float, float );
};
Here I have used std::tr1::bind to define the two functions. As you can see, the calling syntax is much simpler too: just like a regular function call. std::tr1::bind can bind a lot more than just member functions and member function pointers, too. Gah, it's been a while since I had to use bind instead of lambdas.
The general rule of C++ is that if you're using function pointers or member function pointers and you're not interfacing with some old code, then you're almost certainly doing it wrong. This is no exception. If you're pre-C++0x then you may need to bind them, but that's about it.
If you're using a compiler so old that it doesn't even have TR1, you can use Boost as a substitute for these facilities: they were standardised from Boost, so the Boost equivalents are very close in functionality and interface.
You can provide a wrapper function that does the job.
#include "B.h"
class A {
public:
    typedef void ( A::*FooFunction )( float, float );

    A::FooFunction m_fooFunction;
    B m_b;

    A( WhichUseEnum algorithm ) : m_b( B() )
    {
        switch( algorithm ) {
            case ALG_A:
                m_fooFunction = &A::FooFunctionOfA;
                break;
            case ALG_B:
                m_fooFunction = &A::FooFunctionOfB;
                break;
        }
    }

    void FooFunction( float a, float b )
    {
        ( this->*m_fooFunction )( a, b );
    }

    void FooFunctionOfA( float, float ); // implementation at the .cpp

    // A wrapper function that redirects the call to B::FooFunctionOfB().
    void FooFunctionOfB( float a, float b )
    {
        this->m_b.FooFunctionOfB( a, b );
    }
};
class B {
public:
    void FooFunctionOfB( float, float );
};
What you should have done:
(1) Extract an interface from the original algorithm. The original algorithm then becomes one implementation of this virtual interface.
class FooFunctionInterface
{
public:
    virtual ~FooFunctionInterface() {} // needed: we delete through the base pointer
    virtual void Foo(float a, float b) = 0;
};

class OriginalFoo : public FooFunctionInterface
{
public:
    void Foo(float a, float b) override
    {
        /* ... original implementation ... */
    }
};
(2) Introduce the new algorithm as an alternate implementation of the interface.
class NewFoo : public FooFunctionInterface
{
public:
    void Foo(float a, float b) override
    {
        /* ... new implementation ... */
    }
};
(3) Introduce a factory function for selecting which implementation to use.
class NullFoo : public FooFunctionInterface
{
public:
    void Foo(float a, float b) override {}
};

std::unique_ptr<FooFunctionInterface> FooFactory(WhichUseEnum which)
{
    std::unique_ptr<FooFunctionInterface> algorithm(new NullFoo());
    switch(which)
    {
        case ALG_A: algorithm.reset(new OriginalFoo()); break;
        case ALG_B: algorithm.reset(new NewFoo()); break;
    }
    return algorithm;
}
Then your class A becomes a pimpl idiom, forwarding calls to the appropriate implementation.
class A
{
public:
    A(WhichUseEnum which)
        : pimpl_(FooFactory(which))
    {
    }

    void Foo(float a, float b)
    {
        pimpl_->Foo(a, b);
    }

private:
    std::unique_ptr<FooFunctionInterface> pimpl_;
};
This is a hugely cleaner approach than the mess you've made. You know it's cleaner when you consider what happens when you need to add a third implementation, then a fourth.
In my example, you extend the factory function and move on with your life. No other code changes.
A property is a public data member of a class that can be accessed by client code, where the owning object receives a notification (in the form of a get/set callback) whenever client code reads or modifies the property.
Some languages (like C#) have built-in properties.
I want to create a property for C++ that will be RAM-efficient.
The most obvious way to make a property is something like this:
struct Super;

struct Prop {
    Prop( Super * super ) : m_a(0), m_super(*super) {}

    int operator=( int a );
    operator int() const;

    int m_a;
    Super & m_super;
};

struct Super {
    Super() : one(this), two(this) {}

    void onSet() { printf("set"); }
    void onGet() { printf("get"); }

    Prop one;
    Prop two;
};

int Prop::operator=( int a ) { m_super.onSet(); m_a = a; return a; }
Prop::operator int() const { m_super.onGet(); return m_a; }
The trouble is that every property has to keep a pointer to the outer class, which I consider costly.
I want to know if there is a more RAM-efficient way to do this.
For example, if all the Super classes are generated, does the Standard allow computing a pointer to the outer class from the this pointer of the property?
Something like this:
struct Super;

struct Prop {
    Prop( uint8_t offset ) : m_a(0), m_offset(offset) {}

    int operator=( int a );
    operator int() const;

    int m_a;
    const uint8_t m_offset;
};

struct Super {
    // assuming exact order of properties
    Super() : one(0), two(sizeof(Prop)) {}

    void onSet() { printf("set"); }
    void onGet() { printf("get"); }

    Prop one;
    Prop two;
};

int Prop::operator=( int a ) {
    // subtract: the property lives inside Super, so this points past &super
    Super * super = (Super *)( ((char *)this) - m_offset );
    super->onSet(); m_a = a; return a;
}
Since this offset is a constant expression, it can (theoretically) be kept in ROM (or at least it can be smaller than sizeof(pointer)).
Or maybe there is another way?
C++ has properties as a language extension.
Look no further: MSVC has support.
The clang compiler also supports this syntax. I'm not sure about gcc.
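For reference, a sketch of the MSVC extension (clang accepts it with -fms-extensions; this is not standard C++ and the member names are made up):

```
#include <cstdio>

struct Super {
    int m_one;

    int get_one() { printf("get"); return m_one; }
    void set_one(int v) { printf("set"); m_one = v; }

    // Compiler extension: reads and writes of 'one' are rewritten into
    // get_one()/set_one() calls; 'one' itself occupies no storage.
    __declspec(property(get = get_one, put = set_one)) int one;
};

int main() {
    Super s;
    s.one = 5;      // calls set_one(5)
    int v = s.one;  // calls get_one()
    return v == 5 ? 0 : 1;
}
```

Because the property member occupies no storage at all, this is the most RAM-efficient option where the extension is available.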
Storing an offset can also be done.
Just calculate the offset from this in the constructor, like so:
Prop( Super& super ) {
    // somewhat unmaintainable - but may save some bytes
    m_offset = (uint8_t)( (char*)this - (char*)std::addressof(super) );
}
Then, when the property is used, calculate the owner back from this.
Please note the space saving may be less than it seems, due to alignment and padding.
I obviously don't know the context of your code, so this may be inconceivable in your specific implementation, but you could do something like
#include <vector>

class Prop {
public:
    Prop() : m_a(0) {}
    int operator=(int a) { m_a = a; return a; }
    int m_a;
};

class Super {
public:
    int set_prop(int index, int value) {
        m_props[index] = value;
        onSet();
        return value;
    }
private:
    void onSet() {}
    std::vector<Prop> m_props;
};
Obviously you need to initialize the vector and handle error cases etc., but the logic is there - if you only access the props through the super.
That leaves you with purely the size of the sequence of structs, with no pointers back to the super.
I have the following (kinda pseudo) code, which handles 2 containers of 2 different (but somewhat similar) types, and I hate having these duplications for addition and deletion (and also 2 searching functions in my real code).
class PureAbstractClass
{
public:
    virtual char Func() = 0;
};

class PureOpt1 : public PureAbstractClass
{
public:
    virtual int FOption1(A, B, C) = 0; // Notice 'C'
};

class PureOpt2 : public PureAbstractClass
{
public:
    virtual int FOption2(A, B, D) = 0; // Notice 'D'
};
class Handler
{
public:
    void Add(PureOpt1* arg) { v1.add(arg); }
    void Add(PureOpt2* arg) { v2.add(arg); }

    // This is implemented using lambdas.
    // Sorry for the LINQ syntax, lambdas are too long for pseudo code.
    void Del1(char c) { arg = v1.find(obj => obj->Func() == c); v1.del(arg); }
    void Del2(char c) { arg = v2.find(obj => obj->Func() == c); v2.del(arg); }

    void Process(ch, A, B, C, D)
    {
        o1 = v1.Find(obj => obj->Func() == ch);
        if( null == o1 )
        {
            o2 = v2.Find(obj => obj->Func() == ch);
            if( null == o2 )
            {
                DoSomething();
            }
            else
            {
                o2->FOption2(A, B, D);
            }
        }
        else
        {
            o1->FOption1(A, B, C);
        }
    }

private:
    vector<PureOpt1*> v1;
    vector<PureOpt2*> v2;
};
Having Handler be a template class is impossible because of Process().
Is there a more correct way to implement this kind of code?
How to correctly manage 2 containers of different types in a class?
The answer is: use only one container.
The simplest solution would be to have a pure virtual method in the base class:
class PureAbstractClass
{
public:
    virtual char Func() = 0;
    virtual int FOption(A, B, C, D) = 0;
};
then both children override FOption() and ignore the parameters they do not need. There could be a better solution, but you do not provide enough information. Your solution - keeping them in 2 separate containers - is probably the worst. As you can see, your solution conflicts with inheritance (you could remove the inheritance and make both children independent classes, and nothing in your code would change). Alternatively you can use dynamic_cast, but needing it usually indicates bad program design:
PureAbstractClass *o = find( ... );
if( !o ) {
    DoSomething();
    return;
}

if( PureOpt1 *po1 = dynamic_cast<PureOpt1 *>( o ) )
    po1->FOption1( A, B, C );
else {
    if( PureOpt2 *po2 = dynamic_cast<PureOpt2 *>( o ) )
        po2->FOption2( A, B, D );
    else
        ; // something is wrong: the object is neither PureOpt1 nor PureOpt2
}
Note: it is completely unnecessary for FOption1() and FOption2() to be virtual in this case. And you should not forget to add a virtual destructor to the base class.
Alternatively you may use boost::variant and the visitor pattern; in that case you do not need inheritance at all, and you can still make your code generic.
If possible, have FOption1/2 be int Func(Data const & data). You then create the data and pass it in. Data can hold the four different pieces of information, with C and D being optional. The specific implementation of Func can then process that data as it needs.
I have several CUDA kernels which basically do the same thing with some variations. What I would like to do is reduce the amount of code needed. My first thought was to use macros, so my resulting kernels would look like this (simplified):
__global__ void kernelA( ... )
{
    INIT(); // macro to initialize variables

    // do specific stuff for kernelA
    b = a + c;

    END(); // macro to write back the result
}

__global__ void kernelB( ... )
{
    INIT(); // macro to initialize variables

    // do specific stuff for kernelB
    b = a - c;

    END(); // macro to write back the result
}
...
Since macros are nasty, ugly and evil I am looking for a better and cleaner way. Any suggestions?
(A switch statement would not do the job: in reality, the parts which are the same and the parts which are kernel-specific are pretty interweaved. Several switch statements would be needed, which would make the code pretty unreadable. Furthermore, plain function calls would not initialize the needed variables.)
(This question might be answerable for general C++ as well, just replace all 'CUDA kernel' with 'function' and remove '__global__' )
Updated: I was told in the comments that classes and inheritance don't mix well with CUDA. Therefore only the first part of the answer applies to CUDA, while the rest answers the more general C++ part of your question.
For CUDA, you will have to use pure functions, "C-style":
struct KernelVars {
    int a;
    int b;
    int c;
};

__device__ void init(KernelVars& vars) {
    INIT(); // whatever the actual code is
}

__device__ void end(KernelVars& vars) {
    END(); // whatever the actual code is
}

__global__ void KernelA(...) {
    KernelVars vars;
    init(vars);
    vars.b = vars.a + vars.c;
    end(vars);
}
This is the answer for general C++, where you would use OOP techniques like constructors and destructors (they are perfectly suited for those init/end pairs), or the template method pattern which can be used with other languages as well:
Using ctor/dtor and templates, "C++ Style":
class KernelBase {
protected:
    int a, b, c;
public:
    KernelBase() {
        INIT(); // replace by the contents of that macro
    }
    ~KernelBase() {
        END(); // replace by the contents of that macro
    }
    virtual void run() = 0;
};

struct KernelAdd : KernelBase {
    void run() { b = a + c; }
};

struct KernelSub : KernelBase {
    void run() { b = a - c; }
};

template<class K>
void kernel(...)
{
    K k;
    k.run();
}

void kernelA( ... ) { kernel<KernelAdd>(); }
Using template method pattern, general "OOP style"
class KernelBase {
    virtual void do_run() = 0;
protected:
    int a, b, c;
public:
    void run() { // the template method
        INIT();
        do_run();
        END();
    }
};

struct KernelAdd : KernelBase {
    void do_run() { b = a + c; }
};

struct KernelSub : KernelBase {
    void do_run() { b = a - c; }
};

void kernelA(...)
{
    KernelAdd k;
    k.run();
}
You can use device functions as an alternative to "INIT()" and "END()".
__device__ int init()
{
    return threadIdx.x + blockIdx.x * blockDim.x;
}
Another alternative is to use function templates:
#define ADD 1
#define SUB 2

// Note: double-underscore names like __op__ are reserved, so plain 'op' is used here.
template <int op> __global__ void calculate(float* a, float* b, float* c)
{
    // init code ...

    switch (op)
    {
        case ADD:
            c[id] = a[id] + b[id];
            break;
        case SUB:
            c[id] = a[id] - b[id];
            break;
    }

    // end code ...
}
and invoke them using:
calculate<ADD><<<...>>>(a, b, c);
The CUDA compiler does the work: it builds the different function versions and removes the dead code parts for performance optimization.
I'm using a library (libtcod) that has an A* pathfinding algorithm. My class inherits the callback base class, and I implement the required callback function. Here is my generic example:
class MyClass : public ITCODPathCallback
{
    ...
public: // The callback function
    float getWalkCost(int xFrom, int yFrom, int xTo, int yTo, void *userData ) const
    {
        return this->doSomeMath();
    }

    float doSomeMath() { /* non-const stuff */ }
};
I found a number of examples using const_cast and static_cast, but they seemed to be going the other way: making a non-const function able to return a const result. How can I do it in this example?
getWalkCost() is defined by my library that I cannot change, but I want to be able to do non-const things in it.
The best solution depends on why you want to do non-const stuff. For example, if you have a cache of results that you want to use to improve performance, then you can make the cache be mutable, since that preserves the logical constness:
class MyClass : public ITCODPathCallback
{
    ...
public: // The callback function
    float getWalkCost(int xFrom, int yFrom, int xTo, int yTo, void *userData ) const
    {
        return this->doSomeMath();
    }

    float doSomeMath() const { /* ok to modify cache here */ }

    mutable std::map<int,int> cache;
};
Or perhaps you want to record some statistics about how many times the getWalkCost was called and what the maximum x value was, then passing a reference to the statistics may be best:
class MyClass : public ITCODPathCallback
{
    ...
public:
    struct WalkStatistics {
        int number_of_calls;
        int max_x_value;

        WalkStatistics() : number_of_calls(0), max_x_value(0) { }
    };

    MyClass(WalkStatistics &walk_statistics)
        : walk_statistics(walk_statistics)
    {
    }

    // The callback function
    float getWalkCost(int xFrom, int yFrom, int xTo, int yTo, void *userData ) const
    {
        return this->doSomeMath();
    }

    float doSomeMath() const { /* ok to modify walk_statistics members here */ }

    WalkStatistics &walk_statistics;
};
You can hack it this way:
return const_cast<MyClass*>(this)->doSomeMath();
Of course this won't be considered good design by most people, but hey. If you prefer, you can instead make doSomeMath() const and mark the data members it modifies as mutable.
I need to bind a method into a function-callback, except this snippet is not legal as discussed in demote-boostfunction-to-a-plain-function-pointer.
What's the simplest way to get this behavior?
struct C {
    void m(int x) {
        (void) x;
        _asm int 3;
    }
};

typedef void (*cb_t)(int);

int main() {
    C c;
    boost::function<void (int x)> cb = boost::bind(&C::m, &c, _1);
    cb_t raw_cb = *cb.target<cb_t>(); // null dereference
    raw_cb(1);
    return 0;
}
You can make your own class to do the same thing as the boost bind function. All the class has to do is accept the function type and a pointer to the object that contains the function. For example, this is a delegate with a void return and no parameters:
template<typename owner>
class VoidDelegate : public IDelegate
{
public:
    VoidDelegate(void (owner::*aFunc)(void), owner* aOwner)
    {
        mFunction = aFunc;
        mOwner = aOwner;
    }

    ~VoidDelegate(void)
    {}

    void Invoke(void)
    {
        if(mFunction != 0)
        {
            (mOwner->*mFunction)();
        }
    }

private:
    void (owner::*mFunction)(void);
    owner* mOwner;
};
Usage:
#include <iostream>

class C
{
public:
    void CallMe(void)
    {
        std::cout << "called";
    }
};

int main(int aArgc, char** aArgv)
{
    C c;
    VoidDelegate<C> delegate(&C::CallMe, &c);
    delegate.Invoke();
    return 0;
}
Now, since VoidDelegate<C> is a type, having a collection of these might not be practical, because what if the list were to contain functions of class B too? It couldn't.
This is where polymorphism comes into play. You can create an interface IDelegate, which has a function Invoke:
class IDelegate
{
public:
    virtual ~IDelegate(void) { }
    virtual void Invoke(void) = 0;
};
If VoidDelegate<T> implements IDelegate, you could have a collection of IDelegates and therefore have callbacks to methods of different class types.
Either you can shove that bound parameter into a global variable and create a static function that picks up the value and calls the method on it, or you will have to generate per-instance functions on the fly. The latter involves some kind of on-the-fly code generation: producing a stub function on the heap that has the target value baked in and then calls the method on it.
The first way is simple and easy to understand, but not at all thread-safe or reentrant. The second version is messy and difficult, but thread-safe and reentrant if done right.
Edit: I just found out that ATL uses the code generation technique to do exactly this - they generate thunks on the fly that set up the this pointer and other data and then jump to the callback function. Here's a CodeProject article that explains how that works and might give you an idea of how to do it yourself. Look particularly at the last sample (Program 77).
Note that since the article was written, DEP has come into existence, and you'll need to use VirtualAlloc with PAGE_EXECUTE_READWRITE to get a chunk of memory where you can allocate your thunks and execute them.
#include <iostream>

typedef void(*callback_t)(int);

// Warning: smuggling the object pointer through the int argument truncates
// on platforms where pointers are wider than int (e.g. most 64-bit targets).
template< typename Class, void (Class::*Method_Pointer)(void) >
void wrapper( int class_pointer )
{
    Class * const self = (Class*)(void*)class_pointer;
    (self->*Method_Pointer)();
}

class A
{
public:
    int m_i;
    void callback( )
    { std::cout << "callback: " << m_i << std::endl; }
};

int main()
{
    A a = { 10 };
    callback_t cb = &wrapper<A, &A::callback>;
    cb( (int)(void*)&a );
}
I have it working right now by turning C into a singleton, factoring C::m into C::m_Impl, and declaring a static C::m(int) which forwards to the singleton instance. Talk about a hack.