Alternative schemes for implementing vptr? - c++

This question is not about the C++ language itself (i.e. not about the Standard) but about how to get a compiler to implement alternative schemes for virtual functions.
The usual scheme for implementing virtual functions is to store, in each object, a pointer to a table of function pointers.
class Base {
private:
int m;
public:
virtual void method();
};
The equivalent in, say, C would be something like
struct Base {
void (**vtable)();
int m;
};
The first member is a pointer to the class's table of virtual functions (an area of memory the application has no direct control over). In most implementations this costs the size of one pointer per object before any members are counted, so around 4 bytes per object in a 32-bit addressing scheme. If you created 40k polymorphic objects in your application, that is around 40k x 4 bytes = 160k bytes before any member variables. I also know this happens to be the fastest and most common implementation among C++ compilers.
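To make the cost concrete, here is a rough sketch (not tied to any particular ABI) of how a virtual call is typically lowered under this classic scheme, continuing the C-style struct above:
// Hypothetical lowering of "b->method()" under the classic one-vptr-per-object scheme.
struct Base;
typedef void (*VirtualFn)(struct Base*);

struct Base {
    const VirtualFn* vtable;   // one pointer stored in every object
    int m;
};

void call_method(struct Base* b) {
    b->vtable[0](b);           // load vtable, index slot 0, indirect call
}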
I know this is complicated by multiple inheritance (especially with virtual base classes, i.e. diamond-shaped hierarchies, etc.).
An alternative way to do the same is to make the first member an index into a table of vtables (equivalently in C, as below):
struct Base {
char classid; // the classid here is an index into an array of vtables
int m;
};
If the total number of classes in an application is small enough to fit in a char (at most 256, including all possible template instantiations, etc.), then a char is good enough to hold the index, thereby reducing the size of all polymorphic classes in the application (I am excluding alignment issues, etc.).
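A minimal sketch of the proposed scheme (hypothetical names; no mainstream compiler emits this):
// Hypothetical class-id dispatch: each polymorphic object carries a 1-byte id
// that indexes a global table of vtables instead of carrying a full vptr.
using VirtualFn = void (*)(void*);

struct VTable { VirtualFn slots[4]; };
VTable g_vtables[256];                    // one entry per class in the program

struct Base {
    unsigned char classid;                // 1 byte instead of sizeof(void*)
    int m;
};

inline void call_slot0(Base* b) {
    g_vtables[b->classid].slots[0](b);    // extra indirection: id -> vtable -> function
}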
My question is: is there any switch in GNU C++, LLVM/Clang, or any other compiler to do this, or otherwise reduce the size of polymorphic objects?
Edit: I understand the alignment issues pointed out. A further point: if this were a 64-bit system (assuming a 64-bit vptr) and each polymorphic object's members cost around 8 bytes, then the vptr accounts for 50% of the memory. This mostly concerns small polymorphic objects created en masse, so I am wondering whether this scheme is possible at least for specific classes, if not for the whole application.

Your suggestion is interesting, but it won't work if the executable is made of several modules that pass objects among them. Given that they are compiled separately (say, as DLLs), if one module creates an object and passes it to another, and the other invokes a virtual method, how would it know which table the classid refers to? You can't add a module id either, because the two modules might not know about each other when they are compiled. So unless you use pointers, I think it's a dead end...

A couple of observations:
Yes, a smaller value could be used to represent the class, but some processors require data to be aligned, so the saving in space may be lost to the requirement to align data values to e.g. 4-byte boundaries. Further, the class-id must be in a well-defined place for all members of a polymorphic inheritance tree, so it is likely to sit ahead of other data, and the alignment problem can't be avoided.
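For example (a sketch; the sizes assume a typical 32-bit target with 4-byte int alignment, and are implementation-defined):
#include <cstdint>

struct WithVptr {              // what a typical compiler generates today
    void* vptr;                // 4 bytes on a 32-bit target
    int   m;                   // 4 bytes
};                             // sizeof is typically 8

struct WithClassId {           // the proposed scheme
    std::uint8_t classid;      // 1 byte...
    int          m;            // ...but m must start on a 4-byte boundary
};                             // sizeof is typically still 8: 3 bytes of padding follow classid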
The cost of storing the pointer has been moved into the code, where every use of a polymorphic function requires code to translate the class-id into a vtable pointer or some equivalent data structure. So it isn't free. Clearly the cost trade-off depends on the volume of code versus the number of objects.
If objects are allocated from the heap, there is usually space wasted in order to ensure objects are aligned to the worst-case boundary, so even with a small amount of code and a large number of polymorphic objects, the memory-management overhead might be significantly bigger than the difference between a pointer and a char.
In order to allow programs to be independently compiled, the number of classes in the whole program, and hence the size of the class-id, must be known at compile time; otherwise code can't be compiled to access it. This would be a significant overhead. It is simpler to fix it for the worst case, and simplify compilation and linking.
Please don't let me stop you trying, but there are quite a lot more issues to resolve using any technique which may use a variable size id to derive the function address.
I would strongly encourage you to look at Ian Piumarta's Cola (also described on Wikipedia).
It actually takes a different approach, using the pointer in a much more flexible way to build inheritance, prototype-based dispatch, or any other mechanism the developer requires.

No, there is no such switch.
The LLVM/Clang codebase avoids virtual tables in classes that are allocated by the tens of thousands: this works well in a closed hierarchy, because a single enum can enumerate all possible classes, and each class is then linked to one value of the enum. The hierarchy is closed precisely because of the enum.
Virtuality is then implemented by a switch on the enum and an appropriate cast before calling the method. Once again, closed: the switch has to be modified for each new class.
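In spirit (hypothetical names, not actual LLVM code), the closed-hierarchy approach looks like this:
#include <cassert>

// Closed hierarchy: one enum value per concrete class, dispatch by switch on the kind.
class Value {
public:
    enum Kind { IntKind, FloatKind };     // must list every concrete class: a closed set
    explicit Value(Kind k) : kind(k) {}
    Kind getKind() const { return kind; }
    int bits() const;                     // "virtual" behaviour without a vtable
private:
    Kind kind;                            // could be stored in a single byte
};

class IntValue : public Value {
public:
    IntValue() : Value(IntKind) {}
    int bitsImpl() const { return 32; }
};

class FloatValue : public Value {
public:
    FloatValue() : Value(FloatKind) {}
    int bitsImpl() const { return 64; }
};

// Every new class in the hierarchy means another case here.
inline int Value::bits() const {
    switch (getKind()) {
    case IntKind:   return static_cast<const IntValue*>(this)->bitsImpl();
    case FloatKind: return static_cast<const FloatValue*>(this)->bitsImpl();
    }
    assert(false && "unknown kind");
    return 0;
}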
A first alternative: external vpointer.
If you find yourself in a situation where the vpointer tax is paid far too often, that is, most of the objects are of a statically known type, then you can externalize it.
class Interface {
public:
virtual ~Interface() {}
virtual Interface* clone() const = 0; // might be worth it
virtual void updateCount(int) = 0;
protected:
Interface(Interface const&) {}
Interface& operator=(Interface const&) { return *this; }
};
template <typename T>
class InterfaceBridge: public Interface {
public:
InterfaceBridge(T& t): t(t) {}
virtual InterfaceBridge* clone() const { return new InterfaceBridge(*this); }
virtual void updateCount(int i) { t.updateCount(i); }
private:
T& t; // value or reference ? Choose...
};
template <typename T>
InterfaceBridge<T> interface(T& t) { return InterfaceBridge<T>(t); }
Then, imagining a simple class:
class Counter {
public:
int getCount() const { return c; }
void updateCount(int i) { c = i; }
private:
int c;
};
You can store the objects in an array:
static Counter array[5];
assert(sizeof(array) == sizeof(int)*5); // no v-pointer
And still use them with polymorphic functions:
void five(Interface& i) { i.updateCount(5); }
InterfaceBridge<Counter> ib(array[3]); // create *one* v-pointer
five(ib);
assert(array[3].getCount() == 5);
The value vs reference is actually a design tension. In general, if you need to clone you need to store by value, and you need to clone when you store by base class (boost::ptr_vector for example). It is possible to actually provide both interfaces (and bridges):
Interface    <---   ClonableInterface
    |                       |
InterfaceB           ClonableInterfaceB
It's just extra typing.
Another solution, much more involved.
A switch is implementable by a jump table. Such a table could perfectly be created at runtime, in a std::vector for example:
#include <cassert>
#include <vector>

class Base {
public:
~Base() { VTables()[vpointer].dispose(*this); }
void updateCount(int i) {
VTables()[vpointer].updateCount(*this, i);
}
protected:
struct VTable {
typedef void (*Dispose)(Base&);
typedef void (*UpdateCount)(Base&, int);
Dispose dispose;
UpdateCount updateCount;
};
static void NoDispose(Base&) {}
static unsigned RegisterTable(VTable t) {
std::vector<VTable>& v = VTables();
v.push_back(t);
return v.size() - 1;
}
explicit Base(unsigned id): vpointer(id) {
assert(id < VTables().size());
}
private:
// Implement in .cpp or pay the cost of weak symbols.
static std::vector<VTable>& VTables() { static std::vector<VTable> VT; return VT; }
unsigned vpointer;
};
And then, a Derived class:
class Derived: public Base {
public:
Derived(): Base(GetID()) {}
private:
static void UpdateCount(Base& b, int i) {
static_cast<Derived&>(b).count = i;
}
static unsigned GetID() {
static unsigned ID = RegisterTable(VTable({&NoDispose, &UpdateCount}));
return ID;
}
unsigned count;
};
Well, now you'll realize how great it is that the compiler does it for you, even at the cost of some overhead.
Oh, and because of alignment, as soon as a Derived class introduces a pointer, there is a risk that 4 bytes of padding end up between Base and the next attribute. You can put them to use by carefully selecting the first few attributes in Derived to avoid padding...

The short answer is that no, I don't know of any switch to do this with any common C++ compiler.
The longer answer is that to do this, you'd just about have to build most of the intelligence into the linker, so it could coordinate distributing the IDs across all the object files getting linked together.
I'd also point out that it wouldn't generally do a whole lot of good. At least in a typical case, you want each element in a struct/class at a "natural" boundary, meaning its starting address is a multiple of its size. Using your example of a class containing a single int, the compiler would allocate one byte for the vtable index, followed immediately by three bytes of padding so the next int would land at an address that is a multiple of four. The end result is that objects of the class would occupy precisely the same amount of storage as if we used a pointer.
I'd add that this is not a far-fetched exception either. For years, standard advice to minimize padding inserted into structs/classes has been to put the items expected to be largest at the beginning, and progress toward the smallest. That means in most code, you'd end up with those same three bytes of padding before the first explicitly defined member of the struct.
To get any good from this, you'd have to be aware of it, and have a struct with (for example) three bytes of data you could move where you wanted. Then you'd move those to be the first items explicitly defined in the struct. Unfortunately, that would also mean that if you turned this switch off so you have a vtable pointer, you'd end up with the compiler inserting padding that might otherwise be unnecessary.
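To illustrate that last point (the layouts below are typical, not guaranteed, and the 1-byte id is hypothetical since no compiler emits one today):
#include <cstdint>

// With a 1-byte class id, small members can occupy what would otherwise be
// padding, but only if you deliberately declare them first.
struct Packed {
    std::uint8_t classid;               // hypothetical compiler-generated id
    std::uint8_t flags, state, extra;   // three small members fill the padding
    int          value;                 // starts at offset 4
};                                      // typically 8 bytes

// The usual "largest members first" advice leaves that padding unused.
struct Unpacked {
    std::uint8_t classid;               // followed by 3 bytes of padding
    int          value;                 // starts at offset 4
    std::uint8_t flags, state, extra;   // plus trailing padding for alignment
};                                      // typically 12 bytes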
To summarize: it's not implemented, and if it was wouldn't usually accomplish much.

Related

How to exploit polymorphism on embedded systems?

I have been developing a C++ software driver for the adc peripheral of the MCU.
The individual analog inputs connected to the adc can be configured for operation in the unipolar or bipolar mode. To reflect this fact in my design I have decided to model the analog inputs by the AnalogInput abstract class and then define two derived classes. UnipolarAnalogInput for the unipolar analog inputs and BipolarAnalogInput for the bipolar analog inputs. These two classes differ only in the implementation of the getValue() method.
enum class Type
{
Unipolar,
Bipolar
};
class AnalogInput
{
public:
virtual float getValue() = 0;
};
class UnipolarAnalogInput : public AnalogInput
{
public:
UnipolarAnalogInput(uint8_t _id, bool _enabled, Type _type);
bool isEnabled();
bool isReady();
float getValue();
private:
uint8_t id;
Type type;
bool enabled;
bool ready;
uint16_t raw_value;
};
class BipolarAnalogInput : public AnalogInput
{
public:
BipolarAnalogInput(uint8_t _id, bool _enabled, Type _type);
bool isEnabled();
bool isReady();
float getValue();
private:
uint8_t id;
Type type;
bool enabled;
bool ready;
uint16_t raw_value;
};
My goal is to fulfill the following requirements:
work with both types of the analog inputs uniformly
have a chance to create either an instance of UnipolarAnalogInput or BipolarAnalogInput,
based on the user's configuration of the Adc, which is known at compile time
have a chance to create the instances in a for loop
have an implementation which is suitable for embedded systems
Here are my ideas
As far as the requirement 1.
The ideal state would be to have AnalogInput analog_inputs[NO_ANALOG_INPUTS]. As far as I understand,
this is not possible in C++ (AnalogInput is abstract, and even a non-abstract base would slice the derived objects). Instead I need to define AnalogInput *analog_inputs[NO_ANALOG_INPUTS].
As far as the requirement 2.
It seems to me that the best solution for the other systems than the embedded systems would be to use the factory method design pattern i.e. inside the AnalogInput define
static AnalogInput* getInstance(Type type) {
if(type == Type::Unipolar) {
// create instance of the UnipolarAnalogInput
} else if(type == Type::Bipolar) {
// create instance of the BipolarAnalogInput
}
}
Here I would probably need to define auxiliary arrays somewhere for the UnipolarAnalogInput instances and the BipolarAnalogInput instances, in which the instances would be allocated by the factory method, with pointers into those arrays returned by getInstance(). This solution seems pretty cumbersome to me because of the presence of the auxiliary arrays.
As far as the requirement 3.
for(uint8_t input = 0; input < NO_ANALOG_INPUTS; input++) {
analog_inputs[input] = AnalogInput::getInstance(AdcConfig->getInputType(input));
}
As far as the requirement 4.
Here I would say that what I have suggested above is also applicable to embedded systems,
because the solution avoids use of the standard new operator. The question mark is over the virtual
method getValue().
My question is whether the presence of the auxiliary arrays is unavoidable.
The "auxiliary array" as you call it is mostly needed for memory management, i.e. you need to choose the memory to store your objects in. It's also an interface - the array is how you access the ADCs.
You can store your objects either in the heap or the (global) data segment - an array of objects implements the latter (you can also create global variables, one per ADC, which is a worse solution). If the compiler has all the information it needs to allocate the memory during compilation, it's usually the preferred approach. However - as you've noticed - polymorphism becomes rather annoying to implement with statically allocated objects.
The alternative is to keep them in heap. This is often totally acceptable in an embedded system if you allocate the heap memory at startup and keep it permanently (i.e. never try to release or re-use this part of heap, which would risk fragmentation). And this is really the only humane way to do polymorphic stuff, especially object instantiation.
If you don't like the array, use some other storage method - a linked list, global variables, whatever. But you need to access the objects through a pointer (or a reference, which is also a pointer underneath) for polymorphism to work. And arrays are a simple concept, so why not use them?
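As a sketch of the statically allocated route (illustrative only: it assumes the classes are given default constructors, or that you placement-new them into static storage, and it reuses NO_ANALOG_INPUTS, AdcConfig and Type from the question):
#include <cstddef>

// One static pool per concrete type, plus an array of base pointers for
// uniform polymorphic access. No heap; the pools are the only "auxiliary arrays".
static UnipolarAnalogInput unipolar_pool[NO_ANALOG_INPUTS];
static BipolarAnalogInput  bipolar_pool[NO_ANALOG_INPUTS];
static AnalogInput*        analog_inputs[NO_ANALOG_INPUTS];

void initAnalogInputs() {
    std::size_t u = 0, b = 0;
    for (std::size_t input = 0; input < NO_ANALOG_INPUTS; ++input) {
        if (AdcConfig->getInputType(input) == Type::Unipolar)
            analog_inputs[input] = &unipolar_pool[u++];
        else
            analog_inputs[input] = &bipolar_pool[b++];
    }
}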

std::variant vs pointer to base class for heterogeneous containers in C++

Let's assume this class hierarchy below.
class BaseClass {
public:
int x;
};
class SubClass1 : public BaseClass {
public:
double y;
};
class SubClass2 : public BaseClass {
public:
float z;
};
...
I want to make a heterogeneous container of these classes. Since the subclasses are derived from the base class I can make something like this:
std::vector<BaseClass*> container1;
But since C++17 I can also use std::variant like this:
std::vector<std::variant<SubClass1, SubClass2, ...>> container2;
What are the advantages/disadvantages of using one or the other? I am interested in the performance too.
Take into consideration that I am going to sort the container by x, and I also need to be able to find out the exact type of the elements. I am going to
Fill the container,
Sort it by x,
Iterate through all the elements, find out the type, use it accordingly,
Clear the container, then the cycle starts over again.
std::variant<A,B,C> holds one of a closed set of types. You can check whether it holds a given type with std::holds_alternative, or use std::visit to pass a visitor object with an overloaded operator(). There is likely no dynamic memory allocation; however, it is hard to extend: the class holding the std::variant and any visitor classes need to know the full list of possible types.
On the other hand, BaseClass* holds an unbounded set of derived class types. You ought to be holding std::unique_ptr<BaseClass> or std::shared_ptr<BaseClass> to avoid the potential for memory leaks. To determine whether an instance of a specific type is stored, you must use dynamic_cast or a virtual function. This option requires dynamic memory allocation, but if all processing is via virtual functions, then the code that holds the container does not need to know the full list of types that could be stored.
A problem with std::variant is that you need to specify a list of allowed types; if you add a future derived class you would have to add it to the type list. If you need a more dynamic implementation, you can look at std::any; I believe it can serve the purpose.
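A brief sketch of the variant-based access patterns mentioned above, using the classes from the question:
#include <variant>
#include <vector>

using Element = std::variant<SubClass1, SubClass2>;

void process(std::vector<Element>& container2) {
    for (auto& elem : container2) {
        // Option 1: query and extract the exact type.
        if (std::holds_alternative<SubClass1>(elem)) {
            SubClass1& s1 = std::get<SubClass1>(elem);
            // use s1.y ...
        }
        // Option 2: visit with a callable; works for every alternative.
        std::visit([](auto& e) {
            // e is SubClass1& or SubClass2& here; e.x is always available
        }, elem);
    }
}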
I also need to be able to find out the exact type of the elements.
For type recognition you can create an instanceof-like template as seen in "C++ equivalent of instanceof". It is also said that the need for such a mechanism sometimes reveals poor code design.
The performance question is not something that can be settled ahead of time, because it depends on the usage: it's a matter of testing the different implementations and seeing which one is faster.
Take into consideration that, I am going to sort the container by x
In this case you declare the variable public so sorting is no problem at all; you may want to consider declaring the variable protected or implementing a sorting mechanism in the base class.
What are the advantages/disadvantages of using one or the other?
The same as advantages/disadvantages of using pointers for runtime type resolution and templates for compile time type resolution. There are many things that you might compare. For example:
with pointers you might have memory violations if you misuse them
runtime resolution has additional overhead (but this also depends on exactly how you use these classes: whether it is a virtual function call or just plain member-field access)
but
pointers have fixed size, and are probably smaller than the object of your class will be, so it might be better if you plan to copy your container often
I am interested in the performance too.
Then just measure the performance of your application and then decide. It is not a good practice to speculate which approach might be faster, because it strongly depends on the use case.
Take into consideration that, I am going to sort the container by x
and I also need to be able to find out the exact type of the elements.
In both cases you can find out the type: dynamic_cast in the case of pointers, std::holds_alternative in the case of std::variant. With std::variant, all possible types must be specified explicitly. Accessing the member field x will be almost the same in both cases (with the pointer it is a pointer dereference plus member access; with the variant it is std::get plus member access).
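For instance, sorting by x looks nearly the same in both representations (a sketch, reusing the classes from the question):
#include <algorithm>
#include <memory>
#include <variant>
#include <vector>

void sortBoth(std::vector<std::unique_ptr<BaseClass>>& byPointer,
              std::vector<std::variant<SubClass1, SubClass2>>& byVariant) {
    // Pointer version: dereference, then compare the base-class member.
    std::sort(byPointer.begin(), byPointer.end(),
              [](const auto& lhs, const auto& rhs) { return lhs->x < rhs->x; });

    // Variant version: visit to reach x, whichever alternative is stored.
    auto x_of = [](const auto& v) {
        return std::visit([](const auto& e) { return e.x; }, v);
    };
    std::sort(byVariant.begin(), byVariant.end(),
              [x_of](const auto& lhs, const auto& rhs) { return x_of(lhs) < x_of(rhs); });
}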
Sending data over a TCP connection was mentioned in the comments. In this case, it would probably make the most sense to use virtual dispatch.
class BaseClass {
public:
int x;
virtual void sendTo(Socket socket) const {
socket.send(x);
}
};
class SubClass1 final : public BaseClass {
public:
double y;
void sendTo(Socket socket) const override {
BaseClass::sendTo(socket);
socket.send(y);
}
};
class SubClass2 final : public BaseClass {
public:
float z;
void sendTo(Socket socket) const override {
BaseClass::sendTo(socket);
socket.send(z);
}
};
Then you can store pointers to the base class in a container, and manipulate the objects through the base class.
std::vector<std::unique_ptr<BaseClass>> container;
// fill the container
auto a = std::make_unique<SubClass1>();
a->x = 5;
a->y = 17.0;
container.push_back(std::move(a));
auto b = std::make_unique<SubClass2>();
b->x = 1;
b->z = 14.5;
container.push_back(std::move(b));
// sort by x
std::sort(container.begin(), container.end(), [](auto &lhs, auto &rhs) {
return lhs->x < rhs->x;
});
// send the data over the connection
for (auto &ptr : container) {
ptr->sendTo(socket);
}
It's not the same. std::variant is like a union with type safety. No more than one member can be visible at the same time.
// C++ 17
std::variant<int,float,char> x;
x = 5; // now contains int
int i = std::get<int>(x); // i = 5
std::get<float>(x); // Throws std::bad_variant_access
The other option is based on inheritance. All members are visible depending on which pointer you have.
Your selection will depend on if you want all the variables to be visible and what error reporting you want.
Related: don't use a vector of pointers. Use a vector of shared_ptr.
Unrelated: I'm not much of a supporter of the new union-like variant. The point of the older C-style union was to be able to access all of its members at the same memory location.

c++ switch vs. member function pointer vs. virtual inheritance

I am trying to analyze the trade offs between various methods of achieving polymorphism. I need a list of objects with some similarities and some differences in member functions. The options I see are as follows:
have a flag in each object, and a switch statement in each function.
The value of the flag directs each object to its specific section of
each function.
have an array of member function pointers in the object, which are
assigned upon construction. Then, I call that function pointer to
get the correct member function.
have a virtual base class with several derived classes. One
drawback to this is that my list will now have to contain pointers,
and not the objects themselves.
My understanding is that the pointer lookups from the list in option 3 will take longer than the member function lookups of option 2 because of the guaranteed proximity of member functions.
What are some of the benefits/drawbacks of these options? My priority is performance over readability.
Is there any other method for polymorphism?
have a flag in each object, and a switch statement in each function. The value of the flag directs each object to its specific section of each function
OK, so this could make sense if very little code varies based on the flag.
This minimises the amount of (duplicated) code which has to fit in cache, and avoids any function call indirection. Under some circumstances these benefits could outweigh the extra cost of the switch statement.
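A rough sketch of option 1 (the names are invented for illustration):
// Option 1: a single concrete class, a type flag, and a switch in each "virtual" function.
// Objects can be stored by value in a plain array, which helps the cache argument above.
struct Shape {
    enum Kind { Circle, Square } kind;
    double size;

    double area() const {
        switch (kind) {
        case Circle: return 3.14159265358979 * size * size;
        case Square: return size * size;
        }
        return 0.0;   // unreachable if kind is valid
    }
};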
have an array of member function pointers in the object, which are assigned upon construction. Then, I call that function pointer to get the correct member function
You save one indirection (to the vtable), but also make your objects bigger so fewer fit in cache. It's impossible to say which will dominate, so you'll just have to profile, but it isn't an obvious win
have an virtual base class with several derived classes. One drawback to this is that my list will now have to contain pointers, and not the objects themselves
If your code paths are different enough that separating them completely is reasonable, this is the cleanest solution. If you need to optimise it, you can either use a specialised allocator to ensure they're sequential (even if not sequential in your container), or move the objects directly into your container using a clever wrapper similar to Boost.Any. You'll still get the vtable indirection, but I'd prefer this to #2 unless profiling shows it's really a problem.
So, there are several questions you should answer before you can decide:
how much code is shared, and how much varies?
how big are the objects, and will a table of inline function pointers materially affect your cache miss stats?
and, after you've answered those, you should just profile anyway.
One way to achieve faster polymorphism is through the CRTP idiom and static polymorphism:
#include <iostream>

template<typename T>
struct base
{
void f()
{
static_cast<T*>( this )->f_impl();
}
};
struct foo : public base<foo>
{
void f_impl()
{
std::cout << "foo!" << std::endl;
}
};
struct bar : public base<bar>
{
void f_impl()
{
std::cout << "bar!" << std::endl;
}
};
struct quux : public base<quux>
{
void f_impl()
{
std::cout << "quux!" << std::endl;
}
};
template<typename T>
void call_f( base<T>& something )
{
something.f();
}
int main()
{
foo my_foo;
bar my_bar;
quux my_quux;
call_f( my_foo );
call_f( my_bar );
call_f( my_quux );
}
This outputs:
foo!
bar!
quux!
Static polymorphism performs far better than virtual dispatch, because the compiler knows at compile time which function will be called, so it can inline everything.
Since the binding is static, however, it cannot provide polymorphism in the usual heterogeneous-container way, because every instantiation of the base class template is a different type.
However, that could be achieved with something like boost::any.
With a switch statement, if you want to add a new class then you need to modify everywhere where the class is switched on, which may be in various places in your code base. There may also be places outside your code base that need to be modified, but perhaps you know this isn't the case in this scenario.
With an array of member function pointers within each object, the only downside is that you duplicate that memory for every object. If you know there are only one or two "virtual" functions, though, then it's a good option.
As for virtual functions, you are right in that you have to heap-allocate them (or manually manage the memory), but it is the most extensible option.
If you aren't after extensibility, then (1) or (2) may be your best option. As always, the only way to tell is to measure. I know that many compilers will in some cases implement a switch statement as a jump table, which essentially comes out the same as a virtual function table. For small numbers of cases they may just use binary-search branching.
Measure!

How to get "direct" function pointer to a virtual member function?

I am working on an embedded platform which doesn't cope very well with dynamic code (no speculative / OOO execution at all).
On this platform I call a virtual member function on the same object quite often, however the compiler fails to optimize the vtable-lookup away, as it doesn't seem to recognize the lookup is only required for the first invocation.
Therefore I wonder: Is there a manual way to devirtualize a virtual member function of a C++ class in order to get a function-pointer which points directly to the resolved address?
I had a look at C++ function pointers, but since they seem to require a type to be specified, I guess this won't work out.
Thank you in advance
There's no general, standard-C++-only way to find the address of a virtual function, given only a reference to a base class object. Furthermore there's no reasonable type for the result, because this need not be passed as an ordinary argument following the general calling convention (e.g. it can be passed in a register, with the other arguments on the stack).
If you do not need portability, however, you can always do whatever works for your given compiler. E.g., with Microsoft's COM (I know, that's not your platform) there is a known memory layout with vtable pointers, so as to access the functionality from C.
If you do need portability then I suggest to design in the optimization. For example, instead of
class Foo_base
{
public:
virtual void bar() = 0;
};
do like
class Foo_base
{
public:
typedef void (*Bar_func)(Foo_base&);
virtual Bar_func bar_func() const = 0;
void bar() { bar_func()( *this ); }
};
supporting the same public interface as before, but now exposing the innards, so to speak, thus allowing manual optimization of repeated calls to bar.
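Usage could then look something like this (Foo_concrete and hot_loop are made-up names for illustration): the virtual lookup is paid once, and the remaining calls go straight through the cached function pointer.
class Foo_concrete : public Foo_base
{
public:
    Bar_func bar_func() const override { return &Foo_concrete::bar_impl; }
private:
    static void bar_impl( Foo_base& self )
    {
        // Concrete behaviour; self is known to be a Foo_concrete here.
        static_cast<Foo_concrete&>( self ).counter += 1;
    }
    int counter = 0;
};

void hot_loop( Foo_base& foo )
{
    Foo_base::Bar_func const f = foo.bar_func();   // one virtual lookup...
    for ( int i = 0; i < 1000000; ++i )
    {
        f( foo );                                   // ...then plain calls through the pointer
    }
}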
Regarding GCC, I have observed the following while debugging the compiled assembly code.
I have seen that a generic method pointer holds two pieces of data:
a) a "pointer" to the method
b) an offset to add, when needed, to the class instance's starting address (the offset is used when multiple inheritance is involved, for methods of the second and further base classes, whose subobjects start at different offsets within the complete object).
The "pointer" to the method is as follows:
1) if the "pointer" is even it is interpreted as a normal (non virtual) function pointer.
2) If the "pointer" is odd then 1 should be subtracted and the remaining value should be 0 or 4 or 8 or 12 ( supposing a pointer size of 4 bytes).
The previous codification supposes obviously that all normal methods start at even addresses (so the compiler should align them at even addresses).
So that offset is the offset into the vtable where to fetch the address of the "real" non virual method pointer.
So the correct idea in order to devirtualize the call is to convert a virtual method pointer to a non virtual method pointer and use it aftewards in order to apply it to the "subject" that is our class instance.
The code bellow does what described.
#include <stdio.h>
#include <string.h>
#include <typeinfo>
#include <typeindex>
#include <cstdint>
struct Animal{
int weight=0x11111111;
virtual int mm(){printf("Animal1 mm\n");return 0x77;};
virtual int nn(){printf("Animal1 nn\n");return 0x99;};
};
struct Tiger:Animal{
int weight=0x22222222,height=0x33333333;
virtual int mm(){printf("Tigerxx\n");return 0xCC;}
virtual int nn(){printf("Tigerxx\n");return 0x99;};
};
typedef int (Animal::*methodPointerT)();
typedef struct {
void** functionPtr;
size_t offset;
} MP;
void devirtualize(methodPointerT& mp0,const Animal& a){
MP& t=*(MP*)&mp0;
if((intptr_t)t.functionPtr & 1){
size_t index=(t.functionPtr-(void**)1); // there is obviously a more
void** vTable=(void**)(*(void**)&a); // efficient way. Just for clearness !
t.functionPtr=(void**)vTable[index];
}
};
int main()
{
int (Animal::*mp1)()=&Animal::nn;
MP& mp1MP=*(MP*)&mp1;
Animal x;Tiger y;
(x.*mp1)();(y.*mp1)();
devirtualize(mp1,x);
(x.*mp1)();(y.*mp1)();
}
Yes, this is possible in a way that works at least with MSVC, GCC and Clang.
I was also looking for how to do this, and here is a blog post I found that explains it in detail: https://medium.com/@calebleak/fast-virtual-functions-hacking-the-vtable-for-fun-and-profit-25c36409c5e0
Taking the code from there, in short, this is what you need to do. This function works for all objects:
template <typename T>
void** GetVTable(T* obj) {
return *((void***)obj);
}
And then to get a direct function pointer to the first virtual function of the class, you do this:
typedef void(VoidMemberFn)(void*);
VoidMemberFn* fn = (VoidMemberFn*)GetVTable<BaseType>(my_obj_ptr)[0];
// ... sometime later
fn(my_obj_ptr);
So it's quite easy actually.

Placement new based on template sizeof()

Is this legal in C++11? It compiles with the latest Intel compiler and appears to work, but I get the feeling that it is a fluke.
class cbase
{
virtual void call();
};
template<typename T> class functor : public cbase
{
public:
functor(T& obj, void (T::*pfunc)())
: _obj(obj), _pfunc(pfunc) {}
virtual void call()
{
(_obj.*_pfunc)();
}
private:
T& _obj;
void (T::*_pfunc)();
//edited: this is no good:
//const static int size = sizeof(_obj) + sizeof(_pfunc);
};
class signal
{
public:
template<typename T> void connect(T& obj, void (T::*pfunc)())
{
_ptr = new (space) functor<T>(obj, pfunc);
}
private:
cbase* _ptr;
class _generic_object {};
typename aligned_storage<sizeof(functor<_generic_object>),
alignment_of<functor<_generic_object>>::value>::type space;
//edited: this is no good:
//void* space[(c1<_generic_object>::size / sizeof(void*))];
};
Specifically I'm wondering if void* space[(c1<_generic_object>::size / sizeof(void*))]; is really going to give the correct size for c1's member objects (_obj and _pfunc). (It isn't).
EDIT:
So after some more research it would seem that the following would be (more?) correct:
typename aligned_storage<sizeof(c1<_generic_object>),
alignment_of<c1<_generic_object>>::value>::type space;
However, upon inspecting the generated assembly, using placement new with this space seems to inhibit the compiler from optimizing away the call to 'new' (which did happen when using just a regular '_ptr = new c1;').
EDIT2: Changed the code to make intentions a little clearer.
const static int size = sizeof(_obj) + sizeof(_pfunc); will give the sum of the sizes of the members, but that may not be the same as the size of the class containing those members. The compiler is free to insert padding between members or after the last member. As such, adding together the sizes of the members approximates the smallest that object could possibly be, but doesn't necessarily give the size of an object with those members.
In fact, the size of an object can vary depending not only on the types of its members, but also on their order. For example:
struct A {
int a;
char b;
};
vs:
struct B {
char b;
int a;
};
In many cases, A will be smaller than B. In A, there will typically be no padding between a and b, but in B, there will often be some padding (e.g., with a 4-byte int, there will often be 3 bytes of padding between b and a).
As such, your space may not contain enough...space to hold the object you're trying to create there in init.
I think you just got lucky; Jerry's answer points out that there may be padding issues. What I think you have is a non-virtual class (i.e., no vtable), with essentially two pointers (under the hood).
That aside, the arithmetic: (c1<_generic_object>::size / sizeof(void*)) is flawed because it will truncate if size is not a multiple of sizeof(void *). You would need something like:
((c1<_generic_object>::size + sizeof(void *) - 1) / sizeof(void *))
This code does not even get to padding issues, because it has a few of more immediate ones.
Template class c1 is defined to contain a member T &_obj of reference type. Applying sizeof to _obj in the scope of c1 evaluates to the size of T, not to the size of the reference member itself. It is not possible to obtain the physical size of a reference in C++ (at least not directly). Meanwhile, any actual object of type c1<T> will physically contain a reference to T, which is typically implemented in such cases as a pointer "under the hood".
For this reason it is completely unclear to me why the value of c1<_generic_object>::size is used as a measure of the memory required for in-place construction of an actual object of type c1<T> (for any T). It just doesn't make any sense. These sizes are not related at all.
By pure luck the size of an empty class _generic_object might evaluate to the same (or greater) value as the size of a physical implementation of a reference member. In that case the code will allocate a sufficient amount of memory. One might even claim that the sizeof(_generic_object) == sizeof(void *) equality will "usually" hold in practice. But that would be just a completely arbitrary coincidence with no meaningful basis whatsoever.
This even looks like red herring deliberately inserted into the code for the purpose of pure obfuscation.
P.S. In GCC sizeof of an empty class actually evaluates to 1, not to any "aligned" size. Which means that the above technique is guaranteed to initialize c1<_generic_object>::size with a value that is too small. More specifically, in 32 bit GCC the value of c1<_generic_object>::size will be 9, while the actual size of any c1<some_type_t> will be 12 bytes.