Pure virtual final functions: legal in C++11?

class Foo
{
public:
    virtual int foo() final = 0;
};
Compiles fine.
Isn't Foo just a waste of space, and an accident in the making? Or am I missing something?

It is almost a complete waste of space, as you've said. There is at least one admittedly contrived use for it. The fact that it compiles is, by the way, not surprising: as long as code is legitimate, it need not "make sense" to compile.
Say you want to use Foo as a policy. That means it will be used as a template parameter, but it need not be instantiated. In fact, you really don't want anyone to ever instantiate the class (although, admittedly, I wouldn't know what it could hurt if they did).
This is exactly what you have here: a class whose type you can lay your hands on, but which you can't instantiate (though making the constructor private would probably be a lot more straightforward).
As an added bonus, you could add enums or static functions inside the class scope. Those could be used without actually instantiating anything, and they'd live within that class's scope. So you have a class that's usable primarily as a type, but you still have "some functionality" bundled with it in the form of static functions; a sketch follows.
Most of the time one would probably just wrap that stuff into a namespace, but who knows, in some situation this might be the desired way.
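Here is a minimal sketch of that policy idea, assuming C++11; the names ThreadingPolicy, max_retries, describe, and Worker are all hypothetical, made up purely for illustration:

// Hypothetical policy class: only ever named, never instantiated.
class ThreadingPolicy
{
public:
    virtual int id() final = 0;   // abstract, and nobody can override id() to "fix" that

    enum { max_retries = 3 };     // usable without an instance
    static const char* describe() { return "single-threaded"; }
};

template <typename Policy>
struct Worker
{
    static const char* info() { return Policy::describe(); }
};

// Worker<ThreadingPolicy>::info() compiles and runs, yet no ThreadingPolicy
// object can ever exist.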

Isn't Foo just a waste of space
Indeed it is; you can't instantiate it since it's abstract, and you can't override the function to make a non-abstract derived class.
It could be used as a way of preventing a class from being instantiated, if you want to do that for some reason; but even then it would probably make more sense to delete the default constructor.
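For comparison, a minimal sketch of that alternative (the class name is made up):

struct NotInstantiable
{
    NotInstantiable() = delete; // C++11: states the intent directly,
                                // without an unimplementable virtual function
};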
and an accident in the making?
Not really. Since you can't do anything with the class, you can't do anything wrong with it.

If I'm reading the grammar in 9.2 correctly, this is actually legal, although I may have missed something in the notes prohibiting it.
member-declarator:
    declarator virt-specifier-seq(opt) pure-specifier(opt)
The grammar then shows that virt-specifier-seq can be final and that the pure-specifier is = 0.
I can't see any way this would be useful although there may be some corner case that makes use of it.


Access to protected member through member-pointer: is it a hack?

We all know that members declared protected in a base class can only be accessed through an instance of the derived class itself. This is a feature of the Standard, and it has been discussed on Stack Overflow multiple times:
Cannot access protected member of another instance from derived type's scope
Why can't my object access protected members of another object defined in common base class?
And others.
But it seems possible to work around this restriction with member pointers, as user chtz has shown me:
struct Base { protected: int value; };

struct Derived : Base
{
    void f(Base const& other)
    {
        //int n = other.value;             // error: 'int Base::value' is protected within this context
        int n = other.*(&Derived::value);  // ok??? why?
        (void) n;
    }
};
Why is this possible, is it a wanted feature or a glitch somewhere in the implementation or the wording of the Standard?
From comments emerged another question: if Derived::f is called with an actual Base, is it undefined behaviour?
The fact that a member is not accessible using a class member access expression ([expr.ref]: aclass.amember) due to access control ([class.access]) does not make this member inaccessible using other expressions.
The expression &Derived::value (whose type is int Base::*) is perfectly standard-compliant, and it designates the member value of Base. The expression a_base.*p, where p is a pointer to a member of Base and a_base is an instance of Base, is also standard-compliant.
So any standard-compliant compiler shall give the expression other.*(&Derived::value) defined behaviour: it accesses the member value of other.
is it a hack?
In a similar vein to using reinterpret_cast, this can be dangerous and may potentially be a source of hard-to-find bugs. But it's well formed, and there's no doubt about whether it should work.
To clarify the analogy: The behaviour of reinterpret_cast is also specified exactly in the standard and can be used without any UB. But reinterpret_cast circumvents the type system, and the type system is there for a reason. Similarly, this pointer to member trick is well formed according to the standard, but it circumvents the encapsulation of members, and that encapsulation (typically) exists for a reason (I say typically, since I suppose a programmer can use encapsulation frivolously).
[Is it] a glitch somewhere in the implementation or the wording of the Standard?
No, the implementation is correct. This is how the language has been specified to work.
A member function of Derived can obviously form &Derived::value, since value is a protected member of a base class.
The result of that operation is a pointer to a member of Base, which can be applied to a reference to Base. Member access restrictions do not apply to pointers to members: they apply only to the names of the members, as the sketch below illustrates.
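A short sketch of that last point (the class names B and D are hypothetical): even though the pointer is formed through the derived class, its type names the base class, which is why it can then be applied to any Base.

#include <type_traits>

struct B { protected: int value; };

struct D : B
{
    static void check()
    {
        // [class.protected] permits forming &D::value here (but not &B::value),
        // and the resulting pointer-to-member has type int B::*, not int D::*.
        static_assert(std::is_same<decltype(&D::value), int B::*>::value,
                      "pointer to inherited member names the base class");
    }
};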
From comments emerged another question: if Derived::f is called with an actual Base, is it undefined behaviour?
Not UB. Base has the member.
Just to add to the answers and zoom in a bit on the horror I can read between your lines. If you see access specifiers as 'the law', policing you to keep you from doing 'bad things', I think you are missing the point. public, protected, private, const ... are all part of a system that is a huge plus for C++. Languages without it may have many merits but when you build large systems such things are a real asset.
Having said that: I think it's a good thing that it is possible to get around almost all the safety nets provided to you. As long as you remember that 'possible' does not mean 'good'. This is why it should never be 'easy'. But for the rest - it's up to you. You are the architect.
Years ago I could simply do this (and it may still work in certain environments):
#define private public
Very helpful for 'hostile' external header files. Good practice? What do you think? But sometimes your options are limited.
So yes, what you show is kind of a breach in the system. But hey, what keeps you from deriving and handing out public references to the member? If horrible maintenance problems turn you on - by all means, why not?
Basically what you're doing is tricking the compiler, and this is supposed to work. I see this kind of question all the time; people sometimes get bad results and sometimes it works, depending on how the code translates to assembly.
I remember seeing a case with the const keyword on an integer, where with some trickery the programmer was able to change the value and successfully circumvent the compiler's awareness. The result was a wrong value for a simple mathematical operation. The reason is simple: x86 assembly does distinguish between constants and variables, because some instructions embed constants directly in their encoding. Since the compiler believed the value was a constant, it treated it as one and dealt with it in an optimized way with the wrong CPU instruction, and bam, you have an error in the resulting number.
In other words: the compiler will try to enforce all the rules it can enforce, but you can probably trick it eventually, and you may or may not get wrong results depending on what you're trying to do, so you'd better do such things only if you know what you're doing.
In your case, the pointer &Derived::value boils down to an offset in bytes from the beginning of the object. This is basically how the compiler accesses it, so the compiler:
sees no problem with permissions, because you're accessing value through Derived at compile time;
can do it, because you're taking a byte offset into an object that has the same layout as Derived (well, obviously: the base).
So you're not violating any rules; you've successfully circumvented the compile-time checks. You shouldn't do it, exactly for the reasons described in the links you attached, as it breaks OOP encapsulation, but, well, if you know what you're doing...

Can we make virtual function inline [duplicate]

Pure virtual functions are those member functions that are virtual and have the pure-specifier (= 0).
Clause 10.4 paragraph 2 of C++03 tells us what an abstract class is and, as a side note, the following:
[Note: a function declaration cannot provide both a pure-specifier and a definition
—end note] [Example:
struct C {
    virtual void f() = 0 { }; // ill-formed
};
—end example]
For those who are not very familiar with the issue, please note that pure virtual functions can have definitions, but the above-mentioned clause forbids such definitions from appearing inline (lexically in-class). (For uses of defining pure virtual functions see, for example, this GotW.)
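To make the distinction concrete, here is a minimal sketch of the legal form; the class names are made up for illustration:

struct Shape
{
    virtual void draw() = 0;  // pure-specifier; no in-class body allowed
    virtual ~Shape() { }
};

void Shape::draw() { }        // out-of-class definition of a pure virtual: legal

struct Circle : Shape
{
    void draw() { Shape::draw(); } // an override may still invoke the pure function's body
};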
Now, for all other kinds and types of functions an in-class definition is allowed, and this restriction seems at first glance absolutely artificial and inexplicable. Come to think of it, it seems such on second and subsequent glances :) But I believe the restriction wouldn't be there if there weren't a specific reason for it.
My question is: does anybody know those specific reasons? Good guesses are also welcome.
Notes:
MSVC does allow PVFs to have inline definitions. So don't be surprised :)
the word inline in this question does not refer to the inline keyword. It is supposed to mean lexically in-class
In the SO thread "Why is a pure virtual function initialized by 0?" Jerry Coffin provided this quote from Bjarne Stroustrup’s The Design & Evolution of C++, section §13.2.3, where I've added some emphasis of the part I think is relevant:
The curious =0 syntax was chosen over the obvious alternative of introducing a new keyword pure or abstract because at the time I saw no chance of getting a new keyword accepted. Had I suggested pure, Release 2.0 would have shipped without abstract classes. Given a choice between a nicer syntax and abstract classes, I chose abstract classes. Rather than risking delay and incurring the certain fights over pure, I used the traditional C and C++ convention of using 0 to represent "not there." The =0 syntax fits with my view that a function body is the initializer for a function and also with the (simplistic, but usually adequate) view of the set of virtual functions being implemented as a vector of function pointers. [ … ]
So, when choosing the syntax Bjarne was thinking of a function body as a kind of initializer part of the declarator, and =0 as an alternate form of initializer, one that indicated “no body” (or in his words, “not there”).
It stands to reason that one cannot both indicate “not there” and have a body – in that conceptual picture.
Or, still in that conceptual picture, have two initializers.
Now, that's as far as my telepathic powers, google-fu and soft reasoning go. I surmise that nobody's been Interested Enough™ to formulate a proposal to the committee to have this purely syntactical restriction lifted, and to follow up with all the work that entails. Thus it's still that way.
You shouldn't have so much faith in the standardization committee. Not everything has a deep reason to explain it. Some things are the way they are just because at first nobody thought otherwise, and afterwards nobody thought that changing them was important enough (I think that is the case here); for things old enough it could even be an artifact of the first implementation. Some are the result of evolution: there was a deep reason at one time, but the reason was removed and the initial decision was never reconsidered (that could also be the case here, where the initial decision was made because any definition of a pure function was forbidden). Some are the result of negotiation between different points of view, and the result lacks coherence, but that lack was deemed necessary to reach consensus.
Good guesses... well, considering the situation:
it is legal to declare the function inline and provide an explicitly inline body (outside the class), so there's clearly no objection to the only practical implication of being declared inside the class.
I see no potential ambiguities or conflicts introduced in the grammar, so no logical reason for the exclusion of function definitions in situ.
My guess: the use for bodies for pure virtual functions was realised after the = 0 | { ... } grammar was formulated, and the grammar simply wasn't revised. It's worth considering that there are a lot of proposals for language changes / enhancements - including those to make things like this more logical and consistent - but the number that are picked up by someone and written up as formal proposals is much smaller, and the number of those the Committee has time to consider, and believes the compiler-vendors will be prepared to implement, is much smaller again. Things like this need a champion, and perhaps you're the first person to see an issue in it. To get a feel for this process, check out http://www2.research.att.com/~bs/evol-issues.html.
Good guesses are welcome, you say?
I think the = 0 in the declaration comes from having the implementation in mind. Most likely it means that you get a NULL entry in the class's vtable, the table where the addresses of a class's virtual member functions are looked up at runtime.
But when you actually put a definition of the function in your *.cpp file, you introduce a name into the object file for the linker: an address in the *.o file where that specific function can be found.
The linker then does not need to know about C++ anymore. It can just link things together, even though you declared the function as = 0.
I think I read somewhere that what you describe is possible, although I forget the exact behaviour :-)...
Leaving destructors aside, implementations of pure virtual functions are a strange thing, because they never get called in the natural way. That is, if you have a pointer or reference to your Base class, the underlying object will always be some Derived that overrides the function, and the override is what always gets called.
The only way to actually get the implementation called is to use the Base::func() syntax from one of the derived class's overrides.
This actually, in some ways, makes it a better target for inlining, as at the point where the compiler wants to invoke it, it is always clear which function is being called.
Also, if implementations for pure virtual functions were forbidden, there would be an obvious workaround of some other (probably protected) non-virtual function in the Base class that you could just call in the regular way from your derived function. Of course the scope would be less limited in that you could call it from any function.
(By the way, I am under the assumption that Base::f() can only be called with this syntax from Derived::f() and not from Derived::anyOtherFunc(). Am I right about this assumption?)
Pure virtual destructors are a different story, in a sense. The technique is used simply to prevent anyone creating an instance of the class when there are no other pure virtual functions to make it abstract, as sketched below.
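A minimal sketch of that technique (the class names are made up):

struct Abstract
{
    virtual ~Abstract() = 0; // the pure virtual destructor alone makes the class abstract
};

Abstract::~Abstract() { }    // a definition is still required, because every
                             // derived destructor implicitly calls it

struct Concrete : Abstract { };

int main()
{
    // Abstract a;  // error: Abstract is an abstract class
    Concrete c;     // fine
    (void) c;
}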
The answer to the actual question of "why" it is not permitted is really just because the standards committee said so, but my answer sheds some light on what we are trying to achieve anyway.

Reason for C++ member function hiding [duplicate]

Possible Duplicate:
name hiding and fragile base problem
I'm familiar with the rules involving member function hiding. Basically, a derived class with a function that has the same name as a base class function doesn't actually overload the base class function - it completely hides it.
#include <string>

struct Base
{
    void foo(int x) const { }
};

struct Derived : public Base
{
    void foo(const std::string& s) { }
};

int main()
{
    Derived d;
    d.foo("abc");
    d.foo(123); // Will not compile! Base::foo is hidden!
}
So, you can get around this with a using declaration. But my question is, what is the reason for base class function hiding? Is this a "feature" or just a "mistake" by the standards committee? Is there some technical reason why the compiler can't look in the Base class for matching overloads when it doesn't find a match for d.foo(123)?
Name lookup works by looking in the current scope for matching names; if nothing is found, it looks in the enclosing scope; if nothing is found there, it looks in the next enclosing scope, and so on, until reaching the global namespace.
This isn't specific to classes, you get exactly the same name hiding here:
#include <iostream>

namespace outer
{
    void foo(char c) { std::cout << "outer\n"; }

    namespace inner
    {
        void foo(int i) { std::cout << "inner\n"; }
        void bar() { foo('c'); }
    }
}

int main()
{
    outer::inner::bar();
}
Although outer::foo(char) is a better match for the call foo('c'), name lookup stops after finding outer::inner::foo(int) (i.e. outer::foo(char) is hidden), and so the program prints inner.
If member function names weren't hidden, name lookup in class scope would behave differently from lookup in non-class scope, which would be inconsistent and confusing, and make C++ even harder to learn.
So there's no technical reason the name lookup rules couldn't be changed, but they'd have to be changed for member functions and for other kinds of name lookup alike, and it would make compilers slower, because they'd have to keep searching for names even after finding a match in the current scope. Sensibly, if there's a name in the current scope, it's probably the one you wanted. A call in some scope probably wants to find names in that scope; e.g. if two functions are in the same namespace they're probably related (part of the same module or library), so if one uses the name of the other it probably means to call the one in the same scope. If that's not what you want, use explicit qualification or a using declaration to tell the compiler the other name should be visible in that scope.
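As a concrete illustration of that last suggestion, here is a minimal sketch, based on the question's example, of a using declaration bringing the hidden base overload back into scope:

#include <string>

struct Base
{
    void foo(int x) const { }
};

struct Derived : Base
{
    using Base::foo;                    // un-hides Base::foo(int)
    void foo(const std::string& s) { }
};

int main()
{
    Derived d;
    d.foo("abc"); // Derived::foo(const std::string&)
    d.foo(123);   // now compiles: Base::foo(int)
}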
Is this a "feature" or just a "mistake" by the standards committee?
It's definitely not a mistake, since it's clearly stipulated in the standard. It's a feature.
Is there some technical reason why the compiler can't look in the Base class for matching overloads when it doesn't find a match for d.foo(123)?
Technically, a compiler could look in the base class. Technically. But if it did, it would break the rules set by the standard.
But my question is, what is the reason for base class function hiding?
Unless someone from the committee comes up with an answer, I think we can only speculate. Basically, there were two options:
if I declare a function with the same name in a derived class, keep the base class's functions with the same name directly accessible through a derived class
don't
It could have been determined by flipping a coin (...ok, maybe not).
In general, what are the reasons for wanting a function with the same name as one in the base class? If it's for different functionality, you'd more likely use polymorphism instead. If it's for handling different cases (different parameters), and those cases aren't present in the base class, a strategy pattern might be more appropriate for the job. So most likely function hiding comes into effect when you actually do want to hide the function: you're not happy with the base class implementation, so you provide your own, with the option of using using, but only when you want to.
I think it's just a mechanism to make you think twice before writing a function with the same name and a different signature.
I believe @Lol4t0 is pretty much correct, but I'd state things much more strongly. If you allowed this, you'd end up with two possibilities: either you make a lot of other changes throughout almost the entirety of the language, or you end up with something almost completely broken.
The other changes you'd make to allow this to work would completely revamp how overloading is done: you'd have to change at least the order of the steps that are taken, and probably the details of the steps themselves. Right now, the compiler looks up the name, then forms an overload set, resolves the overload, then checks access to the chosen overload.
To make this work even sort of well, you'd pretty much have to change that to check access first, and only add accessible functions to the overload set. With that, at least the example in @Lol4t0's answer could continue to compile, because Base::foo would never be added to the overload set.
That still means, however, that adding to the interface of the base class could cause serious problems. If Base didn't originally contain foo, and a public foo were added, then the call in main to d.foo() would suddenly do something entirely different, and (again) it would be entirely outside the control of whoever wrote Derived.
To cure that, you'd just about have to make a fairly fundamental change in the rules: prohibit implicit conversions of function arguments. Along with that, you'd change overload resolution so in case of a tie, the most derived/most local version of a function was favored over a less derived/outer scope. With those rules, the call to d.foo(5.0) could never resolve to Derived::foo(int) in the first place.
That, however, would only leave two possibilities: either calls to free functions would have different rules than calls to member functions (implicit conversions allowed only for free functions) or else all compatibility with C would be discarded entirely (i.e., also prohibit implicit conversions in all function arguments, which would break huge amounts of existing code).
To summarize: to change this without breaking the language entirely, you'd have to make quite a few other changes as well. It would almost certainly be possible to create a language that worked that way, but by the time you were done it wouldn't be C++ with one minor change -- it would be an entirely different language that wasn't much like C++ or C, or much of anything else.
I can only propose that this decision was made to make things simpler.
Imagine that the derived function did overload the base one. Should the following code generate a compilation error, or call Derived's function?
struct Base
{
private:
    void foo(float);
};

struct Derived : public Base
{
public:
    void foo(int);
};

int main()
{
    Derived d;
    d.foo(5.0f);
}
According to the existing behaviour of overload resolution, this should generate an error: the private Base::foo(float) is the exact match, and access is only checked after the overload is chosen.
Now imagine that in the first version Base had no foo(float), and in a second version it appears. Changing the implementation of the base class now breaks the interface of the derived class.
If you are the developer of Derived, cannot influence the developers of Base, and a lot of clients use your interface, you are in a bad situation now.

Advantages of typedef over derived class?

Simply put, what are the (or are there any) differences between doing say
class MyClassList : list<MyClass> { };
vs
typedef list<MyClass> MyClassList;
The only advantage that I can think of (and it's what led me to this question) is that with the derived class I can now easily forward declare MyClassList as
class MyClassList;
without compiler error, instead of
class MyClass;
typedef list<MyClass> MyClassList;
I can't think of any differences, but this made me wonder, are there cases in which a typedef can be used that a simple derived class can't?
Or to put it another way, is there any reason why I shouldn't change all my typedef list<...> SomeClassList; to the simple derived class so that I can easily forward declare them?
In C++ it is NOT recommended to derive from an STL container, so don't do it.
A typedef just creates an alias for an existing type, as it were; so typedef std::list<MyClass> MyClassList; creates a "new type" called MyClassList, which you can now use as follows:
MyClassList lst;
Changing your typedefs to a derived class is a bad idea. Don't do it.
typedef is intended exactly for this purpose: to alias type names. It's very idiomatic and won't confuse anybody familiar with C++.
But to address why inheriting may be a bad idea.
std::list does not have a virtual destructor, meaning MyClassList wouldn't have its destructor called when deleted through a pointer to the base class. So this is typically frowned upon. In your case you have no intention of putting any members in MyClassList, so this is a moot point, until the next programmer sees the inheritance as an invitation to add new members, override functions, etc. They may not realize that std::list's destructor is not virtual, and that in some cases MyClassList's destructor won't get called, as sketched below.
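A minimal sketch of that pitfall, assuming a later maintainer has given MyClassList a destructor that matters:

#include <list>

struct MyClassList : std::list<int>
{
    ~MyClassList() { /* imagine cleanup that must run */ }
};

int main()
{
    std::list<int>* p = new MyClassList; // deleting through the base pointer...
    delete p; // ...undefined behaviour: std::list has no virtual destructor,
              // so ~MyClassList() is never guaranteed to run
}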
Well, a typedef can only do what its name suggests, while a derived class can potentially be a full-blown makeover of its base(s). So while, as far as the compiler is concerned, there may not be much of a difference if you limit yourself to "just" deriving (and don't add any members, override anything, etc.), there might be a big difference as far as human readers of the code are concerned.
One might wonder "why is this a derived class when a typedef would suffice"? Most people would assume that there must be a reason, so you would make life harder to the code's future maintainers. A typedef, on the other hand, is a very specific tool and does not raise questions.
And while we're on the topic of maintenance, don't forget that, as with most things in C++, this "nothing will go wrong as long as we are disciplined and don't cross this line" is an open invitation to disaster. Since the compiler isn't there to stop you, someone, someday, will cross the line.
A number of things have been mentioned. A big one, however: deriving from a type does not inherit all the constructors.
If there are a number of non-default constructors, you won't have them when inheriting (you'd have to forward them to the base constructors yourself); see the sketch below.
Typedefs have no such 'issue'.
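A minimal sketch of the constructor point, assuming C++11 (where a using-declaration can inherit the base's constructors; pre-C++11 you would have to write the forwarding constructors by hand):

#include <list>

class MyClassListD : public std::list<int>
{
public:
    using std::list<int>::list; // without this (or hand-written forwarders),
                                // MyClassListD(5, 42) below would not compile
};

typedef std::list<int> MyClassListT;

int main()
{
    MyClassListT t(5, 42); // fine: a typedef is just std::list<int>
    MyClassListD d(5, 42); // fine only thanks to the inherited constructors
    (void) t; (void) d;
}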
Now, typedefs do not create distinct types, so they don't generate unique typeids. If you want that, without the overhead or other disadvantages of inheritance, look at Boost: it has a strong typedef macro that generates a unique type:
http://www.boost.org/doc/libs/1_37_0/boost/strong_typedef.hpp
A typedef is an alias, while a class is a new type.
In the first case, the compiler has to simply replace MyClassList with list<MyClass>.
In the second case, MyClassList involves the generation of a default constructor, copy constructor, assignment operator, and destructor, and, where C++11 is in use, even a move constructor and move assignment operator.
In the default case, since MyClassList has no additional functionality, optimization will most likely wipe them out.
Note: I find the "deriving from classes with a non-virtual destructor is not recommended" argument a weak one. A C++ developer should know that derivation does not necessarily imply polymorphism. A class that is never deleted through a pointer to its base doesn't need a virtual destructor, just as a class whose methods are not designed to be called through a base pointer doesn't need those methods to be virtual.
Simply put: if a destructor is not virtual, don't treat that type as "polymorphic" on deletion.
In this sense, destructors are no different from other virtual or non-virtual methods.
If this argument were to be considered a strong one, then no class that lacks "all-virtual" methods should ever be derived from!