This question is intended as a follow up question to this one: What are the differences between a pointer variable and a reference variable in C++?
Having read the answers and some further discussions I found on Stack Overflow, I know that the compiler should treat pass-by-reference the same way it treats pass-by-pointer and that references are nothing more than syntactic sugar. One thing I haven't been able to figure out yet is whether there is any difference with regard to binary compatibility.
In our (multiplatform) framework we have the requirement to be binary compatible between release and debug builds (and between different releases of the framework). In particular, binaries we build in debug mode must be usable with release builds and vice versa.
To achieve that, we only use pure abstract classes and POD in our interfaces.
Consider the following code:
class IMediaSerializable
{
public:
    virtual tResult Serialize(int flags,
                              ISerializer* pSerializer,
                              IException** __exception_ptr) = 0;
    //[…]
};
ISerializer and IException are also pure abstract classes. ISerializer must point to an existing object, so we always have to perform a NULL-pointer check. IException implements a kind of exception handling in which the address the pointer points to must be changed; for this reason we use a pointer to pointer, which must also be NULL-pointer checked.
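For illustration, every implementation of this method has to begin with checks along these lines (a minimal sketch; the class name cMediaSample and the concrete error codes are made up, not part of our interface):

cMediaSample::Serialize(int flags,
                        ISerializer* pSerializer,
                        IException** __exception_ptr)
{
    // both pointers have to be validated on every call
    if (pSerializer == NULL)
        return ERR_INVALID_POINTER;   // assumed error code
    if (__exception_ptr == NULL)
        return ERR_INVALID_POINTER;

    // ... actual serialization work ...
    return ERR_NOERROR;               // assumed success code
}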
To make the code much clearer and get rid of some unnecessary runtime checks, we would like to rewrite this code using pass-by-reference.
class IMediaSerializable
{
public:
    virtual tResult Serialize(int flags,
                              ISerializer& pSerializer,
                              IException*& __exception_ptr) = 0;
    //[…]
};
This seems to work without any flaws. But the question remains for us whether this still satisfies the requirement of binary compatibility.
UPDATE:
To clarify things: this question is not about binary compatibility between the pass-by-pointer version of the code and the pass-by-reference version. I know these can't be binary compatible. In fact, we have the opportunity to redesign our API, for which we are considering using pass-by-reference instead of pass-by-pointer without caring about binary compatibility (new major release).
The question is just about binary compatibility when only using the pass-by-reference version of the code.
Binary ABI compatibility is determined by whatever compiler you are using. The C++ standard does not cover the issue of binary ABI compatibility.
You will need to check your C++ compiler's documentation, to see what it says about binary compatibility.
Generally references are implemented as pointers under-the-hood, so there will usually be ABI compatibility. You will have to check your particular compiler's documentation and possibly implementation to make sure.
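For instance, the following two functions typically compile to identical machine code (a sketch to illustrate the point, not a guarantee):

int via_pointer(int* p) { return *p; }  // the caller passes the address of the argument
int via_reference(int& r) { return r; } // the caller passes an address here as well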
However, your restriction to pure abstract classes and POD types is overzealous in the age of C++11.
C++11 split the concept of POD into multiple pieces. Standard-layout covers most, if not all, of the "memory layout" guarantees of a POD type.
But Standard Layout types can have constructors and destructors (among other differences).
So you can make a really friendly interface.
Instead of a manually managed interface pointer, write a simple smart pointer.
template<class T>
struct value_ptr {
    T* raw;
    // ...
};
that ->clone()s on copy, moves the pointer on move, deletes on destroy, and (because you own it) can be guaranteed to be stable over compiler library revisions (while unique_ptr cannot). This is basically a unique_ptr that supports ->clone(). Also have your own unique_ptr for values that cannot be duplicated.
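A minimal sketch of such a value_ptr, assuming the interface exposes a clone() method (error handling omitted):

#include <utility>

template<class T>
struct value_ptr {
    T* raw;

    value_ptr() : raw(0) {}
    explicit value_ptr(T* p) : raw(p) {}
    value_ptr(value_ptr const& o) : raw(o.raw ? o.raw->clone() : 0) {} // deep copy via clone()
    value_ptr(value_ptr&& o) : raw(o.raw) { o.raw = 0; }               // steal the pointer on move
    value_ptr& operator=(value_ptr o) { std::swap(raw, o.raw); return *this; } // copy-and-swap
    ~value_ptr() { delete raw; }                                       // owning destroy

    T* operator->() const { return raw; }
    T& operator*() const { return *raw; }
};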
Now you can replace your pure virtual interfaces with a pair of types. First, the pure virtual interface (with a T* clone() const usually), and second a regular type:
struct my_regular_foo {
    value_ptr< IFoo > ptr;
    bool some_method() const { return ptr->some_method(); } // calls the pure virtual method in IFoo
};
the end result is you have a type that behaves like a regular, everyday type, but it is implemented as a wrapper around a pure virtual interface class. Such types can be taken by value, taken by reference, and returned by value, and can hold arbitrary complex state within them.
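For example (make_foo here is a hypothetical factory returning a my_regular_foo by value):

my_regular_foo make_foo();      // exported factory, implemented inside the library

my_regular_foo a = make_foo();  // returned by value
my_regular_foo b = a;           // deep copy via IFoo::clone()
bool ok = b.some_method();      // virtual dispatch hidden behind a regular type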
These types live in header files that the library exposes.
And interface expansion of IFoo is fine. Just add a new method to both types: to IFoo at the end of the class (which under most ABIs is backward compatible (!) -- try it), and a new method to my_regular_foo that forwards to it (a sketch follows at the end of this answer). As we did not change the layout of my_regular_foo, even though the library and the client code may disagree about which methods it has, that is fine -- those methods are all compiled inline and never exported -- and clients who know they are using the newer version of your library can use it, and those who do not know but are using it are fine (without rebuilding).
There is one careful gotcha: if you add an overload of a method to IFoo (not an override: an overload), the order of the virtual methods changes; and if you add a new virtual parent, the layout of the virtual table can change. This only works reliably if all inheritance to the abstract classes in your public API is virtual (with virtual inheritance, the vtable has pointers to the start of each sub-class's vtable: so each sub-class can have a bigger vtable without messing up the addresses of the other virtual functions. And if you carefully only append to the end of a sub-class vtable, code using the earlier header files can still find the earlier methods).
This last step -- allowing new methods on your interfaces -- might be a bridge too far, as you'd have to investigate the ABI guarantees (in practice and not) on vtable layout for every supported compiler.
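A sketch of the append-only expansion described above (the version comments and method names are made up):

struct IFoo {
    virtual ~IFoo() {}
    virtual IFoo* clone() const = 0;
    virtual bool some_method() const = 0;  // present since v1
    virtual int  new_method() const = 0;   // v2: appended at the end, earlier slots keep their indices
};

struct my_regular_foo {
    value_ptr<IFoo> ptr;
    bool some_method() const { return ptr->some_method(); }
    int  new_method() const { return ptr->new_method(); }  // new inline forwarder, never exported
};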
No, it will not work regardless of which compiler you are using.
Consider a class Foo that exports two functions:
class Foo
{
public:
    void f(int*);
    void f(int&);
};
The compiler has to convert (mangle) the names of the two functions f into ABI-specific strings, so that the linker can distinguish between the two.
Now, since the compiler needs to support overload resolution, even if references were implemented exactly like pointers, the two functions would need to have different mangled names.
For example GCC mangles these names to:
void Foo::f(int*) => _ZN3Foo1fEPi
void Foo::f(int&) => _ZN3Foo1fERi
Notice P vs R.
So if you change the signature of the function, your application will fail to link.
Coming from Delphi, I'm used to using class references (metaclasses) like this:
type
  TClass = class of TForm;
var
  x: TClass;
  f: TForm;
begin
  x := TForm;
  f := x.Create();
  f.ShowModal();
  f.Free;
end;
Actually, every class X derived from TObject has a method called ClassType that returns a TClass that can be used to create instances of X.
Is there anything like that in C++?
Metaclasses do not exist in C++. Part of why is because metaclasses require virtual constructors and most-derived-to-base creation order, which are two things C++ does not have, but Delphi does.
However, in C++Builder specifically, there is limited support for Delphi metaclasses. The C++ compiler has a __classid() and __typeinfo() extension for retrieving a Delphi-compatible TMetaClass* pointer for any class derived from TObject. That pointer can be passed as-is to Delphi code (you can use Delphi .pas files in a C++Builder project).
The TApplication::CreateForm() method is implemented in Delphi and has a TMetaClass* parameter in C++ (despite its name, it can actually instantiate any class that derives from TComponent, if you do not mind the TApplication object being assigned as the Owner), for example:
TForm *f;
Application->CreateForm(__classid(TForm), &f);
f->ShowModal();
delete f;
Or you can write your own custom Delphi code if you need more control over the constructor call:
unit CreateAFormUnit;

interface

uses
  Classes, Forms;

function CreateAForm(AClass: TFormClass; AOwner: TComponent): TForm;

implementation

function CreateAForm(AClass: TFormClass; AOwner: TComponent): TForm;
begin
  Result := AClass.Create(AOwner);
end;

end.
#include "CreateAFormUnit.hpp"
TForm *f = CreateAForm(__classid(TForm), SomeOwner);
f->ShowModal();
delete f;
Apparently modern Delphi supports metaclasses in much the same way as original Smalltalk.
There is nothing like that in C++.
One main problem with emulating that feature in C++ (having run-time dynamic assignment of values that represent types, and being able to create instances from such values) is that in C++ it is necessary to statically know the constructors of a type in order to instantiate it.
Probably you can achieve much of the same high-level goal by using C++ static polymorphism, which includes function overloading and the template mechanism, instead of extreme runtime polymorphism with metaclasses.
However, one way to emulate the effect in C++ is to use cloneable exemplar objects, and/or, almost the same idea, polymorphic object factories. The former is quite unusual; the latter can be encountered now and then (mostly the difference is where the parameterization occurs: with the exemplar object it is that object's state, while with the object factory it is the arguments to the creation function). Personally I would stay away from that, because C++ is designed for static typing, and this idea is about cajoling C++ into emulating a language with very different characteristics and programming style.
Type information does not exist at runtime in C++. (Except when RTTI is enabled, but even that is different from what you need.)
A common idiom is to create a virtual clone() method that, obviously, clones the object, which is usually in some prototypical state. It is similar to a constructor, but the concrete type is resolved at runtime.
class Object
{
public:
    virtual Object* clone() const = 0;
};
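For example, a hypothetical concrete class and a use of the idiom (the Circle class is made up):

class Circle : public Object
{
public:
    virtual Object* clone() const { return new Circle(*this); } // copies the prototype's state
};

Circle prototype;                   // configured into some prototypical state
Object* fresh = prototype.clone();  // concrete type resolved at runtime
// (a real base class would also need a virtual destructor so that
//  'delete fresh;' through an Object* is well defined)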
If you don't mind spending some time examining foreign sources, you can take a look at how a project does it: https://github.com/rheit/zdoom/blob/master/src/dobjtype.h (note: this is a quite big and evolving source port of Doom, so be advised even just reading it will take quite some time). Look at PClass and related types. I don't know exactly what is done there, but from my limited knowledge they construct a structure with the necessary metatable for each class and use some preprocessor magic in the form of defines for readability (or something else). Their approach allows you to seamlessly create usual C++ classes, but adds support for PClass::FindClass("SomeClass") to get the class reference and use that as needed, for example to create an instance of the class. It can also check inheritance, create new classes on the fly, and replace classes by others, i.e. you could replace CDoesntWorksUnderWinXP by CWorksEverywhere (as an example; they use it differently of course). I did some quick research back then; their approach isn't exceptional, it was explained on some sites, but since I had only so much interest I don't remember the details.
I'm developing a system which takes a set of compiled .NET assemblies and emits C++ code which can then be compiled to any platform having a C++ compiler. Of course, this involves some extensive trickery due to various things .NET does that C++ doesn't.
One such situation is the ability to hide virtual methods, such as the following in C#:
class A
{
    virtual void MyMethod()
    { ... }
}

class B : A
{
    override void MyMethod()
    { ... }
}

class C : B
{
    new virtual void MyMethod()
    { ... }
}

class D : C
{
    override void MyMethod()
    { ... }
}
I came up with a solution to this that seemed clever and did work, as in the following example:
namespace impdetails
{
    template<class by_type>
    struct redef {};
}

struct A
{
    virtual void MyMethod( void );
};

struct B : A
{
    virtual void MyMethod( void );
};

struct C : B
{
    virtual void MyMethod( impdetails::redef<C> );
};

struct D : C
{
    virtual void MyMethod( impdetails::redef<D> );
};
This does of course require that all the call sites for C::MyMethod and D::MyMethod construct and pass the dummy object, as in this example:
D d;
C *c_d = &d;
c_d->MyMethod( impdetails::redef<C>() );
I'm not worried about this extra source code overhead; the output of this system is mainly not intended for human consumption.
Unfortunately, it turns out this actually causes runtime overhead. Intuitively, one would expect that because impdetails::redef<> is empty, it would take no space and passing it would involve no code.
However, the C++ standard, for reasons I understand but don't totally agree with, mandates that objects cannot have zero size. This leaves us with a situation where the compiler actually emits code to create and pass the object.
In fact, at least on VC2008, I found that it even went to the trouble of zeroing the dummy byte, even in release builds! I'm not sure why that was necessary, but it makes me even more not want to do it this way.
If all else fails I could always change the actual name of the function, such as perhaps having MyMethod, MyMethod$1, and MyMethod$2. However, this causes more problems. For instance, $ is actually not legal in C++ identifiers (although compilers I've tested will allow it.) A totally acceptable identifier in the output program could also be an identifier in the input program, which suggests a more complex approach would be needed, making this a less attractive option.
It also turns out that there are other situations in this project where it would be nice to be able to modify method signatures using arbitrary type arguments, similar to how I'm passing a type to impdetails::redef<>.
Is there any other clever way to get around this, or am I stuck between adding overhead at every call site or mangling names?
After considering some other aspects of the system as well such as interfaces in .NET, I am starting to think maybe it's better - perhaps even more-or-less necessary - to not even use the C++ virtual calling mechanism at all. The more I consider, the messier using that mechanism is getting.
In this approach, each user object class would have a separate struct for the vtable (perhaps kept in a separate namespace like vtabletype::). The generated class would have a pointer member that would be initialized through some trickery to point to a static instance of the vtable. Virtual calls would explicitly use a member pointer from that vtable.
If done properly this should have the same performance as the compiler's own implementation would. I've confirmed it does on VC2008. (By contrast, just using straight C, which is what I was planning on earlier, would likely not perform as well, since compilers often optimize this into a register.)
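A minimal sketch of what the generator could emit under this scheme (all names here are hypothetical):

struct MyClass;  // the generated user class

namespace vtabletype {
    struct MyClass_vtable {
        void (*MyMethod)(MyClass* self);  // one function pointer per virtual slot
    };
}

struct MyClass {
    const vtabletype::MyClass_vtable* vtbl;  // set to point at the static instance below
    int some_field;                          // ordinary data members follow
};

void MyClass_MyMethod_impl(MyClass* self) { /* method body */ }

static const vtabletype::MyClass_vtable MyClass_vtable_instance = { &MyClass_MyMethod_impl };

// an explicit "virtual call" then reads:
//     obj->vtbl->MyMethod(obj);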
It would be hellish to write code like this manually, but of course this isn't a concern for a generator. This approach does have some advantages in this application:
Because it's a much more explicit approach, one can be more sure that it's doing exactly what .NET specifies it should be doing with respect to newslot as well as selection of interface implementations.
It might be more efficient (depending on some internal details) than a more traditional C++ approach to interfaces, which would tend to invoke multiple inheritance.
In .NET, objects are considered to be fully constructed when their .ctor runs. This impacts how virtual functions behave. With explicit knowledge of the vtables, this could be achieved by writing it in during allocation. (Although putting the .ctor code into a normal member function is another option.)
It might avoid redundant data when implementing reflection.
It provides better control and knowledge of object layout, which could be useful for the garbage collector.
On the downside, it totally loses the C++ compiler's overloading feature with regard to the vtable entries: those entries are data members, not functions, so there is no overloading. In this case it would be tempting to just number the members (say _0, _1...) This may not be so bad when debugging, since once the pointer is followed, you'll see an actual, properly-named member function anyway.
I think I may end up doing it this way but by all means I'd like to hear if there are better options, as this is admittedly a rather complex approach (and problem.)
Short Version
I'm using the entries from the vtable of a specific object to invoke virtual methods inherited from an interface.
In the end I'm searching for a way to get the exact offset each address to a virtual method has in the vtable of a specific object.
Detailed Version
Disclaimer
I know that this topic is implementation-dependent and that one should not try to do this manually, because the compiler does the (correct) work and the vtable is not covered by the standard (let alone its data format).
I hereby testify that I have already read dozens of "Don't do it... just, don't do it!"s and am clear about the outrageous consequences my actions could have.
Therefore (and to favor a constructive discussion) I'll be using g++ (4.x.x) on a Linux x64 platform as my reference. Any software compiled using the code presented below will use the same setup, so it should be platform-independent as far as this is concerned.
Okay, this whole issue of mine is completely experimental, I don't want to use it in production code but to educate myself and my fellow students (my professor asked me if I could write a quick paper about this topic).
What I'm trying to do is basically the automatic invocation of methods using offsets to determine which method to invoke.
The basic outline of the classes is as follows (this is a simplification but shows the composition of my current attempts):
class IMethods
{
    virtual double action1(double) = 0;
    virtual double action2(double) = 0;
};
As you can see, it is just a class with pure virtual methods which share the same signature.
enum Actions
{
    actionID1,
    actionID2
};
The enum-items are used to invoke the appropriate method.
class MethodProcessor : public IMethods
{
public:
    double action1(double);
    double action2(double);
};
The ctor/dtor of the above class are purposely omitted.
We can safely assume that these are the only virtual methods inherited from the interface and that polymorphism is of no concern.
This concludes the basic outline. Now to the real topic:
Is there a safe way to get the mapping of addresses in the vtable to the inherited virtual methods?
What I'm trying to do is something like this:
MethodProcessor proc;
size_t* vTable = *(size_t**) &proc;
double ret = ((double(*)(MethodProcessor*, double)) vTable[actionID2])(&proc, 3.14159265359);
This is working out fine and is invoking action2, but I'm assuming that the address pointing to action2 is at index 1, and this part bugs me: what if some sort of offset is added into the vtable before the address of action1 is defined?
In a book about the C++ object model I read that normally the first address in the vtable leads to the RTTI (runtime type information), which in turn I couldn't confirm, because vTable[0] legitimately points to action1.
The compiler knows the exact index of every virtual method pointer because, yeah, the compiler builds them and replaces every member invocation of the virtual methods with augmented code which equals the one I used above -- but it knows the index to use. I, for one, am taking the educated guess that there is no offset before my defined virtual methods.
Can't I use some C++ hack to let the compiler compute the correct index at compile (or even run) time? I could then use this information to add some offset to my enum items and wouldn't have to worry about casting wrong addresses...
The ABI used by Linux is known: you should look at the section on virtual function table layout of the Itanium C++ ABI. The document also specifies the object layout used to find the vtable. I don't know if the approach you outlined works, however.
Note that, despite pointing at the document, I do not recommend using the information to play tricks based on it! You unfortunately didn't explain what your actual goal is, but it seems that using a pointer to member for the virtual functions is a more reliable approach to run-time dispatch to a virtual function:
double (IMethods::*method)(double)
    = flag ? &IMethods::action1 : &IMethods::action2;
MethodProcessor mp;
double rc = (mp.*method)(3.14);
I have nearly the same problem, even worse: in my case this is for production code... don't ask me why (just tell me "Don't do it... just, don't do it", or at least use an existing dynamic compiler such as LLC...).
Of course, this "torture" could be smoothed if compiler designers were asked to follow some generic C++ ABI specifications (or at least specify their own).
Apparently, there is a growing consensus on complying with the "Generic C++ ABI" (also often called the "Itanium C++ ABI", mentioned by Dietmar Kühl).
Because you are using g++ and because this is for educational purposes, I suggest you have a look at what the -fdump-class-hierarchy g++ switch does (you will find vtable layouts); this is what I personally use to be sure that I'm not casting wrong addresses.
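If you only need the index at run time, a known Itanium-ABI-specific trick (an assumption of this g++/Linux setup, not portable C++) is that a pointer to a virtual member function is encoded as its byte offset into the vtable plus 1, so the slot index can be recovered:

#include <cstddef>
#include <cstdint>

// Only valid under the Itanium C++ ABI (e.g. g++ on Linux); reading a
// pointer-to-member through a union is itself non-standard but works on g++.
template <typename PMF>
std::size_t vtable_index(PMF pmf)
{
    union { PMF f; std::uintptr_t words[2]; } u = { pmf };
    return (u.words[0] - 1) / sizeof(void*);  // words[0] holds (vtable byte offset + 1)
}

// assuming the methods are accessible (e.g. declared public):
//     vtable_index(&IMethods::action2)  // yields 1 on this setup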
Note: using an x86 (IA-32) MinGW g++ 4.7.2 compiler,
in double MethodProcessor::action( double ), the hidden "(MethodProcessor*) this" argument will be passed in a register (%ecx),
whereas in ((double(*)(MethodProcessor*,double)), the first explicit "this" argument will be passed on the stack.
My understanding is that exposing functions that take or return STL containers (such as std::string) across DLL boundaries can cause problems due to differences in the STL implementations of those containers in the two binaries. But is it safe to export a class like:
class Customer
{
public:
    wchar_t* getName() const;
private:
    std::wstring mName;
};
Without some sort of hack, mName is not going to be usable by the executable, so it won't be able to execute methods on mName, nor construct/destruct this object.
My gut feeling is "don't do this, it's unsafe", but I can't figure out a good reason.
It is not a problem, because it is trumped by the bigger problem: you cannot create an object of that class in code that lives in a module other than the one that contains the code for the class. Code in another module cannot accurately know the required object size; its implementation of the std::string class may well be different, which, as declared, also affects the size of the Customer object. Even the same compiler cannot guarantee this, when mixing optimized and debugging builds of these modules for example. Albeit that this is usually pretty easy to avoid.
So you must create a class factory for Customer objects, a factory that lives in that same module. Which then automatically implies that any code that touches the "mName" member also lives in the same module. And is therefore safe.
The next step then is to not expose Customer at all but to expose a pure abstract base class (aka interface) instead. Now you can prevent the client code from creating an instance of Customer and shooting their leg off. And you'll trivially hide the std::string as well. Interface-based programming techniques are common in module interop scenarios; it is also the approach taken by COM.
As long as allocation and deallocation of instances of the class happen with the same settings, you should be OK, but you are right to avoid this.
Differences between the .exe and .dll in debug/release settings or code generation (Multi-threaded DLL vs. Single-threaded) could cause problems in some scenarios.
I would recommend using abstract classes in the DLL interface with creation and deletion done solely inside the DLL.
Interfaces like:
class A {
protected:
    virtual ~A() {}
public:
    virtual void func() = 0;
};

// exported create/delete functions
A* create_A();
void destroy_A(A*);
DLL Implementation like:
class A_Impl : public A {
public:
    ~A_Impl() {}
    void func() { do_something(); }
};

A* create_A() { return new A_Impl; }
void destroy_A(A* a) {
    A_Impl* ai = static_cast<A_Impl*>(a);
    delete ai;
}
Should be ok.
Even if your class has no data members, you cannot expect it to be usable from code compiled with a different compiler. There is no common ABI for C++ classes. You can expect differences in name mangling just for starters.
If you are prepared to constrain clients to use the same compiler as you, or provide source to allow clients to compile your code with their compiler, then you can do pretty much anything across your interface. Otherwise you should stick to C style interfaces.
If you want to provide an object oriented interface in a DLL that is truly safe, I would suggest building it on top of the COM object model. That's what it was designed for.
Any other attempt to share classes between code that is compiled by different compilers has the potential to fail. You may be able to get something that seems to work most of the time, but it can't be guaranteed to work.
The chances are that at some point you're going to be relying on undefined behaviour in terms of calling conventions or class structure or memory allocation.
The C++ standard does not say anything about the ABI provided by implementations. Even on a single platform changing the compiler options may change binary layout or function interfaces.
Thus, to ensure that standard types can be used across DLL boundaries, it is your responsibility to ensure that:
Resource acquisition/release for standard types is done by the same DLL. (Note: you can have multiple CRTs in a process, but a resource acquired by crt1.dll must be released by crt1.dll.)
This is not specific to C++. In C, for example, malloc/free and fopen/fclose call pairs must each go to a single C runtime.
This can be done in either of the two ways below:
By explicitly exporting acquisition/release functions (Photon's answer). In this case you are forced to use a factory pattern and abstract types. Basically COM, or a COM clone.
By forcing a group of DLLs to link against the same dynamic CRT. In this case you can safely export any kind of functions/classes.
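For the first option, the exported acquisition/release pair can be as small as this (a sketch; the function names and the __declspec decoration are illustrative, not from the answers above):

#include <cstddef>

// both functions live in, and use the CRT of, the same DLL:
extern "C" __declspec(dllexport) char* buffer_create(std::size_t n) { return new char[n]; }
extern "C" __declspec(dllexport) void  buffer_destroy(char* p)      { delete[] p; }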
There are also two "potential bugs" (among others) you must take care of, since they relate to what is "under" the language.
The first is that std::string is a template, and hence it is instantiated in every translation unit. If they are all linked into the same module (exe or dll), the linker will resolve identical functions as the same code, and inconsistent code (the same function with different bodies) is eventually treated as an error.
But if they are linked into different modules (an exe and a dll), there is nothing (neither compiler nor linker) in common. So, depending on how the modules were compiled, you may have different implementations of the same class with different members and memory layout (for example, one may have some debugging or profiling features added that the other has not). Accessing an object created on one side with methods compiled on the other side, if you have no other way to guarantee implementation consistency, may end in tears.
The second problem (more subtle) relates to allocation/deallocation of memory: because of the way Windows works, every module can have a distinct heap, but standard C++ does not specify how new and delete keep track of which heap an object comes from. If the string buffer is allocated in one module and then moved to a string instance in another module, you risk (upon destruction) giving the memory back to the wrong heap (it depends on how new/delete and malloc/free are implemented with respect to HeapAlloc/HeapFree: this merely relates to the level of "awareness" the STL implementation has of the underlying OS). The operation is not itself destructive -- it just fails -- but it leaks the origin's heap.
All that said, it is not impossible to pass a container. It is just up to you to guarantee a consistent implementation on both sides, since the compiler and linker have no way to cross-check.
I have a data structure made of nested STL containers:
typedef std::map<Solver::EnumValue, double> SmValueProb;
typedef std::map<Solver::VariableReference, Solver::EnumValue> SmGuard;
typedef std::map<SmGuard, SmValueProb> SmTransitions;
typedef std::map<Solver::EnumValue, SmTransitions> SmMachine;
This form of the data is only used briefly in my program, and there's not much behavior that makes sense to attach to these types besides simply storing their data. However, the compiler (VC++2010) complains that the resulting names are too long.
Redefining the types as subclasses of the STL containers with no further elaboration seems to work:
typedef std::map<Solver::EnumValue, double> SmValueProb;
class SmGuard : public std::map<Solver::VariableReference, Solver::EnumValue> { };
class SmTransitions : public std::map<SmGuard, SmValueProb> { };
class SmMachine : public std::map<Solver::EnumValue, SmTransitions> { };
Recognizing that the STL containers aren't intended to be used as a base class, is there actually any hazard in this scenario?
There is one hazard: if you call delete on a pointer to a base class with no virtual destructor, you have Undefined Behavior. Otherwise, you are fine.
At least that's the theory. In practice, under the MSVC ABI or the Itanium ABI (gcc, Clang, icc, ...), delete through a base class pointer with no virtual destructor (gcc and clang warn with -Wdelete-non-virtual-dtor, provided the class has virtual methods) only results in a problem if your derived class adds non-static attributes with non-trivial destructors (e.g. a std::string).
In your specific case, this seems fine... but...
... you might still want to encapsulate (using Composition) and expose meaningful (business-oriented) methods. Not only will it be less hazardous, it will also be easier to understand than it->second.find('x')->begin()...
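A sketch of that composition approach for one of the types (the method names are made up; only the operations the program actually needs are forwarded):

#include <map>

class SmGuard {
public:
    // business-oriented operations instead of the raw map interface
    void bind(Solver::VariableReference var, Solver::EnumValue value) { m[var] = value; }
    bool matches(Solver::VariableReference var, Solver::EnumValue value) const {
        std::map<Solver::VariableReference, Solver::EnumValue>::const_iterator it = m.find(var);
        return it != m.end() && it->second == value;
    }
private:
    std::map<Solver::VariableReference, Solver::EnumValue> m;  // encapsulated, not inherited
};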
Yes there is:
std::map<Solver::VariableReference, Solver::EnumValue>* x = new SmGuard;
delete x;
results in undefined behavior.
This is one of the controversial points of C++ vs. "inheritance-based classical OOP".
There are two aspects that must be taken into consideration:
a typedef introduces another name for the same type: std::map<Solver::EnumValue, double> and SmValueProb are, to all effects, the exact same thing and can be used interchangeably.
a class introduces a new type that is (in principle) unrelated to anything else.
Class relations are defined by the way a class is made up, and by which implicit operations and conversions are possible with other types.
Outside of specific programming paradigms (like OOP, which associates inheritance with the "is-a" relation), inheritance, implicit constructors, implicit casts, and so on all do the same thing: they let a type be used across the interface of another type, thus defining a network of possible operations across different types. This is (generally speaking) "polymorphism".
Various programming paradigms exist that say how such a network should be structured, each attempting to optimize a specific aspect of programming, like the representation of runtime-replaceable objects (classical OOP), the representation of compile-time-replaceable objects (CRTP), the use of generic algorithmic functions for different types (generic programming), or the use of "pure functions" to express algorithm composition (functional programming and lambda captures).
All of them dictate some rules about how language features must be used, since -- C++ being multi-paradigm -- none of its features alone satisfies the requirements of a paradigm, leaving some loose ends open.
As Luchian said, inheriting from std::map will not produce a pure-OOP replaceable type, since a delete through a base pointer will not know how to destroy the derived part, the destructor being non-virtual by design.
But, in fact, this is just a particular case: pbase->find will also not call your own eventually overridden find method, std::map::find not being virtual. (But this is not undefined: it is very well defined to be, most likely, not what you intend.)
The real question is another one: is the "classic OOP substitution principle" important in your design or not?
In other words, are you going to use your classes and their bases interchangeably, with functions taking just a std::map* or std::map& parameter, expecting those functions' calls to std::map functions to result in calls to your methods?
If yes, inheritance is NOT THE WAY TO GO. There are no virtual methods in std::map, hence runtime polymorphism will not work.
If no -- that is, you're just writing your own class reusing both std::map's behavior and interface, with no intention of interchanging their usage (in particular, you are not allocating your own classes with new and deleting them with delete applied to a std::map pointer), providing just a set of functions taking yourclass& or yourclass* as parameters -- then that's perfectly fine. It may even be better than a typedef, since your functions cannot be used with a plain std::map anymore, thus separating the functionalities.
The alternative is encapsulation: make the map an explicit member of your class, letting the map be accessible as a public member, or making it a private member with an accessor function, or rewriting the map interface you need in your class yourself. You finally get an unrelated type with the same interface and its own behavior, at the cost of rewriting the interface of something that may have hundreds of methods.
NOTE:
To anyone thinking about the danger of the missing virtual dtor: note that encapsulating with public visibility won't solve the problem:
class myclass : public std::map<something...>
{};

std::map<something...>* p = new myclass;
delete p;
is UB exactly like
class myclass
{
public:
    std::map<something...> mp;
};

std::map<something...>* p = &((new myclass)->mp);
delete p;
The second sample has the same mistake as the first; it is just less common: both pretend to use a pointer to a partial object to operate on the entire one, with nothing in the partial object letting you know what the "containing" one is.