#ifdef _DEBUG
#define new DEBUG_NEW
#undef THIS_FILE
static char THIS_FILE[] = __FILE__;
#endif
Why define these tags?
CSortHeaderCtrl::CSortHeaderCtrl()
: m_iSortColumn( -1 )
, m_bSortAscending( TRUE )
{
}
What are the two functions after the colon used for?
BEGIN_MESSAGE_MAP(CSortHeaderCtrl, CHeaderCtrl)
//{{AFX_MSG_MAP(CSortHeaderCtrl)
// NOTE - the ClassWizard will add and remove mapping macros here.
//}}AFX_MSG_MAP
END_MESSAGE_MAP()
Are there any similar things in C# like this?
What's this used for?
virtual ~CSortHeaderCtrl();
Why set the destructor function to be virtual?
void CSortHeaderCtrl::Serialize( CArchive& ar )
When will this function be called?
Is this extended from the parent class?
By the way, when you want to extend an MFC class, what documentation do you read?
Since we don't know what functions it has, which functions can we override?
The following is the header file:
/* File: SortHeaderCtrl.h
Purpose: Provides the header control, with drawing of
the arrows, for the list control.
*/
#ifndef SORTHEADERCTRL_H
#define SORTHEADERCTRL_H
#if _MSC_VER > 1000
#pragma once
#endif // _MSC_VER > 1000
class CSortHeaderCtrl : public CHeaderCtrl
{
// Construction
public:
    CSortHeaderCtrl();

// Attributes
public:

// Operations
public:

// Overrides
    // ClassWizard generated virtual function overrides
    //{{AFX_VIRTUAL(CSortHeaderCtrl)
    public:
    virtual void Serialize(CArchive& ar);
    //}}AFX_VIRTUAL

// Implementation
public:
    virtual ~CSortHeaderCtrl();

    void SetSortArrow( const int iColumn, const BOOL bAscending );

// Generated message map functions
protected:
    void DrawItem( LPDRAWITEMSTRUCT lpDrawItemStruct );

    int  m_iSortColumn;
    BOOL m_bSortAscending;

    //{{AFX_MSG(CSortHeaderCtrl)
    // NOTE - the ClassWizard will add and remove member functions here.
    //}}AFX_MSG

    DECLARE_MESSAGE_MAP()
};

//{{AFX_INSERT_LOCATION}}
// Microsoft Visual C++ will insert additional declarations immediately before the previous line.
#endif // SORTHEADERCTRL_H
Question 1: The DEBUG_NEW is probably so the 'new' operator records some extra information about where and when a block was allocated, to help in detecting memory leaks, see this. The THIS_FILE[] static char array simply holds the current filename, probably used by the debug 'new'.
Question 2: This is a C++ initializer list.
Question 3: The destructor is declared virtual because there are other virtual members and this is a derived class. The 'delete' operator needs to know the correct size of the object it is deleting, along with which actual destructor to call, see this.
As for question 2: those are not functions. They are initializer
lists for members of CSortHeaderCtrl. You can think of it as
being equivalent to:
m_iSortColumn = -1;
m_bSortAscending = TRUE;
I emphasise "think of it", because for members that are
classes, only the copy constructor will be invoked (instead of first the
default constructor and then the assignment operator).
Note that, with an initializer list, the initialization order
is not determined by the order it is written, but by order
of the class inheritance and by order of declaration of the member
variables.
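For illustration, here is a minimal sketch (with a hypothetical Example class, not taken from the code above) showing that declaration order, not the order written in the list, drives initialization:

#include <iostream>

struct Example
{
    int a;   // declared first, so initialized first
    int b;

    // b is written first in the list, but a is still initialized first;
    // b then picks up a's already-initialized value.
    Example() : b( a + 1 ), a( 1 ) {}
};

int main()
{
    Example e;
    std::cout << e.a << " " << e.b << std::endl;   // prints "1 2"
    return 0;
}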
Why define these tags?
See jcopenha's answer.
What are the two functions after the colon used for?
See Peter's answer.
Are there any similar things in C# like this? What's this used for?
In C# it might be implemented as a dictionary of delegates.
It's called a "message map" (probably described in one of the subsections of MFC Library Reference Message Handling and Mapping).
Its contents are typically created/edited via the IDE "Class Wizard" (not edited manually using the code/text editor).
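For context, a populated message map usually looks something like this (a sketch only; ON_WM_PAINT and OnPaint are illustrative, not handlers from this project):

BEGIN_MESSAGE_MAP(CSortHeaderCtrl, CHeaderCtrl)
    //{{AFX_MSG_MAP(CSortHeaderCtrl)
    ON_WM_PAINT()   // routes the WM_PAINT message to the OnPaint member function
    //}}AFX_MSG_MAP
END_MESSAGE_MAP()

// ...with the matching declaration in the class:
// afx_msg void OnPaint();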
Why set the destructor function to be virtual?
In C++, if a class might be subclassed then its destructor should almost always be virtual (because otherwise if it's not virtual and you invoke it by deleting a pointer to the superclass, the destructor of the subclass wouldn't be invoked).
When will this function be called?
That's probably described here: MFC Library Reference Serialization in MFC.
Is this extended from the parent class?
According to that link I just gave above, it's the CObject ancestor class: "MFC supplies built-in support for serialization in the class CObject. Thus, all classes derived from CObject can take advantage of CObject's serialization protocol."
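For what it's worth, a Serialize override typically has this shape (a sketch, not this project's actual implementation; which members get archived is a guess):

void CSortHeaderCtrl::Serialize( CArchive& ar )
{
    CHeaderCtrl::Serialize( ar );   // let the ancestor classes (ultimately CObject) do their part

    if ( ar.IsStoring() )
        ar << m_iSortColumn << m_bSortAscending;   // writing out
    else
        ar >> m_iSortColumn >> m_bSortAscending;   // reading back
}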
By the way, when you want to extend an MFC class, what documentation do you read?
The MFC reference documentation.
Since we don't know what functions it has, which functions can we override...
You can typically override everything that is virtual and not private. I think you can also/instead use the Class Wizard that's built into the IDE.
CSortHeaderCtrl is apparently a 3rd-party class, though, not a Microsoft class. Perhaps its authors/vendor wrote some documentation for it, if you're supposed to be using it.
First of all, CSortHeaderCtrl has a virtual destructor because in C++ it is proper practice to make destructors virtual.
Destructors are made virtual in base classes because it means that the destructors in classes derived from the base will be called.
If destructors in derived classes aren't called (i.e. the base class destructor is non-virtual), then they will most likely leak memory and leave resources (streams, handles, etc) open.
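A minimal sketch (with hypothetical class names) of the scenario described above:

#include <iostream>

struct Base
{
    virtual ~Base() { std::cout << "~Base" << std::endl; }
};

struct Derived : Base
{
    ~Derived() { std::cout << "~Derived" << std::endl; }
};

int main()
{
    Base* p = new Derived;
    delete p;   // prints "~Derived" then "~Base"; if ~Base were not virtual,
                // ~Derived would never run and its resources would leak
    return 0;
}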
The rest of the code you posted is generated by Visual Studio to handle common or redundant MFC tasks for you, for example mapping Win32 messages to member functions of your class or window. You shouldn't touch this code: it is likely to be overwritten by the wizard, or you will break it and have a debugging-related headache coming your way.
When should my destructor be virtual?
http://www.parashift.com/c++-faq-lite/virtual-functions.html#faq-20.7
Related
I have a class in a DLL that looks something like this:
#ifdef LIB_EXPORT
#define LIB_API __declspec(dllexport)
#else
#define LIB_API __declspec(dllimport)
#endif
...
class LIB_API MyClass {
public:
// ...public interface...
private:
// ...some private fields...
std::unique_ptr<OtherClass> otherPtr_;
};
Now, I think this could be a problem: if the client code uses a slightly different version of unique_ptr, the memory layout of a MyClass object effectively becomes different from what the code in the DLL might expect.
I don't really want to resort to the Pimpl idiom to hide unique_ptr from the public header. I could, potentially, roll my own simplified version of unique_ptr (I only need a subset of its functionality, for example I don't need custom deleters). But, before I try that, are there any other methods to resolve this?
The problem you've surmised is quite real, and it applies not only to the layout of Standard library classes, but also to your own classes. Unless your class meets the standard-layout rules, different compilers are not expected to use the same in-memory layout, even given exactly the same source code. The answer is that C++ classes shouldn't be exported at all.
Case #1: If you want unique_ptr for managing the lifetime of public objects of the DLL:
Export a factory function and deletion function from the DLL, and put a wrapper class inside the public header. The wrapper exists completely within the client, and therefore uses the client's version of unique_ptr only.
__declspec(dllexport) is NOT used on the wrapper class.
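A hedged sketch of that arrangement (CreateMyClass, DestroyMyClass and MyClassHandle are hypothetical names, LIB_API is the export macro from the question, and MyClass itself is no longer declared in the public header):

// In the public header; note: no __declspec(dllexport) on the wrapper class.
#include <memory>

class MyClass;   // opaque to the client; its layout never crosses the DLL boundary

// Exported from the DLL as plain functions:
extern "C" LIB_API MyClass* CreateMyClass();
extern "C" LIB_API void DestroyMyClass( MyClass* p );

struct MyClassDeleter
{
    void operator()( MyClass* p ) const { DestroyMyClass( p ); }
};

// The wrapper is compiled entirely inside the client, so only the client's
// own unique_ptr implementation is ever involved.
class MyClassHandle
{
public:
    MyClassHandle() : ptr_( CreateMyClass() ) {}
    MyClass* get() const { return ptr_.get(); }
private:
    std::unique_ptr<MyClass, MyClassDeleter> ptr_;
};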
Case #2: If the DLL uses unique_ptr internally:
Instead of pimpl, you should use inheritance. The public header file contains a base class with protected constructor, pure virtual member functions and no data members at all. Again, __declspec(dllexport) is NOT used. A dllexport factory function is used to create new instances. Inside the DLL, you inherit from this interface type, the derived class adds all the data members and function bodies. None of the data members are ever seen by the client, so you can freely use C++ objects and the layout used is local to the DLL.
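A hedged sketch of that arrangement (hypothetical names and signatures; the answer above doesn't prescribe these exactly):

// Public header: abstract interface only, no data members, no dllexport on the class.
class IMyClass
{
public:
    virtual void DoWork() = 0;      // whatever the public interface needs
    virtual void Destroy() = 0;     // deletion goes back through the DLL
protected:
    IMyClass() {}
    virtual ~IMyClass() {}
};

extern "C" LIB_API IMyClass* CreateMyClass();   // dllexport factory function

// Inside the DLL only: the derived class holds all the data members, so their
// layout (including unique_ptr) never reaches the client.
// class MyClassImpl : public IMyClass
// {
// public:
//     void DoWork() { /* ... */ }
//     void Destroy() { delete this; }
// private:
//     std::unique_ptr<OtherClass> otherPtr_;
// };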
A side effect of both of these is that trivial member functions won't be inlined, which may negatively affect performance. But calling into the DLL for every member access is the only way to achieve decoupling.
I provide an SDK to my users, allowing them to write DLLs in C++ for expanding the software.
The SDK headers mostly contain interface class definitions. These classes are of two types:
Some that the user must subclass and implement
Some that are wrappers to core classes, passed by the app to the DLL functions as pointers, which can then be used as arguments by the DLL code for calling core functions. These interfaces should not be subclassed by the user and passed to the core functions, as they expect a specific core subclass.
I write in the manual which interfaces should not be subclassed, and should only be used through pointers to objects provided by the app. But in some places, it's too tempting to subclass them if you do not read the manual.
Would it be possible to prevent subclassing some interfaces in the SDK headers?
As long as the client doesn't need to use the pointer for anything but
passing it back into your DLL, you can just use a forward declaration;
you can't derive from an incomplete type. (When faced with a similar
case recently, I went whole hog, and designed a special wrapper type
based on void*. There's a lot of casting in the interface code, but
there's no way the client can do much other than pass the value back to
me.)
If the classes in question implement an interface which the client must
also use, there are two solutions. The first is to change this,
replacing each of the member functions with a free function which takes
a pointer to the type, and just provide a forward declaration. The
second is to use something like:
class InternallyVisibleInterface;
class ClientVisibleInterface
{
private:
virtual void doSomething() = 0;
ClientVisibleInterface() = default;
friend class InternallyVisibleInterface;
protected: // Or public, depending on whether the client should
// be able to delete instances or not.
virtual ~ClientVisibleInterface() = default;
public:
void something();
};
and in your DLL:
class InternallyVisibleInterface : public ClientVisibleInterface
{
protected:
InternallyVisibleInterface() {}
// And anything else you need. If there is only one class in
// your application which should derive from the interface,
// this is it. If there are several, they should derive from
// this class, rather than ClientVisibleInterface, since this
// is the only class which can construct the
// ClientVisibleInterface base class.
};
void ClientVisibleInterface::something()
{
assert( dynamic_cast<InternallyVisibleInterface*>( this ) != nullptr );
doSomething();
}
This offers two levels of protection: first, although derivation
directly from ClientVisibleInterface is possible, it's impossible for
the resulting class to have a constructor, and so it cannot be
instantiated. And secondly, if the client code does cheat somehow,
there will be a runtime error if he does so.
You probably don't need both protections; one or the other should
suffice. The private constructor will result in a compile time error,
rather than a runtime one. On the other hand, without it, you don't
even have to mention the name of InternallyVisibleInterface in the
distributed headers.
As soon as a developer has a development environment, he can do almost anything, and you should not even try to control that.
IMHO the best you can do is to identify the limit between the core application and the extension DLLs and ensure that objects received from those DLLs are of the correct class, and abort with a distinctive message if they are not.
Using RTTI and typeid is generally frowned upon because it is usually the sign of a bad OOP design: in normal use cases, calling a virtual method is enough to have the proper code invoked. But I think it can safely be considered in your use case.
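For example (a sketch that borrows the interface names from the previous answer purely for illustration), the boundary check could look like this:

#include <cstdlib>
#include <iostream>
#include <typeinfo>

void coreEntryPoint( ClientVisibleInterface* p )
{
    // RTTI check at the core/DLL boundary: abort with a distinctive message
    // if the extension handed us something it rolled itself.
    if ( dynamic_cast<InternallyVisibleInterface*>( p ) == 0 )
    {
        std::cerr << "extension DLL passed an object of unexpected type: "
                  << typeid( *p ).name() << std::endl;
        std::abort();
    }
    // ... safe to treat p as one of the core's own objects from here on ...
}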
Let me start by telling that I understand how virtual methods work (polymorphism, late-binding, vtables).
My question is whether or not I should make my method virtual. I will exemplify my dilemma on a specific case, but any general guidelines will be welcomed too.
The context:
I am creating a library. In this library I have a class CallStack that captures a call stack and then offers vector-like access to the captured stack frames. The capture is done by a protected method CaptureStack. This method could be redefined in a derived class, if the users of the library wish to implement another way to capture the stack. Just to be clear, the discussion to make the method virtual applies only to some methods that I know can be redefined in a derived class (in this case CaptureStack and the destructor), not to all the class methods.
Throughout my library I use CallStack objects, but never exposed as pointers or reference parameters, thus making virtual not needed considering only the use of my library.
And I cannot think of a case when someone would want to use CallStack as pointer or reference to implement polymorphism. If someone wants to derive CallStack and redefine CaptureStack I think just using the derived class object will suffice.
Now, just because I cannot see a case where polymorphism will be needed, should I avoid virtual methods, or should I use virtual regardless, simply because a method can be redefined?
Example how CallStack can be used outside my library:
if (error) {
CallStack call_stack; // the constructor calls CaptureStack
for (const auto &stack_frame : call_stack) {
cout << stack_frame << endl;
}
}
A derived class that redefines CaptureStack could be used in the same manner, without needing polymorphism:
if (error) {
// since this is not a CallStack pointer / reference, virtual would not be needed.
DerivedCallStack d_call_stack;
for (const auto &stack_frame : d_call_stack) {
cout << stack_frame << endl;
}
}
If your library saves the call stack during the constructor then you cannot use virtual methods.
This is C++. One thing people often get wrong when coming to C++ from another language is using virtual methods in constructors. This never works as planned.
C++ sets the virtual function table during each constructor call. That means that functions are never virtual when called from the constructor. The virtual method always points to the current class being constructed.
So even if you did use a virtual method to capture the stack the constructor code would always call the base class method.
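A minimal sketch of that behaviour (hypothetical class names, loosely mirroring the question's CallStack/DerivedCallStack):

#include <iostream>

struct BaseStack
{
    BaseStack() { CaptureStack(); }   // called during construction
    virtual void CaptureStack() { std::cout << "base capture" << std::endl; }
    virtual ~BaseStack() {}
};

struct DerivedStack : BaseStack
{
    virtual void CaptureStack() { std::cout << "derived capture" << std::endl; }
};

int main()
{
    DerivedStack s;   // prints "base capture" - the derived override is not
                      // yet in effect while BaseStack's constructor runs
    return 0;
}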
To make it work you'd need to take the call out of the constructor and use something like:
CallStack *stack = new DerivedStack;
stack->CaptureStack();
None of your code examples show a good reason to make CaptureStack virtual.
When deciding whether you need a virtual function or not, consider whether deriving and overriding the function would change the expected behavior/functionality of the other functions you're implementing now.
If you are relying on the implementation of that particular function elsewhere in the same class, such as in another member function, then you might want to make the function virtual. But if you know what the function is supposed to do in your parent class, and you don't want anybody to change it as far as you're concerned, then it's not a virtual function.
Or, as another example, imagine somebody derives a class from your implementation, overrides a function, and passes that object, cast to the parent class, to one of your own functions/classes. Would you prefer your original implementation of the function to run, or do you want their overridden implementation to be used? If the latter is the case, then you should go for virtual; otherwise, don't.
It's not clear to me where CaptureStack is being called. From
your examples, it looks like you're using the template method
pattern, in which the basic functionality is implemented in the
base class, but customized by means of virtual functions
(normally private, not protected) which are provided by the
derived class. In this case (as Peter Bloomfield points out),
the functions must be virtual, since they will be called from
within a member function of the base class; thus, with a static
type of CallStack. However: if I understand your examples
correctly, the call to CaptureStack will be in the constructor.
This will not work, as during construction of CallStack, the
dynamic type of the object is CallStack, and not
DerivedCallStack, and virtual function calls will resolve to
CallStack.
In such a case, for the use cases you describe, a solution using
templates may be more appropriate. Or even... The name of the
class is clear. I can't think of any reasonable case where
different instances should have different means of capturing the
call stack in a single program. Which suggests that link time
resolution of the type might be appropriate. (I use the
compilation firewall idiom and link time resolution in my own
StackTrace class.)
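A hedged sketch of the template direction (hypothetical names, and only one of several possible shapes):

#include <string>
#include <vector>

// The capture policy is chosen at compile time, so no virtual call is needed.
template <typename CapturePolicy>
class BasicCallStack
{
public:
    BasicCallStack() : frames_( CapturePolicy::capture() ) {}
    // ... vector-like access to frames_ ...
private:
    std::vector<std::string> frames_;
};

struct DefaultCapture
{
    static std::vector<std::string> capture();   // defined in the library
};

typedef BasicCallStack<DefaultCapture> CallStack;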
My question is whether or not I should make my method virtual. I will exemplify my dilemma on a specific case, but any general guidelines will be welcomed too.
Some guidelines:
if you are unsure, you should not do it. Lots of people will tell you that your code should be easily extensible (and as such, virtual), but in practice, most extensible code is never extended, unless you make a library that will be used heavily (see YAGNI principle).
you can use encapsulation in place of inheritance and type polymorphism (templates) as an alternative to class hierarchies in many cases (e.g. std::string and std::wstring are not two concrete implementations of a base string class and they are not inheritable at all).
if (when you are designing your code/public interfaces) you realize you have more than one class that "is an" implementation of another class's interface, then you should use virtual functions.
You should almost certainly declare the method as virtual.
The first reason is that anything in your base class which calls CaptureStack will be doing so through a base class pointer (i.e. the local this pointer). It will therefore call the base class version of the function, even though a derived class masks it.
Consider the following example:
#include <iostream>

class Parent
{
public:
void callFoo()
{
foo();
}
void foo()
{
std::cout << "Parent::foo()" << std::endl;
}
};
class Child : public Parent
{
public:
void foo()
{
std::cout << "Child::foo()" << std::endl;
}
};
int main()
{
Child obj;
obj.callFoo();
return 0;
}
The client code using the class is only ever using a derived object (not a base class pointer etc.). However, it's the base class version of foo() that actually gets called. The only way to resolve that is to make foo() virtual.
The second reason is simply one of correct design. If the purpose of the derived class function is to override rather than mask the original, then it should probably do so unless there is a specific reason otherwise (such as performance concerns). If you don't do that, you're inviting bugs and mistakes in future, because the class may not act as expected.
I came across this article on Code Project that talks about using an abstract interface as an alternative to exporting an entire class from a C++ DLL to avoid name mangling issues. The author has a Release() method in his interface definition that is supposed to be called by the user to free the class object's resources after using it. To automate the calling of this method the author also creates an std::auto_ptr<T>-like class that calls the Release() method before deleting the object.
I was wondering whether the following approach would work instead:
#include <memory>
#if defined(XYZLIBRARY_EXPORT) // inside DLL
# define XYZAPI __declspec(dllexport)
#else // outside DLL
# define XYZAPI __declspec(dllimport)
#endif // XYZLIBRARY_EXPORT
// The abstract interface for Xyz object.
// No extra specifiers required.
struct IXyz
{
virtual int Foo(int n) = 0;
//No Release() method, sub-class' destructor does cleanup
//virtual void Release() = 0;
virtual ~IXyz() {}
};
// Factory function that creates instances of the Xyz object.
// Private function, do not use directly
extern "C" XYZAPI IXyz* __stdcall GetXyz_();
#define GetXyz() std::auto_ptr<IXyz>( GetXyz_() )
Of course, GetXyz() can be a global function defined in the header instead of a #define. The advantage to this method would be that we don't need to cook up our own derivative of auto_ptr that calls the Release() method.
By doing this, you risk calling delete (in your process, within auto_ptr's destructor) on an object that was not created by a matching call to new in the same module (the new happens inside the factory function, hence inside the DLL). Trouble is guaranteed, for instance when your DLL is compiled in release mode while the calling process is in debug mode.
The Release() method is better.
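For reference, the Release()-calling wrapper such an article describes can be approximated like this (a sketch, not the article's actual code):

// Destruction is routed back through the DLL via Release(), so the DLL's own
// CRT frees the memory it allocated.
template <typename T>
class ReleasePtr
{
public:
    explicit ReleasePtr( T* p = 0 ) : p_( p ) {}
    ~ReleasePtr() { if ( p_ ) p_->Release(); }
    T* operator->() const { return p_; }
    T* get() const { return p_; }
private:
    ReleasePtr( const ReleasePtr& );              // non-copyable, for brevity
    ReleasePtr& operator=( const ReleasePtr& );
    T* p_;
};

// Usage (assuming IXyz keeps its Release() method):
// ReleasePtr<IXyz> xyz( GetXyz_() );
// xyz->Foo( 42 );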
This is exactly how COM works. Avoid re-inventing this wheel if you already target the Win32 API. Using smart pointers to store COM interface pointers is very common in Windows programming; their destructors call the Release() method. Take a peek at the MSDN docs for _com_ptr_t and CComPtr for ideas.
The restriction you face, if this is a public API, is the CRT that the different modules will be linked against: the CRT that creates the object also needs to be the one that deletes it.
There will be a mess if you don't choose the right CRT.
Anything sharing memory management should use the CRT (or other memory allocator) as an external lib - i.e. MSVC: Multi-threaded DLL (/MD).
Given that, there is not even a need for the subclass to achieve your purpose.
I've been having a discussion with my coworkers as to whether to prefix overridden methods with the virtual keyword, or only at the originating base class.
I tend to prefix all virtual methods (that is, methods involving a vtable lookup) with the virtual keyword. My rationale is threefold:
1. Given that C++ lacks an override keyword, the presence of the virtual keyword at least notifies you that the method involves a lookup and could theoretically be overridden by further specializations, or could be called through a pointer to a higher base class.
2. Consistently using this style means that, when you see a method (at least within our code) without the virtual keyword, you can initially assume that it is neither derived from a base nor specialized in a subclass.
3. If, through some error, the virtual were removed from IFoo, all children will still function (CFooSpecialization::DoBar would still override CFooBase::DoBar, rather than simply hiding it).
The argument against the practice, as I understood it, was, "But that method isn't virtual" (which I believe is invalid, and borne from a misunderstanding of virtuality), and "When I see the virtual keyword, I expect that means someone is deriving from it, and go searching for them."
The hypothetical classes may be spread across several files, and there are several specializations.
class IFoo {
public:
virtual void DoBar() = 0;
void DoBaz();
};
class CFooBase : public IFoo {
public:
virtual void DoBar(); // Default implementation
void DoZap();
};
class CFooSpecialization : public CFooBase {
public:
virtual void DoBar(); // Specialized implementation
};
Stylistically, would you remove the virtual keyword from the two derived classes? If so, why? What are Stack Overflow's thoughts here?
I completely agree with your rationale. It's a good reminder that the method will have dynamic dispatch semantics when called. The "that method isn't virtual" argument that your co-worker is using is completely bogus. He's mixed up the concepts of virtual and pure-virtual.
A function, once virtual, is always virtual.
So in any event, if the virtual keyword is not used in the subsequent classes, it does not prevent the function/method from being 'virtual', i.e. from being overridden. One of the projects that I worked in had the following guidelines, which I somewhat liked:
If the function/method is supposed to be overridden, always use the 'virtual' keyword. This is especially true when used in interface/base classes.
If the derived class is supposed to be sub-classed further, explicitly state the 'virtual' keyword for every function/method that can be overridden. In C++11, use the 'override' keyword.
If the function/method in the derived class is not supposed to be overridden again, then the 'virtual' keyword is to be commented out, indicating that the function/method was overridden but that no further classes override it again. This of course does not prevent someone from overriding it in a derived class unless the class is made final (non-derivable), but it indicates that the method is not supposed to be overridden.
Ex: /*virtual*/ void guiFocusEvent();
In C++11, use the 'final' keyword along with 'override':
Ex: void guiFocusEvent() override final;
Adding virtual does not have a significant impact either way. I tend to prefer it but it's really a subjective issue. However, if you make sure to use the override and sealed keywords in Visual C++, you'll gain a significant improvement in ability to catch errors at compile time.
I include the following lines in my PCH:
#if _MSC_VER >= 1400
#define OVERRIDE override
#define SEALED sealed
#else
#define OVERRIDE
#define SEALED
#endif
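Used on the classes from the question, the macros would look something like this (a sketch; override and sealed here are Visual C++ language extensions, not standard C++):

class CFooBase : public IFoo {
public:
    virtual void DoBar() OVERRIDE;           // compile error if IFoo::DoBar's signature changes
    void DoZap();
};

class CFooSpecialization : public CFooBase {
public:
    virtual void DoBar() OVERRIDE SEALED;    // compile error if a further
                                             // subclass tries to override DoBar
};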
I would tend not to use any syntax that the compiler will allow me to omit. Having said that, part of the design of C# (in an attempt to improve over C++) was to require overrides of virtual methods to be labeled as "override", and that seems like a reasonable idea. My concern is that, since it's completely optional, it's only a matter of time before someone omits it, and by then you'll have gotten into the habit of expecting overrides to have "virtual" specified. Maybe it's best to just live within the limitations of the language, then.
I can think of one disadvantage:
When a class member function is not overridden and you declare it virtual, you add an unnecessary entry in the virtual table for that class definition.
Note: My answer regards C++03 which some of us are still stuck with. C++11 has the override and final keywords as #JustinTime suggests in the comments which should probably be used instead of the following suggestion.
There are plenty of answers already and two contrary opinions that stand out the most. I want to combine what #280Z28 mentioned in his answer with #StevenSudit's opinion and #Abhay's style guidelines.
I disagree with #280Z28 and wouldn't use Microsoft's language extensions unless you are certain that you will only ever use that code on Windows.
But I do like the keywords. So why not just use a #define-d keyword addition for clarity?
#define OVERRIDE
#define SEALED
or
#define OVERRIDE virtual
#define SEALED virtual
The difference being your decision on what you want to happen in the case you outline in your 3rd point.
3 - If, through some error, the virtual were removed from IFoo, all children will still function (CFooSpecialization::DoBar would still override CFooBase::DoBar, rather than simply hiding it).
Though I would argue that it is a programming error, so there is no "fix"; you probably shouldn't even bother mitigating it, but should instead ensure it crashes or notifies the programmer in some other way (though I can't think of one right now).
Should you choose the first option and don't like adding #define's, then you can just use comments like:
/* override */
/* sealed */
And that should do the job for all cases where you want clarity, because I don't consider the word virtual to be clear enough for what you want it to do.