Will this cause a problem with different runtimes with DLLs? - C++

My GUI application supports polymorphic timed events, which means that the user calls new and the GUI calls delete. This can create a problem if the runtimes are incompatible.
So I was told a proposed solution would be this:
class base;

class Deallocator {
public:
    void operator()(base* ptr)
    {
        delete ptr;
    }
};

class base {
public:
    base(Deallocator dealloc)
    {
        m_deleteFunc = dealloc;
    }
    ~base()
    {
        m_deleteFunc(this);
    }
private:
    Deallocator m_deleteFunc;
};

int main()
{
    Deallocator deletefunc;
    base baseObj(deletefunc);
}
While this is a good solution, it does demand that the user create a Deallocator object, which I do not want. I was, however, wondering what would happen if I provided a Deallocator to each derived class, e.g.:
class derived : public base
{
    Deallocator dealloc;
public:
    derived() : base(dealloc)
    {
    }
};
I think this still does not work though. The constraint is that:
The addTimedEvent() function is part of the Widget class, which is also in the DLL, but it is instantiated by the user. The other constraint is that some classes which derive from Widget call this function with their own timed event classes.
Given that "he who called new must call delete" what could work given these constraints?
Thanks

I suggest that you study the COM reference-counting paradigm (AddRef and Release). It allows more flexible lifetime management and guarantees that the correct deallocator is used, because the object deletes itself.
Please note that if you're sharing class objects across DLL boundaries, you could have much bigger problems than just using the same allocator. There's the whole one-definition rule to account for, as well as calling conventions, data layout, and name-mangling schemes that differ between compilers. So if you want a reusable library, you really need to adopt the COM way of doing things: reference counting, self-deletion, and an interface containing only pure virtual functions. Whether you build real COM objects or your own COM-like system would depend on your other requirements.
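For illustration, a minimal sketch of what a COM-like interface for the timed events could look like. This is not real COM, and all the names (ITimedEvent, UserEvent, Fire) are invented for the example; the point is only that the interface contains nothing but pure virtual functions and that the object frees itself in Release, in the runtime that allocated it:

class ITimedEvent
{
public:
    virtual unsigned long AddRef() = 0;
    virtual unsigned long Release() = 0; // deletes the object when the count reaches zero
    virtual void Fire() = 0;
protected:
    virtual ~ITimedEvent() {} // clients go through Release(), never delete
};

// Implemented on the user's side of the DLL boundary, so both the new
// and the delete below run in the user's runtime.
class UserEvent : public ITimedEvent
{
    unsigned long m_refs;
public:
    UserEvent() : m_refs(1) {}
    virtual unsigned long AddRef() { return ++m_refs; }
    virtual unsigned long Release()
    {
        unsigned long refs = --m_refs;
        if (refs == 0)
            delete this; // freed by the same runtime that allocated it
        return refs;
    }
    virtual void Fire() { /* user code */ }
};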

The first thing that comes to mind is to give the base class a virtual (abstract?) SelfDestruct method. Assuming that the consumer of your DLL passes a class he derived himself, he will know how to deallocate it.
If he can pass classes which you have written, then you've got more problems. I suggest disallowing direct allocation of such classes and providing a static method that allocates them with your own allocator.
I'm not sure if I've explained my idea very clearly... if not, please ask, I'll provide code later.
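In the meantime, here is a rough sketch of the idea, with all names invented for the example. The derived class is compiled in the consumer's translation units, so its delete uses the consumer's allocator:

class base
{
public:
    virtual void SelfDestruct() = 0; // the side that called new knows how to free
protected:
    virtual ~base() {}
};

// In the consumer's code, compiled with the consumer's runtime:
class UserTimedEvent : public base
{
public:
    virtual void SelfDestruct() { delete this; } // uses the user's deallocator
};

// The gui then calls event->SelfDestruct() instead of delete event.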

What could work with the given constraints is that you associate a deleter function-pointer with each TimedEvent, where both are specified as arguments to addTimedEvent.
To relieve the client of the burden of creating a custom deleter function, you can provide an inline deleter function as a member of the anonymous namespace in the header of your widget class.
For example:
// Widget header
class base;

namespace {
    inline void default_deleter(base* p)
    {
        delete p;
    }
}

class Widget
{
public:
    void addTimedEvent(base* event, void (*deleter)(base*));
};
The advantage of the inline function is that it is compiled in the context of the client code, so the delete inside it uses a deallocator compatible with the one the client used to allocate the event.
Edit: Made the deleter function a member of the anonymous namespace. This is needed to avoid ODR violations.
Without the namespace, you get two functions default_deleter that have the same external name (so they are the same as far as the linker is concerned), but with different semantics, because they refer to different deallocators.
With the anonymous namespace, all instances of default_deleter become separate entities for the linker. This has the (unfortunate) side-effect that you can no longer use the function as a default argument to addTimedEvent.
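For illustration, client code could then look something like this (MyEvent and the header name are hypothetical; base's full definition is assumed to come from the library's headers):

// Client translation unit.
#include "widget.h" // brings in base, Widget and the inline default_deleter

class MyEvent : public base { /* event-specific members */ };

void registerEvent(Widget& w)
{
    // default_deleter is instantiated here, in the client's runtime, so
    // the delete inside it matches the new below.
    w.addTimedEvent(new MyEvent, &default_deleter);
}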

Related

Prevent subclassing an abstract class interface in C++

I provide an SDK to my users, allowing them to write DLLs in C++ to extend the software.
The SDK headers mostly contain interface class definitions. These classes are of two types:
Some that the user must subclass and implement
Some that are wrappers to core classes, passed by the app to the DLL functions as pointers, which can then be used as arguments by the DLL code for calling core functions. These interfaces should not be subclassed by the user and passed to the core functions, as they expect a specific core subclass.
I write in the manual which interfaces should not be subclassed, and only used through pointers to objects provided by the app. But in some places it's too tempting to subclass them if you have not read the manual.
Would it be possible to prevent subclassing some interfaces in the SDK headers?
As long as the client doesn't need to use the pointer for anything but passing it back into your DLL, you can just use a forward declaration; you can't derive from an incomplete type. (When faced with a similar case recently, I went whole hog, and designed a special wrapper type based on void*. There's a lot of casting in the interface code, but there's no way the client can do much other than pass the value back to me.)
If the classes in question implement an interface which the client must also use, there are two solutions. The first is to change this, replacing each of the member functions with a free function which takes a pointer to the type, and just provide a forward declaration. The second is to use something like:
class InternallyVisibleInterface;

class ClientVisibleInterface
{
private:
    virtual void doSomething() = 0;
    ClientVisibleInterface() = default;
    friend class InternallyVisibleInterface;
protected: // Or public, depending on whether the client should
           // be able to delete instances or not.
    virtual ~ClientVisibleInterface() = default;
public:
    void something();
};
and in your DLL:
class InternallyVisibleInterface : public ClientVisibleInterface
{
protected:
    InternallyVisibleInterface() {}
    // And anything else you need. If there is only one class in
    // your application which should derive from the interface,
    // this is it. If there are several, they should derive from
    // this class, rather than ClientVisibleInterface, since this
    // is the only class which can construct the
    // ClientVisibleInterface base class.
};
void ClientVisibleInterface::something()
{
    assert( dynamic_cast<InternallyVisibleInterface*>( this ) != nullptr );
    doSomething();
}
This offers two levels of protection: first, although derivation directly from ClientVisibleInterface is possible, it's impossible for the resulting class to have a constructor, and so it cannot be instantiated. And secondly, if the client code does somehow cheat, there will be a runtime error.
You probably don't need both protections; one or the other should suffice. The private constructor will result in a compile-time error, rather than a runtime one. On the other hand, without it, you don't even have to mention the name of InternallyVisibleInterface in the distributed headers.
As soon as a developer has a development environment, he can do almost anything, and you should not even try to control that.
IMHO the best you can do is to identify the boundary between the core application and the extension DLLs, ensure that objects received from those DLLs are of the correct class, and abort with a distinctive message if they are not.
Using RTTI and typeid is generally frowned upon because it is usually the sign of a bad OOP design: in the normal use case, calling a virtual method is enough to have the proper code invoked. But I think it can safely be considered in your use case.
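As a sketch of such a check at the DLL boundary (CoreWidget, checkFromDll and the message text are illustrative, not from the question):

#include <cstdio>
#include <cstdlib>
#include <typeinfo>

class Widget { public: virtual ~Widget() {} };
class CoreWidget : public Widget { /* the only sanctioned implementation */ };

void checkFromDll(Widget* w)
{
    // Abort with a distinctive message if an extension DLL handed the
    // core something that is not one of its own subclasses.
    if (dynamic_cast<CoreWidget*>(w) == 0) {
        std::fprintf(stderr, "extension DLL passed a foreign type (%s) to the core\n",
                     typeid(*w).name());
        std::abort();
    }
}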

make_unique, factory method or different design for client API?

We have a library that publishes an abstract base class:
(illustrative pseudocode)
/include/reader_api.hpp
class ReaderApi
{
public:
    static std::unique_ptr<ReaderApi> CreatePcapReader();
    virtual ~ReaderApi() = 0;
};
In the library implementation, there is a concrete implementation of ReaderApi that reads pcap files:
/lib/pcap_reader_api.cpp
class PcapReaderApi : public ReaderApi
{
public:
    PcapReaderApi() {}
};
Client code is expected to instantiate one of these PcapReaderApi objects via the factory method:
std::unique_ptr <ReaderApi> reader = ReaderApi::CreatePcapReader ();
This feels gross to me on a couple of levels.
First, the factory method should be free, not a static member of ReaderApi. It was made a static member of ReaderApi to be explicit about the namespaces. I can see pros & cons either way. Feel free to comment on this, but it's not my main point of contention.
Second, my instinct tells me I should be using std::make_unique rather than calling a factory method at all. But since the actual object being made is an implementation detail, not part of the public headers, there's nothing for the client to make_unique.
The simplest solution, in terms of understandability and code maintenance, appears to be the one I've already come up with above, unless there is a better solution that I'm not aware of. Performance is not a major consideration here since, because of the nature of this object, it will only be instantiated once, at startup time.
With code clarity, understandability, and maintainability in mind, is there a better way to design the creation of these objects than I have here?
I have considered two alternatives that I'll go over below.
One alternative I've considered is passing in some kind of identifier to a generic Create function. The identifier would specify the kind of object the client wishes to construct. It would likely be an enum class, along these lines:
enum class DeviceType
{
    PcapReader
};

std::unique_ptr<ReaderApi> CreateReaderDevice(DeviceType);
But I'm not sure I see the point of doing this versus just making the create function free and explicit:
std::unique_ptr <ReaderApi> CreatePcapReader ();
I also thought about specifying the DeviceType parameter in ReaderApi's constructor:
class ReaderApi
{
public:
    ReaderApi(DeviceType type);
    virtual ~ReaderApi() = 0;
};
This would enable the make_unique idiom:
std::unique_ptr <ReaderApi> reader = std::make_unique <ReaderApi> (DeviceType::PcapReader);
But this obviously would present a big problem -- you're actually trying to construct a ReaderApi, not a PcapReader. The obvious solution to this problem is to implement a virtual constructor idiom or use factory construction. But virtual construction seems over-engineered to me for this use.
To me, the two options worth considering are your current approach, or an appropriately named free function at namespace level. There doesn't seem to be a need for an enumerated factory unless there are details you haven't mentioned.
Using make_unique exposes implementation details, so I would definitely not suggest that approach.
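For reference, a sketch of the free-function variant, keeping the concrete type out of the public header (the inline destructor definition is one way to satisfy the pure virtual destructor; file names follow the question, the rest is illustrative):

// include/reader_api.hpp
#include <memory>

class ReaderApi
{
public:
    virtual ~ReaderApi() = 0;
};
inline ReaderApi::~ReaderApi() {}

std::unique_ptr<ReaderApi> CreatePcapReader(); // free factory function

// lib/pcap_reader_api.cpp
class PcapReaderApi : public ReaderApi { /* ... */ };

std::unique_ptr<ReaderApi> CreatePcapReader()
{
    return std::make_unique<PcapReaderApi>(); // make_unique stays an implementation detail
}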

C++ should I use virtual methods?

Let me start by telling that I understand how virtual methods work (polymorphism, late-binding, vtables).
My question is whether or not I should make my method virtual. I will exemplify my dilemma on a specific case, but any general guidelines will be welcomed too.
The context:
I am creating a library. In this library I have a class CallStack that captures a call stack and then offers vector-like access to the captured stack frames. The capture is done by a protected method CaptureStack. This method could be redefined in a derived class, if the users of the library wish to implement another way to capture the stack. Just to be clear, the discussion to make the method virtual applies only to some methods that I know can be redefined in a derived class (in this case CaptureStack and the destructor), not to all the class methods.
Throughout my library I use CallStack objects, but they are never exposed as pointer or reference parameters, so virtual dispatch is not needed as far as my library's own use is concerned.
And I cannot think of a case when someone would want to use CallStack as a pointer or reference to implement polymorphism. If someone wants to derive from CallStack and redefine CaptureStack, I think just using the derived class object will suffice.
Now, just because I cannot see a need for polymorphism, should I avoid virtual methods, or should I make the method virtual regardless, simply because it can be redefined?
Example how CallStack can be used outside my library:
if (error) {
    CallStack call_stack; // the constructor calls CaptureStack
    for (const auto &stack_frame : call_stack) {
        cout << stack_frame << endl;
    }
}
A derived class that redefines CaptureStack could be used in the same manner, without needing polymorphism:
if (error) {
    // since this is not a CallStack pointer / reference, virtual would not be needed.
    DerivedCallStack d_call_stack;
    for (const auto &stack_frame : d_call_stack) {
        cout << stack_frame << endl;
    }
}
If your library saves the call stack during the constructor then you cannot use virtual methods.
This is C++. One thing people often get wrong when coming to C++ from another language is using virtual methods in constructors. This never works as planned.
C++ sets the virtual function table during each constructor call. That means that functions are never virtual when called from the constructor. The virtual method always points to the current class being constructed.
So even if you did use a virtual method to capture the stack the constructor code would always call the base class method.
To make it work you'd need to take the call out of the constructor and use something like:
CallStack* stack = new DerivedCallStack;
stack->CaptureStack();
None of your code examples show a good reason to make CaptureStack virtual.
When deciding whether you need a virtual function or not, you need to ask whether deriving and overriding the function would change the expected behavior of the other functions you're implementing now.
If you are relying on the implementation of that particular function in other parts of the same class, such as another member function, then you might want to make the function virtual. But if you know what the function is supposed to do in your parent class, and you don't want anybody to change it as far as you're concerned, then it should not be a virtual function.
Or as another example, imagine somebody derives a class from your implementation, overrides a function, and passes that object, cast to the parent class, to one of your own functions. Would you prefer your original implementation of the function to run, or their overridden one? If the latter, then you should go for virtual; otherwise, don't.
It's not clear to me where CaptureStack is being called. From your examples, it looks like you're using the template method pattern, in which the basic functionality is implemented in the base class, but customized by means of virtual functions (normally private, not protected) which are provided by the derived class. In this case (as Peter Bloomfield points out), the functions must be virtual, since they will be called from within a member function of the base class; thus, with a static type of CallStack. However: if I understand your examples correctly, the call to CaptureStack will be in the constructor. This will not work, as during construction of CallStack, the dynamic type of the object is CallStack, and not DerivedCallStack, and virtual function calls will resolve to CallStack.
In such a case, for the use cases you describe, a solution using templates may be more appropriate. Or even... The name of the class is clear. I can't think of any reasonable case where different instances should have different means of capturing the call stack in a single program. Which suggests that link time resolution of the type might be appropriate. (I use the compilation firewall idiom and link time resolution in my own StackTrace class.)
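A minimal sketch of the link-time idea, with all names invented for the example: the header declares the capture hook but never defines it, and each build links exactly one definition.

// callstack.h
#include <string>
#include <vector>

void captureStack(std::vector<std::string>& frames); // no definition in the header

class CallStack
{
public:
    CallStack() { captureStack(frames_); } // non-virtual call, safe in a constructor
private:
    std::vector<std::string> frames_;
};

// capture_unwind.cpp, capture_dbghelp.cpp, ... each provide their own
// definition of captureStack(); the one that applies is chosen by which
// object file or library you link.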
My question is whether or not I should make my method virtual. I will exemplify my dilemma on a specific case, but any general guidelines will be welcomed too.
Some guidelines:
if you are unsure, you should not do it. Lots of people will tell you that your code should be easily extensible (and as such, virtual), but in practice, most extensible code is never extended, unless you make a library that will be used heavily (see YAGNI principle).
you can use encapsulation in place of inheritance, and type polymorphism (templates) as an alternative to class hierarchies, in many cases (e.g. std::string and std::wstring are not two concrete implementations of a base string class, and they are not inheritable at all; see the sketch after this list).
if (when you are designing your code/public interfaces) you realize you have more than one class that "is an" implementation of another class's interface, then you should use virtual functions.
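As a sketch of that second guideline applied to the question's CallStack (BasicCallStack and DefaultCapture are invented for the example), the capture strategy can be a template parameter resolved at compile time, which also sidesteps the virtual-call-in-constructor problem:

#include <string>
#include <vector>

template <class CapturePolicy>
class BasicCallStack
{
public:
    BasicCallStack() { CapturePolicy::capture(frames_); } // no virtual dispatch needed
    // ... vector-like access to frames_ ...
private:
    std::vector<std::string> frames_;
};

struct DefaultCapture
{
    static void capture(std::vector<std::string>& frames) { /* platform code */ }
};

typedef BasicCallStack<DefaultCapture> CallStack;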
You should almost certainly declare the method as virtual.
The first reason is that anything in your base class which calls CaptureStack will be doing so through a base class pointer (i.e. the local this pointer). It will therefore call the base class version of the function, even though a derived class masks it.
Consider the following example:
class Parent
{
public:
    void callFoo()
    {
        foo();
    }
    void foo()
    {
        std::cout << "Parent::foo()" << std::endl;
    }
};

class Child : public Parent
{
public:
    void foo()
    {
        std::cout << "Child::foo()" << std::endl;
    }
};

int main()
{
    Child obj;
    obj.callFoo(); // prints "Parent::foo()" because foo() is not virtual
    return 0;
}
The client code using the class is only ever using a derived object (not a base class pointer etc.). However, it's the base class version of foo() that actually gets called. The only way to resolve that is to make foo() virtual.
The second reason is simply one of correct design. If the purpose of the derived class function is to override rather than mask the original, then it should probably do so unless there is a specific reason otherwise (such as performance concerns). If you don't do that, you're inviting bugs and mistakes in future, because the class may not act as expected.

Is it okay when a base class has only one derived class?

I am creating a password module using OOD and design patterns. The module will keep a log of recordable events and read/write to a file. I created the interface in the base class and the implementation in a derived class. Now I am wondering whether it is a sort of bad smell if a base class has only one derived class. Is this kind of class hierarchy unnecessary? To eliminate the hierarchy I could of course just do everything in one class and not derive at all. Here is my code:
class CLogFile
{
public:
    CLogFile(void);
    virtual ~CLogFile(void);
    virtual void Read(CString strLog) = 0;
    virtual void Write(CString strNewMsg) = 0;
};
The derived class is:
class CLogFileImpl : public CLogFile
{
public:
    CLogFileImpl(CString strLogFileName, CString & strLog);
    virtual ~CLogFileImpl(void);
    virtual void Read(CString strLog);
    virtual void Write(CString strNewMsg);
protected:
    CString & m_strLog;       // the log file data
    CString m_strLogFileName; // file name
};
Now in the code
CLogFile * m_LogFile = new CLogFileImpl( m_strLogPath, m_strLog );
m_LogFile->Write("Log file created");
My question is that on one hand I am following the OOD principle of creating the interface first and the implementation in a derived class. On the other hand, is it overkill, and does it complicate things? My code is simple enough not to need any design patterns, but it does take cues from them in terms of general data encapsulation through a derived class.
Ultimately, is the above class hierarchy good, or should it be done in one class instead?
No, in fact I believe your design is good. You may later need to add a mock or test implementation for your class and your design makes this easier.
The answer depends on how likely it is that you'll have more than one behavior for that interface.
Read and write operations for a file system might make perfect sense now. What if you decide to write to something remote, like a database? In that case, a new implementation still works perfectly without affecting clients.
I'd say this is a fine example of how to do an interface.
Shouldn't you make the destructor pure virtual? If I recall correctly, that's the recommended idiom for creating a C++ interface according to Scott Meyers.
Yes, this is acceptable, even with only one implementation of your interface, but it may be slightly slower at run time than a single class (virtual dispatch has roughly the cost of following 1-2 function pointers).
This can be used as a way of preventing clients from depending on the implementation details. For example, clients of your interface do not need to be recompiled just because your implementation gains a new data field under the pattern above.
You can also look at the pImpl pattern, which is a way to hide implementation details without using inheritance.
Your model works well with the factory model, where you work with a lot of shared pointers and you call some factory method to "get you" a shared pointer to an abstract interface.
The downside of using pImpl is managing the pointer itself. With C++11, however, pImpl becomes more workable because the outer class can be movable. At present, though, if you want to return an instance of your class from a "factory" function, it has copy-semantics issues with its internal pointer.
This leads implementers to return a shared pointer to the outer class, which is made non-copyable. That means you have a shared pointer to one class holding a pointer to an inner class, so function calls go through that extra level of indirection and you get two "new"s per construction. If you have only a small number of these objects that isn't a major concern, but it can be a bit clumsy.
C++11 has the advantage of unique_ptr, which supports both forward declaration of its underlying type and move semantics. Thus pImpl will become more feasible where you really do know you are going to have just one implementation.
Incidentally, I would get rid of those CStrings and replace them with std::string, and not put C as a prefix on every class. I would also make the data members of the implementation private, not protected.
An alternative model, following Composition over Inheritance and the Single Responsibility Principle (both referenced by Stephane Rolland), is the following.
First, you need three different classes:
class CLog {
    CLogReader* m_Reader;
    CLogWriter* m_Writer;
public:
    void Read(CString& strLog) {
        m_Reader->Read(strLog);
    }
    void Write(const CString& strNewMsg) {
        m_Writer->Write(strNewMsg);
    }
    void setReader(CLogReader* reader) {
        m_Reader = reader;
    }
    void setWriter(CLogWriter* writer) {
        m_Writer = writer;
    }
};
CLogReader handles the Single Responsibility of reading logs:
class CLogReader {
public:
    virtual void Read(CString& strLog) {
        // read into the string
    }
};
CLogWriter handles the Single Responsibility of writing logs:
class CLogWriter {
public:
    virtual void Write(const CString& strNewMsg) {
        // write the string
    }
};
Then, if you wanted your CLog to, say, write to a socket, you would derive CLogWriter:
class CLogSocketWriter : public CLogWriter {
public:
    void Write(const CString& strNewMsg) {
        // write to the socket
    }
};
And then set your CLog instance's Writer to an instance of CLogSocketWriter:
CLog* log = new CLog();
log->setWriter(new CLogSocketWriter());
log->Write("Write something to a socket");
Pros
The pros of this method are that you follow the Single Responsibility Principle, in that every class has a single purpose. It gives you the ability to expand a single purpose without having to drag along code you would not modify anyway. It also allows you to swap out components as you see fit, without having to create an entire new CLog class for that purpose. For instance, you could have a Writer that writes to a socket, but a Reader that reads a local file. Etc.
Cons
Memory management becomes a huge concern here. You have to keep track of when to delete your pointers. In this case, you'd need to delete them on destruction of CLog, as well as when setting a different Writer or Reader. Doing this, if references are stored elsewhere, could lead to dangling pointers. This is a great opportunity to learn about strong and weak references, which are reference-counting containers that automatically delete their pointer when all references to it are lost.
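As a sketch of that last point, the raw pointers above could be replaced with std::shared_ptr (assuming C++11 is available; boost::shared_ptr behaves the same way), reusing the CLogReader and CLogWriter classes from above:

#include <memory>

class CLog {
    std::shared_ptr<CLogReader> m_Reader;
    std::shared_ptr<CLogWriter> m_Writer;
public:
    void setWriter(std::shared_ptr<CLogWriter> writer) {
        // The previous writer is deleted automatically once the last
        // reference to it disappears; no manual delete, no dangling pointer.
        m_Writer = writer;
    }
    // setReader, Read and Write as before
};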
No. If there's no polymorphism in action there's no reason for inheritance and you should use the refactoring rule to put the two classes into one. "Prefer composition over inheritance".
Edit: as #crush commented, "prefer composition over inheritance" may not be the adequate quotation here. So let's say: if you think you need to use inheritance, think twice. And if ever you are really sure you need to use it, think about it once again.

Is this good code? (came across while reading a colleague's code)

File a.hpp:
class a;
typedef boost::shared_ptr<a> aPtr;

class a {
public:
    static aPtr CreateImp();
    virtual void Foo() = 0;
    ....
};
File aImp.hpp:
class aImp : public a {
    virtual void Foo();
};
File aImp.cpp:
aPtr a::CreateImp()
{
    return aPtr(new aImp());
}

void aImp::Foo() {}
The client must use CreateImp to get a pointer to a, and can't obtain a any other way.
What do you think about this kind of implementation?
This looks like a normal implementation of the Factory Method design pattern. Returning boost::shared_ptr just makes the life of the programmer using this API easier in terms of memory management and exception safety, and guards against simple mistakes like calling the function and ignoring the return value.
Edit:
If this is the only implementation of the base class, then it might be that the author was aiming for pimpl idiom to hide the implementation details and/or reduce compile-time dependencies.
If the intention is using the PIMPL idiom, then it is not the most idiomatic way. Inheritance is the second strongest coupling relationship in C++ and should be avoided if other solutions are available (i.e. composition).
Now, there might be other requirements that force the use of dynamic allocation, and/or the use of a specific type of smart pointer, but with the information you have presented I would implement the PIMPL idiom in the common way:
// .h
class a {
public:
    ~a(); // implement it in .cpp where a::impl is defined
    void op();
private:
    class impl;
    std::auto_ptr<impl> pimpl;
};

// .cpp
class a::impl {
public:
    void op();
};

a::~a() {}
void a::op() { pimpl->op(); }
The only (dis)advantage of using inheritance is that the runtime dispatch mechanism will call the implementation method for you, so you are not required to write the forwarding calls (a::op() in the example). On the other hand, you are paying the cost of the virtual dispatch mechanism in each and every operation, and limiting the use of your class to the heap (you cannot create an instance of a on the stack; you must call the factory function to create the object dynamically).
On the use of shared_ptr in the interface, I would try to avoid it (leave freedom of choice to your users) if possible. In this particular case, it seems as if the object is not really shared (the creator function creates an instance and returns a pointer, forgetting about it), so it would be better to have a smart pointer that allows for transfer of ownership (either std::auto_ptr, or the newer unique_ptr could do the trick), since the use of shared_ptr imposes that decision to your users.
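For instance (a sketch, not the colleague's code), the factory could return a unique owner instead, using C++11's std::unique_ptr; the caller can still move it into a shared_ptr if shared ownership turns out to be needed:

// a.hpp (inside class a, replacing the aPtr version; requires <memory>)
static std::unique_ptr<a> CreateImp();

// aImp.cpp
std::unique_ptr<a> a::CreateImp()
{
    return std::unique_ptr<a>(new aImp());
}

// Caller's choice:
std::unique_ptr<a> owner = a::CreateImp();  // sole ownership
std::shared_ptr<a> shared = a::CreateImp(); // opt in to sharing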
Looks like good encapsulation, although I don't actually see anything preventing a from being used otherwise. (For example, private constructor and friend class aImp).
No compilation unit using class a will have any knowledge of the implementation details except for aImp.cpp itself. So changes to aImp.hpp won't induce recompiles across the project. This both speeds recompilation and prevents coupling; it helps maintainability a lot.
OTOH, anything implemented in aImp.hpp now can't be inlined, so run-time performance may suffer some, unless your compiler has something akin to Visual C++ "Link-Time Code Generation" (which pretty much undoes any gain to build speed from encapsulation).
As far as the smart pointer is concerned, this depends on project coding standards (whether boost pointers are used everywhere, etc).