How do I add code automatically to a derived function in C++?

I have code that's meant to manage operations on both a networked client and a server, since there is significant overlap between the two. However, there are a few functions here and there that are meant to be exclusively called by the client or server, and accidentally calling a client function on the server (or vice versa) is a significant source of bugs.
To reduce these sorts of programming errors, I'm trying to tag functions so that they'll raise a ruckus if they're misused. My current solution is a simple macro at the start of each function that asserts if the client or server accesses members they shouldn't. However, this runs into problems when a class has multiple derived implementations, in that I have to tag the implementation as client- or server-side in EVERY child class.
What I'd like to be able to do is put a tag in the virtual member's signature in the base class, so that I only have to tag it once and not run into errors by forgetting to do it repeatedly. I've considered putting a check in a base class implementation and then referring to it with something like base::functionName, but that runs into the same issue as far as needing to manually add the function call to every implementation. Ideally, I'd be able to have parent versions of the function called automatically like default constructors do.
Does anybody know how to achieve something like this in C++? Is there an alternate approach I should be considering?
Thanks!

Another approach might be to override a different method than the one your callers actually call:
class Base {
public:
    void doit(const Something &);
protected:
    virtual void real_doit(const Something &);
};

class Derived : public Base {
protected:
    virtual void real_doit(const Something &);
};
The implementation of Base::doit() could do the check to make sure that it's being called in the right environment, and then call the virtual real_doit() function. Derived classes would override the protected virtual function, and users of either class wouldn't be able to call the protected function.
The Base::doit() function is not virtual so that derived classes can't accidentally override the wrong one. (People can try, but hopefully they'll notice soon enough when it's not called.)
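For illustration, a minimal sketch of what Base::doit() might look like under this scheme; IsServer() is a hypothetical stand-in for whatever client/server check the code base already has:
#include <cassert>

void Base::doit(const Something &arg)
{
    // Sketch only: IsServer() is assumed, not part of the original code.
    assert(IsServer() && "Base::doit() must only be called on the server");
    real_doit(arg);   // dispatches to whatever override the derived class provides
}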

What you've proposed is incredibly complex. It sounds like a simpler solution would be
class CommonStuff {
    // all common code that anybody can safely call
};

class ServerBase : public CommonStuff {
    // only what the server is allowed to call; can safely be overridden
};

class ClientBase : public CommonStuff {
    // only what the client is allowed to call; can safely be overridden
};
Compile-time enforcements are much better than any sort of runtime enforcement.
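As a rough illustration of what the compile-time enforcement buys you (the member names here are made up): server code only ever sees a ServerBase, so a client-only call simply won't compile:
// Sketch, assuming hypothetical members broadcastState() (server-only)
// and renderFrame() (client-only):
void serverTick(ServerBase &server)
{
    server.broadcastState();   // fine: declared on ServerBase
    // server.renderFrame();   // compile error: not a member of ServerBase
}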

There's not a way within the language (that I know of) to do what you're asking without redesigning your classes. The simplest solution may be to have a Client interface (pure virtual) class that does not declare server functions, and a Server interface class that doesn't declare client functions, and have your consolidated code inherit (publicly) from both interfaces. Then in your client program, use a reference (or pointer) to the Client interface, which does not allow access to any methods not declared in the Client interface. On the server, use the Server interface.
This will also allow you to use derived classes as Server or Client as well.
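A rough sketch of that layout (the interface members shown are hypothetical):
class ClientInterface {
public:
    virtual void renderFrame() = 0;     // client-only operations
    virtual ~ClientInterface() = default;
};

class ServerInterface {
public:
    virtual void broadcastState() = 0;  // server-only operations
    virtual ~ServerInterface() = default;
};

// Consolidated implementation shared by both sides:
class Connection : public ClientInterface, public ServerInterface {
public:
    void renderFrame() override { /* ... */ }
    void broadcastState() override { /* ... */ }
};
// Client code holds a ClientInterface&, server code holds a ServerInterface&,
// so each side can only reach the methods declared for it.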

I would consider splitting this library into three libraries: A base library that has most everything, a server-only library, and a client-only library. As long as the client doesn't use the server library, you're good. You may end up adding a few extra classes (class Processor might split into BaseProcessor, ClientProcessor, and ServerProcessor, where each subclass has one additional function that the base doesn't.)
If that won't work, could you put the server/client check in the class constructor, and call the assertion there? (That would only work if the server-only or client-only is granular to the class, not to the method.)
If that won't work, would it make any sense to actually compile different versions of your library, based on whether it's a server or client build? Surround the methods, and their declarations, with #ifdef SERVERBUILD and #ifdef CLIENTBUILD, and include some checks to make sure they aren't both defined (#if defined(SERVERBUILD) && defined(CLIENTBUILD), #error Can't define both!).
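A sketch of that build-flag approach, reusing the hypothetical Processor class from above:
#if defined(SERVERBUILD) && defined(CLIENTBUILD)
#error "Can't define both SERVERBUILD and CLIENTBUILD!"
#endif

class Processor {
public:
    void commonWork();          // always available
#ifdef SERVERBUILD
    void serverOnlyWork();      // only compiled into the server build
#endif
#ifdef CLIENTBUILD
    void clientOnlyWork();      // only compiled into the client build
#endif
};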

I voted up Greg Hewgill's answer, but it got me thinking about ways to add "aspects" such as you request. I used his naming convention here (class Base and method doit):
class Base {
protected:
    class Aspect {
    public:
        Aspect(int /*x*/) {   // the value is ignored; only the construction/destruction timing matters
            std::cout << "aspect" << std::endl;
        }
    };
public:
    virtual void doit(const Something &arg, const Aspect hook = 0)
    {
        std::cout << "doit(" << arg << ")" << std::endl;
    }
};
Callers can just say base.doit(arg) since Aspect is a default argument. Its constructor runs before doit and its destructor (not pictured) runs after. Sadly my first idea to make the default argument hook = this is not allowed.
Children can override doit with the same signature and get the same effect.
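A quick usage sketch (assuming Something is default-constructible and streamable, which the doit body above already requires):
Something thing;
Base b;
b.doit(thing);   // prints "aspect" first (the default Aspect argument is
                 // constructed before the call), then "doit(...)"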

Related

how to add a function to a lib class without overriding it

I have a case in which I need to add some functions to a game engine class I'm using for a VR project, without overriding the class itself.
The engine class is named AnnwaynPlayer and contains many useful methods to control the player. Now that I'm in the networking phase, I need to add two extra methods to this library class, setActive() and setConnected(). What is the best way to do this?
If you can't touch the class itself then you probably want to use inheritance. This is one of the main goals of object-oriented programming -- to be able to add/change the behavior of an existing class without altering it. So you want something like:
class MyAnnwaynPlayer : public AnnwaynPlayer {
public:
    void setActive();
    void setConnected();
    // ...
};
Now, things will be fine if AnnwaynPlayer has a virtual destructor. If it doesn't, and your MyAnnwaynPlayer class has a non-trivial destructor, then you have to be wary of using an instance of MyAnnwaynPlayer through a pointer (be it raw or smart) to the base class AnnwaynPlayer. When such a pointer is deleted, it will not chain through a call to your MyAnnwaynPlayer destructor.
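A minimal sketch of the hazard (assuming AnnwaynPlayer's destructor is not virtual and MyAnnwaynPlayer's destructor does real work):
AnnwaynPlayer *p = new MyAnnwaynPlayer;
delete p;   // undefined behaviour: MyAnnwaynPlayer's destructor is not
            // guaranteed to run, so anything it manages can leak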
Also consider ADL if you only need access to the public API of the base class. It's safer than inheritance, because you don't necessarily know the right class to inherit from in cases where the implementation returns something ultimately unspecified (like an internal derived class).
In essence, this would look like this:
namespace AnnwaynNamespace {
    void setActive(AnnwaynPlayer& p);
    void setConnected(AnnwaynPlayer& p);
}
And you can call them without qualifying them with the namespace (or adding a using-directive), because ADL finds them:
void wherever(AnnwaynNamespace::AnnwaynPlayer& p) {
    setActive(p);
}
So setActive, etc, become part of the actual public API of the class, without involving any inheritance.
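For completeness, a sketch of how one of those free functions might be implemented purely in terms of the public API (setNetworkActive() is a made-up stand-in for whatever AnnwaynPlayer actually exposes):
namespace AnnwaynNamespace {
    void setActive(AnnwaynPlayer& p) {
        p.setNetworkActive(true);   // hypothetical existing public member
    }
}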

Prevent subclassing an abstract class interface in C++

I provide an SDK to my users, allowing them to write DLLs in C++ to extend the software.
The SDK headers mostly contain interface class definitions. These classes are of two types:
Some that the user must subclass and implement
Some that are wrappers to core classes, passed by the app to the DLL functions as pointers, which can then be used as arguments by the DLL code for calling core functions. These interfaces should not be subclassed by the user and then passed to the core functions, because those functions expect a specific core subclass.
The manual states which interfaces should not be subclassed and are only to be used through pointers to objects provided by the app. But in some places it's too tempting to subclass them anyway if you haven't read the manual.
Would it be possible to prevent subclassing some interfaces in the SDK headers?
As long as the client doesn't need to use the pointer for anything but passing it back into your DLL, you can just use a forward declaration; you can't derive from an incomplete type. (When faced with a similar case recently, I went whole hog and designed a special wrapper type based on void*. There's a lot of casting in the interface code, but there's no way the client can do much other than pass the value back to me.)
If the classes in question implement an interface which the client must also use, there are two solutions. The first is to change this, replacing each of the member functions with a free function which takes a pointer to the type, and just provide a forward declaration. The second is to use something like:
class InternallyVisibleInterface;

class ClientVisibleInterface
{
private:
    virtual void doSomething() = 0;
    ClientVisibleInterface() = default;
    friend class InternallyVisibleInterface;

protected:  // Or public, depending on whether the client should
            // be able to delete instances or not.
    virtual ~ClientVisibleInterface() = default;

public:
    void something();
};
and in your DLL:
class InternallyVisibleInterface : public ClientVisibleInterface
{
protected:
    InternallyVisibleInterface() {}

    // And anything else you need. If there is only one class in
    // your application which should derive from the interface,
    // this is it. If there are several, they should derive from
    // this class, rather than ClientVisibleInterface, since this
    // is the only class which can construct the
    // ClientVisibleInterface base class.
};
void ClientVisibleInterface::something()
{
    assert( dynamic_cast<InternallyVisibleInterface*>( this ) != nullptr );
    doSomething();
}
This offers two levels of protection: first, although derivation directly from ClientVisibleInterface is possible, it's impossible for the resulting class to have a constructor, and so it cannot be instantiated. And secondly, if the client code does somehow cheat, there will be a runtime error.
You probably don't need both protections; one or the other should suffice. The private constructor will result in a compile-time error, rather than a runtime one. On the other hand, without it, you don't even have to mention the name of InternallyVisibleInterface in the distributed headers.
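To see the first protection in action (a sketch; the name Cheat is made up): a client can still write the derived class, but can never construct it, because every constructor would have to call the private ClientVisibleInterface constructor:
class Cheat : public ClientVisibleInterface {
    void doSomething() override {}
};

// Cheat c;   // error: Cheat's default constructor is implicitly deleted,
//            // because ClientVisibleInterface's constructor is inaccessible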
As soon as a developer has a development environment, he can do almost anything, and you should not even try to control that.
IMHO the best you can do is to identify the boundary between the core application and the extension DLLs, ensure that objects received from those DLLs are of the correct class, and abort with a distinctive message if they are not.
Using RTTI and typeid is generally frowned upon because it is usually the sign of a bad OOP design: in the normal case, calling a virtual method is enough to have the proper code invoked. But I think it can safely be considered in your use case.
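A sketch of such a boundary check, reusing the ClientVisibleInterface name from the answer above (CoreWidget is a made-up stand-in for the concrete core class the DLL is supposed to hand back):
#include <typeinfo>
#include <cstdio>
#include <cstdlib>

void acceptFromPlugin(ClientVisibleInterface *p)
{
    if (typeid(*p) != typeid(CoreWidget)) {   // exact-type RTTI check
        std::fprintf(stderr, "plugin passed an unexpected type: %s\n",
                     typeid(*p).name());
        std::abort();
    }
}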

C++ should I use virtual methods?

Let me start by telling that I understand how virtual methods work (polymorphism, late-binding, vtables).
My question is whether or not I should make my method virtual. I will exemplify my dilemma on a specific case, but any general guidelines will be welcomed too.
The context:
I am creating a library. In this library I have a class CallStack that captures a call stack and then offers vector-like access to the captured stack frames. The capture is done by a protected method CaptureStack. This method could be redefined in a derived class, if the users of the library wish to implement another way to capture the stack. Just to be clear, the discussion to make the method virtual applies only to some methods that I know can be redefined in a derived class (in this case CaptureStack and the destructor), not to all the class methods.
Throughout my library I use CallStack objects, but they are never exposed as pointers or reference parameters, so virtual dispatch is not needed as far as the library's own use is concerned.
And I cannot think of a case when someone would want to use CallStack as pointer or reference to implement polymorphism. If someone wants to derive CallStack and redefine CaptureStack I think just using the derived class object will suffice.
Now, just because I cannot think of a case where polymorphism will be needed, should I avoid virtual methods, or should I make the methods virtual anyway simply because they can be redefined?
Example how CallStack can be used outside my library:
if (error) {
    CallStack call_stack; // the constructor calls CaptureStack
    for (const auto &stack_frame : call_stack) {
        cout << stack_frame << endl;
    }
}
A derived class that redefines CaptureStack could be used in the same manner, without needing polymorphism:
if (error) {
    // since this is not a CallStack pointer / reference, virtual would not be needed.
    DerivedCallStack d_call_stack;
    for (const auto &stack_frame : d_call_stack) {
        cout << stack_frame << endl;
    }
}
If your library saves the call stack during the constructor then you cannot use virtual methods.
This is C++. One thing people often get wrong when coming to C++ from another language is using virtual methods in constructors. This never works as planned.
C++ resets the virtual function table pointer during each constructor call in the hierarchy. That means a virtual call made from a constructor never dispatches to a more-derived override; it always resolves to the version belonging to the class currently being constructed.
So even if you did use a virtual method to capture the stack the constructor code would always call the base class method.
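A minimal sketch of that pitfall (generic names, not the actual CallStack class):
#include <iostream>

struct Base {
    Base() { capture(); }   // runs during construction: dispatches to Base::capture
    virtual void capture() { std::cout << "Base::capture\n"; }
    virtual ~Base() = default;
};

struct Derived : Base {
    void capture() override { std::cout << "Derived::capture\n"; }
};

int main() {
    Derived d;   // prints "Base::capture", not "Derived::capture"
}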
To make it work you'd need to take the call out of the constructor and use something like:
CallStack *stack = new DerivedCallStack;
stack->CaptureStack();
None of your code examples show a good reason to make CaptureStack virtual.
When deciding whether you need a virtual function, ask whether deriving and overriding that function should change the behaviour of the other functions you're implementing now.
If other members of the same class rely on that particular function's implementation and you want overrides to affect them, then you probably want the function to be virtual. But if you know exactly what the function is supposed to do in your parent class and don't want anybody to change that behaviour underneath you, then it should not be virtual.
As another example, imagine somebody derives a class from your implementation, overrides a function, and passes that object, cast to the parent class, to one of your own functions. Would you prefer your original implementation to run, or their overridden one? If the latter, the function should be virtual; otherwise it should not be.
It's not clear to me where CaptureStack is being called. From your examples, it looks like you're using the template method pattern, in which the basic functionality is implemented in the base class but customized by means of virtual functions (normally private, not protected) provided by the derived class. In this case (as Peter Bloomfield points out), the functions must be virtual, since they will be called from within a member function of the base class, thus with a static type of CallStack. However: if I understand your examples correctly, the call to CaptureStack will be in the constructor. This will not work, as during construction of CallStack, the dynamic type of the object is CallStack, not DerivedCallStack, and virtual function calls will resolve to CallStack.
In such a case, for the use cases you describe, a solution using templates may be more appropriate. Or even... The name of the class is clear. I can't think of any reasonable case where different instances should have different means of capturing the call stack in a single program. Which suggests that link-time resolution of the type might be appropriate. (I use the compilation firewall idiom and link-time resolution in my own StackTrace class.)
My question is whether or not I should make my method virtual. I will exemplify my dilemma on a specific case, but any general guidelines will be welcomed too.
Some guidelines:
if you are unsure, you should not do it. Lots of people will tell you that your code should be easily extensible (and as such, virtual), but in practice, most extensible code is never extended, unless you make a library that will be used heavily (see YAGNI principle).
you can use encapsulation in place of inheritance and type polymorphism (templates) as an alternative to class hierarchies in many cases (e.g. std::string and std::wstring are not two concrete implementations of a base string class and they are not inheritable at all).
if (when you are designing your code/public interfaces) you realize you have more than one class that "is an" implementation of another classes' interface, then you should use virtual functions.
You should almost certainly declare the method as virtual.
The first reason is that anything in your base class which calls CaptureStack will be doing so through a base class pointer (i.e. the local this pointer). It will therefore call the base class version of the function, even though a derived class masks it.
Consider the following example:
#include <iostream>

class Parent
{
public:
    void callFoo()
    {
        foo();
    }

    void foo()
    {
        std::cout << "Parent::foo()" << std::endl;
    }
};

class Child : public Parent
{
public:
    void foo()   // hides (does not override) Parent::foo()
    {
        std::cout << "Child::foo()" << std::endl;
    }
};

int main()
{
    Child obj;
    obj.callFoo();   // prints "Parent::foo()"
    return 0;
}
The client code using the class is only ever using a derived object (not a base class pointer etc.). However, it's the base class version of foo() that actually gets called. The only way to resolve that is to make foo() virtual.
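In other words, the fix is a single keyword in the base class (only the changed part shown):
class Parent
{
public:
    void callFoo() { foo(); }
    virtual void foo() { std::cout << "Parent::foo()" << std::endl; }
};
// With foo() virtual, obj.callFoo() in the example above prints "Child::foo()".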
The second reason is simply one of correct design. If the purpose of the derived class function is to override rather than mask the original, then it should do so unless there is a specific reason otherwise (such as performance concerns). If you don't, you're inviting bugs and mistakes in the future, because the class may not act as expected.

Ways to make (relatively) safe assumptions about the type of concrete subclasses?

I have an interface (defined as a abstract base class) that looks like this:
class AbstractInterface
{
public:
virtual bool IsRelatedTo(const AbstractInterface& other) const = 0;
}
And I have an implementation of this (constructors etc omitted):
class ConcreteThing : public AbstractInterface
{
public:
    virtual bool IsRelatedTo(const AbstractInterface& other) const
    {
        return m_ImplObject.has_relationship_to(other.m_ImplObject);  // does not compile
    }

private:
    ImplementationObject m_ImplObject;
};
The AbstractInterface forms an interface in Project A, and the ConcreteThing lives in Project B as an implementation of that interface. This is so that code in Project A can access data from Project B without having a direct dependency on it - Project B just has to implement the correct interface.
Obviously the line in the body of the IsRelatedTo function cannot compile - that instance of ConcreteThing has an m_ImplObject member, but it can't assume that all AbstractInterfaces do, including the other argument.
In my system, I can actually assume that all implementations of AbstractInterface are instances of ConcreteThing (or subclasses thereof), but I'd prefer not to be casting the object to the concrete type in order to get at the private member, or encoding that assumption in a way that will crash without a diagnostic later if this assumption ceases to hold true.
I cannot modify ImplementationObject, but I can modify AbstractInterface and ConcreteThing. I also cannot use the standard RTTI mechanism for checking a type prior to casting, or use dynamic_cast for a similar purpose.
I have a feeling that I might be able to overload IsRelatedTo with a ConcreteThing argument, but I'm not sure how to call it via the base IsRelatedTo(AbstractInterface) method. It wouldn't get called automatically as it's not a strict reimplementation of that method.
Is there a pattern for doing what I want here, allowing me to implement the IsRelatedTo function via ImplementationObject::has_relationship_to(ImplementationObject), without risky casts?
(Also, I couldn't think of a good question title - please change it if you have a better one.)

extend an abstract base class w/o source recompilation?

Ignore this; I thought of a workaround involving header generation. It isn't the nicest solution, but it works. This question is too weird to understand. Basically, I want to call a virtual function that hasn't been declared in the lib or DLL and use it as normal (but have it not implemented / an empty function).
I have an abstract base class in my library. All my plugins inherit from it; the user's plugin inherits from this class, and his application uses this class as a plugin pointer. I want that user to be able to extend the class and add his own functions. The problem is, I am sure that if he adds a virtual function and tries to call it, the code will crash because my objects don't have the extra entries in their vtable. How can I work around that? I thought of inheriting it, but that would lead to ugly problems when a third user comes to play. I don't want him to have to typecast to call the extended functions.
I was thinking of a message function like intptr_t sendMsg(enum msgName, void* argv); but that removes the safety and I'd need to typecast everything. What's the best solution for this? I would much rather use vtables than a sendMsg function. How can I work around this?
Are you asking if you can add virtual functions to the base class without recompiling? The short answer to that is "no". The long answer is in your question, you'd have to provide some kind of generic "call_func" interface that would allow you to call functions "dynamically".
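For what it's worth, such a generic hook might look something like this sketch (the names are made up, and it has exactly the type-safety cost the question worries about):
#include <cstdint>

class PluginBase {
public:
    virtual ~PluginBase() = default;

    // Single extensible entry point: unknown message names are ignored, so
    // old plugin binaries keep working when new messages are introduced.
    virtual std::intptr_t call(const char *name, void *argv)
    {
        (void)name; (void)argv;
        return 0;
    }
};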
I think you can use a register-and-callback mechanism.
Your plugin provides the abstract base class Base and a registration function:
Register(Base *);
Now the client can call the plugin's Register function:
Register(b);
where b is defined as
Base *b = new Derived;
and Derived is a new class derived from Base.
I am not 100% sure I see the problem.
If the user's derived type extends your base class (with more virtual methods), then that should be fine. Of course, your code will never know or understand these new methods, but presumably you would not be calling them:
class B
{
public:
    virtual void doStuff() { /* Nothing */ }
};

// User 1 version:
class U1 : public B
{
public:
    virtual void doStuff()
    {
        this->doA();
        this->doB();
    }

    virtual void doA() {}
    virtual void doB() {}
};
// User 2 version can extend it differently.
Note:
If you are worried about slicing because you are storing objects in a vector, that is a slightly different problem.
std::vector<B> objs;
objs.push_back(U1());
std::for_each(objs.begin(),objs.end(),std::mem_fun_ref(&B::doStuff));
Here the problem is that the user-defined type U1 cannot be stored in the vector as-is, because the vector holds only B objects; copying a U1 in slices off the extra data held in U1.
The solution to this problem is to hold pointers in the vector. That, of course, leads to other problems with exception safety, so Boost has the ptr_vector<> container, which holds the objects correctly but still lets them be used like objects.
#include <boost/ptr_container/ptr_vector.hpp>
......
boost::ptr_vector<B> objs;
objs.push_back(new U1());
std::for_each(objs.begin(),objs.end(),std::mem_fun_ref(&B::doStuff));