Alternative design pattern to a base class function call - c++

I have a base class that clients can "shut down":
struct base {
    void shutdown() {
        // code
    }
};
But clients can (if they want) create a subclass to suit their own needs:
struct der : public base {
    void shutdown(double some, int other, bool parameters) {
        // custom shutdown stuff
        base::shutdown(); // <- this MUST be called
    }
};
The problem is that the client must remember to call base::shutdown() (the superclass creator will document this to subclass developers). This seems error-prone, as nothing is enforced at compile-time.
Are there alternative design patterns to solve this in some way?

Unfortunately, this is one of those cases where you "have to do the right thing" when writing the code. This applies to many things in programming. If you don't do the right calculations when calculating something, that's also wrong. Or if you call a function twice that should only be called once. Or if you don't call it at all when it has to be called. All of these things are "doing it wrong". You can't prevent programmers from making mistakes...
Of course, if you hadn't added a bunch of extra parameters, you could do it "in reverse" by having a non-virtual function in the base class that doesn't get overridden, and then letting the base class call a virtual function in the derived class. As I said, it doesn't work if you have parameters that aren't present in the base class but need to go into the derived class.
An example of what I mean:
struct base {
    virtual void shutdown() final { // final: it can't be overridden in a derived class
        do_shutdown(); // calls the derived class's hook first
        // ... more code here ...
    }
    virtual void do_shutdown() { } // default is "do nothing"
};

struct der : public base
{
    // not overriding `shutdown`, but overriding `do_shutdown`
    void do_shutdown() override
    {
        // .. some code goes here ..
    }
};
Now a call to der->shutdown() will invoke the base class's implementation, which calls do_shutdown in the derived class before completing the rest of the base class's shutdown.
However, like I said, you can't add extra parameters when you do this. In fact, deriving a class and changing the parameters is "wrong" - not so wrong that you can't ever do it, but it tends to be a bad thing when using inheritance polymorphism, because the whole purpose of polymorphism is that there is some common code that "doesn't know whether it's dealing with a base class object, a derived1 object or a derived2 object" - so if that code needs to know which kind of object it has in order to pass the correct number of arguments, the polymorphism becomes a bit meaningless.
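If the extra parameters are known before shutdown time, one way to keep this "reverse" (non-virtual interface) approach while still letting the derived class use its own data is to hand that data to the derived object up front (via its constructor or setters) and let the hook read it from members. A minimal sketch, assuming the member names some_, other_ and parameters_ are purely illustrative:

#include <iostream>

struct base {
    virtual ~base() = default;
    void shutdown() {       // fixed public interface, no extra parameters
        do_shutdown();      // derived hook runs first
        std::cout << "base cleanup\n";
    }
protected:
    virtual void do_shutdown() { } // default: nothing extra
};

struct der : public base {
    // derived-specific data is supplied up front instead of through shutdown()'s signature
    der(double some, int other, bool parameters)
        : some_(some), other_(other), parameters_(parameters) {}
protected:
    void do_shutdown() override {
        std::cout << "custom shutdown with " << some_ << ", "
                  << other_ << ", " << parameters_ << "\n";
    }
private:
    double some_;
    int other_;
    bool parameters_;
};

int main() {
    der d(1.5, 42, true);
    d.shutdown(); // derived part runs, then the base part - nothing for the client to forget
}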

I wouldn't worry about it. It's similar to the assignment operator of derived classes: if the subclass developer forgets to call it, that's their bug, not yours.
C++ FAQ entry on the subject


What if I must override a non-virtual member function

Say we have a library which provides a class
struct Base { int foo() { return 42; } };
I cannot change that class.
99% of the people never want to override foo, hence it has not been made virtual by the library designers.
But I need to override it:
struct MyClass : Base { int foo() { return 73; } };
Even worse, the library has interfaces accepting pointers to Base.
I want to plug in MyClass, but of course, since foo is not virtual, the code behind the interface always calls Base::foo. I want it to call MyClass::foo.
What can I do about it? Is there a common pattern to make Base::foo appear to be virtual?
In reality, Base::foo is QAbstractProxyModel::sourceModel.
I'm implementing a ProxyChain, to abstract many proxy models to a single one.
QAbstractProxyModel::setSourceModel is virtual, but QAbstractProxyModel::sourceModel isn't and that makes a lot of trouble.
void ProxyChain::setSourceModel(QAbstractItemModel* source_model)
{
    for (auto* proxy : m_proxies) {
        proxy->setSourceModel(source_model);
        source_model = proxy;
    }
    QIdentityProxyModel::setSourceModel(source_model);
}

QAbstractItemModel* ProxyChain::sourceModel() const
{
    return m_proxies.front()->sourceModel();
}
What can I do about it?
Nothing.
This is why guidelines tell us to use virtual if we want other people to be able to "pretend" that their classes are versions of our classes.
The author of Base did not do that, so you do not have that power.
That's it.
What can I do about it?
Nothing. If a member function is non-virtual, then it is non-virtual. This means that any code, anywhere in the code base, which takes a Base pointer or reference and calls base->foo will be calling exactly and only Base::foo. This call is statically (compile-time) bound to the function it calls.
You cannot reach into someone else's code and make them use dynamic binding. If they didn't choose to participate in dynamic binding, then you can't make them. You can create your own derived class and write your own version of foo which hides the base class version. But this will not affect the behavior of any code which gets a pointer/reference to Base.
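To make the static-binding point concrete, here is a minimal sketch built around the Base/MyClass pair from the question (the callThroughBase helper is purely illustrative):

#include <iostream>

struct Base { int foo() { return 42; } };
struct MyClass : Base { int foo() { return 73; } }; // hides Base::foo, does not override it

// library-style code that only knows about Base
int callThroughBase(Base* b) { return b->foo(); }   // statically bound to Base::foo

int main() {
    MyClass m;
    std::cout << m.foo() << "\n";             // 73: name hiding works when calling through MyClass
    std::cout << callThroughBase(&m) << "\n"; // 42: hiding has no effect behind a Base*
}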
In your specific case, your best bet will be to make sure you call the base class setSourceModel with the object that you want sourceModel to return, whenever something changes that affects what sourceModel should return.

Virtual methods in constructor

Calling virtual methods from a constructor is said to be prohibited in C++; however, there are a few situations where it would be very useful.
Consider the following situation: a pair of Parent and Child classes. The Parent constructor requires a Child to be instantiated, because it has to initialize it in a special way. Both Parent and Child may be derived from, such that DerivedParent uses DerivedChild.
There's a problem, however: during the Parent::ctor call made from DerivedParent::ctor, the base class should get an instance of DerivedChild instead of Child. But that would require calling a virtual method in some way, which is what seems to be prohibited. I'm talking about something like this:
#include <cstdio>
#include <string>

class Child
{
public:
    virtual std::string ToString() { return "Child"; }
};

class DerivedChild : public Child
{
public:
    std::string ToString() { return "DerivedChild"; }
};

class Parent
{
protected:
    Child * child;
    virtual Child * CreateChild() { return new Child(); }
public:
    Parent() { child = CreateChild(); }
    Child * GetChild() { return child; }
};

class DerivedParent : public Parent
{
protected:
    Child * CreateChild() { return new DerivedChild(); }
};

int main(int argc, char * argv[])
{
    DerivedParent parent;
    // prints "Child": Parent() dispatches CreateChild() to Parent::CreateChild
    printf("%s\n", parent.GetChild()->ToString().c_str());
    getchar();
    return 0;
}
Let's take a real-world example. Suppose that I want to write a wrapper for WinAPI's windows. The base Control class should register a window class and instantiate a window (e.g. RegisterClassEx and CreateWindowEx) and set it up properly (for example, register the class in such a way that the window structure has additional data for the class instance; set up a generic WndProc for all Controls; store a reference to this via SetWindowLongPtr, etc.)
On the other hand, the derived class should be able to specify styles and extended styles, the class name for the window, etc.
If constructing the window instance in the Control's constructor is a contract to be fulfilled, I see no solution other than using virtual methods in the ctor (which won't work).
Possible workarounds:
Use static polymorphism (e.g. CRTP: class Derived : public Base<Derived>), but it will not work if one wants to derive further from Derived;
Pass a lambda from derived to base ctor if it is possible - it will work, but it's a total hardcore solution;
Pass a ton of parameters from the derived to the base ctor - it should work, but it will be neither elegant nor easy to use.
I personally don't like any of them. Out of curiosity, I checked how the problem is solved in Delphi's VCL, and you know what: the base class calls CreateParams, which is virtual (Delphi allows such calls and guarantees that they are safe - class fields are initialized to 0 upon creation).
How can one overcome this language restriction?
Edit: In response to answers:
Call CreateChild from the derived constructor. You're already requiring CreateChild to be defined, so this is an incremental step. You can add a protected: void init() function in the base class to keep such initialization encapsulated.
It will work, but it's not an option - quoting the famous C++ FAQ:
The first variation is simplest initially, though the code that actually wants to create objects requires a tiny bit of programmer self-discipline, which in practice means you're doomed. Seriously, if there are only one or two places that actually create objects of this hierarchy, the programmer self-discipline is quite localized and shouldn't cause problems.
Use CRTP. Make the base class a template, have the child supply the DerivedType, and call the constructor that way. This kind of design can sometimes eliminate virtual functions completely.
It's not an option, because it will work only once, for base class and its immediate descendant.
Make the child pointer a constructor argument, for example to a factory function.
This is, so far, the best solution - injecting code into base ctor. In my case, I won't even need to do it, because I can just parametrize base ctor and pass values from the descendant. But it will actually work.
Use a factory function template to generate the child pointer of the appropriate type before returning a parent. This eliminates the complexity of a class template. Use a type traits pattern to group child and parent classes.
Yea, but it has some drawbacks:
Someone who derives from my classes may publish their ctor and bypass the safety measures;
It would prevent one from using references; one would have to use smart pointers.
Calling virtual methods from a constructor is said to be prohibited in C++; however, there are a few situations where it would be very useful.
No, calling a virtual method in the constructor dispatches to the most-derived complete object, which is the one under construction. It's not prohibited, it's well-defined and it does the only thing that would make sense.
A C++ base class is not allowed to know the identity of the most derived object, for better or worse. Under the model, the derived object doesn't start to exist until after the base constructors have run, so there's nothing to get type information about.
Some alternatives in your case are:
Call CreateChild from the derived constructor. You're already requiring CreateChild to be defined, so this is an incremental step. You can add a protected: void init() function in the base class to keep such initialization encapsulated.
Use CRTP. Make the base class a template, have the child supply the DerivedType, and call the constructor that way. This kind of design can sometimes eliminate virtual functions completely.
Make the child pointer a constructor argument, for example to a factory function (a sketch of this approach follows the list).
Use a factory function template to generate the child pointer of the appropriate type before returning a parent. This eliminates the complexity of a class template. Use a type traits pattern to group child and parent classes.
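A minimal sketch of the constructor-argument alternative, reusing the Child/DerivedChild names from the question (the std::unique_ptr-based signatures are just one way to express the ownership transfer):

#include <cstdio>
#include <memory>
#include <string>

class Child {
public:
    virtual ~Child() = default;
    virtual std::string ToString() { return "Child"; }
};

class DerivedChild : public Child {
public:
    std::string ToString() override { return "DerivedChild"; }
};

class Parent {
protected:
    std::unique_ptr<Child> child;
public:
    // the caller (usually the derived ctor) decides which Child to create
    explicit Parent(std::unique_ptr<Child> c) : child(std::move(c)) {}
    Child * GetChild() { return child.get(); }
};

class DerivedParent : public Parent {
public:
    DerivedParent() : Parent(std::make_unique<DerivedChild>()) {}
};

int main() {
    DerivedParent parent;
    std::printf("%s\n", parent.GetChild()->ToString().c_str()); // prints "DerivedChild"
    return 0;
}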

C++ Singleton class - inheritance good practice

In an existing project, I need to inherit from a Controller class (MVC) declared as a Singleton so as to define my own treatment. How do I appropriately derive from this Singleton class?
First, let me expand on the context and the need for this inheritance.
The part that I am adding to the existing software wants to use an MVC module that performs almost the same task as the one I need to perform. It uses the same methods up to signatures and slight modifications. Rewriting my own MVC module would definitely be duplication of code. The existing module is intrinsically oriented towards its application to another part of the software, so I cannot simply use the same module. But it is written as a Model-View-Controller pattern where the Controller is a Singleton. I have already derived the View.
Second, I doubt that I can derive from a Singleton class in the classical way.
Calling the constructor from the inherited class would simply call getInstance() for the parent class and fail to return an object of the derived class (?).
Third, here is how I see a way to deal with it. Please comment/help me improve!
I copy the whole Singleton class into a class I could call AbstractController. I derive this class twice. The first child is a singleton and adopts the whole treatment of the parent class. The second child is the Controller for my part of the application, with its own redefined treatment.
Thanks!
Truth is, singletons and inheritance do not play well together.
Yeah, yeah, the Singleton lovers and GoF cult will be all over me for this, saying "well, if you make your constructor protected..." and "you don't have to have a getInstance method on the class, you can put it...", but they're just proving my point. Singletons have to jump through a number of hoops in order to be both a singleton and a base class.
But just to answer the question, say we have a singleton base class. It can even to some degree enforce its singleness through inheritance. (The constructor does one of the few things that can work when it can no longer be private: it throws an exception if another Base already exists.) Say we also have a class Derived that inherits from Base. Since we're allowing inheritance, let's also say there can be any number of other subclasses of Base, that may or may not inherit from Derived.
But there's a problem -- the very one you're either already running into, or will soon. If we call Base::getInstance without having constructed an object already, we'll get a null pointer. We'd like to get back whatever singleton object exists (it may be a Base, and/or a Derived, and/or an Other). But it's hard to do so and still follow all the rules, cause there are only a couple of ways to do so -- and all of them have some drawbacks.
We could just create a Base and return it. Screw Derived and Other. End result: Base::getInstance() always returns exactly a Base. The child classes never get to play. Kinda defeats the purpose, IMO.
We could put a getInstance of our own in our derived class, and have the caller say Derived::getInstance() if they specifically want a Derived. This significantly increases coupling (because a caller now has to know to specifically request a Derived, and ends up tying itself to that implementation).
We could do a variant of that last one -- but instead of getting the instance, the function just creates one. (While we're at it, let's rename the function to initInstance, since we don't particularly care what it returns -- we're just calling it so that it creates a new Derived and sets that as the One True Instance.)
So (barring any oddness unaccounted for yet), it works out kinda like this...
#include <stdexcept>

class Base {
    static Base * theOneTrueInstance;
public:
    static Base & getInstance() {
        if (!theOneTrueInstance) initInstance();
        return *theOneTrueInstance;
    }
    static void initInstance() { new Base; }
protected:
    Base() {
        if (theOneTrueInstance) throw std::logic_error("Instance already exists");
        theOneTrueInstance = this;
    }
    virtual ~Base() { } // so random strangers can't delete me
};
Base* Base::theOneTrueInstance = 0;

class Derived : public Base {
public:
    static void initInstance() {
        new Derived; // Derived() calls Base(), which sets this as "the instance"
    }
protected:
    Derived() { } // so we can't be instantiated by outsiders
    ~Derived() { } // so random strangers can't delete me
};
And in your init code, you say Base::initInstance(); or Derived::initInstance();, depending on which type you want the singleton to be. You'll have to cast the return value from Base::getInstance() in order to use any Derived-specific functions, of course, but without casting you can use any functions defined by Base, including virtual functions overridden by Derived.
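A minimal usage sketch of that init-then-get flow, assuming the Base/Derived pair from the listing above (the cast is only needed for Derived-specific members):

int main() {
    Derived::initInstance();              // pick the concrete singleton type once, up front
    Base& instance = Base::getInstance(); // everything else only needs the Base interface
    // Derived& d = static_cast<Derived&>(instance); // only for Derived-specific calls
    (void)instance;
    return 0;
}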
Note that this way of doing it also has a number of drawbacks of its own, though:
It puts most of the burden of enforcing singleness on the base class. If the base doesn't have this or similar functionality, and you can't change it, you're kinda screwed.
The base class can't take all of the responsibility, though -- each class needs to declare a protected destructor, or someone could come along and delete the one instance after casting it (in)appropriately, and the whole thing goes to hell. What's worse, this can't be enforced by the compiler.
Because we're using protected destructors to prevent some random schmuck from deleting our instance, unless the compiler's smarter than I fear it is, even the runtime won't be able to properly delete your instance when the program ends. Bye bye, RAII... hello "memory leak detected" warnings. (Of course the memory will eventually be reclaimed by any decent OS. But if the destructor doesn't run, you can't depend on it to do cleanup for you. You'll need to call a cleanup function of some sort before you exit, and that won't give you anywhere near the same assurances that RAII can give you.)
It exposes an initInstance method that, IMO, doesn't really belong in an API everyone can see. If you wanted, you could make initInstance private and let your init function be a friend, but then your class is making assumptions about code outside itself, and the coupling thing comes back into play.
Also note that the code above is not at all thread safe. If you need that, you're on your own.
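For completeness, one simple (if blunt) way to make the lazy creation thread safe, assuming C++11 or later, is to guard the check-and-create with a mutex. This is only a standalone sketch of the base-class half of the design above, not part of the original code, and it only protects the getInstance path:

#include <mutex>
#include <stdexcept>

class Base {
    static Base * theOneTrueInstance;
    static std::mutex initMutex;
public:
    static Base & getInstance() {
        std::lock_guard<std::mutex> lock(initMutex); // serialize the check-and-create
        if (!theOneTrueInstance) initInstance();
        return *theOneTrueInstance;
    }
    static void initInstance() { new Base; }
protected:
    Base() {
        if (theOneTrueInstance) throw std::logic_error("Instance already exists");
        theOneTrueInstance = this;
    }
    virtual ~Base() { }
};
Base* Base::theOneTrueInstance = 0;
std::mutex Base::initMutex;

int main() {
    Base & b = Base::getInstance(); // safe to call concurrently from multiple threads
    (void)b;
    return 0;
}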
Seriously, the less painful route is to forget trying to enforce singleness. The least complicated way to ensure that there's only one instance is to only create one. If you need to use it multiple places, consider dependency injection. (The non-framework version of that amounts to "pass the object to stuff that needs it". :P ) I went and designed the above stuff just to try and prove myself wrong about singletons and inheritance, and just reaffirmed to myself how evil the combination is. I wouldn't recommend ever actually doing it in real code.
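For what it's worth, the non-framework dependency injection mentioned above can be as small as this sketch (Controller and render are placeholder names, not from the question):

struct Controller {
    int state = 0;
};

// instead of reaching for a Controller::getInstance(), take the dependency as a parameter
void render(Controller & controller) {
    ++controller.state;
}

int main() {
    Controller controller;  // the one instance, created exactly once by whoever owns it
    render(controller);     // pass it to the code that needs it
    return 0;
}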
I'm not sure I understand the situation you're dealing with fully, and whether or not it's possible or appropriate to derive from the singleton depends very much on how the singleton is implemented.
But since you mentioned "good practice" there's some general points that come to mind when reading the question:
Inheritance isn't usually the best tool to achieve code re-use. See: Prefer composition over inheritance?
Using singleton and "good practice" generally do not go together! See: What is so bad about singletons?
Hope that helps.
I recently had a similar need in my app... in any case, here is the implementation from my code:
The header (.h):
class icProjectManagerHandler;

class icProjectManager : public bs::icBaseManager {
    friend class icProjectManagerHandler;
protected:
    icProjectManager();
public:
    ~icProjectManager();

    template<typename t>
    static t *PM() {
        return dynamic_cast<t *>(icProjectManagerHandler::PMH()->mCurrentManager);
    };

    template<typename t>
    static t *PMS() {
        static icProjectManagerHandler pm;
        return static_cast<t *>(icProjectManagerHandler::PMH()->mCurrentManager);
    };
};

class icProjectManagerHandler {
    friend class icProjectManager;
    icProjectManager *mCurrentManager;
    icProjectManagerHandler();
public:
    ~icProjectManagerHandler();
    static icProjectManagerHandler *PMH();

    inline void setProjectManager(icProjectManager *pm) {
        if (mCurrentManager) { delete mCurrentManager; }
        mCurrentManager = pm;
    }
};
The source (.cpp):
icProjectManagerHandler::icProjectManagerHandler() {
mCurrentManager = new icProjectManager();
}
icProjectManagerHandler::~icProjectManagerHandler() {
}
icProjectManagerHandler *icProjectManagerHandler::PMH() {
static icProjectManagerHandler pmh;
return &pmh;
}
icProjectManager::icProjectManager() {
}
icProjectManager::~icProjectManager() {
}
And example:
class icProjectX : public ic::project::icProjectManager {
public:
    icProjectX() {};
    ~icProjectX() {};
};

int main(int argc, char *argv[]) {
    auto pro = new icProjectX();
    pro->setIcName("Hello");
    ic::project::icProjectManagerHandler::PMH()->setProjectManager(pro);
    qDebug() << "\n" << pro << "\n" << ic::project::icProjectManager::PMS<icProjectX>();
    return 10;
}
The issue with this implementation is that you have to initialize your "singleton" class first, or else you will get the default base class. But other than that... it should work?

Would using a virtual destructor make non-virtual functions do v-table lookups?

Just what the topic asks. I also want to know why none of the usual examples of CRTP mention a virtual dtor.
EDIT:
Please post about the CRTP problem as well, thanks.
Only virtual functions require dynamic dispatch (and hence vtable lookups), and not even in all cases. If the compiler is able to determine at compile time what the final overrider for a method call is, it can elide performing the dispatch at runtime. User code can also disable dynamic dispatch if it so desires:
#include <iostream>

struct base {
    virtual void foo() const { std::cout << "base" << std::endl; }
    void bar() const { std::cout << "bar" << std::endl; }
};

struct derived : base {
    virtual void foo() const { std::cout << "derived" << std::endl; }
};

void test( base const & b ) {
    b.foo();       // requires runtime dispatch, the type of the referred
                   // object is unknown at compile time
    b.base::foo(); // runtime dispatch manually disabled: output will be "base"
    b.bar();       // non-virtual, no runtime dispatch
}

int main() {
    derived d;
    d.foo();  // the type of the object is known, the compiler can substitute
              // the call with d.derived::foo()
    test( d );
}
On whether you should provide virtual destructors in all cases of inheritance, the answer is no, not necessarily. The virtual destructor is required only if code deletes objects of the derived type held through pointers to the base type. The common rule is that you should
provide a public virtual destructor or a protected non-virtual destructor
The second part of the rule ensures that user code cannot delete your object through a pointer to the base, and this implies that the destructor need not be virtual. The advantage is that if your class does not contain any virtual method, this will not change any of the properties of your class --the memory layout of the class changes when the first virtual method is added-- and you will save the vtable pointer in each instance. Of the two reasons, the first is the more important one.
#include <memory>

struct base1 {};
struct base2 {
    virtual ~base2() {}
};
struct base3 {
protected:
    ~base3() {}
};

typedef base1 base;

struct derived : base { int x; };
struct other { int y; };

int main() {
    std::unique_ptr<derived> d( new derived() ); // ok: deleting at the right level
    std::unique_ptr<base> b( new derived() );    // error: deleting through a base
                                                 // pointer with non-virtual destructor
}
The problem in the last line of main can be resolved in two different ways. If the typedef is changed to base2, then the destructor will correctly be dispatched to the derived object and the code will not cause undefined behavior. The cost is that derived now requires a virtual table and each instance requires a pointer. More importantly, derived is no longer layout-compatible with other. The other solution is changing the typedef to base3, in which case the problem is solved by having the compiler yell at that line. The shortcoming is that you cannot delete through pointers to base; the advantage is that the compiler can statically ensure that there will be no undefined behavior.
In the particular case of the CRTP pattern (excuse the redundant "pattern"), most authors do not even care to make the destructor protected, as the intention is not to hold objects of the derived type by references to the base (templated) type. To be on the safe side, they should mark the destructor as protected, but that is rarely an issue.
Very unlikely indeed. There's nothing in the standard to stop compilers doing whole classes of stupidly inefficient things, but a non-virtual call is still a non-virtual call, regardless of whether the class has virtual functions too. It has to call the version of the function corresponding to the static type, not the dynamic type:
#include <iostream>

struct Foo {
    void foo() { std::cout << "Foo\n"; }
    virtual void virtfoo() { std::cout << "Foo\n"; }
};

struct Bar : public Foo {
    void foo() { std::cout << "Bar\n"; }
    void virtfoo() { std::cout << "Bar\n"; }
};

int main() {
    Bar b;
    Foo *pf = &b;  // static type of *pf is Foo, dynamic type is Bar
    pf->foo();     // MUST print "Foo"
    pf->virtfoo(); // MUST print "Bar"
}
So there's absolutely no need for the implementation to put non-virtual functions in the vtable, and indeed in the vtable for Bar you'd need two different slots in this example for Foo::foo() and Bar::foo(). That means it would be a special-case use of the vtable even if the implementation wanted to do it. In practice it doesn't want to do it, it wouldn't make sense to do it, don't worry about it.
CRTP base classes really ought to have destructors that are non-virtual and protected.
A virtual destructor is required if the user of the class might take a pointer to the object, cast it to the base class pointer type, then delete it. A virtual destructor means this will work. A protected destructor in the base class stops them trying it (the delete won't compile since there's no accessible destructor). So either one of virtual or protected solves the problem of the user accidentally provoking undefined behavior.
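A minimal sketch of a CRTP base following that advice, with a protected non-virtual destructor (the names Counter and Widget are purely illustrative):

template <class Derived>
class Counter {
public:
    int next() { return ++count_; } // shared implementation for every Derived
protected:
    ~Counter() = default;           // protected and non-virtual: nobody can delete
                                    // a Derived through a Counter<Derived>*
private:
    int count_ = 0;
};

class Widget : public Counter<Widget> {};

int main() {
    Widget w;
    return w.next(); // uses the CRTP base's code; no vtable anywhere
}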
See guideline #4 here, and note that "recently" in this article means nearly 10 years ago:
http://www.gotw.ca/publications/mill18.htm
No user will create a Base<Derived> object of their own, that isn't a Derived object, since that's not what the CRTP base class is for. They just don't need to be able to access the destructor - so you can leave it out of the public interface, or to save a line of code you can leave it public and rely on the user not doing something silly.
The reason it's undesirable for it to be virtual, given that it doesn't need to be, is just that there's no point giving a class virtual functions if it doesn't need them. Some day it might cost something, in terms of object size, code complexity or even (unlikely) speed, so it's a premature pessimization to make things virtual always. The preferred approach among the kind of C++ programmer who uses CRTP, is to be absolutely clear what classes are for, whether they are designed to be base classes at all, and if so whether they are designed to be used as polymorphic bases. CRTP base classes aren't.
The reason that the user has no business casting to the CRTP base class, even if it's public, is that it doesn't really provide a "better" interface. The CRTP base class depends on the derived class, so it's not as if you're switching to a more general interface if you cast Derived* to Base<Derived>*. No other class will ever have Base<Derived> as a base class, unless it also has Derived as a base class. It's just not useful as a polymorphic base, so don't make it one.
The answer to your first question: No. Only calls to virtual functions will cause an indirection via the virtual table at runtime.
The answer to your second question: The Curiously recurring template pattern is commonly implemented using private inheritance. You don't model an 'IS-A' relationship and hence you don't pass around pointers to the base class.
For instance, in
template <class Derived> class Base
{
};
class Derived : Base<Derived>
{
};
You don't have code which takes a Base<Derived>* and then goes on to call delete on it. So you never attempt to delete an object of a derived class through a pointer to the base class. Hence, the destructor doesn't need to be virtual.
Firstly, I think the answer to the OP's question has been answered quite well - that's a solid NO.
But, is it just me going insane or is something going seriously wrong in the community? I felt a bit scared to see so many people suggesting that it's useless/rare to hold a pointer/reference to Base. Some of the popular answers above suggest that we don't model IS-A relationship with CRTP, and I completely disagree with those opinions.
It's widely known that there's no such thing as an interface in C++. So to write testable/mockable code, a lot of people use an ABC (abstract base class) as an "interface". For example, you have a function void MyFunc(Base* ptr) and you can use it this way: MyFunc(ptr_derived). This is the conventional way to model an IS-A relationship, and it requires vtable lookups when you call any virtual functions in MyFunc. So this is pattern one to model an IS-A relationship.
In some domains where performance is critical, there exists another way (pattern two) to model an IS-A relationship in a testable/mockable manner - via CRTP. And really, the performance boost can be impressive (600% in the article) in some cases; see this link. So MyFunc will look like this: template<typename Derived> void MyFunc(Base<Derived> *ptr). When you use MyFunc, you do MyFunc(ptr_derived); the compiler is going to generate a copy of MyFunc() that best matches the parameter type ptr_derived - MyFunc(Base<Derived> *ptr). Inside MyFunc, we may well assume some function defined by the interface is called, and pointers are statically cast at compile time (check out the impl() function in the link), so there is no overhead for vtable lookups.
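A minimal sketch of this second pattern, assuming Base, name()/name_impl() and MyFunc are purely illustrative stand-ins for the names used in the linked article:

#include <iostream>

// static "interface": calls are forwarded to the derived class at compile time
template <typename Derived>
struct Base {
    void name() { static_cast<Derived*>(this)->name_impl(); } // no vtable involved
};

struct Derived : Base<Derived> {
    void name_impl() { std::cout << "Derived\n"; }
};

// generic code written against the static interface
template <typename T>
void MyFunc(Base<T>* ptr) {
    ptr->name(); // resolved statically to T::name_impl
}

int main() {
    Derived d;
    MyFunc(&d); // prints "Derived"
}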
Now, can someone please tell me either I am talking insane nonsense or the answers above simply did not consider the second pattern to model IS-A relationship with CRTP?

How to make sure you have overridden (hidden) a method in a derived class in C++?

#include <iostream>

class Base {
public:
    void foo() const {
        std::cout << "foo const" << std::endl;
    }
};

class Derived : public Base {
public:
    void foo() {
        std::cout << "foo" << std::endl;
    }
};
I want to make sure that foo() const is correctly hidden from Base. Yeah, this is a bad idea, and maybe I should make Base::foo() const a pure virtual to require Derived::foo() to override correctly - but let's say that I cannot make Base::foo() const pure virtual. Is there a better way to make sure Base::foo() const is correctly hidden in Derived?
Edit: I want to make sure that in Derived I have correctly hidden the base implementation.
Simply by defining a member function foo in the derived class, you have hidden all of the foo functions in the base class.
You'll need to refine the question a little - are you concerned that the foo in the derived class is not a proper replacement for the foo in the base class? I'm having a hard time trying to determine what you're really asking for.
Edit: Based on your edit and additional comments, I think you have a misunderstanding of how hiding works in C++. In this case, it doesn't matter that one function is const and the other one isn't - once C++ finds a function foo in the derived class, it stops looking anywhere else! This is usually a huge trap for people. Consider the following:
#include <iostream>
using std::cout;

class Base
{
public:
    void foo(double d)
    {
        cout << d;
    }
};

class Derived : public Base
{
public:
    void foo(int i)
    {
        cout << i;
    }
};

int main()
{
    Derived obj;
    obj.foo(123.456);
}
What do you think the output is? It's 123! And you probably got a compiler warning telling you the double value was going to be truncated. Even though the function signature that takes a double is obviously a better match, it's never considered - it has been hidden.
"void foo() const" and "void foo()" are two completely different functions as far as C++ is concerned. That's why you don't see the Derived's "foo" hiding the "foo" from the Base.
Are you sure you want to HIDE foo() from Base? If you want to exploit polymorphism you need to make the base version of foo() virtual. It does not have to be pure, though. Otherwise you get static binding - do you want that?
If you don't want to make it pure virtual and thereby force the implementation, you might just make the body of foo() in the Base class just immediately throw an exception. This serves the purpose of forcing any implementors and users to implement an override, or have it blow up when they try to use it.
I've seen this done before, and while it's a bit ugly, it very certainly works.
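A minimal sketch of that idea, assuming the base function is made virtual with a throwing default body (the exact exception type is just an example):

#include <iostream>
#include <stdexcept>

class Base {
public:
    virtual ~Base() = default;
    // not pure, but blows up loudly if a derived class forgets to override it
    virtual void foo() const {
        throw std::logic_error("Base::foo() called: derived class did not override it");
    }
};

class Derived : public Base {
public:
    void foo() const override { std::cout << "Derived::foo\n"; }
};

int main() {
    Derived d;
    const Base& b = d;
    b.foo(); // fine: Derived overrides foo
    // Base raw; raw.foo(); // would throw at runtime instead of failing at compile time
}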
If I have understood your question properly, you are asking how to make sure that derived classes override the method compulsorily, without making the method pure in the Base class.
By requiring all of your derived classes to provide an implementation of the method, IMHO you are hinting that the method in the Base class really serves no purpose on its own. Objects must invoke one of the Derived class versions of foo(). Hence, the method should be made pure virtual, in my opinion. I don't think there is any other way to achieve this.
Also, note that in your example you have changed the signature of foo() between Base and Derived. This makes the Base class method invisible in the Derived class: unless you add using Base::foo, the const overload of foo() will not be visible through Derived.
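A minimal sketch of that using-declaration, reusing the Base/Derived shape from the question:

#include <iostream>

class Base {
public:
    void foo() const { std::cout << "foo const\n"; }
};

class Derived : public Base {
public:
    using Base::foo;                       // re-expose the hidden const overload
    void foo() { std::cout << "foo\n"; }
};

int main() {
    Derived d;
    d.foo();                               // non-const overload: prints "foo"
    static_cast<const Derived&>(d).foo();  // const overload from Base: prints "foo const"
}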
In Base::foo(), you could do the following:
assert (dynamic_cast<const Derived*> (this) == NULL);
But it has three problems:
It requires a change in the Base class, which should be closed to modification.
It may be stronger than you need, since you may want to permit Base::foo() to be called from a Base object or explicitly.
It uses RTTI, which means a vtbl, which might be why you didn't want to use virtual functions to begin with.