Misunderstanding in virtual function call - C++

I have the following code, and I don't understand why it calls the function from class A instead of the one from class B. Could someone tell me why?
#include <iostream>
using namespace std;

class A {
public:
    virtual void f(int n) {
        cout << "A";
    }
};

class B : public A {
public:
    virtual void f(float f) {
        cout << "B";
    }
};

int main() {
    A* p = new B;
    p->f(5.1);
}

These are completely different functions. A function is identified by its name and its arguments. You have no overriding here: you have two distinct functions.
If you'd used the override keyword, the compiler would have immediately told you this.

I modified your code slightly, as shown below, and I get the desired result:
i.e., when accessing a derived-class object through a base-class pointer, the compiler maps my function call correctly to the derived class's implementation.
I believe the following facts apply:
1. A derived class must first override (which does not mean changing the signature) the virtual functions defined by the base class before they can be called polymorphically.
2. Once the derived class has overridden the virtual functions, they can very well be overloaded as well.
3. Looking at the vtable mechanism: an entry is replaced with the derived class's function only when the derived class overrides a virtual function declared in the base class. Otherwise, the pointer of type A still sees only class A's interface, has no way to resolve the call to B's unrelated overload, and ends up implicitly converting the 5.1 double to int according to the signature of f in class A. This is pretty much what I mentioned in point 1.
4. Virtual functions and pure virtual functions are a means of creating an interface that can be shared across different layers of your software: you share the interface and hide the implementation from the user. Having the same function name mean two unrelated interfaces in two classes only causes confusion.
#include <iostream>
using namespace std;

class A {
public:
    virtual void f(int n) { cout << "A" << endl; }
};

class B : public A {
public:
    void f(int f) { cout << "B" << endl; }
    void f(float f) { cout << "B" << endl; }
};

int main() {
    A* p = new B;
    p->f(5.1);
}
Since there are a lot of pros in this forum, if there is anything incorrect in my answer, please leave a comment.
Thanks.

Related

Method Overriding C++

I got a question in my exam which was this:
Function Overriding means the functions have the same prototype but
differ in their body
Justify the Statement with the help of an Example.
Now I quoted this code snippet as an example:
#include <iostream>
using namespace std;

class A {
public:
    virtual void print() {
        cout << "I am Base Class's print Function" << endl;
    }
};

class B : public A {
public:
    void print() {
        cout << "I am Derived's Class print function" << endl;
    }
};
Here I have made two classes, A and B, and class B inherits from class A. Now, by the definition of method overriding, we mean that the function created in the base class gets overridden in the derived class.
I made the function in the base class a virtual function.
Now, my main() file:
int main() {
    A* a1;
    B b1;
    a1 = &b1;
    a1->print();
}
Now, I want to ask whether my code snippet above is a correct example for the question. I have performed function overriding at run time: in my main file, a base-class pointer holds the address of a derived-class object, so when I call print() through a1, it executes the print() function of the derived class.
So is my example justified? Am I right or not?
You could use the classical Cat vs Dog example where both classes inherit from a common base class, i.e. Animal. The common base class can then have a pure virtual function that is then overridden with a differing implementation (method body) in each subclass.
#include <iostream>

class Animal
{
public:
    virtual ~Animal() = default;
    virtual void MakeSound() const = 0;
};

class Dog : public Animal
{
public:
    virtual void MakeSound() const override;
};

class Cat : public Animal
{
public:
    virtual void MakeSound() const override;
};

void Dog::MakeSound() const
{
    std::cout << "Woof!" << std::endl;
}

void Cat::MakeSound() const
{
    std::cout << "Meow!" << std::endl;
}

int main()
{
    const Dog dog{};
    const Cat cat{};
    const Animal& firstAnimal{dog};
    const Animal& secondAnimal{cat};
    /*
     * These functions have the same prototype, void MakeSound(),
     * but differ in their implementation.
     */
    firstAnimal.MakeSound();
    secondAnimal.MakeSound();
    return 0;
}
If your teacher expected this as the answer and considers your example wrong, then I would argue that they are teaching you overriding the wrong way.
From cppreference:
Virtual functions are member functions whose behavior can be overridden in derived classes.
Of course this does not strictly imply the reverse statement: "functions that can be overridden are virtual". But if the reverse weren't true, the quoted sentence would make little sense.
Non-virtual methods are not really meant to be overridden. From the C++ FAQ:
Should a derived class redefine (“override”) a member function that is non-virtual in a base class?
It’s legal, but it ain’t moral. [...]
Note that they put "override" in quotes, because strictly speaking it is not overriding but merely redefining.
Further, you can read on cppreference about the override specifier (emphasis mine):
In a member function declaration or definition, override ensures that the function is virtual and is overriding a virtual function from a base class. The program is ill-formed (a compile-time error is generated) if this is not true.
TL;DR If I had to judge, I would consider this a misleading, bad example of overriding, while your code seems fine. It could benefit from using override, and A should have a virtual destructor, but those are details.

Ambiguity in Multiple Inheritance

Let's say I have this simple code:
#include <iostream>
using namespace std;

class A {
public:
    void show() {
        cout << "A" << endl;
    }
};

class B {
public:
    void show() {
        cout << "B" << endl;
    }
};

class C : public A, public B {
};

int main()
{
    C obj;
    obj.show();
}
This throws a compile-time error because the call to show() is ambiguous. I want to know how the compiler figures this out. Object creation is a run-time process, so how does the compiler know beforehand that an object of class C is going to be created and will invoke show()? Can anybody explain the concept behind this?
In C you are inheriting from both base classes A and B, which define the same method show. This is the reason you are facing the compiler error ambiguous access of 'show'.
To get past it, you need to be explicit about which show method you want to invoke. That is easy to do with the scope resolution operator.
To invoke show method of A: obj.A::show();
To invoke show method of B: obj.B::show();
This is related to the concept of "late binding" in polymorphism: the code tells the compiler to decide at run time what to do. You get this behavior with virtual functions, and polymorphic dispatch only kicks in when you go through pointers (or references). Let me give you a little example.
class Teacher {                    // base class
    string name;
    int numOfStudents;
public:
    Teacher(const string&, int);   // constructor of base
    virtual void print() const;    // a virtual (polymorphic) function
};

class Principal : public Teacher { // derived class
    string SchoolName;
public:
    Principal(const string&, int, const string&);
    void print() const;            // also virtual (polymorphic)
};
If you create a Principal object in main and then call its print() function, the program runs the version defined in the Principal class. But if you don't define a print() function in a class derived from Teacher, then calling print() through a pointer to an object of that class runs the print() defined in Teacher.
But again, do not try this with the object itself; do it through pointers (or references).
Best regards.
Here is the answer you are looking for: obj is of type C, which means it is both an A and a B. I am trying to be careful not to say "has-a" when it is "is-a", but in my mind an object of type C also contains an A subobject and a B subobject. When you run a constructor for type C, the constructors of A and B also run.
So, up to the point where obj is created, everything is fine. But after that, the compiler has to decide which of the two show()s to call, and that is where it gets confused.
You are inheriting from both classes A and B; that is why you get an error: the call is ambiguous. To handle this, you can state explicitly which class's method should be invoked when calling through a C object. That way the compiler does not get confused and reports no error.
CODE:
#include <iostream>
using namespace std;

class A {
public:
    void show() { cout << "Class A"; }
};

class B {
public:
    void show() { cout << "Class B"; }
};

class C : public A, public B {
public:
    void disp() { A::show(); }  // Here you can make explicit from which class the
                                // method shall be called. I refer to "show" from class A.
};

int main() {
    C obj;
    obj.disp();  // Ambiguity in multiple inheritance solved
}

C++ multiple inheritance method overloading

#include <iostream>
using namespace std;

class Base1 {
public:
    virtual void f(int n) = 0;
};

class Base2 {
public:
    virtual void f(char *s) = 0;
};

class Derive1 : public Base1, public Base2 {
public:
    void f(int n) { cout << "d1 fn" << endl; }
    void f(char *s) { cout << "d1 fs" << endl; }
};

class Derive2 : public Derive1 {
public:
    void f(int n) { cout << "d2 fn" << endl; }
    void f(char *s) { cout << "d2 fs" << endl; }
};

int main() {
    Derive1 *d1 = new Derive2();
    int n = 0;
    char s[] = "";
    d1->f(n);
    d1->f(s);
    return 0;
}
The above code runs as expected, but if I comment out one method of Derive1, I get a conversion error; if I comment out both methods of Derive1, I get a method-ambiguity error.
What confuses me is why Derive1 has to define these two methods; why doesn't defining them only in Derive2 work? I need some help understanding this.
Some clarifications:
Let's suppose I never want to create any instances of Derive1. So, it is totally okay if Derive1 is an abstract class.
"All pure virtual functions should have a definition in derived class." This is not true if I don't want to create instances of this derived class.
If I change f in Base1 to f1 and f in Base2 to f2 (just renaming), then Derive1 does not need to define either of them; defining f1 and f2 only in Derive2 works.
So, given how method overloading is supported, I assumed that in the code above I effectively declared a function named something like f_int in Base1 and one named f_str in Base2; that is, I assumed the compiler implements overloading through name mangling. But it seems that is not the case here.
Derive1 is a class derived from Base1 and Base2. Derived classes must implement all pure virtual functions of their base classes if they are to be instantiated.
It doesn't help Derive1 that Derive2 implements these functions, because it is possible to instantiate a Derive1 on its own, and for that it must implement all inherited pure virtual methods.
For example, if you did not implement the functions in Derive1, what would you expect the behavior of this to be?
Derive1* d1 = new Derive1;
int n = 0;
char s[] = "";
d1->f(n);
d1->f(s);
Derive1 didn't implement these, and d1 isn't a Derive2 instance, so what is it supposed to do?
If you declare a pure virtual function in a class, that class becomes an abstract class and you can't instantiate it. A derived class must define all inherited pure virtual functions before it can be instantiated. Both of your base classes have pure virtual functions, so your Derive1 should define them.
Let's start with removing both from Derive1:
When you call d1->f, Derive1 itself declares nothing with that name, so lookup moves to its base classes. Each base class is treated as a separate scope with its own set of candidates: lookup finds one f in Base1 and another f in Base2. Because the name is found in two different base scopes, the compiler can't pick which one to call, so the call is ambiguous. You can pull the parent functions into Derive1 with using-declarations if you don't want to re-implement them, or qualify the call to tell the compiler which base to use.
If you comment out only one of the functions in Derive1, the one you leave hides both parent versions, preventing either of them from being selected. At that point you can call the one you've defined but not the other one, unless you again tell the compiler which specific base class to take it from.

Why does changing a parent virtual function's arguments in a child hide the parent function in C++?

I made a class with a virtual function f(), and in the derived class I redeclared it as f(int). Why can't I access the base class function through the child instance?
class B {
public:
    B() { cout << "B const, "; }
    virtual void vf2() { cout << "b.Vf2, "; }
};

class C : public B {
public:
    C() { cout << "C const, "; }
    void vf2(int) { cout << "c.Vf2, "; }
};

int main()
{
    C c;
    c.vf2();  // error: would have to be vf2(2)
}
You have to add using B::vf2 so that the function is considered during name lookup. Otherwise, as soon as the compiler finds a matching function name while traversing the inheritance tree from child to parent to grandparent and so on, the traversal stops.
class C : public B {
public:
    using B::vf2;
    C() { cout << "C const, "; }
    void vf2(int) { cout << "c.Vf2, "; }
};
You are encountering name hiding. Here is an explanation of why it happens.
In C++, a derived class hides any base class member of the same name. You can still access the base class member by qualifying it explicitly, though:
int main()
{
    C c;
    c.B::vf2();
}
You were caught by name hiding.
Name hiding creeps up everywhere in C++:
#include <string>

int a = 0;

int main(int argc, char* argv[]) {
    std::string a;
    for (int i = 0; i != argc; ++i) {
        a += argv[i];  // okay, refers to std::string a, not int a
        a += " ";
    }
}
And it also appears with Base and Derived classes.
The idea behind name hiding is robustness in the face of change. If name hiding didn't exist, consider what would happen to this:
#include <iostream>

class Base {
};

class Derived : public Base {
public:
    void foo(int i) {
        std::cout << i << "\n";
    }
};

int main() {
    Derived d;
    d.foo(1.0);
}
If I were to add a foo overload to Base that were a better match (i.e., taking a double directly):
void Base::foo(double i) {
sleep(i);
}
Now, instead of printing 1, this program would sleep for 1 second!
This would be crazy, right? It would mean that any time you wished to extend a base class, you would need to look at all the derived classes and make sure you don't accidentally steal some method calls from them!
To let you extend a base class without breaking the derived classes, name hiding comes into play.
The using-declaration lets you import the methods you truly need into your derived class while the rest are safely ignored: a white-listing approach.
When you overload a member function from a base class with a version in the derived class, the base class function is hidden. That is, you need either to qualify calls to the base class function explicitly or to add a using-declaration that makes the base class function visible through objects of the derived class:
struct base {
    void foo();
    void bar();
};

struct derived : base {
    void foo(int);
    using base::foo;
    void bar(int);
};

int main() {
    derived().foo();        // OK: using declaration was used
    derived().bar();        // ERROR: the base class version is hidden
    derived().base::bar();  // OK: ... but can be accessed if explicitly requested
}
The reason this is done is that it was considered confusing and/or dangerous for a member function declared in a derived class to lose out to a potentially better match selected from a base class (obviously, this only really applies to member functions with the same number of arguments). There is also a pitfall when the base class originally lacked a certain member function: you don't want your program to suddenly call a different member function just because one was added to the base class.
The main annoyance with hiding base members arises when a base class has a set of public virtual functions and you only want to override one of them in a derived class. Although just adding the override doesn't change the interface seen through a pointer or reference to the base class, the derived class itself may no longer be usable in a natural way. The conventional work-around is to have public, non-virtual overloads which dispatch to protected virtual functions. The virtual member functions in the various facets of the C++ standard library are an example of this technique.

Multiple Inheritance from same grandparent - merge implementations?

For a certain project I have declared an interface (a class with only pure virtual functions) and want to offer users some implementations of this interface.
I want users to have great flexibility, so I offer partial implementations of the interface. Each implementation covers some of the functionality and leaves the other functions un-overridden, since they deal with different concerns.
However, I also want to present users with a fully usable implementation of the interface. My first approach was simply to derive a class from both partial implementations. This did not work; the compiler reported that some functions are still pure virtual in the derived class.
So my question is whether there is any way simply to merge two partial implementations of the same interface. I found a workaround by explicitly stating which function I want called for each method, but I consider this pretty ugly and would be grateful for a mechanism that takes care of it for me.
#include <iostream>

class A {
public:
    virtual void foo() = 0;
    virtual void bar() = 0;
};

class B : public A {
public:
    void foo() { std::cout << "Foo from B" << std::endl; }
};

class C : public A {
public:
    void bar() { std::cout << "Bar from C" << std::endl; }
};

// Does not work
class D : public B, public C {};

// Does work, but is ugly
class D : public B, public C {
public:
    void foo() { B::foo(); }
    void bar() { C::bar(); }
};

int main(int argc, char** argv) {
    D d;
    d.foo();
    d.bar();
}
Regards,
Alexander
The actual problem is about managing several visitors for a tree: each visitor traverses the tree and makes a decision for each node, and then the visitors' decisions are aggregated and accumulated into a final decision.
Separating the two parts is sadly not possible without (I think) massive overhead, since I want to provide one implementation that takes care of managing the visitors and one that takes care of how the final decision is stored.
Have you considered avoiding the diamond inheritance completely, providing several abstract classes each with optional implementations, allowing the user to mix and match default implementation and interface as needed?
In your case, what's happening is that once you derive D, B::bar hasn't been implemented and C::foo hasn't been implemented. The intermediate classes B and C can't see each other's implementations.
If you need the full interface in the grandparent, have you considered providing the implementation a different way, possibly as a template policy, with default classes that are delegated to for the default behavior?
If your top-level interface has a logical division in functionality, you should split it into two separate interfaces. For example, if you have both serialization and drawing functions in interface A, you should separate these into two interfaces, ISerialization and IDrawing.
You are then free to provide a default implementation of each of these interfaces. The user of your classes can inherit either your interface or your default implementation as needed.
There is also the possibility of using a "factory" class for the main interface type. In other words, the primary interface class also contains a static function that generates an appropriate child class on request from the user. For instance:
#include <cstdio>

class A
{
public:
    enum class_t { CLASS_B, CLASS_C };
    static A* make_a_class(class_t type);
    virtual void foo() = 0;
    virtual void bar() = 0;
};

class B : public A
{
private:
    virtual void foo() { /* does nothing */ }
public:
    virtual void bar() { printf("Called B::bar()\n"); }
};

class C : public A
{
private:
    virtual void bar() { /* does nothing */ }
public:
    virtual void foo() { printf("Called C::foo()\n"); }
};

A* A::make_a_class(class_t type)
{
    switch (type)
    {
        case CLASS_B: return new B();
        case CLASS_C: return new C();
        default: return NULL;
    }
}

int main()
{
    B* Class_B_Obj = static_cast<B*>(A::make_a_class(A::CLASS_B));
    C* Class_C_Obj = static_cast<C*>(A::make_a_class(A::CLASS_C));

    //Class_B_Obj->foo(); // can't access since it's private
    Class_B_Obj->bar();
    Class_C_Obj->foo();
    //Class_C_Obj->bar(); // can't access since it's private
    return 0;
}
If class A for some reason needs to access some private members of class B or class C, just make class A a friend of the child classes (for instance, you could make the constructors of class B and class C private so that only the static function in class A can generate them, and the user can't create one on their own without calling the static factory function in class A).
Hope this helps,
Jason
Since you mentioned that you mainly need access to the functions rather than data members, here is another approach you could use instead of multiple inheritance, based on templates and explicit specialization of member functions:
#include <iostream>
using namespace std;

enum class_t { CLASS_A, CLASS_B, CLASS_C };

template<class_t class_type>
class base_type
{
public:
    static void foo() {}
    static void bar() {}
};

template<>
void base_type<CLASS_A>::foo() { cout << "Calling CLASS_A type foo()" << endl; }
template<>
void base_type<CLASS_B>::bar() { cout << "Calling CLASS_B type bar()" << endl; }
template<>
void base_type<CLASS_C>::foo() { base_type<CLASS_A>::foo(); }
template<>
void base_type<CLASS_C>::bar() { base_type<CLASS_B>::bar(); }

int main()
{
    base_type<CLASS_A> Class_A;
    Class_A.foo();

    base_type<CLASS_B> Class_B;
    Class_B.bar();

    base_type<CLASS_C> Class_C;
    Class_C.foo();
    Class_C.bar();
    return 0;
}
Now, if you need non-static functions that have access to private data members, this gets a bit trickier, but it should still be doable. It would most likely require a separate traits class so you can access the proper types without running into "incomplete type" compiler errors.
Thanks,
Jason
I think the problem is that, with ordinary (non-virtual) inheritance between B and A and between C and A, you end up with two subobjects of type A in D, each of which still has a pure virtual function; this makes D abstract, and you get a compile error when you try to create an instance of it.
Using virtual inheritance solves the problem, since it ensures there is only one copy of A in D, into which B's foo and C's bar are both merged as final overriders.