If I have the following class:
class Foo
{
protected:
int i;
public:
Foo() : i(42) {}
};
Naturally, I don't have access to protected members from the outside, but I can do this little trick: first I create a new class which inherits Foo:
class Foo2 : public Foo
{
public:
int GetI() { return i; }
};
Now, whenever I have an instance of Foo or a pointer to such an instance, I can access the protected member via a cast (since Foo2 doesn't add any data members):
Foo *f = new Foo();
Foo f2;
std::cout << ((Foo2*)f)->GetI() << std::endl;
std::cout << (reinterpret_cast<Foo2&>(f2)).GetI() << std::endl;
I understand why this works, but will there ever be any bad consequences? The compiler doesn't mind, and there aren't any run-time checks.
reinterpret_cast<Foo2&>(f2).GetI()
Technically, this is undefined behavior, so it might work, but it does not have to.
You're downcasting a Foo object to a Foo2 object.
A downcast is a cast from a base class to a class derived from the
base class. A downcast is only safe if the object addressed at runtime
is actually an object of the derived class.
To protect your code, use dynamic_cast to check whether a downcast is valid.
Using reinterpret_cast is not recommended for downcasting. Use static_cast or dynamic_cast.
Reading through a number of articles, many say DO NOT USE DOWNCASTING like you did. One dangerous example is having a virtual void GetI() in Foo.
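Here is a minimal sketch of how dynamic_cast reports an invalid downcast instead of silently producing a bad pointer. Note it uses a hypothetical polymorphic Base/Derived pair, since dynamic_cast needs the base class to have at least one virtual function (which Foo above does not):
#include <iostream>

struct Base { virtual ~Base() {} };                            // polymorphic base, required for dynamic_cast
struct Derived : Base { void hello() { std::cout << "Derived\n"; } };

int main() {
    Base b;
    Derived d;
    Base *pb = &b;
    Base *pd = &d;

    if (Derived *ok = dynamic_cast<Derived*>(pd))
        ok->hello();                                           // pd really points to a Derived
    if (dynamic_cast<Derived*>(pb) == nullptr)
        std::cout << "pb does not point to a Derived\n";       // invalid downcast is detected, not UB
}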
Related
I have a class that contains some functions (none are virtual), and two more classes publicly inherit from that class. In both subclasses I override the same function of the base class.
After creating objects of all three classes in main (located in the same file), I call the original function with the base-class object and the overridden functions with the derived-class objects.
I was expecting all 3 function calls to run the original function from the base class (since I didn't use 'virtual' anywhere in the code), but I actually get each version of that function working according to the class in which it was defined (3 different versions).
I have the classes Base & Derived as follows:
struct Base
{
void foo();
};
struct Derived : Base
{
void foo();
};
in main:
int main()
{
Derived d;
d.foo();
}
I thought d.foo() should run Base::foo() if not using 'virtual'.
This is not "overriding"... and it doesn't need to be.
struct Base
{
void foo();
};
struct Derived : Base
{
void foo();
};
int main()
{
Derived d;
d.foo();
}
If I understand you correctly, then you were expecting this to execute Base::foo(), because the functions are not virtual and therefore one does not override the other.
But, here, you do not need virtual dispatch: the rules of inheritance simply state that you'll get the right function for the type of the object you run it on.
The case where you need virtual dispatch/overriding is slightly different: it's when you use indirection:
int main()
{
Base* ptr = new Derived();
ptr->foo();
delete ptr;
}
In the above snippet, the result will be that Base::foo() is called, because the expression ptr->foo() doesn't know that *ptr is really a Derived. All it knows is that ptr is a Base*.
This is where adding virtual (and, in doing so, making the one function override the other) makes magic happen.
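A minimal sketch (with function bodies added so the output is visible) of the same snippet once foo is made virtual:
#include <iostream>

struct Base
{
    virtual void foo() { std::cout << "Base::foo()\n"; }   // virtual now
    virtual ~Base() {}
};

struct Derived : Base
{
    void foo() { std::cout << "Derived::foo()\n"; }        // overrides Base::foo
};

int main()
{
    Base* ptr = new Derived();
    ptr->foo();       // prints "Derived::foo()" thanks to virtual dispatch
    delete ptr;       // safe: Base has a virtual destructor
}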
You cannot override something that isn't virtual. Non-virtual member functions are dispatched statically, based on the static type of the object expression.
You could cheat and "override" a function by making it an inline function that calls something indirectly. Something like this (in C++03):
#include <iostream>
#include <string>

class Foo;
typedef int foo_sig_t (Foo&, std::string&);
class Foo {
foo_sig_t *funptr;
public:
int do_fun(std::string&s) { return funptr(*this,s); }
Foo (foo_sig_t* fun): funptr(fun) {};
~Foo () { funptr= NULL; };
// etc
};
class Bar : public Foo {
static int barfun(Bar&, std::string& s) {
std::cout << s << std::endl;
return (int) s.size();
};
public:
Bar () : Foo(reinterpret_cast<foo_sig_t*>(&barfun)) {};
// etc...
};
and later:
Bar b;
std::string s("hello");
int x=b.do_fun(s);
Officially this is not overriding a virtual function, but it looks very close to it. However, in my Foo example above each Foo instance has its own funptr, which is not necessarily shared by the whole class. All Bar instances, though, share the same funptr value, pointing to the same barfun.
BTW, using C++11 anonymous lambda functions (internally implemented as closures), this would be simpler and shorter.
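A minimal sketch of what that C++11 version might look like, using std::function instead of a raw function pointer (the names are illustrative, not from the original code):
#include <functional>
#include <iostream>
#include <string>
#include <utility>

class Foo {
    std::function<int(std::string&)> fun;                // per-instance behaviour, like funptr above
public:
    explicit Foo(std::function<int(std::string&)> f) : fun(std::move(f)) {}
    int do_fun(std::string& s) { return fun(s); }
};

class Bar : public Foo {
public:
    Bar() : Foo([](std::string& s) {                     // the lambda plays the role of barfun
        std::cout << s << std::endl;
        return static_cast<int>(s.size());
    }) {}
};

int main() {
    Bar b;
    std::string s("hello");
    int x = b.do_fun(s);                                 // prints "hello", x == 5
    (void)x;
}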
Of course, virtual functions are in fact generally implemented by a similar mechanism: objects of polymorphic classes implicitly start with a hidden field (often called the vptr) pointing to the vtable (or virtual method table).
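For illustration only, here is a very simplified, hand-rolled sketch of that mechanism (real compilers do this behind the scenes and the details differ):
#include <iostream>

struct Obj;                                        // forward declaration
struct VTable { void (*print)(Obj&); };            // the "virtual method table": one slot per virtual function

struct Obj {
    const VTable* vptr;                            // hidden pointer at the start of each polymorphic object
    explicit Obj(const VTable* v) : vptr(v) {}
    void print() { vptr->print(*this); }           // a "virtual" call is an indirect call through the table
};

void print_base(Obj&)    { std::cout << "Base\n"; }
void print_derived(Obj&) { std::cout << "Derived\n"; }

const VTable base_vtable    = { &print_base };
const VTable derived_vtable = { &print_derived };

int main() {
    Obj b(&base_vtable), d(&derived_vtable);
    b.print();    // "Base"
    d.print();    // "Derived" - which function runs depends on the table, not on the static type
}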
I really don't understand why this works, but the code below shows an example of calling a private method from the outside without any friend classes:
class A
{
public:
virtual void someMethod(){
std::cout << "A::someMethod";
}
};
class B : public A
{
private:
virtual void someMethod(){
std::cout << "B::someMethod";
}
};
int main(){
A* a = new B;
a->someMethod();
}
outputs:
B::someMethod
Doesn't it violate encapsulation rules in C++? For me this is insane. The inheritance is public, but the access modifier in the derived class is changed to private, so someMethod() in class B is private. So in fact, by doing a->someMethod(), we are directly calling a private method from the outside.
Consider the following code, a modification of the code in the original question:
class A
{
public:
virtual void X(){
std::cout << "A::someMethod";
}
};
class B : public A
{
private:
virtual void Y(){
std::cout << "B::someMethod";
}
};
int main(){
A* a = new B;
a->X();
}
It's easy to see that it's legitimate to call X(): B inherits it from A as a public member. Theoretically, if X() called Y() that would be legitimate as well, although it is not possible here because X() is declared in A, which doesn't know about Y(). But this is effectively what happens if X = Y, i.e. if both methods have the same name.
You may think of it as "B inherits a public method from A, that calls a private method (of B) with the same name".
a is a pointer to an A, where the method is public, so this is not a violation. Since you have used virtual, the vtable is taken into account and you get B::someMethod as the output.
For me this is quite simple.
Since you are allowed to assign an object of any derived class to a pointer of its parent class, you might expect the reverse to work as well. But assigning the child to a pointer of its parent type could be bad if class B were bigger than A.
If a program attempts to access the stored value of an object through an lvalue of other than one of the following types the behavior is undefined:
the dynamic type of the object,
a cv-qualified version of the dynamic type of the object,
a type that is the signed or unsigned type corresponding to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union),
a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
a char or unsigned char type.
So your code doesn't fit these conditions, but since B has the same size as A, it happens to run. Still, it is undefined behavior.
And since you create a new B, the memory block a is pointing to has the method that prints B and not A.
The access control specifiers (public, protected and private) do not apply to member functions or data members themselves. They apply to member function and data member names. And that is how it works: if you can refer to the member without using its name, you can access it.
That's precisely what's happening here. You're calling B::someMethod, but you're calling it using the name A::someMethod, which is public. No problem.
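A minimal sketch (reusing the A and B classes from the question) of the asymmetry this creates:
B  b;
A* a = &b;

a->someMethod();   // OK: name lookup finds A::someMethod (public), dispatch then runs B::someMethod
// b.someMethod(); // error: name lookup finds B::someMethod, which is private, so this would not compile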
You're suggesting a private member function shouldn't be callable from outside of the class (disregarding friends for the moment). How would your desired semantics work in the following case?
Shared library hider:
hider.h:
typedef void (*FuncType)();
class Hider {
private:
static void privFunc();
public:
static void pubFunc();
FuncType getFunction() const;
};
hider.cpp
#include <cstdlib>
#include <iostream>
#include "hider.h"
void Hider::privFunc() {
std::cout << "Private\n";
}
void Hider::pubFunc() {
std::cout << "Public\n";
}
FuncType Hider::getFunction() const {
if (std::rand() % 2) {
return &pubFunc;
} else {
return &privFunc;
}
}
Application using library hider
#include "hider.hpp"
int main()
{
Hider h;
FuncType f = h.getFunction();
f();
}
What about the call to f()? Should it fail at runtime 50% of the time with some form of access control violation?
As suggested by DyP in the comments, a more realistic scenario is the well-known "template method" design pattern:
class Container
{
public:
void insert(const Item &item) {
preInsert();
data.push_back(item);
postInsert();
}
private:
std::vector<Item> data;
virtual void preInsert() = 0;
virtual void postInsert() = 0;
};
class ThreadSafeContainer : public Container
{
private:
std::mutex m;
virtual void preInsert() {
m.lock();
}
virtual void postInsert() {
m.unlock();
}
};
Your semantics would mean this code wouldn't compile.
I have the following code (stolen from virtual functions and static_cast):
#include <iostream>
class Base
{
public:
virtual void foo() { std::cout << "Base::foo() \n"; }
};
class Derived : public Base
{
public:
virtual void foo() { std::cout << "Derived::foo() \n"; }
};
If I have:
int main()
{
Base base;
Derived& _1 = static_cast<Derived&>(base);
_1.foo();
}
The print-out will be: Base::foo()
However, if I have:
int main()
{
Base * base;
Derived* _1 = static_cast<Derived*>(base);
_1->foo();
}
The print-out will be: Segmentation fault: 11
Honestly, I don't quite understand both. Can somebody explain the complications between static_cast and virtual methods based on the above examples? BTW, what could I do if I want the print-out to be "Derived::foo()"?
A valid static_cast to pointer or reference type does not affect virtual calls at all. Virtual calls are resolved in accordance with the dynamic type of the object. static_cast to pointer or reference does not change the dynamic type of the actual object.
The output you observe in your examples is irrelevant though. The examples are simply broken.
The first one makes an invalid static_cast. You are not allowed to cast Base & to Derived & in situations when the underlying object is not Derived. Any attempt to perform such cast produces undefined behavior.
Here's an example of a valid application of static_cast for reference-type downcasting:
int main()
{
Derived derived;
Base &base = derived;
Derived& _1 = static_cast<Derived&>(base);
_1.foo();
}
In your second example the code is completely broken for reasons that have nothing to do with any casts or virtual calls. The code attempts to manipulate an uninitialized pointer - the behavior is undefined.
In your second example, you get a segfault because you did not initialize your base pointer, so there is no vtable to call through. Try:
Base * base = new Base();
Derived* _1 = static_cast<Derived*>(base);
_1->foo();
This will print Base::foo()
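And if the goal is to see Derived::foo() printed, the pointed-to object itself has to be a Derived (a small sketch, still using the classes from the question):
Base * base = new Derived();
Derived* _1 = static_cast<Derived*>(base);   // valid downcast: the object really is a Derived
_1->foo();                                   // prints "Derived::foo()"
delete _1;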
The question makes no sense as asked, since static_cast does not affect the vtable. However, it makes more sense with non-virtual functions:
class Base
{
public:
void foo() { std::cout << "Base::foo() \n"; }
};
class Derived : public Base
{
public:
void foo() { std::cout << "Derived::foo() \n"; }
};
int main()
{
Base base;
Derived& _1 = static_cast<Derived&>(base);
_1.foo();
}
This one will output Derived::foo(). This is, however, very wrong code, and though it compiles, the behavior is undefined.
The whole purpose of virtual functions is that the static type of the variable shouldn't matter. The compiler will look up the actual implementation for the object itself (usually with a vtable pointer hidden within the object). static_cast should have no effect.
In both examples the behavior is undefined. A Base object is not a Derived object, and telling the compiler to pretend that it is doesn't make it one. The way to get the code to print out "Derived::foo()" is to use an object of type Derived.
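For example, a minimal sketch reusing the Base and Derived classes from the question:
int main()
{
    Derived derived;              // the object really is a Derived
    Base& base = derived;         // view it through the Base interface
    base.foo();                   // prints "Derived::foo()" via virtual dispatch - no cast needed
}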
I came across a question while taking an iKM test. There was a base class with two abstract methods with a private access specifier. There was a derived class which overrode these abstract methods, but with a protected/public access specifier.
I had never come across a case where overridden methods in a derived class had a different access specification. Is this allowed? If yes, does it comply with the "IS-A" relation between base and derived (i.e. is it safely substitutable)?
Could you point me to some references which provide more details on such usages of classes?
Thank you.
It is allowed, in both directions (ie, from private to public AND from public to private).
On the other hand, I would argue it does not break the IS-A relationship. I base my argument on 2 facts:
using a Base& (or Base*) handle, you have exactly the same interface as before
you could perfectly well (if you wish) introduce a forwarding method that is public and calls the private method directly anyway: same effect with more typing (as sketched below)
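A minimal sketch of such a forwarding method (the names are hypothetical, just to illustrate the point):
class Widget {
public:
    void doItPublicly() { doIt(); }   // public forwarder: same effect as calling doIt(), with more typing
private:
    virtual void doIt() = 0;          // the private virtual that derived classes implement
};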
Yes, this is legal, accessibility is checked statically (not dynamically):
class A {
public:
virtual void foo() = 0;
private:
virtual void bar() = 0;
};
class B : public A {
private:
virtual void foo() {} // public in base, private in derived
public:
virtual void bar() {} // private in base, public in derived
};
void f(A& a, B& b)
{
a.foo(); // ok
b.foo(); // error: B::foo is private
a.bar(); // error: A::bar is private
b.bar(); // ok (B::bar is public, even though A::bar is private)
}
int main()
{
B b;
f(b, b);
}
Now, why would you want to do that? It only matters if you use the derived class B directly (2nd param of f()) as opposed to through the base A interface (1st param of f()).
If you always use the abstract A interface (as I would recommend in general), it still complies with the "IS-A" relationship.
As many others have pointed out, it is legal.
However, the "IS-A" part is not that simple. When it comes to "dynamic polymorphism", the "IS-A" relation holds, i.e. everything you can do with a Super instance you can also do with a Derived instance.
However, in C++ we also have something that is often referred to as static polymorphism (templates, most of the time). Consider the following example:
class A {
public:
virtual int m() {
return 1;
}
};
class B : public A {
private:
virtual int m() {
return 2;
}
};
template<typename T>
int fun(T* obj) {
return obj->m();
}
Now, when you try to use "dynamic polymorphism" everything seems to be ok:
A* a = new A();
B* b = new B();
// dynamic polymorphism
std::cout << a->m(); // ok
std::cout << dynamic_cast<A*>(b)->m(); // ok - B instance conforms A interface
// std::cout << b->m(); fails to compile due to overridden visibility - expected, since this technically does not violate the IS-A relationship
... but when you use "static polymorphism" you can say that "IS-A" relation no longer holds:
A* a = new A();
B* b = new B();
// static polymorphism
std::cout << fun(a); // ok
//std::cout << fun(b); // fails to compile - B instance does not conform A interface at compile time
So, in the end, changing the visibility of a method is "rather legal", but it's one of the ugly things in C++ that may lead you into a pitfall.
Yes, this is allowed as long as the signature is the same. And in my opinion, yes, you're right: overriding visibility (for example, public -> private) breaks IS-A. I believe Scott Meyers' Effective C++ series has a discussion on this one.
Given the following code
#include <iostream>
#include <vector>

class T {
public:
virtual ~T () {}
virtual void foo () = 0;
};
class U {
public:
U() {}
~U() {}
void bar () { std::cout << "bar" << std::endl; }
};
class A : public U, public T {
public:
void foo () { std::cout << "foo" << std::endl; }
};
int main () {
A * a = new A;
std::vector<U*> u;
std::vector<T*> t;
u.push_back(a);
t.push_back(reinterpret_cast<T*>(u[0]));
u[0]->bar ();
t[0]->foo ();
delete a;
return 0;
}
I get the output I would expect
bar
foo
However, if I change the definition of U to
class U {
public:
U() {}
virtual ~U() {}
virtual void bar () { std::cout << "bar" << std::endl; }
};
I still compile fine and without warnings/errors but the output is now
bar
bar
What is it about the virtual declaration that prevents me from calling into the foo?
Firstly, there are no virtual base classes in your example. Classes that contain virtual functions are called polymorphic. (There is such a thing as "virtual base classes" in C++, but it has nothing to do with your example.)
Secondly, the behavior of your code does not depend on any virtual declarations. You have deliberately destroyed the integrity of the base pointer by using reinterpret_cast. For this reason the behavior of the code is undefined.
A direct cast from one base pointer to another (which is what you are trying to do in your code) is called cross-cast. The only cast in C++ that can carry out a cross-cast is dynamic_cast.
t.push_back(dynamic_cast<T *>(u[0]));
You can perform an indirect cross-cast without dynamic_cast, but for that you have to downcast the pointer to the derived type first (A *) using static_cast and then upconvert it to another base pointer type
t.push_back(static_cast<A *>(u[0])); // upconversion to `T *` is implicit
If you use reinterpret_cast you lose all guarantees, and anything you do is "undefined behaviour". In this case, I expect the VMT got messed up, or the vptr got overwritten.
As an illustration, when I compile the first code above, I get a segfault on execution on my compiler.
If you really want to "cross-execute", you should derive from a common base class and have both U and T inherit from that base class virtually (: virtual public), or use dynamic_cast instead of reinterpret_cast.
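For illustration only, a sketch of that common-base approach (the class Common and the exact layout are assumptions, not the questioner's code):
#include <iostream>

struct Common { virtual ~Common() {} };            // shared polymorphic base

struct T : virtual public Common {
    virtual void foo() = 0;
};

struct U : virtual public Common {
    virtual void bar() { std::cout << "bar" << std::endl; }
};

struct A : public U, public T {
    void foo() { std::cout << "foo" << std::endl; }
};

int main() {
    A a;
    U* u = &a;
    T* t = dynamic_cast<T*>(u);   // well-defined cross-cast through the polymorphic hierarchy
    u->bar();
    t->foo();
}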
Populate t just like you did u:
t.push_back(a);
You don't need reinterpret_cast because A is a T.