C++: What does "hide" mean in an overriding method? [duplicate] - c++

Possible Duplicate: name hiding and fragile base problem
I'm familiar with the rules involving member function hiding. Basically, a derived class with a function that has the same name as a base class function doesn't actually overload the base class function - it completely hides it.
#include <string>

struct Base
{
    void foo(int x) const
    {
    }
};

struct Derived : public Base
{
    void foo(const std::string& s) { }
};

int main()
{
    Derived d;
    d.foo("abc");
    d.foo(123); // Will not compile! Base::foo is hidden!
}
So, you can get around this with a using declaration. But my question is, what is the reason for base class function hiding? Is this a "feature" or just a "mistake" by the standards committee? Is there some technical reason why the compiler can't look in the Base class for matching overloads when it doesn't find a match for d.foo(123)?

Name lookup works by looking in the current scope for matching names; if nothing is found, it looks in the enclosing scope, then in the scope enclosing that, and so on until reaching the global namespace.
This isn't specific to classes, you get exactly the same name hiding here:
#include <iostream>

namespace outer
{
    void foo(char c) { std::cout << "outer\n"; }

    namespace inner
    {
        void foo(int i) { std::cout << "inner\n"; }
        void bar() { foo('c'); }
    }
}

int main()
{
    outer::inner::bar();
}
Although outer::foo(char) is a better match for the call foo('c'), name lookup stops after finding outer::inner::foo(int) (i.e. outer::foo(char) is hidden), and so the program prints inner.
If member function names weren't hidden, that would mean name lookup in class scope behaved differently from name lookup in non-class scope, which would be inconsistent and confusing, and would make C++ even harder to learn.
So there's no technical reason the name lookup rules couldn't be changed, but they'd have to be changed both for member functions and for other kinds of name lookup, and it would make compilers slower, because they'd have to keep searching for names even after finding matches in the current scope. Sensibly, if there's a name in the current scope, it's probably the one you wanted. A call in some scope probably wants to find names in that scope: if two functions are in the same namespace, they're probably related (part of the same module or library), so if one uses the name of the other, it probably means to call the one in the same scope. If that's not what you want, use explicit qualification or a using declaration to tell the compiler the other name should be visible in that scope.
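For completeness, here is a minimal sketch (reusing the question's Base and Derived, nothing beyond standard C++ assumed) showing both workarounds just mentioned, a member using-declaration and explicit qualification:
#include <string>

struct Base
{
    void foo(int x) const {}
};

struct Derived : public Base
{
    using Base::foo;   // re-declares Base::foo in Derived's scope
    void foo(const std::string& s) {}
};

int main()
{
    Derived d;
    d.foo("abc");      // calls Derived::foo(const std::string&)
    d.foo(123);        // now OK: the using-declaration made Base::foo(int) visible
    d.Base::foo(123);  // explicit qualification also works, with or without the using
}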

Is this a "feature" or just a "mistake" by the standards committee?
It's definitely not a mistake, since it's clearly stipulated in the standard. It's a feature.
Is there some technical reason why the compiler can't look in the Base class for matching overloads when it doesn't find a match for d.foo(123)?
Technically, a compiler could look in the base class. Technically. But if it did, it would break the rules set by the standard.
But my question is, what is the reason for base class function hiding?
Unless someone from the committee comes forward with an answer, I think we can only speculate. Basically, there were two options:
if I declare a function with the same name in a derived class, keep the base class's functions with the same name directly accessible through a derived class
don't
It could have been determined by flipping a coin (...ok, maybe not).
In general, what are the reasons for wanting a function with the same name as one in a base class? For different functionality, you'd more likely use polymorphism instead. For handling different cases (different parameters), if those cases aren't present in the base class, a strategy pattern might be more appropriate for the job. So most likely function hiding comes into effect when you actually do want to hide the function: you're not happy with the base class implementation, so you provide your own, with the option of writing using, but only when you want to.
I think it's just a mechanism to make you think twice before having a function with the same name and a different signature.

I believe @Lol4t0 is pretty much correct, but I'd state things much more strongly. If you allowed this, you'd end up with two possibilities: either make a lot of other changes throughout almost the entire language, or else end up with something almost completely broken.
The other changes you'd make to allow this to work would be to completely revamp how overloading is done -- you'd have to change at least the order of the steps that were taken, and probably the details of the steps themselves. Right now, the compiler looks up the name, then forms an overload set, resolves the overload, then checks access to the chosen overload.
To make this work even sort of well, you'd pretty much have to change that to check access first, and only add accessible functions to the overload set. With that, at least the example in @Lol4t0's answer could continue to compile, because Base::foo would never be added to the overload set.
That still means, however, that adding to the interface of the base class could cause serious problems. If Base didn't originally contain foo, and a public foo were added, then the call in main to d.foo() would suddenly do something entirely different, and (again) it would be entirely outside the control of whoever wrote Derived.
To cure that, you'd just about have to make a fairly fundamental change in the rules: prohibit implicit conversions of function arguments. Along with that, you'd change overload resolution so in case of a tie, the most derived/most local version of a function was favored over a less derived/outer scope. With those rules, the call to d.foo(5.0) could never resolve to Derived::foo(int) in the first place.
That, however, would only leave two possibilities: either calls to free functions would have different rules than calls to member functions (implicit conversions allowed only for free functions) or else all compatibility with C would be discarded entirely (i.e., also prohibit implicit conversions in all function arguments, which would break huge amounts of existing code).
To summarize: to change this without breaking the language entirely, you'd have to make quite a few other changes as well. It would almost certainly be possible to create a language that worked that way, but by the time you were done it wouldn't be C++ with one minor change -- it would be an entirely different language that wasn't much like C++ or C, or much of anything else.

I can only propose that this decision was made to make things simpler.
Imagine that a derived function overloaded the base one. Then should the following code generate a compilation error, or use Derived's function?
struct Base
{
private:
    void foo(float);
};

struct Derived : public Base
{
public:
    void foo(int);
};

int main()
{
    Derived d;
    d.foo(5.0f);
}
According to the existing behaviour of overload resolution (best match first, access checked afterwards), this would generate an error.
Now imagine that in the first version Base had no foo(float), and in a second version it appears. Changing the implementation of the base class would then break the interface of the derived class.
If you are the developer of Derived, cannot influence the developers of Base, and a lot of clients use your interface, you are in a bad situation.

Related

Why is a public const method not called when the non-const one is private?

Consider this code:
#include <iostream>

struct A
{
    void foo() const
    {
        std::cout << "const" << std::endl;
    }
private:
    void foo()
    {
        std::cout << "non-const" << std::endl;
    }
};

int main()
{
    A a;
    a.foo();
}
The compiler error is:
error: 'void A::foo()' is private
But when I delete the private one it just works. Why is the public const method not called when the non-const one is private?
In other words, why does overload resolution come before access control? This is strange. Do you think it is consistent? My code works and then I add a method, and my working code does not compile at all.
When you call a.foo();, the compiler goes through overload resolution to find the best function to use. When it builds the overload set it finds
void foo() const
and
void foo()
Now, since a is not const, the non-const version is the best match, so the compiler picks void foo(). Then the access restrictions are put in place and you get a compiler error, since void foo() is private.
Remember, in overload resolution it is not 'find the best usable function'. It is 'find the best function and try to use it'. If it can't because of access restrictions or being deleted, then you get a compiler error.
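The same "find the best function and try to use it" rule applies to deleted functions; here is a minimal sketch, independent of any class, illustrating it:
void f(long) {}        // accessible and viable
void f(int) = delete;  // better match for an int argument, but deleted

int main()
{
    f(0L);  // OK: calls f(long)
    f(0);   // Will not compile! Overload resolution picks f(int), which is deleted,
            // even though f(long) could have been reached with a conversion.
}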
In other words, why does overload resolution come before access control?
Well, let's look at:
#include <iostream>

struct Base
{
    void foo() { std::cout << "Base\n"; }
};

struct Derived : Base
{
    void foo() { std::cout << "Derived\n"; }
};

struct Foo
{
    void foo(Base * b) { b->foo(); }
private:
    void foo(Derived * d) { d->foo(); }
};

int main()
{
    Derived d;
    Foo f;
    f.foo(&d);
}
Now let's say that I did not actually mean to make void foo(Derived * d) private. If access control came first then this program would compile and run and Base would be printed. This could be very hard to track down in a large code base. Since access control comes after overload resolution I get a nice compiler error telling me the function I want it to call cannot be called, and I can find the bug a lot easier.
Ultimately this comes down to the assertion in the standard that accessibility should not be taken into consideration when performing overload resolution. This assertion may be found in [over.match] clause 3:
... When overload resolution succeeds, and the best viable function is not accessible (Clause [class.access]) in the context in which it is used, the program is ill-formed.
and also the Note in clause 1 of the same section:
[ Note: The function selected by overload resolution is not guaranteed to be appropriate for the context. Other restrictions, such as the accessibility of the function, can make its use in the calling context ill-formed. — end note ]
As for why, I can think of a couple of possible motivations:
It prevents unexpected changes of behaviour as a result of changing the accessibility of an overload candidate (instead, a compile error will occur).
It removes context-dependence from the overload resolution process (i.e. overload resolution would have the same result whether inside or outside the class).
Suppose access control came before overload resolution. Effectively, this would mean that public/protected/private controlled visibility rather than accessibility.
Section 2.10 of Design and Evolution of C++ by Stroustrup has a passage on this where he discusses the following example
int a; // global a
class X {
private:
int a; // member X::a
};
class XX : public X {
void f() { a = 1; } // which a?
};
Stroustrup mentions that a benefit of the current rules (visibility before accessibility) is that (temporarily) changing the private inside class X into public (e.g. for debugging purposes) causes no quiet change in the meaning of the above program (i.e. X::a is the name found in both cases, which gives an access error in the example above). If public/protected/private controlled visibility instead, the meaning of the program would change (the global a would be used while X::a is private, X::a otherwise).
He then states that he does not recall whether this was by explicit design or a side effect of the preprocessor technology used to implement the C with Classes predecessor of Standard C++.
How is this related to your example? Basically because the Standard made overload resolution conform to the general rule that name lookup comes before access control.
10.2 Member name lookup [class.member.lookup]
1 Member name lookup determines the meaning of a name (id-expression) in a class scope (3.3.7). Name lookup can result in an ambiguity, in which case the program is ill-formed. For an id-expression, name lookup begins in the class scope of this; for a qualified-id, name lookup begins in the scope of the nested-name-specifier. Name lookup takes place before access control (3.4, Clause 11).
8 If the name of an overloaded function is unambiguously found, overloading resolution (13.3) also takes place before access control. Ambiguities can often be resolved by qualifying a name with its class name.
Since the implicit this pointer is non-const, the compiler will first check for the presence of a non-const version of the function before a const version.
If you explicitly mark the non-const one private then the resolution will fail, and the compiler will not continue searching.
It's important to keep in mind the order of things that happen, which is:
Find all the viable functions.
Pick the best viable function.
If there isn't exactly one best viable, or if you can't actually call the best viable function (due to access violations or the function being deleted), fail.
(3) happens after (2). Which is really important, because otherwise making functions deleted or private would become sort of meaningless and much harder to reason about.
In this case:
The viable functions are A::foo() and A::foo() const.
The best viable function is A::foo() because the latter involves a qualification conversion on the implicit this argument.
But A::foo() is private and you don't have access to it, hence the code is ill-formed.
This comes down to a fairly basic design decision in C++.
When looking up the function to satisfy a call, the compiler carries out a search like this:
It searches to find the first[1] scope at which there's something with that name.
The compiler finds all the functions (or functors, etc.) with that name in that scope.
Then the compiler does overload resolution to find the best candidate among those it found (whether they're accessible or not).
Finally, the compiler checks whether that chosen function is accessible.
Because of that ordering, yes, it's possible that the compiler will choose an overload that's not accessible, even though there's another overload that's accessible (but not chosen during overload resolution).
As to whether it would be possible to do things differently: yes, it's undoubtedly possible. It would definitely lead to quite a different language than C++ though. It turns out that a lot of seemingly rather minor decisions can have ramifications that affect a lot more than might be initially obvious.
"First" can be a little complex in itself, especially when/if templates get involved, since they can lead to two-phase lookup, meaning there are two entirely separate "roots" to start from when doing the search. The basic idea is pretty simple though: start from the smallest enclosing scope, and work your way outward to larger and larger enclosing scopes.
Access controls (public, protected, private) do not affect overload resolution. The compiler chooses void foo() because it's the best match. The fact that it's not accessible doesn't change that. Removing it leaves only void foo() const, which is then the best (i.e., only) match.
In this call:
a.foo();
There is always an implicit this pointer available in every member function. And the const qualification of this is taken from the calling reference/object. The above call is treated by the compiler as:
A::foo(a);
But you have two declarations of A::foo, which are treated like:
A::foo(A* );
A::foo(A const* );
By overload resolution, the first will be selected for non-const this, the second will be selected for a const this. If you remove the first, the second will bind to both const and non-const this.
After overload resolution to select the best viable function, comes access control. Since you specified access to the chosen overload as private, the compiler will then complain.
The standard says so:
[class.access/4]: ...In the case of overloaded function names, access control is applied to the function selected by overload resolution....
But if you do this:
A a;
const A& ac = a;
ac.foo();
Then, only the const overload will fit.
The technical reason has been answered by other answers. I'll only focus on this question:
In other words, why does overload resolution come before access control? This is strange. Do you think it is consistent? My code works, and then I add a method, and my working code does not compile at all.
That's how the language was designed. The intent is trying to call the best viable overload, as far as possible. If it fails, an error will be triggered to remind you to consider the design again.
On the other hand, suppose your code compiled and worked well with the const member function being invoked. Someday, someone (maybe yourself) then decides to change the accessibility of the non-const member function from private to public. Then, the behavior would change without any compile errors! This would be a surprise.
Because the variable a in the main function is not declared const.
Const member functions are the ones selected for const objects.
Access specifiers do not affect name-lookup and function-call resolution, ever. The function is selected before the compiler checks whether the call should trigger an access violation.
This way, if you change an access specifier, you'll be alerted at compile-time if there is a violation in existing code; if privacy were taken into account for function call resolution, your program's behavior could silently change.
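If the public const overload is what you actually want, here is a minimal sketch of how to steer overload resolution towards it (assuming C++17 for std::as_const; the pre-C++17 const reference shown in an earlier answer works the same way):
#include <iostream>
#include <utility>  // std::as_const (C++17)

struct A
{
    void foo() const { std::cout << "const\n"; }
private:
    void foo() { std::cout << "non-const\n"; }
};

int main()
{
    A a;
    const A& ar = a;        // bind a const reference, so this is const-qualified
    ar.foo();               // calls foo() const
    std::as_const(a).foo(); // same effect without a named reference
}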

Why are access declarations deprecated? What does this mean for SRO and using declarations?

I've been looking high and low for an answer to what I thought was a fairly simple question: Why are access declarations deprecated?
class A
{
public:
    int testInt;
};

class B : public A
{
private:
    A::testInt;
};
I understand that it can be fixed by simply plopping "using" in front of A::testInt,
but without some sort of understanding as to why I must do so, that feels like a cheap fix.
Worse yet, it muddies my understanding of using declarations/directives, and the scope resolution operator. If I must use a using declaration here, why am I able to use the SRO and only the SRO elsewhere? A trivial example is std::cout. Why not use using std::cout? I used to think that using and the SRO were more or less interchangeable (give or take some handy functionality provided with the "using" keyword, of which I am aware, at least in the case of namespaces).
I've seen the following in the standard:
The access of a member of a base class can be changed in the derived class by mentioning its qualified-id in the derived class declaration. Such mention is called an access declaration. The effect of an access declaration qualified-id; is defined to be equivalent to the declaration using qualified-id; [Footnote: Access declarations are deprecated; member using-declarations (7.3.3) provide a better means of doing the same things. In earlier versions of the C++ language, access declarations were more limited; they were generalized and made equivalent to using-declarations - end footnote]
However, that really does nothing other than confirm what I already know. If you really boiled it down, I am sure my problem stems from the fact that I think using and the SRO are interchangeable, but I haven't seen anything that would suggest otherwise.
Thanks in advance!
If I must use a using declaration here, why am I able to use the SRO and only the SRO elsewhere?
Huh? You are not able to. Not to re-declare a name in a different scope (which is what an access declaration does).
A trivial example is std::cout. Why not use using std::cout?
Because they're not the same thing, not even close.
One refers to a name, the other re-declares a name.
I am sure my problem stems from the fact that I think using and the SRO are interchangeable
I agree that's your problem, because you are entirely wrong. Following a using declaration it is not necessary to qualify the name, but that doesn't make them interchangeable.
std::cout is an expression, it refers to the variable so you can write to it, pass it as a function argument, take its address etc.
using std::cout; is a declaration. It makes the name cout available in the current scope, as an alias for the name std::cout.
std::cout << "This is an expression involving std::cout\n";
using std::cout; // re-declaration of `cout` in current scope
If you're suggesting that for consistency you should do this to write to cout:
using std::cout << "This is madness.\n";
then, erm, that's madness.
In a class, when you want to re-declare a member with a different access you are re-declaring it, so you want a declaration. You aren't trying to refer to the object in order to write to it or involve it in some expression, which (if it were allowed at class scope) would look like this:
class B : public A
{
private:
    A::testInt + 1;
};
For consistency with the rest of the language, re-declaring a name from a base class is done with a using-declaration, because that's a declaration, it's not done with something that looks like an expression.
class B : public A
{
private:
    A::testInt;       // looks like an expression involving A::testInt, but isn't
    using A::testInt; // re-declaration of `testInt` in current scope
};
Compare this to the std::cout example above and you'll see that requiring using is entirely consistent, and removing access declarations from C++ makes the language more consistent.
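As a small illustration of what the re-declaration actually does, here is a sketch using the question's names (the behaviour follows the standard rules quoted above):
class A
{
public:
    int testInt;
};

class B : public A
{
private:
    using A::testInt;   // re-declares testInt as a private member of B
};

int main()
{
    B b;
    // b.testInt = 1;   // error: testInt is private when named as a member of B
    A& a = b;
    a.testInt = 1;      // still fine: A's own declaration remains public
}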

Why does the overload resolution of member functions exclude the global functions?

void f()
{}

struct A
{
    void f()
    {}
};

struct B : A
{
    B()
    {
        f(); // A::f() is always called, and ::f is always ignored
    }
};

int main()
{
    B();
}
As class B's designer, I MIGHT NOT know the fact that B's base class, i.e. A, has a member function A::f; I just know ::f, and calling ::f is just what I want.
What I expect is that the compiler gives an error because of the ambiguity of calling f. However, the compiler always chooses A::f and ignores ::f. I think this might be a big pitfall.
I just wonder:
Why does the overload resolution of member functions exclude the global functions?
What's the rationale?
As the class B's designer, I MIGHT NOT know B's base class
I don't agree.
Why does the overload resolution of member functions exclude the global functions?
Because the two overloads belong to two different scopes, and the compiler chooses the overload from the nearest scope (see §3.4.1). The f in the inner (same) scope hides the outer f.
What's the rationale?
To have a solid rule: we prefer to work within the same scope, unless we explicitly ask for something from somewhere else.
In a family, when they call for Alex they expect their little boy Alex to come in, not Alexander III of Macedon.
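If, as B's designer, you really do want the global function, explicit qualification says so. Here is a minimal sketch reusing the question's names (the printouts are only added to show which overload runs):
#include <iostream>

void f() { std::cout << "::f\n"; }

struct A
{
    void f() { std::cout << "A::f\n"; }
};

struct B : A
{
    B()
    {
        f();    // A::f — the member hides the global function
        ::f();  // explicit qualification reaches the global f
        A::f(); // explicit qualification for the base member, for symmetry
    }
};

int main()
{
    B();
}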
That's just how overload resolution works, and it's good.
Imagine you really have a big project: tons of inter-dependencies, third-party code and cross-module includes. In this huge mess, you have that one class you know works. It has been working perfectly for 5 years; it's efficient, easy to read and clean. You don't want to touch it. You then upgrade a module and start getting compiler errors. Oh no! That module (which you have no control over) introduced a new function DoAmazingStuff() at global namespace scope, with the same name as a method in our class. If global functions took part in member overload resolution, you would have to refactor that class, since you could no longer use the same name for a class member. Bummer!

Is there any reason to use this->

I have been programming in C++ for many years, and I still have a doubt about one thing. In many places in other people's code I see something like:
void Classx::memberfunction()
{
    this->doSomething();
}
If I need to import/use that code, I simply remove the this-> part, and I have never seen anything break or cause side effects.
void Classx::memberfunction()
{
    doSomething();
}
So, do you know of any reason to use such construct?
EDIT: Please note that I'm talking about member functions here, not variables. I understand it can be used when you want to make a distinction between a member variable and function parameter.
EDIT: apparent duplicate:
Are there any reasons not to use "this" ("Self", "Me", ...)?
The only place where it really makes a difference is in templates in derived classes:
template<typename T>
class A {
protected:
    T x;
};

template<typename T>
class B : A<T> {
public:
    T get() {
        return this->x;
    }
};
Due to the details of name lookup in C++ (here x is a member of a dependent base class), it has to be made explicitly clear that x is an (inherited) member of the class, most easily done with this->x. But this is a rather esoteric case; if you don't have templated class hierarchies, you don't really need to explicitly use this to access members of a class.
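For the same situation, the other standard ways of making the inherited member visible are a member using-declaration or qualifying the name at the point of use; here is a sketch (the derived class is renamed C only to keep it separate from the B above):
template<typename T>
class A {
protected:
    T x{};
};

template<typename T>
class C : A<T> {
public:
    using A<T>::x;          // option 1: a using-declaration brings x into C's scope
    T get() {
        return A<T>::x;     // option 2: qualify the name at the point of use
    }
    T getUnqualified() {
        return x;           // OK here only because of the using-declaration above
    }
};

int main() {
    C<int> c;
    return c.get() + c.getUnqualified();
}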
If there is another variable in the same scope with the same name, the this-> will remove the ambiguity.
void Bar::setFoo(int foo)
{
    this->foo = foo;
}
Also it makes it clear that you're referring to a member variable / function.
To guarantee you trigger compiler errors if there is a macro that might be defined with the same name as your member function and you're not certain if it has been reliably undefined.
No kidding, I'm pretty sure I've had to do exactly this for that reason!
As "code reason", to distinguish a local parameter or value (that takes precedence) from a member:
class Foo
{
    int member;
    void SetMember(int member)
    {
        this->member = member;
    }
};
However, that's bad practice to begin with, and it can usually be solved locally.
The second reason is more about the environment: it sometimes helps IntelliSense filter down to what I am really looking for. However, I also think that when I use this-> just to find the member I am looking for, I should remove it again afterwards.
So yes, there are good reasons, but they are all temporary (and bad on the long run).
I can think of readability like when you use additional parenthesis to make things clear.
I think it is mainly as an aid to the reader. It makes it explicit that what is being called is a method on the object, and not an ordinary function. When reading code, it can be helpful to know that the called function can change fields in the current object, for instance.
It's your own choice. I find it clearer when you use this. But if you don't like it, you can omit it.
This is done to be explicit about the fact that the variable being used is a member variable as opposed to a local or global variable. It's not necessary in most cases, but being explicit about the scope can be helpful if you've shadowed the variable with a declaration of the same name in a tighter scope.
At companies I've worked at, we just prepended "m_" to member variables. It can be helpful sometimes, and I much prefer it to using "this->".
Edit:
Adding a link to the GCC docs, which explain a case where using this-> is necessary to get a non-dependent lookup to work correctly.
This is really a matter of style and applies to many other languages such as Java and C#. Some people prefer to see the explicit this (or self, or Me, or whatever) and others do not. Just go with whatever is in your style guidelines, and if it's your project, you get to decide the guidelines.
There are many good answers, but none of them mention that using this-> in source code makes it easier to read, especially in some long function, but even in a short one. Imagine this code:
bool Class::Foo()
{
    return SomeValue;
}
From looking at this code, you can't clearly tell what SomeValue is. It could even be some #define or a static variable, but if you write
bool Class::Foo()
{
    return this->SomeValue;
}
you clearly know that SomeValue is a non-static member variable of the same class.
So it doesn't just help you to ensure that name of your functions or variables wouldn't conflict with some other names, but it also makes it easier for others to read and understand the source code, writing a self documenting source code is sometimes very important as well.
Another case, which arrived on the scene with C++11, is lambdas in which this is captured.
You may have something like:
#include <functional>

class Example
{
    int x = 0; // initialised so incrementing it is well-defined
public:
    std::function<void()> getIncrementor()
    {
        return [this] () -> void
        {
            ++(this->x);
        };
    }
};
Although your lambda is generated within a class, it only has access to the enclosing function's local variables by capturing them, and to members through the captured this (or, if your compiler does C++14, through an init-captured copy). In the latter case, inside the body of the lambda there simply is no x, only this->x.
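A short usage sketch for the class above; the one assumption worth spelling out is that the returned closure stores only the this pointer, so it must not outlive the Example object it came from:
#include <functional>

int main()
{
    Example e;  // the Example class from the answer above
    std::function<void()> inc = e.getIncrementor();
    inc();      // increments e.x through the captured this pointer
    inc();
    // Caveat: the closure holds only a pointer, so calling inc() after e has
    // been destroyed would be undefined behaviour.
}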
I don't think it makes a difference to the compiler, but I always write this-> because I believe it makes the code self-documenting.
Disambiguation: in case you have another similarly named function/variable in the same namespace? I've never seen it used for any other reason.
I prefer it without the explicit this pointer as well. For method calls it doesn't add a lot of value, but it helps distinguish local variables from member variables.
I can't quite remember the exact circumstances, but I've seen (very rare) instances where I had to write "this->membername" to successfully compile the code with GCC. All that I remember is that it was not in relation to ambiguity and therefore took me a while to figure out the solution. The same code compiled fine without using this-> in Visual Studio.
I will use it to call operators implicitly (the return and parameter types below are just dummies for making up the code).
struct F {
    int n = 0; // declared here so the example compiles; the original left n undeclared
    void operator[](int);
    void operator()();
    void f() {
        (*this)[n];
        (*this)();
    }
    void g() {
        operator[](n);
        operator()();
    }
};
I do like the *this syntax more. It has slightly different semantics, though, in that using *this will not hide non-member operator functions with the same name as a member.