Any performance reason to put attributes protected/private? - c++

I "learned" C++ at school, but there are several things I don't know, like where or what a compiler can optimize, seems I already know that inline and const can boost a little...
If performance is an important thing (gaming programming for example), does putting class attributes not public (private or protected) allow the compiler to make more optimized code ?
Because all my previous teacher were saying was it's more "secure" or "prevent not wanted or authorized class access/behavior", but in the end, I'm wonder if putting attributes not public can limit the scope and thus fasten things.
I don't criticize my teachers (should I), but the programming class I was in wasn't very advanced...

The teachers were right to tell you to use private and protected to hide implementation and to teach you about information hiding instead of proposing questionable performance optimizations. Try to think of an appropriate design first and of performance second; in 99% of cases this will be the better choice (even in performance-critical scenarios). Performance bottlenecks can appear in a lot of unexpected places and are much easier to track down if your design is sound.
To directly answer your question, however: any reduction in scope may help the compiler do certain optimizations, but off the top of my head I can't think of any that apply specifically to making members private.

No. Making members private or protected is not going to provide any performance benefits; of course, the benefits to your design (information hiding) are huge.

There's no such thing as public, private and protected once your code is compiled, so it cannot affect performance.
There's also no such thing as const in machine code (except perhaps ROM), but the compiler can make some logical optimisations to your program by knowing whether a value can change (in some situations).
inline rarely has any effect. It is merely a suggestion to the compiler, which the compiler is free to ignore (and often does). The compiler will inline functions as it sees fit.
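To make that concrete, here is a minimal sketch (the types are made up) of the same data with and without access control; neither variant costs anything extra at run time:

// Access specifiers are checked by the compiler and then discarded; they do
// not change what the object looks like in memory or how it is accessed.
struct Open {                            // members public by default
    int x;
    int y;
};

class Closed {                           // same data, just private
public:
    Closed(int x, int y) : x_(x), y_(y) {}
    int sum() const { return x_ + y_; }  // trivial inline accessor
private:
    int x_;
    int y_;
};

// Same size on any real implementation; only what other code is *allowed* to
// touch differs, and that is enforced entirely at compile time.
static_assert(sizeof(Open) == sizeof(Closed), "no per-object cost for private");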

Related

How can I better learn to "not pay for what you don't use"?

I've just gotten answers to this question whose bottom line, essentially, is: "Doing X doesn't make sense since it would make you pay for things you might not use."
I find this maxim difficult to follow; my instincts lean more towards what I consider clear semantics, with things defined "in their place". More generally, it's not immediately obvious to me what the hidden costs and secret tariffs of a particular design choice would be.
Is this covered by (non-reference) books on C++? Is there someplace relevant online to better enlighten myself on following this principle?
In the case you are presenting it is not as general a statement as it seems.
Doing X doesn't make sense since it would make you pay for things you might not use.
This is merely a statement that if you can, avoid using virtual functions. They add overhead to the function call.
Designs based on virtual functions can often be reworked to use templates and regular function calls. One standard-library example is std::vector: in Java, a Vector implements interfaces so it can be used in algorithms, which is accomplished through virtual function calls; std::vector instead uses iterators.
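A rough sketch of the contrast (the interface and names are invented for illustration; real designs differ):

#include <cstddef>
#include <numeric>
#include <utility>
#include <vector>

// Java-like style: an abstract interface, paid for with a virtual call per element.
struct Sequence {
    virtual ~Sequence() = default;
    virtual std::size_t size() const = 0;
    virtual int at(std::size_t i) const = 0;
};

struct VectorSequence : Sequence {
    std::vector<int> data;
    explicit VectorSequence(std::vector<int> d) : data(std::move(d)) {}
    std::size_t size() const override { return data.size(); }
    int at(std::size_t i) const override { return data[i]; }
};

int sum(const Sequence& s) {
    int total = 0;
    for (std::size_t i = 0; i < s.size(); ++i)
        total += s.at(i);                       // virtual dispatch on every access
    return total;
}

// C++ style: a template over iterators, resolved entirely at compile time and
// therefore fully inlinable; you only pay for the abstraction you actually use.
template <typename Iter>
int sum(Iter first, Iter last) {
    return std::accumulate(first, last, 0);
}

int main() {
    std::vector<int> v{1, 2, 3, 4};
    VectorSequence s(v);
    return sum(s) - sum(v.begin(), v.end());    // both yield 10, so returns 0
}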
Despite the question being overly broad and asking for off-site material, I think it is interesting enough to deserve an answer. Remember that C++ was originally just "C with classes", and it is still possible today to write what is basically C code without using any of the nice abstractions that C++ gives you. For example, if you don't want the cost of exceptions, don't use them; if you don't want the cost of RTTI and virtual functions, don't use them; if you don't want the overhead of templates... etc.
As for resources, I'm going to break the rules and recommend Game Programming Patterns which despite the name is a good general purpose guide to writing performant C++.
How can I better learn to “not pay for what you don't use”?
The key to "not paying for what you don't use" is abstractions. When you clearly understand the purpose of a class or a function, you add the data and arguments that are absolutely necessary for the class and the function to work correctly, with as little overhead as possible.
You have to be very vigilant about adding member variables and member functions (virtual as well as non-virtual) to a class. Every member variable adds to the memory requirements of the class. Every member function requires maintenance. The presence of virtual member functions adds to the memory requirements of the class as well as a small penalty at run time.
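As a hedged illustration of the memory point (the types are made up and exact sizes are implementation-defined):

#include <iostream>

struct Plain {
    int value;
};

struct Polymorphic {
    virtual ~Polymorphic() = default;   // forces a vtable pointer into every object
    int value;
};

int main() {
    // On a typical 64-bit ABI this might print "4 16" (vptr plus padding),
    // but the exact numbers depend on the platform.
    std::cout << sizeof(Plain) << ' ' << sizeof(Polymorphic) << '\n';
    return 0;
}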
You have to be very vigilant about the arguments to a function. You don't want the user to be burdened with supplying arguments that don't make sense. You also don't want to leave out any arguments by making hidden assumptions.

Member hooks versus base hooks in intrusive data structures

I'm coding an intrusive data structure and wondering whether to use base or member hooks. As the code will be called many times, my question regards performance and to what extent the compilers are able to inline such code.
Base hooks are based on inheritance while member hooks use pointers-to-members via template parameters.
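To make the terms concrete, here is a rough sketch of the two styles (the names are made up and only loosely follow the spirit of Boost.Intrusive, whose actual API differs):

// The hook itself: just the links of an intrusive doubly-linked list.
struct ListHook {
    ListHook* prev = nullptr;
    ListHook* next = nullptr;
};

// Base hook: the payload inherits from the hook.
struct TaskBase : ListHook {
    int id = 0;
};

// Member hook: the payload contains the hook, and the container is told where
// it lives through a pointer-to-member template parameter.
struct TaskMember {
    int id = 0;
    ListHook hook;
};

template <typename T, ListHook T::*Hook>
struct MemberList {
    ListHook head;                        // sentinel node
    void push_front(T& item) {
        ListHook& h = item.*Hook;         // offset known at compile time
        h.next = head.next;
        h.prev = &head;
        if (head.next) head.next->prev = &h;
        head.next = &h;
    }
};

// Usage: MemberList<TaskMember, &TaskMember::hook> ready_list;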
My design choice would be to use member hooks, but my experience says pointers are much harder to optimize than static code. On the other hand, all those pointers are known at compile time and perhaps the compiler can do some magic to analyze what's happening.
Does anyone have any experience with this? Any data, hints or references are welcome.
As for most "X vs Y, what is faster?" questions there is only one proper answer for this one:
Ask your profiler.
Experience is vague. Human guesswork cannot take into account all the nitty-gritty details and pitfalls of compiler optimizations. Compilers differ in what they can optimize and how well they do it, sometimes even between different versions of the same compiler. The only thing that can reliably tell you how your implementation will be optimized by your specific compiler(s) on your specific platform(s) is a proper measurement of performance with typical problem sizes.
Even if there is someone who tells you he knows what is faster and gives you some pretty graphs: can you trust him enough to skip those measurements? Does he know what your specific environment looks like? Do he and his graphs take into account the special corner cases of your problem? Most probably not.
Since data and hooks are in a "has-a" relationship, I'd also prefer member hooks from a design point of view. I also don't think there is a difference in optimization between putting the hooks in a base class and putting them into the class directly.
There is also some discussion of these different approaches in the Boost.Intrusive documentation.

Moving from void* and casting to an ABC with PVFs (will there be a speed hit?)

I've just inherited (ahem) a QNX realtime project which uses a void*/downcasting/case-statement mechanism to handle messaging. I'd prefer to switch to an abstract base class with pure virtual functions instead, but I'm wondering if the original solution was done like that for speed reasons. It looks a lot like it was originally written in C and moved to C++ at some point, so I'm guessing that could be the reason behind it.
Any thoughts on this are appreciated. I don't want to make the code nice, safe and neat and then have it fail for performance reasons during testing.
I doubt that performance will be a concern. If the values in the switch/case are sufficiently disparate, your compiler may not even optimize it into a jump table, which opens up the possibility that virtual dispatch could be faster than the switch.
If a pure virtual interface makes sense design-wise I would definitely go that way (prototype and profile it if you're really concerned).
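For concreteness, a rough sketch of the two dispatch styles under discussion (the message types and handlers are hypothetical):

#include <cstdint>
#include <iostream>

void start_work(int priority) { std::cout << "start " << priority << '\n'; }
void stop_work(int reason)    { std::cout << "stop "  << reason   << '\n'; }

// Style 1: tagged void* payload plus a switch, as in the inherited C-style code.
enum MsgType : std::uint8_t { MSG_START, MSG_STOP };
struct StartMsg { int priority; };
struct StopMsg  { int reason; };

void handle_raw(MsgType type, void* payload) {
    switch (type) {                               // dispatch = compare/branch or jump table
        case MSG_START: start_work(static_cast<StartMsg*>(payload)->priority); break;
        case MSG_STOP:  stop_work(static_cast<StopMsg*>(payload)->reason);     break;
    }
}

// Style 2: abstract base class with a pure virtual handler.
struct Message {
    virtual ~Message() = default;
    virtual void handle() const = 0;              // dispatch = one indirect call through the vtable
};
struct StartMessage : Message {
    int priority = 0;
    void handle() const override { start_work(priority); }
};
struct StopMessage : Message {
    int reason = 0;
    void handle() const override { stop_work(reason); }
};

void handle_message(const Message& m) { m.handle(); }

int main() {
    StartMsg raw{5};
    handle_raw(MSG_START, &raw);                  // old style
    handle_message(StartMessage{});               // new style
    return 0;
}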

Messing with encapsulation

I know exactly what encapsulation means.
But this question was asked to me in an interview.
I have a requirement where I have to create a new class.
Suppose somebody on the team messes up the encapsulation part of the class, but on the whole
the required functionality works fine, and it is delivered to the client.
What are the possible problems the client might face because of that?
I tried to say that security norms would be violated and that the vulnerability could be used to add something and mess up the product, but he said the client doesn't know anything about enhancing the code.
I finally gave up.
Could anybody please help me with some examples?
Bad encapsulation (whatever it means) makes proper use of the class harder.
For instance, suppose you have two public methods that must only be called in a particular order, and the object state becomes corrupt otherwise. That's bad encapsulation: the user can't tell from the class definition that the methods must be called in that order, the class does nothing to protect against calling them in the wrong order, and once the user fails to guess the right order he is screwed.
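A minimal illustration (hypothetical class, declarations only):

// Badly encapsulated: nothing stops a caller from writing before opening,
// and the header gives no hint that the order matters.
class LogFileBad {
public:
    void open(const char* path);     // must be called first
    void write(const char* line);    // corrupts state or crashes if open() was skipped
};

// Better: the invariant "a LogFile is always open" is enforced by construction,
// so the wrong order simply cannot be expressed.
class LogFileGood {
public:
    explicit LogFileGood(const char* path);  // opens here, or fails loudly
    void write(const char* line);
};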
Encapsulation is not a property of the program. It is a property of how the source code of the program is written. As such, someone without access to the source code (such as the end user) will not be affected by proper or improper encapsulation. You could write a program with no encapsulation whatsoever, and if the functionality works fine, the end user would never notice.
Of course, those who do have access to the code are affected by the presence of encapsulation, as it usually makes debugging old code and writing new orthogonal code easier. So, in a sense, this does impact the ability to deliver working functionality on time, but those were assumed, in your interview, to be the case.
Given "client doesnt know anything regarding enhancing the code", I assume they're not writing any code themselves that uses the API. The consequences to them are then only through distributed code that directly modified what should have been private parts of the object... that means more libraries and applications may contain code that also need to be modified should the workings of that class change. So, updates to the software may be larger and possibly harder to deploy. (If properly encapsulating member functions are inline anyway, it may still work out the same as calling code still needs recompilation).
BTW, encapsulation via private and protected isn't a run-time security feature... even at compile time, it's intended as a firm reminder to client code that certain things weren't designed to be directly accessed/modified, but can easily be bypassed even then using reinterpret casts, template specialisations and other hackery.
IMO the biggest problem you may face is that you will underestimate the cost of refactoring or of implementing new features. For example, replacing that class with a new one that works as a proxy to a remote object may look like simple stuff, but then you may face problems because the application ended up depending on internals that are hard to change for remote use.
Maybe taking shortcuts saved a little time, but you'll end up giving that time back (and more) for free to angry customers because of this underestimation.
Another serious problem is that a program with a clear interface that has nevertheless been poked through by doing nasty things with internal stuff just stinks. This is a problem in itself, but also a problem because whoever sees that code will feel it's OK to do similar things for future changes, especially when under time pressure.
In the long term this will turn the program into a bowl of rotten, segfaulting spaghetti that everyone will fear to touch because of unexplainable ripple side effects. A description of this "broken window" effect is in the nice book The Pragmatic Programmer.

Encapsulation v Performance

Simple question:
I really like the idea of encapsulation, but I don't know if it is worth it in a performance-critical situation.
For example:
x->var;
is faster than
x->getVar();
because of the function calling overhead. Is there any solution that is both fast AND encapsulated?
getVar() can in all probability be inlined. Even if there is a performance penalty, the benefits of encapsulation far outweigh the performance considerations.
There's no overhead if the function is inlined.
On the other hand, getters are often a code smell, and a bad one. They stick to the letter of encapsulation, but violate its principles.
"There's no overhead if the getVar function is inlined"
"If getVar() is simply return var; and is inline and non-virtual the two expressions should be optimized to the same thing"
"getVar() in all possibility could be inlined"
Can Mr Rafferty make the assumption that the code will be inlined? Not "should be" or "could be". In my opinion that's a problem with C++: it's not especially WYSIWYG; you can't be sure what code it will generate. Sure, there are benefits to using OO, but if execution efficiency (performance) is important, C++ (or C# or Java) is not the obvious choice.
On another topic
There's a lot of talk about "premature optimization" being the root of all evil, and since nobody pays attention to the "premature" part, a lot of programmers think that optimization itself is the root of all evil.
In these cases I find it helpful to bring out the original quote so everyone may see what they've been missing (not to say misunderstanding and misquoting):
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
Most people attribute the quote to Tony Hoare (father of QuickSort) and some to Donald Knuth (Art of Computer Programming).
An informative discussion as to what the quote may or may not mean may be found here: http://ubiquity.acm.org/article.cfm?id=1513451
You can write inline accessor functions.
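For example, something along these lines (the class is made up):

class Player {
public:
    explicit Player(int hp) : hp_(hp) {}
    int hp() const { return hp_; }      // defined in the class body, so implicitly inline
    void set_hp(int hp) { hp_ = hp; }   // still leaves room to add invariant checks later
private:
    int hp_;
};

int remaining(const Player& p) {
    // With optimization on, any mainstream compiler turns p.hp() into a plain
    // member load, i.e. the same code as reading a public field directly.
    return p.hp();
}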
You are right in that there often is a tradeoff between clean object oriented design and high performance. But do not make assumptions. If you go into these kinds of optimizations, you have to test every change for performance gains. Modern compilers are incredibly good at optimizing your code (like the comment from KennyTM says for your example), so do not fall into the trap of Premature Optimization.
It's important to realise that modern optimisers can do a lot for you, and to use C++ well you need to trust them. They will optimise this and give identical performance unless you deliberately code the accessors out-of-line (which has a different set of benefits: e.g. you can modify the implementation and relink without recompiling client code), or use a virtual function (but that's logically similar to a C program using a function pointer anyway, and has similar performance costs). This is a very basic issue: so many things - like iterators, operator[] on a vector etc. would be too costly if the optimiser failed to work well. All the mainstream C++ compilers are mature enough to have passed this stage many, many years ago.
As others have noted, the overhead is either negligible, or even entirely optimized away.
Anyway, it is very unlikely that the bottleneck lies in this kind of function. And if you do find a performance problem with the access pattern: with direct access you are out of luck, whereas with accessor functions you can easily switch to better-performing patterns such as caching.