Cppcheck: "The scope of the variable can be reduced" (and loops) - C++

Cppcheck reports findings like: "The scope of the variable 'x' can be reduced".
What if I have this situation:
int x;
for (int i = 0; i != 10; ++i)
{
    x = someFunction();
    // ... I use x variable here
}
I think my code is OK. What do you think? Should I change it to something like this?
for (int i = 0; i != 10; ++i)
{
    int x = someFunction();
    // ... I use x variable here
}
In the second version the variable x is defined on every iteration... Isn't that suboptimal, I guess?

The position of an int declaration has no performance impact, so Cppcheck is right to raise this style issue. The same advice applies to non-trivial types as well,
for (int i = 0; i != 10; ++i)
{
    MyType x = someFunction();
    // ... I use x variable here
}
since constructors tend to be just as efficient as assignments. As of version 1.65, Cppcheck does not seem to distinguish between trivial and non-trivial types.
But don't blindly follow such style suggestions; there will be cases of non-trivial types where assignment is more efficient than construction. (As usual: if in doubt about performance, measure!)
Edit: a style consideration
The second variant is better style, as it combines declaration and initialization:
It often saves you from writing (or reading) a comment that is not very meaningful.
Sometimes you can also add const, which protects you from accidental changes (see the sketch below).
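A minimal sketch of that suggestion, reusing MyType and someFunction from the example above (placeholders from the question, not real APIs):

for (int i = 0; i != 10; ++i)
{
    const MyType x = someFunction();  // declared, initialized and const in one step
    // ... use x here; it goes out of scope at the end of each iteration
}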

If the variable x is not used outside the loop, then the second approach is much better. And there is not the slightest problem with the optimization of the code: the memory for the variable is allocated only once, not on each iteration of the loop.

As others have mentioned, for trivial types it is unlikely to have a significant performance impact.
However, you should also consider that, by reducing scope, you aid readability by having the declaration closer to usage, and possibly more importantly, make it easier to refactor.
Both of these could be important when considering maintainability.
We all know we should keep functions short and well refactored, but we have all seen those 5000 line long monsters, where a variable was declared at the top, and used once, 3789 lines in. And if you haven't, pity the rest of us.

Efficient functor dispatcher

I need help understanding two different versions of a functor dispatcher; see here:
#include <cmath>
#include <complex>

double* psi;
double dx = 0.1;
int range;

struct A
{
    double operator()(int x) const
    {
        return dx * (double)x * x;
    }
};

template <typename T>
void dispatchA()
{
    constexpr T op{};
    for (int i = 0; i < range; i++)
        psi[i] += op.operator()(i);
}

template <typename T>
void dispatchB(T op)
{
    for (int i = 0; i < range; i++)
        psi[i] += op.operator()(i);
}

int main(int argc, char** argv)
{
    range = argc;
    psi = new double[range];
    dispatchA<A>();
    // dispatchB<A>(A{});
}
Live at https://godbolt.org/z/93h5T46oq
The dispatcher will be called many times in a big loop, so I need to make sure that I'm doing it right.
Both versions seem to me unnecessarily complex, since the type of the functor is known at compile time.
DispatchA, because it unnecessarily creates a (constexpr) object.
DispatchB, because it passes the object over and over.
Of course, those could be solved by a) making the call operator a static function of the functor,
but static functions are bad practice, right?
Or b) making a static instance of the functor inside the dispatcher, but then the lifetime of the object grows to the lifetime of the program.
That being said, I don't know enough assembly to meaningfully compare the two approaches.
Is there a more elegant/efficient approach?
This likely isn't the answer you are looking for, but the general advice you are going to get from almost any seasoned developer is to just write the code in a natural/understandable way, and only optimize if you need to.
This may sound like a non-answer, but it's actually good advice.
The majority of the time, the cost you may (if at all) incur due to small decisions like this will be inconsequential overall. Generally, you'll see more gains from optimizing an algorithm than from optimizing a few instructions. There are, indeed, exceptions to this rule -- but generally such optimizations sit in a tight loop -- and this is the type of thing you can retroactively look at by profiling and benchmarking.
It's better to write code in a way that can be maintained in the future, and only really optimize it if this proves to be an issue down the line.
For the code in question, both code snippets when optimized produce identical assembly -- meaning that both approaches should perform equally well in practice (provided the calling characteristics are the same). But even then, benchmarking would be the only real way to verify this.
Since the dispatchers are function template definitions, they are implicitly inline, and their definitions will always be visible at the point of invocation. Often, this is enough for an optimizer to both introspect and inline such code (if it deems this better than not).
... static functions are bad practice, right?
No; static functions are not bad practice. Like any utility in C++, they can surely be misused -- but there is nothing inherently bad about them.
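As a hedged illustration of option (a) from the question (the names apply and dispatchC are made up for this sketch, not part of the original code), a stateless functor can expose a static member function and the dispatcher can call it without ever creating an object:

struct A
{
    static double apply(int x) { return 0.1 * (double)x * x; }  // no object required
};

template <typename T>
void dispatchC(double* psi, int range)
{
    for (int i = 0; i < range; i++)
        psi[i] += T::apply(i);  // call through the type, not through an instance
}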
DispatchA, ... unnecessarily creates an (constexpr) object
constexpr objects are constructed at compile-time -- and so you would not see any real cost to this other than perhaps a bit more space on the stack being reserved. This cost would really be minimal.
You could also make this static constexpr instead if you really wanted to avoid this. Although logically the "lifetime of the object grows to the lifetime of the program" as you mentioned, constexpr objects cannot have exit-time behavior in C++, so the cost is virtually nonexistent.
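A minimal sketch of that suggestion, applied to the dispatchA from the question (the globals psi and range are assumed as in the original code; this only changes the storage class and is not a measured improvement):

template <typename T>
void dispatchA()
{
    static constexpr T op{};      // constant-initialized; no per-call construction
    for (int i = 0; i < range; i++)
        psi[i] += op(i);          // op(i) is the same as op.operator()(i)
}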
Assuming A is stateless, as it is in your example, and has no non-static data members, they are identical. The compiler is smart enough to see that construction of the object is a no-op and omits it. Let's clear up your code a bit to get clean assembly we can easily reason about:
struct A {
    double operator()(int) const noexcept;
};

void useDouble(double);
int genInt();

void dispatchA() {
    constexpr A op{};
    auto const range = genInt();
    for (int i = 0; i < range; i++) useDouble(op(genInt()));
}

void dispatchB(A op) {
    auto const range = genInt();
    for (int i = 0; i < range; i++) useDouble(op(genInt()));
}
Here, where input comes from and where the output goes is abstracted away. Generated assembly can only differ because of how the op object is created. Compiling it with GCC 11.1, I get identical assembly generation. No creation or initialization of A takes place.
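Since constructing a stateless A is a no-op, an equivalent formulation, sketched here under the same assumptions about psi and range (dispatchD is a made-up name), is to construct a temporary at the call site and let the optimizer discard it:

template <typename T>
void dispatchD()
{
    for (int i = 0; i < range; i++)
        psi[i] += T{}(i);         // construct-and-call; free for a stateless T
}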

C++ - Loop efficiency: storing temporary values of a class member vs. a pointer to this member

Within a class method, I'm accessing private attributes - or attributes of a nested class. Moreover, I'm looping over these attributes.
I was wondering which is the most efficient way, in terms of time (and memory), between:
copying the attribute and accessing the copy within the loop,
accessing the attribute directly within the loop,
or maybe using an iterator over the attribute.
I feel my question is related to: Efficiency of accessing a value through a pointer vs storing as temporary value. But in my case, I just need to access a value, not change it.
Example
Given two classes
class ClassA
{
public:
    vector<double> GetAVector() { return m_AVector; }
private:
    vector<double> m_AVector;
};
and
class ClassB
{
public:
    void MyFunction();
private:
    vector<double> m_Vector;
    ClassA m_A;
};
I. Should I do:
1.
void ClassB::MyFunction()
{
    vector<double> foo;
    for(int i=0; i<... ; i++)
    {
        foo.push_back(SomeFunction(m_Vector[i]));
    }
    /// do something ...
}
2.
void ClassB::MyFunction()
{
    vector<double> foo;
    vector<double> VectorCopy = m_Vector;
    for(int i=0; i<... ; i++)
    {
        foo.push_back(SomeFunction(VectorCopy[i]));
    }
    /// do something ...
}
3.
void ClassB::MyFunction()
{
    vector<double> foo;
    for(vector<double>::iterator it = m_Vector.begin(); it != m_Vector.end(); it++)
    {
        foo.push_back(SomeFunction(*it));
    }
    /// do something ...
}
II. What if I'm not looping over m_Vector but over m_A.GetAVector()?
P.S.: I understood while going through other posts that it's not useful to 'micro'-optimize up front, but my question is more about what really happens and what should be done, as far as standards (and coding style) are concerned.
You're in luck: you can actually figure out the answer all by yourself, by trying each approach with your compiler and on your operating system, and timing each approach to see how long it takes.
There is no universal answer here that applies to every imaginable C++ compiler and operating system that exists on the third planet from the sun. Each compiler and each piece of hardware is different, and has different runtime characteristics. Even different versions of the same compiler will often result in different runtime behavior that might affect performance. Not to mention various compilation and optimization options. And since you didn't even specify your compiler and operating system, there's literally no authoritative answer that can be given here.
Although it's true that for some questions of this type it's possible to arrive at the best implementation with a high degree of certainty, this isn't one of them. The only way you can get the answer is to figure it out yourself, by trying each alternative yourself, profiling, and comparing the results.
I can categorically say that 2. is less efficient than 1. Copying to a local copy, and then accessing it like you would the original would only be of potential benefit if accessing a stack variable is quicker than accessing a member one, and it's not, so it's not (if you see what I mean).
Option 3 is trickier, since it depends on the implementation of the begin() method (and end(), which may be called on every iteration) versus the implementation of the operator[] method. I could irritate some C++ die-hards and say there's an option 4: ask the vector for a pointer to its underlying array and use a pointer or array index on that directly. That might just be faster than either!
And as for II, there is a double-indirection there. A good compiler should spot that and cache the result for repeated use - but otherwise it would only be marginally slower than not doing so: again, depending on your compiler.
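A hedged sketch of that "option 4", reusing m_Vector, SomeFunction and foo from the question (std::vector::data() is a standard member since C++11):

void ClassB::MyFunction()
{
    vector<double> foo;
    const double* p = m_Vector.data();      // pointer to the contiguous storage
    const size_t n = m_Vector.size();
    foo.reserve(n);                         // avoid repeated reallocations
    for (size_t i = 0; i < n; i++)
    {
        foo.push_back(SomeFunction(p[i]));  // index the raw array directly
    }
    /// do something ...
}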
Without optimizations, option 2 would be slower on every imaginable platform, because it will incur a copy of the vector, and the access time would be identical for a local variable and a class member.
With optimizations, depending on SomeFunction, performance might be the same or worse for option 2. The performance would be the same if SomeFunction is either visible to the compiler and provably does not modify its argument, or its signature guarantees that the argument will not be modified - in that case the compiler can optimize away the copy altogether. Otherwise, the copy will remain.
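For example (a sketch; SomeFunction is the questioner's placeholder), the first signature below guarantees the argument will not be modified, while the second gives the compiler no such guarantee:

double SomeFunction(const double& value);  // cannot modify the caller's element
double SomeFunction(double& value);        // may modify it, so the copy in option 2 has to stay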

What are the pros and cons of skipping some member variable initialization in c++?

Consider the following example,
Aclass.h
class Aclass
{
public:
    Aclass(int x, double y);
private:
    int something;
    double nothing;
};
Aclass.cpp
#include "Aclass.h"

Aclass::Aclass(int x, double y)
{
    something = x;
    nothing = y;
}
//Write some functions to manipulate x and y.
So now, what is the difference if I skip the nothing = y; initialization in the constructor? What is the downside, and how does it affect the rest of the code? Is this a good way to code? What I know is that a constructor will create an object anyway, whether the members are initialized or not (as with a default constructor), and that constructors are used to create versatile objects.
If there is no reason to initialize a variable, you don't need this variable
=> Delete it entirely. Seriously, what is an uninitialized variable good for? Nothing (other than being initialized at some point).
If you plan to initialize it later, before it is used:
Can you guarantee that it will get a value before it is first read from, independent of how often and in what order the class methods are called? Then it's not "wrong", but instead of tediously checking that (and risking bugs because it's complicated), it's far easier to give it a value in the constructor.
No, making it more complicated on purpose is not a good way to code.
Leaving any variable uninitialized allows it to hold some garbage value.
Reading it before it is assigned results in undefined behaviour. And it has no pros.
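A minimal sketch of the usual way to guarantee the members always have a value, using a member initializer list plus (since C++11) in-class defaults; this is a generic pattern, not code from the question:

class Aclass
{
public:
    Aclass(int x, double y) : something(x), nothing(y) {}  // initialized, not assigned afterwards
private:
    int something = 0;      // in-class defaults cover any constructor that omits them
    double nothing = 0.0;
};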

Some basic C++ concepts — initialization and assignment

I was asked in an interview to tell the exact difference between the following C/C++ code statements:
int a = 10;
and
int a;
a = 10;
Though both assign the same value, they told me there is a lot of difference between the two in memory.
Can anyone please explain this to me?
As far as the language is concerned, they are two ways to end up with the same thing: a variable a holding the value 10 (the first by initialization, the second by a declaration followed by an assignment).
The statement
int a; reserves memory for the variable a, which at that point contains garbage.
Because of that you then give it a value with a = 10;
In the statement int a = 10; these two steps are done in the same statement:
first a piece of memory is reserved for the variable a, and then that memory is written with the value 10.
int a = 10;
^^^^^        <- reserve memory for the variable a
      ^^^^^  <- write 10 to that memory location
Regarding memory, the first declaration uses less memory on your PC because fewer characters are used, so your .c file will be smaller.
But after compilation the produced executable files will be the same.
IMPORTANT: if those statements are outside any function, they are possibly not the same (although they will produce the same result).
The reason is that the plain declaration int a; will set a to 0, because globals (variables with static storage duration) are zero-initialized, as defined by the C standard.
"Though both assign the same value, but there is a lot of difference between the two in memory."
No, there's no difference in stack memory usage!
The difference is that assigning a value through initialization may cost some additional assembler instructions (and thus the memory needed to store them, a.k.a. code footprint), which the compiler can't optimize out at this point (because it's demanded).
If you initialize a immediately, this will have some cost in code. You might want to delay initialization until the value of a is actually needed:
void foo(int x) {
    int a; // int a = 30; may generate unwanted extra assembler instructions!
    switch(x) {
    case 0:
        a = 10;
        break;
    case 1:
        a = 20;
        break;
    default:
        return;
    }
    // Do something with a correctly initialized a
}
This could well have been an interview question asked at our company, by particular colleagues of mine. And they would have wanted you to answer that just having the declaration int a; in the first place is the more efficient choice.
I'd say this interview question was meant to see if you really have an in-depth understanding of the C and C++ languages (a mean-spirited one, though!).
Speaking for myself, I'm usually more lenient about such things in interviews.
I consider the effect to be very minimal. Though it could well matter seriously on embedded MCU targets, where you have very limited space left for the code footprint (say 256K or less), and/or need to use compiler toolchains that actually aren't able to optimize this out by themselves.
If you are talking about a global variable (one that doesn't appear in a block of code, but outside of all functions/methods):
int a;
makes a zero-initialized variable. Some (most?) C++ implementations will place this variable in a memory region (a segment/section, commonly called .bss) dedicated to zero-initialized variables.
int a = 10;
makes a variable initialized to something other than 0. Some implementations have a different region in memory for these (commonly called .data). So this variable may have an address (&a) that is very different from the previous case.
This is, I guess, what you mean by "lot of difference between the two in memory".
Practically, this can affect your program if it has severe bugs (memory overruns) - they may get masked if a is defined in one manner or the other.
P.S. To make it clear, I am only talking about global variables here. So if your code is like int main() {int a; a = 10;} - here a is typically allocated on stack and there is no "difference in memory" between initialization and assignment.
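A minimal sketch of the placement described above; the section names .bss and .data are typical for ELF toolchains, but the exact layout is implementation-defined:

int zeroed;       // zero-initialized global: typically placed in .bss
int ten = 10;     // global with a non-zero initializer: typically placed in .data

int main()
{
    return zeroed + ten;   // &zeroed and &ten may be far apart in memory
}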

What makes a static variable initialize only once?

I noticed that if you initialize a static variable in C++ in code, the initialization only runs the first time you run the function.
That is cool, but how is that implemented? Does it translate to some kind of twisted if statement? (if given a value, then ..)
#include <iostream>
using namespace std;

void go( int x )
{
    static int j = x;
    cout << ++j << endl; // see 6, 7, 8
}

int main()
{
    go( 5 );
    go( 5 );
    go( 5 );
}
Yes, it does normally translate into an implicit if statement with an internal boolean flag. So, in the most basic implementation your declaration normally translates into something like
void go( int x ) {
    static int j;
    static bool j_initialized;
    if (!j_initialized) {
        j = x;
        j_initialized = true;
    }
    ...
}
On top of that, if your static object has a non-trivial destructor, the language has to obey another rule: such static objects have to be destructed in the reverse order of their construction. Since the construction order is only known at run-time, the destruction order becomes defined at run-time as well. So, every time you construct a local static object with non-trivial destructor, the program has to register it in some kind of linear container, which it will later use to destruct these objects in proper order.
Needless to say, the actual details depend on implementation.
It is worth adding that when it comes to static objects of "primitive" types (like int in your example) initialized with compile-time constants, the compiler is free to initialize that object at startup. You will never notice the difference. However, if you take a more complicated example with a "non-primitive" object
void go( int x ) {
    static std::string s = "Hello World!";
    ...
then the above approach with if is what you should expect to find in the generated code even when the object is initialized with a compile-time constant.
In your case the initializer is not known at compile time, which means that the compiler has to delay the initialization and use that implicit if.
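A small sketch of that contrast (the exact code generation is, of course, compiler-dependent):

int constantInit()
{
    static int j = 5;    // compile-time constant: may be initialized at program startup,
                         // with no hidden flag and no runtime check
    return ++j;
}

int runtimeInit(int x)
{
    static int j = x;    // runtime value: needs the hidden "already initialized?"
                         // check on every call
    return ++j;
}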
Yes, the compiler usually generates a hidden boolean "has this been initialized?" flag and an if that runs every time the function is executed.
There is more reading material here: How is static variable initialization implemented by the compiler?
While it is indeed "some kind of twisted if", the twist may be more than you imagined...
ZoogieZork's comment on AndreyT's answer touches on an important aspect: the initialisation of static local variables is, on some compilers including GCC, thread safe by default (a compiler command-line option can disable it). Consequently, it uses some inter-thread synchronisation mechanism (a mutex or an atomic operation of some kind) which can be relatively slow. If you wouldn't be comfortable, performance-wise, with explicit use of such an operation in your function, then you should consider whether there's a lower-impact alternative to the lazy initialisation of the variable (i.e. explicitly construct it in a thread-safe way yourself, somewhere, just once). Very few functions are so performance-sensitive that this matters though - don't let it spoil your day, or make your code more complicated, unless your program's too slow and your profiler is fingering that area.
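A minimal sketch of the lazy, thread-safe pattern being discussed; since C++11 the standard itself guarantees that a local static is initialized exactly once, even with concurrent callers:

#include <string>

const std::string& greeting()
{
    // Initialized on the first call only; concurrent first calls block until
    // initialization has completed ("magic statics").
    static const std::string s = "Hello World!";
    return s;
}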
They are initialized only once because that's what the C++ standard mandates. How this happens is entirely up to compiler vendors. In my experience, a local hidden flag is generated and used by the compiler.