Is using std::out_of_range for logic bad?

In my project, I have a lot of situations like this:
constexpr size_t element_count = 42;
std::array<bool, element_count> elements{};
for (size_t i = 0; i < element_count; ++i) {
    if (i > 0 && elements[i - 1]) {/*do something*/}
    else {/*do something else*/}
    if (i < element_count - 1 && elements[i + 1]) {/*do something*/}
    else {/*do something else*/}
}
Without checking that i > 0 and i < element_count - 1, I'll get undefined behavior. If I use std::array::at instead of operator[], I get std::out_of_range exceptions instead. I was wondering if there are any problems with just relying on the exception like this:
for (size_t i = 0; i < element_count; ++i) {
    try {
        if (elements.at(i - 1)) {/*do something*/}
    }
    catch (const std::out_of_range& e) {/*do something else*/}
    try {
        if (elements.at(i + 1)) {/*do something*/}
    }
    catch (const std::out_of_range& e) {/*do something else*/}
}
In this example it's more code, but in my real project it would reduce the amount of code because I'm using lots of multidimensional arrays and doing bounds checking for multiple dimensions.

There isn't a problem in the sense that it will work, but that's about it. Using exceptions for basic flow control (which is what you seem to be doing here) is usually frowned upon, with reason, and I don't think I've ever seen it like this in a loop:
- It makes reading and reasoning about the code harder, not least because using exceptions for flow control (instead of for error handling, which is what they are meant for in C++) is unexpected.
- Harder to read usually also means harder to write, and it makes mistakes harder to spot.
- You actually made a mistake already, or at least introduced a behaviour change: i > 0 && elements[i - 1] evaluating to false no longer results in 'something else' being called.
- Reducing the amount of code stops being a good goal when it results in less readable or worse code.
- It might be less performant.
Now it would be interesting to see the actual code, but I suspect it could probably do without any bounds checking whatsoever, e.g. by making the loop start at 1 instead of 0. Or, if this is a recurring pattern, you'd write a helper function (or use an existing one) for iteration with access to multiple elements in one iteration. That would be a reduction in code amount which is actually worth it.
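For illustration, a minimal sketch of that edge-peeling idea (assuming element_count >= 2, and that the 'something else' actions at the edges can be hoisted out of the loop):

// First iteration (i == 0): no left neighbour exists.
/*do something else*/
if (elements[1]) {/*do something*/} else {/*do something else*/}

// Interior: both neighbours are guaranteed to exist, no bounds checks needed.
for (size_t i = 1; i + 1 < element_count; ++i) {
    if (elements[i - 1]) {/*do something*/} else {/*do something else*/}
    if (elements[i + 1]) {/*do something*/} else {/*do something else*/}
}

// Last iteration (i == element_count - 1): no right neighbour exists.
if (elements[element_count - 2]) {/*do something*/} else {/*do something else*/}
/*do something else*/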

Related

if-throw precondition check effectiveness and the DRY principle

A lot of internet resources insist on checking preconditions in API functions via if (something_is_wrong) throw Exception{} instead of assert(!something_is_wrong) and I see some good points in it. However, I'm afraid such usage may result in doubling the same checks:
void foo(int positive) {
    if (positive < 1) {
        throw std::invalid_argument("param1 must be positive");
    }
}

void caller() {
    int number = get_number_somehow();
    if (number >= 1) {
        foo(number);
    }
}
will probably execute like
int number = get_number_somehow();
if (number >= 1) {
    if (number < 1) {
        throw std::invalid_argument("param must be positive");
    }
}
unless the call will actually be inlined and optimized by cutting out one of the ifs, I guess. Besides, writing the check twice (in foo() and in caller()) might be violating the DRY rule. Therefore, maybe I should go for
void caller() {
    int number = get_number_somehow();
    try {
        foo(number);
    } catch (std::invalid_argument const&) {
        // handle or whatever
    }
}
to avoid repeating myself with those precondition checks, gaining a bit of performance and a lot of maintainability in case the function's contract changes.
However, I can't always apply such logic. Imagine std::vector having only at() but not operator[]:
for (int i = 0; i < vector.size(); ++i) {
    bar(vector.at(i)); // each index is checked twice: by the loop and by at()
}
This code results in extra O(N) checks! Isn't it too much? Even if it is optimized out the same way as above, what about such situations with indirect calls or long functions, which probably won't be inlined?
So, should my program be written according to the rules below?
- if an API function probably won't be inlined or is expected to be called a lot of times with checks on the call site (see the vector example), assert() its preconditions (inside it);
- try-catch throwing functions instead of checking their preconditions before the call (the latter seems to break DRY).
If not, why?
So, there are two separate things you are talking about: DRY and performance.
DRY is about code maintenance and structure, and doesn't really apply to code you can't control. So, if the API is a black-box, and there happens to be code inside of it that you can't change, but you need to have separately, then I wouldn't think of it as not DRY to repeat it in your code. Y is Yourself.
But you could still care about performance. If you measure a performance problem, then fix it with whatever makes sense, even if the fix is anti-DRY.
But if you control both sides (the API and the client) and you really want a pure, no-repeat, performant solution, then there's a pattern that goes something like the pseudocode below. I don't know its official name, but I think of it as "proof providing":
let fn = precondition_check(myNum)
if fn != nil {
    // the existence of fn is proof that myNum meets preconditions
    fn()
}
The API function precondition_check returns a function that captures myNum and doesn't need to re-check the precondition, because the function was only created if the precondition held.
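A rough C++ rendering of that pseudocode (a sketch only: the precondition_check name, the lambda, and the use of std::optional are assumptions, not an established API):

#include <functional>
#include <optional>

// Returns a callable only when n satisfies the precondition, so the
// callable never has to re-check it: its existence is the proof.
std::optional<std::function<void()>> precondition_check(int n) {
    if (n < 1) return std::nullopt;
    return [n] { /* use n, known here to be positive */ };
}

void caller(int number) {
    if (auto fn = precondition_check(number)) {
        (*fn)(); // only reachable when the precondition held
    }
}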

Branching when mixing template parameters and variables in C++

I'm trying to carry out some loop optimization as described here: Optimizing a Loop vs Code Duplication
I have the additional complication that some code inside the loop only needs to be executed depending on a combination of run-time-known variables external to the loop (which can be replaced with template parameters for optimization, as discussed in the link above) and a run-time-known variable that is only known inside the loop.
Here is the completely un-optimized version of the code:
for (int i = 0; i < 100000; i++) {
    if (external_condition_1 || (external_condition_2 && internal_condition[i])) {
        run_some_code;
    }
    else {
        run_some_other_code;
    }
    run_lots_of_other_code;
}
This is my attempt at wrapping the loop in a templated function as suggested in the question linked above to optimize performance and avoid code duplication by writing multiple versions of the loop:
template <bool external_condition_1, bool external_condition_2>
void myloop() {
    for (int i = 0; i < 100000; i++) {
        if (external_condition_1 || (external_condition_2 && internal_condition[i])) {
            run_some_code;
        }
        else {
            run_some_other_code;
        }
        run_lots_of_other_code;
    }
}
My question is: how can the code be written to avoid branching and code duplication?
Note that the code is sufficiently complex that the function probably can't be inlined, and compiler optimization also likely wouldn't sort this out in general.
My question is: how can the code be written to avoid branching and code duplication?
Well, you already wrote your template to avoid code duplication, right? So let's look at what branching is left. To do this, we should look at each function that is generated from your template (there are four of them). We should also apply the expected compiler optimizations based upon the template parameters.
First up, set condition 1 to true. This should produce two functions that are essentially (using a bit of pseudo-syntax) the following:
myloop<true, bool external_condition_2>() {
    for (int i = 0; i < 100000; i++) {
        // if ( true || whatever ) <-- optimized out
        run_some_code;
        run_lots_of_other_code;
    }
}
No branching there. Good. Moving on to the first condition being false and the second condition being true.
myloop<false, true>() {
    for (int i = 0; i < 100000; i++) {
        if (internal_condition[i]) { // simplified from (false || (true && i_c[i]))
            run_some_code;
        }
        else {
            run_some_other_code;
        }
        run_lots_of_other_code;
    }
}
OK, there is some branching going on here. However, each i needs to be analyzed to see which code should execute. I think there is nothing more that can be done here without more information about internal_condition. I'll give some thoughts on that later, but let's move on to the fourth function for now.
myloop<false, false>() {
    for (int i = 0; i < 100000; i++) {
        // if ( false || (false && whatever) ) <-- optimized out
        run_some_other_code;
        run_lots_of_other_code;
    }
}
No branching here. You already have done a good job avoiding branching and code duplication.
OK, let's go back to myloop<false,true>, where there is branching. The branching is largely unavoidable simply because of how your situation is set up. You are going to iterate many times. Some iterations you want to do one thing while other iterations should do another. To get around this, you would need to re-envision your setup so that you can do the same thing each iteration. (The optimization you are working from is based upon doing the same thing each iteration, even though it might be a different thing the next time the loop starts.)
The simplest, yet unlikely, scenario would be where internal_condition[i] is equivalent to something like i < 5000. It would also be convenient if you could do all of the "some code" before any of the "lots of other code". Then you could loop from 0 to 4999, running "some code" each iteration. Then loop from 5000 to 99999, running "other code". Then a third loop to run "lots of other code".
Any solution I can think of would involve adapting your situation to make it more like the unlikely simple scenario. Can you calculate how many times internal_condition[i] is true? Can you iterate that many times and map your (new) loop control variable to the appropriate value of i (the old loop control variable)? (Or maybe the exact value of i is not important?) Then do a second loop to cover the remaining cases? In some scenarios, this might be trivial. In others, far from it.
There might be other tricks that could be done, but they depend on more details about what you are doing, what you need to do, and what you think you need to do but don't really. (It's possible that the required level of detail would overwhelm StackOverflow.) Is the order important? Is the exact value of i important?
In the end, I would opt for profiling the code. Profile the code without code duplication but with branching. Profile the code with minimal branching but with code duplication. Is there a measurable change? If so, think about how you can re-arrange your internal condition so that i can cover large ranges without changing the value of the internal condition. Then divide your loop into smaller pieces.
In C++17, to guarantee that no extra branches are evaluated, you might do:
template <bool external_condition_1, bool external_condition_2>
void myloop()
{
for (int i = 0; i < 100000; i++){
if constexpr (external_condition_1) {
run_some_code;
} else if constexpr (external_condition_2){
if (internal_condition[i]) {
run_some_code;
} else {
run_some_other_code;
}
} else {
run_some_other_code;
}
run_lots_of_other_code;
}
}
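At the call site, the run-time flags would then be translated into one of the four instantiations once, outside the loop. A minimal sketch (the run_myloop name is made up; it assumes the two conditions are plain bool values known at run time):

void run_myloop(bool external_condition_1, bool external_condition_2) {
    if (external_condition_1)
        external_condition_2 ? myloop<true, true>() : myloop<true, false>();
    else
        external_condition_2 ? myloop<false, true>() : myloop<false, false>();
}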

Does this function have explicit return values on all control paths?

I have a Heaviside step function centered on unity for any data type, which I've encoded using:
template <typename T>
int h1(const T& t) {
    if (t < 1) {
        return 0;
    } else if (t >= 1) {
        return 1;
    }
}
In code review, my reviewer told me that there is not an explicit return on all control paths. And the compiler does not warn me either. But I don't agree; the conditions are mutually exclusive. How do I deal with this?
It depends on how the template is used. For an int, you're fine.
But, if t is an IEEE754 floating point double type with a value set to NaN, neither t < 1 nor t >= 1 are true and so program control reaches the end of the if block! This causes the function to return without an explicit value; the behaviour of which is undefined.
(In a more general case, where T overloads the < and >= operators in such a way as to not cover all possibilities, program control will reach the end of the if block with no explicit return.)
The moral of the story here is to decide on which branch should be the default, and make that one the else case.
Just because code is correct, that doesn't mean it can't be better. Correct execution is the first step in quality, not the last.
if (t < 1) {
    return 0;
} else if (t >= 1) {
    return 1;
}
The above is "correct" for any datatype of t that has sane behavior for < and >=. But this:
if (t < 1) {
    return 0;
}
return 1;
is easier to verify by inspection: every case is covered, and the second unneeded comparison (which some compilers might not optimize out) is avoided altogether. Code is not only read by compilers, but by humans, including you 10 years from now. Give the humans a break and write more simply for their understanding as well.
As noted, some special values can be neither < 1 nor >= 1, so your reviewer is simply right.
The question is: what made you want to code it like this in the first place? Why do you even consider making life so hard for yourself and others (the people that need to maintain your code)? Just the fact that you are smart enough to deduce that < and >= should cover all cases doesn't mean that you have to make the code more complex than necessary. What goes for physics goes for code too: make things as simple as possible, but not simpler (I believe Einstein said this).
Think about it. What are you trying to achieve? Must be something like this: 'Return 0 if the input is less than 1, return 1 otherwise.' What you've done is add intelligence by saying: oh, but that means I return 1 if t is greater than or equal to 1. This sort of needless 'x implies y' demands extra mental work from the maintainer. If you think that is a good thing, I would advise doing a couple of years of code maintenance yourself.
If it were my review, I'd make another remark. If you use an 'if' statement, then you can basically do anything you want in all branches. But in this case, you do not do 'anything'. All you want to do is return 0 or 1 depending on whether t < 1 or not. In those cases, I think the '?:' operator is much better and more readable than the if statement. Thus:
return t<1 ? 0 : 1;
I know the ?: operator is forbidden in some companies, and I find that a horrible thing to do. ?: usually matches specifications much better, and it can make code so much easier to read (if used with care).

Is there a technical reason to use > (<) instead of != when incrementing by 1 in a 'for' loop?

I almost never see a for loop like this:
for (int i = 0; 5 != i; ++i)
{}
Is there a technical reason to use > or < instead of != when incrementing by 1 in a for loop? Or this is more of a convention?
while (time != 6:30pm) {
Work();
}
It is 6:31pm... Damn, now my next chance to go home is tomorrow! :)
This is to show that the stronger restriction mitigates risks and is probably more intuitive to understand.
There is no technical reason. But there is mitigation of risk, maintainability and better understanding of code.
< or > are stronger restrictions than != and fulfill the exact same purpose in most cases (I'd even say in all practical cases).
There is a duplicate question here, and one interesting answer.
Yes there is a reason. If you write a (plain old index based) for loop like this
for (int i = a; i < b; ++i){}
then it works as expected for any values of a and b (i.e. zero iterations when a > b, instead of infinite if you had used i != b).
On the other hand, for iterators you'd write
for (auto it = begin; it != end; ++it)
because any iterator must implement an operator!=, but not every iterator can provide an operator<.
Also range-based for loops
for (auto e : v)
are not just fancy sugar; they measurably reduce the chance of writing wrong code.
You can have something like
for (int i = 0; i < 5; ++i) {
    ...
    if (...) i++;
    ...
}
If your loop variable is written to by the inner code, the i != 5 test might never fire and the loop might never break. It is safer to use the relational check.
Edit about readability.
The relational form is far more frequently used, so it is very fast to read: there is nothing special to understand (brain load is reduced because the task is common). So it's helpful to readers to make use of these habits.
And last but not least, this is called defensive programming, meaning to always take the strongest case to avoid current and future errors influencing the program.
The only case where defensive programming is not needed is where states have been proven by pre- and post-conditions (but then, proving this is the most defensive of all programming).
I would argue that an expression like
for ( int i = 0 ; i < 100 ; ++i )
{
...
}
is more expressive of intent than is
for ( int i = 0 ; i != 100 ; ++i )
{
...
}
The former clearly calls out that the condition is a test for an exclusive upper bound on a range; the latter is a binary test of an exit condition. And if the body of the loop is non-trivial, it may not be apparent that the index is only modified in the for statement itself.
Iterators are an important case when you most often use the != notation:
for(auto it = vector.begin(); it != vector.end(); ++it) {
// do stuff
}
Granted: in practice I would write the same relying on a range-for:
for(auto & item : vector) {
// do stuff
}
but the point remains: one normally compares iterators using == or !=.
The loop condition is an enforced loop invariant.
Suppose you don't look at the body of the loop:
for (int i = 0; i != 5; ++i)
{
// ?
}
in this case, you know at the start of the loop iteration that i does not equal 5.
for (int i = 0; i < 5; ++i)
{
// ?
}
in this case, you know at the start of the loop iteration that i is less than 5.
The second is much, much more information than the first, no? Now, the programmer intent is (almost certainly) the same, but if you are looking for bugs, having confidence from reading a line of code is a good thing. And the second enforces that invariant, which means some bugs that would bite you in the first case just cannot happen (or don't cause memory corruption, say) in the second case.
You know more about the state of the program, from reading less code, with < than with !=. And on modern CPUs, both comparisons take the same amount of time, so there is no performance difference.
If your i was not manipulated in the loop body, and it was always increased by 1, and it started less than 5, there would be no difference. But in order to know if it was manipulated, you'd have to confirm each of these facts.
Some of these facts are relatively easy to confirm, but you can get them wrong. Checking the entire body of the loop is, however, a pain.
In C++ you can write an indexes type such that:
for( const int i : indexes(0, 5) )
{
// ?
}
does the same thing as either of the two above for loops, even down to the compiler optimizing it down to the same code. Here, however, you know that i cannot be manipulated in the body of the loop, as it is declared const, without the code corrupting memory.
The more information you can get out of a line of code without having to understand the context, the easier it is to track down what is going wrong. < in the case of integer loops gives you more information about the state of the code at that line than != does.
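One possible sketch of such an indexes type (the name and the details are assumed; it is not a standard facility):

struct indexes {
    struct iterator {
        int value;
        int operator*() const { return value; }
        iterator& operator++() { ++value; return *this; }
        bool operator!=(const iterator& other) const { return value != other.value; }
    };
    int first, last;
    indexes(int f, int l) : first(f), last(l) {}
    iterator begin() const { return {first}; }
    iterator end() const { return {last}; }
};

void example() {
    for (const int i : indexes(0, 5)) {
        // i is const here: the body cannot manipulate the loop variable
    }
}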
As already said by Ian Newson, you can't reliably loop over a floating variable and exit with !=. For instance,
for (double x=0; x!=1; x+=0.1) {}
will actually loop forever, because 0.1 can't exactly be represented in floating point, hence the counter narrowly misses 1. With < it terminates.
(Note however that it's essentially implementation-dependent whether you get 0.9999... as the last accepted number, which kind of violates the less-than assumption, or already exit at 1.0000000000000001.)
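The usual workaround is to loop over an exact integer counter and derive the floating-point value from it:

for (int i = 0; i <= 10; ++i) {
    double x = i * 0.1; // the trip count is exact; any rounding stays in x
}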
Yes; OpenMP doesn't parallelize loops with the != condition.
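That is because OpenMP's canonical loop form requires a relational test so the trip count can be computed before the loop runs (later OpenMP versions relaxed this somewhat). A minimal sketch, where n and process() are placeholders:

#pragma omp parallel for
for (int i = 0; i < n; ++i) { // '< n' is accepted; 'i != n' traditionally is not
    process(i);
}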
It may happen that the variable i is set to some large value and if you just use the != operator you will end up in an endless loop.
As you can see from the other numerous answers, there are reasons to use < instead of != which will help in edge cases, initial conditions, unintended loop counter modification, etc...
Honestly though, I don't think you can stress the importance of convention enough. For this example it will be easy enough for other programmers to see what you are trying to do, but it will cause a double-take. One of the jobs while programming is making it as readable and familiar to everyone as possible, so inevitably when someone has to update/change your code, it doesn't take a lot of effort to figure out what you were doing in different code blocks. If I saw someone use !=, I'd assume there was a reason they used it instead of < and if it was a large loop I'd look through the whole thing trying to figure out what you did that made that necessary... and that's wasted time.
I take the adjectival "technical" to mean language behavior/quirks and compiler side effects such as performance of generated code.
To this end, the answer is: no(*). The (*) is "please consult your processor manual". If you are working with some edge-case RISC or FPGA system, you may need to check what instructions are generated and what they cost. But if you're using pretty much any conventional modern architecture, then there is no significant processor level difference in cost between lt, eq, ne and gt.
If you are using an edge case you could find that != requires three operations (cmp, not, beq) vs two (cmp, blt). Again, RTM in that case.
For the most part, the reasons are defensive/hardening, especially when working with pointers or complex loops. Consider
// highly contrived example
size_t count_chars(char c, const char* str, size_t len) {
size_t count = 0;
bool quoted = false;
const char* p = str;
while (p != str + len) {
if (*p == '"') {
quote = !quote;
++p;
}
if (*(p++) == c && !quoted)
++count;
}
return count;
}
A less contrived example would be where you are using return values to perform increments, accepting data from a user:
#include <iostream>

int main() {
    size_t len = 5, step;
    for (size_t i = 0; i != len; ) {
        std::cout << "i = " << i << ", step? " << std::flush;
        std::cin >> step;
        i += step; // here for emphasis, it could go in the for(;;)
    }
}
Try this and input the values 1, 2, 10, 999.
You could prevent this:
#include <iostream>

int main() {
    size_t len = 5, step;
    for (size_t i = 0; i != len; ) {
        std::cout << "i = " << i << ", step? " << std::flush;
        std::cin >> step;
        if (step + i > len)
            std::cout << "too much.\n";
        else
            i += step;
    }
}
But what you probably wanted was
#include <iostream>

int main() {
    size_t len = 5, step;
    for (size_t i = 0; i < len; ) {
        std::cout << "i = " << i << ", step? " << std::flush;
        std::cin >> step;
        i += step;
    }
}
There is also something of a convention bias towards <, because ordering in standard containers often relies on operator<. For instance, the ordered STL containers determine equivalence by saying
if (lhs < rhs)      // T.operator<
    lessthan
else if (rhs < lhs) // T.operator< again
    greaterthan
else
    equal
If lhs and rhs are of a user-defined class, writing this code as
if (lhs < rhs)      // requires T.operator<
    lessthan
else if (lhs > rhs) // requires T.operator>
    greaterthan
else
    equal
requires the implementor to provide two comparison functions. So < has become the favored operator.
There are several ways to write any kind of code (usually), there just happens to be two ways in this case (three if you count <= and >=).
In this case, people prefer > and < to make sure that even if something unexpected happens in the loop (like a bug), it won't loop infinitely (BAD). Consider the following code, for example.
for (int i = 1; i != 3; i++) {
    // More Code
    i = 5; // OOPS! MISTAKE!
    // More Code
}
If we used (i < 3), we would be safe from an infinite loop because it places a stronger restriction.
It's really your choice whether you want a mistake in your program to shut the whole thing down or to keep functioning with the bug in place.
Hope this helped!
The most common reason to use < is convention. More programmers think of loops like this as "while the index is in range" rather than "until the index reaches the end." There's value in sticking to convention when you can.
On the other hand, many answers here are claiming that using the < form helps avoid bugs. I'd argue that in many cases this just helps hide bugs. If the loop index is supposed to reach the end value, and, instead, it actually goes beyond it, then there's something happening you didn't expect which may cause a malfunction (or be a side effect of another bug). The < will likely delay discovery of the bug. The != is more likely to lead to a stall, hang, or even a crash, which will help you spot the bug sooner. The sooner a bug is found, the cheaper it is to fix.
Note that this convention is peculiar to array and vector indexing. When traversing nearly any other type of data structure, you'd use an iterator (or pointer) and check directly for an end value. In those cases you have to be sure the iterator will reach and not overshoot the actual end value.
For example, if you're stepping through a plain C string, it's generally more common to write:
for (char *p = foo; *p != '\0'; ++p) {
    // do something with *p
}
than
int length = strlen(foo);
for (int i = 0; i < length; ++i) {
    // do something with foo[i]
}
For one thing, if the string is very long, the second form will be slower because the strlen is another pass through the string.
With a C++ std::string, you'd use a range-based for loop, a standard algorithm, or iterators, even though the length is readily available. If you're using iterators, the convention is to use != rather than <, as in:
for (auto it = foo.begin(); it != foo.end(); ++it) { ... }
Similarly, iterating a tree or a list or a deque usually involves watching for a null pointer or other sentinel rather than checking if an index remains within a range.
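For example, walking a singly linked list compares against the null sentinel directly (Node and head are hypothetical here):

struct Node { int value; Node* next; };

for (Node* p = head; p != nullptr; p = p->next) {
    // do something with p->value
}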
One reason not to use this construct is floating point numbers. != is a very dangerous comparison to use with floats, as the loop's exit condition (equality) will rarely become true even when the numbers look the same. < or > removes this risk.
There are two related reasons for following this practice that both have to do with the fact that a programming language is, after all, a language that will be read by humans (among others).
(1) A bit of redundancy. In natural language we usually provide more information than is strictly necessary, much like an error correcting code. Here the extra information is that the loop variable i (see how I used redundancy here? If you didn't know what 'loop variable' means, or if you forgot the name of the variable, after reading "loop variable i" you have the full information) is less than 5 during the loop, not just different from 5. Redundancy enhances readability.
(2) Convention. Languages have specific standard ways of expressing certain situations. If you don't follow the established way of saying something, you will still be understood, but the effort for the recipient of your message is greater because certain optimisations won't work. Example:
Don't talk around the hot mash. Just illuminate the difficulty!
The first sentence is a literal translation of a German idiom. The second is a common English idiom with the main words replaced by synonyms. The result is comprehensible but takes a lot longer to understand than this:
Don't beat around the bush. Just explain the problem!
This is true even in case the synonyms used in the first version happen to fit the situation better than the conventional words in the English idiom. Similar forces are in effect when programmers read code. This is also why 5 != i and 5 > i are weird ways of putting it unless you are working in an environment in which it is standard to swap the more normal i != 5 and i < 5 in this way. Such dialect communities do exist, probably because consistency makes it easier to remember to write 5 == i instead of the natural but error prone i == 5.
Using relational comparisons in such cases is more of a popular habit than anything else. It gained its popularity back in the times when such conceptual considerations as iterator categories and their comparability were not considered high priority.
I'd say that one should prefer to use equality comparisons instead of relational comparisons whenever possible, since equality comparisons impose less requirements on the values being compared. Being EqualityComparable is a lesser requirement than being LessThanComparable.
Another example that demonstrates the wider applicability of equality comparison in such contexts is the popular conundrum with implementing unsigned iteration down to 0. It can be done as
for (unsigned i = 42; i != -1; --i)
...
Note that the above is equally applicable to both signed and unsigned iteration, while the relational version breaks down with unsigned types.
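For comparison, the relational version of that countdown never terminates, because an unsigned i always satisfies i >= 0:

for (unsigned i = 42; i >= 0; --i) // always true: i wraps past 0 to UINT_MAX
    ...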
Besides the examples where the loop variable will (unintentionally) change inside the body, there are other reasons to use the less-than or greater-than operators:
- Negations make code harder to understand.
- < or > is only one character, but != is two.
In addition to the various people who have mentioned that it mitigates risk, it also reduces the number of function overloads necessary to interact with various standard library components. As an example, if you want your type to be storable in a std::set, or used as a key for std::map, or used with some of the searching and sorting algorithms, the standard library usually uses std::less to compare objects as most algorithms only need a strict weak ordering. Thus it becomes a good habit to use the < comparisons instead of != comparisons (where it makes sense, of course).
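For example, a type needs only an operator< to be stored in a std::set or used as a std::map key. A minimal sketch (Point is a made-up type):

#include <set>

struct Point {
    int x, y;
    bool operator<(const Point& other) const {
        // lexicographic comparison; a strict weak ordering is all std::set needs
        return x < other.x || (x == other.x && y < other.y);
    }
};

std::set<Point> points; // insertion, lookup, and equivalence all go through '<'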
There is no problem from a syntax perspective, but the logic behind that expression 5!=i is not sound.
In my opinion, using != to set the bounds of a for loop is not logically sound because a for loop either increments or decrements the iteration index, so setting the loop to iterate until the iteration index becomes out of bounds (!= to something) is not a proper implementation.
It will work, but it is prone to misbehavior, since the boundary handling is lost when using != for an incremental problem (where you know from the start whether it increments or decrements); that's why <, >, <=, and >= are used instead of !=.

Shall I cache it?

When having a class:
class X
{
    int data_;
    void f()
    {
        // shall I cache var data_? by doing
        int cached = data_;
        // and instead of this:
        if (data_ >= 0 && data_ < 1000 || data_ < 0 && data_ > -1000) // first version
        {
            // do something
        }
        else
        {
            // do something else
        }
        // have this:
        if (cached >= 0 && cached < 1000 || cached < 0 && cached > -1000) // second version
        // in my opinion this is bad code
        {
            // do something
        }
        else
        {
            // do something else
        }
    }
};
Please see comments in code.
I'm asking this question because my colleague from university states that this kind of code (caching data_ in a local variable like this) is just bad code. I think he's talking rubbish, but I'm very interested in your opinion on this subject.
Thank you.
Unless your data_ variable may change halfway through execution, there is no difference whatsoever between those two code segments.
Don't optimize unless it is already too slow: write the clearest, simplest possible code, measure it, and if it is too slow, use tools to discover (not guess) which part is the slowest. If possible, use a better algorithm before resorting to caching and other minor optimizations. In any case, once you've optimized that part, measure it again, and if it's still too slow, repeat.
That being said, @Zhais is correct. There's no functional difference between the two code snippets.
This kind of optimization is called premature optimization. You should not do optimization like that until it is proven to be the bottleneck.
In the first case the variable is actually accessed as this->data_, but you can't be sure each access re-reads it, because the compiler may cache it itself.
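For completeness, a sketch of the one case where the copy does change behaviour: if data_ can be modified between the two reads (by code invoked inside the branches, or by another thread with proper synchronization), the local copy pins the value.

void f()
{
    const int cached = data_; // data_ is read exactly once here
    if (cached >= 0 && cached < 1000 || cached < 0 && cached > -1000)
    {
        // even if data_ changes in here, 'cached' keeps its original value
    }
}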