I have this piece of code summing std::valarray<int>'s:
#include <iostream>
#include <valarray>
#include <vector>

int main()
{
    std::vector<std::valarray<int>> vectorOfValarrays{{1, 1}, {2, 2}, {3, 3}};
    std::valarray<int> sumOfValarrays(2);
    for (const auto& i : vectorOfValarrays)
        sumOfValarrays = sumOfValarrays + i;
    std::cout << sumOfValarrays[0] << ' ' << sumOfValarrays[1];
}
Compiling with x86-64 gcc 12.2 using -O0 and -O1, it prints the expected result:
6 6
But when compiling with -O2 and -O3, it prints:
3 3
What could be the reason for this? Is my code undefined behaviour or is this a gcc bug?
I'm pretty sure this is a gcc bug. Clang gives the correct behaviour at all optimization levels (-O0, -O1, -O2, -O3). I also had a look at the std::valarray constructors, operator+ and operator=, and it seems like my code doesn't have any undefined behaviour.
I found this bug report on gcc Bugzilla, and the problem seems to be that gcc has a wrong implementation for copying std::valarray, so the line sumOfValarrays = sumOfValarrays + i; in the question trips gcc up.
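Until a fixed gcc is available, a possible workaround (a sketch on my part, assuming the miscompilation is tied to assigning the result of operator+ back into the same valarray) is to accumulate with the compound assignment instead:

#include <valarray>
#include <vector>

// Hypothetical helper: sums valarrays in place with += rather than
// building an expression with operator+ and copying it back.
std::valarray<int> sum_valarrays(const std::vector<std::valarray<int>>& vecs)
{
    std::valarray<int> sum(2);   // two elements, as in the question
    for (const auto& v : vecs)
        sum += v;                // element-wise compound assignment
    return sum;
}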
After updating to GCC 12.1, I got an array subscript ‘__m256d_u[0]’ is partly outside array bounds error (or rather a warning, with -Werror) in my project, so I tried isolating the problem.
Here's an MWE, which I also put on godbolt (vector type is __m512d_u instead, but otherwise it's the same error):
#include <Eigen/Dense>
#include <iostream>

using Eigen::Array;

Array<double, 3, 2> foo(){
    Array<double, 2, 2> a;
    a.setRandom();
    Array<double, 3, 2> b;
    b.col(0).tail(2) = a.col(1);
    // b.col(0).template tail<2>() = a.col(1);
    return b;
}

int main(){
    std::cout << foo() << '\n';
    return 0;
}
Relevant compile options are -Wall -Wextra -Werror -O3 -march=native, and the error message includes the note at offset [16, 24] into object ‘a’ of size 32.
The error does not occur under the following circumstances:
on GCC 11.3 or older,
when removing -march=native,
when using -O1 or below,
when replacing the line b.col(0).tail(2) = a.col(1); with b.col(0).template tail<2>() = a.col(1);
So it looks like GCC sees the 3x2 array and the 2x2 array, and doesn't realise that only two entries of each are accessed.
My question now is: Who should this be reported to? GCC, Eigen? Or is it a user bug?
Bonus points for telling me what the 24 in the error note (offset [16, 24]) is. The 16 is the start; is the 24 the read size?
EDIT: Example can be further simplified by using Array3d and Array2d, see here.
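For reference, my guess at what that simplified reproducer looks like (an assumption, mirroring the MWE above with the fixed-size convenience typedefs):

#include <Eigen/Dense>

// Hypothetical simplified MWE: same pattern as foo() above, but with
// 3x1 and 2x1 arrays instead of 3x2 and 2x2 ones.
Eigen::Array3d bar() {
    Eigen::Array2d a;
    a.setRandom();
    Eigen::Array3d b;
    b.tail(2) = a;   // dynamic-size tail; presumably triggers the same warning
    return b;
}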
I have this snippet:
#include <algorithm>
#include <vector>

int main() {
    std::vector<int> v1 = {1, 2, 3};
    std::vector<int> v2 = {4, 5, 6};
    return std::ranges::equal(v1, v2);
}
I compile it with GCC 10 (Debian stable) and everything's alright:
$ g++ -std=c++20 test.cpp -o test
<compiles fine>
I compile it with Clang 14 and libc++14 (Debian stable, installed from packages from apt.llvm.org):
$ clang++-14 -std=c++20 -stdlib=libc++ test.cpp -o test
test.cpp:8:25: error: no member named 'equal' in namespace 'std::ranges'
return std::ranges::equal(v1, v2);
~~~~~~~~~~~~~^
1 error generated.
Same for a lot of other things. Is libc++ support for the ranges library really so far behind, or am I missing something?
You can find an exhaustive table of implementations' feature support here: https://en.cppreference.com/w/cpp/compiler_support
For C++20's "The One Ranges Proposal", which std::ranges::equal is part of, the table says "13 (partial)".
There is another overview for clang here: https://clang.llvm.org/cxx_status.html#cxx20. Though it only lists language features.
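If you have to build against a mix of standard libraries in the meantime, one option (a sketch using the standard feature-test macro, nothing specific to either site above) is to gate the call on __cpp_lib_ranges:

#include <version>      // standard feature-test macros
#include <algorithm>
#include <vector>

bool same(const std::vector<int>& a, const std::vector<int>& b) {
#if defined(__cpp_lib_ranges) && __cpp_lib_ranges >= 201911L
    return std::ranges::equal(a, b);    // constrained ranges algorithm
#else
    return std::equal(a.begin(), a.end(), b.begin(), b.end());  // pre-ranges fallback
#endif
}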
I have the following C++ code:
#include <memory>
#include <vector>
#include <string>
#include <unordered_map>

void erase_from_vector(std::vector<std::weak_ptr<int>> &mvec) {
    for (auto mvec_it = mvec.begin(); mvec_it != mvec.end(); )
        mvec_it = mvec.erase(mvec_it);
}

int main(void) {
#if 0
    std::vector<std::weak_ptr<int>> mvec;
    for (auto mvec_it = mvec.begin(); mvec_it != mvec.end(); )
        mvec_it = mvec.erase(mvec_it);
#endif
}
GCC generates a warning when I compile it this way:
ppk#fif-cloud-dev:~$ g++ --version
g++ (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
ppk#fif-cloud-dev:~$ g++ -fstrict-overflow -Wstrict-overflow=5 -O2 -std=c++14 warn1.cc
warn1.cc: In function ‘void erase_from_vector(std::vector<std::weak_ptr<int> >&)’:
warn1.cc:6:6: warning: assuming signed overflow does not occur when changing X +- C1 cmp C2 to X cmp C2 -+ C1 [-Wstrict-overflow]
void erase_from_vector(std::vector<std::weak_ptr<int>> &mvec) {
^
But when I change the -O2 flag to -O1, it compiles without any warnings. When I keep -O2 and uncomment the code in main(), it also compiles without any warnings. The Clang compiler also does not report any warnings.
I suppose this warning comes from the std::weak_ptr destructor, where a counter is decremented, but I have no idea why it appears for my code.
Is the warning caused by an error of mine or an error in the compiler?
Most likely a quirk of gcc 5.4. It's gone as soon as you get to gcc 6.1, and I don't see it reappear in any later version.
gcc 5.4 (warnings)
gcc 6.1 (no warnings)
It's especially damning that Clang doesn't reproduce the behavior.
It should be noted that such behavior isn't exactly a bug, according to the docs (emphasis mine):
An optimization that assumes that signed overflow does not occur is perfectly safe if the values of the variables involved are such that overflow never does, in fact, occur. Therefore this warning can easily give a false positive: a warning about code that is not actually a problem.
That you're using -Wstrict-overflow=5 makes it even more likely, as this is the highest warning level that comes with its own disclaimer:
this warning level gives a very large number of false positives
My suggestion is to either upgrade your compiler or accept that gcc 5.4 is going to give you a false positive here.
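If you do stay on gcc 5.4 and want to keep -Werror, you can also try silencing the warning locally with GCC's diagnostic pragmas (a sketch; note that pragmas are not always honoured for warnings emitted from optimization passes on a compiler this old):

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wstrict-overflow"
void erase_from_vector(std::vector<std::weak_ptr<int>> &mvec) {
    for (auto mvec_it = mvec.begin(); mvec_it != mvec.end(); )
        mvec_it = mvec.erase(mvec_it);
}
#pragma GCC diagnostic pop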
This is a simple C++ program using valarrays:
#include <iostream>
#include <valarray>

int main() {
    using ratios_t = std::valarray<float>;

    ratios_t a{0.5, 1, 2};
    const auto& res ( ratios_t::value_type(256) / a );

    for (const auto& r : ratios_t{res})
        std::cout << r << " " << std::endl;

    return 0;
}
If I compile and run it like this:
g++ -O0 main.cpp && ./a.out
The output is as expected:
512 256 128
However, if I compile and run it like this:
g++ -O3 main.cpp && ./a.out
The output is:
0 0 0
The same happens if I use the -O1 optimization parameter.
GCC version is (latest in Arch Linux):
$ g++ --version
g++ (GCC) 6.1.1 20160707
However, if I try with clang, both
clang++ -std=gnu++14 -O0 main.cpp && ./a.out
and
clang++ -std=gnu++14 -O3 main.cpp && ./a.out
produce the same correct result:
512 256 128
Clang version is:
$ clang++ --version
clang version 3.8.0 (tags/RELEASE_380/final)
I've also tried with GCC 4.9.2 on Debian, where the executable produces the correct result.
Is this a possible bug in GCC or am I doing something wrong? Can anyone reproduce this?
EDIT: I managed to reproduce the issue on the Homebrew version of GCC 6 on Mac OS as well.
valarray and auto do not mix well.
This creates a temporary object, then applies operator/ to it:
const auto& res ( ratios_t::value_type(256) / a );
The libstdc++ valarray uses expression templates, so operator/ returns a lightweight object that refers to the original arguments and evaluates them lazily. Using const auto& causes the expression template to be bound to the reference, but it doesn't extend the lifetime of the temporary that the expression template refers to. By the time the evaluation happens, that temporary has gone out of scope and its memory has been reused.
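To make the mechanism concrete, here is a toy stand-in for the pattern (my own simplified sketch, not libstdc++'s actual classes):

#include <cstddef>
#include <iostream>

struct Vec { float data[3]; };

// Toy expression template: stores references to its operands and only
// performs the division lazily, when an element is requested.
struct DivExpr {
    const float& lhs;   // dangles if it was bound to a temporary
    const Vec&   rhs;
    float operator[](std::size_t i) const { return lhs / rhs.data[i]; }
};

DivExpr operator/(const float& lhs, const Vec& rhs) { return {lhs, rhs}; }

int main() {
    Vec a{{0.5f, 1.0f, 2.0f}};
    // The float temporary dies at the end of this statement; only the
    // DivExpr itself has its lifetime extended by the reference.
    const auto& res = 256.0f / a;
    std::cout << res[0] << '\n';   // evaluates through a dangling reference: UB
}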
It will work fine if you do:
ratios_t res = ratios_t::value_type(256) / a;
Update: as of today, GCC trunk will give the expected result for this example. I've modified our valarray expression templates to be a bit less error-prone, so that it's harder (but still not impossible) to create dangling references. The new implementation should be included in GCC 9 next year.
It's the result of a careless implementation of operator/(const T& val, const std::valarray<T>& rhs) (and most probably other operators over valarrays) using lazy evaluation:
#include <iostream>
#include <valarray>

int main() {
    using ratios_t = std::valarray<float>;

    ratios_t a{0.5, 1, 2};
    float x = 256;
    const auto& res ( x / a );
    // x = 512; // <-- uncommenting this line affects the output

    for (const auto& r : ratios_t{res})
        std::cout << r << " ";

    return 0;
}
With the "x = 512" line commented out, the output is
512 256 128
Uncomment that line and the output changes to
1024 512 256
Since in your example the left-hand side argument of the division operator is a temporary, the result is undefined.
UPDATE
As Jonathan Wakely correctly pointed out, the lazy-evaluation based implementation becomes a problem in this example due to the usage of auto.
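Put differently, spelling out the concrete type instead of auto forces the expression to be evaluated into a real valarray while its operands are still alive, which is the same fix as in the answer above:

const ratios_t res = x / a;   // the values are computed and copied out here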
The code:
#include <vector>

int main()
{
    std::vector<int> v1 = {12, 34};
    std::vector<int> v2 = {56, 78};

    // Doesn't work.
    v1.push_back(v2[0]);

    // Works.
    int i = v2[0];
    v1.push_back(i);

    return 0;
}
For some reason, the first push_back doesn't work, while the second does. Eclipse gives the following error for that line:
Invalid arguments ' Candidates are: void push_back(const int &) void push_back(int &&) '
Could someone explain what is happening there? Thanks!
EDIT:
The code actually compiles fine. For some reason, Eclipse doesn't agree that this is valid code.
If I compile the code with g++ 4.7.3 with
g++ test.cpp --std=c++0x
It compiles correctly, and if I try to print v1[2], I get the correct result:
std::cout << v1[2]; // 56
The Eclipse code analyzer tool (CODAN) may just not be right in this situation.
Rely on the output of a C++ (in this case C++11-compatible) compiler.