A simple program that uses a std::vector of Intel intrinsic types triggers an odd warning. I am using g++ 7.2. There is a GCC bug report for this that is marked as fixed, but the warning still persists. I am not sure what the implications of ignoring the warning are, since Intel intrinsics require strictly aligned data.
avx.cpp: In function ‘int main()’:
avx.cpp:7:27: warning: ignoring attributes on template argument ‘__m256i* {aka __vector(4) long long int*}’ [-Wignored-attributes]
std::vector< __m256i* > v;
^
#include <vector>
#include <immintrin.h>
#include <iostream>
int main()
{
    std::vector< __m256i* > v;
    return 0;
}
This question already has answers here:
gcc size_t and sizeof arithmetic conversion to int
(2 answers)
Closed 2 years ago.
I have this code:
#include <cstdint>
#include <deque>
#include <iostream>
int main()
{
    std::deque<uint8_t> receivedBytes;
    int nbExpectedBytes = 1;
    if (receivedBytes.size() >= static_cast<size_t>(nbExpectedBytes))
    {
        std::cout << "here" << std::endl;
    }
    return 0;
}
With -Wsign-conversion, this compiles without warning on my linux laptop, but on the embedded linux on which it's meant to run I get the following warning:
temp.cpp: In function ‘int main()’:
temp.cpp:10:33: warning: conversion to ‘std::deque<unsigned char>::size_type {aka long unsigned int}’ from ‘int’ may change the sign of the result [-Wsign-conversion]
if (receivedBytes.size() >= static_cast<size_t>(nbExpectedBytes))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I just don't understand:
I have -Wsign-conversion enabled both on my linux laptop and on the embedded linux, so why do I only get the warning on the embedded linux?
I'm explicitly casting from int to size_t (which should not produce a warning, because the cast is explicit), then comparing a size_t to a std::deque<unsigned char>::size_type, so where is the implicit conversion from signed to unsigned that triggers the warning?
I can't help but think the compiler on the embedded linux is wrong here. Am I missing something?
Edit: On my linux laptop I'm using g++ version 9.3.0, while on the embedded linux I'm using g++ version 6.3.0 (probably not the usual binary since it's an ARM64 architecture)
This is undoubtedly a bug in the embedded compiler. Separating the static_cast from the >= comparison removes the warning, as can be seen by testing the following code on Compiler Explorer with ARM64 gcc 6.3.0 (linux) selected:
#include <deque>
#include <cstddef>
#include <cstdint>
int main()
{
    std::deque<uint8_t> receivedBytes;
    int nbExpectedBytes = 1;

    // Warning generated ...
    while (receivedBytes.size() >= static_cast<size_t>(nbExpectedBytes))
    {
        break;
    }

    // Warning NOT generated ...
    size_t blob = static_cast<size_t>(nbExpectedBytes);
    while (receivedBytes.size() >= blob)
    {
        break;
    }
    return 0;
}
Further, the warning also disappears when changing to the (32-bit) ARM gcc 6.3.0 (linux) compiler.
The following code (reduced from a larger, more sensible sample):
#include <vector>

void shrink(std::vector<int>& v) {
    while (v.size() > 0) {
        v.resize(v.size() - 1);
    }
}
Leads gcc 7.3 to emit this warning (godbolt):
In function 'void shrink(std::vector<int>&)':
cc1plus: warning: 'void* __builtin_memset(void*, int, long unsigned int)':
specified size 18446744073709551612 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
I have been staring at this code for close to an hour with a colleague, and it just seems correct to me; what is gcc complaining about?
it just seems correct to me
The example is correct.
what is gcc complaining about?
This is a compiler bug. Here is the bugzilla. The bug appears to be fixed in GCC 8.
If I have the following code:
#include <boost/multiprecision/cpp_int.hpp>
using namespace boost::multiprecision
int main()
{
int128_t a = Func_a()
int128_t b = Func_b()
std::cout << std::max(a, b) << std::endl;
return 0;
}
And if I compile using g++ on Ubuntu, I get the following error:
error: cannot convert ‘const boost::multiprecision::number >’ to ‘int64 {aka long long int}’ in assignment
What is the proper way to compare two int128_t numbers to see which one is greater?
EDIT: I am using std::max.
Your code (except for missing semicolons) compiles and runs without error.
However, judging by your compiler message, I suspect that in
int128_t a = Func_a(); // are you really sure it is int128_t?
the left-hand side is not a boost::multiprecision::int128_t, since the compiler says it is an int64.
I am able to compile the following code, which requires -std=c++11, with g++ using this command:
g++ test.cpp -std=c++11 -Wl,-rpath,/share/apps/gcc/6.3.0/lib64
the code:
#include <chrono>
#include <map>
#include <memory>
#include <thread>
#include <utility>
int main() {
    typedef std::unique_ptr<int> intPointer;
    intPointer p(new int(10));
    std::map<int, std::unique_ptr<int>> m;
    m.insert(std::make_pair(5, std::move(p)));
    auto start = std::chrono::system_clock::now();
    if (std::chrono::system_clock::now() - start < std::chrono::seconds(2))
    {
        std::thread t;
    }
}
The very same command (though perhaps I don't know the correct one) won't work with the Intel compiler:
icpc test.cpp -std=c++11 -Wl,-rpath,/share/apps/intel/2016.1.056/vtune_amplifier_xe_2016.1.1.434111/target/linux64/lib64
The error is:
In file included from /share/apps/gcc/6.3.0/include/c++/6.3.0/map(60),
from test.cpp(2):
/share/apps/gcc/6.3.0/include/c++/6.3.0/bits/stl_tree.h(1437):
error: identifier "_Compare" is undefined
&& is_nothrow_move_assignable<_Compare>::value)
^
In file included from /share/apps/gcc/6.3.0/include/c++/6.3.0/map(60),
from test.cpp(2):
/share/apps/gcc/6.3.0/include/c++/6.3.0/bits/stl_tree.h(1778):
error: identifier "_Compare" is undefined
_GLIBCXX_NOEXCEPT_IF(__is_nothrow_swappable<_Compare>::value)
^
What am I doing wrong, and how should I fix this?
From the comments above:
_Compare is one of the template parameters of the _Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc> defined in that file, so surely it is defined.
A more likely reason is that the Intel compiler doesn't know about is_nothrow_move_assignable, which was added to <type_traits> relatively recently.
I had a user with a similar C++ difficulty using the gcc 6.3.0 headers and the Intel 16.1.150 compiler. I opted for the compiler-bug theory, tried the Intel 17 compiler, and things worked.
-doug
Well, I checked with Intel support, and it turned out that Intel 16 does not support gcc 6.3; the support comes in Intel 17.
To get the precision and scale of a number, I am using this simple program. But converting the number to a string gives a compilation error.
g++ precision.cpp
precision.cpp: In function ‘int main()’:
precision.cpp:6: error: ‘to_string’ was not declared in this scope
When I compile with the -std=c++0x switch I get
g++ precision.cpp -std=c++0x
precision.cpp: In function ‘int main()’:
precision.cpp:6: error: call of overloaded ‘to_string(int)’ is ambiguous
/usr/lib/gcc/i686-redhat-linux/4.4.4/../../../../include/c++/4.4.4/bits/basic_string.h:2604: note: candidates are: std::string std::to_string(long long int)
/usr/lib/gcc/i686-redhat-linux/4.4.4/../../../../include/c++/4.4.4/bits/basic_string.h:2610: note: std::string std::to_string(long long unsigned int)
/usr/lib/gcc/i686-redhat-linux/4.4.4/../../../../include/c++/4.4.4/bits/basic_string.h:2616: note: std::string std::to_string(long double)
The source code looks like this:
#include <iostream>
#include <string>
using namespace std;
int main()
{
    string value = to_string(static_cast<int>(1234));
    int precision = value.length();
    int scale = value.length()-value.find('.')-1;
    cout << precision << " " << scale;
    return 0;
}
What is causing this error?
The first error is because std::to_string is a C++11 feature, and GCC by default compiles in C++03 mode.
The second error, when you are using the correct flag, is probably because support for C++11 in GCC 4.4 (which you seem to be using) is quite minimal. As you can see from the error messages, the compiler shows you the alternatives it has.
By the way, you don't need to cast integer literals to int; they are of type int by default. You might want to cast to long double though, as that's one of the valid overloads, and you seem to want to find the decimal point (the code will not work as expected if there is no decimal point in the string, such as when converting an integer).
I recommend using boost::lexical_cast instead.