I'm trying to use Coverity Scan with an embedded application written in C++17 (ARM GCC Embedded v7.2). The application itself builds cleanly, with no errors or warnings; however, the Coverity Scan Self-Build tool (cov-analysis-linux64-2017.07, the latest) fails to compile certain C++ files with the following error (abridged):
[19888] EXECUTING: /home/pavel/opt/gcc-arm-none-eabi/bin/../lib/gcc/arm-none-eabi/7.2.1/../../../../arm-none-eabi/bin/as -I . -I src -I src/os_config -I eigen -I senoval -I legilimens -I popcop/c++ -I build/current_build_info -mcpu=cortex-m4 -mfloat-abi=hard -mfpu=fpv4-sp-d16 -meabi=5 -alms=build/lst/ch.lst -o build/obj/ch.o /tmp/ccK73Qaa.s
"/home/pavel/opt/gcc-arm-none-eabi/bin/../lib/gcc/arm-none-eabi/7.2.1/../../../../arm-none-eabi/include/c++/7.2.1/bits/c++17_warning.h", line 32:
error #35: #error directive: This file requires compiler and library
support for the ISO C++ 2017 standard. This support must be enabled
with the -std=c++17 or -std=gnu++17 compiler options.
#error This file requires compiler and library support \
^
As can be seen, the build tool did not pass the option -std=c++17 to the assembler. By the way, the application's own build system does not make direct calls to the assembler; the Self-Build tool does it on its own. This is how the Coverity Self-Build process is configured:
cov-configure --comptype gcc --compiler arm-none-eabi-gcc --template
make clean
cov-build --dir build/cov-int make -j8
cd build
tar czvf coverity.tgz cov-int
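For what it's worth, the toolchain itself handles C++17 fine when the compiler driver is invoked directly; a quick sanity check along these lines (purely illustrative, assuming arm-none-eabi-g++ is on the PATH) passes without complaint:
echo 'int main() { return 0; }' | arm-none-eabi-g++ -std=c++17 -x c++ -fsyntax-only -
The failure only appears once the Self-Build tool drives the build.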
How do I configure the Self-Build tool to build C++17 code correctly?
Related
I'm trying to build mongodb (open source version 4.2) which uses python and scons for building. The problem relates to scons rather than mongodb.
My build fails very early with Couldn't find OpenSSL crypto.h header and library. Verbose details are:
file /.../SConstruct,line 3042:
Configure(confdir = build/scons/opt/sconf_temp)
scons: Configure: Checking for SSLeay_version(0) in C library crypto...
build/scons/opt/sconf_temp/conftest_d6743137aeb7fb2674cc9632f9989034_0.c <-
|
|
|#include "openssl/crypto.h"
|
|int
|main() {
| SSLeay_version(0);
|return 0;
|}
|
gcc -o build/scons/opt/sconf_temp/conftest_d6743137aeb7fb2674cc9632f9989034_0.o -c -std=c11 -ffp-contract=off -fno-omit-frame-pointer -fno-strict-aliasing -ggdb -pthread -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -Werror -O2 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-const-variable -Wno-unused-but-set-variable -Wno-missing-braces -Wno-exceptions -fstack-protector-strong -fno-builtin-memcmp -fPIE -DNDEBUG -D_XOPEN_SOURCE=700 -D_GNU_SOURCE build/scons/opt/sconf_temp/conftest_d6743137aeb7fb2674cc9632f9989034_0.c
cc1: error: command-line option '-Wno-exceptions' is valid for C++/ObjC++ but not for C [-Werror]
cc1: all warnings being treated as errors
scons: Configure: no
I'm using Arch Linux, which has multiple OpenSSL packages; the default 3.0 is not compatible with the mongodb source. I also have OpenSSL 1.1 and 1.0 installed and can switch with e.g. gcc -I/usr/include/openssl-1.1.
Unfortunately I have not found a way to instruct SConstruct to use this flag in the command lines it generates to check the environment. I have tried CFLAGS, CCFLAGS, and CPPPATH, both as environment variables and as scons command-line parameters.
I also tried reverse engineering it and tracked this down to Conftest and TryBuild, but it's not obvious how I can influence these from the command line, so I'm trying my luck with you guys before digging deeper into the scons code.
If you look at the SConstruct and search for openssl, you'll find this blurb under the macOS detection logic:
NOTE: Recent versions of macOS no longer ship headers for the system OpenSSL libraries.
NOTE: Either build without the --ssl flag, or describe how to find OpenSSL.
NOTE: Set the include path for the OpenSSL headers with the CPPPATH SCons variable.
NOTE: Set the library path for OpenSSL libraries with the LIBPATH SCons variable.
NOTE: If you are using HomeBrew, and have installed OpenSSL, this might look like:
    scons CPPPATH=/usr/local/opt/openssl/include LIBPATH=/usr/local/opt/openssl/lib ...
NOTE: Consult the output of 'brew info openssl' for details on the correct paths.
I'd bet if you did the same but pointed at the proper locations on your system for the openssl libs and header files, you'd be able to build.
(see: https://github.com/mongodb/mongo/blob/r4.2.0/SConstruct#L3015 )
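Adapted to the Arch setup in the question, that would be something along these lines (the include path comes from the question; the library path is an assumption, adjust it to wherever the openssl-1.1 libraries live on your system):
scons CPPPATH=/usr/include/openssl-1.1 LIBPATH=/usr/lib/openssl-1.1 ...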
I am building liburing, which is a userspace interface for io_uring features.
But when I run ./configure it throws this error:
/tmp/fio-conf--21449-.c:2:10: fatal error: linux/time_types.h: No such file or directory
#include <linux/time_types.h>
^~~~~~~~~~~~~~~~~~~~
compilation terminated.
but I did find it here:
/usr/src/linux-headers-5.3.0-46-generic/include/uapi/linux$ ls time_types.h
time_types.h
My kernel version is:
uname -rs
Linux 5.3.0-46-generic
I'm sure that kernel 5.3 supports io_uring (it has been supported since 5.1),
so how do I update the kernel/libc headers to support this?
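Note that the configure probe only sees the headers installed under /usr/include, not the kernel source tree under /usr/src, which is why it fails even though the file exists there. A quick way to see the difference (assuming a stock Ubuntu header layout):
ls /usr/include/linux/time_types.h    # what gcc actually searches; missing here
ls /usr/src/linux-headers-$(uname -r)/include/uapi/linux/time_types.h    # present, but not on the include path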
First, you have to clone the repo:
git clone https://github.com/axboe/liburing.git
Then you can build:
make -C ./liburing
After that, you can compile your program.c with liburing:
gcc program.c -o ./program -I./liburing/src/include/ -L./liburing/src/ -Wall -O2 -D_GNU_SOURCE -luring
I'm struggling with Caffe compilation; unfortunately, I have so far failed to compile it.
Steps I followed:
git clone https://github.com/BVLC/caffe.git
cd caffe
mkdir build
cd build
cmake ..
make all
Running make all fails with the following error message:
[ 2%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile.dir/util/cuda_compile_generated_im2col.cu.o
In file included from /usr/include/cuda_runtime.h:59:0,
from <command-line>:0:
/usr/include/host_config.h:82:2: error: #error -- unsupported GNU version! gcc 4.9 and up are not supported!
#error -- unsupported GNU version! gcc 4.9 and up are not supported!
^
CMake Error at cuda_compile_generated_im2col.cu.o.cmake:207 (message):
Error generating /mydir/caffe/build/src/caffe/CMakeFiles/cuda_compile.dir/util/./cuda_compile_generated_im2col.cu.o
Software version:
OS: Debian.
gcc version: 5.3.1.
nvcc version: 6.5.12.
cat /proc/driver/nvidia/version result:
NVRM version: NVIDIA UNIX x86_64 Kernel Module 352.63 Sat Nov 7 21:25:42 PST 2015
GCC version: gcc version 4.8.5 (Debian 4.8.5-3)
Attempts to solve the problem
1st try
Simple solutions are often the best ones, so (as suggested here) I tried to comment out the macro that checks the gcc version in /usr/include/host_config.h (line 82). Unfortunately it doesn't work, and compilation fails badly:
1 catastrophic error detected in the compilation of "/tmp/tmpxft_000069c2_00000000-4_im2col.cpp4.ii".
2nd try
I tried to run:
cmake -D CMAKE_CXX_COMPILER=g++-4.8 ..
make
but it fails with exactly the same error message (even though g++-4.8 should be accepted).
3rd try
I've found a similar problem (though not related to Caffe) and tried to solve it as suggested in the accepted answer.
What I did:
I ran the grep -iR "find_package(CUDA" caffe command and found the Cuda.cmake file, which has find_package(CUDA 5.5 QUIET) on line 225.
I added set(CUDA_HOST_COMPILER /usr/bin/gcc-4.8) to Cuda.cmake, on the line before find_package(CUDA 5.5 QUIET).
I removed everything from the build directory and ran cmake and make again, with and without -D CMAKE_CXX_COMPILER=g++-4.8.
Unfortunately the result is exactly the same. Caffe probably overwrites it somehow; I didn't figure out how.
make VERBOSE=1 2>&1 | grep -i compiler-bindir returns nothing.
Interestingly, make VERBOSE=1 prints the command that fails, which is:
/usr/bin/nvcc -M -D__CUDACC__ /mydir/caffe/src/caffe/util/im2col.cu -o /mydir/caffe/build/src/caffe/CMakeFiles/cuda_compile.dir/util/cuda_compile_generated_im2col.cu.o.NVCC-depend -ccbin /usr/bin/cc -m64 -DUSE_LMDB -DUSE_LEVELDB -DUSE_OPENCV -DWITH_PYTHON_LAYER -DGTEST_USE_OWN_TR1_TUPLE -Xcompiler ,\"-fPIC\",\"-Wall\",\"-Wno-sign-compare\",\"-Wno-uninitialized\",\"-O3\",\"-DNDEBUG\" -gencode arch=compute_20,code=sm_21 -Xcudafe --diag_suppress=cc_clobber_ignored -Xcudafe --diag_suppress=integer_sign_change -Xcudafe --diag_suppress=useless_using_declaration -Xcudafe --diag_suppress=set_but_not_used -Xcompiler -fPIC -DNVCC -I/usr/include -I/mydir/caffe/src -I/usr/include -I/mydir/caffe/build/include -I/usr/include/hdf5/serial -I/usr/include/opencv -I/usr/include/atlas -I/usr/include/python2.7 -I/usr/lib/python2.7/dist-packages/numpy/core/include -I/mydir/caffe/include -I/mydir/caffe/build
When I add the --compiler-bindir /usr/bin/gcc-4.8 flag manually, it prints an error:
nvcc fatal : redefinition of argument 'compiler-bindir'
which may be related to this bug report.
Edit: I didn't notice that --compiler-bindir and -ccbin are the same option, and the latter is already set in the failing command above. When I change -ccbin /usr/bin/cc to -ccbin /usr/bin/gcc-4.8 in that command, it completes successfully. Now I need to find the option in Caffe's CMake files that overrides -ccbin in all subsequent runs. Looking at cmake/Cuda.cmake:252: list(APPEND CUDA_NVCC_FLAGS ${NVCC_FLAGS_EXTRA} ...) seems to be a good way to go.
How can I successfully complete my compilation? Any help is appreciated.
Related SO questions:
host_config.h:unsupported GNU version! gcc versions later than 4.9 are not supported.
CUDA 6.5 complains about not supporting gcc 4.9 - what to do?.
Running cmake -D CUDA_NVCC_FLAGS="-ccbin gcc-4.8" .. && make results in a successful compilation.
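Spelled out, the working sequence was roughly the following (starting from an emptied build directory, as in the third try; the -j8 is just illustrative):
cd caffe/build
rm -rf ./*    # clean the build tree so no stale cached settings linger
cmake -D CUDA_NVCC_FLAGS="-ccbin gcc-4.8" ..
make -j8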
Now another problem has shown up: linking Google's libgflags or libprotobuf fails, probably because they were compiled with a newer gcc version, but that is not related to the question asked here.
My machine runs Ubuntu 15.10, and my default compiler version is gcc 5.2.1.
Commenting out the #error directive in line 115 of file
/usr/local/cuda-7.5/include/host_config.h
(or whatever the path on your system is) did the trick for me. Caffe compiled fine, all tests ran smoothly.
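If you prefer to script that edit, something along these lines would do it (just a sketch: it keeps a .bak copy, and you should first check that the directive really sits on that single line in your copy of the header):
sudo sed -i.bak '115s|^|//|' /usr/local/cuda-7.5/include/host_config.h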
On the other hand, if one chooses to ignore this and proceeds to compile part of the project with one compiler version and part with another (for me it was gcc-4.8 and gcc-5.2.1), linking problems will arise. The protobuf and libgflags linking problems that another answer mentions are not unrelated to this.
I have a GNUmakefile that respects CXX and CXXFLAGS. It also performs some platform and architecture tests. Currently, the makefile assumes the host and target are the same:
IS_X86 = $(shell uname -m | $(EGREP) -c "i.86|x86|i86|amd64")
In an effort to improve robustness, I want to ask the tools what they are compiling for. I've come up with the following, but I'm not sure it is correct.
$ export CXX=clang++
$ export CXXFLAGS="-DNDEBUG -g2 -O3 -m32"
$ $CXX $CXXFLAGS -dM -E - < /dev/null | egrep "(i386|x86_64)"
#define __i386 1
#define __i386__ 1
#define i386 1
$ export CXX=clang++
$ export CXXFLAGS="-DNDEBUG -g2 -O3"
$ $CXX $CXXFLAGS -dM -E - < /dev/null | egrep "(i386|x86_64)"
#define __x86_64 1
#define __x86_64__ 1
My question is, will the above - with CXX and CXXFLAGS - work reliably to detect a target? Or do I need something else?
Here are the two reasons I ask. First, my experience with Autotools suggests otherwise: when Autotools performs a test like the one above, it tests CPP, and sometimes CPP or CXX needs to include --isysroot (or other hacks) to get things configured properly.
Second, some toolchains, like Clang, integrate other components (like a preprocessor or assembler), so I can't use CPP directly under all circumstances.
In fact, doing something as simple as $CXX -Wa,-v - </dev/null (ask assembler for its version) results in an "unsupported option" error under Clang when using its integrated assembler. (Cf., With integrated assembler enabled, fail to fetch version string of assembler).
And just in case: this is not an Autotools or CMake project. It does not use Boost or any other libraries. It's a standalone C++03 project.
My question is, will the above - with CXX and CXXFLAGS - work reliably to detect a target?
The answer is yes, it will. The preprocessor or compiler driver (which passes the options down to the preprocessor) will mostly yield the expected target defines, all else being equal. A notable exception is GCC on ARMv8/AArch64, which is missing a slew of expected defines.
The thing to avoid is uname -m (and friends): uname reports information about the host, not the target.
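As a sketch, the probe could replace the uname test in the GNUmakefile along these lines (variable names here are illustrative, not from the original makefile):
# Ask the configured compiler (not uname) which target it is generating code for.
TARGET_DEFS := $(shell $(CXX) $(CXXFLAGS) -dM -E - < /dev/null)
IS_X86 := $(if $(findstring __i386__,$(TARGET_DEFS)),1,0)
IS_X86_64 := $(if $(findstring __x86_64__,$(TARGET_DEFS)),1,0)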
I tried building Google Test with the following CMake configuration:
$ CMAKE_CXX_COMPILER="clang++" CMAKE_CXX_FLAGS="-std=c++11 -stdlib=libc++ -U__STRICT_ANSI__" cmake ../source
Building shows CMake picked the right compiler, but my compiler flags aren't passing through:
$ VERBOSE=1 make
...
/Users/jfreeman/local/bin/clang++ -I/Users/jfreeman/work/googletest/source/include -I/Users/jfreeman/work/googletest/source -DGTEST_HAS_PTHREAD=1 -o CMakeFiles/gtest.dir/src/gtest-all.cc.o -c /Users/jfreeman/work/googletest/source/src/gtest-all.cc
...
/Users/jfreeman/local/bin/clang++ -I/Users/jfreeman/work/googletest/source/include -I/Users/jfreeman/work/googletest/source -DGTEST_HAS_PTHREAD=1 -o CMakeFiles/gtest_main.dir/src/gtest_main.cc.o -c /Users/jfreeman/work/googletest/source/src/gtest_main.cc
The end goal is that I want my project, which builds with Clang and libc++, to have tests built with Google Test. That means I need Google Test built with libc++ as well.
Using variables on the command line with CMake requires the -D (define) flag; setting them as plain environment variables, as in the question, does not work here because CMake does not read CMAKE_CXX_COMPILER or CMAKE_CXX_FLAGS from the environment.
$ cmake -DCMAKE_CXX_COMPILER="clang++" -DCMAKE_CXX_FLAGS="-std=c++11 -stdlib=libc++ -U__STRICT_ANSI__" ../source
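If cmake has already been run once in that build directory, it can also be worth wiping the cached configuration and then confirming that the flags show up in the verbose build (both steps are just suggestions):
rm -rf CMakeCache.txt CMakeFiles/    # drop the cache left by the earlier configure
cmake -DCMAKE_CXX_COMPILER="clang++" -DCMAKE_CXX_FLAGS="-std=c++11 -stdlib=libc++ -U__STRICT_ANSI__" ../source
VERBOSE=1 make    # -std=c++11 and -stdlib=libc++ should now appear on each compile line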