The Questions
I have a few questions about using Intel Pin with C++14 or other C++ versions.
There are rarely any problems compiling older C++ code with a newer standard version, but since Intel Pin manipulates code at the instruction level, are there any undesirable side effects that might come from compiling it with C++11 or C++14?
If it's OK to compile with C++11 or C++14, how do I write a make rule that enables a newer C++ standard for my tool only?
How do I set the default C++ version for GCC/G++ to the latest, if that is possible, and what should I keep in mind when doing so?
Situation
I'm building a dynamic call graph Pin tool. To make the graph understandable, I'm computing the depth of the call stack. For safety, I decided to wrap the code that increments or decrements the depth in a std::mutex. That brought me to the problem that std::mutex is only available since C++11, which is not Intel Pin's default on my machine.
$ g++ -v
[...]
gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.2)
Compile command:
$ make obj-intel64/callgraph.so
[...]
error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
#error This file requires compiler and library support
[...]
EDIT
I managed to write a build rule that sets the standard to C++11, but it breaks. The command make passed to g++ was:
g++ -DBIGARRAY_MULTIPLIER=1 -Wall -Werror -Wno-unknown-pragmas -D__PIN__=1
-DPIN_CRT=1 -fno-stack-protector -fno-exceptions -funwind-tables
-fasynchronous-unwind-tables -fno-rtti -DTARGET_IA32E -DHOST_IA32E -fPIC
-DTARGET_LINUX -fabi-version=2 -I/home/gabriel/Downloads/pin-3.0-76991-gcc-linux/source/include/pin
-I/home/gabriel/Downloads/pin-3.0-76991-gcc-linux/source/include/pin/gen
-isystem /home/gabriel/Downloads/pin-3.0-76991-gcc-linux/extras/stlport/include
-isystem /home/gabriel/Downloads/pin-3.0-76991-gcc-linux/extras/libstdc++/include
-isystem /home/gabriel/Downloads/pin-3.0-76991-gcc-linux/extras/crt/include
-isystem /home/gabriel/Downloads/pin-3.0-76991-gcc-linux/extras/crt/include/arch-x86_64
-isystem /home/gabriel/Downloads/pin-3.0-76991-gcc-linux/extras/crt/include/kernel/uapi
-isystem /home/gabriel/Downloads/pin-3.0-76991-gcc-linux/extras/crt/include/kernel/uapi/asm-x86
-I/home/gabriel/Downloads/pin-3.0-76991-gcc-linux/extras/components/include
-I/home/gabriel/Downloads/pin-3.0-76991-gcc-linux/extras/xed-intel64/include
-I/home/gabriel/Downloads/pin-3.0-76991-gcc-linux/source/tools/InstLib -O3
-fomit-frame-pointer -fno-strict-aliasing -std=c++11
-c -o obj-intel64/callgraph.o callgraph.cpp
This doesn't compile. Instead, it produces a huge error log from inside the STL headers. It appears that Pin ships its own subset of the STL, which conflicts with C++11 and C++14. I've uploaded a paste of the g++ output. It filled 2331 lines, but I noticed something strange in the directories it visits: STL headers are included from 2 different places:
/usr/include/c++/5/
/home/gabriel/Downloads/pin-3.0-76991-gcc-linux/extras/stlport/include/
Solving the errors one by one is infeasible, and deleting Pin's STLport is probably an even worse idea. If it's possible to use Pin with newer C++, I'd say a simple -std=c++1y is not the way.
From the compiler options used to compile the Pin tool, I presume you are using the latest version of Pin, namely 3.0. According to Intel, the CRT that ships with the framework doesn't support C++11 and later versions of the language. In particular, you will not be able to use any of the APIs introduced in C++11, including std::mutex. If it's critical for you to use C++11 APIs, then you should use the previous version of Pin, namely 2.14, which doesn't ship with a CRT and uses your compiler's CRT.
However, if all you want is a mutex, you can use the OS-portable mutex that ships with Pin 3.0. For more information, refer to the documentation.
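If it helps, here is a minimal, untested sketch of what that could look like for the call-depth counter from the question. The PIN_MUTEX type and the PIN_MutexInit/PIN_MutexLock/PIN_MutexUnlock calls are taken from the Pin synchronization API as I remember it, and the routine and variable names (OnCall, OnReturn, depthMutex, callDepth) are made up for illustration; check everything against the documentation of your Pin version:

#include "pin.H"

// Hypothetical names: depthMutex and callDepth are not from the question.
static PIN_MUTEX depthMutex;
static UINT64 callDepth = 0;

// Analysis routine intended to run before every call instruction.
static VOID OnCall()
{
    PIN_MutexLock(&depthMutex);
    ++callDepth;
    PIN_MutexUnlock(&depthMutex);
}

// Analysis routine intended to run before every return instruction.
static VOID OnReturn()
{
    PIN_MutexLock(&depthMutex);
    --callDepth;
    PIN_MutexUnlock(&depthMutex);
}

int main(int argc, char* argv[])
{
    if (PIN_Init(argc, argv)) return 1;
    PIN_MutexInit(&depthMutex);
    // ... register instrumentation that inserts OnCall/OnReturn ...
    PIN_StartProgram();   // never returns
    return 0;
}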
When using Pin 3.0 you are not allowed to use any header file or object file of your compiler (those from /usr/include/c++/5/). You can only use PinCRT and a few system header files.
Related
I recently updated my Linux laptop from Ubuntu 16.04 to 18.04.
I had an STM32 (Cortex-M4) Makefile-based project that compiled correctly with the arm-none-eabi-g++ version provided by Ubuntu. The generated binary required 47620 bytes in the .text section.
With the Ubuntu upgrade, I also installed an up-to-date version of GCC (from the ARM website). The version is 8.2.1.
When I compile the same project (make clean && make), the generated binary does not fit in flash (97424 bytes required, more than twice as much!). The project is exactly the same (sources, link script, startup files, Makefile).
The compiler options are: -mthumb -mcpu=cortex-m4 -mfloat-abi=hard -mfpu=fpv4-sp-d16 -DSTM32F303x8 -DARMCM4 -O0 -g -Wall -fexceptions -Wno-deprecated.
The linker options are -mthumb -mcpu=cortex-m4 -Tstm32f303K8.ld -mfloat-abi=hard -mfpu=fpv4-sp-d16 --specs=nosys.specs -lm -Wl,--start-group -lm -Wl,--end-group -Wl,--gc-sections -Lsys -Xlinker -Map=test.elf.map
When I look at the generated .map file, all the user functions take approximately the same size (the new version even saves 8 bytes!). But then it includes C++-specific parts, and one of them is more than 26 KB (from the map file):
.text 0x00000000080079e8 0x683c /usr/local/gcc-arm-none-eabi-8-2018-q4-major/bin/../lib/gcc/arm-none-eabi/8.2.1/../../../../arm-none-eabi/lib/thumb/v7e-m+fp/hard/libstdc++.a(cp-demangle.o)
0x000000000800e13c __cxa_demangle
Note: there is no problem with C-only projects, only with C++. The libraries included are the same (gcc 4.9.3 -> armv7e-m/fpu, and gcc 8.2.1 -> thumb/v7e-m+fp/hard):
libm.a libstdc++.a libc.a libnosys.a libgcc.a
Is there a way to get rid of that so that I can compile and flash my (not so old) project?
I found a solution using libstdc++_nano (instead of the implicit libstdc++). With that, the code size is reduced from 84 KB to 26 KB!
LDFLAGS += -lstdc++_nano
It just works. Thanks #Henrik, #Matthieu and #EOF for your support!
It might be related to exception handling, as std::terminate(), which is used with exceptions, might call the demangling routine. If you don't need exceptions then try disabling them with -fno-exceptions as described here.
Another solution might be to look at the GCC headers:
Demangling routine.
ABI-mandated entry point in the C++ runtime library for demangling.
[...]
returns a pointer to the start of the NUL-terminated demangled
name, or NULL if the demangling fails. The caller is
responsible for deallocating this memory using free.
The prototype is:
char*
__cxa_demangle(const char* __mangled_name, char* __output_buffer,
size_t* __length, int* __status);
So you could probably just supply your own dummy function returning NULL (given that the library functions are weak and can be overridden). I'd advise you to look at the disassembled code first, though, and find out how and why it is being called in the first place, since simply discarding functionality might change behaviour.
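For illustration, a sketch of such a stub could look like the following (untested; the prototype is the one quoted above, and the status codes follow the Itanium C++ ABI, where -2 means "invalid mangled name"). Callers then simply fall back to printing the mangled name:

#include <stddef.h>

// Stub that always fails, so the linker never pulls cp-demangle.o out of
// libstdc++.a. Any code that wanted a demangled name gets the mangled
// one instead.
extern "C" char* __cxa_demangle(const char* /*mangled_name*/,
                                char*       /*output_buffer*/,
                                size_t*     /*length*/,
                                int*        status)
{
    if (status != NULL)
        *status = -2;   // "invalid mangled name" per the ABI
    return NULL;
}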
They also give other advice in this forum post, which might be useful for you as well:
Optimize for size with -Os instead of -O0 (possibly add the -Og option instead, if you prefer easily debuggable code, it is often both smaller and faster than -O0).
Optimize at link-time with -flto while compiling and linking.
Maybe disable RTTI if not used.
I've just ported an STM32 microcontroller project from Keil uVision (using the Keil ARM Compiler) to CooCox CoIDE (using the GCC ARM Embedded compiler).
The problem is that the code size is double when compiled in CoIDE with GCC compared to Keil uVision.
How can this be? What can I do?
Code size in Keil: 54632b (.text)
Code size in CoIDE: 100844b (.text)
GCC compiler flags:
arm-none-eabi-gcc -mcpu=cortex-m3 -mthumb -g2 -Wl,-Map=project.map -Os
-Wl,--gc-sections -Wl,-TC:\arm-gcc-link.ld -g -o project.elf -L -lm
I suspect CoIDE and GCC compile a lot of functions and files that are present in the project but aren't used (yet). Is it possible that whole files are compiled in even if I only use 1 function out of 20 in them (even though I have -Os)?
Hard to say which files are really compiled/linked into your final binary from the information you give. I suppose it takes all the C files it finds in your project if you did not explicitly specify which ones to compile, or if you don't use your own Makefile.
But from the compiler options you give, the linker flag --gc-sections won't collect much garbage if you don't also pass the compiler flags -ffunction-sections -fdata-sections. Try adding those options to strip all unused functions and data at link time.
Since the question was tagged with C++, I wonder if you would like to disable exceptions and RTTI. Those take quite a bit of code. Add -fno-exceptions -fno-rtti to the compiler flags.
I have recently started learning C++ and, since I'm on Linux, I'm compiling using G++.
Now, the tutorial I'm following says
If you happen to have a Linux or Mac environment with development
features, you should be able to compile any of the examples directly
from a terminal just by including C++11 flags in the command for the
compiler:
and tells me to compile using this command: g++ -std=c++0x MY_CODE.cpp -o MY_APP.
Now, what I'm wondering, what is the point of the std=c++0x flag? Is it required, or can I just run g++ MY_CODE.cpp -o MY_APP?
By default, GCC compiles C++ code as gnu++98, which is a fancy way of saying the C++98 standard plus lots of GNU extensions.
You use -std=??? to tell the compiler which standard it should follow.
Don't omit -pedantic though, or it will be lax about standards conformance.
The options you could choose:
standard with gnu extensions
c++98 gnu++98
c++03 gnu++03
c++11 (c++0x) gnu++11 (gnu++0x)
c++14 (c++1y) gnu++14 (gnu++1y)
Coming up:
c++1z gnu++1z (Planned for release sometime in 2017, might even make it.)
GCC manual: https://gcc.gnu.org/onlinedocs/gcc-4.9.2/gcc/Standards.html#Standards
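If you are unsure which standard a given command line actually selects, a small check program (a made-up example, not from the answer) can help: __cplusplus expands to 199711L for C++98/03, 201103L for C++11 and 201402L for C++14 (very old GCC releases notoriously reported 1 here, so treat it as a hint rather than gospel).

#include <iostream>

int main()
{
    // Prints the standard this translation unit was compiled against.
    std::cout << "__cplusplus = " << __cplusplus << std::endl;
    return 0;
}

Compile it, for example, with g++ -std=c++11 check.cpp and again with -std=c++14 to see the value change (the file name check.cpp is arbitrary).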
Also, ask for full warnings, so add -Wall -Wextra.
There are preprocessor defines for making the library include additional checks:
_GLIBCXX_CONCEPT_CHECKS adds additional compile-time checks for some templates' prerequisites. Beware that those checks don't always do what they should, and they are thus deprecated.
_GLIBCXX_DEBUG enables the library's debug mode. This has considerable runtime overhead.
_GLIBCXX_DEBUG_PEDANTIC is the same as above, but checks against the standard's requirements instead of only against the implementation's.
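To illustrate the kind of error the debug mode catches, here is a made-up example: built normally the out-of-bounds subscript is silent undefined behaviour, but built with -D_GLIBCXX_DEBUG (e.g. g++ -std=c++11 -D_GLIBCXX_DEBUG oob.cpp) libstdc++ should abort at runtime with a diagnostic naming the bad access.

#include <vector>

int main()
{
    std::vector<int> v(3, 0);
    return v[3];   // one past the end: only v[0]..v[2] exist
}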
You want to use the C++11 standard (and you are right to want that), but C++11 is a huge step forward from the older C++98 standard.
Old versions of GCC (i.e. GCC 4.8 or earlier) were developed before the standard itself was finalized (so they accepted the -std=c++0x flag). I strongly recommend (if you want C++11) using the latest version of GCC, that is GCC 4.9. A bug-fixing GCC 4.9.2 release appeared at the end of October 2014. So please use it, and pass it the -std=c++11 flag to tell the compiler you want C++11 conformance.
I actually suggest passing -std=c++11 -Wall -Wextra -g to get C++11, all warnings, and debug info. Once you have debugged your program (with gdb, and you'd better also use a recent version of gdb!) you can ask the compiler to optimize with -O2 (and perhaps -mtune=native if you want to optimize for your own computer).
Source for your reference:
main.cpp
#include <iostream>
using namespace std;
int main()
{
    cout << "Test main CPP" << endl;
    return 0;
}
build.sh
rm demoASI*
echo "**cleaned !!**"
##### C++ 11 Compliance #####
# type ONE
g++ -o demoASI_1 -std=c++0x main.cpp
echo "**rebuild-main-done (C++ 11 Compilation) !**"
# type TWO
g++ -o demoASI_2 -std=c++11 main.cpp
echo "**rebuild-main-done (C++ 11 Compilation) !**"
##### C++ 11+ Compliance #####
# type THREE
g++ -o demoASI_3 -std=c++1y main.cpp
echo "**rebuild-main-done (C++ 11+ (i.e. 1y, but not C++14) Compilation) !**"
###### C++ 14 Compliance ######
# type FOUR
g++ -o demoASI_4 -std=c++14 main.cpp
if [ $? -eq 0 ]
then
echo "**rebuild-main-done (C++ 14 Compilation) !** :: SUCCESS"
else
echo "**rebuild-main-done (C++ 14 Compilation) !** :: FAILED"
fi
Now, execute the script as;
./build.sh (assuming build.sh has execution permission)
You can first check the version of your g++ compiler, as;
g++ --version
g++ versions after 4.3 have (increasingly complete) support for C++11.
Please check your compiler's documentation for C++14 support information.
I would like to find out the strictest error-checking flag combination for g++ (4.7). We are not using the new C++11 specification, since we need to cross-compile the code with older compilers, and these older compilers (mostly g++ 4.0) often choke on problems that g++ 4.7 simply ignores.
Right now we use the following set of flags:
-Wall -Wcomment -Wformat -Winit-self -ansi -pedantic-errors \
-Wno-long-long -Wmissing-include-dirs -Werror -Wextra
but this combination does not identify issues such as a double being passed to a function that expects an int, or a comparison between signed and unsigned int, and these cause the old compiler to choke.
I have read through the documentation, and -Wsign-compare should be enabled by -Wextra, but in practice that seems not to be the case, so I might have missed something...
-ansi is an alias for the default standard without GNU extensions. I'd suggest being explicit with -std=c++98 instead, but that should be what g++ -ansi defaults to, so it's not really different.
But generally I've never seen anything that would be accepted by a newer GCC and rejected by an older GCC on the grounds of being invalid. I suspect any such problem is a bug in the older compiler or its standard library. GCC does not have warnings for things that are correct but didn't work with older versions of it, so you have no option other than testing with the older version.
As for the specific issues you mention:
Passing a double to a function that expects an int is not an error. It might be undefined behaviour though (if the value is out of range for int). -Wconversion should help.
Comparing signed with unsigned is also well defined, has always worked as defined, and in the case of equality comparisons the warning actually makes programmers write worse code (comparing an unsigned variable wider than int with -1 is something else than comparing it with -1u). So I actually always compile with -Wno-sign-compare.
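A small illustration of that parenthetical (assuming 32-bit int and 64-bit unsigned long long): the literal is converted to the type of the unsigned operand, so the two comparisons test different values, and -Wsign-compare would only flag the first one.

#include <iostream>

int main()
{
    unsigned long long x = 0xFFFFFFFFULL;   // 2^32 - 1

    // -1 converts to unsigned long long, i.e. ULLONG_MAX: prints 0
    std::cout << (x == -1) << '\n';

    // -1u is unsigned int (0xFFFFFFFF) and widens to the same value as x: prints 1
    std::cout << (x == -1u) << '\n';
    return 0;
}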
The compiler should not print warnings for headers found in directories given with -isystem instead of -I, so that should let you silence the warnings for the Qt headers and keep them enabled for your own code. So you should be able to use -Wconversion.
Use lint or some other static analysis tool to check the code, in addition to the compiler. On my Linux distro, apt-get install splint will get splint; check whether it has been packaged for your OS for easy installation.
I have a problem with my Boost library. I'm using FreeBSD and installed Boost using ports. The Boost version is 1.45 and I use g++47 as the compiler. I have never defined BOOST_DISABLE_THREADS in /usr/local/include/boost/config/user.hpp. My exact error is:
/usr/local/include/boost/config/requires_threads.hpp:29:4: error: #error "Threading support unavaliable: it has been explicitly disabled with BOOST_DISABLE_THREADS"
It says it has been explicitly disabled, but where? And my compile command is:
g++47 -O3 -Wall -std=c++0x -I. -Iinclude -I../include -I/usr/local/include -c -o Application.o src/Application.cpp
The experimental GCC version 4.7 disables Boost.Threads. See: https://svn.boost.org/trac/boost/ticket/6165
Edit: It should be noted that as of the release version of GCC 4.7, and Boost higher than 1.48 (Boost_1_48_0 is still not working), threads work again.
See the ticket 6165 mentioned above by Joachim:
To define 'threads' support, GCC <= 4.6 defines _GLIBCXX__PTHREADS, whereas GCC >= 4.7 defines _GLIBCXX_HAS_GTHREADS. So, in order to compile older Boost versions with any GCC more recent than 4.6, you need the libstdcpp3.hpp.patch patch enclosed in that ticket.
Another problem that could prevent Boost from working with a modern compiler is ticket 6940 (TIME_UTC has a special meaning in C11, therefore Boost >= 1.50 uses TIME_UTC_ instead).