Force g++/clang++ to use local `libstdc++.so` - c++

I am trying to compile a library using a specific version of libstdc++.so, not the system one.
The compilation script uses waf.
I tried to export LD_LIBRARY_PATH=<path/to/directory/of/my/libstdc++.so>.
Since I use fish, I have also tried set --export LD_LIBRARY_PATH <path/to/directory/of/my/libstdc++.so>, as well as LD_LIBRARY_PATH=<path/to/directory/of/my/libstdc++.so> waf.
However, it still links against the system libstdc++.so.
Does anyone know how to force the usage of my libstdc++.so?

You can set the standard library in clang using the -stdlib=<yours> flag. See here for more info: https://clang.llvm.org/docs/ClangCommandLineReference.html
EDIT: Leaving my original answer above so the edit is visible, but you should be able to just use -nostdlib and, optionally, -nostdinc to compile your program. Then, to supply your own standard library, add it as a compiler argument. For example:
clang -nostdinc -nostdlib -I/custom/include/path foo.c /path/to/your/cstdlib.so
For more help configuring the linker, see this post.
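If the goal is just to have both the linker and the runtime loader pick up a locally built libstdc++.so, a minimal sketch (using the placeholder path from the question) is to pass -L for link time and an rpath for run time, then verify the result with ldd. Note that LD_LIBRARY_PATH only affects the runtime search, not which library the linker resolves at build time:
$ g++ foo.cpp -L<path/to/directory/of/my/libstdc++.so> -Wl,-rpath,<path/to/directory/of/my/libstdc++.so> -o foo
$ ldd ./foo | grep libstdc++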

Related

Is it possible to pass a C++11 param to the Buildroot config?

I want to build boost and other packages in Buildroot with -std=c++11.
Is it possible to pass it globally, instead of patching individual package .mk files?
There is no easy way to pass it globally, and for a good reason: some packages may not build with C++11, e.g. because they use new reserved words.
If you really want to risk it, however, you have three options:
Add -std=c++11 to BR2_TARGET_OPTIMIZATION (in the Toolchain menu). This will be included in the toolchain wrapper and therefore be used for each compile. Note that for C programs, this will give you "command line option ‘-std=c++11’ is valid for C++/ObjC++ but not for C" warnings, so packages that use -Werror will break.
Modify package/Makefile.in and add -std=c++11 to TARGET_CXXFLAGS (see the sketch after this list). In this case, it's only passed to C++ compiles. However, TARGET_CXXFLAGS is just passed to the package build system, and not all build systems honor it.
Modify toolchain/toolchain-wrapper.c to add this option when g++ is called. This doesn't have the disadvantages of the other two, but is more work to implement.
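For option 2, the change would be roughly a one-line addition somewhere after TARGET_CXXFLAGS is defined in package/Makefile.in (the exact spot may vary between Buildroot versions):
TARGET_CXXFLAGS += -std=c++11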

How to determine in which .SO library is given C function?

I have this problem all the time in Linux programming. Since all the manuals and almost all the source code for Linux are C-centric, any reference to a function only needs an #include <something.h> line, and the function becomes accessible from C/C++ code.
But I am programming in assembly language and know almost nothing about C/C++.
In order to be able to call some function, I have to import it from the corresponding .so library.
How can I determine the file name of the library? It often differs from the name of the library itself and is not specified in the manuals.
For example, the name of the XLib is actually libX11.so.6. The name of the XShm extension library seems to be libXext.so.6.
Is there an easy way to determine the real name of the library, using the provided C manuals and references?
This is another not-100%-accurate method that may give you some ideas as to how you can narrow things down a bit. It doesn't exactly fit the question because it uses common Linux utilities instead of man files, but it may still be helpful.
Use your distribution's package management software.
For example, on Arch Linux, if you were interested in a function in GLFW/glfw3.h, you could find out who owns that file:
$ pacman -Qo /usr/include/GLFW/glfw3.h
/usr/include/GLFW/glfw3.h is owned by glfw 3.1-1
Find out which .so files are in that package:
$ pacman -Ql glfw | grep 'so$'
glfw /usr/lib/libglfw.so
And, if needed, find the actual file that link points to:
$ readlink -f /usr/lib/libglfw.so
/usr/lib/libglfw.so.3.1
This will depend on your distribution. I believe on Ubuntu/Debian you'd use dpkg-query instead.
Edit: DevSolar points out in a comment that you can use apt-file search <header> and apt-file list <package> instead of dpkg-query -S <header> and dpkg-query -L <package>. apt-file appears to work even for packages that aren't installed (though it seems slower?).
I also noticed (on my Ubuntu VM at least) that, e.g., libglfw-dev contains the libglfw.so symlink, while libglfw2 contains the actual libglfw.so.2 object.
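On Debian/Ubuntu the equivalent steps might look roughly like this (the package name is whatever the first command actually reports; libglfw3-dev is only illustrative here):
$ dpkg-query -S /usr/include/GLFW/glfw3.h
$ apt-file search GLFW/glfw3.h
$ dpkg-query -L libglfw3-dev | grep '\.so'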
Once you have a set of .so files, you can check them for whatever function you are interested in:
$ nm -D /usr/lib/libglfw.so | grep "glfwCreateWindow"
0000000000007cd0 T glfwCreateWindow
Note that I pulled this last step from a comment on the previous question and don't fully understand it. Maybe you could even skip the earlier steps and rely on nm and grep alone?
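If you do want to try that shortcut, a rough sketch is to scan a library directory directly with nm; it is slow and only finds libraries in the directories you scan:
$ for lib in /usr/lib/*.so*; do nm -D "$lib" 2>/dev/null | grep -q ' T glfwCreateWindow$' && echo "$lib"; done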
This is not a sure fire way, but it can help in many cases.
Basically, you can usually find the library name at the bottom of the man page.
E.g., man XCreateWindow says libX11 on the last line. Then you look for libX11.so and use nm or readelf to see all exported functions.
Another example, man XShm says libXext at the bottom. And so on.
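For instance (the library path may differ on your distribution, e.g. /usr/lib/x86_64-linux-gnu/ on Debian/Ubuntu, and readelf --dyn-syms works just as well as nm -D):
$ nm -D /usr/lib/libXext.so.6 | grep XShmAttach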
UPDATE
If the function is in section (2) of the man pages, it's a system call (see man man) and is provided by glibc, which would be libc-2.??.so.
Lastly (thanks Basile), if the function does not mention the library, it is also most likely provided by glibc.
DISCLAIMER: Again this is not a 100% accurate method -- but it should help in most cases.
You can ask gcc to tell you which file it would use for linking like so:
gcc --print-file-name=libX11.so
Sample output:
/usr/lib/gcc/x86_64-linux-gnu/4.9/../../../x86_64-linux-gnu/libX11.so
This file will usually be a symlink, so you'll have to pipe it through readlink or realpath to get the actual file. For example:
readlink -f $(gcc --print-file-name=libXext.so)
Sample output:
/usr/lib/x86_64-linux-gnu/libXext.so.6.4.0
As I commented, you could use gcc to link your program, and then it should be able to accept -lX11; by using gcc -v instead of gcc you'll find out what is actually linked and how.
However, you have a much more significant issue than finding the lib*.so.*; most C or C++ APIs are described in header files, and these C or C++ header files also contain symbolic constants (like O_RDONLY for open(2)...) or macros (like WIFEXITED in POSIX wait ...) whose value or expansion you would have to find manually in header files or documentation. (Quite often, such constants are either preprocessor #define-d constants or enum values.) Also, some headers, in particular in C++, contain a lot of inline-d functions (or macros)!
A possible way might be to generate some C files to find all these constants, enums, macros, inlined functions..., and/or to customize the GCC compiler (e.g. with MELT ...) to find them.
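As a small illustration of that idea, preprocessor macros (though not enum values or inline functions) can be expanded straight from the shell; the last printed line is the expansion of O_RDONLY (typically 0 with glibc):
$ printf '#include <fcntl.h>\nO_RDONLY\n' | gcc -E -P -x c - | tail -n 1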
So my message is that for better or worse, the C language is deeply tied to Linux & POSIX.
You might restrict yourself to use only syscalls(2) from your assembler code. Then you won't use libX11 and you don't need any header or constant (except the ones for syscalls, starting from <asm/unistd.h>).
BTW, in 2015, coding entirely in assembler for performance reasons is a mistake. The compiler generates better code than you reasonably can (as soon as you have more than a few hundred machine instructions). In practice, you can code in assembler with GCC by using extended asm instructions in your C functions.
Or are you building your own compiler? Then you should have said so in your question!
Read also the Program Library HowTo & the Linux Assembly HowTo

Trouble including NSS header Files

Very recently, I had this idea to start using Mozilla NSS and to learn to use it, so that somewhere in the future I can use it, or at least start contributing to it.
So I went to its website and cloned its source code into a directory "NSS" using Mercurial.
Then I used
make nss_build_all
instead of
gmake nss_build_all
Note: I don't know if it makes a difference; gmake is just GNU Make.
This make command created a dist folder outside the nss folder. So now my NSS folder has 3 folders: nss, nspr, dist.
In .bashrc I added a line at the end:
export LD_LIBRARY_PATH=/home/ayusun/workspace/NSS/dist/Linux3.5_x86_glibc_PTH_DBG.OBJ/lib
Then I went over to this sample code, copy-pasted it, and saved it in my NSS folder.
And then I tried to compile it, but it failed, stating that it couldn't find iostream.h. I went over and changed the locations of the header files.
So
<iostream.h> became <iostream>
"pk11pub.h" became "nss/lib/pk11wrap/pk11pub.h"
"keyhi.h" became "nss/lib/cryptohi/keyhi.h"
"nss.h" became "nss/lib/nss/nss.h"
I tried compiling again, but this time the error was that it couldn't find "plarena.h",
which is actually present in dist/*.OBJ/include/, where it is a link to the file plarena.h in nspr.
And so I don't know how to include these files anymore.
I always have trouble when it comes to including 3rd-party header files.
Thanks
This is an old question, but I'll answer it anyway for future reference.
The simplest way is just to use the NSS package for your operating system.
Then you can use things like nss-config --cflags, nss-config --libs, nspr-config --cflags and nspr-config --libs and add that to your CFLAGS and LDFLAGS as appropriate.
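For example, with the packaged NSS/NSPR installed, compiling the sample might look roughly like this (sample.c is a placeholder file name):
$ cc sample.c $(nss-config --cflags) $(nspr-config --cflags) $(nss-config --libs) $(nspr-config --libs) -o sample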
For those who do decide to compile their own NSS, I'll give the quick overview.
The NSS headers are in dist/public. Add -I/path/to/dist/public to your compiler command line. The NSPR headers are in dist/Debug/include¹, so add -I/path/to/dist/Debug/include to your compiler command line.
Now you can use #include <nspr/prio.h> and #include <nss/nss.h> and friends.
The NSS code relies on directly including the NSPR headers, so you'll need to add -I/path/to/dist/Debug/include/nspr for it to find things like plarena.h. Or you could do the same and not prefix your includes like I did above. It's up to you.
Now add -L/path/to/dist/Debug/lib and -lnss3 -lnspr4 to your linker command line. You may want to also add -rpath /path/to/dist/Debug/lib for the runtime link path, or copy them to a system directory or use LD_LIBRARY_PATH.
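Putting those pieces together, a hedged example of a full compile/link line (adjust Debug to whatever your build directory is actually called):
$ c++ sample.cpp -I/path/to/dist/public -I/path/to/dist/Debug/include -I/path/to/dist/Debug/include/nspr -L/path/to/dist/Debug/lib -Wl,-rpath,/path/to/dist/Debug/lib -lnss3 -lnspr4 -o sample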
I hope this gets you started.
¹ This actually depends on your operating system and build type. I hope you can figure out the name of the actual Debug directory in your case.

How to determine which compiler was requested

My project uses SCons to manage the build process. I want to support multiple compilers, so I decided to use AddOption so the user can specify which compiler to use on the command line (with the default being whatever their current compiler is).
AddOption('--compiler', dest = 'compiler', type = 'string', action = 'store', default = DefaultEnvironment()['CXX'], help = 'Name of the compiler to use.')
I want to be able to have built-in compiler settings for various compilers (including things such as maximum warning levels for that particular compiler). This is what my first attempt at a solution currently looks like:
if is_compiler('g++'):
    from build_scripts.gcc.std import cxx_std
    from build_scripts.gcc.warnings import warnings, warnings_debug, warnings_optimized
    from build_scripts.gcc.optimizations import optimizations, preprocessor_optimizations, linker_optimizations
elif is_compiler('clang++'):
    from build_scripts.clang.std import cxx_std
    from build_scripts.clang.warnings import warnings, warnings_debug, warnings_optimized
    from build_scripts.clang.optimizations import optimizations, preprocessor_optimizations, linker_optimizations
However, I'm not sure what to make the is_compiler() function look like. My first thought was to directly compare the compiler name (such as 'clang++') against what the user passes in. However, this immediately failed when I tried to use scons --compiler=~/data/llvm-3.1-obj/Release+Asserts/bin/clang++.
So I thought I'd get a little smarter and use this function
cxx = GetOption('compiler')
def is_compiler(compiler):
    return cxx[-len(compiler):] == compiler
This only looks at the end of the compiler string, so that it ignores directories. Unfortunately, 'clang++' ends in 'g++', so my compiler was seen to be g++ instead of clang++.
My next thought was to do a backward search and look for the first occurrence of a path separator ('\' or '/'), but then I realized that this won't work for people who have multiple compiler versions. Someone compiling with 'g++-4.7' will not register as being g++.
So, is there some simple way to determine which compiler was requested?
Currently, only g++ and clang++ are supported (and only their most recently released versions) due to their c++11 support, so a solution that only works for those two would be good enough for now. However, my ultimate goal is to support at least g++, clang++, icc, and msvc++ (once they support the required c++11 features), so more general solutions are preferred.
The compiler is just one part of the build process. You also need a linker and possibly other additional programs. In SCons this is called a Tool. You can see the list of tools supported out of the box in the man page; search for the statement: SCons supports the following tool specifications out of the box: ...
A Tool sets the necessary SCons environment variables; this is documented here.
SCons automatically detects compilers installed on the OS and has a priority order for choosing among them; of course, autodetection only works properly if the PATH variable is set to the necessary dirs. For example, if you have msvc and mingw on Windows, SCons chooses the msvc tool. To force the use of a particular tool, use Tool('name')(env). For example:
env = Environment()
Tool('mingw')(env)
Now env is forced to use mingw.
So, clang is one of the tools currently not supported out of the box by SCons. You need to implement it yourself, or set env vars such as CC and CXX, which SCons uses to generate the build commands.
You could simply use the Python os.path.basename() or os.path.split() functions, as documented here.
You could do what people suggested in the comments by splitting this question into 2 different issues, but I think it could be a good idea to be able to specify the path with the compiler, since you could have 2 versions of g++ installed, and if the user only specifies g++, they may not get the expected version.
There seems to be some confusion about what question is asked here.
From what I can see, this asks how to determine which compiler was chosen by default, so I'll answer that one.
From what I found out, the official way to check the compiler is to look at the construction variable TOOLS, which contains a list of all tools / programs that SCons decided / was told to use in the given construction environment.
env = Environment()
is_gcc = 'g++' in env['TOOLS']
is_clang = 'clangxx' in env['TOOLS']
TOOLS lists only the currently used tools even if SCons can find more of them.
E.g. if you have both GCC and Clang installed and SCons is able to find both, default TOOLS will still contain only GCC.
You can find the full list of predefined tools here.

How to fix "/lib/tls/libc.so.6: version `GLIBC_2.4' not found"?

I compiled a binary and copied it to another machine for execution, but I am getting the above error. On the second machine, I cannot install new libraries. I tried putting the libc from the first machine into the directory of the binary on the second machine, but the linker (as I found using ldd) still loads from the standard path /lib/tls/libc.so.6. Please let me know a minimal-change fix for this.
Update:
Command used for compilation/linking:
g++ -O2 -DNDEBUG -o CountStrings \
    -I../../../../../tbb/tbb20_20080408oss_src/include/ ../src/CountStrings.cpp \
    -L../../../../../tbb/tbb20_20080408oss_src/build/linux_ia32_gcc_cc4.3.2_libc2.8.90_kernel2.6.27_release/ \
    -ltbb
libtbb.so has dependency on libc.so.6
In fact, the easier way to fix the issue is to upgrade your OS version.
For example, Java 1.7 is not running on RedHat 4 but works well on RedHat 5.
Try exporting LD_LIBRARY_PATH=<location_of_your_lib> for your process
e.g. $ LD_LIBRARY_PATH=/home/kumar ./a.out
will look for libs in /home/kumar/ before anywhere else
Have you checked to see if there is a static version of the library on the compilation machine? If there is you could explicitly link against it, using /lib/tls/libc.a instead of -L/lib/tls -lc (or whatever your dialect is).
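A rough sketch of what that could look like with the link line from the question, purely illustrative and assuming a static libc.a actually exists at that path (statically linking libc can bring its own problems):
$ g++ -O2 -DNDEBUG -o CountStrings ../src/CountStrings.cpp -I<path/to/tbb/include> -L<path/to/tbb/lib> -ltbb /lib/tls/libc.a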
I am not sure what (ill) side effects using a libc other than the one provided by the system may have, but you can try to force the use of your special libc copy with LD_PRELOAD:
LD_PRELOAD=<location_of_your_lib> <yourprogram>