I understand that when compiling DPDK with CONFIG_RTE_BUILD_SHARED_LIB=y, the driver must be explicitly loaded with the '-d' EAL option.
I am using an Intel X722 NIC. What should my '-d' EAL option be set to?
The DPDK EAL argument for a shared-library PMD is passed with the option -d. The Intel X722 is handled by the i40e PMD, so for your NIC you would pass -d librte_pmd_i40e.so.
Please note that since your application or makefile is not shown, I assume you will also end up passing the libraries for mempool, ring, hash and others too.
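As a rough sketch (the application name, core/memory options, and the exact set of -d libraries are illustrative assumptions and depend on what your application actually uses), the invocation might look something like:

# illustrative only: load the i40e PMD plus other DPDK shared libraries at startup
./my_dpdk_app -l 0-3 -n 4 \
    -d librte_mempool.so \
    -d librte_ring.so \
    -d librte_hash.so \
    -d librte_pmd_i40e.so \
    -- <application arguments>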
Related
I am having trouble debugging when using gdbserver. gdb shows an error while loading one of the shared libraries.
Error while mapping shared library sections:
`target:<path to library>': not in executable format: Invalid argument
I have no problem when attaching with gdb using PID. But gdbserver throws the above error and then I am unable to set any breakpoints in that shared lib.
Any idea what could be wrong? I have other libraries from the same application that don't seem to have any problem.
I am running on:
CentOS 6.7
gdb version 7.11.1
gcc version 4.4.7
I encountered this error in GDB 7.11 (the one that ships with Android's NDK-r20), and it was caused by my library being relatively large (300MB), which tripped a bug in gdbserver's integer parser that prevented gdbserver from loading any library larger than 268MB. The bug was fixed in GDB 8.2 by raising the limit to 2GB (https://sourceware.org/bugzilla/show_bug.cgi?id=23198).
I used GDB's sysroot feature to work around this issue: https://sourceware.org/gdb/current/onlinedocs/gdb/Files.html#index-set-sysroot
I copied the libraries from the remote target to my local system* and used set sysroot sysroot-here (where "sysroot-here" is a directory containing the directories/files that I had copied). This forces GDB to read symbols locally instead of from the target.
With this sysroot approach, I not only worked around the bug, but was also able to use the library with full debugging symbols (about 3GB, which would probably also have tripped newer GDB versions).
* I copied all system libraries and the app's libraries, while preserving the full directory structure / file paths. I wanted to copy only the specific library that triggered the bug, but with sysroot it is all or nothing: either all libraries must be found locally on the host, or none. See also: A way to have GDB load libraries from local sysroot and remote gdbserver
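For reference, a minimal sketch of the workflow; the host name, paths and the rsync approach are just assumptions for illustration:

# on the host: copy the target's libraries, preserving the directory layout
mkdir -p ~/sysroot-here
rsync -aR user@target:/lib user@target:/usr/lib ~/sysroot-here/

# in gdb, before connecting to gdbserver
(gdb) set sysroot ~/sysroot-here
(gdb) target remote <target-ip>:2345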
I found that gdb versions 7.10+ have this problem with my particular binary. I am still not sure why. It works fine with 7.9, so I downgraded to get around the issue.
I have created a .so library on my Ubuntu machine and run it on another machine. I got this error:
/lib/tls/i686/cmov/libc.so.6: version `GLIBC_2.15' not found
I suppose this is a general C++ library issue, but how do I solve such a problem? I can't change the client's configuration, which means I must do something with my configuration. But what exactly must I do?
UPD
ldd --version returns
my machine:
ldd (Ubuntu EGLIBC 2.19-0ubuntu6.6) 2.19
host machine:
ldd (Ubuntu EGLIBC 2.11.1-0ubuntu7.8) 2.11.1
On the target machine, run ldd --version and check the output, which will tell you what version of glibc they have.
You can then roll yours back to match their version.
Alternatively, statically link your executable so it doesn't need their C library.
You can also alter your program to link against the older version, once you know what it is.
See this SO solution for how to do that: How can I link to a specific glibc version?
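To see exactly which glibc symbol versions your .so pulls in, and therefore what the target must provide, something like this can help (the library name is a placeholder):

# list the versioned glibc symbols the library depends on
objdump -T mylib.so | grep GLIBC_ | sort -u
# or look at the version requirement sections directly
readelf -V mylib.so | grep GLIBC_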
You have to make sure that you are linking against a matching or older version of glibc. GCC has a --sysroot flag which allows you to define which libraries are used.
This may help with details: https://sourceware.org/glibc/wiki/Testing/Builds
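A rough sketch of what that could look like; the sysroot path is a placeholder for a root filesystem that contains the older glibc and its headers:

# build the shared library against the older glibc found in the sysroot
gcc --sysroot=/opt/old-ubuntu-rootfs -fPIC -shared -o libfoo.so foo.c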
The point is that creating a shared library necessarily means linking it against the C library (glibc, in your case). That means calls to C library functions (which libstdc++ makes) get resolved to the actual symbol locations in that C library.
Now, if the C library on the compiling/linking machine is not the same as on the target machine, this must fail, and hence the libc version is checked.
The solution is either to statically link your .so (which usually doesn't make much sense) or to compile and link it correctly for your target machine.
Besides compiling everything statically, which is usually a bad idea or does not work at all, the only way to solve the issue is to recompile your binary for the target platform.
Install a virtual machine or chroot with the same Ubuntu version as the target platform and compile there. There are also tools like sbuild or pbuilder/cowbuilder which automate this for Debian/Ubuntu packages.
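For example, on a Debian/Ubuntu host a chroot matching the target release can be set up along these lines; the release name (lucid, i.e. 10.04 with glibc 2.11) and paths are assumptions for illustration:

# create a chroot with the target's Ubuntu release
sudo debootstrap lucid /srv/lucid-chroot http://old-releases.ubuntu.com/ubuntu/
# enter it and build the library there
sudo chroot /srv/lucid-chroot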
As I understand it, to have gcc on an armv5 board compile executables, while using my x86 machine to build that ARM-native gcc, I need this setup:
Machine configuring the toolchain components: the config machine : x86_64
Machine building the toolchain components: the build machine : x86_64
Machine running the toolchain: the host machine : ARM
Machine the toolchain is generating code for: the target machine : ARM
Based on reading the crosstool-NG docs here, I should use a cross-native setup, but when I attempt to enable that using ct-ng menuconfig I need to enable:
experimental in Paths and misc options -> Try features marked as EXPERIMENTAL
Toolchain options -> Type (Cross) -> Cross-native (NO CODE!) (EXPERIMENTAL)
But of course Cross-native doesn't work, since there is no code for it. Googling leads me to this and this discussion on a mailing list, which say that I should try to do this using a Canadian build, but I am somewhat lost as to what tuples and whatnot to use for the Build System and Host System in crosstool-NG's menuconfig, or whether this is still the correct way to go, considering that both discussions are over 3 years old.
This post on SO seems to imply that the build system and host system tuples should be arm-unknown-linux-gnueabi?
To be clear, I have been able to compile and run executables using a cross compiler generated from crosstool-ng already, now I want to have a compiler on that armv5 system.
Edit: So I just added the normal cross compiler (arm-unknown-linux-gnueabi) generated by crosstool-NG to the tuple in Toolchain options -> General toolchain options -> Host system -> Tuple and was able to compile gcc as well as have it execute on the ARM board. Example
I now just need to fix the library situation and that should be that.
This answer is an extension of my original question regarding the general workflow for cross compiling a toolchain.
I had the correct general idea: you have to do a Canadian build with the host system tuple set to the arm-unknown-linux-gnueabi cross compiler I made earlier. Make sure to add it to your PATH, or symlink it into /bin, or handle that however else you prefer.
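For instance, assuming the cross compiler lives in crosstool-NG's default output location under ~/x-tools (adjust the path if yours differs):

# make the earlier arm-unknown-linux-gnueabi cross compiler visible to ct-ng
export PATH="$HOME/x-tools/arm-unknown-linux-gnueabi/bin:$PATH"
ct-ng menuconfig   # set Toolchain options -> Host system -> Tuple to arm-unknown-linux-gnueabi
ct-ng build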
The build took roughly 30 minutes using 3 of the 4 cores of an i5-3570K and ~2GB of RAM in an Ubuntu VMware virtual machine on a normal HDD. Using an SSD will probably speed things up significantly.
Once this is done, you should have an output directory that crosstool-NG made for you, which includes the toolchain for the ARM architecture. You can verify this by running file filename on any of the binaries.
Now for the library situation, which took me a while and caused a decent bit of confusion. In the toolchain output there should be a rootfs folder. That folder contains the expected root file system of the target for which you will be compiling (in this case ARM). You need to copy the /lib folder, as well as the user-space libraries, mirroring the folder hierarchy of this rootfs folder.
You can verify that you have the libraries set up correctly by running objdump -p filename and checking the NEEDED entries, which list the required libraries that should be present in the rootfs.
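As a quick illustration (the paths are placeholders, and I am assuming the cross binutils from the first toolchain are used to inspect the ARM binaries):

# check that a toolchain binary really targets ARM
file <toolchain-output>/bin/gcc
# list the shared libraries it will need at run time on the target
arm-unknown-linux-gnueabi-objdump -p <toolchain-output>/bin/gcc | grep NEEDED
# mirror the toolchain's rootfs libraries into the target root file system
cp -a <toolchain-output>/rootfs/lib/. <target-rootfs>/lib/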
If you are using a busybox-based rootfs, then assuming you didn't compile it statically, you probably already have the libraries set up correctly, since busybox needed them. I did a static build of busybox first to make sure I could get the system to boot to a shell, and then made a non-static build as a soft start for libraries, using the libraries from the toolchain's rootfs folder. Once I had a dynamically linked busybox system working, simply dropping the cross-compiled toolchain into the rootfs at an arbitrary location (/usr/home/toolchain for me) was enough; after that you use the toolchain just as you would on an x86 system, as far as PATH, symlinks and so on are concerned.
I have an application that uses dlopen() to load additional modules. The application and modules are built on Ubuntu 12.04 x86_64 using gcc 4.6, but for the i386 arch. The binaries are then copied to another machine with exactly the same OS and work fine.
However, if they are copied to Ubuntu 12.04 i386, then some (but not all) modules fail to load with the following message:
dlopen: cannot load any more object with static TLS
I would suspect that this is caused by the usage of __thread variables. However such variables are not used in the loaded modules - only in the loader module itself.
Can someone provide any additional info, what can be the reason?
I am reducing the number of __thread variables and optimizing them (with -ftls-model etc.); I'm just curious why it doesn't work on an almost identical system.
I would suspect that this is caused by the usage of __thread variables.
Correct.
However such variables are not used in the loaded modules - only in the loader module itself.
Incorrect. You may not be using __thread yourself, but some library you statically link into your module is using them. You can confirm this with:
readelf -l /path/to/foo.so | grep TLS
what can be the reason?
The module is using -ftls-model=initial-exec, but should be using -ftls-model=global-dynamic. This most often happens when (some of) the code linked into foo.so is built without -fPIC.
Linking non-fPIC code into a shared library is impossible on x86_64, but is allowed on ix86 (and leads to many subtle problems, like this one).
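As a sketch (file names are placeholders), rebuilding the offending objects with -fPIC makes the compiler default to the global-dynamic TLS model for the shared library, and the result can be checked afterwards:

# compile every object that goes into the module as position-independent code;
# for PIC code the compiler defaults to the global-dynamic TLS model
gcc -fPIC -c tls_user.c -o tls_user.o
gcc -shared -o foo.so tls_user.o
# the STATIC_TLS flag (set when initial-exec TLS is used) should no longer appear
readelf -d foo.so | grep STATIC_TLS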
Update:
I have one module compiled without -fPIC, but I do not set the TLS model at all; as far as I remember, the default value is not initial-exec.
There can only be one TLS model for each ELF image (executable or shared library).
The TLS model defaults to initial-exec for non-fPIC code.
It follows that if you link even one non-fPIC object that uses __thread into foo.so, then foo.so gets initial-exec for all of its TLS.
So why does this cause problems - because with initial-exec the number of TLS variables is limited (since they are not dynamically allocated)?
Correct.
I know the '-fPIC' option has something to do with resolving addresses and independence between individual modules, but I'm not sure what it really means. Can you explain?
PIC stands for Position Independent Code.
To quote man gcc:
If supported for the target machine, emit position-independent code, suitable for dynamic linking and avoiding any limit on the size of the global offset table. This option makes a difference on AArch64, m68k, PowerPC and SPARC.
Use this when building shared objects (*.so) on those mentioned architectures.
The f is the gcc prefix for options that "control the interface conventions used in code generation".
The PIC stands for "Position Independent Code"; it is a specialization of -fpic for m68k and SPARC.
Edit: After reading page 11 of the document referenced by 0x6adb015, and the comment by coryan, I made a few changes:
This option only makes sense for shared libraries, and you're telling the OS you're using a Global Offset Table (GOT). This means all your address references are relative to the GOT, and the code can be shared across multiple processes.
Otherwise, without this option, the loader would have to modify all the offsets itself.
Needless to say, we almost always use -fpic/PIC.
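A minimal example of the usual recipe (file and library names are placeholders):

# compile as position-independent code, then link into a shared object
gcc -fPIC -c hello.c -o hello.o
gcc -shared -o libhello.so hello.o
# link a program against the shared library
gcc main.c -L. -lhello -o main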
man gcc says:
-fpic
Generate position-independent code (PIC) suitable for use in a shared
library, if supported for the target machine. Such code accesses all
constant addresses through a global offset table (GOT). The dynamic
loader resolves the GOT entries when the program starts (the dynamic
loader is not part of GCC; it is part of the operating system). If
the GOT size for the linked executable exceeds a machine-specific
maximum size, you get an error message from the linker indicating
that -fpic does not work; in that case, recompile with -fPIC instead.
(These maximums are 8k on the SPARC and 32k on the m68k and RS/6000.
The 386 has no such limit.)
Position-independent code requires special support, and therefore
works only on certain machines. For the 386, GCC supports PIC for
System V but not for the Sun 386i. Code generated for the
IBM RS/6000 is always position-independent.
-fPIC
If supported for the target machine, emit position-independent code,
suitable for dynamic linking and avoiding any limit on the size of
the global offset table. This option makes a difference on the m68k
and the SPARC.
Position-independent code requires special support, and therefore
works only on certain machines.