Cross Compiling with cmake using additional third party libraries - c++

I want to cross compile my C++ application for ARM with cmake. Therefore I installed the arm-linux-gnueabi-g++ package on my Ubuntu x86 system. However, this package does not contain all of the third-party libraries needed to successfully link my C++ application. Example:
Ubuntu x86 system:
Contains basic stuff such as
/usr/lib/gcc-cross/arm-linux-gnueabi/4.7/libstdc++.so
/usr/lib/gcc-cross/arm-linux-gnueabi/4.7/libstdc++.a
/usr/lib/gcc-cross/arm-linux-gnueabi/4.7/libgcc.a
...
Target system (ARM):
Has further ARM-specific packages installed, which would be needed if I compiled my application directly on the ARM system. Therefore more libraries are available:
/usr/lib/libpq.so
/usr/lib/libpq.a
/usr/lib/arm-linux-gnueabi/libssl.so
/usr/lib/arm-linux-gnueabi/libssl.a
/usr/lib/arm-linux-gnueabi/libkrb5.so.3
Since the additional libraries are needed for cross compiling, my attempt was to copy the required third-party libraries manually from the target system to the x86 system. However, this ended in disaster, since linking produces messages such as "libssl.so -> requires lib B -> requires lib C which is missing".
How can I make these libraries available on my x86 system for cross compiling?
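One common approach (not from the original post; just a sketch) is to copy the target's root filesystem to the build machine and let CMake treat it as a sysroot, so the cross linker resolves libssl, libkrb5 and the rest of the dependency chain against exactly the files the target uses. The host name, sysroot location and paths below are assumptions, and CMAKE_SYSROOT needs CMake 3.0 or newer (older versions can pass --sysroot through the compiler flags instead):
# copy the target's libraries and headers into a local sysroot (paths are examples)
rsync -a user@arm-target:/lib/ /opt/arm-sysroot/lib/
rsync -a user@arm-target:/usr/lib/ /opt/arm-sysroot/usr/lib/
rsync -a user@arm-target:/usr/include/ /opt/arm-sysroot/usr/include/
# write a minimal toolchain file pointing CMake at the cross compiler and the sysroot
cat > arm-toolchain.cmake <<'EOF'
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER arm-linux-gnueabi-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabi-g++)
set(CMAKE_SYSROOT /opt/arm-sysroot)
set(CMAKE_FIND_ROOT_PATH /opt/arm-sysroot)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
EOF
cmake -DCMAKE_TOOLCHAIN_FILE=$PWD/arm-toolchain.cmake path/to/source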

Related

How do I setup a Codeblocks project on Windows 10 to compile and link a static SFML project?

My advance apologies for being hopeless at Windows development. I am by no means a Windows developer, and my understanding of how to write, compile and link C++ code on a Windows system is limited, to say the least.
I am having difficulty trying to compile and link an SFML project on a Windows 10 system with the Code::Blocks IDE.
I am trying to link this project with static linking, not dynamic linking. Again, I have virtually no idea how the two methods work in detail; I just know that if I ship a statically linked binary to another Windows 10 user, it is much more likely to "just work" on their system.
List of things I did:
Downloaded the latest version of CodeBlocks with MINGW integration.
Installed, default options
Downloaded the latest version of SFML (32bit MINGW version)
Extracted the zip file (SFML) to my home directory
Created a new codeblocks project (console application) and followed the instructions to set the compiler and linker options
https://www.sfml-dev.org/tutorials/2.5/start-cb.php
It works fine for dynamic linking, but requires me to copy the .dll files to the same dir as the produced executable file (produced from compilation of my C++ code).
I tried to change to static linking, changing the names of the linker libs with the -s or -s-d suffix, and adding the SFML_STATIC define to the global (Release and Debug) options. I also added the opengl32, freetype, winmm and gdi32 link libs, before their respective SFML link libs.
When trying to compile I get the following linker errors
cannot find -lfreetype
cannot find -lsfml-graphics-s-d
cannot find -lsfml-window-s-d
cannot find -lsfml-system-s-d
In Release mode, similar errors are produced.
What am I doing wrong?
My hunch would be that you either have the wrong compiler (see below) or you haven't defined the library directory as mentioned in the linked tutorial.
There are four common things to consider when using SFML (and essentially any other C++ library) on Windows.
Compiler versions have to fully match
Make sure to not mix x86 and x64
Settings need to be specified for the correct configuration
When linking libraries statically, you also need to link the dependencies
Compiler versions have to fully match
Since C++ doesn't have a standardized ABI, the generated libraries will never be reusable between compilers. Yes, sometimes "it works", but it can break at any point and there's no guarantee.
We strongly recommend either using the compilers linked on the SFML download page or building SFML from source with your current compiler.
If you got the latest stable Code::Blocks version with MinGW as stated, you should also be able to get a snapshot build of SFML, which should be using the same compiler.
Note: One exception to this rule is Visual Studio, where VS 2017 binaries are compatible with VS 2019 (and maybe VS 2022?).
Make sure to not mix x86 and x64
When you download a 32-bit (x86) version of SFML, you also need a 32-bit version of your compiler. When you download a 64-bit (x64) version of SFML, you also need a 64-bit version of your compiler.
Double-check your compiler configuration to make sure you've selected the correct bitness.
Note: For Visual Studio you need to select the correct compiler architecture in the IDE, usually positioned right next to the run button.
Settings need to be specified for the correct configuration
Project configurations are usually spread across a matrix built from these types:
Debug / Release / All
x86 / x64 (for VS)
Make sure, when you add the settings for library paths and include paths, that they're not just set for Debug or Release and end up missing in one or the other configuration.
Also make sure you're not setting up debug libraries (with the -d suffix) in release mode or release libraries (without any suffix) in debug mode.
When linking libraries statically, you also need to link the dependencies
The SFML static libraries don't contain any symbols from their dependencies; that means in your final application you have to link static SFML and all of its dependencies.
As a short summary, you can think of a static library as an archive of object files. These object files are linked directly into your application, just like the object files built from your own source files. As such, the SFML static libraries only contain object files of SFML itself and not of other libraries.
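A hedged command-line equivalent of those Code::Blocks settings for the Release configuration (the SFML install path is a placeholder, and in Debug you would link the -s-d suffixed SFML libraries instead); each dependency has to appear after the SFML library that needs it:
# compile with SFML_STATIC defined, then link the static SFML libs followed by their dependencies
g++ -DSFML_STATIC -IC:/SFML-2.5.1/include -c main.cpp
g++ main.o -o app.exe -LC:/SFML-2.5.1/lib \
    -lsfml-graphics-s -lfreetype \
    -lsfml-window-s -lopengl32 -lgdi32 \
    -lsfml-system-s -lwinmm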
Troubleshooting
If nothing seems to help, enable verbose compiler and linker output; that way you see exactly which commands are invoked, and you can quickly spot what's missing.
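From the command line, a rough equivalent (the library path is again a placeholder) would be:
# -v prints the exact linker invocation; -Wl,--verbose makes GNU ld report every path it tries while resolving each -l option
g++ -v main.o -o app.exe -LC:/SFML-2.5.1/lib -lsfml-graphics-s-d -Wl,--verbose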

wxWidgets jpeg library build issue

I'm trying to build the wxWidgets library into a custom path on a Fedora 27 operating system.
I got CMake to recognize the wx-config file path, and that works during the CMake run. I also load the libraries and include dirs through a modified wxWidgets finder CMake file that sets the custom wx-config path successfully.
But CMake does not load my wxWidgets configuration. I mean, the built wx_gtk2u_jpeg-3.1 lib cannot be found (it is supposed to be /usr/lib/libwx_gtk2u_jpeg-3.1.so). I need the jpeg dependency from wxWidgets for my project.
I'm sure the problem is not in my CMake files; rather, the problem is the wxWidgets compilation, because CMake can find the other built dependencies in /usr/lib/.
I actually installed the libjpeg-turbo-devel package, which includes the libjpeg.h needed for building wxWidgets, but libwx_gtk2u_jpeg-3.1.so is still not created.
The weirdest part is that $ wx-config --libs shows the wx_gtk2u_jpeg-3.1 lib to be linked, and the hint paths where it should be found.
wxWidgets commands for building:
$ ./configure --with-libjpeg=builtin --with-libpng=builtin --with-libtiff=builtin --with-zlib=builtin --with-expat=builtin --enable-webviewwebkit=no --prefix=/opt/cpp_dependencies/2018Q1/usr
$ make -j 4
$ make install
You can check out my CMake files, the CMake output and the wxWidgets build output in order to reproduce it: https://gist.github.com/jjalvarezl/b70accae269ef56c56010bedf157c27f
Line 1543 of the wxWidgets build output file shows that the jpeg library is built, and line 1564 of the same file shows the make install command that installs all the libwx_<lib_name>.so libraries into the final /usr/lib path. Still, none of them is the needed library.
Please show the exact error message, as it's not clear what the actual problem is. What I can say is that the built-in versions of the 3rd party libraries, such as libjpeg, are always static libraries, even when wxWidgets itself is built as shared libraries. I.e. you're never going to have libwx_gtk2u_jpeg-3.1.so, only the .a.
I'd also strongly recommend using the system versions of the 3rd party libraries under Unix systems. This means that your wxWidgets applications will get security updates from your OS vendor, and you don't risk running into incompatibilities due to using two different versions of the same library in your application.
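Not from the original answer, but as a concrete sketch of that recommendation, the build could be reconfigured against the system copies (this assumes the corresponding Fedora -devel packages, e.g. libjpeg-turbo-devel and libpng-devel, are installed):
# use the distribution's libjpeg, libpng, etc. instead of the builtin static copies
./configure --with-libjpeg=sys --with-libpng=sys --with-libtiff=sys --with-zlib=sys --with-expat=sys --enable-webviewwebkit=no --prefix=/opt/cpp_dependencies/2018Q1/usr
make -j 4
make install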

Android NDK cmake and dependent libraries

I'd like to use a library (source code from GitHub) in my JNI code. But the library depends on two other libraries (NTL and Boost) that are not available in the Android NDK.
Now I am a bit confused and not sure whether I understand my next steps correctly.
C++ code for Android is built into shared libraries (.so) for every platform (x86_64, armv7, ...). Does this mean that NTL, Boost and the lib I want to use must be compiled by me from source for these platforms too? If yes, how do I do that correctly with cmake?
If I have to build all the libs for the specific platforms, which is the better way: static libs (.a + headers) or shared libs?
Do I really need to build NTL and Boost for all the platforms, or should I do it only for the library I need?
Is an Android.mk file required, or can it help with cmake? As I understand it, it is used with "ndk-build" only.
Generally, is this sequence of actions correct?
Build NTL for all platforms (.a + headers)
Build Boost for all platforms (.a + headers)
Build Library for all platforms (.so)
Add the Library's .so file as a dependency in the CMakeLists of the JNI project. (Do I still need the dependent libs and headers, or will those dependencies be encapsulated in the lib?)
C++ code for Android is built into shared libraries (.so) for every platform (x86_64, armv7, ...). Does this mean that NTL, Boost and the lib I want to use must be compiled by me from source for these platforms too? If yes, how do I do that correctly with cmake?
Yes, you'll need to build those libraries from source (or find a binary distribution for Android) if you want to use those libraries in your application. As for how to do that, you'll have to wait for someone else to answer or try Googling it. There are a handful of "how to build X for Android" tutorials out there, but I don't know if you'll find many for CMake since CMake is pretty new for Android.
If I have to build all the libs for the specific platforms, which is the better way: static libs (.a + headers) or shared libs?
That mostly depends on how many shared libraries you're building for your app. The ideal model for an app is to use a single shared library and statically link all of your dependencies into it (this avoids linker bugs on old versions of Android and keeps your app as small as possible). If you have multiple shared libraries for your own code, you'll need to use shared libraries for your dependencies to avoid ODR issues.
Do I really need to build NTL and Boost for all the platforms, or should I do it only for the library I need?
You'll need to do it for any platform you need to use those libraries on.
Is an Android.mk file required, or can it help with cmake? As I understand it, it is used with "ndk-build" only.
CMake and ndk-build should both work, but you might have an easier time finding porting instructions for ndk-build due to CMake's relative youth on Android.
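Not from the original answer — just a minimal sketch of the "single shared JNI library with statically linked dependencies" layout described above, assuming a prebuilt libntl.a per ABI under prebuilt/ and an NDK recent enough to ship build/cmake/android.toolchain.cmake (all names and paths are placeholders):
# hypothetical CMakeLists.txt fragment for the JNI library
cat >> CMakeLists.txt <<'EOF'
add_library(ntl STATIC IMPORTED)
set_target_properties(ntl PROPERTIES
    IMPORTED_LOCATION ${CMAKE_SOURCE_DIR}/prebuilt/${ANDROID_ABI}/libntl.a)
# one shared library for the app; the static dependency gets linked into it
add_library(native-lib SHARED native-lib.cpp)
target_link_libraries(native-lib ntl log)
EOF
# configure and build for one ABI with the NDK's CMake toolchain file
mkdir build-armv7 && cd build-armv7
cmake -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
      -DANDROID_ABI=armeabi-v7a -DANDROID_PLATFORM=android-21 ..
cmake --build .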

Cannot find shared libraries on target after Cross-Compiling, Ubuntu to Beaglebone

I am working on a vision project using a BeagleBone White. I am using an i686 machine running Ubuntu 12.04 LTS, with the Eclipse IDE and CDT plugin, as my development machine. My BeagleBone is running the latest Angstrom distro provided from beaglebone.org. My question has to do with general cross-compiling methodologies.
My program uses the OpenCV and curl C++ libraries.
So far, on my host machine, I have downloaded the latest OpenCV and curl libraries and have cross-compiled them for the arm-linux architecture.
My test program compiles without errors on my development pc and generates an executable.
I use scp to transfer the executable to the BeagleBone over Ethernet, and when I run my program I get the following error on the BeagleBone:
"error while loading shared libraries: libopencv_core.so.3.0: cannot open shared object file: No such file or directory"
On the host computer OpenCV and Curl source and libraries are in two separate locations.
For OpenCV I used:
sudo cmake -DSOFTFP=ON -DCMAKE_TOOLCHAIN_FILE=../arm-gnueabi.toolchain.cmake ../../..
sudo make
sudo make install
which creates an ARM-compiled version of OpenCV in /home/OpenCVArm/opencv/platforms/linux/build_hardfp/install/ on my host.
For Curl I used:
sudo ./configure --host=arm-linux-gnueabi --build=i686-linux CFLAGS='-Os' --with-ssl=/usr/bin/openssl --enable-smtp
sudo make
sudo make install
which installs the ARM-compiled curl library in /usr/local/ on the host.
To link all the libraries in my program, I use the following command in Eclipse:
arm-linux-gnueabi-g++ -L/usr/local/lib -L/home/OpenCVArm/opencv/platforms/linux/build_hardfp/install/lib -L/usr/arm-linux-gnueabi/lib -o "HelloWorlTest" ./src/HelloWorlTest.o -lopencv_highgui -lopencv_core -lopencv_imgproc -lcurl
My questions are:
It appears I can get rid of my shared library error on the bone, by copying the appropriate libraries from my arm-compiled versions on the host to the target. So the target needs a copy of all libraries as well in order for the program to run. Since these are shared libraries and they are not included in the final executable, why do I need to compile the source for the target platform on the host in order to make the host linker happy? It appears the arm-compiled versions of the shared libraries are never used on the host. I initially thought it was so they would be packaged with the executable, but that is obviously incorrect.
If I copy the needed shared libraries from the host to the directory where the executable is stored on the target, the program still fails to find the shared libraries. The program will only run if I place a copy of the needed .so files in the /usr/lib/ folder on the target. What folders are searched for shared libraries when running an executable? Why won't it find shared libraries within its own local folder?
As I add more libraries to my project, what is the best way to manage them and get them onto the target? I really do not want to download the source on my host, cross-compile for ARM, and then sift through all the libraries generated just to transfer the .so files I need to the bone. What is the proper way to provide the target with only the libraries needed by the executable? Is there a tool/plugin to manage or automate this process?
How can I determine which libraries are actually required, irrespective of all the libraries I added to the Eclipse linker?
If I wanted to tell Eclipse not to use shared libraries, how do I change the build scripts for OpenCV and curl, and modify Eclipse, so that static libraries are used instead?
When doing embedded programming and cross-compiling, is it more typical to use shared libraries or static libraries?
Thanks for the help.
Having the shared libraries on the host is just there to make the linker happy. The linker looks in the shared libraries to make sure the symbols your program uses can be resolved; they are not linked in or used for anything else.
/lib and /usr/lib are the usual place to find shared libraries. You can add directories to the dynamic loader's search path by defining the LD_LIBRARY_PATH environment variable:
setenv LD_LIBRARY_PATH /home/me/lib:/home/me/lib2
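On a POSIX shell such as bash (the default on Ubuntu), the equivalent would be the following; note that it has to be set on the target, where the program actually runs:
export LD_LIBRARY_PATH=/home/me/lib:/home/me/lib2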
I have no clue if there is some kind of tool/plugin for this. I use scp. ;-)
The ldd command will tell you what shared libraries an executable uses.
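A hedged example, reusing the executable name from the link command above: ldd only understands binaries for the machine it runs on, so run it on the BeagleBone; on the x86 host you can get the same list of required libraries from the ELF dynamic section:
# on the target (BeagleBone)
ldd ./HelloWorlTest
# on the x86 host, for the ARM binary: list the DT_NEEDED entries
readelf -d ./HelloWorlTest | grep NEEDED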
Good question. I've never built them. Often packages will build both shared and static libraries.
I don't know if it is more typical to use shared libraries or not; I generally use static libraries. In my ELLCC cross-compiler project, I have used ELLCC to build itself. The resulting statically linked executables were actually smaller than the gcc-compiled executable that uses shared libraries. Of course, that is with an entirely different set of C++ and C standard libraries.

distribute gcc 4.7 program with shared libraries on OS X

I've compiled a command-line tool against some C++ dynamic libraries using GCC 4.7 on Mac OS X 10.8. On the development system, the compiler was installed by MacPorts into /opt/local and the libraries reside in /usr/local/lib. The dynamic libraries are compiled from source alongside the program. (But they're built by cmake and I don't want to mess with that system.)
When I try to run it on another machine by putting the necessary dylibs into the executable's directory and on DYLD_LIBRARY_PATH, it complains about an undefined symbol in the C++ standard library. It appears to be trying to load the older, built-in GNU standard library from /usr/lib/libstdc++.6.dylib.
How can I force the system to load the desired libstdc++?
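Not part of the original question, but a sketch of the usual way to investigate this; the binary name mytool and the MacPorts library directory are placeholders:
# show which dylibs the binary records, and where it will load them from
otool -L ./mytool
# option 1: put the directory containing the GCC 4.7 libstdc++ first on the search path
DYLD_LIBRARY_PATH=/opt/local/lib/gcc47 ./mytool
# option 2: rewrite the recorded install name so the binary looks next to itself
install_name_tool -change /usr/lib/libstdc++.6.dylib @executable_path/libstdc++.6.dylib ./mytool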