Can I use boost on uclibc linux? - c++

Does anyone have any experience with running C++ applications that use the boost libraries on uclibc-based systems? Is it even possible? Which C++ standard library would you use? Is uclibc++ usable with boost?

We use Boost together with GCC 2.95.3, libstdc++ and STLport on an ARMv4 platform running uClinux. Some parts of Boost are not compatible with GCC 2.x, but the ones that are work well in our particular case. The libraries we use the most are date_time, bind, function, tuple and thread.
Some of the libraries we had issues with were lambda, shared_ptr and format. These issues were most likely caused by our version of GCC, since it has problems with large numbers of includes and deeply nested templates.
If possible, I would recommend running the Boost test suite with your particular toolchain to verify compatibility. At the very least you could build a native toolchain to ensure that your library versions are compatible.
We have not used uClibc++ because that is not what our toolchain provider recommends, so I cannot comment on that particular combination.

We are using many of the Boost libraries (thread, filesystem, signals, function, bind, any, asio, smart_ptr, tuple) on an Arcom Vulcan which is admittedly pretty powerful for an embedded device (64M RAM, 533MHz XScale). Everything works beautifully.
GCC 3.4, but we're not using uClibc++ (Arcom provides a toolchain which includes libstdc++).
Many embedded devices will happily run many of the Boost libraries, assuming decent compiler support. Just take care with usage. The Boost libraries raise the level of abstraction and it can be easy to use more resources than you think.

I googled "uclibc stlport". It seems there are at least a few versions of uclibc for which stlport can be compiled (see this).
Given that, I'd say Boost is just a few compilation steps away. I have read a message by David Abrahams (an active member of the Boost community) saying that Boost does not depend directly on the libc used. But some libraries may still cause problems; Boost.Python, for instance, depends on something else (Python in my example) that might be difficult to compile with uclibc.
Hope this helps

I have not tried it, but I don't know of anything about uclibc that would prevent Boost from working.
Try it and see what happens, I would say.

Yes, you can use Boost with uclibc.
I tried this with Boost 1.45 and uclibc on an ARM9260:
Use a fresh OpenEmbedded
Configure it to use Angstrom
Configure Angstrom to use uclibc
Build Boost with bitbake: bitbake boost
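A hedged sketch of what the "configure Angstrom to use uclibc" step amounts to in OpenEmbedded's conf/local.conf (DISTRO and TCLIBC are standard OE variables; the exact distro string below is an assumption and depends on your OE/Angstrom release):

DISTRO = "angstrom-2008.1"
TCLIBC = "uclibc"

With that in place, bitbake boost cross-compiles Boost and its dependencies against uclibc.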

Related

what is the difference between lib-mingw and lib-vc2019

I'm new to Windows programming and found that lots of prebuilt libraries for Windows offer libraries like lib-mingw, lib-vc2019, lib-vc2017...
Could anyone help point out:
What is the difference? Which library should I use in which case?
If I want to use Clang on Windows, which one should I use?
Why are these different libraries rarely seen on Linux (say, Ubuntu)? Do package managers like apt hide this detail? In other words, why is there no such thing as lib-gcc.a or lib-clang.a on the Linux platform?
Mingw (GCC), VC2019 and VC2017 are different compilers. Use the library corresponding to your compiler.
I'm not sure, but I think none of them will work with Clang. At least on Linux, GCC and Clang are very similar: they are mostly binary compatible, share many compiler flags and many compiler extensions, and Clang tries to make it easy to replace GCC in your build pipeline. But all of this information is about Linux.
These libraries are not seen on Linux because all of these compilers are Windows compilers.
You can always build a library with your compiler to use it in your project with your compiler (if you have the sources).
If it's a third-party closed-source library and you are a paying customer, you can ask if they will build it for you. It's usually better to add a new compiler to the build pipeline than to lose a customer.
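To make "use the library corresponding to your compiler" concrete, here is a hedged illustration (foo is a made-up library shipped in lib-vc2019 and lib-mingw variants) of linking the matching prebuilt binaries.

With the Visual Studio 2019 toolchain:
cl /EHsc app.cpp lib-vc2019\foo.lib

With MinGW GCC:
g++ app.cpp -Llib-mingw -lfoo -o app.exe

Swapping the variants usually fails at link time or at run time, because the compilers use different name mangling and different C++ runtime libraries.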

How do I make a C++ (shared) library compatible with clang and GCC?

I've written a fairly substantial C++11 library, and I'm planning to allow download of pre-compiled versions from my website.
So I've set up an automated build which compiles the library with clang and makes it available for download, but this has exposed a problem: if I try to use the clang-compiled library with GCC I get undefined references (mainly related to std::string). I think this is related to the dual-ABI changes in GCC 5.1, but I'm not sure how to fix it.
My question is, what flags should I set, or practices should I follow to make a C++ library compatible with both clang and GCC?
Or should I give up and compile two separate libraries?
As already mentioned in several places (e.g. here), libc++ is not fully binary compatible with libstdc++.
There are several options, but some of them are somewhat not-so-straightforward.
Compile two separate libraries - always working solution.
Remove incompatible containers from your interface (e.g. std::string) - but this might be a lot of work and is sometimes not a good idea.
Instruct the GCC users of your library that you link against libc++ and that they need to do the same (basic steps here). But I guess most GCC users do not want to do this.
Use clang with libstdc++ via the -stdlib=libstdc++ flag to be compatible with libstdc++ (as suggested in the other answer); see the example command after this list. This solution might be harder to set up on some platforms, though.
As already mentioned in the comments, I would suggest going with option 1.
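For option 4, a hedged example of the flags involved (file names are made up); both the library and the client code are built against libstdc++:

clang++ -std=c++11 -stdlib=libstdc++ -fPIC -shared mylib.cpp -o libmylib.so
clang++ -std=c++11 -stdlib=libstdc++ client.cpp -L. -lmylib -o client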
There are several options:
Don't distribute it in binary form. Instead, make it easy to build everywhere (e.g. by using CMake, or autotools or ...)
Make it header only. This is by far the simplest solution but might not be what you want. It only really makes sense for templated code, and incurs a heavy impact on compile-time performance of your library.
Tell people to link with libstdc++ when using Clang and your library. Suboptimal solution (I for one like to check my code against libc++ as well as libstdc++), but (virtually) every Linux user has libstdc++ installed anyway. Make sure to pick a slightly older version (the one shipped in the latest Debian Stable is a good choice), because newer versions might introduce new symbols that older versions are missing. Newer versions should be ABI compatible anyway.
Note the situation for Visual Studio users is even worse, where every single compiler release mandates a new binary because they guarantee absolutely nothing with respect to the C++ libraries' or compiler's ABI.
Another option is for your shared library not to expose any C++ standard library types in its interface, and to supply a header file with your shared library that converts std::string to the types consumed by your library, such as struct my_string_span { char const *begin, *end; }, and other standard containers as necessary.
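A minimal sketch of that idea (the names my_string_span and my_library_frobnicate are hypothetical): the exported function only sees the plain C-style span, and the inline overload in the public header does the std::string conversion on the caller's side, with the caller's own standard library:

#include <string>

// ABI-neutral view of a string: no standard library types cross the library boundary.
struct my_string_span { char const *begin, *end; };

// The only symbol actually exported from the shared library.
extern "C" int my_library_frobnicate(my_string_span text);

// Inline convenience overload: compiled by the caller's own compiler and
// standard library, so no libstdc++/libc++ mismatch can occur.
inline int my_library_frobnicate(std::string const &text)
{
    my_string_span span = { text.data(), text.data() + text.size() };
    return my_library_frobnicate(span);
}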

MinGW vs MinGW-W64 vs MSVC (VC++) in cross compiling

Let's put it like this: we are going to create a library that needs to be cross-platform, and we choose GCC as the compiler. It works awesomely on Linux, but we also need to compile it on Windows, and we have MinGW to do that work.
MinGW tries to provide a native way to compile C++ on Windows, but it doesn't support some features such as std::mutex and std::thread.
We have MinGW-w64, which is a fork of MinGW that supports those features, and I was wondering which one to use, knowing that GCC is one of the most used C++ compilers. Or is it better to use MSVC (VC++) on Windows and GCC on Linux, and use CMake to deal with the compiler differences?
Thanks in advance.
Personally, I prefer a MinGW based solution that cross compiles on Linux, because there are lots of platform independent libraries that are nearly impossible (or a huge PITA) to build on Windows. (For example, those that use ./configure scripts to setup their build environment.) But cross compiling all those libraries and their dependencies is also annoying even on Linux, if you have to ./configure and make each of them yourself. That's where MXE comes in.
From the comments, you seem to worry about dependencies. They are costly in terms of build environment setup when cross compiling, if you have to cross compile each library individually. But there is MXE. It builds a cross compiler and a large selection of platform independent libraries (like boost, QT, and lots of less notable libraries). With MXE, boost becomes a lot more attractive as a solution. I've used MXE to build a project that depends on Qt, boost, and libexiv2 with nearly no trouble.
Boost threads with MXE
To do this, first install mxe:
git clone -b master https://github.com/mxe/mxe.git
Then build the packages you want (gcc and boost):
make gcc boost
C++11 threads with MXE
If you would still prefer C++11 threads, then that too is possible with MXE, but it requires a two stage compilation of gcc.
First, checkout the master (development) branch of mxe (this is the normal way to install it):
git clone -b master https://github.com/mxe/mxe.git
Then build gcc and winpthreads without modification:
make gcc winpthreads
Now, edit mxe/src/gcc.mk. Find the line that starts with $(PKG)_DEPS := and add winpthreads to the end of the line. And find --enable-threads=win32 and replace it with --enable-threads=posix.
Now, recompile gcc and enjoy your C++11 threads.
make gcc
Note: You have to do this because the default configuration supports Win32 threads using the WINAPI instead of POSIX pthreads. GCC's libstdc++, the library that implements std::thread and std::mutex, doesn't have code to use WINAPI threads, so the build adds a preprocessor block that strips std::thread and std::mutex from the library when Win32 threads are enabled. By using --enable-threads=posix and the winpthreads library, instead of having GCC try to interface with Win32 in its own libraries (which it doesn't fully support), we let winpthreads act as glue code: it presents a normal pthreads interface for GCC to use and implements that interface with WINAPI functions.
Final note
You can speed these compilations up by adding -jm and JOBS=n to the make command: -jm, where m is a number, means build m packages concurrently, and JOBS=n, where n is a number, means use n processes to build each package. In effect they multiply, so pick m and n such that m*n is not much more than the number of processor cores you have. E.g. if you have 8 cores, then m=3, n=4 would be about right.
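For example, on an 8-core machine the earlier Boost build could be invoked as:

make -j3 JOBS=4 gcc boost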
Citations
http://blog.worldofcoding.com/2014_05_01_archive.html#windows
If you want portability, use the standard way: the <thread> library from C++11.
If you can't use C++11, pthreads can be a solution, although VC++ cannot compile it.
Don't want to use either of these? Then just write your own abstraction layer for threading. For example, you can write a class Thread, like this:
class Thread
{
public:
    explicit Thread(int (*pf)(void *arg));
    void run(void *arg);
    int join();
    void detach();
    ...
};
Then write an implementation for each platform you want to support. For example:
+src
|---thread.h
|--+win
|--|---thread.cpp
|--+linux
|--|---thread.cpp
After that, configure your build script to compile win/thread.cpp on Windows and linux/thread.cpp on Linux.
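A minimal sketch of what linux/thread.cpp could look like, assuming (hypothetically, since the private part of the class is elided above) that Thread stores the entry point as pf_ and a pthread_t as handle_:

#include <pthread.h>
#include "thread.h"

namespace {
    // Bundles the entry point and its argument so both fit through
    // pthread_create's single void* parameter.
    struct StartArg
    {
        int (*pf)(void *);
        void *arg;
    };

    void *trampoline(void *p)
    {
        StartArg *s = static_cast<StartArg *>(p);
        int (*pf)(void *) = s->pf;
        void *arg = s->arg;
        delete s;
        pf(arg);   // the int result is ignored in this sketch
        return 0;
    }
}

Thread::Thread(int (*pf)(void *arg)) : pf_(pf), handle_() {}

void Thread::run(void *arg)
{
    StartArg *s = new StartArg;
    s->pf = pf_;
    s->arg = arg;
    pthread_create(&handle_, 0, &trampoline, s);   // error handling omitted
}

int Thread::join()
{
    return pthread_join(handle_, 0);
}

void Thread::detach()
{
    pthread_detach(handle_);
}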
You should definitely use Boost. It's really great and does all things.
Seriously, unless you want to use some synchronization primitives that Boost.Thread doesn't support (such as std::async), take a look at the Boost library. Of course it's an extra dependency, but if that doesn't scare you, you will enjoy all the advantages of Boost, such as cross-compiling.
Learn about differences between Boost.Thread and the C++11 threads here.
I think this is a fairly generic list of considerations when you need to choose multi-platform tools or sets of tools; for a lot of these you probably already have an answer:
Tool support: who are you going to get support from, and how, if something doesn't work? How strong are the community and the vendor?
Native target support, how well does the tool understand the target platform?
Optimization potential?
Library support (now and in the intermediate future)?
Platform SDK support, if needed?
Build tools (although not directly asked here, do they work on both platforms; most of the popular ones do).
One thing I see that seems to not really have been dealt with is:
What is the target application expecting?
You mention you are building a library, so what application is going to use it, and what does that application expect?
The constraint here is that the target application dictates the most fundamental aspect of the system, the very tool used to build it. How is the application going to use the library?
What API and what kind of API is needed by or for that application?
What kind of API do you want to offer (C-style, vs. C++ classes, or a combination)?
What runtime is it using, will it be the same, or will there be conflicts?
Given these, and the possibility that the target application may still be unknown, maintain as much flexibility as possible. In this case, endeavour to maintain compatibility with gcc, mingw-w64 and msvc. They all offer a broad spectrum of C++11 language support (true, some more than others) and are generally supported by other popular libraries (even if those other libraries are not needed right now).
I thought the comment by Hans Passant...
Do what works first
... really does apply here.
Since you mentioned it: the mingw-builds packages for mingw-w64 support std::thread etc. with the posix build on Windows, both 64-bit and 32-bit.

Managing external libraries (e.g. boost) during transition to C++11

I want to move my current project to C++11. The code all compiles using clang++ -std=c++0x. That is the easy part :-) . The difficult part is dealing with external libraries. One cannot rely on linking one's C++11 objects with external libraries that were not compiled with c++11 (see http://gcc.gnu.org/wiki/Cxx11AbiCompatibility). Boost, for example, certainly needs re-building (Why can't clang with libc++ in c++0x mode link this boost::program_options example?). I have source for all of the external libraries I use, so I can (with some pain) theoretically re-build these libs with C++11. However, that still leaves me with some problems:
Developing in a mixed C++03/C++11 environment: I have some old projects using C++03 that require occasional maintenance. Of course, I'll want to link these with existing versions of external libraries. But for my current (and new) projects, I want to link with my re-built C++11 versions of the libraries. How do I organise my development environments (currently Ubuntu 12.04 and Mac OS X 10.7) to cope with this?
I'm assuming that this problem will be faced by many developers. It's not going to go away, but I haven't found a recommended and generally approved solution.
Deployment: Currently, I deploy to Ubuntu 12.04 LTS servers in the cloud. Experience leads one to depend (where possible) on the standard packages (e.g. libboost) available with the Linux distribution. If I move my current project to C++11, my understanding is that I will have to build my own versions of the external libraries I use. My guess is that at some point this will change, and there will be 'standard' versions of library packages with C++11 compatibility. Does anyone have any idea when one might expect that to happen? And presumably this will also require a standard solution to the problem mentioned above - concurrent existence of C++03 libs and C++11 libs on the same platform.
I am hoping that I've missed something basic so that these perceived problems disappear in a puff of appropriate information! Am I trying to move to C++11 too soon?
Update(2013-09-11): Related discussion for macports: https://lists.macosforge.org/pipermail/macports-users/2013-September/033383.html
You should use your configure toolchain (e.g. autotools) to "properly" configure your build for your target deployment. Your configure tests should check for ABI-compatible C++11 binaries and instruct the linker to use them first if detected; if not, either fail or fall back to a C++03 build.
As for installing C++11 3rd-party libraries in a separate parallel directory tree, this isn't strictly necessary. Library versioning has been around for a long time and allows you to place different versions side by side on the system, or wherever you'd like, again based on configure.
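As a hedged illustration of the side-by-side approach (the prefix path, toolset and example program are made up): rebuild Boost for C++11 into its own prefix and point only your C++11 projects at it, leaving the distro's C++03 packages untouched.

bjam toolset=clang cxxflags="-std=c++11 -stdlib=libc++" linkflags="-stdlib=libc++" --prefix=$HOME/opt/boost-cxx11 install
clang++ -std=c++11 -stdlib=libc++ -I$HOME/opt/boost-cxx11/include -L$HOME/opt/boost-cxx11/lib app.cpp -lboost_program_options -o app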
This might seem messy, but configure toolchains were designed to handle these messes.

BOOST libraries in multithreading-aware mode

It is possible to compile the Boost libraries in so-called thread-aware mode; if so, you will see "...-mt..." appear in the library name. I can't understand what it gives me, and when do I need to use such a mode? Does it give me any benefits?
More than that, I'm really confused by the Boost Threads library being compiled in NO-thread-aware mode (with no -mt in the name). It does not make any sense to me. Looks self-contradictory :/
Thanks a lot for any help!
Because you did not specify how you built Boost, and on what platform, I'll explain the whole story. On both Linux and Windows, the Boost.Thread library is built in MT mode. On Windows, by default, you get the -mt suffix for it. On Linux, by default in 1.42, you get no suffix. The reason you get no suffix on Linux is that pretty much no other library uses such a convention, and it's much less important on Linux anyway.
Does this clarify things?
There is an option to put the "-mt" suffix back (bjam --layout=tagged):
--layout=<layout> Determines whether to choose library names
and header locations such that multiple
versions of Boost or multiple compilers can
be used on the same system.
versioned - Names of boost binaries
include the Boost version number, name and
version of the compiler and encoded build
properties. Boost headers are installed in a
subdirectory of <HDRDIR> whose name contains
the Boost version number.
tagged -- Names of boost binaries include the
encoded build properties such as variant and
threading, but do not include the compiler name
and version, or Boost version. This option is
useful if you build several variants of Boost,
using the same compiler.
system - Binary names do not include the
Boost version number or the name and version
number of the compiler. Boost headers are
installed directly into <HDRDIR>. This option
is intended for system integrators who are
building distribution packages.
The default value is 'versioned' on Windows, and
'system' on Unix.
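For example, a hedged sketch of a build invocation (run from the Boost source root; add your usual install options) that produces tagged names with the -mt suffix:

bjam --layout=tagged threading=multi variant=release stage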
MT enables multithreaded support in the Boost libraries, meaning you are safe to use them in your multithreaded programs (at least as far as the libraries' internal code is concerned).
And indeed, building the threads library in "no threads" mode does not make any sense, but I was under the impression that that specific build target is disabled.
Check these out:
http://sodium.resophonic.com/boost-cmake/current-docs/build_variants.html
http://www.boost.org/doc/libs/1_41_0/more/getting_started/windows.html#library-naming
You can build Boost with or without multi-threading support (threading=multi|single). Boost.Thread forces a multi-threaded build of its library by setting threading=multi in its Jamfile (the bjam equivalent of a Makefile).
So, independently of whether you request threading support or not, Boost.Thread always provides it. Hence you can find both names.
Since, under Linux, the -mt version is aliased/bound to the regular version, it makes no difference. In a vanilla modern system, both are simply included for ease of compilation.
I'm not a Boost guru, but I assume it is this:
In a MT environment, any global or shared data may have more than one thread trying to access it at the same time, which can lead to data corruption. An MT-aware object will use synchronisation (Critical Sections, Mutexes, etc.) to ensure that only one thread can access data at a time.
There might be functions in the Boost thread library that still work in single-threaded programs. Alternatively, the functions may resolve to no-ops (harmless do-nothing functions) so that the same program can be compiled with MT (and the Boost functions work) or single-threaded (and the Boost functions do nothing) without having to change the code.
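As a hedged illustration of that idea (Counter is a made-up class, not anything from Boost's internals), this is the kind of locking an MT-aware component does around its shared data:

#include <boost/thread/mutex.hpp>

class Counter
{
public:
    Counter() : value_(0) {}

    void increment()
    {
        boost::mutex::scoped_lock lock(mutex_);  // released at end of scope
        ++value_;
    }

    int value() const
    {
        boost::mutex::scoped_lock lock(mutex_);
        return value_;
    }

private:
    mutable boost::mutex mutex_;
    int value_;
};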