Using library in iOS simulator: Linking with Unix Conformance Layer - c++

I am developing a framework for other iOS developers and I am using boost as a dependency. I am creating a boost.framework that contains a fat library with slices for armv6, armv7, armv7s, arm64, i386 and x86_64. Compilation and linking seem to work fine, but using my library and the boost.framework in Xcode 5.0.2 on the simulator results in the following error:
Detected an attempt to call a symbol in system libraries that is not present on the iPhone:
pthread_cond_init$UNIX2003 called from function _ZN5boost18condition_variableC2Ev
However, deploying the app on a device does not cause any problems at all.
After looking around I found a StackOverflow entry explaining that these $UNIX2003 function names are part of the OS X libraries. Based on that I checked the linking of the library: only the i386 parts are linked against the $UNIX2003 variants (which is in accordance with Apple's own documentation), while the arm* architectures use the unsuffixed versions.
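For reference, this is roughly how such a check can be done; the framework binary path below is only an assumption, adjust it to your build output:

```
# Extract the i386 slice from the fat binary and list its undefined symbols
lipo boost.framework/boost -thin i386 -output /tmp/boost_i386
nm -u /tmp/boost_i386 | grep UNIX2003
```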
My question is: what can I do to get it running on the simulator? Do I need to recompile boost with specific flags? Is there an option to tell the simulator to shut up? Or is there at least a way to tell the simulator to use the actual device libraries rather than the i386 ones?
There is the possibility of writing these $UNIX2003 functions myself and having them delegate to the real ones. But since there are quite a few function calls involved I would rather not do that, especially since the developers using my framework would presumably need to do the same (which I would like to avoid).

After coming across a discussion on an unrelated mailing list, I found the original culprit: the boost library was built against the Mac OS X SDK (and thus, I assume, the OS X version of the C/C++ runtime), not against the iPhone Simulator SDK. Fixing this produced the framework in the proper formats, containing only references to the proper versions of the system calls.
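A minimal sketch of that fix, assuming boost is built with b2/bjam and the Xcode command-line tools are installed; the SDK lookup, deployment target and architecture flags below are assumptions to adapt to your own build script:

```
# Point the simulator slice at the iPhoneSimulator SDK instead of the
# Mac OS X SDK; repeat analogously for x86_64, and use the iphoneos SDK
# for the arm* device slices before lipo-ing everything together.
SIMULATOR_SDK="$(xcrun --sdk iphonesimulator --show-sdk-path)"
./b2 toolset=clang link=static stage \
    cxxflags="-arch i386 -isysroot $SIMULATOR_SDK -mios-simulator-version-min=7.0" \
    linkflags="-arch i386 -isysroot $SIMULATOR_SDK"
```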

Related

dynamic library using boost has undefined references when built on ARM architecture

I have a C++ based dynamic library that I have built for the big 3 OSs and that relies heavily on boost. Currently, I am compiling it for the Raspberry Pi. It took me a while to find the magic words to get the library to even build (-frepo as a compiler flag was the key, but I confess that I am not certain why).
Now, when I try to link to the library, I get an 'undefined reference' error to every boost call that my library makes, i.e.:
`libmylib.so`: undefined reference to `boost::shared_ptr<boost::detail::thread_data_base>::shared_ptr()'
When I build libmylib.so, I also build a custom version of boost as libboost.a. This all compiles and links fine on other OSs and non-ARM architectures, so I tried adding -lboost as one of the flags, but I still get the same plethora of undefined reference errors from libmylib.so.
Needless to say, all my paths are correct.
It seems like linking behaves a bit differently on the Raspberry Pi than it does on other Linux systems. For example, I built a static library (libmythread.a) that uses libpthread. When I link against libmythread.a, I also get undefined reference errors unless I add -lpthread to the build recipe. On my ThinkPad running Fedora, I never had to do this, since I included -lpthread when compiling the static library libmythread.a.
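For what it's worth, static archives do not record their own dependencies, so the usual pattern is to repeat every transitive library on the final link line, after the libraries that reference it. The commands below are only illustrative, reusing the library names from the question:

```
# Link order matters with the GNU linker: libraries that provide symbols
# must come after the objects/libraries that use them.
g++ -o mythreadapp main.o -L. -lmythread -lpthread
g++ -o consumer consumer.o -L. -lmylib -lboost -lpthread
```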
I would love to find a tutorial or guide that explains these discrepancies. I would also love to overcome them!
I also tried the same build on a conventional Linux machine and everything linked fine, no problem. At least I know that my build process is OK. This does open up the possibility, though, that the -frepo flag is doing something funny that I don't understand, and that this could be the root of the problem.
Solved. In the end, the trouble stemmed from the -frepo flag. It was needed to compile a file called legacy_abi.cpp, a part of my library that supports third-party developers on older and more exotic OSs/compilers. That file isn't needed on the Pi, so I removed it from the build, dropped the -frepo flag, and everything is happy.
One final note: aptitude (for the Pi, anyway) only supplies boost up to 1.49 (as far as I can tell), while my project requires boost >= 1.50. This is an inherited project, so I'm still discovering all its little idiosyncrasies.

Third Party library requires different version of the same DLL my application does

I'm writing an application that uses both Intel's TBB library, and an API from a company called Maplink, which also uses TBB. The problem is that both my application and the Maplink API want to load TBB.dll from the directory containing my application's binary. The version of TBB.dll that Maplink provided with their API differs from the one my application requires, and they can't both co-exist in the application's executable directory. Do I have any option here other than statically linking TBB into my application so that it doesn't try to load the wrong version of TBB.dll that the Maplink API is using?
In the real world, it is a bad idea to mix different versions of the same DLL. You should really try and get your platform aligned. It is not called package hell for nothing.
That being said, it very much depends on TBB.dll whether multiple versions can coexist at once. You might be able to statically link your code against your version of TBB, but in doing so you will need to make sure the statically linked-in symbols are not dynamically visible (a toolchain-dependent linker option). The code of yours that depends on TBB would probably also have to be linked in a separate linker step from the one that links against Maplink, and the final application would have to be linked without itself linking against TBB.dll.
At least that is how it could work for .so files on Linux.
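As an illustration of the Linux/.so variant alluded to above, one possible approach is to hide the statically linked TBB symbols from the dynamic symbol table; the paths and the use of GNU ld's --exclude-libs option are assumptions, not something the original Windows/TBB setup prescribes:

```
# Build a shared object that carries its own private, statically linked
# copy of TBB whose symbols are not exported, so nothing else in the
# process can accidentally bind to them.
g++ -shared -fPIC -o libmycode.so mycode.o \
    /path/to/my/tbb/libtbb.a \
    -Wl,--exclude-libs,libtbb.a
```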
As mentioned in the comments, you may put the newer version of tbb.dll into your application directory, and it should work properly for both the application and the 3rd party library it uses. For example, the recent version - TBB 4.2 - is binary compatible with old versions back to TBB 2.0.

How to deploy C++ app on Linux

My C++ app depends on GCC, the MongoDB C++ driver and Boost. My current approach is to keep the OS consistent: I develop on Ubuntu 12.04 64-bit Desktop and deploy on Ubuntu 12.04 64-bit Server, and I install the same versions of the dependencies on the target server.
But if I want to develop my C++ app on Ubuntu 13.04 with the newest Boost, the newest MongoDB driver and GCC 4.8.1, what is the easiest way to deploy it to an Ubuntu 12.04 server?
Static linking?
Dynamic linking, and also deploying all dependencies to the target server?
Which way is simpler? Sometimes I cannot compile the libraries on the target server.
If the dependencies are small, the easiest way is to link everything statically. It is fairly easy during the build step and nothing fancy is needed. However, with bigger libraries and a bigger project this can get inconvenient.
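A rough sketch of what such a static build could look like; the library paths and the extra system libraries are assumptions, and the MongoDB driver and Boost may need more libraries than shown:

```
# Link Boost and the MongoDB driver as static archives and pull the C++
# runtime in statically too; only C-level system libraries stay dynamic.
g++ -o myapp main.o \
    /usr/local/lib/libmongoclient.a \
    /usr/local/lib/libboost_thread.a /usr/local/lib/libboost_system.a \
    -static-libstdc++ -static-libgcc \
    -lpthread -lssl -lcrypto
```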
I think best practice would be to compile the dependencies into shared objects, ship them alongside the binaries, and run things so that the dynamic loader looks in your directory first. This is possible with, for example, LD_LIBRARY_PATH, e.g. LD_LIBRARY_PATH=/where/did/i/ship/lib:$LD_LIBRARY_PATH my_binary.
It can be somewhat cumbersome, as you need to set up your build system to produce shared objects and to package everything properly.
I'm pretty sure some of the pre-compiled programs shipped for Linux work this way, although strangely I can't point to a specific one off the top of my head.
It depends on your application. If your application consists of only one specific binary, then statically linking all C++ libraries is in order. You can safely link all C libraries dynamically, as the C ABI is unchanging; this just leaves you with version dependencies. However, in most cases the major SO-name versions are compatible, and libraries with differing major SO-names can be installed in parallel, so I'd rely on the package manager to install those. C++ libraries are tricky due to the lack of a common ABI: even a mere compiler version bump can make them incompatible *sigh*.

Linking Statically with glibc and libstdc++

I'm writing a cross-platform application which is not GNU GPL compatible. The major problem I'm currently facing is that the application is linked dynamically against glibc and libstdc++, and almost every new major update to these libraries is not backwards compatible. Hence, random crashes are seen in my application.
As a workaround, I distribute binaries of my application compiled on several different systems (with different C/C++ runtime versions). But I want to do without this. So my question is, keeping licensing and everything in mind, can I link against glibc and libstdc++ statically? Also, will this cause issues with rtld?
You don't need to.
Copy the original libraries you linked against to a directory (../lib in this example) in your application folder.
Like:
my_app_install_path/
    bin/
    lib/
    documentation/
Rename your app to something like app.bin. Substitute for your app a little shell script that sets the environment variable LD_LIBRARY_PATH to the library path (keeping the previous LD_LIBRARY_PATH contents, if any) and then launches app.bin. Now the dynamic loader should be able to find the dynamic libraries you linked against, and you don't need to link them statically into your executable.
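A minimal sketch of such a wrapper script, assuming the install layout shown above and the app.bin name (both are just the suggestions from this answer):

```
#!/bin/sh
# Resolve the install directory, put the bundled libraries first on the
# dynamic loader's search path, then start the real binary.
APP_DIR="$(cd "$(dirname "$0")" && pwd)"
export LD_LIBRARY_PATH="$APP_DIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$APP_DIR/bin/app.bin" "$@"
```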
Remember to comply with the LGPL by giving the required attribution for the libraries and pointing out in the documentation where their source can be downloaded.
glibc is under the LGPL. Under section 6. of LGPL 2.1, you can distribute your program linked to the library provided you comply with one of five options. The first is to provide the source code of the library, along with the object code (source is optional, not required) of your own program, so it can be relinked with the library. You can alternatively provide a written offer of the same. Your own code does not have to be under the LGPL, and you don't have to release source.
libstdc++ is under the GPL, but with a major exception. You can basically just distribute under the license of your choice without providing source for either your own code or libstdc++. The only condition is that you compile normally, without e.g. proprietary modifications or plugins to GCC.
IANAL, and you should consider consulting one if you need real legal advice.
Specifying the option -static-libgcc makes the GCC driver link the GCC runtime support library (libgcc) statically instead of using the shared libgcc; the analogous option for the C++ standard library is -static-libstdc++. Note that neither option links glibc itself statically.
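For illustration, one way to check the result; the binary name is hypothetical and ldd output varies by system:

```
# Build with the GCC runtime and the C++ standard library linked statically,
# then confirm that neither libgcc_s nor libstdc++ shows up as a runtime
# dependency; libc (glibc) will still be listed.
g++ -o myapp main.o -static-libgcc -static-libstdc++
ldd myapp
```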
I have to ask: what the heck are you doing to the poor library functions?
I have some cross platform software as well. It runs fine on Linux systems of all sorts. Build with the oldest version of software that you want to support. The glibc and libstdc++ libraries are really very backward compatible.
I have built on CentOS 4 and run it on RHEL 6 beta. No problems.
I can build on stable Debian and run it on testing.
Now, I do sometimes have trouble with some libraries if I try to build on, say old Debian and try to run it on CentOS 5.4. That is usually due to distribution configuration choices that are different, like choosing threading or non-threading.

Can I use boost on uclibc linux?

Does anyone have any experience with running C++ applications that use the boost libraries on uclibc-based systems? Is it even possible? Which C++ standard library would you use? Is uclibc++ usable with boost?
We use Boost together with GCC 2.95.3, libstdc++ and STLport on an ARMv4 platform running uClinux. Some parts of Boost are not compatible with GCC 2.x, but the ones that are work well in our particular case. The libraries that we use the most are date_time, bind, function, tuple and thread.
Some of the libraries we had issues with were lambda, shared_pointer and format. These issues were most likely caused by our version of GCC since it has problems when you have too many includes or deep levels of template structures.
If possible I would recommend you to run the boost test suite with your particular toolchain to ensure compatibility. At the very least you could compile a native toolchain in order to ensure that your library versions are compatible.
We have not used uClibc++ because that is not what our toolchain provider recommends so I cannot comment on that particular combination.
We are using many of the Boost libraries (thread, filesystem, signals, function, bind, any, asio, smart_ptr, tuple) on an Arcom Vulcan which is admittedly pretty powerful for an embedded device (64M RAM, 533MHz XScale). Everything works beautifully.
GCC 3.4, but we're not using uClibc++ (Arcom provides a toolchain which includes libstdc++).
Many embedded devices will happily run many of the Boost libraries, assuming decent compiler support. Just take care with usage. The Boost libraries raise the level of abstraction and it can be easy to use more resources than you think.
I googled "uclibc stlport". It seems there are at least a few versions of uclibc for which stlport can be compiled (see this).
Given that, I'd say Boost is just a few compilation steps away. I have read a message by David Abrahams (an active member of the boost community) saying that Boost does not depend directly on the libc used. But some libraries may still cause problems; Boost.Python, for instance, depends on something else (Python, in this example) that might be difficult to compile with uclibc.
Hope this helps
I have not tried it, but I don't know of anything about uclibc that would prevent Boost from working.
Try it and see what happens, I would say.
Yes, you can use boost with uclibc. I tried this with boost 1.45 and uclibc on an ARM9260:
Use a fresh OpenEmbedded.
Configure it to use Angstrom.
Configure Angstrom to use uclibc.
Build boost: bitbake boost
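A very rough sketch of what that configuration can look like in OpenEmbedded's conf/local.conf; the distro name, machine name, and even the exact variable names are assumptions and differ between OE/Angstrom releases, so check the documentation of the branch you check out:

```
# conf/local.conf (illustrative values only)
DISTRO  = "angstrom-2010.x"    # assumed Angstrom distro name
MACHINE = "your-arm926-board"  # hypothetical machine name
TCLIBC  = "uclibc"             # select uClibc instead of (e)glibc
```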