First, I'll explain the big picture:
I am creating an application where I separated most of the features into different libraries. One of them contains some classes that use curl. I currently use 64-bit Ubuntu to develop and test it, but my production environment is a NAS with an ARM processor. I intend to also make it for Windows later.
Where I am now:
My application is running on Linux and on the ARM-based NAS. However, I don't link against libcurl; I invoke the curl command-line tool internally to do what I need. This has some drawbacks:
As a programmer, I consider it an ugly practice. I should link to libcurl; that is the normal, clean way of using features from other software components.
It requires the curl executable to be installed on the target. Not only do I not want to rely on this, but after a system upgrade on the NAS I found out that I can no longer rely on it anyway.
What I want
As I intended to use curl as a library anyway, I first tried to do it the "soft way": dynamic linking. Although it worked in my development environment, it didn't work in the production one, because the curl library installed there doesn't behave as expected.
So my next attempt was using libcurl as a static library. I also consider it the most future-proof option, as it guarantees that, whether on the NAS or on any other system, the library I use will always be the same.
The problems I've solved so far
Including a static library in another static library
This is already well documented in other answers here on Stack Overflow: How to merge two "ar" static libraries into one
I did this to create a combined library from my own library and libcurl, and as far as I've checked, it worked.
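For reference, here is a minimal sketch of the merge itself, using GNU ar's MRI script mode (libmine.a and libcombined.a are placeholder names for my own library and the merged result):

ar -M <<'EOF'
CREATE libcombined.a
ADDLIB libmine.a
ADDLIB libcurl.a
SAVE
END
EOF
ranlib libcombined.a    # rebuild the archive index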
Building libcurl statically from source
There are also other answers covering this topic, and I managed to create a libcurl.a that contains libcurl's features only.
The problems I am still trying to solve
Building libcurl statically with all its dependencies
There is some information regarding this, for example here. I did what was suggested, calling the configure script with --disable-shared and --enable-static. I also ran "rm src/curl" before make and called make with LDFLAGS=-all-static, but the resulting libcurl still misses its dependencies (OpenSSL, pthreads, zlib...).
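Concretely, the sequence I ran from the top of the unpacked curl source tree looks like this (just a sketch of the steps described above):

./configure --disable-shared --enable-static
rm src/curl
make LDFLAGS=-all-static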
If I could solve this problem, it would answer my question. But not having successfully done that, I tried another approach:
Manually merging all libcurl dependencies in a final lib
As I did when merging my library with libcurl into a new library, I also tried to add curl's dependencies to it: zlib and OpenSSL. So I compiled both from source to create static libraries and included them in the merge. I was not able to fully check the result, as it seems that another dependency is missing: pthread. And I was not able to find pthread for downloading, compiling and statically linking.
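For what it's worth, the final link I am aiming for would look roughly like this (names are placeholders; the pthread part at the end is exactly the piece I have not managed to provide statically yet):

g++ -static -o myapp main.o -L. -lcombined -lssl -lcrypto -lz -lpthread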
Looking at the big picture, my main problem is: how do I include curl in my final application so that there is no external dependency on it?
I think that solving either of my two remaining problems would solve my main problem. But if that is not the case, I would also be glad to hear from someone who knows a better way of doing this, or who has ideally already solved a similar issue.
Related
I am pretty new to C++. At the moment I am working on a project involving ONNX, and I have a question: if I have to make my application portable (assuming that the person who will be using it does not have ONNX installed on their machine), do I have to put the whole library in my project folder, or how should I do it?
Thanks for the help
So one option is to link all your libraries statically, so that the person using the application does not need to have any particular library installed on their machine.
See this Q/A:
Compiling a static executable with CMake
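As a minimal sketch (the flags are illustrative; see the linked Q/A for the details and caveats), you can ask CMake for a fully static executable by passing static link flags at configure time:

cmake -DBUILD_SHARED_LIBS=OFF -DCMAKE_EXE_LINKER_FLAGS="-static" ..
cmake --build .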
I recommend setting up a CI machine where these libraries are available (or better yet, with a reproducible installation script/container configuration; yes, this represents some setup work for you) and then giving your clients the statically linked binaries built on this machine (so no hassle for them).
You'll read online about potential compatibility issues with statically linked binaries and complaints about exe size (some people believe -static is evil), but honestly this approach puts the least strain possible on client configuration.
I have a Solaris shared object (a common.so file) that runs as part of a third-party application (app.exe). I do not have access to the source code of the application. To the .so, I need to add the capability to post HTTP requests. My plan is to use libcurl with OpenSSL. The tricky part is that app.exe already has a dependency on an older version of curl (7.14), which does not support SSL with TLS v1.2.
I downloaded the source code and built curl (7.55.1) and OpenSSL .a files. I was also able to build common.so with a static dependency on these archive files. ldd does not show a dependency on curl or SSL .so files, and it also does not report any "symbol not found" errors.
With this result, I was expecting my version of curl to be invoked when the .so runs as part of the application, but it was not. Instead, curl_version() reports the older version and I get an "unknown SSL protocol" error.
I am using the Solaris Studio compiler. The application does not depend on curl libraries directly, but it depends on a different .so file which exports symbols with the same names as curl. I realized this from nm, and I am assuming that this .so file also links curl statically.
When app.exe loads the two SOs in question, it adds the functions of each to its symbol table. Now one of two possible scenarios must occur (which one actually happens is an OS detail, but irrelevant here...):
The newer version is loaded before the older version and the older one overwrites the newer one's entries.
The newer version is loaded after the older one, and as there are already entries in the table, it is not updated any more.
Now the cleanest solution (if applicable, i.e. if you have access to the sources) would be updating the other SO to use the newer version of curl. If you do so, consider building curl as a new SO instead of linking it statically into both common.so and the other SO.
Otherwise, to solve the issue directly, you would have to switch the order in which the app loads the SOs, which would probably mean decompiling app.exe and rebuilding it with the linkage order of the SOs changed. The problem then: there might be two versions of app.exe around, and you must make sure that only the correct one is distributed with your common.so. A potential source of trouble, too...
Apart from this, the best I can think of is a workaround:
You might change the curl prefix from curl_ to e.g. curl_755_. Do not try this by hand, though; you would most likely go crazy that way... Use a script (e.g. Perl or Python) instead. And if you ever update the curl sources, you can simply run it again...
Faster version, but potentially unsafe: just replace any occurrences. Safer: identify the externally visible functions (and possibly global objects) in a first run and keep them in a map, then replace any occurrence of a string contained in the map with the corresponding value.
The latter approach (the map) would additionally allow you to generate macros in the header files of the following form:
#define curl_xyz curl_755_xyz
These macros allow the sources of common.so to look just as if the original curl sources were used...
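For the crude variant, a throwaway shell command along these lines would do, assuming GNU sed and the usual curl source tree layout (the word-boundary matching is only approximate, which is why it is "potentially unsafe"):

grep -rl 'curl_' include lib src | xargs sed -i 's/\bcurl_/curl_755_/g'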
I am currently trying to set up a C++ project that uses the luabind library. Unfortunately, on my distro, namely Arch, this library isn't in the official repos, and the one in the AUR is out of date and fails to compile.
Considering that I need the library only for this project, I thought that I could make a sandboxed environment similar to Python's virtualenv by building the library and then installing (copying) the include files and resulting binaries into two subdirectories of my project called include and lib, respectively, which I'll add to the include and link paths when building. I understand why distributing libraries with your project is bad: security and bug fixes released in the meantime, for example. However, distributing DLLs is almost universally done on Windows (which I might target if I cross-compile), and many projects such as games on Linux tend to package their libraries to avoid inconsistencies between distros. Moreover, if I ever need a patched or forked version of a lib, I doubt I'll find it in any official repo.
So my question is:
Is what I described above a common practice? Should I do it like this?
If not, what is the most commonly-agreed-upon solution to this problem?
NOTE: I use CMake for build automation, if it matters.
EDIT: This question slightly overlaps with mine.
Your approach is interesting, but you don't need to devise a working system yourself, because it has already been done, and luckily, you are only one step away from the solution!
With CMake, it is easy to automate the building and linking of external source code using the ExternalProject module.
See http://www.kitware.com/media/html/BuildingExternalProjectsWithCMake2.8.html for useful information.
This approach has several advantages:
you do not have to include the library's source code in your repository
you can point to the specific version/git tag of the library that you know works with your software OR the latest release if you are certain it will not break compatibility
you do not have to write a complete CMakeLists.txt file to build a possibly complex code base
you can, if needed, configure the external project to build as a static library, so you will not have to distribute shared libraries
you can even bypass this entirely when it is not necessary, by first trying to detect a working version of the library on your system with the usual find_package call and only falling back to building it as an external project if none is found
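To make the mechanism concrete, here is roughly what ExternalProject automates for you, written out as manual shell steps (the URL and names are placeholders, and the snippet assumes the external library itself builds with CMake):

git clone https://example.org/some-lib.git external/some-lib-src
mkdir -p external/some-lib-build && cd external/some-lib-build
cmake ../some-lib-src -DCMAKE_INSTALL_PREFIX="$PWD/../some-lib-install" -DBUILD_SHARED_LIBS=OFF
make && make install
# your own CMakeLists.txt then points its include and link paths
# at external/some-lib-install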
I'm just installing the Boost library using an installer.
It's asking me which variants (about 8 options: 6 multithreaded and 2 single-threaded) I want to install. I'm only installing this to get to grips with Boost and have a practice, so I'm unsure.
Also, how do I use the libraries from VS2010 once I've 'installed' them using the installer?
Thank you in advance
The Boost documentation is your friend. Reading the Getting Started on Windows information would save you a lot of time.
Most of the libraries are header-only. You can use these just by including the correct headers, as described in the individual library docs. If you want to use any of the ones that are not, you are going to need either to build the libraries yourself or to install the ones that come prebuilt. This is what your question pertains to. So you really must answer your own question: what is your target platform, and do you have to support multi-threaded programming? If in doubt, install them all and use the ones you need on a case-by-case basis.
To use the Boost libs once you have installed or built them, just add the relevant library to your project's Linker options, as for any other static library.
I'm building a special-purpose embedded Python interpreter and want to avoid having dependencies on dynamic libraries so I want to compile the interpreter with static libraries instead (e.g. libc.a not libc.so).
I would also like to statically link all dynamic libraries that are part of the Python standard library. I know this can be done using Freeze.py, but is there an alternative so that it can be done in one step?
I found this (mainly concerning static compilation of Python modules):
http://bytes.com/groups/python/23235-build-static-python-executable-linux
Which describes a file used for configuration located here:
<Python_Source>/Modules/Setup
If this file isn't present, it can be created by copying:
<Python_Source>/Modules/Setup.dist
The Setup file has tons of documentation in it and the README included with the source offers lots of good compilation information as well.
I haven't tried compiling yet, but I think with these resources, I should be successful when I try. I will post my results as a comment here.
Update
To get a pure-static python executable, you must also configure as follows:
./configure LDFLAGS="-static -static-libgcc" CPPFLAGS="-static"
Once you build with these flags enabled, you will likely get lots of warnings about "renaming because library isn't present". This means that you have not configured Modules/Setup correctly and need to:
a) add a single line (near the top) like this:
*static*
(that's an asterisk, the word "static", then another asterisk, with no spaces)
b) uncomment all modules that you want to be available statically (such as math, array, etc...)
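After those edits, the top of Modules/Setup looks something like this (the module lines are only examples; take the exact lines, including any extra source files or linker flags, from your own Setup.dist):

*static*
math mathmodule.c
array arraymodule.c
time timemodule.c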
You may also need to add specific linker flags (as mentioned in the link I posted above). My experience so far has been that the libraries are working without modification.
It may also be helpful to run make as follows:
make 2>&1 | grep 'renaming'
This will show all modules that are failing to compile due to being statically linked.
CPython CMake Buildsystem offers an alternative way to build Python, using CMake.
It can build the Python lib statically, and include in that lib all the modules you want. Just set CMake's options:
BUILD_SHARED OFF
BUILD_STATIC ON
and set the BUILTIN_<extension> you want to ON.
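For example (a sketch; BUILTIN_MATH stands in for whichever BUILTIN_<extension> options you need, and the source path is wherever you checked out the buildsystem):

cmake path/to/cpython-cmake-buildsystem -DBUILD_SHARED=OFF -DBUILD_STATIC=ON -DBUILTIN_MATH=ON
make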
Using freeze doesn't prevent doing it all in one run (no matter what approach you use, you will need multiple build steps - e.g. many compiler invocations). First, you edit Modules/Setup to include all extension modules that you want. Next, you build Python, getting libpythonxy.a. Then, you run freeze, getting a number of C files and a config.c. You compile these as well, and integrate them into libpythonxy.a (or create a separate library).
You do all this once, for each architecture and Python version you want to integrate. When building your application, you only link with libpythonxy.a, and the library that freeze has produced.
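A rough sketch of those steps (version numbers, paths and the output directory are illustrative):

# 1. enable the extension modules you need in Modules/Setup, then build Python
./configure
make                                  # yields libpython2.7.a (the x.y depends on your version)
# 2. run freeze on the application's entry script, writing its output to a work directory
python Tools/freeze/freeze.py -o frozen myapp.py
# 3. freeze generates C files plus a config.c and a Makefile in ./frozen;
#    build them and integrate the objects into libpython2.7.a or a separate library
make -C frozen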
You can try ELF STATIFIER. I've used it before and it works fairly well. I just had problems with it in a couple of cases, and then I had to use another similar program called Ermine. Unfortunately, this one is a commercial program.