I sometimes dip my toes into free C/C++ projects that I would like to experiment with.
Nine times out of ten this results in a lot of pain getting all the dependencies to work, and, crucially, it inevitably breaks some other project's dependencies, so that when I go back to that other project I'm in for another masochistic session.
There has to be a better way but I've not been able to find it.
So how should shared libraries be installed so that only one project 'sees' them?
I'm on Mac OS so I'm really only interested in solutions that work there.
Extra kudos for solutions where I can use Homebrew to install those libraries.
When compiling (or rather linking) the application you can specify the -rpath option to tell it where to search for shared libraries at runtime. Including $ORIGIN as the first component of the path will cause the search to be done relative to the location of the executable.
This means that you can keep the executable's libraries separate from those of other apps.
For example, let's say you install the application in /opt/myapp/bin/ and put its libraries in /opt/myapp/lib/. You would then set the rpath to $ORIGIN/../lib/, and no matter where you move the /opt/myapp directory (as long as you move all of it), the application will find its own specific version of the libraries in its own lib/ dir.
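As a minimal sketch of what that link line could look like with a GNU toolchain (libfoo and the paths are placeholders; note that $ORIGIN is an ELF feature, so on macOS you would use @loader_path instead):

# hypothetical layout: /opt/myapp/bin/myapp and /opt/myapp/lib/libfoo.so
g++ main.o -o /opt/myapp/bin/myapp -L/opt/myapp/lib -lfoo -Wl,-rpath,'$ORIGIN/../lib'
# the single quotes keep the shell from expanding $ORIGIN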
That's the Unix solution. On Windows you could simply place the DLLs in the same directory as the executable, since Windows will search there first.
Related
I need to create a portable Linux program that uses a lot of additional libraries installed from yum (CentOS).
It is forbidden to install new packages on the target machines, and the necessary libraries are not present there.
How can I assemble my program and all of the required libraries into a single folder using the gcc toolchain? When I move this folder to another machine, my program should start and run successfully.
My program is ONLY allowed to use dynamic libraries. Static libraries are STRICTLY prohibited.
When I try to use rpath to make the program look in my own directory of libraries instead of /usr/lib64/, the additional libraries give an error (a glibc version conflict) after the folder is transferred to another machine.
This sounds like a doomed project, for anything non-trivial.
Static libraries are not the issue though. Since they're just collections of .o files, you can unpack them. You can then state that you have just linked object files. Stupid rules give stupid results.
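As a sketch of that trick (libfoo.a and main.o stand in for whatever archive and objects you actually have):

ar x libfoo.a              # unpack the static archive into its member .o files
gcc main.o *.o -o myprog   # link the object files directly; no static library is used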
One thing I can't ignore here, though, is software licensing, even if the question seems to gloss over it. You don't need a license for libraries installed via yum, since YOU aren't shipping them. But you absolutely need licenses when you are shipping these libraries in one form or another as part of your product. And given the stupid rules, (L)GPL is likely out of the question, so you will need to obtain commercial licenses for all 7 libraries.
I have been developing a research code on an Ubuntu machine, using CMake to generate the Makefiles for a C++ project. I link in several shared libraries which are rather involved to set up and build on a machine (one in particular has a dozen or so version-specific dependencies which are themselves non-trivial to build). Some are also custom builds of the library (bug fixes).
I am more familiar with the Windows environment and the CLR, but what I am hoping to achieve is to build the binary and include all of the shared libraries along with it. Hopefully the dynamic linker on the other environment (Red Hat EL6) would then be able to use those shared objects at runtime.
Since the linker doesn't look in the application's path, I assume I would also need to put the shared libraries into a user-specific library path for it to find them.
Is there a nice way, using CMake (perhaps CPack), to build the binary and 'package' all of the shared objects with it for the other machine? Then I could (even if manually) install the shared libraries for my user only, and run the binary there.
I'm hoping the answer is not using static libraries, as that has given me a lot of trouble for these dependencies in the past.
I'm a Linux noob, so if my issue is a lack of understanding of a better approach, I'm all ears :)
One approach is to set the "rpath" of the binary, which hardcodes some additional search paths. One can set it to $ORIGIN, which means that libraries in the same directory as the binary itself will be used first.
IF(UNIX)
SET(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_RPATH}:\$ORIGIN/../bin:\$ORIGIN")
ENDIF()
This CMake code takes effect at install time (make install), so first you need to set up the INSTALL commands of your CMake setup. INSTALL both your binary and all extra shared objects. Finally, CPack internally runs an "install", so once make install works, you can use CPack to automatically build a TGZ, or even an RPM if you're willing to invest the time to get it set up.
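A rough sketch of the resulting workflow from a build directory, assuming CPack is already included in your project (the install prefix is just an example):

cmake -DCMAKE_INSTALL_PREFIX=/opt/myapp ..   # configure; the rpath above is applied at install time
make
make install    # runs your install() rules for the binary and the extra shared objects
cpack -G TGZ    # runs the same install rules internally and packs the result into a tarball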
Here's an old answer of mine which talks about a similar but not identical issue, linked for completeness:
https://stackoverflow.com/a/22209962/1401351
So I recently got fed up with Windows and installed Linux Mint. I am trying to get a project I have in Code::Blocks to build. I have installed Code::Blocks, but I need glew (as well as a few other libraries). I found it in the software manager and installed it, and I've managed to locate and include the header files. But I feel like the next step should be relatively straightforward and all over the internet, yet (perhaps due to a lack of proper terminology) I have so far been unable to locate an answer.
Do I need to locate the files on my system and link to each library manually? This is what I did on Windows, but there I had downloaded the binaries myself and knew where they were. I found one library in the software manager and linked to it manually, but it just feels like I'm doing it the wrong way. Since it's "installed" on the system, is there some quick way to link against it?
You should use two linker flags, '-l' and '-L'. You can set these flags somewhere in the project properties.
The first one, '-l', tells the linker to link against a particular library. Take glew, for example: there is probably a file named libglew.so in /usr/lib, and when you link your program with the '-lglew' flag, it will be linked against the glew library. The linker looks for libraries in a few standard places: /usr/lib, /usr/local/lib and a few extras. If you have your libs in a nonstandard place, use the '-L' flag to point at those dirs.
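As a sketch on a plain command line (the exact library name can differ by distro; many ship the file as libGLEW.so, in which case the flag is -lGLEW, and a real glew program will typically need other libraries such as -lGL as well):

g++ main.cpp -o demo -lGLEW                  # library found in a standard location
g++ main.cpp -o demo -L/opt/glew/lib -lGLEW  # library installed in a nonstandard directory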
Many Linux distributions provide two kinds of packages for libraries: regular ones with just the runtime, and devel ones (usually prefixed or suffixed with dev or devel) with the header files and the development version of the libraries.
use build systems, Luke!
the typical way to develop/build software in *nix world is 3 steps:
configure stage -- before building something you have to work out what environment you are going to build your software in... is everything that's required actually installed? it wouldn't be good if at the compile stage (after a few hours of compilation) you (or the user building your software) got an error: unable to #include 'xxx.h'. the most popular build systems are cmake (my favorite, after autotools); you may also try scons or maybe the crazy (b)jam...
compile stage -- usually just make all
install stage -- deploy the just-built software into the system, or alternatively: build packages for the target distro (.deb/.rpm/etc)
at the configuration stage, using test scripts (don't worry, there are plenty of them for various use cases), you can find all the required headers/libraries/programs/compiler options/whatever you need to compile your package... and yes: do not use hardcoded paths in your Makefiles (or whatever you use to make your binaries)
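As a sketch, the three stages typically look something like this on the command line (the prefix is just an example):

./configure --prefix=$HOME/local   # configure stage: checks for required headers, libs, tools
make all                           # compile stage
make install                       # install stage: copies everything under the chosen prefix

# or the rough CMake equivalent, run from a build directory:
cmake -DCMAKE_INSTALL_PREFIX=$HOME/local .. && make && make install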
The answer to this question really depends on what you want to achieve. If you just want to build your app by yourself, then you can simply write the paths to the libraries in your makefile or your code editor settings. You may not even have to do that: if the libraries are installed by your Linux distribution's package manager, headers usually go to /usr/include and libraries to /usr/lib or /usr/lib64, etc. Those locations are standard and you do not need to specify them explicitly. You do, however, still need to specify the libraries you want to link to.
If you want to create an application that can be built by others, or by you on many different configurations/environments, using something like CMake would be very helpful.
I'm not new to C++, although I am new to Linux. I'm using CMake to precompile a cross-platform game engine with some third-party components, but I have a lot of doubts about using libraries. My question is how to work with third-party libraries and where to put them. Apt installs libs in their official places (/usr/local, /usr/lib/, ...), but I develop on Windows using local libs that sit in a folder in my project dir.
Also, I need a good tutorial on the rules of how libraries work. For example: when trying to compile my project, luabind asks for liblua.so.1, but AFAIK there is no way to generate this library from the source provided by Lua (at least by doing make, make install).
I know this question is fuzzy, but I don't have enough experience to be more concise.
Update: After reading some answers, a more concise question is the following: if I install all the third-party libraries, how can I distribute my program? How do I manage dependencies without a large README?
Where to put libraries
The best solution is to use your Linux distribution's packaging system (apt-get, yum, or similar) to install libraries from distro-provided packages wherever possible.
If the distro's packaged libraries aren't of a recent enough version, or if you need some nonstandard build options, or if you need a library that your distro doesn't provide, then you can build and install it yourself. You have two main options for where to put the library:
/usr/local (libraries under /usr/local/lib, headers under /usr/local/include). This installs the libraries systemwide and is probably the simplest solution, since you should then be able to build against them without taking any extra steps. Do NOT install libraries directly under /usr, since that will interfere with your distro's packaging system.
Under your project directory, as you did under Windows. This has the advantages of not requiring root access and not making systemwide changes, but you'll have to update your project's include paths and library paths, and you'll have to put any shared library files someplace where the dynamic linker can find them (using LD_LIBRARY_PATH or ld.so.conf).
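For the second option, here is a small sketch of pointing the dynamic linker at a project-local lib directory at run time (the paths and the mygame binary are hypothetical):

# shared libraries copied into ./third_party/lib inside the project tree
export LD_LIBRARY_PATH=$HOME/projects/mygame/third_party/lib:$LD_LIBRARY_PATH
./mygame    # the dynamic linker now also searches that directory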
How libraries work
See David A. Wheeler's excellent Programming Library HOWTO. I'd recommend reading that then posting any specific questions as new topics.
How to distribute your program
Traditionally, Unix / Linux programs do not include copies of their dependencies. It's instead up to the end user or developer to install those dependencies themselves. This can require a "large README," as you said, but it has a few advantages:
Development libraries can be installed, managed, and updated via the distro's package manager, instead of each source copy having its own set of libraries to track.
There's only one copy of any given library on a system, so there's only one place that needs updating if, for example, a security flaw is found. (For example, consider the chaos that resulted when zlib, a very widely used compression library, was found to have a security flaw, so every application that included an affected version needed to be updated.)
If your program is popular enough (and is open source or at least freely available), then package maintainers for various Linux distributions may want to package it and include it in their distro. Package maintainers really don't like bundled libraries. See, for example, Fedora's page on the topic.
If you're distributing your program to end users, you may want to consider offering a package (.deb or .rpm) that they could simply download and install without having to build from source. Ideally, from the end user's perspective, the package would be added to distros' repositories (if it's open source or at least freely available) so that users can download it using their package managers (apt-get or yum). This can all get complicated because of the large number of Linux distros out there, but a Debian/Ubuntu-compatible .deb and a Red Hat/CentOS/Fedora-compatible .rpm should cover a good percentage of end users. Building packages isn't too hard, and there are good howtos online.
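As a very small sketch of the Debian side (the package name, version, and paths are made up, and real packages normally go through debhelper and a proper source package rather than a hand-rolled tree):

mkdir -p pkg/DEBIAN pkg/usr/local/bin
cp mygame pkg/usr/local/bin/
cat > pkg/DEBIAN/control <<'EOF'
Package: mygame
Version: 1.0
Architecture: amd64
Maintainer: You <you@example.com>
Description: Example game binary package
EOF
dpkg-deb --build pkg mygame_1.0_amd64.deb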
For the first part of your question regarding Windows: there's no real standard place for libraries/headers on Windows, so the easy solution is to create your own. Simply provide a single lib/ and include/ on your system and have all your projects use it (by setting the path in a CMake file that you include everywhere). Put all third-party libs in there, for example:
your projects:
d:/projects/projectA
d:/projects/projectB
third party stuff:
d:/api/lib/lua.lib
d:/api/include/lua/....
(you can even use symlinks, aka 'directory junctions', if you have different versions)
and the corresponding cmake file:
include_directories( d:/api/include )
link_directories( d:/api/lib )
Okay, so this is one of the basic questions, and while I myself might not come across very clearly on this, here goes:
While building a project, your compiler will need to find the header files of the libraries. The headers must be in the include path.
After compilation is done, the linker will look for the library binaries (.so files or something like that). These must be in the library path.
That's the basics.
If you have some specific libraries, you can add them to your own project-specific lib/ and include/ directories and add those to the include path and the library path respectively.
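A minimal sketch of what that looks like with plain compiler flags (the directory names and libfoo are just examples):

g++ -Iinclude -c main.cpp         # compile: headers are found under ./include
g++ main.o -Llib -lfoo -o game    # link: libfoo.so (or libfoo.a) is found under ./lib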
Adding these dirs to these paths can be done in many ways, depending upon how you are building the project. I'm sure there is something called LD_LIBRARY_PATH involved in all this... but I don't really know the specifics involved with CMake.
A little googling can help you do the above with CMake.
Hope that helps,
jrh
If you are installing the libraries with a package manager, they will probably all end up in the right place. If not, you can get the compiler to search for them by providing an additional search path using the -L <path> flag. You should be able to pass this extra flag to CMake.
Incidentally, the -I <path> flag can be used to add an extra directory to search for include files.
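One way to pass those search paths through to a CMake build from the command line, as a sketch (the /opt/mylibs prefix is hypothetical):

cmake -DCMAKE_CXX_FLAGS="-I/opt/mylibs/include" \
      -DCMAKE_EXE_LINKER_FLAGS="-L/opt/mylibs/lib" \
      path/to/source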
I'm quite new to using Boost, and I can't seem to find documentation anywhere on how to distribute your application when using Boost.
Many of the libraries are shared libraries. I'm not expecting my users to have Boost installed, and I'm only using a single library (Regex), so is there an easy way to package the Regex library with my application without compiling against the static version?
Linux
For binary distribution, I recommend using the distribution's package management, which should take care of any dependencies.
Some commercial apps just ship binary blobs, and you need to install a version of Boost yourself.
Finding libraries is a bit more difficult on Linux. The dynamic linker does not automatically load shared objects from the current directory if they were linked at compile time (as opposed to being loaded at runtime with dlopen).
You have to use the LD_LIBRARY_PATH environment variable or use an rpath. Both have their drawbacks.
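A sketch of both approaches for the Boost.Regex case, assuming the shared library has been copied into a libs/ directory next to the binary:

# rpath approach: bake a search path relative to the executable into the binary
g++ main.o -o myapp -L./libs -lboost_regex -Wl,-rpath,'$ORIGIN/libs'

# LD_LIBRARY_PATH approach: point the dynamic linker at the bundled copy at run time
LD_LIBRARY_PATH=./libs ./myapp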
Windows
There is no way around including the DLLs. The usual approach is to put everything into a directory and zip it up.
Both
To build from source, you need the Boost sources anyway, so there is no need to include the libraries.
Most libraries in Boost are header-only anyway; Regex is not one of them. It should be sufficient to include the shared library or DLL for this one module.
On Linux you can check which shared libs your binary is linked against by using:
ldd binary