I integrated Tesseract into our own C++ library. However, it was pointed out that Tesseract can take too much time and space, and it isn't always needed when our library is used, since we also ship our own deep-learning models for text recognition. So I wonder whether it's possible to link certain libraries on demand, on both Windows and Linux.
Related:
I'm compiling a static C++ library, PoDoFo (edit: added "static" after reading the comments), and some of its dependencies are optional, such as libJPEG, libTIFF and libPNG. However, many of these libraries can optionally depend on one another as well; for instance, you can enable JPEG support in libTIFF by compiling libTIFF against libJPEG.
In a perfect world I would hope that libTIFF would enable the libJPEG functions by realizing it has access to libJPEG, because I included it in my compilation of PoDoFo. Sadly, I think enabling/disabling those functions is decided when I first compile libTIFF.
So that means my PoDoFo library will contain libJPEG multiple times, probably even identical copies if I use the same library everywhere.
Will the GCC linker realize this and eliminate/relink the duplicates down to just one copy of libJPEG?
Basically yes, it'll only include one copy.
The compile switches you are changing don't actually embed one library into another; they just enable functionality that requires those libraries. For example, libTIFF might include functions to convert between JPEG and TIFF formats if you enable libJPEG support, but it lets you compile the rest of the library without that functionality if you don't want it.
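The usual mechanism is a preprocessor switch fixed when the library itself is configured. A minimal sketch of the idea (the macro and function names here are hypothetical, not libTIFF's actual source):

    // optional_jpeg.cpp - hypothetical sketch of an optional-dependency switch.
    // Real libraries such as libTIFF use their own macro names, decided when
    // the library itself is configured, not when PoDoFo is built.
    #include <cstdio>
    #ifdef HAVE_LIBJPEG
    #include <jpeglib.h>    // header pulled in only for JPEG-enabled builds

    bool read_jpeg(const char* path) {
        std::printf("decoding %s via libjpeg\n", path);  // ... real decode here ...
        return true;
    }
    #else
    bool read_jpeg(const char*) { return false; }  // stub: feature compiled out
    #endif

    // With JPEG support:    g++ -DHAVE_LIBJPEG -c optional_jpeg.cpp   (link -ljpeg)
    // Without:              g++ -c optional_jpeg.cpp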
When you link a final application with PoDoFo, you will also have to link all of the optional dependencies you enabled. This can be automatic for dynamic libraries, but the dependencies will all be required at run time.
In almost all cases there is only one copy of each library linked with the final application - the one exception is if you mix static and dynamic libraries, but that's a whole new can of worms.
Assuming all libraries are dynamically linked, the runtime linker will only load a single copy of each dependent library (so a single copy of libJPEG will be loaded).
In a perfect world I would hope that libTIFF would enable the libJPEG functions by realizing it has access to libJPEG because I included it in my compilation of PoDoFo, but sadly I think enabling/disabling functions is decided when I first compiled libTIFF.
The feature you describe is called delay loading. It is supported on Windows but not on Linux (at least not out of the box; see the Implib.so tool).
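On Linux you can still get the same effect by hand with dlopen/dlsym. A minimal sketch, assuming a hypothetical exported function name:

    // on_demand.cpp - load a library only if/when it is actually needed (POSIX).
    // The library name and the symbol below are illustrative placeholders.
    #include <dlfcn.h>
    #include <cstdio>

    int main() {
        void* handle = dlopen("libtesseract.so", RTLD_LAZY);
        if (!handle) {
            std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;   // fall back to the built-in recognizer here
        }
        using init_fn = int (*)();
        auto init = reinterpret_cast<init_fn>(dlsym(handle, "ocr_engine_init"));
        if (init)
            init();     // use the optional engine
        dlclose(handle);
        return 0;
    }
    // Build: g++ on_demand.cpp -ldl
    // On Windows the linker can do this for you: link with
    // /DELAYLOAD:tesseract.dll (plus delayimp.lib) and the DLL is only
    // loaded on the first call into it.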
I am currently trying to evaluate Gradle's C++ support for a project that will have both Java and C++ components (interfacing via JNI). I could just use CMake for the C++ portion, but then I would have two build systems, which is less cleanly organized; I would therefore prefer to use Gradle's C++ system in a multi-project build if it has the support I need.

The main thing I can't find any detailed information on (with code examples, etc.) is linking against libraries. For CMake it is simple: use find_package or the pkg-config module; every library I have tried to use offers at least one of those. Gradle, however, only seems to document linking against C++ libraries built in the same project. What if, for example, I want to link against Vulkan, SFML, OpenGL, yaml-cpp, Boost, or any number of other established FOSS C++ libraries? The documentation also doesn't specify how to control dynamic versus static linking.
I'm a junior programmer. I have developed a Visual Studio C++ project with a fair number of dependencies: Boost, a fingerprint-recognition library, and the Windows Biometric Framework. As of today I know the Windows Biometric Framework can be obtained through standard Windows Update, so I am not concerned about that; to my knowledge, the application can find and link the WBF dependencies on the target computer by itself.
My concern is: what is the easiest way (not the most efficient; I need development speed here) to pack the executable with all the resources and dependencies it needs (Boost and the fingerprint-recognition SDK), so that I can minimize distribution troubles ("this DLL is missing, please reinstall the application", and the like) without having to compile everything on the client's computer?
I've seen a couple of approaches: copying the DLLs listed in the project configuration, or switching to static linking. But I don't know whether either is the simplest way; I have little confidence in my abilities here, and those methods seem quite manual. Is there an automatic way of doing these things?
I'm not familiar with the fingerprint library or WBF, but most of Boost lives in headers, so it's compiled in when you compile your application. Some parts, such as the threading library and system-specific calls (e.g. getting the CPU core count), are libraries that you link statically against.
In what form is the fingerprint library provided? If dynamic, there will be at least a .dll with a corresponding import .lib file: your application links statically against the import library when it is compiled and binds to the DLL at run time. Alternatively, the library may come as one large, single .lib that is linked into your application after it's compiled. If you have both options available and you only want to distribute the binary file, use static linking.
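For completeness, binding to the DLL by hand at run time looks roughly like this (the DLL and function names are placeholders, not the actual SDK exports):

    // runtime_bind.cpp - bind to a DLL at run time instead of via an import .lib.
    // DLL and function names are placeholders for whatever the SDK really exports.
    #include <windows.h>
    #include <cstdio>

    int main() {
        HMODULE mod = LoadLibraryA("fingerprint_sdk.dll");
        if (!mod) {
            std::printf("fingerprint_sdk.dll not found\n");
            return 1;
        }
        using match_fn = int (*)(const void*, const void*);
        auto match = reinterpret_cast<match_fn>(
            GetProcAddress(mod, "MatchFingerprints"));
        if (match) {
            // ... call match(candidate, enrolled) ...
        }
        FreeLibrary(mod);
        return 0;
    }
    // With an import .lib none of this is needed: you link the .lib, call the
    // functions directly, and the loader binds the DLL at process start-up.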
As on any system, you will need to ship every .dll your app links against and every external resource (images, config files, ...) your app uses. I usually build my Windows distributions with Inno Setup (http://www.jrsoftware.org/isinfo.php). Very easy to use.
On probably any OS it is possible to link the C/C++ standard library either statically or dynamically. On Windows I always prefer static builds, because they help avoid the "DLL hell" problem of different library versions being installed, or not installed, on a specific Windows version, edition, service pack, etc. Static linking makes software more portable and less dependent on what the end user has done to their operating system. (I have even seen end users SHIFT+DELete DLLs in system32 without being able to explain why, and users claim my app contained a virus because it tried to download dynamically linked prerequisites from the official Microsoft website...) So in my experience, static linking is usually better than dynamic linking on Windows.
However, I am new to Linux, so can anyone share their experience? My question is: which kind of linking (dynamic or static) is preferred on Linux, if we ignore the fact that dynamic linking saves memory and disk space, and if we plan to distribute the software with an automated installer? (Disk space and memory are cheap enough now that there is no reason to sacrifice the hours of work needed to create a really good, portable installer just to win some megabytes of RAM or disk.) Are there any Linux-specific issues with dynamic/static linking?
On Linux you normally have a package manager that ensures you only have one version of each library installed. So there is normally no DLL hell, and no problem with linking dynamically; dynamic linking is the standard approach on Linux.
I'd say the answer depends on how you distribute the software.
If you package the software for a specific Linux distribution and version, dynamic linking is usually preferred: you know which libraries will be on the system, and you can declare dependencies on them.
However, if you want to distribute the software as a Linux binary that runs on "any" system (as various games, or software like Matlab, do), you will end up with the same DLL (or .so) hell as on Windows: you don't know which versions of which libraries are on the system. Thus you will have to ship your own .so files or link statically.
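One common way to ship your own .so files is to bundle them next to the executable and bake a relative rpath into it. A sketch with hypothetical names:

    // main.cpp - application that bundles its own copy of a shared library.
    // libfoo and the paths below are hypothetical.
    extern "C" int foo_version();   // provided by the bundled libs/libfoo.so
    int main() { return foo_version(); }

    // Link so the loader searches the app's own directory first:
    //   g++ main.cpp -Llibs -lfoo -Wl,-rpath,'$ORIGIN/libs' -o app
    // At run time '$ORIGIN' expands to the directory containing `app`, so the
    // bundled libs/libfoo.so is found no matter where the app is installed.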
The whole point of dynamic linking is to reduce executable size and memory usage; if you ignore that, there is little left to discuss.
On the other hand, you mentioned saving memory and disk space. Saving disk space matters because when you want to distribute your app, you can't put a 2 GB download on the internet (a full build of the OpenCV library, for example, is about 2.1 GB). The solution is to link dynamically and load only the modules you actually need. This also makes multitasking efficient: only one copy of the module is created, and every program uses that same copy.
In particular:
For example, a media player application might originally be shipped with a codec that supports the mp3 file format. If the media player were statically linked, it would not be possible to dynamically update it to support a different file format without replacing the entire application. Dynamic linking means that a new version of the shared library containing a more up-to-date codec, which includes some enhancements and bug fixes, could be dynamically loaded by a dynamic linker into memory at run time to replace the original shared library.

A shared library can also be shared by more than one application. For example, two different media players could both use the same shared library containing the same codec. This potentially means that the device running the applications requires less physical memory, depending on the size of the dynamic linker.
Third, on Linux nearly everything is dynamically linked. A classic exception is /bin/ash.static, which also has a dynamic counterpart, /bin/ash; but this shouldn't stop you from linking statically on Linux.
With GCC, linking is dynamic by default; pass the -static flag to link the libraries statically.
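A minimal illustration (standard g++ and ldd usage; the exact ldd wording varies by distribution):

    // hello.cpp - trivial program for comparing the two link modes.
    #include <cstdio>
    int main() { std::puts("hello"); return 0; }

    // Dynamic (the default):   g++ hello.cpp -o hello
    // Fully static:            g++ -static hello.cpp -o hello_static
    // Check the result:        ldd hello          (lists libc.so.6 and friends)
    //                          ldd hello_static   ("not a dynamic executable")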
@Vitaliy, good that you brought this up. The important thing to note here is that smart linking and the creation of shared (dynamic) libraries are mutually exclusive: if you turn on smart linking, the creation of shared libraries is turned off.
Smart linking breaks the code into small blocks and loads only their dependencies. So if you call a dependency multiple times, it gets loaded multiple times. This gives very good execution time but very high compilation time, especially for large units, so there is a trade-off.
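For comparison, the closest GCC analogue is section-based garbage collection: the compiler puts each function in its own section and the linker drops whatever is unreferenced (these are real GCC/binutils flags; the file names are illustrative):

    // util.cpp - illustrating link-time removal of unused code.
    int used()   { return 1; }   // referenced from main.cpp -> kept
    int unused() { return 2; }   // never referenced         -> discarded

    // Build:
    //   g++ -ffunction-sections -fdata-sections -c util.cpp main.cpp
    //   g++ main.o util.o -Wl,--gc-sections -o app
    // Only code actually reachable from main() survives in `app`.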
We have been porting our Java API to C++ and creating a library file (.a) for Linux. Our API depends on the Boost framework and on log4cxx. (We discussed this with the first user, and they are okay with those dependencies.)
When I compile a sample application that uses our library file, it links against the .so files for the dependencies (libboost_system.so, libboost_thread.so, and liblog4cxx.so). When the application runs, it needs to find the specific versioned file names for each of those .so files, e.g. liblog4cxx.so.10.
If the user is using a newer version of Boost (for example), would they be able to link against their local version and run with that newer version (assuming backwards compatibility)? Is there some other way to deal with versions/dependencies like these (i.e. should we try to link the externally referenced libraries into our own library)?
Yes, the program can link against newer libraries with a compatible interface. The numbers in the file name encode the library's Application Binary Interface (ABI) version.
FreeBSD's handbook has more information about library versioning. The basic rules are:
Start from 1.0
If there is a change that is backwards compatible, bump minor number (note that ELF systems ignore the minor number)
If there is an incompatible change, bump major number
The linker will sort out the details to use the newest library version with a compatible interface. There is also more information available here on stackoverflow.
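To see how the mechanism works on Linux, here is a sketch of building a versioned library by hand (the library name is made up; real projects usually let the build system handle this):

    // mylib.cpp - tiny library to illustrate the version numbers.
    extern "C" int mylib_answer() { return 42; }

    // Build with an embedded "soname", the name the linker records in programs:
    //   g++ -shared -fPIC mylib.cpp -Wl,-soname,libmylib.so.1 -o libmylib.so.1.0
    //   ln -s libmylib.so.1.0 libmylib.so.1   # name the runtime loader resolves
    //   ln -s libmylib.so.1   libmylib.so     # name -lmylib resolves at link time
    // A program linked with -lmylib records only "libmylib.so.1", so it will run
    // against 1.1, 1.2, ... as long as the major (ABI) number stays 1.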