When we build the OpenCV library, either dynamically or statically, we produce several separate libraries. Take version 2.4.8 for example: we will have
opencv_core248, opencv_imgproc248, zlib, IlmImf, comctl32, opencv_highgui248, libpng
and so on. Clearly there are some dependencies between these libraries. For example, if I link only the opencv_core248 and opencv_imgproc248 libraries in a project, I get link errors; if I then also add the zlib library, the errors are resolved.
Question: I want to ask a very general question: how can I know the dependencies between all the libraries inside OpenCV? Are there some documents I can follow? Thanks.
I don't think there is a document listing all the dependencies between the OpenCV libraries.
However I can suggest two methods to find out these dependencies:
Using Dependency Walker, a free tool for analyzing executables and DLLs. For instance, if you open opencv_calib3dXXX.dll (where XXX represents your OpenCV version), you'll see that it requires opencv_coreXXX.dll, opencv_flannXXX.dll, opencv_imgprocXXX.dll, opencv_features2dXXX.dll, and some system DLLs.
Using the project structure generated by CMake, a free tool for cross-platform builds which is used for compiling OpenCV from source. For instance, if I generate the project structure for VS2010 and open it, I can right-click on the project associated with opencv_calib3d and view the project dependencies.
I mentioned the tools I know and use on Windows, but equivalent tools exist for other platforms.
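On Linux, for instance, the same information is available from the command line with ldd (shown below on /bin/ls as a stand-in; point it at whichever libopencv_*.so you want to inspect):

```shell
# ldd prints the shared libraries a binary or .so file depends on,
# much like Dependency Walker does for DLLs on Windows.
# /bin/ls is just a readily available example target.
ldd /bin/ls
```

Each output line shows a required library and the path it resolves to, so missing dependencies stand out as "not found".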
Related
I'm an amateur in programming. I was wondering how I can use boost's serialization only (https://www.boost.org/doc/libs/1_36_0/libs/serialization/doc/index.html).
When I download Boost, it has many libraries and is a big folder, but I just want to use the serialization library. Do my users need to install all of Boost in order for me to use serialization?
I'm a complete beginner, so if you can tell me each step I need to do to get serialization into my project, it'll be much appreciated. For example, do I have to statically link a library? I have no idea. Thank you for your help.
edit: I want my users to not have to deal with much. So is there a way to use Boost without the user having to install anything? Thank you.
You usually need to link the import library (traditionally a .lib file on Windows) that matches the dynamic library (.dll) used at runtime. Of course, the DLL needs to exist at runtime, so it needs to be "installed" (present on the target machine in a compatible form, i.e. matching the OS and architecture).
The good news:
MSVC will do "Auto Link" for the lib (https://www.boost.org/doc/libs/1_68_0/more/getting_started/windows.html#auto-linking)
If you build on a platform similar to your target platform, the default target will usually be compatible with the target
Note that you may need indirect requirements (such as Boost System).
Indeed, you can XCOPY-deploy the libraries in the same folder as the exe file, but that's not really a common approach and might not be the best idea if you have little experience.
If you can get your hands on a (free) installer builder (a quick google leads to things like https://www.techrepublic.com/blog/five-apps/five-apps-for-creating-installation-packages/) you'll enjoy the guidance of tools that know the intricacies involved.
"Does my user need to install all of boost in order for me to use serialization?"
When you link Boost, the MSVC++ runtime, and MFC statically into your application, you get a single executable that includes all dependencies. Then all your user has to do is double-click your application's .exe file.
Building Boost libs from scratch can be tricky, so to get started I recommend downloading prebuilt binaries. Make sure to download the package that exactly matches both your version of Visual C++ and the bitness (32/64) of the application you are building.
Though in the long run, it can be beneficial to build boost yourself, so you don't depend on the prebuilt binaries being up-to-date for the most recent version of VC++.
Make sure to add the directory path of the .lib files to your project's library path. You don't need to specify individual .lib files because boost uses auto-linking.
In case you need to know, the static lib files include "mt-s" in the file name (e. g. "libboost_serialization-vc141-mt-s-x32-1_68.lib" for the 32-bit release version and "libboost_serialization-vc141-mt-sgd-x32-1_68.lib" for the 32-bit debug version).
In your project settings, make sure to choose MFC static library ("Generic" category > "Use of MFC"). Also choose "Runtime Library" > "Multithreaded" (/MT) or "Multithreaded Debug" (/MTd), depending on your project configuration (C/C++ category). If you don't do this, the linking will fail or you will link to the boost DLLs instead.
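If the project ever moves to CMake, the same static-linking choices can be expressed there as well; the following is only a sketch under that assumption, with "app" as a made-up target name:

```cmake
# Sketch: statically link Boost.Serialization with a static runtime.
set(Boost_USE_STATIC_LIBS ON)      # pick libboost_*.lib instead of the DLL import libs
set(Boost_USE_STATIC_RUNTIME ON)   # matches the /MT "mt-s" library variants
find_package(Boost 1.68 REQUIRED COMPONENTS serialization)
add_executable(app main.cpp)       # "app" is a placeholder target
target_link_libraries(app Boost::serialization)
```

The two `set()` lines correspond directly to the "mt-s"/"mt-sgd" file-name tags described above.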
I am using pre-built 3.2.5 Eigen lib files, downloaded from website:
http://eigen.tuxfamily.org/index.php?title=Main_Page
I have heard that if I built the files myself on my PC, I could achieve better compatibility with my processor, which would lead to a slight increase in the library's performance. Currently I am struggling with the eigensolver calculation time being too long.
I use Visual Studio 2005, and I just add the Eigen files' location to my project's linker properties.
Is there any way to build those files myself on my platform? I am a bit confused about how I could do it. Is it related to CMake?
There is no library to build, as Eigen is a "pure template header library". From the main site:
Requirements
Eigen doesn't have any dependencies other than the C++ standard library.
We use the CMake build system, but only to build the documentation and unit-tests, and to automate installation. If you just want to use Eigen, you can use the header files right away. There is no binary library to link to, and no configured header file. Eigen is a pure template library defined in the headers.
You don't need to add the files' location to the linker, but to the (additional) include directories in your project or in a property sheet.
Regarding calculation time, make sure you're running in Release and not Debug; the speed difference can be a factor of around 100. Also, make sure that optimizations are turned on (/O2 or /Ox).
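For comparison, in a CMake-based project this amounts to nothing more than an include directory (the path below is a placeholder for wherever you unpacked Eigen):

```cmake
# Eigen is header-only: add its root to the include path, link nothing.
include_directories("C:/libs/eigen-3.2.5")  # hypothetical unpack location
```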
When I'm using CMake with a library that also uses CMake, I add the library's directory to my CMake project so that the library is built alongside my project. For example:
# add SFML library dependencies
add_subdirectory("third_party/lib/SFML")
include_directories("third_party/lib/SFML/include")
target_link_libraries(${CMAKE_PROJECT_NAME} sfml-system sfml-window sfml-graphics)
Then CMake automatically matches the project's Debug builds to the library's Debug build, and its Release builds to the library's Release build. In some cases it is useful for the build type to be controlled separately for the main project and each of the libraries it uses. For example, if I'm not interested in debugging inside the library code, I may want to build only my own code in Debug mode and link it against the Release version of the library, because I don't want to sacrifice the extra performance. In other cases I may want to debug inside just one of the libraries, if I suspect a bug in it, but for performance reasons link the Release versions of all the others. Is this possible, and what is the best way to achieve this behavior?
With both imported targets and dependent targets from the same build tree, you will always get the behavior you described: each configuration uses its own matching build of the library. Messing with this means fiddling with CMake's internals, so I'd advise against it.
If you want to link against a specific version of the library, the most robust way is to use find_library. Note that this will only work if the library dependency is already available in its binary form at configure time. That is, you can no longer build the dependency as part of the dependent project.
If that is not an option, consider using ExternalProject_Add to build the dependency and specify the location of the dependency binary manually.
All in all, your current approach is the most convenient one, so only change this if performance of the dependency's debug build is a real problem. Also note that while mixing debug and release builds is mostly fine for C libraries, it can easily break for C++ libraries, especially if you have standard library types on the interfaces.
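A minimal sketch of the find_library approach, continuing the SFML example from the question (the variable name and prebuilt-library path are hypothetical):

```cmake
# Look for a prebuilt Release binary of the dependency. This happens at
# configure time, so the library must already exist on disk.
find_library(SFML_SYSTEM_RELEASE
    NAMES sfml-system
    PATHS "${CMAKE_SOURCE_DIR}/third_party/prebuilt/lib"  # hypothetical path
    NO_DEFAULT_PATH)

# Link the same (Release) binary regardless of the current configuration.
target_link_libraries(${CMAKE_PROJECT_NAME} "${SFML_SYSTEM_RELEASE}")
```

Because the full path is baked in, CMake no longer substitutes a Debug variant when you switch configurations.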
I am currently trying to set up a C++ project that uses the luabind library. Unfortunately, on my distro, namely Arch, this library isn't in the official repos, and the one in the AUR is out of date and fails to compile.
Considering that I need the library only for this project, I thought that I could make a sandboxed environment similar to Python's virtualenv by building the library and then installing (copying) the include files and resulting binaries into two sub-directories of my project called include and lib, respectively, which I'd add to the include and linking paths when building. I understand why distributing libraries with your project is bad: security, and bug fixes released in the meantime, for example. However, distributing DLLs is almost universally done on Windows (which I might do if I cross-compile), and many projects such as games on Linux tend to package their libraries to avoid inconsistencies between distros. Moreover, if I ever need a patched or forked version of a lib, I doubt I'll ever find it in any official repo.
So my question is:
Is what I described above a common practice? Should I do it like this?
If not, what is the most commonly-agreed-upon solution to this problem?
NOTE: I use CMake for build automation, if it matters.
EDIT: This question slightly overlaps with mine.
Your approach is interesting, but you don't need to devise a working system yourself, because it has already been done, and luckily, you are only one step away from the solution!
Using CMake, it is easy to automate the building and linking of external source code, using the ExternalProject module.
See http://www.kitware.com/media/html/BuildingExternalProjectsWithCMake2.8.html for useful information.
This approach has several advantages:
you do not have to include the library's source code in your repository
you can point to the specific version/git tag of the library that you know works with your software OR the latest release if you are certain it will not break compatibility
you do not have to write a complete CMakeLists.txt file to build a possibly complex code base
you can eventually configure the external project to build as a static library so you will not have to distribute shared libraries
you can even bypass the build entirely when it isn't necessary, by first trying to detect a working version of the library on your system with the usual find_package call, and falling back to building it as an external project only if none is found
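The points above can be sketched as follows (the repository URL, tag, and build options are illustrative, not verified against the current luabind sources):

```cmake
include(ExternalProject)

# Build luabind from a pinned revision into a prefix inside the build tree,
# as a static library, so nothing has to be installed system-wide.
ExternalProject_Add(luabind_ext
    GIT_REPOSITORY "https://github.com/luabind/luabind.git"  # illustrative URL
    GIT_TAG        "v0.9.1"                                  # pin a known-good tag
    CMAKE_ARGS     -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps
                   -DBUILD_SHARED_LIBS=OFF
)
# Headers and libs then end up under ${CMAKE_BINARY_DIR}/deps,
# which you can add to your include and link paths.
```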
So I recently got fed up with Windows and installed Linux Mint. I am trying to get a project I have in Code::Blocks to build. I have installed Code::Blocks, but I need GLEW (as well as a few other libraries). I found it in the software manager and installed it. I've managed to locate and include the header files, but I feel like the next step should be relatively straightforward and all over the internet; perhaps due to a lack of the proper terminology, I have so far been unable to locate an answer.
Do I need to locate the files on my system and link to each library manually? This is what I did on Windows, but there I had downloaded the binaries myself and knew where they were. I found one library via the software manager and linked to it manually, but it feels like I'm doing it the wrong way. Since it's "installed" on the system, is there some quicker way to link?
You should use two linker flags, '-l' and '-L'. You can set these flags in the project properties.
The first one, '-l', tells the linker to link with a particular library. For GLEW, for example, there is probably a file named libGLEW.so in /usr/lib; when you link your program with the '-lGLEW' flag, it will be linked with the GLEW library. The linker looks for libraries in a few standard places, such as /usr/lib and /usr/local/lib. If you have your libs in a nonstandard place, use the '-L' flag to point to those directories.
Many Linux distributions provide two kinds of library packages: regular ones with just the runtime, and development ones (usually suffixed with -dev or -devel) containing the header files and development versions of the libraries.
use build systems, Luke!
the typical way to develop/build software in the *nix world has three steps:
configure stage -- before building anything, you have to find out what environment you are going to build your software in: is everything that's required installed? It wouldn't be good if, at compile stage (after a few hours of compilation), you (or the user building your software) got an error: unable to #include 'xxx.h'. The most popular build systems are cmake and autotools (my favorite); you may also try scons or maybe the crazy (b)jam...
compile stage -- usually just make all
install stage -- deploy the just-built software onto the system; or, alternatively, build packages for the target distro (.deb/.rpm/etc.)
at the configuration stage, using test scripts (don't worry, there are plenty of them for various use cases), you can find all the required headers/libraries/programs/compiler options/whatever you need to compile your package... and yes: do not hardcode paths in your Makefiles (or whatever you use to build your binaries)
The answer to this question really depends on what you want to achieve. If you just want to build your app yourself, you can simply write the library paths into your makefile or your code editor's settings. You may not even have to do that: if the libraries were installed by your Linux distribution's package manager, headers usually go to /usr/include and libraries to /usr/lib or /usr/lib64, etc. Those locations are standard, and you do not need to specify them explicitly. You still need to specify which libraries you want to link to, though.
If you want to create an application that can be built by others, or by you on many different configurations/environments, using something like CMake would be very helpful.
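For instance, with CMake the library lookup can be delegated entirely to find_package (a FindGLEW module ships with CMake, and recent versions provide the GLEW::GLEW imported target; treat this as a sketch):

```cmake
find_package(GLEW REQUIRED)            # locates the headers and libGLEW.so
add_executable(app main.cpp)           # "app" is a placeholder target name
target_link_libraries(app GLEW::GLEW)  # imported target also carries include dirs
```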