C++ Development Flow with 3rd Party Dependency [closed] - c++

I'm a Python developer with some background in other languages such as Ruby.
In both languages, dependencies are managed automatically by a package manager such as pip or gem. Anyone can install the dependencies by calling pip install -r requirements.txt, and it will install the necessary packages from the Python Package Index. There is an option to build a dependency manually from source and install it into the project, but it is not the recommended process, and I have never done it.
I notice that C++, unfortunately, handles dependencies quite differently, for several reasons (different compiler flavors, compiler parameters, platforms, etc.).
At the moment I am learning C++ using VS2015, and I keep stumbling over this library-dependency matter. With VS2015 there is a package manager similar to Python's: NuGet. However, not every library is available on NuGet; in fact, a lot of libraries are developed independently of any IDE.
First I'm trying to use Boost. There is a manual on how to build the project, but I'm not sure what I need. Do I need to build it from source, or do I just need a library that is already readily available?
The same goes for other libraries I found (e.g. Qt, yaml-cpp, googletest, etc.). They only document how to build them, not how to install them as a dependency.
Ultimately, I will need to use lots of 3rd-party libraries to be more productive. So here are some of my closely related questions.
How do C++ developers normally include a 3rd-party library into their project (what is the flow for installing a 3rd-party library)?
Do I have to build from source every time I want to include one? Or perhaps I just need the header files, which I could copy into my project directory?
I'm working in a team (git). Does each member of my team need to build the dependency manually? Can it be automated so that the process of including a new library is transparent for everyone?
Or perhaps I don't really understand what specific question I need to ask. But why is it so painful to reuse a library in C++?
Am I missing some fundamental understanding of the C++ environment?
I'm not sure how relevant it is, but CMake is the build tool most libraries use to build their projects. Do I really need to build these library projects myself?
More Questions:
After building some libraries, some of them generate a static library (.lib) or a dynamic library (.dll) to be included in the project. Is it correct to copy these generated libraries into our project? Should they be committed to source version control? Some libraries are very large and we don't want to maintain them in the repository, yet the entire team needs to get the libraries transparently.

I understand your situation quite well. You cannot see the forest because too many trees are standing in your way...
Let me get one thing clear before I start to address your specific questions:
Generally speaking, dependencies in C++ are not more complicated than in Python.
The command pip install -r requirements.txt will establish an internet connection and download the necessary libraries and files from a repository server to fulfill the requirements. Under the Linux operating system (Ubuntu), the command sudo apt-get install libboost-all-dev installs all required dependencies for Boost. This is possible because there is a whole environment of servers that hold source code as well as libraries and binaries, working together with the client programs (apt-get) that use them. This is essentially what the authors of pip have done for the Python world, including on Microsoft Windows. Microsoft itself has never provided this at the operating-system level; it has always been left to the programmer. NuGet is Microsoft's attempt to make up for past mistakes.
With that out of the way, let me address your questions:
It depends on the size of the 3rd-party library. Small libraries like pugixml can be included as source in the source tree of your project. Bigger libraries like Boost are better included as binary object code (library files). Not all libraries have binaries available for download (Boost does), so you might be required to build from source. Bear in mind that all binaries must be built with exactly the same compiler that you use in your project. The general steps to include such a library in your VS project:
Get the distribution files (either build from source or download and install binaries)
Add include paths to your Project:
Project > "projectname" properties > Configuration Properties > C/C++ > General > Additional Include Directories
Add the library directories and the libraries themselves:
Project > "projectname" Properties > Configuration Properties > Linker > General > Additional Library Directories, and Linker > Input > Additional Dependencies.
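If your team later moves to a CMake-based build (which several answers below discuss), the same two steps can be expressed in the project's CMakeLists.txt. This is only a rough sketch; the paths and the library name "bar" are purely illustrative:

    # Consume a pre-built third-party library "bar" (hypothetical paths)
    cmake_minimum_required(VERSION 3.13)
    project(myapp CXX)

    add_executable(myapp main.cpp)

    # Equivalent of "Additional Include Directories"
    target_include_directories(myapp PRIVATE "C:/thirdparty/bar/include")

    # Equivalent of "Additional Library Directories" + "Additional Dependencies"
    target_link_directories(myapp PRIVATE "C:/thirdparty/bar/lib")
    target_link_libraries(myapp PRIVATE bar)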
No. You normally just need the header files. But it's better to add the library's include path to your project instead of copying header files, because some projects (Boost, for example) have a huge hierarchy of header files.
It is a good idea for each member of your team to have the same development environment with the same set of libraries installed. There are tools for this task: Chocolatey builds on top of NuGet and is therefore Windows-centric; Vagrant deals with virtual machines and thus offers cross-platform development environments.
But more important is a decent source-control-management system. If you don't already use one, start using one today! It is the main collaboration tool, and it can really save your neck if you lose a developer machine.
There is another dependency problem: we've only addressed development dependencies above. There is also the problem of deployment dependencies:
your customers will need the libraries (*.dll files) that you used during development, so you will need to package them into your deployment package (installer) as well. This is a separate issue, which is probably already answered on SO.
Qt: if you start using Qt, I'd suggest that you also use its development environment, Qt Creator. It will automatically handle all dependencies, and it will automatically detect and use the Visual Studio compiler that you have installed. The IDE is quite close to Visual Studio.
CMake: no, it is not always required to use CMake to build a library project; some also ship Makefiles, and others use CMake to produce Makefiles. "Follow the instructions" is the best advice I can give here.
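For libraries that do ship a CMake build, the build-from-source flow is usually the same handful of commands regardless of the library. A generic sketch (the install prefix is just an example):

    # from a Developer Command Prompt, inside the library's source directory
    mkdir build
    cd build
    cmake .. -DCMAKE_INSTALL_PREFIX=C:/thirdparty/foo
    cmake --build . --config Release
    # then build the INSTALL target (the equivalent of "make install" on Linux)
    cmake --build . --config Release --target INSTALL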
Update 2015-10-24: paragraph point three reworked

How do C++ developers normally include a 3rd-party library into their project (what is the flow for installing a 3rd-party library)?
It depends... There are a lot of ways to redistribute C++ libraries.
Do I have to build from source every time I want to include one? Or perhaps I just need the header files, which I could copy into my project directory?
Most C++ libraries consist of two parts: binaries plus header files. But there are often problems if the library's binaries were built with a different compiler (or compiler version) than yours.
I'm working in a team. Does each member of my team need to build the dependency manually? Can it be automated so that the process of including a new library is transparent for everyone?
It depends on your team's guidelines. You can choose whatever works for you.
Or perhaps I don't really understand what specific question I need to ask. But why is it so painful to reuse a library in C++?
Partly because of the legacy of C, and because C++ is a low-level language compared with Python/Java/C#. C++ is supported on a lot of different platforms, including embedded ones, and it is often not possible to install a complex runtime on those platforms. So there is no mechanism to transparently link "modules" at runtime.
Hopefully there will be proper support for modules in the C++17 standard, and Microsoft will provide a technology preview of C++ modules in MSVC 2015 Update 1.
Am I missing some fundamental understanding of the C++ environment?
Yes. I suggest you read about compiling and linking in C/C++. These two steps usually come together, but they are different things.
The first thing to keep in mind: code in C/C++ is split into two parts: declarations (.h files) and implementations (.cpp files). The .cpp files are compiled into binaries; the .h files just declare the interfaces.

Related

Handling external C++ dependencies [closed]

The project structure we used to use was that code plus prebuilt external dependencies were source-controlled in SVN. This was cumbersome because the external libraries were large and didn't need to be source-controlled, since they were prebuilt binaries.
Now we have the source in git and the prebuilt binaries in the cloud. The developer has to download this lib folder from the cloud after cloning the repo. The problem here is that if you make changes to lib, then things will not build correctly until the developer goes and re-downloads this lib folder.
Our projects are generally developed for Windows (MSVC) but we just added Linux (GCC+Docker) compatibility and likely in the future Linux will be the main version. So now our libraries each have a Windows and a Linux build folder. Our dev environments are Windows + VSCode/WSL2/Docker for Linux.
What is the best, common practice here for handling external dependencies? I can think of two ways.
Version the lib folder in the cloud and check that version during the build. If Developer A adds/changes it and updates the CMakeLists file, then when Developer B updates his git repo and tries to build, CMake can see that the version of the libs folder he has is out of date, and he will be told to go update it. This is little effort and changes almost nothing about our process. The con is that Developer A has to remember to update the version both in the cloud and in the CMake check.
Build all external libraries locally. Use git submodules and have CMake build all dependencies while building the main project. I assume there's a way to cache it so it doesn't rebuild constantly (some of these libraries are large and take a long time to build). More work, but less maintenance and fewer extra developer steps. Also probably easier to link and include against.
The problem here is that if you make changes to lib, then things will not build correctly until the developer goes and re-downloads this lib folder.
A clear indication that the exact version you want to use is part of what you should track alongside your source code.
I assume there's a way to cache it so it doesn't rebuild constantly (some of these libraries are large and take a long time to build)
As long as files don't change, nothing needs to be rebuilt.
So, yeah, if your project depends on specific versions of external libraries, git submodules do sound kind of attractive.
Also note that other build systems (like Meson) have a neater understanding of in-tree dependency projects: they can check your system for an installed version of the dependency and, if it is not there in an appropriate version, download it and, if necessary, build it themselves.
So, the second option is probably the easiest to maintain solution, as you said.
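A minimal sketch of that second option, assuming each dependency lives as a git submodule under an extern/ directory and ships its own CMakeLists.txt (all names here are illustrative):

    # Top-level CMakeLists.txt
    cmake_minimum_required(VERSION 3.16)
    project(mainproject CXX)

    # Vendored dependencies, e.g. git submodules pinned to known commits.
    # They become ordinary targets in this build tree, so they are only
    # rebuilt when their own sources change.
    add_subdirectory(extern/somelib)
    add_subdirectory(extern/otherlib)

    add_executable(mainapp src/main.cpp)
    # The target names exported by each dependency's CMakeLists.txt will differ.
    target_link_libraries(mainapp PRIVATE somelib otherlib)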
I, however, come from a free and open-source background, my users and their platforms are diverse, and Linux distros have strict guidelines about not packaging N copies of the same dependency. That would make it harder to upstream such packages to Debian, Ubuntu, Fedora, Arch… So, for me the situation is this: if there's a library that we want to use in a project, we define very clearly what the oldest version of that library is that would work. Within a release cycle, we cannot bump the required version.
So, say we've released 2.0.0 of some software. The CMake files define which version of a library we support. "Releasing" software means that we guarantee to devs as well as to users that the next bugfix/feature-extension versions in our 2.a.b series still build on the same systems, and that includes the same libraries that might be installed there. So, if 2.0.0 built on your computer, so will 2.0.1 and 2.9.0. Development that requires a new version of an external dependency can only happen on a git branch that's not meant for further 2.a.b releases, but targeting an eventual 3.0.0. When picking minimum dependency versions for that 3.0.0 release, we look at what is commonly available on the operating systems we support. For example, if my timeline was that 3.0.0 be released within 2022 or 2023, that version would be the one available in Ubuntu 22.04 LTS (because that will be an important system for our users for a long time, and it is also relatively conservative), also looking at the Debian version most likely to be the current unstable (or stable, depending on what your target audience is), the RHEL version, the next Fedora, and what is currently available in our conda-forge and MacPorts repos.
Everything not available in tolerable versions through these standard packaging channels needs to be built locally anyway. It turns out that if you're not crazily progressive and don't try to support 5-year-old Linuxes, the number of projects that you need to build locally is quite small.
On windows, you're basically handicapped by Microsoft's inability to provide a really sensible way of downloading packages of binary shared libraries that are actually shared between different application software. So, on Windows systems, you're down to either doing all your builds locally, or using a third-party way of distributing platform-dependent packages, like Conan.
No matter how you do it, you'd want your build to fail as early as possible, with a clear indication that the library version found is not sufficiently new. CMake makes this easy; its find_package command takes a minimum version as an argument.
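A minimal sketch of such a version check, assuming a hypothetical dependency Foo that installs a CMake package config (the imported target name Foo::Foo is also an assumption):

    cmake_minimum_required(VERSION 3.16)
    project(myapp CXX)

    # Fails at configure time if Foo older than 1.4 (or no Foo at all) is found.
    find_package(Foo 1.4 REQUIRED)

    add_executable(myapp main.cpp)
    target_link_libraries(myapp PRIVATE Foo::Foo)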

Build Tools vs Package Manager

I am very much confused between these two terms, build tools and package managers. According to my current understanding, package managers are used to install the dependencies required for the code to execute, while build tools are used to package the code plus its dependencies into a single artifact, i.e. to build the code. Building our application makes it production-ready.
Am I right?
Short answer
Build systems/tools manage your compilation requirements.
Package managers/tools manage your library requirements.
A build tool may have integrated package management.
For example, in both C++ and Java you can directly call the compiler and provide all the include, source and library paths manually, or you can use a build system (make/CMake... for C++, Maven/Gradle/Ant... for Java).
When you link external libraries with your build system, it will do its best to find them in its search path and will link with the first version that meets its requirements, or it will tell you that it couldn't find them. Adding libraries manually is fairly easy, but sometimes each library you add will require yet another library along with it.
A package manager makes sure that your libraries are downloaded, are the right version, and that all the libraries they depend on are downloaded too. Some examples are Maven and Gradle, which have integrated package management for Java, and Conan is a fairly popular option to combine with CMake.
So ideally you would use both, but setting them up can be more work than you save by not doing things manually. It depends on your programming language, on whether you need multiple versions of something, and on your OS.

Is there a preferred canonical way to distribute C/C++ development packages (without source) on Windows?

Is there a preferred canonical way to distribute development packages (for example a DLL and headers) to developers (without source) on Windows?
Coming from a Linux background, my preference is to split a project into three packages:
A runtime - e.g. libfoo-1.2.3.rpm containing /opt/foo/lib/libbar.so -> /opt/foo/lib/libbar.so.1.2.3
A development package - e.g. libfoo-devel.1.2.3.rpm containing /opt/foo/include/bar.h
A source package - e.g. libfoo.1.2.3-src.tgz containing the source and build machinery
I expect few Windows developers would complain if given a zip file with a similar directory layout.
E.g.
foo-devel.zip:
foo/doc/foobar.pdf
foo/include/bar.h
foo/lib/libbar.dll
foo/lib/libbar.lib
For completeness only (off-topic), the runtime might be:
foo-runtime.zip:
foo/doc
foo/lib/libbar.dll
or:
foo-runtime.msi
- install libbar.dll to an appropriate location
However, I am still curious whether there is actually a preferred way of doing it.
For example, should you provide a foo-devel.msi to be installed on the build machine?
I am not interested in the question what should go into the library.
Anyone who is can see, for example, Distributing (native C++) libraries on Windows.
It is still worth bearing in mind ABI-compatibility issues if you provide a C++ interface rather than only a C one.
Another related question is Is there a best practices guide to distributing native C libraries for Windows?. That question covers the what but this one asks how.
The canonical (to an extent) way of distributing redistributable files on Windows is the Windows Installer merge module (.msm). It is intended to be integrated into another Windows Installer (.msi) package. DLLs should go there, as they are files needed at runtime.
As for development files (e.g. headers, libraries, etc.), there is no standard way of organizing them in the file system on Windows. Therefore it could be a simple archive or an .msi package. For an .msi package, it makes sense not to require an admin account to install it.

Ncurses static libraries to include with a C++ project

I have installed the latest ncurses library, which my project is using. Now I want to check the ncurses static libraries into SVN so that I can check out the project on a different machine and compile it without having to install ncurses on that system again.
So the question is: what is the difference between the libncurses.a, libncurses++.a and libncurses_g.a files? And do I need all of them for my C++ project?
Thanks!
libncurses.a - This is the C compatible library.
libncurses++.a - This is the C++ compatible library.
libncurses_g.a - This is the debug library.
libncurses_p.a - This is the profiling library.
If you want to find out if you can get by without using libncurses.a, you can rename the library and run a build of your application.
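If the project happens to use CMake, linking the archives checked into the repository might look roughly like this; the third_party paths are purely illustrative, and for static archives the C++ binding must come before the C library it depends on:

    cmake_minimum_required(VERSION 3.10)
    project(curses_app CXX)

    add_executable(curses_app main.cpp)

    set(NCURSES_ROOT ${CMAKE_CURRENT_SOURCE_DIR}/third_party/ncurses)   # hypothetical location
    target_include_directories(curses_app PRIVATE ${NCURSES_ROOT}/include)

    # Order matters: libncurses++.a first, then the libncurses.a it depends on.
    target_link_libraries(curses_app
        ${NCURSES_ROOT}/lib/libncurses++.a
        ${NCURSES_ROOT}/lib/libncurses.a)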
My answer comes a little late [ :-) ] since you posted your question more than 4 years ago. But:
Archiving the pre-compiled library in your SVN means that your built application may fail if the target machine differs in some critical aspect.
Yes, you can safely run the application on other machines that are configured in exactly the same way (e.g., on a fully homogeneous computation cluster). However, if the machines differ (e.g., because one machine had a system upgrade and the other did not), it may stop working. This is not very likely, so the risk may be acceptable for what you'd like to do.
I would suggest another solution: commit a recent, stable version of the libncurses sources (tarball) to your SVN repo and add a little script (or make target) that runs the libncurses build and installs the built library into some project directory (not the system directory, but next to your application build directories, without committing it to SVN). This build step only needs to be repeated if the library is to be upgraded or if you want to build/run on another machine.
This does not apply to the ncurses library in particular, but to any library.
Depending on your project target, consider further reading about
package management
cross-compilation

C++ Buildsystem with ability to compile dependencies beforehand

I'm in the middle of setting up a build environment for a C++ game project. Our main requirement is the ability to build not just our game code, but also its dependencies (Ogre3D, CEGUI, Boost, etc.). Furthermore, we would like to be able to build on Linux as well as on Windows, as our development team consists of members using different operating systems.
Ogre3D uses CMake as its build tool, which is why we have based our project on CMake too, so far. We can compile perfectly fine once all dependencies are set up manually on each team member's system, as CMake is able to find the libraries.
The question is whether there is a feasible way to get the dependencies set up automatically. As a Java developer I know of Maven, but what tools exist in the world of C++?
Update: Thanks for the nice answers and links. Over the next few days I will be trying out some of the tools to see what meets our requirements, starting with CMake. I've indeed had my share of experience with autotools so far, and as much as I like the documentation (the autobook is a very good read), I fear autotools are not meant to be used natively on Windows.
Some of you suggested letting an IDE handle the dependency management. We consist of individuals using all possible technologies to code, from pure Vim to fully blown Eclipse CDT or Visual Studio. This is where CMake gives us some flexibility, with its ability to generate native project files.
In the latest CMake 2.8 version there is the new ExternalProject module.
This allows you to download or check out code, then configure and build it as part of your main build tree.
It should also allow you to set dependencies.
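A rough sketch of what that looks like, using yaml-cpp as an example dependency (the repository URL is its usual home; treat the exact options as illustrative):

    include(ExternalProject)

    # Download, configure, build and install yaml-cpp as part of the main build.
    ExternalProject_Add(yaml_cpp_external
        GIT_REPOSITORY https://github.com/jbeder/yaml-cpp.git
        GIT_TAG        master            # pin an exact tag or commit in practice
        CMAKE_ARGS     -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps
    )

    # Your own targets can then depend on it via add_dependencies()
    # and pick up headers/libraries from ${CMAKE_BINARY_DIR}/deps.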
At my work (a medical image processing group) we use CMake to build all our own libraries and applications. We have an in-house tool to track all the dependencies between projects (defined in an XML database). Most of the third-party libraries (like Boost, Qt, VTK, ITK, etc.) are built once for each system we support (MSWin32, MSWin64, Linux32, etc.) and are committed as zip files to the version control system. CMake will then extract and configure the correct zip file depending on which system the developer is working on.
I have been using GNU Autotools (Autoconf, Automake, Libtool) for the past couple of months in several projects that I have been involved in, and I think it works beautifully. Truth be told, it does take a little while to get used to the syntax, but I have used it successfully on a project that requires the distribution of Python scripts, C libraries, and a C++ application. I'll give you some links that helped me out when I first asked a similar question on here.
The GNU Autotools Page provides the best documentation on the system as a whole but it is quite verbose.
Wikipedia has a page which explains how everything works. Autoconf configures the project based upon the platform that you are about to compile on, Automake builds the Makefiles for your project, and Libtool handles libraries.
A Makefile.am example and a configure.ac example should help you get started.
Some more links:
http://www.lrde.epita.fr/~adl/autotools.html
http://www.developingprogrammers.com/index.php/2006/01/05/autotools-tutorial/
http://sources.redhat.com/autobook/
One thing that I am not certain about is whether there is any kind of Windows wrapper for GNU Autotools. I know you are able to use it inside Cygwin, but for actually distributing files and dependencies on Windows platforms you are probably better off using a Windows MSI installer (or something that can package your project inside Visual Studio).
If you want to distribute dependencies, you can set them up under a separate subdirectory, for example libzip, with a specific Makefile.am entry which will build that library. When you perform a make install, the library will be installed to the lib folder that the configure script determined it should use.
Good luck!
There are several interesting make replacements that automatically track implicit dependencies (from header files), are cross-platform and can cope with generated files (e.g. shader definitions). Two examples I used to work with are SCons and Jam/BJam.
I don't know of a cross-platform way of getting *make to automatically track dependencies.
The best you can do is use some script that scans source files (or have the C++ compiler do that), finds the #includes (conditional compilation makes this tricky) and generates part of the makefile.
But you'd need to call this script whenever something might have changed.
The question is whether there is a feasible way to get the dependencies set up automatically.
What do you mean by "set up"?
As you said, CMake will compile everything once the dependencies are on the machines. Are you just looking for a way to package up the dependency sources? Once all the source is there, CMake and a build tool (gcc, nmake, MSVS, etc.) are all you need.
Edit: Side note, CMake has the file command which can be used to download files if they are needed: file(DOWNLOAD url file [TIMEOUT timeout] [STATUS status] [LOG log])
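For illustration, the command might be used like this to fetch a dependency archive at configure time; the URL and file names below are placeholders:

    set(dep_archive ${CMAKE_BINARY_DIR}/downloads/foo-1.2.3.zip)
    if(NOT EXISTS ${dep_archive})
        file(DOWNLOAD
            https://example.com/foo-1.2.3.zip      # placeholder URL
            ${dep_archive}
            TIMEOUT 60
            STATUS download_status)
        list(GET download_status 0 status_code)
        if(NOT status_code EQUAL 0)
            message(FATAL_ERROR "Downloading foo failed: ${download_status}")
        endif()
    endif()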
Edit 2: CPack is another tool by the CMake guys that can be used to package up files and such for distribution on various platforms. It can create NSIS installers for Windows and .deb or .tgz files for *nix.
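A minimal CPack setup is just a few lines at the end of the top-level CMakeLists.txt; the package name, version and generator below are only examples:

    set(CPACK_PACKAGE_NAME    "mygame")
    set(CPACK_PACKAGE_VERSION "0.1.0")
    set(CPACK_GENERATOR       "ZIP")    # e.g. "NSIS" on Windows, "DEB" or "TGZ" on *nix
    include(CPack)                      # adds a 'package' build target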
At my place of work (we build embedded systems for power protection) we used CMake to solve the problem. Our setup allows CMake to be run from various locations.
/
    CMakeLists.txt "install precompiled dependencies and build project"
    project/
        CMakeLists.txt "build the project managing dependencies of subsystems"
        subsystem1/
            CMakeLists.txt "build subsystem 1, assume dependencies are already met"
        subsystem2/
            CMakeLists.txt "build subsystem 2, assume dependencies are already met"
The trick is to make sure that each CMakeLists.txt file can be called in isolation, but that the top-level file can still build everything correctly. Technically we don't need the sub-CMakeLists.txt files, but it makes the developers happy. It would be an absolute pain if we all had to edit one monolithic build file at the root of the project.
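As a sketch, the two upper-level CMakeLists.txt files in such a layout might look roughly like this (names are illustrative; the project/ and subsystem files would each repeat cmake_minimum_required()/project() so they can also be configured standalone, omitted below for brevity):

    # /CMakeLists.txt: "install precompiled dependencies and build project"
    cmake_minimum_required(VERSION 3.10)
    project(whole_tree)
    # ... steps that fetch/unpack the precompiled dependencies go here ...
    add_subdirectory(project)

    # project/CMakeLists.txt: "build the project managing dependencies of subsystems"
    add_subdirectory(subsystem1)
    add_subdirectory(subsystem2)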
I did not set up the system (I helped, but it is not my baby). The author said that the Boost CMake build system had some really good stuff in it that helped him get the whole thing building smoothly.
On many *nix systems, some kind of package manager or build system is used for this. The most common one for source distribution is GNU Autotools, which I've heard is a source of extreme grief. However, with a few scripts and an online repository for your deps, you can set up something similar, like so:
In your project Makefile, create a target (optionally with subtargets) that covers your dependencies.
Within the target for each dependency, first check to see if the dep source is in the project (on *nix you can use touch for this, but you could be more thorough)
If the dep is not there, you can use curl, etc to download the dep
In all cases, have the dep targets make a recursive make call (make; make install; make clean; etc) to the Makefile (or other configure script/build file) of the dependency. If the dep is already built and installed, make will return fairly promptly.
There are going to be lots of corner cases that will cause this to break though, depending on the installers for each dep (perhaps the installer is interactive?), but this approach should cover the general idea.
Right now I'm working on a tool able to automatically install all dependencies of a C/C++ app with exact version requirements:
compiler
libs
tools (cmake, autotools)
Right now it works for my app (installing UnitTest++, Boost, Wt, SQLite and CMake, all in the correct order).
The tool, named "C++ Version Manager" (inspired by the excellent Ruby Version Manager), is coded in bash and hosted on GitHub: https://github.com/Offirmo/cvm
Any advice and suggestions are welcome.