Ncurses static libraries to include with a C++ project

I have installed the latest ncurses library, which my project is using. Now I want to check the ncurses static libraries into SVN so that I can check out the project on a different machine and compile it without having to install ncurses on that system again.
So the question is: what is the difference between the libncurses.a, libncurses++.a and libncurses_g.a files? And do I need all of them for my C++ project?
Thanks!

libncurses.a - This is the C compatible library.
libncurses++.a - This is the C++ compatible library.
libncurses_g.a - This is the debug library.
libncurses_p.a - This is the profiling library.
If you want to find out if you can get by without using libncurses.a, you can rename the library and run a build of your application.
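For a C++ project you will typically need both libncurses++.a and libncurses.a, since the C++ library is a wrapper over the C one. As a hedged sketch (the third_party/ layout and target name are assumptions, not from the question), linking the checked-in static libraries from CMake could look like this:

    cmake_minimum_required(VERSION 3.10)
    project(tui_demo CXX)

    add_executable(tui_demo main.cpp)
    target_include_directories(tui_demo PRIVATE
        ${CMAKE_SOURCE_DIR}/third_party/ncurses/include)
    # libncurses++.a holds the C++ wrapper classes; the core curses
    # routines live in libncurses.a, so both are linked here.
    target_link_libraries(tui_demo PRIVATE
        ${CMAKE_SOURCE_DIR}/third_party/ncurses/lib/libncurses++.a
        ${CMAKE_SOURCE_DIR}/third_party/ncurses/lib/libncurses.a)

The _g (debug) and _p (profiling) variants would only be swapped in for debug or profiling builds.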

My answer comes a little late [ :-) ] since you posted your question more than 4 years ago. But:
Archiving the pre-compiled library in your SVN means that your built application may fail if the target machine differs in some critical aspect.
Yes, you can safely run the application on other machines that are configured in exactly the same way (e.g., on a fully homogeneous computation cluster). However, if the machines differ (e.g., because one machine had a system upgrade and the other did not), it may stop working. This is not very likely, so the risk may be acceptable for what you'd like to do.
I would suggest another solution: commit a recent, stable version of the libncurses sources (tarball) to your SVN repo and add a little script (or make target) that runs the libncurses build and installs the built library to some project directory (not the system directory, but next to your application build directories, without committing the binaries to SVN). This build step only needs to be repeated if the library is to be upgraded or if you want to build/run on another machine.
This does not apply to the ncurses library in particular but to any library.
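For illustration, the suggested script/make target could be expressed with CMake's ExternalProject module (the tarball name/version and the install location are assumptions):

    include(ExternalProject)
    ExternalProject_Add(ncurses_local
        URL ${CMAKE_SOURCE_DIR}/third_party/ncurses-6.4.tar.gz   # committed tarball
        BUILD_IN_SOURCE TRUE
        CONFIGURE_COMMAND <SOURCE_DIR>/configure --prefix=<INSTALL_DIR> --without-shared
        BUILD_COMMAND make
        INSTALL_COMMAND make install
        INSTALL_DIR ${CMAKE_BINARY_DIR}/deps)

The built library then lands under the project's build tree rather than a system directory, and none of the compiled artifacts need to be committed to SVN.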
Depending on your project target, consider further reading about:
package management
cross-compilation

Related

Handling external C++ dependencies [closed]

The project structure we used to use was that code + prebuilt external dependencies were source-controlled in SVN. This was cumbersome because the external libraries were large and didn't need to be source-controlled, since they were prebuilt binaries.
Now we have the source in git and the prebuilt binaries in the cloud. The developer has to download this lib folder from the cloud after cloning the repo. The problem here is that if you make changes to lib, things will not build correctly until the developer re-downloads the lib folder.
Our projects are generally developed for Windows (MSVC) but we just added Linux (GCC+Docker) compatibility and likely in the future Linux will be the main version. So now our libraries each have a Windows and a Linux build folder. Our dev environments are Windows + VSCode/WSL2/Docker for Linux.
What is the best, common practice here for handling external dependencies? I can think of two ways.
Version the lib folder in the cloud and check that version during building. If Developer A adds/changes this and updates the CMakeLists file, then when Developer B updates his git repo and tries to build, CMake can see that the version of the libs folder he has is out of date, and he will be told to go update it. This is little effort and changes almost nothing in our process. The con is that Developer A has to remember to update the version both in the cloud and in the CMake check (a sketch of such a check follows after option 2).
Build all external libraries locally. Use git submodules and have CMake build all dependencies while building the main project. I assume there's a way to cache it so it doesn't rebuild constantly (some of these libraries are large and take a long time to build). More work, but less maintenance and fewer extra developer steps. Also probably easier to link and include against.
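A minimal sketch of the version check from option 1, assuming a plain-text VERSION file inside the downloaded lib folder (both the file and the expected value are hypothetical):

    # Fail the configure step when the downloaded lib/ folder is stale.
    set(EXPECTED_LIBS_VERSION "42")
    file(READ "${CMAKE_SOURCE_DIR}/lib/VERSION" ACTUAL_LIBS_VERSION)
    string(STRIP "${ACTUAL_LIBS_VERSION}" ACTUAL_LIBS_VERSION)
    if(NOT ACTUAL_LIBS_VERSION STREQUAL EXPECTED_LIBS_VERSION)
        message(FATAL_ERROR "lib/ is at version '${ACTUAL_LIBS_VERSION}', "
                "expected '${EXPECTED_LIBS_VERSION}'; please re-download the lib folder.")
    endif()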
The problem here is that if you make changes to lib, things will not build correctly until the developer re-downloads the lib folder.
This is a clear indication that the exact version you want to use is part of what you should track alongside your source code.
I assume there's a way to cache it so it doesn't rebuild constantly (some of these libraries are large and take a long time to build)
As long as files don't change, nothing needs to be rebuilt.
So, yeah, if your project depends on specific external libraries, git submodules do sound kind of attractive.
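For illustration, option 2 can be as small as this in CMake, assuming a submodule under extern/somelib whose own CMakeLists.txt defines a somelib target (all names here are hypothetical):

    cmake_minimum_required(VERSION 3.14)
    project(app CXX)

    add_executable(app src/main.cpp)
    # The dependency lives in a git submodule; EXCLUDE_FROM_ALL keeps its
    # auxiliary targets (tests, examples) out of the default build.
    add_subdirectory(extern/somelib EXCLUDE_FROM_ALL)
    target_link_libraries(app PRIVATE somelib)

Because the build tool only recompiles files whose inputs changed, the submodule is rebuilt only when its sources actually change.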
Also note that other build systems (like Meson) have a neater understanding of in-tree dependency projects: they can check your system for an installed version of the dependency and, if it isn't there in an appropriate version, download it and, if necessary, build it themselves.
So, the second option is probably the easiest-to-maintain solution, as you said.
I, however, come from a free and open-source background, my users and their platforms are diverse, and Linux distros have strict guidelines against packaging N copies of the same dependency. That would make it harder to upstream such packages to Debian, Ubuntu, Fedora, Arch… . So, for me the situation is this: if there's a library that we want to use in a project, we define very clearly what the oldest version of that library is that would work. Within a release cycle, we cannot bump the required version.
So, say we've released 2.0.0 of some software. The CMake files define which version of a library we support. "Releasing" software means that we guarantee to devs as well as to users that the next bugfix/feature-extension versions in our 2.a.b series still build on the same systems, and that includes the same libraries that might be installed there. So, if 2.0.0 built on your computer, so will 2.0.1 and 2.9.0. Development that requires a new version of an external dependency can only happen on a git branch that's not meant for further 2.a.b releases but is targeting an eventual 3.0.0.
When picking minimum dependency versions for that 3.0.0 release, we look at what is commonly available on the operating systems we support. For example, if my timeline was that 3.0.0 be released within 2022 or 2023, that version would be the one available in Ubuntu 22.04 LTS (because that will be an important system for our users for a long time, and it is also relatively conservative), while also looking at the Debian version most likely to be the current unstable (or stable, depending on what your target audience is), the RHEL version, the next Fedora, and what is currently available in our conda-forge and MacPorts repos.
Everything not available in tolerable versions through these standard packaging channels needs to be built locally anyway. It turns out that if you're not crazily progressive and don't try to support five-year-old Linuxes, the number of projects that you need to build locally is quite small.
On Windows, you're basically handicapped by Microsoft's inability to provide a really sensible way of downloading packages of binary shared libraries that are actually shared between different application software. So, on Windows systems, you're down to either doing all your builds locally or using a third-party way of distributing platform-dependent packages, like Conan.
No matter how you do it, you should let your build fail as early as possible, with a clear indication that the library version found is not sufficiently new. CMake makes this easy: its find_package command takes a minimum version as an argument.
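For example (package name and version are illustrative):

    # Stops configuration with a clear error if only an older Boost is found.
    find_package(Boost 1.74 REQUIRED COMPONENTS system)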

How do you package GCC for distribution?

I am making a modified C++ compiler and I have it built and tested locally. However, I would like to be able to package my build for Windows, Linux (Debian), and Mac OSX.
All of the instructions I can find online deal with building GCC but have no regard for making something distributable (or perhaps I am missing something?). I know that for Windows I will need to bundle MinGW somehow, but that only confuses me further, and I have no idea how well Mac works with GCC these days.
Can anyone lay out a set of discrete high-level steps I could try on each system so I can allow people to install my modified compiler easily?
First make sure your project installs well, including executables, headers and runtime dependencies. If you're using something like CMake, this is a matter of installing things to CMAKE_INSTALL_PREFIX, possibly with the help of the GNUInstallDirs module. If you are using make, then you need to ensure that make prefix=... install works well.
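A minimal sketch of such install rules in CMake, with hypothetical target names:

    # Install the compiler binary, the runtime library and the headers
    # into the standard GNU locations under the chosen prefix.
    include(GNUInstallDirs)
    install(TARGETS mycompiler RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
    install(TARGETS myruntime LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR})
    install(DIRECTORY include/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})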
From here, you can target each platform independently. Treat the packaging independently from your project. Like Chipster mentioned, making rpm files isn't so tough; .deb files for Debian-based OSs and .tar.xz files for Arch-based OSs are similar. The rules for creating these packages can use your install rules to create the package. You mentioned MinGW: if you're targeting an MSYS distribution of MinGW for Windows deployment, then the Arch-style pacman packaging will work on MSYS as well. You can slowly work on supporting one platform at a time with almost no changes to your actual project.
Typically in the open-source world, people will release a tar.gz file supporting ./configure && make && make install or similar. Then someone associated with the platform (like a Debian developer) will find your project, make some packaging rules for it, and release it into their distribution. That means your project can be totally agnostic to where it's being released. It also means you don't really need to worry about MacOS yet; you can wait until you have someone who wants it there, or some hardware to test it on.
If you really want to be in control of how things are packaged for each platform from inside your project, and you are already using CMake, CPack is a great tool which helps out. After writing CPack rules for your project, you can simply type cpack to generate many types of deployable archives. You won't get the resulting *.deb file into the official Debian or Ubuntu archives, but at least people using those formats can install your package.
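A hedged sketch of minimal CPack wiring (all values are illustrative):

    # Produce a .tar.gz and a .deb from the project's install rules.
    set(CPACK_PACKAGE_NAME "mycompiler")
    set(CPACK_PACKAGE_VERSION "1.0.0")
    set(CPACK_GENERATOR "TGZ;DEB")
    set(CPACK_DEBIAN_PACKAGE_MAINTAINER "you@example.org")  # the DEB generator requires a maintainer
    include(CPack)

After configuring, running cpack in the build directory produces the archives for each listed generator.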
Also, consider releasing one package with the runtime libraries and one with the development content (headers, compiler, static libraries). This way, if someone uses your compiler, they can redistribute the runtime libraries, which is probably going to be a much simpler install.

C++ Development Flow with 3rd Party Dependency [closed]

I'm a Python developer with some background in other languages such as Ruby.
In both languages, dependencies are managed automatically by a package manager, such as pip or gem. Anyone can install the dependencies by calling pip install -r requirements.txt, and it will install them via the Python Package Index. Although there is the option to build a dependency manually from source and install it into the project, it is not the recommended process, and I have not done it.
I notice that C++ unfortunately has a different nature in how dependencies are resolved, for various reasons (e.g., different compiler flavors, compiler parameters, platforms, etc.).
At the moment, I am learning C++ using VS2015, and I have stumbled again and again over these library-dependency matters. With VS2015, there is a dependency package manager like Python's: NuGet. However, not every library is available in NuGet; in fact, a lot of libraries are developed independently of the IDE.
First, I'm trying to use Boost. There is a manual on how to build the project, but I'm not sure what I need. Do I need to build it from source, or do I perhaps just need a library that is readily available?
The same goes for other libraries that I found (e.g., Qt, yaml-cpp, googletest, etc.). They only have a document on how to build, instead of how to install as a dependency.
Ultimately, I will need to use lots of third-party libraries to be more productive. So here are some of my questions, which are closely related.
How do C++ developers normally include third-party libraries in their projects (what is the flow for installing a third-party library)?
Do I have to build from source every time I want to include one? Or do you perhaps just need the header files, which you could copy into your project directory?
I'm working in a team (git); does each of my team members need to build the dependencies manually? Can it be automated so that the process of including a new library is transparent for everyone?
Or perhaps I don't really understand what specific question I need to ask. But why is it so painful to reuse a library in C++?
Do I miss some fundamental understanding of the C++ environment?
I'm not sure how relevant it is, but CMake is a build tool that most libraries use to build their projects. Do I really need to build these library projects?
More Questions:
After building some libraries, some of them generate a static library (.lib) or a dynamic library (.dll) to be included in the project. So is it correct to copy these generated libraries into our project? Should they be committed to source version control? Some libraries are very large, and we don't want to maintain them, yet we need the entire team to get the libraries transparently.
I understand your situation quite well. You cannot see the forest because too many trees are standing in your way...
Let me get one thing clear before I start to address your specific questions:
Generally speaking, dependencies in C++ are not more complicated than in Python.
The command pip install -r requirements.txt will establish an internet connection and download the necessary libraries and files from a repository server to fulfill the requirements. Under the Linux operating system (Ubuntu), the command sudo apt-get install libboost-all-dev installs all required dependencies for Boost. This is possible because there is a whole environment with servers that hold source code as well as libraries and binaries, working together with the client programs (apt-get) that use them. This is exactly the same thing that the authors of pip have done for Microsoft Windows. Microsoft themselves have never done this at the operating-system level; they always left that to the programmer. NuGet is Microsoft's attempt to make up for past mistakes.
Having this out of the way, let me address your questions:
It depends on the size of the third-party library. Small libraries like pugixml can be included as source in the source tree of your project. Bigger libraries like Boost are better included as binary object code (library objects). Not all libraries have binaries available to download (Boost does), so you might be required to build from source. Bear in mind that all binaries are required to be built with exactly the same compiler that you use in your project. The general steps to include it in your VS project (a CMake equivalent is sketched after these steps):
Get the distribution files (either build from source or download and install binaries)
Add include paths to your Project:
Project > "projectname" properties > Configuration Properties > C/C++ > General > Additional Include Directories
Add paths to libraries:
Project > "projectname" properties > Configuration Properties > Linker > Input > Additional Dependencies.
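If the project is built with CMake rather than through the VS dialogs, the same two settings could look like this (the Boost paths and library file name are illustrative, and target_link_directories needs CMake 3.13 or newer):

    # Equivalent of the two VS property settings above.
    target_include_directories(app PRIVATE "C:/local/boost_1_60_0")
    target_link_directories(app PRIVATE "C:/local/boost_1_60_0/lib64-msvc-14.0")
    target_link_libraries(app PRIVATE libboost_system-vc140-mt-1_60.lib)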
No. You normally just use the header files. But it's better to add the path of the library to your project instead of copying the header files, because some projects (e.g., Boost) have a huge hierarchy of header files.
It is a good idea for each member of your team to have the same development environment with the same set of libraries installed. There are tools for this task: Chocolatey builds on top of NuGet and is therefore Windows-affine; Vagrant deals with virtual machines and thus offers cross-platform development environments.
But more important is a decent source-control-management system. If you don't already use one, start using one today! It is the main collaboration tool, and it can really save your neck if you lose a developer machine.
There is another dependency problem: we've only addressed the development dependencies above. There is also the problem of deployment dependencies: your customers will need the libraries (*.dll files) that you have used for development. You will need to package them into your deployment package (installer) as well. This is another issue, which is probably already answered on SO.
Qt: if you start using Qt, I'd suggest that you use their development environment, Qt Creator. It will automatically handle all dependencies. It will automatically detect the Visual Studio compiler that you have installed and use it. The IDE is quite close to Visual Studio.
CMake: no, it is not always required to use CMake to build a library project; some projects also include Makefiles, and others use CMake to produce Makefiles. "Follow the instructions" is the best advice I can give here.
Update 2015-10-24: paragraph point three reworked
How do C++ developers normally include third-party libraries in their projects (what is the flow for installing a third-party library)?
It depends... There are a lot of ways to redistribute C++ libraries.
Do I have to build from source every time I want to include one? Or do you perhaps just need the header files, which you could copy into your project directory?
For now, most C++ libraries consist of two parts: binaries + header files. But there are often a lot of problems if the compiler version the library was built with differs from your own compiler.
I'm working in a team; does each of my team members need to build the dependencies manually? Can it be automated so that the process of including a new library is transparent for everyone?
It depends on your team guidelines. You can choose what you want.
Or perhaps I don't really understand what specific question I need to ask. But why is it so painful to reuse a library in C++?
Because of some legacy from C, and because C++ is a low-level language compared with Python/Java/C#. C++ is supported on a lot of different platforms, including embedded ones, and often it is not possible to install a complex runtime on these platforms. So there is no mechanism to transparently link "modules" at runtime.
Hopefully, there will be proper support for modules in the C++17 standard, and Microsoft will provide a technology preview of C++ modules in MSVC 2015 Update 1.
Do I miss some fundamental understanding of the C++ environment?
Yes. I propose you read about compiling and linking in C/C++. These two things often come together, but they are different.
The first thing you should bear in mind: code in C/C++ is split into two parts: declarations (.h files) and implementations (.cpp files). The .cpp files are compiled into binaries; the .h files just declare the interfaces.

Managing a collection of libraries for C++ development on Windows

I'm responsible for developing a set of C++ libraries and programs. We are currently building on Linux and MacOS, but Windows support is also a requirement. We will need to support VS2010 and VS2012, and in the future will also include VS2013 and maybe MinGW. We're using CMake for building, so our code should build on all the platforms without issues; my problem is how to manage all the dependencies on Windows in order to be able to build in the first place, and how to keep them up to date over time. At the moment, we have one virtual machine per Visual Studio version as a Jenkins slave, so parallel builds of all the variants are fairly easy, but managing it all is not.
The problem is the number of variants this requires building. If we consider only VS2010 and VS2012, with debug/release and i386/x64 builds, that's already 8 copies of each library, or 16 if we include the other compilers. We will need all the libraries our code depends on, which will include at a minimum Boost, Qt, Xerces+Xalan, zlib, ICU, libpng/tiff/jpeg, HDF5 and more, plus Python, and all their dependencies. And as new upstream releases are made, we'll need to keep the entire collection up to date and consistent for all the build/arch/compiler variants.
I don't want to do this by hand, since this really needs automating. However, I'm unaware of any good solution for doing this on Windows. The Windows building guides I've seen for other projects often involve hand-building all the dependencies, and only build for a single variant. On Linux, it's already packaged; you don't need separate debug builds, and the arch variants can be catered for with chroots. On MacOS there are Homebrew, MacPorts, etc., and it's also fairly simple to automate things there. Is there any equivalent for Windows? I've looked at tools like Chocolatey, but it's entirely unsuited to handling libraries and is pretty poor as a package manager.
This seems like it should be a common problem for anyone doing C++ development on Windows? Are there any common solutions, tools or methodologies for managing a complex set of libraries and tools for development? How do other developers manage this?
NB. Just for the record, we are not using the visual studio application; we're doing all builds non-interactively via scripts driving the compilers directly with cmake and/or msbuild.
Many thanks,
Roger
I worked on a large Windows C++ project that delivers x86 Release, x86 Debug, x64 Release and x64 Debug builds. Very similarly, I used a build system that does parallel builds for all target platforms using a custom script.
We manage all third-party dependency libraries in organized folders, for example:
x86\release\zlib.dll
x86\debug\zlib.dll
x64\release\zlib.dll
x64\debug\zlib.dll
A custom script picks up all these libraries and the project source code from the configuration-management tool. This allows the relevant target binaries to be built automatically as needed.
Any third-party library change is updated in the configuration-management tool and later picked up by the script for the next build.
As for your question on VS2010 and VS2012 support, I don't understand its importance. Isn't one version of VS enough to support for the project?
You may take a look at GISInternals and their build system: https://github.com/gisinternals/buildsystem
It's basically a set of batch and make files calling each other. You still need to keep track of library updates manually.

C++ Buildsystem with ability to compile dependencies beforehand

I'm in the middle of setting up a build environment for a C++ game project. Our main requirement is the ability to build not just our game code but also its dependencies (Ogre3D, CEGUI, Boost, etc.). Furthermore, we would like to be able to build on Linux as well as on Windows, as our development team consists of members using different operating systems.
Ogre3D uses CMake as its build tool. This is why we have based our project on CMake too so far. We can compile perfectly fine once all dependencies are set up manually on each team member's system, as CMake is able to find the libraries.
The question is whether there is a feasible way to get the dependencies set up automatically. As a Java developer I know of Maven, but what tools exist in the world of C++?
Update: Thanks for the nice answers and links. Over the next few days I will be trying out some of the tools to see what meets our requirements, starting with CMake. I've indeed had my share of autotools so far, and as much as I like the documentation (the autobook is a very good read), I fear autotools are not meant to be used natively on Windows.
Some of you suggested letting an IDE handle the dependency management. Our team consists of individuals using all possible technologies to code, from pure Vim to fully blown Eclipse CDT or Visual Studio. This is where CMake gives us some flexibility, with its ability to generate native project files.
In the latest CMake 2.8 version there is the new ExternalProject module.
This allows you to download/check out code, then configure and build it as part of your main build tree.
It should also allow you to set dependencies between external projects.
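A sketch of what that can look like, with a placeholder repository URL and hypothetical project names:

    include(ExternalProject)
    set(DEPS_PREFIX ${CMAKE_BINARY_DIR}/deps)
    ExternalProject_Add(ogre
        GIT_REPOSITORY https://example.org/ogre.git      # placeholder URL
        CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${DEPS_PREFIX})
    ExternalProject_Add(game
        SOURCE_DIR ${CMAKE_SOURCE_DIR}/game
        DEPENDS ogre                                     # ensures ogre is built first
        CMAKE_ARGS -DCMAKE_PREFIX_PATH=${DEPS_PREFIX}
        INSTALL_COMMAND "")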
At my work (a medical image-processing group) we use CMake to build all our own libraries and applications. We have an in-house tool to track all the dependencies between projects (defined in an XML database). Most of the third-party libraries (like Boost, Qt, VTK, ITK, etc.) are built once for each system we support (MSWin32, MSWin64, Linux32, etc.) and are committed as zip files in the version-control system. CMake will then extract and configure the correct zip file depending on which system the developer is working on.
I have been using GNU Autotools (Autoconf, Automake, Libtool) for the past couple of months in several projects that I have been involved in and I think it works beautifully. Truth be told it does take a little bit to get used to the syntax, but I have used it successfully on a project that requires the distribution of python scripts, C libraries, and a C++ application. I'll give you some links that helped me out when I first asked a similar question on here.
The GNU Autotools Page provides the best documentation on the system as a whole but it is quite verbose.
Wikipedia has a page which explains how everything works. Autoconf configures the project based upon the platform that you are about to compile on, Automake builds the Makefiles for your project, and Libtool handles libraries.
A Makefile.am example and a configure.ac example should help you get started.
Some more links:
http://www.lrde.epita.fr/~adl/autotools.html
http://www.developingprogrammers.com/index.php/2006/01/05/autotools-tutorial/
http://sources.redhat.com/autobook/
One thing that I am not certain about is any kind of Windows wrapper for GNU Autotools. I know you are able to use it inside Cygwin, but as for actually distributing files and dependencies on Windows platforms, you are probably better off using a Windows MSI installer (or something that can package your project inside of Visual Studio).
If you want to distribute dependencies you can set them up under a different subdirectory, for example, libzip, with a specific Makefile.am entry which will build that library. When you perform a make install the library will be installed to the lib folder that the configure script determined it should use.
Good luck!
There are several interesting make replacements that automatically track implicit dependencies (from header files), are cross-platform and can cope with generated files (e.g. shader definitions). Two examples I used to work with are SCons and Jam/BJam.
I don't know of a cross-platform way of getting *make to automatically track dependencies.
The best you can do is use some script that scans source files (or has the C++ compiler do that), finds #includes (conditional compilation makes this tricky), and generates part of the makefile.
But you'd need to call this script whenever something might have changed.
The question is whether there is a feasible way to get the dependencies set up automatically.
What do you mean by "set up"?
As you said, CMake will compile everything once the dependencies are on the machines. Are you just looking for a way to package up the dependency source? Once all the source is there, CMake and a build tool (gcc, nmake, MSVS, etc.) are all you need.
Edit: Side note, CMake has the file command, which can be used to download files if they are needed: file(DOWNLOAD url file [TIMEOUT timeout] [STATUS status] [LOG log])
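For illustration, a guard so the download only happens when the archive is missing (the URL is a placeholder):

    set(DEP_TARBALL "${CMAKE_BINARY_DIR}/deps/ogre-src.tar.gz")
    if(NOT EXISTS "${DEP_TARBALL}")
        # Fetch the dependency source once; later configure runs skip this.
        file(DOWNLOAD "https://example.org/ogre-src.tar.gz" "${DEP_TARBALL}"
             TIMEOUT 60 STATUS DL_STATUS)
    endif()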
Edit 2: CPack is another tool by the CMake guys that can be used to package up files and such for distribution on various platforms. It can create NSIS installers for Windows and .deb or .tgz files for *nix.
At my place of work (we build embedded systems for power protection) we used CMake to solve the problem. Our setup allows CMake to be run from various locations:

/
    CMakeLists.txt            "install precompiled dependencies and build project"
    project/
        CMakeLists.txt        "build the project, managing dependencies of subsystems"
        subsystem1/
            CMakeLists.txt    "build subsystem 1; assume dependencies are already met"
        subsystem2/
            CMakeLists.txt    "build subsystem 2; assume dependencies are already met"
The trick is to make sure that each CMakeLists.txt file can be called in isolation but that the top level file can still build everything correctly. Technically we don't need the sub CMakeLists.txt files but it makes the developers happy. It would be an absolute pain if we all had to edit one monolithic build file at the root of the project.
I did not set up the system (I helped, but it is not my baby). The author said that the Boost CMake build system had some really good stuff in it that helped him get the whole thing building smoothly.
On many *nix systems, some kind of package manager or build system is used for this. The most common one for source stuff is GNU Autotools, which I've heard is a source of extreme grief. However, with a few scripts and an online repository for your deps, you can set up something similar, like so:
In your project Makefile, create a target (optionally with subtargets) that covers your dependencies.
Within the target for each dependency, first check to see if the dep source is in the project (on *nix you can use touch for this, but you could be more thorough)
If the dep is not there, you can use curl, etc to download the dep
In all cases, have the dep targets make a recursive make call (make; make install; make clean; etc) to the Makefile (or other configure script/build file) of the dependency. If the dep is already built and installed, make will return fairly promptly.
There are going to be lots of corner cases that will cause this to break though, depending on the installers for each dep (perhaps the installer is interactive?), but this approach should cover the general idea.
Right now I'm working on a tool able to automatically install all dependencies of a C/C++ app, with exact version requirements:
compiler
libs
tools (cmake, autotools)
Right now it works for my app (installing UnitTest++, Boost, Wt, SQLite and CMake, all in the correct order).
The tool, named «C++ Version Manager» (inspired by the excellent Ruby Version Manager), is coded in Bash and hosted on GitHub: https://github.com/Offirmo/cvm
Any advice and suggestions are welcome.