How to prevent accidentally including old headers? (C++)

Build systems frequently have separate build and install steps. Sometimes older versions of a project's headers are already installed on the operating system, and those installed headers may be picked up instead of the headers in the source tree. This can lead to very subtle and strange behavior that is difficult to diagnose, because the code looks like it does one thing and the binary does another.
In particular, my group uses CMake and C++, but this question is also more broadly relevant.
Are there good techniques to prevent old headers from being picked up in a build?

1. Uninstall
Uninstall the package from CMAKE_INSTALL_PREFIX while hacking on the development version.
Pros: very effective
Cons: not flexible
2. Custom install location
Use a custom location for the installed target, and don't add the custom install prefix to the build.
Pros: very flexible
Cons: if every package uses this technique, tons of -I options get passed to the
compiler and tons of <PACKAGE>_ROOT variables to the cmake configure step.
3. Include priority
Use header search priority. See the include_directories command and its
AFTER/BEFORE suboptions; a sketch follows this list.
Pros: flexible enough
Cons: sometimes a non-trivial task if you have a lot of find_package/add_subdirectory
commands; it's error-prone, and the errors are not detected by autotesting.
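For example, a minimal sketch of option 3, assuming an in-tree include/ directory (the target name is hypothetical):
include_directories(BEFORE ${PROJECT_SOURCE_DIR}/include)  # in-tree headers win over installed copies
target_include_directories(mytarget BEFORE PUBLIC ${PROJECT_SOURCE_DIR}/include)  # per-target form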
BTW
Conflicts can occur not only between the build and install directories, but also
within the install directory itself. For example, version 1.0 installs A.hpp and B.hpp,
while version 2.0 installs only A.hpp. If you install 1.0 and then 2.0, a stale
#include <B.hpp> will still compile locally, so the error goes undetected. This kind of error is easily
caught by autotesting (the clean environment of a CI server doesn't have the old B.hpp from version 1.0). An uninstall command can also be helpful.
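On that last point: CMake generates no built-in uninstall target, but make install writes an install_manifest.txt listing every installed file, so from the build directory something like this works:
xargs rm -v < install_manifest.txt  # removes everything the last "make install" copied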

The Shogun package folks recently fixed exactly this problem. You basically need to have your source folders (the ones containing your header files) passed with -I to gcc before the system folders. You shouldn't need to pass the system folders with -I to gcc at all.
Have a look at the search path here. You might need a more disciplined way of including your header files in your source code.
This is the pull request that fixed the problem, I believe.
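To sketch the idea (paths hypothetical): -I directories are searched left to right, and all of them before the standard system directories, so the in-tree header shadows any stale installed copy:
g++ -I src/include -c src/foo.cpp  # src/include/foo.hpp wins over /usr/include/foo.hpp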

Related

How do you package GCC for distribution?

I am making a modified C++ compiler and I have it built and tested locally. However, I would like to be able to package my build for Windows, Linux (Debian), and Mac OSX.
All of the instructions I can find online deal with building gcc but have no regard for making something distributable (or perhaps I am missing something?). I know for Windows I will need to bundle MinGW somehow, but that only further confuses me - and I have no idea how well Mac works with GCC these days.
Can anyone layout a set of discrete high-level steps I could try on each system so I can allow people to install my modified compiler easily?
First make sure your project installs well, including executables, headers, and runtime dependencies. If you're using something like cmake, this is a matter of installing things relative to CMAKE_INSTALL_PREFIX, possibly with the help of the GNUInstallDirs module. If you are using plain make, then you need to ensure that make install prefix=... works well.
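A minimal sketch of such install rules, assuming a hypothetical executable target mycc and an in-tree include/ directory:
include(GNUInstallDirs)  # defines CMAKE_INSTALL_BINDIR, _LIBDIR, _INCLUDEDIR
install(TARGETS mycc RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
install(DIRECTORY include/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})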
From here, you can target each platform independently, and treat the packaging as separate from your project. Like Chipster mentioned, making rpm files isn't so tough; deb files for Debian-based OSs and tar.xz files for Arch-based OSs are similar. The rules for creating these packages can reuse your install rules. You mentioned mingw: if you're targeting an msys distribution of mingw for Windows deployment, then the Arch-style pacman packaging works on msys as well. You can slowly work on supporting one platform at a time with almost no changes to your actual project.
Typically in the open-source world, people will release a tar.gz file supporting ./configure && make && make install or similar. Then someone associated with the platform (like a Debian developer) will find your project, write some packaging rules for it, and release it into their distribution. That means your project can be totally agnostic to where it's being released. It also means you don't really need to worry about MacOS yet; you can wait until someone wants it there, or until you have some hardware to test it on.
If you really want to be in control of how things are packaged for each platform from inside your project, and you are already using cmake, cpack is a great tool which helps out. After writing cpack rules for your project, you can simply type cpack to generate many types of deployable archives. You won't get the resulting *.deb file into the Debian or Ubuntu official archives, but at least people using those formats can install your package.
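For instance, assuming your CMakeLists.txt already does include(CPack), from the build directory:
cpack -G TGZ  # plain tarball
cpack -G DEB  # .deb for Debian/Ubuntu users
cpack -G RPM  # .rpm for Fedora/openSUSE users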
Also, consider releasing one package with the runtime libraries and one with the development content (headers, compiler, static libraries). This way, if someone uses your compiler, they can redistribute just the runtime libraries, which is probably going to be a much simpler install.

How can I use ocamlbrowser with opam packages?

My system ocaml installation also includes the /usr/bin/ocamlbrowser executable. Is there a way I can use it to browse packages I installed with opam?
So far the closest I could get was using the -I flag to add extra directories to the search path, but I don't know how to tell it to search all folders (the -I flag only adds one at a time), and I don't know how to access the source code for the functions (ocamlbrowser only finds the .mli files, not the .ml files):
ocamlbrowser -I ~/.opam/system/lib/core -I ~/.opam/system/lib/fieldslib
OCamlBrowser is rather legacy, and you need to manually specify all the include directories.
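One hedged way to avoid listing them by hand is to let the shell generate the -I flags (assumes GNU find; "system" is the switch name from the question):
ocamlbrowser $(find ~/.opam/system/lib -mindepth 1 -maxdepth 1 -type d -printf '-I %p ')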
For code browsing, the ~/.opam/<switch>/lib/* dirs are not sufficient, since they usually lack the source code (.ml and .mli files).
You should instead use the build directories, ~/.opam/<switch>/build/<packagename>/..., which keep the source code of the installed OPAM packages. You need to set the OPAMKEEPBUILDDIR environment variable or use opam install --keep-build-dir for this.
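For example ("core" is just an example package; the exact build-directory name depends on the installed version):
export OPAMKEEPBUILDDIR=1
opam install --keep-build-dir core
ocamlbrowser -I ~/.opam/system/build/core*  # glob because the dir is suffixed with the version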
AFAIK, currently (2014/09) there is no alternative that is 100% compatible with OCamlBrowser and works fully with the OPAM/OCamlFind ecosystem, but there are ocp-index, ocp-browser and http://ocamloscope.herokuapp.com/ . However, things are rapidly evolving around OPAM, and newer tools may be released.

Why must uuid.h be "installed" on linux systems to be able to build many C++ programs rather than just put in include or lib folders

All over the web, the answer to the question "uuid.h not found" is "install some kind of rpm or deb file". This may be appropriate for attempting to build some kind of open source project, which has dependencies on other open source, but it does not seem correct for building one's own software.
At my company, most of our own code can be built by getting the code from our source control and building it. Dependent headers, libs, etc. are included in the sync. However, whenever someone gets a "uuid.h not found" error, someone always says "do apt-get install uuid-dev" or something like that.
My question: what is so different about uuid.h that it must be installed like this? Our code uses ODBC too, but we don't need to "install" odbc headers. Ditto xml parsers, and many other third party code.
I don't think there's anything magical about uuid.h that requires a package installation; it's just that installing the package is a simpler step than adding the required files one by one, and it will be easier for you to keep them up to date through your Linux distro's package update utilities.
So installing the package is the simplest way to get a user going, and will keep them up to date without manual intervention. I suspect there is a way to build from source and add the files one-by-one, but that is not as simple.
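For completeness, the conventional route on a Debian-based system looks like this (-luuid links against the libuuid that the package provides):
sudo apt-get install uuid-dev
g++ main.cpp -luuid -o main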

Install gcc on linux with no root privilege

I have access to a computer in a public library and I want to try out some C++ and maybe other code. The problem is that there is no g++ installed, and I can't install it using packages because I have no root access. Is there a "smart" way to set up a full programming environment in a home folder?
I have gcc installed (I can compile C code). Also, I have a home folder that is consistent. I don't know where to find a precompiled g++; I found only the source, but I don't know what to do with it. I tried asking them to install it, but that didn't work :)
If you want to install it as a local user, GNU GSRC provides an easy way to do so.
Link: http://www.gnu.org/software/gsrc/
After configuration, simply specify the following commands:
cd gsrc
make -C pkg/gnu/gcc
make -C pkg/gnu/gcc install
The second step can also be changed to speed things up on an N-core system:
make -C pkg/gnu/gcc MAKE_ARGS_PARALLEL="-jN"
You can run the configure script with the --prefix parameter: ../gcc-4.5.0/configure --prefix=/home/foo/bar. Since it is very likely that the C++ standard library is different from the one on your system, you have to set export LD_LIBRARY_PATH=/home/foo/bar/lib before you can start a program compiled by this compiler.
Once, a long time ago (1992 or so), I went through something similar to this when I bought a SCO system with no development environment. Bootstrapping it up to having a full development environment was a gigantic pain, and not at all easy. Having library header files or gcc on a system would make your job a whole lot easier.
It depends a lot on just how obnoxious the library has been about what kinds of things are installed. If there is no gcc there, your job becomes a bit harder. If there are no header files for glibc there, your job is a LOT harder.
Also, do you get an account on the system so you have a home folder that's consistent from login to login?
If you have no gcc there, you need to find a pre-compiled binary of gcc/g++ and install it somewhere. If you have no header files there, you need to find copies of those and put them on the system.
There is no 'standard' way of installing gcc in your home folder though. All of the solutions are going to have some manner of hand-rolling involved.
Have you asked the librarians if they can change what's installed because you want to learn a bit of programming and only have access to their computers to do it with? That might well be the easiest solution.
From your comment it seems that you do have gcc, and if you can compile C code, you have the library header files. So now it's a matter of actually compiling your own version of g++. You could probably find a way to entice the package manager on the system into installing a binary package somewhere other than a system folder. I think that solution is less fun than compiling your own, and there may also be subtle problems, as the installed package may expect to find things in particular places and not find them there.
First thing to do is to make sure you've downloaded the right source for the gcc package. The place to find that is the GNU United States mirror page. You want to find the gcc-4.5.0.tar.bz2 or gcc-4.5.0.tar.gz file on the mirror site you choose. It will likely be in a gcc directory, and a gcc-4.5.0 sub-folder.
After you have that downloaded, you should untar it. In general you shouldn't build gcc in the folder you untar it into, so create a sibling folder to actually build in, named gcc-build. Then the command you want is ../gcc-4.5.0/configure --prefix=$HOME/.local --enable-languages='c c++'.
gcc requires some other packages to be installed in order to compile itself. You can use the same --prefix for those packages to install them in the same place. The gcc website has a list of prerequisite packages.
$HOME/.local is sort of the standard accepted place for per-user installs of things.
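Putting the whole sequence together, a hedged sketch (contrib/download_prerequisites fetches GMP/MPFR/MPC on recent gcc versions; if yours lacks the script, build those by hand with the same --prefix):
tar xjf gcc-4.5.0.tar.bz2
cd gcc-4.5.0 && ./contrib/download_prerequisites && cd ..
mkdir gcc-build && cd gcc-build
../gcc-4.5.0/configure --prefix=$HOME/.local --enable-languages='c c++'
make -j4 && make install
export PATH=$HOME/.local/bin:$PATH  # so the new g++ is found first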
If you have fakeroot, you can use that to set ~/some-path as root to install the packages from. Alternatively, you can setup a chroot environment to do the same.
Given this, you can then use dpkg -i package.deb to install the gcc package(s) on your system. You will need to download and install each package individually (e.g. from the Debian website) -- at least binutils, glibc, linux-headers and gcc.
If you are on another system, you will need to get the right packages for that system and install them using the corresponding package manager.
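If fakeroot or chroot isn't available, a hedged alternative is to simply extract the packages, since dpkg -i wants to write to the system package database (the .deb filename below is hypothetical):
mkdir -p ~/some-path
dpkg -x gcc_4.5.0-1_amd64.deb ~/some-path
export PATH=~/some-path/usr/bin:$PATH
export LD_LIBRARY_PATH=~/some-path/usr/lib:$LD_LIBRARY_PATH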

Linux programming environment configuration

The other day I set up an Ubuntu installation in a VM and went to gather the tools and libraries I figured I would need for programming mostly in C++.
I had a problem though, where to put things such as 3rd party source libraries, etc. From what I can gather, a lot of source distributions assume that a lot of their dependencies are already installed in a certain location and assume that a lot of tools are also installed in particular locations.
To give an example of what I currently do on Windows: I have a directory where I keep all source code, C:\code. In this directory, I have a directory for all 3rd-party libraries, C:\code\thirdparty\libs. This way I can easily set up relative paths for all of the dependencies of any projects I write or come across and wish to compile. The reason I am interested in setting up a Linux programming environment is that it seems both the tool and library dependency problems have been solved efficiently there, making it easy, for example, to build OpenSSH from source.
So what I was looking for was a decent convention I can use when I am trying to organize my projects and libraries on linux that is easy to maintain and easy to use.
Short answer: don't do a "heaps of code in local dir" thing.
Long answer: don't do a "heaps of code in local dir" thing, because it will be a nightmare to keep up to date, and if you decide to distribute your code, it will be a nightmare to package it for any decent distribution.
Whenever possible, stick to the libraries shipped in the distribution (Ubuntu has 20,000+ packages; it ought to have most of what you'll need prepackaged). When there is no package, you can install by hand to /usr/local (but see above about upgrades, and DON'T do that).
Better: use "stow" or "installwatch" (or both) to install to per-library dirs (/usr/local/stow/libA-ver123) and then symlink the files from there into /usr/local or /usr (stow does the symlinking part); a sketch follows. Or just package the lib for your distribution.
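A sketch of the stow workflow (version number hypothetical):
./configure --prefix=/usr/local/stow/libA-1.2.3
make && make install
cd /usr/local/stow && stow libA-1.2.3  # symlinks the files into /usr/local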
For libraries/includes...
/usr/local/lib
/usr/local/include
Where possible code against the system/distro provided libraries. This makes it easiest to ship a product on that distro.
However, if you are building a commercial application, the sheer number of Linux distro flavors can mean you have to maintain a plethora of different application builds, one per distro. That isn't necessarily a bad thing, as it means you can more cleanly integrate with each distro's package management system.
But where you can't do that, it should be fairly easy to download the source of each 3rd-party dependency and integrate its build into a static lib that is linked into your executable. That way you know exactly what you're linking against, though it has the downside of bloating your executable size. This can also be required if you need a specific library (or version) not provided by the distro.
If you want your code to build on as broad a variety of Unix systems as possible, then you're probably wise to look into GNU autoconf and automake. These help you construct a configure script and makefile for your project so that it will build on practically any Unix system.
Also look into pkg-config which is used quite a bit now on Linux distributions for helping you include and link to the right libraries (for libs that support pkg-config).
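For example, for a pkg-config-aware library (libxml-2.0 is just an illustration):
g++ main.cpp $(pkg-config --cflags --libs libxml-2.0) -o main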
If you're using subversion to manage your source, there is a "convention" that most subversion repositories use to manage their own code and "vendor" code.
Most svn repositories have a "vendor" tree (alongside the trunk, branches & tags trees). That is the top-level directory for all 3rd-party vendor code. In that directory you have a directory for each library you use. E.g.:
branches/
tags/
trunk/
vendor/somelib
vendor/anotherlib
Beneath each of these libs is a directory for each library version and a "current" directory for the most up-to-date version in your repository.
vendor/somelib/1.0
vendor/somelib/1.1
vendor/somelib/current
Then your project's tree should be laid out something like this:
trunk/source # all your code in here
trunk/libs # all vendor code in here
The libs directory should be empty, but it will have svn:externals metadata associated with it, set via:
svn propedit svn:externals trunk/libs
The contents of this property would be something along the lines of (assumes subversion 1.5):
^/vendor/somelib/current somelib
^/vendor/anotherlib/1.0 anotherlib
This means that when you check out your code, subversion also checks out your vendor libraries into your trunk/libs directory, so that a checkout looks like this:
trunk/source
trunk/libs/somelib
trunk/libs/anotherlib
This is described (probably a whole lot better) in the Subversion Book. Particularly the section on handling vendor branches and externals.
Ubuntu = Debian = apt-get goodness
Start with Linux Utilities:
%> sudo apt-get install util-linux