Locating etc and share directories on Linux (C++)

I'm writing a program for Linux in C++, and I need to store some additional data, such as images. Stuff like that is usually in /usr/share on Linux.
The user can decide where to install the software (I'm using CMake), so I should use /usr/share, /usr/local/share, /home/theuser/somefolder/share or whatever, depending on where they installed it.
I usually go about doing this by figuring out the absolute path to my binary, cutting the trailing "bin" from the path and replacing it with "share". However, this is quite cumbersome and not at all elegant, so I was wondering how other people do it. I'm using Boost, but I can't find any relevant functions.
I only need the share directory for this project, but I'd also be interested in how you handle the etc directory (my approach doesn't quite work there, because the binary can be in /usr/bin while the configuration files are in /etc).

The build system should pass the desired install location as a define during the build process. So
gcc -DDATA_DIR='"/custom/install/location"' ...
(The extra quoting makes the macro expand to a string literal in the code.) This means that the install location can't be changed after the code is built, but it is the only way to be certain that the code knows where to look without reading that information from somewhere at runtime.
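With CMake in particular (the asker mentions CMake), one way to wire this up is to derive the share directory from the install prefix and bake it in as a compile definition. This is only a sketch: the myapp target name and the data/ source directory are assumptions, not from the question.

# CMakeLists.txt (sketch)
include(GNUInstallDirs)   # provides CMAKE_INSTALL_DATADIR / CMAKE_INSTALL_FULL_DATADIR (<prefix>/share)
add_executable(myapp main.cpp)
# Bake the final share directory into the binary as a string macro
target_compile_definitions(myapp PRIVATE DATA_DIR="${CMAKE_INSTALL_FULL_DATADIR}/myapp")
# Install the binary and the data files under the same prefix
install(TARGETS myapp RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
install(DIRECTORY data/ DESTINATION ${CMAKE_INSTALL_DATADIR}/myapp)

In the C++ code the path is then available as the string literal DATA_DIR. The same caveat applies: the prefix is fixed at configure time, so installing to a different prefix means reconfiguring and rebuilding.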

You could use the default directory paths.

Related

Is it possible to edit the hardcoded path in a Windows (custom built) installation of Qt 5?

We are building Qt 5.10 internally and installing it to a given prefix in our build environments.
We would like to be able to relocate the installation (notably, but not only for, distribution). We are aware of qt.conf, as pointed out by this answer.
Yet, is there a maintained way to directly edit the values of those hardcoded paths in the installed files?
EDIT:
More rationale on why we think qt.conf is inferior to directly patching the binaries:
On development machines, it means that instead of simply patching the installed binaries once, we have to provide a configuration file in each folder containing an application depending on Qt.
Even worse than that, we discovered through failures (and the help of this post) that qtwebengineprocess.exe, in qtprefix/bin, expects its own qt.conf file, otherwise it will use the paths hardcoded in the libraries. This means that we have to touch the library folder anyway, in order to edit the configuration file to make it match the folder location on each development machine.

Set output path for CMake-generated files

My question is the following:
Is there a way to tell CMake where to generate its files, such as cmake_install.cmake, CMakeCache.txt, etc.?
More specifically, is there a way to set some commands in the CMakeLists files that specify where to output these generated files? I have tried to search around the web to find some answers, and most people say there's no explicit way of doing this, while others say I might be able to, using custom commands. Sadly, I'm not very strong in CMake, so I couldn't figure this out.
I'm currently using the CLion IDE, and there you can specifically set the output path through the settings, but for flexibility reasons I would like as much as possible to be done through the CMakeLists files, so that compiling from different computers isn't that big of a hassle.
I would also like to avoid explicitly adding additional command line arguments etc.
I hope someone might have an answer for me, thanks in advance!
You can't (easily) do this and you shouldn't try to do it.
The build tree is CMake's territory. It allows you some tiny amount of customization there (for instance, you can specify where the final build artifacts will be placed through the *_OUTPUT_DIRECTORY target properties), but it does not give you any direct control over where intermediate files, like object files or internal make scripts used for bookkeeping, are placed.
This is a feature. You have no idea how all the build systems supported by CMake work internally. Maybe you can move that internal file to a different location in your build process, which is based on Unix Makefiles. But maybe that will also horribly break my build process, which is using Visual Studio. The bottom line is: You shouldn't have to care about this. CMake should take care of it, and by taking some freedom away from you, it ensures that it can actually do that job on all supported build toolchains.
But this might still be an unsatisfactory answer to you. You're the developer, shouldn't you be in full control of the results produced by your build? Of course you should, which is why CMake again grants you full control over what goes into the install tree. That is, whatever ends up in the install directory when you call make install (or whatever is the equivalent of installing in your build toolchain) is again under your control.
So you do control everything that matters: The source tree, the install tree, and that tiny portion of the build tree where the final build artifacts go. The rest of the build tree is off-limits for you and for good reasons.
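To make the distinction concrete, here is a minimal CMake sketch of the two things you do control: where the final artifacts land in the build tree, and what goes into the install tree. The target and file names are assumptions, chosen only for illustration.

# CMakeLists.txt (sketch)
add_executable(myapp main.cpp)
# Final build artifacts: you may choose where they end up inside the build tree
set_target_properties(myapp PROPERTIES RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/bin")
# Install tree: entirely under your control
install(TARGETS myapp RUNTIME DESTINATION bin)
install(FILES docs/manual.txt DESTINATION share/doc/myapp)

Intermediate files such as object files, CMakeCache.txt and cmake_install.cmake stay wherever the chosen generator puts them.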

Good practice for implementing resource directories

I'm not sure if this is too general, so if it is I'll say that I'm on Linux using qmake, but I'd like to be able to switch from Linux to Windows with my project whenever I need to, as well as, possibly, to other PCs.
In order to do this, I'd like to know how some of the programmers on here handle resource directories without using absolute path definitions. With Qt, it seems like the runtime working directory is the build directory of the application, and not the source directory.
Ideally, I think the best solution would be to somehow take the Resource directory as it resides in the source directory and copy it to the relative build directory (i.e., Debug or Release, depending on development stage) so that the application can access it at run time.
This can introduce some complication, however (at least, I think it can).
Anyway, what would be a good solution to do this?
If you are using Qt, I would suggest using the deploy process.
http://doc.qt.digia.com/qtcreator/creator-building-running.html
Basically, you just need to declare which directories need to be copied.
Qt Creator will copy those directories to the build directory (release/debug) after the build process is done. Then you simply run the executable.
Here is one example:
https://github.com/longwei/incubator-cordova-qt.
In the .pro file:
wwwDir.source = www
xmlDir.source = xml
qmlDir.source = qml
DEPLOYMENTFOLDERS = wwwDir xmlDir qmlDir
Second, add:
include(deployment.pri)
qtcAddDeployment()
Then it is done.
It's not clear what exactly you're trying to achieve, but perhaps a simple solution would be for the build scripts to pass the necessary path via a compilation definition (-D with gcc). Then, depending on whether it's a Debug, Release, etc. build, the definition would be set accordingly, and the corresponding binary would have the correct path.
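The question uses qmake, but just to illustrate the idea in CMake terms (the myapp target and the RESOURCE_DIR macro are made up for the example), a per-configuration definition can be expressed with a generator expression:

# CMakeLists.txt (sketch)
add_executable(myapp main.cpp)
# Point RESOURCE_DIR at the source tree for Debug builds and at the install prefix otherwise
target_compile_definitions(myapp PRIVATE
    RESOURCE_DIR="$<IF:$<CONFIG:Debug>,${CMAKE_SOURCE_DIR}/Resources,${CMAKE_INSTALL_PREFIX}/share/myapp>")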
As a side note, I tried qmake for a while, but found SCons to be much more versatile.

g++: Use ZIP files as input

We have the Boost library on our side. It consists of a huge number of files which never change, and only a tiny portion of it is used. We swap the whole boost directory if we are changing versions. Currently we have the Boost sources in our SVN, file by file, which makes checkout operations very slow, especially on Windows.
It would be nice if there were a notation / plugin to address C++ files inside ZIP files, something like:
// #ZIPFS ASSIGN 'boost' 'boost.zip/boost'
#include <boost/smart_ptr/shared_ptr.hpp>
Is there any support for compiler hooks in g++? Is there any effort regarding ZIP support? Other ideas?
I assume that make or a similar buildsystem is involved in the process of building your software. I'd put the zip file in the repository, and add a rule to the Makefile to extract it before the actual build starts.
For example, suppose your zip file is in the source tree at "external/boost.zip", and it shall be extracted to "external/boost", and it contains at its toplevel a file "boost_version.h".
# external/Makefile
unpack_boost: boost/boost_version.h

boost/boost_version.h: boost.zip
	unzip $<
I don't know the exact syntax of the unzip call, ask your manpage about this.
Then in other Makefiles, you can let your source files depend on the unpack_boost target in order to have make unpack Boost before a source file is compiled.
# src/Makefile (excerpt)
unpack_boost:
	make -C ../external unpack_boost

source_file.cpp: unpack_boost
If you're using a Makefile generator (or an entirely different buildsystem), please check the documentation for these programs for how to create something like the custom target unpack_boost. For example, in CMake, you can use the add_custom_command directive.
The fine print: The boost/boost_version.h file is not strictly necessary for the Makefile to work. You could just put the unzip command into the unpack_boost target, but then the target would effectively be phony, that is: it would be executed during each build. The file in between (which of course you need to replace with a file that is actually present in the zip archive) ensures that unzip only runs if necessary.
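Picking up the CMake hint from above, the add_custom_command equivalent of the unpack_boost rule might look roughly like this. It is only a sketch: the myapp target is assumed, and the external/boost.zip path and boost_version.h marker file are carried over from the Makefile example.

# CMakeLists.txt (sketch)
add_custom_command(
    OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/boost/boost_version.h
    COMMAND ${CMAKE_COMMAND} -E tar xf ${CMAKE_CURRENT_SOURCE_DIR}/external/boost.zip
    WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
    DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/external/boost.zip
    COMMENT "Unpacking boost.zip")
add_custom_target(unpack_boost
    DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/boost/boost_version.h)
# Targets that include Boost headers then wait for the unpack step
add_dependencies(myapp unpack_boost)

cmake -E tar can extract zip archives, which keeps the rule portable across platforms.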
A year ago I was in the same position as you. We kept our source in SVN and, even worse, included boost in the same repository (same branch) as our own code. Trying to work on multiple branches was impossible, as it would take most of a day to check out a fresh working copy. Moving boost into a separate vendor repository helped, but it would still take hours to check out.
I switched the team over to git. To give you an idea of how much better it is than SVN, I have just created a repository containing the boost 1.45.0 release, then cloned it over the network. (Cloning copies all of the repository history, which in this case is a single commit, and creates a working copy.)
That clone took six minutes.
In the first six seconds a compressed copy of the repository was copied to my machine. The rest of the time was spent writing all of those tiny files.
I heartily recommend that you try git. The learning curve is steep, but I doubt you'll get much pre-compiler hacking done in the time it would take to clone a copy of boost.
We've been facing similar issues in our company. Managing boost versions in build environments is never going to be easy. With 10+ developers, all coding on their own system(s), you will need some kind of automation.
First, I don't think it's a good idea to store copies of big libraries like boost in SVN or any SCM system for that matter; that's not what those systems are designed for, unless you plan to modify the Boost code yourself. But let's assume you're not doing that.
Here's how we manage it now; after trying lots of different methods, this works best for us.
For every version of boost that we use, we put the whole tree (unzipped) on a file server and we add extra subdirectories, one for each architecture/compiler-combination, where we put the compiled libraries.
We keep copies of these trees on every build system and in the global system environment we add variables like:
BOOST_1_48=C:\boost\1.48 # Windows environment var
or
BOOST_1_48=/usr/local/boost/1.48 # Linux environment var, e.g. in /etc/profile.d/boost.sh
This directory contains the boost tree (boost/*.hpp) and the added precompiled libs (e.g. lib/win/x64/msvc2010/libboost_system*.lib, ...)
All build configurations (VS solutions, VS property files, GNU makefiles, ...) define an internal variable, importing the environment vars, like:
BOOSTROOT=$(BOOST_1_48) # e.g. in a Makefile, or an included Makefile
and further build rules all use the BOOSTROOT setting for defining include paths and library search paths, e.g.
CXXFLAGS += -I$(BOOSTROOT)
LFLAGS += -L$(BOOSTROOT)/lib/linux/x64/ubuntu/precise
LFLAGS += -lboost_date_time
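If some of those build configurations are CMake-based, the same environment-variable convention can be picked up there as well. A sketch, where the myapp target is assumed and the variable names follow the examples above:

# CMakeLists.txt (sketch)
set(BOOST_ROOT "$ENV{BOOST_1_48}")   # same global environment variable as above
find_package(Boost 1.48 REQUIRED COMPONENTS date_time)
target_include_directories(myapp PRIVATE ${Boost_INCLUDE_DIRS})
target_link_libraries(myapp PRIVATE ${Boost_LIBRARIES})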
The reason for keeping local copies of boost is compilation speed. It takes up quite a bit of disk space, especially the compiled libs, but storage is cheap and a developer losing lots of time compiling code is not. Plus, this only needs to be copied once.
The reason for using global environment vars is that build configurations are transferable from one system to another, and can thus be safely checked in to your SCM system.
To smoothen things a bit, we've developed a little tool that takes care of the copying and setting the global environment. With a CLI, this can even be included in the build process.
Different working environments mean different rules and cultures, but believe me, we've tried lots of things and finally, we decided to define some kind of convention. Maybe ours can inspire you...
This is something you would not do in g++, because any other application that wants to do it would also have to be modified.
Store the files on a compressed filesystem. Then every application gets the benefit automatically.
It should be possible in an OS to allow transparent access to files inside a ZIP file. I know that I put it in the design of my own OS a long time ago (2004 or so) but never got it to a point where it was usable. The downside is that seeking backwards in a file inside a ZIP is slower as it's compressed (and you can't rewind the compressor state, so you have to seek from the start instead). This also makes using a zip-inside-a-zip slow for rewinding and reading. Fortunately, most cases just read a file sequentially.
It should also be retrofittable to current OSes, at least in user space. You can hook the file system access functions used (fopen, open, ...) and add a set of virtual file descriptors that your own software would return for a given filename. If it's a real file, just pass it on; if it's not, open the underlying file (possibly again via this very function) and pass back a virtual handle. When accessing the file contents, read directly from the zip file without caching.
On Linux you would use LD_PRELOAD to inject it into existing software (at usage time); on Windows you can hook the system calls or inject a DLL into the process space of the software to hook the same functions.
Does anybody know if this already exists? I can't see any clear reason it wouldn't...

Installing C/C++ libraries on Windows

I'm studying (well, trying to) C right now, but I'm limited to working in Windows XP. I've managed to set up and learn how to use Emacs and can compile simple C programs with gcc (from Emacs no less!), but I'm getting to the point where I'd like to install something like SDL to play around with it.
The thing is that the installation instructions for SDL indicate that, in a Win32 environment using MinGW, I would need to use MSYS to run ./configure and make/make install to install SDL, like one would do on Linux. I noticed that when I unzipped the SDL-dev package (forgot the exact name, sorry) there were folders that corresponded to folders in the MinGW directory (SDL/include -> MinGW/include).
Am I right in saying that all the ./configure and make commands do is move these files from one directory to another? Couldn't I just move those files by hand and spare myself the trouble of installing and configuring MSYS (which, to be honest, confuses me greatly)?
The build process usually works like this: the configure script finds the appropriate settings for the compilation (like which features to enable, the paths to the required libraries, which compiler to use etc.) and creates a Makefile accordingly. make then compiles the source code to binaries. make install copies the created binaries, the headers, and the other files that belong to the library to the appropriate places.
You can't just copy the files from the source archive, because the source archive does not contain the binary files (or any other files that are created during the make step), so all you'd copy would be the headers, which aren't enough to use the library.
In most cases, configure and make will discover the compiler/environment of your machine and build the suitable binaries, respectively. Therefore, unfortunately, it will not be as easy as moving/copying header files to new locations.
However, in some cases the library can be a "header-only" library, which means you only need the header files to use it.
I have no experience with MSYS and SDL, but the basics of configure and make are worth learning (especially if you are going to program any C/C++ in a non-Windows environment).