Set output path for CMake generated files - C++

My question is the following:
Is there a way to tell CMake where to generate its files, such as cmake_install.cmake, CMakeCache.txt, etc.?
More specifically, is there a way to set some commands in the CMakeLists.txt that specify where to output these generated files? I have searched around the web for answers; most people say there is no explicit way of doing this, while others say it might be possible using custom commands. Sadly, I'm not very strong in CMake, so I couldn't figure this out.
I'm currently using the CLion IDE, where you can set the output path through the settings, but for flexibility I would like as much as possible to be done through the CMakeLists.txt, so that compiling on different computers isn't such a hassle.
I would also like to avoid explicitly adding additional command line arguments etc.
I hope someone might have an answer for me, thanks in advance!

You can't (easily) do this and you shouldn't try to do it.
The build tree is CMake's territory. It allows you a tiny amount of customization there (for instance, you can specify where the final build artifacts will be placed through the *_OUTPUT_DIRECTORY target properties), but it does not give you any direct control over where intermediate files, like object files or the internal make scripts used for bookkeeping, are placed.
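As a minimal sketch of the customization that is allowed (the target name my_app and the source file are placeholders, not from the question):

# Collect the final build artifacts in dedicated subdirectories of the
# build tree. These variables initialize the *_OUTPUT_DIRECTORY
# properties of every target created afterwards.
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
add_executable(my_app main.cpp)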
This restriction is a feature. You have no idea how all the build systems supported by CMake work internally. Maybe you can move that internal file to a different location in your build process, which is based on Unix Makefiles. But maybe that will also horribly break my build process, which is using Visual Studio. The bottom line is: you shouldn't have to care about this. CMake should take care of it, and by taking some freedom away from you, it ensures that it can actually do that job on all supported build toolchains.
But this might still be an unsatisfactory answer to you. You're the developer, shouldn't you be in full control of the results produced by your build? Of course you should, which is why CMake again grants you full control over what goes into the install tree. That is, whatever ends up in the install directory when you call make install (or whatever is the equivalent of installing in your build toolchain) is again under your control.
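A rough sketch of that control (the target and file names are placeholders):

# Everything that should end up in the install tree is declared explicitly.
install(TARGETS my_app RUNTIME DESTINATION bin)
install(DIRECTORY include/ DESTINATION include)
install(FILES docs/manual.txt DESTINATION share/doc/my_app)

Running make install (or cmake --install in recent CMake versions) then populates the install prefix exactly as declared.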
So you do control everything that matters: The source tree, the install tree, and that tiny portion of the build tree where the final build artifacts go. The rest of the build tree is off-limits for you and for good reasons.

Related

Is there a way to perform an atomic CMake build?

I'm considering reimplementing our build system (currently based on GNU Make) in CMake.
Disclaimer: this is more of a theoretical and "best practices" question. I don't know CMake in-depth. Also, please feel free to migrate the question to programmers if it's more on-topic there.
As far as I understand, the standard workflow for CMake is
cmake .
make
I suspect there may be problems of de-synchronization of CMake files and Makefiles.
So, during the usual development process you're supposed to run make, to avoid unnecessary rebuilds of the CMakeCache and Makefiles and generally to keep the process straightforward. But then, if you add, say, a new source file to CMakeLists.txt and run make, it'll be using the old CMakeCache and Makefiles and will not regenerate them automatically. I think this may cause major problems at scale, since if something is not building as it should, you'll have to try make clean, and if that doesn't help, you'll need to remove the CMakeCache and regenerate everything (manually!).
If I'm not right about something of the above, please correct me.
I'd like to just do
awesome-cmake
and have it update everything that needs updating and build the project.
So, the question: is there a way to make an "atomic build" with CMake, so that it tracks all the required information and abstracts away the usage of make?
I think you have a couple of incorrect ideas here:
I suspect there may be problems of de-synchronization of CMake files and Makefiles.
Ultimately, CMake is all about producing correct Makefiles (or Visual Studio solution files, or XCode project files, or whatever). Unless you modify a generated Makefile by hand, there can be no synchronisation issue between CMake and the Makefile since CMake generates the Makefile.
But then, if you add, say, a new source file to CMakeLists.txt and run make, it'll be using the old CMakeCache and Makefiles and will not regenerate them automatically.
Actually, the opposite is true: if you modify the CMakeLists.txt (e.g. adding a new source, changing a compiler flag, adding a new dependency) then running make will trigger a rerun of CMake automatically. CMake will read in its previously cached values (which include any command-line args previously given to CMake) and generate an updated Makefile.
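For instance, in a minimal sketch like the following (names are placeholders), appending a file to the source list and then simply running make in the build tree is enough; the generated Makefiles depend on CMakeLists.txt, so CMake reruns before anything is compiled:

# CMakeLists.txt -- util.cpp was just added to the source list;
# the next `make` notices the edit and reruns CMake automatically.
add_executable(app main.cpp util.cpp)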
if something is not building as it should, you'll have to try make clean, and if that doesn't help, you'll need to remove the CMakeCache and regenerate everything (manually!).
Yes, this would be a pretty normal workflow if something has gone wrong. However, things don't often get that bad in my experience.
So, the question: is there a way to make an "atomic build" with CMake, so that it tracks all the required information and abstracts away the usage of make?
Given that running make will cause CMake to "do the right thing", i.e. rerun if required, I guess that using make is as close to an "atomic build" as possible.
One thing to beware of here is the use of file(GLOB ...) or similar to generate a list of source files. From the docs:
We do not recommend using GLOB to collect a list of source files from your source tree. If no CMakeLists.txt file changes when a source is added or removed then the generated build system cannot know when to ask CMake to regenerate.
In other words, if you do use file(GLOB ...) to gather a list of sources, you need to get into the habit of rerunning CMake after adding/removing a file from your source tree; running make won't trigger a rerun of CMake in this situation.
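A short sketch of the two styles (file and target names are placeholders):

# Recommended: explicit source list. Adding a file means editing
# CMakeLists.txt, and that edit makes the next build rerun CMake.
add_library(mylib src/a.cpp src/b.cpp)

# Discouraged: the glob is evaluated only when CMake itself runs, so a
# newly added file stays invisible until you rerun CMake by hand.
file(GLOB MYLIB_SOURCES src/*.cpp)
add_library(mylib_globbed ${MYLIB_SOURCES})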
The standard workflow for CMake is an out-of-source build:
mkdir build
cd build
cmake ..
make

Good practice for implementing resource directories

I'm not sure if this is too general, so in case it is: I'm on Linux using qmake, but I'd like to be able to switch my project from Linux to Windows whenever I need to, as well as, possibly, to other PCs.
In order to do this, I'd like to know how some of the programmers here have managed to use resource directories without absolute path definitions. With Qt, it seems like the runtime working directory is the build directory of the application, and not the source directory.
Ideally, I think the best solution would be to somehow take the resource directory as it resides in the source directory and copy it to the corresponding build directory (i.e., Debug or Release, depending on the development stage) so that the application can access it at run time.
This can introduce some complication, however (at least, I think it can).
Anyway, what would be a good solution to do this?
If you are using Qt, I would suggest using the deploy process.
http://doc.qt.digia.com/qtcreator/creator-building-running.html
Basically, you just need to declare which directories need to be copied.
Qt Creator will copy those directories to the build dir (release/debug) after the build process is done. Then you simply run the executable.
Here is one example:
https://github.com/longwei/incubator-cordova-qt.
First, in the .pro file:
wwwDir.source = www
xmlDir.source = xml
qmlDir.source = qml
DEPLOYMENTFOLDERS = wwwDir xmlDir qmlDir
Second:
include(deployment.pri)
qtcAddDeployment()
Then it is done.
It's not clear what exactly you're trying to achieve, but perhaps a simple solution would be for the build scripts to pass the necessary path via a compilation definition (-D with gcc). Depending on whether it's a Debug, Release, etc. build, the definition would be set accordingly, and the corresponding binary would have the correct path.
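For illustration, here is what that idea could look like in CMake rather than qmake (the macro name RESOURCE_DIR, the target name, and the path are assumptions, not from the question):

# Bake the resource path into the binary as a preprocessor definition;
# in the C++ code the macro RESOURCE_DIR then expands to a string literal.
add_executable(app main.cpp)
target_compile_definitions(app PRIVATE
    RESOURCE_DIR="${CMAKE_SOURCE_DIR}/resources")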
As a side note, I tried qmake for a while, but found SCons to be much more versatile.

Best practices for porting a visual studio solution file to scons

I'm new to SCons and trying to port over an existing Visual Studio solution (.sln) which internally references many VS project files (.vcxproj). There are multiple outputs, including a variety of libraries and different executables.
From a conceptual point of view I'm unsure if I'm going down the right path and would appreciate any advice on how to do it better.
Here is my setup:
I have a top-level SConstruct file at the root of the code depot. Additionally, I have one SConscript file for each of my old VS project files. The SConstruct file calls the SConscript function once for each of these SConscript files, passing as parameters the source directory and where the outputs should go.
Additionally, the SConstruct file creates and passes to each SConscript file an array of SCons environment instances. For example, there is one for compiling libraries, one for compiling executables, one for debug config, one for release, etc., and each SConscript file then chooses the one it wants, based on what it's trying to accomplish.
There are a couple things which I was wondering about:
1) Is there a better approach than creating multiple different environments, one for each configuration variation? Is that the expected usage pattern?
2) In Visual Studio, I could right-click on a specific project and select build to only build that project and the projects it depends on, ignoring the rest of the dependency graph in the .sln. With SCons, is it true that it'll recompute the entire dependency graph every time I trigger a build of a specific library, even though in theory it would only need to compute a small portion of the entire dependency graph?
Thanks for any advice.
Mark
Your approach of having an SConstruct call several subsidiary SConscript files is indeed a good way to organize your projects, and is called a hierarchical SCons build.
Regarding your questions, here are some things to consider:
Several different environments: Unless you have different compilers or compiler flags per builder or target (library, executable, etc.), I would say that the approach you are using is a bit overkill. You could most likely achieve the same with just one environment. If you do need additional flags per subdirectory/builder, then you could consider passing the "main" environment to the subdirs and, in the respective SConscripts, cloning the env and adding/appending what you need, as mentioned here. This way the entire solution will be more modular, avoiding repetition and keeping everything common in one central place.
Building certain projects/targets: You can do the same with SCons by selecting the target on the command line, like this: $ scons yourTarget. You can make the target names more manageable using the env.Alias() function. SCons does indeed analyze everything before building, but depending on the size of the project, it shouldn't be a problem; it's still quite fast. If build performance does become an issue, here are some pointers for improving the performance.
Here are a few extra good things to know:
The SCons documentation is not bad compared with other open-source tools. At the bottom of that doc page, there are several appendices with lots of extra information. The SCons man page is quite complete too.
Paths can be confusing in SCons if you're not using the '#' prefix, as mentioned here.
If you need to deal with MSVS projects, you can use the MSVSProject() and MSVSSolution() builders, as mentioned here.

Does cmake use convention over configuration?

Maven is said to employ a form of Convention over Configuration.
I don't want to draw any wrong comparisons, but as far as I understand, CMake can fill a similar role for a C++ project as Maven can for a Java project.
So, does CMake have some convention over configuration, or is each project configured uniquely? (W.r.t. file layout, test layout, build output, etc.)
After experiencing the elegance of Maven 3, I also looked for a convention-over-configuration, Maven-style system for C. ...
I also checked out CMake; after creating a skeleton, a few things stood out.
CMake is sometimes declarative, sometimes procedural, and you're always going to end up with an ugly mix. AKA, it's Ant for C without brackets.
CMake itself IS portable; alas, your project will need platform specifics handled, and you'd better know them in advance. CMake modules exist to supposedly help with this; unfortunately, their reusability appears more like the promise of Ansible roles... theoretically possible, but in practice they end up as a decent way for you to organize all that required complexity.
In other words, CMake is in no shape or form like Maven. It's more like a lower-level Ant that lets you use a "cross-platform" DSL to generate platform-specific makefiles.
The one convention that we strongly encourage is to do "out of source" builds, where the build directory contains ALL build products and is completely separate from the source tree; usually source and build are siblings:
projects
proj1-build-x86
proj1-build-x64
proj1-src
Two primary reasons we always recommend this strategy are (1) to keep the source tree clean of build products, so it is easy to tell what has changed since your last update from your version control system and (2) so that you may have multiple build trees for any given source tree and not worry about the build products and/or settings from one interfering with the other one.
I recently noticed a project I was working on had inadvertently generated some python files in the source tree. I only noticed it, though, when I tried to build both the x86 and x64 builds simultaneously in different build trees... and suddenly the generated python files had some lines duplicated and intermixed. Changed it to generate into the build tree, and all was well.
This is all just part of CMake good practice, though, and is not strongly enforced by anything other than the common sense and discipline of the smart people running these projects...

Keeping Eclipse-generated makefiles in version control - any issues to expect?

We work under Linux/Eclipse/C++ using Eclipse's "native" C++ projects (.cproject). The system comprises several C++ projects, all kept under svn version control, using the integrated Subclipse plugin.
We want to have a script that would check out, compile and package the system, without us needing to drive this process manually from Eclipse, as we do now.
I see that there are generated makefiles and support files (sources.mk, subdir.mk, etc.) scattered around, which are not under version control (probably the Subclipse plugin is "clever" enough to exclude them). I guess I can put them under svn and use them in the script we need.
However, this feels shaky. Has anybody tried it? Are there any issues to expect? Are there recommended ways to achieve what we need?
N.B. I don't believe that the idea of adopting another build system will be received nicely, unless it's SUPER-smooth. We are a small company of 4 developers running full-steam ahead, and any additional overhead or learning curve will not be appreciated :)
thanks a lot in advance!
I would not recommend putting things that are generated in an external tool into version control. My favorite phrase for this tactic is "version the recipe, not the cake". Instead, you should use a third party tool like your script to manipulate Eclipse appropriately to generate these files from your sources, and then compile them. This avoids the risk of having one of these automatically generated files be out of sync with your root sources.
I'm not sure what your threshold for "super-smooth" is, but you might want to take a look at Maven2, which has a plugin for Eclipse projects to do just this.
I know that this is a big problem (I had exactly the same one; in addition, maintaining a build workspace in svn is a real pain!).
Problems I see:
You will get into problems as soon as somebody adds or changes project settings files but doesn't trigger a new build for all possible platforms (the makefiles aren't updated)!
There is no overall makefile, so you cannot easily use the build order of your projects that Eclipse has calculated.
BTW: I wrote an Eclipse plugin that builds up a workspace from a given (textual) list of projects and then triggers the build. That's possible but also not an easy task.
Unfortunately I can't post the plugin somewhere because I wrote it for my former employer...