Hide automatically generated CTest targets

I'm using CMake and CTest in CLion. Annoyingly, CTest generates a load of targets that I don't care about:
Continuous
ContinuousBuild
ContinuousConfigure
ContinuousCoverage
ContinuousMemCheck
ContinuousStart
ContinuousSubmit
ContinuousTest
ContinuousUpdate
Experimental
ExperimentalBuild
ExperimentalConfigure
ExperimentalCoverage
ExperimentalMemCheck
ExperimentalStart
ExperimentalSubmit
ExperimentalTest
ExperimentalUpdate
Nightly
NightlyBuild
NightlyConfigure
NightlyCoverage
NightlyMemCheck
NightlyStart
NightlySubmit
NightlyTest
NightlyUpdate
These all show up in CLion. Quite annoying, as I'm sure you'll agree. Is there a solution to remove them? I'm open to any approach:
Get CTest to not generate them in the first place.
Delete the targets after CTest has created them.
A setting in CLion to hide them.

Unless you are using CDash, the solution is very simple.
In your CMakeLists.txt, replace
include(CTest)
with
enable_testing()

Hack warning: the hack below relies on an internal implementation detail (CTestTargets.cmake) and is therefore not guaranteed to work with every CMake version.
If you cannot avoid include(CTest) because you need CTest functionality that enable_testing() alone does not provide (such as Valgrind integration), generation of the automatic CTest targets can still be avoided altogether with the following hack:
set_property(GLOBAL PROPERTY CTEST_TARGETS_ADDED 1) # hack to prevent CTest added targets
include (CTest)
Setting CTEST_TARGETS_ADDED to 1 prior to including CTest will prevent generation of the automatic CTest targets.
I've tested this with CMake 3.6, and it should work up through CMake 3.19.6 (in which the CTEST_TARGETS_ADDED property is still consulted).
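For reference, here is a minimal sketch of how the hack can sit in a CMakeLists.txt (the project and test names are placeholders of mine, not from the question):
cmake_minimum_required(VERSION 3.6)
project(example LANGUAGES CXX)
# Mark the dashboard targets as already added before CTest runs,
# so the Nightly/Continuous/Experimental* targets are never created.
set_property(GLOBAL PROPERTY CTEST_TARGETS_ADDED 1)
include(CTest)  # still provides BUILD_TESTING, memcheck/Valgrind support, etc.
add_executable(example_tests example_tests.cpp)
add_test(NAME example_tests COMMAND example_tests)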

A possible solution that I'm not sure works 100% is to go to Run->Edit Configurations... in CLion and simply delete all the targets you don't want.
This seems to survive reloading the CMake configuration, and make clean.

A simple solution to manage the clutter that doesn't require deleting those targets (so you can easily go back to them if the need arises) is to create a sub-folder in the Edit Configurations... menu and drag into it all the entries you don't want to see at the top level.
Note that this applies more generally to various target types, not just test related ones.
Here is an example where I put all that stuff in a misc folder:

How to avoid copying fetched module files during install step

I'm using CMake for a project and googletest for my test cases.
Looking around the internet, it seems to be common practice to just copy the googletest source into a subfolder of your repository and include it with "add_subdirectory(googletest)". I did that.
Now I'm using CPack to generate debian packages for my project. Unfortunately, the packages generated by CPack install googletest alongside with my project. This is of course not what I want.
Looking in the googletest directory, I found some INSTALL cmake commands there, so it is clear why it happens. The question now is: how can I avoid it? I don't like modifying the CMakeLists.txt files from googletest, because I would have to remember to re-apply my modifications on an update. Is there another way to disable these installs in CPack?
So there is the macro option @Tsyvarev mentioned that was originally suggested here:
# overwrite install() command with a dummy macro that is a nop
macro (install)
endmacro ()
# configure build system for external libraries
add_subdirectory(external)
# replace install macro by one which simply invokes the CMake
# install() function with the given arguments
macro (install)
_install(${ARGV})
endmacro(install)
Note: ${ARGV} and ${ARGN} are equivalent here, but the docs currently suggest using ${ARGN}. Also, the fact that overwriting a command makes the original available under a _ prefix is not documented, but it is still the behaviour. See the code here.
However, I never got the above code to work properly. It does really weird things and often calls install() twice.
An alternative - also undocumented - is to use EXCLUDE_FROM_ALL:
add_subdirectory(external EXCLUDE_FROM_ALL)
According to some comment I found somewhere this disables install() for that subdirectory. I think what it actually does is set EXCLUDE_FROM_ALL by default for all the install() commands which also probably does what you want. I haven't really tested it, worth a shot though.
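A rough sketch of how that might look in practice (my own illustrative target names; it assumes googletest's usual gtest_main target, and I haven't verified the install behaviour claimed above):
# gtest/gtest_main are excluded from the "all" target, but building my_tests
# still pulls them in as link dependencies; per the note above, their
# install() rules should be skipped as well.
add_subdirectory(googletest EXCLUDE_FROM_ALL)
add_executable(my_tests my_tests.cpp)
target_link_libraries(my_tests gtest_main)
add_test(NAME my_tests COMMAND my_tests)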
Updated: As noted in the other answer,
it seems that the EXCLUDE_FROM_ALL option is the most direct and correct way to disable install for a subproject in a subdirectory:
add_subdirectory(googletest EXCLUDE_FROM_ALL)
Previous solutions
If you don't need tests in your project's release (which you want to deliver with CPack), then include the googletest subdirectory conditionally, and set the condition to false when packaging:
...
if(NOT DISABLE_TESTS)
add_subdirectory(googletest)
endif()
packaging with
cmake -DDISABLE_TESTS=ON <source-dir>
cpack
Alternatively, if you want tests but don't want to install the testing infrastructure, you may disable the install command by defining a macro or function with the same name:
# Replace install() with a do-nothing macro.
macro(install)
endmacro()
# Include subproject (or any other CMake code) with "disabled" install().
add_subdirectory(googletest)
# Restore original install() behavior.
macro(install)
_install(${ARGN})
endmacro()
This approach has also been suggested on the CMake mailing list.
According to the comments, this way of replacing a CMake command is quite tricky and may not work in some cases: either the parameters passed to the modified install() are parsed incorrectly, or restoring install() does not work and even the following installs are disabled.
A bit of a late reply, but I just spent far too long figuring this out.
In the specific case of googletest, specifying this in your top-level CMakeLists.txt does the trick:
option(INSTALL_GMOCK "Install Googletest's GMock?" OFF)
option(INSTALL_GTEST "Install Googletest's GTest?" OFF)
add_subdirectory(googletest)
I read on (I think) the CMake mailing list that making installation conditional on an INSTALL_<package name> option inside your package is sort of a de facto standard (and one I'm certainly going to follow from now on!). But I can't find that link now.
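For illustration, such a convention might look roughly like this inside the library's own CMakeLists.txt (mylib and INSTALL_MYLIB are hypothetical names, not from any real package):
option(INSTALL_MYLIB "Generate install rules for mylib" ON)
add_library(mylib mylib.cpp)
if(INSTALL_MYLIB)
  # Only emit install rules when the consumer asks for them.
  install(TARGETS mylib ARCHIVE DESTINATION lib LIBRARY DESTINATION lib)
endif()
A consumer would then set the option to OFF before calling add_subdirectory(mylib), exactly as the googletest snippet above does with INSTALL_GTEST and INSTALL_GMOCK.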

CMake Interface dependency with custom build type

So, I've found really strange behaviour in how CMake creates dependencies via target_link_libraries.
It's hard to explain in one sentence, so here is a list of requirements (I hope this all will make sense in the end):
your project must have a custom build type defined through CMAKE_CONFIGURATION_TYPES ('Tools' in this example)
you must have at least 3 targets:
an executable (or simply a main target) (test-exe in this example)
an interface library which is linked to the main target (this could be something other than an INTERFACE library, but the next target must be linked to it via an interface property only) (test-env in this example)
a static library which the interface library links to with a specific generator expression that depends on the custom build type (something like target_link_libraries(test-env INTERFACE $<$<CONFIG:Tools>:test-lib>)) (test-lib in this example)
Here is the code of the CMakeLists.txt file to make it a little bit clearer:
cmake_minimum_required(VERSION 3.19.0 FATAL_ERROR)
project(multiconfiguration-test LANGUAGES CXX)
set(CMAKE_CONFIGURATION_TYPES Debug Release Tools)
set(CMAKE_CXX_FLAGS_TOOLS ${CMAKE_CXX_FLAGS_DEBUG})
set(CMAKE_EXE_LINKER_FLAGS_TOOLS ${CMAKE_EXE_LINKER_FLAGS_DEBUG})
set(CMAKE_STATIC_LINKER_FLAGS_TOOLS ${CMAKE_STATIC_LINKER_FLAGS_DEBUG})
add_library(test-env INTERFACE)
# EXCLUDE_FROM_ALL used to not build this target by default
add_library(test-lib STATIC EXCLUDE_FROM_ALL "test-lib.cpp")
target_link_libraries(test-env INTERFACE $<$<CONFIG:Tools>:test-lib>)
add_executable(test-exe "test-exe.cpp")
target_link_libraries(test-exe PRIVATE test-env)
(Files test-exe.cpp and test-lib.cpp are trivial, test-lib.cpp has a simple function, and test-exe.cpp declares this function and then calls it from main.)
When you try to build this project with the Visual Studio 2019/2017 generators (with "Tools" as the configuration type of course: cmake --build . --config Tools), you will get the following error:
LINK : fatal error LNK1104: cannot open file 'Tools\test-lib.lib' [<none_important_path_to_the_project>\test-exe.vcxproj]
And, what is important, you will not see target test-lib being built in the terminal.
So what happened is: target test-exe knows it must be linked against test-lib, but the build system doesn't know that target test-exe depends on target test-lib.
And now the strangest thing! If we link the library like this: target_link_libraries(test-env INTERFACE $<$<CONFIG:Debug>:test-lib>) (so the condition is on the Debug build type), and still build the project with Tools as the build type, you will see in the terminal that target test-lib is now being built! (Yes, we get a link error because test-exe can't find the function defined in test-lib, but that at least is expected.)
So what actually happens is that the link flags of target test-exe correctly depend on the generator expression, BUT the actual build dependency somehow turns any custom build type in this generator expression into Debug.
Again, this only happens with the requirements I listed above, so it's not only the generator expression's fault; it's connected to the interface dependency as well.
If we can't break any of the requirements, one possible solution would be to add a direct dependency on target test-lib to test-env (like this: add_dependencies(test-env test-lib)), but this is not perfect, because even though we only use test-lib when the build type is Tools, we would still build this library every time, which can be undesired behaviour.
I'm not really asking for a solution here (but if you have one, please share); I'm asking for an explanation of why this is even happening. Is this a bug or a really strange feature?
EDIT: a small update I've found just now:
It doesn't even have to be a custom build type. The same bug can be encountered even with the Release build type, so the final code can look as simple as this:
cmake_minimum_required(VERSION 3.19.0 FATAL_ERROR)
project(multiconfiguration-test LANGUAGES CXX)
add_library(test-env INTERFACE)
add_library(test-lib STATIC EXCLUDE_FROM_ALL "test-lib.cpp")
target_link_libraries(test-env INTERFACE $<$<CONFIG:Release>:test-lib>)
add_executable(test-exe "test-exe.cpp")
target_link_libraries(test-exe PRIVATE test-env)
and be built with the following command: cmake --build . --config Release
Looks like a bug with the Visual Studio generator to me. I've just tested the Ninja Multi-Config generator both on Linux and on Windows and there cmake --build <build-dir> --config Release works just fine.
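If you are stuck with the Visual Studio generator, one blunt workaround (untested here, and with the drawback already noted in the question that test-lib then gets built for every configuration) would be an explicit target-level dependency:
# Force the build order explicitly; test-lib will now be built in every
# configuration, not only Tools, which may be undesirable.
add_dependencies(test-exe test-lib)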

Controlling build options (tests, etc) in included libraries in a modern CMake fashion

What would be the best way to replace global variables and instead move towards using target properties to control things like building tests in included libraries from the main application's CMakeLists.txt file?
Background
We have a C++ application, A, that uses, amongst others, the in-house libraries lib B and lib C, which are included as git submodules in A's repository. All repositories (A, B and C) share a common internal structure, with all of them using GTest as a unit test framework. This leads to all projects having a "unit_tests" target as well as a "gtest" target.
As a consequence, when including lib B and lib C in A's CMakeLists.txt using add_subdirectory() there is a conflict, as CMake requires target names to be unique and there is a total of 3 "unit_tests" targets in these 3 repos. There are other test targets as well, but they are so far unique. Renaming the test targets to b_unit_tests and a_unit_tests would cure this, but doesn't feel right, and we would also need to rename the gtest targets to a_gtest, b_gtest...
Currently we have solved it with global CMake variables B_BUILD_TESTS and C_BUILD_TESTS that are set to false in A's CMakeLists.txt and control the inclusion of tests in B's and C's CMakeLists.txt files via add_subdirectory(unit_tests). This has not been a real problem, as we don't want to build and run unit tests for lib B and C when building app A. Doing so would have been mostly a waste of time.
Having watched Daniel Pfeifer's C++Now talk and read other blog posts on using CMake in a declarative way, I've started rewriting our CMake system with modern CMake practices in mind. Now the habit of setting a global flag for every included library feels like an anti-pattern I would like to avoid. Ideally I feel that tests would be best controlled by setting a property on the included targets. Something like below.
add_subdirectory(B)
set_target_properties(B PROPERTIES BUILD_TESTS false)
target_link_libraries(A PRIVATE B)
Ideally this would still include B's test targets, only not make them depend on B's main target. But this seems very difficult to do without renaming all of B's targets with a b_ prefix. Technically b_unit_tests is still a different target than b, though from the context of A you would think of it as being part of target B.
Unfortunately I can't get the above to work, as it seems like CMake already has a set of pre-defined properties for targets and adding a new one doesn't seem fully supported. There are define_property and set_property, but from what I can see you can't use them to define new properties on targets.
Suggestions on how to rewrite the CMakeLists.txt in a more modern way, with clear separation and configuration and without resorting to global variables, are greatly appreciated.
It seems that for modern CMake, libraries should export their paths and sources, and in the export process you can add a namespace. Though I don't really get if this should also apply to libraries that are included as submodules and built together with the application? Is this the route I should go with libs B and C?
Why not just rename the test targets so that they don't conflict with each other? You can also control them with a single BUILD_TESTS variable and set it to ON or OFF by default depending on how the library is built: as a standalone project, or as part of another one.
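A sketch of that single-variable idea, placed in lib B's own CMakeLists.txt after its project() call (the variable and directory names are illustrative; the same pattern would apply to C):
# Tests default to ON when B is the top-level project (built standalone)
# and OFF when B is pulled in via add_subdirectory() from A.
if(CMAKE_PROJECT_NAME STREQUAL PROJECT_NAME)
  option(B_BUILD_TESTS "Build B's unit tests" ON)
else()
  option(B_BUILD_TESTS "Build B's unit tests" OFF)
endif()
if(B_BUILD_TESTS)
  add_subdirectory(unit_tests)
endif()
A can still force the tests on with -DB_BUILD_TESTS=ON when it actually wants to run them.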

CMake: compilation speed when including external makefile

I have a C++ CMake project. In this project I build (among other things) one example, where I need to use another project, call it Foo. This Foo project does not offer a CMake build system. Instead, it has a pre-made Makefile.custom.in. In order to build an executable that uses Foo's features, one needs to copy this makefile into one's project and modify it (typically setting the SOURCES variable and a few compiler flags). Basically, this Makefile ends up holding the sources for your executable and also all the source files for the Foo project. You will not end up using Foo as a library.
Now, this is a design I don't like, but for the sake of the question, let's say we stick with it.
To create my example inside my cmake build I added a custom target:
CONFIGURE_FILE( ${CMAKE_CURRENT_SOURCE_DIR}/Makefile.custom.in Makefile.custom)
ADD_CUSTOM_TARGET(my_target COMMAND $(MAKE) -f Makefile.custom
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})
This works. I can specify some variables to cmake, which get resolved in the call to CONFIGURE_FILE, and I end up with a working Makefile.custom. Then, invoking make my_target from the build directory, I can build the executable. I can even add it to the all target (to save me the effort of typing make my_target) with
SET_TARGET_PROPERTIES(my_target PROPERTIES EXCLUDE_FROM_ALL FALSE)
Sweet. However, cmake appears to assign a single job to the custom target, slowing down my compilation time (the Foo source folder contains a couple dozen cpp files). On top of that, the make clean target does not forward to the custom makefile. I end up having to add another target:
ADD_CUSTOM_TARGET(really-clean COMMAND "$(MAKE)" clean
COMMAND "$(MAKE)" -f Makefile.custom clean
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})
which, unlike my_target with all, I can't include in the clean target (can I?).
Now, I know that a cleaner solution would be to have the Foo project built as an external project and then link to it. However, I've been 'recommended' to use their Makefile.custom.in makefile, modifying the few lines I need (adding my sources, specifying compiler flags, and a few other minor modifications). So, regardless of how neat and clean this design pattern is, my questions are:
is there a way to tell cmake that make should use more than 1 job when making the target my_target?
is there a cleaner way to include a pre-existing makefile in a cmake project? Note that I don't want (can't?) use Foo as a library (and link against it). I want (need?) to compile it together with my executable using a makefile not generated by cmake (well, cmake can help a bit, through CONFIGURE_FILE, by resolving some variables, but that's it).
Note: I am aware of ExternalProject (as suggested also in this answer), but I think it's not exactly what I need here (since it would build Foo and then use it as a library). Also, both my project and Foo are written exclusively in C++ (not sure this matters at all).
I hope the question makes sense (regardless of how ugly/annoying/unsatisfactory the resulting design would be).
Edit: I am using cmake version 3.5.2
First, since you define your own target, you can assign more cores to the build process for the target my_target, directly inside your CMakeLists.txt.
You can include the CMake module ProcessorCount to determine the number of cores in your machine and then use this for a parallel build.
include(ProcessorCount)
ProcessorCount(N)
if(NOT N EQUAL 0)
# given that cores != 0 you could modify
# math(EXPR N "${N}+1") # modify (increment/decrement) N at your will, in this case, just incrementing N by one
set(JOBS_IN_PARALLEL -j${N})
endif(NOT N EQUAL 0)
and when you define your custom target have something like the following:
ADD_CUSTOM_TARGET(my_target
COMMAND ${CMAKE_MAKE_PROGRAM} ${JOBS_IN_PARALLEL} -f Makefile.custom
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})
By the way, I don't think there's any need for you to also include CMAKE_BUILD_TOOL among the COMMANDs in your target.
I believe that instead of modifying the lines as above, you could call
make -j8 my_target
and it might start 8 jobs (just an example) without modifying the CMakeLists.txt, but I cannot guarantee this works with the COMMAND defined the way you have it; just try it and see if that's enough.
For the second point, I cannot think right now of a "cleaner" way.

CMake + Boost test: ignore tests that fail to build

We have a C++ project that has a relatively large number of test suites implemented in Boost.Test. All tests are kept out of the main project's tree; every test suite is located in a separate .cpp file. So, our current CMakeLists.txt for tests looks like this:
cmake_minimum_required(VERSION 2.6)
project(TEST_PROJECT)
find_package(Boost COMPONENTS unit_test_framework REQUIRED)
set(SPEC_SOURCES
main.cpp
spec_foo.cpp
spec_bar.cpp
...
)
set(MAIN_PATH some/path/to/our/main/tree)
set(MAIN_SOURCES
${MAIN_PATH}/foo.cpp
${MAIN_PATH}/bar.cpp
...
)
add_executable (test_project
${SPEC_SOURCES}
${MAIN_SOURCES}
)
target_link_libraries(test_project
${Boost_UNIT_TEST_FRAMEWORK_LIBRARY}
)
add_test(test_project test_project)
enable_testing()
It works OK, but the problem is that SPEC_SOURCES and MAIN_SOURCES are fairly long lists, and someone occasionally breaks something in one of the files in the main tree or in the spec sources. This, in turn, makes it impossible to build the target executable and test the rest. One has to manually figure out what was broken, go into CMakeLists.txt and comment out parts that fail to compile.
So, the question: is there a way in CMake to automatically ignore tests that fail to build, and compile, link and run the rest (ideally, marking the ones that failed as "failed to build")?
Remotely related question
Best practice using boost test and tests that should not compile suggests the try_compile command in CMake. However, in its bare form it just executes a newly ad-hoc-generated CMakeLists (which will fail just as the original one does) and doesn't offer any hooks for removing uncompilable units.
I think you have some issues in your testing approach.
One has to manually figure out what was broken, go into CMakeLists.txt and comment out parts that fail to compile.
If you have good coverage by unit-tests you should be able to identify and locate problems really quickly. Continuous integration (e.g. Jenkins, Buildbot, Travis (GitHub)) can be very helpful. They will run your tests even if some developers have not done so before committing.
Also, you assume that a non-compiling class (and its test) would just have to be removed from the build. But what about transitive dependencies, where a non-compiling class breaks compilation of other classes or leads to linking errors? What about tests that break the build? All these things happen during development.
I suggest you separate your build into many libraries, each having its own test runner. Put together what belongs together (cohesion). Try to minimize dependencies in your compilation as well (dependency injection, interfaces, ...). This will allow you to keep development going by having compiling libraries and test runners even if some libs do not compile (for some time).
I guess you could create one test executable per spec source (using a foreach() loop, sketched below) and then do something like:
make spec_foo && ./spec_foo
This will only try to build the binary matching the test you want to run.
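A rough sketch of that foreach() idea, reusing the variable names from the question (untested; it still compiles MAIN_SOURCES into every runner, and it assumes main.cpp is the shared Boost.Test entry point):
set(SPEC_ONLY ${SPEC_SOURCES})
list(REMOVE_ITEM SPEC_ONLY main.cpp)
foreach(spec_src ${SPEC_ONLY})
  # spec_foo.cpp -> target and test name "spec_foo"
  get_filename_component(spec_name ${spec_src} NAME_WE)
  add_executable(${spec_name} main.cpp ${spec_src} ${MAIN_SOURCES})
  target_link_libraries(${spec_name} ${Boost_UNIT_TEST_FRAMEWORK_LIBRARY})
  add_test(${spec_name} ${spec_name})
endforeach()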
But if your build often fails it may be a sign of some bad design in your production code ...