How to exclude second main while linking test exe with gradle? - c++

I'm trying to integrate unit tests in a native (C++) gradle project but I can't seem to find a working solution. The problem occurs while linking the test executable since there are two wmains available (one for the main application, one for the unit tests). Does anyone know how to exclude one of them during the linking step?
Here's a minimal example of my setup:
Project structure
build.gradle
src
  -> main
    -> cpp
      -> main.cpp
      -> registry.cpp
    -> headers
      -> registry.hpp
  -> test
    -> cpp
      -> main_test.cpp
      -> test_registry.cpp
libs
  -> googletest
    -> 1.7.0
      -> include
        -> ...
      -> lib
        -> libgtest.a
build.gradle
apply plugin: 'cpp'
apply plugin: 'google-test-test-suite'

model {
    platforms {
        x86 {
            architecture "x86"
        }
        x64 {
            architecture "x86_64"
        }
    }
    components {
        main(NativeExecutableSpec) {
            baseName "Registry"
            targetPlatform "x86"
            binaries.all {
                cppCompiler.args "-std=c++11", "-municode", "-mwindows"
                linker.args "-municode", "-mwindows"
            }
        }
    }
    testSuites {
        mainTest(GoogleTestTestSuiteSpec) {
            testing $.components.main
            sources {
                cpp.source.srcDir 'src/test/cpp'
            }
        }
    }
    repositories {
        libs(PrebuiltLibraries) {
            googleTest {
                headers.srcDir "libs/googletest/1.7.0/include"
                binaries.withType(StaticLibraryBinary) {
                    staticLibraryFile =
                        file("libs/googletest/1.7.0/lib/libgtest.a")
                }
            }
        }
    }
}
model {
    binaries {
        withType(GoogleTestTestSuiteBinarySpec) {
            lib library: "googleTest", linkage: "static"
            cppCompiler.args "-std=c++11", "-municode"
            linker.args "-municode"
        }
    }
}
Error message
:compileMainExecutableMainCpp
:linkMainExecutable
:mainExecutable
:assemble
:compileMainTestGoogleTestExeMainCpp
:compileMainTestGoogleTestExeMainTestCpp
:linkMainTestGoogleTestExe
C:\Users\minimal\build\objs\mainTest\mainCpp\e7f4uxujatdodel7e7qw5uhsp\main.obj:main.cpp:(.text+0x0): multiple definition of `wmain'
C:\Users\minimal\build\objs\mainTest\mainTestCpp\271ezc0ay5ubap2l962cnectq\main_test.obj:main_test.cpp:(.text+0x0): first defined here
collect2.exe: error: ld returned 1 exit status
:linkMainTestGoogleTestExe FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':linkMainTestGoogleTestExe'.
> A build operation failed.
Linker failed while linking mainTest.exe.
options.txt
-o
C:\\Users\\minimal\\build\\exe\\mainTest\\mainTest.exe
C:\\Users\\minimal\\build\\objs\\mainTest\\mainTestCpp\\271ezc0ay5ubap2l962cnectq\\main_test.obj
C:\\Users\\minimal\\build\\objs\\mainTest\\mainTestCpp\\dp6ieaohq04qqqa31sdfwrsxj\\test_registry.obj
C:\\Users\\minimal\\build\\objs\\mainTest\\mainCpp\\68sxcjmhakj69ha7wqtijofs3\\Registry.obj
C:\\Users\\minimal\\build\\objs\\mainTest\\mainCpp\\e7f4uxujatdodel7e7qw5uhsp\\main.obj
C:\\Users\\minimal\\libs\\googletest\\1.7.0\\lib\\libgtest.a
-municode
-m32
Any help is greatly appreciated!

Your problem arises from trying to do the wrong thing with googletest.
Googletest is a unit-testing framework. That means it is for testing
libraries - where library here means a bunch of functions and/or classes
that don't include a main function. It is not for testing applications,
which have a main function. You test an application by running the
application and making controlled - possibly automated - observations
of its overt behaviour. That kind of testing has various names but is
not unit-testing, and unit-testing comes first.
To test a library with googletest, you need to create an application to
run your googletest test cases. That application, the test-runner, needs
its own main function that does nothing but run the test cases:
int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
It's common enough for someone to have an application that contains
a bunch of application-specific functions and/or classes that they'd
like to unit-test with googletest. The way to do that is:

1. Refactor the application into:
   - a library - call it the app library - that contains the functions and/or
     classes you want to unit-test, excluding the main function;
   - the remainder, including the main function.
2. Build the application by linking the remainder with the app library.
3. Create a test-runner application, comprising your googletest test cases
   and a googletest main function. Naturally, the test cases #include
   the headers of the app library.
4. Build the test-runner by linking the test cases and main function with the app library.
5. Unit-test the app library by running the test-runner.
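Applied to the build.gradle from the question, that refactoring might look roughly like this (an untested sketch using Gradle's old software model; the component name registry and the include/exclude filters are my assumptions):

```groovy
model {
    components {
        // The app library: everything except main.cpp.
        registry(NativeLibrarySpec) {
            sources {
                cpp {
                    source {
                        srcDir 'src/main/cpp'
                        exclude 'main.cpp'
                    }
                    exportedHeaders.srcDir 'src/main/headers'
                }
            }
        }
        // The remainder: just main.cpp, linked against the app library.
        main(NativeExecutableSpec) {
            sources {
                cpp {
                    source {
                        srcDir 'src/main/cpp'
                        include 'main.cpp'
                    }
                    lib library: 'registry', linkage: 'static'
                }
            }
        }
    }
    testSuites {
        // The suite tests the library, so no second wmain gets linked in.
        registryTest(GoogleTestTestSuiteSpec) {
            testing $.components.registry
        }
    }
}
```

Because the test binary then links the library objects rather than main.obj, the "multiple definition of `wmain'" error should go away.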
The simplest and quickest approach to this is just to put all of the application,
except for the main function, into the app library. But it's probably wiser
to move stuff from the application into the app library only as you develop
unit tests to cover it, so you know at any time that whatever is in the app library
has unit tests and the rest doesn't. Proceed in this way till everything except
the main function is in the app library, and has unit tests.
You will almost certainly find that the process of factoring out the app-library
so that you can link it with both the application and the test runner exposes
design flaws in the application and forces you to do better. This can start an
app library on its way to maturing into a multi-app library, a stable high-quality
software asset.
So you started with one project, the application project, and you end up with
three projects:
A project that builds the app library
A project that builds the application
A project that builds the test runner
One way or another, any build system you have will allow you to make automatic
dependencies between these projects. Clearly, you want:
Building the application to require building the app library
Building the test-runner to require building the app library
But you could go further, and make building the application require
building the test-runner - which will build the app library - and successfully
running the test runner. That way, you won't ever build the application
unless the unit tests are successful. That's probably overkill for your
desktop dev cycle, but not for CI builds.
For the next application, you can start with this 3-project pattern,
and add functionality to the application only when you can get it from the
app library, which must be covered by unit tests, which must pass
in the test-runner. Then you're using googletest the right way.

In Visual Studio there is an explicit linker option (/ENTRY, under Linker -> Advanced -> Entry Point) for choosing which main to use.

Related

CMake + Boost test: ignore tests that fail to build

We have a C++ project that has a relatively big number of test suites implemented in Boost.Test. All tests are kept outside the main project's tree, and every test suite is located in a separate .cpp file. So, our current CMakeLists.txt for tests looks like this:
cmake_minimum_required(VERSION 2.6)
project(TEST_PROJECT)

find_package(Boost COMPONENTS unit_test_framework REQUIRED)

set(SPEC_SOURCES
    main.cpp
    spec_foo.cpp
    spec_bar.cpp
    ...
)

set(MAIN_PATH some/path/to/our/main/tree)
set(MAIN_SOURCES
    ${MAIN_PATH}/foo.cpp
    ${MAIN_PATH}/bar.cpp
    ...
)

add_executable(test_project
    ${SPEC_SOURCES}
    ${MAIN_SOURCES}
)
target_link_libraries(test_project
    ${Boost_UNIT_TEST_FRAMEWORK_LIBRARY}
)

add_test(test_project test_project)
enable_testing()
It works ok, but the problem is that SPEC_SOURCES and MAIN_SOURCES are fairly long lists, and someone occasionally breaks something in one of the files in the main tree or the spec sources. This, in turn, makes it impossible to build the target executable and test the rest. One has to manually figure out what was broken, go into CMakeLists.txt, and comment out the parts that fail to compile.
So, the question: is there a way to ignore tests that fail to build automatically in CMake, compile, link and run the rest (ideally, marking up ones that failed as "failed to build")?
A remotely related question, "Best practice using boost test and tests that should not compile", suggests the try_compile command in CMake. However, in its bare form it just executes a newly generated ad-hoc CMakeLists.txt (which will fail just like the original one) and doesn't offer any hooks to remove uncompilable units.
I think you have some issues in your testing approach.
One has to manually figure out what was broken, go into CMakeLists.txt and comment out parts that fail to compile.
If you have good coverage by unit-tests you should be able to identify and locate problems really quickly. Continuous integration (e.g. Jenkins, Buildbot, Travis (GitHub)) can be very helpful. They will run your tests even if some developers have not done so before committing.
Also you assume that a non-compiling class (and its test) would just have to be removed from the build. But what about transitive dependencies, where a non-compiling class breaks compilation of other classes or leads to linking errors. What about tests that break the build? All these things happen during development.
I suggest you separate your build into many libraries, each having its own test runner. Put together what belongs together (cohesion). Try to minimize dependencies in your compilation as well (dependency injection, interfaces, ...). This will allow you to keep development going by having compiling libraries and test runners even if some libs do not compile (for some time).
I guess you could create one test executable per spec source (using a foreach() loop) and then do something like:
make spec_foo && ./spec_foo
This will only try to build the binary matching the test you want to run.
But if your build often fails, it may be a sign of some bad design in your production code...
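For concreteness, that foreach() idea might look like this (a sketch reusing the variable names from the question; untested):

```cmake
# One test executable per spec file; each links the Boost.Test main from main.cpp.
foreach(spec ${SPEC_SOURCES})
  if(NOT spec STREQUAL "main.cpp")
    get_filename_component(test_name ${spec} NAME_WE)
    add_executable(${test_name} main.cpp ${spec} ${MAIN_SOURCES})
    target_link_libraries(${test_name} ${Boost_UNIT_TEST_FRAMEWORK_LIBRARY})
    add_test(${test_name} ${test_name})
  endif()
endforeach()
```

A spec that fails to compile then breaks only its own target; building with make -k and running ctest will still build and run the remaining tests.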

How to avoid mixing test and production code using GoogleTest?

I am starting to use GoogleTest. It seems that it needs a main file for running the tests:
Separate test cases across multiple files in google test
But currently in my demo application I already have a main file:
src/
-> MyType.h
-> main.cpp
-> Makefile
Which will eventually be my "production" application. I don't want to clutter that with gtest includes, macros etc.
Should I just create another main.cpp file in another folder e.g.: test/ that will contain all the specific gtest configuration so I would end up with:
src/
-> MyType.h
-> main.cpp
-> Makefile // Makefile for producing production code/binaries
Test/
-> MyTypeTest.h // Unittest for MyType
-> main.cpp // The "Test runner"
-> Makefile // Makefile for producing test executable
EDIT:
Found this based on cmake:
http://www.kaizou.org/2014/11/gtest-cmake/
which seems to be exactly what I am looking for.
The most sensible approach to this is to have a library for your production code and then two executables, one for production and another one for tests:
|-lib/
| |-Makefile
| |-mytype.h
| `-mytype.cpp
|-app/
| |-Makefile
| `-main.cpp
`-test/
|-Makefile
`-mytypetest.cpp
Notice that the gtest distribution provides the gtest library and a gtest_main library with the standard main function for your test executable. So unless you need a custom main (a rare case) you don't need to provide a main.cpp for your tests and can simply link against gtest_main, e.g. $(CXX) mytypetest.cpp -o apptests -lapplib -lgtest_main -lgtest.
The library approach involves slightly more complex Makefiles, but it pays off in compilation time, since not having it means you compile mytype.cpp once for the production application and once more for the test executable.
There are probably a lot of ways to do this, but generally speaking, yes, you should add a test-specific main function to your project. This makes compilation a little bit more complex since you'll have to produce two separate binaries (one for your application and another for your tests) but this is a fairly typical setup.
I'd simply add a test.cpp file with a main and create a test target in my makefile so that I could either make - to build my production code - or make test - to build the tests. In actual projects I use cmake in very similar fashion (I sometimes bundle all common dependencies in a core.a library and then link both main and test against it).
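A sketch of that make / make test split (file names and flags are illustrative assumptions, not from the question; recipe lines must be indented with tabs):

```make
CXX      := g++
CXXFLAGS := -std=c++11 -Isrc

# Production binary: src/main.cpp plus headers such as src/MyType.h.
app: src/main.cpp
	$(CXX) $(CXXFLAGS) $^ -o app

# Test binary: the gtest runner main plus the tests; never linked into 'app'.
test: test/main.cpp test/MyTypeTest.cpp
	$(CXX) $(CXXFLAGS) -Itest $^ -o apptests -lgtest -lpthread
	./apptests

.PHONY: test
```

This keeps gtest includes and link flags entirely out of the production target.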

Configuring Tests in subfolders using Google Test and CMake

This should be a fairly simple question, but given the black arts of project structuring using cmake, it will help quite a bit of people struggling with this.
I'm trying to get my codebase a little bit more organized. For this, I'm creating subfolders that contain the test suites according to their domain.
Google test itself is already compiling and running, the only thing is that with this restructure, Google Test can't find any of the Test Cases I have.
Here is my structure:
tests\
├── domain1\
│   ├── CMakeLists.txt
│   ├── domain1_test.cpp
│   ├── domain1_test.hpp
│   └── [.. more tests ...]
├── domain2\
│   ├── CMakeLists.txt
│   ├── domain2_test.cpp
│   ├── domain2_test.hpp
│   └── [.. more tests ...]
├── main.cpp
└── CMakeLists.txt
As you can see, I have two folders where tests live.
The CMakeLists.txt files in those are as follows:
SET(DOMAIN1_TEST_SRC
    domain1_test.cpp
    domain1_test.hpp)

ADD_LIBRARY(domain1testlib STATIC ${DOMAIN1_TEST_SRC})

TARGET_LINK_LIBRARIES(domain1testlib
    ${Boost_LIBRARIES}
    domain_lib
    gtest
)

TARGET_INCLUDE_DIRECTORIES(domain1testlib
    INTERFACE
    ${CMAKE_CURRENT_SOURCE_DIR})
The CMakeLists.txt in the main tests directory is:
add_subdirectory(domain1)
add_subdirectory(domain2)

ADD_EXECUTABLE(my_domain_tests main.cpp)
TARGET_LINK_LIBRARIES(my_domain_tests
    ${Boost_LIBRARIES}
    domain1testlib
    domain2testlib
    comptestlib
    gtest
)

add_test(MyTestSuite my_domain_tests)
What am I doing wrong?
Running tests just says that No tests were found.
Thanks!
UPDATE
Adding my main.cpp
It's really nothing special, just the boilerplate main.cpp file.
#include "gtest/gtest.h"

int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
The problem is that no symbols from your domain*testlibs are referenced from your executable sources, i.e. main.cpp.
The TEST and TEST_F macros in Google Test automatically register the test cases with the test runner. So your test source files are the ones that actually include references to symbols in the gtest library and not the other way around. Thus, the linker will not include any of your actual test cases.
You should include the domain*_test.cpp and domain*_test.hpp files as part of your executable sources instead of creating libraries with them. You can do that by directly referencing the files or by using a variable defined in each CMakeLists.txt with the list of sources.
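A sketch of that restructuring, using PARENT_SCOPE to pass each source list up to the test executable (target and variable names as in the question; untested):

```cmake
# tests/domain1/CMakeLists.txt -- export the source list instead of building a library
set(DOMAIN1_TEST_SRC
    ${CMAKE_CURRENT_SOURCE_DIR}/domain1_test.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/domain1_test.hpp
    PARENT_SCOPE)

# tests/CMakeLists.txt -- compile the test sources into the executable itself
add_subdirectory(domain1)
add_subdirectory(domain2)
add_executable(my_domain_tests main.cpp ${DOMAIN1_TEST_SRC} ${DOMAIN2_TEST_SRC})
target_link_libraries(my_domain_tests ${Boost_LIBRARIES} domain_lib gtest)
enable_testing()
add_test(MyTestSuite my_domain_tests)
```

With the TEST/TEST_F translation units compiled directly into the executable, their static registrars run at startup and the test cases are found.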
What am I doing wrong?
Running tests just says that No tests were found.
To make "make test" available and to have ctest work inside the build dir, you need

enable_testing() # force "make test" to work

before add_test in your CMakeLists.txt.

GoogleTest several C++ projects in the same solution

I have, e.g., 10 C++ projects in a solution SolA and want to unit-test them with GoogleTest,
so I created a new solution SolATest with a unit test project for every project of SolA.
Is it a good approach to load the SolA libraries implicitly in the SolATest projects and run every test project as an executable:
#include <iostream>
#include "gmock/gmock.h"
#include "gtest/gtest.h"

int main(int argc, char **argv)
{
    ::testing::InitGoogleMock(&argc, argv);
    int value = RUN_ALL_TESTS();
    std::getchar(); // to hold the terminal open
    return value;
}
or is there a more convenient way -> e.g. have only one executable in SolATest and load the other test projects as libraries (IMHO having all cpp files in one test project is confusing)?
Thanks for any help!
Either approach should work; it just depends on your preference. I tend to follow a project structure like the following:
Solution
|-- ProjectA
|-- ProjectATests
|-- ProjectB
|-- ProjectBTests
`-- TestLib
Where the projects (ProjectA, ProjectB) are libraries, and each test project (ProjectATests, ProjectBTests) is an executable. Note that we do not separate unit tests into a separate solution; they are always built and run alongside the production code. I like this structure for a few reasons:
It's easier to run just the tests that are related to your changes.
The development workflow is a bit more efficient, since when making changes to one library you only have to rebuild and link the corresponding test.
Whether you create a single test project or multiple, I would definitely recommend putting the project in the same solution as the code under test. Further, I would recommend setting up a post-build step for the test project(s) which runs the tests and fails the build if they don't pass.
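As a sketch, such a post-build step can simply run the freshly built test executable; gtest returns a non-zero exit code when any test fails, which fails the build (the --gtest_output flag for an XML report is optional):

```
REM Post-Build Event -> Command Line (illustrative)
"$(TargetPath)" --gtest_output=xml:"$(TargetDir)test_results.xml"
```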
Lastly, you might be wondering about that 'TestLib' project. I use that for the gtest/gmock fused sources, the definition of main(), and any other utilities that are shared between tests. That eliminates (or at least, reduces) code duplication between the various test projects.

Why doesn't Gradle include transitive dependencies in compile / runtime classpath?

I'm learning how Gradle works, and I can't understand how it resolves a project transitive dependencies.
For now, I have two projects :
projectA : which has a couple of dependencies on external libraries
projectB : which has only one dependency on projectA
No matter how I try, when I build projectB, Gradle doesn't include any of projectA's dependencies (X and Y) in projectB's compile or runtime classpath. I've only managed to make it work by including projectA's dependencies in projectB's build script, which, in my opinion, does not make any sense: these dependencies should be attached to projectB automatically. I'm pretty sure I'm missing something, but I can't figure out what.
I've read about "lib dependencies", but it seems to apply only to local projects, as described here, not to external dependencies.
Here is the build.gradle I use in the root project (the one that contains both projectA and projectB):
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:0.3'
    }
}

subprojects {
    apply plugin: 'java'
    apply plugin: 'idea'

    group = 'com.company'

    repositories {
        mavenCentral()
        add(new org.apache.ivy.plugins.resolver.SshResolver()) {
            name = 'customRepo'
            addIvyPattern "ssh://.../repository/[organization]/[module]/[revision]/[module].xml"
            addArtifactPattern "ssh://.../[organization]/[module]/[revision]/[module](-[classifier]).[ext]"
        }
    }

    sourceSets {
        main {
            java {
                srcDir 'src/'
            }
        }
    }

    idea.module { downloadSources = true }

    // task that creates the sources jar
    task sourceJar(type: Jar) {
        from sourceSets.main.java
        classifier 'sources'
    }

    // publishing configuration
    uploadArchives {
        repositories {
            add project.repositories.customRepo
        }
    }

    artifacts {
        archives(sourceJar) {
            name "$name-sources"
            type 'source'
            builtBy sourceJar
        }
    }
}
This one concerns projectA only:
version = '1.0'

dependencies {
    compile 'com.company:X:1.0'
    compile 'com.company:Y:1.0'
}
And this is the one used by projectB:
version = '1.0'

dependencies {
    compile('com.company:projectA:1.0') {
        transitive = true
    }
}
Thank you in advance for any help, and please excuse my bad English.
I know that this specific version of the question has already been solved, but my searching brought me here and I hope I can save some people the hassle of figuring this out.
Bad foo/build.gradle

dependencies {
    implementation 'com.example:widget:1.0.0'
}

Good foo/build.gradle

dependencies {
    api 'com.example:widget:1.0.0'
}

bar/build.gradle

dependencies {
    implementation project(path: ':foo')
}
implementation hides the widget dependency.
api makes the widget dependency transitive.
From https://stackoverflow.com/a/44493379/68086:
From the Gradle documentation:
dependencies {
    api 'commons-httpclient:commons-httpclient:3.1'
    implementation 'org.apache.commons:commons-lang3:3.5'
}
Dependencies appearing in the api configurations will be transitively exposed to consumers of the library, and as such will appear on the compile classpath of consumers.

Dependencies found in the implementation configuration will, on the other hand, not be exposed to consumers, and therefore not leak into the consumers' compile classpath. This comes with several benefits:

- dependencies do not leak into the compile classpath of consumers anymore, so you will never accidentally depend on a transitive dependency
- faster compilation thanks to reduced classpath size
- less recompilations when implementation dependencies change: consumers would not need to be recompiled
- cleaner publishing: when used in conjunction with the new maven-publish plugin, Java libraries produce POM files that distinguish exactly between what is required to compile against the library and what is required to use the library at runtime (in other words, don't mix what is needed to compile the library itself and what is needed to compile against the library)

The compile configuration still exists, but should not be used as it will not offer the guarantees that the api and implementation configurations provide.
Finally, the problem didn't come from the scripts: I just had to clear Gradle's cache and each project's build folder to make this work.
Put the following line in projectB's dependencies.
compile project(':projectA')
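On current Gradle versions the same effect is spelled with the api configuration of the java-library plugin (a sketch):

```groovy
// projectB/build.gradle -- modern equivalent of 'compile project(...)'
plugins {
    id 'java-library'
}

dependencies {
    // 'api' re-exports projectA (and whatever projectA itself declares with 'api')
    // to consumers of projectB; 'implementation' would hide it from them.
    api project(':projectA')
}
```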