How to organize test cases with the boost::test library? - C++

I have a project of 50+ .h/.cpp files/classes. I would like to test every class with its own test case, which will include methods for testing different aspects of each class. My classes are located in different directories, like this:
/project
    /include
        /SuperModule
            Foo.h
            Foo.cpp
            ..
        Alpha.h
        Alpha.cpp
        ..
    /test           // I assume that my tests shall be here
        main.cpp
    Makefile
I would like to use boost::test as my unit-testing framework. How should I organize my files, how should I name them, etc.? Any hint, link, or suggestion would be appreciated. Thanks.

We are using boost::test in a similar layout. Ours is:
/project
    /include
        /SuperModule
            /Foo
                foo.c
                foo.h
                /foo_unittest
                    foo_unittest.c   // note - no separate header file is required
                                     // for the boost::test unit-test program
The basic layout rule is to put the unit test for a class in a sub-directory, named after the class with a "_unittest" suffix, in the same directory as the class's source code. The advantages of this naming are:
- The source code and its unit test are stored next to each other, so by simple inspection you can see whether you have written the unit test or not.
- When you copy the source code, it is easy to copy the unit test at the same time.
As our projects are not overly complex (30-50 major classes), this system works for us. If you are running a larger project, I don't think it would be an optimal solution.
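For illustration, here is a minimal sketch of what foo_unittest.c could contain; the Foo class members used here (a default constructor, value(), setValue()) are assumptions for the example, not part of the original layout:

#define BOOST_TEST_MODULE FooUnitTest
#include <boost/test/included/unit_test.hpp>  // header-only variant; supplies main()
#include "../foo.h"

// One test case per aspect of the class under test.
BOOST_AUTO_TEST_CASE(default_construction)
{
    Foo foo;
    BOOST_CHECK_EQUAL(foo.value(), 0);   // assumes a value() accessor
}

BOOST_AUTO_TEST_CASE(value_can_be_set)
{
    Foo foo;
    foo.setValue(42);                    // assumed setter
    BOOST_CHECK_EQUAL(foo.value(), 42);
}

With the header-only variant no separate test runner is needed; compiling this one file yields the test executable.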

Related

"_main already defined" while using gtest

I have a solution with two projects in it. One of them is a console application, and the other one is a Google Test project. My project has a .h file and a .cpp with a main() in it. My gtest project consists of a .cpp file which includes the .h file and a main function to RUN_ALL_TESTS(). I need a main in my project, but I also need a main in the gtest project; having two main() functions doesn't let me build the gtest project successfully. Is there a workaround for this?
Sorry if it's a silly question; I have no clue how to use gtest because various sites keep presenting different ways.
First of all, you should have a dedicated file main.cpp for your main() function, which contains nothing else.
E.g. your project structure could look like:
project1
    file1.h
    file1.cpp
    main.cpp
I'm not familiar with gtest specifically, but unit test frameworks usually have a separate file for the test main function, e.g. gtest_main.cpp. Tests are in one or more files like file1test.cpp etc.
So you would compile and link your project1 with file1.h, file1.cpp and main.cpp to get an executable.
For unit tests you would compile and link file1.h, file1.cpp, file1test.cpp and gtest_main.cpp for a unit test executable.
The structure could look like:
project1
    file1.h
    file1.cpp
    main.cpp
project1test
    file1test.cpp
    gtest_main.cpp
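For reference, a hand-written gtest_main.cpp typically contains nothing but the standard Google Test entry point, and a test file like file1test.cpp might look as sketched below (the add() function is a hypothetical example, not from the question):

// gtest_main.cpp
#include <gtest/gtest.h>

int main(int argc, char **argv)
{
    ::testing::InitGoogleTest(&argc, argv);  // parses gtest command-line flags
    return RUN_ALL_TESTS();                  // runs every registered TEST()
}

// file1test.cpp
#include <gtest/gtest.h>
#include "../project1/file1.h"

TEST(File1Test, AddReturnsSum)
{
    EXPECT_EQ(add(2, 3), 5);  // assumes file1.h declares int add(int, int)
}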
EDIT: additional info on linking:
In project1test you would include file1.h with #include "../project1/file1.h".
For correct linking right-click on project1test project
--> Configuration Properties --> Linker --> Input --> Additional Dependencies --> Add "..\project1\Debug\file1.obj"
As #Alan Birtles pointed out, it would be even clearer if you had the following structure:
project1library
    file1.h
    file1.cpp
project1application
    main.cpp
project1test
    file1test.cpp
    gtest_main.cpp
Then you would get a static/dynamic library project1library.lib/.dll, an executable project1application.exe, and a unit test executable project1test.exe.
The advantage is that you would just link the library in your unit test project with
--> Configuration Properties --> Linker --> Input --> Additional Dependencies --> Add "..\project1library\Debug\project1library.lib"
If you need more than one file from your project, you don't have to add every .obj file, just the one .lib file.
But making sure that everything is rebuilt correctly on changes can be more difficult and error-prone with a library, an executable, and a unit test project.
The standard usage of gtest is unit testing. Usually, unit tests don't check main() :).
I recommend using the standard gtest main function (don't define a custom main function); it allows you to use the command line to filter which tests run.
If you don't want to use the gtest main, IMHO, you shouldn't link the gtest_main library.
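For example, with the standard main you can select tests at run time (the binary and test names here are hypothetical):

./project1test --gtest_filter=File1Test.*   # runs only the File1Test suite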
I used macros for this problem. I defined a TESTING macro which is defined when compiling the unit tests and undefined otherwise:
#ifndef TESTING
// the source main
int main() {
    ...
}
#endif // !TESTING
You can also use this later for "test" code in your sources. What I do sometimes (not good design, IMO):

class Klass {
#ifdef TESTING
    // allows access to private members in my Google Test unit test class;
    // disabled when I build the sources
    friend class KlassUnitTestClass;
#endif // TESTING
    // ...
};
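A minimal sketch of how the two builds could be driven (the file names are hypothetical; the point is the -DTESTING flag, and link flags vary by platform):

# normal application build: main() in main.cpp is compiled in
g++ file1.cpp main.cpp -o app

# unit-test build: TESTING is defined, so the application's main() is
# compiled out and the gtest runner takes over
g++ -DTESTING file1.cpp file1test.cpp gtest_main.cpp -lgtest -pthread -o tests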

Why create an include/ directory in C and C++ projects?

When I work on my personal C and C++ projects I usually put file.h and file.cpp in the same directory and then file.cpp can reference file.h with a #include "file.h" directive.
However, it is common to find libraries and other kinds of projects (like the Linux kernel and FreeRTOS) where all .h files are placed inside an include/ directory, while .cpp files remain in another directory. In those projects, .h files are also included with #include "file.h" instead of #include "include/file.h" as I would have expected.
I have some questions about all of this:
What are the advantages of this file structure organization?
Why are .h files inside include/ included with #include "file.h" instead of #include "include/file.h"? I know the real trick is inside some Makefile, but is it really better to do it that way instead of making clear (in code) that the file we want to include is actually in the include/ directory?
The main reason to do this is that compiled libraries need headers in order to be consumed by the eventual user. By convention, the contents of the include directory are the headers exposed for public consumption. The source directory may have headers for internal use, but those are not meant to be distributed with the compiled library.
So when using the library, you link to the binary and add the library's include directory to your build system's header paths. Similarly, if you install your compiled library to a centralized location, you can tell which files need to be copied to the central location (the compiled binaries and the include directory) and which files don't (the source directory and so forth).
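In practice the "trick" is just an include search path passed to the compiler, which keeps the quoted form short (the paths here are illustrative):

g++ -Iinclude -c src/file.cpp   # -Iinclude lets #include "file.h" resolve to include/file.h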
It used to be that <header> style includes were of the implicit path type, that is, found via the include path environment variable or a build macro, while "header" style includes were of the explicit form, as in, exactly relative to wherever the source file that included them is. While some build tool chains still allow for this distinction, they often default to a configuration that effectively nullifies it.
Your question is interesting because it brings up the question of which really is better, implicit or explicit? The implicit form is certainly easier because:
Convenient groupings of related headers in hierarchies of directories.
You only need to include a few directories in the include path and need not be aware of every detail with regard to exact locations of files. You can change versions of libraries and their related headers without changing code.
DRY.
Flexible! Your build environment doesn't have to match mine, but we can often get nearly exact same results.
Explicit on the other hand has:
Repeatable builds. A reordering of paths in an includes macro/environment variable, doesn't change resulting header files found during the build.
Portable builds. Just package everything from the root of the build and ship it off to another dev.
Proximity of information. You know exactly where the header is with #include "\X\Y\Z". In the implicit form, you may have to go searching along multiple paths and might even find multiple versions of the same file, how do you know which one is used in the build?
Builders have been arguing over these two approaches for many decades, but a hybrid of the two mostly wins out, because of the effort required to maintain builds based purely on the explicit form, and the obvious difficulty of familiarizing oneself with code of a purely implicit nature. We all generally understand that our various tool chains put certain common libraries and headers in particular locations so that they can be shared across users and projects, so we expect to find standard C/C++ headers in one place. But we initially know nothing about the specific structure of any arbitrary project, lacking a locally well-documented convention, so we expect the code in those projects to be explicit about the non-standard bits that are unique to them and implicit about the standard bits.
It is a good practice to always use the <header> form of include for all the standard headers and other libraries that are not project specific and to use the "header" form for everything else. Should you have an include directory in your project for your local includes? That depends to some extent on whether those headers will be shipped as interfaces to your libraries or merely consumed by your code, and also on your preferences. How large and complex is your project? If you have a mix of internal and external interfaces or lots of different components, you might want to group things into separate directories.
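Following that convention, for example:

#include <vector>          // standard library header: angle brackets
#include <gtest/gtest.h>   // third-party library header: angle brackets
#include "file.h"          // project-local header: quotes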
Keep in mind that the directory structure your finished product unpacks to need not look anything like the directory structure under which you develop and build that product. If you have only a few .c/.cpp files and headers, it's OK to put them all in one directory, but eventually you're going to work on something non-trivial and will have to think through the consequences of your build environment choices, and hopefully document them for others to understand.
1. .hpp and .cpp files don't necessarily have a 1-to-1 relationship; there may be multiple .cpp files using the same .hpp under different conditions (e.g. different environments). For example, in a multi-platform library, imagine there is a class to get the version of the app, with a header like this:
Utilities.h
#include <string>

class Utilities {
public:
    static std::string getAppVersion();
};
main.cpp
#include <iostream>
#include "Utilities.h"

int main() {
    std::cout << Utilities::getAppVersion() << std::endl;
    return 0;
}
There may be one .cpp for each platform, and the .cpp files may be placed in different locations so that the right one is easily selected for the corresponding platform, e.g.:
.cpp for iOS (path: DemoProject/ios/Utilities.cpp):
#include "Utilities.h"

std::string Utilities::getAppVersion(){
    // some Objective-C code
}
.cpp for Android (path: DemoProject/android/Utilities.cpp):
#include "Utilities.h"

std::string Utilities::getAppVersion(){
    // some JNI code
}
Of course, the two .cpp files would normally not be used at the same time.
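How the right file gets picked is a build-system concern; a minimal sketch using plain compiler invocations, following the example paths above (real iOS/Android builds use their own toolchains; this only illustrates selecting one implementation per target):

# iOS build compiles only the iOS implementation
g++ -I. ios/Utilities.cpp main.cpp -o demo_ios

# Android build compiles only the Android implementation
g++ -I. android/Utilities.cpp main.cpp -o demo_android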
2. Using
#include "file.h"
instead of
#include "include/file.h"
allows you to keep the source code unchanged when your headers are no longer placed in the "include" folder.

Structuring C++ Application (directory and folders)

I'm coming from web development, and I need to ask C++ programmers: how do you manage your directories for a model-based project?
I have structured my project in Visual Studio C++ Solution Manager like this:
-> Header Files
    --> Models
        DatabaseEngine.interface.h
-> Resources
-> Source Files
    --> Models
        DatabaseEngine.cpp
    --> Application
        Core.cpp
        Bootstrap.cpp
    --> FrontController
---
I have made an exact duplicate of the Models directory under the Header Files directory and appended ".interface" to the file names, since they are interfaces and their real implementations lie in the mirror path under Source Files.
I also have custom types, such as DBConnection, and I don't know where to put them. Should I put them in a file named CustomTypes.cpp, or should I relate them to their associated parent model/class/object?
My concern is the convention and standards.
There is no standard; C++ is a very open-minded world, you will see ;)
It is all about making what works best for you, but usually taking advice from people who have already experimented cannot hurt.
Personally, I try to follow this convention:
/ProjectName
    /src
        /libs            <- Libraries go here
            /Models      <- Assuming you want to make a library out of your models
                User.h
                User.cpp
                ...      <- Putting headers and implementations together is not a
                            problem; they should often be edited in parallel
            /Utilities   <- Should your library grow, you can make it more modular
                            by creating subdirectories (that could contain
                            subdirectories, etc.)
                DBConnection.h
                DBConnection.cpp
        /apps            <- Define your applications here. They probably rely on
                            classes and functions defined in one or several of the
                            libraries defined above.
            /ApplicationA
                Core.h
                Core.cpp
                Bootstrap.h
                Bootstrap.cpp
    /resources
    /doc
    # Below are 'environment specific' folders.
    /vs                  <- Visual Studio project files
    /xcode               <- Xcode project files
Remarks
Headers and implementations
Header files (.h, or .hpp, or no extension) indeed define the interface that will be implemented in the implementation file (.cpp). Nonetheless, it is very common to give the same basename to both, and only distinguish them by extension (or absence thereof). Adding an additional .interface part probably does not buy you much, and could confuse your IDE (or other tools) that would otherwise be able to relate a header file to its implementation file.
For the same reason (not confusing some tools), it can be easier to put both files in the same folder: they are very closely related anyway.
Additionally, if later on you need to change your folder structure (e.g. to modularize), having only one place to make subfolders (instead of two in your approach) will also make life a bit easier.
Custom types
C++ offers classes for the programmer to define custom types. It is very common to define custom types in their own pair of header/implementation file. In your case, DBConnection.h would define a DBConnection class, whose (non-inline) methods would be implemented in DBConnection.cpp.
Personally, I would not be afraid to create one pair of files per type, which makes it easier for future-you and other programmers to find the file defining a type. You can manage the growing number of files by making subfolders, which will force you to modularize your design.
Of course, sometimes you will need to define a very short class, tightly coupled to another class. It is up to you to include both classes in a common pair of files if you feel the link between them is strong enough.
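A minimal sketch of such a pair (the method names are illustrative assumptions, not a prescribed interface):

DBConnection.h
#pragma once
#include <string>

class DBConnection {
public:
    explicit DBConnection(const std::string& connectionString);
    bool open();    // assumed operations; the real interface is up to you
    void close();
private:
    std::string connectionString_;
    bool isOpen_ = false;
};

DBConnection.cpp
#include "DBConnection.h"

DBConnection::DBConnection(const std::string& connectionString)
    : connectionString_(connectionString) {}

bool DBConnection::open() {
    isOpen_ = true;   // stub: a real implementation would connect here
    return isOpen_;
}

void DBConnection::close() {
    isOpen_ = false;
}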
Extensibility
It may not be a concern to all projects, but this directory structure is extensible in terms of environments and build management.
Keeping project files in separate folders at the top level, and defining out-of-source builds, allows you to create project files for other IDEs further down the line.
This hierarchy is also easily amenable to CMake build management, should you go this way. A CMakeLists.txt file would be placed at the top level (under ProjectName/), this file invoking add_subdirectory(src), in turn calling a CMakeLists.txt in ProjectName/src/, etc.
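A minimal sketch of that CMake arrangement (the target names are assumptions based on the tree above):

# ProjectName/CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
project(ProjectName)
add_subdirectory(src)

# ProjectName/src/CMakeLists.txt
add_subdirectory(libs/Models)
add_subdirectory(apps/ApplicationA)

# ProjectName/src/libs/Models/CMakeLists.txt
add_library(Models User.cpp)

# ProjectName/src/apps/ApplicationA/CMakeLists.txt
add_executable(ApplicationA Core.cpp Bootstrap.cpp)
target_link_libraries(ApplicationA PRIVATE Models)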

Proper structure for C++ project & libs

I'm starting to write a data processing library of mine and quite confused about building the proper structure of project and libraries.
Say, I'd like to have a set of functions stored in myfunclib library. My current set up (taken from multiple recommendations online) looks like this:
myproj/include/myfunclib.h - class declaration
myproj/include/myfunclib.cpp - class functionality
myproj/src/functest.cpp - test file to check functions
Firstly, it feels like this is a proper setup if I use myfunclib only for the myproj project; but say I want to reuse it - then I'd need to specify its path in each of the .cpp files using it, or store multiple copies of it.
Secondly, compilation is a bit bulky in such case:
g++ -I include include/myfunclib.cpp src/functest.cpp
Is it normal practice to type all that stuff every time? What if I have many custom libraries I need? Is there a way to store them all separately, simply include them as 'myfunclib.h', and not worry about recompiling, etc.?
Use a makefile to handle all of your dependencies and build your code. Google the syntax; it's pretty simple. Then you can just say "make" on the command line and it will build everything for you.
Here's a good tutorial:
http://mrbook.org/tutorials/make/
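A minimal sketch for the layout above (the flags and targets are illustrative; note that recipe lines must be indented with tabs):

# Makefile
CXX = g++
CXXFLAGS = -I include -Wall

functest: src/functest.o include/myfunclib.o
	$(CXX) $(CXXFLAGS) -o $@ $^

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c -o $@ $<

clean:
	rm -f functest src/*.o include/*.o

After this, typing make rebuilds only what has changed.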
Some things that bit me originally:
- Remember that templated classes should only be included: what is generally the source implementation should not be built into object files like normal class implementations, so I generally put the whole template implementation within the include directory.
- I keep include and source files separate. By source files I mean code (definitions) that needs to be compiled into object files for linking; the includes are all the declarations, inline functions, etc. It just seems to make more sense to me.
- Sometimes I'll have a header file that includes all relevant headers for a specific module, and in turn perhaps a header file higher up that includes the main headers for the modules I am using.
- Also, as said in the comments, you need to introduce yourself to some build tools and get comfortable with them. They will help you track dependencies within your project and, in most cases, avoid rebuilding an entire project when only a subset of dependencies has changed. (This can be a pain to get right in the beginning but is worthwhile learning; if you use make and g++ there is a way to get this working with g++ -MM, though I'm not sure how well it works for all cases.) I know that the way I organized my projects changed drastically the more I learnt about the build process, and the more complex my projects became (and the more flaws I had to fix).
This is how I generally lay out a project directory structure when starting:
build     - where all the built files will be stored
app       - the main apps (can also be split into include/src)
include   - include files
src       - source files (compiled into objects and then linked with the main compiled app)
lib       - any libraries (usually 3rd-party libraries; if my src is compiled into a library, it usually ends up in build/lib/target/...)
Hope some of this helps.

In C++, why are cyclical directory dependencies bad?

I'm asking this about a C++ project developed on Linux. Consider this:
I have two peer directories, dir1 and dir2. dir1 contains classA.h and classB.h. dir2 contains classC.h and classD.h. dir1/classA.h has an #include for dir2/classC.h. dir2/classD.h has an #include for dir1/classB.h. As a result, there is a cyclical dependency between directories dir1 and dir2. However, there are no cyclical dependencies between any classes.
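For concreteness, a sketch of the includes just described (paths relative to the project root):

// dir1/classA.h
#include "../dir2/classC.h"   // dir1 -> dir2 dependency

// dir2/classD.h
#include "../dir1/classB.h"   // dir2 -> dir1 dependency, closing the cycle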
I understand why cyclic dependencies between classes are bad. It also seems intuitive to me that directories should not have cyclical dependencies; however, I can't figure out why this would be bad.
Anyone have an explanation?
They are not bad. At least not the way you stated the problem. Directories are meant to organize files, but have no programmatic meaning.
However if your directories represent separate modules (i.e. there is a generated library file for each directory), you will have linking errors.
Because classA depends on classC, you need to build the second module in order to compile the first one. But the second module needs the first module to be built first, since classD depends on classB.
As with classes, cyclic dependencies between directories can be an issue for maintainability and reuse.
Maintainability: when a "module" (in this case a directory) depends on another module, any change to the other module can affect this module.
Reuse: when reusing a module, you must also reuse the modules it depends on.
So with cyclic dependencies, all modules are affected. This isn't a real problem with a limited number of modules, but the problem grows with the number of modules.