Unit tests for a simple game loop using SDL (C++)

Background
This is my first time writing unit tests in C++. I am using Catch2 as a test framework and I have 2 projects set up in my Visual Studio solution: one for my application, and one for the tests.
I have a simple game loop that I want to test. Something like this:
Application.h
#ifndef APPLICATION_H
#define APPLICATION_H

namespace Rival {

    class State;  // the current game state (update/render)

    class Application {
    public:
        void start();

    private:
        bool exiting = false;
        State* state = nullptr;
    };

}  // namespace Rival

#endif  // APPLICATION_H
Application.cpp
#include "pch.h"
#include "Application.h"

#include <SDL.h>

namespace Rival {

    void Application::start() {
        Uint32 nextUpdateDue = SDL_GetTicks();

        while (!exiting) {
            Uint32 frameStartTime = SDL_GetTicks();

            if (nextUpdateDue <= frameStartTime) {
                // Update the game logic as many times as necessary to keep it
                // in sync with the refresh rate.
                while (nextUpdateDue <= frameStartTime) {
                    state->update();
                    nextUpdateDue += TimerUtils::timeStepMs;
                }
                state->render();
            } else {
                // Sleep until the next frame is due
                Uint32 sleepTime = nextUpdateDue - frameStartTime;
                SDL_Delay(sleepTime);
            }
        }
    }

}  // namespace Rival
Problem
The problem is the #include <SDL.h>.
I want to be able to mock functions from this header, for example SDL_GetTicks().
I don't want to actually include SDL, if I can help it; I want to keep my unit tests lightweight and free from any window creation / rendering code.
How is this normally accomplished?

The answer is to apply the Fundamental Theorem of Software Engineering, which says that you can solve virtually any problem by adding a new layer of abstraction.
Here that means you wrap your SDL calls in a class that you can swap out at test time. Define an interface (abstract base class), then implement one derivative that uses SDL, and implement another (or use a mock) for tests. Only the SDL implementation will know about SDL, so the tests will not even include or link it.
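As a rough sketch (all names here are invented for illustration, not part of the original code), the interface and its test double might look like this:

```cpp
#include <cassert>
#include <cstdint>

// Interface: the only timing API the game loop knows about.
class TimeProvider {
public:
    virtual ~TimeProvider() = default;
    virtual std::uint32_t getTicks() = 0;
    virtual void delay(std::uint32_t ms) = 0;
};

// Production implementation -- the only translation unit that includes SDL:
//   std::uint32_t getTicks() override { return SDL_GetTicks(); }
//   void delay(std::uint32_t ms) override { SDL_Delay(ms); }

// Test double: fully deterministic, no SDL anywhere.
class FakeTimeProvider : public TimeProvider {
public:
    std::uint32_t getTicks() override { return now; }
    void delay(std::uint32_t ms) override { now += ms; }

    std::uint32_t now = 0;  // tests can set this to any instant they like
};
```

Application would then hold a TimeProvider reference instead of calling SDL directly: the SDL-backed implementation lives in the application project, while the tests construct a FakeTimeProvider and advance time deterministically.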
BTW, for a great summary of testing in C++, see this episode of CppCast on designing for test.

In the end I solved this by stubbing the library functions I was using. You can find my commit with the working test implementation here.
This is somewhat similar to @metal's suggestion, but instead of adding a new layer of abstraction I relied on the fact that my test project did not include the library headers, which meant that I was free to provide my own substitutions for use in the tests.
Maybe the documentation I produced in the process will help someone else:
The test project includes all headers defined by Main-Project, but does not include the headers from third-party libraries. This is to help keep the tests lightweight; we do not want to create an OpenGL context every time we run our tests.
This means that we have to provide stub implementations for any third-party definitions that we depend upon. Stub or mock implementations of Main-Project definitions can also be provided for files that depend heavily on third-party libraries (e.g. Texture). Other source files from Main-Project can be directly included in the test project, as required.
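As an illustration, a stub "SDL.h" on the test project's include path might look something like this (a hypothetical sketch; only the functions the code under test actually calls need stubbing):

```cpp
// Stub "SDL.h" for the test project (hypothetical sketch).
// Because the test project never sees the real SDL headers, this file
// shadows them and defines only what the code under test calls.
#include <cassert>
#include <cstdint>

typedef std::uint32_t Uint32;

// A controllable fake clock shared by the stubs.
static Uint32 fakeTicks = 0;

inline Uint32 SDL_GetTicks() { return fakeTicks; }
inline void SDL_Delay(Uint32 ms) { fakeTicks += ms; }
```

Tests can then set fakeTicks directly to simulate any timing scenario, and "sleeping" just advances the fake clock.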
To keep the project organised, several filters have been created:
Test Framework: Files required to get the tests to run.
Source Files: Unmodified source files under test, imported directly from Main-Project.
Test Doubles: Test-only implementations of Main-Project headers.
Tests: The tests themselves.
As an aside, the Catch2 framework has been excellent so far, and has added no extra complexity whatsoever.

Related

Java module dependencies and testImplementation dependencies

I apologise if this is a duplicate - a link to another answer would be great, but I am having difficulty knowing what to search for.
I am building a library (kotlin, but I can jump between kotlin and java terminology quite happily). In this library I want the unit tests to depend upon an external library.
I have added a line to build.gradle.kts
testImplementation("library-group:library-name:0.0.13")
And it all compiles fine; I can publish the library to maven-local and use it. But when I want to run the tests I get an IllegalAccessException, because my module-info.java does not specify a dependency on the library that the tests depend upon.
I know it can be done - the test code depends upon JUnit, but that is not declared in module-info.java. My guess is that there is a fairly simple incantation to add to build.gradle.kts, but I do not know what to search for. Any help gratefully received.
Edit1:
The library that I depend upon at test time is modular.
The problem is that when I run tests that access classes in library-group:library-name, those classes are not visible to my module, and I get an IllegalAccessException. It is as if I need two module-info.java files, one for test and one for production.
/Edit1
Some relevant parts of build.gradle.kts:
plugins {
    // Apply the org.jetbrains.kotlin.jvm Plugin to add support for Kotlin.
    id("org.jetbrains.kotlin.jvm") version "1.5.31"

    // Apply the java-library plugin for API and implementation separation.
    `java-library`
    `maven-publish`

    // https://plugins.gradle.org/plugin/org.jlleitschuh.gradle.ktlint
    id("org.jlleitschuh.gradle.ktlint") version "10.1.0"

    // https://github.com/java9-modularity/gradle-modules-plugin/blob/master/test-project-kotlin/build.gradle.kts
    id("org.javamodularity.moduleplugin") version "1.8.9"
}

dependencies {
    // Align versions of all Kotlin components
    implementation(platform("org.jetbrains.kotlin:kotlin-bom"))

    // Use the Kotlin JDK 8 standard library.
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8")
    testImplementation(kotlin("test"))

    // https://logging.apache.org/log4j/kotlin/index.html
    // https://github.com/apache/logging-log4j-kotlin
    implementation("org.apache.logging.log4j:log4j-api-kotlin:1.0.0")
    implementation("org.apache.logging.log4j:log4j-api:2.11.1")
    implementation("org.apache.logging.log4j:log4j-core:2.11.1")

    testImplementation("library-group:library-name:0.0.13")
}
tasks.test { useJUnitPlatform() }
kotlin { explicitApi = ExplicitApiMode.Strict }
tasks.compileKotlin { kotlinOptions.allWarningsAsErrors = true }
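One incantation that may be worth experimenting with (an untested sketch; "my.module" and "external.library.module" are placeholders for your own module name and the test dependency's module name) is to open the test JVM up manually with the standard JPMS flags:

```kotlin
// Untested sketch: pass --add-modules / --add-reads to the test JVM so the
// test runtime can read the module that only testImplementation declares.
tasks.test {
    useJUnitPlatform()
    jvmArgs(
        "--add-modules", "external.library.module",
        "--add-reads", "my.module=external.library.module"
    )
}
```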

Unit testing D library

The built-in unit testing functionality (unittest { ... } code blocks) seems to only be activated when running an executable.
How can I activate unit tests in a library with no main function?
This is somewhat related to this SO question, although the accepted answer there deals with a workaround via the main function.
As an example, I would expect unit testing to fail on a file containing only this code:
int foo(int i) { return i + 1; }

unittest {
    assert(foo(1) == 1); // should fail
}
You'll notice I don't have a module declaration at the top. I'm not sure if that matters for this specific question, but in reality I would have a module statement at the top.
How can I activate unit tests in a library with no main function?
You can use DMD's -main switch, or rdmd's --main switch, to add an empty main function to the set of compiled source files. This allows creating a unit test binary for your library.
If you use Dub, dub test will do something like the above automatically.
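For example (the file name here is a placeholder):

```shell
# -unittest compiles the unittest blocks in; -main adds an empty
# entry point so the resulting binary links and runs the tests.
dmd -unittest -main -run mylib.d

# Or, in a Dub project:
dub test
```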

Unittest design pattern for static functions

I am writing a simple C++ class and trying to write a unit test for the code.
The code is as simple as:
class Foo
{
public:
    static int EntryFunction(bool flag)
    {
        if (flag)
        {
            TryDownload();
        }
        else
        {
            TryDeleteFile();
        }
        return 0;
    }

    static void TryDownload()
    {
        // http download code
    }

    static void TryDeleteFile()
    {
        // delete file code
    }
};
The issue is that, according to the principles of unit testing, we cannot rely on the network connection, so the unit test cannot really run the download code. My ultimate goal is just to test the code path: for example, when TRUE is passed in, the download path should be hit; otherwise the delete logic should be hit. I was thinking of overriding this class so that the download and delete functions could be replaced with a no-op that just sets a flag, but the functions are static.
I was wondering in this case what will be a good way to test it?
I think it depends on what is in your TryDownload and TryDelete functions. If they use some other objects/functions to perform their tasks, you can configure simulations of those objects so that your TryDownload and TryDelete don't know they aren't really downloading/deleting anything.
If you don't have such objects/functions (and everything is contained in TryDownload/TryDelete), one might argue that the code is not well suited to unit testing because it can't be broken down into small units. In that case, your only option is an actual web service (perhaps running on the localhost) that lets those functions do their thing.
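To sketch that first option (the interface and names below are invented for illustration): pass the collaborators in behind an interface, so a test can substitute a recording fake:

```cpp
#include <cassert>

// Hypothetical interface for the network/file operations.
struct Downloader {
    virtual ~Downloader() = default;
    virtual void tryDownload() = 0;
    virtual void tryDeleteFile() = 0;
};

// Fake that only records which path was taken -- no network, no disk.
struct FakeDownloader : Downloader {
    bool downloaded = false;
    bool deleted = false;
    void tryDownload() override { downloaded = true; }
    void tryDeleteFile() override { deleted = true; }
};

// EntryFunction rewritten to take its collaborator as a parameter,
// so the static helpers are no longer hard-wired in.
int EntryFunction(bool flag, Downloader& d)
{
    if (flag)
        d.tryDownload();
    else
        d.tryDeleteFile();
    return 0;
}
```

A test then constructs a FakeDownloader, calls EntryFunction with each flag value, and asserts that exactly the expected path was taken.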
One approach I can suggest is to use the Google Mock library in your unit test framework.
Using Google Mock, you can do exactly what you have described.

Using Boost::Test for parallel code

I want to create some tests with boost::unit_test for my parallel (MPI-based) C++ code. I have some basic experience with the test framework. For me, the main problem when moving to parallel code is where to put MPI::Init, such that it is called first; in the test suites I have created there is no main function. Furthermore, does Boost::Test exit correctly (with respect to MPI) when some assertions fail on a subset of the existing ranks?
Boost Test has fixture support, which allows you to perform setup/cleanup per test case, per test suite, or globally. It sounds like you should put the call to MPI::Init in a global fixture.
struct MPIFixture {
    MPIFixture() { MPI::Init(); }
    ~MPIFixture() { /* I bet there's a deinit you should call */ }
};

BOOST_GLOBAL_FIXTURE(MPIFixture);
If you have trouble working with that, or if you are working in a framework that provides its own main function, then you can #define BOOST_TEST_NO_MAIN before including the Boost headers. Then you can invoke boost::unit_test::unit_test_main yourself to run your test suites.
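An untested sketch of that route (using MPI's C API here rather than the deprecated MPI:: C++ bindings shown above):

```cpp
// Sketch: suppress Boost.Test's generated main() so MPI can be
// initialised before the test runner takes over.
#define BOOST_TEST_NO_MAIN
#define BOOST_TEST_MODULE mpi_tests
#include <boost/test/included/unit_test.hpp>
#include <mpi.h>

// The header generates init_unit_test() from BOOST_TEST_MODULE;
// we only supply main() so MPI bracketing happens around the run.
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int result = boost::unit_test::unit_test_main(&init_unit_test, argc, argv);
    MPI_Finalize();
    return result;
}
```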

Test framework for component testing

I am looking for a test framework that suits my requirements. These are the steps I need to perform during automated testing:
SetUp (some input files need to be read or copied into specific folders)
Execute (run the standalone executable)
Tear Down (clean up to restore the system to its old state)
Apart from this, I also want some intelligence to ensure that if a .cc file changes, all the tests that can validate the change are run.
I am evaluating PyUnit and cppunit with SCons for this, and thought I would post this question to make sure I am on the right track. Can you suggest any other test frameworks? And what other requirements should be considered when selecting a test framework?
Try googletest, AKA gTest: it is no worse than any other unit test framework, and can beat some in ease of use. It is not exactly the integration-testing tool you are looking for, but it can easily be applied in most cases. This Wikipedia page might also be useful for you.
Here is a copy of a sample on the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
protected:
    // You can remove any or all of the following functions if their
    // bodies are empty.

    FooTest() {
        // You can do set-up work for each test here.
    }

    virtual ~FooTest() {
        // You can do clean-up work that doesn't throw exceptions here.
    }

    // If the constructor and destructor are not enough for setting up
    // and cleaning up each test, you can define the following methods:

    virtual void SetUp() {
        // Code here will be called immediately after the constructor (right
        // before each test).
    }

    virtual void TearDown() {
        // Code here will be called immediately after each test (right
        // before the destructor).
    }

    // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
    // Exercises the Xyz feature of Foo.
}

}  // namespace
SCons can take care of rebuilding your .cc files when they change; gTest can be used to set up and tear down your tests.
I can only add that we use gTest in some cases, and a custom in-house test automation framework in almost all others. It is often the case with such tools that it is easier to write your own than to adjust and tweak another to match your requirements.
One good option IMO, and something our test automation framework is moving towards, is using nosetests, coupled with a library of common routines (like start/stop services, get the status of something, enable/disable logging in certain components, etc.). This gives you a flexible system that is also fairly easy to use. And since it uses Python rather than C++, more people can be busy creating test cases, including QEs, who do not necessarily need to be able to write C++.
After reading this article http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle some time ago, I went for CxxTest.
Once you have it set up (you need to install Python, for instance), it's pretty easy to write tests (I was completely new to unit tests).
I use it at work, integrated as a visual studio project in my solution. It produces a clickable output when a test fails, and the tests are built and run each time I build the solution.