Java module dependencies and testImplementation dependencies - unit-testing

I apologise if this is a duplicate - a link to another answer would be great, but I am having difficulty knowing what to search for.
I am building a library (Kotlin, but I can jump between Kotlin and Java terminology quite happily). In this library I want the unit tests to depend upon an external library.
I have added a line to build.gradle.kts:
testImplementation("library-group:library-name:0.0.13")
And it all compiles fine; I can publish the library to maven-local and use it. But when I want to run the tests I get an IllegalAccessException, because my module-info.java does not declare a dependency on the library that the tests depend upon.
I know it can be done - the test code depends upon JUnit, but that is not declared in module-info.java either. My guess is that there is a fairly simple incantation to add to build.gradle.kts, but I do not know what to search for... any help gratefully received.
Edit1:
The library that I depend upon at test time is modular.
The problem is that when I run tests that access classes in library-group:library-name, those classes are not available: I get an IllegalAccessException because the required class cannot be accessed. It is as if I need two module-info.java files, one for test and one for production.
/Edit1
Some relevant parts of build.gradle.kts:
plugins {
    // Apply the org.jetbrains.kotlin.jvm Plugin to add support for Kotlin.
    id("org.jetbrains.kotlin.jvm") version "1.5.31"
    // Apply the java-library plugin for API and implementation separation.
    `java-library`
    `maven-publish`
    // https://plugins.gradle.org/plugin/org.jlleitschuh.gradle.ktlint
    id("org.jlleitschuh.gradle.ktlint") version "10.1.0"
    // https://github.com/java9-modularity/gradle-modules-plugin/blob/master/test-project-kotlin/build.gradle.kts
    id("org.javamodularity.moduleplugin") version "1.8.9"
}

dependencies {
    // Align versions of all Kotlin components
    implementation(platform("org.jetbrains.kotlin:kotlin-bom"))
    // Use the Kotlin JDK 8 standard library.
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8")
    testImplementation(kotlin("test"))
    // https://logging.apache.org/log4j/kotlin/index.html
    // https://github.com/apache/logging-log4j-kotlin
    implementation("org.apache.logging.log4j:log4j-api-kotlin:1.0.0")
    implementation("org.apache.logging.log4j:log4j-api:2.11.1")
    implementation("org.apache.logging.log4j:log4j-core:2.11.1")
    testImplementation("library-group:library-name:0.0.13")
}

tasks.test { useJUnitPlatform() }

kotlin { explicitApi = ExplicitApiMode.Strict }

tasks.compileKotlin { kotlinOptions.allWarningsAsErrors = true }
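One thing that may be worth experimenting with (a sketch only: external.library.module and my.library.module are placeholder module names, not taken from the question, and whether this is sufficient depends on how org.javamodularity.moduleplugin wires the test task) is passing the extra JPMS flags to the test JVM rather than touching the production module-info.java:

tasks.test {
    useJUnitPlatform()
    // Make the test-only dependency readable by the module under test at
    // test runtime. Module names below are placeholders for illustration.
    jvmArgs(
        "--add-modules", "external.library.module",
        "--add-reads", "my.library.module=external.library.module"
    )
}

Presumably something equivalent already happens under the hood for the JUnit engine, which would explain why JUnit works without being mentioned in module-info.java.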

Related

Unit tests for a simple game loop using SDL

Background
This is my first time writing unit tests in C++. I am using Catch2 as a test framework and I have 2 projects set up in my Visual Studio solution: one for my application, and one for the tests.
I have a simple game loop that I want to test. Something like this:
Application.h
#ifndef APPLICATION_H
#define APPLICATION_H

namespace Rival {

    class Application {
    public:
        void start();
    };

}  // namespace Rival

#endif  // APPLICATION_H
Application.cpp
#include "pch.h"
#include "Application.h"
#include <SDL.h>
namespace Rival {
void Application::start() {
Uint32 nextUpdateDue = SDL_GetTicks();
while (!exiting) {
Uint32 frameStartTime = SDL_GetTicks();
if (nextUpdateDue <= frameStartTime) {
// Update the game logic, as many times as necessary to keep it
// in-sync with the refresh rate.
while (nextUpdateDue <= frameStartTime) {
state->update();
nextUpdateDue += TimerUtils::timeStepMs;
}
state->render();
} else {
// Sleep until next frame is due
Uint32 sleepTime = nextUpdateDue - frameStartTime;
SDL_Delay(sleepTime);
}
}
}
} // namespace Rival
Problem
The problem is the #include <SDL.h>.
I want to be able to mock methods from this header, for example SDL_GetTicks().
I don't want to actually include SDL, if I can help it; I want to keep my unit tests lightweight and free from any window creation / rendering code.
How is this normally accomplished?
The answer is to apply the Fundamental Theorem of Software Engineering, which says that you can solve virtually any problem by adding a new layer of abstraction.
Here that means you wrap your SDL calls in a class that you can swap out at test time. Define an interface (abstract base class), then implement one derivative that uses SDL, and implement another (or use a mock) for tests. Only the SDL implementation will know about SDL, so the tests will not even include or link it.
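A minimal sketch of that idea (Clock, SdlClock, and FakeClock are names made up here for illustration, not taken from the question):

// Clock.h: abstraction over the SDL timing calls used by the game loop.
#pragma once
#include <cstdint>

class Clock {
public:
    virtual ~Clock() = default;
    virtual std::uint32_t getTicks() = 0;
    virtual void delay(std::uint32_t ms) = 0;
};

// SdlClock.h (main project only): the one implementation that includes SDL.
#pragma once
#include <SDL.h>
#include "Clock.h"

class SdlClock : public Clock {
public:
    std::uint32_t getTicks() override { return SDL_GetTicks(); }
    void delay(std::uint32_t ms) override { SDL_Delay(ms); }
};

// FakeClock.h (test project only): controllable time source for the tests.
#pragma once
#include "Clock.h"

class FakeClock : public Clock {
public:
    std::uint32_t ticks = 0;
    std::uint32_t getTicks() override { return ticks; }
    void delay(std::uint32_t ms) override { ticks += ms; }
};

Application would then receive a Clock (for example through its constructor) and call clock.getTicks() / clock.delay() instead of SDL_GetTicks() / SDL_Delay(); a test can advance FakeClock::ticks to drive the loop deterministically.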
BTW, for a great summary of testing in C++, see this episode of CppCast on designing for test.
In the end I solved this by stubbing the library functions I was using. You can find my commit with the working test implementation here.
This is somewhat similar to @metal's suggestion, but instead of adding a new layer of abstraction I relied on the fact that my test project did not include the library headers, which meant I was free to provide my own substitutions for use in the tests.
Maybe the documentation I produced in the process will help someone else:
The test project includes all headers defined by Main-Project, but does not include the headers from third-party libraries. This is to help keep the tests lightweight; we do not want to create an OpenGL context every time we run our tests.
This means that we have to provide stub implementations for any third-party definitions that we depend upon. Stub or mock implementations of Main-Project definitions can also be provided for files that depend heavily on third-party libraries (e.g. Texture). Other source files from Main-Project can be directly included in the test project, as required.
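As an illustration of what such a substitution can look like (this is my own sketch, not the contents of the linked commit), a test-only stand-in for <SDL.h> only needs the handful of definitions that the game loop above uses:

// SDL.h (test project only): stand-in for the real SDL header.
#pragma once
#include <cstdint>

using Uint32 = std::uint32_t;

// Controllable fake clock; define `Uint32 fakeTicks = 0;` once in a test
// source file.
extern Uint32 fakeTicks;

inline Uint32 SDL_GetTicks() {
    return fakeTicks;
}

inline void SDL_Delay(Uint32 ms) {
    fakeTicks += ms;  // advance the fake clock instead of actually sleeping
}

Because this header shadows the real one only in the test project's include path, the production build keeps using the real SDL.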
To keep the project organised, several filters have been created:
Test Framework: Files required to get the tests to run.
Source Files: Unmodified source files under test, imported directly from Main-Project.
Test Doubles: Test-only implementations of Main-Project headers.
Tests: The tests themselves.
As an aside, the Catch2 framework has been excellent so far, and has added no extra complexity whatsoever.

Is there any way to create integration test for libGDX application?

I apologize if the question is duplicated, but I can't find any information about this.
I know that I can use JUnit to create simple unit tests, but I can't run them on Android/iOS devices. If I understand correctly, I can use Instrumented Unit Tests, but they are for the Android platform only. In this case, I can't test functions from the libGDX core (am I wrong?). So I'm interested: how can I run my tests on devices?
Testing libGDX applications is not an easy topic but with a good architecture it is possible. The crucial point is to separate the rendering part from the business logic you want to test. Rendering always requires an OpenGL context and will just break if you try to run it without that. You can actually write tests that require OpenGL if you don't plan to run them on a headless build server but just on your desktop.
That being said, testing of libGDX apps is mostly centered around the use of HeadlessApplication, which makes your libGDX-dependent code runnable in your test environment. If you want to start the whole game in a test, you need a headless version of it (here "MyGameHeadlessApplication"). Then you can initialize it like this:
private MyGameHeadlessApplication application;

@Before
public void setUp() throws Exception {
    HeadlessApplicationConfiguration config = new HeadlessApplicationConfiguration();
    config.renderInterval = 1F / 30F;
    application = new MyGameHeadlessApplication();
    new HeadlessApplication(this.application, config);
}
For testing smaller parts that depend on libGDX, there is a very convenient library available: the gdx-testing project contains a GdxTestRunner that wraps your tests in a HeadlessApplication and lets you write something like this (from the gdx-testing example):
@RunWith(GdxTestRunner.class)
public class MySuperTestClass {

    @Test
    public void bestTestInHistory() {
        // libgdx dependent code runs here
    }
}
On top of that, I had a little problem with my assets folder, which couldn't be found by the tests at first. I fixed that by setting workingDir for tests in my build.gradle. And of course make sure you have all the needed dependencies (e.g. also box2d if you need that in your tests). In my setup I have all tests in the "core" project:
project(":core") {
apply plugin: "java"
test {
project.ext.assetsDir = new File("./assets")
workingDir = project.ext.assetsDir
}
dependencies {
testCompile "com.badlogicgames.gdx:gdx-backend-headless:$gdxVersion"
testCompile "com.badlogicgames.gdx:gdx-platform:$gdxVersion:natives-desktop"
testCompile "com.badlogicgames.gdx:gdx-box2d-platform:$gdxVersion:natives-desktop"
// ... more dependencies here ...
See also Unit-testing of libgdx-using classes

MvvmCross: Unit-testing services with plugins

I am creating a cross-platform project with an MvvmCross v3 and Xamarin solution, and I would like to create some unit tests.
This seems a bit outdated, so I was trying to follow this and it worked as expected.
However, I am now making an attempt to unit-test some of my domain services, which are dependent on platform-specific MvvmCross plugins (e.g. ResourceLoader).
Running the test results in the following exception:
Cirrious.CrossCore.Exceptions.MvxException: Failed to resolve type
Cirrious.CrossCore.Plugins.IMvxPluginManager.
I assume that IMvxPluginManager is probably registered in the Setup flow, and that I need to include platform implementation of the plugins in my project, yet I was wondering what would be the preferred way of setting up my unit-test project? Is there something that I am missing?
Is there any updated tutorial for the above task?
Are there already any plugin platform extensions that supports test environment, or should I make an attempt to write them by myself?
In general, you shouldn't be loading the plugins or a real MvxPluginManager during your service tests.
Instead your unit tests should be registering mock types for the interfaces that your services need to use.
var mock = new Mock<INeedToUse>();
// use mock.Setup methods
Ioc.RegisterSingleton<INeedToUse>(mock.Object);
// or you can use constructor dependency injection on INeedToUse instead
You can also register a mock IMvxPluginManager if you really need to, but in the majority of cases I don't believe you should need that. If you've got a case where you absolutely need it, please post a code sample - it's easier to talk in code than text.
This scenario should be entirely possible. I wanted to unit-test my SQLite service implementation. I did the following to get it to work:
Create a Visual Studio unit test project
Add a reference to the .Core portable library project
Add a NuGet reference to the MvvmCross Test Helper
Add a NuGet reference to the MvvmCross SQLite plugin
(this will make use of the WPF implementation of SQLite)
Download the SQLite Windows library and copy it into your test project
SQLite download location
Make sure to add sqlite3.dll to the root of your unit test project and set "Copy to Output Directory" to "Copy always". This makes sure the actual SQLite DLL is copied to the unit test DLL location. (Check that the DLL is copied to your bin/Debug folder.)
Then write your unit test the following way:
[TestClass]
public class SqlServiceTests : MvxIoCSupportingTest
{
    private readonly ISQLiteConnectionFactory _factory;

    public SqlServiceTests()
    {
        base.ClearAll();
        _factory = new MvxWpfSqLiteConnectionFactory();
        Ioc.RegisterSingleton<ISQLiteConnectionFactory>(_factory);
    }

    [TestMethod]
    public void YourSqlLiteTest()
    {
        // Arrange
        var si = new SqlDataService(_factory);
        var list = si.GetOrderList();
    }
}
I haven't tested this with my view models. By using the Ioc.RegisterSingleton method, the SQLite connection factory should be readily available to your view models.

xunit programmatically add new tests/"[Facts]"?

We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit [Fact], as below:
[Fact]
public void TestAllCases()
{
    PileOfTests pot = new PileOfTests();
    pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say 50 files). Each is then hammered against the URI to see if each returns HTTP 200 ("OK"). If any fail, we currently print it as a failure using:
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye, but xUnit thinks all tests failed (understandably). Most importantly, we can't tell xUnit "here, run only this specific test". It's all or nothing the way it's currently built.
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.
The simple answer is:
No, not directly. But there is a workaround, albeit a bit hacky, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying the [RunWith(typeof(CustomRunner))] attribute on a class, one can instruct xUnit to use the CustomRunner class - which must implement Xunit.Sdk.ITestClassCommand - to enumerate the tests available on the test class decorated with this attribute.
But unfortunately, while the invocation of test methods has been decoupled from System.Reflection and the actual methods, the way the tests to run are passed to the test runner hasn't been.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI - but any attempts to run / invoke it end up with a TargetInvocationException.
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as different tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing that each of the generated methods does is to invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See linked gist for further explanation.
Hope this helps; feel free to use the example implementation if you like.

Test framework for component testing

I am looking for a test framework that suits my requirements. The following are the steps that I need to perform during automated testing:
Set up (some input files need to be read or copied into specific folders)
Execute (run the stand-alone executable)
Tear down (clean up to bring the system back to its old state)
Apart from this, I also want some intelligence to make sure that if a .cc file changes, all the tests that can validate the change are run.
I am evaluating PyUnit and CppUnit with SCons for this. I thought I would ask this question to make sure I am heading in the right direction. Can you suggest any other test frameworks or tools? And what other requirements should be considered when selecting the right test framework?
Try googletest, a.k.a. gTest. It is no worse than any other unit test framework, and can beat some in ease of use. It is not exactly the integration-testing tool you are looking for, but it can easily be applied in most cases. This Wikipedia page might also be useful for you.
Here is a copy of a sample on the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
protected:
    // You can remove any or all of the following functions if its body
    // is empty.

    FooTest() {
        // You can do set-up work for each test here.
    }

    virtual ~FooTest() {
        // You can do clean-up work that doesn't throw exceptions here.
    }

    // If the constructor and destructor are not enough for setting up
    // and cleaning up each test, you can define the following methods:

    virtual void SetUp() {
        // Code here will be called immediately after the constructor (right
        // before each test).
    }

    virtual void TearDown() {
        // Code here will be called immediately after each test (right
        // before the destructor).
    }

    // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
    // Exercises the Xyz feature of Foo.
}

}  // namespace
SCons could take care of building your .cc files when they change, and gTest can be used to set up and tear down your tests.
I can only add that we are using gTest in some cases, and a custom in-house test automation framework in almost all others. It is often the case with such tools that it might be easier to write your own than to try to adjust and tweak another one to match your requirements.
One good option IMO, and it is something our test automation framework is moving towards, is using nosetests, coupled with a library of common routines (like starting/stopping services, getting the status of something, enabling/disabling logging in certain components, etc.). This gives you a flexible system that is also fairly easy to use. And since it uses Python and not C++ or something like that, more people can be busy creating test cases, including QEs, who do not necessarily need to be able to write C++.
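As a rough sketch of that shape (the testlib module and its helper functions are hypothetical, purely to illustrate the structure):

# test_component.py: a nosetests-style component test built on a small
# library of shared routines. nose discovers test_* functions and runs the
# module-level setup/teardown fixtures around them.
from testlib.services import start_service, stop_service, get_status

COMPONENT = "my-component"

def setup_module():
    # Copy input files into place / start the stand-alone component.
    start_service(COMPONENT)

def teardown_module():
    # Bring the system back to its old state.
    stop_service(COMPONENT)

def test_component_reports_running():
    assert get_status(COMPONENT) == "running"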
After reading this article http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle some time ago I went for CxxTest.
Once you have the thing set up (you need to install Python, for instance), it's pretty easy to write tests (I was completely new to unit testing).
I use it at work, integrated as a Visual Studio project in my solution. It produces clickable output when a test fails, and the tests are built and run each time I build the solution.
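For reference, a minimal CxxTest suite looks roughly like this (my own example, not taken from the post); the Python requirement comes from CxxTest's cxxtestgen script, which generates the test runner from suite headers like this one:

// MyTestSuite.h: method names must start with "test" so the generator
// picks them up.
#include <cxxtest/TestSuite.h>

class MyTestSuite : public CxxTest::TestSuite {
public:
    void testAddition() {
        TS_ASSERT_EQUALS(2 + 2, 4);
    }

    void testTruth() {
        TS_ASSERT(1 + 1 > 1);
    }
};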