Is there any way to create integration tests for a libGDX application? - unit-testing

I apologize if this question is a duplicate, but I can't find any information about this.
I know that I can use JUnit to create simple unit tests, but I can't run them on Android/iOS devices. If I understand correctly, I can use instrumented unit tests, but those are for the Android platform only, and in that case I can't test functions from the libGDX core (am I wrong?). So I'm interested: how can I run my tests on devices?

Testing libGDX applications is not an easy topic, but with a good architecture it is possible. The crucial point is to separate the rendering part from the business logic you want to test. Rendering always requires an OpenGL context and will simply break if you try to run it without one. You can still write tests that require OpenGL if you don't plan to run them on a headless build server but only on your desktop.
That being said, testing of libGDX apps is mostly centered around the use of HeadlessApplication, which makes your libGDX-dependent code runnable in your test environment. If you want to start the whole game in a test, you need a headless version of it (here "MyGameHeadlessApplication"). Then you can initialize it like this:
private MyGameHeadlessApplication application;

@Before
public void setUp() throws Exception {
    HeadlessApplicationConfiguration config = new HeadlessApplicationConfiguration();
    config.renderInterval = 1F / 30F;
    application = new MyGameHeadlessApplication();
    new HeadlessApplication(this.application, config);
}
For testing smaller parts that depend on libGDX, there is a very convenient library available: the gdx-testing project contains a GdxTestRunner that wraps your tests in a HeadlessApplication and allows you to write something like this (from the gdx-testing example):
@RunWith(GdxTestRunner.class)
public class MySuperTestClass {

    @Test
    public void bestTestInHistory() {
        // libGDX-dependent code runs here
    }
}
On top of that, I had a little problem with my assets folder that couldn't be found in the tests at first. I fixed that by setting the workingDir for tests in my build.gradle. And of course make sure to have all the needed dependencies (including, e.g., box2d if you need that in your tests). In my setup I have all tests in the "core" project:
project(":core") {
apply plugin: "java"
test {
project.ext.assetsDir = new File("./assets")
workingDir = project.ext.assetsDir
}
dependencies {
testCompile "com.badlogicgames.gdx:gdx-backend-headless:$gdxVersion"
testCompile "com.badlogicgames.gdx:gdx-platform:$gdxVersion:natives-desktop"
testCompile "com.badlogicgames.gdx:gdx-box2d-platform:$gdxVersion:natives-desktop"
// ... more dependencies here ...
See also Unit-testing of libgdx-using classes

Related

MvvmCross: Unit-testing services with plugins

I am creating a cross-platform project with MvvmCross v3 and a Xamarin solution, and I would like to create some unit tests.
This seems a bit outdated, so I was trying to follow this and it worked as expected.
However, I am now making an attempt to unit-test some of my domain services, which depend on platform-specific MvvmCross plugins (e.g. ResourceLoader).
Running the test results in the following exception:
Cirrious.CrossCore.Exceptions.MvxException: Failed to resolve type
Cirrious.CrossCore.Plugins.IMvxPluginManager.
I assume that IMvxPluginManager is probably registered in the Setup flow, and that I need to include the platform implementation of the plugins in my project, yet I was wondering what would be the preferred way of setting up my unit-test project. Is there something that I am missing?
Is there any updated tutorial for the above task?
Are there already any plugin platform extensions that support a test environment, or should I attempt to write them myself?
In general, you shouldn't be loading the plugins or a real MvxPluginManager during your service tests.
Instead, your unit tests should register mock types for the interfaces that your services need to use.
var mock = new Mock<INeedToUse>();
// use mock.Setup methods
Ioc.RegisterSingleton<INeedToUse>(mock.Object);
// or you can use constructor dependency injection on INeedToUse instead
You can also register a mock IMvxPluginManager if you really need to, but in the majority of cases I don't believe you should need that. If you've got a case where you absolutely need it, please post a code sample - it's easier to talk in code than text.
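To make that concrete, here is a minimal, self-contained sketch assuming Moq and the MvvmCross v3 test helper (MvxIoCSupportingTest); MyDomainService and the members of INeedToUse are invented for illustration:
using Cirrious.MvvmCross.Test.Core; // MvvmCross v3 test helper (assumed package)
using Moq;
using NUnit.Framework;

// Hypothetical interface and service, standing in for your real ones.
public interface INeedToUse
{
    string GetValue();
}

public class MyDomainService
{
    private readonly INeedToUse _dependency;
    public MyDomainService(INeedToUse dependency) { _dependency = dependency; }
    public string DoWork() { return _dependency.GetValue(); }
}

[TestFixture]
public class MyDomainServiceTests : MvxIoCSupportingTest
{
    [Test]
    public void DoWork_ReturnsValueFromDependency()
    {
        ClearAll(); // resets the test IoC container (provided by the base class)

        var mock = new Mock<INeedToUse>();
        mock.Setup(m => m.GetValue()).Returns("stubbed");
        Ioc.RegisterSingleton<INeedToUse>(mock.Object);

        // Constructor injection; no plugin manager or real plugins involved.
        var service = new MyDomainService(mock.Object);
        Assert.AreEqual("stubbed", service.DoWork());
    }
}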
This scenario should be perfectly possible. I wanted to unit-test my SQLite service implementation. I did the following to get it to work:
Create a Visual Studio unit test project
Add a reference to the .Core portable library project
Add a NuGet reference to the MvvmCross Test Helper
Add a NuGet reference to the MvvmCross SQLite Plugin
(this will make use of the WPF implementation of SQLite)
Download the SQLite Windows library and copy it into your test project
SQLite download location
And make sure to add sqlite3.dll to the root of your unit test project and set "Copy to Output Directory" to "Copy always". This will make sure the native SQLite library is copied to the unit test DLL location. (Check that the DLL is copied to your bin/debug folder.)
Then write your unit test the following way:
[TestClass]
public class SqlServiceTests : MvxIoCSupportingTest
{
    private readonly ISQLiteConnectionFactory _factory;

    public SqlServiceTests()
    {
        base.ClearAll();
        _factory = new MvxWpfSqLiteConnectionFactory();
        Ioc.RegisterSingleton<ISQLiteConnectionFactory>(_factory);
    }

    [TestMethod]
    public void YourSqlLiteTest()
    {
        // Arrange
        var si = new SqlDataService(_factory);
        var list = si.GetOrderList();
    }
}
I haven't tested this with my viewmodels, but by using the Ioc.RegisterSingleton method the SqlConnectionFactory should be readily available to them.
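If you do want to take it one step further into the viewmodel, the registration above should carry over. A hypothetical sketch: OrderListViewModel is an invented name, the test method would live inside the SqlServiceTests class above (so ClearAll has already run), and Mvx.IocConstruct is the v3-era way to build an object with its constructor dependencies resolved from the IoC container:
// Hypothetical viewmodel that receives the factory via constructor injection.
public class OrderListViewModel : MvxViewModel
{
    private readonly ISQLiteConnectionFactory _factory;

    public OrderListViewModel(ISQLiteConnectionFactory factory)
    {
        _factory = factory;
    }
}

// Added inside SqlServiceTests from the answer above.
[TestMethod]
public void ViewModel_ReceivesFactoryFromIoc()
{
    // _factory was registered via Ioc.RegisterSingleton in the constructor.
    var viewModel = Mvx.IocConstruct<OrderListViewModel>();
    Assert.IsNotNull(viewModel);
}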

Using IOC To Configure Unit and Integration Tests

I have a unit test project which uses Ninject to mock the database repositories. I would like to use these same tests as integration tests and have Ninject bind my real database repositories back to their respective implementations, so as to test/stress the application all the way into the DB.
Is there a way to do this with Visual Studio 2012, or is there another test framework, other than MSTest, which allows for this type of configuration?
I would really hate to rewrite/copy these unit tests into an integration test project, but I suspect I could add the files as links and have a single test file compiled into two projects (unit and integration).
Thanks
Todd
Your requirements sound really odd to me. The difference between a unit test and an integration test is much bigger than just connecting to a database or not. An integration test either has a much bigger scope, or tests whether components communicate correctly. When you write a unit test, the scope of such a unit is normally small (one class/component with all dependencies mocked out), which means there is no need to use a DI container.
Let me put it differently: when the tests are exactly the same, why are you interested in running the same test with and without the database? Just leave the database in and test that. Besides these tests, you can add 'real' unit tests that have a much smaller scope.
With NUnit you can do this with TestCase.
Say you need to run the unit and unit/integration tests using CustomerRepository and OrderRepository:
[TestFixture]
public class TestCustomerRepository
{
    IKernel _unit;
    IKernel _integration;

    [SetUp]
    public void Setup()
    {
        // setup both kernels
    }

    [TestCase("Unit")]
    [TestCase("Integration")]
    public void DoTest(String type)
    {
        var custRepo = GetRepo<ICustomerRepository>(type);
        var orderRepo = GetRepo<IOrderRepository>(type);
        // do the test here
    }

    protected T GetRepo<T>(String type)
    {
        if (type.Equals("Unit"))
        {
            return _unit.Get<T>();
        }
        return _integration.Get<T>();
    }
}
This is the basic idea.
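For completeness, the elided Setup could be sketched like this, assuming Ninject and Moq (SqlCustomerRepository and SqlOrderRepository are invented names for the real implementations):
[SetUp]
public void Setup()
{
    // Unit kernel: bind the repository interfaces to Moq test doubles.
    _unit = new StandardKernel();
    _unit.Bind<ICustomerRepository>().ToConstant(new Mock<ICustomerRepository>().Object);
    _unit.Bind<IOrderRepository>().ToConstant(new Mock<IOrderRepository>().Object);

    // Integration kernel: bind the same interfaces to the real, DB-backed
    // implementations (invented names).
    _integration = new StandardKernel();
    _integration.Bind<ICustomerRepository>().To<SqlCustomerRepository>();
    _integration.Bind<IOrderRepository>().To<SqlOrderRepository>();
}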

How do I use the API of a Windows service?

I've got a big Windows service application. It performs actions on a time-bound basis. Sometimes I need to be able to use some of its functionality in isolation from the rest of the application. Currently I've got a battery of 'unit tests' which call into various sources and perform the desired functionality. My problem is that these are not unit tests; they are the way we're exposing the API. If we run all the unit tests in the project, we'll damage some of our production data.
My question is: how do I go about accessing some of the functionality of the application without unit tests? I was thinking of perhaps something like an interpreter over the top of it where you can call various parts of the functionality, but I'm not really sure where to start.
An example of a unit test in our code would be:
[TestMethod]
public void TransferFunds()
{
    int accountNumberTo = 123456;
    int accountNumberFrom = 654321;
    var accountFrom = Store.GetAccount(accountNumberFrom);
    var accountTo = Store.GetAccount(accountNumberTo);
    double amountToTransfer = 1000;
    DateTime transactionDate = new DateTime(2010, 01, 01);
    Store.TransferFunds(accountFrom, accountTo, amountToTransfer, transactionDate);

    var client = BankAccountService.Client();
    client.Contribute(accountNumberTo, amountToTransfer, transactionDate);
    client.Contribute(accountNumberFrom, amountToTransfer, transactionDate);
}
How can we move this out of unit tests, but still have the ability to run code like this?
Your setup sounds very dangerous. I would create separate console applications for your different needs. I would also recommend that you remove all unit tests that endanger your production data. Having that sort of unit test is just downright bad!
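As a sketch of the console-application route: lift the operation out of the test project into a small tool that must be invoked explicitly with its inputs, so a test runner can never trigger it by accident. Store comes from the question's code; the argument handling here is invented:
using System;

// Hypothetical console harness around the transfer operation from the question.
public static class TransferFundsTool
{
    public static void Main(string[] args)
    {
        if (args.Length != 3)
        {
            Console.WriteLine("usage: TransferFundsTool <fromAccount> <toAccount> <amount>");
            return;
        }

        int accountNumberFrom = int.Parse(args[0]);
        int accountNumberTo = int.Parse(args[1]);
        double amountToTransfer = double.Parse(args[2]);

        var accountFrom = Store.GetAccount(accountNumberFrom);
        var accountTo = Store.GetAccount(accountNumberTo);
        Store.TransferFunds(accountFrom, accountTo, amountToTransfer, DateTime.Now);
    }
}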

MEF and unit testing with NUnit

A few weeks ago I jumped on the MEF (ComponentModel) bandwagon, and I'm now using it for a lot of my plugins and shared libraries. Overall it's been great, aside from the frequent mistakes on my part that result in frustrating debugging sessions.
Anyhow, my app has been running great, but my MEF-related code changes have caused my automated builds to fail. Most of my unit tests were failing simply because the modules I was testing depended on other modules that needed to be loaded by MEF. I worked around these situations by bypassing MEF and directly instantiating those objects.
In other words, via MEF I would have something like
[Import]
public ICandyInterface ci { get; set; }
and
[Export(typeof(ICandyInterface))]
public class MyCandy : ICandyInterface
{
    [ImportingConstructor]
    public MyCandy([Import("name_param")] string name) {}
    ...
}
But in my unit tests, I would just use
ICandyInterface candy = new MyCandy("Godiva");
In addition, the CandyInterface implementation requires a connection to a database, which I have worked around by just adding a test database to my unit test folder and having NUnit use that for all of the tests.
Ok, so here are my questions regarding this situation:
Is this a Bad Way to do things?
Would you recommend composing parts in [SetUp]?
I haven't yet learned how to use mocks in unit testing -- is this a good example of a case where I might want to mock the underlying database connection (somehow) to just return dummy data and not really require a database?
If you've encountered something like this before, can you offer your experience and the way you solved your problem? (or should this go into the community wiki?)
It sounds like you are on the right track. A unit test should test a unit, and that's what you do when you directly create instances. If you let MEF compose instances for you, the tests would tend towards integration tests. Not that there's anything wrong with integration tests, but unit tests tend to be more maintainable because you test each unit in isolation.
You don't need a container to wire up instances in unit tests.
I generally recommend against composing Fixtures in SetUp, as it leads to the General Fixture anti-pattern.
It is best practice to replace dependencies with Test Doubles. Dynamic mocks are one of the more versatile ways of doing this, so that's definitely something you should learn.
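To make the mocking suggestion concrete, here is a minimal sketch with Moq; ICandyStore and CandyDescriber are invented stand-ins for the database-backed dependency and the class under test:
using Moq;
using NUnit.Framework;

// Invented dependency that would normally hit the database.
public interface ICandyStore
{
    string GetFlavour(string name);
}

// Invented class under test; takes the dependency via its constructor.
public class CandyDescriber
{
    private readonly ICandyStore _store;
    public CandyDescriber(ICandyStore store) { _store = store; }
    public string Describe(string name) { return name + ": " + _store.GetFlavour(name); }
}

[TestFixture]
public class CandyDescriberTests
{
    [Test]
    public void Describe_UsesCannedDataInsteadOfTheDatabase()
    {
        // The dynamic mock returns dummy data, so no database is required.
        var store = new Mock<ICandyStore>();
        store.Setup(s => s.GetFlavour("Godiva")).Returns("dark chocolate");

        var describer = new CandyDescriber(store.Object);

        Assert.AreEqual("Godiva: dark chocolate", describer.Describe("Godiva"));
    }
}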
I agree that creating the DOCs manually is much better than using the MEF composition container to satisfy imports, but regarding the note that 'composing fixtures in SetUp leads to the General Fixture anti-pattern', I want to mention that that's not always the case.
If you're using the static container and satisfy imports via CompositionInitializer.SatisfyImports, you will run into the General Fixture anti-pattern, because CompositionInitializer.Initialize cannot be called more than once. However, you can always create a CompositionContainer, add catalogs, and call SatisfyImportsOnce on the container itself. In that case you can use a new CompositionContainer in every test and avoid the shared/General Fixture anti-pattern.
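A sketch of that per-test container setup (assuming NUnit; the import is the ICandyInterface from the question):
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;
using NUnit.Framework;

[TestFixture]
public class PerTestContainerTests
{
    [Import]
    public ICandyInterface Candy { get; set; }

    [SetUp]
    public void SetUp()
    {
        // A fresh container per test, instead of the one-shot static
        // CompositionInitializer, avoids the shared/General Fixture problem.
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);
        container.SatisfyImportsOnce(this);
    }

    [Test]
    public void ImportWasSatisfied()
    {
        Assert.IsNotNull(Candy);
    }
}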
I blogged on how to do unit tests with MEF (not NUnit, but it works just the same).
The trick was to use a MockExportProvider, and I created a test base for all my tests to inherit from.
This is my main AutoWire function, which works for both integration and unit tests:
protected void AutoWire(MockExportProvider mocksProvider, params Assembly[] assemblies)
{
    CompositionContainer container = null;
    var assCatalogs = new List<AssemblyCatalog>();
    foreach (var a in assemblies)
    {
        assCatalogs.Add(new AssemblyCatalog(a));
    }

    if (mocksProvider != null)
    {
        var providers = new List<ExportProvider>();
        providers.Add(mocksProvider); // need to use the mocks provider before the assembly ones
        foreach (var ac in assCatalogs)
        {
            var assemblyProvider = new CatalogExportProvider(ac);
            providers.Add(assemblyProvider);
        }
        container = new CompositionContainer(providers.ToArray());
        // must set the source provider for each CatalogExportProvider back to
        // the container (kinda stupid, but apparently no way around this)
        foreach (var p in providers)
        {
            if (p is CatalogExportProvider)
            {
                ((CatalogExportProvider)p).SourceProvider = container;
            }
        }
    }
    else
    {
        container = new CompositionContainer(new AggregateCatalog(assCatalogs));
    }
    container.ComposeParts(this);
}
More info on my post: https://yoavniran.wordpress.com/2012/10/18/unit-testing-wcf-and-mef/
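Usage might then look like this; the test class, the imported service, and the way mocks get registered on the MockExportProvider are placeholders, since those details live in the linked post:
[TestFixture]
public class MyServiceTests : TestsBase // hypothetical base exposing AutoWire
{
    [Import]
    public IMyService Service { get; set; } // hypothetical import under test

    [SetUp]
    public void SetUp()
    {
        var mocks = new MockExportProvider();
        // ...register mocked exports on "mocks" here (see the linked post)...

        // Satisfy this test's imports from the mocks first, then the assembly.
        AutoWire(mocks, typeof(MyServiceTests).Assembly);
    }
}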

Test framework for component testing

I am looking for a test framework that suits my requirements. The following are the steps that I need to perform during automated testing:
Set up (some input files need to be read or copied into specific folders)
Execute (run the standalone executable)
Tear down (clean up to bring the system back to its old state)
Apart from this, I also want some intelligence to make sure that if a .cc file changed, all the tests that can validate the change are run.
I am evaluating PyUnit and CppUnit with SCons for this. I thought I'd post this question to make sure I am on the right track. Can you suggest any other test framework tools? And what other requirements should be considered in selecting the right test framework?
Try googletest, AKA gTest; it is no worse than any other unit test framework, and it can beat some of them in ease of use. It's not exactly the integration testing tool you are looking for, but it can easily be applied in most cases. This Wikipedia page might also be useful for you.
Here is a copy of a sample from the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
 protected:
  // You can remove any or all of the following functions if its body
  // is empty.

  FooTest() {
    // You can do set-up work for each test here.
  }

  virtual ~FooTest() {
    // You can do clean-up work that doesn't throw exceptions here.
  }

  // If the constructor and destructor are not enough for setting up
  // and cleaning up each test, you can define the following methods:

  virtual void SetUp() {
    // Code here will be called immediately after the constructor (right
    // before each test).
  }

  virtual void TearDown() {
    // Code here will be called immediately after each test (right
    // before the destructor).
  }

  // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
  // Exercises the Xyz feature of Foo.
}

}  // namespace
SCons can take care of rebuilding your .cc files when they change, and gTest can be used to set up and tear down your tests.
I can only add that we use gTest in some cases and a custom in-house test automation framework in almost all others. It is often the case with such tools that it is easier to write your own than to try to adjust and tweak another to match your requirements.
One good option IMO, and it is something our test automation framework is moving towards, is using nosetests coupled with a library of common routines (like start/stop services, get the status of something, enable/disable logging in certain components, etc.). This gives you a flexible system that is also fairly easy to use. And since it uses Python and not C++ or the like, more people can be busy creating test cases, including QEs, who don't necessarily need to be able to write C++.
After reading this article http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle some time ago, I went for CxxTest.
Once you have the thing set up (you need to install Python, for instance), it's pretty easy to write tests (I was completely new to unit tests).
I use it at work, integrated as a Visual Studio project in my solution. It produces clickable output when a test fails, and the tests are built and run each time I build the solution.