Calling a single test several times - Google Test - C++

I am trying to perform a bit of random testing in a piece of software I am developing.
I have a fixture that is initialized with random values, therefore, each test will have different input.
Moreover, what I want is to run one of those tests several times (I expect the fixture to be initialized randomly for each execution). Is that possible in Google Test? I need this to be done in code, not via a command-line argument or something like that.
I am looking for something like invocationCount in JUnit.

How about something like this, using an otherwise unused parameter and Range()?
class Fixture : public ::testing::TestWithParam<int> {
//Random initialisation
};
TEST_P(Fixture, Test1){}
INSTANTIATE_TEST_CASE_P(Instantiation, Fixture, ::testing::Range(1, 11));
Test1 will be called 10 times (the range end, 11, isn't included), with a new fixture created each time.
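For completeness, a minimal sketch of how the random initialisation might slot into that fixture (the member name and distribution are illustrative; note that newer googletest releases spell the macro INSTANTIATE_TEST_SUITE_P):
#include <random>
#include "gtest/gtest.h"

class Fixture : public ::testing::TestWithParam<int> {
 protected:
  void SetUp() override {
    // Fresh random input for every repetition; the int parameter is unused
    // and only forces ten instantiations.
    std::mt19937 gen{std::random_device{}()};
    std::uniform_int_distribution<int> dist{0, 100};
    value_ = dist(gen);
  }
  int value_ = 0;
};

TEST_P(Fixture, Test1) {
  EXPECT_GE(value_, 0);  // exercise the randomly initialised state
}

INSTANTIATE_TEST_CASE_P(Instantiation, Fixture, ::testing::Range(1, 11));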


How to configure unit testing for AnyLogic agent code?

How do you configure a unit testing framework to help develop code that is part of AnyLogic agents?
To have a suitable test driven development rhythm, I need to be able to run all tests in a few seconds. I thought of exporting the project as a standalone application (jar) each time, but that's pretty slow.
I thought of trying to write all the code outside AnyLogic in separate classes, but there are many references to built-in AnyLogic classes, as well as various agents. My code would need to refer to these somehow, and I'm not sure how to do that except by writing the code inside AnyLogic.
I wonder if there's a way of adding the test runner as a dependency, and executing that test runner from within AnyLogic.
Does anyone have a setup that works nicely?
This definitely requires some advanced Java, but testing, especially unit testing, is too often neglected when building good, robust models. I hope this simple example is enough to get you (and lots of other modellers) going.
For JUnit testing, we make use of two libraries that you can add as dependencies to your model.
Now there are two main types of logic that you will want to test in simulation models.
1. Functions in Java classes
2. Model execution
Type 1: Suppose I have this very simple Java class:
public class MyClass {
    public MyClass() {
    }
    public boolean getResult() {
        return true;
    }
}
And I want to test the function getResult().
I can simply create a new class with a function that I annotate with @Test, and then make use of the assertEquals() method, which is standard in JUnit testing:
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MyTestClass {
    @Test
    public void testMyClassFunction1() {
        boolean result = new MyClass().getResult();
        assertEquals("The value of the test class 1", true, result);
    }
}
Now comes the AnyLogic-specific implementation (there are other ways to do this, but this is the easiest and most useful, as you will see in a minute).
You need to create a custom experiment.
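The original answer illustrates this step with an image; as a rough, hedged sketch, the custom experiment's code section could simply hand the test class to JUnit's runner. JUnitCore, Result and Failure are standard JUnit 4 classes, traceln() is AnyLogic's console output function, and the exact wiring is an assumption:
// Hedged sketch of the RunAllTests custom experiment's code section.
// Fully qualified names are used so no extra imports are needed.
org.junit.runner.Result result =
        org.junit.runner.JUnitCore.runClasses(MyTestClass.class);

for (org.junit.runner.notification.Failure failure : result.getFailures()) {
    traceln(failure.toString());   // show each failing test in the console
}
traceln(result.wasSuccessful() ? "SUCCESS" : "FAILURE");
traceln("Run: " + result.getRunCount());
traceln("Failed: " + result.getFailureCount());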
Now if you run this from the Run Model button you will get this output
SUCCESS
Run: 1
Failed: 0
You can obviously update and change the output to your liking.
Type 2: Suppose we have this very simple model
And the function getResult() simply returns an int of 2.
Now we need to create another custom experiment to run this model.
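Again, the original shows this experiment as an image; as a heavily hedged sketch (exact engine calls vary between AnyLogic versions, and runExperiment() is a function added to the experiment, matching the call in the test below), it could look roughly like this:
// Hypothetical body of a runExperiment() function on the SingleRun custom experiment.
int runExperiment() {
    Engine engine = createEngine();
    engine.setStopTime(100);              // assumed stop condition
    Main root = new Main(engine, null, null);
    root.setParametersToDefaultValues();
    engine.start(root);
    engine.runFast();                     // run the model to completion
    int result = root.getResult();        // the function defined on the Main agent
    engine.stop();
    return result;
}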
And then we can write a test to run this custom experiment and check the result. Simply add the following to your MyTestClass:
@Test
public void testMyClassFunction2() {
    int result = new SingleRun(null).runExperiment();
    assertEquals("Value of a single run", 2, result);
}
And now if you run the RunAllTests custom experiment, it will give you this output
SUCCESS
Run: 2
Failed: 0
This is just the beginning; there is plenty more to read up on about using JUnit to your advantage.

NUnit parallel lifecycle and parallel tests failing sometimes

I am having an issue with tests sporadically failing when using NUnit 3 with parallel test running.
We have a number of tests that are currently structured as follows:
[TestFixture]
public class CalculateShipFromStoreShippingCost
{
    private IService _service;
    private IClient _client;

    [SetUp]
    public void SetUp()
    {
        _service = Substitute.For<IService>();
        _client = new Client(_service);
    }

    [Test]
    public async Task WhenScenario1()
    {
        _service.Apply(Arg.Any<int>()).Returns(1);
        var result = _client.DoTheThing();
        Assert.AreEqual(1, result);
    }

    [Test]
    public async Task WhenScenario2()
    {
        _service.Apply(Arg.Any<int>()).Returns(2);
        var result = _client.DoTheThing();
        Assert.AreEqual(2, result);
    }
}
Sometimes the tests fail because one of the substitutes returns the value set up for the other test.
How should these tests be structured so that NUnit will run them reliably in parallel?
You haven't shown any Parallelizable attributes in your example, so I assume you are using the attribute at a higher level, most likely on the assembly. Otherwise, no parallel execution would occur. Further, since you say the test cases are running in parallel, you have apparently specified ParallelScope.Children.
The two test cases shown in your fixture cannot run in parallel. You should bear in mind that the SetUp method runs for each of the tests. So each of your two tests sets the value of _service, which is part of the state of the single instance of CalculateShipFromStoreShippingCost, which is shared by both tests. That is why you are seeing the "wrong" substitute being returned at times.
It is not possible for two test cases to run reliably in parallel if they both change the state of the fixture. Note that it does not matter whether the assignment to _service takes place in the test method itself or in the SetUp method - both are executed as part of the test case. So, you have to either stop running these two cases in parallel or stop changing the state.
To stop running the tests in parallel, you simply add [NonParallelizable] to each test method. If you are not using the latest framework version, use [Parallelizable(ParallelScope.None)] instead. Your other tests will continue to run in parallel, but these two will not.
Alternatively, use ParallelScope.Fixtures at the assembly level. This will cause fixtures to run in parallel by default, while the individual test cases within each of them run sequentially. When using ParallelizableAttribute at the assembly level, it is sometimes best to take this more conservative approach and then add more parallelism within particular fixtures where it is useful.
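For reference, here is a minimal sketch of those two attribute-based options (NUnit 3 syntax, shown together only for brevity; [NonParallelizable] requires a recent framework version, otherwise use [Parallelizable(ParallelScope.None)]):
using NUnit.Framework;

// Option 1: at the assembly level, parallelise fixtures rather than individual tests.
[assembly: Parallelizable(ParallelScope.Fixtures)]

// Option 2: keep child-level parallelism, but opt just these two tests out.
[TestFixture]
public class CalculateShipFromStoreShippingCost
{
    [Test, NonParallelizable]   // or [Parallelizable(ParallelScope.None)]
    public void WhenScenario1() { /* ... */ }

    [Test, NonParallelizable]
    public void WhenScenario2() { /* ... */ }
}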
An entirely different approach is to make your tests stateless. Eliminate the _service member and use a local value within the test method itself. Each of your tests would add two lines like...
var service = Substitute.For<IService>();
var client = new Client(service);
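Putting that together, one of the tests from the question might become something like this (NSubstitute syntax assumed, as in the question):
[Test]
public void WhenScenario1()
{
    // Everything is local, so nothing is shared between parallel test cases.
    var service = Substitute.For<IService>();
    var client = new Client(service);

    service.Apply(Arg.Any<int>()).Returns(1);

    Assert.AreEqual(1, client.DoTheThing());
}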
Judging by your example, I would imagine you are getting very little performance gain from running the two methods in parallel anyway, so I would not use that last approach unless there were a specific performance reason to do so.
As a final note... If you make your fixtures run in parallel by default (either with an assembly-level attribute or with attributes on each fixture) and place no Parallelizable attribute on your test cases, NUnit uses an optimization whereby all the tests within a fixture run on the same thread. The saving in context switches will often make up for whatever performance improvement you hoped to get by running those cases in parallel.

iOS Unit Testing: XCTestSuite, XCTest, XCTestRun, XCTestCase methods

In my daily unit test coding with Xcode, I only use XCTestCase. There are also these other classes that don't seem to get used much such as: XCTestSuite, XCTest, XCTestRun.
What are XCTestSuite, XCTest, XCTestRun for? When do you use them?
Also, XCTestCase header has a few methods such as:
defaultTestSuite
invokeTest
testCaseWithInvocation:
testCaseWithSelector:
How and when to use the above?
I am having trouble finding documentation on the above XCTest-classes and methods.
Well, this question is pretty good, and I was just wondering why it has been ignored.
As the documentation says:
XCTestCase is a concrete subclass of XCTest that should be the override point for
most developers creating tests for their projects. A test case subclass can have
multiple test methods and supports setup and tear down that executes for every test
method as well as class level setup and tear down.
On the other hand, this is what XCTestSuite defined:
A concrete subclass of XCTest, XCTestSuite is a collection of test cases. Alternatively, a test suite can extract the tests to be run automatically.
Well, with XCTestSuite, you can construct your own test suite for a specific subset of test cases, instead of the default suite ([XCTestCase defaultTestSuite]), which has all test cases.
Actually, the default XCTestSuite is composed of every test case found in the runtime environment - all methods with no parameters, returning no value, and prefixed with ‘test’ in all subclasses of XCTestCase.
What about the XCTestRun class?
A test run collects information about the execution of a test. Failures in explicit
test assertions are classified as "expected", while failures from unrelated or
uncaught exceptions are classified as "unexpected".
With XCTestRun, you can record information like startDate, totalDuration, and failureCount while the test is running, or check things like hasSucceeded when it is done, and thereby obtain the result of running a test. XCTestRun gives you the ability to track what is happening, or has happened, during the test.
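As a rough Swift sketch of how these pieces relate (you would rarely need to do this by hand; FastTests and the method names are made up for illustration):
import XCTest

class FastTests: XCTestCase {
    func testQuickThing()        { XCTAssertTrue(true) }
    func testAnotherQuickThing() { XCTAssertEqual(1 + 1, 2) }
    func testSlowThing()         { /* imagine something slow here */ }
}

// Build a custom XCTestSuite from part of the class's default suite.
let suite = XCTestSuite(name: "Fast subset")
for test in FastTests.defaultTestSuite.tests where test.name.contains("Quick") {
    suite.addTest(test)
}

suite.run()   // populates suite.testRun (an XCTestSuiteRun, a kind of XCTestRun)

if let run = suite.testRun {
    print("Executed \(run.executionCount) tests, \(run.failureCount) failures, \(run.totalDuration)s")
}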
Back to XCTestCase: if you read the headers, you will notice the methods named testCaseWithInvocation: and testCaseWithSelector:, and I recommend digging into those for more detail.
How do they work together?
I've found that there is an awesome explanation in Quick's QuickSpec source file.
XCTest automatically compiles a list of XCTestCase subclasses included
in the test target. It iterates over each class in that list, and creates
a new instance of that class for each test method. It then creates an
"invocation" to execute that test method. The invocation is an instance of
NSInvocation, which represents a single message send in Objective-C.
The invocation is set on the XCTestCase instance, and the test is run.
Some links:
http://modocache.io/probing-sentestingkit
https://github.com/Quick/Quick/blob/master/Sources/Quick/QuickSpec.swift
https://developer.apple.com/reference/xctest/xctest?language=objc
Launch Xcode and use Cmd + Shift + O to bring up the Open Quickly dialog, then type 'XCTest' and you will find some related files, such as XCTest.h and XCTestCase.h. You need to look inside these files to check out the interfaces they offer.
There is a good website about XCTest: http://iosunittesting.com/xctest-assertions/

Run TestMethod with different datasets NOT from a database

So a TestMethod runs only once in one test run.
How can I, in a single test run, let a TestMethod run several times, each time for a different data set that I've set up? My data does not come from a database or file; I want to build up several different in-memory instances of test data mockup.
TestInitialize doesn't let me do this as it runs only once, as well.
What is in control of the execution of TestMethods? How can I make it re-run my TestMethods for each data set, and how do I access the data set then?
I thought TestContext would be useful, but it seems to be database-only?
What you're looking for is so-called Data-driven testing.
Look e.g. here and here for descriptions on how to do it with MSTest.
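For what it's worth, more recent MSTest versions (the MSTest V2 packages) also support purely in-memory, attribute-based data sets, which fits the "not from a database or file" requirement; a minimal sketch with illustrative names:
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    // The test method runs once per [DataRow], with the row's values as arguments.
    [DataTestMethod]
    [DataRow(1, 2, 3)]
    [DataRow(10, -4, 6)]
    [DataRow(0, 0, 0)]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Assert.AreEqual(expected, a + b);
    }
}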
HTH.
Thomas
You could define a test method which calls the other test method multiple times, after doing the correct setup. I'm not saying that this is a good thing to do, but I believe it would work:
[TestClass]
public class TestClass
{
    // This is where the per-data-source test is. It is not marked as a TestMethod
    // because it will not be invoked directly by the test runner.
    public void ActualTest()
    {
        // Per-data-source test logic here.
    }

    [TestMethod]
    public void RunActualTestsMultipleTimesWithDifferentConfigs()
    {
        // Setup for test run with data set 1
        ActualTest();

        // Setup for test run with data set 2
        ActualTest();
    }
}
This feels like a terrible, terrible hack, I freely admit that. I wouldn't use this myself if I had any other choice, but it may be an option.
Another possibility is to look into how extensible MSTest is, specifically whether or not there is any mechanism to modify or extend the test runner.

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it is misleading when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, such that if the first test fails, the following tests are ignored?
In the same line, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations first before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of unit testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about whether 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
I don't think that's possible out of the box.
Anyway, the test class design you describe will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails, then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test-class e.g. AddTests, and put the Get test(s) into another test-class, e.g. class GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
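A minimal sketch of that arrangement (classic NUnit 2.x attribute names, as used above; DatabaseHelper.CanConnect() is a hypothetical helper standing in for whatever prerequisite check makes sense):
[TestFixture]
public class GetTests
{
    [TestFixtureSetUp]   // [OneTimeSetUp] in NUnit 3
    public void CheckPrerequisites()
    {
        if (!DatabaseHelper.CanConnect())
            Assert.Ignore("Skipping GetTests: no database connection available.");
    }

    [Test]
    public void GetReturnsPreviouslyAddedData()
    {
        // add then get, as in the question
    }
}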
Create a shared flag that the Add test sets if it fails, and return early from the Get test when that flag is set:
// Must be static (or otherwise shared), because most frameworks create a new
// test class instance per test method.
public static boolean addFailed = false;

public void testAdd() throws Throwable {
    try {
        ... old test code ...
    } catch (Throwable t) { // Catch all errors
        addFailed = true;
        throw t;            // Don't forget to rethrow
    }
}

public void testGet() {
    if (addFailed) return;
    ... old test code ...
}