I want to stop the execution of a test if it matches a certain scenario in order to avoid code duplication.
Consider the following situation:
CoreProviderTest
public void executeCoreSuccess(Object responseModel) {
    assertNotNull("Response successful", responseModel);
    if (responseModel == null) {
        // Kill test
    }
}
ChildProviderTest - extends CoreProviderTest
@Test
public void responseTester() {
    new Provider().getServiceResponse(new Provider.Interface() {
        @Override
        public void onSuccess(Object responseModel) {
            executeCoreSuccess(responseModel);
            // Continue assertions
        }

        @Override
        public void onFailure(ErrorResponseModel error) {
            executeCoreFailure(error);
        }
    });
}
For a null response, I would like to kill the current test case inside CoreProviderTest; otherwise it might trigger exceptions in further assertions. I wanted to avoid something like:
CoreProviderTest
if (responseModel == null) {
    return true;
}
ChildProviderTest
@Override
public void onSuccess(Object responseModel) {
    if (executeCoreSuccess(responseModel))
        return;
    // Continue assertions
}
Is there a way to kill the current test execution with Mockito, JUnit or Robolectric? No luck so far googling an answer.
Thanks in advance
If you are using JUnit 5, it has features like Assumptions, disabling tests, and conditional test execution.
Here's the link:
https://junit.org/junit5/docs/current/user-guide/#writing-tests-assumptions
In your case, it looks like assumingThat should work. Here's the API:
https://junit.org/junit5/docs/5.0.0/api/org/junit/jupiter/api/Assumptions.html#assumingThat-boolean-org.junit.jupiter.api.function.Executable-
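As a minimal sketch (assuming JUnit 5 / Jupiter on the classpath; the test class and the fetchResponse() helper are hypothetical stand-ins for your provider callback), the two assumption styles look like this:

import static org.junit.jupiter.api.Assumptions.assumeTrue;
import static org.junit.jupiter.api.Assumptions.assumingThat;

import org.junit.jupiter.api.Test;

class ResponseAssumptionsTest {

    @Test
    void responseTester() {
        Object responseModel = fetchResponse(); // hypothetical helper returning the service response

        // Aborts (skips) the rest of this test when the response is null
        assumeTrue(responseModel != null, "Response was null, skipping remaining assertions");

        // Alternatively, run only the dependent assertions conditionally;
        // the rest of the test method still executes either way
        assumingThat(responseModel != null, () -> {
            // assertions that require a non-null response go here
        });
    }

    private Object fetchResponse() {
        return new Object(); // placeholder
    }
}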
JUnit Assumptions suit the given case perfectly.
The code snippet now stands as:
CoreProvider
public void executeCoreSuccess(Object responseModel) {
    assumeTrue("Response successful", responseModel != null);
}
According to JUnit's documentation:
A failed assumption does not mean the code is broken, but that the
test provides no useful information. Assume basically means "don't run
this test if these conditions don't apply". The default JUnit runner
skips tests with failing assumptions.
+1 Adelin and Dossani
Related
I've implemented a C++ Class that will execute something in a timed cycle using a thread. The thread is set to be scheduled with the SCHED_DEADLINE scheduler of the Linux kernel. To setup the Scheduler the process running this must have certain Linux capabilities.
My question is, how to test this?
I can of course make a unit test that creates the thread, does some counting and exits the test after a time to validate the cycle counter, but that only works if the unit test is allowed to apply the right scheduler. If not, the default scheduler applies, the timing of the cyclic loops will be immediate, and the code therefore exercises different behaviour.
How would you test this scenario?
Some Code Example:
void thread_handler() {
    // setup SCHED_DEADLINE parameters
    while (running) {
        // execute application logic
        sched_yield();
    }
}
There are two separate units to test here: first, the cyclic execution of code, and second, the strategy that wraps the OS interface. The first unit would look like this:
class CyclicThread : public std::thread {
public:
    CyclicThread(Strategy& strategy) :
        std::thread(std::bind(&CyclicThread::worker, this)),
        strategy(strategy) { }

    void add_task(std::function<void()> handler) {
        ...
    }

private:
    Strategy& strategy;

    void worker() {
        while (running) {
            execute_handler();
            strategy.yield();
        }
    }
};
This is fairly easy to test with a mock object of the strategy.
The Deadline scheduling strategy looks like this:
class DeadlineStrategy {
public:
    void yield() {
        sched_yield();
    }
};
This class can also be tested fairly easily by mocking the sched_yield() system call.
I am using the Artos runner. In our development environment we keep <property name="stopOnFail">true</property> so I can debug my changes without having to deal with dependent test cases failing. In the production environment we keep <property name="stopOnFail">false</property> so test execution does not stop upon failure and we can analyse the logs in the morning.
Now I have a different requirement.
Some tests are a prerequisite for the rest of the units, so if a critical test fails I would like to skip the rest of the unit; otherwise it could put our product into a bad state.
Is there a way in Artos to skip the rest of the unit only if a specific test case or test unit fails?
Or can we perform specific steps in case a test fails, to ensure it is safe to run the rest of the tests?
Depending on the requirement, there are multiple ways to achieve this in Artos.
First of all, ensure all of your units have a sequence number so they execute in the same order every time.
Let's say testUnit_1() is a critical unit and must execute successfully in order for the remaining units to run. In that case, set dropRemainingUnitsUponFailure = true as shown below. This ensures that the rest of the units are dropped from the execution list if testUnit_1() fails.
@TestPlan(preparedBy = "user", preparationDate = "19/02/2019", bdd = "GIVEN..WHEN..AND..THEN..")
@TestCase(sequence = 1)
public class TestCase_1 implements TestExecutable {

    @Unit(sequence = 1, dropRemainingUnitsUponFailure = true)
    public void testUnit_1(TestContext context) {
        context.getLogger().info("do something");
    }

    @Unit(sequence = 2)
    public void testUnit_2(TestContext context) {
        context.getLogger().info("do something");
    }

    @Unit(sequence = 3)
    public void testUnit_3(TestContext context) {
        context.getLogger().info("do something");
    }
}
If the test cases are dependent upon each other, you can do the same at the test case level.
Ensure the test cases are assigned a sequence number so they follow the same execution order (similar to units), as shown below.
If dropRemainingTestsUponFailure = true and dropRemainingUnitsUponFailure = true, then upon testUnit_1() failure not only the rest of the units but also the remaining test cases will be dropped from the execution list, so you can achieve a clean exit.
@TestPlan(preparedBy = "user", preparationDate = "19/02/2019", bdd = "GIVEN..WHEN..AND..THEN..")
@TestCase(sequence = 1, dropRemainingTestsUponFailure = true)
public class TestCase_1 implements TestExecutable {

    @Unit(sequence = 1, dropRemainingUnitsUponFailure = true)
    public void testUnit_1(TestContext context) {
        context.getLogger().info("do something");
    }

    @Unit(sequence = 2)
    public void testUnit_2(TestContext context) {
        context.getLogger().info("do something");
    }

    @Unit(sequence = 3)
    public void testUnit_3(TestContext context) {
        context.getLogger().info("do something");
    }
}
In the log file you will see warnings such as:
=========================================================================
========== DROP REMAINING UNITS UPON FAILURE IS TRIGGERED ===============
================== REMAINING UNITS WILL BE DROPPED ======================
=========================================================================
and
=========================================================================
========== DROP REMAINING TESTS UPON FAILURE IS TRIGGERED ===============
================== REMAINING TESTS WILL BE DROPPED ======================
=========================================================================
so you will know what happened.
To answer your second question
(question: is there any way to perform cleanup when a test unit fails, so that the product can be recovered from its bad state before the next unit runs?)
If I understood it correctly, this can be done using the annotation @AfterFailedUnit.
If you create a method as shown below in your runner class
@AfterFailedUnit
public void globalAfterFailedTestUnit(TestContext context) throws Exception {
    context.getLogger().info("This method executes after failed test unit");
}
then it will be executed after each test unit failure; implement your cleanup logic in this method.
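As a rough sketch of such cleanup (the resetProductState() helper is hypothetical; only the method signature and context.getLogger() come from the snippet above):

@AfterFailedUnit
public void globalAfterFailedTestUnit(TestContext context) throws Exception {
    context.getLogger().info("Test unit failed, recovering product state");
    // Hypothetical helper: restart services, roll back configuration,
    // or otherwise bring the product back to a known-good state
    resetProductState();
}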
Hopefully this answers your questions
I have this function
override fun trackEvent(trackingData: TrackingData) {
    trackingData.eventsList()
}
And I could have my test as below.
@Test
fun `My Test`() {
    // When
    myObject.trackEvent(myTrackingMock)

    // Then
    verify(myTrackingMock, times(1)).eventsList()
}
However, if I change it to
override fun trackEvent(trackingData: TrackingData) {
    GlobalScope.launch {
        trackingData.eventsList()
    }
}
How could I still get my test running? (i.e. can I make the launch synchronous?)
I created my own CoroutineScope and passed it in (e.g. CoroutineScope(Dispatchers.IO) as a variable myScope).
Then I have my function:
override fun trackEvent(trackingData: TrackingData) {
    myScope.launch {
        trackingData.eventsList()
    }
}
Then in my test I mock the scope by creating a blockCoroutineScope as below.
class BlockCoroutineDispatcher : CoroutineDispatcher() {
    override fun dispatch(context: CoroutineContext, block: Runnable) {
        block.run()
    }
}

private val blockCoroutineScope = CoroutineScope(BlockCoroutineDispatcher())
For my test, I pass blockCoroutineScope in as myScope instead. The test is then executed with launch as a blocking operation.
To approach the answer, try asking a related question: "How would I unit-test a function that has
Thread { trackingData.eventsList() }
in it?"
Your only hope is running a loop that repeatedly checks the expected condition for some period of time, until giving up and declaring the test failed.
When you wrote GlobalScope.launch, you waived your interest in Kotlin's structured concurrency, so you'll have to resort to unstructured and non-deterministic approaches of testing.
Probably the best recourse is to rewrite your code to use a scope under your control.
I refactored my method to
suspend fun deleteThing(serial: String): String? = coroutineScope {
This way, I can launch coroutines with launch
val jobs = mutableListOf<Job>()
var certDeleteError: String? = null
certs.forEach { certArn ->
    val job = launch {
        deleteCert(certArn, serial)?.let { error ->
            jobs.forEach { it.cancel() }
            certDeleteError = error
        }
    }
    jobs.add(job)
}
jobs.joinAll()
For the test, I can then just use runTest and it runs all of the coroutines synchronously
@Test
fun successfullyDeletes2Certs() = runTest {
    aws.deleteThing("s1")
}
Now you just need to mind the context in which you call the deleteThing function. For me, it was a Ktor request handler, so I could just call launch there as well.
delete("vehicles/{vehicle-serial}/") {
launch {
aws.deleteThing(serial)
}
}
I have a method with following code:
public void myMethod() {
    if (condition1) {
        doSomething1();
        if (condition2) {
            doSomething2();
        } else {
            doSomething3();
        }
    }
}
Now doSomething1, doSomething2, doSomething3 are void methods.
How do I unit-test myMethod?
E.g. if condition1 is satisfied, check that doSomething1 was called.
Is there something we can do to refactor this to make it easily testable?
A general approach could be 3 test cases. Each test case would exercise a single condition. For each test case:
doSomethingX would be patched with a test double (there are mock libraries for pretty much all languages)
conditionX would be triggered
doSomethingX would execute
test would assert that doSomethingX was actually called
There are many strategies for removing the need to mock.
If doSomethingX is an instance method, you could create a test-specific subclass, override doSomethingX, and make your assertion in the subclass.
You could also refactor your method to require the caller to inject the doSomethingX dependency (dependency injection):
public void myMethod(somethingStrategy)
Then the test could easily configure a mock object and call myMethod with the mock object.
Dependency injection could also take place at the class level by having the class be instantiated with a somethingStrategy.
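As a hedged sketch of the mock-and-verify approach with JUnit 4 and Mockito (the SomethingStrategy interface, the MyClass wrapper and the boolean condition flags are illustrative, not your actual code):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class MyMethodTest {

    // Hypothetical collaborator that owns the doSomethingX behaviour
    interface SomethingStrategy {
        void doSomething1();
        void doSomething2();
        void doSomething3();
    }

    // Hypothetical class under test, with the strategy injected via the constructor
    static class MyClass {
        private final SomethingStrategy strategy;
        private final boolean condition1;
        private final boolean condition2;

        MyClass(SomethingStrategy strategy, boolean condition1, boolean condition2) {
            this.strategy = strategy;
            this.condition1 = condition1;
            this.condition2 = condition2;
        }

        public void myMethod() {
            if (condition1) {
                strategy.doSomething1();
                if (condition2) {
                    strategy.doSomething2();
                } else {
                    strategy.doSomething3();
                }
            }
        }
    }

    @Test
    public void callsDoSomething3WhenOnlyCondition1Holds() {
        SomethingStrategy strategy = mock(SomethingStrategy.class);

        new MyClass(strategy, true, false).myMethod();

        verify(strategy).doSomething1();
        verify(strategy, never()).doSomething2();
        verify(strategy).doSomething3();
    }
}

The other two conditions (condition1 false, and both conditions true) would get their own test methods in the same style.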
(a) The fact that these methods return void means their outcome is irrelevant; we just don't care. But you need to test the outcome, and therefore need to know the outcome. So this is a huge red flag that the code isn't SOLID and needs refactoring.
(b) Hey, this could be legacy code that is impossible to change, so if your methods truly are voids, then the following refactoring could help, if you then assert that myNewMethod2 and doSomething2/3 are called once / not called at all dependent upon the conditions (e.g. via the Moq unit testing framework).
public void myNewMethod()
{
    bool cnd1 = (condition1);
    bool cnd2 = (condition2);
    if (cnd1)
    {
        myNewMethod2(cnd2);
    }
}

public void myNewMethod2(bool cnd2)
{
    doSomething1();
    myNewMethod3(cnd2);
}

public void myNewMethod3(bool cnd2)
{
    if (cnd2)
    {
        doSomething2();
    }
    else
    {
        doSomething3();
    }
}
(c) Another strategy for voids, which I'm not a great fan of, but leaves your original code largely intact, is this:
public void myMethod() {
    try
    {
        if (condition1) {
            doSomething1();
            if (condition2) {
                doSomething2();
            } else {
                doSomething3();
            }
        }
    }
    catch (Exception ex)
    {
        //
    }
}
Your unit test can then assert that no exception is thrown. Not ideal, but if needs must...
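If the test side of that approach ends up in Java, a minimal sketch of the "no exception is thrown" assertion with JUnit 5 (MyClass and myObject are hypothetical) could be:

import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

import org.junit.jupiter.api.Test;

class MyMethodNoThrowTest {

    @Test
    void myMethodDoesNotThrow() {
        MyClass myObject = new MyClass(); // hypothetical class under test

        // Passes as long as myMethod() swallows or never raises an exception
        assertDoesNotThrow(() -> myObject.myMethod());
    }
}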
I've got a number of Boost test cases ordered in several test suites. Some test cases have one, some more than one check.
However, when executing all tests, they all get executed, no matter how many fail or pass. I know that I can stop the execution of one test case with several checks by using BOOST_REQUIRE instead of BOOST_CHECK. But that's not what I want.
How can I tell Boost to stop the whole execution after the first test case failed? I would prefer a compiled solution (e.g. realized with a global fixture) over a runtime solution (i.e. runtime parameters).
BOOST_REQUIRE will stop the current test case in a test suite, but the other test cases still run.
I don't really see what you wanted when you asked for a "compiled solution", but here is a trick that should work. I use a boolean to check the stability of the whole test suite. If it is unstable, i.e. a BOOST_REQUIRE has been triggered, then I stop the whole thing.
Hope it can help you.
//#include <...>

//FIXTURES ZONE
struct fixture
{
    fixture() : x(0.0), y(0.0) {}
    double x;
    double y;
};

//HELPERS ZONE
static bool test_suite_stable = true;

void in_strategy(bool& stable)
{
    if (stable)
    {
        stable = false;
    }
    else
    {
        exit(EXIT_FAILURE);
    }
}

void out_strategy(bool& stable)
{
    if (!stable)
    {
        stable = true;
    }
}

BOOST_AUTO_TEST_SUITE(my_test_suite)

//TEST CASES ZONE
BOOST_FIXTURE_TEST_CASE(my_test_case, fixture)
{
    in_strategy(test_suite_stable);
    //...
    //BOOST_REQUIRE() -> triggered
    out_strategy(test_suite_stable);
}

BOOST_FIXTURE_TEST_CASE(another_test_case, fixture)
{
    in_strategy(test_suite_stable); //-> exit() since last triggered so stable = false
    //...
    //BOOST_REQUIRE()
    out_strategy(test_suite_stable);
}

BOOST_AUTO_TEST_SUITE_END()
Benoit.
Why not just use assert? Not only do you abort the whole program immediately, you will also be able to see the stack trace if necessary.