Skipping tests using Artos if pre-requisite is not met - unit-testing

I am using the Artos runner. In our development environment we keep
<property name="stopOnFail">true</property> so I can debug my changes without having to deal with dependent test cases failing. In the production environment we keep <property name="stopOnFail">false</property> so test execution does not stop upon failure and we can analyse the logs in the morning.
Now I have a different requirement:
I have some tests that are prerequisites for the rest of the units, so if a critical test fails I would like to skip the remaining units; otherwise they can put our product into a bad state.
Is there a way in Artos to skip the remaining units only if a specific test case or test unit fails?
Or can we perform specific steps in case a test fails, to ensure it is safe to run the rest of the tests?

Depending on the requirement, there are multiple ways to achieve this in Artos.
First of all, ensure all of your units have a sequence number so they execute in the same order every time.
Let's say testUnit_1() is the critical unit and it must pass in order for the remaining units to execute. In that case, set dropRemainingUnitsUponFailure = true as shown below. This ensures that the rest of the units are dropped from the execution list if testUnit_1() fails.
@TestPlan(preparedBy = "user", preparationDate = "19/02/2019", bdd = "GIVEN..WHEN..AND..THEN..")
@TestCase(sequence = 1)
public class TestCase_1 implements TestExecutable {

    @Unit(sequence = 1, dropRemainingUnitsUponFailure = true)
    public void testUnit_1(TestContext context) {
        context.getLogger().info("do something");
    }

    @Unit(sequence = 2)
    public void testUnit_2(TestContext context) {
        context.getLogger().info("do something");
    }

    @Unit(sequence = 3)
    public void testUnit_3(TestContext context) {
        context.getLogger().info("do something");
    }
}
If your test cases depend on each other, you can do the same at the test case level.
Ensure test cases are assigned a sequence number so they follow the same execution order (similar to units), as shown below.
If dropRemainingTestsUponFailure = true and dropRemainingUnitsUponFailure = true, then upon testUnit_1() failure not only will the rest of the units be dropped, but the remaining test cases will also be dropped from the execution list, so you achieve a clean exit.
@TestPlan(preparedBy = "user", preparationDate = "19/02/2019", bdd = "GIVEN..WHEN..AND..THEN..")
@TestCase(sequence = 1, dropRemainingTestsUponFailure = true)
public class TestCase_1 implements TestExecutable {

    @Unit(sequence = 1, dropRemainingUnitsUponFailure = true)
    public void testUnit_1(TestContext context) {
        context.getLogger().info("do something");
    }

    @Unit(sequence = 2)
    public void testUnit_2(TestContext context) {
        context.getLogger().info("do something");
    }

    @Unit(sequence = 3)
    public void testUnit_3(TestContext context) {
        context.getLogger().info("do something");
    }
}
In the log file you will see the warnings
=========================================================================
========== DROP REMAINING UNITS UPON FAILURE IS TRIGGERED ===============
================== REMAINING UNITS WILL BE DROPPED ======================
=========================================================================
and
=========================================================================
========== DROP REMAINING TESTS UPON FAILURE IS TRIGGERED ===============
================== REMAINING TESTS WILL BE DROPPED ======================
=========================================================================
so you will know what happened.
To answer your second question
(Question: is there any way to perform cleanup if a test unit fails, so that before the next test unit runs you can recover your product from its bad state?)
If I understood it correctly, this can be done using the @AfterFailedUnit annotation.
If you create a method as shown below in your runner class
@AfterFailedUnit
public void globalAfterFailedTestUnit(TestContext context) throws Exception {
    context.getLogger().info("This method executes after failed test unit");
}
then it will be executed after every test unit failure; implement your cleanup logic in this method.
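For example, here is a minimal sketch of such a cleanup hook; resetProductState() is a hypothetical helper standing in for whatever recovery steps your product needs:

@AfterFailedUnit
public void globalAfterFailedTestUnit(TestContext context) throws Exception {
    context.getLogger().info("Test unit failed; restoring product to a known-good state");
    // hypothetical recovery: reset configuration, clear caches, restart services, etc.
    resetProductState();
}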
Hopefully this answers your questions

Related

Missing capabilities for unit test

I've implemented a C++ class that executes something in a timed cycle using a thread. The thread is scheduled with the SCHED_DEADLINE scheduler of the Linux kernel. To set up the scheduler, the process running it must have certain Linux capabilities.
My question is: how do I test this?
I can of course write a unit test that creates the thread, does some counting, and exits the test after a while to validate the cycle counter, but that only works if the unit test is allowed to apply the right scheduler. If not, the default scheduler applies and the timing of the cyclic loops will be immediate, which therefore exercises different behaviour.
How would you test this scenario?
Some Code Example:
void thread_handler() {
    // setup SCHED_DEADLINE parameters
    while (running) {
        // execute application logic
        sched_yield();
    }
}
There are two separate units to test here: first, the cyclic execution of code, and second, the strategy wrapping the OS interface. The first unit would look like this:
class CyclicThread : public std::thread {
public:
    CyclicThread(Strategy& strategy) :
        std::thread(std::bind(&CyclicThread::worker, this)),
        strategy(strategy) { }

    void add_task(std::function<void()> handler) {
        ...
    }

private:
    Strategy& strategy;

    void worker() {
        while (running) {
            execute_handler();
            strategy.yield();
        }
    }
};
This is fairly easy to test with a mock object of the strategy.
The Deadline scheduling strategy looks like this:
class DeadlineStrategy {
public:
    void yield() {
        sched_yield();
    }
};
This class can also be tested fairly easily by mocking the sched_yield() system call.

How to stop / kill my current test case?

I want to stop the execution of a test if it matches a certain scenario, in order to avoid code duplication.
Consider the following situation:
CoreProviderTest
public void executeCoreSuccess(Object responseModel) {
    assertNotNull("Response successful", responseModel);
    if (responseModel == null) {
        // kill test
    }
}
ChildProviderTest - extends CoreProviderTest
@Test
public void responseTester() {
    new Provider().getServiceResponse(new Provider.Interface() {
        @Override
        public void onSuccess(Object responseModel) {
            executeCoreSuccess(responseModel);
            // continue assertions
        }

        @Override
        public void onFailure(ErrorResponseModel error) {
            executeCoreFailure(error);
        }
    });
}
For a null response, I would like to kill my current test case inside CoreProviderTest; otherwise further assertions might trigger exceptions. I wanted to avoid something like:
CoreProviderTest
if (responseModel == null) {
    return true;
}
ChildProviderTest
@Override
public void onSuccess(Object responseModel) {
    if (executeCoreSuccess(responseModel))
        return;
    // continue assertions
}
Is there a way to kill the current test execution with Mockito, JUnit or Robolectric? No luck so far googling an answer.
Thanks in advance
If you are using JUnit 5, it has features like Assumptions, disabling tests, and conditional test execution.
Here's the link:
https://junit.org/junit5/docs/current/user-guide/#writing-tests-assumptions
In your case, it looks like assumingThat should work. Here's the API:
https://junit.org/junit5/docs/5.0.0/api/org/junit/jupiter/api/Assumptions.html#assumingThat-boolean-org.junit.jupiter.api.function.Executable-
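For illustration, here is a minimal JUnit 5 sketch of assumingThat applied to this case; fetchResponse() is a hypothetical stand-in for your provider call. The guarded assertions run only when the assumption holds:

import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assumptions.assumingThat;

import org.junit.jupiter.api.Test;

class ResponseTest {

    @Test
    void responseTester() {
        Object responseModel = fetchResponse(); // hypothetical stand-in for the provider callback
        assumingThat(responseModel != null, () -> {
            // executed only when a response was actually received
            assertNotNull(responseModel);
            // ...continue assertions here
        });
    }

    private Object fetchResponse() {
        return null; // placeholder
    }
}

Note that assumingThat only skips the assertions inside the lambda, whereas assumeTrue aborts the whole test and reports it as skipped.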
JUnit Assumptions suit the given case perfectly.
The code snippet now stands as:
CoreProvider
public void executeCoreSuccess(Object responseModel) {
    assumeTrue("Response successful", responseModel != null);
}
According to JUnit's documentation:
A failed assumption does not mean the code is broken, but that the
test provides no useful information. Assume basically means "don't run
this test if these conditions don't apply". The default JUnit runner
skips tests with failing assumptions.
+1 Adelin and Dossani

Returning Promise from AWS.SWF Workflow

It seems that, according to the swf-docs, the following code:
@Workflow
@WorkflowRegistrationOptions(
        defaultExecutionStartToCloseTimeoutSeconds = 60,
        defaultTaskStartToCloseTimeoutSeconds = 10)
public interface MyWorkflow
{
    @Execute(version = "1.0")
    Promise<String> startMyWF(int a, String b);
}
should generate a MyWorkflowClientExternal that returns a Promise<String>, i.e.:
Promise<String> startMyWF(int a, String b);
However, a void method is generated instead for both MyWorkflowClientExternal and MyWorkflowClientExternalImpl:
void startMyWF(int a, String b) ...
The internal clients MyWorkflowClient and MyWorkflowClientImpl do return the Promise object as expected:
Promise<String> startMyWF(int a, String b);
I would like to use the external client, but it does not seem to return the Promise object. I would very much appreciate clarification.
Thank you.
I posted this question on the AWS SWF developer forum, and @maxim-fateev has kindly pointed out several approaches:
The return value of a workflow is very useful for child workflows
because they are modeled as asynchronous calls. For standalone
workflows, you can use one of the following options to retrieve the
results:
1) Get it from the workflow history using SWF API
GetWorkflowExecutionHistory (the result is in the
WorkflowExecutionCompleted event). You can also inspect the history
using the SWF console.
2) Design your workflow to put the result somewhere, for example you
can add an activity at the end to put the result in a store and have
the application look there periodically.
3) Host an activity in the program that starts the workflow execution.
The workflow starter program now becomes part of the workflow and the
activity it hosts can be passed the result of the workflow.
You may use the first option in manually operated tools. However, it
is not recommended as a general mechanism for applications to retrieve
workflow results because it effectively requires you to poll SWF to
check for workflow completion and goes against our long polling
design.
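For reference, option 1 might look roughly like the following sketch with the AWS SDK for Java v1 (untested; the domain, workflow ID and run ID are placeholders, and a real implementation would also follow the history's pagination token):

import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflow;
import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClientBuilder;
import com.amazonaws.services.simpleworkflow.model.GetWorkflowExecutionHistoryRequest;
import com.amazonaws.services.simpleworkflow.model.History;
import com.amazonaws.services.simpleworkflow.model.HistoryEvent;
import com.amazonaws.services.simpleworkflow.model.WorkflowExecution;

public class WorkflowResultFetcher {
    public static String fetchResult(String domain, String workflowId, String runId) {
        AmazonSimpleWorkflow swf = AmazonSimpleWorkflowClientBuilder.defaultClient();
        History history = swf.getWorkflowExecutionHistory(
                new GetWorkflowExecutionHistoryRequest()
                        .withDomain(domain)
                        .withExecution(new WorkflowExecution()
                                .withWorkflowId(workflowId)
                                .withRunId(runId)));
        for (HistoryEvent event : history.getEvents()) {
            // the workflow's return value is recorded on the WorkflowExecutionCompleted event
            if ("WorkflowExecutionCompleted".equals(event.getEventType())) {
                return event.getWorkflowExecutionCompletedEventAttributes().getResult();
            }
        }
        return null; // the execution has not completed within this history page
    }
}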
I went with approach #2; here is the gist of it (if you think there is a better way, please do let me know).
Created NotificationActivitiesImpl:
public class NotificationActivitiesImpl implements NotificationActivities {

    // volatile: written by the activity worker thread, read by the application thread
    private volatile Object notification;

    public NotificationActivitiesImpl() {
        this.notification = null;
    }

    @Override
    public void notify(Object obj) {
        this.notification = obj;
    }

    /**
     * @return notification (will block until it is available)
     */
    @Override
    public Object getNotification() {
        while (notification == null) {
            try {
                Thread.sleep(1000); // poll once per second until notify() has run
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        return notification;
    }
}
In the WorkflowImpl, added:
notificationClient.notify(obj); // obj is whatever you want to pass back to your app
In the App (which starts the workflow and the notification activity worker), added the following:
workflowWorker.start();
notificationWorker.start();
NotificationActivitiesImpl notificationImpl = (NotificationActivitiesImpl) notificationWorker.getActivitiesImplementations().iterator().next();
Object notification = notificationImpl.getNotification();
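Since the question above asks for a better way: one possible refinement is to replace the sleep-poll in getNotification() with a CountDownLatch, so the application thread blocks until notify() is invoked instead of waking every second. A sketch, assuming the NotificationActivities interface stays unchanged:

import java.util.concurrent.CountDownLatch;

public class NotificationActivitiesImpl implements NotificationActivities {

    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile Object notification;

    @Override
    public void notify(Object obj) {
        this.notification = obj;
        latch.countDown(); // releases any thread blocked in getNotification()
    }

    @Override
    public Object getNotification() {
        try {
            latch.await(); // blocks until notify() has been called
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return notification;
    }
}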

GWT Timer fires immediately, when running gwt-test-utils unit tests

I wrote a unit test using the gwt-test-utils framework, as described here.
The tested class internally uses a com.google.gwt.user.client.Timer (not the default Java Timer).
Under test, though, the Timer instance doesn't behave correctly: it fires as soon as it is scheduled.
When I run this test
public class TimerTest extends GwtTest {

    @Override
    public String getModuleName() {
        return "com.whatevs";
    }

    @Test
    public void testTimer() {
        final int[] counter = { 0 };
        com.google.gwt.user.client.Timer t = new Timer() {
            @Override
            public void run() {
                Log.info("firing timer");
                counter[0]++; // just increase the counter
            }
        };
        Log.info("scheduling timer");
        t.schedule(1000000); // this should return immediately
        Log.info("scheduling returns");
        assertEquals(0, counter[0]); // the counter shouldn't be incremented yet
    }
}
I get a failure
testTimer(com.whatevs.TimerTest): expected:<0> but was:<1>
And the debug output
22:37:44,075 INFO gwt-log:81 - scheduling timer
22:37:44,075 INFO gwt-log:81 - firing timer
22:37:44,075 INFO gwt-log:81 - scheduling returns
Please note that the test is run as a JUnit test, without being compiled to JavaScript first.
Am I doing something wrong, or did I just hit a bug?
Is there any other way to test such classes?
Update:
I just found out that if, in the above example, I call scheduleRepeating, or I reschedule the timer using schedule inside the run method, the timer fires exactly five times before returning control to the caller.
Something weird is going on; I just opened a bug report on gwt-test-utils.

How to extract testing creation logic into a shared method

I have code I want to extract from a unit test to make my test method clearer:
Check check;
check.Amount = 44.00;
// unit testing on the check goes here
How should I extract this? Should I use a pointer to the check, or some other structure, to make sure it's still allocated when I use the object?
I don't want to use a constructor, because I want to keep my test creation logic separate from production creation logic.
In a modern unit testing framework you usually have a test case like this:
class MyTest : public ::testing::Test {
protected:
    MyTest() {}
    ~MyTest() {}

    virtual void SetUp() {
        // this will be invoked just before each unit test of the test case;
        // place any preparations or data assembly here
        check.Amount = 44.00;
    }

    virtual void TearDown() {
        // this will be invoked just after each unit test of the test case;
        // place releasing of data here
    }

    // any data used in tests
    Check check;
};

// a single test that uses your predefined preparations and releasing
TEST_F(MyTest, IsDefaultInitializedProperly) {
    ASSERT_FLOAT_EQ(44., check.Amount);
}

// and so on; SetUp and TearDown will be done from scratch for every new test
You can find such functionality in, e.g., the Google Test framework (https://github.com/google/googletest/).