Platform selected at test execution - TestLink

I have a test plan which can be applied independently to two platforms, namely two different servers. Ideally, I would like to be able to select the platform on which a specific test case is run directly at the test execution stage.
As it stands now, the structure looks like this:
Platform A
  Test 1 : Results
  Test 2 : Results
  Test 3 : Results
Platform B
  Test 1 : Results
  Test 2 : Results
  Test 3 : Results
I would like to have something more like this structure:
Test 1
  Platform A : Results (if any)
  Platform B : Results (if any)
Test 2
  Platform A : Results (if any)
  Platform B : Results (if any)
Is there a way to parametrize TestLink to do this, rather than completely duplicating the test plan and assigning a different platform to each copy?
Thanks!

NUnit parallel lifecycle and parallel tests failing sometimes

I am having an issue with tests sporadically failing when using NUnit 3 with parallel test execution.
We have a number of tests that are currently structured as follows:
[TestFixture]
public class CalculateShipFromStoreShippingCost
{
    private IService _service;
    private IClient _client;

    [SetUp]
    public void SetUp()
    {
        _service = Substitute.For<IService>();
        _client = new Client(_service);
    }

    [Test]
    public async Task WhenScenario1()
    {
        _service.Apply(Arg.Any<int>()).Returns(1);
        var result = await _client.DoTheThing();
        Assert.AreEqual(1, result);
    }

    [Test]
    public async Task WhenScenario2()
    {
        _service.Apply(Arg.Any<int>()).Returns(2);
        var result = await _client.DoTheThing();
        Assert.AreEqual(2, result);
    }
}
Sometimes the tests fail because one of the substitutes returns the value configured for the other test.
How should these tests be structured so that NUnit runs them reliably in parallel?
You haven't shown any Parallelizable attributes in your example, so I assume you are using the attribute at a higher level, most likely on the assembly. Otherwise, no parallel execution would occur. Further, since you say the test cases are running in parallel, you have apparently specified ParallelScope.Children.
The two test cases shown in your fixture cannot run in parallel. You should bear in mind that the SetUp method runs for each of the tests. So each of your two tests sets the value of _service, which is part of the state of the single instance of CalculateShipFromStoreShippingCost, which is shared by both tests. That is why you are seeing the "wrong" substitute being returned at times.
It is not possible for two test cases to run reliably in parallel if they both change the state of the fixture. Note that it does not matter whether the assignment to _service takes place in the test method itself or in the SetUp method - both are executed as part of the test case. So, you have to either stop running these two cases in parallel or stop changing the state.
To stop running the tests in parallel, you simply add [NonParallelizable] to each test method. If you are not using the latest framework version, use [Parallelizable(ParallelScope.None)] instead. Your other tests will continue to run in parallel, but these two will not.
Alternatively, use ParallelScope.Fixtures at the assembly level. This will cause fixtures to run in parallel by default, while the individual test cases within each of them run sequentially. When using ParallelizableAttribute at the assembly level, it is sometimes best to take this more conservative approach, adding more parallelism within particular fixtures where it is useful.
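For illustration, here is a minimal sketch of the two attribute placements just described; it assumes NUnit 3.7 or later (for [NonParallelizable]) and reuses the fixture name from the question, with the test bodies elided:

using NUnit.Framework;

[TestFixture]
public class CalculateShipFromStoreShippingCost
{
    // Either opt these two tests out of parallel execution individually...
    [Test, NonParallelizable]
    public void WhenScenario1() { /* test body as before */ }

    [Test, NonParallelizable]
    public void WhenScenario2() { /* test body as before */ }
}

// ...or replace ParallelScope.Children with the following at the assembly
// level (e.g. in AssemblyInfo.cs), so that fixtures run in parallel with
// each other while the cases inside each fixture run one at a time:
// [assembly: Parallelizable(ParallelScope.Fixtures)]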
An entirely different approach is to make your tests stateless. Eliminate the _service member and use a local value within the test method itself. Each of your tests would add two lines like...
var service = Substitute.For<IService>();
var client = new Client(service);
As shown in your example, I would imagine you are getting very little performance gain from running the two methods in parallel, so I would not use that last approach unless I saw a specific performance reason to do so.
As a final note... If you make your fixtures run in parallel by default (either with an assembly-level attribute or with attributes on each fixture) and place no Parallelizable attribute on your test cases, NUnit uses an optimization whereby all the tests within a fixture run on the same thread. This saving in context switches will often make up for the loss of any performance improvement you hoped to gain by running them in parallel.

Robot Framework Test Case Generation from Within a Test Case?

I am using Robot Framework to automate onboard unit testing of a Linux based device.
The device has a directory /data/tests that contains a series of subdirectories; each subdirectory is a test module with a 'run.sh' that is executed to run the unit test. For example:
/data/tests/module1/run.sh
/data/tests/module2/run.sh
I wrote a function that collects the subdirectory names in an array, and this is the list of test modules to be executed. The number of modules can vary daily.
@{modules}=    SSHLibrary.List Directories In Directory    /data/tests
Then another keyword (Module Test) runs a FOR loop over that list, executes the run.sh in each subdirectory, collects the log data, and logs it to the log.html file.
The issue I am experiencing is that when the log.html file is created, there is one test case titled Module Test, and under the FOR loop, a 'var' entry for each element (test module). Under each 'var' entry are the results of the module execution.
Is it possible, from within the FOR loop, to create a test case for each element and log results against it? Right now, if one of the modules/elements fails, I do not get accurate results: I still get a pass for the Module Test test case. I would like to log test cases Module 1, Module 2, ... , Module N, with logs and a pass/fail for each one. Given that the number of modules can vary from execution to execution, I cannot create static test cases; I need to be able to dynamically create the test cases once the number of modules has been determined for the test run.
Any input is greatly appreciated.
Thanks,
Dan.
You can write a simple script that dynamically creates the Robot test file by reading /data/tests/module*, creating one test case for each of the modules. In each test case, simply run the module's run.sh as an operating system command and check the return code.
This way, you get one single test suite, with many test cases, each representing a module.
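As a rough sketch of that idea (assuming the script runs where /data/tests is directly accessible; otherwise list the directories and run the scripts through SSHLibrary keywords instead), a Python generator might look like this:

import os

TESTS_ROOT = "/data/tests"                 # directory layout from the question
OUTPUT_FILE = "generated_modules.robot"    # suite file consumed by robot/pybot

HEADER = """*** Settings ***
Library    OperatingSystem

*** Test Cases ***
"""

with open(OUTPUT_FILE, "w") as out:
    out.write(HEADER)
    for module in sorted(os.listdir(TESTS_ROOT)):
        run_sh = os.path.join(TESTS_ROOT, module, "run.sh")
        if not os.path.isfile(run_sh):
            continue
        # One generated test case per module; it fails when run.sh exits non-zero.
        out.write(module + "\n")
        out.write("    ${rc}    ${output}=    Run And Return Rc And Output    " + run_sh + "\n")
        out.write("    Log    ${output}\n")
        out.write("    Should Be Equal As Integers    ${rc}    0\n")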
Consider writing a bash script that runs a Robot test for each module and then merges the reports into a single report with the rebot script. Use the --name parameter of the pybot script to differentiate the tests in the report.

tcltest unit tests: how to check if constraint is active to enable code reuse

We are using tcltest to do our unit testing but we are finding it difficult to reuse code within our test suite.
We have a test that is executed multiple times for different system configurations. I created a proc which contains this test and reuse it everywhere instead of duplicating the test's code many times throughout the suite.
For example:
proc test_config {test_name config_name} {
    test $test_name {} -constraints $config_name -body {
        <test body>
    } -returnCodes ok
}
The problem is that I sometimes want to test only certain configurations. I pass the configuration name as a parameter to the proc as shown above, but the -constraints part of the test does not look up the $config_name parameter as expected. The test is always skipped unless I hard-code the configuration name, but then using a proc is no longer possible and I would need to duplicate the code everywhere just to hard-code the constraint.
Is there a way to look if the constraint is enabled in the tcltest configuration?
Something like this:
proc test_config {test_name config_name} {
    testConstraint X [expr {[::tcltest::isConstraintActive $config_name]}]
    test $test_name {} -constraints X -body {
        <test body>
    } -returnCodes ok
}
So, is there a function in tcltest doing something like ::tcltest::isConstraintActive $config_name?
Is there a way to look if the constraint is enabled in the tcltest configuration?
Yes. The testConstraint command will do that if you don't pass in an argument to set the constraint's status:
if {[tcltest::testConstraint foo]} {
# ...
}
But don't use this to decide whether to run tests, or to drive per-test setup or cleanup. Tests should only ever be turned on or off by constraints directly, so that the report generated by tcltest can properly track which tests were disabled and for what reasons. Each test also has -setup and -cleanup options that allow scripts to be run before and after the test body if the constraints are matched.
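For example, here is a small self-contained sketch of that pattern; the constraint name, test name, and file are invented for illustration:

package require tcltest
namespace import ::tcltest::*

# In a real suite the constraint would be derived from the system configuration.
testConstraint configA true

test demo-1.1 {only runs when configA is active} -constraints configA -setup {
    set tmp [makeFile {} demo.txt]
} -body {
    file exists $tmp
} -cleanup {
    removeFile demo.txt
} -result 1

cleanupTests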
Personally, I don't recommend putting tests inside procedures or using a variable for a test name. It works and everything, but it's confusing when you're trying to figure out what test failed and why; debugging is hard enough without adding to it. (I also find that apply is great as a way to get a procedure-like thing inside a test without losing the “have the code inspectable right there” property.)

Unit tests with different severity

I'm testing a set of classes, and my unit tests so far are along the lines of:
1. read in some data from file X
2. create new object Y
3. sanity assert some basic properties of Y
4. assert advanced properties of Y
There are about 30 of these tests, which differ in the input and in the properties of Y that can be checked. However, at the current state of the project, it sometimes crashes at #2 or already fails at #3. It should never crash at #1. For the time being, I am accepting all failures at #4.
I'd like to e.g. see a list of unit tests that fail at #3, but so far ignore all those that fail at #4. What's the standard approach/terminology to create this? I'm using JUnit for Java with Eclipse.
You need reporting/filtering on your unit test results.
JUnit itself wants your tests to pass, fail, or not run - nothing in between.
However, it doesn't care much about how those results are tied to passing/failing the build, or reported.
Using tools like Maven (the Surefire plugin) and some custom code, you can categorize your tests to distinguish between 'hard failures', 'bad, but let's go on', etc. But that's build validation or reporting based on test results rather than testing itself.
(Currently, our build process relies on annotations such as @Category(WorkInProgress.class) on each test method to decide what's critical and what's not.)
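For example, with JUnit 4 categories (the category names below are placeholders; the build would then include or exclude them, e.g. via Surefire's groups/excludedGroups settings):

import org.junit.Test;
import org.junit.experimental.categories.Category;
import static org.junit.Assert.assertTrue;

public class SeverityCategoriesTest {

    // Marker interfaces used purely as category labels.
    public interface Critical {}
    public interface WorkInProgress {}

    @Test
    @Category(Critical.class)          // e.g. steps 1-3: must always pass
    public void basicPropertiesHold() {
        assertTrue(true);              // placeholder for the real sanity asserts
    }

    @Test
    @Category(WorkInProgress.class)    // e.g. step 4: tolerated failures for now
    public void advancedPropertiesHold() {
        assertTrue(true);              // placeholder for the advanced asserts
    }
}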
What I could think of would be to create assert methods that check some system property as to whether to execute the assert:
public static void assertTrue(boolean assertion, int assertionLevel) {
    // The property name is only an example, e.g. run with -Dassertion.level=3
    int level = Integer.getInteger("assertion.level", 0);
    if (level >= assertionLevel) {
        Assert.assertTrue(assertion);
    }
}

Calling a single test several times - Google Test

I am trying to perform a little random testing in a piece of software I am developing.
I have a fixture that is initialized with random values; therefore, each test will have different input.
Moreover, what I want is to run one of those tests several times (I expect the fixture to be initialized randomly for each execution). Is this possible in Google Test? I need this to be in the code, not via a command-line argument or something like that.
I am looking for something like invocationCount in JUnit.
How about something like this, using an unused parameter and Range()?
class Fixture : public ::testing::TestWithParam<int> {
    // Random initialisation
};

TEST_P(Fixture, Test1) {}

INSTANTIATE_TEST_CASE_P(Instantiation, Fixture, ::testing::Range(1, 11));
Test1 will be called 10 times (the range end, 11, isn't included), with a new fixture created each time.
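A sketch of what that fixture might look like with the random initialisation done in SetUp() (the member name and value range are made up):

#include <random>
#include "gtest/gtest.h"

// The int parameter is never read; it only forces several instantiations.
class RandomisedFixture : public ::testing::TestWithParam<int> {
protected:
    void SetUp() override {
        std::random_device rd;
        std::mt19937 gen(rd());
        std::uniform_int_distribution<int> dist(0, 100);
        input_ = dist(gen);    // fresh random input for every execution
    }
    int input_ = 0;
};

TEST_P(RandomisedFixture, HandlesRandomInput) {
    // Exercise the code under test with input_ here.
    EXPECT_GE(input_, 0);
    EXPECT_LE(input_, 100);
}

// Range(1, 11) supplies the values 1..10, so the test body runs ten times.
INSTANTIATE_TEST_CASE_P(Repetitions, RandomisedFixture, ::testing::Range(1, 11));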