Robot Framework Test Case Generation from Within a Test Case?

I am using Robot Framework to automate onboard unit testing of a Linux-based device.
The device has a directory /data/tests that contains a series of subdirectories; each subdirectory is a test module containing a 'run.sh' script that runs the unit test. For example:
/data/tests/module1/run.sh
/data/tests/module2/run.sh
I wrote a function that collects the subdirectory names in an array, and this is the list of test modules to be executed. The number of modules can vary daily.
@{modules}=    SSHLibrary.List Directories In Directory    /data/tests
Then another keyword (Module Test) runs a FOR loop over that list, executes run.sh in each subdirectory, collects the log data, and writes it to the log.html file.
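For reference, a rough sketch of what that keyword might look like (reconstructed from the description above; the Execute Command call and the newer FOR/END syntax are my assumptions):

Module Test
    FOR    ${module}    IN    @{modules}
        # Run the module's test script on the device and log its output
        ${output}=    SSHLibrary.Execute Command    /data/tests/${module}/run.sh
        Log    ${output}
    END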
The issue I am experiencing is that when the log.html file is created, there is one test case titled Module Test, and under the FOR loop, a 'var' entry for each element (test module). Under each 'var' entry are the results of the module execution.
Is it possible, from within the FOR loop, to create a test case for each element and log results against it? Right now, if one of the modules fails, I do not get accurate results: I still get a pass for the Module Test test case. I would like to log test cases Module 1, Module 2, ... , Module N, with logs and a pass/fail verdict for each one. Since the number of modules can vary from execution to execution, I cannot create static test cases; I need to be able to dynamically create the test cases once the number of modules has been determined for the test run.
Any input is greatly appreciated.
Thanks,
Dan.

You can write a simple script that dynamically creates the Robot test file by reading /data/tests/module*, then creates one test case for each module. In each test case, simply run run.sh as an operating system command and check its return code.
This way you get a single test suite with many test cases, each representing a module.
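A minimal sketch of such a generator, assuming the directory listing is available to the machine that generates and runs the suite and that the OperatingSystem library can run the scripts (in the SSH setup above, SSHLibrary's Execute Command could be substituted); file names and keyword choices are illustrative:

#!/usr/bin/env python
# generate_suite.py - writes one Robot Framework test case per module directory.
import os

TESTS_DIR = "/data/tests"
OUTPUT = "generated_modules.robot"

modules = sorted(
    d for d in os.listdir(TESTS_DIR)
    if os.path.isdir(os.path.join(TESTS_DIR, d))
)

with open(OUTPUT, "w") as suite:
    suite.write("*** Settings ***\n")
    suite.write("Library    OperatingSystem\n\n")
    suite.write("*** Test Cases ***\n")
    for module in modules:
        run_sh = os.path.join(TESTS_DIR, module, "run.sh")
        suite.write("%s\n" % module)
        # Each generated test runs run.sh and fails on a non-zero exit code.
        suite.write("    ${rc}    ${output}=    Run And Return Rc And Output    %s\n" % run_sh)
        suite.write("    Log    ${output}\n")
        suite.write("    Should Be Equal As Integers    ${rc}    0\n\n")

Running the generator before each test run and feeding generated_modules.robot to pybot gives one test case per module, so each module gets its own pass/fail entry in log.html.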

Consider writing a bash script that runs a Robot test for each module and then merges the results into one report with the rebot tool. Use the --name option of pybot to differentiate the runs in the merged report.
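A rough sketch of that approach, assuming the module directories are visible to the machine running pybot and that run_module.robot is a hypothetical single-module suite parameterised by a MODULE variable:

#!/bin/sh
# Sketch only: one Robot execution per module, results merged afterwards.
for module in /data/tests/*/; do
    name=$(basename "$module")
    pybot --name "$name" --variable MODULE:"$name" \
          --output "output_${name}.xml" run_module.robot
done
# rebot combines the individual output XML files into one report/log.
rebot --name "All Modules" --output combined.xml output_*.xml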

Related

Unit test timeout in Rust

Is there a way to specify a timeout for a specific unit test in Rust?
The only way I found to do it is to add it to the test case as a hardcoded argument: https://docs.rs/ntest/0.7.2/ntest/index.html
But I wanted to ask if there is a way to provide it on the command line as an argument when running the tests. Something like:
cargo test <timeout?>
You can get help on the test runner with cargo test -- --help. In particular you will find:
--ensure-time    Treat excess of the test execution time limit as error.
                 Threshold values for this option can be configured via
                 `RUST_TEST_TIME_UNIT`, `RUST_TEST_TIME_INTEGRATION` and
                 `RUST_TEST_TIME_DOCTEST` environment variables.
                 Expected format of environment variable is
                 `VARIABLE=WARN_TIME,CRITICAL_TIME`.
                 `CRITICAL_TIME` here means the limit that should not be
                 exceeded by test.
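For example (this exact invocation is an assumption on my part: --ensure-time is an unstable option, so as far as I know it requires a nightly toolchain and -Z unstable-options, and the warn/critical times are interpreted as milliseconds):

RUST_TEST_TIME_UNIT=1000,5000 cargo +nightly test -- -Z unstable-options --ensure-time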

Postman - Run a folder issue

I'm using the latest version of Postman, v5.3.2.
When running a folder with several requests inside it, Postman does not execute them sequentially in the same order they appear in the main app, unlike when running the whole collection that contains that folder.
Is this a bug, or does Postman not support this?
When you need to run anything sequentially through the Collection Runner, you need to add logic to the "Tests" tab in your request.
There is a built-in method called "setNextRequest" that you can use in this situation (documentation: https://www.getpostman.com/docs/postman/collection_runs/building_workflows). It's important to note that "setNextRequest" only takes effect when you use the Collection Runner. (Not sure if this is a bug or as intended.)
Here is an example I made: https://www.getpostman.com/collections/78ffebd7823f47b26a21
In this example collection, you can see that Test checks for status 200, then if true, calls Test3 using setNextRequest on line 3. The same is done inside of Test3 on line 3, but we call Test2 instead. In Test2 we again run a test, but this time we're STOPPING the execution by supplying null to the method.
Note: you MUST supply null as an argument to setNextRequest in Test2 to prevent the collection from running over and over.
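As a rough sketch of what goes into the "Tests" tab of the first request (request names follow the linked example collection; the old-style tests[...] / responseCode API matches Postman v5.x):

// Tests tab of the request named "Test"
tests["Status code is 200"] = responseCode.code === 200;

if (responseCode.code === 200) {
    // Chain to the next request; only honoured by the Collection Runner
    postman.setNextRequest("Test3");
} else {
    // Passing null stops the run instead of falling through
    postman.setNextRequest(null);
}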
In the Collection Runner, the results then show the requests executing in that chained order (Test, Test3, Test2).
Hope this helps someone.

Why doesn't the log4net XmlConfigurator attribute work for my unit tests

I'm using log4net and trying to get logging in my unit tests. If I manually call
log4net.Config.XmlConfigurator.Configure();
it works, but there are a large number of test classes, so calling it in each one is not good. (Since the manual call works, that seems to eliminate all of the "bad config, config location" issues.)
I added
[assembly: log4net.Config.XmlConfigurator(Watch=true)]
to the AssemblyInfo of my test project, but when I run (either via native MSTest or the ReSharper test runner) I get no logging.
Help?
Source
[AssemblyInitialize()]
public static void MyTestInitialize(TestContext testContext)
{
    // Take care that the log4net.config file is added to the deployment files of the test configuration
    string fullPath = Path.Combine(System.Environment.CurrentDirectory, "log4net.config");
    FileInfo fileInfo = new FileInfo(fullPath);
    // Load (and watch) the log4net configuration from the deployed file
    log4net.Config.XmlConfigurator.ConfigureAndWatch(fileInfo);
}
As it says in the documentation for assembly attributes
Therefore if you use configuration attributes you must invoke log4net
to allow it to read the attributes. A simple call to
LogManager.GetLogger will cause the attributes on the calling assembly
to be read and processed. Therefore it is imperative to make a logging
call as early as possible during the application start-up, and
certainly before any external assemblies have been loaded and invoked.
Because the unit test runners load the test assembly in order to find and run the tests, it isn't possible to initialise log4net using an assembly attribute in unit test projects; you will have to call the XmlConfigurator instead.
Edit: as linked in a comment by the OP, this can be done in one place for the whole test project by using the AssemblyInitializeAttribute.

TFS Build servers and critical Unit Tests

When you build on a TFS build server, failed unit tests cause the build to show an orange alert state but they still "succeed". Is there any way to tag a unit test as critical such that if it fails, the whole build will fail?
I've Googled for it and didn't find anything, and I don't see any attribute in the framework, so I'm guessing the answer is no. But maybe I'm just looking in the wrong place.
There is a way to do this, but you need to create multiple test runs and then filter your tests. On your tests, set a TestCategory attribute:
[TestCategory("Critical")]
[TestMethod]
public void MyCriticalTest() {}
For NUnit you should be able to use [Category("Critical")]. There are multiple attributes of a test you can filter on, including the name.
Name = TestMethodDisplayName
FullyQualifiedName = FullyQualifiedTestMethodName
Priority = PriorityAttributeValue
TestCategory = TestCategoryAttributeValue
ClassName = ClassName
And these operators:
= (equals)
!= (not equals)
~ (contains or substring only for string values)
& (and)
| (or)
( ) (parentheses for grouping)
xUnit.net currently does not support test case filters.
Then in your build definition you can create two test runs, one that runs Critical tests, one that runs everything else. You can use the Filter option of the Test Run.
Open the Test Runs window (the button for it in the build definition editor is easy to miss) and create two test runs. On the first run, set the test case filter to the critical category and enable "Fail build on test failure"; on the second run, use the inverse filter and leave that option disabled.
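For illustration, the two runs boil down to something like the following (filter syntax as listed above; the category name matches the TestCategory attribute on the test):

Run 1: Test case filter = TestCategory=Critical (Fail build on test failure: enabled)
Run 2: Test case filter = TestCategory!=Critical (Fail build on test failure: disabled)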
This way Team Build will run any test with the "Critical" category in the first run and will fail the build if one of them fails. If the first run succeeds, it will kick off the non-critical tests, and the build will only Partially Succeed even when one of those tests fails.
Update
The same process explained for Azure DevOps Pipelines.
Yes.
Using the TFS2013 Default Template:
Under the "Process" tab, go to section 2, "Basic".
Expand the Automated Tests section.
For "Test Source", click the ellipsis ("...").
This will open a new window that has a "Fail build when tests fail" check box.

xunit programmatically add new tests/"[Facts]"?

We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit "[Fact]", as below:
[Fact]
public void TestAllCases()
{
    PileOfTests pot = new PileOfTests();
    pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say 50 files). Each is then hammered against the URI to see if each returns HTTP 200 ("ok"). If any fail, we currently print it as a failure by using
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye but xUnit thinks all tests failed (understandably). Most importantly, we can't specify to xunit "here, run only this specific test". It's all or nothing the way it's currently built.
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.
The simple answer is:
No, not directly. But there exists an, albeit a bit hacky, workaround, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying [RunWith(typeof(CustomRunner))] on a class, one can instruct xUnit to use the CustomRunner class - which must implement Xunit.Sdk.ITestClassCommand - to enumerate the tests available on the test class decorated with this attribute.
But unfortunately, while the invocation of test methods has been decoupled from System.Reflection and the actual methods, the way the tests to run are passed to the test runner hasn't.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI - but any attempts to run / invoke it end up with a TargetInvocationException.
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as different tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing that each of the generated methods does is to invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See linked gist for further explanation.
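To make the shape of that workaround concrete, here is a heavily simplified sketch of the Reflection.Emit part (class, method, and fixture names are illustrative rather than taken from the gist, and the custom-runner wiring via [RunWith] / ITestClassCommand is omitted):

using System;
using System.IO;
using System.Reflection;
using System.Reflection.Emit;

public class JsonTestFixture
{
    // The real checking logic lives here; each emitted method just forwards a file name.
    public static void RunTest(string fileName)
    {
        // e.g. send the JSON file to the URI and assert on the HTTP status code
    }
}

public static class DynamicTestEmitter
{
    // Emits a class with one public void method per *.json file, each calling RunTest(file).
    public static Type EmitTestClass(string jsonFolder)
    {
        var assemblyBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(
            new AssemblyName("GeneratedJsonTests"), AssemblyBuilderAccess.Run);
        var moduleBuilder = assemblyBuilder.DefineDynamicModule("MainModule");
        var typeBuilder = moduleBuilder.DefineType(
            "GeneratedJsonTests", TypeAttributes.Public | TypeAttributes.Class);

        MethodInfo runTest = typeof(JsonTestFixture).GetMethod("RunTest");

        foreach (string file in Directory.GetFiles(jsonFolder, "*.json"))
        {
            // One method per file, named after the file, so each file shows up as its own test.
            var method = typeBuilder.DefineMethod(
                Path.GetFileNameWithoutExtension(file),
                MethodAttributes.Public, typeof(void), Type.EmptyTypes);

            ILGenerator il = method.GetILGenerator();
            il.Emit(OpCodes.Ldstr, file);    // push the file name
            il.Emit(OpCodes.Call, runTest);  // JsonTestFixture.RunTest(file)
            il.Emit(OpCodes.Ret);
        }

        return typeBuilder.CreateType();
    }
}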
Hope this helps; feel free to use the example implementation if you like.