Is there a way to specify a timeout for a specific unit test in Rust?
The only way I found to do it is to add it to the test case as a hardcoded attribute argument: https://docs.rs/ntest/0.7.2/ntest/index.html
But I wanted to ask if there is a way to provide it on the command line as an argument when running the tests. Something like:
cargo test <timeout?>
You can get help on the test runner with cargo test -- --help. In particular you will find:
    --ensure-time   Treat excess of the test execution time limit as error.
                    Threshold values for this option can be configured via
                    `RUST_TEST_TIME_UNIT`, `RUST_TEST_TIME_INTEGRATION` and
                    `RUST_TEST_TIME_DOCTEST` environment variables.
                    Expected format of environment variable is
                    `VARIABLE=WARN_TIME,CRITICAL_TIME`.
                    `CRITICAL_TIME` here means the limit that should not
                    be exceeded by test.
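For example, to make any unit test fail once it runs longer than 60 seconds (with a warning above 10 seconds), something along these lines should work; note that --ensure-time is an unstable libtest option, so as far as I know it also requires a nightly toolchain and the -Zunstable-options flag:
RUST_TEST_TIME_UNIT=10000,60000 cargo test -- -Zunstable-options --ensure-time
The two numbers are the WARN_TIME and CRITICAL_TIME thresholds from the help text above, given (to my knowledge) in milliseconds.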
I have a Rust test which delegates to a C++ test suite using doctest, and I want to pass command-line parameters to it. My first attempt was:
// in mod ffi
pub fn run_tests(cli_args: &mut [String]) -> bool;

#[test]
fn run_cpp_test_suite() {
    // Forward this process's command-line arguments to the C++ suite.
    let mut cli_args: Vec<String> = std::env::args().collect();
    if !ffi::run_tests(cli_args.as_mut_slice()) {
        panic!("C++ test suite reported errors");
    }
}
Because cargo test --help shows
USAGE:
    cargo.exe test [OPTIONS] [TESTNAME] [-- <args>...]
I expected
cargo test -- --test-case="X"
to let run_cpp_test_suite access and pass on the --test-case="X" parameter. But it doesn't; I get error: Unrecognized option: 'test-case', and cargo test -- --help shows that the test binary only accepts a fixed set of options:
Usage: --help [OPTIONS] [FILTER]

Options:
        --include-ignored
                        Run ignored and not ignored tests
        --ignored       Run only ignored tests
My other idea was to pass the arguments in an environment variable, that is
DOCTEST_ARGS="--test-case='X'" cargo test
but then I need to somehow split that string into arguments (handling at least spaces and quotes correctly) either in Rust or in C++.
There are two pieces of Rust toolchain involved when you run cargo test.
cargo test itself looks for all testable targets in your package or workspace, builds them with cfg(test), and runs those binaries. cargo test processes the arguments to the left of the --, and the arguments to the right are passed to the binary.
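For example, in cargo test --release -- --nocapture, the --release flag is handled by Cargo itself, while --nocapture is forwarded to each compiled test binary (the libtest harness).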
Then,
Tests are built with the --test option to rustc which creates an executable with a main function that automatically runs all functions annotated with the #[test] attribute in multiple threads. #[bench] annotated functions will also be run with one iteration to verify that they are functional.
The libtest harness may be disabled by setting harness = false in the target manifest settings, in which case your code will need to provide its own main function to handle running tests.
The “libtest harness” is what rejects your extra arguments. In your case, since you're intending to run an entire other test suite, I believe it would be appropriate to disable the harness.
Move your delegation code to its own file, conventionally located in tests/ in your package directory:
Cargo.toml
src/
    lib.rs
    ...
tests/
    cpp_test.rs
Write an explicit target section in your Cargo.toml for it, with harness disabled:
[[test]]
name = "cpp_test"
# path = "tests/cpp_test.rs" # This is automatic; you can use a different path if you really want to.
harness = false
In cpp_test.rs, instead of writing a function with the #[test] attribute, write a normal main function which reads env::args() and calls the C++ tests.
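A minimal sketch of what tests/cpp_test.rs could look like (the ffi module here is just a placeholder standing in for your real binding; only run_tests and its signature are taken from the question):

use std::env;

mod ffi {
    // Placeholder for the real binding to the C++ test runner
    // (e.g. generated by cxx or bindgen); assumed here for illustration.
    pub fn run_tests(_cli_args: &mut [String]) -> bool {
        unimplemented!("call into the C++ test suite here")
    }
}

fn main() {
    // With `harness = false` this is an ordinary binary, so it receives
    // everything after `--` in `cargo test -- ...` unchanged.
    let mut cli_args: Vec<String> = env::args().collect();
    if !ffi::run_tests(cli_args.as_mut_slice()) {
        // A non-zero exit status is how a harness-less test target reports failure.
        std::process::exit(1);
    }
}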
[Disclaimer: I'm familiar with these mechanisms because I've used Criterion benchmarking (which similarly requires disabling the default harness) but I haven't actually written a test with custom arguments the way you're looking for. So, some details might be wrong. Please let me know if anything needs correcting.]
In addition to Kevin Reid's answer, if you don't want to write your own test harness, you can use the shell-words crate to split an environment variable into individual arguments following shell rules:
use std::env::var;
use std::process::Command;

// Split the environment variable into individual arguments following shell rules.
let args = var("DOCTEST_ARGS").unwrap_or_else(|_| String::new());
let args = shell_words::split(&args).expect("failed to parse DOCTEST_ARGS");

Command::new("cpptest")
    .args(args)
    .spawn()
    .expect("failed to start subprocess")
    .wait()
    .expect("failed to wait for subprocess");
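shell_words::split follows POSIX shell quoting rules, so a value like the --test-case='X' from the question reaches the subprocess as the single argument --test-case=X, with the inner quotes stripped. You will also need to add the shell-words crate to [dev-dependencies] in Cargo.toml.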
I want to run a Robot Framework test for a set duration of time, say 1 hour, regardless of whether the execution of all test cases in the test suite has completed. It should repeat the test cases until the given time is reached.
I tried to use --prerunmodifier and write my own module: I used the robot.api module robot.running.context and overrode the existing end_suite() method, but was not successful. :(
Try the 'Repeat Keyword' keyword. It takes as an argument how long it should repeat the given keyword, but in this case all your test cases have to go into one keyword.
Use 'Run Keyword And Ignore Error' inside it so that errors are ignored.
E.g.:
Repeat Keyword    2h    Keyword With All Test Cases
A second option would be writing a listener, which has similar functionality to a pre-run modifier but is executed during the tests rather than before them.
As per the Robot Framework user guide, the test suite will be executed for a maximum time of 120 minutes, i.e. 2 hours. Note that we can override this timeout by explicitly mentioning the test execution time in the test file's Settings table, as below:
*** Settings ***
Test Timeout    60 minutes
Furthermore, you can set a test-case-specific timeout using the [Timeout] setting, as below:
*** Test Cases ***
Sample Test Case
    [Timeout]    5 minutes
    Do some testing
    Validate some results
Feel free to try and ask further questions.
I am using Robot Framework to automate onboard unit testing of a Linux based device.
The device has a directory /data/tests that contains a series of subdirectories; each subdirectory is a test module with a 'run.sh' that is executed to run the unit test. For example:
/data/tests/module1/run.sh
/data/tests/module2/run.sh
I wrote a function that collects the subdirectory names in an array, and this is the list of test modules to be executed. The number of modules can vary daily.
@{modules}=    SSHLibrary.List Directories In Directory    /data/tests
Another function (Module Test) then runs a FOR loop over that list, executes the run.sh in each subdirectory, collects the log data, and logs it to the log.html file.
The issue I am experiencing is that when the log.html file is created, there is one test case titled Module Test, and under its FOR loop there is a 'var' entry for each element (test module). Under each 'var' entry are the results of that module's execution.
Is it possible, from within the FOR loop, to create a test case for each element and log results against it? Right now, if one of the modules fails, I do not get accurate results; I still get a pass for the Module Test test case. I would like to log test cases Module 1, Module 2, ..., Module N, with logs and a pass/fail for each one. Given that the number of modules can vary from execution to execution, I cannot create static test cases; I need to be able to create the test cases dynamically once the number of modules has been determined for the test run.
Any input is greatly appreciated.
Thanks,
Dan.
You can write a simple script that dynamically creates the Robot test file by reading /data/tests/module*, generating one test case for each module. In each test case, simply run the operating-system command (the run.sh) and check its return code.
This way, you get one single test suite, with many test cases, each representing a module.
Consider writing a bash script that runs a Robot test for each module and then merges the results into one report with the rebot script. Use the --name parameter of pybot to differentiate the tests in the report.
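For example (a sketch, assuming each module has its own .robot file): run pybot --name module1 --output module1.xml module1.robot once per module, then merge the individual outputs with rebot --name AllModules module1.xml module2.xml ... to get a single log.html and report.html covering every module.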
I have a program that receives command line arguments (in my case, it's a Scala program that uses Argot). A simplified use case would be something like:
sbt "run -n 300 -n 50"
And imagine that the app should only accept (and print) numbers between 0 and 100, meaning it should discard 300 and print only 50.
What's the best approach to test it? Is unit testing appropriate? Or, instead of processing the arguments in main, should I factor the logic out into a function and test that function?
If you want to test the very act of passing arguments to your program, then you'll need to perform a black-box integration/system test (not a unit test):
Black-box testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. See this.
Otherwise, do as Chris said and factor out the code to test the underlying logic without testing the act of receiving an argument.
We are using tcltest to do our unit testing but we are finding it difficult to reuse code within our test suite.
We have a test that is executed multiple times for different system configurations. I created a proc which contains this test and reuse it everywhere instead of duplicating the test's code many times throughout the suite.
For example:
proc test_config {test_name config_name} {
    test $test_name {} -constraints $config_name -body {
        <test body>
    } -returnCodes ok
}
The problem is that I sometimes want to test only certain configurations. I pass the configuration name as a parameter to the proc as shown above, but the -constraints part of the test does not look up the $config_name parameter as expected. The test is always skipped unless I hard-code the configuration name, but then using a proc is pointless and I would need to duplicate the code everywhere just to hard-code the constraint.
Is there a way to look if the constraint is enabled in the tcltest configuration?
Something like this:
proc test_config {test_name config_name} {
    testConstraint X [::tcltest::isConstraintActive $config_name]
    test $test_name {} -constraints X -body {
        <test body>
    } -returnCodes ok
}
So, is there a function in tcltest doing something like ::tcltest::isConstraintActive $config_name?
Is there a way to look if the constraint is enabled in the tcltest configuration?
Yes. The testConstraint command will do that if you don't pass in an argument to set the constraint's status:
if {[tcltest::testConstraint foo]} {
    # ...
}
But don't use this to decide whether to run tests, or for single-test setup or cleanup. Tests should only ever be turned on or off by constraints directly, so that the report generated by tcltest can properly track which tests were disabled and for what reasons, and each test has -setup and -cleanup options that allow scripts to be run before and after the test if the constraints are matched.
Personally, I don't recommend putting tests inside procedures or using a variable for a test name. It works and everything, but it's confusing when you're trying to figure out what test failed and why; debugging is hard enough without adding to it. (I also find that apply is great as a way to get a procedure-like thing inside a test without losing the “have the code inspectable right there” property.)