I would like to unit test functions from a single-file Lua script, say script.lua. The script looks something like the following:
-- some fields from gvsp dissector which shall be post processed in custom dissector
gvsp_field0_f = Field.new("gvsp.<field0-name>")
gvsp_field1_f = Field.new("gvsp.<field1-name>")
-- custom protocol declaration
custom_protocol = Proto("custom","Custom Postdissector")
-- custom protocol field declarations
field0_f = ProtoField.string("custom.<field0-name>","Custom Field 0")
field1_f = ProtoField.string("custom.<field1-name>","Custom Field 1")
-- register custom protocol as postdissector
register_postdissector(custom_protocol)
function custom_protocol.dissector(buffer,pinfo,tree)
-- local field values of "pre" dissector which are analyzed
local gvsp_field0_value = gvsp_field0_f()
local gvsp_field1_value = gvsp_field1_f()
-- functions which shall be unit tested
function0(...)
function1(...)
end
function function0(...)
-- implementation
end
function function1(...)
-- implementation
end
Let's say I do not want to separate the functions from the script file into a separate module file (which would probably make things easier). How can I define tests (preferably with luaunit because it is easy to integrate, but another tool would be OK as well) for the functions defined in script.lua, either inside script.lua itself or in a separate test_script.lua file?
To run the script and the unit tests separately you need at least three files (four in this example, because the unit test framework luaunit, which consists of a single file, is placed directly in the project directory). For this example all files reside in the same directory. The script script.lua must not define any functions itself; instead it imports every function it needs from its module module.lua.
-- script imports module functions
module = require('module')
-- ... and uses it to print the result of the addition function
result = module.addtwo(1,1)
print(result)
module.lua is implemented according to the usual Lua module skeleton, so that its functions can be imported by other script files or modules via require.
-- capture the name searched for by require
local NAME=...
-- table for our functions
local M = { }
-- A typical local function that is also published in the
-- module table.
local function addtwo(a,b) return a+b end
M.addtwo = addtwo
-- Shorthand form is less typing and doesn't use a local variable
function M.subtwo(x) return x-2 end
return M
test_module.lua contains the unit tests for the module functions and imports luaunit.lua (the unit test framework) to run them. It has the following content.
luaunit = require('luaunit')
module = require('module')

function testAddPositive()
    luaunit.assertEquals(module.addtwo(1,1), 2)
end

os.exit( luaunit.LuaUnit.run() )
If you run the tests by executing lua test_module.lua, they are executed separately from the script functionality:
.
Ran 1 tests in 0.000 seconds, 1 success, 0 failures
OK
The script itself is still executed as usual with lua script.lua and prints 2.
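Applied to the script.lua from the question, the same pattern means moving function0 and function1 into a module of their own; a rough sketch, with custom_functions.lua as a placeholder name and illustrative parameter names:
-- custom_functions.lua (placeholder name): the logic to be unit tested
local M = {}

function M.function0(gvsp_field0_value)
    -- implementation
end

function M.function1(gvsp_field1_value)
    -- implementation
end

return M
script.lua would then require('custom_functions') and call its functions from within custom_protocol.dissector, while a test_script.lua analogous to test_module.lua above can require the same module and test function0 and function1 with luaunit, without involving Wireshark at all.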
Simple answer: You can't!
I asked the Lua team about this myself a few years ago, as there is no obvious way for a script to know whether it is the main script being run or has been included (e.g. require'd) by another one.
There does not seem to be any interest in adding such a capability in the foreseeable future, either!
I have a Rust test which delegates to a C++ test suite using doctest, and I want to pass command-line parameters to it. My first attempt was
// in mod ffi
pub fn run_tests(cli_args: &mut [String]) -> bool;
#[test]
fn run_cpp_test_suite() {
let mut cli_args: Vec<String> = env::args().collect();
if !ffi::run_tests(
cli_args.as_mut_slice(),
) {
panic!("C++ test suite reported errors");
}
}
Because cargo test --help shows
USAGE:
cargo.exe test [OPTIONS] [TESTNAME] [-- <args>...]
I expected
cargo test -- --test-case="X"
to let run_cpp_test_suite access and pass on the --test-case="X" parameter. But it doesn't; I get error: Unrecognized option: 'test-case' and cargo test -- --help shows it has a fixed set of options
Usage: --help [OPTIONS] [FILTER]
Options:
--include-ignored
Run ignored and not ignored tests
--ignored Run only ignored tests
...
My other idea was to pass the arguments in an environment variable, that is
DOCTEST_ARGS="--test-case='X'" cargo test
but then I need to somehow split that string into arguments (handling at least spaces and quotes correctly) either in Rust or in C++.
There are two pieces of Rust toolchain involved when you run cargo test.
cargo test itself looks for all testable targets in your package or workspace, builds them with cfg(test), and runs those binaries. cargo test processes the arguments to the left of the --, and the arguments to the right are passed to the binary.
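For example:
cargo test --release -- --nocapture
Here --release is consumed by cargo test itself, while --nocapture is forwarded to the compiled test binary (both are just standard options, used here only to illustrate the split).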
Then,
Tests are built with the --test option to rustc which creates an executable with a main function that automatically runs all functions annotated with the #[test] attribute in multiple threads. #[bench] annotated functions will also be run with one iteration to verify that they are functional.
The libtest harness may be disabled by setting harness = false in the target manifest settings, in which case your code will need to provide its own main function to handle running tests.
The “libtest harness” is what rejects your extra arguments. In your case, since you're intending to run an entire other test suite, I believe it would be appropriate to disable the harness.
Move your delegation code to its own file, conventionally located in tests/ in your package directory:
Cargo.toml
src/
lib.rs
...
tests/
cpp_test.rs
Write an explicit target section in your Cargo.toml for it, with harness disabled:
[[test]]
name = "cpp_test"
# path = "tests/cpp_test.rs" # This is automatic; you can use a different path if you really want to.
harness = false
In cpp_test.rs, instead of writing a function with the #[test] attribute, write a normal main function which reads env::args() and calls the C++ tests.
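For illustration, a minimal sketch of what cpp_test.rs might look like under these assumptions, reusing the run_tests binding from the question (the mycrate::ffi path is a placeholder for however your crate exposes that binding):
// tests/cpp_test.rs -- built with `harness = false`, so we supply main() ourselves.
use std::env;
use std::process::exit;

use mycrate::ffi; // placeholder path to the ffi module from the question

fn main() {
    // With the libtest harness disabled, everything after `--` on
    // `cargo test -- --test-case="X"` arrives here via env::args().
    let mut cli_args: Vec<String> = env::args().collect();
    if !ffi::run_tests(cli_args.as_mut_slice()) {
        // A non-zero exit code marks this test target as failed.
        exit(1);
    }
}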
[Disclaimer: I'm familiar with these mechanisms because I've used Criterion benchmarking (which similarly requires disabling the default harness) but I haven't actually written a test with custom arguments the way you're looking for. So, some details might be wrong. Please let me know if anything needs correcting.]
In addition to Kevin Reid's answer, if you don't want to write your own test harness, you can use the shell-words crate to split an environment variable into individual arguments following shell rules:
use std::env::var;
use std::process::Command;

let args = var("DOCTEST_ARGS").unwrap_or_else(|_| String::new());
let args = shell_words::split(&args).expect("failed to parse DOCTEST_ARGS");
Command::new("cpptest")
    .args(args)
    .spawn()
    .expect("failed to start subprocess")
    .wait()
    .expect("failed to wait for subprocess");
So I'm using Busted to create unit tests for an existing Lua file, without changing the code in the file if possible. The file imports another file, and then stores various methods from that file in local functions, like so.
[examplefile.lua]
local helper = require "helper"
local helper_accept = helper.accept
local helper_reject = helper.reject

foo = function()
    -- do something which uses helper_accept
    -- do something which uses helper_reject
end
I want to spy on these methods in my tests to ensure that they have been called at the right places. However, I can't find any way to do this from the test.
I've tried simply mocking out the helper methods, as in:
[exampletest.lua]
local helper = require "helper"
local examplefile = require "examplefile"

-- mock the helper function to simply return true
helper.accept = function() return true end
spy.on(helper, "accept")

examplefile.foo()

assert.spy(helper.accept).was.called()
but that doesn't work, because the real file captured the original functions in its locals helper_accept and helper_reject when it was required, so it never sees the replaced helper.accept and helper.reject.
Can this be done without changing the code?
Thanks.
The easiest way I can think of for accomplishing this is to override the "helper" library with hook stubs. You can do this by modifying the package.loaded table. The package.loaded table stores the result of an initial call to require "lib", so that if the same require is called again, the module does not need to be reloaded. If you place something in there before the first call to require "lib", it will never actually load the library from the filesystem.
In your case you may want to actually load the library, but hook all the library accesses. I'd do that something like this...
local lib = require "lib"
local function hook_func(_, key)
print('Accessing "lib" attribute '..tostring(key))
-- other stuff you might want to do in the hook
return lib[key]
end
package.loaded["lib"] = setmetatable({}, {__index = hook_func})
I have often written my test code in the main function while developing an API, but because D has built-in unittest blocks I want to start using them.
My current workflow is the following: I have a script that watches for changes in any .d file; if the script finds a modified file it runs dub build.
The problem is that dub build doesn't seem to build the unittest blocks:
module foo;
struct Bar{..}
unittest{
...
// some syntax error here
...
}
It only compiles the unittests if I explicitly run dub test, but I don't want to compile and run them at the same time.
The second problem is that I want to be able to run unittests for a single module for example
dub test module foo
Would this be possible?
You can program a custom test runner using the trait getUnitTests.
getUnitTests
Takes one argument, a symbol of an aggregate (e.g. struct/class/module). The result is a tuple of all the unit test functions of that aggregate. The functions returned are like normal nested static functions, CTFE will work and UDA's will be accessible.
In your main() you should be able to write something that takes an arbitrary number of modules:
void runModuleTests(Modules...)()
{
    static if (Modules.length >= 1)
    {
        // run every unittest block of the first module ...
        foreach (test; __traits(getUnitTests, Modules[0]))
            test();
        // ... then recurse over the remaining modules
        static if (Modules.length > 1)
            runModuleTests!(Modules[1..$])();
    }
}
Of course the -unittest switch must be passed to dmd.
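As a rough sketch, assuming two placeholder modules foo and bar and a build with -unittest (note that a plain -unittest build will also run all unittests through the default runtime test runner before main unless that runner is overridden, e.g. via core.runtime.Runtime.moduleUnitTester):
// Hedged sketch of a selective test entry point; foo and bar are placeholder module names.
import foo;
import bar;

void main()
{
    // Run only the unittest blocks of the modules we are interested in.
    runModuleTests!(foo, bar)();
}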
I am using Robot Framework to automate onboard unit testing of a Linux based device.
The device has a directory /data/tests that contains a series of subdirectories; each subdirectory is a test module whose run.sh is executed to run the unit test. For example:
/data/tests/module1/run.sh
/data/tests/module2/run.sh
I wrote a function that collects the subdirectory names in an array, and this is the list of test modules to be executed. The number of modules can vary daily.
@{modules}=    SSHLibrary.List Directories In Directory    /data/tests
Then another function (Module Test) runs a FOR loop over the element list, executes the run.sh in each subdirectory, collects the log data, and logs it to the log.html file.
The issue I am experiencing is that when the log.html file is created, there is one test case titled Module Test, and under the FOR loop, a 'var' entry for each element (test module). Under each 'var' entry are the results of the module execution.
Is it possible, from within the FOR loop, to create a test case for each element and log results against it? Right now, if one of the modules/elements fails, I do not get accurate results; I still get a pass for the Module Test test case. I would like to log test cases Module 1, Module 2, ..., Module N, with logs and a pass/fail for each one. Given that the number of modules can vary from execution to execution, I cannot create static test cases; I need to be able to dynamically create the test cases once the number of modules has been determined for the test run.
Any input is greatly appreciated.
Thanks,
Dan.
You can write a simple script that dynamically creates the robot test file by reading /data/tests/module*, then creates one test case for each of the modules. In each test case, simply run the module's run.sh as an operating system command and check the return code.
This way you get one single test suite, with many test cases, each representing a module.
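A minimal sketch of such a generator, assuming Python is used on the host and SSHLibrary's Execute Command runs each module on the device (file names, the suite setup keyword, and the way the module list is obtained are placeholders):
#!/usr/bin/env python3
# Hedged sketch: generate a .robot file with one test case per module.
# The module names are passed on the command line here for simplicity; in
# practice they would come from listing /data/tests (e.g. over SSH, as in
# the question). "Open Connection To Device" is a hypothetical user keyword.
import sys

modules = sys.argv[1:]   # e.g. module1 module2 ...

lines = [
    "*** Settings ***",
    "Library    SSHLibrary",
    "Suite Setup    Open Connection To Device",
    "",
    "*** Test Cases ***",
]
for name in modules:
    lines += [
        name,
        f"    ${{out}}    ${{rc}}=    Execute Command    /data/tests/{name}/run.sh    return_rc=True",
        "    Log    ${out}",
        "    Should Be Equal As Integers    ${rc}    0",
        "",
    ]

with open("onboard_tests.robot", "w") as fh:
    fh.write("\n".join(lines) + "\n")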
Consider writing a bash script that runs the Robot test for each module and then merges the results into one report with the rebot tool. Use the --name option when invoking pybot (robot in newer releases) to differentiate the tests in the report.
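A rough sketch of that approach, with placeholder module and file names (module_suite.robot would be a small suite that tests the single module passed in via ${MODULE}; all command-line options used are standard Robot Framework ones):
#!/bin/sh
# Hedged sketch: one Robot run per module, results merged with rebot.
mkdir -p results
for module in module1 module2; do
    robot --name "${module}" --variable MODULE:"${module}" \
          --outputdir results --output "${module}.xml" \
          --log NONE --report NONE module_suite.robot
done
rebot --name "Onboard Tests" --outputdir results --output combined.xml results/module*.xml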
We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit "[Fact]" as below
[Fact]
public void TestAllCases()
{
PileOfTests pot = new PileOfTests();
pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say 50 files). Each is then hammered against the URI to see if each returns HTTP 200 ("ok"). If any fail, we're currently printing it as a failure by using
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye but xUnit thinks all tests failed (understandably). Most importantly, we can't specify to xunit "here, run only this specific test". It's all or nothing the way it's currently built.
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.
The simple answer is:
No, not directly. But there exists a workaround, albeit a bit hacky, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying the [RunWith(typeof(CustomRunner))] attribute on a class, one can instruct xUnit to use the CustomRunner class - which must implement Xunit.Sdk.ITestClassCommand - to enumerate the tests available on the test class decorated with this attribute.
But unfortunately, while the invocation of test methods has been decoupled from System.Reflection + the actual methods, the way the tests to run are passed to the test runner hasn't.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI - but any attempts to run / invoke it end up with a TargetInvocationException.
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as different tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing that each of the generated methods does is to invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See linked gist for further explanation.
Hope this helps; feel free to use the example implementation if you like.