While coding with Python's unittest module, I have found it useful to mark tests as skipped at execution time (see the unittest.SkipTest exception in Python).
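For reference, this is the kind of skip I mean in Python (the class name here is just illustrative):

import unittest

class WiblyTest(unittest.TestCase):
    def test_wibly(self):
        # Reported as "skipped" in the output, not as a pass or a failure
        raise unittest.SkipTest("http://my::defect.tracking.software/#4321")

if __name__ == "__main__":
    unittest.main()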
Is there anything similar in Boost.Test?
I am implementing my tests using Boost version 1.49.0 and I want to add something like:
BOOST_AUTO_TEST_CASE(test_wibly)
{
throw boost::???::skip_test("http://my::defect.tracking.software/#4321");
}
Basically, the test should be counted neither as passed nor as failed, but as "skipped", and it should appear as such in the output.
If there is nothing like it, where can I find some resources on how to implement it myself (on top of Boost.Test)?
The documentation has a section on skipping tests, but it refers to skipping a test suite if a previous test fails.
As far as I know there is no way to do this with Boost Test.
I have run across the NCBI C++ Toolkit, which has an enhanced version of Boost Test that adds this sort of capability. I haven't had an occasion to try it yet, so I can't vouch for it.
I have recently been getting back into Python and I am a bit rusty. I have started working with the testing framework nose, and I am trying to find out what functions are available to me when using this framework.
For example, when I used RSpec in Ruby, if I wanted to find out what "options" were available when writing a test case, I would simply go to the link below and browse the documentation until I found what I needed:
https://www.relishapp.com/rspec/rspec-expectations/docs/built-in-matchers/comparison-matchers
Now when I try and do the same for nose, Google keeps sending me to:
https://nose.readthedocs.io/en/latest/writing_tests.html#test-functions
Although the documentation is informative, it's not really what I'm looking for.
Is there a Python command I can use to discover the available testing options, or another place where good, up-to-date documentation is kept?
All the assertions nose/unittest provides should be documented here:
https://docs.python.org/2.7/library/unittest.html
In addition to the docs, the code will always tell the truth. You could check out the library source code, or drop into a debugger inside your test method:
import pdb; pdb.set_trace()
And then inspect the test method for available assertions.
dir(self)
help(unittest.skip)
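Putting those together, here is a minimal sketch (plain unittest; the class and test names are made up) that prints every assert* helper available on a test case:

import unittest

class ExampleTest(unittest.TestCase):
    def test_list_available_assertions(self):
        # Every assert* helper inherited from unittest.TestCase is visible on self
        assertions = sorted(name for name in dir(self) if name.startswith("assert"))
        print("\n".join(assertions))  # assertEqual, assertIn, assertRaises, assertTrue, ...
        self.assertIn("assertEqual", assertions)

if __name__ == "__main__":
    unittest.main()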
I'm creating a library of components in Modelica, and would appreciate some input on techniques for unit testing the package.
So far I have a test package, consisting of a set of models, one per component. Each test model instantiates a component, and connects it to some very simple helper classes that provide the necessary inputs and outputs.
This works fine when using it interactively in the OMEditor, but I'm looking for a more automated solution with pass/fail criteria etc.
Should I start writing .mos scripts, or is there another/better way?
Thanks.
I like how the OpenModelica testing results look; see
https://test.openmodelica.org/libraries/MSL_3.2.1/BuildModelRecursive.html
click on a red cell: https://test.openmodelica.org/libraries/MSL_3.2.1/files/Modelica.Electrical.Analog.Examples.AD_DA_conversion.diff.html
choose "javascript" for a failing signal: https://test.openmodelica.org/libraries/MSL_3.2.1/files/Modelica.Electrical.Analog.Examples.AD_DA_conversion.diff.resistor.v.html
No idea how they are doing it, though. Obviously some kind of regression testing is done, with previous results stored, but I have no idea whether that comes from a testing library or is home-made.
In general, I find it kind of sad/suboptimal that there isn't "the one" testing solution everybody can/should use (cf. e.g. nose or pytest in the Python ecosystem). Instead, everybody seems to cook up their own solution (or tries to), and all you find is some Modelica conference papers (often without a trace of an implementation) or an unmaintained library of unknown status.
Off the top of my head, I found/know of (some already linked in other answers here):
OM testing
JModelica testing (seems to only test for compiler errors?)
Xogeny test (Some tests of the library itself fail for me. Also, does not seem to include a test runner)
MoUnit (something by Fraunhofer, and not publicly available - maybe in OneWind/OneModelica?)
UnitTesting (apparently some kind of predecessor of XogenyTest. Also, no sources/implementation found)
Optimica Testing Toolkit (apparently a commercial product by Modelon)
SystemModeler VerificationTest
buildingspy Python package, for regression testing among other things. Under the umbrella of the Berkeley Modelica Buildings Library. (Simulation only with Dymola)
Modelica_Requirements library -- define requirements for simulation. (claimed to be open source and implemented, but apparently not available anywhere)
... I'm sure there are more I have forgotten or am not aware of
This seems like a pathological instance of https://xkcd.com/927/. It's kinda impossible for a (non-dev) user to know which of those to choose, or which are actually good/usable/available/...
(Not real testing, but also relevant: parsing and semantic analysis using ANTLR: modelica.org/events/Conference2003/papers/h31_parser_Tiller.pdf)
Writing a .mos script would be one way, but there is also a small proof-of-concept library by Michael Tiller, XogenyTest, which you could use as a basis.
I prefer using .mos scripts; they work pretty well when you further integrate your test framework into a continuous integration tool. BuildingsPy is a good example of this: even though it's not integrated with CI tools out of the box, it's still a good tool.
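For illustration, this is roughly how BuildingsPy's regression tester is driven from Python. I haven't checked this against the current buildingspy release, so treat the module path and method names as assumptions and consult the buildingspy documentation:

import sys
# Assumed module path; check the buildingspy docs for the current entry point.
import buildingspy.development.regressiontest as regressiontest

# Run from the root directory of the Modelica library under test.
tester = regressiontest.Tester()  # regression-test driver (uses Dymola by default)
retval = tester.run()             # simulate the test models and compare the results
                                  # against previously stored reference results
sys.exit(retval)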
Here's a reference of a good framework design:
UnitTesting: A Library for Modelica Unit Testing
If you have Mathematica and SystemModeler you can run the simulation from Mathematica and use the VerificationTest "function" to test:
VerificationTest[Abs[WSMSimulate["HelloWorld"]["x", .1] - .90] < .01].
Multiple tests can then be simulated in a TestReport[].
I have a big mess of 100 tests in one class, and I run all of them by clicking "Test project (...)". They run in a random order, and I would like them to run in a specific order: from beginning to end, in the same order I wrote them. In Eclipse this isn't a problem because Eclipse just works like that; how do I do it in NetBeans?
Any help will be appreciated.
Edit (due to answers): the ordering is only needed to keep the log clear. The tests are independent.
If your tests need to run in a specific order, something is wrong with your design.
Two tests that need to run one after the other are really one test. Consider this before searching for a solution.
Check this: https://blogs.oracle.com/mindless/entry/controlling_the_order_of_junit
Having tests depend on other tests is a very bad idea 99.9% of the time. Unit tests should be independent of each other; otherwise you might get a cascade of errors, or (even worse) one test failing because of something another test did some time before.
If you still want to go through this pain, you'll need to use a different unit testing framework (such as TestNG - see dependsOnMethods) which supports test dependencies.
JUnit doesn't support this feature because it's seen by many as a bad practice (for very good reasons).
The next JUnit release (4.11) will support ordering of test methods via @FixMethodOrder. The standard Maven Surefire Plugin already supports ordering of test methods (see its runOrder parameter).
NetBeans has good integration with Ant build files. You could write a specific Ant target that executes the tests in order.
Is there an automated tool to generate reports containing information about unit tests when using Sikuli? The data I want would be things such as pass/fail status, a trace to where/why a test failed, and a log of events.
I ended up using the HTMLTestRunner tool; it was far easier than anything else I found and met the criteria I needed. (There is also an XML version of this, XMLTestRunner.)
HTMLTestRunner is an extension to the Python standard library's unittest module. It generates easy-to-use HTML test reports.
http://tungwaiyip.info/software/HTMLTestRunner.html
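For anyone looking for a starting point, this is roughly how HTMLTestRunner gets wired up (it is a Python 2-era, single-file module; the test class and report details below are made up, so adapt them to your Sikuli scripts):

import unittest
import HTMLTestRunner  # the single-file module from the link above

class SmokeTest(unittest.TestCase):
    def test_example(self):
        # Replace with real Sikuli checks; failures appear in the report with tracebacks
        self.assertTrue(True)

if __name__ == "__main__":
    suite = unittest.TestLoader().loadTestsFromTestCase(SmokeTest)
    with open("report.html", "wb") as report:
        runner = HTMLTestRunner.HTMLTestRunner(
            stream=report,
            title="Sikuli test report",
            description="Pass/fail summary with tracebacks and captured output",
        )
        runner.run(suite)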
Another useful tool I found was Robot Framework, which can be integrated with Sikuli, but it is more complicated and requires a lot of research and reading of documentation.
I am using the Boost 1.34.1 unit test framework. (I know the version is ancient, but right now updating or switching frameworks is not an option for technical reasons.)
I have a single test module (#define BOOST_TEST_MODULE UnitTests) that consists of three test suites (BOOST_AUTO_TEST_SUITE( Suite1 );) which in turn consist of several BOOST_AUTO_TEST_CASE()s.
My question:
Is it possible to run only a subset of the test module, i.e. limit the test run to only one test suite, or even only one test case?
Reasoning:
I integrated the unit tests into our automake framework, so that the whole module is run on make check. I wouldn't want to split it up into multiple modules, because our application generates lots of output and it is nice to see the test summary at the bottom ("X of Y tests failed") instead of spread across several thousand lines of output.
But a full test run is also time-consuming, and the output of the test you're looking for is likewise drowned out; thus it would be nice if I could somehow limit the scope of the tests being run.
The Boost documentation left me pretty confused and none the wiser; does anyone have a suggestion? (Some trickery that allows splitting up the test module while still getting a usable test summary would also be welcome.)
Take a look at the --run_test parameter; it should provide what you're after. For example, invoking the test binary with --run_test=Suite1 should restrict the run to that suite, and something like --run_test=Suite1/my_test_case to a single test case.