I'm using the plunit package for my Prolog unit tests (SWI-Prolog 7.2).
run_tests/0 prints the results to the console, but I would like to export plunit test results in the xUnit XML format that most CI servers understand. Is there a way to do this?
I assume that the SWI-Prolog plunit tool uses the message printing mechanism to generate its output. If true, you should be able to export unit test results in the xUnit XML format (or any other format) by intercepting those messages using the message_hook/3 predicate:
http://www.swi-prolog.org/pldoc/doc_for?object=message_hook/3
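For example, here is a minimal sketch of such a hook; the plunit(...) wrapping of the message terms is an assumption, so print the intercepted terms first to see exactly what your SWI-Prolog version emits:

:- dynamic recorded_result/1.

% Intercept plunit messages and record them for later export.
% Failing at the end lets the default console output still be printed.
user:message_hook(plunit(Term), _Kind, _Lines) :-
    assertz(recorded_result(Term)),
    fail.

After run_tests/0 finishes, you can walk the recorded_result/1 facts and write out the <testsuite>/<testcase> elements of the xUnit XML file.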
By coincidence, I'm working on similar support (for exporting test results) for Logtalk's unit testing tool, lgtunit. It should give you an idea of how to do it for plunit. I committed a preliminary version today:
https://github.com/LogtalkDotOrg/logtalk3/blob/master/tools/lgtunit/NOTES.md
https://github.com/LogtalkDotOrg/logtalk3/blob/master/tools/lgtunit/xunit_xml_report.lgt
In my case, this support is targeted at integration with the Concourse CI server. There seems to be, unfortunately, a lack of definitive information on the xUnit XML format, with sources quoting different versions of e.g. which attributes are required or optional. I did find an XSD for this format, but I have no idea of its accuracy or whether it's just another variation:
https://gist.github.com/erikd/4192748
Related
I'm creating a library of components in Modelica, and would appreciate some input on techniques for unit testing the package.
So far I have a test package, consisting of a set of models, one per component. Each test model instantiates a component, and connects it to some very simple helper classes that provide the necessary inputs and outputs.
This works fine when using it interactively in the OMEditor, but I'm looking for a more automated solution with pass/fail criteria etc.
Should I start writing .mos scripts, or is there another/better way?
Thanks.
I like how OpenModelica testing results look; see
https://test.openmodelica.org/libraries/MSL_3.2.1/BuildModelRecursive.html
click on a red cell: https://test.openmodelica.org/libraries/MSL_3.2.1/files/Modelica.Electrical.Analog.Examples.AD_DA_conversion.diff.html
choose "javascript" for a failing signal: https://test.openmodelica.org/libraries/MSL_3.2.1/files/Modelica.Electrical.Analog.Examples.AD_DA_conversion.diff.resistor.v.html
No idea how they are doing it, though. Obviously some kind of regression testing is done, with previous results stored, but no idea if that is from some testing library or self-made.
In general, I find it kinda sad/suboptimal that there isn't "the one" testing solution everybody can/should use (cf. e.g. nose or pytest in the Python ecosystem). Instead, everybody seems to cook up their own solution (or tries to), and all you find is some Modelica conference papers (often without a trace of an implementation) or unmaintained libraries of unknown status.
Off the top of my head, I found/know of (some already linked in other answers here)
OM testing
JModelica testing (seems to only test for compiler errors?)
Xogeny test (Some tests of the library itself fail for me. Also, does not seem to include a test runner)
MoUnit (something by Fraunhofer, and not publicly available - maybe in OneWind/OneModelica?)
UnitTesting (apparently some kind of predecessor of XogenyTest. Also, no sources/implementation found)
Optimica Testing Toolkit (apparently a commercial product by Modelon)
SystemModeler VerificationTest
buildingspy Python package, for regression testing among other things. Under the umbrella of the Berkeley Modelica Buildings Library. (Simulation only with Dymola)
Modelica_Requirements library -- define requirements for simulation. (claimed to be open source and implemented, but apparently not available anywhere)
... I'm sure there are more I have forgotten or am not aware of
This seems like a pathological instance of https://xkcd.com/927/. It's kinda impossible for a (non-dev) user to know which of those to choose, which are actually good/usable/available/...
(Not real testing, but also relevant: parsing and semantic analysis using ANTLR: modelica.org/events/Conference2003/papers/h31_parser_Tiller.pdf)
Writing a .mos script would be one way, but there is also a small proof-of-concept library by Michael Tiller, XogenyTest, which you could use as a basis.
I prefer using a .mos script; it works pretty well when you further integrate your test framework into a continuous integration tool. BuildingPy is a good example of this: though it's not integrated into CI tools, it's still a good tool.
Here's a reference of a good framework design:
UnitTesting: A Library for Modelica Unit Testing
If you have Mathematica and SystemModeler you can run the simulation from Mathematica and use the VerificationTest "function" to test:
VerificationTest[Abs[WSMSimulate["HelloWorld"]["x", .1] - .90] < .01]
Multiple tests can then be run together in a TestReport[].
The company I work at uses xUnit to write integration tests. xUnit works perfectly for us, but we would like to extract more statistical information out of runs. For example: "How many times has this specific test case failed in the last month?", and maybe even present that nicely on a pie chart.
Since we have a Microsoft Test Manager license, I did some research on it, and it seems like it does support more detailed reports. I also like the coupling of manual test cases to automated test cases, and the fact that you can easily identify how many of your test cases are automated.
Sadly, Test Manager only supports MSTest integration out of the box. I did notice, however, that the MSTest.exe alternative, VSTest.exe, is able to run xUnit tests and even output a TRX result file. Is there any way to integrate xUnit (or NUnit) with Test Manager somehow? Has anyone done so in the past? We'd prefer to use Test Manager, but I'm interested to know if there is an alternative that supports coupling test cases with automated test cases and a way to get statistical information about multiple runs.
Thank you.
This tool will allow you to associate NUnit and xUnit test cases with Microsoft Test Manager.
https://github.com/JakeGinnivan/TestCaseAutomationAssigner
After some research on the Microsoft forum (and personally speaking with a Microsoft representative), it seems like it's not possible to use xUnit with MS Test Manager.
We decided not to use MS Test Manager and to handle all our test runs using VSTest.exe and xUnit categories.
Edit: it's now possible to use an external tool for that, called TestCaseAutomationAssigner. See Jeff's answer for more information.
There is an option in IntelliJ IDEA to export test results (after running TestNG tests) in HTML or XML format. I am wondering: is it possible to generate them by default?
Take a look at the default TestNG logging reporters and what seems to be a cosmetically improved version, ReportNG. Depending on your build system, you can take advantage of what's already being provided, or you can supply your own implementation.
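If you go the custom route, a reporter is just another TestNG listener. Here is a minimal sketch; the class name and the report contents are made up, but IReporter and its generateReport signature are TestNG's own:

import org.testng.IReporter;
import org.testng.ISuite;
import org.testng.xml.XmlSuite;

import java.util.List;

// Minimal sketch of a custom TestNG reporter; register it in testng.xml
// under <listeners>, or pass it via the -reporter command-line switch.
public class SimpleReporter implements IReporter {
    @Override
    public void generateReport(List<XmlSuite> xmlSuites,
                               List<ISuite> suites,
                               String outputDirectory) {
        // Walk the suite results and write whatever format you need
        // (HTML, XML, ...) into outputDirectory.
        for (ISuite suite : suites) {
            System.out.println("Suite finished: " + suite.getName());
        }
    }
}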
How can I perform unit testing in Simulink, or preferably, Stateflow?
I'm a fan of agile software methods, including test-driven development. I'm responsible for the development of safety-critical control software, and we're using Matlab/Simulink/Stateflow for its development. This toolset was selected because of the link with plant (hardware) models (model-in-the-loop, hardware-in-the-loop).
I have found some links on Stack Overflow: Unit-testing framework for MATLAB: xunit, slunit and doctest.
Does anyone have experience in using those or different unit test frameworks?
How do I link this to continuous integration systems (e.g. Hudson)?
EDIT: This is now much easier and getting easier all the time with the Jenkins plugin for MATLAB
ORIGINAL ANSWER:
As Craig mentioned, there is indeed a framework in MATLAB, introduced in R2013a. Furthermore, this framework added a TAPPlugin in R2014a which outputs the Test Anything Protocol. Using that protocol, you can set up your CI build with a TAP plugin (e.g. Jenkins, TeamCity) so that the CI system can fail the build if the tests fail.
Your CI build may look like a shell command to start MATLAB and run all your tests:
/your/path/to/matlab/bin/matlab -nosplash -nodisplay -nodesktop -r "runAllMyTests"
Then runAllMyTests creates the suite to run and runs it, with the TAP output redirected to a file. You'll need to tweak specifics here, but perhaps this can help you get started:
function runAllMyTests
import matlab.unittest.TestSuite;
import matlab.unittest.TestRunner;
import matlab.unittest.plugins.TAPPlugin;
import matlab.unittest.plugins.ToFile;

try
    % Create the suite and runner
    suite = TestSuite.fromPackage('packageThatContainsTests', 'IncludingSubpackages', true);
    runner = TestRunner.withTextOutput;

    % Add the TAPPlugin directed to a file in the Jenkins workspace
    tapFile = fullfile(getenv('WORKSPACE'), 'testResults.tap');
    runner.addPlugin(TAPPlugin.producingOriginalFormat(ToFile(tapFile)));

    runner.run(suite);
catch e
    % Make sure an unexpected error fails the build
    disp(e.getReport);
    exit(1);
end
exit force;  % exit immediately, bypassing finish.m
EDIT: I used this topic for the first two posts of a new developer-oriented blog launched this year.
Unit testing Simulink is not straightforward, unfortunately. MathWorks have SystemTest. Alternatively, you can roll your own Simulink testing framework, which is the approach that we've followed; it is not too difficult, but you may need to build test harnesses programmatically.
In order to integrate with CI, you need to create a function/script that executes all the tests, then you can use the command-line parameters for MATLAB.exe to run a script on start-up. I'm not sure anyone has a good way of integrating the test reports with the CI software, though. Just look at the number of comments in Unit-testing framework for MATLAB.
With 2015a, MATLAB introduces a new product named "Simulink Test". Perhaps that'll simplify this mess.
http://www.mathworks.com/products/simulink-test/features.html#manage-test-plans-and-test-execution
R2016b introduces integration between Simulink Test and the MATLAB Unit Testing framework. Tests created with Simulink Test using the Test Manager (*.mldatx) are recognized by, and can be run natively using, the MATLAB Unit Test runner, so you can generate JUnit-style XML test results or TAP test results, facilitating continuous integration workflows.
See this reference for more information: https://www.mathworks.com/help/sltest/ug/run-test-files-using-matlab-unit-test.html?s_tid=gn_loc_drop
The documentation shows an example of producing TAP results using matlab.unittest.plugins.TAPPlugin, but you can just as easily use the XMLPlugin (https://www.mathworks.com/help/matlab/ref/matlab.unittest.plugins.xmlplugin-class.html) instead.
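A minimal sketch of the XMLPlugin route; the 'tests' folder and the output file name are assumptions:

import matlab.unittest.TestRunner
import matlab.unittest.plugins.XMLPlugin

% Discover both MATLAB unit tests and Simulink Test files (*.mldatx)
suite = testsuite('tests', 'IncludingSubfolders', true);

% Produce JUnit-style XML that CI servers such as Jenkins understand
runner = TestRunner.withTextOutput;
runner.addPlugin(XMLPlugin.producingJUnitFormat('junitResults.xml'));
results = runner.run(suite);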
This should open up better integration within just the MATLAB environment, even without CI in the picture, with the ability to have MATLAB and Simulink tests together in the same test suite and have them run together seamlessly. For example, if you have a directory MYDIR with both native MATLAB unit tests and Simulink tests, you can do something as simple as the following to execute both kinds of tests in one shot:
results = runtests(MYDIR)
If your system is complex, you should decompose it using Model Reference and test each of these independently.
Another solution (more "old school") is to put your main blocks in a library and to create small test models.
To test these submodels, especially those with a state machine (Stateflow), the best approach is to create temporal test cases with the Signal Builder block. You have a powerful function, signalbuilder, to interact with this block and load test cases. My method is to have, for each case of each submodel, an input file and an output file. The outputs of the model are the "correct" outputs and the ones from the blocks. The model is run with sim (no external inputs) and the two outputs are compared with a script that indicates which signal differs (and when); a sketch is given below.
You could use an existing system, but I prefer to use my own to run each case (or some of them).
I don't have any public code for that, but that's the way I work. I don't use a CI system, so I can't answer the second part of your question.
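Here is a hedged sketch of that comparison script; the model name, Signal Builder block path, file names, logging settings and tolerance are all assumptions:

% Select the test case in the Signal Builder block, then run the model
signalbuilder('mymodel/Signal Builder', 'activegroup', 2);   % case no. 2
out = sim('mymodel', 'ReturnWorkspaceOutputs', 'on');

% Compare the logged output against the stored reference output
actual   = out.get('yout');             % assumes structure-format output logging
expected = load('expected_case2.mat');  % stored reference for case no. 2

tol = 1e-6;
delta = abs(actual.signals.values - expected.values);
[maxErr, idx] = max(delta(:));
if maxErr > tol
    fprintf('FAIL: max deviation %g at sample %d\n', maxErr, idx);
else
    disp('PASS');
end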
MATLAB (since R2013a) has built-in xUnit-style support, in the form of the Unit Testing Framework.
I haven't used it, but since it's possible to run Simulink from MATLAB with sim(), this framework can be used to test your Simulink models. Your libraries, and possibly your models, will need a wrapper to execute them, as the other answerers have noted.
There are plenty of examples on the MathWorks site; unfortunately, none of them run Simulink models. I'd code an example for you, but I don't have ML2013b :-(
In order to initiate your tests from a CI (I use Jenkins), you can call MATLAB to run a .m file that runs your test suite. This example cmd script will call Run_Tests.m from MATLAB:
IF EXIST "C:\Program Files (x86)\MATLAB\R2013b\bin\win32\matlab.exe" (
REM WinXP
"C:\Program Files (x86)\MATLAB\R2013b\bin\win32\matlab.exe" -r "Run_Tests;exit" -logfile matlab.log
) ELSE (
REM Win7
"C:\Program Files\MATLAB\R2013b\bin\win32\matlab.exe" -r "Run_Tests;exit" -logfile matlab.log
)
Note that if a startup.m exists in the directory that you call MATLAB from, it'll be executed automatically before Run_Tests.m.
I think you are searching for something like EZTEST. It is intended for exactly your purpose: test-driven development for Simulink and Stateflow at the unit level. For safety-critical software, there is also a Safety Manual included, which describes what is covered regarding ISO 26262.
Edit: I am a developer of this software, so my opinion may be biased. But I am not involved in the marketing or sale of the product. I am just posting this because I know that there are few to no unit test frameworks out there that meet the questioner's needs (as the other answers might suggest).
Testing units using SIL and PIL is also supported. Unfortunately I am not familiar with Hudson, so I cannot address this part of the question.
I've seen different solutions to the problem of unit testing Simulink models: Simulink Verification & Validation, which did not support the xUnit concepts of test runners and test suites at the time of examination, and TPT, which is overloaded with functionality, not easy to use, and very hard in terms of changeability and maintainability.
Furthermore I've seen custom solutions with Matlab scripts and Excel tables which were lightweight but also difficult in terms of understandability and maintainability. I'd still not recommend using any of these approaches, at least not for unit testing.
In the end, we ended up using a C unit testing framework (CUnit) testing the generated code. While this definitely has the disadvantage that you have to generate code before testing, it also has a lot of advantages, like easy integration into CI systems, high flexibility of writing unit tests, fast execution of unit tests and last but not least refactoring capabilities in terms of switching from Simulink to another model-based environment or to hand-written code. Especially the last point should not be underestimated since I have seen many Simulink models that should have been hand-written modules in the first place. Nowadays, I'd recommend using GoogleTest instead of CUnit.
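To give a flavor of the approach, here is a hedged GoogleTest sketch against generated model code; the mymodel_* names mimic typical Embedded Coder output but are placeholders for whatever your code generator actually produces:

#include <gtest/gtest.h>

extern "C" {
#include "mymodel.h"   // hypothetical generated model header
}

// Drive the generated step function once and check one input/output pair.
TEST(MyModelTest, GainDoublesInput) {
    mymodel_initialize();                    // generated init (placeholder name)
    mymodel_U.In1 = 21.0;                    // generated input struct (placeholder)
    mymodel_step();                          // one fixed-step simulation step
    EXPECT_DOUBLE_EQ(42.0, mymodel_Y.Out1);  // generated output struct (placeholder)
}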
I've been working with a document repository using XQuery (via Java and .NET interfaces) and was wondering if anyone has any recommendations for unit testing XQuery modules?
There are several XQuery unit testing frameworks, but most are special-purpose written for a specific XQuery Processor. This is not a complete list, but includes most of the popular ones I'm aware of:
MarkLogic
Roxy Unit Tester https://github.com/marklogic/roxy/wiki/Unit-Testing
XQUT https://github.com/mblakele/xqut
xray https://github.com/robwhitby/xray
eXist
XQSuite http://exist-db.org/exist/apps/doc/xqsuite.xml
BaseX
XQuery Unit Module http://docs.basex.org/wiki/Unit_Module
Here is a quick DIY type solution for this problem:
Poor man's unit testing with XQuery (dead link).
This approach seems to have been embraced and extended for the tests of the xprocxq project.
Other tools exist, for example XTC.
This idea may help your development; it's just a starting point, but you can develop and enhance it with various unit-test-case design ideas.
Since XML is king in the metadata world, it's quite easy to design and run:
design a test-case XML that captures all the details:
test data (input & expected result)
test-case id
test-case local methods
invoke dynamically based on those details:
functions
sample test script
results
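For instance, here is a minimal sketch of such a data-driven runner in XQuery 3.0; the test-cases.xml layout and the module namespace are invented for illustration:

xquery version "3.0";

(: Each <test-case> names a one-argument function plus its input and
   expected output; compare actual vs. expected and report pass/fail. :)
for $case in doc("test-cases.xml")//test-case
let $f := fn:function-lookup(fn:QName("http://example.com/mymodule", string($case/function)), 1)
let $actual := $f($case/input/node())
return
    <result id="{$case/@id}"
            status="{if (deep-equal($actual, $case/expected/node()))
                     then 'pass' else 'fail'}"/>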
Try XQSuite
It's pretty slick; here's the "minimal example":
declare namespace test="http://exist-db.org/xquery/xqsuite";

declare
    %test:assertEquals("Hello world")
    function local:hello() {
        "Hello world"
};
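To run it with eXist's XQSuite, the documented pattern looks roughly like this (the test module path is a placeholder; in practice the annotated functions live in a library module rather than under local:):

xquery version "3.0";

import module namespace test="http://exist-db.org/xquery/xqsuite"
    at "resource:org/exist/xquery/lib/xqsuite/xqsuite.xql";

test:suite(inspect:module-functions(xs:anyURI("mytests.xql")))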
Additionally, XSpec works wonderfully for XSLT testing (it provides nicely formatted HTML test results, for example), but it appears to need a bit of work for XQuery testing. The project appears to have become inactive in the past few years.