Unit testing a Modelica component library?

I'm creating a library of components in Modelica, and would appreciate some input on techniques for unit testing the package.
So far I have a test package, consisting of a set of models, one per component. Each test model instantiates a component, and connects it to some very simple helper classes that provide the necessary inputs and outputs.
This works fine when using it interactively in the OMEditor, but I'm looking for a more automated solution with pass/fail criteria etc.
Should I start writing .mos scripts, or is there another, better way?
Thanks.

I like how the OpenModelica testing results look; see
https://test.openmodelica.org/libraries/MSL_3.2.1/BuildModelRecursive.html
Click on a red cell: https://test.openmodelica.org/libraries/MSL_3.2.1/files/Modelica.Electrical.Analog.Examples.AD_DA_conversion.diff.html
Choose "javascript" for a failing signal: https://test.openmodelica.org/libraries/MSL_3.2.1/files/Modelica.Electrical.Analog.Examples.AD_DA_conversion.diff.resistor.v.html
No idea how they are doing it, though. Obviously some kind of regression testing is done, with previous results stored, but I don't know whether it is built on a testing library or is home-made.
In general, I find it somewhat sad/suboptimal that there isn't "the one" testing solution everybody can/should use (cf. e.g. nose or pytest in the Python ecosystem). Instead, everybody seems to cook up their own solution (or tries to), and all you find is Modelica conference papers (often without a trace of an implementation) or unmaintained libraries of unknown status.
Off the top of my head, here is what I found/know of (some already linked in other answers here):
OM testing
JModelica testing (seems to only test for compiler errors?)
XogenyTest (Some tests of the library itself fail for me. Also, it does not seem to include a test runner)
MoUnit (something by Fraunhofer, and not publicly available - maybe in OneWind/OneModelica?)
UnitTesting (apparently some kind of predecessor of XogenyTest. Also, no sources/implementation found)
Optimica Testing Toolkit (apparently a commercial product by Modelon)
SystemModeler VerificationTest
buildingspy Python package, for regression testing among other things. Under the umbrella of the Berkeley Modelica Buildings Library. (Simulation only with Dymola)
Modelica_Requirements library -- define requirements for simulation. (claimed to be open source and implemented, but apparently not available anywhere)
... I'm sure there are more I have forgotten or am not aware of
This seems like a pathological instance of https://xkcd.com/927/. It's kinda impossible for a (non-dev) user to know which of those to choose, which are actually good/usable/available/...
(Not real testing, but also relevant: parsing and semantic analysis using ANTLR: https://modelica.org/events/Conference2003/papers/h31_parser_Tiller.pdf)

Writing a .mos script would be one way, but there is also a small proof-of-concept library by Michael Tiller, XogenyTest, which you could use as a basis.

I prefer using a .mos script; it works pretty well when you further integrate your test framework into a continuous integration tool. BuildingsPy is a good example of this: although it does not come with CI-tool integration built in, it's still a good tool.
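As a hedged illustration, BuildingsPy's regression runner can be driven from a CI job with a few lines of Python. This sketch assumes BuildingsPy is installed, a supported simulator is on the path, and the script runs from the library root; method names follow the BuildingsPy documentation, so verify them against your version:

import sys
from buildingspy.development import regressiontest

# Run the library's regression tests and fail the CI build on error.
tester = regressiontest.Tester()
tester.setLibraryRoot(".")   # directory containing the top-level package.mo
retval = tester.run()        # assumed to return non-zero on failure
sys.exit(retval)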
Here's a reference of a good framework design:
UnitTesting: A Library for Modelica Unit Testing

If you have Mathematica and SystemModeler, you can run the simulation from Mathematica and use the VerificationTest function:

VerificationTest[Abs[WSMSimulate["HelloWorld"]["x", .1] - .90] < .01]

Multiple tests can then be collected in a TestReport[].


What form of testing should I perform?

I want to write an algorithm (a bunch of machine learning algorithms) in C/C++ or maybe in Java, possibly in Python. The language doesn't really matter to me - I'm familiar with all of the above.
What matters to me is the testing. I want to train my models using training data, so I have test inputs for which I know the expected outputs, and I compare them against the model's output. What kind of test is this? Is it a unit test? How do I approach the problem? I can see that I could write some code to check what I need checking, but I want to separate the testing from the main code. Testing is a well-developed field and I've seen this done before, but I don't know the name and type of this particular kind of testing, so that I can read up on it and not create a mess. I'd be grateful if you could let me know what this testing method is called.
Your best bet is to watch the psychology-of-testing videos from testing god Miško Hevery: http://misko.hevery.com/
Links to Misko's videos:
http://misko.hevery.com/presentations/
And read this Google testing guide: http://misko.hevery.com/code-reviewers-guide/
Edited:
Anyone can write tests; they are really simple and there is no magic to writing a test. You can simply do something like:

var sut = new MyObject();        // "sut" = system under test
var res = sut.IsValid();
if (res != true)
{
    throw new ApplicationException("message");
}
That is the theory, of course. These days we have tools to simplify the tests, and we can write something like this:
new MyObject().IsValid().Should().BeTrue();
But what you should do is focus on writing testable code; that's the magic key.
Just follow Misko's psychology-of-testing videos to get you started.
This sounds a lot like Test-Driven Development (TDD), where you create unit-tests ahead of the production code. There are many detailed answers around this site on both topics. I've linked to a couple of pertinent questions to get you started.
If your inputs/outputs are at the external interfaces of your full program, that's black box system testing. If you are going inside your program to zoom in on a particular function, e.g., a search function, providing inputs directly into the function and observing the behavior, that's unit testing. This could be done at function level and/or module level.
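As a minimal illustration of the unit-testing case just described (the search function and test names below are invented; a runner such as pytest would collect the test functions):

def find_index(items, target):
    """Toy search function under test."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

# Unit tests feed inputs straight into the function, bypassing the
# program's external interfaces.
def test_find_index_returns_position_of_match():
    assert find_index([3, 1, 4], 4) == 2

def test_find_index_returns_minus_one_on_miss():
    assert find_index([3, 1, 4], 9) == -1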
If you're writing a machine learning project, the testing and training process isn't really Test-Driven Development. Have you ever heard of co-evolution? You have a set of puzzles for your learning system that are, themselves, evolving. Their fitness is determined by how much they confound your candidate solutions.
For example, I want to evolve a sorting network. My learning system is the programs that produce networks. My co-evolution system generates inputs that are difficult to sort. The sorting networks are rewarded for producing correct sorts and the co-evolutionary systems are rewarded for how many failures they trigger in the sorting networks.
I've done this with genetic programming projects and it worked quite well.
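A toy sketch of that arms race, with made-up integer encodings standing in for sorting networks and adversarial inputs (nothing here is from the original project):

import random

def solves(solver, puzzle):
    # Stand-in fitness rule: a solver handles a puzzle if its "skill"
    # is at least the puzzle's "difficulty".
    return solver >= puzzle

def step(solvers, puzzles):
    # Solver fitness: puzzles solved. Puzzle fitness: solvers defeated.
    solvers.sort(key=lambda s: sum(solves(s, p) for p in puzzles), reverse=True)
    puzzles.sort(key=lambda p: sum(not solves(s, p) for s in solvers), reverse=True)
    # Keep the fitter half of each population, refill with mutated copies.
    half = len(solvers) // 2
    solvers[half:] = [s + random.choice([-1, 1]) for s in solvers[:half]]
    puzzles[half:] = [p + random.choice([-1, 1]) for p in puzzles[:half]]

solvers = [random.randint(0, 10) for _ in range(20)]
puzzles = [random.randint(0, 10) for _ in range(20)]
for _ in range(50):
    step(solvers, puzzles)
print("best solver skill:", max(solvers))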
Probably back testing, which means you have some historical inputs and run your algorithm over them to evaluate its performance. The term you used yourself - training data - is more general, and you could search for that to find some useful links.
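A minimal back-testing sketch, assuming the algorithm is a plain function and the history is a list of (inputs, expected) pairs (all names and data below are made up):

def backtest(algorithm, history):
    # history: iterable of (inputs, expected_output) pairs
    hits = sum(algorithm(inputs) == expected for inputs, expected in history)
    return hits / len(history)

# Hypothetical usage: score a trivial classifier on recorded cases.
history = [((1, 2), "low"), ((9, 8), "high"), ((2, 1), "low")]
accuracy = backtest(lambda xs: "high" if sum(xs) > 10 else "low", history)
print(f"accuracy: {accuracy:.0%}")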
It's unit testing: the controllers are tested and the code is checked in and out without really messing up your development code. This process is also part of Test-Driven Development (TDD), where every development cycle is tested before going into the next software iteration or phase.
Although this is a very old post, my 2 cents :)
Once you've decided which algorithmic method to use (your "evaluation protocol", so to say) and tested your algorithm on unitary edge cases, you might be interested in ways to run your algorithm on several datasets and assert that the results are above a certain threshold (individually, on average, etc.).
This tutorial explains how to do it with the pytest framework, which is the most popular testing framework in Python. It is based on an example comparing polynomial fitting algorithms on several datasets.
(I'm the author, feel free to provide feedback on the github page!)
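Independent of the tutorial, the gist is a parametrized test that runs the algorithm on each dataset and asserts an error bound; a minimal sketch (the datasets, model, and threshold here are made up):

import numpy as np
import pytest

def fit_and_score(x, y, degree=2):
    """Algorithm under test: fit a polynomial, return mean squared error."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((y - np.polyval(coeffs, x)) ** 2))

xs = np.linspace(-1.0, 1.0, 50)
DATASETS = {
    "line": (xs, 3.0 * xs + 1.0),
    "parabola": (xs, xs ** 2 - 0.5),
}

@pytest.mark.parametrize("name", sorted(DATASETS))
def test_error_below_threshold(name):
    x, y = DATASETS[name]
    # Arbitrary threshold; both toy datasets admit an exact fit.
    assert fit_and_score(x, y) < 1e-6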

Unit testing/continuous integration with Simulink/Stateflow

How can I perform unit testing in Simulink, or preferably, Stateflow?
I'm a fan of agile software methods, including test driven development. I'm responsible for the development of safety critical control software and we're using Matlab/Simulink/Stateflow for the development of it. This toolset is selected because of the link with plant (hardware) models. (model-in-the-loop, hardware-in-the-loop)
I have found some links on Stackoverflow: Unit-testing framework for MATLAB: xunit, slunit and doctest.
Does anyone have experience in using those or different unit test frameworks?
How do I link this to continuous integration systems (e.g. Hudson)?
EDIT: This is now much easier and getting easier all the time with the Jenkins plugin for MATLAB
ORIGINAL ANSWER:
As Craig mentioned, there is indeed a framework in MATLAB, introduced in R2013a. Furthermore, this framework added a TAPPlugin in R2014a which outputs results in the Test Anything Protocol. Using that protocol you can set up your CI build with a TAP plugin (e.g. Jenkins, TeamCity) so that the CI system can fail the build if the tests fail.
Your CI build may look like a shell command to start MATLAB and run all your tests:
/your/path/to/matlab/bin/matlab -nosplash -nodisplay -nodesktop -r "runAllMyTests"
Then the runAllMyTests creates the suite to run and runs it with the tap output being redirected to a file. You'll need to tweak specifics here, but perhaps this can help you get started:
function runAllMyTests
import matlab.unittest.TestSuite;
import matlab.unittest.TestRunner;
import matlab.unittest.plugins.TAPPlugin;
import matlab.unittest.plugins.ToFile;

try
    % Create the suite and runner
    suite = TestSuite.fromPackage('packageThatContainsTests', 'IncludingSubpackages', true);
    runner = TestRunner.withTextOutput;

    % Add the TAPPlugin directed to a file in the Jenkins workspace
    tapFile = fullfile(getenv('WORKSPACE'), 'testResults.tap');
    runner.addPlugin(TAPPlugin.producingOriginalFormat(ToFile(tapFile)));

    runner.run(suite);
catch e
    disp(e.getReport);
    exit(1);
end
exit force;
EDIT: I used this topic as the first two posts of a new developer-oriented blog launched this year
Unit testing Simulink is not straightforward, unfortunately. MathWorks have SystemTest. Alternatively, you can roll your own Simulink testing framework, which is the approach that we've followed and is not too difficult, but you may need to build test harnesses programmatically.
In order to integrate with CI, you need to create a function/script that executes all the tests, then you can use the command-line parameters for MATLAB.exe to run a script on start-up. I'm not sure anyone has a good way of integrating the test reports with the CI software, though. Just look at the number of comments in Unit-testing framework for MATLAB.
With R2015a, MathWorks introduced a new product named "Simulink Test". Perhaps that'll simplify this mess.
http://www.mathworks.com/products/simulink-test/features.html#manage-test-plans-and-test-execution
R2016b introduces integration between Simulink Test and MATLAB Unit Testing framework. Tests created with Simulink Test using Test Manager (*.mldatx) are recognized by and can be run natively using the MATLAB Unit Test Runner and thus you can generate JUnit style XML test results or TAP test results facilitating Continuous integration workflows.
See this reference for more information: https://www.mathworks.com/help/sltest/ug/run-test-files-using-matlab-unit-test.html?s_tid=gn_loc_drop
The documentation shows an example of producing TAP results using matlab.unittest.plugins.TAPPlugin but you can use XMLPlugin (https://www.mathworks.com/help/matlab/ref/matlab.unittest.plugins.xmlplugin-class.html) instead just as easily.
This also opens up better integration within the MATLAB environment itself, even without CI in the picture, with the ability to have MATLAB and Simulink tests together in the same test suite and run them together seamlessly. For example, if you have a directory MYDIR with both native MATLAB unit tests and Simulink tests, you can execute both kinds of tests in one shot with something as simple as:
results = runtests(MYDIR)
If your system is complex, you should decompose it using Model Reference and test each of these independently.
Another solution (more "old school") is to put your main blocks in a library and to create small test models.
To test these submodels, especially those with a state machine (Stateflow), the best approach is to create temporal test cases with the Signal Builder block. There is a powerful function, signalbuilder, for interacting with this block and loading test cases. My method is to have, for each case of each submodel, an input file and an output file: the outputs are the "correct" reference signals plus the ones produced by the blocks under test. The model is run with sim (no external inputs) and the two sets of outputs are compared with a script that indicates which signal differs (and when).
You could use an existing framework, but I prefer to use my own so I can run each case (or only some of them).
I don't have any public code for that, but that's the approach I use. I don't use a CI system, so I can't answer the second part of your question.
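For what it's worth, a rough sketch of such a comparison script in Python (file names, signal names, and the tolerance are made up; a real version would load whatever format the models log):

import numpy as np

def first_deviation(reference, actual, names, atol=1e-6):
    """Return (signal, sample index, error) for each signal out of tolerance."""
    failures = []
    for col, name in enumerate(names):
        diff = np.abs(reference[:, col] - actual[:, col])
        bad = np.flatnonzero(diff > atol)
        if bad.size:
            failures.append((name, int(bad[0]), float(diff[bad[0]])))
    return failures  # an empty list means the case passes

ref = np.loadtxt("case1_expected.csv", delimiter=",")
act = np.loadtxt("case1_recorded.csv", delimiter=",")
for name, sample, err in first_deviation(ref, act, ["speed", "torque"]):
    print(f"FAIL {name}: first deviation at sample {sample} (|diff| = {err:g})")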
MATLAB (since R2013b) has built-in support for xUnit-style testing, in the form of the Unit Testing Framework.
I haven't used it, but since it's possible to run Simulink from MATLAB with sim(), this framework can be used to test your Simulink models. Your libraries, and possibly your models, will need a wrapper to execute them, as the other answerers have noted.
There are plenty of examples on the MathWorks site; unfortunately none of them runs Simulink models. I'd code an example for you, but I don't have ML2013b :-(
To initiate your tests from a CI system (I use Jenkins), you can call MATLAB to run a .m file that executes your test suite. This example cmd script will call Run_Tests.m from MATLAB:
IF EXIST "C:\Program Files (x86)\MATLAB\R2013b\bin\win32\matlab.exe" (
REM WinXP
"C:\Program Files (x86)\MATLAB\R2013b\bin\win32\matlab.exe" -r "Run_Tests;exit" -logfile matlab.log
) ELSE (
REM Win7
"C:\Program Files\MATLAB\R2013b\bin\win32\matlab.exe" -r "Run_Tests;exit" -logfile matlab.log
)
Note that if a startup.m exists in the directory that you call MATLAB from, it'll be executed automatically before Run_Tests.m.
I think you are searching for something like EZTEST. It is intended for your special purpose: test-driven development for Simulink and Stateflow at unit level. For safety-critical software, there is also a Safety Manual included, which describes what is covered regarding ISO 26262.
Edit: I am a developer of this software, so my opinion may be biased. But I am not involved in the marketing or sale of the product. I am just posting this because I know that there are few to no unit test frameworks out there meeting the questioner's needs (as the answers might suggest).
Testing units using SIL and PIL is also supported. Unfortunately I am not familiar with Hudson, so I cannot address this part of the question.
I've seen different solutions to the problem of unit testing Simulink models: Simulink Verification & Validation, which did not support the xUnit concepts of test runners and test suites at the time of examination, and TPT, which is overloaded with functionality, not easy to use, and very hard in terms of changeability and maintainability.
Furthermore I've seen custom solutions with Matlab scripts and Excel tables which were lightweight but also difficult in terms of understandability and maintainability. I'd still not recommend using any of these approaches, at least not for unit testing.
In the end, we ended up using a C unit testing framework (CUnit) testing the generated code. While this definitely has the disadvantage that you have to generate code before testing, it also has a lot of advantages, like easy integration into CI systems, high flexibility of writing unit tests, fast execution of unit tests and last but not least refactoring capabilities in terms of switching from Simulink to another model-based environment or to hand-written code. Especially the last point should not be underestimated since I have seen many Simulink models that should have been hand-written modules in the first place. Nowadays, I'd recommend using GoogleTest instead of CUnit.

What's a good unit test framework for Common Lisp projects?

I need to write a unit test suite for a project I am developing in my spare time. Being a CL newbie I was overwhelmed by the amount of choices for a CL implementation, I spent quite some time to choose one. Now I am facing exactly the same thing with unit test frameworks.
A quick glance at http://www.cliki.net/test%20framework shows 20 unit test frameworks! Choice is good but for a novice like me this can be a bit confusing and given the number of frameworks it would be painful to try them all.
I would like to use a framework which:
Is reasonably well maintained
Easy to use but with some degree of flexibility
Offers some sort of integration with Emacs (or it is possible to easily integrate it with Emacs)
It is possible to integrate it with git post-commit hooks
It is possible to integrate it with a continuous integration system (such as buildbot)
What are your experiences in this field?
Did you see the link to http://aperiodic.net/phil/archives/Geekery/notes-on-lisp-testing-frameworks.html from the "Test framework comparison" link on the cliki page you mention? Phil gives his impressions and shows what it looks like to use the various test frameworks.
I personally prefer lisp-unit. It's simple to use and has most of the common types of tests.
http://www.cliki.net/lisp-unit
http://repo.or.cz/w/lisp-unit.git/blob_plain/master:/documentation/lisp-unit.html
I don't think it has any integration with post-commit hooks or buildbot built in.

Unit/integration testing Asterisk configuration

Unit and integration testing is usually performed as part of a development process, of course. I'm looking for ways to use this methodology in configuration of an existing system, in this case the Asterisk soft PBX.
In the case of Asterisk, the configuration file is as much a programming language as anything else, complete with loops, jumps, conditionals, etc., and can get quite complex. Changes to the configuration often suffer from the same problems as changes to a complex software product - it can be hard to foresee all the effects without tests in place. It's made worse by the fact that the nature of the system is to communicate with external entities, i.e. make phone calls.
I have a few ideas about testing the system using call files (to create specific calls between extensions) while watching the manager interface for generated events. A test could then watch for an expected result, i.e. dialling *99# should result in the Voicemail application getting called.
The flaws are obvious - it doesn't test the actual result, only what the system thinks is the result, and it probably requires some modification of the system under test. It's also really hard to write these tests robustly enough to only trigger on the expected output, especially if the system is in use (i.e. there are other calls in progress).
Is what I want, a testing system for Asterisk, impossible? If not, do you have any ideas about ways to go about this in a reasonable manner? I'm willing to put a fair amount of development time into this and release the result under a friendly license, but I'm unsure about the best way to approach it.
This is obviously an old question, so there's a good chance that, when the original answers were posted, Asterisk did not support unit / integration testing to the extent that it does today (although the Unit Test Framework API went in on 12/22/09, so that, at least, did exist).
The unit testing framework (David's e-mail from the dev list here) lets you execute unit tests directly within Asterisk. Tests are registered with the framework and can be executed / viewed through the CLI. Since this is all part of Asterisk, the tests are compiled into the executable. You do have to configure Asterisk with the --enable-dev-mode option, and mark the tests for compilation using the menuselect tool (some applications, like app_voicemail, automatically register tests - but they're the minority).
Writing unit tests is fairly straight-forward - and while it (obviously) isn't as fully featured as a commercial unit test framework, it gets the job done and can be enhanced as needed.
That most likely isn't what the majority of Asterisk users are going to want to use - although Asterisk developers are highly encouraged to check it out. Both users and developers are probably interested in integration tests, which the Asterisk Test Suite provides. At its core, the Test Suite is a Python script that executes other scripts, be they Lua, Python, etc. The Test Suite comes with a set of Python and Lua libraries that help to orchestrate and execute multiple Asterisk instances. Test writers can use third-party applications such as SIPp, Asterisk interfaces (AMI, AGI), or a combination thereof to test the hosted Asterisk instance(s).
There are close to 200 tests now in the Test Suite, with more being added on a fairly regular basis. You could obviously write your own tests that exercise your Asterisk configuration and have them managed by the Test Suite - if they're generic enough, you could submit them for inclusion in the Test Suite as well.
Note that the Test Suite can be a bit tricky to set up - Leif wrote a good blog post on setting up the Test Suite here.
Well, it depends on what you are testing. There are a lot of ways to handle this sort of thing. My preference is to use Asterisk call files bundled with dialplan code, e.g.: create a call file to dial some public number; once it is answered, hop back to the specified dialplan context and perform all of my testing logic (play sound files, listen for keypresses, etc.).
I wrote an Asterisk call file library which makes this sort of testing EXTREMELY easy. It has a lot of documentation / examples too, check it out here: http://pycall.org/. That may help you.
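For instance, the canonical pycall usage looks roughly like this (the channel, application, and sound file below are placeholders):

from pycall import Application, Call, CallFile

# Originate a call, then run a dialplan application once it is answered.
call = Call('SIP/flowroute/18882223333')
action = Application('Playback', 'hello-world')
CallFile(call, action).spool()  # writes into Asterisk's outgoing spool dir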
Good luck!
You could create a set of specific scenarios and use Asterisk's MixMonitor command to record these calls. This would enable you to establish a set of sound recordings that are normative for your system for these tests, and use an automated sound file comparison tool (perhaps something from comparing-sound-files-if-not-completely-identical?) to examine the results. Just an idea.
Unit testing, as opposed to integration testing, means your code is supposed to be architected so the logic itself is insulated from external dependencies. You said "the configuration file is as much a programming language as anything else", but that's the thing: real languages have not just control flow but abstraction capabilities, which allow you to write the logic in a way that can be unit tested. That's why I keep logic outside of Asterisk as much as possible.
For integration testing, script linphonec to drive your application, and grep the Asterisk console to see what it's doing.
You can use Docker and fire up temporary Asterisk instances for each test.

How can we decide which testing method can be used?

I have a project in .NET that I want to test, but I don't know anything about testing and its methods.
How can I go ahead with testing?
Which method is better for me to begin with?
Is there anything that helps decide which testing method should be used for the best result?
There is no "right" or "wrong" in testing. Testing is an art and what you should choose and how well it works out for you depends a lot from project to project and your experience.
But as a professional Tester Expert my suggestion is that you have a healthy mix of automated and manual testing.
AUTOMATED TESTING
Unit Testing
Use NUnit to test your classes, functions, and the interaction between them.
http://www.nunit.org/index.php
Automated Functional Testing
If it's possible you should automate a lot of the functional testing. Some frameworks have functional testing built into them; otherwise you have to use a tool for it. If you are developing web sites/applications, you might want to look at Selenium.
http://www.peterkrantz.com/2005/selenium-for-aspnet/
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to treat as pass/fail. A human can use intelligence to find faults and raise questions that appear while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and fun!
http://www.satisfice.com/articles/et-article.pdf
Since it is not clear what the scale of your project is, all you need to do is make sure:
Your tests are trustworthy - you should know they are telling you the truth.
They are repeatable.
They are consistent - if you repeat a test with the same test data, it should produce the same output.
They prove you are covering all the problem areas.
To get this you can use:
The standard way: NUnit, MbUnit (my favourite), xUnit (haven't got around to working with it), or MSTest
Quick and dirty: a console app (not cool, not so flexible)
If you are using .NET, I'd recommend checking out NUnit. It's a great testing framework to use.
As far as learning about the "testing method", there are many different ways to test an application. When using a tool like NUnit, for example, you are writing automated tests which run without user interaction. In these types of tests, you typically write tests for each of the public methods in your application, and you ensure that given known inputs, these methods produce the expected outputs. Over time as the application changes (via enhancements, bug fixes, etc.) you have a core set of tests that you can re-run to ensure nothing breaks as a result of the changes. You can also do failure testing to ensure that given an invalid set of inputs to a method, it throws the proper exceptions, etc.
Besides automated testing with a tool like NUnit, it's also important to ensure that your end users test the product. "End users" here could be a Quality Assurance group in your company, or it could be the actual customer. The point is that you need to ensure that someone actually uses your application to make sure it works as expected, because no matter how good the automated tests are, there will still be many things you won't think of that your users will discover. One way to approach this type of testing is to write test scenarios, and have your users execute them to make sure the scenario results in the correct behavior.
I think the best testing approach combines both of the above, namely automated testing and user testing (with documented test scenarios).