Exporting Sikuli unit test data as a report - unit-testing

Is there an automated tool to generate reports containing information about unit tests when using Sikuli? The data I want would be things such as pass/fail status, a trace to where/why a test failed, and a log of events.

Ended up using the HTMLTestRunner tool; it was far easier than anything else I found and met the criteria I needed. (There is also an XML version of this, XMLTestRunner.)
HTMLTestRunner is an extension to the Python standard library's unittest module. It generates easy-to-use HTML test reports.
http://tungwaiyip.info/software/HTMLTestRunner.html
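For reference, here is a minimal sketch of how the report generation can be wired up (assuming the HTMLTestRunner module from the link above is on your path; the output file name, title, and start directory are just examples):
import unittest
import HTMLTestRunner  # single-module download from the link above

# Discover the unittest-based Sikuli test cases (adjust the start directory)
suite = unittest.TestLoader().discover('.')

# HTMLTestRunner writes its report to any file-like stream
with open('test_report.html', 'wb') as report:
    runner = HTMLTestRunner.HTMLTestRunner(
        stream=report,
        title='Sikuli test results',
        description='Pass/fail status and tracebacks for each test',
    )
    runner.run(suite)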
Another useful tool I found was Robot Framework, which can be integrated with Sikuli, but it is more complicated and requires a lot of research and reading of documentation.

Related

Nosetests and finding out all possible library functions

I have recently been getting back into Python and I am a bit rusty. I have started working with the testing framework nose, and I am trying to find out what functions are available for use with this framework.
For example, when I used RSpec in Ruby, if I wanted to find out what "options" I had available when writing a test case, I would simply go to the link below and browse the doco until I found what I needed:
https://www.relishapp.com/rspec/rspec-expectations/docs/built-in-matchers/comparison-matchers
Now when I try to do the same for nose, Google keeps sending me to:
https://nose.readthedocs.io/en/latest/writing_tests.html#test-functions
Although the doco is informative, it's not really what I'm looking for.
Is there a Python command I can use to find the available testing options, or another place where good, up-to-date documentation is stored?
All the assertions nose/unittest provides should be documented here:
https://docs.python.org/2.7/library/unittest.html
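As a quick illustration, nose also re-exports the unittest assertions under PEP 8 style names via nose.tools (a minimal sketch; the test function is made up):
from nose.tools import assert_equal, assert_true, assert_raises

def test_basic_arithmetic():
    # nose collects module-level functions whose names start with "test"
    assert_equal(2 + 2, 4)
    assert_true(isinstance(4, int))
    # assertRaises-style checks work as context managers on Python 2.7+
    with assert_raises(ZeroDivisionError):
        1 / 0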
In addition to the docs, the code will always tell the truth. You could check out the library source code, or drop into a debugger inside your test method:
import pdb; pdb.set_trace()
Then inspect the test object for the available assertions:
dir(self)
help(unittest.skip)
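Putting those together, one way to dump every assertion available on a test case (a small self-contained sketch; the class and test names are made up):
import unittest

class ProbeTest(unittest.TestCase):
    def test_list_available_assertions(self):
        # Every assert* method inherited from unittest.TestCase
        for name in sorted(dir(self)):
            if name.startswith('assert'):
                print(name)

if __name__ == '__main__':
    unittest.main()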

xUnit export support in PLUnit

I'm using the plunit package for my Prolog unit tests (SWI-Prolog 7.2).
run_tests/0 prints the results on the console, but I would like to export plunit test results in the xUnit XML format that most CI servers understand. Is there any way to do this?
I assume that the SWI-Prolog plunit tool uses the message printing mechanism to generate its output. If so, you should be able to export unit test results in the xUnit XML format (or any other format) by intercepting those messages using the message_hook/3 predicate:
http://www.swi-prolog.org/pldoc/doc_for?object=message_hook/3
By coincidence, I'm working on similar support (for exporting test results) for Logtalk's unit testing tool, lgtunit. It should give you an idea of how to do it for plunit. I committed a preliminary version today:
https://github.com/LogtalkDotOrg/logtalk3/blob/master/tools/lgtunit/NOTES.md
https://github.com/LogtalkDotOrg/logtalk3/blob/master/tools/lgtunit/xunit_xml_report.lgt
In my case, this support is targeted at integration with the Concourse CI server. There seems to be, unfortunately, a lack of definitive information on the xUnit XML format, with sources quoting different versions of e.g. which attributes are required or optional. I did find an XSD for the format, but I have no idea of its accuracy or whether it's just another variation:
https://gist.github.com/erikd/4192748
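For orientation, the shape most CI servers accept looks roughly like the following (a hand-written sketch of the common JUnit/xUnit dialect; the suite and test names are invented, and as noted above the exact attribute set varies between consumers):
<testsuite name="plunit" tests="2" failures="1" errors="0" time="0.012">
  <testcase classname="plunit.lists" name="append_base_case" time="0.001"/>
  <testcase classname="plunit.lists" name="append_general_case" time="0.002">
    <failure message="test failed">goal failed</failure>
  </testcase>
</testsuite>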

Need a recommendation to use a testing platform

We are working on a billing system (a Java-based module) and would like a testing framework for it. The framework should be adaptable to any type of billing scenario (e.g. utility bill payments, water/electricity, or any other kind of billing). Billing entities normally have common attributes such as customer name, usage, etc. I would like to pick a suitable testing platform to test our billing module.
It can be opensource/licensed software.
Can anybody suggest such a framework/engine?
If you want to go with an open source tool, I'd recommend Selenium WebDriver with the TestNG framework. You can find a lot of documentation and help on the web.
You can go for Cucumber for describing various test scenarios (and their dependencies).
These scenarios are backed by JUnit for gluing the descriptions to executable code. JUnit will typically be used to write low-level tests as well (for use by developers).
Cucumber has the benefit of giving you reports, and it can serve well as a basis for discussion with users and their representatives.
I would suggest the following, based on the little description you have given of your needs:
1 - For functional testing and to automate all your scenarios, use Selenium WebDriver.
2 - If you then want to prioritize your tests, run tests through XML, or run multiple tests at once, use the TestNG framework.
Both are completely open source tools, and you get the real benefit of them through scripting. They let you script in programming languages such as Java, Python, Ruby, and a few more; you will find the details in each tool's documentation.
Given your requirements, I think the above two tools are enough to automate everything for testing.

Is there a good way to get a Code Coverage Report for Apex Code?

We're developing a lot of enhancements to Salesforce using Visualforce and Apex as part of a larger system. As part of our quality metrics, we have to provide a report to management on our code coverage.
I'd like to get a report similar to the one produced by Run All Tests in the Force.com IDE but in HTML so I can display it easily via a web interface.
For the rest of our system we use Sonar http://www.sonarsource.org/ to produce the reports.
Does anybody know the best approach to this?
I've explored the API documentation, but I am unable to find out whether the coverage percentages are stored against the classes, so querying them doesn't seem to be an option.
Any help or pointers would be greatly appreciated.
If you run the Apex tests yourself via the API, there are objects returned indicating which lines haven't been covered by the tests in that run. You can run the tests via either the synchronous or the asynchronous methods.
You can then use that data to create a report in whatever format you require. For example, I've used it to create a basic report in the FuseIT SFDC Explorer (Windows-based and free), which just dumps out the line ranges that aren't covered.
You will probably want to run all the tests in a single run to get the complete code coverage across all tests. In the report I described above I had only run one out of a much greater number of test classes, so the code coverage appeared much lower than the cumulative tests would give. It does, however, show which lines an individual test class reaches.
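If querying rather than running tests suits your workflow better, note that newer API versions also expose aggregate coverage through the Tooling API object ApexCodeCoverageAggregate. A rough Python sketch (the instance URL, session ID, and API version are placeholders; verify the object and its fields against your org's API version):
import requests

INSTANCE = 'https://yourInstance.salesforce.com'  # placeholder org URL
SESSION_ID = 'your-session-id'                    # placeholder, e.g. from OAuth

# Aggregate covered/uncovered line counts per class or trigger
query = ('SELECT ApexClassOrTrigger.Name, NumLinesCovered, NumLinesUncovered '
         'FROM ApexCodeCoverageAggregate')
resp = requests.get(
    INSTANCE + '/services/data/v29.0/tooling/query/',
    params={'q': query},
    headers={'Authorization': 'Bearer ' + SESSION_ID},
)
resp.raise_for_status()
for rec in resp.json()['records']:
    covered = rec['NumLinesCovered']
    total = covered + rec['NumLinesUncovered']
    pct = 100.0 * covered / total if total else 0.0
    print('%-40s %5.1f%%' % (rec['ApexClassOrTrigger']['Name'], pct))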
I've also heard good things about MavensMate for the Sublime Text Editor. Being open source you should be able to find how it integrates with the testing api and then generates the reports.

Unit testing/continuous integration with Simulink/Stateflow

How can I perform unit testing in Simulink, or preferably, Stateflow?
I'm a fan of agile software methods, including test driven development. I'm responsible for the development of safety critical control software and we're using Matlab/Simulink/Stateflow for the development of it. This toolset is selected because of the link with plant (hardware) models. (model-in-the-loop, hardware-in-the-loop)
I have found some links on Stack Overflow: Unit-testing framework for MATLAB (xunit, slunit, and doctest).
Does anyone have experience in using those or different unit test frameworks?
How can I link this to continuous integration systems (e.g. Hudson)?
EDIT: This is now much easier and getting easier all the time with the Jenkins plugin for MATLAB
ORIGINAL ANSWER:
As Craig mentioned, there is indeed a framework in MATLAB, introduced in R2013a. Furthermore, this framework added a TAPPlugin in R2014a, which outputs the Test Anything Protocol. Using that protocol, you can set up your CI build with a TAP plugin (e.g. in Jenkins or TeamCity) so that the CI system can fail the build if the tests fail.
Your CI build may look like a shell command to start MATLAB and run all your tests:
/your/path/to/matlab/bin/matlab -nosplash -nodisplay -nodesktop -r "runAllMyTests"
Then runAllMyTests creates the suite to run and runs it, with the TAP output redirected to a file. You'll need to tweak the specifics here, but perhaps this can help you get started:
function runAllMyTests
import matlab.unittest.TestSuite;
import matlab.unittest.TestRunner;
import matlab.unittest.plugins.TAPPlugin;
import matlab.unittest.plugins.ToFile;

try
    % Create the suite and runner
    suite = TestSuite.fromPackage('packageThatContainsTests', 'IncludingSubpackages', true);
    runner = TestRunner.withTextOutput;

    % Add the TAPPlugin, directed to a file in the Jenkins workspace
    tapFile = fullfile(getenv('WORKSPACE'), 'testResults.tap');
    runner.addPlugin(TAPPlugin.producingOriginalFormat(ToFile(tapFile)));

    runner.run(suite);
catch e
    % Show the error and fail the build with a nonzero exit status
    disp(e.getReport);
    exit(1);
end
exit;
EDIT: I used this topic for the first two posts of a new developer-oriented blog launched this year.
Unit testing Simulink is not straightforward, unfortunately. MathWorks has SystemTest. Alternatively, you can roll your own Simulink testing framework, which is the approach we've followed and is not too difficult, but you may need to build test harnesses programmatically.
In order to integrate with CI, you need to create a function/script that executes all the tests; then you can use MATLAB's command-line parameters to run that script on start-up. I'm not sure anyone has a good way of integrating the test reports with the CI software, though. Just look at the number of comments on Unit-testing framework for MATLAB.
With R2015a, MATLAB introduced a new product named "Simulink Test". Perhaps that'll simplify this mess.
http://www.mathworks.com/products/simulink-test/features.html#manage-test-plans-and-test-execution
R2016b introduces integration between Simulink Test and the MATLAB Unit Testing framework. Tests created with Simulink Test using the Test Manager (*.mldatx) are recognized by, and can be run natively with, the MATLAB unit test runner, so you can generate JUnit-style XML or TAP test results, facilitating continuous integration workflows.
See this reference for more information: https://www.mathworks.com/help/sltest/ug/run-test-files-using-matlab-unit-test.html?s_tid=gn_loc_drop
The documentation shows an example of producing TAP results using matlab.unittest.plugins.TAPPlugin, but you can just as easily use XMLPlugin (https://www.mathworks.com/help/matlab/ref/matlab.unittest.plugins.xmlplugin-class.html) instead.
This should open up better integration within the MATLAB environment itself, even without CI in the picture, since MATLAB and Simulink tests can live in the same test suite and run together seamlessly. For example, if you have a directory MYDIR containing both native MATLAB unit tests and Simulink tests, you can do something as simple as the following to execute both kinds of tests in one shot:
results = runtests(MYDIR)
If your system is complex, you should decompose it using Model Reference and test each of the referenced models independently.
Another, more "old school", solution is to put your main blocks in a library and create small test models around them.
To test these submodels, especially those containing a state machine (Stateflow), the best approach is to create temporal test cases with the Signal Builder block. The powerful signalbuilder function lets you interact with this block and load test cases. My method is to have, for each case of each submodel, an input file and an output file. The outputs of the model are the "correct" (expected) outputs plus the ones produced by the blocks. The model is run with sim (no external inputs), and the two outputs are compared with a script that indicates which signals differ (and when).
You could use an existing framework, but I prefer my own so I can run each case (or just some of them).
I don't have any public code for this, but that's the way I work. I don't use a CI system, so I can't answer the second part of your question.
MATLAB (since R2013b) has built-in support for xUnit-style testing, in the form of the Unit Testing Framework.
I haven't used it, but since it's possible to run Simulink from MATLAB with sim(), this framework can be used to test your Simulink models. Your libraries, and possibly your models, will need a wrapper to execute them, as the other answerers have noted.
There are plenty of examples on the MathWorks site; unfortunately, none of them runs Simulink models. I'd code an example for you, but I don't have ML2013b :-(
To initiate your tests from a CI system (I use Jenkins), you can call MATLAB to run a .m file that runs your test suite. This example cmd script will call Run_Tests.m from MATLAB:
IF EXIST "C:\Program Files (x86)\MATLAB\R2013b\bin\win32\matlab.exe" (
REM WinXP
"C:\Program Files (x86)\MATLAB\R2013b\bin\win32\matlab.exe" -r "Run_Tests;exit" -logfile matlab.log
) ELSE (
REM Win7
"C:\Program Files\MATLAB\R2013b\bin\win32\matlab.exe" -r "Run_Tests;exit" -logfile matlab.log
)
Note that if a startup.m exists in the directory that you call MATLAB from, it'll be executed automatically before Run_Tests.m.
I think you are searching for something like EZTEST. It is intended for exactly your purpose: test-driven development for Simulink and Stateflow at the unit level. For safety-critical software, a Safety Manual is also included, which describes what is covered regarding ISO 26262.
Edit: I am a developer of this software, so my opinion may be biased. But I am not involved in the marketing or sale of the product. I am just posting this because I know there are few, if any, unit test frameworks out there that meet the questioner's needs (as the other answers suggest).
Testing units using SIL and PIL is also supported. Unfortunately, I am not familiar with Hudson, so I cannot address that part of the question.
I've seen different solutions to the problem of unit testing Simulink models: Simulink Verification & Validation, which did not support the xUnit concepts of test runners and test suites at the time I examined it, and TPT, which is overloaded with functionality, not easy to use, and very hard in terms of changeability and maintainability.
I've also seen custom solutions built from MATLAB scripts and Excel tables, which were lightweight but likewise difficult in terms of understandability and maintainability. I'd still not recommend any of these approaches, at least not for unit testing.
In the end, we settled on a C unit testing framework (CUnit) and tested the generated code. While this definitely has the disadvantage that you have to generate code before testing, it also has a lot of advantages: easy integration into CI systems, great flexibility in writing unit tests, fast test execution, and, last but not least, refactoring headroom if you ever switch from Simulink to another model-based environment or to hand-written code. That last point should not be underestimated, since I have seen many Simulink models that should have been hand-written modules in the first place. Nowadays, I'd recommend GoogleTest instead of CUnit.