How to list available test classes

Is there a way to get Gradle (1.12) to list all of the available unit test classes in a project?
I'm considering putting a front-end on a series of tests we use in my company, and since new tests are always being added, I need a way to get a list of available tests.
I realize that I could scan the actual project for classes that reside in the test sources tree, but I was hoping for something easily parsed from Gradle. I just don't know if that's really an option and I'm having trouble getting decent search results since "test" is such a generic word.
Any help would be appreciated.

There is no official API in Gradle to expose this information. You can check whether ClassScanner.java is what you need: have a look at the Gradle sources; it is also used in EclipseTestExecuter.java. Keep in mind that it is an implementation detail.
A simpler approach is to run the tests with logging enabled so that the name of each executed test is printed. There is an example of how to do this in the Gradle documentation, I think.
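For example, a minimal sketch in Gradle's Groovy DSL (beforeTest is a standard hook on the test task; the output format here is just an assumption):

```groovy
// build.gradle -- print the name of every test as it executes
test {
    beforeTest { descriptor ->
        // descriptor is an org.gradle.api.tasks.testing.TestDescriptor
        logger.lifecycle("TEST: ${descriptor.className} > ${descriptor.name}")
    }
}
```

Running gradle test then emits one easily parsed line per test method. The drawback is that the tests actually have to run; as far as I know, Gradle 1.12 has no dry-run mode that merely lists them.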

Related

Unit testing Modelica component library?

I'm creating a library of components in Modelica, and would appreciate some input on techniques for unit testing the package.
So far I have a test package, consisting of a set of models, one per component. Each test model instantiates a component, and connects it to some very simple helper classes that provide the necessary inputs and outputs.
This works fine when using it interactively in the OMEditor, but I'm looking for a more automated solution with pass/fail criteria etc.
Should I start writing .mos scripts, or is there another/better way?
Thanks.
I like how OpenModelica testing results look; see:
https://test.openmodelica.org/libraries/MSL_3.2.1/BuildModelRecursive.html
click on a red cell: https://test.openmodelica.org/libraries/MSL_3.2.1/files/Modelica.Electrical.Analog.Examples.AD_DA_conversion.diff.html
choose "javascript" for a failing signal: https://test.openmodelica.org/libraries/MSL_3.2.1/files/Modelica.Electrical.Analog.Examples.AD_DA_conversion.diff.resistor.v.html
No idea how they are doing it, though. Obviously some kind of regression testing is done, with previous results stored, but no idea if that is from some testing library or self-made.
In general, I find it kinda sad/suboptimal that there isn't "the one" testing solution everybody can/should use (cf. e.g. nose or pytest in the Python ecosystem). Instead, everybody seems to cook up their own solution (or tries to), and all you find is Modelica conference papers (often without a trace of an implementation) or unmaintained libraries of unknown status.
Off the top of my head, I found/know of (some already linked in other answers here)
OM testing
JModelica testing (seems to only test for compiler errors?)
Xogeny test (Some tests of the library itself fail for me. Also, does not seem to include a test runner)
MoUnit (something by Fraunhofer, and not publicly available - maybe in OneWind/OneModelica?)
UnitTesting (apparently some kind of predecessor of XogenyTest. Also, no sources/implementation found)
Optimica Testing Toolkit (apparently a commercial product by Modelon)
SystemModeler VerificationTest
buildingspy Python package, for regression testing among other things. Under the umbrella of the Berkeley Modelica Buildings Library. (Simulation only with Dymola)
Modelica_Requirements library -- define requirements for simulation. (claimed to be open source and implemented, but apparently not available anywhere)
... I'm sure there are more I have forgotten or am not aware of
This seems like a pathological instance of https://xkcd.com/927/. It's kinda impossible for a (non-dev) user to know which of those to choose, which are actually good/usable/available/...
(Not real testing, but also relevant: parsing and semantic analysis using ANTLR: modelica.org/events/Conference2003/papers/h31_parser_Tiller.pdf)
Writing a .mos script would be one way, but there is also a small proof-of-concept library by Michael Tiller, XogenyTest, which you could use as a basis.
I prefer using the .mos script; it works pretty well when you further integrate your test framework into a continuous integration tool. BuildingsPy is a good example of this: though it isn't integrated into CI tools out of the box, it's still a good tool.
Here's a reference for a good framework design:
UnitTesting: A Library for Modelica Unit Testing
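If you do go the .mos route, a minimal pass/fail script for OpenModelica's scripting API could look like the sketch below (the library, test model, and checked signal are placeholder names):

```modelica
// runTests.mos -- load the library, simulate one test model, check a signal
loadModel(Modelica); getErrorString();
loadFile("MyLib/package.mo"); getErrorString();

simulate(MyLib.Tests.MyComponentTest, stopTime=1.0); getErrorString();

// crude pass/fail criterion: compare a signal's end value to a reference
vEnd := val(component.v, 1.0);
print(if abs(vEnd - 0.9) < 0.01 then "PASS\n" else "FAIL\n");
```

Run it with omc runTests.mos; a CI job can then simply grep the output for FAIL.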
If you have Mathematica and SystemModeler, you can run the simulation from Mathematica and use the VerificationTest function to test:
VerificationTest[Abs[WSMSimulate["HelloWorld"]["x", .1] - .90] < .01]
Multiple tests can then be combined in a TestReport[].

Automated QA for a single freelance developer?

I have been developing an application in my free time using Qt.
As the size of the code base increases, I am finding it difficult to avoid introducing new bugs into older code. I have been testing my application manually.
Since the target is an exe, I cannot run automated C++ tests against it without injecting some extra code into my application.
So my question is: what is the best QA technique for a GUI application if you are a single developer and won't be earning money from the project, as it will be released for free?
Thank You.
EDIT:
I would like to have a set of simple tests, each testing a specific piece of functionality of my software. I would like them to run automatically, one after another, and finally produce a report of which tests failed. This could possibly be done by creating new functions in the same classes, adding some checks to the existing functions I want to test, and then creating a new class holding all the tests. So I wanted to know: is this the best way, or is there a better alternative? Because every time I build a release target, I would be commenting out or deleting this QA code, which may itself introduce bugs into that build.
Currently I am not worried about documentation & comments as I have maintained that from the beginning. It is only about source code QA.
Unit tests by the book will only give you assurance for your methods, not for the entire application. But you can also use the same unit-test framework to write acceptance tests for specific capabilities of the application.
The easiest way to go would be to extract the GUI from the application and make the GUI depend on an API/library. The API will make it easy to write functional tests. Be sure to make the GUI as thin as possible.
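A sketch of that shape (AppCore is a hypothetical name for the extracted library; the point is that no widget code appears in it):

```cpp
// appcore.h -- all application logic lives in a plain library class
#pragma once
#include <string>

class AppCore {
public:
    // Example capability the GUI exposes; returns false on invalid input.
    bool importFile(const std::string& path);
};

// The GUI layer then only forwards, e.g.:
//   void MainWindow::onImportClicked() { core.importFile(chosenPath); }
// so functional tests can drive AppCore directly, without the exe.
```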
I wouldn't add test code to your classes and remove it for release; I think this is as risky as shipping with the test code. You're better off with separate source files, as already advised here.
If your project is getting large enough, you'll probably want to create some unit tests for it (I like the free CppUnit library, which is similar to JUnit; also Jo Are By suggested QtTest, which presumably is available with Qt).
Even if you have to make some changes to your production code, it will be worth your time in the end.
You may also wish to look into automated GUI testing frameworks for Qt applications; I'm not familiar with any of these.
Test code goes into its own source files.
You may split your exe into a library and one main.cpp that simply calls into the library.
That way, you can use any unit-test framework together with extra test files to build an executable that tests only your library.
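Since you're on Qt, QtTest is already available for exactly this. A minimal sketch of such a separate test executable (the class and header names are hypothetical):

```cpp
// test_appcore.cpp -- built as its own target, linking your app library
#include <QtTest>
#include "appcore.h"  // hypothetical header from your application library

class TestAppCore : public QObject {
    Q_OBJECT
private slots:
    void importRejectsMissingFile() {
        AppCore core;
        QVERIFY(!core.importFile("no/such/file"));
    }
};

QTEST_APPLESS_MAIN(TestAppCore)
#include "test_appcore.moc"
```

Each QtTest executable prints its results and returns nonzero on failure, so a simple script can chain several of them and collect the pass/fail report you described.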
For code testing you can use a JUnit-style unit-test framework; for C++ that would be something like CppUnit.
For GUI testing, much of the work is still manual: checking that the GUI is complete, that images and text display clearly, and that nothing is missing is hard to cover with automation.

JUnit: changing sequence of test running

I have a big mess of 100 tests in one class, and I run all of them by clicking "Test project (...)". They run in a random order, and I would like them to run in a specific order: from beginning to end, in the same order I wrote them. In Eclipse this is not a problem, because Eclipse just works that way; how do I do it in NetBeans?
Any help will be appreciated.
Edit (due to answers): the test order is only needed for the clarity of the log. The tests are independent.
If your tests need to run in a specific order, something is wrong with your design.
Two tests that need to run one after another are really one test. Consider this before searching for a solution.
Check this: https://blogs.oracle.com/mindless/entry/controlling_the_order_of_junit
Having tests depend on other tests is, 99.9% of the time, a very bad idea. Unit tests should be independent from each other; otherwise you might get a cascade of errors, or (even worse) one test failing because of something another test did some time before.
If you still want to go through this pain, you'll need to use a different unit-testing framework which supports test dependencies, such as TestNG (see dependsOnMethods).
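For reference, this is roughly what such a dependency looks like in TestNG (a sketch; the method names are made up):

```java
import org.testng.annotations.Test;

public class UserFlowTest {
    @Test
    public void createUser() { /* ... */ }

    // Runs only after createUser has passed; skipped if createUser fails.
    @Test(dependsOnMethods = "createUser")
    public void deleteUser() { /* ... */ }
}
```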
JUnit doesn't support this feature because it's seen by many as a bad practice (for very good reasons).
The next JUnit release will support ordering of test methods. The standard Maven Surefire Plugin already supports ordering of test methods.
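That support eventually shipped in JUnit 4.11 as @FixMethodOrder; a minimal sketch:

```java
import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

// Runs the test methods in lexicographic name order rather than randomly.
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class OrderedLogTest {
    @Test public void step1_prepareAccount() { /* ... */ }
    @Test public void step2_verifyLog()      { /* ... */ }
}
```

Note that the ordering is by method name, not by declaration order (which reflection does not preserve), so a stepN_ naming convention is the usual workaround.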
NetBeans has good integration with Ant build files. You could write a specific Ant target that executes the tests in order.

Google App Engine + GWT + Eclipse: where do your unit tests live?

I'm just getting started with a project that combines GWT, Google App Engine and the Google Eclipse plugin. Where is the best place to store my tests? I normally keep my code organized Maven-style, with src/main/java, and tests in src/test/java. The default setup I get from the plugin dumped my source directly into src, which I'm not too fond of, but I'd prefer not to fight against the tools. What's the "standard" place to put unit tests in such a project?
Solution:
create src/main/java, move the existing code under there
create src/test/java, add your tests here
go to Project -> Properties -> Java Build Path, add the new locations as Source Folders.
I've faced a problem with GAE testing: some tests require an appengine-testing.jar which conflicts with the main appengine-api-xxx.jar of the project. That way I was able to run tests for GAE, but it conflicted with a normal run/debug launch. To be able to run the app on my local machine, I had to remove the appengine-testing.jar, and then a lot of compilation errors appeared in my test/ classes.
My advice: put your test classes in another project (where you can use the jars without conflict).
Otherwise, if you get it to work, please tell me how you did it.
Thanks a lot.
Put it where it pains you least.
GWT on Google App Engine is pretty new at this point; you are optimistic to expect there is a "standard" place, especially since you've already found an inconsistency in what the tools do.
Since you've already accepted the source starting at "src/", why not put the test source in "test/"? This is certainly standard in many contexts.

Can any IDE or framework help test new code quickly without having to run the whole application

I mainly develop in native C++ on Windows using Visual Studio.
A lot of times, I find myself creating a new function/class or whatever, and I just want to test that piece of logic I just wrote, quickly.
A lot of times, I have to run the entire application, which sometimes could take a while since there are many connected parts.
Is there some sort of tool that will allow me to test that new piece of code quickly without having to run the whole application?
i.e.
Say I have a project with about 1000 files, and I'm adding a new class called Adder. Adder has a method Add( int, int );
I just want the IDE/tool to allow me to test just the Adder class (without me having to create a new project and write a dummy main.cpp) by allowing me to specify the value of the inputs going into Adder object. Likewise, it would be nice if it would allow me to specify the expected output from the tested object.
What would be even cooler is if the IDE/tool would then "record" these sets of inputs/expected outputs and automatically create a unit-test class based on them. If I added more input/output sets, it would keep building up a history of inputs/outputs.
Or how about this: what if I started the actual application, fed some real data to it, and had the IDE/tool capture the complete inputs going into the unit being tested? That way, I could quickly restart my testing if I found some bugs in my program or wanted to change its interface a bit. I think this feature would be really neat and could help developers quickly test and modify their code.
Am I talking about mock object / unit testing that already exists?
Sidenote: it would be cool if the Visual Studio debugger had "replay" technology, where the user can step backwards to find what went wrong. Such a debugger already exists: http://www.totalviewtech.com/
It's very easy to get started with static unit testing in C++ - three lines of code.
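The answer doesn't say which framework it means, but in that minimal spirit a standalone check really is only a few lines; a sketch using plain assert, with Adder borrowed from the question:

```cpp
// test_adder.cpp -- compile this file plus adder.cpp and run the result
#include <cassert>
#include "adder.h"  // the class under test

int main() {
    Adder a;
    assert(a.Add(2, 3) == 5);  // aborts with file/line info on failure
    return 0;
}
```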
VS is a bit poor in that you have to go through wizards to make a project to build and run the tests, so if you have a thousand classes you'd need a thousand projects. For large projects in VS I've therefore tended to organise the code into a few DLLs that can be built and tested independently, rather than one monolithic executable.
An alternative to static tests, more similar to your 'poke and dribble' idea, could be done in Python, using SWIG to bind your code to the interpreter, together with Python's doctests. I haven't used the two together myself. Again, you'd need a separate target to build the Python binding and another to run the tests, rather than it being a simple 'run this class' button.
I would go with Boost.Test (see the tutorial here).
The idea would be to add a new configuration to your project which excludes all unnecessary .cpp files from the build. You would then just add .cpp files describing the tests you want to pass.
I am no expert in this area, but I have used this technique in the past and it works!
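A sketch of what such a test file can look like (using the header-only variant of Boost.Test; Adder is again the hypothetical class from the question):

```cpp
// test_adder.cpp -- compiled only in the testing configuration
#define BOOST_TEST_MODULE AdderTests
#include <boost/test/included/unit_test.hpp>
#include "adder.h"

BOOST_AUTO_TEST_CASE(adds_two_ints) {
    Adder a;
    BOOST_CHECK_EQUAL(a.Add(2, 3), 5);  // reports expected vs. actual on failure
}
```

Defining BOOST_TEST_MODULE makes Boost.Test generate main() for you, so this file alone builds into a runnable test executable.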
I think you are talking about unit testing and mock objects. Here are a couple of C++ mock-object libraries that might be useful:
googlemock, which only works with googletest
mockpp
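To give a flavor of googlemock (a sketch against a current googletest; Database is a hypothetical dependency you fake so the rest of the application doesn't have to run):

```cpp
#include <gmock/gmock.h>
#include <gtest/gtest.h>

// Hypothetical interface the unit under test depends on.
class Database {
public:
    virtual ~Database() = default;
    virtual int lookup(int key) = 0;
};

class MockDatabase : public Database {
public:
    MOCK_METHOD(int, lookup, (int key), (override));
};

TEST(LookupTest, ReturnsStubbedValue) {
    MockDatabase db;
    // Stub the dependency instead of standing up the whole application.
    EXPECT_CALL(db, lookup(42)).WillOnce(testing::Return(7));
    EXPECT_EQ(db.lookup(42), 7);
}

int main(int argc, char** argv) {
    ::testing::InitGoogleMock(&argc, argv);
    return RUN_ALL_TESTS();
}
```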
You are essentially asking how can I test one function instead of the whole application. That is what unit-testing is, and you will find many questions about unit-testing C++ on SO.