Best way to create automated testing in a Java environment - unit-testing

I have been tasked with finding the best way to do integrated unit testing. We have a very large Java EE 5 application (desktop). Right now we use a tool called QF-Test, which is pretty cumbersome for large tests and brittle: any code change can easily break the tests.
We now want to do something that is more standard and gives the developers more control.
I have read a few posts here:
Unit testing in Java EE environment
Automated Testing - kind of cool, although for .Net
Best practice approach for automated testing
easiest Automated testing tool in Java
From the general information I have read, JUnit/JUnitEE is probably the best option (by best I mean quickest to learn and possibly the Java standard).
Is JUnit the way to go for large Java EE applications? What are some other options that others find better (if there are any)?
Thanks!

That's a very broad question. It's the topic of many books. Start with JUnit, start reading about test-driven design/development (TDD), and build from there. Ask more specific questions as you come across them. You could start with "Test Infected", a rather old, yet still applicable article on the JUnit site.
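To make that concrete, here is roughly what a first JUnit 4 test looks like. OrderCalculator and its methods are hypothetical stand-ins for one of your own classes, so treat this as a sketch rather than working code for your project:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A minimal JUnit 4 test: one test method, one assertion.
// OrderCalculator is a hypothetical class used only for illustration.
public class OrderCalculatorTest {

    @Test
    public void totalSumsLinePrices() {
        OrderCalculator calc = new OrderCalculator();
        calc.addLine("widget", 2, 300);   // quantity 2, unit price 300
        calc.addLine("gadget", 1, 150);   // quantity 1, unit price 150
        assertEquals(750, calc.total());  // 2*300 + 1*150
    }
}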

For Java I would go with either the JUnit or the TestNG framework. If the application involves persistence/a database, I would add DbUnit to the mix.
If you have build scripts in Maven, Ant, or Gradle, I would also suggest looking into Jenkins and similar tools to automate builds.
I am suggesting Maven because it has lifecycle phases for testing; you can add test targets with Ant as well. I have not used Gradle, but if I were given the choice, I would choose Gradle.
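For comparison, a TestNG test looks almost identical to a JUnit one; the annotations and assertion imports are the main visible difference. UserRegistry is a hypothetical example class:

import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;
import static org.testng.Assert.assertEquals;

// The same style of test in TestNG. Note that TestNG's assertEquals
// takes (actual, expected), the reverse of JUnit's argument order.
public class UserRegistryTest {

    private UserRegistry registry;

    @BeforeMethod
    public void setUp() {
        registry = new UserRegistry();   // fresh fixture for every test method
    }

    @Test
    public void addedUserCanBeFound() {
        registry.add("alice");
        assertEquals(registry.count(), 1);
    }
}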

JUnit is not "the way to go" for large Java EE applications; it is the bare minimum.
But since you already have an application, writing JUnit tests after the fact will not help you much, because JUnit is not really a testing tool: it is used for TDD and refactoring safety, and writing JUnit tests as an afterthought simply does not work.
That said, you should learn JUnit 4 as soon as possible (it can be done in 60 seconds),
then learn about TDD (it can be done in 2-3 hours),
and use JUnit for every future code change you make to your application.
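As a sketch of that TDD rhythm (PriceFormatter is hypothetical): the test below is written before the class exists, so it fails first; you then write just enough code to make it pass, and refactor with the test as a safety net.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Written FIRST, before PriceFormatter exists (red). Then implement the
// simplest PriceFormatter that passes (green), then refactor.
public class PriceFormatterTest {

    @Test
    public void formatsCentsAsEuros() {
        assertEquals("1,50 EUR", new PriceFormatter().format(150));
    }
}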
As for the code you already have, what you need are code analyzers:
tools that check your source code for coding-standard violations and code duplication, and that do code coverage, dependency analysis, and complexity monitoring.
(All of these and more exist as Eclipse plugins.)
You will probably also need a Java GUI testing framework (to replace QF-Test).
There are a lot of open-source alternatives; search the web.


Common questions TDD with Mock

I don't know how to use TDD in C++ projects, but I decided to use the Google Mock framework for a start.
But I have one question:
When I finish testing, do I have to clean my code of TDD macros, classes, etc.?
In other words, should the release version of my project include Google Mock?
P.S.
What do you advise for learning TDD in practice? (Articles, books, etc.)
You can try this book: TDD By Example. It uses Java, but I think it will help :)
In my opinion, there's no need to remove the testing code from the release version of the project. Test code should be developed in such a way that it is part of the final product, i.e. it follows the same standards, is maintainable, and follows good unit testing practices (see The Art of Unit Testing).
As part of TDD you should also be performing continuous integration builds that run after code is delivered. This build process should run through all (active) unit tests to make sure that nothing has been inadvertently broken (We use Anthill Pro). If you remove your test code prior to building, this process won't be possible.
There's a good article here by James Shore that might be worth reading.

Testing... how do the pros do it, and what techniques can scale to single-person development?

I've been writing software for years, but have never mastered the art of testing. My typical testing includes thorough run-throughs on my machines, and then testing in various operating systems via VMware. Mainly a brute-force play-with-it-until-it-breaks-or-doesn't approach. Where possible I work on actual hardware, but this isn't always practical.
My question is twofold:
How do medium-sized professional development houses do their testing?
What common techniques or procedures (outside of unit testing) can apply to a developer team of one? I'm looking for practicality.
Thank you for your time and input.
Step 1: Unit Testing
Divide your software into components (which can be anything from single functions up to whole programs) and unit-test those components thoroughly, especially as relates to the API and behavior that the rest of the application can see. (Don't forget to check for failure modes too, but beware of binding too carefully to the exact nature of a failure; it's often good enough to just test for the presence of the right class of exception rather than its exact message.) Make sure those tests pass; you're refining them into a specification of what the component should be doing. (Automated test running helps here, as does a CI system.) This is important because of…
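For the failure-mode point, JUnit 4 lets you pin down just the exception class, as in this sketch (AccountService is a hypothetical component under test):

import org.junit.Test;

// Asserting only on the exception class keeps the test from breaking
// whenever the exception message wording changes.
public class AccountServiceFailureTest {

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeDeposit() {
        new AccountService().deposit(-100);
    }
}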
Step 2: Integration Testing
Test that the compositions of components that make up the application work together (this is integration testing). Ideally you'll only be finding bugs in the specifications of things at this point (hah!): wherever a component turns out to be wrong despite passing its unit tests, those tests were incomplete. Whenever things fail to work together despite being told to do so, you've probably got a bug in your specs from the previous step, so you typically resolve these things by adding more detail to your unit tests and fixing the components until they work.
Note that to make integration go well, you want to keep this stage simple enough that the integration itself is in the "Obviously No Bugs" class of programs instead of the larger "No Obvious Bugs" class. An integration framework like Spring or a scripting language can help a lot here (though with the latter you have to guard against creating components on the sly; if you create a component, then admit it and make sure it has a proper usage contract and unit tests to ensure that it meets its contract).
Where you can, make components by composing others together; these higher-level components need to be unit tested as characterized in Step 1 above. This might sound like extra work – it probably is – but it does have the advantage of meaning that you can use automated tests for larger parts of the program. (Alas, it's harder to do all integration tests with an automated test tool; such things tend to work better doing unit tests where you can mock out all the irrelevant parts, as in the sketch below.) But this doesn't save you from…
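Here is a small self-contained sketch of that: a higher-level component unit-tested with a deterministic fake standing in for an irrelevant collaborator. All the names are made up for illustration.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ReportServiceTest {

    // In production code these would live in their own files.
    interface Clock { long now(); }

    static class ReportService {
        private final Clock clock;
        ReportService(Clock clock) { this.clock = clock; }
        long stampReport() { return clock.now(); }
    }

    @Test
    public void reportIsStampedWithTheClockTime() {
        Clock fixed = new Clock() {
            public long now() { return 42L; }  // fake: no real system time involved
        };
        assertEquals(42L, new ReportService(fixed).stampReport());
    }
}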
Step 3: Acceptance Testing
This is where the overall application is tested to see if it actually does what is desired. This might be automatable, but usually isn't. This is the level where you bring in users to let them see whether things are what they expected, though you might want to use internal testers a bit first. How easy this all is depends on the nature of the application.
Note also that user interfaces tend to spend more time in this step than the others, precisely because what makes for a good UI is difficult to impossible to pin down in algorithms (it relates much more to human psychology after all).
A final note: What I've written here sounds like testing is meant to be a laborious process that takes ages at the end of a project. It isn't! You can often get parts of an application done before others, do an integration of those parts (with mocks for the other bits) and test quite a bit of how acceptable this sub-application really is. Of course, when doing this take care to stop users from believing that everything is done; one way is to have dialog boxes that pop up and say things like “magic to happen here”. Silly but effective. :-)
For a small team, unit tests or automatic integration tests are crucial, because there are not enough hands or time for manual testing; the more you automate, the better. This includes continuous integration.
Set up a separate 'beta' environment that is as close to your production environment as possible. Do most of your tests there - this way you will pick up all the things you've forgotten in your 'release plan'.
As a professional tester, my suggestion is that you should have a healthy mix of automated and manual testing. The examples below are from .NET, but it should be easy to find a tool for whatever technology you are using.
AUTOMATED TESTING
Unit Testing
Use NUnit to test your classes, functions, and the interaction between them.
http://www.nunit.org/index.php
Automated Functional Testing
If possible you should automate a lot of the functional testing. Some frameworks have functional testing built into them; otherwise you have to use a tool for it. If you are developing web sites/applications, you might want to look at Selenium.
http://www.peterkrantz.com/2005/selenium-for-aspnet/
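Selenium also has Java bindings if your stack is Java rather than .NET. A minimal WebDriver sketch; the URL and element ids are made up, so adapt them to your pages:

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;
import static org.junit.Assert.assertTrue;

// Drives a login page in a headless browser and checks the result.
public class LoginPageTest {

    @Test
    public void loginShowsWelcomeMessage() {
        WebDriver driver = new HtmlUnitDriver();
        driver.get("http://localhost:8080/app/login");       // assumed test deployment
        driver.findElement(By.id("username")).sendKeys("demo");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("submit")).click();
        assertTrue(driver.getPageSource().contains("Welcome"));
        driver.quit();
    }
}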
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to treat as pass/fail. A human can use their intelligence to find faults and raise questions that appear while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and fun!
http://www.satisfice.com/articles/et-article.pdf
My testing tool examples are Java based, but I will try to suggest tools which are ported to multiple languages or are language agnostic.
Use unit testing tools like JUnit (ported to a variety of languages). This will allow you to refactor your code safely. Most code bugs should result in the addition or correction of at least one test.
Use revision control, and set up an automated build environment that checks out and builds the code. It should then run the automated test suite. If the application uses a database, the build environment should have its own database. Use different code branches for production (released) and development code.
Use integration testing tools like HttpUnit or Synergy to test web applications. Tools of this type are basically language-agnostic, but you may want to choose a tool that can be extended in the language(s) you are using. For non-web applications, there may not be an equivalent tool for your platform. You may also want to use a performance tool like JMeter.
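A minimal HttpUnit sketch, assuming the application is already deployed at a known URL:

import com.meterware.httpunit.GetMethodWebRequest;
import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebRequest;
import com.meterware.httpunit.WebResponse;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hits a running deployment over HTTP; the URL is a placeholder.
public class HomePageIntegrationTest {

    @Test
    public void homePageResponds() throws Exception {
        WebConversation wc = new WebConversation();
        WebRequest request = new GetMethodWebRequest("http://localhost:8080/app/");
        WebResponse response = wc.getResponse(request);
        assertEquals(200, response.getResponseCode());
    }
}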
These tools have some setup costs, but a quick payback. Overall development time may be the same or less than if you didn't use the tools.
Acceptance tests generally don't lend themselves to automated testing. Where they do, include them in the integration testing. Get acceptance feedback as early and often as possible.
How do the pros do it? That all depends on who the "pro" is... There are dozens of different approaches to testing, and plenty of experts to tell you that their way is the one true way. Agile gurus will tell you a very different story from the waterfall gurus. The ISTQB guys will tell you a very different story from the Context-Driven guys.
Unfortunately there is no one true way, and you have to figure it out for yourself. Your approach to testing depends on too many factors. That's probably not very helpful, but you just need to be aware that any answer you get here will be only one option of many, and it may be completely inappropriate for your situation.
Personally, after several years in software testing, I have decided to align myself with the Context Driven school of software testing. See: http://www.context-driven-testing.com
Secondly, from your description of your current approach, that sounds a lot like exploratory testing to me. You may find this material interesting: satisfice.com/sbtm/
One thing you can do (combined with all the previous suggestions) is identify the risky and critical areas of your app and try to focus your testing efforts on these areas.

What documentation, links, and advice could you offer me for creating a testing library?

I'm thinking of designing my own test library (framework) in C++.
I am wondering if some of you have already designed your own (and what good advice and documentation you could offer me), or decided not to do that (and why), and what criticisms (and arguments) you have against the different existing testing frameworks.
I want to know more about testing framework design.
In fact I have some pretty different things to test:
simple unit test
MVC and signal slot
data (especially for audio and DSP)
performance
compatibility
"So much things ... and so few time "
No really I need to test a lot of different things.
So I checkout how is designed XUnit, and the Addison and W XUnit related book, also the Advanced Unit Testing related article on code project....
And different articles, discuss this with coworkers ...
And at the end, I want to design my own.
Why :
specific needs,
I like the do-it-yourself way (and want to learn why things are done the way they are in existing frameworks, and that I'm not a genius ... ^^)
Thank you all.
I remember having read some discussions about CppUnit 2 design on the SourceForge wiki. I'd start from there. Also, Noel Llopis explored the C++ unit-testing framework jungle.
But you're saying you want to re-create another framework, and you only have a little time left. I'd suggest picking one framework fitting your needs for the unit tests, then seeing if it can be used for your MVC and data testing. Moreover, unit testing frameworks are not designed to run performance tests. I'd recommend following the Unix philosophy here: simple little tools that do one thing and do it well.
Learn at least one existing framework before you implement your own. My experience is that the framework is not the problem; learning how to write good unit tests is the hard part.
I have used several frameworks through the years, including CxxTest, CppUnitLite and UnitTest++. But my recommendation is Google Test together with Google Mock (Google Mock comes with a copy of Google Test bundled).

Using unit tests as a "functionality contract"

Unit tests are often deployed with software releases to validate the install - i.e. do the install, run the tests and if they pass then the install is good.
I'm about to embark on a project that will involve delivering prototype software library releases to customers. The unit tests will be delivered as part of each release and in addition to using the tests to validate the install, I plan on using the unit tests that test the API as a "contract" for how the release should be used. If the user uses the release in a similar manner to how it is used by the unit tests then great. If they use it some other way then all bets are off.
Has anybody tried this before? Any thoughts on whether this is a good/bad idea?
Edit: To highlight a good point raised by ChrisA and Dan in replies below, the "unit tests that test the API" are better called the integration tests and their intent is to exercise the API and the software to demonstrate the functionality of the software from a customer perspective.
Sounds like a good idea to me. I (we all?) routinely use unit tests internally to do just that. In using my unit tests to validate that I haven't broken anything I'm also implicitly verifying that my API contract hasn't changed. It seems like a natural usage of unit tests to deploy them in the fashion you're talking about.
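A sketch of what such a shipped contract test might look like; RateLimiter and its methods are hypothetical stand-ins for the delivered API:

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Doubles as an install check and as executable documentation of the
// supported way to call the API.
public class RateLimiterContractTest {

    @Test
    public void atMostTwoPermitsPerSecond() {
        RateLimiter limiter = RateLimiter.perSecond(2);  // the documented entry point
        assertTrue(limiter.tryAcquire());
        assertTrue(limiter.tryAcquire());
        assertFalse(limiter.tryAcquire());               // third call in the same second is refused
    }
}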
Agile methodologies say: Tests are specifications, so this is a very good idea.
I fully expect to be flamed for this, but I don't understand how a set of unit tests proves anything at all about the kind of things a customer cares about, namely whether the application meets his business requirements.
Here's an example: I've just finished converting a chunk of code to fix a big mistake we made. It was a classic case of over-engineering, and the changes have touched about a dozen windows forms and about as many classes.
It's taken me a couple of days, it's now a lot simpler, we gained some features for free, and we lost a ton of code that did stuff that we now know we never really needed.
Every single one of those forms worked perfectly before. The public methods did exactly what they needed to do, and the underlying data accesses were just fine.
So any unit test would have passed.
Except, sadly, they did the wrong thing - which we didn't realise, except in retrospect. It's as if we'd built a prototype and only after trying to use it, realised that it wasn't right.
So now we have a leaner, meaner, fitter application.
But the things that were wrong, were wrong at a level where unit tests could never have revealed them, so I'm just not understanding how shipping a set of unit tests with an install does anything except give a false sense of security.
Maybe I'm not understanding something, but it seems to me that unless the thing that is shipped functions at the same level as the tests supplied, they prove nothing.
It's actually a pretty good idea, and extremely pleasant as an API user.
This technique can actually also be used the other way round : when you're using a "legacy" API, you can use unit tests to document the way you think the API behaves and to validate that it actually behaves as planned.
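These are sometimes called learning tests. A concrete example against a real API (here the JDK itself), documenting behaviour we rely on:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A learning test: it records what we believe a third-party API does,
// and fails loudly if an upgrade ever changes that behaviour.
public class StringSplitLearningTest {

    @Test
    public void trailingEmptyStringsAreDropped() {
        // String.split discards trailing empty strings by default,
        // so "a,b,," splits into just {"a", "b"}.
        assertEquals(2, "a,b,,".split(",").length);
    }
}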
If you are releasing a code library, this sounds great.
If you are releasing an ordinary software product with which your users will interact only via a GUI, your unit tests may not be working at the same level of abstraction and may not be the most useful tool to assess the behaviour of your product. A really good user manual (yes, this is possible) might be better for that.
If you're interested in providing a set of specifications with your code, perhaps you should investigate some of the behavior-driven development tools (nbehave, jbehave, rspec, etc.). These frameworks provide support for describing your tests in given/when/then syntax and outputting formatted results that are in a natural language. See nbehave for an example of a BDD tool for .NET. You can find an excellent description of BDD here
Another option may be for you to write tests using an acceptance testing framework such as Fit or FitNesse (or the Java-only Concordion) and deliver these acceptance tests with the code. Both Fit/FitNesse and Concordion allow specification of the tests in plain HTML or even Word documents.
The benefit of either approach (BDD or acceptance testing frameworks) is that the results the user sees are more human-readable and understandable.
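Even without adopting a full BDD framework, you can borrow the given/when/then shape in plain JUnit, which already reads more like a specification (ShoppingCart is hypothetical):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Given/when/then structure expressed in an ordinary unit test.
public class ShoppingCartBehaviourTest {

    @Test
    public void givenAnEmptyCart_whenAnItemIsAdded_thenTheCountIsOne() {
        // given
        ShoppingCart cart = new ShoppingCart();
        // when
        cart.add("book");
        // then
        assertEquals(1, cart.itemCount());
    }
}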
Tests will check requirements.
Requirements define functionality
=> Tests will check functionality.
The problem is that only functionality that can be covered by unit tests can be checked this way; integration or whole-system tests won't work.
Otherwise, checking functionality via unit tests is the main approach of TDD.
Meszaros calls this "Tests as documentation"

What unit-test frameworks would you recommend for J2ME? [closed]

I'm relatively new to J2ME and about to begin my first serious project. My experience in testing isn't too deep either. I'm looking for a unit test framework for J2ME.
So far I've seen J2MEUnit, but I don't know how well supported it is. I've seen JavaTest Harness, but I don't know if it's overkill.
Please tell me what framework you suggest with respect to:
* Simplicity of implementing tests
* Supporting community and tools
* Compatibility with application certification processes
* Integration with IDEs (Eclipse, NetBeans)
* Other aspects you find important...
Thanks,
Asaf.
This is a blog entry from a Spanish company that makes mobile games. It compares many frameworks, and the conclusion is (translated):
MoMEUnit offers very useful information about the tests. It is easily ported and Ant-compatible. A disadvantage (or maybe not) is that it requires every test class to have a unique test method, using a lot of inheritance.
JMEUnit (the future merge of J2MEUnit and JMUnit): JMUnit doesn't support Ant, but its interface is similar to MoMEUnit's. J2MEUnit doesn't provide very useful information with the tests. Test creation in both frameworks is somewhat complex. J2MEUnit does support Ant; that's why the merge of both frameworks will be very interesting (they have been working on it for a year, more or less).
My experience: I've used J2MEUnit, and setting up test fixtures is a pain due to the lack of reflection in J2ME, but they are all built the same way, so a template saves a lot of time.
I was planning to try out MoME Unit this week, just to check its simpler model
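For reference, the reflection-less template looks roughly like this. The class and method names follow the J2MEUnit API as I remember it, and Calculator is hypothetical, so treat it as a sketch:

import j2meunit.framework.Test;
import j2meunit.framework.TestCase;
import j2meunit.framework.TestMethod;
import j2meunit.framework.TestSuite;

// Because CLDC has no reflection, every test method is registered by
// hand in suite(); this boilerplate is what the template saves you.
public class CalculatorTest extends TestCase {

    public CalculatorTest() {
    }

    public CalculatorTest(String name, TestMethod method) {
        super(name, method);
    }

    public void testAdd() {
        assertEquals(4, new Calculator().add(2, 2));
    }

    public Test suite() {
        TestSuite suite = new TestSuite();
        suite.addTest(new CalculatorTest("testAdd", new TestMethod() {
            public void run(TestCase tc) {
                ((CalculatorTest) tc).testAdd();
            }
        }));
        return suite;
    }
}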
Some Test Unit Frameworks for J2ME:
JMUnit
MoME Unit
J2ME Unit
Sony Ericsson Mobile Java Unit
Take a glance at MockME as well.
www.mockme.org
From their site:
"MockME is Java ME mock objects for Java SE. MockME lets you write real unit tests without having to run them on the phone. You can even use dynamic mock object frameworks such as EasyMock that enables you to mock any object in Java ME! MockME integrates best-of-breed tools for unit testing including JUnit, EasyMock and DDSteps. By making Java ME API's mockable you can write unit tests for your Java ME application the way you really want to."
MicroEmulator + JUnit on J2SE
I started out with tools like JMUnit, but I recently switched over to standard JUnit + MicroEmulator on J2SE. This is similar to using MockME, but with MicroEmulator instead. I prefer MicroEmulator, because it has actual implementations of the components, and you can run an entire MIDlet on it. I've never used MockME myself though.
All my non-GUI unit tests are run by simply using MicroEmulator as a library. This has the advantage that all the JUnit tools work seamlessly, specifically Ant, Maven, most IDEs, and Continuous Integration tools. As it runs on J2SE, you can also use features such as generics and JUnit annotations, which makes writing unit tests a little nicer.
Some components like the RecordStore require some setup before working. This is done with MIDletBridge.setMicroEmulator().
Using MicroEmulator also has the advantage that the implementation of some components can be customized, for example the RecordStore. I use an in-memory RecordStore, which is re-created before each test, so that I'm sure the tests run independently.
Real Devices
The approach described above won't run on any real devices. But, in my opinion, only GUI and acceptance tests need to be run on real devices. For this, tools like mVNC together with T-Plan Robot can be used on Symbian devices (thanks to this blog post). However, I could only get mVNC to work over Bluetooth, and it was very slow.
An alternative might be to use a service like The Forum Nokia Remote Device Access (RDA). I still need to investigate whether platforms like this are suitable for automated testing.
Hmm... I myself have not developed a mobile application, but I think J2MEUnit is the better choice, as it's based on the original JUnit, which has a big community and is supported by most IDEs. It should be quite easy to run at least those tests which do not depend on the mobile hardware directly from your IDE.
More important might be that J2MEUnit integrates with Ant, so you can run your tests with every build.
A related document I found (after posting the question) is Testing Wireless Java Applications. It describes J2MEUnit near the document's end.