With a good design pattern like MVP, MVC, etc. we aim to move all logic out of the GUI. That leaves us with a lightweight GUI which ideally just needs to "bind" its buttons and fields to properties in some business logic layer. This is a great approach, as this layer will be free from GUI concerns and we can easily write unit tests for it.
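For illustration, here is a minimal sketch of the kind of separation I mean (all names are made up):

// the GUI only implements this interface and forwards button clicks
interface LoginView {
    String getUserName();
    void showError( String message );
}

class LoginPresenter {
    private final LoginView view;

    LoginPresenter( LoginView view ) {
        this.view = view;
    }

    // wired to the login button in the GUI layer; holds the actual logic
    // and can be unit tested with a simple fake LoginView
    void onLogin() {
        if( view.getUserName().trim().isEmpty() ) {
            view.showError( "User name must not be empty" );
        }
    }
}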
My question is: Is this enough? Or should we still unit test the GUI layer?
IMHO, if you remove all logic from the GUI, you don't need to test it automatically. Of course you still need to run it to see if it looks like it should :)
This is about unit tests. For integration tests it is still good to test everything, e.g. with Selenium, if possible.
Sometimes the GUI is not really that dumb. For instance, there might be drag-and-drop support, custom components which display their content based on where they are placed, and so on. In that case these things need to be specifically tested, both in integration tests and individually in unit tests.
Most of the time the integration tests start from the UI layer, so we end up testing a lot of the UI layer in those scenarios as well. I once read a comment about unit testing that you don't need to write tests for code that can only be broken deliberately; for instance, a getter simply returns a value, and the only way to break it is to not return that value. So we don't write unit tests for getters and setters unless there is some logic embedded in them (in which case they are not really getters and setters any more).
So if the GUI is totally dumb and contains only bindings, unit tests for it are not required.
Related
We use SWTBot for writing functional tests. Some cases are very difficult to test, and some programmers use classes and their methods directly from the implementation (for example, calling methods of the class AddUserDialog). Is this a good approach? And why?
And the next question, please: is SWTBot enough for testing an Eclipse RCP based application? Is it still necessary to write unit tests?
Note: we are a Scrum team.
SWTBot and JUnit serve two different purposes.
JUnit
As the name implies, JUnit is meant for unit testing. Unit tests should be small and fast to execute. They test only a single unit of code, and the above-mentioned attributes allow them to be executed often while developing the unit under test.
But there is more to (good) unit tests. You may want to read one of the following posts for further attributes of unit tests:
Key qualities of a good unit test
What attribute should a good Unit-Test have?
I would go one step further and say that unit tests only make sense in TDD, that is, when you write the test before the production code. Otherwise you neglect the tests. Who wants to make the extra effort of writing tests for something that already works? And even if you have the discipline to write the tests afterwards, they merely manifest the state of your production code. Writing tests beforehand, as in TDD, leads to lean production code that only does what is required by the tests.
But I guess that's something not everyone will agree on.
In an RCP setting, unit tests would ideally be able to run without starting the platform (which takes rather long). That is, they would not need to be run as PDE JUnit tests but could run as plain JUnit tests instead. Therefore the unit under test should be isolated from the RCP APIs.
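A test for such an isolated unit could then look like this (just a sketch; UserNameValidator stands in for whatever plain Java class holds your logic):

import static org.junit.Assert.assertFalse;

import org.junit.Test;

public class UserNameValidatorTest {

    @Test
    public void emptyUserNameIsRejected() {
        // UserNameValidator is a made-up plain Java class with no dependency
        // on the workbench or any other RCP API, so this runs as a plain
        // JUnit test without starting the platform
        UserNameValidator validator = new UserNameValidator();

        assertFalse( validator.isValid( "" ) );
    }
}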
On a related note, see also this question: How to efficiently JUnit test Eclipse RCP Plugins
SWTBot
While SWTBot uses the JUnit runtime to execute the tests, it is rather meant as a utility to create integration or functional tests. SWTBot, when used with RCP, starts the entire workbench and runs all tests within the same instance. Therefore great care should be taken to ensure that each test leaves the environment in the same state as it was before the test started. Specialized Rules may help here to set up and tear down a particular recurring scenario.
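Such a Rule could look roughly like this (a sketch; the class name and what exactly needs to be reset are assumptions about your application):

import org.junit.rules.ExternalResource;

public class ResetWorkbenchRule extends ExternalResource {

    @Override
    protected void before() {
        // set up the recurring scenario, e.g. open the view under test
    }

    @Override
    protected void after() {
        // close editors, dialogs, etc. so the next test starts from a known state
    }
}

// in the test class:
// @Rule
// public final ResetWorkbenchRule workbench = new ResetWorkbenchRule();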
It is perfectly valid, in order to set up an SWTBot test, to call methods from your application. For example, you could programmatically open the wizard and then use SWTBot to simulate a user that enters data and presses the OK button. There is no need to laboriously open the wizard itself through SWTBot.
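For example, with the wizard already open, the SWTBot part could look like this (the widget labels are assumptions about your wizard):

// sketch: "User name:" and "OK" are assumed labels in the wizard
SWTBot bot = new SWTBot();
bot.textWithLabel( "User name:" ).setText( "data" );
bot.button( "OK" ).click();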
In my experience, SWTBot is even too much for simple use cases. Consider a test that should enter some data into a dialog and then press OK. If you already have the dialog opened programmatically, you can just as well continue without SWTBot:
dialog.textField.setText( "data" );
dialog.okButton.notifyListeners( SWT.Selection, null );
assertThat( dialog.getEnteredData() ).isEqualTo( "data" );
Use Both
The best bet is to have both, unit tests that ensure the behavior of the respective units and functional tests that make sure that the particular units play together as desired.
Not sure if that answers the question, if you have further concerns please leave a comment.
What are the different purposes of the two? I mean, under which conditions should I use each of them?
As an example: if you have a backend server and several front-end web apps, which would you do first, unit testing the backend server or UI testing in the web front-end?
Given that the server and the front-end web apps already exist, it's not an iterative design you can build along with TDD...
Unit testing aims to test small portions of your code (individual classes / methods) in isolation from the rest of the world.
UI testing may be a different name for system / functional / acceptance testing, where you test the whole system together to ensure it does what it is supposed to do under real life circumstances. (Unless by UI testing you mean usability / look & feel etc. testing, which is typically constrained to details on the UI.)
You need both of these in most projects, but at different times: unit testing during development (ideally from the very beginning, TDD style), and UI testing somewhat later, once you actually have some complete end-to-end functionality to test.
If you already have the system running, but no tests, practically you have legacy code. Strive to get the best test coverage achievable with the least effort first, which means high level functional tests. Adding unit tests is needed too, but it takes much more effort and starts to pay back later.
Recommended reading: Working Effectively with Legacy Code.
Unit tests should always be written. Unit tests are there to provide proof that each UNIT (read: object) of your technical solution delivers the expected results. To put it very (maybe too) simply, user testing is there to verify that your system fulfills the needs and demands of the user.
The test pyramid [1] is an important concept here, well described by Martin Fowler.
In short, tests that run end-to-end through the UI are brittle and expensive to write. You may consider test recording tools [2] to speed up recording and re-recording. Disclaimer: I'm a developer of such a tool.
[1] https://martinfowler.com/articles/practical-test-pyramid.html
[2] https://anwendo.com
In addition to the accepted answer, today I came up with the question of why not just programmatically trigger layout functions and then unit test your logic around that as well.
The answer I got from a senior dev was: programmatically triggering layout functions will not be an exact copy of the real user experience. In the real world, the system triggers many callbacks, for example when the user backgrounds or foregrounds the app. Obviously you can trigger such events manually and test again, but would you be sure you got all events in all sequences right?
The real user experience is one where the user makes actual network calls, taps on screens, loads multiple screens on top of each other, and at times gets system callbacks: callbacks which you forgot to mock or didn't mock properly. In unit tests you're mainly testing in isolation. In a UI test you set up the app, may have to log in, and so on. The stack you build there is much more complex than in a unit test. Hence it's better not to mix unit testing with UI testing.
I've heard of unit testing and have written some tests myself, just as tests, but I've never used any testing frameworks. Now I'm writing a wxPython GUI for some in-house data analysis/visualisation libraries. I've read some of the obvious Google results, like http://wiki.wxpython.org/Unit%20Testing%20with%20wxPython and its link http://pywinauto.openqa.org/ but am still uncertain where to start.
Does anyone have experience or good references for someone who sort of knows the theory but has never used any of the frameworks and has no idea how it works with GUIs?
I am on a Windows machine developing a theoretically cross-platform application that uses NumPy, Matplotlib, Newville's MPlot package, and wxPython 2.8.11. Python 2.6 with plans for 3.1. I work for a bunch of scientists, so there is no in-house unit-testing policy.
If you want to unit test your application, you don't have to focus on GUI testing techniques. It is much better to write the application using MVC, MVP, or another pattern like these, so that the business logic and presentation layer are separated.
It is much more important to cover the business layer with tests, since this is your code. The presentation layer is already tested by the wxWidgets developers. To test the business layer, basic tools like the standard unittest module and maybe nose are enough.
To make sure the whole application behaves correctly, you should add a few acceptance tests that exercise functionality from end to end. These will deal with the GUI, but there will be few such tests compared to the number of unit tests.
If you limit yourself to acceptance tests only, you'll get a low-coverage, fragile, and very slow test code base.
To unit test your application without requiring lots of mock objects/stubs, your GUI's event handlers should basically delegate to other method calls, passing in values from the Event object as parameters to the delegated method.
Otherwise you'll be unable to test your application without having to mock wx's objects.
Take a look at the PyPubSub project for a great module to help with MVC.
In one early project of mine I really did test a wxPython application through the GUI layer. The tests spun up a live wxApp object, popped up real windows, and then started messing with a real MainLoop(). Very soon I realized it was the wrong way to do testing. My tests ran very slowly and were unreliable. A much better way is to keep the GUI stuff aside and test only the "model" level of your application. Note that you can actually create a model for presentation-level logic (a model that represents some visual part of your application) and test it. But this model should not involve any "real" GUI objects (windows, dialogs, widgets).
I have a simple project, mostly consisting of back-end service code. I have this fully unit-tested, including my DAL layer...
Now I have to write the front-end. I re-use what business objects I can in my front-end, and at one point I have a grid that renders some output. I have my DAL object with some function called DisplayRecords(id) which displays the records for a given ID...
All of these DAL objects are unit tested. But is it worth it to write a unit test for the DisplayRecords() function? This function calls a stored proc, which does some joins. This means that my unit test would have to set up multiple tables, one with 15 columns, and its return value is a DataSet (this is the only function in my DAL that returns a DataSet, because it wasn't worth it to create an object just for this one grid)...
Is stuff like this even worth testing? What about front-end logic in general - do people tend to skip unit tests for the ASP.NET front-end, similar to how people 'skip' testing the logic in private functions? I know the latter is a bit different - testing behavior vs implementation and all... but I'm just curious what the general rule of thumb is.
Thanks very much
There are a few things that weigh into whether you should write tests:
It's all about confidence. You build tests so that you have confidence to make changes. Can you confidently make changes without tests?
How important is this code to the consumers of the application? If this is critical and central to everything, test it.
How embarrassing is it if you have regressions? On my last project, my goal was no regressions-- I didn't want the client to have to report the same bug twice. So every important bug got a test to reproduce it before it was fixed.
How hard is it to write the test? There are many tools that can help ease the pain:
Selenium is well understood and straightforward to set up. It can be a little expensive to maintain a large test suite in Selenium. You'll need the fixture data for this to work.
Use a mock to stub out your DAL call, assuming it's tested elsewhere; that way you save the time of creating all the fixture data. This is a common pattern in testing Java/Spring controllers (see the sketch after this list).
Break the code down in other ways simply so that it can be tested. For example, extract out the code that formats a specific grid cell, and write unit tests around that, independent of the view code or real data.
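For the mocking approach mentioned above, a test could look like this (a sketch with Mockito; RecordsDal, RecordsGridPresenter and the method names are made-up stand-ins for your real classes):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class RecordsGridPresenterTest {

    @Test
    public void showsRecordsForGivenId() {
        // stub the DAL so no tables, stored procs or fixture data are needed
        RecordsDal dal = mock( RecordsDal.class );
        when( dal.displayRecords( 42 ) ).thenReturn( Arrays.asList( "row1", "row2" ) );

        RecordsGridPresenter presenter = new RecordsGridPresenter( dal );
        List<String> rows = presenter.loadGrid( 42 );

        assertEquals( 2, rows.size() );
        verify( dal ).displayRecords( 42 );
    }
}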
I tend to make quick Selenium tests and just sit and watch the app do its thing - that's a fast validation method which avoids all the manual clicking.
Fully automated UI testing is tedious and should IMO only be done in more mature apps where the UI won't change much. Regarding the 'in-between' code, I would test it if it is reused and/or complicated or introduces new logic, but if it's just more or less a new sequence of DAL method calls specific to a single view, I would skip it.
It's pretty clearly worth it to write tests for stuff that happens on the server side, but I've heard that it is really hard to write unit tests for UI stuff and that they are fragile and unreliable. I would love to have more confidence that the changes I make don't break major parts of important pages on my site, though.
Any thoughts or experiences?
UI tests can be slow, fragile and painful to maintain, but some bugs can only be caught in UI tests. The important question isn't whether it is worth it to write UI tests, but how to keep your UI tests useful, stable and maintainable.
A common mistake is to use UI tests as a substitute for other tests. Theoretically, you can test a lot of functionality through UI tests, but there are many problems with that approach. For starters, some functionality may be very hard to test directly in the UI (especially exceptional conditions). Secondly, if a test fails, it is often hard to see what the source of the problem is. Finally, the more code paths you test in UI tests, the slower the UI tests get. If you rely only on slow tests, your productivity gets worse, which increases the temptation to just "temporarily" turn off broken tests.
My advice is to test as much as possible in unit tests and integration tests, create a good separation between your UI and your business logic, and use your UI tests to catch/prevent bugs that cannot be tested in other kinds of tests.
If you have many tests, consider creating multiple suites. I create one suite for UI tests, one for integration tests, and one for unit tests. The unit tests are very fast, so I run them as I develop my code (often via TDD). These are the tests that help me be productive.
The integration tests I run less often (perhaps after I've finished implementing a set of changes). The UI tests I run when I'm getting ready to submit a change (or when I write more UI tests, obviously).
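One way to keep such suites separate is JUnit 4 categories (a sketch; the marker interface and class names are made up, and each class would live in its own file):

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// marker interface used to tag slow, browser-driven tests
public interface UiTest {}

// a test class tagged as a UI test
@Category( UiTest.class )
public class LoginPageUiTest {
    @Test
    public void userCanLogIn() {
        // ... drive the browser here ...
    }
}

// suite that collects only the UI tests; run it when getting ready to submit
@RunWith( Categories.class )
@IncludeCategory( UiTest.class )
@SuiteClasses( { LoginPageUiTest.class } )
public class UiTestSuite {}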
One final bit of advice: consider writing your UI tests with a Domain Specific Language. This makes it easier to understand the tests (because they read as a set of user steps and not as a bunch of low-level browser actions). It also can make the code easier to maintain. For instance, instead of having every test go through the step-by-step browser actions to log the user in, you might see:
LoginPage loginPage = new LoginPage(selenium);
HomePage homePage = loginPage
.enterUserName(TestUsers.Alice.USER_NAME)
.enterPassword(TestUsers.Alice.PASSWORD)
.submit();
assertThat(homePage.getBreadcrumbs(), equalTo("Home"));
It's easy, could be done by a not-so-techy person, and is definitely worth the effort. UI testing is hard for desktop applications, not so hard for web apps, because you can move controls around and Selenium will still be able to find them in the HTML. No such luck for poor desktop developers.
Of course there's always a possibility you overdo it to the point where it doesn't give you much back compared to the effort, but it's the same with normal unit testing. You should know when to stop.
I used them as part of development. The cool thing about them is that you can also execute them in IE (you need Selenium Server for that). I used them only during the development stage and don't treat them like unit tests. If I write some UI-specific logic, they help a lot.
The most important thing is to assign an id to every HTML element you use. Without ids, the tests are very fragile. Using comments also helps.
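For example (a sketch assuming the Selenium WebDriver API; the ids and URL are made up):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginSmokeTest {

    public static void main( String[] args ) {
        WebDriver driver = new FirefoxDriver();
        driver.get( "http://localhost:8080/login" );
        // locating by id stays stable even if the page layout changes,
        // unlike a long XPath tied to the markup structure
        driver.findElement( By.id( "username" ) ).sendKeys( "alice" );
        driver.findElement( By.id( "password" ) ).sendKeys( "secret" );
        driver.findElement( By.id( "login-button" ) ).click();
        driver.quit();
    }
}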
The more tests you write to cover the different dynamics of the application, the better. When the time comes that you need to change something on a page that's generated by the app, your tests will tell you if anything breaks.
Yes, they are fragile, and yes, it's a pain keeping crazy selector paths working just right...
You could use browsershots.org to check the look of your pages in virtually every browser out there.