What is the different purpose of these two? I mean, under which conditions should I do each of them?
As an example condition: if you have a backend server and several front-end web apps, which would you do first - unit-test the backend server, or UI-test the web front ends?
Given that condition, the server and the front-end webs already exist, so this is not an iterative design being built test-first (TDD)...
Unit testing aims to test small portions of your code (individual classes / methods) in isolation from the rest of the world.
UI testing may be a different name for system / functional / acceptance testing, where you test the whole system together to ensure it does what it is supposed to do under real life circumstances. (Unless by UI testing you mean usability / look & feel etc. testing, which is typically constrained to details on the UI.)
You need both of these in most projects, but at different times: unit testing during development (ideally from the very beginning, TDD style), and UI testing somewhat later, once you actually have some complete end-to-end functionality to test.
If you already have the system running, but no tests, practically you have legacy code. Strive to get the best test coverage achievable with the least effort first, which means high level functional tests. Adding unit tests is needed too, but it takes much more effort and starts to pay back later.
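For a rough idea of what such a coarse, high-level functional test can look like, here is a minimal sketch using JUnit 5 and the JDK's HttpClient. It assumes the legacy system is already deployed and reachable over HTTP; the URL, port and expected page text are illustrative assumptions, not details from the question.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class LoginPageSmokeTest {

    @Test
    void loginPageIsServed() throws Exception {
        // Hit the running system from the outside, just like a user's browser would.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/login")) // assumed URL of the deployed app
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Coarse-grained assertions: the app is up and renders the expected page.
        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("Login")); // assumed page text
    }
}

Even a handful of checks like this gives you a safety net for refactoring before you invest in fine-grained unit tests.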
Recommended reading: Working Effectively with Legacy Code.
Unit tests should always be written. They are there to provide proof that each UNIT (read: object) of your technical solution delivers the expected results. To put it very (maybe too) simply: user testing is there to verify that your system fulfills the needs and demands of the user.
The test pyramid [1] is an important concept here, well described by Martin Fowler.
In short, tests that run end-to-end through the UI are brittle and expensive to write. You may consider test recording tools [2] to speed up recording and re-recording. Disclaimer: I'm the developer of such a tool.
[1] https://martinfowler.com/articles/practical-test-pyramid.html
[2] https://anwendo.com
In addition to the accepted answer: I recently wondered why not just trigger layout functions programmatically and then unit-test your logic around that as well?
The answer I got from a senior dev was: triggering layout functions programmatically will not be a faithful copy of the real user experience. In the real world, the system triggers many callbacks, for example when the user backgrounds or foregrounds the app. Obviously you can trigger such events manually and test again, but would you be sure you got all events, in all sequences, right?
The real user experience is one where the user makes actual network calls, taps on screens, loads multiple screens on top of each other, and occasionally receives system callbacks - callbacks you may have forgotten to mock, or mocked incorrectly. In unit tests you are mainly testing in isolation. In a UI test you set up the whole app, may have to log in, and so on; the stack you build is much more complex than in a unit test. Hence it's better not to mix unit testing with UI testing.
We have a web application (using Grails/Groovy) and we write unit tests and functional tests.
However, we are not writing integration tests.
With unit tests, we can catch some small issues, and they also help us write our codebase in a modular, short and readable fashion.
Functional tests obviously help us to know when a feature is broken.
What would we get from writing integration tests? What would be the benefits of the extra time spent writing these tests?
The question is: "What would we get from writing integration tests? What would be the benefits of the extra time spent writing these tests?"
Your integration tests will ensure that your components work together across cross-cutting concerns such as web services, the database, the session, etc. You only need a few of them - see #bagheera's comment about the TestPyramid. Be careful how you write integration tests, because if you go overboard they become really slow to run and harder to work with, and compared to unit tests each additional one buys you relatively little.
Additionally:
You need a lot of unit tests - you already have these, which is a good sign. What you don't want is a pile of tests sitting in between the two levels, the so-called "dirty hybrids": http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/
From http://martinfowler.com/bliki/TestPyramid.html:
The pyramid also argues for an intermediate layer of tests that act through a service layer of an application, what I refer to as SubcutaneousTests. These can provide many of the advantages of end-to-end tests but avoid many of the complexities of dealing with UI frameworks. In web applications this would correspond to testing through an API layer while the top UI part of the pyramid would correspond to tests using something like Selenium or Sahi.
In addition to checking your application's plumbing as #Raedwald mentioned, integration tests are great for testing that your persistence layer is working as you expect. Are cascades set up correctly? Is everything rolled back properly if something blows up during a transaction? It's often much easier to directly check this stuff with an integration test rather than a functional test.
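As a rough illustration of the rollback point, here is a minimal sketch in JUnit 5 using plain JDBC against an in-memory H2 database. The table, data and the choice of H2 are assumptions made for the example; in a Grails app you would drive the same check through GORM/Hibernate.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class RollbackIntegrationTest {

    @Test
    void insertIsRolledBackWhenTransactionFails() throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:testdb")) {
            try (Statement s = con.createStatement()) {
                s.execute("CREATE TABLE person(id INT PRIMARY KEY, name VARCHAR(50))");
            }

            con.setAutoCommit(false);
            try (Statement s = con.createStatement()) {
                s.execute("INSERT INTO person VALUES (1, 'Alice')");
            }
            con.rollback(); // simulate "something blew up" in the middle of the transaction

            try (Statement s = con.createStatement();
                 ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM person")) {
                rs.next();
                assertEquals(0, rs.getInt(1), "rolled-back insert must not be visible");
            }
        }
    }
}

A unit test with a mocked repository can never tell you whether the real transaction demarcation and cascade configuration behave this way.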
I find they are useful for checking the 'plumbing' of an application: for checking that your units are connected together and that delegating objects meet the preconditions of the methods they delegate to.
The advantages of unit testing are obvious to me: it is done by the developers themselves (either test-first or code-first) and it is automated.
What I am a bit unsure about is whether developers should also do integration testing when the team already includes a dedicated tester, who automates as much as possible and does black-box testing of the whole system (end-to-end testing, or the more common term, acceptance testing).
Some more details for background:
Example Integration Test (MVC webapp)
Setup: Only the controller itself and the layers below the controller are bootstrapped during test setup. Nothing is mocked or stubbed.
Test entry: the bare controller. Most often a controller's entry points are methods with parameters (e.g. Spring MVC), so they can be invoked natively. No browser is involved in the test fixture.
Assert targets: model data and view name are asserted as direct outputs. Indirect outputs (e.g. data written to the database) could also be asserted. The rendered payload (most often HTML) is ignored completely.
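To make the integration-test style just described concrete, here is a minimal sketch using Spring MVC's MockMvc in standalone mode (spring-test and JUnit 5 assumed). The controller, URL, view name and model attribute are invented for the example; in a real test the controller would be wired to the real layers below it, nothing mocked.

import org.junit.jupiter.api.Test;
import org.springframework.stereotype.Controller;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*;

class LoginControllerIntegrationTest {

    // Trivial stand-in controller; in the scenario above this would be the real
    // controller wired to the real service/repository layers.
    @Controller
    static class LoginController {
        @PostMapping("/login")
        String login(@RequestParam("user") String user,
                     @RequestParam("password") String password,
                     Model model) {
            model.addAttribute("account", user); // a real controller would validate the password
            return "welcome";
        }
    }

    @Test
    void successfulLoginPopulatesModelAndSelectsView() throws Exception {
        MockMvc mockMvc = MockMvcBuilders.standaloneSetup(new LoginController()).build();

        mockMvc.perform(post("/login").param("user", "alice").param("password", "secret"))
               .andExpect(status().isOk())
               .andExpect(view().name("welcome"))              // direct output: view name
               .andExpect(model().attributeExists("account")); // direct output: model data
        // The rendered HTML payload is deliberately not asserted.
    }
}

Note that no servlet container or browser is started; the entry point is the controller method itself.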
Example Acceptance Test (MVC webapp)
Setup: The whole webapp is bootstrapped (as it would be seen by the end user).
Test entry: the HTTP call itself. A browser can be involved as test executor (e.g. Selenium).
Assert targets: the test output is the complete rendered response (HTML and other artifacts like JavaScript). Asserts on the database (e.g. that data was inserted) can also be included.
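And a matching sketch of the acceptance-test style, using the Selenium WebDriver Java bindings (JUnit 5 assumed). The URL, element locators and expected text are invented, and a ChromeDriver binary is assumed to be available on the machine running the test.

import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.jupiter.api.Assertions.assertTrue;

class LoginAcceptanceTest {

    @Test
    void userCanLogInThroughTheBrowser() {
        WebDriver driver = new ChromeDriver(); // the whole webapp must already be deployed
        try {
            driver.get("http://localhost:8080/login");              // assumed URL
            driver.findElement(By.name("user")).sendKeys("alice");  // assumed form fields
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // Assert on the fully rendered response, as the end user would see it.
            assertTrue(driver.getPageSource().contains("Welcome, alice"));
            // Database asserts (e.g. a login audit row was inserted) could follow here.
        } finally {
            driver.quit();
        }
    }
}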
Pitfalls of double testing (both integration + acceptance)
I see major problems when including both test styles:
Controller tests are close to general system behaviour (e.g. submit login form, password validation, successful login). This is very close to what an acceptance test would do. In the end, "double testing" could happen, which is highly inefficient.
Controller tests are more white-box tests and tend to be brittle because they rely on many dependencies from lower layers (in contrast to very fine-grained unit tests). Because of this, setting up and maintaining controller tests takes a lot of effort, whereas acceptance tests, where the whole application is started as a black box, are more straightforward to set up and have the advantage of being closer to production.
The two points above lead me to the conclusion that if your tester has a good automation strategy, you should skip the integration tests done by developers; they should focus more on unit tests.
What do you think? Can you explain your test strategy? Do you have good/bad experiences including both test styles?
Thanks for reading my long question ;)
EDIT: "Acceptance testing" seems to be more common jargon than "end-to-end", so I switched the terms.
We do Acceptance TDD at my work.
When I first started I was told I could implement whatever policies I wanted, so long as the work was completed in a timely and predictable fashion. Having done unit testing in the past, I realized that one of the problems we always ran into was integration bugs. Some could take quite a long time to fix and were often a surprise. We would run into subtle bugs we had introduced while extending the app's functionality.
I decided to avoid the issues I had run into in the past by focusing more on the end-result features that we were supposed to deliver. We would write tests that tested the acceptance behaviour, not just at the unit level, but at the whole-system level. I wanted to do that because at the end of the day I don't care whether the unit works correctly, I care that the entire system works correctly. We found the following benefits to doing automated acceptance tests.
We NEVER regress end user functionality because it is explicitly tested for.
Refactors are easier because we don't have to update a bunch of unit tests. We just have to make sure our acceptance test still passes.
The integration of the "units" are implicitly covered.
The tests become a very clear definition of required end user functionality.
Integration issues are exposed earlier and are less of a surprise.
Some of the trade-offs of doing it this way:
Tests can be more complex in terms of usage of mocks, stubs, fixtures, etc.
Tests are less useful for narrowing down which "unit" has the defect.
We also make our test suite runnable via a Continuous Integration server which tags and packages for deployment. It runs with every commit as with most CI setups.
With regard to your points/concerns:
Setup: The whole webapp is bootstrapped (as it would be seen by the end user).
One compromise we do tend to make is to run the test in the same process space, a la unit tests. Our entry point is the top of the app stack. We don't bother to try and run the app as a server because that adds to the complexity and doesn't add much in terms of coverage.
Test entry: the HTTP call itself. A browser can be involved as test executor (e.g. Selenium).
All of our automated tests are driven by simulating an HTTP GET, POST, PUT, or DELETE. We don't actually use a browser for this, though; a call into the top of the app stack, routed the way the particular HTTP call would be mapped, works just fine.
Assert targets: the test output is the complete rendered response (HTML and other artifacts like JavaScript). Asserts on the database (e.g. that data was inserted) can also be included.
I think this is where automated acceptance tests really shine. What you assert is the end-user functionality you want to guarantee that you are implementing.
Controller tests are close to general system behaviour (e.g. submit login form, password validation, successful login). This is very close to what an end-to-end test would do. In the end, "double testing" could happen, which is highly inefficient.
We actually do very little unit testing and rely almost solely on our automated acceptance tests. As a result we don't have much in the way of double testing.
Controller tests are more white-box tests and tend to be brittle because they rely on many dependencies from lower layers (in contrast to very fine-grained unit tests). Because of this, setting up and maintaining controller tests takes a lot of effort, whereas end-to-end tests, where the whole application is started as a black box, are more straightforward to set up and have the advantage of being closer to production.
They may have more dependencies, but those can be mitigated through the use of mocks and fixtures. We also usually implement our tests with two modes of execution: an unmanaged mode where the tests run fully wired to the network, databases, etc., and a managed mode where the tests run with the unmanaged resources mocked out. You are correct, though, in your assertion that the tests can be a lot more effort to create and maintain.
Developers should do integration tests of the parts they changed or implemented. By integration tests I mean that they should see whether the functionality they implemented really works as expected. If you don't do this, how do you know that what you just finished really works? Unit tests by themselves are not the final goal - it is the product that matters.
This should also be done in order to speed up bug finding. After all, integration tests take a long time to execute (at least in my company; because of the complexity it takes 1-2 days to run all of them). Finding bugs earlier is better than later.
Having integration tests (and, indeed, unit tests) that test behaviour that is also tested by a system test helps debugging, by narrowing the location of a defect. If your system has components A-B-C and fails a system test-case, but the assembly A-B passes a similar integration test-case, the defect is probably in component C.
Considering that this post is dealing with testing pitfalls, I would like to make you aware of my most recent book, Common System and Software Testing Pitfalls, which was published last month by Addison Wesley. It documents 92 testing pitfalls organized into 14 categories. Each pitfall includes description, potential applicability, characteristic symptoms, potential negative consequences, potential causes, and recommendations for avoiding the pitfall and climbing out if you have already fallen in. Check it out on Amazon.com at: http://www.amazon.com/Common-System-Software-Testing-Pitfalls/dp/0133748553/ref=la_B001HQ006A_1_1?s=books&ie=UTF8&qid=1389613893&sr=1-1
I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?
What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?
An example: Think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile.
To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
What I would do in order to test the "real" application: making the HTTP request (either manually or automated) with some ids from the database and then looking in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was more of an anti-pattern.
IMO (and I have no literature to back me on this), the key difference between our various forms of testing is scope:
Unit testing is testing isolated pieces of functionality [typically a method or stateful class]
Integration testing is testing the interaction of two or more dependent pieces [typically a service and consumer, or even a database connection, or connection to some other remote service]
System integration testing is testing of a system end to end [a special case of integration testing]
If you are familiar with unit testing, then it should come as no surprise that there is no such thing as a perfect or 'magic-bullet' test. Integration and system integration testing is very much like unit testing, in that each is a suite of tests set to verify a certain kind of behavior.
For each test, you set the scope, which then dictates the input and expected output. You then execute the test and compare the actual result to the expected one.
In practice, you may have a good idea how the system works, and so writing typical positive and negative path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.
Unfortunately, this means unexpected scenarios will crop up in Quality Assurance [QA], PreProduction [PP], and Production [Prod] cycles. At which point, your attempts to replicate these scenarios in dev should make their way into your integration and system integration suites as automated tests.
Hope this helps, :)
PS: pet peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because NUnit or MSTest was used to automate them...
What you describe is indeed integration testing (more or less). And no, it is not an anti-pattern, but a necessary part of the software development lifecycle.
Any reasonably complicated program is more than the sum of its parts. So however well you unit test it, you still don't have much of a clue about whether the whole system is going to work as expected.
There are several reasons why this is so:
unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program are working together in real life
the "unit tester hat" easily limits one's view, so there are whole classes of factors which the developers simply don't recognize as something that needs to be tested*
even if they do, there are things which can't be reasonably tested in unit tests - e.g. how do you test whether your app server survives under high load, or if the DB connection goes down in the middle of a request?
* One example I just read from Luke Hohmann's book Beyond Software Architecture: in an app which applied strong antipiracy defense by creating and maintaining a "snapshot" of the IDs of HW components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app in 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they took it for granted that the machine has a network card whose MAC address can be incorporated into the snapshot...
What I would do in order to test the "real" application: making the HTTP request (either manually or automated) with some ids from the database and then looking in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic?
Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both purposes are desired in the SDLC. If possible, factor the code used by both unit and integration tests out into a common library. I would also try to keep separate projects for your unit and integration tests, because your unit tests should be run separately (fast and with no dependencies), while your integration tests will be more brittle and break more often, so you will probably have a different policy for running and maintaining them.
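One lightweight way to keep the two suites apart - an assumption about tooling, not something from the original answer - is to tag the slow tests, for example with JUnit 5, and run the tagged group in its own build step:

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

@Tag("integration") // excluded from the fast default run, executed in a separate, slower build step
class RecordExportIntegrationTest {

    @Test
    void exportedFileContainsTheRequestedRecords() {
        // the full HTTP-to-filesystem scenario would go here
    }
}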
Is this what is called an "integration test"?
Yes indeed it is.
In an integration test, just as in a unit test, you need to validate what happened in the test. In your example you specified an OutfileWriter, so you would need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want to have something like:
import java.nio.file.*;
import java.util.List;

class OutFileValidator {
    // Open the file, read its lines, and compare them with the expected records.
    boolean isCorrect(String fileName, List<String> expectedRecords) throws java.io.IOException {
        return Files.readAllLines(Paths.get(fileName)).equals(expectedRecords);
    }
}
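The integration test itself would then issue the HTTP request with some known ids and finish with something like assertTrue(new OutFileValidator().isCorrect(outFilePath, expectedRecords)), so the file-checking logic stays reusable across tests (the names here are, of course, only illustrative).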
You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications.
YouTube Video
A very rough summary of a relevant piece: you want to use the smallest testing technique you can for any specific verification. To put the same idea another way, you are trying to minimize the time required to run all of the tests without sacrificing any information.
The larger tests, therefore are mostly about making sure that the plumbing is right - is Tab A actually in slot A, rather than slot B; do both components agree that length is measured in meters, rather than feet, and so on.
There's going to be duplication in which code paths are executed, and possibly you will reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to include the same level of combinatoric explosion that would happen at a unit level.
Driving your TDD with BDD would cover most of this for you. You can use Cucumber / SpecFlow, with WatiR / WatiN. Each feature has one or more scenarios, and you work on one scenario (behaviour) at a time; when it passes, you move on to the next scenario until the feature is complete.
To complete a scenario, you have to use TDD to drive out the code necessary to make each step in the current scenario pass. The scenarios are agnostic of your back-end implementation, yet they verify that your implementation works; if there is something that isn't working in the web app for that feature, the behaviour needs to be captured in a scenario.
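As a rough sketch of how this looks in practice (Cucumber-JVM assumed; the feature wording, step texts and the TestBrowser helper are invented for the example), a scenario and its step definitions might be:

// Gherkin scenario (in a .feature file) - wording is illustrative only:
//   Scenario: Successful login
//     Given a registered user "alice" with password "secret"
//     When she logs in through the login form
//     Then she sees her account dashboard

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class LoginSteps {
    private final TestBrowser browser = new TestBrowser(); // hypothetical WatiN/WatiR-style driver

    @Given("a registered user {string} with password {string}")
    public void aRegisteredUser(String name, String password) {
        browser.createUser(name, password);
    }

    @When("she logs in through the login form")
    public void sheLogsInThroughTheLoginForm() {
        browser.submitLoginForm();
    }

    @Then("she sees her account dashboard")
    public void sheSeesHerAccountDashboard() {
        assertTrue(browser.currentPageContains("Dashboard"));
    }
}

Each step then becomes the starting point for a TDD cycle on the code needed to make it pass.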
You can of course use integration testing, as others pointed out.
I have a project in .NET and I want to test it.
But I don't know anything about testing and its methods.
How can I go ahead with testing?
Which method is better for me to begin with?
Is there anything that helps decide which testing method to use for the best result?
There is no "right" or "wrong" in testing. Testing is an art and what you should choose and how well it works out for you depends a lot from project to project and your experience.
But as a professional testing expert, my suggestion is that you have a healthy mix of automated and manual testing.
AUTOMATED TESTING
Unit Testing
Use NUnit to test your classes, functions and the interaction between them.
http://www.nunit.org/index.php
Automated Functional Testing
If it's possible, you should automate a lot of the functional testing. Some frameworks have functional testing built into them; otherwise you have to use a tool for it. If you are developing web sites/applications, you might want to look at Selenium.
http://www.peterkrantz.com/2005/selenium-for-aspnet/
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to view as pass/fail. A human can use their intelligence to find faults and raise questions that appear while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn - and fun!
http://www.satisfice.com/articles/et-article.pdf
Since the scale of your project is not clear, all you need to do is make sure:
Your tests are trustworthy - you should know they are telling you the truth.
Repeatable
Consistent - if you repeat a test with the same test data, it should produce the same output.
They prove you are covering all the problem areas.
To get this you can use:
The standard way: NUnit, MbUnit (my favourite) or xUnit (haven't got around to working with it) or MSTest
Quick and dirty: a console app (not cool, not so flexible)
If you are using .Net, I'd recommend checking out NUnit. It's a great testing framework to use.
As far as learning about the "testing method", there are many different ways to test an application. When using a tool like NUnit, for example, you are writing automated tests which run without user interaction. In these types of tests, you typically write tests for each of the public methods in your application, and you ensure that given known inputs, these methods produce the expected outputs. Over time as the application changes (via enhancements, bug fixes, etc.) you have a core set of tests that you can re-run to ensure nothing breaks as a result of the changes. You can also do failure testing to ensure that given an invalid set of inputs to a method, it throws the proper exceptions, etc.
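To make that concrete, a pair of such tests might look like the sketch below (shown as JUnit for brevity; NUnit's [Test], Assert.AreEqual and Assert.Throws are the direct equivalents, and PriceCalculator is a made-up class for the example):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class PriceCalculatorTest {

    // Known inputs must produce the expected output.
    @Test
    void addsSalesTaxToNetPrice() {
        PriceCalculator calculator = new PriceCalculator(0.20); // 20% tax, hypothetical API
        assertEquals(120.0, calculator.grossPrice(100.0), 0.001);
    }

    // Failure testing: invalid input must raise the proper exception.
    @Test
    void rejectsNegativeNetPrice() {
        PriceCalculator calculator = new PriceCalculator(0.20);
        assertThrows(IllegalArgumentException.class, () -> calculator.grossPrice(-1.0));
    }
}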
Besides automated testing with a tool like NUnit, it's also important to ensure that your end users test the product. "End users" here could be a Quality Assurance group in your company, or it could be the actual customer. The point is that you need to ensure that someone actually uses your application to make sure it works as expected, because no matter how good the automated tests are, there will still be many things you won't think of that your users will discover. One way to approach this type of testing is to write test scenarios, and have your users execute them to make sure the scenario results in the correct behavior.
I think the best testing approach combines both of the above, namely automated testing and user testing (with documented test scenarios).
Unit testing and ASP.NET web applications are an ambiguous point in my group. More often than not, good testing practices fall through the cracks and web applications end up going live for several years with no tests.
The cause of this pain point generally revolves around the hassle of writing UI automation mid-development.
How do you or your organization integrate TDD best practices into web application development?
Unit testing will be achievable if you separate your layers appropriately. As Rob Cooper implied, don't put any logic in your WebForm other than logic to manage your presentation. Everything else - business logic and the persistence layer - should be kept in separate classes, and then you can test those individually.
To test the GUI, some people like Selenium. Others complain that it is a pain to set up.
I layer out the application and at least unit test from the presenter/controller (whichever is your preference, mvc/mvp) to the data layer. That way I have good test coverage over most of the code that is written.
I have looked at FitNesse, Watin and Selenium as options to automate the UI testing but I haven't got around to using these on any projects yet, so we stick with human testing. FitNesse was the one I was leaning toward but I couldn't introduce this as well as introducing TDD (does that make me bad? I hope not!).
This is a good question, one that I will be subscribing to :)
I am still relatively new to web dev, and I too am looking at a lot of code that is largely untested.
For me, I keep the UI as light as possible (normally only a few lines of code) and test the crap out of everything else. At least I can then have some confidence that everything that makes it to the UI is as correct as it can be.
Is it perfect? Perhaps not, but at least it is still quite highly automated, and the core code (where most of the "magic" happens) still has pretty good coverage.
I would generally avoid testing that involves relying on UI elements. I favor integration testing, which tests everything from your database layer up to the view layer (but not the actual layout).
Try to start a test suite before writing a line of actual code in a new project, since it's harder to write tests later.
Choose carefully what you test - don't mindlessly write tests for everything. Sometimes it's a boring task, so don't make it harder. If you write too many tests, you risk abandoning that task under the weight of time-consuming maintenance.
Try to bundle as much functionality as possible into a single test. That way, if something goes wrong, the errors will propagate anyway. For example, if you have a digest-generating class - test the actual output, not every single helper function.
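For example, for the digest-generating class just mentioned, a single test against the final output might look like this (DigestGenerator and the expected value are made up; the expected digest would be captured once from a verified run):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DigestGeneratorTest {

    // Captured once from a run whose output was verified by hand (placeholder value).
    private static final String EXPECTED_DIGEST = "digest-recorded-from-verified-run";

    @Test
    void producesTheExpectedDigestForAKnownInput() {
        DigestGenerator generator = new DigestGenerator(); // hypothetical class under test
        // One assertion on the end result covers the helper functions implicitly.
        assertEquals(EXPECTED_DIGEST, generator.digestOf("known sample input"));
    }
}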
Don't trust yourself. Assume that you will always make mistakes, and so you write tests to make your life easier, not harder.
If you are not feeling good about writing tests, you are probably doing it wrong ;)
A common practice is to move all the code you can out of the codebehind and into an object you can test in isolation. Such code will usually follow the MVP or MVC design patterns. If you search on "Rhino Igloo" you will probably find the link to its Subversion repository. That code is worth a study, as it demonstrates one of the best MVP implementations on Web Forms that I have seen.
Your codebehind will, when following this pattern, do two things:
Pass all user actions on to the presenter.
Render data provided by the presenter.
Unit testing the presenter should be trivial.
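A sketch of what such a presenter test can look like, shown here with JUnit and Mockito (with Web Forms you would use NUnit plus a mocking library such as Rhino Mocks in the same way); LoginView, LoginPresenter and their methods are invented for the example:

import org.junit.jupiter.api.Test;
import static org.mockito.Mockito.*;

class LoginPresenterTest {

    @Test
    void showsAnErrorWhenThePasswordIsWrong() {
        // The view is the interface the codebehind implements; here it is simply mocked.
        LoginView view = mock(LoginView.class);
        LoginPresenter presenter = new LoginPresenter(view);

        presenter.loginRequested("alice", "wrong-password");

        // The presenter, not the codebehind, decides what the view should display.
        verify(view).showError("Invalid user name or password");
    }
}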
Update: Rhino Igloo can be found here: https://svn.sourceforge.net/svnroot/rhino-tools/trunk/rhino-igloo/
There have been attempts at getting Microsoft's free UI Automation (included in .NET Framework 3.0) to work with web applications (ASP.NET). A German company called Artiso has written a blog entry that explains how to achieve that (link).
However, their blog post also links to an MSDN webcast that explains the UI Automation framework with WinForms, and after I had a look at this, I noticed you need the AutomationId to get a reference to the respective controls. In web applications, however, the controls do not have an AutomationId.
I asked Thomas Schissler (Artiso) about this and he explained that this was a major drawback of Internet Explorer. He referenced an older Microsoft technology (MSAA) and was himself hoping that IE8 would do this better.
However, I was also giving Watin a try and it seems to work pretty well. I even liked Wax, which allows you to implement simple test cases via Microsoft Excel worksheets.
Ivonna can unit test your views. I'd still recommend moving most of the code to other parts. However, some code just belongs there, like references to controls or control event handlers.