How to organize integration tests? - unit-testing

When writing unit tests, I usually have one test class per production class, so my hierarchy looks something like this:
src/main
    package1
        classA
        classB
    package2
        classC
src/test
    package1
        classATests
        classBTests
    package2
        classCTests
However, when doing integration tests, the organization becomes less rigid. For example, I may have a test class that tests classA and classB in conjunction. Where would you put it? What about a test class that tests classA, classB and classC together?
Also, integration tests usually require external properties or configuration files. Where do you place them and do you use any naming convention for them?

Our integration tests tend to be organised the same way our specifications are, and they tend to be grouped by category and/or feature.

I'd concur with f4's answer. Tests at this level (higher than unit tests) usually have no correlation with particular classes. Your tests should stick to the project requirements and specifications.
In case you really need to develop a testing project tailored to your test requirements, I'd recommend the following approach: a separate project with packages per requirement or user story (depending on your approach to managing requirements).
For example:
src/itest
    package1 (corresponds to story #1)
        classA (test case 1)
        classB (test case 2)
    package2 (corresponds to story #2)
        classC (test case 3)
    packageData (common test data and utilities)
However, keep in mind that integration or system-level testing is usually a complicated task, and its scope can easily grow broader than a dedicated testing project can cover. Be ready to consider third-party test automation tools, because at the integration or system level they are often a more efficient approach than developing a tailored testing package.

Maybe create an integration tests directory under src/test? Sure, for integration tests the separation becomes less clear, but there's something that groups A, B and C together, no? I'd start with this and see how things go. It's tough to come up with a perfect solution right away, and an "OK" solution is better than no solution.

It seems that your integration tests are higher-level unit tests, since you still bind them to one or more classes. Try to pick the class that (transitively) depends on all the others in the group and associate the test with that class.
If you have true integration tests, then association with concrete classes is of little value. Such tests are classified by application subject areas (domains) and by types of functionality. For example, domains are orders, shipments, invoices, entitlements, etc., and functionality types are transactional, web, messaging, batch, etc. Their permutations would give you a nice first cut of how to organize integration tests.
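If your test framework supports it, that grid can be captured with tags rather than a directory layout. A minimal sketch, assuming JUnit 5 and hypothetical domain/functionality names:
    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    // Hypothetical example: classified by domain ("orders") and by functionality
    // type ("messaging"), so the build can select either slice of the suite.
    @Tag("orders")
    @Tag("messaging")
    class OrderMessagingIT {

        @Test
        void shipmentRequestIsPublishedWhenOrderIsPlaced() {
            // ...drive the order service and assert on the outgoing message...
        }
    }
The build tool can then run, say, only the "orders" tests or only the "messaging" tests without any package reshuffling.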

I have found that when doing TDD it is not always the case that unit tests have a 1:1 relationship between classes and tests. If you force that, you will have a hard time refactoring. In fact, after some refactoring I usually end up with about 50% 1:1 couplings and 50% tests that either link to several classes, or clusters of tests that link to a single class.
Integration tests happen if you try to prove that something is or isn't working. This happens either when you're worried because you need to deliver something, or if you find a bug. Trying to get full coverage from integration tests is a bad idea (to put it mildly).
The most important thing is that a test needs to tell a story. In BDD-ish terms: given this context, when doing this, then that should happen. The tests should be examples of how you intend people to use the unit, API, application, service, ...
The granularity and organisation of your tests will follow from your storyline. It should not be designed with simplistic rules up front.
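As a rough illustration of that given/when/then storyline at the unit level (JUnit assumed, all names hypothetical):
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class WithdrawalStoryTest {

        @Test
        void givenAnAccountInCredit_whenTheCustomerWithdraws_thenTheBalanceIsReduced() {
            // given
            Account account = new Account(100);
            // when
            account.withdraw(30);
            // then
            assertEquals(70, account.balance());
        }

        // Minimal hypothetical collaborator so the sketch is self-contained.
        static class Account {
            private int balance;
            Account(int openingBalance) { this.balance = openingBalance; }
            void withdraw(int amount) { balance -= amount; }
            int balance() { return balance; }
        }
    }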

Related

Unit tests as Cucumber acceptance tests

We've got our application unit tested, and there are two system tests using Robot that verify the basic functionality. There is also a bunch of requirements (acceptance tests) from the business, written as Gherkin .feature files, that we run using Cucumber.
We started implementing them as integration tests using REST Assured, and everything was fine for the basic functionality. The problem is with implementing the more detailed scenarios: some things that we need to verify are not persisted in the DB or returned by the endpoint.
It would be really easy to test that functionality by implementing those scenarios as unit tests but I'm not sure if that is a good practice?
I think there needs to be a back-and-forth between yourselves and the business. You'll usually end up with a couple of integration tests to prove the feature works end-to-end in a happy path/negative scenario, but you won't go through all the edge cases since that is costly (time-consuming, especially when running the integration tests). That's what the unit tests are for, covering every scenario. Convey that to the business and they might learn to trust your judgement instead of implementing each and every one of the acceptance criteria as an integration test.
It would be really easy to test that functionality by implementing those scenarios as unit tests but I'm not sure if that is a good practice?
If they're not implemented, it's definitely good practice to do so as soon as possible, but I'd pay attention to the test's scope. I also see no problem in using Cucumber to aid your unit testing effort, but don't fall into the trap of turning your unit tests into bigger-scope acceptance tests. Keep them as direct as possible.
You mention the complexity of some of these scenarios:
some things that we need to verify are not persisted in DB or returned by the endpoint
Then the code should probably be made more testable/maintainable at the different levels. You can achieve this with a few techniques. Depending on the language you use, you could use a tool similar to Mockito's spy to observe an object's interactions that you otherwise have no way of checking, but it's only wise to do so at the appropriate test scope, such as unit and mock tests (see the sketch at the end of this answer).
Also, consider that it's perfectly fine if you cover different parts of a given flow with different test levels. Ideally, you'll cover most of it with unit tests.
You can keep the most complex parts covered with unit tests and still maintain the higher-level acceptance scenarios the business asks for, just without deep assertions in those tests.
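For the "not persisted in the DB or returned by the endpoint" cases, a spy-based check at the unit level might look like this sketch (Mockito assumed; all class names are made up for the example):
    import static org.mockito.Mockito.spy;
    import static org.mockito.Mockito.verify;

    import org.junit.jupiter.api.Test;

    class OrderServiceSpyTest {

        @Test
        void placingAnOrderNotifiesTheWarehouse() {
            // Spy on the real collaborator so its interaction can be verified even
            // though nothing observable ends up in the DB or in the response body.
            WarehouseGateway warehouse = spy(new WarehouseGateway());
            OrderService service = new OrderService(warehouse);

            service.place("SKU-1");

            verify(warehouse).reserve("SKU-1");
        }

        // Hypothetical collaborators, kept trivial for the sketch.
        static class WarehouseGateway {
            void reserve(String sku) { /* would call the real warehouse */ }
        }

        static class OrderService {
            private final WarehouseGateway warehouse;
            OrderService(WarehouseGateway warehouse) { this.warehouse = warehouse; }
            void place(String sku) { warehouse.reserve(sku); }
        }
    }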
I interpret your question as "Is it ok to use unit tests at the same time as you use BDD?"
My take is that these are two techniques that complement each other. If something is easier using one technique, then use that technique in that particular case.
Use whatever supports your delivery of working software. Working software is the goal, not using a specific tool.

unit test logic involving multiple classes

I have a subscription class with an assess method.
This method gets the plan for the subscription (as a model), which then gets the fees for it.
With this, the subscription constructs an invoice object containing the fees that have not been billed since the last billing date.
I would like to test this method, but it seems to me that this would not be a unit test, since it would involve many objects with different dependencies.
How would you test this method ?
It is not a unit test for purists (rather an integration test), but it may still be a perfectly fine test :-) And technically you can run it with JUnit (or whichever your favourite unit testing framework is), so IMHO the difference is only in terminology.
If you write your code from scratch, it is indeed best to start by writing unit tests for individual methods in isolation (mocking out dependencies), then at the next stage maybe add higher-level integration tests such as the one you describe, to verify that your classes work together well.
However, in legacy projects (i.e. lots of inherited code without tests), it is often not feasible to start with fine grained low-level unit tests; instead it is more efficient to write higher-level, more complex tests which clarify and "lock" the behaviour of a larger component.
Unfortunately, the majority of projects in this industry are, by far, legacy :-( For me, in these cases, pragmatism wins over purity of approach hands down :-)
As it stands this sounds like an integration test, not a unit test.
If you want to unit test the methods involved, you should mock the dependencies (for example, use a mock plan to return known fees). Then you can write unit tests separately to ensure the invoice class itself works.
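As a sketch of that approach (Mockito assumed; the class and method names below are guesses based on the question, not the real API):
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    class SubscriptionAssessTest {

        @Test
        void assessBuildsAnInvoiceFromUnbilledFees() {
            // Stub the plan so the test controls exactly which unbilled fees exist.
            Plan plan = mock(Plan.class);
            when(plan.feesSince("2024-01-01")).thenReturn(List.of(10, 25));

            Subscription subscription = new Subscription(plan, "2024-01-01");

            assertEquals(35, subscription.assess().total());
        }

        // Hypothetical shapes of the collaborators, only what the sketch needs.
        interface Plan { List<Integer> feesSince(String lastBillingDate); }
        record Invoice(int total) { }
        static class Subscription {
            private final Plan plan;
            private final String lastBillingDate;
            Subscription(Plan plan, String lastBillingDate) {
                this.plan = plan;
                this.lastBillingDate = lastBillingDate;
            }
            Invoice assess() {
                int total = plan.feesSince(lastBillingDate).stream().mapToInt(Integer::intValue).sum();
                return new Invoice(total);
            }
        }
    }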

BDD and functional tests

I am starting to buy into BDD. Basically, as I understand it, you write a scenario that describes the acceptance criteria for a certain story. You start with simple tests, from the outside in, using mocks in place of classes you have not implemented yet. As you progress, you should replace the mocks with real classes. From Introduction to BDD:
At first, the fragments are implemented using mocks to set an account to be in credit or a card to be valid. These form the starting points for implementing behaviour. As you implement the application, the givens and outcomes are changed to use the actual classes you have implemented, so that by the time the scenario is completed, they have become proper end-to-end functional tests.
My question is: When you finish implementing a scenario, should all classes you use be real, like in integration tests? For example, if you use DB, should your code write to a real (but lightweight in-memory) DB? In the end, should you have any mocks in your end-to-end tests?
Well, it depends :-) As I understand it, the tests produced by BDD are still unit tests, so you should use mocks to eliminate dependencies on external factors like the DB.
In full-fledged integration/functional tests, however, you should obviously test against the whole production system, without any mocks.
Integration tests might contain stubs/mocks to fake the code/components outside the modules that you are integrating.
However, IMHO an end-to-end test should mean no stubs/mocks along the way, but production code only. In other words, if mocks are present, it is not really an end-to-end test.
Yes, by the time a scenario runs, ideally all your classes will be real. A scenario exercises the behaviour from a user's point of view, so the system should be as a user would see it.
In the early days of BDD we used to start with mocks in the scenarios. I don't bother with this any more, because it's a lot of work to keep mocking as you go down the levels. Instead I will sometimes do things like hard-code data or behaviour if it lets me get feedback from the stakeholders more quickly.
We still keep mocks in the unit tests though.
For things like databases, sure, you can use an in-memory DB or whatever helps you get feedback faster. At some point you should probably run your scenarios on a system that's as close to production as possible. If this is too slow, you might do it overnight instead of as part of your regular build cycle.
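As a rough sketch of the in-memory option (this assumes the H2 driver is on the test classpath; the table is made up):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class InMemoryDbExample {
        public static void main(String[] args) throws Exception {
            // An in-memory H2 database: created on connect, gone when the JVM exits.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:scenario;DB_CLOSE_DELAY=-1")) {
                conn.createStatement().execute("CREATE TABLE account(id INT PRIMARY KEY, balance DECIMAL)");
                conn.createStatement().execute("INSERT INTO account VALUES (1, 100.00)");
                try (ResultSet rs = conn.createStatement().executeQuery("SELECT balance FROM account WHERE id = 1")) {
                    rs.next();
                    System.out.println("balance = " + rs.getBigDecimal(1)); // balance = 100.00
                }
            }
        }
    }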
As for what you "should" do, writing the right code is far more tricky than writing the code right. I worry about using my scenarios to get feedback from the stakeholders and users before I worry about how close my environment is to a production environment. When you get to the point where changes are deployed every couple of weeks, sure, then you probably want more certainty that you're not introducing any bugs.
Good luck!
I agree with Peter and ratkok. I think you keep the mocks forever, so you always have unit tests.
Separately, it is appropriate to additionally have integration tests (no mocks, use a DB, etc. etc.).
You may even find in-betweens helpful at times (mock one piece of depended-on code (DOC), but not another).
I've only recently been looking into BDD, and in particular jBehave. I work in fairly large enterprises with a lot of waterfall, ceremony-oriented people. I'm looking at BDD as a way to take the business's use cases and turn them directly into tests, which the developer can then implement as either unit tests or integration tests.
BDD seems to me to be not just a way to help drive the developer's understanding of what the business wants, but also a way to ensure, as much as possible, that those requirements are accurately represented.
My view is that if you are dealing with mocks then you are doing unit tests. You need both: unit tests to check the details of a class's operation, and integration tests to check that the class plays well with others. I find developers often get confused between the two, but it's best to be as clear as possible and keep them separate from each other.

How do I write useful unit tests for a mostly service-oriented app?

I've used unit tests successfully for a while, but I'm beginning to think they're only useful for classes/methods that actually perform a fair amount of logic - parsers, doing math, complex business logic - all good candidates for testing, no question. I'm really struggling to figure out how to use testing for another class of objects: those which operate mostly via delegation.
Case in point: my current project coordinates a lot of databases and services. Most classes are just collections of service methods, and most methods perform some basic conditional logic, maybe a for-each loop, and then invoke other services.
With objects like this, mocks are really the only viable strategy for testing, so I've dutifully designed mocks for several of them. And I really, really don't like it, for the following reasons:
Using mocks to specify expectations for behavior makes things break whenever I change the class implementation, even if it's not the sort of change that ought to make a difference to a unit test. To my mind, unit tests ought to test functionality, not specify "the method needs to do A, then B, then C, and nothing else, in that order." I like tests because I am free to change things with the confidence that I'll know if something breaks - but mocks just make it a pain in the ass to change anything.
Writing the mocks is often more work than writing the classes themselves, if the intended behavior is simple.
Because I'm using a completely different implementation of all the services and component objects in my tests, in the end all my tests really verify is the most basic skeleton of the behavior: that "if" and "for" statements still work. Boring. I'm not worried about those.
The core of my application is really how all the pieces work together, so I'm considering ditching unit tests altogether (except for places where they're clearly appropriate) and moving to external integration tests instead - harder to set up, covering fewer possible cases, but actually exercising the system as it is meant to be run.
I'm not seeing any cases where using mocks is actually useful.
Thoughts?
If you can write integration tests that are fast and reliable, then I would say go for it.
Use mocks and/or stubs only where necessary to keep your tests that way.
Notice, though, that using mocks is not necessarily as painful as you described:
Mocking APIs let you use loose/non-strict mocks, which allow any invocation from the unit under test to its collaborators. Therefore, you don't need to record all invocations, only those which need to produce some required result for the test, such as a specific return value from a method call (see the sketch after this list).
With a good mocking API, you will have to write little test code to specify mocking. In some cases you may get away with a single field declaration, or a single annotation applied to the test class.
You can use partial mocking so that only the necessary methods of a service/component class are actually mocked for a given test. And this can be done without specifying said methods in strings.
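To illustrate the loose-mock point (Mockito assumed; its mocks are non-strict by default, and all names below are hypothetical): only the call whose return value matters is stubbed, and any other calls the unit makes to the collaborator are simply allowed.
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    class LooseMockExampleTest {

        interface CustomerService {
            boolean isActive(String customerId);
            void audit(String customerId);   // called by the unit, but irrelevant to this test
        }

        static class OrderChecker {
            private final CustomerService customers;
            OrderChecker(CustomerService customers) { this.customers = customers; }
            boolean canOrder(String customerId) {
                customers.audit(customerId);           // allowed silently by the loose mock
                return customers.isActive(customerId);
            }
        }

        @Test
        void activeCustomersCanOrder() {
            CustomerService customers = mock(CustomerService.class); // lenient by default
            when(customers.isActive("c42")).thenReturn(true);        // only this call is specified

            assertTrue(new OrderChecker(customers).canOrder("c42"));
        }
    }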
To my mind, unit tests ought to test functionality, not specify "the method needs to do A, then B, then C, and nothing else, in that order."
I agree. Behavior testing with mocks can lead to brittle tests, as you've found. State-based testing with stubs reduces that issue. Fowler weighs in on this in Mocks Aren't Stubs.
Writing the mocks is often more work than writing the classes themselves
For mocks or stubs, consider using an isolation (mocking) framework.
in the end, all my tests really verify is the most basic skeleton of the behavior: that "if" and "for" statements still work
Branches and loops are logic; I would recommend testing them. There's no need to test getters and setters, one-line pure delegation methods, and so forth, in my opinion.
Integration tests can be extremely valuable for a composite system such as yours. I would recommend them in addition to unit tests, rather than instead of them.
You'll definitely want to test the classes underlying your low-level or composing services; that's where you'll see the biggest bang for the buck.
EDIT: Fowler doesn't use the "classical" term the way I think of it (which likely means I'm wrong). When I talk about state-based testing, I mean injecting stubs into the class under test for any dependencies, acting on the class under test, then asserting against the class under test. In the pure case I would not verify anything on the stubs.
Writing integration tests is a viable option here, but they should not replace unit tests. Since you stated you're writing the mocks yourself, I suggest using an isolation framework (aka mocking framework), which I am pretty sure will be available for your environment too.
Since you've posted several questions in one, I'll answer them one by one.
How do I write useful unit tests for a mostly service-oriented app?
Do not rely on unit tests for a "mostly service-oriented app"! Yes, I said that in a sentence. These types of apps are meant to do one thing: integrate services. It's therefore more pressing that you write integration tests, rather than unit tests, to verify that the integration is working correctly.
I'm not seeing any cases where using mocks is actually useful.
Mocks can be extremely useful, but I wouldn't use them on controllers. Controllers should be covered by integration tests. Services can be covered by unit tests but it may be wise to have them as separate modules if the amount of testing slows down your project.
Thoughts?
For me, I tend to think about a few things:
What is my application doing?
How expensive would it be to perform system level / integration tests?
Can I split my application up into modules that can be tested separately?
In the scenario you've provided, I'd say your application is an integration of many services. Therefore, I'd lean heavily on integration tests over unit tests. I'd bet most of the mocks you've written have been for HTTP-related classes and the like.
I'm a bigger fan of integration / system level tests wherever possible for the following reasons:
In this day and age of "moving fast", refactoring the designs of yesterday happens at an ever-increasing rate. Integration tests aren't concerned with implementation details at all, so this facilitates rapid change. Dynamic languages are in full swing, making mocks even more dangerous and brittle; with a statically typed language, mocks are much safer because your tests won't compile if they try to stub out a non-existent or misspelled method name.
The amount of code written in an integration test is usually 60% less than the amount of code written in a unit test to achieve the same level of coverage, so development time is less. "Yes, but it takes longer to run integration tests..." That's where you need to be pragmatic, until running the integration tests actually slows you down.
Integration tests catch more bugs. Mocking is often contrived and removes the developer from the realities of what their changes will do to the application as a whole. I've allowed way more bugs into production under the "safety net" of 100% unit test coverage than I would have with integration tests.
If integration testing is slow for my application, then I haven't split it up into separate modules. This is often an early indicator that I need to extract things into separate modules.
Integration tests do far more for you than just reach code coverage; they are also an indicator of performance issues, network problems, etc.

What does unit testing mean to you?

G'day,
I am working with a group of offshore developers who have been using the term unit testing quite loosely.
Their QA document talks about writing unit tests and then performing unit testing of the system.
This doesn't line up with my interpretation of what unit testing is at all.
I am used to unit testing being a test or suite of tests that are being used to exercise a single class, usually as a black box. The class under test may require other classes to be included by the implementation but generally it is a single class that is being exercised by the unit test(s).
Then you have system functional testing, integration testing, acceptance testing, etc.
I want to know: is this a bit pedantic on my part? Or is this what you think of when referring to unit tests and unit testing?
Edit: Rob Wells. I need to clarify that approaching such testing from a black box perspective is only one aspect. When using mock objects to verify internal behaviours, you are really testing from a white box perspective because you know what you want to happen inside the box.
Unit tests are generally used by developers to test isolated sections of code. They cover border cases, error cases, and normal cases. They are intended to demonstrate the correctness of a limited segment of code. If all of your unit tests pass, then you have demonstrated that your isolated segments of code do what they are supposed to do.
When you do integration testing, you are looking at end-to-end cases, to see if all the segments that have passed unit testing work together. Functional testing checks to see if the code meets the requirements as specified. Acceptance testing is done by end users to see if they approve of the final product.
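As a small, hypothetical illustration of those normal, border and error cases, a unit test suite for a single piece of logic typically names each one explicitly:
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    class DiscountCalculatorTest {

        // Hypothetical unit under test: 10% off orders of 100 or more.
        static int discountedPrice(int price) {
            if (price < 0) throw new IllegalArgumentException("price must be non-negative");
            return price >= 100 ? price - price / 10 : price;
        }

        @Test
        void normalCase_largeOrderGetsTenPercentOff() {
            assertEquals(180, discountedPrice(200));
        }

        @Test
        void borderCase_exactlyOneHundredStillGetsTheDiscount() {
            assertEquals(90, discountedPrice(100));
        }

        @Test
        void errorCase_negativePriceIsRejected() {
            assertThrows(IllegalArgumentException.class, () -> discountedPrice(-1));
        }
    }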
I try to implement unit tests that test only a single method, and I make an effort to create "mock" classes for the dependent classes and methods used by the method I am testing...
... so that exercising the code in that method does not in fact call code in other methods the unit test is not supposed to be testing (there are other unit tests for those methods). This way, a failure of the unit test reliably indicates a failure of the method the unit test is testing...
Mock classes are designed to "simulate" the interface and behavior of dependent classes, so that the method I am testing can call them and they will behave in a standard, well-defined way according to system requirements. In order to make this approach work, calls to such dependent classes and their methods must be made through a well-defined interface, so that the "tester" process can "inject" the mock version of the dependent class into the class being tested instead of the actual production version... This is kinda like a common design pattern referred to as "Dependency Injection" or "Inversion of Control" (IoC); a bare-bones sketch follows below.
There are several third-party tools on the market to help you implement this kind of pattern. One I have heard of is called "Rhino Mocks" or something like that...
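A stripped-down sketch of that injection idea, with a hand-rolled mock and no framework at all (all names hypothetical):
    // The class under test depends on an interface, so the test can hand it a
    // simulated implementation instead of the production one.
    interface RateProvider {
        double rateFor(String currency);
    }

    class PriceConverter {
        private final RateProvider rates;
        PriceConverter(RateProvider rates) { this.rates = rates; } // dependency injected
        double toEuros(double amount, String currency) {
            return amount * rates.rateFor(currency);
        }
    }

    class PriceConverterTest {
        public static void main(String[] args) {
            // Hand-rolled "mock": behaves in a fixed, well-defined way for the test.
            RateProvider fixedRates = currency -> 0.5;
            PriceConverter converter = new PriceConverter(fixedRates);

            double result = converter.toEuros(10, "USD");
            if (result != 5.0) throw new AssertionError("expected 5.0 but was " + result);
            System.out.println("ok: " + result);
        }
    }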
Edit: Rob Wells. #Charles. Thanks for this. I'd forgotten using mock objects to completely replace using other classes except for the one under test.
A couple of other things I've remembered after you mentioning mock objects is that:
they can be used to simulate errors being returned by the included classes.
they can be used to raise specific exceptions to check exception handling in the class under test (see the sketch further below).
they can be used to simulate items where setup costs are high, e.g. a large SQL DB back end.
they can be used to verify the contents of an incoming request.
For more information, have a look at Martin Fowler's paper "Mocks Aren't Stubs" and the Pragmatic Programmers' article "Mock Objects".
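For instance, the "raise specific exceptions" item in the list above could look like this sketch (Mockito assumed, names hypothetical):
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    class PaymentErrorHandlingTest {

        interface PaymentGateway { String charge(int cents); }

        static class CheckoutService {
            private final PaymentGateway gateway;
            CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }
            String checkout(int cents) {
                try {
                    return gateway.charge(cents);
                } catch (IllegalStateException e) {
                    return "RETRY_LATER"; // the error handling we want to exercise
                }
            }
        }

        @Test
        void gatewayFailureIsTranslatedIntoARetryResult() {
            PaymentGateway gateway = mock(PaymentGateway.class);
            when(gateway.charge(500)).thenThrow(new IllegalStateException("gateway down"));

            assertEquals("RETRY_LATER", new CheckoutService(gateway).checkout(500));
        }
    }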
There is no reason why unit tests can't span multiple classes, or even submodules, as long as the test exercises only one consistent business operation. Think about "calculateWage", a method of a BO that uses different strategies to calculate the salary of a person. That's one unit test, in my opinion.
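Something like the following sketch (hypothetical names) is still a single test of one business operation, even though several small classes participate:
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class WageCalculationTest {

        // Hypothetical strategy hierarchy used by the business object.
        interface WageStrategy { double calculate(double hours); }

        static class HourlyStrategy implements WageStrategy {
            private final double rate;
            HourlyStrategy(double rate) { this.rate = rate; }
            public double calculate(double hours) { return hours * rate; }
        }

        static class Employee {
            private final WageStrategy strategy;
            Employee(WageStrategy strategy) { this.strategy = strategy; }
            double calculateWage(double hours) { return strategy.calculate(hours); }
        }

        @Test
        void hourlyWageCoversEmployeeAndStrategyTogether() {
            // One consistent business operation, two collaborating classes, one unit test.
            Employee employee = new Employee(new HourlyStrategy(20.0));
            assertEquals(800.0, employee.calculateWage(40));
        }
    }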
I have heard of techniques in which many of the unit tests are done first, and development is done around them. Someone has just commented saying that this is "Test Driven Development" - TDD (Thanks Elie).
But if this is an offshore operation that's possibly going to charge you more money because they're spending time writing these unit tests, then I'd be careful. Get a second opinion from someone experienced with unit tests who will verify that they're actually doing what they say.
From my understanding, unit testing will add a bit more time to any development project, but of course it may offer some quality control. Nonetheless, this is the type of quality control I would want on an in-house project. This may just be something the offshore company throws out there to give you a warm fuzzy.
There is a difference between the process you use to test and the technology that is used to support it. The various frameworks used for unit testing are generally very flexible and can be used for testing small units of code, large units and even testing entire processes. This flexibility can lead to confusion.
My recommendation is that, whatever specific methodology or process you adopt, you segregate the various unit tests into distinct assemblies or modules. The exact arrangement depends on your code and your company's organization.
The cumulative effect of using a unit test framework is that much of the testing of the code is automated. Adopted correctly, it lets developers evaluate changes to the code without going through a full QA process. As for the QA process itself, it makes the testers' time more productive, as the quality of the code coming out of development should be higher.
Understand that it is not THE answer to all quality issues; it is just a useful tool, like the others you use.
Wikipedia would seem to suggest that unit testing is about testing the smallest amount of code which would be a method on a class in the case of OO programming.
Some have a more general notion of what they mean by unit tests, and may think of some integration tests as unit tests where the unit is a mixture of components.
There is a traditional view of software testing (http://en.wikipedia.org/wiki/Software_testing) as part of software engineering (http://en.wikipedia.org/wiki/Software_engineering), but I like the idea of an agile unit test: a test is an agile unit test if it is fast enough that the programmers always run it.
A unit test is the smallest piece of confidence you can give yourself on your way to being done. That is what matters: iteratively building a shield against regression and spec deviation, not how the tests map onto your object-oriented architecture.
This is almost a repeat of the "What is a 'Unit'?" question.
"Unit" can be defined flexibly. If their document doesn't have a definition of "unit", you'll need to clarify that.
It might be that they think of a unit as a big assembly of code, which is not the most desirable definition.
While I agree that you have several layers of testing (unit, module, package, application), I also think that much of this can be done with unit testing tools, which leads to "what is a unit?" questions coming up all the time.
What counts as a unit depends on context. For an individual developer, the unit is usually a class; sometimes it will be a module or a package.
For a team, however, their unit may be a package or a complete application.
What does it matter what we think? The issue here is your unhappiness with the terms they use in the document. Why don't you discuss it with them?
Ten years ago, before the current usage of "unit testing" as tests written in code, the same designation was applied to manual tests. I worked for a software development firm with a very formalized software development process. We had to write "unit tests" before writing any code. In that era, the unit tests were written in a text document (such as in Word). They described the exact steps that the user was to follow in using the app. For example, they described the exact input to type on the page to set up a new customer. Or, the user was to search for a particular product, and see that the displayed information matched the test document. So, the test was a script that the tester followed, where they also recorded the results. When the new incarnation of unit testing came along, it was confusing for a while trying to figure out if they meant the old, human tests or the new, coded tests.
I lead an offshore team too. Supposedly we have a set of unit tests... but it doesn't mean much. :) So we rely much more on functional tests and testers for quality. The inherent problem with unit testing is that it assumes you have perfect knowledge of the functionality, and that you trust the developers. In the real world, that's hard to assume.