I have an architectural dilemma regarding JUnit Test structure.
My test class tests the loading of a dictionary from a large file. The loading may fail due to a lack of memory, a missing file, a wrong file structure, and so on.
On the one hand, it makes good sense to run the tests in a specific order: check for memory, then check that the file exists, then check its structure. This would be very helpful, because if we don't have enough memory, the appropriate test would fail with meaningful output instead of a cryptic OutOfMemoryError. Moreover, I wouldn't have to go through the time-consuming process of reading the file over and over again.
On the other hand, tests should be isolated, self-contained entities.
Any ideas how to design the test suite?
What you are describing is an integration test, rather than a unit test. Unit tests (ideally) should not rely on the file system, amount of memory available etc.
You should set up your unit tests with a manageable amount of data in the dictionary (preferably initialized e.g. from a string (stream) within the test fixture code, not from an external file). This way you can reinitialize your test fixture independently for each unit test case, and test whatever nitty gritty details you need to about the structure of the dictionary.
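As a minimal sketch of such a fixture (the Dictionary class and its load/lookup methods are invented for the example), the test data lives in a string and is reloaded before every test:

import java.io.StringReader;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

public class DictionaryTest {

    private static final String SMALL_DICTIONARY =
            "apple\tA round fruit\n" +
            "banana\tA long yellow fruit\n";

    private Dictionary dictionary;

    @Before
    public void setUp() throws Exception {
        // Reinitialize the fixture from an in-memory string before every test,
        // so no test depends on the file system or on another test's state.
        dictionary = new Dictionary();
        dictionary.load(new StringReader(SMALL_DICTIONARY));
    }

    @Test
    public void looksUpExistingWord() {
        assertEquals("A round fruit", dictionary.lookup("apple"));
    }

    @Test
    public void returnsNullForUnknownWord() {
        assertNull(dictionary.lookup("unknown"));
    }
}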
And of course, you should also create separate, high-level integration tests to verify that your system as a whole works as expected under real-life circumstances. If these are separated from the genuine unit tests, you can afford to run the (cheap and fast) unit tests via JUnit after any code change, and run the costlier integration tests less frequently, depending on how long they take, e.g. once every night. You may also choose between setting up the integration environment once and running your integration tests (via some means other than JUnit, e.g. a shell/batch script) in a well-defined order, or reloading the dictionary file for each test case, since the extended execution time may not matter during a nightly build.
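If you prefer to keep both kinds of tests in JUnit, one possible way to separate them (a sketch using JUnit 4's experimental categories; the test class names are hypothetical) is to tag the slow tests with a marker interface and exclude them from the fast suite:

// IntegrationTests.java - empty marker interface used only for tagging.
public interface IntegrationTests {}

// In the slow test class, tag the class or individual methods:
//   @Category(IntegrationTests.class)

// FastTestSuite.java - the suite run on every build; tagged tests are skipped.
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.ExcludeCategory;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Categories.class)
@ExcludeCategory(IntegrationTests.class)
@SuiteClasses({ DictionaryTest.class, DictionaryLoadIT.class })
public class FastTestSuite {}

The nightly build can then run everything, while the per-commit build runs only the fast suite.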
To add to what Péter says:
An OOME unit test wouldn't rely on actually running out of memory. Instead whatever calls for the file load should react appropriately when the code-that-loads throws the OOME. This is what mocking/etc. are for--simulating behavior outside of what is being tested. (Same for missing files/file structure errors.)
IMO loading a file that's too big for the JVM configuration isn't really a useful thing to test in the code that actually loads the file--as long as whatever calls for the file load handles an OOME appropriately--at least not in a unit test.
If the code-that-loads needs some behavior verification when the file is too big I'd consider stubbing out the file load part and force an OOME rather than actually load a giant file, just to save time. To reiterate his point, actual physical behavior can be moved to integration tests and, say, run on a CI server outside of your own TDD heartbeat.
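A rough sketch of that idea, assuming a Mockito-style mock and invented DictionaryLoader / DictionaryService / Result names:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DictionaryServiceTest {

    @Test
    public void reportsFriendlyErrorWhenLoaderRunsOutOfMemory() {
        // Simulate the failure instead of actually loading a giant file.
        DictionaryLoader loader = mock(DictionaryLoader.class);
        when(loader.load("huge-dictionary.txt")).thenThrow(new OutOfMemoryError());

        DictionaryService service = new DictionaryService(loader);
        Result result = service.loadDictionary("huge-dictionary.txt");

        // The caller is expected to translate the error into something meaningful.
        assertEquals("Not enough memory to load the dictionary", result.errorMessage());
    }
}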
Related
I've been trying to learn how to properly unit test and set up unit tests for all of my code on a new project. The project I'm currently doing this for requires me to run a lot of actions against Google BigQuery (i.e. create tables, insert, query, delete). I feel like I can't truly test all of this functionality by mocking BigQuery, because the actions I do against it are complicated and interdependent, and if there's a break in the middle somewhere, I want to catch it. Is it generally frowned upon to have something like an environment variable that specifies a test account built into my unit tests so they actually run against the remote service? This feels like the best way to truly test everything and hit cases that I couldn't hit with a mock. So, is this something people do? Are there some major downsides to doing things this way?
I tend to have a mix of unit and integration tests in my project. I believe both are equally valuable, but one thing to keep in mind when doing integration testing is to ensure that the tests are stable and repeatable.
There are several approaches, but I favor making the tests self-sufficient by ensuring that all data dependencies are built in the test itself. This is important since you avoid failing tests due to failed assumptions about existing data in your data source.
A variation on this is to have a scaffolding script populate your data source with fixed test data. I find this to be less manageable since it can introduce dependencies between tests and changing the test data for one test may cause failure in another.
What you're looking to do is technically called integration testing, but I do see your point. I myself am currently doing both as well. In my integration tests the interaction is with a database. I find that these integration tests often catch far more errors than true unit tests and are generally more beneficial. I will say, however, that unit tests are important as well.
I have found that integration tests tend to take much longer, since they do all this interaction, and if they are part of your nightly build process, for example, this can greatly increase the time it takes for a build to complete. Some of our builds take close to an hour at this point, which is sometimes a problem for us.
I will say when you introduce things like environment variables into the mix you have to start making sure that every developer on the team has this environment variable if they want to run the tests. As a general rule of thumb I try to make it as simple as possible for everyone to build and run tests directly out of source control. There is nothing more frustrating than not being able to build source code or execute unit tests directly out of source control.
It's helpful to think of things like BigQuery as just implementation details; means to an end.
Something in your application currently says "I need x - I'll use BigQuery to get it." Instead of having explicit knowledge of BigQuery, this thing could instead have knowledge of "some entity capable of getting x". This is the location of a seam, and is where mocking would take place.
You mentioned that you don't want to mock all of the objects involved in creating a BigQuery request. You are absolutely right in avoiding this. That doesn't mean that you can't mock out BigQuery, though; you just need to move up a rung.
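Here is a minimal sketch of that seam, with all names invented for the example: the application depends on a small interface, and only one adapter knows about BigQuery.

// (each type in its own file)

// UserStatsSource.java - the seam: "some entity capable of getting x".
public interface UserStatsSource {
    long activeUsersOn(String date);
}

// BigQueryUserStatsSource.java - the only class that knows about BigQuery;
// it is exercised by the slower integration tests, not the unit tests.
public class BigQueryUserStatsSource implements UserStatsSource {
    @Override
    public long activeUsersOn(String date) {
        // builds and runs the real BigQuery job; omitted in this sketch
        throw new UnsupportedOperationException("omitted in this sketch");
    }
}

// ReportGeneratorTest.java - unit test: stub the seam, not the BigQuery client objects.
public class ReportGeneratorTest {
    @org.junit.Test
    public void summarizesActiveUsers() {
        UserStatsSource stats = date -> 42L;   // tiny stub, no BigQuery involved
        ReportGenerator generator = new ReportGenerator(stats);
        org.junit.Assert.assertEquals("42 active users on 2020-01-01",
                generator.summaryFor("2020-01-01"));
    }
}

The integration tests against the real service then only need to cover the adapter, which keeps the slow, environment-dependent surface small.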
When writing unit tests that deal with XML (e.g. testing a class that reads/generates XML), I used to put my expected output XML and my input XML in separate files right next to my unit test. Let's say I have a class "MyTransformer" that transforms one XML format into another. Then I would create three files, all in the same package:
MyTransformerTest.java
MyTransformerTestSampleInput.xml
MyTransformerTestExpectedOutput.xml
Then my assertion might look like this (simplified pseudocode):
Reader transformed = MyTransformer.transform(getResourceAsStream("MyTransformerTestSampleInput.xml"));
Reader expected = new InputStreamReader(getResourceAsStream("MyTransformerTestExpectedOutput.xml"));
assertXMLEqual(expected, transformed);
However, a colleague told me that the file access in this unit test is unacceptable. He proposed creating a literal string constant (private static final String) containing the XML file contents, possibly in a separate Groovy class to benefit from multi-line strings, rather than writing the XML to files.
I dislike the idea of literal string constants, because even if I have multi-line strings in Groovy, I still lose syntax highlighting and all the other helpful features of my XML editor that tell me right away if my XML has syntax errors, etc.
What do you think? Is the file access really bad? If so: Why? If not why is it ok?
Two problems with files in unit tests:
they slow down the testing cycle. You may have thousands of unit tests which, preferably, get run on every build - so they should be as fast as possible. If you can speed them up (e.g. by getting rid of I/O operations), you'd want to do that. Of course it's not always feasible, so you normally separate out the "slow" tests via NUnit's [Category] or something similar, and then run those special tests less frequently - say, only on nightly builds.
they introduce additional dependencies. If a test requires a file, it will fail not only when the logic behind the test is wrong, but also when the file is missing or the test runner doesn't have read permissions, etc., which makes debugging and fixing less pleasant!
That said, I wouldn't be too strict about not using files in tests. If possible, try to avoid them, but don't get mad about it. Make sure you weigh maintainability against speed - the cleaner the test, the easier it will be to understand and fix later.
If your Unit Tests access files to feed fake test data into the System Under Test so you can run tests, that's not a problem. It actually helps you to have a wider variety of test data to exercise the system under test.
However, if your System Under Test accesses the file system when executing a test, that's not a Unit Test; it's an Integration Test. This is because you are touching a cross-cutting concern such as the file system, and such tests cannot be categorised as Unit Tests.
You should isolate/fake out the file access and test the behaviour of your code (if any) using Unit Tests. They are faster and easier to run, and they give you pinpoint feedback if written correctly.
In these cases, I have a unit test that uses an internal representation of the file (here, a string literal).
I will also have an Integration Test to check that the code works correctly when actually reading from or writing to the file.
So it is all down to the Unit / Integration test definitions. Both are valid tests, just depends which test you are writing at the time.
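As a small sketch of that split (all names, including ConfigParser and Config, are invented): the parser accepts a Reader, so the unit test feeds it a string literal, while a separate integration test exercises the same code against a real file on disk.

// (each class in its own file)

import java.io.FileReader;
import java.io.Reader;
import java.io.StringReader;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Unit test: the "file" is an in-memory string, so no I/O is involved.
public class ConfigParserTest {
    @Test
    public void parsesKeyValuePairs() throws Exception {
        Reader input = new StringReader("host=localhost\nport=8080\n");
        Config config = ConfigParser.parse(input);
        assertEquals("localhost", config.get("host"));
    }
}

// Integration test (run less often): same parser, but fed from a real file.
public class ConfigParserIT {
    @Test
    public void parsesTheSampleFileOnDisk() throws Exception {
        try (Reader input = new FileReader("src/test/resources/sample-config.properties")) {
            Config config = ConfigParser.parse(input);
            assertEquals("localhost", config.get("host"));
        }
    }
}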
If the xml is more readable, or easier to work with in a file, and you have a lot of these tests, I would leave them.
Strictly speaking, unit tests should not use the file system because it is slow. However, readability is more important. XML in a file is easier to read, and can be loaded in an XML friendly editor.
If the tests take too long to run (because you have a lot of them), or your colleagues complain, move them to integration tests.
If you work on both Windows and Linux, you have to be careful that the files are picked up by your build server.
There are no perfect answers.
Exactly how independent should unit tests be? What should be done in a "before" section of a unit testing suite?
Say, for example, I am testing the functionality of a server - should the server be created, initialised, connected to its various data sources, etc. inside the body of every test case? Are there situations where it may be appropriate to initialise the server once and then test more than one case?
The other situation I am considering is mobile app testing - where phone objects need to be created to perform a unit test. Should this be done every time: create phone, initialise, run test, destroy phone, repeat?
Unit tests should be completely independent, i.e. each should be able to run in any order so each will need to have its own initialization steps.
Now, if you are talking about server or phone initialization, it sounds more like integration tests rather than unit tests.
Ideally yes. Every test should start from scratch, and put the system into a particular well-defined state before executing the function under test. If you don't, then you make it more difficult to isolate the problem when the test fails. Even worse, you may cause the test not to fail because of some extra state left behind by an earlier test.
If you have a situation where the setup time is too long, you can mock or stub some of the ancillary objects.
If you are worried about having too much setup code, you can refactor the setup code into reusable functions.
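For example (a sketch with invented Phone names), the shared setup can live in an @Before method plus small helper functions that individual tests call, so every test still starts from a clean, well-defined state:

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class PhoneCallTest {

    private Phone phone;

    @Before
    public void createFreshPhone() {
        // Every test gets its own phone in a known state.
        phone = new Phone();
        phone.powerOn();
    }

    @After
    public void destroyPhone() {
        phone.powerOff();
    }

    // Reusable helper for the tests that need a registered phone.
    private void registerOnTestNetwork() {
        phone.register("test-network");
    }

    @Test
    public void canDialAfterRegistering() {
        registerOnTestNetwork();
        assertTrue(phone.dial("555-0100"));
    }
}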
I am looking for rules like:
A test is not a unit-test if:
it communicates with a database
it cannot run in parallel with other tests
it uses the "environment", such as the registry or file system
What else is there?
See Michael Feathers' definition
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can't run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
A test is not a unit test if it is not testing a unit.
Seriously, that's all there is to it.
The concept of "unit" in unit testing is not well-defined, in fact, the best definition I have found so far, isn't actually a definition because it is circular: a unit in a unit test is the smallest possible thing that can be tested in isolation.
This gives you two checkpoints: is it tested in isolation? And is it the smallest possible thing?
Please note that both of these are context-dependent. What might be the smallest possible thing in one situation (say, an entire object) might in another situation be just one small piece of one single method. And what counts as isolation in one situation might differ in another (e.g. in a memory-managed language, you never run in isolation from the garbage collector, and most of the time that is irrelevant, but sometimes it might not be).
Difficult one...
For me a unit test verifies one specific piece of logic in isolation. Meaning, I take some logic, extract it from the rest (if necessary by mocking dependencies) and test just that logic - a unit (of the whole) - by exploring different kind of possible control flows.
But on the other hand... can we always say, 100%, correct or incorrect? Not to get philosophical, but - as Michael also says in his post:
Tests that do these things aren't bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes.
So why shouldn't I write a unit test that verifies the logic of parsing, for instance, an xls file by accessing some dummy file from the file system in my test folder (like MSTest allows with DeploymentItem)?
Of course - as mentioned - we should separate these kinds of tests from the others (maybe in a separate test suite in JUnit). But I think one should also write those tests if one feels comfortable having them there, always remembering that a unit test should just test a piece in isolation.
What is most important in my eyes is that these tests run fast and don't take too long, so that they can be run repeatedly and very often.
It has no asserts, and is not expecting an exception to be thrown.
A test is not a Unit Test when:
it tests more than one thing at once (i.e. it tests how two things work together) - then it is an integration test
Checklist for good unit tests:
they are automated
they are repeatable
they are easy to implement
they remain for future use, once written
they can be run by anyone
they can be run by the push of a button
they run quickly
Some more best practices (in no particular order of importance):
tests should be separated from integration tests (which are slower), so that they can be run fast as frequently as possible
they should not comprise too much logic (preferably, no control structures)
every test should test only one thing (thus, they should contain only one assert)
the expected values used in assertions should be hard-coded and not computed at test run-time
external dependencies (filesystem, time, memory etc.) should be replaced by stubs
tests should restore the initial state at shutdown
in assertions, it is better to use a "contains..." policy, rather than "is strictly equal..." policy (i.e. we expect certain values in a collection, certain characters in a string etc.)
This is part of the knowledge I have extracted from Roy Osherove's book - The Art of Unit Testing
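A tiny sketch (with invented InvoiceFormatter and ExchangeRateService names) that follows several of these points at once: one behaviour per test, a single assert, a hard-coded expected value, and the external dependency replaced by a stub.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class InvoiceFormatterTest {

    // Stub standing in for the real, external exchange-rate service
    // (assumed here to be a single-method interface: double rateFor(String currency)).
    private final ExchangeRateService fixedRate = currency -> 2.0;

    @Test
    public void formatsTotalInLocalCurrency() {
        InvoiceFormatter formatter = new InvoiceFormatter(fixedRate);

        // Expected value is hard-coded, not computed at test run-time.
        assertEquals("20.00 EUR", formatter.format(10.0, "EUR"));
    }
}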
Implementing a test across multiple possibly failing units would not be a unit test.
Intricate question.
Say I am to program some business logic and all business logic needs to get to the data via some form of DAL.
Say that for the purposes of testing, I mock the DAL units (by creating "mockingbirds").
But those mockingbirds are, of course, additional units in their own right. So even when using mocks, it might seem like I'm still bound to violate the idea of "no other units involved" when I want to unit-test my business logic module.
Of course, it is generally known that "creating mockingbirds for the DAL" can invalidate your very test on the grounds that your mockingbird deviates in some particular aspect from the real DAL.
Conclusion: it is outright impossible to do "genuine unit tests" on business modules that depend in any way on any kind of DAL, question mark?
Corollary: the only thing that can possibly be ("genuinely"!) unit-tested is the DAL itself, question mark?
Corollary of the corollary: given that the "DAL" is usually either an ORM or the very DML of some DBMS, and given that those products are usually bought as "proven technology", what is the added value of doing any unit tests whatsoever, question mark?
Once it is settled whether a test is a unit test or not, the next question is: is it a good unit test?
In which parts of a project is writing unit tests nearly or completely impossible? Data access? FTP?
If there is an answer to this question, then 100% coverage is a myth, isn't it?
Here I found (via Haacked) something Michael Feathers says that can be an answer:
He says,
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can't run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
Again, in the same article, he adds:
Generally, unit tests are supposed to be small, they test a method or the interaction of a couple of methods. When you pull the database, sockets, or file system access into your unit tests, they are not really about those methods any more; they are about the integration of your code with that other software.
That 100% coverage is a myth, which it is, does not mean that 80% coverage is useless. The goal, of course, is 100%, and between unit tests and then integration tests, you can approach it. What is impossible in unit testing is predicting all the totally strange things your customers will do to the product. Once you begin to discover these mind-boggling perversions of your code, make sure to roll tests for them back into the test suite.
Achieving 100% code coverage is almost always wasteful. There are many resources on this.
Nothing is impossible to unit test but there are always diminishing returns. It may not be worth it to unit test things that are painful to unit test.
The goal is not 100% code coverage nor is it 80% code coverage. A unit test being easy to write doesn't mean you should write it, and a unit tests being hard to write doesn't mean you should avoid the effort.
The goal of any test is to detect user-visible problems in the most affordable manner.
Is the total cost of authoring, maintaining, and diagnosing problems flagged by the test (including false positives) worth the problems that specific test catches?
If the problem the test catches is 'expensive' then you can afford to put effort into figuring out how to test it, and maintaining that test. If the problem the test catches is trivial then writing (and maintaining!) the test (even in the presence of code changes) better be trivial.
The core goal of a unit test is to protect devs from implementation errors. That alone should indicate that too much effort will be a waste. After a certain point there are better strategies for getting correct implementation. Also after a certain point the user visible problems are due to correctly implementing the wrong thing which can only be caught by user level or integration testing.
What would you not test? Anything that could not possibly break.
When it comes to code coverage you want to aim for 100% of the code you actually write - that is, you need not test third-party library code or operating system code, since that code will have been delivered to you tested. Unless it's not. In which case you might want to test it. Or if there are known bugs, in which case you might want to test for the presence of the bugs, so that you get a notification when they are fixed.
Unit testing of a GUI is also difficult, albeit not impossible, I guess.
Data access is possible because you can set up a test database.
Generally the 'untestable' stuff is FTP, email and so forth. However, they are generally framework classes which you can rely on and therefore do not need to test if you hide them behind an abstraction.
Also, 100% code coverage is not enough on its own.
@GarryShutler
I actually unit test email by using a fake SMTP server (Wiser). It makes sure your application code is correct:
http://maas-frensch.com/peter/2007/08/29/unittesting-e-mail-sending-using-spring/
Something like that could probably be done for other servers. Otherwise you should be able to mock the API...
BTW: 100% coverage is only the beginning... it just means that all code has actually been executed once... it says nothing about edge cases etc.
Most tests that need huge and expensive (in terms of resources or computation time) setups are integration tests. Unit tests should (in theory) only test small units of the code: individual functions.
For example, if you are testing email functionality, it makes sense to create a mock mailer. The purpose of that mock is to make sure your code calls the mailer correctly. Checking whether your application actually sends mail is an integration test.
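A minimal sketch of such a mock mailer (Mailer and WelcomeService are invented names, Mockito assumed): the unit test only verifies that the mailer is asked to send the right message, not that mail actually leaves the machine.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.Test;

public class WelcomeServiceTest {

    @Test
    public void sendsWelcomeMailToNewUser() {
        Mailer mailer = mock(Mailer.class);
        WelcomeService service = new WelcomeService(mailer);

        service.register("alice@example.com");

        // The unit test checks the interaction with the mailer, nothing more.
        verify(mailer).send("alice@example.com", "Welcome!");
    }
}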
It is very useful to make a distinction between unit-tests and integration tests. Unit-tests should run very fast. It should be easily possible to run all your unit-tests before you check in your code.
However, if your test-suite consists of many integration tests (that set up and tear down databases and the like), your test-run can easily exceed half an hour. In that case it is very likely that a developer will not run all the unit-tests before she checks in.
So, to answer your question: do not unit-test things that are better implemented as an integration test (and also don't test getters/setters - it is a waste of time ;-) ).
In unit testing, you should not test anything that does not belong to your unit; testing units in their context is a different matter. That's the simple answer.
The basic rule I use is that you should unit test anything that touches the boundaries of your unit (usually class, or whatever else your unit might be), and mock the rest. There is no need to test the results that some database query returns, it suffices to test that your unit spits out the correct query.
This does not mean that you should omit stuff just because it is hard to test; even exception handling and concurrency issues can be tested pretty well using the right tools.
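A sketch of testing that your unit "spits out the correct query" (QueryExecutor and CustomerSearch are invented names, Mockito assumed): the assertion is on what crosses the boundary, not on what the database would return.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.Test;

public class CustomerSearchTest {

    @Test
    public void buildsTheExpectedQueryForANameSearch() {
        QueryExecutor executor = mock(QueryExecutor.class);
        CustomerSearch search = new CustomerSearch(executor);

        search.findByName("Smith");

        // We only assert on the query our unit hands to its boundary.
        verify(executor).execute("SELECT * FROM customers WHERE name = 'Smith'");
    }
}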
"What not to test when it comes to Unit Testing?"
* Beans with just getters and setters. Reasoning: Usually a waste of time that could be better spent testing something else.
Anything that is not completely deterministic is a no-no for unit testing. You want your unit tests to ALWAYS pass or fail given the same initial conditions - if weirdness like threading, random data generation, time/dates, or external services can affect this, then you shouldn't be covering it in your unit tests. Time/dates are a particularly nasty case. You can usually architect the code so that the date it works with is injected (by code and tests) rather than relying on the current date and time.
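One common way to make the time/date case deterministic in Java (a sketch; the Subscription class is invented) is to inject a java.time.Clock and use a fixed clock in the test:

import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class SubscriptionTest {

    // Hypothetical class under test: it asks the injected Clock for "now"
    // instead of calling LocalDate.now() directly.
    static class Subscription {
        private final Clock clock;
        private final LocalDate expiryDate;

        Subscription(Clock clock, LocalDate expiryDate) {
            this.clock = clock;
            this.expiryDate = expiryDate;
        }

        boolean isExpired() {
            return LocalDate.now(clock).isAfter(expiryDate);
        }
    }

    @Test
    public void subscriptionExpiredYesterdayIsReportedAsExpired() {
        // Freeze "now" so the test always passes or fails the same way.
        Clock fixed = Clock.fixed(Instant.parse("2021-06-15T12:00:00Z"), ZoneOffset.UTC);
        Subscription subscription = new Subscription(fixed, LocalDate.of(2021, 6, 14));

        assertTrue(subscription.isExpired());
    }
}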
That said though, unit tests shouldn't be the only level of testing in your application. Achieving 100% unit test coverage is often a waste of time, and quickly meets diminishing returns.
Far better is to have a set of higher level functional tests, and even integration tests to ensure that the system works correctly "once it's all joined up" - which the unit tests by definition do not test.
Anything that needs a very large and complicated setup. Of course you can test FTP (clients), but then you need to set up an FTP server. For unit tests you need a reproducible test setup. If you cannot provide it, you cannot test it.
You can test them, but they won't be unit tests. A unit test is something that doesn't cross boundaries, such as going over the wire, hitting the database, running/interacting with a third party, touching an untested/legacy codebase, etc.
Anything beyond this is integration testing.
The obvious answer to the question in the title is: you shouldn't unit test the internals of your API, you shouldn't rely on someone else's behavior, and you shouldn't test anything that you are not responsible for.
The rest should be just enough to enable you to write your code against it, no more, no less.
Sure 100% coverage is a good goal when working on a large project, but for most projects fixing one or two bugs before deployment isn't necessarily worth the time to create exhaustive unit tests.
Exhaustively testing things like forms submission, database access, FTP access, etc at a very detailed level is often just a waste of time; unless the software being written needs a very high level of reliability (99.999% stuff) unit testing too much can be overkill and a real time sink.
I disagree with quamrana's response regarding not testing third-party code. This is an ideal use of a unit test. What if bugs are introduced in a new release of a library? Ideally, when a new version of a third-party library is released, you run the unit tests that represent the expected behaviour of the library to verify that it still works as expected.
Configuration is another item that is very difficult to test well in unit tests. Integration tests and other testing should be done against configuration. This reduces redundancy of testing and frees up a lot of time. Trying to unit test configuration is often frivolous.
FTP, SMTP, I/O in general should be tested using an interface. The interface should be implemented by an adapter (for the real code) and a mock for the unit test.
No unit test should exercise the real external resource (FTP server etc)
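A compact sketch of that structure (all names invented): the production adapter is the only code that talks to the real server, and unit tests substitute a mock or an in-memory fake.

// (each type in its own file)

// MailGateway.java - the boundary interface the rest of the code depends on.
public interface MailGateway {
    void send(String to, String subject, String body);
}

// SmtpMailGateway.java - adapter used in production; this is the only class that
// talks SMTP, and it is covered by integration tests rather than unit tests.
public class SmtpMailGateway implements MailGateway {
    @Override
    public void send(String to, String subject, String body) {
        // real SMTP call omitted in this sketch
        throw new UnsupportedOperationException("wired to a real SMTP library in production");
    }
}

// RecordingMailGateway.java - fake used by unit tests; records messages in memory.
public class RecordingMailGateway implements MailGateway {
    public final java.util.List<String> sent = new java.util.ArrayList<>();

    @Override
    public void send(String to, String subject, String body) {
        sent.add(to + ": " + subject);
    }
}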
If the code to set up the state required for a unit test becomes significantly more complex than the code to be tested, I tend to draw the line and find another way to test the functionality. At that point you have to ask: how do you know the unit test is right?
FTP, email and so forth can be tested with server emulation. It is difficult but possible.
Some error handling is not testable. In every codebase there is error handling that can never be triggered. For example, in Java you must often catch exceptions because they are declared by an interface, even though the instance you use will never throw them. Or the default case of a switch for which a case block exists for every possible value.
Of course, some of the unneeded error handling can be removed. But if a coding error is introduced in the future, having removed it would be bad.
The main reason to unit test code in the first place is to validate the design of your code. It's possible to gain 100% code coverage, but not without using mock objects or some form of isolation or dependency injection.
Remember, unit tests aren't for users, they are for developers and build systems to use to validate a system prior to release. To that end, the unit tests should run very fast and have as little configuration and dependency friction as possible. Try to do as much as you can in memory, and avoid using network connections from the tests.