Sell me on unit testing [closed] - unit-testing

I've been a .Net developer for a long time and am trying to wrap my head around real, practical unit testing.
More specifically, I'm looking at unit testing ASP.NET MVC 3 projects.
I've read a decent amount about it, and believe I understand it at the academic level (i.e. basically ensures that changes you make aren't going to break other stuff). However, all of the tests I've seen written in examples are trivially stupid things that would be a pretty obvious catch anyway (does this controller return a view with this name?).
So, maybe I'm missing something, or just haven't seen any really good test examples, but it really looks like a crap ton of extra work and complexity with mocking, IoC, etc., and I'm just not seeing the counter-balancing gains.
Teach me, please :)

Given proper unit testing, it's almost trivial to catch corner cases that would have otherwise slipped past you. Let's say you have a method that returns a list of items, and the list of items is retrieved from some table. What happens if that table isn't populated correctly (e.g. a null or empty value in one of the columns), or if someone changes the column type to something your ORM tool doesn't map the way you thought it would? Unit tests would help catch these cases before you're in production and someone deletes all of the data in your table.
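For illustration, here is a minimal JUnit sketch of that kind of test (ItemService and its row source are invented names; the same idea carries over to NUnit/MSTest on .NET). The data source is swapped for one that feeds in the bad rows, and the test asserts the method copes with them instead of blowing up:
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;
import java.util.stream.Collectors;

import org.junit.Test;

public class ItemServiceTest {

    // Hypothetical service: normally it would read rows from a table via an ORM;
    // here the row source is injected so the test can feed it bad data.
    static class ItemService {
        private final Supplier<List<String>> rowSource;
        ItemService(Supplier<List<String>> rowSource) { this.rowSource = rowSource; }

        List<String> displayNames() {
            return rowSource.get().stream()
                    .filter(n -> n != null && !n.isEmpty())
                    .collect(Collectors.toList());
        }
    }

    @Test
    public void displayNamesSkipsNullAndEmptyRows() {
        ItemService service = new ItemService(
                () -> Arrays.asList("Widget", null, "", "Gadget"));

        assertEquals(Arrays.asList("Widget", "Gadget"), service.displayNames());
    }
}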

basically ensures that changes you make aren't going to break other stuff
This is not unit testing, but regression testing. Unit testing is used to test the smallest pieces of code individually (usually at the class level). It is not very useful in situations where the project is small or there are not many classes.
There are many instances where some form of unit testing (usually mixed in with other forms; I like to use mock testing if I have time, for example) is very useful. Say you have a large project with 20+ classes and you are getting an error in one of them. You may need to go through each class and make sure that its methods return the correct information. That is unit testing.
In short, unit testing is used when you need to test a specific class, or specific methods, to make sure they are functional. It is a lot easier to find the issue with a program when you are working with the smallest units rather than walking through the whole program to find out which method isn't working right.
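For instance, a focused test like the sketch below (the class and its pricing rule are invented for illustration) pins a failure to one small class instead of the whole application:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountCalculatorTest {

    // Hypothetical class under test: one small piece of business logic.
    static class DiscountCalculator {
        // Orders of 100.00 or more get 10% off.
        double priceAfterDiscount(double orderTotal) {
            return orderTotal >= 100.0 ? orderTotal * 0.9 : orderTotal;
        }
    }

    @Test
    public void ordersAtThresholdGetTenPercentOff() {
        DiscountCalculator calc = new DiscountCalculator();
        assertEquals(90.0, calc.priceAfterDiscount(100.0), 0.0001);
    }

    @Test
    public void ordersBelowThresholdAreUnchanged() {
        DiscountCalculator calc = new DiscountCalculator();
        assertEquals(99.99, calc.priceAfterDiscount(99.99), 0.0001);
    }
}
If one of these fails, you know exactly which class and which rule to look at.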

Related

Do you perform test when you code a project alone? [closed]

I wonder whether individual programmers spend time on unit tests, functional tests, or test-driven development (TDD) when they code alone, or whether they just worry about getting it to work.
The reason I ask is that writing those tests prolongs the entire project.
Thanks.
I've found that if I don't do unit tests, that prolongs the project a thousand times more. I always want to just get the feature working, then the next feature, then the next. It takes work for me to have the discipline to do unit tests, even though I've proved to myself over and over again that those tests form the golden road to timely completion.
The benefits of testing are the same whether you are a team or a sole developer.
Therefore the magic answer is this: it is totally up to you.
The only real difference between the two scenarios is when you are developing by yourself you do not have to convince anyone else to write or not write tests, you can simply have that argument with yourself.
I prefer doing TDD because the APIs of some modules are not defined at the beginning, and writing the UI while writing the API for the data module is a little confusing at times.
So I tend to create the data module APIs by writing the test cases for them. I also use the tests to measure progress. Once that is done, the UI gets completed fairly quickly, and debugging the UI gets a lot faster because the data part is already tested.
While the up-front cost of developing the test cases is higher, the debugging time saved is considerable, and it gives a comfortable development flow.
It depends :)
If it is a prototype / proof of concept / tech that I am trying to learn, I'd usually not choose TDD because it's a throw-away. (Learning tests for third-party libs that I am trying to integrate into my app are an exception.)
If it is something that I need to sustain for a longer duration of time (more than a month), I'd pick TDD. If multiple people are going to work on it in the future, I'd definitely pick TDD.
That said, TDD does not automatically imply good apps/design/insert-good-metric-here. Good/experienced programmers write good software. With TDD, they're more likely to be better/faster.

Physical world examples of design for testability and test driven development [closed]

I'm going to be doing a presentation on unit testing, and in doing so I will touch on "design for testability" patterns: using IoC containers, dependency injection, avoiding static methods, and so on.
I have a feeling my team will be cold to the idea of coding differently just to accommodate testing. So I was wondering if anybody knew of any real-world examples of altering the design of something for no other reason than to make it easier to test.
I'm assuming this concept isn't uncommon in manufacturing, engineering and other professions, I'm just not familiar with any hard examples.
I imagine the development of the Saturn V rocket, Space Shuttle, Automobiles, Robotics, etc. must have some documented example of some design for testability or possibly the lack thereof causing problems.
Examples that have come to mind
I suppose having replaceable parts is a form of dependency injection, whereas welding all the components together wouldn't allow testing them individually.
Perhaps the OBD2 port on modern automobiles because it makes it easy to check if any systems have issues.
Many electrical surge protectors have a Test button to check for correct functionality. It is a very clever form of testing, because it puts the test not only in the hands of the developers, but also in those of the final users.
Another example: many indicator lights (in particular in critical environments, like nuclear power plants and so on) have a button to turn them on and check that they are still functional. The same goes for many appliances using LED displays.
Batteries have a power indicator, so that you can test them before buying.
In sugar refinement, you monitor many steps of the production (a sort of breakpoint) to assess the quality of the product. The plant is designed to provide these testing breakpoints with easy accessibility for a human sampler (normally not well paid).
Finally, car makers include all sorts of diagnostics. A car repair shop has a computer to perform a full check on the status of the car. It's a sort of "after the fact" debug log, so not really "preventive testing", but still very useful, and its inclusion is a real-world "design for testability".
The main difference, however, between real-world testing and software-world testing is that real-world testing can ruin the product, to the point of being destructive. For this reason, the faulty-to-good ratio is assessed via destructive sampling and analysis. In software testing, you never have destructive testing (unless you are a sadistic programmer with evil intentions).
If a set of components has been built so they can be tested, it is likely that they will be more modular and loosely coupled. For example, it is easier to test code that uses an IoC container rather than a Service Locator, as collaborators can simply be mocked and injected.
This separation carries over into development. Because the type has no hard dependency on its collaborators, it allows composition of those parts in a different structure, which means it is simpler to refactor and respond to new requirements. You will also have more confidence in any refactoring, as you have a suite of tests to verify your changes against.
Taken together, this means in my experience that code written for testing is quicker and easier to modify, which results in less work for me, which I like.
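As a rough sketch of the difference (JUnit with Mockito; OrderService and PaymentGateway are invented names for illustration): because the collaborator is injected through the constructor rather than looked up via a service locator or static call, the test can simply hand in a mock.
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class OrderServiceTest {

    // Hypothetical collaborator that would normally hit an external system.
    interface PaymentGateway {
        boolean charge(String account, double amount);
    }

    // Hypothetical class under test: the gateway is injected, not looked up.
    static class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }

        boolean placeOrder(String account, double amount) {
            return gateway.charge(account, amount);
        }
    }

    @Test
    public void placeOrderSucceedsWhenChargeSucceeds() {
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("acct-1", 25.0)).thenReturn(true);

        OrderService service = new OrderService(gateway);

        assertTrue(service.placeOrder("acct-1", 25.0));
    }
}
In production an IoC container would do the wiring; in the test you do it by hand with a mock, which is the whole point of the design.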
Some real world examples of components designed for testing (themselves or other things) to avoid bigger problems down the line:
Smoke detectors
RCDs (residual current devices)
Pre-flight engine checks in aviation
Personal breathalysers

Least 'worth it' unit test you've ever written? [closed]

On the SO blog and podcast, Joel and Jeff have been discussing the often-ignored times when unit testing a particular feature simply isn't worth the effort: times when unit testing a simple feature is so complicated, unpredictable, or impractical that the cost of the test doesn't reflect the value of the feature. In Joel's case, the example would have called for a complicated image comparison simply to determine compression quality, had they decided to write the test.
What are some cases you've run into where this was the case? Common areas I can think of are GUIs, page layout, audio testing (testing to make sure an audible warning sounded, for example), etc.
I'm looking for horror stories and actual real-world examples, not guesses (like I just did). Bonus points if you, or whoever had to write said 'impossible' test, went ahead and wrote it anyway.
@Test
public void testSetName() {
    UnderTest u = new UnderTest();
    u.setName("Hans");
    assertEquals("Hans", u.getName());
}
Testing set/get methods is just stupid; you don't need that. If you're forced to do this, your architecture has some serious flaws.
Foo foo = new Foo();
Assert.IsNotNull(foo);
My company writes unit tests and integration tests separately. If we write an integration test for, say, a data access class, it gets fully tested.
They see a unit test as the same thing as an integration test, except that it can't go off-box (i.e. make calls to databases or web services). Yet we also have unit tests as well as integration tests for the data access classes.
What good is a test against a data access class that can't connect to the data?
It sounds to me like the writing of a useless unit test is not the fault of unit tests, but of the programmer who decided to write the test.
As mentioned in the podcast (I believe, or possibly somewhere else), if a unit test is obscenely hard to create then it's possible that the code could stand to be refactored, even if it currently "works".
Even the "stupid" unit tests are necessary sometimes, even in the case of "get/set Name". When dealing with clients with complicated business rules, some of the most straightforward properties can have ridiculous caveats attached, and you might find that some incredibly basic functions break.
Taking the time to write a complicated unit test means that you've taken the time to fine-tune your understanding of the code, and you might fix bugs in doing so, even if you never complete the unit test itself.
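To illustrate with an invented (but typical) caveat: a "simple" Name property that must be trimmed and never blank is no longer trivial, and a small test documents the rule.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CustomerNameTest {

    // Hypothetical class: the "simple" setter actually carries a business rule.
    static class Customer {
        private String name;

        void setName(String name) {
            if (name == null || name.trim().isEmpty()) {
                throw new IllegalArgumentException("name must not be blank");
            }
            this.name = name.trim();   // client rule: names are stored trimmed
        }

        String getName() { return name; }
    }

    @Test
    public void setNameTrimsWhitespace() {
        Customer c = new Customer();
        c.setName("  Hans  ");
        assertEquals("Hans", c.getName());
    }

    @Test(expected = IllegalArgumentException.class)
    public void blankNameIsRejected() {
        Customer c = new Customer();
        c.setName("   ");
    }
}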
Once I wrote a unit test to expose a concurrency bug, in response to a challenge on C2 Wiki.
It turned out to be unreasonably hard, and hinted that guaranteeing correctness of concurrent code is better handled at a more fundamental level.

How to test an application? [closed]

I have been building, IMO, a really cool RIA. But it's now close to completion, and I need to test it to see if there are any bugs, counter-intuitive parts, or anything like that. But how? Any time I ask someone to try to break it, they look at it for about 3 minutes and say "it's solid". How do you guys test things? I have never used a unit test before; actually, about 3 months ago I had never even heard of unit tests, and I still don't really understand what they are. Would I have to build a whole new application to run every function? That would take forever, plus some functions may only produce errors in certain situations, so I do not understand unit tests.
The question is pretty open-ended so this post won't answer all your question. If you can refine what you are looking for, that would help.
There are two major pieces of testing you likely want to do. The first is unit testing and the second is what might be called acceptance testing.
Unit testing is trying each of the classes/methods in relative isolation and making sure they work. You can use something like JUnit, NUnit, etc. as a framework to hold your tests. Take a method and look at the different inputs it might expect and what the outcome should be for each. Then write a test case for each of these input/output pairs. This will tell you that most of the parts work as intended.
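A small sketch of the input/output-pair idea (JUnit; the parsePercentage method and its rules are invented for illustration):
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PercentageParserTest {

    // Hypothetical unit under test: maps a user-entered string to a number.
    static int parsePercentage(String text) {
        int value = Integer.parseInt(text.trim().replace("%", ""));
        if (value < 0 || value > 100) {
            throw new IllegalArgumentException("out of range: " + value);
        }
        return value;
    }

    @Test
    public void plainNumberIsParsed() {
        assertEquals(42, parsePercentage("42"));
    }

    @Test
    public void percentSignAndWhitespaceAreIgnored() {
        assertEquals(7, parsePercentage(" 7% "));
    }

    @Test(expected = IllegalArgumentException.class)
    public void valuesOverOneHundredAreRejected() {
        parsePercentage("150");
    }
}
Each test is one input/output pair; together they describe what the method is supposed to do.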
Acceptance testing (or end-to-end testing as it is sometimes called) is running the whole system and making sure it works. Come up with a list of scenarios you expect users to do. Now systematically try them all. Try variations of them. Do they work? If so, you are likely ready to roll it out to at least a limited audience.
Also, check out How to Break Software by James Whittaker. It's one of the better testing books and is a short read.
The first thing is to systematically make sure everything works in the manner you expect it to. Then you want to try it against every realistic hardware and installed-software combination that is feasible and appropriate. Then you want to take every point of human interaction and try putting in as much data as possible, no data at all, and special data that may cause exceptions. Then try doing things in an order or workflow you did not expect; sometimes certain actions depend on others. You and your friends will naturally do those steps in order; what happens when someone doesn't? Also, having complete novices use it is a good way to see the odd things users might try.
Release it in beta?
It's based on Xcode and Cocoa development, but this video is still a great introduction to unit testing. Unit testing is really something that should be done alongside development, so if your application is almost finished it's going to take a while to implement.
Firebug has a good profiler for web apps. As for testing JS files, I use Scriptaculous. Whatever backend you are using needs to be fully tested too.
But before you do that, you need to understand what unit testing is. Unit testing is verifying that all of the individual units of source code function as they are intended. This means that you verify the output of all of your functions/methods. Basically, read this. There are different testing strategies beyond unit testing such as integration testing, which is testing that different modules integrate with one another. What you are asking people to do is Acceptance testing, which is verifying that it looks and behaves according to the original plan. Here is more on various testing strategies.
PS: always test boundary conditions

Tricks for writing better unit tests [closed]

What are some of the tricks or tools or policies (besides having a unit testing standard) that you guys are using to write better unit tests? By better I mean 'covers as much of your code in as few tests as possible'. I'm talking about stuff that you have used and saw your unit tests improve by leaps and bounds.
As an example I was trying out Pex the other day and I thought it was really really good. There were tests I was missing out and Pex easily showed me where. Unfortunately it has a rather restrictive license.
So what are some of the other great stuff you guys are using/doing?
EDIT: Lots of good answers. I'll be marking as correct the answer that I'm currently not practicing but will definitely try and that hopefully gives the best gains. Thanks to all.
1. Write many tests per method.
2. Test the smallest thing possible. Then test the next smallest thing.
3. Test all reasonable input and output ranges. IOW: if your method returns a boolean, make sure to test both the false and true returns. For an int? -1, 0, 1, n, n+1 (proof by mathematical induction). Don't forget to check for all exceptions (assuming Java). See the sketch after this list.
4a. Write an abstract interface first.
4b. Write your tests second.
4c. Write your implementation last.
5. Use Dependency Injection. (For Java: Guice - supposedly better, Spring - probably good enough.)
6. Mock your unit's collaborators with a good toolkit like Mockito (assuming Java, again).
7. Google much.
8. Keep banging away at it. (It took me 2 years - without much help apart from Google - to start "getting it".)
9. Read a good book about the topic.
10. Rinse, repeat...
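As a sketch of point 3 (JUnit; the hasCapacityFor method and its limit are invented): exercise both boolean outcomes, the values around the boundary, and the exception path.
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class CapacityCheckTest {

    private static final int LIMIT = 10;   // hypothetical capacity limit (the "n" above)

    // Hypothetical method under test.
    static boolean hasCapacityFor(int requested) {
        if (requested < 0) {
            throw new IllegalArgumentException("requested must not be negative");
        }
        return requested <= LIMIT;
    }

    @Test(expected = IllegalArgumentException.class)
    public void negativeRequestIsRejected() {          // -1
        hasCapacityFor(-1);
    }

    @Test
    public void zeroAndOneFit() {                      // 0, 1
        assertTrue(hasCapacityFor(0));
        assertTrue(hasCapacityFor(1));
    }

    @Test
    public void exactlyAtTheLimitFits() {              // n
        assertTrue(hasCapacityFor(LIMIT));
    }

    @Test
    public void oneOverTheLimitDoesNotFit() {          // n + 1
        assertFalse(hasCapacityFor(LIMIT + 1));
    }
}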
Write tests before you write the code (ie: Test Driven Development). If for some reason you are unable to write tests before, write them as you write the code. Make sure that all the tests fail initially. Then, go down the list and fix each broken one in sequence. This approach will lead to better code and better tests.
If you have time on your side, then you may even consider writing the tests, forgetting about it for a week, and then writing the actual code. This way you have taken a step away from the problem and can see the problem more clearly now. Our brains process tasks differently if they come from external or internal sources and this break makes it an external source.
And after that, don't worry about it too much. Unit tests offer you a sanity check and stable ground to stand on -- that's all.
On my current project we use a little generation tool to produce skeleton unit tests for various entities and accessors. It provides a fairly consistent approach for each modular unit of work that needs to be tested, and creates a great place for developers to test their implementations from (i.e. the unit test class is added by default when the rest of the entities and other dependencies are added).
The structure of the (templated) tests follows a fairly predictable syntax, and the template allows for module/object-specific setup and teardown (we also use a base class for all the tests to encapsulate some logic).
We also create instances of entities (and assign test data values) in static functions so that objects can be created programmatically and used within different test scenarios and across test classes, which is proving to be very helpful.
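A rough sketch of that "static functions that create entities with test data" idea, sometimes called a test data builder or object mother (the Invoice type and its defaults are invented here):
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class InvoiceTestData {

    // Hypothetical entity shared by many test classes.
    static class Invoice {
        final String customer;
        final double amount;
        Invoice(String customer, double amount) { this.customer = customer; this.amount = amount; }
    }

    // Static factories with sensible defaults: tests override only the values they care about.
    static Invoice anInvoice() {
        return anInvoiceOf(100.0);
    }

    static Invoice anInvoiceOf(double amount) {
        return new Invoice("Test Customer", amount);
    }

    // Any test class can call the factories instead of repeating setup code.
    @Test
    public void invoiceDefaultsAreValid() {
        Invoice invoice = anInvoice();
        assertTrue(invoice.amount > 0);
        assertTrue(invoice.customer != null && !invoice.customer.isEmpty());
    }
}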
Reading a book like The Art of Unit Testing will definitely help.
As far as policy goes read Kent Beck's answer on SO, particularly:
to test as little as possible to reach a given level of confidence
Write pragmatic unit tests for the tricky parts of your code, and don't lose sight of the fact that it's the program you are testing that's important, not the unit tests.
I have a Ruby script that generates test stubs for "brown" code that wasn't built with TDD. It writes my build script, sets up includes/usings, and writes a setup/teardown to instantiate the test class in the stub. It helps to have a consistent starting point without all the typing tedium when I hack at code written in the Dark Times.
One practice I've found very helpful is the idea of making your test suite isomorphic to the code being tested. That means that the tests are arranged in the same order as the lines of code they are testing. This makes it very easy to take a piece of code and the test suite for that code, look at them side-by-side and step through each line of code to verify there is an appropriate test. I have also found that the mere act of enforcing isomorphism like this forces me to think carefully about the code being tested, such as ensuring that all the possible branches in the code are exercised by tests, or that all the loop conditions are tested.
For example, given code like this:
void MyClass::UpdateCacheInfo(
    CacheInfo *info)
{
    if (mCacheInfo == info) {
        return;
    }

    info->incrRefCount();
    mCacheInfo->decrRefCount();
    mCacheInfo = info;
}
The test suite for this function would have the following tests, in order:
test UpdateCacheInfo_identical_info
test UpdateCacheInfo_increment_new_info_ref_count
test UpdateCacheInfo_decrement_old_info_ref_count
test UpdateCacheInfo_update_mCacheInfo