I have been building what is, IMO, a really cool RIA. It's now close to completion and I need to test it to see if there are any bugs, counter-intuitive parts, or anything like that. But how? Any time I ask someone to try to break it, they look at it for about 3 minutes and say "it's solid". How do you guys test things? I have never used a unit test before; in fact, until about 3 months ago I had never even heard of one, and I still don't really understand what it is. Would I have to build a whole new application to run every function? That would take forever, plus some functions may only produce errors in certain situations, so I do not understand unit tests.
The question is pretty open-ended, so this post won't answer all your questions. If you can refine what you are looking for, that would help.
There are two major pieces of testing you likely want to do. The first is unit testing and the second is what might be called acceptance testing.
Unit testing is trying each of the classes/methods in relative isolation and making sure they work. You can use something like jUnit, nUnit, etc. as a framework to hold your tests. Take a method, look at the different inputs it might receive and what the expected output is for each, and then write a test case for each of these input/output pairs. This will tell you that most of the parts work as intended.
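For example, a minimal JUnit 4 test for a hypothetical PriceCalculator class (the class name and behaviour are made up purely for illustration) could look like this:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // One input/output pair: a 10% discount on 100.00 should give 90.00.
    @Test
    public void appliesTenPercentDiscount() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.00, calc.applyDiscount(100.00, 0.10), 0.001);
    }

    // Another pair: a zero discount should leave the price unchanged.
    @Test
    public void zeroDiscountLeavesPriceUnchanged() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(100.00, calc.applyDiscount(100.00, 0.0), 0.001);
    }
}

Each test exercises one input/output pair, so a failure points you straight at the behaviour that broke.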
Acceptance testing (or end-to-end testing as it is sometimes called) is running the whole system and making sure it works. Come up with a list of scenarios you expect users to do. Now systematically try them all. Try variations of them. Do they work? If so, you are likely ready to roll it out to at least a limited audience.
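If the RIA runs in the browser, you can also automate some of those scenarios. Below is a rough sketch using Selenium WebDriver; the URL, element IDs, and expected text are placeholders for illustration, not taken from any real app:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginScenarioCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Walk through one user scenario end to end.
            driver.get("http://localhost:8080/myapp");                  // placeholder URL
            driver.findElement(By.id("username")).sendKeys("testuser"); // placeholder IDs
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login")).click();

            // Check the page reacted the way a user would expect.
            String heading = driver.findElement(By.id("welcome")).getText();
            if (!heading.contains("Welcome")) {
                throw new AssertionError("Login scenario failed, got: " + heading);
            }
        } finally {
            driver.quit();
        }
    }
}

Even a handful of scripted scenarios like this will catch regressions you would otherwise only find by clicking through the app by hand.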
Also, check out How to Break Software by James Whittaker. It's one of the better testing books and is a short read.
First, systematically make sure everything works in the manner you expect it to. Then try it against every realistic combination of hardware and installed software that is feasible and appropriate. Then take every point of human interaction and try putting in as much data as possible, no data at all, and special data that may cause exceptions. Then try doing things in an order or workflow you did not expect; sometimes certain actions depend on others. You and your friends will naturally do those steps in order, but what happens when someone doesn't? Also, having complete novices use it is a good way to see the odd things users might try.
Release it in beta?
It's based on Xcode and Cocoa development, but this video is still a great introduction to unit testing. Unit testing is really something that should be done alongside development, so if your application is almost finished it's going to take a while to implement.
Firebug has a good profiler for web apps. As for testing JS files, I use Scriptaculous. Whatever backend you are using needs to be fully tested too.
But before you do that, you need to understand what unit testing is. Unit testing is verifying that all of the individual units of source code function as they are intended to. This means that you verify the output of all of your functions/methods. Basically, read this. There are testing strategies beyond unit testing, such as integration testing, which checks that different modules work correctly with one another. What you are asking people to do is acceptance testing, which is verifying that the application looks and behaves according to the original plan. Here is more on various testing strategies.
PS: always test boundary conditions
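For instance, boundary-condition tests for a hypothetical MathUtil.clamp(value, min, max) helper (a made-up method, just to show the idea) should exercise the edges rather than only the comfortable middle values:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ClampBoundaryTest {

    @Test
    public void valueExactlyAtLowerBoundIsUnchanged() {
        assertEquals(0, MathUtil.clamp(0, 0, 10));
    }

    @Test
    public void valueJustBelowLowerBoundIsRaisedToMin() {
        assertEquals(0, MathUtil.clamp(-1, 0, 10));
    }

    @Test
    public void valueExactlyAtUpperBoundIsUnchanged() {
        assertEquals(10, MathUtil.clamp(10, 0, 10));
    }

    @Test
    public void valueJustAboveUpperBoundIsLoweredToMax() {
        assertEquals(10, MathUtil.clamp(11, 0, 10));
    }
}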
I've been a .Net developer for a long time and am trying to wrap my head around real, practical unit testing.
Most specifically, I'm looking at unit testing ASP.Net MVC 3 projects.
I've read a decent amount about it, and believe I understand it at the academic level (i.e. basically ensures that changes you make aren't going to break other stuff). However, all of the tests I've seen written in examples are trivially stupid things that would be a pretty obvious catch anyway (does this controller return a view with this name?).
So, maybe I'm missing something, or just haven't seen any really good test examples or something, but it really looks like a crap ton of extra work and complexity with mocking, IoC, etc., and I'm just not seeing the counter-balancing gains.
Teach me, please :)
Given proper unit testing, it's almost trivial to catch corner cases that would otherwise have slipped past you. Let's say you have a method that returns a list of items, and the list is retrieved from some table. What happens if that table isn't populated correctly (e.g. there's a null or empty value in one of the columns), or if someone changes the column type to something your ORM tool doesn't map the way you thought it would? Unit tests help catch these cases before you're in production and someone deletes all of the data in your table.
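As a sketch of that idea (the row, item, and mapper names below are invented for illustration), you can feed the mapping code a row with a null column and assert that it is handled gracefully instead of blowing up:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ItemMapperTest {

    // Corner case: the description column came back null from the table.
    @Test
    public void mapsRowWithNullDescriptionToEmptyString() {
        ItemRow row = new ItemRow(42, null);       // hypothetical row object
        Item item = new ItemMapper().map(row);     // hypothetical mapper under test
        assertEquals("", item.getDescription());   // expect a safe default, not a crash
    }
}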
basically ensures that changes you make aren't going to break other stuff
This is not unit testing, but regression testing. Unit testing is used to test the smallest pieces of code individually (usually at the class level). It is not very useful in situations where the project is small or there are not many classes.
There are many instances where some form of unit testing (usually mixed in with other forms; I like to use mock testing if I have time, for example) is very useful. Say you have a large project with 20+ classes, and you are getting an error in one of them. You may need to go through each class and make sure its methods return the correct information. This is unit testing.
In short, unit testing is used when you need to test a specific class, or specific methods, to make sure they are functional. It is a lot easier to find the issue with a program when you are working with the smallest units than when you are walking through a whole program to find out which method isn't working right.
On the SO blog and podcast, Joel and Jeff have been discussing the often-ignored times when unit testing a particular feature simply isn't worth the effort: times when unit testing a simple feature is so complicated, unpredictable, or impractical that the cost of the test doesn't reflect the value of the feature. In Joel's case, the example would have called for a complicated image comparison simply to determine compression quality, had they decided to write the test.
What are some cases you've run into where this was the case? Common areas I can think of are GUIs, page layout, audio testing (testing to make sure an audible warning sounded, for example), etc.
I'm looking for horror stories and actual real-world examples, not guesses (like I just did). Bonus points if you, or whoever had to write said 'impossible' test, went ahead and wrote it anyways.
@Test
public void testSetName() {
    UnderTest u = new UnderTest();
    u.setName("Hans");
    assertEquals("Hans", u.getName());
}
Testing set/get methods is just stupid; you don't need that. If you're forced to do this, your architecture has some serious flaws.
Foo foo = new Foo();
Assert.IsNotNull(foo);
My company writes unit tests and integration tests separately. If we write an integration test for, say, a data access class, it gets fully tested.
They see unit tests as the same thing as integration tests, except that unit tests can't go off-box (i.e. make calls to databases or web services). Yet we have unit tests as well as integration tests for the data access classes.
What good is a test against a data access class that can't connect to the data?
It sounds to me like the writing of a useless unit test is not the fault of unit tests, but of the programmer who decided to write the test.
As mentioned in the podcast (I believe, or somewhere else), if a unit test is obscenely hard to create, it's possible that the code could stand to be refactored, even if it currently "works".
Even the "stupid" unit tests are necessary sometimes, even in the case of "get/set Name". When dealing which clients with complicated business rules, some of the most straightforward properties can have ridiculous caveats attached, and you mind find that some incredibly basic functions might break.
Taking the time to write a complicated unit test means that you've taken the time to fine-tune your understanding of the code, and you might fix bugs in doing so, even if you never complete the unit test itself.
Once I wrote a unit test to expose a concurrency bug, in response to a challenge on C2 Wiki.
It turned out to be unreasonably hard, and hinted that guaranteeing correctness of concurrent code is better handled at a more fundamental level.
I've been test-infected for a long time now, but it seems the majority of developers I work with have either never tried it or dismiss it for one reason or another, the arguments typically being that it adds overhead to development or that they don't need to bother.
What bothers me most about this is that when I come along to make changes to their code, I have a hard time getting it under test as I have to apply refactorings to make it testable and sometimes end up having to do a lot of work just so that I can test the code I'm about to write.
What I want to know is, what arguments would you use to persuade other developers to start writing unit tests? Most developers I've introduced to it take to it quite well, see the benefits and continue to use it. This always seems to be the good developers though, who are already interested in improving the quality of their code and hence can see how unit testing does this.
How do you persuade the rest of the motley crew? I'm not looking for a list of testing benefits, as I already know what these are, but for the techniques you have used or would use to get other people on board. Tips on how to persuade management to take an active role are appreciated as well.
There's more than one side to that question, I guess. I find that actually convincing developers to start using tests is not that hard, because the list of advantages of testing often speaks for itself. That said, there is quite a barrier to actually getting going, and I find that the learning curve is often a bit steep, especially for novice coders. Throwing testing frameworks, a TDD test-first mentality, and a mocking framework at someone who is not yet comfortable with C#, .NET, or programming in general can simply be too much to handle.
I work as a consultant, and therefore I often have to address the problem of implementing TDD in an organization. Luckily, when companies hire me it is often because of my expertise in certain areas, so I might have a little advantage when it comes to getting people's attention. Or maybe it's just a bit easier for me as an outsider to come into a new team and say "Hi! I've tried TDD on other projects and I know that it works!" Or maybe it's my persuasiveness/stubbornness? :) Either way, I often don't find it very hard to convince devs to start writing tests. What I find hard, though, is teaching them how to write good unit tests and, as you point out in your question, keeping them on the righteous path.
But I have found one method that I think works pretty well when it comes to teaching unit testing. I've blogged about it here, but the essence is to sit down and do some pair programming. While pairing, I start out by writing the unit test first. This way I show them a bit of how the testing framework works, how I structure the tests, and often some use of mocking. Unit tests should be simple, so all in all the test should be fairly easy to understand even for junior devs. The hardest part to explain is often the mocking, but using an easy-to-read mocking framework like Moq helps a lot. Then, when the test is written (and nothing compiles or passes), I hand over the keyboard to my fellow coder so that (s)he can implement the functionality. I simply tell her/him: "Make it go green!" Then we move on to the next test; I write the test, and the 'soon-to-be-test-infected dev' next to me writes the functionality.
Now, it's important to understand that at this point the dev(s) you are teaching are probably not yet convinced that this is the right way to code. The point where most devs seem to see the (green) light is when a test fails due to a code change that they never thought would break any functionality. When the test that covers that functionality blows up, that's when you've got yourself a loyal TDD'er on your team. Or at least that's my experience, but as always, your mileage may vary :)
Quality speaks for itself. If you're more successful than everyone else, that's all you need to say.
Use a test-coverage tool. Make it very visible. That way everybody can easily see how much code in each area is passed, failed and untested.
Then you may be able to start a culture where "untested" is a sign of bad coding, "failed" is a sign of work in progress and "passed" is a sign of finished code.
This works best if you also do "test-first". Then "untested" becomes "you forgot step 1".
Of course you don't need 100% test coverage. But if one area has 1% coverage and another has 30%, you have a metric for which area is most likely to fail in production.
Lead by example. If you can, gather evidence that there are fewer regressions in unit-tested code than elsewhere.
Get QA and management buy-in so that your process mandates unit testing.
Be ready to help others to get started with unit testing: provide assistance, supply a framework so that they can start easily, run an introductory presentation.
You just have to get used to the mantra "if it ain't tested, the work ain't done!"
Edit: To add some more meat to my facetious comment above: how can someone know if they're actually finished if they haven't tested their work?
Mind you, you will have a battle convincing others if time isn't allowed in the estimate for testing the developed code.
A one-to-one split between the effort for coding and the effort for testing seems to be a good number.
HTH
cheers,
Rob
Give compliments to those who write more tests and produce good results, show the best examples to others, and ask them to produce the same or better results.
People (and processes) don't change without one or more pain points, so you need to find the significant pain points and demonstrate how unit testing might help deal with them.
If you can't find any significant pain points, then unit testing may not add a lot of value to your current process.
As Steve Lott implies, delivering better results than the other team members will also help. But without the pain points, my experience is that people won't change.
Two ways: convince the project manager that unit testing improves quality AND saves time overall, then have him make unit tests mandatory.
Or wait for a development crunch just before an important release date, where everyone has to work overtime and weekends to finish the last features and eliminate the last bugs, only to find they've just introduced more bugs. Then point out that with proper unit tests they wouldn't have to work like that.
Another situation where unit tests can be shown as indispensable is when a release was actually delivered and turns out to contain a serious bug due to last-minute changes.
If developers see that the "successful" developers are writing unit tests and they are still not doing it themselves, then I suggest unit tests should become part of the formal development life-cycle.
E.g. nothing can be checked in until a unit test is written and reviewed.
Perhaps reefnet_alex's answer will help you:
Is Unit Testing worth the effort?
I think it was Fowler who said:
"Imperfect tests, run frequently, are
much better than perfect tests that
are never written at all". I
interprate this as giving me
permission to write tests where I
think they'll be most useful even if
the rest of my code coverage is
woefully incomplete.
You mentioned that your manager is on board with unit tests. If that's the case, then why isn't he (or she) enforcing it? It isn't your job to get everybody else to follow along or to teach them; in fact, other developers will often resent you if you try to push it on them. For your fellow developers to write unit tests, the manager has to emphasize it strongly. Part of that emphasis might be education on how to implement unit tests, and you might end up being the educator, which is great, but management of it is everything.
If you're in an environment where the group decides the style of implementation, then you have more of a say in how the group dynamic should be. If you are in that sort of environment and the group doesn't want to emphasize unit tests while you do, then maybe you're in the wrong group/company.
I have found that "evangelizing" or preaching rarely works. As others have said, do it your way for your own code, make it known that you do it, but don't try to force others to do it. If people ask about it be supportive and helpful. Offer to do a few lunch-time seminars or informal dog and pony shows. That will do a lot more than just complaining to your manager or the other developers that you have a hard time writing tests for code they wrote.
Slow and steady - it is not going to change overnight.
Once I realized that, the acceptance of peer reviews at one place where I worked improved tremendously. My group just did it and stopped trying to get others to do it. Eventually people started asking us how we got some of the success we did. Then it was easier.
We have a test framework which includes automated running of the test suite whenever anyone commits a change. If someone commits code that fails the tests, the whole team gets emailed with the errors.
This leads to introduced bugs being fixed pretty quickly.
I'm a software test engineer embedded in a development team. A large part of my job involves checking over the state of the project's automated tests (mainly unit/integration tests).
I'm not a short-sighted zealot who wants to force testing down everyone's throats, but I do want to help everyone to get the best out of the time they spend writing tests. A lot of time is spent every week writing tests, so it is important to maximise the returns.
Right now, I do a few things to try to help. Firstly, I always make myself available to talk about testability concerns, e.g. trying to identify a testing strategy, or whether a particular design is testable, and so forth.
In addition to explaining things to people and generally trying to help them out, I also review the finished code and the tests that they write (I have to sign off on stories, meaning that I am somewhat adversarial, too).
My current process is to sit down alone, work through their code, and bookmark and comment on all problem areas, places where things can be improved, and the reasons why. I then get the developer over to my PC and talk through all of the review points, and afterwards I send them a decent write-up so they have a record of it and easy reference.
I do not fix their code and tests for them, but I will add more test cases, etc., if I spot gaps. The reason I have decided not to fix up the tests for them is that it's too easy for developers to say "thanks" but then tune out. My reasoning is that if they have to fix the problems I identified before I sign off, it will lead to a better standard of testing on the project (i.e. more self-sufficient developer testing).
My question is: When it comes to aiding the team, could I be doing anything better? What approaches have you found that can be beneficial?
I'd particularly like to hear from people holding similar positions who have faced the same challenges (e.g. helping improve the quality of the testing, demonstrating the value testing can bring in relevant situations and also striking a good balance between being supportive and adversarial.)
Edit:
Thanks for the answers; all of them contained useful suggestions. I marked the top one as the best answer as I guess it comes down to developer support, and pair programming is something I have not yet tried (short of a few impromptu 'here's how I'd do this' demonstrations after the tests had been written). I'll give that a go with anyone who struggles with testing something :) Cheers.
If you have certain people who tend to be weak at testing, then sit down with them for a sort of pair-programming session; as they work on their code, you can help them see how they might test it.
After a while these people should get better at unit testing, and your work load on this should decrease.
The other thing is that everyone should be looking at tests. If I touch a function and make any change, then I should check the tests to make certain they are complete. If there is a problem, I can discuss it with the developer.
You should also enlist the help of the team lead, as it is (or should be) part of his responsibility to ensure that everyone understands how to write tests well.
A few things I'd do:
Get them to run coverage and spot any missed areas of code, highlighting how, although they think they've got all the cases covered, they might not have. I've done this with a few people and they always seem quite surprised at the areas they've missed when they thought they'd written watertight tests.
Start a "recipe" page on your local Wiki. Every time someone comes up with a testing scenario that they can't figure out, or need your help with, stick it on the Wiki and make it easy to find. Get other people to contribute as well
It sounds like you're already doing this anyway, but ensure that when anyone has a testing-related question, you make yourself available, even if it's to the detriment of your normal workload. If you're passionate about it, it should inspire those who are interested to do the right thing too.
When I'm introducing someone to testing (or a new testing technique), I'll often spend a lot of my time randomly wandering over to their workstation just to see how they're getting on and nudge them in the right direction. This can be fitted in quite nicely when going for tea/smoke breaks or when you're waiting on a build. I've had quite good feedback about this, but YMMV.
Depending on the size of the team, I wonder if it may make sense, after an initial review of the code, to pull in someone else as another set of eyes to look through the changes you'd propose, as a way to show that this isn't just your opinion. This could help highlight where there may be some tension over the changes you'd like to see, where a developer might reply, "Oh, that'll take weeks and likely isn't worth it..." or something similar, if what you'd like to change isn't that simple.
In a similar vein, how does most of the team view testing? Are there leaders or those highly respected that have a positive view on it and help foster a positive attitude towards it? Is there general documentation about the testing guidelines that may help those new to the team to get up to speed quickly? These are just a few other areas I'd examine since sometimes tests can be a great thing and sometimes they can be a pain. Much like the glass that is half-empty or half-full depending on how you want to see it.
Not that I've held the same position, but as someone who has been a developer for a while, this is just what I'd like to see to help make testing "a good thing", as Martha Stewart would say.
One way to gently ease the team into writing tests is to initiate the practice of writing tests when bugs are being fixed. So when a bug comes in, the first thing to do is write a test that will fail because of the bug, then fix the bug and get the test to pass.
This approach can also be used when code gets modified internally (no public API changes): write tests to cover the area being modified to ensure that it doesn't get broken by the code changes. Writing tests this way is a lot less work and clearly demonstrates the benefits once a developer catches their first regression bug.
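As a sketch of that bug-first workflow (the class and scenario below are invented): suppose a bug report says an order total goes negative when every item is refunded. You first pin the bug down with a failing test, then fix the code until it passes:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShoppingCartRegressionTest {

    // Written before the fix: this test fails while the bug is present
    // and passes once it is fixed, guarding against future regressions.
    @Test
    public void totalIsZeroWhenEveryItemIsRefunded() {
        ShoppingCart cart = new ShoppingCart();   // hypothetical class under test
        cart.addItem("book", 25.00);
        cart.refundItem("book");
        assertEquals(0.00, cart.getTotal(), 0.001);
    }
}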
What are some of the tricks or tools or policies (besides having a unit testing standard) that you guys are using to write better unit tests? By better I mean 'covers as much of your code in as few tests as possible'. I'm talking about stuff that you have used and saw your unit tests improve by leaps and bounds.
As an example I was trying out Pex the other day and I thought it was really really good. There were tests I was missing out and Pex easily showed me where. Unfortunately it has a rather restrictive license.
So what are some of the other great stuff you guys are using/doing?
EDIT: Lots of good answers. I'll be marking as correct the answer that I'm currently not practicing but will definitely try and that hopefully gives the best gains. Thanks to all.
1. Write many tests per method.
2. Test the smallest thing possible. Then test the next smallest thing.
3. Test all reasonable input and output ranges. IOW: if your method returns a boolean, make sure to test both the false and the true return. For an int? -1, 0, 1, n, n+1 (proof by mathematical induction). Don't forget to check for all exceptions (assuming Java).
4a. Write an abstract interface first.
4b. Write your tests second.
4c. Write your implementation last.
5. Use Dependency Injection (for Java: Guice - supposedly better, Spring - probably good enough).
6. Mock your unit's collaborators with a good toolkit like Mockito (assuming Java, again); see the sketch after this list.
7. Google much.
8. Keep banging away at it. (It took me 2 years - without much help but for Google - to start "getting it".)
9. Read a good book about the topic.
10. Rinse, repeat...
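For item 6, a minimal Mockito sketch (the OrderService and PaymentGateway names are invented for illustration) looks something like this:

import org.junit.Test;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class OrderServiceTest {

    @Test
    public void chargesTheGatewayWhenAnOrderIsPlaced() {
        // The collaborator is mocked so the unit test never touches a real payment system.
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("customer-1", 25.00)).thenReturn(true);

        OrderService service = new OrderService(gateway);   // the unit under test
        boolean placed = service.placeOrder("customer-1", 25.00);

        assertTrue(placed);
        verify(gateway).charge("customer-1", 25.00);  // the collaborator was used as expected
    }
}

Because the gateway is a mock, the test stays fast and only fails when the unit itself misbehaves.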
Write tests before you write the code (ie: Test Driven Development). If for some reason you are unable to write tests before, write them as you write the code. Make sure that all the tests fail initially. Then, go down the list and fix each broken one in sequence. This approach will lead to better code and better tests.
If you have time on your side, you might even consider writing the tests, forgetting about them for a week, and then writing the actual code. This way you have taken a step back from the problem and can see it more clearly. Our brains process tasks differently depending on whether they come from external or internal sources, and this break turns the problem into an external source.
And after that, don't worry about it too much. Unit tests offer you a sanity check and stable ground to stand on -- that's all.
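A tiny sketch of that fail-first step (class and method names invented): the test is written against a stub that does not work yet, so it fails immediately, and the real implementation is only filled in afterwards to make it go green:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class SlugGeneratorTest {

    // Written first: fails until the stub below is replaced with a real implementation.
    @Test
    public void replacesSpacesWithDashesAndLowercases() {
        assertEquals("hello-world", SlugGenerator.toSlug("Hello World"));
    }
}

// Initial stub: it compiles, but it makes the test fail, as TDD expects.
class SlugGenerator {
    static String toSlug(String title) {
        throw new UnsupportedOperationException("not implemented yet");
    }
}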
On my current project we use a little generation tool to produce skeleton unit tests for various entities and accessors. It provides a fairly consistent approach for each modular unit of work that needs to be tested, and creates a great place for developers to test their implementations from (i.e. the unit test class is added by default when the rest of the entities and other dependencies are added).
The structure of the (templated) tests follows a fairly predictable syntax, and the template allows for module/object-specific build-up and tear-down (we also use a base class for all the tests to encapsulate some common logic).
We also create instances of entities (and assign test data values) in static functions so that objects can be created programmatically and used within different test scenarios and across test classes, which is proving to be very helpful.
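A rough sketch of what such a shared base class and static test-data factory might look like (all names here are invented; the real generated code will differ):

import org.junit.After;
import org.junit.Before;

// Base class that the generated test skeletons extend.
public abstract class EntityTestBase {

    @Before
    public void setUpCommonFixtures() {
        // common build-up logic shared by all generated tests
    }

    @After
    public void tearDownCommonFixtures() {
        // common tear-down logic
    }

    // Static factory so test data can be created programmatically
    // and reused across test scenarios and test classes.
    protected static Customer aCustomer() {
        Customer c = new Customer();        // hypothetical entity
        c.setName("Test Customer");
        c.setEmail("test@example.com");
        return c;
    }
}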
Reading a book like The Art of Unit Testing will definitely help.
As far as policy goes read Kent Beck's answer on SO, particularly:
to test as little as possible to reach a given level of confidence
Write pragmatic unit tests for the tricky parts of your code, and don't lose sight of the fact that it's the program you are testing that's important, not the unit tests.
I have a Ruby script that generates test stubs for "brown" code that wasn't built with TDD. It writes my build script, sets up includes/usings, and writes a setup/teardown that instantiates the class under test in the stub. It helps to have a consistent starting point, without all the typing tedium, when I hack at code written in the Dark Times.
One practice I've found very helpful is the idea of making your test suite isomorphic to the code being tested. That means that the tests are arranged in the same order as the lines of code they are testing. This makes it very easy to take a piece of code and the test suite for that code, look at them side-by-side and step through each line of code to verify there is an appropriate test. I have also found that the mere act of enforcing isomorphism like this forces me to think carefully about the code being tested, such as ensuring that all the possible branches in the code are exercised by tests, or that all the loop conditions are tested.
For example, given code like this:
void MyClass::UpdateCacheInfo(CacheInfo *info)
{
    if (mCacheInfo == info) {
        return;
    }

    info->incrRefCount();
    mCacheInfo->decrRefCount();
    mCacheInfo = info;
}
The test suite for this function would have the following tests, in order:
test UpdateCacheInfo_identical_info
test UpdateCacheInfo_increment_new_info_ref_count
test UpdateCacheInfo_decrement_old_info_ref_count
test UpdateCacheInfo_update_mCacheInfo