I get the concept of unit testing and TDD on the whole.
However, I'm still a little confused about what exactly unit testing frameworks are. Whenever I read about unit testing, it's usually an explanation of what it is, followed by "oh, here are the frameworks for this language, e.g. JUnit".
But what does that really mean? Are frameworks just a sort of testing library that lets programmers write simpler, more efficient unit tests?
Also, what are the benefits of using a framework? As I understand it, unit testing is done on small chunks of code at a time, e.g. a method. However, I could individually write a test for a method without using a unit testing framework. Is it maybe for standardization of testing practices?
I'm just very new to testing and unit-testing, clarification on some basic concepts would be great.
A bit of a broad question, but I think there are certain thoughts that could count as facts for an answer:
When 5, 10, 100, ... people set out to work with the same idea or concept (for example, unit testing), then, most likely, certain patterns and best practices will evolve. People have ideas, and by trial and error they find out which of those ideas are helpful and which are not.
Then people start to communicate their ideas, and those "commonly used" patterns undergo discussions and get further refined.
And sooner or later, people start thinking "I am doing the same task over and over again; I should write a program for me to do that".
And that is how frameworks come into existence: they are tools to support certain aspects of a specific activity.
Let's give an example: using a framework like JUnit, I can completely focus on writing test cases. I don't need to worry about accumulating failure statistics; I don't need to worry about how to make sure that all my tests really are executed when I want them to run.
I simply learn how to use the JUnit framework, and I know how to use JUnit test cases in conjunction with build systems such as Gradle or Maven in order to have all my unit tests executed automatically, for example each time I push a commit into my source code management system.
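To make this concrete, here is a minimal sketch of what a JUnit 5 test looks like; the Calculator class is a made-up example included so the sketch is self-contained, while the @Test annotation and assertEquals assertion come from the framework itself:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CalculatorTest {

    // A trivial, hypothetical unit under test, included here so the sketch is self-contained.
    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    @Test
    void addingTwoNumbersReturnsTheirSum() {
        Calculator calculator = new Calculator();   // set up the unit under test
        int result = calculator.add(2, 3);          // exercise it
        assertEquals(5, result);                    // JUnit reports a failure if this does not hold
    }
}

The framework discovers the @Test methods, runs them, and reports failures; Gradle and Maven hook into that same runner so the whole suite executes on every build.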
Of course you can re-invent the wheel here and implement all of that yourself. But that is just a waste of time. It is like saying: "I want to move my crop to the market, so let's start by building the truck myself." No. You rent or buy a pre-built truck, and you use it to do what you actually want to do (move things around).
Here is an article: https://martinfowler.com/articles/mocksArentStubs.html#ClassicalAndMockistTesting
It's about classical TDD versus mockist TDD. My understanding was that classes should be tested in isolation, and therefore ALL dependencies should be stubbed or mocked. However, according to the article, it seems there's a large group of people, the classical TDDers, who use real objects. There are various articles on the internet that emphasize that unit tests should not use real classes other than the SUT, of course. For example, take a look at this from Microsoft's website on stubs: https://learn.microsoft.com/en-us/visualstudio/test/using-stubs-to-isolate-parts-of-your-application-from-each-other-for-unit-testing
public int GetContosoPrice()
{
    var stockFeed = new StockFeed(); // NOT RECOMMENDED
    return stockFeed.GetSharePrice("COOO");
}
Can someone clear up my confusion?
You don't seem to be confused at all - there are two different schools of thought on what a "unit test" is, and therefore how it should be used.
For instance, Kent Beck, in Test Driven Development By Example, writes
The problem with driving development with small-scale tests (I call them "unit tests" but they don't match the accepted definition of unit tests very well)...
Emphasis added.
It may help to keep in mind that 20 years ago, the most common testing pattern was the "throw it over the wall to QA" test. Even in cases where automated tests were present, the disciplines required to make those tests effective were not common knowledge.
So it was important to communicate the idea that tests should be isolated from other tests. If developers were going to be running tests as often as the extreme programmers were insisting that they should, then those tests needed to be reliable and fast in wall clock time. Tests that don't share any mutable state (either themselves, or indirectly via the system under test) can be run effectively in parallel, reducing the wall clock time, and therefore reducing the developer interruption that they introduce.
There is a separate discipline that says, in addition to the isolation described above, we should also be striving for tests that check the system in isolation from other parts of the system.
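To sketch the difference the article describes (the Invoice and TaxCalculator names are invented; the JUnit and Mockito calls are the standard ones): a classical test wires in the real collaborator when it is cheap to use, while a mockist test stubs every dependency so only the class under test is exercised.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class InvoiceStyleComparisonTest {

    // Hypothetical collaborator: pure logic, cheap to use for real.
    interface TaxCalculator {
        int taxFor(int netAmount);
    }

    static class TwentyPercentTax implements TaxCalculator {
        public int taxFor(int netAmount) { return netAmount / 5; }
    }

    // Hypothetical unit under test.
    static class Invoice {
        private final TaxCalculator tax;
        Invoice(TaxCalculator tax) { this.tax = tax; }

        int grossTotal(int netAmount) {
            return netAmount + tax.taxFor(netAmount);
        }
    }

    @Test
    void classicalStyleUsesTheRealCollaborator() {
        // Classical style: the real TaxCalculator implementation is wired in,
        // so the test exercises Invoice and TwentyPercentTax together.
        Invoice invoice = new Invoice(new TwentyPercentTax());
        assertEquals(120, invoice.grossTotal(100));
    }

    @Test
    void mockistStyleStubsTheCollaborator() {
        // Mockist style: the collaborator is stubbed, so only Invoice is exercised
        // and the test pins down the exact interaction with the dependency.
        TaxCalculator stubbedTax = mock(TaxCalculator.class);
        when(stubbedTax.taxFor(100)).thenReturn(20);

        Invoice invoice = new Invoice(stubbedTax);
        assertEquals(120, invoice.grossTotal(100));
    }
}

Both tests check the same result; they differ in whether a bug in TwentyPercentTax would also break the Invoice test.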
If you want to get a real sense for the history of people with these different ideas talking past each other -- including the history of recognizing that they are talking past each other and trying to invent new labels -- a good starting point is the C2 wiki:
http://wiki.c2.com/?UnitTest
http://wiki.c2.com/?ShouldUnitTestsTestInteroperations
http://wiki.c2.com/?DeveloperTest
http://wiki.c2.com/?ProgrammerTest
For a modern perspective, you might start with Ham Vocke's Practical Test Pyramid
I used to write tests while developing my software, but I stopped because I noticed that, almost always, the first API and structures I thought were great turned out to be clumsy after some progress. I would need to rewrite the entire main program and the entire test suite every time.
I believe this situation is common in reality. So my questions are:
Is it really common to write the tests first, as TDD says? I'm just an amateur programmer, so I don't know the real development world.
If so, do people rewrite the tests again (and again) when they revamp the software API/structure? (Unless they're smart enough to come up with the best one at first, unlike me.)
I don't know of anyone who recommends TDD when you don't know what you're building yet. Unless you've created a very similar system before, you prototype first, without TDD. There is a very real danger, however, of ending up putting the prototype into production without ever bringing the TDD process into play.
Some common ways of doin' it right are…
A. Throw the prototype away, and start over using TDD (can still borrow some code almost verbatim from the prototype, just re-implement following the actual TDD cycle).
B. Retrofit unit tests into the prototype, and then proceed with red, green, refactor from there.
but I stopped because I noticed that, almost always, the first API and structures I thought were great turned out to be clumsy after some progress
Test driven development should help you with the design. An API that is "clumsy" will seem clumsy as you write your tests for it.
Is it really common to write the tests first, as TDD says?
Depends on the developers. I use Test driven development for 99% of what I write. It aids in the design of the APIs and applications I write.
If so, do people rewrite the tests again (and again) when they revamp the software API/structure?
Depends on the level of the tests. Hopefully, during a big refactor (that is, when you rewrite a chunk of code) you have some tests in place to cover the work you are about to do. Some unit tests will be thrown away, but integration and functional tests will be very important. They are what tells you that nothing has been broken.
You may have noticed I've made a point of writing test driven development and not "TDD". Test driven development is not simply "writing tests first"; it is allowing the tests to drive the development cycle. The design of your API will be strongly affected by the tests that you write (as a contrived example, that singleton or service locator will be replaced with IoC). Writing good APIs takes practice and learning to listen to the tools you have at your disposal.
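As a contrived sketch of that last point (all names here are invented): once the dependency is injected through the constructor instead of being looked up from a singleton or service locator, the test can hand in a fixed fake, and the pain of testing the hard-wired version is exactly what nudges the design towards IoC.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ReportServiceTest {

    // Hypothetical dependency; in the singleton/service-locator version it would be
    // fetched inside buildReport(), which is what makes that version hard to test.
    interface Clock {
        String now();
    }

    // Injecting the dependency through the constructor is the design the tests push you toward.
    static class ReportService {
        private final Clock clock;

        ReportService(Clock clock) {
            this.clock = clock;
        }

        String buildReport() {
            return "Report generated at " + clock.now();
        }
    }

    @Test
    void reportUsesTheInjectedClock() {
        // A fixed fake stands in for the real clock, so the test is deterministic.
        ReportService service = new ReportService(() -> "2020-01-01T00:00");
        assertEquals("Report generated at 2020-01-01T00:00", service.buildReport());
    }
}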
Purists say yes, but in practice it works out a little differently. Sometimes I write a half dozen tests and then write the code that passes them. Other times I will write several functions before writing the tests, because those functions are not meant to be used in isolation, or testing them first would be hard.
And yes, you may find you need to rewrite tests as APIs change.
And even the purists will admit that some tests are better than none.
Is it really reasonable to write tests at an early stage?
No, if you are writing top-down, high-level integration tests that require a real database or an internet connection to another website in order to work.
Yes, if you are implementing bottom-up with unit testing (i.e. testing a module in isolation).
The higher the "level", the more difficult the unit testing becomes, because you have to introduce more mocking/abstraction.
In my opinion the architectural benefits of TDD only apply when combined with unit testing, because this drives the separation of concerns.
When I started TDD I had to rewrite many tests when changing the API/architecture. With more experience, today there are only a few cases where this is necessary.
You should have a first layer of tests that verifies the externally visible behavior of your API regardless of its internals.
Updating this kind of test when a new functional requirement emerges is not a problem. In the example you mention, it would be easy to adjust to new websites being scraped - you would just add new assertions to the tests to account for the new data being fetched.
The fact that "the scraping code had to be revamped entirely" shouldn't affect the structure of these higher level tests, because from the outside, the API should be consumed exactly the same way as before.
If such a low-level technical detail does affect your high level tests, you're probably missing an abstraction that describes what data you get but hides the details of how it is retrieved.
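A sketch of the kind of abstraction meant here, with invented names: the higher-level tests only depend on an interface describing what data comes back, so the scraping implementation behind it can be revamped entirely without touching them.

import java.math.BigDecimal;
import java.util.Optional;

// The high-level tests depend only on this contract: what data you get.
interface ProductPriceSource {
    Optional<BigDecimal> priceFor(String productId);
}

// How the data is retrieved lives behind the interface. This scraping implementation
// can be rewritten entirely without the higher-level tests ever noticing.
class ScrapingPriceSource implements ProductPriceSource {
    @Override
    public Optional<BigDecimal> priceFor(String productId) {
        // ...HTML fetching and parsing details would go here...
        return Optional.empty();
    }
}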
Writing tests before you write the actual code would mean you know how your application will be designed. This is rarely the case.
As a matter of fact, I, for example, start by writing everything in a single file. It might have a few hundred lines or more. This way I can easily and quickly redesign the API. Later, when I decide I like it and that it's good, I start refactoring it by putting everything into meaningful namespaces and separate files.
When this is done, I start writing tests to verify that everything works fine and to find bugs.
TDD is just a myth. It is not possible to write the tests first and the code later, especially if you are at the beginning.
You always have to keep the KISS rule in mind. If you need crazy stuff like fakes or mocks to test your own code, you have already failed it.
My team is working on the development of an application that has been running for several years already, but no unit test has ever been coded. Now that we wish to start doing so, we realise we cannot possibly go over all the existing methods to test them, because that would take years of work.
The question is: how can one decide which methods absolutely need unit testing, and which don't?
Would you rather unit test a method that is often called or a method that is often modified?
I read that Unit Testing is rather inefficient on DAO classes. Should I restrict the tests to methods containing logic?
Most important: will the tests we put in place be of any use if only part of the application is unit tested?
how can one decide which methods absolutely need unit testing, and which don't?
This is a difficult question to answer without knowing your code base and what its history and future are. But in general, write tests for the parts of the code that are hard to understand, will get modified in the near future, or are known to have bugs. When testing legacy applications, the best bang for your buck comes from tests that make the program easier to maintain going forward, help you fix bugs, and keep old bugs from coming back.
Would you rather unit test a method that is often called or a method that is often modified?
As stated above, it depends. Is the method that is called often trivial? Easy to understand? I would probably lean towards "often modified" just to make future development easier. But ideally both should get tested.
I read that Unit Testing is rather inefficient on DAO classes.
I don't know where you read that. Unit testing can be very efficient with DAOs if you use mock objects.
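As a rough sketch of what that can look like (the DAO and service are hypothetical; the Mockito calls are the standard ones): the DAO is mocked, so the service's logic is exercised without ever touching a database.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class CustomerServiceTest {

    // Hypothetical DAO and service, included so the sketch is self-contained.
    interface CustomerDao {
        String findNameById(long id);
    }

    static class CustomerService {
        private final CustomerDao dao;
        CustomerService(CustomerDao dao) { this.dao = dao; }

        String greetingFor(long id) {
            return "Hello, " + dao.findNameById(id) + "!";
        }
    }

    @Test
    void greetingIsBuiltFromTheDaoResult() {
        // The DAO is replaced by a Mockito mock, so no database is involved.
        CustomerDao dao = mock(CustomerDao.class);
        when(dao.findNameById(42L)).thenReturn("Ada");

        CustomerService service = new CustomerService(dao);
        assertEquals("Hello, Ada!", service.greetingFor(42L));
    }
}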
Will the tests we put in place be of any use if only part of the application is unit tested?
Any tests are useful. A program that is only 10% covered by tests is better than a program with 0% coverage. Especially if that 10% is the most important or trickiest part of the program.
If you haven't read it yet, I highly recommend Michael Feathers' Working Effectively with Legacy Code, where "legacy code" means code that doesn't have tests.
Some people create unit tests for getters and setters and insist on 100% code coverage.
Practical people will test those methods that need testing. What this means will depend on your intelligence and discernment of what constitutes a method that needs testing.
Some people, however, consider that the minimum size of a unit is the class, and that tests should be created to test a class (and sometimes its associated classes).
In short, forget any kind of dogmatic principle about unit testing; what matters is the quality of your code. As with agile development, it's whatever helps you achieve that goal that is important. So if you feel your DAO objects will not benefit from testing, then don't bother; spend that time doing something more productive instead.
I'm on a project where we don't do TDD, because our bosses and the client are very "old style" people. Since I can't do design through TDD, but I am afraid of changes, I would like to write unit tests for my own safety. But what would those unit tests look like? Do I have to write a test for each method's specification, to check that it does what it's supposed to do? Do I have to write a test for each new functionality, like TDD but without the design part? My mind is a mess.
Thanks in advance.
You probably can't hurt anything by doing unit tests - regardless of how well they're done - except for one possible side-effect, and that is false confidence.
We tend not to do hardcore TDD; nevertheless, the unit test coverage ranges from non-existent to moderate depending on the project, and it is becoming increasingly valuable as the idea settles in.
For general pointers, I'd say the following are key priorities for you right now:
Test what you know to be important
Test what you know to be fragile
Write tests to expose any new bugs, then fix the bug by making modifications that pass the test (a sketch of this appears below)
Apply TDD where possible to any new features
Acknowledge that you can't TDD an existing project. By its nature, TDD only applies to new ground, whether that's a new product, or a new feature for a legacy product. Don't let this fact dishearten you.
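To make the bug-exposing point above concrete, a small sketch (the discount logic is invented for the example): write a test that reproduces the reported bug first, watch it fail, then make the change that lets it pass.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountCalculatorRegressionTest {

    // Hypothetical legacy method with a reported bug: orders of exactly 100 units
    // were not getting the bulk discount they should.
    static int discountedTotal(int units, int unitPrice) {
        int total = units * unitPrice;
        return units >= 100 ? total - total / 10 : total; // fix: the old code used "units > 100"
    }

    @Test
    void ordersOfExactlyOneHundredUnitsGetTheBulkDiscount() {
        // Written first to reproduce the reported bug: it fails against the
        // old code and passes once the fix is in, then guards against regressions.
        assertEquals(900, discountedTotal(100, 10));
    }
}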
Yes. TDD is yet another software development technique, one which happens to utilize unit testing heavily. Unit testing as a process is fine on its own without TDD behind it. Not to mention, sometimes it's not even possible to do TDD, yet you still write tests (think of legacy systems and testing existing, untested code).
But as for what you should test: depending on how deep you want (or can) go with testing, you can start with end-user-oriented functionality, move through system component testing (i.e. class contracts), and go all the way down to simply assuring your code does what you claim it does - that's pretty much the final, most fine-grained kind of unit test, which you'll most likely have a lot of.
In general, what to test is not an easy question; I've happened to answer several variants of that question already. To give you a few tips:
test what your code does, not what it does not
if you have a certain requirement, have a test covering it
test a single feature at a time
focus on the public contract, skip the private bits (a small sketch of this follows the list)
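A small sketch of the "public contract" tip, with invented names: the test drives the class only through its public methods and never reaches into the private state.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ShoppingBasketTest {

    // Hypothetical class: the private running total is an implementation detail.
    static class ShoppingBasket {
        private int totalCents = 0;                 // private bit: not tested directly

        void add(int priceCents, int quantity) {
            totalCents += priceCents * quantity;
        }

        int totalCents() {                          // public contract: this is what gets tested
            return totalCents;
        }
    }

    @Test
    void totalReflectsEverythingAddedToTheBasket() {
        ShoppingBasket basket = new ShoppingBasket();
        basket.add(250, 2);   // two items at 2.50
        basket.add(100, 1);   // one item at 1.00
        assertEquals(600, basket.totalCents());
    }
}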
Also, reading a few of the top-voted unit testing questions might give you some ideas on why you will benefit from testing, regardless of whether you use TDD or not.
When you say "we don't do TDD", do you mean that others don't practise TDD or that your bosses forbid you from practising TDD? If it's the first one, then you can practise TDD as much as you want, as long as you don't try to force other people to do it. If it's the other one, then tell your bosses that they pay you to write code the best way you know how, and TDD is part of how you do that.
You can certainly write tests without practising TDD. People do it all the time. Use the old saying, "Test until fear turns to boredom". Write tests for whatever you fear might not work correctly.
Jimmy Bogard wrote an article, Getting value out of your unit tests, where he gives four rules:
Test names should describe the what and the why, from the user’s perspective
Tests are code too, give them some love
Don’t settle on one fixture pattern/organizational style
One Setup, Execute and Verify per Test
In your opinion, are these guidelines complete? What are your guidelines for unit tests?
Please avoid language-specific idioms; try to keep answers language-agnostic.
There's an entire 850-page book called xUnit Test Patterns that deals with this topic, so it's not something that can be easily boiled down to a few hard rules (although the rules you mention are good).
A more digestible book that also covers this subject is The Art of Unit Testing.
If I may add the rules I find most important, they would be:
Use Test-Driven Development. It's by far the most effective road towards good unit tests. Trying to retrofit unit tests onto existing code tends to be difficult at best.
Keep it simple: Ideally, a unit test should be less than 10 lines of code. If it grows to much more than 20 lines of code, you should seriously consider refactoring either the test code, or the API you are testing.
Keep it fast. Unit test suites are meant to be executed very frequently, so aim at keeping the entire suite under 10 s. That can easily mean keeping each test under 10 ms.
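As a sketch of what those rules can look like in practice (the slugify helper is invented): one setup, one execution, one verification, no I/O, and well under ten lines.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SlugifierTest {

    // Hypothetical unit under test, inlined to keep the example self-contained.
    static String slugify(String title) {
        return title.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
    }

    @Test
    void titleIsTurnedIntoAUrlSlug() {
        String slug = slugify("  Unit Testing Guidelines  ");  // set up and execute
        assertEquals("unit-testing-guidelines", slug);          // verify
    }
}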
Writing unit tests is simple; it is writing unit-testable code that is difficult.
The Way of Testivus
If you write code, write tests.
Don’t get stuck on unit testing dogma.
Embrace unit testing karma.
Think of code and test as one.
The test is more important than the unit.
The best time to test is when the code is fresh.
Tests not run waste away.
An imperfect test today is better than a perfect test someday.
An ugly test is better than no test.
Sometimes, the test justifies the means.
Only fools use no tools.
Good tests fail.
Break the code under test regularly to see how effective the unit tests are.
Take a look at the code coverage of your tests, and try to make it reasonably complete (for error cases, I'd use some discretion about whether to test them or not).
For many more good ideas on writing unit tests, search stackoverflow.com.