Are there things that you can't test when applying TDD [closed] - unit-testing

I've been applying TDD to some new projects to get the hang of things, and I understand the basic flow: Write tests that fail, write code to pass tests, refactor if needed, repeat.
To keep the tests fast and decoupled, I abstract out things like network requests and file I/O. These are usually abstracted into interfaces which get passed through using dependency injection.
Usually development is very smooth until the end, when I realize I need to implement these abstract interfaces. The whole point of abstracting them was to make the code easily testable, but following TDD I would need to write a test before writing the implementation code, correct?
For example, I was looking at the tdd-tetris-tutorial https://github.com/luontola/tdd-tetris-tutorial/tree/tutorial/src/test/java/tetris. If I wanted to add the ability to play with a keyboard, I would abstract away basic controls into methods inside a Controller class, like "rotateBlock", "moveLeft", etc. that could be tested.
But at the end I would need to add some logic to detect keystrokes from the keyboard when implementing a controller class. How would one write a test to implement that?
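For illustration, here is a rough sketch of the kind of abstraction described above (plain Java with AWT key events; the names are hypothetical and not taken from the tutorial):

import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;

// Testable abstraction: all the interesting game logic is exercised
// through these methods, with no keyboard involved.
interface Controller {
    void rotateBlock();
    void moveLeft();
    void moveRight();
}

// Thin adapter: translates raw keystrokes into calls on the Controller.
// This is the part that is awkward to unit test, so it is kept as small
// and logic-free as possible.
class KeyboardInput implements KeyListener {
    private final Controller controller;

    KeyboardInput(Controller controller) {
        this.controller = controller;
    }

    @Override
    public void keyPressed(KeyEvent e) {
        switch (e.getKeyCode()) {
            case KeyEvent.VK_UP:    controller.rotateBlock(); break;
            case KeyEvent.VK_LEFT:  controller.moveLeft();    break;
            case KeyEvent.VK_RIGHT: controller.moveRight();   break;
        }
    }

    @Override public void keyReleased(KeyEvent e) {}
    @Override public void keyTyped(KeyEvent e) {}
}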
Perhaps some things just can't be tested, and reaching 100% code coverage is impossible for some cases?

Perhaps some things just can't be tested, and reaching 100% code coverage is impossible for some cases?
I'd phrase it slightly differently: not all things can be tested with the same cost effectiveness.
The "trick", so to speak, is to divide your code into two categoies: code that is easy to test, and code that is so obvious that you don't need to test it -- or not as often.
The nice thing about simple adapters is that (once you've got them working at all) they don't generally need to change very much. All of the logic lives somewhere else and that somewhere else is easy to test.
Consider, for example, reading bytes from a file. That kind of interface looks sort of like a function that accepts a filename as an argument and either returns an array of bytes or throws some sort of exception. Implementing that is a straightforward exercise in most languages, and the code is so textbook-familiar that it falls clearly into the category of "so simple there are obviously no deficiencies".
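As a rough sketch of such an adapter, assuming Java and the standard library (the interface and class names here are illustrative, not from any particular codebase):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// The rest of the system depends on this small interface, which is easy
// to fake in tests.
interface FileSource {
    byte[] readAllBytes(String filename) throws IOException;
}

// The "so simple there are obviously no deficiencies" adapter: a single
// line of delegation to the standard library, with no logic worth
// refactoring and therefore little reason to re-test it often.
class DiskFileSource implements FileSource {
    @Override
    public byte[] readAllBytes(String filename) throws IOException {
        return Files.readAllBytes(Paths.get(filename));
    }
}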
Because the code is simple and stable, you don't need to test it at anywhere near the frequency that you test code you regularly refactor. Thus, the cost benefit analysis supports the conclusion that you can delegate your occasional tests of this code to more expensive techniques.
100% statement coverage was never the goal of TDD (although it is really easy to understand how you -- and a small zillion other people -- reached that conclusion). It was primarily about deferring detailed design. So to some degree code that starts simple and changes infrequently was "out of bounds" from the get go.

You can't test everything with TDD unit tests. But if you also have integration tests, you can test those I/O interfaces. You can produce integration tests using TDD.
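For example, an integration test for a file-reading adapter could itself be written first and then made to pass, just like a unit test - only against a real temporary file. A sketch, assuming JUnit 4 and the DiskFileSource adapter sketched in the previous answer:

import static org.junit.Assert.assertArrayEquals;

import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.Test;

public class DiskFileSourceIT {

    @Test
    public void readsBackExactlyWhatWasWritten() throws Exception {
        // Real file I/O: slower than a unit test, so it lives in a separate
        // integration-test suite that runs less often.
        byte[] expected = {1, 2, 3};
        Path file = Files.createTempFile("disk-file-source", ".bin");
        Files.write(file, expected);

        byte[] actual = new DiskFileSource().readAllBytes(file.toString());

        assertArrayEquals(expected, actual);
    }
}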
In practice, some things are impractical to test automatically. Error handling for rare error conditions, and race hazards, are among the hardest.

Related

Is it really reasonable to write tests at the early stage? [closed]

I used to write tests while developing my software, but I stopped because I noticed that, almost always, the first API and structures I thought were great turned out to be clumsy after some progress. I would need to rewrite the entire main program and all the tests every time.
I believe this situation is common in reality. So my questions are:
Is it really common to write tests first, as TDD prescribes? I'm just an amateur programmer, so I don't know the real development world.
If so, do people rewrite the tests again (and again) when they revamp the software API/structure? (Unless they're smart enough to think up the best one at first, unlike me.)
I don't know of anyone who recommends TDD when you don't know what you're building yet. Unless you've built a very similar system before, you prototype first, without TDD. There is a very real danger, however, of ending up putting the prototype into production without ever bringing the TDD process into play.
Some common ways of doing it right are…
A. Throw the prototype away, and start over using TDD (can still borrow some code almost verbatim from the prototype, just re-implement following the actual TDD cycle).
B. Retrofit unit tests into the prototype, and then proceed with red, green, refactor from there.
but I stopped because I noticed that, almost always, the first API and structures I thought were great turned out to be clumsy after some progress
Test driven development should help you with the design. An API that is "clumsy" will seem clumsy as you write your tests for it.
Is it really common to write tests first, as TDD prescribes?
Depends on the developers. I use Test driven development for 99% of what I write. It aids in the design of the APIs and applications I write.
If so, do people rewrite the tests again (and again) when they revamp the software API/structure?
Depends on the level of the tests. Hopefully, during a big refactor (that is, when you rewrite a chunk of code) you have some tests in place to cover the work you are about to do. Some unit tests will be thrown away, but integration and functional tests will be very important. They are what tell you that nothing has been broken.
You may have noticed I've made a point of writing test driven development and not "TDD". Test driven development is not simply "writing tests first"; it is allowing the tests to drive the development cycle. The design of your API will be strongly affected by the tests that you write (contrived example: that singleton or service locator will be replaced with IoC). Writing good APIs takes practice and learning to listen to the tools you have at your disposal.
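To make the contrived example concrete, here is a rough before/after sketch (all the types are hypothetical placeholders):

// Minimal stand-ins for the example.
interface PaymentGateway { void charge(long amountInCents); }
class Order { long totalInCents() { return 0; } }
class ServiceLocator {
    static <T> T get(Class<T> type) { throw new UnsupportedOperationException(); }
}

// Before: the dependency is hidden behind a static lookup, so a test
// cannot substitute a fake gateway.
class OrderServiceWithLocator {
    void place(Order order) {
        PaymentGateway gateway = ServiceLocator.get(PaymentGateway.class);
        gateway.charge(order.totalInCents());
    }
}

// After: writing the test first pushes toward constructor injection (IoC),
// because handing in a fake PaymentGateway is the only way the test can run
// without a real payment provider.
class OrderService {
    private final PaymentGateway gateway;

    OrderService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    void place(Order order) {
        gateway.charge(order.totalInCents());
    }
}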
Purists say yes, but in practice it works out a little differently. Sometimes I write a half dozen tests and then write the code that passes them. Other times I will write several functions before writing the tests, because those functions are not meant to be used in isolation, or testing them first would be hard.
And yes, you may find you need to rewrite tests as the API changes.
And to the purists, even they will admit that some tests are better than none.
Is it really reasonable to write tests at the early stage?
No, if you are writing top-down, high-level integration tests that require a real database or an internet connection to another website in order to work.
Yes, if you are implementing bottom-up with unit testing (i.e., testing a module in isolation).
The higher the "level", the more difficult unit testing becomes, because you have to introduce more mocking/abstraction.
In my opinion, the architectural benefits of TDD only apply when it is combined with unit testing, because this drives separation of concerns.
When I started TDD I had to rewrite many tests when changing the API/architecture. With more experience, today there are only a few cases where this is necessary.
You should have a first layer of tests that verifies the externally visible behavior of your API regardless of its internals.
Updating this kind of tests when a new functional requirement emerges is not a problem. In the example you mention, it would be easy to adjust to new websites being scraped - you would just add new assertions to the tests to account for the new data fetched.
The fact that "the scraping code had to be revamped entirely" shouldn't affect the structure of these higher level tests, because from the outside, the API should be consumed exactly the same way as before.
If such a low-level technical detail does affect your high level tests, you're probably missing an abstraction that describes what data you get but hides the details of how it is retrieved.
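For instance, a sketch of such an abstraction for the scraping case (hypothetical names):

import java.util.List;

// High-level tests and the rest of the application depend only on this:
// *what* data comes back, not *how* it is fetched.
interface ProductCatalog {
    List<Product> findProducts(String query);
}

class Product {
    final String name;
    Product(String name) { this.name = name; }
}

// Scraping is an implementation detail behind the interface; it can be
// revamped entirely without touching the higher-level tests, which only
// talk to ProductCatalog.
class ScrapingProductCatalog implements ProductCatalog {
    @Override
    public List<Product> findProducts(String query) {
        // ... fetch and parse the websites here ...
        throw new UnsupportedOperationException("not implemented in this sketch");
    }
}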
Writing tests before you write the actual code would mean you know how your application will be designed. This is rarely the case.
As a matter of fact, I, for example, start by writing everything in a single file. It might have a few hundred lines or more. This way I can easily and quickly redesign the API. Later, when I decide I like it and that it's good, I start refactoring it by putting everything into meaningful namespaces and separate files.
When this is done I start writing tests to verify everything works fine and to find bugs.
TDD is just a myth. It is not possible to write tests first and the code later, especially if you are at the beginning.
You always have to keep the KISS rule in mind. If you need crazy stuff like fakes or mocks to test your own code, you have already failed it.

How to do unit testing [closed]

I have a method CreateTask(UserId).
For this method, is it enough to check UserId against null or empty and an invalid value?
Or should I check whether a Task is created for a specific UserId?
And should I also check the number of tasks created and their date and time?
I don't think there's enough information here to answer this. But to address some of your points:
For this method, is it enough to check UserId against null or empty and an invalid value?
The method itself can internally do that, but that's not part of testing. That's just a method at runtime doing some error checking. This is often referred to as "defensive programming."
Or should I check whether a Task is created for the specific UserId?
This is where it gets cloudy. And this is where you would want to step back for a moment and look at the bigger picture. Make sure you're not tightly coupling your unit tests with your implementation logic. The tests should validate the business logic, unaware of the implementation.
It's highly likely that "creating a task" isn't business logic, but rather simply an implementation detail. What you should be testing is that when Step A is performed, Result B is observed. How the system goes about producing Result B is essentially what's being tested, but not directly or explicitly.
A big reason for keeping your unit tests high-level like this is that if the implementation details change, you don't want to have to change your tests with them. That drastically reduces the value of those tests, because it not only adds more work to any change but also eliminates the tests as the validation point of those changes, since the tests themselves also change. The tests should be fairly simple and static, acting as a set of rules used to validate the code. If the tests are complex and often changing, they lose that level of confidence needed to validate the code.
You don't have to test every method. You should test every observable business action that the system performs. Methods which perform those actions get tested as a result of this. Methods which don't perform those actions are then questionable as to whether or not you need them in the first place. A code coverage tool is great for determining this.
For example, let's say you have MethodA() which doesn't get used by any of the tests. No test calls it directly, because it's just an implementation detail and the tests don't need to know about it. (In this case it might even make sense for the method to be private or in some other way obscured from the external calling code.) This leaves you with two options:
The tests are incomplete, because MethodA() is needed by a business process which isn't being tested. Add tests for that business process.
The tests are complete, and MethodA() isn't actually needed by the system. Remove it.
If your tests blindly test every method regardless of the bigger picture of the business logic, you'd never be able to determine when something isn't needed by the system. And deprecating code which is no longer needed is a huge part of keeping a simple and maintainable codebase.
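To make the "Step A is performed, Result B is observed" idea concrete, here is a rough sketch of a behavior-level test for the CreateTask example (JUnit 4; the TaskService class and its methods are hypothetical stand-ins, not a real API):

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.junit.Test;

public class TaskCreationTest {

    // Minimal stand-in for the system under test; the real CreateTask(UserId)
    // would live in production code.
    static class TaskService {
        private final Map<String, List<String>> tasksByUser = new HashMap<>();

        void createTask(String userId) {
            if (userId == null || userId.isEmpty()) {
                throw new IllegalArgumentException("userId must not be empty");
            }
            tasksByUser.computeIfAbsent(userId, k -> new ArrayList<>()).add("new task");
        }

        List<String> tasksFor(String userId) {
            return tasksByUser.getOrDefault(userId, new ArrayList<>());
        }
    }

    @Test
    public void creatingATaskMakesItVisibleToTheUser() {
        TaskService service = new TaskService();

        // Step A: perform the business action through the public API.
        service.createTask("user-42");

        // Result B: observe the outcome, without asserting on which internal
        // methods were called to produce it.
        assertEquals(1, service.tasksFor("user-42").size());
    }
}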
1) Keep your methods short and simple, so unit testing will be easy (btw. one of reasons that TDD encourages good design)
2) Check all boundary conditions (invalid input, trivial input etc.) (btw. one of ways to make TDD easy; see the sketch at the end of this answer)
3) Check all possible scenarios to achieve high coverage (all possible execution flows through your method with simplest input to achieve this) (btw. one of reasons that TDD works)
4) Check a few (maybe one) typical scenarios with real data that demonstrate typical usage of the method
And as you've probably noticed - consider using TDD so you won't have to worry about the issue of "testing an existing method" :)
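To make point 2 concrete, here is a sketch of boundary-condition tests for the CreateTask example (JUnit 4, assuming a hypothetical TaskService like the one sketched in the previous answer):

import org.junit.Test;

public class CreateTaskBoundaryTest {

    private final TaskService service = new TaskService();

    // Invalid input: a null UserId should be rejected, not silently accepted.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNullUserId() {
        service.createTask(null);
    }

    // Trivial input: an empty UserId is rejected as well.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsEmptyUserId() {
        service.createTask("");
    }
}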

Is it acceptable as a professional developer not to write unit tests? [closed]

Just wondering about the pros and cons of TDD/automated unit testing, and looking for the community's view on whether it's acceptable for professional developers to write applications without supporting unit tests.
Re-asked on Programmers: https://softwareengineering.stackexchange.com/questions/159572/is-it-acceptable-as-a-professional-developer-not-to-write-unit-tests
I bet I'll get -1'ed for this, but I still say: if you have other measures to ensure quality, including avoiding regressions, program validation, and program verification, then no, unit tests are not strictly necessary.
The only problem is usually that people don't have any other tools than unit testing to achieve this.
In case you have formally tested models (there's a tool that actually tests it, or it was constructed in a way which ensures it's valid), and you have formally tested ways to ensure that the actually running software conforms to that model, then it's fine.
Example: if you are sure that the code you wrote in Ruby will act as you'd expect (because you or someone else tested the Ruby interpreter and it doesn't have bugs, or you use only a subset of features known to be safe), then it's fine. Usually, we trust C compilers and CPUs in this manner.
Also, if a program is only to be used once, there's no regression problem! If I write a one-liner in bash which will calculate something for me, I might test it first manually on fake data, then run it on the real data - no need to write an automated test.
If you take the blame, you can also go along with assumptions: I usually assume that Eclipse is pretty good at creating setters and getters, and I don't test those. Also, I assume that if there were any problem with Java's Collection classes in Java 7, it would have turned up by now. But in case there's trouble, it's your personal trouble. Don't blame anyone.
Personally, I rarely use unit testing on certain code, as I formally test it while it's still a flowchart on a piece of paper, and I ensure that I only use subsets of the language/libraries which are known to work in such situations. Also, I never let code out without peer review. Still, it's sometimes better if there's someone who runs an acceptance test on it...
It is up to you. The question is more philosophical in nature.
Unit tests are just a tool to help you. You can choose to ignore them. However, if you are going to work on a more-than-trivial project, I would advise you to use unit tests.
Yes, they too take time to write. But in the end you will save a lot if any refactoring is done or some parts of the code need to be changed.
As always: It depends.
Generally speaking, unit tests are a good thing: they catch a whole class of errors, they verify that particular parts of your code work as expected under given circumstances, and they make it easier to track down errors when something does go wrong. So unless you have good reasons not to, you should write unit tests.
Good reasons not to write unit tests include:
Making relatively small changes to a codebase that is structured badly and hardly testable because of this (usually, the reason is that there is little separation of concerns, and testable units cannot be isolated for testing without intrusive changes to the codebase itself).
The nature of the problem domain makes the code inherently untestable. This is rare, but it happens - for example, it is very hard to come up with meaningful unit tests for a routine that draws a GUI: you'd have to make it render to a mocked surface and then check individual pixels, but you'd also have to mock all the parameters that influence layout decisions, etc.; while this is theoretically possible, it's not usually worth the effort, and one should opt for manual or semi-automatic testing in such cases.
The project is a tiny throwaway program with such a small scope and such a short lifecycle that the benefit gained from unit testing (increased maintainability, decreased complexity) is marginal. Keep in mind, however, that software tends to live longer than it was designed for, and your one-off throwaway script might very well end up becoming a mission-critical part of the company's processes.

How verbose/granular should your tests be? [closed]

I recently started a new project where I decided to adopt writing unit tests for most functions. Prior to this my testing was limited to sporadically writing test "functions" to ensure something worked as expected, and then never bothering to update the test functions, clearly not good.
Now that I've written a fair amount of code, and tests, I'm noticing that I'm writing a lot of tests for my code. My code is generally quite modular, in the sense that I try to code small functions that do something simple, and then chain them together in a larger function as required, again, accepted best practice.
But, I now end up writing tests for both the individual "building block" functions (quite small tests), as well as tests for the function that chains them together, and testing the result there as well. Obviously the result will be different, but since the inputs are similar, I'm duplicating a lot of test code (the parts that set up the input, which are slightly different in each test but not by much; since they're not identical I can't just use a test fixture...).
Another concern is that I try to adhere quite strictly to testing one thing per test, so I write a single test for every different feature within the function. For instance, if there's some extra input that can be passed to the function, but which is optional, I write one version which adds the input and one that doesn't, and test them separately. The setup here is again mostly identical except for the input I added, but not exactly the same, so using a fixture doesn't feel "right".
Since this is my first project with everything being fully unit tested, I just wanted to make sure I was doing stuff correctly and that the code duplication in tests is to be expected.. so, my question is: Am I doing things correctly? If not, what should I change?
I code in C and C++.
On a side note, I love the testing itself, I'm far more confident of my code now.
Thanks!
Your question tries to address many things, and I can try to answer only some of them.
Try to get as high a coverage as possible (ideally 100%).
Do not use real resources for your unit tests, or at least try to avoid it. You can use mocks and stubs for that.
Do not unit test 3rd-party libraries.
You can break dependencies using dependency injection or functors. That way the size of your tests can decrease.

Preparing unit tests : What's important to keep in mind when working on a software architecture? [closed]

Let's say I'm starting a new project, quality is a top priority.
I plan on doing extensive unit testing, what's important to keep in mind when I'm working on the architecture to ease and empower further unit testing?
Edit: I read an article some time ago (I can't find it now) about how decoupling instantiation code from class behavior can be helpful when unit testing. That's the kind of design tip I'm looking for here.
Ease of testing comes from being able to replace as many of your method's dependencies as possible with test code (mocks, fakes, etc.). The currently recommended way to accomplish this is through dependency inversion, aka the Hollywood Principle: "Don't call us, we'll call you." In other words, your code should "ask for things, don't look for things."
Once you start thinking this way you'll find code can easily have dependencies on many things. Not only do you have dependencies on other objects, but databases, files, environment variables, OS APIs, globals, singletons, etc. By adhering to a good architecture, you minimize most of these dependencies by providing them via the appropriate layers. So when it comes time to test, you don't need a working database full of test data, you can simply replace the data object with a mock data object.
This also means you have to carefully sort out your object construction from your object execution. The "new" statement placed in a constructor generates a dependency that is very hard to replace with a test mock. It's better to pass those dependencies in via constructor arguments.
Also, keep the Law of Demeter in mind. Don't dig more than one layer deep into an object, or else you create hidden dependencies. Calling Flintstones.Wilma.addChild(pebbles); means what you thought was a dependence on "Flintstones" really is a dependence on both "Flintstones" and "Wilma".
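A rough sketch of the difference, using the same Flintstones example (all the types here are hypothetical stand-ins):

import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for the example.
interface Parent { void addChild(Child child); }
class Child {}
class Wilma implements Parent {
    private final List<Child> children = new ArrayList<>();
    public void addChild(Child child) { children.add(child); }
}
class Flintstones {
    private final Wilma wilma = new Wilma();
    Wilma getWilma() { return wilma; }
}

class Enrollment {
    // Hidden dependency: the signature claims to need only Flintstones, but
    // the body also depends on Wilma's interface, two layers deep.
    void enrollChild(Flintstones flintstones, Child pebbles) {
        flintstones.getWilma().addChild(pebbles);
    }

    // Law of Demeter respected: ask directly for what is needed, which also
    // makes the method trivial to test with a fake Parent.
    void enrollChild(Parent parent, Child pebbles) {
        parent.addChild(pebbles);
    }
}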
Make sure that your code is testable by making it highly cohesive and loosely coupled. And make sure you know how to use mocking tools to mock out the dependencies during unit tests.
I recommend getting familiar with the SOLID principles, so that you can write more testable code.
You might also want to check out these two SO questions:
Unit Test Adoption
What Should Be A Unit
Some random thoughts:
Define your interfaces: decouple the functional modules from each other, and decide how they will communicate with each other. The interface is the “contract” between the developers of different modules. Then, if your tests operate on the interfaces, you're ensuring that the teams can treat each other's modules as black boxes, and therefore work independently.
Build and test at least the basic functionality of the UI first. Once your project can “talk” to you, it can tell you what's working and what's not ... but only if it's not lying to you. (Bonus: if your developers have no choice but to use the UI, you'll quickly identify any shortcomings in ease-of-use, work flow, etc.)
Test at the lowest practical level: the more confident you are that the little pieces work, the easier it will be to combine them into a working whole.
Write at least one test for each feature, based on the specifications, before you start coding. After all, the features are the reason your customers will buy your product. Be sure it's designed to do what it's supposed to do!
Don't be satisfied when it does what it's supposed to do; ensure it doesn't do what it's not supposed to do! Feed it bad data, use it in an illogical way, disconnect the network cable during data transfer, run it alongside conflicting applications. Your customers will.
Good luck!
Your tests will only ever be as good as your requirements. They can be requirements that you come up with up front all at once, they can be requirements that you come up with one at a time as you add features, or they can be requirements that you come up with after you ship it and people start reporting a boat load of bugs, but you can't write a good test if no one can or will document exactly what the thing is supposed to do.