I have a project with multiple structural entities that each have sub-entities like the picture below.
Now I am trying to plan for the testing phase. I have checked UVVM, OSVVM, and VUnit and I found that VUnit is the easiest and fastest way to start.
I have read the documentation and played a little bit with the provided examples and I started to write some tests for some sub-entities like A1, A2, ..., etc.
My question is how to use VUnit for testing at a system level and how should I structure my testbench?
Shall I write tests for all sub-entities and then for the structural entities, and then for the whole system?
Shall I write tests for the structural entities as transactions?
Shall I write tests for all sub-entities, each in a separate architecture file, and run them in sequence in the test suite of the top-level testbench?
Any other suggestions?
This is the type of question that risks being closed as opinion-based so I will just give a few general remarks that can guide you in your decisions.
VUnit doesn't care about the size of the thing you're testing. It can be a small function or a very large system. The basic test strategy is not a tool question but something you're free to decide.
Regardless of test philosophy, I think most, if not all, developers agree that it's better/cheaper to find bugs early rather than late. This means that you should test often and start with small things. How often and how small depends on the test overhead and the bug frequency, which in turn depend on the tools being used, the experience of the designer, the complexity of the problem, and so on. There's no universally true number here.
A transaction is a coding abstraction and as such it can improve the readability and maintainability of the code. However, using too much abstraction for a simple problem adds no value. If the thing you're testing has a single simple bus interface, it makes sense to create simple read and write procedures that you call one after another. If you have several interfaces and want to perform concurrent transactions on those interfaces, you need something more. The VUnit solution is to take your simple transaction procedures and add message passing on top of them. https://www.linkedin.com/pulse/vunit-bfms-simple-emailing-lars-asplund/ explains how it's done.
I'm not sure I understand your last question, but if you're thinking about testing the small things only from the top level, you would need to develop the top before testing can start. That would go against the second remark above.
Disclaimer: I'm one of the VUnit authors but I think I managed to avoid personal opinions here.
Reading this question has helped me solidify some of the problems I've always had with unit-testing, TDD, et al.
Since coming across the TDD approach to development I knew that it was the right path to follow. Reading various tutorials helped me understand how to make a start, but they have always been very simplistic - not really something that one can apply to an active project. The best I've managed is writing tests around small parts of my code - things like libraries that are used by the main app but aren't integrated in any way. While this has been useful, it equates to about 5% of the code-base. There's very little out there on how to go to the next step, to help me get some tests into the main app.
Comments such as "Most code without unit tests is built with hard dependencies (i.e. 'new's all over the place) or static methods." and "...it's not rare to have a high level of coupling between classes, hard-to-configure objects inside your class [...] and so on." have made me realise that the next step is understanding how to de-couple code to make it testable.
What should I be looking at to help me do this? Is there a specific set of design patterns that I need to understand and start to implement which will allow easier testing?
In this article from 2004, Mike Clifton describes 24 test patterns. It's a useful set of heuristics to keep in mind when designing unit tests.
http://www.codeproject.com/Articles/5772/Advanced-Unit-Test-Part-V-Unit-Test-Patterns
Pass/Fail Patterns
These patterns are your first line of defence (or attack, depending on your perspective) to guarantee good code. But be warned, they are deceptive in what they tell you about the code.
The Simple-Test Pattern
The Code-Path Pattern
The Parameter-Range Pattern
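As a rough illustration of the parameter-range idea (my own sketch in C# with NUnit; the names and the Isqrt function are invented, not taken from Clifton's article), a single parameterized test can sweep a whole range of inputs, and a separate case can probe the boundary where the valid range ends:

using System;
using NUnit.Framework;

static class MathUtil {
    // Hypothetical function under test: integer square root, rounded down.
    public static int Isqrt(int n) {
        if (n < 0) throw new ArgumentOutOfRangeException(nameof(n));
        return (int)Math.Floor(Math.Sqrt(n));
    }
}

[TestFixture]
public class ParameterRangeTests {
    [Test]
    public void Isqrt_Brackets_The_Input_For_The_Whole_Range([Range(0, 200)] int n) {
        int r = MathUtil.Isqrt(n);
        Assert.LessOrEqual(r * r, n);         // never overshoots
        Assert.Greater((r + 1) * (r + 1), n); // and is the largest such value
    }

    [Test]
    public void Isqrt_Rejects_Values_Outside_The_Valid_Range() {
        Assert.Throws<ArgumentOutOfRangeException>(() => MathUtil.Isqrt(-1));
    }
}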
Data Transaction Patterns
Data transaction patterns are a start at embracing the issues of data persistence and communication. More on this topic is discussed under "Simulation Patterns". Also, these patterns intentionally omit stress testing, for example, loading on the server. This will be discussed under "Stress-Test Patterns".
The Simple-Data-I/O Pattern
The Constraint-Data Pattern
The Rollback Pattern
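One common way to get rollback-style behaviour in .NET tests (a sketch under my own assumptions, not code from the article) is to run each test inside an ambient transaction that is never committed, so the database is left untouched no matter what the test writes:

using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class DatabaseRollbackTests {
    private TransactionScope scope;

    [SetUp]
    public void BeginTransaction() {
        // Every test runs inside an ambient transaction.
        scope = new TransactionScope();
    }

    [TearDown]
    public void RollBack() {
        // Disposing without calling Complete() rolls everything back,
        // leaving the database exactly as it was before the test.
        scope.Dispose();
    }

    [Test]
    public void Inserting_A_Row_Is_Visible_Inside_The_Transaction() {
        // ... exercise your data-access code here; any writes it performs
        // are undone by the TearDown above ...
    }
}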
Collection Management Patterns
A lot of what applications do is manage collections of information. While there are a variety of collections available to the programmer, it is important to verify (and thus document) that the code is using the correct collection. This affects ordering and constraints.
The Collection-Order Pattern
The Enumeration Pattern
The Collection-Constraint Pattern
The Collection-Indexing Pattern
Performance Patterns
Unit testing should not just be concerned with function but also with form. How efficiently does the code under test perform its function? How fast? How much memory does it use? Does it trade off data insertion for data retrieval effectively? Does it free up resources correctly? These are all things that are under the purview of unit testing. By including performance patterns in the unit test, the implementer has a goal to reach, which results in better code, a better application, and a happier customer.
The Performance-Test Pattern
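A bare-bones performance test might look like the sketch below (the data structure and the 10 ms budget are invented for illustration): time the operation with a Stopwatch and make the budget an assertion, so a regression fails the build instead of going unnoticed.

using System.Collections.Generic;
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class PerformanceTests {
    [Test]
    public void Lookup_Stays_Under_Ten_Milliseconds() {
        var index = new Dictionary<int, string>();
        for (int i = 0; i < 100000; i++) index[i] = "value " + i;

        var watch = Stopwatch.StartNew();
        string result = index[42];
        watch.Stop();

        Assert.AreEqual("value 42", result);
        Assert.Less(watch.ElapsedMilliseconds, 10); // the budget is part of the test
    }
}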
Process Patterns
Unit testing is intended to test, well, units...the basic functions of the application. It can be argued that testing processes should be relegated to the acceptance test procedures; however, I don't buy into this argument. A process is just a different type of unit. Testing processes with a unit tester provides the same advantages as other unit testing--it documents the way the process is intended to work, and the unit tester can aid the implementer by also testing the process out of sequence, rapidly identifying potential user interface issues as well. The term "process" also includes state transitions and business rules, both of which must be validated.
The Process-Sequence Pattern
The Process-State Pattern
The Process-Rule Pattern
Simulation Patterns
Data transactions are difficult to test because they often require a preset configuration, an open connection, and/or an online device (to name a few). Mock objects can come to the rescue by simulating the database, web service, user event, connection, and/or hardware with which the code is transacting. Mock objects also have the ability to create failure conditions that are very difficult to reproduce in the real world--a lossy connection, a slow server, a failed network hub, etc.
The Mock-Object Pattern
The Service-Simulation Pattern
The Bit-Error-Simulation Pattern
The Component-Simulation Pattern
Multithreading Patterns
Unit testing multithreaded applications is probably one of the most difficult things to do because you have to set up a condition that by its very nature is intended to be asynchronous and therefore non-deterministic. This topic is probably a major article in itself, so I will provide only a very generic pattern here. Furthermore, to perform many threading tests correctly, the unit tester application must itself execute tests as separate threads so that the unit tester isn't disabled when one thread ends up in a wait state.
The Signalled Pattern
The Deadlock-Resolution Pattern
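One workable shape for the signalled pattern (my own sketch, not Clifton's code) is to have the test block on a synchronisation primitive with a timeout, so a worker that never signals fails the test instead of hanging it:

using System;
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class SignalledPatternTests {
    [Test]
    public void Worker_Signals_Completion_Within_Timeout() {
        var finished = new ManualResetEventSlim(false);

        var worker = new Thread(() => {
            // ... the real work of the component under test would happen here ...
            finished.Set(); // signal the test thread when done
        });
        worker.Start();

        // Wait(timeout) returns false if the signal never arrives, so a hung or
        // deadlocked worker turns into a clean test failure.
        Assert.IsTrue(finished.Wait(TimeSpan.FromSeconds(5)), "worker never signalled completion");
    }
}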
Stress-Test Patterns
Most applications are tested in ideal environments--the programmer is using a fast machine with little network traffic, using small datasets. The real world is very different. Before something completely breaks, the application may suffer degradation and respond poorly or with errors to the user. Unit tests that verify the code's behaviour under stress should be pursued with as much fervor (if not more) as unit tests in an ideal environment.
The Bulk-Data-Stress-Test Pattern
The Resource-Stress-Test Pattern
The Loading-Test Pattern
Presentation Layer Patterns
One of the most challenging aspects of unit testing is verifying that information is getting to the user right at the presentation layer itself and that the internal workings of the application are correctly setting presentation layer state. Often, presentation layers are entangled with business objects, data objects, and control logic. If you're planning on unit testing the presentation layer, you have to realize that a clean separation of concerns is mandatory. Part of the solution involves developing an appropriate Model-View-Controller (MVC) architecture. The MVC architecture provides a means to develop good design practices when working with the presentation layer. However, it is easily abused. A certain amount of discipline is required to ensure that you are, in fact, implementing the MVC architecture correctly, rather than just in word alone.
The View-State Test Pattern
The Model-State Test Pattern
I'd say you mainly need two things for testing, and they go hand in hand:
Interfaces, interfaces, interfaces
Dependency injection; this, in conjunction with interfaces, will help you swap parts at will to isolate the modules you want to test. You want to test your cron-like system that sends notifications to other services? Instantiate it and substitute your real-code implementation of everything else with components obeying the correct interfaces but hard-wired to react in the way you want to test. Mail notification? Test what happens when the SMTP server is down by throwing an exception.
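For example (a sketch with invented names, hand-rolling the test double rather than using a mocking library), the notifier can be exercised against a fake mail sender that behaves like a dead SMTP server:

using System.Net.Mail;
using NUnit.Framework;

public interface IMailSender {
    void Send(string to, string subject, string body);
}

public class NotificationService {
    private readonly IMailSender mailSender;
    public NotificationService(IMailSender mailSender) { this.mailSender = mailSender; }

    // Reports failure instead of crashing when the mail infrastructure is down.
    public bool Notify(string user, string message) {
        try {
            mailSender.Send(user, "Notification", message);
            return true;
        } catch (SmtpException) {
            return false;
        }
    }
}

// Test double: same interface, hard-wired to act like a dead SMTP server.
class SmtpServerDownFake : IMailSender {
    public void Send(string to, string subject, string body) {
        throw new SmtpException("server down");
    }
}

[TestFixture]
public class NotificationServiceTests {
    [Test]
    public void Notify_Reports_Failure_When_The_Smtp_Server_Is_Down() {
        var service = new NotificationService(new SmtpServerDownFake());
        Assert.IsFalse(service.Notify("user@example.com", "hello"));
    }
}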
I myself haven't mastered the art of unit testing (and I'm far from it), but this is where my main efforts are going currently. The problem is that I still don't design for tests, and as a result my code has to bend over backwards to accommodate...
Michael Feathers' book Working Effectively With Legacy Code is exactly what you're looking for. He defines legacy code as 'code without tests' and talks about how to get it under test.
As with most things it's one step at a time. When you make a change or a fix try to increase the test coverage. As time goes by you'll have a more complete set of tests. It talks about techniques for reducing coupling and how to fit test pieces between application logic.
As noted in other answers dependency injection is one good way to write testable (and loosely coupled in general) code.
Arrange, Act, Assert is a good example of a pattern that helps you structure your testing code around particular use cases.
Here's some hypothetical C# code that demonstrates the pattern.
using Moq;
using NUnit.Framework;

[TestFixture]
public class TestSomeUseCases {
// Service we want to test
private TestableServiceImplementation service;
// IoC-injected mock of service that's needed for TestableServiceImplementation
private Mock<ISomeService> dependencyMock;
public void Arrange() {
// Create a mock of auxiliary service
dependencyMock = new Mock<ISomeService>();
dependencyMock.Setup(s => s.GetFirstNumber(It.IsAny<int>())).Returns(1);
// Create a tested service and inject the mock instance
service = new TestableServiceImplementation(dependencyMock.Object);
}
public void Act() {
service.ProcessFirstNumber();
}
[SetUp]
public void Setup() {
Arrange();
Act();
}
[Test]
public void Assert_That_First_Number_Was_Processed() {
dependencyMock.Verify(d => d.GetFirstNumber(It.IsAny<int>()), Times.Exactly(1));
}
}
If you have a lot of scenarios to test, you can extract a common abstract base class with the concrete Arrange and Act bits (or just Arrange) and implement the remaining abstract bits and the test methods in derived classes that group related scenarios.
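A rough sketch of that idea, reusing the hypothetical service and mock from the example above:

public abstract class WhenProcessingFirstNumber {
    protected Mock<ISomeService> dependencyMock;
    protected TestableServiceImplementation service;

    // Shared Arrange + Act; derived classes only add their own arranging and assertions.
    [SetUp]
    public void Setup() {
        dependencyMock = new Mock<ISomeService>();
        Arrange();
        service = new TestableServiceImplementation(dependencyMock.Object);
        service.ProcessFirstNumber();
    }

    // Hook for scenario-specific arranging.
    protected abstract void Arrange();
}

[TestFixture]
public class Given_The_Dependency_Returns_One : WhenProcessingFirstNumber {
    protected override void Arrange() {
        dependencyMock.Setup(s => s.GetFirstNumber(It.IsAny<int>())).Returns(1);
    }

    [Test]
    public void The_Dependency_Is_Queried_Exactly_Once() {
        dependencyMock.Verify(d => d.GetFirstNumber(It.IsAny<int>()), Times.Exactly(1));
    }
}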
Gerard Meszaros' xUnit Test Patterns: Refactoring Test Code is chock full of patterns for unit testing. I know you're looking for patterns on TDD, but I think you will find a lot of useful material in this book.
The book is on Safari, so you can get a really good look at what's inside to see if it might be helpful:
http://my.safaribooksonline.com/9780131495050
"...have made me realise that the next step is understanding how to de-couple code to make it testable. What should I be looking at to help me do this? Is there a specific set of design patterns that I need to understand and start to implement which will allow easier testing?"
Right on! SOLID is what you are looking for (yes, really). I keep recommending these two ebooks, especially the one on SOLID, for the issue at hand.
You also have to understand that it's very hard if you are introducing unit testing to an existing code base. Unfortunately, tightly coupled code is far too common. This doesn't mean you shouldn't do it, but for a good while it'll be just like you mentioned: tests will be concentrated in small pieces.
Over time these grow into a larger scope, but that depends on the size of the existing code base, the size of the team, and how many people are actually doing it instead of adding to the problem.
Design patterns aren't directly relevant to TDD, as they are implementation details. You shouldn't try to fit patterns into your code just because they exist, but rather they tend to appear as your code evolves. They also become useful if your code is smelly, since they help resolve such issues. Don't develop code with design patterns in mind, just write code. Then get tests passing, and refactor.
A lot of problems like this can be solved with proper encapsulation. Or, you might have this problem if you are mixing your concerns. Say you've got code that validates a user, validates a domain object, then saves the domain object all in one method or class. You've mixed your concerns, and you aren't going to be happy. You need to separate those concerns (authentication/authorization, business logic, persistence) so you can test them in isolation.
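As a hedged sketch of what that separation might look like (all of the names below are invented), each concern sits behind its own interface, so every piece can be tested in isolation and the orchestration can be tested with all three dependencies mocked:

public class User { }
public class Order { }

public interface IAuthorizer { bool CanSave(User user, Order order); }
public interface IOrderValidator { bool IsValid(Order order); }
public interface IOrderRepository { void Save(Order order); }

public class OrderProcessor {
    private readonly IAuthorizer authorizer;
    private readonly IOrderValidator validator;
    private readonly IOrderRepository repository;

    public OrderProcessor(IAuthorizer authorizer, IOrderValidator validator, IOrderRepository repository) {
        this.authorizer = authorizer;
        this.validator = validator;
        this.repository = repository;
    }

    // Orchestration only: authorization, validation and persistence each live elsewhere.
    public bool Process(User user, Order order) {
        if (!authorizer.CanSave(user, order)) return false;
        if (!validator.IsValid(order)) return false;
        repository.Save(order);
        return true;
    }
}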
Design patterns help, but a lot of the exotic ones address very narrow problems. Patterns like Composite and Command are used often and are simple.
The guideline is: if it is very difficult to test something, you can probably refactor it into smaller problems and test the refactored bits in isolation. So if you have a 200 line method with 5 levels of if statements and a few for-loops, you might want to break that sucker up.
So, start by seeing if you can make complicated code simpler by separating your concerns, and then see if you can make complicated code simpler by breaking it up. Of course if a design pattern jumps out at you, then go for it.
Dependency Injection/IoC. Also read up on dependency injection frameworks such as the Spring Framework and Google Guice. They also target how to write testable code.
Test patterns are design patterns. Both are intended to guide the construction of a piece of software. The basic test patterns used in software test automation are many in number, and I have selected the following:
"Fluent builder test pattern"
It is defined as:
"A coding technique known as the 'fluent builder pattern' forces the developer or test automation engineer to create the objects sequentially by invoking each setter function one at a time until all necessary properties are defined."
Fluent builder test pattern.
This pattern comes up in software test automation, where we typically use frameworks such as Selenium, TestNG, and Maven for web-based testing. Although the fluent builder pattern isn't used exclusively for unit tests, it is useful for organising setup steps. A builder has two parts: a number of clearly defined Set methods, each of which is in charge of changing just one aspect of the state of the object being built and returns the builder itself so that calls can be chained, and a build method that uses the previously set state to create and return the object. The pattern is used with a number of programming languages, such as C#, Java, and JavaScript, and with testing frameworks such as JUnit and NUnit.
Complex objects are a typical occurrence in our test script solutions: objects with numerous fields, each of which is challenging to construct. The most important advantages of the fluent builder test pattern are:
1. The code is more maintainable.
2. The automated script code is readable and simple to understand for static reviews and static analysis, and it is even helpful for white-box testing.
3. When you create an automated test script in a language like Java, constructing the objects needed by different methods is less error-prone.
Understanding when the builder pattern pays off is one of the most crucial considerations. As previously stated, it is in situations where objects are difficult to construct that the pattern improves the quality of your code. When an object's constructor takes more than a few parameters, and those parameters are themselves objects with nested classes, you should consider using this pattern.
Fluent Builder Test Patterns in Java:
Although it appears straightforward, there is a problem. If there are too many setter methods and object construction is complicated, the test engineer may forget to call some of the setters while constructing the object. As a result, setters for significant object attributes are never called, and those attributes remain null.
A missed property can be expensive in terms of development and maintenance for many enterprise systems' fundamental entities, such as Order or Loan, which may be constructed in many different areas of the code. The remedy is to make the developer call all the necessary setter methods before the build function can be called.
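A small C# sketch of what such a builder can look like in test code (the Order class and its fields are invented for illustration):

using NUnit.Framework;

public class Order {
    public string Customer { get; set; }
    public decimal Amount { get; set; }
    public string Currency { get; set; }
}

public class OrderBuilder {
    private string customer = "default customer";
    private decimal amount = 1m;
    private string currency = "USD";

    // Each setter changes exactly one aspect of the state and returns the builder for chaining.
    public OrderBuilder WithCustomer(string value) { customer = value; return this; }
    public OrderBuilder WithAmount(decimal value) { amount = value; return this; }
    public OrderBuilder WithCurrency(string value) { currency = value; return this; }

    // Build assembles the object from the accumulated state.
    public Order Build() {
        return new Order { Customer = customer, Amount = amount, Currency = currency };
    }
}

[TestFixture]
public class OrderBuilderTests {
    [Test]
    public void Builder_Overrides_Only_The_Specified_Fields() {
        // The intent of the test state is readable at a glance.
        Order order = new OrderBuilder().WithCustomer("ACME").WithAmount(50000m).Build();
        Assert.AreEqual("ACME", order.Customer);
        Assert.AreEqual("USD", order.Currency); // untouched fields keep sensible defaults
    }
}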
In the STLC (Software Testing Life Cycle):
If you are an automation engineer, you know that developing an automated test for an application is not particularly challenging; maintaining the existing tests is what's hard, especially when you have a large number of automated tests and junior team members who are responsible for maintaining a suite of thousands of tests.
We also collaborate with other QA engineers who have a variety of skill sets, and for some team members it can be really challenging to comprehend the code; the longer they spend trying to grasp it, the less productive they become. As the lead architect of the automation framework, you must develop an appropriate architecture for the page objects and tests so that maintenance is simple, code is reusable, and everyone in the STLC has a good understanding of the framework.
Try the fluent builder pattern if setting up the state for multiple unit tests is messy or hard; the test state then becomes very easy to read. Because a single method on the builder can encapsulate a lot of state-setup code that is written once and used by many tests, the fluent builder pattern can also help prevent code duplication.
Similar to Does TDD mean not thinking about class design?, I am having trouble thinking about where the traditional 'design' stage fits into TDD.
According to the Bowling Game Kata (the 'conversation' version, whose link escapes me at the moment) TDD appears to ignore design decisions made early on (discard the frame object, roll object, etc). I can see in that example it being a good idea to follow the tests and ignore your initial design thoughts, but in bigger projects or ones where you want to leave an opening for expansion / customisation, wouldn't it be better to put things in that you don't have a test for or don't have a need for immediately in order to avoid time-consuming rewrites later?
In short - how much design is too much when doing TDD, and how much should I be following that design as I write tests and the code to pass them (ignoring my design to only worry about passing tests)?
Or am I worrying about nothing, and code written simply to follow tests is not (in practice) difficult to rewrite or refactor if you're painted into a corner?
Alternatively, am I missing the point and that I should be expecting to rewrite portions of the code when I come to test a new section of functionality?
I would base your tests on your initial design. In many ways TDD is a discovery process: you can expect to either confirm your early design choices or find that there are better ones you can make. Do as much upfront design as you are comfortable with. Some like to fly by the seat of their pants, doing high-level design and using TDD to flesh the design out, while others like to have everything on paper first.
Part of TDD is refactoring.
There is something to be said about 'Designing Big Complex Systems' that should not be associated with TDD - especially when TDD is interpreted as 'Test Driven Design' and not 'Test Driven Development'.
In the context of 'Development', using TDD will ensure you are writing testable code, which gives all the benefits cited for TDD (bugs detected early, a high code-to-test coverage ratio, easier future refactoring, etc.).
But in 'Designing' large, complex systems, TDD does not particularly address the following concerns, which are inherent in the architecture of the system:
(Engineering for) Performance
Security
Scalability
Availability
(and all other 'ilities')
(i.e. all of the concerns above do not magically 'emerge' through the "write a failing test case first, followed by the working implementation, Refactor - lather, rinse, repeat..." recipe).
For these, you will need to approach the problem by white-boarding the high-level and then low-level details of a system with respect to the constraints imposed by the requirements and the problem space.
Some of the above considerations compete with each other and require careful trade-offs that just don't 'emerge' through writing lots of unit tests.
Once key components and their responsibilities are defined and understood, TDD can be used in the implementation of these components. The process of refactoring and continually reviewing/improving your code will ensure the low-level design details of these components are well-crafted.
I have yet to come across a significantly complex piece of software (e.g. a compiler, database, or operating system) that was done in a Test Driven Design style. The following blog article talks about this point extremely well (Compilers, TDD, Mastery).
Also, check the following videos on architecture, which add a lot of common sense to the thought process.
Start with a rough design idea, pick a first test and start coding, going green test after test, letting the design emerge, similar or not to the initial design. How much initial design depends on the problem complexity.
One must stay attentive, listening to and sniffing the code to detect refactoring opportunities and code smells.
Strictly following TDD and the SOLID principles will yield code that is clean, testable, and flexible, so that it can be easily refactored, leveraging the unit tests as scaffolding to prevent regressions.
I've found three ways of doing design with TDD:
Allow the design to emerge naturally as duplication and complexity is removed
Create a perfect design up-front, using mocks combined with the single responsibility principle
Be pragmatic about it.
Pragmatism seems to be the best choice most times, so here's what I do. If I know that a particular pattern will suit my problem very well (for instance, MVC) I'll go straight for the mocks and assume it works. Otherwise, if the design is less clear, I'll allow it to emerge.
The cross-over point at which I feel the need to refactor an emergent design is the point at which it stops being easy to change. If a piece of code isn't perfectly designed, but another dev coming across it could easily refactor it themselves, it's good enough. If the code is becoming so complex that it stops being obvious to another dev, it's time to refactor it.
I like Real Options, and refactoring something to perfection feels to me like committing to the design without any real need to do so. I refactor to "good enough" instead; that way if my design proves itself to be wrong I've not wasted the time. Assume that your design will be wrong if you've never used it before in a similar context.
This also lets me get my code out much more quickly than if it were perfect. Having said that, it was my attempts to make the code perfect that taught me where the line was!
I am very new to test-driven development (TDD), not yet started using it.
But I know that we have to write tests first and then the actual code to pass the test and refactor it till the design is good.
My concern over TDD is where it fits in our systems development life cycle (SDLC).
Suppose I get a requirement of making an order processing system.
Now, without having any model or design for this system, how can I start writing tests?
Don't we need to define the entities and their attributes before proceeding?
If not, is it possible to develop a big system without any design?
There are two levels of TDD: ATDD, or acceptance test driven development, and normal TDD, which is driven by unit tests.
I guess the relationship between TDD and design is influenced by the somewhat "agile" concept that source code IS the design of a software product. A lot of people reinforce this by translating TDD as Test Driven Design rather than development. This makes a lot of sense as TDD should be seen as having a lot more to do with driving the design than testing. Having acceptance and unit tests at the end of it is a nice side effect.
I cannot really say too much about where it fits into your SDLC without knowing more about it, but one nice workflow is:
For every user story:
Write acceptance tests using a tool like FitNesse or Cucumber; these specify what the desired outputs are for the given inputs, from a perspective that the user understands. This level automates the specifications, or can even replace specification documentation in ideal situations.
Now you will probably have a vague idea of the sort of software design you might need as far as classes / behaviour etc goes.
For each behaviour:
Write a failing test that shows how you would like calling code to use the class (see the sketch after this list).
Implement the behaviour that makes the test pass
Refactor both the test and actual code to reflect good design.
Go onto the next behaviour.
Go onto the next user story.
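As a sketch of that inner loop with invented names (an order-processing flavour, since that is the example in the question): the failing test is written as the first client of the API you wish existed, and the implementation that follows is only just enough to make it pass.

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class OrderTests {
    [Test]
    public void A_New_Order_Starts_Empty() {
        // Written before the Order class exists; at first it doesn't even compile,
        // which is the "red" step. The API is whatever reads best from the caller's side.
        var order = new Order();
        Assert.AreEqual(0, order.Lines.Count);
        Assert.AreEqual(0m, order.Total);
    }
}

// Minimal implementation added afterwards, just enough to go green:
public class OrderLine {
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

public class Order {
    public List<OrderLine> Lines { get; } = new List<OrderLine>();
    public decimal Total => Lines.Sum(l => l.Price * l.Quantity);
}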
Of course, the whole time you will be thinking about the evolving high-level design of the system. Ideally TDD will lead to a flexible design at the lower levels that permits the appropriate high-level design to evolve as you go, rather than trying to guess it all at the beginning.
It should be called Test Driven Design, because that is what it is.
There is no practical reason to separate the design into a specific phase of the project. Design happens all the time. From the initial discussion with the stakeholder, through user story creation, estimation, and then of course during your TDD sessions.
If you want to formalize the design using UML or whatever, that is fine, just keep in mind that the code is the design. Everything else is just an approximation.
And remember that You Aren't Gonna Need It (YAGNI) applies to everything, including design documents.
Writing tests first forces you to think about the problem domain first, and the tests act as a kind of specification. Then, in a second step, you move to the solution domain and implement the functionality.
TDD works well iteratively:
Define your initial problem domain (can be small, evolutionary prototype)
Implement it
Grow the problem domain (add features, grow the prototype)
Refactor and implement it
Repeat step 3.
Of course you need to have a vague architectural vision upfront (technologies, layers, non-functional requirements, etc.). But the features that bring added value to your application can be introduced nicely with TDD.
See related question TDD: good for a starter?
With TDD, you don't care much about design. The idea is that you must first learn what you need before you can start with a useful design. The tests make sure that you can easily and reliably change your application when the time comes that you need to decide on your design.
Without TDD, this happens: you make a design (which is probably too complex in some areas, plus you forgot to take some important facts into account since you didn't know about them). Then you start implementing the design. With time, you realize all the shortcomings of your design, so you change it. But changing the design doesn't change your program. Now you try to change your code to fit the new design. Since the code wasn't written to be changed easily, this will eventually fail, leaving you with two designs (one broken and the other in an unknown state) and code which doesn't fit either.
To start with TDD, turn your requirements into tests. To do this, ask "How would I know that this requirement is fulfilled?" When you can answer this question, write a test that implements the answer. This gives you the API which your (to-be-written) code must adhere to. It's a very simple design, but one that a) always works and b) is flexible (because you can't test inflexible code).
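For instance (a made-up requirement with invented names, just to show the shape): if the requirement is "a customer can add an entry to an existing order", the question "how would I know this is fulfilled?" turns directly into a test, and whatever API the test uses becomes the API the code must provide.

[Test]
public void Adding_An_Entry_Increases_The_Order_Total() {
    // Neither Order nor AddEntry exists yet; this test is the first definition
    // of the API, and it fails (or doesn't compile) until the code catches up.
    var order = new Order();
    order.AddEntry("widget", quantity: 3, unitPrice: 2.50m);
    Assert.AreEqual(7.50m, order.Total);
}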
Also starting with the test will turn you into your own customer. Since you try hard to make the test as simple as possible, you will create a simple API that makes the test work.
And over time, you'll learn enough about your problem domain to be able to make a real design. Since you have plenty of tests, you can then change your code to fit the design. Without terminally breaking anything on the way.
That's the theory :-) In practice, you will encounter a couple of problems but it works pretty well. Or rather, it works better than anything else I've encountered so far.
Well, of course you need a solid functional analysis first, including a domain model; without knowing what you'll have to create in the first place, it's impossible to write your unit tests.
I use test-driven development to program and I can say from experience it helps create more robust, focussed and simpler code. My recipe for TDD goes something like this:
Using a unit-test framework (I've written my own) write code as you wish to use it and tests to ensure return values etc. are correct. This ensures you only write the code you're actually going to use. I also add a few more tests to check for edge cases.
Compile - you will get compiler errors!!!
For each error add declarations until you get no compiler errors. This ensures you have the minimum declarations for your code.
Link - you will get linker errors!!!
Write enough implementation code to remove the linker errors.
Run - your unit tests will fail. Write enough code to make the tests succeed.
You've finished at this point. You have written the minimum code you need to implement your feature, and you know it is robust because of your tests. You will also be able to detect if you break things in the future. If you find any bugs, add a unit test to test for that bug (you may not have thought of an edge case, for example). And you know that if you add more features to your code you won't make it incompatible with existing code that uses your feature.
I love this method. Makes me feel warm and fuzzy inside.
TDD implies that there is some existing design (external interface) to start with. You have to have some kind of design in mind in order to start writing a test. Some people will say that TDD itself requires less detailed design, since the act of writing tests provides feedback to the design process, but these concepts are generally orthogonal.
You need some form of specification, rather than a form of design -- design is about how you go about implementing something, specification is about what you're going to implement.
The most common form of specs I've seen used with TDD (and other agile processes) are user stories -- an informal kind of "use case" which tends to be expressed in somewhat stereotyped English sentences like "As a <role>, I can <action>" (the form of user stories is more or less rigid depending on the exact style/process in use).
For example, "As a customer, I can start a new order", "As a customer, I can add an entry to an existing order of mine", and so forth, might be typical if that's what your "order entry" system is about (the user stories would be pretty different if the system wasn't "self-service" for users but rather intended to be used by sales reps entering orders on behalf of users, of course -- without knowing what kind of order-entry system is meant, it's impossible to proceed sensibly, which is why I say you do need some kind of specification about what the system's going to do, though typically not yet a complete idea about how it's going to do it).
Let me share my view:
If you want to build an application, along the way you need to test it, e.g. check the values of variables you create by code inspection, or quickly drop in a button that you can click to execute a part of the code and pop up a dialog showing the result of the operation, etc. TDD, on the other hand, changes your mindset.
Commonly, you just rely on the development environment, like Visual Studio, to detect errors as you code and compile; somewhere in your head you know the requirement, and you just code and test via buttons, pop-ups, or code inspection. That is syntax-debugging-driven development. TDD, by contrast, is "semantics-debugging-driven development": you write down your thoughts and goals for your application first as tests (a more dynamic and repeatable version of a whiteboard), and those tests exercise the logic (the "semantics") of your application and fail whenever you have a semantic error, even if the application compiles without syntax errors.
In practice you may not know or have all the information required to build the application. Since TDD forces you to write tests first, you are compelled to ask more questions about how the application should behave at a very early stage of development, rather than building a lot only to find out that much of what you have written is not required (or at least not at the moment). TDD can really keep you from wasting your precious time (even though it may not feel like that initially).
I have been using TDD to drive the project that I am currently working on and the results have been fairly satisfying. I did run into a problem (described here; still without a solution or any suggestions!) where there are some aspects of a particular method which may not be able to be tested (as in my example; briefly, I want to be able to handle a ManagementException which has a specific ErrorCode - but it doesn't seem possible for me to set up a test which throws a ManagementException like that).
So, how does one deal with that? Do we simply accept the fact that some logical paths are untestable (because of the framework that we are working in or limitations in the testing framework(s) that are currently available)?
Some designs do not lend themselves to testability, especially ones that do not have testability as one of the design goals. Generally, TDDed designs do not fall into this category.
To answer your original question, I've posted a response which involves using reflection to slot in the requested error code. However this may not work in all situations and is not a general solution.
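In outline, that reflection approach looks something like the sketch below. I'm assuming the error code is stored in a non-public field of ManagementException (the field name "errorCode" is a guess and needs checking against the actual type); the point is only to show the mechanism of building an otherwise unconstructible exception so a mocked dependency can throw it.

using System.Management;
using System.Reflection;
using System.Runtime.Serialization;

static class ManagementExceptionFactory {
    public static ManagementException WithErrorCode(ManagementStatus code) {
        // Create the exception without running a public constructor, then poke the
        // error code into its private state via reflection. The field name is assumed.
        var ex = (ManagementException)FormatterServices.GetUninitializedObject(typeof(ManagementException));
        var field = typeof(ManagementException).GetField("errorCode", BindingFlags.Instance | BindingFlags.NonPublic);
        if (field != null) field.SetValue(ex, code);
        return ex;
    }
}

// In the test, a mocked dependency is set up to throw ManagementExceptionFactory.WithErrorCode(...),
// so the catch branch that inspects ErrorCode can finally be exercised.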
The tradeoff here is the effort of writing the test vs the benefit of having that particular piece of code under automated tests. If you feel that the cost-to-benefit ratio is huge and the probability of failure is minuscule, you may write it up as an exceptional manual test, leave a comment for future developers, and verify it manually for now. I'd say be pragmatic: if you've spent 30-40 minutes of a couple of developers' brain time trying to get it under test, maybe you need to step back and rethink your strategy. Have a look at Michael Feathers' 'Working Effectively with Legacy Code' for some suggestions on overcoming barriers to testability.
I don't think you could say that anything is logically untestable, but you will certainly find areas of code where the effort required to test them would be better spent elsewhere.
This is a great question, and one which I also found myself contemplating recently.
So first, I wouldn't say some logical paths are "untestable" - at most they are probably very hard to test with automatic unit testing. You could probably still test most of these problematic paths with some serious heavy duty system tests.
Consider this - anything you test can be thought to run inside a virtual machine under your control and you can (theoretically) simulate every aspect of its operation in order to test your software. Whether or not this is practical for most applications is another question.
I've just tried answering your original question (and collided in midflight with somebody else saying the same thing more concisely, or most of it at least;-). Anyway, there surely exist frameworks that are way too rigid (thanks to private and friends), and if you can't use introspection to go around that (despite having done all proper incantations), then you're just using a language that's too rigid as well as a framework that is.
I'd be astonished if that was the case in an overall system that supports dynamic languages (as .NET now does) such as IronRuby and IronPython -- maybe if C# won't let you go around accessibility limitations via introspection, the dynamic languages could serve.
That said, it is surely possible for the overall environment to be designed so badly and so rigidly to make it impossible to unit-test certain things -- even though I'm not entirely convinced that this is the case in your current situation.
Some things cannot be tested in an automated unit test because the language/framework/situation is just not open to it. The way to handle that is to reduce that area as much as possible and keep it so simple that it is highly unlikely to be a source of bugs or behavior changes later on.
There is also more to testing than just unit testing, and those areas (such as Acceptance testing, QA, etc.) are not covered by unit testing as well.