Should Private/Protected methods be under unit test? [closed]

In TDD development, the first thing you typically do is to create your interface and then begin writing your unit tests against that interface. As you progress through the TDD process you would end up creating a class that implements the interface, and then at some point your unit tests would pass.
Now my question is about the private and protected methods that I might have to write in my class in support of the methods/properties exposed by the interface:
Should the private methods in the class have their own unit tests?
Should the protected methods in the class have their own unit tests?
My thoughts:
Especially because I am coding to interfaces, I shouldn't worry about protected/private methods as they are black boxes.
Because I am using interfaces, I am writing unit tests to validate that the contract defined is properly implemented by the different classes implementing the interface, so again I shouldn't worry about the private/protected methods; they should be exercised via unit tests that call the methods/properties defined by the interface.
If my code coverage does not show that the protected/private methods are being hit, then either I don't have the right unit tests or I have code that's not being used and should be removed.

No, I don't think you should test private or protected methods. The private and protected methods of a class aren't part of the public interface, so they don't expose public behavior. Generally these methods are created by refactorings you apply after you've made your test turn green.
So these private methods are tested implicitly by the tests that assert the behavior of your public interface.
On a more philosophical note, remember that you're testing behavior, not methods. So if you think of the set of things that the class under test can do, as long as you can test and assert that the class behaves as expected, whether there are private (and protected) methods that are used internally by the class to implement that behavior is irrelevant. Those methods are implementation details of the public behavior.
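For example, here is a minimal JUnit sketch of that idea; PriceCalculator and its methods are hypothetical, and the private rounding helper is never called directly, yet the assertion on the public total still exercises it:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {
    @Test
    public void totalIsRoundedToTwoDecimals() {
        // totalFor() internally calls a private rounding helper;
        // we assert only on the observable behavior
        PriceCalculator calc = new PriceCalculator(0.19); // 19% tax rate
        assertEquals(11.90, calc.totalFor(10.00), 0.001);
    }
}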

I disagree with most of the posters.
The most important rule is: WORKING CODE TRUMPS THEORETICAL RULES about public/protected/private.
Your code should be thoroughly tested. If you can get there by writing tests for the public methods, that sufficiently exercise the protected/private methods, that's great.
If you can't, then either refactor so that you can, or bend the protected/private rules.
There's a great story about a psychologist who gave children a test. He gave each child two wooden boards with a rope attached to each end, and asked them to cross a room without touching their feet to the floor, as fast as possible. All the kids used the boards like little skis, one foot on each board, holding them by the ropes, and sliding across the floor. Then he gave them the same task, but using only ONE board. They pivoted/"walked" across the floor, one foot on each end of the single board -- and they were FASTER!
Just because Java (or whatever language) has a feature (private/protected/public) does not necessarily mean you are writing better code because you use it!
Now, there will always be ways to optimize/minimize this conflict. In most languages, you can make a method protected (instead of public), and put the test class in the same package (or whatever), and the method will be available for test. There are annotations that can help, as described by other posters. You can use reflection to get at the private methods (yuck).
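In Java, for instance, one common compromise is exactly that: relax the method to package-private, put the test in the same package, and document why. A sketch, assuming Guava's @VisibleForTesting annotation (which is documentation only, not enforcement) and a hypothetical OrderParser:
import com.google.common.annotations.VisibleForTesting;

public class OrderParser {
    @VisibleForTesting // package-private instead of private, purely for tests
    String normalizeLine(String raw) {
        return raw.trim().toLowerCase();
    }
}
// A test class in the same package can now call normalizeLine() directly.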
The context also matters. If you're writing an API for use by external people, public/private is more important. If it's an internal project -- who really cares?
But at the end of the day, think about how many bugs have been caused by lack of testing. Then compare to how many bugs have been caused by "overly visible" methods. That answer should drive your decision.

You wrote:
In TDD development, the first thing you typically do is to create your interface and then begin writing your unit tests against that interface. As you progress through the TDD process you would end up creating a class that implements the interface, and then at some point your unit tests would pass.
Please let me rephrase this in BDD language:
When describing why a class is valuable and how it behaves, the first thing you typically do is to create an example of how to use the class, often via its interface*. As you add desired behavior you end up creating a class which provides that value, and then at some point your example works.
*May be an actual interface or simply the accessible API of the class; e.g., Ruby doesn't have interfaces.
This is why you don't test private methods - because a test is an example of how to use the class, and you can't actually use them. Something you can do if you want to is delegate the responsibilities in the private methods to a collaborating class, then mock / stub that helper.
With protected methods, you're saying that a class which extends your class should have some particular behavior and provide some value. You could then use extensions of your class to demonstrate that behavior. For instance, if you were writing an ordered collection class, you might want to demonstrate that two extensions with the same contents demonstrated equality.
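A sketch of that last example; AbstractOrderedCollection and its protected machinery are hypothetical:
// A minimal extension that exists only to exercise the inherited behavior.
class SortedBag extends AbstractOrderedCollection {
    // no overrides: we want the inherited protected machinery as-is
}

public class SortedBagTest {
    @Test
    public void extensionsWithSameContentsAreEqual() {
        SortedBag first = new SortedBag();
        SortedBag second = new SortedBag();
        first.add("x");
        second.add("x");
        assertEquals(first, second); // equality driven by the protected methods
    }
}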
Hope this helps!

When you're writing the unit tests for your class, you shouldn't necessarily care whether or not the functionality of the class is implemented directly in the method on the public interface or if it is implemented in a series of private methods. So yes, you should be testing your private methods, but you shouldn't need to call them directly from your test code in order to do so (directly testing the private methods tightly couples your implementation to your tests and makes refactoring unnecessarily hard).
Protected methods form a different contract between your class and its future children, so you should test them to a similar extent as your public interface, to ensure that the contract is well defined and exercised.

No! Only test interfaces.
One of the big benefits of TDD is assuring that the interface works no matter how you've chosen to implement the private methods.

Completing what others said above, I would say that protected methods are part of an interface of some kind: it simply happens to be the one exposed to inheritance instead of composition, which is what everyone tends to think about when considering interfaces.
Marking a method as protected instead of private means it is expected to be used by third party code, so some sort of contract needs to be defined and tested, as happens with normal interfaces defined by public methods, which are open both for inheritance and composition.

There are two reasons for writing tests:
Asserting expected behavior
Preventing regression of behavior
The take on (1) Asserting expected behavior:
When you're asserting expected behavior, you want to make sure the code works as you think it should. This is effectively an automated way of doing your routine manual verification that any dev would perform when implementing any kind of code:
Does what I just wrote work?
Does this loop actually end?
Is it looping in the order I think it is?
Would this work for a null input?
Those are questions we all answer in our heads, and normally we'd try to execute the code in our heads too, to make sure it looks like it works. For these cases, it is often useful to have the computer answer them in a definitive manner. So we write a unit test that asserts it does. This gives us confidence in our code, helps us find defects early, and can even help in actually implementing the code.
It's a good idea to do this wherever you feel it is necessary: any code that is a little tricky to understand, or is non-trivial. Even trivial code could benefit from it. It's all about your own confidence. How often to do it and how far to go will depend on your own satisfaction. Stop when you can confidently answer Yes to: Are you sure this works?
For this kind of testing, you don't care about visibility, interfaces, or any of that, you only care about having working code. So yes, you would test private and protected methods if you felt they needed to be tested for you to answer Yes to the question.
The take on (2) Preventing regression of behavior:
Once you've got working code, you need to have a mechanism in place to protect this code from future damage. If nobody were ever to touch your source and your config again, you wouldn't need this, but in most cases you or others will be touching the source and the configs of your software. This internal fiddling is highly likely to break your working code.
Mechanisms already exist in most languages as a way to protect against this damage. The visibility features are one mechanism: a private method is isolated and hidden. Encapsulation is another, where you compartmentalize things so that changing one compartment doesn't affect the others.
The general mechanism for this is called coding to boundaries. By creating boundaries between parts of the code, you protect everything inside a boundary from things outside of it. The boundaries become the points of interaction, and the contract by which things interact.
This means that changes to a boundary, either by breaking its interface or breaking its expected behavior, would damage and possibly break other boundaries which relied on it. That's why it's a good idea to have unit tests that target those boundaries and assert that they don't change in semantics or behavior.
This is your typical unit test, the one almost everybody talks about when mentioning TDD or BDD. The point is to harden the boundaries and protect them from change. You do not want to test private methods for this, because a private method is not a boundary. Protected methods are a restricted boundary, and I would protect them: they aren't exposed to the world, but are still exposed to other compartments or "units".
What to make of this?
As we've seen, there's a good reason to unit test public and protected methods: to assert that our interfaces don't change. And there's also good reason to test private methods: to assert that our implementation works. So should we unit test them all?
Yes and No.
Firstly: test any method for which you feel you need definitive proof that it works in most cases, so that you can be confident your code works, no matter the visibility. Then, disable those tests. They've done their job.
Lastly: write tests for your boundaries. Have a unit test for each point that will be used by other units of your system. Make sure each test asserts the semantic contract: method name, number of arguments, etc. And also make sure the test asserts the available behavior of the unit. Your test should demonstrate how to use the unit, and what the unit can do. Keep these tests enabled so that they run on every code push.
NOTE: The reason you disable the first set of tests is to allow refactoring work to occur. An active test is a form of code coupling: it prevents future modification of the code it's testing. You only want this for your interfaces and interaction contracts.
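In JUnit 5 terms, that might look like the following, with the implementation-detail test kept but switched off and the boundary test left running (all names are illustrative):
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

class ReportTest {
    @Test
    @Disabled("scaffolding: proved the pagination helper once, now frozen")
    void paginationHelperSplitsAtFiftyRows() {
        // asserted an implementation detail while writing the code
    }

    @Test // boundary test: guards the unit's contract on every push
    void tenRowsRenderAsOnePage() {
        Report report = new Report(10); // hypothetical unit
        assertEquals(1, report.pageCount());
    }
}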

No, you shouldn't test private methods (how would you, anyway, without using something horrible like reflection?). With protected methods it is slightly less obvious: in C# you can make things protected internal, and I think it is OK to do that to test derived classes that implement all of their functionality through template-pattern methods.
But, in general, if you think that your public methods are doing too much then it is time to refactor your classes into more atomic classes and then test those classes.

I too agree with @kwbeam's answer about not testing private methods. However, an important point I'd like to highlight: protected methods ARE part of a class's exported API and hence MUST be tested.
Protected methods may not be publicly accessible but you definitely are providing a way for sub-classes to use/override them. Something outside the class can access them and hence you need to ensure that those protected members behave in an expected manner. So don't test private methods, but do test public and protected methods.
If you believe you have a private method which contains critical logic, I'd try to extract it out into a separate object, isolate it and provide a way to test its behavior.
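A sketch of that extraction, with hypothetical names: the logic that used to hide in a private method becomes a small collaborator with its own tests, and the original class's tests can stub it.
// Before: Order had a private isEligibleForDiscount(Customer) method.
// After: the critical logic lives in its own, directly testable class.
public class DiscountPolicy {
    public boolean isEligible(Customer customer) {
        return customer.yearsActive() >= 2 && !customer.hasOverdueInvoices();
    }
}
DiscountPolicy now gets focused unit tests, while Order receives one through its constructor and can be tested with a stubbed policy.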
Hope it helps!

If you are aiming for high code coverage (and I suggest you should be), you should test all your methods regardless of whether they are private or protected.
Protected is a somewhat different discussion point, but in summary it should not be there at all: either it breaks encapsulation in deployed code, or it forces you to inherit from that class just to unit test it, even when you otherwise have no need to inherit.
Just hiding a method from the client (making it private) does not grant it the privilege of going unaudited. Therefore, it can be tested through the public methods, as mentioned before.

I agree with everyone else: The answer to your question is 'no'.
Indeed you are entirely correct with your approach and your thoughts, especially about code coverage.
I would add that the question (and the answer 'no') applies to public methods that you might introduce to classes as well.
If you add methods (public/protected or private) because they make a failing test pass, then you've more or less achieved the goal of TDD.
If you add methods (public/protected or private) because you just decide to, violating TDD, then your code coverage should catch these and you should be able to improve your process.
Also, for C++ (and I should think only for C++) I implement interfaces using private methods only, to indicate that the class should only be used via the interface it implements. It stops me from mistakenly calling new methods added to my implementation from my tests.

A good design means splitting the application into multiple testable units. After doing this, some units are exposed to the public API, but some others may not be. Also, the interaction points between exposed units and these "internal" units are not part of the public API either.
I think that once we have an identifiable unit, it would benefit from unit tests, regardless of whether it is exposed via the public API or not.

Simple answer: NO.
Explanation: why should you test a private function? It will automatically be tested anyway (and must be tested) when you test the feature/method that uses it.


Unit tests, private methods and hidden abstraction

I was reading about unit tests to learn a bit more about them.
It seems that testing private methods shouldn't be the general rule, only an exception. I found this article, which explains it: https://enterprisecraftsmanship.com/posts/unit-testing-private-methods/
It says that if the private method is complex, one option is to create a public class and implement the method there, so it can be tested.
Here is my doubt.
The reason not to test private methods is that it is recommended to test only what the client can use from the library.
The reason to use private methods is to not let the client know or depend on details of the internal implementation, so it is a good idea to keep the library as simple as possible for the client.
But if, to test the private method, I create a new public class and put the method there, now public, am I not giving the client details about the implementation? In practice, instead of declaring the private method public, a public class is created to hold the now-public method.
So I guess that I am misunderstanding something, but I don't know what.
In one of the answers to this question: How do you unit test private methods?, it is said that one option is to move the private method to a public class too, but it doesn't explain further (I guess the answer could be much longer).
Thanks.
But if, to test the private method, I create a new public class and put the method there, now public, am I not giving the client details about the implementation?
The trick here is to not actually expose this. Exactly how to do this is language/ecosystem dependent, but generally speaking you’ll try to ship your code in a way that implementation details will not be (easily) accessible by end users.
For example, in C++ you could have private headers exposing the functionality, which you don't ship with your library (not a problem as long as they aren't included in your interface headers). Java has its "jigsaw" module system. But even then, if it can't be mechanically enforced, you can still socially enforce it by making it very clear, with things like package and class names, when end users aren't supposed to use things. For example, if you're not using Java's module system you can still put the implementation details for your package lib.foo in a package called lib.foo.implementation_details, similar to how in languages like Smalltalk, where you don't have private methods at all, you can still give your methods names like private_foo that make it quite clear they're not meant for external users.
Of course mechanical enforcement is better, but as you note it’s not always an option. Even if it’s not available, the principles of private vs. public and interface vs. implementation still apply, you just have to be a bit more creative in how you make sure people actually adhere to these things.
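Concretely, following the lib.foo example above and staying outside the module system, that could look like this (the class and method are invented for illustration):
// Technically public, but the package name shouts "implementation detail".
package lib.foo.implementation_details;

public class LineTokenizer {
    public String[] tokenize(String line) {
        return line.trim().split("\\s+");
    }
}
// lib.foo's public API delegates to LineTokenizer; tests import it directly.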
The reason not to test private methods is that it is recommended to test only what the client can use from the library.
There are a lot of people explaining "the" goals of unit-testing, but in fact they are describing their goals when doing unit-testing. Unit-testing is applied in many different domains and contexts, starting from toy projects but ending in safety-relevant software for nuclear plants, airplanes etc.
In other words, there is a lot of software developed where the abovementioned recommendation may be fine. But you can apply unit-testing way beyond that. If you don't want to start with a restricted view of what unit-testing can be about, you might better look at it in the following way: one of the primary goals of testing in general, and also of unit-testing, is to find bugs (see Myers, Badgett, Sandler: The Art of Software Testing, or Beizer: Software Testing Techniques, but also many others).
Taking it as granted that unit-testing is about finding bugs, then you may also want to test the implementation details: Bugs are in the implementation - different implementations of the same functionality have different bugs. Take a simple fibonacci function: It can be implemented as a recursive function, as an iterative function, as a closed expression (Moivre/Binet), using a hand-written lookup-table, using an automatically-generated lookup-table etc. For each of these implementations, the set of likely bugs will differ dramatically.
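To make that concrete: suppose a hypothetical Fib.of() uses a hand-written lookup table for n <= 20 and an iterative fallback above that. A test written with knowledge of the implementation probes both sides of the boundary, which a black-box test would hit only by luck:
@Test
void coversBothSidesOfTheLookupTableBoundary() {
    assertEquals(6765L, Fib.of(20));  // last value served from the table
    assertEquals(10946L, Fib.of(21)); // first value from the fallback path
}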
Another example is sorting: there is a plethora of sort functions which, from a functionality perspective, do the same thing, and many even have the same user interface. The introsort algorithm is quite interesting with respect to testing because it implements a quicksort, but when it realizes that it is running into a degenerate case, it falls back to another algorithm (typically heap-sort). Testing an introsort therefore also means creating such a degenerate data set that forces the algorithm into the heap-sort path, just to be sure that the potential bugs in the heap-sort part can be found. Looking at the public interface alone, you would never come up with such a test case (at least that would be quite a coincidence).
Summarized so far: Testing implementation details is by no means bad practice. It comes at a cost: Tests that go into implementation details are certainly more likely to break or become useless when the implementation changes. Therefore, it depends on your project whether finding more bugs is more important than saving test maintenance effort.
Regarding the possibilities to make private functions accessible for tests while still not making them part of the "official" public interface: @Cubic has explained nicely the difference between a) being public in the technical sense of the visibility rules of the given programming language, and b) belonging to the "official" and documented public API.

Unit testing private methods seems to make the whole solution easier

Let's suppose there is a utility class (no data) with one complex (as in, hard-to-test) public method. It uses random number generation, returns large arrays of data, and that sort of fun stuff. However, if you implement it as small private methods, every private method would be easy to test, and thus the whole thing would be easier to test. From the application's point of view, only the big method needs to be public, and the others should be private. Yet testing the private methods results in an easier-to-test class. How should I approach this problem?
Sometimes generating random numbers, returning large arrays, and other fun stuff means that the single utility class is responsible for more than one thing, which means there should be more classes instead. High complexity in a single class (single method!) is sometimes a sign of bad design. But there's never one single golden rule to follow.
Whether you should leave your method as a single blackbox algorithm whose subparts aren't testable, or try to externalize as many responsibilities as possible to separate classes, is very situational.
You might have subparts that are likely to be reused, in which case it's a good idea to factor them out. You might have subparts that talk to other layers or to the hardware - same thing.
It all depends on what these small sub methods do, it's hard to tell without a context.
The need to test private methods should be a warning sign. It is not the solution to your big and complex method. Extraction of functionality into smaller classes IS the solution, considering the Single Responsibility Principle (SRP). SRP states that a class should really only do one thing.
Random number generation, array handling, and fun stuff are at least three separate things that should be done separately.
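One way to begin that separation, sketched with hypothetical names: inject the source of randomness so the "fun stuff" becomes deterministic, and therefore testable, on its own.
import java.util.Random;

public class SampleGenerator {
    private final Random random;

    public SampleGenerator(Random random) { // seedable in tests
        this.random = random;
    }

    public int[] generate(int count) {
        int[] samples = new int[count];
        for (int i = 0; i < count; i++) {
            samples[i] = random.nextInt(100);
        }
        return samples;
    }
}
// In a test, new SampleGenerator(new Random(42)) yields a repeatable array.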
There's a very strong argument for only testing the public API of a class. It makes refactoring your code that much easier because unless you change the method signature, the unit tests won't need to change and they'll validate that your changes didn't break anything.
That being said, sometimes it can make sense to test the private methods (although this would be the exception, not the rule). Some test frameworks (e.g. MSTest) let you access private members for just this purpose, while others (e.g. NUnit) don't, since they don't think you should be doing it (in the case of .NET it doesn't take too much to write your own reflection class/method to give you access, though).
If you do decide to test your private methods, make sure you also test the full public method too as you'll need the public tests to verify your later refactors, as mentioned above.
Testing private members will always make your tests brittle. When you test private methods, your tests are dependent on implementation details that can and will often change. The reason you made them private to begin with is that you want to be able to change them without impacting the rest of your software. Guess what: if you expose your privates, even if it is only using reflection "black magic", you are making them public, because your tests are part of your software as well.
If you feel that you must test the implementation details, whether because your public API does too much, or because the private methods are so important, you should refactor your code and extract your private members as public classes and test those. Remember, because your tests are also part of the software, they should have "normal" access to the code that they use.
If you are adamant that these (not so) private methods cannot be publicly exposed, perhaps for fear of allowing external users / code to access them, you can limit the access to these classes by making them internal (in C#) or package-private (in Java), and have the code's internals visible to your tests.
Either way, it sounds that your code should either not be tested or should be made (more) public.

Faking a Method of the Object Under Test

Is there a reason why you shouldn't create a partial fake of an object, or fake just one method on the object that you are testing, for the sake of testing another method? This might be helpful to save you from making an entire new mock object, or when there is an external dependency in the method you are faking which you can't reasonably get rid of and would like to keep out of all the other unit tests.
The objects you want to do this for are trying to do too many things. In particular, if you have an external dependency, you would normally create an object to isolate that dependency. The Façade pattern is one example of this. If your objects weren't designed with testability in mind you may have to do some refactoring. Take a look at Michael Feathers' paper on working with legacy code (PDF). He also has a book by the same title that goes into much more detail.
It is a very bad idea to mock/fake part of a class in order to test another part.
Doing this, you are not testing what the real code does under the conditions under test, leading to unreliable test results.
It also increases the maintenance burden of the faked part of the class. If this is in effect for the whole test program, the fake implementation also makes other tests on the faked method harder.
You need to ask yourself why you need to fake out the part under test.
If it is because the method is accessing a file or database, then you should define an interface and pass an instance of that interface to the class constructor or method. This allows you to test different scenarios in the same test application.
If it is because you are using singletons, you should rethink your design to make it more testable: removing singletons will remove implicit dependencies and maintenance nightmares.
If you are using static methods/free-standing functions to access data in a registry or settings file, you should really move that out of the function under test and pass the data as a parameter or provide a settings provider interface. This will make the code more flexible and robust.
If it is to break a dependency for the purpose of testing (e.g. faking out a vector method to test a method in a matrix class), then you should not be faking that; you should treat the code under test as defined by its public interface: methods, pre-conditions, post-conditions, invariants, documentation, parameters, and exception specifications.
You can use knowledge of the implementation details to test special edge cases, but trigger those through the main API, not by faking an implementation detail.
For example, suppose you faked std::vector::at() but the implementation switched to use operator[] instead. Your test would break or silently pass.
If the method you want to fake is virtual (as in, not static and not final), then you can subclass your object in your test, override the method in the subclass, and exercise the subclass in the test. No mock-object libraries required.
(Ideally you should consider refactoring, this is not a great long-term solution. But it is a way to get legacy code under test so you can start the refactoring process more easily.)
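A sketch of subclass-and-override in Java, where instance methods are virtual by default (ReportService and its methods are invented for illustration):
class ReportService {
    public String buildReport() {
        return "header\n" + fetchRows();
    }

    protected String fetchRows() {
        // imagine a slow call to the real database here
        throw new UnsupportedOperationException("requires a live database");
    }
}

// In the test, override only the awkward method:
ReportService service = new ReportService() {
    @Override
    protected String fetchRows() {
        return "row1\nrow2"; // canned data, no database needed
    }
};
assertEquals("header\nrow1\nrow2", service.buildReport());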
The Extract and Override technique described in Chapter 3 of Roy Osherove's The Art of Unit Testing does seem to be a way to fake part of the class under test (pp. 71-77). Osherove does not address the concerns raised in some of the other answers to this question.
In addition, Michael Feathers discusses this in Working Effectively with Legacy Code. He terms the resulting class a testing subclass (p. 227) and the technique Subclass and Override Method (p. 401). Now, granted, Feathers is not giving an exposition of pristine techniques that are recommended on new code. But he still gives it serious treatment as a potentially helpful technique.
I also asked my former computer professor about this. He is well-read and currently works full-time in the software industry, where he has advanced rapidly. He said that this technique definitely has a good application, and that there are several dozen classes in the codebase at his company that are under test in this way. He said that, like any technique, it can be overused.
I originally wrote the question when I was new to unit testing and knew next to nothing about dependency injection. Now, after some experience with both, I would add that the need to use this testing technique could be a smell. It may be a sign that you need to rework your approach to dependencies. If the method that needs to be faked is one that is inherited from a base class, it may mean that you need to take the adage "favor composition over inheritance" more seriously. You should inject your dependencies rather than inheriting them.
There are some really nice packages for facilitating this kind of stuff. For instance, from the Mockito docs:
import java.util.LinkedList;
import static org.mockito.Mockito.*;

// You can mock concrete classes, not only interfaces
LinkedList<String> mockedList = mock(LinkedList.class);
// stubbing
when(mockedList.get(0)).thenReturn("first");
does some real magic that's hard to believe at first. When you call
String firstMember = mockedList.get(0);
you'll get back "first", because of what you said in the "when" statement.
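For the original question here (faking one method of the object under test), Mockito covers that case too, with partial mocks via spy(); a sketch, where SomeService and its methods are hypothetical:
import static org.mockito.Mockito.*;

SomeService partial = spy(new SomeService());
// Stub only the method with the external dependency...
doReturn("canned").when(partial).slowLookup();
// ...every other method, including the one under test, runs for real.
String result = partial.methodUnderTest();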

Test "proxies"; good TDD or a code smell?

When I write tests for certain types of objects, such as UI elements like Forms or UserControls, I often find myself altering my TDD pattern; instead of going test-first, I define and lay out the form's controls, to provide a "skeleton", then start writing behavioral tests (databinding/"unbinding", display-mode behavior, etc). In doing so, I find myself dealing with non-public members. I also run into the same concern with other behavioral methods; I may want to focus on and exercise logic in some private helper called by another method, before worrying about the usage in the other method and its behavior.
To me, making everything public (and sometimes virtual) just to be able to unit-test everything is a smell; I don't want other objects being able to call a helper, or directly access a textbox; but I need to know that the helper does its job and the textbox gets its value when the form loads.
The solution I arrived at some time ago is to create a "test proxy" for the actual object under test. The proxy derives from the object under test and doesn't hide or override any behavior, but it does provide internally-visible getters, setters and/or methods that call non-public members of the object under test. This allows me to tell the object to perform certain actions and then view the results, without requiring the test to also depend on proper integration within the object, and without making the method or some other member of interest public in production code just for testing purposes.
Advantages I see:
Members' visibility is not determined by whether you want a unit test or not.
Finer control over what you can do with the object in a test allows for more flexible and extensible testing.
Disadvantages I see:
Class count increases, with an extra level to develop just for testing purposes.
Care must be taken not to somehow end up using the test proxy in production code (making the constructor or the entire class internal generally does the trick).
Not a "pure" unit test, as you are, at some level, dependent on integration between the proxy and the actual object under test.
The question is, is this a valid way to architect unit tests, or does the fact that I have to do this indicate a problem with the code or the testing strategy?
My first reaction to this pattern is that you're de-emphasizing the 'D' in TDD. Your code is tested, but those tests are not driving your design, so the code you end up with has a different structure than it would if you had written tests first: a structure with more inaccessible private state than necessary. In general, I'll argue that if you can't test your class's behavior using the public interfaces, then you are either writing a test which doesn't make sense (testing implementation details) or you have a poorly designed public interface.
However if you're working with view classes this becomes a bit more complicated since you have "public" inputs and outputs via your view which you want to test but which are not necessarily exposed to the code using this view component. In that case I think it makes sense to give your tests access to that user interface; either by exposing those normally private attributes to the test (your proxy is one option and others may be available depending on the language you are using) or by writing some form of functional test which can drive the UI (again tools available depend on your environment).
I would say it indicates a problem with your testing strategy or code. The reason is that you're violating encapsulation, which is going to couple your tests to the implementation rather than the interface. This will add to the overall work you do, because refactors (for example) may no longer be free.
That being said, I think there are good reasons to violate encapsulation, and they revolve around functions with side effects (which is often the case with UI programming). In many cases you need to ensure that functions are being called in a particular order or that they are called at all. I think there are ways to mitigate how much you violate encapsulation.
If I'm writing tests for side effects I will usually separate them out into their own test. I will also stub/mock out the side-effect functions and assert they are called according to my requirements (i.e. order, timing, called or not, arguments, etc.). This frees me from knowing the implementation details, but still allows me to assert that particular functions were called properly.
In certain languages it can be difficult to mock out the objects, or methods. In those cases I will use dependency injection to pass the object or functions with side effects. That way when testing I can pass my mocks for verification.
MSTest uses this method to test private methods. Visual Studio puts all the tests in a separate test project and creates an "Accessor" class that exposes all the private members publicly. Since this accessor lives in the test project, it doesn't ship with the assembly under test. I think this is a viable pattern for testing private methods, and it could be manually implemented in a TDD environment if you aren't using Visual Studio and MSTest.
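A hand-rolled version of the same idea, sketched in Java with a hypothetical Form class; note that this only widens protected members, while truly private ones would still need reflection:
// Lives only in the test sources, never in the production artifact.
class FormAccessor extends Form {
    String callFormatTitle(String raw) {
        return formatTitle(raw); // formatTitle is protected in Form
    }
}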

Should non-public functions be unit tested and how?

I am writing unit tests for some of my code and have run into a case where I have an object with a small exposed interface but complex internal structures, as each exposed method runs through a large number of internal functions, including dependencies on the object's state. This makes the methods on the external interface quite difficult to unit test.
My initial question is should I be aiming to unit test these internal functions as well, as they are simpler and hence easier to write tests for? My gut feeling says yes, which leads to the follow-up question of if so, how would I go about doing this in C++?
The options I've come up with are to change these internal functions from private to protected and use either a friend class or inheritance to access them. Is this the best/only method of doing this while preserving some of the semantics of keeping the internal methods hidden?
If your object is performing highly complex operations that are extremely hard to test through the limited public interface, an option is to factor out some of that complex logic into utility classes that encapsulate specific tasks. You can then unit test those classes individually. It's always a good idea to organize your code into easily digestible chunks.
Short answer: yes.
As to how, I caught a passing reference on SO a few days ago:
#define private public
in the unit-testing code, evaluated before the relevant headers are read...
Likewise for protected.
Very cool idea.
Slightly longer answer: test if the code is not obviously correct, which means essentially any code that does something non-trivial.
On consideration, I am wondering about this. You won't be able to link against the same object file that you use in the production build. Now, unit testing is a bit of an artificial environment, so perhaps this is not a deal-breaker. Can anyone shed some light on the pros and cons of this trick?
My feeling personally is that if testing the public interface is not sufficient to adequately test the private methods, you probably need to decompose the class further. My reasoning is: private methods should be doing only enough to support all use-cases of the public interface.
But my experience with unit testing is (unfortunately) slim; if someone can provide a compelling example where a large chunk of private code cannot be separated out, I'm certainly willing to reconsider!
There are several possible approaches, presuming your class is X:
Only use the public interface of X. You will have extensive setup problems and may need a coverage tool to make sure that your code is covered, but there are no special tricks involved.
Use the "#define private public" or similar trick to link against a version of X.o that is exposed to everyone.
Add a public "static X::unitTest()" method. This means that your code will ship linked to your testing framework. (However, one company I worked with used this for remote diagnostics of the software.)
Add "class TestX" as a friend of X. TestX is not shipped in your production dll/exe. It is only defined in your test program, but it has access to X's internals.
Others...
My opinion is no, generally they should not be tested directly.
Unit tests are white-box from the higher perspective of the system, but they should be black-box from the perspective of the tested class's interface (its public methods and their expected behavior).
So for example, a string class (that wouldn't need legacy char* support):
you should verify that its length() method is working correctly (see the sketch at the end of this answer).
you should not have to verify that it puts the '\0' char at the end of its internal buffer. This is an implementation detail.
This allows you to refactor the implementation almost without touching the tests later on
This helps you reduce coupling by enforcing class responsibilities
This makes your tests easier to maintain
The exception is for quite complex helper methods you would like to verify more thoroughly.
But then, this may be a hint that this piece of code should be made "official", either by making it public static or by extracting it into its own class with its own public methods.
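As a sketch of the length() example above in black-box style (MyString is hypothetical):
@Test
public void lengthCountsCharacters() {
    MyString s = new MyString("abc");
    assertEquals(3, s.length());
    // Deliberately no assertion about any internal buffer or terminator:
    // the representation is free to change without breaking this test.
}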
I would say: use a code coverage tool to check whether these functions are already tested somehow.
Theoretically if your public API is passing all the tests then the private functions are working fine, as long as every possible scenario is covered. That's the main problem, I think.
I know there are tools for that which work with C/C++; CoverageMeter is one of them.
Unless you're making a general-purpose library, you should try to limit what you have built to what you will be using. Extend as you need it.
As such, you should have full code coverage, and you should test it all.
Perhaps your code is a bit smelly? Is it time to refactor?
If you have a large class doing lots of things internally, perhaps it should be broken into several smaller classes with interfaces you could test separately.
I've always thought this would tend to fall into place if you use test-driven development. There are two ways of approaching the development: either you start with your public interface and write a new test before each addition to the complex private methods, or you start off working on the complex stuff as public, then refactor the code to make those methods private and change the tests you've already written to use the new public interface. Either way you should get full coverage.
Of course I've never managed to write a whole app (or even class) in a strict TDD way, and refactoring the complex stuff into utility classes is the way to go if possible.
You could always use a compile switch around the private: section, like:
#if defined(UNIT_TEST)
Or, with code coverage tools, verify that your unit testing of your public functions fully exercises the private ones.
Yes, you should. It's called white-box testing; it means that you have to know a lot about the internals of the program to test it properly.
I would create public 'stubs' that call the private functions for testing. #ifdef the stubs so that you can compile them out when the testing is complete.
You might find it productive to write a test program. In the test program, create a class which uses the class to be tested as a base. Add methods to your new class to test the functions that aren't visible at the public interface, and have your test program call these methods to validate the functions you are concerned about.
If your class is performing complex internal calculations, a utility class or even an external function may be the best option for breaking out the calculation. But if the object has a complex internal structure, the object should have consistency-check functions. I.e., if the object represents a specialized tree, the class should have methods to check that the tree is still correct. Additional functions, such as tree depth, are often useful to users of the class. Some of these functions may be declared inside #ifdef DEBUG or similar constructs to limit run-time space in embedded applications. Using internal functions that are only compiled when DEBUG is set is much better from an encapsulation standpoint: you aren't breaking encapsulation. Additionally, the implementation-dependent tests are kept with the implementation, so it is obvious that the tests need to change when the implementation changes.