I am writing unit tests for a legacy application that has no automated tests. While writing them I realized that without refactoring, the tests get pretty large, so I need many helper functions to keep them readable.
Is it okay to use "arrange" or "setup" methods in your tests to keep them readable, or is my test too complicated? Here is some pseudocode.
[TestMethod]
public void TestFoo()
{
    Obj1 obj1;
    Obj2 obj2;
    ...

    // arrange
    SetupObjects(out obj1, out obj2);
    // act
    Foo.foo(obj1, obj2);
    // assert
    AssertObjectStates(obj1, obj2);
}

void SetupObjects(out Obj1 obj1, out Obj2 obj2)
{
    obj1 = CreateAndDoSomethingWithObj1();
    obj2 = MockSomethingInObj2();
}
...
It is very common to have helper functions as part of test code. Some situations in which they can be beneficial are:
Test code tends to be repetitive, so helper functions can reduce code duplication. This simplifies the creation and maintenance of test suites with a larger number of tests. (Note that there are also other mechanisms for reducing duplication: test frameworks typically offer some kind of setup and teardown interface that is implicitly called before and after each test, respectively. However, these often lead to a phenomenon that Meszaros calls a "mystery guest": something magic seems to happen outside of the test code. Explicitly called, clearly named helper functions are a better mechanism to get the best of both worlds, namely to avoid duplication in test setups while still making each test easy to fully understand, as the sketch after this list illustrates.)
Encapsulating test code that uses implementation details or non-public interfaces: unit testing is the test level closest to the implementation, and if your intent is also to find implementation-specific bugs, some tests will be implementation specific. There are also other situations where tests are built on glass-box (aka white-box) knowledge, for example when you are aiming for a coverage target (such as MC/DC coverage). This means, however, that when designing and implementing your tests you have to find ways to keep test code maintenance effort low. By encapsulating the parts of the test code where implementation details are used, you can end up with tests that also find implementation-level bugs while still keeping maintenance effort at an acceptable level.
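To illustrate the difference, here is a minimal MSTest-style sketch; the Order type and the helper names are hypothetical:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Order
{
    public List<string> Items { get; } = new List<string>();
}

[TestClass]
public class OrderTests
{
    // Implicit setup: runs before every test, but a reader of the test
    // below has to look up here to learn where the state comes from:
    // the "mystery guest".
    [TestInitialize]
    public void Init() { /* shared fixture creation would go here */ }

    [TestMethod]
    public void ItemCountReflectsAllItems()
    {
        // Explicitly called, clearly named helper: the reader sees at a
        // glance what state the test starts from, without duplication.
        var order = CreateOrderWithTwoItems();
        Assert.AreEqual(2, order.Items.Count);
    }

    private static Order CreateOrderWithTwoItems()
    {
        var order = new Order();
        order.Items.Add("first item");
        order.Items.Add("second item");
        return order;
    }
}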
Now, looking at your code: it is certainly only example code, so the following may already be clear to you even though it is not apparent from the code itself. The way the helper functions are used there is unfortunate. The name SetupObjects is not descriptive, and the function sets up both objects individually, although to a reader of the test code it appears as if SetupObjects did something common to both objects.
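For instance, splitting the helper and naming each part after what it actually does (a sketch reusing the question's pseudocode types, with hypothetical helper names) makes the test read more honestly:

[TestMethod]
public void TestFoo()
{
    // arrange: each helper says exactly what it does to which object
    Obj1 obj1 = CreateObj1WithPreparedState();
    Obj2 obj2 = CreateObj2WithMockedCollaborator();
    // act
    Foo.foo(obj1, obj2);
    // assert
    AssertObjectStates(obj1, obj2);
}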
Related
I'm sorry, it's a very long post. I've read almost everything on the subject, and I'm not yet convinced that it's a bad idea to partially mock the SUT to just DRY up the tests. So, I need to first address all the reasonings against it in order to avoid repetitive answers. Please bear with me.
Have you ever felt the urge to partially mock out the SUT itself, in order to make tests more DRY? Less mocks, less madness, more readable tests?!
Let's have an example at hand to discuss the subject more clearly:
class Sut
{
    public function fn0(...)
    {
        // Interacts with: Dp0
    }

    public function fn1(...)
    {
        // Calls: fn0
        // Interacts with: Dp1
    }

    public function fn2(...)
    {
        // Calls: fn0
        // Interacts with: Dp1
    }

    public function fn3(...)
    {
        // Calls: fn2
    }

    public function fn4(...)
    {
        // Calls: fn1(), fn2(), fn3()
        // Interacts with: Dp2
    }
}
Now, let's test the behavior of the SUT. Each fn*() represents a behavior of the class under test. Here, I'm not trying to unit-test each single method of the SUT, but its exposed behaviors.
class SutTest extends \PHPUnit_Framework_TestCase
{
    /**
     * @covers Sut::fn0
     */
    public function testFn0()
    {
        // Mock Dp0
    }

    /**
     * @covers Sut::fn1
     */
    public function testFn1()
    {
        // Mock Dp1, which is a direct dependency
        // Mock Dp0, which is an indirect dependency
    }

    /**
     * @covers Sut::fn2
     */
    public function testFn2()
    {
        // Mock Dp1 with different expectations than testFn1()
        // Mock Dp0 with different expectations
    }

    /**
     * @covers Sut::fn3
     */
    public function testFn3()
    {
        // Mock Dp1, again with different expectations
        // Mock Dp0, with different expectations
    }

    /**
     * @covers Sut::fn4
     */
    public function testFn4()
    {
        // Mock Dp2, which is a direct dependency
        // Mock Dp0 and Dp1 as indirect dependencies
    }
}
You get the terrible idea! You need to keep repeating yourself. It's not DRY at all. And as the expectations of each mock object may differ for each test, you can't just mock out all the dependencies and set expectations once for the whole testcase. You need to be explicit about the behavior of the mock for each test.
Let's also have some real code to see what it would look like when a test needs to mock out all dependencies through the code path it's testing:
/** @test */
public function dispatchesActionsOnACollectionOfElementsFoundByALocator()
{
    $elementMock = $this->mock(RemoteWebElement::class)
        ->shouldReceive('sendKeys')
        ->times(3)
        ->with($text = 'some text...')
        ->andReturn(Mockery::self())
        ->mock();

    $this->inject(RemoteWebDriver::class)
        ->shouldReceive('findElements')
        ->with(WebDriverBy::class)
        ->andReturn([$elementMock, $elementMock, $elementMock])
        ->shouldReceive('getCurrentURL')
        ->zeroOrMoreTimes();

    $this->inject(WebDriverBy::class)
        ->shouldReceive('xpath')
        ->once()
        ->with($locator = 'someLocatorToMatchMultipleElements')
        ->andReturn(Mockery::self());

    $this->inject(Locator::class)
        ->shouldReceive('isLocator', 'isXpath')
        ->andReturn(true, false);

    $this->type($text, $locator);
}
Madness! In order to test a small method, you find yourself writing a test that is extremely unreadable and coupled to the implementation details of 3 or 4 other dependent methods up the chain. It's even more terrible when you see the whole test case: lots of those mock blocks, duplicated to set different expectations, covering different paths of the code. The test mirrors the implementation details of a few others. It's painful.
Workaround
OK, back to the first pseudo-code. While testing fn3(), you start thinking: what if I could mock the call to fn2() and put a stop to all the mocking madness? I make a partial mock of the SUT, set expectations for fn2(), and make sure that the method under test interacts with fn2() correctly.
In other words, to avoid excessive mocking of external dependencies, I focus on just one behavior of the SUT (which might be one or a couple of methods) and make sure it behaves correctly. I mock out all other methods that belong to other behaviors of the SUT. No worries about them; they all have their own tests.
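To make the workaround concrete, here is a hedged C# sketch using Moq's partial-mocking support (CallBase); it assumes fn2 is made virtual so the mocking library can override it, and it only mirrors the shape of the pseudocode above:

using Moq;
using Xunit;

public class Sut
{
    // Virtual so the mocking library can override it.
    public virtual int Fn2()
    {
        // imagine interactions with Dp0 and Dp1 here
        return 42;
    }

    // The behavior under test; it builds on Fn2.
    public int Fn3() => Fn2() + 1;
}

public class SutTests
{
    [Fact]
    public void Fn3_AddsOneToWhatFn2Returns()
    {
        // Partial mock: CallBase keeps the real implementation for
        // everything that is not explicitly stubbed.
        var sut = new Mock<Sut> { CallBase = true };
        sut.Setup(s => s.Fn2()).Returns(10); // stop the mock chain here

        Assert.Equal(11, sut.Object.Fn3());
        sut.Verify(s => s.Fn2(), Times.Once());
    }
}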
Contrary reasonings
One might discuss that:
The problem with stubbing/mocking methods in a class is that you are violating the encapsulation. Your test should be checking to see whether or not the external behaviour of the object matches the specifications. Whatever happens inside the object is none of its business. By mocking the public methods in order to test the object, you are making an assumption about how that object is implemented.
When unit-testing, it's rare that you only deal with behaviors that are fully testable by providing input and expecting output, treating them as black boxes. Most of the time you need to test how they interact with one another. So we need at least some information about the internal implementation of the SUT to be able to fully test it.
When we mock a dependency, we're ALREADY making an assumption about how the SUT works. We're binding the test to the implementation details of the SUT. So, now that we're deep in the mud, why not mock an internal method to make our lives easier?!
Some may say:
Mocking the methods is taking the object (SUT) and breaking it into two pieces. One piece is being mocked while the other piece is being tested. What you are doing is essentially an ad-hoc breaking up of the object. If that's the case, just break up the object already.
Unit tests should treat the classes they test as black boxes. The only thing which matters is that its public methods behave the way it is expected. How the class achieves this through internal state and private methods does not matter. When you feel that it is impossible to create meaningful tests this way, it is a sign that your classes are too powerful and do too much. You should consider to move some of their functionality to separate classes which can be tested separately.
If there are arguments that support this separation (partially mocking the SUT), the same arguments can be used to refactor the class into two classes, and that is exactly what you should do then.
If it were a smell for SRP, then yes, the functionality could be extracted into another class, and you could easily mock that class and go happily home. But that's not the case here. The SUT's design is OK: it has no SRP issues, it's small, it does one job, and it adheres to the SOLID principles. Looking at the SUT's code, there's no reason to break the functionality out into other classes. It's already broken into very fine pieces.
How come when you look at the SUT's tests, you decide to break up the class? How come it's OK to mock out all those dependencies along the way when testing fn3(), but not OK to mock the only real dependency it has (even though it's an internal one), fn2()? Either way, we're bound to the implementation details of the SUT. Either way, the tests are fragile.
It's important to notice why we want to mock those methods: we just want easier testing and fewer mocks, while maintaining the absolute isolation of the SUT (more on this later).
Someone else might reason:
The way I see it, an object has external and internal behaviors. External behavior includes returns values, calls into other objects, etc. Obviously, anything in that category should be tested. But internal behavior shouldn't really be tested. I don't write tests directly on the internal behavior, only indirectly through the external behavior.
Right, I do that, too. But we're not testing the internals of the SUT; we're just exercising its public API, and we want to avoid excessive mocking.
The reasoning states that external behavior includes calls into other objects; I agree. We're trying to test the SUT's external calls here as well, just by early-mocking the internal method that makes the interaction. The mocked methods already have their own tests.
Another reasoning goes:
Too many mocks and already perfectly broken into multiple classes? You are over-unit-testing by unit-testing what should be integration tested.
The example code has just 3 external dependencies, and I don't think that's too many. Again, it's important to notice why I want to partially mock the SUT: solely for easier testing, avoiding excessive mocks.
By the way, there may be some truth to this reasoning; I might need to do integration testing in some cases. More on this in the next section.
The last one says:
These are all tests man, not production code, they don't need to be DRY!
I've actually read something like this! And I simply disagree. My lifetime needs to be put to good use, and so does yours!
Bottom-line: To mock, or not to mock?
When we choose to mock, we're writing white-box unit tests. We're binding the tests to the implementation details of the SUT, more or less. Then, if we decide to go down the PURE way and radically maintain the isolation of the SUT, sooner or later we find ourselves in the hell of mocking madness... and fragile tests. Ten months into maintenance, you find yourself serving the unit tests instead of them serving you! You find yourself re-implementing multiple tests for a single change in the implementation of one SUT method. Painful, right?
So, if we're going this way, why not partially mock the SUT? Why not make our lives waaaaaaaaay easier? I see no reason not to do so. Do you?
I've read and read, and finally came across this article by Uncle Bob:
https://8thlight.com/blog/uncle-bob/2014/05/10/WhenToMock.html
To quote the most important part:
Mock across architecturally significant boundaries, but not within those boundaries.
I think it's the remedy to all the mocking madness I told you about. There's no need to radically maintain the isolation of the SUT, as I had blindly learned. Even though that may work most of the time, it may also force you to live in your private mocking hell, banging your head against the wall.
This little gem of advice is the only reasoning that makes sense as a case against partially mocking the SUT. In fact, it's the exact opposite of doing so. But now the question would be: isn't that integration testing? Is that still called unit testing? What's the UNIT here? Architecturally significant boundaries?
Here's another article by Google Testing team, implicitly suggesting the same practice:
https://testing.googleblog.com/2013/05/testing-on-toilet-dont-overuse-mocks.html
To recap
If we're going down the pure isolation way, assuming that the SUT is already broken into fine pieces with the minimum possible external dependencies, is there any reason not to partially mock the SUT, in order to avoid excessive mocking and make the unit tests more DRY?
If we take the advice of Uncle Bob to the heart and only "Mock across architecturally significant boundaries, but not within those boundaries.", is that still considered unit testing? What's the unit here?
Thank you for reading.
P.S. Those contrary reasonings are more or less from existing SO answers or articles I found on the subject. Unfortunately, I don't currently have the refs to link to.
Unit tests don't have to be isolated unit tests, at least if you accept the definition promoted by authors like Martin Fowler and Kent Beck. Kent is one of the creators of JUnit, and probably the main proponent of TDD. These guys don't do mocking.
In my own experience (as long-time developer of an advanced Java mocking library), I see programmers misusing and abusing mocking APIs all the time. In particular, when they think that partially mocking the SUT is a valid idea. It is not. Making test code more DRY shouldn't be an excuse for over-mocking.
Personally, I favor integration tests with minimal or (preferably) no mocking. As long as your tests are stable and run fast enough, they are OK. The important thing is that tests don't become a pain to write and maintain, and more importantly, that they don't discourage programmers from running them. (This is why I avoid functional UI-driven tests; they tend to be a pain to run.)
I'm looking to better understand how I should test functions that have many substeps or subfunctions.
Let's say I have this function:
// Modify the state of the class somehow
public void DoSomething()
{
    DoSomethingA();
    DoSomethingB();
    DoSomethingC();
}
Every function here is public. Each subfunction has 2 paths. So to test every path for DoSomething() I'd have 2*2*2 = 8 tests. By writing 8 tests for DoSomething() I will have indirectly tested the subfunctions too.
So should I be testing like this, or instead write unit tests for each of the subfunctions and then only write 1 test case that measures the final state of the class after DoSomething() and ignore all the possible paths? A total of 2+2+2+1 = 7 tests. But is it bad then that the DoSomething() test case will depend on the other unit test cases to have complete coverage?
There appears to be a very prevalent religious belief that testing should be unit testing. While I do not intend to underestimate the usefulness of unit testing, I would like to point out that it is just one possible flavor of testing, and its extensive (or even exclusive) use is indicative of people (or environments) that are somewhat insecure about what they are doing.
In my experience knowledge of the inner workings of a system is useful as a hint for testing, but not as an instrument for testing. Therefore, black box testing is far more useful in most cases, though that's admittedly in part because I do not happen to be insecure about what I am doing. (And that is in turn because I use assertions extensively, so essentially all of my code is constantly testing itself.)
Without knowing the specifics of your case, I would say that in general, the fact that DoSomething() works by invoking DoSomethingA() and then DoSomethingB() and then DoSomethingC() is an implementation detail that your black-box test should best be unaware of. So, I would definitely not test that DoSomething() invokes DoSomethingA(), DoSomethingB(), and DoSomethingC(), I would only test to make sure that it returns the right results, and using the knowledge that it does in fact invoke those three functions as a hint I would implement precisely those 7 tests that you were planning to use.
On the other hand, it should be noted that if DoSomethingA(), DoSomethingB(), and DoSomethingC() are also public functions, then you should test them individually as well.
Definitely test every subfunction separately (because they're public).
It would help you find the problem if one pops up.
If DoSomething only uses other functions, I wouldn't bother writing additional tests for it. If it has some other logic, I would test it, but assume all functions inside work properly (if they're in a different class, mock them).
The point is finding what the function does that is not covered in other tests and testing that.
Indirect testing should be avoided. You should write unit tests for each function explicitly; after that, you should mock the submethods and test your main function. For example:
Say you have a method which inserts a user into the DB, like this:
void InsertUser(User user)
{
    var exists = SomeExternal.UserExists(user);
    if (exists)
        throw new Exception("bla bla bla");
    // insert code here
}
If you want to test the InsertUser function, you should mock the external/sub/nested methods and test the behaviour of InsertUser itself.
This gives you two tests:
1 - "When user exists, should throw exception"
2 - "When user does not exist, should insert user"
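A sketch of those two tests in C# with Moq, assuming the static SomeExternal call is refactored into an injectable ISomeExternal interface (a static call cannot be mocked directly); all names besides InsertUser are hypothetical:

using System;
using Moq;
using Xunit;

public class User { }

public interface ISomeExternal
{
    bool UserExists(User user);
}

public class UserService
{
    private readonly ISomeExternal _external;

    public UserService(ISomeExternal external) => _external = external;

    public void InsertUser(User user)
    {
        if (_external.UserExists(user))
            throw new Exception("bla bla bla");
        // insert code here
    }
}

public class UserServiceTests
{
    [Fact]
    public void WhenUserExists_ShouldThrowException()
    {
        var external = new Mock<ISomeExternal>();
        external.Setup(e => e.UserExists(It.IsAny<User>())).Returns(true);

        var service = new UserService(external.Object);

        Assert.Throws<Exception>(() => service.InsertUser(new User()));
    }

    [Fact]
    public void WhenUserDoesNotExist_ShouldInsertUser()
    {
        var external = new Mock<ISomeExternal>();
        external.Setup(e => e.UserExists(It.IsAny<User>())).Returns(false);

        var service = new UserService(external.Object);

        service.InsertUser(new User()); // no exception: the insert path runs
    }
}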
In C++, I have often made a unit test class a friend of the class I am testing. I do this because I sometimes feel the need to write a unit test for a private method, or maybe I want access to some private member so I can more easily setup the state of the object so I can test it. To me this helps preserve encapsulation and abstraction because I am not modifying the public or protected interface of the class.
If I buy a third party library, I wouldn't want its public interface to be polluted with a bunch of public methods I don't need to know about simply because the vendor wanted to unit test!
Nor do I want to have to worry about a bunch of protected members that I don't need to know about if I am inheriting from a class.
That is why I say it preserves abstraction and encapsulation.
At my new job they frown on using friend classes even for unit tests. They say the class should not "know" anything about its tests, and that you do not want tight coupling between a class and its tests.
Can someone please explain these reasons to me more so that I may understand better? I just do not see why using a friend for unit tests is bad.
Ideally, you shouldn't need to unit test private methods at all. All a consumer of your class should care about is the public interface, so that's what you should test. If a private method has a bug, it should be caught by a unit test that invokes some public method on the class which eventually ends up calling the buggy private method. If a bug manages to slip by, this indicates that your test cases don't fully reflect the contract you wish your class to implement. The solution to this problem is almost certainly to test public methods with more scrutiny, not to have your test cases dig into the class's implementation details.
Again, this is the ideal case. In the real world, things may not always be so clear, and having a unit testing class be a friend of the class it tests might be acceptable, or even desirable. Still, it's probably not something you want to do all the time. If it seems to come up often enough, that might be a sign that your classes are too large and/or performing too many tasks. If so, further subdividing them by refactoring complex sets of private methods into separate classes should help remove the need for unit tests to know about implementation details.
You should consider that there are different styles and methods of testing: black-box testing only tests the public interface (treating the class as a black box). If you have an abstract base class, you can even use the same tests against all your implementations.
If you use white-box testing, you might even look at the details of the implementation: not only which private methods a class has, but also what kinds of conditional statements are included (for instance, if you want to increase your condition coverage because you know the conditions were hard to code). In white-box testing you definitely have "high coupling" between the classes/implementation and the tests, which is necessary because you want to test the implementation and not the interface.
As bcat pointed out, it's often helpful to use composition and more, but smaller, classes instead of many private methods. This simplifies white-box testing because you can more easily specify the test cases to get good coverage; see the sketch below.
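A hypothetical illustration of that refactoring: logic that would otherwise hide in a private method becomes a small class with a public, directly testable interface.

using Xunit;

// Instead of a private ComputeDiscount method buried inside an order
// processor, the logic lives in its own small class.
public class DiscountCalculator
{
    public decimal ComputeDiscount(decimal total) =>
        total > 100m ? total * 0.10m : 0m;
}

public class DiscountCalculatorTests
{
    [Fact]
    public void OrdersOverOneHundred_GetTenPercentDiscount()
    {
        Assert.Equal(15m, new DiscountCalculator().ComputeDiscount(150m));
    }
}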
I feel that bcat gave a very good answer, but I would like to expound on the exceptional case that he alludes to:
In the real world, things may not always be so clear, and having a
unit testing class be a friend of the class it tests might be
acceptable, or even desirable.
I work in a company with a large legacy codebase which has two problems, both of which contribute to making friend unit tests desirable:
We suffer from obscenely large functions and classes which require refactoring, but in order to refactor, it is helpful to have tests.
Much of our code is dependent on database access, which for various reasons should not be brought into the unit tests.
In some cases mocking is useful to alleviate the latter problem, but very often this just leads to unnecessarily complex designs (class hierarchies where none would otherwise be needed), whereas one could very simply refactor the code in the following way:
class Foo {
public:
    void some_db_accessing_method() {
        // some line(s) of code with db dependence
        // a bunch of code which is the real meat of the function
        // maybe a little more db access
    }
};
Now we have the situation where the meat of the function needs refactoring, so we'd like a unit test, but it shouldn't be exposed publicly. There's a wonderful technique called mocking that could be used in this situation, but in this case a mock is overkill: it would require me to increase the complexity of the design with an unnecessary hierarchy. A far more pragmatic approach would be to do something like this:
class Foo {
public:
    void some_db_accessing_method() {
        // db code as before
        unit_testable_meat(data_we_got_from_db);
        // maybe more db code
    }
private:
    void unit_testable_meat(...);
};
The latter gives me all of the benefits I need from unit testing, including that precious safety net to catch errors introduced when I refactor the meat of the code. In order to unit test it, I have to friend a UnitTest class, but I would strongly argue that this is far better than an otherwise useless code hierarchy just to allow me to use a mock.
I think this should become an idiom, and I think it's a suitable, pragmatic solution to increase the ROI of unit testing.
Like bcat suggested, as much as possible you should find bugs using the public interface itself. But if you want to do things like printing private variables and comparing them with expected results (helpful for developers debugging issues), then you can make the UnitTest class a friend of the class to be tested, guarded by a macro like below.
class Myclass
{
#if defined(UNIT_TEST)
    friend class UnitTest;
#endif
};
Enable the UNIT_TEST flag only when unit testing is required, for example by defining it on the compiler command line (e.g. -DUNIT_TEST with GCC/Clang). For release builds, leave the flag disabled.
I don't see anything wrong with using a friend unit testing class in many cases. Yes, decomposing a large class into smaller ones is sometimes a better way to go. I think people are a bit too hasty to dismiss using the friend keyword for something like this - it might not be ideal object oriented design, but I can sacrifice a little idealism for better test coverage if that's what I really need.
Typically you only test the public interface so that you are free to redesign and refactor the implementation. Adding test cases for private members defines a requirement and restriction on the implementation of your class.
Make the functions you want to test protected. Then, in your unit test file, create a derived class and give it public wrapper functions that call the class-under-test's protected functions.
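A minimal C# sketch of this technique (all names are hypothetical):

using Xunit;

public class Widget
{
    // Protected so production consumers can't call it, but tests can
    // reach it through a derived class.
    protected int ComputeInternal(int x) => x * 2;
}

// Lives in the test project only.
public class TestableWidget : Widget
{
    // Public wrapper that exposes the protected method to the tests.
    public int CallComputeInternal(int x) => ComputeInternal(x);
}

public class WidgetTests
{
    [Fact]
    public void ComputeInternal_DoublesItsInput()
    {
        Assert.Equal(10, new TestableWidget().CallComputeInternal(5));
    }
}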
Is it OK for one test to depend on another test having run first, or for shared state to be set up in the test fixture's constructor? For example:
// NUnit-like pseudo code (within a TestFixture)
Ctor()
{
    m_globalVar = getFoo();
}

[Test]
Create()
{
    a(m_globalVar);
}

[Test]
Delete()
{
    // depends on Create being run
    b(m_globalVar);
}

… or …

// NUnit-like pseudo code (within a TestFixture)
[Test]
CreateAndDelete()
{
    Foo foo = getFoo();
    a(foo);
    // b() runs right after a(), within the same test
    b(foo);
}
… I’m going with the latter, and assuming that the answer to my question is:
No, at least not with NUnit, because according to the NUnit manual:
The constructor should not have any side effects, since NUnit may construct the class multiple times in the course of a session.
... also, can I assume it's bad practice in general? Tests can usually be run separately, so the result of Create may never be cleaned up by Delete.
Yes, it is bad practice. In all unit test frameworks I know, the execution order of test methods is not guaranteed, thus writing tests which depend on the execution order is explicitly discouraged.
As you also noted, if test B depends on the (side) effects of test A, then one of the following holds: test A contains some common initialization code, which should be moved into a common setup method instead; or the two tests are part of the same story and could be united (IMHO; some people stick to having a single assert per test method, so they would disagree with me on this); or test B should otherwise be made totally independent of test A regarding fixture setup.
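For instance, with an NUnit-style setup method (reusing the question's pseudocode names, stubbed here so the sketch is self-contained), the shared initialization moves out of the tests and each test becomes order-independent:

using NUnit.Framework;

[TestFixture]
public class FooTests
{
    private Foo _foo;

    // [SetUp] runs before each test, so neither test depends on the
    // other having run first.
    [SetUp]
    public void CreateFixture() => _foo = getFoo();

    [Test]
    public void Create() => a(_foo);

    [Test]
    public void Delete() => b(_foo); // no longer depends on Create()

    // The question's pseudocode helpers, stubbed for completeness.
    private Foo getFoo() => new Foo();
    private void a(Foo f) => Assert.That(f, Is.Not.Null);
    private void b(Foo f) => Assert.That(f, Is.Not.Null);
}

public class Foo { }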
Definitely a bad idea. Unit tests should be lightweight, stateless, and have no dependencies on things such as the file system, registry, etc. This allows them to run quickly and makes them less brittle.
If your tests require executing in a certain order, then you can't ever be sure (at least without investigation) whether a test has failed because of execution order or a problem with the code!
This will ultimately erode confidence in your test suite and lead to its eventual abandonment.
In general, it's good practice to make each of your tests test exactly one thing or one sequence of things. (They're different types of tests, but even so.) Except when you are testing the constructor or destructor themselves, they should be done as part of the test setup and teardown code, and not the tests themselves. It's OK to be inefficient about this; the important thing with a test is that it be clear exactly what is being tested, not that you minimize the number of auxiliary actions performed during the process.
Many testing harnesses also allow you to only run a subset of tests (minimally just one). This is great for when you're focussing in on a particular bug! But it does mean that the tests need to be written so as to have no dependencies or everything will be rather meaningless.
Personally, I'd put testing of constructors and destructors earlier in my test suite than testing of behavior of the constructed instances, but YMMV.
Suppose you have a method:
public void Save(Entity data)
{
    this.repositoryIocInstance.EntitySave(data);
}
Would you write a unit test at all?
public void TestSave()
{
    // arrange
    Mock<EntityRepository> repo = new Mock<EntityRepository>();
    repo.Setup(m => m.EntitySave(It.IsAny<Entity>()));

    // act
    MyClass c = new MyClass(repo.Object);
    c.Save(new Entity());

    // assert
    repo.Verify(m => m.EntitySave(It.IsAny<Entity>()), Times.Once());
}
Because later on if you do change method's implementation to do more "complex" stuff like:
public void Save(Entity data)
{
    if (this.repositoryIocInstance.Exists(data))
    {
        this.repositoryIocInstance.Update(data);
    }
    else
    {
        this.repositoryIocInstance.Create(data);
    }
}
...your unit test would fail but it probably wouldn't break your application...
Question
Should I even bother creating unit tests for methods that don't have any return value or don't change anything outside of an internal mock?
Don't forget that unit testing isn't just about testing code. It's about allowing you to determine when behaviour changes.
So you may have something that's trivial. However, your implementation changes and you may have a side effect. You want your regression test suite to tell you.
E.g., people often say you shouldn't test setters/getters since they're trivial. I disagree: not because they're complicated methods, but because someone may inadvertently change them through ignorance, fat-finger scenarios, etc.
Given all that I've just said, I would definitely implement tests for the above (via mocking, and/or perhaps it's worth designing your classes with testability in mind and having them report status etc.)
It's true that your test depends on your implementation, which is something you should avoid (though it is not really that simple sometimes) and is not necessarily bad. But these kinds of tests are expected to break even when your change doesn't break the code.
You could have many approaches to this:
Create a test that really goes to the database and checks whether the state was changed as expected (it won't be a unit test anymore).
Create a test object that fakes the database and does the operations in memory (another implementation of your repositoryIocInstance), and verify the state was changed as expected; see the sketch after this list. Changes to the repository interface would incur changes to this object as well, but your interfaces shouldn't be changing much, right?
See all of this as too expensive, and use your approach, which may result in unnecessarily breaking tests later (but since the chance of that is low, it is OK to take the risk).
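Here is a sketch of the second option; the IEntityRepository interface is an assumption (the question's code mocks a repository class directly), and MyClass mirrors the "complex" Save from the question:

using System.Collections.Generic;
using Xunit;

public class Entity { }

public interface IEntityRepository
{
    bool Exists(Entity data);
    void Create(Entity data);
    void Update(Entity data);
}

// A fake: a real, in-memory implementation rather than a mock with
// per-test expectations.
public class InMemoryEntityRepository : IEntityRepository
{
    private readonly HashSet<Entity> _store = new HashSet<Entity>();

    public bool Exists(Entity data) => _store.Contains(data);
    public void Create(Entity data) => _store.Add(data);
    public void Update(Entity data) { /* nothing extra to do in memory */ }
}

public class MyClass
{
    private readonly IEntityRepository _repo;

    public MyClass(IEntityRepository repo) => _repo = repo;

    public void Save(Entity data)
    {
        if (_repo.Exists(data)) _repo.Update(data);
        else _repo.Create(data);
    }
}

public class SaveTests
{
    [Fact]
    public void Save_PersistsANewEntity()
    {
        var repo = new InMemoryEntityRepository();
        var sut = new MyClass(repo);
        var entity = new Entity();

        sut.Save(entity);

        // State-based assertion: check the outcome, not the calls.
        Assert.True(repo.Exists(entity));
    }
}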
Ask yourself two questions. "What is the manual equivalent of this unit test?" and "is it worth automating?". In your case it would be something like:
What is the manual equivalent?
- start debugger
- step into "Save" method
- step into next, make sure you're inside IRepository.EntitySave implementation
Is it worth automating? My answer is "no". It is 100% obvious from the code.
Of the hundreds of similar wasteful tests I've seen, not a single one turned out to be useful.
The general rule of thumb is that you test everything that could plausibly break. If you are sure that the method is simple enough (and stays simple enough) not to be a problem, then leave it out of testing.
The second thing is that you should test the contract of the method, not the implementation. If the test fails after a change but the application doesn't, then your test is testing the wrong thing. The tests should cover the cases that are important for your application; this ensures that every change to the method that doesn't break the application doesn't fail the test either.
A method that does not return any result still changes the state of your application. Your unit test, in this case, should be testing whether the new state is as intended.
"your unit test would fail but it probably wouldn't break your application"
This is -- actually -- really important to know. It may seem annoying and trivial, but when someone else starts maintaining your code, they may have made a really bad change to Save and (improbably) broken the application.
The trick is to prioritize.
Test the important stuff first. When things are slow, add tests for trivial stuff.
When there isn't an assertion in a method, you are essentially asserting that exceptions aren't thrown.
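With NUnit, for example, that implicit claim can be stated explicitly (the class here is hypothetical):

using NUnit.Framework;

public class SomeSideEffectingClass
{
    public void MyMethod() { /* side effects only, no return value */ }
}

public class NoThrowTests
{
    [Test]
    public void MyMethod_CompletesWithoutThrowing()
    {
        var sut = new SomeSideEffectingClass();
        // Makes the implicit "no exception is thrown" claim visible.
        Assert.DoesNotThrow(() => sut.MyMethod());
    }
}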
I'm also struggling with the question of how to test public void myMethod(). I guess if you do decide to add a return value for testability, the return value should represent all salient facts necessary to see what changed about the state of the application.
public void myMethod()

becomes

public ComplexObject myMethod()
{
    DoLotsOfSideEffects();
    // return all the salient facts: rows changed, primary key,
    // value of each column, etc.
    return new ComplexObject { /* ... */ };
}

and not

public bool myMethod()
{
    DoLotsOfSideEffects();
    return true;
}
The short answer to your question is: Yes, you should definitely test methods like that.
I assume that it is important that the Save method actually saves the data. If you don't write a unit test for this, then how do you know?
Someone else may come along and remove that line of code that invokes the EntitySave method, and none of the unit tests will fail. Later on, you are wondering why items are never persisted...
In the case of your method, you could say that anyone deleting that line would only do so with malign intent, but the thing is: simple things don't necessarily stay simple, and you'd better write the unit tests before things get complicated.
It is not an implementation detail that the Save method invokes EntitySave on the Repository - it is part of the expected behavior, and a pretty crucial part, if I may say so. You want to make sure that data is actually being saved.
Just because a method does not return a value doesn't mean that it isn't worth testing. In general, if you observe good Command/Query Separation (CQS), any void method should be expected to change the state of something.
Sometimes that something is the class itself, but other times, it may be the state of something else. In this case, it changes the state of the Repository, and that is what you should be testing.
This is called testing Indirect Outputs, as opposed to the more usual direct outputs (return values).
The trick is to write unit tests so that they don't break too often. When using mocks, it is easy to accidentally write Overspecified Tests, which is why most dynamic mock libraries (like Moq) default to a stub mode, where it doesn't really matter how many times you invoke a given method.
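A small sketch of what that default buys you with Moq (ILogger is a hypothetical dependency):

using Moq;

public interface ILogger
{
    void Log(string message);
}

public static class MockModes
{
    public static void Demonstrate()
    {
        // Loose (the default "stub" mode): calls that were never set up
        // are silently accepted, so incidental interactions don't break
        // the test.
        var loose = new Mock<ILogger>();
        loose.Object.Log("anything"); // fine

        // Strict: every interaction must be set up in advance, which
        // easily overspecifies the test.
        var strict = new Mock<ILogger>(MockBehavior.Strict);
        // strict.Object.Log("anything"); // would throw a MockException
    }
}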
All this, and much more, is explained in the excellent xUnit Test Patterns.