Create a single shared Mock Object or one per Unit Test - unit-testing

I'm currently broadening my unit testing by utilising mock objects (NSubstitute in this particular case). However, I'm wondering what the current wisdom is when creating mock objects. For instance, I'm working with an object that contains various routines to grab and process data - no biggie here, but it will be utilised in a fair number of tests.
Should I create a shared function that returns the mock object with all the appropriate methods and behaviours mocked for most of the testing project, and call that object into my unit tests? Or should I mock the object in every unit test, only mocking the behaviour I need for that test (although there will be times I'll be mocking the same behaviour on more than one occasion)?
Thoughts or advice is gratefully received...

I'm not sure if there is an agreed "current wisdom" on this, but here's my 2 cents.
First, as @codebox pointed out, re-creating your mocks for each unit test is a good idea, as you want your unit tests to run independently of each other. Doing otherwise can result in tests that pass when run together but fail when run in isolation (or vice versa). Creating the mocks required for tests is commonly done in test setup ([SetUp] in NUnit, the constructor in xUnit.net), so each test gets a newly created mock.
In terms of configuring these mocks, it depends on the situation and how you test. My preference is to configure them in each test with the minimum amount of configuration necessary. This is a good way of communicating exactly what that test requires of its dependencies. There is nothing wrong with some duplication in these cases.
If a number of tests require the same configuration, I would consider using a scenario-based test fixture (link disclaimer: shameless self-promotion). A scenario could be something like When_the_service_is_unavailable, and the setup for that scenario could configure the mocked service to throw an exception or return an error code. Each test then makes assertions based on that common configuration/scenario (e.g. should display error message, should send email to admin etc).
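To make the scenario idea concrete, here is a minimal sketch in Python's unittest with unittest.mock; the StatusPage class, its service and mailer collaborators, and the method names are all hypothetical, invented only to illustrate the When_the_service_is_unavailable shape.

    import unittest
    from unittest.mock import Mock


    class StatusPage:
        """Tiny example class under test: renders a status string and alerts an admin."""
        def __init__(self, service, mailer):
            self.service = service
            self.mailer = mailer

        def render(self):
            try:
                return "Status: " + self.service.get_status()
            except ConnectionError:
                self.mailer.send_to_admin("service unavailable")
                return "Error: service unavailable"


    class WhenTheServiceIsUnavailable(unittest.TestCase):
        def setUp(self):
            # Common scenario configuration: the mocked service always fails.
            self.service = Mock()
            self.service.get_status.side_effect = ConnectionError
            self.mailer = Mock()
            self.page = StatusPage(self.service, self.mailer)

        def test_should_display_error_message(self):
            self.assertEqual("Error: service unavailable", self.page.render())

        def test_should_send_email_to_admin(self):
            self.page.render()
            self.mailer.send_to_admin.assert_called_once_with("service unavailable")


    if __name__ == "__main__":
        unittest.main()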
Another option if you have lots of duplicated bits of configuration is to use a Test Data Builder. This gives you reusable ways of configuring a number of different aspects of your mock or any other test data.
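As a rough illustration (not from the original post), a Test Data Builder in Python might look like the sketch below; the Customer class and its fields are made up for the example.

    class Customer:
        def __init__(self, name, country, is_premium):
            self.name = name
            self.country = country
            self.is_premium = is_premium


    class CustomerBuilder:
        """Builds Customers with sensible defaults; tests override only what matters."""
        def __init__(self):
            self._name = "Jane Doe"
            self._country = "GB"
            self._is_premium = False

        def named(self, name):
            self._name = name
            return self

        def from_country(self, country):
            self._country = country
            return self

        def premium(self):
            self._is_premium = True
            return self

        def build(self):
            return Customer(self._name, self._country, self._is_premium)


    # Usage: only the detail the test cares about is spelled out.
    premium_customer = CustomerBuilder().premium().build()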
Finally, if you're finding a large amount of configuration is required it might be worth considering changing the interface of the test dependency to be less "chatty". By looking for a valid abstraction that reduces the number of calls required by the class under test you'll have less to configure in your tests, and have a nice encapsulation of the responsibilities on which that class depends.
It is worth experimenting with a few different approaches and seeing what works for you. Any removal of duplication needs to be balanced with keeping each test case independent, simple, maintainable and reliable. If you find you have a large number of tests failing for small changes, or that you can't figure out the configuration an individual test needs, or that tests fail depending on the order in which they are run, then you'll want to refine your approach.

I would create new mocks for each test - if you re-use them you may get unexpected behaviour where the state of the mock from earlier tests affects the outcome of later tests.

It's hard to provide a general answer without looking at a specific case.
I'd stick with the same approach as I do everywhere else: first look at the tests as independent beings, then look for similarities and extract the common part out.
Your goal here is to follow DRY, so that your tests are maintainable in case the requirements change.
So...
If it's obvious that every test in a group is going to use the same mock behaviour, provide it in your common set-up
If each of them is significantly different, as in: the content of the mock constitutes a significant part of what you're testing and the test/mock relationship looks like 1:1, then it's reasonable to keep them close to the tests
If the mocks differ between tests, but only to some degree, you still want to avoid redundancy. A common SetUp won't help you, but you may want to introduce a utility like PrepareMock(args...) that covers the different cases (see the sketch after this list). This will keep your actual test methods free of repetitive set-up, but still let you introduce any degree of difference between them.
The tests look nice when you extract all similarities upwards (to a SetUp or helper methods) so that the only thing that remains in test methods is what's different between them.
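A sketch of the PrepareMock(args...) idea, using Python's unittest.mock for illustration; the repository mock and its parameters are hypothetical.

    from unittest.mock import Mock


    def prepare_repository_mock(existing_users=(), fail_on_save=False):
        """Builds a repository mock that differs only in the details each test cares about."""
        repo = Mock()
        repo.find_all.return_value = list(existing_users)
        if fail_on_save:
            repo.save.side_effect = IOError("disk full")
        return repo


    # In a test that only cares about the failure path:
    repo = prepare_repository_mock(fail_on_save=True)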

Related

Unit testing of a very dynamic web application?

My business web application (PHP with HTML/JavaScript) has lots of very different options (about 1000) which are stored in the database, so the users can change them themselves. These options define, for example, whether a button, tab or input field is visible, the validation of inputs, and the workflow, such as when e-mails should be sent. Each user has a user role, which also defines what they're able to see and do.
My users can use any combination of these options, so I find it very difficult to write tests for all these situations. I have 100+ customers, so writing tests for each customer is definitely not an option.
The problem is that some options work together. So while testing one option it's necessary to know the value of some other options. Ideally the tests should also be able to read the options-profiles for each customer. But that would almost be like rewriting the whole application, just for testing, which seems error-prone by itself.
Is it common in unit testing to read the database to get the test-data and options, or is that not a good idea?
How would you handle the situation I described?
First of all, yes, that's perfectly possible - although writing unit tests after the application is already written isn't recommended and is extremely difficult.
Here is some advice for your case:
Data Providers
Data providers make it possible to call the same test with different parameters, which prevents code duplication in your tests. They are perfect if you want to test the same method with different configurations.
https://phpunit.de/manual/3.7/en/writing-tests-for-phpunit.html#writing-tests-for-phpunit.data-providers
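For readers not using PHPUnit, the same data-provider idea can be sketched with Python's unittest and subTest (pytest.mark.parametrize is similar); validate_quantity is a made-up stand-in for the method under test.

    import unittest


    def validate_quantity(value):
        return isinstance(value, int) and 0 < value <= 100


    class QuantityValidationTest(unittest.TestCase):
        def test_quantity_validation(self):
            cases = [          # each tuple plays the role of one data-provider row
                (1, True),
                (100, True),
                (0, False),
                (101, False),
                ("ten", False),
            ]
            for value, expected in cases:
                with self.subTest(value=value):
                    self.assertEqual(expected, validate_quantity(value))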
Mock objects
If objects depend on other objects, you use mock objects. Mocking an object is basically nothing more than creating a dummy object which has a defined behavior and won't do anything other than what you told it to do.
Note that you can also mock the tested class itself! A mock will keep the methods of the mocked class by default, so you can mock the class you want to test and define a specific behavior for some methods while testing another.
If this is still not enough, you might want to think about splitting up your methods into smaller, more specific methods to get smaller units.
https://phpunit.de/manual/3.7/en/test-doubles.html
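A rough Python analogue of mocking the tested class itself is patching one of its methods while exercising another; the OrderProcessor class below is hypothetical.

    import unittest
    from unittest.mock import patch


    class OrderProcessor:
        def load_options(self):
            raise RuntimeError("would hit the database in production")

        def is_email_enabled(self):
            return self.load_options().get("send_email", False)


    class OrderProcessorTest(unittest.TestCase):
        def test_email_flag_read_from_options(self):
            # Patch one method of the class under test, then exercise another.
            with patch.object(OrderProcessor, "load_options",
                              return_value={"send_email": True}):
                self.assertTrue(OrderProcessor().is_email_enabled())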
Keep it small
Unit tests are called unit tests because they test the smallest possible unit of your code without executing anything else. So instead of testing whether a button behaves the way it should, just test whether it's visible when it should be. Test one kind of behavior and nothing more.
Don't read the database
It is highly unusual to read the database when writing unit tests, and it's even more unusual to use actual user data. Instead, you define test data. Rather than testing your users' configurations, you should test every possible configuration.
Code Coverage
A decent way to check whether your code is covered by the tests is code coverage. It shows you how much, and which, code was executed by the tests. Note that 100% coverage does not mean full coverage in reality, especially in your case: just because all lines of code were executed does not mean every option was considered. But it's a handy tool anyway, and you can see which code you are done with and which you forgot about.
https://phpunit.de/manual/current/en/code-coverage-analysis.html
Conclusion:
What you are trying to do is error-prone itself, yes, because usually you'd write all your tests before writing the actual method. You will probably write more test code than the application itself has, but that's not uncommon.

Learning About Unit Testing Using When and Should and TDD

The tests at my new job are nothing like the tests I have encountered before.
When they're writing their unit tests (presumably before the code), they create a class starting with "When". The name describes the scenario under which the tests will run (the fixture). They'll create subclasses for each branch through the code. All of the tests within the class start with "should" and they test different aspects of the code after running. So, they will have a method for verifying that each mock (DOC) is called correctly and for checking the return value, if applicable. I am a little confused by this method because it means the exact same execution code is being run for each test, and this seems wasteful. I was wondering if there is a technique similar to this that they may have adapted. A link explaining the style and how it is supposed to be implemented would be great. It sounds similar to some approaches to BDD I've seen.
I also noticed that they've moved the repeated calls to "execute" the SUT into the setup methods. This causes issues when they are expecting exceptions, because they can't use built-in tools for performing the check (Python unittest's assertRaises). This also means storing the return value as a backing field of the test class. They also have to store many of the mocks as backing fields. Across class hierarchies it becomes difficult to tell the configuration of each mock.
They also test code a little differently. It really comes down to what they consider an integration test. They mock out anything that steals the context away from the function being tested. This can mean private methods within the same class. I have always limited mocking to resources that can affect the results of the test, such as databases, the file system or dates. I can see some value in this approach. However, the way it is being used now, I can see it leading to fragile tests (tests that break with every code change). I get concerned because without an integration test, in this case, you could be using a 3rd party API incorrectly but your unit tests would still pass. I'd like to learn more about this approach as well.
So, any resources for learning more about some of these approaches would be nice. I'd hate to pass up a great learning opportunity just because I don't understand the way they are doing things. I would also like to stop focusing on the negatives of these approaches and see where the benefits come in.
If I understood your explanation in the first paragraph correctly, that's quite similar to what I often do (depending on whether the testing framework makes it easy or not; also, many mocking frameworks don't support it, but spy frameworks like Mockito do better).
For example, see the stack example here, which has a common setup (adding things to the stack) and then a bunch of independent tests which each check one thing. Here's another example, this time one where none of the tests (@Test) modify the common fixture (@Before), but each of them focuses on checking just one independent thing that should happen. If the tests are very well focused, then it should be possible to change the production code to make any single test fail while all other tests pass (I wrote about that recently in Unit Test Focus Isolation).
The main idea is to have each test check a single feature/behavior, so that when tests fail it's easier to find out why it failed. See this TDD tutorial for more examples and to learn that style.
I'm not worried about the same code paths being executed multiple times when it takes a millisecond to run one test (if it takes more than a couple of seconds to run all unit tests, the tests are probably too big). From your explanation, I'm more worried that the tests might be too tightly coupled to the implementation instead of the feature, if there is systematically one test per mock. The name of the test is a good indicator of how well structured or how fragile the tests are - does it describe a feature, or how that feature is implemented?
About mocking, a good book to read is Growing Object-Oriented Software Guided by Tests. One should not mock 3rd party APIs (APIs which you don't own and can't modify), for the reason you already mentioned, but one should create an abstraction over it which better fits the needs of the system using it and works the way you want it. That abstraction needs to be integration tested with the 3rd party API, but in all tests using the abstraction you can mock it.
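A minimal sketch of that abstraction idea, assuming a hypothetical PaymentGateway interface wrapping a vendor HTTP API (the endpoint and field names are invented): the thin adapter is integration tested against the real API, while everything else depends on, and mocks, the narrow interface.

    import requests


    class PaymentGateway:
        """The narrow interface the rest of the system depends on (and mocks)."""
        def charge(self, amount_cents, token):
            raise NotImplementedError


    class HttpPaymentGateway(PaymentGateway):
        """Thin adapter over the vendor API; covered by integration tests only."""
        def __init__(self, base_url):
            self.base_url = base_url

        def charge(self, amount_cents, token):
            response = requests.post(
                self.base_url + "/charges",
                json={"amount": amount_cents, "source": token},
                timeout=5,
            )
            response.raise_for_status()
            return response.json()["id"]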
First, the pattern that you are using is based on Cucumber - here's a link. The style is from the BDD (Behavior-driven development) approach. It has two advantages over traditional TDD:
Language - one of the tenets of BDD is that the language you use influences the way you think; by forcing you to speak in the language of the end user, you will end up writing different tests than you would from the perspective of a programmer.
Tests lock code - BDD locks the code at the appropriate level. One common problem in testing is that writing a large number of tests makes your codebase more brittle, because when you change the code you must also change a large number of tests. BDD forces you to lock in the behavior of your code rather than its implementation. This way, when a test breaks, it is more likely to be meaningful.
It is worth noting that you do not have to use the Cucumber style of testing to achieve these effects, and using it does add an extra layer of overhead. But very few programmers have been successful in keeping the BDD mindset while using traditional xUnit tools (TDD).
It also sounds like you have some scenarios where you would like to say 'When I do X, then verify Y'. Because the current BDD xUnit frameworks only allow you to verify primitives (strings, ints, doubles, booleans...), this usually results in a large number of individual tests (one for each assert). It is possible to do more complicated verifications using a Golden Master paradigm test tool, such as ApprovalTests. Here's a video example of this.
Finally, here's a link to Dan North's blog - he started it all.

When is it appropriate to do interaction based testing as opposed to state based testing?

When I use EasyMock (or a similar mocking framework) to implement my unit tests, I'm forced to do interaction-based testing (as I don't get to assert on the state of my dependencies - or am I mistaken?).
On the other hand, if I use a hand-written stub (instead of using EasyMock) I can implement state-based testing.
I'm quite unclear on whether I want to go with interaction-based testing or state-based testing.
I'm biased and I want to use EasyMock, but I'm not sure if there would be any side effects that I may have to face in the future.
Can anyone please throw some light on this?
Thanks in advance!
You have to divide your objects into domainy value objects (which hold state and should be immutable) and services. Services are the things other objects should ask to perform a particular task, but your code shouldn't be concerned about how this task is performed. To test the service in isolation without testing its peers, use a mock.
Value objects, which may contain domain functionality such as calculations, should never be mocked, because their responsibility is calculating and not delegating.
In a well designed system, services should always be injected and never returned from other services, so generally speaking, mocks shouldn't return mocks.
There is no reason you cannot do both. I find behavior-based or interaction-based testing using mocks saves a lot of boilerplate when all you want to do is test behavior. With hand-written stubs you end up with a lot of booleans indicating that a method was called that you have to then test for. That is redundant, brittle and quite a drag.
On the other hand, sometimes you do want to test state - for example, when the behavior of the object under test needs to change based on the state of the candidate for mocking or stubbing, and there is some complex interaction to work out.
In that case, mocking frameworks can get in the way, and a hand-written stub makes managing the state much easier for the purposes of the test.
So the bottom line is that they are not mutually exclusive - use what makes sense for a given test. As long as each test is small and tests only one thing (as much as is reasonable), you shouldn't find yourself in a situation where you started with a mock and suddenly find you have to expend a bunch of effort to get things back to a stub.
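To illustrate the two styles side by side, here is a small sketch using Python's unittest.mock; the Greeter and Printer names are invented for the example.

    from unittest.mock import Mock


    class Greeter:
        def __init__(self, printer):
            self.printer = printer

        def greet(self, name):
            self.printer.write("Hello, " + name)


    # Interaction-based: assert the collaborator was called as expected.
    printer = Mock()
    Greeter(printer).greet("Ada")
    printer.write.assert_called_once_with("Hello, Ada")


    # State-based: a hand-written stub records state that the test inspects.
    class FakePrinter:
        def __init__(self):
            self.lines = []

        def write(self, text):
            self.lines.append(text)

    fake = FakePrinter()
    Greeter(fake).greet("Ada")
    assert fake.lines == ["Hello, Ada"]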

Do setup/teardown hurt test maintainability?

This seemed to spark a bit of conversation on another question and I
thought it worthy to spin into its own question.
The DRY principle seems to be our weapon-of-choice for fighting maintenance
problems, but what about the maintenance of test code? Do the same rules of thumb
apply?
A few strong voices in the developer testing community are of the opinion that
setup and teardown are harmful and should be avoided... to name a few:
James Newkirk
Jay Fields, [2]
In fact, xUnit.net has removed them from the framework altogether for this very reason
(though there are ways to get around this self-imposed limitation).
What has been your experience? Do setup/teardown hurt or help test maintainability?
UPDATE: do more fine-grained constructs like those available in JUnit4 or TestNG (@BeforeClass, @BeforeGroups, etc.) make a difference?
The majority (if not all) of the valid uses for setup and teardown methods can be written as factory methods, which allows for DRY without getting into the issues that seem to plague the setup/teardown paradigm.
If you're implementing teardown, typically this means you're not doing a unit test, but rather an integration test. A lot of people use this as a reason not to have a teardown, but IMO there should be both integration and unit tests. I would personally put them in separate assemblies, but I think a good testing framework should be able to support both types of testing. Not all good testing is going to be unit testing.
However, with the setup there seems to be a number of reasons why you need to do things before a test is actually run. For example, construction of object state to prep for the test (for instance setting up a Dependency Injection framework). This is a valid reason for a setup, but could just as easily be done with a factory.
Also, there is a distinction between class and method level setup/teardown. That needs to be kept in mind when considering what you're trying to do.
My biggest problem with using the setup/teardown paradigm is that my tests don't always follow the same pattern. This has brought me to using factory patterns instead, which allows me to have DRY while at the same time being readable and not at all confusing to other developers. Going the factory route, I've been able to have my cake and eat it too.
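A small sketch of the factory-method approach, using Python and unittest.mock for illustration; AccountService and its gateway are hypothetical.

    import unittest
    from unittest.mock import Mock


    class AccountService:
        def __init__(self, gateway):
            self.gateway = gateway

        def is_overdrawn(self):
            return self.gateway.current_balance() < 0


    def make_service(balance=0):
        """Factory method: builds the SUT with exactly the state a test asks for."""
        gateway = Mock()
        gateway.current_balance.return_value = balance
        return AccountService(gateway)


    class AccountServiceTest(unittest.TestCase):
        def test_overdrawn_when_balance_negative(self):
            self.assertTrue(make_service(balance=-5).is_overdrawn())

        def test_not_overdrawn_when_balance_positive(self):
            self.assertFalse(make_service(balance=100).is_overdrawn())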
They've really helped with our test maintainability. Our "unit" tests are actually full end-to-end integration tests that write to the DB and check the results. Not my fault, they were like that when I got here, and I'm working to change things.
Anyway, if one test failed, it went on to the next one, trying to enter the same user from the first test in the DB, violating a uniqueness constraint, and the failures just cascaded from there. Moving the user creation/deletion into the [Fixture][SetUp|TearDown] methods allowed us to see the one test that failed without everything going haywire, and made my life a lot easier and less stabby.
I think the DRY principle applies just as much to tests as it does to code, but its application is different. In code you go to much greater lengths to literally not do the same thing in two different parts of the code. In tests the need to do that (a lot of the same setup) is certainly a smell, but the solution is not necessarily to factor out the duplication into a setup method. It may be to make the state easier to set up in the class itself, or to isolate the code under test so it is less dependent on this amount of state to be meaningful.
Given the general goal of only testing one thing per test, it really isn't possible to avoid doing a lot of the same thing over and over again in certain cases (such as creating an object of a certain type). If you find you have a lot of that, it may be worth rethinking the test approach, such as introducing parametrized tests and the like.
I think setup and teardown should be primarily for establishing the environment (such as injections to make the environment a test one rather than a production one), and should not contain steps that are part and parcel of the test.
I agree with everything Joseph has to say, especially the part about tearDown being a sign of writing integration tests (and 99% of the time that's what I've used it for), but in addition to that I'd say that the use of setup is a good indicator of when tests should be logically grouped together and when they should be split into multiple test classes.
I have no problem with large setup methods when applying tests to legacy code, but the setup should be common to every test in the suite. When you find yourself having the setup method really doing multiple bits of setup, then it's time to split your tests into multiple cases.
Following the examples in "Test Driven", the setup method comes about from removing duplication in the test cases.
I use setup quite frequently in Java and Python, often to set up collaborators (either real or test doubles, depending). If the object under test has no constructor arguments, or takes just the collaborators as constructor arguments, I will create it there. For a simple value class I usually don't bother.
I use teardown very infrequently in Java. In Python it was used more often, because I was more likely to change global state (in particular, monkey patching modules to get users of those modules under test). In that case I want a teardown that is guaranteed to be called even if a test fails.
Integration tests and functional tests (which often use the xunit framework) are more likely to need setup and teardown.
The point to remember is to think about fixtures, not only DRY.
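As a sketch of the "teardown guaranteed to run" point when monkey patching global state, Python's unittest.mock.patch plus addCleanup undoes the patch even when the test fails; the patched clock below is just an example.

    import unittest
    from unittest.mock import patch

    import time  # the module whose global behaviour we temporarily change


    class ClockTest(unittest.TestCase):
        def setUp(self):
            patcher = patch.object(time, "time", return_value=1_000_000.0)
            patcher.start()
            self.addCleanup(patcher.stop)   # teardown runs even on failure

        def test_uses_patched_clock(self):
            self.assertEqual(1_000_000.0, time.time())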
I don't have an issue with test setup and teardown methods per se.
The issue to me is that if you have a test setup and teardown method, it implies that the same test object is being reused for each test. This is a potential error vector, as if you forget to clean up some element of state between tests, your test results can become order-dependent. What we really want is tests that do not share any state.
xUnit.net gets rid of setup/teardown because it creates a new test-class object for each test that is run. In essence, the constructor becomes the setup method, and IDisposable.Dispose becomes the teardown method. There's no (object-level) state held between tests, eliminating this potential error vector.
Most tests that I write have some amount of setup, even if it's just creating the mocks I need and wiring the object being tested up to the mocks. What they don't do is share any state between tests. Teardown is just making sure that I don't share that state.
I haven't had time to read both of the things you posted, but I particularly liked this comment:
each test is forced to do the initialization for what it needs to run.
Setup and tear down are convenience methods - they shouldn't attempt to do much more than initialize a class using its default constructor, etc. Common code that three tests need in a five-test class shouldn't appear there - each of the three tests should call this code directly. This also keeps tests from stepping on each other's toes and breaking a bunch of tests just because you changed a common initialization routine. The main problem is that setup is called before every test, not just the specific tests that need it. Most tests should be simple, and the more complex ones will need initialization code, but it is easier to see the simplicity of the simple tests when you don't have to trace through a complex initialization in setup and complex destruction in teardown while thinking about what the test is actually supposed to accomplish.
Personally, I've found setup and teardown aren't always evil, and that this line of reasoning is a bit dogmatic. But I have no problem calling them a
code smell for unit tests. I feel their use should be justified, for a few reasons:
Test code is procedural by its nature. In general, setup/teardown do tend to reduce test readability/focus.
Setup methods tend to initialize more than what is needed for any single test. When abused they can become unwieldy. Object Mothers, Test Data Builders, perhaps frameworks like FactoryGirl seem better at initializing test data.
They encourage "context bloat" - the larger the test context becomes, the less maintainable it will be.
To the extent that my setup/teardown doesn't do this, I think their use is warranted. There will always be some duplication in tests. Neal Ford states this as "Tests can be wet but not soaking..." Also, I think their use is more justified when we're not talking about unit tests specifically, but integration tests more broadly.
Working on my own, this has never really been a problem. But I've found it very difficult to maintain test suites in a team setting, and it tends to be because we don't understand each other's code immediately, or don't want to have to step through it to understand it. From a test perspective, I've found allowing some duplication in tests eases this burden.
I'd love to hear how others feel about this, though.
If you need setup and teardown to make your unit tests work, maybe what you really need is mock objects?

Mocks or real classes? [duplicate]

Classes that use other classes (as members, or as arguments to methods) need instances that behave properly for unit test. If you have these classes available and they introduce no additional dependencies, isn't it better to use the real thing instead of a mock?
I say use real classes whenever you can.
I'm a big believer in expanding the boundaries of "unit" tests as much as possible. At this point they aren't really unit tests in the traditional sense, but rather just an automated regression suite for your application. I still practice TDD and write all my tests first, but my tests are a little bigger than most people's and my green-red-green cycles take a little longer. But now that I've been doing this for a little while I'm completely convinced that unit tests in the traditional sense aren't all they're cracked up to be.
In my experience writing a bunch of tiny unit tests ends up being an impediment to refactoring in the future. If I have a class A that uses B and I unit test it by mocking out B, when I decide to move some functionality from A to B or vice versa all of my tests and mocks have to change. Now if I have tests that verify that the end to end flow through the system works as expected then my tests actually help me to identify places where my refactorings might have caused a change in the external behavior of the system.
The bottom line is that mocks codify the contract of a particular class and often end up actually specifying some of the implementation details too. If you use mocks extensively throughout your test suite your code base ends up with a lot of extra inertia that will resist any future refactoring efforts.
It is fine to use the "real thing" as long as you have absolute control over the object. For example if you have an object that just has properties and accessors you're probably fine. If there is logic in the object you want to use, you could run into problems.
If a unit test for class A uses an instance of class B, and a change introduced to B breaks B, then the tests for class A are also broken. This is where you can run into problems, whereas with a mock object you could always return the correct value. Using "the real thing" can convolute tests and hide the real problem.
Mocks can have downsides too; I think there is a balance between some mocks and some real objects that you will have to find for yourself.
There is one really good reason why you want to use stubs/mocks instead of real classes: to keep the class under test isolated from everything else in a pure unit test. This property is extremely useful, and the benefits of keeping tests isolated are plentiful:
Tests run faster because they don't need to call the real class implementation. If the implementation runs against the file system or a relational database, the tests will become sluggish. Slow tests make developers run unit tests less often. If you're doing Test-Driven Development, then time-hogging tests are a devastating waste of developers' time.
It is easier to track down problems if the test is isolated to the class under test. In a system test, by contrast, it is much more difficult to track down nasty bugs that are not readily visible in stack traces and the like.
Tests are less fragile with respect to changes in external classes/interfaces, because you're purely testing the class under test. Low fragility is also an indication of low coupling, which is good software engineering.
You're testing against the external behaviour of a class rather than the internal implementation, which is more useful when deciding on code design.
Now, if you want to use a real class in your test, that's fine, but then it is NOT a unit test. You're doing an integration test instead, which is useful for validating requirements and doing an overall sanity check. Integration tests are not run as often as unit tests - in practice they are mostly run before committing to your favorite code repository - but they are equally important.
The only thing you need to have in mind is the following:
Mocks and stubs are for unit tests.
Real classes are for integration/system tests.
Extracted and extended from an answer of mine to "How do I unit-test inheriting objects?":
You should always use real objects where possible.
You should only use mock objects if the real objects do something you don't want to set up (like using sockets, serial ports, getting user input, retrieving bulky data, etc.). Essentially, mock objects are for when the estimated effort to implement and maintain a test using a real object is greater than that to implement and maintain a test using a mock object.
I don't buy into the "dependent test failure" argument. If a test fails because a depended-on class broke, the test did exactly what it should have done. This is not a smell! If a depended-on interface changes, I want to know!
Highly mocked testing environments are very high-maintenance, particularly early in a project when interfaces are in flux. I've always found it better to start integration testing ASAP.
I always use a mock version of a dependency if the dependency accesses an external system like a database or web service.
If that isn't the case, then it depends on the complexity of the two objects. Testing the object under test with the real dependency is essentially multiplying the two sets of complexities. Mocking out the dependency lets me isolate the object under test. If either object is reasonably simple, then the combined complexity is still workable and I don't need a mock version.
As others have said, defining an interface on the dependency and injecting it into the object under test makes it much easier to mock out.
Personally, I'm undecided about whether it's worth it to use strict mocks and validate every call to the dependency. I usually do, but it's mostly habit.
You may also find these related questions helpful:
What is object mocking and when do I need it?
When should I mock?
How are mocks meant to be used?
And perhaps even, Is it just me, or are interfaces overused?
Use the real thing only if it has been unit tested itself first. If it introduces dependencies that prevent that (circular dependencies or if it requires certain other measures to be in place first) then use a 'mock' class (typically referred to as a "stub" object).
If your 'real things' are simply value objects like JavaBeans, then that's fine.
For anything more complex I would worry, as mocks generated by mocking frameworks can be given precise expectations about how they will be used, e.g. the number of method calls, the precise sequence, and the parameters expected each time. Your real objects cannot do this for you, so you risk losing depth in your tests.
I've been very leery of mocked objects since I've been bitten by them a number of times. They're great when you want isolated unit tests, but they have a couple of issues. The major issue is that if the Order class needs a collection of OrderItem objects and you mock them, it's almost impossible to verify that the behavior of the mocked OrderItem class matches the real-world example (duplicating the methods with appropriate signatures is generally not enough). More than once I've seen systems fail because the mocked classes don't match the real ones and there weren't enough integration tests in place to catch the edge cases.
I generally program in dynamic languages and I prefer merely overriding the specific methods which are problematic. Unfortunately, this is sometimes hard to do in static languages. The downside of this approach is that you're using integration tests rather than unit tests and bugs are sometimes harder to track down. The upside is that you're using the actual code that is written, rather than a mocked version of that code.
If you don't care about verifying expectations on how your UnitUnderTest should interact with the Thing, and interactions with the RealThing have no other side effects (or you can mock those away), then in my opinion it is perfectly fine to just let your UnitUnderTest use the RealThing.
That the test then covers more of your code base is a bonus.
I generally find it is easy to tell when I should use a ThingMock instead of a RealThing:
When I want to verify expectations in the interaction with the Thing.
When using the RealThing would bring unwanted side-effects.
Or when the RealThing is simply too hard/troublesome to use in a test setting.
If you write your code in terms of interfaces, then unit testing becomes a joy because you can simply inject a fake version of any class into the class you are testing.
For example, if your database server is down for whatever reason, you can still conduct unit testing by writing a fake data access class that contains some cooked data stored in memory in a hash map or something.
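A minimal sketch of such a fake data access class in Python; the InMemoryUserRepository name and its get/save methods are assumptions for the example.

    class InMemoryUserRepository:
        """Stands in for the real database-backed repository during tests."""
        def __init__(self, seed=None):
            self._users = dict(seed or {})   # "cooked" data lives in a plain dict

        def get(self, user_id):
            return self._users.get(user_id)

        def save(self, user_id, user):
            self._users[user_id] = user


    # Usage in a test: inject the fake instead of the real data access class.
    repo = InMemoryUserRepository(seed={1: {"name": "Ada"}})
    assert repo.get(1)["name"] == "Ada"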
It depends on your coding style, what you are doing, your experience and other things.
Given all that, there's nothing stopping you from using both.
I know I use the term unit test way too often. Much of what I do might be better called integration test, but better still is to just think of it as testing.
So I suggest using all the testing techniques where they fit. The overall aim being to test well, take little time doing it and personally have a solid feeling that it's right.
Having said that, depending on how you program, you might want to consider using techniques (like interfaces) that make mocking less intrusive a bit more often. But don't use interfaces and injection where they don't fit. Also, if the mock needs to be fairly complex there is probably less reason to use it. (You can see a lot of good guidance in the answers here on what fits when.)
Put another way: No answer works always. Keep your wits about you, observe what works what doesn't and why.