I'm unit testing my application using the Mockito framework.
TL;DR
Should I create a mock for every possible result of a class and set up their behaviour in @Before, or should I create just one mock per class and define its behaviour in each test that uses it?
Context
Right now I'm testing DataRepository, which has getPosts(companyId: String). DataRepository receives a FirebaseFirestore instance via DI.
getPosts returns a Maybe<List<Post>> with the following code:
fun getPosts(companyId: String): Maybe<List<Post>> {
    return Maybe.create { emitter ->
        firestore.collection("companies/$companyId/posts").get()
            .addOnCompleteListener {
                if (it.isSuccessful) {
                    if (it.result.isEmpty) {
                        emitter.onComplete()
                    } else {
                        emitter.onSuccess(it.result.toObjects(Post::class.java))
                    }
                } else {
                    emitter.onError(it.exception ?: UnknownError())
                }
            }
    }
}
Problem
To test this function and the various cases (success/empty/failed) I have to mock FirebaseFirestore, then the CollectionReference, then the Task, then the OnCompleteListener (with an argument captor) and so on. Basically everything.
Question
When I mock the Task, should I:
a. Create a successfulTaskMock that is configured in @Before to always succeed, plus a separate mock for the failure case? (And likewise a validQuerySnapshot and an emptyQuerySnapshot?)
b. Create only a taskMock and change its behaviour directly inside each test? (Make taskMock.isSuccessful return true in the success test and false in the failure test.) Both options are sketched below.
I've looked at some GitHub repos: some do a., some do b.
The problem with a. is that I would end up with tons of mocks for the same class, but the tests would be very clean and readable.
The problem with b. is that I have to set up each mock at the start of every test, which means a lot of boilerplate code.
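Here is a minimal sketch of the two options in Kotlin with Mockito, assuming only the Task<QuerySnapshot> mock matters for the comparison; the class names, the test name and the RuntimeException are illustrative rather than taken from my real code:

import com.google.android.gms.tasks.Task
import com.google.firebase.firestore.QuerySnapshot
import org.junit.Before
import org.junit.Test
import org.mockito.Mockito.mock
import org.mockito.Mockito.`when`

@Suppress("UNCHECKED_CAST")
class OptionA {
    // Option a: one pre-configured mock per outcome, shared by every test.
    private lateinit var successfulTask: Task<QuerySnapshot>
    private lateinit var failedTask: Task<QuerySnapshot>

    @Before
    fun setUp() {
        successfulTask = mock(Task::class.java) as Task<QuerySnapshot>
        `when`(successfulTask.isSuccessful).thenReturn(true)

        failedTask = mock(Task::class.java) as Task<QuerySnapshot>
        `when`(failedTask.isSuccessful).thenReturn(false)
        `when`(failedTask.exception).thenReturn(RuntimeException("boom"))
    }
}

@Suppress("UNCHECKED_CAST")
class OptionB {
    // Option b: a single mock, stubbed inside each test that uses it.
    private val task = mock(Task::class.java) as Task<QuerySnapshot>

    @Test
    fun `getPosts emits an error when the task fails`() {
        `when`(task.isSuccessful).thenReturn(false)
        `when`(task.exception).thenReturn(RuntimeException("boom"))
        // ...hand the task to the repository via the captured listener and
        // assert on the resulting Maybe.
    }
}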
Conclusion
I've read that I could use deep mocking to only create the end result of a call (the QuerySnapshot, as opposed to Firestore + Collection + Task etc.) but that seems to be very frowned upon and I can see why.
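For reference, from what I understand the deep-mocking approach would look roughly like the sketch below, using Mockito's RETURNS_DEEP_STUBS. I haven't verified that the generic Task<QuerySnapshot> chain resolves cleanly in every Mockito setup, so treat it as a sketch rather than working code:

import com.google.firebase.firestore.FirebaseFirestore
import org.junit.Test
import org.mockito.Mockito.RETURNS_DEEP_STUBS
import org.mockito.Mockito.mock
import org.mockito.Mockito.`when`

class DeepStubSketch {
    @Test
    fun `deep stubs create every intermediate mock implicitly`() {
        val firestore = mock(FirebaseFirestore::class.java, RETURNS_DEEP_STUBS)

        // Only the end of the chain is configured; the CollectionReference, Task
        // and QuerySnapshot mocks are generated on the fly.
        `when`(firestore.collection("companies/1/posts").get().result.isEmpty)
            .thenReturn(true)
    }
}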
Maybe there is even a third way that I don't know of. Any suggestion is greatly appreciated.
Thank you very much.
Related
I was looking at the documentation at https://pub.dartlang.org/packages/mockito and was trying to understand it better. It seems that in the examples the function stubs accept strings, but I was kind of confused as to how I was going to implement my mocked services.
I was curious how I would do it. The services I have are pretty simple and straightforward.
class Group {}

class GroupService {}

class MockGroupService extends Mock implements GroupService {}

final mockProviders = [new Provider(MockGroupService, useExisting: GroupService)];
So you can see I am using AngularDart.
I was creating a sample group in my test file.
group("service tests", (){
MockGroupService _mock;
testBed.addProviders([mockProviders]);
setUp(() async {
fixture = await testBed.create();
_mock = new MockGroupService();
//This is where I was going to create some stubbs for the methods
when(_mock.add()).thenReturn((){
return null; //return the object.
});
//create additional when statements for edit, delete, etc.
});
});
So what I was thinking is that there would be one argument (or two) passed into add. How would I properly code that in the when statement, and how do those arguments carry over into the then statement?
Essentially, I want to test with a complex class, pass it into add, and have the mock process it accordingly and return it.
Do I pass in something akin to the following (pseudocode)?
when(_mock.add(argThat(hasType(Group)))).thenReturn((Group arg)=> arg);
or something similar? hasType isn't a function, so I'm not 100% sure how to approach this design. Ideally, I'm trying to create the Group in the test and then pass it into the add function accordingly. It just seems that the examples only show Strings.
Yes, mockito allows objects to be passed; you can see examples in its tests.
It is a bit hard to follow, but you can see here that it uses deep equality to check whether arguments are equal when no matchers are specified.
The second part of your question is a bit more complex. If you want to use the values that were passed into your mock as part of your response then you need to use thenAnswer. It provides you with an Invocation of what was just called. From that object you can get and return any arguments that were used in the method call.
So for your add example, if you know what is being passed in and have complete access to it, I would write:
Group a = new Group();
when(_mock.add(a)).thenReturn(a);
If the Group object is being created by something else I would write:
when(_mock.add(argThat(new isInstanceOf<Group>())))
    .thenAnswer((invocation) => invocation.positionalArguments[0]);
Or, if you don't really care about checking for the type (depending on what checks you are using for your test, the type might already be checked for you):
when(_mock.add(any)).thenAnswer(
    (invocation) => invocation.positionalArguments[0]);
Or if you are using Dart 2.0:
when(_mock.add(typed(any))).thenAnswer(
    (invocation) => invocation.positionalArguments[0]);
Sorry for the long post...
While being introduced to a brownfield project, I'm having doubts regarding certain sets of unit tests and what to think of them. Say you have a repository class wrapping a stored procedure, and in the developer guide book a certain set of guidelines (rules) describes how this class should be constructed. The class could look like the following:
public class PersonRepository
{
    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (new SomeProfiler("someKey"))
        {
            var sp = Ioc.Resolve<IPersonStoredProcedure>();

            sp.addNameArguement(personName);
            sp.addCityArguement(cityName);

            return sp.invoke();
        }
    }
}
Now, I would of course write some integration tests, testing that the SP can be invoked, and that the behavior is as expected. However, would I write unit tests that assert that:
The constructor for SomeProfiler is called with the input parameter "someKey"
The constructor of PersonStoredProcedure is called
The addNameArguement method on the stored procedure is called with the parameter personName
The addCityArguement method on the stored procedure is called with the parameter cityName
The invoke method is called on the stored procedure
If so, I would potentially be testing the whole structure of a method, in addition to its behavior. My initial thought is that it is overkill. However, with regard to the coding practices enforced by the team, these tests ensure a uniform and 'correct' structure and that the next layer is called correctly (from DAL to DB, BLL to DAL, etc.).
In my case, these types of tests are performed for each layer of the application.
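To make the discussion concrete, such an interaction test boils down to something like the sketch below. It is written in Kotlin with Mockito rather than our C# stack, the IoC resolution is elided, and the names are illustrative:

import org.junit.Test
import org.mockito.Mockito.mock
import org.mockito.Mockito.verify

// Illustrative stand-ins: PersonCollection is simplified to a List<String>, and the
// stored procedure is handed to the repository directly instead of via the container.
interface PersonStoredProcedure {
    fun addNameArguement(name: String)
    fun addCityArguement(city: String)
    fun invoke(): List<String>
}

class PersonRepositorySketch(private val sp: PersonStoredProcedure) {
    fun findPersonsByNameAndCity(name: String, city: String): List<String> {
        sp.addNameArguement(name)
        sp.addCityArguement(city)
        return sp.invoke()
    }
}

class PersonRepositoryInteractionTest {
    @Test
    fun `passes both arguments to the stored procedure and invokes it`() {
        val sp = mock(PersonStoredProcedure::class.java)
        val repository = PersonRepositorySketch(sp)

        repository.findPersonsByNameAndCity("Ann", "Oslo")

        // These verifications pin down the structure of the method, which is
        // exactly what the guidelines ask for.
        verify(sp).addNameArguement("Ann")
        verify(sp).addCityArguement("Oslo")
        verify(sp).invoke()
    }
}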
Follow-up question: the use of the SomeProfiler class smells a little like a convention to me. Instead of creating explicit tests for this, could one create convention-style tests by using static code analysis or unit tests plus reflection?
Thanks in advance.
I think that your initial thought was right: this is overkill. Although you can use reflection to make sure that the class has the methods you expect, I'm not sure you want to test it that way.
Perhaps, instead of unit testing, you should use a tool such as FxCop/StyleCop or NDepend to make sure all of the classes in a specific assembly/DLL have these properties.
Having said that, I'm a believer in "only code what you need". Why test that a method exists? Either you use it somewhere in your code, in which case you can test that specific case, or you don't, and so it's irrelevant.
Unit tests should focus on behavior, not implementation. So writing a test to verify that certain arguments are set or passed in doesn't add much value to your testing strategy.
As the example provided appears to be communicating with your database, it can't truly be considered a "unit test" as it must communicate with physical dependencies that have additional setup and preconditions, such as availability of the environment, database schema, existing data, stored-procedures, etc. Any test you write is actually verifying these preconditions as well.
In its present condition, your best bet for these types of tests is to test the behavior provided by the class: invoke a method on your repository and then validate that the results are what you expected. However, you'll suddenly realize that there's a hidden cost here: the database maintains state between test runs, and you'll need additional setup or tear-down logic to ensure that the database is in a well-known state.
While I realize the intent of the question was about testing a "black box", it seems obvious that there's some hidden magic in your API. My preference for solving the well-known-state problem is to use an in-memory database that is scoped to the current test, which isolates me from environment considerations and enables me to parallelize my integration tests. I'd wager that under the current design there is no "seam" to programmatically introduce a database configuration, so you're "hemmed in". In my experience, magic hurts.
However, a slight change to the existing design solves this problem and the "magic" goes away:
public class PersonRepository : IPersonRepository
{
    private ConnectionManager _mgr;

    public PersonRepository(ConnectionManager mgr)
    {
        _mgr = mgr;
    }

    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (var p = _mgr.CreateProfiler("somekey"))
        {
            var sp = new PersonStoredProcedure(p);

            sp.addArguement("name", personName);
            sp.addArguement("city", cityName);

            return sp.invoke();
        }
    }
}
I understand the idea behind test-driven development: write tests first, then code against the tests until they pass. It's just not coming together for me in my workflow yet.
Can you give me some examples of where unit tests could be used in a front-end or back-end web development context?
You didn't specify a language, so I'll try and keep this somewhat generic. It'll be hard though, since it's a lot easier to express concepts with actual code.
Unit tests can be a bit confusing at first. Sometimes it's not always clear how to test something, or what the purpose of a test is.
I like to treat unit testing as a way to test small individual pieces of code.
The first place I use unit tests is to verify some method works like I expect it to for all cases. I just recently wrote a validation method for a phone number for my site. I accept any inputs, from 123-555-1212, (123) 555-1212, etc. I want to make sure my validation method works for all the possible formats. Without a unit test, I'd be forced to manually enter each different format, and check that the form posts correctly. This is very tedious and error prone. Later on, if someone makes a change to the phone validation code, it would be nice if we could easily check to make sure nothing else broke. (Maybe we added support for the country code). So, here's a trivial example:
public class PhoneValidator
{
    public bool IsValid(string phone)
    {
        return UseSomeRegExToTestPhone(phone);
    }
}
I could write a unit test like this:
public void TestPhoneValidator()
{
    string goodPhone = "(123) 555-1212";
    string badPhone = "555 12";

    PhoneValidator validator = new PhoneValidator();

    Assert.IsTrue(validator.IsValid(goodPhone));
    Assert.IsFalse(validator.IsValid(badPhone));
}
Those 2 Assert lines will verify that the value returned from IsValid() is true and false respectively.
In the real world, you would probably have lots and lots of examples of good and bad phone numbers. I have about 30 phone numbers that I test against. Simply running this unit test in the future will tell you whether your phone validation logic is broken.
We can also use unit tests to simulate things outside of our control.
Unit tests should run independently of any outside resources. Your tests shouldn't depend on a database being present or a web service being available. So instead we simulate these resources, so we can control what they return. In my app, for example, I can't test a rejected credit card on registration against the real service. The bank probably would not like me submitting thousands of bad credit cards just to make sure my error handling code is correct. Here's some sample code:
public class AccountServices
{
    private IBankWebService _webService = new BankWebService();

    public string RegisterUser(string username, string creditCard)
    {
        AddUserToDatabase(username);

        bool success = _webService.BillUser(creditCard);

        if (success == false)
            return "Your credit card was declined";
        else
            return "Success!";
    }
}
This is where unit testing is very confusing and not obvious. What should a test of this method do? First, it would be very nice if we could check that, if billing failed, the appropriate error message is returned. As it turns out, by using a mock, there's a way: we use what's called Inversion of Control. Right now, AccountServices is responsible for creating the BankWebService object. Let's let the caller of this class supply it instead:
public class AccountServices
{
    public AccountServices(IBankWebService webService)
    {
        _webService = webService;
    }

    private IBankWebService _webService;

    public string RegisterUser(string username, string creditCard)
    {
        AddUserToDatabase(username);

        bool success = _webService.BillUser(creditCard);

        if (success == false)
            return "Your credit card was declined";
        else
            return "Success!";
    }
}
Because the caller is responsible for creating the BankWebService object, our unit test can create a fake one:
public class FakeBankWebService : IBankWebService
{
    public bool BillUser(string creditCard)
    {
        return false; // our fake object always says billing failed
    }
}

public void TestUserIsRemoved()
{
    IBankWebService fakeBank = new FakeBankWebService();
    AccountServices services = new AccountServices(fakeBank);

    string registrationResult = services.RegisterUser("test_username", "fake_card");

    Assert.AreEqual("Your credit card was declined", registrationResult);
}
By using that fake object, anytime our bank's BillUser() is called, our fake object will always return false. Our unit test now verifies that if the call to the bank fails, RegisterUser() will return the correct error message.
Suppose one day you are making some changes, and a bug creeps in:
public string RegisterUser(string username, string creditCard)
{
    AddUserToDatabase(username);

    bool success = _webService.BillUser(creditCard);

    if (success) // IT'S BACKWARDS NOW
        return "Your credit card was declined";
    else
        return "Success!";
}
Now, when your billing fails, your RegisterUser() method returns "Success!". Fortunately, you have a unit test written. That unit test will now fail because the method no longer returns "Your credit card was declined".
It's much easier and quicker to find the bug this way than to manually fill out your registration form with a bad credit card, just to check the error message.
Once you look at different mocking frameworks, there are even more powerful things you can do. You can verify your fake methods were called, you can verify the number of times a method was called, you can verify the parameters that methods were called with, etc.
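For instance, with Mockito on the JVM (Moq and other .NET frameworks offer equivalent APIs), a test can assert exactly how a fake was used. The BankService interface below is an illustrative stand-in, not part of the code above:

import org.junit.Test
import org.mockito.Mockito.mock
import org.mockito.Mockito.never
import org.mockito.Mockito.times
import org.mockito.Mockito.verify

// Illustrative stand-in for a collaborator your code calls.
interface BankService {
    fun charge(creditCard: String, amountCents: Long)
}

class VerificationExamples {
    @Test
    fun `verifies which calls were made, how often, and with which arguments`() {
        val bank = mock(BankService::class.java)

        // In a real test the object under test would make these calls; they are
        // made directly here just to keep the sketch short.
        bank.charge("4111-1111", 500)
        bank.charge("4111-1111", 500)

        // The method was called exactly twice with these arguments...
        verify(bank, times(2)).charge("4111-1111", 500)
        // ...and never with these.
        verify(bank, never()).charge("0000-0000", 100)
    }
}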
I think once you understand these 2 ideas, you will understand more than enough to write plenty of unit tests for your project.
If you tell us the language you're using, we can direct you better though.
I hope this helps. I apologize if some of it is confusing. I'll clean it up if something doesn't make sense.
Let's say you're writing a function to check whether a page was reached by the appropriate URL. The page has a "canonical" stub. For example, while a page could be reached at stackoverflow.com/questions/123, we would prefer (for SEO reasons) to redirect it to stackoverflow.com/questions/123/how-do-i-move-the-turtle-in-logo. The actual redirect is safely contained in its own method (e.g. redirectPage($url)), but how do you properly test the function which calls it?
For example, take the following function:
function checkStub($questionId, $baseUrl, $stub) {
    $canonicalStub = $model->getStub($questionId);
    if ($stub != $canonicalStub) {
        redirectPage($baseUrl . $canonicalStub);
    }
}
If you were to unit test the checkStub() function, wouldn't the redirect get in the way?
This is part of a larger problem where certain functions seem to get too big and cross from the realm of unit testing into the world of integration testing. My mind immediately goes to routers and controllers as having these sorts of problems, as testing them necessarily leads to the generation of pages rather than being confined to just their own function.
Do I just fail at unit testing?
You say...
This is part of a larger problem where certain functions seem to get too big and cross from the realm of unit testing into the world of integration testing
I think this is why unit testing is (1) hard and (2) leads to code that doesn't crumble under its own weight. You have to be meticulous about breaking all of your dependencies or you end up with unit tests == integration tests.
In your example, you would inject a redirector as a dependency. You use a mock, double, or spy. Then you do the tests as @atk lays out. Sometimes it's not worth it. More often it forces you to write better code. And it's hard to do without an IoC container.
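To make that concrete, here is a rough sketch of the injected-redirector idea, written in Kotlin with Mockito rather than PHP; Redirector and StubChecker are illustrative names, not part of your codebase:

import org.junit.Test
import org.mockito.Mockito.mock
import org.mockito.Mockito.verify

// Illustrative names; the model lookup from the original code is elided and the
// canonical stub is passed in directly.
interface Redirector {
    fun redirect(url: String)
}

class StubChecker(private val redirector: Redirector) {
    fun check(canonicalStub: String, requestedStub: String, baseUrl: String) {
        // Redirect only when the requested stub is not the canonical one.
        if (requestedStub != canonicalStub) {
            redirector.redirect(baseUrl + canonicalStub)
        }
    }
}

class StubCheckerTest {
    @Test
    fun `redirects to the canonical url when the requested stub is wrong`() {
        val redirector = mock(Redirector::class.java)
        val checker = StubChecker(redirector)

        checker.check("/how-do-i-move-the-turtle-in-logo", "/wrong-stub", "/questions/123")

        // The mock records the call, so the test can assert on the redirect
        // without an HTTP response ever being produced.
        verify(redirector).redirect("/questions/123/how-do-i-move-the-turtle-in-logo")
    }
}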
This is an old question, but I think this answer is relevant. @Rob states that you would inject a redirector as a dependency, and sure, this works. However, your problem is that you don't have a good separation of concerns.
You need to make your functions as atomic as possible, and then compose larger functionality using the granular functions you've created. You wrote this:
function checkStub($questionId, $baseUrl, $stub) {
    $canonicalStub = $model->getStub($questionId);
    if ($stub != $canonicalStub) {
        redirectPage($baseUrl . $canonicalStub);
    }
}
I'd write this:
function checkStubEquality($stub1, $stub2) {
    return $stub1 == $stub2;
}

$canonicalStub = $model->getStub($questionId);
if (!checkStubEquality($canonicalStub, $stub)) redirectPage($baseUrl . $canonicalStub);
It sounds like you just have another test case. You need to check that the stub is identified correctly as a stub with both positive and negative testing, and you need to check that the page to which you are redirected is correct.
Or do I totally misunderstand the question?
I'm looking for tidy suggestions on how people organise their controller tests.
For example, take the "add" functionality of my "Address" controller,
[AcceptVerbs(HttpVerbs.Get)]
public ActionResult Add()
{
    var editAddress = new DTOEditAddress();
    editAddress.Address = new Address();
    editAddress.Countries = countryService.GetCountries();

    return View("Add", editAddress);
}

[RequireRole(Role = Role.Write)]
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Add(FormCollection form)
{
    // save code here
}
I might have a fixture called "when_adding_an_address"; however, there are two actions I need to test under this title...
I don't want to call both actions in my Act() method in my fixture, so I divide the fixture in half, but then how do I name it?
"When_adding_an_address_GET" and "When_adding_an_address_POST"?
Things just seem to be getting messy, quickly.
Also, how do you deal with stateless/setup-less assertions for controllers, and how do you arrange these with respect to the above? For example:
[Test]
public void the_requesting_user_must_have_write_permissions_to_POST()
{
    Assert.IsTrue(this.SubjectUnderTest.ActionIsProtectedByRole(c => c.Add(null), Role.Write));
}
This is custom code, I know, but you should get the idea: it simply checks that a filter attribute is present on the method. The point is that it doesn't require any Arrange() or Act().
Any tips welcome!
Thanks
In my opinion you should forget about naming your tests after the methods you're testing. In fact, testing a single method is a strange concept. You should be testing a single thing a client will do with your code. So, for example, if you can hit Add with a POST and a GET, you should write two tests, like you suggested. If you want to see what happens in a certain exceptional case, you should write another test.
I usually pick names that tell a maintainer what they need to know; in Java:
@Test
public void shouldRedirectToGetWhenPostingToAdd() {
    //...
}
You can do this in any language and pick any *DD naming convention if you like, but the point is that the test name should convey the expectations and the scenario. You will get very small tests this way, and I consider this a good thing.
Well, 13 months later and no answers. Awesome.
Here's what I do now:
/tests/controllers/address/add/get.cs
/tests/controllers/address/add/valid.cs
/tests/controllers/address/add/invalid.cs