How to prevent Collections.sort() from executing in a test case? - unit-testing

Hi, I am writing unit test cases for a program. In that program I am testing a certain method that contains this call: Collections.sort(arrayListObject).
It's something like this.
public void abc() {
    try {
        ArrayList<ObjectClass> object = some_method.get(list);
        Collections.sort(object);
        System.out.print("The whole function is executing successfully");
    } catch (Exception e) {
        System.out.print("error message");
    }
}
The some_method.get(list) method I am calling is returning an empty list, so when Collections.sort() is called it goes to the catch block and the rest of the code is not executed.
What can I do so that this Collections.sort() method is not called while the test case is running?
PS - the above code is only to explain the question and I can't make changes to the main class.
In the test class I tried this:
Mockito.doNothing().when(collections.sort(Mockito.anyList()));//this is not working
So I tried this Mockito.doNothing().when(collections.class).sort(Mockito.anyList()); //this is also not working
I can return a mock list to object but I want to know if I can prevent Collections.sort() from executing.
Please help

It's simple. If you want Collections.sort() to execute without any exception, follow the steps below:
1) Create a list object with dummy values in your test class and send it to the main class
list = new ArrayList<>();
list.add("a");
list.add("b");
list.add("c");
2) The second step is to make sure that your
some_method.get(list);
returns an ArrayList of objects (see the sketch after these steps),
3) As the Collections class has the static method sort, the JVM will execute the rest of the code as usual and will not throw any exception.
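For illustration, here is a minimal sketch of steps 1 and 2 with Mockito. It assumes that some_method is an injected collaborator that can be mocked, that String is used as the element type, and that classUnderTest is an instance of the class containing abc(); the test name is made up.
@Test
public void abc_runsToCompletion_whenListHasValues() {
    // step 1: a dummy list with values, so Collections.sort() has something real to work on
    ArrayList<String> list = new ArrayList<>();
    list.add("b");
    list.add("a");
    list.add("c");
    // step 2: make the collaborator return that list instead of an empty one
    Mockito.when(some_method.get(Mockito.anyList())).thenReturn(list);

    classUnderTest.abc();   // Collections.sort() now runs normally and nothing is thrown
}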
PS - If Mockito.doNothing() is not working, try PowerMockito.doNothing(); it might work. Make sure that you have:
1) @PrepareForTest(Collections.class)
2) PowerMockito.mockStatic(Collections.class);
3) PowerMockito.doNothing() set up for the Collections.sort(obj) call (see the sketch below for the usual syntax)
Hope it's useful.
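For reference, a rough sketch of how that PowerMockito setup is usually written for a void static method. The exact details depend on your PowerMock version, and for JDK classes such as Collections the PowerMock documentation generally says to @PrepareForTest the class that calls Collections.sort rather than Collections itself; the class and test names below are made up.
// imports from org.junit, org.mockito.Mockito, org.powermock.api.mockito.PowerMockito,
// org.powermock.core.classloader.annotations.PrepareForTest, org.powermock.modules.junit4.PowerMockRunner
@RunWith(PowerMockRunner.class)
@PrepareForTest(ClassUnderTest.class)   // prepare the caller when stubbing a JDK class
public class ClassUnderTestTest {

    @Test
    public void abc_skipsRealSorting() throws Exception {
        PowerMockito.mockStatic(Collections.class);
        // for a void static method: doNothing().when(TheClass.class), then record the call itself
        PowerMockito.doNothing().when(Collections.class);
        Collections.sort(Mockito.<String>anyList());   // String is just an illustrative element type

        new ClassUnderTest().abc();
    }
}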

This is wrong on many levels:
You are using a raw type there (by using ArrayList without a generic type parameter) - never do that!
If your test setup makes that other method return null, then don't manipulate "later" code - instead either avoid that null or make sure your production code can deal with it!
In other words: there are only two reasonable choices:
A) If, in reality, that method never returns null, then you should make sure that this is also true for your test setup.
B) If, in reality, that method can return null, then your production code needs to deal with that, for example by doing a null check before calling sort!
Finally: especially for lists, that problem really doesn't exist at all: the answer is - you never ever return null! Instead, you return an empty list if there is no data. Empty lists can be sorted without a problem! Long story short: avoid returning null in the first place.
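For illustration, a minimal sketch of that last point; getNames and loadNames are made-up names.
public List<String> getNames() {
    List<String> names = loadNames();                    // hypothetical source that might yield null
    return names != null ? names : new ArrayList<>();    // hand back an empty list, never null
}

// callers can now sort unconditionally:
List<String> names = getNames();
Collections.sort(names);   // works even when the list is empty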

I can return a mock list to object but I want to know if I can prevent Collections.sort() from executing.
I don't see a way to prevent Collections.sort() from being executed as long as the method that calls it is executed and you can't change the tested code. The reason is that sort is a static method; with an instance you could pass or inject a mock to achieve your goal, but not with a static call.
Returning a mock list that can be sorted is the way to go in my opinion. If you manipulate the behavior of the tested class during the test, you run a higher risk of not finding errors, and you create tight coupling between the test and the tested class.

Related

How to test (in google test) that a function is called with the correct parameters

I have a class and this class has a method ("original_method") that uses two objects of different types. In this method there are two calls: one call to a method of the first object that returns a value which is then used for calling a method of the second object. I was wondering what is the correct way of unit testing such behavior (using google-test). Specifically, I want to test that the argument provided to the second object is indeed the value returned from the first.
Currently I achieve this using parametrized tests - the code below shows what I do:
TEST_P(SomeTestSuite, checkingIfCalledWithTheRightArgument)
{
EXPECT_CALL(*obj1, get_some_value()).WillOnce(Return(name_of_value));
EXPECT_CALL(*obj2, do_a_calculation(name_of_value));
obj0->call_original_method();
}
I have a fixture for my original class under test, I have mocks for obj1 and obj2, I provide a value for "name_of_value" in the parameters, and the test works.
My problem is that this doesn't seem to be the correct way: I believe I shouldn't have to pass a parameter to check such functionality. I would appreciate it if somebody could explain to me how I should have approached the problem.
Thank you.
EDIT:
I think I could do:
TEST_F(SomeTestSuite, checkingIfCalledWithTheRightArgument)
{
EXPECT_CALL(*obj1, get_some_value());
auto name_of_value = obj1->get_some_value();
EXPECT_CALL(*obj2, do_a_calculation(name_of_value));
obj0->call_original_method();
}
but I'm not sure if this captures (or actually tests) the original behaviour...

Mockito Verify method not giving consistent results

I'm learning GwtMockito but having trouble getting consistent verify() method results in one of my tests.
I'm trying to test that the correct GwtEvents are being fired by my application. So I've mocked the Event Bus like this in my @Before method:
eventBus = mock(HandlerManager.class);
This test passes as expected:
// Passes as expected
verify(eventBus).fireEvent(any(ErrorOccurredEvent.class));
I wanted to force the test to fail just to know it was running correctly. So I changed it to this and it still passes:
// Expected this to fail, but it passes
verify(eventBus).fireEvent(any(ErrorOccurredEvent.class));
verifyZeroInteractions(eventBus).fireEvent(any(ErrorOccurredEvent.class));
This seems contradictory to me. So I removed the first test:
// Fails as expected
verifyZeroInteractions(eventBus).fireEvent(any(ErrorOccurredEvent.class));
Finally I added an unrelated event that should cause it to fail
// Expected to fail, but passes
verify(eventBus).fireEvent(any(ErrorOccurredEvent.class));
verify(eventBus).fireEvent(any(ModelCreatedEvent.class)); // This event is not used at all by the class that I'm testing. It's not possible for it to be fired.
I'm not finding any documentation that explains what's going on. Both ErrorOccurredEvent and ModelCreatedEvent extend GwtEvent, and have been verified in manual testing. Am I testing my EventBus incorrectly? If so, what is a better way to go about it?
Update
I've done some additional experimenting. It appears to be an issue I'm having with the Mockito matcher. When I get the test to fail the exception reports the method signature as eventBus.fireEvent(<any>) so it doesn't appear to be taking into account the different classes I'm passing into the any method. Not sure what to do about this yet, but including it here for anyone else researching this problem.
The method you're looking for is isA, instead of any.
This doesn't explain my first attempt to force the test to fail, but it does explain the other confusion. From the Mockito documentation:
public static <T> T any(java.lang.Class<T> clazz)
Matches any object, including nulls
This method doesn't do type checks with the given parameter, it is
only there to avoid casting in your code. This might however change
(type checks could be added) in a future major release.
So by design it doesn't do the type checks I was hoping for. I'll have to work out another way to design these tests. But this explains why they weren't behaving as I expected.
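To make that concrete, a small sketch of the same verifications rewritten with isA, assuming a static import of org.mockito.Mockito.*; in the Mockito version quoted above, any(Class) skips the type check while isA(Class) enforces it.
verify(eventBus).fireEvent(isA(ErrorOccurredEvent.class));  // passes: an ErrorOccurredEvent really was fired
verify(eventBus).fireEvent(isA(ModelCreatedEvent.class));   // now fails as expected: no such event is ever fired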

Organizing unit test within a test class

Suppose I have several unit tests in a test class ([TestClass] in VSUnit in my case). I'm trying to test just one thing in each test (which doesn't mean just one Assert, though). Imagine there's one test (e.g. Test_MethodA()) that tests a method used in other tests as well. I do not want to put an assert on this method in the other tests that use it, to avoid duplicity/maintainability issues, so I have the assert in only this one test. Now when this test fails, all tests that depend on correct execution of that tested method fail as well. I want to be able to locate the problem faster, so I want to be somehow pointed to Test_MethodA. It would e.g. help if I could make some of the tests in the test class execute in a particular order; when they fail, I'd start looking for the cause of the failure in the first failing test. Do you have any idea how to do this?
Edit: By suggesting that a solution would be to execute the tests in a particular order I have probably gone too far and in the wrong direction. I don't care about the order of the tests. It's just that some of the tests will always fail if a prerequisite isn't valid. E.g. I have a test class that tests a DAO class (OK, probably not a unit test, but there's logic in the database stored procedures that needs to be tested, though that's not the point here). I need to insert some records into a table in order to test that a method responsible for retrieving the records (let's call it GetAll()) gets them all in the correct order. I do the insert by using a method on the DAO class; let's call it Insert(). I have tests in place that verify that the Insert() method works as expected. Now I want to test the GetAll() method. In order to get the database into the desired state, I use the Insert() method. If Insert() doesn't work, most tests for GetAll() will fail. I'd prefer to mark the tests that can't pass because Insert() doesn't work as inconclusive rather than failed. It would ease finding the cause of the problem if I knew which method/test to look into first.
You can't (and shouldn't) execute unit tests in a specific order. The underlying reason for this is to prevent Interacting Tests - I realize that your motivation for requesting such a feature is different, but that's the reason why unit test frameworks don't allow you to order tests. In fact, last time I checked, xUnit.net even randomizes the order.
One could argue that the fact that some of your tests depend on a different method call on the same class is a symptom of tight coupling, but that's not always the case (state machines come to mind).
However, if possible, consider using a Back Door instead of the other method in question.
If you can't do either that or decouple the interdependency (e.g. by making the first method virtual and using the Extract and Override technique), you will have to live with it.
Here's an example:
public class MyClass
{
public virtual void FirstMethod() { /* do something... */ }
public void SecondMethod() {}
}
Since FirstMethod is virtual, you can derive from MyClass and override its behavior. You can also use a dynamic mock to do that for you. With Moq, it would look like this:
var sutStub = new Mock<MyClass>();
// by default, Moq overrides all virtual methods without calling base
// Now invoke both methods in sequence:
sutStub.Object.FirstMethod(); // overridden by Moq, so it does nothing
sutStub.Object.SecondMethod();
I think I would indeed have the assertion on the method_A() result in every test relying on its result, even if this introduces some duplication. Then I would use the assertion message to point to the method_A() failure.
assert("method_A() returned true", true, rc);
Perhaps I will end up extracting the method_A() call and the assertion into a helper function to remove the duplication.
Now let's imagine method_A() queries an object and returns it, or NULL when no object is found. Then this assertion is a guard; it is necessary with languages such as C or C++ that do not have a NullPointerException.
I'm afraid you can't do this. The only solution is to redesign your code and break it up into smaller methods so that unit tests can call these one by one. Of course this isn't always desirable.
With Visual Studio you can order your tests: see here. But I'd like to advise you to stay away from this technique as much as possible: unit tests are meant to be run anywhere, anytime and in every order.
EDIT: why is this a problem for you? All failing tests point to the same method anyway...

What is the unit testing strategy for method call forwarding?

I have the following scenario:
public class CarManager
{
..
public long AddCar(Car car)
{
try
{
string username = _authorizationManager.GetUsername();
...
long id = _carAccessor.AddCar(username, car.Id, car.Name, ....);
if(id == 0)
{
throw new Exception("Car was not added");
}
return id;
} catch (Exception ex) {
throw new AddCarException(ex);
}
}
public List<long> AddCars(List<Car> cars)
{
List<long> ids = new List<long>();
foreach(Car car in cars)
{
ids.Add(AddCar(car));
}
return ids;
}
}
I am mocking out _reportAccessor, _authorizationManager etc.
Now I want to unit test the CarManager class.
Should I have multiple tests for AddCar() such as
AddCarTest()
AddCarTestAuthorizationManagerException()
AddCarTestCarAccessorNoId()
AddCarTestCarAccessorException()
For AddCars(), should I repeat all the previous tests, since AddCars() calls AddCar()? It seems like repeating oneself. Should I perhaps not be calling AddCar() from AddCars()?
Please help.
There are two issues here:
Unit tests should do more than test methods one at a time. They should be designed to prove that your class can do the job it was designed for when integrated with the rest of the system. So you should mock out the dependencies and then write a test for each way in which you class will actually be used. For each (non-trivial) class you write there will be scenarios that involve the client code calling methods in a particular pattern.
There is nothing wrong with AddCars calling AddCar. You should repeat tests for error handling but only when it serves a purpose. One of the unofficial rules of unit testing is 'test to the point of boredom' or (as I like to think of it) 'test till the fear goes away'. Otherwise you would be writing tests forever. So if you are confident a test will add no value them don't write it. You may be wrong of course, in which case you can come back later and add it in. You don't have to produce a perfect test first time round, just a firm basis on which you can build as you better understand what your class needs to do.
A unit test should focus only on its corresponding class under test. All collaborators of that class (attributes of other types) should be mocked.
Suppose you have a class (CarRegistry) that uses some kind of data access object (for example CarPlatesDAO) which loads/stores car plate numbers from a relational database.
When you are testing the CarRegistry you should not care whether CarPlatesDAO performs correctly, since the DAO has its own unit tests.
You just create a mock that behaves like the DAO and returns correct or wrong values according to the expected behavior. You plug this mock DAO into your CarRegistry and test only the target class, without caring whether all aggregated classes are "green".
Mocking allows separation of testable classes and better focus on specific functionality.
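As an illustration of that idea, a minimal Java/Mockito sketch; CarRegistry and CarPlatesDAO are the hypothetical names used above, and the loadPlates/getAllPlates methods and constructor injection are assumptions.
@Test
public void registryReturnsPlatesFromDao() {
    CarPlatesDAO dao = Mockito.mock(CarPlatesDAO.class);
    Mockito.when(dao.loadPlates()).thenReturn(Arrays.asList("ABC-123", "XYZ-789"));  // assumed DAO method

    CarRegistry registry = new CarRegistry(dao);              // plug the mock into the class under test

    Assert.assertEquals(2, registry.getAllPlates().size());   // assert only on CarRegistry's behaviour
    // the real DAO is covered by its own unit tests, so the database is never touched here
}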
When unit testing AddCar, create tests that will exercise every code path. If _authorizationManager.GetUsername() can throw an exception, create a test where your mock for this object throws. BTW: don't throw or catch instances of Exception, but derive a meaningful Exception class.
For the AddCars method, you definitely should call AddCar. But you might consider making AddCar virtual and overriding it just to test that it's called with all cars in the list.
Sometimes you'll have to change the class design for testability.
Should I have multiple tests for
AddCar() such as
AddCarTest()
AddCarTestAuthorizationManagerException()
AddCarTestCarAccessorNoId()
AddCarTestCarAccessorException()
Absolutely! This tells you valuable information
For AddCars() should I repeat all previous tests as AddCars() calls AddCar() - it seems
like repeating oneself? Should I perhaps not be calling AddCar() from AddCars()?
Calling AddCar from AddCars is a great idea; it avoids violating the DRY principle. Similarly, you shouldn't be repeating tests. Think of it this way - you already wrote tests for AddCar, so when testing AddCars you can assume AddCar does what it says on the tin.
Let's put it this way - imagine AddCar was in a different class. You would have no knowledge of an authorisation manager. Test AddCars without the knowledge of what AddCar has to do.
For AddCars, you need to test all normal boundary conditions (does an empty list work, etc.) You probably don't need to test the situation where AddCar throws an exception, as you're not attempting to catch it in AddCars.
Writing tests that explore every possible scenario within a method is good practice. That's how I unit test in my projects. Tests like AddCarTestAuthorizationManagerException(), AddCarTestCarAccessorNoId(), or AddCarTestCarAccessorException() get you thinking about all the different ways your code can fail which has led to me find new kinds of failures for a method I might have otherwise missed as well as improve the overall design of the class.
In a situation like AddCars() calling AddCar() I would mock the AddCar() method and count the number of times it's called by AddCars(). The mocking library I use allows me to create a mock of CarManager and mock only the AddCar() method but not AddCars(). Then your unit test can set how many times it expects AddCar() to be called which you would know from the size of the list of cars passed in.
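To sketch that partial-mock idea concretely (shown here in Java with Mockito's spy, since the original mocking library isn't named; the method names are written Java-style and the constructor arguments are assumptions):
@Test
public void addCars_callsAddCarOncePerCar() {
    CarManager manager = Mockito.spy(new CarManager(authorizationManager, carAccessor));
    Mockito.doReturn(1L).when(manager).addCar(Mockito.any(Car.class));   // stub only addCar

    manager.addCars(Arrays.asList(new Car(), new Car(), new Car()));

    Mockito.verify(manager, Mockito.times(3)).addCar(Mockito.any(Car.class));  // one call per car in the list
}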

How to test function call order

Considering such code:
class ToBeTested {
public:
void doForEach() {
for (vector<Contained>::iterator it = m_contained.begin(); it != m_contained.end(); it++) {
doOnce(*it);
doTwice(*it);
doTwice(*it);
}
}
void doOnce(Contained & c) {
// do something
}
void doTwice(Contained & c) {
// do something
}
// other methods
private:
vector<Contained> m_contained;
};
I want to test that if I fill vector with 3 values my functions will be called in proper order and quantity. For example my test can look something like this:
tobeTested.AddContained(one);
tobeTested.AddContained(two);
tobeTested.AddContained(three);
BEGIN_PROC_TEST()
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
tobeTested.doForEach()
END_PROC_TEST()
How do you recommend testing this? Are there any means to do this with the CppUnit or GoogleTest frameworks? Maybe some other unit test framework allows performing such tests?
I understand that this is probably impossible without calling some debug functions from these functions, but can it at least be done automatically in some test framework? I don't want to scan trace logs and check their correctness.
UPD: I'm trying to check not only the state of the objects, but also the execution order, to avoid performance issues at the earliest possible stage (and in general I want to know that my code is executed exactly as I expected).
You should be able to use any good mocking framework to verify that calls to a collaborating object are done in a specific order.
However, you don't generally test that one method makes some calls to other methods on the same class... why would you?
Generally, when you're testing a class, you only care about testing its publicly visible state. If you test anything else, your tests will prevent you from refactoring later.
I could provide more help, but I don't think your example is consistent (Where is the implementation for the AddContained method?).
If you're interested in performance, I recommend that you write a test that measures performance.
Check the current time, run the method you're concerned about, then check the time again. Assert that the total time taken is less than some value.
The problem with checking that methods are called in a certain order is that your code is going to have to change, and you don't want to have to update your tests when that happens. You should focus on testing the actual requirement instead of testing the implementation detail that meets that requirement.
That said, if you really want to test that your methods are called in a certain order, you'll need to do the following:
Move them to another class, call it Collaborator
Add an instance of this other class to the ToBeTested class
Use a mocking framework to set the instance variable on ToBeTested to be a mock of the Collaborator class
Call the method under test
Use your mocking framework to assert that the methods were called on your mock in the correct order.
I'm not a native cpp speaker so I can't comment on which mocking framework you should use, but I see some other commenters have added their suggestions on this front.
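For illustration only (the question is about C++, but the idea is framework-agnostic), here is roughly what those five steps look like in Java with Mockito's InOrder; every name below is made up.
// steps 1-3: doOnce/doTwice have been moved to a Collaborator, and a mock of it is injected
Collaborator collaborator = Mockito.mock(Collaborator.class);
ToBeTested tested = new ToBeTested(collaborator);
Contained one = new Contained();
Contained two = new Contained();
tested.addContained(one);
tested.addContained(two);

tested.doForEach();                                        // step 4: call the method under test

InOrder inOrder = Mockito.inOrder(collaborator);           // step 5: assert the exact call order
inOrder.verify(collaborator).doOnce(one);
inOrder.verify(collaborator, Mockito.calls(2)).doTwice(one);
inOrder.verify(collaborator).doOnce(two);
inOrder.verify(collaborator, Mockito.calls(2)).doTwice(two);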
You could check out mockpp.
Instead of trying to figure out how many functions were called, and in what order, find a set of inputs that can only produce an expected output if you call things in the right order.
Some mocking frameworks allow you to set up ordered expectations, which lets you say exactly which function calls you expect in a certain order. For example, RhinoMocks for C# allows this.
I am not a C++ coder so I'm not aware of what's available for C++, but that's one type of tool that might allow what you're trying to do.
http://msdn.microsoft.com/en-au/magazine/cc301356.aspx
This is a good article about Context Bound Objects. It contains some advanced stuff, but if you are not lazy and really want to understand this kind of thing it will be really helpful.
At the end you will be able to write something like:
[CallTracingAttribute()]
public class TraceMe : ContextBoundObject
{...}
You could use ACE (or similar) debug frameworks, and in your test, configure the debug object to stream to a file. Then you just need to check the file.