Kotlin delegation, what should I test?

In Kotlin, the powerful construct of delegation can be used to extend the functionality of existing interfaces by reusing existing implementations:
class Demo : Map<String, String> by HashMap()
Questions:
What should I be testing? Testing HashMap from the example is not the target of this test. It seems very verbose to verify the complete Map implementation; I would rather verify that delegation to the proper methods takes place.
When using mutation testing, e.g. with PIT (pitest), how do I catch all mutations? The report shows quite a few mutations, correctly I believe: the Kotlin compiler generates bytecode for all the delegated methods.
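For the first question, one focused option (a minimal sketch, assuming MockK and JUnit 4 on the classpath; Demo is widened here with a constructor parameter so the test can hand in the delegate, which the original one-liner does not have) is to inject a mocked delegate and verify that calls are forwarded to it, rather than re-testing HashMap:

import io.mockk.every
import io.mockk.mockk
import io.mockk.verify
import org.junit.Assert.assertEquals
import org.junit.Test

class Demo(map: Map<String, String> = HashMap()) : Map<String, String> by map

class DemoDelegationTest {
    @Test
    fun `get is forwarded to the delegate`() {
        val delegate = mockk<Map<String, String>>()
        every { delegate["answer"] } returns "42"

        val demo = Demo(delegate)

        // Assert only that the call reaches the delegate; HashMap's own
        // behaviour is deliberately not the subject of this test.
        assertEquals("42", demo["answer"])
        verify { delegate["answer"] }
    }
}

One such test per delegated method you care about should also give PIT something to kill in the corresponding generated forwarding method; mutations in forwarders you never exercise will survive.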

Related

Unit Testing RxJava Flowable using spock

I have the snippet below for fetching data from MongoDB using com.mongodb.reactivestreams.client.MongoClient and Flowable. The snippet goes like this:
Flowable
    .fromPublisher(
        mongoClient
            .getDatabase(mydb)
            .getCollection(mycollection)
            .find()
            .limit(1)
    )
    .firstOrError()
    .toMaybe()
    .doOnError(error -> { /* some code */ })
I tried mocking every step of this fluent expression, e.g.
MongoDatabase someDb = Mock(MongoDatabase)
mongoClient.getDatabase(mydb) >> someDb
but when I do this, the test just keeps running and never completes.
What is the correct way to unit test this using Spock?
Fluent interfaces are a PITA to mock; my strategy is to put those calls into a separate class or method and mock that, and then to cover the fluent part in an integration test.
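As a rough sketch of that extraction (in Kotlin for brevity; DocumentSource, MongoDocumentSource and findFirst are made-up names, not from the question):

import com.mongodb.reactivestreams.client.MongoClient
import io.reactivex.Flowable
import io.reactivex.Maybe
import org.bson.Document

// The seam: the fluent Mongo/Rx chain lives behind one small interface.
interface DocumentSource {
    fun findFirst(): Maybe<Document>
}

class MongoDocumentSource(
    private val mongoClient: MongoClient,
    private val dbName: String,
    private val collectionName: String
) : DocumentSource {
    override fun findFirst(): Maybe<Document> =
        Flowable
            .fromPublisher(
                mongoClient
                    .getDatabase(dbName)
                    .getCollection(collectionName)
                    .find()
                    .limit(1)
            )
            .firstOrError()
            .toMaybe()
}

A Spock unit test can then stub the seam with a plain Mock(DocumentSource) returning Maybe.just(new Document()) or Maybe.error(...), while MongoDocumentSource itself is exercised against a real (or containerized) MongoDB in an integration test.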
In addition to Leonard's idea, you might also want to look into implementing a special ThisResponse implements IDefaultResponse which always returns the mock instance for every mocked method call, and using it like Mock(defaultResponse: ThisResponse.INSTANCE) for your fluent API class(es). This works nicely as long as the fluent API methods used in the test are supposed to return this or at least another object of the same type. Only where another type is returned do you need to stub something.
Check this answer for more details. As soon as you update your question with a little MCVE, you may also ask follow-up questions if you have any problems using that solution.
Update 2022-03-08: I rewrote the linked answer after learning about the special behaviour of EmptyOrDummyResponse for methods returning the mocked type. I also now describe the related Spock 2 syntactic sugar there.

When to use strict mocks?

I am trying to come up with a scenario in which one should use strict mocks, and I can't think of any.
When do you use strict mocks, and why?
Normal (or loose) mocks are used when you want to verify that an expected method has been called with the proper parameters.
Strict mocks are used to verify that only the expected methods have been called and no others. Think of them as a kind of negative test.
In most cases, having strict mocks makes your unit tests very fragile. Tests start failing even if you make a small internal implementation change.
But let me give you an example where they may be useful - testing a requirement such as:
"A Get on a cache should not hit the database if it already contains data".
There are ways to achieve this with loose mocks, but instead, it is very convenient to simply set up a strict Mock<Database> with zero expected function calls. Any call to this database will then throw an exception and fail the test.
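To illustrate (a sketch in Kotlin with MockK, whose non-relaxed mocks are strict by default; Cache and Database are made-up types, not from the question):

import io.mockk.confirmVerified
import io.mockk.every
import io.mockk.mockk
import io.mockk.verify
import org.junit.Assert.assertEquals
import org.junit.Test

interface Database {
    fun load(key: String): String
}

class Cache(private val db: Database) {
    private val entries = mutableMapOf<String, String>()
    fun get(key: String): String = entries.getOrPut(key) { db.load(key) }
}

class CacheTest {
    @Test
    fun `a second get for the same key does not hit the database`() {
        val db = mockk<Database>()              // strict: any unstubbed call throws
        every { db.load("k") } returns "v"
        val cache = Cache(db)

        cache.get("k")                          // misses the cache, loads from the database
        assertEquals("v", cache.get("k"))       // must now be served from the cache

        verify(exactly = 1) { db.load("k") }    // the database was hit exactly once...
        confirmVerified(db)                     // ...and nothing else was ever called on it
    }
}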
Another scenario where you would want to use strict mocks is in an Adapter or Wrapper design pattern. In this pattern, you are not executing much business logic. The major part of testing these classes is checking whether the underlying functions have been called with the correct parameters (and no others). Strict mocks work fairly well in this case.
I have a simple convention:
Use strict mocks when the system under test (SUT) delegates a call to the underlying mocked layer without modifying or applying any business logic to the arguments passed to it.
Use loose mocks when the SUT applies business logic to the arguments passed to it and passes derived/modified values on to the mocked layer.
For example, let's say we have a database provider StudentDAL whose data-access interface looks something like this:
Student GetStudentById(int id);
IList<Student> GetStudents(int ageFilter, int classId);
The implementation that consumes this DAL looks like this:
public Student FindStudent(int id)
{
    // StudentDAL dependency injected
    // Use a strict mock to test this method
    return StudentDAL.GetStudentById(id);
}

public IList<Student> GetStudentsForClass(StudentListRequest studentListRequest)
{
    // StudentDAL dependency injected
    // The age filter is derived from the request and then passed on to the underlying layer
    int ageFilter = DateTime.Now.Year - studentListRequest.DateOfBirthFilter.Year;
    // Use a loose mock and Moq's Verify API to make sure the age filter is passed on correctly
    return StudentDAL.GetStudents(ageFilter, studentListRequest.ClassId);
}
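To make the loose-mock half of that convention concrete, here is a rough sketch in Kotlin with MockK rather than Moq (StudentService, StudentDal and the data classes are stand-ins for the C# types above, not code from the answer):

import io.mockk.every
import io.mockk.mockk
import io.mockk.verify
import org.junit.Test
import java.time.LocalDate

data class Student(val id: Int)
data class StudentListRequest(val dateOfBirthFilter: LocalDate, val classId: Int)

interface StudentDal {
    fun getStudents(ageFilter: Int, classId: Int): List<Student>
}

class StudentService(private val dal: StudentDal) {
    fun getStudentsForClass(request: StudentListRequest): List<Student> {
        // business logic: the age filter is derived before hitting the DAL
        val ageFilter = LocalDate.now().year - request.dateOfBirthFilter.year
        return dal.getStudents(ageFilter, request.classId)
    }
}

class StudentServiceTest {
    @Test
    fun `the derived age filter reaches the DAL`() {
        val dal = mockk<StudentDal>()
        every { dal.getStudents(any(), any()) } returns emptyList()

        StudentService(dal).getStudentsForClass(
            StudentListRequest(dateOfBirthFilter = LocalDate.of(2010, 1, 1), classId = 7)
        )

        // Loose-mock style: verify only the interaction we care about.
        verify { dal.getStudents(LocalDate.now().year - 2010, 7) }
    }
}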

Unit Testing mgo

I have a function that accepts an *mgo.Database parameter:
func myFunc(db *mgo.Database) {
    // does some operations with db
}
I would like to write a unit test and pass in a mocked db object, but I'm having a very difficult time figuring out how to do that in Go. In other languages I could use their testing frameworks to do myMock = createMock("Class to Mock"), but with Go I'm not sure how to do this.
I glanced at gomock, but wasn't sure if that is the only way, and wasn't sure how to use the mockgen tool with mgo.
I also thought about writing an interface that has all of the same methods as mgo.Database, passing a "mocked" object that implements the interface, and then creating an object that implements the interface and passes calls through to mgo's library (similar to an ORM), but that seems like a lot of coding.
*mgo.Database is a pointer to a type, not an interface; you can't mock it.
As in other languages, you need to provide a level of indirection so that you can supply a real object in production and a mock for testing. Your first step is to extract the interface that myFunc uses (the methods it calls); then you can provide *mgo.Database in production and your mock (hand-written or generated by a mocking framework) for testing.
This free sample chapter from the great book "The Art of Unit Testing" explains the steps you need on page 52 (chapter 3, "Using stubs to break dependencies", section 3.3 "Determining how to easily test LogAnalyzer"):
http://www.manning.com/osherove/SampleChapter3.pdf
Given that in Go a type implements an interface just by implementing the interface's methods, it's even easier than in other languages (like C#, for example). So the answer is, as you said, to
"write an interface that has all of the same methods as mgo.Database and pass a 'mocked' object that uses the interface, and then create an object that uses the interface and passes calls through to mgo's library (similar to an ORM), but that seems like a lot of coding"
except that you don't need to create the object that passes calls through to mgo's library, because *mgo.Database will implicitly satisfy your interface. So it's not a lot of coding after all.
You can use Docker in your unit testing too.
I've created a library to help with this kind of testing: https://github.com/skarllot/raiqub
Example: https://github.com/raiqub/data/blob/v0.4/mongostore/store_test.go

Parameterized JUnit test without changing the runner

Is there a clean way to run parameterized JUnit 4 tests without changing the runner, i.e. without using
@RunWith(Parameterized.class)?
I have unit tests which already require a special runner, and I can't replace that one with Parameterized. Maybe there is some kind of "runner chaining" so I could use both runners at the same time? (Just a wild guess...)
I have released a framework with a couple of runners that are able to enforce parameterization on the test-class while allowing you to chain an arbitrary 3rd-party runner for the actual test-execution.
The framework is CallbackParams (http://callbackparams.org) and these are the runners:
- CallbackParamsRunner
- BddRunner
By using the framework annotation @WrappedRunner, you can specify an arbitrary third-party runner in this manner:
@RunWith(CallbackParamsRunner.class) // or @RunWith(BddRunner.class)
@WrappedRunner(YourSpecialRunner.class)
public class YourTest {
    ...
Parameterized tests with CallbackParams differ considerably from the traditional approach to test parameterization, however. The reasons are explained in this tutorial article, with BddRunner covered near its end.
For your first CallbackParams test you would probably prefer BddRunner, since it requires less boilerplate, but when you start reusing parameter values between different test classes you are probably better off with CallbackParamsRunner, which enforces stronger type-checking.
Also, with BddRunner you must not have any @Test methods. Instead you must use the framework annotations @Given, @When and @Then. That requirement sometimes clashes with those of the third-party runner, but it usually works out quite well.
Good Luck!
org.junit.runners.Parameterized is created by org.junit.internal.builders.AnnotatedBuilder via reflection. Maybe you could extend Parameterized as your own runner: @RunWith(MyParameterized.class).
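Since JUnit 4.12 there is also a built-in seam for this kind of chaining: Parameterized delegates each parameter set to a ParametersRunnerFactory, which you can replace. A sketch in Kotlin (SpecialRunnerFactory is hypothetical, and your special runner would need a constructor that accepts TestWithParameters):

import org.junit.Test
import org.junit.runner.RunWith
import org.junit.runner.Runner
import org.junit.runners.Parameterized
import org.junit.runners.parameterized.BlockJUnit4ClassRunnerWithParameters
import org.junit.runners.parameterized.ParametersRunnerFactory
import org.junit.runners.parameterized.TestWithParameters

class SpecialRunnerFactory : ParametersRunnerFactory {
    override fun createRunnerForTestWithParameters(test: TestWithParameters): Runner =
        // Substitute a runner of your own here; this line just recreates the default.
        BlockJUnit4ClassRunnerWithParameters(test)
}

@RunWith(Parameterized::class)
@Parameterized.UseParametersRunnerFactory(SpecialRunnerFactory::class)
class YourTest(private val input: Int) {
    companion object {
        @JvmStatic
        @Parameterized.Parameters(name = "input={0}")
        fun data(): List<Array<Any>> = listOf(arrayOf<Any>(1), arrayOf<Any>(2))
    }

    @Test
    fun works() {
        // exercised once per value of input
    }
}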

unit testing data storage

Suppose I have an interface with methods storeData(key, data) and getData(key). How should I test a concrete implementation? Should I check that the data was correctly stored in the storage medium (e.g. an SQL database), or should I just check whether it gives the correct data back via getData?
If I look up the data in the database, it feels like I'm also testing the internals of the method; but only checking whether it gives the same data back feels incomplete.
You seem to be caught up in the hype of unit testing; what you will be doing is actually an integration test. Setting and getting back the same value for the same key is a unit test you'd do with a mock implementation of the storage engine, but actually testing the real storage (say, your database), as you should, is no longer a unit test. It is a fundamental part of testing, though, and it sounds like integration testing to me. Don't use unit testing as your hammer; choose the right tool for the job and divide your testing into layers.
What you want to do in a unit test is make sure that the method does the job it is supposed to do. If the method uses dependencies to accomplish its work, you would mock those dependencies out and make sure that your method calls the methods on the objects it depends on with the appropriate arguments. This way you test your code in isolation.
One of the benefits to this is that it will drive the design of your code in a better direction. In order to use mocking, for example, you naturally gravitate towards more decoupled code using dependency injection. This gives you the ability to easily substitute your mock objects for the actual objects that your class depends on. You also end up implementing interfaces, which are more naturally mocked. Both of these things are good design patterns and will improve your code.
In order to test your particular example, for instance, you might have your class depend on a factory to create connections to the database and a builder to construct parameterized SQL commands that are executed via the connection. You'd pass these mocked versions of these objects to your class and ensure that the correct methods to set up the connection and command, build the correct command, execute it, and tear down the connection were invoked. Or perhaps, you inject an already open connection and simply build the command and invoke it. The point is your class is built against an interface or set of interfaces and you use mocking to supply objects that implement those interfaces and can record invocations and supply correct return values to the methods that you expect to use from the interface(s).
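As a rough sketch of that style (in Kotlin with MockK; Connection, ConnectionFactory and SqlStore are made-up stand-ins for whatever your class actually depends on):

import io.mockk.every
import io.mockk.mockk
import io.mockk.verifySequence
import org.junit.Test

// The seams: the class under test only talks to interfaces.
interface Connection : AutoCloseable {
    fun execute(sql: String, vararg args: Any?)
}

interface ConnectionFactory {
    fun open(): Connection
}

class SqlStore(private val connections: ConnectionFactory) {
    fun storeData(key: String, data: String) {
        connections.open().use { conn ->
            conn.execute("INSERT INTO kv(key, data) VALUES (?, ?)", key, data)
        }
    }
}

class SqlStoreTest {
    @Test
    fun `storeData executes the insert and closes the connection`() {
        val conn = mockk<Connection>(relaxed = true)
        val factory = mockk<ConnectionFactory> { every { open() } returns conn }

        SqlStore(factory).storeData("k", "v")

        // The test pins down the interactions, not the database contents.
        verifySequence {
            conn.execute("INSERT INTO kv(key, data) VALUES (?, ?)", "k", "v")
            conn.close()
        }
    }
}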
In cases like this I will usually create SetUp and TearDown methods that fire before/after my unit tests. These methods will set up any test data I need in the db and delete any test data when I'm done. Pseudo code example:
Const KEY1 = "somekey"
Const VALUE1 = "somevalue"
Const KEY2 = "somekey2"
Const VALUE2 = "somevalue2"

Sub SetUpUnitTests()
{
    Insert Into SQLTable (KEY1, VALUE1)
}

// this test does not depend on the storeData method
Sub GetDataTest()
{
    Assert.IsEqual(getData(KEY1), VALUE1)
}

// this test does not depend on the getData method
Sub SetDataTest()
{
    storeData(KEY2, VALUE2)
    Assert.IsNotNull(direct SQL call: [Select data From SQLTable Where key = KEY2])
}

Sub TearDownUnitTests()
{
    Delete From SQLTable Where key In (KEY1, KEY2)
}
Testing both in concert is a common technique (at least, in my experience), and I wouldn't shy away from it. I've used this same pattern for serializing/deserializing and parsing and printing.
If you don't want to hit the database, you could use a database mock. Some people have the same reservations as you about mocks, since they are partly implementation-specific. As in all things, it's a trade-off: consider the benefits of mocking (faster, not dependent on the database) versus its downsides (it won't detect actual database problems).
I think it depends on what happens to the data later. If you're only ever going to access the data using storeData and getData, why not test the methods in concert? I suppose there's a chance that a bug will arise and it'll be slightly harder to figure out whether it's in storeData or getData, but I'd consider that an acceptable risk if it
- makes your test easier to implement, and
- conceals the internals, as you say.
If the data will be read from, or inserted into, the database using some other mechanism, then I'd check the database using SQL as you suggest.
@brendan makes a good point, though: whichever method you decide on, you'll be inserting data into the database. It's a good idea to clear out the data before and after the tests to ensure consistent results.