Making a test double in OCaml - unit-testing

How common is it to have a test double in OCaml that fakes a database connection?
Let's say you want to test a small API on top of a database, and the way this works is by providing a Connection to each function that the API exposes.
Something like:
let get_data connection = do_something_with_connection
How would this be unit tested?
On a larger note, is this kind of testing common in OCaml, given that OCaml's powerful type system already makes sure you don't make weird mistakes?

You would create an object which has all of the same method names as Connection, each with the same signature (and with stub functionality, obviously). Then you can instantiate one of these objects and declare it to be a Connection via subtyping, and it can be passed into any of the functions.
Here is a helpful bit about subtyping (which, it should be noted, is not the same thing as inheritance in OCaml).
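Here is a minimal sketch of that idea; the connection class type with its single execute method, and the query string, are assumptions made up for illustration:
(* The object type the API is assumed to expect *)
class type connection = object
  method execute : string -> string list
end

(* Code under test, written against any subtype of connection *)
let get_data (conn : #connection) =
  conn#execute "some query"

(* The test double: same method name and signature, stubbed behaviour *)
let fake_connection =
  object
    method execute (_query : string) = ["data"]
  end

let () =
  (* the coercion declares the stub to be a connection via subtyping *)
  assert (get_data (fake_connection :> connection) = ["data"])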

Build your module with a functor, which takes the Connection module as its argument. Then you can stub out the Connection module in your tests.
So, for example, your db.ml file could look kind of like this:
(* The interface of Connection that we use *)
module type CONNECTION = sig
  type t
  val execute : string -> t -> string list
end

(* functor to build Db modules, given a Connection module *)
module Make(Connection : CONNECTION) = struct
  ...
  let get_data connection =
    do_something_with (Connection.execute "some query" connection)
  ...
end
Then in your test_db.ml you can just stub out the Connection module:
let test_get_data () =
  let module TestConnection = struct
    type t = unit
    let execute _ _ = ["data"]
  end in
  let module TestDb = Db.Make(TestConnection) in
  assert (TestDb.get_data () = ["munged data"])
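In production code you instantiate the same functor with the real connection module instead; for example (a sketch, where Postgres_connection and its connect function are hypothetical stand-ins for a module satisfying CONNECTION):
(* main.ml (sketch): Postgres_connection is hypothetical *)
module ProdDb = Db.Make(Postgres_connection)

let () =
  let conn = Postgres_connection.connect "host=localhost dbname=mydb" in
  ignore (ProdDb.get_data conn)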

Related

Mixing StateT and MonadUnliftIO

I'm not sure of the best name for my question, but I seem to have coded myself into a corner somewhat, or at least I am faced with a slightly awkward design decision within the hspec test suite for my project.
My project has to make some third-party API calls, and in hspec I am trying to make it so these can be faked using a transformer implemented with StateT, which can be placed inside a test stack to 'fake' those API calls. I always want to stub these calls out in unit tests; I never want to hit the real API, ever.
The way I am 'stubbing' calls is by defining all of my system's effects with typeclasses and providing a different instance in test. So, for example, I would have a class like:
class FireApiGetRequest m where
  fireApiGetRequest :: GetRequest -> m (Either ApiError GetResponse)

instance FireApiGetRequest Handler where
  fireApiGetRequest = -- real world implementation
I currently have two transformer stacks for my tests: one is known as AppTestIO and the other as AppTestPureM.
AppTestIO is made up of a ReaderT that contains a connection pool for a real test database and some test configuration settings, while AppTestPureM is a type alias for StateT FakeDatabaseCalls Gen. Much as I write out the API calls with typeclasses, database calls are also factored out into typeclasses and 'stubbed' out in the same way. This works well, only I would like to add a transformer layer to both stacks that mocks the third-party API. In my mind I should be able to define a transformer that can sit within both of these transformer stacks and give me the ability to fake these API calls:
type AppTestPureM = StateT FakedDatabaseCalls (ThirdPartyApiMocksT Gen)
type AppTestIO = ThirdPartyApiMocksT (AppTestT IO)

newtype AppTestT m a = AppTestT
  { unAppTestT :: ReaderT TestEnv m a
  }
On the face of it this seems like a great idea because it means I can stub out these API calls regardless of whether or not the test is hitting the real database, basically.
I have no issue defining the instances for the API calls in test:
instance FireApiGetRequest (ThirdPartyApiMocksT m) where
  fireApiGetRequest = -- test implementation, access state and return value based on that

instance FireApiGetRequest m => FireApiGetRequest (StateT s m) where
  fireApiGetRequest = lift . fireApiGetRequest
And I can define helper functions in my test to get the instances to return the fake data that I want:
stubApiGetRequest :: Monad m => (Either ApiError GetResponse) -> ThirdPartyApiMocksT m ()
stubApiGetRequest returnVal = undefined -- store `returnVal` in the state for use in typeclass instance
My issue arises when I actually start using these instances within my tests. The functions in my app that hit the database (and aren't yet stubbed out with typeclass instances) ultimately use runSqlPool, which requires MonadUnliftIO, and MonadUnliftIO effectively forbids me from mixing these together.
The usual approach to achieving 'state' while using MonadUnliftIO is to use ReaderT + MVar instead. Normally I don't have a huge issue with this; my only problem here is that my PureM stack is based around StateT and does not run in IO, because it is used to run 'effectful' computations against fake data so that tests can be written in a pure way, with no IO. This also means I can use QuickCheck to test these functions if that is something I want to do. I am aware there is quickcheck-monadic, so having these functions result in IO may not be the end of the world, but I would like to retain AppTestPureM a -> Gen a. Using ReaderT + MVar in this situation would make these tests depend on IO. So the two test stacks are contradictory.
I suppose that illustrates the situation I am in now. I am not sure how exactly to proceed from here.

Passing Configuration Environment in OCaml

I'm trying to understand and/or find examples of how robust OCaml applications deal with a configuration environment. I'm coming from the world of Haskell, where I would use the Reader monad to solve this problem. In particular, I want to define a top-level configuration which I can pass throughout my application, even though only some of the functions in my application will need to use it.
To motivate this, consider a simple executable which queries a database. I would want to define a top-level main file which might do things like set up a connection pool, or create a logger to be shared throughout the application.
I'm assuming OCaml has some pattern for dealing with this, but I cannot find any great examples. I'd prefer not to define my own Reader monad and functorize the entire application if I don't have to.
That being said, the explicit approach below is also ugly:
main.mli
module Config : sig
  type t = {
    fooConfig : Foo.Config.t;
    dbClientConfig : DB_client.Config.t;
  }

  val make : Foo.Config.t -> DB_client.Config.t -> t
end

val main : Config.t -> unit
foo.mli
module Config : sig
  type t = {
    dbConfig : DB_client.Config.t;
  }
end

(** fooFunc will need to pass the dbConfig, but it does not actually explicitly need anything in the config *)
val fooFunc : Config.t -> string -> unit
db_client.mli
module Config : sig
  type t = {
    connPool : SomeConnPool
  }
end

(** Finally, I need to use the config to grab a connection out of the pool *)
val writeToDb : Config.t -> string -> (string, string) Lwt.result
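To make the coupling concrete, the wiring in main.ml ends up looking something like this (a sketch; make_conn_pool, the omitted Config implementation, and the literal arguments are just placeholders):
(* main.ml (sketch): Config is the module declared in main.mli above *)
let main (config : Config.t) =
  (* main only holds the config so it can forward pieces of it further down *)
  Foo.fooFunc config.Config.fooConfig "some input"

let () =
  let pool = make_conn_pool () in
  let db_config = { DB_client.Config.connPool = pool } in
  let foo_config = { Foo.Config.dbConfig = db_config } in
  main (Config.make foo_config db_config)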
I don't want to make the explicit arguments of functions "higher in the execution stack" depend on a configuration they will never use.
Is there a nice functional pattern to deal with this nested dependency coupling? I'd appreciate any code examples that someone can point towards so I can study a better approach.

Run function/behaviour after all behaviours have ended in Pony

I have a simple Publish-Subscriber setup that I want to write tests for.
The methods called here are all behaviours, except get_number_consumed_messages, which would be a function.
class iso _SinglePubSub is UnitTest
  fun name(): String => "single publish/consume"

  fun apply(h: TestHelper) =>
    let p = Publisher("publisher message", h.env.out)
    let queue = Queue(1, h.env.out)
    let c = Consumer(h.env.out)
    p.publish_message(queue)
    p.publish_message(queue)
    c.consume_message(queue)
    c.consume_message(queue)
    //Run after all behaviours are done
    let n = c.get_number_consumed_messages()
    h.assert_eq[USize](2, n)
How would someone implement the get_number_consumed_messages function/behaviour or how would you have to modify the test function?
First of all, c.get_number_consumed_messages() must be a behaviour as well. It is the only way to let one actor communicate with another. This has the added benefit of behaviours being run in the same order as they are called, which means c.get_number_consumed_messages() would run after both calls to c.consume_message(queue).
Given that, since Consumer is also an actor, calling it with behaviours -- and not methods -- means that we cannot return data from it directly. To actually receive data from another actor, you should use the Promise pattern, for example:
use "promises"
actor Consumer
var message_count: USize = 0
be consume_message(queue: OutStream) =>
... // Do some things
message_count = message_count + 1
... // Do other things
be get_number_consumed_messages(p: Promise[USize]) =>
p(message_count)
To actually test it, you would need to follow an adapted version of the Testing Notifier Interactions pattern for long tests, for example:
use "ponytest"
use "promises"
class iso _SinglePubSub is UnitTest
fun apply(h: TestHelper) =>
h.long_test(2_000_000_000)
... // Produce and consume messages
let p = Promise[USize]
p.next[None]({(n: USize): None =>
h.assert_eq[USize](2, n)
h.complete(true) })
c.get_number_consumed_messages(p)
(Notice the extra calls to h.long_test and h.complete, as well as the promise wrapping the lambda that ends our test.)
For more information on these concepts, I would recommend familiarizing yourself with the stdlib documentation on Promises and the "Long tests" section of Ponytest.

The correct way to write unit tests for a module in OCaml

I have a given interface specification in the module.mli file. I have to write its implementation in the module.ml file.
module.mli provides an abstract type
type abstract_type
I'm using OUnit to create the tests, and I need to use the type's implementation in them (for example, to compare values). One solution would be to extend the interface to contain additional functions used by the tests.
But is it possible to do such a thing without modifying the interface?
The only way to expose tests without touching the module interface would be to register the tests with some global container. If you have a module called Tests that provides a function register, your module.ml would contain something like this:
let some_test = ...
let () = Tests.register some_test
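For concreteness, such a Tests module could be little more than a mutable list of registered tests (a sketch; the exact names and signatures are assumptions, and the plain unit -> bool test type shown further below is used here for simplicity):
(* tests.ml (sketch) *)
let registered : (unit -> bool) list ref = ref []

let register test =
  registered := test :: !registered

(* run everything that was registered and report whether all tests passed *)
let run_all () =
  let results = List.map (fun test -> test ()) !registered in
  List.for_all (fun passed -> passed) results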
I don't recommend this approach because the Tests module loses control over what tests it's going to run.
Instead I recommend exporting the tests, i.e. adding them to module.mli.
Note that without depending on OUnit, you can export tests of the following type that anyone can run. Our tests look like this:
let test_cool_feature () =
  ...
  assert ...;
  ...
  assert ...;
  true

let test_super_feature () =
  ...
  a = b

let tests = [
  "cool feature", test_cool_feature;
  "super feature", test_super_feature;
]
The interface is:
...
(**/**)
(* begin section ignored by ocamldoc *)
val test_cool_feature : unit -> bool
val test_super_feature : unit -> bool
val tests : (string * (unit -> bool)) list
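Anyone can then run the exported list without depending on OUnit; a minimal runner might look like this (a sketch, where My_module stands in for your module name):
(* run_tests.ml (sketch) *)
let run_one (name, test) =
  let passed = try test () with _ -> false in
  Printf.printf "%s: %s\n" name (if passed then "PASS" else "FAIL");
  passed

let () =
  let results = List.map run_one My_module.tests in
  exit (if List.for_all (fun passed -> passed) results then 0 else 1)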

Scala - write unit tests for objects/singletons that extend a trait/class with a DB connection

A unit-test-related question:
I've encountered a problem with testing Scala objects that extend another trait/class that has a DB connection (or any other "external" call).
Using a singleton with a DB connection anywhere in my project makes unit testing not an option, because I cannot override / mock the DB connection.
This results in changing my design purely for testing purposes in situations where it clearly needs to be an object.
Any suggestions?
Code snippet for non-testable code:
object How2TestThis extends SomeDBconnection {
  val somethingUsingDB = {
    getStuff.map(//some logic)
  }
  val moreThings = {
    //more things
  }
}

trait SomeDBconnection {
  import DBstuff._
  val db = connection(someDB)
  val getStuff = db.getThings
}
One of the options is to use the cake pattern to require some DB connection and mix in a specific implementation as desired. For example:
import java.sql.Connection

// Defines general DB connection interface for your application
trait DbConnection {
  def getConnection: Connection
}

// Concrete implementation for production/dev environment for example
trait ProductionDbConnectionImpl extends DbConnection {
  def getConnection: Connection = ???
}

// Common code that uses that DB connection and needs to be tested.
trait DbConsumer {
  this: DbConnection =>

  def runDb(sql: String): Unit = {
    getConnection.prepareStatement(sql).execute()
  }
}
...
// Somewhere in production code when you set everything up in init or main you
// pick concrete db provider
val prodDbConsumer = new DbConsumer with ProductionDbConnectionImpl
prodDbConsumer.runDb("select * from sometable")
...
// Somewhere in test code you mock or stub DB connection ...
val testDbConsumer = new DbConsumer with DbConnection { def getConnection = ??? }
testDbConsumer.runDb("select * from sometable")
If you have to use a singleton/Scala object, you can have a lazy val or some init(): Unit method that sets the connection up.
Another approach would be to use some sort of injector. For example look at Lift code:
package net.liftweb.http
/**
* A base trait for a Factory. A Factory is both an Injector and
* a collection of FactoryMaker instances. The FactoryMaker instances auto-register
* with the Injector. This provides both concrete Maker/Vender functionality as
* well as Injector functionality.
*/
trait Factory extends SimpleInjector
Then somewhere in your code you use this vendor like this:
val identifier = new FactoryMaker[MongoIdentifier](DefaultMongoIdentifier) {}
And then in places where you actually have to get access to the DB:
identifier.vend
You can supply an alternative provider in tests by surrounding your code with:
identifier.doWith(mongoId) { <your test code> }
which can be conveniently used with a specs2 Around context, for example:
implicit val dbContext = new Around {
  def around[T: AsResult](t: => T): Result = {
    val mongoId = new MongoIdentifier {
      def jndiName: String = dbName
    }
    identifier.doWith(mongoId) {
      AsResult(t)
    }
  }
}
It's pretty cool because it's implemented in Scala without any special bytecode or JVM hacks.
If you think the first 2 options are too complicated and you have a small app, you can use a properties file/cmd args to tell you whether you are running in test or production mode. Again the idea comes from Lift :). You can easily implement it yourself, but here is how you can do it with Lift Props:
// your generic DB code:
val jdbcUrl: String = Props.get("jdbc.url", "jdbc:postgresql:database")
You can have 2 props files:
production.default.props
jdbc.url=jdbc:postgresql:database
test.default.props
jdbc.url=jdbc:h2
Lift will automatically detect the run mode (Props.mode) and pick the right props file to read. You can set the run mode with JVM cmd args.
So in this case you can either connect to an in-memory DB, or just read the run mode and set your connection up in code accordingly (mock, stub, uninitialized, etc.).
Use the regular IoC pattern - pass dependencies via constructor arguments to the class, and don't use an object. This gets inconvenient quickly unless you use special dependency injection frameworks.
Some suggestions:
Use object for something that can't have an alternative implementation and whose only implementation will work in all environments. Use object for constants and pure FP, non-side-effecting code. Use singletons for wiring things up at the last moment - like a class with main, not somewhere deep in the code where many components depend on it - unless it has no side effects or it uses something like stackable/injectable vendor providers (see Lift).
Conclusion:
You can't mock an object or override its implementation. You need to design your code to be testable, and some of the options for doing so are listed above. It's good practice to make your code flexible, with easily composable parts, not only for the purposes of testing but also for reusability and maintainability.