I use Haskell's Tasty framework for testing. When I need to acquire and release resources, I do it with Tasty's withResource function:
withResource :: IO a -> (a -> IO ()) -> (IO a -> TestTree) -> TestTree
where a is the type of the resource. But I want to keep the resources if the tests fail and release them only if the tests pass. How can I do that?
Test failures (at least in tasty-hunit) are implemented as exceptions. The purpose of withResource and bracket is to free resources even when there is an exception. If you write straight-line code like this, the resource will be freed if and only if the assertions pass:
testCase "resource management" $ do
a <- allocate
assertBool =<< runTest
cleanUp a
It's a bit hacky, but you could use an AllSucceed dependency and define a dummy test that cleans up your resource but runs only when certain other tests succeed.
A caveat is that such a cleanup test could be filtered out by a pattern.
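A minimal sketch of that idea, assuming Tasty >= 1.2 (which provides after and AllSucceed); allocate, runTest and cleanUp are hypothetical stand-ins for your real resource functions:

import Data.IORef (newIORef, readIORef, writeIORef)
import Test.Tasty (DependencyType (AllSucceed), after, defaultMain, testGroup)
import Test.Tasty.HUnit (assertBool, testCase)

-- hypothetical stand-ins for the real resource functions
allocate :: IO FilePath
allocate = pure "/tmp/resource"

runTest :: FilePath -> IO Bool
runTest _ = pure True

cleanUp :: FilePath -> IO ()
cleanUp _ = putStrLn "cleaning up"

main :: IO ()
main = do
  ref <- newIORef Nothing
  defaultMain $
    testGroup "suite"
      [ testCase "real test" $ do
          a <- allocate
          writeIORef ref (Just a)
          assertBool "test failed" =<< runTest a
      , -- runs only if "real test" succeeded; on failure the resource is kept
        after AllSucceed "real test" $
          testCase "cleanup" $
            maybe (pure ()) cleanUp =<< readIORef ref
      ]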
Alternatively, I think I'd accept a pull request that adds a version of withResource with an additional Outcome argument.
I have a test case where I need to mock downloading an image. The issue is that when I mock the download function, the other functions in that module become undefined, but I also need to call those other functions in the test as they originally exist, without mocking.
Is there a way to mock only one function from App.Functions in the example below and keep the rest of the functions working the same?
The code looks like this for setting up the mock:
setup_with_mocks(
  [
    {App.Functions, [], [download_file: fn _url -> :ok end]}
  ],
  context
)
It seems that you are using Mock (https://hexdocs.pm/mock/Mock.html). In that case you can use the passthrough option:
test_with_mock "test_name", App.Functions, [:passthrough], [download_file: fn _url -> :ok end] do
end
I don't know whether the option is also available for setup_with_mocks.
More info here: https://github.com/jjh42/mock#passthrough---partial-mocking-of-a-module
Sometimes when you encounter difficulty in mocking functions for testing, it can indicate an organizational problem in your code, e.g. a violation of the single-responsibility principle. Pondering things like this starts to venture into more philosophical territory (which Stack Overflow is not geared towards), but generally it's helpful to isolate your modules in a way that is compatible with testing -- some of the common code/repo organizational patterns fall into place more easily when you give due consideration to facilitating testing.
As already noted, Mock allows the passthrough option.
The Mox package does not have a viable solution to this particular use case -- even its skipping-optional-callbacks option does not really fit the bill.
Another option is to go the more manual route: pass an opt (or read one out of the Application config) that can be overridden at runtime to facilitate testing. This tactic smells to me a bit like JavaScript's heavy reliance on passing callback functions, but it can work in a pinch, e.g. something like:
def download(url, opts \\ []) do
  http_client = Keyword.get(opts, :client, HTTPoison)
  http_client.get(url)
end

# OR

def download(url) do
  http_client = Application.get_env(:myapp, :http_client, HTTPoison)
  http_client.get(url)
end
Then in your tests:
test "download a file" do
assert {:ok, _} = MyApp.download("http://example", client: HttpClientMock)
end
# OR...
setup do
starting_value = Application.get_env(:myapp, :http_client)
on_exit(fn ->
Application.put_env(:myapp, :http_client, starting_value)
end)
end
test "download a file" do
Application.put_env(:myapp, :http_client, ClientMock)
# ...
end
This has the disadvantage of punting compile-time errors into runtime (which might be a worthwhile tradeoff to achieve test coverage), and this approach can become disorganized, so use with care.
Generally, I've found that Mox's approach of relying on behaviours/callbacks leads to cleaner tests and cleaner code, but your mileage and use cases may vary.
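For comparison, a minimal sketch of that behaviour-based approach, assuming hypothetical module names (defmock and expect are part of the Mox API):

# A behaviour describing the dependency (name is an assumption)
defmodule MyApp.HTTPClient do
  @callback get(String.t()) :: {:ok, map()} | {:error, term()}
end

# In test_helper.exs: define the mock module once
Mox.defmock(MyApp.HTTPClientMock, for: MyApp.HTTPClient)

# In a test: stub only the function you care about
Mox.expect(MyApp.HTTPClientMock, :get, fn _url -> {:ok, %{status: 200}} end)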
Recently I noticed that my team follows two approaches to writing tests in Reactor. The first one uses the .block() method, and it looks something like this:
@Test
void set_entity_version() {
    Entity entity = entityRepo.findById(ID)
        .block();
    assertNotNull(entity);
    assertFalse(entity.isV2());

    entityService.setV2(ID)
        .block();

    entity = entityRepo.findById(ID)
        .block();
    assertNotNull(entity);
    assertTrue(entity.isV2());
}
The second one uses StepVerifier, and it looks something like this:
@Test
void set_entity_version() {
    StepVerifier.create(entityRepo.findById(ID))
        .assertNext(entity -> {
            assertNotNull(entity);
            assertFalse(entity.isV2());
        })
        .verifyComplete();

    StepVerifier.create(entityService.setV2(ID)
            .then(entityRepo.findById(ID)))
        .assertNext(entity -> {
            assertNotNull(entity);
            assertTrue(entity.isV2());
        })
        .verifyComplete();
}
In my humble opinion, the second approach looks more reactive. Moreover, the official docs are very clear on that:
A StepVerifier provides a declarative way of creating a verifiable script for an async Publisher sequence, by expressing expectations about the events that will happen upon subscription.
Still, I'm really curious which way should be encouraged as the main road for testing in Reactor. Should the .block() method be abandoned completely, or can it be useful in some cases? If so, what are those cases?
Thanks!
You should use StepVerifier. It allows more options:
Verify that you expect n elements in a flux
Verify that the flux/mono completes
Verify that an error is expected
Verify a sequence of n expected elements followed by an error (impossible to test with .block())
From the official doc:
public <T> Flux<T> appendBoomError(Flux<T> source) {
    return source.concatWith(Mono.error(new IllegalArgumentException("boom")));
}

@Test
public void testAppendBoomError() {
    Flux<String> source = Flux.just("thing1", "thing2");

    StepVerifier.create(appendBoomError(source))
        .expectNext("thing1")
        .expectNext("thing2")
        .expectErrorMessage("boom")
        .verify();
}
Create an initial Context
Use virtual time to manipulate time, so that when you have something like Mono.delay(Duration.ofDays(1)) you don't have to wait one day for your test to complete (see the sketch below)
Expect that no events are emitted for a given duration...
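For the virtual-time point above, a minimal sketch based on the reactor-test API (StepVerifier.withVirtualTime); the one-day delay is advanced by the virtual scheduler instead of actually being waited for:

@Test
public void testDelayWithVirtualTime() {
    StepVerifier.withVirtualTime(() -> Mono.delay(Duration.ofDays(1)))
        .expectSubscription()
        .expectNoEvent(Duration.ofDays(1))
        .expectNext(0L)
        .verifyComplete();
}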
from https://medium.com/swlh/stepverifier-vs-block-in-reactor-ca754b12846b
There are pros and cons to both the block() and StepVerifier testing patterns, so it is necessary to define a pattern or set of rules that can guide us on how to use StepVerifier and block().
In order to decide which pattern to use, we can try to answer the following questions, which will set a clear expectation for the tests we are going to write:
Are we trying to test the reactive aspect of the code or just the output of the code?
In which of the patterns do we find clarity based on the 3 A's of testing (Arrange, Act, and Assert), so that the test is understandable?
What are the limitations of the block() API compared to StepVerifier when testing reactive code? Which API is more fluent for writing tests in case of an exception?
If you try answering all these questions, you will find the answers to "what" and "where". So give it a thought before reading the following answers:
block() tests the output of the code and not the reactive aspect. Where we are concerned with testing the output of the code rather than its reactive aspect, we can use block() instead of StepVerifier, as it is easier to write and the tests are more readable.
The assertion library for the block() pattern is better organised in terms of the 3 A's (Arrange, Act, and Assert) than StepVerifier. In StepVerifier, while testing a method call on a mock class, or even while testing a Mono output, one has to write expectations as chained methods, unlike plain asserts, which in my opinion decreases the readability of the tests. Also, if you forget to write the terminal step, i.e. verify(), with StepVerifier, the code will not be executed and the test will go green, so the developer has to be very careful to call verify() at the end of the chain.
Some aspects of reactive code cannot be tested using the block() API. When testing a Flux of data, subscription delays, or subscriptions on different Schedulers, etc., the developer is bound to use StepVerifier.
To verify an exception using the block() API, you need to use the assertThatThrownBy API from the assertions library, which catches the exception. With such an assertion API, the error message and the instance of the exception can be asserted. StepVerifier also provides assertions on exceptions via the expectError() API, and it supports asserting the elements emitted before the error is thrown in a Flux, which cannot be achieved with block(). So, for asserting exceptions, StepVerifier is better than block(), as it can assert both Mono and Flux.
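A small sketch contrasting the two styles described above, assuming AssertJ (assertThatThrownBy) and reactor-test are on the classpath:

@Test
public void verifyErrorWithBlockAndStepVerifier() {
    Mono<String> failing = Mono.error(new IllegalArgumentException("boom"));

    // block(): the error surfaces as a thrown exception and is asserted with AssertJ
    assertThatThrownBy(failing::block)
        .isInstanceOf(IllegalArgumentException.class)
        .hasMessage("boom");

    // StepVerifier: elements emitted before the error can be asserted as well
    StepVerifier.create(Flux.just("thing1")
            .concatWith(Mono.error(new IllegalArgumentException("boom"))))
        .expectNext("thing1")
        .expectErrorMessage("boom")
        .verify();
}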
I have a service that only makes queries (read/write) to InfluxDB.
I want to unit test this, but I'm not sure how to do it. I've read a bunch of tutorials about mocking; a lot of them deal with packages like go-sqlmock, but since I'm using InfluxDB I could not use it.
I also found the other packages I tried, like GoMock or testify, to be over-complicated.
What I'm thinking of doing is creating a repository layer, an interface that declares all the methods I need to run/test, and passing concrete implementations with dependency injection.
I think it could work, but is it the easiest way to do it?
I guess having repositories everywhere, even for small services, just so they are testable, seems over-engineered.
I can give you code if needed, but I think my question is more theoretical than practical: it is about the easiest way to mock a custom DB for unit testing.
To expand on @Markus W Mahlberg's answer:
If the goal is to verify that the queries are valid and actually execute against Influx, there's no shortcut other than running them against Influx. These are usually considered "integration" tests. I have found with docker-compose that these tests can be just as reliable as unit tests, and fast enough to be integrated into CI. Having the tests execute in CI also enables local engineers to easily run them to verify their query changes.
I guess having repositories everywhere, even for small services, just so they are testable, seems over-engineered.
I have found this to be a pretty polarizing discussion. A test implementation IS a concrete implementation, and it paves the way for reliable, repeatable tests that make it easy to isolate and exercise specific components of your code.
I want to unit test this, but I'm not sure how to do it.
I think this is pretty nuanced; IMO unit testing queries provides negative value. The value comes from using a repository interface that allows your unit tests to explicitly configure the responses you would receive from Influx in order to fully exercise your application code. This provides no feedback on Influx itself, which is why the integration tests are essential to verify that your application can validly configure, connect, and query against Influx. That validation otherwise happens implicitly when you deploy your application, at which point the feedback is far more expensive than verifying it locally and in CI with integration tests.
I created a diagram to try and illustrate these differences:
Unit tests with a repository are focused on your application code and provide little feedback/value on anything to do with Influx. Integration tests are useful for verifying your client (they could be extended to cover your application, depending on where the tests exercise the code, but I prefer to bound them to the client since you already have static feedback from Go on the interfaces and calls). Finally, as @Markus points out, the step from integration tests to e2e tests is pretty small, and e2e tests let you test your full service.
By its very definition, if you test your integration with an external resource, we are talking about integration tests, not unit tests. So we have two problems to solve here.
Unit tests
What you typically do is have a data access layer that accepts interfaces, which in turn are easy to mock, so you can unit test your application logic.
package main

import (
	"errors"
	"fmt"
)

var (
	values   = map[string]string{"foo": "bar", "bar": "baz"}
	Expected = errors.New("Expected error")
)

type Getter interface {
	Get(name string) (string, error)
}

// ErrorGetter implements Getter and always returns an error to test the error handling code of the caller.
// ofc, you could (and prolly should) use some mocking here in order to be able to test various other cases
type ErrorGetter struct{}

func (e ErrorGetter) Get(name string) (string, error) {
	return "", Expected
}

// MapGetter implements Getter and uses a map as its datasource.
// Here you can see that you actually get an advantage: you decouple your logic from the data source,
// making refactoring (and debugging) much easier WTSHTF.
type MapGetter struct {
	data map[string]string
}

func (m MapGetter) Get(name string) (string, error) {
	if v, ok := m.data[name]; ok {
		return v, nil
	}
	return "", fmt.Errorf("No value found for %s", name)
}

type retriever struct {
	g Getter
}

func (r retriever) retrieve(name string) (string, error) {
	return r.g.Get(name)
}

func main() {
	// Assume this is test code. No tests possible on playground ;)
	bad := retriever{g: ErrorGetter{}}

	s, err := bad.retrieve("baz")
	if s != "" || err == nil {
		panic("Something went seriously wrong")
	}

	// Needs to fail as well, as "baz" is not in values
	good := retriever{g: MapGetter{values}}

	s, err = good.retrieve("baz")
	if s != "" || err == nil {
		panic("Something went seriously wrong")
	}

	s, err = good.retrieve("foo")
	if s != "bar" || err != nil {
		panic("Something went seriously wrong")
	}
}
In the example above, I actually had to implement two Getters to cover all test cases, since I could not use a mocking library, but you get the picture.
As for the over-engineering: plain and simple, no, that is not over-engineering. It is what I personally call proper craftsmanship. Getting used to it will pay off in the long run. Maybe not in this project, but in one to come.
Integration tests
Dodgy. What I tend to do is to make sure my queries are correct before I commit them ;)
In the rare case that I really want to verify my queries in CI, for example, I usually create a Makefile that spins up docker(-compose) to provide the stuff I want to integrate against and then runs the tests.
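As a rough sketch of what the Go side of that could look like, assuming an integration build tag and an InfluxDB 1.x instance started by docker-compose on localhost:8086 (package name, address, and client library are assumptions):

//go:build integration

package store_test

import (
	"testing"

	client "github.com/influxdata/influxdb1-client/v2"
)

// Runs only with the "integration" tag, e.g.:
//   docker compose up -d influxdb && go test -tags=integration ./...
func TestInfluxPing(t *testing.T) {
	c, err := client.NewHTTPClient(client.HTTPConfig{Addr: "http://localhost:8086"})
	if err != nil {
		t.Fatalf("could not create client: %v", err)
	}
	defer c.Close()

	if _, _, err := c.Ping(0); err != nil {
		t.Fatalf("could not reach InfluxDB: %v", err)
	}
}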
I'm writing a unit test for a function that calls out to a module as a side effect of invoking it:
defmodule HeimdallrWeb.VerifyController do
  use HeimdallrWeb, :controller

  def verify(conn, _params) do
    [forwarded_host | _tail] = get_req_header(conn, "x-forwarded-host")

    case is_preview_page?(forwarded_host) do
      {:ok, false} ->
        conn |> send_resp(200, "")

      {:ok, %Heimdallr.Commits.Commit{} = commit} ->
        Heimdallr.Commits.touch_commit(commit)
        conn |> send_resp(200, "")

      {:not_found, _reason} ->
        conn |> send_resp(200, "")
    end
  end
end
The side effect is triggered from the line Heimdallr.Commits.touch_commit(commit).
A few questions about this:
Should my unit test be concerned with testing the effects of the touch_commit function?
If so, should I think about passing a generic "touch" function into verify to make it easier to test? This might be difficult due to the nature of Phoenix's/Elixir's routing system; I haven't investigated.
If I were using Rails/Ruby/RSpec, I'd set an expectation that a class-level method would be called on the Heimdallr.Commits module.
My concern, and my reason for writing the test, is that in the future I may accidentally remove the functionality that touches a commit by deleting or commenting out the line, etc.
I would say 1: no, and that is to keep the complexity of your test low. You only want to test (whatever it is you want to test) in a method; the rest should be mocked or ignored. What you could do is verify that your method has invoked touch_commit - that should be part of a good mocking framework. Those are my 5 cents; sorry to say I am not familiar with Phoenix/Elixir, so I can't give you any working examples. Something like the verify method in Mockito or Moq is what I am thinking of.
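Purely as an illustration of that idea in Elixir, here is a sketch using the Mock library's with_mock/assert_called; the route, the header value, and the fixture that makes is_preview_page?/1 return a commit are assumptions:

import Mock

test "touches the commit when a preview page is requested", %{conn: conn} do
  # assumes test data exists so that is_preview_page?/1 resolves to {:ok, commit}
  with_mock Heimdallr.Commits, [:passthrough], [touch_commit: fn _commit -> :ok end] do
    conn
    |> put_req_header("x-forwarded-host", "preview.example.com")
    |> get("/verify")

    # fails if the Heimdallr.Commits.touch_commit/1 call is ever removed
    assert_called(Heimdallr.Commits.touch_commit(:_))
  end
end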
When writing tests I find myself writing all kinds of small little helper functions to make assertions. I searched for an assertion library and didn't find anything. In my tests I often have things like this:
value_in_list(_Value, []) ->
    false;
value_in_list(Value, [Item|List]) ->
    case Value == Item of
        true ->
            true;
        false ->
            value_in_list(Value, List)
    end.
test_start_link(_Config) ->
    % should return the pid and register the process as my_app
    {ok, Pid} = my_app:start_link(),
    true = is_pid(Pid),
    value_in_list(my_app, registered()).
I end up having to write a whole function just to check whether my_app is a registered process. It would be much nicer if I could just call something like assertion:value_in_list(my_app, registered()) or assertion:is_registered(my_app).
I come from a Ruby background so I hate having to clutter up my tests with utility functions just to make a few assertions. It would be much cleaner if I could just do:
test_start_link(_Config) ->
    % should return the pid and register the process as my_app
    {ok, Pid} = my_app:start_link(),
    true = is_pid(Pid),
    assertion:value_in_list(my_app, registered()).
So my questions are:
Why doesn't an assertion library exist for Common Test?
Would it be possible to build a third party library that would be accessible during all tests?
Some ideas for this:
Move your application startup to the suite's startup section:
init_per_suite(Config) ->
    {ok, Pid} = my_app:start_link(),
    true = is_pid(Pid),
    [{app, Pid} | Config].
Then write your test registration as:
test_registration(Config) ->
    Pid = ?config(app, Config),
    Pid = whereis(my_app),
    true = lists:member(my_app, registered()).
There is no need to assert things via explicit assertion functions since they are "built in". Just make a failing match like the one above and the test process will crash, and hence report that the test case went wrong. Each test case is run in its own process, which is also why you want to start the application in the init_per_suite/1 callback; otherwise, my_app would be terminated once your test case has run, since it is linked to the process-per-test-case.
So the answer is: assertions are built in. Hence there is less need for an assertion library as such.
On a side note, it's terser and more efficient to write that first function using pattern matching in the clause heads, rather than adding a case:
value_in_list(_Value, [])            -> false;
value_in_list( Value, [Value|_List]) -> true;
value_in_list( Value, [_Item|List])  -> value_in_list(Value, List).
I realize this should probably just be a comment to the original question, but that's murderously difficult to read without monospace and newlines.
You can just use EUnit assertions in Common Test.
-include_lib("eunit/include/eunit.hrl").
And all the regular assertions are available.
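For example, the start_link test from the question could be written with EUnit macros like this (a sketch; ?assert comes from the eunit.hrl include above):

test_start_link(_Config) ->
    {ok, Pid} = my_app:start_link(),
    ?assert(is_pid(Pid)),
    ?assert(lists:member(my_app, registered())).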
I decided to write an Erlang assertion library to help with cases like this. It provides this functionality.