Writing a unit test for a MakeHandler function in Go-kit - unit-testing

My problem is specific to Go-kit and how to organize code within.
I'm trying to write a unit test for the following function:
func MakeHandler(svc Service, logger kitlog.Logger) http.Handler {
    orderHandler := kithttptransport.NewServer(
        makeOrderEndpoint(svc),
        decodeRequest,
        encodeResponse,
    )

    r := mux.NewRouter()
    r.Handle("/api/v1/order/", orderHandler).Methods("GET")
    return r
}
What would be the correct way of writing a proper unit test? I have seen examples such as the following:
sMock := &ServiceMock{}
h := MakeHandler(sMock, log.NewNopLogger())
r := httptest.NewRecorder()
req := httptest.NewRequest("GET", "/api/v1/order/", bytes.NewBuffer([]byte("{}")))
h.ServeHTTP(r, req)
And then testing the body and headers of the response. But this doesn't seem like a proper unit test, as it calls other parts of the code (orderHandler). Is it possible to just validate what's returned from MakeHandler() instead of during a request?

TL;DR: Yes, that test is headed in the right direction. You shouldn't try to test the
internals of the returned handler, since that third-party package may change in ways you didn't expect in the future.
Is it possible to just validate what's returned from MakeHandler() instead of
during a request?
Not in a good way. MakeHandler() returns an interface and ideally you'd use
just the interface methods in your tests.
You could look at the docs of the type returned by mux.NewRouter() to see if
there are any fields or methods in the concrete type that can give you the
information, but that can turn out to be a pain - both for understanding the
tests (one more rarely used type to learn about) and due to how future
modifications to the mux package may affect your code without breaking the
tests.
What would be the correct way of writing a proper unit test?
Your example is actually in the right direction. When testing MakeHandler(),
you're testing that the handler returned by it is able to handle all the paths
and calls the correct handler for each path. So you need to call the
ServeHTTP() method, let it do its thing and then test to see it worked
correctly. Only introspecting the handler does not guarantee correctness during
actual usage.
You may need to make actually valid requests, though, so you can tell which
handler was called based on the response body or headers. That should
bring the test to a quite reasonable state. (I think you already have that.)
Similarly, I'd add a basic subtest for each route that's added in the future.
Detailed handler tests can be written in separate funcs.
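For illustration, a table-driven version of that test might look like the sketch below. Imports and the ServiceMock definition are omitted, and the expected status codes are placeholders: what you actually get back depends on your decodeRequest/encodeResponse functions and on what the mock returns.
func TestMakeHandler(t *testing.T) {
    svc := &ServiceMock{}
    h := MakeHandler(svc, log.NewNopLogger())

    tests := []struct {
        name       string
        method     string
        target     string
        wantStatus int
    }{
        {name: "order route", method: "GET", target: "/api/v1/order/", wantStatus: http.StatusOK},
        {name: "unknown route", method: "GET", target: "/api/v1/nope/", wantStatus: http.StatusNotFound},
    }

    for _, tc := range tests {
        t.Run(tc.name, func(t *testing.T) {
            rec := httptest.NewRecorder()
            req := httptest.NewRequest(tc.method, tc.target, nil)

            h.ServeHTTP(rec, req)

            if rec.Code != tc.wantStatus {
                t.Fatalf("%s %s: got status %d, want %d", tc.method, tc.target, rec.Code, tc.wantStatus)
            }
            // Optionally inspect rec.Body, rec.Header(), or calls recorded on svc
            // to confirm the expected endpoint was actually reached.
        })
    }
}
Adding a new route then only means adding a row to the table; anything more detailed about a single handler goes into its own test func.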

Related

Elixir mock only one function from a file

I have a test case where I need to mock downloading an image. The issue is when I mock this download function, it makes the other functions in that file undefined, but I also need to call the other functions in the test as they originally exist without mocking.
Is there a way to mock only one function from App.Functions in the example below and keep the rest of the functions working the same?
The code looks like this for setting up the mock:
setup_with_mocks(
  [
    {App.Functions, [], [download_file: fn _url -> :ok end]}
  ],
  context
)
It seems that you are using Mock (https://hexdocs.pm/mock/Mock.html). In that case you can use the passthrough option:
test_with_mock "test_name", App.Functions, [:passthrough], [download_file: fn _url -> :ok end] do
end
I don't know if the option is available also for setup_with_mocks.
More info here: https://github.com/jjh42/mock#passthrough---partial-mocking-of-a-module
Sometimes, difficulty in mocking functions for testing can indicate an organizational problem in your code, e.g. a violation of the single-responsibility principle. Pondering things like this starts to venture into more philosophical territory (which Stack Overflow is not geared towards), but generally it's helpful to isolate your modules in a way that is compatible with testing -- some of the common code/repo organizational patterns fall into place more easily when you give due consideration to facilitating testing.
As already noted, Mock allows the passthrough option.
The Mox package does not have a viable solution to this particular use case -- even its skipping-optional-callbacks option does not really fit the bill.
Another option is to go the more manual route: pass an opt (or read one out of the Application config) that can be overridden at runtime to facilitate testing. This tactic smells to me a bit like JavaScript's heavy reliance on passing callback functions, but it can work in a pinch, e.g. something like:
def download(url, opts \\ []) do
  http_client = Keyword.get(opts, :client, HTTPoison)
  http_client.get(url)
end

# OR

def download(url) do
  http_client = Application.get_env(:myapp, :http_client, HTTPoison)
  http_client.get(url)
end
Then in your tests:
test "download a file" do
assert {:ok, _} = MyApp.download("http://example", client: HttpClientMock)
end
# OR...
setup do
starting_value = Application.get_env(:myapp, :http_client)
on_exit(fn ->
Application.put_env(:myapp, :http_client, starting_value)
end)
end
test "download a file" do
Application.put_env(:myapp, :http_client, ClientMock)
# ...
end
This has the disadvantage of punting compile-time errors into runtime (which might be a worthwhile tradeoff to achieve test coverage), and this approach can become disorganized, so use with care.
Generally, I've found that Mox's approach of relying on behaviours/callbacks leads to cleaner tests and cleaner code, but your mileage and use cases may vary.

What should be the easiest way to unit test influxdb queries

I have a service that only makes queries (read/write) to InfluxDB.
I want to unit test this, but I'm not sure how to do it. I've read a bunch of tutorials about mocking; a lot of them deal with components like go-sqlmock, but since I'm using InfluxDB, I could not use it.
I also found other components I've tried, like GoMock or testify, to be over-complicated.
What I'm thinking of doing is to create a Repository Layer: an interface that declares all the methods I need to run / test, with concrete implementations passed in through dependency injection.
I think it could work, but is it the easiest way to do it?
I guess having Repositories everywhere, even for small services, just for them to be testable, seems to be over-engineered.
I can give you code if needed, but I think my question is more theoretical than practical. It is about the easiest way to mock a custom DB for unit testing.
To expand on @Markus W Mahlberg's answer:
If the goal is to verify the queries are valid and actually execute against influx there's no shortcut for actually performing these against influx. These are usually considered to be "integration" tests. I have found with docker-compose that these tests can be just as reliable as unit tests, and fast enough to be integrated into CI. Having the tests execute in CI enables local engineers to easily run these tests to verify their query changes as well.
I guess having Repositories everywhere, even for small services, just for them to be testable, seems to be over-engineered.
I have found this to be a pretty polarizing discussion. A test implementation IS a concrete implementation and paves the way for reliable, repeatable tests that support easily isolating and exercising specific components of your code.
I want to unit test this, but I'm not sure how to do it,
I think this is pretty nuanced; IMO unit testing queries provides negative value. Value comes from using a repository interface to allow your unit tests to explicitly configure the responses you would receive from influx, in order to fully exercise your application code. This provides no feedback on influx itself, which is why the integration tests are essential for verifying that your application can validly configure, connect, and query against influx. That validation otherwise implicitly happens when you deploy your application, at which point it becomes far more expensive in terms of feedback than verifying it locally and in CI with integration tests.
I created a diagram to try and illustrate these differences:
Unit tests with a repository are focused on your application code and provide little feedback/value on anything to do with influx. Integration tests are useful for verifying your client (they can perhaps be extended to your application, depending on where the tests are exercised, but I prefer to bound them to the client, since you already have static feedback from Go on the interfaces and calls). Then finally, as @Markus points out, the step from integration tests to e2e tests is pretty small and allows you to test your full service.
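As a rough Go sketch of that repository seam (all names here are hypothetical, imports are omitted, and the production implementation would wrap whichever Influx client you actually use):
// OrderRepository is the seam between the application logic and InfluxDB.
// The production implementation wraps the Influx client; tests use a fake.
type OrderRepository interface {
    CountOrders(ctx context.Context, since time.Time) (int, error)
}

type fakeOrderRepository struct {
    count int
    err   error
}

func (f *fakeOrderRepository) CountOrders(ctx context.Context, since time.Time) (int, error) {
    return f.count, f.err
}

func TestReportService_OrdersLastHour(t *testing.T) {
    repo := &fakeOrderRepository{count: 42}
    svc := NewReportService(repo) // hypothetical service under test, backed by the fake

    got, err := svc.OrdersLastHour(context.Background())
    if err != nil {
        t.Fatal(err)
    }
    if got != 42 {
        t.Fatalf("got %d orders, want 42", got)
    }
}
The fake lets the unit test dictate exactly what "influx" returns, including error cases, while the real query text only gets exercised by the integration tests.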
By its very definition, if you test your integration with an external resource, we are talking about integration tests, not unit tests. So we have two problems to solve here.
Unit tests
What you typically do is have a data access layer which accepts interfaces, which in turn are easy to mock, so you can unit test your application logic.
package main

import (
    "errors"
    "fmt"
)

var (
    values   = map[string]string{"foo": "bar", "bar": "baz"}
    Expected = errors.New("Expected error")
)

type Getter interface {
    Get(name string) (string, error)
}

// ErrorGetter implements Getter and always returns an error to test the error handling code of the caller.
// ofc, you could (and prolly should) use some mocking here in order to be able to test various other cases
type ErrorGetter struct{}

func (e ErrorGetter) Get(name string) (string, error) {
    return "", Expected
}

// MapGetter implements Getter and uses a map as its datasource.
// Here you can see that you actually get an advantage: you decouple your logic from the data source,
// making refactoring (and debugging) **much** easier WTSHTF.
type MapGetter struct {
    data map[string]string
}

func (m MapGetter) Get(name string) (string, error) {
    if v, ok := m.data[name]; ok {
        return v, nil
    }
    return "", fmt.Errorf("No value found for %s", name)
}

type retriever struct {
    g Getter
}

func (r retriever) retrieve(name string) (string, error) {
    return r.g.Get(name)
}

func main() {
    // Assume this is test code. No tests possible on playground ;)
    bad := retriever{g: ErrorGetter{}}
    s, err := bad.retrieve("baz")
    if s != "" || err == nil {
        panic("Something went seriously wrong")
    }

    // Needs to fail as well, as "baz" is not in values
    good := retriever{g: MapGetter{values}}
    s, err = good.retrieve("baz")
    if s != "" || err == nil {
        panic("Something went seriously wrong")
    }

    s, err = good.retrieve("foo")
    if s != "bar" || err != nil {
        panic("Something went seriously wrong")
    }
}
In the example above, I actually had to implement two Getters to cover all test cases, since I could not use a mocking library, but you get the picture.
As for the over-engineering: plain and simple, no, that is not over-engineering. It is what I personally call proper craftsmanship. It will pay off in the long run to get used to it. Maybe not in this project, but in one to come.
Integration tests
Dodgy. What I tend to do is to make sure my queries are correct before I commit them ;)
In the rare case I really want to verify my queries in a CI for example, I usually create a Makefile which in turn spins up a docker(-compose) which provides the stuff I want to integrate against and then runs the tests.
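In Go, one common way to keep those docker-compose-backed tests out of the regular unit-test run is a build tag plus an environment check, roughly like the sketch below (the INFLUX_URL variable, package name, and test name are placeholders):
//go:build integration

package orders_test

import (
    "os"
    "testing"
)

func TestOrderQueriesAgainstInflux(t *testing.T) {
    url := os.Getenv("INFLUX_URL") // e.g. exported by the Makefile after docker-compose up
    if url == "" {
        t.Skip("INFLUX_URL not set; skipping integration test")
    }
    // ... connect to the InfluxDB instance at url, write a fixture point,
    // run the real queries and assert on the results ...
}
The regular go test ./... run never sees these; the Makefile target would run go test -tags integration ./... once the containers are up.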

Unit testing functions with side effects?

Let's say you're writing a function to check if a page was reached by the appropriate URL. The page has a "canonical" stub - for example, while a page could be reached at stackoverflow.com/questions/123, we would prefer (for SEO reasons) to redirect it to stackoverflow.com/questions/123/how-do-i-move-the-turtle-in-logo - and the actual redirect is safely contained in its own method (e.g. redirectPage($url)), but how do you properly test the function which calls it?
For example, take the following function:
function checkStub($questionId, $baseUrl, $stub) {
    $canonicalStub = $model->getStub($questionId);
    if ($stub != $canonicalStub) {
        redirectPage($baseUrl . $canonicalStub);
    }
}
If you were to unit test the checkStub() function, wouldn't the redirect get in the way?
This is part of a larger problem where certain functions seem to get too big and leave the realm of unit testing and into the world of integration testing. My mind immediately thinks of routers and controllers as having these sorts of problems, as testing them necessarily leads to the generation of pages rather than being confined to just their own function.
Do I just fail at unit testing?
You say...
This is part of a larger problem where certain functions seem to get too big and leave the realm of unit testing and into the world of integration testing
I think this is why unit testing is (1) hard and (2) leads to code that doesn't crumble under its own weight. You have to be meticulous about breaking all of your dependencies or you end up with unit tests == integration tests.
In your example, you would inject a redirector as a dependency. You use a mock, double or spy. Then you do the tests as @atk lays out. Sometimes it's not worth it. More often it forces you to write better code. And it's hard to do without an IoC container.
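To make that concrete (sketched in Go rather than PHP, with hypothetical names, and with the model lookup left out so the focus stays on the redirect seam), the injected redirector can be a one-method interface plus a spy that records calls:
package page_test

import "testing"

type Redirector interface {
    Redirect(url string)
}

// spyRedirector records redirect calls instead of performing them.
type spyRedirector struct {
    calls []string
}

func (s *spyRedirector) Redirect(url string) {
    s.calls = append(s.calls, url)
}

// checkStub is the function under test, with the redirect injected.
func checkStub(r Redirector, canonicalStub, stub, baseURL string) {
    if stub != canonicalStub {
        r.Redirect(baseURL + canonicalStub)
    }
}

func TestCheckStubRedirectsOnMismatch(t *testing.T) {
    spy := &spyRedirector{}
    checkStub(spy, "/how-do-i-move-the-turtle-in-logo", "/wrong-stub", "https://example.com/questions/123")

    if len(spy.calls) != 1 {
        t.Fatalf("expected exactly one redirect, got %d", len(spy.calls))
    }
}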
This is an old question, but I think this answer is relevant. @Rob states that you would inject a redirector as a dependency - and sure, this works. However, your problem is that you don't have a good separation of concerns.
You need to make your functions as atomic as possible, and then compose larger functionality using the granular functions you've created. You wrote this:
function checkStub($questionId, $baseUrl, $stub) {
    $canonicalStub = $model->getStub($questionId);
    if ($stub != $canonicalStub) {
        redirectPage($baseUrl . $canonicalStub);
    }
}
I'd write this:
function checkStubEquality($stub1, $stub2) {
    return $stub1 == $stub2;
}

$canonicalStub = $model->getStub($questionId);
if (!checkStubEquality($canonicalStub, $stub)) redirectPage($baseUrl . $canonicalStub);
It sounds like you just have another test case. You need to check that the stub is identified correctly as a stub with both positive and negative testing, and you need to check that the page to which you are redirected is correct.
Or do I totally misunderstand the question?

Webservice test isolation - but when to verify the webservice itself?

I am isolating my webservice-related tests from the actual webservices with Stubs.
How do you/should I incorporate tests to ensure that my crafted responses match the actual webservice ones (I don't have control over it)?
I don't want to know how to do it, but when and where?
Should I create a test suite just for testing the test data?...
I would use something like this excellent tool: Storm
If you can, install the service in a small, completely controlled environment. Drawback: You must find a way to be notified when a new version is rolled out.
If that's not possible, write a test that calls the real service and checks for vital points (do I get a response? Are all parts there and where I expect them? Can I parse the result?)
Avoid things like checking timestamps, result size, etc. - that is, things that can and do change all the time.
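Such a vital-points check can stay very small. For example, a Go sketch might look like the following (the URL and the expected fields are placeholders for whatever the real service actually guarantees):
package contract_test

import (
    "encoding/json"
    "net/http"
    "testing"
)

func TestLiveServiceContract(t *testing.T) {
    resp, err := http.Get("https://example.com/api/orders") // placeholder URL
    if err != nil {
        t.Fatalf("no response from service: %v", err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        t.Fatalf("unexpected status: %d", resp.StatusCode)
    }

    var payload struct {
        Orders []struct {
            ID string `json:"id"`
        } `json:"orders"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&payload); err != nil {
        t.Fatalf("response no longer parses: %v", err)
    }
    // Deliberately no assertions on timestamps, counts or other volatile values.
}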
You can test the possible failures using EasyMock as follows:
public void testDisplayProductsWhenWebServiceThrowsRemoteLookupException() {
    ...
    EasyMock.expect(mockWebService.getProducts(category)).andThrow(new RemoteLookupException());
    ...
    someServiceOrController.someMethodThatUsesMockWebService(...);
}
Repeat for all possible failure scenarios. The other solution is to implement a dummy SEI yourself. Using JAX-WS, you can trivially annotate a java class that generates an interface consistent with the client you consume. All of the methods can just return dummy data. You can then deploy the services on your own server and point your test environment at the dummy location.
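The same dummy-service idea also works with plain standard-library tools; for example, in Go, httptest.NewServer gives you a throwaway endpoint that serves canned data. In the sketch below the client constructor, method, and response shape are hypothetical, and imports are omitted:
func TestClientAgainstStubService(t *testing.T) {
    stub := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        fmt.Fprint(w, `{"products":[{"id":"p1","name":"dummy"}]}`)
    }))
    defer stub.Close()

    // Point the client under test at the stub instead of the real service.
    client := NewProductClient(stub.URL) // hypothetical constructor
    products, err := client.GetProducts(context.Background(), "books")
    if err != nil {
        t.Fatal(err)
    }
    if len(products) != 1 {
        t.Fatalf("got %d products, want 1", len(products))
    }
}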
Perhaps more importantly than any of the crap I've said so far, you can take the advice of the authors of The Pragmatic Programmer and program with assertions. That is, given that you must inevitably make certain assumptions about the web service you consume (since you have no control over its implementation), you can add code such as:
if (resultOfWebService == null || resultOfWebService.getId() == null)
    throw new AssertionError("WebService violated contract by doing xyz: result => " + resultOfWebService);
That way, if your assumptions don't hold, you'll at least find out about it instead of potentially failing silently!
You can also turn on schema validations and protocol validations to ensure that the service is operating according to spec.

Unit testing code with a file system dependency

I am writing a component that, given a ZIP file, needs to:
Unzip the file.
Find a specific dll among the unzipped files.
Load that dll through reflection and invoke a method on it.
I'd like to unit test this component.
I'm tempted to write code that deals directly with the file system:
void DoIt()
{
    Zip.Unzip(theZipFile, "C:\\foo\\Unzipped");
    System.IO.File myDll = File.Open("C:\\foo\\Unzipped\\SuperSecret.bar");
    myDll.InvokeSomeSpecialMethod();
}
But folks often say, "Don't write unit tests that rely on the file system, database, network, etc."
If I were to write this in a unit-test friendly way, I suppose it would look like this:
void DoIt(IZipper zipper, IFileSystem fileSystem, IDllRunner runner)
{
    string path = zipper.Unzip(theZipFile);
    IFakeFile file = fileSystem.Open(path);
    runner.Run(file);
}
Yay! Now it's testable; I can feed in test doubles (mocks) to the DoIt method. But at what cost? I've now had to define 3 new interfaces just to make this testable. And what, exactly, am I testing? I'm testing that my DoIt function properly interacts with its dependencies. It doesn't test that the zip file was unzipped properly, etc.
It doesn't feel like I'm testing functionality anymore. It feels like I'm just testing class interactions.
My question is this: what's the proper way to unit test something that is dependent on the file system?
edit I'm using .NET, but the concept could apply to Java or native code too.
Yay! Now it's testable; I can feed in test doubles (mocks) to the DoIt method. But at what cost? I've now had to define 3 new interfaces just to make this testable. And what, exactly, am I testing? I'm testing that my DoIt function properly interacts with its dependencies. It doesn't test that the zip file was unzipped properly, etc.
You have hit the nail right on its head. What you want to test is the logic of your method, not necessarily whether a true file can be addressed. You don't need to test (in this unit test) whether a file is correctly unzipped; your method takes that for granted. The interfaces are valuable in themselves because they provide abstractions that you can program against, rather than implicitly or explicitly relying on one concrete implementation.
Your question exposes one of the hardest parts of testing for developers just getting into it:
"What the hell do I test?"
Your example isn't very interesting because it just glues some API calls together so if you were to write a unit test for it you would end up just asserting that methods were called. Tests like this tightly couple your implementation details to the test. This is bad because now you have to change the test every time you change the implementation details of your method because changing the implementation details breaks your test(s)!
Having bad tests is actually worse than having no tests at all.
In your example:
void DoIt(IZipper zipper, IFileSystem fileSystem, IDllRunner runner)
{
    string path = zipper.Unzip(theZipFile);
    IFakeFile file = fileSystem.Open(path);
    runner.Run(file);
}
While you can pass in mocks, there's no logic in the method to test. If you were to attempt a unit test for this it might look something like this:
// Assuming that zipper, fileSystem, and runner are mocks
void testDoIt()
{
    // mock behavior of the mock objects
    IFakeFile file = mock(IFakeFile.class);
    when(zipper.Unzip(any(File.class))).thenReturn("some path");
    when(fileSystem.Open("some path")).thenReturn(file);

    // run the test
    someObject.DoIt(zipper, fileSystem, runner);

    // verify things were called
    verify(zipper).Unzip(any(File.class));
    verify(fileSystem).Open("some path");
    verify(runner).Run(file);
}
Congratulations, you basically copy-pasted the implementation details of your DoIt() method into a test. Happy maintaining.
When you write tests you want to test the WHAT and not the HOW.
See Black Box Testing for more.
The WHAT is the name of your method (or at least it should be). The HOW are all the little implementation details that live inside your method. Good tests allow you to swap out the HOW without breaking the WHAT.
Think about it this way, ask yourself:
"If I change the implementation details of this method (without altering the public contract) will it break my test(s)?"
If the answer is yes, you are testing the HOW and not the WHAT.
To answer your specific question about testing code with file system dependencies, let's say you had something a bit more interesting going on with a file and you wanted to save the Base64 encoded contents of a byte[] to a file. You can use streams for this to test that your code does the right thing without having to check how it does it. One example might be something like this (in Java):
interface StreamFactory {
    OutputStream outStream();
    InputStream inStream();
}

class Base64FileWriter {
    public void write(byte[] contents, StreamFactory streamFactory) throws IOException {
        OutputStream outputStream = streamFactory.outStream();
        outputStream.write(Base64.encodeBase64(contents));
    }
}

@Test
public void save_shouldBase64EncodeContents() throws IOException {
    OutputStream outputStream = new ByteArrayOutputStream();
    StreamFactory streamFactory = mock(StreamFactory.class);
    when(streamFactory.outStream()).thenReturn(outputStream);

    // Run the method under test
    Base64FileWriter fileWriter = new Base64FileWriter();
    fileWriter.write("Man".getBytes(), streamFactory);

    // Assert we saved the base64 encoded contents
    assertThat(outputStream.toString()).isEqualTo("TWFu");
}
The test uses a ByteArrayOutputStream, but in the application (using dependency injection) the real StreamFactory (perhaps called FileStreamFactory) would return a FileOutputStream from outStream() and would write to a File.
What was interesting about the write method here is that it was writing the contents out Base64 encoded, so that's what we tested for. For your DoIt() method, this would be more appropriately tested with an integration test.
There's really nothing wrong with this, it's just a question of whether you call it a unit test or an integration test. You just have to make sure that if you do interact with the file system, there are no unintended side effects. Specifically, make sure that you clean up after yourself -- delete any temporary files you created -- and that you don't accidentally overwrite an existing file that happened to have the same filename as a temporary file you were using. Always use relative paths and not absolute paths.
It would also be a good idea to chdir() into a temporary directory before running your test, and chdir() back afterwards.
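As a small illustration of that housekeeping, Go's testing package can do most of it for you: a t.TempDir() is removed automatically, and a deferred os.Chdir restores the working directory. A sketch (the file name and contents are placeholders):
package fsdep_test

import (
    "os"
    "testing"
)

func TestWorksInTempDir(t *testing.T) {
    dir := t.TempDir() // created fresh and removed automatically after the test

    oldWD, err := os.Getwd()
    if err != nil {
        t.Fatal(err)
    }
    if err := os.Chdir(dir); err != nil {
        t.Fatal(err)
    }
    defer os.Chdir(oldWD) // chdir back afterwards

    // Anything the code under test writes relative to the CWD lands in dir.
    if err := os.WriteFile("scratch.txt", []byte("temporary"), 0o644); err != nil {
        t.Fatal(err)
    }
}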
I am reticent to pollute my code with types and concepts that exist only to facilitate unit testing. Sure, if it makes the design cleaner and better then great, but I think that is often not the case.
My take on this is that your unit tests would do as much as they can which may not be 100% coverage. In fact, it may only be 10%. The point is, your unit tests should be fast and have no external dependencies. They might test cases like "this method throws an ArgumentNullException when you pass in null for this parameter".
I would then add integration tests (also automated and probably using the same unit testing framework) that can have external dependencies and test end-to-end scenarios such as these.
When measuring code coverage, I measure both unit and integration tests.
There's nothing wrong with hitting the file system, just consider it an integration test rather than a unit test. I'd swap the hard coded path with a relative path and create a TestData subfolder to contain the zips for the unit tests.
If your integration tests take too long to run then separate them out so they aren't running as often as your quick unit tests.
I agree, sometimes I think interaction based testing can cause too much coupling and often ends up not providing enough value. You really want to test unzipping the file here not just verify you are calling the right methods.
One way would be to write the unzip method to take InputStreams. Then the unit test could construct such an InputStream from a byte array using ByteArrayInputStream. The contents of that byte array could be a constant in the unit test code.
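The same idea translates to other stacks; for instance, a Go sketch using archive/zip and bytes.Reader lets the test build and unzip an archive entirely in memory, with no file system involved (the entry name and payload are placeholders):
package unzip_test

import (
    "archive/zip"
    "bytes"
    "testing"
)

func TestUnzipFromMemory(t *testing.T) {
    // Build a small zip archive in memory instead of touching the file system.
    var buf bytes.Buffer
    zw := zip.NewWriter(&buf)
    f, err := zw.Create("SuperSecret.bar")
    if err != nil {
        t.Fatal(err)
    }
    if _, err := f.Write([]byte("payload")); err != nil {
        t.Fatal(err)
    }
    if err := zw.Close(); err != nil {
        t.Fatal(err)
    }

    // The code under test only needs an io.ReaderAt, not a path on disk.
    zr, err := zip.NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()))
    if err != nil {
        t.Fatal(err)
    }
    if len(zr.File) != 1 || zr.File[0].Name != "SuperSecret.bar" {
        t.Fatalf("unexpected archive contents: %+v", zr.File)
    }
}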
This seems to be more of an integration test as you are depending on a specific detail (the file system) that could change, in theory.
I would abstract the code that deals with the OS into its own module (class, assembly, jar, whatever). In your case you want to load a specific DLL if found, so make an IDllLoader interface and a DllLoader class. Have your app acquire the DLL from the DllLoader using the interface and test that... you're not responsible for the unzip code after all, right?
Assuming that "file system interactions" are well tested in the framework itself, create your method to work with streams, and test this. Opening a FileStream and passing it to the method can be left out of your tests, as FileStream.Open is well tested by the framework creators.
You should not test class interaction and function calling. Instead, you should consider integration testing. Test the required result and not the file loading operation.
As others have said, the first is fine as an integration test. The second tests only what the function is supposed to actually do, which is all a unit test should do.
As shown, the second example looks a little pointless, but it does give you the opportunity to test how the function responds to errors in any of the steps. You don't have any error checking in the example, but in the real system you may have, and the dependency injection would let you test all the responses to any errors. Then the cost will have been worth it.
For unit tests I would suggest that you include the test file in your project (EAR file or equivalent), then use a relative path in the unit tests, i.e. "../testdata/testfile".
As long as your project is correctly exported/imported, your unit tests should work.