In my unit tests, I want to assert that workflow.Sleep() was called. How do I do this?
It's possible to access the emulated time using the TestWorkflowEnvironment.Now() function. For example:
before := testenv.Now()
testenv.ExecuteWorkflow(...)
after := testenv.Now()
Then assert on the change between before and after.
My problem is specific to Go-kit and how to organize code within it.
I'm trying to write a unit test for the following function:
func MakeHandler(svc Service, logger kitlog.Logger) http.Handler {
    orderHandler := kithttptransport.NewServer(
        makeOrderEndpoint(svc),
        decodeRequest,
        encodeResponse,
    )
    r := mux.NewRouter()
    r.Handle("/api/v1/order/", orderHandler).Methods("GET")
    return r
}
What would be the correct way of writing a proper unit test? I have seen examples such as the following:
sMock := &ServiceMock{}
h := MakeHandler(sMock, log.NewNopLogger())
r := httptest.NewRecorder()
req := httptest.NewRequest("GET", "/api/v1/order/", bytes.NewBuffer([]byte("{}")))
h.ServeHTTP(r, req)
And then testing the body and headers of the response. But this doesn't seem like a proper unit test, as it calls other parts of the code (orderHandler). Is it possible to just validate what's returned from MakeHandler() instead of exercising it during a request?
TL;DR: Yes, that test is in the right direction. You shouldn't try to test the internals of the returned handler, since that third-party package may change in ways you didn't expect in the future.
Is it possible to just validate what's returned from MakeHandler() instead of
during a request?
Not in a good way. MakeHandler() returns an interface and ideally you'd use
just the interface methods in your tests.
You could look at the docs of the type returned by mux.NewRouter() to see if
there are any fields or methods in the concrete type that can give you the
information, but that can turn out to be a pain - both for understanding the
tests (one more rarely used type to learn about) and due to how future
modifications to the mux package may affect your code without breaking the
tests.
What would be the correct way of writing a proper unit test?
Your example is actually in the right direction. When testing MakeHandler(),
you're testing that the handler returned by it is able to handle all the paths
and calls the correct handler for each path. So you need to call the
ServeHTTP() method, let it do its thing and then test to see it worked
correctly. Only introspecting the handler does not guarantee correctness during
actual usage.
You may need to make actually valid requests, though, so you can tell which handler was called based on the response body or headers. That should bring the test to a quite reasonable state. (I think you already have that.)
Similarly, I'd add a basic sub test for each route that's added in the future.
Detailed handler tests can be written in separate funcs.
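A table-driven version of that idea can be sketched with only the standard library. Here http.ServeMux and a stub handler stand in for mux.NewRouter() and the go-kit Server, so newHandler and checkRoute are illustrative names, not go-kit API:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// newHandler mirrors the shape of MakeHandler(): it wires one route to a
// handler. The body written by the stub identifies which handler ran.
func newHandler() http.Handler {
	m := http.NewServeMux()
	m.HandleFunc("/api/v1/order/", func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "order")
	})
	return m
}

// checkRoute drives the handler through ServeHTTP, the same way a real
// request would reach it, and reports what came back.
func checkRoute(h http.Handler, method, path string) (status int, body string) {
	rec := httptest.NewRecorder()
	req := httptest.NewRequest(method, path, nil)
	h.ServeHTTP(rec, req)
	return rec.Code, rec.Body.String()
}

func main() {
	h := newHandler()
	status, body := checkRoute(h, "GET", "/api/v1/order/")
	fmt.Println(status, body) // 200 order
	status, _ = checkRoute(h, "GET", "/no/such/route")
	fmt.Println(status) // 404
}
```

Each case in a routing table like this becomes one sub-test: assert on the status code and on whichever part of the response identifies the handler.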
I cannot think of a better heading.
In the following code, if rollBackLogger is nil, the first assertion would fail, but the following assertions would panic instead of failing cleanly.
Is there a way available to avoid this, other than using an if statement?
I believe that this is a very common situation for unit testing and that there should be some function in assert or some other way around to avoid this.
assert.NotNil(rollBackLogger)
assert.Equal("Action", rollBackLogger[0].Action)
assert.Equal("random path", rollBackLogger[0].FilePath)
Use require.NotNil instead.
Package require implements the same assertions as the assert package but stops test execution when a test fails.
require.NoError is also particularly useful.
You can simply use t.FailNow() if you want the test to stop when a condition doesn't hold.
I don't think there's a way to stop the test on an assert failure without using a condition or an external package.
if !assert.NotNil(rollBackLogger) {
    t.FailNow()
}
assert.Equal("Action", rollBackLogger[0].Action)
assert.Equal("random path", rollBackLogger[0].FilePath)
or if you use the testify/assert package,
if !assert.NotNil(rollBackLogger) {
    assert.FailNow(t, "message")
}
assert.Equal("Action", rollBackLogger[0].Action)
assert.Equal("random path", rollBackLogger[0].FilePath)
I have unit tests for most of our code, but I cannot figure out how to generate unit test coverage for certain code in main() in the main package.
The main function is pretty simple. It is basically a select block: it reads flags, then either calls another function, executes something, or simply prints help on screen. However, if the command-line options are not set correctly, it will exit with various error codes. Hence the need for sub-process testing.
I tried the sub-process testing technique, but modified the code so that it includes a flag for coverage:
cmd := exec.Command(os.Args[0], "-test.run=TestMain -test.coverprofile=/vagrant/ucover/coverage2.out")
Here is original code: https://talks.golang.org/2014/testing.slide#23
Explanation of above slide: http://youtu.be/ndmB0bj7eyw?t=47m16s
But it doesn't generate a cover profile, and I haven't been able to figure out why not. It does generate a cover profile for the main process executing the tests, but any code executed in the sub-process is, of course, not marked as executed.
I am trying to achieve as much code coverage as possible. I am not sure if I am missing something, if there is an easier way to do this, or if it is just not possible.
Any help is appreciated.
I went with another approach, which didn't involve refactoring main() (see this commit):
I use a global (unexported) variable:
var args []string
And then in main(), I use os.Args unless the unexported args variable was set:
a := os.Args[1:]
if args != nil {
    a = args
}
flag.CommandLine.Parse(a)
In my test, I can set the parameters I want:
args = []string{"-v", "-audit", "_tests/p1/conf/gitolite.conf"}
main()
And I still achieve 100% code coverage, even over main().
I would factor the logic that needs to be tested out of main():
func main() {
    start(os.Args)
}

func start(args []string) {
    // old main() logic
}
This way you can unit-test start() without mutating os.Args.
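A runnable sketch of that refactoring, under the assumption of a single hypothetical -v flag plus one positional argument: start() builds its own FlagSet on every call, so tests can invoke it repeatedly with different argument slices without mutating os.Args or tripping a "flag redefined" panic.

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// start owns all the logic; it parses flags from the slice it is given
// using a fresh FlagSet, never the global flag.CommandLine.
func start(args []string) (verbose bool, path string, err error) {
	fs := flag.NewFlagSet("app", flag.ContinueOnError)
	v := fs.Bool("v", false, "verbose output")
	if err := fs.Parse(args); err != nil {
		return false, "", err
	}
	return *v, fs.Arg(0), nil
}

func main() {
	// main stays a thin wrapper: forward the real arguments and map
	// a parse failure to an exit code.
	v, p, err := start(os.Args[1:])
	if err != nil {
		os.Exit(2)
	}
	fmt.Println(v, p)
}
```

A test then simply calls start() with whatever argument slice it wants to exercise and asserts on the returned values.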
Using @VonC's solution with Go 1.11, I found I had to reset flag.CommandLine in each test that redefines the flags, to avoid a "flag redefined" panic:
for _, check := range checks {
    t.Run("flagging "+check.arg, func(t *testing.T) {
        flag.CommandLine = flag.NewFlagSet(cmd, flag.ContinueOnError)
        args = []string{check.arg}
        main()
    })
}
I am trying to understand what exactly Verify or VerifyAll does.
I was searching and found the below info on using Moq:
Arrange
- Mock
- Set up expectations for dependencies
- Set up expected results
- Create class under test
Act
- Invoke method under test
Assert
- Assert actual results match expected results
- Verify that expectations were met
So what exactly does Verify do? I can test everything using Assert, and in case of any failure the unit test will fail.
What extra work does Verify do? Is it a replacement for Assert?
Some more clarity would help me understand.
Assert vs Mock.Verify
Assert is used to do checks on the class/object/subject under test (SUT).
Verify is used to check if the collaborators of the SUT were notified or contacted.
So suppose you are testing a car object, which has an engine as a collaborator/dependency.
You would use Assert to see if car.IsHumming after you invoke car.PushStart().
You would use Verify to see if _mockEngine.Ignition() received a call.
Verify vs VerifyAll
Approach One:
Explicitly set up all the operations you expect to be triggered on the mocked collaborator by the subsequent Act step
Act - do something that will cause the operations to be triggered
call _mock.VerifyAll(): causes every expectation that you set up in (1) to be verified
Approach Two:
Act - do something that will cause the operations to be triggered
call _mock.Verify(m => m.Operation): causes verification that Operation was in fact called, one Verify call per verification. It also allows you to check the count of calls, e.g. exactly once.
So if you have multiple operations on the Mock OR if you need the mocked method to return a value which will be processed, then Setup + Act + VerifyAll is the way to go
If you have a few notifications that you want to be checked, then Verify is easier.
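The car/engine distinction can be sketched in Go with a hand-rolled mock (Moq is a .NET library, so the types here are illustrative, not Moq API): the Assert checks state on the subject under test, while the Verify checks that the collaborator was contacted.

```go
package main

import "fmt"

// Engine is the collaborator; Car is the subject under test (SUT).
type Engine interface{ Ignition() }

type Car struct {
	engine  Engine
	humming bool
}

// PushStart must contact the collaborator and update the SUT's state.
func (c *Car) PushStart() {
	c.engine.Ignition()
	c.humming = true
}

// mockEngine records calls so the test can Verify the collaboration.
type mockEngine struct{ ignitionCalls int }

func (m *mockEngine) Ignition() { m.ignitionCalls++ }

func main() {
	engine := &mockEngine{}
	car := &Car{engine: engine}
	car.PushStart()

	// Assert: state of the subject under test.
	fmt.Println(car.humming) // true
	// Verify: the collaborator was contacted, and how many times.
	fmt.Println(engine.ignitionCalls) // 1
}
```

A mocking library automates exactly the bookkeeping that mockEngine does by hand here.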
I have this piece of code:
function aFunctionForUnitTesting(aParameter) {
    return aFirstCheck(aParameter) &&
           aOtherOne(aParameter) &&
           aLastOne(aParameter);
}
How can I unit test this?
My problem is the following. Let's say I create this unit test:
FailWhenParameterDoesntFillFirstCheck()
{
    Assert.IsFalse(new myObject().aFunctionForUnitTesting(aBadParameter));
}
How do I know that this test passes because of the first check? It might have passed because the second or the third check failed instead, so my aFirstCheck function might be bugged without the test catching it.
You need to pass a different parameter value, one that passes the first check and fails on the second:
FailWhenParameterDoesntFillFirstCheck()
{
    Assert.IsFalse(new myObject().aFunctionForUnitTesting(aBadParameterThatPassesFirstCheckButFailsTheSecond));
}
You don't have to write all these cases; it depends on what level of code coverage you want. You can get 'modified condition/decision coverage' in four tests:
- all params ok
- one test for each param failing its check
'Multiple condition coverage' requires 8 tests here. Not sure I'd bother unless I was working for Airbus :-)
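Those four modified condition/decision coverage cases can be sketched in Go; the three predicates below are hypothetical stand-ins for aFirstCheck, aOtherOne and aLastOne, each chosen so that one well-picked parameter fails exactly one check.

```go
package main

import "fmt"

// Hypothetical checks over an int parameter, each independently failable.
func firstCheck(p int) bool { return p > 0 }
func otherOne(p int) bool   { return p%2 == 0 }
func lastOne(p int) bool    { return p < 100 }

// underTest is the conjunction from the question.
func underTest(p int) bool {
	return firstCheck(p) && otherOne(p) && lastOne(p)
}

func main() {
	// The four MC/DC cases: each false result is caused by exactly
	// one failing check, so a bug in any single check flips a test.
	fmt.Println(underTest(42))  // all checks pass -> true
	fmt.Println(underTest(-2))  // fails only firstCheck -> false
	fmt.Println(underTest(41))  // fails only otherOne -> false
	fmt.Println(underTest(102)) // fails only lastOne -> false
}
```

Because each failing input passes the other two checks, a false result pins the blame on one specific check, which is exactly what the question asked for.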