How to test code that loops forever - unit-testing

I have an app (epazote) that once started runs forever, but I want to test some values before it blocks/waits until ctrl+c is pressed or it is killed.
Here is a small example: http://play.golang.org/p/t0spQRJB36
package main

import (
    "fmt"
    "os"
    "os/signal"
)

type IAddString interface {
    AddString(string)
}

type addString struct{}

func (self *addString) AddString(s string) {
    fmt.Println(s)
}

func block(a IAddString, s string) {
    // test this
    a.AddString(s)

    // ignore this while testing
    block := make(chan os.Signal)
    signal.Notify(block, os.Interrupt, os.Kill)
    for {
        signalType := <-block
        switch signalType {
        default:
            signal.Stop(block)
            fmt.Printf("%q signal received.", signalType)
            os.Exit(0)
        }
    }
}

func main() {
    a := &addString{}
    block(a, "foo")
}
I would like to know if it is possible to ignore some parts of the code while testing, or how to test these cases. I have implemented an interface (in this case for testing AddString), which helped me test some parts, but I have no idea how to avoid the block and test it.
Any ideas?
Update: Putting the code that is inside the loop (AddString) in another function works, but only for testing that function. If I want full code coverage, I still need to check/test the blocking part, for example how to test that it behaves properly when receiving a ctrl+c or a kill -HUP. I was thinking of maybe creating a fake signal.Notify, but I don't know how to override imported packages, in case that could even work.

Yes, it's possible. Put the code that is inside the loop in a separate function, and unit test that function without the loop.
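Here is a minimal sketch of that extraction for the code above, with the signal channel injected so a test can deliver a fake signal instead of pressing ctrl+c (the split and the handleSignal name are illustrative, and the snippet reuses IAddString and the imports from the question):

// handleSignal holds the logic that used to live inside the loop;
// it is trivially unit-testable because it never blocks or exits.
func handleSignal(sig os.Signal) string {
    return fmt.Sprintf("%q signal received.", sig)
}

// block now only wires things together: it registers the injected
// channel, blocks until a signal arrives, and delegates the
// per-signal work to handleSignal.
func block(a IAddString, s string, sigs chan os.Signal) {
    a.AddString(s)
    signal.Notify(sigs, os.Interrupt, os.Kill)
    sig := <-sigs
    signal.Stop(sigs)
    fmt.Print(handleSignal(sig))
    os.Exit(0)
}

A test can now cover the signal-handling logic by calling handleSignal(os.Interrupt) directly, or by sending a value on sigs from another goroutine; only the few lines of wiring in block, which still block and exit, are left for inspection by hand.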

Introduce test delegates into your code.
Extract your loop into a function that takes 2 functions as arguments: onBeginEvent and onEndEvent. The function signatures should take:
the state that you want to inspect inside the test case
optional: a counter of the loop number (so you can identify each iteration). It is optional because the actual delegate implementation can count the number of times it was invoked by itself.
In the beginning of your loop you call onBeginEvent(counter, currentState); then your code does its normal work, and in the end you call onEndEvent(counter, currentState). Presumably your code has changed currentState.
In production you could use an empty implementation of the function delegates, or implement a nil check in your loop.
You can use this model to put in as many checks of your processing algorithms as you want. Let's say you have 5 checks. Now you look back at it and realize it's becoming too hard. You create an interface that defines your callback functions. These callback functions are a powerful method of changing your service's behavior. You step back one more time and realize that the interface is actually your "service's policy" ;)
Once you take that route you will want to stop your infinite loop somehow. If you want tight control within a test case, you could take a 3rd function delegate that returns true when it is time to quit the loop (see the sketch below). A shared variable is another option to control the quit condition.
This is certainly a higher level of testing than unit testing and it is necessary in complex services.
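A minimal sketch of this delegate idea in Go (all names are illustrative): the loop reports progress through onBegin/onEnd callbacks and asks a third delegate whether it is time to quit, so a test can both observe and terminate it.

type loopState struct {
    processed int // whatever state the test wants to inspect
}

func runLoop(onBegin, onEnd func(i int, s *loopState), shouldQuit func() bool) {
    s := &loopState{}
    for i := 0; !shouldQuit(); i++ {
        if onBegin != nil { // nil check, so production can pass nil delegates
            onBegin(i, s)
        }
        s.processed++ // the real work would happen here
        if onEnd != nil {
            onEnd(i, s)
        }
    }
}

A test can then drive the loop for exactly three iterations:

func TestRunLoop(t *testing.T) {
    iterations := 0
    onEnd := func(i int, s *loopState) { iterations++ }
    runLoop(nil, onEnd, func() bool { return iterations >= 3 })
    if iterations != 3 {
        t.Errorf("got %d iterations, want 3", iterations)
    }
}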

Related

How to verify number of invocations, in the project reactor for retryWhen

I have the following function:
public Mono<Integer> revertChange() {
    someService.someMethod()
        .retryWhen(/* 3 times, with 150ms of delay, if a specific error occurred */)
        .onError(e -> log_the_error);
}
And I have a simple unit test that is supposed to verify that someService.someMethod was called exactly 3 times:
class Test {
    @InjectMocks
    SomeService someService;

    @Test
    void shouldCallSomeServiceExactlythreetimes_whenErrorOccured() {
        verify(someService).someMethod(3); // someMethod invoked 3 times
    }
}
The problem is that the verify block does not catch that someMethod was executed 3 times; it says only 1. I am using JUnit 5 and JMockit. Maybe there are better alternatives specific to reactive mocks, any ideas?
The verification block does not catch multiple executions of the method.
Mockito.verify(someService, Mockito.times(3)).someMethod();
from https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/Mockito.html :
Arguments passed are compared using equals() method. Read about ArgumentCaptor or ArgumentMatcher to find out other ways of matching / asserting arguments passed.
Although it is possible to verify a stubbed invocation, usually it's just redundant. Let's say you've stubbed foo.bar(). If your code cares what foo.bar() returns then something else breaks (often before even verify() gets executed). If your code doesn't care what foo.bar() returns then it should not be stubbed.
Check also https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/Mockito.html#4
For verification with a timeout, check https://javadoc.io/doc/org.mockito/mockito-core/latest/org/mockito/Mockito.html#verification_timeout
This snippet passes as soon as someMethod() has been called 2 times within 150 ms:
Mockito.verify(someService, Mockito.timeout(150).times(2)).someMethod();
After careful investigation of the problem and a deep dive into the project reactor internals, the solution looks pretty simple: the method that I am calling needs to be wrapped with Mono.defer(() -> someService.someMethod()), so that it is invoked again on every subscription, i.e. on every retry.

Testing a function that contains stderr, stdout and os.Exit()

I'm building a UI for CLI apps. I completed the functions, but I couldn't figure out how to test them.
Repo: https://github.com/erdaltsksn/cui
// Success prints a success message and exits with status 0
func Success(message string) {
    color.Success.Println("√", message)
    os.Exit(0)
}

// Error prints an error message (plus any given errors) and exits with status 1
func Error(message string, err ...error) {
    color.Danger.Println("X", message)
    if len(err) > 0 {
        for _, e := range err {
            fmt.Println(" ", e.Error())
        }
    }
    os.Exit(1)
}
I want to write unit tests for these functions. The problem is that the functions contain print statements and os.Exit(). I couldn't figure out how to write a test for both.
This topic: How to test a function's output (stdout/stderr) in unit tests helped me test the print part. I still need to handle os.Exit().
My solution for now:
func captureOutput(f func()) string {
    var buf bytes.Buffer
    log.SetOutput(&buf)
    f()
    log.SetOutput(os.Stderr)
    return buf.String()
}

func TestSuccess(t *testing.T) {
    type args struct {
        message string
    }
    tests := []struct {
        name   string
        args   args
        output string
    }{
        {"Add test cases.", args{message: "my message"}, "ESC[1;32m"},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            want := tt.output
            got := captureOutput(func() {
                cui.Success(tt.args.message)
            })
            if got != want {
                t.Error("Got:", got, ",", "Want:", want)
            }
        })
    }
}
The usual answer in TDD is that you take your function and divide it into two parts: one part that is easy to test but is not tightly coupled to specific file handles or to a specific implementation of os.Exit; the other part is tightly coupled to these things, but is so simple it obviously has no deficiencies.
Your "unit tests" are mistake detectors that measure the first part.
The second part you write once, inspect it "by hand", and then leave it alone. The idea here being that things are so simple that, once implemented correctly, they don't need to change.
// Warning: untested code ahead
func Foo_is_very_stable() {
    bar_is_easy_to_test(os.Stdin, os.Stdout, os.Exit)
}

func bar_is_easy_to_test(in *os.File, out *os.File, exit func(int)) {
    // Do complicated things here.
}
Now, we are cheating a little bit: os.Exit is special magic that never returns, but bar_is_easy_to_test doesn't really know that.
Another design that is a bit more fair is to put the complicated code into a state machine. The state machine decides what to do, and the host invoking the machine decides how to do that....
// More untested code
switch state_machine.next() {
case OUT:
    println(state_machine.line())
    state_machine.onOut()
case EXIT:
    os.Exit(state_machine.exitCode())
}
Again, you get a complicated piece that is easy to test (the state machine) and a much simpler piece that is stable and easily verified by inspection.
This is one of the core ideas underlying TDD - that we deliberately design our code in such a way that it is "easy to test". The justification for this is the claim that code that is easy to test is also easy to maintain (because mistakes are easily detected and because the designs themselves are "cleaner").
Recommended viewing
Boundaries, by Gary Bernhardt
Building Protocol Libraries..., by Cory Benfield
What you have there is called a "side effect": a situation where the execution of your application spans beyond its environment, its address space. And the thing is, you don't test side effects. It is not always possible, and when it is, it is unreasonably complicated and ugly.
The basic idea is to have your side effects, like CLI output or os.Exit() (or network connections, or file access), decoupled from your main body of logic. There are plenty of ways to do it; the entire "software design" discipline is devoted to that, and @VoiceOfUnreason gives a couple of viable examples.
In your example I would go with wrapping the side effects in functions and arranging some way to inject dependencies into Success() & Error(). If you want to keep those two as plain functions, then it's either a function argument or a global variable holding a function for exiting (as per @Peter's comment), but I'd recommend going the OO way, employing some patterns and achieving much greater flexibility for your lib.
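For example, here is a minimal sketch of the function-variable approach, assuming you are free to add a package-level seam to the cui package (fmt stands in for the repo's color package to keep the sketch self-contained):

package cui

import (
    "fmt"
    "os"
)

// exit is the seam: production keeps os.Exit, a test swaps in a fake
// that records the code instead of terminating the process.
var exit func(code int) = os.Exit

// Success prints a success message and exits with status 0.
func Success(message string) {
    fmt.Println("√", message)
    exit(0)
}

A test in the same package can then intercept the exit:

func TestSuccess(t *testing.T) {
    code := -1
    exit = func(c int) { code = c } // record instead of exiting
    defer func() { exit = os.Exit }()
    Success("my message")
    if code != 0 {
        t.Errorf("exit code = %d, want 0", code)
    }
}

Combined with the output-capturing helper from the question, this also lets you assert the printed message.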

trouble debugging async Task in unit test because it's not entered

So I am trying to test a method of type async Task that is called inside of a command handler; inside that method I have some ifs, and I want to check which branch it goes down.
Because a certain method is called on each branch, I can see which branch it went to by:
await myRepository.Received(1).Method1(3, null);
Imagine the key method is like this:
public async Task MyKeyMethod(int x) {
    if (x == 21)
        Method1("bla");
    if (x == 22)
        Method2("blue");
    if (x == 23)
        Method3("ba");
}
So I want to test that the call MyKeyMethod(2) actually goes into the branch that calls Method2("blue");
And I know that I can do this by something like:
await MyKeyMethod.Received(1).Method2(22); // Received(1) means that method was invoked once.
Question 1: what should 22 be? The parameter supplied to Method2 or the one supplied to MyKeyMethod?
Question2: Why does my code not even enter any async Task method that I have inside the command handler (during debugging)?
Is there any concrete example that you have?
I am able to enter step by step the command by doing something like:
var cmd = new MyCommand(myObject); // myObject is an object that I mocked earlier (gave it some dummy values for each field)
var commandResponse = await handler.Handle(cmd);
Assert.That(commandResponse.IsSuccessful, Is.True);
...just NOT at the next deeper level, like the async Tasks inside those commands. At the moment I can only simulate what the async Tasks return, which is not what I want in this instance.
Question 3. Could this be because those async Task methods are inside a repository that is mocked by using
myRepository = Substitute.For<IMyRepository>();
Question 4. How do I actually step into (rather than mock) Task methods found inside repositories that are mocked?
I am still getting the hang of it, "it" being the broader subject of unit tests in NUnit, but apparently my hunch was right. Because the repository was mocked, I could not enter and debug inside one of its contained methods. So I used a real (not fake) instance of the repository, which took some fake instances of other dependent repos or services in its constructor, and then I could step into that Task.
So, factually, instead of:
myRepository = Substitute.For<IMyRepository>();
I went and created a real instance, such as:
var myRepository = new MyRepository(mockService1, mockRepo2);
where mockService1 was mocked using Substitute, as previously pointed out.
By doing so I could then debug a method like myRepository.MyMethod(x), whose inside the debugger previously couldn't analyse.
If you have a better way of phrasing my conclusions, or a more complete explanation, by all means please go ahead. Thank you.

How can you test code that relies on net.Conn without creating an actual network connection?

If I have code that works with a net.Conn, how can I write tests for it without actually creating a network connection to localhost?
I've seen no solutions to this online; people seem to either ignore it (no tests), write tests that cannot run in parallel (i.e. they use an actual network connection, which uses up ports), or use io.Pipe.
However, net.Conn defines SetReadDeadline and SetWriteDeadline, and io.Pipe doesn't. net.Pipe doesn't either: despite superficially claiming to implement the interface, it is simply implemented with:
func (p *pipe) SetDeadline(t time.Time) error {
    return &OpError{Op: "set", Net: "pipe", Source: nil, Addr: nil, Err: errors.New("deadline not supported")}
}

func (p *pipe) SetReadDeadline(t time.Time) error {
    return &OpError{Op: "set", Net: "pipe", Source: nil, Addr: nil, Err: errors.New("deadline not supported")}
}

func (p *pipe) SetWriteDeadline(t time.Time) error {
    return &OpError{Op: "set", Net: "pipe", Source: nil, Addr: nil, Err: errors.New("deadline not supported")}
}
(see: https://golang.org/src/net/pipe.go)
So... is there some other way of doing this?
I'll accept any answer that shows how to use a stream in a test with a working deadline that is not an actual network socket.
(Idly, this cloudflare blogpost covers the motivation for using deadlines, and why blocking forever in a goroutine per connection is not an acceptable solution; but regardless of that argument, particularly in this case I'm looking for a solution for tests where we deliberately want to handle edge cases where a bad connection hangs, etc.)
(NB. this may seem like a duplicate of Simulate a tcp connection in Go, but notice that all the solutions proposed in that question do not implement the Deadline functions, which is specifically what I'm asking about how to test here)
Your question is very open, so it is not possible to give you "the correct answer". But I think I understand the point where you are stuck. This answer is also open, but it should bring you back onto the right track.
A few days ago I wrote a short article, which shows the principle you have to use.
Before I give some small examples of how such tests work, we need to fix one important point:
We do not test the net package. We assume that the package has no bugs and does what the documentation says. That means we do not care how the Go team has implemented SetReadDeadline and SetWriteDeadline. We only test their usage in our program.
Step 1: Refactoring your code
You did not post any code snippets, so I'll just give a simple example. I guess you have a method or function where you are using the net package.
func myConn(...) error {
    // Your code is here
    c, err := net.Dial("tcp", "12.34.56.78:80")
    c.SetDeadline(t)
    // More code here
}
To be able to test it, you need to refactor your function so that it just uses the net.Conn interface. To do this, the net.Dial() call has to be moved out of the function. Remember that we don't want to test the net.Dial function.
The new function could look something like this:
func myConn(c net.Conn, ...) error {
    // Your code is here
    c.SetDeadline(t)
    // More code here
}
Step 2: Implement the net.Conn interface
For testing you need to implement the net.Conn interface:
type connTester struct {
    deadline time.Time
}

func (c *connTester) Read(b []byte) (n int, err error) {
    return 0, nil
}

...

func (c *connTester) SetDeadline(t time.Time) error {
    c.deadline = t
    return nil
}

...
Complete implementation including a small type check:
https://play.golang.org/p/taAmI61vVz
Step 3: Testing
When testing, we don't care about the Dial() method; we just create a pointer to our test type, which implements the net.Conn interface, and pass it into your function. Afterwards we check in the test case whether the deadline was set correctly.
func TestMyConn(t *testing.T) {
    myconnTester := &connTester{}
    err := myConn(myconnTester, ...)
    ...
    if myconnTester.deadline != expectedDeadline {
        // Test fails
    }
}
When testing, you should always think about what feature you want to test. Abstracting out the functionality you really want to test is truly the most difficult part. Inside simple unit tests you should never test functionality of the standard library. I hope these examples help bring you back onto the right track.
Code that needs to be swapped out for a controlled version in a unit test should live behind an abstraction. In this case the abstraction would be the net.Conn interface. The production code would use the Go std lib net.Conn, but the test code would use a test stub that is configured with the exact logic to exercise your function.
Introducing an abstraction is a powerful pattern that allows swapping out all IO, or timing-based code, for controlled execution of code during a unit test.
@apxp stated the same approach in a comment.
The same approach should work for the deadlines. It could get tricky to simulate a deadline that is reached, because you may have to configure your stub with multiple responses, i.e. the first response succeeds, but the second simulates a deadline that has been reached and returns an error for the second request. A sketch of one way to do this follows.
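For the deadline part specifically, here is a minimal sketch of such a stub: a channel-fed fake whose Read actually honors the read deadline (all names are illustrative; os.ErrDeadlineExceeded is the error value real net.Conn implementations return since Go 1.15):

type stubConn struct {
    net.Conn     // embedded, so the remaining methods are formally satisfied
    readDeadline time.Time
    data         chan []byte // the test feeds "network" bytes in here
}

func (c *stubConn) SetReadDeadline(t time.Time) error {
    c.readDeadline = t
    return nil
}

func (c *stubConn) Read(b []byte) (int, error) {
    var timeout <-chan time.Time
    if !c.readDeadline.IsZero() {
        timeout = time.After(time.Until(c.readDeadline))
    }
    select {
    case d := <-c.data:
        return copy(b, d), nil
    case <-timeout: // nil channel when no deadline is set: blocks forever, like a real conn
        return 0, os.ErrDeadlineExceeded
    }
}

A test that sets a short read deadline and never sends on data can then assert that the code under test gives up with a deadline error instead of hanging.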

Execute test cases in a pre-defined order

Is there a way to execute test cases in Go in a pre-defined order?
P.S.: I am writing test cases for the life cycle of an event, so I have different APIs for all the CRUD operations. I want to run these test cases in a particular order, as an event can only be destroyed after it has been created.
Also, can I get some value from one test case and pass it as input to another? (Example: to test the delete-event API, I need an event_id, which I get when I call the create_event test case.)
I am new to Go, can someone please guide me through?
Thanks in advance
The only way to do it is to encapsulate all your tests into one test function that calls sub-functions in the right order and with the right context, and pass the testing.T pointer to each so they can fail. The downside is that they will all appear as one test. But in fact that is the case: tests are stateless as far as the testing framework is concerned, and each function is a separate test case.
Note that although the tests may run in the order they are written in, I found no documentation stating that this is actually a contract of some sort. So even though you could write them in order and keep the state in external global variables, that's not recommended.
The only flexibility the framework gives you (since Go 1.4) is the TestMain method, which lets you run before/after steps, or setup/teardown:
func TestMain(m *testing.M) {
    if err := setUp(); err != nil {
        panic(err)
    }
    rc := m.Run()
    tearDown()
    os.Exit(rc)
}
But that won't give you what you want. The only way to do that safely is to do something like:
// this is the whole stateful sequence of tests - to the testing framework it's just one case
func TestWrapper(t *testing.T) {
    // let's say you pass context as some containing struct
    ctx := new(context)
    test1(t, ctx)
    test2(t, ctx)
    ...
}

// this holds context between methods
type context struct {
    eventId string
}

func test1(t *testing.T, c *context) {
    // do your thing, and you can manipulate the context
    c.eventId = "something"
}

func test2(t *testing.T, c *context) {
    // do your thing, and you can manipulate the context
    doSomethingWith(c.eventId)
}