We have a special text file on an HTTP server that lists the file names and test functions we want to skip when our Go tests run.
I need to build something that downloads that file, parses out the test functions to be skipped, and then runs our Go tests, properly skipping the test functions found in the input file.
What's the right way to make this work in Go?
(I realize this sounds like an unusual way to skip, but we really want to make this work as I have described for reasons which are out of context to this question.)
You can skip test cases with the (*testing.T).Skip() method. You can download the skip file in an init function in your test file, then parse it and load the function names into a map or slice. At the start of each test case, check whether it is in the skip list and skip it if so.
// foo_test.go
package foo

import "testing"

var skipCases = map[string]bool{}

func init() {
	// Download and parse the skip file, then record the test
	// names to skip in skipCases (see the sketch below).
}

func TestFoo(t *testing.T) {
	if skipCases[t.Name()] {
		t.Skip("listed in the skip file")
	}
	// otherwise continue testing
}
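A minimal sketch of that init body, assuming the skip file lists one bare test name per line; the URL and file format here are placeholders, not anything your setup prescribes:

import (
	"bufio"
	"net/http"
	"strings"
)

func init() {
	resp, err := http.Get("http://example.com/skip.txt") // placeholder URL
	if err != nil {
		return // skip list unavailable; run all tests
	}
	defer resp.Body.Close()
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		if name := strings.TrimSpace(sc.Text()); name != "" {
			skipCases[name] = true // assumed format: one test name per line
		}
	}
}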
I have an app (epazote) that once started runs forever, but I want to test some values before it blocks/waits until ctrl+c is pressed or it is killed.
Here is a small example: http://play.golang.org/p/t0spQRJB36
package main

import (
	"fmt"
	"os"
	"os/signal"
)

type IAddString interface {
	AddString(string)
}

type addString struct{}

func (self *addString) AddString(s string) {
	fmt.Println(s)
}

func block(a IAddString, s string) {
	// test this
	a.AddString(s)

	// ignore this while testing
	block := make(chan os.Signal)
	signal.Notify(block, os.Interrupt, os.Kill)
	for {
		signalType := <-block
		switch signalType {
		default:
			signal.Stop(block)
			fmt.Printf("%q signal received.", signalType)
			os.Exit(0)
		}
	}
}

func main() {
	a := &addString{}
	block(a, "foo")
}
I would like to know if it is possible to ignore some parts of the code while testing, or how to test cases like this. I have implemented an interface, in this case for testing AddString, which helped me test some parts, but I have no idea how to avoid the "block" part and test it.
Any ideas?
Update: Putting the code inside the loop into another function works, but only for testing that function. If I want full code coverage, I still need to test the blocking part, for example that it behaves properly when receiving ctrl+c or a kill -HUP. I was thinking of maybe creating a fake signal.Notify, but I don't know how to overwrite imported packages, in case that could even work.
Yes, it's possible. Put the code that is inside the loop in a separate function, and unit test that function without the loop.
Introduce test delegates into your code.
Extract your loop into a function that takes two functions as arguments: onBeginEvent and onEndEvent. Their signatures should take:
state that you want to inspect inside the test case
optional: a counter of the loop number (so you can identify each iteration). It is optional because the actual delegate implementation can count the number of times it was invoked by itself.
At the beginning of your loop you call onBeginEvent(counter, currentState); then your code does its normal work, and at the end you call onEndEvent(counter, currentState). Presumably your code has changed currentState by then.
In production you could use an empty implementation of the function delegates or implement nil check in your loop.
You can use this model to put in as many checks of your processing algorithm as you want. Let's say you have 5 checks. You look back at it and realize it is becoming too hard to manage, so you create an interface that defines your callback functions. These callback functions are a powerful way of changing your service's behavior. You step back one more time and realize that the interface is actually your "service's policy" ;)
Once you take that route you will want to stop your infinite loop somehow. If you want tight control within a test case, you could take a third function delegate that returns true when it is time to quit the loop. A shared variable is another option for controlling the quit condition.
This is certainly a higher level of testing than unit testing and it is necessary in complex services.
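Here is a rough Go sketch of these ideas applied to the question's block function. runLoop and the callback are invented names, and the test fakes a ctrl+c by sending on the injected channel instead of calling signal.Notify:

package main

import (
	"fmt"
	"os"
	"os/signal"
)

// runLoop is a hypothetical refactoring of block: the signal channel and
// the per-signal callback are injected, so a test can drive the loop
// without delivering a real signal. The callback returns true to quit.
func runLoop(sigs <-chan os.Signal, onSignal func(os.Signal) (quit bool)) {
	for s := range sigs {
		if onSignal(s) {
			return
		}
	}
}

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, os.Interrupt)
	runLoop(sigs, func(s os.Signal) bool {
		signal.Stop(sigs)
		fmt.Printf("%q signal received.\n", s)
		return true // quit after the first signal
	})
}

And in a _test.go file (import "testing"), the blocking part becomes testable:

func TestRunLoop(t *testing.T) {
	sigs := make(chan os.Signal, 1)
	sigs <- os.Interrupt // stand-in for a real ctrl+c

	var got os.Signal
	runLoop(sigs, func(s os.Signal) bool { got = s; return true })

	if got != os.Interrupt {
		t.Fatalf("got %v, want os.Interrupt", got)
	}
}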
I am writing unit tests for my Go code, and there are a couple of methods that I would like to be ignored when coverage is calculated. Is this possible? If so, how?
One way to do it would be to put the functions you don't want tested in a separate Go file, and use a build tag to keep that file from being included during tests. For example, I do this sometimes with applications where I have a main.go file containing the main function, maybe a usage function, etc., that don't get tested. Then you can add a test tag or something, run with go test -v -cover -tags test, and main.go might look something like this (note the blank line after the build constraint, which is required for it to take effect):
//go:build !test
// +build !test

package main

func main() {
	// do stuff
}

func usage() {
	// show some usage info
}
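The code you do want covered then lives in ordinary untagged files. A hypothetical sketch, with invented names:

// lib.go: no build constraint, so it is compiled (and counted for
// coverage) in both normal and test builds.
package main

func usageText() string {
	return "usage: mytool [options]"
}

// lib_test.go
package main

import "testing"

func TestUsageText(t *testing.T) {
	if usageText() == "" {
		t.Error("usageText() returned an empty string")
	}
}

Running go test -v -cover -tags test then leaves main.go out of the test build entirely, so main and usage no longer count against coverage.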
I was trying to use gocheck to test my go code. I was guiding myself with the following example (similar to the one provided by their website):
package hello_test

import (
	"testing"

	gocheck "gopkg.in/check.v1"
)

// Hook up gocheck into the "go test" runner.
func Test(t *testing.T) {
	gocheck.TestingT(t)
}

type MySuite struct{} // <==== Does the struct have to be named that way? What if we have multiple of these and register them, is it a problem?

var _ = gocheck.Suite(&MySuite{}) // <==== What does this line do?

func (s *MySuite) TestHelloWorld(c *gocheck.C) {
	c.Assert(42, gocheck.Equals, "42")
	c.Check(42, gocheck.Equals, 42)
}
However, there are some lines I am not sure I understand even after reading the documentation. Why is the line type MySuite struct{} needed, and, an even more interesting line, why is var _ = gocheck.Suite(&MySuite{}) needed? For the first one, it's easy to infer that one probably has to declare that struct and create methods on it that will run the tests, implemented with the signature shown. The second line, however, beats me. I have literally no idea why it's needed. The documentation says:
Suite registers the given value as a test suite to be run. Any methods starting with the Test prefix in the given value will be considered as a test method.
However, I am not sure about a lot of things. For instance, is there a problem if I call this function with multiple suite structs in the same file? Is there anything special about the MySuite type? Would gocheck still work with some different struct being registered? Basically, how many times can we register a struct in one file, and will it still work?
The gocheck.Suite function has the side effect of registering the given suite value with the gocheck package. Internally, it just adds the suite to a slice of registered test suites. You could get the same effect with:

func init() {
	gocheck.Suite(&MySuite{})
}
Either form should work, so it is just a matter of style.
The tests in the registered test suites are run when you call gocheck.TestingT. You do this in your test called Test, which will be picked up by Go's testing framework. This is how gocheck tests are integrated into the testing framework. Note that you only need a single invocation of TestingT to run all test suites: not one for each test suite.
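For instance, registering two suites is fine; both run under the single Test function. A minimal sketch:

package hello_test

import (
	"testing"

	gocheck "gopkg.in/check.v1"
)

// One hook runs every registered suite.
func Test(t *testing.T) { gocheck.TestingT(t) }

type SuiteA struct{}
type SuiteB struct{}

var _ = gocheck.Suite(&SuiteA{})
var _ = gocheck.Suite(&SuiteB{})

func (s *SuiteA) TestOne(c *gocheck.C) { c.Check(1, gocheck.Equals, 1) }
func (s *SuiteB) TestTwo(c *gocheck.C) { c.Check(2, gocheck.Equals, 2) }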
I'm starting out using unit tests, and I have a situation where I don't know how to proceed:
For example:
I have a class that opens and reads a file.
In my unit test, I want to test the open method and the read method, but to read the file I need to open the file first.
If the "open file" test fails, the "read file" test would fail too!
So, how do I make it explicit that the read failed because of the open? Do I test the open inside the read?
The key feature of unit tests is isolation: one specific unit test should cover one specific functionality - and if it fails, it should report it.
In your example, read clearly depends on open functionality: if the latter is broken, there's no reason to test the former, as we do know the result. More, reporting read failure will only add some irrelevant noise to your test results.
What can (and should be) reported for read in this case is test skipped or something similar. That's how it's done in PHPUnit, for example:
class DependencyFailureTest extends PHPUnit_Framework_TestCase
{
    public function testOne()
    {
        $this->assertTrue(FALSE);
    }

    /**
     * @depends testOne
     */
    public function testTwo()
    {
    }
}
Here we have testTwo dependent on testOne. And this is what's shown when the test is run:
There was 1 failure:
1) testOne(DependencyFailureTest)
Failed asserting that <boolean:false> is true.
/home/sb/DependencyFailureTest.php:6
There was 1 skipped test:
1) testTwo(DependencyFailureTest)
This test depends on "DependencyFailureTest::testOne" to pass.
FAILURES!
Tests: 2, Assertions: 1, Failures: 1, Skipped: 1.
Explanation:

To quickly localize defects, we want our attention to be focused on relevant failing tests. This is why PHPUnit skips the execution of a test when a depended-upon test has failed.
Opening the file is a prerequisite to reading the file, so it's fine to include that in the test. You can throw an exception in your code if the file failed to open. The error message in the test will then make it clear why the test failed.
I would also recommend that you consider creating the file in the test itself to remove any dependencies on existing files. That way you ensure that you always have a valid file to reference.
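In Go, for instance, a test can create its own fixture file, so the open step is entirely under the test's control; a minimal sketch:

package readfile_test

import (
	"os"
	"path/filepath"
	"testing"
)

func TestRead(t *testing.T) {
	// Create the fixture inside the test: no dependency on existing files.
	path := filepath.Join(t.TempDir(), "input.txt")
	if err := os.WriteFile(path, []byte("hello"), 0o644); err != nil {
		t.Fatal(err) // creating the file failed: report that, not a read failure
	}

	got, err := os.ReadFile(path)
	if err != nil {
		t.Fatalf("read failed: %v", err)
	}
	if string(got) != "hello" {
		t.Errorf("read %q, want %q", got, "hello")
	}
}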
Generally speaking, you wouldn't find yourself testing your proposed scenario of unit testing the ability to read from a file, since you will usually end up using a file manipulation library of some kind and can usually safely assume that the maintainers of said library have the appropriate unit tests already in place (for example, I feel pretty confident that I can use the File class in .NET without worry).
That being said, the idea of one condition being an impediment to testing a second is certainly valid. That's why mock frameworks were created, so that you can easily set up a mock object that will always behave in a defined manner that can then be substituted for the initial dependency. This allows you to focus squarely on unit testing the second object/condition/etc. in a test scenario.
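A hand-rolled fake in Go illustrates the same substitution without a mock framework; all names here are invented for illustration:

package app

import (
	"io"
	"strings"
)

// opener abstracts the dependency we want to substitute in tests.
type opener interface {
	Open(name string) (io.Reader, error)
}

// firstToken is the unit under test; it only exercises the read logic.
func firstToken(o opener, name string) (string, error) {
	r, err := o.Open(name)
	if err != nil {
		return "", err
	}
	buf := make([]byte, 64)
	n, _ := r.Read(buf)
	fields := strings.Fields(string(buf[:n]))
	if len(fields) == 0 {
		return "", io.EOF
	}
	return fields[0], nil
}

// fakeOpener always succeeds, so a test never touches the real filesystem.
type fakeOpener struct{ content string }

func (f fakeOpener) Open(string) (io.Reader, error) {
	return strings.NewReader(f.content), nil
}

A test then calls firstToken(fakeOpener{"hello world"}, "ignored") and asserts on the result, isolated from any open failure.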
I have a little JUnit test that exports an object to the file system. At first my test looked like this:
@Test
public void exportTest() {
	// ...creating a list with some objects to export...
	JAXBService service = new JAXBService();
	service.exportList(list, "output.xml");
}
Usually my tests contain an assertion like assertEquals(...). So I changed the code to the following:
@Test
public void exportCustomerListTest() throws Exception {
	// delete the old resulting file, so we can test for a new one at the end
	File file = new File("output.xml");
	file.delete();

	// ...creating a list with some objects to export...
	JAXBService service = new JAXBService();
	service.exportCustomers(list, "output.xml");

	// Test if a file has been created and if it contains some bytes
	FileReader fis = new FileReader("output.xml");
	int firstByte = fis.read();
	assertTrue(firstByte != -1);
}
Do I need this, or was the first approach enough? I am asking because the first one is actually just "testing" that the code runs, but not testing any results. Or can I rely on the "contract" that the test passes if the export method runs without an exception?
Thanks
Well, you're testing that your code runs to completion without any exceptions - but you're not testing anything about the output.
Why not keep a file with the expected output, and compare that with the actual output? Note that this would probably be easier if you had an overload of exportCustomers which took a Writer: then you could pass in a StringWriter and only write to memory. You could test that thoroughly, leaving just a single test for the overload which takes a filename, since that overload would only create a FileOutputStream wrapped in an OutputStreamWriter and call the well-tested method. For it, you'd really just need to check that the right file exists.
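The same idea rendered in Go, as a sketch rather than the poster's actual API: accept a Writer in the core method and keep the filename variant as a thin wrapper:

package export

import (
	"fmt"
	"io"
	"os"
)

// ExportCustomers writes to any Writer, so tests can capture the
// output in memory with a bytes.Buffer.
func ExportCustomers(w io.Writer, customers []string) error {
	for _, c := range customers {
		if _, err := fmt.Fprintln(w, c); err != nil {
			return err
		}
	}
	return nil
}

// ExportCustomersToFile is the thin filename wrapper; it needs little
// testing beyond "the right file exists".
func ExportCustomersToFile(name string, customers []string) error {
	f, err := os.Create(name)
	if err != nil {
		return err
	}
	defer f.Close()
	return ExportCustomers(f, customers)
}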
You could use

assertTrue(new File("output.xml").exists());
If you notice problems during the generation of the file, you can unit test the generation process itself (and not the fact that the file was correctly written to and reloaded from the filesystem).
You can either go with the "gold file" approach (testing that two files are identical, byte for byte) or test various outputs of your generator (I imagine that the generation of the content is separated from the saving into the file).
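A sketch of the gold-file approach in Go, reusing the Writer-based exporter sketched above and assuming an expected file is checked in under testdata/:

package export

import (
	"bytes"
	"os"
	"testing"
)

func TestExportMatchesGolden(t *testing.T) {
	var buf bytes.Buffer
	if err := ExportCustomers(&buf, []string{"a", "b"}); err != nil {
		t.Fatal(err)
	}
	want, err := os.ReadFile("testdata/golden.xml") // checked-in expected output
	if err != nil {
		t.Fatal(err)
	}
	if !bytes.Equal(buf.Bytes(), want) {
		t.Errorf("generated output does not match the golden file")
	}
}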
I agree with the other posts. I will also add that your first test has no way to report a failure to a test suite or test runner unless an exception happens to be thrown.
Sometimes a test only needs to demonstrate that no exceptions were thrown. In that case, relying on an exception to fail the test is good enough. There is certainly nothing gained in JUnit by calling the assertEquals method: a test passes when it doesn't throw an AssertionError, not because that method is called. Consider a method that allows null input; you might write a test like this:
@Test
public void testNullAllowed() {
	new CustomObject().methodThatAllowsNull(null);
}
That might be enough of a test right there (leave anything else to a separate test, or perhaps there is nothing interesting to test about what it does with a null value), although it is prudent to leave a comment noting that you didn't forget the assert, you left it out on purpose.
In your case, however, you haven't tested very much. Sure, it didn't blow up, but an empty method wouldn't blow up either. Your second test is better: at least you demonstrate that a non-empty file was created. But you can do better than that and check that at least some reasonable result was created.