Unit testing - log and then fail? - unit-testing

I am used to test-driving my code. Now that I am new to Go, I am trying to get it right as fast as possible. I am using the testing package in the standard library, which seems to be good enough. (I also like that it is not yet another external dependency. We are currently at 2 dependencies overall - compare that to any Java or Ruby project...) Anyway - it looks like an assert in Go looks like this:
func TestSomething(t *testing.T) {
    something := false
    if !something {
        t.Log("Oh noes - something is false")
        t.Fail()
    }
}
I find this verbose and would like to do it on one line instead:
Assert( something, "Oh noes - something is false" )
or something like that. I hope that I have missed something obvious here. What is the best/idiomatic way to do it in Go?
UPDATE: just to clarify. If I was to do something like this:
func AssertTrue(t *testing.T, value bool, message string) {
    if !value {
        t.Log(message)
        t.Fail()
    }
}
and then write my test like this
func TestSomething(t *testing.T) {
    something := false
    AssertTrue(t, something, "Oh noes - something is false")
}
then it would not be the Go way to do it?

There are external packages that can be integrated with the stock testing framework.
One of them, gocheck, which I wrote long ago, was intended to sort out exactly that kind of use case.
With it, the test case looks like this, for example:
func (s *Suite) TestFoo(c *gocheck.C) {
    // If this succeeds the world is doomed.
    c.Assert("line 1\nline 2", gocheck.Equals, "line 3")
}
You'd run that as usual, with go test, and the failure in that check would be reported as:
----------------------------------------------------------------------
FAIL: foo_test.go:34: Suite.TestFoo
all_test.go:34:
// If this succeeds the world is doomed.
c.Assert("line 1\nline 2", gocheck.Equals, "line 3")
... obtained string = "" +
... "line 1\n" +
... "line 2"
... expected string = "line 3"
Note how the comment right above the code was included in the reported failure.
There are also a number of other usual features, such as suite and test-specific set up and tear down routines, and so on. Please check out the web page for more details.
It's well maintained, as I and other people use it in a number of active projects, so feel free to come on board, or follow up and check out the other similar projects that may suit your taste better.
For examples of gocheck use, please have a look at packages such as mgo, goyaml, goamz, pipe, vclock, juju (massive code base), lpad, gozk, goetveld, tomb, etc. Also, gocheck manages to test itself. It was quite fun to bootstrap that.
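In case it is not obvious how the suite above gets picked up by go test, here is a minimal sketch of the usual gocheck boilerplate; the Suite type name is a placeholder, and the import path was launchpad.net/gocheck at the time (the package later moved to gopkg.in/check.v1):
package foo_test

import (
    "testing"

    "launchpad.net/gocheck"
)

// Hook the gocheck runner into the standard "go test" command.
func Test(t *testing.T) { gocheck.TestingT(t) }

// Suite is a placeholder suite type; registering an instance of it
// makes gocheck discover its Test* methods.
type Suite struct{}

var _ = gocheck.Suite(&Suite{})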

But when you try to write tests the way Uncle Bob Martin recommends - one assert per test and long, descriptive test names - then a simple assert library like http://github.com/stretchr/testify/assert can make it much faster and easier.
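For reference, a minimal sketch of the test from the question rewritten with testify's assert package (the variable and message are taken from the question):
package foo_test

import (
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestSomething(t *testing.T) {
    something := false
    // assert.True reports a failure (but lets the test continue) when the value is false.
    assert.True(t, something, "Oh noes - something is false")
}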

I discourage writing tests in the way you seem to desire. It's not by chance that the whole stdlib uses the, as you call it, "verbose" way.
It is undeniably more lines, but there are several advantages to this approach.
If you read Why does Go not have assertions? and apply s/error handling/test failure reporting/g, you can get a picture of why the several "assert" packages for Go testing are not a good idea to use.
Once again, the proof is the huge code base of the stdlib.

The idiomatic way is the way you have above. Also, you don't have to log any message if you don't want to.
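As a small convenience, the testing package documents t.Errorf as equivalent to Logf followed by Fail, so the log-and-fail pair can be collapsed into one call; a minimal sketch using the question's example:
func TestSomething(t *testing.T) {
    something := false
    if !something {
        // Errorf is equivalent to Logf followed by Fail.
        t.Errorf("Oh noes - something is false")
    }
}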
As stated in the Go FAQ:
Why does Go not have assertions?
Go doesn't provide assertions. They are undeniably convenient, but our
experience has been that programmers use them as a crutch to avoid
thinking about proper error handling and reporting. Proper error
handling means that servers continue operation after non-fatal errors
instead of crashing. Proper error reporting means that errors are
direct and to the point, saving the programmer from interpreting a
large crash trace. Precise errors are particularly important when the
programmer seeing the errors is not familiar with the code.
We understand that this is a point of contention. There are many
things in the Go language and libraries that differ from modern
practices, simply because we feel it's sometimes worth trying a
different approach.
UPDATE
Based on your update, that is not idiomatic Go. What you are doing is, in essence, designing a test extension framework to mirror what you get in the xUnit frameworks. While there is nothing fundamentally wrong with that from an engineering perspective, it does raise questions as to the value and cost of maintaining this extension library. Additionally, you are creating an in-house standard that will potentially ruffle feathers. The biggest thing about Go is that it is not C or Java or C++ or Python, and things should be done the way the language is constructed.

Related

How to test that a Bazel my_test rule fails when it should?

I have found the documentation for how to test for expected analysis-phase errors, but I'm drawing a blank no matter what I try to search for on how to test for expected execution-phase failures.
An example of what I'm looking for would be a test of the example line_length_test rule, where the test feeds in a file with over-length lines and expects the test-under-test to be run and produce a failure.
Or to put it another way; I want a test that would fail if I did something dumb like this:
def _bad_test_impl(ctx):
    # No-op, never fails regardless of what you feed it.
    executable = ctx.actions.declare_file(ctx.label.name + ".sh")
    ctx.actions.write(output=executable, content="")
    return [DefaultInfo(executable=executable)]

bad_test = rule(
    implementation=_bad_test_impl,
    test=True,
)
Edit:
So far, the best I've come up with is the very gross:
BUILD
# Failure
bad_test(
    name = "bad_test_fail_test",
    tags = ["manual"],
)

native.sh_test(
    name = "bad_test_failure_test",
    srcs = [":not.sh"],
    args = ["$(location :bad_test_fail_test)"],
    data = [":bad_test_fail_test"],
)
not.sh
! $*
This does not seem like a good idea, particularly for something I'd expect to be well supported.
EDIT:
I got annoyed and built my own. I still wish there were an official implementation.
I don't think that your not.sh is a particularly bad solution, but it depends on the situation. In general, the key to good failure testing is specificity, i.e. this test should fail for this specific reason. If you don't get this nailed down, you're going to have a lot of headaches with false positives.
I'll use a slightly more expressive example to try and illustrate why it's so difficult to create a "generic/well supported" failure testing framework.
Let's say that we are developing a compiler. To test the compiler's parser we intentionally feed the compiler a malformed source file, and we expect it to fail with something like "missed semicolon on line 55". But instead our compiler fails with a fairly nasty bug that results in a segfault. As we have only tested that the compiler fails, the test passes.
This kind of a false positive is really hard to deal with in a way that is easy to reason about, whilst also being generic.
What we really want in the above scenario is to test that the compiler fails AND that it prints "missed semicolon on line 55". At this point it becomes increasingly difficult to create a "well supported" interface for these kinds of tests.
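To make that concrete, here is a rough sketch (not an official Bazel pattern) of how the not.sh wrapper from the question could be extended so that the wrapped test has to fail AND print a specific diagnostic; the script name and the second argument are my own assumptions:
check_failure.sh
#!/bin/sh
# Usage: check_failure.sh <test-under-test> <expected message>   (both hypothetical)
out="$("$1" 2>&1)" && exit 1   # the wrapped test must fail...
echo "$out" | grep -q "$2"     # ...and must have printed the expected diagnostic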
In reality a failure test is an integration test, or in some cases an end-to-end test.
Here is a short excerpt of an integration test from the Bazel repository. This integration test calls Bazel with a range of different flags, expecting some combinations to succeed and others to fail:
function test_explicit_sandboxfs_not_found() {
    create_hello_package

    bazel build \
        --experimental_use_sandboxfs \
        --experimental_sandboxfs_path="/non-existent/sandboxfs" \
        //hello >"${TEST_log}" 2>&1 && fail "Build succeeded but should have failed"
    expect_log "Failed to get sandboxfs version.*/non-existent/sandboxfs"
}
And the corresponding build definition:
sh_test(
    name = "sandboxfs_test",
    size = "medium",
    srcs = ["sandboxfs_test.sh"],
    data = [":test-deps"],
    tags = ["no_windows"],
)

Is there a way to specify an NUnit test as "extra credit"?

I have a few tests for an API, and I would like to be able to express certain tests that reflect "aspirational" or "extra credit" requirements - in other words, it's great if they pass, but fine if they don't. For instance:
[Test]
public void RequiredTest()
{
    // our client is using positive numbers in DoThing();
    int result = DoThing(1);
    Assert.That( /* result is correct */ );
}

[Test]
public void OptionalTest()
{
    // we do want to handle negative numbers, but our client is not yet using them
    int result = DoThing(-1);
    Assert.That( /* result is correct */ );
}
I know about the Ignore attribute, but I would like to be able to mark OptionalTest in such a way that it still runs on the CI server, but is fine if it does not pass - as soon as it does, I would like to take notice and perhaps make it a requirement. Is there any major unit test framework that supports this?
I would use Warnings to achieve this. That way your test will print a 'warning' output, but will not be a failure and will not fail your CI build.
See: https://github.com/nunit/docs/wiki/Warnings
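As a rough sketch (the expected value 42 and the constraint are placeholders, not taken from the question), the optional test could use NUnit 3's Warn class like this:
[Test]
public void OptionalTest()
{
    // we do want to handle negative numbers, but our client is not yet using them
    int result = DoThing(-1);

    // Warn.Unless records a warning instead of a failure when the constraint is not met,
    // so the test - and the CI build - still passes.
    Warn.Unless(result, Is.EqualTo(42), "Negative input is not handled correctly yet");
}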
as soon as it does, I would like to take notice and perhaps make it a requirement.
This part's a slightly separate requirement! Depends a lot on how you want to 'take notice'! Consider looking at Custom Attributes - it may be possible to write an IWrapSetUpTearDown attribute, which sends an email when the relevant test passes. See the docs, here: https://github.com/nunit/docs/wiki/ICommandWrapper-Interface
The latter is a more unusual requirement - I would expect to have to do something custom to fit your needs there!

How to test asynchronous code

I've written my own access layer to a game engine. There is a GameLoop which gets called every frame and lets me process my own code. I'm able to do specific things and to check if these things happened. In a very basic way it could look like this:
void cycle()
{
    //set a specific value
    Engine::setText("Hello World");
    //read the value
    std::string text = Engine::getText();
}
I want to test whether my Engine layer is working by writing automated tests. I have some experience in using the Boost Unit Test Framework for simple comparison tests like this.
The problem is that some things I want the engine to do are only processed after the call to cycle(). So calling Engine::getText() directly after Engine::setText(...) would return an empty string. If I waited until the next call of cycle(), the right value would be returned.
I am now wondering how I should write my tests if it is not possible to process them in the same cycle. Are there any best practices? Is it possible to use the "traditional testing" approach given by the Boost Unit Test Framework in such an environment? Are there perhaps other frameworks aimed at such a specialised case?
I'm using C++ for everything here, but I could imagine that there are answers unrelated to the programming language.
UPDATE:
It is not possible to access the Engine outside of cycle()
In your example above, std::string text = Engine::getText(); is the code you want to remember from one cycle but execute in the next. You can save it for later execution. For example - using C++11 you could use a lambda to wrap the test into a simple function specified inline.
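A minimal sketch of that idea, assuming the Engine API from the question and that cycle() is the only place the engine may be touched; the pendingCheck holder is my own invention:
#include <cassert>
#include <functional>

std::function<void()> pendingCheck;   // deferred check queued for the next cycle

void cycle()
{
    // 1. run the check that was queued during the previous cycle
    if (pendingCheck) {
        pendingCheck();
        pendingCheck = nullptr;
    }

    // 2. perform this cycle's action...
    Engine::setText("Hello World");

    // 3. ...and queue its check for the next cycle, when the value is visible
    pendingCheck = [] {
        assert(Engine::getText() == "Hello World");
    };
}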
There are two options available to you:
If the library that you have can be used synchronously, or offers a C++11-futures-like facility (which can indicate the readiness of the result), then in your test case you can do something as below:
void testcycle()
{
    //set a specific value
    Engine::setText("Hello World");
    while (!Engine::isResultReady());
    //read the value
    assert(Engine::getText() == "WHATEVERVALUEYOUEXPECT");
}
If you don't have the above, the best you can do is have a timeout (this is not a good option though, because you may have spurious failures):
#include <cassert>
#include <chrono>
#include <thread>

void testcycle()
{
    using namespace std::chrono;

    //set a specific value
    Engine::setText("Hello World");

    const auto deadline = steady_clock::now() + seconds(1); // you can put whatever max time
    while (Engine::getText() != "WHATEVERVALUEYOUEXPECT") {
        if (steady_clock::now() > deadline)
            assert(0);
        std::this_thread::sleep_for(milliseconds(1));
    }
}

How do I run some code after every build in scons?

I'm looking for a way to register something like an end-of-build callback in scons. For example, I'm doing something like this right now:
import atexit
import os
import SCons.Script

def print_build_summary():
    failures = SCons.Script.GetBuildFailures()
    notifyExe = 'notify-send '
    if len(failures) > 0:
        notifyExe = notifyExe + ' --urgency=critical Build Failed'
    else:
        notifyExe = notifyExe + ' --urgency=normal Build Succeeded'
    os.system(notifyExe)

atexit.register(print_build_summary)
This only works in non-interactive mode. I'd like to be able to pop up something like this at the end of every build, specifically, when running multiple 'build' commands in an interactive scons session.
The only suggestions I've found, looking around, seem to be to use the dependency system or the AddPostAction call to glom this on. It doesn't seem quite right to me to do it that way, since it's not really a dependency (it's not even really a part of the build, strictly speaking) - it's just a static bit of code that needs to be run at the end of every build.
Thanks!
I don't think there's anything wrong with using the dependency system to resolve this. This is how I normally do it:
def finish( target, source, env ):
    raise Exception( 'DO IT' )

finish_command = Command( 'finish', [], finish )
Depends( finish_command, DEFAULT_TARGETS )
Default( finish_command )
This creates a command that depends on the default targets for its execution (so you know it'll always run last - see DEFAULT_TARGETS in the SCons manual). Hope this helps.
I've been looking into this and haven't found that SCons offers anything that would help. This seems like quite a useful feature; maybe the SCons developers are watching these threads and will take the suggestion...
I looked at the source code and figured out how to do it. I'll try to suggest this change to the SCons developers on scons.org.
If you're interested, the file is engine/SCons/Script/Main.py, and the function is _build_targets(). At the end of this function, you would simply need to add a call to a user-supplied callback. Of course this solution would not be very useful if you build on several different machines in your network, since you would have to port the change everywhere it's needed, but if you're only building on one machine, then maybe you could make the change until/if SCons officially provides a solution.
Let me know if you need help implementing the change, and I'll see what I can do.
Another option would be to wrap the call to SCons and have the wrapper script perform the desired actions, but that wouldn't help in SCons interactive mode.
Hope this helps,
Brady
EDIT:
I created a feature request for this: http://scons.tigris.org/issues/show_bug.cgi?id=2834

CATIA-CAA CATKeyboardEvent

I know there are only a few CAA programmers in the world, but I'll try it anyway...
I can't get keyboard events to work. I found this code, which looks reasonable, but the notification doesn't fire.
AddAnalyseNotificationCB(CATFrmLayout::GetCurrentLayout()->GetCurrentWindow()->GetViewer(),
                         CATKeyboardEvent::ClassName(),
                         (CATCommandMethod)&PROTrvTreeView::OnKeyboardEvent,
                         NULL);

void PROTrvTreeView::OnKeyboardEvent(CATCommand * ipCmd, CATNotification * ipEvt, CATCommandClientData iobjData)
{
    cout << "KeyboardEvent" << endl;
}
Anyone have any idea?
There is a much denser group of developers for CAA at:
http://www.3ds.com/alliances/c-java-developers/forum/
The same question came up, with several people mentioning that this API was unauthorized, and therefore you can't rely on it, even if it works.
The other samples there are essentially the same code as yours, but the only one that purports to work doesn't use CATKeyboardEvent::ClassName, but instead uses CATKeybdEvent. Might be worth a try.