I am creating a Golang project, and doing a pretty good job of adding tests in step with the features. The general idea is that I perform a "semantic diff" between files and git branches. For the newest feature, the behavior depends upon whether an external tool (tree-sitter-cli) is installed, and upon which extra capabilities are installed.
If the external tool (or one of its optional grammars) is not installed, I expect different results from my internal function. However, both results are consistent with my tool (sdt) itself being correct. For example, here is a test, adapted from one for another supported language, for a file analyzed without tree-sitter:
func TestNoSemanticDiff(t *testing.T) {
	// Narrow the options to these test files
	opts := options
	opts.Source = file0.name
	opts.Destination = file1.name
	report := treesitter.Diff("", opts, config)
	if !strings.Contains(report, "| No semantic differences detected") {
		t.Fatalf("Failed to recognize semantic equivalence of %s and %s",
			opts.Source, opts.Destination)
	}
}
The tests for various supported languages are mostly the same in form. In this case, however, a report of "| No available semantic analyzer for this format" would also be a correct result if the external tool is missing (or does not include that capability).
The thing is, I can run code to determine which value is expected; but that check would ideally run once at the top of the test file rather than be repeated within each of many similar tests.
So what I'd ideally want is to first run some sort of setup/scaffolding, then have individual tests written something like:
if !treeSitterInstalled { // checked once at start of test file
	if !strings.Contains(report, "appropriate response") { ... }
} else if treeSitterNoLangSupport { // ditto, configured once
	if !strings.Contains(report, "this other thing") { ... }
} else {
	if !strings.Contains(report, "the stuff where it's installed") { ... }
}
In your test file you could simply use the init function to check for the external dependency and set a package-level variable in that same file, which can then be consulted inside individual tests.
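A minimal sketch of that idea, assuming the CLI binary is named tree-sitter and that a dump-languages call is a reasonable way to probe for a grammar; the binary name, the probe, and all variable and function names below are placeholders rather than anything sdt actually defines:

package treesitter_test

import (
	"os/exec"
	"strings"
	"testing"
)

// Checked once, before any test in this file runs.
var (
	treeSitterInstalled     bool
	treeSitterNoLangSupport bool
)

func init() {
	// The binary name and the capability probe are assumptions; substitute
	// whatever sdt actually uses to discover the external tool and grammars.
	if _, err := exec.LookPath("tree-sitter"); err != nil {
		return
	}
	treeSitterInstalled = true
	out, err := exec.Command("tree-sitter", "dump-languages").CombinedOutput()
	treeSitterNoLangSupport = err != nil || !strings.Contains(string(out), "ruby")
}

// expectedReport returns the substring each test should look for in the
// current environment.
func expectedReport() string {
	if !treeSitterInstalled || treeSitterNoLangSupport {
		return "| No available semantic analyzer for this format"
	}
	return "| No semantic differences detected"
}

func TestNoSemanticDiffRuby(t *testing.T) {
	// report := treesitter.Diff("", opts, config) // the real call, as in the question
	report := expectedReport() // stand-in so this sketch compiles on its own
	if want := expectedReport(); !strings.Contains(report, want) {
		t.Fatalf("report %q does not contain %q", report, want)
	}
}

A TestMain function would work just as well if you later need setup that can skip or abort the whole file.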
I have found the documentation for how to test for expected analysis-phase errors, but I'm drawing a blank no matter what I try to search for on how to test for expected execution-phase failures.
An example of what I'm looking for would be a test of this example line_length_test rule, where the test feeds in a file with over-length lines and expects the rule under test to be run and to produce a failure.
Or, to put it another way: I want a test that would fail if I did something dumb like this:
def _bad_test_impl(ctx):
    # No-op, never fails regardless of what you feed it.
    executable = ctx.actions.declare_file(ctx.label.name + ".sh")
    ctx.actions.write(output=executable, content="")
    return [DefaultInfo(executable=executable)]

bad_test = rule(
    implementation=_bad_test_impl,
    test=True,
)
Edit:
So far, the best I've come up with is the very gross:
BUILD
# Failure
bad_test(
    name = "bad_test_fail_test",
    tags = ["manual"],
)

native.sh_test(
    name = "bad_test_failure_test",
    srcs = [":not.sh"],
    args = ["$(location :bad_test_fail_test)"],
    data = [":bad_test_fail_test"],
)
not.sh
! $*
This does not seem like a good idea, particularly for something I'd expect to be well supported.
EDIT:
I got annoyed and built my own. I still wish there were an official implementation.
I don't think that your not.sh is a particularly bad solution, but it depends on the situation. In general, the key to good failure testing is specificity, i.e. this test should fail for this specific reason. If you don't get this nailed down, you're going to have a lot of headaches with false positives.
I'll use a slightly more expressive example to try to illustrate why it's so difficult to create a "generic/well supported" failure-testing framework.
Let's say that we are developing a compiler. To test the compiler's parser we intentionally feed the compiler a malformed source file, and we expect it to fail with something like a "missed semicolon on line 55". But instead our compiler fails from a fairly nasty bug that results in a segfault. As we have just tested that the compiler fails, the test passes.
This kind of false positive is really hard to deal with in a way that is easy to reason about whilst also being generic.
What we really want in the above scenario is to test that the compiler fails AND that it prints "missed semicolon on line 55". At this point it becomes increasingly difficult to create a "well supported" interface for these kinds of tests.
In reality a failure test is an integration test, or in some cases an end to end test.
Here is a short excerpt of an integration test from the Bazel repository. This integration test calls Bazel with a range of different flags, some combinations expecting success and others failure:
function test_explicit_sandboxfs_not_found() {
  create_hello_package
  bazel build \
      --experimental_use_sandboxfs \
      --experimental_sandboxfs_path="/non-existent/sandboxfs" \
      //hello >"${TEST_log}" 2>&1 && fail "Build succeeded but should have failed"
  expect_log "Failed to get sandboxfs version.*/non-existent/sandboxfs"
}
And the corresponding build definition:
sh_test(
    name = "sandboxfs_test",
    size = "medium",
    srcs = ["sandboxfs_test.sh"],
    data = [":test-deps"],
    tags = ["no_windows"],
)
I have an application written in C++, configured with CMake, and tested using Catch2; the tests are invoked through CTest.
I have a fairly large list of files that each contain captures of messages that have caused an issue in my application in the past. I currently have a single test that runs through each of these files serially using code that looks approximately like:
TEST_CASE("server type regressions", "[server type]") {
auto const state = do_some_setup();
for (auto const path : files_to_test()) {
INFO(path);
auto parser = state.make_parser(path);
for (auto const message : parser) {
INFO(message);
handle(message);
}
}
}
The message handler has a bunch of internal consistency checks, so when this test fails, it typically does so by throwing an exception.
Is it possible to improve this solution to get / keep the following:
Run the initial do_some_setup once for all of the tests, but then run the test for each file in parallel. do_some_setup is fairly slow, and I have enough files relative to the number of cores that I wouldn't want to have to do setup per file. It would also be acceptable to run do_some_setup more than once, as long as it's better than O(n) in the number of files.
Run the regression test on all the files, even when an earlier file fails. I know I could do this with a try + catch and manually setting a bool has_failed on any failure, but I'd prefer if there were some built-in way to do this?
Be able to specify the file name when invoking the tests, so that I can manually run just the test for a single file.
Automatically detect the set of files. I would prefer not to change to a solution where I need to add test files to the test file directory and also update some other location that lists all of the files I'm testing in order to manually shard them.
I'm willing to write some CMake to manage this, pass some special flags to CTest or Catch2, or change to a different unit testing framework.
I'm trying to write a unit test for a struct constructor, which may also return nil if an error happens during os.OpenFile. I have no idea how to test/mock a file error with the flags os.O_RDWR|os.O_CREATE|os.O_APPEND.
I tried to check for a nil value inside the test, but it failed.
Constructor:
type App struct {
	someField string
	log       *log.Logger
}

func New() *App {
	app := &App{}
	f, err := os.OpenFile("info.log", os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
	if err != nil {
		fmt.Printf("error opening file: %v", err)
		return nil
	}
	mw := io.MultiWriter(os.Stdout, f)
	l := log.New(mw, "APP", log.Ldate|log.LstdFlags|log.Lshortfile)
	app.log = l
	return app
}
And test for constructor:
func TestNew(t *testing.T) {
	a := New() // New returns *App, so a can be compared against nil
	// doesn't cover the error branch
	if a == nil {
		t.Fatal("Error opening file")
	}
}
I expect the err != nil branch to be covered, but in the coverage report it is red:
f, err := os.OpenFile("info.log", os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
if err != nil {
	fmt.Printf("error opening file: %v", err)
	return nil
}
Mocking in Go means having interfaces; if that's something you really need, you might consider using something like https://github.com/spf13/afero instead of using the os package directly. This also allows you to use in-memory filesystems and other things that make testing easier.
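A minimal sketch of that idea, building on the App type from the question and assuming you are willing to change New to accept a filesystem (the afero-based signature and the test name are assumptions, not part of the original API); a read-only in-memory filesystem then makes the OpenFile call fail:

// fs comes from github.com/spf13/afero; the rest follows the original New.
func New(fs afero.Fs) *App {
	app := &App{}
	f, err := fs.OpenFile("info.log", os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
	if err != nil {
		fmt.Printf("error opening file: %v", err)
		return nil
	}
	app.log = log.New(io.MultiWriter(os.Stdout, f), "APP", log.Ldate|log.LstdFlags|log.Lshortfile)
	return app
}

// In app_test.go: the read-only wrapper rejects any open with write/create flags,
// which drives New down its error branch.
func TestNewFailsOnReadOnlyFs(t *testing.T) {
	fs := afero.NewReadOnlyFs(afero.NewMemMapFs())
	if app := New(fs); app != nil {
		t.Fatal("expected nil when the log file cannot be opened")
	}
}

Production callers would pass afero.NewOsFs() and keep the current behaviour.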
There are two things to consider.
The first is that O_RDWR|O_CREAT|O_APPEND is not particularly interesting when it comes to opening a file: it tells the OS that it should open the file for reading and writing, in append mode, that if the file does not exist at the time of the call it should be created, and that otherwise it's fine to append to it.
Now, the only two reasons I can think of for this operation to fail are:
The filesystem containing the file is mounted read-only, and hence opening the file for writing, creating it, and appending to it are all impossible.
The file does not exist and the filesystem's inode table is full, and hence it's impossible to create a record for another file even if there is space for the file's data.
Now consider that in order to simulate either of these cases you need to manipulate some filesystem available to the process running your test. While that is certainly possible to do within a unit-testing framework, it looks like it belongs more to the domain of integration testing.
There are plenty of options for working towards this level of testing on Linux: the "flakey" device-mapper target and friends, mounting a read-only image via a loop device or FUSE, injecting faults into the running kernel, etc. Still, these are mostly unsuitable for unit testing.
If you want to unit-test this stuff, there are, again, two approaches:
Abstract away the whole filesystem layer using something like https://github.com/spf13/afero as #Adrien suggested.
The upside is that you can easily test almost everything filesystem-related in your code.
Abstract away just a little bit of code using a variable.
Say, you might have
var whateverCreate = os.Create
use that whateverCreate in your code and then override just that variable in the setup code of your test suite to assign it a function which returns whatever error you need in specific test(s).
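Applied to the constructor from the question, that might look roughly like the sketch below; openFile is a made-up name for the seam, and the test assumes New has been changed to call openFile instead of os.OpenFile:

// In the package under test: a seam that defaults to the real call.
var openFile = os.OpenFile

// In app_test.go (imports "errors", "os", "testing"): force the seam to fail,
// then check that New takes its error branch and returns nil.
func TestNewOpenFileError(t *testing.T) {
	orig := openFile
	defer func() { openFile = orig }()
	openFile = func(string, int, os.FileMode) (*os.File, error) {
		return nil, errors.New("simulated open failure")
	}
	if app := New(); app != nil {
		t.Fatal("expected nil when the log file cannot be opened")
	}
}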
You could make the filename/file path configurable instead of using the hard-coded info.log; then in your test you could use some non-existent path, for example.
There are multiple options for configuring it:
a parameter in the constructor (maybe a separate constructor that could be called from New if you want to keep the API as it is)
package-level configuration (like a global variable defaultLogFileName); this is less flexible (for example if you want to run tests in parallel), but might suit this case as well
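A sketch of the first option, using a hypothetical NewWithLogFile constructor that New delegates to, so the public API stays the same; the test then points it at a path whose parent directory does not exist, so os.OpenFile has to fail:

func NewWithLogFile(path string) *App {
	app := &App{}
	f, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
	if err != nil {
		fmt.Printf("error opening file: %v", err)
		return nil
	}
	app.log = log.New(io.MultiWriter(os.Stdout, f), "APP", log.Ldate|log.LstdFlags|log.Lshortfile)
	return app
}

// New keeps the existing behaviour and API.
func New() *App { return NewWithLogFile("info.log") }

// In app_test.go (imports "path/filepath" and "testing"): "missing" is never
// created, so OpenFile cannot create the log file and New... returns nil.
func TestNewWithLogFileError(t *testing.T) {
	badPath := filepath.Join(t.TempDir(), "missing", "info.log")
	if app := NewWithLogFile(badPath); app != nil {
		t.Fatal("expected nil for an unopenable log path")
	}
}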
What is the correct way to go about automatically running some setup code (either in R or C++) once per package loading? Ideally, said code would execute once the user did library(mypackage). Right now, it's contained in a setup() function that needs to be run once before anything else.
Just for more context, in my specific case I'm using an external library that uses glog, and I need to execute google::InitGoogleLogging() once and only once. It's slightly awkward because I have to use it from within a library, even though it's supposed to be called from a main.
Just read 'Writing R Extensions' and follow the leads -- it is either .onAttach() or .onLoad(). I have lots of packages that do little things there -- and it doesn't matter that this calls into C++ (via Rcpp or not), as you are simply asking about where to initialise things.
Example: Rblpapi creates a connection and stores it
.pkgenv <- new.env(parent=emptyenv())

.onAttach <- function(libname, pkgname) {
    if (getOption("blpAutoConnect", FALSE)) {
        con <- blpConnect()
        if (getOption("blpVerbose", FALSE)) {
            packageStartupMessage(paste0("Created and stored default connection object ",
                                         "for Rblpapi version ",
                                         packageDescription("Rblpapi")$Version, "."))
        }
    } else {
        con <- NULL
    }
    assign("con", con, envir=.pkgenv)
}
I had some (not public) code that set up a handle (using C++ code) to a proprietary database the same way. The key is that these hooks guarantee you execution on package load / attach, which is what you want here.
How do I compute the path to data fixture files in test code, given:
test/{main.cpp,one_test.cpp,two_test.cpp}
compilation done in build/
test/fixtures/{conf_1.cfg}
The problem I'm facing is as follows:
/* in test/one_test.cpp */
TEST_CASE( "Config from file", "[config]" ) {
	Config conf;
	REQUIRE( conf.read(??? + "/conf_1.cfg") );
}
The solution I found so far is to define a macro at configure time:
#define TEST_DIR "/absolute/path/to/test"
which is obtained in my wscript with
def configure(cnf):
    # ...
    cnf.env.TEST_DIR = cnf.path.get_src().abspath()
    cnf.define('TEST_DIR', cnf.env.TEST_DIR)
    cnf.write_config_header('include/config.h')
Other attempts included __FILE__, which expanded to ../test/one_test.cpp, but I couldn't make use of it.
Some background: I'm using the Catch testing framework, with the waf build tool.
Is there a common practice or pattern, possibly dependent on the testing framework?
We found this hard to solve at compile/build time as refactoring components (and therefore tests) would move code around. We found two possible solutions:
Put the data into the test. This is only practical if it's short and human-readable - strings or an easy hex dump. You could always put the data into a header file if that would make the test easier to maintain.
Specify the location of the data files at the command line when you run the tests. For this, you may need your own main (see 'Supplying your own main()').