How can you unit test lib/src files in Dart?

My Dart package is currently laid out in the following manner:
lib/
  mypackage.dart
  src/
    mypackage_util.dart
test/
  mypackage_test.dart
Inside mypackage.dart, I'm using the part / part of strategy to pull in mypackage_util.dart, as recommended by the pub package layout conventions.
On the test side, I took inspiration from Seth Ladd's example of using unittest, in which he creates a new library for his tests, which makes sense to me. Unfortunately, that leaves me unable to import mypackage_util.dart into mypackage_test.dart, which means I can't test classes or functions from mypackage_util.dart, or anything else in src/.
The solutions I'm imagining are:
1. Make mypackage_test.dart a part of the main library, but this seems to make it impossible to run the test code stand-alone.
2. Take mypackage_util.dart out of src/, but this seems to imply that you can never unit test src/ code in packages, which seems silly, and it exposes code I don't want to expose.
Should I take one of the above approaches, or am I missing something?
Update:
The above situation was due to a library import conflict (see the Libraries and Visibility section of the Dart Tour, Chapter 2, for solutions to library conflicts). See the comments below.

If lib/mypackage.dart declares the library and includes lib/src/mypackage_util.dart via part / part of, you just need to import lib/mypackage.dart in your test. Or is your problem that you want to test private classes/functions contained in lib/src/mypackage_util.dart?
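For illustration, here is a minimal sketch of such a test, assuming the package is named mypackage, that lib/mypackage.dart declares library mypackage; with part 'src/mypackage_util.dart';, and that someUtilFunction() is a top-level function defined in the part file (the function name and the use of package:test are assumptions, not taken from the question):
// test/mypackage_test.dart
import 'package:test/test.dart';
// Importing the public library also pulls in its part files under lib/src/.
import 'package:mypackage/mypackage.dart';

void main() {
  test('function declared in src/mypackage_util.dart is reachable', () {
    // someUtilFunction() is the hypothetical top-level function from the part file.
    expect(someUtilFunction(), isNotNull);
  });
}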

Related

How to generate files using dart source_gen to a different directory

This issue describes the concept: https://github.com/dart-lang/source_gen/issues/272
To summarize:
I am using source_gen to generate some Dart code.
I am using json_serializable on the generated Dart code.
I wish to output all of the results to a source directory adjacent to or below my target source.
The desired directory structure:
src/
  feature_a/
    model.dart
    gen/
      model.g.dart
      model.g.g.dart
  feature_b/
    ...
I have considered building to cache; however, it seems json_serializable doesn't support this, and even if it did, I don't know whether it's even possible to run a builder on files in the cache.
I've also considered the aggregate-builder approach mentioned in "Generate one file for a list of parsed files using source_gen in dart", but json_serializable is still an issue there, and the source_gen version in that post is very old and the solution isn't described well.
This is not possible with build_runner. The issue to follow is https://github.com/dart-lang/build/issues/1689
Note that this doesn't help much with builders that you don't author, and wouldn't work with things like SharedPartBuilder.

Testing in Go - code coverage within project packages

I have a question about generating code coverage in Go(lang) projects. I have this simple structure:
ROOT/
  config/
  handlers/
  lib/
  models/
  router/
  main.go
config contains the JSON configuration and one simple config.go that reads and parses the JSON file and fills the Config struct, which is then used when initializing the DB connection. handlers contains the controllers (i.e. the handlers for the respective METHOD+URL routes described in router/routes.go). lib contains some DB, request-responder and logger logic. models contains the structs and their funcs to be mapped from/to JSON and the DB. Finally, router contains the router and the routes definition.
Basically, just by testing a single handler I ensure that my config, logger, DB, responder, router and the corresponding model are invoked (and, to some extent, tested as well).
Now, if this were PHP or Java or some other language, running that single handler test would also produce code coverage for all the other invoked parts of the code, even if they live in different folders (i.e. packages here). Unfortunately, this is not the case in Go.
And there is very little code in most of my lib files, often just one function (like InitDB() or ReadConfig() or NewRouter() or Logger()), so to get code coverage for them I would have to write trivial tests in those packages as well, even though they are already invoked and exercised by the tests for the main URL-handling packages.
Is there any way to get code coverage for the other packages that are pulled in, within one project?
You can import your packages within a single test and write tests for them based on how they'll be used within your application. For example, you can have one test file, main_test.go, which imports all the other packages, and then write tests for the methods in the imported packages.
Like this (in main_test.go):
package main

import (
    "testing"

    // Bare names like "lib" will not resolve; use your project's full
    // import path ("myproject" is just a placeholder here).
    "myproject/handlers"
    "myproject/lib"
    "myproject/models"
    // etc
)
// testing code here
However, in my personal opinion, it might be better to make sure that each package does what it should, and only that. The best way to do that is to test each package individually, with its own test suite.
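As an illustration of the per-package approach, here is a minimal sketch of a test living next to the config package (ReadConfig and the DBHost field are hypothetical names, not taken from the question):
package config

import "testing"

func TestReadConfig(t *testing.T) {
    // Fixture files can live in testdata/, which the Go toolchain ignores when building.
    cfg, err := ReadConfig("testdata/config.json")
    if err != nil {
        t.Fatalf("ReadConfig returned an error: %v", err)
    }
    if cfg.DBHost == "" {
        t.Error("expected DBHost to be set")
    }
}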

How can I use gradle to build my project both with and without our profiling aspects?

We've developed some profiling aspects that we would like to include in a testing build, but not in our production build. I'm looking for a best-practices way of structuring the build.gradle file and the source directories.
My initial thought was to create a compileJavaAJ task, and a jarAJ task which depends on compileJavaAJ. compileJavaAJ would look awfully similar to the compileJava defined in the aspectJ plugin, http://github.com/breskeby/gradleplugins/raw/0.9-upgrade/aspectjPlugin/aspectJ.gradle. The problem with just applying this plugin is that it completely replaces compileJava (i.e. the one using javac). I need two build targets - one that uses javac, the other that uses ajc. I welcome suggestions if there's a better approach though.
Next, I need to decide where to put the aspectJ code. I don't want to put it in src/main/java, because the java compiler will choke on it. So, I'm thinking of defining a new SourceSet, src/main/aspectJ, which only compileJavaAJ knows about. A SourceSet is supposed to model java code though, so I'm not quite sure if this is the correct approach.
Any input is greatly appreciated. Thanks!
I would use a property like "withAspectj" to differentiate between compiling with and without your aspects. Have a look at the following snippet:
if (project.hasProperty('withAspectj') && project.getProperty('withAspectj')) {
    sourceSets {
        main {
            java {
                srcDir 'src/main/aspectj'
            }
        }
    }
}
This snippet adds the directory src/main/aspectj to your main source set if a property named withAspectj evaluates to true. Now you can put all your aspects into this specific directory. If you don't pass the withAspectj property, the replaced compileJava task will compile your code without wiring the aspects into it.
But if you run your build from the command line:
gradle build -PwithAspectj=true
all aspects located in src/main/aspectj will be wired into your code.
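If you also want the AspectJ compiler itself to be applied only for that build (so the default build sticks to plain javac), one option is to apply the script plugin conditionally as well. A hedged sketch, assuming you keep a local copy of the aspectJ.gradle script referenced in the question:
// Only wire in the ajc-based compile when the profiling build is requested.
if (project.hasProperty('withAspectj') && project.getProperty('withAspectj')) {
    apply from: 'gradle/aspectJ.gradle'
}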
hope that helped,
regards,
René

Best practices for file system dependencies in unit/integration tests

I just started writing tests for a lot of code. There's a bunch of classes with dependencies to the file system, that is they read CSV files, read/write configuration files and so on.
Currently the test files are stored in the test directory of the project (it's a Maven 2 project), but for several reasons this directory doesn't always exist, so the tests fail.
Do you know best practices for coping with file system dependencies in unit/integration tests?
Edit: I'm not searching for an answer to the specific problem I described above. That was just an example. I'd prefer general recommendations on how to handle dependencies on the file system, databases, etc.
First, one should try to keep unit tests away from the filesystem; see this Set of Unit Testing Rules. If possible, have your code work against streams, which can be in-memory buffers for the unit tests and file streams in the production code.
If this is not feasible, you can have your unit tests generate the files they need. This makes the tests easy to read, as everything is in one file. It may also prevent permission problems.
You can mock the filesystem/database/network access in your unit tests.
You can consider the unit tests that rely on DB or file systems as integration tests.
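To make the stream-based approach concrete, here is a minimal sketch in Java; the CsvParser class and its behaviour are hypothetical, the point is only that the parser depends on a Reader rather than on a File, so the unit test can stay entirely in memory:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical parser: it depends on a Reader, not on a File path.
class CsvParser {
    List<String[]> parse(Reader source) throws IOException {
        List<String[]> rows = new ArrayList<>();
        BufferedReader reader = new BufferedReader(source);
        String line;
        while ((line = reader.readLine()) != null) {
            rows.add(line.split(","));
        }
        return rows;
    }
}

// The unit test never touches the disk:
class CsvParserTest {
    @org.junit.Test
    public void parsesTwoRows() throws IOException {
        List<String[]> rows = new CsvParser().parse(new StringReader("a,b\nc,d"));
        org.junit.Assert.assertEquals(2, rows.size());
    }
}
In production code the same parser is simply fed a FileReader (or an InputStreamReader), so nothing else changes.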
Dependencies on the filesystem come in two flavours here:
1. Files that your tests depend upon: if you need files to run the test, you can generate them in your tests and put them in a /tmp directory.
2. Files that your code depends upon: config files, or input files.
In the second case, it is often possible to restructure your code to remove the dependency on a file (e.g. java.io.File can be replaced with java.io.InputStream and java.io.OutputStream, etc.). This may not always be possible, of course.
You may also need to handle 'non-determinism' in the filesystem (I had a devil of a job debugging something on an NFS once). In this case you should probably wrap the file system in a thin interface.
At its simplest, this is just a helper method that takes a File and forwards the call on to that file:
InputStream getInputStream(File file) throws IOException {
    return new FileInputStream(file);
}
You can then replace this one with a mock which you can direct to throw the exception, or return a ByteArrayInputStream, or whatever.
The same can be said for URLs and URIs.
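For example, the wrapper can be expressed as a small interface and replaced by a stub in tests; FileSystem and FakeFileSystem below are hypothetical names used only for illustration:
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical thin interface around file access.
interface FileSystem {
    InputStream getInputStream(File file) throws IOException;
}

// Test stub: serves canned bytes, or simulates an I/O failure.
class FakeFileSystem implements FileSystem {
    private final byte[] contents; // null means "fail on access"
    FakeFileSystem(byte[] contents) { this.contents = contents; }
    @Override
    public InputStream getInputStream(File file) throws IOException {
        if (contents == null) {
            throw new IOException("simulated disk failure");
        }
        return new ByteArrayInputStream(contents);
    }
}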
There are two options for testing code that needs to read from files:
Keep the files related to the unit tests in source control (e.g. in a test data folder), so anyone who gets the latest and runs the tests always has the relevant files in a known folder relative to the test binaries. This is probably the "best practice".
If the files in question are huge, you might not want to keep them in source control. In this case, a network share that is accessible from all developer and build machines is probably a reasonable compromise.
Obviously most well-written classes will not have hard dependencies on the file system in the first place.
Usually, file system tests aren't very critical: The file system is well understood, easy to set up and to keep stable. Also, accesses are usually pretty fast, so there is no reason per se to shun it or to mock the tests.
I suggest that you find out why the directory doesn't exist and make sure that it does. For example, check the existence of a file or directory in setUp() and copy the files if the check fails. This only happens once, so the performance impact is minimal.
Give the test files, both in and out, names that are structurally similar to the unit test name.
In JUnit, for instance, I'd use:
File reportFile = new File("tests/output/" + getClass().getSimpleName() + "/" + getName() + ".report.html");

How to organize C++ test apps and related files?

I'm working on a C++ library that (among other things) has functions to read config files, and I want to add tests for this. So far, this has led me to create lots of valid and invalid config files, each with only a few lines that test one specific piece of functionality. But it has now become very unwieldy, as there are so many files, and also lots of small C++ test apps. Somehow this seems wrong to me :-) so do you have hints on how to organise all these tests, the test apps, and the test data?
Note: the library's public API itself is not easily testable (it requires a config file as a parameter). The juicy, bug-prone methods for actually reading and interpreting config values are private, so I don't see a way to test them directly.
So: would you stick with testing against real files; and if so, how would you organise all these files and apps so that they are still maintainable?
Perhaps the library could accept some kind of stream input, so you could pass in a string-like object and avoid all the input files? Or, depending on the type of configuration, you could provide "get/setAttribute()" functions to directly, publicly, fiddle with the parameters. If that is not really a design goal, then never mind. Data-driven unit tests are frowned upon in some places, but they are definitely better than nothing! I would probably lay out the code like this:
project/
  src/
  tests/
    test1/
      input/
    test2/
      input/
In each testN directory you would have a cpp file associated with the config files in the input directory.
Then, assuming you are using an xUnit-style test library (cppunit, googletest, unittest++, or whatever), you can add various testXXX() functions to a single class to test associated groups of functionality. That way you can cut down on the lots-of-little-programs problem by grouping at least some tests together.
The only problem with this is if the library expects the config file to be called something specific, or to be in a specific place. That shouldn't be the case, but if it is, it would have to be worked around by copying your test file to the expected location.
And don't worry about lots of tests cluttering your project up, if they are tucked away in a tests directory then they won't bother anyone.
Part 1.
As Richard suggested, I'd take a look at the CPPUnit test framework. That will drive the location of your test framework to a certain extent.
Your tests could be in a parallel directory located at a high-level, as per Richard's example, or in test subdirectories or test directories parallel with the area you want to test.
Either way, please be consistent in the directory structure across the project! Especially in the case of tests being contained in a single high-level directory.
There's nothing worse than having to maintain a mental mapping of source code in a location such as:
/project/src/component_a/piece_2/this_bit
and having the test(s) located somewhere such as:
/project/test/the_first_components/connection_tests/test_a
And I've worked on projects where someone did that!
What a waste of wetware cycles! 8-O Talk about violating Alexander's concept of Quality Without a Name.
Much better is having your tests consistently located w.r.t. location of the source code under test:
/project/test/component_a/piece_2/this_bit/test_a
Part 2
As for the API config files, make local copies of a reference config in each local test area as part of the test environment setup that runs before executing a test. Don't sprinkle copies of configs (or data) all through your test tree.
HTH.
cheers,
Rob
BTW Really glad to see you asking this now when setting things up!
In some tests I have done, I have actually used the test code to write the configuration files and then delete them after the test has made use of them. It pads out the code somewhat, and I have no idea whether it is good practice, but it worked. If you happen to be using Boost, its filesystem module is useful for creating directories, navigating directories, and removing files.
I agree with what @Richard Quirk said, but you might also want to make your test suite class a friend of the class you're testing and test its private functions.
For things like this I always have a small utility class that loads a config into a memory buffer, and from there it gets fed into the actual config class. This means the real source doesn't matter - it could be a file or a DB. For the unit test, the config is hard-coded in a std::string that is then passed to the class under test. You can easily simulate corrupted data for testing failure paths.
I use UnitTest++. I have the tests as part of the src tree. So:
solution/project1/src <-- source code
solution/project1/src/tests <-- unit test code
solution/project2/src <-- source code
solution/project2/src/tests <-- unit test code
Assuming that you have control over the design of the library, I would expect that you'd be able to refactor so that you separate the concern of actually reading a file from that of interpreting its contents as a configuration:
class FileReader reads the file and produces an input stream,
class ConfigFileInterpreter validates/interprets the contents of that input stream.
Now, to test FileReader you'd need only a very small number of actual files (empty, binary, plain text, etc.), and for ConfigFileInterpreter you would use a stub of the FileReader class that returns an input stream to read from. Now you can prepare all your various config situations as strings and you won't have to read so many files.
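A minimal sketch of that split, assuming a simple key=value config format; the method names and the format are illustrative assumptions, only the FileReader/ConfigFileInterpreter separation comes from the answer above:
#include <cassert>
#include <fstream>
#include <istream>
#include <map>
#include <sstream>
#include <string>

// Production code: wraps a real file and exposes it as a generic input stream.
class FileReader {
public:
    explicit FileReader(const std::string& path) : in_(path) {}
    std::istream& stream() { return in_; }
private:
    std::ifstream in_;
};

// Interprets "key=value" lines from any std::istream - it never touches a file.
class ConfigFileInterpreter {
public:
    explicit ConfigFileInterpreter(std::istream& in) {
        std::string line;
        while (std::getline(in, line)) {
            const std::string::size_type pos = line.find('=');
            if (pos != std::string::npos)
                values_[line.substr(0, pos)] = line.substr(pos + 1);
        }
    }
    std::string value(const std::string& key) const { return values_.at(key); }
private:
    std::map<std::string, std::string> values_;
};

int main() {
    // Unit test: feed the interpreter an in-memory stream, no file needed.
    std::istringstream fake("host=localhost\nport=8080\n");
    ConfigFileInterpreter config(fake);
    assert(config.value("port") == "8080");
    return 0;
}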
You will not find a unit testing framework worse than CppUnit. Seriously, anybody who recommends CppUnit has not really looked at any of the competing frameworks.
So yes, go for a unit testing framework, but do not use CppUnit.