Advanced module unit test setup in Julia - unit-testing

I have a custom Julia package with what I hope is a fairly standard structure: a single package, a src directory with the sources, and a test directory with all the tests.
The package has four modules: one "main" entry-point module and three submodules.
I am slowly adding tests to the test directory. The tests import the modules with the using keyword.
The problem is that I often want to test a "private" method that does not need to be visible from the outside, so I end up exporting functions I would otherwise keep internal.
How can I solve this? I was thinking that each module could have a "Private" submodule containing all these "private" functions and constants used for unit testing, so that I don't bloat the exports of my clean module API.

Copied from the comments, as that seems to be the solution the OP is looking for:
You can always test functions that are not exported by calling them with their fully qualified name, e.g. MyModule.MySubModule.func()

Related

julia: rerun unittests upon changes to files

Are there Julia libraries that can run unit tests automatically when I make changes to the code?
In Python there is the pytest-xdist plugin, which can rerun unit tests when you make changes to the code. Does Julia have a similar library?
A simple solution could be made using the standard library module FileWatching; specifically FileWatching.watch_file. Despite the name, it can be used with directories as well. When something happens to the directory (e.g., you save a new version of a file in it), it returns an object with a field, changed, which is true if the directory has changed. You could of course combine this with Glob to instead watch a set of source files.
You could have a separate Julia process running, with the project's environment active, and use something like:
julia> import Pkg; import FileWatching: watch_file

julia> while true
           event = watch_file("src")
           if event.changed
               try
                   Pkg.pkg"test"
               catch err
                   @warn "Error during testing:\n$err"
               end
           end
       end
More sophisticated implementations are possible; with the above you would need to interrupt the loop with Ctrl-C to break out. But this does work for me and happily reruns tests whenever I save a file.
If you use a GitHub repository, there are ways to set up Travis or AppVeyor to do this. This is the testing method used by many of the registered Julia packages. You will need to write the unit test suite (with using Test) and place it in a /test subdirectory of the repository. You can search for Julia and those web services for details.
Use a standard GNU Makefile and call it from various places, depending on your use case:
Your .juliarc if you want to check for tests on startup.
Cron if you want them checked regularly
Inside your module's init function to check every time a module is loaded.
Since GNU makefiles detect changes automatically, calls to make will be silently ignored in the absence of changes.

Where to put shared code for tests in a Go package? [duplicate]

I have a Go package with multiple files. Following the usual Go convention, I am creating an associated test file for each source file in the package.
In my case the different tests use the same test helper functions. I don't want these functions in the package source files because they are only used for testing, and I would also like to avoid replicating this code in each test file.
Where should I put this code that is shared between all the test source files but is not part of the package?
You just put it in any of the test files, and that's all. Test files using the same package clause belong to the same test package and can refer to each other's exported and unexported identifiers without any import statements.
Also note that you're not required to create a separate _test.go file for each of the .go files, and you can have an xx_test.go file without a "matching" xx.go file in the package.
For example if you're writing package a, having the following files:
a/
a.go
b.go
a_test.go
b_test.go
For black-box testing you'd use the package clause package a_test in a_test.go and b_test.go. Having a func util() in file a_test.go, you can use it in b_test.go too.
If you're writing white-box testing, you'd use package a in the test files, and again, you can refer any identifiers declared in a_test.go from b_test.go (and vice versa) without any imports.
Note, however, that if the package clauses in a_test.go and b_test.go do not match (e.g. a_test.go uses package a and b_test.go uses package a_test), then they belong to different test packages and you can't use identifiers declared in one from the other.

Do ES5 modules accelerate webpack build time compared with ES6 modules?

By convention, when we write an ES6 module, we put the source code in a src folder, compile it to ES5 with babel-loader and webpack into a lib or dist folder, set the main entry to the dist folder, and then publish to npm.
On the one hand, users can consume the module without webpack and the code still runs. On the other hand, when using webpack, ES5 code may reduce babel-loader time because it is already ES5.
What I am confused about is the second point: when using webpack, does ES5 code in node_modules reduce babel-loader time, and so accelerate webpack build performance?
The question is really about ES5 npm modules and webpack build performance; although this convention is well established, I just want to understand its effect on webpack build performance. Thanks!
Yes, generally public packages are distributed with sources that have already been transformed. The performance benefit, with regard to webpack and babel-loader, is that you can consume these sources as-is without having to process them with babel-loader, so you'll commonly see:
{
  test: /\.js$/,
  loader: 'babel-loader',
  exclude: /node_modules/
}
That said, I too am confused about this part of the convention, specifically why one would want to parse ES5 code with Babel at all, since no transformation would eventually take place.
Either way, the sources are always parsed by webpack itself, but not having to parse and transform them beforehand with babel-loader should improve performance.
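For context, the rule above sits inside the module configuration of a webpack config file. A minimal sketch (entry and output paths are placeholders), assuming webpack 2+ where the full loader name babel-loader is required:

```javascript
// webpack.config.js -- minimal sketch; paths and filenames are hypothetical.
module.exports = {
  entry: './src/index.js',
  output: { filename: 'bundle.js' },
  module: {
    rules: [
      {
        test: /\.js$/,           // a RegExp, not the string '\.js$'
        exclude: /node_modules/, // skip already-transpiled ES5 packages
        use: 'babel-loader',     // full name; the bare 'babel' shorthand no longer resolves
      },
    ],
  },
};
```

The exclude line is exactly where the build-time saving comes from: packages shipping ES5 in node_modules are bundled as-is and never pass through Babel.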

Grails test-app classpath

I'm trying to use test support classes within my tests. I want these classes to be available for all different test types.
My directory structure is as follows;
/test/functional
/test/integration
/test/unit
/test/support
I have test helper classes within the /test/support folder that I would like to be available to each of the different test types.
I'm using GGTS and I've added the support folder to the classpath. But whenever I run my integration tests with 'test-app' I get a compiler error: 'unable to resolve class mypackage.support.MyClass'.
When I run my unit tests from within GGTS the support classes are found and used. I presume this is because the integration tests run my app in its own JVM.
Is there any way of telling grails to include my support package when running any of my tests?
I don't want my test support classes to be in my application source folders.
The reason that it works for your unit tests inside the IDE is that all source folders get compiled into one directory, and that is added to your classpath along with the jars GGTS picks up from the project dependencies. This is convenient but misleading, because it doesn't take into account that Grails uses different classpaths for run-app and each of the test phases, which you see when you run the integration tests. GGTS doesn't really run the tests; it runs the same grails test-app process that you do from the commandline, and captures its output and listens for build events so it can update its JUnit view.
It's possible to add extra jar files to the classpath for tests because you can hook into an Ant event and add it to the classpath before the tests start. But the compilation process is a lot more involved and it looks like it would be rather ugly/hackish to get it working, and would likely be brittle and stop working in the future when the Grails implementation changes.
Here are some specifics about why it'd be non-trivial. I was hoping that you could call GrailsProjectTestCompiler.compileTests() for your extra directory, but you need to compile it along with the test/unit directory for unit tests and the test/integration directory for integration tests, and the compiler (GrailsProjectTestCompiler) presumes that each test phase only needs to compile that one directory. That compiler uses Gant, and each test phase has its own Grailsc subclass (org.grails.test.compiler.GrailsTestCompiler and org.grails.test.compiler.GrailsIntegrationTestCompiler) registered as taskdefs. So it should be possible to subclass them and add logic to compile both the standard directory and the shared directory, and register those as replacements, but that requires also subclassing and reworking GrailsProjectTestRunner (which instantiates GrailsProjectTestCompiler), and hooking into an event to replace the projectTestRunner field in _GrailsTest.groovy with your custom one, and at this point my brain hurts and I don't want to think about this anymore :)
So instead of all this, I'd put the code in src/groovy and src/java, but in test-specific packages that make it easy to exclude the compiled classes from your WAR files. You can do that with a grails.war.resources closure in BuildConfig.groovy, e.g.
grails.war.resources = { stagingDir ->
    println '\nDeleting test classes\n'
    delete(verbose: true) {
        // adjust as needed to only delete test-specific classes
        fileset dir: stagingDir, includes: '**/test/**/*.class'
    }
    println '\nFinished deleting test classes\n'
}

What is your favorite/recommended project structure and file structure for Unit Testing using Boost?

I have not used Unit Testing so far, and I intend to adopt this procedure. I was impressed by TDD and certainly want to give it a try - I'm almost sure it's the way to go.
Boost looks like a good choice, mainly because it is actively maintained. With that said, how should I go about implementing a working and elegant file structure and project structure? I am using VS 2005 on Windows XP. I have been googling about this and was left more confused than enlightened.
Our Boost based Testing structure looks like this:
ProjectRoot/
    Library1/
        lib1.vcproj
        lib1.cpp
        classX.cpp
        ...
    Library2/
        lib2.vcproj
        lib2.cpp
        toolB.cpp
        classY.cpp
        ...
    MainExecutable/
        main.cpp
        toolA.cpp
        toolB.cpp
        classZ.cpp
        ...
    Tests/
        unittests.sln
        ut_lib1/
            ut_lib1.vcproj (referencing the lib1 project)
            ut_lib1.cpp (with BOOST_AUTO_TEST_CASE) - testing the public interface of lib1
            ut_classX.cpp - testing of a class or other entity might be split into a
                separate test file for size reasons, or if the entity is not part of
                the public interface of the library
            ...
        ut_lib2/
            ut_lib2.vcproj (referencing the lib2 project)
            ut_lib2.cpp (with BOOST_AUTO_TEST_CASE) - testing the public interface of lib2
            ...
        ut_toolA/
            ut_toolA.vcproj (referencing the toolA.cpp file)
            ut_toolA.cpp - testing functions of toolA
        ut_toolB/
            ut_toolB.vcproj (referencing the toolB.cpp file)
            ut_toolB.cpp - testing functions of toolB
        ut_main/
            ut_main.vcproj (referencing all required cpp files from the main project)
            ut_classZ.cpp - testing classZ
            ...
This structure was chosen for a legacy project, where we had to decide on a case-by-case basis which tests to add and how to group test projects for existing modules of source code.
Things to note:
Unit Testing code is always compiled separately from production code.
Production projects do not reference the unit testing code.
Unit Testing projects include source-files directly or only reference libraries, depending on what makes sense given the use of a certain code-file.
Running the unit tests is done via a post-build step in each ut_*.vcproj.
All our production builds automatically also run the unit tests (in our build scripts).
In our real (C++) world you have to make trade-offs between legacy issues, developer convenience, compile times, etc. I think our project structure is a good trade-off. :-)
I split my core code up into either .libs or .dlls and then have my Boost test projects depend on these lib/dll projects. So I might end up with:
ProjectRoot
    Lib1Source
    Lib1Tests
    Lib2Source
    Lib2Tests
The alternative is to store your source in a separate folder and add the files to both your main apps project and the unit test project but I find this a little messy. YMMV.