Gradle force test run if external file is modified - unit-testing

I am using Gradle to run my JUnit tests in a big project with different modules, where test caching is really valuable because I don't want to run every test for every code change. However, I have several tests that read files from other modules, such as configuration files, and changes to those files may cause the tests to fail. Because there is no hard Gradle dependency between the tests and those files, the test run is cached, so it can be reported as successful even though a change to the configuration file might actually break the test.
I know that I can skip Gradle test caching by using cleanTest, but I don't want to lose the caching feature for every build I make. So I would like to tell Gradle that the tests depend on files from other modules, so that if those files change the test run is not served from the cache. Any suggestions on how this can be achieved?

To achieve this, declare a dependency between the modules.
Assume the project structure looks like this:
.
|-- module-a/
|   |-- src/main/resources/aa/bb/config.json
|   `-- build.gradle
|
|-- module-b/
|   |-- src/test/java/com/example/DemoTest.java
|   `-- build.gradle
|
`-- settings.gradle
Add the dependency to module-b/build.gradle:
...
dependencies {
    testImplementation(project(':module-a'))
}
The config could be loaded inside DemoTest:
...
class DemoTest {
    @Test
    void qq() {
        InputStream inputStream = this.getClass().getResourceAsStream("/aa/bb/config.json");
        ...
    }
}
Then any change to the config inside module-a will cause the tests inside module-b to rerun, because the config is packaged into module-a's output, which is on module-b's test classpath and is therefore one of the inputs Gradle tracks for the test task.
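If you prefer not to add a module dependency, a sketch of an alternative (not from the answer above; the path is an assumption about the layout) is to declare the external files as explicit inputs of module-b's test task in module-b/build.gradle, so that changing them invalidates the cached test result:
test {
    // treat module-a's resources as an input of this test task, so a change
    // there marks the task out-of-date and the tests run again
    inputs.dir("$rootDir/module-a/src/main/resources").withPathSensitivity(PathSensitivity.RELATIVE)
}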

Related

take liquibase scripts not from resource folder during unit tests

I have the following project structure
bin
  start.sh
db
  liquibase_scripts
    ...
    schema.json
main
  java
    ...
test
  resources
    liquibase_scripts
      ...
      schema.json
So when I build my project, the db folder with the Liquibase scripts is added to the distribution.
In unit tests I use an H2 database and want to load the schema from db/liquibase. I create this bean:
@Bean
public SpringLiquibase springLiquibase() {
    SpringLiquibase springLiquibase = new SpringLiquibase();
    springLiquibase.setDataSource(dataSource());
    springLiquibase.setChangeLog("classpath:/liquibase/sam.json");
    return springLiquibase;
}
The problem is that the setChangeLog method looks for the changelog in the test resources folder.
So to solve the problem I copied all the Liquibase scripts into the test/resources directory.
But this is not OK, because now I have two copies of the scripts in different folders.
Is there a way to make springLiquibase.setChangeLog find scripts in any folder, not only in test/resources?
In the Maven build configuration you can define testResources, which may be directories or files. It looks like this:
<build>
    <testResources>
        <testResource>
            <directory>${project.basedir}/db/liquibase_scripts</directory>
        </testResource>
    </testResources>
</build>
With such a configuration Maven copies the files into the target/test-classes directory. Thanks to that, those files can be used as test resources in the code, but there is no duplication in the project files.
I'm using such a configuration (Liquibase + testResources) in one of my projects. To better reproduce your problem, I've created a separate branch where you can find the configuration described above - see: this commit diff. All tests pass after the change.
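For illustration, with that mapping the contents of db/liquibase_scripts are copied to the root of target/test-classes, so the bean from the question could point straight at the classpath copy (assuming schema.json is the master changelog, as the directory listing suggests):
@Bean
public SpringLiquibase springLiquibase() {
    SpringLiquibase springLiquibase = new SpringLiquibase();
    springLiquibase.setDataSource(dataSource());
    // the scripts are copied by Maven at build time, so no duplicate
    // copy under test/resources is needed
    springLiquibase.setChangeLog("classpath:/schema.json");
    return springLiquibase;
}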

How to run multiple test files in a go package

I have a project structure like this:
pkg
|
--pkg.go
--pkg_test.go
--a.go
--a_test.go
--b.go
--b_test.go
--c.go
--c_test.go
I wish to get the coverage for all the source files belonging to the package, i.e. pkg.go, a.go, b.go and c.go. However, when I run:
go test -v pkg
tests are run for only one of the four Go files.
Is there any way I can test my package without moving all the test code into one file, keeping the file structure intact?
If your working directory is that of your package, to test all of the files you can run:
go test ./...
If you want to get test coverage, you can run:
go test ./... -cover
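If you also want a detailed report, the standard Go tooling can write a coverage profile and render it (the file name coverage.out is just an example):
go test ./... -coverprofile=coverage.out
go tool cover -html=coverage.out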

How to fetch multiple subdirectories as one artifact with GoCD?

I am using GoCD to build a project with a large number of modules, and I have modeled it as a pipeline with two stages (stage 1 builds the code, stage 2 runs the tests). The directory structure after a successful build stage looks roughly like this:
myproject/
|-- myproject-module1
| |-- build <-- created by stage 1, required by stage 2
| `-- src
|-- myproject-module2
| |-- build <-- created by stage 1, required by stage 2
| `-- src
|-- myproject-module3
| |-- build <-- created by stage 1, required by stage 2
| `-- src
`-- ... many more modules ...
In stage 1 I have configured a Build Artifact with source */build, and in stage 2 I'm trying to fetch all the build folders again with source *, with the intention that they end up in the correct location next to the src folder inside each of the project modules.
Unfortunately, I have found no way to achieve this yet. GoCD seems to create a separate ZIP file of all the */build folders, and during the fetch the file *.zip cannot be found (I assume that it really looks for a file with that exact name, instead of using wildcards). Of course I could hard-code all the module names and individually fetch myproject-module[1:n], but that's exactly what I want to avoid.
Does anyone have some advice on how this could be achieved?
In this discussion from 2014, it is claimed that wildcards cannot be used to fetch artifacts. Is that really still the case?!
I don't know if it's possible to do this using built-in features of GoCD, but it should definitely be possible using the REST API.
Knowing the current pipeline name, you can get all of its available stages and work out the previous one. Next, using the ability to download an artifact directory as a zip archive, you can get what you want.
So you can add this as a script for the second stage, which will fetch the zipped artifact, and after that you can continue with testing.
To do that, I can recommend my implementation of the GoCD API - yagocd. This Python library lets you program the aforementioned logic in a natural way.
Some tips:
you can get the current pipeline name from GO_PIPELINE_NAME (there are a lot of environment variables at your service)
to find a pipeline by name you can use the PipelineManager: go.pipelines.get($GO_PIPELINE_NAME, $GO_PIPELINE_COUNTER)
having a pipeline instance, you can iterate over its stages via the pipeline_instance.stages object
having a stage, you can get its jobs and download a directory at some path using the go.artifacts.directory_wait method
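A rough sketch of how those pieces could fit together (the client construction and attribute names below are assumptions based on the tips above, not verified against the yagocd documentation):
import os
from yagocd import Yagocd  # assumed entry point of the yagocd client

# hypothetical server URL and credentials - adjust for your installation
go = Yagocd(server='https://my-gocd-server:8153', auth=('user', 'password'))

# GoCD sets these environment variables for every job
pipeline = go.pipelines.get(os.environ['GO_PIPELINE_NAME'],
                            os.environ['GO_PIPELINE_COUNTER'])

# iterate over the stages of the current pipeline instance to locate the
# build stage, then download its artifact directories as zip archives via
# go.artifacts.directory_wait(...) (argument list intentionally omitted)
for stage in pipeline.stages:
    print(stage)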
If you have questions about the implementation, I can try to help you.
You can select artifacts by wildcard; you just have to give the artifact a destination (the dest attribute in the XML config). You can do that multiple times inside the same job, just use a different dest each time:
<artifact src="myproject/myproject-module1/build/*" dest="module1/" />
<artifact src="myproject/myproject-module2/build/*" dest="module2/" />
<artifact src="myproject/myproject-module3/build/*" dest="module3/" />
The corresponding <fetchartifact ...> tags then need to use srcdir="module1" etc.
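For illustration, the fetch tasks in the stage-2 job could then look something like this (the stage and job names are placeholders, and dest may need adjusting so the build folder lands exactly where the tests expect it):
<fetchartifact stage="build-stage" job="build-job" srcdir="module1" dest="myproject/myproject-module1" />
<fetchartifact stage="build-stage" job="build-job" srcdir="module2" dest="myproject/myproject-module2" />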

automake: automatically run unit tests

I am maintaining an autoconf package and wanted to integrate automatic testing. I use the Boost Unit Test Framework for my unit tests and was able to successfully integrate it into the package.
That is, it can be compiled via make check, but it is not run (although I read that make check both compiles and runs the tests). As a result, I have to run it manually after building the tests, which is cumbersome.
Makefile.am in the test folder looks like this:
check_PROGRAMS = prog_test
prog_test_SOURCES = test_main.cpp ../src/class1.cpp class1_test.cpp class2.cpp ../src/class2_test.cpp ../src/class3.cpp ../src/class4.cpp
prog_test_LDADD = $(BOOST_FILESYSTEM_LIB) $(BOOST_SYSTEM_LIB) $(BOOST_UNIT_TEST_FRAMEWORK_LIB)
Makefile.am in the root folder:
SUBDIRS = src test
dist_doc_DATA = README
ACLOCAL_AMFLAGS = ${ACLOCAL_FLAGS} -I m4
Running test/prog yields the output:
Running 4 test cases...
*** No errors detected
(I don't think you need the contents of my test cases in order to answer my question, so I omitted them for now)
So how can I make automake run my tests every time I run make check?
At least one way of doing this involves setting the TESTS variable. Here's what the automake documentation says about it:
If the special variable TESTS is defined, its value is taken to be a list of programs or scripts to run in order to do the testing.
So adding the line
TESTS = $(check_PROGRAMS)
should instruct it to run the tests on make check.
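Applied to the Makefile.am from the question, the test folder's Makefile.am would then look like this:
check_PROGRAMS = prog_test
prog_test_SOURCES = test_main.cpp ../src/class1.cpp class1_test.cpp class2.cpp ../src/class2_test.cpp ../src/class3.cpp ../src/class4.cpp
prog_test_LDADD = $(BOOST_FILESYSTEM_LIB) $(BOOST_SYSTEM_LIB) $(BOOST_UNIT_TEST_FRAMEWORK_LIB)

# run the check programs as tests during `make check`
TESTS = $(check_PROGRAMS)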

Grails test-app classpath

I'm trying to use test support classes within my tests. I want these classes to be available for all different test types.
My directory structure is as follows;
/test/functional
/test/integration
/test/unit
/test/support
I have test helper classes within the /test/support folder that I would like to be available to each of the different test types.
I'm using GGTS and I've added the support folder to the classpath. But whenever I run my integration tests with 'test-app' I get a compiler error: 'unable to resolve class mypackage.support.MyClass'.
When I run my unit tests from within GGTS the support classes are found and used. I presume this is because the integration tests run my app in its own JVM.
Is there any way of telling grails to include my support package when running any of my tests?
I don't want my test support classes to be in my application source folders.
The reason that it works for your unit tests inside the IDE is that all source folders get compiled into one directory, and that is added to your classpath along with the jars GGTS picks up from the project dependencies. This is convenient but misleading, because it doesn't take into account that Grails uses different classpaths for run-app and each of the test phases, which you see when you run the integration tests. GGTS doesn't really run the tests; it runs the same grails test-app process that you do from the commandline, and captures its output and listens for build events so it can update its JUnit view.
It's possible to add extra jar files to the classpath for tests, because you can hook into an Ant event and add them to the classpath before the tests start. But the compilation process is a lot more involved, and it looks like it would be rather ugly/hackish to get it working, and would likely be brittle and stop working in the future when the Grails implementation changes.
Here are some specifics about why it'd be non-trivial. I was hoping that you could call GrailsProjectTestCompiler.compileTests() for your extra directory, but you need to compile it along with the test/unit directory for unit tests and the test/integration directory for integration tests, and the compiler (GrailsProjectTestCompiler) presumes that each test phase only needs to compile that one directory. That compiler uses Gant, and each test phase has its own Grailsc subclass (org.grails.test.compiler.GrailsTestCompiler and org.grails.test.compiler.GrailsIntegrationTestCompiler) registered as taskdefs. So it should be possible to subclass them and add logic to compile both the standard directory and the shared directory, and register those as replacements, but that requires also subclassing and reworking GrailsProjectTestRunner (which instantiates GrailsProjectTestCompiler), and hooking into an event to replace the projectTestRunner field in _GrailsTest.groovy with your custom one, and at this point my brain hurts and I don't want to think about this anymore :)
So instead of all this, I'd put the code in src/groovy and src/java, but in test-specific packages that make it easy to exclude the compiled classes from your WAR files. You can do that with a grails.war.resources closure in BuildConfig.groovy, e.g.
grails.war.resources = { stagingDir ->
    println '\nDeleting test classes\n'
    delete(verbose: true) {
        // adjust as needed to only delete test-specific classes
        fileset dir: stagingDir, includes: '**/test/**/*.class'
    }
    println '\nFinished deleting test classes\n'
}