We have a project that contains a library of Python and Scala packages, as well as Bourne, Python and Perl executable scripts. Although the library has good test coverage, we don't have any tests on the scripts.
The current testing environment uses Jenkins, Python, virtualenv, nose, Scala, and sbt.
Is there a standard/common way to incorporate testing of scripts in Jenkins?
Edit: I'm hoping for something simple like Python's unittest for shell scripts, like this:
    assertEquals expected.txt commandline
    assertError commandline --bogus
    assertStatus 11 commandline baddata.in
Have you looked at shunit2? https://github.com/kward/shunit2
It lets you write unit tests for shell scripts written in Bourne shell, bash, or ksh.
I'm not sure how it would integrate into the pipeline you're describing, but it generates output similar to other unit-test frameworks.
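For instance, here is a minimal sketch of what the assertions from your question might look like as a shunit2 test (commandline, expected.txt, and baddata.in are the placeholders from the question; adjust the final path to wherever you installed shunit2):

    #!/bin/sh
    # test_commandline.sh -- minimal shunit2 sketch

    testOutputMatchesExpected() {
      assertEquals "$(cat expected.txt)" "$(commandline)"
    }

    testBogusOptionFails() {
      commandline --bogus >/dev/null 2>&1
      assertNotEquals "expected a failing exit status" 0 $?
    }

    testBadDataExitsWith11() {
      commandline baddata.in >/dev/null 2>&1
      assertEquals 11 $?
    }

    # load and run shunit2
    . ./shunit2

When any assertion fails, the test script exits non-zero, which Jenkins treats as a failed build step.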
I do not know how 'standard' this is, but if you truly practice TDD, your scripts should also be developed test-first. How you connect those tests with Jenkins then depends on the TDD framework you are using: you can generate JUnit XML reports, for example, which Jenkins can read, or your tests can simply return a non-zero exit status on failure.
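Since your environment already uses nose, for example, the Python side can emit JUnit-style XML that Jenkins' JUnit report publisher understands (the file name below is just nose's default):

    nosetests --with-xunit --xunit-file=nosetests.xml

A shell framework like shunit2 can take the simpler route of exiting non-zero when any test fails, which Jenkins likewise reports as a failed build.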
If your script requires another project, then my inclination is to make a new Jenkins project, say 'system-qa'.
This would be a downstream project of the Python project, with dependencies on both the Python project and the in-house project.
If you were using a dependency resolution/publishing tool such as Apache Ivy (http://ant.apache.org/ivy/), and if these existing projects published a packaged version of their code (something as simple as a .tar.gz, perhaps), then the system-qa project could declare dependencies (again, using Ivy) on both the Python package and the in-house project package, download them with Ivy, extract/install them, run the tests, and exit.
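As a hedged illustration, the system-qa project's ivy.xml might declare those two dependencies roughly like this (the organisation and module names are hypothetical):

    <ivy-module version="2.0">
      <info organisation="com.example" module="system-qa"/>
      <dependencies>
        <dependency org="com.example" name="python-lib" rev="latest.integration"/>
        <dependency org="com.example" name="inhouse-project" rev="latest.integration"/>
      </dependencies>
    </ivy-module>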
So in summary, the system-qa project's build script is responsible for retrieving the dependencies, running tests against them, and then perhaps emitting a standardized test output format such as JUnit XML (but at a minimum returning 0 or non-0 to clue Jenkins in on how the build went).
I think this is a technically correct solution, but also a lot of work. Judgement call required if it's worth it.
I have a simple module to test with a few inline pa_ounit tests; I've set up the directory in the OASIS style and got it all to build.
For reference I've been using https://github.com/janestreet/textutils
How would one execute the unit tests for the above repo? I'm assuming there's an executable .ml file to write, but what goes in it, how is it built, and does it extend the tests described at the module level in any way?
I've read the docs for pa_ounit and they just make me more confused, ha.
As the pa_ounit README says, run the executable that contains the tests with the inline-test-runner argument.
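For example (a hedged sketch; the exact arguments depend on your pa_ounit version, and the executable and library names here are hypothetical), after building a test executable you would run something like:

    ./test_runner.byte inline-test-runner mylib

Without that first argument the executable runs normally and the inline tests are not executed.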
Even without pa_ounit (when using plain OUnit), the file with tests is compiled and then executed. You should probably try OUnit itself before you start using the syntax extension so you can get the feel of the system.
OASIS, a popular build automation tool, allows you to build tests and run them with "make test" easily. See https://ocaml.org/learn/tutorials/setting_up_with_oasis.html#Tests
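As a rough sketch (all names here are hypothetical), the _oasis file would declare a test executable and a Test section along these lines:

    Executable test_runner
      Path:         tests
      MainIs:       test_runner.ml
      Install:      false
      BuildDepends: oUnit, mylib

    Test main
      Command:   $test_runner
      TestTools: test_runner

After regenerating the build system with oasis setup, ocaml setup.ml -test (or the generated make test) builds and runs it.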
I have a (flat) multi-project layout. I'm running Gradle 2.1, but an upgrade would be possible.
At the moment I'm migrating an Ant build to Gradle. During this migration I would like to exclude/skip/disable a single project from being tested, since its tests take a long while to run.
I could only find tips on how to skip tests completely, but that's not what I want, because I also need to run the tests of subsequently added projects, to see if there are any runtime dependencies missing.
Try:
gradle -x :your_project_name:test
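The -x (--exclude-task) flag skips just that project's test task while everything else still builds and tests. If you'd rather make the exclusion permanent for the duration of the migration, a hedged alternative is to disable the task in that subproject's build file ('slow-project' is a placeholder name):

    // in slow-project/build.gradle
    test {
        enabled = false
    }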
I have a maven build process that publishes executable jars and their tests to Nexus.
I have another maven build process that needs to access these jars (executable + test) and run the tests.
How do I go about it? So far I have managed to do this only if the jar is exploded to class files.
I am new to maven and completely lost in the documentation.
Update 2022-03-11
The feature has been implemented, see https://stackoverflow.com/a/17061755/1589700 for details
Original answer
Surefire and Failsafe do not currently support running tests from within a jar.
This is largely a case of not being able to identify the tests.
There are two ways to get the tests to run (rough sketches of both follow below).
1. Use a test suite that lists all the tests from the test-jar. Because the suite will be in src/test/java (more precisely, compiled into target/test-classes), it will be picked up, and all the tests in the suite will be run by Surefire/Failsafe (assuming the suite's class name matches the includes rule: it starts or ends with Test).
2. Use the Maven Dependency Plugin's unpack-dependencies goal to unpack the test-jar into target/test-classes (this screams of hack, but works quite well).
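A hedged sketch of option 1 (the package and class names are hypothetical; note the suite's name ends with Test so the default includes pick it up):

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Suite listing test classes that live in the test-jar dependency.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        com.example.lib.FooTest.class,
        com.example.lib.BarTest.class
    })
    public class DependencyJarSuiteTest {
    }

And a sketch of option 2, unpacking the test-jar before the test phase (the artifact id is a placeholder):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <id>unpack-test-jar</id>
          <phase>generate-test-resources</phase>
          <goals>
            <goal>unpack-dependencies</goal>
          </goals>
          <configuration>
            <includeArtifactIds>my-lib</includeArtifactIds>
            <includeClassifiers>tests</includeClassifiers>
            <outputDirectory>${project.build.directory}/test-classes</outputDirectory>
          </configuration>
        </execution>
      </executions>
    </plugin>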
The main issue with the first option is that you cannot easily run just one test from the suite, and you need to name every test from the test-jar.
For that reason I tend to favour option 2... There is the added benefit that option 2 does not mean writing code to work around a limitation in a build tool plugin... The less you lock yourself into a specific build tool, the better IMHO
This actually works quite well with the newer Surefire and Failsafe plugins; see these related questions:
Run JUnit Tests contained in dependency jar using Maven Surefire
run maven tests from classpath
So you don't need to unpack the jar anymore; you just provide the group and artifact id of the dependencies to scan (this works with both "main jar" dependencies and "test-jar" dependencies).
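Concretely, a minimal sketch of the configuration (the group and artifact ids are placeholders; the same dependenciesToScan parameter is available for Failsafe):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <dependenciesToScan>
          <dependency>com.example:my-lib</dependency>
        </dependenciesToScan>
      </configuration>
    </plugin>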
The attached test-jar can be used as a normal dependency in another project, which supports reuse of code in the test area, but you can't run tests straight out of the jar. If you really need that, you have to write at least a single suite to start the tests from the jar.
In other languages I like to put my unit tests in a different directory structure from the production code to keep things cleanly separated. Is there a typical convention in Haskell of how to do that or something similar?
There is a typical convention codified at http://www.haskell.org/haskellwiki/Structure_of_a_Haskell_project.
Additionally, you can add the test build to the main cabal as in https://github.com/ekmett/speculation/blob/master/speculation.cabal
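For instance, a minimal test-suite stanza in the main .cabal file might look like this (the names and dependencies are placeholders):

    test-suite tests
      type:           exitcode-stdio-1.0
      hs-source-dirs: tests
      main-is:        Main.hs
      build-depends:  base, QuickCheck

which you can then run with cabal configure --enable-tests followed by cabal test.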
There are some bonuses to the separate-cabal method, namely that testing helpers, such as the QuickCheck generators for your datatypes, become available in a second, project-test style cabal package that others can import if they are using your data structures in their projects. But I prefer the single-cabal approach. It depends on the purpose of your library, though.
The question 'Haskell testing workflow' is useful for more testing info.
I think the best example I've come across so far for this is the Snap project:
http://github.com/snapframework/snap-core
Check the test folder: they maintain a separate cabal package just for testing, with a shell script (runTestAndCoverage.sh) that executes the final compiled test suite.
Good luck.
It's early days for me with Haskell development as well, but I've used cabal to organize my tests under a tests/ subdirectory.
What are the best policies for unit testing build files?
The reason I ask is that my company produces highly reliable embedded devices. Software patches are simply not an option, as they cost our customers thousands to distribute. Because of this we have very strict code-quality procedures (unit tests, code reviews, traceability, etc.). Those procedures are being applied to our build files (autotools, if you must know; I expect pity), but it feels like a hack.
Uh... the project compiles... mark the build files as reviewed and unit tested.
There has got to be a better way. Ideas?
Here's the approach we've taken when building a large code base (many millions of lines of code) across more than a dozen platforms.
Makefile changes are reviewed by the build team. These people know the errors people tend to make in our build environment, and they are the ones who feel the brunt of it when a build breaks, so they're motivated to find issues.
Minimize what needs to go in a Makefile, so there are fewer opportunities for error. We have a layer on top of make, that generates the Makefile. A developer just has to indicate in the higher-level file, using tags, that for example a given target is a shared library or a unit test. Usually a target is defined on one line, which then results in multiple settings/targets in the generated Makefile. Similar things could be done with build tools like scons that allow one to abstract away things like platform-specific details, making targets very simple.
Unit tests of our build tool. The tool is written in Perl, so we use Perl's Test::More unit test framework there to verify that the tool generates the correct Makefile given our higher-level file. If we used something like scons instead, I'd use their testing framework.
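For example, a hedged sketch of such a test (the MakefileGen module and its API are hypothetical stand-ins for our in-house generator):

    use strict;
    use warnings;
    use Test::More tests => 2;

    use MakefileGen;    # hypothetical in-house generator module

    # Generate a Makefile from a tiny higher-level description file.
    my $makefile = MakefileGen::generate('t/data/shared_lib_and_test.def');

    like($makefile, qr/^check:/m, 'generated Makefile has a check target');
    like($makefile, qr/\.so\b/,   'shared-library target appears in the output');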
Unit tests of our nightly build/test scripts. We have a set of scripts that start nightly builds on each platform, run static analysis tools, run unit tests, run functional tests, and report all results to a central database. We test the various scripts individually, mostly using the shunit2 unit-testing framework for sh/bash/ksh/etc.
End-to-end tests of our build/test process. I am working on an end-to-end test that operates on a tiny source tree rather than our production code, since the latter can take hours to build. These tests are mainly aimed at verifying that our build targets still work and report results into our central database even after, for example, upgrading our code coverage tool or making changes to our build scripts.
Have your build files compile a known version of your software (or a simpler piece of code that is similar from a build perspective) and compare the result obtained with your new build tools to an expected result (built with a validated version of the build tools).
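A minimal sketch of that idea in shell (the sample project and file names are placeholders):

    #!/bin/sh
    # Build a small known-good tree with the new build tools and compare
    # the output against a reference built with validated tools.
    set -e
    make -C sample-project clean all
    cmp sample-project/output.bin reference/output.bin \
      && echo "build tools produce the expected output"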
In my projects, build files don't change very often. What's more, I can reuse build files from earlier projects, changing only some variables (which I moved to an easy-to-recognize section). That's why, for me, it is unnecessary to unit-test the build files. That may be different in other projects.