How to separate production and test code in Haskell - unit-testing

In other languages I like to put my unit tests in a different directory structure from the production code to keep things cleanly separated. Is there a typical convention in Haskell of how to do that or something similar?

There is a typical convention codified at http://www.haskell.org/haskellwiki/Structure_of_a_Haskell_project.
Additionally, you can add the test build to the main cabal file, as in https://github.com/ekmett/speculation/blob/master/speculation.cabal
There are some bonuses to the separate-cabal method: notably, test helpers such as the QuickCheck generators for your datatypes live in a second, project-test style package that others can import if they use your data structures in their own projects. I prefer the single-cabal approach, though; it depends on the purpose of your library.
Haskell testing workflow is a useful read for more testing info.
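For illustration, here is a rough sketch of the kind of test-support module such a separate project-test package might expose, so downstream users of your datatypes can reuse your QuickCheck generators (the module and type names are made up):

    module MyLib.Test.Arbitrary where

    import Test.QuickCheck (Arbitrary (..), elements)

    -- Stand-in for a datatype that would normally live in the main library.
    data Priority = Low | Medium | High
      deriving (Eq, Show)

    -- Because this instance ships in an ordinary package (not a test-only one),
    -- other projects can depend on it and reuse the generator in their own suites.
    instance Arbitrary Priority where
      arbitrary = elements [Low, Medium, High]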

I think the best example I've come across so far is the Snap project:
http://github.com/snapframework/snap-core
Check the test folder: they maintain their own cabal package just for testing, and have a shell script (runTestAndCoverage.sh) that executes the final compiled test suite.
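For a rough idea of what such a compiled test suite boils down to, here is a minimal sketch of a test driver using HUnit (the tests themselves are placeholders); the important part is that the executable exits non-zero on failure so a wrapper script or CI job can detect it:

    module Main (main) where

    import System.Exit (exitFailure, exitSuccess)
    import Test.HUnit

    -- Placeholder tests; a real suite would import these from its test modules.
    tests :: Test
    tests = TestList
      [ "addition works"        ~: (1 + 1 :: Int)           ~?= 2
      , "reverse is involutive" ~: reverse (reverse "snap") ~?= "snap"
      ]

    main :: IO ()
    main = do
      counts <- runTestTT tests
      if errors counts + failures counts > 0
        then exitFailure   -- non-zero exit status signals failure to the caller
        else exitSuccess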
Good Luck.

It's early days for me with Haskell development as well, but I've used cabal to organize my tests under a tests/ subdirectory.

Related

Should utility test classes in a library go under test or main?

E.g. I have a file ConnectionDummy in the Maven project Common. It is used only by tests in Email-Lib, Scanner and Common. Note that Common itself also includes unit tests, but these files are just helpers for unit tests. Should it go in Common's test or main?
This is a Maven setting, but I think the question is probably common to many technologies.
It depends a little bit on how strongly you feel about having test (utility) code in your production system down the road. Is code that's just sitting there, doing nothing, bad for your production environment? Imagine, for example, that your test utility has nifty annotations that Spring picks up at run time.
I've come across people who don't care. Personally I feel strongly about it and don't want test code in my production environments. If that's your thinking too, there are two ways to go with Maven that I've seen in our projects:
1. The test utility code goes in Common's test folder. You'll end up building test JARs (test-jar) for your Common project, and you may run into transitive dependency issues: any <scope>test</scope> dependencies of your Common project that are needed by the test utility code will not be there when someone depends on your test JAR. This is a potential dependency nightmare; I've seen the "including" project re-defining a whole lot of dependencies that were already nicely defined by the "included" project but weren't picked up by Maven because they were transitive at test scope.
2. The test utility code goes in a separate test project, e.g. CommonTest, which produces jar rather than test-jar packaging. That way the test utility becomes a "utility" not only to your dependent projects, but to the original Common project itself, just the same. By doing so, your test folder becomes truly an internal matter of a project. Maybe that's how the Maven folks meant it; why else would making a test-jar not be a standard option? :)

How to structure a Haskell project?

I'm currently trying to do a Haskell project using the Test Driven Development methodology. In Java, we can create a nicely structured project containing src and bin folders, and within those, main and test folders for unit testing with JUnit. I was just wondering whether there is a standard way to get such a structure in Haskell: a folder for source, a folder for binaries, and within the source folder two sub-folders, one for testing and one for the main source.
My references are always Structure of a Haskell project and How to write a Haskell program, which spell out some defaults that the community more or less seems to follow. They have worked well for me so far, but my projects have not been very large as of yet.
What is suggested in Structure of a Haskell project sounds similar to what you have outlined in your post, with a few minor modifications, e.g. the testing folder lives in the same directory as the src folder.
Edit:
cabal init will generate a minimal skeleton for you, including the cabal file with relevant dependencies (at least if you already have some files with imports). It is a great start, but only part of what you are looking for.
Ideally, as a project grows, the cabal file and directory hierarchy would be kept up to date automatically; I am unaware of any publicly available tool that does this, though. It is on my maybe-one-day list, as I am sure it is for many others.
-odir and -hidir can be used with ghc to put the *.o and *.hi files in separate directories. You can read more in the GHC user guide's section on separate compilation.
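For example, an invocation along these lines keeps the generated files out of the source tree (the directory and file names are just placeholders):

    ghc --make -isrc -odir build/obj -hidir build/hi -o build/myprog src/Main.hs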
Edit2:
Other relevant/overlapping posts:
Basic Structure of a Haskell Program
Haskell module naming conventions
Large-scale design in Haskell?
The modern answer to this is to use The Haskell Tool Stack. This will structure the project for you using sane defaults.
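For instance, stack new myproject with the default template produces a layout roughly like the following (exact file names vary by template and stack version), which already separates library, executable, and test code:

    myproject/
      package.yaml
      stack.yaml
      src/
        Lib.hs      -- library code
      app/
        Main.hs     -- executable entry point
      test/
        Spec.hs     -- test-suite entry point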

Is there a standard way to test scripts/executables in Jenkins?

We have a project that contains a library of Python and Scala packages, as well as Bourne, Python and Perl executable scripts. Although the library has good test coverage, we don't have any tests on the scripts.
The current testing environment uses Jenkins, Python, virtualenv, nose, Scala, and sbt.
Is there a standard/common way to incorporate testing of scripts in Jenkins?
Edit: I'm hoping for something simple like Python's unittest for shell scripts, like this:
assertEquals expected.txt commandline
assertError commandline --bogus
assertStatus 11 commandline baddata.in
Have you looked at shunit2? https://github.com/kward/shunit2
It allows you to write testable shell scripts in Bourne shell, bash, or ksh.
Not sure how you can integrate it into what you're describing, but it generates output similar to other unit test suites.
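For comparison, the assertions the question sketches can also be approximated in a small Haskell driver using the process package. This is only a rough sketch; the helper name is invented and the command and file names are placeholders taken from the question:

    module Main (main) where

    import System.Exit (ExitCode (..), exitFailure, exitSuccess)
    import System.Process (readProcessWithExitCode)

    -- Run a command and check that it exits with the expected status code.
    assertStatus :: Int -> FilePath -> [String] -> IO Bool
    assertStatus expected cmd args = do
      (code, _out, _err) <- readProcessWithExitCode cmd args ""
      let actual = case code of
            ExitSuccess   -> 0
            ExitFailure n -> n
      if actual == expected
        then pure True
        else do
          putStrLn (cmd ++ ": expected exit " ++ show expected
                        ++ ", got " ++ show actual)
          pure False

    main :: IO ()
    main = do
      ok <- assertStatus 11 "commandline" ["baddata.in"]
      if ok then exitSuccess else exitFailure   -- non-zero exit is what Jenkins keys on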
I do not know how 'standard' this is, but if you truly practice TDD, your scripts should also be developed with TDD. How you connect your TDD tests with Jenkins then depends on the TDD framework you are using: you can generate JUnit reports, for example, which Jenkins can read, or your tests can simply return a failed status, etc.
If your script requires another project, then my inclination is to make a new Jenkins project, say 'system-qa'.
This would be a downstream project of the Python project, with a dependency on both the Python project and the in-house project.
If you were using dependency resolution/publishing technology, say Apache Ivy (http://ant.apache.org/ivy/), and if these existing projects published a packaged version of their code (as simple as a .tar.gz, perhaps), then the system-qa project could declare dependencies (again, using Ivy) on both the Python package and the in-house project package, download them using Ivy, extract/install them, run the tests, and exit.
So in summary, the system-qa project's build script is responsible for retrieving dependencies, running tests against those dependencies, and then perhaps publishing a standardized test output format like JUnit XML (but at a minimum returning 0 or non-0 so Jenkins knows how the build went).
I think this is a technically correct solution, but also a lot of work. It's a judgement call whether it's worth it.

unit test build files

What are the best policies for unit testing build files?
The reason I ask is that my company produces highly reliable embedded devices. Software patches are just not an option, as they cost our customers thousands to distribute. Because of this we have very strict code quality procedures (unit tests, code reviews, traceability, etc.). Those procedures are now being applied to our build files (autotools, if you must know; I expect pity), but it feels like a hack.
Uh... the project compiles... mark the build files as reviewed and unit tested.
There has got to be a better way. Ideas?
Here's the approach we've taken when building a large code base (many millions of lines of code) across more than a dozen platforms.
1. Makefile changes are reviewed by the build team. These people know the errors people tend to make in our build environment, and they are the ones who feel the brunt of it when a build breaks, so they're motivated to find issues.
2. Minimize what needs to go in a Makefile, so there are fewer opportunities for error. We have a layer on top of make that generates the Makefile. A developer just has to indicate in the higher-level file, using tags, that (for example) a given target is a shared library or a unit test. Usually a target is defined on one line, which then results in multiple settings/targets in the generated Makefile. Similar things could be done with build tools like scons that let you abstract away platform-specific details, making targets very simple.
3. Unit tests of our build tool. The tool is written in Perl, so we use Perl's Test::More unit test framework to verify that the tool generates the correct Makefile given our higher-level file. If we used something like scons instead, I'd use its testing framework.
4. Unit tests of our nightly build/test scripts. We have a set of scripts that start nightly builds on each platform, run static analysis tools, run unit tests, run functional tests, and report all results to a central database. We test the various scripts individually, mostly using the shunit2 unit-testing framework for sh/bash/ksh/etc.
5. End-to-end tests of our build/test process. I am working on an end-to-end test that operates on a tiny source tree rather than our production code, since the latter can take hours to build. These tests are mainly aimed at verifying that our build targets still work and report results into our central database even after, for example, upgrading our code coverage tool or making changes to our build scripts.
Have your build files compile a known version of your software (or a simpler piece of code that is similar from a build perspective) and compare the result obtained with your new build tools to an expected result (built with a validated version of the build tools).
In my projects, build files don't change very often. What's more, I can reuse build files from earlier projects, changing only a few variables (which I moved to an easy-to-recognize section). That's why, for me, it is unnecessary to unit-test the build files. That can be different in other projects.

Adding unit tests to an existing project

My question is quite relevant to something asked before but I need some practical advice.
I have "Working effectively with legacy code" in my hands and I 'm using advice from the book as I read it in the project I 'm working on. The project is a C++ application that consists of a few libraries but the major portion of the code is compiled to a single executable. I 'm using googletest for adding unit tests to existing code when I have to touch something.
My problem is how can I setup my build process so I can build my unit tests since there are two different executables that need to share code while I am not able to extract the code from my "under test" application to a library. Right now I have made my build process for the application that holds the unit tests link against the object files generated from the build process of the main application but I really dislike it. Are there any suggestions?
Working Effectively With Legacy Code is the best resource for how to start testing old code. There are really no short term solutions that won't result in things getting worse.
I'll sketch out a makefile structure you can use:
all: tests executables

# Note: in a real Makefile, each recipe line below must be indented with a tab.
run-tests: tests
        <commands to run the test suite>

executables: <file list>
        <commands to build the files>

tests: unit-test1 unit-test2 etc

unit-test1: <files that are required for your unit-test1>
        <commands to build unit-test1>
That is roughly what I do, as a sole developer on my project.
If your test app only links the object files it needs to test, then you are effectively already treating them as a library; it should be possible to group those object files into a separate library used by both the main app and the test app. If you can't, then I don't see what you are doing as too bad an alternative.
If you are having to link in other object files that are not under test, then that is a sign of dependencies that need to be broken, for which you have the perfect book.
We have similar problems and use a system like the one suggested by Vlion.
I personally would continue doing as you are doing, or consider having a build script that makes the target application and the unit tests at the same time (two resulting binaries off the same codebase). Yes, it smells fishy, but it is very practical.
Kudos to you, and good luck with your testing.
I prefer one test executable per test. This enables link-time seams and also supports TDD, as you can work on one unit without worrying about the rest of your code.
I make the libraries depend on all of the tests. Hopefully this means your tests are only run when the code actually changes.
If you do get a failure the tests will interrupt the build process at the right place.