Unit testing compiler output

As part of a university project, we have to write a compiler for a toy language. To do some testing for this, I was considering how best to go about writing something like unit tests. As the compiler is being written in Haskell, HUnit and QuickCheck are both available, but perhaps not quite appropriate.
How can we do any kind of non-manual testing?
The only idea I've had is effectively compiling to Haskell too, seeing what the output is, and using a shell script to compare this to the output of the compiled program. That is quite a bit of work, and isn't very elegant either.
The unit testing is to help us, and isn't part of assessed work itself.

This really depends on what parts of the compiler you are writing. It is nice if you can keep phases distinct to help isolate problems, but, in any phase, and even at the integration level, it is perfectly reasonable to have unit tests that consist of pairs of source code and hand-compiled code. You can start with the simplest legal programs possible, and ensure that your compiler outputs the same thing that you would if compiling by hand.
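For illustration, here is a minimal HUnit sketch of such pair-based tests, assuming a pure entry point compile :: String -> String that returns the generated code; the toy-language syntax and the expected instruction names below are invented placeholders.
import Test.HUnit

compile :: String -> String
compile = error "stand-in for your compiler's real entry point"

-- Each test pairs a tiny legal source program with its hand-compiled output.
tests :: Test
tests = TestList
  [ "empty program"   ~: compile ""           ~?= ""
  , "integer literal" ~: compile "print 42"   ~?= "PUSH 42\nPRINT\n"
  , "addition"        ~: compile "print 1+2"  ~?= "PUSH 1\nPUSH 2\nADD\nPRINT\n"
  ]

main :: IO ()
main = runTestTT tests >> return ()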
As complexity increases, and hand-compiling becomes unwieldy, it is helpful for the compiler to keep some kind of log of what it has done. Then you can consult this log to determine whether or not specific transformations or optimizations fired for a given source program.
Depending on your language, you might consider a generator of random programs from a collection of program fragments (in the QuickCheck vein). This generator can test your compiler's stability, and ability to deal with potentially unforeseen inputs.
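A hedged QuickCheck sketch of that idea: glue random statement fragments together and check that the compiler never crashes on the result. The fragment syntax and the compile function are placeholders for whatever your toy language actually looks like.
import Test.QuickCheck

compile :: String -> String
compile = error "stand-in for your compiler's real entry point"

-- A generator that glues together random statement fragments.
genProgram :: Gen String
genProgram = do
  n     <- choose (1, 20)
  stmts <- vectorOf n (elements fragments)
  return (unlines stmts)
  where
    fragments =
      [ "x := 1"
      , "y := x + 2"
      , "print y"
      , "if x then print x else print 0"
      , "while x do x := x - 1"
      ]

-- Stability property: the compiler should produce some output (or a clean
-- error value) for every generated program, never crash.
prop_compilerIsStable :: Property
prop_compilerIsStable = forAll genProgram $ \prog ->
  length (compile prog) >= 0

main :: IO ()
main = quickCheck prop_compilerIsStable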

Unit tests should test a small piece of code, typically one class or one function. The lexical and semantic analysis will each have their own unit tests. The intermediate representation (IR) generator will also have its own tests.
A unit test covers a simple test case: it invokes the function under test in a controlled environment and verifies (asserts) the result of the function execution. A unit test usually tests one behavior only and has the following structure, called AAA (a small example follows the list):
Arrange: create the environment the function will be called in
Act: invoke the function
Assert: verify the result
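A small HUnit example in that shape, assuming a hypothetical lexer function tokenize and Token type; the names are placeholders.
import Test.HUnit

data Token = TIdent String | TNumber Int deriving (Eq, Show)

tokenize :: String -> [Token]
tokenize = error "stand-in for your real lexer"

test_tokenizeNumber :: Test
test_tokenizeNumber = TestCase $ do
  -- Arrange: here the environment is just the input string
  let input = "42"
  -- Act: invoke the function under test
  let result = tokenize input
  -- Assert: verify the result
  assertEqual "a bare number should lex to a single TNumber token"
              [TNumber 42] result

main :: IO ()
main = runTestTT test_tokenizeNumber >> return ()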

Have a look at shelltestrunner. Here are some example tests. It is also being used in this compiler project.

One option is the approach this guy uses to test real compilers: get together with as many people as you can talk into it, have each of you compile and run the same set of programs, and then compare the outputs. Be sure to add every test case you use, as more inputs make it more effective. A little fun with automation and source control and you can make it fairly easy to maintain.
Be sure to get it OKed by the prof first, but as you will only be sharing test cases and outputs, I don't see where he will have much room to object.

Testing becomes more difficult once the output of your program goes to the console (such as standard output). Then you have to resort to some external tool, like grep or expect to check the output.
Keep the return values from your functions in data structures for as long as possible. If the output of your compiler is, say, assembly code, build a string in memory (or a list of strings) and output it at the last possible moment. That way you can test the contents of the strings more directly and quickly.
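A sketch of that advice in Haskell, assuming a hypothetical code generator genCode that returns a list of assembly lines, with a single emit function as the only place that touches the console.
import Test.HUnit

type Asm = [String]

-- Hypothetical code generator: returns instructions instead of printing them.
genCode :: Int -> Asm
genCode n = ["PUSH " ++ show n, "PRINT"]

-- The only place that writes to standard output.
emit :: Asm -> IO ()
emit = putStr . unlines

-- Tests inspect the list directly, no grep or expect needed.
tests :: Test
tests = TestList
  [ "pushes the literal" ~: genCode 7 ~?= ["PUSH 7", "PRINT"]
  , "ends with a PRINT"  ~: last (genCode 7) ~?= "PRINT"
  ]

main :: IO ()
main = runTestTT tests >> emit (genCode 7)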

Related

Program written in generated code based on unit tests

As I was doing test-driven development, I pondered whether a hypothetical program could be completely developed from code generated based on tests, i.e. could there be a generator that creates code specifically to pass the tests? Would the future of programming languages just be to write tests?
I think this would be a tough one as, at least for the initial generations of such technology, developers would be very skeptical of generated code's correctness. So human review would have to be involved as well.
As a simple illustration of what I mean, suppose you write 10 tests for a function, with sample inputs and expected outputs covering every scenario you can think of. A program could trivially generate code which passed all of these tests with nothing more than a rudimentary switch statement (your ten inputs matched to their expected outputs). This code would obviously not be correct, but it would take a human to see that.
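To make the point concrete, here is what such trivially "generated" code could look like in Haskell, with a bare case expression playing the role of the switch statement; the function and its tests are invented for illustration.
-- A "generated" square function that passes every hand-written test with a
-- bare lookup, yet is obviously not a correct implementation of squaring.
square :: Int -> Int
square n = case n of
  1 -> 1
  2 -> 4
  3 -> 9
  _ -> 0   -- every input the tests never mention is silently wrong

-- These checks all pass, but the function is only right for 1, 2 and 3.
main :: IO ()
main = mapM_ print [square 1 == 1, square 2 == 4, square 3 == 9]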
That's just a simple example. It isn't hard to imagine more sophisticated programs which might not generate a switch statement but still produce solutions that aren't actually correct, and which could be wrong in much more subtle ways. Hence my suggestion that any technology along these lines would be met with a deep level of skepticism, at least at first.
If code can be generated completely, then the basis of the generator would have to be a specification that exactly describes the code. This generator would then be something like a compiler that cross compiles one language into an other.
Tests are not such a language. They only assert that a specific aspect of the code functionality is valid and unchanged. By doing so they scaffold the code so that it does not break, even when it is refactored.
But how would I compare these two ways of development?
1) If the generator works correctly, then the specification is always transferred into correct code. I postulate that this code is tested by design and needs no additional test. Better TDD the generator than the generated code.
2) Whether you have a specification that leads to generated code or specifications expressed as tests that ensure that code works is quite equivalent in my eyes.
3) You can combine both ways of development. Generate a program framework with a tested generator from a specification and then enrich the generated code using TDD. Attention: you then have two different development cycles running in one project. That means you have to ensure that you can always regenerate the generated code when specifications change and that your additional code still fits correctly into the generated code.
Just one small example: imagine a tool that can generate code from a UML class diagram. This could be done in a way that lets you develop the methods with TDD, but the structure of the classes is defined in UML and you would not need to test it again.
While it's possible that sometime in the future simple tests could be used to generate code:
assertEquals(someclass.get_value(), true)
but getting the correct output from a black-box integration test is, I would guess, an NP-complete problem:
assertEquals(someclass.do_something(1), file_content("/some/file"))
assertEquals(someclass.do_something(2), file_content("/some/file"))
assertEquals(someclass.do_something(2), file_content("/some/file2"))
assertEquals(someclass.do_something(3), file_content("/some/file2"))
Does this mean that the resulting code will always write to /some/file? Does it mean that the resulting code should always write to /some/file2? Either could be true. What if it needs to only do the minimal set to get the tests to pass? Without knowing the context and writing very exact and bounding tests, no code could figure out (at this point in time) what the test author intended.

Unit testing a compiler

What is considered the best approach to unit test a complex unit such as a compiler?
I've written a few compilers and interpreters over the years, and I do find this kind of code quite hard to test in a good way.
If we take something like abstract syntax tree generation, how would you test this using TDD?
Small constructs might be easy to test.
e.g. something along the lines of:
string code = @"public class Foo {}";
AST ast = compiler.Parse(code);
since that won't generate a lot of AST nodes.
But if I actually want to test that the compiler can generate an AST for something like a method:
[TestMethod]
public void Can_parse_integer_instance_method_in_class()
{
    string code = @"public class Foo { public int method(){ return 0;}}";
    AST ast = compiler.Parse(code);
}
What would you assert on?
Manually defining an AST that represents the given code and asserting that the generated AST conforms to the manually defined one seems horribly cumbersome and might even be error-prone.
So what are the best tactics for TDD'ing complex scenarios like this?
Firstly, if you test a compiler, you cannot have enough tests! Users really rely on compiler-generated output as if it were always a gold standard, so really be aware of quality. So if you can, use every test you can come up with!
Secondly, use all testing methods available, yet use them where appropriate. Indeed, you may be able to mathematically prove, that a certain transformation is correct. If you are able to do so, you should.
But every compiler I've seen some internals of involves heuristics and a lot of optimized, hand-crafted code at its core; thus assisted proving methods are typically no longer applicable. Here testing comes into play, and I mean a lot of it!
When collecting tests, please consider different cases:
Positive standard-conformance tests: your frontend should accept certain code patterns, and the compiler must produce a correctly running program from them. Tests in this category either need a golden-reference compiler or generator that produces the correct output for the test program, or they involve hand-written programs that include a check against values furnished by human reasoning.
Negative tests: every compiler has to reject faulty code, like syntax errors, type mismatches, et cetera. It must produce certain types of error and warning messages. I don't know of any method to auto-generate such tests, so these need to be human-written, too.
Transformation tests: whenever you come up with a fancy optimization within your compiler (middle-end), you probably have some example code in mind that demonstrates the optimization. Be aware of transformations before and after such a module; they might need special options to your compiler, or a bare-bones compiler with just that module plugged in. Test a reasonably big set of surrounding module combinations, too. I usually did regression testing on the intermediate representation before and after the specific transformation, defining a reference by intensive reasoning with colleagues. Try to write code on both sides of the transformation, i.e. code snippets that you want to have transformed and slightly different ones that must not be; a sketch of such a check follows below.
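As a sketch of such a transformation test (all names invented): run a single pass over a hand-written piece of IR and compare it against a reference worked out by hand, with one snippet that must be transformed and one that must not.
import Test.HUnit

-- A toy IR and a trivial constant-folding pass standing in for your own.
data Expr = Lit Int | Var String | Add Expr Expr deriving (Eq, Show)

constantFold :: Expr -> Expr
constantFold (Add a b) =
  case (constantFold a, constantFold b) of
    (Lit x, Lit y) -> Lit (x + y)
    (a', b')       -> Add a' b'
constantFold e = e

tests :: Test
tests = TestList
  [ -- code that must be transformed
    "folds constant adds" ~:
      constantFold (Add (Lit 1) (Lit 2)) ~?= Lit 3
    -- slightly different code that must not be transformed
  , "leaves non-constant adds alone" ~:
      constantFold (Add (Var "x") (Lit 2)) ~?= Add (Var "x") (Lit 2)
  ]

main :: IO ()
main = runTestTT tests >> return ()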
Now, this sounds like an awful lot of work! Yes it does, but there is help: there are several commercial test suites for (C) compilers out in the world, and experts who might help you apply them. Here is a small list of those known to me:
ACE: SuperTest Compiler Test and Validation Suite
Perennial C Compiler Validation Suite
Bugseng: ECLAIR
Tests in the GCC and LLVM environment
you name it! Or google "Compiler Test Suite"
First of all, parsing is usually a trivial part of a compiler project. From my experience it never takes more than 10% of the time (unless we are talking about C++, but you wouldn't be asking questions here if you were designing that), so you'd rather not invest much of your time into parser tests.
Still, TDD (or however you call it) has its share in developing the middle-end, where you often want to verify that, e.g., optimizations you've just added actually did result in the expected code transformation. From my experience, tests like this are usually implemented by giving the compiler specially crafted test programs and grepping the output assembly for expected patterns (was this loop unrolled four times? did we manage to avoid memory writes in this function? etc.). Grepping assembly isn't as good as analyzing a structured representation (S-exprs or XML), but it's cheap and works fine in most cases. It's awfully hard to maintain as your compiler grows, though.
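A rough sketch of that style of test in Haskell, assuming a hypothetical compileToAsm entry point; the source snippets and assembly patterns are made up for illustration.
import Data.List (isInfixOf)
import Test.HUnit

compileToAsm :: String -> [String]
compileToAsm = error "stand-in for your compiler's assembly output"

-- True if no line of the assembly contains the given pattern.
asmLacks :: String -> [String] -> Bool
asmLacks pat = not . any (pat `isInfixOf`)

tests :: Test
tests = TestList
  [ "loop unrolled: no branch back to the loop head" ~:
      assertBool "expected no jmp back to .Lloop"
        (asmLacks "jmp .Lloop" (compileToAsm "for i in 0..3 { f(i) }"))
  , "memory writes avoided in this function" ~:
      assertBool "expected no store instructions"
        (asmLacks "store" (compileToAsm "f(a, b) = a + b"))
  ]

main :: IO ()
main = runTestTT tests >> return ()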

Help with TDD approach to a real world problem: linker

I'm trying to learn TDD. I've seen examples and discussions about how it's easy to TDD a coffee vending machine firmware from smallest possible functionality up. These examples are either primitive or very well thought-out, it's hard to tell right away. But here's a real world problem.
Linker.
A linker, at its simplest, reads one object file, does magic, and writes one executable file. I don't think I can simplify it further. I do believe the linker design may be evolved, but I have absolutely no idea where to start. Any ideas on how to approach this?
Well, probably the whole linker is too big a problem for the first unit test. I can envision some rough structure beforehand. What a linker does is:
Represents an object file as a collection of segments. Segments contain code, data, symbol definitions and references, debug information etc.
Builds a reference graph and decides which segments to keep.
Packs remaining segments into a contiguous address space according to some rules.
Relocates references.
My main problem is with bullet 1. Bullets 2, 3, and 4 basically take a regular data structure and convert it into a platform-dependent mess based on some configuration. I can design that, and the design looks feasible. But bullet 1 has to take a platform-dependent mess, in one of the several supported formats, and convert it into a regular structure.
The task looks generic enough. It happens everywhere you need to support multiple input formats, be it image processing, document processing, you name it. Is it possible to TDD? It seems like either the test is too simple and I easily hack it to green, or it's a bit more complex and I need to implement the whole object/image/document format reader, which is a lot of code. And there is no middle ground.
First, have a look at "Growing Object Oriented Software Guided By Tests" by Freeman & Pryce.
Now, my attempt to answer a difficult question in a few lines.
TDD does require you to think about (i.e. design) what you're going to do. You have to:
1. Think in small steps. Very small steps.
2. Write a short test to prove that the next small piece of behaviour works.
3. Run the test to show that it fails.
4. Do the simplest thing possible to get the test to pass.
5. Refactor ruthlessly to remove duplication and improve the structure of the code.
6. Run the test(s) again to make sure it all still works.
7. Go back to 1.
An initial idea (design) of how your linker might be structured will guide your initial tests. The tests will enforce a modular design (because each test is only testing a single behaviour, and there should be minimal dependencies on other code you've written).
As you proceed you may find your ideas change. The tests you've already written will allow you to refactor with confidence.
The tests should be simple. It is easy to 'hack' a single test to green. But after each 'hack' you refactor. If you see the need for a new class or algorithm during the refactoring, then write tests to drive out its interface. Make sure that the tests only ever test a single behaviour by keeping your modules loosely coupled (dependency injection, abstract base classes, interfaces, function pointers etc.) and use fakes, stubs and mocks to isolate the code under test from the rest of your system.
Finally use 'customer' tests to ensure that you have delivered functional features.
It's a difficult change in mind-set, but a lot of fun and very rewarding. Honest.
You're right, a linker seems a bit bigger than a 'unit' to me, and TDD does not excuse you from sitting down and thinking about how you're going to break down your problem into units. The Sudoku saga is a good illustration of what goes wrong if you don't think first!
Concentrating on your point 1, you have already described a good collection of units (of functionality) by listing the kinds of things that can appear in segments, and hinting that you need to support multiple formats. Why not start by dealing with a simple case like, say, a file containing just a data segment in the binary format of your development platform? You could simply hard-code the file as a binary array in your test, and then check that it interprets just that correctly. Then pick another simple case, and test for that. Keep going.
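A sketch of that first test, assuming a hypothetical parseObject function and an invented byte layout for the simplest supported format; the point is only that the input file is hard-coded as a byte array inside the test.
import qualified Data.ByteString as BS
import Data.Word (Word8)
import Test.HUnit

data Segment = DataSegment [Word8] deriving (Eq, Show)

parseObject :: BS.ByteString -> Either String [Segment]
parseObject = error "stand-in for your real object-file reader"

-- A minimal, hand-written object file: a made-up 4-byte header followed by
-- one data segment containing the bytes 1, 2, 3.
tinyObjectFile :: BS.ByteString
tinyObjectFile = BS.pack [0x4F, 0x42, 0x4A, 0x01, 1, 2, 3]

test_parsesDataSegment :: Test
test_parsesDataSegment = TestCase $
  assertEqual "one data segment with the payload bytes"
    (Right [DataSegment [1, 2, 3]])
    (parseObject tinyObjectFile)

main :: IO ()
main = runTestTT test_parsesDataSegment >> return ()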
Now the magic bit is that pretty soon you'll see repeated structures in your code and in your tests, and because you've got tests you can be quite aggressive about refactoring it away. I suspect this is the bit that you haven't experienced yet, because you say "It seems like either test is too simple and I easily hack it to green, or it's a bit more complex and I need to implement the whole object/image/document format reader which is a lot of code. And there is no middle ground." The point is that you should hack them all to green, but as you're doing that you are also searching out the patterns in your hacks.
I wrote a (very simple) compiler in this fashion, and it mostly worked quite well. For each syntactic construction, I wrote the smallest program that I could think of which used it in some observable way, and had the test compile the program and check that it worked as expected. I used a proper parser generator as you can't plausibly TDD your way into one of them (you need to use a little forethought!) After about three cycles, it became obvious that I was repeating the code to walk the syntax tree, so that was refactored into something like a Visitor.
I also had larger-scale acceptance tests, but in the end I don't think these caught much that the unit tests didn't.
This is all very possible.
A sample from the top of my head is NHAML.
This is an ASP.NET ViewEngine that converts plain text to native .NET code.
You can have a look at source code and see how it is tested.
I guess what I do is come up with layers and blocks and sub-divide to the point where I might be thinking about code and then start writing tests.
I think your tests should be quite simple: it's not the individual tests that are the power of TDD but the sum of the tests.
One of the principles I follow is that a method should fit on a screen - when that's the case, the tests are usually simple enough.
Your design should allow you to mock out lower layers so that you're only testing one layer.
TDD is about specification, not test.
From your simplest spec of a linker, your TDD test just has to check whether an executable file has been created during the linker magic when you feed it an object file (a sketch of this first test follows the bullets below).
Then you write a linker that makes your test succeed, e.g.:
check whether input file is an object file
if so, generate a "Hello World!" executable (note that your spec didn't specify that different object files would produce different executables)
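A sketch of that very first spec as a test, assuming a hypothetical link function that reads an object file path and writes an executable path; the file names are placeholders.
import System.Directory (doesFileExist)
import Test.HUnit

link :: FilePath -> FilePath -> IO ()
link _objFile _exeFile = error "stand-in for your linker entry point"

test_producesExecutable :: Test
test_producesExecutable = TestCase $ do
  link "tests/hello.o" "tests/hello.exe"      -- feed it an object file
  exists <- doesFileExist "tests/hello.exe"   -- the spec: an executable appears
  assertBool "linker should have written an executable" exists

main :: IO ()
main = runTestTT test_producesExecutable >> return ()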
Then you refine your spec and your TDD (these are your four bullets).
As long as you can write a specification you can write TDD test cases.

How does unit testing work when the program doesn't lend itself to a functional style?

I'm thinking of the case where the program doesn't really compute anything; it just DOES a lot. Unit testing makes sense to me when you're writing functions which calculate something and you need to check the result, but what if you aren't calculating anything? For example, a program I maintain at work relies on having the user fill out a form, then opening an external program, and automating the external program to do something based on the user input. The process is fairly involved. There are about 3000 lines of code (spread out across multiple functions), but I can't think of a single thing which it makes sense to unit test.
That's just an example though. Should you even try to unit test "procedural" programs?
Based on your description these are the places I would look to unit test:
Does the form validation of user input work correctly?
Given valid input from the form, is the external program called correctly?
Feed user input to the external program and see if you get the right output.
From the sound of your description, the real problem is that the code you're working with is not modular. One of the things I find with unit testing is that code that is difficult to test is either not modular enough or has an awkward interface. Try to break the code down into smaller pieces and you'll find places where it makes sense to write unit tests; the sketch below shows one way to check that the external program is called correctly.
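One possible shape for that "is the external program called correctly" test, sketched in Haskell with invented names: the launcher is passed in as a function, so the test can substitute a recording fake instead of actually starting the external program.
import Data.IORef
import Test.HUnit

data FormInput = FormInput { userName :: String, userAge :: Int }

-- Production code would pass in something that shells out; because the
-- launcher is just a parameter, the test can swap it for a fake.
processForm :: (String -> [String] -> IO ()) -> FormInput -> IO ()
processForm launch form =
  launch "external-tool" ["--name", userName form, "--age", show (userAge form)]

test_launchesToolWithUserInput :: Test
test_launchesToolWithUserInput = TestCase $ do
  callsRef <- newIORef []
  let fakeLaunch cmd args = modifyIORef callsRef ((cmd, args) :)
  processForm fakeLaunch (FormInput "alice" 30)
  calls <- readIORef callsRef
  assertEqual "one call carrying the form data"
    [("external-tool", ["--name", "alice", "--age", "30"])]
    calls

main :: IO ()
main = runTestTT test_launchesToolWithUserInput >> return ()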
I'm not an expert on this but have been confused for a while for the same reason. Somehow the applications I write just don't fit the examples given for unit testing (very asynchronous and random, depending on heavy user interaction).
I realized recently (and please let me know if I'm wrong) that it doesn't make sense to write a sort of global test, but rather a myriad of small tests for each component. The easiest approach is to build the tests at the same time as, or even before, creating the actual procedures.
Do you have 3000 lines of code in a single procedure/method? If so, then you probably need to refactor your code into smaller, more understandable pieces to make it maintainable. When you do this, you'll have those parts that you can and should unit test. If not, then you already have those pieces -- the individual procedures/methods that are called by your main program.
Even without unit tests, though, you should still write tests for the code to make sure that you are providing the correct inputs to the external program and testing that you handle the outputs from the program correctly under both normal and exceptional conditions. Techniques used in unit testing -- like mocking -- can be used in these integration tests to ensure that your program is operating correctly without involving the external resource.
An interesting "cut point" for your application is you say "the user fills out a form." If you want to test, you should refactor your code to construct an explicit representation of that form as a data structure. Then you can start collecting forms and testing that the system responds appropriately to each form.
It may be that the actions taken by your system are not observable until something hits the file system. Here are a couple of ideas:
Set up something like a git repository for the initial state of the file system, run a form, and look at the output of git diff. It's likely this is going to feel more like regression testing than unit testing.
Create a new module whose only purpose is to make your program's actions observable. This can be as simple as writing relevant text to a log file or as complex as you like. If necessary, you can use conditional compilation or linking to ensure this module does something only when the system is under test. This is closer to traditional unit testing as you can now write tests that say upon receiving form A, the system should take sequence of actions B. Obviously you have to decide what actions should be observed to form a reasonable test.
I suspect you'll find yourself migrating toward something that looks more like regression testing than unit testing per se. That's not necessarily bad. Don't overlook code coverage!
(A final parenthetical remark: in the bad old days of interactive console applications, Don Libes created a tool called Expect, which was enormously helpful in allowing you to script a program that interacted like a user. In my opinion we desperately need something similar for interacting with web pages. I think I'll post a question about this :-)
You don't necessarily have to implement automated tests that test individual methods or components. You could implement an automated unit test that simulates a user interacting with your application, and test that your application responds in the correct way.
I assume you are manually testing your application currently, if so then think about how you could automate that and work from there. Over time you should be able to break your tests into progressively smaller chunks that test smaller sections of code. Any sort of automated testing is usually a lot better than nothing.
Most programs (regardless of the language paradigm) can be broken into atomic units which take input and provide output. As the other responders have mentioned, look into refactoring the program and breaking it down into smaller pieces. When testing, focus less on the end-to-end functionality and more on the individual steps in which data is processed.
Also, a unit doesn't necessarily need to be an individual function (though this is often the case). A unit is a segment of functionality which can be tested using inputs and measuring outputs. I've seen this when using JUnit to test Java APIs. Individual methods might not necessarily provide the granularity I need for testing, though a series of method calls will. Therefore, the functionality I regard as a "unit" is a little greater than a single method.
You should at least refactor out the stuff that looks like it might be a problem and unit test that. But as a rule, a function shouldn't be that long. You might find something that is worth unit testing once you start refactoring.
A good Object Mentor article on TDD
As a few have answered before, there are a few ways you can test what you have outlined.
First, the form input can be tested in a few ways.
What happens if invalid data is entered? Valid data? And so on.
Then each of the functions can be tested to see whether, when supplied with various forms of correct and incorrect data, they react in the proper manner.
Next, you can mock the applications that are being called so that you can make sure your application sends data to and processes data from the external programs correctly. Don't forget to make sure your program deals with unexpected data from the external program as well.
Usually, the way I figure out how to write tests for a program I have been assigned to maintain is to see what I do manually to test the program, then try to figure out how to automate as much of that as possible. Also, don't restrict your testing tools to just the programming language you are writing the code in.
I think a wave of testing paranoia is spreading :) It's good to examine things to see if tests would make sense; sometimes the answer is going to be no.
The only thing that I would test is making sure that bogus form input is handled correctly. I really don't see where else an automated test would help. I think you'd want the test to be non-invasive (i.e. no record is actually saved during testing), so that might rule out the other few possibilities.
If you can't test something how do you know that it works? A key to software design is that the code should be testable. That may make the actual writing of the software more difficult, but it pays off in easier maintenance later.

Random data in Unit Tests?

I have a coworker who writes unit tests for objects which fill their fields with random data. His reason is that it gives a wider range of testing, since it will test a lot of different values, whereas a normal test only uses a single static value.
I've given him a number of different reasons against this, the main ones being:
random values means the test isn't truly repeatable (which also means that if the test can randomly fail, it can do so on the build server and break the build)
if it's a random value and the test fails, we need to a) fix the object and b) force ourselves to test for that value every time, so we know it works, but since it's random we don't know what the value was
Another coworker added:
If I am testing an exception, random values will not ensure that the test ends up in the expected state
random data is used for flushing out a system and load testing, not for unit tests
Can anyone else add additional reasons I can give him to get him to stop doing this?
(Or alternately, is this an acceptable method of writing unit tests, and I and my other coworker are wrong?)
There's a compromise. Your coworker is actually onto something, but I think he's doing it wrong. I'm not sure that totally random testing is very useful, but it's certainly not invalid.
A program (or unit) specification is a hypothesis that there exists some program that meets it. The program itself is then evidence of that hypothesis. What unit testing ought to be is an attempt to provide counter-evidence to refute that the program works according to the spec.
Now, you can write the unit tests by hand, but it really is a mechanical task. It can be automated. All you have to do is write the spec, and a machine can generate lots and lots of unit tests that try to break your code.
I don't know what language you're using, but see here:
Java: http://functionaljava.org/
Scala (or Java): http://github.com/rickynils/scalacheck
Haskell: http://www.cs.chalmers.se/~rjmh/QuickCheck/
.NET: http://blogs.msdn.com/dsyme/archive/2008/08/09/fscheck-0-2.aspx
These tools will take your well-formed spec as input and automatically generate as many unit tests as you want, with automatically generated data. They use "shrinking" strategies (which you can tweak) to find the simplest possible test case to break your code and to make sure it covers the edge cases well.
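As a small taste of the style (QuickCheck here, but the others work the same way): you state a property over arbitrary inputs, and on failure the tool shrinks the random input to a minimal counterexample.
import Test.QuickCheck

-- A property-as-spec: reversing a list twice is the identity.
prop_reverseTwice :: [Int] -> Property
prop_reverseTwice xs = reverse (reverse xs) === xs

main :: IO ()
main = quickCheck prop_reverseTwice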
Happy testing!
This kind of testing is called a Monkey test. When done right, it can smoke out bugs from the really dark corners.
To address your concerns about reproducibility: the right way to approach this is to record the failed test entries and generate a unit test which probes for the entire family of the specific bug, and to include in that unit test the one specific input (from the random data) which caused the initial failure.
There is a half-way house here which has some use, which is to seed your PRNG with a constant. That allows you to generate 'random' data which is repeatable.
Personally I do think there are places where (constant-seeded) random data is useful in testing: after you think you've covered all your carefully-thought-out corner cases, stimuli from a PRNG can sometimes find other things.
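A minimal sketch of that compromise, using Haskell's System.Random: seed the generator with a constant so the data looks random but is identical on every run. The property checked here is deliberately trivial; the point is the fixed seed.
import System.Random (mkStdGen, randoms)
import Test.HUnit

-- 'Random' but repeatable inputs: same seed, same list, every run.
fixedRandomInts :: [Int]
fixedRandomInts = take 100 (randoms (mkStdGen 2024))

tests :: Test
tests = TestCase $
  assertEqual "reversing twice gives back the same pseudo-random list"
    fixedRandomInts
    (reverse (reverse fixedRandomInts))

main :: IO ()
main = runTestTT tests >> return ()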
I am in favor of random tests, and I write them. However, whether they are appropriate in a particular build environment and which test suites they should be included in is a more nuanced question.
Run locally (e.g., overnight on your dev box) randomized tests have found bugs both obvious and obscure. The obscure ones are arcane enough that I think random testing was really the only realistic one to flush them out. As a test, I took one tough-to-find bug discovered via randomized testing and had a half dozen crack developers review the function (about a dozen lines of code) where it occurred. None were able to detect it.
Many of your arguments against randomized data are flavors of "the test isn't reproducible". However, a well-written randomized test will capture the seed used to start the randomized run and output it on failure. In addition to allowing you to repeat the test by hand, this allows you to trivially create a new test which covers the specific issue by hardcoding the seed for that test. Of course, it's probably nicer to hand-code an explicit test covering that case, but laziness has its virtues, and this even allows you to essentially auto-generate new test cases from a failing seed.
The one point you make that I can't debate, however, is that it breaks the build systems. Most build and continuous integration tests expect the tests to do the same thing, every time. So a test that randomly fails will create chaos, randomly failing and pointing the fingers at changes that were harmless.
A solution then, is to still run your randomized tests as part of the build and CI tests, but run it with a fixed seed, for a fixed number of iterations. Hence the test always does the same thing, but still explores a bunch of the input space (if you run it for multiple iterations).
Locally, e.g., when changing the concerned class, you are free to run it for more iterations or with other seeds. If randomized testing ever becomes more popular, you could even imagine a specific suite of tests which are known to be random, which could be run with different seeds (hence with increasing coverage over time), and where failures wouldn't mean the same thing as deterministic CI systems (i.e., runs aren't associated 1:1 with code changes and so you don't point a finger at a particular change when things fail).
There is a lot to be said for randomized tests, especially well written ones, so don't be too quick to dismiss them!
If you are doing TDD then I would argue that random data is an excellent approach. If your test is written with constants, then you can only guarantee your code works for those specific values. If your test randomly fails on the build server, there is likely a problem with how the test was written.
Random data will help ensure any future refactoring will not rely on a magic constant. After all, if your tests are your documentation, then doesn't the presence of constants imply it only needs to work for those constants?
I am exaggerating; however, I prefer to inject random data into my tests as a sign that "the value of this variable should not affect the outcome of this test".
I will say, though, that if you use a random variable and then fork your test based on that variable, that is a smell.
In the book Beautiful Code, there is a chapter called "Beautiful Tests", where the author goes through a testing strategy for the binary search algorithm. One section is called "Random Acts of Testing", in which he creates random arrays to thoroughly test the algorithm. You can read some of this online at Google Books, page 95, but it's a great book worth having.
So basically this just shows that generating random data for testing is a viable option.
Your co-worker is doing fuzz testing, although he doesn't know it. Fuzz tests are especially valuable in server systems.
One advantage for someone looking at the tests is that arbitrary data is clearly not important. I've seen too many tests that involved dozens of pieces of data, and it can be difficult to tell what needs to be that way and what just happens to be that way. E.g. if an address validation method is tested with a specific zip code and all other data is random, then you can be pretty sure the zip code is the only important part.
if it's a random value and the test fails, we need to a) fix the object and b) force ourselves to test for that value every time, so we know it works, but since it's random we don't know what the value was
If your test case does not accurately record what it is testing, perhaps you need to recode the test case. I always want to have logs that I can refer back to for test cases so that I know exactly what caused it to fail whether using static or random data.
You should ask yourself what the goal of your test is.
Unit tests are about verifying logic, code flow, and object interactions. Using random values tries to achieve a different goal and thus reduces test focus and simplicity. It is acceptable for readability reasons (generating UUIDs, ids, keys, etc.).
Specifically for unit tests, I cannot recall even once when this method was successful at finding problems. But I have seen many determinism problems (in the tests) caused by trying to be clever with random values, mainly with random dates.
Fuzz testing is a valid approach for integration tests and end-to-end tests.
Can you generate some random data once (I mean exactly once, not once per test run), then use it in all tests thereafter?
I can definitely see the value in creating random data to test those cases that you haven't thought of, but you're right, having unit tests that can randomly pass or fail is a bad thing.
If you're using random input for your tests you need to log the inputs so you can see what the values are. This way if there is some edge case you come across, you can write the test to reproduce it. I've heard the same reasons from people for not using random input, but once you have insight into the actual values used for a particular test run then it isn't as much of an issue.
The notion of "arbitrary" data is also very useful as a way of signifying something that is not important. We have some acceptance tests that come to mind where there is a lot of noise data that has no relevance to the test at hand.
I think the problem here is that the purpose of unit tests is not catching bugs. The purpose is being able to change the code without breaking it. So how are you going to know that you broke your code when your random unit tests are green in your pipeline, just because they don't touch the right path?
Doing this seems insane to me. A different situation would be running them as integration tests or e2e tests, not as part of the build, and only for some specific things, because in some situations you will need a mirror of your code in your asserts to test that way. And having a test suite as complex as your real code is like not having tests at all, because who is going to test your suite then? :p
A unit test is there to ensure the correct behaviour in response to particular inputs, in particular all code paths/logic should be covered. There is no need to use random data to achieve this. If you don't have 100% code coverage with your unit tests, then fuzz testing by the back door is not going to achieve this, and it may even mean you occasionally don't achieve your desired code coverage. It may (pardon the pun) give you a 'fuzzy' feeling that you're getting to more code paths, but there may not be much science behind this. People often check code coverage when they run their unit tests for the first time and then forget about it (unless enforced by CI), so do you really want to be checking coverage against every run as a result of using random input data? It's just another thing to potentially neglect.
Also, programmers tend to take the easy path, and they make mistakes. They make just as many mistakes in unit tests as they do in the code under test. It's way too easy for someone to introduce random data and then tailor the asserts to the output order in a single run. Admit it, we've all done this. When the data changes, the order can change and the asserts fail, so a portion of the executions fail. This portion needn't be 1/2; I've seen exactly this result in failures 10% of the time. It takes a long time to track down problems like this, and if your CI doesn't record enough data about enough of the runs, then it can be even worse.
Whilst there's an argument for saying "just don't make these mistakes", in a typical commercial programming setup there'll be a mix of abilities, sometimes relatively junior people reviewing code for other junior people. You can write literally dozens more tests in the time it takes to debug one non-deterministic test and fix it, so make sure you don't have any. Don't use random data.
In my experience, unit tests and randomized tests should be separated. Unit tests serve to give certainty about the correctness of some cases, not only to catch obscure bugs.
All that said, randomized testing is useful and should be done, separately from unit tests, but it should test a series of randomized values.
I can't help thinking that testing one random value per run is just not enough, neither to be a sufficient randomized test nor to be a truly useful unit test.
Another aspect is validating the test results. If you have random inputs, you have to calculate the expected output for it inside the test. This will at some level duplicate the tested logic, making the test only a mirror of the tested code itself. This will not sufficiently test the code, since the test might contain the same errors the original code does.
This is an old question, but I wanted to mention a library I created that generates objects filled with random data. It supports reproducing the same data if a test fails by supplying a seed. It also supports JUnit 5 via an extension.
Example usage:
Person person = Instancio.create(Person.class);
Or a builder API for customising generation parameters:
Person person = Instancio.of(Person.class)
    .generate(field("age"), gen -> gen.ints.min(18).max(65))
    .create();
Github link has more examples: https://github.com/instancio/instancio
You can find the library on maven central:
<dependency>
    <groupId>org.instancio</groupId>
    <artifactId>instancio-junit</artifactId>
    <version>LATEST</version>
</dependency>
Depending on your object/app, random data would have a place in load testing. I think more important would be to use data that explicitly tests the boundary conditions of the data.
We just ran into this today. I wanted pseudo-random data (so it would look like compressed audio data in terms of size), and I noted in a TODO that I also wanted it deterministic. rand() was different on OS X than on Linux, and unless I re-seeded, it could change at any time. So we changed it to be deterministic but still pseudo-random: the test is repeatable, as much as using canned data (but more conveniently written).
This was NOT testing by some random brute force through code paths. That's the difference: still deterministic, still repeatable, still using data that looks like real input to run a set of interesting checks on edge cases in complex logic. Still unit tests.
Does that still qualify as random? Let's talk over beer. :-)
I can envisage three solutions to the test data problem:
Test with fixed data
Test with random data
Generate random data once, then use it as your fixed data
I would recommend doing all of the above. That is, write repeatable unit tests with both some edge cases worked out using your brain, and some randomised data which you generate only once. Then write a set of randomised tests that you run as well.
The randomised tests should never be expected to catch something your repeatable tests miss. You should aim to cover everything with repeatable tests, and consider the randomised tests a bonus. If they find something, it should be something that you couldn't have reasonably predicted; a real oddball.
How can your guy run the test again when it has failed, to see if he has fixed it? I.e. he loses the repeatability of the tests.
While I think there is probably some value in flinging a load of random data at tests, as mentioned in other replies it falls more under the heading of load testing than anything else. It is pretty much a "testing-by-hope" practice. I think that, in reality, your guy is simply not thinking about what he is trying to test, and making up for that lack of thought by hoping randomness will eventually trap some mysterious error.
So the argument I would use with him is that he is being lazy. Or, to put it another way, if he doesn't take the time to understand what he is trying to test it probably shows he doesn't really understand the code he is writing.