Best way to mask dynamic data (timestamps) using ScalaTest - regex

I'm using ScalaTest for my unit testing. I have a test result (JSON) that might look like the example below. The actual result is huge and complex; this is just a sample.
[{"name":"George", "when":143828333, ...}, {"name":"Fred", "when":14857777, ... }]
The 'when' field values are dynamic and will change from test to test (i.e. the current timestamp), so I can't test against these. I could use a regex to mask them out, basically replacing them with some inert token.
Does ScalaTest have a more elegant way of handling dynamic bits of data like this?

You can define a custom Equality for the types you're comparing, and have it ignore the dynamic fields in the comparison. Info on Equality is here:
http://doc.scalatest.org/2.2.0/index.html#org.scalactic.Equality
All you need to do is define the areEqual method and then make it implicit. So Equality[JsonType] or Equality[String], whatever the type is. This will then be picked up by the === operator and the equal matcher in your assertions.
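The core idea behind a custom Equality is simply "compare while ignoring the dynamic fields." A language-agnostic sketch of that idea in Python (the function name and the choice to ignore "when" are illustrative, not part of ScalaTest):

```python
# Structural equality over parsed JSON that skips named dynamic fields.
def equal_ignoring(a, b, ignored=("when",)):
    """Compare two JSON-like structures, ignoring the given dict keys."""
    if isinstance(a, dict) and isinstance(b, dict):
        keys_a = {k for k in a if k not in ignored}
        keys_b = {k for k in b if k not in ignored}
        return keys_a == keys_b and all(
            equal_ignoring(a[k], b[k], ignored) for k in keys_a)
    if isinstance(a, list) and isinstance(b, list):
        return len(a) == len(b) and all(
            equal_ignoring(x, y, ignored) for x, y in zip(a, b))
    return a == b

actual   = [{"name": "George", "when": 143828333}]
expected = [{"name": "George", "when": 0}]
assert equal_ignoring(actual, expected)   # timestamps differ, still equal
```

In ScalaTest the same logic would live in the areEqual method of an implicit Equality instance, so it is picked up automatically by === and the equal matcher.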

What I've done many times in the past in this situation is exactly what you propose: use a regex to replace the dates with a constant so your comparisons will work.
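As a concrete illustration of that regex approach, here is a minimal Python sketch (the technique is language-agnostic; the token name is arbitrary) that masks the "when" values before comparing:

```python
import re

def mask_timestamps(json_text):
    """Replace the numeric value of every "when" field with an inert token."""
    return re.sub(r'("when"\s*:\s*)\d+', r'\1"<MASKED>"', json_text)

actual   = '[{"name":"George", "when":143828333}, {"name":"Fred", "when":14857777}]'
expected = '[{"name":"George", "when":"<MASKED>"}, {"name":"Fred", "when":"<MASKED>"}]'
assert mask_timestamps(actual) == expected
```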

Related

Unit testing for either/or conditions

The module I'm working on holds a list of items and has a method to locate and return an item from that list based on certain criteria. The specification states that "...if several matching values are found, any one may be returned".
I'm trying to write some tests with NUnit, and I can't find anything that allows me to express this condition very well (i.e. the returned object must be either A or B, but I don't mind which).
Of course I could quite easily write code that sets a boolean to whether the result is as expected and then just do a simple assert on that boolean, but this whole question is making me wonder whether this is a "red flag" for unit testing and whether there's a better solution.
How do experienced unit testers generally handle the case where there are a range of acceptable outputs and you don't want to tie the test down to one specific implementation?
Since your question is in rather general form, I can only give a rather general answer, but for example...
Assert.That(someObject, Is.TypeOf<A>().Or.TypeOf<B>());
Assert.That(someObject, Is.EqualTo(objectA).Or.EqualTo(objectB));
Assert.That(listOfValidObjects, Contains.Item(someObject));
It depends on the details of what you are testing.
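The same "any of these results is acceptable" pattern translates directly to other frameworks as a membership assertion. A small Python illustration (the method under test and its data are hypothetical):

```python
# Hypothetical method under test: returns *some* item matching the predicate,
# without guaranteeing which one when several match.
def find_item(items, predicate):
    return next(x for x in items if predicate(x))

items = [{"id": 1, "color": "red"}, {"id": 2, "color": "red"}]
result = find_item(items, lambda x: x["color"] == "red")

# The spec allows either match, so assert membership in the set of valid results.
valid = [{"id": 1, "color": "red"}, {"id": 2, "color": "red"}]
assert result in valid
```

This keeps the test tied to the specification ("any matching item") rather than to one particular implementation's choice.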
I am coming from Java, JUnit, and parameterized tests, but it seems that NUnit supports those as well (see here).
One could use that to generate values for your different variables (and the "generator" could keep track of the expected overall result, too).
Using that approach you might avoid hard-coding every potential combination of input values (as said: by actually generating them); at the very least, you can write code where the different input values and their expected results are more neatly colocated in your source.
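A minimal sketch of that generator idea in plain Python (the function under test, clamp, is invented for illustration): the generator yields each input combination together with the expected result, so they stay colocated.

```python
def clamp(x, lo, hi):
    """Function under test: restrict x to the closed interval [lo, hi]."""
    return max(lo, min(x, hi))

# Each case carries its inputs *and* its expected result, side by side.
def cases():
    yield 5, 0, 10, 5      # in range: unchanged
    yield -3, 0, 10, 0     # below range: clamped to lo
    yield 42, 0, 10, 10    # above range: clamped to hi

for x, lo, hi, expected in cases():
    assert clamp(x, lo, hi) == expected, (x, lo, hi)
```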

Direct access to Theory decision procedures using z3

Is there a z3 c++ api for direct query of a theory decision procedure?
Meaning, given a set of theory predicates, I would like to check whether they are conflicting in some given theory, without calling the z3 prover on their conjunction.
For example, I would like to check whether the following set of predicates in equality logic are conflicting:
x=y, y=z, x!=z
You can probably use some tactics, depending on what parts of the theories exactly you need (see Z3 Strategies).
If you want only very quick, no-solving-at-all checks, use the simplifier: it applies the rewrite rules for all theories and returns a simplified expression, which may come out as true or false. The rewriter is also used to evaluate expressions against a given model, so it supports everything needed to evaluate an expression once all the variables/constants/functions have an interpretation (and model completion can be enabled to fill in any missing ones).
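To make the x=y, y=z, x!=z example concrete: for pure equality logic, the underlying decision procedure is essentially congruence closure over a union-find structure. A standalone sketch of that procedure (this is not Z3's API, just an illustration of what the theory solver does):

```python
# Union-find based conflict check for equality logic.
def conflicting(equalities, disequalities):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Merge the equivalence classes implied by the equalities.
    for a, b in equalities:
        parent[find(a)] = find(b)

    # A conflict exists iff some disequality connects two equal terms.
    return any(find(a) == find(b) for a, b in disequalities)

# x=y and y=z together imply x=z, so asserting x!=z is a conflict.
assert conflicting([("x", "y"), ("y", "z")], [("x", "z")])
assert not conflicting([("x", "y")], [("x", "z")])
```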

Groovy: Deep comparison of Java beans without equals() as in Unitils assertReflectionEquals?

Does Groovy have a simple solution for comparing two Java beans with nested beans without using their equals(), i.e. comparing all primitive/java.lang.* fields and then doing the same recursively for the remaining fields? In the case of inequality I'd of course like to get a nice report about what was equal and what wasn't. Unitils' assertReflectionEquals does exactly that, but I was wondering if there is something similar directly in Groovy.
Thanks!
The Groovy runtime itself has no such thing for general classes, no.
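The generic technique behind assertReflectionEquals is a recursive field-by-field walk that collects a diff report. A rough Python sketch of that idea, for illustration only (over dicts/lists standing in for bean properties):

```python
def deep_diff(a, b, path="root"):
    """Recursively compare two structures field by field, collecting
    human-readable descriptions of every mismatch."""
    diffs = []
    if isinstance(a, dict) and isinstance(b, dict):
        for key in sorted(set(a) | set(b)):
            if key not in a or key not in b:
                diffs.append(f"{path}.{key}: present on one side only")
            else:
                diffs.extend(deep_diff(a[key], b[key], f"{path}.{key}"))
    elif isinstance(a, list) and isinstance(b, list) and len(a) == len(b):
        for i, (x, y) in enumerate(zip(a, b)):
            diffs.extend(deep_diff(x, y, f"{path}[{i}]"))
    elif a != b:
        diffs.append(f"{path}: {a!r} != {b!r}")
    return diffs

left  = {"name": "Fred", "address": {"city": "Oslo", "zip": "0150"}}
right = {"name": "Fred", "address": {"city": "Oslo", "zip": "0151"}}
assert deep_diff(left, right) == ["root.address.zip: '0150' != '0151'"]
```

A Groovy version would walk bean properties (e.g. via reflection) instead of dict keys, but the recursion and the path-building for the report are the same.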

Stream or Iterator to generate all strings that match a regular expression?

This is a follow-up to my previous question.
Suppose I want to generate all strings that match a given (simplified) regular expression.
It is just a coding exercise and I do not have any additional requirements (e.g. how many strings are generated actually). So the main requirement is to produce nice, clean, and simple code.
I thought about using Stream but after reading this question I am thinking about Iterator. What would you use?
A full solution to this question requires too much code to be practical here, but the outline goes as follows.
First, you want to parse your regular expression--you can look into parser combinators for this, for example. You'll then have an evaluation tree that looks like, for example,
List(
Constant("abc"),
ZeroOrOne(Constant("d")),
Constant("efg"),
OneOf(Constant("h"),List(Constant("ij"),ZeroOrOne(Constant("klmnop")))),
Constant("qrs"),
AnyChar()
)
Rather than running this expression tree as a matcher, you can run it as a generator by defining a generate method on each term. For some terms (e.g. ZeroOrOne(Constant("d"))) there will be multiple options, so you can define an iterator. One way to do this is to store internal state in each term and pass in either an "advance" flag or a "reset" flag. On "reset", the generator returns the first possible match (e.g. ""); on "advance", it moves to the next one and returns that (e.g. "d") while consuming the advance flag (leaving the rest to evaluate with no flags). If there are no more items, it produces a reset instead for everything inside itself and leaves the advance flag intact for the next item. You start by running with a reset; on each iteration you put an advance in, and stop when you get it out again.
Of course, some regex constructs like "d+" can produce infinitely many values, so you'll probably want to limit them in some way (or at some point return e.g. d...d meaning "lots"); and others have very many possible values (e.g. . matches any char, but do you really want all 64k chars, or however many Unicode code points there are?), and you may wish to restrict those also.
Anyway, this, though time-consuming, will result in a working generator. And, as an aside, you'll also have a working regex matcher, if you write a match routine for each piece of the parsed tree.
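As an alternative to the stateful reset/advance protocol, the same tree can be enumerated lazily with generators. A compact Python sketch over a few of the term kinds named above (the tuple-based tree encoding is an assumption made for brevity):

```python
# Lazily generate every string matched by a small regex expression tree.
# Terms: ("const", s), ("zero_or_one", term), ("one_of", t1, t2, ...),
#        ("seq", t1, t2, ...).
def generate(term):
    kind, args = term[0], term[1:]
    if kind == "const":
        yield args[0]
    elif kind == "zero_or_one":
        yield ""                      # zero occurrences first
        yield from generate(args[0])  # then one occurrence
    elif kind == "one_of":
        for alt in args:
            yield from generate(alt)
    elif kind == "seq":
        first, rest = args[0], args[1:]
        if not rest:
            yield from generate(first)
        else:
            for head in generate(first):
                for tail in generate(("seq",) + rest):
                    yield head + tail

# "abc" d? "e"  ->  "abce", "abcde"
tree = ("seq", ("const", "abc"), ("zero_or_one", ("const", "d")), ("const", "e"))
assert list(generate(tree)) == ["abce", "abcde"]
```

Unbounded constructs like "d+" would need a repetition cap before being added to this scheme, as noted above.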

how do I avoid re-implementing the code being tested when I write tests?

(I'm using rspec in RoR, but I believe this question is relevant to any testing system.)
I often find myself doing this kind of thing in my test suite:
actual_value = object_being_tested.tested_method(args)
expected_value = compute_expected_value(args)
assert(actual_value == expected_value, ...)
The problem is that my implementation of compute_expected_value() often ends up mimicking object_being_tested.tested_method(), so it's really not a good test because they may have identical bugs.
This is a rather open-ended question, but what techniques do people use to avoid this trap? (Points awarded for pointers to good treatises on the topic...)
Usually (for manually written unit tests) you would not compute the expected value. Rather, you would just assert against what you expect to be the result from the tested method for the given args. That is, you would have something like this:
actual_value = object_being_tested.tested_method(args)
expected_value = what_you_expect_to_be_the_result
assert(actual_value == expected_value, ...)
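To illustrate the contrast (in Python for neutrality; the function under test is invented): the expected value should be a constant worked out by hand, not a second copy of the implementation.

```python
# The function under test.
def title_case(s):
    return " ".join(word.capitalize() for word in s.split())

# Bad: the "expected" value mirrors the implementation, so shared bugs hide.
# expected = " ".join(w.capitalize() for w in "hello world".split())

# Good: the expected value is a known constant, computed by hand.
expected = "Hello World"
assert title_case("hello world") == expected
```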
In other testing scenarios where the arguments (or even test methods being executed) are generated automatically, you need to devise a simple oracle which will give you the expected result (or an invariant that should hold for the expected result). Tools like Pex, Randoop, ASTGen, and UDITA enable such testing.
Well, here are my two cents:
a) If the calculation of the expected value is simple and does not encompass any business rules/conditions beyond the test case it generates the expected result for, then it should be good enough... remember, your actual code will be as generic as possible.
There are cases where you will run into issues in the expected-value method, but you can easily pinpoint the cause of the failure and fix it.
b) There are cases when the expected value cannot be easily calculated; in that case you can keep flat files with results, or use some kind of constant expected value.
Then there are tests where you just want to verify whether a particular method was called, and you are done testing that unit. Remember to use all of these different paradigms while testing, and always remember to KEEP IT SIMPLE.
You would not do that.
You do not compute the expected value; you know it already. It should be a constant value defined in your test (or constructed from other functions that have already been tested).