In F# (and most functional languages) some code is extremely short, for example:
let f = getNames
>> Observable.flatmap ObservableJson.jsonArrayToObservableObjects<string>
or :
let jsonArrayToObservableObjects<'t> =
JsonConvert.DeserializeObject<'t[]>
>> Observable.ToObservable
And the simplest property-based test I ended up with for the latter function is:
testList "ObservableJson" [
testProperty "Should convert an Observable of `json` array to Observable of single F# objects" <| fun _ ->
//--Arrange--
let (sArray, jsonArray) = createAJsonArrayOfString stringArray
//--Act--
let actual = jsonArray
|> ObservableJson.jsonArrayToObservableObjects<string>
|> Observable.ToArray
|> Observable.Wait
//--Assert--
Expect.sequenceEqual actual sArray
]
Setting aside the arrange part, the test is longer than the function under test, so it's harder to read than the function under test!
What would be the value of testing when it's harder to read than the production code?
On the other hand:
I wonder whether functions that are compositions of multiple functions are safe to leave untested.
Should they be tested at integration and acceptance level?
And what if they are short but do complex operations?
That depends upon your definition of what 'functional programming' is. Or, more precisely, upon how close you want to stay to the origin of functional programming: math, in both its broad and narrow meanings.
Let's take something relevant to programming, say, the theory of mappings. Your question could be translated this way: having a bijection from A to B, and a bijection from B to C, should I prove that the composition of those two is a bijection as well? The answer is twofold: you definitely should, and you do it only once: your proof is generic enough to cover all possible cases.
Falling back to programming, this means that pipelining has to be tested (proved) only once - and I guess it was, before it was deployed to production. From then on it is your job as a programmer to create functions (mappings) of such quality that, when composed with the pipeline operator or anything else, the desired properties are preserved. Once again, it's better to stick with generic arguments than to write tons of similar tests.
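To illustrate the "prove it once, generically" point, here is a throwaway sketch (FsCheck assumed; the property and names are mine, not the original answer's) that checks the composition operator itself rather than each composed function:
open FsCheck

// One generic check for (>>): applying the composition equals applying the parts in order.
let composeAppliesInOrder (f: int -> int) (g: int -> int) (x: int) =
    (f >> g) x = g (f x)

Check.Quick composeAppliesInOrder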
So, finally, we come down to a much more valuable question: how can one guarantee that some operation preserves some property? It turns out that the easiest way to establish such a fact is to deal with types like Monoid from the great Haskell: Monoid is there to represent any associative binary operation A -> A -> A together with an identity element of type A. Having such generic containers is extremely profitable and the best-known way of being explicit about what exactly your code is designed to do and how.
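As a minimal sketch of such a generic container (F# with FsCheck assumed; the names are illustrative, not from the answer), a monoid can be modelled as a record and its laws stated once for every instance:
open FsCheck

type Monoid<'a> = { Zero: 'a; Combine: 'a -> 'a -> 'a }

let intSum  = { Zero = 0; Combine = (+) }
let listCat = { Zero = ([]: int list); Combine = (@) }

// The laws are written once, generically, and merely instantiated per monoid.
let lawsHold m a b c =
    m.Combine a m.Zero = a
    && m.Combine m.Zero a = a
    && m.Combine (m.Combine a b) c = m.Combine a (m.Combine b c)

Check.Quick (fun (a: int) b c -> lawsHold intSum a b c)
Check.Quick (fun (a: int list) b c -> lawsHold listCat a b c)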
Personally I would NOT test it.
In fact having less need for testing and instead relying more on stricter compiler rules, side effect free functions, immutability etc. is one major reason why I prefer F# over C#.
Of course I continue (unit) testing "custom logic", e.g. algorithmic code.
I would like to check for an arbitrary fact and do something if it is in the knowledge base and something else if it is not, but without the (I -> T ; E) syntax.
I have some facts in my knowledge base:
unexplored(1,1).
unexplored(2,1).
safe(1,1).
Given an incomplete rule:
foo:- safe(A,B),
% do something if unexplored(A,B) is in the knowledge base
% do something else if unexplored(A,B) is not in the knowledge base
What is the correct way to handle this, without doing it like this?
foo:-
safe(A,B),
( unexplored(A,B) -> something ; something_else ).
Not an answer but too long for a comment.
"Flow control" is by definition not declarative. Changing the predicate database (the defined rules and facts) at run time is also not declarative: it introduces state to your program.
You should really consider very carefully if your "data" belongs to the database, or if you can keep it a data structure. But your question doesn't provide enough detail to be able to suggest anything.
You can however see this example of finding paths through a maze. In this solution, the database contains information about the problem that does not change. The search itself uses the simplest data structure, a list. The "flow control", if you want to call it that, is implicit: it is just a side effect of Prolog looking for a proof. More importantly, you can argue about the program and what it does without taking into consideration the exact control flow (but you do take into consideration Prolog's resolution strategy).
The fundamental problem with this requirement is that it is non-monotonic:
Things that hold without this fact may suddenly fail to hold after adding such a fact.
This inherently runs counter to the important and desirable declarative property of monotonicity.
Declaratively, from adding facts, we expect to obtain at most an increase, never a decrease of the things that hold.
For this reason, your requirement is inherently linked to non-monotonic constructs like if-then-else, !/0 and setof/3.
A declarative way to reason about this is to entirely avoid checking properties of the knowledge base. Instead, focus on a clear description of the things that hold, using Prolog clauses to encode the knowledge.
In your case, it looks like you need to reason about states of some search problem. A declarative way to solve such tasks is to represent the state as a Prolog term, and write pure monotonic rules involving the state.
For example, let us say that a state S0 is related to state S if we explore a certain position Pos that was previously not explored:
state0_state(S0, S) :-
select(Pos-unexplored, S0, S1),
S = [Pos-explored|S1].
or shorter:
state0_state(S0, [Pos-explored|S1]) :-
select(Pos-unexplored, S0, S1).
I leave figuring out the state representation I am using here as an easy exercise. Notice the convenient naming convention of using S0, S1, ..., S to chain the different states.
This way, you encode explicit relations about Prolog terms that represent the state. Pure, monotonic, and works in all directions.
I'm trying to replace some old unit tests with property-based testing (PBT), concretely with Scala and ScalaTest + ScalaCheck, but I think the problem is more general. The simplified situation is: if I have a method I want to test:
def upcaseReverse(s:String) = s.toUpperCase.reverse
Normally, I would have written unit tests like:
assertEquals("GNIRTS", upcaseReverse("string"))
assertEquals("", upcaseReverse(""))
// ... corner cases I could think of
So, for each test, I write the output I expect, no problem. Now, with PBT, it'd be like:
property("strings are reversed and upper-cased") {
forAll { (s: String) =>
assert ( upcaseReverse(s) == ???) //this is the problem right here!
}
}
As I try to write a test that will be true for all String inputs, I find myself having to write the logic of the method again in the tests. In this case the test would look like:
assert ( upcaseReverse(s) == s.toUpperCase.reverse)
That is, I had to write the implementation in the test to make sure the output is correct.
Is there a way out of this? Am I misunderstanding PBT, and should I be testing other properties instead, like :
"strings should have the same length as the original"
"strings should contain all the characters of the original"
"strings should not contain lower case characters"
...
That is also plausible but sounds more contrived and less clear. Can anybody with more experience in PBT shed some light here?
EDIT: following @Eric's sources I got to this post, and there's exactly an example of what I mean (at "Applying the categories one more time"): to test the method Times (in F#):
type Dollar(amount:int) =
member val Amount = amount
member this.Add add =
Dollar (amount + add)
member this.Times multiplier =
Dollar (amount * multiplier)
static member Create amount =
Dollar amount
the author ends up writing a test that goes like:
let ``create then times should be same as times then create`` start multiplier =
let d0 = Dollar.Create start
let d1 = d0.Times(multiplier)
let d2 = Dollar.Create (start * multiplier) // This one duplicates the code of Times!
d1 = d2
So, in order to test a method, the code of the method is duplicated in the test. In this case it is something as trivial as multiplying, but I think it extrapolates to more complex cases.
This presentation gives some clues about the kind of properties you can write for your code without duplicating it.
In general it is useful to think about what happens when you compose the method you want to test with other methods on that class:
size
++
reverse
toUpperCase
contains
For example:
upcaseReverse(y) ++ upcaseReverse(x) == upcaseReverse(x ++ y)
Then think about what would break if the implementation was broken. Would the property fail if:
size was not preserved?
not all characters were uppercased?
the string was not properly reversed?
1. is actually implied by 3. and I think that the property above would break for 3. However it would not break for 2 (if there was no uppercasing at all for example). Can we enhance it? What about:
upcaseReverse(y) ++ x.reverse.toUpperCase == upcaseReverse(x ++ y)
I think this one is ok but don't believe me and run the tests!
Anyway I hope you get the idea:
compose with other methods
see if there are equalities which seem to hold (things like "round-tripping" or "idempotency" or "model-checking" in the presentation)
check if your property will break when the code is wrong
Note that 1. and 2. are implemented by a library named QuickSpec and 3. is "mutation testing".
Addendum
About your Edit: the Times operation is just a wrapper around * so there's not much to test. However in a more complex case you might want to check that the operation:
has a unit element
is associative
is commutative
is distributive with the addition
If any of these properties fails, this would be a big surprise. If you encode those properties as generic properties for any binary operation T x T -> T you should be able to reuse them very easily in all sorts of contexts (see the Scalaz Monoid "laws").
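As a rough sketch of such reusable laws, written in F# with FsCheck to match the Dollar example from the question's edit (the helper names here are mine, not the answer's):
open FsCheck

// Generic laws over int for any binary operation; easily generalised further.
let associative op (a: int) b c = op (op a b) c = op a (op b c)
let commutative op (a: int) b   = op a b = op b a
let hasIdentity op e (a: int)   = op a e = a && op e a = a
let distributesOverAdd op (a: int) b c = op a (b + c) = op a b + op a c

// Reused for multiplication, the operation that Dollar.Times wraps:
Check.Quick (associative (*))
Check.Quick (commutative (*))
Check.Quick (hasIdentity (*) 1)
Check.Quick (distributesOverAdd (*))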
Coming back to your upperCaseReverse example I would actually write 2 separate properties:
"upperCaseReverse must uppercase the string" >> forAll { s: String =>
upperCaseReverse(s).forall(_.isUpper)
}
"upperCaseReverse reverses the string regardless of case" >> forAll { s: String =>
upperCaseReverse(s).toLowerCase === s.reverse.toLowerCase
}
This doesn't duplicate the code and states 2 different things which can break if your code is wrong.
In conclusion, I had the same question as you before and felt pretty frustrated about it, but after a while I found more and more cases where I was not duplicating my code in properties, especially when I started thinking about:
combining the tested function with other functions (.isUpper in the first property)
comparing the tested function with a simpler "model" of computation ("reverse regardless of case" in the second property)
I have called this problem "convergent testing" but I can't figure out why or where the term comes from, so take it with a grain of salt.
For any test you run the risk of the complexity of the test code approaching the complexity of the code under test.
In your case, the code winds up being basically the same, which is just writing the same code twice. Sometimes there is value in that. For example, if you are writing code to keep someone in intensive care alive, you could write it twice to be safe. I wouldn't fault you for the abundance of caution.
For other cases there comes a point where the likelihood of the test breaking invalidates the benefit of the test catching real issues. For that reason, even if it is against best practice in other ways (enumerating things that should be calculated, not writing DRY code) I try to write test code that is in some way simpler than the production code, so it is less likely to fail.
If I cannot find a way to write test code that is simpler than the production code and that is also maintainable (read: "that I also like"), I move that test to a "higher" level (for example, unit test -> functional test).
I just started playing with property-based testing, but from what I can tell it is hard to make it work with many unit tests. For complex units it can work, but so far I find it more helpful for functional testing.
For functional testing you can often write the rule a function has to satisfy much more simply than you can write a function that satisfies the rule. This feels to me a lot like the P vs NP problem, where you can write a program to VALIDATE a solution in polynomial time, but all known programs to FIND a solution take much longer. That seems like a wonderful case for property testing.
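A tiny sketch of that asymmetry (F# with FsCheck assumed; the property is mine): stating "the output is sorted and has the same length" is far shorter than writing the sort itself.
open FsCheck

// The rule is a one-liner; a function satisfying it (a sort) is not.
let isSorted (xs: int list) =
    xs |> List.pairwise |> List.forall (fun (a, b) -> a <= b)

let prop_sortIsSortedAndSameLength (xs: int list) =
    let ys = List.sort xs
    isSorted ys && List.length ys = List.length xs

Check.Quick prop_sortIsSortedAndSameLength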
We want to use FsCheck as part of our unit testing in continuous integration. As such deterministic and reproducible behaviour is very important for us.
FsCheck, being a random testing framework, can generate test cases that potentially sometimes break. The key point is that we do not only use properties that would necessarily have to hold for every input, like, say, List.rev >> List.rev === id. Rather, we do some numerics, and some test cases can cause the test to break because they are badly conditioned.
The question is: how can we guarantee, that once the test succeeds it will always succeed?
So far I see the following options:
hard code the seed, e.g. 0. This would be the easiest solution.
make very specific custom generators which avoid bad examples. Certainly possible, but could turn out pretty hard, especially if there are many objects to generate.
live with the fact that in some cases the build might be red due to pathological cases, and simply re-run.
What is the idiomatic way of using FsCheck in such a setting?
some test cases can cause the test to break because of being badly conditioned.
That sounds like you need a Conditional Property:
let isOk x =
match x with
| 42 -> false
| _ -> true
let MyProperty (x:int) = isOk x ==> // check x here...
(assuming that you don't like the number 42.)
(I started writing a comment but it got so long I guess it deserved its own answer).
It's very common to test properties with FsCheck that don't hold for every input. For example, FsCheck will trivially refute your List.rev example if you run it for list<float>.
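A quick illustration of that claim (FsCheck assumed): the default float generator produces nan among its values, and nan is not equal to itself, so the classic round-trip property is falsifiable for list<float>:
open FsCheck

// Falsifiable, e.g. for a generated [nan]: the list round-trips, but nan = nan is false.
Check.Quick (fun (xs: float list) -> List.rev (List.rev xs) = xs)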
Numerical stability is a tricky problem in itself - there isn't any non-determinism in FsCheck to blame here (FsCheck is totally deterministic; it's just an input generator...). The "non-determinism" you're referring to may be things like bugs in floating point operations in certain processors and so on. But even in that case, wouldn't you like to know about them? And if your algorithm is numerically unstable for a class of inputs, wouldn't you like to know about it? If you don't, it seems to me like you're setting yourself up for some real non-determinism... in production.
The idiomatic way to write properties that don't hold for all inputs of a given type in FsCheck is to write a generator & shrinker. You can use ==> as a step up to that, but it doesn't scale up well to complex preconditions. You say this could turn out pretty hard - that's true in the sense that I guarantee you'll learn something about your code. A good thing!
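For example, a rough sketch of that approach (FsCheck assumed; the names and the "well conditioned" range are made up for illustration):
open FsCheck

// Generate only well-conditioned inputs instead of filtering with ==>.
// (A shrinker can be supplied the same way; Arb.fromGen just omits it.)
let wellConditioned : Arbitrary<float> =
    Gen.choose (1, 1000000)
    |> Gen.map (fun n -> float n / 1000.0)
    |> Arb.fromGen

let stableOnGoodInput =
    Prop.forAll wellConditioned (fun x ->
        // stand-in for the numeric code under test
        abs (sqrt x * sqrt x - x) < 1e-9)

Check.Quick stableOnGoodInput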
Fixing the seed is a bad idea, except for reproducing a previously discovered bug. I mean, in practice what would you do: keep re-running the test until it passes, then fix the seed and declare "job done"?
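If you do need to reproduce one particular failing run, here is a hedged sketch (this assumes FsCheck 2.x, where Config exposes a Replay field and the failure report prints the StdGen seed to plug back in):
open FsCheck

// Replay the exact run that failed; the two numbers come from FsCheck's failure output.
let replayConfig = { Config.Quick with Replay = Some (Random.StdGen (123456789, 987654321)) }
Check.One (replayConfig, fun (x: int) -> x * 0 = 0)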
I remember some time (years, probably) ago I read on Stackoverflow about the charms of programming with as few if-tests as possible. This question is somewhat relevant but I think the stress was on using many small functions that returned values determined by tests depending on the parameter they receive. A very simple example would be using this:
int i = 5;
bool iIsSmall = isSmall(i);
with isSmall() looking like this:
private bool isSmall(int number)
{
return (number < 10);
}
instead of just doing this:
int i = 5;
bool isSmall;
if (i < 10) {
isSmall = true;
} else {
isSmall = false;
}
(Logically this code is just sample code. It is not part of a program I am making.)
The reason for doing this, I believe, was because it looks nicer and makes a programmer less prone to logical errors. If this coding convention is applied correctly, you would see virtually no if-tests anywhere, except in functions whose only purpose is to do that test.
Now, my question is: is there any documentation about this convention? Is there anyplace where you can see wild arguments between supporters and opposers of this style? I tried searching for the Stackoverflow post that introduced me to this, but I can't find it anymore.
Lastly, I hope this question doesn't get shot down because I am not asking for a solution to a problem. I am simply hoping to hear more about this coding style and maybe increase the quality of all coding I will do in the future.
This whole "if" vs "no if" thing makes me think of the Expression Problem1. Basically, it's an observation that programming with if statements or without if statements is a matter of encapsulation and extensibility and that sometimes it's better to use if statements2 and sometimes it's better to use dynamic dispatching with methods / function pointers.
When we want to model something, there are two axes to worry about:
The different cases (or types) of the inputs we need to deal with.
The different operations we want to perform over these inputs.
One way to implement this sort of thing is with if statements / pattern matching / the visitor pattern:
data List = Nil | Cons Int List
length xs = case xs of
Nil -> 0
Cons a as -> 1 + length as
concat xs ys = case xs of
Nil -> ys
Cons a as -> Cons a (concat as ys)
The other way is to use object orientation:
data List = List {
length :: Int,
concat :: (List -> List)
}
nil = List {
length = 0,
concat = (\ys -> ys)
}
cons x xs = List {
length = 1 + length xs,
concat = (\ys -> cons x (concat xs ys))
}
It's not hard to see that the first version using if statements makes it easy to add new operations on our data type: just create a new function and do a case analysis inside it. On the other hand, this makes it hard to add new cases to our data type since that would mean going back through the program and modifying all the branching statements.
The second version is kind of the opposite. It's very easy to add new cases to the datatype: just create a new "class" and tell what to do for each of the methods we need to implement. However, it's now hard to add new operations to the interface since this means adding a new method for all the old classes that implemented the interface.
There are many different approaches that languages use to try to solve the Expression Problem and make it easy to add both new cases and new operations to a model. However, there are pros and cons to these solutions [3] so in general I think it's a good rule of thumb to choose between OO and if statements depending on what axis you want to make it easier to extend stuff.
Anyway, going back to your question there are couple of things I would like to point out:
The first one is that I think the OO "mantra" of getting rid of all if statements and replacing them with method dispatching has more to do with how most OO languages don't have typesafe Algebraic Data Types than it has to do with "if statements" being bad for encapsulation. Since the only way to be type safe is to use method calls you are encouraged to convert programs using if statements into programs using the Visitor Pattern [4] or worse: convert programs that should be using the visitor pattern into programs using simple method dispatch, therefore making extensibility easy in the wrong direction.
The second thing is that I'm not a big fan of breaking things into functions just because you can. In particular, I find that style where all the functions have just 5 lines and call tons of other functions is pretty hard to read.
Finally, I think your example doesn't really get rid of if statements. Essentially, what you are doing is having a function from Integers to a new datatype (with two cases, one for Big and one for Small) and then you still need to use if statements when working with the datatype:
data Size = Big | Small
toSize :: Int -> Size
toSize n = if n < 10 then Small else Big
someOp :: Size -> String
someOp Small = "Wow, its small"
someOp Big = "Wow, its big"
Going back to the expression problem point of view, the advantage of defining our toSize / isSmall function is that we put the logic of choosing which case our number fits into in a single place, and after that our functions can operate only on the already-classified case. However, this does not mean that we have removed if statements from our code! If toSize is a factory function and Big and Small are classes sharing an interface, then yes, we will have removed if statements from our code. However, if our isSmall just returns a boolean or enum then there will be just as many if statements as there were before. (And you should choose which implementation to use depending on whether you want to make it easier to add new methods or new cases - say Medium - in the future.)
[1] - The name comes from the case where you have an "expression" datatype (numbers, variables, addition/multiplication of subexpressions, etc.) and want to implement things like evaluation functions and other operations over it.
[2] - Or pattern matching over Algebraic Data Types, if you want to be more type safe...
[3] - For example, you might have to define all multimethods on the "top level" where the "dispatcher" can see them. This is a limitation compared to the general case, since you can use if statements (and lambdas) nested deeply inside other code.
[4] - Essentially a "church encoding" of an algebraic data type.
I've never heard of such a convention. I don't see how it works, anyway. Surely the only point of having an iIsSmall is to later branch on it (possibly in combination with other values)?
What I have heard of is an argument to avoid having variables like iIsSmall at all. iIsSmall is just storing the result of a test you made, so that you can later use that result to make some decision. So why not just test the value of i at the point where you need to make the decision? i.e., instead of:
int i = 5;
bool iIsSmall = isSmall(i);
...
<code>
...
if (iIsSmall) {
<do something because i is small>
} else {
<do something different because i is not small>
}
just write:
int i = 5;
...
<code>
...
if (isSmall(i)) {
<do something because i is small>
} else {
<do something different because i is not small>
}
That way you can tell at the branch point what you're actually branching on because it's right there. That's not hard in this example anyway, but if the test was complicated you're probably not going to be able to encode the whole thing in the variable name.
It's also safer. There's no danger that the name iIsSmall is misleading because you changed the code so that it was testing something else, or because i was actually altered after you called isSmall so that it is not necessarily small anymore, or because someone just picked a dumb variable name, etc, etc.
Obviously this doesn't always work. If the isSmall test is expensive and you need to branch on its result many times, you don't want to execute it many times. You also might not want to duplicate the code of that call many times, unless it's trivial. Or you might want to return the flag to be used by a caller who doesn't know about i (though then you could just return isSmall(i), rather than store it in a variable and then return the variable).
Btw, the separate function saves nothing in your example. You can include (i < 10) in an assignment to a bool variable just as easily as in a return statement in a bool function. i.e. you could just as easily write bool isSmall = i < 10; - it's this that avoids the if statement, not the separate function. Code of the form if (test) { x = true; } else { x = false; } or if (test) { return true; } else { return false; } is always silly; just use x = test or return test.
Is it really a convention? Should one just kill minimal if-constructs just because there could be frustration over it?
OK, if statements tend to grow out of control, especially if many special cases are added over time. Branch after branch is added, and in the end no one is able to comprehend what everything does without spending hours of time and several cups of coffee on this grown instance of spaghetti code.
But is it really a good idea to put everything into separate functions? Code should be reusable. Code should be readable. But a function call just creates the need to look the function up elsewhere in the source file. If all ifs are tucked away in this manner, you just skip around in the source file all the time. Does this support readability?
Or consider an if statement which is not reused anywhere. Should it really go into a separate function, just for the sake of convention? There is some overhead involved here, too, and performance could also be relevant in this context.
What I am trying to say: following coding conventions is good. Style is important. But there are exceptions. Just try to write good code that fits into your project and keep the future in mind. In the end, coding conventions are just guidelines which try to help us to produce good code without enforcing anything on us.
The FC++ library provides an interesting approach to supporting functional programming concepts in C++.
A short example from the FAQ:
take (5, map (odd, enumFrom(1)))
FC++ seems to take a lot of inspiration from Haskell, to the extent of reusing many function names from the Haskell prelude.
I've seen a recent article about it, and it's been briefly mentioned in some answers on stackoverflow, but I can't find any usage of it out in the wild.
Are there any open source projects actively using FC++? Or any history of projects which used it in the past? Or does anyone have personal experience with it?
There's a Customers section on the web site, but the only active link is to another library by the same authors (LC++).
As background: I'm looking to write low latency audio plugins using existing C++ APIs, and I'm looking for tooling which allows me to write concise code in a functional style. For this project I want to use a C++ library rather than a separate language, to avoid introducing FFI bindings (because of the complexity) or garbage collection (to keep the upper bound on latency in the sub-millisecond range).
I'm aware that the STL and Boost libraries already provide support for many FP concepts - this may well be a more practical approach. I'm also aware of other promising approaches for code generation of audio DSP code from functional languages, such as the FAUST project or the Haskell synthesizer package.
This isn't an answer to your question proper, but my experience with embedding of functional style into imperative languages has been horrid. While the code can be almost as concise, it retains the complexity of reasoning found in imperative languages.
The complexity of the embedding usually requires the most intimate knowledge of the details and corner cases of the language. This greatly increases the cost of abstraction, as these things must always be taken into careful consideration. And with a cost of abstraction so high, it is easier just to put a side-effectful function in a lazy stream generator and then die of subtle bugs.
An example from FC++:
struct Insert : public CFunType<int,List<int>,List<int> > {
List<int> operator()( int x, const List<int>& l ) const {
if( null(l) || (x > head(l)) )
return cons( x, l );
else
return cons( head(l), curry2(Insert(),x,tail(l)) );
}
};
struct Isort : public CFunType<List<int>,List<int> > {
List<int> operator()( const List<int>& l ) const {
return foldr( Insert(), List<int>(), l );
}
};
I believe this is trying to express the following Haskell code:
-- transliterated, and generalized
insert :: (Ord a) => a -> [a] -> [a]
insert x [] = [x]
insert x (a:as) | x > a = x:a:as
| otherwise = a:insert x as
isort :: (Ord a) => [a] -> [a]
isort = foldr insert []
I will leave you to judge the complexity of the approach as your program grows.
I consider code generation a much more attractive approach. You can restrict yourself to a minuscule subset of your target language, making it easy to port to a different target language. The cost of abstraction in an honest functional language is nearly zero since, after all, they were designed for that (just as abstracting over imperative code in an imperative language is fairly cheap).
I'm the primary original developer of FC++, but I haven't worked on it in more than six years. I have not kept up with C++/boost much in that time, so I don't know how FC++ compares now. The new C++ standard (and implementations like VC++) has a bit of stuff like lambda and type inference help that makes some of what is in there moot. Nevertheless, there might be useful bits still, like the lazy list types and the Haskell-like (and similarly named) combinators. So I guess try it and see.
(Since you mentioned real-time, I should mention that the lists use reference counting, so if you 'discard' a long list there may be a non-trivial wait in the destructor as all the cells' ref-counts go to zero. I think typically in streaming scenarios with infinite streams/lists this is a non-issue, since you're typically just tailing into the stream and only deallocating things one node at a time as you stream.)