Faking Return value with F# and FakeItEasy - unit-testing

I am trying to use FakeItEasy to mock an interface defined in C#
public interface IMyInterface
{
    int HeartbeatInterval { get; set; }
}
In the F# test I do
let myFake = A.Fake<IMyInterface>()
A.CallTo(fun () -> ((!myFake).HeartbeatInterval)).Returns(10) |> ignore
Running this in the test runner results in
System.ArgumentException
Expression of type 'Microsoft.FSharp.Core.FSharpFunc`2[Microsoft.FSharp.Core.Unit,System.Int32]' cannot be used for return type 'System.Int32'
In fact it would seem that it does this for any return type e.g. if HeartbeatInterval returned type of foo then the exception thrown would be for type foo instead of System.Int32.
Am I doing this wrong, or is there some incompatibility between F# and FakeItEasy?
It has crossed my mind that using an Object Expression might be an easier way to go.

I would venture a hypothesis that the "Easy" in "FakeItEasy" stands for "Easy Over Simple". The library says it's "easy", and that is probably true if you're using it from C#, because it is quite obviously designed for that language. But it is far from "simple", because it relies on C#-specific syntactic tricks that are hidden from view and don't work well in F#.
The specific gotcha you're hitting is a combination of two things: (1) F# functions are not the same as C# Func<T,R>, and (2) F# overload resolution rules are different from C#'s.
There are three overloads of CallTo - two of them take an Expression<_>, and the third one is a "catch-all", taking an object. In C#, if you call this method with a lambda expression as the argument, the compiler will try its best to convert the lambda expression to an Expression<_> and call one of the specialized overloads. F#, however, does not make this effort: F#'s support for C#-style Expression<_> is very limited, primarily focused on compatibility with LINQ, and it only kicks in when there are no alternatives. So in this case, F# chooses to call the CallTo(object) overload.
Next, what would the argument be? F# is a very strict and consistent language. Apart from some special interop cases, most F# expressions have a definite type, regardless of the context in which they appear. Specifically, an expression of the form fun() -> x will have type unit -> 'a, where 'a is the type of x. In other words, it's an F# function.
At runtime, F# functions are represented by the type FSharpFunc<T,R>, so that is what the compiler will pass to the CallTo(object) method, which will look at it and, unable to understand what the hell it is, throw an exception.
To fix it, you could make yourself a special version of CallTo (let's call it FsCallTo) that would force the F# compiler to convert your fun() -> x expression into an Expression<_>, then use that method instead of CallTo:
// WARNING: unverified code. Should work, but I haven't checked.
type A with
    // This is how you declare extension methods in F#
    static member FsCallTo( e: System.Linq.Expressions.Expression<System.Func<_>> ) = A.CallTo( e )

let myFake = A.Fake<IMyInterface>()
// Calling FsCallTo will force F# to generate an `Expression<_>`,
// because that's what the method expects:
A.FsCallTo(fun () -> myFake.HeartbeatInterval).Returns(10) |> ignore
However, as you have absolutely correctly observed, this is way too much of a hassle for mocking up an interface, since F# already has a perfectly statically verifiable, runtime-cost-free, syntactically nice alternative in the form of object expressions:
let myFake =
    { new IMyInterface with
        member this.HeartbeatInterval
            with get () = 10
            and set _ = () }
I would totally recommend going with them instead.

OCaml - Why is Either not a Monad

I'm new to OCaml, but have worked with Rust, Haskell, etc., and was very surprised when I tried to implement bind on Either: it doesn't appear that any of the usual implementations have bind.
Jane Street's Base is missing it
What I assume is the standard library is missing it
bind was the first function I reached for... even before match, and the implementation seems quite easy:
let bind_either (m: ('e, 'a) Either.t) (f: 'a -> ('e, 'b) Either.t): ('e, 'b) Either.t =
  match m with
  | Right r -> f r
  | Left l -> Left l
Am I missing something?
It is because we prefer the more specific Result.t, which has clear names for the ok state and for the exceptional state. In general, Either.t is not very popular amongst OCaml programmers, as usually a more specialized type can be used, with variant names that better communicate the domain-specific purpose of each branch. It is also worth mentioning that Either was introduced to the OCaml standard library only recently, in 4.12, so it may yet become more popular.
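To make the naming point concrete, here is a small illustrative snippet (the function names are made up) using the standard library's Result.t and Result.bind, both available since OCaml 4.08:
(* Illustrative only: with Result, the constructors Ok/Error make the
   success and failure branches explicit, and Result.bind chains them. *)
let parse_positive (s : string) : (int, string) result =
  match int_of_string_opt s with
  | Some n when n > 0 -> Ok n
  | Some _ -> Error "not positive"
  | None -> Error "not an integer"

let doubled (s : string) : (int, string) result =
  Result.bind (parse_positive s) (fun n -> Ok (2 * n))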
As mentioned by ivg, Either is relatively new to the standard library, and generally one would prefer to use types that make more sense, for example Result for error handling.
There is also another point of view, which also applies to Result: monads act on type constructors with a single type parameter.
In Haskell this is much less obvious, because it is possible to partially apply type constructors. Either a has kind * -> *, so bind :: Either a b -> (b -> Either a c) -> Either a c lets you go from Either a b to Either a c while the left parameter stays fixed.
In trying to generalise the behaviour of a monad via parameterised modules (functors, in the ML sense of the term), one has to "trick" oneself into standardising, for example, the treatment of option (a type of arity 1) and either (or result), which are of arity 2.
There are several approaches. One is to express multiple interfaces to describe a monad, for example defining Monad2 for arity-2 types and describing Monad in terms of Monad2, as is done in the Base library (https://ocaml.janestreet.com/ocaml-core/latest/doc/base/Base/Monad/index.html).
In Preface we used a rather different (and perhaps less generic) approach: we leave it to the user to fix the left parameter of Either via a functor (and the right parameter for Result): https://github.com/xvw/preface/blob/master/lib/preface_stdlib/either.mli
However, we do not lose the ability to change the left-hand type of the calculation because Either also has a Bifunctor module that allows us to change the type of both parameters. The conversation is broadly described in this thread: https://discuss.ocaml.org/t/instance-modules-for-more-parametrized-types/5356/2
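As a rough sketch (this is not the actual Preface or Base code, just an illustration), here is a monad signature for arity-1 type constructors and a functor that fixes the left parameter of Either so it fits:
(* A monad signature for type constructors of arity 1. *)
module type MONAD = sig
  type 'a t
  val return : 'a -> 'a t
  val bind : 'a t -> ('a -> 'b t) -> 'b t
end

(* Either.t has arity 2, so we fix its left parameter with a functor to
   obtain an arity-1 type constructor that can satisfy MONAD. *)
module Either_monad (E : sig type t end) :
  MONAD with type 'a t = (E.t, 'a) Either.t = struct
  type 'a t = (E.t, 'a) Either.t
  let return x = Either.Right x
  let bind m f =
    match m with
    | Either.Right r -> f r
    | Either.Left e -> Either.Left e
end

(* For example, fixing the error type to string: *)
module String_either = Either_monad (String)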

Is it possible to support higher-kinded types in Standard ML?

I have read in this post that ML dialects do not allow type variables of non-ground kind. E.g. the last statement is not representable:
-- Haskell code
type Ground = Int
type FirstOrder a = Maybe a
type SecondOrder c = c Int -- ML do not allow :c
OCaml has support of higher-kinded only at the level of modules. There are some explanations (here and author's comment here) about which features of OCaml clash with higher-kinded types opportunity.
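For concreteness, here is a rough sketch (the module names are invented) of what that module-level encoding looks like in OCaml:
(* OCaml cannot abstract over a type constructor with an ordinary type
   variable, but a functor can take a module exposing an arity-1 type
   constructor, which plays the same role. *)
module type T1 = sig type 'a t end

module Second_order (C : T1) = struct
  type t = int C.t   (* the analogue of: type SecondOrder c = c Int *)
end

module With_list = Second_order (struct type 'a t = 'a list end)
(* With_list.t is int list *)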
If I understood it correctly, the main problems are the following facts:
OCaml does not impose a "freshness" restriction on type definitions: the type construct can introduce either an alias (and the type remains the same) or a new fresh type
a type alias definition can be hidden
AFAIK, Standard ML has different constructs for type definitions and aliases: type for aliases and datatype for introducing new fresh types.
Unfortunately, I do not know SML well enough -- is it possible to export type aliases with their definitions hidden? And can someone please show me whether there are any other SML features that still do not go well with higher-kinded types?
There will probably be some problems with functors -- could someone be so kind as to show a code example? I've heard about such cases several times but still have not found a complete example.
Yes, SML can express the equivalent of higher-kinded types through functors, and can also make them abstract. Useless example:
functor F (type 'a t) :> sig type 'a u end =
struct
  type 'a u = ('a t) t
end
However, unlike OCaml, SML does not (officially) have higher-order functors, so per the standard, you can only express second-order type constructors this way.
FWIW, OCaml may use the same keyword for type aliases and generative types (where SML uses type vs datatype), but they are still distinguished syntactically, by their right-hand side, so that's no real difference to SML. In both languages, an abstract type occurring in a signature can be implemented as either a type alias or a generative type. So the problem for type inference that Leo is alluding to exists equally in both. Haskell gets away without that problem because it does not have the same expressiveness regarding type abstraction (i.e., no "sealing" operator for modules).
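For illustration, a small OCaml sketch (the module names are made up) of how sealing hides whether a signature's abstract type is an alias or a generative type:
(* Sealing with a signature hides the implementation of the abstract
   type t; clients cannot tell which version they are using. *)
module Alias_impl : sig
  type t
  val of_int : int -> t
end = struct
  type t = int               (* implemented as an alias... *)
  let of_int n = n
end

module Fresh_impl : sig
  type t
  val of_int : int -> t
end = struct
  type t = T of int          (* ...or as a generative datatype *)
  let of_int n = T n
end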

typed vs untyped vs expr vs stmt in templates and macros

I've lately been using templates and macros, but I have to say I have barely found information about these important types. This is my superficial understanding:
typed/expr is something that must exist previously, but you can use .immediate. to overcome that.
untyped/stmt is something that doesn't have to be defined previously / one or more statements.
This is a very vague notion of the types. I'd like to have a better explanation of them, including which types should be used as return.
The goal of these different parameter types is to give you several increasing levels of precision in specifying what the compiler should accept as a parameter to the macro.
Let's imagine a hypothetical macro that can solve mathematical equations. It will be used like this:
solve(x + 10 = 25) # figures out that the correct value for x is 15
Here, the macro just cares about the structure of the supplied AST tree. It doesn't require that the same tree is a valid expression in the current scope (i.e. that x is defined and so on). The macro just takes advantage of the Nim parser that already can decode most of the mathematical equations to turn them into easier to handle AST trees. That's what untyped parameters are for. They don't get semantically checked and you get the raw AST.
On the next step in the precision ladder are the typed parameters. They allow us to write a generic kind of macro that will accept any expression, as long as it has a proper meaning in the current scope (i.e. its type can be determined). Besides catching errors earlier, this also has the advantage that we can now work with the type of the expression within the macro body (using the macros.getType proc).
We can get even more precise by requiring an expression of a specific type (either a concrete type or a type class/concept). The macro will now be able to participate in overload resolution like a regular proc. It's important to understand that the macro will still receive an AST tree, as it will accept both expressions that can be evaluated at compile-time and expressions that can only be evaluated at run-time.
Finally, we can require that the macro receives a value of a specific type that is supplied at compile-time. The macro can work with this value to parametrise the code generation. This is the realm of static parameters. Within the body of the macro, they are no longer AST trees, but rather ordinary, well-typed values.
So far, we've only talked about expressions, but Nim's macros also accept and produce blocks and this is the second axis, which we can control. expr generally means a single expression, while stmt denotes a list of expressions (historically, its name comes from StatementList, which existed as a separate concept before expressions and statements were unified in Nim).
The distinction is most easily illustrated with the return types of templates. Consider the newException template from the system module:
template newException*(exceptn: typedesc, message: string): expr =
  ## creates an exception object of type ``exceptn`` and sets its ``msg`` field
  ## to `message`. Returns the new exception object.
  var
    e: ref exceptn
  new(e)
  e.msg = message
  e
Even though it takes several steps to construct an exception, by specifying expr as the return type of the template we tell the compiler that only the last expression will be considered the return value of the template. The rest of the statements will be inlined, but cleverly hidden from the calling code.
As another example, let's define a special assignment operator that can emulate the semantics of C/C++, allowing assignments within if statements:
template `:=` (a: untyped, b: typed): bool =
  var a = b
  a != nil

if f := open("foo"):
  ...
Specifying a concrete type has the same semantics as using expr. If we had used the default stmt return type instead, the compiler wouldn't have allowed us to pass a "list of expressions", because the if statement obviously expects a single expression.
.immediate. is a legacy from a long-gone past, when templates and macros didn't participate in overload resolution. When we first made them aware of the type system, plenty of code needed the current untyped parameters, but it was too hard to refactor the compiler to introduce them from the start and instead we added the .immediate. pragma as a way to force the backward-compatible behaviour for the whole macro/template.
With typed/untyped, you have more granular control over the individual parameters of the macro, and the .immediate. pragma will be gradually phased out and deprecated.

Is it possible to write this Rust code into semantically equivalent C++ code?

I stumbled upon this Rust example in Wikipedia and I am wondering if it's possible to convert it to semantically equivalent C++ code?
The program defines a recursive data structure and implements methods upon it. Recursive data structures require a layer of indirection, which is provided by a unique pointer, constructed via the box operator. (These are analogous to the C++ library type std::unique_ptr, though with more static safety guarantees.)
fn main() {
    let list = box Node(1, box Node(2, box Node(3, box Empty)));
    println!("Sum of all values in the list: {:i}.", list.multiply_by(2).sum());
}
// `enum` defines a tagged union that may be one of several different kinds
// of values at runtime. The type here will either contain no value, or a
// value and a pointer to another `IntList`.
enum IntList {
    Node(int, Box<IntList>),
    Empty
}
// An `impl` block allows methods to be defined on a type.
impl IntList {
    fn sum(self) -> int {
        match self {
            Node(value, next) => value + next.sum(),
            Empty => 0
        }
    }
    fn multiply_by(self, n: int) -> Box<IntList> {
        match self {
            Node(value, next) => box Node(value * n, next.multiply_by(n)),
            Empty => box Empty
        }
    }
}
Apparently, in the C++ version Rust's enum should be replaced with a union, Rust's Box with std::unique_ptr, and Rust's Node tuple with std::tuple, but I just can't wrap my head around how to write an equivalent implementation in C++.
I know this is probably not practical (and definitely not the correct way to do things in C++), but I just wanted to see how these languages compare (are C++11 features flexible enough for this kind of tinkering?). I would also like to compare compiler-generated assembly for semantically equivalent implementations (if that is even possible).
Disclaimer: I'm not a C++11 expert. Consume with a requisite dose of salt.
As others have commented, there are a few ways of interpreting your question. I'm going to go with an overly aggressive interpretation, since it's the only interesting one:
No, it is not possible to translate that Rust code into equivalent C++ code. Can you translate it into a program that produces the same output? They're both Turing complete, so of course you can. Can you translate it so that all semantics in the original are preserved? No.
Most of it can be translated such that it preserves the actual behaviour. Rust-style enums can be replaced by structs with both a tag field and a union, along with writing appropriate operator overloads to ensure that you correctly destroy the members of only the variant that's actually stored. You can (presumably) use unique_ptr in such a way that the memory gets allocated first and then the new value is written directly into the allocation, so there's no copy. I believe you can rewrite fn sum(self) so that it uses an rvalue this (although I've never done this, so I could easily be wrong).
But the one thing you cannot do in C++, to my knowledge, is replicate linear types. There is no way to statically enforce that a moved value cannot be used again. The best you can do is runtime checks, which must necessarily involve additional overhead. This also plays into why you can't have a non-nullable unique_ptr: you wouldn't ever be able to move it, since you have to leave the moved variable in a usable state.
Now, that having been said, I should disclaim the previous statement by noting that currently, the Rust compiler emits some runtime checks for dropped (i.e. moved) values, in the form of drop flags. The plan, last I checked, was to remove these runtime checks in favour of purely static destruction, hopefully before 1.0.

OCaml polymorphism example other than template function?

I am trying to understand for myself which form of polymorphism the OCaml language has.
I was provided with an example:
let id x = x
Isn't this example equivalent to the C++ template function
template<class A> A id(A x) { return x; }
If so, then my question is: are there any other forms of polymorphism in OCaml? This notion is called a "generic algorithm" in the world of imperative languages, not "polymorphism".
There are basically three language features that are sometimes called polymorphism:
Parametric polymorphism (i.e. "generics")
Subtype polymorphism, this is the ability of a subtype of a type to offer a more specific version of an operation than the supertype, i.e. the ability to override methods (and the ability of the runtime system to call the correct implementation of a method based on the runtime type of an object). In OO languages this is often simply referred to as "polymorphism".
So-called ad-hoc polymorphism, i.e. the ability to overload functions/methods.
As you already discovered, OCaml has parametric polymorphism. It also has subtype polymorphism. It does not have ad-hoc polymorphism.
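As a small illustration of the missing ad-hoc polymorphism, OCaml's arithmetic operators are monomorphic rather than overloaded:
(* No overloading: ( + ) works only on int, ( +. ) only on float. *)
let int_sum = 1 + 2          (* ( + )  : int -> int -> int *)
let float_sum = 1.0 +. 2.0   (* ( +. ) : float -> float -> float *)
(* let bad = 1.0 + 2.0       -- rejected: ( + ) expects int *)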
Since in your title you've asked for examples, here's an example of subtype polymorphism in OCaml:
class c = object
  method m x = x+1
end

class d = object
  inherit c
  method m x = x+2
end

let main =
  let o:c = new d in
  print_int (o#m 2)
This will print 4.
This kind of polymorphism is called generic programming but the theoretical concept behind it is called parametric polymorphism.
The two examples you provided indeed show parametric polymorphism, but OCaml relies on a strong inferring type checker rather than the approach taken by C++ (which is a more pragmatic solution, with more caveats). The real difference is that in C++ the code is duplicated for every type at which you use it, while in OCaml the type checker resolves it by verifying that a substitution of the implicit type variables exists (through unification).
Everything can be polymorphic in OCaml simply because nothing is usually annotated with types, so in practice, if something can be used as an argument to a function, it is implicitly allowed.
You can, for example, use type variables to define polymorphic functions:
let swap ((x : 'a), (y : 'b)) : 'b * 'a = (y, x)
so that this will work whatever types 'a or 'b are.
Another powerful polymorphic feature of OCaml is functors, which are not the usual C++ functors but modules parametrized by other modules. The concept sounds scarier than it is, but they indeed represent a higher order of polymorphic behavior for OCaml code.
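As a rough illustration (the module and function names here are invented), a functor is a module parametrized by another module:
(* A module interface describing anything that can be printed. *)
module type SHOWABLE = sig
  type t
  val show : t -> string
end

(* A functor: a module parametrized by any SHOWABLE module. *)
module Make_logger (S : SHOWABLE) = struct
  let log (x : S.t) = print_endline ("[log] " ^ S.show x)
end

(* Instantiating the functor with a concrete module. *)
module Int_logger = Make_logger (struct
  type t = int
  let show = string_of_int
end)

let () = Int_logger.log 42   (* prints "[log] 42" *)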