Unhandled Exception with OCaml 5.0.0~beta1 - ocaml

I'm inconsistently getting this error in a first experiment with OCaml 5.0.0~beta1:
Fatal error: exception Stdlib.Effect.Unhandled(Domainslib__Task.Wait(_, _))
My setup:
Processor: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
Debian 10 (buster)
opam version 2.1.3 installed as binary from this script
opam switch: "→ 5.0.0~beta1 ocaml-base-compiler.5.0.0~beta1 5.0.0~beta1"
After a quick read of this tutorial, I copied the parallel_matrix_multiply function and added some code at the end just to use it:
open Domainslib

let parallel_matrix_multiply pool a b =
  let i_n = Array.length a in
  let j_n = Array.length b.(0) in
  let k_n = Array.length b in
  let res = Array.make_matrix i_n j_n 0 in
  Task.parallel_for pool ~start:0 ~finish:(i_n - 1) ~body:(fun i ->
    for j = 0 to j_n - 1 do
      for k = 0 to k_n - 1 do
        res.(i).(j) <- res.(i).(j) + a.(i).(k) * b.(k).(j)
      done
    done);
  res ;;

let pool = Task.setup_pool ~num_domains:3 () in
let a = Array.make_matrix 2 2 1 in
let b = Array.make_matrix 2 2 2 in
let c = parallel_matrix_multiply pool a b in
for i = 0 to 1 do
  for j = 0 to 1 do
    Printf.printf "%d " c.(i).(j)
  done;
  print_char '\n'
done;;
I then compile it with no errors with
ocamlfind ocamlopt -linkpkg -package domainslib parallel_for.ml
and then comes the problem: executing the generated a.out file sometimes (rarely) prints the expected output
4 4
4 4
but usually ends with the error mentioned earlier:
Fatal error: exception Stdlib.Effect.Unhandled(Domainslib__Task.Wait(_, _))
Sorry if I am making some trivial mistake, but I can't understand what is going on, especially given that the error happens inconsistently.

The parallel_matrix_multiply computation is running outside of the Domainslib scheduler, so whenever a task yields to the scheduler, the Wait effect is unhandled and gets turned into an Effect.Unhandled exception.
The solution is to run the parallel computation within Task.run:
...
let c = Task.run pool (fun () -> parallel_matrix_multiply pool a b) in
...
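For reference, a complete version of the driver code with this fix could look as follows (a sketch that keeps the parallel_matrix_multiply definition from the question unchanged and also tears the pool down at the end with Task.teardown_pool):

let () =
  let pool = Task.setup_pool ~num_domains:3 () in
  let a = Array.make_matrix 2 2 1 in
  let b = Array.make_matrix 2 2 2 in
  (* Run the parallel computation inside the Domainslib scheduler so that the
     Wait effect performed by Task.parallel_for is handled. *)
  let c = Task.run pool (fun () -> parallel_matrix_multiply pool a b) in
  for i = 0 to 1 do
    for j = 0 to 1 do
      Printf.printf "%d " c.(i).(j)
    done;
    print_char '\n'
  done;
  Task.teardown_pool pool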

Related

Is there a way to work around the (SET_ADD) error in numba given here?

I have a snippet of Python code as a function here. The purpose of the code is to use the combinations tool of itertools to turn a list of pairs into a list of triplets by matching items that correlate with each other. Due to the time-consuming nature of the function, I have been trying to get it to work on a GPU through numba. Here is the function, followed by the call:
@jit(target_backend='cuda')
def combinator(size, theuniquegenes, thetuplelist):
    limiter = size+1
    lengths = [l + 1 for l in range(len(theuniquegenes)) if (l + 1 > 2) and (l + 1 < limiter)]
    new_combs = {c for l in lengths for c in combinations(theuniquegenes, l)}
    correlations = {x for x in new_combs if all([c in thetuplelist for c in combinations(x, 2)])}
    print(len(correlations))
    tuplelist = thetuplelist + list(correlations)
    print(len(tuplelist))
    return tuplelist

tuplelist = combinator(3, uniquegenes, tuplelist)
Unfortunately, I keep encountering the following error message:
numba.core.errors.UnsupportedError: Failed in object mode pipeline (step: inline calls to locally defined closures)
Use of unsupported opcode (SET_ADD) found
new_combs = {c for l in lengths for c in combinations(theuniquegenes, l)}
^
How can I rewrite this line to work?

Core.Command.run : API changed between v0.14 and v0.15?

The following code compiled successfully with Core v0.14 (OCaml 4.10), but fails with v0.15 (OCaml 4.11).
open Core;;

let command = Command.basic ~summary:"essai" (
  let open Command.Let_syntax in
  let%map_open s = anon (sequence ("n" %: int)) in
  fun () ->
    List.iter s ~f:(fun x -> Printf.printf "n = %d\n" x) ;
)

let () = Command.run ~version:"0.0" ~build_info:"RWO" command;;
The error (with OCaml 4.11):
File "cli.ml", line 10, characters 9-20:
10 | let () = Command.run ~version:"0.0" ~build_info:"RWO" command;;
^^^^^^^^^^^
Error (alert deprecated): Core.Command.run
[since 2021-03] Use [Command_unix]
File "cli.ml", line 10, characters 9-20:
10 | let () = Command.run ~version:"0.0" ~build_info:"RWO" command;;
^^^^^^^^^^^
Error: This expression has type [ `Use_Command_unix ]
This is not a function; it cannot be applied.
The documentation of Core.Command.run states that it is obsolete - but I fail to find how to replace it.
I think you're looking for Command_unix as indicated by the message you received. Documentation link for Command_unix.run.
The core library was restructured in version 0.15: Unix-specific modules and functions were moved to the core_unix library in order to make the main core library more portable (or at least JavaScript-compatible).
The function Command_unix.run in the core_unix library is exactly the same as the Command.run function in previous versions of the core library.
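As a sketch of the migration, only the last line of the program needs to change (assuming the core_unix package is installed; in dune the library providing Command_unix is named core_unix.command_unix, and let%map_open still needs the ppx_jane/ppx_let preprocessor):

open Core;;

let command = Command.basic ~summary:"essai" (
  let open Command.Let_syntax in
  let%map_open s = anon (sequence ("n" %: int)) in
  fun () ->
    List.iter s ~f:(fun x -> Printf.printf "n = %d\n" x)
)

(* Command_unix.run accepts the same optional ~version and ~build_info
   arguments that Core.Command.run used to take. *)
let () = Command_unix.run ~version:"0.0" ~build_info:"RWO" command;;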

Run two infinite loops at once in OCaml

I have programmed a Langton's ant and it works nicely.
Now I want to run 2 ants simultaneously. I have a run function that does the computation and the ant's movement, and it is an infinite loop.
How can I run 2 of these loops at once?
I have tried looking at Thread, but I'm not sure it's the best fit for my case.
Here is some example code:
Run function:
let run (f, tab2 : t_fourmi * int array array) =
  f.xx := !(f.x);
  f.yy := !(f.y);
  let d = ref 0
  and z = ref 0
  and o = ref 0
  (* 1 = from below, 2 = from the right, 3 = from above, 4 = from the left *)
  in
  if tab2.(!(f.x)/5).(!(f.y)/5) = 0
  then move_right (f, 1, tab2);
  if !(f.xx) + 5 = !(f.x)
  then d := 4
  else if !(f.xx) - 5 = !(f.x)
  then d := 2
  else if !(f.yy) + 5 = !(f.y)
  then d := 1
  else if !(f.yy) - 5 = !(f.y)
  then d := 3;
  while true do
    (*
    print_string "step : ";
    print_int !o;
    print_newline (); *)
    o := !o + 1;
    f.xx := !(f.x);
    f.yy := !(f.y);
    z := tab2.(!(f.x)/5).(!(f.y)/5);
    if !z = 0
    then move_right (f, !d, tab2)
    else if !z = 1
    then move_left (f, !d, tab2)
    else if !z = 2
    then move_right (f, !d, tab2)
    else if !z = 3
    then move_right (f, !d, tab2);
    if !(f.xx) + 5 = !(f.x)
    then d := 4
    else if !(f.xx) - 5 = !(f.x)
    then d := 2
    else if !(f.yy) + 5 = !(f.y)
    then d := 1
    else if !(f.yy) - 5 = !(f.y)
    then d := 3;
  done;
;;
Example of a move function:
let move_left (f, d, tab2 : t_fourmi * int * int array array) = (* d = the direction the ant comes from *)
  (* 1 = from below, 2 = from the right, 3 = from above, 4 = from the left *)
  f.xx := !(f.x);
  f.yy := !(f.y);
  if d = 1
  then f.x := !(f.x) - 5
  else if d = 2
  then f.y := !(f.y) - 5
  else if d = 3
  then f.x := !(f.x) + 5
  else if d = 4
  then f.y := !(f.y) + 5;
  if !(f.x) >= 995
  then f.x := 5
  else if !(f.x) <= 5
  then f.x := 995;
  if !(f.y) >= 995
  then f.y := 5
  else if !(f.y) <= 5
  then f.y := 995;
  create_fourmi (f);
  let n = tab2.(!(f.xx)/5).(!(f.yy)/5) in
  drawinv (!(f.xx), !(f.yy), n, tab2);
;;
If you need more functions, ask me.
Thanks
If I understand your code correctly (it's unfortunately not very readable), it could be outlined as essentially:
let init params =
  ...
let step state =
  ...
let move thing state =
  ...
let run params thing =
  let state = ref (init params) in
  while true do
    state := step !state;
    move thing !state
  done
where I've factored init, step and move out into separate functions, which should be pretty straightforward to do. By doing so we can replace the run function with a run_two function that runs two instances virtually at the same time, though not actually in parallel: they won't run independently, but in lockstep, iteration by iteration:
let run_two params1 thing1 params2 thing2 =
  let state1 = ref (init params1) in
  let state2 = ref (init params2) in
  while true do
    state1 := step !state1;
    state2 := step !state2;
    move thing1 !state1;
    move thing2 !state2
  done
This doesn't use any threads or other complicated concurrency primitives. It's just ordinary code organized in a way that allows composition. It also allows the init and step functions to be completely pure, which makes them easy to test and reason about. You could even make the move functions pure, if you factor out the drawing.
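To make the pattern concrete, here is a purely illustrative instance: the state is just an integer position, step is pure, and move only prints. Your real init, step and move would work on the ant and the board, and the loop would be infinite:

let init start = start

let step pos = pos + 1

let move name pos = Printf.printf "%s is at %d\n" name pos

let run_two start1 name1 start2 name2 =
  let state1 = ref (init start1) in
  let state2 = ref (init start2) in
  (* bounded here only so that the example terminates *)
  for _ = 1 to 3 do
    state1 := step !state1;
    state2 := step !state2;
    move name1 !state1;
    move name2 !state2
  done

let () = run_two 0 "ant A" 10 "ant B"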
You can use threads, though it will open a Pandora's box for you. You have to synchronize your ants, since you probably want them to move in the same world.
Running two ants in different threads
First of all, you have to represent each ant process as an ever-looping function of type ant -> unit, where ant is the type that describes the ant's initial position, moving parameters, and so on (if you don't need any of this, just use unit instead). So suppose we have a function val run : ant -> unit.

Next, we need to make sure that we do not write to the same board at the same time from different threads, so we create a mutex using Mutex.create and update our run function to call Mutex.lock before updating the board and Mutex.unlock after.

Finally, we should ensure that no ant starves waiting for the board, and that once an ant finishes its move it yields control to another ant. We can do this with Thread.delay if we want the simulation to be smooth (i.e., if we want to artificially slow it down to a human-observable speed); otherwise we can just use Thread.yield. (Note that threads are normally preempted, i.e. forced to yield, by the system, but this happens at special yield points. In systems programming there are usually lots of yield points, but that is not our case, so we have to implement this cooperation explicitly.)

Our updated run function is now ready to be run in a thread, with Thread.create run ant. This call returns to our main thread of execution, so we can start a second ant, and so on. Once all ants are running, we have to call Thread.join to wait for each of them; ideally these calls should never return.
I hope the above outline provides enough information, and that you will enjoy doing the coding yourself. Feel free to ask questions in the comments if anything is unclear. Probably the first question will be how to compile it. There are simpler ways, of course, but the most basic one, assuming your program is in ant.ml, is:
ocamlopt -thread unix.cmxa threads.cmxa ant.ml -o ant
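As a minimal sketch of that outline (the step_ant function that updates the shared board, and the two ant values ant1 and ant2, are placeholders for your own code):

let board_mutex = Mutex.create ()

let run ant =
  while true do
    Mutex.lock board_mutex;
    step_ant ant;                (* update the shared board for this ant *)
    Mutex.unlock board_mutex;
    Thread.delay 0.05            (* slow the simulation down and let the other ant run *)
  done

let () =
  let t1 = Thread.create run ant1 in
  let t2 = Thread.create run ant2 in
  Thread.join t1;
  Thread.join t2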
Running two ants in co-routines
While the above will work, it seems too complicated and too system-dependent for such a simple simulation task. Why do we need system threads for that? We actually don't: we can be clever and implement co-routines in plain OCaml using continuation-passing style. Don't worry, a continuation is just a function, and continuation-passing style simply means that a function, instead of returning, calls another function (the continuation) when it is done. In this approach we will not have a run function that loops forever; instead we will have a step function that advances an ant one step forward at each invocation. For demonstration purposes, let's simplify the step function so that each ant just greets the other, e.g.,
let step other yield =
  print_endline ("Hello, " ^ other);
  yield ()
That simple: we do our step, and then call the yield function (that is the fancy continuation) to yield control to the next ant. Now let's tie it together in a simple infinite loop, e.g.,
let rec loop () =
  step "Joe" @@ fun () ->
  step "Doe" @@ fun () ->
  loop ()

let () = loop ()
Let's make this clear: step "Joe" @@ fun () -> <continue> is the same as step "Joe" (fun () -> <continue>), and this (fun () -> <continue>) is the yield function that is passed to the step function that greets Joe. Once Joe is greeted, the step function calls us back and we evaluate <continue>, which in our case is step "Doe" @@ fun () -> loop (), i.e., we pass the fun () -> loop () function as the yield argument to the step function that greets Doe. So once Doe is greeted, we call loop () ... and now we're back at the beginning of the loop.
In our case, we were passing a value of type unit in our continuations, but we can also pass an arbitrary value, e.g., a value that will represent the state of our simulation (the board with ant positions). This will enable a functional implementation of your simulation if you want to try it.
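As a small sketch of that idea (the state here is just a turn counter standing in for the real board), the same loop can thread a value through the continuations:

let step other state yield =
  Printf.printf "Hello, %s (turn %d)\n" other state;
  yield (state + 1)

(* Like the original example, this loops forever; each step receives the
   current state and passes an updated state to its continuation. *)
let rec loop state =
  step "Joe" state @@ fun state ->
  step "Doe" state @@ fun state ->
  loop state

let () = loop 0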

Appcrash on running UmfPackLU<> with Eigen library

I am compiling and trying to run the UmfPackLU<SparseMatrix<>> solver from the Eigen 3.2.9 and UMFPACK v4.5 libraries with TDM-GCC 5.1.0 on the Win64 platform, but I am getting an AppCrash with exception code c0000005.
What I need to implement is the following:
A = [ P ; Q ] (P stacked on top of Q) and B = [ R ; Z ], where P and Q are sparse and Z is a zero block with 3 columns.
X = A\B;
What I am doing (excerpt only) is the following:
#define num_t double
...
SparseMatrix<num_t,RowMajor> A(P.rows()+Q.rows(), P.cols());
A.topRows(P.rows()) = P;
A.bottomRows(Q.rows()) = Q;
Matrix<num_t, Dynamic, 3> B(P.rows()+Q.rows(), 3);
B.topLeftCorner(P.rows(), 3) = R;
B.bottomLeftCorner(Q.rows(), 3) = S;
UmfPackLU<SparseMatrix<num_t>> solver(A.transpose()*A);
auto AtB = A.transpose()*B;
X.col(0) = solver.solve(AtB.col(0)); // ### segmentation error here ###
X.col(1) = solver.solve(AtB.col(1));
X.col(2) = solver.solve(AtB.col(2));
Note the SparseMatrix<> is in RowMajor format.
When debugging with gdb, I get Program received signal SIGSEGV, Segmentation fault. at the line marked above.
Instead of UmfPackLU<SparseMatrix<>>, solving with SimplicialLLT<SparseMatrix<>>, SimplicialLDLT<SparseMatrix<>> or CholmodDecomposition<SparseMatrix<>> is working correctly.
Thanks in advance for any help.
This is a shortcoming in Eigen 3.2.9 that has been fixed a while ago in the 3.3 branch. It's now fixed in the 3.2 branch too (changeset 1e7d97fea51d).
You can work around the issue by calling compute(...) instead of the constructor:
UmfPackLU<SparseMatrix<num_t>> solver;
solver.compute(A.transpose()*A);
Please feel free to correct/enhance/establish my answer.
What I have found is that I need to instantiate an explicit ColMajor matrix AtA before feeding it to the solver (RowMajor does not work), as follows:
SparseMatrix<num_t, ColMajor> AtA = A.transpose()*A;
UmfPackLU<SparseMatrix<num_t>> solver(AtA);
Is this a requirement due to Eigen's lazy evaluation implementation for calling an external routine?

Scalacheck won't properly report the failing case

I've written the following spec:
"An IP4 address" should "belong to just one class" in {
val addrs = for {
a <- Gen.choose(0, 255)
b <- Gen.choose(0, 255)
c <- Gen.choose(0, 255)
d <- Gen.choose(0, 255)
} yield s"$a.$b.$c.$d"
forAll (addrs) { ip4s =>
var c: Int = 0
if (IP4_ClassA.unapply(ip4s).isDefined) c = c + 1
if (IP4_ClassB.unapply(ip4s).isDefined) c = c + 1
if (IP4_ClassC.unapply(ip4s).isDefined) c = c + 1
if (IP4_ClassD.unapply(ip4s).isDefined) c = c + 1
if (IP4_ClassE.unapply(ip4s).isDefined) c = c + 1
c should be (1)
}
}
That is very clear in its scope.
The test passes successfully, but when I force it to fail (for example by commenting out one of the if statements), ScalaCheck correctly reports the error, yet the message doesn't correctly show the actual value used to evaluate the property. More specifically I get:
[info] An IP4 address
[info] - should belong to just one class *** FAILED ***
[info] TestFailedException was thrown during property evaluation.
[info] Message: 0 was not equal to 1
[info] Location: (NetSpec.scala:105)
[info] Occurred when passed generated values (
[info] arg0 = "" // 4 shrinks
[info] )
where you can see arg0 = "" // 4 shrinks doesn't show the value.
I've even tried adding a simple println statement to inspect the cases, but the output appears to be trimmed. I get something like this:
192.168.0.1
189.168.
189.
1
SOLUTION
import org.scalacheck.Prop.forAllNoShrink
import org.scalatest.prop.Checkers.check

"An IP4 address" should "belong to just one class" in {
  val addrs = for {
    a <- Gen.choose(0, 255)
    b <- Gen.choose(0, 255)
    c <- Gen.choose(0, 255)
    d <- Gen.choose(0, 255)
  } yield s"$a.$b.$c.$d"
  check {
    forAllNoShrink(addrs) { ip4s =>
      var c: Int = 0
      if (IP4.ClassA.unapply(ip4s).isDefined) c = c + 1
      if (IP4.ClassB.unapply(ip4s).isDefined) c = c + 1
      if (IP4.ClassC.unapply(ip4s).isDefined) c = c + 1
      if (IP4.ClassD.unapply(ip4s).isDefined) c = c + 1
      if (IP4.ClassE.unapply(ip4s).isDefined) c = c + 1
      c == (1)
    }
  }
}
This is caused by ScalaCheck's test case simplification feature. ScalaCheck just sees that your generator produces a string value. Whenever it finds a value that makes your property false, it tries to simplify that value. In your case, it simplifies it four times until it ends up with an empty string, which still makes your property false.
So this is expected, although confusing, behavior. But you can improve the situation in three different ways.
You can select another data structure to represent your IP addresses. This will make ScalaCheck able to simplify your test cases in a more intelligent way. For example, use the following generator:
val addrs = Gen.listOfN(4, Gen.choose(0,255))
Now ScalaCheck knows that your generator only produces lists of length 4 that only contain numbers between 0 and 255. The test case simplification process will take this into account and not create any values that couldn't have been produced by the generator in the first place. You can do the conversion to string inside your property instead.
A second method is to add a filter directly to your generator, which tells ScalaCheck what a valid IP address string should look like. This filter is used during test case simplification. Define a function that checks for valid strings and attach it to your existing generator this way:
def validIP(ip: String): Boolean = ...
val validAddrs = addrs.suchThat(validIP)
forAll(validAddrs) { ... }
The third method is to simply disable the test case simplification feature altogether by using forAllNoShrink instead of forAll:
Prop.forAllNoShrink(addrs) { ... }
I should also mention that the first two methods require ScalaCheck version >= 1.11.0 to work properly.
UPDATE:
The listOfN list length is actually not respected by the shrinker any more, due to https://github.com/rickynils/scalacheck/issues/89. Hopefully this can be fixed in a future version of ScalaCheck.