OCaml Lwt_mvar.take does not block thread - concurrency

I'm still working on a simple game server. Following a piece of advice from here, I implemented an mvar, hoping it would block threads while it does not yet contain at least 2 players. But it doesn't wait until I put any data there: it always returns a sleeping Lwt.t.
First of all, here we accept connections and prompt the player to enter START to begin looking for a partner:
let waiting_players =
  Lwt_mvar.create_empty ();;

let rec make_ready player =
  player >>= fun cli ->
  send_to_client player "Type in START";
  let answer = read_from_client player in
  answer >>= fun str ->
  match str with
  | "START" ->
    let ready_client = cli in
    send_to_client player "Waiting for opponent";
    Lwt_mvar.put waiting_players ready_client;
  | _ ->
    send_to_client player "Unknown command. try again";
    make_ready player

let handle_income () =
  let in_conection = Lwt_unix.accept sock in
  in_conection >>= fun (cli, addr) ->
  let player = Lwt.return cli in
  send_to_client player "Welcome to the server. To start game type in START and press Enter";
  make_ready player;;
val make_ready : Lwt_unix.file_descr Lwt.t -> unit Lwt.t = <fun>
val handle_income : unit -> unit Lwt.t = <fun>
This seems to be alright, but when I call Lwt_mvar.take waiting_players it always returns some value even though nothing has been put there before, and the thread is not blocked. This behaviour, strange to me, is easier to see in a small example:
# let bucket = Lwt_mvar.create_empty ();;
val bucket : '_a Lwt_mvar.t = <abstr>
# let apple = Lwt_mvar.take bucket;;
val apple : '_a Lwt.t = <abstr>
# Lwt.state apple;;
- : '_a Lwt.state = Sleep
If "blocking" means returning exactly such sleeping objects, please, tell. And how to make a loop, returning only "ready" objects the best way? Is that a good idea to use Lwt.is_sleeping? Thanks a lot.

There are a few issues with your approach and some bugs in your code. I will first highlight the latter, and then propose and justify another approach.
Issues
Issue 1
It looks like your send_to_client returns a value of type unit Lwt.t. If you just ignore it by terminating the expression with ;, that means "don't wait until the message is sent, just move forward". Usually this is not what you want, so you need to wait until the unit Lwt.t thread has finished by binding on its return value.
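For example (a minimal sketch of a fragment of make_ready, assuming send_to_client and read_from_client have the types used in the question), the send should be chained with >>= instead of being discarded with ;:
(* Wait until the message is actually sent before reading the answer. *)
send_to_client player "Type in START" >>= fun () ->
read_from_client player >>= fun str ->
(* ... continue matching on str as before ... *)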
Issue 2
Usually in Lwt programming, functions accept values of immediate types (i.e., ones that are not wrapped in Lwt.t) and return deferred threads (i.e., values of type 'some Lwt.t). This is only a convention, of course; nobody prevents you from doing something different. But try to stick with the "immediate inputs, delayed output" pattern.
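Concretely (a sketch of how the signature from the question could change under this convention), make_ready would take the descriptor itself rather than a promise of it:
(* Preferred shape: immediate input, deferred output. *)
val make_ready : Lwt_unix.file_descr -> unit Lwt.t
The caller then binds on the accepted connection and passes the plain descriptor in, e.g. Lwt_unix.accept sock >>= fun (fd, _) -> make_ready fd, instead of wrapping the descriptor back into Lwt.return.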
Issue 3
Use tools. Use ocp-indent to indent your code; it will help readability. Also, it looks like you are not using the compiler and are playing in a toplevel. That is usually a bad idea, especially for systems programming. Use ocamlbuild to compile and run your code with:
ocamlbuild game.native --
The Game
Programming in OCaml has a different philosophy from programming in Python or other languages with a weak type system. In OCaml one should start by designing types and signatures, and only later fill in the implementations. Of course this is an idealization, and in real life it will be a process of iterative refinement, but the general approach is still the same. Start with types.
So, first, let's define a player type. It is trivial, but has room for improvement.
open Lwt

type player = {
  fd : Lwt_unix.file_descr
}
Next, let's use the type system to help us understand our problem of game initialization. You need to get two players ready and willing to play your game. That means you have three consecutive states:
Nobody is ready
One player is ready
Both (player1, player2) are ready
Actually, as soon as you reach the third state you're ready to play, so you don't need that state at all, and we end up with only two choices:
type stage =
  | Nobody
  | One of player
We could use the player option type here, as it is isomorphic to our choice, but let's be more explicit and use our own stage type. It will keep our model more constrained and better fitted to the domain.
The next step is to define the protocol of interaction between client and server. We will use the name request for a message from server to client, and response for messages moving in the opposite direction.
type request =
  | Init
  | Wait
  | Unknown_command
  | Bye

type response =
  | Start
  | Quit
This protocol is abstract, in the sense that it doesn't contain any concrete representation – based on it you can build different representations, e.g., a GUI interface, or textual chats supporting different languages.
But let's mock up the simplest concrete implementation, one that uses textual commands:
let response_of_string msg =
  match String.trim (String.uppercase msg) with
  | "START" -> Some Start
  | "QUIT" -> Some Quit
  | _ -> None
And in the opposite direction (note: it is better to render these messages on the client side and send values of types request and response over the wire; it will keep your traffic profile low and, more importantly, will allow you to attach different clients transparently).
let string_of_request = function
  | Init -> "Welcome to a game server.
Please, type
- `start' to start game;
- `quit' to finish session"
  | Wait -> "Please wait for another player to join the game"
  | Unknown_command -> "Don't understand this"
  | Bye -> "Thank you, see you later!"
The next step is to define the interface for the Io module. This module is responsible for the interaction between client and server. Note how the abstraction hides all the details, like the use of sockets or strings.
module Io : sig
  val send : player -> request -> unit Lwt.t
  val recv : player -> response option Lwt.t
end = struct
  let send dst msg = return_unit
  let recv dst = return None
end
Now, we can define our Game module. At first it will have two different automata:
init to initialize a game between two players;
play to play the game, once we have two victims.
Let's say this explicitly in OCaml:
module Game : sig
  (** [play a b] play a game between player [a] and player [b] *)
  val play : player -> player -> unit Lwt.t

  (** [init next_player] waits until two players are ready to play.
      TODO: Describe a grammar that is recognized by this automaton. *)
  val init : (unit -> player Lwt.t) -> (player * player) Lwt.t
end = struct
  let play a b = return_unit

  let init next_player =
    let rec process stage player =
      Io.send player Init >>= fun () ->
      Io.recv player >>= function
      | None ->
        Io.send player Unknown_command >>= fun () ->
        process stage player
      | Some Quit ->
        Io.send player Bye >>= fun () ->
        next_player () >>= process stage
      | Some Start -> match stage with
        | One a -> return (a,player)
        | Nobody ->
          Io.send player Wait >>= fun () ->
          next_player () >>= process (One player) in
    next_player () >>= process Nobody
end
Now we can write our main function, which glues everything together:
let main server_sock =
  let next_player () =
    Lwt_unix.accept server_sock >>= fun (fd,_) ->
    return {fd} in
  Game.init next_player >>= fun (a,b) ->
  Game.play a b
If you continue with this approach, you may later notice that the different finite state machines of your game define different languages (i.e., protocols). So instead of having one protocol, you may end up using a specific protocol for each FSM, e.g., init_protocol, play_protocol, etc. But you may also notice that these protocols have some intersections. To handle this, you can use subtyping and polymorphic variants.
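For instance (a hypothetical sketch, not part of the game above), polymorphic variants let a shared command such as `Quit appear in several per-FSM protocols while each FSM still only accepts the messages it understands:
(* Hypothetical per-FSM response types; `Quit is shared. *)
type init_response = [ `Start | `Quit ]
type play_response = [ `Move of int * int | `Quit ]

(* A handler written against the shared subset works for both FSMs. *)
let string_of_common = function
  | `Quit -> "QUIT"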

Found a way.
let rec form_pairs () =
  let player1 = Lwt_mvar.take waiting_players in
  player1 >>= fun descriptor1 ->
  let player2 = Lwt_mvar.take waiting_players in
  player2 >>= fun descriptor2 ->
  Lwt_io.printl "Pairs formed";
  Lwt.return (descriptor1, descriptor2);
  form_pairs ();;
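Note that this version still discards the printl and return threads with ; and then loops forever (see Issue 1 in the answer above). A variant that binds every step and actually returns the pair might look like this (a sketch under the same assumptions):
let form_pairs () =
  Lwt_mvar.take waiting_players >>= fun descriptor1 ->
  Lwt_mvar.take waiting_players >>= fun descriptor2 ->
  Lwt_io.printl "Pairs formed" >>= fun () ->
  Lwt.return (descriptor1, descriptor2)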

Related

How to send multiple TCP messages and continue when one has succeeded

I'm writing some networking code currently and I need to send out a large number of messages and then wait for a single response.
Given a function that returns the input and output channels for a socket, I have:
let resps = List.map uris ~f:(fun uri ->
  let%lwt (ic,oc) = connect uri in
  let%lwt () = Lwt_io.write_value oc msg in
  Lwt_io.read_value ic
) in
Lwt.pick resps
My understanding is that pick should cancel any ongoing requests once one promise in resps is fulfilled. The issue is that if any of those connections fails or is refused, a Unix.ECONNREFUSED exception is raised.
My question is what is the correct semantics to force Lwt.pick to ignore the exceptions?
Options I've thought of so far are to catch the exception explicitly in the requests:
let resps = List.map uris ~f:(fun uri ->
  try
    let%lwt (ic,oc) = connect uri in
    let%lwt () = Lwt_io.write_value oc msg in
    Lwt_io.read_value ic
  with Unix_error (e,_,_) -> ...
) in
Lwt.pick resps
But I'm not sure under what conditions Lwt.pick will consider those promises rejected.
Update: I'm now handling the errors with cancellable, unfulfillable promises:
fst @@ Lwt.task ()
This feels hacky but seems to work so far.
Handling the exception explicitly is right. Lwt promises are rejected when you either reject them explicitly (using Lwt.fail), or when an exception is caught by Lwt in a callback that should have returned a promise (like the one you would pass to Lwt.bind).
However, for handling exceptions in code that calls into Lwt, you have to use try%lwt instead of the plain try.
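For instance (a sketch based on the question's code, converting the error into a cancellable promise that is never fulfilled, so Lwt.pick simply never picks that uri):
let resps = List.map uris ~f:(fun uri ->
  try%lwt
    let%lwt (ic, oc) = connect uri in
    let%lwt () = Lwt_io.write_value oc msg in
    Lwt_io.read_value ic
  with Unix_error (_, _, _) ->
    (* Same trick as in the question's update: a promise that never resolves. *)
    fst (Lwt.task ())
) in
Lwt.pick resps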

Unit testing an agent

I am trying to test a MailboxProcessor in F#. I want to test that the function f I am giving is actually executed when posting a message.
The original code is using Xunit, but I made an fsx of it that I can execute using fsharpi.
So far I am doing this:
open System
open FSharp
open System.Threading
open System.Threading.Tasks

module MyModule =

    type Agent<'a> = MailboxProcessor<'a>

    let waitingFor timeOut (v:'a) =
        let cts = new CancellationTokenSource(timeOut |> int)
        let tcs = new TaskCompletionSource<'a>()
        cts.Token.Register(fun (_) -> tcs.SetCanceled()) |> ignore
        tcs, Async.AwaitTask tcs.Task

    type MyProcessor<'a>(f:'a->unit) =
        let agent = Agent<'a>.Start(fun inbox ->
            let rec loop() = async {
                let! msg = inbox.Receive()
                // something more complex would be done here in real code
                f msg
                return! loop()
            }
            loop()
        )
        member this.Post(msg:'a) =
            agent.Post msg

open MyModule

let myTest =
    async {
        let (tcs, waitingFor) = waitingFor 5000 0
        let doThatWhenMessagepostedWithinAgent msg =
            tcs.SetResult(msg)
        let p = new MyProcessor<int>(doThatWhenMessagepostedWithinAgent)
        p.Post 3
        let! result = waitingFor
        return result
    }

myTest
|> Async.RunSynchronously
|> System.Console.WriteLine
// displays 3 as expected
This code works, but it does not feel right to me.
1) Is the usage of TaskCompletionSource normal in F#, or is there some dedicated mechanism that lets me wait for a completion?
2) I am using a second argument in the waitingFor function in order to constrain its type. I know I could use a type MyType<'a>() to do it; is there another option? I would rather not introduce a new MyType, which I find cumbersome.
3) Is there any other option for testing my agent than doing this? The only post I found so far on the subject is this blog post from 2009: http://www.markhneedham.com/blog/2009/05/30/f-testing-asynchronous-calls-to-mailboxprocessor/
This is a tough one; I've been trying to tackle it for some time as well. This is what I found so far. It's too long for a comment, but I'd hesitate to call it a full answer either...
From simplest to most complex; it really depends on how thoroughly you want to test and how complex the agent logic is.
Your solution may be fine
What you have is fine for small agents whose only role is to serialize access to an async resource, with little or no internal state handling. If you provide f as you do in your example, you can be pretty sure it will be called within a relatively short timeout of a few hundred milliseconds. Sure, it seems clunky and it doubles the size of the code with all the wrappers and helpers, but those can be reused if you test more agents and/or more scenarios, so the cost gets amortized fairly quickly.
The problem I see with this is that it's not very useful if you also want to verify more than that the function was called - for example the internal agent state after calling it.
One note that's applicable to other parts of the response as well: I usually start agents with a cancellation token; it makes both the production and testing life cycle easier.
Use Agent reply channels
Add AsyncReplyChannel<'reply> to the message type and post messages using PostAndAsyncReply instead of the Post method on the Agent. It will change your agent to something like this:
type MyMessage<'a, 'b> = 'a * AsyncReplyChannel<'b>

type MyProcessor<'a, 'b>(f:'a->'b) =
    // Using the MyMessage type here to simplify the signature
    let agent = Agent<MyMessage<'a, 'b>>.Start(fun inbox ->
        let rec loop() = async {
            let! msg, replyChannel = inbox.Receive()
            // f is an ordinary function here ('a -> 'b), so a plain let is enough
            let result = f msg
            // Sending the result back to the original poster
            replyChannel.Reply result
            return! loop()
        }
        loop()
    )
    // Notice the type change, may be handled differently, depends on you
    member this.Post(msg:'a): Async<'b> =
        agent.PostAndAsyncReply(fun channel -> msg, channel)
This may seem like an artificial requirement for the agent "interface", but it's handy for simulating a method call and it's trivial to test: await the PostAndAsyncReply (with a timeout) and you can get rid of most of the test helper code.
Since you have a separate call to the provided function and replyChannel.Reply, the response can also reflect the agent state, not just the function result.
Black-box model-based testing
This is what I'll talk about in the most detail, as I think it's the most general.
In case the agent encapsulates more complex behavior, I found it handy to skip testing individual messages and instead use model-based tests to verify whole sequences of operations against a model of the expected external behavior. I'm using the FsCheck.Experimental API for this.
In your case this would be doable, but it wouldn't make much sense since there is no internal state to model. To give you an example of what it looks like in my particular case, consider an agent which maintains client WebSocket connections for pushing messages to the clients. I can't share the whole code, but the interface looks like this:
/// For simplicity, this adapts to the socket.Send method and makes it easy to mock
type MessageConsumer = ArraySegment<byte> -> Async<bool>

type Message =
    /// Send payload to client and expect a result of the operation
    | Send of ClientInfo * ArraySegment<byte> * AsyncReplyChannel<Result>
    /// Client connects, remember it for future Send operations
    | Subscribe of ClientInfo * MessageConsumer
    /// Client disconnects
    | Unsubscribe of ClientInfo
Internally the agent maintains a Map<ClientInfo, MessageConsumer>.
Now for testing this, I can model the external behavior in terms of an informal specification like: "sending to a subscribed client may succeed or fail depending on the result of calling the MessageConsumer function" and "sending to an unsubscribed client shouldn't invoke any MessageConsumer". So I can define types, for example like these, to model the agent.
type ConsumerType =
    | SucceedingConsumer
    | FailingConsumer
    | ExceptionThrowingConsumer

type SubscriptionState =
    | Subscribed of ConsumerType
    | Unsubscribed

type AgentModel = Map<ClientInfo, SubscriptionState>
And then use FsCheck.Experimental to define the operations of adding and removing clients with differently successful consumers and trying to send data to them. FsCheck then generates random sequences of operations and verifies the agent implementation against the model after each step.
This does require some additional "test only" code and has a significant mental overhead at the beginning, but it lets you test relatively complex stateful logic. What I particularly like about this is that it helps me test the whole contract, not just individual functions/methods/messages, in the same way that property-based/generative testing helps test with more than just a single value.
Use Actors
I haven't gone that far yet, but an alternative I've also heard of is using, for example, Akka.NET for full-fledged actor model support, and using its testing facilities, which let you run agents in special test contexts, verify expected messages and so on. As I said, I don't have first-hand experience, but it seems like a viable option for more complex stateful logic (even on a single machine, not in a distributed multi-node actor system).

How to fully utilise lwt_pool?

I am struggling with the check and validate functions for Lwt_pool.create and have some questions.
val create :
  int ->
  ?check:('a -> (bool -> unit) -> unit) ->
  ?validate:('a -> bool Lwt.t) ->
  (unit -> 'a Lwt.t) -> 'a t
First of all, let me describe the background of my usage.
I wish to use Lwt_pool to manage a pool of database connections. The database is MongoDB and the driver was made by myself (Mongo.ml). The driver is actually simple: it is just a TCP (Unix.file_descr) connection to the MongoDB server, over which it sends requests and receives responses.
`create n ?check ?validate f` creates a new pool with at most n members. f is the function to use to create a new pool member.
An element of the pool is validated by the optional validate function before its Lwt_pool.use. Invalid elements are re-created.
The optional function check is called after a use of an element failed. It must call its argument exactly once with true if the pool member is still valid and false otherwise.
The above is the documentation for create.
So here are my questions:
From the doc, I understand validate is to validate the connection before using it.
So my first question is: how can I check the availability of a Unix.file_descr? I only know that in order to check it, I have to send something through it, right? But if I send something through my connection just to check it, then I guess it would be ugly, and since I am going to send something out via Lwt_pool.use anyway, why bother doing similar things before use?
My second question is about check.
So check will be used after use. From the doc, I really can't understand it. check is a function which takes a my_db_connection (in my case) and a function of type bool -> unit as parameters. Who provides that bool -> unit function? Does Lwt_pool itself have such a function, or should I provide it? And to do what, then?
Thanks.
I don't know anything about Lwt, but one thing you can do is wait to use a validate function until you see why you would need it. That's why it's an optional parameter (I suspect).
One thing you can do with a Unix file descriptor is to figure out whether it's attached to a network socket:
let is_a_socket fd = (Unix.fstat fd).Unix.st_kind = Unix.S_SOCK
Maybe this will be useful, though someone with Lwt experience can probably give a better answer.
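To illustrate how the two hooks might be wired up (a sketch only, based on the documentation quoted above; Mongo.connect, Mongo.get_fd and Mongo.find are hypothetical stand-ins for your driver's API):
(* The Mongo.* functions below are hypothetical placeholders for your driver. *)
let pool =
  Lwt_pool.create 10
    (* validate runs before each use; reuse the socket check above. *)
    ~validate:(fun conn -> Lwt.return (is_a_socket (Mongo.get_fd conn)))
    (* check runs after a use raised an exception; call its argument exactly
       once: false means "drop this member and create a fresh one". *)
    ~check:(fun _conn is_ok -> is_ok false)
    (fun () -> Mongo.connect "127.0.0.1" 27017)

let find_something query =
  Lwt_pool.use pool (fun conn -> Mongo.find conn query)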

Ocaml, Module Graphics - Queuing keypresses

I am writing a simple game in OCaml, using its Graphics module for drawing and interaction. I came across the problem that Graphics.read_key () queues all presses for later use, so when I hold a key for a while, many "presses" are stored in memory and the action is still performed after I release the key.
Is there any way to delete entries from this queue, or, even better, not queue them at all?
This is probably not the most beautiful solution, but you can use key_pressed.
This function will return true if a keypress is available.
So once you have read a keypress with read_key, you can flush the queue by calling read_key until key_pressed is false, ignoring the results.
(* flush_kp : unit -> unit *)
let flush_kp () =
  while key_pressed () do
    ignore (read_key ())
  done ;;
Hope this helps.

Is the MailboxProcessor type a replacement for locks?

I have been slowly examining all of the features that F# brings to the table. One that has particularly piqued my interest is the MailboxProcessor.
The equivalent of this in C# would most likely use locks. Can we consider the MailboxProcessor as a replacement for locks?
In the following example, am I doing anything particularly naive, or can you see anything that might be improved?
module Tcp =
    open System
    open System.Collections.Generic
    open System.Net
    open System.Net.Sockets
    open System.Threading

    type SocketAsyncMessage =
        | Get of AsyncReplyChannel<SocketAsyncEventArgs>
        | Put of SocketAsyncEventArgs
        | Dispose of AsyncReplyChannel<MailboxProcessor<SocketAsyncMessage>>

    type SocketAsyncEventArgsPool(size:int) =
        let agent =
            lazy(MailboxProcessor.Start(
                    (fun inbox ->
                        let references = lazy(new List<SocketAsyncEventArgs>(size))
                        let idleReferences = lazy(new Queue<SocketAsyncEventArgs>(size))
                        let rec loop () =
                            async {
                                let! message = inbox.Receive()
                                match message with
                                | Get channel ->
                                    if idleReferences.Value.Count > 0 then
                                        channel.Reply(idleReferences.Value.Dequeue())
                                    else
                                        let args = new SocketAsyncEventArgs()
                                        references.Value.Add args
                                        channel.Reply args
                                    return! loop()
                                | Put args ->
                                    if args = null then
                                        nullArg "args"
                                    elif references.Value.Count < size then
                                        idleReferences.Value.Enqueue args
                                    else
                                        if not(references.Value.Remove args) then
                                            invalidOp "Reference not found."
                                        args.Dispose()
                                    return! loop()
                                | Dispose channel ->
                                    if references.IsValueCreated then
                                        references.Value
                                        |> Seq.iter(fun args -> args.Dispose())
                                    channel.Reply inbox
                            }
                        loop())))

        /// Returns a SocketAsyncEventArgs instance from the pool.
        member this.Get () =
            agent.Value.PostAndReply(fun channel -> Get channel)

        /// Returns the SocketAsyncEventArgs instance to the pool.
        member this.Put args =
            agent.Value.Post(Put args)

        /// Releases all resources used by the SocketAsyncEventArgsPool.
        member this.Dispose () =
            (this:>IDisposable).Dispose()

        interface IDisposable with
            member this.Dispose() =
                if agent.IsValueCreated then
                    (agent.Value.PostAndReply(fun channel -> Dispose channel):>IDisposable).Dispose()
Mailboxes (and similar constructs) are used in programming models that don't use locks, as they're inherently built around asynchronous processing. (Lack of shared mutable state is another requirement of this model).
The Actor model can be thought of as a series of single-threaded mini-applications that communicate by sending and receiving data from each other. Each mini-application will only be run by a single thread at a time. This, combined with the lack of shared state, renders locks unnecessary.
Procedural models (and most OO code is, at its heart, procedural), use thread-level concurrency, and synchronous calls to other objects. The Actor model flips this around - calls (messages) between objects are asynchronous, but each object is completely synchronous.
I don't know enough F# to really analyze your code, frankly. It does look like you're trying to stick a synchronous-looking shell around your mailbox, and I wonder if that's really the best thing to do (vs. embracing the mailbox model fully). In your implementation, it does appear that you're using it as a replacement for a lock.
To the first part of your question:
The MailboxProcessor class is a message queue running on its own thread. You may send a message to the MailboxProcessor from any thread, either asynchronously or synchronously.
Such a model lets threads communicate through message passing instead of using locks/mutexes/IPC mechanisms.