I have a function
convertMsg : Msg1 -> List Msg2
where Msg1 and Msg2 are certain message types. And I would like to turn this into a function:
convertCmd : Cmd Msg1 -> Cmd Msg2
which would, for every message in the batch, replace it with some messages (possibly none, possibly more than one).
As a Haskell programmer at heart I immediately reach for a monadic bind ((>>=) in Haskell and andThen in the Elm parlance), a function with the type:
bind : (a -> Cmd b) -> Cmd a -> Cmd b
I can easily change my convertMsg to be the following:
convertMsg : Msg1 -> Cmd Msg2
At which point it would be just perfect for the bind.
But looking in Platform.Cmd, there isn't such a function I can find. There's a map which is similar, but convertMsg can't really be convertMsg : Msg1 -> Msg2 since it doesn't always give back exactly one message.
Is there a way to achieve this? Is there some limitation to the Cmd type that would prevent this sort of thing?
What you're trying to do to messages, how you might, and whether it's a good plan
I promise I'll try to answer what I think you're trying to do, but first I think there's a more important thing...
You're perhaps assuming that Cmd is analogous to IO from Haskell, but Cmd is asynchronous and isn't designed to chain actions. Your update is what glues consequences to outputs:
update : Msg -> Model -> (Model,Cmd Msg)
At the end of your update, you can issue a Cmd Msg to ask elm to do something externally, usually passing it a constructor with which it can wrap its output. This output comes back to your update function.
What you don't do is chain Cmds together as you would in a monad. There's no bind for Cmd, or to put it another way, the only bind for Cmd is your update function!
Now I suppose that if you wanted to catch a MyComplexMsg : Msg and turn it into [SimpleMsg1,SimpleMsg2], you could pattern match for it in your update function, leave the model unchanged and issue a new Cmd Msg, but what command would you be running the second time?
You could certainly take a pure Msg -> List Msg function and use Cmd.map to apply it, or apply it manually in the pattern match at the beginning of update, like
update msg model = case msg of
    MyComplexMsg -> myHandler [SimpleMsg1, SimpleMsg2]
    ...
or even go full state monad style with
update msg model0 = case msg of
    MyComplexMsg ->
        let
            (model1, cmd1) = update SimpleMsg1 model0
            (model2, cmd2) = update SimpleMsg2 model1
        in
        (model2, Cmd.batch [ cmd1, cmd2 ])
to try to emulate monadic bind, but I don't know why you might ever want this, and a lot of advice in the elm literature and community is that if you're calling update from update you're probably doing it wrong. Make a separate single-purpose helper function for that stuff instead of re-running your entire program logic twice!
Let go of your need to have a monad
I suspect that what's going wrong is that you're not letting go of a monadic control flow mentality. update is where it's at. update is where you make things happen. User input and asynchronous messages are your drivers, not sequencing. Cmd is just for communicating externally. You don't plumb the results back in, the elm architecture does that for you. Just handle the result of your Cmd (which will arrive as a message) as a branch in your update and it'll all progress nicely, and if the user presses some button of their own choice without you making it happen, so be it. You can handle that too.
I worry that you're trying to write a monad transformer stack in elm, which is a bit like trying to write an object oriented programming library in haskell. Haskell doesn't do object oriented programming, and the sooner folks drop the OO thinking and let go of their need to bundle data and functions together, the sooner they're writing good haskell code. Elm doesn't do typeclasses, it does model/view/update, and does it extraordinarily well. Let go of your need to find and use a monad to control the flow of your program, and instead respond to what messages you're given. Make a Msg for something you want to happen, provide a way to trigger it appropriately in your view and then handle it in your update.
When should one message become three messages?
If one of your messages is really three messages, why isn't it three messages already? If it's just that, in response to that particular message, you need to do three things to your model and issue five commands, why not have one message and get update to do those three things to your model in pure code and issue the five commands in a batch?
If you need to log the successful login, then get the user's photo, then query the database for their recent activity, then display it all, then I disagree that those steps need to happen immediately one after another: they're all asynchronous. You can issue commands to do each of those things in a batch, and when the responses come back you will need to deal with each separately: update your model with the image when it arrives, and with the list of recent activity when it arrives. Once your model is in a state where the picture and the recent activity are both there you can change the view, but why not show each as soon as it's there?
Using monads sometimes trains us to think sequentially when we're doing effects programming, even when we needn't. But now, finally, I'll address what to do when there is a compelling need to sequence commands.
Genuinely necessary sequential commands
Perhaps there really is something sequential that you need. Maybe you have to query some data store for something before you send some request elsewhere. You still don't use a bind, you just use your update:
update msg model = case msg of
    StartsMultiStageProcess userID ->
        ( { model | multiStageProcessStatus = RequestedData }
        , getDataPart1 userID PartOneReceived )
    PartOneReceived userData ->
        ( { model | multiStageProcessStatus = Fired }
        , fireRockets userData.nemesis.location RocketResult )
    RocketResult r ->
        if model.multiStageProcessStatus == Fired then
            case r of
                Ok UtterlyDestroyed ->
                    ....
                Ok DamagedBeyondUse ->
                    ....
                Err disappointment ->
                    ....
    ...
A handy point is that if your model doesn't have the required data, it automatically won't show the missing data in the view (sum types for the win), and you can store whatever state your multistage process is in, in the model.
You might prefer to put all of those messages into a new type, so that they get their own nested case in the handler and are never mixed up with the other messages, like
MSPmsg msg ->
    case msg of
        Started userId ->
            ...
        GotPartOne userData ->
but much more likely you'd use a helper function, like MSPmsg msg -> updateMultiStageProcess.
Concluding advice
Maybe there's some great use case for delving into messages and commands and editing them that you haven't made explicit, but Cmd is opaque and all you can do is issue them and handle the resulting messages, so I'm sceptical but definitely interested.
Also in giving you update to write, it's almost like they've given you the app-specific bind to write (but it's not a functor and you absolutely do look at the data), so they've given you the keys to the Tesla. It takes a bit of getting used to but you're really going to like what happens at the traffic lights. Don't attempt to dismantle the door hinges until you've learned to drive it.
Edit: Your specific use case: inter-page communication
It turns out in chat that you're trying to get messages from one page to be usable in other pages or the overall update - sometimes one page needs to tell the app to change page and tell the new page to start an animation. I might have skipped all the advice above if I'd known that initially, but I think it's good advice for anyone coming from Haskell and I'm leaving it in!
Multiple messages
I still think it's important to accept that sometimes a single message needs to cover multiple actions, and you sort that out in your update function rather than try to create multiple messages in response to a single user action.
Lots of elm folks give the advice that your messages, rather than describing something to do (like AddProduct), should describe something that happened in the past, partly because that's how messages come to you in your update, so your mental model of what the elm runtime is doing stays accurate, and partly because you're then less likely to want to make two messages and do weird message translations when you ought to make one message.
Do multiple things in your ClickedViewOffers branch of update rather than try to make both a SwitchToOffersPage and an AnimatePickOfTheDay message.
I'd like to point out that your idea to convert your messages and filter them somehow within the message type is doing it in the wrong place. Filtering so that your Home page doesn't get all the messages for your Login page is something you have to do in update anyway - don't try to filter them while you're making or passing the messages. update is where it's at for deciding what to do in response to user input. Messages are for describing user input.
OK, but how do you get messages to cross the barriers between pages?!
There are a few ways to achieve this, and it might be worth looking into different ways of making a Single Page Application (SPA) in Elm. I found this article by Rogério Chaves on Medium quite enlightening on the topic of the various ways of organising messages from child page to parent app. He's done the TodoMVC app all the different ways in this repo. A Stack Overflow post is better if it inlines the ideas, so here we go:
Common Msg type across all pages
This can work by having a separate module for your message types which all your modules import. Messages look like ProductsMsg (UserCreatedNewProduct productRecord), as they might well do anyway, but because all the message types are global you can call another page's methods.
Individual pages also return an OutMsg from their update function
Use better names than these (eg Login.Msg rather than LoginMsg), but...
loginPageUpdate : LoginMsg -> LoginModel -> (LoginModel, Cmd LoginMsg, OutMsgFromLogin)

update : GlobalMsg -> GlobalModel -> (GlobalModel, Cmd GlobalMsg)
update msg model = case msg of
    LoginMsg loginMsg ->
        let
            (newLoginModel, cmd, outMsgFromLogin) = loginPageUpdate loginMsg model.loginModel
        in
        ...
(You'd need a NoOp : OutMsgFromLogin or to use Maybe OutMsgFromLogin there. I'm not a fan of NoOp. It's terribly tempting to use it for unimplemented features, and it's the king of all divorced-from-user-intentions messages: it doesn't explain why you ought to do nothing or how you came to write something that generates a purposeless message. I think it's a code smell, a sign that there's a better way of writing something.)
Have a record of messages that you later use to translate your page's Msg values into global messages.
(Again, use better domain-specific names, I'm trying to convey usage in my names.)
type alias LoginMessagesRecord globalMsg =
    { internalLoginMsgTag : LoginMsg -> globalMsg
    , loginSucceeded : User -> globalMsg
    , loginFailed : globalMsg
    , newUserSuccessfullyRegistered : User -> globalMsg
    }
and in your main, you would specify these:
loginMessages : LoginMessagesRecord GlobalMsg
loginMessages =
    { internalLoginMsgTag = LocalLoginMsg
    , loginSucceeded = LoginSucceeded
    , loginFailed = LoginFailed
    , newUserSuccessfullyRegistered = NewUserSuccessfullyRegistered
    }
You can either parameterise functions in your Login code with those so they all consume a LoginMessagesRecord and produce a msg, or you can use a genuinely local message type and write a translation helper in your Login module:
type HereOrThere here there = Here here | There there
type LocalLoginMessage = EditedUserName String | EditedPassword String | ....
type MessageForElsewhere = LoggedIn User | DidNotLogIn | MadeNewAccount User
type alias LoginMsg = HereOrThere LocalLoginMessage MessageForElsewhere
loginMsgTranslator : LoginMessagesRecord msg -> LoginMsg -> msg
loginMsgTranslator
    { internalLoginMsgTag
    , loginSucceeded
    , loginFailed
    , newUserSuccessfullyRegistered
    }
    loginMsg =
    case loginMsg of
        Here msg ->
            internalLoginMsgTag msg

        There msg ->
            case msg of
                LoggedIn user -> loginSucceeded user
                DidNotLogIn -> loginFailed
                MadeNewAccount user -> newUserSuccessfullyRegistered user
and then you can use Html.map loginMsgTranslator loginView in your global view, or Element.map loginMsgTranslator loginView if you're using the utterly brilliant html&css-free way to write elm apps, elm-ui.
Summary / takeaway
Have a single message describing a user intention and use update to handle all the consequences.
Don't edit the messages; respond appropriately in the update.
The user is in control. The runtime is in control. You're not in control. Don't generate messages yourself, just respond to them. If you're generating messages rather than the user or the runtime, you're using elm in a weird way that'll be hard.
Your program logic largely resides in update; it doesn't reside in your messages. Don't try to make things happen in a message; just describe what the user or the system did in the message.
Use case statements and descriptive tags in message types to help choose which update helper function to run. It can often help to use union types to describe how local a message is. Sometimes you use a local updating function, sometimes a global one.
You might want to also read this reddit thread about scaling elm apps that Rogério Chaves references.
I have the following source queue definition.
lazy val (processMessageSource, processMessageQueueFuture) =
  peekMatValue(
    Source
      .queue[(ProcessMessageInputData, Promise[ProcessMessageOutputData])](5, OverflowStrategy.dropNew))
def peekMatValue[T, M](src: Source[T, M]): (Source[T, M], Future[M]) = {
  val p = Promise[M]()
  val s = src.mapMaterializedValue { m =>
    p.trySuccess(m)
    m
  }
  (s, p.future)
}
The ProcessMessageInputData class is essentially an artifact that is created when a caller calls a web server endpoint, which is hooked up to this stream (i.e. the service endpoint's business logic puts messages into this queue). The Promise[ProcessMessageOutputData] is completed downstream, in the sink of the application, and the web server then has an onComplete callback on this future to return the response.
There are also other sources of ingress into this stream.
Now the buffer may get backed up, since the other source may overload the system, thereby triggering stream back-pressure. The existing code just drops the new message, but I still want the process-message output promise to be completed with an exception saying something like "Throttled".
Is there a mechanism to write a custom overflow strategy, or to do post-processing on the overflowed element, that allows me to do this?
According to https://github.com/akka/akka/blob/master/akkastream/src/main/scala/akka/stream/impl/QueueSource.scala#L83
dropNew would work just fine. On the client's end it would look like this:
processMessageQueue.offer(in, pr).foreach { res =>
  res match {
    case Enqueued => // Code to handle the case when the element was successfully enqueued.
    case Dropped  => // Code to handle messages that are dropped because the buffer was overflowing.
  }
}
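Building on that, here is a sketch of how the client side could also fail the promise when the element is dropped. The ThrottledException name is made up for illustration, processMessageQueue stands for the materialised queue you get from processMessageQueueFuture, and an implicit ExecutionContext is assumed to be in scope:
import akka.stream.QueueOfferResult.{Dropped, Enqueued, QueueClosed, Failure => OfferFailed}
import scala.concurrent.Promise
import scala.util.{Failure, Success}
import scala.util.control.NoStackTrace

// Hypothetical exception used to tell the caller it was throttled.
case class ThrottledException(message: String) extends RuntimeException(message) with NoStackTrace

def enqueue(in: ProcessMessageInputData, pr: Promise[ProcessMessageOutputData]): Unit =
  processMessageQueue.offer((in, pr)).onComplete {
    case Success(Enqueued)       => () // the sink will complete `pr` downstream
    case Success(Dropped)        => pr.tryFailure(ThrottledException("Throttled"))
    case Success(QueueClosed)    => pr.tryFailure(new IllegalStateException("Queue was completed"))
    case Success(OfferFailed(e)) => pr.tryFailure(e)
    case Failure(e)              => pr.tryFailure(e) // the offer call itself failed
  }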
I'm coming from ReactiveX, where we have the operator defer, which creates an Observable whose emission value is only produced once we have a subscriber.
Here in Akka Streams I was wondering if something like that exists:
@Test def defer(): Unit = {
  var range = 0 to 10
  val graphs = Source(range)
    .to(Sink.foreach(println))
  range = 10 to 20
  graphs.run()
  Thread.sleep(2000)
}
With this code, even though we change the value of range before we execute run(), the change has no effect: the blueprint has already been created, so the stream emits 0 to 10.
Is there anything like Observable.defer in Akka Streams?
SOLUTION:
I found a solution: use Source.lazily, to which we provide a function that is only executed once we run the stream.
I will keep the question here just in case there's a better way or someone else has the same question.
@Test def defer(): Unit = {
  var range = 0 to 10
  val graphs = Source.lazily(() => Source(range))
    .to(Sink.foreach(println))
  range = 10 to 20
  graphs.run()
  Thread.sleep(2000)
}
Regards.
The simplest way would probably be Source.fromIterator(() => List(1).iterator) or something similar. In the Akka Streams API we opted to keep a minimal set of operators, so sometimes you may get into situations where the same thing is achievable in a one-liner but doesn't have a direct, named counterpart, as in defer's case here. If you think it's a common enough thing, please let us know on github.com/akka/akka and we could consider adding it as an API.
Note that there's also fromFuture and other ones, which while not directly related may be useful depending on your actual use-case (esp. when combined with a Promise etc).
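For reference, a minimal sketch of the fromIterator suggestion applied to the test above; it assumes Akka 2.6+, where an implicit ActorSystem in scope is enough to materialise the stream:
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}

object DeferDemo extends App {
  implicit val system: ActorSystem = ActorSystem("defer-demo")

  var range = 0 to 10

  // The iterator factory is only invoked at materialisation time,
  // so the current value of `range` is read when run() happens,
  // much like Observable.defer.
  val graph = Source
    .fromIterator(() => range.iterator)
    .to(Sink.foreach(println))

  range = 10 to 20
  graph.run() // prints 10 to 20

  Thread.sleep(2000)
  system.terminate()
}
Source.lazily defers building the whole inner Source, while fromIterator only defers producing the elements; for a case like this either approach is enough.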
I'm developing a REST server in Play with Scala, that at some point needs to request data at one or more other web services. Based on the responses from these services the server must compose a unified result to use later on.
Example:
Event C on www.someplace.com needs to be executed. In order to execute Event C, Event A on www.anotherplace.com and Event B on www.athirdplace.com must also be executed.
Event C has a Seq(www.anotherplace.com, www.athirdplace.com) over which I would like to iterate, sending a WS request to each URL respectively to check whether A and B have been executed.
It is assumed that a GET to these URLs returns either true or false.
How do I collect the responses from each request (preferably combined into a list) and assert that each response is equal to true?
EDIT: An event may contain an arbitrary number of URLs, so I can't know beforehand how many WS requests I need to send.
Short Answer
You can use the sequence method available on the Future companion object.
For example:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global // or Play's default execution context

val urls = Seq("www.anotherplace.com", "www.athirdplace.com")
val requests = urls.map(WS.url)
val futureResponses = Future.sequence(requests.map(_.get()))
Aggregated Future
Note that the type of futureResponses will be Future[Seq[WSResponse]]. Now you can work on the results:
futureResponses.map { responses =>
  responses.map { response =>
    val body = response.body
    // do something with the response body
  }
}
More Details
From ScalaDocs of sequence method:
Transforms a TraversableOnce[Future[A]] into a
Future[TraversableOnce[A]]. Useful for reducing many Futures into a
single Future.
Note that if any of the Futures you pass to sequence fails, the resulting Future will be failed as well. Only when all the Futures complete successfully will the result complete successfully. This is good for some purposes, especially if you want to send the requests at the same time, not one after another.
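To get at the original question (are they all true?), here is a sketch building on futureResponses above; it assumes the services really do answer with a plain true/false body and that an implicit ExecutionContext is in scope:
import scala.concurrent.Future
import scala.util.Try

val allExecuted: Future[Boolean] =
  futureResponses.map { responses =>
    // Treat anything that doesn't parse as a boolean as a failed prerequisite.
    responses.forall(response => Try(response.body.trim.toBoolean).getOrElse(false))
  }

allExecuted.foreach {
  case true  => println("Events A and B are executed; Event C can run")
  case false => println("At least one prerequisite event has not been executed")
}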
Have a look at the documentation and see if you can get their example to work.
Try something like this:
val futureResponse: Future[Boolean] = for {
  responseOne   <- WS.url(urlOne).get()
  responseTwo   <- WS.url(urlTwo).get()
  responseThree <- WS.url(urlThree).get()
} yield responseOne.body.toBoolean && responseTwo.body.toBoolean && responseThree.body.toBoolean
You'll probably need to parse the responses from your web services properly, since they (probably) won't return plain true/false bodies, but you get the idea.
I have been slowly examining all of the features that F# brings to the table. One that has particularly piqued my interest is the MailboxProcessor.
The equivalent of this in C# would most likely use locks. Can we consider the MailboxProcessor as a replacement for locks?
In the following example, am I doing anything particularly naive, or can you see anything that might be improved?
module Tcp =
    open System
    open System.Collections.Generic
    open System.Net
    open System.Net.Sockets
    open System.Threading

    type SocketAsyncMessage =
        | Get of AsyncReplyChannel<SocketAsyncEventArgs>
        | Put of SocketAsyncEventArgs
        | Dispose of AsyncReplyChannel<MailboxProcessor<SocketAsyncMessage>>

    type SocketAsyncEventArgsPool(size:int) =
        let agent =
            lazy(MailboxProcessor.Start(
                    (fun inbox ->
                        let references = lazy(new List<SocketAsyncEventArgs>(size))
                        let idleReferences = lazy(new Queue<SocketAsyncEventArgs>(size))
                        let rec loop () =
                            async {
                                let! message = inbox.Receive()
                                match message with
                                | Get channel ->
                                    if idleReferences.Value.Count > 0 then
                                        channel.Reply(idleReferences.Value.Dequeue())
                                    else
                                        let args = new SocketAsyncEventArgs()
                                        references.Value.Add args
                                        channel.Reply args
                                    return! loop()
                                | Put args ->
                                    if args = null then
                                        nullArg "args"
                                    elif references.Value.Count < size then
                                        idleReferences.Value.Enqueue args
                                    else
                                        if not(references.Value.Remove args) then
                                            invalidOp "Reference not found."
                                        args.Dispose()
                                    return! loop()
                                | Dispose channel ->
                                    if references.IsValueCreated then
                                        references.Value
                                        |> Seq.iter(fun args -> args.Dispose())
                                    channel.Reply inbox
                            }
                        loop())))

        /// Returns a SocketAsyncEventArgs instance from the pool.
        member this.Get () =
            agent.Value.PostAndReply(fun channel -> Get channel)

        /// Returns the SocketAsyncEventArgs instance to the pool.
        member this.Put args =
            agent.Value.Post(Put args)

        /// Releases all resources used by the SocketAsyncEventArgsPool.
        member this.Dispose () =
            (this:>IDisposable).Dispose()

        interface IDisposable with
            member this.Dispose() =
                if agent.IsValueCreated then
                    (agent.Value.PostAndReply(fun channel -> Dispose channel):>IDisposable).Dispose()
Mailboxes (and similar constructs) are used in programming models that don't use locks, as they're inherently built around asynchronous processing. (Lack of shared mutable state is another requirement of this model).
The Actor model can be thought of as a series of single-threaded mini-applications that communicate by sending and receiving data from each other. Each mini-application will only be run by a single thread at a time. This, combined with the lack of shared state, renders locks unnecessary.
Procedural models (and most OO code is, at its heart, procedural), use thread-level concurrency, and synchronous calls to other objects. The Actor model flips this around - calls (messages) between objects are asynchronous, but each object is completely synchronous.
I don't know enough F# to really analyze your code, frankly. It does look like you're trying to stick a synchronous-looking shell around your mailbox, and I wonder if that's really the best thing to do (vs. embracing the mailbox model fully). In your implementation, it does appear that you're using it as a replacement for a lock.
To the first part of your question:
The MailboxProcessor class is a message queue running on its own thread. You may send a message to the MailboxProcessor from any thread, either asynchronously or synchronously.
Such a model lets threads communicate through message passing instead of using locks/mutexes/IPC mechanisms.
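To make the "message passing instead of locks" point concrete, here is a minimal sketch of the same idea in Scala with Akka classic actors rather than F# (the Counter protocol and names are made up for illustration): all mutation happens inside the actor, which processes one message at a time, so no lock is needed.
import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._

// The message protocol replaces the lock: callers never touch the state directly.
case object Increment
case object GetCount

class Counter extends Actor {
  private var count = 0 // only read or written while processing a message

  def receive: Receive = {
    case Increment => count += 1
    case GetCount  => sender() ! count
  }
}

object CounterDemo extends App {
  val system = ActorSystem("counter-demo")
  val counter = system.actorOf(Props[Counter](), "counter")

  (1 to 1000).foreach(_ => counter ! Increment) // fire-and-forget, like Post

  implicit val timeout: Timeout = 3.seconds
  val count = Await.result((counter ? GetCount).mapTo[Int], 3.seconds) // like PostAndReply
  println(s"count = $count")

  system.terminate()
}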