Normally in Erlang, programmers use the ! operator to send a message to a receiving process, but how do we do this in Yaws? Say I am trying to do this:
<erl>
out(_Arg) -> loop("bad").

loop(_X) ->
    receive
        good -> {html, "Good"};
        bad -> {html, "bad"}
    end.
</erl>
This program keeps waiting for a message. How do I send a message to it?
If you want one process to send a message to another, you clearly need two processes. When Yaws receives an HTTP request, by default it dispatches the request to one of the processes in its Erlang process pool. When you're using a .yaws file as in your example, that process invokes your out/1 function. But that's just one process, so you need another.
There are numerous ways to start a second process. One simple way is to spawn_link a process to run whatever logic will send the message to loop/1:
<erl>
out(_Arg) ->
    process_flag(trap_exit, true),
    Self = self(),
    Pid = spawn_link(fun() -> my_business_logic(Self) end),
    loop(Pid).

loop(Pid) ->
    receive
        {Pid, good} -> {html, "Good"};
        {Pid, bad} -> {html, "Bad"};
        {'EXIT', Pid, _Reason} ->
            [{status, 500},
             {html, "internal server error"}]
    end.

my_business_logic(Parent) ->
    %% run your logic here, then send a message
    Parent ! {self(), good}.
</erl>
Note that we include the child process's Pid in the message so the receiver can verify it originates from the expected process. Note also that we link to the child process and trap exits, so that if the child dies unexpectedly, we catch the 'EXIT' message and report the error properly.
But this might not be a good approach. If the logic processes should run independently of any HTTP request, you could start a pool of them when your Erlang system starts, and have the out/1 function send one of them a message asking it to carry out a request on its behalf. It all depends on what those processes are doing, how they relate to incoming requests, and whether a pool of them is adequate for the request load you're expecting.
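For illustration, a minimal sketch of that idea with a single long-running worker might look like this (the registered name logic_worker and the message shapes are invented for the example; a real pool would hold several workers):

%% Hypothetical long-running worker, registered under the name
%% logic_worker when the system starts.
start_worker() ->
    register(logic_worker, spawn(fun worker_loop/0)).

worker_loop() ->
    receive
        {From, Ref, do_work} ->
            %% run the real logic here, then reply
            From ! {Ref, good},
            worker_loop()
    end.

%% In the .yaws file: ask the worker and wait for its reply.
out(_Arg) ->
    Ref = make_ref(),
    logic_worker ! {self(), Ref, do_work},
    receive
        {Ref, good} -> {html, "Good"};
        {Ref, bad} -> {html, "Bad"}
    after 5000 ->
        [{status, 500}, {html, "no response from worker"}]
    end.

The unique reference ties each reply to its request, so concurrent requests served by the same worker cannot get their answers crossed.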
Using a .yaws file is handy for some applications, but it can be limiting. An alternative is to build an Erlang system containing Yaws and your own application, and use the Yaws appmod feature to have Yaws dispatch requests into your own processes running your own Erlang modules. Explaining all that here isn't practical, so please consult the Yaws documentation or the Yaws book, or ask for help on the Yaws mailing list.
First off, sorry that I was not able to provide a reduced example; at the moment that is beyond my ability. In particular, my code that passes the file descriptors around wasn't working cleanly. I only have a fair understanding of how the code works at a high level.
The question is essentially: in the following complicated example, when the end user presses Ctrl + C, which process receives the SIGINT, and how does it end up there?
The application works on the command line (CLI, going forward). The user starts a client, which effectively sends a command to the server, prints some responses, and terminates. The server, upon request, finds a proper worker executable, forks and execs that executable, and waits for it. Then the server constructs the response and sends it back to the client.
There are, however, some complications. The client starts the server if the server process is not already running -- there is one server process per user. When the server is fork-and-exec'ed, the code just after fork() has this:
if (pid == 0) {
    daemon(0, 0);
    // do more setup and exec
}
Another complication, which might be more important, is that when the client sends a request over a Unix socket (which looks like #server_name), the client appears to send the three file descriptors for standard I/O, using techniques like this.
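(For readers unfamiliar with the technique: a file descriptor can be handed across a Unix-domain socket with sendmsg() and an SCM_RIGHTS control message. A minimal sketch, not taken from the actual code in question:)

/* Sketch: send one file descriptor over a connected Unix-domain
 * socket using an SCM_RIGHTS ancillary message. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int sock, int fd) {
    char dummy = 'x';                  /* must send at least one byte */
    struct iovec iov = { &dummy, 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    memset(ctrl, 0, sizeof(ctrl));

    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;      /* "this message carries fds" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}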
When the server forks and execs the worker executable, the server redirects the worker's standard I/O to the three file descriptors received from the client:
// just after fork(), in the child process' code
auto new_fd = fcntl(received_fd, F_DUPFD_CLOEXEC, 3);
dup2(new_fd, channel); // channel seems to be 0, 1, or 2
That piece of code runs for each of the three file descriptors. (The worker executable in turn creates a bunch of processes, but it does not pass STDIN on to its children.)
The question is what happens when the end user presses Ctrl + C in the terminal. I thought the Bash shell would take it and generate and send SIGINT to the processes with a particular session ID, perhaps the same one as Bash's direct child process or as Bash itself: the client, in this example.
However, it looks like the worker executable receives the signal, and I cannot confirm whether the client receives it. I do not think the server process receives the signal, but I cannot confirm that either. How could this happen?
If Bash takes the Ctrl + C first and delivers it to whichever processes, I thought the server would be unaffected: it has been detached from Bash (i.e. daemon(0, 0)) and has nothing to do with the Bash process. I thought the server, and thus the worker processes, have different session IDs, and that is how it looked when I ran ps -o.
It is understandable that the user's keyboard input (yes or no, etc.) could be delivered to the worker process. What I am not sure about is how Ctrl + C could be delivered to the worker process merely by sharing the standard input. I would like to understand how this works.
P.S. Thank you for the answers and comments! The answer was really helpful. It sounded like the client must get the signal, and the worker process must be stopped by some other mechanism. Based on that, I could look into the code more deeply. It turned out that the client indeed catches the signal and dies, which breaks the socket connection. The server detects the broken fd and signals the corresponding worker process. That is why the worker process looked like it was getting the signal from the terminal.
It's not Bash that sends the signal, but the tty driver. It sends the signal to the terminal's foreground process group, meaning every process in that group receives it.
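You can see this with a small standalone demo (mine, not from the question's code): the parent and child below share the terminal's foreground process group, so a single Ctrl + C makes both of them report SIGINT.

/* Demo: run from an interactive shell and press Ctrl+C.
 * Parent and child are in the same (foreground) process group,
 * so the tty driver delivers SIGINT to both of them. */
#include <signal.h>
#include <unistd.h>

static void on_sigint(int sig) {
    (void)sig;
    /* write() is async-signal-safe */
    write(STDOUT_FILENO, "got SIGINT\n", 11);
}

int main(void) {
    signal(SIGINT, on_sigint);
    if (fork() == 0) {   /* child inherits the handler and the pgid */
        pause();         /* wait for the signal */
        _exit(0);
    }
    pause();
    return 0;
}

A daemonized process, by contrast, has started its own session, no longer belongs to any foreground group of that terminal, and therefore never sees the keyboard-generated signal.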
I am currently creating some custom flows, sending data back and forth through a session. I noticed that in some cases (for example, if a responder flow still has a session.receive pending when the initiating flow finishes), no exceptions are thrown and everything appears to work smoothly, without even a warning log. Is there a way to force a check of send/receive completeness?
It would be better if you could provide a log file to demonstrate your use case.
Send and receive are typically one-directional: one side sends and the other receives. If you are looking for a confirmed receive, you can try the sendAndReceive method, which:
Serializes and queues the given payload object for sending to the counterparty.
Suspends until a response is received, which must be of the given R type.
The receive method itself is blocking, so if your flow finishes successfully, it means the receive call got what it was looking for.
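For illustration, a minimal sketch of an initiating flow using sendAndReceive (the flow name, payload, and counterparty are made up for the example):

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.InitiatingFlow
import net.corda.core.identity.Party
import net.corda.core.utilities.unwrap

// Hypothetical flow: sends "ping" and suspends until the responder
// replies with a String, so an unanswered send cannot go unnoticed here.
@InitiatingFlow
class PingFlow(private val counterparty: Party) : FlowLogic<String>() {
    @Suspendable
    override fun call(): String {
        val session = initiateFlow(counterparty)
        return session.sendAndReceive<String>("ping").unwrap { it }
    }
}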
But again, it would be much better if you could share your log and elaborate on your question a bit.
I have a simple program that, when given a tuple message containing {pid, integer}, will send a message back to the sending process with its PID and the integer + 1. The problem is that I need this program to stay active so I can send it multiple messages and then, when I call flush(), get back everything in my mailbox at once. It only works one message at a time. I tried recursion but it doesn't work. Here is what I have:
defmodule Spawner do
  def start() do
    spawn(fn ->
      receive do
        {pid, y} ->
          send(pid, y + 1)
          Spawner.start()
      end
    end)
  end
end
Then in the terminal I would do:
> x = Spawner.start()
> send x, {self(),3}
> send x, {self(),5}
> flush()
#⇒ output: {PID,4}
I need the output to be {PID,4} and {PID,6}.
Thank you for your time.
Think about send as a ping-pong game. The rule is: one send ⇒ one consume. Exactly as in ping-pong, one cannot expect proper behaviour from the opposite side when serving ten balls at once.
To accomplish what you want, you could use a GenServer that collects all the received messages (instead of immediately answering each of them).
It would also provide, say, a get_all call that retrieves all the collected messages from its state and responds with a tuple:
{PID, [msg1, msg2, ..., msgN]}
The implementation of that won't fit the margins here, but since you have your question tagged with elixir, the GenServer tutorial would be a good start. Then you might want to read about Agent for holding the state.
The other way round (I do not recommend it) would be to flush() the consumer recursively with a timeout: an empty queue would trigger the timeout. But again, that's not how it's supposed to be done, because you probably want all the already-sent messages to be collected somehow on the other side.
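For what it's worth, a minimal sketch of such a collector might look like this (module and function names are made up):

defmodule Collector do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, [], opts)

  # Fetch everything collected so far and reset the state.
  def get_all(server), do: GenServer.call(server, :get_all)

  @impl true
  def init(acc), do: {:ok, acc}

  # Plain messages sent with send/2 arrive via handle_info/2;
  # collect the incremented value instead of replying immediately.
  @impl true
  def handle_info({_pid, y}, acc), do: {:noreply, [y + 1 | acc]}

  @impl true
  def handle_call(:get_all, _from, acc), do: {:reply, Enum.reverse(acc), []}
end

After {:ok, pid} = Collector.start_link(), sending {self(), 3} and {self(), 5} to pid and then calling Collector.get_all(pid) would return [4, 6] in one shot.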
I need two processes P and Q to communicate via 4 KB messages. All the messages belong to a session. A session begins with the first message successfully sent by P to Q and finishes when either process sends a Stop message to the other or when a process terminates. Each process can send a message to and receive a message from the other. Send and receive operations must block until the whole message has been sent or received, respectively, or until a timeout occurs, in which case an error is thrown.
At the moment, my idea is to use a socket and two queues in shared memory (one for the messages from P to Q, and one for the messages from Q to P). The only purpose of the socket is to implement the session concept described above: it is opened when P sends the first message to Q, and it is closed either when one of the two processes deliberately terminates the session (equivalent to the Stop message described above) or when one of the two processes terminates for some reason (this would be done automatically by the OS). In both cases the remaining process can easily be notified of the event. The queues are useful for receiving or sending messages "all at once", as I think there is no easy way to do this via sockets.
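(To make the "all at once" requirement concrete, this is roughly the kind of whole-message blocking receive I have in mind over a plain stream socket; a timeout could be layered on with SO_RCVTIMEO. I am not sure this counts as easy:)

// Sketch: block until an entire 4 KB message has arrived on a
// connected stream socket, or fail if the peer closes or errors.
#include <sys/socket.h>
#include <sys/types.h>
#include <cstddef>

constexpr std::size_t kMsgSize = 4096;

bool recv_whole(int sock, char* buf) {
    std::size_t got = 0;
    while (got < kMsgSize) {
        ssize_t n = recv(sock, buf + got, kMsgSize - got, 0);
        if (n <= 0) return false;  // peer closed the session, or error
        got += static_cast<std::size_t>(n);
    }
    return true;
}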
Are there any simpler solutions than the above? I have full access to C++11, boost (e.g. for IPC part) and POCO libraries (e.g. for the appropriate Socket type). Other libraries are not allowed unless they are header-only.
I do NOT care about efficiency.
A question from an Akka newbie: let's say that at some point one of my actors wants to issue an HTTP request against an external REST API. What is the best way to do it? (Note: I would ask the same question about an actor wishing to store data in an RDBMS.)
Should I create another type of actor for that, and create a pool of such actors? Should I then create a message type with the semantics of "please make an HTTP call to this endpoint", and should my first actor send this message to the pool to delegate the work?
Is that the recommended pattern (rather than doing the work in the initial actor)? And if so, would I then create a message type to communicate the outcome of the request to the initial actor when it is available?
Thank you for your feedback!
Olivier
This question is old now, but presumably one of your goals is to write reactive code that does not block threads, as sourcedelica mentioned. The Spray folks are best known for their async HTTP server and their awesome routing DSL (which you would use to create your own API), but they also offer a spray-client package that allows your app to access other servers. It is based on Futures and thus lets you get things done without blocking. Filip Andersson wrote an illustrative example; here are a few lines from it that will give you the idea:
val pipeline: HttpRequest => Future[HttpResponse] = sendReceive

// create a function to send a GET request and receive a string response
def get(url: String): Future[String] = {
  val futureResponse = pipeline(Get(url))
  futureResponse.map(_.entity.asString)
}
If you are familiar with futures, you know how you can further operate on Futures without blocking (like that map invocation). Spray's client library uses the same underlying data structures and concepts as their server side, which is handy if you are going to do both in one app.
Yes, that sounds like a good approach.
If your HTTP client is blocking, you will want to run the REST API calls on a different thread pool so you don't block your actors. You can use Futures in actors to avoid blocking. Using a pool of actors is also possible, though it's a little more work to set up.
For example, at the top level of your application, create an ExecutionContext that is passed to the actors you create:
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}
import akka.actor.Actor
import akka.pattern.pipe

implicit val blockingEc =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(BlockingPoolSize))

class MyActor(implicit blockingEc: ExecutionContext) extends Actor {
  def receive = {
    case RestCall(arg) =>
      val snd = sender()
      // run the blocking call on the dedicated pool, then pipe the
      // result back to the original requestor as a message
      Future { restApiCall(arg) } pipeTo snd
  }
}
This will run the blocking call and send the result back to the requestor. Make sure to handle Status.Failure(ex) messages in the calling actor, in case restApiCall throws an exception.
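For example, the calling actor might look roughly like this (RestResult is a hypothetical response type invented for the sketch):

import akka.actor.{Actor, ActorLogging, Status}

case class RestResult(body: String) // hypothetical response type

class CallerActor extends Actor with ActorLogging {
  def receive = {
    case RestResult(body) =>
      log.info("got response: {}", body)
    case Status.Failure(ex) =>
      // pipeTo sends this when the piped Future fails
      log.error(ex, "REST call failed")
  }
}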
The specific type and size of thread pool really depends on your application.