I am making a Messenger bot and I am using Ring as my HTTP framework.
Sometimes I want to apply delays between messages sent by the bot. My expectation is that it is safe to use Thread/sleep because it will put only the active thread to sleep, not the entire server. Is that so, or should I resort to clojure.core.async?
This is the code I would be writing without async:
(match [reply]
  ; The bot wants to send a message (text, images, videos etc.) after n milliseconds
  [{:message message :delay delay}]
  (do
    (Thread/sleep delay)
    (facebook/send-message sender-id message))
  ; More code would follow...
A link to the Ring source where this behaviour can be seen would be appreciated, as would any other reference that explains the matter.
Ring is the wrong thing to ask this question about: Ring is not an HTTP server, but rather an abstraction over HTTP servers. Ring itself does not have a fixed threading model; all it really cares about is that you have a function from request to response.
What really makes this decision is which Ring adapter you use. By far the most common is ring-jetty-adapter, a Jetty HTTP handler that delegates to your function through Ring. And Jetty does indeed use a thread per request, so you can sleep in one thread without impacting others (but as noted in another answer, threads are not free, so you don't want to do a ton of this regularly).
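A minimal sketch of what that looks like under the Jetty adapter (port, delay, and handler body are arbitrary choices for illustration): each request runs on its own thread, so the sleep only delays that one request.

(require '[ring.adapter.jetty :refer [run-jetty]])

(defn handler [request]
  (Thread/sleep 1000)              ; blocks this request's thread only
  {:status 200 :body "slept 1s"})

(run-jetty handler {:port 3000 :join? false})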
But there are other Ring adapters with different threading models. For example, aleph includes a Ring adapter based on Netty, which uses java.nio for non-blocking IO on a small, limited thread pool; in that case, sleeping on a "request thread" is very disruptive.
Assuming you're talking about code in a handler: yes, Thread/sleep in Ring does make the thread for that request sleep, and if you have multiple requests doing this you are burning up expensive server threads.
The reason Ring blocks is that the (non-async) model is based on function composition, where the result of one function is the input to the next, so each step has to wait for the previous one. Where exactly this happens in the code, I can't pinpoint.
Putting the send in a go block is better, because then you are not blocking server threads: the handler can return the response while the message is sent in the background. Do note that the handler cannot use results produced inside the go block.
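A minimal sketch of that go-block approach, using core.async's timeout channel instead of Thread/sleep so the code parks rather than blocks a thread (send-delayed! is a hypothetical helper; facebook/send-message and sender-id come from the question):

(require '[clojure.core.async :refer [go <! timeout]])

(defn send-delayed! [sender-id message delay-ms]
  (go
    (<! (timeout delay-ms))        ; parks the go block, blocks no thread
    (facebook/send-message sender-id message)))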
If you also want the response itself delivered asynchronously (without blocking a server thread), you can for example use Pedestal.
For most servers synchronous handlers are sufficient, but if you are using Thread/sleep AND want a response, I would recommend asynchronous Ring handlers, Pedestal, or another async-capable framework.
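For completeness, a sketch of the asynchronous-handler route, assuming Ring 1.6+ (which added the three-argument handler arity) and the Jetty adapter started with :async? true; the port, delay, and body are placeholders:

(require '[clojure.core.async :refer [go <! timeout]]
         '[ring.adapter.jetty :refer [run-jetty]])

(defn async-handler [request respond raise]
  (go
    (<! (timeout 3000))                        ; delay without holding a Jetty thread
    (respond {:status 200 :body "replied after 3s"})))

(run-jetty async-handler {:port 3000 :join? false :async? true})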
Related
I have a single-threaded asynchronous TCP server written using Boost.Asio. Each incoming request goes through several processing steps (synchronous and asynchronous), and the response is finally sent back using an async write.
For small loads of 10 concurrent requests it works decently. However, when I test with a parallelism of 100, things start to degrade: response latency keeps increasing as time progresses. So I want to try multi-threaded processing for handling requests.
I am looking for a decent example / help on creating and running multiple threads for asynchronous reading/writing to clients. I have the following doubts:
Should I use a single io_service (IOS) object and call its run method from all of the threads of the thread pool, or should I use a separate IOS per thread?
If I use a single IOS, is there a possibility that part of the TCP data goes to one thread while another part goes to another thread, and so on? Is this understanding correct?
Is there any other better way?
Thanks for any help and pointers here.
Without seeing your code I can only guess what goes wrong. Most probably you're running long actions inside the async completion handlers. Completion handlers should be fast: get the data, hand it off for further processing, done.
As a first priority, I would go fully asynchronous and run all processing in a thread pool. You can find an example here, where a new thread is started for every new client; you can replace that with a thread pool.
Use a single io_service. A single io_service can handle a lot of parallelism, provided you don't stall it inside completion handlers. It also simplifies the implementation, because you don't have to worry about completion handlers running in parallel, which will happen if you run multiple IOS on multiple threads.
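A minimal sketch of that division of labour, using the older io_service API the question refers to (the pool size and the demo work are arbitrary choices; a second io_service doubles as the processing queue):

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_service io;                       // the single IOS for all I/O
    boost::asio::io_service workers;                  // doubles as the processing queue
    boost::asio::io_service::work io_work(io);        // keep both run() loops alive
    boost::asio::io_service::work worker_work(workers);

    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&workers] { workers.run(); });

    // A completion handler should only hand work off, e.g.:
    workers.post([&io] {
        // ... long processing happens on a pool thread ...
        io.post([] { std::cout << "send the response from the I/O thread\n"; });
    });

    std::thread io_thread([&io] { io.run(); });       // single I/O thread

    std::this_thread::sleep_for(std::chrono::seconds(1));
    io.stop();                                        // demo only: a real server
    workers.stop();                                   // would run until shutdown
    io_thread.join();
    for (auto& t : pool) t.join();
}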
Q1: Should I use a single IOS object and call its run method in all of the threads of the thread pool, or should I use a separate IOS per thread?
Either will work; the official Boost.Asio examples demonstrate both:
HTTP Server 2 - an io_service per thread
HTTP Server 3 - a single io_service run by a thread pool
Q2: If I use a single IOS, is there a possibility that part of the TCP data goes to one thread while another part goes to another thread, and so on? Is this understanding correct?
Yes, there is that race condition, but Boost.Asio provides strand to avoid it.
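A small self-contained sketch of a strand at work (thread count and iteration count are arbitrary): several threads drain the same io_service, yet handlers posted through the strand never run concurrently, so the shared counter needs no lock.

#include <boost/asio.hpp>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_service io;
    boost::asio::io_service::strand strand(io);  // serializes posted handlers
    int counter = 0;                             // guarded by the strand, not a mutex

    for (int i = 0; i < 1000; ++i)
        strand.post([&counter] { ++counter; }); // handlers run one at a time

    std::vector<std::thread> pool;               // several threads, one io_service
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&io] { io.run(); });
    for (auto& t : pool) t.join();

    std::cout << counter << "\n";                // always prints 1000
}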
Q3: Is there any other better way?
I have not found a better way myself; if you find one, please tell me or post it here. Thank you.
BTW, as @rustyx said, your program is being blocked by the synchronous calls; switching to fully asynchronous calls will help.
I am currently working on a project that requires fast network management, so I chose 0MQ. But after reading the documentation and the examples it provides, there is something I find hard to understand concerning the asynchronous part of 0MQ.
Is a thread created for each request on a ROUTER or DEALER socket?
I often make the mistake of conflating asynchronous with multi-threaded. When I look at the man page of zmq_socket, I see that for a DEALER or ROUTER socket the incoming routing strategy is set to "Fair-queued". From this I conclude that asynchronous means you can write to or read from the socket without waiting for an answer before sending another request (everything is queued and processed sequentially).
So here is the question:
Does 0MQ create a thread for each request? (I am not talking about the background threads 0MQ uses internally to manage message queueing.)
ZeroMQ creates only one background I/O thread by default; no additional thread is created per request or per socket.
That background thread does all the work, and the user threads communicate with it using queues and file descriptors.
The background thread uses epoll or kqueue to do the asynchronous magic.
You can actually control the number of background threads, but usually one is enough.
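For reference, a minimal sketch of changing that count through the libzmq C API (the count of 4 and the ROUTER socket are arbitrary; the setting must happen before any socket is created):

#include <zmq.h>

int main(void) {
    void *ctx = zmq_ctx_new();
    zmq_ctx_set(ctx, ZMQ_IO_THREADS, 4);   /* default is 1 */
    void *router = zmq_socket(ctx, ZMQ_ROUTER);
    /* ... zmq_bind(router, "tcp://*:5555"), recv/send loop ... */
    zmq_close(router);
    zmq_ctx_term(ctx);
    return 0;
}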
I am trying to use the http-kit client library in Clojure to do synchronous POSTs returning promises. Is there any way to limit the number of threads doing the actual POST?
All the examples I could find of using the built-in thread pool use the lower-level primitive function called request, but they were all for http/get.
Thanks
I'm assuming you've seen http://http-kit.org/client.html#sync
My question is: do you want to do a synchronous POST, or limit the number of threads? You can do a sync POST with 100 threads; it just so happens your main thread will wait for the request to return.
Maybe more importantly, why do you want to limit the number of threads?
Also, see https://github.com/http-kit/http-kit/blob/master/src/org/httpkit/client.clj, specifically request. You can hand it a map of arguments, like
{:url "http://yoursite.com" :worker-pool my-thread-pool-executor}
my-thread-pool-executor has to implement ExecutorService.
Specifically, you need to override submit, which is what the RespListener in http-kit uses. You could make submit synchronous with your own ExecutorService implementation so it runs on the calling thread.
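A sketch of the simplest version of this, assuming a stock fixed-size pool is enough (a JDK Executors pool already implements ExecutorService; the URL is the placeholder from above and the pool size is arbitrary):

(require '[org.httpkit.client :as http])
(import '(java.util.concurrent Executors))

(def my-thread-pool-executor (Executors/newFixedThreadPool 4))

(def resp
  (http/request {:url         "http://yoursite.com"
                 :method      :post
                 :worker-pool my-thread-pool-executor}))

@resp   ; deref the returned promise to make the POST synchronous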
The program is a client-server socket application being developed in C on Linux. There is a remote server to which each client connects and logs itself as being online. There will most likely be several clients online at any given time, all trying to connect to the server to log themselves as online/busy/idle etc. So how can the server handle these concurrent requests? What's a good design approach (forking/multithreading per connection, maybe)?
Personally, I would use the event-driven approach for servers: you register a callback that is called as soon as a connection arrives, and further event callbacks whenever a socket is ready to read or write.
With a huge number of connections you get a great performance and resource benefit compared to threads, but I would prefer this approach even for a smaller number of connections.
I would only use threads if you really need multiple cores, or if some requests take long to process and are too complicated to handle without threads.
I use libev as the base library for event-driven networking.
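A minimal libev echo server as a sketch of that shape (error handling trimmed; port 5555 is an arbitrary choice): one callback fires on new connections, another whenever a client socket becomes readable, all on a single-threaded event loop.

#include <ev.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

static void on_readable(EV_P_ ev_io *w, int revents) {
    char buf[512];
    ssize_t n = read(w->fd, buf, sizeof buf);
    if (n <= 0) {                          /* client closed or error */
        ev_io_stop(EV_A_ w);
        close(w->fd);
        free(w);
        return;
    }
    write(w->fd, buf, n);                  /* echo it back */
}

static void on_connect(EV_P_ ev_io *w, int revents) {
    int fd = accept(w->fd, NULL, NULL);
    ev_io *client = malloc(sizeof *client);
    ev_io_init(client, on_readable, fd, EV_READ);
    ev_io_start(EV_A_ client);
}

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5555);
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 128);

    ev_io accept_watcher;
    ev_io_init(&accept_watcher, on_connect, srv, EV_READ);
    ev_io_start(EV_DEFAULT, &accept_watcher);
    ev_run(EV_DEFAULT, 0);                 /* the event loop */
    return 0;
}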
Generally speaking, you want a thread pool to service requests.
A typical structure will start with a single thread that does nothing but queue up incoming requests. Since it doesn't do very much, it's typically pretty easy for one thread to keep up with the maximum speed of the network.
That thread puts the items into some sort of concurrent queue. Then you have a pool of other threads reading items from the queue, doing what's needed, then depositing the results in another queue (and repeating until the server shuts down).
Finally, you have another single thread that just takes items from the result queue, and sends replies out to the clients.
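A compact sketch of that pipeline (the blocking queue, the pool size, and the string "requests" are stand-ins for real protocol types and tuning): one queue feeds the worker pool, a second carries results to the reply thread.

#include <chrono>
#include <condition_variable>
#include <cstdlib>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

template <typename T>
class BlockingQueue {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    T pop() {                               // blocks until an item arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T v = std::move(q_.front()); q_.pop();
        return v;
    }
};

int main() {
    BlockingQueue<std::string> requests, replies;

    std::vector<std::thread> workers;       // the pool doing the real work
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([&] {
            for (;;) replies.push("handled: " + requests.pop());
        });

    std::thread writer([&] {                // sends results back to clients
        for (;;) std::cout << replies.pop() << "\n";
    });

    // Stand-in for the network-facing reader thread:
    for (int i = 0; i < 10; ++i) requests.push("request " + std::to_string(i));

    std::this_thread::sleep_for(std::chrono::seconds(1));
    std::exit(0);                           // demo only: no graceful shutdown
}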
The best approach is a combination of the event-driven model and the multithreaded model.
You create a bunch of non-blocking sockets, but the thread count should be much lower, e.g. around 10 sockets per thread.
Then you just listen for events (incoming requests) on every thread in non-blocking mode and process them as they happen.
This technique usually performs better than non-blocking sockets or the multithreaded model used on their own.
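A skeleton of that hybrid on Linux (the thread count, batch size, and round-robin assignment are arbitrary choices; the actual read/write handling is elided): each thread owns its own epoll set and sleeps until one of its sockets has activity.

#include <pthread.h>
#include <sys/epoll.h>

#define THREADS 4

static void *event_loop(void *arg) {
    int epfd = *(int *)arg;
    struct epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            /* non-blocking read/write on events[i].data.fd */
        }
    }
    return NULL;
}

int main(void) {
    pthread_t tids[THREADS];
    int epfds[THREADS];
    for (int i = 0; i < THREADS; ++i) {
        epfds[i] = epoll_create1(0);       /* one epoll set per thread */
        pthread_create(&tids[i], NULL, event_loop, &epfds[i]);
    }
    /* accept() in main, then assign each new fd to a thread, e.g.
       round-robin: epoll_ctl(epfds[next++ % THREADS], EPOLL_CTL_ADD, fd, &ev); */
    for (int i = 0; i < THREADS; ++i)
        pthread_join(tids[i], NULL);
    return 0;
}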
Take a look at Comer's "Internetworking with TCP/IP" volume 3 (BSD sockets version); it has detailed examples of different ways of writing servers and clients. The full code (sans explanations, unfortunately) is on the web. Or rummage around in http://tldp.org, where you'll find a collection of tutorials.
select or poll or epoll
These are facilities on *nix systems for aggregating multiple event sources (connections) into a single waiting point. The server adds the connections to a data structure and then waits by calling select etc. It gets woken up when something happens on any of those connections, figures out which one, handles it, and then goes back to sleep. See the manual pages for details.
There are several higher-level libraries built on top of these mechanisms that make programming them somewhat easier, e.g. libevent, libev etc.
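A minimal select()-based echo server as a sketch of that single waiting point (error handling trimmed; port 5555 is arbitrary): the process sleeps in select() and wakes only when the listening socket or one of the clients has activity.

#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5555);
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 16);

    fd_set watched;
    FD_ZERO(&watched);
    FD_SET(srv, &watched);
    int maxfd = srv;

    for (;;) {
        fd_set ready = watched;            /* select() mutates its argument */
        select(maxfd + 1, &ready, NULL, NULL, NULL);   /* sleep here */

        for (int fd = 0; fd <= maxfd; ++fd) {
            if (!FD_ISSET(fd, &ready)) continue;
            if (fd == srv) {               /* new connection */
                int c = accept(srv, NULL, NULL);
                FD_SET(c, &watched);
                if (c > maxfd) maxfd = c;
            } else {                       /* data (or close) on a client */
                char buf[512];
                ssize_t n = read(fd, buf, sizeof buf);
                if (n <= 0) { close(fd); FD_CLR(fd, &watched); }
                else write(fd, buf, n);    /* echo */
            }
        }
    }
}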
I have a game I am working on in C++ and OpenGL. I have made a threaded server that right now accepts clients (the game) and receives messages from them. Right now the game only sends messages. I want both the game and the server to be able to send and receive, but I'm not sure of the best way to go about it. I was considering using one thread for sending and one for receiving, both on the same socket. Right now the game runs in a single thread, and the server makes a separate thread for each client.
Looking for suggestions on how to go about it for the game as well as the server (unless your suggestion is the same for both). Any questions, feel free to ask :)
Thanks!
What you need to do is set up an outgoing queue of messages for each client. Say you have two clients connected to the server, one serviced by thread A and the other by thread B. Thread A should call WaitForMultipleObjects() on its socket and on a semaphore/mutex/condition variable for its queue. That way, if something lands in its queue, it can wake up and send it out. If it gets a message from its client that it needs to give to client B, it processes that message and puts it in thread B's outgoing queue.
This is a very simple synchronization scheme. If your game is very complex or massive, you will have to do something much more clever than this.
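A portable sketch of just the queue half of that scheme, with a condition variable standing in for the Win32 semaphore (a real service thread would also wait on the socket itself, which this sketch omits):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

struct ClientQueue {
    std::queue<std::string> pending;
    std::mutex m;
    std::condition_variable cv;

    void enqueue(std::string msg) {        // called by other clients' threads
        { std::lock_guard<std::mutex> lk(m); pending.push(std::move(msg)); }
        cv.notify_one();
    }

    std::string wait_next() {              // called by this client's service thread
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !pending.empty(); });
        std::string msg = std::move(pending.front());
        pending.pop();
        return msg;                        // then send() it on the socket
    }
};

int main() {
    ClientQueue q;
    q.enqueue("message from thread A");
    std::string msg = q.wait_next();       // would normally run in a loop
    // send(client_b_socket, ...) would go here
}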
Don't use threads in a game server. Many professional, AAA game servers are single-threaded - every one I've ever seen, in fact.
Consider using Boost.ASIO, which implements this well with a C++ API (allowing many different approaches besides just asynchronous I/O). There are plenty of tutorials. However, for the absolute highest performance, you should probably not use threads.