I'm trying to implement two-way communication using boost::asio. I'm writing a server that will communicate with multiple clients.
I want the writes and reads to and from clients to happen without any synchronization or order - a client can send a command to the server at any time while still receiving data in a loop. Of course, access to shared resources must be protected.
What is the best way to achieve this? Is having two threads - one for reading and one for writing - a good option? What about accepting connections and managing many clients?
//edit
By "no synchronization and order" I mean that the server should stream to the clients its data all the time and that it can respond(change its behaviour) to clients requests at any time regardless of what is now being sent to them.
One key idea behind asio is exactly that you don't need multiple threads to deal with multiple client sessions. Your description is a bit generic, and I'm not sure I understand what you mean by 'writes and reads without any synchronization or order'.
A good starting point would be the asio chat server example. Notice how in that example an instance of the class chat_session is created for each connected client. Objects of that class keep posting asynchronous reads as long as the connection is alive, and at the same time they can write data to the connected client. Meanwhile, an object of class chat_server keeps accepting new incoming client connections.
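A trimmed-down sketch of that pattern (names are mine, not the example's; it assumes a single-threaded io_service, otherwise deliver() would need to go through a strand):

```cpp
#include <boost/asio.hpp>
#include <array>
#include <deque>
#include <memory>
#include <string>

using boost::asio::ip::tcp;

// One of these per connected client: a read loop that is always armed,
// plus a write queue that can be fed at any time.
class session : public std::enable_shared_from_this<session> {
public:
    explicit session(tcp::socket socket) : socket_(std::move(socket)) {}

    void start() { do_read(); }

    // May be called at any moment; only one async_write is in flight.
    void deliver(std::string msg) {
        bool idle = write_queue_.empty();
        write_queue_.push_back(std::move(msg));
        if (idle) do_write();
    }

private:
    void do_read() {
        auto self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(read_buf_),
            [this, self](boost::system::error_code ec, std::size_t n) {
                if (ec) return;               // client disconnected
                handle_command(read_buf_.data(), n);
                do_read();                    // keep the read loop alive
            });
    }

    void do_write() {
        auto self = shared_from_this();
        boost::asio::async_write(socket_,
            boost::asio::buffer(write_queue_.front()),
            [this, self](boost::system::error_code ec, std::size_t) {
                if (ec) return;
                write_queue_.pop_front();
                if (!write_queue_.empty()) do_write();
            });
    }

    void handle_command(const char*, std::size_t) { /* react to the client */ }

    tcp::socket socket_;
    std::array<char, 1024> read_buf_;
    std::deque<std::string> write_queue_;
};
```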
At work we're doing something conceptually very similar, and there I noticed the big impact a heavy handler has on performance. The writing side of the code/write handler did too much work and occupied a worker thread for too long, jeopardizing the program flow. In particular, RST packets (closed connections) weren't detected quickly enough by the read handler, because the write actions were taking their sweet time and hogging most of the processing time in the worker thread. For now I've fixed that by creating two worker threads, so that one part of the code isn't starved of processing time. Admittedly this is far from ideal, and it is on my lengthy to-do list of optimizations.
Long story short: you can get away with a single thread for reading and writing if your handlers are lightweight, while a second thread handles the rest of your program. Once you notice weird synchronization issues, it's time to either lighten your network handlers or add an extra thread to the worker pool.
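For reference, adding that second worker thread is just a matter of having two threads call run() on the same io_service. A minimal sketch; note that once handlers can run concurrently, shared state needs a strand or mutex:

```cpp
#include <boost/asio.hpp>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_service io;
    boost::asio::io_service::work work(io);   // keeps run() from returning early

    std::vector<std::thread> pool;
    for (int i = 0; i < 2; ++i)               // the two workers mentioned above
        pool.emplace_back([&io] { io.run(); });

    // ... set up acceptors, sessions, timers, etc. here ...

    // io.stop();                             // on shutdown
    for (auto& t : pool) t.join();
}
```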
Related
I'm writing a networking library that uses Boost.Asio and am confused about whether I should run the io_service on a separate thread or not.
I currently have a class that wraps all the asio work. It has one io_service, one socket, etc., and uses async_read and async_write to communicate with the remote server. It exposes read and write methods that allow users to communicate with the remote server.
This class is then called by other classes that use its read/write methods to send and receive data. In some cases there are chained read/write calls until a final user-provided callback is invoked to pass on the result of the computation.
I'm now trying to implement a connection pool and am wondering whether I need a thread pool as well: all reads and writes use async methods, and no post-read processing involves blocking calls before the final user-provided callback. Shouldn't it be OK to have a series of connection objects running at the same time without a separate thread pool?
If you only have one thread, then while you are processing the data you receive, you are blocking any other handlers. Of course, if the only thing you do in an async_read or async_write handler is start the next async call, then the io_service thread is always waiting for new data to arrive and populating the relevant connections' underlying data structures. No problem with just one thread.
But you probably have some kind of processing that interacts with the read/write data, and that is the part you can parallelize with a thread pool. So the question is: how big a fraction of time is consumed by this processing? Is it the bottleneck (in latency or bandwidth) of the server?
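In code, that split might look roughly like this (names invented for illustration): a single thread runs the network io_service, while a pool of threads runs a second io_service that receives the heavy work via post():

```cpp
#include <boost/asio.hpp>
#include <thread>
#include <vector>

boost::asio::io_service net_io;    // socket handlers only (one thread)
boost::asio::io_service work_io;   // CPU-heavy post-read processing (pool)

void on_read(std::vector<char> data) {
    // The read handler stays light: hand the buffer off and return, so the
    // network thread can immediately service the next completion.
    work_io.post([d = std::move(data)] {
        /* parse, compute, invoke the user's final callback */
    });
}

int main() {
    boost::asio::io_service::work keep_net(net_io), keep_work(work_io);
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([] { work_io.run(); });
    net_io.run();                  // runs until stopped
    for (auto& t : pool) t.join();
}
```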
I've seen different cases here in the past. One was a simple server working through a single list of jobs and dispatching the data to clients. It didn't require threading: I didn't care about latency, the clients only connected from time to time, and there was no bottleneck. Then I had another case where everything needed to be processed quickly, and there I used a thread pool.
So the real question is: where is the bottleneck?
I am trying to learn C++ (with prior programming knowledge) by creating a server application with multiple clients. The server application will run on a Raspberry Pi/Debian (Raspbian). I thought this would also be a good opportunity to learn about low-level concurrent programming with threads (e.g. POSIX). Then I came across the select() function, which basically allows the use of blocking functions in a single thread to handle multiple clients, which is interesting. Some people here on StackOverflow mentioned that threads cause a lot of overhead, and select() seems to be a nice alternative.
In reality I will have 1-3 clients connected, but I would like to keep my application flexible. As a design, I was thinking of a main thread that starts a data thread (processing stuff non-stop) and a server thread (listening for incoming connections). Since the accept() call blocks, the latter needs to be its own thread. If clients connect, I may need a separate thread per client as well.
In the end, the worker thread will write to shared memory, and the client threads will read from there and communicate with the clients. Some people were opposed to the use of threads, but in my understanding threads are fine if they are invoked rarely (and are long-lived) and if there are blocking function calls. For the latter there is the select() function which, used in a loop, allows a single thread to handle multiple sockets.
I think that at least for the data processing and the server's accept() call I will need 2 separate threads started at the beginning. I may then handle all clients with select() in a single thread, or with separate threads. What would be the correct approach, and are there smarter alternatives?
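To make that concrete, here is roughly the select() loop I have in mind (an untested sketch; error handling trimmed, and limited to FD_SETSIZE descriptors):

```cpp
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

void serve(int listen_fd) {
    std::vector<int> clients;
    for (;;) {
        // Rebuild the watched set each iteration: listener plus all clients.
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_fd, &readfds);
        int maxfd = listen_fd;
        for (int fd : clients) {
            FD_SET(fd, &readfds);
            if (fd > maxfd) maxfd = fd;
        }

        if (select(maxfd + 1, &readfds, nullptr, nullptr, nullptr) < 0)
            continue;                              // interrupted: retry

        if (FD_ISSET(listen_fd, &readfds)) {       // new connection pending
            int fd = accept(listen_fd, nullptr, nullptr);
            if (fd >= 0) clients.push_back(fd);
        }

        for (auto it = clients.begin(); it != clients.end();) {
            if (FD_ISSET(*it, &readfds)) {
                char buf[512];
                ssize_t n = read(*it, buf, sizeof buf);
                if (n <= 0) {                      // client closed or error
                    close(*it);
                    it = clients.erase(it);
                    continue;
                }
                // ... hand buf[0..n) to the data thread via shared memory ...
            }
            ++it;
        }
    }
}
```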
I'm trying to ping a URL on a server in the middle of my high-performance C++ application, where every millisecond is critical. I don't care about the return data from the query... I just need to send an HTTP request to a specific URL (to cause it to load), and I'm trying to find the most effective, non-blocking way to do it.
My application uses Boost.Asio, but most approaches seem to involve building and tearing down a socket each time (which might unfortunately be necessary). I'm hoping there's a basic C/C++ socket one-liner that won't cause any overhead, memory leaks, or blocking - just quickly open a socket, shoot the HTTP request off, and move along.
And this will need to happen thousands of times per second, so socket churn and overhead matter (I don't want to flood the OS).
Anyone have any advice on the most efficient way to accomplish this?
Thanks so much!
With thousands of notifications sent per second, I can't imagine opening a socket connection for each one; the overhead would probably make that too inefficient. So, as Casey suggested, try using a dedicated connection.
Since it sounds like you are doing quite a bit of processing on your main thread, consider creating a worker thread for the socket work. You will probably need thread-synchronization objects such as a mutex or critical section - at least around the container (probably a queue) that the main thread pushes into and the worker thread reads from.
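A rough sketch of that arrangement (framing and names are mine; a real version needs connect/reconnect and error handling on the socket):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

std::queue<std::string> pending;   // HTTP requests waiting to be sent
std::mutex mtx;
std::condition_variable cv;

// Called from the hot path: just enqueue and return, no socket work here.
void notify(std::string request) {
    { std::lock_guard<std::mutex> lk(mtx); pending.push(std::move(request)); }
    cv.notify_one();
}

// Owns the single long-lived connection and drains the queue.
void worker() {
    // int sock = open_persistent_connection();   // hypothetical helper
    for (;;) {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [] { return !pending.empty(); });
        std::string req = std::move(pending.front());
        pending.pop();
        lk.unlock();
        // send(sock, req.data(), req.size(), 0);  // fire and forget
    }
}
```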
I was in a discussion about multiple threads in a client application and was told that using one thread for receiving data and another for sending data is not the way to go.
Why?
From what I know, TCP is full-duplex, so wouldn't this be a performance improvement?
Having a dedicated send thread and a dedicated receive thread is bad for two reasons.
First, it means that a context switch is required every time you go from receiving to sending unless you are doing both at the same time.
Second, it means that in the typical path - where you receive a query, formulate a response, and then send that response - data must be handed from one thread to another, blowing out caches.
That said, if performance isn't super-critical and it fits into your design well, it certainly works. It's just that there's usually no advantage.
I suppose it depends on the scale of your application. If you are doing a small app for a class project, it might be enough to have the send and receive on the same thread. Then you don't have to worry about threading issues.
However, I worked on an application that had to listen for several thousand incoming connections, and each connection might send a significant amount of data. We had a thread whose sole purpose was listening for socket connections and putting the new connections into a pool, a variable number of threads (depending on how busy the app was) just for reading from sockets, and a different pool of threads for writing. A stripped-down version of that layout is sketched below.
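Details here are invented for illustration: one acceptor thread feeds accepted sockets through a shared queue to a pool of reader threads (in the real app, sockets would cycle back into a readiness set rather than being read once).

```cpp
#include <sys/socket.h>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::queue<int> ready;                // accepted sockets awaiting a reader
std::mutex mtx;
std::condition_variable cv;

void acceptor(int listen_fd) {        // sole purpose: listen and hand off
    for (;;) {
        int fd = accept(listen_fd, nullptr, nullptr);
        if (fd < 0) continue;
        { std::lock_guard<std::mutex> lk(mtx); ready.push(fd); }
        cv.notify_one();
    }
}

void reader() {                       // run N of these, sized to the load
    for (;;) {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [] { return !ready.empty(); });
        int fd = ready.front();
        ready.pop();
        lk.unlock();
        // ... read from fd quickly so the kernel buffer never fills ...
    }
}
```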
The problem is that if your listening socket isn't reading data off the wire fast enough and the buffer fills up, an error is returned; with thousands of clients, that caused a lot of reconnects and re-sends of data, which compounded the original problem of data not being read fast enough.
So it comes back to what I said in the first place - it depends on the scale of your application, but why not add in the ability now? Just make sure that you are thread safe, and you should be OK.
I'm writing a client-server application, and one of the requirements is that the server, upon receiving an update from one of the clients, be able to push the new data out to all the other clients. This is a C++ (Qt) application meant to run on Linux (both client and server), but I'm mostly looking for high-level conceptual ideas of how this should work (though low-level thoughts are good, too).
Server:
It needs to (among its other duties) keep a socket open, listening for incoming packets from potentially n different clients, presumably on a background thread (I haven't written much socket code other than some rinky-dink examples in school). Upon getting data from a client, it processes it and then spits it out to all its clients, right?
Of course, I'm not sure how it actually does this. I'm guessing this means it has to keep persistent connections with every single client (at least the active clients), but I don't understand even conceptually how to maintain this connection (or the list of these connections).
So, how should I approach this?
In general when you have multiple clients, there are a few ways to handle this.
First of all, in TCP, when a client connects to you it is placed in a queue until it can be serviced. This is a given; you don't need to do anything except call the accept system call to receive a new client. Once the client is received, you'll be given a socket which you use to read and write. Who reads or writes first is entirely dependent on your protocol, but both sides need to know the protocol (which is up to you to define).
Once you've got the socket, you can do a few things. In the simplest case, you just read some data, process it, write back to the socket, close the socket, and serve the next client. Unfortunately this means you can only serve one client at a time, so no "push" updates are possible. Another strategy is to keep a list of all the open sockets; any "update" then simply iterates over the list and writes to each socket. This may present a problem, though, because it only allows push updates (if a client sent a request, who would be watching for it?).
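A minimal sketch of that "list of open sockets" approach (POSIX, no error handling, partial sends ignored for brevity):

```cpp
#include <sys/socket.h>
#include <unistd.h>
#include <string>
#include <vector>

std::vector<int> clients;   // one socket per persistently connected client

void on_new_client(int listen_fd) {
    int fd = accept(listen_fd, nullptr, nullptr);
    if (fd >= 0) clients.push_back(fd);
}

// Pushing an update is just a walk over the list; dead sockets get dropped.
void push_update(const std::string& msg) {
    for (auto it = clients.begin(); it != clients.end();) {
        if (send(*it, msg.data(), msg.size(), 0) < 0) {
            close(*it);
            it = clients.erase(it);
        } else {
            ++it;
        }
    }
}
```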
The more advanced approach is to assign one thread to each socket. In this scenario, each time a socket is created you spin up a new thread whose whole purpose is to serve exactly one client. This cuts down on latency and utilizes multiple cores (if available), but is far more difficult to program. Also, if you have 10,000 clients connecting, that's 10,000 threads, which gets to be too much. Pushing an update to a single client (in this scenario) is very simple: a thread just writes to its respective socket. Pushing to all of them at once is a little trickier (it requires either a thread event or a producer/consumer queue, neither of which is much fun to implement).
There are, of course, a million other ways to handle this (one process per client, a thread pool, a load-balancing proxy, you name it). Suffice it to say there's no way to cover all of them in one answer; I hope this answers your basic questions, and let me know if you need me to clarify anything - it's a very large subject. If I might make a suggestion, though: handling multiple clients is a wheel that has been reinvented a million times. There are very good libraries out there that are far more efficient and programmer-friendly than raw socket IO. I suggest libevent, which turns network requests into an event-driven paradigm (much more like GUI programming, which might be nice for you) and is incredibly efficient.
From what I understand, you need to keep a loop running (at least until the program terminates) that answers connection requests from your clients. It would be best to add the resulting sockets to an array of some sort, use an event to see when a new client is added to that array, and wait for any of them to send data. Then you do what you have to do with that data and send the result back.