Difference between non-blocking mode and async sockets in a C++ winsock server - c++

In C++, I've read some tutorials on creating a server that can accept connections from multiple clients. They suggest using async sockets, but I don't really know why we should choose async over non-blocking mode. And what about the idea of using multi-threading? Is it better than using async sockets? Thanks!!

Since you're requesting a solution in C++, Boost.Asio is, in my opinion, the best async I/O library there is.
I assume you're talking about the "one thread per client" solution when referring to "multi-threading", which is generally a very bad idea for servers that expect many clients in a short time frame or connected at the same time.
Threads are far too resource-consuming for this use, plus you have to take care of mutual exclusion, which in combination with blocking calls can drive you into deadlocks very fast. And that's the least of what you can run into.
On top of that, it's very easy for an attacker to get your server stuck. You will spend a lot of time trying to design your code to avoid this, which leads to unreadable, hard-to-maintain and error-prone code.
In Boost.Asio the specified thread(s) (those that call io_service::run) will only do work when there is actually work to do, leading you directly to the object assigned to the task.
So technically async is also blocking, with the difference that only the scheduler waits for work to do, while the functions you submit work with (connect, send, receive, ...) return immediately.
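For illustration, here's a minimal sketch of that pattern using the io_service-era API referred to above (the echo behaviour, port, and handler names are my own illustrative choices, not from the original):

#include <boost/asio.hpp>
#include <array>
#include <memory>

using boost::asio::ip::tcp;

boost::asio::io_service io;   // io_service::run() blocks and dispatches completed work

void start_read(std::shared_ptr<tcp::socket> sock,
                std::shared_ptr<std::array<char, 1024>> buf) {
    // async_read_some returns immediately; the lambda runs when data arrives.
    sock->async_read_some(boost::asio::buffer(*buf),
        [sock, buf](const boost::system::error_code& ec, std::size_t n) {
            if (ec) return;
            // Echo the data back, then wait for more.
            boost::asio::async_write(*sock, boost::asio::buffer(*buf, n),
                [sock, buf](const boost::system::error_code&, std::size_t) {
                    start_read(sock, buf);
                });
        });
}

void start_accept(tcp::acceptor& acceptor) {
    auto sock = std::make_shared<tcp::socket>(io);
    acceptor.async_accept(*sock,
        [&acceptor, sock](const boost::system::error_code& ec) {
            if (!ec)
                start_read(sock, std::make_shared<std::array<char, 1024>>());
            start_accept(acceptor);   // keep accepting further clients
        });
}

int main() {
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 12345));
    start_accept(acceptor);
    io.run();   // the only place this thread ever blocks
}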

I'll assume you're talking TCP and not UDP. I definitely recommend skipping async sockets; those are favored by Microsoft and its supporters but are not portable. Instead, use the vanilla stuff: here's an example with server and client.

Related

Am I potentially shooting myself in the foot by making this UDP listen socket non-blocking?

I am building a system that sends and receives UDP packets to multiple pieces of remote hardware.
A function mySend passes new information to send to a third-party API that I must use to construct the actual UDP datagram. The API locks a mutex during its work constructing and sending the datagram.
A function myRecv runs in a worker thread, repeatedly asking the third-party API to poll for new data. The API invokes a UDP-receive function which runs select and recvfrom to grab any responses from the remote hardware.
The thread that listens and handles incoming packets is problematic at the moment due to the design of the API I'm using to decode those packets, which locks its own mutex around the call to the UDP-receive function. But this function performs a blocking select.
The consequence is that the mutex is almost always locked by the receive thread and, in fact, the contention is so bad that mySend is practically never able to obtain the lock. The result is that the base thread is effectively deadlocked.
To fix this, I'm trying to justify making the listen socket non-blocking and performing a usleep between select calls where no data was available.
Now, if my blocking select had a 3-second timeout, that's not the same as performing a non-blocking select every 3 seconds (in the worst case) because of the introduction of latency in looking for and consequently handling incoming packets. So the usleep period has to be a lot lower, say 300-500ms.
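For reference, a sketch of the loop being proposed (the 300 ms period and the handler are placeholders, not settled choices):

#include <fcntl.h>
#include <sys/select.h>
#include <unistd.h>

// Put the listen socket in non-blocking mode so recvfrom() can never stall
// while the API's mutex is held.
void make_nonblocking(int fd) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

void receive_loop(int fd) {
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);
        struct timeval tv = {0, 0};   // zero timeout: select() returns immediately
        if (select(fd + 1, &readfds, nullptr, nullptr, &tv) > 0) {
            // Data is waiting: let the API's recvfrom() drain it now.
            // handle_incoming(fd);   // hypothetical handler
        } else {
            usleep(300 * 1000);       // no data: sleep 300 ms with the mutex released
        }
    }
}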
My concern is mostly in the additional system calls — this is a lot more calls to select, and new calls to usleep. At times I will expect next to no incoming data for tens of seconds or even minutes, but there will also likely be periods during which I might expect to receive perhaps 40KB over a few seconds.
My first instinct, if this were all my own software, would be to tighten up the use of mutexes such that no locking was in place around select at all, and then there'd be no problem. But I'd like to avoid hacking about in the 3rd-party API if I don't have to.
Simple time-based profiling is not really enough at this stage because this mechanism needs to scale really well, and I don't have the means to test at scale right now. Consequently I'm trying to gather some anecdotal evidence in order to steer my decision-making.
Is moving to a non-blocking socket the right approach?
Or would I be better off hacking up the third-party API (which I'd rather not do) to tighten their mutex usage?
My team, the developers of the 3rd-party library and I have all come to the conclusion that the hack is suitable for deployment, and that its benefits outweigh the concerns and disadvantages associated with my potential alternative workarounds.
The real solution is, of course, to push a proper design fix into the 3rd party library; this is a way off as it would be fairly extensive and nobody really cares enough, but it does give us the answer to this question.

C++ Fastest Way to Hit a URL

I'm trying to ping a URL on a server in the middle of my high-performance C++ application, where every millisecond is critical. I don't care about the return data from the query... I just need to send an HTTP request to a specific URL (to cause it to load), and I'm trying to find the most effective, non-blocking method to accomplish this.
My application uses Boost::ASIO, but most methods to do this seem to involve building and tearing down sockets each time (which might unfortunately be necessary), but I'm hoping there's a basic C/C++ socket one-liner that won't cause any overhead, memory leaks, blocking, etc. Just quickly open a socket, shoot the HTTP request off, and move along.
And this will need to happen thousands of times per second, so socket overhead matters (I don't want to flood the OS).
Anyone have any advice on the most efficient way to accomplish this?
Thanks so much!
With thousands of notifications sent per second, I can't imagine opening a socket connection for each one. That would probably be too inefficient due to the overhead. So, as Casey suggested, try using a dedicated connection.
Since it sounds like you are doing quite a bit of processing on your main thread, you might consider creating a worker thread for the socket work. You will probably need to use thread synchronization objects like a mutex or critical section to single thread the code - at least when updating a container (probably a queue) from your main thread and reading it from the worker thread.
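A rough sketch of that producer/consumer arrangement (the function names are illustrative, and the long-lived, already-connected socket is assumed):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <sys/socket.h>

std::queue<std::string> pending;   // requests queued by the main thread
std::mutex mtx;
std::condition_variable cv;

// Main thread: enqueue the HTTP request text and return immediately.
void fire_request(std::string request) {
    {
        std::lock_guard<std::mutex> lock(mtx);
        pending.push(std::move(request));
    }
    cv.notify_one();
}

// Worker thread: drain the queue onto one dedicated, persistent socket.
void sender_loop(int sock) {
    for (;;) {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [] { return !pending.empty(); });
        std::string req = std::move(pending.front());
        pending.pop();
        lock.unlock();
        // send() may accept fewer bytes than requested; loop until done.
        size_t sent = 0;
        while (sent < req.size()) {
            ssize_t n = send(sock, req.data() + sent, req.size() - sent, 0);
            if (n <= 0) break;   // error handling elided in this sketch
            sent += static_cast<size_t>(n);
        }
    }
}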

Multiuser chat server c++

I am building a Chat Server (which allows private messages between users) in C++... just as a challenge for myself, and I've hit a dead end where I don't know which approach may be better.
By the way: I am fairly new to C++; that's why I want a challenge... so if there are other, more optimal ways (multithreading, etc.), please let me know.
Option A
I have a C++ application running that has an array of sockets, reads all the input (looping through all the sockets) on every pass (a 1-second loop, I guess) and stores it to the DB (a log is required), and after that loops over all the sockets again, sending whatever is needed on each socket.
Pros: One single process, contained. Easy to develop.
Cons: I see it as hardly scalable, and a single point of failure... I mean, what about performance with 20k sockets?
Option B
I have a c++ application listening to connections.
When a connection is received, it forks a subprocess that handles that socket: reading all the user's input and saving it to the DB, and on every loop checking the DB for any required output to write to the socket.
Pros: If the daemon is small enough, having a process per socket is likely more scalable. And at the same time, if a process fails, all the others stay online.
Cons: Harder to develop. Maybe it consumes too many resources to maintain a process for each connection.
What option do you think is the best? Any other idea or suggestion is welcome :)
As mentioned in the comments, there is an additional alternative which is to use select() or poll() (or, if you don't mind making your application platform-specific, something like epoll()). Personally I would suggest poll() because I find it more convenient, but I think only select() is available on at least some versions of Windows - I don't know whether running on Windows is important to you.
The basic approach here is that you first add all your sockets (including a listen socket, if you're listening for connections) to a structure and then call select() or poll() as appropriate. This call will block your application until at least one of the sockets has some data to read, and then you get woken up, go through the socket(s) that are ready for reading, process the data and then jump back into blocking again. You generally do this in a loop, something like:
while (running) {
int rc = poll(...);
// Handle active file descriptors here.
}
This is a great way to write an application which is primarily IO-bound - i.e. it spends much more time handling network (or disk) traffic than it does actually processing the data with the CPU.
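Filling in that skeleton, a minimal sketch of a poll()-based server loop (the buffer size and the commented-out message handling are illustrative):

#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

void serve(int listen_fd) {
    std::vector<pollfd> fds;
    fds.push_back({listen_fd, POLLIN, 0});   // slot 0: the listen socket

    while (true) {
        // Block until at least one socket is ready for reading.
        if (poll(fds.data(), fds.size(), -1) < 0)
            break;

        if (fds[0].revents & POLLIN) {        // new incoming connection
            int client = accept(listen_fd, nullptr, nullptr);
            if (client >= 0)
                fds.push_back({client, POLLIN, 0});
        }

        for (size_t i = 1; i < fds.size(); ++i) {
            if (fds[i].revents & POLLIN) {
                char buf[4096];
                ssize_t n = recv(fds[i].fd, buf, sizeof buf, 0);
                if (n <= 0) {                 // closed or error: drop the socket
                    close(fds[i].fd);
                    fds.erase(fds.begin() + i--);
                } else {
                    // process_message(fds[i].fd, buf, n);   // hypothetical chat logic
                }
            }
        }
    }
}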
As also mentioned in the comments, another approach is to fork a thread per connection. This is quite effective, and you can use simple blocking IO in each thread to read and write to that connection. Personally I would advise against this approach for several reasons, most of which are largely personal preference.
Firstly, it's fiddly to handle connections where you need to write large amounts of data at a time. A socket can't guarantee to write all pending data at once (i.e. the amount that it sent may not be the full amount you requested). In this case you have to buffer up the pending data locally and wait until there's room in the socket to send it. This means at any given time, you might be waiting for two conditions - either the socket is ready to send, or the socket is ready to read. You could, of course, avoid reading from the socket until all the pending data is sent, but this introduces latency into handling the data. Or, you could use select() or poll() on just that connection - but if so, why bother using threads at all, just handle all the connections that way. You could also use two threads per connection, one for reading and one for writing, which is probably the best approach if you're not confident whether you can always send all messages in a single call, although this doubles the number of threads you need which could make your code more complicated and slightly increase resource usage.
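To make the buffering just described concrete, here's a rough sketch with a non-blocking socket (the Conn struct and its fields are illustrative):

#include <string>
#include <sys/socket.h>

struct Conn {
    int fd;                    // a non-blocking socket
    std::string outbuf;        // data accepted for sending but not yet written
    bool want_write = false;   // whether the event loop should watch for writability
};

// Queue data, then push out as much as the socket will take right now.
void send_or_buffer(Conn& c, const char* data, size_t len) {
    c.outbuf.append(data, len);
    ssize_t n = send(c.fd, c.outbuf.data(), c.outbuf.size(), 0);
    if (n > 0)
        c.outbuf.erase(0, static_cast<size_t>(n));
    // Anything left over waits until the event loop reports the socket
    // writable again (e.g. by setting POLLOUT for this fd).
    c.want_write = !c.outbuf.empty();
}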
Secondly, if you plan to handle many connections, or a high connection turnover, threads are somewhat more of a load on the system than using select() or friends. This isn't a particularly big deal in most cases, but it's a factor for larger applications. This probably isn't a practical issue unless you were writing something like a webserver that was handling hundreds of requests a second, but I thought it was relevant to mention for reference. If you're writing something of this scale you'd likely end up using a hybrid approach anyway, where you multiplexed some combination of processes, threads and non-blocking IO on top of each other.
Thirdly, some programmers find threads complicated to deal with. You need to be very careful to make all your shared data structures thread-safe, either with exclusive locking (mutexes) or using someone else's library code which does this for you. There are a lot of examples and libraries out there to help you with this, but I'm just pointing out that care is needed - whether multithreaded coding suits you is a matter of taste. It's relatively easy to forget to lock something and have your code work fine in testing because the threads don't happen to contend that data structure, and then find hard-to-diagnose issues when this happens under higher load in the real world. With care and discipline, it's not too hard to write robust multithreaded code and I have no objection to it (though opinions vary), but you should be aware of the care required. To some extent this applies to writing any software, of course, it's just a matter of degree.
Those issues aside, threads are quite a reasonable approach for many applications and some people seem to find them easier to deal with than non-blocking IO with select().
As to your approaches, A will work but is wasteful of CPU because you have to wake up every second regardless of whether there's actual useful work to do. Also, you introduce up to a second's delay in handling messages, which could be irritating for a chat server. In general I would suggest that something like select() is a much better approach than this.
Option B could work, although when you want to send messages between connections you're going to have to use something like pipes to communicate between processes, and that's a bit of a pain. You'll end up having to wait on both your incoming pipe (for data to send) as well as the socket (for data to receive), and thus you end up with effectively the same problem, having to wait on two filehandles with something like select() or threads. Really, as others have said, threads are the right way to process each connection separately. Separate processes are also a little more expensive in resources than threads (although on platforms such as Linux the copy-on-write approach to fork() means it's not actually too bad).
For small applications with only, say, tens of connections there's not an awful lot technically to choose between threads and processes, it largely depends on which style appeals to you more. I would personally use non-blocking IO (some people call this asynchronous IO, but that's not how I would use the term) and I've written quite a lot of code that does that as well as lots of multithreaded code, but it's still only my personal opinion really.
Finally, if you want to write portable non-blocking IO loops I strongly suggest investigating libev (or possibly libevent, but personally I find the former easier to use and more performant). These libraries use different primitives such as select() and poll() on different platforms, so your code can remain the same, and they also tend to offer slightly more convenient interfaces.
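For flavour, a tiny libev accept loop (a sketch; the port is arbitrary, and a real server would register a watcher per client instead of closing immediately):

#include <ev.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Invoked by the event loop whenever the listen socket becomes readable.
static void accept_cb(struct ev_loop* loop, ev_io* w, int revents) {
    int client = accept(w->fd, nullptr, nullptr);
    if (client >= 0)
        close(client);   // placeholder: would normally start watching this fd
}

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(12345);
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
    listen(fd, SOMAXCONN);

    struct ev_loop* loop = EV_DEFAULT;   // libev's default event loop
    ev_io watcher;
    ev_io_init(&watcher, accept_cb, fd, EV_READ);
    ev_io_start(loop, &watcher);
    ev_run(loop, 0);                     // dispatch callbacks until stopped
}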
If you have any more questions on any of that, feel free to ask.

Program structure for bi-directional TCP communication using Boost::Asio

First off, I hope my question makes sense and is even possible! From what I've read about TCP sockets and Boost::ASIO, I think it should be.
What I'm trying to do is to set up two machines and have a working bi-directional read/write link over TCP between them. Either party should be able to send some data to be used by the other party.
The first confusing part about TCP(/IP?) is that it requires this client/server model. However, reading shows that either side is capable of writing or reading, so I'm not yet completely discouraged. I don't mind establishing an arbitrary party as the client and the other as the server. In my application, that can be negotiated ahead of time and is not of concern to me.
Unfortunately, all of the examples I come across seem to focus on a client connecting to a server, and the server immediately sending some bit of data back. But I want the client to be able to write to the server also.
I envision some kind of loop wherein I call io_service.poll(). If the polling shows that the other party is waiting to send some data, it will call read() and accept that data. If there's nothing waiting in the queue, and it has data to send, then it will call write(). With both sides doing this, they should be able to both read and write to each other.
My concern is how to avoid situations in which both enter into some synchronous write() operation at the same time. They both have data to send, and then sit there waiting to send it on both sides. Does that problem just imply that I should only do asynchronous write() and read()? In that case, will things blow up if both sides of a connection try to write asynchronously at the same time?
I'm hoping somebody can ideally:
1) Provide a very high-level structure or best practice approach which could accomplish this task from both client and server perspectives
or, somewhat less ideally,
2) Say that what I'm trying to do is impossible and perhaps suggest a workaround of some kind.
What you want to do is absolutely possible. Web traffic is a good example of a situation where the "client" sends something long before the server does. I think you're getting tripped up by the words "client" and "server".
What those words really describe is the method of connection establishment. In the case of "client", it's "active" establishment; in the case of "server" it's "passive". Thus, you may find it less confusing to use the terms "active" and "passive", or at least think about them that way.
With respect to finding example code that you can use as a basis for your work, I'd strongly encourage you to take a look at W. Richard Stevens' "Unix Network Programming" book. Any edition will suffice, though the 2nd Edition will be more up to date. It will be only C, but that's okay, because the socket API is C only. boost::asio is nice, but it sounds like you might benefit from seeing some of the nuts and bolts under the hood.
My concern is how to avoid situations in which both enter into some synchronous write() operation at the same time. They both have data to send, and then sit there waiting to send it on both sides. Does that problem just imply that I should only do asynchronous write() and read()? In that case, will things blow up if both sides of a connection try to write asynchronously at the same time?
It sounds like you are somewhat confused about how protocols are used. TCP only provides a reliable stream of bytes, nothing more. On top of that, applications speak a protocol so they know when and how much data to read and write. Both the client and the server writing data concurrently can lead to a deadlock if neither side is reading the data. One way to guard against that is to use a deadline_timer to cancel the asynchronous write operation if it has not completed within a certain amount of time.
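A sketch of that timer-guarded write with io_service-era Asio (the 5-second deadline is an arbitrary choice, and sock, timer and payload must all outlive the async operations):

#include <boost/asio.hpp>
#include <string>

void guarded_write(boost::asio::ip::tcp::socket& sock,
                   boost::asio::deadline_timer& timer,
                   const std::string& payload) {
    // Arm the deadline: if the write hasn't completed in 5 seconds,
    // cancel the outstanding operations on the socket.
    timer.expires_from_now(boost::posix_time::seconds(5));
    timer.async_wait([&sock](const boost::system::error_code& ec) {
        if (!ec)              // not cancelled, so the deadline actually fired
            sock.cancel();
    });

    boost::asio::async_write(sock, boost::asio::buffer(payload),
        [&timer](const boost::system::error_code& ec, std::size_t) {
            if (ec == boost::asio::error::operation_aborted)
                return;       // the timer cancelled this write
            timer.cancel();   // write finished in time; disarm the timer
        });
}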
You should be using asynchronous methods when writing a server. Synchronous methods are appropriate for some trivial client applications.
TCP is full-duplex, meaning you can send and receive data in the order you want. To prevent a deadlock in your own protocol (the high-level behaviour of your program), when you have the opportunity to both send and receive, you should receive as a priority. With epoll in level-triggered mode that looks like: epoll for send and receive, if you can receive do so, otherwise if you can send and have something to send do so. I don't know how boost::asio or threads fit here; you do need some measure of control on how sends and receives are interleaved.
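A sketch of that receive-first policy with level-triggered epoll (Linux-specific; the handlers are hypothetical):

#include <sys/epoll.h>

void event_loop(int epfd) {
    epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (events[i].events & EPOLLIN) {
                // Receive as a priority: always drain incoming data first.
                // on_readable(fd);      // hypothetical read handler
            } else if (events[i].events & EPOLLOUT) {
                // Only send when there is nothing to receive on this fd.
                // on_writable(fd);      // hypothetical write handler
            }
        }
    }
}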
The word you're looking for is "non-blocking", which is entirely different from POSIX asynchronous I/O (which involves signals).
The idea is that you use something like fcntl(fd,F_SETFL,O_NONBLOCK). write() will return the number of bytes successfully written (if positive) and both read() and write() return -1 and set errno = EAGAIN if "no progress can be made" (no data to read or write window full).
You then use something like select/epoll/kqueue which blocks until a socket is readable/writable (depending on the flags set).
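Putting those two pieces together, a minimal sketch (error handling abbreviated):

#include <cerrno>
#include <fcntl.h>
#include <sys/select.h>
#include <unistd.h>

// Switch the socket to non-blocking mode, as described above.
void set_nonblocking(int fd) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

// Read whatever is available; returns false once the peer closes or errors.
bool drain(int fd) {
    char buf[4096];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0) continue;      // got data; keep reading
        if (n == 0) return false; // peer closed the connection
        return errno == EAGAIN;   // EAGAIN: "no progress can be made" right now
    }
}

// Block until the socket is readable, then drain it.
void wait_and_read(int fd) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    if (select(fd + 1, &rfds, nullptr, nullptr, nullptr) > 0)
        drain(fd);
}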

Most suitable asynchronous socket model for an instant messenger client?

I'm working on an instant messenger client in C++ (Win32) and I'm experimenting with different asynchronous socket models. So far I've been using WSAAsyncSelect for receiving notifications via my main window. However, I've been experiencing some unexpected results, with Winsock spawning an additional 5-6 threads (on top of the initial thread created when calling WSAAsyncSelect) for one single socket.
I have plans to revamp the client to support additional protocols via DLLs, and I'm afraid that my current solution won't be suitable, based on my experiences with WSAAsyncSelect and my reluctance to mix network code with UI code (in the message loop).
I'm looking for advice on what a suitable asynchronous socket model could be for a multi-protocol IM client which needs to be able to handle roughly 10-20+ connections (depending on the number of protocols, protocol design, etc.), while not using an excessive number of threads -- I am very interested in performance and keeping resource usage down.
I've been looking at IO Completion Ports, but from what I've gathered, they seem like overkill. I'd very much appreciate some input on what a suitable socket solution could be!
Thanks in advance! :-)
There are four basic ways to handle multiple concurrent sockets.
1. Multiplexing, that is, using select() to poll the sockets.
2. AsyncSelect, which is basically what you're doing with WSAAsyncSelect.
3. Worker Threads, creating a single thread for each connection.
4. IO Completion Ports, or IOCP. dp mentions them above, but basically they are an OS-specific way to handle asynchronous I/O, which has very good performance, but is a little more confusing.
Which you choose often depends on where you plan to go. If you plan to port the application to other platforms, you may want to choose #1 or #3, since select is not terribly different from other models used on other OS's, and most other OS's also have the concept of threads (though they may operate differently). IOCP is typically Windows-specific (although Linux now has some async I/O functions as well).
If your app is Windows-only, then you basically want to choose the best model for what you're doing. This would likely be either #3 or #4. #4 is the most efficient, as it calls back into your application (similar to WSAAsyncSelect, but with better performance and fewer issues).
The big thing you have to deal with when using threads (either IOCP or WorkerThreads) is marshaling the data back to a thread that can update the UI, since you can't call UI functions on worker threads. Ultimately, this will involve some messaging back and forth in most cases.
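On Windows that hand-off is often done by posting a message to the UI thread, roughly like this (a sketch; WM_APP_DATA and the payload type are made up for illustration):

#include <windows.h>
#include <string>

const UINT WM_APP_DATA = WM_APP + 1;   // private "network data arrived" message

// Worker/IOCP thread: transfer ownership of the payload to the UI thread.
void notify_ui(HWND hwnd, const std::string& text) {
    auto* payload = new std::string(text);
    if (!PostMessage(hwnd, WM_APP_DATA, 0, reinterpret_cast<LPARAM>(payload)))
        delete payload;   // window gone or queue full: avoid leaking
}

// In the UI thread's window procedure:
// case WM_APP_DATA: {
//     std::unique_ptr<std::string> msg(reinterpret_cast<std::string*>(lParam));
//     // ...update the UI with *msg...
//     return 0;
// }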
If you were developing this in managed code, I'd tell you to look at Jeffrey Richter's AsyncEnumerator, but you've chosen C++, which has its pros and cons. Lots of people have written various network libraries for C++; maybe you should spend some time researching some of them.
Consider using the ASIO library, which you can find in Boost (www.boost.org).
Just use synchronous models. Modern operating systems handle multiple threads quite well. Async IO is really only needed in rare situations, mostly on servers.
In some ways IO Completion Ports (IOCP) are overkill, but to be honest I find their model for asynchronous sockets easier to use than the alternatives (select, non-blocking sockets, overlapped I/O, etc.).
The IOCP API could be clearer, but once you get past that, it's actually easier to use, I think. Back then, the biggest obstacle was platform support (it needed an NT-based OS; i.e., Windows 9x did not support IOCP). With that restriction long gone, I'd consider it.
If you do decide to use IOCP (which, IMHO, is the best option if you're writing for Windows) then I've got some free code available which takes away a lot of the work that you need to do.
Latest version of the code and links to the original articles are available from here.
And my views on how my framework compares to Boost::ASIO can be found here: http://www.lenholgate.com/blog/2008/09/how-does-the-socket-server-framework-compare-to-boostasio.html.