Is epoll a bad idea for a UDP client? - c++

I have created a Linux server using epoll. And I realized that the clients will use UDP packets...
I just erased the "listen" part from my code and it seems to be working. But I was wondering about any hidden issues or problems I might face.
Also, is it a bad idea to use epoll for the server if the clients are sending UDP packets?

If the respective thread does not need to do anything else but receive UDP packets, you can just as well block on recvfrom; this will have the exact same effect with one less syscall and less code complexity.
On the other hand, if you need to do other things periodically, or with some timing constraints that should not depend on whether packets arrive on the wire, it's better to use epoll anyway, even if it seems overkill.
The big advantage of epoll is that besides being reasonably efficient, it is comfortable and extensible (you can plug in a signalfd, timerfd or eventfd and many other things).
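A minimal sketch of that epoll setup for a UDP socket (the port number and buffer sizes are arbitrary assumptions; error handling trimmed):

    #include <netinet/in.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        // Plain UDP socket: bind() only, no listen()/accept() needed.
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);                 // assumed port
        bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

        int epfd = epoll_create1(0);
        epoll_event ev{};
        ev.events = EPOLLIN;
        ev.data.fd = sock;
        epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev);   // watch the UDP socket

        epoll_event ready[16];
        for (;;) {
            int n = epoll_wait(epfd, ready, 16, -1); // blocks until readable
            for (int i = 0; i < n; ++i) {
                char buf[2048];
                sockaddr_in peer{};
                socklen_t plen = sizeof(peer);
                ssize_t len = recvfrom(ready[i].data.fd, buf, sizeof(buf), 0,
                                       reinterpret_cast<sockaddr*>(&peer), &plen);
                if (len > 0) { /* handle one datagram from peer */ }
            }
        }
    }

A timerfd or eventfd can be registered with the same epoll_ctl pattern, which is where the extensibility mentioned above pays off.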

Related

Is there a better way to use asynchronous TCP sockets in C++ rather than poll or select?

I recently started writing some C++ code that uses sockets, which I'd like to be asynchronous. I've read many posts about how poll and select can be used to make my sockets asynchronous (using poll or select to wait for a send or recv buffer), but on my server side I have an array of struct pollfd, where every time the listening socket accepts a connection, it adds it to the array of struct pollfd so that it can monitor that socket's recv (POLLIN).
My problem is that if I have 5000 sockets connected to my listening socket on my server, then the array of struct pollfd would be of size 5000, since it would be monitoring all the connected sockets. BUT the only way I know to check whether a recv for a socket is ready is by looping through all the items in the array of struct pollfd to find the ones whose revents equals POLLIN. This just seems inefficient when the number of connected sockets becomes very large. Is there a better way to do this?
How does the boost::asio library handle async_accept, async_send, etc...? How should I handle it?
What the heck, I will go ahead and write up an answer.
I am going to ignore the "asynchronous" vs "non-blocking" terminology because I believe it is irrelevant to your question.
You are worried about performance when handling thousands of network clients, and you are right to be worried. You have rediscovered the C10K problem. Back when the Web was young, people saw a need for a small number of fast servers to handle a large number of (relatively) slow clients. The existing select/poll type interfaces require linear scans -- in both kernel and user space -- across all sockets to determine which are ready. If many sockets are often idle, your server can wind up spending more time figuring out what work to do than doing actual work.
Fast-forward to today, where we have basically two approaches for dealing with this problem:
1) Use one thread per socket and just issue blocking reads and writes. This is usually the simplest to code, in my opinion, and modern operating systems are quite good at letting idle threads sleep peacefully out of the way without imposing any significant performance overhead. In my experience, this approach works very well for hundreds of clients; I cannot personally say how it will work for thousands.
2) Use one of the platform-specific interfaces that were introduced to tackle the C10K problem. That means epoll (Linux), kqueue (BSD/Mac), or completion ports (Windows). (If you think epoll is the same as poll, look again.) All of these will only notify your application about sockets that are actually ready, avoiding the wasteful linear scan across idle connections. There are several libraries that make these platform-specific interfaces easier to use, including libevent, libev, and Boost.Asio. You will find that all of them ultimately invoke epoll on Linux, kqueue on BSD, and so on, whenever such interfaces are available.
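To make the contrast with poll concrete, here is a rough sketch of an epoll-based accept/read loop: epoll_wait returns only the sockets that are actually ready, so the inner loop runs over n ready events rather than all 5000 connections (listen_sock setup omitted; error handling trimmed):

    #include <netinet/in.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // epoll_wait() hands back only the fds that are actually ready, so the
    // inner loop runs over n ready events, not over all connections.
    void event_loop(int listen_sock) {
        int epfd = epoll_create1(0);
        epoll_event ev{};
        ev.events = EPOLLIN;
        ev.data.fd = listen_sock;
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_sock, &ev);

        epoll_event ready[64];
        for (;;) {
            int n = epoll_wait(epfd, ready, 64, -1);
            for (int i = 0; i < n; ++i) {
                if (ready[i].data.fd == listen_sock) {
                    // New connection: start watching it too.
                    int client = accept(listen_sock, nullptr, nullptr);
                    epoll_event cev{};
                    cev.events = EPOLLIN;
                    cev.data.fd = client;
                    epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
                } else {
                    char buf[4096];
                    ssize_t len = recv(ready[i].data.fd, buf, sizeof(buf), 0);
                    if (len <= 0) close(ready[i].data.fd);  // peer gone
                    else { /* process len bytes */ }
                }
            }
        }
    }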

Can single-buffer blocking WSASend deliver partial data?

I've pretty much always used send() with sockets and now I'm moving on to the WSA functions. With send(), I have a sendall() helper that ensures all data is delivered even if it doesn't happen in one try and a partial send occurs on the first call.
So, instead of learning the hard way or over-complicating code when I don't have to, I decided to ask you:
Can a blocking WSASend() send partial data or does it send everything before it returns or fails? Or should I check the bytes sent vs. expected to send and keep at it until everything is delivered?
ANSWER: Overlapped WSASend() does not send partial data; if it ever does, it means the connection has terminated. I've never encountered that case yet.
From the WSASend docs:
If the socket is non-blocking and stream-oriented, and there is not sufficient space in the transport's buffer, WSASend will return with only part of the application's buffers having been consumed. Given the same buffer situation and a blocking socket, WSASend will block until all of the application buffer contents have been consumed.
I haven't tried this behavior, though. BTW, why are you rewriting your code to use the WSA functions? Switching from the standard BSD socket API just to use the socket with basically the same blocking behavior doesn't really seem like a good idea to me. Just keep the old blocking code with send() plus the "retry code"; that way it's portable and bulletproof. Saving one or two comparisons is not what makes your I/O code performant.
Switch to the specialized WSA functions only if you are trying to exploit some Windows-specific strengths, or if you want to use non-blocking sockets with WSAWaitForMultipleObjects, which is a bit better than the standard select; but even in that case you can simply go with send and recv, as I did.
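For reference, the "retry code" mentioned above is typically a loop like the following POSIX-flavored sketch (the sendall name mirrors the asker's helper, so its exact shape here is an assumption; on Windows the length parameter and return types differ slightly):

    #include <cstddef>
    #include <sys/socket.h>
    #include <sys/types.h>

    // Keep calling send() until the whole buffer is out or an error occurs.
    // Returns 0 on success, -1 on error.
    int sendall(int sock, const char* buf, size_t len) {
        size_t sent = 0;
        while (sent < len) {
            ssize_t n = send(sock, buf + sent, len - sent, 0);
            if (n <= 0)
                return -1;                  // error or connection closed
            sent += static_cast<size_t>(n);
        }
        return 0;
    }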
In my opinion, using epoll/kqueue/IOCP (or a library that abstracts these away) with sockets is the way to go. There are some very basic tasks that can be done with blocking sockets, but once you cross the line and need non-blocking sockets, switching straight to epoll/kqueue/IOCP is the way to go instead of programming against the painful select or WSAWaitForMultipleObjects based APIs. epoll/kqueue/IOCP are not only better but also easier to program than the select-based alternatives. Really. They are more modern APIs that were designed with more experience behind them. (They are not cross-platform, but even select has portability issues...)
The previously mentioned APIs for Linux/BSD/Windows are based on the same concept, but in my opinion the simplest and easiest to learn is the epoll API of Linux. It is way better than a select call and 100x easier to program once you get the idea. IOCP on Windows may seem a bit more complicated at first.
If you haven't yet used these APIs, definitely give epoll a go if you are familiar with Linux, and then implement the same thing on Windows with IOCP, which is based on a similar concept but with somewhat more complicated overlapped I/O programming. With IOCP you will have a reason to use WSASend, because you cannot start overlapped I/O on a socket with send, but you can do that with WSASend (or WriteFile).
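To illustrate that last point, posting an overlapped send looks roughly like this sketch (it assumes the socket is already connected and associated with an IOCP; error handling trimmed):

    #include <winsock2.h>

    // Post an overlapped send; the completion is dequeued later with
    // GetQueuedCompletionStatus() on an IOCP worker thread.
    bool post_send(SOCKET sock, char* data, ULONG len) {
        WSABUF wsabuf;
        wsabuf.buf = data;               // data must stay alive until completion
        wsabuf.len = len;

        WSAOVERLAPPED* ov = new WSAOVERLAPPED{};  // freed when completion arrives

        int rc = WSASend(sock, &wsabuf, 1, nullptr, 0, ov, nullptr);
        if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING) {
            delete ov;                   // immediate failure
            return false;
        }
        return true;  // queued; the byte count comes with the completion packet
    }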
EDIT: If you are going for max performance with IOCP, then here are some additional hints:
Drop blocking operations. This is very important. A serious networking engine cannot afford blocking I/O; it simply doesn't scale on any of the platforms. Do overlapped operations for both send and receive; overlapped I/O is the big gun of Windows.
Set up a thread pool that processes the completed I/O operations. Set up test clients that bomb your server with real-world-usage-like messages and parallel connection counts, and under stress tweak the buffer sizes and thread counts for your actual target hardware.
Set the SO_RCVBUF and SO_SNDBUF sizes of your sockets to zero and play around with the size of the buffers that you use to send and receive data. Setting the rcv/snd buffer of the socket handle to zero allows the TCP stack to receive/send data directly to/from your buffers, avoiding an additional copy between your userspace buffers and the socket buffers. The optimal size for these buffers is also subject to tweaking: I usually use buffer sizes of at least a few tens of KB, but sometimes, in the case of large-volume transfers, 1-2 MB buffers are better, depending on the number of parallel busy connections. Again, tweak the values while stressing the server with test clients that behave like real-world clients. When the first working version of your network engine is ready, build a test client on top of it that can simulate many (maybe thousands of) parallel clients, depending on the real-world usage of your server.
You will need "per connection software send buffers" inside your network engine, and you may (or may not) want to control the max size of the send buffers. On reaching the max send buffer size you may want to block or to discard messages/data, depending on what you want to do. Encapsulate this special buffer and provide two nice interfaces to it: one for the threads that are putting data into the buffer, and another that is used by the IOCP sender code, as in the sketch below. This buffer is usually a very critical part of the whole thing, and I usually had a lot of bugs around this part of the code, so make sure to design its interface nicely to minimize the number of bugs. Depending on how your application constructs and puts messages into the queue, you can play around a lot with the internal implementation (size of storage chunks, Nagle-like optimizations, ...).
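A bare-bones sketch of such a buffer, with the two interfaces described above; the cap policy here simply rejects the push, and all names and choices are illustrative assumptions:

    #include <cstddef>
    #include <deque>
    #include <mutex>
    #include <vector>

    // Per-connection software send buffer with a size cap. Producer threads
    // push data in; the IOCP sender code pops the next chunk to post.
    class SendBuffer {
    public:
        explicit SendBuffer(std::size_t max_bytes) : max_bytes_(max_bytes) {}

        // Producer interface: returns false if the cap would be exceeded
        // (the caller decides whether to block, retry, or drop).
        bool push(const char* data, std::size_t len) {
            std::lock_guard<std::mutex> lock(mutex_);
            if (bytes_ + len > max_bytes_) return false;
            queue_.emplace_back(data, data + len);
            bytes_ += len;
            return true;
        }

        // Sender interface: grab the next chunk to hand to WSASend.
        bool pop(std::vector<char>& out) {
            std::lock_guard<std::mutex> lock(mutex_);
            if (queue_.empty()) return false;
            out = std::move(queue_.front());
            queue_.pop_front();
            bytes_ -= out.size();
            return true;
        }

    private:
        std::mutex mutex_;
        std::deque<std::vector<char>> queue_;
        std::size_t bytes_ = 0;
        const std::size_t max_bytes_;
    };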

Program structure for bi-directional TCP communication using Boost::Asio

First off, I hope my question makes sense and is even possible! From what I've read about TCP sockets and Boost::ASIO, I think it should be.
What I'm trying to do is to set up two machines and have a working bi-directional read/write link over TCP between them. Either party should be able to send some data to be used by the other party.
The first confusing part about TCP(/IP?) is that it requires this client/server model. However, reading shows that either side is capable of writing or reading, so I'm not yet completely discouraged. I don't mind establishing an arbitrary party as the client and the other as the server. In my application, that can be negotiated ahead of time and is not of concern to me.
Unfortunately, all of the examples I come across seem to focus on a client connecting to a server, and the server immediately sending some bit of data back. But I want the client to be able to write to the server also.
I envision some kind of loop wherein I call io_service.poll(). If the polling shows that the other party is waiting to send some data, it will call read() and accept that data. If there's nothing waiting in the queue, and it has data to send, then it will call write(). With both sides doing this, they should be able to both read and write to each other.
My concern is how to avoid situations in which both enter into some synchronous write() operation at the same time. They both have data to send, and then sit there waiting to send it on both sides. Does that problem just imply that I should only do asynchronous write() and read()? In that case, will things blow up if both sides of a connection try to write asynchronously at the same time?
I'm hoping somebody can ideally:
1) Provide a very high-level structure or best practice approach which could accomplish this task from both client and server perspectives
or, somewhat less ideally,
2) Say that what I'm trying to do is impossible and perhaps suggest a workaround of some kind.
What you want to do is absolutely possible. Web traffic is a good example of a situation where the "client" sends something long before the server does. I think you're getting tripped up by the words "client" and "server".
What those words really describe is the method of connection establishment. In the case of "client", it's "active" establishment; in the case of "server" it's "passive". Thus, you may find it less confusing to use the terms "active" and "passive", or at least think about them that way.
With respect to finding example code that you can use as a basis for your work, I'd strongly encourage you to take a look at W. Richard Stevens' "Unix Network Programming" book. Any edition will suffice, though the 2nd Edition will be more up to date. It will be only C, but that's okay, because the socket API is C only. boost::asio is nice, but it sounds like you might benefit from seeing some of the nuts and bolts under the hood.
"My concern is how to avoid situations in which both enter into some synchronous write() operation at the same time. They both have data to send, and then sit there waiting to send it on both sides. Does that problem just imply that I should only do asynchronous write() and read()? In that case, will things blow up if both sides of a connection try to write asynchronously at the same time?"
It sounds like you are somewhat confused about how protocols are used. TCP only provides a reliable stream of bytes, nothing more. On top of that applications speak a protocol so they know when and how much data to read and write. Both the client and the server writing data concurrently can lead to a deadlock if neither side is reading the data. One way to solve that behavior is to use a deadline_timer to cancel the asynchronous write operation if it has not completed in a certain amount of time.
You should be using asynchronous methods when writing a server. Synchronous methods are appropriate for some trivial client applications.
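The deadline_timer approach mentioned above might look roughly like this (classic Boost.Asio; io_service and socket setup omitted, the 30-second deadline is an arbitrary assumption, and the caller must keep the data alive until the write completes):

    #include <boost/asio.hpp>
    #include <string>

    // Cancel a stalled async_write after a deadline.
    void write_with_deadline(boost::asio::ip::tcp::socket& socket,
                             boost::asio::deadline_timer& timer,
                             const std::string& data) {
        timer.expires_from_now(boost::posix_time::seconds(30));
        timer.async_wait([&socket](const boost::system::error_code& ec) {
            if (!ec) socket.cancel();   // deadline hit: abort pending I/O
        });

        boost::asio::async_write(socket, boost::asio::buffer(data),
            [&timer](const boost::system::error_code&, std::size_t) {
                timer.cancel();         // finished (or failed) in time
            });
    }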
TCP is full-duplex, meaning you can send and receive data in the order you want. To prevent a deadlock in your own protocol (the high-level behaviour of your program), when you have the opportunity to both send and receive, you should receive as a priority. With epoll in level-triggered mode that looks like: epoll for send and receive, if you can receive do so, otherwise if you can send and have something to send do so. I don't know how boost::asio or threads fit here; you do need some measure of control on how sends and receives are interleaved.
The word you're looking for is "non-blocking", which is entirely different from POSIX asynchronous I/O (which involves signals).
The idea is that you use something like fcntl(fd, F_SETFL, O_NONBLOCK). write() will return the number of bytes successfully written (if positive), and both read() and write() return -1 and set errno = EAGAIN if no progress can be made (no data to read, or the write window is full).
You then use something like select/epoll/kqueue which blocks until a socket is readable/writable (depending on the flags set).
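A minimal sketch of that pattern (POSIX; error handling trimmed):

    #include <cerrno>
    #include <fcntl.h>
    #include <unistd.h>

    // Put a socket into non-blocking mode.
    void set_nonblocking(int fd) {
        int flags = fcntl(fd, F_GETFL, 0);
        fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    // Returns bytes written, 0 if the kernel cannot make progress right now
    // (retry after select/epoll/kqueue reports the fd writable), -1 on error.
    ssize_t try_write(int fd, const char* buf, size_t len) {
        ssize_t n = write(fd, buf, len);
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            return 0;
        return n;
    }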

C++ Sockets Send() Thread-Safety

I am coding a socket server for 1000 clients maximum. The server is for my game; I'm using non-blocking sockets and about 10 threads that receive data simultaneously from different sockets (the first thread receives from sockets 0-100, the second from 101-200, and so on).
But if thread 1 wants to send data to all 1000 clients and thread 2 also wants to send data to all 1000 clients at the same time, is that safe? Are there any chances of the data getting messed up on the other (client) side?
If yes, I guess the only problem that can happen is that the client sometimes receives 2 or 10 packets as 1 packet. Is that correct? If yes, is there any solution to that :(
The usual pattern for dealing with many sockets is to have a dedicated thread polling for I/O events with select(2), poll(2), or better kqueue(2) or epoll(7) (depending on the platform), acting as a socket event dispatcher. The sockets are usually handled in non-blocking mode. Then one might have a pool of threads reacting to the events, doing the reads and writes either directly or via lower-level buffers/queues.
All sorts of techniques are applicable here - from queues to event subscription whiteboards. It gets tricky with multiplexing accepts/reads/writes/EOFs on the I/O level and with event arbitration on the application level. Several libraries like libevent and boost::asio help structure the lower level (the ACE library is also in this space, but I'd hate recommending it to anybody). You would have to come up with application-level protocols and state machines yourself (again boost::statechart might be of help).
Some good links to get a better understanding of what you are up against (this is probably the millionth time they are mentioned here on SO):
The C10K problem
High-Performance Server Architecture
Apologies for not offering a concrete solution, but this is a very wide design question and most decisions depend heavily on the context (lots of fun though). Hope this helps a bit.
Since you are sending data over different sockets, there should not be any problem. Rather, when these different threads access the same data, you have to ensure data integrity.
Are you using UDP or TCP sockets?
If UDP, each write should be encapsulated in a separate packet and should be carried to the other side intact. The order may be swapped (as it may for any UDP packet) but they should be whole.
If TCP, there's no concept of packets on the transport layer and any 10 writes on one side may be bundled up on the other side in one read. TCP writes may also only accept part of your buffer so even if the send() function is atomic, your write isn't necessarily. In this case you'd need to synchronize it.
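One plausible way to do that synchronization is a mutex per socket, so a whole message goes out before another thread can interleave its bytes; everything in this sketch is an illustrative assumption:

    #include <mutex>
    #include <sys/socket.h>

    // One mutex per socket: a whole message is sent under the lock, so two
    // threads can never interleave their bytes on the same connection.
    struct LockedSocket {
        int fd;
        std::mutex send_mutex;

        bool send_message(const char* buf, size_t len) {
            std::lock_guard<std::mutex> lock(send_mutex);
            size_t sent = 0;
            while (sent < len) {            // also handles partial writes
                ssize_t n = ::send(fd, buf + sent, len - sent, 0);
                if (n <= 0) return false;
                sent += static_cast<size_t>(n);
            }
            return true;
        }
    };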
send() is not atomic in most implementations, so sending to 1000 different sockets from multiple threads could lead to mixed-up messages arriving on the client side, and all kinds of weirdness. (I know nothing; see Nicolai's and Robert's comments below. The rest of my comment still stands, though, in terms of being a solution to your problem.)
What I would do is use threads for sending like you use them for receiving: one thread manages sending to one (or more) sockets, which ensures that you don't write to one socket from multiple threads at the same time.
Also look here for some additional discussion and more interesting links.
If you're on windows, the winsock programmers faq is an invaluable resource, for your issue see here.

c++ send data to multiple UDP sockets

I've got a C++ non-blocking server socket, with all the clients stored in a std::map structure.
I can call the send() method on each ClientObject to send something to the connected client, and that already works pretty well.
But for sending a message to all of them (broadcast?), I want to know:
Is there something better than doing a for loop over all the clients and calling ClientObject->send("foo") on each iteration?
Or should I just take a peek at multicast sockets?
Thanks in advance.
Rag.
Multicast is only an option if you're communicating over a LAN. It won't work over the Internet.
What you may want to do here is to demultiplex the sockets using asynchronous I/O. This allows you to send data to multiple sockets at the same time, and use asynchronous event handlers to deal with each transmission.
I would recommend looking into Boost ASIO for a portable way to do this. You can also use OS specific system calls, (such as poll/select on UNIX or epoll on Linux) to do this, but it is a lot more complicated.
Multicast would be much preferable... as long as you are talking about local nodes, i.e. within the "broadcast/multicast" domain on the LAN.
Of course there are multicast distribution protocols for wider dispersion of such messages, but they are seldom used and, depending on your specific case, you may or may not be able to rely on such a facility.
The use of multicast translates to big savings from the sender's point of view: only one send operation needs to occur instead of n sends, as the sketch below shows.
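For the record, a multicast send really is a single sendto() per message, as in this sketch (the group address 239.0.0.1 and the port are arbitrary assumptions; error handling trimmed):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // One sendto() reaches every receiver subscribed to the group.
    void multicast_send(const char* msg, size_t len) {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        unsigned char ttl = 1;              // stay within the local network
        setsockopt(sock, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl));

        sockaddr_in group{};
        group.sin_family = AF_INET;
        group.sin_addr.s_addr = inet_addr("239.0.0.1");  // assumed group
        group.sin_port = htons(12345);                   // assumed port

        sendto(sock, msg, len, 0,
               reinterpret_cast<sockaddr*>(&group), sizeof(group));
        close(sock);
    }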
You'd be better off doing UDP unicast to each host unless you have those very expensive switches. Yes, broadcast/multicast can actually be slower on most switches, which have much wimpier CPUs than your PCs; doing anything other than simple forwarding slows them down tremendously.
Do a benchmark to find out.
Asynch socket programming is definitely the way to go! :)