I'm working on my own FTP client in C++, but I'm stuck at the recv() function. The data I get with recv() can be incomplete, because I'm using TCP, so I have to call recv() in a loop. The problem is that when I call recv() after everything that should have been received has already arrived, the call blocks waiting for more data from the server, and my program is stuck.
I don't know how many bytes I'm going to receive, so I can't control the loop and stop it when it's done. I have found two not-very-elegant solutions so far:
1) use string.substr() (or a TR1 regex) to find the expected expression and then stop calling recv() before it blocks;
2) set up a timeval structure and then control the socket through the setsockopt() function. The problem with this is the long response time, and I can still end up with incomplete or corrupted data.
The question is: is there any clean and elegant solution for this?
The obvious thing to do is to transmit the length of the to-be-received message ahead of it (many protocols, including for example HTTP, do that to address exactly this issue). That way, you know that when you have received amount X, no more will come.
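For illustration, here is a minimal sketch of that idea, assuming the sender prefixes each message with a 4-byte length in network byte order (the field width and byte order are assumptions, not necessarily what your protocol uses):

// Sketch: receive one length-prefixed message on a connected TCP socket.
// Assumes the sender prefixes each message with a 4-byte length in network
// byte order; adapt the prefix format to whatever your protocol actually uses.
#include <arpa/inet.h>   // ntohl
#include <sys/socket.h>  // recv
#include <cstddef>
#include <cstdint>
#include <string>

// Read exactly 'len' bytes, looping over short reads. Returns false on
// error or if the peer closes the connection early.
static bool recv_all(int fd, char* buf, std::size_t len) {
    std::size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0) return false;   // 0 = peer closed, -1 = error
        got += static_cast<std::size_t>(n);
    }
    return true;
}

static bool recv_message(int fd, std::string& out) {
    uint32_t netlen = 0;
    if (!recv_all(fd, reinterpret_cast<char*>(&netlen), sizeof netlen))
        return false;
    uint32_t len = ntohl(netlen);        // length announced by the sender
    out.resize(len);
    return recv_all(fd, &out[0], len);   // now we know exactly when to stop
}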
This will work fine 99.9% of the time and will catastrophically fail in the 0.1% of cases where the server is lying to you or where the server crashes unexpectedly or someone stumbles over the network cable (or something similar happens). Sadly, the "connection" established by TCP is an illusion, and you don't have much of a means to detect when the connection dies. The other end can go down, and you will not notice anything, unless you try to send and get an error (or until several hours later).
Therefore, you also need a backup strategy for when things don't go quite as well as expected. You might use select() or poll() to know when data is available, so you don't block forever for a message that will never come.
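A rough sketch of that backup strategy, checking readability with a timeout before each recv() (the timeout value used is an arbitrary example):

// Sketch: wait up to a timeout for the socket to become readable before
// calling recv(), so a dead peer cannot block the thread forever.
#include <sys/select.h>

// Returns 1 if 'fd' is readable, 0 on timeout, -1 on error.
static int wait_readable(int fd, int seconds) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    timeval tv;
    tv.tv_sec = seconds;   // arbitrary example timeout
    tv.tv_usec = 0;
    return select(fd + 1, &readfds, nullptr, nullptr, &tv);
}

// Usage: only call recv() when wait_readable(fd, 5) == 1;
// treat 0 as "the peer has probably gone away".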
Using threads to solve the block-at-end problem (as proposed in other answers) is not a very good option since blocking isn't the actual problem. The actual problem is that you don't know when you have reached the end of the transmission. Having a worker thread block at the end of the transmission will "work", but will leave the worker thread blocked indefinitely, consuming resources and with an uncertain, system-dependent fate.
You cannot join the thread before exiting, since it is blocked (so trying to join it would deadlock your main thread). When your process exits and the socket is closed, the thread will unblock, but will (at least on some operating systems, e.g. Windows) be terminated immediately after. This likely won't do much evil, but terminating a thread in an uncontrolled way is always less desirable than having it exit properly. On other operating systems, you may have a lingering thread remaining.
Since you are using C++, there are alternative libraries that greatly simplify network programming compared to stock C. My personal favourite is Boost::Asio, however others are available. These libraries not only save you the pain of coding in C, but also provide asynchronous capabilities to work around your blocking problem.
The typical approach is to use select()/pselect() or poll()/ppoll(). Both allow you to specify a timeout so you can return if no data arrives.
However, I don't see why you would "call recv after everything that should be received" has already arrived. It would be extremely inefficient to rely on the timeout even when there are no network problems...
Either you send the size of the data before the data itself, and that is how much you read, or the data connection is terminated with an EOF, in which case read() returns 0 and you exit.
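A minimal sketch of the read-until-EOF variant, assuming the server closes the data connection when the transfer is complete (as FTP data connections do):

// Sketch: read everything the peer sends until it closes the connection.
// recv() returning 0 signals EOF (orderly shutdown by the peer).
#include <sys/socket.h>
#include <string>

static bool recv_until_eof(int fd, std::string& out) {
    char buf[4096];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n == 0) return true;    // peer closed: transfer complete
        if (n < 0) return false;    // error
        out.append(buf, static_cast<std::size_t>(n));
    }
}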
I can think of two options that will not require a major rewrite of your existing code and a third one which is more radical:
1) use non-blocking I/O and poll for data periodically. You can do other work while a message remains incomplete or no further data can be read from the socket (a minimal sketch of this appears after the list).
2) use a separate worker thread to do the I/O. Even if it blocks on synchronous recv() calls, your main thread can continue to do work. The worker thread can transfer the data it receives to the main thread for processing once a complete message is received via TCP.
3) use an OS-specific feature (I/O completion ports on Windows or aio on Linux), but these are far more complex, and you should definitely consider Boost.Asio before going this route.
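A minimal sketch of the first option, using poll() with a short timeout so the thread can do other work between checks (the 100 ms interval and buffer size are arbitrary choices):

// Sketch: non-blocking read driven by poll(), so the thread never gets
// stuck in recv() and can do other work between polls.
#include <fcntl.h>
#include <poll.h>
#include <sys/socket.h>
#include <string>

static void make_nonblocking(int fd) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

static void io_loop(int fd, std::string& inbox) {
    make_nonblocking(fd);
    pollfd pfd;
    pfd.fd = fd;
    pfd.events = POLLIN;
    pfd.revents = 0;
    for (;;) {
        int ready = poll(&pfd, 1, 100);          // wait at most 100 ms
        if (ready > 0 && (pfd.revents & POLLIN)) {
            char buf[4096];
            ssize_t n = recv(fd, buf, sizeof buf, 0);
            if (n > 0) inbox.append(buf, static_cast<std::size_t>(n));
            else if (n == 0) break;              // peer closed the connection
        }
        // ... do other work here while no (or only partial) data is available ...
    }
}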
You can put the recv function in its own thread and do the processing in another thread.
When my socket connection is terminated normally, then it works fine. But there are cases where the normal termination does not occur and the remote side of the connection simply disappears. When this happens, the sending task gets stuck in send() because the other side has stopped ack'ing the data. My application has a ping request/response going on and so, in another thread, it recognizes that the connection is dead. The question is...what should this other thread do in order to bring the connection to a safe termination. Should it call close()? I see SIGPIPE thrown around when this happens and I just want to make sure I am closing the connection in a safe way. I just don't want it to crash...I don't care about the leftover data. I am using a C++ library that is using synchronous sockets, so moving to async is not an easy option for me.
I avoid this problem by setting SIGPIPE to be ignored, and setting all my sockets to non-blocking I/O mode. Once a socket is in non-blocking mode, it will never block inside of send() or recv() -- rather, in any situation where it would normally block, it will instead immediately return -1 and set errno to EWOULDBLOCK. Therefore I can never "lose control" of the thread due to bad network conditions.
Of course if you never block, how do you keep your event loop from spinning and using up 100% of a core all the time? The answer is that you can block waiting for I/O inside of a separate call that is designed to do just that, e.g. select() or poll() or similar. These functions are designed to block until any one of a number of sockets becomes ready-to-read (or optionally ready-for-write) or until a pre-specified amount of time elapses, whichever comes first. So by using these, you can have your thread wake up when it needs to wake up and also sleep when there's nothing to do.
Anyway, once you have that (and you've made sure that your code handles short reads, short writes, and -1/EWOULDBLOCK gracefully, as those happen more often in non-blocking mode), you are free to implement your dead-network-detector in any of several ways. You could implement it within your network I/O thread, by keeping track of how long it has been since any data was last sent or received, and by using the timeout argument to select() to cause the blocking function to wake up at the appropriate times based on that. Or you could still use a second thread, but now the second thread has a safe way to wake up the first thread: by calling pipe() or socketpair() you can create a pair of connected file descriptors, and your network I/O thread can select()/poll() on the receiving file descriptor while the other thread holds the sending file descriptor. Then when the other thread wants to wake up the I/O thread, it can send a byte on its file descriptor, or just close() it; either one will cause the network I/O thread to return from select() or poll() and find out that something has happened on its receiving-file-descriptor, which gives it the opportunity to react by exiting (or taking whatever action is appropriate).
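A rough sketch of the socketpair() wake-up idea described above; the function and variable names are made up for illustration:

// Sketch: a watchdog thread wakes the network I/O thread out of select()
// by writing one byte to its end of a socketpair.
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int wakeup_fds[2];  // [0] watched by the I/O thread, [1] held by the watchdog thread

void setup() {
    socketpair(AF_UNIX, SOCK_STREAM, 0, wakeup_fds);
}

// In the network I/O thread:
bool wait_for_work(int sock_fd) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sock_fd, &readfds);
    FD_SET(wakeup_fds[0], &readfds);
    int maxfd = (sock_fd > wakeup_fds[0] ? sock_fd : wakeup_fds[0]);
    select(maxfd + 1, &readfds, nullptr, nullptr, nullptr);
    if (FD_ISSET(wakeup_fds[0], &readfds)) {
        char c;
        read(wakeup_fds[0], &c, 1);   // drain the wake-up byte
        return false;                 // we were told to shut down / re-check state
    }
    return true;                      // real socket activity
}

// In the watchdog thread, when the connection is detected dead:
void wake_io_thread() {
    char c = 'x';
    write(wakeup_fds[1], &c, 1);
}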
I use this technique in almost all of my network programming, and I find it works very well to achieve network behavior that is both reliable and CPU-efficient.
I had a lot of SIGPIPEs in my application. Those are not really important: they just tell you that a pipe (here, a socket) is no longer available.
So, in my main function, I do:
signal(SIGPIPE, SIG_IGN);
Another option is to use the MSG_NOSIGNAL flag for send(), e.g. send(..., MSG_NOSIGNAL). In that case SIGPIPE is not raised; the call returns -1 and errno == EPIPE.
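For example (a minimal sketch; the wrapper name is made up):

// Sketch: send without risking SIGPIPE; a broken connection shows up as
// -1 with errno == EPIPE instead of a signal killing the process.
#include <sys/socket.h>
#include <cerrno>

ssize_t safe_send(int fd, const void* buf, size_t len) {
    ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);
    if (n < 0 && errno == EPIPE) {
        // peer is gone: close the socket and tear down this connection
    }
    return n;
}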
I am working with POSIX threads for a multi-threaded socket programming project. I have run into a situation where I need to detach a thread from the main program using setdetachstate(); however, later on I cancel the thread (I know that cancelling is generally bad practice, but I know what I'm doing (hopefully)). I need a method to check whether the thread is still alive or not, and after doing a bit of research, I found that waitpid() might work for my purposes even though I have a TID instead of a PID. However, after trying it out, both with and without ptraces, it didn't work. Another method that I have seen on the Internet everywhere is pthread_join(). While I agree that it is the optimal way to do it, as I said, my thread is detached, so it can't be joined.
As a side note, my goal is to find a way to wait for the function call pthread_cancel() to finish before executing any subsequent code, i.e.
pthread_t tid;
// ...
pthread_cancel(tid);
// wait until pthread with ID tid is cancelled
// more code here...
Originally, the reason why I need to check whether the detached pthread is alive was because I was planning on doing something like this: while(!pthread_dead(tid)); or something of this manner; however, if there is a solution that directly waits for the cancel to finish, that would be even better. Please try not to criticize my use of detached threads or pthread cancelling; I have contemplated many plans of action and this seems to be required no matter how I go about it (unless I'm doing a multiprocessed application, which I don't want to do). Unless I'm doing something absolutely syntactically or structurally abominable, I would appreciate it if you just answered my question.
Thank you!
P.S. I'm coding in C++.
Have you thought about using Actor model programming, or even better Communicating Sequential Processes?
These are really quite a good model for when you have a separate thread that needs to go off and do its own thing, and you need to be able to tell it something and get an answer back.
Your apparent need is to know that something asynchronous has completed (the termination of a separate thread) - there's nothing wrong with having that thread send you a direct acknowledgement of its termination, rather than having to determine whether or not it's still alive through slightly iffy means such as waitpid(). So say you chose ZeroMQ as your Actor model library; to "kill" that detached thread you'd send it a command down a ZeroMQ "socket". The recipient thread would receive that message, understand that it means "die", and do whatever cleanup it needs to before terminating itself. Just before it terminates itself, it sends you back an acknowledgement on another "socket" that yes, it is dead (or at least about to be so, all necessary cleanup has already happened).
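A rough sketch of that exchange using ZeroMQ PAIR sockets over the in-process transport; the endpoint name and the "die"/"dead" message strings are made up for illustration:

// Sketch: the main thread tells a worker to die over an inproc ZeroMQ PAIR
// socket and waits for an explicit "I'm dead" acknowledgement, instead of
// probing whether the thread is still alive.
#include <zmq.h>
#include <cstring>
#include <thread>

void worker(void* ctx) {
    void* sock = zmq_socket(ctx, ZMQ_PAIR);
    zmq_connect(sock, "inproc://control");        // illustrative endpoint name
    char cmd[16] = {0};
    zmq_recv(sock, cmd, sizeof cmd - 1, 0);       // blocks until a command arrives
    if (std::strcmp(cmd, "die") == 0) {
        // ... clean up whatever this thread owns ...
        zmq_send(sock, "dead", 4, 0);             // acknowledge just before exiting
    }
    zmq_close(sock);
}

int main() {
    void* ctx = zmq_ctx_new();
    void* sock = zmq_socket(ctx, ZMQ_PAIR);
    zmq_bind(sock, "inproc://control");
    std::thread(worker, ctx).detach();            // detached, as in the question

    zmq_send(sock, "die", 3, 0);                  // tell the worker to terminate
    char ack[16] = {0};
    zmq_recv(sock, ack, sizeof ack - 1, 0);       // returns once the worker has acked
    // At this point the worker has finished its cleanup and is about to exit.

    zmq_close(sock);
    zmq_ctx_term(ctx);                            // waits until the worker closes its socket
}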
Actor model / CSP programming places an emphasis on having a loop responding to messages from one or more sources. Well, your own code snippet hints at a loop, waiting for the pthread_cancel() to take effect.
I've put "socket" in quotes as underneath a ZeroMQ socket can be a tcp socket, ipc, some in-process memory transfer, etc; it all behaves the same. In-proc is, naturally, quite quick.
The difference between Actor model and Communicating Sequential Processes is that in Actor model, when a message is sent there is no information available to the sender that it has been received, whilst in Communicating Sequential Processes a successful send = a completed read. Personally speaking I prefer the latter - your code then has complete knowledge as to where a message recipient has got to; a send/receive pair is an execution rendezvous. So when you send the "terminate" message, you know for sure that the recipient thread has received the message and is now acting on it. When the recipient sends its "I'm dead" acknowledgement, it knows that the command thread has received that ack.
FYI, CSP is very useful in real time systems, not because it's faster but because your program can have much better knowledge as to whether it's kept up with the real time demand or not. Actor model lets you "hide" real time inadequacies as latency in communications links.
I inherited a piece of code that reads in data from a UDP socket. I need some help figuring out what's going on here and also if I can improve anything performance wide.
The code starts by calling select(), and then recvfrom(). Based on my research, it appears that recvfrom() is only called when select() returns the fact that there is data available. This code basically consists of a thread that continuously listens for a multicast message. As a result, it basically sits in the select() routine until it either receives data or times out.
I was wondering if there was perhaps a better way to improve the performance of this code. First, is select() necessary? Based on this thread: setting timeout for recv fcn of a UDP socket it appears that I can just set the timeout of the recvfrom() command itself. Will this buy me anything? Also, based on some research, I've seen a lot of implementations without select(). Why is this?
Also, ideally, I'd like to free up as much CPU as possible. Is there a way that I can put the process to sleep until it receives a packet? That being said, I'd like to receive a full packet at a time for simplicity's sake.
Thanks in advance for the help.
When recvfrom blocks or when select is waiting for an event, the process is effectively asleep; that is, it is not scheduled to run. So your existing code (at least if I'm interpreting your description correctly) already satisfies your first requirement.
Is there a way that I can put the process to sleep until it receives a packet?
UDP is a "datagram packet service". (Quoted from man 7 udp). That is, it always sends and receives complete packets. So with UDP, you never have to worry about your second requirement:
I'd like to receive a full packet at a time for simplicity sake.
All in all, I'd say you don't have a problem.
However, you may be able to remove the select() call, if it is only waiting on a single descriptor, assuming you are able to setsockopt() SO_RCVTIMEO (which you probably can do, although POSIX allows an individual implementation to not allow the option to be set, so there is no absolute guarantee). I doubt whether you will notice any performance improvement from doing so, but it should save a few microseconds.
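A minimal sketch of setting SO_RCVTIMEO (the helper name and timeout value are illustrative):

// Sketch: give recvfrom() its own timeout instead of wrapping it in select().
#include <sys/socket.h>
#include <sys/time.h>

void set_recv_timeout(int fd, int seconds) {
    timeval tv{};
    tv.tv_sec = seconds;      // arbitrary example value
    tv.tv_usec = 0;
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
}

// After this, recvfrom() returns -1 with errno set to EAGAIN/EWOULDBLOCK
// when the timeout expires without a packet arriving.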
Is there a way that I can put the process to sleep until it receives a packet?
You're already doing that. That's what select() does, or recvfrom() in blocking mode.
I'd like to receive a full packet at a time for simplicity sake.
That's what UDP is already doing for you.
Maybe this would be helpful: the 10k problem
I can just set the timeout of the recvfrom() command itself
If select() waits on only one file descriptor (e.g. a single socket), then there would not be much difference from what you propose. But if it manages more than one, there isn't really a better solution than select().
I'm writing a POSIX-compatible multi-threaded server in C/C++ that must be able to accept, read from, and write to a large number of connections asynchronously. The server has several worker threads which perform tasks and occasionally (and unpredictably) queue data to be written to the sockets. Data is also occasionally (and unpredictably) written to the sockets by the clients, so the server must also read asynchronously. One obvious way of doing this is to give each connection a thread which reads and writes from/to its socket; this is ugly, though, since each connection may persist for a long time and the server may thus have to hold hundreds or thousands of threads just to keep track of connections.
A better approach would be to have a single thread that handled all communications using the select()/pselect() functions. I.e., a single thread waits on any socket to be readable, then spawns a job to process the input that will be handled by a pool of other threads whenever input is available. Whenever the other worker threads produce output for a connection, it gets queued, and the communication thread waits for that socket to be writable before writing it.
The problem with this is that the communication thread may be waiting in the select() or pselect() function when output is queued by the worker threads of the server. It's possible that, if no input arrives for several seconds or minutes, a queued chunk of output will just wait for the communication thread to be done select()ing. This shouldn't happen, however--data should be written as soon as possible.
Right now I see a couple of solutions to this that are thread-safe. One is to have the communication thread busy-wait on input and update the list of sockets it waits on for writing every tenth of a second or so. This isn't optimal since it involves busy-waiting, but it will work. Another option is to use pselect() and send the USR1 signal (or something equivalent) whenever new output has been queued, allowing the communication thread to update the list of sockets it is waiting on for writable status immediately. I prefer the latter here, but still dislike using a signal for something that should be a condition (pthread_cond_t). Yet another option would be to include, in the list of file descriptors on which select() is waiting, a dummy file that we write a single byte to whenever a socket needs to be added to the writable fd_set for select(); this would wake up the communications server because that particular dummy file would then be readable, thus allowing the communications thread to immediately update its writable fd_set.
I feel, intuitively, that the second approach (with the signal) is the 'most correct' way to program the server, but I'm curious whether anyone knows which of the above is the most efficient generally speaking, whether either of the above will cause race conditions that I'm not aware of, or whether there is a more general solution to this problem. What I really want is a pthread_cond_wait_and_select() function that allows the comm thread to wait on either a change in sockets or a signal on a condition variable.
Thanks in advance.
This is a fairly common problem.
One often used solution is to have pipes as a communication mechanism from worker threads back to the I/O thread. Having completed its task a worker thread writes the pointer to the result into the pipe. The I/O thread waits on the read end of the pipe along with other sockets and file descriptors and once the pipe is ready for read it wakes up, retrieves the pointer to the result and proceeds with pushing the result into the client connection in non-blocking mode.
Note that since pipe reads and writes of less than or equal to PIPE_BUF bytes are atomic, the pointers get written and read in one shot. One can even have multiple worker threads writing pointers into the same pipe because of the atomicity guarantee.
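A rough sketch of that pattern; the Result type and function names are made up for illustration:

// Sketch: worker threads hand results to the I/O thread by writing a raw
// pointer into a pipe; writes of <= PIPE_BUF bytes are atomic, so the
// pointer arrives in one piece even with several writers.
#include <unistd.h>

struct Result;           // whatever the workers produce (illustrative)
int result_pipe[2];      // [0] read end (I/O thread), [1] write end (workers)

void setup() { pipe(result_pipe); }

// Worker thread, after finishing a task:
void post_result(Result* r) {
    write(result_pipe[1], &r, sizeof r);   // atomic: sizeof(Result*) <= PIPE_BUF
}

// I/O thread, after select()/poll() reports result_pipe[0] readable:
Result* fetch_result() {
    Result* r = nullptr;
    read(result_pipe[0], &r, sizeof r);
    return r;   // now push it to the client connection in non-blocking mode
}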
Unfortunately, the best way to do this is different for each platform. The canonical, portable way to do it is to have your I/O thread block in poll. If you need to get the I/O thread to leave poll, you send a single byte on a pipe that the thread is polling. That will cause the thread to exit from poll immediately.
On Linux, epoll is the best way. On BSD-derived operating systems (including OSX, I think), kqueue. On Solaris, it used to be /dev/poll and there's something else now whose name I forget.
You may just want to consider using a library like libevent or Boost.Asio. They give you the best I/O model on each platform they support.
Your second approach is the cleaner way to go. It's totally normal to have things like select or epoll include custom events in your list. This is what we do on my current project to handle such events. We also use timers (on Linux timerfd_create) for periodic events.
On Linux, eventfd lets you create such arbitrary user events for this purpose -- so I'd say it is quite accepted practice. For POSIX-only functions, a pipe or socketpair() is an option; I've seen both used as well.
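A minimal Linux-specific sketch of the eventfd approach (names are illustrative); the descriptor is added to the same epoll/select set as the sockets:

// Sketch (Linux only): an eventfd used as a user-triggered wake-up event.
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>

int wake_fd;   // add this descriptor to the epoll/select set alongside the sockets

void setup() { wake_fd = eventfd(0, 0); }

// Any worker thread: wake the I/O thread.
void notify() {
    uint64_t one = 1;
    write(wake_fd, &one, sizeof one);
}

// I/O thread, when epoll/select reports wake_fd readable:
void drain() {
    uint64_t count;
    read(wake_fd, &count, sizeof count);   // resets the counter; 'count' = pending notifications
}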
Busy-polling is not a good option. First you'll be scanning memory which will be used by other threads, thus causing CPU memory contention. Secondly you'll always have to return to your select call which will create a huge number of system calls and context switches which will hurt overall system performance.
First off, I hope my question makes sense and is even possible! From what I've read about TCP sockets and Boost::ASIO, I think it should be.
What I'm trying to do is to set up two machines and have a working bi-directional read/write link over TCP between them. Either party should be able to send some data to be used by the other party.
The first confusing part about TCP(/IP?) is that it requires this client/server model. However, reading shows that either side is capable of writing or reading, so I'm not yet completely discouraged. I don't mind establishing an arbitrary party as the client and the other as the server. In my application, that can be negotiated ahead of time and is not of concern to me.
Unfortunately, all of the examples I come across seem to focus on a client connecting to a server, and the server immediately sending some bit of data back. But I want the client to be able to write to the server also.
I envision some kind of loop wherein I call io_service.poll(). If the polling shows that the other party is waiting to send some data, it will call read() and accept that data. If there's nothing waiting in the queue, and it has data to send, then it will call write(). With both sides doing this, they should be able to both read and write to each other.
My concern is how to avoid situations in which both enter into some synchronous write() operation at the same time. They both have data to send, and then sit there waiting to send it on both sides. Does that problem just imply that I should only do asynchronous write() and read()? In that case, will things blow up if both sides of a connection try to write asynchronously at the same time?
I'm hoping somebody can ideally:
1) Provide a very high-level structure or best practice approach which could accomplish this task from both client and server perspectives
or, somewhat less ideally,
2) Say that what I'm trying to do is impossible and perhaps suggest a workaround of some kind.
What you want to do is absolutely possible. Web traffic is a good example of a situation where the "client" sends something long before the server does. I think you're getting tripped up by the words "client" and "server".
What those words really describe is the method of connection establishment. In the case of "client", it's "active" establishment; in the case of "server" it's "passive". Thus, you may find it less confusing to use the terms "active" and "passive", or at least think about them that way.
With respect to finding example code that you can use as a basis for your work, I'd strongly encourage you to take a look at W. Richard Stevens' "Unix Network Programming" book. Any edition will suffice, though the 2nd Edition will be more up to date. It will be only C, but that's okay, because the socket API is C only. boost::asio is nice, but it sounds like you might benefit from seeing some of the nuts and bolts under the hood.
My concern is how to avoid situations in which both enter into some synchronous write() operation at the same time. They both have data to send, and then sit there waiting to send it on both sides. Does that problem just imply that I should only do asynchronous write() and read()? In that case, will things blow up if both sides of a connection try to write asynchronously at the same time?
It sounds like you are somewhat confused about how protocols are used. TCP only provides a reliable stream of bytes, nothing more. On top of that applications speak a protocol so they know when and how much data to read and write. Both the client and the server writing data concurrently can lead to a deadlock if neither side is reading the data. One way to solve that behavior is to use a deadline_timer to cancel the asynchronous write operation if it has not completed in a certain amount of time.
You should be using asynchronous methods when writing a server. Synchronous methods are appropriate for some trivial client applications.
TCP is full-duplex, meaning you can send and receive data in the order you want. To prevent a deadlock in your own protocol (the high-level behaviour of your program), when you have the opportunity to both send and receive, you should receive as a priority. With epoll in level-triggered mode that looks like: wait for both send and receive readiness; if you can receive, do so; otherwise, if you can send and have something to send, do so. I don't know how boost::asio or threads fit here; you do need some measure of control on how sends and receives are interleaved.
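A rough sketch of that receive-first policy with level-triggered epoll (Linux-specific; the actual recv/send handling is left as comments):

// Sketch (Linux): level-triggered epoll loop that drains incoming data
// before sending, so a peer that keeps talking can never stall us.
#include <sys/epoll.h>

void io_loop(int epfd) {
    epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (events[i].events & EPOLLIN) {
                // recv() as a priority: read and process whatever arrived on fd
            } else if (events[i].events & EPOLLOUT) {
                // only when there is nothing to read: send() any queued output for fd
            }
        }
    }
}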
The word you're looking for is "non-blocking", which is entirely different from POSIX asynchronous I/O (which involves signals).
The idea is that you use something like fcntl(fd,F_SETFL,O_NONBLOCK). write() will return the number of bytes successfully written (if positive) and both read() and write() return -1 and set errno = EAGAIN if "no progress can be made" (no data to read or write window full).
You then use something like select/epoll/kqueue which blocks until a socket is readable/writable (depending on the flags set).
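A minimal sketch combining the two, assuming the socket has already been put into non-blocking mode with fcntl() as above; partial writes and EAGAIN are handled by waiting on select() for writability:

// Sketch: write what we can on a non-blocking socket, and when the send
// buffer is full (EAGAIN) block in select() until the socket is writable.
#include <sys/select.h>
#include <unistd.h>
#include <cerrno>
#include <cstddef>

bool send_all(int fd, const char* data, std::size_t len) {
    std::size_t sent = 0;
    while (sent < len) {
        ssize_t n = write(fd, data + sent, len - sent);
        if (n > 0) {
            sent += static_cast<std::size_t>(n);   // partial writes are normal
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            select(fd + 1, nullptr, &wfds, nullptr, nullptr);  // wait until writable
        } else {
            return false;                          // real error
        }
    }
    return true;
}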