I have a Boost async TCP client that needs to receive data from the server all the time.
I want to add a timeout so that if no data arrives for n seconds, the client disconnects from the server and tries to connect again.
I am using VC++.
void tcpclient::Connect(){
    .....
    socket_.async_connect(*iterator,
        boost::bind(&tcpclient::AfterConnection, shared_from_this(),
                    boost::asio::placeholders::error));
}

void tcpclient::AfterConnection(const boost::system::error_code& error){
    if (!error)
    {
        SetTimeout();
    }
}

void tcpclient::SetTimeout(int sec = 1)
{
    SOCKET native_sock = socket_.native();
    int result = SOCKET_ERROR;
    if (INVALID_SOCKET != native_sock)
    {
        struct timeval tv;
        tv.tv_sec = sec;
        tv.tv_usec = 0;
        result = setsockopt(native_sock, SOL_SOCKET, SO_RCVTIMEO,
                            (char*)&tv, sizeof(struct timeval));
        int lastError = GetLastError();
    }
}
And I read the data like this:
socket_.async_receive(boost::asio::buffer(buffer, 1024),
    boost::bind(&tcpclient::handleReceive,
                shared_from_this(),
                boost::asio::placeholders::error,
                buffer,
                boost::asio::placeholders::bytes_transferred));
But when I simulate the case where no data arrives, the connection stays established:
$ netstat -ao
TCP 192.168.0.6:62836 192.168.0.5:telnet ESTABLISHED 2840
What is the problem? Why does this happen?
Setting SO_RCVTIMEO causes otherwise-blocking calls to read to return with no data after waiting for the specified time, as a non-blocking socket would have done immediately.
So, the question is, what do you, or Boost, do when read returns an EWOULDBLOCK error? You need to show how you're doing the read; if Boost is handling it in an event loop (based on select or poll), it probably just waits for some data to become available.
If that's the case, a better approach is to register a timer callback with the event loop to fire every second or however often, and check whether you received some data during the previous second. The socket options won't help you with that.
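A minimal sketch of that timer-based approach with Boost.Asio, assuming a boost::asio::deadline_timer member called timeout_timer_ and a bytes_since_last_check_ counter are added to the client class (both are hypothetical names, not from the question):

// Sketch only: timeout_timer_ (boost::asio::deadline_timer) and
// bytes_since_last_check_ (std::size_t) are assumed members of tcpclient.
void tcpclient::StartTimeout(int sec)
{
    timeout_timer_.expires_from_now(boost::posix_time::seconds(sec));
    timeout_timer_.async_wait(
        boost::bind(&tcpclient::CheckTimeout, shared_from_this(),
                    boost::asio::placeholders::error, sec));
}

void tcpclient::CheckTimeout(const boost::system::error_code& error, int sec)
{
    if (error) // timer was cancelled, e.g. during shutdown
        return;

    if (bytes_since_last_check_ == 0)
    {
        // No data arrived during the interval: drop the connection.
        // The pending async_receive completes with an error, and its
        // completion handler can then start a reconnect.
        socket_.close();
        return;
    }

    bytes_since_last_check_ = 0;
    StartTimeout(sec); // re-arm the timer for the next interval
}

The receive handler would simply do bytes_since_last_check_ += bytes_transferred before starting the next async_receive.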
Related
I'm working on a vision application which has two modes:
1) parameter setting
2) automatic
The problem is in 2), when my app waits for a signal via TCP/IP. The program freezes while the accept() method is called. I want to provide the possibility to change the mode from a GUI; a mode change is delivered by another signal (a message_queue). So I want to interrupt the accept call.
Is there a simple way to interrupt the accept?
std::cout << "TCPIP " << std::endl;

client = accept(slisten, (struct sockaddr*)&clientinfo, &clientinfolen);
if (client != SOCKET_ERROR)
    cout << "client accepted: " << inet_ntoa(clientinfo.sin_addr) << ":"
         << ntohs(clientinfo.sin_port) << endl;

//receive the message from client
//recv returns the number of bytes received!!
//buf contains the data received
int rec = recv(client, buf, sizeof(buf), 0);
cout << "Message: " << rec << " bytes and the message " << buf << endl;
I read about select(), but I have no clue how to use it. Could anybody give me a hint on how to implement, for example, select() in my code?
Thanks.
Best regards,
T
The solution is to call accept() only when there is an incoming connection request. You do that by polling the listen socket, where you can also add other file descriptors, use a timeout, etc.
You did not mention your platform. On Linux, see epoll(); on UNIX, see poll()/select(); on Windows, I don't know.
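For illustration, a minimal poll()-based sketch (POSIX; WSAPoll(), shown further down the page, has the same semantics on Windows), reusing the question's slisten socket:

#include <poll.h>

// Wait up to 500 ms for a pending connection before calling accept().
struct pollfd pfd;
pfd.fd = slisten;
pfd.events = POLLIN;          // a listen socket is "readable" when a connection is pending

int rc = poll(&pfd, 1, 500);  // timeout in milliseconds
if (rc > 0 && (pfd.revents & POLLIN)) {
    client = accept(slisten, (struct sockaddr*)&clientinfo, &clientinfolen);
} else if (rc == 0) {
    // timed out: check the message_queue / mode flag here, then poll again
}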
A general way would be to use a local socket connection by which the UI thread could interrupt the select call. The general architecture would use:
a dedicated thread waiting with select on both slisten and the local connection
a local connection (a Unix domain socket on a Unix or Unix-like system, or a 127.0.0.1 TCP socket on Windows) between the UI thread and the waiting thread
various synchronizations/messages between both threads as required
Just tell select to watch both slisten and the local socket for readability. It will return as soon as either is ready, and you will be able to tell which one it was (see the sketch below).
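A sketch of the waiting thread, assuming an already-connected local socket pair in which wakeup_recv is held by the waiting thread and wakeup_send by the UI thread (both hypothetical names):

// Waiting thread: watch both the listen socket and the wake-up socket.
fd_set readable;
FD_ZERO(&readable);
FD_SET(slisten, &readable);
FD_SET(wakeup_recv, &readable);

// On Windows the first argument of select() is ignored; on POSIX it must be
// the highest descriptor plus one.
int count = select(0, &readable, NULL, NULL, NULL);
if (count > 0) {
    if (FD_ISSET(wakeup_recv, &readable)) {
        char byte;
        recv(wakeup_recv, &byte, 1, 0);   // drain the wake-up byte
        // re-read the mode flag / message_queue and act on it
    }
    if (FD_ISSET(slisten, &readable)) {
        client = accept(slisten, (struct sockaddr*)&clientinfo, &clientinfolen);
    }
}

// UI thread: wake the waiting thread up with a single byte.
// send(wakeup_send, "x", 1, 0);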
As you haven't specified your platform, and networking, especially async, is platform-specific, I suppose you need a cross-platform solution. Boost.Asio fits perfectly here: http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/reference/basic_socket_acceptor/async_accept/overload1.html
Example from the link:
void accept_handler(const boost::system::error_code& error)
{
    if (!error)
    {
        // Accept succeeded.
    }
}
...
boost::asio::ip::tcp::acceptor acceptor(io_service);
...
boost::asio::ip::tcp::socket socket(io_service);
acceptor.async_accept(socket, accept_handler);
If Boost is a problem, Asio can be a header-only lib and used w/o Boost: http://think-async.com/Asio/AsioAndBoostAsio.
One way would be to run select in a loop with a timeout.
Put slisten into nonblocking mode (this isn't strictly necessary but sometimes accept blocks even when select says otherwise) and then:
fd_set read_fds;
struct timeval timeout;
int select_status;
while (true) {
    // select() modifies both the fd_set and (on some platforms) the timeout,
    // so re-initialize them on every iteration.
    FD_ZERO(&read_fds);
    FD_SET(slisten, &read_fds);
    timeout.tv_sec = 1; // 1s timeout
    timeout.tv_usec = 0;

    select_status = select(slisten+1, &read_fds, NULL, NULL, &timeout);
    if (select_status == -1) {
        // ERROR: do something
    } else if (select_status > 0) {
        break; // we have data, we can accept now
    }
    // otherwise (i.e. select_status == 0) timeout, continue
}
client = accept(slisten, ...);
This will allow you to check for your mode-change signal once per second. More info here:
http://man7.org/linux/man-pages/man2/select.2.html
and Windows version (pretty much the same):
https://msdn.microsoft.com/pl-pl/library/windows/desktop/ms740141(v=vs.85).aspx
I have the following bit of ASIO code that synchronously reads UDP packets. The problem is that if no data packets of the given size have arrived within a given time frame (30 seconds), I'd like the receive_from function to return with some kind of error that indicates a timeout.
for (;;)
{
    boost::array<char, 1000> recv_buf;
    udp::endpoint remote_endpoint;
    asio::error_code error;
    socket.receive_from(asio::buffer(recv_buf), // <-- require timeout
                        remote_endpoint, 0, error);

    if (error && error != asio::error::message_size)
        throw asio::system_error(error);

    std::string message = make_daytime_string();

    asio::error_code ignored_error;
    socket.send_to(asio::buffer(message),
                   remote_endpoint, 0, ignored_error);
}
Looking at the documentation, none of the UDP-oriented calls supports a timeout mechanism.
What is the correct way (portable, if possible) to get a timeout with synchronous UDP calls in ASIO?
As far as I know, this is not possible. By running a synchronous receive_from you've blocked code execution inside a syscall (recvmsg, from <sys/socket.h>).
As far as portability goes, I cannot speak for Windows, but a Linux/BSD C-flavoured solution would look like this:
#include <signal.h>   // sigaction, SIGALRM
#include <unistd.h>   // alarm

void SignalHandler(int signal) {
    // do what you need to do, possibly informing about the timeout and calling exit()
}
...
struct sigaction signal_action;
signal_action.sa_flags = 0;             // no SA_RESTART, so the blocked call fails with EINTR
sigemptyset(&signal_action.sa_mask);
signal_action.sa_handler = SignalHandler;
if (sigaction(SIGALRM, &signal_action, NULL) == -1) {
    // handle error
}
...
int timeout_in_seconds = 5;
alarm(timeout_in_seconds);
...
socket.receive_from(asio::buffer(recv_buf), remote_endpoint, 0, error);
...
alarm(0);   // cancel the pending alarm once data has arrived
If this is not at all feasible, I would recommend going fully asynchronous and running everything in a boost::asio::io_service.
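A rough sketch of that asynchronous variant, using a deadline_timer to cancel the receive after 30 seconds; the names server, handle_receive and check_deadline, and the members socket_, deadline_, recv_buf_ and remote_endpoint_, are illustrative rather than taken from the question:

void start_receive()
{
    // Arm the 30 s deadline before starting the receive.
    deadline_.expires_from_now(boost::posix_time::seconds(30));
    deadline_.async_wait(boost::bind(&server::check_deadline, this,
                                     boost::asio::placeholders::error));

    socket_.async_receive_from(
        boost::asio::buffer(recv_buf_), remote_endpoint_,
        boost::bind(&server::handle_receive, this,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}

void check_deadline(const boost::system::error_code& error)
{
    if (!error)              // the timer actually expired (it wasn't cancelled)
        socket_.cancel();    // the pending receive completes with operation_aborted
}

void handle_receive(const boost::system::error_code& error, std::size_t /*bytes*/)
{
    deadline_.cancel();      // data (or a real error) arrived before the deadline
    if (error == boost::asio::error::operation_aborted) {
        // the 30 s deadline expired; treat this as a timeout
        return;
    }
    // ... process recv_buf_, send the reply, then call start_receive() again
}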
I'm writing a network game for a university project, and while I have messages being sent and received between a client and a server, I'm unsure how to implement a writeable fd_set (my lecturer's example code only included a readable fd_set) and what the function of both fd_sets is with select(). Any insight you could give would be great in helping me understand this.
My server code is as such:
bool ServerSocket::Update() {
    // Update the connections with the server
    fd_set readable;
    FD_ZERO(&readable);

    // Add server socket, which will be readable if there's a new connection
    FD_SET(m_socket, &readable);

    // Add connected clients' sockets
    if(!AddConnectedClients(&readable)) {
        Error("Couldn't add connected clients to fd_set.");
        return false;
    }

    // Set timeout to wait for something to happen (0.5 seconds)
    timeval timeout;
    timeout.tv_sec = 0;
    timeout.tv_usec = 500000;

    // Wait for the socket to become readable
    int count = select(0, &readable, NULL, NULL, &timeout);
    if(count == SOCKET_ERROR) {
        Error("Select failed, socket error.");
        return false;
    }

    // Accept new connection to the server socket if readable
    if(FD_ISSET(m_socket, &readable)) {
        if(!AddNewClient()) {
            return false;
        }
    }

    // Check all clients to see if there are messages to be read
    if(!CheckClients(&readable)) {
        return false;
    }

    return true;
}
A socket becomes:
readable if there is either data in the socket receive buffer or a pending FIN (recv() is about to return zero)
writable if there is room in the socket send buffer. Note that this is true nearly all the time, so you should check for writability only after a prior send has failed with EWOULDBLOCK/EAGAIN on that socket, and stop checking once a send succeeds again.
You'd create an fd_set variable called writeable, initialize it the same way (with the same sockets), and pass it as select's third argument:
select(0, &readable, &writeable, NULL, &timeout);
Then after select returns you'd check whether each socket is still in the set writeable. If so, then it's writeable.
Basically, exactly the same way readable works, except that it tells you a different thing about the socket.
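A sketch of how that looks in the Update() function above, assuming a single already-accepted client socket here called m_client (a hypothetical name):

fd_set readable, writeable;
FD_ZERO(&readable);
FD_ZERO(&writeable);
FD_SET(m_socket, &readable);     // listen socket: readable = pending connection
FD_SET(m_client, &readable);     // client socket: readable = data or FIN
FD_SET(m_client, &writeable);    // client socket: writeable = room in the send buffer

timeval timeout;
timeout.tv_sec = 0;
timeout.tv_usec = 500000;

int count = select(0, &readable, &writeable, NULL, &timeout);
if (count > 0) {
    if (FD_ISSET(m_client, &writeable)) {
        // A previously blocked send() can now be retried.
    }
    if (FD_ISSET(m_client, &readable)) {
        // recv() will not block.
    }
}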
select() is terribly outdated and its interface is arcane. poll() (or its Windows counterpart WSAPoll()) is a modern replacement and should always be preferred.
It would be used in the following manner:
WSAPOLLFD pollfd = {m_socket, POLLWRNORM, 0};
int rc = WSAPoll(&pollfd, 1, 100);
if (rc == 1) {
    // Socket is ready for writing!
}
I am trying to set a timeout for a socket that I have created using ASIO in Boost, with no luck. I have found the following code elsewhere on the site:
tcp::socket socket(io_service);
struct timeval tv;
tv.tv_sec = 5;
tv.tv_usec = 0;
setsockopt(socket.native(), SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
setsockopt(socket.native(), SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
boost::asio::connect(socket, endpoint_iterator);
The timeout in the connect call remains at the same 60 seconds, as opposed to the 5 seconds I am looking for. What am I missing? Note that the connect code works fine in all other cases (where there is no timeout).
The socket options you've set don't apply to connect, AFAIK.
This can be accomplished by using the asynchronous Asio API, as in the following Asio example.
The interesting parts are setting the timeout handler:
deadline_.async_wait(boost::bind(&client::check_deadline, this));
Starting the timer and the connect operation:
void start_connect(tcp::resolver::iterator endpoint_iter)
{
    if (endpoint_iter != tcp::resolver::iterator())
    {
        std::cout << "Trying " << endpoint_iter->endpoint() << "...\n";

        // Set a deadline for the connect operation.
        deadline_.expires_from_now(boost::posix_time::seconds(60));

        // Start the asynchronous connect operation.
        socket_.async_connect(endpoint_iter->endpoint(),
            boost::bind(&client::handle_connect,
                        this, _1, endpoint_iter));
    }
    else
    {
        // There are no more endpoints to try. Shut down the client.
        stop();
    }
}
And closing the socket, which causes the outstanding connect operation's completion handler to run with an error:
void check_deadline()
{
    if (stopped_)
        return;

    // Check whether the deadline has passed. We compare the deadline against
    // the current time since a new asynchronous operation may have moved the
    // deadline before this actor had a chance to run.
    if (deadline_.expires_at() <= deadline_timer::traits_type::now())
    {
        // The deadline has passed. The socket is closed so that any outstanding
        // asynchronous operations are cancelled.
        socket_.close();

        // There is no longer an active deadline. The expiry is set to positive
        // infinity so that the actor takes no action until a new deadline is set.
        deadline_.expires_at(boost::posix_time::pos_infin);
    }

    // Put the actor back to sleep.
    deadline_.async_wait(boost::bind(&client::check_deadline, this));
}
I'm doing a C++ project that requires a server to create a new thread to handle connections each time accept() returns a new socket descriptor. I am using select to decide when a connection attempt has taken place, as well as when a client has sent data over the newly created client socket (the one that accept creates). So there are two functions and two selects: one for polling the socket dedicated to listening for connections, one for polling the socket created when a new connection succeeds.
The behavior of the first case is what I expect: FD_ISSET returns true for the id of my listening socket only when a connection is requested, and is false until the next connection attempt. The second case does not work, even though the code is exactly the same with different fd_set and socket objects. I'm wondering if this stems from the TCP socket. Do these sockets always return true when polled by a select due to their stream nature?
//working snippet
struct timeval tv;
tv.tv_sec = 0;
tv.tv_usec = 500000;

fd_set readfds;
FD_ZERO(&readfds);
FD_SET(sid, &readfds);

//start server loop
for(;;){
    //check if the listening socket has any client requests, timeout at 500 ms
    int numsockets = select(sid+1, &readfds, NULL, NULL, &tv);
    if(numsockets == -1){
        if(errno == 4){
            printf("SIGINT received in select\n");
            FD_ZERO(&readfds);
            myhandler(SIGINT);
        }else{
            perror("server select");
            exit(1);
        }
    }

    //check if the listening socket is ready to be read after select returns
    if(FD_ISSET(sid, &readfds)){
        int newsocketfd = accept(sid, (struct sockaddr*)&client_addr, &addrsize);
        if(newsocketfd == -1){
            if(errno == 4){
                printf("SIGINT received in accept\n");
                myhandler(SIGINT);
            }else{
                perror("server accept");
                exit(1);
            }
        }else{
            s->forkThreadForClient(newsocketfd);
        }
    }
//non working snippet
//set up the client socket with select functionality
struct timeval ctv;
ctv.tv_sec = 0;
ctv.tv_usec = 500000;

fd_set creadfds;
FD_ZERO(&creadfds);
FD_SET(csid, &creadfds);

for(;;){
    //check if the client socket has any data, timeout at 500 ms
    int numsockets = select(csid+1, &creadfds, NULL, NULL, &ctv);
    if(numsockets == -1){
        if(errno == 4){
            printf("SIGINT received in client select\n");
            FD_ZERO(&creadfds);
            myhandler(SIGINT);
        }else{
            perror("server select");
            exit(1);
        }
    }else{
        printf("Select returned %i\n", numsockets);
    }

    if(FD_ISSET(csid, &creadfds)){
        //read header
        unsigned char header[11];
        for(int i = 0; i < 11; i++){
            if(recv(csid, rubyte, 1, 0) != 0){
                printf("Received %X from client\n", *rubyte);
                header[i] = *rubyte;
            }
        }
Any help would be appreciated.
Thanks for the responses, but I don't believe it has much to do with the timeout value being inside the loop. I tested it, and even with tv being reset and the fd_set being zeroed every time the server loops, select still returns 1 immediately. I feel like there's a problem with how select is treating my TCP socket. Any time I set select's highest socket id to encompass my TCP socket, it returns immediately with that socket set. Also, the client does not send anything, it just connects.
One thing you must do is reset the value of tv to your desired timeout every time before you call select(). The select() function changes the values in tv to indicate how much time is left in the timeout, after returning from the function. If you fail to do this, your select() calls will end up using a timeout of zero, which is not efficient.
Some other operating systems implement select() differently, in such a way that they don't change the value of tv. Linux does change it, so you must reset it.
Move
FD_ZERO(&creadfds);
FD_SET(csid,&creadfds);
into the loop. The function select() reports its result in this structure, and you already retrieve that result with
FD_ISSET(csid,&creadfds);
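Putting both answers together, a corrected version of the client loop would look roughly like this (only the relevant part is shown):

struct timeval ctv;
fd_set creadfds;

for(;;){
    // Re-initialize both the fd_set and the timeout on every iteration:
    // select() overwrites the set with the ready descriptors and, on Linux,
    // decrements the timeout.
    FD_ZERO(&creadfds);
    FD_SET(csid, &creadfds);
    ctv.tv_sec = 0;
    ctv.tv_usec = 500000;

    int numsockets = select(csid+1, &creadfds, NULL, NULL, &ctv);
    if(numsockets == -1){
        // handle errors / EINTR as before
    }else if(numsockets == 0){
        continue;   // timed out: nothing to read yet
    }

    if(FD_ISSET(csid, &creadfds)){
        // csid really is readable now; recv() will not block
    }
}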