Boost asio-acceptor unblocks without a new connection? - c++

I am using the C++ Boost.Asio library to listen for new connections on a socket. On getting a connection I process the request, then loop around and wait to accept the next connection on a fresh socket.
while (true)
{
    tcp::socket soc(this->blitzIOService);
    this->blitzAcceptor.listen();
    boost::system::error_code ec;
    this->blitzAcceptor.accept(soc, ec);
    if (ec)
    {
        // Some error occurred
        cerr << "Error Value: " << ec.value() << endl;
        cerr << "Error Message: " << ec.message() << endl;
        soc.close();
        break;
    }
    else
    {
        this->HandleRequest(soc);
        soc.shutdown(tcp::socket::shutdown_both);
        soc.close();
    }
}
According to my understanding, it should always block at this->blitzAcceptor.accept(soc,ec);, and every time a new connection is made it should handle it in this->HandleRequest(soc); and then block at this->blitzAcceptor.accept(soc,ec); again.
But what I see is that it blocks at this->blitzAcceptor.accept(soc,ec) the first time, and when a new connection is made it handles the request; instead of blocking again at this->blitzAcceptor.accept(soc,ec), however, it goes straight into this->HandleRequest(soc); and blocks at the soc.receive() inside.
This doesn't always happen, but it happens most of the time. What could be the reason for this behavior, and how can I ensure that it always blocks at this->blitzAcceptor.accept(soc,ec) until a new request is made?

What could be the reason for this behavior?
This behavior is entirely dependent on the client code. If it connects but does not send a request, the server will block when receiving data.
how can I ensure that it always blocks at this->blitzAcceptor.accept(soc,ec) until a new request is made?
You can't. But your server can start a timeout immediately after accepting the connection; if the client does not send a request within that duration, close the socket. To do that, you should switch from synchronous methods to asynchronous ones.
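The full asynchronous route goes through Boost.Asio timers, but the same idea can be sketched more simply with a plain POSIX receive timeout (SO_RCVTIMEO) on the accepted socket's native descriptor. This is a minimal sketch under that assumption, not the Asio approach the answer recommends; recv_with_timeout is a hypothetical helper:

```cpp
#include <sys/socket.h>
#include <sys/time.h>
#include <cerrno>
#include <cstddef>

// Sketch only: bound how long the server waits for the client's first
// request after accept(). recv() fails with EAGAIN/EWOULDBLOCK when the
// client stays silent past the deadline, and the server can close the socket.
bool recv_with_timeout(int fd, char* buf, size_t len, int seconds)
{
    timeval tv{};
    tv.tv_sec = seconds;
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    ssize_t n = recv(fd, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return false; // deadline passed with no request: caller should close
    return n > 0;
}
```

With an Asio socket, the native descriptor is available via native_handle(), so the same option can be applied to the accepted soc.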

Be sure you're not blocking on a read(2) of the file descriptor you are listen(2)'ing on instead of the one you accept(2)'ed. I think if you print out the file descriptor numbers you'll find your problem very quickly.

Related

How to close tcp server socket correctly

Can anyone explain what I am doing wrong with my TCP server termination?
In my program (a single instance), I start another program which starts a TCP server. The TCP server is only allowed to listen for one connection.
After a connection is established between the client and my server, a few messages are exchanged. As soon as the message protocol has run through, I want to shut down the server socket, reset my internal states, and close my sub-program.
After a few seconds, it should be possible to open my sub-program again.
If so, I open the socket again... The same network device, the same IP address, and the same port as before are used...
My problem: my sub-program crashes when run the second time.
With netstat I analyzed my socket and found that it stays in state LAST_ACK.
It can take more than 60 seconds (a timeout?) until the socket is finally closed.
For closing the socket, I used the following code:
if (0 == shutdown(socketDescriptor_, SHUT_RDWR)) {
    std::cout << "Read/write of socket deactivated" << std::endl;
}
if (0 == close(socketDescriptor_)) {
    std::cout << "Socket is destroyed" << std::endl;
    socketDescriptor_ = -1;
}
Any ideas? Thanks for your help!
Kind regards,
Matthias
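One commonly used mitigation for rebinding while the previous socket still lingers in a terminating TCP state is to set SO_REUSEADDR before bind(). A minimal sketch under that assumption; make_reusable_listener is an illustrative helper, not part of the original program:

```cpp
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

// Sketch: enable address reuse before bind() so a restarted server can
// grab the same ip:port even while the old socket sits in a state such as
// LAST_ACK or TIME_WAIT. Returns a listening descriptor, or -1 on failure.
int make_reusable_listener(uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(port);
    if (bind(fd, (sockaddr*)&addr, sizeof(addr)) != 0 || listen(fd, 1) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```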

Unix socket hangs on recv, until I place/remove a breakpoint anywhere

[TL;DR version: the code below hangs indefinitely on the second recv() call, both in Release and Debug mode. In Debug, placing or removing a breakpoint anywhere in the code makes execution continue, and everything behaves normally.]
I'm coding a simple client-server communication using UNIX sockets. The server is in C++ while the client is in Python. The connection (a TCP socket on localhost) gets established without problems, but when it comes to receiving data on the server side, it hangs in the recv function. Here is the code where the problem happens:
bool server::readBody(int csock) // csock is the socket file descriptor
{
    int bytecount;
    // protobuf-related variables
    google::protobuf::uint32 siz;
    kinMsg::request message;
    // if the code is working, the client will send false;
    // I initialize to true to be sure the message is actually read
    message.set_endconnection(true);
    // First, read the 4-character header carrying the data size
    char buffer_hdr[5];
    if ((bytecount = recv(csock, buffer_hdr, 4, MSG_WAITALL)) == -1)
        ::std::cerr << "Error receiving data" << ::std::endl;
    buffer_hdr[4] = '\0';
    siz = atoi(buffer_hdr);
    // Second, read the data. The code hangs here!!
    char buffer[siz];
    if ((bytecount = recv(csock, (void*)buffer, siz, MSG_WAITALL)) == -1)
        ::std::cerr << "Error receiving data " << errno << ::std::endl;
    // Finally, process the protobuf message
    google::protobuf::io::ArrayInputStream ais(buffer, siz);
    google::protobuf::io::CodedInputStream coded_input(&ais);
    google::protobuf::io::CodedInputStream::Limit msgLimit = coded_input.PushLimit(siz);
    message.ParseFromCodedStream(&coded_input);
    coded_input.PopLimit(msgLimit);
    if (message.has_endconnection())
        return !message.endconnection();
    return false;
}
As can be seen in the code, the protocol is such that the client will first send the number of bytes in the message in a 4-character array, followed by the protobuf message itself. The first recv call works well and does not hang. Then, the code hangs on the second recv call, which should be recovering the body of the message.
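The framing just described, a 4-character ASCII size header followed by the payload, can be sketched on the sending side as follows; frame_message is an illustrative helper, not part of the project's code:

```cpp
#include <cstdio>
#include <string>

// Sketch of the protocol described above: prefix the payload with its size,
// zero-padded to exactly 4 ASCII digits (so payloads up to 9999 bytes).
std::string frame_message(const std::string& payload)
{
    char hdr[5];
    std::snprintf(hdr, sizeof(hdr), "%04zu", payload.size());
    return std::string(hdr, 4) + payload;
}
```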
Now for the interesting part. When run in Release mode, the code hangs indefinitely and I have to kill either the client or the server. It does not matter whether I run it from my IDE (qtcreator) or from the CLI after a clean build (using cmake/g++).
When I run the code in Debug mode, it also hangs at the same recv() call. Then, if I place or remove a breakpoint ANYWHERE in the code (before or after that line), it starts again and works perfectly: the server receives the data and reads the correct message.endconnection() value before returning out of readBody. The breakpoint I place to trigger this behavior is not necessarily hit. Since readBody() is called in a loop (my C++ server waits for requests from the Python client), the same thing happens again at the next iteration, and I again have to place or remove a breakpoint anywhere in the code (not necessarily one that gets hit) to get past that recv() call. The loop looks like this:
bool connection = true;
// server waiting for client connection
if (!waitForConnection(connectionID))
    std::cerr << "Error accepting connection" << ::std::endl;
// main loop
while (connection)
{
    if ((bytecount = recv(connectionID, buffer, 4, MSG_PEEK)) == -1)
    {
        ::std::cerr << "Error receiving data" << ::std::endl;
    }
    else if (bytecount == 0)
        break;
    try
    {
        if (readBody(connectionID))
        {
            sendResponse(connectionID);
        }
        // if the client is requesting disconnection, leave the loop
        else
        {
            std::cout << "Disconnection requested by client. Exiting ..." << std::endl;
            connection = false;
        }
    }
    catch (...)
    {
        std::cerr << "Error receiving message from client" << std::endl;
    }
}
Finally, as you can see, when the program returns from readBody(), it sends another message back to the client, which processes it and prints to standard output (the Python code works; it is not shown because the question is already long enough). From this last behavior I conclude that the protocol and the client code are OK. I tried putting sleep instructions at several points to see whether it was a timing problem, but it did not change anything.
I searched all over Google and SO for a similar problem but did not find anything. Help would be much appreciated!
The solution is to not use any flags: call recv with 0 for the flags, or just use read instead of recv.
You are asking the socket for data that is not there. The recv call expects 10 bytes, but the client only sent 6. MSG_WAITALL states clearly that the call should block until all 10 bytes are available in the stream.
If you don't use any flags, the call succeeds with a bytecount of 6, which has exactly the same effect as MSG_DONTWAIT, without the potential side effects of non-blocking calls.
I did the test on the GitHub project; it works.
My fix was to replace MSG_WAITALL with MSG_DONTWAIT in the recv() calls; it now works fine. In short, it makes the recv() calls non-blocking, which makes the whole code work.
However, this still raises many questions, the first of which being: why did it work with this weird breakpoint trick?
If the socket was blocking in the first place, one could assume it is because there was no data on the socket. Consider both situations:
There is no data on the socket, which is why the blocking recv() call did not return. Changing it to a non-blocking recv() call would then, in the same situation, trigger an error; if not, the protobuf deserialization would afterwards fail trying to deserialize from an empty buffer. But it does not...
There is data on the socket. Then why on earth would it block in the first place?
Obviously there is something I don't get about sockets in C, and I'd be very happy if somebody had an explanation for this behavior!

Multithreading in C++, receive message from socket

I have studied Java for 8 months but decided to learn some C++ too in my spare time.
I'm currently making a multithreaded server in Qt with MinGW. My problem is that when a client connects, I create an instance of Client (which is a class) and pass the socket to the Client constructor.
Then I start a thread on the client object (startClient()) which is supposed to wait for messages, but it doesn't. By the way, startClient is the method I create the thread from. See the code below.
What happens then? When I try to send messages to the server I only get errors, the server doesn't print that a new client has connected, and for some reason my computer starts working really hard. qtcreator also gets super slow until I close the server program.
What I am actually trying to achieve is an object which derives from the thread, but I have heard that this isn't a very good idea in C++.
The listener loop in the server:
for (;;)
{
    if ((sock_CONNECTION = accept(sock_LISTEN, (SOCKADDR*)&ADDRESS, &AddressSize)))
    {
        cout << "\nClient connected" << endl;
        Client client(sock_CONNECTION);               // new object, pass the socket
        std::thread t1(&Client::startClient, client); // create a thread running the method
        t1.detach();
    }
}
the Client class:
Client::Client(SOCKET socket)
{
    this->socket = socket;
    cout << "hello from clientconstructor ! " << endl;
}

void Client::startClient()
{
    cout << "hello from clientmethod ! " << endl;
    // WHEN I ADD THE CODE BELOW I DON'T GET ANY OUTPUT ON THE CONSOLE!
    // No messages get received either.
    char RecvdData[100] = "";
    int ret;
    for (;;)
    {
        try
        {
            ret = recv(socket, RecvdData, sizeof(RecvdData), 0);
            cout << RecvdData << endl;
        }
        catch (int e)
        {
            cout << "Error sending message to client" << endl;
        }
    }
}
It looks like your Client object is going out of scope after you detach it.
if (/* ... */)
{
    Client client(sock_CONNECTION);
    std::thread t1(&Client::startClient, client);
    t1.detach();
} // client GOES OUT OF SCOPE HERE
You'll need to create a pointer to your Client object and manage its lifetime, or define it at a higher scope where it won't go out of scope.
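A minimal sketch of the pointer approach: move a shared_ptr into the thread, so the object outlives the enclosing scope for as long as the thread runs. The Client here is an illustrative stand-in for the real class, with a placeholder body that just writes one byte so the effect is observable:

```cpp
#include <memory>
#include <thread>
#include <sys/socket.h>

// Illustrative stand-in for the real Client class.
struct Client {
    int socket;
    explicit Client(int s) : socket(s) {}
    void startClient()
    {
        // placeholder for the real recv loop: send one byte and return
        char c = '!';
        send(socket, &c, 1, 0);
    }
};

// The lambda captures the shared_ptr by value, so the Client stays alive
// until the detached thread finishes, even after this function returns.
void spawn_client(int sock)
{
    auto client = std::make_shared<Client>(sock);
    std::thread([client] { client->startClient(); }).detach();
}
```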
The fact that you never see any output from the Server likely means that your client is unable to connect to your Server in the first place. Check that you are doing your IP addressing correctly in your connect calls. If that looks good, then maybe there is a firewall blocking the connection. Turn that off or open the necessary ports.
Your connecting client is likely getting an error from connect that it is interpreting as success and then trying to send lots of traffic on an invalid socket as fast as it can, which is why your machine seems to be working hard.
You definitely need to check the return values from accept, connect, read and write more carefully. Also, make sure that you aren't running your Server's accept socket in non-blocking mode. I don't think that you are because you aren't seeing any output, but if you did it would infinitely loop on error spawning tons of threads that would also infinitely loop on errors and likely bring your machine to its knees.
If I misunderstood what is happening and you do actually get a client connection and have "Client connected" and "hello from client method ! " output, then it is highly likely that your calls to recv() are failing and you are ignoring the failure. So, you are in a tight infinite loop that is repeatedly outputting "" as fast as possible.
You also probably want to change your catch block to catch (...) rather than int. I doubt either recv() or cout throw an int. Even so, that catch block won't be invoked when recv fails because recv doesn't throw any exceptions AFAIK. It returns its failure indicator through its return value.

Boost ASIO async_read_some

I am having difficulties in implementing a simple TCP server. The following code is taken from boost::asio examples, "Http Server 1" to be precise.
void connection::start() {
    socket_.async_read_some(
        boost::asio::buffer(buffer_),
        boost::bind(
            &connection::handle_read, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred
        )
    );
}

void connection::handle_read(const boost::system::error_code& e, std::size_t bytes_transferred) {
    if (!e && bytes_transferred) {
        std::cout << " " << bytes_transferred << "b" << std::endl;
        data_.append(buffer_.data(), buffer_.data() + bytes_transferred);
        //(1) what here?
        socket_.async_read_some(
            boost::asio::buffer(buffer_),
            boost::bind(
                &connection::handle_read, shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred
            )
        );
    }
    else // if (e != boost::asio::error::operation_aborted)
    {
        std::cout << data_ << std::endl;
        connection_manager_.stop(shared_from_this());
    }
}
In the original code buffer_ is big enough to hold the entire request. That's not what I need, so I've changed its size to 32 bytes.
The server compiles and listens on port 80 of localhost, so I try to connect to it via my web browser.
Now, if statement (1) is commented out, only the first 32 bytes of the request are read and the connection hangs. The web browser keeps waiting for the response, and the server waits for... I don't know what.
If (1) is uncommented, the entire request is read (and appended to data_), but it never stops: I have to cancel the request in my browser, and only then does the else { } part run and I see my request on stdout.
Question 1: How should I handle a large request?
Question 2: How should I cache the request (currently I append the buffer to a string)?
Question 3: How can I tell that the request is over? In HTTP there is always a response, so my web browser keeps waiting for one and doesn't close the connection; but how can my server know that the request is over (so it can close the connection, or reply with some "200 OK")?
Suppose the browser sends you 1360 bytes of data, and you ask asio to read some of it into a buffer that you have declared to be only 32 bytes.
The first time, your handler will be called with the first 32 bytes of the data. If you comment out (1), the rest of the data the browser sent (actually already sitting in the OS receive buffer, waiting for you to pick it up) is never read, and you block behind io_service::run() waiting for a miracle.
If you uncomment (1), your loop starts: you read the first block, then the next, and so on until the data the browser sent runs out. But when you then ask asio to read some more, it waits for data that will never come, because the browser has already sent its request and is waiting for your answer. When you cancel the request in the browser, it closes its socket, and your handler is called with an error saying no more data can be read since the connection is closed.
What you should do to make it work is learn the HTTP format, so you know what data the browser sent you and can provide a proper answer; then your communication with the client can proceed. In this case the end of the request head is \r\n\r\n, and when you see it you shouldn't read any more data; instead, process what you have read so far and send a response to the browser.
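The stopping condition described above can be sketched as a simple check on the accumulated data: stop issuing reads once the \r\n\r\n that ends an HTTP request head has been seen. request_head_complete is an illustrative name:

```cpp
#include <string>

// Sketch: after appending each async_read_some chunk to the accumulated
// data, call this; once it returns true, stop reading and send the response.
bool request_head_complete(const std::string& data)
{
    return data.find("\r\n\r\n") != std::string::npos;
}
```

In handle_read, this decides between re-issuing async_read_some at (1) and moving on to writing the reply.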

boost asio "A non-recoverable error occurred during database lookup"

I'm currently stress-testing my server.
Sometimes I get the error "A non-recoverable error occurred during database lookup" from error.message().
The error is passed to my handler function via boost::asio::placeholders::error on the async_read call.
I have no idea what this error means, and I am not able to reproduce it on purpose; it only happens sometimes and seems random (of course it is not, but it seems so).
Has anyone ever gotten this error message, and if so, do you know where it comes from?
EDIT 1
Here's what I found in the Boost library; the error is:
no_recovery = BOOST_ASIO_NETDB_ERROR(NO_RECOVERY)
But I can't figure out what this means...
EDIT 2
Just so you know everything about my problem, here's the design:
I have only one io_service.
Every time a user connects, an async_read is started, waiting for something to read.
When it reads something, most of the time it does some work on a thread (coming from a pool), and writes something synchronously back to the user (using boost write).
Even though Boost 1.37 claims that synchronous write is thread-safe, I'm really worried that the problem comes from this.
If the user sends different messages really quickly, async_read and write can end up being called simultaneously; can that do any harm?
EDIT 3
Here is the portion of my code that Dave S asked about:
void TCPConnection::listenForCMD() {
    boost::asio::async_read(m_socket,
        boost::asio::buffer(m_inbound_data, 3),
        boost::asio::transfer_at_least(3),
        boost::bind(&TCPConnection::handle_cmd,
                    shared_from_this(),
                    boost::asio::placeholders::error));
}
void TCPConnection::handle_cmd(const boost::system::error_code& error) {
    if (error) {
        std::cout << "ERROR READING : " << error.message() << std::endl;
        return;
    }
    std::string str(m_inbound_data, 3); // m_inbound_data is not null-terminated
    std::cout << "COMMAND FUNCTION: " << str << std::endl;
    a_fact func = CommandFactory::getInstance()->getFunction(str);
    if (func == NULL) {
        std::cout << "command doesn't exist: " << str << std::endl;
        return;
    }
    protocol::in::Command::pointer cmd = func(m_socket, client);
    cmd->setCallback(boost::bind(&TCPConnection::command_is_done,
                                 shared_from_this()));
    cmd->parse();
}
m_inbound_data is a char[3].
Once cmd->parse() is done, it calls the callback command_is_done:
void TCPConnection::command_is_done() {
    m_inbound_data[0] = '0';
    m_inbound_data[1] = '0';
    m_inbound_data[2] = '0';
    listenForCMD();
}
The error occurs in handle_cmd, at the error check on the first line.
As I said before, cmd->parse() parses the command it just received, sometimes launching blocking code on a thread from the pool. On that thread, it sends data back to the client with a synchronous write.
IMPORTANT: the callback command_is_done will always be called before that thread is launched. This means listenForCMD has already been called when the thread may send something back to the client with a synchronous write; hence my initial worries.
When it reads something, most of the time it does some work on a thread (coming from a pool), and writes something synchronously back to the user (using boost write). Even though Boost 1.37 claims that synchronous write is thread-safe, I'm really worried that the problem comes from this.
That claim is incorrect. A single boost::asio::tcp::socket is not thread-safe; the documentation is very clear:
Thread Safety
Distinct objects: Safe.
Shared objects: Unsafe.
It is also very odd to mix async_read() with a blocking write().
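Since a shared socket is unsafe, the writes coming from the pool threads need to be serialized. In Asio the idiomatic tool is an io_service::strand; the underlying idea can be sketched with a mutex around a plain blocking write. GuardedWriter is an illustrative helper under that assumption, not an Asio API:

```cpp
#include <mutex>
#include <sys/socket.h>
#include <cstddef>

// Sketch: serialize all writes to one socket behind a mutex, so two pool
// threads can never interleave their bytes on the wire. An asio strand
// achieves the same serialization for asynchronous handlers.
class GuardedWriter {
public:
    explicit GuardedWriter(int fd) : fd_(fd) {}

    bool write_all(const char* buf, size_t len)
    {
        std::lock_guard<std::mutex> lock(m_);
        size_t sent = 0;
        while (sent < len) {
            ssize_t r = send(fd_, buf + sent, len - sent, 0);
            if (r <= 0)
                return false;
            sent += static_cast<size_t>(r);
        }
        return true;
    }

private:
    std::mutex m_;
    int fd_;
};
```

The same guard would also have to cover any concurrent reads, which is why posting everything through one strand is usually the cleaner design.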