SSL_shutdown returns -1 with SSL_ERROR_WANT_READ infinitely long - c++

I cannot understand how to properly use SSL_shutdown in OpenSSL. Similar questions have arisen several times in different places, but I couldn't find a solution that exactly matches my situation. I am using the package libssl-dev 1.0.1f-1ubuntu2.15 (the latest for now) under Ubuntu in VirtualBox.
I am working with a small legacy C++ wrapper over the OpenSSL library with non-blocking IO for server and client sockets. The wrapper seems to work fine, except in the following test case (I'm not providing the code of the unit test itself, because it contains a lot of code not related to the problem):
1. Initialize a server socket with a self-signed certificate.
2. Connect to that socket. The SSL handshake completes successfully, except that I'm ignoring the X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT return of SSL_get_verify_result for now.
3. Successfully send/receive some data through the connection. This step is optional and it doesn't affect the problem which follows; I mention it only to show that the connection is really established and set into a correct state.
4. Try to shut down the SSL connection (server or client, it doesn't matter which one), which leads to an infinite wait on select.
All of the calls to SSL_read and SSL_write are synchronized, and a locking_callback is also set. After step 3 there are no other operations on the sockets except shutting one of them down.
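For reference, the locking_callback setup amounts to roughly the following (a simplified sketch using pthreads; the wrapper's real code differs and error handling is omitted):
#include <openssl/crypto.h>
#include <pthread.h>
#include <vector>

static std::vector<pthread_mutex_t> ssl_locks;

// OpenSSL 1.0.x calls this for every lock/unlock request.
static void locking_callback(int mode, int n, const char* /*file*/, int /*line*/)
{
    if (mode & CRYPTO_LOCK)
        pthread_mutex_lock(&ssl_locks[n]);
    else
        pthread_mutex_unlock(&ssl_locks[n]);
}

void InitOpenSslLocking()
{
    ssl_locks.resize(CRYPTO_num_locks());
    for (auto& m : ssl_locks)
        pthread_mutex_init(&m, NULL);
    CRYPTO_set_locking_callback(&locking_callback);
}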
In the code snippets below I omit all of the error processing and debugging code for clarity; none of the OpenSSL/POSIX calls fail (except the cases where I left error processing in place). I also provide the connect functions, in case this is important:
void OpenSslWrapper::ConnectToHost( ErrorCode& ec )
{
    ctx_ = SSL_CTX_new(SSLv23_client_method());
    SSL_CTX_load_verify_locations(ctx_, NULL, config_.verify_locations.c_str());
    if (config_.use_standard_verify_locations)
    {
        SSL_CTX_set_default_verify_paths(ctx_);
    }
    bio_ = BIO_new_ssl_connect(ctx_);
    BIO_get_ssl(bio_, &ssl_);
    SSL_set_mode(ssl_, SSL_MODE_AUTO_RETRY);
    std::string hostname = config_.address + ":" + to_string(config_.port);
    BIO_set_conn_hostname(bio_, hostname.c_str());
    BIO_set_nbio(bio_, 1);
    int res = 0;
    while ((res = BIO_do_connect(bio_)) <= 0)
    {
        BIO_get_fd(bio_, &fd_);
        if (!BIO_should_retry(bio_))
        { /* Never happens */ }
        WaitAfterError(res);
    }
    res = SSL_get_verify_result(ssl_);
    if (res != X509_V_OK && res != X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT)
    { /* Never happens */ }
    SSL_set_mode(ssl_, SSL_MODE_ENABLE_PARTIAL_WRITE);
    SSL_set_mode(ssl_, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);
}
// config_.handle is a file descriptor that was obtained from
// the accept function; ctx_ is also set in advance
void OpenSslWrapper::ConnectAsServer( ErrorCode& ec )
{
    ssl_ = SSL_new(ctx_);
    int flags = fcntl(config_.handle, F_GETFL, 0);
    flags |= O_NONBLOCK;
    fcntl(config_.handle, F_SETFL, flags);
    SSL_set_fd(ssl_, config_.handle);
    while (true)
    {
        int res = SSL_accept(ssl_);
        if( res > 0) {break;}
        if( !WaitAfterError(res).isSucceded() )
        { /* never happens */ }
    }
    SSL_set_mode(ssl_, SSL_MODE_ENABLE_PARTIAL_WRITE);
    SSL_set_mode(ssl_, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);
}
// The trouble is here
void OpenSSLWrapper::Shutdown()
{
    // ...
    while (true)
    {
        int ret = SSL_shutdown(ssl_);
        if (ret > 0) {break;}
        else if (ret == 0) {continue;}
        else {WaitAfterError(ret);}
    }
    // ...
}
ErrorCode OpenSSLWrapper::WaitAfterError(int res)
{
    int err = SSL_get_error(ssl_, res);
    switch (err)
    {
    case SSL_ERROR_WANT_READ:
        WaitForFd(fd_, k_WaitRead);
        return ErrorCode::Success;
    case SSL_ERROR_WANT_WRITE:
    case SSL_ERROR_WANT_CONNECT:
    case SSL_ERROR_WANT_ACCEPT:
        WaitForFd(fd_, k_WaitWrite);
        return ErrorCode::Success;
    default:
        return ErrorCode::Fail;
    }
}
WaitForFd is just a simple wrapper over select which waits infinitely long on the given socket, in the read or write FD_SET as requested.
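For completeness, WaitForFd is roughly equivalent to this (a simplified sketch; the real helper also checks the return value of select, and WaitMode is just an illustrative name):
void OpenSSLWrapper::WaitForFd(int fd, WaitMode mode)
{
    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(fd, &fds);
    // NULL timeout: block until the descriptor becomes readable or writable.
    if (mode == k_WaitRead)
        select(fd + 1, &fds, NULL, NULL, NULL);
    else
        select(fd + 1, NULL, &fds, NULL, NULL);
}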
When the client calls Shutdown, the first call to SSL_shutdown returns 0. The second call returns -1 and SSL_get_error returns SSL_ERROR_WANT_READ, but selecting on the file descriptor for reading never returns. If I specify a timeout on select, SSL_shutdown keeps returning -1 and SSL_get_error keeps returning SSL_ERROR_WANT_READ. The loop never exits. After the first call to SSL_shutdown the shutdown state is always SSL_SENT_SHUTDOWN.
It doesn't matter whether I shut down the server or the client: both show the same behavior.
There's also a strange situation when I connect to some external host. The first call to SSL_shutdown returns 0 and the second one -1 with SSL_ERROR_WANT_READ. Selecting on the socket finishes successfully, but when I call SSL_shutdown the next time I again get -1, this time with SSL_ERROR_SYSCALL and errno=0. As I read elsewhere, that is not a big deal, although it still seems strange and may somehow be related, so I mention it here.
UPD: I ported the same code to Windows; the behavior didn't change.
P.S. I am sorry for mistakes in my English, I'd be grateful if someone corrects my language.

Related

C++ + linux handle SIGPIPE signal

Yes, I understand this issue has been discussed many times.
And yes, I've seen and read these and other discussions (1, 2, 3), and I still can't fix my code myself.
I am writing my own web server. In the loop below, it listens on a socket, accepts each new client and stores it in a vector.
In my class I have this struct:
struct Connection
{
    int socket;
    std::chrono::system_clock::time_point tp;
    std::string request;
};
along with the following data structures:
std::mutex connected_clients_mux_;
std::vector<HttpServer::Connection> connected_clients_;
and the loop itself:
//...
bind(listen_socket_, (struct sockaddr *)&addr_, sizeof(addr_));
listen(listen_socket_, 4);
while(1){
    connection_socket_ = accept(listen_socket_, NULL, NULL);
    //...
    Connection connection_;
    //...
    connected_clients_mux_.lock();
    this->connected_clients_.push_back(connection_);
    connected_clients_mux_.unlock();
}
It works: clients connect, send and receive requests.
But the problem is that if the connection is broken (the client is killed with ^C), my program will not find out about it, even at the moment it writes:
void SendRespons(HttpServer::Connection socket_){
    write(socket_.socket, (socket_.request + std::to_string(socket_.socket)).c_str(), 1024);
}
As the title of this question suggests, at that point my app receives a SIGPIPE signal.
Again, I have seen "solutions".
signal(SIGPIPE, &SigPipeHandler);

void SigPipeHandler(int s) {
    //printf("Caught SIGPIPE\n%d", s);
}
but it does not help. At this moment we have the number (descriptor) of the socket to which the write was made; is it possible to "remember" it and close this particular connection in the handler method?
my system:
Operating System: Ubuntu 20.04.2 LTS
Kernel: Linux 5.8.0-43-generic
g++ --version
g++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
As stated in the links you give, the solution is to ignore SIGPIPE and CHECK THE RETURN VALUE of the write calls. The latter is needed for correct operation (short writes) in all but the most trivial, unloaded cases anyway. Also, the fixed write size of 1024 that you are using is probably not what you want -- if your response string is shorter, you'll send a bunch of random garbage along with it. You probably really want something like:
void SendRespons(HttpServer::Connection socket_){
    auto data = socket_.request + std::to_string(socket_.socket);
    size_t sent = 0;
    while (sent < data.size()) {
        ssize_t len = write(socket_.socket, &data[sent], data.size() - sent);
        if (len < 0) {
            // There was an error -- might be EPIPE or EAGAIN or EINTR or even a few other
            // obscure corner cases. For EAGAIN or EINTR (which can only happen if your
            // program is set up to allow them), you probably want to try again.
            // Anything else, probably just close the socket and clean up.
            if (errno == EINTR)
                continue;
            close(socket_.socket);
            // should tell someone about it?
            break;
        }
        sent += len;
    }
}
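As a side note: to keep SIGPIPE from killing the process in the first place, you can either ignore it process-wide or suppress it per call. A small sketch of both options (pick one; the function names are just for illustration):
#include <csignal>
#include <sys/socket.h>

// Option 1: ignore SIGPIPE once at startup; a write to a dead socket then
// returns -1 with errno == EPIPE instead of terminating the process.
void IgnoreSigpipe() {
    signal(SIGPIPE, SIG_IGN);
}

// Option 2: use send() with MSG_NOSIGNAL instead of write(); no SIGPIPE is
// raised for this particular call even if the peer has gone away.
ssize_t WriteNoSigpipe(int fd, const void* buf, size_t len) {
    return send(fd, buf, len, MSG_NOSIGNAL);
}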

Simplest IPC from one Linux app to another in C++ on raspberry pi

I need the simplest, most reliable IPC method for sending data from one C++ app running on the RPi to another app.
All I'm trying to do is send a string message of 40 characters from one app to another.
The first app runs as a service on boot; the other app is started at a later time and is frequently exited and restarted for debugging.
The frequent restarting of the second app for debugging is what's causing problems with the IPC methods I've tried so far.
I've tried about 3 different methods and here is where they failed:
File FIFO: the problem is that one program hangs while the other program is writing to the file.
Shared memory: cannot initialize it on one thread and read it from another thread. Also, frequent exits while debugging cause GDB crashes with "the following GDB command is taking too long to complete: -stack-list-frames --thread 1".
UDP socket on localhost: same issue as above, plus improper exits block the socket, forcing me to reboot the device.
Non-blocking pipe: the receiving process never gets any messages.
What else can I try? I don't want to pull in the DBus library; it seems too complex for this application.
Any simple server and client code, or a link to it, would be helpful.
Here is my non-blocking pipe code, which doesn't work for me.
I assume it's because I don't have a reference to the pipe from one app in the other.
Code sourced from here: https://www.geeksforgeeks.org/non-blocking-io-with-pipes-in-c/
char* msg1 = "hello";
char* msg2 = "bye !!";
int p[2], i;

bool InitClient()
{
    // error checking for pipe
    if(pipe(p) < 0)
        exit(1);
    // error checking for fcntl
    if(fcntl(p[0], F_SETFL, O_NONBLOCK) < 0)
        exit(2);
    // Read
    int nread;
    char buf[MSGSIZE];
    // write link
    close(p[1]);
    while (1) {
        // read call: if it returns -1 then the pipe is
        // empty because of fcntl
        nread = read(p[0], buf, MSGSIZE);
        switch (nread) {
        case -1:
            // case -1 means pipe is empty and errno
            // is set to EAGAIN
            if(errno == EAGAIN) {
                printf("(pipe empty)\n");
                sleep(1);
                break;
            }
        default:
            // text read
            // by default return no. of bytes
            // which read call read at that time
            printf("MSG = %s\n", buf);
        }
    }
    return true;
}

bool InitServer()
{
    // error checking for pipe
    if(pipe(p) < 0)
        exit(1);
    // error checking for fcntl
    if(fcntl(p[0], F_SETFL, O_NONBLOCK) < 0)
        exit(2);
    // Write
    // read link
    close(p[0]);
    // write 3 times "hello" in 3 second interval
    for(i = 0 ; i < 3000000000 ; i++) {
        write(p[0], msg1, MSGSIZE);
        sleep(3);
    }
    // write "bye" once
    write(p[0], msg2, MSGSIZE);
    return true;
}
Please consider ZeroMQ
https://zeromq.org/
It is lightweight and has wrappers for all major programming languages.
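For a 40-character message, a ZeroMQ PUSH/PULL pair is only a few lines. A rough sketch using the libzmq C API (the ipc endpoint name is made up; error checking omitted):
#include <zmq.h>
#include <cstring>
#include <cstdio>

// Receiver (the service started at boot): binds and waits for messages.
void receiver() {
    void* ctx = zmq_ctx_new();
    void* sock = zmq_socket(ctx, ZMQ_PULL);
    zmq_bind(sock, "ipc:///tmp/myapp.ipc");   // endpoint name is illustrative
    char buf[41] = {0};
    zmq_recv(sock, buf, 40, 0);               // blocks until a message arrives
    printf("got: %s\n", buf);
    zmq_close(sock);
    zmq_ctx_term(ctx);
}

// Sender (the app that is restarted during debugging): connects and sends.
void sender(const char* msg) {
    void* ctx = zmq_ctx_new();
    void* sock = zmq_socket(ctx, ZMQ_PUSH);
    zmq_connect(sock, "ipc:///tmp/myapp.ipc");
    zmq_send(sock, msg, strlen(msg), 0);
    zmq_close(sock);
    zmq_ctx_term(ctx);
}
Either side can be started and restarted independently; ZeroMQ reconnects behind the scenes, which is exactly what the frequent debug restarts need.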

Strange IOCP behaviour when communicating with browsers

I'm writing an IOCP server for video streaming from a desktop client to a browser.
Both sides use the WebSocket protocol to unify the server's architecture (and because there is no other way for browsers to perform a full-duplex exchange).
The worker thread starts like this:
unsigned int __stdcall WorkerThread(void * param){
    int ThreadId = (int)param;
    OVERLAPPED *overlapped = nullptr;
    IO_Context *ctx = nullptr;
    Client *client = nullptr;
    DWORD transfered = 0;
    BOOL QCS = 0;
    while(WAIT_OBJECT_0 != WaitForSingleObject(EventShutdown, 0)){
        QCS = GetQueuedCompletionStatus(hIOCP, &transfered, (PULONG_PTR)&client, &overlapped, INFINITE);
        if(!client){
            if( Debug ) printf("No client\n");
            break;
        }
        ctx = (IO_Context *)overlapped;
        if(!QCS || (QCS && !transfered)){
            printf("Error %d\n", WSAGetLastError());
            DeleteClient(client);
            continue;
        }
        switch(auto opcode = client->ProcessCurrentEvent(ctx, transfered)){
            // Client owed to receive some data
            case OPCODE_RECV_DEBT:{
                if((SOCKET_ERROR == client->Recv()) && (WSA_IO_PENDING != WSAGetLastError())) DeleteClient(client);
                break;
            }
            // Client received all data or the beginning of a new message
            case OPCODE_RECV_DONE:{
                std::string message;
                client->GetInput(message);
                // Analyzing the first byte of the WebSocket frame
                switch( opcode = message[0] & 0xFF ){
                    // HTTP_HANDSHAKE is 'G' - from GET HTTP...
                    case HTTP_HANDSHAKE:{
                        message = websocket::handshake(message);
                        while(!client->SetSend(message)) Sleep(1); // Set outgoing data
                        if((SOCKET_ERROR == client->Send()) && (WSA_IO_PENDING != WSAGetLastError())) DeleteClient(client);
                        break;
                    }
                    // Browser sent a closing frame (0x88) - performing clean WebSocket closure
                    case FIN_CLOSE:{
                        websocket::frame frame;
                        frame.parse(message);
                        frame.masked = false;
                        if( frame.pl_len == 0 ){
                            unsigned short reason = 1000;
                            frame.payload.resize(sizeof(reason));
                            frame.payload[0] = (reason >> 8) & 0xFF;
                            frame.payload[1] = reason & 0xFF;
                        }
                        frame.pack(message);
                        while(!client->SetSend(message)) Sleep(1);
                        if((SOCKET_ERROR == client->Send()) && (WSA_IO_PENDING != WSAGetLastError())) DeleteClient(client);
                        shutdown(client->Socket(), SD_SEND);
                        break;
                    }
IO context struct:
struct IO_Context{
    OVERLAPPED overlapped;
    WSABUF data;
    char buffer[IO_BUFFER_LENGTH];
    unsigned char opcode;
    unsigned long long debt;
    std::string message;
    IO_Context(){
        debt = 0;
        opcode = 0;
        data.buf = buffer;
        data.len = IO_BUFFER_LENGTH;
        overlapped.Offset = overlapped.OffsetHigh = 0;
        overlapped.Internal = overlapped.InternalHigh = 0;
        overlapped.Pointer = nullptr;
        overlapped.hEvent = nullptr;
    }
    ~IO_Context(){ while(!HasOverlappedIoCompleted(&overlapped)) Sleep(1); }
};
Client Send function:
int Client::Send(){
    int var_buf = O.message.size();
    // "O" is the IO_Context for Output
    O.data.len = (var_buf>IO_BUFFER_LENGTH)?IO_BUFFER_LENGTH:var_buf;
    var_buf = O.data.len;
    while(var_buf > 0) O.data.buf[var_buf] = O.message[--var_buf];
    O.message.erase(0, O.data.len);
    return WSASend(connection, &O.data, 1, nullptr, 0, &O.overlapped, nullptr);
}
When the desktop client disconnects (it just uses closesocket(), no shutdown()), GetQueuedCompletionStatus returns TRUE and sets transfered to 0 - in this case WSAGetLastError() returns 64 (The specified network name is no longer available), which makes sense - the client disconnected (the line with if(!QCS || (QCS && !transfered))). But when the browser disconnects, the error codes confuse me... It can be 0, 997 (pending operation), 87 (invalid parameter)... and no codes related to the end of the connection.
Why does IOCP report these events? How can it report a pending operation? Why is the error 0 when 0 bytes were transferred? It also leads to endlessly trying to delete the object associated with the overlapped structure, because the destructor spins in ~IO_Context(){ while(!HasOverlappedIoCompleted(&overlapped)) Sleep(1); } for safe deletion. In the DeleteClient call the socket is closed with closesocket(), but, as you can see, I'm issuing a shutdown(client->Socket(), SD_SEND); call before it (in the FIN_CLOSE section).
I understand that a connection has two sides and that closing it on the server side does not mean the other side will close it too. But I need to create a stable server, immune to bad and half-open connections. For example, the user of the web application can rapidly press F5 to reload the page a few times (yeah, some dudes do so :) ) - the connection will be reopened a few times, and the server must not lag or crash because of such actions.
How do I handle these "bad" events in IOCP?
You have a lot of wrong code here.
while(WAIT_OBJECT_0 != WaitForSingleObject(EventShutdown, 0)){
QCS = GetQueuedCompletionStatus(hIOCP, &transfered, (PULONG_PTR)&client, &overlapped, INFINITE);
This is inefficient and wrong code for stopping the WorkerThread. First, you make an extra WaitForSingleObject call and use an extra EventShutdown, and, most importantly, it fails to actually shut down: if your code is waiting for a packet inside GetQueuedCompletionStatus at the moment you signal EventShutdown, that does not break the GetQueuedCompletionStatus call - you keep waiting forever. The correct way to shut down is to call PostQueuedCompletionStatus(hIOCP, 0, 0, 0) instead of SetEvent(EventShutdown); when a worker thread sees client == 0 it breaks out of its loop. Usually you need multiple worker threads (not a single one), and then you need multiple PostQueuedCompletionStatus(hIOCP, 0, 0, 0) calls - exactly as many as there are worker threads. You also need to synchronize these calls with the I/O - do this only after all I/O has already completed and no new I/O packets will be queued to the IOCP, so the "null packets" are the last ones queued to the port.
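In other words, shutdown can look roughly like this (a sketch; workerThreadCount stands for however many worker threads you created):
// Post one "null packet" per worker thread, only after all real I/O has completed
// and nothing else will be queued to the port. Each worker sees client == 0
// (a NULL completion key) and breaks out of its loop.
for (ULONG i = 0; i < workerThreadCount; ++i)
    PostQueuedCompletionStatus(hIOCP, 0, 0, nullptr);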
if(!QCS || (QCS && !transfered)){
printf("Error %d\n", WSAGetLastError());
DeleteClient(client);
continue;
}
If !QCS, the value in client is not initialized; you simply cannot use it, and calling DeleteClient(client) is wrong under this condition.
When an object (client) is used from several threads - who must delete it? What happens if one thread deletes the object while another is still using it? The correct solution is to use reference counting on such an object (client). Also, based on your code - do you have a single client per hIOCP? Because you retrieve the pointer to the client as the completion key of hIOCP, which is the same for all I/O operations on all sockets bound to that hIOCP. All this is wrong design.
You need to store the pointer to the client in the IO_Context, add a reference to the client when the IO_Context is created, and release the client in the IO_Context destructor.
class IO_Context : public OVERLAPPED {
    Client *client;
    ULONG opcode;
    // ...
public:
    IO_Context(Client *client, ULONG opcode) : client(client), opcode(opcode) {
        client->AddRef();
    }
    ~IO_Context() {
        client->Release();
    }
    void OnIoComplete(ULONG transfered) {
        OnIoComplete(RtlNtStatusToDosError(Internal), transfered);
    }
    void OnIoComplete(ULONG error, ULONG transfered) {
        client->OnIoComplete(opcode, error, transfered);
        delete this;
    }
    void CheckIoError(ULONG error) {
        switch(error) {
        case NOERROR:
        case ERROR_IO_PENDING:
            break;
        default:
            OnIoComplete(error, 0);
        }
    }
};
Then, do you have a single IO_Context? If yes, this is a fatal error. The IO_Context must be unique for every I/O operation.
if (IO_Context* ctx = new IO_Context(client, op))
{
    ctx->CheckIoError(WSAxxx(ctx) == 0 ? NOERROR : WSAGetLastError());
}
and in the worker threads:
ULONG WINAPI WorkerThread(void * param)
{
    ULONG_PTR key;
    OVERLAPPED *overlapped;
    ULONG transfered;
    while(GetQueuedCompletionStatus(hIOCP, &transfered, &key, &overlapped, INFINITE)) {
        switch (key){
        case '_io_':
            static_cast<IO_Context*>(overlapped)->OnIoComplete(transfered);
            continue;
        case 'stop':
            // ...
            return 0;
        default: __debugbreak();
        }
    }
    __debugbreak();
    return GetLastError();
}
Code like while(!HasOverlappedIoCompleted(&overlapped)) Sleep(1); is always wrong. Absolutely and always. Never write such code.
ctx = (IO_Context *)overlapped; - even though in your concrete case this gives the correct result, it is not nice and can break if you change the definition of IO_Context. You can use CONTAINING_RECORD(overlapped, IO_Context, overlapped) if you use struct IO_Context{ OVERLAPPED overlapped; }, but it is better to use class IO_Context : public OVERLAPPED and static_cast<IO_Context*>(overlapped).
Now about "Why do IOCP select this events? How to handle this 'bad' events in IOCP?"
IOCP does not select anything. It simply signals when an I/O operation completes. That's all. Which specific WSA errors you get on different network operations is completely independent of whether you use IOCP or any other completion mechanism.
On a graceful disconnect it is normal for the error code to be 0 with 0 bytes transferred in a recv operation. You need to permanently have a recv request active after the connection is established, and if a recv completes with 0 bytes transferred it means that a disconnect has happened.
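A sketch of how that can look in the client's completion handler (OP_RECV and Close() are placeholder names, not part of the code above):
void Client::OnIoComplete(ULONG opcode, ULONG error, ULONG transfered)
{
    if (opcode == OP_RECV) {
        if (error != NOERROR || transfered == 0) {
            // An error or a 0-byte completion on recv means the peer disconnected:
            // stop posting I/O; the last IO_Context to be destroyed drops the final reference.
            Close();
            return;
        }
        // ... process the received bytes, then immediately post the next recv
        // so that a receive is always pending on this connection.
    }
    // ... handle send completions, etc.
}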

Client application crash causes Server to crash? (C++)

I'm not sure if this is a known issue that I am running into, but I couldn't find a good search string that would give me any useful results.
Anyway, here's the basic rundown:
We've got a relatively simple application that takes data from a source (DB or file) and streams that data over TCP to connected clients as new data comes in. It's a relatively low number of clients; I would say at most 10 clients per server, so we have the following rough design:
client: connects to the server and reads (with a timeout set higher than the server heartbeat message interval). It blocks on read.
server: one listening thread that accepts connections and then spawns a writer thread to read from the data source and write to the client. The writer thread is also detached (using boost::thread, so we just call the .detach() function). It blocks on writes indefinitely, but does check errno for errors before writing. We start the servers using a single Perl script, calling "fork" for each server process.
The problem(s):
At seemingly random times, the client will shut down with a "connection terminated (SUCCESFUL)" message indicating that the remote server shut the socket down on purpose. However, when this happens the SERVER application ALSO closes, without any errors or anything. It just crashes.
Now, to make the problem worse, we have multiple instances of the server app started by a startup script, serving different files on different ports. When ONE of the servers crashes like this, ALL the servers crash out.
Both the server and the client use the same "Connection" library created in-house. It's mostly a C++ wrapper for the C socket calls.
Here's some rough code for the write and read functions in the Connection library:
int connectionTimeout_read = 60 * 60 * 1000;

int Socket::readUntil(char* buf, int amount) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, connectionTimeout_read);
    if(readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if(readyFds == 0)
    {
        status = CONNECTION_TIMEOUT;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if( fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    int rec = recv(fd, buf, amount, MSG_WAITALL);
    if(rec == 0)
        status = CONNECTION_CLOSED;
    else if(rec < 0)
        status = convertFlagToStatus(errno);
    else
        status = CONNECTION_NORMAL;
    lastReadBytes = rec;
    return rec;
}

int Socket::write(const void* buf, int size) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, -1);
    if(readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if(readyFds == 0)
    {
        status = CONNECTION_TERMINATED;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if(fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    if(epEvents[0].events != EPOLLOUT)
    {
        status = CONNECTION_CLOSED;
        return 0;
    }
    int bytesWrote = ::send(socket, buf, size, 0);
    if(bytesWrote < 0)
        status = convertFlagToStatus(errno);
    lastWriteBytes = bytesWrote;
    return bytesWrote;
}
Any help solving this mystery bug would be great! At the VERY least, I would like it to NOT crash the server even if the client crashes (which is really strange to me, since there is no two-way communication).
Also, for reference, here is the server listening code:
while(server.getStatus() == connection::CONNECTION_NORMAL)
{
    connection::Socket s = server.listen();
    if(s.getStatus() != connection::CONNECTION_NORMAL)
    {
        fprintf(stdout, "failed to accept a socket. error: %s\n", connection::getStatusString(s.getStatus()));
    }
    DATASOURCE* dataSource;
    dataSource = open_datasource(XXXX); /* edited */
    if(dataSource == NULL)
    {
        fprintf(stdout, "FATAL ERROR. DATASOURCE NOT FOUND\n");
        return;
    }
    boost::thread fileSender(Sender(s, dataSource));
    fileSender.detach();
}
...And also here is the spawned child sending thread:
::signal(SIGPIPE, SIG_IGN);
//const int headerNeeds = 29;
const int BUFFERSIZE = 2000;
char buf[BUFFERSIZE];
bool running = true;
while(running)
{
    memset(buf, '\0', BUFFERSIZE*sizeof(char));
    unsigned int readBytes = 0;
    while((readBytes = read_datasource(buf, sizeof(unsigned char), BUFFERSIZE, dataSource)) == 0)
    {
        boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    }
    socket.write(buf, readBytes);
    if(socket.getStatus() != connection::CONNECTION_NORMAL)
        running = false;
}
fprintf(stdout, "socket error: %s\n", connection::getStatusString(socket.getStatus()));
socket.close();
fprintf(stdout, "sender exiting...\n");
Any insights would be welcome! Thanks in advance.
You've probably got everything backwards... when the server crashes, the OS will close all its sockets. So the server crash happens first and causes the client to get the disconnect message (a FIN flag in a TCP segment, actually); the crash is not a result of the socket closing.
Since you have multiple server processes crashing at the same time, I'd look at resources they share, and also any scheduled tasks that all servers would try to execute at the same time.
EDIT: You don't have a single client connecting to multiple servers, do you? Note that TCP connections are always bidirectional, so the server process does get feedback if a client disconnects. Some internet providers have even been caught generating RST packets on connections that fail some test for suspicious traffic.
Write a signal handler. Make sure it uses only raw I/O functions to log problems (open, write, close, not fwrite, not printf).
Check return values. Check for negative return value from write on a socket, but check all return values.
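A minimal sketch of such a handler (the message and the re-raise behaviour are just one possible choice):
#include <csignal>
#include <unistd.h>

extern "C" void fatal_signal_handler(int sig)
{
    // Only async-signal-safe calls in here: write(), not printf()/fprintf().
    const char msg[] = "caught fatal signal\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);
    // Restore the default action and re-raise so the process still dumps core.
    signal(sig, SIG_DFL);
    raise(sig);
}

// At startup:
//   signal(SIGSEGV, fatal_signal_handler);
//   signal(SIGPIPE, SIG_IGN);   // and check the return value of every write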
Thanks for all the comments and suggestions.
After looking through the code and adding the signal handling as Ben suggested, the applications themselves are far more stable. Thank you for all your input.
The original problem, however, was due to a rogue script that one of the admins was running as root, which would randomly kill certain processes on the server-side machine (I won't get into what it was trying to do in reality; safe to say it was buggy).
Lesson learned: check the environment.
Thank you all for the advice.

C++ / Gloox: how to check when connection is down?

I'm trying to write my own Jabber bot in C++/gloox. Everything goes fine, but when the internet connection is down the bot thinks that it's still connected, and when the connection is up again the bot of course doesn't respond to any messages.
Once the bot has successfully connected, gloox's recv() always returns ConnNoError, even if the interface is down and the cable unplugged.
I tried using both blocking and non-blocking gloox connections and recv(), all without any result. Periodically checking the availability of the XMPP server in a different thread doesn't seem like a good idea, so how do I properly check whether the bot is connected right now?
If it's not possible to do with gloox alone, please point me to some good method, as long as it is available on Unix.
I had the same question, and found the reason why recv always returns ConnNoError. Here is what I found. When the connection is established, recv calls a function named dataAvailable in ConnectionTCPBase.cpp, which returns
( ( select( m_socket + 1, &fds, 0, 0, timeout == -1 ? 0 : &tv ) > 0 ) && FD_ISSET( m_socket, &fds ) != 0 )
Searching Google, I found a thread saying that FD_ISSET( m_socket, &fds ) detects whether the socket is readable but not whether it is closed... The return value of FD_ISSET( m_socket, &fds ) is always 0, even when the network is down. In that case the return value of dataAvailable is false, so the code below makes recv return ConnNoError.
if( !dataAvailable( timeout ) )
{
    m_recvMutex.unlock();
    return ConnNoError;
}
I don't know whether it is a bug or not; it seems not.
Later I tried another way: writing to the socket directly. This causes a SIGPIPE if the socket is closed; I catch that signal and then use cleanup to disconnect.
I finally figured out a graceful solution to this problem, using a heartbeat.
In the gloox thread, call heartBeat(), where m_pClient is a pointer to an instance of gloox::Client:
void CXmpp::heartBeat()
{
    m_pClient->xmppPing(m_pClient->jid(), this);
    if (++heart > 3) {
        m_pClient->disconnect();
    }
}
xmppPing will register itself with the event handler; when the ping comes back it will call handleEvent, and in handleEvent:
void CEventHandler::handleEvent(const Event& event)
{
    std::string sEvent;
    switch (event.eventType())
    {
    case Event::PingPing:
        sEvent = "PingPing";
        break;
    case Event::PingPong:
        sEvent = "PingPong";
        // received from server, decrease the count of heart
        --heart;
        break;
    case Event::PingError:
        sEvent = "PingError";
        break;
    default:
        break;
    }
    return;
}
Connect to the server, turn off the network, and 3 seconds later I get a disconnect!
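For completeness, a sketch of how the gloox loop might drive heartBeat() (assuming a 1-second non-blocking recv; the 10-second ping interval is arbitrary):
time_t lastPing = time(NULL);
while (running)
{
    // Wait up to 1 second (the timeout is in microseconds) for incoming stanzas.
    m_pClient->recv(1000000);
    if (time(NULL) - lastPing >= 10)
    {
        heartBeat();             // sends a ping; disconnects after 3 missed pongs
        lastPing = time(NULL);
    }
}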
You have to define onDisconnect(ConnectionError e) to be able to handle the disconnect event. The documentation is at http://camaya.net/api/gloox-0.9.9.12/classgloox_1_1ConnectionListener.html#a2
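A minimal sketch of that (the class name is illustrative; onConnect and onTLSConnect are the other ConnectionListener callbacks that need implementations):
#include <gloox/client.h>
#include <gloox/connectionlistener.h>

class Bot : public gloox::ConnectionListener
{
public:
    virtual void onConnect()
    {
        // the connection has been (re)established
    }
    virtual void onDisconnect(gloox::ConnectionError e)
    {
        // gloox noticed the connection is gone; schedule a reconnect here
    }
    virtual bool onTLSConnect(const gloox::CertInfo& info)
    {
        return true;   // accept the certificate (verify it properly in real code)
    }
};

// Registration, somewhere after the client is created:
//   client->registerConnectionListener(&bot);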