Client application crash causes Server to crash? (C++)

I'm not sure if this is a known issue that I am running into, but I couldn't find a good search string that would give me any useful results.
Anyway, here's the basic rundown:
We've got a relatively simple application that takes data from a source (DB or file) and streams it over TCP to connected clients as new data comes in. The number of clients is relatively low (I would say at most 10 clients per server), so we have the following rough design:
Client: connects to the server and reads (with a timeout set higher than the server's heartbeat message frequency). It blocks on read.
Server: one listening thread accepts connections and then spawns a writer thread that reads from the data source and writes to the client. The writer thread is detached (we use boost::thread, so we just call .detach()). It blocks on writes indefinitely, but does check errno for errors before writing. We start the servers from a single Perl script that calls "fork" for each server process.
The problem(s):
At seemingly random times, the client will shut down with a "connection terminated (SUCCESFUL)" status, indicating that the remote server shut the socket down on purpose. However, when this happens the SERVER application ALSO closes, without any errors or anything. It just crashes.
To make matters worse, we have multiple instances of the server app being started by a startup script, each serving different files on different ports. When ONE of the servers crashes like this, ALL the servers crash out.
Both the server and the client use the same "Connection" library created in-house. It's mostly a C++ wrapper around the C socket calls.
Here's some rough code for the read and write functions in the Connection library:
int connectionTimeout_read = 60 * 60 * 1000;

int Socket::readUntil(char* buf, int amount) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, connectionTimeout_read);
    if (readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if (readyFds == 0)
    {
        status = CONNECTION_TIMEOUT;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if (fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    int rec = recv(fd, buf, amount, MSG_WAITALL);
    if (rec == 0)
        status = CONNECTION_CLOSED;
    else if (rec < 0)
        status = convertFlagToStatus(errno);
    else
        status = CONNECTION_NORMAL;
    lastReadBytes = rec;
    return rec;
}

int Socket::write(const void* buf, int size) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, -1);
    if (readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if (readyFds == 0)
    {
        status = CONNECTION_TERMINATED;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if (fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    if (epEvents[0].events != EPOLLOUT)
    {
        status = CONNECTION_CLOSED;
        return 0;
    }
    int bytesWrote = ::send(socket, buf, size, 0);
    if (bytesWrote < 0)
        status = convertFlagToStatus(errno);
    lastWriteBytes = bytesWrote;
    return bytesWrote;
}
Any help solving this mystery bug would be great! At the VERY least, I would like the server NOT to crash even if the client crashes (which is really strange to me, since there is no two-way communication).
Also, for reference, here is the server listening code:
while (server.getStatus() == connection::CONNECTION_NORMAL)
{
    connection::Socket s = server.listen();
    if (s.getStatus() != connection::CONNECTION_NORMAL)
    {
        fprintf(stdout, "failed to accept a socket. error: %s\n",
                connection::getStatusString(s.getStatus()));
    }

    DATASOURCE* dataSource = open_datasource(XXXX); /* edited */
    if (dataSource == NULL)
    {
        fprintf(stdout, "FATAL ERROR. DATASOURCE NOT FOUND\n");
        return;
    }

    boost::thread fileSender(Sender(s, dataSource));
    fileSender.detach();
}
...And also here is the spawned child sending thread:
::signal(SIGPIPE, SIG_IGN);
//const int headerNeeds = 29;
const int BUFFERSIZE = 2000;
char buf[BUFFERSIZE];
bool running = true;

while (running)
{
    memset(buf, '\0', BUFFERSIZE * sizeof(char));
    unsigned int readBytes = 0;
    while ((readBytes = read_datasource(buf, sizeof(unsigned char), BUFFERSIZE, dataSource)) == 0)
    {
        boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    }

    socket.write(buf, readBytes);
    if (socket.getStatus() != connection::CONNECTION_NORMAL)
        running = false;
}

fprintf(stdout, "socket error: %s\n", connection::getStatusString(socket.getStatus()));
socket.close();
fprintf(stdout, "sender exiting...\n");
Any insights would be welcome! Thanks in advance.

You've probably got everything backwards: when the server crashes, the OS closes all of its sockets. So the server crash happens first and causes the client to get the disconnect message (a FIN flag in a TCP segment, actually); the crash is not a result of the socket closing.
Since you have multiple server processes crashing at the same time, I'd look at resources they share, and also any scheduled tasks that all servers would try to execute at the same time.
EDIT: You don't have a single client connecting to multiple servers, do you? Note that TCP connections are always bidirectional, so the server process does get feedback if a client disconnects. Some internet providers have even been caught generating RST packets on connections that fail some test for suspicious traffic.
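For what it's worth, here is a minimal sketch (my own illustration, not part of the answer above) of one way a write-only server can notice that a client has gone away: peek at the receive side of the connection between writes. recv() returning 0 means the peer performed an orderly shutdown.

#include <sys/socket.h>

// Illustrative sketch: check whether the peer has closed its end of the
// connection, without consuming any data that might be queued.
bool peerClosed(int fd)
{
    char tmp;
    ssize_t n = ::recv(fd, &tmp, 1, MSG_PEEK | MSG_DONTWAIT);
    if (n == 0)
        return true;    // orderly shutdown by the peer
    // n > 0: the peer sent data (left queued because of MSG_PEEK);
    // n < 0 with EAGAIN/EWOULDBLOCK: the peer is still there, nothing to read.
    return false;
}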
Write a signal handler. Make sure it uses only raw I/O functions to log problems (open, write, close, not fwrite, not printf).
Check return values. Check for negative return value from write on a socket, but check all return values.
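A rough sketch of what such a handler might look like (the names and log text here are illustrative, not from the original application); it sticks to async-signal-safe calls such as write(), then re-raises the signal with the default action so the process still terminates (and can dump core) after logging:

#include <signal.h>
#include <cstring>
#include <unistd.h>

// Minimal async-signal-safe crash logger (illustrative sketch).
extern "C" void crashHandler(int sig)
{
    const char msg[] = "fatal signal caught, shutting down\n";
    // write() is async-signal-safe; fprintf/printf are not.
    ssize_t ignored = ::write(STDERR_FILENO, msg, sizeof(msg) - 1);
    (void)ignored;

    // Restore the default action and re-raise so the process still
    // terminates (and can produce a core dump) after logging.
    ::signal(sig, SIG_DFL);
    ::raise(sig);
}

void installCrashHandlers()
{
    struct sigaction sa;
    std::memset(&sa, 0, sizeof(sa));
    sigemptyset(&sa.sa_mask);
    sa.sa_handler = crashHandler;
    ::sigaction(SIGSEGV, &sa, NULL);
    ::sigaction(SIGABRT, &sa, NULL);
    ::sigaction(SIGBUS, &sa, NULL);
    // For a socket server, SIGPIPE is usually better ignored outright.
    ::signal(SIGPIPE, SIG_IGN);
}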

Thanks for all the comments and suggestions.
After looking through the code and adding the signal handling as Ben suggested, the applications themselves are far more stable. Thank you for all your input.
The original problem, however, was due to a rogue script that one of the admins was running as root, which would randomly kill certain processes on the server-side machine (I won't get into what it was trying to do in reality; suffice it to say it was buggy).
Lesson learned: check the environment.
Thank you all for the advice.

Related

C++ + linux handle SIGPIPE signal

Yes, I understand this issue has been discussed many times.
And yes, I've seen and read these and other discussions (1, 2, 3), and I still can't fix my code myself.
I am writing my own web server. In the loop below, it listens on a socket, accepts each new client, and stores it in a vector.
In my class I have this struct:
struct Connection
{
    int socket;
    std::chrono::system_clock::time_point tp;
    std::string request;
};
along with the following data structures:
std::mutex connected_clients_mux_;
std::vector<HttpServer::Connection> connected_clients_;
and the loop itself:
//...
bind(listen_socket_, (struct sockaddr *)&addr_, sizeof(addr_));
listen(listen_socket_, 4);

while (1)
{
    connection_socket_ = accept(listen_socket_, NULL, NULL);
    //...
    Connection connection_;
    //...
    connected_clients_mux_.lock();
    this->connected_clients_.push_back(connection_);
    connected_clients_mux_.unlock();
}
It works: clients connect, send and receive requests.
But the problem is that if the connection is broken (e.g. the client hits ^C), my program does not find out about it, even at the moment it writes:
void SendRespons(HttpServer::Connection socket_)
{
    write(socket_.socket, (socket_.request + std::to_string(socket_.socket)).c_str(), 1024);
}
As the title of this question suggests, my app receives a SIGPIPE signal.
Again, I have seen "solutions".
signal(SIGPIPE, &SigPipeHandler);

void SigPipeHandler(int s)
{
    //printf("Caught SIGPIPE\n%d",s);
}
but it does not help. At that moment we have the number of the socket to which the write was made; is it possible to "remember" it and close that particular connection in the handler method?
my system:
Operating System: Ubuntu 20.04.2 LTS
Kernel: Linux 5.8.0-43-generic
g++ --version
g++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
As stated in the links you give, the solution is to ignore SIGPIPE and CHECK THE RETURN VALUE of the write calls. The latter is needed for correct operation (short writes) in all but the most trivial, unloaded cases anyway. Also, the fixed write size of 1024 that you are using is probably not what you want: if your response string is shorter, you'll send a bunch of random garbage along with it. You probably really want something like:
void SendRespons(HttpServer::Connection socket_)
{
    auto data = socket_.request + std::to_string(socket_.socket);
    size_t sent = 0;
    while (sent < data.size())
    {
        ssize_t len = write(socket_.socket, &data[sent], data.size() - sent);
        if (len < 0)
        {
            // There was an error -- might be EPIPE or EAGAIN or EINTR or even a few
            // other obscure corner cases. For EAGAIN or EINTR (which can only happen
            // if your program is set up to allow them), you probably want to try again.
            // Anything else, probably just close the socket and clean up.
            if (errno == EINTR)
                continue;
            close(socket_.socket);
            // should tell someone about it?
            break;
        }
        sent += len;
    }
}
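Following up on the question's actual ask: once write() reports the error, the fd of the failed socket is right there in the Connection struct, so the cleanup can happen at the call site rather than inside the signal handler. Here is a minimal sketch using the containers from the question (the helper itself is hypothetical, not from the original answer); on Linux you could also use send() with MSG_NOSIGNAL instead of write(), so the dead peer shows up only as an EPIPE error and SIGPIPE never fires:

#include <algorithm>
#include <mutex>
#include <vector>
#include <unistd.h>

// Hypothetical helper: close a dead socket and drop its entry from the
// connected-clients list. `clients` and `mux` stand in for the question's
// connected_clients_ / connected_clients_mux_ members.
void dropClient(std::vector<HttpServer::Connection>& clients, std::mutex& mux, int fd)
{
    ::close(fd);
    std::lock_guard<std::mutex> lock(mux);
    clients.erase(
        std::remove_if(clients.begin(), clients.end(),
                       [fd](const HttpServer::Connection& c) { return c.socket == fd; }),
        clients.end());
}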

UDP sendto packet sent signal

I'm developing an application that sends a lot of messages over a UDP socket.
Sometimes packets were lost, and after some tests I concluded that the socket was busy.
So I put a tiny sleep between calls to the sendto API, trying to prevent a new send from starting before the last one finishes.
It worked, but I want to use a better approach, like handling a signal or something else that tells me the previous send is done.
Is there anything like that?
I'm using C++ language on a Linux environment.
The below code snippet shows what I'm doing:
#define MAX_SIZE 4096

string long_msg = GetLongMessage();
if (!long_msg.empty()) {
    long int to_send = long_msg.size();
    while (to_send) {
        long int ret = sendto(socket_fd,
                              &long_msg[long_msg.size() - to_send],
                              (to_send > MAX_SIZE ? MAX_SIZE : to_send), 0,
                              reinterpret_cast<struct sockaddr*>(&addr_client),
                              addr_client_len);
        if (ret > 0) {
            to_send -= ret;
            sleep(10);
        } else {
            // Log error
        }
    }
}
Edit: The intent of this question is to find a way to detect whether a UDP socket is busy due to a previous send call, not to discuss TCP vs UDP advantages/disadvantages.
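There is no answer recorded here, but for illustration: rather than sleeping for a fixed interval, a common approach is to wait until the descriptor is writable again, i.e. until the send buffer can accept another datagram. A minimal sketch using poll() (the helper name is illustrative, not from the question):

#include <poll.h>

// Sketch: block (up to timeout_ms) until the UDP socket can accept
// another datagram, instead of sleeping between sendto() calls.
bool waitWritable(int socket_fd, int timeout_ms)
{
    struct pollfd pfd;
    pfd.fd = socket_fd;
    pfd.events = POLLOUT;
    pfd.revents = 0;

    int ready = ::poll(&pfd, 1, timeout_ms);
    if (ready < 0)
        return false;                              // poll() failed; check errno
    return ready > 0 && (pfd.revents & POLLOUT);
}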

Context independent C++ TCP Server Class

I'm coding a TCP server class based on I/O multiplexing (select).
The basic idea is explained in this chunk of code:
GenericApp.cpp
TServer *server = new Tserver(/*parameters*/);
server->mainLoop();
For now the behavior of the server is independent of the context, but in a way that I need to improve.
Actual Status
receive(sockFd, buffer);
MSGData * msg = MSGFactory::getInstance()->createMessage(Utils::getHeader(buffer, 1024));
EventHandler * rightHandler = eventBinder->getHandler(msg->type());
rightHandler->callback(msg);
In this version the main loop reads from the socket, instantiates the right type of message object, and calls the appropriate handler (something may not work properly: it compiles, but I have not tested it).
As you can see, this allows a programmer to define his own message types and the appropriate handlers, but once the main loop is started nothing else can be done.
I need to make this part of the server more customizable so this class can be adapted to a wider range of problems.
MainLoop Code
void TServer::mainLoop()
{
    int sockFd;
    int connFd;
    int maxFd;
    int maxi;
    int i;
    int nready;

    maxFd = listenFd;
    maxi = -1;
    for (i = 0; i < FD_SETSIZE; i++) clients[i] = -1;  //Should be in the constructor?
    FD_ZERO(&allset);                                  //Should be in the constructor?
    FD_SET(listenFd, &allset);                         //Should be in the constructor?

    for (;;)
    {
        rset = allset;
        nready = select(maxFd + 1, &rset, NULL, NULL, NULL);

        if (FD_ISSET(listenFd, &rset))
        {
            cliLen = sizeof(cliAddr);
            connFd = accept(listenFd, (struct sockaddr *) &cliAddr, &cliLen);

            for (i = 0; i < FD_SETSIZE; i++)
            {
                if (clients[i] < 0)
                {
                    clients[i] = connFd;               /* save descriptor */
                    break;
                }
            }
            if (i == FD_SETSIZE)
            {
                //!!HANDLE ERROR
            }

            FD_SET(connFd, &allset);                   /* add new descriptor to set */
            if (connFd > maxFd) maxFd = connFd;        /* for select */
            if (i > maxi) maxi = i;                    /* max index in client[] array */
            if (--nready <= 0) continue;
        }

        for (i = 0; i <= maxi; i++)
        {
            /* check all clients for data */
            if ((sockFd = clients[i]) < 0) continue;
            if (FD_ISSET(sockFd, &rset))
            {
                //!!SHOULD CLEAN BUFFER BEFORE READ
                receive(sockFd, buffer);
                MSGData * msg = MSGFactory::getInstance()->createMessage(Utils::getHeader(buffer, 1024));
                EventHandler * rightHandler = eventBinder->getHandler(msg->type());
                rightHandler->callback(msg);
            }
            if (--nready <= 0) break;                  /* no more readable descriptors */
        }
    }
}
Do you have any suggestions on a good way to do this?
Thanks.
Your question requires more than just a Stack Overflow answer. You can find good ideas in these books:
http://www.amazon.com/Pattern-Oriented-Software-Architecture-Concurrent-Networked/dp/0471606952/ref=sr_1_2?s=books&ie=UTF8&qid=1405423386&sr=1-2&keywords=pattern+oriented+software+architecture
http://www.amazon.com/Unix-Network-Programming-Volume-Networking/dp/0131411551/ref=sr_1_1?ie=UTF8&qid=1405433255&sr=8-1&keywords=unix+network+programming
Basically what you're trying to do is a reactor. You can find open source libraries implementing this pattern. For instance:
http://www.cs.wustl.edu/~schmidt/ACE.html
http://pocoproject.org/
If you want your handlers to have the possibility to do more processing, you could give them a reference to your TCPServer and a way to register a socket for the following events:
read, the socket is ready for read
write, the socket is ready for write
accept, the listening socket is ready to accept (read with select)
close, the socket is closed
timeout, the time given to wait for the next event expired (select allow to specify a timeout)
That way a handler can implement all kinds of protocols, half-duplex or full-duplex:
In your example there is no way for a handler to answer the received message. This is the role of the write event: it lets a handler know when it can send on the socket.
The same is true for the read event. It should not be in your main loop but in the socket's read handler.
You may also want to add the possibility to register a handler for an event with a timeout so that you can implement timers and drop idle connections.
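For concreteness, a registration interface along these lines might look roughly like the sketch below (purely illustrative; the type and method names are invented, not taken from the question or this answer):

// Illustrative only: one possible shape for the event registration
// described above. All names here are made up.
enum EventType { EV_READ, EV_WRITE, EV_ACCEPT, EV_CLOSE, EV_TIMEOUT };

class TServer;

struct EventHandlerBase
{
    virtual ~EventHandlerBase() {}
    // Called by the main loop when `fd` is ready for `event`.
    virtual void onEvent(TServer& server, int fd, EventType event) = 0;
};

class TServer
{
public:
    // A handler registers interest in an event on a socket, optionally with
    // a timeout in milliseconds (-1 means "no timeout").
    void registerEvent(int fd, EventType event, EventHandlerBase* handler, int timeoutMs = -1);
    void unregisterSocket(int fd);
    // ... the select() main loop dispatches to the registered handlers and
    // fires EV_TIMEOUT when a requested timeout expires.
};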
This leads to some problems:
Your handler will have to implement a state-machine to react to the network events and update the events it wants to receive.
Your handler may want to create and connect new sockets (think about a web proxy server, an IRC client with DCC, an FTP server, and so on). For this to work it must be able to create a socket and register it in your main loop. This means the handler may now receive callbacks for either of the two sockets, and there should be a parameter telling the callback which socket it is. Or you will have to implement a handler for each socket and let them communicate through a queue of messages. The queue is needed because the readiness of one socket is independent of the readiness of the other, and you may read something on one while not being ready to send it on the other.
You will have to manage the timeouts specified by each handler, which may differ. You may end up with a priority queue of timeouts.
As you can see, this is no simple problem. You may want to reduce the genericity of your framework to simplify its design (for instance, handling only half-duplex protocols like simple HTTP).

Why does shutdown on a UDP socket block?

I'm writing a UDP server application for windows desktop/server.
My code uses the WSA API in the way suggested by Windows, as follows (this is my simplified receivePacket method):
struct Packet
{
    unsigned int size;
    char buffer[MAX_SIZE(1024)];
};

bool receivePacket(Packet packet)
{
    WSABUFFER wsa_buffer[2];
    wsa_buffer[0].buf = &packet.size;
    wsa_buffer[0].len = sizeof(packet.size);
    wsa_buffer[1].buf = packet.buffer;
    wsa_buffer[1].len = MAX_SIZE;

    bool retval = false;
    int flags = 0;
    int recv_bytes = 0;
    inet_addr client_addr;
    int client_addr_len = sizeof(client_addr);

    if (WSARecvFrom(_socket, wsa_buffer, sizeof(wsa_buffer)/sizeof(wsa_buffer[0]),
                    &recv_bytes, &flags, (sockaddr *)&client_addr, &client_addr_len,
                    NULL, NULL) == 0)
    {
        //Packet received successfully
    }
    else
    {
        //Report
    }
}
Now, when I'm trying to close my application gracefully, not network-wise but application-wise (going through all the d'tors and such), I'm trying to unblock this call.
To do this, I call the shutdown(_socket, SD_BOTH) method. Unfortunately, the call to shutdown itself BLOCKS!
After reading every possible page on MSDN, I couldn't find any explanation of why this happens, any other way of attacking the problem, or any way out.
Another thing I tried was SO_RCVTIMEO. Surprisingly, this sockopt didn't work as expected either.
Is there any problem with my code/approach?
Did you run shutdown on a duplicated handle? shutdown on the same handle will wait for any active operation on that handle to complete.
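If the receive call still will not unblock, one common workaround (an illustrative sketch only, not from the answer above) is to send a small "wake-up" datagram to the receiving socket's own port from the shutdown path; the receive loop then checks a stop flag and exits. This assumes the receiving socket is bound to INADDR_ANY or the loopback address:

#include <winsock2.h>

// Hypothetical helper: unblock a thread stuck in WSARecvFrom by delivering
// a dummy datagram to the port the receiver is bound to.
void wakeReceiver(unsigned short boundPort)
{
    SOCKET s = ::socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s == INVALID_SOCKET)
        return;

    sockaddr_in self = {};
    self.sin_family = AF_INET;
    self.sin_port = htons(boundPort);
    self.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    const char wake = 0;    // the content doesn't matter, only the wake-up
    ::sendto(s, &wake, sizeof(wake), 0,
             reinterpret_cast<sockaddr*>(&self), sizeof(self));
    ::closesocket(s);
}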

Multiple threads writing to same socket causing issues

I have written a client/server application where the server spawns multiple threads depending on the request from the client.
These threads are expected to send some data (strings) to the client.
The problem is that the data gets overwritten on the client side. How do I tackle this issue?
I have already read some other threads on similar issues but was unable to find an exact solution.
Here is my client code to receive data.
while (1)
{
    char buff[MAX_BUFF];
    int bytes_read = read(sd, buff, MAX_BUFF);
    if (bytes_read == 0)
    {
        break;
    }
    else if (bytes_read > 0)
    {
        if (buff[bytes_read - 1] == '$')
        {
            buff[bytes_read - 1] = '\0';
            cout << buff;
        }
        else
        {
            cout << buff;
        }
    }
}
Server Thread code :
void send_data(int sd, char *data)
{
    write(sd, data, strlen(data));
    cout << data;
}

void *calcWordCount(void *arg)
{
    tdata *tmp = (tdata *)arg;
    string line = tmp->line;
    string s = tmp->arg;
    int sd = tmp->sd_c;
    int line_no = tmp->line_no;
    int startpos = 0;
    int finds = 0;

    while ((startpos = line.find(s, startpos)) != std::string::npos)
    {
        ++finds;
        startpos += 1;
        pthread_mutex_lock(&myMux);
        tcount++;
        pthread_mutex_unlock(&myMux);
    }

    pthread_mutex_lock(&mapMux);
    int t = wcount[s];
    wcount[s] = t + finds;
    pthread_mutex_unlock(&mapMux);

    char buff[MAX_BUFF];
    sprintf(buff, "%s", s.c_str());
    sprintf(buff + strlen(buff), "%s", " occured ");
    sprintf(buff + strlen(buff), "%d", finds);
    sprintf(buff + strlen(buff), "%s", " times on line ");
    sprintf(buff + strlen(buff), "%d", line_no);
    sprintf(buff + strlen(buff), "\n", strlen("\n"));
    send_data(sd, buff);
    delete (tdata *)arg;
}
On the server side, make sure the shared resource (the socket, along with its associated internal buffer) is protected against concurrent access.
Define and implement an application level protocol used by the server to make it possible for the client to distinguish what the different threads sent.
As an additional note: one cannot rely on read()/write() reading/writing as many bytes as those two functions were told to read/write. It is essential to check their return values to learn how many bytes they actually read/wrote, and to loop around them until all the data that was intended to be read/written has been read/written.
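To make those two points concrete, here is a minimal sketch (my illustration, not part of this answer) of a send helper that serializes access to the shared socket with a mutex, frames each message with a length prefix so the client can tell the threads' messages apart, and loops until write() has written everything:

#include <arpa/inet.h>
#include <cstdint>
#include <mutex>
#include <string>
#include <unistd.h>

std::mutex socket_write_mutex;    // one writer at a time on the shared socket

// Write exactly `len` bytes, looping over short writes.
static bool writeAll(int sd, const char* data, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t n = ::write(sd, data + done, len - done);
        if (n <= 0)
            return false;    // error (EINTR handling could also go here)
        done += static_cast<size_t>(n);
    }
    return true;
}

// Send one length-prefixed message so messages from different threads
// cannot interleave and the client can reassemble them unambiguously.
bool sendMessage(int sd, const std::string& msg)
{
    std::lock_guard<std::mutex> lock(socket_write_mutex);
    uint32_t len = htonl(static_cast<uint32_t>(msg.size()));
    return writeAll(sd, reinterpret_cast<const char*>(&len), sizeof(len))
        && writeAll(sd, msg.data(), msg.size());
}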
You should protect the socket with a mutex: when a thread uses the socket, it should lock the mutex first.
I can't help you more without the server code, because the problem is probably in the server.