C++ + Linux: handle SIGPIPE signal

Yes, I understand this issue has been discussed many times.
And yes, I've seen and read these and other discussions (1, 2, 3), and I still can't fix my code myself.
I am writing my own web server. In the loop below, it listens on a socket, accepts each new client, and stores it in a vector.
In my class I have this struct:
struct Connection
{
    int socket;
    std::chrono::system_clock::time_point tp;
    std::string request;
};
with the following data structures:
std::mutex connected_clients_mux_;
std::vector<HttpServer::Connection> connected_clients_;
and the loop itself:
//...
bind(listen_socket_, (struct sockaddr *)&addr_, sizeof(addr_));
listen(listen_socket_, 4);
while (1) {
    connection_socket_ = accept(listen_socket_, NULL, NULL);
    //...
    Connection connection_;
    //...
    connected_clients_mux_.lock();
    this->connected_clients_.push_back(connection_);
    connected_clients_mux_.unlock();
}
It works: clients connect, send and receive requests.
But the problem is that if the connection is broken ("^C" on the client side), my program will not know about it, even at the moment of writing:
void SendRespons(HttpServer::Connection socket_) {
    write(socket_.socket, (socket_.request + std::to_string(socket_.socket)).c_str(), 1024);
}
As the title of this question suggests, my app receives a SIGPIPE signal.
Again, I have seen "solutions":
signal(SIGPIPE, &SigPipeHandler);

void SigPipeHandler(int s) {
    //printf("Caught SIGPIPE\n%d", s);
}
but it does not help. At the moment of the failed write we have the number of the socket that was written to; is it possible to "remember" it and close that particular connection in the handler method?
My system:
Operating System: Ubuntu 20.04.2 LTS
Kernel: Linux 5.8.0-43-generic
g++ --version
g++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0

As stated in the links you give, the solution is to ignore SIGPIPE and CHECK THE RETURN VALUE of the write calls. The latter is needed for correct operation (short writes) in all but the most trivial, unloaded cases anyway. Also, the fixed write size of 1024 that you are using is probably not what you want -- if your response string is shorter, you'll send a bunch of random garbage along with it. You probably really want something like:
#include <cerrno>   // errno, EINTR
#include <unistd.h> // write, close

void SendRespons(HttpServer::Connection socket_) {
    auto data = socket_.request + std::to_string(socket_.socket);
    size_t sent = 0;
    while (sent < data.size()) {
        ssize_t len = write(socket_.socket, &data[sent], data.size() - sent);
        if (len < 0) {
            // There was an error -- might be EPIPE or EAGAIN or EINTR or even
            // a few other obscure corner cases. For EAGAIN or EINTR (which can
            // only happen if your program is set up to allow them), you
            // probably want to try again. Anything else, probably just close
            // the socket and clean up.
            if (errno == EINTR)
                continue;
            close(socket_.socket);
            // should tell someone about it?
            break;
        }
        sent += len;
    }
}
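As a footnote to the above: how you suppress SIGPIPE is up to you. A minimal sketch of the two usual options (not part of the original answer; note that MSG_NOSIGNAL is a Linux-specific flag):

#include <csignal>
#include <sys/socket.h>

// Option 1: process-wide, once at startup. Afterwards a write() to a broken
// pipe fails with errno == EPIPE instead of raising SIGPIPE.
void DisableSigpipe() {
    signal(SIGPIPE, SIG_IGN);
}

// Option 2: per call. Only this send() is exempt from SIGPIPE; it likewise
// fails with errno == EPIPE when the peer is gone.
ssize_t SendNoSigpipe(int fd, const char* buf, size_t len) {
    return send(fd, buf, len, MSG_NOSIGNAL);
}

With either option, the error branch in the loop above fires on EPIPE, so the dead connection can be closed and removed from connected_clients_ right where the write fails, rather than inside a signal handler.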

Related

Simplest IPC from one Linux app to another in C++ on Raspberry Pi

I need the simplest, most reliable IPC method from one C++ app running on the RPi to another app.
All I'm trying to do is send a string message of 40 characters from one app to another.
The first app runs as a service on boot; the other app is started at a later time and is frequently exited and restarted for debugging.
The frequent restarting of the second app is what's causing problems with the IPC methods I've tried so far.
I've tried about 3 different methods, and here is where each failed:
File FIFO: one program hangs while the other program is writing to the file.
Shared memory: cannot initialize on one thread and read from another thread. Also, frequent exiting while debugging causes GDB crashes with the following: "GDB command is taking too long to complete -stack-list-frames --thread 1".
UDP socket with localhost: same issue as above; plus, improper exits block the socket, forcing me to reboot the device.
Non-blocking pipe: not getting any messages on the receiving process.
What else can I try? I don't want to pull in the D-Bus library; it seems too complex for this application.
Any simple server and client code, or a link to some, would be helpful.
Here is my non-blocking pipe code, which doesn't work for me.
I assume that's because I don't have a reference to the pipe from one app to the other.
Code sourced from here: https://www.geeksforgeeks.org/non-blocking-io-with-pipes-in-c/
#include <cstdio>
#include <cstdlib>
#include <cerrno>
#include <fcntl.h>
#include <unistd.h>

#define MSGSIZE 16  // assumed; the definition is not shown in the original snippet

const char* msg1 = "hello";
const char* msg2 = "bye !!";
int p[2];
long i;

bool InitClient()
{
    // error checking for pipe
    if (pipe(p) < 0)
        exit(1);
    // error checking for fcntl
    if (fcntl(p[0], F_SETFL, O_NONBLOCK) < 0)
        exit(2);
    // Read
    int nread;
    char buf[MSGSIZE];
    // close the write end
    close(p[1]);
    while (1) {
        // read returns -1 if the pipe is empty, because of the
        // O_NONBLOCK flag set with fcntl
        nread = read(p[0], buf, MSGSIZE);
        switch (nread) {
        case -1:
            // pipe is empty and errno is set to EAGAIN
            if (errno == EAGAIN) {
                printf("(pipe empty)\n");
                sleep(1);
                break;
            }
        default:
            // by default, read returns the number of bytes it read
            printf("MSG = %s\n", buf);
        }
    }
    return true;
}

bool InitServer()
{
    // error checking for pipe
    if (pipe(p) < 0)
        exit(1);
    // error checking for fcntl
    if (fcntl(p[0], F_SETFL, O_NONBLOCK) < 0)
        exit(2);
    // Write
    // close the read end
    close(p[0]);
    // write "hello" every 3 seconds (i is long: 3000000000 overflows an int)
    for (i = 0; i < 3000000000L; i++) {
        write(p[1], msg1, MSGSIZE);  // was write(p[0], ...): p[0] is the read end
        sleep(3);
    }
    // write "bye" once
    write(p[1], msg2, MSGSIZE);
    return true;
}
Please consider ZeroMQ
https://zeromq.org/
It is lightweight and has wrappers for all major programming languages.
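As a side note: the pipe code above cannot work across two separately started programs, because the descriptors returned by pipe() exist only within the calling process and whatever it forks, so each app creates its own, unconnected pipe. With ZeroMQ the two apps rendezvous on a named endpoint instead. A minimal sketch using the cppzmq binding (zmq.hpp) and a PUSH/PULL pair over an ipc:// endpoint -- the endpoint path and socket types here are my assumptions, not from the answer:

// receiver.cpp -- the long-running service
#include <zmq.hpp>
#include <iostream>

int main() {
    zmq::context_t ctx(1);
    zmq::socket_t receiver(ctx, zmq::socket_type::pull);
    receiver.bind("ipc:///tmp/myapp.ipc");   // hypothetical endpoint name
    while (true) {
        zmq::message_t msg;
        if (receiver.recv(msg, zmq::recv_flags::none))
            std::cout << "got: " << msg.to_string() << "\n";
    }
}

// sender.cpp -- the app that is restarted during debugging
#include <zmq.hpp>
#include <string>

int main() {
    zmq::context_t ctx(1);
    zmq::socket_t sender(ctx, zmq::socket_type::push);
    sender.connect("ipc:///tmp/myapp.ipc");  // reconnects transparently
    std::string payload(40, 'x');            // the 40-character message
    sender.send(zmq::buffer(payload), zmq::send_flags::none);
}

PUSH/PULL fits this use case because the sender can come and go freely; ZeroMQ queues and reconnects under the hood, which sidesteps the stale-socket and blocked-FIFO problems described above.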

UDP sendto packet sent signal

I'm developing an application that sends a lot of messages over a UDP connection.
Sometimes packets were lost, and after some tests I concluded that the socket was busy.
Thus I put a tiny sleep between calls to the sendto API, trying to prevent a new send before the last one ends.
It worked, but I want a better approach, like handling a signal or something else that tells me the previous send is done.
Is there anything like that?
I'm using the C++ language in a Linux environment.
The code snippet below shows what I'm doing:
#define MAX_SIZE 4096

string long_msg = GetLongMessage();
if (!long_msg.empty()) {
    long int to_send = long_msg.size();
    while (to_send) {
        long int ret = sendto(socket_fd,
                              &long_msg[long_msg.size() - to_send],
                              (to_send > MAX_SIZE ? MAX_SIZE : to_send), 0,
                              reinterpret_cast<struct sockaddr*>(&addr_client),
                              addr_client_len);
        if (ret > 0) {
            to_send -= ret;
            sleep(10);
        } else {
            // Log error
        }
    }
}
Edit: The intent of this question is to find a way to detect whether a UDP socket is busy due to a previous send call, not to discuss TCP vs. UDP advantages/disadvantages.
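For what it's worth, there is no completion signal for UDP sends: a successful sendto() only means the datagram was queued in the socket's send buffer, and a full buffer shows up as blocking (or EAGAIN on a non-blocking socket). A sketch of waiting for buffer space with poll() instead of sleeping -- my example, reusing the names from the snippet above:

#include <poll.h>

// Wait until the socket's send buffer can accept more data, or until
// timeout_ms elapses. Returns true when the socket is writable.
bool WaitWritable(int fd, int timeout_ms)
{
    struct pollfd pfd;
    pfd.fd = fd;
    pfd.events = POLLOUT;
    return poll(&pfd, 1, timeout_ms) > 0 && (pfd.revents & POLLOUT);
}

// Usage in the loop above, replacing the sleep(10):
//   if (!WaitWritable(socket_fd, 1000)) { /* log a timeout */ }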

SSL_shutdown returns -1 with SSL_ERROR_WANT_READ infinitely long

I cannot understand how to properly use the SSL_shutdown command in OpenSSL. Similar questions have arisen several times in different places, but I couldn't find a solution that matches my situation exactly. I am using package libssl-dev 1.0.1f-1ubuntu2.15 (the latest for now) under Ubuntu in VirtualBox.
I am working with a small legacy C++ wrapper over the OpenSSL library with non-blocking IO for server and client sockets. The wrapper seems to work fine, except in the following test case (I'm not providing the code of the unit test itself, because it contains a lot of code not related to the problem):
Initialize a server socket with a self-signed certificate.
Connect to that socket. The SSL handshake completes successfully, except that I'm ignoring the X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT return of SSL_get_verify_result for now.
Successfully send/receive some data through the connection. This step is optional and doesn't affect the problem which follows; I mention it only to show that the connection is really established and in a correct state.
Try to shut down the SSL connection (server or client, it doesn't matter which), which leads to an infinite wait on select.
All of the calls to SSL_read and SSL_write are synchronized, and locking_callback is also set. After step 3 there are no other operations on the sockets except shutting down one of them.
In the code snippet below I omit all of the error processing and debugging code for clarity; none of the OpenSSL/POSIX calls fail (except in the cases where I left error processing in place). I also provide the connect functions, in case this is important:
void OpenSslWrapper::ConnectToHost( ErrorCode& ec )
{
    ctx_ = SSL_CTX_new(SSLv23_client_method());
    SSL_CTX_load_verify_locations(ctx_, NULL, config_.verify_locations.c_str());
    if (config_.use_standard_verify_locations)
    {
        SSL_CTX_set_default_verify_paths(ctx_);
    }
    bio_ = BIO_new_ssl_connect(ctx_);
    BIO_get_ssl(bio_, &ssl_);
    SSL_set_mode(ssl_, SSL_MODE_AUTO_RETRY);
    std::string hostname = config_.address + ":" + to_string(config_.port);
    BIO_set_conn_hostname(bio_, hostname.c_str());
    BIO_set_nbio(bio_, 1);
    int res = 0;
    while ((res = BIO_do_connect(bio_)) <= 0)
    {
        BIO_get_fd(bio_, &fd_);
        if (!BIO_should_retry(bio_))
        { /* Never happens */ }
        WaitAfterError(res);
    }
    res = SSL_get_verify_result(ssl_);
    if (res != X509_V_OK && res != X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT)
    { /* Never happens */ }
    SSL_set_mode(ssl_, SSL_MODE_ENABLE_PARTIAL_WRITE);
    SSL_set_mode(ssl_, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);
}

// config_.handle is a file descriptor which was obtained from
// the accept function; ctx_ is also set in advance
void OpenSslWrapper::ConnectAsServer( ErrorCode& ec )
{
    ssl_ = SSL_new(ctx_);
    int flags = fcntl(config_.handle, F_GETFL, 0);
    flags |= O_NONBLOCK;
    fcntl(config_.handle, F_SETFL, flags);
    SSL_set_fd(ssl_, config_.handle);
    while (true)
    {
        int res = SSL_accept(ssl_);
        if (res > 0) { break; }
        if (!WaitAfterError(res).isSucceded())
        { /* never happens */ }
    }
    SSL_set_mode(ssl_, SSL_MODE_ENABLE_PARTIAL_WRITE);
    SSL_set_mode(ssl_, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);
}

// The trouble is here
void OpenSSLWrapper::Shutdown()
{
    // ...
    while (true)
    {
        int ret = SSL_shutdown(ssl_);
        if (ret > 0) { break; }
        else if (ret == 0) { continue; }
        else { WaitAfterError(ret); }
    }
    // ...
}

ErrorCode OpenSSLWrapper::WaitAfterError(int res)
{
    int err = SSL_get_error(ssl_, res);
    switch (err)  // was "switch (ret)" in the original post; the variable here is err
    {
    case SSL_ERROR_WANT_READ:
        WaitForFd(fd_, k_WaitRead);
        return ErrorCode::Success;
    case SSL_ERROR_WANT_WRITE:
    case SSL_ERROR_WANT_CONNECT:
    case SSL_ERROR_WANT_ACCEPT:
        WaitForFd(fd_, k_WaitWrite);
        return ErrorCode::Success;
    default:
        return ErrorCode::Fail;
    }
}
WaitForFd is just a simple wrapper over select, which waits infinitely long on a given socket with the specified FD_SET for read or write.
When the client calls Shutdown, the first call to SSL_shutdown returns 0. After the second call it returns -1 and SSL_get_error returns SSL_ERROR_WANT_READ, but selecting on the file descriptor for reading never returns. If I specify a timeout on select, SSL_shutdown keeps returning -1 and SSL_get_error keeps returning SSL_ERROR_WANT_READ. The loop never exits. After the first call to SSL_shutdown, the shutdown status is always SSL_SENT_SHUTDOWN.
It doesn't matter whether I shut down the server or the client: both behave the same way.
There's also a strange situation when I connect to some external host. The first call to SSL_shutdown returns 0, the second one -1 with SSL_ERROR_WANT_READ. Selecting on the socket finishes successfully, but when I call SSL_shutdown the next time, I again get -1 with the error SSL_ERROR_SYSCALL and errno = 0. As I read elsewhere, this is not a big deal, although it still seems strange and may be related, so I mention it here.
UPD: I ported the same code to Windows; the behavior didn't change.
P.S. I am sorry for mistakes in my English, I'd be grateful if someone corrects my language.
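For context: SSL_shutdown() returning 0 means our close_notify went out but the peer's has not arrived yet, and SSL_ERROR_WANT_READ afterwards means OpenSSL is waiting for a close_notify that a silent or disconnected peer may never send. A common defensive pattern is to bound that wait and settle for a unidirectional shutdown -- a rough sketch under those assumptions (WaitForFdWithTimeout is a hypothetical timed variant of the WaitForFd above):

#include <openssl/ssl.h>

// Hypothetical helper: wait with a timeout until fd is readable/writable;
// returns false on timeout.
bool WaitForFdWithTimeout(int fd, bool wantRead);

// Bounded bidirectional shutdown: never selects forever.
bool ShutdownBounded(SSL* ssl, int fd, int maxTries)
{
    for (int i = 0; i < maxTries; ++i)
    {
        int ret = SSL_shutdown(ssl);
        if (ret > 0)
            return true;               // both close_notify alerts exchanged
        if (ret == 0)
            continue;                  // ours was sent; call again to read the peer's
        int err = SSL_get_error(ssl, ret);
        if (err != SSL_ERROR_WANT_READ && err != SSL_ERROR_WANT_WRITE)
            return false;              // fatal: just close() the descriptor
        if (!WaitForFdWithTimeout(fd, err == SSL_ERROR_WANT_READ))
            return false;              // peer never answered; settle for one-way shutdown
    }
    return false;
}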

valgrind/helgrind gets killed on stress test

I'm making a web server on Linux in C++ with pthreads. I tested it with valgrind for leaks and memory problems -- all fixed. I tested it with helgrind for thread problems -- all fixed. Now I'm trying a stress test, and I'm getting a problem when the program is run under helgrind:
valgrind --tool=helgrind ./chats
It just dies in random places with the text "Killed", as it would if I killed it with kill -9. The only report I sometimes get from helgrind is that the program exits while still holding some locks, which is normal when it gets killed.
When checking for leaks:
valgrind --leak-check=full ./chats
it's more stable, but I managed to make it die once with a few hundred concurrent connections.
I tried running the program alone and couldn't make it crash at all. I tried up to 250 concurrent connections. Each thread delays 100 ms to make it easier to have multiple connections at the same time. No crash.
In all cases neither threads nor connections go above 10, and I see it crash even with 2 connections, but never with only one connection at a time (which, counting the main thread and one helper thread, is 3 threads total).
Is it possible that the problem only happens when run under helgrind, or does helgrind just make it more likely to show?
What could be the reason that a program gets killed (by the kernel?): allocating too much memory, too many file descriptors?
I tested a bit more and found out that it only dies when the client times out and closes the connection. So here is the code which detects that the client closed the socket:
void *TcpClient::run(){
    int ret;
    struct timeval tv;
    char *buff = (char *)malloc(10001);
    int br;
    colorPrintf(TC_GREEN, "new client starting: %d\n", sockFd);
    while (isRunning()) {
        tv.tv_sec = 0;
        tv.tv_usec = 500 * 1000;
        FD_ZERO(&readFds);  // reset the set each iteration; select() modifies it
        FD_SET(sockFd, &readFds);
        ret = select(sockFd + 1, &readFds, NULL, NULL, &tv);
        if (ret < 0) {
            // select error
            continue;
        } else if (ret == 0) {
            // no data to read
            continue;
        }
        br = read(sockFd, buff, 10000);
        if (br <= 0) {
            // check before indexing buff: br can be -1 on a read error,
            // and buff[-1] would be out of bounds
            // client disconnected (0) or read error (-1)
            setRunning(false);
            break;
        }
        buff[br] = 0;
        if (reader != NULL) {
            reader->tcpRead(this, std::string(buff, br));
        } else {
            readBuffer.append(buff, br);
        }
        //printf("received: %s\n", buff);
    }
    free(buff);
    sendFeedback((void *)1);
    colorPrintf(TC_RED, "closing client socket: %d\n", sockFd);
    ::close(sockFd);
    sockFd = -1;
    return NULL;
}
// this method writes to the socket
bool TcpClient::write(std::string data){
    int bw;
    int dataLen = data.length();
    bw = ::write(sockFd, data.data(), dataLen);
    if (bw != dataLen) {
        return false; // I don't close the socket in this case; maybe I should
    }
    return true;
}
P.S. The threads are:
main thread: connections are accepted here.
one helper thread which listens for and sends signals. It blocks signal reception for the rest of the app and manually polls the signal queue (a sketch of this pattern follows below). The reason is that it's hard to handle signals correctly when using threads. I found this technique here on Stack Overflow, and it seems to work pretty well in other projects.
client connection threads.
The full code is pretty big, but I can post chunks if someone is interested.
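A minimal sketch of that signal-handling pattern (my reconstruction, not the project's actual code): block the signals of interest in every thread, then have the helper thread pull them off the queue synchronously with sigwait().

#include <csignal>
#include <pthread.h>

// Helper thread body: receives signals synchronously, so ordinary
// (non-async-signal-safe) code may run here.
void* signal_thread(void*)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    sigaddset(&set, SIGTERM);
    int signo = 0;
    while (sigwait(&set, &signo) == 0) {
        // react to signo: set a shutdown flag, wake other threads, etc.
    }
    return NULL;
}

// In main(), before spawning any other threads, block the same set so every
// thread inherits the mask and only signal_thread ever sees the signals:
//   sigset_t set;
//   sigemptyset(&set); sigaddset(&set, SIGINT); sigaddset(&set, SIGTERM);
//   pthread_sigmask(SIG_BLOCK, &set, NULL);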
Update:
I managed to trigger the problem with only one connection. It's all happening in the client thread. This is what I do:
I read/parse the headers. I put a delay before writing so the client can time out (which causes the problem).
Here the client times out and leaves (probably closing the socket).
I write back the headers.
I write back the HTML code.
Here is how I write back:
bw = ::write(sockFd, data.data(), dataLen);
// bw == dataLen == 108 when writing the headers;
// then the second write, for the HTML, kills the program
// (there is a log message before and after the write()):
bw = ::write(sockFd, data.data(), dataLen); // doesn't get past this point the second time
Update 2: Got it :)
gdb says:
Program received signal SIGPIPE, Broken pipe.
[Switching to Thread 0x41401940 (LWP 10554)]
0x0000003ac2e0d89b in write () from /lib64/libpthread.so.0
Question 1: What should I do to avoid receiving this signal?
Question 2: How do I know that the remote side disconnected while writing? On read, select returns that there is data, but the data read is 0. What about write?
Well, I just had to handle the SIGPIPE signal; when write returns -1, I close the socket and quit the thread gracefully. Works like a charm.
I guess the easiest way is to set the signal handler of SIGPIPE to SIG_IGN:
signal(SIGPIPE, SIG_IGN);
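With the signal ignored, the disconnect shows up as a write error instead. A small sketch using the names from TcpClient above (my addition; needs <cerrno> for errno):

// Inside TcpClient::write, with SIGPIPE ignored: a write to a closed peer
// returns -1 with errno == EPIPE (or ECONNRESET) instead of killing the process.
ssize_t bw = ::write(sockFd, data.data(), dataLen);
if (bw < 0 && (errno == EPIPE || errno == ECONNRESET)) {
    setRunning(false);  // remote side is gone: stop the thread and close the socket
    return false;
}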
Note that the first write was successful and didn't kill the program. If you have a similar problem, check whether you are writing once or multiple times. If you are not familiar with gdb, this is how to use it:
gdb ./your-program
> run
and gdb will tell you all about signals and segfaults.

Client application crash causes Server to crash? (C++)

I'm not sure if this is a known issue that I am running into, but I couldn't find a good search string that would give me any useful results.
Anyway, here's the basic rundown:
We've got a relatively simple application that takes data from a source (DB or file) and streams it over TCP to connected clients as new data comes in. It's a relatively low number of clients; I would say at most 10 clients per server. The rough design:
client: connects to the server and issues reads (with the timeout set higher than the server's heartbeat message frequency). It blocks on read.
server: one listening thread that accepts connections and then spawns a writer thread to read from the data source and write to the client. The writer thread is detached (using boost::thread, so we just call the .detach() function). It blocks on writes indefinitely, but does check errno for errors before writing. We start the servers using a single Perl script, calling "fork" for each server process.
The problem(s):
At seemingly random times, the client shuts down with a "connection terminated (SUCCESFUL)" message, indicating that the remote server shut down the socket on purpose. However, when this happens the SERVER application ALSO closes, without any errors or anything; it just crashes.
To make matters worse, we have multiple instances of the server app being started by a startup script, running different files on different ports. When ONE of the servers crashes like this, ALL the servers crash.
Both the server and client use the same "Connection" library created in-house. It's mostly a C++ wrapper around the C socket calls.
Here is some rough code for the write and read functions in the Connection library:
int connectionTimeout_read = 60 * 60 * 1000;

int Socket::readUntil(char* buf, int amount) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, connectionTimeout_read);
    if (readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if (readyFds == 0)
    {
        status = CONNECTION_TIMEOUT;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if (fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    int rec = recv(fd, buf, amount, MSG_WAITALL);
    if (rec == 0)
        status = CONNECTION_CLOSED;
    else if (rec < 0)
        status = convertFlagToStatus(errno);
    else
        status = CONNECTION_NORMAL;
    lastReadBytes = rec;
    return rec;
}

int Socket::write(const void* buf, int size) const
{
    int readyFds = epoll_wait(epfd, epEvents, 1, -1);
    if (readyFds < 0)
    {
        status = convertFlagToStatus(errno);
        return 0;
    }
    if (readyFds == 0)
    {
        status = CONNECTION_TERMINATED;
        return 0;
    }
    int fd = epEvents[0].data.fd;
    if (fd != socket)
    {
        status = CONNECTION_INCORRECT_SOCKET;
        return 0;
    }
    if (epEvents[0].events != EPOLLOUT)
    {
        status = CONNECTION_CLOSED;
        return 0;
    }
    int bytesWrote = ::send(socket, buf, size, 0);
    if (bytesWrote < 0)
        status = convertFlagToStatus(errno);
    lastWriteBytes = bytesWrote;
    return bytesWrote;
}
Any help solving this mystery bug would be great! At the VERY least, I would like the server NOT to crash even when a client crashes (which is really strange to me, since there is no two-way communication).
Also, for reference, here is the server listening code:
while (server.getStatus() == connection::CONNECTION_NORMAL)
{
    connection::Socket s = server.listen();
    if (s.getStatus() != connection::CONNECTION_NORMAL)
    {
        fprintf(stdout, "failed to accept a socket. error: %s\n",
                connection::getStatusString(s.getStatus()));
    }
    DATASOURCE* dataSource;
    dataSource = open_datasource(XXXX); /* edited */
    if (dataSource == NULL)
    {
        fprintf(stdout, "FATAL ERROR. DATASOURCE NOT FOUND\n");
        return;
    }
    boost::thread fileSender(Sender(s, dataSource));
    fileSender.detach();
}
...And also here is the spawned child sending thread:
::signal(SIGPIPE, SIG_IGN);
//const int headerNeeds = 29;
const int BUFFERSIZE = 2000;
char buf[BUFFERSIZE];
bool running = true;
while (running)
{
    memset(buf, '\0', BUFFERSIZE * sizeof(char));
    unsigned int readBytes = 0;
    while ((readBytes = read_datasource(buf, sizeof(unsigned char), BUFFERSIZE, dataSource)) == 0)
    {
        boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    }
    socket.write(buf, readBytes);
    if (socket.getStatus() != connection::CONNECTION_NORMAL)
        running = false;
}
fprintf(stdout, "socket error: %s\n", connection::getStatusString(socket.getStatus()));
socket.close();
fprintf(stdout, "sender exiting...\n");
Any insights would be welcome! Thanks in advance.
You've probably got everything backwards: when the server crashes, the OS closes all of its sockets. So the server crash happens first and causes the client to get the disconnect message (actually a FIN flag in a TCP segment); the crash is not a result of the socket closing.
Since you have multiple server processes crashing at the same time, I'd look at resources they share, and also any scheduled tasks that all servers would try to execute at the same time.
EDIT: You don't have a single client connecting to multiple servers, do you? Note that TCP connections are always bidirectional, so the server process does get feedback if a client disconnects. Some internet providers have even been caught generating RST packets on connections that fail some test for suspicious traffic.
Write a signal handler. Make sure it uses only raw I/O functions to log problems (open, write, close; not fwrite, not printf).
Check return values. Check for a negative return value from write on a socket, but check all return values.
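A minimal sketch of such a handler (my illustration of the advice above, not the poster's code; the log path is hypothetical):

#include <csignal>
#include <fcntl.h>
#include <unistd.h>

// Async-signal-safe crash logger: only raw open/write/close, no stdio.
static void crash_logger(int signo)
{
    int fd = open("/tmp/server_crash.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd >= 0) {
        const char msg[] = "caught fatal signal\n";
        write(fd, msg, sizeof msg - 1);
        close(fd);
    }
    _exit(1);
}

// During startup:
//   struct sigaction sa = {};
//   sa.sa_handler = crash_logger;
//   sigaction(SIGSEGV, &sa, NULL);
//   signal(SIGPIPE, SIG_IGN);  // let write() fail with EPIPE instead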
Thanks for all the comments and suggestions.
After looking through the code and adding the signal handling as Ben suggested, the applications themselves are far more stable. Thank you for all your input.
The original problem, however, was due to a rogue script that one of the admins was running as root, which would randomly kill certain processes on the server-side machine (I won't get into what it was trying to do; safe to say it was buggy).
Lesson learned: check the environment.
Thank you all for the advice.