send() crashes my program - c++

I'm running a server and a client; I'm testing my program on one computer.
This is the function in the server that sends data to the client:
int sendToClient(int fd, string msg) {
    cout << "sending to client " << fd << " " << msg << endl;
    int len = msg.size()+1;
    cout << "10\n";
    /* send msg size */
    if (send(fd,&len,sizeof(int),0)==-1) {
        cout << "error sendToClient\n";
        return -1;
    }
    cout << "11\n";
    /* send msg */
    int nbytes = send(fd,msg.c_str(),len,0); //CRASHES HERE
    cout << "15\n";
    return nbytes;
}
When the client exits it sends "BYE" to the server, and the server replies to it with the above function. I connect the client to the server (it's all done on one computer, in 2 terminals), and when the client exits the server crashes - it never prints the 15.
Any idea why? Any idea how to track down why?
Thank you.
EDIT: this is how I close the client:
void closeClient(int notifyServer = 0) {
    /** notify server before closing */
    if (notifyServer) {
        int len = SERVER_PROTOCOL[bye].size()+1;
        char* buf = new char[len];
        strcpy(buf,SERVER_PROTOCOL[bye].c_str()); //c_str - NEED TO FREE????
        sendToServer(buf,len);
        delete[] buf;
    }
    close(_sockfd);
}
By the way, if I skip this code - meaning I just leave the close(_sockfd) without notifying the server - everything is OK and the server doesn't crash.
EDIT 2: this is the end of strace.out:
5211 recv(5, "BYE\0", 4, 0) = 4
5211 write(1, "received from client 5 \n", 24) = 24
5211 write(1, "command: BYE msg: \n", 19) = 19
5211 write(1, "BYEBYE\n", 7) = 7
5211 write(1, "response = ALALA!!!\n", 20) = 20
5211 write(1, "sending to client 5 ALALA!!!\n", 29) = 29
5211 write(1, "10\n", 3) = 3
5211 send(5, "\t\0\0\0", 4, 0) = 4
5211 write(1, "11\n", 3) = 3
5211 send(5, "ALALA!!!\0", 9, 0) = -1 EPIPE (Broken pipe)
5211 --- SIGPIPE (Broken pipe) # 0 (0) ---
5211 +++ killed by SIGPIPE +++
Can a broken pipe really kill my program?? Why doesn't send() just return -1??

You may want to specify MSG_NOSIGNAL in the flags:
int nbytes = send(fd,msg.c_str(), msg.size(), MSG_NOSIGNAL);

You're getting SIGPIPE because of a "feature" in Unix that raises SIGPIPE when trying to send on a socket that the remote peer has closed. Since you don't handle the signal, the default signal-handler is called, and it aborts/crashes your program.
To get the behavior you want (i.e. make send() return with an error instead of raising a signal), add this to your program's startup routine (e.g. top of main()):
#include <signal.h>

int main(int argc, char ** argv)
{
    [...]
    signal(SIGPIPE, SIG_IGN);
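With SIGPIPE ignored (or with MSG_NOSIGNAL on the send() call), a write to a socket the peer has already closed comes back as an ordinary error instead of killing the process. A minimal sketch of what the failing call in your sendToClient could then check (assuming <cerrno> is included):

int nbytes = send(fd, msg.c_str(), len, 0);
if (nbytes == -1) {
    if (errno == EPIPE)
        cout << "client " << fd << " has already closed the connection\n";
    else
        perror("send");
    return -1;
}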

Probably the client exits before the server has completed the send, breaking the socket between them and thus making send() crash.
From the send() documentation:
This socket was connected but the connection is now broken. In this case, send generates a SIGPIPE signal first; if that signal is ignored or blocked, or if its handler returns, then send fails with EPIPE.

If the client exits before the second send from the server, and the connection is not disposed of properly, your server keeps hanging and this could provoke the crash.
Just a guess, since we don't know what server and client actually do.

I find the following line of code strange because you define int len = msg.size()+1;.
int nbytes = send(fd,msg.c_str(),len,0); //CRASHES HERE
What happens if you define int len = msg.size();?
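For reference, a hedged sketch of the same function with msg.size() used consistently for both the length prefix and the payload (in that case the receiving side must stop expecting a trailing '\0', which the strace shows it currently relies on):

int len = msg.size();                       // no +1: the terminator is not sent
if (send(fd, &len, sizeof(int), 0) == -1) {
    cout << "error sendToClient\n";
    return -1;
}
int nbytes = send(fd, msg.c_str(), len, 0); // sends exactly msg.size() bytes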

If you are on Linux, try to run the server inside strace. This will write lots of useful data to a log file.
strace -f -o strace.out ./server
Then have a look at the end of the log file. Maybe it's obvious what the program did and when it crashed, maybe not. In the latter case: Post the last lines here.

Related

Simplest IPC from one Linux app to another in C++ on raspberry pi

I need the simplest, most reliable IPC method to get from one C++ app running on the RPi to another app.
All I'm trying to do is send a string message of 40 characters from one app to the other.
The first app runs as a service on boot; the other app is started later and is frequently exited and restarted for debugging.
The frequent restarting of the second app is what's causing problems with the IPC methods I've tried so far.
I've tried about 3 different methods, and here is where each failed:
File FIFO: the problem is that one program hangs while the other program is writing to the file.
Shared memory: cannot initialize on one thread and read from another thread. Also, frequent exiting while debugging causes GDB crashes with the error that the following GDB command is taking too long to complete: -stack-list-frames --thread 1
UDP socket on localhost: same issue as above; plus, improper exits block the socket, forcing me to reboot the device.
Non-blocking pipe: not getting any messages on the receiving process.
What else can I try? I don't want to pull in the DBus library; it seems too complex for this application.
Any simple server and client code, or a link to some, would be helpful.
Here is my non-blocking pipe code, which doesn't work for me.
I assume that's because I don't have a reference to the pipe from one app to the other.
Code sourced from here: https://www.geeksforgeeks.org/non-blocking-io-with-pipes-in-c/
char* msg1 = "hello";
char* msg2 = "bye !!";
int p[2], i;

bool InitClient()
{
    // error checking for pipe
    if(pipe(p) < 0)
        exit(1);
    // error checking for fcntl
    if(fcntl(p[0], F_SETFL, O_NONBLOCK) < 0)
        exit(2);
    //Read
    int nread;
    char buf[MSGSIZE];
    // close the write end
    close(p[1]);
    while (1) {
        // read call: if it returns -1 the pipe is
        // empty because of the O_NONBLOCK fcntl
        nread = read(p[0], buf, MSGSIZE);
        switch (nread) {
        case -1:
            // -1 means the pipe is empty and errno
            // is set to EAGAIN
            if(errno == EAGAIN) {
                printf("(pipe empty)\n");
                sleep(1);
                break;
            }
        default:
            // text read
            // by default read returns the number of bytes
            // it read at that time
            printf("MSG = %s\n", buf);
        }
    }
    return true;
}

bool InitServer()
{
    // error checking for pipe
    if(pipe(p) < 0)
        exit(1);
    // error checking for fcntl
    if(fcntl(p[0], F_SETFL, O_NONBLOCK) < 0)
        exit(2);
    //Write
    // close the read end
    close(p[0]);
    // write "hello" repeatedly at 3 second intervals
    for(i = 0 ; i < 3000000000 ; i++) {
        write(p[0], msg1, MSGSIZE);
        sleep(3);
    }
    // write "bye" once
    write(p[0], msg2, MSGSIZE);
    return true;
}
Please consider ZeroMQ
https://zeromq.org/
It is lightweight and has wrappers for all major programming languages.
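To illustrate the idea (this is only a sketch: the ipc:// endpoint path is made up and error checking is omitted), a request/reply pair over the plain libzmq C API could look roughly like this:

#include <zmq.h>
#include <cstdio>
#include <cstring>

// receiver side: bind, wait for the 40-character message, acknowledge it
void run_receiver() {
    void *ctx = zmq_ctx_new();
    void *rep = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(rep, "ipc:///tmp/rpi-demo.ipc");   // assumed endpoint path
    char buf[41] = {0};
    zmq_recv(rep, buf, 40, 0);                   // blocks until the other app sends
    printf("received: %s\n", buf);
    zmq_send(rep, "ok", 2, 0);
    zmq_close(rep);
    zmq_ctx_destroy(ctx);
}

// sender side: connect, send the string, wait for the acknowledgement
void run_sender(const char *msg) {
    void *ctx = zmq_ctx_new();
    void *req = zmq_socket(ctx, ZMQ_REQ);
    zmq_connect(req, "ipc:///tmp/rpi-demo.ipc");
    zmq_send(req, msg, strlen(msg), 0);
    char ack[3] = {0};
    zmq_recv(req, ack, 2, 0);
    zmq_close(req);
    zmq_ctx_destroy(ctx);
}

One property that fits the debugging workflow in the question: the connecting side can be killed and restarted freely, since ZeroMQ reconnects behind the scenes rather than leaving the endpoint blocked.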

Using a signal handler to process datas received from a fifo

I am writing a simple client/server communication with a FIFO, but I am stuck at using a signal handler to process the client's request.
The server opens a FIFO in read-only and non-blocking mode, reads the data received, and writes some data back to the client's FIFO.
This actually works fine when there is no signal handler on the server side. Here is the main code for both sides.
Server :
int main(int argc, char *argv[])
{
    // install handler
    struct sigaction action;
    action.sa_handler = requestHandler;
    sigemptyset(&(action.sa_mask));
    action.sa_flags = SA_RESETHAND | SA_RESTART;
    sigaction(SIGIO, &action, NULL);

    if(!makeFifo(FIFO_READ, 0644))
        exit(1);
    int rd_fifo = openFifo(FIFO_READ, O_RDONLY | O_NONBLOCK); // non blocking
    if(rd_fifo == -1)
        exit(1);

    // wait for request and answer
    while (1) {
        qWarning() << "waiting client...";
        sleep(1);
        QString msg = readFifo(rd_fifo);
        qWarning() << "msg = " << msg;
        if(msg == "ReqMode") {
            int wr_fifo = openFifo(FIFO_WRITE, O_WRONLY); // blocking
            writeFifo(wr_fifo, QString("mode"));
            break;
        } else
            qWarning() << "unknow request ..";
    }
    close(rd_fifo);
    unlink(FIFO_READ);
    return 0;
}
Client :
int main(int argc, char *argv[])
{
    int wr_fifo = openFifo(FIFO_WRITE, O_WRONLY);
    if(wr_fifo == -1)
        exit(1);
    // create a fifo to read server answer
    if(!makeFifo(FIFO_READ, 0644))
        exit(1);
    // ask the server its mode
    writeFifo(wr_fifo, QString("ReqMode"));
    // read the answer and print it
    int rd_fifo = openFifo(FIFO_READ, O_RDONLY); // blocking
    qWarning() << "server is in mode : " << readFifo(rd_fifo);
    close(rd_fifo);
    unlink(FIFO_READ);
    return 0;
}
Everything works as expected (even if not all errors are properly handled; this is just sample code to demonstrate that this is possible).
The problem is that the handler (not shown here, but it only prints a message on the terminal with the signal received) is never called when the client writes data to the FIFO. Besides, I have checked that if I send a kill -SIGIO to the server from a bash shell (or from elsewhere), the signal handler is executed.
Thanks for your help.
Actually, I missed the 3 following lines on the server side:
fcntl(rd_fifo, F_SETOWN, getpid()); // set PID of the receiving process
fcntl(rd_fifo, F_SETFL, fcntl(rd_fifo, F_GETFL) | O_ASYNC); // enable asynchronous behaviour
fcntl(rd_fifo, F_SETSIG, SIGIO); // set the signal that is sent when the kernel tells us that there is a read/write on the fifo.
The last point was important because the default signal sent was 0 in my case, so I had to set it explicitly to SIGIO to make things work. Here is the output of the server side:
waiting client...
nb_read = 0
msg = ""
unknow request ..
waiting client...
signal 29
SIGPOLL
nb_read = 7
msg = "ReqMode"
Now, I guess it's possible to handle the request inside the handler by moving what is inside the while loop into the requestHandler function.
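A rough sketch of that layout in plain POSIX calls (the FIFO paths are placeholders, the FIFOs are assumed to already exist, and the Qt helpers from the question are left out); note that a handler should stick to async-signal-safe calls such as open(), read() and write():

#define _GNU_SOURCE   // for F_SETSIG on Linux
#include <fcntl.h>
#include <unistd.h>
#include <signal.h>
#include <string.h>

static int rd_fifo = -1;

void requestHandler(int)
{
    char buf[64];
    ssize_t n = read(rd_fifo, buf, sizeof(buf) - 1);   // fd is non-blocking
    if (n <= 0)
        return;                                        // nothing to read yet
    buf[n] = '\0';
    if (strcmp(buf, "ReqMode") == 0) {
        int wr_fifo = open("/tmp/fifo_write", O_WRONLY);  // placeholder path
        write(wr_fifo, "mode", 4);
        close(wr_fifo);
    }
}

int main()
{
    struct sigaction action;
    action.sa_handler = requestHandler;
    sigemptyset(&action.sa_mask);
    action.sa_flags = SA_RESTART;
    sigaction(SIGIO, &action, NULL);

    rd_fifo = open("/tmp/fifo_read", O_RDONLY | O_NONBLOCK);  // placeholder path
    fcntl(rd_fifo, F_SETOWN, getpid());
    fcntl(rd_fifo, F_SETFL, fcntl(rd_fifo, F_GETFL) | O_ASYNC);
    fcntl(rd_fifo, F_SETSIG, SIGIO);

    for (;;)
        pause();      // all the work happens in the handler
}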

Giving Each Child Process a Unique ID with Forking

To learn socket programming with TCP, I'm making a simple server and client. The client will send chunks of a file and the server will read them and write to a file. Client and server work properly without any multiprocessing. I want to make it so that multiple clients can connect simultaneously. I want to give each connected client a unique id, called "client_id". This is a number between 1 and n.
I tried to use fork() in order to spawn a child process, and in the child process I accept the connection and then read in the data and save it to the file. However, the client_id variable is not synchronized across processes so sometimes it will be incremented and sometimes not. I don't fully understand what's going on. The value of client_id should never be repeated, but sometimes I'm seeing numbers appear twice. I believe this is because on forking, the child process gets a copy of everything the parent had but there is no synchronization across parallel processes.
Here is my infinite loop that sits and waits for connecting clients. Within the child process, I transfer the file in another infinite loop that terminates when recv receives 0 bytes.
int client_id = 0;
while(1){
    // accept a new connection
    struct sockaddr_in clientAddr;
    socklen_t clientAddrSize = sizeof(clientAddr);
    //socket file descriptor to use for the connection
    int clientSockfd = accept(sockfd, (struct sockaddr*)&clientAddr, &clientAddrSize);
    if (clientSockfd == -1) {
        perror("accept");
        return 4;
    }
    else{ //handle forking
        client_id++;
        std::cout<<"Client id: "<<client_id<<std::endl;
        pid_t pid = fork();
        if(pid == 0){
            //child process
            std::string client_idstr = std::to_string(client_id);
            char ipstr[INET_ADDRSTRLEN] = {'\0'};
            inet_ntop(clientAddr.sin_family, &clientAddr.sin_addr, ipstr, sizeof(ipstr));
            std::string connection_id = std::to_string(ntohs(clientAddr.sin_port));
            std::cout << "Accept a connection from: " << ipstr << ":" << client_idstr
                      << std::endl;
            // read/write data from/into the connection
            char buf[S_BUFSIZE] = {0};
            std::stringstream ss;
            //Create file stream
            std::ofstream file_to_save;
            FILE *pFile;
            std::string write_dir = filedir+"/" + client_idstr + ".file";
            std::string write_type = "wb";
            pFile = fopen(write_dir.c_str(), write_type.c_str());
            std::cout<<"write dir: "<<write_dir<<std::endl;
            while (1) {
                memset(buf, '\0', sizeof(buf));
                int rec_value = recv(clientSockfd, buf, S_BUFSIZE, 0);
                if (rec_value == -1) {
                    perror("recv");
                    return 5;
                }else if(rec_value == 0){
                    //end of transmission, exit the loop
                    break;
                }
                fwrite(buf, sizeof(char), rec_value, pFile);
            }
            fclose(pFile);
            close(clientSockfd);
        }
        else if(pid > 0){
            //parent process
            continue;
        }else{
            perror("failed to create multiple new threads");
            exit(-1);
        }
    }
}
Here is the server output when I do the following, with the expected file name (client_id.file) in parentheses:
1) connect client 1, transfer file, disconnect client 1 (1.file)
2) connect client 2, transfer file, disconnect client 2 (2.file)
3) connect client 1, transfer file, disconnect client 1 (3.file)
4) connect client 1, transfer file, disconnect client 1 (4.file)
5) connect client 2, transfer file, disconnect client 2 (5.file)
6) connect client 2, transfer file, disconnect client 2 (6.file)
I might be wrong, but something tickles my memory about this counter approach. How is your counter defined - is it a local variable? It should have a long enough lifetime, so either a global variable or a static local; then fork() copies its value into the spawned process when it copies the memory pages. Local variables are stored as temporaries, and what happens with them in C++ is undefined.
If you need shared variables, there is a way: using shared memory, mmap and so on.
Second: fork() creates a copy of the process, which starts executing from the point where fork() was called. And what do you have there? The whole thing is wrapped in an infinite loop, so the loop restarts in the child too. You should exit from it if pid was 0.
Here is an example of processes breeding on the Fibonacci principle within a for() loop:
Visually what happens to fork() in a For Loop
The parent increased the counter from 0 to 1 and called fork(). Now we have a child that printed 1.
The child finished its transmission and returned to the beginning of the loop; it increased the counter to 2 when it received a connection. The parent also increased its counter to 2 when it managed to receive a connection. Now you have 4 processes.
If you ran more tests in sequence, you would see the number of duplicates rise. And you could actually see those processes in your task manager, or in top or ps output, if you watched that data.
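To make the second point concrete, here is a hedged sketch of the accept loop where only the parent counts and the child leaves the loop when it is done (handle_client is a hypothetical stand-in for the question's file-receiving code):

int client_id = 0;
while (1) {
    int clientSockfd = accept(sockfd, NULL, NULL);
    if (clientSockfd == -1) {
        perror("accept");
        continue;
    }
    client_id++;                      // only the parent ever increments this
    pid_t pid = fork();
    if (pid == 0) {
        // child: inherits a copy of client_id, handles one client, then exits
        handle_client(clientSockfd, client_id);   // hypothetical helper
        close(clientSockfd);
        _exit(0);                     // never falls back into the accept loop
    } else if (pid > 0) {
        close(clientSockfd);          // parent keeps only the listening socket
        // (a real server would also reap children, e.g. via a SIGCHLD handler)
    } else {
        perror("fork");
    }
}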

Is async_connect really asynchronous under GNU/Linux?

I wanted to check whether Boost Asio really performs an asynchronous connect or not. According to the diagrams for the asynchronous calls published in Asio's basics, the operation is started when the io_service signals the operating system, and therefore I understood that right after executing the async_connect instruction the system attempts to perform that connection.
What I have understood is that if you don't call run you just miss the results, but the operations are (probably) still completed. So I tried that, creating a dummy server with nc -l -p 9000 and then using the code attached below.
With a debugger, I stepped statement by statement and stopped right before the run call on the io_service. On the server, nothing happens: nothing about the connect (this is expected, as the dummy server would not report it) and nothing about the async_write.
Immediately after calling the run function, the corresponding messages pop up on the server side. I have been asking on Boost's IRC channel, and after showing my strace a really clever person told me that it is probably because the socket is not ready until run is called. Apparently, this does not happen on Windows.
Does this mean that asynchronous is not really asynchronous under GNU/Linux operating systems? Does this mean that the diagram shown on the website does not correspond to GNU/Linux environments?
NOTE about "is not really asynchronous": yes, it does not block the call, and therefore the thread keeps running and doing other things, but by asynchronous I mean the operations starting right after they have been issued.
Thank you very much indeed in advance.
The code
#include <iostream>
#include <string.h>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

void connect_handler(const boost::system::error_code& error)
{
    if(error)
    {
        std::cout << error.message() << std::endl;
        exit(EXIT_FAILURE);
    }
    else
    {
        std::cout << "Successfully connected!" << std::endl;
    }
}

void write_handler(const boost::system::error_code& error)
{
    if(error)
    {
        std::cout << error.message() << std::endl;
        exit(EXIT_FAILURE);
    }
    else
    {
        std::cout << "Yes, we wrote!" << std::endl;
    }
}

int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::socket socket(io_service);
    boost::asio::ip::tcp::endpoint endpoint(
        boost::asio::ip::address::from_string("127.0.0.1"), 9000);
    socket.async_connect(endpoint, connect_handler);
    std::string hello_world("Hello World!");
    boost::asio::async_write(socket, boost::asio::buffer(hello_world.c_str(),
        hello_world.size()), boost::bind(write_handler,
        boost::asio::placeholders::error));
    io_service.run();
    exit(EXIT_SUCCESS);
}
My strace
futex(0x7f44cd0ca03c, FUTEX_WAKE_PRIVATE, 2147483647) = 0
futex(0x7f44cd0ca048, FUTEX_WAKE_PRIVATE, 2147483647) = 0
eventfd2(0, O_NONBLOCK|O_CLOEXEC) = 3
epoll_create1(EPOLL_CLOEXEC) = 4
timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC) = 5
epoll_ctl(4, EPOLL_CTL_ADD, 3, {EPOLLIN|EPOLLERR|EPOLLET, {u32=37842600, u64=37842600}}) = 0
write(3, "\1\0\0\0\0\0\0\0", 8) = 8
epoll_ctl(4, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLERR, {u32=37842612, u64=37842612}}) = 0
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 6
epoll_ctl(4, EPOLL_CTL_ADD, 6, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP|EPOLLET, {u32=37842704, u64=37842704}}) = 0
ioctl(6, FIONBIO, [1]) = 0
connect(6, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EINPROGRESS (Operation now in progress)
epoll_ctl(4, EPOLL_CTL_MOD, 6, {EPOLLIN|EPOLLPRI|EPOLLOUT|EPOLLERR|EPOLLHUP|EPOLLET, {u32=37842704, u64=37842704}}) = 0
epoll_wait(4, {{EPOLLIN, {u32=37842600, u64=37842600}}, {EPOLLOUT, {u32=37842704, u64=37842704}}}, 128, -1) = 2
poll([{fd=6, events=POLLOUT}], 1, 0) = 1 ([{fd=6, revents=POLLOUT}])
getsockopt(6, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
sendmsg(6, {msg_name(0)=NULL, msg_iov(1)=[{"Hello World!", 12}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 12
fstat(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 5), ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f44cd933000
write(1, "Successfully connected!\n", 24Successfully connected!
) = 24
epoll_wait(4, {}, 128, 0) = 0
write(1, "Yes, we wrote!\n", 15Yes, we wrote!
) = 15
exit_group(0) = ?
+++ exited with 0 +++
A weird thing about boost.asio -- not unique to it, but generally different from OS-specific asynchronous networking frameworks -- is that it does not rely on auxiliary threads. That has a number of important consequences, which can be boiled down to this: boost.asio is NOT for doing things in the background. Rather, it is for doing multiple things in the foreground.
io_service::run() is the "hub" of boost.asio, and a single-threaded program using boost.asio should expect to spend most of its time waiting inside io_service::run(), or executing a completion handler called by it. Depending on the OS-specific internal implementation, it might or might not be possible to have particular asynchronous operations running before that function gets called, which is why calling it is basically the first thing you do once you've kicked off your initial asynchronous requests.
Think of async_connect as "arming" your io_service with an asynchronous operation. The actual asynchrony happens during io_service::run(). In fact, calling async_connect followed immediately by async_write is a weird thing to do, and I'm a little surprised that it works. Ordinarily, you'd execute (or, rather, "arm") the async_write from within the connect_handler, because it's only at that point that you have a connected socket.
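A hedged sketch of that reordering, keeping the io_service-era API from the question and simply making the socket and the message file-scope so the handlers can reach them (a real program would wrap them in a class instead):

#include <iostream>
#include <string>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

boost::asio::io_service io_service;
boost::asio::ip::tcp::socket sock(io_service);
std::string hello_world("Hello World!");

void write_handler(const boost::system::error_code& error)
{
    std::cout << (error ? error.message() : "Yes, we wrote!") << std::endl;
}

void connect_handler(const boost::system::error_code& error)
{
    if (error) {
        std::cout << error.message() << std::endl;
        return;
    }
    // only now do we have a connected socket, so arm the write here
    boost::asio::async_write(sock, boost::asio::buffer(hello_world),
        boost::bind(write_handler, boost::asio::placeholders::error));
}

int main()
{
    boost::asio::ip::tcp::endpoint endpoint(
        boost::asio::ip::address::from_string("127.0.0.1"), 9000);
    sock.async_connect(endpoint, connect_handler);
    io_service.run();   // both the connect and the write complete inside run()
}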

c++ irc bot only receives data once

I'm creating a simple IRC bot in C++, but I'm coming across some issues I can't seem to resolve. My code is based on Beej's Networking Guide.
I have also used the following code to try and fix my problem.
main.cpp snippet
// as long as we don't lose connection, keep receiving data
while(true) {
    num_receives++;
    if(num_receives == 3) {
        snd("NICK q1w2\r\n");
        snd("USER q1w2 0 * :q\r\n");
    }
    if(num_receives == 4) {
        snd("JOIN q1w2\r\n");
    }
    num_bytes = recv(s, buf, MAXDATASIZE-1, 0);
    buf[num_bytes]='\0';
    std::cout << num_receives << "\t" << buf;
    if(num_bytes == -1) {
        perror("ERROR receiving");
        break;
    }
    if(num_bytes == 0) {
        perror("DISCONNECTED");
        break;
    }
}
and the output is (the first line is the command used to run the bot):
~$ ./bot irc.snt.utwente.nl 6667
1 :irc.snt.utwente.nl 020 * :Please wait while we process your connection.
2 ERROR :Closing Link: [unknown#91.177.66.186] (Ping timeout)
DISCONNECTED: Network is unreachable
So, the problem is that it only receives data once, even though it's placed in a while loop. I don't really see what's blocking it from looping or receiving. If I place a cout inside the while loop you can see it only loops once, and then a second time at the end when it disconnects.
Edit: I have it working. The commands sent to the server should be put outside the while loop, because inside the loop the bot is waiting to receive data first; since we never send anything, we never receive anything, and we just lose the connection.
[correct] main.cpp snippet
snd("NICK q1w2\r\n");
snd("USER q1w2 0 * :q\r\n");
snd("JOIN q1w2\r\n");
// as long we don't lose connection, keep receiving data
while(true) {
num_receives++;
num_bytes = recv(s, buf, MAXDATASIZE-1, 0);
buf[num_bytes]='\0';
std::cout << num_receives << "\t" << buf;
if(num_bytes == -1) {
perror("ERROR receiving");
break;
}
if(num_bytes == 0) {
perror("DISCONNECTED");
break;
}
}
You forgot to implement the IRC protocol.
Your code assumes that each call to the TCP connection's recv function will return exactly one IRC message. This assumption is totally and completely invalid. The TCP implementation has no idea what an IRC message is. You have to implement the IRC protocol and the IRC protocol's definition of a message if you want to count how many messages you've received.
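For illustration, a hedged sketch of one way to do that on top of the question's recv loop (it reuses s, buf and MAXDATASIZE from the question): keep appending whatever recv() returns to a std::string and only treat complete "\r\n"-terminated chunks as IRC messages:

std::string stream_buf;                        // bytes received so far
while (true) {
    int num_bytes = recv(s, buf, MAXDATASIZE - 1, 0);
    if (num_bytes <= 0)
        break;                                 // error or disconnect
    stream_buf.append(buf, num_bytes);

    // extract every complete message; keep any partial line for the next recv
    size_t pos;
    while ((pos = stream_buf.find("\r\n")) != std::string::npos) {
        std::string line = stream_buf.substr(0, pos);
        stream_buf.erase(0, pos + 2);
        std::cout << "message: " << line << std::endl;
        // a real bot would parse the line here, e.g. answer PING with PONG
    }
}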