Serial port communication initialization - C++

We are currently trying to create an interface for serial communication, so that we can talk to a microcontroller.
Actually, everything works fine. Almost!
To communicate with our controller, we first need to sync up with it. To do this, we write the string "?0{SY}13!", and the controller should then reply with "!0{SY}F5?" to accept the sync request.
We do this with a writeData function (which we know works, verified via echo), and afterwards we use readData to read the answer.
The problem is that, for some reason, it will not read anything. Although it returns 1 for success, the characters it reads are constantly " " (nothing).
Now comes the weird part: if we use an external terminal program (like PuTTY) to initialize the port, and then close that program, everything works fine. It accepts the sync request, answers (and we can read the answer), and then we can do all that we want. But unless we use an external program to initialize the port first, it doesn't work.
The constructor for initializing the interface looks like this:
SerialIF::SerialIF(int baud, int byteSize, int stopBits, char* parity, int debug)
{
    string coutport = getPort();
    wstring wideport;
    debug_ = debug;   // debug level
    sync = false;     // sync starts out false
    error = false;    // no error at the beginning

    // Convert the port name to the wide-character type CreateFile expects
    for (int i = 0; i < coutport.length(); i++)
    {
        wideport += wchar_t(coutport[i]);
    }
    const wchar_t* port = wideport.c_str();

    SerialIF::hserial = CreateFile(port,
                                   GENERIC_READ | GENERIC_WRITE,
                                   0,
                                   0,
                                   OPEN_EXISTING,
                                   FILE_ATTRIBUTE_NORMAL,
                                   0);
    if (hserial == INVALID_HANDLE_VALUE)
    {
        if (GetLastError() == ERROR_FILE_NOT_FOUND)
        {
            if (debug_ != LOW)
            {
                cout << "[-] Port " << coutport << " doesn't exist." << endl;
            }
        }
        if (debug_ != LOW)
        {
            cout << "[-] Handle error - is there another terminal active?" << endl;
        }
        error = true;
    }

    DCB dcbParms = { 0 };
    dcbParms.DCBlength = sizeof(dcbParms);
    if (!GetCommState(hserial, &dcbParms))
    {
        if (debug_ != LOW)
        {
            cout << "[-] Couldn't get status from port " << coutport << endl;
        }
        error = true;
    }

    if (!error)
    {
        setBaud(dcbParms, baud);
        setParity(dcbParms, parity);
        setByteSize(dcbParms, byteSize);
        setStopbits(dcbParms, stopBits);
        if (debug_ == HIGH)
        {
            cout << "[+] Serial port " << coutport << " has been activated.\nBaud-rate: " << baud
                 << "\nParity: " << parity << "\nStop bits: " << stopBits << endl;
        }
    }
    else if (debug_ != LOW)
    {
        cout << "[-] Port not initialized" << endl;
    }
}
This should work - I really don't know why it doesn't. It returns no errors. I've done A LOT of error hunting over the last couple of days; I tried timeouts, I tried other ways of building it, but it all boils down to the same problem.
Why won't this initialize the port?
EDIT:
The output when trying to sync:
I can't post pictures due to lack of reputation, but the output is as follows:
[+] Serial port COM1 has been activated.
Baud-rate: 9600
Parity: NONE
Stop bits: 1
[+] -> ?0{SY}13! is written to the port.
((And this is where it goes into the infinite loop reading " "))
EDIT: code for read:
const int bytesToRead = 1;              // one byte per read
char buffer[bytesToRead + 1] = { 0 };   // buffer for the data
DWORD dwBytesRead = 0;                  // number of bytes read
string store;                           // store - where we collect the complete string
bool end = false;                       // control variable for the while loop

while (end == false)
{
    if (ReadFile(hserial, buffer, bytesToRead, &dwBytesRead, NULL))
    /* ReadFile reads from the interface via hserial, which we created in the constructor */
    {
        if (buffer[0] == '?')   // The protocol ends a received string with "?",
        {                       // so we set end to true when it is read.
            end = true;
        }
        store += buffer[0];
    }
    else
    {
        if (debug_ != LOW)
        {
            cout << "[-] Read fail" << endl;   // If ReadFile returns false, an error has occurred.
        }
        end = true;
    }
}
if (debug_ == HIGH)
{
    cout << "[+] Received: " << store << endl;   // For debugging, the received data can be printed.
}
recentIn = store;   // recentIn is used in other functions,
if (verify())       // e.g. here, where we verify the data
{
    if (debug_ == HIGH)
    {
        cout << "[+] Verification success!" << endl;
    }
    return convertRecData(store);
}
else
{
    if (debug_ != LOW)
    {
        cout << "[-] Verification failed." << endl;
    }
    vector<string> null;   // Return a vector with no data in it if an error occurred.
    return null;
}

You never call SetCommState.
I'm not sure where your functions setBaud, setParity, etc. come from, but I can't see how they could actually modify the serial port, as they don't have access to the comm device's handle.
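In case it helps, a minimal sketch of the missing step, assuming your set* helpers only fill in the DCB structure (this is an illustration, not your actual code):
if (!error)
{
    setBaud(dcbParms, baud);
    setParity(dcbParms, parity);
    setByteSize(dcbParms, byteSize);
    setStopbits(dcbParms, stopBits);

    if (!SetCommState(hserial, &dcbParms))   // push the modified DCB to the driver
    {
        if (debug_ != LOW)
        {
            cout << "[-] Couldn't apply settings to port" << endl;
        }
        error = true;
    }
}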

ReadFile() can return success even when zero bytes are read. Use dwBytesRead to find the actual number of received characters.
while (ReadFile(hserial, buffer, 1, &dwBytesRead, NULL))
{
    if (dwBytesRead != 0)
    {
        store += buffer[0];
        if (buffer[0] == '?')
        {
            end = true;
            break;
        }
    }
}
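Also worth noting: whether ReadFile blocks or returns immediately with zero bytes depends on the comm timeouts. A sketch of setting them explicitly so the loop's behaviour is predictable (the values below are illustrative assumptions, not taken from the question):
// Sketch: configure read timeouts so ReadFile returns periodically
// even when no data arrives.
COMMTIMEOUTS timeouts = { 0 };
timeouts.ReadIntervalTimeout        = 50;   // max ms allowed between two incoming bytes
timeouts.ReadTotalTimeoutConstant   = 500;  // base ms added to each ReadFile call
timeouts.ReadTotalTimeoutMultiplier = 10;   // extra ms per requested byte

if (!SetCommTimeouts(hserial, &timeouts))
{
    cout << "[-] Couldn't set timeouts" << endl;
}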

Had a similar problem between a PC and an Arduino Nano clone with a CH340; this post was the only one that described my problem well.
I solved it by switching off the DTR (data-terminal-ready) and RTS (request-to-send) flow control, which is normally activated after (re)starting the PC or plugging in the Arduino. I found a description of these parameters in the documentation of DCB.
I know this post is very old, but maybe I can help somebody else with this idea/solution.
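For reference, a minimal sketch of switching those controls off in the DCB before applying it with SetCommState (hSerial is an assumed handle name):
// Sketch: disable DTR/RTS flow control in the DCB, then apply it.
DCB dcb = { 0 };
dcb.DCBlength = sizeof(dcb);
GetCommState(hSerial, &dcb);

dcb.fDtrControl = DTR_CONTROL_DISABLE;   // don't assert data-terminal-ready
dcb.fRtsControl = RTS_CONTROL_DISABLE;   // don't assert request-to-send

SetCommState(hSerial, &dcb);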


Serial port read interval not applying in C++

Here's my entire program:
#include <iostream>
#include <windows.h>

int main() {
    // create the serial port file with read and write perms
    HANDLE hPort = CreateFileW(L"COM3",
                               GENERIC_READ | GENERIC_WRITE,
                               0,
                               0,
                               OPEN_EXISTING,
                               0,
                               0);
    if (hPort == INVALID_HANDLE_VALUE) {
        std::cout << "INVALID_HANDLE_VALUE/6\n";
        if (GetLastError() == 2) {
            std::cout << "serial port doesn't exist. error code: 2/ERROR_FILE_NOT_FOUND\n";
        } else {
            std::cout << "error occurred with serial port file creation (CreateFileW). error code: " << GetLastError() << std::endl;
        }
        CloseHandle(hPort);
    } else {
        std::cout << "serial port created successfully (probably)\n";
    }
    DCB port_conf;
    int err = GetCommState(hPort, &port_conf);
    if (err == 0) {
        std::cout << "GetCommState failed. error code: " << GetLastError() << "\n";
        CloseHandle(hPort);
    }
    port_conf.BaudRate = 9600;
    port_conf.Parity = NOPARITY;
    port_conf.ByteSize = 8;
    port_conf.StopBits = ONESTOPBIT;
    port_conf.DCBlength = sizeof(port_conf);
    err = SetCommState(hPort, &port_conf);
    COMMTIMEOUTS timeouts_conf;
    timeouts_conf.ReadIntervalTimeout = 1;
    timeouts_conf.ReadTotalTimeoutConstant = 1;
    timeouts_conf.ReadTotalTimeoutMultiplier = 1;
    timeouts_conf.WriteTotalTimeoutConstant = 1;
    timeouts_conf.WriteTotalTimeoutMultiplier = 1;
    err = SetCommTimeouts(hPort, &timeouts_conf);
    DWORD buffer_size_read;
    char buffer_read[512]{};
    int buffer_read_size;
    char buffer_read_last[512]{};
    while (1) {
        ReadFile(hPort,
                 buffer_read,
                 512,
                 &buffer_size_read,
                 0);
        std::cout << buffer_read;
        // if (buffer_read_last != buffer_read) {
        //     std::cout << buffer_read;
        // }
        // buffer_read_size = strlen(buffer_read);
        // for (int i = 0; i <= buffer_read_size; i++) {
        //     buffer_read_last[i] = buffer_read[i];
        // }
        if (GetKeyState(VK_SPACE) != 0) {
            break;
        }
    }
    CloseHandle(hPort);
}
The problem with it is that everything is spit out to cout too fast. I made a miserable attempt at limiting this (it's commented out), but then the program just doesn't do anything. Another attempt was using timeouts_conf.ReadIntervalTimeout, which Microsoft describes this way:
The maximum time allowed to elapse before the arrival of the next byte on the communications line, in milliseconds. If the interval between the arrival of any two bytes exceeds this amount, the ReadFile operation is completed and any buffered data is returned. A value of zero indicates that interval time-outs are not used.
but it didn't change anything. The serial port is continuously receiving data from a microcontroller, on which I will not do the limiting, for a pretty specific reason.
I need some reliable way of not spitting everything out to cout at the speed of light. Thanks in advance.
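A detail worth illustrating with a sketch: ReadFile does not null-terminate the buffer, so printing buffer_read as a C string can re-print stale bytes from earlier reads. A minimal sketch that prints only the bytes actually received, reusing the variable names from the program above:
// Sketch: print only what this ReadFile call actually produced.
// ReadFile doesn't null-terminate, so terminate manually at buffer_size_read.
if (ReadFile(hPort, buffer_read, sizeof(buffer_read) - 1, &buffer_size_read, 0)
    && buffer_size_read > 0) {
    buffer_read[buffer_size_read] = '\0';  // cut off stale data from earlier reads
    std::cout << buffer_read;              // nothing is printed on empty reads
}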

Handling SSL client not reading all data

I am trying to ensure that my SSL server does not break down when a client does not collect all the data, i.e. when the data is too long (now fixed, apart from one minor bug).
Basically, what I'm trying to do is write in a non-blocking way. For that I found two different approaches:
First approach
using this code
int flags = fcntl(ret.fdsock, F_GETFL, 0);
fcntl(ret.fdsock, F_SETFL, flags | O_NONBLOCK);
and creating the ssl connection with it
Second approach:
Doing this directly after creating the SSL Object using SSL_new(ctx)
BIO *sock = BIO_new_socket(ret.fdsock, BIO_NOCLOSE);
BIO_set_nbio(sock, 1);
SSL_set_bio(client, sock, sock);
Both have their downsides, but neither solves the problem.
The first approach seems to read in a non-blocking way just fine, but when I write more data than the client reads, my server crashes.
The second approach does not seem to do anything, so my guess is that I did something wrong or did not understand what a BIO actually does.
For more information, here is how the server writes to the client:
int SSLConnection::send(char* msg, const int size){
    int rest_size = size;
    int bytes_sent = 0;
    char* begin = msg;
    std::cout << "expected bytes to send: " << size << std::endl;
    while (rest_size > 0) {
        int tmp_bytes_sent = SSL_write(connection, begin, rest_size);
        std::cout << "any error : " << ERR_get_error() << std::endl;
        std::cout << "tmp_bytes_sent: " << tmp_bytes_sent << std::endl;
        if (tmp_bytes_sent < 0) {
            std::cout << tmp_bytes_sent << std::endl;
            std::cout << "ssl error : " << SSL_get_error(this->connection, tmp_bytes_sent) << std::endl;
        } else {
            bytes_sent += tmp_bytes_sent;
            rest_size -= tmp_bytes_sent;
            begin = msg + bytes_sent;
        }
    }
    return bytes_sent;
}
Output:
expected bytes to send: 78888890
Betätigen Sie die <RETURN> Taste, um das Fenster zu schließen...
(means: hit <return> to close window)
EDIT: After people said that I need to handle errors appropriately, here is my new code:
Setup:
connection = SSL_new(ctx);
if (connection){
    BIO *sbio = BIO_new_socket(ret.fdsock, BIO_NOCLOSE);
    if (sbio) {
        BIO_set_nbio(sbio, false);
        SSL_set_bio(connection, sbio, sbio);
        SSL_set_accept_state(connection);
    } else {
        std::cout << "Bio is null" << std::endl;
    }
} else {
    std::cout << "client is null" << std::endl;
}
Sending:
int SSLConnection::send(char* msg, const int size){
    if (connection == NULL) {
        std::cout << "ERR: Connection is NULL" << std::endl;
        return -1;
    }
    int rest_size = size;
    int bytes_sent = 0;
    char* begin = msg;
    std::cout << "expected bytes to send: " << size << std::endl;
    while (rest_size > 0) {
        int tmp_bytes_sent = SSL_write(connection, begin, rest_size);
        std::cout << "any error : " << ERR_get_error() << std::endl;
        std::cout << "tmp_bytes_sent: " << tmp_bytes_sent << std::endl;
        if (tmp_bytes_sent < 0) {
            std::cout << tmp_bytes_sent << std::endl;
            std::cout << "ssl error : " << SSL_get_error(this->connection, tmp_bytes_sent) << std::endl;
            break;
        } else if (tmp_bytes_sent == 0) {
            std::cout << "tmp_bytes are 0" << std::endl;
            break;
        } else {
            bytes_sent += tmp_bytes_sent;
            rest_size -= tmp_bytes_sent;
            begin = msg + bytes_sent;
        }
    }
    return bytes_sent;
}
Using a client that fetches 60 bytes, here is the output:
Output writing 1,000,000 Bytes:
expected bytes to send: 1000000
any error : 0
tmp_bytes_sent: 16384
any error : 0
tmp_bytes_sent: 16384
Betätigen Sie die <RETURN> Taste, um das Fenster zu schließen...
(translates to: hit <RETURN> to close window)
Output writing 1,000 bytes:
expected bytes to send: 1000
any error : 0
tmp_bytes_sent: 1000
connection closed <- expected output
First, a warning: non-blocking I/O over SSL is a rather baroque API, and it's difficult to use correctly. In particular, the SSL layer sometimes needs to read internal data before it can write user data (or vice versa), and the caller's code is expected to be able to handle that based on the error-codes feedback it gets from the SSL calls it makes. It can be made to work correctly, but it's not easy or obvious -- you are de facto required to implement a state machine in your code that echoes the state machine inside the SSL library.
Below is a simplified version of the logic that is required (it's extracted from the Write() method in this file, which is part of this library, in case you want to see a complete, working implementation).
enum {
    SSL_STATE_READ_WANTS_READABLE_SOCKET   = 0x01,
    SSL_STATE_READ_WANTS_WRITEABLE_SOCKET  = 0x02,
    SSL_STATE_WRITE_WANTS_READABLE_SOCKET  = 0x04,
    SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET = 0x08
};

// a bit-chord of SSL_STATE_* bits to keep track of what
// the SSL layer needs us to do next before it can make more progress
uint32_t _sslState = 0;

// Note that this method returns the number of bytes sent, or -1
// if there was a fatal error. So if this method returns 0 that just
// means that this function was not able to send any bytes at this time.
int32_t SSLSocketDataIO :: Write(const void *buffer, uint32_t size)
{
    int32_t bytes = SSL_write(_ssl, buffer, size);
    if (bytes > 0)
    {
        // SSL was able to send some bytes, so clear the relevant SSL-state-flags
        _sslState &= ~(SSL_STATE_WRITE_WANTS_READABLE_SOCKET | SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET);
    }
    else if (bytes == 0)
    {
        return -1;  // the SSL connection was closed, so return failure
    }
    else
    {
        // The SSL layer's internal needs aren't being met, so we now have to
        // ask it what its problem is, then give it what it wants. :P
        int err = SSL_get_error(_ssl, bytes);
        if (err == SSL_ERROR_WANT_READ)
        {
            // SSL can't write anything more until the socket becomes readable,
            // so we need to go back to our event loop, wait until the
            // socket select()'s as readable, and then call SSL_write() again.
            _sslState |=  SSL_STATE_WRITE_WANTS_READABLE_SOCKET;
            _sslState &= ~SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET;
            bytes = 0;  // Tell the caller we weren't able to send anything yet
        }
        else if (err == SSL_ERROR_WANT_WRITE)
        {
            // SSL can't write anything more until the socket becomes writable,
            // so we need to go back to our event loop, wait until the
            // socket select()'s as writeable, and then call SSL_write() again.
            _sslState &= ~SSL_STATE_WRITE_WANTS_READABLE_SOCKET;
            _sslState |=  SSL_STATE_WRITE_WANTS_WRITEABLE_SOCKET;
            bytes = 0;  // Tell the caller we weren't able to send anything yet
        }
        else
        {
            // SSL had some other problem I don't know how to deal with,
            // so just print some debug output and then return failure.
            fprintf(stderr, "SSL_write() ERROR!");
            ERR_print_errors_fp(stderr);
        }
    }
    return bytes;  // Returns the number of bytes we actually sent
}
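For context, here is a hypothetical sketch of how a caller might drive that Write() from its select() loop (sockfd, pendingData, and pendingSize are illustrative names, not part of the library):
// Sketch: wait in whichever direction the SSL layer asked for last time,
// then retry the write once the socket is ready.
fd_set readSet, writeSet;
FD_ZERO(&readSet);
FD_ZERO(&writeSet);

if (_sslState & SSL_STATE_WRITE_WANTS_READABLE_SOCKET)
    FD_SET(sockfd, &readSet);    // SSL needs to read before it can write
else
    FD_SET(sockfd, &writeSet);   // normal case: wait for writability

if (select(sockfd + 1, &readSet, &writeSet, NULL, NULL) > 0)
{
    int32_t sent = Write(pendingData, pendingSize);  // retry the write
    if (sent < 0) { /* fatal error: close the connection */ }
}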
I think your problem is
rest_size -= bytes_sent;
You should do rest_size -= tmp_bytes_sent;
Also
if (tmp_bytes_sent < 0) {
    std::cout << tmp_bytes_sent << std::endl;
    // it's an error condition
    return bytes_sent;
}
I don't know whether this will fix the issue, but the code you pasted has the above-mentioned issues.
When I write more data than the client reads, my server crashes.
No it doesn't, unless you've violently miscoded something else that you haven't posted here. It either loops forever or it gets an error: probably ECONNRESET, which means the client has behaved as you described and you've detected it, so you should close the connection and forget about it. Instead of which, you are just looping forever, trying to send the data to a broken connection, which can never succeed.
And when you get an error, there's not much use in just printing a -1. You should print the error, with perror() or errno or strerror().
Speaking of looping forever, don't loop like this. SSL_write() can return 0, which you aren't handling at all: this will cause an infinite loop. See also David Schwartz's comments below.
NB you should definitely use the second approach. OpenSSL needs to know that the socket is in non-blocking mode.
Both of which have their downsides
Such as?
And as noted in the other answer,
rest_size -= bytes_sent;
should be
rest_size -= tmp_bytes_sent;

Serial Communication C++ ReadFile()

I created two functions to read and write across a serial port. I am coding in C++ with Visual Studio 2012 on 64-bit Windows 7, using an RS-232 serial cable. The board I'm connecting to is supposed to send 5 characters, "TRG 1", upon pressing a button. The code works; however, the output isn't always the correct values.
char serialRead()
{
    char input[5];
    DCB dcBus;
    HANDLE hSerial;
    DWORD bytesRead, eventMask;
    COMMTIMEOUTS timeouts;

    hSerial = CreateFile(L"\\\\.\\COM13", GENERIC_READ, 0, NULL, OPEN_EXISTING, 0, NULL);
    if (hSerial == INVALID_HANDLE_VALUE)
    {
        cout << "error opening handle\n";
    }
    else
    {
        cout << "port opened\n";
    }

    dcBus.DCBlength = sizeof(dcBus);
    if ((GetCommState(hSerial, &dcBus) == 0))
    {
        cout << "error getting comm state\n";
    }
    dcBus.BaudRate = CBR_9600;
    dcBus.ByteSize = DATABITS_8;
    dcBus.Parity = NOPARITY;
    dcBus.StopBits = ONESTOPBIT;
    if ((SetCommState(hSerial, &dcBus) == 0))
    {
        cout << "error setting comm state\n";
    }

    if ((GetCommTimeouts(hSerial, &timeouts) == 0))
    {
        cout << "error getting timeouts\n";
    }
    timeouts.ReadIntervalTimeout = 10;
    timeouts.ReadTotalTimeoutMultiplier = 1;
    timeouts.ReadTotalTimeoutConstant = 500;
    timeouts.WriteTotalTimeoutMultiplier = 1;
    timeouts.WriteTotalTimeoutConstant = 500;
    if (SetCommTimeouts(hSerial, &timeouts) == 0)
    {
        cout << "error setting timeouts\n";
    }

    if (SetCommMask(hSerial, EV_RXCHAR) == 0)
    {
        cout << "error setting comm mask\n";
    }

    if (WaitCommEvent(hSerial, &eventMask, NULL))
    {
        if (ReadFile(hSerial, &input, 5, &bytesRead, NULL) != 0)
        {
            for (int i = 0; i < sizeof(input); i++)
            {
                cout << input[i];
            }
            cout << endl;
        }
        else
        {
            cout << "error reading file\n";
        }
    }
    else
    {
        cout << "error waiting for comm event\n";
    }

    switch (input[4])
    {
    case '1':
        CloseHandle(hSerial);
        return '1';
    case '2':
        CloseHandle(hSerial);
        return '2';
    case '3':
        CloseHandle(hSerial);
        return '3';
    case '4':
        CloseHandle(hSerial);
        return '4';
    case '5':
        CloseHandle(hSerial);
        return '5';
    default:
        CloseHandle(hSerial);
        return '9';
    }
}
The code runs successfully in the sense that the port is configured correctly and data is being transmitted. The output varies: most of the time it will print the whole "TRG 1", but seemingly at random the output will be "TRG|}|}" or "T|}|}|}|}", i.e. part of the string, with every missing character replaced by "|}" instead of the correct characters. This is a problem because I want to be able to send different trigger values and run the switch off that variable.
I'm relatively new to serial communication and not an expert programmer, so I'm wondering what's going on?
Serial communication is not packet-based. The information doesn't come to you in packages where the entire message can necessarily be read in one go; instead, it's a stream, so you could read half a message, a whole message, more than one message, etc.
As zdan said in the comments, you need to check the number of bytes read from ReadFile and use that to compose the 5-character packages that are your messages.
Specifically, only the first characters up to the returned number of bytes read are valid; the rest are garbage.
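A minimal sketch of composing a message that way, reusing the handle name from the question (an illustration, not the poster's code):
// Sketch: serial data is a stream, so keep reading until all 5 bytes
// of one message have been assembled.
char message[5];
DWORD total = 0;
while (total < sizeof(message))
{
    DWORD bytesRead = 0;
    if (!ReadFile(hSerial, message + total, sizeof(message) - total, &bytesRead, NULL))
    {
        cout << "error reading file\n";
        break;
    }
    total += bytesRead;   // only bytesRead characters are valid this pass
}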

check on socket with select() returns 0 when data is present

I've had my socket class working for a while now, but I wanted to add a timeout using select(). Seems pretty straightforward, but select() always returns 0. I've even removed the select() check so it reads data regardless of select(), and the data gets read, but select() still reports that data is not present. Any clue on how to get select() to stop lying to me? I've also set the socket to non-blocking. Thanks.
Code:
char buf[MAXRECV + 1];
s = "";
memset(buf, 0, MAXRECV + 1);

struct timeval tv;
int retval;
fd_set Sockets;

FD_ZERO(&Sockets);
FD_SET(m_sock, &Sockets);

// Print sock int for sanity
std::cout << "\nm_sock:" << m_sock << "\n";

tv.tv_sec = 1;
tv.tv_usec = 0;
retval = select(1, &Sockets, NULL, NULL, &tv);
std::cout << "\nretval is :[" << retval << "]\n\n";

// Check
if (FD_ISSET(m_sock, &Sockets))
    std::cout << "\nFD_ISSET(m_sock,&Sockets) is true\n\n";
else
    std::cout << "\nFD_ISSET(m_sock,&Sockets) is false\n\n";

// If error occurs
if (retval == -1)
{
    perror("select()");
    std::cout << "\nERROR IN SELECT()\n";
}
// If data present
else if (retval)
{
    std::cout << "\nDATA IS READY TO BE READ\n";
    std::cout << "recv ( m_sock, buf, MAXRECV, 0)... m_sock is " << m_sock << "\n";
    int status = recv(m_sock, buf, MAXRECV, 0);
    if (status == -1)
    {
        std::cout << "status == -1 errno == " << errno << " in Socket::recv\n";
        return 0;
    }
    else if (status == 0)
    {
        return 0;
    }
    else
    {
        s = buf;
        return status;
    }
}
// If data not present
else
{
    std::cout << "\nDATA WAS NOT READY, TIMEOUT\n";
    return 0;
}
Your call to select is incorrect, as you have already discovered. Even though the first parameter is named nfds in many forms of documentation, it is actually one more than the largest file descriptor number held by any of the fd_sets passed to select. In this case, since you are only passing in one file descriptor, the call should be:
retval = select(m_sock + 1, &Sockets, NULL, NULL, &tv);
If you have an arbitrary number of sockets you are handling each in a different thread, you might find my answer to this question a better approach.
Whoops. Looks like I forgot to set select()'s int nfds correctly. Working fine now.

problem with popen

I have a strange problem. I have two binaries: one named cpp and the other called mnp_proxy_server.
cpp starts mnp_proxy_server by calling a method executeScript. The code of this method is:
int executeScript(string script, unsigned int scriptTmOut)
{
    fd_set readfd;
    const int BUFSIZE = 1024;
    //stringstream strBuf;
    char buf[BUFSIZE];
    time_t startTime = time(NULL);
    struct timeval tv;
    int ret, ret2 = 0;

    FILE *pPipe = popen(script.c_str(), "r");
    if (pPipe == NULL)
    {
        // cout << "popen() failed:" << strerror(errno) << endl;
        return -1;
    }
    while (1)
    {
        FD_ZERO(&readfd);
        FD_SET(fileno(pPipe), &readfd);
        /** Select timeout: scriptTmOut seconds **/
        tv.tv_sec = scriptTmOut;
        tv.tv_usec = 0;
        ret = select(fileno(pPipe) + 1, &readfd, NULL, NULL, &tv);
        if (ret < 0)
        {
            // cout << "select() failed " << strerror(errno) << endl;
        }
        else if (ret == 0)
        {
            // cout << "select() timeout" << endl;
            break;
        }
        else
        {
            //cout << "Data is available now" << endl;
            if (FD_ISSET(fileno(pPipe), &readfd))
            {
                if (fgets(buf, sizeof(buf), pPipe) != NULL)
                {
                    //cout << buf;
                    //strBuf << buf;
                }
                /** No problem if there is no data output by the script **/
#if 1
                else
                {
                    //ret2 = -1;
                    // cout << "fgets() failed " << strerror(errno) << endl;
                    break;
                }
#endif
            }
            else
            {
                ret2 = -1;
                // cout << "FD_ISSET() failed " << strerror(errno) << endl;
                break;
            }
        }
        /** Check the script timeout **/
        if ((startTime + scriptTmOut) < time(NULL))
        {
            // cout << "Script timeout" << endl;
            break;
        }
    }
    pclose(pPipe);
    return ret2;
}
cpp is a server that listens on various ports, 7001 and 7045. Once mnp_proxy_server is started, it connects to port 7001 and starts sending messages.
Now coming to the problem: when I send a Ctrl+C signal to cpp, the signal is propagated to mnp_proxy_server, and if I kill the cpp process, all the ports that cpp was listening on become part of the mnp_proxy_server process.
Output of netstat after killing the cpp process:
[root@punith bin]# netstat -alpn | grep mnp_pr
tcp 0 0 0.0.0.0:7045 0.0.0.0:* LISTEN 26186/mnp_proxy_ser
tcp 0 0 0.0.0.0:7001 0.0.0.0:* LISTEN 26186/mnp_proxy_ser
I know it has something to do with the way I am executing the startup script of mnp_proxy_server through cpp.
There is a signal handler in both binaries. Also, to exit the socket select when Ctrl+C is pressed, I have used pipes in select: when Ctrl+C is pressed, I close the write end of the pipe so that select is notified, comes out, and breaks the run loop.
Both are written in C++, and I am using RHEL.
Any clue will greatly help me in solving this. Thanks in advance.
You should set the flag CLOEXEC on the server sockets of cpp so that they are closed in the child process:
fcntl(fd, F_SETFD, FD_CLOEXEC);
When using sockets the way your processes do, I would suggest fork and exec instead of popen, so that you can close or manage all sockets between the fork and the exec; but the FD_CLOEXEC flag might be enough to solve your problem.
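A rough sketch of that fork/exec approach (requires <unistd.h>; the descriptor names and path are illustrative assumptions):
pid_t pid = fork();
if (pid == 0)
{
    // Child: close the parent's listening sockets so they aren't inherited,
    // then replace the child process image with the proxy server.
    close(listen_fd_7001);   // hypothetical descriptor names
    close(listen_fd_7045);
    execl("/path/to/mnp_proxy_server", "mnp_proxy_server", (char*)NULL);
    _exit(127);              // only reached if exec fails
}
// Parent continues here; pid < 0 would mean fork() failed.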