I'm trying to make two processes communicate via named pipes, but I'm having problems with the reader process. The idea is that the consumer reads several messages that the producer writes into the pipe at random intervals. Since these intervals can be long, the consumer should block in the read operation until the producer has written a message into the pipe.
This is the code that I have now:
FILE* result_pipe_stream;
...
result_pipe_stream = fopen(result_pipe, "r");
...
string read_result_from_pipe() {
    if (result_pipe_stream == NULL) {
        return "";  // returning NULL from a function returning std::string is undefined behavior
    }
    char buf[BUFSIZ];
    std::stringstream oss;
    while (1) {
        if (fgets(buf, BUFSIZ, result_pipe_stream) != NULL) {
            int buflen = strlen(buf);
            if (buflen > 0) {
                if (buf[buflen - 1] == '\n') {
                    buf[buflen - 1] = '\0';
                    oss << buf;
                    return oss.str();
                } else {
                    oss << buf;
                }
            }
        } else {
            // Necessary so that fgets does not keep returning NULL
            // after the writer closes the pipe for the first time.
            clearerr(result_pipe_stream);
        }
    }
}
The first time the consumer reads from the pipe, the method works properly: it waits until the writer sends a message and returns the value. However, from that point on, when the method is called again, fgets returns NULL until the writer adds a new message.
What is missing for the consumer to block after the second read?
The producer is a Java application writing to the pipe like this:
OutputStream output = null;
try {
    output = new FileOutputStream(writePipe, true);
    output.write(taskCMD.getBytes());
    output.flush();
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if (output != null) {
        try {
            output.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
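For what it's worth, one common workaround (a sketch of mine, not taken from the code above; the helper name read_line_from_fifo is hypothetical) is to open the FIFO with O_RDWR on the consumer side. Since the reading process then also holds a write end, read() never reports EOF when the producer closes its end between messages, so the consumer simply blocks until the next message arrives:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <string>

// Hypothetical helper: open the FIFO read/write so the reader itself keeps
// a write end alive; read() then blocks instead of returning 0 (EOF)
// whenever the producer closes its end between messages.
std::string read_line_from_fifo(const char* fifo_path) {
    static int fd = -1;
    if (fd == -1)
        fd = open(fifo_path, O_RDWR);   // O_RDWR instead of O_RDONLY
    std::string line;
    char c;
    while (read(fd, &c, 1) == 1) {      // blocks until data arrives
        if (c == '\n')
            break;
        line += c;
    }
    return line;
}
```

With a stdio stream you would need clearerr() plus a retry loop (which spins); the O_RDWR trick avoids the EOF condition altogether.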
Related
I'm coding a mock shell and I'm currently implementing pipes with dup2. Here is my code:
bool Pipe::execute() {
    int fds[2]; // will hold file descriptors
    pipe(fds);
    int status;
    int errorno;
    pid_t child;
    child = fork();
    if (-1 == child) {
        perror("fork failed");
    }
    if (child == 0) {
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);
        this->component->lchild->execute();
        _exit(1);
    }
    else if (child > 0) {
        dup2(fds[0], STDIN_FILENO);
        close(fds[0]);
        close(fds[1]);
        this->component->rchild->execute();
        waitpid(child, &status, 0);
        if (WIFEXITED(status)) {
            //printf("child exited with = %d\n", WEXITSTATUS(status));
            if (WEXITSTATUS(status) == 0) {
                cout << "pipe parent finishing" << endl;
                return true;
            }
        }
        return false;
    }
}
The this->component->lchild->execute(); and this->component->rchild->execute(); calls run execvp on the corresponding commands. I've confirmed that each of these returns by printing a statement in the parent process. However, in my Pipe::execute(), it seems that the child process does not finish, because the cout statement in the parent process is never printed, and I get a segmentation fault after the prompt ($) is initialized (see picture). Here is the main function that initializes the prompt after each execution:
int main()
{
    Prompt new_prompt;
    while (1) {
        new_prompt.initialize();
    }
    return 0;
}
and here is the initialize() function:
void Prompt::initialize()
{
    cout << "$ ";
    std::getline(std::cin, input);
    parse(input);
    run();
    input.clear();
    tokens.clear();
    fflush(stdout);
    fflush(stdin);
    return;
}
It seems like ls | sort runs fine, but then when the prompt is initialized, a blank line is read into input by getline. I've tried using cin.clear(), cin.ignore(), and the fflush and clear() lines above. This blank string is "parsed" and then the run() function is called, which tries to dereference a null pointer. Any ideas on why/where this blank line is being fed into getline, and how I can resolve this? Thank you!
UPDATE: The parent process in the pipe is now finishing. I've also noticed that I'm getting seg faults for my I/O redirection classes (> and <) as well. I think I'm not flushing the stream or closing the file descriptors correctly...
Here is my execute() function for lchild and rchild:
bool Command::execute() {
    int status;
    int errorno;
    pid_t child;
    vector<char *> argv;
    for (unsigned i = 0; i < this->command.size(); ++i) {
        char * cstr = const_cast<char*>(this->command.at(i).c_str());
        argv.push_back(cstr);
    }
    argv.push_back(NULL);
    child = fork();
    if (-1 == child) {
        perror("fork failed");
    }
    if (child == 0) {
        errorno = execvp(*argv.data(), argv.data());
        _exit(1);
    } else if (child > 0) {
        waitpid(child, &status, 0);
        if (WIFEXITED(status)) {
            //printf("child exited with = %d\n", WEXITSTATUS(status));
            if (WEXITSTATUS(status) == 0) {
                //cout << "command parent finishing" << endl;
                return true;
            }
        }
        return false;
    }
}
Here is the bug:
else if (child > 0) {
    dup2(fds[0], STDIN_FILENO);
    close(fds[0]);
    close(fds[1]);
    this->component->rchild->execute();
You are redirecting stdin for the parent, not just for the right child. After this, the stdin of the parent process is the same as that of the right child.
After that
std::getline(std::cin, input);
tries to read the output of the left child rather than the original stdin. By that point the left child has finished and its end of the pipe has been closed. This makes reading from stdin fail and leaves input unchanged, in its original (blank) state.
Edit: Your design has a minor flaw and a major one. The minor flaw is that you don't need the fork in Pipe::execute. The major flaw is that the child should be the one that redirects streams and closes the descriptors.
Simply pass input and output parameters through fork() and let the child dup2 them. Don't forget to make it also close the unrelated pipe ends. If you don't, the left child will finish but its output pipe will live on in other processes. As long as other copies of that descriptor are open, the right child will never get EOF while reading its end of the pipe - and will block forever.
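A minimal sketch of that shape (the run_pipeline helper is hypothetical, not the poster's class; it assumes each side of the pipe is a plain argv array for execvp). The parent never touches its own stdin/stdout; each child redirects its own streams, and the parent closes both pipe ends only after forking both children, so the right child sees EOF when the left one exits:

```cpp
#include <unistd.h>
#include <sys/wait.h>

// Sketch: the parent only creates the pipe, forks both children,
// closes *its* copies of both ends, and waits. Each child redirects
// its own stdio; the parent's stdin is untouched.
bool run_pipeline(char* const left[], char* const right[]) {
    int fds[2];
    if (pipe(fds) == -1) return false;

    pid_t lhs = fork();
    if (lhs == 0) {                 // left child: stdout -> pipe
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);
        execvp(left[0], left);
        _exit(1);
    }
    pid_t rhs = fork();
    if (rhs == 0) {                 // right child: stdin <- pipe
        dup2(fds[0], STDIN_FILENO);
        close(fds[0]);
        close(fds[1]);
        execvp(right[0], right);
        _exit(1);
    }
    close(fds[0]);                  // parent drops both ends so the
    close(fds[1]);                  // right child can see EOF
    int status;
    waitpid(lhs, &status, 0);
    waitpid(rhs, &status, 0);       // status of the right child
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```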
I would like the output of the program to be split into a deque structure.
To describe the problem: I am dealing with redirecting the output of a program created with CreateProcess. I need to read the output of the program and process it line by line. The program itself provides the output in a convenient form, but the output I receive via the unnamed pipe is delivered more slowly (at a lower frequency): it arrives in portions, in such a way that the last line is sometimes cut in half. The next portion of the stream from the pipe will complete the line, but this causes problems in the construction of the program.
What could be the source of this discrepancy between the cout from the program and the output redirected through the pipe?
EDIT: Following the suggestion by user4581301, I've tried using a stringstream and getline, but it seems that the lines are still cut in half, even though the direct cout from the program, without redirection to the pipe, does not have this problem. This leads to the issue that the lines are split across different elements of the queue (please look at the code below).
Sample from the console
The method ReadFromPipe is run in a loop.
void ProcessManagement::ReadFromPipe(void)
// Read output from the child process's pipe for STDOUT
{
    DWORD dwRead, dwWritten;
    char buffer[BUFSIZE];
    BOOL bSuccess = FALSE;
    std::deque<std::deque<std::string>> elems;
    while (ReadFile(g_hChildStd_OUT_Rd, buffer, sizeof(buffer) - 1, &dwRead, NULL) != FALSE)
    {
        /* add terminating zero */
        buffer[dwRead] = '\0';
        std::stringstream streamBuffer;
        streamBuffer << buffer;
        BufferToQueue(streamBuffer, ' ', elems);
        // Print deque
        for (std::deque<std::string> a : elems)
        {
            for (std::string b : a)
                std::cout << b;
            std::cout << std::endl;
        }
    }
}
And the method BufferToQueue:
void ProcessManagement::BufferToQueue(std::stringstream &streamBuffer, char delim, std::deque<std::deque<std::string>> &elems) {
    std::string line;
    std::string word;
    std::deque<string> row;
    // Splitting the stringstream into a queue of queues (does not work properly)
    while (std::getline(streamBuffer, line))
    {
        std::istringstream iss(line);
        while (std::getline(iss, word, delim))
            row.push_back(word);
        elems.push_back(row);
        row.clear();
    }
}
Expanding on #Captain Obvlious's comment regarding flush:
The problem you are facing is that the WriteToPipe function does not flush at the end of each line. You can fix this in the reader by ensuring that you append to the previous string whenever the previous ReadFromPipe call did not have a newline as its last character.
Modified functions:
bool ProcessManagement::BufferToQueue(std::stringstream &streamBuffer, char delim, std::deque<std::deque<std::string>> &elems)
{
    std::string line;
    std::string word;
    std::deque<string> row;
    bool is_unflushed_line = streamBuffer.str().back() != '\n';
    // Splitting the stringstream into a queue of queues
    while (std::getline(streamBuffer, line))
    {
        std::istringstream iss(line);
        while (std::getline(iss, word, delim)) {
            row.push_back(word);
        }
        elems.push_back(row);
        row.clear();
    }
    return is_unflushed_line;
}
void ProcessManagement::ReadFromPipe(void)
// Read output from the child process's pipe for STDOUT
{
    DWORD dwRead, dwWritten;
    char buffer[BUFSIZE];
    BOOL bSuccess = FALSE;
    std::deque<std::deque<std::string>> elems;
    while (ReadFile(g_hChildStd_OUT_Rd, buffer, sizeof(buffer) - 1, &dwRead, NULL) != FALSE)
    {
        /* add terminating zero */
        buffer[dwRead] = '\0';
        std::stringstream streamBuffer;
        streamBuffer << buffer;
        bool is_unflushed_line = BufferToQueue(streamBuffer, ' ', elems);
        for (auto idx = 0; idx != elems.size(); ++idx)
        {
            for (std::string const& b : elems[idx])
                std::cout << b;
            if (idx == elems.size() - 1 && is_unflushed_line)
                break; // don't print a newline if input did not end with one
            std::cout << std::endl;
        }
    }
}
The answer from #indeterminately sequenced was correct, thank you for your help. The problem was that the buffer into which the pipe was copied was too small, so lines were being divided into separate pieces.
Here is the complete solution, based on the help of #indeterminately sequenced, to push the output into a queue structure; maybe it will help someone. The only remaining problem is that the launched program is never closed, so the TerminateProcess function has to be used somewhere.
void ProcessManagement::CreateChildProcess()
// Create a child process that uses the previously created pipes for STDIN and STDOUT.
{
    SECURITY_ATTRIBUTES saAttr = { sizeof(SECURITY_ATTRIBUTES) };
    saAttr.bInheritHandle = TRUE; // Pipe handles are inherited by the child process.
    saAttr.lpSecurityDescriptor = NULL;
    // Create a pipe to get results from the child's stdout.
    if (!CreatePipe(&hPipeRead, &hPipeWrite, &saAttr, 0))
        return;
    si.dwFlags = STARTF_USESHOWWINDOW | STARTF_USESTDHANDLES;
    si.hStdOutput = hPipeWrite;
    si.hStdError = hPipeWrite;
    si.wShowWindow = SW_HIDE; // Prevents the cmd window from flashing. Requires STARTF_USESHOWWINDOW in dwFlags.
    std::string command_temp = (" -i \"LAN 3\"");
    LPSTR st = const_cast<char *>(command_temp.c_str());
    BOOL fSuccess = CreateProcessA(program_path.c_str(), st, NULL, NULL, TRUE, CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi);
    if (!fSuccess)
    {
        CloseHandle(hPipeWrite);
        CloseHandle(hPipeRead);
        return;
    }
    /* Needs to be used somewhere:
    TerminateProcess(pi.hProcess, exitCode);
    CloseHandle(hPipeWrite);
    CloseHandle(hPipeRead);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread); */
}
void ProcessManagement::ReadFromPipe(void)
// Read output from the child process's pipe for STDOUT
{
    std::deque<std::deque<std::string>> elems;
    // Give it a small timeslice (50 ms), so we won't waste 100% CPU.
    bProcessEnded = WaitForSingleObject(pi.hProcess, 50) == WAIT_OBJECT_0;
    // Even if the process has exited, we continue reading while data is available over the pipe.
    for (;;)
    {
        char buf[8192];
        DWORD dwRead = 0;
        DWORD dwAvail = 0;
        if (!::PeekNamedPipe(hPipeRead, NULL, 0, NULL, &dwAvail, NULL))
            break;
        if (!dwAvail) // no data available, return
            break;
        if (!::ReadFile(hPipeRead, buf, min(sizeof(buf) - 1, dwAvail), &dwRead, NULL) || !dwRead)
            // error, the child process might have ended
            break;
        buf[dwRead] = '\0';
        std::stringstream streamBuffer;
        streamBuffer << unflushed_line << buf; // prepend the last unflushed line to the buffer
        unflushed_line = BufferToQueue(streamBuffer, ' ', elems);
        for (auto idx = 0; idx != elems.size(); ++idx)
        {
            for (std::string const& b : elems[idx])
                std::cout << b;
            std::cout << std::endl;
        }
    }
}
std::string ProcessManagement::BufferToQueue(std::stringstream &streamBuffer, char delim, std::deque<std::deque<std::string>> &elems) {
    std::string line;
    std::string word;
    std::deque<string> row;
    bool is_unflushed_line = streamBuffer.str().back() != '\n';
    // Splitting the stringstream into a queue of queues
    while (std::getline(streamBuffer, line, '\n'))
    {
        std::istringstream iss(line);
        while (std::getline(iss, word, delim)) {
            row.push_back(word);
        }
        elems.push_back(row);
        row.clear();
    }
    if (is_unflushed_line)
    {
        elems.pop_back(); // pop the not fully flushed line
    }
    else line.clear(); // if the line was fully flushed, return an empty string
    return line; // to prepend to the buffer on the next call if the last line was not fully flushed
}
There is a client-server application I am working on. Below is the code from the client side.
pipe_input and pipe_output are shared variables.
int fds[2];
if (pipe(fds)) {
    printf("pipe creation failed");
} else {
    pipe_input = fds[0];
    pipe_output = fds[1];
    reader_thread_created = true;
    r = pthread_create(&reader_thread_id, 0, reader_thread, this);
}
void* reader_thread(void *input)
{
    unsigned char id;
    int n;
    while (1) {
        n = read(pipe_input, &id, 1);
        if (1 == n) {
            // process
        }
        if (n < 0) {
            printf("ERROR: read from pipe failed");
            break;
        }
    }
    printf("reader thread stop");
    return 0;
}
There is also a writer thread, which writes data to the pipe on event changes from the server.
void notify_client_on_event_change(char id)
{
    int n;
    n = write(pipe_output, &id, 1);
    printf("message written to pipe done ");
}
My question is: do I need to close the write end in the reader thread, and the read end in the writer thread? In the destructor I wait for the reader thread to exit, but sometimes it never does.
[...] do i need to close the write end in reader thread and read end in case of writer thread[?]
As those fds "are shared", closing them in one thread would close them for all threads. That is not what you want, I suspect.
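One pattern that addresses the "reader never exits" part (a sketch of mine, not from the question; the sentinel value kQuitId and the shutdown_reader helper are assumptions, pick an id your protocol never uses) is to keep both ends open for the object's lifetime, wake the reader with a sentinel byte, join the thread, and only then close the descriptors:

```cpp
#include <pthread.h>
#include <unistd.h>

static const unsigned char kQuitId = 0xFF; // assumption: never a real event id
static int g_processed = 0;                // for illustration only

static void* reader_thread(void* arg) {
    int pipe_input = *(int*)arg;
    unsigned char id;
    while (read(pipe_input, &id, 1) == 1) {
        if (id == kQuitId)                 // sentinel: leave the loop cleanly
            break;
        ++g_processed;                     // ... process id ...
    }
    return 0;
}

// Destructor-side shutdown: wake the reader, join it, then close both ends.
void shutdown_reader(int pipe_output, int pipe_input, pthread_t tid) {
    write(pipe_output, &kQuitId, 1);       // unblocks the pending read()
    pthread_join(tid, 0);                  // reader is guaranteed gone now
    close(pipe_input);                     // safe: no thread is using them
    close(pipe_output);
}
```

This way no thread ever closes a descriptor another thread is still reading or writing.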
I am writing a multithreaded server in C++ (Linux). My problem is that my code runs fine for the initial few requests, but after that it shows "free: Invalid pointer". I know this error can come from any part of my code, but for the snippet below I would like suggestions on whether I am on the right track.
// This is a helper function used in pthread_create
void *Parse::serveRequest_helper(void *c)
{
    Parse *P1 = (Parse *)c;
    P1->serveRequest();
    return NULL; // a void* thread function must return a value
}
// 10 threads are continuously working on this function
void Parse::serveRequest()
{
    pthread_detach(pthread_self());
    while (1)
    {
        pthread_mutex_lock(&print_lock);
        if (requestqueue.empty())
            pthread_cond_wait(&print_cond, &print_lock);
        SendData *S = new SendData;
        clientInfo c;
        cout << "Inside Serving thread" << endl;
        c = requestqueue.front(); // assuming this queue gets data from some other scheduler thread function
        requestqueue.pop();
        S->requestPrint(c);
        delete S;
        pthread_mutex_unlock(&print_lock);
        cout << " Inside Thread is out" << endl;
    }
}
// The function below writes the file's data to the socket.
// When a thread handles this function, will the file stream be local to that thread
// or shared between all 10 threads?
void SendData::requestPrint(clientInfo c)
{
    if (write(c.accept, "Requested File Data :", 21) == -1)
        perror("send");
    ifstream file;
    file.open(c.filename.c_str());
    if (file.is_open())
    {
        string read;
        while (getline(file, read)) // testing the stream, not eof(), avoids writing a failed read
        {
            if (write(c.accept, read.c_str(), (size_t)read.size()) == -1)
                perror("send");
        }
    }
    file.close();
    close(c.accept);
}
I have a Linux file descriptor (from socket), and I want to read one line.
How to do it in C++?
If you are reading from a TCP socket, you can't assume when the end of the line will be reached.
Therefore you'll need something like this:
std::string line;
char buf[1024];
int n;
while ((n = read(fd, buf, sizeof(buf))) > 0)
{
    line.append(buf, n);  // append only the bytes actually read
    const std::string::size_type pos = line.find("\n\n");
    if (pos != std::string::npos)
    {
        line.erase(pos);  // drop the delimiter (and anything after it)
        break;
    }
}
Assuming you are using "\n\n" as a delimiter. (I didn't test that code snippet ;-) )
On a UDP socket, that is another story. The emitter may send a packet containing a whole line, and the receiver is guaranteed to receive the packet as a single unit, if it receives it at all, since UDP is not as reliable as TCP.
Pseudocode:
char newline = '\n';
file fd;
initialize(fd);
string line;
char c;
while (newline != (c = readchar(fd))) {
    line.append(c);
}
Something like that.
Here is a tested, quite efficient code:
bool ReadLine(int fd, string* line) {
    // We read ahead, so we store in a static buffer
    // what we have already read but not yet returned by ReadLine.
    static string buffer;
    // Do the real reading from fd until buffer contains a '\n'.
    string::iterator pos;
    while ((pos = find(buffer.begin(), buffer.end(), '\n')) == buffer.end()) {
        char buf[1025];
        int n = read(fd, buf, 1024);
        if (n <= 0) { // handle errors and EOF
            *line = buffer;
            buffer = "";
            return false;
        }
        buf[n] = 0;
        buffer += buf;
    }
    // Split the buffer around the '\n' found and return the first part.
    *line = string(buffer.begin(), pos);
    buffer = string(pos + 1, buffer.end());
    return true;
}
It's also useful to ignore SIGPIPE when reading and writing (and to handle errors as shown above):
signal (SIGPIPE, SIG_IGN);
Using C++ sockets library:
class LineSocket : public TcpSocket
{
public:
    LineSocket(ISocketHandler& h) : TcpSocket(h) {
        SetLineProtocol(); // enable OnLine callback
    }
    void OnLine(const std::string& line) {
        std::cout << "Received line: " << line << std::endl;
        // send reply here
        Send("Reply\n");
    }
};
And using the above class:
int main()
{
    try
    {
        SocketHandler h;
        LineSocket sock(h);
        sock.Open("remote.host.com", port);
        h.Add(&sock);
        while (h.GetCount())
        {
            h.Select();
        }
    }
    catch (const Exception& e)
    {
        std::cerr << e.ToString() << std::endl;
    }
}
The library takes care of all error handling.
Find the library using google or use this direct link: http://www.alhem.net/Sockets/