I'm trying to launch a process and write a string to its stdin, with Boost 1.64.0. The current code is:
bp::opstream inStream;
bp::ipstream outStream;
bp::ipstream errStream;
bp::child child(
    command,                 // the command line
    bp::shell,
    bp::std_out > outStream,
    bp::std_err > errStream,
    bp::std_in  < inStream);
// read the outStream/errStream in threads
child.wait();
The problem is that the child executable waits for EOF on its stdin, so child.wait() hangs indefinitely…
I tried to use asio::buffer, std_in.close(), … but no luck.
The only hack I found was to delete the inStream… and that's not really reliable.
How am I supposed to "notify" the child process and close its stdin with the new boost::process library?
Thanks !
I tried to use asio::buffer, std_in.close()
This works. Of course it only works if you pass it to a launch function (bp::child constructor, bp::system, etc).
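For example, with the launch from the question (use this form when the child should read nothing from stdin at all):

bp::child child(
    command,
    bp::shell,
    bp::std_out > outStream,
    bp::std_err > errStream,
    bp::std_in.close());   // the child sees EOF on stdin immediately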
If you need to pass data and then close it, simply close the associated file descriptor. I do something like this:
boost::asio::async_write(input, bp::buffer(_stdin_data),
    [&input](auto ec, auto bytes_written) {
        if (ec) {
            logger.log(LOG_WARNING) << "Standard input rejected: " << ec.message()
                                    << " after " << bytes_written << " bytes written";
        }
        may_fail([&] { input.close(); }); // close the pipe so the child sees EOF
    });
Where input is
bp::async_pipe input(ios);
Also, check that the process is not actually stuck writing its output! If you fail to consume the output, the child will block as soon as the pipe's buffer fills up.
Close the pipe by calling inStream.close(); when you're done writing to it. You can also close it at launch time with bp::std_in.close().
The asio solution of course also works and avoids the danger of deadlocks.
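For the synchronous opstream route from the question, a minimal sketch of the whole flow; "sort" (resolved via bp::search_path) is just a stand-in child here, chosen because it reads stdin until EOF before producing output:

#include <boost/process.hpp>
#include <iostream>
#include <string>

namespace bp = boost::process;

int main() {
    bp::opstream in;
    bp::ipstream out;
    bp::child c(bp::search_path("sort"), bp::std_in < in, bp::std_out > out);

    in << "banana\napple\ncherry\n";
    in.flush();
    in.close();   // delivers EOF on the child's stdin

    std::string line;
    while (std::getline(out, line))
        std::cout << line << '\n';

    c.wait();     // no longer hangs: the child saw EOF and exited
}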
Related
I have an application that I am currently developing for communicating with a device over serial. For this I am using the Boost library's basic_serial_port. Right now I am just attempting to read from the device, using the async_read_until function coupled with an async_wait on a deadline_timer. The code that sets up the read and wait looks like this:
async_read_until(port, readData, io_params.delim,
    boost::bind(&SerialComm::readCompleted, this,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));
timer.expires_from_now(boost::posix_time::seconds(1));
timer.async_wait(boost::bind(&SerialComm::timeoutExpired, this,
    boost::asio::placeholders::error));
The callback on the async_read_until looks like
void SerialComm::readCompleted(const boost::system::error_code& error,
                               const size_t bytesTransferred) {
    if (!error) {
        wait_result = success;
        bytes_transferred = bytesTransferred;
    }
    else {
        // 125 is ECANCELED/operation_aborted on Linux; comparing against
        // boost::asio::error::operation_aborted would be more portable
        if (error.value() != 125) wait_result = error_out;
        else wait_result = op_canceled;
        cout << "Port handler called with error code " + to_string(error.value()) << endl;
    }
}
and the following code is triggered on successful read
string msg;
getline(istream(&readData), msg, '\r');
boost::trim_right_if(msg, boost::is_any_of("\r"));
In the case of this device, all messages are terminated with a carriage return, so specifying the carriage return in the async_read_until should retrieve a single message. However, what I am seeing is that, while the handler is triggered, new data is not necessarily entered into the buffer. So if the handler is triggered 20 times, I might see:
one line pumped into the buffer in the first call
none in the next 6 calls
6 lines in the next call
no data in the next 10
10 lines following
...
I am obviously not doing something correctly, but what is it?
async_read_until does not guarantee that it reads only up to the first delimiter.
Due to the underlying implementation, it will just "read what is available" on most systems and return once the streambuf contains the delimiter. Additional data will remain in the streambuf. Moreover, EOF might be returned even if you didn't expect it yet.
See for background Read until a string delimiter in boost::asio::streambuf
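A handler that honours this might look like the sketch below (it reuses the question's names; process is a placeholder). bytes_transferred counts up to and including the delimiter, and anything beyond it simply stays in the streambuf for the next read:

void SerialComm::readCompleted(const boost::system::error_code& error,
                               std::size_t bytesTransferred) {
    if (error) return;
    std::istream is(&readData);
    std::string msg;
    std::getline(is, msg, '\r');  // consumes through the first '\r' only
    process(msg);                 // data after the delimiter stays buffered
    // re-arm async_read_until here; leftover data is handled immediately
}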
So, found the problem here. The way this program is intended to work is that it should
Send a request for data
Start an async_read_until to read data on the port.
Start an async_wait so that we don't wait forever.
Use io_service::run_one to wait for a timeout or a successful read.
The code for step four looked like this:
for (;;) {
    // This blocks until an event on io_service_ is set.
    n_handlers = io_service_.run_one();
    switch (wait_result) {
    case success: {  // braces limit the scope of the new variables
        string delims = "\r";
        std::string msg{buffers_begin(readData.data()),
                        buffers_begin(readData.data()) + bytes_transferred - delims.size()};
        // Consume through the first delimiter.
        readData.consume(bytes_transferred);
        data_out = msg;
        cout << msg << endl;
        data_handler(msg);
        return data_out;
    }
    case timeout_expired:
        // Set up for wait and read.
        wait_result = in_progress;
        cout << "Time is up..." << endl;
        return data_out;
    case error_out:
        cout << "Error out..." << endl;
        return data_out;
    case op_canceled:
        return data_out;
    case in_progress:
        cout << "In progress..." << endl;
        break;
    }
}
Only two cases should trigger an exit from the loop: timeout_expired and success. But, as you can see, the loop also exits if an operation is cancelled (op_canceled) or if there is an error (error_out).
The problem is that when an async operation is cancelled (e.g. by deadline_timer::cancel()), it still triggers an event that io_service::run_one picks up, setting the state evaluated by the switch statement to op_canceled. This can leave async operations stacking up in the event loop. The simple fix is to remove the return statement from every case except success and timeout_expired.
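With that fix the non-terminal cases reduce to something like:

case op_canceled:
    // a cancelled handler just means the competing timer or read was
    // aborted; stay in the loop until success or timeout_expired fires
    break;
case error_out:
    cout << "Error out..." << endl;
    break;  // likewise: log it, but keep pumping run_one()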
Possible duplicates:
How to call execl() in C with the proper arguments?
Grabbing output from exec
Linux Pipes as Input and Output
Using dup2 for piping
Piping for input/output
I've been trying to learn piping in Linux using dup/dup2 and fork for the last 3 days. I think I've got the hang of it, but when I call two different programs from the child process, I seem to capture output only from the first one called. I don't understand why that is and/or what I'm doing wrong. This is my primary question.
Edit: I think a possible solution is to fork another child and set up pipes with dup2, but I'm mostly wondering why the code below doesn't work. What I mean is, I would expect to capture stderr from the first execl call and stdout from the second. This doesn't seem to be happening.
My second question is if I am opening and closing the pipes correctly. If not, I would like to know what I need to add/remove/change.
Here is my code:
#include <stdlib.h>
#include <iostream>
#include <time.h>
#include <sys/wait.h>
#include <unistd.h> // pipe, fork, dup2, execl, close
#define READ_END 0
#define WRITE_END 1
void parentProc(int* stdoutpipe, int* stderrpipe);
void childProc(int* stdoutpipe, int* stderrpipe);
int main(){
    pid_t pid;
    int status;
    int stdoutpipe[2]; // pipe
    int stderrpipe[2]; // pipe

    // create a pipe
    if (pipe(stdoutpipe) || pipe(stderrpipe)){
        std::cerr << "Pipe failed." << std::endl;
        return EXIT_FAILURE;
    }

    // fork a child
    pid = fork();
    if (pid < 0) {
        std::cerr << "Fork failed." << std::endl;
        return EXIT_FAILURE;
    }
    // child process
    else if (pid == 0){
        childProc(stdoutpipe, stderrpipe);
    }
    // parent process
    else {
        std::cout << "waitpid: " << waitpid(pid, &status, 0)
                  << '\n' << std::endl;
        parentProc(stdoutpipe, stderrpipe);
    }
    return 0;
}
void childProc(int* stdoutpipe, int* stderrpipe){
    dup2(stdoutpipe[WRITE_END], STDOUT_FILENO);
    close(stdoutpipe[READ_END]);
    dup2(stderrpipe[WRITE_END], STDERR_FILENO);
    close(stderrpipe[READ_END]);

    execl("/bin/bash", "/bin/bash", "foo", NULL);
    execl("/bin/ls", "ls", "-1", (char *)0);
    // execl("/home/me/outerr", "outerr", "-1", (char *)0);

    //char * msg = "Hello from stdout";
    //std::cout << msg;
    //msg = "Hello from stderr!";
    //std::cerr << msg << std::endl;
    // close write end now?
}
void parentProc(int* stdoutpipe, int* stderrpipe){
    close(stdoutpipe[WRITE_END]);
    close(stderrpipe[WRITE_END]);

    char buffer[256];
    char buffer2[256];
    read(stdoutpipe[READ_END], buffer, sizeof(buffer));
    std::cout << "stdout: " << buffer << std::endl;
    read(stderrpipe[READ_END], buffer2, sizeof(buffer));
    std::cout << "stderr: " << buffer2 << std::endl;
    // close read end now?
}
When I run this, I get the following output:
yfp> g++ selectTest3.cpp; ./a.out
waitpid: 21423
stdout: hB�(6
stderr: foo: line 1: -bash:: command not found
The source code for the "outerr" binary (commented out above) is simply:
#include <iostream>
int main(){
    std::cout << "Hello from stdout" << std::endl;
    std::cerr << "Hello from stderr!" << std::endl;
    return 0;
}
When I call "outerr" instead of ls or "foo", I get the following output, which is what I would expect:
yfp> g++ selectTest3.cpp; ./a.out
waitpid: 21439
stdout: Hello from stdout
stderr: Hello from stderr!
On execl
Once you successfully call execl or any other function from the exec family, the original process is completely overwritten by the new process. This implies that the new process never "returns" to the old one. If you have two execl calls in a row, the only way the second one can be executed is if the first one fails.
In order to run two different commands in a row, you have to fork one child to run the first command, wait, fork a second child to run the second command, then (optionally) wait for the second child too.
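A minimal sketch of that pattern (the run helper and the exact commands are illustrative):

#include <sys/wait.h>
#include <unistd.h>

// Run one command in its own child and wait for it to finish.
static void run(const char* path, const char* arg0, const char* arg1) {
    pid_t pid = fork();
    if (pid == 0) {               // child: replace this image with the command
        execl(path, arg0, arg1, (char*)NULL);
        _exit(127);               // reached only if execl itself failed
    }
    int status;
    waitpid(pid, &status, 0);     // parent: wait before starting the next one
}

int main() {
    run("/bin/bash", "/bin/bash", "foo");
    run("/bin/ls", "ls", "-1");
}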
On read
The read system call does not append a terminating null, so in general you need to look at the return value, which tells you the number of bytes actually read. Then set the following character to null to get a C string, or use the range constructor for std::string.
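For example (fd standing in for the pipe's read end):

char buffer[256];
ssize_t n = read(fd, buffer, sizeof(buffer) - 1);
if (n > 0) {
    buffer[n] = '\0';                    // now a valid C string
    std::string msg(buffer, buffer + n); // or: the std::string range constructor
}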
On pipes
Right now you are using waitpid to wait until the child process has already finished, then reading from the pipes. The problem with this is that if the child process produces a lot of output, then it will block because the pipe gets full and the parent process is not reading from it. The result will be a deadlock, as the child waits for the parent to read, and the parent waits for the child to terminate.
What you should do is use select to wait for input to arrive on either the child's stdout or the child's stderr. When input arrives, read it; this will allow the child to continue. When the child process dies, you'll know because you'll get end of file on both. Then you can safely call wait or waitpid.
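A sketch of that select loop, reusing the pipe read ends from the question (error handling trimmed):

#include <sys/select.h>
#include <unistd.h>
#include <iostream>

// Read from both pipes as data arrives; stop once both report EOF.
void drainChild(int outfd, int errfd) {
    bool outOpen = true, errOpen = true;
    char buf[256];
    while (outOpen || errOpen) {
        fd_set rfds;
        FD_ZERO(&rfds);
        if (outOpen) FD_SET(outfd, &rfds);
        if (errOpen) FD_SET(errfd, &rfds);
        int nfds = (outfd > errfd ? outfd : errfd) + 1;
        if (select(nfds, &rfds, NULL, NULL, NULL) < 0) break;
        if (outOpen && FD_ISSET(outfd, &rfds)) {
            ssize_t n = read(outfd, buf, sizeof(buf));
            if (n <= 0) outOpen = false;   // EOF: child closed stdout
            else std::cout << "stdout: " << std::string(buf, n);
        }
        if (errOpen && FD_ISSET(errfd, &rfds)) {
            ssize_t n = read(errfd, buf, sizeof(buf));
            if (n <= 0) errOpen = false;   // EOF: child closed stderr
            else std::cout << "stderr: " << std::string(buf, n);
        }
    }
    // Both pipes are at EOF, so waitpid() will not deadlock now.
}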
The exec family of functions replaces the current process image with a new process image. When you execute,
execl("/bin/bash", "/bin/bash", "foo", NULL);
the code from the current process is not executed any more. That's why you never see the result of executing
execl("/bin/ls", "ls", "-1", (char *)0);
I'm reading from a named pipe on Linux using std::ifstream. If the writing end of the pipe is closed, I cannot continue reading from the pipe through the stream. For some reason I have to clear(), close() and open() the stream again to continue reading. Is this expected? How can I avoid the close()/open() on a pipe when writers close and reopen the pipe at will?
Background: I believe the close()/open() I have to do is causing the writer to sometimes receive SIGPIPE, which I would like to avoid.
More details - I am using this code to read a stream
// read single line (stream_ is assumed to be a std::ifstream member)
stream_.open("/tmp/delme", std::ios::in | std::ios::binary);
std::getline(stream_, output_filename_);
std::cout << "got filename: " << output_filename_ << std::endl;

#if 0
// this fixes the problem
stream_.clear();
stream_.close();
stream_.open("/tmp/delme", std::ios::in | std::ios::binary);
// now the read blocks until data is available
#endif

// read more binary data
const int hsize = 4096 + 4;
std::array<char, hsize> b;
stream_.read(&b[0], hsize);
std::string tmp(std::begin(b), std::begin(b) + hsize);
std::cout << "got header: " << tmp << std::endl;
/tmp/delme is my pipe. I do echo "foo" > /tmp/delme and I get the foo in output_filename_, but the stream does not block there (it should, as there is no more data); it proceeds to read garbage. If I enable the code inside the #if 0 block, it works. Why?
Thanks,
Sebastian
Since you use std::getline(), maybe you need an extra "\n" to signal the end of a line:
echo -e "foo\n" > /tmp/delme
instead of just
echo "foo" > /tmp/delme
This should at least get rid of the garbage reading.
I'm having some problems with FIFOs (named pipes) in C++. I have two executable files: one tries to read, the other tries to write. The reader is meant to be executed only once. I made a simple program to show my problem: it reads 10 times and then closes. However, the writer should be executed many times (in my original program it can't be executed twice at once: you have to wait for it to finish before running it again).
The problem with this code is that it only prints an incoming message when another one arrives. It seems to block until it receives another message. I don't know what is happening, but it seems the read() line blocks the program although there is data to read, and it works again when I send new data.
I tried another thing: as you can see, the writer closes the file descriptor. The reader opens the file descriptor twice, because otherwise it would find EOF and get unblocked. I tried eliminating those lines (the writer wouldn't close the fd, and the reader would open the fd just once, eliminating the second open()). But for some reason it unblocks if I do that. Why does that happen?
This is my code:
Writer:
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdlib>
#include <iostream>
#include <string>

int main () {
    int fd;
    static const std::string FILE_FIFO = "/tmp/archivo_fifo";
    mknod ( FILE_FIFO.c_str(), S_IFIFO|0666, 0 );
    std::string mess = "Hii!! Example";
    //open:
    fd = open ( FILE_FIFO.c_str(), O_WRONLY );
    //write:
    write ( fd, static_cast<const void*>(mess.c_str()), mess.length() );
    std::cout << "[Writer] I wrote " << mess << std::endl;
    //close:
    close ( fd );
    fd = -1;
    std::cout << "[Writer] END" << std::endl;
    exit ( 0 );
}
Reader:
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdlib>
#include <iostream>
#include <string>

int main () {
    int i, fd;
    static const int BUFFSIZE = 100;
    static const std::string name = "/tmp/archivo_fifo";
    mknod ( name.c_str(), S_IFIFO|0666, 0 );
    char buffer[BUFFSIZE];
    i = 0;
    fd = open ( name.c_str(), O_RDONLY );
    while (true) {
        i++;
        std::cout << "Waiting to read Fifo: " << i << std::endl;
        ssize_t bytesLeidos = read ( fd, static_cast<void*>(buffer), BUFFSIZE ); // bytesLeidos = "bytes read"
        fd = open ( name.c_str(), O_RDONLY ); // reopen so the next read() doesn't just return EOF
        std::string mess = buffer;
        mess.resize ( bytesLeidos );
        std::cout << "[Reader] I read: " << mess << std::endl;
        sleep(3);
        if (i == 10) break;
    }
    close ( fd );
    fd = -1;
    unlink ( name.c_str() );
    std::cout << "[Reader] END" << std::endl;
    exit ( 0 );
}
Thanks in advance. And please excuse my poor English
You should use the select call to find out whether any data is available on the pipe's fd. Have a look at
http://en.wikipedia.org/wiki/Select_(Unix)
You've opened the file in blocking mode:
If some process has the pipe open for writing and O_NONBLOCK is clear, read() shall block the calling thread until some data is written or the pipe is closed by all processes that had the pipe open for writing.
Depending on your goals, you should either synchronize the readers and writers of your pipe, or use non-blocking mode for the reader. Read about poll, epoll and select; a poll-based sketch follows.
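For instance, a minimal poll()-based reader for the FIFO from the question might look like this (a sketch, not the asker's code; a real reader would reopen after EOF instead of exiting):

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>
#include <iostream>

int main() {
    int fd = open("/tmp/archivo_fifo", O_RDONLY); // blocks until a writer connects
    struct pollfd pfd = { fd, POLLIN, 0 };
    char buf[100];
    for (;;) {
        if (poll(&pfd, 1, -1) < 0) break;   // wait for data or hangup
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0) std::cout.write(buf, n);
        else break;                         // 0 bytes: every writer has closed
    }
    close(fd);
}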
I've been reading more about FIFOs and now I understand the problem. I wrote:
"The reader opens the file descriptor twice, because otherwise it would find EOF and get unblocked. I tried eliminating those lines… But for some reason it unblocks if I do that. Why does that happen?"
It unblocks because the other process closes its end, so the OS tears down the connection anyway. That's why it unblocks even though I didn't write close(fd).
The only ways a blocking FIFO read can unblock are:
1) there is data to read
2) the other program closed the file descriptor: if there is no data to read and the writer closed its end (even if the file descriptor is still open in the reader), read() returns 0 and unblocks.
So my solution was to redesign my program so that the writer's file descriptor stays open all the time, which means there is only one executable file now. I'm pretty sure I could have done it with two executables, but I would probably need semaphores or something similar to synchronize, so the reader wouldn't try to read while the writer's fd is closed.
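For reference, a common trick that keeps the two-executable design (a sketch, not the code above): the reader holds its own dummy write end open, so the FIFO never reaches EOF between writer runs:

#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <iostream>

int main() {
    const char* name = "/tmp/archivo_fifo";
    mknod(name, S_IFIFO | 0666, 0);
    int fd    = open(name, O_RDONLY | O_NONBLOCK); // succeeds even with no writer yet
    int dummy = open(name, O_WRONLY);              // our own write end keeps EOF away
    fcntl(fd, F_SETFL, 0);                         // switch the read end back to blocking
    char buf[100];
    for (int i = 0; i < 10; ++i) {
        ssize_t n = read(fd, buf, sizeof(buf));    // blocks until some writer sends data
        if (n > 0) std::cout << "[Reader] I read: " << std::string(buf, n) << std::endl;
    }
    close(dummy);
    close(fd);
    unlink(name);
}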
The essence of my problem is that I can't write to a file in a loop with sleep(). If I have the following code:
std::ofstream file;
file.open("file.name");
for (;;) {
    file << "HELLO\n";
}
This code works perfectly and prints HELLO repeatedly into "file.name". However, I want to do something like this (I'm recording data from a real-time application):
for (;;) {
    file << "HELLO\n";
    sleep(1);
}
This doesn't seem to print anything into my file. Any ideas?
You need to flush the output. The output stream buffers your data in memory but hasn't written it out to disk yet. You should either use std::endl (which prints a newline and then flushes) instead of the newline character '\n', or explicitly flush the stream with std::flush:
for (;;) {
    file << "HELLO" << endl;
}

// or

for (;;) {
    file << "HELLO\n" << flush;
}
The magic word you are looking for is "flush".
c++ std::ofstream flush() but not close()
Before the sleep, flush the file so the data isn't left sitting in a buffer, waiting until there is enough to be worth writing out.
It's probably just a buffering issue. Because you are now writing much more slowly, the output buffer won't fill up as fast, so you may not 'see' the written data. Try adding a flush() before the sleep:
file.flush();
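Put together, the loop from the question becomes:

for (;;) {
    file << "HELLO\n";
    file.flush();  // push the buffered data out before going to sleep
    sleep(1);
}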