I'm executing a long-running (and often blocking) command via popen(): "ls -R /".
Problem: fread() on the stream returned by popen() reads into a buffer that you supply, and it seemingly tries to fill the ENTIRE buffer before returning. This causes it to block quite often (if your buffer is large).
The solution would seem to be to make the underlying fd non-blocking. When I do this, reads still block, usually for about 1 second each time. Why is this happening?
Here is my code. Make sure to compile with -std=c++11:
#include <cstdio>
#include <cerrno>   // for errno
#include <iomanip>  // for std::setw, std::setfill
#include <iostream>
#include <sys/time.h>
#include <unistd.h>
#include <fcntl.h>
static constexpr size_t SIZE = 65536;
struct Time
{
friend std::ostream &operator<<(std::ostream &os, Time const &t)
{
(void)t;
timeval tv;
gettimeofday(&tv, nullptr);
os << tv.tv_sec << "." << std::setw(6) << std::setfill('0') << tv.tv_usec << " ";
return os;
}
};
int main()
{
FILE *file;
file = popen("ls -R /", "r");
if(!file)
{
std::cerr << "Could not open app: " << errno;
return -1;
}
// Make it non-blocking
int fd = fileno(file);
fcntl(fd, F_SETFL, O_NONBLOCK);
char buffer[SIZE];
Time t;
while(true)
{
int rc = fread(buffer, 1, SIZE, file);
if(rc <= 0)
{
if(EAGAIN == errno)
{
usleep(10);
continue;
}
std::cerr << t << "Error reading: " << errno << std::endl;
break;
}
std::cerr << t << "Read " << rc << std::endl;
}
pclose(file);
return 0;
}
Output (notice that the reads are about 1 second apart, even though the fd is non-blocking and the loop only pauses for the 10 µs usleep):
1429625100.983786 Read 4096
1429625101.745369 Read 4096
1429625102.426967 Read 4096
1429625103.185273 Read 4096
1429625103.834241 Read 4096
1429625104.512131 Read 4096
1429625105.188010 Read 4096
1429625105.942257 Read 4096
1429625106.642877 Read 4096
First, you should use read() rather than fread(). The stdio functions keep their own layer of buffering on top of the OS's, so they can block even on a non-blocking file descriptor. Reading from the descriptor directly with read() avoids this.
Second, you need to stop ls from buffering its output. The default behavior for programs that link against glibc is line buffering when stdout is connected to a TTY, and full buffering when it is connected to a pipe or redirected to a file. Full buffering means the output is only flushed when the 4 KB buffer fills up, rather than every time a newline is output.
You can use stdbuf to override this behavior. Note that it only works for programs that use C streams and link against glibc dynamically. That is most programs, but not all.
popen("stdbuf -oL ls -R /", "r");
Related
On Linux, I am trying to detect a Bluetooth controller being connected and start reading from it. I know SDL can do that, but I wanted to learn how to do it specifically on Linux. So I'm using the inotify API to wait for the file /dev/input/js0 to show up. But when I detect the file, I cannot open it. I have the following C++ code:
#include <iostream>
#include <fstream>
#include <sys/inotify.h>
#include <unistd.h>
#include <linux/joystick.h>
#include <string.h>
constexpr int NAME_MAX = 16;
int main(int argc, char **argv) {
std::string path = std::string(argv[1]);
std::string directory = path.substr(0, path.find_last_of("/"));
std::string file = path.substr(path.find_last_of("/") + 1);
std::cout << "Directory is " << directory << ", file is " << file << std::endl;
int fd = inotify_init();
if (inotify_add_watch(fd, directory.c_str(), IN_CREATE) < 0) {
std::cout << "Could not watch: " << file << std::endl;
return -1;
}
else
std::cout << "Watching: " << file << std::endl;
char buffer[sizeof(struct inotify_event) + NAME_MAX + 1];
while (true) {
if (read(fd, buffer, sizeof(buffer)) < 0) {
std::cout << "Error reading event" << std::endl;
break;
}
struct inotify_event &event = (struct inotify_event &) buffer;
std::cout << event.name << std::endl;
if ((strcmp(event.name, file.c_str()) == 0) && (event.mask & IN_CREATE)) {
std::cout << "File has been created" << std::endl;
close(fd);
break;
}
}
std::fstream file_stream(file, std::fstream::in);
std::cout << file_stream.is_open() << std::endl;
}
If I run it to detect a regular file, it works, it waits for the file creation event, and when trying to open it with a std::fstream, is_open returns true. But if I run it to detect /dev/input/js0, even when the event comes and the file is detected, opening the fstream does not work, as is_open returns false. Is inotify appropriate to detect device files? If not, what would be the right way to do so?
According to inotify(7)
Inotify reports only events that a user-space program triggers
through the filesystem API. As a result, it does not catch
remote events that occur on network filesystems. (Applications
must fall back to polling the filesystem to catch such events.)
Furthermore, various pseudo-filesystems such as /proc, /sys, and
/dev/pts are not monitorable with inotify.
I would say that /dev/input/ also falls into this bucket.
I wonder if udev could be used: you should get info about the device using udevadm info -a -n /dev/input/js0, and also see what events connecting the peripheral generates using udevadm monitor --environment --udev.
Edit: if you successfully get an inotify event but can't read the file:
Did you try reading the file with another, simpler program while the BT device is already connected?
Is there a difference between fstream::open and fopen from <cstdio>?
Have you checked the permissions on the device? Also, what does cat /dev/input/js0 produce?
I am using boost::process to read the output of a console application asynchronously on Windows. I noticed that the read event is triggered only after about 4 KB of data, every time.
If I set my buffer 'buf' to a small value, nothing changes: the event is triggered multiple times, each ONLY after 4 KB of data has been transferred.
As per my understanding this could be a safety mechanism Windows uses to avoid deadlock while reading from the pipe.
Is there any way in boost::process to change the size of the buffer the pipe uses to transfer the data?
#include <boost/process.hpp>
#include <boost/asio.hpp>
using namespace boost::process;
boost::asio::io_service ios;
std::vector<char> buf(200);
async_pipe ap(ios);
void read_from_buffer(const boost::system::error_code &ec, std::size_t size)
{
if (ec)
{
std::cout << "error" << std::endl;
return;
}
std::cout << "--read-- " << size << std::endl;
for (size_t i = 0; i < size; i++) std::cout << buf[i];
std::cout << std::endl;
ap.async_read_some(boost::asio::buffer(buf), read_from_buffer);
}
int main()
{
child c("MyApp.exe --args", std_out > ap);
ap.async_read_some(boost::asio::buffer(buf), read_from_buffer);
ios.run();
int result = c.exit_code();
}
You might have to control the "sending" side (so, MyApp.exe).
On UNIX there's stdbuf (which uses setvbuf), unbuffer, and similar tools; some programs also have support built in (e.g. grep --line-buffered).
On Windows, I'm not sure. Here's a pointer: Disable buffering on redirected stdout Pipe (Win32 API, C++)
I'm dealing with the fscanf function in C++ and I'm confused about one point: why doesn't it block the calling thread while the stream is empty?
My expectation was that the main thread would block in fscanf and be released after 3 seconds, because the file stream is written to by the child thread after 3 seconds.
In reality, it doesn't behave as I expected. Can somebody tell me why?
Here is my code:
#include <windows.h>
#include <iostream>
#include <stdio.h>
DWORD WINAPI subroutine(LPVOID data)
{
FILE* file = (FILE*)data;
std::cout << "I'll send you something after 3s" << std::endl;
Sleep(3000);
if (file != NULL)
{
std::cout << "I'm writing now" << std::endl;
const char* sentence = "Hello";
fputs(sentence, file);
}
return 0;
}
int main()
{
FILE* file = tmpfile();
if ( file != NULL )
{
CreateThread(NULL, 0, subroutine, file, 0, NULL);
char something[50];
std::cout << "Blocking..." << std::endl;
rewind(file);
fscanf(file, "%s", something);
std::cout << "Message is " << something << std::endl;
}
std::cout << "Done" << std::endl;
if (file != NULL)
{
fclose(file);
}
return 0;
}
============
Because people may not understand why I expect fscanf to block the calling thread, this is where my expectation comes from:
int main()
{
char something[50];
fscanf(stdin, "%s", something);
std::cout << something << std::endl;
return 0;
}
The program stops until you enter something.
You are using a regular file, so when you are at its end fscanf returns EOF. The fact that there's another thread that in a few seconds will append some data is irrelevant, and the C library has no way to know it anyway.
Standard input blocks because when it is attached to a console it doesn't end (until the user presses Ctrl-D), so if you ask for more input and it's not ready to take it just waits (think to it as if it was a file on a disk extremely slow to provide the data).
Besides, using a FILE * backed by an actual file for cross-thread communication seems like a bad idea; beyond the efficiency concerns and the headaches of sharing a FILE * safely between two threads, it maps badly to the problem at hand. What you seem to want here is a FIFO-like communication channel between two threads, not a storage device for data.
If you want to have FIFO communication between the two threads you can use - for example - an anonymous pipe or just a thread-safe message queue.
I need to open a subprocess using popen; the process will continuously ask for user input... The main process needs to send that data over the pipe.
This is my first attempt:
FILE *in;
char buff[1024];
if(!(in = popen("cd FIX/fix2/src; java -cp .:./* com.fix.bot", "w"))){
return 1;
}
while(1){
char buffer[] = { 'x' };
fwrite(buffer, sizeof(char), sizeof(buffer), in);
cout << "Wrote!" << endl;
usleep(1000000);
}
However the data is not sent! I have to close the pipe with pclose() before the data is written to the process. How can I make sure the data is written without having to close the pipe every time?
You'll want to call fflush(in) to make sure that the buffered data is actually written to the stream.
Also check that java -cp .:./* in the command isn't expanding to an invalid classpath. I think that'll end up expanding to several arguments if there's more than one file in the current directory, and not actual classpath entries.
This works for me:
#include <stdio.h>
#include <unistd.h>
#include <iostream>
int main(int argc, char ** argv) {
FILE *in;
char buff[1024];
if(!(in = popen("./2.sh", "w"))){
return 1;
}
while(1) {
char buffer[] = "xx\n";
fwrite(buffer, sizeof(char), sizeof(buffer), in);
fflush(in);
std::cout << "Wrote!" << std::endl;
usleep(1000000);
}
}
where 2.sh is:
#!/bin/sh
read INP
echo 'read: ' $INP
So I really suspect that the problem is a missing \n.
This question follows from my attempt to implement the instructions in:
Linux Pipes as Input and Output
How to send a simple string between two programs using pipes?
http://tldp.org/LDP/lpg/node11.html
My question is along the lines of the question in: Linux Pipes as Input and Output, but more specific.
Essentially, I am trying to replace:
/directory/program < input.txt > output.txt
using pipes in C++ in order to avoid using the hard drive. Here's my code:
//LET THE PLUMBING BEGIN
int fd_p2c[2], fd_pFc[2], bytes_read;
// "p2c" = pipe_to_child, "pFc" = pipe_from_child (see above link)
pid_t childpid;
char readbuffer[80];
string program_name;// <---- includes program name + full path
string gulp_command;// <---- includes my line-by-line stdin for program execution
string receive_output = "";
pipe(fd_p2c);//create pipe-to-child
pipe(fd_pFc);//create pipe-from-child
childpid = fork();//create fork
if (childpid < 0)
{
cout << "Fork failed" << endl;
exit(-1);
}
else if (childpid == 0)
{
dup2(0,fd_p2c[0]);//close stdout & make read end of p2c into stdout
close(fd_p2c[0]);//close read end of p2c
close(fd_p2c[1]);//close write end of p2c
dup2(1,fd_pFc[1]);//close stdin & make read end of pFc into stdin
close(fd_pFc[1]);//close write end of pFc
close(fd_pFc[0]);//close read end of pFc
//Execute the required program
execl(program_name.c_str(),program_name.c_str(),(char *) 0);
exit(0);
}
else
{
close(fd_p2c[0]);//close read end of p2c
close(fd_pFc[1]);//close write end of pFc
//"Loop" - send all data to child on write end of p2c
write(fd_p2c[1], gulp_command.c_str(), (strlen(gulp_command.c_str())));
close(fd_p2c[1]);//close write end of p2c
//Loop - receive all data to child on read end of pFc
while (1)
{
bytes_read = read(fd_pFc[0], readbuffer, sizeof(readbuffer));
if (bytes_read <= 0)//if nothing read from buffer...
break;//...break loop
receive_output += readbuffer;//append data to string
}
close(fd_pFc[0]);//close read end of pFc
}
I am absolutely sure that the above strings are initialized properly. However, two things happen that don't make sense to me:
(1) The program I am executing reports that the "input file is empty." Since I am not calling the program with "<" it should not be expecting an input file. Instead, it should be expecting keyboard input. Furthermore, it should be reading the text contained in "gulp_command."
(2) The program's report (provided via standard output) appears in the terminal. This is odd because the purpose of this piping is to transfer stdout to my string "receive_output." But since it is appearing on screen, that indicates to me that the information is not being passed correctly through the pipe to the variable. If I implement the following at the end of the if statement,
cout << receive_output << endl;
I get nothing, as though the string is empty. I appreciate any help you can give me!
EDIT: Clarification
My program currently communicates with another program using text files. My program writes a text file (e.g. input.txt), which is read by the external program. That program then produces output.txt, which is read by my program. So it's something like this:
my code -> input.txt -> program -> output.txt -> my code
Therefore, my code currently uses,
system("program < input.txt > output.txt");
I want to replace this process using pipes. I want to pass my input as standard input to the program, and have my code read the standard output from that program into a string.
Your primary problem is that you have the arguments to dup2() reversed. You need to use:
dup2(fd_p2c[0], 0); // Duplicate read end of pipe to standard input
dup2(fd_pFc[1], 1); // Duplicate write end of pipe to standard output
I got suckered into misreading what you wrote as OK until I put error checking on the set-up code and got unexpected values from the dup2() calls, which told me what the trouble was. When something goes wrong, insert the error checks you skimped on before.
You also did not ensure null termination of the data read from the child; this code does.
Working code (with diagnostics), using cat as the simplest possible 'other command':
#include <unistd.h>
#include <string>
#include <iostream>
using namespace std;
int main()
{
int fd_p2c[2], fd_c2p[2], bytes_read;
pid_t childpid;
char readbuffer[80];
string program_name = "/bin/cat";
string gulp_command = "this is the command data sent to the child cat (kitten?)";
string receive_output = "";
if (pipe(fd_p2c) != 0 || pipe(fd_c2p) != 0)
{
cerr << "Failed to pipe\n";
exit(1);
}
childpid = fork();
if (childpid < 0)
{
cout << "Fork failed" << endl;
exit(-1);
}
else if (childpid == 0)
{
if (dup2(fd_p2c[0], 0) != 0 ||
close(fd_p2c[0]) != 0 ||
close(fd_p2c[1]) != 0)
{
cerr << "Child: failed to set up standard input\n";
exit(1);
}
if (dup2(fd_c2p[1], 1) != 1 ||
close(fd_c2p[1]) != 0 ||
close(fd_c2p[0]) != 0)
{
cerr << "Child: failed to set up standard output\n";
exit(1);
}
execl(program_name.c_str(), program_name.c_str(), (char *) 0);
cerr << "Failed to execute " << program_name << endl;
exit(1);
}
else
{
close(fd_p2c[0]);
close(fd_c2p[1]);
cout << "Writing to child: <<" << gulp_command << ">>" << endl;
int nbytes = gulp_command.length();
if (write(fd_p2c[1], gulp_command.c_str(), nbytes) != nbytes)
{
cerr << "Parent: short write to child\n";
exit(1);
}
close(fd_p2c[1]);
while (1)
{
bytes_read = read(fd_c2p[0], readbuffer, sizeof(readbuffer)-1);
if (bytes_read <= 0)
break;
readbuffer[bytes_read] = '\0';
receive_output += readbuffer;
}
close(fd_c2p[0]);
cout << "From child: <<" << receive_output << ">>" << endl;
}
return 0;
}
Sample output:
Writing to child: <<this is the command data sent to the child cat (kitten?)>>
From child: <<this is the command data sent to the child cat (kitten?)>>
Note that you will need to be careful to ensure you don't get deadlocked with your code. If you have a strictly synchronous protocol (so the parent writes a message and reads a response in lock-step), you should be fine, but if the parent is trying to write a message that's too big to fit in the pipe to the child while the child is trying to write a message that's too big to fit in the pipe back to the parent, then each will be blocked writing while waiting for the other to read.
It sounds like you're looking for coprocesses. You can program them in C/C++, but since they are already available in the (bash) shell, it's easier to use the shell, right?
First start the external program with the coproc builtin:
coproc external_program
The coproc starts the program in the background and stores the file descriptors to communicate with it in an array shell variable. Now you just need to start your program connecting it to those file descriptors:
your_program <&${COPROC[0]} >&${COPROC[1]}
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <string.h>
#include <iostream>
using namespace std;
int main() {
int i, status, len;
char str[21]; // large enough to hold the reversed 20-byte buffer plus NUL
mknod("pipe", S_IFIFO | S_IRUSR | S_IWUSR, 0); //create named pipe
pid_t pid = fork(); // create new process
/* Process A */
if (pid == 0) {
int myPipe = open("pipe", O_WRONLY); // returns a file descriptor for the pipe
cout << "\nThis is process A having PID= " << getpid(); //Get pid of process A
cout << "\nEnter the string: ";
cin >> str;
len = strlen(str);
write(myPipe, str, len); //Process A write to the named pipe
cout << "Process A sent " << str;
close(myPipe); //closes the file descriptor fields.
}
/* Process B */
else {
int myPipe = open("pipe", O_RDONLY); //Open the pipe and returns file descriptor
char buffer[21];
int pid_child;
pid_child = wait(&status); //wait until any one child process terminates
int length = read(myPipe, buffer, 20); //reads up to size bytes from pipe with descriptor fields, store results
// in buffer;
cout<< "\n\nThis is process B having PID= " << getpid();//Get pid of process B
buffer[length] = '\0';
cout << "\nProcess B received " << buffer;
i = 0;
//Reverse the string
for (length = length - 1; length >= 0; length--)
str[i++] = buffer[length];
str[i] = '\0';
cout << "\nReverse of string is " << str;
close(myPipe);
}
unlink("pipe");
return 0;
}