Program that writes to /dev/stdout: how to send EOF? - c++

I have a program that writes data to a file. Normally, the file is on disk, but I am experimenting with writing to /dev/stdout. Currently, when I do this, the program will not exit until I press Ctrl-C. Is there a way for the program to signal that the output is done?
Edit:
Currently, a disk file is opened via fopen(FILE_NAME), so I am now trying to pass in /dev/stdout as FILE_NAME.
Edit 2:
command line is
MY_PROGRAM -i foo -o /dev/stdout > dump.png
Edit 3:
It looks like the problem here is that stdout is already open, and I am opening it a second time.

The EOF condition on a FIFO (which, if you're piping from your program into something else, is what your stdout is) is set when no file handles are still open for write.
In C, the standard-library call is fclose(stdout), whereas the syscall-level equivalent is close(1) -- if you're using fopen(), you'll want to pair it with fclose().
If you're also doing a separate outFile = fopen("/dev/stdout", "w") or similar, then you'll need to close that handle too: fclose(outFile) in addition to fclose(stdout).
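As a minimal sketch (assuming the output handle is a separately opened FILE* called outFile, which is just an illustrative name):

#include <stdio.h>

int main(void)
{
    /* Hypothetical: open the output by name, which here happens to be
       /dev/stdout, so the process now holds a second handle on the
       same underlying stream. */
    FILE *outFile = fopen("/dev/stdout", "w");
    if (outFile == NULL) {
        perror("fopen");
        return 1;
    }

    fputs("payload goes here\n", outFile);

    /* Close the explicitly opened handle... */
    fclose(outFile);
    /* ...and the stdout stream the C runtime opened for us. Once no
       write handles remain, the reader at the other end of the pipe
       sees EOF and the pipeline can finish. */
    fclose(stdout);
    return 0;
}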

Related

Check for error in command passed to popen API in cpp

I have a C++ application in which I want to read the following types of compressed files:
file_name.gz
file_name.Z
file_name.tar.gz
For this purpose, I check the file extension and choose the decompression technique accordingly. E.g. file_name.gz will be decompressed using "gunzip -c file_name.gz".
I want to get a FILE handle for the decompressed data, and I use the popen() API for that. Now, there might be a case where gunzip/uncompress/tar fails while decompressing the file due to memory issues. How do I capture that failure in my C++ application? There is a way to check whether popen() itself failed, but what about the command passed to popen()?
Please help. I have looked in various places but could not find a satisfactory solution.
When a process terminates normally, it is expected to return an exit code of 0 (formally, EXIT_SUCCESS) to its parent. Otherwise, in the case of a crash or any other abnormal termination, a non-zero value is expected. You can obtain the child's termination status from the value returned by pclose(); it is encoded in the same format as the status from waitpid(), so WEXITSTATUS() extracts the exit code. If the code is 0, the child process most probably terminated successfully.
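For example, a minimal sketch of that check (the command and file name are placeholders taken from the question):

#include <stdio.h>
#include <sys/wait.h>

int main(void)
{
    /* "gunzip -c" writes the decompressed data to stdout, which
       popen() hands back to us as a readable FILE*. */
    FILE *fp = popen("gunzip -c file_name.gz", "r");
    if (fp == NULL) {
        perror("popen");
        return 1;
    }

    char buf[4096];
    while (fread(buf, 1, sizeof buf, fp) > 0) {
        /* consume the decompressed data here */
    }

    int status = pclose(fp);  /* waits for the child and returns its status */
    if (status == -1) {
        perror("pclose");
    } else if (WIFEXITED(status) && WEXITSTATUS(status) == 0) {
        printf("decompression succeeded\n");
    } else {
        fprintf(stderr, "decompression failed, raw status %d\n", status);
    }
    return 0;
}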

Piping to provide a file as input to a C program

I have a set of .gz files, and inside each of them is a single text file. This text file needs to be used in a C program. The following command solves this problem somehow, where parameter1 and parameter2 are integers that I receive as arguments to the C program (argc, argv[]) in main().
gzip -dc xyz.txt.gz | ./program parameter1 parameter2
Can someone explain how the above code works in command line?
How does the text file automatically get passed to the program?
Do I need to write extra code in the C program to receive this text file?
The shell connects the stdout of one command directly to the stdin of the other command through a pipe(7). Neither program has to do anything out of the ordinary to take advantage of this.
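So the program simply reads from stdin; a minimal sketch of what that might look like (the line counting is just an example of processing the text):

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s parameter1 parameter2\n", argv[0]);
        return 1;
    }
    int parameter1 = atoi(argv[1]);
    int parameter2 = atoi(argv[2]);

    /* The decompressed text arrives on stdin because the shell
       connected gzip's stdout to this process's stdin via a pipe. */
    char line[1024];
    long count = 0;
    while (fgets(line, sizeof line, stdin) != NULL)
        ++count;

    printf("read %ld lines (parameters %d and %d)\n",
           count, parameter1, parameter2);
    return 0;
}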

Evaluate output of a background linux command with C++ or Bash/Shell Script

Question: Using C++ or a bash/shell script, how can I evaluate output of a long running linux process?
Example:
root@example.com:~# iw event
(This command will run until manually killed.)
(It will output data that I will want to read and parse line by line.)
What is the most efficient way to evaluate the standard output of this command whenever a new line is added to its buffer?
For example: iw event will output a line that says:
new station: 0e:0e:20:2d:20
I want to detect "new station" and run another command with the mac address. IE:
./myProgram -mac 0e:0e:20:2d:20
Thanks!
If you run the command as shown, all output will go to stdout and display on the terminal. To capture the output you have a few options:
Pipe the output to your monitor program, as in iw event | yourmonitorprogram, which then reads stdin. iw should probably be modified to use unbuffered output.
Write the output of iw to a file and then use the same technique as the tail -f command to poll the file periodically
Have iw write to a named pipe or socket and have your monitor program read from that pipe or socket. This option requires modification to iw.
The simplest option is the first one; a rough sketch follows below.
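A sketch of the first option as a small C++ monitor. It reads stdin from something like stdbuf -oL iw event | ./monitor; the exact "new station" line format and ./myProgram are assumed from the question:

#include <cstdlib>
#include <iostream>
#include <string>

int main()
{
    std::string line;
    while (std::getline(std::cin, line)) {
        if (line.find("new station") == std::string::npos)
            continue;

        // Assume the MAC address is the last whitespace-separated token.
        std::string::size_type space = line.find_last_of(" \t");
        if (space == std::string::npos)
            continue;
        std::string mac = line.substr(space + 1);

        // Hand the address to the other program mentioned in the question.
        std::string cmd = "./myProgram -mac " + mac;
        std::system(cmd.c_str());
    }
    return 0;
}

Running iw under stdbuf -oL asks for line-buffered output, so the monitor sees each event promptly; that may avoid having to modify iw itself.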

Input/output redirect from a command-line executable to file

How can I save all the input (cin) and output (cout, cerr) from a program whose input is taken from a file (using "<")? I would like the input and output to be in order (so each input is followed by the corresponding output, as if I were typing the input in myself).
I tried ">" to send everything to a file, but that only saves standard output (no input/cerr), and just plainly copying the command-line output still only gives the output without the input (because of how "<" works).
Is there a way to write everything(output+input) to file in order?
EDIT: edited for clarity
EDIT2: I just realized that it's impossible to do what I'm trying to do since the console does not know anything about when the commands would actually be entered. I'll have to manually enter commands and use the "script" command to actually log all input/output.
You need to redirect cerr (stderr) into the same stream as well:
command > file 2>&1
This means send 2 (stderr) to the same place as 1 (stdout).
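As a quick illustration, a tiny C++ program that writes to both streams:

#include <iostream>

int main()
{
    std::cout << "this goes to stdout (file descriptor 1)" << std::endl;
    std::cerr << "this goes to stderr (file descriptor 2)" << std::endl;
    return 0;
}

Running it as ./a.out < input.txt > file 2>&1 captures both streams in file. Note that cout is buffered while cerr is not, so without explicit flushes the interleaving in the file may differ from what you see on a terminal, and the redirected input itself is still not recorded (as the EDIT2 above concludes).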

Can system() return before piped command is finished

I am having trouble using system() from libc on Linux. My code is this:
system( "tar zxvOf some.tar.gz fileToExtract | sed 's/some text to remove//' > output" );
std::string line;
int count = 0;
std::ifstream inputFile( "output" );
while( std::getline( inputFile, line ) )
++count;
I run this snippet repeatedly and occasionally I find that count == 0 at the end of the run - no lines have been read from the file. I look at the file system and the file has the contents I would expect (greater than zero lines).
My question is should system() return when the entire command passed in has completed or does the presence of the pipe '|' mean system() can return before the part of the command after the pipe is completed?
I have explicitly not used a '&' to background any part of the command to system().
To further clarify I do in practice run the code snippet multiples times in parallel but the output file is a unique filename named after the thread ID and a static integer incremented per call to system(). I'm confident that the file being output to and read is unique for each call to system().
According to the documentation
The system() function shall not return until the child process has terminated.
Perhaps capture the contents of "output" when this happens and see what they are? In addition, checking the return value of system() would be a good idea; one scenario is that the shell command you are running is failing and you aren't checking the return value.
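A sketch of that return-value check, using the command from the question (WIFEXITED/WEXITSTATUS interpret the status that system() returns):

#include <cstdio>
#include <cstdlib>
#include <sys/wait.h>

int main()
{
    int rc = std::system(
        "tar zxvOf some.tar.gz fileToExtract | sed 's/some text to remove//' > output");
    if (rc == -1) {
        std::perror("system");  // the shell itself could not be started
    } else if (WIFEXITED(rc) && WEXITSTATUS(rc) != 0) {
        std::fprintf(stderr, "pipeline exited with status %d\n", WEXITSTATUS(rc));
    }
    return 0;
}

Keep in mind that the shell reports the status of the last command in the pipeline, so a failure of tar alone may not be visible here.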
system(...) calls the standard shell to execute the command, and the shell itself should return only after it has regained control over the terminal. So if one of the programs is backgrounded, system() will return early.
Backgrounding happens through suffixing a command with &, so check whether the string you pass to system(...) contains any & and, if so, make sure they're properly quoted from shell processing.
system() will only return after its command completes, and the file output should be readable in full after that. But ...
... multiple instances of your code snippet running in parallel would interfere because they all use the same file output. If you just want to examine the contents of output and do not need the file itself, I would use popen instead of system. popen allows you to read the output of the pipe via a FILE*.
If the file system is full, you could also see an empty output file, while the popen version would have no trouble with this condition.
To notice errors like a full file system, always check the return code of your calls (system, popen, ...). If there is an error, the man page will tell you whether to check errno. The errno value can be converted to human-readable text with strerror() and printed with perror().
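A sketch of that popen() variant, reading the pipeline directly instead of going through the output file (same command as the question, minus the redirection):

#include <cerrno>
#include <cstring>
#include <stdio.h>        // popen()/pclose() are POSIX and declared in stdio.h
#include <sys/wait.h>

int main()
{
    FILE *fp = popen("tar zxvOf some.tar.gz fileToExtract | sed 's/some text to remove//'", "r");
    if (fp == NULL) {
        fprintf(stderr, "popen: %s\n", std::strerror(errno));
        return 1;
    }

    int count = 0;
    char line[4096];
    while (fgets(line, sizeof line, fp) != NULL)
        ++count;

    int status = pclose(fp);
    if (status == -1)
        perror("pclose");                 // e.g. waiting for the child failed
    else if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
        fprintf(stderr, "pipeline failed, raw status %d\n", status);

    printf("read %d lines\n", count);
    return 0;
}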