I have a program which gets its information from a stream, using cin to read and later convert the input.
This is how the program is called:
cat file1 | ./converter
In the C++ source, this is the line that reads from the stream:
while ( ! cin.eof( ) )
Is it possible to simulate the pipe in gdb? Without it I cannot debug the program.
If you read the documentation, like the section on program input/output, you will see that you can use normal redirection for the run command:
(gdb) run < file1
This will run your program with stdin redirected from file1.
I intend to run a C++ program on Spark using the rdd.pipe() operator, in order to see the possible benefits of running the program in parallel.
In the terminal I run it like this:
./program program_mode -t input -i output
In the Spark driver I've attempted:
mapDataset.pipe(s"/path/to/program program_mode -t $mapDataset -i /path/to/output")
where mapDataset is the input RDD (a .fasta file) that I have successfully loaded in the Spark driver, but this doesn't work.
The general problem is that the program expects to receive its input through the flags, but in Spark the input is in the RDD I've created, on which pipe is called.
Any idea how I can implement this communication correctly?
If your program uses streams, change the way it behaves: instead of opening an ifstream when a file name is given on the command line, pass std::cin to your functions. Do the same for the output stream.
I have a program that writes data to a file. Normally the file is on disk, but I am experimenting with writing to /dev/stdout. Currently, when I do this, the program will not exit until I press Ctrl-C. Is there a way for the program to signal that the output is done?
Edit:
Currently, a disk file is opened via fopen(FILE_NAME), so I am now trying to pass in /dev/stdout as FILE_NAME.
Edit 2:
command line is
MY_PROGRAM -i foo -o /dev/stdout > dump.png
Edit 3:
It looks like the problem here is that stdout is already open, and I am opening it a second time.
The EOF condition on a FIFO (which, if you're piping from your program into something else, is what your stdout is) is set when no file handles remain open for writing.
In C, the standard-library call is fclose(stdout), whereas the syscall interface is close(1); if you're using fopen(), you'll want to pair it with fclose().
If you're also doing a separate outFile = fopen("/dev/stdout", "w") or similar, then you'll need to close that copy too: fclose(outFile) in addition to fclose(stdout).
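The rule can be demonstrated with an ordinary pipe; this is a hypothetical self-contained example assuming POSIX pipe()/fdopen(), where the reading loop terminates only because the write handle was closed first:

```cpp
#include <cstdio>
#include <unistd.h>

// Write `msg` into a pipe, close the write side, then count the bytes
// the read side delivers. The reader sees EOF only because the last
// write handle was closed before reading started.
int count_bytes_through_pipe(const char* msg) {
    int fds[2];
    if (pipe(fds) != 0) return -1;

    FILE* w = fdopen(fds[1], "w");
    std::fputs(msg, w);
    std::fclose(w);                  // last write handle gone -> EOF becomes reachable

    FILE* r = fdopen(fds[0], "r");
    int n = 0;
    while (std::fgetc(r) != EOF)     // terminates only because of the close above
        ++n;
    std::fclose(r);
    return n;
}
```

If the fclose(w) line were removed, the loop would block forever, which is exactly the Ctrl-C symptom described in the question.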
Question: Using C++ or a bash/shell script, how can I evaluate the output of a long-running Linux process?
Example:
root@example.com:~# iw event
(This command will run until manually killed.)
(It will output data that I will want to read and parse line by line.)
What is the most efficient way to evaluate the standard output of this command as each new line is added to its buffer?
For example: iw event will output a line that says:
new station: 0e:0e:20:2d:20
I want to detect "new station" and run another command with the MAC address, e.g.:
./myProgram -mac 0e:0e:20:2d:20
Thanks!
If you run the command as shown, all output will go to stdout and display on the terminal. To capture the output you have a few options:
Pipe the output to your monitor program, as in iw event | yourmonitorprogram, which then reads stdin. iw should probably be modified to use unbuffered output.
Write the output of iw to a file and then use the same technique as the tail -f command to poll the file periodically
Have iw write to a named pipe or socket and have your monitor program read from that pipe or socket. This option requires modification to iw.
The simplest option is the first one.
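A sketch of the first option in C++, with popen() doing the piping in-process rather than on the shell command line; mac_from_event and the exact "new station: ..." line format are assumptions based on the question:

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// Extract the MAC address from a line like "new station: 0e:0e:20:2d:20".
// Returns an empty string when the line is not a "new station" event.
// (The exact line format is an assumption for illustration.)
std::string mac_from_event(const std::string& line) {
    const std::string key = "new station";
    std::string::size_type pos = line.find(key);
    if (pos == std::string::npos) return "";
    pos = line.find(':', pos + key.size());
    if (pos == std::string::npos) return "";
    std::string mac = line.substr(pos + 1);
    const char* ws = " \t\r\n";                 // trim surrounding whitespace
    mac.erase(0, mac.find_first_not_of(ws));
    mac.erase(mac.find_last_not_of(ws) + 1);
    return mac;
}

// Monitor loop: read iw's output line by line through a pipe and react.
void monitor() {
    FILE* pipe = popen("iw event", "r");
    if (!pipe) return;
    char buf[512];
    while (std::fgets(buf, sizeof buf, pipe)) {
        std::string mac = mac_from_event(buf);
        if (!mac.empty()) {
            std::string cmd = "./myProgram -mac " + mac;
            std::system(cmd.c_str());
        }
    }
    pclose(pipe);
}
```

If iw block-buffers when its stdout is a pipe, prefixing the command with stdbuf -oL can restore line-buffered output.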
I have an interface that I use to execute MML commands on my Solaris Unix system, like below:
> eaw 0004
<RLTYP;
BSC SYSTEM TYPE DATA
GSYSTYPE
GSM1800
END
<
As soon as I run eaw <name> on the command line, it starts an interface in which I can execute MML commands and see their output.
My idea here is to parse the command output in C++.
I can work out the parsing logic myself, but to start with: how can I execute the command inside C++? Is there a predefined way to do this?
This should be similar to executing SQL queries inside C++, but for those we use other libraries. I also do not want to run a shell script or create temporary files in between.
What I want is to execute the command inside C++ and get its output, also in C++.
Could anybody point me in the right direction?
You have several options. From easiest and simplest to hardest and most complex to use:
Use the system() call to spawn a shell to run a command
Use the popen() call to spawn a subprocess and either write to its standard input stream or read from its standard output stream (but not both)
Use a combination of pipe(), fork(), dup()/dup2(), and exec*() to spawn a child process and set up pipes for a child process's standard input and output.
The code below drives the sh command through popen(). It redirects stdout to a file named "out", which can be read later to process the output; each command for the shell is written through the pipe.
#include <stdio.h>

int main()
{
    FILE *fp;

    /* Spawn a shell whose stdout is redirected to the file "out". */
    fp = popen("sh > out", "w");
    if (fp) {
        fprintf(fp, "date\n");  /* each line written is a command for the shell */
        fprintf(fp, "exit\n");
        pclose(fp);             /* a stream from popen() must be closed with pclose(), not fclose() */
    }
    return 0;
}
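For completeness, the third option (pipe/fork/dup2/exec) can be sketched like this; run_filter is a hypothetical helper, assuming POSIX, that feeds input to a child process and captures its output, something a single popen() cannot do in both directions:

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <string>

// Run the program at `path`, write `input` to its stdin, return its stdout.
// Suitable for small inputs/outputs that fit in the pipe buffers.
std::string run_filter(const char* path, const std::string& input) {
    int inPipe[2], outPipe[2];               // parent->child, child->parent
    if (pipe(inPipe) != 0 || pipe(outPipe) != 0) return "";

    pid_t pid = fork();
    if (pid == 0) {                          // child
        dup2(inPipe[0], STDIN_FILENO);       // read end becomes stdin
        dup2(outPipe[1], STDOUT_FILENO);     // write end becomes stdout
        close(inPipe[0]); close(inPipe[1]);
        close(outPipe[0]); close(outPipe[1]);
        execl(path, path, (char*)nullptr);
        _exit(127);                          // exec failed
    }
    // parent
    close(inPipe[0]);
    close(outPipe[1]);
    if (write(inPipe[1], input.data(), input.size()) < 0) {
        /* sketch: a real version would handle partial writes */
    }
    close(inPipe[1]);                        // EOF for the child's stdin

    std::string out;
    char buf[256];
    ssize_t n;
    while ((n = read(outPipe[0], buf, sizeof buf)) > 0)
        out.append(buf, n);
    close(outPipe[0]);
    waitpid(pid, nullptr, 0);                // reap the child
    return out;
}
```

For large data in both directions the parent must interleave reads and writes (e.g. with select()), or the pipes can deadlock.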
I am having trouble using system() from libc on Linux. My code is this:
system( "tar zxvOf some.tar.gz fileToExtract | sed 's/some text to remove//' > output" );
std::string line;
int count = 0;
std::ifstream inputFile( "output" );
while( std::getline( inputFile, line ) )
++count;
I run this snippet repeatedly and occasionally I find that count == 0 at the end of the run - no lines have been read from the file. I look at the file system and the file has the contents I would expect (greater than zero lines).
My question is should system() return when the entire command passed in has completed or does the presence of the pipe '|' mean system() can return before the part of the command after the pipe is completed?
I have explicitly not used a '&' to background any part of the command to system().
To clarify further: in practice I do run the code snippet multiple times in parallel, but the output file has a unique name built from the thread ID and a static integer incremented per call to system(). I'm confident that the file being written and read is unique for each call to system().
According to the documentation
The system() function shall not return until the child process has terminated.
Perhaps capture the contents of "output" when the failure occurs and see what they are. In addition, checking the return value of system() would be a good idea: one scenario is that the shell command you are running is failing, and you aren't noticing because the return value goes unchecked.
system(...) calls the standard shell to execute the command, and the shell returns only after the foreground command has finished. So if one of the programs is backgrounded, system() will return early.
Backgrounding happens by suffixing a command with &, so check whether the string you pass to system(...) contains any &, and if so make sure it is properly quoted against shell processing.
system() will only return after its command has completed, and the file output should be readable in full after that. But ...
... multiple instances of your code snippet running in parallel would interfere, because they would all use the same file output. If you just want to examine the contents of output and do not need the file itself, I would use popen instead of system: popen lets you read the command's output via a FILE*.
In case of a full file system you could also see an empty output file, a condition the popen version would not suffer from.
To notice errors like a full file system, always check the return codes of your calls (system, popen, ...). If there is an error, the man page will tell you to check errno; errno can be converted to human-readable text with strerror, or printed directly with perror.
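Putting the last two points together, here is a sketch of a popen() wrapper that checks every return code and reports errors via strerror; read_command_output is a hypothetical name for illustration:

```cpp
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <string>

// Run a shell command via popen and collect its stdout, checking every call.
// Returns true on success; on failure `err` receives a readable message.
bool read_command_output(const std::string& cmd, std::string& out, std::string& err) {
    errno = 0;
    FILE* pipe = popen(cmd.c_str(), "r");
    if (!pipe) {
        err = std::string("popen failed: ") + std::strerror(errno);
        return false;
    }
    char buf[256];
    while (std::fgets(buf, sizeof buf, pipe))
        out += buf;
    if (std::ferror(pipe)) {
        err = std::string("read failed: ") + std::strerror(errno);
        pclose(pipe);
        return false;
    }
    int status = pclose(pipe);   // raw wait status; 0 means the command succeeded
    if (status != 0) {
        err = "command exited with status " + std::to_string(status);
        return false;
    }
    return true;
}
```

This replaces the whole write-to-file-then-reopen dance: there is no shared file name to collide on when several threads run commands in parallel.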