I've been working on this task for 3+ weeks now, and I'm sure it's a piece of cake for somebody out there, so I'm just going to ask whether somebody could write me some example code with these requirements:
Task is in C++ and the main point is to become familiar with pipes
It should be called like this (I think) from the command line: cat inputfile.cpp | ./program01 ./program02. What I'm trying to say is (I think): "Modify the file inputfile.cpp using programs program01 and program02".
Using pipes, firstly program01 removes all occurrences of something in inputfile.cpp (for example all empty rows). After all empty rows are removed, program02 should remove all occurrences of something else (comments, for example).
Does my question make any sense? I mean, are pipes even meant to be used that way (first run one program, then another)?
Can I possibly run multiple files through multiple programs, for example like cat input1.cpp input2.cpp input3.cpp | ./program01 ./program02 ./program03?
I've written a bunch of programs that do various things to a file, but that is not the main point of the task. The main point is the "piping" part, but I really, really just don't get it.
Any guidance is appreciated (some code below).
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include "programs.h"

int main(int argc, char** argv)
{
    int fd[2];
    pid_t pid;
    int result;

    result = pipe(fd);
    if (result < 0)
    {
        perror("pipe error");
        exit(1);
    }

    pid = fork();
    if (pid < 0)
    {
        perror("fork error");
        exit(2);
    }

    // Child
    if (pid == 0)
    {
        while (1)
        {
            // I guess I should do some piping-magic here?
        }
        exit(0);
    }
    // Parent
    else
    {
        while (1)
        {
        }
        exit(0);
    }
}
I think you're confusing the shell's pipe operator with the pipe() system call used for IPC.
In bash, the pipe doesn't pass arguments; it redirects the stdout of one command to the next command as its stdin. So in your C++ program you should read from stdin and write to stdout (relevant Stack Overflow question).
You can do something like
cat file | ./program1 > outfile
and to chain programs
cat file | ./program1 | ./program2 > outfile
(redirect to a different file than the one you are reading from; > file would truncate the input before cat gets a chance to read it).
To process data from the shell's pipe all you have to do is read in data from std::cin and output the results to std::cout. The shell manages the actual pipes.
Here is a program that does nothing. It simply passes the data from the incoming "pipe" (stdin) to the outgoing "pipe" (stdout):
Do nothing using "pipes":
#include <iostream>

int main()
{
    char c;
    while (std::cin.get(c))
        std::cout.put(c);
}
A program to remove every blank line:
#include <iostream>
#include <string>

int main()
{
    std::string line;
    while (std::getline(std::cin, line))
    {
        if (line.empty()) // skip empty lines
            continue;

        // otherwise send them out
        std::cout << line << '\n';
    }
}
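And since the question mentions removing comments as the second step, here is a minimal sketch of a program02 in the same style. It assumes "comments" means //-style line comments and that // never appears inside a string literal; handling /* */ blocks or string literals would need a real tokenizer.
#include <iostream>
#include <string>

int main()
{
    std::string line;
    while (std::getline(std::cin, line))
    {
        // strip everything from "//" to the end of the line
        std::string::size_type pos = line.find("//");
        if (pos != std::string::npos)
            line.erase(pos);
        std::cout << line << '\n';
    }
}
Chained together exactly as asked: cat inputfile.cpp | ./program01 | ./program02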
Related
I've written a simple C++ program for tutorial purposes.
My goal is to loop it infinitely.
#include <iostream>
#include <string>

int main()
{
    std::cout << "text";
    for (;;) {
        std::string string_object{};
        std::getline(std::cin, string_object);
        std::cout << string_object;
    }
    return 0;
}
After compilation I run it like this:
./bin 0>&1
What I expected was that the "text" written to stdout would also become stdin for the program, so it would loop forever. Why doesn't that happen?
First, you need to output newlines when printing to std::cout, otherwise std::getline() won't have any complete line to read.
Improved version:
#include <iostream>
#include <string>

int main()
{
    std::cout << "stars" << std::endl;
    for (;;) {
        std::string string_object;
        std::getline(std::cin, string_object);
        std::cout << string_object << std::endl;
    }
    return 0;
}
Now try this:
./bin >file <file
you don't see any output, because it's going to the file. But if you stop the program and look at the file, behold, it's full of
stars
stars
stars
stars
:-)
Also, the reason the feedback loop cannot start when you try
./bin 0>&1
is that you end up with both stdin and stdout connected to /dev/tty
(meaning that you can see the output).
But a TTY device cannot ever close the loop, because it actually consists of two separate channels, one passing the output to the terminal, one passing the terminal input to the process.
If you use a regular file for in- and output, the loop can be closed. Every byte written to the file will be read from it as well, if the stdin of the process is connected to it. That's as long as no other process reads from the file simultaneously, because each byte in a stream can be only read once.
Since you're using gcc, I'm going to assume you have pipe available.
#include <cstring>
#include <iostream>
#include <unistd.h>

int main() {
    char buffer[1024];
    std::strcpy(buffer, "test");

    int fd[2];
    ::pipe(fd);

    // Make stdout the write end of the pipe and stdin the read end.
    ::dup2(fd[1], STDOUT_FILENO);
    ::close(fd[1]);
    ::dup2(fd[0], STDIN_FILENO);
    ::close(fd[0]);

    // Prime the loop with 4 bytes, then keep echoing them around.
    ::write(STDOUT_FILENO, buffer, 4);
    while (true) {
        auto const read_bytes = ::read(STDIN_FILENO, buffer, 1024);
        ::write(STDOUT_FILENO, buffer, read_bytes);
#if 0
        std::cerr.write(buffer, read_bytes);
        std::cerr << "\n\tGot " << read_bytes << " bytes" << std::endl;
#endif
        sleep(2);
    }
    return 0;
}
The #if 0 section can be enabled to get debugging. I couldn't get it to work with std::cout and std::cin directly, but somebody who knows more about the low-level stream code could probably tweak this.
Debug output:
$ ./io_loop
test
Got 4 bytes
test
Got 4 bytes
test
Got 4 bytes
test
Got 4 bytes
^C
Because the stdout and stdin don't create a loop. They may point to the same tty, but a tty is actually two separate channels, one for input and one for output, and they don't loop back into one another.
You can try creating a loop by running your program with its stdin connected to the read end of a pipe, and with its stdout to its write end. That will work with cat:
mkfifo fifo
{ echo text; strace cat; } <>fifo >fifo
...
read(0, "text\n", 131072) = 5
write(1, "text\n", 5) = 5
read(0, "text\n", 131072) = 5
write(1, "text\n", 5) = 5
...
But not with your program. That's because your program is trying to read lines, but its writes are not terminated by a newline. Fixing that and also printing the read line to stderr (so we don't have to use strace to demonstrate that anything happens in your program), we get:
#include <iostream>
#include <string>

int main()
{
    std::cout << "text" << std::endl;
    for (;;) {
        std::string string_object{};
        std::getline(std::cin, string_object);
        std::cerr << string_object << std::endl;
        std::cout << string_object << std::endl;
    }
}
g++ foo.cc -o foo
mkfifo fifo; ./foo <>fifo >fifo
text
text
text
...
Note: the <>fifo way of opening a named pipe (fifo) was used in order to open both its read and its write end at once and so avoid blocking. Instead of reopening the fifo from its path, the stdout could simply be dup'ed from the stdin (prog <>fifo >&0), or the fifo could first be opened as a different file descriptor, and then stdin and stdout could be opened without blocking, the first in read-only mode and the second in write-only mode (prog 3<>fifo <fifo >fifo 3>&-).
They will all work the same with the example at hand. On Linux, :|prog >/dev/fd/0 (and echo text | strace cat >/dev/fd/0) would also work -- without having to create a named pipe with mkfifo.
I am trying to pass a command into my shell script via a C++ program, but I am not familiar with C++ at all and, while I know that I must use system(), I am not sure how to set it up effectively.
#include <iostream>
#include <stdlib.h>
int main() {
system("./script $1");
return 0;
}
This is what I currently have.
It seems that I can't use positional parameters in the system command, but I wasn't sure what else to do. I'm trying to pass in an argument to the script via the C++ program.
If you just want to call "./script" with the first argument to the C++ program passed as the first argument to the script, you could do it like this:
#include <iostream>
#include <string>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char ** argv)
{
    if (argc < 2)
    {
        printf("Usage: ./MyProgram the_argument\n");
        exit(10);
    }

    std::string commandLine = "./script ";
    commandLine += argv[1];

    std::cout << "Executing command: " << commandLine << std::endl;
    system(commandLine.c_str());
    return 0;
}
Properly executing a shell command from C++ actually takes quite a bit of setup, and understanding exactly how it works requires a lot of explanation about operating systems and how they handle processes. If you want to understand it better, I recommend reading the man pages on the fork() and exec() commands.
For the purposes of just executing a shell process from a C++ program, you will want to do something like so:
#include <unistd.h>
#include <iostream>

int main() {
    int pid = fork();
    if (pid == 0) {
        /*
         * A return value of 0 means this is the child process that we will use
         * to execute the shell command.
         */
        execl("/path/to/bash/binary", "bash", "args", "to", "pass", "in", (char*)NULL);
    }
    /*
     * If execution reaches this point, you're in the parent process
     * and can go about doing whatever else you wanted to do in your program.
     */
    std::cout << "QED" << std::endl;
}
To (very) quickly explain what's going on here: the fork() command essentially duplicates the entire C++ program being executed (called a process), but with a different value of pid returned from fork(). If pid == 0, then we are currently in the child process; otherwise, we're in the parent process. Since we're in the dispensable child process, we call execl(), which completely replaces the child process with the shell command you want to execute. The first argument after the path needs to be the filename of the binary, and after that you can pass in as many arguments as you want as null-terminated C strings, with a final null pointer (the (char*)NULL above) marking the end of the list.
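If the parent also needs to wait for the child and collect its exit status, a minimal sketch using waitpid() looks like the following. This is my own addition, with "/bin/ls" standing in for whatever program you actually want to run.
#include <sys/wait.h>
#include <unistd.h>
#include <iostream>

int main() {
    pid_t pid = fork();
    if (pid == 0) {
        execl("/bin/ls", "ls", "-l", (char*)NULL); // example command; replace as needed
        _exit(127);                                // exec only returns on failure
    }
    int status = 0;
    waitpid(pid, &status, 0);                      // block until the child exits
    if (WIFEXITED(status))
        std::cout << "child exited with " << WEXITSTATUS(status) << std::endl;
}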
I hope this helps, and please let me know if you need further clarification.
The problem, as I have understood it so far, is that you want to pass arguments to the C++ executable, which will then pass those arguments on to the shell script.
#include <iostream>
#include <string>
#include <stdio.h>
#include <stdlib.h>

using namespace std;

// argc is the count of the arguments
// argv is the array of C strings (the arguments themselves)
int main(int argc, char ** argv) {
    // Convention to check if arguments were provided.
    // The first entry is not positional (it is the name of the binary currently being executed).
    if (argc < 2) {
        printf("Please provide an argument\n");
        return 100; // Any number other than 0 (to represent abnormal behaviour)
    }

    string scriptCommand = "./name-of-script"; // script in your case

    // Loop through and append all the arguments.
    for (int i = 1; i < argc; i++) {
        scriptCommand += " ";
        scriptCommand += argv[i];
    }

    system(scriptCommand.c_str());
    return 0; // Represents a normal exit (execution).
}
I'm writing a small CLI application, and I want to allow the user to redirect its output to a file. While the standard cout statements go to the output.txt file, I want progress to always go to the screen:
./myApp > output.txt
10% complete
...
90% complete
Completed
Is this possible? How can I do it?
Thanks in advance!!
This will work even if both stdout and stderr have been redirected:
spectras@etherbee:~$ ./term
hello terminal!
spectras@etherbee:~$ ./term >/dev/null 2>&1
hello terminal!
The idea is to open the controlling terminal of the process directly, bypassing any redirection, like this:
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/tty", O_WRONLY);
    if (fd < 0 && errno != ENODEV) {
        /* something went wrong */
        return 1;
    }
    int hasTTY = (fd >= 0);
    if (hasTTY) {
        write(fd, "hello terminal!\n", 16);
    }
    return 0;
}
From man 4 tty:
The file /dev/tty is a character file with major number 5 and
minor number 0, usually of mode 0666 and owner.group root.tty. It is
a synonym for the controlling terminal of a process, if any.
If you're using C++, you might want to wrap the file descriptor into a custom streambuf, so you can use regular stream API on it. Alternately, some implementations of the C++ library offer extensions for that purpose. See here.
Or, if you don't care about getting the error code reliably, you could just std::ofstream terminal("/dev/tty").
Also as a design consideration if you do this, offering a quiet option to let the user turn off the writing to the terminal is a good idea.
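If you want to go the custom streambuf route mentioned above, here is a minimal sketch of my own (not the library extension linked to): an output-only streambuf that forwards everything to a POSIX file descriptor, so the fd from open("/dev/tty", O_WRONLY) can be driven through the usual stream interface.
#include <ostream>
#include <streambuf>
#include <unistd.h>

// Output-only streambuf that forwards every character to a POSIX file descriptor.
class fd_streambuf : public std::streambuf {
public:
    explicit fd_streambuf(int fd) : fd_(fd) {}
protected:
    int_type overflow(int_type ch) override {
        if (traits_type::eq_int_type(ch, traits_type::eof()))
            return traits_type::not_eof(ch);
        char c = traits_type::to_char_type(ch);
        return ::write(fd_, &c, 1) == 1 ? ch : traits_type::eof();
    }
    std::streamsize xsputn(const char* s, std::streamsize n) override {
        // A real implementation would loop on short writes; good enough for a sketch.
        ssize_t written = ::write(fd_, s, static_cast<size_t>(n));
        return written < 0 ? 0 : written;
    }
private:
    int fd_;
};

// Usage, given the fd from open("/dev/tty", O_WRONLY) above:
//   fd_streambuf buf(fd);
//   std::ostream tty(&buf);
//   tty << "hello terminal!" << std::endl;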
Your process cannot know if the shell redirects the standard console output (std::cout) or not.
So you'll need another handle that lets you output to the terminal independently of that redirection.
As @Mark mentioned in their comment, you could (ab-)use¹ std::cerr to do that, along with some ASCII trickery to overwrite the current output line at the terminal (look at backspace characters: '\b').
¹) Not to mention the mess printed at the terminal if the output isn't actually redirected.
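For illustration, a small sketch of that idea (my own example; a plain '\r' carriage return is the simplest way to rewrite the line, and '\b' works character by character in the same spirit):
#include <iostream>
#include <unistd.h>

int main() {
    for (int pct = 0; pct <= 100; pct += 10) {
        std::cerr << "\r" << pct << "% complete" << std::flush; // rewrite the same terminal line
        sleep(1);                                               // stand-in for real work
    }
    std::cerr << "\nCompleted\n";
    std::cout << "the actual results go to stdout / output.txt\n";
}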
You can write your progress indicators to the stderr stream. They will appear on the console if the user redirects stdout to a file.
For example:
fprintf(stderr, "10%% complete\n");
I figured out how to do it, even if the user redirects stderr. The following code gets the name of the current terminal and checks whether our output is being redirected. It also has a my_write() function that lets you write to both the terminal and the redirect file if stdout has been redirected. You can use my_write() with the writetoterm variable wherever you want to write something that should always reach the terminal. The extern "C" has to be there; otherwise (on Debian 9 with GCC 6.3, anyway) the ttyname() function just returns NULL all the time.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <iostream>
#include <string>
#include <sys/types.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <string.h>
#include <error.h>
#include <errno.h>
#include <sstream>

using std::string;
using std::cout;
using std::endl;
using std::cerr;
using std::stringstream;

void my_write(bool writetoterm, int termfd, string data)
{
    if (writetoterm)
    {
        ssize_t result = write(termfd, data.c_str(), data.length());
        if (result < 0 || (size_t)result < data.length()) {
            cerr << "Error writing data to tty" << endl;
        }
    }
    cout << data;
}

extern "C" {
    char* GetTTY(int fd){
        //printf("%s", ttyname(fd));
        return ttyname(fd);
    }
}

int main(int argc, char** argv){
    bool writetoterm = false;
    struct stat sb = {};
    if (!GetTTY(STDOUT_FILENO)){
        // stdout is not a TTY, so it has been redirected
        writetoterm = true;
    }
    int ttyfd = open(GetTTY(2), O_WRONLY);
    if (ttyfd < 0){
        // error in opening
        cout << strerror(errno) << endl;
    }
    string data = "Hello, world!\n";
    my_write(true, ttyfd, data);
    int num_for_cout = 42;
    stringstream ss;
    ss << "If you need to use cout to send something that's not a string" << endl;
    ss << "Do this: " << num_for_cout << endl;
    my_write(writetoterm, ttyfd, ss.str());
    return 0;
}
I found the official std:: way of handling this. There is another stream, std::clog, which is meant specifically for this kind of informational output. It writes to stderr, so it still appears on the terminal even when the user redirects the program's output with myProgram > out.txt (though it will be captured too if stderr is also redirected).
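For example, a minimal sketch: progress goes through std::clog, data goes through std::cout, and ./myProgram > out.txt captures only the latter.
#include <iostream>

int main() {
    for (int pct = 10; pct <= 100; pct += 10)
        std::clog << pct << "% complete\n"; // stderr: still visible when stdout is redirected
    std::cout << "Completed\n";             // stdout: ends up in out.txt
}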
Thanks, it was great to see all the methods by which this can be done.
Say I have a .exe, let's say sum.exe. Now say the code for sum.exe is
void main ()
{
    int a,b;
    scanf ("%d%d", &a, &b);
    printf ("%d", a+b);
}
I wanted to know how I could run this program from another C/C++ program and pass input to it via stdin, the way online compiler sites like ideone do: I type in the code, provide the stdin data in a textbox, and that data is accepted by the program using scanf or cin. I also wanted to know whether there is any way to read the output of this program from the original program that started it.
The easiest way I know for doing this is by using the popen() function. It works in Windows and UNIX. On the other hand, popen() only allows unidirectional communication.
For example, to pass information to sum.exe (although you won't be able to read back the result), you can do this:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    FILE *f;
    f = popen ("sum.exe", "w");
    if (!f)
    {
        perror ("popen");
        exit(1);
    }
    printf ("Sending 3 and 4 to sum.exe...\n");
    fprintf (f, "%d\n%d\n", 3, 4);
    pclose (f);
    return 0;
}
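Going the other way, reading the child's output instead of writing its input, is the same call with "r". A small sketch of my own, using echo as a stand-in for the child program (remember that with popen you get one direction or the other, not both at once):
#include <stdio.h>
#include <stdlib.h>

int main()
{
    // "r" captures the child's stdout instead of feeding its stdin.
    FILE *f = popen("echo 7", "r");
    if (!f)
    {
        perror("popen");
        exit(1);
    }
    char line[128];
    while (fgets(line, sizeof line, f))
        printf("child said: %s", line);
    pclose(f);
    return 0;
}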
In C on platforms whose names end with X (i.e. not Windows), the key components are:
pipe - Returns a pair of file descriptors, so that what's written to one can be read from the other.
fork - Forks the process to two, both keep running the same code.
dup2 - Renumbers file descriptors. With this, you can take one end of a pipe and turn it into stdin or stdout.
exec - Stop running the current program, start running another, in the same process.
Combine them all, and you can get what you asked for.
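A minimal sketch of that combination (my own example, error checks omitted), feeding "3 4" to sum.exe through its stdin; capturing its output works the same way with a second pipe:
#include <unistd.h>
#include <sys/wait.h>

int main() {
    int fd[2];
    pipe(fd);                       // fd[0]: read end, fd[1]: write end
    if (fork() == 0) {              // child
        dup2(fd[0], STDIN_FILENO);  // the child's stdin now reads from the pipe
        close(fd[0]);
        close(fd[1]);
        execlp("./sum.exe", "sum.exe", (char*)NULL);
        _exit(127);                 // only reached if exec failed
    }
    // parent
    close(fd[0]);                   // keep only the write end
    write(fd[1], "3 4\n", 4);       // this becomes the child's scanf input
    close(fd[1]);                   // EOF for the child
    wait(0);                        // reap the child
    return 0;
}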
This is my solution and it worked:
sum.cpp
#include "stdio.h"
int main (){
int a,b;
scanf ("%d%d", &a, &b);
printf ("%d", a+b);
return 0;
}
test.cpp
#include <stdio.h>
#include <stdlib.h>

int main(){
    system("./sum.exe < data.txt");
    return 0;
}
data.txt
3 4
Try this solution :)
How to do so is platform dependent.
Under Windows, use CreatePipe and CreateProcess. You can find an example on MSDN:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms682499(v=vs.85).aspx
Under Linux/Unix, you can use dup() / dup2()
One simple way to do so is to use a terminal (like Command Prompt on Windows) and use | to redirect input/output.
Example:
program1 | program2
This will redirect program1's output to program2's input.
To retrieve/input data, you can use temporary files. If you don't want to use temporary files, you will have to use a pipe.
For Windows (use Command Prompt):
program1 <input >output
For Linux, you can use the tee utility; you can find detailed instructions by typing man tee in a Linux terminal.
It sounds like you're coming from a Windows environment, so this might not be the answer you are looking for, but from the command line you can use the pipe redirection operator '|' to redirect the stdout of one program to the stdin of another. http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/redirection.mspx?mfr=true
You're probably better off working in a bash shell, which you can get on Windows with cygwin http://cygwin.com/
Also, your example looks like a mix of C++ and C, and the declaration of main isn't exactly an accepted standard for either.
Here is how to do it (you have to check for errors, i.e. pipe() == -1, dup() == -1, etc.; I'm not doing that in the following snippet).
This code runs your program "sum", writes "2 3" to it, and then reads sum's output. Next, it writes that output to stdout.
#include <iostream>
#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int parent_to_child[2], child_to_parent[2];
    pipe(parent_to_child);
    pipe(child_to_parent);

    char name[] = "sum";
    char *args[] = {name, NULL};

    switch (fork()) {
    case 0:
        // replace stdin with reading from parent
        close(fileno(stdin));
        dup(parent_to_child[0]);
        close(parent_to_child[0]);
        // replace stdout with writing to parent
        close(fileno(stdout));
        dup(child_to_parent[1]);
        close(child_to_parent[1]);

        close(parent_to_child[1]); // don't write on this pipe
        close(child_to_parent[0]); // don't read from this pipe

        execvp("./sum", args);
        break;
    default:
        char msg[] = "2 3\n";
        close(parent_to_child[0]); // don't read from this pipe
        close(child_to_parent[1]); // don't write on this pipe

        write(parent_to_child[1], msg, sizeof(msg));
        close(parent_to_child[1]);

        char res[64];
        wait(0);
        ssize_t n = read(child_to_parent[0], res, 63);
        res[n > 0 ? n : 0] = '\0'; // null-terminate before printing
        printf("%s", res);
        exit(0);
    }
}
I'm doing what #ugoren suggested in their answer:
Create two pipes for communication between processes
Fork
Replace stdin, and stdout with pipes' ends using dup
Send the data through the pipe
Based on a few answers posted above and various tutorials/manuals, I just did this in Linux using pipe() and shell redirection. The strategy is to first create a pipe, call another program and redirect the output of the callee from stdout to one end of the pipe, and then read the other end of the pipe. As long as the callee writes to stdout there's no need to modify it.
In my application, I needed to read a math expression input from the user, call a standalone calculator and retrieve its answer. Here's my simplified solution to demonstrate the redirection:
#include <string>
#include <unistd.h>
#include <cstdlib>
#include <sstream>
#include <iostream>

// this function is used to wait on the pipe input and clear the input buffer after each read
std::string pipeRead(int fd) {
    char data[100];
    ssize_t size = 0;
    while (size == 0) {
        size = read(fd, data, 100);
    }
    if (size < 0) {
        return "";
    }
    return std::string(data, size); // construct from the bytes actually read
}

int main() {
    // create pipe
    int calculatorPipe[2];
    if (pipe(calculatorPipe) < 0) {
        exit(1);
    }

    std::string answer = "";
    std::stringstream call;

    // redirect calculator's output from stdout to one end of the pipe and execute
    // e.g. ./myCalculator 1+1 >&8
    call << "./myCalculator 1+1 >&" << calculatorPipe[1];
    system(call.str().c_str());

    // now read the other end of the pipe
    answer = pipeRead(calculatorPipe[0]);
    std::cout << "pipe data " << answer << "\n";
    return 0;
}
Obviously there are other solutions out there but this is what I can think of without modifying the callee program. Things might be different in Windows though.
Some useful links:
https://www.geeksforgeeks.org/pipe-system-call/
https://www.gnu.org/software/bash/manual/html_node/Redirections.html
Is it possible to send data to another C++ program, without being able to modify the other program (since a few people seem to be missing this important restriction)? If so, how would you do it? My current method involves creating a temporary file and starting the other program with the filename as a parameter. The only problem is that this leaves a bunch of temporary files laying around to clean up later, which is not wanted.
Edit: Also, boost is not an option.
Clearly, building a pipe to stdin is the way to go, if the 2nd program supports it. As Fred mentioned in a comment, many programs read stdin if either there is no named file provided, or if - is used as the filename.
If it must take a filename, and you are using Linux, then try this: create a pipe, and pass /dev/fd/<fd-number> or /proc/self/fd/<fd-number> on the command line.
By way of example, here is hello-world 2.0:
#include <string>
#include <sstream>
#include <cstdlib>
#include <cstdio>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main () {
    int pfd[2];

    if (pipe(pfd) < 0) {
        perror("pipe");
        return 1;
    }

    switch (fork()) {
    case -1: // Error
        perror("fork");
        return 1;

    case 0: { // Child
        // Close the writing end of the pipe
        close(pfd[1]);
        // Create a filename that refers to the reading end of the pipe
        std::ostringstream path;
        path << "/proc/self/fd/" << pfd[0];
        // Invoke the subject program. "cat" will do nicely.
        execlp("/bin/cat", "cat", path.str().c_str(), (char*)0);
        // If we got here, something went wrong: execlp failed
        perror("exec");
        return 1;
    }

    default: // Parent
        // Close the reading end.
        close(pfd[0]);
        // Write to the pipe. Since "cat" is on the other end, expect to
        // see "Hello, world" on your screen.
        if (write(pfd[1], "Hello, world\n", 13) != 13)
            perror("write");
        // Signal "cat" that we are done writing
        close(pfd[1]);
        // Wait for "cat" to finish its business
        if (wait(0) < 0)
            perror("wait");
        // Everything's okay
        return 0;
    }
}
You could use sockets. It sounds like both applications are on the same host, so you just identify the peers as localhost:portA and localhost:portB. And if you do it this way you can eventually graduate to doing network IO. No temp files, no mystery parse errors or file deletions. TCP guarantees delivery and guarantees the data arrives in order.
So yeah, I would consider creating a synchronous socket server (use asynchronous if you anticipate having tons of peers). One benefit over pipe-oriented IPC is that TCP sockets are completely universal. Piping varies dramatically based on what system you are on (consider Windows named pipes vs. implicit and explicit POSIX pipes -> very different).
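A minimal sketch of the receiving side (my own example; the port 5555 and the plain blocking calls are arbitrary choices, and error handling is omitted). The sending program just connect()s to 127.0.0.1:5555 and write()s its data:
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <iostream>

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5555);
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
    listen(listener, 1);

    int conn = accept(listener, nullptr, nullptr);   // blocks until the peer connects
    char buf[1024];
    ssize_t n;
    while ((n = read(conn, buf, sizeof buf)) > 0)
        std::cout.write(buf, n);                     // echo received bytes to stdout

    close(conn);
    close(listener);
    return 0;
}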