I've written a simple C++ program for tutorial purposes.
My goal is to make it loop forever by feeding its output back into its input.
#include <iostream>
#include <string>

int main()
{
    std::cout << "text";
    for (;;) {
        std::string string_object{};
        std::getline(std::cin, string_object);
        std::cout << string_object;
    }
    return 0;
}
After compilation I run it like this:
./bin 0>&1
What I expected was that the "text" written to stdout would also become the program's stdin, so it would loop forever. Why doesn't that happen?
First, you need to output newlines when printing to std::cout, otherwise std::getline() won't have any complete line to read.
Improved version:
#include <iostream>
#include <string>

int main()
{
    std::cout << "stars" << std::endl;
    for (;;) {
        std::string string_object;
        std::getline(std::cin, string_object);
        std::cout << string_object << std::endl;
    }
    return 0;
}
Now try this:
./bin >file <file
You don't see any output because it's going to the file. But if you stop the program and look at the file, behold, it's full of
stars
stars
stars
stars
:-)
Also, the reason that the feedback loop cannot start when you try
./bin 0>&1
is that you end up with both stdin and stdout connected to /dev/tty
(meaning that you can see the output).
But a TTY device cannot ever close the loop, because it actually consists of two separate channels, one passing the output to the terminal, one passing the terminal input to the process.
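You can verify this with a quick sketch (the program name check and the wording are mine, purely for illustration): run it as ./check 0>&1 and isatty() reports that both descriptors refer to the terminal, yet nothing written to stdout ever arrives on stdin.

#include <unistd.h>
#include <iostream>

int main()
{
    // Run as: ./check 0>&1
    // Both lines print 1: stdin and stdout refer to the same tty,
    // but bytes written to fd 1 never come back on fd 0.
    std::cout << "stdin  is a tty: " << ::isatty(STDIN_FILENO) << "\n"
              << "stdout is a tty: " << ::isatty(STDOUT_FILENO) << "\n";
    return 0;
}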
If you use a regular file for input and output, the loop can be closed. Every byte written to the file will be read from it as well, provided the stdin of the process is connected to it, and as long as no other process reads from the file simultaneously, because each byte in a stream can only be read once.
Since you're using gcc, I'm going to assume you have pipe available.
#include <cstring>
#include <iostream>
#include <unistd.h>

int main() {
    char buffer[1024];
    std::strcpy(buffer, "test");

    int fd[2];
    ::pipe(fd);                        // create a pipe: fd[0] is the read end, fd[1] the write end
    ::dup2(fd[1], STDOUT_FILENO);      // stdout now writes into the pipe
    ::close(fd[1]);
    ::dup2(fd[0], STDIN_FILENO);       // stdin now reads from the pipe
    ::close(fd[0]);

    ::write(STDOUT_FILENO, buffer, 4); // prime the loop with "test"
    while (true) {
        auto const read_bytes = ::read(STDIN_FILENO, buffer, 1024);
        ::write(STDOUT_FILENO, buffer, read_bytes);
#if 0
        std::cerr.write(buffer, read_bytes);
        std::cerr << "\n\tGot " << read_bytes << " bytes" << std::endl;
#endif
        sleep(2);
    }
    return 0;
}
The #if 0 section can be enabled to get debugging. I couldn't get it to work with std::cout and std::cin directly, but somebody who knows more about the low-level stream code could probably tweak this.
Debug output:
$ ./io_loop
test
Got 4 bytes
test
Got 4 bytes
test
Got 4 bytes
test
Got 4 bytes
^C
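For what it's worth, GCC ships a non-standard hook that makes the stream version possible: __gnu_cxx::stdio_filebuf can wrap a raw file descriptor in a stream buffer. A minimal sketch, assuming GCC and the same pipe setup as above; the explicit std::endl flush on every write is what keeps the loop turning (the loop is bounded here so the demo terminates):

#include <ext/stdio_filebuf.h>  // GCC-specific extension
#include <iostream>
#include <string>
#include <unistd.h>

int main() {
    int fd[2];
    ::pipe(fd);

    // Wrap the pipe ends in C++ stream buffers.
    __gnu_cxx::stdio_filebuf<char> inbuf(fd[0], std::ios::in);
    __gnu_cxx::stdio_filebuf<char> outbuf(fd[1], std::ios::out);
    std::istream in(&inbuf);
    std::ostream out(&outbuf);

    out << "test" << std::endl;    // std::endl flushes into the pipe
    for (int i = 0; i < 4; ++i) {
        std::string line;
        std::getline(in, line);
        std::cerr << "Got: " << line << std::endl;
        out << line << std::endl;
    }
    return 0;
}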
Because stdout and stdin don't form a loop. They may both point at the same tty, but a tty actually consists of two separate channels, one for input and one for output, and they don't loop back into one another.
You can try creating a loop by running your program with its stdin connected to the read end of a pipe and its stdout connected to the write end. That will work with cat:
mkfifo fifo
{ echo text; strace cat; } <>fifo >fifo
...
read(0, "text\n", 131072) = 5
write(1, "text\n", 5) = 5
read(0, "text\n", 131072) = 5
write(1, "text\n", 5) = 5
...
But not with your program. That's because your program is trying to read lines, but its writes are not terminated by a newline. Fixing that and also printing the read line to stderr (so we don't have to use strace to demonstrate that anything happens in your program), we get:
#include <iostream>
#include <string>

int main()
{
    std::cout << "text" << std::endl;
    for (;;) {
        std::string string_object{};
        std::getline(std::cin, string_object);
        std::cerr << string_object << std::endl;
        std::cout << string_object << std::endl;
    }
}
g++ foo.cc -o foo
mkfifo fifo; ./foo <>fifo >fifo
text
text
text
...
Note: the <>fifo way of opening a named pipe (fifo) was used in order to open both its read and its write end at once, thereby avoiding blocking. Instead of reopening the fifo from its path, the stdout could simply be dup'ed from the stdin (prog <>fifo >&0), or the fifo could first be opened as a different file descriptor, after which stdin and stdout can be opened without blocking, the first in read-only mode and the second in write-only mode (prog 3<>fifo <fifo >fifo 3>&-).
They will all work the same with the example at hand. On Linux, :|prog >/dev/fd/0 (and echo text | strace cat >/dev/fd/0) would also work -- without having to create a named pipe with mkfifo.
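The same trick works from inside a program. Here is a sketch of what the shell's <>fifo redirection does, assuming Linux (where opening a fifo O_RDWR is permitted and doesn't block) and that the fifo and the ./foo binary from above already exist:

#include <fcntl.h>
#include <unistd.h>

int main()
{
    int fd = ::open("fifo", O_RDWR);  // like <>fifo: doesn't block on Linux
    if (fd < 0)
        return 1;
    ::dup2(fd, STDIN_FILENO);         // stdin reads from the fifo
    ::dup2(fd, STDOUT_FILENO);        // stdout writes to the fifo
    ::close(fd);
    ::execl("./foo", "./foo", static_cast<char *>(nullptr));
    return 1;                         // only reached if exec fails
}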
The task is to read input from input.txt and write the output to output.txt.
However, on completion of the above tasks, further instructions/output should be displayed on the console.
I came across freopen() in C++, which works fine for the first half of the given task. But unfortunately, I have no idea how to redirect the output back to the console again.
void writeIntoFile() {
    freopen("input.txt", "r", stdin);   // Task 1. Reading from input.txt file
    freopen("output.txt", "w", stdout); // Task 2. Writing to output.txt file
    printf("This sentence is redirected to a file.");
    fclose(stdout);
    printf("This sentence is redirected to console"); // Task 3. Write further output to console
}
What I expected was that fclose() would finish writing to the text file, and that further output would then go to the console, but it doesn't. How can I achieve task 3 as well?
Probably what you are looking for is rdbuf() as mentioned by doomista in the comments.
Here is a way to redirect Output.
#include <iostream>
#include <fstream>

int main()
{
    /** backup cout buffer and redirect to out.txt **/
    std::ofstream out("out.txt");
    auto *coutbuf = std::cout.rdbuf();
    std::cout.rdbuf(out.rdbuf());
    std::cout << "This will be redirected to file out.txt" << std::endl;

    /** reset cout buffer **/
    std::cout.rdbuf(coutbuf);
    std::cout << "This will be printed on console" << std::endl;
    return 0;
}
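If you'd rather stay with the stdio functions from the question, freopen() can also be pointed back at the terminal. A sketch assuming a POSIX system, where /dev/tty names the process's controlling terminal (on Windows the equivalent name would be CON):

#include <cstdio>

int main()
{
    std::freopen("output.txt", "w", stdout);
    std::printf("This sentence is redirected to a file.");
    std::fflush(stdout);                    // push pending output into the file
    std::freopen("/dev/tty", "w", stdout);  // reattach stdout to the terminal
    std::printf("This sentence is redirected to console\n");
    return 0;
}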
#include <iostream>
#include <stdio.h>

int main() {
    std::ios::sync_with_stdio(false);
    std::cout << "hi from c++\n";
    printf("hi from c\n");
    return 0;
}
After replacing std::endl with \n in the cout statement, the output changed to the following:
hi from c
hi from c++
It's a buffering issue.
By default, when standard output is connected to a terminal, stdout is line-buffered, meaning the buffer is flushed and output actually written to the terminal on each newline.
When C stdio is disconnected from the C++ standard streams (which is what sync_with_stdio(false) does), std::cout is fully buffered, meaning output is actually written only when explicitly flushed (using e.g. the std::flush or std::endl manipulators) or when the buffer is full.
The two buffers used by C stdout and C++ std::cout are different and not connected.
Flushing of the buffers also happens when the program exits.
What happens in your program is that the output with printf is flushed immediately because of the trailing newline in the string. But the output to std::cout is only flushed when the program exits.
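So to keep the two lines in source order while sync_with_stdio(false) is in effect, std::cout has to be flushed explicitly before the printf call. A minimal sketch:

#include <cstdio>
#include <iostream>

int main() {
    std::ios::sync_with_stdio(false);
    std::cout << "hi from c++\n" << std::flush;  // flush before switching to stdio
    std::printf("hi from c\n");
    return 0;
}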
I am learning about iostream objects and flushing the buffer. I know when output buffers are guaranteed to be flushed and how to explicitly flush the buffer. However, I have never seen a case where output buffer is not flushed. It seems to me that output buffer gets flushed at the end of each statement even if I don't use manipulators such as endl, flush and ends.
So, are there any simple examples where the output buffer will not (or at least might often not) get flushed? I feel like I need to see such a case to really understand output buffers.
Depends on the system.
Take the following program as an example:
#include <iostream>
#ifdef _WIN32
#include <windows.h>
#define sleep(n) Sleep((n) * 1000)
#else
#include <unistd.h>
#endif

using namespace std;

int main()
{
    cout << "This is line 1";
    sleep(4);
    cout << endl;
    cout << "This is line 2" << endl;
    return 0;
}
By inspecting the program, you might surmise that the program would print This is line 1, followed by pausing for 4 seconds, then printing This is line 2.
And if you compile with Visual Studio to run on Windows, you'll get that exact behavior.
On Linux and other Unix operating systems, however, the program will appear to be silent for 4 seconds before printing both lines together: the output isn't reliably flushed until a newline character is written to the stream.
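If you want the first line to appear before the pause on every platform, flush it explicitly. A minimal variation of the program above:

#include <iostream>
#ifdef _WIN32
#include <windows.h>
#define sleep(n) Sleep((n) * 1000)
#else
#include <unistd.h>
#endif

int main()
{
    std::cout << "This is line 1" << std::flush;  // written out immediately, everywhere
    sleep(4);
    std::cout << "\nThis is line 2" << std::endl;
    return 0;
}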
I can write data to a file using
std::ofstream fileStream;
fileStream << "Some Data";
or simply do a
std::cout << "Some Data";
and do ./myBinary > outFile
Which one is faster?
Redirecting std::cout to a file should not be significantly slower than writing to an std::ofstream; in fact, the performance difference (if there even is any!) will be negligible.
Redirection causes the standard output / input / error handles to be replaced with a handle directly to the given file. As long as there are no needless flushes of the output stream, performance should be nearly identical, if not exactly identical. (I would hope that std::cout is able to detect whether or not output is going to a terminal, and disable automatic flushing on std::endl if it is not.)
To prove this, let's take this small C program:
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main()
{
    struct stat s;
    fstat(fileno(stdout), &s);
    fprintf(stderr, "Output is file? %d\n", S_ISREG(s.st_mode));
    return 0;
}
And run it in a few situations:
chris@onslow:~$ ./testbin
Output is file? 0
chris@onslow:~$ ./testbin >/dev/null
Output is file? 0
chris@onslow:~$ ./testbin >foo
Output is file? 1
With a similar program that calls fstat() on standard input, we can see that the program can determine the size of the input file, indicating that it has a handle directly to the file and not some intermediate pipe:
chris@onslow:~$ ./testbin
Input file size is 0
chris@onslow:~$ ./testbin < /dev/null
Input file size is 0
chris@onslow:~$ echo hello > foo
chris@onslow:~$ ./testbin < foo
Input file size is 6
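The stdin variant itself isn't shown above; a plausible reconstruction (the exact message format is a guess) would be:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main()
{
    struct stat s;
    /* Report the size of whatever stdin is connected to: 0 for a tty
       or /dev/null, the file's actual size for a regular file. */
    if (fstat(fileno(stdin), &s) == 0)
        fprintf(stderr, "Input file size is %lld\n", (long long)s.st_size);
    return 0;
}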