Printing to the console vs writing to a file (speed) - c++

In C++, which would be faster if repeated, say, 5000 times:
cout << "text!" << endl;
or
my_text_file << "text!" << endl;
(writing to a file vs. cout-ing to the console)
Edit:
I ask because when writing to the console, you see all the text being printed, which seems like it would slow down the loop. When writing to a file, you aren't seeing the text being printed, which seems as if it would take less time.
Just tested it:
Console: > 2000 ms using endl and \n
File: 40 ms with endl and 4 ms with \n

Writing to a file would be much faster. This is especially true since you are flushing the buffer after every line with endl.
On a side note, you could speed up the printing significantly by repeating cout << "text!\n"; 5000 times and then flushing the buffer once with flush().
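For illustration, a minimal sketch of that suggestion (the loop count is just the one from the question):

#include <iostream>

int main()
{
    // Let the stream buffer the lines instead of flushing after each one...
    for (int i = 0; i < 5000; ++i)
        std::cout << "text!\n";

    // ...then flush the buffer once at the end.
    std::cout.flush();
}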

It's not that much faster...
A test of 1 million couts with endl (flushing the buffer each time):
Results:
console cout time: 2.87001
file cout time: 2.33776
Code:
#include <iostream>
#include <fstream>
#include <ctime>

using namespace std;

class Timer
{
    struct timespec startTime, endTime;
    double sec;
public:
    void start();
    void stop();
    double getSec();
};

void Timer::start()
{
    clock_gettime(CLOCK_MONOTONIC, &startTime);
}

void Timer::stop()
{
    clock_gettime(CLOCK_MONOTONIC, &endTime);
    sec = (endTime.tv_sec - startTime.tv_sec);
    sec += (endTime.tv_nsec - startTime.tv_nsec) / 1000000000.0;
}

double Timer::getSec()
{
    return sec;
}

int main()
{
    int ntests = 1000000;
    Timer t1, t2;

    // Time 1 million writes to the console.
    t1.start();
    for (int c = 0; c < ntests; c++)
    {
        cout << "0" << endl;
    }
    t1.stop();

    // Redirect cout to a file and repeat the same test.
    ofstream out("out.txt");
    streambuf *coutbuf = cout.rdbuf();
    cout.rdbuf(out.rdbuf());

    t2.start();
    for (int c = 0; c < ntests; c++)
    {
        cout << "0" << endl;
    }
    t2.stop();

    // Restore cout and print the results.
    cout.rdbuf(coutbuf);
    cout << "console cout time: " << t1.getSec() << endl;
    cout << "file cout time: " << t2.getSec() << endl;
}
Build and run:
g++ test.cpp -o test -lrt && ./test && rm out.txt

In addition to console I/O generally being relatively slow, the default configuration of the standard streams cout and cin has some issues that will greatly slow performance if not corrected.
The reason is that the standard mandates that, by default, cout and cin from the C++ iostream library should work alongside stdout and stdin from the C stdio library in the expected way.
This basically means that cout and cin can't do any buffering at all in their internal streambufs and must forward all I/O operations over to the C library.
If you want to do anything resembling high performance I/O with the standard streams, you need to turn off this synchronization with
std::ios_base::sync_with_stdio(false);
before doing any I/O.
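For example, a minimal sketch of how this is typically placed at the top of main (note that after disabling synchronization you should not mix printf/scanf and cout/cin on the same stream):

#include <iostream>

int main()
{
    // Let cout/cin buffer independently of C stdio.
    std::ios_base::sync_with_stdio(false);

    for (int i = 0; i < 5000; ++i)
        std::cout << "text!\n";   // '\n' instead of endl avoids a flush per line
}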

Writing the same amount of data, with the same buffer size, to the console will most definitely be slower than writing to a file.
You can speed up your writes (both console output and file output) by not flushing the buffer with every line (i.e. don't use std::endl after every line, as it both appends a newline to the stream and flushes the buffer). Use "\n" instead, unless you need to ensure the buffer is flushed for some reason.

When I use cout.tie(NULL), my program doesn't print anything, but if I print endl, it works fine

#include <bits/stdc++.h>
using namespace std;

void scan_a_line_indefinitely()
{
    // scan lines indefinitely
    string input_line;
    while (getline(cin, input_line))
    {
        cout << input_line;           // doesn't print if I use this line
        //cout << input_line << endl; // if I use this line, it works fine
    }
}

int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cout.tie(NULL);
    scan_a_line_indefinitely();
    return 0;
}
Could someone please help me understand the problem with this code?
I think the problem is with cin.tie() and cout.tie(); when I remove these, the program works fine.
std::cout will flush under these conditions:
- An input stream which is tied to std::cout tries to read input. (You removed the ties.)
- The iostreams are synchronized with stdio, and thus effectively unbuffered. (You disabled the synchronization.)
- The buffer is full. (That takes a bit longer.)
- The program ends normally. (That comes too late for you.)
- There is a manual flush: stream.flush(), which is also called when streaming std::flush; stream << std::endl is equivalent to stream << stream.widen('\n') << std::flush. (You have none of those.)
So, fix any of them and you will see your output earlier.
If only iostreams are used, you can add a manual flush to the output:
std::cout.flush();
Or
std::cout << /* the output */ << std::flush;
Also:
std::cout << std::endl is equivalent to std::cout << '\n' << std::flush
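For instance, keeping the untied, unsynchronized setup from the question, a minimal sketch of the manual-flush fix would be:

#include <iostream>
#include <string>

void scan_a_line_indefinitely()
{
    std::string input_line;
    while (std::getline(std::cin, input_line))
    {
        // '\n' adds the newline; std::flush forces the line out immediately.
        std::cout << input_line << '\n' << std::flush;
    }
}

(Alternatively, simply leaving std::cin tied to std::cout would flush the output before each read.)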

How to use QProcess write correctly?

I need a program to communicate with a subprocess that relies on input and output. The problem is that I am apparently not able to use QProcess correctly.
The code further down should create a QProcess, start it, and enter the main while loop. In there, it prints all the output created by the subprocess to the console and then asks the user for input, which is passed to the subprocess via write(...).
Originally I had two problems emerging from this scenario:
The printf's of the subprocess could not be read by the parent process.
scanf in the subprocess is not receiving the strings sent via write.
As for (1), I came to realize that this is a problem caused by the buffering of the subprocess' stdout. This problem can be solved easily with fflush(stdout) calls or manipulations regarding its flushing behavior.
The second problem is the one I can't wrap my head around. write gets called and even returns the correct number of sent bytes. The subprocess, however, does not continue its execution, because no new data is written to its output. The scanf does not seem to receive the data sent. The output given by the program is:
Subprocess should have started.
124 bytes available!
Attempting to read:
Read: This is a simple demo application.
Read: It solely reads stdin and echoes its contents.
Read: Input exit to terminate.
Read: ---------
Awaiting user input: test
Written 5 bytes
No line to be read...
Awaiting user input:
I am seriously stuck right here. Google + heavy thinking having failed on me, I want to pass this on to you as my last beacon of hope. In case I am just failing to see the forest for all the trees, my apologies.
In case this information is necessary: I am working on 64bit MacOS X using Qt5 and the clang compiler. The subprocess-code is compiled with gcc on the same machine.
Thank you very much in advance,
NR
Main-Code:
#include <QProcess>
#include <QString>
#include <QByteArray>
#include <iostream>
#include <string>

int main() {
    // Command to execute the subprocess
    QString program = "./demo";

    QProcess sub;
    sub.start(program, QProcess::Unbuffered | QProcess::ReadWrite);

    // Check whether the subprocess started correctly.
    if (!sub.waitForStarted()) {
        std::cout << "Subprocess could not be started!" << std::endl;
        sub.close();
        return 99;
    }
    std::cout << "Subprocess should have started." << std::endl;

    // Check if the subprocess has written its starting message to the output.
    if (!sub.waitForReadyRead()) {
        std::cout << "No data available for reading. An error must have occurred." << std::endl;
        sub.close();
        return 99;
    }

    while (1) {
        // Try to read the subprocess' output
        if (!sub.canReadLine()) {
            std::cout << "No line to be read..." << std::endl;
        } else {
            std::cout << sub.bytesAvailable() << " bytes available!" << std::endl;
            std::cout << "Attempting to read..." << std::endl;
            while (sub.canReadLine()) {
                QByteArray output = sub.readLine();
                std::cout << "Read: " << output.data();
            }
        }

        std::cout << "Awaiting user input: ";
        std::string input;
        getline(std::cin, input);
        if (input.compare("exit") == 0) break;

        qint64 a = sub.write(input.c_str());
        qint64 b = sub.write("\n");
        sub.waitForBytesWritten();
        std::cout << "Written " << a + b << " bytes" << std::endl;
    }

    std::cout << "Terminating..." << std::endl;
    sub.close();
}
Subprocess-Code:
#include <stdio.h>
#include <string.h>

int main() {
    printf("This is a simple demo application.\n");
    printf("It reads stdin and echoes its contents.\n");
    printf("Input \"exit\" to terminate.\n");
    while (1) {
        char str[256];
        printf("Input: ");
        fflush(stdout);
        scanf("%s", str);
        if (strcmp(str, "exit") == 0) return 0;
        printf("> %s\n", str);
    }
}
P.S.: Since this is my first question on SO, please tell me if something is wrong concerning the asking style.
Solution
After many many more trials & errors, I managed to come up with a solution to the problem. Adding a call to waitForReadyRead() causes the main process to wait until new output is written by the subprocess. The working code is:
...
sub.waitForBytesWritten();
std::cout << "Written " << a + b << " bytes" << std::endl;
// Wait for new output
sub.waitForReadyRead();
...
I still don't have a clue why it works this way. I guess it somehow relates to the blocking of the main process by getline() vs blocking by waitForReadyRead(). To me it appears as if getline() blocks everything, including the subprocess, causing the scanf call never to be processed due to race conditions.
It would be great, if someone who understands could drop an explanation.
Thank you for your help :)
NR
This will not work. You are waiting for the sent bytes to be written but you are not waiting for the echo. Instead you are entering the getline() function waiting for new user input. Keep in mind that two processes are involved here where each process can be delayed to any degree.
Apart from this, you should consider building your Qt application asynchronously (with an event loop) instead of using the synchronous approach. That way your Qt application can do things in parallel, e.g. read or wait for output from the remote process while still being able to accept user input.
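As a rough sketch of that asynchronous approach (the program name and details are illustrative, not taken from the question; handling user input would additionally need something like a QSocketNotifier on stdin):

#include <QCoreApplication>
#include <QProcess>
#include <QTextStream>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QProcess sub;
    QTextStream out(stdout);

    // Print subprocess output as it arrives instead of polling for it.
    QObject::connect(&sub, &QProcess::readyReadStandardOutput, [&]() {
        out << sub.readAllStandardOutput();
        out.flush();
    });

    // Quit the event loop when the subprocess exits.
    QObject::connect(&sub, qOverload<int, QProcess::ExitStatus>(&QProcess::finished),
                     [&](int, QProcess::ExitStatus) { app.quit(); });

    sub.start("./demo", QStringList(), QProcess::Unbuffered | QProcess::ReadWrite);

    return app.exec();
}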

Using Sleep() prevents me from writing to a file

I have some code that writes the system time to a file:
std::ofstream file("time.txt");
char *date;
time_t timer;
timer=time(NULL);
date = asctime(localtime(&timer));
while ( true ) {
    std::cout << date << ", " << randomNumber << std::endl;
    if (file.is_open())
    {
        file << date;
        file << ", ";
        file << randomNumber;
        file << "\n";
    }
}
file.close();
When I let my program run and stop it in between (it's an infinite while loop), I am able to get data written to my file.
However, if I merely change the code to add a Sleep() call, no data is written to my file, but I do see the output on the screen. Is this expected behavior? How do I ensure that values are written to the file even if I end my program mid-way?
std::ofstream file("time.txt");
char *date;
time_t timer;
timer=time(NULL);
date = asctime(localtime(&timer));
while ( true ) {
    Sleep(100); // wait for 100 milliseconds
    std::cout << date << ", " << randomNumber << std::endl;
    if (file.is_open())
    {
        file << date;
        file << ", ";
        file << randomNumber;
        file << "\n";
    }
}
file.close();
If I close my file right after the sleep timer, it writes the data out. But the main reason I'm adding the timer is that I want to slow down how often my file is being written to...
You need to flush the buffer so the contents are written to the file. Stream std::flush, or change file << "\n"; to file << std::endl;, to flush the stream. When you don't call Sleep in your program, the contents of the buffer are written as soon as the buffer becomes full; with Sleep, however, the buffer doesn't become full right away because of the delay, so you don't see the contents written to the file.
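A minimal sketch of that fix applied to the loop from the question (randomNumber here is just a placeholder for the value computed in the real program):

#include <windows.h>
#include <ctime>
#include <fstream>

int main()
{
    std::ofstream file("time.txt");
    std::time_t timer = std::time(nullptr);
    char *date = std::asctime(std::localtime(&timer));
    int randomNumber = 0;    // placeholder for the real value

    while (true) {
        Sleep(100);          // wait for 100 milliseconds
        if (file.is_open()) {
            file << date << ", " << randomNumber << std::endl;  // endl flushes
        }
    }
}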

Interpretation of homework requirements, involving client/server communication

I've been working on an assignment that asks us to implement some code provided to us that allows the creation of a server and client that can communicate. I was to fork a process in main, test the various request options available, and then measure the difference in the time it took to do this via the child process versus locally using a function. I'm unsure if I've interpreted the requirements correctly, though. On top of this, all the timing functions return 0 seconds; I'm not sure if this is correct or not. I'll post a small portion of the code.
Homework statement (only a small portion):
Measure the invocation delay of a request (i.e. the time between the
invocation of a request until the response comes back.) Compare that
with the time to submit the same request string to a function that
takes a request and returns a reply (as compared to a separate process
that does the same). Submit a report that compares the two.
The function declared before main:
string myfunc(string request){
    //string myreq = request;
    RequestChannel my_func_channel("control", RequestChannel::CLIENT_SIDE);
    string reply1 = my_func_channel.send_request(request);
    return reply1;
}
And how I interpreted the directions in code:
int main(int argc, char * argv[]) {
    // time variables
    time_t start, end;
    double time_req_1, time_req_func;

    cout << "client.C Starting...\n" << flush;
    cout << "Forking new process...\n " << flush;
    pid_t childpid = fork();

    if (childpid == -1)
        cout << "Failed to fork.\n" << flush;
    else if (childpid == 0) {
        cout << "***Loading Dataserver...\n" << flush;
        // Load dataserver
        RequestChannel my_channel("control", RequestChannel::CLIENT_SIDE);
        cout << "***Dataserver Loaded.\n" << flush;

        time(&start);
        string reply1 = my_channel.send_request("hello");
        cout << "***Reply to request 'hello' is '" << reply1 << "'\n" << flush;
        time(&end);

        time_req_1 = difftime(end, start);
        cout << "\n\nRequest 1 took : " << time_req_1 << flush;
    }
    else { // parent
        time(&start);
        string s = myfunc("hello");
        time(&end);

        time_req_func = difftime(end, start);
        cout << "\nmyfunc Request took: " << time_req_func << "\n" << flush;
    }
    usleep(1000000);
}
This is an abbreviated version of my code, but it contains everything you should need to figure out what's going on. Have I done what the directions stated? Also, is it likely that my 0-second results are correct?
The time it takes to do it once may be (and probably is) too small to measure with time()'s one-second resolution, so time how long it takes to do it many times and then work out how long each one took.
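For example, a rough sketch of that approach using std::chrono (do_request is a stand-in for whatever is being measured, e.g. the myfunc("hello") call):

#include <chrono>
#include <iostream>
#include <string>

// Stand-in for the operation being measured, e.g. myfunc("hello").
std::string do_request(const std::string &request)
{
    return request;
}

int main()
{
    const int iterations = 10000;

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        do_request("hello");
    auto end = std::chrono::steady_clock::now();

    std::chrono::duration<double> total = end - start;
    std::cout << "average per request: "
              << (total.count() / iterations) * 1e6 << " microseconds\n";
}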