Ok, I'm working on my Operating Systems assignment. I need to write a microkernel that can do some basic things with threads, semaphores, events, etc.
My system environment is emulated with BCC 3.1, and classical debugging is really of no use, so I'm debugging cout-style.
The problem is cout's weird behavior: it writes output in blocks, or something like that. If I do, say, 40 couts, everything gets written out; if I do 39 of them, nothing is written at all. On the other hand, if I do between 40 and 79 couts it still writes only the first 40, but with 80 they are all fine, and so on. The numbers are not exact, I'm not sure what the threshold really is, but I have also noticed that changing the length of the string being cout-ed has the same effect; I just don't know how many characters correspond to one cout call.
Additional information is available upon request. Thanks in advance.
Sounds like buffering (std::cout is buffered, even if it often looks as though output appears immediately). In any case you can try flushing cout by
std::cout.flush();
or
std::cout << std::flush;
or
std::cout << std::endl;
or even by disabling buffering:
std::cout.rdbuf()->pubsetbuf(0, 0);
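For example, a minimal sketch of a flushing debug helper (the function name debugPrint is just an illustration, and this assumes a standards-conforming iostream; on very old compilers such as BCC 3.1 the header may differ):
#include <iostream>

// Print a message and flush immediately, so it is visible
// even if the kernel hangs or crashes right afterwards.
void debugPrint(const char* msg)
{
    std::cout << msg << std::endl;   // endl writes '\n' and flushes
}

int main()
{
    debugPrint("before creating thread");
    // ... code under test ...
    debugPrint("after creating thread");
}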
Related
What is the significance of including
ios_base::sync_with_stdio(false);
cin.tie(NULL);
in C++ programs?
In my tests, it speeds up the execution time, but is there a test case I should be worried about by including this?
Do the 2 statements always have to be together, or is the first one sufficient, i.e., ignoring cin.tie(NULL)?
Also, is it permissible to mix C and C++ I/O in the same program once synchronization has been set to false?
https://www.codechef.com/viewsolution/7316085
The above code worked fine, until I used scanf/printf in a C++ program with the value as true. In this case, it gave a segmentation fault. What could be the possible explanation for this?
The two calls have different meanings that have nothing to do with performance; the fact that it speeds up the execution time is (or might be) just a side effect. You should understand what each of them does and not blindly include them in every program because they look like an optimization.
ios_base::sync_with_stdio(false);
This disables the synchronization between the C and C++ standard streams. By default, all standard streams are synchronized, which in practice allows you to mix C- and C++-style I/O and get sensible and expected results. If you disable the synchronization, then C++ streams are allowed to have their own independent buffers, which makes mixing C- and C++-style I/O an adventure.
Also keep in mind that synchronized C++ streams are thread-safe (output from different threads may interleave, but you get no data races).
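As a small illustration (a sketch; the exact interleaving with synchronization disabled is unspecified):
#include <cstdio>
#include <iostream>

int main()
{
    std::ios_base::sync_with_stdio(false); // C++ streams now keep their own buffers

    std::cout << "A";
    std::printf("B");
    std::cout << "C\n";

    // With synchronization enabled you reliably get ABC.
    // With it disabled, ACB (or another order) is possible, because
    // cout and stdio now flush their buffers independently.
    return 0;
}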
cin.tie(NULL);
This unties cin from cout. Tied streams ensure that one stream is flushed automatically before each I/O operation on the other stream.
By default cin is tied to cout to ensure a sensible user interaction. For example:
std::cout << "Enter name:";
std::cin >> name;
If cin and cout are tied, you can expect the output to be flushed (i.e., visible on the console) before the program prompts input from the user. If you untie the streams, the program might block waiting for the user to enter their name but the "Enter name" message is not yet visible (because cout is buffered by default, output is flushed/displayed on the console only on demand or when the buffer is full).
So if you untie cin from cout, you must make sure to flush cout manually every time you want to display something before expecting input on cin.
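A minimal sketch of what that looks like (the prompt text is just an example):
#include <iostream>
#include <string>

int main()
{
    std::cin.tie(nullptr);  // cout is no longer flushed automatically before cin reads

    std::string name;
    std::cout << "Enter name: " << std::flush;  // flush manually so the prompt appears
    std::cin >> name;

    std::cout << "Hello, " << name << '\n';
    return 0;
}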
In conclusion, know what each of them does, understand the consequences, and then decide if you really want or need the possible side effect of speed improvement.
This is to synchronize I/O from the C and the C++ world. If you synchronize, then you have a guarantee that the order of all I/O is exactly what you expect. In general it is the buffering of I/O that causes the problem; synchronizing lets both worlds share the same buffers. For example, with cout << "Hello"; printf("World"); cout << "Ciao"; and without synchronization, you'll never know whether you'll get HelloCiaoWorld, HelloWorldCiao or WorldHelloCiao...
tie gives you the guarantee that I/O channels in the C++ world are tied to each other, which means for example that every output has been flushed before input occurs (think about cout << "What's your name? "; cin >> name;).
You can always mix C and C++ I/O, but if you want reasonable behavior you must synchronize both worlds. Beware that in general it is not recommended to mix them: if you program in C, use C stdio, and if you program in C++, use streams. But you may want to mix existing C libraries into C++ code, and in such a case both need to be synchronized.
It's just common stuff for making cin input work faster.
For a quick explanation: the first line turns off buffer synchronization between the cin stream and C-style stdio tools (like scanf or gets) — so cin works faster, but you can't use it simultaneously with stdio tools.
The second line unties cin from cout. By default, cout is flushed each time you read something from cin, and that may be slow when you repeatedly read something small and then write something small many times. So this line turns off that behaviour (by literally tying cin to null instead of cout).
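Put together, the usual fast-input preamble looks something like this (a sketch; the summing loop is only a placeholder workload):
#include <iostream>

int main()
{
    std::ios_base::sync_with_stdio(false); // don't mix scanf/printf with cin/cout after this
    std::cin.tie(nullptr);                 // don't flush cout before every cin read

    long long n, x, sum = 0;
    std::cin >> n;
    for (long long i = 0; i < n; ++i) {
        std::cin >> x;
        sum += x;
    }
    std::cout << sum << '\n';
    return 0;
}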
Using ios_base::sync_with_stdio(false); is sufficient to decouple the C and C++ streams. You can find a discussion of this in Standard C++ IOStreams and Locales, by Langer and Kreft. They note that how this works is implementation-defined.
The cin.tie(NULL) call seems to be requesting a decoupling between the activities on cin and cout. I can't explain why using this with the other optimization should cause a crash. As noted, the link you supplied is bad, so no speculation here.
There are lots of great answers. I just want to add a small note about decoupling the stream.
cin.tie(NULL);
I faced an issue with the streams decoupled on the CodeChef platform: when I submitted my code, the platform's response was "Wrong Answer", but after tying the streams again and resubmitting, it worked.
So, if you want to untie the streams, you must make sure the output stream is flushed before each read.
I understand cout << '\n' is preferred over cout << endl; but cout << '\n' doesn't flush the output stream. When should the output stream be flushed and when is it an issue?
What exactly is flushing?
Flushing forces an output stream to write any buffered characters. Read streamed input/output.
It depends on your application; in real-time or interactive applications you need to flush immediately, but in many cases you can wait until the file is closed and let the program flush it automatically.
When must the output stream in C++ be flushed?
When you want to be sure that data written to it is visible to other programs or (in the case of file streams) to other streams reading the same file which aren't tied to this one; and when you want to be certain that the output is written even if the program terminates abnormally.
So you would want to do this when printing a message before a lengthy computation, or for printing a message to indicate that something's wrong (although you'd usually use cerr for that, which is automatically flushed after each output).
There's usually no need to flush cerr (which, by default, has its unitbuf flag set to flush after each output), or to flush cout before reading from cin (these streams are tied so that cout is flushed automatically before reading cin).
If the purpose of your program is to produce large amounts of output, either to cout or to a file, then don't flush after each line - that could slow it down significantly.
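For instance, a sketch of bulk output that avoids the per-line flush and flushes only once at the end:
#include <iostream>

int main()
{
    for (int i = 0; i < 1000000; ++i)
        std::cout << i << '\n';   // no flush per line

    std::cout.flush();            // one explicit flush at the end (also happens at normal exit)
    return 0;
}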
What exactly is flushing?
Output streams contain memory buffers, which are typically much faster to write to than the underlying output. Output operations put data into the buffer; flushing sends it to the final output.
First, you read it wrong. Whether you use std::endl or '\n' depends largely on context, but when in doubt, std::endl is the normal default. Using '\n' is reserved for cases where you know in advance that the flush isn't necessary and that it would be too costly.
Flushing is involved with buffering. When you write to a stream, (typically) the data isn't written immediately to the system; it is simply copied into a buffer, which will be written when it is full, or when the file is closed, or when it is explicitly flushed. This is for performance reasons: a system call is often a fairly expensive operation, and it's generally not a good idea to make one for every character. Historically, C had something called line buffered mode, which flushed on every '\n', and it turns out that this is a good compromise for most things. For various technical reasons, C++ doesn't have it; using std::endl is C++'s way of achieving the same result.
My recommendation would be to just use std::endl until you start having performance problems. If nothing else, it makes debugging simpler. If you want to go further, it makes sense to use '\n' when you're outputting a series of lines in just a few statements. And there are special cases, like logging, where you may want to control the flushing explicitly.
Flushing can be disastrous if you are writing a large file and flush frequently.
For example
for (int i = 0; i < LARGENUMBER; i++)
{   // Slow? endl flushes on every iteration
    auto point = xyz[i];
    cout << point.x << "," << point.y << endl;
}
vs
for (int i = 0; i < LARGENUMBER; i++)
{   // faster: '\n' does not flush
    auto point = xyz[i];
    cout << point.x << "," << point.y << "\n";
}
vs
for (int i = 0; i < LARGENUMBER; i++)
{   // fastest?
    auto point = xyz[i];
    printf("%i,%i\n", point.x, point.y);
}
endl was also often known for doing other things, for example synchronizing threads in a so-called debug mode on MSVC, resulting in multithreaded programs that, contrary to expectation, printed uninterrupted phrases from different threads.
I/O libraries buffer data sent to stream for performance reasons. Whenever you need to be sure data has actually been sent to stream, you need to flush it (otherwise it may still be in buffer and not visible on screen or in file).
Some operations automatically flush streams, but you can also explicitly call something like ostream::flush.
You need to be sure data is flushed, whenever for example you have other program waiting for the input from first program.
It depends on what you are doing. For example, if you are using the console to show the user progress on a long process, printing a series of dots on the same line, flushing can be useful. For normal output, line by line, you should not need to care about flushing.
So, for character-based output or non-line-based console output, flushing can be necessary. For line-based output, it works as expected.
This other answer can clarify your question, based on why avoiding endl and flushing manually may be good for performance reasons:
mixing cout and printf for faster output
Regarding what flushing is: when you write to a buffered stream, like an ostream, you don't have any guarantee that your data has arrived at the destination device (console, file, etc.). This happens because the stream can use intermediary buffers to hold your data so as not to stall your program. Usually, if your buffers are big enough, they will hold all the data and won't stop your program because of a slow I/O device. You may have already noticed that the console is very slow. The flush operation tells the stream that you want to be sure all intermediary data has arrived at the destination device, or at least that its buffers are now empty. It is very important for log files, for example, where you want to be reasonably sure a line ends up on disk and not in a buffer somewhere. This becomes more important if your program can't lose data, i.e., if it crashes, you want to be sure you did your best to write your data to disk. For other applications, performance is more important and you can let the OS decide when to flush the buffers for you, or wait until you close the stream, for example.
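As a sketch of the logging case mentioned above (the file name run.log and the helper logLine are just illustrations):
#include <fstream>
#include <string>

// Flush after each log line so the record survives a crash,
// at the cost of handing data to the OS more often.
void logLine(std::ofstream& log, const std::string& msg)
{
    log << msg << '\n';
    log.flush();
}

int main()
{
    std::ofstream log("run.log");
    logLine(log, "starting computation");
    // ... work that might crash ...
    logLine(log, "finished");
    return 0;
}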
#include <iostream>
using namespace std;

int main()
{
    cout << 1;
    while (true);
    return 0;
}
I thought that this program should print 1 and then hang. But it doesn't print anything, it just hangs.
cout << endl or cout.flush() can solve this problem, but I still want to know why it's not working as expected :)
This problem appeared during a Codeforces contest and I spent a lot of time staring at the strange behavior of my program. It was incorrect, it also hung, and the hidden output was actually debug information.
I tried using printf (compiling with gcc) and it behaves the same way as cout, so this question applies to C as well.
You're writing to a buffer. You need to flush the buffer. As @Guvante mentioned, use cout.flush(), or fflush(stdout) for printf.
Update:
Looks like fflush(stdout) actually flushes cout as well. But don't rely on that; it may not be the case in all implementations.
That is because cout buffers output. You have to flush the buffer for it to actually print.
endl and flush() both perform this flushing.
Also note that your program hangs because you have an infinite loop (while(true);).
The reason it does this is so that if you are printing a lot of data (say 1000 numbers) it can do so drastically more efficiently. Additionally most minor data points end with endl anyway, since you want your output to span multiple lines.
Concerning printf, the same as for cout holds: you're printing into a buffer and you need to flush it with fflush(stdout);. Normal termination flushes the buffer, which is why you see the output once the infinite loop is removed.
See Why does printf not flush after the call unless a newline is in the format string? for more information.
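For reference, one way to correct the snippet above so the output is visible before the loop (explicit flushes for both the C++ and the C variant):
#include <cstdio>
#include <iostream>

int main()
{
    std::cout << 1 << std::flush;  // C++ stream: flush explicitly
    std::printf("1");
    std::fflush(stdout);           // C stdio: same idea
    while (true)
        ;                          // both "1"s are already visible
}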
I have a program that may take up to 3-4 hours to finish. Along the way I need to output various information to a general file, "info.txt". Here is how I do it currently:
char dateStr[9];
char timeStr[9];
_strdate(dateStr);
_strtime(timeStr);

ofstream infoFile("info.txt", ios::out);
infoFile << "foo # " << timeStr << " , " << dateStr << endl;
infoFile.close();
This I do five times during a single run. My question is the following: is it more proper (efficiency-wise and standards-wise) to
close infoFile after each output (and, consequently, use five ofstreams infoFile1, infoFile2, ..., infoFile5, one for each time I output),
or to use a single infoFile and, consequently, have it open during the entire run?
EDIT: By "a single run" I mean a single run of the program. So by "five times during a single run" I mean that I output something to info.txt when running the program once (which takes 3-4 hours).
First: get numbers before optimizing; use a profiler. Then you know which parts take the most time.
If you don't have a profiler, think a bit before doing anything. How many runs will you do during those 3-4 hours? If it's only a few, things that happen once per run are probably not good targets for optimization; if it's lots and lots of runs, those parts can be considered as well, since disk access can be rather slow.
With that said, I've saved a bit of time in previous projects by reusing streams instead of opening and closing.
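As a sketch of what reusing the stream could look like in the asker's case (the log messages are placeholders, and flushing after each entry is an assumption so the file stays readable during the run):
#include <fstream>
#include <iostream>

int main()
{
    // Open once, keep it open for the whole run.
    std::ofstream infoFile("info.txt");
    if (!infoFile) {
        std::cerr << "could not open info.txt\n";
        return 1;
    }

    infoFile << "run started" << std::endl;   // endl flushes, so the entry is visible now
    // ... hours of computation ...
    infoFile << "phase 1 done" << std::endl;
    // ... more computation ...
    infoFile << "run finished" << std::endl;
    return 0;
}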
It's not really clear what you're trying to do. If the code you post does what you want, it's certainly the best solution. If you want the values appended, then you might want to keep the file open.
Some other considerations:
Unless you close the file or flush the data, external programs may not see the data immediately.
When you open the file, any existing file with that name will be truncated: an external program which tries to read the file at precisely this moment won't see anything.
Flushing after each output (automatic if you use std::endl), and seeking to the start before each output, will solve the previous problem (and if the data is as small as it seems, the write will be atomic), but could result in misleading data if the values written have different lengths, because the file length will not be shortened. (Probably not the case here, but something to be considered.)
With regard to performance: you're talking about an operation which lasts at most a couple of milliseconds and takes place once or twice an hour. Whether it takes one millisecond or ten is totally irrelevant.
This is a clear case of Premature optimization
It makes no actual difference to the performance of your application which approach you take as this is something that happens only 5 times during the scope of several hours.
Profile your application as the previous answer suggested and use that to identify the REAL bottlenecks in your code.
The only case I can think of where it would matter is if you wanted to prevent info.txt from being deleted or edited while your application is running, in which case you'd want to keep the stream alive. Otherwise it doesn't matter.
Something weird is happening in my program. I am currently using lots of threads, and it will not be feasible to paste everything here.
However this is my problem:
int value = 1000;
std::cout << value << std::endl;
//output: 3e8
Any idea why is my output 3e8?
What's the command to set it back to printing decimal values?
Thanks in advance! :)
Some other thread changed the default output radix of the std::cout stream to hexadecimal. Note that 1000 in decimal is 3e8 in hexadecimal, i.e. 1000 == 0x3e8.
Somewhere in your program a call such as:
std::cout << std::hex << value;
has been used. To revert output to normal (decimal) use:
std::cout << std::dec;
Here is a relevant link to the different ways numbers can be output on std::cout.
Also, as pointed out in the comments below, the standard method of modifying cout flags safely appears to be the following:
ios::fmtflags cout_flag_backup(cout.flags());  // store the current cout flags
cout.flags(ios::hex);                          // change the flags to what you want
cout.flags(cout_flag_backup);                  // restore cout to its original state
Link to IO base flags
As stated in the comments below, it would also be wise to point out that when using IO Streams it is a good idea to have some form of synchronisation between the threads and the streams, that is, make sure no two threads can use the same stream at one time.
Doing this will probably also centralise your stream calls, meaning that it will be far easier to debug something such as this in the future.
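A minimal sketch of such synchronisation, assuming C++11 threads and one mutex guarding cout:
#include <iostream>
#include <mutex>
#include <thread>

std::mutex coutMutex;   // shared by everything that writes to cout

void worker(int id, int value)
{
    std::lock_guard<std::mutex> lock(coutMutex);
    // The format manipulator and the output happen under the same lock,
    // so another thread's std::hex cannot leak into this statement.
    std::cout << std::dec << "thread " << id << ": " << value << '\n';
}

int main()
{
    std::thread t1(worker, 1, 1000);
    std::thread t2(worker, 2, 2000);
    t1.join();
    t2.join();
    return 0;
}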
Here's an SO question that may help you.
Chances are that another thread has changed the output to hex on cout. I doubt that these streams are thread-safe.