Extremely slow std::cout using MS Compiler - c++

I am printing progress of many iterations of a computation and the output is actually the slowest part of it, but only if I use Visual C++ compiler, MinGW works fine on the same system.
Consider the following code:
#include <iostream>
#include <chrono>
using namespace std;
#define TO_SEC(Time) \
    chrono::duration_cast<chrono::duration<double>>(Time).count()
const int REPEATS = 100000;
int main() {
    auto start_time = chrono::steady_clock::now();
    for (int i = 1; i <= REPEATS; i++)
        cout << '\r' << i << "/" << REPEATS;
    double run_time = TO_SEC(chrono::steady_clock::now() - start_time);
    cout << endl << run_time << "s" << endl;
}
Now the output I get when compiled with MinGW ("g++ source.cpp -std=c++11") is:
100000/100000
0.428025s
Now the output I get when compiled with Visual C++ Compiler November 2013 ("cl.exe source.cpp") is:
100000/100000
133.991s
Which is quite preposterous. What comes to mind is that VC++ is conducting unnecessary flushes.
Would anybody know how to prevent this?
EDIT: The setup is:
gcc version 4.8.2 (GCC), target i686-pc-cygwin
Microsoft (R) C/C++ Optimizing Compiler Version 18.00.21005.1 for x86
Windows 7 Professional N 64-bit with CPU i7-3630QM, 2.4GHz with 8.00GB RAM

std::cout in MSVC is slow (https://web.archive.org/web/20170329163751/https://connect.microsoft.com/VisualStudio/feedback/details/642876/std-wcout-is-ten-times-slower-than-wprintf-performance-bug-in-c-library).
It is an unfortunate consequence of how our C and C++ Standard Library
implementations are designed. The problem is that when printing to the
console (instead of, say, being redirected to a file), neither our C
nor C++ I/O are buffered by default. This is sometimes concealed by
the fact that C I/O functions like printf() and puts() temporarily
enable buffering while doing their work.
Microsoft suggests this fix (to enable buffering on cout/stdout):
setvbuf(stdout, 0, _IOLBF, 4096)
You could also try with:
cout.sync_with_stdio(false);
but probably it won't make a difference.
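For instance, here is a minimal sketch of the workaround applied to the question's progress loop, assuming the MSVC behavior described above, where cout output goes through the synchronized stdout:
#include <cstdio>
#include <iostream>

int main() {
    // Give stdout a buffer before any output is produced; per the MSVC
    // CRT docs, _IOLBF behaves like full buffering (_IOFBF) on Windows.
    setvbuf(stdout, nullptr, _IOLBF, 4096);
    const int REPEATS = 100000;
    for (int i = 1; i <= REPEATS; i++)
        std::cout << '\r' << i << "/" << REPEATS;
    std::cout << '\n';
    return 0;
}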

Avoid std::endl; use "\n" instead. std::endl is required by the standard to flush the stream.
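For example:
std::cout << i << '\n';        // newline only; stays in the buffer
std::cout << i << std::endl;   // newline plus a flush on every call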


Cout unsigned char

I'm using Visual Studio 2019: why does this command do nothing?
std::cout << unsigned char(133);
It literally gets skipped (I verified this with step-by-step debugging):
I expected it to print à.
Everything sent to cout after that character in the same statement is ignored, but not what came before. (std::cout << "12" << unsigned char(133) << "34"; prints "12")
I've also tried to change it to these:
std::cout << unsigned char(133) << std::flush;
std::cout << (unsigned char)(133);
std::cout << char(-123);
but the result is the same.
I remember that this worked before, and some of my programs that use this command have mysteriously stopped working... In a brand-new blank project, same result!
I thought my new custom keyboard layout might be the cause, but disabling it changes nothing.
On other online compilers it works properly, so could this be a Visual Studio 2019 bug?
The "sane" answer is: don't rely on extended-ASCII characters. Unicode is widespread enough to make this the preferred approach:
#include <iostream>
int main() {
    std::cout << u8"\u00e0\n";
}
This explicitly prints the character à you requested; in fact, that's also how your browser understands it, which you can verify by pasting it into a Unicode character search: it resolves to LATIN SMALL LETTER A WITH GRAVE, code point U+00E0, which you can spot in the code above.
In your example, there's no difference between using a signed or unsigned char; the byte value 133 (0x85) gets written to the terminal either way, and how the terminal renders it differs from machine to machine, depending on how it is set up to interpret it. In a UTF-8 console, a lone "\x85" byte is simply an invalid UTF-8 sequence; if your OS console was switched to UTF-8, that might be why you're seeing no output.
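If a UTF-8 console is indeed the culprit, one workaround on Windows (a sketch, assuming <windows.h> is available) is to switch the console to UTF-8 explicitly and print the character's UTF-8 bytes:
#include <windows.h>
#include <iostream>

int main() {
    // CP_UTF8 (code page 65001) tells the console to decode output as UTF-8.
    SetConsoleOutputCP(CP_UTF8);
    // U+00E0 (à) encodes in UTF-8 as the two bytes 0xC3 0xA0.
    std::cout << "\xC3\xA0\n";
    return 0;
}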
You can try to use static_cast
std::cout << static_cast<unsigned char>(133) << std::endl;
Or
std::cout << static_cast<char>(133) << std::endl;
Since all of this works on my machine, it's hard to pinpoint the problem; common sense points to some configuration issue.

Qt check if current process is 32 or 64 bit

I have a 32bit and a 64bit version of my application and need to do something special in case it's the 32bit version running on 64bit Windows. I'd like to avoid platform specific calls and instead use Qt or boost. For Qt I found Q_PROCESSOR_X86_32 besides Q_OS_WIN64 and it seems this is exactly what I need. But it doesn't work:
#include <QtGlobal>
#ifdef Q_PROCESSOR_X86_64
std::cout << "64 Bit App" << std::endl;
#endif
#ifdef Q_PROCESSOR_X86_32
std::cout << "32 Bit App" << std::endl;
#endif
This prints nothing when running the 32-bit app on my 64-bit Windows 7. Am I misunderstanding the documentation of these global declarations?
Since there's some confusion: This is not about detecting the OS the app is currently running on, it's about detecting the "bitness" of the app itself.
Take a look at QSysInfo::currentCpuArchitecture(): it will return a string containing "64" when running on a 64-bit host. Similarly, QSysInfo::buildCpuArchitecture() will return such a string when compiled on a 64-bit host:
bool isHost64Bit() {
    static bool h = QSysInfo::currentCpuArchitecture().contains(QLatin1String("64"));
    return h;
}
bool isBuild64Bit() {
    static bool b = QSysInfo::buildCpuArchitecture().contains(QLatin1String("64"));
    return b;
}
Then, the condition you want to detect is:
bool is32BuildOn64Host() { return !isBuild64Bit() && isHost64Bit(); }
This should be portable to all architectures that support running 32-bit code on a 64-bit host.
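A hypothetical call site, assuming the two helpers above are in scope:
#include <QDebug>

void reportBitness() {
    if (is32BuildOn64Host())
        qDebug() << "32-bit build running on a 64-bit host";
}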
You can use Q_PROCESSOR_WORDSIZE. I'm surprised it's not documented, since it is defined in a public header (QtGlobal) and is quite handy.
It can be preferable for some use cases because it doesn't depend on the processor architecture (e.g. it's defined the same way for x86_64 as for arm64 and many others).
Example:
#include <QtGlobal>
#include <QDebug>
int main() {
#if Q_PROCESSOR_WORDSIZE == 4
    qDebug() << "32-bit executable";
#elif Q_PROCESSOR_WORDSIZE == 8
    qDebug() << "64-bit executable";
#else
    qDebug() << "Processor with unexpected word size";
#endif
}
or even better:
int main() {
    qDebug() << QStringLiteral("%1-bit executable").arg(Q_PROCESSOR_WORDSIZE * 8);
}
Preprocessor directives are evaluated at compile-time. What you want to do is to compile for 32 bit and at run-time check if you're running on a 64 bit system (note that your process will be 32 bit):
#ifdef Q_PROCESSOR_X86_32
    std::cout << "32 Bit App" << std::endl;
    BOOL bIsWow64 = FALSE;
    // IsWow64Process and GetCurrentProcess come from <windows.h>
    if (IsWow64Process(GetCurrentProcess(), &bIsWow64) && bIsWow64)
        std::cout << "Running on 64 Bit OS" << std::endl;
#endif
That example is Windows specific. There is no portable way to do it; on Linux you may run system("getconf LONG_BIT") or system("uname -m") and check the output.
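For the Linux case, a sketch (POSIX-only, minimal error handling) of checking the host's word size via getconf:
#include <cstdio>
#include <cstring>

// Runs "getconf LONG_BIT" and reports whether the host is 64-bit.
bool isHost64Bit() {
    FILE* pipe = popen("getconf LONG_BIT", "r");
    if (!pipe) return false;
    char buf[16] = {0};
    char* ok = fgets(buf, sizeof buf, pipe);
    pclose(pipe);
    return ok && strncmp(buf, "64", 2) == 0;
}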
You can use GetNativeSystemInfo (see below) to determine the bitness of the OS. As others have already mentioned, the app's bitness is determined at compile time.
SYSTEM_INFO sysinfo;
GetNativeSystemInfo(&sysinfo);
if (sysinfo.wProcessorArchitecture == PROCESSOR_ARCHITECTURE_INTEL)
{
    // this is a 32-bit OS
}
if (sysinfo.wProcessorArchitecture == PROCESSOR_ARCHITECTURE_AMD64)
{
    // this is a 64-bit OS
}
The correct name is Q_PROCESSOR_X86_32 or Q_PROCESSOR_X86_64.
However, if your application ever runs on ARM in the future, this will not work. Consider checking sizeof(void *) or QT_POINTER_SIZE instead, as sketched below.
Additionally, note that on Windows, GUI applications usually can't write to stdout, so the check may work yet show nothing. Use either the debugger, a message box, or output to a dummy file instead.
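A sketch of that pointer-size check:
#include <QtGlobal>
#include <QDebug>

int main() {
    // QT_POINTER_SIZE is the build-time pointer size in bytes;
    // sizeof(void*) gives the same value in plain C++.
    static_assert(QT_POINTER_SIZE == sizeof(void*), "pointer size sanity check");
    qDebug() << (sizeof(void*) == 8 ? "64-bit build" : "32-bit build");
    return 0;
}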

Netbeans 7.0 compiler error

I am using Netbeans 7.0 and get this error when I try to compile and debug:
make: *** [.validate-impl] Error 127
BUILD FAILED (exit value 2, total time: 281ms)
I set my PATH environment variable (within Windows) to include C:\cygwin\bin
Within Netbeans my build tools are of the Cygwin family. The C compiler is Gcc, C++ compiler is G++, Assembler is as.exe, make command is make.exe, and debugger is gdb.exe. They're all located within C:\cygwin\bin\FILENAMEHERE
And finally, my source code:
#include <iostream>
int main ()
{
    std::cout << "Enter two numbers:" << std::endl;
    int v1, v2;
    std::cin >> v1 >> v2;
    std::cout << "The sum of " << v1 << " and " << v2
              << " is " << v1 + v2 << std::endl;
    return 0;
}
Any suggestions?
I've had a LOT of problems with the 7.0 C++ version. It keeps destroying the auto-generated make files. I removed it and downgraded. The 6.9.1 version is still available and seems to work much better.
p.s.
If you're going to be doing Qt development, it matters which compiler chain you choose. You want to look for "mingw tdm dw2". The SJLJ version of MinGW (which is the default) does not work with the released Qt binary libraries.

How to get IOStream to perform better?

Most C++ users who learned C first prefer to use the printf/scanf family of functions even when they're coding in C++.
Although I admit that I find the interface way better (especially POSIX-like format and localization), it seems that an overwhelming concern is performance.
Taking a look at this question:
How can I speed up line by line reading of a file
It seems that the best answer is to use fscanf and that the C++ ifstream is consistently 2-3 times slower.
I thought it would be great if we could compile a repository of "tips" to improve IOStreams performance, what works, what does not.
Points to consider
buffering (rdbuf()->pubsetbuf(buffer, size))
synchronization (std::ios_base::sync_with_stdio)
locale handling (could we use a trimmed-down locale, or remove it altogether?)
Of course, other approaches are welcome.
Note: a "new" implementation, by Dietmar Kuhl, was mentioned, but I was unable to locate many details about it. Previous references seem to be dead links.
Here is what I have gathered so far:
Buffering:
If by default the buffer is very small, increasing the buffer size can definitely improve the performance:
it reduces the number of HDD hits
it reduces the number of system calls
The buffer can be set by accessing the underlying streambuf implementation.
char Buffer[N];
std::ifstream file("file.txt");
file.rdbuf()->pubsetbuf(Buffer, N);
// the pointer returned by rdbuf() is guaranteed
// to be non-null after a successful constructor call
Warning, courtesy of @iavr: according to cppreference, it is best to call pubsetbuf before opening the file; otherwise the various standard library implementations behave differently.
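A minimal sketch of the recommended order (buffer installed before open):
#include <fstream>

int main() {
    char buffer[65536];
    std::ifstream file;                              // default-constructed, not open yet
    file.rdbuf()->pubsetbuf(buffer, sizeof buffer);  // install the buffer first
    file.open("file.txt");                           // then open
    // ... read as usual ...
    return 0;
}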
Locale Handling:
Locales can perform character conversion, filtering, and cleverer tricks where numbers or dates are involved. They go through a complex system of dynamic dispatch and virtual calls, so sidestepping them can help trim the penalty.
The default "C" locale is meant to perform no conversion and to be uniform across machines. It's a good default to use.
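For instance, a sketch of imbuing the classic locale explicitly, which guarantees the stream uses it regardless of the global locale:
#include <fstream>
#include <locale>

int main() {
    std::ifstream file;
    // Imbue before any I/O; the classic "C" locale performs no conversions.
    file.imbue(std::locale::classic());
    file.open("file.txt");
    // ... read as usual ...
    return 0;
}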
Synchronization:
I could not see any performance improvement using this facility.
One can access a global setting (static member of std::ios_base) using the sync_with_stdio static function.
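For reference, the call looks like this (it returns the previous setting, so you can restore it later):
bool oldSetting = std::ios_base::sync_with_stdio(false); // call before any I/O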
Measurements:
To experiment, I toyed with a simple program, compiled using gcc 3.4.2 on SUSE 10p3 with -O2.
C : 7.76532e+06
C++: 1.0874e+07
That represents a slowdown of about 40%... for the default code. Indeed, tampering with the buffer (in either C or C++) or with the synchronization parameters (C++) did not yield any improvement.
Results by others:
@Irfy on g++ 4.7.2-2ubuntu1, -O3, virtualized Ubuntu 11.10, 3.5.0-25-generic, x86_64, enough RAM/CPU, 196MB of several "find / >> largefile.txt" runs
C : 634572
C++: 473222
C++ 25% faster
@Matteo Italia on g++ 4.4.5, -O3, Ubuntu Linux 10.10 x86_64 with a random 180 MB file
C : 910390
C++: 776016
C++ 17% faster
@Bogatyr on i686-apple-darwin10-g++-4.2.1 (GCC 4.2.1, Apple Inc. build 5664), Mac mini, 4GB RAM, idle except for this test, with a 168MB datafile
C : 4.34151e+06
C++: 9.14476e+06
C++ 111% slower
@Asu on clang++ 3.8.0-2ubuntu4, Kubuntu 16.04, Linux 4.8-rc3, 8GB RAM, i5 Haswell, Crucial SSD, 88MB datafile (tar.xz archive)
C : 270895
C++: 162799
C++ 66% faster
So the answer is: it's a quality of implementation issue, and really depends on the platform :/
The code in full here for those interested in benchmarking:
#include <fstream>
#include <iostream>
#include <iomanip>
#include <string>     // std::string (needed for file >> s)
#include <cmath>
#include <cstdio>
#include <cstdlib>    // atoi
#include <cstring>    // strcmp
#include <clocale>    // setlocale
#include <sys/time.h>

template <typename Func>
double benchmark(Func f, size_t iterations)
{
    f();  // warm-up run, not timed
    timeval a, b;
    gettimeofday(&a, 0);
    for (; iterations --> 0;)
    {
        f();
    }
    gettimeofday(&b, 0);
    return (b.tv_sec * (unsigned int)1e6 + b.tv_usec) -
           (a.tv_sec * (unsigned int)1e6 + a.tv_usec);
}

struct CRead
{
    CRead(char const* filename): _filename(filename) {}
    void operator()() {
        FILE* file = fopen(_filename, "r");
        int count = 0;
        // the field width keeps fscanf from overflowing _buffer
        while ( fscanf(file, "%1023s", _buffer) == 1 ) { ++count; }
        fclose(file);
    }
    char const* _filename;
    char _buffer[1024];
};

struct CppRead
{
    CppRead(char const* filename): _filename(filename), _buffer() {}
    enum { BufferSize = 16184 };
    void operator()() {
        std::ifstream file(_filename, std::ifstream::in);
        // comment out to remove the extended buffer
        file.rdbuf()->pubsetbuf(_buffer, BufferSize);
        int count = 0;
        std::string s;
        while ( file >> s ) { ++count; }
    }
    char const* _filename;
    char _buffer[BufferSize];
};

int main(int argc, char* argv[])
{
    size_t iterations = 1;
    if (argc > 1) { iterations = atoi(argv[1]); }
    char const* oldLocale = setlocale(LC_ALL, "C");
    if (strcmp(oldLocale, "C") != 0) {
        std::cout << "Replaced old locale '" << oldLocale << "' by 'C'\n";
    }
    char const* filename = "largefile.txt";
    CRead cread(filename);
    CppRead cppread(filename);
    // comment out to use the default setting
    bool oldSyncSetting = std::ios_base::sync_with_stdio(false);
    double ctime = benchmark(cread, iterations);
    double cpptime = benchmark(cppread, iterations);
    // comment out if oldSyncSetting's declaration is commented out
    std::ios_base::sync_with_stdio(oldSyncSetting);
    std::cout << "C : " << ctime << "\n"
                 "C++: " << cpptime << "\n";
    return 0;
}
Two more improvements:
Issue std::cin.tie(nullptr); before heavy input/output.
Quoting http://en.cppreference.com/w/cpp/io/cin:
Once std::cin is constructed, std::cin.tie() returns &std::cout, and likewise, std::wcin.tie() returns &std::wcout. This means that any formatted input operation on std::cin forces a call to std::cout.flush() if any characters are pending for output.
You can avoid flushing the buffer by untying std::cin from std::cout. This is relevant when multiple calls to std::cin and std::cout are mixed. Note that calling std::cin.tie(nullptr); makes the program unsuitable for interactive use, since output may be delayed.
Relevant benchmark:
File test1.cpp:
#include <iostream>
using namespace std;
int main()
{
    ios_base::sync_with_stdio(false);
    int i;
    while (cin >> i)
        cout << i << '\n';
}
File test2.cpp:
#include <iostream>
using namespace std;
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int i;
    while (cin >> i)
        cout << i << '\n';
    cout.flush();
}
Both compiled by g++ -O2 -std=c++11. Compiler version: g++ (Ubuntu 4.8.4-2ubuntu1~14.04) 4.8.4 (yeah, I know, pretty old).
Benchmark results:
work@mg-K54C ~ $ time ./test1 < test.in > test1.in
real 0m3.140s
user 0m0.581s
sys 0m2.560s
work@mg-K54C ~ $ time ./test2 < test.in > test2.in
real 0m0.234s
user 0m0.234s
sys 0m0.000s
(test.in consists of 1179648 lines each consisting only of a single 5. It’s 2.4 MB, so sorry for not posting it here.).
I remember solving an algorithmic task where the online judge kept refusing my program without cin.tie(nullptr), but accepted it with cin.tie(nullptr), or with printf/scanf instead of cin/cout.
Use '\n' instead of std::endl.
Quoting http://en.cppreference.com/w/cpp/io/manip/endl:
Inserts a newline character into the output sequence os and flushes it as if by calling os.put(os.widen('\n')) followed by os.flush().
You can avoid flushing the buffer by printing '\n' instead of endl.
Relevant benchmark:
File test1.cpp:
#include <iostream>
using namespace std;
int main()
{
    ios_base::sync_with_stdio(false);
    for (int i = 0; i < 1179648; ++i)
        cout << i << endl;
}
File test2.cpp:
#include <iostream>
using namespace std;
int main()
{
    ios_base::sync_with_stdio(false);
    for (int i = 0; i < 1179648; ++i)
        cout << i << '\n';
}
Both compiled as above.
Benchmark results:
work@mg-K54C ~ $ time ./test1 > test1.in
real 0m2.946s
user 0m0.404s
sys 0m2.543s
work@mg-K54C ~ $ time ./test2 > test2.in
real 0m0.156s
user 0m0.135s
sys 0m0.020s
Interesting that you say C programmers prefer printf when writing C++; I see a lot of code that is otherwise C except for using cout and iostream to write the output.
Users can often get better performance by using filebuf directly (Scott Meyers mentions this in Effective STL), but there is relatively little documentation on using filebuf directly, and most developers prefer std::getline, which is simpler most of the time.
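A sketch of reading through the filebuf directly (the filename is illustrative; error handling kept minimal):
#include <fstream>
#include <iostream>

int main() {
    std::filebuf fb;
    if (!fb.open("largefile.txt", std::ios::in))
        return 1;
    char block[65536];
    long long total = 0;
    std::streamsize n;
    // sgetn() bypasses the istream formatting layer and its sentry objects.
    while ((n = fb.sgetn(block, sizeof block)) > 0)
        total += n;  // process block[0..n) here
    std::cout << total << " bytes read\n";
    return 0;
}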
With regard to locales, if you create facets you will often get better performance by creating a locale once with all your facets, keeping it stored, and imbuing it into each stream you use.
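A sketch of that pattern; the facet here (a numpunct that groups digits) is just an illustrative choice:
#include <iostream>
#include <locale>

// Illustrative facet: group digits in threes, separated by commas.
struct GroupedDigits : std::numpunct<char> {
    char do_thousands_sep() const override { return ','; }
    std::string do_grouping() const override { return "\3"; }
};

int main() {
    // Build the locale once (it takes ownership of the facet),
    // store it, and imbue it into every stream that needs it.
    static const std::locale grouped(std::locale::classic(), new GroupedDigits);
    std::cout.imbue(grouped);
    std::cout << 1234567 << '\n';  // prints 1,234,567
    return 0;
}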
I did see another topic on this here recently, so this is close to being a duplicate.

mixing cout and printf for faster output

After performing some tests I noticed that printf is much faster than cout. I know that it's implementation dependent, but on my Linux box printf is 8x faster. So my idea is to mix the two printing methods: I want to use cout for simple prints, and I plan to use printf for producing huge outputs (typically in a loop). I think it's safe to do as long as I don't forget to flush before switching to the other method:
cout << "Hello" << endl;
cout.flush();
for (int i = 0; i < 1000000; ++i) {
    printf("World!\n");
}
fflush(stdout);
cout << "last line" << endl;
cout << flush;
Is it OK like that?
Update: Thanks for all the precious feedback. Summary of the answers: if you want to avoid tricky solutions, simply stick with cout, but don't use endl, since it implicitly flushes the buffer (slowing things down); use "\n" instead. This matters when you produce large outputs.
The direct answer is that yes, that's okay.
A lot of people have thrown around various ideas of how to improve speed, but there seems to be quite a bit of disagreement over which is most effective. I decided to write a quick test program to get at least some idea of which techniques did what.
#include <iostream>
#include <string>
#include <sstream>
#include <time.h>
#include <iomanip>
#include <algorithm>
#include <iterator>
#include <stdio.h>

char fmt[] = "%s\n";
static const int count = 3000000;
static char const *const string = "This is a string.";
static std::string s = std::string(string) + "\n";

void show_time(void (*f)(), char const *caption) {
    clock_t start = clock();
    f();
    clock_t ticks = clock() - start;
    std::cerr << std::setw(30) << caption
              << ": "
              << (double)ticks / CLOCKS_PER_SEC << "\n";
}

void use_printf() {
    for (int i = 0; i < count; i++)
        printf(fmt, string);
}

void use_puts() {
    for (int i = 0; i < count; i++)
        puts(string);
}

void use_cout() {
    for (int i = 0; i < count; i++)
        std::cout << string << "\n";
}

void use_cout_unsync() {
    std::cout.sync_with_stdio(false);
    for (int i = 0; i < count; i++)
        std::cout << string << "\n";
    std::cout.sync_with_stdio(true);
}

void use_stringstream() {
    std::stringstream temp;
    for (int i = 0; i < count; i++)
        temp << string << "\n";
    std::cout << temp.str();
}

void use_endl() {
    for (int i = 0; i < count; i++)
        std::cout << string << std::endl;
}

void use_fill_n() {
    std::fill_n(std::ostream_iterator<char const *>(std::cout, "\n"), count, string);
}

void use_write() {
    for (int i = 0; i < count; i++)
        std::cout.write(s.data(), s.size());
}

int main() {
    show_time(use_printf, "Time using printf");
    show_time(use_puts, "Time using puts");
    show_time(use_cout, "Time using cout (synced)");
    show_time(use_cout_unsync, "Time using cout (un-synced)");
    show_time(use_stringstream, "Time using stringstream");
    show_time(use_endl, "Time using endl");
    show_time(use_fill_n, "Time using fill_n");
    show_time(use_write, "Time using write");
    return 0;
}
I ran this on Windows after compiling with VC++ 2013 (both x86 and x64 versions). Output from one run (with output redirected to a disk file) looked like this:
Time using printf: 0.953
Time using puts: 0.567
Time using cout (synced): 0.736
Time using cout (un-synced): 0.714
Time using stringstream: 0.725
Time using endl: 20.097
Time using fill_n: 0.749
Time using write: 0.499
As expected, results vary, but there are a few points I found interesting:
printf/puts are much faster than cout when writing to the NUL device
but cout keeps up quite nicely when writing to a real file
Quite a few proposed optimizations accomplish little
In my testing, fill_n is about as fast as anything else
By far the biggest optimization is avoiding endl
cout.write gave the fastest time (though probably not by a significant margin)
I've recently edited the code to force a call to printf. Anders Kaseorg was kind enough to point out that g++ recognizes the specific sequence printf("%s\n", foo); as equivalent to puts(foo); and generates code accordingly (i.e., it generates a call to puts instead of printf). Moving the format string to a global array and passing that as the format string produces identical output, but forces it to go through printf instead of puts. Of course, they might optimize around this some day as well, but at least for now (g++ 5.1) a test with g++ -O3 -S confirms that it actually calls printf (where the previous code compiled to a call to puts).
Sending std::endl to the stream appends a newline and flushes the stream. The subsequent invocation of cout.flush() is superfluous. If this was done when timing cout vs. printf then you were not comparing apples to apples.
By default, the C and C++ standard output streams are synchronized, so that writing to one causes a flush of the other and explicit flushes are not needed. Keeping them in sync, however, costs extra work.
Another thing to note is to make sure you flush the streams an equal amount. If you continuously flush the stream on one system and not the other that will definitely affect the speed of the tests.
Before assuming that one is faster than the other you should:
un-sync C++ I/O from C I/O (see sync_with_stdio()).
Make sure the amount of flushes is comparable.
You can further improve the performance of printf by increasing the buffer size for stdout:
setvbuf(stdout, NULL, _IOFBF, 32768); // any size larger than 512 that is also
                                      // a multiple of the system I/O buffer size is an improvement
The number of calls to the operating system to perform i/o is almost always the most expensive component and performance limiter.
Of course, if cout output is intermixed with stdout, the buffer flushes defeat the purpose of an increased buffer size.
You can use sync_with_stdio to make C++ I/O faster:
cout.sync_with_stdio(false);
This should improve your output performance with cout.
Don't worry about the performance between printf and cout. If you want to gain performance, separate formatted output from non-formatted output.
puts("Hello World\n") is much faster than printf("%s", "Hellow World\n"). (Primarily due to the formatting overhead). Once you have isolated the formatted from plain text, you can do tricks like:
const char hello[] = "Hello World\n";
cout.write(hello, sizeof(hello) - sizeof('\0'));
To speed up formatted output, the trick is to perform all formatting to a string, then use block output with the string (or buffer):
const unsigned int MAX_BUFFER_SIZE = 256;
char buffer[MAX_BUFFER_SIZE];
sprintf(buffer, "%d times is a charm.\n", 5);
unsigned int text_length = strlen(buffer); // strlen already excludes the terminating '\0'
fwrite(buffer, 1, text_length, stdout);
To further improve your program's performance, reduce the quantity of output. The less stuff you output, the faster your program will be. A side effect will be that your executable size will shrink too.
Well, I can't think of any reason to actually use cout, to be honest. It's completely insane to have a huge, bulky template to do something so simple, one that will be in every file. Also, it's as if it were designed to be as slow to type as possible, and after the millionth time of typing << and then typing the value in between, and accidentally getting something like >variableName>>, I never want to do that again.
Not to mention if you include std namespace the world will eventually implode, and if you don't your typing burden becomes even more ridiculous.
However I don't like printf a lot either. For me, the solution is to create my own concrete class and then call whatever io stuff is necessary within that. Then you can have really simple io in any manner you want and with whatever implementation you want, whatever formatting you want, etc (generally you want floats to always be one way for example, not to format them 800 ways for no reason, so putting in formatting with every call is a joke).
So all I type is something like
dout+"This is more sane than "+cPlusPlusMethod+" of "+debugIoType+". IMO at least";
dout++;
but you can have whatever you want. With lots of files it's surprising how much this improves compile time, too.
Also, there's nothing wrong with mixing C and C++; it should just be done judiciously, and if you are using the things that cause problems with C in the first place, it's safe to say that the least of your worries is trouble from mixing C and C++.
Mixing C++ and C I/O methods was recommended against by my C++ books, FYI. I'm pretty sure the C functions trample on the state expected/held by C++.