Execution time and checking stream state in C++

I am trying to understand streams in C++. I have the following code where I print a message a number of times, and I'm trying to find out whether there is a difference in execution time when checking for a good state or not. I used time of course, but I couldn't find a definitive answer, since sometimes checking was faster and sometimes it wasn't. My intuition says that since checking is an additional operation it should always take (slightly) longer. Is there any actual difference, or is it just random?
#include <iostream>
#include <string> // stoul
using namespace std;

int main(int argc, char **argv)
{
    ostream &out = cout;         // initialize ostream object
    size_t arg = stoul(argv[1]); // convert argv[1] to size_t
    for (size_t cnt = 0; cnt != arg; ++cnt)
    {
        // if (out.good()) // check goodbit
        out << "Nr. of command line argument " << argc << '\n';
    }
}

The real answer to your question is that it is extremely hard in practice to measure the difference. It's just one comparison (the if) versus handing execution over to the OS for I/O and communicating with hardware.
There are multiple layers of abstraction when it comes to printing, from buffering to branch prediction. The actual impact depends on multiple factors. Even multiple runs of exactly the same program will exhibit execution time variation.
You would need to devise a careful and clever experiment to measure the effect of the check reliably.
The takeaway for your problem here is that, most certainly, the difference is below your testing accuracy and probably below the execution noise. On top of that, the CPU architecture can effectively eliminate the difference; the keywords are prefetching, branch prediction, and the (in)famous speculative execution.
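If you do want to try anyway, here is a minimal sketch using std::chrono::steady_clock (the iteration handling mirrors the question's code; redirect stdout to a file or /dev/null so terminal rendering doesn't dominate, run it many times, and expect run-to-run noise to exceed the cost of the check):

#include <chrono>
#include <cstdlib>
#include <iostream>

int main(int argc, char **argv)
{
    std::size_t n = std::strtoul(argv[1], nullptr, 10);

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t cnt = 0; cnt != n; ++cnt)
    {
        if (std::cout.good()) // the check under test; comment out to compare
            std::cout << "Nr. of command line argument " << argc << '\n';
    }
    auto t1 = std::chrono::steady_clock::now();

    auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    std::cerr << us << " us\n"; // report on stderr so it isn't mixed into the timed stream
    return 0;
}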

Related

Why is this variable returning 32766?

I wrote a very basic evolution algorithm. The way it's supposed to work is that the user types in the desired value and the number of generations to try to reach it. Then the program runs through, taking the value in an array nearest to the goal and mutating it four times (while also leaving the original, in case it's right) to try to get closer to the goal. In theory, it should take roughly |n|/2 generations to reach the value, as mutations step by either one or two.
Here's the code to demonstrate what I mean:
#include <iostream>
using namespace std;
int gen [5] = {0, 0, 0, 0, 0}; int goal; int gens; int best; int i = 0; int fit;

int dif(int in) {
    return abs(gen[in] - goal);
}

void nextgen() {
    int fit [5] = {dif(1), dif(2), dif(3), dif(4), dif(5)};
    best = *max_element(fit, fit + 6);
    int gen [5] = {best - 2, best - 1, best, best + 1, best + 2};
}

int main() {
    cout << "Goal: "; cin >> goal; cout << "Gens: "; cin >> gens;
    while (i < gens) {
        nextgen(); cout << "Generation " << i + 1 << ": " << best << "\n";
        i = i + 1;
    }
}
It's pretty simple code. However, it seems that the int best bit of the output is returning 32766 every time, no matter what I do. Do you know what I've done wrong?
I've tried outputting the entire generation (which is even worse: a jumbled mess of non-user-friendly data that appears meaningless), I've reworked the code, I've added variables and functions to try and pin down exactly where the error is, and I watched the entire Code Aesthetic YouTube channel to make sure this looked good for you guys.
Looks like you're driving C++ without a license or safety belt. Jokes aside, please keep trying and learning. But with C/C++ you should always enable compiler warnings. The godbolt link in the comment from @user4581301 is really good; the compiler flags -Wall -Wextra -pedantic -O2 -fsanitize=address,undefined are all best practice. (I would add -Werror.)
Why you got exactly 32766 could be analyzed with a debugger, but it's not meaningful. A number close to 32768 (= 2^15) should trigger all the warning bells (it could be an integer overflow). Your code is accessing uninitialized memory (among other issues), leading to what is called undefined behaviour. This means it may produce different output depending on your machine, compiler, optimization flags, OS, standard libraries, etc. - even adding a debug print could change what it does.
For optimization algorithms (like GAs) it's also super easy to fool yourself into thinking that your implementation is correct, because the optimization will find a way to avoid (or exploit) any bugs. I've had one in my NN implementation that was accessing some data from the previous example by accident, and it took several days until I even noticed there was a problem.
If you want to focus on the algorithms, I suggest starting with a different language (anything except C/C++/Assembly). My advice would be either Python (though it can be 50x slower, it's much easier to learn and write) or Rust (just as fast as C++ and just as complicated, but with no undefined behaviour). With Rust, every mistake in your code above would have given you either a warning by default, a compiler error, or a runtime error instead of wrong output. That said, C++ with the flags mentioned above does the same for your specific code.
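For reference, a minimal corrected sketch of what the question appears to intend (this assumes the goal is the candidate with the smallest difference, so min rather than max; valid indices are 0..4; and the new generation must be written to the global array instead of a shadowing local):

#include <algorithm> // std::min_element, std::copy
#include <cstdlib>   // std::abs
#include <iostream>

int gen[5] = {0, 0, 0, 0, 0};
int goal, gens, best;

int dif(int in) { return std::abs(gen[in] - goal); }

void nextgen()
{
    int fit[5] = {dif(0), dif(1), dif(2), dif(3), dif(4)}; // indices 0..4
    // the *smallest* difference marks the candidate closest to the goal
    best = gen[std::min_element(fit, fit + 5) - fit];
    int next[5] = {best - 2, best - 1, best, best + 1, best + 2};
    std::copy(next, next + 5, gen); // update the global array, don't shadow it
}

int main()
{
    std::cout << "Goal: "; std::cin >> goal;
    std::cout << "Gens: "; std::cin >> gens;
    for (int i = 0; i < gens; ++i)
    {
        nextgen();
        std::cout << "Generation " << i + 1 << ": " << best << "\n";
    }
}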

Can I replace an if-statement with AND?

My prof once said that if-statements are rather slow and should be avoided as much as possible. I'm making a game in OpenGL, where I need a lot of them.
In my tests replacing an if-statement with AND via short-circuiting worked, but is it faster?
#include <cstdlib>  // std::rand
#include <iostream>

bool doSomething();

int main()
{
    int randomNumber = std::rand() % 10;
    randomNumber == 5 && doSomething();
    return 0;
}

bool doSomething()
{
    std::cout << "function executed" << std::endl;
    return true;
}
My intention is to use this inside the draw function of my renderer. My models are supposed to have flags; if a flag is true, a certain function should execute.
if-statements are rather slow and should be avoided as much as possible.
This is wrong and/or misleading. Most simplified statements about slowness of a program are wrong. There's probably something wrong with this answer too.
C++ statements don't have a speed that can be attributed to them. It's the speed of the compiled program that matters. And that consists of assembly language instructions; not of C++ statements.
What would probably be more correct is to say that branch instructions can be relatively slow (on modern, superscalar CPU architectures) (when the branch cannot be predicted well) (depending on what you are comparing to; there are many things that are much more expensive).
randomNumber == 5 && doSomething();
An if-statement is often compiled into a program that uses a branch instruction. A short-circuiting logical-and operation is also often compiled into a program that uses a branch instruction. Replacing if-statement with a logical-and operator is not a magic bullet that makes the program faster.
If you were to compare the program produced by the logical-and and the corresponding program where it is replaced with if (randomNumber == 5), you would find that the optimiser sees through your trick and produces the same assembly in both cases.
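A quick way to convince yourself, as a sketch (f here is a hypothetical stand-in for doSomething; compile both functions at -O2 on a compiler explorer and compare the assembly, which typically comes out identical):

void f(int n); // stand-in for doSomething()

void withIf(int n)
{
    if (n == 5)
        f(n);
}

void withAnd(int n)
{
    n == 5 && (f(n), true); // comma operator supplies the bool for &&
}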
My models are supposed to have flags; if a flag is true, a certain function should execute.
In order to avoid the branch, you must change the premise. Instead of iterating through a sequence of all models, checking flag, and conditionally calling a function, you could create a sequence of all models for which the function should be called, iterate that, and call the function unconditionally -> no branching. Is this alternative faster? There is certainly some overhead of maintaining the data structure and the branch predictor may have made this unnecessary. Only way to know for sure is to measure the program.
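A minimal sketch of that idea (Model, draw, and the container are hypothetical names, not from the question):

#include <vector>

struct Model { bool flag; /* ... */ };

void draw(Model &m) { /* the conditional work */ }

// instead of: for (auto &m : allModels) if (m.flag) draw(m);
// keep a separate list of exactly the models that need drawing:
void render(std::vector<Model*> &flagged)
{
    for (Model *m : flagged)
        draw(*m); // unconditional call: no per-model branch
}

The cost moves from the branch into keeping the flagged list up to date whenever a flag changes; as said above, only measurement can tell whether that trade pays off.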
I agree with the comments above that in almost all practical cases, it's OK to use ifs as much as you need without hesitation.
I also agree that this is not an issue important enough for a beginner to spend energy optimizing, and that using logical operators will likely emit code similar to ifs.
However - there is a valid issue here related to branching in general, so those who are interested are welcome to read on.
Modern CPUs use what we call instruction pipelining. Without getting too deep into the technical details: within each CPU core there is a level of parallelism. Each assembly instruction is composed of several stages, and while the current instruction is executed, the next instructions are already being prepared to a certain degree. Any kind of branching in general, and conditionals (ifs) in particular, breaks this flow. It's true that there is a branch-prediction mechanism, but it works only to some extent. So although in most cases ifs are totally OK, there are cases where this should be taken into account.
As always when it comes to optimizations, one should carefully profile.
Take the following piece of code as an example (similar things are common in image processing and other implementations):
unsigned char *pData = ...; // get data from somewhere
int dataSize = 100000000;   // something big
bool cond = ...;            // initialize some condition relevant for all the data

for (int i = 0; i < dataSize; ++i, ++pData)
{
    if (cond)
    {
        *pData = 2; // imagine some small calculation
    }
    else
    {
        *pData = 3; // imagine some other small calculation
    }
}
It might be better to do it like this (even though it contains duplication, which is evil from a software engineering point of view):
if (cond)
{
    for (int i = 0; i < dataSize; ++i, ++pData)
    {
        *pData = 2; // imagine some small calculation
    }
}
else
{
    for (int i = 0; i < dataSize; ++i, ++pData)
    {
        *pData = 3; // imagine some other small calculation
    }
}
We still have an if, but now it causes at most one branch.
In certain [rare] cases (requires profiling as mentioned above) it will be more efficient to do even something like this:
for (int i = 0; i < dataSize; ++i, ++pData)
{
    *pData = (2 * cond + 3 * (!cond));
}
I know it's not common, but I encountered specific HW some years ago on which the cost of 2 multiplications and 1 addition with negation was less than the cost of branching (due to the reset of the instruction pipeline). Also, this "trick" supports using different condition values for different parts of the data.
Bottom line: ifs are usually OK, but it's good to be aware that sometimes there is a cost.

Timing of using variables passed by reference and by value in C++

I have decided to compare the times of passing by value and by reference in C++ (g++ 5.4.0) with the following code:
#include <iostream>
#include <cstdio>     // printf
#include <sys/time.h> // gettimeofday
using namespace std;

int fooVal(int a) {
    for (size_t i = 0; i < 1000; ++i) {
        ++a;
        --a;
    }
    return a;
}

int fooRef(int &a) {
    for (size_t i = 0; i < 1000; ++i) {
        ++a;
        --a;
    }
    return a;
}

int main() {
    int a = 0;
    struct timeval stop, start;

    gettimeofday(&start, NULL);
    for (size_t i = 0; i < 10000; ++i) {
        fooVal(a);
    }
    gettimeofday(&stop, NULL);
    printf("The loop has taken %lu microseconds\n", stop.tv_usec - start.tv_usec);

    gettimeofday(&start, NULL);
    for (size_t i = 0; i < 10000; ++i) {
        fooRef(a);
    }
    gettimeofday(&stop, NULL);
    printf("The loop has taken %lu microseconds\n", stop.tv_usec - start.tv_usec);

    return 0;
}
It was expected that the fooRef execution would take much more time than the fooVal case because of "looking up" the referenced value in memory while performing operations inside fooRef. But the result proved to be unexpected for me:
The loop has taken 18446744073708648210 microseconds
The loop has taken 99967 microseconds
And the next time I run the code it can produce something like
The loop has taken 97275 microseconds
The loop has taken 99873 microseconds
Most of the time the produced values are close to each other (with fooRef being just a little bit slower), but sometimes outliers like the output from the first run can happen (for both the fooRef and fooVal loops).
Could you please explain this strange result?
UPD: Optimizations were turned off (-O0).
If the gettimeofday() function relies on the operating system clock, that clock is not really designed for dealing with microseconds in an accurate manner. The clock is typically updated only frequently enough to give the appearance of showing seconds accurately for the purpose of working with date/time values. Sampling at the microsecond level may therefore be unreliable for a benchmark such as the one you are performing.
You should be able to work around this limitation by making your test time much longer; for example, several seconds.
Again, as mentioned in other answers and comments, the effects of which type of memory is accessed (register, cache, main, etc.) and whether or not various optimizations are applied, could substantially impact results.
As with working around the time sampling limitation, you might be able to somewhat work around the memory type and optimization issues by making your test data set much larger such that memory optimizations aimed at smaller blocks of memory are effectively bypassed.
Firstly, you should look at the assembly language to see if there are any differences between passing by reference and passing by value.
Secondly, make the functions equivalent by passing by constant reference. Passing by value says that the original variable won't be changed. Passing by constant reference keeps the same principle.
My belief is that the two techniques should be equivalent in both assembly language and performance.
I'm no expert in this area, but I would tend to think that the reason why the two times are somewhat equivalent is due to cache memory.
When you need to access a memory location (say, address 0xaabbc125 on an IA-32 architecture), the CPU copies the memory block (addresses 0xaabbc000 to 0xaabbcfff) into your cache memory. Reading from and writing to main memory is very slow, but once it's been copied into your cache, you can access values very quickly. This is useful because programs usually require the same range of addresses over and over.
Since you execute the same code over and over and your code doesn't require a lot of memory, the first time the function is executed the memory block(s) is (are) copied to your cache once, which probably takes most of the 97000 time units. Any subsequent calls to your fooVal and fooRef functions will require addresses that are already in your cache, so they will require only a few nanoseconds (I'd figure roughly between 10ns and 1µs). Thus, dereferencing the pointer (since a reference is implemented as a pointer) is about double the time compared to just accessing a value, but it's double of not much anyway.
Someone who is more of an expert may have a better or more complete explanation than mine, but I think this could help you understand what's going on here.
A little idea: try running the fooVal and fooRef functions a few times (say, 10 times) before setting start and beginning the loop. That way (if my explanation is correct!) the memory block should already be in the cache when you begin looping, which means you won't be including caching in your times.
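As a sketch, that warm-up would look like this in the question's code (the repeat count of 10 is arbitrary):

// warm-up: call both functions before the first measurement so their
// code and data are already in the cache when timing begins
for (int w = 0; w < 10; ++w) {
    fooVal(a);
    fooRef(a);
}

gettimeofday(&start, NULL); // ...then time the loops as before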
About the super-high value you got, I can't explain that. But the value is obviously wrong.
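For what it's worth, there is an explanation that fits: the printf subtracts only the tv_usec fields and ignores tv_sec, so whenever the loop crosses a second boundary the difference is negative, and %lu then prints it as a huge unsigned number. A wraparound-safe computation uses both fields:

#include <sys/time.h>

// elapsed microseconds between two gettimeofday() samples,
// combining tv_sec and tv_usec so second boundaries don't wrap
long elapsed_us(const timeval &start, const timeval &stop)
{
    return (stop.tv_sec - start.tv_sec) * 1000000L
         + (stop.tv_usec - start.tv_usec);
}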
It's not a bug, it's a feature! =)

Strange C++ performance difference?

I just stumbled upon a change that seems to have counterintuitive performance ramifications. Can anyone provide a possible explanation for this behavior?
Original code:
for (int i = 0; i < ct; ++i) {
    // do some stuff...
    int iFreq = getFreq(i);
    double dFreq = iFreq;
    if (iFreq != 0) {
        // do some stuff with iFreq...
        // do some calculations with dFreq...
    }
}
While cleaning up this code during a "performance pass," I decided to move the definition of dFreq inside the if block, as it was only used inside the if. There are several calculations involving dFreq, so I didn't eliminate it entirely, as it does save the cost of multiple run-time conversions from int to double. I expected no performance difference, or, if anything, a negligible improvement. However, performance decreased by nearly 10%. I have measured this many times, and this is indeed the only change I've made. The code snippet shown above executes inside a couple of other loops. I get very consistent timings across runs and can definitely confirm that the change I'm describing decreases performance by ~10%. I would have expected performance to increase, because the int to double conversion would only occur when iFreq != 0.
Changed code:
for (int i = 0; i < ct; ++i) {
    // do some stuff...
    int iFreq = getFreq(i);
    if (iFreq != 0) {
        // do some stuff with iFreq...
        double dFreq = iFreq;
        // do some stuff with dFreq...
    }
}
Can anyone explain this? I am using VC++ 9.0 with /O2. I just want to understand what I'm not accounting for here.
You should put the conversion to dFreq immediately inside the if() before doing the calculations with iFreq. The conversion may execute in parallel with the integer calculations if the instruction is farther up in the code. A good compiler might be able to push it farther up, and a not-so-good one may just leave it where it falls. Since you moved it to after the integer calculations it may not get to run in parallel with integer code, leading to a slowdown. If it does run parallel, then there may be little to no improvement at all depending on the CPU (issuing an FP instruction whose result is never used will have little effect in the original version).
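As a sketch, in the question's terms:

for (int i = 0; i < ct; ++i) {
    int iFreq = getFreq(i);
    if (iFreq != 0) {
        double dFreq = iFreq; // start the int-to-double conversion early
        // ... do the integer work with iFreq; the conversion can
        //     retire in parallel with it ...
        // ... then do the calculations with dFreq ...
    }
}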
If you really want to improve performance, a number of people have done benchmarks and rank the following compilers in this order:
1) ICC - Intel compiler
2) GCC - A good second place
3) MSVC - generated code can be quite poor compared to the others.
You may also want to try -O3 if they have it.
Maybe the result of getFreq is kept in a register in the first case and written to memory in the second case? It might also be that the performance decrease has to do with CPU mechanisms such as pipelining and/or branch prediction.
You could check the generated assembly code.
This looks to me like a pipeline stall.

int iFreq = getFreq(i);
double dFreq = iFreq;
if (iFreq != 0) {

This allows the conversion to double to happen in parallel with other code, since dFreq is not being used immediately. It gives the compiler something to do between storing iFreq and using it, so this conversion is most likely "free".
But
int iFreq = getFreq(i);
if (iFreq != 0) {
    // do some stuff with iFreq...
    double dFreq = iFreq;
    // do some stuff with dFreq...
}
Could be hitting a store/reference stall after the conversion to double since you begin using the double value right away.
Modern processors can do multiple things per clock cycle, but only when the things are independent. Two consecutive instructions that reference the same register often result in a stall. The actual conversion to double may take 3 clocks, but all but the first clock can be done in parallel with other work, provided you don't refer to the result of the conversion for an instruction or two.
C++ compilers are getting pretty good at re-ordering instructions to take advantage of this, it looks like your change defeated some nice optimization.
One other (less likely) possibility is that when the conversion to double was before the branch, the compiler was able to remove the branch entirely. Branchless code is often a major performance win in modern processors.
It would be interesting to see what instructions the compiler actually emitted for these two cases.
Try moving the definition of dFreq outside of the for loop but keep the assignment inside the for loop/if block.
Perhaps the creation of dFreq on the stack in every loop iteration, inside the if, is causing the issue (although the compiler should take care of that). Perhaps it's a regression in the compiler: if the dFreq variable is outside the for loop it's created once; inside the if inside the for, it's created every time.
double dFreq;
int iFreq;
for (int i = 0; i < ct; ++i)
{
    // do some stuff...
    iFreq = getFreq(i);
    if (iFreq != 0)
    {
        // do some stuff with iFreq...
        dFreq = iFreq;
        // do some stuff with dFreq...
    }
}
Maybe the compiler is optimizing by hoisting the definition outside the for loop. When you put it inside the if, the compiler's optimizations can't do that.
There's a likelihood that this change caused your compiler to disable some optimizations. What happens if you move the declarations above the loop?
I once read a document about optimization that said defining variables just before their use, rather than long before, is good practice, and that compilers can optimize code that follows this advice.
This article (a bit old but quite valid) says something similar (with statistics): http://www.tantalon.com/pete/cppopt/asyougo.htm#PostponeVariableDeclaration
It's easy enough to find out. Just take 20 stackshots of the slow version, and of the fast version. In the slow version you will see on roughly 2 of the shots what it is doing that it is not doing in the fast version. You will see a subtle difference in where it halts in the assembly language.

Using scanf() in C++ programs is faster than using cin?

I don't know if this is true, but when I was reading the FAQ on one of the problem-providing sites, I found something that caught my attention:
Check your input/output methods. In C++, using cin and cout is too slow. Use these, and you will guarantee not being able to solve any problem with a decent amount of input or output. Use printf and scanf instead.
Can someone please clarify this? Is using scanf() in C++ programs really faster than using cin >> something? If yes, is it a good practice to use it in C++ programs? I thought it was C-specific, though I am just learning C++...
Here's a quick test of a simple case: a program to read a list of numbers from standard input and XOR all of the numbers.
iostream version:
#include <iostream>

int main(int argc, char **argv) {
    int parity = 0;
    int x;
    while (std::cin >> x)
        parity ^= x;
    std::cout << parity << std::endl;
    return 0;
}
scanf version:
#include <stdio.h>

int main(int argc, char **argv) {
    int parity = 0;
    int x;
    while (1 == scanf("%d", &x))
        parity ^= x;
    printf("%d\n", parity);
    return 0;
}
Results
Using a third program, I generated a text file containing 33,280,276 random numbers. The execution times are:
iostream version: 24.3 seconds
scanf version: 6.4 seconds
Changing the compiler's optimization settings didn't seem to change the results much at all.
Thus: there really is a speed difference.
EDIT: User clyfish points out below that the speed difference is largely due to the iostream I/O functions maintaining synchronization with the C I/O functions. We can turn this off with a call to std::ios::sync_with_stdio(false);:
#include <iostream>

int main(int argc, char **argv) {
    int parity = 0;
    int x;
    std::ios::sync_with_stdio(false);
    while (std::cin >> x)
        parity ^= x;
    std::cout << parity << std::endl;
    return 0;
}
New results:
iostream version: 21.9 seconds
scanf version: 6.8 seconds
iostream with sync_with_stdio(false): 5.5 seconds
C++ iostream wins! It turns out that this internal syncing / flushing is what normally slows down iostream i/o. If we're not mixing stdio and iostream, we can turn it off, and then iostream is fastest.
The code: https://gist.github.com/3845568
http://www.quora.com/Is-cin-cout-slower-than-scanf-printf/answer/Aditya-Vishwakarma
Performance of cin/cout can be slow because they need to keep themselves in sync with the underlying C library. This is essential if both C IO and C++ IO are going to be used.
However, if you are only going to use C++ IO, then simply use the line below before any IO operations.
std::ios::sync_with_stdio(false);
For more info on this, look at the corresponding libstdc++ docs.
Probably scanf is somewhat faster than using streams. Although streams provide a lot of type safety and do not have to parse format strings at runtime, scanf usually has the advantage of not requiring excessive memory allocations (this depends on your compiler and runtime). That said, unless performance is your only end goal and you are on the critical path, you should really favour the safer (slower) methods.
There is a very delicious article on this by Herb Sutter, "The String Formatters of Manor Farm", which goes into a lot of detail about the performance of string formatters like sscanf and lexical_cast and what kinds of things make them run slowly or quickly. This is probably analogous to the kinds of things that affect performance between C-style IO and C++-style IO. The main differences with the formatters tended to be type safety and the number of memory allocations.
I just spent an evening working on a problem on UVa Online (Factovisors, a very interesting problem, check it out):
http://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=35&page=show_problem&problem=1080
I was getting TLE (time limit exceeded) on my submissions. On these problem solving online judge sites, you have about a 2-3 second time limit to handle potentially thousands of test cases used to evaluate your solution. For computationally intensive problems like this one, every microsecond counts.
I was using the suggested algorithm (read about in the discussion forums for the site), but was still getting TLEs.
I changed just "cin >> n >> m" to "scanf( "%d %d", &n, &m )" and the few tiny "couts" to "printfs", and my TLE turned into "Accepted"!
So, yes, it can make a big difference, especially when time limits are short.
If you care about both performance and string formatting, do take a look at Matthew Wilson's FastFormat library.
edit -- link to accu publication on that library: http://accu.org/index.php/journals/1539
In general use, the statements cin and cout seem to be slower than scanf and printf in C++, but actually they are FASTER!
The thing is: in C++, whenever you use cin and cout, a synchronization process takes place by default that makes sure that if you use both scanf and cin in your program, they both work in sync with each other. This sync process takes time. Hence cin and cout APPEAR to be slower.
However, if the synchronization process is set to not occur, cin is faster than scanf.
To skip the sync process, include the following code snippet in your program right at the beginning of main():
std::ios::sync_with_stdio(false);
Visit this site for more information.
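A related tweak often combined with it: untying cin from cout with std::cin.tie(nullptr), so cout is not flushed before every read. A minimal sketch (the summing loop is just a placeholder workload):

#include <iostream>

int main()
{
    std::ios::sync_with_stdio(false); // drop synchronization with C stdio
    std::cin.tie(nullptr);            // don't flush cout before every cin read

    long long sum = 0;
    int x;
    while (std::cin >> x)
        sum += x;
    std::cout << sum << '\n';
    return 0;
}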
There are stdio implementations (libio) which implement FILE* as a C++ streambuf and fprintf as a runtime format parser. IOstreams don't need runtime format parsing; that's all done at compile time. So, with the backends shared, it's reasonable to expect iostreams to be faster at runtime.
Yes, iostream is slower than cstdio.
Yes, you probably shouldn't use cstdio if you're developing in C++.
Having said that, there are even faster ways to get I/O than scanf if you don't care about formatting, type safety, blah, blah, blah...
For instance this is a custom routine to get a number from STDIN:
inline int get_number()
{
    int c;
    int n = 0;
    while ((c = getchar_unlocked()) >= '0' && c <= '9')
    {
        // n = 10 * n + (c - '0');
        n = (n << 3) + (n << 1) + c - '0';
    }
    return n;
}
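A minimal usage sketch (repeating the routine so it compiles standalone; getchar_unlocked() is POSIX-only, so use getchar() elsewhere, and note that the routine returns 0 both at end of input and for an actual zero, which the XOR below tolerates):

#include <cstdio>

inline int get_number()
{
    int c, n = 0;
    while ((c = getchar_unlocked()) >= '0' && c <= '9')
        n = (n << 3) + (n << 1) + c - '0'; // n = 10 * n + (c - '0')
    return n;
}

int main()
{
    int parity = 0;
    // feof() becomes true once get_number() hits EOF
    while (!feof(stdin) && !ferror(stdin))
        parity ^= get_number();
    printf("%d\n", parity);
    return 0;
}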
The problem is that cin has a lot of overhead involved because it gives you an abstraction layer above scanf() calls. You shouldn't use scanf() over cin if you are writing C++ software, because that is what cin is for. If you want performance, you probably wouldn't be writing I/O in C++ anyway.
Of course it's ridiculous to use cstdio over iostream. At least when you develop software (if you are already using C++ over C, then go all the way and use its benefits instead of only suffering from its disadvantages).
But in the online judge you are not developing software, you are creating a program that should be able to do in 3 seconds things Microsoft software takes 60 seconds to achieve!!!
So, in this case, the golden rule goes like this (of course, if you don't get into even more trouble by using Java):
Use C++ and use all of its power (and heaviness/slowness) to solve the problem
If you get time limited, then change the cins and couts for printfs and scanfs
(if you get screwed up by using the string class, print like this: printf("%s", mystr.c_str());)
If you still get time limited, then try to make some obvious optimizations (like avoiding too many nested for/while/do-whiles or recursive functions). Also make sure to pass objects that are too big by reference...
If you still get time limited, then try changing std::vectors and sets to C arrays.
If you still get time limited, then go on to the next problem...
#include <stdio.h>
#include <unistd.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static int scanuint(unsigned int *x)
{
    int c; // int, not char, so the comparison against EOF is reliable
    *x = 0;
    do
    {
        c = getchar_unlocked();
        if (unlikely(c == EOF)) return 1;
    } while (c < '0' || c > '9');
    do
    {
        //*x = (*x<<3)+(*x<<1) + c - '0';
        *x = 10 * (*x) + c - '0';
        c = getchar_unlocked();
        if (unlikely(c == EOF)) return 1;
    } while (c >= '0' && c <= '9');
    return 0;
}

int main(int argc, char **argv)
{
    int parity = 0;
    unsigned int x;
    while (1 != scanuint(&x))
    {
        parity ^= x;
    }
    // folds in the last value when the input ends right after a digit
    // (when the input ends with a newline, x is 0 here and this is a no-op)
    parity ^= x;
    printf("%d\n", parity);
    return 0;
}
There's a bug in the end-of-file handling, but this C code is dramatically faster than the faster C++ version.
paradox#scorpion 3845568-78602a3f95902f3f3ac63b6beecaa9719e28a6d6 ▶ make test
time ./xor-c < rand.txt
360589110
real 0m11,336s
user 0m11,157s
sys 0m0,179s
time ./xor2-c < rand.txt
360589110
real 0m2,104s
user 0m1,959s
sys 0m0,144s
time ./xor-cpp < rand.txt
360589110
real 0m29,948s
user 0m29,809s
sys 0m0,140s
time ./xor-cpp-noflush < rand.txt
360589110
real 0m7,604s
user 0m7,480s
sys 0m0,123s
The original C++ took 30 seconds; the C code took 2 seconds.
Even if scanf were faster than cin, it wouldn't matter. The vast majority of the time, you will be reading from the hard drive or the keyboard. Getting the raw data into your application takes orders of magnitude more time than it takes scanf or cin to process it.