I am rewriting some rendering C code in C++. The old C code basically computes everything it needs and renders it at each frame. The new C++ code instead pre-computes what it needs and stores that as a linked list.
Now, actual rendering operations are translations, colour changes and calls to GL lists.
While executing the operations in the linked list should be pretty straightforward, the resulting method call appears to take longer than the old version (which computes everything every frame - I have of course made sure that the new version isn't recomputing).
The weird thing? It executes fewer OpenGL operations than the old version. But it gets weirder: when I added counters for each type of operation, and a good old printf at the end of the method, it got faster - both gprof and manual measurements confirm this.
I also took a look at the assembly generated by g++ in both cases (with and without the trace), and there is no major change (which was my initial suspicion) - the only differences are a few more stack words allocated for the counters, the code that increments them, and preparing for printf followed by a jump to it.
Also, this holds true with both -O2 and -O3. I am using gcc 4.4.5 and gprof 2.20.51 on Ubuntu Maverick.
I guess my question is: what's happening? What am I doing wrong? Is something throwing off both my measurements and gprof?
By spending time in printf, you may be avoiding stalls in your next OpenGL call.
Without more information, it is difficult to know what is happening here, but here are a few hints:
Are you sure the OpenGL calls are the same? You could use a tracing tool (apitrace, for example) to compare the call streams. Make sure no state change was introduced by the possibly different order in which things are done.
Have you tried using a runtime profiler? If you have many objects, simply chasing pointers while looping over the list could introduce cache misses.
Have you identified a particular bottleneck, either on the CPU side or GPU side?
Here is my own guess at what could be going wrong. The calls you send to the GPU take some time to complete. The previous code, by mixing CPU operations and GPU calls, let the CPU and GPU work in parallel; the new code instead makes the CPU compute everything first while the GPU idles, and then feeds the GPU all of the work at once while the CPU has nothing left to do.
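Roughly, the difference between the two submission patterns looks like the sketch below. All type and function names here are hypothetical stand-ins, not the asker's actual code; the point is only the overlap (or lack of it) between CPU work and GPU work.

    #include <GL/gl.h>
    #include <cstddef>
    #include <vector>

    // Hypothetical types standing in for whatever the real renderer uses.
    struct Object { GLuint displayList; float px, py, pz; };
    struct Op     { float x, y, z; GLuint list; };

    // Hypothetical per-frame CPU work (stands in for "computes everything it needs").
    static Op computeOp(const Object& obj)
    {
        Op op = { obj.px, obj.py, obj.pz, obj.displayList };
        return op;
    }

    // Old style: CPU work and GL calls are interleaved, so the GPU can start
    // executing a display list while the CPU prepares the next one.
    void renderOld(const std::vector<Object>& objects)
    {
        for (std::size_t i = 0; i < objects.size(); ++i) {
            Op op = computeOp(objects[i]);    // CPU busy here...
            glTranslatef(op.x, op.y, op.z);   // ...while the GPU chews on earlier calls
            glCallList(op.list);
        }
    }

    // New style: everything precomputed up front (GPU idle), then submitted in one
    // burst (CPU idle while the GPU catches up) - less overlap overall.
    void renderNew(const std::vector<Op>& precomputed)
    {
        for (std::size_t i = 0; i < precomputed.size(); ++i) {
            glTranslatef(precomputed[i].x, precomputed[i].y, precomputed[i].z);
            glCallList(precomputed[i].list);
        }
    }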
I'm asking with regard to the answers on this question. In my answer I first just took the time before and after the loops and printed out the difference, but as an update prompted by @cigien's answer, it seems that I benchmarked inaccurately by not warming up the code.
What is warming up the code? I think what happened here is that the string was moved into the cache first, and that made the benchmark results for the following loops close to each other. In my old answer, the first result was slower than the others, since (I think) it took more time to move the string into the cache. Am I correct? If not, what is warming up actually doing to the code? And, generally speaking, what else should I have done besides warming up to get more accurate results - i.e. how do I benchmark C++ code correctly (and C, if the answer is the same)?
To give you an example of warm-up: I recently benchmarked some NVIDIA CUDA kernel calls.
The execution speed seemed to increase over time, probably for several reasons, such as the GPU frequency being variable (to save power and to keep the chip cool).
Sometimes a slower call has an even worse impact on the next call, so the benchmark can be misleading.
If you need to feel safe about these points, I advise you to:
reserve all the dynamic memory (e.g. vectors) first
run the same work several times in a for loop before taking a measurement (see the sketch after this list)
this implies initializing the input data (especially random data) only once, before the loop, and copying it each time inside the loop to ensure that every iteration does the same work
if you deal with complex objects that have internal caches, I advise packing them in a struct and making an array of that struct (built with the same construction or cloning technique), in order to ensure that the same work is done on the same starting data in each iteration
you can avoid the for loop and the copying IF you alternate two calls very often and can assume that the impact of their behavioral differences will cancel out, for example in a simulation of continuous data such as positions
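A minimal sketch of that measurement loop is below. The squaring pass is just a placeholder workload, and the buffer size and run count are arbitrary; keeping the optimizer from deleting the work entirely is covered in the EDIT further down.

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main()
    {
        // Initialise the (random) input data only once, outside the measured loop.
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        std::vector<double> input(1 << 20);
        for (double& x : input) x = dist(rng);

        std::vector<double> scratch(input.size());   // reserved up front, reused every run
        using clock = std::chrono::steady_clock;
        clock::duration best = clock::duration::max();

        for (int run = 0; run < 100; ++run) {        // the early runs double as warm-up
            scratch = input;                         // same starting data for every run
            auto t0 = clock::now();
            for (double& x : scratch)
                x = x * x + 1.0;                     // placeholder for the real work
            auto t1 = clock::now();
            best = std::min(best, t1 - t0);
        }

        std::printf("best run: %lld us\n",
            (long long)std::chrono::duration_cast<std::chrono::microseconds>(best).count());
        return 0;
    }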
Concerning the measurement tools, I have always had problems with high_resolution_clock across different machines, such as inconsistent durations. By contrast, the Windows QueryPerformanceCounter is very good.
I hope that helps!
EDIT
I forgot to add that, as said in the comments, the compiler's optimization behavior can be annoying to deal with. The simplest way I have found is to increment a variable that depends on some non-trivial operations over both the warm-up and the measured data, in order to force the computation to actually be carried out, as sketched below.
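For example, something along these lines. A volatile "sink" is just one way to do it; the data contents and sizes are placeholders.

    #include <vector>

    // Writes to a volatile cannot be optimised away, so feeding results into this
    // sink forces the compiler to actually perform the measured work.
    volatile unsigned long long sink = 0;

    int main()
    {
        std::vector<double> data(1 << 20, 1.5);

        for (int run = 0; run < 100; ++run) {   // warm-up and measured runs alike
            double acc = 0.0;
            for (double x : data)
                acc += x * x;                   // non-trivial work depending on the data
            sink = sink + (unsigned long long)acc;   // fold the result into the sink
        }
        return 0;
    }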
I have code with a loop that counts to 10,000,000,000, and within that loop I do some calculations with conditional operators (if, etc.). It takes about 5 minutes to reach that number. So my question is: can I reduce that time by creating a DLL and calling the DLL's functions to do the calculations and return the values to the main program? Will it make a difference in the time it takes to do the calculations? Furthermore, will it improve the overall efficiency of the program?
By a “DLL” I assume you mean going from managed .NET code to unmanaged “native” compiled code. Yes, this approach can help.
It very much depends. Remember, the loop code itself likely costs only about 25 seconds on a typical i3 (that is the cost and overhead of counting to 10 billion while doing little else).
I also assume you went to the project settings, then Compile. On that screen, select Advanced Compile Options and check “Remove integer overflow checks”. Make sure your loop variables are integers, for speed.
At that point the “base” loop that does nothing will drop from about 20 seconds down to about 6 seconds.
So that is the base loop speed – now it comes down to what we are doing inside of that loop.
At this point, .NET DOES HAVE a JIT (a just-in-time native compiler). This means your source code goes to “CLR” code, and then in turn that code gets compiled down to native x86 assembly code. So this “does” get the source code down to REAL machine-code level. However, a JIT is certainly NOT as efficient, nor can it spend much “time” optimizing the code, since the JIT has to work on the “fly” without you noticing it. So C++ (or VB6, which runs as fast as C++ when natively compiled) can certainly run faster, but the question then is by how much?
An optimizing compiler might get you another doubling in speed for the actual LOOPING code, etc.
However, in BOTH cases (using .net managed code, or code compiled down to native Intel code), they BOTH LIKELY are calling the SAME routines to do the math!
In other words, if 80% of the time is spent in “library” code that does the math etc., then calling such code from C++ or calling such code from .NET will make VERY LITTLE difference, since the BULK of the work is spent in the same system code!
The above concept is really “supervisor” mode vs. your application mode.
In other words, the amount of time spent in your code vs. that spent in system “library” code means that the bulk of the heavy lifting is occurring in supervisor code. That means jumping from .NET to native C++/VB6 DLLs will NOT yield much in the way of performance.
So I would first ensure the loops and array index refs in your code are integer types. The above tip of taking off bounds checking will likely get you “close” to what a .dll would give. Worse, the time to “shuffle” the data to and from that external .dll routine will often cost you MORE than the time saved on the processing side.
And if your routines are doing database or file i/o, then all bets are off, as that is VERY different problem.
So I would first test/try your application with the [x] Remove integer overflow checks option checked (i.e. with the checks off). And make sure during testing that you use Ctrl-F5 in place of F5, to run your code without DEBUGGING. The above overflow check and options will NOT show increased speed when in debug mode.
So it is hard to tell – it really depends on how much math (especially floating-point calls) you are doing (supervisor code) vs. just moving values around in arrays. If most of the code is moving things around, then I suggest the integer optimizing above, and going to a .dll likely will not help much.
Couldn't you use Parallel.ForEach and split this huge loop into equal pieces?
Or try working with some BackgroundWorkers or even Threads (more than one!) to get optimal CPU usage and reduce the time spent.
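To illustrate the idea of splitting the range across workers (sketched in C++ here, since the question's language isn't shown; the per-iteration test and the fallback worker count are placeholders):

    #include <cstdint>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main()
    {
        const std::uint64_t total = 10000000000ULL;        // the 10,000,000,000 iterations
        unsigned workers = std::thread::hardware_concurrency();
        if (workers == 0) workers = 4;                      // fallback if unknown

        std::vector<std::uint64_t> partial(workers, 0);     // one result slot per worker, no sharing
        std::vector<std::thread> pool;

        for (unsigned w = 0; w < workers; ++w) {
            pool.push_back(std::thread([&partial, w, workers, total] {
                // Each worker handles its own contiguous slice of the range.
                std::uint64_t begin = total / workers * w;
                std::uint64_t end   = (w + 1 == workers) ? total : total / workers * (w + 1);
                std::uint64_t count = 0;
                for (std::uint64_t i = begin; i < end; ++i)
                    if (i % 7 == 0)                          // placeholder for the real conditionals
                        ++count;
                partial[w] = count;
            }));
        }

        std::uint64_t result = 0;
        for (auto& t : pool) t.join();
        for (std::uint64_t c : partial) result += c;
        std::printf("%llu\n", (unsigned long long)result);
        return 0;
    }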
I have spent the past year developing a logging library in C++ with performance in mind. To evaluate performance I developed a set of benchmarks to compare my code with other libraries, including a base case that performs no logging at all.
In my last benchmark I measure the total running time of a CPU-intensive task while logging is active and when it is not. I can then compare the time to determine how much overhead my library has. This bar chart shows the difference compared to my non-logging base case.
As you can see, my library ("reckless") adds negative overhead (unless all 4 CPU cores are busy). The program runs about half a second faster when logging is enabled than when it is disabled.
I know I should try to isolate this down to a simpler case rather than asking about a 4000-line program. But there are so many avenues for what to remove, and without a hypothesis I will just make the problem go away when I try to isolate it. I could probably spend another year just doing this. I'm hoping that the collective expertise of Stack Overflow will make this a much more shallow problem, or that the cause will be obvious to someone who has more experience than me.
Some facts about my library and the benchmarks:
The library consists of a front-end API that pushes the log arguments onto a lock-free queue (Boost.Lockfree) and a back-end thread that performs the string formatting and writes the log entries to disk. (A rough sketch of this producer/consumer pattern follows the source links below.)
The timing is based on simply calling std::chrono::steady_clock::now() at the beginning and end of the program, and printing the difference.
The benchmark is run on a 4-core Intel CPU (i7-3770K).
The benchmark program computes a 1024x1024 Mandelbrot fractal and logs statistics about each pixel, i.e. it writes about one million log entries.
The total running time is about 35 seconds for the single worker-thread case. So the speed increase is about 1.5%.
The benchmark produces an output file (this is not part of the timed code) that contains the generated Mandelbrot fractal. I have verified that the same output is produced when logging is on and off.
The benchmark is run 100 times (with all the benchmarked libraries, this takes about 10 hours). The bar chart shows the average time and the error bars show the interquartile range.
Source code for the Mandelbrot computation
Source code for the benchmark.
Root of the code repository and documentation.
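For readers who want the shape of the front-end/back-end split mentioned above, here is a minimal sketch. It is not the actual reckless API - the record layout, queue capacity and file name are all made up - it only shows a trivially copyable record being pushed through boost::lockfree::queue to a single formatting/writing thread.

    #include <boost/lockfree/queue.hpp>
    #include <atomic>
    #include <cstdio>
    #include <thread>

    // Hypothetical fixed-size log record (boost::lockfree::queue needs a trivially
    // copyable type, so string formatting is deferred to the back-end thread).
    struct LogEntry { int x; int y; int iterations; };

    boost::lockfree::queue<LogEntry> log_queue(4096);
    std::atomic<bool> log_done(false);

    // Front end: called from the worker threads; only a queue push on the hot path.
    inline void log_pixel(int x, int y, int iterations)
    {
        LogEntry e = { x, y, iterations };
        while (!log_queue.push(e))          // spin briefly if the queue is momentarily full
            std::this_thread::yield();
    }

    // Back end: a single thread that pops entries, formats them and writes to disk.
    void log_backend()
    {
        std::FILE* f = std::fopen("benchmark.log", "w");
        if (!f) return;
        LogEntry e;
        for (;;) {
            while (log_queue.pop(e))
                std::fprintf(f, "pixel (%d,%d): %d iterations\n", e.x, e.y, e.iterations);
            if (log_done.load(std::memory_order_acquire)) {
                while (log_queue.pop(e))    // final drain after the producers have finished
                    std::fprintf(f, "pixel (%d,%d): %d iterations\n", e.x, e.y, e.iterations);
                break;
            }
            std::this_thread::yield();      // nothing to do yet
        }
        std::fclose(f);
    }

    int main()
    {
        std::thread backend(log_backend);
        for (int y = 0; y < 1024; ++y)
            for (int x = 0; x < 1024; ++x)
                log_pixel(x, y, (x * y) % 256);   // stand-in for the Mandelbrot statistics
        log_done.store(true, std::memory_order_release);
        backend.join();
        return 0;
    }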
My question is, how can I explain the apparent speed increase when my logging library is enabled?
Edit: This was solved after trying the suggestions given in comments. My log object is created on line 24 of the benchmark test. Apparently when LOG_INIT() touches the log object it triggers a page fault that causes some or all pages of the image buffer to be mapped to physical memory. I'm still not sure why this improves the performance by almost half a second; even without the log object, the first thing that happens in the mandelbrot_thread() function is a write to the bottom of the image buffer, which should have a similar effect. But, in any case, clearing the buffer with a memset() before starting the benchmark makes everything more sane. Current benchmarks are here
Other things that I tried are:
Run it with the oprofile profiler. I was never able to get it to register any time in the locks, even after enlarging the job to make it run for about 10 minutes. Almost all the time was in the inner loop of the Mandelbrot computation. But maybe I would be able to interpret them differently now that I know about the page faults. I didn't think to check whether the image write was taking a disproportionate amount of time.
Removing the locks. This did have a significant effect on performance, but results were still weird and anyway I couldn't do the change in any of the multithreaded variants.
Compare the generated assembly code. There were differences but the logging build was clearly doing more things. Nothing stood out as being an obvious performance killer.
When uninitialised memory is first accessed, page faults will affect timing.
So, before your first call to std::chrono::steady_clock::now(), initialise the memory by running memset() on your sample_buffer.
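Something like the following; sample_buffer's type and size are placeholders, and the point is only that every page is touched before the first call to now():

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        const std::size_t pixels = 1024 * 1024;
        unsigned* sample_buffer = new unsigned[pixels];   // freshly mapped, untouched pages

        // Touch every page up front so the page faults are taken here,
        // not inside the timed region.
        std::memset(sample_buffer, 0, pixels * sizeof(unsigned));

        auto start = std::chrono::steady_clock::now();
        // ... run the Mandelbrot / logging benchmark, writing into sample_buffer ...
        auto stop = std::chrono::steady_clock::now();

        std::printf("%lld ms\n",
            (long long)std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count());
        delete[] sample_buffer;
        return 0;
    }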
I am using Callgrind in order to see how many times specific functions are called. However, I am also interested in the execution time.
I know the programs take much longer when running under Callgrind, since it has to collect information. However, what surprises me is how the timings change. In my case, I am running two different versions of the Fast Marching Method (FMM and Simplified FMM) on 2D and 3D grids. The results are as follows:
In 2D the FMM/SFMM ratio is not preserved at all, but at least it is always >1 (FMM always takes longer than SFMM). However, in 3D the effect of Callgrind is the complete opposite: the times change completely - SFMM is faster under Callgrind but slower in regular execution.
The compilation flags I am using (-Ofast, -fno-finite-math-only) are the same all the time, and the same binaries are run under Callgrind and in regular execution (./bin-name).
The time measuring functions are those from std::chrono.
Therefore, the question is: as I am using the same binary in all cases, how is it possible that the same binary behaves so differently? Is the other data I am getting (function calls, % time cost, etc.) reliable in this case? I expected Callgrind-like results when running the binaries with the regular execution command.
EDIT: in the implementation, the main change is that in FMM I am using the Boost Fibonacci heap, and in SFMM I am using a small modification with a Boost priority queue.
Thank you!
The bug we are tracking occurs within a specific VxWorks-based embedded environment (the vendor modified things to an unknown extent and provides an abstraction layer over much of the VxWorks functionality). We have two tasks running at different priorities, executing roughly every 100 ms. The task with the higher priority simply counts up an integer (just so it does something at all), while the task with the lower priority creates a string, like this:
std::string text("Some text");
Note that there is no shared state between these tasks whatsoever. They both operate exclusively on automatic local variables.
On each run, each task does this a hundred times, so that the probability of the race condition occurring is higher. The application runs fine for a couple of minutes, and then the CPU load shoots from 5% to 100% and stays there. The entire time appears to be spent in the task that created the string. So far we have not been able to reproduce the behavior without using std::string.
We are using GCC 4.1.2 and running on VxWorks 5.5. The program is run on a Pentium III.
I have tried analyzing what happens there, but I cannot step into any of the string methods with a debugger, and adding print statements into basic_string does not seem to work (this was the background for this question of mine). My suspicion is that something in there corrupts the stack, resulting in a runaway loop. My question is: is there any known error in older VxWorks versions that could explain this? If not, do you have any further suggestions on how to diagnose this? I can get the disassembly and stack dumps, but I have no experience in interpreting either. Can anyone provide some pointers?
If I remember correctly, VxWorks provides thread-specific memory locations (or possibly just one location). This feature lets you specify a memory location that will be automatically shadowed across task switches, so that whenever a thread writes to it, the value is preserved for that thread. It's sort of like an additional register save/restore.
GCC uses one of those thread-specific memory locations to track the exception stack. Even if you don't otherwise use exceptions, there are some situations (particularly operator new, which the std::string constructor may invoke) that implicitly create try/catch-like environments that manipulate this stack. On a much older version of GCC I saw that go haywire in code that nominally did not use any exception handling.
In that case the solution was to compile with -fno-exceptions to eliminate all of that behavior, after which the problem went away.
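If you want to try the same workaround, it is just a compiler flag added when building the affected translation units (the file name here is only a placeholder):

    g++ -fno-exceptions -O2 -c lower_priority_task.cpp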
Whenever I see a weird race-condition in a VxWorks system with unexplainable behavior, my first thought is always "VX_FP_TASK strikes again!" The first thing you should check is whether your threads are being created with the VX_FP_TASK flag in taskSpawn.
The documentation says something like "It is deadly to execute any floating-point operations in a task spawned without VX_FP_TASK option, and very difficult to find." Now, you may think that you're not using FP registers at all, but C++ uses them for some optimizations, and MMX operations (like you may be using for your add there) do require those registers to be preserved.
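For reference, a spawn call with the option set might look roughly like this; the priority, stack size and names are placeholder values, not taken from the question:

    #include <vxWorks.h>
    #include <taskLib.h>

    // Hypothetical entry point for the lower-priority task that builds std::strings.
    extern "C" void stringTaskEntry(void);

    void spawnStringTask(void)
    {
        // VX_FP_TASK makes the kernel save/restore the floating-point (and, on x86,
        // MMX/SSE) registers across context switches for this task.
        int tid = taskSpawn((char*)"tString",       // task name
                            100,                    // priority (placeholder)
                            VX_FP_TASK,             // options: preserve FP state
                            32 * 1024,              // stack size in bytes
                            (FUNCPTR)stringTaskEntry,
                            0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
        if (tid == ERROR) {
            /* handle spawn failure */
        }
    }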