glfwSwapBuffers really slow (no vsync) - C++

I made a basic OpenGL program and ran it, and I was only getting 2400 fps with dips to 700 fps in release mode. I was really confused, so I stripped everything out of the main loop until the code looked like this:
while (true)
{
    glfwSwapBuffers(window);
}
and now I'm only getting 3400-4000 fps (this is in release mode).
For a bit of context, I've made a game in DirectX 11 that gets 8000 fps when nothing is drawing, and that's with input and game logic running, not an empty loop.
I've tried compiling GLFW myself and using the precompiled binaries. I'm thinking that maybe I need to figure out how to build GLFW as part of my project so I can get more optimization.
I'm really confused; I want to do some heavy stuff in this game, but I'm already getting 2-4x less performance when nothing is going on.
Last second addition:
People have talked about glfwSwapBuffers having low performance in other threads, but in all those cases they were using vsync. (I'm using glfwSwapInterval(0).)

There might be multiple reasons for the reduced performance of glfwSwapBuffers. Since it works asynchronously, performance might be limited by synchronization such as v-sync or the monitor refresh rate (60 Hz?). Usually you want your engine to be in sync with these other processes (even if they are a limiting factor). You might also want to try glfwSwapInterval(0).
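As a rough way to check where the time goes, here is a minimal sketch (assuming a window already created with glfwCreateWindow) that disables vsync and times each swap; note that a driver control panel setting can still override the requested swap interval.

#include <GLFW/glfw3.h>
#include <cstdio>

void runLoop(GLFWwindow* window)
{
    glfwMakeContextCurrent(window);   // the context must be current on this thread
    glfwSwapInterval(0);              // request no vsync (the driver may override this)
    while (!glfwWindowShouldClose(window))
    {
        double t0 = glfwGetTime();
        glfwSwapBuffers(window);
        glfwPollEvents();
        printf("swap took %.3f ms\n", (glfwGetTime() - t0) * 1000.0);
    }
}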

Related

Why does my program run faster on first launch than on next launches?

I have been working for 2.5 years on a personal flight sim project in my leisure time, written in C++ and using OpenGL on a Windows 7 PC.
I recently had to move to Windows 10. The hardware is exactly the same. I reinstalled Code::Blocks.
It turns out that on the first launch of my project after the system starts, performance is OK, similar to what I used to see with Windows 7. But the second, third, and all subsequent launches give me lower performance, with significantly less fluidity in frame rate compared to the first run, detectable by eye. This never happened with Windows 7.
Every time I start my system, the first run is fast and the next ones are slower.
I had a look at the Task Manager while doing some runs. The first run is handled by one of the 4 cores of my CPU (Core i5-6500) at approximately 85%. For the next runs, the load is spread across the 4 cores. During those slower runs on 4 cores, I tried to modify the affinity and direct my program to only one core, without significant improvement in performance. The selected core was working at full load, though.
My C++ code doesn't explicitly use any threading functions at this stage. From my modest programmer's point of view, there is only one main thread, run in main(). In the Task Manager, I can see that some 10 to 14 threads are alive when my program runs. I guess (wrongly?) that they are implicitly created by the use of joysticks, TrackIR, or other communication tasks with the GPU...
Could it come from memory not being correctly freed when my program stops? I thought Windows would free it properly, even if I forgot some 'delete' after using 'new'.
Has anyone encountered a similar situation? Any explanation coming to your minds?
Any suggestion to better understand these facts? Obviously, my ultimate goal is to have a consistent performance level whatever the number of launches.
(Screenshots of the first and second runs as seen in Task Manager were attached to the original post but are not included here.)
Well, I ran into problems when switching clients to Windows 10 at my work too. Here are a few I encountered, all because Windows 10 has changed process scheduling, creating a lot of issues like:
Blockless (lock-free) thread synchronization techniques from older Windows versions no longer working.
A well-placed Sleep() sometimes helps. By the way, similar problems were encountered when switching from Windows 2000 to Windows XP.
Huge slowdowns and frequent freezes of a few seconds in older single-threaded apps.
Usually, setting the affinity to a single core solves this. You can also do this in the Task Manager just to check, and if it helps, you can do it in code too. A sketch of how to do it with WinAPI is shown below.
See also: Cache size estimation on your system?
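This is a minimal sketch, not the original answer's code: SetProcessAffinityMask with mask bit 0 pins the current process to the first logical CPU.

#include <windows.h>

// Pin the current process to the first logical CPU (affinity mask bit 0).
bool pinToFirstCore()
{
    return SetProcessAffinityMask(GetCurrentProcess(), 1) != 0;
}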
Messed-up driver timings causing zombie processes, even total freezes and/or BSODs.
I deal with USB in my work and it's a nightmare sometimes on Windows 10. On top of all this, Windows 10 tends to force the wrong drivers onto devices (like graphics cards, custom USB systems, etc. ...).
Apps being automatically frozen or closed if they do not respond to the wndproc in time.
In Windows 10 the timeout is much, much smaller than in older versions. If this is the case, you can try running in compatibility mode for an older Windows (set in the icon properties on the desktop), although that does not help with #1 and #2, or change the app's code to speed up the response. For example, in VCL you can call ProcessMessages from inside blocking code to remedy this, or you can use threads for the heavy lifting. Just be careful with rendering and WinAPI, as calling some WinAPI functions (any window/visual-related stuff) from outside the main thread causes havoc...
On top of all this, old IDEs (especially for MCUs) don't work properly anymore, and the new ones are usually much worse to work with (or unusable because they lack functionality that was present in older versions), so I stayed faithful to Windows 7 for development purposes.
If none of the above helps, then try to log the times your tasks needed; it might show you which part of the code is the problem. I usually do this using a timing graph like this:
Both the x and y axes are time, and each task has its own color and row in the graph. The graph scrolls in time (to the left in my case) and has a changeable time scale. The numbers show the actual and max (or sliding average) values...
This way I can see whether some task is taking too much time or even overlapping its next execution. Peaks are also nicely visible, and it all runs at runtime without any debug tools, which might change the behavior of the execution.
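The per-task measurements behind such a graph can come from something as small as the sketch below; the names here are illustrative, not taken from the original answer.

#include <chrono>
#include <cstdio>

// Prints how long the enclosing scope took when it ends.
struct ScopedTimer
{
    const char* name;
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    ~ScopedTimer()
    {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                      std::chrono::steady_clock::now() - start).count();
        printf("%s: %lld us\n", name, (long long)us);
    }
};

void updatePhysics() // hypothetical task
{
    ScopedTimer t{"physics"};
    // ... task body ...
}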

Analyzing function memory and CPU usage

I am making a video game, a pretty small 2D shooter. Recently I noticed that the frame rate drops dramatically when there are about 9 or more bullets in the scene. My laptop can handle advanced 3D games, and my game is very, very simple, so hardware should not be the problem.
So now I have a very big codebase (at least for one person) and I am pretty confused about where I should look. There are too many functions and classes related to bullets; for example, I don't know how to tell whether the problem is in the rendering function or the update function. I could use the Visual Studio 2015 debugging tools for other programs, but for a game they are not practical: if I put a breakpoint before the render function, it would be hit 60 times a second, plus I can't input anything, so I will never have bullets to test the render function with! I tried to use Task Manager, and I realized that CPU usage goes up really fast for each bullet, but when the game slows down only 10 percent of the CPU is used!
So my questions are:
How can I analyze functions when I can't use a debugging tool?
And why does the game slow down while it can still use system resources?
To see what part consumes most of the processing power, you should use a function profiler. It doesn't "debug", but it creates a report when it's finished.
Valgrind is a good tool for that.
Why does the game slow down? That depends on your implementation. I can create a program that divides two numbers and make it take 5 minutes to calculate the result.
We're in the video-game industry as well and we use a very simple tool on PC for CPU profiling: very sleepy.
http://www.codersnotes.com/sleepy/
It is simple, but it has really helped me out a lot of times. Just fire up the program from the IDE, let very sleepy run for a few thousand samples, and off you go!
When it comes to memory leaks, Valgrind is a good tool, as already noted by The Quantum Physicist.
For timing, I would write my own small tracing/profiling tool (if my IDE does not already have one). Use text debugging output to write short messages to a log file. Something like this:
#include <cstdio>
#include <ctime>

void HandleBullet() {
    // clock() stands in for the original pseudo-call GetSysTime()
    printf("HandleBullet START: %ld\n", (long)clock());
    // do your function stuff
    printf("HandleBullet END: %ld\n", (long)clock()); // or calculate the elapsed time directly
}
Write those debugging messages in all of the functions where you think they could take too long.
After some execution time, you can look into that file and see if something obvious happened (blocking somewhere).
If not, use a high-level language of your choice to write a small parser for the log file you created, to tidy up and analyze your output. Calculate things like the overall time spent in some function, or chart which functions took the longest. It should not be too difficult if you stick to a log message style that is easily parsable for you.
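As a sketch of that idea (sticking to C++ here; the log format and file name are assumptions matching the example above), this sums up the total time spent per function:

#include <fstream>
#include <iostream>
#include <map>
#include <string>

int main()
{
    std::ifstream log("trace.log"); // assumed log file name
    std::map<std::string, long long> start, total;
    std::string name, tag;
    long long t;
    while (log >> name >> tag >> t) // lines look like "HandleBullet START: 1234"
    {
        if (tag == "START:") start[name] = t;
        else if (tag == "END:") total[name] += t - start[name];
    }
    for (const auto& entry : total)
        std::cout << entry.first << " spent " << entry.second << " time units total\n";
}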

SDL game loop is dropping frames because of SDL_GL_SwapWindow

I'm just trying to make an empty game loop that doesn't lag!
My loop does basically nothing, yet sometimes it lags enough to drop frames (I'm trying to run at 60 fps).
I traced the problem to SDL_GL_SwapWindow. I made sure vsync is turned off.
Most of the time, SDL_GL_SwapWindow(window); takes <1 ms. But sometimes it takes long enough to drop frames. Is this normal? I can't believe my raw C++ empty game loop is sometimes dropping frames!
My code doesn't do anything interesting, I've tried tweaking it quite a bit, but I've seen no improvement. You can see it all here http://pastebin.com/GpLAH8SZ
P.S. I'm on a decent gaming desktop!
I think it is the OS, which may not schedule your process 100% of the time.
You can raise the process priority class (see the MSDN documentation on priority classes). But there are going to be intervals where Windows does not have the resources to keep your code running.
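A minimal sketch of that suggestion, assuming a Windows build; SetPriorityClass only reduces scheduler-induced hitches, it cannot eliminate them.

#include <windows.h>

// Ask the scheduler to favor this process; returns false on failure.
bool raisePriority()
{
    return SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS) != 0;
}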

SDL_RenderPresent Hangs Forever

I've come across an inexplicable error in SDL 2.0.3 when using hardware-rendered graphics. For some reason, around 5 minutes after the program starts, my graphical window closes but my console window stays open. There is no error thrown or anything to signify a problem.
When I pause the debugger, it stops inside SDL_RenderPresent(). I followed the call stack to a function inside ntdll.dll called WaitForSingleObject(), but I'm not sure what's causing it to hang forever.
Also, this does not happen when I use software-rendered graphics. I am running it on an AMD FirePro M5100 FireGL V with the latest drivers installed.
My question is, does anyone know what might cause SDL_RenderPresent() to never return?
From the description, it seems that there are locks not being released by the lower levels of the graphics pipeline.
From the fact that it happens after 5 minutes, it seems that there is a resource leak somewhere.
All of this is just a wild guess, of course, but I'd say that either the application code or the SDL code is leaking resources (handles to textures, vertex buffers, and the like), and that some part of the code (either at the lower levels of SDL or in the driver) is not behaving nicely when it runs out (this happens often; in many cases, low-resource conditions are not very well tested and handled).
This doesn't happen in software rendering because there the resources are basically unlimited. A confirmation of this kind of problem would be that, when running in software rendering, the program works but process memory use keeps growing and growing.
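For illustration only (not taken from the question's code), this is the kind of per-frame leak the answer describes; dropping the SDL_DestroyTexture call would slowly exhaust renderer resources under hardware rendering:

#include <SDL.h>

void renderFrame(SDL_Renderer* renderer, SDL_Surface* surface)
{
    SDL_Texture* tex = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, tex, nullptr, nullptr);
    SDL_RenderPresent(renderer);
    SDL_DestroyTexture(tex); // forgetting this each frame leaks a texture handle
}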
Pay attention also to any code that "catches" any exception/failure and keeps running after that. Writing complex software that works correctly after an abnormal state is extremely difficult (basically impossible beyond trivial cases because exception safety doesn't scale by composition: the only way that doesn't make the complexity explode is to have logical partitioning "walls" and re-initialize whole subsystems).

Why does my DirectInput8 stack overflow?

The overall program is too complex to display here. Basically, just pay attention to the green highlights in my recent git commit. I am very new to DirectInput, so I expect I've made several errors. I have very carefully studied the MSDN documentation, so I promise I'm not just throwing this out there and stamping FIX IT FOR ME on it. :)
Basically, I think I have narrowed down my problem to the area of code around Engine::getEvent (line 238+). I do not understand how these functions work, and I've messed with certain pieces to achieve different results. My goal here is to simply read in keyboard events directly and output those raw numbers to the screen (I will deal with the numbers' meaning later). The problem here relates to KEYBOARD_BUFFER_SIZE. If I make it small, the program seems to run fine, but it outputs no events. If I make it big, it runs a bit better, but it starts to slow down and then freeze (the OpenGL window just has a rotating color cube). How do I properly capture keyboard events?
I checked the return values on all the setup steps higher in the code. They all return DI_OK just fine.
Your code seems to be okay (according to this tutorial, which I have used in the past). The use of several stack-based arrays is questionable, but shouldn't be too much of an issue (unless you start having lots of concurrent getEvent calls running).
However, your best bet would be to stop using DirectInput and start using Windows Raw Input. It's best to make this switch early (i.e., now) rather than realise later on that you really need to use something other than DirectInput to get the results you want.
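A minimal sketch of what that switch involves, assuming an existing window handle hwnd; real code would then handle WM_INPUT messages in the window procedure:

#include <windows.h>

// Register to receive raw keyboard input for the given window.
bool registerKeyboardRawInput(HWND hwnd)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01; // HID usage page: generic desktop controls
    rid.usUsage     = 0x06; // HID usage: keyboard
    rid.dwFlags     = 0;    // default: deliver input while the window has focus
    rid.hwndTarget  = hwnd;
    return RegisterRawInputDevices(&rid, 1, sizeof(rid)) != FALSE;
}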