C++ | Win32 API | Stuck with white screen | InvalidateRect()

I have a large, complex application written in C++ (no MFC or .NET). The client that uses the software most aggressively will, within an hour or so of starting it, get to a state where all the windows stop painting. We get reports that the application has "hung" because as far as they can tell nothing is happening. In reality, the application is functioning, just not displaying anything.
I've tried a lot of different things to no avail. I'm out of ideas...

You probably already have a hunch about what it is; you give it away in the first sentence:
... large, complex application ...
It sounds like you have a GDI resource leak somewhere. To confirm this, look at the GDI Objects column in Task Manager for your process; at some point most GDI operations will start failing for your application.
Make sure you are freeing all handles correctly. Note that different GDI objects require different release functions: for example, a DC obtained with GetDC is released with ReleaseDC, but one created with CreateDC must be destroyed with DeleteDC.
This is why RAII smart objects (like smart pointers) are recommended for resource management in C++ (where freeing is managed by the smart object to reduce the likelihood of leaks and errors).
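A minimal sketch of that RAII idea for a window DC (the class name is mine; real code would want similar guards for pens, brushes, and bitmaps using DeleteObject):

    #include <windows.h>

    // RAII guard for a window DC obtained with GetDC().
    // Note: a DC from CreateDC() would need DeleteDC() instead.
    class WindowDC {
    public:
        explicit WindowDC(HWND hwnd) : hwnd_(hwnd), hdc_(GetDC(hwnd)) {}
        ~WindowDC() { if (hdc_) ReleaseDC(hwnd_, hdc_); }
        WindowDC(const WindowDC&) = delete;
        WindowDC& operator=(const WindowDC&) = delete;
        operator HDC() const { return hdc_; }
    private:
        HWND hwnd_;
        HDC hdc_;
    };

    void DrawFrame(HWND hwnd) {
        WindowDC dc(hwnd);              // acquired here
        Rectangle(dc, 0, 0, 100, 100);  // use like a plain HDC
    }                                   // ReleaseDC runs on every exit path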

I'd bet that the application is leaking GDI objects, and when the per-process GDI handle quota (10,000 by default) is exhausted, it can no longer paint itself.
You can check whether this is the case by adding the GDI Objects column to the Windows Task Manager (or using a process viewer such as SysInternals' Process Explorer) and seeing whether this number grows without bound over time.
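If you'd rather watch the count from inside the process than in Task Manager, GetGuiResources() reports it directly; a small sketch (where and how often you log it is up to you):

    #include <windows.h>
    #include <cstdio>

    // Logs the current GDI and USER handle counts for this process.
    // A count that climbs steadily between calls points at a leak.
    void LogGuiHandleCounts() {
        DWORD gdi  = GetGuiResources(GetCurrentProcess(), GR_GDIOBJECTS);
        DWORD user = GetGuiResources(GetCurrentProcess(), GR_USEROBJECTS);
        std::printf("GDI objects: %lu, USER objects: %lu\n", gdi, user);
    }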

Your application may actually be suffering from an exception that is being silently swallowed (this can happen in user-mode callbacks, such as window procedures, on 64-bit Windows). See Microsoft KB article 976038.

Related

Is it OK NOT to do glDeleteBuffers and other OpenGL (3.3) cleanups?

I sometimes forget to do cleanup and am afraid that their resources remain in GPU memory.
Things I use: shader programs, vertex array objects, buffer objects, and textures.
All OpenGL resources are supposed to be released automatically on destruction of the OpenGL context (of all shared contexts, to be precise). So in practice there should be no risk of leaking GPU memory when closing an OpenGL context with some objects still unreleased, as long as your application does not trigger a video driver bug.
The system will also take care of releasing all resources of a closed application, even if some OpenGL contexts were never destroyed. Otherwise, debugging 3D applications would be a total nightmare if the GPU kept resources allocated after an application crash.
To prove the idea, just write a simple test application that allocates large amounts of GPU memory (textures/VBOs) and track memory usage via external tools. Conveniently, Task Manager in Windows 10 has been significantly improved and shows detailed GPU memory statistics.
From a design point of view, however, tolerating incomplete clean-up sounds like a bad idea: the same release procedures are used elsewhere in renderer code, where skipping them causes real problems.
All resources you create for an OpenGL context are associated with that context (and any other context that you share resources with). If you destroy that context, all resources associated with it will be freed.
If you don't destroy the context when your program exits, then the OS will clean up after you.
That being said, destroying resources when you're done with them is a good habit to get into.
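If you want that habit enforced by the language, the usual pattern is a small RAII owner per object type; a sketch (my own naming, assuming a function loader such as glad or GLEW has been initialized, and that the object is destroyed while its context is still current):

    // Owns one OpenGL buffer object; deletes it when the scope ends.
    // Assumes glGenBuffers/glDeleteBuffers are available via a loader
    // (glad, GLEW, ...) and that a context is current on destruction.
    class GlBuffer {
    public:
        GlBuffer()  { glGenBuffers(1, &id_); }
        ~GlBuffer() { glDeleteBuffers(1, &id_); }
        GlBuffer(const GlBuffer&) = delete;
        GlBuffer& operator=(const GlBuffer&) = delete;
        GLuint id() const { return id_; }
    private:
        GLuint id_ = 0;
    };

The same shape works for textures (glDeleteTextures), vertex array objects (glDeleteVertexArrays), and shader programs (glDeleteProgram).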

How to get Process ID and use EmptyWorkingSet on Windows?

I use a game engine, and as the game goes on, unused textures, like those from past levels, do not seem to be cleared automatically. The developers, though, state that DirectX textures do not need to be cleared manually; they are simply swapped out automatically when not in use.
However, my game seems to increase its memory usage with each level. I am still testing for leaks and whatnot; in the meantime, I'd like to use the EmptyWorkingSet WinAPI function to lower the memory usage.
I do have the HWND of the application; how can I get its process ID and use EmptyWorkingSet to clear the unused memory?
Do NOT do this.
EmptyWorkingSet() is not a magic bullet; it will only make the memory used by your app appear lower when queried by Task Manager. The memory will merely have been paged out to disk, and you'll get lots of page faults as a result (a lose-lose situation).
The only correct way to fix this is to fix your memory leaks. Use a leak-detection tool (Valgrind's memcheck on Linux, or something like Dr. Memory or the CRT debug heap on Windows) to locate the issue, and make sure you're releasing memory in all the places it needs to be released.
Also, if the memory usage you see going up is physical memory in Task Manager, this is not DirectX, as Direct3D stores texture data in server memory (i.e., VRAM).
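For completeness, since the question literally asks how: the mechanics look like the sketch below, but, per the above, expect page faults rather than real savings.

    #include <windows.h>
    #include <psapi.h>   // EmptyWorkingSet(); link with psapi.lib

    // NOT recommended: this only pages the process's memory out to
    // disk so Task Manager shows a smaller number; nothing is freed.
    bool TrimWorkingSetOf(HWND hwnd) {
        DWORD pid = 0;
        GetWindowThreadProcessId(hwnd, &pid);   // PID owning the window
        HANDLE process = OpenProcess(
            PROCESS_QUERY_INFORMATION | PROCESS_SET_QUOTA, FALSE, pid);
        if (!process) return false;
        BOOL ok = EmptyWorkingSet(process);
        CloseHandle(process);
        return ok != FALSE;
    }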

glDeleteFramebuffers explicit call

In OpenGL tutorials I have never seen anyone call glDeleteFramebuffers before the program finishes.
Should I delete the framebuffer before closing the application, or will the OpenGL driver do this for me?
P.S. The same question applies to textures and glDeleteTextures.
The driver will free all resources associated with the OpenGL objects of a process when that process exits. You don't have to worry about system wide leaks if an application exits without cleaning up.
Even though the mechanisms are different, the behavior is very much like what happens with memory allocations. If you don't free all your dynamically allocated memory before exiting, all the memory that the application allocated will still be returned to the system. (*)
Still, I think it's generally good style to explicitly clean up all resources before exiting. It can also be helpful if you use tools that detect memory leaks.
(*) I'm talking about modern, full-featured operating systems here like they would typically be used on a desktop computer, a smart phone, or a tablet. I can imagine that this might not be true on minimal operating systems.
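The explicit clean-up typically lives in the destructor of whatever owns the objects, run while the context is still alive; an illustrative shape (the names are mine, and a GL loader is assumed):

    // Hypothetical render-target owner; assumes a GL loader is set up
    // and the owning context is current when the destructor runs.
    struct RenderTarget {
        GLuint fbo = 0;
        GLuint colorTex = 0;

        ~RenderTarget() {
            // A name of 0 is silently ignored, so the defaults are safe.
            glDeleteTextures(1, &colorTex);
            glDeleteFramebuffers(1, &fbo);
        }
    };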

How to make Qt GUI apps in C++ without memory leaks

I haven't been able to create a Qt GUI app that didn't have over 1K 'definitely lost' bytes in valgrind. I have experimented with this, making minimal apps that just show one QWidget or a class extending QMainWindow, and apps that just create a QApplication object without showing or executing it (or both), but they always leak.
Trying to figure this out, I have read that it's because X11 or glibc has bugs, or because valgrind gives false positives. And in one forum thread it seemed to be implied that creating a QApplication object in the main function and returning its exec() result, as is done in tutorials, is a "simplified" way to make GUIs (and not necessarily a good one, perhaps?).
The valgrind output does indeed mention libX11 and libglibc, and also libfontconfig. The rest of the memory losses, 5 loss records, occur at ??? in libQtCore.so during QLibrary::setFileNameAndVersion.
If there is a more appropriate way to create GUI apps that prevents even just some of this from happening, what is it?
And if any of the valgrind output is just noise, how do I create a suppression file that suppresses the right things?
EDIT: Thank you for comments and answers!
I'm not worrying about the few lost kB themselves, but it'll be easier to find my own memory leaks if I don't have to filter several screens of errors but can normally get an "OK" from valgrind. And if I'm going to suppress warnings, I'd better know what they are, right?
Interesting to see how accepted leaks can be!
It is not uncommon for large-scale, multi-thread-capable libraries such as Qt, wxWidgets, X11, etc. to set up singleton-type objects that are initialized once when a process starts and then make no attempt to clean up the allocation when the process shuts down.
I can assure you that anything "leaked" from a function such as QLibrary::setFileNameAndVersion() has been left behind intentionally. The bits of memory left behind by X11/glibc/fontconfig are probably not bugs either.
It could be seen as bad coding practice or etiquette, but it can also greatly simplify certain types of tasks. Operating systems these days offer a very strong guarantee of cleaning up any memory or resources left open by a process when it is killed (either gracefully or by force). If an allocation is very likely to be needed for the duration of the application, including shutdown procedures -- and various core components of Qt would qualify -- then it can be a boon to performance to have the library set up some allocations as soon as it is loaded/initialized and let them persist indefinitely. Among other things, this keeps the memory available for any C++ destructors that might still reference it during shutdown.
Since those allocations are set up only once, and from one point in the code, there is no risk of a meaningful memory leak. It's just memory that belongs to the process and is thus cleaned up when the process is closed by the operating system.
Conclusion: if the memory leak isn't in your code, doesn't appear to get significantly larger over time (and by significant these days, think megabytes), and/or clearly originates from first-time initialization code that is only ever invoked once within your app, then don't worry about it. It is probably intentional.
One way to test this is to run your code inside a loop and vary the number of iterations. If the difference between allocs and frees is independent of the number of iterations, you are likely safe.
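As for the suppression-file half of the question: valgrind will generate the suppression blocks for you, so you don't have to guess at the right frames. Roughly (the file names here are placeholders):

    # 1. Have valgrind print a suppression block for each reported leak:
    valgrind --leak-check=full --gen-suppressions=all ./myapp 2> raw.log

    # 2. Copy the emitted {...} blocks you want to ignore into qt.supp,
    #    giving each one a name of your choosing, e.g.:
    #    {
    #       qtcore_init_noise
    #       Memcheck:Leak
    #       ...
    #       obj:*/libQtCore.so*
    #    }

    # 3. Load the file on later runs:
    valgrind --leak-check=full --suppressions=qt.supp ./myapp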

Memory usage and minimizing

We have a fairly graphics-intensive application that uses the FOX toolkit and OpenSceneGraph, and of course C++. I notice that after running the application for some time, there seems to be a memory leak. However, when I minimize it, a substantial amount of memory appears to be freed (as witnessed in the Windows Task Manager). When the application is restored, memory usage climbs again but plateaus at an amount less than it was before the minimize.
Is this a huge indicator that we have a nasty memory leak? Or might this be something with how Windows handles graphical applications? I'm not really sure what is going on.
What you are seeing is simply memory caching. When you call free() or delete, most implementations won't actually return the memory to the OS; they keep it around so it can be handed back much faster the next time you request it. When your application is minimized, they free this memory because you won't be requesting it again any time soon.
It's unlikely that you have an actual memory leak. Task Manager is not particularly accurate, and there's a lot of behaviour that can change the apparent amount of memory that you're using- even if you released it properly. You need to get an actual memory profiler to take a look if you're still concerned.
Also, yes, Windows does a lot of things when minimizing applications. For example, if you use Direct3D, there can be a device loss; thread timings can change, too. Windows is designed to give the user the best experience in a single application at a time, and it may well reclaim cached/buffered resources from your application to do so.
No; the effect you are seeing means that your platform releases resources when the window is not visible (a good thing), and that this clears some cached data which is not rebuilt immediately after the window is restored.
Minimizing may actually help you find memory leaks: if the minimum amount of memory used by the app while minimized grows over time, that would suggest a leak.
You are looking at the working set size of your program: the sum of the virtual-memory pages of your program that are actually in RAM. When you minimize your main window, Windows assumes the user won't be interested in the program for a while and aggressively trims the working set, copying the pages in RAM to the paging file and chucking them out, making room for other processes the user is likely to start or switch to.
This number will also go down automatically when the user starts another program that needs a lot of RAM. Windows chucks out your pages to make room for this program. It picks pages that your program hasn't used for a while, making it likely that this doesn't affect the perf of your program much.
When you switch back to your program, Windows needs to swap pages back into RAM. But this happens on demand: it only pages in the pages your program actually touches, which will normally be fewer than it used before. There is no need to swap your program's initialization code back in, for example.
Needless to say, perhaps: the number has absolutely nothing to do with the memory usage of your program; it is merely a statistical number.
Private bytes would be a better indicator of a memory leak. Task Manager doesn't show it, but SysInternals' Process Explorer does. It still isn't a great indicator, because that number also includes any heap blocks your program freed that were added to the free-block list, ready to be reused. There is no good way to measure actual memory in use; read the small print for the HeapWalk() API function for the kind of trouble that causes.
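If you want the number programmatically rather than from a tool, GetProcessMemoryInfo() with the extended counters exposes the private commit; a sketch (error handling trimmed, and subject to the same caveats):

    #include <windows.h>
    #include <psapi.h>   // link with psapi.lib

    // Returns this process's private commit ("private bytes"), in bytes.
    SIZE_T QueryPrivateBytes() {
        PROCESS_MEMORY_COUNTERS_EX pmc = {};
        pmc.cb = sizeof(pmc);
        GetProcessMemoryInfo(GetCurrentProcess(),
                             reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                             sizeof(pmc));
        return pmc.PrivateUsage;
    }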
The memory and heap managers in Windows are far too sophisticated to draw conclusions from the available numbers. Use a leak-detection tool, like the VC debug allocator (crtdbg.h).
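Wiring up the CRT debug allocator takes a few lines in a debug build; the macro mapping is optional but adds file/line information to the report for the malloc family:

    #define _CRTDBG_MAP_ALLOC   // map malloc & friends to the debug heap
    #include <cstdlib>
    #include <crtdbg.h>

    int main() {
        // Dump any unfreed blocks automatically when the process exits.
        // (These calls are no-ops in release builds.)
        _CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG)
                       | _CRTDBG_LEAK_CHECK_DF);

        int* leaked = new int[10];   // deliberately leaked for the demo
        (void)leaked;
        return 0;   // leak report appears in the debugger Output window
    }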