Qt QML application increasing memory usage - C++

I created an application with Qt/QML. I load a QML file with QQuickView and use a Loader element to change the pages (GUI) inside the application, and that works fine. My problem is the steadily increasing memory usage: the application starts at under 100 MB, and after one day it is at about 500 MB or more. I originally wrote and updated the object model in QML (JavaScript), but the application grew quickly, so I switched to creating the models as C++ objects; memory now grows more slowly, but the problem is still not solved.
My model can be updated continuously (even once per second), but I don't believe that is the reason the memory rises.
Along with that problem come other strange behaviors: with TableView, when I change to that page, memory rises by as much as 10 MB. I tried to free memory with gc(), but without success, and on a page change the memory can sometimes rise by 1 MB.
Note: I use Qt 5.5 and MSVC 2010.

You might want to check your application for memory leaks. That sounds a little excessive, even for QML, which is not known for memory efficiency.
Keep in mind that the QML engine over-provisions and will not release memory even when that seems the logical thing to do. I've had cases of reaching gigabytes of memory usage in QML, with tens of thousands of QML objects alive, and upon deletion of all the objects, memory usage doesn't come anywhere near the initial figure. The amount freed is usually tiny: with 1 GB of memory worth of objects, deleting all of them frees only about 150 MB. The good news is that the memory will be reused; creating those objects again pushes usage back to the previous peak, not any further. So as far as memory inside your application is concerned, you are set.
I don't know whether that stacks with the rest of the OS's processes, i.e. whether your application will release the extra memory if your system runs out of RAM.

Related

Win7 C++ application always reserving at least 4k memory per allocation

I'm currently looking into memory consumption issues of a C++ application that I have written (a rendering engine using OpenGL) and have stumbled upon a rather unusual problem:
I'm using my own allocators basically everywhere in the system, which all obtain their memory from a default allocator which is using malloc()/free() for the actual memory.
It turns out that my application is always reserving at least 4096 bytes (the page size on my system) for every allocation through malloc(), even if the size is significantly smaller.
malloc(8) or even malloc(1) both result in an increase of memory of 4096 bytes. I'm tracking the used memory size through GetProcessMemoryInfo() directly before and after the allocation, as well as through the TaskManager (which basically shows the same values). Interestingly, using _msize(ptr) returns the correct size of the pointer.
I can only reproduce this behaviour within my own application, testing it with a new VS2012 C++ project did not yield the same results. This behaviour also seems independent of the current reserved size of the application, even with more than 10GB of free RAM it always reserves at least 4K per allocation.
I have no deep knowledge of the innards of the Windows operating system (if this is related to the OS at all), so if anyone has an idea what could cause this behaviour I would be grateful!
Check this, it's from 1993 :-)
http://msdn.microsoft.com/en-us/library/ms810603.aspx
This does not mean that the smallest amount of memory that can be allocated in a heap is 4096 bytes; rather, the heap manager commits pages of memory as needed to satisfy specific allocation requests. If, for example, an application allocates 100 bytes via a call to GlobalAlloc, the heap manager allocates a 100-byte chunk of memory within its committed region for this request. If there is not enough committed memory available at the time of the request, the heap manager simply commits another page to make the memory available.
You might be running with "full page heap" enabled (a diagnostic mode, switched on e.g. with the gflags.exe tool, that puts each allocation on its own page to help catch memory access errors in your code more quickly).

Check available memory in the system for new allocations

I'm working in a Windows C++ application to work with point clouds. We use the PCL library along with Qt and OpenSceneGraph. The computer has 4 GB of RAM.
If we load a lot of points (for example, 40 point clouds have around 800 million points in total) the system goes crazy.
The app is almost unresponsive (it takes ages to move the mouse around it and the arrow changes to a circle that keeps spinning) and in the task manager, in the Performance tab, I got this output:
Memory (1 in the picture): goes up to 3.97 GB, almost the total of the system.
Free (2 in the picture): 0
I have checked these posts: here and here, and with the MEMORYSTATUSEX version I got the memory info.
The idea here is to check the available memory before loading more clouds. If the "weight" of the cloud we're about to load is bigger than the available memory, don't load it; that way the app won't freeze, and the user has the chance to remove older clouds to free some memory. It's worth noting that no exceptions are thrown; the worst scenario I saw was Windows killing the app itself when the memory was insufficient.
Now, is this a good idea? Is there a canonical way to deal with this thing?
I would be glad to hear your thoughts on this matter.
You are approaching this from a different direction than the usual approach to similar problems.
Normally, one would probably allocate then attempt to lock in physical memory the space they needed. (mlock() in POSIX, VirtualLock() in WinAPI). The reasoning is that even if the system has enough available physical memory at the moment, some other process could spawn the next moment and push part of your resident set into swap.
This will require you to use a custom allocator as well as ensure that your process has permission to lock down the required number of pages.
Read here for a start on this: http://msdn.microsoft.com/en-us/library/windows/desktop/aa366895(v=vs.85).aspx
You are also likely running into memory issues with your graphics card even once the points are loaded. You should probably monitor that as well. Once your loaded points clouds exceed your dedicated graphics card memory (which they almost certainly are in this case) the rendering slows to a crawl.
800 million is also an immense number of points. With a minimum of 3 floats per point (assuming no colorization), you are talking about 9.6 GB of point data, so you are swapping like crazy.
I generally start voxelizing to reduce memory usage once I get beyond 30-40 million points.
This is more complicated than you might imagine. The available memory shown in the system display is physical memory. The amount of memory available to your application is virtual memory.
The physical memory is shared by all processes on the computer, so anything else running at the same time competes with your application for it.
I suspect that the problem you are seeing is processing, not memory; using half the memory on a 4 GB system should be no big deal.
If you are doing lengthy calculations, do you give the system a chance to process accumulated events (in Qt, for example, by calling QCoreApplication::processEvents() from time to time)?
That is what I suspect the real problem is.

What is using so much uncommitted "private data" on Windows Server 2003?

So I have a native C++ application, and it needs to keep track of lots of things over long periods of time. It's running out of memory when task manager says that the process reaches somewhere between 800 and 1200 MB of memory, when the limit should be about 2GB.
I finally got a clue as to what's going on when I ran VMMap against my process, but that just gave me more questions. What I discovered:
The total size (type: Total, column: Size) is much larger than what Task Manager/Process Explorer were reporting
The total size seems to actually be the value that can't exceed 2GB before my program runs out of memory
The memory usage discrepancy is almost entirely caused by "Private Data": there is much more "Size" than there is "Committed". I have seen cases with around 800MB of committed private data but a "Size" of around 1700MB.
The largest blocks of "Private Data" mainly consist of a pattern of pairs of one small sub-block (between 4K and 16K, generally) that has "Read/Write" protection and is fully committed, and one larger sub-block (between 90K and 400K) that has the "Reserved" protection and is not committed. This seems like a huge waste of resources. And there's usually one large (many megabytes) sub-block at the end that is "Reserved" and not committed.
The small part of the pair generally has strings that I recognize, while the larger block has no strings at all.
An example of these sub-block pairs: (not my application, but the idea is the same)
http://www.flickr.com/photos/95123032#N00/5280550393
It seems as though when one block of private data gets fully committed, a new block (usually the same size as or double the size of the previous largest block) gets allocated. Sounds fair. However, I have seen 3 blocks, all more than 100MB each, with less than 30MB committed. My application shouldn't behave in a way (i.e. use up 400MB and then shrink by 300MB in a matter of a few hours) that would make that possible.
As far as I can tell, the "Size" is the actual amount of virtual address space that has been allocated, and "Committed" is the amount of "Size" that is actually being used (i.e. through calls to new/malloc). If that is indeed the case, then why is there such a huge discrepancy between Size and Committed? And why is it allocating blocks that are multiple hundreds of megabytes in size?
The somewhat strange thing is that the behavior is entirely different when running on Windows 7. Whereas on 2003 Server, the application uses Private Data, on Windows 7, the application uses Heap. So...why? Why does VMMap show primarily private data usage on 2003, but primarily heap usage on 7? What's the difference? Well one difference is that I can't use the "Heap Allocations..." button in VMMap to see where all of that Private Data is being allocated.
I was beginning to wonder if excessive use of std::string was causing this problem since the strings that I recognized in the pairs (mentioned above) primarily consisted of strings stored in std::string that were frequently being created and destroyed (implying lots of memory allocation/deallocation). I converted all I could to use character arrays or using memory from a memory pool, but that seems to have had no effect. All of my other objects that are new/deleted frequently already have their own memory pools.
I also found out about the low-fragmentation heap, so I tried enabling that, but it also didn't make a difference. I'm thinking it's because Windows 2003 is not actually using the heap proper. VMMap shows that the low-fragmentation heap is enabled, but since it's not actually used (i.e. Private Data is used instead), it doesn't actually make a difference.
What actually seems to be happening is that those sub-block pairs are fragmenting the large Private Data blocks, which is causing the OS to allocate new blocks. Eventually, the fragmentation gets so bad that even though there's lots of uncommitted space, none of it seems to be usable and the process runs out of memory.
So my questions are:
Why is Windows Server 2003 using Private Data instead of Heap? Does it matter?
Is there a way to make Windows Server 2003 use Heap memory instead?
If so, would that improve my situation at all?
Is there any way to control how Private Data is allocated by the OS's memory allocator?
Is it possible to create my own custom heap and allocate off of that (without changing the majority of my codebase), and could that improve my situation? I know it's possible to make custom heaps, but as far as I can tell, you need to explicitly allocate from the custom heap instead of just calling new or just using STL containers normally.
Is there anything I'm missing or would be worth trying?
Private data is just a classification for all the memory that is not shared between two or more processes. Heap, relocated DLL pages, stacks of all the threads in a process, unshared memory-mapped files etc. fall into the category of private data.
A request for memory from a process (via VirtualAlloc) will be failed by the OS when one of these conditions is true:
Contiguous virtual address space (not memory) is not available to hold the size requested.
The commit charge (the total committed memory of all processes plus the operating system) has reached its upper limit, that being RAM + page file size.
Apart from this, heap allocations may fail for reasons of their own: during expansion, a heap will actually try to acquire more memory than the size of the allocation request that triggered the expansion, and if that fails the allocation fails, even though the originally requested size might still be available through VirtualAlloc.
A few things that tend to accumulate memory:
Having many heaps: they hog memory because they keep more in reserve, so many heaps mean a lot of reserved space probably going unused. Heap compaction might help.
STL containers like vector and map might not shrink after elements are removed from them. Compacting them might help too.
Libraries like COM do some caching and thus accumulate memory; it might help to investigate individual libraries to learn about their memory-hogging habits.
when task manager says that the process reaches somewhere between 800 and 1200 MB of memory, when the limit should be about 2GB
Probably you are looking at "Working Set" in Task Manager, whereas the 2GB limit is on virtual memory. Task Manager doesn't show the amount of VM reserved; it shows the amount committed.
"Committed" is the amount of "Size" that is actually being used (i.e. through calls to new/malloc).
No. "Committed" means the page is backed by the commit charge (RAM or page file) and can safely be touched; "Reserved" pages occupy address space only. Physical memory is typically assigned to a committed page only the first time you touch it (i.e. go to the address and do a load or store operation).
1. Why is Windows Server 2003 using Private Data instead of Heap?
According to "Windows Sysinternals Administrator's Reference" by Mark Russinovich and Aaron Margosis:
Private Data memory is memory that is allocated by VirtualAlloc and
that is not further handled by the Heap Manager or by the .Net runtime
So either your program is managing its memory differently on the two OS's, or VMmap is unable to detect the way in which this memory is being managed as a heap on Windows Server 2003.
4. Is there anything I'm missing or would be worth trying?
You can run with a 3GB user address space limit on a 32-bit OS, and a 4GB limit for 32-bit processes on a 64-bit OS. Google for the "/3GB" boot option and the "/LARGEADDRESSAWARE" linker flag.
A great source of information on this kind of stuff is the book "Windows Internals 6th Edition" by Mark Russinovich, David Solomon and Alex Ionescu.
I'm encountering the same issue.
In Windows 2003, my application gets an out-of-memory exception in a C++/CLI module when trying to allocate a 22 MB array using gcnew. The same process works fine in Windows 7.
VMMap shows the "private data" entry at almost 2 GB on Win2003. After I enable the /3GB flag, this entry also increases to almost 3 GB. The "heap" entry is about 14 MB and the "managed heap" is nothing!
In Windows 7, the "private data" is only 62 MB, the "heap" is 316 MB and the "managed heap" 397 MB. The overall memory usage is much lower than on Win2003.

Memory usage and minimizing

We have a fairly graphical intensive application that uses the FOX toolkit and OpenSceneGraph, and of course C++. I notice that after running the application for some time, it seems there is a memory leak. However when I minimize, a substantial amount of memory appears to be freed (as witnessed in the Windows Task Manager). When the application is restored, the memory usage climbs but plateaus to an amount less than what it was before the minimize.
Is this a huge indicator that we have a nasty memory leak? Or might this be something with how Windows handles graphical applications? I'm not really sure what is going on.
What you are seeing is simply memory caching. When you call free()/delete/delete[], most implementations won't actually return the memory to the OS; they keep it so it can be handed back much faster the next time you request it. When your application is minimized, they free this memory because you won't be requesting it anytime soon.
It's unlikely that you have an actual memory leak. Task Manager is not particularly accurate, and there's a lot of behaviour that can change the apparent amount of memory that you're using- even if you released it properly. You need to get an actual memory profiler to take a look if you're still concerned.
Also, yes, Windows does a lot of things when minimizing applications. For example, if you use Direct3D, there's a device loss. There are thread-timing changes, too. Windows is designed to give the user the best experience in a single application at a time and may well take extra cached/buffered resources from your application to do it.
No; the effect you are seeing means that your platform releases resources when it's not visible (a good thing), and that seems to clear some cached data which is not restored after the window is restored.
Doing this may help you find memory leaks. If the minimum amount of memory (while minimized) used by the app grows over time, that would suggest a leak.
You are looking at the working set size of your program: the sum of the virtual memory pages of your program that are actually in RAM. When you minimize your main window, Windows assumes the user won't be interested in the program for a while and aggressively trims the working set, copying the pages in RAM to the paging file and chucking them out, making room for the other processes that the user is likely to start or switch to.
This number will also go down automatically when the user starts another program that needs a lot of RAM. Windows chucks out your pages to make room for this program. It picks pages that your program hasn't used for a while, making it likely that this doesn't affect the perf of your program much.
When you switch back to your program, Windows needs to swap pages back into RAM. But this is on-demand, it only pages-in pages that your program actually uses. Which will normally be less than what it used before, no need to swap the initialization code of your program back in for example.
Needless to say perhaps, the number has absolutely nothing to do with the memory usage of your program, it is merely a statistical number.
Private bytes would be a better indicator for a memory leak. Task Manager doesn't show that; Sysinternals' Process Explorer does. It still isn't a great indicator, because that number also includes blocks in the heap that were freed by your program and added to the list of free blocks, ready to be reused. There is no good way to measure actual memory in use; read the small print for the HeapWalk() API function for the kind of trouble that causes.
The memory and heap manager in Windows are far too sophisticated to draw conclusions from the available numbers. Use a leak detection tool, like the VC debug allocator (crtdbg.h).

Memory usage of C++ / Qt application

I'm using OS X 10.5.6. I have a C++ application with a GUI made with Qt. When I start my application it uses 30 MB of memory (reported by OS X Activity Monitor RSIZE).
I use this application to read in text files to memory, parse the data and finally visualize it. If I open (read to memory, parse, visualize) a 9 MB text file Activity Monitor reports that my application grows from the initial 30 MB of memory used to 103 MB.
Now if the file is closed and the parsed and visualized data is deleted, the application stays at 103 MB. This sounds like a memory leak to me. But if I open the file again, reading it into memory, parsing it and visualizing it, the application stays at 103 MB. No matter how many times I open the file (or another file of the same size), my application's memory use stays more or less unchanged. Does this mean it's not a memory leak? If it were a leak, the memory usage should keep growing each time the file is opened, shouldn't it? The only time it grows is when I open a larger file than the previous one.
Is this normal? Is this platform or library dependent? Is this some sort of caching done by the OS or libraries?
This seems relatively normal, but all OS are slightly different.
In the usual application life cycle, the application requests memory from the OS and is given it in huge chunks, which it then manages itself (via the C/C++ standard libraries). As the application acquires and releases memory, this is all done internally, without recourse to the OS, until the application has none left; then another call is made to the OS for another huge chunk.
Memory is not usually returned to the OS until the application quits (though most OSes provide mechanisms to do this if required, and some C/C++ standard libraries use that facility). Instead of returning memory to the OS, the application uses everything it has been given and does its own memory management.
Though note: just because an application has memory does not mean that this is currently taking up RAM on a chip. Memory that is sporadically used or has not been used in a while will be temporarily saved onto secondary/tertiary storage.
Activity Monitor is not a very useful tool for checking memory usage; as you have discovered, it only displays the total actually allocated to the application. It does not display any information about how the application has internally allocated this memory (much of which could be deallocated). Check the folder where Xcode lives: a broad set of tools for examining how an application works is provided with the development environment.
NB: I have avoided using terms like page etc as these are nothing to-do with C/C++/Objective C and are all OS/hardware specific.
This sounds like a memory fragmentation problem to me. Memory is acquired from the OS in pages. Pages are usually several kB large, e.g. 4 kB. Now if you allocate, let's say, 100 MB of RAM for your objects, your memory allocator (new/malloc) asks the OS for many free memory pages and allocates your objects on them. When your application finishes its computations and deletes some, even most, but not all of the previously allocated objects, the objects that were not deleted keep their pages pinned and prevent them from being returned to the OS. A page can be returned only if all of its memory is freed, so in extreme cases an 8-byte object can prevent a full 4 kB page from being returned.
The OS reports memory consumption by calculating the number of pages committed to your application, not by counting how much space your objects take on these pages. So if your memory is fragmented, the pages remain committed, and reported memory consumption stays the same.
The memory consumption does not grow on the second run because the allocator reuses the previously acquired, mostly free pages.
The solution for fragmentation problems is usually preallocating a larger block of memory and using a custom memory allocator to allocate objects with similar lifetime from this larger block. Then, when you're done with objects, delete the whole block.
Another solution is switching to a fully garbage collected environment like Java or .NET - they have compacting garbage collectors that prevent such problems.