Why does WebappClassLoader retain so much heap space? How to resolve? - classloader

Heap dump screenshot
As you can see in the picture, the WebappClassLoader's retained heap is very high.
Does this indicate a problem?
If it does, how do I resolve it?

Related

OpenSceneGraph memory usage when resetting scene

I have spent a great deal of time trying to figure out OSG's memory management.
I have a scene graph with several children (actually a LOD based on an octree).
However, when I need to reset my scene (I just want to wipe ALL nodes from the scene and also free the memory), I use
// Clear main osg::Group root node
m_rootNode->removeChildren(0, m_rootNode->getNumChildren());
m_rootNode->dirtyBound();
// Clear Main view scene data from osg::Viewer
m_viewer->setSceneData(nullptr);
BEFORE I do this, I check all my nodes with a NodeVisitor pattern, and found out that ALL my nodes have a reference count of 1, i.e., after clearing them from the scene, I expect their memory to be freed. However, this does not happen: my scene is actually reset, all the nodes disappear from the viewer, but the memory remains occupied.
Nonetheless, when I load another scene into my viewer, the memory is reused somehow (i.e., memory usage does not increase, so there is no memory leak, but used memory always stays the same).
I can't have this behaviour, as I need to closely control memory usage. How can I do this?
Looks like OSG keeps cached instances of your data, either as CPU-side or GPU-side objects.
You could have a look at osgDB's options to disable caching in the first place (CACHE_NONE, CACHE_ALL & ~CACHE_ARCHIVES), but this can actually increase your memory consumption, as data may not be re-used and may be re-loaded multiple times.
You could instruct osg::Texture to free the CPU-side texture data after it has been uploaded to OpenGL, in case you don't need it any more. This can be done conveniently via the osgUtil::Optimizer::TextureVisitor, which you would set up to change the AutoUnRef for each texture to true. I think running osgUtil::Optimizer with OPTIMIZE_TEXTURE_SETTINGS achieves the same effect.
Then, after closing down your scene, as you did in your Question's code, you could explicitly instruct OSG's database pager to wipe its caches:
for( osgViewer::View* v : allYourViews )
{
    v->getDatabasePager()->cancel();
    v->getDatabasePager()->clear();
}
To finally get rid of all pre-allocated GPU-side objects and their CPU-side representations, you would need to destroy your views and GL contexts.
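The "memory stays occupied but gets reused" symptom is not unique to OSG: caches and allocators routinely hold on to storage that is logically freed. As a small, OSG-independent illustration in plain C++, clearing a container does not release its buffer; only an explicit shrink (or destruction) does. Note that shrink_to_fit is formally a non-binding request, though mainstream implementations honor it.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// clear() drops the elements but keeps the allocated buffer; the
// capacity (and the corresponding process memory) stays around.
std::size_t capacityAfterClear(std::size_t n)
{
    std::vector<char> buf(n);
    buf.clear();            // size is now 0 ...
    return buf.capacity();  // ... but the buffer is still allocated
}

std::size_t capacityAfterShrink(std::size_t n)
{
    std::vector<char> buf(n);
    buf.clear();
    buf.shrink_to_fit();    // ask the allocator to actually release it
    return buf.capacity();
}
```

The same pattern, writ large, is what the question observes: the nodes are gone, but the pages they occupied are kept around and reused by the next scene load.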

OpenCV cv::Mat causing potential memory leak with std::vector

As it stands right now, I'm trying to save an entire list of images as cv::Mats inside a vector for later processing. Right now I have something that looks like this:
do
{
    image = readimage();
    cv::Mat mat(length, width, CV_8UC4, image);
    cv::Mat temp = mat.clone();
    saved_images.push_back(temp);
    mat.release();
    temp.release();
    freeimagememory(image);
}
while (hasimage);
This actually works. For exceptionally small lists of images it will store them just fine. However, as I get to large amounts of images, the program consistently crashes saying Abort() was called, and upon inspection it turns out it's throwing a cv::Exception.
Does anyone know why this is? I've thought about changing the vector to a vector of pointers to cv::Mats in order to save space (cloning seems expensive) but I'm not sure how well that will work.
Can anyone help?
EDIT1: The exact error being thrown is failed to allocate X bytes. I assume this is because it's eating up all of the available memory somehow (even though I have 8 GB of memory and definitely have memory free).
EDIT2:
The below code also seems to work.
std::vector<cv::Mat*> ptrvec;
do
{
    image = readimage();
    ptrvec.push_back(new cv::Mat(length, width, CV_8UC4, image));
    freeimagememory(image);
}
while (hasimage);
This one doesn't have a problem with memory (I can push all the images I want to it) but I get an access violation when I try to do
cv::imshow("Test Window", *ptrvec[0]);
EDIT3:
Is there a chance I'm hitting the upper limit of a 32-bit process? I'm more than capable of recompiling this as a 64-bit project.
You may be running out of memory when you store 3000 800 x 600 color images in a vector. Storing Mat pointers will not solve your problem, since the pixel data is still allocated in RAM.
Check whether there is enough memory in your system to store all the images. If not you can upload images in batches, for example, process the first 500 images, then process the next 500 images, etc.
In your program you allocate the vector on the stack. Allocating on the heap is recommended when you need a large block of memory (your case). So you can try to allocate the vector on the heap instead (provided that you have enough memory to store it). See stack vs. heap, or this cpp-tutorial, for more information.
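The batching idea from the answer can be sketched in a few lines. This is a minimal, self-contained illustration (the Image type and the frame size are stand-ins, not the asker's real readimage() data): only one batch of frames is ever resident at a time, so peak memory is bounded by the batch size rather than the total image count.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for one decoded frame (real frames would be ~800*600*4 bytes).
using Image = std::vector<unsigned char>;

// Process `totalImages` images in batches of at most `batchSize`, so only
// one batch is resident in memory at a time. Returns the peak number of
// simultaneously resident images, to make the bound checkable.
std::size_t processInBatches(std::size_t totalImages, std::size_t batchSize)
{
    std::size_t peakResident = 0;
    for (std::size_t start = 0; start < totalImages; start += batchSize)
    {
        const std::size_t end = std::min(start + batchSize, totalImages);
        std::vector<Image> batch;
        for (std::size_t i = start; i < end; ++i)
            batch.emplace_back(64 * 64 * 4); // "load" one small fake frame
        peakResident = std::max(peakResident, batch.size());
        // ... process `batch` here ...
    } // batch goes out of scope; its memory can be reused for the next one
    return peakResident;
}
```

With 3000 images and a batch size of 500, at most 500 frames ever coexist in memory, which is what keeps a 32-bit process under its allocation ceiling.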

Another std::bad_alloc at memory location

I searched for this kind of error and found a lot of results. Unfortunately, no thread really helped me.
I have image files which I save in an array or a vector or whatever.
After about 1.8 GB (~1439 images) the error std::bad_alloc at memory location occurs. So I tried to declare the array in different ways, but every time the same error occurs.
Image* img;
Image img[180000];
Image* img = new Image[180000];
vector<Image> img;
(The 180k would be 1 minute of frames.) It's not really important to record 1 minute, but it would be nice to save more than ~1439 frames, or at least to understand why this error occurs, or rather why it occurs at 1.8 GB.
Maybe someone could help or explain that to me?
PS: I use a 32-bit system.
The problem is, the time to save the images in a folder or something takes too long. Maybe I have to find a compression which allows me to save just the necessary information of the image in the array, and then I can restore the frames when I am done.
I heard that you can convert an image into just an x and a y "line" which hold all this information. But how that works is another issue.
The answers from Mark Ingram were exactly what I needed to understand the problem. Thanks for that.
Edit: oh, I see I didn't explain my problem well enough. It's not that I already have the images and load them into my program: I have a camera which records frames at a frequency of 50 Hz, so while recording I have no time to save the frames.
You've run out of memory. On a 32-bit system (on Windows at least) you can only allocate up to a maximum of ~2 GB of memory. You need to dynamically load your data only when needed, and when you no longer need the image data, throw it away again.
In reality, the limit will be lower than 2GB, as memory is allocated in blocks (i.e. it isn't allocated contiguously). This means you will experience heap fragmentation if you mix small and large object allocations, and that will drastically reduce the amount of memory you can actually allocate.
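The numbers in the question line up with this answer. A back-of-the-envelope check (assuming the quoted ~1.8 GB figure measures the same allocations that fail) shows that each frame must occupy roughly 1.25 MB, and that the failure point sits just under the ~2 GB ceiling, consistent with fragmentation eating the rest:

```cpp
#include <cassert>

// Rough arithmetic behind the ~1439-frame ceiling reported in the question.
constexpr unsigned long long kBytesBeforeFailure = 1800ULL * 1024 * 1024; // ~1.8 GB
constexpr unsigned long long kFramesStored = 1439;
constexpr unsigned long long kBytesPerFrame = kBytesBeforeFailure / kFramesStored;
// kBytesPerFrame comes out to roughly 1.25 MB per frame, and the total
// sits below the 2 GB user address space of a 32-bit Windows process.
```

At ~1.25 MB per frame, even the full 2 GB address space could never hold the 180,000 frames the asker hoped for, which is why streaming or compression is needed rather than a bigger array.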
Store the images in a folder and load one at a time.
Dynamic memory allocation is your friend.
There is nothing I can think of that you would accomplish by loading 180,000 images together. You are never going to process them all at once, even on a supercomputer.

CCLabelBMFont memory usage

Do I understand correctly that CCLabelBMFont only loads the font texture once, no matter how many labels you have? Thus 10 labels will not exceed the memory requirements of 1 label; or, said another way, the actual memory usage of any and all labels is approximately equivalent to the memory usage of the font texture itself?
I ask because I preferred to use CCLabel, but when I compared it to UILabel, the resolution of UILabel is much sharper; I'm not sure of the cause of this, but CCLabel just doesn't look that great.
Yes.
Every texture cocos2d uses is cached only once. CCTextureCache does that, regardless of the class that created or loaded the texture. Memory-wise, the only difference between using 1 CCLabelBMFont and 1000 is the memory of the CCLabelBMFont instances themselves, which is roughly 500 bytes per instance.
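The sharing scheme the answer describes can be sketched in a few lines. This is a minimal illustration, not cocos2d's actual CCTextureCache implementation; Texture here is a hypothetical stand-in. The point is just that one entry exists per file name, and every client that asks gets the same shared instance:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Hypothetical stand-in for a loaded texture.
struct Texture
{
    explicit Texture(std::string f) : file(std::move(f)) {}
    std::string file;
};

class TextureCache
{
public:
    std::shared_ptr<Texture> get(const std::string& file)
    {
        auto it = cache_.find(file);
        if (it != cache_.end())
            return it->second;                        // hit: reuse the texture
        auto tex = std::make_shared<Texture>(file);   // miss: "load" exactly once
        cache_.emplace(file, tex);
        return tex;
    }

private:
    std::map<std::string, std::shared_ptr<Texture>> cache_;
};
```

Ten or a thousand labels asking for the same font file all end up holding the same texture object; only the per-label bookkeeping multiplies.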

Cocos2d Using the same sprite

I need to create the same sprite, with the same image, like 50-100 times. I read that initializing them all individually creates a performance issue; is there a command to do this? If CCBatchNode is what should be used, then please explain how it works. And YES, I have searched the internet for like an hour now. Any info would be appreciated. Thanks.
Cocos2d loads the texture into memory only once and keeps using it for all texture needs. So there is no problem with creating 100 references to the same texture. It won't affect memory much.