I have a simple problem, but I'm not sure how to solve it. I have the following C++ code:
Mat myImage;
for (;;) {
    imshow("Name", myImage);
}
Let's assume that myImage is already populated with an image here. For some reason this very simple code causes a memory leak. I read somewhere online that supplying imshow with the same image results in imshow adding it as a separate axis, increasing memory consumption.
Basically, I don't know how to stop this memory leak. Any ideas?
Thanks.
Edit 1
Here is a screen shot of what I'm seeing when analysing memory usage.
Edit 2
Here is an example Xcode project, made from scratch, that reproduces the issue. If you cannot run Xcode, I've also provided a copy of the code on pastebin here. The project files can be downloaded from here.
Related
Every time I use OpenCV to debug, Visual Studio outputs this black window:
I've been ignoring it, because when I close the black window the program exits at the same time.
What happened with my OpenCV, and what should I do?
I'd appreciate any help solving the problem!
I've searched for solutions, and it doesn't seem like a big deal,
but I want to know how to solve the problem completely.
I'm new to OpenCV and I'm using 3.4.1_5 with C++ in Xcode on a Mac. I used Homebrew to install OpenCV.
I'm following an OpenCV tutorial which is conducted in Visual Studio on Windows. Here's a link to the tutorial.
Basically, the tutorial shows how to create/resize a window, which is easy. So I have some code like this which is basically the same as in the tutorial:
#include "opencv2/opencv.hpp"
....
namedWindow("modi", CV_WINDOW_FREERATIO);
imshow("modi", modi);
resizeWindow("modi", modi.cols/2, modi.rows/2); //modi is a Mat
....
Supposedly, the image will be the same except it will be 1/4 of the original size and fit into the resized window. This is the case in the tutorial. However, it's not the case on my mac. My image stays the same size, and only the window shrinks to 1/4. If I drag the border of the window, the "covered" part of the image is revealed.
This actually poses a problem for my project later on, so I want to figure out why. I want to achieve what the tutorial achieves, and I've tried every window flag (AUTOSIZE, KEEPRATIO, OPENGL, etc.), but none of them worked. I suspect a platform or version difference, but I have no way to test that.
Please help! Any hints would be greatly appreciated!
I wrote this method (it displays an image):
void ImageLoader::displayMyImage()
{
    namedWindow("new_Window1");
    imshow("new_window1", m_image);
    waitKey(2);
}
m_image is of Mat type.
I also use this destructor:
ImageLoader::~ImageLoader()
{
    m_image.release();
}
However, Valgrind found tons of memory leaks. They're caused by these two OpenCV functions:
namedWindow and imshow (without the call to displayMyImage() there is no leak at all).
Is there a way to fix it?
Thanks!
Your first problem is that the window names don't match:
"new_Window1" is different from "new_window1". Second, you don't actually need namedWindow; imshow alone creates and displays a window called "new_window1".
Remark 1: you don't need to worry about explicitly releasing m_image; automatic reference counting is what Mat is for in the first place.
Remark 2: waitKey(0) holds the window open until a key is pressed.
I have seen this question here before, so I think you could search here too for answers.
I'm trying to display video from a webcam. I capture the images from the webcam using opencv and then I try to display them on a GtkImage.
This is my code, which runs in a separate thread.
gpointer View::updateView(gpointer v)
{
    IplImage *image;
    CvCapture *camera;
    GMutex *mutex;
    View *view;

    view = (View*)v;
    camera = view->camera;
    mutex = view->cameraMutex;

    while (1)
    {
        g_mutex_lock(view->cameraMutex);
        image = cvQueryFrame(camera);
        g_mutex_unlock(view->cameraMutex);
        if (image == NULL) continue;
        cvCvtColor(image, image, CV_BGR2RGB);

        GDK_THREADS_ENTER();
        g_object_unref(view->pixbuf);
        view->pixbuf = gdk_pixbuf_new_from_data((guchar*)image->imageData,
            GDK_COLORSPACE_RGB, FALSE, image->depth, image->width,
            image->height, image->widthStep, NULL, NULL);
        gtk_image_set_from_pixbuf(GTK_IMAGE(view->image), view->pixbuf);
        gtk_widget_queue_draw(view->image);
        GDK_THREADS_LEAVE();

        usleep(10000);
    }
}
What happens is that one image is taken from the webcam and displayed and then the GtkImage stops updating.
In addition, when I try to use cvReleaseImage, I get a segfault saying that free was passed an invalid pointer.
GTK is an event-driven toolkit, like many others. What you're doing is queuing new images to draw in an infinite loop, but you never give GTK a chance to draw them. That is not how a message pump works. You need to hand control back to GTK so it can draw the updated image. The way to do that is explained in the gtk_events_pending documentation.
Moreover, allocating/drawing/deallocating a GdkPixbuf for each frame is sub-optimal. Allocate the buffer once outside your loop, draw into it inside the loop (overwriting the previous content), and display it. You only need to allocate a new buffer if your image size changes.
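A sketch of that hand-back, assuming the same threaded loop structure as updateView (the helper name is mine; I haven't tested it against this exact code):

```cpp
#include <gtk/gtk.h>

// Call this inside the capture loop, after gtk_widget_queue_draw(), so GTK
// actually processes the pending draw request before the next frame arrives.
static void pump_gtk_events()
{
    while (gtk_events_pending())
        gtk_main_iteration();
}
```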
I don't know how to work with GtkImage, but my guess is that you are not passing the newer images to the window. You need something like the native cvShowImage executing inside the loop. If it isn't that, I don't know.
Also, you shouldn't release the image used for capture; OpenCV allocates and deallocates it itself.
EDIT: Try using the OpenCV functions for viewing the image and see if the problem is still there.
OK - I have an interesting one here. I'm working on a Tetris clone (basically to "level up" my skills). I was trying to refactor my code to abstract it the way I wanted. While it was working just fine before, now I get a segmentation fault before any images can be blitted. I've tried debugging it, to no avail.
I have posted my SVN working copy of the project here.
It's just a small project and someone with more knowledge than me and a good debugger will probably figure it out in a snap. The only dependency is SDL. Kudos to the person that can tell me what I'm doing wrong.
Edit: As far as I can tell, what I have now and what I had before are logically the same, so I wouldn't expect the current version to cause a segmentation fault. Just run svn revert on the working copy and recompile, and you can see that it was working...
Look at lines 15 to 18 of Surface.cpp:
    surface = SDL_DisplayFormatAlpha( tempSurface );
    surface = tempSurface;
}
SDL_FreeSurface( tempSurface );
I assume it segfaults because when you use this surface later, you are actually operating on tempSurface, thanks to this line:
surface = tempSurface;
and not on the surface returned by SDL_DisplayFormatAlpha(). Since you then free tempSurface, surface is left pointing to freed memory. To fix it, simply remove that second line in the else block.
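For clarity, this is roughly what the fixed logic would do; the function name and surrounding structure are my guesses at the original Surface.cpp, since I only have the excerpt:

```cpp
#include <SDL/SDL.h>

// Convert a loaded surface to the display format with alpha, keeping only
// the converted copy (sketch of the corrected logic, SDL 1.2 API).
SDL_Surface* convertForDisplay(SDL_Surface* tempSurface)
{
    SDL_Surface* surface = SDL_DisplayFormatAlpha(tempSurface);
    // The bug was an extra "surface = tempSurface;" here, which made surface
    // alias tempSurface -- freed on the next line, leaving a dangling pointer.
    SDL_FreeSurface(tempSurface);
    return surface;
}
```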
I don't have SDL installed on my machine, but I looked through the code and noticed this in Output.cpp:
display = new Surface();
This does nothing: the constructor is empty, so surface is never initialized.
Then in Output::initalize() you do:
display->surface = SDL_SetVideoMode( 800, 600, 32, SDL_HWSURFACE | SDL_DOUBLEBUF );
This looks like the issue: Surface::surface was never actually initialized. If you haven't found the solution by then, I'll dig into it when I get home.
As far as I understand, a segmentation fault happens when you try to access memory through a pointer that is no longer valid (for example, already freed or never initialized), or when you try to modify read-only memory such as a string literal.