Memory leak in OpenCV function: cvQueryFrame() - C++

I have a problem with the IplImage* returned from cvQueryFrame(): its memory is never released by the library. The documentation says:
The returned image should not be released or modified by the user.
Well... how do I get my memory back? The program eats memory until it crashes. I would like to release the memory allocated for each IplImage* after the processing of a frame finishes. The code is this:
// In `process` thread:
CvCapture* camera;
camera = cvCreateCameraCapture(1);
assert(camera);
IplImage* main = NULL;
while(true)
{
main = cvQueryFrame(camera);
// Do something useful with images
emit sendImage(main); // Send Image to the UI thread
}
UPDATE:
This is the QThread subclass:
#include <QThread>
class ImageFetcher : public QThread
{
Q_OBJECT
public:
explicit ImageFetcher(QObject *parent = 0);
protected:
void run(); // thread body, not a signal
};
Implementation:
void ImageFetcher::run()
{
CvCapture* camera = cvCreateCameraCapture(0);
IplImage* image;
while(true)
{
image = cvQueryFrame(camera);
// process image
}
// exec();
}
main():
int main(int argc, char *argv[])
{
QApplication a(argc,argv);
ImageFetcher thread;
thread.start();
return a.exec();
}

As far as I could test, there's no leak in v2.3.1 for Linux.
Here is what happens: an image is allocated by cvQueryFrame() the first time the function is called, and that image is then reused on all subsequent calls to cvQueryFrame(). No new images are created after the first call, and the image is only freed by cvReleaseCapture().
So you see, if you are experiencing a memory leak (how did you find it, exactly?) and a crash, it is most probably caused by some other part of your code. If sendImage() is not synchronized (i.e. it is non-blocking), passing the main image directly could cause problems. I don't know what you are doing in this function, so I'll assume the problem is inside it.
One test you could do is to copy the main frame, pass that copy to sendImage() instead, and release it when you no longer need it.
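A minimal sketch of that test, assuming the old C API and that whoever receives the clone releases it when done:
// Hedged sketch: clone each frame before handing it to the UI thread.
// The clone is owned by us, so the receiver of sendImage() must call
// cvReleaseImage(&copy) once it is finished with it.
while (true)
{
    IplImage* frame = cvQueryFrame(camera); // owned by the capture; never release this
    if (!frame)
        break;
    IplImage* copy = cvCloneImage(frame);   // deep copy
    emit sendImage(copy);                   // receiver releases the copy
}
cvReleaseCapture(&camera);                  // also frees the internal frame buffer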

Related

Qt - Transform cv::Mat to QImage in worker thread crashes

Introduction
What I want is really simple: I want to start the process of reading a binary file via a button in the main UI thread, let a separate thread handle the processing, including the transformation into a QImage, and return the image to the main UI thread, where it should be shown in a label.
For that I use Qt's signal/slot mechanism and its threading functionality.
I already have a working single-threaded solution, but the threaded attempt crashes at seemingly arbitrary steps, which I don't understand because the whole process is completely encapsulated and not time-critical...
Working single-threading solution:
TragVisMain is a QMainWindow:
class TragVisMain : public QMainWindow
Pushing the button readBinSingle starts the process:
void TragVisMain::on_readBinSingle_clicked()
{
// Open binary file
FILE *imageBinFile = nullptr;
imageBinFile = fopen("imageFile.bin", "rb");
if(imageBinFile == NULL) {
return;
}
// Get binary file size
fseek(imageBinFile, 0, SEEK_END); // seek to end of file
size_t size = static_cast<size_t>(ftell(imageBinFile)); // get current file pointer
fseek(imageBinFile, 0, SEEK_SET);
// Read binary file
void *imageData = malloc(size);
fread(imageData, 1, size, imageBinFile);
// Create cv::Mat
cv::Mat openCvImage(1024, 1280, CV_16UC1, imageData);
openCvImage.convertTo(openCvImage, CV_8UC1, 0.04); // Convert to 8 Bit greyscale
// Transform to QImage
QImage qImage(
openCvImage.data,
1280,
1024,
QImage::Format_Grayscale8
);
// Show image in label, 'imageLabel' is class member
imageLabel.setPixmap(QPixmap::fromImage(qImage));
imageLabel.show();
}
This works like a charm, but of course the UI is blocked.
Non-working multi-threading solution
As you will see, the code is basically the same as above, just moved to another class, DoCameraStuff. Here are the main components for this purpose in the TragVisMain header file:
namespace Ui {
class TragVisMain;
}
class TragVisMain : public QMainWindow
{
Q_OBJECT
public:
explicit TragVisMain(QWidget *parent = nullptr);
~TragVisMain();
private:
Ui::TragVisMain *ui;
DoCameraStuff dcs;
QLabel imageLabel;
QThread workerThread;
public slots:
void setImage(const QImage &img); // Called after image processing
// I have also tried a plain value parameter 'setImage(QImage img)', a non-const reference 'setImage(QImage &img)' and a pointer 'setImage(QImage *img)'
private slots:
void on_readBin_clicked(); // Emits 'loadBinaryImage'
// void on_readBinSingle_clicked();
signals:
void loadBinaryImage(); // Starts image processing
};
DoCameraStuff is just a QObject:
class DoCameraStuff : public QObject
{
Q_OBJECT
public:
explicit DoCameraStuff(QObject *parent = nullptr);
public slots:
void readBinaryAndShowBinPic();
signals:
void showQImage(const QImage &image);
};
Moving dcs to the workerThread and connecting signals and slots happens in the constructor of TragVisMain:
TragVisMain::TragVisMain(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::TragVisMain),
dcs()
{
ui->setupUi(this);
dcs.moveToThread(&workerThread);
// Object should be deletable after the thread finished!
connect(&workerThread, &QThread::finished, &dcs, &QObject::deleteLater);
// For starting the image processing
connect(this, &TragVisMain::loadBinaryImage, &dcs, &DoCameraStuff::readBinaryAndShowBinPic);
// Showing QImage after image processing is done
connect(&dcs, &DoCameraStuff::showQImage, this, &TragVisMain::setImage);
// Start workerThread
workerThread.start();
}
Starting the image processing happens by pressing the readBin button:
void TragVisMain::on_readBin_clicked()
{
emit loadBinaryImage(); // slot: readBinaryAndShowBinPic
}
The process happens in DoCameraStuff::readBinaryAndShowBinPic:
void DoCameraStuff::readBinaryAndShowBinPic() {
// Exact same code from single thread solution:
// Open binary file
FILE *imageBinFile = nullptr;
imageBinFile = fopen("imageFile.bin", "rb");
if(imageBinFile == NULL) {
return;
}
// Get binary file size
fseek(imageBinFile, 0, SEEK_END); // seek to end of file
size_t size = static_cast<size_t>(ftell(imageBinFile)); // get current file pointer
fseek(imageBinFile, 0, SEEK_SET);
// Read binary file
void *imageData = malloc(size);
fread(imageData, 1, size, imageBinFile);
// Create cv::Mat
cv::Mat openCvImage(1024, 1280, CV_16UC1, imageData);
openCvImage.convertTo(openCvImage, CV_8UC1, 0.04); // Convert to 8 Bit greyscale
// Transform to QImage
QImage qImage(
openCvImage.data,
1280,
1024,
QImage::Format_Grayscale8
);
// Send qImage to 'TragVisMain'
emit showQImage(qImage);
}
Showing the image in TragVisMain::setImage:
void TragVisMain::setImage(const QImage &img)
{
imageLabel.setPixmap(QPixmap::fromImage(img));
imageLabel.show();
}
Problem
Well, the multi-threading attempt just crashes the whole application without any message, at a different step each time. Honestly, I don't have any idea why. To me, DoCameraStuff is a standard worker class doing time-independent work in a member function, without any critical dependencies.
I also checked whether any of the functions used inside DoCameraStuff::readBinaryAndShowBinPic are not thread-safe, but I couldn't find any problems regarding <cstdio>, cv::Mat and QImage under equivalent conditions.
So:
Why does the multi-threading attempt crash?
What changes need to be applied, so that the process in the thread does not crash?
I always appreciate your help.
It doesn't have a chance of working, since the QImage wraps transient data that is then deallocated by the cv::Mat. At the minimum, you should emit a copy of the image (literally qImage.copy()). Ideally, you'd wrap the cv::Mat's lifetime management in the QImage's deleter, so that the copy is not necessary and the matrix gets destroyed along with the image that wraps it. Since format conversions are usually needed between cv::Mat and QImage, it's probably best to perform the conversion between the source cv::Mat and another cv::Mat that wraps the QImage-owned memory. That solution avoids memory reallocations, since the QImage can be retained in the class performing the conversion. It is then also compatible with Qt 4, where QImage doesn't support deleters.
See this answer for a complete example of an OpenCV video capture Qt-based widget viewer, and this answer for a complete example of multi-format conversion from cv::Mat to QImage.
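In the meantime, a minimal sketch of the first suggestion above (emitting a deep copy), keeping the rest of readBinaryAndShowBinPic as posted, and also releasing the resources the posted code leaks (the FILE* and the malloc'd buffer, which the in-place convertTo to a different type no longer references):
// Hedged sketch of the end of DoCameraStuff::readBinaryAndShowBinPic().
// qImage wraps openCvImage's buffer; copy() detaches it into memory owned
// by the QImage itself, so it stays valid after this function returns and
// the local cv::Mat is destroyed.
QImage qImage(openCvImage.data, 1280, 1024, QImage::Format_Grayscale8);
emit showQImage(qImage.copy());   // the deep copy crosses the thread boundary safely
fclose(imageBinFile);             // the posted code never closes the file
free(imageData);                  // ...or frees the buffer it read into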

Qt, paint image, will this cause memory issue?

//Case 1:
QImage* tImg = new QImage("Some Image Here");
painter->drawImage(x, y, *tImg );
...
delete tImg;
//Case 2:
QImage* tImg = new QImage("Some Image Here");
{
QImage aImg(*tImg);
painter->drawImage(x, y, aImg );
}
...
delete tImg;
I am trying to load several images in a worker thread and draw them in the main thread. But I am not sure whether it is OK to delete the images in the worker thread after drawing them.
//Case 3:
...
//In worker thread
QImage* tImg = new QImage("Some Image Here");
mutex.lock();
matrix.insert(tImg); // matrix is a QList
mutex.unlock();
...
//In main thread
mutex.lock();
foreach(QImage* tImg, matrix)
{
painter->drawImage(x, y, *tImg);
}
mutex.unlock();
...
//In worker thread
mutex.lock();
matrix.remove(tImg);
delete tImg;
mutex.unlock();
Will the above code cause issues? Since the drawImage function takes the image by const reference, will this cause any memory issues?
What if delete tImg happens on another thread? Will it be safe if I use a mutex to make sure delete tImg is only called after painter->drawImage(x, y, *tImg)?
The manual memory management is unnecessary. You should leverage Qt to do it for you. You can pass the image through a signal-slot connection, and use the fact that the value will be automatically copied, and any access to it will be automatically synchronized by Qt.
Here's how you could do it, very simply, letting the compiler do all the hard work of resource management for you:
// https://github.com/KubaO/stackoverflown/tree/master/questions/imageloader-36265788
#include <QtWidgets>
#include <QtConcurrent>
First, let's have a class that acts as the image source. It has a signal that provides the image, of const reference type, since any copying will be done automatically by Qt if necessary to cross thread boundaries.
class ImageSource : public QObject {
Q_OBJECT
public:
Q_SIGNAL void hasImage(const QImage & image);
A method generates the image, and emits the signal. As long as automatic connections are used with the hasImage signal, this method can be run in any thread - safely. In our case, we always run this method from the worker thread, but we could run it from the main thread, too - the only difference would be in performance.
/// This method is thread-safe (ignoring the UB of incrementing a shared int)
void generate() {
static auto counter = 0;
QImage img(128, 128, QImage::Format_ARGB32);
img.fill(Qt::white);
QPainter p(&img);
p.drawText(img.rect(), Qt::AlignCenter, QString::number(counter++));
p.end();
emit hasImage(img);
}
};
We'll need an instance of that class, and something to show the image on - say, a QLabel:
int main(int argc, char ** argv) {
QApplication app{argc, argv};
ImageSource source;
QLabel label;
label.show();
We can now connect hasImage to a functor that sets the label's size and sets the image on it. It then immediately runs the image generator again in a worker thread from the global pool. That's handled automatically by QtConcurrent::run.
The functor runs in the main thread: this is assured by providing the context parameter to connect(): connect(--, --, context, --). The functor runs in label.thread(), just as we wish.
QObject::connect(&source, &ImageSource::hasImage, &label, [&](const QImage & image){
label.setFixedSize(image.size());
label.setPixmap(QPixmap::fromImage(image));
QtConcurrent::run(&source, &ImageSource::generate);
});
Since the connection is automatic, emitting the hasImage signal results in the slot call being posted to the receiving object's (label's) thread's event queue - here, the queue of the main thread. The event loop picks up the slot call and executes it. So, even though hasImage was emitted in a worker thread, the image is automatically copied and delivered to our functor in the main thread.
Finally, we generate the first image to start the process.
QtConcurrent::run(&source, &ImageSource::generate); // generate the first image
return app.exec();
}
The #include at the end is needed to provide the implementation of the hasImage signal and the metadata describing the ImageSource class; both are generated by moc.
#include "main.moc"
This is complete code: you can paste it into a new project, compile and run it, or download the complete project from the GitHub link.
It shows a label that gets its pixmap updated at a rate of approximately 1000 frames per second on my machine. The application is fully responsive: you can freely move the window, and quit it at any time.
See this answer for another example of a threaded image loader.

QImage on QThreadPool fails

I am trying to load multiple QImage objects from files using a threadpool. I have created my own QRunnable subclass to load the image from a file and copy it into a buffer:
class ImageLoader : public QRunnable
{
public:
ImageLoader(const QString &filename, char **buffer, int *size) :
QRunnable(),
filename(filename),
buffer(buffer),
size(size)
{}
// QRunnable interface
void run() {
QImage image(filename);
(*size) = image.byteCount();
(*buffer) = new char[(*size)];
memcpy_s(*buffer, *size, image.constBits(), image.byteCount());
}
private:
const QString filename;
char **buffer;
int *size;
};
The code works fine if executed on the main thread, but as soon as I run the runnable on a QThreadPool, I get a huge bunch of errors that all basically say the same thing:
QObject::moveToThread: Current thread (0x2a023ae6550) is not the object's thread (0x2a023ae65c0).
Cannot move to target thread (0x2a023aca0f0)
The first two addresses change with each message; I assume they represent the different threads of the pool. What's interesting:
The first and the second are never the same; however, they all belong to the same "group", i.e. the first address of one error can become the second address of another error, etc.
The third address always stays the same; it is the address of the main (GUI) thread.
Any ideas why that happens or how to fix it? I read the documentation of QImage but wasn't able to find anything about threads in there, except:
Because QImage is a QPaintDevice subclass, QPainter can be used to draw directly onto images. When using QPainter on a QImage, the painting can be performed in another thread than the current GUI thread.
I solved the problem myself:
The path I passed to the QImage was invalid. I don't know how this was able to produce such an error, but after I fixed the path, it works just fine!
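For what it's worth, a defensive variant of run() (using plain memcpy from <cstring> instead of the MSVC-specific memcpy_s) that guards against a failed load, so a bad path cannot silently hand back a zero-byte buffer:
// Hedged sketch: bail out early if the file could not be loaded,
// instead of copying from a null image.
void run() {
    QImage image(filename);
    if (image.isNull()) {      // invalid path or unsupported format
        *size = 0;
        *buffer = nullptr;
        return;
    }
    *size = image.byteCount();
    *buffer = new char[*size];
    memcpy(*buffer, image.constBits(), *size);
}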

How to pass an opencv image to display in a Qt scene in a different thread?

I am struggling with memory issues. I think I missed something and would greatly appreciate it if someone could point out what I understand or do wrong.
What I want to do
My GUI runs in the main thread. I am launching a computation on a separate thread T. The result of this computation is a bunch of OpenCV images. I want to display them in my GUI during the computation.
How I understand I should do it
Launch computation thread.
When a new image is computed, convert it to a QImage, wrap it in a custom QEvent, and post it to my GUI.
Only use heap memory.
How I implemented it
In my computation thread, when a new image is ready:
std::shared_ptr<cv::Mat> cvimRGB = std::shared_ptr<cv::Mat>(new cv::Mat);
cv::Mat cvimBGR;
cv::Mat cvim = MyNewComputedImage;
cvim.convertTo(cvimBGR,CV_8UC3);
cv::cvtColor(cvimBGR,*cvimRGB,cv::COLOR_BGR2RGB);
std::shared_ptr<QImage> qim = std::shared_ptr<QImage>(
new QImage((uint8_t*) cvimRGB->data,cvimRGB->cols,cvimRGB->rows,cvimRGB->step,QImage::Format_RGB888));
ImageAddedEvent* iae = new ImageAddedEvent(qim,i);
QCoreApplication::postEvent(gui, iae);
In my event handler:
bool mosaicage::event(QEvent * e){
if (e->type() == ImageAdded) {
ImageAddedEvent* ie = dynamic_cast<ImageAddedEvent*>(e);
QImage qim(*(ie->newImage));
QPixmap pm(QPixmap::fromImage(qim));
auto p = scene.addPixmap(pm);
images_on_display.push_back(p);
return true;
} else {
return QWidget::event(e);
}
}
My custom event is defined as follows:
class ImageAddedEvent: public QEvent {
public:
ImageAddedEvent();
~ImageAddedEvent();
ImageAddedEvent(std::shared_ptr<QImage> im, int i);
std::shared_ptr<QImage> newImage;
int index;
};
What happens
In debug mode, I get crap on display.
In release mode, I get an access violation error.
I am pretty confident about the part where I convert the cv::Mat to a QImage, because I did not change it; I used to update the display from the computation thread, but I learned better. It worked, though (when it did not crash).
How I fixed it
The problem was in the memory pointed to by the QImage, which was owned by the cv::Mat I constructed it from. If I want to keep this way of constructing the QImage, using data managed by someone else, I must keep that data valid. Hence I moved the cv::Mat into the custom event:
class ImageAddedEvent: public QEvent {
public:
ImageAddedEvent();
~ImageAddedEvent();
ImageAddedEvent(cv::Mat im, int i);
QImage newImage;
cv::Mat cvim;
int index;
};
I changed the constructor of the event to initialize the QImage with the cv::Mat data:
ImageAddedEvent::ImageAddedEvent(cv::Mat cvimRGB, int i) : QEvent(ImageAdded),
index(i),
cvim(cvimRGB)
{
newImage = QImage((uint8_t*) cvim.data,cvim.cols,cvim.rows,cvim.step,QImage::Format_RGB888);
}
And now I only have to pass a cv::Mat to my event constructor:
cv::Mat cvimBGR,cvimRGB;
cv::Mat cvim = MyNewImage;
cvim.convertTo(cvimBGR,CV_8UC3);
cv::cvtColor(cvimBGR,cvimRGB,cv::COLOR_BGR2RGB);
ImageAddedEvent* iae = new ImageAddedEvent(cvimRGB,i);
QCoreApplication::postEvent(gui, iae);
Et voilà, again, thanks for the help!
You are using the wrong constructor.
From the doc (emphasis mine):
The buffer must remain valid throughout the life of the QImage and all copies that have not been modified or otherwise detached from the original buffer. The image does not delete the buffer at destruction. You can provide a function pointer cleanupFunction along with an extra pointer cleanupInfo that will be called when the last copy is destroyed.
You are using the stack-allocated cvimRGB for the data pointer, which (I believe) will clean up the buffer in its destructor before the event is handled, leading to accessing "crap" data.
So you should create a fresh QImage and then copy the data:
std::shared_ptr<cv::Mat> cvimRGB = std::shared_ptr<cv::Mat>(new cv::Mat);
cv::Mat cvimBGR;
cv::Mat cvim = MyNewComputedImage;
cvim.convertTo(cvimBGR,CV_8UC3);
cv::cvtColor(cvimBGR,*cvimRGB,cv::COLOR_BGR2RGB);
QImage qim = QImage(cvimRGB->cols, cvimRGB->rows, QImage::Format_RGB888);
//copy from cvimRGB->data to qim.bits()
ImageAddedEvent* iae = new ImageAddedEvent(qim,i);
QCoreApplication::postEvent(gui, iae);
Or detach cvimRGB->data and let the cleanupFunction delete the buffer.
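A rough sketch of that cleanup-function alternative (Qt 5 only), assuming a heap-allocated cv::Mat whose lifetime is tied to the wrapping QImage:
// Hedged sketch: the QImage wraps the Mat's pixels, and the cleanup
// function deletes the Mat when the last copy of the QImage is destroyed.
cv::Mat* mat = new cv::Mat;
cv::cvtColor(cvimBGR, *mat, cv::COLOR_BGR2RGB);
QImage qim(mat->data, mat->cols, mat->rows, static_cast<int>(mat->step),
           QImage::Format_RGB888,
           [](void* m) { delete static_cast<cv::Mat*>(m); }, // cleanupFunction
           mat);                                             // cleanupInfo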
On another note, there is no need to use std::shared_ptr<QImage>, as QImage will not copy the underlying data unless needed. This is known in Qt as implicit data sharing.
To call the GUI, you can provide a Q_INVOKABLE method (or just a slot) in gui and use QMetaObject::invokeMethod(gui, "imageUpdated", Q_ARG(QImage, qim));
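For illustration, a minimal sketch of that approach; the slot name imageUpdated comes from the line above, while the Gui class and its label are assumed:
// In the GUI class (lives in the main thread):
class Gui : public QWidget {
    Q_OBJECT
public:
    Q_INVOKABLE void imageUpdated(const QImage& qim) {
        // Runs in the GUI thread thanks to the queued invocation below.
        label->setPixmap(QPixmap::fromImage(qim));
    }
private:
    QLabel* label = new QLabel(this);
};

// In the computation thread, instead of posting a custom event:
QMetaObject::invokeMethod(gui, "imageUpdated",
                          Qt::QueuedConnection,  // marshals the call to gui's thread
                          Q_ARG(QImage, qim));   // QImage is implicitly shared, copied as needed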

Draw pixel based graphics to a QWidget

I have an application which needs to draw on a pixel by pixel basis at a specified frame rate (simulating an old machine). One caveat is that the main machine engine runs in a background thread in order to ensure that the UI remains responsive and usable during simulation.
Currently, I am toying with using something like this:
class QVideo : public QWidget {
public:
QVideo(QWidget *parent, Qt::WindowFlags f) : QWidget(parent, f), screen_image_(256, 240, QImage::Format_RGB32) {
}
void draw_frame(void *data) {
// render data into screen_image_
}
void start_frame() {
// do any pre-rendering prep work that needs to be done before
// each frame
}
void end_frame() {
update(); // force a paint event
}
void paintEvent(QPaintEvent *) {
QPainter p(this);
p.drawImage(rect(), screen_image_, screen_image_.rect());
}
QImage screen_image_;
};
This is mostly effective, and surprisingly not very slow. However, there is an issue: the update function schedules a paint event, which may not happen right away. In fact, a bunch of paint events may get "combined", according to the Qt documentation.
The negative effect I am seeing is that after a few minutes of simulation, the screen stops updating (the image appears frozen even though the simulation is still running) until I do something that forces a screen update, for example switching the window in and out of the maximized state.
I have experimented with QTimers and other similar mechanisms to move the rendering into the GUI thread so that I can force immediate updates, but this resulted in unacceptable performance issues.
Is there a better way to constantly draw pixels onto a widget at a fixed interval? Pure Qt solutions are preferred.
EDIT: Since some people choose to have an attitude instead of reading the whole question, I will clarify the issue. I cannot use QWidget::repaint because it has the limitation that it must be called from the same thread as the event loop. Otherwise, no update occurs and instead I get qDebug messages such as these:
QPixmap: It is not safe to use pixmaps outside the GUI thread
QPixmap: It is not safe to use pixmaps outside the GUI thread
QWidget::repaint: Recursive repaint detected
QPainter::begin: A paint device can only be painted by one painter at a time.
QWidget::repaint: It is dangerous to leave painters active on a widget outside of the PaintEvent
QWidget::repaint: It is dangerous to leave painters active on a widget outside of the PaintEvent
EDIT: To demonstrate the issue, I have created this simple example code:
QVideo.h
#include <QWidget>
#include <QPainter>
class QVideo : public QWidget {
Q_OBJECT;
public:
QVideo(QWidget *parent = 0, Qt::WindowFlags f = 0) : QWidget(parent, f), screen_image_(256, 240, QImage::Format_RGB32) {
}
void draw_frame(void *data) {
// render data into screen_image_
// I am using fill here, but in the real thing I am rendering
// on a pixel by pixel basis
screen_image_.fill(rand());
}
void start_frame() {
// do any pre-rendering prep work that needs to be done before
// each frame
}
void end_frame() {
//update(); // force a paint event
repaint();
}
void paintEvent(QPaintEvent *) {
QPainter p(this);
p.drawImage(rect(), screen_image_, screen_image_.rect());
}
QImage screen_image_;
};
main.cc:
#include <QApplication>
#include <QThread>
#include <cstdio>
#include "QVideo.h"
struct Thread : public QThread {
Thread(QVideo *v) : v_(v) {
}
void run() {
while(1) {
v_->start_frame();
v_->draw_frame(0); // contents doesn't matter for this example
v_->end_frame();
QThread::sleep(1);
}
}
QVideo *v_;
};
int main(int argc, char *argv[]) {
QApplication app(argc, argv);
QVideo w;
w.show();
Thread t(&w);
t.start();
return app.exec();
}
I am definitely willing to explore options which don't use a temporary QImage to render; it is just that QImage is the only class in Qt which seems to have a direct pixel-writing interface.
Try emitting a signal from the thread to a slot in the event loop widget that calls repaint(), which will then execute right away. I am doing something like this in my graphing program, which executes the main calculations in one thread, then tells the widget when it is time to repaint() the data.
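A hedged sketch of that idea; the signal name frameReady and the slot do_repaint are assumed, not part of the original code. A queued connection makes the slot run in the GUI thread, where calling repaint() is allowed:
// Worker-side object emitting a signal after each simulated frame:
class Engine : public QObject {
    Q_OBJECT
signals:
    void frameReady();
};

// Somewhere in the GUI thread, e.g. in QVideo's constructor:
connect(engine, SIGNAL(frameReady()), this, SLOT(do_repaint()), Qt::QueuedConnection);

// Slot in QVideo; executed in the GUI thread by the event loop:
void QVideo::do_repaint() {
    repaint(); // immediate repaint is legal here
}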
In similar cases, what I did was still use a QTimer, but do several simulation steps per tick instead of just one. You can even make the program auto-tune the number of simulation steps so that you get whatever frames per second you like for the screen update.
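A rough sketch of that pattern; the step count kStepsPerTick, the video pointer to the QVideo widget, and the way the engine is stepped (via draw_frame) are assumptions for illustration:
// GUI-thread timer driving the simulation in batches, then repainting once.
QTimer* timer = new QTimer(video);
QObject::connect(timer, &QTimer::timeout, video, [video]() {
    const int kStepsPerTick = 100;  // tune to hit the desired frame rate
    video->start_frame();
    for (int i = 0; i < kStepsPerTick; ++i)
        video->draw_frame(0);       // assumed: one engine step + render
    video->end_frame();             // update()/repaint() now happens in the GUI thread
});
timer->start(16);                   // roughly 60 screen updates per second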