Qt - Transform cv::Mat to QImage in worker thread crashes - c++

Introduction
What I want is really simple: I want to start the process of reading a binary file via a button in the main UI thread, let a separate thread handle the processing, including the transformation into a QImage, and return this image to the main UI thread, where it should be shown in a label.
For this I use Qt's signal/slot mechanism and its threading functionality.
I already have a working single-threaded solution, but the threaded variant crashes at seemingly arbitrary steps, which I don't understand, because the whole process is completely encapsulated and not time-critical...
Working single-threading solution:
TragVisMain is a QMainWindow:
class TragVisMain : public QMainWindow
Pushing the button readBinSingle starts the process:
void TragVisMain::on_readBinSingle_clicked()
{
    // Open binary file
    FILE *imageBinFile = nullptr;
    imageBinFile = fopen("imageFile.bin", "rb");
    if(imageBinFile == NULL) {
        return;
    }
    // Get binary file size
    fseek(imageBinFile, 0, SEEK_END); // seek to end of file
    size_t size = static_cast<size_t>(ftell(imageBinFile)); // get current file pointer
    fseek(imageBinFile, 0, SEEK_SET);
    // Read binary file
    void *imageData = malloc(size);
    fread(imageData, 1, size, imageBinFile);
    // Create cv::Mat
    cv::Mat openCvImage(1024, 1280, CV_16UC1, imageData);
    openCvImage.convertTo(openCvImage, CV_8UC1, 0.04); // Convert to 8 bit greyscale
    // Transform to QImage
    QImage qImage(
        openCvImage.data,
        1280,
        1024,
        QImage::Format_Grayscale8
    );
    // Show image in label, 'imageLabel' is a class member
    imageLabel.setPixmap(QPixmap::fromImage(qImage));
    imageLabel.show();
}
This works like a charm (QPixmap::fromImage() deep-copies the pixel data while openCvImage is still alive), but of course the UI is blocked.
Not working multi-threading solution
As you will see, the code is basically the same as above, just moved to another class, DoCameraStuff. Here are the main components for this purpose in the TragVisMain header file:
namespace Ui {
class TragVisMain;
}

class TragVisMain : public QMainWindow
{
    Q_OBJECT
public:
    explicit TragVisMain(QWidget *parent = nullptr);
    ~TragVisMain();
private:
    Ui::TragVisMain *ui;
    DoCameraStuff dcs;
    QLabel imageLabel;
    QThread workerThread;
public slots:
    void setImage(const QImage &img); // Called after image processing
    // I have also tried a plain value parameter 'setImage(QImage img)', a non-const reference 'setImage(QImage &img)' and a pointer 'setImage(QImage *img)'
private slots:
    void on_readBin_clicked(); // Emits 'loadBinaryImage'
    // void on_readBinSingle_clicked();
signals:
    void loadBinaryImage(); // Starts image processing
};
DoCameraStuff is just a QObject:
class DoCameraStuff : public QObject
{
    Q_OBJECT
public:
    explicit DoCameraStuff(QObject *parent = nullptr);
public slots:
    void readBinaryAndShowBinPic();
signals:
    void showQImage(const QImage &image);
};
Moving dcs to the workerThread and connecting signals and slots happens in the constructor of TragVisMain:
TragVisMain::TragVisMain(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::TragVisMain),
    dcs()
{
    ui->setupUi(this);
    dcs.moveToThread(&workerThread);
    // Object should be deletable after the thread finished!
    connect(&workerThread, &QThread::finished, &dcs, &QObject::deleteLater);
    // For starting the image processing
    connect(this, &TragVisMain::loadBinaryImage, &dcs, &DoCameraStuff::readBinaryAndShowBinPic);
    // Showing QImage after image processing is done
    connect(&dcs, &DoCameraStuff::showQImage, this, &TragVisMain::setImage);
    // Start workerThread
    workerThread.start();
}
Starting the image processing happens through pressing the readBin button:
void TragVisMain::on_readBin_clicked()
{
    emit loadBinaryImage(); // slot: readBinaryAndShowBinPic
}
The process happens in DoCameraStuff::readBinaryAndShowBinPic:
void DoCameraStuff::readBinaryAndShowBinPic() {
    // Exact same code from single thread solution:
    // Open binary file
    FILE *imageBinFile = nullptr;
    imageBinFile = fopen("imageFile.bin", "rb");
    if(imageBinFile == NULL) {
        return;
    }
    // Get binary file size
    fseek(imageBinFile, 0, SEEK_END); // seek to end of file
    size_t size = static_cast<size_t>(ftell(imageBinFile)); // get current file pointer
    fseek(imageBinFile, 0, SEEK_SET);
    // Read binary file
    void *imageData = malloc(size);
    fread(imageData, 1, size, imageBinFile);
    // Create cv::Mat
    cv::Mat openCvImage(1024, 1280, CV_16UC1, imageData);
    openCvImage.convertTo(openCvImage, CV_8UC1, 0.04); // Convert to 8 bit greyscale
    // Transform to QImage
    QImage qImage(
        openCvImage.data,
        1280,
        1024,
        QImage::Format_Grayscale8
    );
    // Send qImage to 'TragVisMain'
    emit showQImage(qImage);
}
Showing the image in TragVisMain::setImage:
void TragVisMain::setImage(const QImage &img)
{
    imageLabel.setPixmap(QPixmap::fromImage(img));
    imageLabel.show();
}
Problem
Well, the multi-threading attempt just crashes the whole application, without any message, at varying steps, and honestly I have no idea why. To me, DoCameraStuff is a standard worker class doing time-independent work in a member function, without any critical dependencies.
I also checked whether some of the functions used inside DoCameraStuff::readBinaryAndShowBinPic are not thread-safe, but I couldn't find any problems regarding <cstdio>, cv::Mat and QImage under equivalent conditions.
So:
Why does the multi-threading attempt crash?
What changes need to be applied, so that the process in the thread does not crash?
I always appreciate your help.

It doesn't have a chance of working, since the QImage wraps transient data that is then deallocated by the cv::Mat. At a minimum you should emit a copy of the image (literally qImage.copy()). Ideally, you'd wrap the cv::Mat lifetime management in the QImage's deleter, so that the copy is not necessary and the matrix gets destroyed along with the image that wraps it. Since format conversions are usually needed between cv::Mat and QImage, it's probably best to perform the conversion between the source cv::Mat and another cv::Mat that wraps the QImage-owned memory. That solution avoids memory reallocations, since the QImage can be retained in the class performing the conversion. It is then also compatible with Qt 4, where QImage doesn't support deleters.
See this answer for a complete example of an OpenCV video capture Qt-based widget viewer, and this answer for a complete example of multi-format conversion from cv::Mat to QImage.

Related

Qt: How to make QImage aware of updated memory buffer?

I need to draw pixel data that is being held by a library as uint8_t * and that is updated frequently and partially. I get a call-back from the library every time an update is done, that looks like this:
void gotFrameBufferUpdate(int x, int y, int w, int h);
I've tried creating a QImage using the pixel data pointer
QImage bufferImage(frameBuffer, width, height, QImage::Format_RGBX8888);
and let the call-back trigger update() of my widget
void gotFrameBufferUpdate(int x, int y, int w, int h)
{
    update(QRect(QPoint(x, y), QSize(w, h)));
}
which simply draws the updated area of the QImage via paint():
void MyWidget::paint(QPainter *painter)
{
    QRect rect = painter->clipBoundingRect().toRect();
    painter->drawImage(rect, bufferImage, rect);
}
The problem with this approach is that the QImage does not seem to reflect any updates to the pixel buffer. It keeps showing its initial contents.
My current workaround is to re-create a QImage instance each time the buffer is updated:
void gotFrameBufferUpdate(int x, int y, int w, int h)
{
    if (bufferImage)
        delete bufferImage;
    bufferImage = new QImage(frameBuffer, width, height,
                             QImage::Format_RGBX8888);
    update(QRect(QPoint(x, y), QSize(w, h)));
}
This works but seems very inefficient to me. Is there a better way of dealing with externally updated pixel data in Qt? Can I make my QImage aware of updates to its memory buffer?
(Background: I'm writing a custom QML type with C++ backend that shall display the contents of a VNC session. I'm using LibVNC/libvncclient for this.)
The QImage is updated if one calls QImage::bits().
It does not allocate a new buffer, and you can discard the result, but it magically triggers a refresh of the image (presumably because bits() detaches the image, which also changes its cacheKey).
It is required every time you want a refresh.
I don't know if this is guaranteed behaviour, nor whether it saves anything over recreating the image.
I would guess that some sort of caching mechanism is interfering with your expectations. QImage has a cacheKey, which changes if the QImage is altered. This can of course only happen if you change the image through QImage functions. As far as I can see, you're changing the underlying buffer directly, so QImage's cacheKey will remain the same. Qt's pixmap cache then has that key in its cache, and uses the cached pixmap for performance reasons.
Unfortunately, there doesn't seem to be a direct way to update this cacheKey or otherwise "invalidate" a QImage. You have two options:
Recreate the QImage when you need it. There's no need to new it, so you can save a heap allocation. Creating a QImage over an existing buffer seems like a "cheap" operation, so I doubt this is a bottleneck.
Do a trivial operation on the QImage (e.g. setPixel on a single pixel to black and then back to the old value). This is somewhat "hackish", but probably the most efficient way to work around this API deficiency (it will trigger an update of cacheKey as far as I can tell).
AFAICT the QImage class already works the way you think it should; in particular, simply writing into the external frame-buffer does, in fact, update the contents of the QImage. My guess is that in your program some other bit of code is internally copying the QImage data into a QPixmap somewhere (a QPixmap always stores its internal buffer in the hardware's native format, which makes it more efficient to paint onto the screen repeatedly), and it is that QPixmap that is not getting modified when the frameBuffer is updated.
As evidence that a QImage does in fact always contain the data from the frameBuffer, here is a program that writes a new color into its frame-buffer every time you click on the window, and then calls update() to force the widget to re-draw itself. I see that the widget changes color on every mouse-click:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>   // rand()
#include <string.h>   // memset()
#include <QPixmap>
#include <QWidget>
#include <QApplication>
#include <QPainter>
#include <QMouseEvent>

const int width = 500;
const int height = 500;
const int frameBufferSizeBytes = width*height*sizeof(uint32_t);
unsigned char * frameBuffer = NULL;

class MyWidget : public QWidget
{
public:
    MyWidget(QImage * img) : _image(img) {/* empty */}
    virtual ~MyWidget() {delete _image;}

    virtual void paintEvent(QPaintEvent * e)
    {
        QPainter p(this);
        p.drawImage(rect(), *_image);
    }

    virtual void mousePressEvent(QMouseEvent * e)
    {
        const uint32_t someColor = rand();
        const size_t frameBufferSizeWords = frameBufferSizeBytes/sizeof(uint32_t);
        uint32_t * fb32 = (uint32_t *) frameBuffer;
        for (size_t i=0; i<frameBufferSizeWords; i++) fb32[i] = someColor;
        update();
    }

private:
    QImage * _image;
};

int main(int argc, char ** argv)
{
    QApplication app(argc, argv);
    frameBuffer = new unsigned char[frameBufferSizeBytes];
    memset(frameBuffer, 0xFF, frameBufferSizeBytes);
    QImage * img = new QImage(frameBuffer, width, height, QImage::Format_RGBX8888);
    MyWidget * w = new MyWidget(img);
    w->resize(width, height);
    w->show();
    return app.exec();
}

QGraphicsScene paint event inside QGraphicsProxyWidget

I am streaming from a webcam using OpenCV. Each captured frame (of type cv::Mat) is transported via a custom signal to slots in a node-based QGraphicsScene. Since I have a single type of node that actually needs to display the cv::Mat, while the rest is just for processing the data, I have created a viewer widget that converts the cv::Mat data to QImage data and then draws it on its surface in paintEvent():
Here is the code for the widget itself:
qcvn_node_viewer.hpp
#ifndef QCVN_NODE_VIEWER_HPP
#define QCVN_NODE_VIEWER_HPP

#include <QWidget>
#include <QImage>
#include <QSize>
#include <QPaintEvent>
#include <opencv2/core.hpp>

class QCVN_Node_Viewer : public QWidget
{
    Q_OBJECT
    QImage output;
public:
    explicit QCVN_Node_Viewer(QWidget *parent = 0);
    ~QCVN_Node_Viewer();

    QSize sizeHint() const;
    QSize minimumSizeHint() const;

    // Source: http://stackoverflow.com/a/17137998/1559401
    static QImage Mat2QImage(cv::Mat const& frame);
    static cv::Mat QImage2Mat(QImage const& src);
protected:
    void paintEvent(QPaintEvent *event);
signals:
    // Nodes connect to the sendFrame(cv::Mat) signal
    void sendFrame(cv::Mat);
public slots:
    void receiveFrame(cv::Mat frame);
};

#endif // QCVN_NODE_VIEWER_HPP
qcvn_node_viewer.cpp
#include "qcvn_node_viewer.hpp"

#include <opencv2/imgproc.hpp>
#include <QPainter>

QCVN_Node_Viewer::QCVN_Node_Viewer(QWidget *parent) :
    QWidget(parent)
{
    qRegisterMetaType<cv::Mat>("cv::Mat");
}

QCVN_Node_Viewer::~QCVN_Node_Viewer()
{
}

QSize QCVN_Node_Viewer::sizeHint() const
{
    return output.size();
}

QSize QCVN_Node_Viewer::minimumSizeHint() const
{
    return output.size();
}

QImage QCVN_Node_Viewer::Mat2QImage(cv::Mat const& frame)
{
    cv::Mat temp;
    cvtColor(frame, temp, CV_BGR2RGB);
    QImage dest((const uchar *) temp.data, temp.cols, temp.rows, temp.step, QImage::Format_RGB888);
    dest.bits(); // enforce deep copy
    return dest;
}

cv::Mat QCVN_Node_Viewer::QImage2Mat(QImage const& frame)
{
    cv::Mat tmp(frame.height(), frame.width(), CV_8UC3, (uchar*)frame.bits(), frame.bytesPerLine());
    cv::Mat result; // deep copy just in case (my lack of knowledge with OpenCV)
    cvtColor(tmp, result, CV_BGR2RGB);
    return result;
}

void QCVN_Node_Viewer::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);
    painter.drawImage(QPoint(0,0), output);
    painter.end();
}

void QCVN_Node_Viewer::receiveFrame(cv::Mat frame)
{
    if(frame.empty()) return;
    // Pass along the received frame
    emit sendFrame(frame);
    // Generate
    // http://stackoverflow.com/questions/17127762/cvmat-to-qimage-and-back
    // Application crashes sometimes! Conversion seems to be incorrect
    output = Mat2QImage(frame);
    output.bits();
    setFixedSize(output.width(), output.height());
    repaint();
}
As you can see from the screenshot above, the QWidget is embedded in a QGraphicsScene via a QGraphicsProxyWidget, along with a QGraphicsRectItem that I use for selecting/moving the node around in the scene.
Using QPainter is considered more efficient than using a QLabel and setting its QPixmap to the captured and converted frame. Using an OpenGL viewport inside a QGraphicsScene is, as far as I know, not an option. It seems that processing the paint events in the scene also handles the local ones inside the QGraphicsItems and QGraphicsProxyWidgets: the frame that the viewer paints on its surface changes only when the scene's paint event handler is called (hovering over some of my nodes, moving them around, etc.).
Is there a way to fix this? I'm about to go for the QLabel+QPixmap option (even though I'm not sure it won't suffer from the same or a different problem), but first I'd like to make sure there is nothing I can do to keep my current way of doing things.
EDIT: Okay, so I have changed my viewer to use a QLabel instead of the widget's paint event and it works like a charm. The question is still open though, since I'd like to know how to use local paint events.

QML and C++ image interoperability

I've looked through the documentation and also whatever I could find on the internet, but it doesn't seem like it is possible to access a QML Image from C++.
Is there a way to work around that?
It was possible in QtQuick1, but that functionality was removed in QtQuick2.
The solution I've come up with allows having the same image in QML and C++ by implementing a QQuickImageProvider that basically works with a QPixmap *, which is converted to a string and then back to a pointer type (it does sound a little unsafe, but has proven to work quite well).
class Pixmap : public QObject {
    Q_OBJECT
    Q_PROPERTY(QString data READ data NOTIFY dataChanged)
public:
    Pixmap(QObject * p = 0) : QObject(p), pix(0) {}
    ~Pixmap() { if (pix) delete pix; }

    QString data() {
        if (pix) return "image://pixmap/" + QString::number((qulonglong)pix);
        else return QString();
    }

public slots:
    void load(QString url) {
        QPixmap * old = 0;
        if (pix) old = pix;
        pix = new QPixmap(url);
        emit dataChanged();
        if (old) delete old;
    }

    void clear() {
        if (pix) delete pix;
        pix = 0;
        emit dataChanged();
    }

signals:
    void dataChanged();

private:
    QPixmap * pix;
};
The implementation of the Pixmap element is pretty straightforward, although the initial version was a bit limited: since the new pixmap happened to be allocated at exactly the same memory address, the data string was the same for different images, causing the QML Image component to not update. The solution was as simple as deleting the old pixmap only after the new one has been allocated. Here is the actual image provider:
class PixmapProvider : public QQuickImageProvider {
public:
    PixmapProvider() : QQuickImageProvider(QQuickImageProvider::Pixmap) {}

    QPixmap requestPixmap(const QString &id, QSize *size, const QSize &requestedSize) {
        qulonglong d = id.toULongLong();
        if (d) {
            QPixmap * p = reinterpret_cast<QPixmap *>(d);
            return *p;
        } else {
            return QPixmap();
        }
    }
};
Registration:
// in main()
engine.addImageProvider("pixmap", new PixmapProvider);
qmlRegisterType<Pixmap>("Test", 1, 0, "Pixmap");
And this is how you use it in QML:
Pixmap {
    id: pix
}

Image {
    source: pix.data
}

// and then pix.load(path)
ALSO note that in my case there was no actual modification of the pixmap that needed to be updated in QML. This solution will not auto-update the image in QML if it is changed in C++, because the address in memory will stay the same. The solution for that is just as straightforward: implement an update() method that allocates a new QPixmap(oldPixmap); it will use the same internal data but give you a new accessor to it, with a new memory address, which will trigger the QML image to update on changes. This means the preferred way to access the pixmap is through the Pixmap class, not directly via the QPixmap *, since you will need the Pixmap class to trigger the data change; so just add an accessor for pix. And in case you do complex or threaded stuff, you might want to use QImage instead and add a mutex, so that the underlying data is not changed in QML while being changed in C++, or the other way around.

How to pass an opencv image to display in a Qt scene in a different thread?

I am struggling with memory issues; I think I missed something, and I would greatly appreciate it if someone could point me to what I understand/do wrong.
What I want to do
My gui runs in the main thread. I am launching a computation on a separate thread T. The result of this computation is a bunch of opencv images. I want to display them in my gui during the computation.
How I understand I should do it
Launch computation thread.
When a new image is computed, convert it to a QImage, wrap it in a custom QEvent, and post it to my gui.
Only use heap memory.
How I implemented it
In my computation thread, when a new image is ready :
std::shared_ptr<cv::Mat> cvimRGB = std::shared_ptr<cv::Mat>(new cv::Mat);
cv::Mat cvimBGR;
cv::Mat cvim = MyNewComputedImage;

cvim.convertTo(cvimBGR, CV_8UC3);
cv::cvtColor(cvimBGR, *cvimRGB, cv::COLOR_BGR2RGB);

std::shared_ptr<QImage> qim = std::shared_ptr<QImage>(
    new QImage((uint8_t*) cvimRGB->data, cvimRGB->cols, cvimRGB->rows, cvimRGB->step, QImage::Format_RGB888));

ImageAddedEvent* iae = new ImageAddedEvent(qim, i);
QCoreApplication::postEvent(gui, iae);
In my event handler :
bool mosaicage::event(QEvent * e) {
    if (e->type() == ImageAdded) {
        ImageAddedEvent* ie = dynamic_cast<ImageAddedEvent*>(e);
        QImage qim(*(ie->newImage));
        QPixmap pm(QPixmap::fromImage(qim));
        auto p = scene.addPixmap(pm);
        images_on_display.push_back(p);
        return true;
    } else {
        return QWidget::event(e);
    }
}
My custom event is defined as follows:
class ImageAddedEvent : public QEvent {
public:
    ImageAddedEvent();
    ~ImageAddedEvent();
    ImageAddedEvent(std::shared_ptr<QImage> im, int i);

    std::shared_ptr<QImage> newImage;
    int index;
};
What happens
In debug mode, I get crap on display.
In release mode, I get an access violation error.
I am pretty confident about the part where I convert the cv::Mat to a QImage, because I did not change it; I used to update the display from the computation thread before I learned better, and it worked (when it did not crash).
How I fixed it
The problem was in the memory pointed to by the QImage, which was owned by the cv::Mat I constructed it from. If I want to keep this way of constructing the QImage, using data managed by someone else, I must keep that data valid. Hence I moved the cv::Mat into the custom event:
class ImageAddedEvent : public QEvent {
public:
    ImageAddedEvent();
    ~ImageAddedEvent();
    ImageAddedEvent(cv::Mat im, int i);

    QImage newImage;
    cv::Mat cvim;
    int index;
};
I changed the constructor of the event to initialize the QImage with the cv::Mat data :
ImageAddedEvent::ImageAddedEvent(cv::Mat cvimRGB, int i) : QEvent(ImageAdded),
    index(i),
    cvim(cvimRGB)
{
    newImage = QImage((uint8_t*) cvim.data, cvim.cols, cvim.rows, cvim.step, QImage::Format_RGB888);
}
And now I only have to pass a cv::Mat to my event constructor :
cv::Mat cvimBGR, cvimRGB;
cv::Mat cvim = MyNewImage;

cvim.convertTo(cvimBGR, CV_8UC3);
cv::cvtColor(cvimBGR, cvimRGB, cv::COLOR_BGR2RGB);

ImageAddedEvent* iae = new ImageAddedEvent(cvimRGB, i);
QCoreApplication::postEvent(gui, iae);
et voilĂ , again, thanks for the help!
You are using the wrong constructor.
From the doc (emphasis mine):
The buffer must remain valid throughout the life of the QImage and all copies that have not been modified or otherwise detached from the original buffer. The image does not delete the buffer at destruction. You can provide a function pointer cleanupFunction along with an extra pointer cleanupInfo that will be called when the last copy is destroyed.
And you are using the stack-allocated cvimRGB for the data pointer, which (I believe) will clean up the buffer in its destructor before the event is handled, leading to accessing "crap" data.
So you should create a fresh QImage and then copy the data:
std::shared_ptr<cv::Mat> cvimRGB = std::shared_ptr<cv::Mat>(new cv::Mat);
cv::Mat cvimBGR;
cv::Mat cvim = MyNewComputedImage;

cvim.convertTo(cvimBGR, CV_8UC3);
cv::cvtColor(cvimBGR, *cvimRGB, cv::COLOR_BGR2RGB);

QImage qim = QImage(cvimRGB->cols, cvimRGB->rows, QImage::Format_RGB888);
// copy from cvimRGB->data to qim.bits()

ImageAddedEvent* iae = new ImageAddedEvent(qim, i);
QCoreApplication::postEvent(gui, iae);
Or detach cvimRGB->data and let the cleanupFunction delete the buffer.
On another note, there is no need to use std::shared_ptr<QImage>, as QImage will not copy the underlying data unless needed; this is known in Qt as implicit data sharing.
To call the GUI you can provide a Q_INVOKABLE method (or just a slot) in gui and use QMetaObject::invokeMethod(gui, "imageUpdated", Q_ARG(QImage, qim));

Memory leak in OpenCV function: cvQueryFrame()

I have a problem with the IplImage* returned from cvQueryFrame... Its memory is never released by the library, and the documentation says:
The returned image should not be released or modified by the user.
Well... how do I get my memory back? The program eats memory until it crashes. I would like to release the memory allocated for each IplImage* after the frame processing finishes. The code is this:
// In `process` thread:
CvCapture* camera;
camera = cvCreateCameraCapture(1);
assert(camera);

while(true)
{
    main = cvQueryFrame(camera);
    // Do something useful with images
    emit sendImage(main); // Send image to the UI thread
}
UPDATE:
This is the QThread subclass:
#include <QThread>
class ImageFetcher : public QThread
{
Q_OBJECT
public:
explicit ImageFetcher(QObject *parent = 0);
signals:
void run();
public slots:
};
Implementation:
void ImageFetcher::run()
{
    CvCapture* camera = cvCreateCameraCapture(0);
    IplImage* image;
    while(true)
    {
        image = cvQueryFrame(camera);
        // process image
    }
    // exec();
}
main():
int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    ImageFetcher thread;
    thread.start();
    return a.exec();
}
As far as I could test, there's no leak in v2.3.1 for Linux.
Here is what happens: an image is allocated by cvQueryFrame() when this function is called by the first time, and then this image is reused on all subsequent calls to cvQueryFrame(). No new images are created after the first call, and the image is only freed at cvReleaseCapture().
So you see, if you are experiencing a memory leak (how did you find it, exactly?) and a crash, it is most probably caused by some other part of your code. If sendImage() is not synchronized (i.e. non-blocking), passing the main image directly could cause problems. I don't know what you are doing in that function, so I'll assume the problem is inside it.
One test you could do is to copy the main frame and then pass that copy to sendImage() instead, then release it when you no longer need it.