QGraphicsScene paint event inside QGraphicsProxyWidget - c++

I am streaming from a webcam using OpenCV. Each captured frame (of type cv::Mat) is transported via a custom signal to slots in a node-based QGraphicsScene. Since only one type of node actually needs to display the cv::Mat (the rest just process the data), I have created a viewer widget that converts the cv::Mat to a QImage and then draws it on its surface in paintEvent().
Here is the code for the widget itself:
qcvn_node_viewer.hpp
#ifndef QCVN_NODE_VIEWER_HPP
#define QCVN_NODE_VIEWER_HPP

#include <QWidget>
#include <QImage>
#include <QSize>
#include <QPaintEvent>

#include <opencv2/core.hpp>

class QCVN_Node_Viewer : public QWidget
{
    Q_OBJECT

    QImage output;

public:
    explicit QCVN_Node_Viewer(QWidget *parent = 0);
    ~QCVN_Node_Viewer();

    QSize sizeHint() const;
    QSize minimumSizeHint() const;

    // Source: http://stackoverflow.com/a/17137998/1559401
    static QImage Mat2QImage(cv::Mat const& frame);
    static cv::Mat QImage2Mat(QImage const& src);

protected:
    void paintEvent(QPaintEvent *event);

signals:
    // Nodes connect to sendFrame(cv::Mat) signal
    void sendFrame(cv::Mat);

public slots:
    void receiveFrame(cv::Mat frame);
};

#endif // QCVN_NODE_VIEWER_HPP
qcvn_node_viewer.cpp
#include "qcvn_node_viewer.hpp"
#include <opencv2/imgproc.hpp>
#include <QPainter>
QCVN_Node_Viewer::QCVN_Node_Viewer(QWidget *parent) :
QWidget(parent)
{
qRegisterMetaType<cv::Mat>("cv::Mat");
}
QCVN_Node_Viewer::~QCVN_Node_Viewer()
{
}
QSize QCVN_Node_Viewer::sizeHint() const
{
return output.size();
}
QSize QCVN_Node_Viewer::minimumSizeHint() const
{
return output.size();
}
QImage QCVN_Node_Viewer::Mat2QImage(cv::Mat const& frame)
{
cv::Mat temp;
cvtColor(frame, temp,CV_BGR2RGB);
QImage dest((const uchar *) temp.data, temp.cols, temp.rows, temp.step, QImage::Format_RGB888);
dest.bits(); // enforce deep copy
return dest;
}
cv::Mat QCVN_Node_Viewer::QImage2Mat(QImage const& frame)
{
cv::Mat tmp(frame.height(),frame.width(),CV_8UC3,(uchar*)frame.bits(),frame.bytesPerLine());
cv::Mat result; // deep copy just in case (my lack of knowledge with open cv)
cvtColor(tmp, result,CV_BGR2RGB);
return result;
}
void QCVN_Node_Viewer::paintEvent(QPaintEvent *event)
{
QPainter painter(this);
painter.drawImage(QPoint(0,0), output);
painter.end();
}
void QCVN_Node_Viewer::receiveFrame(cv::Mat frame)
{
if(frame.empty()) return;
// Pass along the received frame
emit sendFrame(frame);
// Generate
// http://stackoverflow.com/questions/17127762/cvmat-to-qimage-and-back
// Applications crashes sometimes! Conversion seems to be incorrect
output = Mat2QImage(frame);
output.bits();
setFixedSize(output.width(), output.height());
repaint();
}
As you can see in the screenshot above, the QWidget is embedded in the QGraphicsScene using a QGraphicsProxyWidget, along with a QGraphicsRectItem that I use for selecting and moving the node around in the scene.
Using QPainter is considered more efficient than using a QLabel and setting its QPixmap to the captured and converted frame. As far as I know, using an OpenGL viewport inside a QGraphicsScene is not an option. It seems that processing the paint events in the scene also handles the local ones inside the QGraphicsItems and QGraphicsProxyWidgets: the frame that the viewer paints on its surface changes only when the scene's paint event handler is called (hovering over some of my nodes, moving them around, etc.).
Is there a way to fix this? I'm about to go for the QLabel+QPixmap option (even though I'm not sure whether it will suffer from the same or a different problem), but first I'd like to make sure there is nothing I can do to keep my current way of doing things.
EDIT: Okay, so I have changed my viewer to use a QLabel instead of the widget's paint event and it works like a charm. The question is still open, though, since I'd like to know how to use local paint events.
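For reference, here is a minimal sketch of what the QLabel-based variant mentioned in the edit might look like, assuming the viewer keeps the same receiveFrame()/sendFrame() interface and owns a hypothetical QLabel* member called label:
void QCVN_Node_Viewer::receiveFrame(cv::Mat frame)
{
    if(frame.empty()) return;

    // Pass along the received frame, exactly as before
    emit sendFrame(frame);

    // Hand the converted frame to the label instead of painting manually;
    // 'label' is a hypothetical QLabel* member created in the constructor
    QImage img = Mat2QImage(frame);
    label->setPixmap(QPixmap::fromImage(img));
    label->setFixedSize(img.size());
}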

Related

Qt - Transform cv::Mat to QImage in worker thread crashes

Introduction
What I want is really simple: I want to start the process of reading a binary file via a button in the main UI thread, let a separate thread handle the processing, including the transformation into a QImage, and return this image back to the main UI thread, where it should be shown in a label.
For this I use Qt's signal/slot mechanism and its threading functionality.
I already have a working single-threaded solution, but the threaded attempt crashes at seemingly arbitrary steps, which I don't understand because the whole process is completely encapsulated and not time critical...
Working single-threaded solution:
TragVisMain is a QMainWindow:
class TragVisMain : public QMainWindow
Pushing the button readBinSingle starts the process:
void TragVisMain::on_readBinSingle_clicked()
{
    // Open binary file
    FILE *imageBinFile = nullptr;
    imageBinFile = fopen("imageFile.bin", "rb");
    if(imageBinFile == NULL) {
        return;
    }

    // Get binary file size
    fseek(imageBinFile, 0, SEEK_END);                        // seek to end of file
    size_t size = static_cast<size_t>(ftell(imageBinFile));  // get current file pointer
    fseek(imageBinFile, 0, SEEK_SET);

    // Read binary file
    void *imageData = malloc(size);
    fread(imageData, 1, size, imageBinFile);

    // Create cv::Mat
    cv::Mat openCvImage(1024, 1280, CV_16UC1, imageData);
    openCvImage.convertTo(openCvImage, CV_8UC1, 0.04); // Convert to 8 bit greyscale

    // Transform to QImage
    QImage qImage(
        openCvImage.data,
        1280,
        1024,
        QImage::Format_Grayscale8
    );

    // Show image in label, 'imageLabel' is a class member
    imageLabel.setPixmap(QPixmap::fromImage(qImage));
    imageLabel.show();
}
This works like a charm, but of course the UI is blocked.
Non-working multi-threaded solution
As you will see, the code is basically the same as above, just moved to another class, DoCameraStuff. Here are the main components for this purpose in the TragVisMain header file:
namespace Ui {
class TragVisMain;
}

class TragVisMain : public QMainWindow
{
    Q_OBJECT

public:
    explicit TragVisMain(QWidget *parent = nullptr);
    ~TragVisMain();

private:
    Ui::TragVisMain *ui;
    DoCameraStuff dcs;
    QLabel imageLabel;
    QThread workerThread;

public slots:
    void setImage(const QImage &img); // Called after image processing
    // I have also tried a plain parameter 'setImage(QImage img)', a non-const reference 'setImage(const QImage &img)' and a pointer 'setImage(QImage *img)'

private slots:
    void on_readBin_clicked(); // Emits 'loadBinaryImage'
    // void on_readBinSingle_clicked();

signals:
    void loadBinaryImage(); // Starts image processing
};
DoCameraStuff is just a QObject:
class DoCameraStuff : public QObject
{
    Q_OBJECT

public:
    explicit DoCameraStuff(QObject *parent = nullptr);

public slots:
    void readBinaryAndShowBinPic();

signals:
    void showQImage(const QImage &image);
};
Moving dcs to the workerThread and connecting signals and slots happens in the constructor of TragVisMain:
TragVisMain::TragVisMain(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::TragVisMain),
    dcs()
{
    ui->setupUi(this);

    dcs.moveToThread(&workerThread);

    // Object should be deletable after the thread finished!
    connect(&workerThread, &QThread::finished, &dcs, &QObject::deleteLater);

    // For starting the image processing
    connect(this, &TragVisMain::loadBinaryImage, &dcs, &DoCameraStuff::readBinaryAndShowBinPic);

    // Showing QImage after image processing is done
    connect(&dcs, &DoCameraStuff::showQImage, this, &TragVisMain::setImage);

    // Start workerThread
    workerThread.start();
}
The image processing is started by pressing the readBin button:
void TragVisMain::on_readBin_clicked()
{
    emit loadBinaryImage(); // slot: readBinaryAndShowBinPic
}
The process happens in DoCameraStuff::readBinaryAndShowBinPic:
void DoCameraStuff::readBinaryAndShowBinPic() {
    // Exact same code from the single-threaded solution:

    // Open binary file
    FILE *imageBinFile = nullptr;
    imageBinFile = fopen("imageFile.bin", "rb");
    if(imageBinFile == NULL) {
        return;
    }

    // Get binary file size
    fseek(imageBinFile, 0, SEEK_END);                        // seek to end of file
    size_t size = static_cast<size_t>(ftell(imageBinFile));  // get current file pointer
    fseek(imageBinFile, 0, SEEK_SET);

    // Read binary file
    void *imageData = malloc(size);
    fread(imageData, 1, size, imageBinFile);

    // Create cv::Mat
    cv::Mat openCvImage(1024, 1280, CV_16UC1, imageData);
    openCvImage.convertTo(openCvImage, CV_8UC1, 0.04); // Convert to 8 bit greyscale

    // Transform to QImage
    QImage qImage(
        openCvImage.data,
        1280,
        1024,
        QImage::Format_Grayscale8
    );

    // Send qImage to 'TragVisMain'
    emit showQImage(qImage);
}
Showing the image in TragVisMain::setImage:
void TragVisMain::setImage(const QImage &img)
{
    imageLabel.setPixmap(QPixmap::fromImage(img));
    imageLabel.show();
}
Problem
The multi-threading attempt just crashes the whole application without any message, and at a different step each time. Honestly, I have no idea why. To me, DoCameraStuff is a standard worker class doing time-independent work in a member function without any critical dependencies.
I also checked whether any of the functions used inside DoCameraStuff::readBinaryAndShowBinPic are not thread-safe, but I couldn't find any problems regarding <cstdio>, cv::Mat and QImage under equivalent conditions.
So:
Why does the multi-threading attempt crash?
What changes need to be applied so that the process in the thread does not crash?
I always appreciate your help.
It doesn't have a chance of working, since the QImage wraps transient data that is then deallocated by the cv::Mat. At the minimum you should emit a copy of the image (literally qImage.copy()). Ideally, you'd wrap the cv::Mat lifetime management in the QImage's deleter, so that the copy is not necessary and the matrix gets destroyed along with the image that wraps it. Since format conversions are usually needed between cv::Mat and QImage, it's probably best to perform the conversion between the source cv::Mat and another cv::Mat that wraps the QImage-owned memory. That solution avoids memory reallocations, since the QImage can be retained in the class performing the conversion. It is then also compatible with Qt 4, where QImage doesn't support deleters.
See this answer for a complete example of an OpenCV video capture Qt-based widget viewer, and this answer for a complete example of multi-format conversion from cv::Mat to QImage.
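To illustrate the minimal fix (emitting a deep copy), here is a sketch of how the end of readBinaryAndShowBinPic() might change; only the emit line differs from the question's code:
// Transform to QImage exactly as before; qImage still points into
// memory owned by the local cv::Mat ...
QImage qImage(
    openCvImage.data,
    1280,
    1024,
    QImage::Format_Grayscale8
);

// ... so emit a deep copy that owns its own pixel data and can safely
// outlive this function when it is delivered to the GUI thread.
emit showQImage(qImage.copy());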

Resizing Qlabel image upon resizing window using resizeEvent

I want to use resizeEvent to receive the size of the window and apply it to a QLabel, in order to make the image stretch and adapt to the window's dimensions. By dragging with a left click of the mouse I can resize my window, and the image should take on the new size.
Keep the following considerations in mind:
It is not necessary to store the QPixmap through a pointer, since it is copied by value when passed to the QLabel.
Therefore, if you change the size of the QPixmap p, it will not be reflected in the QLabel, since the QPixmap held by the QLabel is a copy of the one you set at the beginning.
Do not use a layout for this task, since it would create an infinite loop: the layout also intervenes in the resizeEvent of the widget it is installed on, so if you change the size of the QLabel it will change the size of the QWidget, which will again try to change the QLabel, and so on.
It is not advisable to modify the original QPixmap, since rescaling it modifies the pixels and you would get an unexpected effect.
Using the above we obtain the following code:
*.h
#ifndef TESTSIZE_H
#define TESTSIZE_H

#include <QWidget>

class QLabel;

class testsize : public QWidget
{
    Q_OBJECT

public:
    explicit testsize(QWidget *parent = 0);
    ~testsize();

private:
    QLabel *image;
    QPixmap original_px;

protected:
    void resizeEvent(QResizeEvent *event);
};

#endif // TESTSIZE_H
*.cpp
#include "testsize.h"
#include <QLabel>
#include <QResizeEvent>
testsize::testsize(QWidget *parent) :
QWidget(parent)
{
image = new QLabel(this);
original_px = QPixmap(":/wallpaper.jpg");
image->setPixmap(original_px);
resize(640, 480);
}
testsize::~testsize()
{
}
void testsize::resizeEvent(QResizeEvent *event)
{
QPixmap px = original_px.scaled(event->size());
image->setPixmap(px);
image->resize(event->size());
QWidget::resizeEvent(event);
}
You can find the complete example in the following link.
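A minimal usage sketch for the widget above (it assumes the image is available under the :/wallpaper.jpg resource path used in the constructor):
#include <QApplication>
#include "testsize.h"

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    testsize w;   // loads :/wallpaper.jpg and starts at 640x480
    w.show();     // resizing the window rescales the label's pixmap

    return app.exec();
}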

Creating QIcon from SVG contents in memory

How can we create the QIcon object having SVG icon contents in the memory buffer?
P.S. Initially I wanted to create a QSvgIconEngine, but it is hidden in the plugin layer and I cannot instantiate it explicitly. How can I do that by loading it from the plugin (taking into account that the plugin is loaded)?
After digging around for a while, including into how QIcon itself loads an icon from an SVG file, here's what I learned:
When QIcon is constructed with an SVG file (or any other image type, for that matter), it subsequently calls addFile(), which only uses the file's extension (QFileInfo::suffix() in Qt) to determine the method that converts the image file to an icon.
That method (semantically speaking) is represented by a QIconEngine instance.
The QIconEngine classes for the individual image types are apparently not directly accessible to us (the developers using Qt); there's a plugin system behind them, and it's not available at compile time (not easily, at least).
On the other hand, how does QIcon work? When an icon is requested, QIcon uses the information passed to it to determine which engine to use and creates an instance of that engine. Then, every time the icon needs to be drawn, it asks the engine to draw it at some given size. The size is passed to QIconEngine::pixmap(), which creates a pixmap of the required size, and QIconEngine::paint() is then used to paint on that pixmap.
So, given this information, the solution is simply to write an icon engine that QIcon will use to generate the icon at whatever size is requested. Here's how to do that:
So here's the header file SvgIconEngine.h
#ifndef SVGICONENGINE_H
#define SVGICONENGINE_H

#include <string>

#include <QIconEngine>
#include <QSvgRenderer>

class SVGIconEngine : public QIconEngine {

    QByteArray data;

public:
    explicit SVGIconEngine(const std::string &iconBuffer);
    void paint(QPainter *painter, const QRect &rect, QIcon::Mode mode,
               QIcon::State state) override;
    QIconEngine *clone() const override;
    QPixmap pixmap(const QSize &size, QIcon::Mode mode,
                   QIcon::State state) override;
};

#endif // SVGICONENGINE_H
And here's the implementation, SvgIconEngine.cpp
#include "SvgIconEngine.h"
#include <QPainter>
SVGIconEngine::SVGIconEngine(const std::string &iconBuffer) {
data = QByteArray::fromStdString(iconBuffer);
}
void SVGIconEngine::paint(QPainter *painter, const QRect &rect,
QIcon::Mode mode, QIcon::State state) {
QSvgRenderer renderer(data);
renderer.render(painter, rect);
}
QIconEngine *SVGIconEngine::clone() const { return new SVGIconEngine(*this); }
QPixmap SVGIconEngine::pixmap(const QSize &size, QIcon::Mode mode,
QIcon::State state) {
// This function is necessary to create an EMPTY pixmap. It's called always
// before paint()
QImage img(size, QImage::Format_ARGB32);
img.fill(qRgba(0, 0, 0, 0));
QPixmap pix = QPixmap::fromImage(img, Qt::NoFormatConversion);
{
QPainter painter(&pix);
QRect r(QPoint(0.0, 0.0), size);
this->paint(&painter, r, mode, state);
}
return pix;
}
Notice: You must override clone(), because it's an abstract method, and you must override pixmap(), because without that, you'll not have an empty pixmap to paint the svg on.
To use this, simply do this:
std::string iconSvgData = GetTheSvgPlainData();
QIcon theIcon(new SVGIconEngine(iconSvgData));
//Use the icon!
Notice that QIcon takes ownership of the engine object. It'll destroy it when it's destroyed.
Have fun!
I don't have C++ at hand, but this PyQt snippet should be easy to convert:
QtGui.QIcon(
QtGui.QPixmap.fromImage(
QtGui.QImage.fromData(
b'<svg version="1.1" viewBox="0 0 32 32"'
b' xmlns="http://www.w3.org/2000/svg">'
b'<circle cx="16" cy="16" r="4.54237"/></svg>')))
This code will create transparent 32 by 32 pixels icon with a black circle.
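A rough C++ equivalent of the snippet above might look like the sketch below; it assumes Qt's SVG image-format plugin is available so that QImage::fromData() can decode the SVG content:
#include <QByteArray>
#include <QIcon>
#include <QImage>
#include <QPixmap>

QIcon circleIcon()
{
    const QByteArray svg =
        "<svg version=\"1.1\" viewBox=\"0 0 32 32\""
        " xmlns=\"http://www.w3.org/2000/svg\">"
        "<circle cx=\"16\" cy=\"16\" r=\"4.54237\"/></svg>";

    // QImage::fromData() picks an image format by inspecting the data;
    // with the SVG plugin installed this yields a 32 by 32 transparent
    // image with a black circle, like the PyQt version above.
    return QIcon(QPixmap::fromImage(QImage::fromData(svg)));
}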
How can we create the QIcon object having SVG icon contents in the memory buffer?
All the required functionality for that is provided by the public interface of the QSvgRenderer class. To construct such a renderer we can use either of:
QSvgRenderer(const QByteArray &contents, QObject *parent = Q_NULLPTR);
QSvgRenderer(QXmlStreamReader *contents, QObject *parent = Q_NULLPTR);
or we can just load the content with:
bool QSvgRenderer::load(const QByteArray &contents)
bool QSvgRenderer::load(QXmlStreamReader *contents)
And to create the actual icon:
QIcon svg2Icon(const QByteArray& svgContent, QPainter::CompositionMode mode =
               QPainter::CompositionMode_SourceOver)
{
    return QIcon(svg2Pixmap(svgContent, QSize(128, 128), mode));
}

QPixmap svg2Pixmap(const QByteArray& svgContent,
                   const QSize& size,
                   QPainter::CompositionMode mode)
{
    QSvgRenderer rr(svgContent);
    QImage image(size.width(), size.height(), QImage::Format_ARGB32);

    QPainter painter(&image);
    painter.setCompositionMode(mode);
    image.fill(Qt::transparent);
    rr.render(&painter);

    return QPixmap::fromImage(image);
}
We can also use other composition modes, say QPainter::RasterOp_NotSourceOrDestination, to invert the icon color.
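For example, a usage sketch of the helpers above (loadSvgContent() is a hypothetical helper returning the SVG markup as a QByteArray):
QByteArray svgContent = loadSvgContent();   // hypothetical source of SVG data

QIcon normalIcon   = svg2Icon(svgContent);
QIcon invertedIcon = svg2Icon(svgContent, QPainter::RasterOp_NotSourceOrDestination);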

The right way to show an image in Qt

I am new to Qt, and almost every tutorial I find says to add the image to a QLabel using setPixmap(). Maybe this is the right way, but it doesn't feel like it, since using a label for an image seems to go beyond the purpose of a label.
Could anyone tell me if there is a "right way", or a special class for this, or whether the "label way" is the right one and not just the simple one?
Using QLabel is the usual way to display an image in a QtWidgets-based UI. That might indeed feel a bit awkward, as QLabel's API is mostly concerned with text rendering. However, it does the job, and there is no other class dedicated solely to painting images. One could write one's own class (taking a QPixmap, reimplementing paintEvent() and sizeHint()), but that would only make sense to me if functionality is needed that QLabel lacks.
There are of course many other ways to paint an image, depending on the context, like images on buttons (QToolButton, QPushButton, ...), images in graphics scenes (QGraphicsScene/View) etc., but they all serve either more specialized or more complex use cases.
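For completeness, a minimal sketch of the QLabel approach described above (the image path is a placeholder):
#include <QApplication>
#include <QLabel>
#include <QPixmap>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QLabel label;
    label.setPixmap(QPixmap("image.png"));   // placeholder path
    label.show();

    return app.exec();
}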
The easiest way is to use a QLabel. In the ImageViewer example (http://qt-project.org/doc/qt-4.8/widgets-imageviewer.html) they use a QLabel.
Another way:
QGraphicsScene scene;                              // the scene the view displays
QGraphicsView view(&scene);
QGraphicsPixmapItem item(QPixmap("c:\\test.png"));
scene.addItem(&item);
view.show();
Here is a simple class that isn't label based. I suppose it depends on what you personally feel is the proper way to do it and what you need to do with it. I prefer implementing my own class so you can add on to it later (maybe you want to manipulate the image).
imagewidget.h
#ifndef IMAGEWIDGET_H
#define IMAGEWIDGET_H

#include <QPainter>
#include <QImage>
#include <QWidget>

QT_BEGIN_NAMESPACE
class QPainter;
class QImage;
QT_END_NAMESPACE

class ImageWidget : public QWidget
{
    Q_OBJECT

public:
    ImageWidget(const QString &filename, QWidget* parent = 0);
    ~ImageWidget();

    bool load(const QString &fileName);
    bool save(const QString &fileName);

protected:
    void paintEvent(QPaintEvent* event);

private:
    QImage img;
};

#endif
imagewidget.cpp
#include <QDebug>

#include "imagewidget.h"

ImageWidget::ImageWidget(const QString &filename, QWidget* parent) : QWidget(parent)
{
    img.load(filename);
    setMinimumWidth(img.width());
    setMinimumHeight(img.height());
    setMaximumWidth(img.width());
    setMaximumHeight(img.height());
    this->show();
}

bool ImageWidget::load(const QString &fileName)
{
    img.load(fileName);
    return true;
}

bool ImageWidget::save(const QString &fileName)
{
    img.save(fileName, "PNG");
    return true;
}

ImageWidget::~ImageWidget()
{
}

void ImageWidget::paintEvent(QPaintEvent*)
{
    QPainter painter(this);
    painter.setViewport(0, 0, width(), height());
    painter.setWindow(0, 0, width(), height());
    painter.drawImage(0, 0, img);
}
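A short usage sketch for the class above (the file name is a placeholder):
#include <QApplication>
#include "imagewidget.h"

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // The constructor loads the image, fixes the widget's size to it
    // and already calls show().
    ImageWidget viewer("test.png");   // placeholder file name

    return app.exec();
}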

Rendering QImage on QGLWidget of QML plugin

I'm trying to write a QML plugin that reads frames from a video (using a custom widget for that task, NOT QtMultimedia/Phonon); each frame is converted to a QImage in RGB888 format and then displayed on a QGLWidget (for performance reasons). Right now nothing is drawn and the screen stays white all the time.
It's important to state that I already have all of this working without QGLWidget, so I know the issue is setting up and drawing on QGLWidget.
The plugin is being registered with:
qmlRegisterType<Video>(uri,1,0,"Video");
so Video is the main class of the plugin. In its constructor we have:
Video::Video(QDeclarativeItem* parent)
    : QDeclarativeItem(parent), d_ptr(new VideoPrivate(this))
{
    setFlag(QGraphicsItem::ItemHasNoContents, false);
    Q_D(Video);
    QDeclarativeView* view = new QDeclarativeView;
    view->setViewport(&d->canvas()); // canvas() returns a reference to my custom OpenGL Widget
}
Before I jump to the canvas object, let me say that I overrode Video::paint() so that it calls canvas.paint(), passing the QImage as a parameter. I don't know if this is the right way to do it, so I would like some advice on this:
void Video::paint(QPainter* painter, const QStyleOptionGraphicsItem* option, QWidget* widget)
{
    Q_UNUSED(painter);
    Q_UNUSED(widget);
    Q_UNUSED(option);
    Q_D(Video);
    // I know for sure at this point "d->image()" is valid, but I'm hiding the code for clarity
    d->canvas().paint(painter, option, d->image());
}
The canvas object is declared as GLWidget canvas; and the header of this class is defined as:
class GLWidget : public QGLWidget
{
    Q_OBJECT

public:
    explicit GLWidget(QWidget* parent = NULL);
    ~GLWidget();

    void paint(QPainter* painter, const QStyleOptionGraphicsItem* option, QImage* image);
};
Seems pretty simple. Now, the implementation of QGLWidget is the following:
GLWidget::GLWidget(QWidget* parent)
    : QGLWidget(QGLFormat(QGL::SampleBuffers), parent)
{
    // Should I do something here?
    // Maybe setAutoFillBackground(false); ???
}

GLWidget::~GLWidget()
{
}
And finally:
void GLWidget::paint(QPainter* painter, const QStyleOptionGraphicsItem* option, QImage* image)
{
    // I ignore painter because it comes from Video, so I create a new one:
    QPainter gl_painter(this);

    // Perform drawing as Qt::KeepAspectRatio
    gl_painter.fillRect(QRectF(QPoint(0, 0), QSize(this->width(), this->height())), Qt::black);
    QImage scaled_img = image->scaled(QSize(this->width(), this->height()), _ar, Qt::FastTransformation);
    gl_painter.drawImage(qRound(this->width()/2) - qRound(scaled_img.size().width()/2),
                         qRound(this->height()/2) - qRound(scaled_img.size().height()/2),
                         scaled_img);
}
What am I missing?
I originally asked this question on Qt Forum but got no replies.
Solved. The problem was that I was trying to create a new GL context within my plugin when I should have been retrieving the GL context from the application that loaded it.
This code was very helpful for understanding how to accomplish that.
By the way, I discovered that the content was in fact being drawn inside view. It's just that I needed to call view->show(), but that created another window, which was not what I was looking for. The link I shared above has the answer.
I think that you have to draw on your GLWidget using the OpenGL functions. One possible way is to override the paintGL() method in GLWidget and then draw your image with glDrawPixels():
glClear(GL_COLOR_BUFFER_BIT);
glDrawPixels(buffer.width(), buffer.height(), GL_RGBA, GL_UNSIGNED_BYTE, buffer.bits());
Where buffer is a QImage object that needs to be converted using the static method QGLWidget::convertToGLFormat().
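Put together, a sketch of that paintGL() override might look like this (it assumes 'source' is a QImage member holding the frame to display):
void GLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);

    // convertToGLFormat() flips the image vertically and converts it to the
    // RGBA byte order that glDrawPixels() expects.
    QImage buffer = QGLWidget::convertToGLFormat(source);   // 'source' is a hypothetical QImage member
    glDrawPixels(buffer.width(), buffer.height(), GL_RGBA, GL_UNSIGNED_BYTE, buffer.bits());
}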
Look at this source for reference:
https://github.com/nayyden/ZVector/blob/master/src/GLDebugBufferWidget.cpp