I want to display a 3-channel cv::Mat on a Qt interface, and I usually use these expressions:
QImage qImage = QImage( (uchar*)cvImage.data, cvImage.cols, cvImage.rows, cvImage.cols*3, QImage::Format_RGB888 );
QPixmap pixmap = QPixmap::fromImage(qImage);
myLabel.setPixmap(pixmap);
But the conversion to QPixmap is quite slow. Do you guys know how to avoid the conversion? Maybe the function setImage can help, but I don't know how to use it...
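For reference, a rough, untested sketch of one possible workaround: painting the QImage directly in a custom widget's paintEvent avoids QPixmap::fromImage entirely (the ImageWidget class and setImage method below are made-up names, not an established API):
#include <QWidget>
#include <QPainter>
#include <QPaintEvent>
#include <QImage>

class ImageWidget : public QWidget
{
public:
    // Store the image and schedule a repaint; QImage is implicitly shared, so this is cheap.
    void setImage(const QImage &img) { m_image = img; update(); }

protected:
    void paintEvent(QPaintEvent *) override
    {
        QPainter painter(this);
        painter.drawImage(rect(), m_image); // scales the image to the widget's rectangle
    }

private:
    QImage m_image;
};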
Related
Does anyone know how to convert QVideoFrame to OpenCV cv::Mat in QT6?
I have an event where a video frame arrives from the video camera.
void videoFrameChanged(const QVideoFrame &frame) // frame from QCamera
{
    QImage qImg = frame.toImage(); // <<-- slow
    cv::Mat m = Qimage2Mat_shared(qImg);
    ...
}
I know how to convert a QImage to cv::Mat, but the function used here, frame.toImage(), is very slow.
How can I convert a QVideoFrame directly to cv::Mat?
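For reference, a rough, untested sketch of mapping the frame directly, assuming the camera delivers a 4-byte-per-pixel format such as QVideoFrameFormat::Format_BGRA8888; real code should check frame.pixelFormat() first, and the helper name QVideoFrameToMat is made up:
#include <QVideoFrame>
#include <opencv2/core.hpp>

cv::Mat QVideoFrameToMat(const QVideoFrame &frame)
{
    QVideoFrame f(frame);                 // shallow copy so the frame can be mapped
    if (!f.map(QVideoFrame::ReadOnly))
        return cv::Mat();

    // Wrap the mapped plane without copying, then clone so the returned Mat
    // still owns valid data after unmap().
    cv::Mat wrapped(f.height(), f.width(), CV_8UC4, f.bits(0), f.bytesPerLine(0));
    cv::Mat result = wrapped.clone();
    f.unmap();
    return result;                        // still in the camera's BGRA/ARGB byte order
}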
Like the title says, I am trying to convert a cv::Mat to a QImage. What I am doing is using the equalizeHist() function on the Mat and then converting it to a QImage to display in a widget window in Qt. I know the Mat works and loads the image correctly, because the equalized image will show in a new window with imshow(); however, when converting this Mat to a QImage, I cannot get it to display in the window. I believe the problem is with the conversion from the Mat to QImage, but I can't find the issue. Below is part of my code snippet.
Mat image2= imread(directoryImage1.toStdString(),0);
//cv::cvtColor(image2,image2,COLOR_BGR2GRAY);
Mat histEquImg;
equalizeHist(image2,histEquImg);
imshow("Histogram Equalized Image 2", histEquImg);
//QImage img=QImage((uchar*) histEquImg.data, histEquImg.cols, histEquImg.rows, histEquImg.step, QImage::Format_ARGB32);
imageObject= new QImage((uchar*) histEquImg.data, histEquImg.cols, histEquImg.rows, histEquImg.step, QImage::Format_RGB888);
image = QPixmap::fromImage(*imageObject);
scene=new QGraphicsScene(this); //create a frame for image 2
scene->addPixmap(image); //put image 1 inside of the frame
ui->graphicsView_4->setScene(scene); //put the frame, which contains image 3, to the GUI
ui->graphicsView_4->fitInView(scene->sceneRect(),Qt::KeepAspectRatio); //keep the dimension ratio of image 3
No errors occur and the program doesn't crash.
Thanks in advance.
Your problem is in the conversion from cv::Mat to QImage: using the flag 0 in cv::imread means the image is read as grayscale, yet you are constructing the QImage with the format QImage::Format_RGB888. I use the following function to convert a cv::Mat to a QImage:
static QImage MatToQImage(const cv::Mat& mat)
{
    // 8-bit unsigned, 1 channel
    if (mat.type() == CV_8UC1)
    {
        // Set the color table (used to translate colour indexes to qRgb values)
        QVector<QRgb> colorTable;
        for (int i = 0; i < 256; i++)
            colorTable.push_back(qRgb(i, i, i));
        // Wrap the Mat data (the QImage shares the buffer; no copy is made here)
        const uchar *qImageBuffer = (const uchar*)mat.data;
        // Create a QImage with the same dimensions as the input Mat
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_Indexed8);
        img.setColorTable(colorTable);
        return img;
    }
    // 8-bit unsigned, 3 channels
    if (mat.type() == CV_8UC3)
    {
        // Wrap the Mat data (the QImage shares the buffer; no copy is made here)
        const uchar *qImageBuffer = (const uchar*)mat.data;
        // Create a QImage with the same dimensions as the input Mat
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
        // rgbSwapped() converts OpenCV's BGR order to RGB and returns a deep copy
        return img.rgbSwapped();
    }
    return QImage();
}
After that, I see that you have some misconceptions about how QGraphicsView and QGraphicsScene work when you comment: put the frame, which contains image 3, to the GUI. With ui->graphicsView_4->setScene(scene); you are not setting a frame but a scene, and the scene should only be set once, preferably in the constructor.
// constructor
scene = new QGraphicsScene(this);
ui->graphicsView->setScene(scene);
So when you want to load the image, just use the scene:
cv::Mat image = cv::imread(filename.toStdString(), cv::IMREAD_GRAYSCALE); // CV_LOAD_IMAGE_GRAYSCALE in older OpenCV versions
cv::Mat histEquImg;
equalizeHist(image, histEquImg);
QImage qimage = MatToQImage(histEquImg);
QPixmap pixmap = QPixmap::fromImage(qimage);
scene->addPixmap(pixmap);
ui->graphicsView->fitInView(scene->sceneRect(), Qt::KeepAspectRatio);
The complete example can be found in the following link.
I'm working on a homework assignment for my Digital Image Processing class, and I'm using OpenCV with the Qt Framework.
I've created a class ImageDisplay, which is a subclass of QWidget.
I'm using OpenCV to manipulate a grayscale image and then creating a QImage object from the Mat object. After that I use QPainter::drawImage() to draw the image. But sometimes it shows a distorted, angled image.
I was experimenting and discovered that it has something to do with the image dimensions.
For instance, an image with 320x391 pixels is rendered normally.
But if I change the dimensions to 321x391 (using GIMP), it shows up like this:
This is the code for the paintEvent method:
void ImageDisplay::paintEvent(QPaintEvent *){
    QPainter painter(this);
    Mat tmp;
    /* The mat in the next line is a Mat object that contains the image data */
    cvtColor(mat, tmp, CV_GRAY2BGR);
    QImage image(tmp.data, tmp.cols, tmp.rows, QImage::Format_RGB888);
    painter.drawImage(rect(), image);
}
Does anyone have a clue what the problem is and how to fix it?
Thanks in advance!
I think you need to specify the step of the image (the number of bytes per line) as the fourth parameter when creating the QImage. Without it, QImage assumes each scanline is 32-bit aligned, while the Mat's rows are packed at cols*3 bytes; for a width of 321 that row size (963 bytes) is not a multiple of 4, so every row drifts and the image appears skewed:
QImage image((const uchar*) tmp.data, tmp.cols, tmp.rows, tmp.step, QImage::Format_RGB888);
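As a side note, this constructor only wraps the Mat's buffer without copying, so the Mat must outlive the QImage; if that is not guaranteed, a rough sketch of taking an owning copy:
QImage image((const uchar*) tmp.data, tmp.cols, tmp.rows, tmp.step, QImage::Format_RGB888);
QImage owned = image.copy(); // deep copy that no longer references tmp's buffer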
I'm writing a Qt GUI application in which a live stream from a connected camera is shown in a QGraphicsView. To do this, an OpenCV image is first converted to a QImage and then to a QPixmap, which is added to the QGraphicsScene of the QGraphicsView.
The bandwidth is not a problem, the cameras are connected via ethernet or USB.
I am testing the performance with the Analyze tool built into Visual Studio 2012, and it shows that the conversion to QPixmap is very slow and takes 60% of the computation time (of displaying the image), so I end up with 1 FPS or so. The images are 2560 by 1920 or even bigger. Scaling the cv::Ptr stream_image before converting it to a QImage improves performance significantly, but I need all the detail in the image.
EDIT
Here is some code showing how I do the conversion:
cv::Ptr<IplImage> color_image;
// stream_image is a cv::Ptr<IplImage> and holds the current image from the camera
if (stream_image->nChannels != 3) {
    color_image = cvCreateImage(cvGetSize(stream_image), IPL_DEPTH_8U, 3);
    cv::Mat gr(stream_image);
    cv::Mat col(color_image);
    cv::cvtColor(gr, col, CV_GRAY2BGR);
}
else {
    color_image = stream_image;
}

QImage *tmp = new QImage(color_image->width, color_image->height, QImage::Format_RGB888);
memcpy(tmp->bits(), color_image->imageData, color_image->width * color_image->height * 3);

// update scene
m_pixmap = QPixmap::fromImage(*tmp); // this line takes the most time!!!
m_scene->clear();
QGraphicsPixmapItem *item = m_scene->addPixmap(m_pixmap);
m_scene->setSceneRect(0, 0, m_pixmap.width(), m_pixmap.height());
delete tmp;
m_ui->graphicsView->fitInView(m_scene->sceneRect(), Qt::KeepAspectRatio);
m_ui->graphicsView->update();
EDIT 2
I tested the method from Thomas' answer, but it is as slow as my method.
QPixmap m_pixmap = QPixmap::fromImage(QImage(reinterpret_cast<uchar const*>(color_image->imageData),
color_image->width,
color_image->height,
QImage::Format_RGB888));
EDIT 3
I tried to incorporate Thomas' second suggestion:
color_image = cvCreateImage(cvGetSize(resized_image), IPL_DEPTH_32F, 3);
//[...]
QPixmap m_pixmap = QPixmap::fromImage(QImage(
reinterpret_cast<uchar const*>( color_image->imageData),
color_image->width,
color_image->height,
QImage::Format_RGB32));
But that crashes when the paint event of the widget is called.
Q: Is there a way to display the image stream in a QGraphicsView without converting it to a QPixmap first, or any other fast/performant way? The QGraphicsView is important since I want to add overlays to the image.
I have figured out a solution that works for me, and I also tested a few different methods to compare how they perform:
Method one is performant even in debug mode and takes only 23.7% of the execution time of the drawing procedure (using ANALYZE in VS2012):
color_image = cvCreateImage(cvGetSize(stream_image), IPL_DEPTH_8U, 4);
cv::Mat gr(stream_image);
cv::Mat col(color_image);
cv::cvtColor(gr, col, CV_GRAY2RGBA,4);
QPixmap m_pixmap = QPixmap::fromImage(QImage(reinterpret_cast<uchar const*>( color_image->imageData),
color_image->width,
color_image->height,
QImage::Format_ARGB32));
Method two is still performant in debug mode, taking 42.1% of the execution time, when the following format is used for the QImage passed to QPixmap::fromImage instead:
QImage::Format_RGBA8888
Method three is the one I showed in my question, and it is very slow in debug builds, being responsible for 68.3% of the drawing workload.
However, when I compile in release mode, all three methods are seemingly equally performant.
This is what I usually do. Use one of the constructors for QImage that uses an existing buffer and then use QPixmap::fromImage for the rest. The format of the buffer should be compatible with the display, such as QImage::Format_RGB32. In this example a vector serves as the storage for the image.
std::vector<QRgb> image( 2560 * 1920 );
QPixmap pixmap = QPixmap::fromImage( QImage(
reinterpret_cast<uchar const*>( image.data() ),
2560,
1920,
QImage::Format_RGB32 ) );
Note the alignment constraint. If the data is not 32-bit aligned, you can use one of the constructors that takes a bytesPerLine argument.
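For illustration, a rough sketch of that overload with a packed (non-padded) RGB888 buffer; the dimensions here are made up:
#include <QImage>
#include <vector>

// Rows are packed at width*3 bytes; for width 321 that is 963 bytes, not a
// multiple of 4, so the stride must be passed explicitly.
const int width = 321, height = 240;
const int bytesPerLine = width * 3;
std::vector<uchar> buffer(static_cast<size_t>(bytesPerLine) * height);
QImage img(buffer.data(), width, height, bytesPerLine, QImage::Format_RGB888);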
Edit:
If your image is 32-bit, then you can write:
QPixmap pixmap = QPixmap::fromImage( QImage(
reinterpret_cast<uchar const*>( color_image->imageData ),
color_image->width,
color_image->height,
QImage::Format_RGB32 ) );
I'm trying to convert a local webcam stream from cv::Mat to QImage, but the output is weird. I've tried a bunch of things and searched for hours; I'm officially stuck.
Here is the code snippet in question:
void AppName::SlotFrameReady(cv::Mat image, qint64 captureTime, qint64 processTime)
{
    // cv::Mat imageholder;
    // cv::cvtColor(image, imageholder, CV_BGRA2RGBA);
    // QImage img((const unsigned char*)(image.data), image.cols, image.rows, QImage::Format_Grayscale8);
    // QImage img((const unsigned char*)(imageholder.data), imageholder.cols, imageholder.rows, QImage::Format_RGB32);
    QImage img((const unsigned char*)(image.data), image.cols, image.rows, image.step, QImage::Format_RGB888);
    m_VideoView->Update(&img);
}
This is what I've tried: adding image.step, trying every QImage format, and trying img.invertPixels() with the InvertRgb/InvertRgba modes.
I've also tried creating a temporary image to run cvtColor and convert (tried CV_BGRA2RGB and BGRA2RGBA) and this gives the same result.
type() output is 24 which, if I am correct, is CV_8UC4.
If I use any of the above, I get the following (although some formats will show incorrect color instead of just grayscale; this is with RGB8888):
http://i.imgur.com/79k3q8U.png
If I output in grayscale, everything works as it should:
(link removed because my reputation isn't high enough)
I'm on Mac 10.11 with Qt Creator 5 and OpenCV 3.1, if that makes a difference. Thanks!
Edit to clarify:
I have tried Ypnos' solution here, but that makes the output a blank gray screen. The only other options I've found are variations of what I've explored above.
The one thing I haven't tried is writing the Mat to a file and reading it into a QImage. My thinking is that this is very inelegant and would be too slow for my needs.
Another thing to note that I stupidly forgot to include: the video view's Update function transforms the QImage into a QPixmap for display. Could this be where the error is?
Edit again: I got Ypnos' solution working; it was a stupid error on my part (using Mat3b/Vec3b when it is a 4-channel image). However, the output is still a mess.
Here is the updated code:
void AppName::SlotFrameReady(const cv::Mat4b &image, qint64 captureTime, qint64 processTime)
{
    QImage dest(image.cols, image.rows, QImage::Format_RGBA8888);
    for (int y = 0; y < image.rows; ++y) {
        const cv::Vec4b *srcrow = image[y];
        QRgb *destrow = (QRgb*)dest.scanLine(y);
        for (int x = 0; x < image.cols; ++x) {
            destrow[x] = qRgba(srcrow[x][2], srcrow[x][1], srcrow[x][0], 255);
        }
    }
    m_VideoView->Update(&dest);
}
And here is the relevant section of VideoView where it is converted to a QPixmap and pushed to the display:
QPixmap bitmap = QPixmap::fromImage(*image).transformed(transform, Qt::SmoothTransformation);
setPixmap(bitmap);
And the new, but still messed up, output:
http://i.imgur.com/1jlmfRQ.png
I'm using the FaceTime camera built into my MacBook Pro, as well as two other USB cams I've tried (a Logitech C270 and a no-name Chinese garbage cam).
Any ideas?