cv::Mat to QImage conversion results in odd display - c++

I'm trying to convert a local webcam stream from cv::Mat to QImage, but the output is weird. I've tried a bunch of things and searched for hours; I'm officially stuck.
Here is the code snippet in question:
void AppName::SlotFrameReady(cv::Mat image, qint64 captureTime, qint64 processTime)
{
    // Earlier attempts, kept for reference:
    // cv::Mat imageholder;
    // cv::cvtColor(image, imageholder, CV_BGRA2RGBA);
    // QImage img((const unsigned char*)(image.data), image.cols, image.rows, QImage::Format_Grayscale8);
    // QImage img((const unsigned char*)(imageholder.data), imageholder.cols, imageholder.rows, QImage::Format_RGB32);

    // Current attempt: wrap the Mat's data directly, passing the row stride
    QImage img((const unsigned char*)(image.data), image.cols, image.rows, image.step, QImage::Format_RGB888);
    m_VideoView->Update(&img);
}
This is what I've tried: adding image.step, every QImage format, and img.invertPixels() with both the InvertRgb and InvertRgba modes.
I've also tried creating a temporary image and converting it with cvtColor (both CV_BGRA2RGB and CV_BGRA2RGBA); this gives the same result.
image.type() returns 24 which, if I am correct, is CV_8UC4.
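For reference, the layout can be double-checked from the Mat itself; a small sketch (the expected values in the comments are assumptions based on a BGRA webcam frame):

#include <iostream>
// ... inside SlotFrameReady:
std::cout << "channels: " << image.channels()      // expect 4 for CV_8UC4
          << "  step: "   << (size_t)image.step    // bytes per row, including any padding
          << std::endl;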
Whichever of the above I use, I get the following (although some formats will show incorrect color instead of just grayscale; this is with Format_RGBA8888):
http://i.imgur.com/79k3q8U.png
If I output in grayscale, everything works as it should:
(image link removed because I don't have enough reputation)
I'm on macOS 10.11 with Qt Creator 5 and OpenCV 3.1, if that makes a difference. Thanks!
edit to clarify:
I have tried Ypnos' solution here but that makes the output a blank gray screen. The only other options I've found are variations of what I've explored above.
The one thing I haven't tried is writing the mat to a file and reading it into a qimage. My thinking is that this is very inelegant and will be too slow for my needs.
Another thing to note that I stupidly forgot to include: the video view's Update function transforms the QImage into a QPixmap for display. Could this be where the error is?
Edit again: I got Ypnos' solution working; it was a stupid error on my part (using Mat3b/Vec3b when it is a 4-channel image). However, the output is still a mess.
Here is the updated code:
void AppName::SlotFrameReady(const cv::Mat4b &image, qint64 captureTime, qint64 processTime)
{
    QImage dest(image.cols, image.rows, QImage::Format_RGBA8888);
    for (int y = 0; y < image.rows; ++y) {
        const cv::Vec4b *srcrow = image[y];
        QRgb *destrow = (QRgb*)dest.scanLine(y);
        for (int x = 0; x < image.cols; ++x) {
            destrow[x] = qRgba(srcrow[x][2], srcrow[x][1], srcrow[x][0], 255);
        }
    }
    m_VideoView->Update(&dest);
}
And the relevant section of VideoView where it is converted to a QPixmap and pushed to the display:
QPixmap bitmap = QPixmap::fromImage(*image).transformed(transform, Qt::SmoothTransformation);
setPixmap(bitmap);
AND the new, but still messed up, output:
http://i.imgur.com/1jlmfRQ.png
This happens with the FaceTime camera built into my MacBook Pro as well as with 2 other USB cams I've tried (a Logitech C270 and a no-name Chinese garbage cam).
Any ideas?

Related

OpenCV QT, Displaying Frames of a Video (Not using a While Loop)

I have a simple personal project which uses the Qt and OpenCV frameworks to display a video in a QLabel. I know how to do the conversion into QImage and how to set the pixmap.
However, the video runs too fast inside the while loop, and when I checked, the FPS is either 29 or 30 no matter which video I load.
To counter this, I have also implemented a QTimer to start when the video is loaded.
I am not sure how to use it to display the frames at the appropriate frame rate.
Any idea how I can implement this?
I have done Mat to QImage conversion in an earlier project.
static QImage Mat2QImage(const cv::Mat3b &src) {
    QImage dest(src.cols, src.rows, QImage::Format_ARGB32);
    for (int y = 0; y < src.rows; ++y) {
        const cv::Vec3b *srcrow = src[y];
        QRgb *destrow = (QRgb*)dest.scanLine(y);
        for (int x = 0; x < src.cols; ++x) {
            // OpenCV stores pixels as BGR, so swap to RGB and set alpha to 255
            destrow[x] = qRgba(srcrow[x][2], srcrow[x][1], srcrow[x][0], 255);
        }
    }
    return dest;
}
Usage might look like this:
void foo::timeout() // a slot connected to QTimer's timeout signal
{
    // I haven't tested this code, but it should work
    Mat frame;
    m_cap >> frame;
    QImage img = Mat2QImage(frame);
    QPixmap pixmap = QPixmap::fromImage(img);
    ui->streamDisplay->setPixmap(pixmap);
}
As far as I remember, the QImage format should be ARGB32. It worked smoothly at 30 fps.
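For completeness, here is a minimal sketch of how the timer could be wired up to get the right frame rate; the member names m_cap and m_timer and the openVideo function are assumptions, not part of the original project:

// Hypothetical setup; m_cap is a cv::VideoCapture and m_timer a QTimer* created elsewhere
void foo::openVideo(const QString &path)
{
    m_cap.open(path.toStdString());

    // Derive the timer interval from the video's own frame rate
    double fps = m_cap.get(cv::CAP_PROP_FPS);
    if (fps <= 0.0)
        fps = 30.0; // fall back to a sane default if the container reports nothing

    connect(m_timer, SIGNAL(timeout()), this, SLOT(timeout()));
    m_timer->start(static_cast<int>(1000.0 / fps)); // one timeout() per frame
}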
I've heard that the most performant solution is to use QOpenGLWidget, but I don't know how to implement the same functionality there. Maybe you can take a look.
My older repo links:
display-code-cpp
image-conversion-cpp
There is nothing wrong with using a while loop; you just need to calculate the frame delay correctly and then sleep on each iteration, like this:
int FPS = static_cast<int>(capture.get(cv::CAP_PROP_FPS));
uint delayTime = static_cast<uint>(1000 / FPS);

while (capture.read(frame))
{
    // process the frame
    ...
    // now sleep (milliseconds)
    QThread::msleep(delayTime);
}
Now the video will be displayed at a normal rate.

Image is multiplied three times in OpenCV, what causes this?

I have one grayscale image, which is just the R channel of a photo. Now I'm trying to write that R channel into a new image, which is an RGB image. Ideally, the new image would look just like the old image, but red.
What happens, though, is that in the new image the old image appears three times, squished next to each other.
Here you can see the gray scale image and the output image.
Here is my code, I think it's pretty straightforward:
Mat img_in = imread("in.png", CV_LOAD_IMAGE_GRAYSCALE);
Mat img_out = Mat::zeros(img_in.size(), CV_8UC3);

for (int i = 0; i < img_in.rows; i++)
{
    for (int j = 0; j < img_in.cols; j++)
    {
        img_out.at<Vec3b>(i,j)[2] = img_in.at<Vec3b>(i,j)[2];
    }
}

imwrite("test_img_in.png", img_in);
imwrite("test_img_out.png", img_out);
At first I thought it was some kind of indices mixup, but I've tried a lot of combinations, and it always multiplies the output image three times horizontally, never vertically.
Now my thought is that it comes from some OpenCV specification, like the CV_8UC3 type (I've tried others too), which I've chosen because I think it supports RGB images. Unfortunately, I don't know too much about OpenCV itself, which is why I'm seeking help here.
PS: This is part of a bigger program which generates a color image from three grayscale channel images, but I'm currently stuck on combining the aligned grayscale images because of this. The code I posted is isolated from the rest of the program and behaves like this on its own.
My OpenCV version is 2.4.11.
The problem is here:
img_out.at<Vec3b>(i,j)[2] = img_in.at<Vec3b>(i,j)[2];
As you said, the input image is grayscale, so it has only one byte per pixel; reading it with at<Vec3b> packs three consecutive gray pixels into each access, which is why the output appears three times, squished horizontally. Just use:
img_out.at<Vec3b>(i,j)[2] = img_in.at<unsigned char>(i,j);
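For reference, the corrected loop in the context of the question's code might look like this (a sketch assembled from the question and the fix above, not the answerer's full listing):

Mat img_in = imread("in.png", CV_LOAD_IMAGE_GRAYSCALE); // 1 channel, 8 bits per pixel
Mat img_out = Mat::zeros(img_in.size(), CV_8UC3);       // 3 channels, BGR order

for (int i = 0; i < img_in.rows; i++)
{
    for (int j = 0; j < img_in.cols; j++)
    {
        // read the single gray byte and write it into the red channel (index 2 in BGR)
        img_out.at<Vec3b>(i,j)[2] = img_in.at<unsigned char>(i,j);
    }
}

imwrite("test_img_out.png", img_out);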
You will get the same result by loading your image as a 3-channel image and subtracting Scalar(255,255,0): the saturating subtraction zeroes the blue and green channels and leaves only red.
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char **argv)
{
    Mat src = imread(argv[1]);
    imshow("src", src);

    src -= Scalar(255,255,0);
    imshow("Red channel", src);

    waitKey();
    return 0;
}

bad quality, when rendering images from camera in qt4

My code:
camera = new RaspiCam_Cv(); // Raspberry Pi camera library
camera->set(CV_CAP_PROP_FORMAT, CV_8UC1); // this is monochrome 8 bit format
camera->set(CV_CAP_PROP_FRAME_WIDTH, 960);
camera->set(CV_CAP_PROP_FRAME_HEIGHT, 720);

while (1) {
    camera->grab(); // for linux
    unsigned char* buff = camera->getImageBufferData();
    QPixmap pic = QPixmap::fromImage(QImage(buff, camWidth_, camHeight_, camWidth_ * 1, QImage::Format_Indexed8));
    label->setPixmap(pic);
}
The problem is bad quality! I found out that the problem happens when using QImage; when using an OpenCV Mat, everything is good.
Same thing happens in other Qt based programs, like this one (same bad quality): https://code.google.com/p/qt-opencv-multithreaded/
Here is a pic where the problem is shown. There is a white page in front of the camera, so if all went as it should, you would see a clean gray image.
You are resizing the image using pixmap and label transformations, which are worse than QImage's. This is because a pixmap is optimized for display and not for anything else. The pixmap size should be the same as the label's to avoid any further resizing.
QImage img = QImage(buff, camWidth_, camHeight_, camWidth_ * 1,
                    QImage::Format_Indexed8).scaled(label->size());
label->setPixmap(QPixmap::fromImage(img));
This is not an answer, but it's too hard to share code in the comments.
Can you please test this code and tell me whether the result is good or bad?
#include <iostream>
#include <opencv2/opencv.hpp>
#include <raspicam/raspicam_cv.h>

using namespace std;
using namespace cv;
using namespace raspicam;

int main(int argc, char** argv)
{
    RaspiCam_Cv *camera = new RaspiCam_Cv();
    camera->set(CV_CAP_PROP_FORMAT, CV_8UC1);
    camera->set(CV_CAP_PROP_FRAME_WIDTH, 960);
    camera->set(CV_CAP_PROP_FRAME_HEIGHT, 720);

    namedWindow("Output", CV_WINDOW_AUTOSIZE);

    while (1)
    {
        Mat frame;
        camera->grab();
        //camera->retrieve(frame);
        unsigned char* buff = camera->getImageBufferData();
        frame = cv::Mat(720, 960, CV_8UC1, buff);

        imshow("Output", frame);
        if (waitKey(30) == 27)
        { cout << "Exit" << endl; break; }
    }

    delete camera;
    return 0;
}
Your provided images look like the color depth is only 16 bit.
For comparison, here's the provided captured image:
And here's the same image, converted to a 16-bit color space in IrfanView (without Floyd–Steinberg dithering):
In the comments we found out that the Raspberry Pi's output framebuffer was set to 16 bit, and setting it to 24 bit helped.
But I can't explain why rendering the image on the Pi with OpenCV's cv::imshow produced good-looking images on the monitor/TV...
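For the record, the framebuffer depth is presumably the setting in /boot/config.txt shown below; this is my assumption about what was changed, not something stated explicitly in the thread:

# /boot/config.txt (assumed) -- switch the framebuffer from 16-bit to 24-bit colour
framebuffer_depth=24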

Fastest method to convert IplImage IPL_DEPTH_32S to QImage Format_RGB32

What is the fastest method to convert IplImage IPL_DEPTH_32S to QImage Format_RGB32?
I need to capture pictures from a cam and show them on a form at 30 frames per second. I tried to use the QImage constructor:
QImage qImage((uchar *) image->imageData, image->width, image->height, QImage::Format_RGB32);
but the image was corrupted after this. So, how can I do this fast (I think copying pixel by pixel into a QImage is not a good idea)?
Before I start, OpenCV uses the BGR format by default. Not RGB! So before creating the QImage you need to convert your IplImage to RGB with:
cvtColor(image, image, CV_BGR2RGB);
Then you can:
QImage qImage((uchar*) image->imageData, image->width, image->height, QImage::Format_RGB32);
Note that the constructor above doesn't copy the data as you might be thinking; as the docs state:
The buffer must remain valid throughout the life of the QImage
So if you are having performance issues, they are not caused by the conversion procedure. They are most probably caused by the drawing method you are using (which wasn't shared in the question). To cut a long story short, you should render the QImage to an OpenGL texture and let the video card do all the drawing for you.
Your question is a bit misleading because your primary objective is not to find the fastest conversion method, but one that actually works, since yours doesn't.
Another important thing to keep in mind, when you said:
image was corrupted after this
you must know that this is completely vague, and it doesn't help us at all, because there are a number of causes for this effect, and without the source code it is impossible to tell with certainty what you are doing wrong. Sharing the original and the corrupted image might give some clues as to where the problem is.
That's it for now. Good luck.
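To illustrate that OpenGL suggestion, here is a minimal sketch of a GL-backed display widget. This is one possible approach and an assumption on my part, not the answerer's code; it uses Qt 5's QOpenGLWidget and lets QPainter upload and draw the QImage through OpenGL:

#include <QOpenGLWidget>
#include <QPainter>
#include <QImage>

// Hypothetical widget: QPainter on a QOpenGLWidget renders through OpenGL,
// so the QImage is drawn by the video card rather than by the raster engine.
class VideoGLWidget : public QOpenGLWidget
{
public:
    explicit VideoGLWidget(QWidget *parent = 0) : QOpenGLWidget(parent) {}

    void setFrame(const QImage &frame)
    {
        m_frame = frame.copy(); // deep copy so the source buffer may be reused
        update();               // schedule a repaint
    }

protected:
    void paintGL()
    {
        QPainter painter(this);
        painter.drawImage(rect(), m_frame); // scales the frame to the widget
    }

private:
    QImage m_frame;
};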
IPL_DEPTH_32S is greyscale with a 32-bit pixel, normally used for depth data rather than an actual 'image'. If you are packing a colour image into it, check what the pixel ordering is.
QImage::Format_ARGB32_Premultiplied is the fastest QImage format because it's what the graphics card uses. Note that the byte order in memory is actually BGRA, and the A channel must be at least as large as each colour channel (i.e. 255 if you don't want to use alpha).
There are RGB2BGRA and BGR2RGBA conversion codes for cv::cvtColor().
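A minimal sketch of that idea, assuming a BGR cv::Mat called src (the variable names are mine, not the answerer's):

// Convert BGR to BGRA; on little-endian machines this matches the in-memory
// byte order of QImage::Format_ARGB32, and cvtColor fills the alpha with 255.
cv::Mat bgra;
cv::cvtColor(src, bgra, CV_BGR2BGRA);

QImage view((const uchar*)bgra.data, bgra.cols, bgra.rows,
            (int)bgra.step, QImage::Format_ARGB32);
QImage frame = view.copy(); // deep copy so the QImage outlives the cv::Mat buffer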
Some time ago I found some examples on the internet of how to do that. I improved the examples a bit to copy the images in an easier way. Qt normally works only with char pixels (other image formats are normally called HDR images). I was wondering how you got a video buffer of 32-bit RGB in OpenCV... I have never seen that!
If you have a color image in OpenCV you could use this function to allocate memory:
QImage* allocateqtimagefromcv(IplImage* cvimg)
{
    //if (cvimg->nChannels == 1)
    //{
    //    return new QImage(cvimg->width, cvimg->height, QImage::Format_Indexed8);
    //}
    if (cvimg)
    {
        return new QImage(cvimg->width, cvimg->height, QImage::Format_ARGB32);
    }
    return 0; // nothing to allocate for a null input
}
To just copy the IplImage into the Qt image you could go the easy way and use this function:
void IplImage2QImage(IplImage *iplImg, QImage *qimg)
{
    unsigned char *data = (unsigned char*)iplImg->imageData;
    int channels = iplImg->nChannels;
    int h = iplImg->height;
    int w = iplImg->width;

    for (int y = 0; y < h; y++, data += iplImg->widthStep)
    {
        for (int x = 0; x < w; x++)
        {
            unsigned char r = 0, g = 0, b = 0, a = 255;
            if (channels == 1)
            {
                // greyscale: replicate the single value into r, g and b
                r = data[x * channels];
                g = data[x * channels];
                b = data[x * channels];
            }
            else if (channels >= 3)
            {
                // OpenCV stores BGR(A), so swap to RGB
                r = data[x * channels + 2];
                g = data[x * channels + 1];
                b = data[x * channels];
            }
            if (channels == 4)
            {
                a = data[x * channels + 3];
            }
            qimg->setPixel(x, y, qRgba(r, g, b, a));
        }
    }
}
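Usage of the two helpers above might look like this; a sketch only, where capture is an assumed CvCapture* opened elsewhere (e.g. with cvCaptureFromCAM) and label is an assumed QLabel*:

IplImage *frame = cvQueryFrame(capture);        // grab the next frame (owned by OpenCV)

QImage *qimg = allocateqtimagefromcv(frame);    // allocate a matching ARGB32 QImage
if (frame && qimg)
{
    IplImage2QImage(frame, qimg);               // copy the pixels, swapping BGR to RGB
    label->setPixmap(QPixmap::fromImage(*qimg));
}
delete qimg;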
the following function could be a quicker solution:
static QImage IplImage2QImage(const IplImage *iplImage)
{
    int height = iplImage->height;
    int width = iplImage->width;

    if (iplImage->depth == IPL_DEPTH_8U && iplImage->nChannels == 3)
    {
        const uchar *qImageBuffer = (const uchar*)iplImage->imageData;
        // pass widthStep so padded rows are handled correctly
        QImage img(qImageBuffer, width, height, iplImage->widthStep, QImage::Format_RGB888);
        return img.rgbSwapped(); // rgbSwapped() returns a BGR-to-RGB converted deep copy
    } else {
        //qWarning() << "Image cannot be converted.";
        return QImage();
    }
}
Hope my functions helped. Maybe someone knows better ways of doing that :)

Displaying and sizing a grayscale from a QImage in Qt

I have been able to display an image in a label in Qt using something like the following:
transformPixels(0,0,1,imheight,imwidth,1); // sets unsigned char** imageData

unsigned char* fullCharArray = new unsigned char[imheight * imwidth];
for (int i = 0; i < imheight; i++)
    for (int j = 0; j < imwidth; j++)
        fullCharArray[(i*imwidth)+j] = imageData[i][j];

QImage *qi = new QImage(fullCharArray, imwidth, imheight, QImage::Format_RGB32);
ui->viewLabel->setPixmap(QPixmap::fromImage(*qi, Qt::AutoColor));
So fullCharArray is an array of unsigned chars that have been mapped from the 2D array imageData, in other words, it is imheight * imwidth bytes.
The problem is, it seems like only a portion of my image is showing in the label. The image is very large. I would like to display the full image, scaled down to fit in the label, with the aspect ratio preserved.
Also, that QImage format was the only one I could find that seemed to give a close representation of the image I want to display; is that what I should expect? I am only using one byte per pixel (unsigned char, values from 0 to 255), and it seems like RGB32 doesn't make much sense for that data type, but none of the other formats displayed anything remotely correct.
edit:
Following dan gallagher's advice, I implemented this code:
QImage *qi = new QImage(fullCharArray, imwidth, imheight, QImage::Format_RGB32);

int labelWidth = ui->viewLabel->width();
int labelHeight = ui->viewLabel->height();

QImage small = qi->scaled(labelWidth, labelHeight, Qt::KeepAspectRatio);
ui->viewLabel->setPixmap(QPixmap::fromImage(small, Qt::AutoColor));
But this causes my program to "unexpectedly finish" with code 0
Qt doesn't support grayscale image construction directly. You need to use an 8-bit indexed color image:
QImage *qi = new QImage(fullCharArray, imwidth, imheight, QImage::Format_Indexed8);
// build a 256-entry grayscale palette so index i is displayed as gray level i
for (int i = 0; i < 256; ++i) {
    qi->setColor(i, qRgb(i,i,i));
}
QImage has a scaled member. So you want to change your setPixmap call to something like:
QImage small = qi->scaled(labelWidth, labelHeight, Qt::KeepAspectRatio);
ui->viewLabel->setPixmap(QPixmap::fromImage(small, Qt::AutoColor));
Note that scaled does not modify the original image qi; it returns a new QImage that is a scaled copy of the original.
Re-Edit:
To convert from 1-byte grayscale to 4-byte RGB grayscale:
QImage *qi = new QImage(imwidth, imheight, QImage::Format_RGB32);
for (int i = 0; i < imheight; i++)
{
    for (int j = 0; j < imwidth; j++)
    {
        // setPixel takes (x, y), so the column index j comes first
        qi->setPixel(j, i, qRgb(imageData[i][j], imageData[i][j], imageData[i][j]));
    }
}
Then scale qi and use the scaled copy as the pixmap for viewLabel.
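Putting the pieces together, a minimal sketch of the whole path might look like the following; this is my combination of the suggestions above, reusing the question's fullCharArray, imwidth, imheight and viewLabel, and is not code from the original answers:

// Wrap the flattened grayscale buffer as an 8-bit indexed image (stride = imwidth)
QImage gray(fullCharArray, imwidth, imheight, imwidth, QImage::Format_Indexed8);

// Grayscale palette: index i maps to gray level i
QVector<QRgb> palette(256);
for (int i = 0; i < 256; ++i)
    palette[i] = qRgb(i, i, i);
gray.setColorTable(palette);

// Scale down to the label while preserving the aspect ratio, then display
QImage small = gray.scaled(ui->viewLabel->width(), ui->viewLabel->height(),
                           Qt::KeepAspectRatio);
ui->viewLabel->setPixmap(QPixmap::fromImage(small));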
I've also faced a similar problem: QImage::scaled returned black images. The quick workaround that worked in my case was to convert the QImage to a QPixmap, scale it, and then convert back, like this:
QImage resultImg = QPixmap::fromImage(image)
.scaled( 400, 400, Qt::KeepAspectRatio )
.toImage();
where "image" is the original image.
I was not aware of the format problem before reading this thread, but indeed, my images are 1-bit black and white.
Regards,
Valentin Heinitz