I have a simple personal project which uses the Qt and OpenCV frameworks to display a video in a QLabel. I know how to do the conversion to QImage and set the pixmap.
However, the video runs too fast inside the while loop, and when I checked, the FPS is either 29 or 30 no matter which video I load.
To counter this, I have also implemented a QTimer that starts when the video is loaded.
I am not sure how to use it to display the frames at the appropriate frame rate, which I need to set.
Any idea how I can implement this?
I have done a Mat to QImage conversion in a project of mine before.
static QImage Mat2QImage(const cv::Mat3b &src) {
    // Allocate the destination image with the same dimensions as the source.
    QImage dest(src.cols, src.rows, QImage::Format_ARGB32);
    for (int y = 0; y < src.rows; ++y) {
        const cv::Vec3b *srcrow = src[y];
        QRgb *destrow = (QRgb*)dest.scanLine(y);
        for (int x = 0; x < src.cols; ++x) {
            // OpenCV stores pixels as BGR, so swap to RGB and use full opacity.
            destrow[x] = qRgba(srcrow[x][2], srcrow[x][1], srcrow[x][0], 255);
        }
    }
    return dest;
}
Usage might look like this:
void foo::timeout() // A slot which the QTimer's timeout signal is connected to
{
    // I haven't tested this code, but it should work.
    Mat frame;
    m_cap >> frame;                       // grab the next frame from the capture device
    QImage img = Mat2QImage(frame);
    QPixmap pixmap = QPixmap::fromImage(img);
    ui->streamDisplay->setPixmap(pixmap);
}
As far as I remember, the resulting QImage should be ARGB32. It worked smoothly at 30 fps.
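To drive that slot at the video's native frame rate, the timer interval can be derived from the capture's reported FPS. This is only a minimal sketch, assuming your class has a cv::VideoCapture member m_cap and a QTimer* member m_timer (those names are assumptions, not from the original project):

void foo::startVideo(const QString &path) // hypothetical helper, not from the original code
{
    m_cap.open(path.toStdString());
    double fps = m_cap.get(cv::CAP_PROP_FPS);
    if (fps <= 0.0)
        fps = 30.0;                                   // assumed fallback if the file reports no FPS
    connect(m_timer, SIGNAL(timeout()), this, SLOT(timeout()));
    m_timer->start(static_cast<int>(1000.0 / fps));   // fire roughly once per frame
}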
I have heard that the most performant solution is to use QOpenGLWidget, but I don't know how to implement the same functionality with it. Maybe you can take a look at my older repo:
display-code-cpp
image-conversion-cpp
There is nothing wrong with using a while loop; you just need to calculate the frame rate correctly and then sleep on each iteration. Just do it like this:
int FPS = static_cast<int>(capture.get(cv::CAP_PROP_FPS));
uint delayTime = static_cast<uint>(1000 / FPS);
while (capture.read(frame))
{
    // process the frame
    ...
    // now sleep (milliseconds)
    QThread::msleep(delayTime);
}
Now the video will be displayed at a normal rate.
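One assumption worth stating: QThread::msleep() blocks the thread it runs in, so if this loop lives in the GUI thread you may also want to let Qt process pending events on each iteration. A sketch of that idea, not part of the original answer (needs <QCoreApplication> and <QThread>):

while (capture.read(frame))
{
    // process / display the frame as before
    QCoreApplication::processEvents();   // keep the UI responsive while looping
    QThread::msleep(delayTime);
}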
Related
I have been trying to use absdiff to find the motion in an image, but unfortunately it fails; I am new to OpenCV. The code is supposed to use absdiff to determine whether any motion is happening around or not, but the output is pitch black for diff1, diff2 and motion. Meanwhile, next_mframe, current_mframe and prev_mframe show grayscale images, while result shows a clear and normal image. I used this as my reference: http://manmade2.com/simple-home-surveillance-with-opencv-c-and-raspberry-pi/. I think all the image buffers are loaded with the same frame and then compared, which would explain why the result is pitch black. Is there any other method I am missing here? I am using RTSP to pass the camera's raw image to ROS.
void imageCallback(const sensor_msgs::ImageConstPtr& msg_ptr)
{
    CvPoint center;
    int radius, posX, posY;
    cv_bridge::CvImagePtr cv_image; // to parse image_raw from RTSP
    try
    {
        cv_image = cv_bridge::toCvCopy(msg_ptr, enc::BGR8);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }
    frame = new IplImage(cv_image->image); // frame now holds the raw image
    frame1 = new IplImage(cv_image->image);
    frame2 = new IplImage(cv_image->image);
    frame3 = new IplImage(cv_image->image);
    matriximage = cvarrToMat(frame);
    cvtColor(matriximage, matriximage, CV_RGB2GRAY); // grayscale
    prev_mframe = cvarrToMat(frame1);
    cvtColor(prev_mframe, prev_mframe, CV_RGB2GRAY); // grayscale
    current_mframe = cvarrToMat(frame2);
    cvtColor(current_mframe, current_mframe, CV_RGB2GRAY); // grayscale
    next_mframe = cvarrToMat(frame3);
    cvtColor(next_mframe, next_mframe, CV_RGB2GRAY); // grayscale
    // maximum deviation of the image; the higher the value, the more motion is allowed
    int max_deviation = 20;
    result = matriximage;
    // relocate the images in the right order
    prev_mframe = current_mframe;
    current_mframe = next_mframe;
    next_mframe = matriximage;
    //motion = diffImg(prev_mframe, current_mframe, next_mframe);
    absdiff(prev_mframe, next_mframe, diff1); // this should show a black-and-white image
    absdiff(next_mframe, current_mframe, diff2);
    bitwise_and(diff1, diff2, motion);
    threshold(motion, motion, 35, 255, CV_THRESH_BINARY);
    erode(motion, motion, kernel_ero);
    imshow("Motion Detection", result);
    imshow("diff1", diff1);   // I tried to output the image but it's all black
    imshow("diff2", diff2);   // same here, I tried to output the image but it's all black
    imshow("diff1", motion);
    imshow("nextframe", next_mframe);
    imshow("motion", motion);
    char c = cvWaitKey(3);
}
I changed the cv_bridge method to VideoCapture, and it seems to function well; cv_bridge just cannot save the image, even though I changed the IplImage to the Mat format. Maybe there are other ways, but for now I will go with this method first.
VideoCapture cap(0);

Tracker(void)
{
    // check if the camera opened
    if (!cap.isOpened())
    {
        cout << "cannot open the video cam" << endl;
    }
    cout << "camera is opening" << endl;

    // capture 3 frames and convert them to grayscale
    cap >> prev_mframe;
    cvtColor(prev_mframe, prev_mframe, CV_RGB2GRAY);
    cap >> current_mframe;
    cvtColor(current_mframe, current_mframe, CV_RGB2GRAY);
    cap >> next_mframe;
    cvtColor(next_mframe, next_mframe, CV_RGB2GRAY);

    // relocate the images in the right order
    current_mframe.copyTo(prev_mframe);
    next_mframe.copyTo(current_mframe);
    matriximage.copyTo(next_mframe);

    motion = diffImg(prev_mframe, current_mframe, next_mframe);
}
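For reference, if the black output in the original cv_bridge callback was caused by all three buffers referring to the same frame, keeping explicit deep copies might also have worked there. This is only a sketch of that idea, not tested against the ROS setup above:

// Inside imageCallback(), after cv_bridge::toCvCopy() succeeds:
cv::Mat gray;
cv::cvtColor(cv_image->image, gray, CV_RGB2GRAY);   // current frame in grayscale
// Shift the three-frame window, cloning so each Mat owns its own pixels.
prev_mframe    = current_mframe.clone();
current_mframe = next_mframe.clone();
next_mframe    = gray.clone();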
I'm currently building a Qt application that uses a camera.
In this application the user captures images, which are then automatically saved in a specific folder. Everything works great.
Now, when the "library" button is clicked, I want to read all the images (JPEG files) and display them one by one in a QLabel.
I couldn't find any tutorials for this; I only found tutorials that use the argv argument, which is no good for me because in my application the user may capture images and then display them in the same run.
How can I read the list of files and display them?
Thank you very much :)
If you have a single QLabel then you have to join the images together into a single one. I find it easier to display a list of QLabels:
auto layout = new QVBoxLayout();
Q_FOREACH (auto imageName, listOfImages) {
    QPixmap pixmap(dirPath + "/" + imageName);
    if (!pixmap.isNull()) {
        auto label = new QLabel();
        label->setPixmap(pixmap);
        layout->addWidget(label);
    }
}
a_widget_where_to_show_images->setLayout(layout);
The last line will depend on where you want to place the labels. I suggest some widget with a scroll bar.
Now, you want to read all the images from a directory (the listOfImages variable above). If you don't have it:
const auto listOfImages = QDir(dirPath).entryList(QStringList("*.jpg"), QDir::Files);
You may have layout problems if your images are too big. In that case you should scale them if they are bigger than a given size. Take a look at QPixmap::scaled or QPixmap::scaledToWidth. Also, if image quality is important, specify Qt::SmoothTransformation as the transformation mode.
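For example, inside the loop above each pixmap could be scaled before it is put on its label. A small sketch; the 400-pixel maximum width is an arbitrary assumption:

QPixmap pixmap(dirPath + "/" + imageName);
if (!pixmap.isNull()) {
    if (pixmap.width() > 400)   // assumed size limit
        pixmap = pixmap.scaledToWidth(400, Qt::SmoothTransformation);
    auto label = new QLabel();
    label->setPixmap(pixmap);
    layout->addWidget(label);
}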
You can use the OpenCV library to read all the images in a directory.
vector<String> filenames;        // note that we are using OpenCV's own "String" class
String folder = "Deri-45x45/";   // again OpenCV's own "String" class
glob(folder, filenames);         // collects all file names in the folder
for (size_t i = 0; i < filenames.size(); ++i)
{
    Mat img = imread(filenames[i], 0);   // read as grayscale
    // Display img in the QLabel
    QImage imgIn = putImage(img);
    imgIn = imgIn.scaled(ui->label_15->width(), ui->label_15->height(), Qt::IgnoreAspectRatio, Qt::SmoothTransformation);
    ui->label_15->setPixmap(QPixmap::fromImage(imgIn));
}
In order to convert the Mat type to a QImage, we use the putImage function:
QImage putImage(const Mat& mat)
{
    // 8-bit unsigned, 1 channel (grayscale)
    if (mat.type() == CV_8UC1)
    {
        // Set the color table (used to translate colour indexes to qRgb values)
        QVector<QRgb> colorTable;
        for (int i = 0; i < 256; i++)
            colorTable.push_back(qRgb(i, i, i));
        // Wrap the Mat's data buffer (no deep copy is made)
        const uchar *qImageBuffer = (const uchar*)mat.data;
        // Create a QImage with the same dimensions as the input Mat
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_Indexed8);
        img.setColorTable(colorTable);
        return img;
    }
    // 8-bit unsigned, 3 channels (BGR)
    if (mat.type() == CV_8UC3)
    {
        // Wrap the Mat's data buffer (no deep copy is made)
        const uchar *qImageBuffer = (const uchar*)mat.data;
        // Create a QImage with the same dimensions as the input Mat
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
        return img.rgbSwapped();   // rgbSwapped() returns a copy with R and B exchanged
    }
    else
    {
        qDebug() << "ERROR: Mat could not be converted to QImage.";
        return QImage();
    }
}
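One caveat worth noting (my addition, not from the original answer): in the CV_8UC1 branch the returned QImage still references the Mat's pixel buffer, so if the Mat may be destroyed before the image is drawn, make a deep copy first:

QImage owned = putImage(img).copy();   // detach from the Mat's memory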
I'm trying to convert a local webcam stream from cv::Mat to QImage, but the output is weird. I've tried a bunch of things and searched for hours; I'm officially stuck.
Here is the code snippet in question:
void AppName::SlotFrameReady(cv::Mat image, qint64 captureTime, qint64 processTime)
{
// cv::Mat imageholder;
// cv::cvtColor(image, imageholder, CV_BGRA2RGBA);
// QImage img((const unsigned char*)(image.data), image.cols, image.rows, QImage::Format_Grayscale8);
// QImage img((const unsigned char*)(imageholder.data), imageholder.cols, imageholder.rows, QImage::Format_RGB32);
QImage img((const unsigned char*)(image.data), image.cols, image.rows, image.step, QImage::Format_RGB888);
m_VideoView->Update(&img);
}
This is what I've tried: adding image.step, trying every QImage format, and trying img.invertPixels() and img.invertRGB/invertRGBA().
I've also tried creating a temporary image to run cvtColor and convert (tried CV_BGRA2RGB and BGRA2RGBA), and this gives the same result.
The type() output is 24, which, if I am correct, is CV_8UC4.
If I use any of the above I get the following (although some formats will show incorrect color instead of just grayscale; this is with RGB8888):
http://i.imgur.com/79k3q8U.png
If I output in grayscale, everything works as it should:
(link removed because my reputation isn't high enough)
This is on Mac 10.11 with Qt Creator 5 and OpenCV 3.1, if that makes a difference. Thanks!
Edit to clarify:
I have tried Ypnos' solution here, but that makes the output a blank gray screen. The only other options I've found are variations of what I've explored above.
The one thing I haven't tried is writing the Mat to a file and reading it into a QImage. My thinking is that this is very inelegant and will be too slow for my needs.
Another thing to note, which I stupidly forgot to include: the video view's Update function transforms the QImage into a QPixmap for display. Could this be where the error is?
Edit again: I got Ypnos' solution working; it was a stupid error on my part (using Mat3b/Vec3b when it is a 4-channel image). However, the output is still a mess.
Here is the updated code:
void AppName::SlotFrameReady(const cv::Mat4b &image, qint64 captureTime, qint64 processTime)
{
    QImage dest(image.cols, image.rows, QImage::Format_RGBA8888);
    for (int y = 0; y < image.rows; ++y) {
        const cv::Vec4b *srcrow = image[y];
        QRgb *destrow = (QRgb*)dest.scanLine(y);
        for (int x = 0; x < image.cols; ++x) {
            destrow[x] = qRgba(srcrow[x][2], srcrow[x][1], srcrow[x][0], 255);
        }
    }
    m_VideoView->Update(&dest);
}
And the relevant section of VideoView where it is converted to a QPixmap and pushed to the display:
QPixmap bitmap = QPixmap::fromImage(*image).transformed(transform, Qt::SmoothTransformation);
setPixmap(bitmap);
And the new, but still messed up, output:
http://i.imgur.com/1jlmfRQ.png
This happens with the FaceTime camera built into my MacBook Pro as well as with two other USB cams I've tried (a Logitech C270 and a no-name camera).
Any ideas?
What is the fastest method to convert an IplImage with IPL_DEPTH_32S to a QImage with Format_RGB32?
I need to capture pictures from a cam and show them on a form at 30 frames per second. I tried to use the QImage constructor:
QImage qImage((uchar *) image->imageData, image->width, image->height, QImage::Format_RGB32);
but the image was corrupted after this. So, how can I do this fast? (I think putting pixels one by one into the QImage is not a good approach.)
Before I start: OpenCV uses the BGR format by default, not RGB! So before creating the QImage you need to convert your IplImage to RGB with:
cvtColor(image, image, CV_BGR2RGB);
Then you can do:
QImage qImage((uchar*) image->imageData, image->width, image->height, QImage::Format_RGB32);
Note that the constructor above doesn't copy the data as you might be thinking; as the docs state:
The buffer must remain valid throughout the life of the QImage
So if you are having performance issues, they are not caused by the conversion procedure. They are most probably caused by the drawing method you are using (which wasn't shared in the question). To cut a long story short, you should render the QImage to an OpenGL texture and let the video card do all the drawing for you.
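A minimal sketch of one way to do that with Qt 5's QOpenGLWidget (my illustration of the idea, not code from this answer; the class name GLVideoWidget is made up). Painting with QPainter on a QOpenGLWidget uploads the image as a texture and composites it on the GPU:

#include <QOpenGLWidget>
#include <QPainter>

class GLVideoWidget : public QOpenGLWidget
{
public:
    void setFrame(const QImage &frame) { m_frame = frame; update(); }

protected:
    void paintGL() override
    {
        QPainter p(this);                  // renders through the OpenGL paint engine
        if (!m_frame.isNull())
            p.drawImage(rect(), m_frame);  // stretched to the widget, drawn by the GPU
    }

private:
    QImage m_frame;
};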
Your question is a bit misleading, because your primary objective is not to find the fastest conversion method but one that actually works, since yours doesn't.
Another important thing to keep in mind: when you said
image was corrupted after this
you must know that this is completely vague, and it doesn't help us at all, because there are a number of causes for this effect, and without the source code it is impossible to tell with certainty what you are doing wrong. Sharing the original and the corrupted image might give some clues as to where the problem is.
That's it for now. Good luck.
IPL_DEPTH_32S is greyscale with a 32-bit pixel, normally used for depth data rather than an actual 'image'. If you are packing a colour image into it, check what the pixel ordering is.
QImage::Format_ARGB32_Premultiplied is the fastest QImage format because it's what the graphics card uses. Note that the data order is actually BGRA, and the A channel must be at least as large as each colour channel, i.e. 255 if you don't want to use alpha.
There are RGB2BGRA and BGR2RGBA conversion codes for cv::cvtColor().
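A minimal sketch of that combination, assuming the source is an 8-bit, 3-channel BGR cv::Mat called frame (not the 32-bit image from the question):

cv::Mat bgra;
cv::cvtColor(frame, bgra, CV_BGR2BGRA);   // add an alpha channel, keeping BGRA byte order
// On a little-endian machine ARGB32 is stored as B,G,R,A in memory, which matches OpenCV's BGRA.
QImage img(bgra.data, bgra.cols, bgra.rows, static_cast<int>(bgra.step),
           QImage::Format_ARGB32_Premultiplied);
QImage copy = img.copy();                 // deep copy, since 'bgra' owns the pixel buffer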
Some time ago I found some examples on the internet of how to do that. I improved the examples a bit to copy the images in an easier way. Qt normally works only with 8-bit (char) pixels (other image formats are usually called HDR images). I was wondering how you get a video buffer of 32-bit RGB in OpenCV; I have never seen that!
If you have a color image in OpenCV you could use this function to allocate memory:
QImage* allocateqtimagefromcv(IplImage* cvimg)
{
    //if (cvimg->nChannels == 1)
    //{
    //    return new QImage(cvimg->width, cvimg->height, QImage::Format_Indexed8);
    //}
    if (cvimg)
    {
        return new QImage(cvimg->width, cvimg->height, QImage::Format_ARGB32);
    }
    return 0;   // no input image, nothing to allocate
}
To just copy the IplImage into the Qt image you could go the easy way and use this function:
void IplImage2QImage(IplImage *iplImg, QImage *qimg)
{
    uchar *data = (uchar*)iplImg->imageData;
    int channels = iplImg->nChannels;
    int h = iplImg->height;
    int w = iplImg->width;
    for (int y = 0; y < h; y++, data += iplImg->widthStep)
    {
        for (int x = 0; x < w; x++)
        {
            uchar r = 0, g = 0, b = 0, a = 255;   // opaque by default
            if (channels == 1)                    // grayscale: replicate the single value
            {
                r = g = b = data[x * channels];
            }
            else if (channels >= 3)               // OpenCV stores the channels as BGR(A)
            {
                r = data[x * channels + 2];
                g = data[x * channels + 1];
                b = data[x * channels];
                if (channels == 4)
                    a = data[x * channels + 3];
            }
            qimg->setPixel(x, y, qRgba(r, g, b, a));
        }
    }
}
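A hedged usage sketch of the two helpers above (cvimg is an IplImage* you already have; label is an assumed QLabel*):

QImage *qimg = allocateqtimagefromcv(cvimg);
if (qimg) {
    IplImage2QImage(cvimg, qimg);
    label->setPixmap(QPixmap::fromImage(*qimg));   // QPixmap keeps its own copy of the pixels
    delete qimg;
}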
The following function could be a quicker solution:
static QImage IplImage2QImage(const IplImage *iplImage)
{
    int height = iplImage->height;
    int width = iplImage->width;
    if (iplImage->depth == IPL_DEPTH_8U && iplImage->nChannels == 3)
    {
        const uchar *qImageBuffer = (const uchar*)iplImage->imageData;
        // widthStep is passed as bytesPerLine so padded rows are handled correctly
        QImage img(qImageBuffer, width, height, iplImage->widthStep, QImage::Format_RGB888);
        return img.rgbSwapped();   // swaps BGR to RGB and makes a deep copy
    }
    else
    {
        //qWarning() << "Image cannot be converted.";
        return QImage();
    }
}
Hope my functions helped. Maybe someone knows better ways of doing that :)
I have been able to display an image in a label in Qt using something like the following:
transformPixels(0,0,1,imheight,imwidth,1);//sets unsigned char** imageData
unsigned char* fullCharArray = new unsigned char[imheight * imwidth];
for (int i = 0 ; i < imheight ; i++)
for (int j = 0 ; j < imwidth ; j++)
fullCharArray[(i*imwidth)+j] = imageData[i][j];
QImage *qi = new QImage(fullCharArray, imwidth, imheight, QImage::Format_RGB32);
ui->viewLabel->setPixmap(QPixmap::fromImage(*qi,Qt::AutoColor));
So fullCharArray is an array of unsigned chars that have been mapped from the 2D array imageData, in other words, it is imheight * imwidth bytes.
The problem is, it seems like only a portion of my image is showing in the label. The image is very large. I would like to display the full image, scaled down to fit in the label, with the aspect ratio preserved.
Also, that QImage format was the only one I could find that seemed to give me a close representation of the image I want to display; is that what I should expect? I am only using one byte per pixel (unsigned char, values from 0 to 255), and it seems like RGB32 doesn't make much sense for that data type, but none of the other formats displayed anything remotely correct.
Edit:
Following Dan Gallagher's advice, I implemented this code:
QImage *qi = new QImage(fullCharArray, imwidth, imheight, QImage::Format_RGB32);
int labelWidth = ui->viewLabel->width();
int labelHeight = ui->viewLabel->height();
QImage small = qi->scaled(labelWidth, labelHeight,Qt::KeepAspectRatio);
ui->viewLabel->setPixmap(QPixmap::fromImage(small,Qt::AutoColor));
But this causes my program to "unexpectedly finish" with code 0
Qt doesn't support grayscale image construction directly. You need to use an 8-bit indexed color image:
QImage * qi = new QImage(imageData, imwidth, imheight, QImage::Format_Indexed8);
for(int i=0;i<256;++i) {
qi->setColor(i, qRgb(i,i,i));
}
QImage has a scaled member function. So you want to change your setPixmap call to something like:
QImage small = qi->scaled(labelWidth, labelHeight, Qt::KeepAspectRatio);
ui->viewLabel->setPixmap(QPixmap::fromImage(small, Qt::AutoColor));
Note that scaled does not modify the original image qi; it returns a new QImage that is a scaled copy of the original.
Re-Edit:
To convert from 1-byte grayscale to 4-byte RGB grayscale:
QImage *qi = new QImage(imwidth, imheight, QImage::Format_RGB32);
for (int i = 0; i < imheight; i++)
{
    for (int j = 0; j < imwidth; j++)
    {
        // setPixel takes (x, y); qRgb builds a gray value from the single channel
        qi->setPixel(j, i, qRgb(imageData[i][j], imageData[i][j], imageData[i][j]));
    }
}
Then scale qi and use the scaled copy as the pixmap for viewLabel.
I've also faced a similar problem - QImage::scaled returned black images. The quick work-around which worked in my case was to convert the QImage to a QPixmap, scale it, and then convert back. Like this:
QImage resultImg = QPixmap::fromImage(image)
.scaled( 400, 400, Qt::KeepAspectRatio )
.toImage();
where "image" is the original image.
I was not aware of the format problem before reading this thread - but indeed, my images are 1-bit black and white.
Regards,
Valentin Heinitz