I want to display a cv::Mat in a GUI written with gtkmm, so I ran a test.
I have a Gtk::Image widget named image, and I want to set its image with the following two methods:
    // first method: display from file
    void displayImage1()
    {
        Glib::RefPtr<Gdk::Pixbuf> pixbuf = Gdk::Pixbuf::create_from_file("gtk.png");
        image.set(pixbuf);
    }

    // second method: display from cv::Mat
    void displayImage2()
    {
        cv::Mat outImage = cv::imread("gtk.png");
        cv::cvtColor(outImage, outImage, CV_BGR2RGB);
        Glib::RefPtr<Gdk::Pixbuf> pixbuf = Gdk::Pixbuf::create_from_data(
            outImage.data, Gdk::COLORSPACE_RGB, false, 8,
            outImage.cols, outImage.rows, outImage.step);
        image.set(pixbuf);
    }
The first method works well.
The second method, however, does not: I get a corrupted image on the screen, as shown in the picture.
If I set the has_alpha parameter to true, the result is also strange (see pic. below).
I ran similar tests using Gtk::DrawingArea, and with different IDEs (all g++ under Linux), always with the same results.
Update:
I tested lots of images. Sometimes the images are broken, and sometimes the program crashes with:
The program has unexpectedly finished.
Usually, this kind of "broken" image triggers a warning in my head: "wrong rowstride!". The rowstride of a Gdk::Pixbuf is the length in bytes of one line of data. Because of byte-alignment constraints, there may be some padding at the end of each line.
I checked what this step argument was, and yes, it's the same thing in OpenCV as the rowstride in Gdk::Pixbuf. Then I realized outImage.step is a cv::MatStep object, while Gdk::Pixbuf::create_from_data expects an int. I think you're supposed to use outImage.step[0] instead.
Please read https://docs.opencv.org/2.4/modules/core/doc/basic_structures.html#mat
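A minimal sketch of the fix, assuming the same image widget and file as in the question:

    cv::Mat outImage = cv::imread("gtk.png");
    cv::cvtColor(outImage, outImage, CV_BGR2RGB);
    // step[0] is the row stride in bytes, including any alignment padding
    Glib::RefPtr<Gdk::Pixbuf> pixbuf = Gdk::Pixbuf::create_from_data(
        outImage.data, Gdk::COLORSPACE_RGB, false, 8,
        outImage.cols, outImage.rows,
        static_cast<int>(outImage.step[0]));
    image.set(pixbuf);

(Mind the lifetime issue discussed in the other answers: outImage must outlive the pixbuf, since create_from_data does not copy the pixels.)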
Here we go:

    auto lenna = imread("Lenna.png");
    Gtk::Image image;
    cvtColor(lenna, lenna, COLOR_BGR2RGB);
    auto size = lenna.size();
    auto img = Gdk::Pixbuf::create_from_data(
        lenna.data, Gdk::COLORSPACE_RGB, lenna.channels() == 4, 8,
        size.width, size.height, (int) lenna.step);
    image.set(img);
This is what I did, and it showed good results when displaying the image:

    cvtColor(resize_image, resize_image, COLOR_BGR2RGB);
    Pixbuf = Gdk::Pixbuf::create_from_data(
        resize_image.data, Gdk::COLORSPACE_RGB, false, 8,
        resize_image.cols, resize_image.rows, resize_image.step);
So I tested this (adding scale_simple) with success. Presumably it works because scale_simple returns a new Pixbuf that owns a copy of the pixel data, so the cv::Mat's lifetime no longer matters:
From: http://orobotp.blogspot.com/2014/01/opencv-with-gtkmm3.html
Version: Gtkmm 3.22.2-2, OpenCV 4.4.0-dev, g++ 7.5.0
    void displayImage2()
    {
        cv::Mat outImage;
        outImage = cv::imread("gtk.png");
        cv::cvtColor(outImage, outImage, cv::COLOR_BGR2RGB);
        Glib::RefPtr<Gdk::Pixbuf> pixbuf = Gdk::Pixbuf::create_from_data(
            outImage.data, Gdk::COLORSPACE_RGB, false, 8,
            outImage.cols, outImage.rows, outImage.step)
            ->scale_simple(outImage.cols, outImage.rows, Gdk::INTERP_BILINEAR);
        image.set(pixbuf);
    }
IMO, as already suggested by @Miki in the comments, this is just a lifetime issue.
I had the very same problem with similar code:
    {
        cv::Mat rgb;
        cv::cvtColor(src, rgb, cv::COLOR_GRAY2RGB);
        pixbuf = gdk_pixbuf_new_from_data(rgb.data,
                                          GDK_COLORSPACE_RGB, FALSE, 8,
                                          rgb.cols, rgb.rows, rgb.step,
                                          NULL, NULL);
    }
The above snippet simply does not work (or works intermittently) because, quoting the gdk_pixbuf_new_from_data documentation, "the data is owned by the caller of the function".
The problem is that, at the time the image is rendered, rgb has already been destroyed. Adding a rgb.addref() just before the pixbuf assignment resolves the issue, although it introduces a memory leak.
One solution would be to leverage the destroy callback to unreference the Mat object, e.g.:
    static void
    unref_mat(guchar *data, gpointer user_data)
    {
        // drop the extra reference taken when the pixbuf was created
        delete (cv::Mat *) user_data;
    }

    {
        cv::Mat rgb;
        cv::cvtColor(src, rgb, cv::COLOR_GRAY2RGB);
        // Heap-allocate a second header sharing the same data: it keeps the
        // reference count above zero, and (unlike a pointer to the stack-local
        // rgb, which would dangle) it stays valid until the destroy callback runs.
        cv::Mat *ref = new cv::Mat(rgb);
        pixbuf = gdk_pixbuf_new_from_data(ref->data,
                                          GDK_COLORSPACE_RGB, FALSE, 8,
                                          ref->cols, ref->rows, ref->step,
                                          unref_mat, ref);
    }
Try adding a reference to outImage, e.g. outImage.addref(). All my problems were related to that: the source image was de-referenced before gdk_pixbuf_new_from_data got a chance to map it, leading to segfaults, corruption and so on. Just be sure to release it later, or use the destroy callback provided by gdk_pixbuf_new_from_data.
For Gtkmm-2.4 and OpenCV 4.6, check https://onthim.blogspot.com/2015/10/using-opencv-in-gtk-applications.html and https://developer-old.gnome.org/gtkmm-tutorial/2.24/sec-draw-images.html.es
    Mat frame;
    frame = imread("gtk.png");
    cv::cvtColor(frame, frame, COLOR_BGR2RGB); // or COLOR_BGR2GRAY
    Glib::RefPtr<Gdk::Pixbuf> image = Gdk::Pixbuf::create_from_data(
        frame.data, Gdk::COLORSPACE_RGB, false, 8,
        frame.cols, frame.rows, frame.step);
    // draw the whole image (from 0,0 to the full width,height) at 0,0 in the window
    image->render_to_drawable(get_window(), get_style()->get_black_gc(),
                              0, 0, 0, 0, image->get_width(), image->get_height(),
                              Gdk::RGB_DITHER_NONE, 0, 0);
Objective and problem
I'm trying to process a video file on the fly using OpenCV 3.4.1 by grabbing each frame, converting it to grayscale, then doing Canny edge detection on it. In order to display the images (on the fly as well), I created a composite Mat that is three times as wide as the original frame, with 3 additional Mat headers into it. The 3 extra headers represent the images I would like to display in the composite, and are positioned at the 1st, 2nd and 3rd horizontal segments of the composite.
After image processing however, the display of the composite image is not as expected: the first segment (where the original frame should be) is completely black, while the other segments (of processed images) are displayed fine. If, on the other hand, I display the ROIs one by one in separate windows, all the images look fine.
These are the things I tried to overcome this issue:
use .copyTo to actually copy the data into the appropriate image segments. The result was the same.
I put the Canny image into the compOrigPart ROI, and it did display in the first segment, so it is not a problem with the definition of the ROIs.
Define the composite as a three-channel image; in the loop, convert it to grayscale, put the processed images into it, convert it back to BGR, and put the original in. This time around the whole composite was black; nothing showed.
As per gameon67's suggestion, I tried to create a namedWindow as well, but that doesn't help either.
Code:
    int main() {
        cv::VideoCapture vid("./Vid.avi");
        if (!vid.isOpened()) return -1;

        int frameWidth = vid.get(cv::CAP_PROP_FRAME_WIDTH);
        int frameHeight = vid.get(cv::CAP_PROP_FRAME_HEIGHT);
        int frameFormat = vid.get(cv::CAP_PROP_FORMAT);
        cv::Scalar fontColor(250, 250, 250);
        cv::Point textPos(20, 20);

        cv::Mat frame;
        cv::Mat compositeFrame(frameHeight, frameWidth*3, frameFormat);
        cv::Mat compOrigPart(compositeFrame, cv::Range(0, frameHeight), cv::Range(0, frameWidth));
        cv::Mat compBwPart(compositeFrame, cv::Range(0, frameHeight), cv::Range(frameWidth, frameWidth*2));
        cv::Mat compEdgePart(compositeFrame, cv::Range(0, frameHeight), cv::Range(frameWidth*2, frameWidth*3));

        while (vid.read(frame)) {
            if (frame.empty()) break;

            cv::cvtColor(frame, compBwPart, cv::COLOR_BGR2GRAY);
            cv::Canny(compBwPart, compEdgePart, 100, 150);
            compOrigPart = frame;

            cv::putText(compOrigPart, "Original", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
            cv::putText(compBwPart, "GrayScale", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
            cv::putText(compEdgePart, "Canny edge detection", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);

            cv::imshow("Composite of Original, BW and Canny frames", compositeFrame);
            cv::imshow("Original", compOrigPart);
            cv::imshow("BW", compBwPart);
            cv::imshow("Canny", compEdgePart);
            cv::waitKey(33);
        }
    }
Questions
Why can't I display the entirety of the composite image in a single window, while displaying them separately is OK?
What is the difference between these displays? The data is obviously there, as evidenced by the separate windows.
Why is only the original frame misbehaving?
Your compBwPart and compEdgePart are grayscale images, so the Mat type is CV_8UC1 (single channel), and therefore your compositeFrame is in grayscale too. If you want to combine these two images with a color image, you have to convert it back to BGR first and then fill the compOrigPart:
    while (vid.read(frame)) {
        if (frame.empty()) break;
        cv::cvtColor(frame, compBwPart, cv::COLOR_BGR2GRAY);
        cv::Canny(compBwPart, compEdgePart, 100, 150);
        cv::cvtColor(compositeFrame, compositeFrame, cv::COLOR_GRAY2BGR);
        frame.copyTo(compositeFrame(cv::Rect(0, 0, frameWidth, frameHeight)));
        cv::putText(compOrigPart, "Original", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor); // ...the rest of your code
This is a combination of several issues.
The first problem is that you set the type of compositeFrame to the value returned by vid.get(cv::CAP_PROP_FORMAT). Unfortunately, that property doesn't seem entirely reliable: I've just had it return 0 (meaning CV_8UC1) after opening a color video, and then gotten 3-channel (CV_8UC3) frames. Since you want compositeFrame to have the same type as the input frame, this won't work.
To work around it, instead of using those properties, I'd lazy-initialize compositeFrame and the 3 ROIs after receiving the first frame (based on its dimensions and type).
The next set of problems lies in those two statements:
    cv::cvtColor(frame, compBwPart, cv::COLOR_BGR2GRAY);
    cv::Canny(compBwPart, compEdgePart, 100, 150);
Here the assumption is made that frame is BGR (since you're trying to convert it), meaning compositeFrame and its ROIs are also BGR. Unfortunately, in both cases you're writing a grayscale image into the ROI. This causes a reallocation, and the target Mat ceases to be a ROI.
To correct this, use temporary Mats for the grayscale data, and use cvtColor to turn it back to BGR to write into the ROIs.
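For example (this is exactly what the full program below does):

    cv::cvtColor(frame, frame_gray, cv::COLOR_BGR2GRAY); // temporary grayscale Mat
    cv::Canny(frame_gray, edges_gray, 100, 150);
    cv::cvtColor(frame_gray, compBwPart, cv::COLOR_GRAY2BGR); // back to BGR, into the ROI
    cv::cvtColor(edges_gray, compEdgePart, cv::COLOR_GRAY2BGR);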
A similar problem lies in the following statement:

    compOrigPart = frame;
That's a shallow copy, meaning it will just make compOrigPart another reference to frame (and therefore it will cease to be a ROI of compositeFrame).
What you need is a deep copy, using copyTo (note that the data types still need to match, but that was fixed earlier).
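For example (as in the full program below):

    frame.copyTo(compOrigPart); // deep copy into the ROI; compositeFrame keeps its buffer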
Finally, even though you try to be flexible regarding the type of the input video (judging by the vid.get(cv::CAP_PROP_FORMAT)), the rest of the code really assumes that the input is 3 channel, and will break if it isn't.
At the least, there should be some assertion to cover this expectation.
Putting this all together:
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture vid("./Vid.avi");
        if (!vid.isOpened()) return -1;

        cv::Scalar fontColor(250, 250, 250);
        cv::Point textPos(20, 20);

        cv::Mat frame, frame_gray, edges_gray;
        cv::Mat compositeFrame;
        cv::Mat compOrigPart, compBwPart, compEdgePart; // ROIs

        while (vid.read(frame)) {
            if (frame.empty()) break;

            if (compositeFrame.empty()) {
                // The rest of the code assumes the video to be BGR (i.e. 3 channel)
                CV_Assert(frame.type() == CV_8UC3);

                // Lazy initialize once we have the first frame
                compositeFrame = cv::Mat(frame.rows, frame.cols * 3, frame.type());
                compOrigPart = compositeFrame(cv::Range::all(), cv::Range(0, frame.cols));
                compBwPart = compositeFrame(cv::Range::all(), cv::Range(frame.cols, frame.cols * 2));
                compEdgePart = compositeFrame(cv::Range::all(), cv::Range(frame.cols * 2, frame.cols * 3));
            }

            cv::cvtColor(frame, frame_gray, cv::COLOR_BGR2GRAY);
            cv::Canny(frame_gray, edges_gray, 100, 150);

            // Deep copy data to the ROI
            frame.copyTo(compOrigPart);
            // The ROI is BGR, so we need to convert back
            cv::cvtColor(frame_gray, compBwPart, cv::COLOR_GRAY2BGR);
            cv::cvtColor(edges_gray, compEdgePart, cv::COLOR_GRAY2BGR);

            cv::putText(compOrigPart, "Original", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
            cv::putText(compBwPart, "GrayScale", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);
            cv::putText(compEdgePart, "Canny edge detection", textPos, cv::FONT_HERSHEY_PLAIN, 1, fontColor);

            cv::imshow("Composite of Original, BW and Canny frames", compositeFrame);
            cv::imshow("Original", compOrigPart);
            cv::imshow("BW", compBwPart);
            cv::imshow("Canny", compEdgePart);
            cv::waitKey(33);
        }
    }
Screenshot of the composite window (using some random test video off the web):
My code:
    camera = new RaspiCam_Cv(); // Raspberry Pi camera library
    camera->set(CV_CAP_PROP_FORMAT, CV_8UC1); // monochrome 8 bit format
    camera->set(CV_CAP_PROP_FRAME_WIDTH, 960);
    camera->set(CV_CAP_PROP_FRAME_HEIGHT, 720);

    while (1) {
        camera->grab(); // for Linux
        unsigned char* buff = camera->getImageBufferData();
        QPixmap pic = QPixmap::fromImage(QImage(buff, camWidth_, camHeight_,
                                                camWidth_ * 1, QImage::Format_Indexed8));
        label->setPixmap(pic);
    }
The problem is bad quality! I found out that the problem happens when using QImage; when using an OpenCV Mat, everything is good!
The same thing happens in other Qt-based programs, like this one (same bad quality): https://code.google.com/p/qt-opencv-multithreaded/
Here is a pic where the problem is shown. There is a white page in front of the camera, so if all went as it should, you would see a clean gray image.
You are resizing the image using pixmap and label transformations, which are worse than those of QImage. This is because a pixmap is optimized for display and not for anything else. The pixmap size should be the same as the label's to avoid any further resizing:
    QImage img = QImage(
        buff,
        camWidth_,
        camHeight_,
        camWidth_ * 1,
        QImage::Format_Indexed8).scaled(label->size());
    label->setPixmap(QPixmap::fromImage(img));
This is not an answer, but it's too hard to share code in the comments.
Can you please test this code and tell me whether the result is good or bad?
    int main(int argc, char** argv)
    {
        RaspiCam_Cv *camera = new RaspiCam_Cv();
        camera->set(CV_CAP_PROP_FORMAT, CV_8UC1);
        camera->set(CV_CAP_PROP_FRAME_WIDTH, 960);
        camera->set(CV_CAP_PROP_FRAME_HEIGHT, 720);
        camera->open(); // open the camera before grabbing, as in the RaspiCam examples

        namedWindow("Output", CV_WINDOW_AUTOSIZE);
        while (1)
        {
            Mat frame;
            camera->grab();
            //camera->retrieve(frame);
            unsigned char* buff = camera->getImageBufferData();
            frame = cv::Mat(720, 960, CV_8UC1, buff);
            imshow("Output", frame);
            if (waitKey(30) == 27)
            { cout << "Exit" << endl; break; }
        }
        delete camera; // instead of calling the destructor explicitly
        return 0;
    }
Your provided images look like the color depth is only 16 bit.
For comparison, here's the provided captured image:
and here's the same image, transformed to 16 bit color space in IrfanView (without Floyd-Steinberg dithering).
In the comments we found out that the Raspberry Pi output buffer was set to 16 bit, and setting it to 24 bit helped.
But I can't explain why rendering the image on the Pi with OpenCV's cv::imshow produced good-looking images on the monitor/TV...
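If the 16 bit buffer came from the framebuffer settings, the change was presumably something like this in /boot/config.txt (an assumption; the comments don't record the exact edit):

    # use a 24 bit framebuffer instead of 16 bit
    framebuffer_depth=24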
I have a VS console application built using the OpenCV library. I display images using the OpenCV imshow function. The thing is that all the imshow windows overlap each other, and it is difficult to toggle between them. How can I prevent the overlap, display them separately, and toggle between them?
The way to go about this programmatically is to call resizeWindow() to define each window's size, and moveWindow() to place the windows at specific locations on your screen:

    void cv::resizeWindow(const string& winname, int width, int height)
    void cv::moveWindow(const string& winname, int x, int y)
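For example, a small sketch that tiles two windows side by side (the window names and the 40 px gap are arbitrary):

    cv::imshow("left", img1);
    cv::imshow("right", img2);
    cv::moveWindow("left", 0, 0);
    cv::moveWindow("right", img1.cols + 40, 0); // place it to the right of the first window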
Although this is a late reply, you may find it useful to call moveWindow() after each imshow() call. A language independent solution is given here.
Example steps:

    call imshow("first image", img1)
    call moveWindow("first image", 0, 0) // the default window position is at col,row == 0,0, so this line is optional
    call imshow("second image", img2)
    set firstImageWidth = width of img1
    set mySpacing = 40 // vary this to increase/decrease the gap between the image windows
    call moveWindow("second image", firstImageWidth + mySpacing, 0)

Then, add these lines to prevent the output windows from staying active forever:

    set myTime = 7000 // in milliseconds; here, 7000 ms == 7 s to show our image windows
    call waitKey(myTime)
    call waitKey(1) // this is a trick; otherwise, the windows stay open indefinitely
At the moment, I am using Java SE8 with OpenCV 4.2. The above method works for me.
Screenshot of the above example in action: https://i.stack.imgur.com/JaTI0.png
Here is a Java+OpenCV code snippet for the display part:

    ...
    // display images using the OpenCV HighGui class methods (they are static)
    String inputWindowName = "This window shows input image";
    String outputWindowName = "This window shows output image";
    HighGui.imshow(inputWindowName, img1);
    HighGui.imshow(outputWindowName, img2);
    HighGui.moveWindow(outputWindowName, img1.cols() + 40, 0);
    HighGui.waitKey(7000);
    HighGui.waitKey(1);
I'm trying to show the LiveView image in real time. I use EDSDK 2.14 + Qt5 + OpenCV + MinGW32 under Windows. I'm not very experienced in image processing, so now I have the following problem. I use the example from the Canon EDSDK, and all was OK until this part of the code:
//
// Display image
//
I googled a lot of examples, but all of them were written in C#, MFC or VB. I also found advice to use libjpeg-turbo for decompressing the image and then showing it with OpenCV. I tried to use libjpeg-turbo but failed to understand what to do :(. Maybe somebody here has a code example of converting the LiveView stream to an OpenCV Mat or a QImage (because I use Qt)?
Here is what worked for me after following the SAMPLE 10 from the Canon EDSDK Reference. It's a starting point for a more robust solution.
In the downloadEvfData function, I replaced the "Display image" part with the code below:
    unsigned char *data = NULL;
    EdsUInt32 size = 0;
    EdsSize coords;

    // get image coordinates
    EdsGetPropertyData(evfImage, kEdsPropID_Evf_CoordinateSystem, 0, sizeof(coords), &coords);

    // get buffer pointer and size
    EdsGetPointer(stream, (EdsVoid**)&data);
    EdsGetLength(stream, &size);

    //
    // release stream and evfImage
    //

    // wrap the JPEG buffer in a Mat and decode it
    Mat img(coords.height, coords.width, CV_8U, data);
    image = imdecode(img, CV_LOAD_IMAGE_COLOR);
I've also changed the function signature:
    EdsError downloadEvfData(EdsCameraRef camera, Mat& image)
And in the main function:
    Mat image;
    namedWindow("main", WINDOW_NORMAL);
    startLiveView(camera);
    for (;;) {
        downloadEvfData(camera, image);
        imshow("main", image);
        if (waitKey(10) >= 0)
            break;
    }
Based on the Canon EDSDK examples, you may append your EdsStreamRef 'stream' data, with its correct length, into a QByteArray. Then use, for example, the following to parse the raw data from the QByteArray as a JPG into a new QImage:

    QImage my_image = QImage::fromData(limagedata, "JPG");

Once it's in a QImage, you can convert it into an OpenCV cv::Mat (see How to convert QImage to opencv Mat).
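A minimal sketch of that idea, assuming data and size were obtained with EdsGetPointer/EdsGetLength as in the answer above:

    QByteArray limagedata(reinterpret_cast<const char*>(data), static_cast<int>(size));
    QImage my_image = QImage::fromData(limagedata, "JPG");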
Well, it depends on the format of the LiveView stream. There must be some kind of delimiter in it; you then need to convert each image and update your QImage with it.
Check out this tutorial for more information: Canon EDSDK Tutorial in C#
    QImage img = QImage::fromData(data, length, "JPG");
    m_image = QImageToMat(img);

    // -----------------------------------------
    cv::Mat MainWindow::QImageToMat(QImage& src)
    {
        cv::Mat tmp(src.height(), src.width(), CV_8UC4, (uchar*)src.bits(), src.bytesPerLine());
        cv::Mat result = tmp.clone(); // deep copy, so the Mat survives the QImage
        return result;
    }

    // -------------------------
    void MainWindow::ShowVideo()
    {
        namedWindow("yunhu", WINDOW_NORMAL);
        while (1)
        {
            requestLiveViewImage();
            if (m_image.data != NULL)
            {
                imshow("yunhu", m_image);
                waitKey(50);
            }
        }
    }
So currently I open images created with OpenCV with something like

    cvNamedWindow("Original Image", CV_WINDOW_AUTOSIZE);
    cvShowImage("Original Image", original);

but my images are quite large and go off the screen, as shown here.
I want the windows to be resizable, or at least the size of the user's screen, but with scrolling.
How can I do such a thing?
A simple way to scroll a large image is by using trackbars and a Rect for snipping out the part to show:
.
.
.
namedWindow("winImage",WINDOW_AUTOSIZE);
namedWindow("controlWin",WINDOW_AUTOSIZE);
int winH=300;
int winW=600;
if(winH>=largeImage.rows)winH=largeImage.rows-1;
if(winW>=largeImage.cols)winW=largeImage.cols-1;
int scrolHight=0;
int scrolWidth=0;
cvCreateTrackbar("Hscroll", "controlWin", &scrolHight, (largeImage.rows -winH));
cvCreateTrackbar("Wscroll", "controlWin", &scrolWidth, (largeImage.cols -winW));
while(waitKey(0)!='q'){
Mat winImage= largeImage( Rect(scrolWidth,scrolHight,winW,winH) );
imshow("winImage",winImage);
}//while
.
.
EDIT
Short answer: you can't "enable" it, you have to implement it.
OpenCV does have trackbars -- have a look at the documentation, in particular the cvCreateTrackbar function. However, even if you use them, you still have to write the code behind it (for determining the new ROI and determining what to actually show).
If this sounds a bit too daunting, then you can wrap the displayed image using some GUI framework. Here is an example that uses OpenCV with wxWidgets. Of course, you can use any other GUI framework (for example, Qt).
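For instance, a minimal Qt sketch (Qt being one of the frameworks mentioned; the file name is a placeholder): a QLabel inside a QScrollArea gives you scrolling for free.

    #include <QApplication>
    #include <QLabel>
    #include <QScrollArea>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        QLabel *label = new QLabel;
        label->setPixmap(QPixmap("large_image.png")); // any large QPixmap works here

        QScrollArea scrollArea;
        scrollArea.setWidget(label); // the scroll area takes ownership and adds scrollbars
        scrollArea.resize(800, 600); // window size; the image may be larger
        scrollArea.show();

        return app.exec();
    }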
This might help for one step: just use CV_WINDOW_NORMAL instead of CV_WINDOW_AUTOSIZE, which makes the window resizable:

    cvNamedWindow("Original Image", CV_WINDOW_NORMAL);
    cvShowImage("Original Image", original);
As far as I know (but I've only recently started looking at OpenCV), you need to build the OpenCV library with the Qt GUI library as the GUI backend.
Then you get all the cute functions.
Well, OK, there's not very much, but the little that is there is documented as Qt-only.
EDIT: PS, since the other answers might sow confusion: I'm not talking about using Qt to implement such functionality yourself. I'm talking about the functionality available in OpenCV's HighGUI module.
Cheers & hth.,
The best I can do with pure OpenCV is to modify the OpenCV trackbar method. I'm using a ROI to update the displayed image according to the slider value. The weakness of this method is that the OpenCV trackbar is displayed horizontally, not vertically like a normal scrollbar, so it is up to you whether you want to rotate your image or not.
    int slider_max, slider, displayHeight;
    int displayWidth = 1900;
    Mat src1; // original big image
    Mat dst;
    cv::Rect roi;

    static void on_trackbar(int, void*)
    {
        roi = cv::Rect(slider, 0, displayWidth, displayHeight); // update the ROI for display
        dst = src1(roi);
        imshow("Result", dst);
    }

    int main(void)
    {
        src1 = imread("BigImg.jpg"); // your big image
        if (src1.empty()) { cout << "Error loading src1 \n"; return -1; } // check before using src1
        cv::rotate(src1, src1, cv::ROTATE_90_CLOCKWISE); // I rotate my image because the opencv trackbar is displayed horizontally
        cv::resize(src1, src1, cv::Size(src1.cols / 2, src1.rows / 2)); // resize the image if it's too big to display in one window

        slider_max = src1.cols - displayWidth;
        slider = 0;
        displayHeight = src1.rows;

        namedWindow("Result", WINDOW_AUTOSIZE); // create the window
        char TrackbarName[50];
        sprintf(TrackbarName, "Pixel Pos");
        createTrackbar(TrackbarName, "Result", &slider, slider_max, on_trackbar);
        on_trackbar(slider, 0);

        waitKey(0);
        return 0;
    }
To change the trackbar's orientation, you will need Qt or another GUI: How to change the position of the trackbar in OpenCV applications?