QLabel is not displaying video - c++

It may be a very silly problem, but I am really stuck on this.
Here, I am trying to display a video frame by frame in a QLabel. In the user interface there is a QPushButton; by clicking it the user can select the video. The QString path of the video file is obtained and converted to a cv::String so the video can be loaded with the OpenCV libraries. Once loaded, every Mat3b frame from the video is converted to a QImage so that the frames can be displayed in a QLabel. But when I run this program, the QLabel does not display the video, and after a few moments it crashes, showing "Project.exe is not responding".
This may be a bit convoluted, but it is done this way so that specific OpenCV methods can be applied to each frame if needed. Here is the code responsible for this:
void MainWindow::on_Browse_clicked()
{
    QFileDialog dialog(this);
    dialog.setNameFilter(tr("Videos (*.avi)"));
    dialog.setViewMode(QFileDialog::Detail);
    QString videofileName = QFileDialog::getOpenFileName(this, tr("Open File"), "C:/", tr("Videos (*.avi)"));
    if(!videofileName.isEmpty())
    {
        String videopath;
        videopath = videofileName.toLocal8Bit().constData();
        bool playVideo = true;
        VideoCapture cap(videopath);
        if(!cap.isOpened())
        {
            QMessageBox::warning(this, tr("Warning"), tr("Error loading video."));
            exit(0);
        }
        Mat frame;
        while(1)
        {
            if(playVideo)
                cap >> frame;
            Mat3b src = frame;
            QImage dest = Mat3b2QImage(src); // To convert Mat3b to QImage
            ui->label->setPixmap(QPixmap::fromImage(dest));
            if(frame.empty())
            {
                QMessageBox::warning(this, tr("Warning"), tr("Video frame cannot be opened."));
                break;
            }
        }
    }
}
But when I add the following few lines just before the last three closing braces, both the QLabel and the OpenCV window display the video.
imshow("Video",src);
char key = waitKey(10);
if(key == 'p')
playVideo = !playVideo;
if(key == 'q')
break;
But I don't want to display the video in an OpenCV window. Can anyone help me fix this? I appreciate any help.
Thanks in advance.

The GUI thread is busy in the infinite while loop, so you never give Qt the chance to update the GUI.
You should add a call to QApplication::processEvents() inside the loop. From the Qt documentation, it:
Processes all pending events for the calling thread [...]. You can call this function occasionally when your program is busy performing a long operation.
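For illustration, here is a minimal sketch of the asker's loop with that call added (Mat3b2QImage is the asker's own helper; the empty-frame check is moved before the conversion so an empty frame is never converted):

while(1)
{
    if(playVideo)
        cap >> frame;
    if(frame.empty())
        break;                                   // stop cleanly at the end of the video

    Mat3b src = frame;
    QImage dest = Mat3b2QImage(src);             // asker's Mat3b -> QImage helper
    ui->label->setPixmap(QPixmap::fromImage(dest));

    QApplication::processEvents();               // let Qt repaint the label and handle input
}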

Related

How to hold window open during OpenCV processing?

I'm running into an odd problem with OpenCV on Linux, Ubuntu 16.04 specifically. If I use the usual code to show a webcam stream like this, it works fine:
// WebcamTest.cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // declare a VideoCapture object and associate to webcam, 1 => use 2nd webcam, the 0th webcam is the one integral to the TX2 development board
    cv::VideoCapture capWebcam(1);

    // check if VideoCapture object was associated to webcam successfully, if not, show error message and bail
    if (capWebcam.isOpened() == false)
    {
        std::cout << "error: capWebcam not accessed successfully\n\n";
        return (0);
    }

    cv::Mat imgOriginal;    // input image
    cv::Mat imgGrayscale;   // grayscale of input image
    cv::Mat imgBlurred;     // intermediate blurred image
    cv::Mat imgCanny;       // Canny edge image

    char charCheckForEscKey = 0;

    // while the Esc key has not been pressed and the webcam connection is not lost . . .
    while (charCheckForEscKey != 27 && capWebcam.isOpened())
    {
        bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal);    // get next frame

        // if frame was not read successfully, print error message and jump out of while loop
        if (!blnFrameReadSuccessfully || imgOriginal.empty())
        {
            std::cout << "error: frame not read from webcam\n";
            break;
        }

        // convert to grayscale
        cv::cvtColor(imgOriginal, imgGrayscale, CV_BGR2GRAY);
        // blur image
        cv::GaussianBlur(imgGrayscale, imgBlurred, cv::Size(5, 5), 0);
        // get Canny edges
        cv::Canny(imgBlurred, imgCanny, 75, 150);

        cv::imshow("imgOriginal", imgOriginal);
        cv::imshow("imgCanny", imgCanny);

        charCheckForEscKey = cv::waitKey(1);    // delay (in ms) and get key press, if any
    } // end while

    return (0);
}
This example shows the webcam stream in one imshow window and a Canny edges image in a second window. Both windows update and show the images as expected with very little if any perceptible flicker.
If you're wondering why I'm using camera 1 instead of the usual camera 0: I'm running this on a Jetson TX2, where camera 0 is the one integral to the development board, and I'd prefer to use an additional external webcam. For the same reason I have to use Ubuntu 16.04, but I suspect the result would be the same with Ubuntu 18.04 (I have not tested this, however).
If instead I have a function that takes significant processing time, rather than just taking simple Canny edges, i.e.:
int main(void)
{
    .
    .
    .
    // declare a VideoCapture object and associate to webcam, 1 => use 2nd webcam, the 0th webcam is the one integral to the TX2 development board
    cv::VideoCapture capWebcam(1);

    // check if VideoCapture object was associated to webcam successfully, if not, show error message and bail
    if (capWebcam.isOpened() == false)
    {
        std::cout << "error: capWebcam not accessed successfully\n\n";
        return (0);
    }

    cv::namedWindow("imgOriginal");
    cv::Mat imgOriginal;
    char charCheckForEscKey = 0;

    // while the Esc key has not been pressed and the webcam connection is not lost . . .
    while (charCheckForEscKey != 27 && capWebcam.isOpened())
    {
        bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal);    // get next frame

        // if frame was not read successfully, print error message and jump out of while loop
        if (!blnFrameReadSuccessfully || imgOriginal.empty())
        {
            std::cout << "error: frame not read from webcam\n";
            break;
        }

        detectLicensePlate(imgOriginal);

        cv::imshow("imgOriginal", imgOriginal);

        charCheckForEscKey = cv::waitKey(1);    // delay (in ms) and get key press, if any
    } // end while
    .
    .
    .
    return (0);
}
The detectLicensePlate() function takes about a second to run.
The problem I'm having is, when running this program, the window only appears for the slightest amount of time, usually not long enough to even be perceptible, and never long enough to actually see the result.
The strange thing is, the window disappears, then the second or so delay occurs while detectLicensePlate() does its thing, then the window appears again for a very short time, then disappears again, and so on. It's almost as though just after cv::imshow("imgOriginal", imgOriginal);, cv::destroyAllWindows(); is implicitly being called.
The behavior I'm attempting to achieve is for the window to stay open and continue to show the previous result while processing the next. From what I recall this was the default behavior on Windows.
I should mention that I'm explicitly declaring the windows with cv::namedWindow("imgOriginal"); before the while loop in an attempt to not let it go out of scope but this does not seem to help.
Of course I can make the delay longer, i.e.
charCheckForEscKey = cv::waitKey(1500);
to wait for 1.5 seconds, but then the application becomes very unresponsive.
Based on the post "c++ opencv image not display inside the boost thread", I tried declaring the window outside the while loop and putting detectLicensePlate() and cv::imshow() on a separate thread, as follows:
.
.
.
cv::namedWindow("imgOriginal");

boost::thread myThread;

// while the Esc key has not been pressed and the webcam connection is not lost . . .
while (charCheckForEscKey != 27 && capWebcam.isOpened())
{
    // if frame was not read successfully, print error message and jump out of while loop
    if (!blnFrameReadSuccessfully || imgOriginal.empty())
    {
        std::cout << "error: frame not read from webcam\n";
        break;
    }

    myThread = boost::thread(&preDetectLicensePlate, imgOriginal);
    myThread.join();
    .
    .
    .
} // end while

// separate function
void preDetectLicensePlate(cv::Mat &imgOriginal)
{
    detectLicensePlate(imgOriginal);
    cv::imshow("imgOriginal", imgOriginal);
}
I even tried putting detectLicensePlate() on a separate thread but not cv::imshow(), and the other way around, still with the same result. No matter how I change the order or use threading, I can't get the window to stay open while the next round of processing is going on.
I realize I could use an entirely different windowing environment, such as Qt or something else, and that may or may not solve the problem, but I'd really prefer to avoid that for various reasons.
Does anybody have any other suggestions to get an OpenCV imshow window to stay open until the window is next updated or cv::destroyAllWindows() is called explicitly?

Is it possible to make a thread inside another thread in MFC?

I have a problem with threads in MFC using OpenCV. Let me describe my problem first. I have a GUI that is used to display video frames from a camera, so I use one thread to grab video from the camera and display it in the GUI. That part is done. However, I want to extend it: while the video is displaying, I also want to show that video in another OpenCV window with the commands
IplImage* image2=cvCloneImage(&(IplImage)original);
cvShowImage("Raw Image", image2);
cvReleaseImage(&image2);
As far as I know, I need to create a new thread inside the thread that grabs video from the camera. Is it possible to do that? Please see my code; could you give me a solution or suggestion for this task? Thank you so much.
This is my code
THREADSTRUCT *_param = new THREADSTRUCT;
_param->_this = this;
CWinThread* m_hThread;
m_hThread = AfxBeginThread (StartThread, _param);
In the StartThread function I load the video from the camera, like this:
UINT Main_MFCDlg::StartThread (LPVOID param)
{
    THREADSTRUCT* ts = (THREADSTRUCT*)param;
    cv::VideoCapture cap;
    cap.open(0);
    while (true)
    {
        Mat frame;
        Mat original;
        cap >> frame;
        if (!frame.empty()){
            original = frame.clone();
            //Display video in GUI
            CDC* vDC_VIDEO;
            vDC_VIDEO = ts->_this->GetDlgItem(IDC_VIDEO)->GetDC();
            CRect rect_VIDEO;
            ts->_this->GetDlgItem(IDC_VIDEO)->GetClientRect(&rect_VIDEO);
            //Is it possible to create a thread in here to show video with another
            //delay time, such as 1000ms,
            //to call the function cv::imshow("Second window", original)?
        }
        if (waitKey(30) >= 0) break; // Delay 30ms for first window
    }
    return 0; // the controlling function must return a UINT
}
Note that the thread struct looks like this:
//structure for passing to the controlling function
typedef struct THREADSTRUCT
{
    Main_MFCDlg* _this;
} THREADSTRUCT;
Of course you can create a thread inside another thread. Your problem is that a worker thread should not directly access UI objects that live in the UI thread; the details are explained here.
In your worker thread, after the background job is done, you can use SendMessage to send a message to the GUI and let it update itself.
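As a rough sketch of that idea (the message ID and handler name here are hypothetical, not part of the question's code):

// Hypothetical user-defined message meaning "a new frame is ready".
#define WM_FRAME_READY (WM_APP + 1)

// In the worker thread, after the frame has been processed:
ts->_this->SendMessage(WM_FRAME_READY, 0, 0);   // handled on the GUI thread

// In Main_MFCDlg's message map:
// ON_MESSAGE(WM_FRAME_READY, &Main_MFCDlg::OnFrameReady)

// Handler that runs on the GUI thread and is therefore free to touch the controls:
LRESULT Main_MFCDlg::OnFrameReady(WPARAM, LPARAM)
{
    // update the IDC_VIDEO control with the latest frame here
    return 0;
}

PostMessage can be used instead of SendMessage if the worker thread should not block while the GUI handles the message.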

How to show a video at different frame rates in two windows in OpenCV

I am using OpenCV to show frames from a camera. I want to show those frames in two separate windows: the real frames from the camera in the first window (a new frame every 30 milliseconds), and the frames in the second window with some delay (that is, a new frame every 1 second). Is it possible to do that? I tried to do it with my code, but it does not work well. Please give me a solution for this task using OpenCV and Visual Studio 2012. Thanks in advance.
This is my code
VideoCapture cap(0);
if (!cap.isOpened())
{
    cout << "exit" << endl;
    return -1;
}
namedWindow("Window 1", 1);
namedWindow("Window 2", 2);
long count = 0;
Mat face_algin;
while (true)
{
    Mat frame;
    Mat original;
    cap >> frame;
    if (!frame.empty()){
        original = frame.clone();
        cv::imshow("Window 1", original);
    }
    if (waitKey(30) >= 0) break; // Delay 30ms for first window
}
You could write the loop that displays frames in a single function, with the video file name as an argument, and call it simultaneously from two threads. The pseudocode would look like this:
void* play_video(void* frame_rate)
{
    // play at specified frame rate
}

main()
{
    create_thread(thread1, play_video, normal_frame_rate);
    create_thread(thread2, play_video, delayed_frame_rate);
    join_thread(thread1);
    join_thread(thread2);
}
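For example, a hedged C++11 sketch of that idea using std::thread; the file name, window names, and delays are placeholders, and it assumes a video file (two captures of the same camera device would conflict). Note that calling highgui from several threads is not guaranteed to be safe on every platform.

#include <opencv2/opencv.hpp>
#include <string>
#include <thread>

// Play one video file in its own window with the given inter-frame delay.
void play_video(const std::string& path, const std::string& window, int delay_ms)
{
    cv::VideoCapture cap(path);
    cv::Mat frame;
    while (cap.read(frame) && !frame.empty())
    {
        cv::imshow(window, frame);
        if (cv::waitKey(delay_ms) == 27) break;   // Esc stops this window's loop
    }
}

int main()
{
    std::thread normal(play_video, "video.avi", "Window 1", 30);     // roughly real time
    std::thread delayed(play_video, "video.avi", "Window 2", 1000);  // one frame per second
    normal.join();
    delayed.join();
    return 0;
}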

Passing cv::Mat img between forms

I'm having trouble passing cv::Mat data between forms in a Qt GUI application. For the moment I simply want to pass the image chosen by the user on the main window and display it on the results page:
void MainWindow::on_pushButton_clicked()
{
    QString fileName = QFileDialog::getOpenFileName(this,
        tr("Open Image"), ".", tr("Image Files (*.png *.jpg *.bmp)"));
    Lateral = cv::imread(fileName.toAscii().data());
}
In the header file of the main window I defined:
public:
    cv::Mat get_Lateral(cv::Mat img);
    cv::Mat get_Posteroanterior();
In the MainWindow.cpp file I have defined the following (I've tried a few variations of the method):
cv::Mat MainWindow::get_Lateral(cv::Mat img){
    Lateral.copyTo(img);
    return img;
}
cv::Mat MainWindow::get_Posteroanterior(){
    return Posteroanterior;
}
Finally, on the new form I have something like this:
MainWindow Mw;
cv::Mat op;
Mw.get_Lateral(op);
//(Mw.get_Posteroanterior()).copyTo(op);
cv::namedWindow("Lateral Image");
cv::imshow("Lateral Image", op);
When I run this I get a runtime error, so I added an if statement to check the contents of cv::Mat op, like this:
MainWindow Mw;
cv::Mat op;
Mw.get_Lateral(op);
//(Mw.get_Posteroanterior()).copyTo(op);
if (!op.data)
    cv::namedWindow("dud Image");
else{
    cv::namedWindow("Lateral Image");
    cv::imshow("Lateral Image", op);
}
And I get the dud image window, implying that op is empty.
Any advice on how to do this properly will be appreciated. I am fairly new to OpenCV and C++, so I apologize for any blatant mistakes.
Cheers
Have a look at my answer involving integrating OpenCV with larger applications. I wouldn't recommend using the highgui imshow function inside of a Qt GUI application. It has done some weird things for me in the past.
Basically, you can convert the cv::Mat to a QImage and then either use QGLWidget, or just simply draw it onto a QPixmap if you don't particularly need high speed.
As for passing a Mat object between forms, you can either convert it to a QImage and then use signals/slots like my example shows, or, if you need to manipulate the Mat object further, you can create a QMetaType. The QMetaType will allow you to transmit it across forms just like any other native Qt object. Here is a Qt example to get you started.
Hope that is helpful!
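As an illustration of the Mat-to-QImage route mentioned above, here is a minimal sketch, assuming a CV_8UC3 BGR image (rgbSwapped() returns a deep copy, so the result does not alias the Mat's buffer):

QImage matToQImage(const cv::Mat& mat)
{
    QImage img(mat.data, mat.cols, mat.rows,
               static_cast<int>(mat.step), QImage::Format_RGB888);
    return img.rgbSwapped();   // OpenCV stores BGR, QImage expects RGB
}

// Usage on the receiving form, e.g. inside a slot:
// ui->imageLabel->setPixmap(QPixmap::fromImage(matToQImage(Lateral)));

If the cv::Mat itself needs to travel through signals and slots, it can also be registered with Q_DECLARE_METATYPE(cv::Mat) and qRegisterMetaType<cv::Mat>("cv::Mat"), as the QMetaType suggestion above implies.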
The return value of get_Lateral is being discarded - you either need to modify it in place via a reference:
void MainWindow::get_Lateral(cv::Mat& img ){
Lateral.copyTo(img);
//return img;
}
or assign back to your local cv::Mat object in main:
MainWindow Mw;
cv::Mat op;
op = Mw.get_Lateral(op);
//(Mw.get_Posteroanterior()).copyTo(op);
if (!op.data)
    cv::namedWindow("dud Image");
else{
    cv::namedWindow("Lateral Image");
    cv::imshow("Lateral Image", op);
}

Extracting image from QMediaPlayer Video

I am using Qt Creator to implement an application that plays a video, and by clicking a button I want to save the frame that is currently being shown. Then I will process that frame with OpenCV.
With a video being displayed with QMediaPlayer, how can I extract a frame from it? I should then be able to convert that frame to a cv::Mat image in OpenCV.
Thanks
QMediaPlayer *player = new QMediaPlayer();
QVideoProbe *probe = new QVideoProbe;
connect(probe, SIGNAL(videoFrameProbed(QVideoFrame)), this, SLOT(processFrame(QVideoFrame)));
probe->setSource(player); // Returns true, hopefully.
processFrame slot:
void processFrame(QVideoFrame const&) {
if (isButtonClicked == false) return;
isButtonClicked = false;
...
process frame
...
}
QVideoProbe reference
QVideoFrame reference
You can use QVideoFrame::bits() to process the image with OpenCV.
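For example, a rough sketch using the Qt 5 multimedia API; the pixel format is assumed to be RGB32 here and should be checked in real code:

#include <QVideoFrame>
#include <QAbstractVideoBuffer>
#include <opencv2/opencv.hpp>

void processFrame(const QVideoFrame& frame)
{
    QVideoFrame copy(frame);                       // mapping needs a non-const frame
    if (!copy.map(QAbstractVideoBuffer::ReadOnly))
        return;

    // Wrap the mapped buffer without copying; clone() it if it must outlive unmap().
    cv::Mat view(copy.height(), copy.width(), CV_8UC4,
                 copy.bits(), copy.bytesPerLine());
    cv::Mat bgr;
    cv::cvtColor(view, bgr, cv::COLOR_BGRA2BGR);   // drop the alpha channel before processing

    copy.unmap();
}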