Is it possible to make a thread inside another thread in MFC? - c++

I have a problem with threads in MFC using OpenCV. Let me describe the problem first. I have a GUI that displays video frames from a camera, so I use one thread to grab video from the camera and display it in the GUI. That part works. However, I want to extend it: while the video is displaying, I also want to show that video in a separate OpenCV window with the commands
IplImage* image2=cvCloneImage(&(IplImage)original);
cvShowImage("Raw Image", image2);
cvReleaseImage(&image2);
As I understand it, I need to create a new thread inside the thread that grabs video from the camera. Is that possible? Please look at my code; could you give me a solution or suggestion for this task? Thank you so much.
This is my code
THREADSTRUCT *_param = new THREADSTRUCT;
_param->_this = this;
CWinThread* m_hThread;
m_hThread = AfxBeginThread (StartThread, _param);
In the StartThread function, I load the video from the camera like this:
UINT Main_MFCDlg::StartThread(LPVOID param)
{
    THREADSTRUCT* ts = (THREADSTRUCT*)param;
    cv::VideoCapture cap;
    cap.open(0);
    while (true)
    {
        Mat frame;
        Mat original;
        cap >> frame;
        if (!frame.empty()) {
            original = frame.clone();
            // Display video in GUI
            CDC* vDC_VIDEO;
            vDC_VIDEO = ts->_this->GetDlgItem(IDC_VIDEO)->GetDC();
            CRect rect_VIDEO;
            ts->_this->GetDlgItem(IDC_VIDEO)->GetClientRect(&rect_VIDEO);
            // Is it possible to create a thread here to show the video with a
            // different delay, e.g. 1000 ms,
            // calling cv::imshow("Second window", original)?
        }
        if (waitKey(30) >= 0) break; // delay 30 ms for the first window
    }
    return 0; // a thread controlling function must return a UINT
}
Note that the thread struct looks like this:
//structure for passing to the controlling function
typedef struct THREADSTRUCT
{
Main_MFCDlg* _this;
} THREADSTRUCT;

Of course you can create a thread inside a thread. Your problem is that a worker thread should not directly access UI objects that belong to the UI thread; the details are explained here.
In your worker thread, after the background job is done, you can use PostMessage (or SendMessage) to send a message to the GUI and let it update itself.

Related

Is there a way to avoid buffering images when using RTSP camera in OpenCV?

Let's say we have a time-consuming process that takes about 2 seconds to complete on any frame.
As we capture frames with VideoCapture, frames are stored in a buffer behind the scenes (it may even happen in the camera itself), and when processing of the nth frame completes, the next read grabs the (n+1)th frame. However, I would like to grab frames in real time, not in order (i.e. skip the intermediate frames).
For example, you can test the example below:
cv::Mat frame;
cv::VideoCapture cap("rtsp url");
while (true) {
    cap.read(frame);
    cv::imshow("s", frame);
    cv::waitKey(2000); // artificial delay
}
Run the example above and you will see that the displayed frame belongs to the past, not the present. I am wondering how to skip those frames.
NEW METHOD
After you create the VideoCapture, you can set the following property in OpenCV:
VideoCapture cap(0);
cap.set(CV_CAP_PROP_BUFFERSIZE, 1); // now OpenCV buffers just one frame (cv::CAP_PROP_BUFFERSIZE in newer versions; support depends on the capture backend)
USING THREAD
The task was completed using multi-threaded programming. What is the point? If you are using a weak processor such as a Raspberry Pi, or you have a large algorithm that takes a long time to run, you may run into this problem, which produces a large lag in your frames.
This problem was solved in Qt by using the QtConcurrent class, which can run a thread easily with very little code. Basically, we run our main thread as the processing unit and run a second thread that continuously captures frames; when the main thread finishes processing one frame, it asks the second thread for another. In the second thread it is very important to capture frames without delay, so if the processing unit takes two seconds and a frame is captured in 0.2 seconds, the frames in between are simply dropped.
The project is laid out as follows.
1.main.cpp
#include <opencv2/opencv.hpp>
#include <new_thread.h> // custom class used to grab frames (we dedicate a new thread to it)
#include <QtConcurrent/QtConcurrent>
#include <QThread> // needed to generate the artificial delay, which in a real situation would come from a weak processor, etc.
using namespace cv;

int main()
{
    Mat frame;
    new_thread th; // create an object of the new_thread class
    QtConcurrent::run(&th, &new_thread::get_frame); // start a thread with QtConcurrent
    QThread::msleep(2000); // the camera needs some time to open
    while (true)
    {
        frame = th.return_frame();
        imshow("frame", frame);
        waitKey(20);
        QThread::msleep(2000); // artificial delay, or a big process
    }
}
2.new_thread.h
#include <opencv2/opencv.hpp>
using namespace cv;

class new_thread
{
public:
    new_thread();         // constructor
    void get_frame(void); // grab frames
    Mat return_frame();   // return the latest frame to main.cpp
private:
    VideoCapture cap;
    Mat frame; // latest captured frame
};
3.new_thread.cpp
#include <new_thread.h>
#include <QThread>

new_thread::new_thread()
{
    cap.open(0); // open the camera when the class is constructed
}

void new_thread::get_frame() // grab frames
{
    while (1) // while(1) because we want to grab frames continuously
    {
        cap.read(frame);
    }
}

Mat new_thread::return_frame()
{
    return frame; // whenever main.cpp needs an updated frame, it can request the last one with this method
}

QLabel is not displaying video

It may be a very silly problem, but I am really stuck on this.
I am trying to display a video frame by frame in a QLabel. In the user interface there is a QPushButton; by clicking it, the user can select the video. The QString of the video file name is then obtained and converted to cv::String so the video can be loaded using the OpenCV libraries. After loading, every Mat3b frame from the video is converted to a QImage so the frames can be displayed in the QLabel. But when I run this program, the QLabel does not display the video, and after a few moments it crashes, showing "Project.exe is not responding".
This may be a bit convoluted, but it was done this way so that specific OpenCV methods can be applied to each frame if needed. Here is the code responsible for this:
void MainWindow::on_Browse_clicked()
{
    QFileDialog dialog(this);
    dialog.setNameFilter(tr("Videos (*.avi)"));
    dialog.setViewMode(QFileDialog::Detail);
    QString videofileName = QFileDialog::getOpenFileName(this, tr("Open File"), "C:/", tr("Videos (*.avi)"));
    if (!videofileName.isEmpty())
    {
        String videopath;
        videopath = videofileName.toLocal8Bit().constData();
        bool playVideo = true;
        VideoCapture cap(videopath);
        if (!cap.isOpened())
        {
            QMessageBox::warning(this, tr("Warning"), tr("Error loading video."));
            exit(0);
        }
        Mat frame;
        while (1)
        {
            if (playVideo)
                cap >> frame;
            Mat3b src = frame;
            QImage dest = Mat3b2QImage(src); // convert Mat3b to QImage
            ui->label->setPixmap(QPixmap::fromImage(dest));
            if (frame.empty())
            {
                QMessageBox::warning(this, tr("Warning"), tr("Video frame cannot be opened."));
                break;
            }
        }
    }
}
But when I added the following few lines just before the last three closing braces, both the QLabel and the cv::window display the video:
imshow("Video", src);
char key = waitKey(10);
if (key == 'p')
    playVideo = !playVideo;
if (key == 'q')
    break;
But I don't want to display it in a cv::window. Can anyone help me fix this? I appreciate any help.
Thanks in advance.
The GUI thread is busy in the infinite while loop, so you never give Qt the chance to update the GUI.
You should add QApplication::processEvents inside the loop, which:
Processes all pending events for the calling thread [...].
You can call this function occasionally when your program is busy performing a long operation
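Applied to the loop in the question, the fix is a single extra call per iteration (a sketch; Mat3b2QImage is the asker's own helper):

```cpp
while (1)
{
    if (playVideo)
        cap >> frame;
    if (frame.empty())
        break;
    QImage dest = Mat3b2QImage(frame);
    ui->label->setPixmap(QPixmap::fromImage(dest));
    QApplication::processEvents(); // let Qt repaint the label and handle input
}
```

A cleaner long-term design is to drive playback from a QTimer instead of a blocking while loop, so the event loop keeps running naturally between frames.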

Capturing camera frames once a while

I have a system that typically runs at a scan rate of 100 Hz, i.e. 10 ms, performing time-critical tasks. I'm trying to add a camera with OpenCV to capture an image for quality control once in a while (it depends on when a user interacts with the system, so the pauses can be anywhere from 10 seconds to minutes).
Here is what my code does:
int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    UMat frame;
    for (;;) {
        if (timing_variable_100Hz) {
            cap >> frame; // get a new frame from camera
            // *Do something time critical*
            if (some_criteria_is_met) {
                if (!frame.empty()) imwrite("Image.jpg", frame);
            }
        }
    }
    return 0;
}
Now the issue I'm having is that cap >> frame takes a lot of time.
My scan time normally runs at around 3 ms, and now it is at 40 ms. My question is: is there any way to open the camera and capture once, and then not have to capture every frame afterwards until I need to? I tried moving cap >> frame inside if (some_criteria_is_met), which allowed me to capture the first image correctly, but the second image, taken a few minutes later, was only a single frame past the first captured image (I hope that makes sense).
Thanks
The problem is that your camera has a frame rate of less than 100 fps, probably 30 fps (judging from the 32 ms you measured), so a grab waits for a new frame to become available.
Since there is no way to do a non-blocking read in OpenCV, I think your best option is to do the video grabbing in another thread.
Something like this, if you use C++11 (this is an example, and I am not sure it is entirely correct):
void camera_loop(std::atomic<bool>& capture, std::atomic<bool>& stop)
{
    VideoCapture cap(0);
    Mat frame;
    while (!stop)
    {
        cap.grab();
        if (capture)
        {
            cap.retrieve(frame);
            // do whatever you do with the frame
            capture = false;
        }
    }
}

int main()
{
    std::atomic<bool> capture(false), stop(false); // note: atomics cannot be copy-initialized with '='
    std::thread camera_thread(camera_loop, std::ref(capture), std::ref(stop));
    for (;;)
    {
        // do something time critical
        if (some_criteria_is_met)
        {
            capture = true;
        }
        // break out of this loop on some exit condition so the lines below are reached
    }
    stop = true;
    camera_thread.join();
}
This doesn't answer your question of "is there any way to open the camera, capture, then not have to capture every frame until I have to?", but here is a suggestion:
You could have cap >> frame in a background thread that is responsible only for capturing the frames.
Once a frame is in memory, push it to some sort of shared cyclic queue to be accessed from the main thread.
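One way to sketch such a cyclic queue in portable C++ (an illustration, not a specific library API): a fixed-capacity ring buffer that overwrites its oldest element when full, so a slow consumer always finds recent items; with OpenCV the element type would be cv::Mat, pushed as frame.clone().

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Fixed-capacity cyclic queue that overwrites the oldest element when full,
// so the consumer sees recent items instead of arbitrarily stale ones.
template <typename T>
class CyclicQueue {
public:
    explicit CyclicQueue(std::size_t capacity) : buf_(capacity) {}

    void push(const T& v) {
        std::lock_guard<std::mutex> lock(m_);
        buf_[head_] = v;
        head_ = (head_ + 1) % buf_.size();
        if (size_ == buf_.size())
            tail_ = (tail_ + 1) % buf_.size(); // full: drop the oldest element
        else
            ++size_;
    }

    bool pop(T& out) {
        std::lock_guard<std::mutex> lock(m_);
        if (size_ == 0)
            return false; // nothing captured yet
        out = buf_[tail_];
        tail_ = (tail_ + 1) % buf_.size();
        --size_;
        return true;
    }

private:
    std::mutex m_;
    std::vector<T> buf_;
    std::size_t head_ = 0, tail_ = 0, size_ = 0;
};
```

The capture thread pushes every grabbed frame; the main thread pops whenever it is ready, and anything it was too slow to consume has already been discarded in favour of newer frames.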

How to show different Frame per second of video in two window in opencv

I am using OpenCV to show frames from a camera, and I want to show them in two separate windows. The first window should show the real frames from the camera (a new frame every 30 milliseconds), and the second window should show the frames with a delay (a new frame every 1 second). Is it possible to do that? I tried with the code below, but it does not work well. Please give me a solution for this task using OpenCV and Visual Studio 2012. Thanks in advance.
This is my code
VideoCapture cap(0);
if (!cap.isOpened())
{
    cout << "exit" << endl;
    return -1;
}
namedWindow("Window 1", 1);
namedWindow("Window 2", 2);
long count = 0;
Mat face_algin;
while (true)
{
    Mat frame;
    Mat original;
    cap >> frame;
    if (!frame.empty()) {
        original = frame.clone();
        cv::imshow("Window 1", original);
    }
    if (waitKey(30) >= 0) break; // delay 30 ms for the first window
}
You could write the loop that displays frames as a single function, taking the video source and frame rate as arguments, and run it twice simultaneously using multi-threading.
The pseudocode would look like this:
void* play_video(void* frame_rate)
{
    // play at the specified frame rate
}

main()
{
    create_thread(thread1, play_video, normal_frame_rate);
    create_thread(thread2, play_video, delayed_frame_rate);
    join_thread(thread1);
    join_thread(thread2);
}
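A hedged C++11 fleshing-out of that pseudocode using std::thread; a counter stands in for the actual cap >> frame / imshow work so the skeleton stays self-contained, and the periods are shortened for illustration (the real values would be 30 ms and 1000 ms).

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Skeleton of the two-window idea: each thread "plays" at its own rate.
// In the real program the loop body would grab a frame and call cv::imshow.
void play_video(int period_ms, int n_frames, std::atomic<int>& shown)
{
    for (int i = 0; i < n_frames; ++i) {
        // cap >> frame; imshow(window_name, frame);  // real work would go here
        ++shown;
        std::this_thread::sleep_for(std::chrono::milliseconds(period_ms));
    }
}

int frames_shown(int n_frames)
{
    std::atomic<int> fast(0), slow(0);
    std::thread t1(play_video, 10, n_frames, std::ref(fast)); // the "30 ms" window, sped up
    std::thread t2(play_video, 30, n_frames, std::ref(slow)); // the "1 s" window, sped up
    t1.join();
    t2.join();
    return fast.load() + slow.load();
}
```

One caveat: OpenCV's highgui functions (imshow, waitKey) are not guaranteed to be thread-safe, so in practice it is often safer to let worker threads only grab frames and do all displaying from a single thread.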

OpenCV Running Video from WebCam on different thread

I have 2 webcams and I want to get input from both of them at the same time, so I believe I have to work with threads in C++, i.e. pthreads. When I run the code below, the webcam turns on for a second and then the routine exits. I can't figure out what is wrong in my code.
void* WebCam(void* arg)
{
    VideoCapture cap(0);
    for (;;) {
        Mat frame;
        cap >> frame; // cap is not a pointer, so no dereference here
        resize(frame, frame, Size(640, 480));
        flip(frame, frame, 1);
        imshow("frame", frame);
        if (waitKey(30) >= 0)
            break;
    }
    pthread_exit(NULL);
}

int main()
{
    pthread_t thread1, thread2;
    pthread_create(&thread1, NULL, &WebCam, NULL);
    return 0;
}
This is done for just one webcam, to turn it on and stream; once it works, the other will simply be a copy of it.
When you create the thread it starts running, but your main program, which is still executing, terminates immediately afterwards, making the child thread finish too. Try adding this after pthread_create:
pthread_join(thread1, NULL);
By the way, even if you have two cameras, you can avoid using threads. I am not sure, but threads could be problematic when dealing with the highgui functions (imshow, waitKey), because you must make sure they are thread-safe; otherwise, what would be the result of two threads calling waitKey at the same time?
You could get rid of threads with a design similar to this one:
VideoCapture cap0(0);
VideoCapture cap1(1);
for (;;)
{
    cv::Mat im[2];
    cap0 >> im[0];
    cap1 >> im[1];
    // check which of im[i] is non-empty and do something with it
}