Opencv writing and reading from a buffer - c++

I have two tasks (threads), each running on a different processor (core). The first captures an image repeatedly using OpenCV's VideoCapture.
I only used these two lines for the capture:
cv::Mat frame;
capture.read(frame);
Now I want to display the captured image from the second task. After executing the imshow call within the second task's code:
cv::imshow("Display window", frame);
I got the following error output:
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/highgui/src/window.cpp, line 304
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/highgui/src/window.cpp:304: error: (-215) size.width>0 && size.height>0 in function imshow
So, how can I avoid this error?
The complete code is hosted at Github

cv::VideoCapture::read() returns a bool indicating whether the read was successful.
You are passing an empty frame to cv::imshow(). Check that the read succeeded before trying to show the frame:
cv::Mat frame;
if (capture.read(frame))
{
    cv::imshow("Display window", frame);
}
EDIT
OP posted a link to the code. In his program, frame is declared as a global variable. On line 120, capture.read(frame) writes into frame, and on line 140, imshow(frame) reads from it without any mutex; that's a data race. Correct code should be along the lines of:
#include <mutex>
#include <opencv2/opencv.hpp>

std::mutex mutex;
cv::Mat frame;

// capture thread
{
    std::lock_guard<std::mutex> lock(mutex);
    capture.read(frame);
}

// display thread
{
    std::lock_guard<std::mutex> lock(mutex);
    cv::imshow("Display window", frame);
}

The problem with your code is that there is a data race. Imagine the display thread runs first: it locks the frame and tries to display it before anything has been read into it, hence the error.
If you want a synchronized solution, use a pthread condition variable and wait until an image has been read before signaling the display function; otherwise you end up with a busy wait!
// in the declaration part
// Declaration of the thread condition variable, plus a flag the waiter
// checks; it guards against spurious wakeups of pthread_cond_wait()
pthread_cond_t cond1 = PTHREAD_COND_INITIALIZER;
bool frame_ready = false;

// in the display function
ptask DisplyingImageTask()
{
    int task_job = 0;
    cv::namedWindow("Display window", cv::WINDOW_AUTOSIZE);
    while (1)
    {
        std::cout << "The job " << task_job << " of Task T" << ptask_get_index()
                  << " is running on core " << sched_getcpu() << " at time : "
                  << ptask_gettime(MILLI) << std::endl;
        pthread_mutex_lock(&frame_rw);
        // wait for the read function to send a signal
        while (!frame_ready)
            pthread_cond_wait(&cond1, &frame_rw);
        frame_ready = false;
        cv::imshow("Display window", frame);
        cv::waitKey(1);
        pthread_mutex_unlock(&frame_rw);
        ptask_wait_for_period();
        task_job++;
    }
}

// in the read function
ptask CapturingImageTask()
{
    int task_job = 0;
    while (1)
    {
        std::cout << "The job " << task_job << " of Task T" << ptask_get_index()
                  << " is running on core " << sched_getcpu() << " at time : "
                  << ptask_gettime(MILLI) << std::endl;
        pthread_mutex_lock(&frame_rw);
        capture.read(frame);
        frame_ready = true;
        // after capturing the frame, signal the display task; everything
        // stays synchronized as you want
        pthread_cond_signal(&cond1);
        pthread_mutex_unlock(&frame_rw);
        ptask_wait_for_period();
        task_job++;
    }
}
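For reference, here is the same producer/consumer handshake in portable C++11 with std::condition_variable (a sketch with an int standing in for the frame; note the predicate flag, which guards against spurious wakeups and lost signals, something the pthread version needs as well):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

std::mutex frame_mutex;
std::condition_variable frame_cv;
bool frame_ready = false;   // the predicate the waiter checks
int latest_frame = -1;      // stand-in for the global cv::Mat

// Capture-task body: publish a frame and wake the display task.
void capture_one(int value) {
    {
        std::lock_guard<std::mutex> lock(frame_mutex);
        latest_frame = value;
        frame_ready = true;
    }
    frame_cv.notify_one();
}

// Display-task body: block until a frame is ready, then consume it.
int display_one() {
    std::unique_lock<std::mutex> lock(frame_mutex);
    // wait() with a predicate loops internally until the flag is set.
    frame_cv.wait(lock, [] { return frame_ready; });
    frame_ready = false;
    return latest_frame;
}
```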

int main()
{
    cv::VideoCapture cap(0);
    while (1) {
        cv::Mat frame;
        cap >> frame;
        if (frame.empty()) break;   // camera gone or read failed
        cv::imshow("frame", frame);
        cv::waitKey();
    }
}
You can try this. If you call waitKey() with no argument, the code waits for a key press before grabbing and showing the next frame.

As others have mentioned, try using a mutex.
You can also test the cv::Mat before trying to display it:
if (!frame.empty())
    cv::imshow("window", frame);
This checks that the frame to be displayed actually has data and thus avoids the imshow error.
Again, this condition only avoids the imshow error; it does not solve the original problem which, as mentioned in the other answers, is a data race between the two threads.

Related

How to get rid of OpenCv's Bad argument error

I have three periodic real-time tasks that execute simultaneously, each on a different processor (core), for video capturing using OpenCV.
To synchronize the three tasks, I use an array of cv::Mat and swap the index each time to avoid a data race. Thanks to the answers to my previous post on buffer reading/writing synchronization, I came up with this solution:
cv::Mat frame[3];
int frame_index_write = 0;
int frame_index_read = 1;
int frame_index_read_ = 2;
int SwapIndex(int *fi){
return *fi = (*fi + 1) % 3;
}
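The rotation can be sanity-checked in isolation (this only exercises the index arithmetic, not the synchronization between the tasks; the distinctness holds only if the three tasks actually advance in lockstep, which is what the condition broadcast below is trying to ensure):

```cpp
#include <cassert>

// Advance an index through the 3-slot ring: 0 -> 1 -> 2 -> 0 -> ...
int SwapIndex(int *fi) {
    return *fi = (*fi + 1) % 3;
}

// If all three indices advance in lockstep they stay pairwise distinct,
// which is what keeps the writer and the two readers in separate slots.
bool indices_stay_distinct(int cycles) {
    int w = 0, r1 = 1, r2 = 2;    // same starting values as in the question
    for (int c = 0; c < cycles; ++c) {
        SwapIndex(&w); SwapIndex(&r1); SwapIndex(&r2);
        if (w == r1 || w == r2 || r1 == r2) return false;
    }
    return true;
}
```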
Now, the first task grabs a capture, stores it in the buffer, and broadcasts a signal to the other tasks so they can get the captured frame:
while (1)
{
capture.grab();
capture.retrieve(frame[frame_index_write], CHANNEL);
SwapIndex(&frame_index_write);
pthread_cond_broadcast(&synch_condition); /* After capturing the frame
signal the displaying tasks*/
}
The second task gets the captured frame and displays it:
while (1)
{
pthread_cond_wait(&synch_condition, &frame_rw); /*wait for the capturing
func to send a signal*/
if (frame[frame_index_read].data)
{
cv::imshow(CAPTURED_IMAGE_WINDOW_NAME, frame[frame_index_read]);
SwapIndex(&frame_index_read);
cv::waitKey(1);
}
else{
std::cout << "Frame reading error" << std::endl;
}
}
The third task gets the captured frame and applies an edge detection process before displaying it :
while (1)
{
pthread_cond_wait(&synch_condition, &frame_rw); /*wait for the capturing
func to send a signal*/
if (frame[frame_index_read_].data)
{
cv::cvtColor(frame[frame_index_read_], gray_capture, cv::COLOR_BGR2GRAY);
cv::blur(gray_capture, detected_edges, cv::Size(3, 3));
cv::Canny(detected_edges, detected_edges, 0, 100, 3);
cv::imshow(EDGE_IMAGE_WINDOW_NAME, detected_edges);
SwapIndex(&frame_index_read_);
cv::waitKey(1);
}
else{
std::cout << "Frame reading error" << std::endl;
}
}
My code seems to work and the results are as expected, but when I stop the execution in the terminal I get the following output indicating an error:
VIDIOC_DQBUF: Invalid argument
OpenCV Error: Bad argument (Unknown array type) in cvarrToMat, file /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/core/src/matrix.cpp, line 943
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/core/src/matrix.cpp:943: error: (-5) Unknown array type in function cvarrToMat
Is there a way to handle this kind of error?
The complete code is hosted at Github
Any help would be so greatly appreciated on this topic.

C++ suddenly blocks on reading a frame from IP camera using VideoCapture

I'm using OpenCV 3. Grabbing a frame with VideoCapture from an IP camera blocks if the camera gets disconnected from the network or there is an issue with a frame.
I first check videoCapture.isOpened(). If it is, I tried each of these methods, but nothing seems to work:
1) grabber >> frame
if(grabber.isOpened()) {
grabber >> frame;
// DO SOMETHING WITH FRAME
}
2) grab/retrieve
if(grabber.isOpened()) {
if(!grabber.grab()){
cout << "failed to grab from camera" << endl;
} else {
if (grabber.retrieve(frame,0) ){
// DO SOMETHING WITH FRAME
} else {
// SHOW ERROR
}
}
}
3) read
if(grabber.isOpened()) {
if ( !grabber.read(frame) ) {
cout << "Unable to retrieve frame from video stream." << endl;
}
else {
// DO SOMETHING WITH FRAME
}
}
The video stream gets stuck at some point while grabbing a frame with all of the previous options; each one blocks but doesn't exit or return any error.
Do you know a way to handle or solve this? Maybe some validation, a try/catch, or a timer?
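One way to bound a blocking read with a timer, sketched here without OpenCV: run the grab on a helper thread and wait on a future with a timeout. The callable stands in for a lambda wrapping grabber.read(frame). Note that the stuck thread itself cannot be cancelled; on timeout you would typically abandon it and reopen the VideoCapture.

```cpp
#include <cassert>
#include <chrono>
#include <future>
#include <thread>

// Run `grab` on a detached thread and wait at most `timeout` for it to
// finish. Returns false on timeout; otherwise stores grab's result in `ok`.
template <typename Grab>
bool grab_with_timeout(Grab grab, std::chrono::milliseconds timeout, bool& ok) {
    std::packaged_task<bool()> task(std::move(grab));
    auto fut = task.get_future();
    // Detached, so a stuck grab doesn't block us when we give up on it.
    std::thread(std::move(task)).detach();
    if (fut.wait_for(timeout) == std::future_status::timeout)
        return false;
    ok = fut.get();
    return true;
}
```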
This issue is solved by this merge, but unfortunately the fixed opencv_ffmpeg.dll has not been released yet.
You can find an updated opencv_ffmpeg.dll here and test it.

Communicating with a loop in another thread

So my task is this - I have a GUI thread with sliders of HSV values (among other things), and a worker thread that does all the OpenCV work and sends processed video images back to GUI thread.
Like it usually is, the OpenCV work is inside of an endless loop. The thing is, half the work is transforming the current video frame according to HSV values sent from GUI sliders. If sent before the loop starts, it works. But not while it's going on, and I need it to work on the fly.
Is there any good way to communicate with that thread and change the HSV values the OpenCV loop is using, or is it a fool's errand? I can think of two solutions, one of which is probably highly inefficient (it involves saving the values to a file). I'm fairly new to Qt, and I could've easily missed something in the documentation and tutorials.
edit:
Here's how my app works - in GUI thread, user picks a file. A signal with an url is sent to the worker thread, which starts working away. When the user changes HSV values, a signal is sent to change the values from another thread. If the loop hasn't been started, they're received and QDebug shows me that.
edit2:
I might've been thinking about it all wrong. Is there a way for the thread to pull values from the other one, instead of waiting for them to be sent?
edit3:
kalibracja.cpp, for Micka.
int hueMin=0;
int hueMax=180;
int satMin=0;
int satMax=255;
int valMin=15;
int valMax=255;
int satMinDua=133; //tests
HSV::HSV(QObject * parent) : QObject(parent)
{
hsvThread = new QThread;
hsvThread ->start();
moveToThread( hsvThread );
}
HSV::~HSV() //destruktor
{
hsvThread ->exit(0);
hsvThread ->wait();
delete hsvThread ;
}
void HSV::processFrames(QString kalibracja) {
while(1) {
cv::VideoCapture kalibrowanyPlik;
kalibrowanyPlik.open(kalibracja.toStdString());
int maxFrames = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_COUNT);
for(int i=0; i<maxFrames; i++)
{
cv::Mat frame;
cv::Mat gray;
//satMin=kontenerHsv->satMin;
qDebug() << "kalibracja satMin - " << satMin;
qDebug() << "fdfdf - " << satMinDua;
kalibrowanyPlik.read(frame);
cv::cvtColor(frame, gray, CV_BGR2GRAY);
QImage image(cvMatToQImage(frame));
QImage processedImage(cvMatToQImage(gray));
emit progressChanged(image, processedImage);
QThread::msleep(750); //so I can read qDebug() messages easly
}
}
}
void MainWindow::onProgressChagned(QImage image, QImage processedImage) {
QPixmap processed = QPixmap::fromImage(processedImage);
processed = processed.scaledToHeight(379);
ui->labelHsv->clear();
ui->labelHsv->setPixmap(processed);
QPixmap original = QPixmap::fromImage(image);
original = original.scaledToHeight(379);
ui->labelKalibracja->clear();
ui->labelKalibracja->setPixmap(original);
}
void HSV::updateHsv(QString hmin, QString hmax, QString smin, QString smax, QString vmin, QString vmax){
satMinDua=smin.toInt();
}
mainwindow.cpp connection
HSV *hsv = new HSV;
(.... all kinds of things ....)
void MainWindow::updateHsvValues() {
QMetaObject::invokeMethod(hsv, "updateHsv", Qt::QueuedConnection,
Q_ARG(QString, hmin),
Q_ARG(QString, hmax),
Q_ARG(QString, smin),
Q_ARG(QString, smax),
Q_ARG(QString, vmin),
Q_ARG(QString, vmax));
}
It is certainly possible, but you need to be careful.
One of the ways to achieve this would be:
Create an object that stores the "current" HSV values to be used
Give a reference (or pointer) to this object to both the GUI thread and the OpenCV thread
When the GUI wants to "tell" the processing thread to use new values, it publishes them to that object
When the processing thread is ready to move on to the next frame (start of the loop body), it fetches the values from that object.
You only need to make sure that the set and get methods on that shared object are synchronized, using a mutex for example, to prevent the processing thread from reading half-written values (data races lead to undefined behavior in C++).
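A minimal sketch of such a shared parameter object. For a single int, a std::atomic is enough; a struct holding all six HSV values would need a mutex around set/get instead:

```cpp
#include <atomic>
#include <cassert>

// Published by the GUI thread, polled by the OpenCV loop at the start
// of each frame. Relaxed ordering suffices: only the value itself matters,
// not its ordering relative to other memory operations.
class SharedParam {
public:
    void set(int v) { value_.store(v, std::memory_order_relaxed); }     // GUI side
    int  get() const { return value_.load(std::memory_order_relaxed); } // worker side
private:
    std::atomic<int> value_{0};
};
```

The worker loop would call get() once at the top of each iteration and use that snapshot for the whole frame, so the parameters cannot change mid-frame.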
If you use QThread in the "wrong" way (by subclassing QThread and overriding ::run; compare https://mayaposch.wordpress.com/2011/11/01/how-to-really-truly-use-qthreads-the-full-explanation/ ), signal-slot parameter changes work in endless loops too:
This is a small sample thread for testing:
void MyThread::run()
{
// start an infinite loop and test whether the sliderchange changes my used parameters
std::cout << "start infinite loop" << std::endl;
while(true)
{
unsigned long long bigVal = 0;
int lastVal = mValue;
std::cout << "start internal processing loop " << std::endl;
for(unsigned long long i=0; i<1000000000; ++i)
{
bigVal += mValue;
if(lastVal != mValue)
{
std::cout << "changed value: " << mValue << std::endl;
lastVal = mValue;
}
}
std::cout << "end internal processing loop: " << bigVal << std::endl;
}
std::cout << "stop infinite loop" << std::endl;
}
with this SLOT, which is connected to the main window slider SIGNAL
void MyThread::changeValue(int newVal)
{
// change a paramter. This is a slot which will be called by a signal.
// TODO: make this call thread-safe, e.g. by atomic operations, mutual exclusions, RW-Lock, producer-consumer etc...
std::cout << "change value: " << newVal << std::endl;
mValue = newVal;
}
giving me this result after using the slider:
this is how the slot was connected:
QObject::connect(mSlider, SIGNAL(valueChanged(int)), mTestThread, SLOT(changeValue(int)) );
if the infinite loop is performed as some kind of worker-object method that was moved to the thread with moveToThread, you can either change how the slot is connected:
QObject::connect(mSlider, SIGNAL(valueChanged(int)), mTestThread, SLOT(changeValue(int)), Qt::DirectConnection );
I've never used it this way, but I guess the same should work for invokeMethod:
QMetaObject::invokeMethod(hsv, "updateHsv", Qt::DirectConnection, ...
(the main thread will then call changeValue, so the worker thread doesn't need to stop processing to change values; the value access should therefore be made thread safe!)
or you have to process the event queue of that thread:
while(true)
{
[processing]
QApplication::processEvents();
}
I think the simplest solution here may be to take advantage of the fact that Qt Signals/Slots work across threads.
Setup the appropriate slots in the processing thread and then signal them from the GUI thread.
There are all sorts of interesting questions about whether you signal for every user input, or whether you batch up changes for a moment on the GUI side...
There is some ideas for thread sync in the docs: http://doc.qt.io/qt-5/threads-synchronizing.html

updating cv::capturevideo frame in a boost::thread safely

I want to render two cameras via OpenGL at the same time, refreshing each frame as soon as possible. To update those frames I use a thread in an infinite loop.
My actual problem is that my program crashes if I set the resolution of those cameras too high.
With 160x120 there is no problem, but at the maximum resolution (1920x1080) there are just 5 or 6 updates of the image before it crashes.
Note: the number of updates before the crash is not always the same.
I suppose that if the resolution is low enough, frame_L and frame_R change quickly enough that there is no collision between the main loop and the thread.
So I suppose my mutex isn't doing what it should. How should I do this?
(I'm not an expert in threads and variable safety.)
My code:
#include <boost/thread/mutex.hpp>
#include <boost/thread/thread.hpp>
#include <opencv2/opencv.hpp>
boost::mutex m; //for an other thread
boost::mutex n;
cv::VideoCapture capture_L(0);
cv::VideoCapture capture_R(1);
cv::Mat frame_L;
cv::Mat frame_R;
void MyThreadFunction()
{
while (1)
{
{
boost::mutex::scoped_lock lk(n);
if (capture_L.grab()){
capture_L.retrieve(frame_L);
cv::transpose(frame_L, frame_L);
}
if (capture_R.grab()){
capture_R.retrieve(frame_R);
cv::transpose(frame_R, frame_R);
}
}
}
}
int main()
{
capture_L.set(CV_CAP_PROP_FRAME_WIDTH, 160);
capture_L.set(CV_CAP_PROP_FRAME_HEIGHT, 120);
capture_R.set(CV_CAP_PROP_FRAME_WIDTH, 160);
capture_R.set(CV_CAP_PROP_FRAME_HEIGHT, 120);
boost::thread thrd(&MyThreadFunction);
while(1)
{
[ use frame_L and frame_R ]
}
}
This is the code that I use for my threaded camera grab. It's part of a camera object (each camera has its own object and its own thread for capturing).
void Camera::setCapture(cv::VideoCapture cap)
{
pthread_mutex_lock(&latestFrameMutex);
videoCapture = cap;
videoCapture.read(latestFrame);
pthread_mutex_unlock(&latestFrameMutex);
int iret = pthread_create(&cameraGrabThread,NULL,&Camera::exec,this);
}
void *Camera::exec(void* thr)
{
    reinterpret_cast<Camera *>(thr)->grabFrame();
    return nullptr;   // required: the function must return a value
}
This ensures that a new thread is started when a capture is set for that camera. The following code is run by the exec part of the thread to actually grab the frame.
void *Camera::grabFrame()
{
    while (videoCapture.isOpened())
    {
        pthread_mutex_lock(&latestFrameMutex);
        if (!videoCapture.read(latestFrame))
            std::cout << "Unable to read frame" << std::endl;
        pthread_mutex_unlock(&latestFrameMutex);
        usleep(8000); // Sleep 8 ms
    }
    return nullptr;
}
And finally, the camera needs to be able to return the latest frame;
cv::Mat Camera::getLatestFrame()
{
while (getMilisecSinceLastCapture() < 35) //Enforce min time of 35 ms between frame requests.
{
usleep(5000);
}
pthread_mutex_lock(&latestFrameMutex);
cv::Mat result = latestFrame.clone(); // deep copy while holding the lock
pthread_mutex_unlock(&latestFrameMutex);
return result;
}
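The 35 ms gate at the top of getLatestFrame() can be factored into a small helper; a sketch with std::chrono (getMilisecSinceLastCapture() is assumed to be the camera's own timer, replaced here by a steady clock). Unlike the sleep loop above, this gate rejects early requests instead of blocking; either behavior throttles callers to one frame per interval.

```cpp
#include <cassert>
#include <chrono>

// Accepts a request only if at least `min_interval` has elapsed since
// the last accepted one; the very first request is always accepted.
class RateGate {
public:
    explicit RateGate(std::chrono::milliseconds min_interval)
        : min_interval_(min_interval),
          last_(std::chrono::steady_clock::now() - min_interval) {}

    bool ready() {
        auto now = std::chrono::steady_clock::now();
        if (now - last_ < min_interval_)
            return false;          // too soon: reject this request
        last_ = now;               // accept and record the timestamp
        return true;
    }

private:
    std::chrono::milliseconds min_interval_;
    std::chrono::steady_clock::time_point last_;
};
```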
Changing the size can be done by using
void Camera::setImageSize(const cv::Size& size)
{
pthread_mutex_lock(&latestFrameMutex);
videoCapture.set(CV_CAP_PROP_FRAME_HEIGHT, size.height);
videoCapture.set(CV_CAP_PROP_FRAME_WIDTH, size.width);
pthread_mutex_unlock(&latestFrameMutex);
}

OpenCV VideoCapture returns an empty frame only in first call to glutDisplayFunc callback

I have been trying to work with OpenCV and freeglut.
The program involves capturing an image from a WebCam, processing the image with OpenCV, and drawing 3D objects with OpenGL according to the processed image.
It works perfectly fine when I only use OpenCV routines.
The problem arises when the main loop becomes controlled by GLUT. When I try to grab a frame from within a callback I registered with glutDisplayFunc() the image returned is empty.
Strangely, however, when I grab a frame from a callback I registered with glutIdleFunc() it successfully returns a frame.
And after doodling around I figured out that somehow a frame cannot be captured in the first call of display(), but it works from the second call on.
Currently my code is querying a frame inside the idle() function.
Regarding such background I have several questions.
Why does this happen? Is it because the program stalls inside display() before VideoCapture gains full access to the webcam? Or is this purely a hardware problem?
Is this safe? I'm perfectly fine about grabbing a frame from within idle(), but is this method safe to use?
If not so, is there a workaround? If this approach is not safe may somebody please notify me with another way of dealing with this issue?
The program is built on OS X 10.9.1 and the libraries in use are
OpenCV 2.4.7.0
freeglut 2.0.1
Here is the simplified version of my code:
#include <opencv2/opencv.hpp>
#include <GL/freeglut.h>
#include <iostream>
cv::VideoCapture capture;
cv::Mat render;
void display()
{
std::cerr << "Grabbing frame in display()" << std::endl;
capture >> render; // This does not work on first call
if(render.empty()) {
std::cerr << "Error: Grabbing empty frame in display()" << std::endl;
}
}
void idle()
{
std::cerr << "Grabbing frame in idle()" << std::endl;
capture >> render; // This always works
if(render.empty()) {
std::cerr << "Error: Grabbing empty frame in idle()" << std::endl;
}
glutPostRedisplay();
}
int main(int argc, char* argv[])
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGBA);
glutInitWindowSize(640, 480);
int debug_window = glutCreateWindow("Debug");
glutDisplayFunc(display);
glutIdleFunc(idle);
capture.open(0);
if(!capture.isOpened()) {
std::cerr << "Error: Failed to open camera" << std::endl;
exit(1);
}
glutMainLoop();
return 0;
}
This is a known problem: some sloppy webcam drivers return an empty first frame as a kind of warm-up.
Just grab and discard one frame (capture >> render;) right after capture.open(0), before you enter glutMainLoop().