OpenCV: Debug Assertion Fail (pHead->nBlockUse) - C++

My program grabs input from the webcam and outputs the Gaussian pyramid in real time. The program runs fine, but when I exit (by pressing a key to trigger waitKey()), I get an error:
Debug Assertion Failed!
_BLOCK_TYPE_IS_VALID(pHead->nBlockUse)
Line 52: dbgdel.cpp
I suspect this is related to the buildPyramid() function I am using to create the Gaussian pyramid. The output has to be an array of Mat, and the number of Mats produced depends on the number of levels, so the output needs to be dynamically sized. I don't know whether the problem is in how I initialize the variable or in how it gets deleted at the end. I could also be completely off about the cause.
I am creating the vector of Mats with this:
std::vector<cv::Mat> GPyr;
and I am making the Gaussian Pyramid with this:
buildPyramid(imgMatNew, GPyr, levels, BORDER_DEFAULT);
Any ideas for what is causing the error?
Full Source:
#include "stdafx.h"
#include <iostream>
#include <stdio.h>
#include "opencv2/core/core.hpp"
#include "opencv2/flann/miniflann.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/photo/photo.hpp"
#include "opencv2/video/video.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/ml/ml.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/core/core_c.h"
#include "opencv2/highgui/highgui_c.h"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2\objdetect\objdetect.hpp"
using namespace cv;
using namespace std;
int main()
{
    CvCapture* capture = 0;
    // imgMatNew, imgMatOut were used to grab the current frame
    Mat frame, frameCopy, image, imgMatNew, imgMatOut;
    std::vector<cv::Mat> GPyr;
    int levels = 4;
    capture = cvCaptureFromCAM(CV_CAP_ANY); //0=default, -1=any camera, 1..99=your camera
    if (!capture)
    {
        cout << "No camera detected" << endl;
    }
    //cvNamedWindow("result", CV_WINDOW_AUTOSIZE);
    namedWindow("GPyrOut", WINDOW_AUTOSIZE);
    namedWindow("imageNew", WINDOW_AUTOSIZE);
    if (capture)
    {
        cout << "In capture ..." << endl;
        for (;;)
        {
            // capture frame from video camera
            IplImage* iplImg = cvQueryFrame(capture);
            frame = iplImg;
            // convert iplImg into Mat format for easy processing
            imgMatNew = cvarrToMat(iplImg, 1);
            // Start Image Processing Here
            buildPyramid(imgMatNew, GPyr, levels, BORDER_DEFAULT);
            // Show Window
            imshow("GPyrOut", GPyr[levels]); //show G Pyr at a certain level, max index = levels
            imshow("imageNew", imgMatNew); //show window
            if (waitKey(10) >= 0)
                break;
        }
        // waitKey(0);
    }
    cvReleaseCapture(&capture);
    return 0;
}

So, there are two things wrong here.
a) You must not use OpenCV's outdated C API; mixing C and C++ calls is the straight road to hell.
b) C++ starts indexing at 0, and the last valid index is size-1, so with 4 levels, GPyr[levels] (i.e. GPyr[4]) is out of bounds. Please run a debug build to get proper exceptions in this case!
Here's the corrected code:
Mat frame, frameCopy, image, imgMatNew, imgMatOut;
std::vector<cv::Mat> GPyr;
int levels = 4;
VideoCapture capture(0);
if (!capture.isOpened())
{
    cout << "No camera detected" << endl;
    return -1;
}
//cvNamedWindow("result", CV_WINDOW_AUTOSIZE);
namedWindow("GPyrOut", WINDOW_AUTOSIZE);
namedWindow("imageNew", WINDOW_AUTOSIZE);
cout << "In capture ..." << endl;
for (;;)
{
    // capture frame from video camera
    capture.read(frame);
    // Start Image Processing Here
    buildPyramid(frame, GPyr, levels, BORDER_DEFAULT);
    // Show Window
    imshow("GPyrOut", GPyr[levels-1]); //show last level
    imshow("imageNew", frame); //show window
    if (waitKey(10) >= 0)
        break;
}
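As a side note, if you want to display every pyramid level rather than a single one, a bounds-safe pattern is to loop over GPyr.size() instead of hard-coding an index. A minimal sketch (it needs #include <sstream> for the window names; the names themselves are arbitrary):
for (size_t i = 0; i < GPyr.size(); ++i)
{
    // one window per level; the loop bound comes from the vector itself,
    // so no index can run past the end
    std::ostringstream name;
    name << "GPyr level " << i;
    imshow(name.str(), GPyr[i]);
}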

Related

Saved video doesn't have the same duration as streamed video from a camera

I have a question about saving a video with OpenCV in C++ (I'm using Ubuntu Linux).
I have been trying to save a stream from an open camera for some time. I finally succeeded, but now I can't figure out why the saved stream doesn't have the same duration as the stream from my camera. When I stream for 10 seconds, the saved file is only 2-3 seconds long, for example, and it looks accelerated.
Does anybody have a clue what the problem could be? Is something wrong in my code, or could it be computing performance (maybe the system doesn't save every frame)?
Thanks for your help.
My code:
#include <stdio.h>
#include <iostream> // for standard I/O
#include <string> // for strings
#include <opencv2/core.hpp> // Basic OpenCV structures (cv::Mat)
#include <opencv2/videoio.hpp> // Video write
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/opencv.hpp"
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
    Mat capture;
    VideoCapture cap(0);
    if (!cap.isOpened())
    {
        cout << "Cannot connect to camera" << endl;
        return -1;
    }
    namedWindow("Display", CV_WINDOW_AUTOSIZE);
    double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH);
    double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT);
    Size frameSize(static_cast<int>(dWidth), static_cast<int>(dHeight));
    VideoWriter oVideoWriter ("/home/Stream_video/cpp/Cam/out.avi", CV_FOURCC('P','I','M','1'), 20, frameSize, true);
    if (!oVideoWriter.isOpened()) {
        cout << "ERROR: Failed to write the video" << endl;
        return -1;
    }
    while (true) {
        Mat frame;
        bool bSuccess = cap.read(frame); // read a new frame from video
        if (!bSuccess) {
            cout << "ERROR: Cannot read a frame from video file" << endl;
            break; //if not success, break loop
        }
        oVideoWriter.write(frame); //write the frame into the file
        imshow("Display", frame);
        if (waitKey(10) == 27) {
            cout << "esc key is pressed by user" << endl;
            break;
        }
    }
}
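One possible explanation for the shortened, accelerated file is a mismatch between the FPS given to the VideoWriter (a hard-coded 20 above) and the rate at which the camera actually delivers frames. Below is a rough sketch of how the real capture rate could be measured before opening the writer; it is only an illustration under that assumption, and the 100-frame sample size is arbitrary:
// time how long the camera takes to deliver a fixed number of frames,
// then hand that measured rate to the VideoWriter instead of a constant
int sampleFrames = 100;
Mat tmp;
double t0 = (double)getTickCount();
for (int i = 0; i < sampleFrames; ++i)
    cap.read(tmp);
double elapsed = ((double)getTickCount() - t0) / getTickFrequency();
double measuredFps = sampleFrames / elapsed;
VideoWriter oVideoWriter("/home/Stream_video/cpp/Cam/out.avi", CV_FOURCC('P','I','M','1'), measuredFps, frameSize, true);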

OpenCV: Changing pixel format (YUYV to MJPG) when capturing from webcam?

I need to set my webcam to MJPG using the CV_CAP_PROP_FOURCC property to increase the FPS. If I try to set the parameter to MJPG, I get this error:
HIGHGUI ERROR: V4L: Property <unknown property string>(6) not supported by device
cam1x.cpp
#include "opencv2/opencv.hpp"
#include <iostream>
#include <string>
#include <sstream>
#include <stdio.h>
#include <unistd.h>
using namespace cv;
using namespace std;
int main(int, char**)
{
    VideoCapture cap(7); // open the default camera
    //*********trying to set it here****************
    cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M','J','P','G'));
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 280);
    //cap.set(CV_CAP_PROP_CONTRAST, 0.5);
    //cap.set(CV_CAP_PROP_BRIGHTNESS, 0.5);
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    //double contrast = cap.get(CV_CAP_PROP_CONTRAST);
    //double brightness = cap.get(CV_CAP_PROP_BRIGHTNESS);
    //cout << "Contrast = " << contrast << "BRIghtness" << brightness << endl;
    //Mat edges;
    namedWindow("cam1", 1);
    int x = 0;
    while (true)
    {
        x++;
        Mat frame;
        if (!cap.grab())
        {
            cout << "Can not grab images." << endl;
            return -1;
        }
        if (cap.retrieve(frame, 3)) {
            imshow("cam1", frame);
        }
        //cap >> frame; // get a new frame from camera
        //cap1 >> frame1;
        //imshow("edges1", frame1);
        //sleep(2);
        if (waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
To show that my camera supports MJPG:
v4l2-ctl -d /dev/video7 --list-formats
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'UYVY'
Name : UYVY 4:2:2
Index : 1
Type : Video Capture
Pixel Format: 'MJPG' (compressed)
Name : Motion-JPEG
How do I fix this problem so that I can set the capture to MJPG without an error?
Note that there is a question similar to mine on the OpenCV forum that is unanswered:
http://answers.opencv.org/question/41899/changing-pixel-format-yuyv-to-mjpg-when-capturing-from-webcam/
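One thing worth checking before anything else: cap.set() returns a bool, and the currently active FOURCC can be read back with cap.get(CV_CAP_PROP_FOURCC). A small diagnostic sketch (it only shows whether the request reached the driver; it does not by itself fix the V4L error):
// ask for MJPG, then read the property back and decode the four-character code
bool ok = cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M','J','P','G'));
int fourcc = static_cast<int>(cap.get(CV_CAP_PROP_FOURCC));
char fcc[] = { (char)(fourcc & 255), (char)((fourcc >> 8) & 255),
               (char)((fourcc >> 16) & 255), (char)((fourcc >> 24) & 255), '\0' };
cout << "set() returned " << ok << ", active FOURCC is " << fcc << endl;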

How to use the BackgroundSubtractor?

I am quite new to image processing and OpenCV. I've tried using BackgroundSubtractorMOG in C++ to detect objects.
Here is the code.
//opencv
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>
//C
#include <stdio.h>
//C++
#include <iostream>
#include <sstream>
using namespace cv;
using namespace std;
//global variables
Mat frame; //current frame
Mat resizeF;
Mat fgMaskMOG; //fg mask generated by MOG method
Ptr<BackgroundSubtractor> pMOG; //MOG Background subtractor
int keyboard;
void processVideo(char* videoFilename);
int main() {
    //create GUI windows
    namedWindow("Frame");
    namedWindow("FG Mask MOG");
    pMOG = new BackgroundSubtractorMOG(); //MOG approach
    VideoCapture capture("F:/FFOutput/CCC- 18AYU1F.flv");
    if (!capture.isOpened()) {
        cerr << "Unable to open video file " << endl;
        exit(EXIT_FAILURE);
    }
    while ((char)keyboard != 'q' && (char)keyboard != 27) {
        //read the current frame
        if (!capture.read(frame)) {
            cerr << "Unable to read next frame." << endl;
            cerr << "Exiting..." << endl;
            exit(EXIT_FAILURE);
        }
        pMOG->operator()(frame, fgMaskMOG);
        imshow("Frame", frame);
        imshow("FG Mask MOG", fgMaskMOG);
        keyboard = waitKey(30);
    }
    capture.release();
    destroyAllWindows();
    return EXIT_SUCCESS;
}
The code runs fine, but my fgMaskMOG does not evolve over time; the subtractor doesn't seem to learn what is in the background.
The first frame fed to the model seems to be taken as a permanent background.
How could I fix this problem?
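For reference, in this 2.4-style API the operator() call accepts an optional learningRate argument. A minimal sketch, assuming the model is frozen because no learning rate is being passed (0.01 is just an example value to experiment with):
// pass an explicit learning rate so the background model keeps adapting;
// higher values make the model absorb scene changes faster
double learningRate = 0.01;
pMOG->operator()(frame, fgMaskMOG, learningRate);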

Unhandled exception at Canny edge detector

I want to try the Canny edge detector, but when I start the program I receive an unhandled exception:
Unhandled exception at 0x00007FF97F6C8B9C in canny_project.exe: Microsoft C++ exception: cv::Exception at memory location 0x0000002485D89860
Below is the code that I implemented in VS2012.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace std;
using namespace cv;
int main(int, char**)
{
    namedWindow("Edges", CV_WINDOW_NORMAL);
    CvCapture* capture = cvCaptureFromCAM(-1);
    cv::Mat frame; cv::Mat out; cv::Mat out2;
    while (1) {
        frame = cvQueryFrame(capture);
        GaussianBlur(frame, out, Size(5, 5), 0, 0);
        cvtColor(out, out2, CV_BGR2GRAY); // produces out2, a one-channel image (CV_8UC1)
        Canny(out2, out2, 100, 200, 3); // the result goes to out2 again, but since it is still one channel it is fine
        if (!frame.data) break;
        imshow("Edges", out2);
        char c = cvWaitKey(33);
        if (c == 'c') break;
    }
    return 0;
}
Thanks in advance
The problem is probably that you are using cvCaptureFromCAM incorrectly:
cvCaptureFromCAM(0) // not -1
Why do you use OpenCV with C code? Use VideoCapture instead of CvCapture.
Please try this instead and tell me whether images are shown or not, and try different device numbers too:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace std;
using namespace cv;
int main(int, char**)
{
    cv::namedWindow("Capture");
    int deviceNum = 0; // please try different device numbers too like -1, 1, 2, ...
    cv::VideoCapture capture(deviceNum);
    cv::Mat frame;
    if (!capture.isOpened())
    {
        std::cout << "Could not open device " << deviceNum << std::endl;
        return 0;
    }
    while (true)
    {
        capture >> frame; // = cvQueryFrame(capture);
        //if (!frame.data) break;
        if (frame.empty())
        {
            std::cout << "could not capture a legal frame" << std::endl;
            continue;
            //break;
        }
        cv::imshow("Capture", frame);
        char c = cv::waitKey(33);
        if (c == 'c') break;
    }
    std::cout << "press any key to exit" << std::endl;
    cv::waitKey(0); // wait until key pressed
    return 0;
}
cvCaptureFromCAM(-1) has the wrong argument; use 0 if you have just one camera connected. In addition, in the C API, when you have finished working with the video, release the CvCapture structure with cvReleaseCapture(), or use Ptr<CvCapture>, which calls cvReleaseCapture() automatically in its destructor. Please give this example a try, to see if you can access your camera properly.
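A short sketch of the cleanup described above, in case the C API has to stay (the Ptr<CvCapture> variant relies on the highgui specialization that releases the capture in its destructor):
#include "opencv2/highgui/highgui.hpp"
int main()
{
    // manual release with the C API
    CvCapture* capture = cvCaptureFromCAM(0); // 0 = first camera
    if (capture)
        cvReleaseCapture(&capture);           // must be released explicitly
    // or let the smart pointer do it: cvReleaseCapture() runs automatically
    // when capturePtr goes out of scope
    cv::Ptr<CvCapture> capturePtr = cvCaptureFromCAM(0);
    return 0;
}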

Displaying Images from file using OpenCV VideoCapture, and C++ vectors

I am reading in an image sequence from files using an OpenCV VideoCapture (I believe I am doing this part correctly) and then putting the frames in a C++ vector for processing at a later point.
To test this, I wrote the following, which reads in the images, puts them in a vector, and then displays them from the vector one by one. However, when I run it, no images appear.
What's wrong?
I am using a Raspberry Pi; I don't know if that makes any difference.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>
#include <iostream>
using namespace cv;
using namespace std;
vector<Mat> imageQueue;
int main(int argc, char** argv)
{
    string arg = ("/home/pi/pictures/ceilingSequence/%02d.jpg");
    VideoCapture sequence(arg);
    if (!sequence.isOpened()) {
        cout << "Failed to open image sequence" << endl;
        return -1;
    }
    Mat image;
    for (;;)
    {
        sequence >> image;
        if (image.empty()) {
            cout << "End of sequence" << endl;
            break;
        }
        imageQueue.push_back(image);
    }
    for (int i = 0; i < 10; i++)
    {
        //display 10 images
        Mat readImage;
        readImage = imageQueue[i];
        namedWindow("Current Image", CV_WINDOW_AUTOSIZE);
        imshow("Current Image", readImage);
        sleep(2);
    }
    return 0;
}
Please replace the sleep(2) with a waitKey(2000) (assuming you want to wait for 2 seconds).
Even if you're not interested in keypresses in general, waitKey() is needed to drive the OpenCV/highgui event loop correctly and actually update the windows.
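A minimal sketch of the display loop with that change applied (everything else from the question is assumed to stay unchanged):
for (int i = 0; i < 10 && i < (int)imageQueue.size(); i++)
{
    // waitKey() services the highgui event loop, which is what actually
    // paints the window; sleep() never gives it a chance to run
    imshow("Current Image", imageQueue[i]);
    waitKey(2000); // wait 2 seconds between images
}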