I have a YUV camera. I convert YUV to BGR (because OpenCV uses BGR), but I get an exception:
Unhandled exception at 0x76c1a832 in test1.exe: Microsoft C++ exception: cv::Exception at memory location 0x00baee60.
How can I fix it?
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <opencv2/opencv.hpp>

int main()
{
    IplImage* image;
    CvCapture* capture = cvCaptureFromCAM(CV_CAP_ANY);
    cv::Mat output;
    cvNamedWindow("webcam", 1);
    cvGrabFrame( capture );
    image = cvRetrieveFrame( capture );
    cv::Mat input = cv::cvarrToMat(image);
    cv::cvtColor(input, output, CV_YUV2BGR_YUY2);
    imshow("webcam", output);
    /*
    while(1)
    {
        // get image from camera
        image = cvQueryFrame(capture);
        // IplImage to Mat
        cv::Mat input = cv::cvarrToMat(image);
        // YUV to RGB: CV_YUV2RGB_NV12, CV_YUV2BGR_NV12, CV_YUV2RGB_YV12, CV_YUV2BGR_YV12,
        // CV_YUV2RGB_IYUV, CV_YUV2BGR_IYUV, CV_YUV2RGB_UYVY, CV_YUV2BGR_UYVY
        cv::cvtColor(input, output, CV_YUV2BGR_YUY2);
        // draw image
        imshow("webcam", output);
        if(cvWaitKey(33) >= 27)
            break;
    }
    */
    cvReleaseCapture(&capture);
    cvDestroyWindow("webcam");
    return 0;
}
The code looks correct, but you never check whether a frame was actually retrieved from the capture device. The most likely problem is that no image is being retrieved, so you are passing an empty Mat to cvtColor (which is not allowed), and that produces the exception.
However, since you are already using C++, why not use the C++ API?
I'm new to OpenCV programming, so maybe my question is very basic. I have the following problem: I took a sample code that should display the laptop webcam's image on the desktop.
#include <opencv\cv.h>
#include <opencv\highgui.h>

using namespace cv;

int main()
{
    Mat image;          // create Matrix to store image
    VideoCapture cap;
    cap.open(0);        // initialize capture
    namedWindow("window", CV_WINDOW_AUTOSIZE); // create window to show image
    while(1)
    {
        cap >> image;            // copy webcam stream to image
        imshow("window", image); // print image to screen
        waitKey(33);             // delay 33ms
    }
    return 0;
}
But when I try to debug it I get an error message:
Unhandled exception at 0x5a16ebe6 in myNewOpenCV.exe: 0xC0000005: Access violation reading location 0x00000018.
But if I put breakpoints on
cap >> image;
imshow("window", image); // print image to screen
and remove them after debugging, everything works correctly. Maybe someone can help me find the problem. Thanks.
I've been kicking around with OpenCV 2.4.3 and a Logitech C920 camera, hoping to get a primitive sort of facial recognition scheme going. Very simple, not very sophisticated.
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include <iostream>
#include <stdio.h>

using namespace std;
using namespace cv;

/** Function Headers */
void grabcurrentuser();
void capturecurrentuser( Mat vsrc );

/** Global Variables **/
string face_cascade_name = "haarcascade_frontalface_alt.xml";
CascadeClassifier face_cascade;

int main( void ){//[main]
    grabcurrentuser();
}//[/main]

void grabcurrentuser(){//[grabcurrentuser]
    CvCapture* videofeed;
    Mat videoframe;
    // Load face cascade
    if( !face_cascade.load( face_cascade_name ) ){
        printf("Can't load haarcascade_frontalface_alt.xml\n");
    }
    // Read from source video feed for current user
    videofeed = cvCaptureFromCAM( 1 );
    if( videofeed ){
        for(int i = 0; i < 10; i++){ // Change depending on platform
            videoframe = cvQueryFrame( videofeed );
            // Debug source videofeed with display
            if( !videoframe.empty() ){
                imshow( "Source Video Feed", videoframe );
                // Perform face detection on videoframe
                capturecurrentuser( videoframe );
            }else{
                printf("Videoframe is empty or error!!!"); break;
            }
            int c = waitKey(33); // change to increase or decrease delay between frames
            if( (char)c == 'q' ) { break; }
        }
    }
}//[/grabcurrentuser]

void capturecurrentuser( Mat vsrc ){//[capturecurrentuser]
    std::vector<Rect> faces;
    Mat vsrc_gray;
    Mat currentuserface;
    // Preprocess frame for face detection
    cvtColor( vsrc, vsrc_gray, CV_BGR2GRAY );
    equalizeHist( vsrc_gray, vsrc_gray );
    // Find face
    face_cascade.detectMultiScale( vsrc_gray, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE, Size(30,30) );
    // Take face and crop out into a Mat
    currentuserface = vsrc_gray( faces[0] );
    // Save the Mat into a jpg file on disk
    imwrite( "currentuser.jpg", currentuserface );
    // Show saved image in a window
    imshow( "Current User Face", currentuserface );
}//[/capturecurrentuser]
The above code is the first component of this system. Its job is to accept the video feed, take 10 frames or so (hence the for loop), and run a Haar cascade on the frames to obtain a face. Once a face is acquired, it cuts that face out into a Mat and saves it as a jpg in the working directory.
It has worked so far, but it seems to be a very temperamental piece of code. It gives me the desired output most of the time (I don't intend to ask here how I can make things more accurate or precise - but feel free to tell me :D), but other times it ends in a segmentation fault. The following is an example of normal output (I've looked around and seen that the VIDIOC invalid-argument messages can be ignored - again, if it's an easy fix feel free to tell me) followed by the segmentation fault.
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
init done
opengl support available
Segmentation fault (core dumped)
Can anyone tell me why some runs of this program end in a segmentation fault like the one above, while others don't? This program is designed to create output that's handed off to another program I wrote, so I can't have it seizing up on me like this.
Much appreciated!
Your problem is in the following line:
currentuserface = vsrc_gray( faces[0] );
In my experience, segmentation faults arise when you access data that does not exist.
The program works fine when a face is detected, because faces[0] contains data. However, when no face is detected (cover the camera, for example), no rectangle is stored in faces, so indexing faces[0] reads past the end of an empty vector. That is what causes the crash.
Try initialising currentuserface like this, so that imshow and imwrite still work when nothing is detected:
cv::Mat currentuserface = cv::Mat::zeros(vsrc.size(), CV_8UC1);
and then check if faces is empty before you initialize currentuserface with it:
if( !faces.empty() )
currentuserface = vsrc_gray( faces[0] );
Good day everyone! I'm currently working on a project with video processing, so I decided to give OpenCV a try. As I'm new to it, I found a few sample codes and tested them. The first one uses the C API and looks like this:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>
int main( void ) {
    CvCapture* capture = 0;
    IplImage *frame = 0;
    if (!(capture = cvCaptureFromCAM(0)))
        printf("Cannot initialize camera\n");
    cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);
    while (1) {
        frame = cvQueryFrame(capture);
        if (!frame)
            break;
        IplImage *temp = cvCreateImage(cvSize(frame->width/2, frame->height/2), frame->depth, frame->nChannels); // A new image at half size
        cvResize(frame, temp, CV_INTER_CUBIC); // Resize
        cvSaveImage("test.jpg", temp, 0);      // Save this image
        cvShowImage("Capture", frame);         // Display the frame
        cvReleaseImage(&temp);
        if (cvWaitKey(5000) == 27)             // Escape key and wait, 5 sec per capture
            break;
    }
    cvReleaseImage(&frame);
    cvReleaseCapture(&capture);
    return 0;
}
So, this one works perfectly well and stores the image to the hard drive nicely. But the problems begin with the next sample, which uses the C++ OpenCV API:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
int main(int, char**)
{
    VideoCapture cap(0);    // open the default camera
    if(!cap.isOpened())     // check if we succeeded
        return -1;
    Mat edges;
    //namedWindow("edges",1);
    for(;;)
    {
        Mat frame;
        cap >> frame;       // get a new frame from camera
        cvtColor(frame, edges, CV_RGB2XYZ);
        imshow("edges", edges);
        //imshow("edges2", frame);
        //imwrite("test1.jpg", frame);
        if(waitKey(1000) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
So, in terms of showing video (image frames) there are practically no changes, but when it comes to using the im** functions, some problems arise.
cvSaveImage() works out nicely, but the moment I try to use imwrite(), an unhandled exception arises about an 'access violation reading location'. The same goes for imread() when I try to load an image.
So, what I wanted to ask is: is it possible to use most of the functionality with C OpenCV, or is it necessary to use C++ OpenCV? If so, is there any solution for the problem I described earlier?
Also, as stated here, images are initially in BGR format, so a conversion is needed. But a BGR2XYZ conversion seems to invert the colors, while RGB2XYZ preserves them. Examples:
images
Or is it necessary to use C++ OpenCV?
No, there is no necessity whatsoever. You can use whichever interface you like and feel comfortable with (OpenCV offers C, C++, and Python interfaces).
For your problem with imwrite() and imread():
For color images the channel order is normally Blue, Green, Red; this is what imshow(), imread() and imwrite() expect.
Quoted from there
I have a grabber which can grab images and show them on the screen with the following code:
while((lastPicNr = Fg_getLastPicNumberBlockingEx(fg, lastPicNr+1, 0, 10, _memoryAllc)) < 200) {
    iPtr = (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc);
    ::DrawBuffer(nId, iPtr, lastPicNr, "testing");
}
But I want to use the pointer to the image data and display the images with OpenCV, because I need to do processing on the pixels. My camera is a CCD mono camera and the pixel depth is 8 bits. I am new to OpenCV; is there any option in OpenCV that can take the return value of (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc) and display it on the screen, or get the data from the iPtr pointer and let me use the image data?
Creating an IplImage from unsigned char* raw_data takes 2 important instructions: cvCreateImageHeader() and cvSetData():
// 1 channel for a mono camera, 3 for RGB
int channels = 1;
IplImage* cv_image = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, channels);
if (!cv_image)
{
    // print error, failed to allocate image!
}
cvSetData(cv_image, raw_data, cv_image->widthStep);

cvNamedWindow("win1", CV_WINDOW_AUTOSIZE);
cvShowImage("win1", cv_image);
cvWaitKey(10);

// release resources
cvReleaseImageHeader(&cv_image);
cvDestroyWindow("win1");
I haven't tested the code, but the roadmap for what you are looking for is there.
If you are using C++ and your camera is supported by OpenCV, I don't understand why you are not doing it the simple way:
cv::VideoCapture capture(0);
if (!capture.isOpened()) {
    // print error
    return -1;
}

cv::namedWindow("viewer");
cv::Mat frame;
while (true)
{
    capture >> frame;
    // ... processing here
    cv::imshow("viewer", frame);
    int c = cv::waitKey(10);
    if ((char)c == 'c') { break; } // press c to quit
}
I would recommend starting with the docs and tutorials, which you can find here.
When I compile and run this code, I get an error. It compiles, but when I try to run it, it gives the following error:
The application has requested the Runtime to terminate in an unusual way.
This is the code:
#include <opencv2/opencv.hpp>
#include <string>
int main() {
    cv::VideoCapture c(0);
    double rate = 10;
    bool stop(false);
    cv::Mat frame;
    cv::namedWindow("Hi!");
    int delay = 1000 / rate;
    cv::Mat corners;
    while (!stop) {
        if (!c.read(frame))
            break;
        cv::cornerHarris(frame, corners, 3, 3, 0.1);
        cv::imshow("Hi!", corners);
        if (cv::waitKey(delay) >= 0)
            stop = true;
    }
    return 0;
}
BTW, I get the same error when using the Canny edge detector.
Your corners matrix is declared as a variable, but no memory is allocated for it. The same goes for your frame variable. First you have to create a matrix big enough for the image to fit into it.
I suggest you first take a look at cvCreateImage so you can learn how basic images are created and handled, before you start working with video streams.
Make sure the capture is ready and the image is OK:
if(!c.isOpened())
    break;
if(!c.read(frame))
    break;
if(frame.empty())
    break;
You need to convert the image to grayscale before you use the corner detector:
cv::Mat frameGray;
cv::cvtColor(frame, frameGray, CV_BGR2GRAY); // camera frames are BGR, not RGB