When I compile this code it compiles fine, but when I try to run it, it gives the following error:
The application has requested the Runtime to terminate in an unusual way.
This is the code:
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    cv::VideoCapture c(0);
    double rate = 10;
    bool stop(false);
    cv::Mat frame;
    cv::namedWindow("Hi!");
    int delay = 1000 / rate;
    cv::Mat corners;
    while (!stop) {
        if (!c.read(frame))
            break;
        cv::cornerHarris(frame, corners, 3, 3, 0.1);
        cv::imshow("Hi!", corners);
        if (cv::waitKey(delay) >= 0)
            stop = true;
    }
    return 0;
}
BTW, I get the same error when using the Canny edge detector.
Your corners and frame matrices are declared without memory allocated to them, but that by itself is not the problem: OpenCV's C++ functions allocate their output cv::Mat automatically. The crash comes from cv::cornerHarris, which expects a single-channel (grayscale) input image, while the frames read from the camera are 3-channel BGR; Canny has the same single-channel requirement, which is why you see the same error there.
I suggest you first take a look at how cv::Mat images are created and handled, before you start working with video streams.
Make sure the capture is ready and the image is OK:
if(!c.isOpened())   // note: the method is isOpened(), and your capture variable is c
    return -1;
if(!c.read(frame))
    break;
if(frame.empty())
    break;
You also need to convert the image to grayscale before you use the corner detector (camera frames are BGR, so CV_BGR2GRAY is the right conversion code):
cv::Mat frameGray;
cv::cvtColor(frame, frameGray, CV_BGR2GRAY);
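Putting the checks and the conversion together, the fixed loop might look like this (the normalize step is my own addition: cornerHarris produces a float image with very small values, which would otherwise display as nearly black):

if (!c.isOpened())
    return -1;
while (!stop) {
    if (!c.read(frame) || frame.empty())
        break;
    cv::Mat frameGray;
    cv::cvtColor(frame, frameGray, CV_BGR2GRAY);      // cornerHarris needs a single channel
    cv::cornerHarris(frameGray, corners, 3, 3, 0.1);
    cv::Mat cornersNorm;
    cv::normalize(corners, cornersNorm, 0, 255, cv::NORM_MINMAX, CV_8UC1); // rescale for display
    cv::imshow("Hi!", cornersNorm);
    if (cv::waitKey(delay) >= 0)
        stop = true;
}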
I am trying to do video stabilization with OpenCV (without the OpenCV video stabilization class).
The steps of my algorithm are as follows:
SURF point extraction,
matching,
homography matrix,
warpPerspective
And the output video is not stabilized at all :(. It just looks like the original video. I could not find any reference code for video stabilization. I followed the procedure described here. Can anybody tell me where I am going wrong, or point me to some source code that would help me improve my algorithm?
Please help. Thank you.
You can use my code snippet as a starting point (not very stable, but it seems to work):
#include "opencv2/opencv.hpp"
#include <iostream>
#include <vector>
#include <stdio.h>
using namespace cv;
using namespace std;
int main(int ac, char** av)
{
VideoCapture capture(0);
namedWindow("Cam");
namedWindow("Camw");
Mat frame;
Mat frame_edg;
Mat prev_frame;
int k=0;
Mat Transform;
Mat Transform_avg=Mat::eye(2,3,CV_64FC1);
Mat warped;
while(k!=27)
{
capture >> frame;
cv::cvtColor(frame,frame,cv::COLOR_BGR2GRAY);
cv::equalizeHist(frame,frame);
cv::Canny(frame,frame_edg,64,64);
//frame=frame_edg.clone();
imshow("Cam_e",frame_edg);
imshow("Cam",frame);
if(!prev_frame.empty())
{
Transform=estimateRigidTransform(frame,prev_frame,0);
Transform(Range(0,2),Range(0,2))=Mat::eye(2,2,CV_64FC1);
Transform_avg+=(Transform-Transform_avg)/2.0;
warpAffine(frame,warped,Transform_avg,Size( frame.cols, frame.rows));
imshow("Camw",warped);
}
if(prev_frame.empty())
{
prev_frame=frame.clone();
}
k=waitKey(20);
}
cv::destroyAllWindows();
return 0;
}
You can also look for the paper Chen_Halawa_Pang_FastVideoStabilization.pdf; as I remember, MATLAB source code was supplied with it.
In your "warpAffine(frame,warped,Transform_avg,Size( frame.cols, frame.rows));" function, you must specify FLAG as WARP_INVERSE_MAP for stabilization.
Sample code I have written:
#include "opencv2/opencv.hpp"
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat src, prev, curr, rigid_mat, dst;
    VideoCapture cap("test_a3.avi");

    while (1)
    {
        bool bSuccess = cap.read(src);
        if (!bSuccess) // if not success, break loop
        {
            cout << "Cannot read the frame from video file" << endl;
            break;
        }

        cvtColor(src, curr, CV_BGR2GRAY);

        if (prev.empty())
        {
            prev = curr.clone();
        }

        rigid_mat = estimateRigidTransform(prev, curr, false);
        if (rigid_mat.empty())  // the estimation can fail; fall back to the identity
            rigid_mat = Mat::eye(2, 3, CV_64F);

        // warp the current frame back with the inverse of the estimated motion
        warpAffine(src, dst, rigid_mat, src.size(), INTER_NEAREST | WARP_INVERSE_MAP, BORDER_CONSTANT);

        // ---------------------------------------------------------------------------//

        imshow("input", src);
        imshow("output", dst);

        Mat dst_gray;
        cvtColor(dst, dst_gray, CV_BGR2GRAY);
        prev = dst_gray.clone();

        waitKey(30);
    }
    return 0;
}
Hoping this will solve your problem :)
SURF is not that fast. The way I work is with optical flow. First you have to calculate good features on your first frame with the GoodFeaturesToTrack() function; after that I do some optimization with the FindCornerSubPix() function.
Now that you have the feature points in your start frame, the next thing to do is determine the optical flow. There are several optical flow functions, but the one I use is OpticalFlow.PyrLK(); in one of the out parameters you get the feature points in the current frame. With those you can calculate the homography matrix with the FindHomography() function. Next you have to invert this matrix (the explanation is easy to find with Google), and then you call the WarpPerspective() function to stabilize your frame.
PS. The functions I put here are from EmguCV, the .NET wrapper for OpenCV, so there may be some differences.
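For reference, a minimal C++ OpenCV sketch of this approach might look like the following. It stabilizes against the first frame, so tracking will degrade over long sequences, and the parameter values (200 features, the 5x5 refinement window, and so on) are illustrative guesses, not tuned values:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    cv::Mat frame, gray, firstGray, stabilized;
    std::vector<cv::Point2f> firstPts;

    while (true)
    {
        cap >> frame;
        if (frame.empty())
            break;
        cv::cvtColor(frame, gray, CV_BGR2GRAY);

        if (firstGray.empty())
        {
            // detect good features in the reference frame and refine them
            cv::goodFeaturesToTrack(gray, firstPts, 200, 0.01, 10);
            cv::cornerSubPix(gray, firstPts, cv::Size(5, 5), cv::Size(-1, -1),
                cv::TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 30, 0.01));
            firstGray = gray.clone();
            continue;
        }

        // track the reference features into the current frame
        std::vector<cv::Point2f> currPts;
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(firstGray, gray, firstPts, currPts, status, err);

        // keep only the successfully tracked point pairs
        std::vector<cv::Point2f> p0, p1;
        for (size_t i = 0; i < status.size(); ++i)
            if (status[i]) { p0.push_back(firstPts[i]); p1.push_back(currPts[i]); }

        if (p0.size() >= 4)
        {
            // homography reference -> current; warping with its inverse
            // maps the current frame back onto the reference
            cv::Mat H = cv::findHomography(p0, p1, CV_RANSAC);
            cv::Mat Hinv = H.inv();
            cv::warpPerspective(frame, stabilized, Hinv, frame.size());
            cv::imshow("stabilized", stabilized);
        }
        if (cv::waitKey(30) >= 0)
            break;
    }
    return 0;
}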
I'm working on a project using OpenCV 2.4.3 and I need to get the foreground during a stream. My problem is that using cv::absdiff to get it doesn't really help. Here are my code and the result.
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame, frame1, frame2;

    cap >> frame;
    frame.copyTo(frame1);                  // the first frame serves as the background
    cv::imwrite("background.jpeg", frame1);

    int key = 0;
    while (key != 27) {
        cap >> frame;
        cv::absdiff(frame, frame1, frame2); // frame2 = |frame - frame1|
        cv::imshow("foreground", frame2);
        if (key == 'c') {
            //frame.copyTo(frame2);
            cv::imwrite("foreground.jpeg", frame2);
            key = 0;
        }
        cv::imshow("frame", frame);
        key = cv::waitKey(10);
    }
    cap.release();
    return 0;
}
As you can see, the subtraction works, but what I want to get is only the values of the pixels that changed. For example, if a pixel in the background is [130,130,130] and the same pixel is [200,200,200] in the frame, I want to get exactly the latter values, not [70,70,70].
I've already seen this tutorial: http://mateuszstankiewicz.eu/?p=189
but I can't understand the code, and I have problems setting up cv::BackgroundSubtractorMOG2 with my OpenCV version.
Thanks in advance for your help.
BackgroundSubtractorMOG2 should work with #include "opencv2/video/background_segm.hpp"
The OpenCV samples include two nice C++ examples (in the samples/cpp directory):
bgfg_segm.cpp shows how to use the BackgroundSubtractorMOG2
bgfg_gmg.cpp uses BackgroundSubtractorGMG
To get the last values (assuming you meant the foreground pixel values), you can copy the frame using the foreground mask. This is also done in the first example, in the following snippet:
bg_model(img, fgmask, update_bg_model ? -1 : 0);
fgimg = Scalar::all(0);
img.copyTo(fgimg, fgmask);
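If it helps, here is a minimal stand-alone sketch of the same idea against the OpenCV 2.4 C++ API (the window name and key handling are my own illustrative choices):

#include <opencv2/opencv.hpp>
#include <opencv2/video/background_segm.hpp>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    cv::BackgroundSubtractorMOG2 bg_model; // default history and variance threshold
    cv::Mat frame, fgmask, fgimg;

    while (true) {
        cap >> frame;
        if (frame.empty())
            break;

        bg_model(frame, fgmask);              // update the model and get the foreground mask
        fgimg.create(frame.size(), frame.type());
        fgimg = cv::Scalar::all(0);
        frame.copyTo(fgimg, fgmask);          // original pixel values, foreground only

        cv::imshow("foreground", fgimg);
        if (cv::waitKey(10) == 27)            // Esc to quit
            break;
    }
    return 0;
}

This gives you exactly the [200,200,200] values from your example rather than the difference, because the mask only selects which pixels to copy.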
I am teaching myself OpenCV for a work project that will eventually involve object tracking and such, and I'm just trying to familiarize myself with the basics right now. I have a chunk of code that's meant to simply grab images from my webcam, convert them to grayscale and threshold them, and print them out to a window. I keep getting this error:
"cannot convert parameter 1 from 'cv::Mat' to 'const CvArr *'"
with this code:
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat img;
    VideoCapture cap(0);

    while (true)
    {
        cap >> img;
        Mat tHold;
        cvtColor(img, tHold, CV_BGR2GRAY);
        cvThreshold(tHold, tHold, 50, 255, CV_THRESH_BINARY);
        imshow("Thresholded Image", tHold);
        waitKey(1);
    }
    return 0;
}
The thing is that other functions seem to work, like Canny(), etc. I just can't get thresholding to work. Thoughts? Thanks!
You are using the function cvThreshold from the C interface of OpenCV, whereas your input images are of type cv::Mat, which belongs to the C++ interface.
The C++ counterpart of cvThreshold is cv::threshold. Just replace cvThreshold with cv::threshold.
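Applied to the loop from your code, it is a one-line change:

cvtColor(img, tHold, CV_BGR2GRAY);
threshold(tHold, tHold, 50, 255, THRESH_BINARY); // C++ replacement for cvThreshold
imshow("Thresholded Image", tHold);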
Good day everyone! I'm currently working on a project with video processing, so I decided to give OpenCV a try. As I'm new to it, I found a few sample codes and tested them out. The first one uses the C interface of OpenCV and looks like this:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main( void ) {
    CvCapture* capture = 0;
    IplImage *frame = 0;

    if (!(capture = cvCaptureFromCAM(0)))
        printf("Cannot initialize camera\n");

    cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);

    while (1) {
        frame = cvQueryFrame(capture);
        if (!frame)
            break;

        IplImage *temp = cvCreateImage(cvSize(frame->width/2, frame->height/2),
                                       frame->depth, frame->nChannels); // a new image, half size
        cvResize(frame, temp, CV_INTER_CUBIC); // resize
        cvSaveImage("test.jpg", temp, 0);      // save this image
        cvShowImage("Capture", frame);         // display the frame
        cvReleaseImage(&temp);

        if (cvWaitKey(5000) == 27) // Esc key; wait 5 sec per capture
            break;
    }
    cvReleaseImage(&frame);
    cvReleaseCapture(&capture);
    return 0;
}
So, this one works perfectly well and stores the image to the hard drive nicely. But problems begin with the next sample, which uses the C++ interface:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
//namedWindow("edges",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, CV_RGB2XYZ);
imshow("edges", edges);
//imshow("edges2", frame);
//imwrite("test1.jpg", frame);
if(waitKey(1000) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
So, generally, in terms of showing video (image frames) there are practically no changes, but when it comes to using the im** functions, some problems arise.
Using cvSaveImage() works out nicely, but the moment I try to use imwrite(), an unhandled exception arises ('access violation reading location'). The same goes for imread() when I'm trying to load an image.
So, the thing I wanted to ask: is it possible to use most of the functionality with C OpenCV, or is it necessary to use C++ OpenCV? If it is, is there any solution to the problem I described earlier?
Also, as stated here, images are initially in BGR format, so conversion is needed. But doing a BGR2XYZ conversion seems to invert the colors, while RGB2XYZ preserves them. Examples:
(example images omitted)
Or is it necessary to use C++ OpenCV?
No, there is no necessity whatsoever. You can use whichever interface you like and feel comfortable with (OpenCV offers C, C++, and Python interfaces).
As for your problem with imwrite() and imread():
For color images the channel order is normally Blue, Green, Red; this
is what imshow(), imread() and imwrite() expect
Quoted from there
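Following the same logic, since the frames from VideoCapture are in BGR order, the mathematically correct conversion in your loop would presumably be the BGR variant; it can still look "wrong" on screen simply because imshow() treats any 3-channel image as BGR when displaying it:

cvtColor(frame, edges, CV_BGR2XYZ); // the camera frames are BGR, not RGB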
I am an OpenCV and C++ beginner. I've got a problem with my student project: my tutor wants to grab frames from a camera and save them as JPGs. First I used cvCreateCameraCapture, cvQueryFrame and cvSaveImage, and it worked OK. But the frames are relatively big, about 2500x2000, and it takes about 1 second to save one frame, while my tutor requires saving at least 10 frames per second.
Then I came up with the idea to save the raw data first and convert it to JPG after the grabbing process, so I wrote the following test code. The problem is that all the saved images are the same; it seems they all come from the data of the last grabbed frame. I guess the problem is my poor knowledge of C++, especially pointers, so I really hope to get help here.
Thanks in advance!
void COpenCVDuoArryTestDlg::OnBnClickedButton1()
{
    IplImage* m_Frame = NULL;
    TRACE("m_Frame initialed");
    CvCapture* m_Video = NULL;
    m_Video = cvCreateCameraCapture(0);

    IplImage** Temp_Frame = (IplImage**) new IplImage*[100];
    for(int j = 0; j < 100; j++){
        Temp_Frame[j] = new IplImage[112];
    }
    TRACE("Tempt_Frame initialed\n");

    cvNamedWindow("video", 1);
    int t = 0;
    while(m_Frame = cvQueryFrame(m_Video)){
        for(int k = 0; k < m_Frame->nSize; k++){
            Temp_Frame[t][k] = m_Frame[k];
        }
        cvWaitKey(30);
        t++;
        if(t == 100){
            break;
        }
    }

    for(int i = 0; i < 30; i++){
        CString ImagesName;
        ImagesName.Format(_T("Image%.3d.jpg"), i);
        if(cvWaitKey(20) == 27) {
            break;
        }
        else{
            cvSaveImage(ImagesName, Temp_Frame[i]);
        }
    }
    cvReleaseCapture(&m_Video);
    cvDestroyWindow("video");
    TRACE("cvDestroy works\n");
    delete [] Temp_Frame;
}
If you use C++, why don't you use the C++ opencv interface?
The reason you get N times the same image is that the capture reuses the memory for each frame; if you want to store the frames, you need to copy them. Example for the C++ interface:
#include <vector>
#include "cv.h"
#include "highgui.h"

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened()) // check if we succeeded
        return -1;

    Mat edges;
    namedWindow("image", 1);

    std::vector<cv::Mat> images(100);
    for(int i = 0; i < 100; ++i) {
        // this is optional, preallocation so there's no allocation
        // during capture
        images[i].create(480, 640, CV_8UC3);
    }

    for(int i = 0; i < 100; ++i)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        frame.copyTo(images[i]);
    }
    cap.release();

    for(int i = 0; i < 100; ++i)
    {
        imshow("image", images[i]);
        if(waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
Do you have a multicore/multi-CPU system? Then you could farm out the 1-second tasks across 16 cores and save 16 frames/second!
Or you could write your own optimized JPEG routine on the GPU in CUDA/OpenCL.
If you need it to run for longer, you could dump the raw image data to disk, then read it back in later and convert to JPEG. 5 Mpixel * 3 colors * 10 fps is 150 MB/s (thanks etarion!), which you can do with two disks and Windows RAID.
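A minimal sketch of that raw-dump idea, assuming 8-bit BGR frames of a fixed, known size (the file name frames.raw and the count of 100 frames are illustrative):

#include <cstdio>
#include "opencv2/opencv.hpp"

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    const int N = 100;  // illustrative frame count
    cv::Mat frame;

    // capture phase: write raw pixel data only, no JPEG encoding cost per frame
    FILE* out = fopen("frames.raw", "wb");
    for (int i = 0; i < N; ++i) {
        cap >> frame;
        if (frame.empty())
            break;
        for (int y = 0; y < frame.rows; ++y)  // row by row, in case rows are padded
            fwrite(frame.ptr(y), 1, frame.cols * frame.elemSize(), out);
    }
    fclose(out);

    // encoding phase: read the raw frames back and save them as JPEG
    // (assumes at least one frame was grabbed, so frame still holds the size/type)
    cv::Mat img(frame.rows, frame.cols, frame.type());
    size_t frameBytes = img.total() * img.elemSize();
    FILE* in = fopen("frames.raw", "rb");
    for (int i = 0; i < N; ++i) {
        if (fread(img.data, 1, frameBytes, in) != frameBytes)
            break;
        char name[32];
        sprintf(name, "Image%03d.jpg", i);
        cv::imwrite(name, img);
    }
    fclose(in);
    return 0;
}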
Edit: if you only need 10 frames, then just buffer them in memory and write them out afterwards, as the other answer shows.
Since you already know how to retrieve a frame, check this answer:
openCV: How to split a video into image sequence?
This question is a little different because it retrieves frames from an AVI file instead of a webcam, but the way to save a frame to the disk is the same!
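For completeness, once you have the frame in a cv::Mat, saving it to disk is a single call either way (the file name here is just an example):

cv::imwrite("frame.jpg", frame); // the extension selects the encoder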