I am teaching myself OpenCV for a work project that will eventually involve object tracking, and I'm just trying to familiarize myself with the basics right now. I have a chunk of code that's meant to simply grab images from my webcam, convert them to grayscale, threshold them, and display them in a window. I keep getting this error:
"cannot convert parameter 1 from 'cv::Mat' to 'const CvArr *'"
with this code:
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat img;
    VideoCapture cap(0);

    while (true)
    {
        cap >> img;
        Mat tHold;
        cvtColor(img, tHold, CV_BGR2GRAY);
        cvThreshold(tHold, tHold, 50, 255, CV_THRESH_BINARY);
        imshow("Thresholded Image", tHold);
        waitKey(1);
    }

    return 0;
}
The thing is that other functions seem to work, like Canny(); I just can't get thresholding to work. Thoughts? Thanks!
You are using the function cvThreshold from the C interface of OpenCV, whereas your input images are of type cv::Mat, which belongs to the C++ interface.
The C++ counterpart of cvThreshold is cv::threshold. Just replace cvThreshold with cv::threshold.
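For example, under the C++ API the loop body from the question would become the following (a minimal sketch; note that the C++ constant is THRESH_BINARY rather than CV_THRESH_BINARY):

cap >> img;
Mat tHold;
cvtColor(img, tHold, CV_BGR2GRAY);
threshold(tHold, tHold, 50, 255, THRESH_BINARY); // cv::threshold, the C++ counterpart
imshow("Thresholded Image", tHold);
waitKey(1);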
Related
I am trying to do video stabilization with OpenCV (without the OpenCV video stabilization class).
The steps of my algorithm are as follows:
SURF point extraction,
Matching,
Homography matrix,
warpPerspective.
And the output video is not stabilized at all :( It just looks like the original video. I could not find any reference code for video stabilization. I followed the procedure described here. Can anybody tell me where I am going wrong, or point me to some source code that would improve my algorithm?
Please help. Thank you!
You can use my code snippet as a starting point (not very stable, but it seems to work):
#include "opencv2/opencv.hpp"
#include <iostream>
#include <vector>
#include <stdio.h>
using namespace cv;
using namespace std;
int main(int ac, char** av)
{
VideoCapture capture(0);
namedWindow("Cam");
namedWindow("Camw");
Mat frame;
Mat frame_edg;
Mat prev_frame;
int k=0;
Mat Transform;
Mat Transform_avg=Mat::eye(2,3,CV_64FC1);
Mat warped;
while(k!=27)
{
capture >> frame;
cv::cvtColor(frame,frame,cv::COLOR_BGR2GRAY);
cv::equalizeHist(frame,frame);
cv::Canny(frame,frame_edg,64,64);
//frame=frame_edg.clone();
imshow("Cam_e",frame_edg);
imshow("Cam",frame);
if(!prev_frame.empty())
{
Transform=estimateRigidTransform(frame,prev_frame,0);
Transform(Range(0,2),Range(0,2))=Mat::eye(2,2,CV_64FC1);
Transform_avg+=(Transform-Transform_avg)/2.0;
warpAffine(frame,warped,Transform_avg,Size( frame.cols, frame.rows));
imshow("Camw",warped);
}
if(prev_frame.empty())
{
prev_frame=frame.clone();
}
k=waitKey(20);
}
cv::destroyAllWindows();
return 0;
}
You can also look for the paper Chen_Halawa_Pang_FastVideoStabilization.pdf; as I remember, MATLAB source code was supplied with it.
In your "warpAffine(frame,warped,Transform_avg,Size( frame.cols, frame.rows));" function, you must specify FLAG as WARP_INVERSE_MAP for stabilization.
Sample code I have written:
Mat src, prev, curr, rigid_mat, dst;
VideoCapture cap("test_a3.avi");

while (1)
{
    bool bSuccess = cap.read(src);
    if (!bSuccess) // if not successful, break loop
    {
        cout << "Cannot read the frame from video file" << endl;
        break;
    }

    cvtColor(src, curr, CV_BGR2GRAY);

    if (prev.empty())
    {
        prev = curr.clone();
    }

    // Note: estimateRigidTransform can return an empty Mat if estimation fails
    rigid_mat = estimateRigidTransform(prev, curr, false);

    // Warp with the inverse mapping to cancel the estimated motion
    warpAffine(src, dst, rigid_mat, src.size(), INTER_NEAREST | WARP_INVERSE_MAP, BORDER_CONSTANT);

    // ---------------------------------------------------------------------------//

    imshow("input", src);
    imshow("output", dst);

    Mat dst_gray;
    cvtColor(dst, dst_gray, CV_BGR2GRAY);
    prev = dst_gray.clone();

    waitKey(30);
}
Hoping this will solve your problem :)
SURF is not that fast. The way I work is with optical flow. First you have to calculate good features in your first frame with the GoodFeaturesToTrack() function. After that I do some refinement with the FindCornerSubPix() function.
Now you have the feature points in your start frame; the next thing to do is determine the optical flow. There are several optical flow functions, but the one I use is OpticalFlow.PyrLK(); in one of its output parameters you get the feature points in the current frame. With those you can calculate the homography matrix with the FindHomography() function. Next you have to invert this matrix (the explanation is easy to find with Google), and then you call the WarpPerspective() function to stabilize your frame. A sketch of this pipeline follows below.
PS. The functions I put here are from EmguCV, the .NET wrapper for OpenCV, so there may be some differences.
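For reference, here is a minimal C++ sketch of that pipeline using the OpenCV equivalents of the EmguCV functions named above. The variables prevGray, currGray, and currFrame (the previous and current grayscale frames and the current color frame) are assumptions, and the parameter values are only illustrative:

// Sketch only: prevGray/currGray are consecutive grayscale frames, currFrame is the current color frame
std::vector<cv::Point2f> prevPts, currPts;
cv::goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 10);   // GoodFeaturesToTrack
cv::cornerSubPix(prevGray, prevPts, cv::Size(5, 5), cv::Size(-1, -1),
                 cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01)); // FindCornerSubPix
std::vector<uchar> status;
std::vector<float> err;
cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err); // OpticalFlow.PyrLK
// Keep only the points that were tracked successfully
std::vector<cv::Point2f> p0, p1;
for (size_t i = 0; i < status.size(); ++i)
    if (status[i]) { p0.push_back(prevPts[i]); p1.push_back(currPts[i]); }
cv::Mat H = cv::findHomography(p0, p1, cv::RANSAC);          // FindHomography
cv::Mat stabilized;
cv::warpPerspective(currFrame, stabilized, H.inv(), currFrame.size()); // invert the matrix, then WarpPerspective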
Good day everyone! I'm currently working on a project involving video processing, so I decided to give OpenCV a try. As I'm new to it, I found a few sample codes to test. The first one uses the C interface of OpenCV and looks like this:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main( void ) {
    CvCapture* capture = 0;
    IplImage *frame = 0;

    if (!(capture = cvCaptureFromCAM(0)))
        printf("Cannot initialize camera\n");

    cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);

    while (1) {
        frame = cvQueryFrame(capture);
        if (!frame)
            break;

        IplImage *temp = cvCreateImage(cvSize(frame->width/2, frame->height/2), frame->depth, frame->nChannels); // a new image, half size
        cvResize(frame, temp, CV_INTER_CUBIC); // resize
        cvSaveImage("test.jpg", temp, 0);      // save this image
        cvShowImage("Capture", frame);         // display the frame
        cvReleaseImage(&temp);

        if (cvWaitKey(5000) == 27) // Escape key; wait 5 sec per capture
            break;
    }

    cvReleaseImage(&frame);
    cvReleaseCapture(&capture);
    return 0;
}
So, this one works perfectly well and stores the image to the hard drive nicely. But problems begin with the next sample, which uses the C++ OpenCV interface:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
//namedWindow("edges",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, CV_RGB2XYZ);
imshow("edges", edges);
//imshow("edges2", frame);
//imwrite("test1.jpg", frame);
if(waitKey(1000) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
So, yeah, generally, in terms of showing video (image frames) there are practically no changes, but when it comes to using the im* functions (imread, imwrite), problems arise.
Using cvSaveImage() works out nicely, but the moment I try to use imwrite(), an unhandled exception arises ('access violation reading location'). The same goes for imread() when I try to load an image.
So, the thing I wanted to ask is: is it possible to use most of the functionality with C OpenCV, or is it necessary to use C++ OpenCV? If so, is there any solution for the problem I described earlier?
Also, as stated here, images are initially in BGR format, so a conversion is needed. But a BGR2XYZ conversion seems to invert the colors, while RGB2XYZ preserves them. Examples:
[example images]
Or is it necessary to use C++ OpenCV?
No, there is no necessity whatsoever. You can use whichever interface you like and feel comfortable with (OpenCV offers C, C++, and Python interfaces).
For your problem with imwrite() and imread():
For color images the channel order is normally Blue, Green, Red; this is what imshow(), imread() and imwrite() expect
Quoted from there
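In other words, frames coming from VideoCapture are already in BGR order, which is exactly what imwrite() expects, so a captured frame can be saved directly. A minimal sketch:

cv::VideoCapture cap(0);
cv::Mat frame;
cap >> frame; // frames arrive in BGR order
if (!frame.empty())
    cv::imwrite("test1.jpg", frame); // imwrite() expects BGR, so no conversion is needed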
I am currently working through the OpenCV cookbook (OpenCV 2 Computer Vision Application Programming Cookbook): Chapter 5, Segmenting images using watersheds, page 131.
Here is my main code:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
using namespace std;
class WatershedSegmenter {
private:
cv::Mat markers;
public:
void setMarkers(const cv::Mat& markerImage){
markerImage.convertTo(markers, CV_32S);
}
cv::Mat process(const cv::Mat &image){
cv::watershed(image,markers);
return markers;
}
};
int main ()
{
cv::Mat image = cv::imread("/Users/yaozhongsong/Pictures/IMG_1648.JPG");
// Eliminate noise and smaller objects
cv::Mat fg;
cv::erode(binary,fg,cv::Mat(),cv::Point(-1,-1),6);
// Identify image pixels without objects
cv::Mat bg;
cv::dilate(binary,bg,cv::Mat(),cv::Point(-1,-1),6);
cv::threshold(bg,bg,1,128,cv::THRESH_BINARY_INV);
// Create markers image
cv::Mat markers(binary.size(),CV_8U,cv::Scalar(0));
markers= fg+bg;
// Create watershed segmentation object
WatershedSegmenter segmenter;
// Set markers and process
segmenter.setMarkers(markers);
segmenter.process(image);
imshow("a",image);
std::cout<<".";
cv::waitKey(0);
}
However, it doesn't work. How should I initialize the binary image, and how can I make this segmentation code work?
I am not very clear about this part of the book.
Thanks in advance!
There are a couple of things that should be mentioned about your code:
Watershed expects the input and the output image to have the same size;
You probably want to get rid of the const parameters in the methods;
Notice that the result of watershed is actually markers and not image, as your code suggests. Because of that, you need to grab the return value of process()!
This is your code, with the fixes above:
// Usage: ./app input.jpg
#include "opencv2/opencv.hpp"
#include <string>

using namespace cv;
using namespace std;

class WatershedSegmenter{
private:
    cv::Mat markers;
public:
    void setMarkers(cv::Mat& markerImage)
    {
        markerImage.convertTo(markers, CV_32S);
    }

    cv::Mat process(cv::Mat &image)
    {
        cv::watershed(image, markers);
        markers.convertTo(markers, CV_8U);
        return markers;
    }
};

int main(int argc, char* argv[])
{
    cv::Mat image = cv::imread(argv[1]);
    cv::Mat binary;// = cv::imread(argv[2], 0);
    cv::cvtColor(image, binary, CV_BGR2GRAY);
    cv::threshold(binary, binary, 100, 255, THRESH_BINARY);

    imshow("originalimage", image);
    imshow("originalbinary", binary);

    // Eliminate noise and smaller objects
    cv::Mat fg;
    cv::erode(binary, fg, cv::Mat(), cv::Point(-1,-1), 2);
    imshow("fg", fg);

    // Identify image pixels without objects
    cv::Mat bg;
    cv::dilate(binary, bg, cv::Mat(), cv::Point(-1,-1), 3);
    cv::threshold(bg, bg, 1, 128, cv::THRESH_BINARY_INV);
    imshow("bg", bg);

    // Create markers image
    cv::Mat markers(binary.size(), CV_8U, cv::Scalar(0));
    markers = fg + bg;
    imshow("markers", markers);

    // Create watershed segmentation object
    WatershedSegmenter segmenter;
    segmenter.setMarkers(markers);

    cv::Mat result = segmenter.process(image);
    result.convertTo(result, CV_8U);
    imshow("final_result", result);

    cv::waitKey(0);
    return 0;
}
I took the liberty of using Abid's input image for testing and this is what I got:
Below is a simplified version of your code, and it works fine for me. Check it out:
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
int main ()
{
Mat image = imread("sofwatershed.jpg");
Mat binary = imread("sofwsthresh.png",0);
// Eliminate noise and smaller objects
Mat fg;
erode(binary,fg,Mat(),Point(-1,-1),2);
// Identify image pixels without objects
Mat bg;
dilate(binary,bg,Mat(),Point(-1,-1),3);
threshold(bg,bg,1,128,THRESH_BINARY_INV);
// Create markers image
Mat markers(binary.size(),CV_8U,Scalar(0));
markers= fg+bg;
markers.convertTo(markers, CV_32S);
watershed(image,markers);
markers.convertTo(markers,CV_8U);
imshow("a",markers);
waitKey(0);
}
Below is my input image:
Below is my output image:
See the code explanation here: Simple watershed Sample in OpenCV
I had the same problem as you, following the exact same code sample from the cookbook (great book, btw).
Just to set the context: I was coding under Visual Studio 2013 with OpenCV 2.4.8. After a lot of searching and finding no solutions, I decided to change the IDE.
It's still Visual Studio, BUT it's 2010! And boom, it works!
Be careful how you configure Visual Studio with OpenCV. There's a great installation tutorial here.
Good day to all
I'm trying to use thresholding on my video stream, but it is not working.
My video stream:
Mat *depthImage = new Mat(480, 640, CV_8UC1, Scalar::all(0));
Then I try to do the adaptive thresholding (it also doesn't work with regular thresholding):
for(;;){
    if( wrapper->update()){
        wrapper->getDisplayDepth(depthImage);
        cvAdaptiveThreshold(depthImage, depthImage, 255, CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY, 75, 10);
        imshow("Depth", *depthImage);
    }
    int k = waitKey(25);
    if(k == 27) exit(0);
}
I get this error:
OpenCV Error: Bad argument (Unknown array type) in cvarrToMat, file /Users/olivierjanssens/source/OpenCV-2.3.1/modules/core/src/matrix.cpp, line 646
terminate called throwing an exception
What am I doing wrong? I can display and see the stream perfectly, but when I add this thresholding I get the error mentioned above. (I'm rather new to OpenCV, by the way.)
Thx in advance !
Your depthImage is a pointer to a cv::Mat, which seems strange to me...
...but if you're using the C++ syntax, then you'll want to use the C++ version, adaptiveThreshold, which deals with cv::Mat and has the following definition:
void adaptiveThreshold(InputArray src, OutputArray dst, double maxValue,
                       int adaptiveMethod, int thresholdType, int blockSize, double C);
which you will need to prefix with cv:: if you're not already using that namespace.
For example:
Mat *depthImage;         // obtain this using your method
Mat image = *depthImage; // obtain a regular Mat to use (doesn't copy the data, just the header)

adaptiveThreshold(image, image, 255, CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY, 75, 10);

imshow("Depth Image", *depthImage);
// OR
imshow("Depth Image", image);
When I compile and run this code, I get an error. It compiles, but when I try to run it, it gives the following error:
The application has requested the Runtime to terminate in an unusual way.
This is the code:
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    cv::VideoCapture c(0);
    double rate = 10;
    bool stop(false);
    cv::Mat frame;
    cv::namedWindow("Hi!");
    int delay = 1000/rate;
    cv::Mat corners;

    while(!stop){
        if(!c.read(frame))
            break;

        cv::cornerHarris(frame, corners, 3, 3, 0.1);
        cv::imshow("Hi!", corners);

        if(cv::waitKey(delay) >= 0)
            stop = true;
    }
    return 0;
}
BTW, I get the same error when using the Canny edge detector.
Your corners matrix is declared as a variable, but there is no memory allocated for it. The same goes for your frame variable. First you have to create a matrix big enough for the image to fit into it.
I suggest you first take a look at cvCreateImage so you can learn how basic images are created and handled, before you start working with video streams.
Make sure the capture is ready and the image is OK:
if (!c.isOpened())
    break;
if (!c.read(frame))
    break;
if (frame.empty())
    break;
You need to convert the image to grayscale before you use the corner detector:
cv::Mat frameGray;
cv::cvtColor(frame, frameGray, CV_RGB2GRAY);
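Putting the checks and the conversion together, a sketch of the corrected capture loop (reusing the variables from the question) might look like this. Note that camera frames arrive in BGR order, hence CV_BGR2GRAY here, and that the Harris response is a small-valued float image, so it is normalized before display:

while (!stop) {
    if (!c.read(frame) || frame.empty())
        break;

    cv::Mat frameGray;
    cv::cvtColor(frame, frameGray, CV_BGR2GRAY); // cornerHarris needs a single-channel image

    cv::cornerHarris(frameGray, corners, 3, 3, 0.1);
    cv::normalize(corners, corners, 0, 1, cv::NORM_MINMAX); // scale the response for display
    cv::imshow("Hi!", corners);

    if (cv::waitKey(delay) >= 0)
        stop = true;
}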