Output from calcOpticalFlowPyrLK function in OpenCV - C++

I'm new to OpenCV and image processing. I am trying to find moving objects using optical flow (the Lucas-Kanade method) by comparing two images saved on disk, which are frames from a camera.
The part of code I am asking about is:
Mat img = imread("t_mono1.JPG", CV_LOAD_IMAGE_UNCHANGED);//read the image
Mat img2 = imread("t_mono2.JPG", CV_LOAD_IMAGE_UNCHANGED);//read the 2nd image
Mat pts;
Mat pts2;
Mat stat;
Mat err;
goodFeaturesToTrack(img,pts,100,0.01,0.01);//extract features to track
calcOpticalFlowPyrLK(img,img2,pts,pts2,stat,err);//optical flow
cout<<" "<<pts.cols<<"\n"<<pts.rows<<"\n";
I am getting that the size of pts is 100 by 1. I expected it to be 100 by 2, since the pixels have x and y coordinates. Further, when I used for loops to display the contents of the arrays, the pts array was all zeros and all the arrays were one-dimensional.
I have seen this question:
OpenCV's calcOpticalFlowPyrLK throws exception
I tried using vectors as he did, but I get a build error saying it cannot convert between different data types. I am using VS2008 with OpenCV 2.4.11.
I would like to get the (x,y) coordinates of the features in the first and second images, along with the error, but all the arrays passed to calcOpticalFlowPyrLK were one-dimensional. I don't understand how this can be.
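For reference, the 100-by-1 size is actually expected: when these functions write points into a Mat, each element is a two-channel entry (CV_32FC2) holding x and y, so 100 points come back as 100 rows by 1 column rather than 100 by 2. The more idiomatic way to call them in OpenCV 2.4 is with vectors. A minimal sketch, assuming the same input files as above and loading them as grayscale, since goodFeaturesToTrack requires a single-channel image:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace cv;

int main()
{
    // Load both frames as grayscale; goodFeaturesToTrack needs 1 channel.
    Mat img  = imread("t_mono1.JPG", CV_LOAD_IMAGE_GRAYSCALE);
    Mat img2 = imread("t_mono2.JPG", CV_LOAD_IMAGE_GRAYSCALE);

    std::vector<Point2f> pts, pts2; // feature locations in frame 1 / frame 2
    std::vector<uchar> stat;        // 1 where the flow was found
    std::vector<float> err;         // per-feature tracking error

    goodFeaturesToTrack(img, pts, 100, 0.01, 0.01);
    calcOpticalFlowPyrLK(img, img2, pts, pts2, stat, err);

    for (size_t i = 0; i < pts2.size(); ++i)
        if (stat[i])
            std::cout << pts[i] << " -> " << pts2[i] << "\n";
    return 0;
}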

Related

Change columns order in opencv mat without copying data

I have an OpenCV Mat which has M rows and 3 columns. Is there a way to reorder the Mat so that the first and last (i.e., third) columns are switched while the middle column is kept in place, without copying the data?
An OpenCV Mat stores its data as a single array of pixels. You can sometimes obtain a column or rectangular view of an image (for example with col()); in such a view the data is not continuous and is addressed, as far as I know, with a step between rows, but the underlying data is shared and is still one array.
The question then becomes: can I swap two portions of an array without copying the data? Not as far as I know.
You can use optimized OpenCV functions to swap them, but the data will be copied.
Also note that OpenCV functions are considerably slower on non-continuous data than on continuous data.
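If a copy is acceptable, a minimal sketch of the swap for an M-by-3 Mat m (the function name is my own) could look like this:

void swapOuterColumns(cv::Mat& m)
{
    cv::Mat tmp = m.col(0).clone(); // clone() forces a real copy of the first column
    m.col(2).copyTo(m.col(0));      // third column into first
    tmp.copyTo(m.col(2));           // saved first column into third
}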
You can also use the OpenCV function flip for that: for a matrix with 3 columns, flipping about the vertical axis swaps the outer columns while leaving the middle one in place. As an example, the following code flips an image about its mid column:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img, flipped;
    img = imread("lena.jpg");
    flip(img, flipped, 1); // flipCode = 1 flips about the vertical axis
    imshow("img", flipped);
    waitKey(0);
    return 0;
}

Opencv Resize a stitch

I'm building a mosaic from a video in OpenCV. I'm using this example for stitching the frames of the video: http://docs.opencv.org/doc/tutorials/features2d/feature_detection/feature_detection.html. At the end I do the following to merge the new frame with the stitch created in the previous iteration:
Mat H = findHomography(obj, scene, CV_RANSAC);
static Mat rImg;
warpPerspective(vImg[0], rImg, H, Size(vImg[0].cols, vImg[0].rows), INTER_NEAREST);//(vImg[0], rImg, H, Size(vImg[0].cols * 2, vImg[0].rows * 2), CV_INTER_LINEAR);
static Mat final_img(Size(rImg.cols*2, rImg.rows*2), CV_8UC3);
static Mat roi1(final_img, Rect(0, 0, vImg[1].cols, vImg[1].rows));
Mat roi2(final_img, Rect(0, 0, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
vImg[1].copyTo(roi1);
imwrite("stitch.jpg", final_img);
vImg[0] = final_img;
So here's my problem: obviously the stitch becomes larger at each iteration, so how can I resize it to make it fit in the final_img image?
EDIT
Sorry but I had to remove images
For the second question, what you observe is an error in the estimated homography. This may come either from:
- drift (if you chain homographies along the sequence), i.e., small errors that accumulate and become large after dozens of frames;
- or (more likely) from your reference image being too old with respect to your new image, so that they exhibit too few matching points to give an accurate homography, yet enough to find one that passes the quality test inside cv::findHomography().
For your first question, you need to add some code that keeps track of the current bounds of the stitched image in a fixed coordinate frame. I would suggest choosing the coordinate frame linked to the first image. When you stitch a new image, what you really do is project that image onto this coordinate frame. You can, for example, first compute the projected coordinates of the 4 corners of the incoming frame, test whether they fit into the current stitching result, copy the result to a new (bigger) image if necessary, and then proceed with stitching the new image.
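A minimal sketch of that corner test, assuming H is the homography returned by findHomography and frameSize is the size of the incoming frame (the function name is my own):

cv::Rect projectedBounds(const cv::Mat& H, cv::Size frameSize)
{
    std::vector<cv::Point2f> corners(4), projected;
    corners[0] = cv::Point2f(0.f, 0.f);
    corners[1] = cv::Point2f((float)frameSize.width, 0.f);
    corners[2] = cv::Point2f((float)frameSize.width, (float)frameSize.height);
    corners[3] = cv::Point2f(0.f, (float)frameSize.height);

    cv::perspectiveTransform(corners, projected, H); // corners in stitch coordinates
    return cv::boundingRect(projected);              // axis-aligned bounds
}

If the returned rectangle is not contained in the current canvas, allocate a larger final_img, copy the old result into it, and only then warp the new frame.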

OpenCV: findContours exception

My MATLAB code is:
h = fspecial('average', filterSize);
imageData = imfilter(imageData, h, 'replicate');
bwImg = im2bw(imageData, grayThresh);
cDist=regionprops(bwImg, 'Area');
cDist=[cDist.Area];
The OpenCV code is:
cv::blur(dst, dst,cv::Size(filterSize,filterSize));
dst = im2bw(dst, grayThresh);
cv::vector<cv::vector<cv::Point> > contours;
cv::vector<cv::Vec4i> hierarchy;
cv::findContours(dst,contours,hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
Here is my image-to-black-and-white function:
cv::Mat AutomaticMacbethDetection::im2bw(cv::Mat src, double grayThresh)
{
    cv::Mat dst;
    // Note: a maxval of 1 makes the "white" pixels have value 1, which
    // displays as black in an 8-bit image; 255 is the usual choice.
    cv::threshold(src, dst, grayThresh, 1, CV_THRESH_BINARY);
    return dst;
}
I'm getting an exception in findContours(): C++ exception: cv::Exception at memory location 0x0000003F6E09E0A0.
Can you please explain what I am doing wrong? dst is a cv::Mat that I used all along; it holds my original values.
Update: here is my matrix written into a *.txt file:
http://www.filedropper.com/gili
UPDATE 2:
I have added dst.convertTo(dst, CV_8U); as Micka suggested, and I no longer get an exception. However, the values are nothing like what I expected.
Take a look at this question which has a similar problem to what you're encountering: Matlab and OpenCV calculate different image moment m00 for the same image.
Basically, the OP in the linked post is trying to find the zeroth image moment of all closed contours - which is actually just the area - using findContours in OpenCV and regionprops in MATLAB. In MATLAB, that quantity is accessed via the Area property from regionprops, and judging from your MATLAB code, you wish to find the same thing.
From the post, there is most certainly a difference in how OpenCV and MATLAB find contours in an image. This boils down to the way the two platforms decide what counts as a "connected pixel": OpenCV uses a four-pixel neighbourhood while MATLAB uses an eight-pixel neighbourhood.
As such, there is nothing wrong with your implementation, and converting to 8UC1 is good. However, the areas (and ultimately the total number of connected components and the contours themselves) found with MATLAB and OpenCV will not be the same. The only way to get exactly the same result is to manually draw the contours found by findContours onto a black image, and to use the cv::moments function directly on that image.
However, because cv::blur() is implemented differently from fspecial with an averaging mask that is even, you still may not get identical results along the borders of the image. If there are no important contours around the borders of your image, then hopefully this will give you the right result.
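A minimal sketch of that draw-and-measure workaround, assuming binaryImg is already a CV_8UC1 binary image (the function name is my own):

std::vector<double> contourAreasViaMoments(const cv::Mat& binaryImg)
{
    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::Mat work = binaryImg.clone(); // findContours modifies its input
    cv::findContours(work, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);

    std::vector<double> areas;
    for (size_t i = 0; i < contours.size(); ++i)
    {
        cv::Mat mask = cv::Mat::zeros(binaryImg.size(), CV_8UC1);
        cv::drawContours(mask, contours, (int)i, cv::Scalar(255), CV_FILLED);
        cv::Moments m = cv::moments(mask, true); // binaryImage = true
        areas.push_back(m.m00);                  // zeroth moment = pixel area
    }
    return areas;
}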
Good luck!

3 Channel image access using OpenCV

I have a 2D vector called Mat with values from 0 to 255 that I'm assigning to an IplImage as follows:
IplImage *A = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 1);
for (int i = 0; i < 480; i++)      // rows
{
    for (int j = 0; j < 640; j++)  // columns
    {
        // assumes widthStep == 640, i.e., no row padding
        A->imageData[i * 640 + j] = Mat[i][j];
    }
}
But what if I have three 2D vectors Mat1, Mat2, Mat3 and an IplImage whose number of channels is equal to 3:
IplImage *A = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
I thought I could do it channel by channel and merge them all at the end, but I really believe that's not the optimal solution.
Any idea how to access the imageData of the 3 channels in that case?
First, note that you can avoid writing the first loop entirely if Mat is contiguous in memory, by directly assigning the imageData struct member of IplImage. You'll have to use cvCreateImageHeader instead of cvCreateImage to avoid allocating data for the image.
Second, regarding your question: it is possible to do this by creating three single-channel images with the technique mentioned above, and then using cvMerge to produce the final interleaved image.
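A minimal sketch of that header-plus-cvMerge approach. Here blueData, greenData and redData are hypothetical flat 640x480 uchar buffers; note that a vector of vectors is not contiguous, so each plane must first live in one flat buffer:

IplImage *b = cvCreateImageHeader(cvSize(640, 480), IPL_DEPTH_8U, 1);
IplImage *g = cvCreateImageHeader(cvSize(640, 480), IPL_DEPTH_8U, 1);
IplImage *r = cvCreateImageHeader(cvSize(640, 480), IPL_DEPTH_8U, 1);
cvSetData(b, blueData, 640);  // step of 640 assumes no row padding
cvSetData(g, greenData, 640);
cvSetData(r, redData, 640);

IplImage *A = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
cvMerge(b, g, r, NULL, A);    // interleave the three planes into BGR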
In general, I recommend migrating to the C++ interface of OpenCV, which uses cv::Mat instead of the old IplImage interface.
If you look at the OpenCV tutorial for the C++ API, there is an example of working with Mat:
http://docs.opencv.org/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-iterator-safe-method
It presents three ways to access a 3-channel image.
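A minimal sketch of those three access styles on a CV_8UC3 image, shown here inverting the blue channel (the file name is just an example):

cv::Mat img = cv::imread("lena.jpg"); // 3 channels, interleaved as B,G,R

// 1. Raw row pointer (fastest)
for (int i = 0; i < img.rows; ++i)
{
    uchar* p = img.ptr<uchar>(i);
    for (int j = 0; j < img.cols; ++j)
        p[3 * j] = 255 - p[3 * j]; // blue component of pixel (i, j)
}

// 2. Iterator (safe)
for (cv::MatIterator_<cv::Vec3b> it = img.begin<cv::Vec3b>(); it != img.end<cv::Vec3b>(); ++it)
    (*it)[0] = 255 - (*it)[0];

// 3. at() (random access)
img.at<cv::Vec3b>(0, 0)[0] = 255 - img.at<cv::Vec3b>(0, 0)[0];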

splitting image results in unhandled exception error

I am currently planning on splitting my image into 3 channels so I can get the RGB values to plot a scatter graph, model them with a normal distribution, and calculate the covariance matrix, mean, etc.
Then I calculate the distance between the background points and the actual image to segment the image.
For the first task, I have written the following code:
VideoCapture cam(0);
//int id=0;
Mat image, Rch,Gch,Bch;
vector<Mat> rgb(3); //RGB is a vector of 3 matrices
namedWindow("window");
while(1)
{
cam>>image;
split(image,rgb);
Bch = rgb[0];
Gch = rgb[1];
Rch = rgb[2];
But as soon as it reaches the split function (I stepped through it), it causes an unhandled exception: access violation writing location 0xfeeefeee.
I am still new to OpenCV, so I am not used to dealing with unhandled exception errors.
Thanks.
It sounds as if split expects there to be three instances of Mat in the rgb vector, but you have only prepared the vector to hold three items; the Mats themselves are empty. Try adding three items to the vector and running again.
Although this is an old issue, I would like to share the solution that worked for me. Instead of vector<Mat> rgb(3); I used Mat channels[3];. I realized something was wrong with using a vector when I was unable to use split even on an image loaded with imread. Unfortunately, I cannot explain why this change works, but if someone can, that would be great.
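A minimal sketch of that array-based variant, assuming a frame grabbed the same way as in the question above:

cv::Mat image;
cam >> image;                   // or cv::imread(...) for a file on disk
if (!image.empty())
{
    cv::Mat channels[3];
    cv::split(image, channels); // OpenCV fills the plain array
    cv::Mat Bch = channels[0], Gch = channels[1], Rch = channels[2];
}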