Optical flow using OpenCV - C++

I am using the pyramidal Lucas-Kanade function of OpenCV to estimate the optical flow. I call cvGoodFeaturesToTrack and then cvCalcOpticalFlowPyrLK. This is my code:
while(1)
{
    ...
    cvGoodFeaturesToTrack(frameAth, eig_image, tmp_image, cornersA, &corner_count,
                          0.01, 5, NULL, 3, 0.4);
    std::cout << "CORNER COUNT AFTER GOOD FEATURES2TRACK CALL = " << corner_count << std::endl;
    cvCalcOpticalFlowPyrLK(frameAth, frameBth, pyrA, pyrB, cornersA, cornersB,
                           corner_count, cvSize(win_size, win_size), 5,
                           features_found, features_errors,
                           cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3),
                           CV_LKFLOW_PYR_A_READY | CV_LKFLOW_PYR_B_READY);
    cvCopy(frameBth, frameAth, 0);
    ...
}
frameAth is the previous gray frame and frameBth is the current gray frame from a webcam. But when I print the number of good features to track in each frame, the number decreases after some time and keeps decreasing. Yet if I terminate the program and run it again (without disturbing the webcam's field of view), many more points are reported as good features to track. How can the function give such a different number of points for the same field of view and the same scene? And the difference is large: for example, after 4 minutes of execution the number of good features to track is 20 or 50, but when the same program is terminated and executed again, the number is initially 500 to 700 and then slowly decreases again. I have been using OpenCV for the past 4 months, so I am still a little new to it. Please guide me or tell me where I can find a solution. Lots of thanks in advance.

You have to call cvGoodFeaturesToTrack once (at the beginning, before the loop) to detect good features to track, and then track those features using cvCalcOpticalFlowPyrLK. Take a look at the default OpenCV example: OpenCV/samples/cpp/lkdemo.cpp.
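A minimal sketch of that detect-once-then-track pattern, using the modern C++ API (the camera index, corner capacity, window size, and re-detection threshold below are illustrative assumptions, not values from the question):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame, gray, prevGray;
    std::vector<cv::Point2f> points, nextPoints;
    const size_t kMinPoints = 100;  // re-detect when fewer points survive (illustrative)

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (points.size() < kMinPoints) {
            // Detect fresh corners on demand, as lkdemo.cpp does.
            cv::goodFeaturesToTrack(gray, points, 500, 0.01, 5);
        } else {
            std::vector<uchar> status;
            std::vector<float> err;
            cv::calcOpticalFlowPyrLK(
                prevGray, gray, points, nextPoints, status, err,
                cv::Size(21, 21), 5,
                cv::TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 20, 0.3));
            points.clear();  // keep only the successfully tracked points
            for (size_t i = 0; i < nextPoints.size(); ++i)
                if (status[i]) points.push_back(nextPoints[i]);
        }
        gray.copyTo(prevGray);  // current frame becomes the previous frame
    }
    return 0;
}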

You are calling cvGoodFeaturesToTrack and passing corner_count by address. Its value is overwritten with the number of features actually found, so the maximum can only shrink from one call to the next. You have to reset corner_count to its initial value before calling cvGoodFeaturesToTrack in each iteration of the while loop.
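A minimal sketch of that reset, keeping the question's legacy C API and variable names (MAX_CORNERS is an assumed name for whatever capacity the cornersA array was allocated with):

const int MAX_CORNERS = 500;  // assumed allocation size of the cornersA array
int corner_count;

while(1)
{
    // ... grab frameAth / frameBth as in the question ...

    // cvGoodFeaturesToTrack treats corner_count as an in/out parameter: on input
    // it is the maximum number of corners to return, on output the number actually
    // found. Without this reset, the maximum shrinks from iteration to iteration.
    corner_count = MAX_CORNERS;
    cvGoodFeaturesToTrack(frameAth, eig_image, tmp_image, cornersA, &corner_count,
                          0.01, 5, NULL, 3, 0.4);

    // ... cvCalcOpticalFlowPyrLK and cvCopy as in the question ...
}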

Related

How to improve accuracy of estimateAffine2D (or estimateRigidTransform) in OpenCV?

I have two sets of points, one from time t-1 and one from the current time t. The first set was generated using goodFeaturesToTrack, and the latter using calcOpticalFlowPyrLK(). Using these two sets of points, I then estimate a transformation matrix via estimateAffinePartial2D() in order to keep track of its scale and rotation. A code snippet is listed below:
// Precompute image pyramids
maxLvl = cv::buildOpticalFlowPyramid(_imgPrev, imPyr1, _winSize, maxLvl, true);
maxLvl = cv::buildOpticalFlowPyramid(tmpImg, imPyr2, _winSize, maxLvl, true);

// Optical flow call for tracking pixels
cv::calcOpticalFlowPyrLK(imPyr1, imPyr2, _currentPoints, nextPts, status, err,
                         _winSize, maxLvl, _terminationCriteria, 0, 0.000001);

// Get transformation matrix between the two data sets
cv::Mat H = cv::estimateAffinePartial2D(_currentPoints, nextPts, inlier_mask,
                                        cv::RANSAC, 10.0, 2000, 0.99);
Using H, I then map my masking points using perspectiveTransform(). The result seems accurate for the first few dozen frames, until I notice some drift (in terms of rotation) when the object I am tracking continues to rotate (usually once the rotation becomes > M_PI). I'm honestly stumped about where the culprit is, but my main suspicion is that my window size for optical flow might be too small, or too big. However, tweaking the window size did not seem to help: the position of my object is still accurate, but the estimated rotation (and scale) got worse. Can anyone shed some light on this?
Warm regards and thanks.
EDIT: Images attached to show the drift issue:
[Image: starting frame]
[Image: first few frames -- rotation OK]
[Image: Z-rotation drift occurs -- the anchor line has drifted towards the red rectangle]
The Lucas-Kanade tracker needs more features; my guess is that the tracking template you provided is not feature-rich enough.
(1) Try other, feature-rich real images, e.g. the OpenCV feature tracking template image.
(2) Fix the scale. Since you are doing a simulation, you can try to anchor the size first (see the sketch below).
calcOpticalFlowPyrLK is widely used in visual-inertial state estimation work, such as SVO (semi-direct visual odometry) or VINS-Mono. You can look at the code inside those projects to see how other people tune the features and parameters.
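For point (2), one way to anchor the scale is to decompose the 2x3 matrix returned by estimateAffinePartial2D into its scale and rotation components and rebuild it with the scale pinned to 1. A sketch, assuming H is the CV_64F matrix from the snippet above (requires <cmath> and the OpenCV core headers):

// A partial affine has the form  H = [ s*cos(t)  -s*sin(t)  tx ]
//                                    [ s*sin(t)   s*cos(t)  ty ]
double a = H.at<double>(0, 0);            // s*cos(t)
double b = H.at<double>(1, 0);            // s*sin(t)
double scale = std::sqrt(a * a + b * b);  // recovered scale s
double angle = std::atan2(b, a);          // recovered rotation t, in radians

// Rebuild the transform with scale anchored to 1, keeping rotation and translation.
cv::Mat Hfixed = (cv::Mat_<double>(2, 3) <<
    std::cos(angle), -std::sin(angle), H.at<double>(0, 2),
    std::sin(angle),  std::cos(angle), H.at<double>(1, 2));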

How to use buildOpticalFlowPyramid?

I'm using OpenCV 3.3.1. I want to do a semi-dense optical flow operation using cv::calcOpticalFlowPyrLK, but I've been getting some really noticeable slowdown whenever my ROI is pretty big (partly because I am letting the user decide what the winSize should be, ranging from 10 to 100). Anyway, it seems like cv::buildOpticalFlowPyramid can mitigate the slowdown by building image pyramids? I'm somewhat familiar with what image pyramids are, but in the context of this function I'm especially confused about what parameters I pass in and how they impact my call to cv::calcOpticalFlowPyrLK. With that in mind, I now have this set of questions:
According to the documentation, the output is an OutputArrayOfArrays, which I take it can be a vector of cv::Mat objects. If so, what do I pass in to cv::calcOpticalFlowPyrLK for prevImg and nextImg (assuming that I need to make image pyramids for both)?
According to the docs for cv::buildOpticalFlowPyramid, you need to pass in a winSize parameter in order to calculate the required padding for the pyramid levels. If so, do you pass in the same winSize value when you eventually call cv::calcOpticalFlowPyrLK?
What exactly do the pyrBorder and derivBorder arguments do?
Lastly, and apologies if it sounds newbish, but what is the purpose of this function? I always assumed that cv::calcOpticalFlowPyrLK builds the image pyramids internally. Is it just to speed up the optical flow operation?
I hope my questions were clear; I'm still very new to OpenCV and computer vision, but this topic is very interesting.
Thank you for your time.
EDIT:
I used the function to see if my guess was correct; so far it has worked, but I've seen no noticeable speed-up. Below is how I used it:
// Building pyramids
int maxLvl = 3;
maxLvl = cv::buildOpticalFlowPyramid(imgPrev, imPyr1, cv::Size(searchSize, searchSize), maxLvl, true);
maxLvl = cv::buildOpticalFlowPyramid(tmpImg, imPyr2, cv::Size(searchSize, searchSize), maxLvl, true);
// LK optical flow call
cv::calcOpticalFlowPyrLK(imPyr1, imPyr2, currentPoints, nextPts, status, err,
                         cv::Size(searchSize, searchSize), maxLvl, termCrit, 0, 0.00001);
So now I'm wondering what's the purpose of preparing the image pyramids if calcOpticalFlowPyrLK does it internally?
So the point of your question is that you are trying to improve the speed of optical flow tracking by tuning your input parameters.
If you want the quick and dirty answer, here it is:
KLT (OpenCV's calcOpticalFlowPyrLK) defines a residual function, the sum of squared intensity differences between the window around a point in the first image and the window around its displaced position in the second image.
The goal is to find the displacement vector that minimizes this residual function.
So if you increase the search window size (winSize), minimizing the residual over that window becomes more expensive, and it is harder to find the best set of points.
If you really want to dig into the details, please read the official paper, section 2.4:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.185.585&rep=rep1&type=pdf
I took it from the official documentation:
https://docs.opencv.org/2.4/modules/video/doc/motion_analysis_and_object_tracking.html#bouguet00
Hope that helps.
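Regarding the EDIT's question about why you would prebuild pyramids at all: calcOpticalFlowPyrLK builds them internally when you pass raw images, but prebuilding lets you reuse them, e.g. the pyramid built for the current frame becomes the "previous" pyramid on the next iteration, so each frame is decomposed only once. A minimal sketch of that reuse (the camera index, feature parameters, and window size are illustrative assumptions):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame, gray;
    std::vector<cv::Mat> pyrPrev, pyrNext;
    std::vector<cv::Point2f> prevPts, nextPts;
    std::vector<uchar> status;
    std::vector<float> err;
    const cv::Size winSize(21, 21);  // per the docs, must be at least the winSize of the flow call
    int maxLvl = 3;

    // Seed with the first frame: pyramid plus features.
    cap.read(frame);
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    maxLvl = cv::buildOpticalFlowPyramid(gray, pyrPrev, winSize, maxLvl, true);
    cv::goodFeaturesToTrack(gray, prevPts, 500, 0.01, 5);

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        // Each new frame is decomposed into a pyramid exactly once...
        cv::buildOpticalFlowPyramid(gray, pyrNext, winSize, maxLvl, true);
        cv::calcOpticalFlowPyrLK(pyrPrev, pyrNext, prevPts, nextPts,
                                 status, err, winSize, maxLvl);
        // ...and reused as the "previous" pyramid on the next iteration.
        std::swap(pyrPrev, pyrNext);
        prevPts = nextPts;
    }
    return 0;
}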

OpenCV Mat scan random time stealing

My application is C++/OpenCV based, and it needs to detect an object in an image by threshold filtering. I divided the image into small strips for performance reasons, so I only scan the areas I need to. The image is 2400x1800 pixels. The strips are 1000x50. The image color space is HSV. As the desired object can be one of a few colors (for example 8), I run the filter 8 times per strip. So, in the application, I run the filter a few tens of times.
The application is time critical.
For most of the runs, the strip filter takes <<1 millisecond.
The problem: every few filter runs (between 10 and 40, depending on the strip size), the run takes 15 milliseconds (always the same 15 milliseconds)!
The total run, which should take 1-2 milliseconds, takes between 50 and 100 milliseconds, depending on how many of those 15-millisecond runs occurred.
The heart of the code, which accesses the Mat and causes the time stealing, looks like this:
for (int i = 0; i < cols; i++) {              // columns
    for (int j = 0; j < rows; j++) {          // rows
        p1i = img_hsv.at<uchar>(j, i*3 + 0);  // H
        p2i = img_hsv.at<uchar>(j, i*3 + 1);  // S
        p3i = img_hsv.at<uchar>(j, i*3 + 2);  // V
    }
}
Again, the rate of the time stealing increases as the strip size increases. I assume it has something to do with accessing the PC's memory resources. I have already tried changing the page size and defining the code as a critical section, with no success.
The application is Win32 XP or 7 based.
Appreciate your help.
Many thanks,
HBR.
It is normally not necessary to access pixels individually for filtering operations. You left out the details of your algorithm - maybe you can implement it using an OpenCV function like threshold and related ones, which work on the whole image. These methods are optimized for memory access, so you wouldn't have to spend time tracking down timing issues like this.
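For HSV band filtering specifically, cv::inRange is the usual whole-image primitive. A minimal sketch of applying it per strip (the ROI coordinates and HSV bounds are illustrative placeholders, not values from the question):

#include <opencv2/opencv.hpp>

// Produce a binary mask of pixels whose HSV values fall inside [lo, hi].
cv::Mat maskForColor(const cv::Mat& hsvStrip,
                     const cv::Scalar& lo, const cv::Scalar& hi) {
    cv::Mat mask;
    cv::inRange(hsvStrip, lo, hi, mask);  // 255 where lo <= pixel <= hi, else 0
    return mask;
}

// Usage on one 1000x50 strip of the full HSV image:
// cv::Mat strip = img_hsv(cv::Rect(0, 0, 1000, 50));  // ROI is a cheap view, no copy
// cv::Mat mask  = maskForColor(strip, cv::Scalar(20, 100, 100),
//                                     cv::Scalar(30, 255, 255));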

OpenCV C++: How to slow down background adaptation of BackgroundSubtractorMOG?

I am using BackgroundSubtractorMOG in OpenCV to track objects. When they appear, it works fine, but the background adapts quickly, so I cannot track static objects. How can I make the background adaptation slower? (I don't want it fully static, just slower.)
Setting the learning rate using the constructor doesn't change that:
BackgroundSubtractorMOG pBSMOG = BackgroundSubtractorMOG(???);
How can I solve this? Thanks!
The constructor is:
BackgroundSubtractorMOG(int history=200, int nmixtures=5, double backgroundRatio=0.7, double noiseSigma=0)
Where,
history – Length of the history.
nmixtures – Number of Gaussian mixtures.
backgroundRatio – Background ratio.
noiseSigma – Noise strength (standard deviation of the brightness or each color channel). 0 means some automatic value.
Increasing the history value will slow down the adaptation rate.
There is another function available in OpenCV:
Ptr<BackgroundSubtractorMOG2> createBackgroundSubtractorMOG2(int history=500, double varThreshold=16, bool detectShadows=true)
This is much faster than the previous one, and it can detect and eliminate shadows too.
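Note that you can also control the adaptation directly at runtime: apply() takes an optional learningRate argument. A minimal sketch using the MOG2 interface mentioned above (the camera index, history, and learning-rate values are illustrative):

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 =
        cv::createBackgroundSubtractorMOG2(/*history=*/1000,
                                           /*varThreshold=*/16,
                                           /*detectShadows=*/true);
    cv::Mat frame, fgMask;
    while (cap.read(frame)) {
        // A learningRate near 0 adapts very slowly, 0 freezes the model,
        // 1 reinitializes it from every frame, -1 picks an automatic rate.
        mog2->apply(frame, fgMask, /*learningRate=*/0.001);
    }
    return 0;
}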

SuperResolution nextFrame bug

In the super-resolution sample (gpu/super_resolution.cpp, built with the vc11 compiler), the following line:
// Ptr<SuperResolution> superRes;
superRes->nextFrame(result);
results in the following error (tried with multiple test videos):
http://i.imgbox.com/abwNaL3z.jpg
And if I change the optical flow method to simple, it takes forever to run (I stopped it after 30 minutes on an i7 2600K).
Any idea?
The BTV SuperResolution algorithm was designed for small input videos, and it uses a lot of memory for its internal buffers. Your video has a large resolution (768x576) and you are upscaling it by a factor of 4. Try reducing the scale factor, the temporal radius, or the input resolution (for example, upscale only a part of the frame).
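A minimal sketch of dialing those parameters down, assuming the OpenCV 3.x cv::superres interface (the file name and parameter values are illustrative):

#include <opencv2/opencv.hpp>
#include <opencv2/superres.hpp>

int main() {
    // Smaller scale and temporal radius shrink the internal buffers.
    cv::Ptr<cv::superres::SuperResolution> superRes =
        cv::superres::createSuperResolution_BTVL1();
    superRes->setScale(2);               // instead of 4
    superRes->setTemporalAreaRadius(2);  // fewer frames held in memory
    superRes->setInput(cv::superres::createFrameSource_Video("input.avi"));

    cv::Mat result;
    superRes->nextFrame(result);  // the first call is the slow one (initialization)
    return 0;
}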