In the super-resolution (gpu/super_resolution.cpp) sample (built with the vc11 compiler), the following line:
// Ptr<SuperResolution> superRes;
superRes->nextFrame(result);
results in the following error (tried with multiple test videos):
http://i.imgbox.com/abwNaL3z.jpg
and if I change the optical flow method to "simple", it takes forever to run (I stopped it after 30 minutes on an i7 2600K).
Any idea?
The BTV SuperResolution algorithm was designed for small input videos, and it uses a lot of memory for its internal buffers. Your video has a large resolution [768 x 576] and you are upscaling it by a factor of 4. Try reducing the scale factor, the temporal radius, or the input resolution (for example, upscale only a part of the frame).
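For what it's worth, here is a minimal sketch of dialing those parameters down (this assumes the cv::superres module as in OpenCV 3.x; the video path is just a placeholder):

#include <opencv2/superres.hpp>

int main()
{
    cv::Ptr<cv::superres::FrameSource> frameSource =
        cv::superres::createFrameSource_Video("input.avi"); // placeholder path

    cv::Ptr<cv::superres::SuperResolution> superRes =
        cv::superres::createSuperResolution_BTVL1();

    superRes->setScale(2);              // instead of 4: much smaller inner buffers
    superRes->setTemporalAreaRadius(2); // fewer frames held in memory (default is 4)
    superRes->setInput(frameSource);

    cv::Mat result;
    superRes->nextFrame(result); // the first call is the slow one (initialization)
    return 0;
}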
Related
I'm using the OpenCV library in a CUDA C++ environment to resize an image obtained by GPU processing.
A crucial step of the processing involves resampling the image and INVERTING the aspect ratio.
Example problem:
Resize a 2000 x 500 image into a 500 x 2000 image using CUDA
This is attempted by the following OpenCV command:
cv::cuda::resize(d_src,d_dst,cv::Size(500,2000),cv::INTER_CUBIC);
Where d_src and d_dst are properly allocated GpuMats of size 2000 x 500 and 500 x 2000 respectively.
The maximum permitted resize is a square of either 2000x2000 or 500x500. The function behaves as expected as long as the aspect ratio is not inverted. I have also attempted doing the interpolation in two steps, either by expansion or by reduction:
Going from 2000x500 to 2000x2000 to 500x2000.
cv::cuda::resize(d_src,d_buffer,cv::Size(2000,2000),cv::INTER_CUBIC);
cv::cuda::resize(d_buffer,d_dst,cv::Size(500,2000),cv::INTER_CUBIC);
Going from 2000x500 to 500x500 to 500x2000.
cv::cuda::resize(d_src,d_buffer,cv::Size(500,500),cv::INTER_CUBIC);
cv::cuda::resize(d_buffer,d_dst,cv::Size(500,2000),cv::INTER_CUBIC);
Both of these approaches fail, and they are not preferred anyway since they consume a considerable amount of extra GPU memory.
Has anyone experienced a similar problem with this function? Could somebody help me out?
Thank you in advance
Never mind, solved it. The problem seems to be the argument order: in the calls above, cv::INTER_CUBIC is passed where the fx scale factor is expected, so it never reaches the interpolation parameter. Passing fx and fy explicitly as 0 (they are then derived from the destination size) puts the flag in the right slot:
cv::cuda::resize(d_src,d_dst,d_dst.size(),0,0,cv::INTER_CUBIC);
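For anyone hitting the same thing, a self-contained sketch of the working call (sizes taken from the question; headers assume an OpenCV build with the cudawarping module):

#include <opencv2/core/cuda.hpp>
#include <opencv2/cudawarping.hpp>

int main()
{
    cv::Mat h_src(500, 2000, CV_8UC3);          // a 2000 x 500 source (rows, cols)
    cv::cuda::GpuMat d_src(h_src);              // upload to the GPU
    cv::cuda::GpuMat d_dst(2000, 500, CV_8UC3); // the 500 x 2000 destination

    // fx and fy are spelled out as 0 so that INTER_CUBIC lands in the
    // interpolation parameter instead of being consumed as the fx scale factor.
    cv::cuda::resize(d_src, d_dst, d_dst.size(), 0, 0, cv::INTER_CUBIC);
    return 0;
}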
I have two sets of points, one from time t-1 and one from the current time t. The first set was generated using goodFeaturesToTrack, and the latter using calcOpticalFlowPyrLK(). Using these two sets of points, I then estimate a transformation matrix via estimateAffinePartial2D() in order to keep track of its scale & rotation. A code snippet is listed below:
// Precompute image pyramids
maxLvl = cv::buildOpticalFlowPyramid(_imgPrev, imPyr1, _winSize, maxLvl, true);
maxLvl = cv::buildOpticalFlowPyramid(tmpImg, imPyr2, _winSize, maxLvl, true);
// Optical flow call for tracking pixels
cv::calcOpticalFlowPyrLK(imPyr1, imPyr2, _currentPoints, nextPts, status, err, _winSize, maxLvl, _terminationCriteria, 0, 0.000001);
// Get transformation matrix between the two data sets
cv::Mat H = cv::estimateAffinePartial2D(_currentPoints, nextPts, inlier_mask, cv::RANSAC, 10.0, 2000, 0.99);
Using H, I then map my masking points using perspectiveTransform(). The result seems accurate for the first few dozen frames, until I notice some drift (in terms of rotation) occurring when the object I am tracking continues to rotate (usually when the rotation exceeds M_PI). I'm honestly stumped about where the culprit is, but my main suspicion is that my window size for the optical flow might be too small or too big. However, tweaking the window size did not seem to help: the position of my object is still accurate, but the estimated rotation (and scale) got worse. Can anyone shed some light on this?
Warm regards and thanks.
EDIT: Images attached to show drift issue
Starting Frame
First few frames -- Rotation OK
Z-Rotation Drift occurs -- see anchor line has drifted towards the red rectangle.
The Lucas-Kanade tracker needs more features. My guess is that the tracking template you provided is not good enough.
(1) Try other feature-rich real images, e.g. the OpenCV feature tracking template image.
(2) Fix the scale. Since you are doing a simulation, you can try anchoring the size first.
calcOpticalFlowPyrLK is widely used in visual-inertial state estimation projects such as SVO (semi-direct visual odometry) or VINS-Mono. You can look at the code in those projects to see how other people handle the features and parameters; one common tactic is sketched below.
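As an illustration of point (1), here is a minimal hedged sketch (the function name and thresholds are mine, not from the question) that re-detects features whenever too many are lost, so estimateAffinePartial2D always has enough inliers to work with:

#include <vector>
#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>

void trackFrame(const cv::Mat& prevGray, const cv::Mat& currGray,
                std::vector<cv::Point2f>& points)
{
    const size_t minPoints = 200; // assumed threshold; tune for your scene

    // Replenish the point set when LK has dropped too many features.
    if (points.size() < minPoints)
        cv::goodFeaturesToTrack(prevGray, points, 500, 0.01, 7);

    std::vector<cv::Point2f> nextPts;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, points, nextPts, status, err,
                             cv::Size(21, 21), 3);

    // Keep only the successfully tracked points for the next iteration.
    std::vector<cv::Point2f> kept;
    for (size_t i = 0; i < nextPts.size(); ++i)
        if (status[i])
            kept.push_back(nextPts[i]);
    points = kept;
}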
I have a video source that produces many streams for different devices (HD televisions, pads, smartphones, etc.), and each of them has to be checked against the others for similarity. Each video stream releases 50 images per second, one image every 20 milliseconds.
Let's take, for instance, img1 coming from stream1 at time ts=1, img2 coming from stream2 at ts=1, and img1.1 taken from stream1 at ts=2 (20 milliseconds later than ts=1). The comparison results should look something like this:
compare(img1, img1) = 1.0 (same image, same size)
compare(img1, img2) = 0.9 (same content, different size)
compare(img1, img1.1) = 0.8 (different content, same size)
Ideally this should be done in real time, i.e. within 20 milliseconds. The goal is to understand whether the streams are out of synchronization. I have already implemented some comparison methods (none of them works for this case yet):
1) histogram (SSE and OpenCV CUDA); result: compare(img1, img2) ~= compare(img1, img1.1)
2) PSNR (SSE and OpenCV CUDA); result: compare(img1, img2) < compare(img1, img1.1)
3) SSIM (SSE and OpenCV CUDA); same result as PSNR
Maybe I get bad results because of the resize interpolation method?
Is it possible to build a comparison method that fulfills my requirements? Any ideas?
I'm afraid that you're running into a Real Problem (TM). This is not a trivial lets-give-it-to-the-intern problem.
The main challenge is that you can't do a brute-force comparison. HD images are 3 MB or more, and you're talking about O(N*M) comparisons (in time and across streams).
What you essentially need is a fingerprint that's robust against resizing but time-variant. And as you didn't realize that (the histogram idea, for instance, is quite time-stable), you didn't include the necessary information in this question.
So this isn't a C++ question, really. You need to understand your inputs.
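That said, one classic candidate for a resize-robust but content-sensitive fingerprint is a pHash-style hash. A rough sketch (the 32x32 and 8x8 sizes are conventional choices, not requirements):

#include <bitset>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Fingerprint that survives resizing: everything is normalized to 32x32,
// then only the coarsest DCT frequencies are kept.
std::bitset<64> fingerprint(const cv::Mat& bgr)
{
    cv::Mat gray, shrunk, f, freq;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::resize(gray, shrunk, cv::Size(32, 32), 0, 0, cv::INTER_AREA);
    shrunk.convertTo(f, CV_32F);
    cv::dct(f, freq);

    // Keep the 8x8 block of lowest frequencies and threshold by its mean.
    cv::Mat low = freq(cv::Rect(0, 0, 8, 8)).clone();
    float mean = (float)cv::mean(low)[0];

    std::bitset<64> hash;
    for (int i = 0; i < 64; ++i)
        hash[i] = low.at<float>(i / 8, i % 8) > mean;
    return hash;
}

// Similarity in [0, 1]: 1.0 means identical fingerprints.
double similarity(const std::bitset<64>& a, const std::bitset<64>& b)
{
    return 1.0 - (double)(a ^ b).count() / 64.0;
}

Comparing 64-bit hashes is cheap enough to do across all stream pairs within a 20 ms budget; whether the 0.9 vs 0.8 ordering from your example holds is something you would have to validate on your actual footage.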
My application is C++ and OpenCV based, and needs to detect an object in an image by threshold filtering. I divided the image into small strips for performance reasons, so that I only scan the areas I need to. The image is 2400x1800 pixels; the strips are 1000x50. The image color space is HSV. As the desired object can be one of a few colors (for example 8), I run the filter 8 times per strip, so across the application I run the filter a few tens of times.
The application is time critical.
For most of the runs, the strip filter takes <<1 millisecond.
The problem: every few filter runs (anywhere between 10 and 40, depending on the strip size), one run takes 15 milliseconds (always the same 15 milliseconds)!
The total pass, which should finish in 1-2 milliseconds, takes between 50 and 100 milliseconds, depending on how many 15-millisecond runs occurred.
The heart of the code, which accesses the Mat and is where the time disappears, looks like this:
for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
        p1i = img_hsv.at<uchar>(j, i*3 + 0); // H
        p2i = img_hsv.at<uchar>(j, i*3 + 1); // S
        p3i = img_hsv.at<uchar>(j, i*3 + 2); // V
    }
}
Again, the rate of these stalls increases as the strip size increases. I assume it has something to do with how the PC's memory resources are accessed. I have already tried changing the page size and declaring the code a critical section, with no success.
The application runs on Win32, on XP or 7.
Appreciate your help.
Many thanks,
HBR.
It is normally not necessary to access pixels individually for filtering operations. You left out the details of your algorithm, but maybe you can implement it using an OpenCV function like threshold or inRange and related functions, which work on the whole image (or a whole strip) at once. These methods are optimized for memory access, so you wouldn't have to spend time tracking down timing issues like this.
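If the filter is essentially a per-color range test, a minimal sketch with cv::inRange would look like this (the HSV bounds are placeholders, not your actual colors):

#include <opencv2/imgproc.hpp>

// One whole-strip threshold per color instead of per-pixel at<uchar>() access.
cv::Mat maskForColor(const cv::Mat& stripHsv)
{
    cv::Mat mask;
    cv::inRange(stripHsv,
                cv::Scalar(20, 100, 100),  // lower H, S, V bound (placeholder)
                cv::Scalar(35, 255, 255),  // upper H, S, V bound (placeholder)
                mask);                     // mask is 255 where the pixel is in range
    return mask;
}

You would call this once per color per strip and then examine the mask (e.g. with cv::countNonZero) instead of touching individual pixels.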
I am using the pyramidal Lucas-Kanade function of OpenCV to estimate the optical flow. I call cvGoodFeaturesToTrack and then cvCalcOpticalFlowPyrLK. This is my code:
while (1)
{
    ...
    cvGoodFeaturesToTrack(frameAth, eig_image, tmp_image, cornersA,
                          &corner_count, 0.01, 5, NULL, 3, 0.4);
    std::cout << "CORNER COUNT AFTER GOOD FEATURES2TRACK CALL = "
              << corner_count << std::endl;
    cvCalcOpticalFlowPyrLK(frameAth, frameBth, pyrA, pyrB, cornersA, cornersB,
                           corner_count, cvSize(win_size, win_size), 5,
                           features_found, features_errors,
                           cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3),
                           CV_LKFLOW_PYR_A_READY | CV_LKFLOW_PYR_B_READY);
    cvCopy(frameBth, frameAth, 0);
    ...
}
frameAth is the previous gray frame and frameBth is the current gray frame from a webcam. When I output the number of good features to track in each frame, the number decreases after some time and keeps decreasing. But if I terminate the program and run it again (without disturbing the webcam's field of view), a much larger number of points is reported as good features to track. How can the function give such a different number of points for the same field of view and the same scene? And the difference is large: after 4 minutes of execution the number of good features to track is 20 or 50, but when the same program is terminated and executed again, the number is initially 500 to 700 and then slowly decreases again. I have been using OpenCV for the past 4 months, so I am a little new to it. Please guide me or tell me where I can find a solution. Lots of thanks in advance.
You have to call cvGoodFeaturesToTrack once (at the beginning, before the loop) to detect good features to track, and then track those features using cvCalcOpticalFlowPyrLK. Take a look at the default OpenCV example: OpenCV/samples/cpp/lkdemo.cpp.
You are calling cvGoodFeaturesToTrack and passing corner_count by reference. Its value decreases if fewer features are found. You have to reset corner_count to its initial value before calling cvGoodFeaturesToTrack in each iteration of the while loop.
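A minimal sketch of that fix against the loop from the question (MAX_CORNERS is an assumed name for the capacity of the cornersA array):

const int MAX_CORNERS = 500;
int corner_count = MAX_CORNERS;

while (1)
{
    ...
    corner_count = MAX_CORNERS; // reset: cvGoodFeaturesToTrack shrinks it in place
    cvGoodFeaturesToTrack(frameAth, eig_image, tmp_image, cornersA,
                          &corner_count, 0.01, 5, NULL, 3, 0.4);
    ...
}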