Speed up OpenCV - C++

I am using OpenCV 2.4 (C++) for line finding on grayscale images. This involves some basic image processing steps such as blurring, thresholding, the Canny edge detector, gradient filtering, and the Hough transform. I have to apply the line-finding algorithm to thousands of images.
Is there a way to speed up the calculation considering the large number of images?
Would any of the following help: Intel TBB, IPP, or the OpenCV GPU module?
I have heard that the OpenCV GPU module can speed up calculations, but the data transfer is slow. So using the GPU might not be the right choice here?
Thank You!
EDIT:
Is there any sense in using parallel_for from TBB to speed up image processing? If I use a for loop like this:
for (int i = 0; i < image_location.size(); ++i)
{
    Mat img = imread(image_location[i]);
    blur(img, ...);
    threshold(img, ...);
    ...
}
Can I improve performance by using parallel_for instead? Can anyone provide an example of how to use parallel_for together with some OpenCV operations?
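
For reference, here is a rough sketch of what such a loop could look like with OpenCV's own cv::parallel_for_ (which runs on top of TBB when OpenCV is built with it). Each iteration only reads and writes its own Mat, so the body parallelizes cleanly; the grayscale flag, kernel size and thresholds below are placeholders, not values taken from your code.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <string>
#include <vector>

class ProcessImages : public cv::ParallelLoopBody
{
public:
    ProcessImages(const std::vector<std::string>& paths) : paths_(paths) {}

    // Called concurrently on disjoint sub-ranges of [0, paths.size())
    virtual void operator()(const cv::Range& range) const
    {
        for (int i = range.start; i < range.end; ++i)
        {
            cv::Mat img = cv::imread(paths_[i], CV_LOAD_IMAGE_GRAYSCALE);
            if (img.empty())
                continue;

            cv::Mat blurred, bin, edges;
            cv::blur(img, blurred, cv::Size(3, 3));
            cv::threshold(blurred, bin, 100, 255, cv::THRESH_BINARY);
            cv::Canny(bin, edges, 50, 150);
            // ... Hough transform / line finding for this image ...
        }
    }

private:
    const std::vector<std::string>& paths_;
};

// Usage, replacing the serial loop:
// cv::parallel_for_(cv::Range(0, (int)image_location.size()),
//                   ProcessImages(image_location));

Keep in mind that reading thousands of images is often I/O-bound, so it is worth measuring whether the disk or the processing is actually the limiting factor.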

The scope of your question is virtually unbounded.
First of all, have you measured the performance of your application to find the actual bottleneck(s)? My guess would be the Hough transform, but who knows what else your code is doing. Now, if the Hough transform is the slow piece, and supposing OpenCV already has a fast implementation of it, then that is why the question is problematic: swapping in a somewhat better implementation doesn't help much once you increase your already large number of images; the problem is in the approach itself.
Do you really need to use Hough? Maybe you could achieve something similar or better using morphological operators? Are the images from some common domain? Can you include examples of them? Etc.

Related

Most efficient way to blur an image in opencv

I am blurring the background of an image using the blur method. All the tutorials I have seen show a maximum kernel size of (7,7), but that is not blurred enough for what I need.
I have used Size(33,33) and it works all right, but I would like to go higher, so currently I am using Size(77,77). Is this the most efficient way of blurring an image in OpenCV? And is it okay to go that high at all?
Another idea is to run the blur method more than once with a kernel size of (7,7), but that doesn't seem any more efficient.
EDIT:
OpenCV version 3.2
Try cv::stackBlur().
It was added in v4.7.0. Its performance is almost flat, i.e. independent of kernel size. The pull request contains performance figures: https://github.com/opencv/opencv/pull/20379
GaussianBlur(sigmaX=22): 30 ms
stackBlur(ksize=(101,101)): 0.4 ms
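
A minimal usage sketch, assuming OpenCV 4.7 or later (the file names are just examples; the kernel dimensions must be odd):

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat src = cv::imread("background.jpg");   // example input
    cv::Mat dst;

    // Kernel sides must be odd; runtime stays roughly constant as they grow.
    cv::stackBlur(src, dst, cv::Size(101, 101));

    cv::imwrite("background_blurred.jpg", dst);
    return 0;
}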

calcOpticalFlowPyrLK function in OpenCV 3.0

I'm trying to track something across some frames. I know calcOpticalFlowPyrLK is supposed to be used for sparse tracking problems. However, I thought it wouldn't really hurt if I just tried to track all pixels in the frames.
My video frames are actually very stable (the motion is barely visible to the eye), and calcOpticalFlowPyrLK works well for most pixels. But for some pixels it returns really big flow vectors (like [200,300]), which doesn't make sense.
I also found a Matlab implementation that uses the same pyramidal Lucas-Kanade algorithm, but the Matlab version doesn't return any crazy values.
So I'm wondering what causes the OpenCV function to return such huge, unreasonable values. Is it because the matrix inversion is done differently?
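
Not an answer about the numerical details, but for what it's worth: the huge vectors are normally filtered out using the status and err outputs that calcOpticalFlowPyrLK fills per point, plus a sanity check on the displacement. A rough sketch (window size, error and displacement thresholds are arbitrary, not taken from the question):

#include <opencv2/video/tracking.hpp>
#include <vector>

// prev, next: consecutive CV_8UC1 frames; pts: the pixels being tracked
std::vector<cv::Point2f> trackFiltered(const cv::Mat& prev, const cv::Mat& next,
                                       const std::vector<cv::Point2f>& pts)
{
    std::vector<cv::Point2f> nextPts;
    std::vector<uchar> status;
    std::vector<float> err;

    cv::calcOpticalFlowPyrLK(prev, next, pts, nextPts, status, err,
                             cv::Size(21, 21), 3);

    std::vector<cv::Point2f> good;
    for (size_t i = 0; i < nextPts.size(); ++i)
    {
        // Drop points the tracker lost, points with a large patch error,
        // and implausibly long flow vectors for a nearly static scene.
        cv::Point2f d = nextPts[i] - pts[i];
        if (!status[i] || err[i] > 12.0f || d.x * d.x + d.y * d.y > 20.0f * 20.0f)
            continue;
        good.push_back(nextPts[i]);
    }
    return good;
}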

FFT based image registration (optionally using OpenCV) in cpp?

I'm trying to align two images taken from a handheld camera.
At first, I was trying to use the OpenCV warpPerspective method based on SIFT/SURF feature points. The problem is that the feature extraction and matching process can be extremely slow when the images are large (3000x4000). I tried scaling the images down before finding feature points, but the result is not as good as before. (The Mat generated by findHomography shouldn't be affected by scaling down the image, right?) And sometimes, due to a lack of good feature point matches, the result is quite strange.
After searching on this topic, it seems that solving the problem in Fourier domain will speed up the registration process. And I've found this question which leads me to the code here.
The only problem is that the code is written in Python with NumPy (not even using OpenCV), which makes it quite hard to rewrite as C++ code using OpenCV (in OpenCV I can only find dft; there is no fftshift or other fft helpers, I'm not very familiar with NumPy, and I'm not brave enough to simply ignore the missing methods). So I'm wondering why there is no such Fourier-domain image registration implementation in C++.
Can you give me some suggestions on how to implement one, point me to an already implemented C++ version, or help me turn the Python code into C++?
Big thanks!
I'm fairly certain that the FFT method can only recover a similarity transform, that is, only a (2d) rotation, translation and scale. Your results might not be that great using a handheld camera.
This is not quite a direct answer to your question, but, as a suggestion for a speed improvement, have you tried using a faster feature detector and descriptor? In OpenCV, SIFT/SURF are among the slowest feature extraction/matching methods available. You could try some of the other methods first; they all work quite well and are faster than SIFT/SURF, especially if you use the FLANN-based matcher.
I've had to do this in the past with similar sized imagery, and using the binary descriptors OpenCV has increases the speed significantly.
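Something along these lines, for example (a sketch, not the poster's actual code): ORB descriptors are binary, so they can be matched with FLANN's LSH index; the feature count, LSH parameters, and ratio-test threshold are arbitrary. Assuming OpenCV 3.x or later.

#include <opencv2/calib3d.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Estimate the homography between two images with ORB + FLANN (LSH index).
cv::Mat findHomographyOrb(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);     // feature count is arbitrary
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // LSH works on binary descriptors such as ORB/BRIEF.
    cv::FlannBasedMatcher matcher(cv::makePtr<cv::flann::LshIndexParams>(12, 20, 2));
    std::vector<std::vector<cv::DMatch> > knn;
    matcher.knnMatch(desc1, desc2, knn, 2);

    // Lowe's ratio test to keep only distinctive matches.
    std::vector<cv::Point2f> pts1, pts2;
    for (size_t i = 0; i < knn.size(); ++i)
    {
        if (knn[i].size() == 2 && knn[i][0].distance < 0.75f * knn[i][1].distance)
        {
            pts1.push_back(kp1[knn[i][0].queryIdx].pt);
            pts2.push_back(kp2[knn[i][0].trainIdx].pt);
        }
    }
    if (pts1.size() < 4)
        return cv::Mat();                             // not enough matches
    return cv::findHomography(pts1, pts2, cv::RANSAC);
}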
If you need only shift you can use OpenCV's phasecorrelate
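That is, something like the following (a sketch assuming OpenCV 3.x or later; phaseCorrelate wants single-channel float images of identical size, and the Hanning window is optional but reduces edge effects):

#include <cstdio>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    // Example file names
    cv::Mat a = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat b = cv::imread("frame2.png", cv::IMREAD_GRAYSCALE);

    cv::Mat a32, b32, window;
    a.convertTo(a32, CV_32F);
    b.convertTo(b32, CV_32F);
    cv::createHanningWindow(window, a.size(), CV_32F);

    // Sub-pixel translation of b relative to a.
    cv::Point2d shift = cv::phaseCorrelate(a32, b32, window);
    std::printf("shift: %.3f %.3f\n", shift.x, shift.y);
    return 0;
}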

Speedup Image comparison

I'm looking for an algorithm that can do image comparison in real time, basically on images acquired from a webcam (around 30 frames/second). My current implementation is pretty slow; I tried to improve it by dropping a few frames and reducing the resolution, but with no success.
So I'm exploring options such as better algorithms like key-point matching. On a different note, I'm also looking for GPU-based image comparison sample implementations (using either DirectX or OpenGL APIs).
Have you tried Perceptual Image Diff?
I didn't read the entire thread, but it may help you somehow:
Image comparison - fast algorithm

How do I do high quality scaling of an image?

I'm writing some code to scale a 32-bit RGBA image in C/C++. I have written a few attempts that have been somewhat successful, but they're slow and, most importantly, the quality of the resized image is not acceptable.
I compared the same image scaled by OpenGL (i.e. my video card) and my routine and it's miles apart in quality. I've Google Code Searched, scoured source trees of anything I thought would shed some light (SDL, Allegro, wxWidgets, CxImage, GD, ImageMagick, etc.) but usually their code is either convoluted and scattered all over the place or riddled with assembler and little or no comments. I've also read multiple articles on Wikipedia and elsewhere, and I'm just not finding a clear explanation of what I need. I understand the basic concepts of interpolation and sampling, but I'm struggling to get the algorithm right. I do NOT want to rely on an external library for one routine and have to convert to their image format and back. Besides, I'd like to know how to do it myself anyway. :)
I have seen a similar question asked on stack overflow before, but it wasn't really answered in this way, but I'm hoping there's someone out there who can help nudge me in the right direction. Maybe point me to some articles or pseudo code... anything to help me learn and do.
Here's what I'm looking for:
No assembler (I'm writing very portable code for multiple processor types).
No dependencies on external libraries.
I am primarily concerned with scaling DOWN, but will also need to write a scale up routine later.
Quality of the result and clarity of the algorithm is most important (I can optimize it later).
My routine essentially takes the following form:
DrawScaled(uint32 *src, uint32 *dst,
           src_x, src_y, src_w, src_h,
           dst_x, dst_y, dst_w, dst_h);
Thanks!
UPDATE: To clarify, I need something more advanced than a box resample for downscaling, which blurs the image too much. I suspect what I want is some kind of bicubic (or other) filter that is somewhat the reverse of a bicubic upscaling algorithm (i.e. each destination pixel is computed from all contributing source pixels, combined with a weighting algorithm that keeps things sharp).
Example
Here's an example of what I'm getting from the wxWidgets BoxResample algorithm vs. what I want on a 256x256 bitmap scaled to 55x55.
www.free_image_hosting.net/uploads/1a25434e0b.png
And finally:
www.free_image_hosting.net/uploads/eec3065e2f.png
the original 256x256 image
I've found the wxWidgets implementation fairly straightforward to modify as required. It is all C++ so no problems with portability there. The only difference is that their implementation works with unsigned char arrays (which I find to be the easiest way to deal with images anyhow) with a byte order of RGB and the alpha component in a separate array.
If you refer to the "src/common/image.cpp" file in the wxWidgets source tree there is a down-sampler function which uses a box sampling method "wxImage::ResampleBox" and an up-scaler function called "wxImage::ResampleBicubic".
A fairly simple and decent algorithm for resampling images is bicubic interpolation; Wikipedia alone has all the info you need to get this implemented.
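For reference, the core of it is just a cubic weighting function applied to the 4x4 neighbourhood of source pixels around each sampling position. A sketch of the usual cubic convolution kernel with a = -0.5 (the Catmull-Rom variant), applied separably, one axis at a time:

#include <cmath>

// Cubic convolution kernel (a = -0.5): weight of a source pixel that lies
// |x| pixels from the sampling position; zero beyond a distance of 2.
double cubicWeight(double x)
{
    const double a = -0.5;
    x = std::fabs(x);
    if (x < 1.0)
        return (a + 2.0) * x * x * x - (a + 3.0) * x * x + 1.0;
    if (x < 2.0)
        return a * x * x * x - 5.0 * a * x * x + 8.0 * a * x - 4.0 * a;
    return 0.0;
}

// One interpolated sample along one axis: p[0..3] are four consecutive
// source values and t in [0,1) is the fractional position between p[1]
// and p[2]. Run this over rows, then over columns, for 2-D bicubic.
double cubicSample(const double p[4], double t)
{
    return p[0] * cubicWeight(t + 1.0) + p[1] * cubicWeight(t) +
           p[2] * cubicWeight(t - 1.0) + p[3] * cubicWeight(t - 2.0);
}

For upscaling this is enough; when minifying by a large factor the kernel additionally has to be stretched by the scale factor (or the image pre-filtered), otherwise the result aliases.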
Is it possible that OpenGL is doing the scaling in the vector domain? If so, there is no way that any pixel-based scaling is going to be near it in quality. This is the big advantage of vector based images.
The bicubic algorithm can be tuned for sharpness vs. artifacts - I'm trying to find a link, I'll edit it in when I do.
Edit: It was the Mitchell-Netravali work that I was thinking of, which is referenced at the bottom of this link:
http://www.cg.tuwien.ac.at/~theussl/DA/node11.html
You might also look into Lanczos resampling as an alternative to bicubic.
Now that I see your original image, I think that OpenGL is using a nearest neighbor algorithm. Not only is it the simplest possible way to resize, but it's also the quickest. The only downside is that it looks very rough if there's any detail in your original image.
The idea is to take evenly spaced samples from your original image; in your case, 55 out of 256, or one out of every 4.6545. Just round the number to get the pixel to choose.
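In code the idea is roughly this (a standalone sketch with packed 32-bit RGBA buffers, ignoring the src_x/src_y/dst_x/dst_y offsets from your DrawScaled signature):

#include <cstdint>

// Nearest-neighbour scaling: pick one source pixel per destination pixel.
void scaleNearest(const uint32_t *src, int src_w, int src_h,
                  uint32_t *dst, int dst_w, int dst_h)
{
    for (int y = 0; y < dst_h; ++y)
    {
        // Map the centre of the destination row back to the nearest source row.
        int sy = (int)((y + 0.5) * src_h / dst_h);
        for (int x = 0; x < dst_w; ++x)
        {
            int sx = (int)((x + 0.5) * src_w / dst_w);
            dst[y * dst_w + x] = src[sy * src_w + sx];
        }
    }
}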
Try using the Adobe Generic Image Library ( http://opensource.adobe.com/wiki/display/gil/Downloads ) if you want something ready and not only an algorithm.
Extract from: http://www.catenary.com/howto/enlarge.html#c
Enlarge or Reduce - the C Source Code
Requires Victor Image Processing Library for 32-bit Windows v 5.3 or higher.
int enlarge_or_reduce(imgdes *image1)
{
    imgdes timage;
    int dx, dy, rcode, pct = 83; // 83% of original size

    // Allocate space for the new image
    dx = (int)(((long)(image1->endx - image1->stx + 1)) * pct / 100);
    dy = (int)(((long)(image1->endy - image1->sty + 1)) * pct / 100);
    if((rcode = allocimage(&timage, dx, dy,
            image1->bmh->biBitCount)) == NO_ERROR) {
        // Resize image into timage
        if((rcode = resizeex(image1, &timage, 1)) == NO_ERROR) {
            // Success, free source image
            freeimage(image1);
            // Assign timage to image1
            copyimgdes(&timage, image1);
        }
        else // Error in resizing image, release timage memory
            freeimage(&timage);
    }
    return(rcode);
}
This example resizes an image area and replaces the original image with the new image.
Intel has the IPP libraries, which provide high-speed interpolation algorithms optimized for Intel processors. They are very good, but they are not free. Take a look at the following link:
Intel IPP
A generic article from our beloved host: Better Image Resizing, discussing the relative qualities of various algorithms (and it links to another CodeProject article).
It sounds like what you're really having difficulty understanding is the discrete -> continuous -> discrete flow involved in properly resampling an image. A good tech report that might help give you the insight into this that you need is Alvy Ray Smith's A Pixel Is Not A Little Square.
Take a look at ImageMagick, which does all kinds of rescaling filters.
As a follow-up, Jeremy Rudd posted this article above. It implements filtered two-pass resizing. The source is C#, but it looks clear enough that I can port it to give it a try. I found very similar C code yesterday that was much harder to understand (very bad variable names). I got it to sort of work, but it was very slow and did not produce good results, which led me to believe there was an error in my adaptation. I may have better luck writing it from scratch with this as a reference, which I'll try.
But considering how the two pass algorithm works I wonder if there isn't a faster way of doing it, perhaps even in one pass?
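
For what it's worth, here is a sketch of the two-pass structure (my own illustration, not Jeremy Rudd's C# code): resize all rows with a 1-D filter into an intermediate image, then resize the columns of that intermediate. A tent (triangle) filter is used here to keep it short; the same skeleton takes a bicubic or Lanczos weight function. Single float channel only.

#include <cmath>
#include <vector>

// One-dimensional resize of every row: 'in' is in_w x h, 'out' is out_w x h.
// The filter radius grows with the scale factor, so downscaling averages all
// contributing source pixels instead of skipping them.
static void resizeRows(const std::vector<float>& in, int in_w, int h,
                       std::vector<float>& out, int out_w)
{
    double scale = (double)in_w / out_w;
    double radius = (scale > 1.0) ? scale : 1.0;
    out.assign((size_t)out_w * h, 0.0f);

    for (int x = 0; x < out_w; ++x)
    {
        double center = (x + 0.5) * scale - 0.5;
        int lo = (int)std::ceil(center - radius);
        int hi = (int)std::floor(center + radius);

        // Precompute the filter weights for this destination column.
        std::vector<double> w;
        double wsum = 0.0;
        for (int sx = lo; sx <= hi; ++sx)
        {
            double d = std::fabs(sx - center) / radius;   // 0 at the centre
            double wt = (d < 1.0) ? 1.0 - d : 0.0;        // triangle filter
            w.push_back(wt);
            wsum += wt;
        }

        for (int y = 0; y < h; ++y)
        {
            double acc = 0.0;
            for (int sx = lo; sx <= hi; ++sx)
            {
                int cx = sx < 0 ? 0 : (sx >= in_w ? in_w - 1 : sx);  // clamp edges
                acc += in[(size_t)y * in_w + cx] * w[sx - lo];
            }
            out[(size_t)y * out_w + x] = (float)(acc / wsum);
        }
    }
}

// Full two-pass resize: horizontal pass, then the same routine run on the
// transposed intermediate for the vertical pass.
void resizeTwoPass(const std::vector<float>& src, int src_w, int src_h,
                   std::vector<float>& dst, int dst_w, int dst_h)
{
    std::vector<float> tmp;                       // dst_w x src_h intermediate
    resizeRows(src, src_w, src_h, tmp, dst_w);

    // Transpose, resize the (former) columns as rows, then transpose back.
    std::vector<float> tmpT((size_t)src_h * dst_w), dstT;
    for (int y = 0; y < src_h; ++y)
        for (int x = 0; x < dst_w; ++x)
            tmpT[(size_t)x * src_h + y] = tmp[(size_t)y * dst_w + x];

    resizeRows(tmpT, src_h, dst_w, dstT, dst_h);  // dstT: final image, transposed

    dst.assign((size_t)dst_w * dst_h, 0.0f);
    for (int x = 0; x < dst_w; ++x)
        for (int y = 0; y < dst_h; ++y)
            dst[(size_t)y * dst_w + x] = dstT[(size_t)x * dst_h + y];
}

As for doing it in one pass: a single-pass version has to evaluate the full 2-D footprint of the filter for every destination pixel, whereas the separable version only evaluates two 1-D footprints, so the two passes are usually the faster arrangement, not the slower one.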