I am trying to compare two images where one of them is rotated and shifted. I need to find the transformation from one to another so that I can resample and compare/subtract using VIPS to see the difference. Is there a way to do this?
nip2 has a couple of ways of doing this.
Load two images and click Toolkits / Image / Transform / Linear Match. A pair of tie points will appear on your two images: drag them to mark a pair of matching features on each image. The output image will be the second image resampled to match the first. There are some options to automatically improve your tie points, and to only rotate. It should be quick, even for very large images.
There's also an automatic transform finder. The auto search will only work for pairs of images which are rather similar; for example, it won't be able to match an x-ray and a visible image. To try this, load two images (they must be exactly the same size), and click Toolkits / Image / Transform / Rubber Sheet / Find. This will find a transform that matches the second image to the first. You can set how long it searches and the error threshold. It won't work for very large images (more than a few GB).
After you've found a transform, you can apply it to any other image with Toolkits / Image / Transform / Rubber Sheet / Apply. It'll take account of changes of scale, so you can find on a small image and apply on a large one.
Unfortunately, the auto transform finder was written by a friend of mine and he can't release the source. It's compiled into the Windows nip2 binary; on Linux you have to download a binary plugin and put it in the VIPS lib area.
http://www.vips.ecs.soton.ac.uk/supported/current/linux-64/
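If you would rather script the resample-and-subtract step than use the nip2 GUI, here is a minimal pyvips (Python) sketch, assuming you already know the angle and offset, for example read off a pair of tie points; the file names and numbers below are placeholders.

    import pyvips

    ref = pyvips.Image.new_from_file("reference.png")
    moving = pyvips.Image.new_from_file("rotated_shifted.png")

    # similarity() does scale + rotate + translate in one resample
    aligned = moving.similarity(scale=1.0, angle=-3.5, odx=12, ody=-7)

    # crop both to the common area and take the absolute difference
    w = min(ref.width, aligned.width)
    h = min(ref.height, aligned.height)
    diff = (ref.crop(0, 0, w, h) - aligned.crop(0, 0, w, h)).abs().cast("uchar")
    diff.write_to_file("difference.png")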
I am working on a project to detect certain objects in an aerial image, and as part of this I am trying to utilize elevation data for the image. I am working with Digital Elevation Models (DEMs), basically a matrix of elevation values. When I am trying to detect trees, for example, I want to search for tree-shaped regions that are higher than their surrounding terrain. Here is an example of a tree in a DEM heatmap:
https://i.stack.imgur.com/pIvlv.png
I want to be able to find small regions like that one that are higher than their surroundings.
I am using OpenCV and GDAL for my actual image processing. Do either of those already contain techniques for what I'm trying to accomplish? If not, can you point me in the right direction? One idea I've had is going through each pixel and calculating the rate of change relative to its surrounding pixels, which would hopefully mean that pixels with high rates of change/steep slopes mark the edge of a raised area.
Note that the elevations will change from image to image, and this needs to work with any elevation. So the ground might be around 10 meters in one image but 20 meters in another.
Supposing you can put the DEM information into a 2D Mat where each "pixel" holds an elevation value, you can find local maxima by applying dilate and then subtracting the result from the original image: the difference is zero at pixels that equal their neighbourhood maximum.
There's a related post with code examples in: http://answers.opencv.org/question/28035/find-local-maximum-in-1d-2d-mat/
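For reference, a rough Python/OpenCV sketch of that dilate-and-subtract idea; the 15x15 neighbourhood and the 2 m prominence cut-off are assumptions to be tuned for your DEM resolution.

    import cv2
    import numpy as np

    # dem: 2D float32 elevation matrix, e.g. band.ReadAsArray() from GDAL
    dem = np.load("dem.npy").astype(np.float32)      # placeholder input

    # dilation replaces each pixel with the maximum of its neighbourhood
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    local_max = cv2.dilate(dem, kernel)

    # the difference is zero exactly where a pixel equals its neighbourhood maximum
    peaks = (local_max - dem) < 1e-6

    # optionally require the peak to rise above the local minimum by, say, 2 metres
    local_min = cv2.erode(dem, kernel)
    prominent = peaks & ((dem - local_min) > 2.0)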
In Inkscape, I am trying to make an SVG of a map that I drew, and I am trying to line it up with another map that I traced off of it. I imported pictures of both maps and tried to align them; however, the pictures were taken at slightly different angles with my phone. When I stretched them out, rotated them, and aligned them, one was bigger at the top than the other. Is there a way that I could compress the top part of the image, or a way to get around this problem?
I am new to OpenCV. I would like to know if we can compare two images (one made in Photoshop, i.e. the source image, and the other one taken from the camera) and find out whether they are the same or not.
I tried to compare the images using template matching. It does not work. Can you tell me what other procedures we can use for this kind of comparison?
Comparison of images can be done in different ways depending on which purpose you have in mind:
- If you just want to check whether two images are approximately equal (with a few luminance differences), but with the same perspective and camera view, you can simply compute a pixel-to-pixel squared difference, per color band. If the sum of squares over the two images is smaller than a threshold the images match, otherwise not (a small sketch follows this list).
- If one image is a black-and-white variant of the other, conversion of the color image is needed (see e.g. http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale). Afterwards simply perform the step above.
- If one image is a subimage of the other, you need to perform registration of the two images. This means determining the scale, possible rotation and XY-translation necessary to lay the subimage on the larger image (for methods to register images, see: Pluim, J.P.W., Maintz, J.B.A., Viergever, M.A., "Mutual-information-based registration of medical images: a survey", IEEE Transactions on Medical Imaging, 2003, Volume 22, Issue 8, pp. 986-1004).
- If you have perspective differences, you need an algorithm for deskewing one image to match the other as well as possible. For ways of doing deskewing, look for example at http://javaanpr.sourceforge.net/anpr.pdf from page 15 onwards.
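As a concrete version of the first two points, here is a small Python/OpenCV sketch of the per-band sum-of-squared-differences check; the file names and the threshold value are assumptions.

    import cv2
    import numpy as np

    a = cv2.imread("source.png").astype(np.float32)
    b = cv2.imread("camera.png").astype(np.float32)
    assert a.shape == b.shape          # images must be the same size

    # sum of squared differences over all pixels and color bands,
    # normalised so the threshold does not depend on image size
    mse = np.sum((a - b) ** 2) / a.size

    THRESHOLD = 100.0                  # tune for your images
    print("match" if mse < THRESHOLD else "no match")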
Good luck!
You should try SIFT. You apply SIFT to your marker (the image saved in memory) and you get some descriptors (keypoints robust enough to be recognized again). Then you can detect keypoints in the camera frames (for example with the FAST detector) and match them against the marker's descriptors to find the corresponding points of the marker in the camera image.
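As a rough illustration, here is a minimal Python/OpenCV sketch that computes SIFT keypoints/descriptors on the marker and on one camera frame and matches them with a brute-force matcher and Lowe's ratio test; the file names and the 0.75 ratio are assumptions, and SIFT_create needs OpenCV 4.4+ (older builds keep SIFT in the xfeatures2d contrib module).

    import cv2

    marker = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(marker, None)
    kp2, des2 = sift.detectAndCompute(frame, None)

    # brute-force matching, keeping only matches that pass the ratio test
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    print(len(good), "good matches")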
You have many threads about this topic:
How to get a rectangle around the target object using the features extracted by SIFT in OpenCV
How to search the image for an object with SIFT and OpenCV?
OpenCV - Object matching using SURF descriptors and BruteForceMatcher
Good luck
I would like to take a photo of some text and make the text easier to read. The tricky part is that the initial photo may have dark regions as well as light regions, and I want the OpenGL shader to enhance the text in all of these regions.
Here is an example. On top is the original image. On the bottom are the processed images.
[edited]
I have added a better example picture of what is happening. I am able to enhance the text, but in areas where I have no text, this simple thresholding is creating speckled noise (image bottom left).
If I wind back the threshold, then I lose the text in the darker region (bottom right).
At the moment, the processed image only picks up some of the text, not all the text. The original algorithm I used was pretty simple:
- sample 8 pixels around the current pixel (samples about 4-5 pixels away seem to work best)
- figure out the lightest and darkest pixels from this sample
- if the current pixel is closer to the darkest sample, make it black, and vice versa
This seemed to work very well around text, but for non-text areas it produced a very noisy image (even when I provided an initial rejection threshold).
I modified this algorithm to assume that text was always close to black. This produced the bottom image above, but once again I am not able to pull out all the text features I want.
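For what it is worth, the sampling scheme described above can be approximated with neighbourhood min/max operations; here is a hedged Python/OpenCV sketch, where the 9x9 neighbourhood and the contrast cut-off of 20 are assumptions.

    import cv2
    import numpy as np

    gray = cv2.imread("page.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # neighbourhood max/min stand in for the "8 sampled pixels"
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    local_max = cv2.dilate(gray, kernel)
    local_min = cv2.erode(gray, kernel)

    # closer to the local minimum -> black, otherwise white
    out = np.where(gray - local_min < local_max - gray, 0, 255).astype(np.uint8)

    # rejection threshold: flat (low-contrast) areas are forced to white,
    # which is what suppresses the speckle in non-text regions
    out[(local_max - local_min) < 20] = 255
    cv2.imwrite("thresholded.png", out)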
Before implementing this as a program, you might want to take the source photo and play with it in GIMP or another editor to see what you can do.
One way to deal with shadows is to run a high-pass filter before thresholding.
This is how you do it in an image editor (manually, without a "highpass" filter plugin):
1. Convert the image to grayscale and save it as "Layer_A".
2. Make a copy of "Layer_A" into "Layer_B".
3. Invert the colors in "Layer_B".
4. Gaussian blur "Layer_B" with a radius that is larger than the largest feature you want to preserve (blur radius larger than a letter).
5. Merge "Layer_A" with "Layer_B" so that result = "Layer_A" * 0.5 + "Layer_B" * 0.5.
6. Increase the contrast in the resulting image.
7. Run the threshold.
In OpenGL it would be done in the same fashion (and without multiple layers).
It won't work well with strong/crisp shadows (obviously), but it will eliminate the large smooth shadows that occur due to the page being bent, etc.
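Here is a rough Python/OpenCV sketch of the same steps 1-7, in case it helps; the blur sigma, the contrast stretch factor and the use of Otsu for the final threshold are all assumptions to be tuned.

    import cv2
    import numpy as np

    gray = cv2.imread("page.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # steps 2-5: gray*0.5 + inverted_blur*0.5 == (gray - blur)/2 + 128,
    # i.e. a high-pass filter centred on mid-grey
    blur = cv2.GaussianBlur(gray, (0, 0), 25)     # sigma larger than a letter
    highpass = (gray - blur) * 0.5 + 128

    # step 6: increase contrast with a simple linear stretch around mid-grey
    contrast = np.clip((highpass - 128) * 4 + 128, 0, 255).astype(np.uint8)

    # step 7: threshold (Otsu picks the cut-off automatically)
    _, binary = cv2.threshold(contrast, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    cv2.imwrite("cleaned.png", binary)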
The technique (high-pass filter) is frequently used for making seamless textures, and you should be able to find several such tutorials and additional info with Google (search for "GIMP seamless texture high pass" or "GIMP high pass").
By the way, if you want to improve "readability", you might want to keep the image grayscale (while improving contrast) instead of converting it to "black and white" (1-bit color). Sharp letter edges make text harder to read.
Thanks for your help.
In the end I went for quite a basic approach.
Taking a sample of 8 nearby pixels, I determine the max and min, and the local threshold is (max - min). Then:

    // p11 is the current pixel; currentMin/currentMax come from the 8 samples
    float val = dot(vec3(1.0/3.0), smoothstep(currentMin, currentMax, p11).rgb);
    // flat areas (low local contrast) are forced to white to avoid speckle
    val = (localthreshold < threshold) ? 1.0 : val;
    return vec4(val, val, val, 1.0);
This does not show me the text nicely in both the dark and the light regions, which would be the ideal, but it nicely cleans up the text in the lighter region.
Mike
I'm working on a blob matching and tracking library in C++. Currently I'm using OpenCV to detect blobs and I try to match blobs in a new frame by checking the position, velocity and size of each blob. This works quite well and I'm getting a high blob match rate (95% or higher).
Sometimes blobs fall out of the image or new blobs appear. Now I need to give matched blobs the same ID as they had before. I'm wondering if there are typical or commonly used techniques for doing this, or even some keywords I can use to Google for.
Thanks
http://en.wikipedia.org/wiki/Blob_extraction
I assume you have your blobs in a binary image: simply floodfill each blob with a different color/ID, and give blobs that overlap between frames the same ID.
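A small sketch of that overlap idea, in Python/OpenCV for brevity even though your library is C++; the function name and the frame-to-frame bookkeeping are made up for illustration.

    import cv2
    import numpy as np

    def propagate_ids(prev_labels, prev_ids, curr_binary, next_id):
        # curr_binary: uint8 binary mask of the new frame's blobs
        # prev_labels/prev_ids: label image and {label: id} map from the last frame
        n, curr_labels = cv2.connectedComponents(curr_binary)
        curr_ids = {}
        for label in range(1, n):                  # label 0 is the background
            overlap = prev_labels[curr_labels == label]
            overlap = overlap[overlap > 0]
            if overlap.size:                       # overlaps an old blob: keep its ID
                curr_ids[label] = prev_ids[int(np.bincount(overlap).argmax())]
            else:                                  # brand-new blob: hand out a fresh ID
                curr_ids[label] = next_id
                next_id += 1
        return curr_labels, curr_ids, next_id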
CCV is used for multi-finger tracking in multi-touch environments. Check out their tracking code. It uses a function trackKnn, which uses a k-nearest-neighbour algorithm.
You can also use a Kalman filter if blobs collide with each other. Check out this SO post.
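If you go the Kalman route, a minimal constant-velocity filter per blob could look like this sketch (state = x, y, vx, vy; the noise covariances and the example centroid are assumptions).

    import cv2
    import numpy as np

    # one filter per tracked blob: state is (x, y, vx, vy), measurement is (x, y)
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

    # each frame: predict where the blob should be, then correct with the measured
    # centroid; the prediction is what you match against when blobs collide
    prediction = kf.predict()
    measurement = np.array([[123.0], [45.0]], np.float32)   # example blob centroid
    kf.correct(measurement)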