I have 3D image data obtained from a 3D OCT scan. The data can be represented as I(x,y,z), i.e. there is an intensity value at each voxel.
I am writing an algorithm in C++ that involves finding the image's gradient in the x, y, and z directions. I've already written code in C++ using OpenCV for the 2D case and want to extend it to 3D with minimal changes to my existing 2D code.
I am familiar with 2D gradients using the Sobel or Scharr operators. My search brought me to this post, whose answers recommend ITK and the Point Cloud Library. However, these libraries offer far more functionality than I need, and since I am not very experienced with C++, they would require a fair amount of reading, for which time doesn't permit. Moreover, they don't use the cv::Mat object; if I use anything other than cv::Mat, my whole code might have to be changed.
Can anyone help me with this please?
Update 1: Possible solution using kernel separability
Based on @Photon's answer, I'm updating the question.
From what @Photon says, I get an idea of how to construct a Sobel kernel in 3D. However, even if I construct a 3x3x3 cube, how do I implement it in OpenCV? The convolution operations in OpenCV, such as filter2D, are only for 2D.
One possible way: since the Sobel kernel is separable, we can break the 3D convolution into convolutions in lower dimensions. Comments 20 and 21 of this link say the same thing. So we can separate the 3D kernel, but even then filter2D cannot be used, since the image is still 3D. Is there a way to break down the image as well? There is an interesting post which hints at something like this. Any further ideas on this?
Since the Sobel operator is separable, it's easy to envision how to add a 3rd dimension.
For example, when you look at the filter definition for Gx in the link you posted, you see that it multiplies the surrounding pixels by coefficients whose sign depends on the relative X position and whose magnitude depends on the offset in Y.
When you extend to 3D, the Gx gradient is calculated the same way, but you need to work on a 3x3x3 cube: the sign of each coefficient follows the same definition, and the magnitude now depends on the offset in Y, Z, or both.
The other gradients (Gy, Gz) are defined the same way, each around its own axis.
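For illustration only (this is not from the original thread), here is one way the separable Gx pass could look on a volume stored as a stack of 2D CV_32F slices. The slice layout and the function name are assumptions; the 1D factors are the derivative [-1 0 1] along x and the smoothing [1 2 1] along y and z:

```cpp
// Sketch of a separable 3D Sobel Gx on a volume held as a std::vector of 2D slices.
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Mat> sobel3D_Gx(const std::vector<cv::Mat>& volume)
{
    // 1D factors of the 3D Sobel Gx kernel.
    cv::Mat dx = (cv::Mat_<float>(1, 3) << -1.f, 0.f, 1.f);  // derivative along x
    cv::Mat sm = (cv::Mat_<float>(1, 3) <<  1.f, 2.f, 1.f);  // smoothing along y (and z)

    // Pass 1: in-plane separable filtering of every slice.
    std::vector<cv::Mat> tmp(volume.size());
    for (size_t z = 0; z < volume.size(); ++z)
        cv::sepFilter2D(volume[z], tmp[z], CV_32F, dx, sm);

    // Pass 2: smooth across slices with [1 2 1] (borders replicated).
    std::vector<cv::Mat> out(volume.size());
    for (size_t z = 0; z < volume.size(); ++z) {
        const cv::Mat& prev = tmp[z == 0 ? z : z - 1];
        const cv::Mat& next = tmp[z + 1 == tmp.size() ? z : z + 1];
        out[z] = prev + 2.f * tmp[z] + next;
    }
    return out;
}
```

Gy and Gz follow the same pattern with the derivative factor moved to the y axis or the across-slice pass.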
I am stitching together multiple images that are arbitrary 3D views of a planar surface. I have some estimate of which images overlap and a coarse estimate of the homography between each pair of overlapping images. However, I need to refine my homographies by minimizing the global error across all images.
I have read a few different papers with various methods for doing this, and I think the best way would be to use a non-linear optimization such as Levenberg–Marquardt, ideally with a fast implementation that is sparse and/or parallel.
Ideally I would like to use an existing library such as sba or pba, but I am really confused about how to limit the calculation to estimating just the eight parameters of the homography, rather than the full 3D camera pose and object position. I also found this handy explanation by Szeliski (see section 5.1 on page 50), but again, the math is all for a rotating camera rather than a flat surface.
How do I use L-M to minimize the global error for a set of homographies? Is there a speedy way to do this with existing bundle adjustment libraries?
Note: I cannot use methods that rely on rotation-only camera motion (such as in OpenCV) because those cannot accurately estimate the camera poses, and I also cannot use full 3D reconstruction methods (such as SfM) because those have too many parameters, which results in non-planar point clouds. I definitely need something specific to a full 8-parameter homography. Camera intrinsics don't really matter, because I am already correcting for those in an earlier step.
Thanks for your help!
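One possible setup, shown purely for illustration and using Ceres Solver rather than sba/pba (which the question mentions): parameterize each homography by its eight entries, add one two-dimensional transfer-error residual per matched point pair, and let the solver run sparse Levenberg-Marquardt over all of them. The names and parameterization below are assumptions, not anything from the thread:

```cpp
// Sketch of global homography refinement with Ceres (h[8] is fixed to 1,
// so each homography has exactly eight free parameters).
#include <ceres/ceres.h>

struct TransferResidual {
  TransferResidual(double x1, double y1, double x2, double y2)
      : x1_(x1), y1_(y1), x2_(x2), y2_(y2) {}

  template <typename T>
  bool operator()(const T* const h, T* residual) const {
    T w = h[6] * T(x1_) + h[7] * T(y1_) + T(1.0);            // ninth entry fixed to 1
    residual[0] = (h[0] * T(x1_) + h[1] * T(y1_) + h[2]) / w - T(x2_);
    residual[1] = (h[3] * T(x1_) + h[4] * T(y1_) + h[5]) / w - T(y2_);
    return true;
  }
  double x1_, y1_, x2_, y2_;
};

// Call once per matched point pair; `h` is the 8-element block of the
// homography linking the two images, initialized from the coarse estimate.
void addMatch(ceres::Problem& problem, double* h,
              double x1, double y1, double x2, double y2) {
  problem.AddResidualBlock(
      new ceres::AutoDiffCostFunction<TransferResidual, 2, 8>(
          new TransferResidual(x1, y1, x2, y2)),
      new ceres::HuberLoss(1.0),   // robustness against bad matches
      h);
}

void refine(ceres::Problem& problem) {
  ceres::Solver::Options options;                    // Levenberg-Marquardt by default
  options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
}
```

Fixing the ninth entry to 1 is what keeps each parameter block at eight parameters; how the per-image homographies are chained to a common reference frame is a separate design choice.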
We have pictures taken from a plane flying over an area with 50% overlap, and we are using the OpenCV stitching algorithm to stitch them together. This works fine for our version 1. In our next iteration we want to look into a few extra things that I could use a few comments on.
Currently the stitching algorithm estimates the camera parameters. We do have the camera parameters and a lot of information available from the plane about camera angle, position (GPS), etc. Would we benefit from using this information, in contrast to just letting the algorithm estimate everything from matched feature points?
The images are taken at high resolution and the algorithm uses quite a lot of RAM at this point; not a big problem, as we can just spin up large machines in the cloud. But in our next iteration I would like to compute the homography from downsampled images and apply it to the full-size images later. This would also give us more options to manipulate and visualize other information on the original images and to go back and forth between the original and stitched images.
If, as in question 1, we are going to take apart the stitching algorithm to put in the known information, is it just a matter of using the findHomography method, or are there better alternatives for creating the homography when we actually know the plane's position and angles and the camera parameters?
I have a basic understanding of OpenCV and am fine with C++ programming, so it's not a problem to write our own customized stitcher, but the theory is a bit rusty here.
Since you are using homographies to warp your imagery, I assume you are capturing areas small enough that you don't have to worry about Earth curvature effects. Also, I assume you don't use an elevation model.
Generally speaking, you will always want to tighten your (homography) model using matched image points, since your final output is a stitched image. If you have the RAM and CPU budget, you could refine your linear model using a max likelihood estimator.
A prior motion model (e.g. from GPS + IMU) can be used to initialize the feature search and matching. With a good enough initial estimate of the apparent feature motion, you could dispense with expensive feature descriptor computation and storage, and just go with normalized cross-correlation.
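To make that concrete (this is an illustrative sketch, not from the original answer): the GPS/IMU prediction fixes a small search window in the second image, and matchTemplate with normalized cross-correlation refines the match. The window sizes and variable names are placeholders:

```cpp
// Prior-initialized NCC matching around a predicted location.
#include <opencv2/imgproc.hpp>

cv::Point2f refineByNCC(const cv::Mat& grayA, const cv::Mat& grayB,
                        cv::Point featureInA, cv::Point predictedInB,
                        int patch = 21, int search = 61)
{
    // Template around the feature in A, search window around the prediction in B.
    cv::Rect tmplRect(featureInA.x - patch / 2, featureInA.y - patch / 2, patch, patch);
    cv::Rect searchRect(predictedInB.x - search / 2, predictedInB.y - search / 2, search, search);
    // (Image-boundary checks omitted for brevity.)

    cv::Mat tmpl = grayA(tmplRect), win = grayB(searchRect), score;
    cv::matchTemplate(win, tmpl, score, cv::TM_CCORR_NORMED);

    cv::Point best;
    cv::minMaxLoc(score, nullptr, nullptr, nullptr, &best);
    return cv::Point2f(searchRect.x + best.x + patch / 2.0f,
                       searchRect.y + best.y + patch / 2.0f);
}
```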
If I understand correctly, the images are taken vertically and overlap by a known number of pixels. In that case, calculating a homography is a bit of overkill: you're just talking about a translation matrix, and using more powerful algorithms can only give you badly conditioned matrices.
In 2D, if H is a generalised homography matrix representing a perspective transformation,
    H = [ a1 a2 a3 ]
        [ a4 a5 a6 ]
        [ a7 a8 a9 ]
then, provided a9 == 1, the submatrices R and T represent rotation and translation, respectively:
    R = [ a1 a2 ]        T = [ a3 ]
        [ a4 a5 ]            [ a6 ]
while [a7 a8] holds the perspective terms that stretch each axis. (All of this is a bit approximate, since when all effects are present they influence each other.)
So, if you know the lateral displacement, you can create a 3x3 matrix with just a3, a6 and a9 = 1 set, and pass it to cv::warpPerspective or cv::warpAffine.
As a criterion of matching correctness you can, for example, calculate a normalized difference between pixels.
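A minimal sketch of that translation-only case (not part of the original answer; the names, offsets, and the L2-based "normalized diff" are assumptions):

```cpp
// Translation-only warp and a simple matching-error check.
#include <opencv2/imgproc.hpp>

cv::Mat warpByTranslation(const cv::Mat& src, double dx, double dy, cv::Size dstSize)
{
    // H = [1 0 dx; 0 1 dy; 0 0 1]  -- only a3, a6 set, a9 = 1.
    cv::Mat H = (cv::Mat_<double>(3, 3) << 1, 0, dx,
                                           0, 1, dy,
                                           0, 0, 1);
    cv::Mat dst;
    cv::warpPerspective(src, dst, H, dstSize);
    return dst;
}

// One possible normalized difference over an overlapping region of equal size.
double matchingError(const cv::Mat& a, const cv::Mat& b)
{
    return cv::norm(a, b, cv::NORM_L2) / static_cast<double>(a.total());
}
```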
Hi, I have computed the fundamental matrix from two images and found that the epipoles lie within the image. I cannot do the rectification in MATLAB if the image contains an epipole.
How can I compute the fundamental matrix so that the epipole is not in the image?
The epipolar geometry is the intrinsic projective geometry between two views. It is independent of scene structure, and only depends on the cameras' internal parameters and relative pose.
So the intrinsics/extrinsics of the cameras define the fundamental matrix that you get; i.e. you cannot compute another fundamental matrix such that the epipoles are not in the image.
What you can do is take a different pair of images (with a different camera geometry, for example); then the epipoles may fall outside the image.
The problem you're actually having is that the rectification algorithm you're using is limited and doesn't work when the epipole is inside the image. Note that there exist other algorithms that do not have this limitation. I have implemented such an algorithm in the past and may be able to find the (MATLAB) code, so please let me know if you're interested.
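As a side illustration (in OpenCV/C++ rather than MATLAB, and not part of the original answer), this is roughly how one could compute F from matched points and check whether the epipole of the first image lands inside it; the epipole is the right null vector of F (the other one is the null vector of F transposed), and the point-set names are placeholders:

```cpp
// Estimate F and test whether the first image's epipole falls inside it.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

bool epipoleInsideImage(const std::vector<cv::Point2f>& pts1,
                        const std::vector<cv::Point2f>& pts2,
                        cv::Size imageSize)
{
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC);
    if (F.empty()) return false;

    // Solve F * e = 0 for the epipole (up to scale) via SVD.
    cv::Mat e;
    cv::SVD::solveZ(F, e);
    double ex = e.at<double>(0) / e.at<double>(2);
    double ey = e.at<double>(1) / e.at<double>(2);

    return ex >= 0 && ex < imageSize.width &&
           ey >= 0 && ey < imageSize.height;
}
```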
If you're in the mood to learn more about epipolar geometry and the fundamental matrix, I recommend you take a look here:
I am attempting to convert an image in polar coordinates (axes are angle x radius) to an image in cartesian coordinates (axes are x and y).
This is simple enough in MATLAB using pcolor(), but the issue is that I must do this in a mex file (C++ interface to MATLAB). It seems easy enough, except that MATLAB only uses array containers, so I can't think of a clever or elegant way of doing this.
I do have access to the image dimensions, and I can imagine repackaging the input image array as a matrix in C++ and carrying out the conversion, but that would be messy and problematic.
Also, I need to be able to interpolate gaps between points in the xy plane.
Any ideas?
This is reasonably standard in image processing, particularly in registration. However, it takes some thought and isn't "obvious". It wasn't obvious to me the first time either.
I'm assuming you have two images, in different "domains", in your case a source image in polar coordinates and a target image in Cartesian coordinates. I'm assuming you know the region in the target image you want to populate.
The standard approach in image processing is to loop over the coordinates in the known area of the target image that you want to populate. For each of these positions (x,y), you compute the corresponding polar coordinates, probably r = sqrt(x*x+y*y) and theta = atan2(y,x) or something like that. Then you sample the source image at that polar position, with interpolation.
Among the choices of interpolation are:
Nearest neighbor - you just round to the nearest r and theta and take the value there.
Bilinear - you take a weighted average of the four surrounding samples.
Bi-cubic
...
Of course you should take care of boundary conditions and what happens if your r and theta go out of your image.
This procedure (looping over the target image, doing lookups based on the reverse transform, and sampling from the source image) is the same for all kinds of coordinate transformations. The nice thing is that you don't leave holes where your source image is relevant.
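A rough sketch of that inverse-mapping loop (not from the original answer), written against plain double* buffers so it could sit inside a mex function, where the pointers would come from mxGetPr and the arrays are column-major as in MATLAB. The axis ranges, centre, and fill value are assumptions:

```cpp
// Inverse mapping from a polar source (nR x nTheta) to a Cartesian target (nX x nY),
// with bilinear interpolation in the source.
#include <algorithm>
#include <cmath>

void polarToCartesian(const double* polar, int nR, int nTheta,
                      double* cart, int nX, int nY,
                      double cx, double cy, double rMax)
{
    const double kPi = 3.14159265358979323846;
    for (int y = 0; y < nY; ++y) {
        for (int x = 0; x < nX; ++x) {
            // Reverse transform: target (x, y) -> source (r, theta).
            double dx = x - cx, dy = y - cy;
            double r = std::sqrt(dx * dx + dy * dy) / rMax * (nR - 1);
            double t = std::atan2(dy, dx);                       // [-pi, pi]
            t = (t < 0 ? t + 2 * kPi : t) / (2 * kPi) * (nTheta - 1);

            double val = 0.0;                                    // fill value outside the disc
            if (r <= nR - 1) {
                // Bilinear interpolation between the four nearest samples
                // (angular wrap-around between last and first theta column not handled).
                int r0 = (int)r, t0 = (int)t;
                int r1 = std::min(r0 + 1, nR - 1);
                int t1 = std::min(t0 + 1, nTheta - 1);
                double fr = r - r0, ft = t - t0;
                auto at = [&](int ri, int ti) { return polar[ri + ti * nR]; };  // column-major
                val = (1 - fr) * (1 - ft) * at(r0, t0) + fr * (1 - ft) * at(r1, t0)
                    + (1 - fr) * ft * at(r0, t1) + fr * ft * at(r1, t1);
            }
            cart[y + x * nY] = val;                              // column-major target
        }
    }
}
```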
Hope this helps with the image part.
As for the mex part, here are some links:
Mex tutorial
Mex tutorial
Can you be more specific about what you need about the mex part?
How can I find the shift and rotation between two otherwise identical images, using VB.NET, C++, or C#?
The problem you state is called motion detection (or motion compensation) and is one of the most important problems in image and video processing at the moment. No easy "here are ten lines of code that will do it" solution exists except for some really trivial cases.
Even your seemingly trivial case is quite a difficult one because a rotation by an unknown angle could cause slight pixel-by-pixel changes that can't be easily detected without specifically tailored algorithms used for motion detection.
If the images are very similar such that the camera is only slightly moved and rotated then the problem could be solved without using highly complex techniques.
What I would do, in that case, is use a motion tracking algorithm to get the optical flow of the image sequence, which is a "map" approximating how each pixel has "moved" from image A to image B. OpenCV, which is indeed a very good library, has functions that do this: CalcOpticalFlowLK and CalcOpticalFlowPyrLK.
The tricky bit is going from the optical flow to the total rotation of the image. I would start by heavily low-pass filtering the optical flow to get a smoother map to work with.
Then you need some logic to test whether the image is only shifted or also rotated. If it is only shifted, the entire map should be one "color", i.e. all flow vectors point in the same direction.
If there has been a rotation, the vectors will point in different directions depending on the rotation.
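For illustration (not part of the original answer), here is a sketch of that shift-vs-rotation test using the dense Farneback flow rather than the LK functions named above, since it directly yields the per-pixel map; the blur size and threshold are arbitrary placeholders:

```cpp
// Dense flow, heavy smoothing, then check whether all vectors agree.
#include <opencv2/imgproc.hpp>
#include <opencv2/video.hpp>

bool isPureTranslation(const cv::Mat& grayA, const cv::Mat& grayB)
{
    cv::Mat flow;  // 2-channel float: per-pixel (dx, dy)
    cv::calcOpticalFlowFarneback(grayA, grayB, flow, 0.5, 3, 15, 3, 5, 1.2, 0);

    // Heavy low-pass filtering to smooth the flow map.
    cv::GaussianBlur(flow, flow, cv::Size(31, 31), 0);

    // If the flow is (almost) the same everywhere, the motion is a pure shift;
    // a large spread of the vectors indicates rotation.
    cv::Mat channels[2];
    cv::split(flow, channels);
    cv::Scalar meanX, stdX, meanY, stdY;
    cv::meanStdDev(channels[0], meanX, stdX);
    cv::meanStdDev(channels[1], meanY, stdY);
    return stdX[0] < 0.5 && stdY[0] < 0.5;   // placeholder threshold (pixels)
}
```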
If the input images are not as nice as the above method requires, then I would look into feature descriptors to find how a specific object in the first image is located within the second. This will however be much harder.
There is no short answer. You could try using the free OpenCV library to find the relationship between the two images.
The two operations, rotation and translation, can be determined in either order. It's far easier to detect the rotation first, because you can then compensate for it. Once both images are oriented the same way, the translation becomes a matter of simple correlation.
Finding the relative rotation of an image is best done by determining the local gradients. For every neighborhood (e.g. 3x3 pixels), treat the grey value as a function z(x,y), fit a plane through the 9 pixels, and determine the slope or gradient of that plane. Now average the gradient you found over the entire image, or at least the center of it. Your two images will produce different averages. Part of that is because for non-90-degree rotations the images won't overlap fully, but in general the difference in average gradients gives the rotation between the two.
Once you've rotated back one image, you can determine a correlation. This is a fairly standard operation; you're essentially determining for each possible offset how well the two images overlap. This will give you an estimate for the shift.
Once you've got both, you can refine your rotation angle estimate by rotating back the translation, shifting the second image, and determining the average gradient only over the pixels common to both images.
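To make the first stage concrete, here is a literal sketch (not from the original answer) of the gradient-averaging idea, with Sobel derivatives standing in for the per-neighborhood plane fit; the central crop is an arbitrary choice:

```cpp
// Estimate rotation as the angle between the mean gradient vectors of two images.
#include <opencv2/imgproc.hpp>
#include <cmath>

double estimateRotationDeg(const cv::Mat& grayA, const cv::Mat& grayB)
{
    auto meanGradientAngle = [](const cv::Mat& gray) {
        cv::Mat gx, gy;
        cv::Sobel(gray, gx, CV_64F, 1, 0);   // dI/dx
        cv::Sobel(gray, gy, CV_64F, 0, 1);   // dI/dy

        // Average only over the central region to reduce border effects.
        cv::Rect center(gray.cols / 4, gray.rows / 4, gray.cols / 2, gray.rows / 2);
        cv::Scalar mx = cv::mean(gx(center));
        cv::Scalar my = cv::mean(gy(center));
        return std::atan2(my[0], mx[0]);
    };

    double diff = meanGradientAngle(grayB) - meanGradientAngle(grayA);
    return diff * 180.0 / CV_PI;
}
```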
If the images are exactly the same, it should be fairly easy to extract some feature points - for example using SIFT - and match the features of both images. You can then use any two of the matching features to find the rotation and translation. The translation is just the difference between two matching feature points. Then you compensate for the translation in one image and get the rotation angle as the angle formed by the three remaining points.
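A sketch of that feature-based route is below (not part of the original answer); rather than picking two matches by hand, it uses estimateAffinePartial2D with RANSAC over all matches, and it assumes OpenCV >= 4.4 for cv::SIFT:

```cpp
// Recover rotation + translation from SIFT matches via a partial affine fit.
#include <opencv2/calib3d.hpp>
#include <opencv2/features2d.hpp>
#include <cmath>
#include <vector>

bool estimateShiftAndRotation(const cv::Mat& imgA, const cv::Mat& imgB,
                              double& angleDeg, cv::Point2d& shift)
{
    auto sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat descA, descB;
    sift->detectAndCompute(imgA, cv::noArray(), kpA, descA);
    sift->detectAndCompute(imgB, cv::noArray(), kpB, descB);

    cv::BFMatcher matcher(cv::NORM_L2, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descA, descB, matches);

    std::vector<cv::Point2f> ptsA, ptsB;
    for (const auto& m : matches) {
        ptsA.push_back(kpA[m.queryIdx].pt);
        ptsB.push_back(kpB[m.trainIdx].pt);
    }
    if (ptsA.size() < 2) return false;

    // Rotation + translation (+ uniform scale) with outlier rejection.
    cv::Mat M = cv::estimateAffinePartial2D(ptsA, ptsB, cv::noArray(), cv::RANSAC);
    if (M.empty()) return false;

    angleDeg = std::atan2(M.at<double>(1, 0), M.at<double>(0, 0)) * 180.0 / CV_PI;
    shift = cv::Point2d(M.at<double>(0, 2), M.at<double>(1, 2));
    return true;
}
```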