Get HU values along a trajectory volume - c++

So, what I am trying to do is calculate the density profile (HU) along a trajectory (represented by a target x,y,z and the tangent at it) in a CT. At the moment, I am able to get the profile along a line passing through the target and a point at a certain distance from it (the entrance). What I would like to do is get the density profile for a volume (a cylinder in this case) of width 1 mm or so.
I guess I have to do some sort of interpolation across voxels, since depending on the spacing between successive coordinates, several coordinates can map to the same voxel index. For example, this is what I am talking about.
Additionally, I would like to get the density profile for different shapes of the tip of the trajectory, for example:
My idea is that I make a 3 by 3 matrix, representing the shapes of the tip, and convolve this with the voxel values to get HU values corresponding to the tip. How can I do this using ITK/VTK?
Kindly let me know if you need some more information. (I hope the images are clear enough).

If you want to calculate the density the drill tip will encounter, it is probably easiest to create a mask of the tip's cutting surface at a higher resolution than your image. Define a transform matrix M which puts your drill into the wanted position in the CT image.
Then iterate through all the non-zero voxels in the mask, transform their indices to physical points, apply transform M to them, sample (evaluate) the value of the CT image at that point using an interpolator, multiply it by the mask's opacity (in case of a non-binary mask), and add the value to a running sum.
At the end your running sum will represent the total encountered density. This density sum will be dependent on the resolution of your mask of the tip's cutting surface. I don't know how you will relate it to some physical quantity (like resisting force in Newtons).
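A minimal sketch of that loop in ITK (assuming a short HU volume `ct`, a float mask `tipMask` of the tip's cutting surface, and an itk::AffineTransform `toCT` standing in for the matrix M; all three names are placeholders):

```cpp
#include "itkAffineTransform.h"
#include "itkImage.h"
#include "itkImageRegionConstIteratorWithIndex.h"
#include "itkLinearInterpolateImageFunction.h"

using CTImageType   = itk::Image<short, 3>;  // HU values
using MaskImageType = itk::Image<float, 3>;  // opacity mask of the cutting surface

double AccumulateDensity(CTImageType::Pointer ct,
                         MaskImageType::Pointer tipMask,
                         itk::AffineTransform<double, 3>::Pointer toCT)
{
  auto interpolator = itk::LinearInterpolateImageFunction<CTImageType, double>::New();
  interpolator->SetInputImage(ct);

  double sum = 0.0;
  itk::ImageRegionConstIteratorWithIndex<MaskImageType>
    it(tipMask, tipMask->GetLargestPossibleRegion());
  for (; !it.IsAtEnd(); ++it)
  {
    const float opacity = it.Get();
    if (opacity == 0.0f)
      continue;                                     // skip empty mask voxels

    // Mask index -> physical point -> point in CT space (the transform M).
    itk::Point<double, 3> p;
    tipMask->TransformIndexToPhysicalPoint(it.GetIndex(), p);
    const itk::Point<double, 3> q = toCT->TransformPoint(p);

    if (interpolator->IsInsideBuffer(q))            // ignore points outside the CT
      sum += opacity * interpolator->Evaluate(q);   // opacity-weighted HU value
  }
  return sum;  // scales with the mask resolution, as noted above
}
```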

To get a profile along some path, you would use a resample filter (itk::ResampleImageFilter in ITK). Set up a transform matrix which maps your starting point to 0,0,0 and your end point to x,0,0. Set the size of the target image to x,1,1 and the spacing to the same as in the source image.
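A minimal sketch of that setup with itk::ResampleImageFilter (start/end are assumed to be in physical/mm coordinates; note that the resample filter wants the transform that maps points of the *output* profile image into the *input* CT, i.e. the inverse of the mapping described above):

```cpp
#include "itkAffineTransform.h"
#include "itkImage.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkResampleImageFilter.h"
#include <cmath>

using CTImageType = itk::Image<short, 3>;

// Resamples the CT so that the output x axis runs from "start" to "end";
// the result is a (length/spacing) x 1 x 1 image holding the HU profile.
CTImageType::Pointer ExtractLineProfile(CTImageType::Pointer ct,
                                        const itk::Point<double, 3> & start,
                                        const itk::Point<double, 3> & end)
{
  itk::Vector<double, 3> dir = end - start;
  const double length = dir.GetNorm();
  dir /= length;

  // Orthonormal basis {dir, u, v}: the output x axis maps onto the trajectory.
  itk::Vector<double, 3> helper, u, v;
  helper.Fill(0.0);
  helper[(std::abs(dir[2]) < 0.9) ? 2 : 1] = 1.0;     // any axis not parallel to dir
  u[0] = helper[1] * dir[2] - helper[2] * dir[1];     // u = helper x dir
  u[1] = helper[2] * dir[0] - helper[0] * dir[2];
  u[2] = helper[0] * dir[1] - helper[1] * dir[0];
  u /= u.GetNorm();
  v[0] = dir[1] * u[2] - dir[2] * u[1];               // v = dir x u
  v[1] = dir[2] * u[0] - dir[0] * u[2];
  v[2] = dir[0] * u[1] - dir[1] * u[0];

  // Output-to-input mapping: output point (x, y, z) -> start + x*dir + y*u + z*v.
  itk::Matrix<double, 3, 3> m;
  for (unsigned int i = 0; i < 3; ++i)
  {
    m(i, 0) = dir[i];  m(i, 1) = u[i];  m(i, 2) = v[i];
  }
  auto transform = itk::AffineTransform<double, 3>::New();
  transform->SetMatrix(m);
  transform->SetOffset(start.GetVectorFromOrigin());

  const double spacing = ct->GetSpacing()[0];          // reuse the in-plane spacing
  CTImageType::SizeType size;
  size[0] = static_cast<itk::SizeValueType>(length / spacing) + 1;
  size[1] = 1;
  size[2] = 1;
  CTImageType::SpacingType outSpacing;
  outSpacing.Fill(spacing);
  CTImageType::PointType outOrigin;
  outOrigin.Fill(0.0);

  auto interpolator = itk::LinearInterpolateImageFunction<CTImageType, double>::New();
  auto resampler = itk::ResampleImageFilter<CTImageType, CTImageType>::New();
  resampler->SetInput(ct);
  resampler->SetTransform(transform);
  resampler->SetInterpolator(interpolator);
  resampler->SetSize(size);
  resampler->SetOutputSpacing(outSpacing);
  resampler->SetOutputOrigin(outOrigin);
  resampler->SetDefaultPixelValue(-1000);              // air outside the volume
  resampler->Update();

  CTImageType::Pointer profile = resampler->GetOutput();
  profile->DisconnectPipeline();
  return profile;
}
```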
I don't understand your second question. To get the HU value at the tip, you would sample that point using a high-quality interpolator (for example a linear interpolator). I don't see why the shape of the tip would matter.

Related

Finding regions of higher numbers in a matrix

I am working on a project to detect certain objects in an aerial image, and as part of this I am trying to utilize elevation data for the image. I am working with Digital Elevation Models (DEMs), basically a matrix of elevation values. When I am trying to detect trees, for example, I want to search for tree-shaped regions that are higher than their surrounding terrain. Here is an example of a tree in a DEM heatmap:
https://i.stack.imgur.com/pIvlv.png
I want to be able to find small regions like that that are higher than their surroundings.
I am using OpenCV and GDAL for my actual image processing. Does either of those already contain techniques for what I'm trying to accomplish? If not, can you point me in the right direction? One idea I've had is to go through each pixel and calculate the rate of change in relation to its surrounding pixels, the hope being that pixels with high rates of change / steep slopes would signify the edge of a raised area.
Note that the elevations will change from image to image, and this needs to work with any elevation. So the ground might be around 10 meters in one image but 20 meters in another.
Supposing you can put the DEM information into a 2D Mat where each "pixel" has the elevation value, you can find local maxima by applying a dilation and then subtracting the result from the original image.
There's a related post with code examples in: http://answers.opencv.org/question/28035/find-local-maximum-in-1d-2d-mat/
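A minimal sketch of that idea in OpenCV (the DEM is assumed to already be a single-channel floating-point cv::Mat, and the 15x15 kernel is a placeholder that should roughly match a tree's footprint in pixels):

```cpp
#include <opencv2/imgproc.hpp>

cv::Mat localMaximaMask(const cv::Mat& dem /* CV_32F, one elevation per pixel */)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(15, 15));
    cv::Mat dilated;
    cv::dilate(dem, dilated, kernel);       // each pixel becomes its neighbourhood maximum

    // dem - dilated is <= 0 everywhere and 0 exactly at local maxima, so instead
    // of subtracting you can simply compare: a pixel equal to the dilated value
    // is at least as high as everything in its neighbourhood.
    cv::Mat maxima;
    cv::compare(dem, dilated, maxima, cv::CMP_GE);
    return maxima;                          // CV_8U mask, 255 at local maxima
}
```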

Compute edge for intensity decreases only

I want to find edges in my image, specifically vertical changes in intensity which go from light to dark. Is this possible? I'm using the Canny/Sobel edge detectors in OpenCV but they're picking up edges where the intensity increases, which I don't want.
You can write a custom filter and use cvFilter2D (2D convolution).
To give a very simple example, the convolution kernel {1 0 -1;1 0 -1; 1 0 -1} is a 3x3 filter that can highlight intensity decreases going from left to right. You can threshold the result to get the edges.
You will have to select the right size of the kernel, and also the right values, to suit your images.
Here is a good link that shows how to use cvFilter2d:
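A minimal sketch with the modern C++ API (cv::filter2D is the counterpart of cvFilter2D; the grayscale input and the threshold value of 50 are assumptions to tune for your images):

```cpp
#include <opencv2/imgproc.hpp>

cv::Mat lightToDarkEdges(const cv::Mat& gray /* CV_8U, single channel */)
{
    // Responds positively where intensity drops going from left to right.
    cv::Mat kernel = (cv::Mat_<float>(3, 3) << 1, 0, -1,
                                               1, 0, -1,
                                               1, 0, -1);
    cv::Mat response;
    cv::filter2D(gray, response, CV_32F, kernel);

    // Keep only strong positive responses as edges.
    cv::Mat edges;
    cv::threshold(response, edges, 50.0, 255.0, cv::THRESH_BINARY);
    return edges;   // CV_32F, 255 where a light-to-dark edge was found
}
```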
Once you understand what these filters do mathematically, it is quite clear what you have to change, and where in the pipeline it must happen. In his answer, Totoro already pointed out that you can pass your own filters to be run.
Sobel edge detection works by first running two filters on the image. These filters give the gradient of the image in the X and Y directions. Edges and gradients are linked: a large gradient magnitude means there is a lot of change in the image, which indicates an edge!
So the next step (iirc) in the Sobel algorithm is to find the magnitude of the gradient, and then you threshold it to take only large changes in the image as edges. Finally you do some edge thinning and hysteresis thresholding along the direction of the edge, but that is not very important here.
The important step where you want to differ from the Sobel algorithm is that you care about the direction of the change. If you compute the direction of change from the X and Y gradients (using sine and cosine), then you can filter out edges that do not go in the direction you want.
If you just care about vertical changes, you can run a convolution kernel that computes the gradient along the horizontal direction and take only positive values. All positive values will indicate that there was a vertical change from light to dark. If you want, you can then do the remaining processing steps just as Sobel would.
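A minimal sketch of that last step (note the sign convention: with cv::Sobel's kernel a light-to-dark transition from left to right comes out negative, so this sketch keeps the negative part; with the {1 0 -1} kernel from the previous answer the sign is reversed):

```cpp
#include <opencv2/imgproc.hpp>

cv::Mat lightToDarkGradient(const cv::Mat& gray /* CV_8U, single channel */)
{
    cv::Mat gx;
    cv::Sobel(gray, gx, CV_32F, 1, 0);          // horizontal gradient only

    // Flip the sign so that light-to-dark transitions become positive, then
    // zero out everything else; what remains is the magnitude of the wanted edges.
    cv::Mat negGx = -gx;
    cv::Mat wanted;
    cv::threshold(negGx, wanted, 0.0, 0.0, cv::THRESH_TOZERO);
    return wanted;
}
```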

Calculating the precision of homography on 2D plane

I am trying to find a way to parametrize the precision of my homography calculation. I would like to obtain a value that describes the precision of the homography calculation for a measurement taken at a certain position.
I currently have successfully calculated the homography (with cv::findHomography) and I can use it to map a point on my camera image onto a 2D map (using cv::perspectiveTransform). Now I want to track these objects on my 2D map, and to do this I want to take into account that objects in the back of my camera image have a less precise position on my 2D map than objects that are all the way in the front.
I have looked at the following example on this website that mentions plane fitting but I don't really understand how to fill the matrices correctly using this method. The visualisation of the result does seem to fit my needs. Is there any way to do this with standard OpenCV functions?
EDIT:
Thanks Francesco for your recommendations, but I think I am looking for something different from your answer. I am not looking to test the precision of the homography itself, but the relation between the density of measurements in one real camera view and the actual size on the map I create. I want to know, when I am 1 pixel off on my detection in the camera image, how many meters this will be on my map at this point.
I can of course calculate this by taking some pixels around my measurement on my camera image and then using the homography to see how many meters on my map they represent, every time I do a homography, but I don't want to calculate this every time. What I would like is a formula that tells me the relation between pixels in my image and pixels on my map, so I can take this into account for my tracking on the map.
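A minimal sketch of that per-point relation: map the point and a one-pixel offset through the homography and measure the distance on the map (H is assumed to be the 3x3 CV_64F matrix from cv::findHomography, and the map units are assumed to be meters):

```cpp
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Local scale of the homography at image point p: map units per pixel in x and y.
cv::Point2d metersPerPixel(const cv::Mat& H, const cv::Point2f& p)
{
    std::vector<cv::Point2f> src = { p,
                                     { p.x + 1.0f, p.y },    // one pixel to the right
                                     { p.x, p.y + 1.0f } };  // one pixel down
    std::vector<cv::Point2f> dst;
    cv::perspectiveTransform(src, dst, H);

    auto dist = [](const cv::Point2f& a, const cv::Point2f& b) {
        return std::hypot(double(a.x - b.x), double(a.y - b.y));
    };
    return { dist(dst[1], dst[0]),    // meters on the map per horizontal pixel
             dist(dst[2], dst[0]) };  // meters on the map per vertical pixel
}
```

Evaluating this once on a grid over the camera image gives a lookup table of the pixel-to-meter relation, so it does not have to be recomputed for every measurement.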
What you are looking for is called "predictive error bars" or "prediction uncertainty". You should definitely consult a good introductory book on estimation theory for details (e.g. this one). But briefly, the predictive uncertainty is the probability that...
A certain pixel p in image 1 is the mapping H(p') of a pixel p' in image 2 under the homography H...
Given the uncertainty in H which is due to the errors in the matched pairs (q0, q0'), (q1, q1'), ..., that have been used to estimate H, ...
But assuming the model is correct, that is, that the true map between images 1 and 2 is, in fact, a homography (although the estimated parameters of the homography itself may be affected by errors).
In order to estimate this probability distribution you'll need a model for the errors in the measurements, and a model for how they propagate through the (homography) model.

Fit a circle or a spline into a bunch of 3D Points

I have some 3D points that roughly, but clearly, form a segment of a circle. I now have to determine the circle that best fits all the points. I think there has to be some sort of least-squares best fit, but I can't figure out how to start.
The points are sorted the way they would be situated on the circle. I also have an estimated curvature at each point.
I need the radius and the plane of the circle.
I have to work in C/C++ or use an external script.
You could use a Principal Component Analysis (PCA) to map your coordinates from three dimensions down to two dimensions.
Compute the PCA and project your data onto the first two principal components. You can then use any 2D algorithm to find the centre of the circle and its radius. Once these have been found/fitted, you can project the centre back into 3D coordinates.
Since your data is noisy, there will still be some variance in the third dimension you squeezed out, but bear in mind that the PCA chooses this dimension so as to minimize the amount of information lost, i.e. by maximizing the amount of variance that is represented in the first two components, so you should be safe.
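A minimal sketch of the PCA route with OpenCV (the points are assumed to be the rows of an Nx3 CV_64F cv::Mat; the 2D step uses a simple algebraic least-squares circle fit):

```cpp
#include <opencv2/core.hpp>
#include <cmath>

struct Circle3D { cv::Vec3d center, normal; double radius; };

Circle3D fitCirclePCA(const cv::Mat& pts /* Nx3, CV_64F */)
{
    // Project onto the first two principal components (the best-fit plane).
    cv::PCA pca(pts, cv::noArray(), cv::PCA::DATA_AS_ROW, 2);
    cv::Mat p2 = pca.project(pts);                      // Nx2

    // Algebraic circle fit: x^2 + y^2 + a*x + b*y + c = 0, solved in the LS sense.
    cv::Mat A(pts.rows, 3, CV_64F), rhs(pts.rows, 1, CV_64F);
    for (int i = 0; i < pts.rows; ++i) {
        const double x = p2.at<double>(i, 0), y = p2.at<double>(i, 1);
        A.at<double>(i, 0) = x;
        A.at<double>(i, 1) = y;
        A.at<double>(i, 2) = 1.0;
        rhs.at<double>(i, 0) = -(x * x + y * y);
    }
    cv::Mat sol;
    cv::solve(A, rhs, sol, cv::DECOMP_SVD);
    const double a = sol.at<double>(0), b = sol.at<double>(1), c = sol.at<double>(2);
    const double cx = -a / 2.0, cy = -b / 2.0;

    // Back-project the 2D centre into the original 3D coordinates; the plane
    // normal is the cross product of the two principal directions.
    cv::Mat c2 = (cv::Mat_<double>(1, 2) << cx, cy);
    cv::Mat c3 = pca.backProject(c2);                   // 1x3
    const cv::Mat& E = pca.eigenvectors;                // 2x3
    cv::Vec3d e0(E.at<double>(0, 0), E.at<double>(0, 1), E.at<double>(0, 2));
    cv::Vec3d e1(E.at<double>(1, 0), E.at<double>(1, 1), E.at<double>(1, 2));

    Circle3D out;
    out.center = cv::Vec3d(c3.at<double>(0), c3.at<double>(1), c3.at<double>(2));
    out.normal = e0.cross(e1);
    out.radius = std::sqrt(cx * cx + cy * cy - c);
    return out;
}
```

The algebraic fit is known to be biased for short, noisy arcs; if that matters, use its result as the starting point for a geometric (distance-minimizing) refinement.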
A good algorithm for such data fitting is RANSAC (random sample consensus). You can find a good description in the link, so this is just a short outline of the important parts:
In your special case the model would be the 3D circle. To build it, pick three random non-collinear points from your set, compute the hyperplane they are embedded in (cross product), project the random points onto the plane and then apply the usual 2D circle fitting. With this you get the circle center, radius and the hyperplane equation. Now it's easy to check the support given by each of the remaining points. The support may be expressed as the distance from the circle, which consists of two parts: the orthogonal distance from the plane and the distance from the circle boundary inside the plane.
Edit:
The reason I would prefer RANSAC over ordinary least squares (LS) is its superior stability in the case of heavy outliers. The following image shows an example comparison of LS vs. RANSAC: while the ideal model line is created by RANSAC, the dashed line is created by LS.
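A minimal sketch of that RANSAC loop, reusing OpenCV's small vector types (the circle through the three sampled points is computed directly, which is equivalent to projecting them onto their plane and fitting in 2D; the iteration count and inlier tolerance are placeholders):

```cpp
#include <opencv2/core.hpp>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Circle3D { cv::Vec3d center, normal; double radius; };

// Circle through three non-collinear points (circumcircle in their common plane).
static bool circleFrom3(const cv::Vec3d& p0, const cv::Vec3d& p1, const cv::Vec3d& p2,
                        Circle3D& m)
{
    const cv::Vec3d u = p1 - p0, v = p2 - p0, n = u.cross(v);
    const double nn = cv::norm(n);
    if (nn < 1e-12) return false;                   // (nearly) collinear sample
    const double uu = u.dot(u), vv = v.dot(v), uv = u.dot(v);
    const double den = 2.0 * (uu * vv - uv * uv);
    const double s = vv * (uu - uv) / den, t = uu * (vv - uv) / den;
    m.center = p0 + s * u + t * v;                  // circumcenter
    m.normal = n * (1.0 / nn);
    m.radius = cv::norm(m.center - p0);
    return true;
}

// Support term from the answer above: orthogonal distance to the plane combined
// with the in-plane distance to the circle boundary.
static double distToCircle(const cv::Vec3d& p, const Circle3D& m)
{
    const cv::Vec3d d = p - m.center;
    const double dPlane = d.dot(m.normal);
    const double dInPlane = cv::norm(d - dPlane * m.normal) - m.radius;
    return std::sqrt(dPlane * dPlane + dInPlane * dInPlane);
}

Circle3D fitCircleRansac(const std::vector<cv::Vec3d>& pts,
                         int iterations = 500, double tolerance = 0.5 /* same units as pts */)
{
    Circle3D best{};
    int bestSupport = -1;
    for (int iter = 0; iter < iterations; ++iter) {
        const size_t i = std::rand() % pts.size(), j = std::rand() % pts.size(),
                     k = std::rand() % pts.size();
        if (i == j || j == k || i == k) continue;

        Circle3D model;
        if (!circleFrom3(pts[i], pts[j], pts[k], model)) continue;

        int support = 0;                            // count inliers close to the circle
        for (const auto& p : pts)
            if (distToCircle(p, model) < tolerance) ++support;

        if (support > bestSupport) { bestSupport = support; best = model; }
    }
    return best;    // optionally re-fit in the least-squares sense on the inliers of "best"
}
```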
The arguably easiest algorithm is least-squares curve fitting.
You may want to check the math,
or look at similar questions, such as polynomial least squares for image curve fitting
However I'd rather use a library for doing it.

Illumination invariant image

I am trying to create an illumination-invariant image with OpenCV, as in this paper: http://www.cvc.uab.es/adas/publications/alvarez_2008.pdf
Does anyone have an idea how to create that image from the log-log plot image in OpenCV?
+1 for the link to an interesting paper.
I guess I would build a function to convert to log, divide the channels, rotate by theta, and project onto one axis. Then I would build a function to measure the quality of the resulting invariant image. Then I would set up a search over theta to optimize the quality. That looks like what Alvarez is doing.
But first, I would study the Luv color space; it might be the closest approximation to this scheme that is possible without the special narrowband camera. Project the uv space onto a vector at angle theta, and see what happens.
As far as I can understand the two papers, they proceed from a false premise and arrive at an interesting method for getting 1D illumination-invariant information from a 2D color space (such as uv from Luv, HS from HSV, etc.).
They say illumination invariant, but they show a method of obtaining Color Temperature invariant information from log ratio of color pairs, say {log(R/G),log(B/G)}. You can imagine the setup, with a lamp on a dimmer, and they plot the color ratios: dim the lights, yes, the illumination changes, but so does the color temperature T.
Not to mention that light is not all blackbody color temperature Lambertian. How in the world can this method work? But their results look good.
So, on to the interesting method: Maximum Entropy
As in the answer above, project the (log of) uv space onto a vector at angle theta. What should theta be? Search theta to maximize the entropy of the result, that is, to get the sharpest peaks in the 1D result. Sort of like an auto-focus.
To answer your question though, use calcHist in opencv. After computing the log, of course.
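A minimal sketch of one step of that search (OpenCV): build the {log(R/G), log(B/G)} pair, project it onto the direction at angle theta, and score that theta by the entropy of the projection's histogram via cv::calcHist. The histogram range and size are placeholders to adapt to your data; scan theta and keep the one with the best entropy score.

```cpp
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

// Entropy of the 1D "invariant" obtained by projecting the log-chromaticity
// plane onto the direction at angle theta.
double invariantEntropy(const cv::Mat& bgr8 /* CV_8UC3 */, double theta)
{
    cv::Mat bgr;
    bgr8.convertTo(bgr, CV_32F, 1.0 / 255.0, 1e-3);   // small offset avoids log(0)
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);                               // ch[0]=B, ch[1]=G, ch[2]=R

    cv::Mat rg = ch[2] / ch[1], bg = ch[0] / ch[1];
    cv::Mat logRG, logBG;
    cv::log(rg, logRG);                               // log(R/G)
    cv::log(bg, logBG);                               // log(B/G)

    // 1D candidate: projection onto the direction at angle theta.
    cv::Mat proj = logRG * std::cos(theta) + logBG * std::sin(theta);

    // Histogram of the projection (calcHist), then its Shannon entropy.
    int histSize = 64;
    int channels[] = { 0 };
    float range[] = { -3.0f, 3.0f };                  // placeholder range
    const float* ranges[] = { range };
    cv::Mat hist;
    cv::calcHist(&proj, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);
    hist /= cv::sum(hist)[0];

    double entropy = 0.0;
    for (int i = 0; i < histSize; ++i) {
        const float p = hist.at<float>(i);
        if (p > 0.0f) entropy -= p * std::log(p);
    }
    return entropy;
}
```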