ITK: x and y components of the gradient of an image - C++

I would like to calculate the x and y components of the gradient of a 2D image, as computed in MATLAB with [dT2,dT1] = gradient(T);
ReaderType::Pointer T_g = ReaderType::New(); // reader for the input image T
FilterType::Pointer gradientFilter = FilterType::New();
gradientFilter->SetInput( T_g->GetOutput());
gradientFilter->Update();
With this code I get the result, but I want to obtain the x-component and the y-component separately:
gradientFilter->GetOutput()
Is there any method to extract them? I have been looking for one, but without success.
Thanks so much
Antonio

The output of the gradientFilter will be a vector image. I assume from your description that it is a 2D image.
ImageType::IndexType index;
index[0]=xcoord;
index[1]=ycoord;
gradientFilter->GetOutput()->GetPixel(index)[0]; // returns the first (x) component of the gradient at (xcoord, ycoord)

http://www.vtk.org/Wiki/ITK/Examples
http://www.vtk.org/Wiki/ITK/Examples/ImageProcessing/NthElementImageAdaptor
template< typename TImage, typename TOutputPixelType > class itk::NthElementImageAdaptor
Presents an image as being composed of the N-th element of its pixels.
It assumes that the pixels are of container type and have in their API an operator[]( unsigned int ) defined.
Additional casting is performed according to the input and output image types following C++ default casting rules.
Wiki Examples:
Extract a component of an itkImage with pixels with multiple components
Process the nth component/element of a vector image
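Putting the answer together, here is a minimal sketch (not from the original post; the file name and typedefs are illustrative) that computes the gradient with itk::GradientImageFilter and exposes the x- and y-components as scalar images through itk::NthElementImageAdaptor:
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkGradientImageFilter.h"
#include "itkNthElementImageAdaptor.h"
typedef itk::Image< float, 2 >                                  ImageType;
typedef itk::ImageFileReader< ImageType >                       ReaderType;
typedef itk::GradientImageFilter< ImageType, float, float >     GradientFilterType;
typedef GradientFilterType::OutputImageType                     GradientImageType; // CovariantVector pixels
typedef itk::NthElementImageAdaptor< GradientImageType, float > AdaptorType;
ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName("T.png"); // hypothetical file name
GradientFilterType::Pointer gradientFilter = GradientFilterType::New();
gradientFilter->SetInput(reader->GetOutput());
gradientFilter->Update();
AdaptorType::Pointer xComponent = AdaptorType::New();
xComponent->SelectNthElement(0);   // component 0: derivative along x
xComponent->SetImage(gradientFilter->GetOutput());
AdaptorType::Pointer yComponent = AdaptorType::New();
yComponent->SelectNthElement(1);   // component 1: derivative along y
yComponent->SetImage(gradientFilter->GetOutput());
// The adaptors can now be used like ordinary scalar images, e.g. as inputs to other filters.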

How to use region-growing algorithms to define a region of interest?

I am working on DICOM images (CT scans) and would like to isolate some structures of interest in my picture, such as human organs (like the aorta, cf. the enclosed image). I am coding in C++ with the help of ITK and VTK.
Let's assume these organs have a particular brightness intensity, so I can automatically identify them using a region-growing algorithm (code below). To do so, I previously computed some threshold values based on the mean and standard deviation of the voxels belonging to the organ.
How can I keep only the aorta in my image with the help of ITK/VTK features? I guess that what I'm looking for is a filter that would do the exact opposite of the ITK mask image filter.
Please find the (pseudo) code corresponding to the organ isolation below. I apply a 5-voxel dilation to the result of the region growing to be sure to include all voxels of the organ and to have a sufficient margin around it after cropping.
typedef short InputPixelType;
typedef unsigned char OutputPixelType;
const int Dimension = 3;
typedef itk::Image< InputPixelType, Dimension > InputImageType;
typedef itk::Image< OutputPixelType, Dimension > OutputImageType;
// Region growing
typedef itk::ConnectedThresholdImageFilter< InputImageType,
OutputImageType > ConnectedFilterType;
ConnectedFilterType::Pointer connectedThreshold = ConnectedFilterType::New();
connectedThreshold->SetInput(input);
connectedThreshold->SetUpper(upperThreshold);
connectedThreshold->SetLower(lowerThreshold);
//Initializing seed
InputImageType::IndexType index;
index[0] = seed_x;
index[1] = seed_y;
index[2] = seed_z; // the image is 3D, so a third seed coordinate is needed
connectedThreshold->SetSeed(index);
// Dilate the resulting region-growing of 5 voxels for safety
typedef itk::BinaryBallStructuringElement< OutputPixelType,
Dimension > StructuringElementType;
typedef itk::BinaryDilateImageFilter< OutputImageType,
OutputImageType, StructuringElementType > DilateFilterType;
StructuringElementType structuringElement;
structuringElement.SetRadius(5);
structuringElement.CreateStructuringElement();
DilateFilterType::Pointer dilateFilter = DilateFilterType::New();
dilateFilter->SetInput(connectedThreshold->GetOutput());
dilateFilter->SetKernel(structuringElement);
// Saving the results of the RG+dilation
typedef itk::ImageFileWriter< OutputImageType > WriterType;
WriterType::Pointer writer = WriterType::New();
writer->SetInput(dilateFilter->GetOutput());
writer->SetFileName("organ-segmented-with-dilation.mhd");
try {
writer->Update();
} catch(itk::ExceptionObject& err) {
std::cerr << "Exception caught! " << err.what() << std::endl;
return EXIT_FAILURE;
}
// What to do next to crop the input image with this region-growing?
Any help or remarks are welcome.
The mask filter itself can do the opposite of what it usually does. By default, the masking value is 0, and so is the outside value. This means that the parts of the image corresponding to the non-zero part of the mask are kept, and the rest is zeroed out. If this is not what you want, you can easily invert the logic by setting different masking and outside values.
For the record, I solved my problem using the ITK mask negated filter, which, unlike the basic mask filter, directly addresses the issue.
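For completeness, a minimal sketch (an assumption, not code from the original post) of how the negated mask filter could be wired into the pipeline above, reusing its typedefs and the dilated segmentation as the mask:
#include "itkMaskNegatedImageFilter.h"
typedef itk::MaskNegatedImageFilter< InputImageType, OutputImageType, InputImageType > MaskNegatedFilterType;
MaskNegatedFilterType::Pointer maskNegated = MaskNegatedFilterType::New();
maskNegated->SetInput(input);                          // the original CT volume
maskNegated->SetMaskImage(dilateFilter->GetOutput());  // dilated region-growing result
maskNegated->SetOutsideValue(0);                       // value written to the masked-out voxels
maskNegated->Update();
// The plain itk::MaskImageFilter with different SetMaskingValue()/SetOutsideValue()
// settings, as described in the previous answer, is the alternative way to invert the behaviour.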

ITK get pixels list from itk::LabelObject

I have a problem accessing the list of pixels of an itk::LabelObject.
This LabelObject is obtained with an itk::OrientedBoundingBoxLabelObject (https://github.com/blowekamp/itkOBBLabelMap). The original 3D image is a CBCT DICOM volume, inside which I'm looking for the position and orientation of a small rectangular marker.
Here is the code that leads to the itk::LabelObject:
typedef short LabelPixelType;
// LabelObjectType (the oriented-bounding-box label object from the itkOBBLabelMap
// module) and ToLabelMapFilterType are defined earlier and omitted here.
typedef itk::LabelMap<LabelObjectType> LabelMapType;
typedef itk::OrientedBoundingBoxLabelMapFilter<LabelMapType> OBBLabelMapFilter;
typename OBBLabelMapFilter::Pointer toOBBLabelMap = OBBLabelMapFilter::New();
typename ToLabelMapFilterType::Pointer toLabelMap = ToLabelMapFilterType::New();
toOBBLabelMap->SetInput(toLabelMap->GetOutput());
toOBBLabelMap->Update();
LabelObjectType* labelObject = toOBBLabelMap->GetOutput()->GetNthLabelObject(idx);
OBBSize = labelObject->GetOrientedBoundingBoxSize();
I guess that accessing the pixel coordinates is possible, since they have to be accessed somehow to compute the bounding boxes, but I haven't managed to do it so far. I then tried to convert the itk::LabelMap (or the LabelObject directly) to a binary image, where I could get to the pixels more easily, and to convert and display this markerBinaryImage with VTK, but with no better results (I get a black image).
typedef itk::LabelMapToBinaryImageFilter<LabelMapType, ImageType> LabelMapToBinaryImageFilterType;
LabelMapToBinaryImageFilterType::Pointer labelImageConverter = LabelMapToBinaryImageFilterType::New();
labelImageConverter->SetInput(toLabelMap->GetOutput());
labelImageConverter->Update();
ImageType::Pointer markerBinaryImage = labelImageConverter->GetOutput();
Does anyone have an idea about how to get to this pixels list?
You may do it like this:
for(unsigned int i = 0; i < filter->GetOutput()->GetNumberOfLabelObjects(); ++i) {
  // Obtain the i-th label object
  FilterType::OutputImageType::LabelObjectType* labelObject =
      filter->GetOutput()->GetNthLabelObject(i);
  // Then, you may obtain the pixels of each label object like this:
  for(unsigned int pixelId = 0; pixelId < labelObject->Size(); pixelId++) {
    std::cout << labelObject->GetIndex(pixelId);
  }
}
This info was obtained from the Insight Journal article Label object representation and manipulation with ITK. There, it says that you may obtain the bounding boxes directly using the Region attribute. I did not find a way to obtain a region in itk::LabelObject itself, but its subclasses (see the inheritance diagram in the itk::LabelObject documentation) do provide it.
If your label object is of type itk::ShapeLabelObject, you can use the GetBoundingBox() method to get the bounding box. It has many other methods worth looking at.
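For example, a short sketch (illustrative names; it assumes the label map comes from itk::LabelImageToShapeLabelMapFilter rather than the OBB filter used in the question):
#include "itkLabelImageToShapeLabelMapFilter.h"
typedef itk::Image< unsigned short, 3 >                        LabelImageType;
typedef itk::LabelImageToShapeLabelMapFilter< LabelImageType > ShapeFilterType;
ShapeFilterType::Pointer shapeFilter = ShapeFilterType::New();
shapeFilter->SetInput(labelImage); // labelImage: a hypothetical labelled image
shapeFilter->Update();
typedef ShapeFilterType::OutputImageType::LabelObjectType ShapeLabelObjectType;
const ShapeLabelObjectType* obj = shapeFilter->GetOutput()->GetNthLabelObject(0);
std::cout << "Bounding box: " << obj->GetBoundingBox() << std::endl;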
I tried then to convert the itk::LabelMap (...) with no better results (I get a black image).
A piece of advice here: don't try this complicated stuff to verify other complicated stuff. You may be failing somewhere else in the chain. Instead, read the pixels as shown above and check the data. Good luck!

ITK Fast Marching output

I'm using ITK to do some preprocessing and I wanted to test something with the Fast Marching filter and the Geodesic Active Contour filter.
I'm following the algorithm described in the ITK software guide, section 9.3.3.
However, I'm not getting the expected results. I'm working with a 3D image.
Here is my code:
AnisotropicDiffusionFilter::Pointer anisotropic_filter = AnisotropicDiffusionFilter::New();
anisotropic_filter->SetInput(itk_image_in);
anisotropic_filter->SetTimeStep(0.0625);
anisotropic_filter->SetNumberOfIterations(5);
anisotropic_filter->SetConductanceParameter(3.0);
anisotropic_filter->Update();
GradientFilter::Pointer gradient_filter = GradientFilter::New();
gradient_filter->SetInput(anisotropic_filter->GetOutput());
gradient_filter->SetSigma(0.5);
gradient_filter->Update();
SigmoidFilter::Pointer sigmoid_filter = SigmoidFilter::New();
sigmoid_filter->SetInput(gradient_filter->GetOutput());
sigmoid_filter->SetOutputMinimum(0.0);
sigmoid_filter->SetOutputMaximum(1.0);
sigmoid_filter->SetAlpha(-1.5);
sigmoid_filter->SetBeta(4.0);
sigmoid_filter->Update();
FastMarchingFilter::Pointer fast_marching = FastMarchingFilter::New();
NodeContainer::Pointer seeds = NodeContainer::New();
Node node;
const double seedValue = -50.0;
node.SetValue(seedValue);
seeds->Initialize();
vector<GeoVec3s>::iterator it = m_clicks_.begin();
int i=0;
for(; it != m_clicks_.end(); it++)
{
itkIndex index;
index[0] = (*it)[0];
index[1] = (*it)[1];
index[2] = (*it)[2];
node.SetIndex(index);
seeds->InsertElement(i++, node);
}
fast_marching->SetTrialPoints(seeds);
fast_marching->SetSpeedConstant(1.0);
fast_marching->SetStoppingValue(100);
//fast_marching->SetInput(sigmoid_filter->GetOutput());
fast_marching->SetOutputSize(sigmoid_filter->GetOutput()->GetBufferedRegion().GetSize());
fast_marching->Update();
GeodesicFilter::Pointer geodesic_filter = GeodesicFilter::New();
geodesic_filter->SetInput(fast_marching->GetOutput());
geodesic_filter->SetFeatureImage(sigmoid_filter->GetOutput());
geodesic_filter->SetPropagationScaling(0.5);
geodesic_filter->SetCurvatureScaling(5.0);
geodesic_filter->SetAdvectionScaling(1.0);
geodesic_filter->SetMaximumRMSError( 0.02 );
geodesic_filter->Update();
BinaryThresholdFilter::Pointer thresholder = BinaryThresholdFilter::New();
thresholder->SetLowerThreshold(-1000);
thresholder->SetUpperThreshold(0);
thresholder->SetOutsideValue(0);
thresholder->SetInsideValue(255);
thresholder->SetInput( geodesic_filter->GetOutput() );
I'm using the metrics described in this paper, whose goal is the same as mine.
I have a few questions:
The fast marching filter should output a distance map, right? Instead, when I write my volume out as a series of PNGs (with values between 0 and 4095), I get a binary image (pixels are either 0 or 4095). I think I should get a greyscale volume indicating the time needed for each pixel to be reached from the seeds.
Following the procedure described by Suzuki, I managed to make the algorithm work more or less, but I had to change the values of the parameters of the geodesic filter. I don't remember the exact values, but they weren't close to those described in the paper. As we are working with the sigmoid image, which is normalized between 0 and 1, what is happening?
Should I rather use a constant speed function for the fast marching filter or the sigmoid image? When should either method be preferred?
I'm using a rescaler to write out my float images (the outputs of the filters). Could this be the reason for the inconsistencies I'm seeing?
Any advice on what I could be doing wrong?
Thanks.
OK, so I found my problem. The Fast Marching filter does output a time-crossing map (distance map), but since I specified a stopping value, all the pixels that weren't visited get a very high value (1.7e+38, half the maximum of the output pixel type, which was float in my case, max 3.4e+38). This squeezed the whole dynamic range of my image when I used the rescale filter, and the result was a binary image.
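A hedged sketch of one way to deal with this when writing the images out: clamp everything above the stopping value with itk::ThresholdImageFilter before rescaling, so the unvisited voxels no longer crush the dynamic range (the float/unsigned short image types are assumptions matching the description above):
#include "itkThresholdImageFilter.h"
#include "itkRescaleIntensityImageFilter.h"
typedef itk::Image< float, 3 >          FloatImageType;
typedef itk::Image< unsigned short, 3 > PngImageType;
typedef itk::ThresholdImageFilter< FloatImageType >                      ClampFilterType;
typedef itk::RescaleIntensityImageFilter< FloatImageType, PngImageType > RescaleFilterType;
ClampFilterType::Pointer clamp = ClampFilterType::New();
clamp->SetInput(fast_marching->GetOutput());
clamp->ThresholdAbove(100.0);  // the same value passed to SetStoppingValue()
clamp->SetOutsideValue(100.0); // unvisited voxels collapse to the stopping value
RescaleFilterType::Pointer rescale = RescaleFilterType::New();
rescale->SetInput(clamp->GetOutput());
rescale->SetOutputMinimum(0);
rescale->SetOutputMaximum(4095);
rescale->Update();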
I think better results are achieved with a sigmoid image as input for the fast marching filter.
Thanks to @nav for the advice.

How to correctly use cv::triangulatePoints()

I am trying to triangulate some points with OpenCV and I found this cv::triangulatePoints() function. The problem is that there is almost no documentation or examples of it.
I have some doubts about it.
What method does it use?
I've been doing some research on triangulation and there are several methods (linear, linear LS, eigen, iterative LS, iterative eigen, ...), but I can't find which one OpenCV is using.
How should I use it? It seems that as input it needs a projection matrix and 3xN homogeneous 2D points. I have them defined as std::vector<cv::Point3d> pnts, but as output it needs 4xN arrays, and obviously I can't create a std::vector<cv::Point4d> because it doesn't exist, so how should I define the output vector?
For the second question I tried cv::Mat pnts3D(4,N,CV_64F); and cv::Mat pnts3d;, but neither seems to work (an exception is thrown).
1.- The method used is least squares. There are more complex algorithms than this one, but it is the most common one, as the other methods may fail in some cases (e.g. some of them fail if the points are on a plane or at infinity).
The method can be found in Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman (p. 312).
2.-The usage:
cv::Mat pnts3D(1,N,CV_64FC4);
cv::Mat cam0pnts(1,N,CV_64FC2);
cv::Mat cam1pnts(1,N,CV_64FC2);
Fill the two-channel point matrices with the points from the images.
cam0 and cam1 are 3x4 camera matrices (intrinsic and extrinsic parameters). You can construct them by multiplying A*RT, where A is the 3x3 intrinsic parameter matrix and RT the 3x4 rotation-translation pose matrix.
cv::triangulatePoints(cam0,cam1,cam0pnts,cam1pnts,pnts3D);
NOTE: pnts3D NEEDS to be a 4-channel 1xN cv::Mat when defined; an exception is thrown otherwise. However, the result is a cv::Mat(4,N,CV_64FC1) matrix. Really confusing, but it is the only way I didn't get an exception.
UPDATE: As of version 3.0 or possibly earlier, this is no longer true, and pnts3D can also be of type Mat(4,N,CV_64FC1) or may be left completely empty (as usual, it is created inside the function).
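As a small illustration of the A*RT construction mentioned above (a sketch only; A, A0, R and t are assumed to be CV_64F matrices of sizes 3x3, 3x3, 3x3 and 3x1 obtained from calibration):
cv::Mat RT; // 3x4 pose [R | t] of the second camera
cv::hconcat(R, t, RT);
cv::Mat cam1 = A * RT; // projection matrix of the second camera
cv::Mat RT0; // the first camera is taken as the world frame: [I | 0]
cv::hconcat(cv::Mat::eye(3, 3, CV_64F), cv::Mat::zeros(3, 1, CV_64F), RT0);
cv::Mat cam0 = A0 * RT0; // A0: intrinsics of the first camera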
A small addition to Ander Biguri's answer. You should detect your image points on the original (non-undistorted) image and then invoke undistortPoints() on cam0pnts and cam1pnts, because cv::triangulatePoints expects the 2D points in normalized coordinates (independent of the camera); cam0 and cam1 should then be plain [R|t] matrices, and you do not need to multiply them by A.
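A sketch of that workflow (K0, K1, dist0, dist1, R and t are illustrative names for the calibration results, all CV_64F; the point vectors are assumed to be filled already):
std::vector<cv::Point2f> cam0pnts, cam1pnts; // raw pixel coordinates
std::vector<cv::Point2f> cam0norm, cam1norm; // normalized coordinates
cv::undistortPoints(cam0pnts, cam0norm, K0, dist0); // removes distortion and the intrinsics
cv::undistortPoints(cam1pnts, cam1norm, K1, dist1);
cv::Mat P0 = cv::Mat::eye(3, 4, CV_64F); // first camera at the origin: [I | 0]
cv::Mat P1(3, 4, CV_64F);                // second camera: [R | t]
R.copyTo(P1(cv::Rect(0, 0, 3, 3)));
t.copyTo(P1(cv::Rect(3, 0, 1, 3)));
cv::Mat pnts4D;
cv::triangulatePoints(P0, P1, cam0norm, cam1norm, pnts4D); // 4xN homogeneous output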
Thanks to Ander Biguri! His answer helped me a lot. But I always prefer the alternative with std::vector, so I edited his solution to this:
std::vector<cv::Point2d> cam0pnts;
std::vector<cv::Point2d> cam1pnts;
// You fill them, both with the same size...
// You can pick any of the following 2 (your choice)
// cv::Mat pnts3D(1,cam0pnts.size(),CV_64FC4);
cv::Mat pnts3D(4,cam0pnts.size(),CV_64F);
cv::triangulatePoints(cam0,cam1,cam0pnts,cam1pnts,pnts3D);
So you just need to do emplace_back on the points. Main advantage: you do not need to know the size N before you start filling them. Unfortunately, there is no cv::Point4f, so pnts3D must be a cv::Mat...
I tried cv::triangulatePoints, but somehow it calculates garbage. I was forced to implement a linear triangulation method manually, which returns a 4x1 matrix for the triangulated 3D point:
Mat triangulate_Linear_LS(Mat mat_P_l, Mat mat_P_r, Mat warped_back_l, Mat warped_back_r)
{
Mat A(4,3,CV_64FC1), b(4,1,CV_64FC1), X(3,1,CV_64FC1), X_homogeneous(4,1,CV_64FC1), W(1,1,CV_64FC1);
W.at<double>(0,0) = 1.0;
A.at<double>(0,0) = (warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,0) - mat_P_l.at<double>(0,0);
A.at<double>(0,1) = (warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,1) - mat_P_l.at<double>(0,1);
A.at<double>(0,2) = (warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,2) - mat_P_l.at<double>(0,2);
A.at<double>(1,0) = (warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,0) - mat_P_l.at<double>(1,0);
A.at<double>(1,1) = (warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,1) - mat_P_l.at<double>(1,1);
A.at<double>(1,2) = (warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,2) - mat_P_l.at<double>(1,2);
A.at<double>(2,0) = (warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,0) - mat_P_r.at<double>(0,0);
A.at<double>(2,1) = (warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,1) - mat_P_r.at<double>(0,1);
A.at<double>(2,2) = (warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,2) - mat_P_r.at<double>(0,2);
A.at<double>(3,0) = (warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,0) - mat_P_r.at<double>(1,0);
A.at<double>(3,1) = (warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,1) - mat_P_r.at<double>(1,1);
A.at<double>(3,2) = (warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,2) - mat_P_r.at<double>(1,2);
b.at<double>(0,0) = -((warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,3) - mat_P_l.at<double>(0,3));
b.at<double>(1,0) = -((warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,3) - mat_P_l.at<double>(1,3));
b.at<double>(2,0) = -((warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,3) - mat_P_r.at<double>(0,3));
b.at<double>(3,0) = -((warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,3) - mat_P_r.at<double>(1,3));
solve(A,b,X,DECOMP_SVD);
vconcat(X,W,X_homogeneous);
return X_homogeneous;
}
The input parameters are two 3x4 camera projection matrices and a corresponding left/right pixel pair (x,y,w).
In addition to Ginés Hidalgo's comments: if you did a stereo calibration and obtained an accurate fundamental matrix from it (for example, estimated from a checkerboard), you can use the correctMatches function to refine the detected keypoints:
std::vector<cv::Point2f> pt_set1_pt_c, pt_set2_pt_c;
cv::correctMatches(F, pt_set1_pt, pt_set2_pt, pt_set1_pt_c, pt_set2_pt_c);

OpenCV: Accessing And Taking The Square Root Of Pixels

I'm using OpenCV for object detection, and one of the operations I would like to perform is a per-pixel square root. I imagine the loop would be something like:
IplImage* img_;
...
for (int y = 0; y < img_->height; y++) {
for(int x = 0; x < img_->width; x++) {
// Take pixel square root here
}
}
My question is how can I access the pixel value at coordinates (x, y) in an IplImage object?
Assuming img_ is of type IplImage* and holds 16-bit unsigned integer data, I would say:
unsigned short pixel_value = ((unsigned short *)&(img_->imageData[img_->widthStep * y]))[x];
See also here for the IplImage definition.
An OpenCV IplImage stores its pixels in a one-dimensional array, so you must compute a single index to get at the image data. The position of your pixel depends on the color depth and the number of channels in your image.
// width step (number of bytes per image row)
int ws = img_->widthStep;
// the number of channels (colors)
int nc = img_->nChannels;
// the depth in bytes of one color component
int d = (img_->depth & 0x0000ffff) >> 3;
// assuming the depth is the size of a short
unsigned short* pixel_value = (unsigned short*)(img_->imageData + (y * ws) + (x * nc * d));
// this gives you a pointer to the first color component of the pixel;
// if you are working with grayscale, just dereference the pointer.
You can pick a channel (color) by advancing the pixel pointer with pixel_value++. I would suggest using a look-up table for the square roots of pixel values if this is going to be any sort of real-time application.
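The look-up-table idea can be sketched roughly like this, assuming an 8-bit single-channel image (for the 16-bit case from the earlier answer you would need a 65536-entry table and a manual loop, since cvLUT only accepts 8-bit sources):
uchar sqrt_table[256];
for (int v = 0; v < 256; ++v)
    sqrt_table[v] = (uchar)(sqrt((double)v) + 0.5); // precomputed, rounded square roots
CvMat lut = cvMat(1, 256, CV_8UC1, sqrt_table);
cvLUT(img_, img_, &lut); // replaces every pixel with its square root, in place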
Please use the CV_IMAGE_ELEM macro.
Also, consider using cvPow with power=0.5 instead of working on the pixels yourself, which should be avoided anyway.
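A rough sketch of both suggestions, assuming img_ is the single-channel 16-bit image discussed above (illustrative code, not from the original answers). CV_IMAGE_ELEM takes care of the row/column indexing, and cvPow avoids the explicit loop entirely; converting to 32-bit float first keeps the fractional results:
// Per-pixel access with CV_IMAGE_ELEM, inside the x/y loops from the question:
CV_IMAGE_ELEM(img_, unsigned short, y, x) =
    (unsigned short)(sqrt((double)CV_IMAGE_ELEM(img_, unsigned short, y, x)) + 0.5);
// Or, without any explicit loop, using cvPow on a floating-point copy:
IplImage* img_f  = cvCreateImage(cvGetSize(img_), IPL_DEPTH_32F, img_->nChannels);
IplImage* sqrt_f = cvCreateImage(cvGetSize(img_), IPL_DEPTH_32F, img_->nChannels);
cvConvertScale(img_, img_f); // integer -> float
cvPow(img_f, sqrt_f, 0.5);   // per-pixel square root
// Convert back with cvConvertScale if an integer result is needed, then release the
// temporaries with cvReleaseImage(&img_f); cvReleaseImage(&sqrt_f);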
You may find several ways of reaching image elements in Gady Agam's nice OpenCV tutorial here.