Accessing pixel values of a TIFF image using ITK - C++

I am new to ITK and I am trying to read a TIFF file using ITK and access its pixels. Here is the relevant piece of my code:
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkTIFFImageIO.h"

typedef float PixelType; // Pixel type
const unsigned int Dimension = 2;
typedef itk::Image< PixelType, Dimension > ImageType;
ImageType::Pointer image;
typedef itk::ImageFileReader< ImageType > ReaderType;
ReaderType::Pointer reader;
reader = ReaderType::New();
const char * filename = "test.tif";
reader->SetImageIO(itk::TIFFImageIO::New());
reader->SetFileName(filename);
try
{
reader->Update();
}
catch (itk::ExceptionObject & excp)
{
std::cerr << excp << std::endl;
}
image = reader->GetOutput();
// Access a pixel
ImageType::IndexType pixelIndex;
pixelIndex[0] = 103;
pixelIndex[1] = 178;
ImageType::PixelType pixelValue = image->GetPixel(pixelIndex);
std::cout << "pixel : " << pixelValue << std::endl;
The pixel value I am getting does not match the correct value! I have used MATLAB to check my test image; its dimensions are <298x237x4 uint8>.
I tried storing image(:,:,1) (i.e. <298x237 uint8>) as a new TIFF image using MATLAB and passing that image as the input to the above program. That works, and I get the pixel value I expected!
I roughly know what the problem is, but I don't know how to solve it: I don't know how to modify my program to extract the <298x237 uint8> image from the <298x237x4 uint8> input image.
Update:
Here is the information I am getting from tiffinfo:

Your file contains RGBA pixels, but you are reading them into an image of plain floats.
The quick solution is to just change this typedef:
typedef float PixelType; // Pixel type
to:
typedef itk::RGBAPixel<float> PixelType; // Pixel type
With ITK, when you specify an ImageFileReader you are specifying the type you want as output. ITK will implicitly convert to the type you are requesting; in your case, from RGBA to float.
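A minimal sketch of the corrected read, reusing the file name and pixel index from your question (error handling omitted for brevity), might look like this:
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkRGBAPixel.h"
#include <iostream>

int main()
{
  typedef itk::RGBAPixel< float >            PixelType;
  typedef itk::Image< PixelType, 2 >         ImageType;
  typedef itk::ImageFileReader< ImageType >  ReaderType;

  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName( "test.tif" );
  reader->Update();

  ImageType::Pointer image = reader->GetOutput();

  ImageType::IndexType pixelIndex;
  pixelIndex[0] = 103;
  pixelIndex[1] = 178;

  // Each pixel now carries four components; GetRed() is the first channel,
  // i.e. the <298x237> slice you compared against in MATLAB.
  PixelType pixel = image->GetPixel( pixelIndex );
  std::cout << "R = " << pixel.GetRed()
            << " G = " << pixel.GetGreen()
            << " B = " << pixel.GetBlue()
            << " A = " << pixel.GetAlpha() << std::endl;

  return 0;
}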
By directly constructing an ImageIO you can obtain the pixel type information from the file. Here is an example from the Wiki Examples:
std::string inputFilename = argv[1];
typedef itk::ImageIOBase::IOComponentType ScalarPixelType;
itk::ImageIOBase::Pointer imageIO =
itk::ImageIOFactory::CreateImageIO(
inputFilename.c_str(), itk::ImageIOFactory::ReadMode);
imageIO->SetFileName(inputFilename);
imageIO->ReadImageInformation();
const ScalarPixelType pixelType = imageIO->GetComponentType();
std::cout << "Pixel Type is " << imageIO->GetComponentTypeAsString(pixelType) // 'double'
<< std::endl;
const size_t numDimensions = imageIO->GetNumberOfDimensions();
std::cout << "numDimensions: " << numDimensions << std::endl; // '2'
std::cout << "component size: " << imageIO->GetComponentSize() << std::endl; // '8'
std::cout << "pixel type (string): " << imageIO->GetPixelTypeAsString(imageIO->GetPixelType()) << std::endl; // 'vector'
std::cout << "pixel type: " << imageIO->GetPixelType() << std::endl; // '5'
/*
switch (pixelType)
{
case itk::ImageIOBase::COVARIANTVECTOR:
{
typedef itk::Image<unsigned char, 2> ImageType;
ImageType::Pointer image = ImageType::New();
ReadFile<ImageType>(inputFilename, image);
break;
}
// ... further cases for the other pixel types ...
default:
std::cerr << "Pixel Type ("
<< imageIO->GetComponentTypeAsString(pixelType)
<< ") not supported. Exiting." << std::endl;
return -1;
}
*/
I also work on the SimpleITK project, which provides a simplified layer on top of ITK designed for rapid prototyping and interactive computing, such as from scripting languages. It adds a layer which automatically loads the image into the correct type. If you are coming from MATLAB, it may be an easier transition.
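For example, a short sketch along these lines (assuming SimpleITK is installed; test.tif is the file from your question) reports the pixel type the file actually contains:
#include <SimpleITK.h>
#include <iostream>

namespace sitk = itk::simple;

int main()
{
  // ReadImage inspects the file and allocates the matching pixel type,
  // so an RGBA TIFF comes back as a 4-component vector image.
  sitk::Image image = sitk::ReadImage( "test.tif" );

  std::cout << "pixel type : " << image.GetPixelIDTypeAsString() << std::endl;
  std::cout << "components : " << image.GetNumberOfComponentsPerPixel() << std::endl;
  std::cout << "size       : " << image.GetSize()[0] << " x "
            << image.GetSize()[1] << std::endl;

  return 0;
}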

Related

Trouble when using Efficient_Ransac in CGAL

I want to use the Efficient RANSAC implementation of CGAL, but whenever I try to set my own parameters, the algorithm no longer detects any shapes.
This work is related to the Polyfit implementation in CGAL. I want to fine-tune the plane detection to see the influence it has on the algorithm. When I use the standard call to ransac.detect(), it works perfectly. However, when I want to set my own parameters it just doesn't find any plane, even if I set them manually to the default values.
Here is my code, closely based on this example:
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/IO/read_xyz_points.h>
#include <CGAL/IO/Writer_OFF.h>
#include <CGAL/property_map.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Shape_detection/Efficient_RANSAC.h>
#include <CGAL/Polygonal_surface_reconstruction.h>
#ifdef CGAL_USE_SCIP
#include <CGAL/SCIP_mixed_integer_program_traits.h>
typedef CGAL::SCIP_mixed_integer_program_traits<double> MIP_Solver;
#elif defined(CGAL_USE_GLPK)
#include <CGAL/GLPK_mixed_integer_program_traits.h>
typedef CGAL::GLPK_mixed_integer_program_traits<double> MIP_Solver;
#endif
#if defined(CGAL_USE_GLPK) || defined(CGAL_USE_SCIP)
#include <CGAL/Timer.h>
#include <fstream>
typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef Kernel::Point_3 Point;
typedef Kernel::Vector_3 Vector;
// Point with normal, and plane index
typedef boost::tuple<Point, Vector, int> PNI;
typedef std::vector<PNI> Point_vector;
typedef CGAL::Nth_of_tuple_property_map<0, PNI> Point_map;
typedef CGAL::Nth_of_tuple_property_map<1, PNI> Normal_map;
typedef CGAL::Nth_of_tuple_property_map<2, PNI> Plane_index_map;
typedef CGAL::Shape_detection::Efficient_RANSAC_traits<Kernel, Point_vector, Point_map, Normal_map> Traits;
typedef CGAL::Shape_detection::Efficient_RANSAC<Traits> Efficient_ransac;
typedef CGAL::Shape_detection::Plane<Traits> Plane;
typedef CGAL::Shape_detection::Point_to_shape_index_map<Traits> Point_to_shape_index_map;
typedef CGAL::Polygonal_surface_reconstruction<Kernel> Polygonal_surface_reconstruction;
typedef CGAL::Surface_mesh<Point> Surface_mesh;
int main(int argc, char ** argv)
{
Point_vector points;
// Loads point set from a file.
const std::string &input_file = argv[1];
//const std::string input_file(input);
std::ifstream input_stream(input_file.c_str());
if (input_stream.fail()) {
std::cerr << "failed open file \'" <<input_file << "\'" << std::endl;
return EXIT_FAILURE;
}
std::cout << "Loading point cloud: " << input_file << "...";
CGAL::Timer t;
t.start();
if (!input_stream ||
!CGAL::read_xyz_points(input_stream,
std::back_inserter(points),
CGAL::parameters::point_map(Point_map()).normal_map(Normal_map())))
{
std::cerr << "Error: cannot read file " << input_file << std::endl;
return EXIT_FAILURE;
}
else
std::cout << " Done. " << points.size() << " points. Time: " << t.time() << " sec." << std::endl;
// Shape detection
Efficient_ransac ransac;
ransac.set_input(points);
ransac.add_shape_factory<Plane>();
std::cout << "Extracting planes...";
t.reset();
// Set parameters for shape detection.
Efficient_ransac::Parameters parameters;
// Set probability to miss the largest primitive at each iteration.
parameters.probability = 0.05;
// Detect shapes with at least 100 points.
parameters.min_points = 100;
// Set maximum Euclidean distance between a point and a shape.
parameters.epsilon = 0.01;
// Set maximum Euclidean distance between points to be clustered.
parameters.cluster_epsilon = 0.01;
// Set maximum normal deviation.
// 0.9 < dot(surface_normal, point_normal);
parameters.normal_threshold = 0.9;
// Detect shapes.
ransac.detect(parameters);
//ransac.detect();
Efficient_ransac::Plane_range planes = ransac.planes();
std::size_t num_planes = planes.size();
std::cout << " Done. " << num_planes << " planes extracted. Time: " << t.time() << " sec." << std::endl;
// Stores the plane index of each point as the third element of the tuple.
Point_to_shape_index_map shape_index_map(points, planes);
for (std::size_t i = 0; i < points.size(); ++i) {
// Uses the get function from the property map that accesses the 3rd element of the tuple.
int plane_index = get(shape_index_map, i);
points[i].get<2>() = plane_index;
}
//////////////////////////////////////////////////////////////////////////
std::cout << "Generating candidate faces...";
t.reset();
Polygonal_surface_reconstruction algo(
points,
Point_map(),
Normal_map(),
Plane_index_map()
);
std::cout << " Done. Time: " << t.time() << " sec." << std::endl;
//////////////////////////////////////////////////////////////////////////
Surface_mesh model;
std::cout << "Reconstructing...";
t.reset();
if (!algo.reconstruct<MIP_Solver>(model)) {
std::cerr << " Failed: " << algo.error_message() << std::endl;
return EXIT_FAILURE;
}
const std::string& output_file(input_file+"_result.off");
std::ofstream output_stream(output_file.c_str());
if (output_stream && CGAL::write_off(output_stream, model))
std::cout << " Done. Saved to " << output_file << ". Time: " << t.time() << " sec." << std::endl;
else {
std::cerr << " Failed saving file." << std::endl;
return EXIT_FAILURE;
}
//////////////////////////////////////////////////////////////////////////
// Also stores the candidate faces as a surface mesh to a file
Surface_mesh candidate_faces;
algo.output_candidate_faces(candidate_faces);
const std::string& candidate_faces_file(input_file+"_candidate_faces.off");
std::ofstream candidate_stream(candidate_faces_file.c_str());
if (candidate_stream && CGAL::write_off(candidate_stream, candidate_faces))
std::cout << "Candidate faces saved to " << candidate_faces_file << "." << std::endl;
return EXIT_SUCCESS;
}
#else
int main(int, char**)
{
std::cerr << "This test requires either GLPK or SCIP.\n";
return EXIT_SUCCESS;
}
#endif // defined(CGAL_USE_GLPK) || defined(CGAL_USE_SCIP)
When launched, I get the following output:
Loading point cloud: Scene1/test.xyz... Done. 169064 points. Time: 0.428 sec.
Extracting planes... Done. 0 planes extracted. Time: 8.328 sec.
Generating candidate faces... Done. Time: 0.028 sec.
Reconstructing... Failed: at least 4 planes required to reconstruct a closed surface mesh (only 1 provided)
While I get this when calling the RANSAC detection function without parameters:
Loading point cloud: Scene1/test.xyz... Done. 169064 points. Time: 0.448 sec.
Extracting planes... Done. 18 planes extracted. Time: 3.088 sec.
Generating candidate faces... Done. Time: 94.536 sec.
Reconstructing... Done. Saved to Scene1/test.xyz_result.off. Time: 30.28 sec.
Can someone help me set my own parameters for the RANSAC shape detection?
However, when I want to set my own parameters it just doesn't find any
plane, even if I set them manually to the default values.
Just to be sure: "setting them manually to the default values" is not what you are doing in the code you shared.
Default values are documented as:
1% of the total number of points for min_points, which would be around 1700 points in your case, not 100;
1% of the bounding box diagonal for epsilon and cluster_epsilon. I obviously can't tell whether that is what you used (0.01), since I don't have access to your point set, but if you want to reproduce the default values you should compute them from a CGAL::Bbox_3 object at some point (see the sketch below).
If you use these values, there is no reason why it should behave differently than with no parameters given (if it does not, please let me know, because there may be a bug).
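As a rough sketch of what that could look like, reusing the PNI / Point_vector typedefs and the points vector from your code (the 1% figures are the documented rules of thumb, not values tuned for your data):
// Accumulate the bounding box of the input points (p.get<0>() is the Point_3).
// std::sqrt needs <cmath>.
CGAL::Bbox_3 bbox;
for (const PNI& p : points)
  bbox += p.get<0>().bbox();

const double dx = bbox.xmax() - bbox.xmin();
const double dy = bbox.ymax() - bbox.ymin();
const double dz = bbox.zmax() - bbox.zmin();
const double diagonal = std::sqrt(dx * dx + dy * dy + dz * dz);

Efficient_ransac::Parameters parameters;
parameters.probability      = 0.05;                  // default probability to miss the largest primitive
parameters.min_points       = points.size() / 100;   // 1% of the total number of points (~1700 here)
parameters.epsilon          = 0.01 * diagonal;       // 1% of the bounding box diagonal
parameters.cluster_epsilon  = 0.01 * diagonal;       // 1% of the bounding box diagonal
parameters.normal_threshold = 0.9;                   // default maximum normal deviation

ransac.detect(parameters);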

How to read and write an image using C++ in Visual Studio with ITK configured

I am a beginner to ITK and C++. I have the following code, where I can get the height and width of an image. Instead of passing the input image on the command line, I want to specify it in the code itself. How do I give the input image directly to this code?
#include "itkImage.h"
#include "itkImageFileReader.h"
int main()
{
mat m("filename");
imshow("windowname", m);
}
// verify command line arguments
if( argc < 2 )
{
std::cout << "Usage: " << std::endl;
std::cerr << argv[0] << " inputImageFile" << std::endl;
return EXIT_FAILURE;
}
typedef itk::Image<float, 2> ImageType;
typedef itk::ImageFileReader<ImageType> ReaderType;
ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName( argv[1] );
reader->Update();
std::cout << reader->GetOutput()->GetLargestPossibleRegion().GetSize()[0] << " "
<< reader->GetOutput()->GetLargestPossibleRegion().GetSize()[1] << std::endl;
// An example image had W = 200 and H = 100 (it is wider than it is tall). The above outputs
// 200 100
// so W = GetSize()[0]
// and H = GetSize()[1]
// A pixel inside the region
itk::Index<2> indexInside;
indexInside[0] = 150;
indexInside[1] = 50;
std::cout << reader->GetOutput()->GetLargestPossibleRegion().IsInside(indexInside) << std::endl;
// A pixel outside the region
itk::Index<2> indexOutside;
indexOutside[0] = 50;
indexOutside[1] = 150;
std::cout << reader->GetOutput()->GetLargestPossibleRegion().IsInside(indexOutside) << std::endl;
// This means that the [0] component of the index references the left-to-right (column) index
// and the [1] component references the top-to-bottom (row) index
return EXIT_SUCCESS;
}
Change the line reader->SetFileName( argv[1] ); to reader->SetFileName( "C:/path/to/file.png" );
I assume that
mat m("filename");
imshow("windowname", m);
sneaked in from some unrelated code? Otherwise the example would not compile.
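Put together, a minimal self-contained version with the file name hardcoded might look like this (the path is just a placeholder):
#include "itkImage.h"
#include "itkImageFileReader.h"
#include <iostream>

int main()
{
  typedef itk::Image< float, 2 >             ImageType;
  typedef itk::ImageFileReader< ImageType >  ReaderType;

  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName( "C:/path/to/file.png" );  // hardcoded input image (placeholder path)
  reader->Update();

  ImageType::SizeType size =
    reader->GetOutput()->GetLargestPossibleRegion().GetSize();
  std::cout << "width = " << size[0] << ", height = " << size[1] << std::endl;

  return 0;
}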

TIFF files garbled by ArrayFire (C++)

I notice that this simple ArrayFire program is causing loaded TIFF images to be heavily distorted:
#include <iostream>
#include <arrayfire.h>
int main( int argc, char** argv ) {
af::array img = af::loadImage( argv[1] );
double mn, mx;
unsigned idxn, idxx;
af::min( &mn, &idxn, img );
af::max( &mx, &idxx, img );
std::cout << "Image size = " << img.dims()[0] << ", " << img.dims()[1] << '\n';
std::cout << "Data type = " << img.type() << '\n';
std::cout << "Min = " << mn << " (at " << idxn << ")\n";
std::cout << "Max = " << mx << " (at " << idxx << ")\n";
af::saveImage( argv[2], img );
return 0;
}
I then compile and run on a simple (monochrome) image:
./a.out orig.tif out.tif
with the following output:
Image size = 256, 256
Data type = 0
Min = 0 (at 65535)
Max = 81.5025 (at 31356)
When I visualize these images I get the following result:
which is of course not what I expect ArrayFire to do; I would expect it to write out exactly the same image, since I didn't make any changes to it. Unfortunately I don't know enough about the TIFF format or ArrayFire's graphics backend to understand what is going on. Am I doing something wrong while loading the image? (I followed the ArrayFire documentation for loadImage and saveImage.)
I also tried using loadImageNative and saveImageNative instead, but the latter returns a 4-layer TIFF while the original image is only a 1-layer TIFF.
Any help at all from ArrayFire experts would be great.
Thanks!

I'm trying to convert pixel data to an OpenCV Mat object

I have raw pixel data that I want to display via the OpenCV cvShowImage() function.
I have the following code:
#include <opencv2/highgui/highgui.hpp>
// pdata is the raw pixel data as 3 uchars per pixel
static char bitmap[640*480*3];
memcpy(bitmap,pdata,640*480*3);
cv::Mat mat(480,640,CV_8UC3,bitmap);
std::cout << mat.flags << ", "
<< mat.dims << ", "
<< mat.rows << ", "
<< mat.cols << std::endl;
cvShowImage("result",&mat);
Which outputs:
1124024336, 2, 480, 640
to the console, but fails to display the image with cvShowImage(), instead throwing an exception with the message:
OpenCV Error: Bad flag (parameter or structure field) (Unrecognized or unsupported array type) in cvGetMat
I suspect the problem is in the way I create the mat object, but I am having a very hard time finding any more specific information on how I am supposed to do that.
I don't think CV_8UC3 is enough of a description for it to render the array of data. Doesn't it have to know whether the data is RGB or YUY2, etc.? How do I set that?
Try cv::imshow("result", mat) instead of mixing the old C and new C++ APIs. I expect casting a Mat to a CvArr* is the source of the problem.
So, something like this:
#include <opencv2/highgui/highgui.hpp>
// pdata is the raw pixel data as 3 uchars per pixel
static char bitmap[640*480*3];
memcpy(bitmap,pdata,640*480*3);
cv::Mat mat(480,640,CV_8UC3,bitmap);
std::cout << mat.flags << ", "
<< mat.dims << ", "
<< mat.rows << ", "
<< mat.cols << std::endl;
cv::imshow("result", mat);

ITK - segmentation of 3D images

Inside the InsightToolkit directory there is the Examples/Segmentation/ConnectedThresholdImageFilter.cxx file.
Now, I want to make it operate on a three-dimensional image. In this case, will the changes I have to make apply to these lines of code (lines 102-110)?
int main( int argc, char *argv[])
{
if( argc < 7 )
{
std::cerr << "Missing Parameters " << std::endl;
std::cerr << "Usage: " << argv[0];
std::cerr << " inputImage outputImage seedX seedY lowerThreshold upperThreshold" << std::endl;
return 1;
}
}
And, in order to do that, should I add a seedZ argument to:
std::cerr << " inputImage outputImage seedX seedY lowerThreshold upperThreshold" << std::endl;
And what changes should I make to the arguments in this case?
You need to add a z parameter, as you mentioned in your post.
Then, in the example, you need to make sure that the input image and the output image are set to be 3D. I don't have the code of the example at hand, but it would be something along the lines of:
typedef itk::Image< PixelType, 3 > InputImageType;
Hope this helps
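Something like the following sketch shows the affected part, adapted to 3D; the names follow the original ConnectedThresholdImageFilter example, and the reader, smoothing filter and writer are wired up exactly as they are there:
#include "itkImage.h"
#include "itkConnectedThresholdImageFilter.h"
#include <cstdlib>
#include <iostream>

int main( int argc, char *argv[] )
{
  // One extra seed coordinate, so one extra command-line argument.
  if( argc < 8 )
    {
    std::cerr << "Missing Parameters " << std::endl;
    std::cerr << "Usage: " << argv[0];
    std::cerr << " inputImage outputImage seedX seedY seedZ"
              << " lowerThreshold upperThreshold" << std::endl;
    return EXIT_FAILURE;
    }

  typedef float PixelType;
  typedef itk::Image< PixelType, 3 > InternalImageType;  // 3 instead of 2

  typedef itk::ConnectedThresholdImageFilter< InternalImageType,
                                              InternalImageType > FilterType;
  FilterType::Pointer connectedThreshold = FilterType::New();

  InternalImageType::IndexType index;
  index[0] = std::atoi( argv[3] );  // seedX
  index[1] = std::atoi( argv[4] );  // seedY
  index[2] = std::atoi( argv[5] );  // seedZ (the new argument)
  connectedThreshold->SetSeed( index );

  connectedThreshold->SetLower( std::atof( argv[6] ) );
  connectedThreshold->SetUpper( std::atof( argv[7] ) );

  // ... reader, smoothing filter and writer set up as in the original example ...
  return EXIT_SUCCESS;
}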