I'm trying to convert pixel data to an OpenCV Mat object - C++

I have raw pixel data that I want to output via the OpenCV cvShowImage() function.
I have the following code:
#include <opencv2/highgui/highgui.hpp>
// pdata is the raw pixel data as 3 uchars per pixel
static char bitmap[640*480*3];
memcpy(bitmap,pdata,640*480*3);
cv::Mat mat(480,640,CV_8UC3,bitmap);
std::cout << mat.flags << ", "
<< mat.dims << ", "
<< mat.rows << ", "
<< mat.cols << std::endl;
cvShowImage("result",&mat);
Which outputs:
1124024336, 2, 480, 640
to the console, but fails to output the image with cvShowImage(), instead throwing an exception with the message:
OpenCV Error: Bad flag (parameter or structure field) (Unrecognized or unsupported array type) in cvGetMat
I suspect the problem is in the way I create the mat object, but I am having a very hard time finding any more specific information on how I am supposed to do that.
I don't think CV_8UC3 is enough of a description for it to render the array of data. Doesn't it have to know whether the data is RGB or YUY2, etc.? How do I set that?

Try cv::imshow("result", mat) instead of mixing the old C and new C++ APIs. I expect passing the address of a cv::Mat where a CvArr* (an IplImage* or CvMat*) is expected is the source of the problem.
So, something like this:
#include <opencv2/highgui/highgui.hpp>
// pdata is the raw pixel data as 3 uchars per pixel
static char bitmap[640*480*3];
memcpy(bitmap,pdata,640*480*3);
cv::Mat mat(480,640,CV_8UC3,bitmap);
std::cout << mat.flags << ", "
<< mat.dims << ", "
<< mat.rows << ", "
<< mat.cols << std::endl;
cv::imshow("result", mat);
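As for the colour-format part of the question: a Mat carries no colour-space tag - CV_8UC3 only says "three 8-bit channels per pixel" - and imshow() assumes the channels are in BGR order. If pdata is actually RGB you can reorder the channels first. A minimal sketch, assuming packed 24-bit RGB input and an OpenCV 2.4+ build (on older 2.x use the CV_RGB2BGR constant instead; YUV-packed formats have their own COLOR_YUV2BGR_* codes):
#include <opencv2/imgproc/imgproc.hpp>
// imshow() interprets a CV_8UC3 Mat as BGR, so swap channels if the source is RGB
cv::Mat bgr;
cv::cvtColor(mat, bgr, cv::COLOR_RGB2BGR);
cv::imshow("result", bgr);
cv::waitKey(0); // give HighGUI a chance to draw the window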

Related

How to use the writeCloud() OpenCV function to construct a point cloud given 3D point coordinates?

I'm a beginner in OpenCV and currently I'm using Visual Studio 2013 (64-bit) and OpenCV 3.2 (C++) to construct a two-view geometry and try to display those matched 3D points in MeshLab. I use triangulatePoints() to get Points4D, which is a 4*N matrix that contains the coordinates of the matched points from the two images. This is the documentation of writeCloud().
triangulatePoints(CameraMatrix_1, CameraMatrix_2, matchpoints_1, matchpoints_2, Points4D);
writeCloud("twoview.ply", cloud, noArray(), noArray(), false);
My question is, what should be the cloud input of writeCloud() so that I could save those 3D points into a .ply file and display them? Assume that I do not assign color to the point cloud first.
Also, I have tried to use MATLAB to generate a pointcloud.ply file and analyse it with readCloud(), and I found that the following code successfully reads a point cloud and saves it into another one. But strangely, the cv::Mat twoviewcloud here is a 1*N matrix; how could you construct a point cloud from a one-dimensional array? I am totally confused.
Mat twoviewcloud = readCloud("pointcloud.ply");
writeCloud("trial.ply", twoviewcloud, noArray(), noArray(), false);
I would sincerely thank you if someone could give me some hint!
OK, so I am still confused about how to use the original OpenCV writeCloud() function; however, I could just implement my own function to write the .ply file. Here is the code. It is quite simple, actually, and you can read the wiki page for the detailed .ply format.
#include <fstream>
#include <vector>
#include <opencv2/core.hpp>
using namespace cv;
using namespace std;

// One 3D point plus an (optional) RGB colour
struct dataType { Point3d point; int red; int green; int blue; };
typedef dataType SpacePoint;
vector<SpacePoint> pointCloud;

// Write an ASCII PLY header followed by one "x y z" line per point
ofstream outfile("pointcloud.ply");
outfile << "ply\n" << "format ascii 1.0\n" << "comment VTK generated PLY File\n";
outfile << "obj_info vtkPolyData points and polygons : vtk4.0\n" << "element vertex " << pointCloud.size() << "\n";
outfile << "property float x\n" << "property float y\n" << "property float z\n" << "element face 0\n";
outfile << "property list uchar int vertex_indices\n" << "end_header\n";
for (size_t i = 0; i < pointCloud.size(); i++)
{
    Point3d point = pointCloud.at(i).point;
    outfile << point.x << " ";
    outfile << point.y << " ";
    outfile << point.z << " ";
    outfile << "\n";
}
outfile.close();
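For completeness, here is how I would try to feed writeCloud() directly - only a sketch, assuming Points4D is the 4xN float/double matrix returned by triangulatePoints() and that your OpenCV build includes the viz module (where readCloud()/writeCloud() live). It also explains the 1*N matrix you saw: readCloud() returns a 1xN matrix of 3-channel elements, i.e. one 3D point per element.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/viz.hpp>
using namespace cv;

// Points4D: 4xN homogeneous coordinates from triangulatePoints()
Mat points3d;
convertPointsFromHomogeneous(Points4D.t(), points3d); // divide x,y,z by w -> Nx1, 3-channel
points3d = points3d.reshape(3, 1);                    // 1xN, one 3D point per element
viz::writeCloud("twoview.ply", points3d, noArray(), noArray(), false);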

TIFF files garbled by ArrayFire (C++)

I notice that this simple ArrayFire program is causing loaded TIFF images to be heavily distorted:
#include <iostream>
#include <arrayfire.h>

int main( int argc, char** argv ) {
    af::array img = af::loadImage( argv[1] );
    double mn, mx;
    unsigned idxn, idxx;
    af::min( &mn, &idxn, img );
    af::max( &mx, &idxx, img );
    std::cout << "Image size = " << img.dims()[0] << ", " << img.dims()[1] << '\n';
    std::cout << "Data type = " << img.type() << '\n';
    std::cout << "Min = " << mn << " (at " << idxn << ")\n";
    std::cout << "Max = " << mx << " (at " << idxx << ")\n";
    af::saveImage( argv[2], img );
    return 0;
}
I then compile and run on a simple (monochrome) image:
./a.out orig.tif out.tif
with the following output:
Image size = 256, 256
Data type = 0
Min = 0 (at 65535)
Max = 81.5025 (at 31356)
When I visualize these images I get the following result:
which is of course not what I expect ArrayFire to do; I would expect it to write out exactly the same image, since I didn't make any changes to it. Unfortunately I don't know enough about the TIFF format or the graphics backend of ArrayFire to understand what is going on. Am I doing something wrong while loading the image? (I followed the ArrayFire documentation for loadImage and saveImage.)
I also tried using loadImageNative and saveImageNative as an alternative, but the latter returns a 4-layer TIFF image while the original is only a 1-layer TIFF.
Any help at all from ArrayFire experts would be great.
Thanks!

Can LibTIFF be used with C++ to get the float values of images?

I read here that LibTIFF can display floating point TIFFs. However, I would like to load an image, then get the float values as an array.
Is this possible to do using LibTIFF?
Example TIFF
EDIT: I am using RHEL 6.
If you want to do it with pure libTIFF, your code might look something like this. Note that I have not done much error checking, so as not to confuse the reader: you should check that the image is actually of type float, check the results of the memory allocations, and probably use the C++ methods of memory allocation rather than malloc() as I do. But the concept is hopefully clear, and the code generates the same answers as my CImg version...
#include "tiffio.h"
#include <cstdio>
#include <iostream>
using namespace std;
int main()
{
TIFF* tiff = TIFFOpen("image.tif","r");
if (!tiff) {
cerr << "Failed to open image" << endl;
exit(1);
}
uint32 width, height;
tsize_t scanlength;
// Read dimensions of image
if (TIFFGetField(tiff,TIFFTAG_IMAGEWIDTH,&width) != 1) {
cerr << "Failed to read width" << endl;
exit(1);
}
if (TIFFGetField(tiff,TIFFTAG_IMAGELENGTH, &height) != 1) {
cerr << "Failed to read height" << endl;
exit(1);
}
scanlength = TIFFScanlineSize(tiff);
// Make space for image in memory
float** image= (float**)malloc(sizeof (float*)*height);
cout << "Dimensions: " << width << "x" << height << endl;
cout << "Line buffer length (bytes): " << scanlength << endl;
// Read image data allocating space for each line as we get it
for (uint32 y = 0; y < height; y++) {
image[y]=(float*)malloc(scanlength);
TIFFReadScanline(tiff,image[y],y);
cout << "Line(" << y << "): " << image[y][0] << "," << image[y][1] << "," << image[y][2] << endl;
}
TIFFClose(tiff);
}
Sample Output
Dimensions: 512x256
Line buffer length (bytes): 6144
Line(0): 3.91318e-06,0.232721,128
Line(1): 0.24209,1.06866,128
Line(2): 0.185419,2.45852,128
Line(3): 0.141297,3.06488,128
Line(4): 0.346642,4.35358,128
...
...
By the way...
I converted your image to a regular JPEG using ImageMagick in the Terminal at the command line as follows:
convert map.tif[0] -auto-level result.jpg
Yes, but you will have a much easier time with this if you use the OpenCV library.
If you have the OpenCV library compiled and installed, doing what you are asking is as easy as calling the imread() function. This loads the image into an object called a cv::Mat (i.e. a matrix) with the same dimensions and values as the TIFF.
From there you can do just about anything you want with it.
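A minimal sketch of that, assuming your OpenCV build includes TIFF support; the IMREAD_UNCHANGED flag keeps the 32-bit float samples instead of converting them to 8-bit on load:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // IMREAD_UNCHANGED preserves depth and channel count (e.g. CV_32FC3 for a 3-channel float TIFF)
    cv::Mat img = cv::imread("image.tif", cv::IMREAD_UNCHANGED);
    if (img.empty()) { std::cerr << "Failed to read image" << std::endl; return 1; }
    std::cout << "Dimensions: " << img.cols << "x" << img.rows
              << ", channels: " << img.channels() << std::endl;
    // First pixel; use img.at<float>(0,0) instead for a single-channel image
    cv::Vec3f p = img.at<cv::Vec3f>(0, 0);
    std::cout << p[0] << "," << p[1] << "," << p[2] << std::endl;
    return 0;
}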
You can do it with LibTIFF, and I may well add an answer based on that later, but for ease of installation and use I would look at CImg, which is a very powerful header-only C++ library that is ideal for your purposes. As it is header-only, it is simple to include (just one file) and needs no special linking or building.
Here is how you might read a TIFF of RGB floats:
#define cimg_display 0
#define cimg_use_tiff
#include "CImg.h"
#include <iostream>
using namespace cimg_library;
using namespace std;
int main(){
    // Read in an image
    CImg <float>img("image.tif");
    // Get its width and height and tell user
    int w=img.width();
    int h=img.height();
    cout << "Dimensions: " << w << "x" << h << endl;
    // Get pointer to buffer/array of floats
    float* buffer = img.data();
    cout << buffer[0] << "," << buffer[1] << "," << buffer[2] << endl;
}
That prints the first three red pixels because they are arranged in planes - i.e. all the red pixels first, then all the green pixels, then all the blue pixels.
You would compile that with:
g++-6 -std=c++11 read.cpp -I/usr/local/include -L/usr/local/lib -ltiff -o read
If you prefer, you can access the pixels a slightly different way like this:
#define cimg_display 0
#define cimg_use_tiff
#include "CImg.h"
#include <iostream>
using namespace cimg_library;
using namespace std;
int main(){
    // Read in an image
    CImg <float>img("image.tif");
    // Get its width and height and tell user
    int w=img.width();
    int h=img.height();
    cout << "Dimensions: " << w << "x" << h << endl;
    // Dump the pixels
    for(int y=0;y<h;y++)
        for(int x=0;x<w;x++)
            cout << x << "," << y << ": "
                 << img(x,y,0,0) << "/"
                 << img(x,y,0,1) << "/"
                 << img(x,y,0,2) << endl;
}
Sample Output
Dimensions: 512x256
0,0: 3.91318e-06/0.232721/128
1,0: 1.06577/0.342173/128
2,0: 2.3778/0.405881/128
3,0: 3.22933/0.137184/128
4,0: 4.26638/0.152943/128
5,0: 5.10948/0.00773837/128
6,0: 6.02352/0.058757/128
7,0: 7.33943/0.02835/128
8,0: 8.33965/0.478541/128
9,0: 9.46735/0.335981/128
10,0: 10.1918/0.340277/128
...
...
For your information, I also made the test image file with CImg, like this - basically, each red pixel is set to its x-coordinate plus a small random float less than 0.5, each green pixel is set to its y-coordinate plus a small random float less than 0.5, and each blue pixel is set to a mid-tone.
#define cimg_display 0
#define cimg_use_tiff
#define cimg_use_png
#include "CImg.h"
#include <cstdlib>
using namespace cimg_library;
int main(){
    const int w=512;
    const int h=256;
    const int channels=3;
    float* buffer = new float[w*h*channels];
    float* fp=buffer;
    for(int y=0;y<h;y++){
        for(int x=0;x<w;x++){
            *fp++=x+float(rand())/(2.0*RAND_MAX); // red
        }
    }
    for(int y=0;y<h;y++){
        for(int x=0;x<w;x++){
            *fp++=y+float(rand())/(2.0*RAND_MAX); // green
        }
    }
    for(int y=0;y<h;y++){
        for(int x=0;x<w;x++){
            *fp++=128; // blue
        }
    }
    CImg <float>img(buffer,w,h,1,channels);
    img.save_tiff("result.tif");
}
Yet another, easily installed, lightweight option would be to use vips. You can convert your 32-bit TIF to a raw file of 32-bit floats and read them straight into your C++ program. At the commandline, do the conversion with
vips rawsave yourImage.tif raw.bin
and then read the uncompressed, unformatted floats from the file raw.bin. If you now dump raw.bin, interpreting the data as floats, you can see the same values as in my other answers:
od -f raw.bin
0000000 3.913185e-06 2.327210e-01 1.280000e+02 1.065769e+00
0000020 3.421732e-01 1.280000e+02 2.377803e+00 4.058807e-01
0000040 1.280000e+02 3.229325e+00 1.371841e-01 1.280000e+02
Of course, you can have your program do the conversion by linking to libvips or simply using system() to run the commandline version and then read its output file.
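Reading those floats back into a C++ program is then just a plain binary read - a sketch, assuming raw.bin contains nothing but native-endian 32-bit floats as produced above:
#include <fstream>
#include <vector>
#include <iostream>

int main()
{
    std::ifstream in("raw.bin", std::ios::binary);
    in.seekg(0, std::ios::end);
    std::size_t nFloats = static_cast<std::size_t>(in.tellg()) / sizeof(float);
    in.seekg(0, std::ios::beg);
    std::vector<float> data(nFloats);
    // The file is just packed IEEE floats, so read it straight into the vector
    in.read(reinterpret_cast<char*>(data.data()), nFloats * sizeof(float));
    std::cout << data[0] << "," << data[1] << "," << data[2] << std::endl;
}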

Access to pixel values of a tif image using ITK

I am new to ITK and I am trying to read a TIFF file using ITK and access its pixels. Here is the piece of my code for this purpose:
typedef float PixelType; // Pixel type
const unsigned char Dimension = 2;
typedef itk::Image< PixelType, Dimension > ImageType;
ImageType::Pointer image;
typedef itk::ImageFileReader< ImageType > ReaderType;
ReaderType::Pointer reader;
reader = ReaderType::New();
const char * filename = "test.tif";
reader->SetImageIO(itk::TIFFImageIO::New());
reader->SetFileName(filename);
try
{
    reader->Update();
}
catch (itk::ExceptionObject & excp)
{
    std::cerr << excp << std::endl;
}
// Connect the reader's output to the image
image = reader->GetOutput();
// Access a pixel
ImageType::IndexType pixelIndex;
pixelIndex[0] = 103;
pixelIndex[1] = 178;
ImageType::PixelType pixelValue = image->GetPixel(pixelIndex);
std::cout << "pixel : " << pixelValue << std::endl;
The pixel value I am getting does not match the correct value! I have used MATLAB to check my test image. Here are its dimensions: <298x237x4 uint8>.
I tried storing image(:,:,1) (i.e. <298x237 uint8>) as a new TIFF image using MATLAB, and then passing this image as the input to the above program. That works, and I get the correct pixel value I expected!
I kind of know what the problem is, but I don't know how to solve it: I don't know how to modify my program to extract the <298x237 uint8> image out of the <298x237x4 uint8> input image.
Update:
Here is the information I am getting from tiffinfo:
Your file contains RGBA pixels, but you are reading them into an image of plain floats.
The quick solution is to just change this typedef:
typedef float PixelType; // Pixel type
to:
typedef itk::RGBAPixel<float> PixelType; // Pixel type
With ITK, when you specify an ImageFileReader you are specifying the type you want as output; ITK will implicitly convert to the type you request. In your case you are requesting a conversion from RGBA to float.
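If you then want just one channel (the <298x237> plane), itk::RGBAPixel provides per-channel accessors. A sketch built on the reader code from the question, with the image connected to the reader's output:
typedef itk::RGBAPixel<float> PixelType;
typedef itk::Image< PixelType, 2 > ImageType;
// ... set up the reader exactly as in the question, then:
ImageType::Pointer image = reader->GetOutput();
ImageType::IndexType pixelIndex;
pixelIndex[0] = 103;
pixelIndex[1] = 178;
PixelType pixel = image->GetPixel(pixelIndex);
std::cout << "red channel : " << pixel.GetRed() << std::endl; // first of the four components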
By directly constructing an ImageIO you can obtain the pixel type information from the file. Here is an example from the Wiki Examples:
std::string inputFilename = argv[1];
typedef itk::ImageIOBase::IOComponentType ScalarPixelType;
itk::ImageIOBase::Pointer imageIO =
    itk::ImageIOFactory::CreateImageIO(
        inputFilename.c_str(), itk::ImageIOFactory::ReadMode);
imageIO->SetFileName(inputFilename);
imageIO->ReadImageInformation();
const ScalarPixelType pixelType = imageIO->GetComponentType();
std::cout << "Pixel Type is " << imageIO->GetComponentTypeAsString(pixelType) // 'double'
          << std::endl;
const size_t numDimensions = imageIO->GetNumberOfDimensions();
std::cout << "numDimensions: " << numDimensions << std::endl; // '2'
std::cout << "component size: " << imageIO->GetComponentSize() << std::endl; // '8'
std::cout << "pixel type (string): " << imageIO->GetPixelTypeAsString(imageIO->GetPixelType()) << std::endl; // 'vector'
std::cout << "pixel type: " << imageIO->GetPixelType() << std::endl; // '5'
/*
switch (pixelType)
{
  case itk::ImageIOBase::COVARIANTVECTOR:
    typedef itk::Image<unsigned char, 2> ImageType;
    ImageType::Pointer image = ImageType::New();
    ReadFile<ImageType>(inputFilename, image);
    break;
  case itk::ImageIOBase::UCHAR:
    typedef itk::Image<unsigned char, 2> ImageType;
    ImageType::Pointer image = ImageType::New();
    ReadFile<ImageType>(inputFilename, image);
    break;
  default:
    std::cerr << "Pixel Type ("
              << imageIO->GetComponentTypeAsString(pixelType)
              << ") not supported. Exiting." << std::endl;
    return -1;
}
*/
I also work on the SimpleITK project, which provides a simplified layer on top of ITK designed for rapid prototyping and interactive computing, such as for scripting languages. It adds a layer which automatically loads the image into the correct type. If you are coming from Matlab it may be an easier transition.
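For example, a rough sketch of the SimpleITK C++ interface (it assumes SimpleITK is installed and on your include/link path):
#include <SimpleITK.h>
#include <iostream>
namespace sitk = itk::simple;

int main()
{
    // ReadImage inspects the file and chooses the pixel type for you
    sitk::Image image = sitk::ReadImage("test.tif");
    std::cout << "Pixel type: " << image.GetPixelIDTypeAsString() << std::endl;
    std::cout << "Components per pixel: " << image.GetNumberOfComponentsPerPixel() << std::endl;
    return 0;
}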

QImage read pixel data with precision

Sorry for the basic question; I am just starting to use QImage for reading pixel data from an image file.
To understand the member functions, I tried to load an image file and to output the functions' return values:
QString fileName = "pic1.bmp";
QImage myImage;
myImage.load( fileName );
std::cout << "width = " << myImage.width() << std::endl;
std::cout << "height = " << myImage.height() << std::endl;
std::cout << "dotspermeterX = " << myImage.dotsPerMeterX() << std::endl;
std::cout << "dotspermeterY = " << myImage.dotsPerMeterY() << std::endl;
QRectF myRect = myImage.rect();
std::cout << "rect = " << myRect.bottomLeft().x() << "," << myRect.bottomLeft().y()
<< " " << myRect.topRight().x() << "," << myRect.topRight().y() << std::endl;
The output I got was:
width = 858
height = 608
dotspermeterX = 4724
dotspermeterY = 4724
rect = 0,608 858,0
My questions are:
1. What is the difference between dots and pixels?
2. Does QImage work only with int pixels? Can't I read sub-pixel data for better precision?
To clarify my question, following is a zoomed bitmap image of a diagonal line, and I want to read all the small pixels/dots in this line. Is this possible?
As for the "dots per meter", you have probably heard of "dots per inch" (DPI). It is the same idea. If, for example, you have a 20 inch monitor with a horizontal resolution of X pixels, you will have Y "dots per inch" (or pixels per inch). If you then switch to a 40 inch monitor with the same horizontal resolution X, you have half the DPI, as the screen is now twice as wide. So DPI (or PPI) can be seen as a measurement of the size of the pixels. In your case, 4724 dots per meter times 0.0254 meters per inch is about 120 DPI.
And no, I seriously doubt that QImage has any support for sub-pixel data.
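You can check what is actually stored by reading a pixel back - a short sketch (for the common 8-bit-per-channel formats each channel comes back as an integer in the range 0-255, which is why there is no extra precision to recover):
#include <QImage>
#include <QColor>
#include <iostream>

int main()
{
    QImage img("pic1.bmp");
    // pixel() returns a packed QRgb; qRed/qGreen/qBlue extract the 8-bit integer channels
    QRgb value = img.pixel(10, 20);
    std::cout << "R=" << qRed(value)
              << " G=" << qGreen(value)
              << " B=" << qBlue(value) << std::endl;
    return 0;
}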