Save integer CV_32S image with OpenCV - C++

I am working with TIF images containing signed integer data. After successfully inputting and processing one, I need to output the image in the same format (both input and output are *.tif files).
For the input, I know that OpenCV does not know if the data is signed or unsigned, so it assumes unsigned. Using this trick solves that problem (switching the type of cv::Mat by hand).
However, when I output the image and load it again, I do not get the expected result. The file contains multiple segments (groups of pixels), and the format is as follows (I must use this format):
all pixels not belonging to any segment have the value -9999
all the pixels belonging to a single segment have the same positive integer value
(e.g. all pixels of 1st segment have value 1, second 2 etc)
And here is the example code:
void ImageProcessor::saveSegments(const std::string &filename){
    cv::Mat segmentation = cv::Mat(workingImage.size().height,
                                   workingImage.size().width,
                                   CV_32S, cv::Scalar(-9999));
    for (int i = 0, szi = segmentsInput.size(); i < szi; ++i){
        for (int j = 0, szj = segmentsInput[i].size(); j < szj; ++j){
            segmentation.at<int>(segmentsInput[i][j].Y,
                                 segmentsInput[i][j].X) = i + 1;
        }
    }
    cv::imwrite(filename, segmentation);
}
You can assume that all the variables (e.g. workingImage, segmentsInput) exist as global variables.
Using this code, when I load the saved image and examine the values, most of them are set to 0, while the ones that are set span the full range of integer values (in my example I had 20 segments).

You can't save integer matrices directly with imwrite. As the documentation states: "Only 8-bit (or 16-bit unsigned (CV_16U) in case of PNG, JPEG 2000, and TIFF) single-channel or 3-channel (with ‘BGR’ channel order) images can be saved using this function."
However, what you could do is convert your CV_32S matrix to a CV_8UC4 and save it as a PNG with no compression. Of course, this is a bit unsafe since endianness comes into play and may change your values between different systems or compilers (especially since we're talking about signed integers here). If you always use the same system and compiler, you can use this:
cv::Mat segmentation = cv::Mat(workingImage.size().height,
                               workingImage.size().width,
                               CV_32S, cv::Scalar(-9999));
cv::Mat pngSegmentation(segmentation.rows, segmentation.cols, CV_8UC4, (cv::Vec4b*)segmentation.data);
std::vector<int> params;
params.push_back(CV_IMWRITE_PNG_COMPRESSION);
params.push_back(0);
cv::imwrite("segmentation.png", pngSegmentation, params);

I also save OpenCV Mats as TIFFs, but I don't use OpenCV's TIFF support. I include libtiff on my own (I believe libtiff is also used inside OpenCV), and then you can use the following code to save as TIFF:
TIFF* tif = TIFFOpen("file.tif", "w");
if (tif != NULL) {
    for (int i = 0; i < pages; i++)
    {
        TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, TIFF_UINT64_T(x));      // set the width of the image
        TIFFSetField(tif, TIFFTAG_IMAGELENGTH, TIFF_UINT64_T(y));     // set the height of the image
        TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);                // set number of channels per pixel
        TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 32);                 // set the size of the channels, 32 for CV_32F
        TIFFSetField(tif, TIFFTAG_PAGENUMBER, i, pages);
        TIFFSetField(tif, TIFFTAG_SAMPLEFORMAT, SAMPLEFORMAT_IEEEFP); // for CV_32F
        for (uint32 row = 0; row < y; row++)
        {
            TIFFWriteScanline(tif, &imageDataStack[i].data[row * x * 32 / 8], row, 0);
        }
        TIFFWriteDirectory(tif);
    }
    TIFFClose(tif);
}
imageDataStack is a vector of cv::Mat objects. This code works for me to save TIFF stacks. (For the signed-integer CV_32S data in the question, SAMPLEFORMAT_INT would presumably replace SAMPLEFORMAT_IEEEFP, keeping BITSPERSAMPLE at 32.)

Related

OpenCV - RGB Channels in Float Data Type and Intensity Range within 0-255

How can I achieve the values of the RGB channels as
Float data type
Intensity range within 0-255
I used CV_32FC4 as the matrix type since I'll perform floating-point mathematical operations to implement Daltonization. I was expecting the intensity range to be the same as the intensity range of the RGB channels in CV_8UC3, just with a different data type. But when I printed the matrix I noticed that the channel intensities are not within 0-255. I realized that this is due to the range of the float matrix type.
Mat mFrame(height, width, CV_32FC4, (unsigned char *)pNV21FrameData);
for(int y = 0; y < height; y++){
    for(int x = 0; x < width; x++){
        Vec4f BGRA = mFrame.at<Vec4f>(y,x);
        // Algorithm Implementation
        mFrame.at<Vec4f>(y,x) = BGRA;
    }
}
Mat mResult;
mFrame.convertTo(mResult, CV_8UC4, 1.0/255.0);
I need to manipulate the pixels like BGRA[0] = BGRA[0] * n; then assign it back to the matrix.
From your comments and the link in them, I see that the data comes in BGRA order and is stored as uchar.
I assume this from this line:
Mat mResult(height, width, CV_8UC4, (unsigned char *)poutPixels);
To solve this you can create the matrix and then convert it to float.
Mat mFrame(height, width, CV_8UC4, (unsigned char *)pNV21FrameData);
Mat mFloatFrame;
mFrame.convertTo(mFloatFrame, CV_32FC4);
Notice that this will keep the current range (0-255); if you need another one (like 0-1), you can pass a scaling factor.
Finally you can convert back, but beware that this function applies saturate_cast. If you have a specific way you want to handle the overflow or the decimals, you will have to do it before converting.
Mat mResult;
mFloatFrame.convertTo(mResult, CV_8UC4);
Note that 1.0/255.0 is not there, since the data is already in the range of 0-255 (at least before the operations).
One final comment: the link in your comments uses IplImage and other old, deprecated C versions of OpenCV. If you are working in C++, stick to the C++ types like Mat. This does not affect the code you show here, but it does affect the code you linked. This comment is just to save you future headaches.

How to convert an image with 16 bit integers into a QPixmap

I am working with software that has a proprietary image format. I need to be able to display a modified version of these images in a Qt GUI. There is a method (Image->GetPixel(x,y)) that returns a 16-bit integer (16 bits per pixel). To be clear, the 16-bit number does not represent an RGB color; it literally represents a measurement or dimension of that particular point (a height map) on the part being photographed. I need to take the range of dimensions (integers) in the image, apply a scale, and represent them as colors. Then I need to use that information to build an image for a QPixmap that can be displayed in a QLabel. Here is the general gist...
QByteArray Arr;
unsigned int Temp;
unsigned char bytes[2];
for (int N = 0; N < Image->GetWidth(); N++) {
    for (int M = 0; M < Image->GetHeight(); M++) {
        Temp = Image->GetPixel(N,M);
        bytes[0] = (Temp >> 8) & 0xFF;
        bytes[1] = Temp & 0xFF;
        Arr.push_back(bytes[0]);
        Arr.push_back(bytes[1]);
    }
}
// Take the range of 16 bit integers. Example (14,982 to 16,010)
// Apply a color scheme to the values
// Display the image
QPixmap Image2;
Image2.loadFromData(Arr);
ui->LabelPic->setPixmap(Image2);
Thoughts?
This screenshot is an example of what I am trying to replicate. It is important to note that the coloration of the image is not inherent to the underlying data in the image. It is the result of an application scaling the height values and applying a color scheme to the range of integers.
The information on the proprietary image format is limited, so the below is a guess or thought (as requested) based on the explanation above:
QImage img(/*raw image data*/ (const uchar*) qbyteArr.data(),
           /*columns*/ width, /*height*/ rows,
           /*bytes per line*/ width * sizeof(quint16),
           /*format*/ QImage::Format_RGB16); // the format likely matches the request
QPixmap pixmap = QPixmap::fromImage(img); // if the pixmap is needed
I found pieces of the answer here.
Algorithm to convert any positive integer to an RGB value
As for the actual format, I chose to convert the 16 bit integer into a QImage::Format_RGB888 to create a heat map. This was accomplished by applying a scale to the range of integers and using the scale to plot different color equations.

OpenCV look-up table for CV_16UC1

I need to replace the values of a CV_8UC1 Mat [0,255] with values from a cv::Mat lookUpTable(1, 256, CV_16UC1). I checked this OpenCV tutorial for an explanation of which is the fastest method; however, when I check the assigned values of the LUT at each position, I'm only saving 8 bits, so I'm losing the other 8 bits. This is the source code:
unsigned short int zDTableHexa[256] = {0};
.... get the values...
cv::Mat lookUpTable(1, 256, CV_16UC1);
uchar* p = lookUpTable.data;
for( int i = 0; i < 256; i++){
    p[i] = zDTableHexa[i];
    cout << (int)p[i] << ":" << zDTableHexa[i] << sizeof(p[i]) << ":" << sizeof(zDTableHexa[i]) << endl;
}
The printing result are:
104:872
101:869
97:865
93:861
90:858
86:854
83:851
80:848
76:844
73:841
70:838
66:834
63:831
When I check in binary, only the first 8 bits are there.
I understand that the pointer is uchar (8 bits), but how can I assign the full value?
try
unsigned short* p = (unsigned short*) lookUpTable.data;
OpenCV's LUT works only with CV_8U, so what you can do is split the numbers into 3, that is 3x CV_8U, or CV_8UC3. But you cannot have more than 256 elements in the LUT, and no index values other than 0..255.
In other words, you can map uchars to uchars or uchars to floats: CV_8UC1 -> CV_8UC1 or CV_8UC1 -> CV_8UC3 (I have not tried it with CV_8UC4; it could work).
For getting the elements of CV_8UC3 check cv::Vec3b
I found this, which might interest you

DevIL library: save gray scale image in three matrices instead one

I need to write a program that converts an RGB image to grayscale and saves it in PGM format. I use the DevIL library, but when I save the image I always obtain a 3-channel image: it is in grayscale, but if I load it in MATLAB I have 3 matrices instead of just one. How can I obtain just one matrix in my output file using DevIL?
int main()
{
    ilInit();
    ilEnable(IL_ORIGIN_SET);
    ilOriginFunc(IL_ORIGIN_UPPER_LEFT);
    ilEnable(IL_FILE_OVERWRITE);
    ILuint ImageName; // The image name to return.
    ilGenImages(1, &ImageName);
    ilBindImage(ImageName);
    if(!ilLoadImage("/home/andrea/Scrivania/tests/siftDemoV4/et000.jpg"))
    {
        printf("err");
        exit(1);
    }
    else
        printf("caricata\n");
    ILuint width, height;
    width = ilGetInteger(IL_IMAGE_WIDTH);
    height = ilGetInteger(IL_IMAGE_HEIGHT);
    double v[3] = {0.2989360212937755001405548682669177651405334472656250000,
                   0.5870430744511212495240215503145009279251098632812500000,
                   0.1140209042551033058465748126764083281159400939941406250};
    printf("%.55f %.55f %.55f", v[0], v[1], v[2]);
    ILubyte *imgValue = ilGetData();
    int i = 0;
    ILubyte imgNuova[width*height];
    while( i < width*height )
    {
        imgNuova[i] = (ILubyte)round( ((double)imgValue[3*i]*v[0])
                                    + ((double)imgValue[3*i+1]*v[1])
                                    + ((double)imgValue[3*i+2]*v[2]) );
        i++;
    }
    ILuint ImageName2;
    ilGenImages(1, &ImageName2);
    ilBindImage(ImageName2);
    ilTexImage(width, height, 1, 1, IL_LUMINANCE, IL_UNSIGNED_BYTE, imgNuova);
    iluFlipImage();
    ilSave(IL_PNM, "/home/andrea/Scrivania/tests/siftDemoV4/et000new.pgm");
    return 0;
}
Unfortunately, due to a bug in the PNM export, DevIL can and will only write PPM (Portable Pixmaps, 3 channel RGB) files regardless of the file extension. The only solution to this is to use a different file format, that supports single channel grayscale images, like PNG.
Matlab should be able to use that just as well. If you absolutely need or want files in the PGM format, you will have to use a converter like png2pnm.

How to access single channel matrix in OpenCV

I want to ask if it is possible to access a single-channel matrix using img.at<T>(y, x) instead of using img.ptr<T>(y, x)[0]
In the example below, I create a simple program to copy an image to another
cv::Mat inpImg = cv::imread("test.png");
cv::Mat img;
inpImg.convertTo(img, CV_8UC1); // single channel image
cv::Mat outImg(img.rows, img.cols, CV_8UC1);
for(int a = 0; a < img.cols; a++)
    for(int b = 0; b < img.rows; b++)
        outImg.at<uchar>(b, a) = img.at<uchar>(b, a); // This is wrong
cv::imshow("Test", outImg);
The shown result was wrong, but if I change it to
outImg.ptr<uchar>(b, a)[0] = img.ptr<uchar>(b, a)[0];
The result was correct.
I'm quite puzzled, since using img.at<T>(y, x) should also be okay. I also tried with CV_32FC1 and float; the result was similar.
Although I know you already found it, the real reason - buried nicely in the documentation - is that cv::Mat::convertTo ignores the number of channels implied by the output type, so when you do this:
inpImg.convertTo(img, CV_8UC1);
And, assuming your input image has three channels, you actually end up with a CV_8UC3 format, which explains why your initial workaround was successful - effectively, you only took a single channel by doing this:
img.ptr<uchar>(b, a)[0] // takes the first channel of a CV_8UC3
This only worked by accident as the pixel should have been accessed like this:
img.at<cv::Vec3b>(b, a)[0] // takes the blue channel of a CV_8UC3
As the data is still packed uchar in both cases, the effective reinterpretation happened to work.
As you noted, you can either convert to greyscale on loading:
cv::imread("test.png", CV_LOAD_IMAGE_GRAYSCALE)
Or, you can convert explicitly:
cv::cvtColor(inpImg, inpImg, CV_BGR2GRAY);