I have a raw file with a 5-byte header: the number of rows and the number of columns are stored in the first two bytes each, and the 5th byte contains the number of bits per pixel, which is 8 in all cases. The image data follows after that.
Since I am new to OpenCV, I want to ask how to view this raw file as a greyscale image using C++.
I know how to read binary data in C++ and have stored the image as a 2-D unsigned char array (since each pixel is 8 bits).
Can anyone please tell me how to view this data as an image using OpenCV?
I am using the code below, but I get a completely weird image:
void openRaw() {
    cv::Mat img(numRows, numCols, CV_8U, &(image[0][0]));
    //img.t();
    cv::imshow("img", img);
    cv::waitKey();
}
Any help will be greatly appreciated.
Thanks,
Rohit
You have to convert it to an IplImage.
If you want to see it as a pure grey-scale image, it's actually rather easy.
Example code I use in one application:
CvSize mSize;
mSize.height = 960;
mSize.width = 1280;
IplImage* image1 = cvCreateImage(mSize, 8, 1);
memcpy( image1->imageData, rawDataPointer, sizeOfImage);
cvNamedWindow( "corners1", 1 );
cvShowImage( "corners1", image1 );
At that point you have a valid IplImage, which you can then display. (last 2 lines of code display it)
If the image is bayer-tiled, you will have to convert to RGB.
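For the raw format described in the question, a minimal sketch of this approach might look as follows. It assumes image points at the pixel data already read from the file (for the 2-D array in the question that would be &image[0][0]) and that numRows/numCols were taken from the header; the row-by-row copy is only there in case widthStep is padded beyond numCols bytes:
#include <opencv2/opencv.hpp>
#include <cstring>

// Sketch only: wrap an existing 8-bit buffer in an IplImage and display it.
void showRawAsIplImage(const unsigned char* image, int numRows, int numCols)
{
    CvSize mSize;
    mSize.height = numRows;   // rows from the header
    mSize.width  = numCols;   // columns from the header

    IplImage* gray = cvCreateImage(mSize, IPL_DEPTH_8U, 1);

    // Copy row by row so any padding implied by widthStep is respected.
    for (int r = 0; r < numRows; ++r)
        std::memcpy(gray->imageData + r * gray->widthStep,
                    image + r * numCols, numCols);

    cvNamedWindow("raw", 1);
    cvShowImage("raw", gray);
    cvWaitKey(0);
    cvReleaseImage(&gray);
}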
C++ notation:
cv::Mat img(rows, cols, CV_8U, ptrToDat);
cv::imshow("img", img);
cv::waitKey();
*The data is assumed to be stored row-wise; if it is stored column-wise, use:
cv::Mat img(cols, rows, CV_8U, ptrToDat);
img = img.t();
cv::imshow("img", img);
cv::waitKey();
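Putting this together for the original question, here is a minimal sketch that reads the 5-byte header (2 bytes of rows, 2 bytes of columns, 1 byte of bits-per-pixel) and then the pixel data, and displays the result. The little-endian byte order of the header and the function/parameter names are assumptions, not part of any known format:
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

void openRaw(const char* path)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return;

    unsigned char header[5];
    std::fread(header, 1, 5, f);
    int numRows = header[0] | (header[1] << 8);      // assumed little-endian
    int numCols = header[2] | (header[3] << 8);
    if (header[4] != 8) { std::fclose(f); return; }  // only 8 bpp handled here

    std::vector<unsigned char> data(numRows * numCols);
    std::fread(&data[0], 1, data.size(), f);
    std::fclose(f);

    // Row-major data: rows first, then columns.
    cv::Mat img(numRows, numCols, CV_8U, &data[0]);
    cv::imshow("img", img);
    cv::waitKey();
}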
I have an algorithm that does some stuff. Among other things, there is a conversion that works fine if I'm working on a CV_8UC3 image but goes wrong if the image type is CV_16UC3.
This is some code:
//new image is created
Mat3w img(100,100,Vec3w(1000,0,0));
//Image Conversion - ERROR!
cv::Mat inputSource;
//saving the image here will work
img.convertTo(inputSource, CV_64FC3);
//saving the image here will not work -> black image
The processing result for the CV_16UC3 image has the right dimensions but is completely black.
The problem seems to be in the conversion: saving the image right before it gives a legitimate one, while saving it right after gives an almost completely white one.
EDIT:
I made some changes: cut off some useless code and added the inputSource declaration.
Now, while I was trying stuff, I arrived at the conclusion that either I haven't understood the CV Types, or something strange is happening.
I always thought that the number in the type indicates the number of bits per channel. So, in my head, CV_16UC3 is a 3-channel image with 16 bits per channel. That idea is strengthened by the fact that the image I saved as a test (before the img.convertTo) actually had a matching number of bits per channel. The strange thing is that the saved inputSource (type CV_64FC3) is an 8-bpc image.
What am I missing?
You are getting confused by the way imwrite and imread work in OpenCV. From the OpenCV documentation:
imwrite
The function imwrite saves the image to the specified file. The image format is chosen based on the filename extension (see imread() for the list of extensions). Only 8-bit (or 16-bit unsigned (CV_16U) in case of PNG, JPEG 2000, and TIFF) single-channel or 3-channel (with ‘BGR’ channel order) images can be saved using this function. If the format, depth or channel order is different, use Mat::convertTo() , and cvtColor() to convert it before saving. Or, use the universal FileStorage I/O functions to save the image to XML or YAML format.
imread
The function imread loads an image from the specified file and returns it. Possible flags are:
IMREAD_UNCHANGED : If set, return the loaded image as is (with alpha channel, otherwise it gets cropped).
IMREAD_GRAYSCALE : If set, always convert image to the single channel grayscale image.
IMREAD_COLOR : If set, always convert image to the 3 channel BGR color image.
IMREAD_ANYDEPTH : If set, return 16-bit/32-bit image when the input has the corresponding depth, otherwise convert it to 8-bit.
IMREAD_ANYCOLOR : If set, the image is read in any possible color format.
So for your case, CV_16U are saved without conversion, while CV_64F is converted and saved as CV_8U. If you want to store double data, you should use FileStorage.
You should also take care to read the image back with imread using the appropriate flag.
This example should clarify:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Create a 16-bit 3 channel image
    Mat3w img16UC3(100, 200, Vec3w(1000, 0, 0));
    img16UC3(Rect(0, 0, 20, 50)) = Vec3w(0, 2000, 0);

    // Convert to 64-bit (double) 3 channel image
    Mat3d img64FC3;
    img16UC3.convertTo(img64FC3, CV_64FC3);

    // Save to disk
    imwrite("16UC3.png", img16UC3); // No conversion
    imwrite("64FC3.png", img64FC3); // Converted to CV_8UC3

    FileStorage fout("64FC3.yml", FileStorage::WRITE);
    fout << "img" << img64FC3; // No conversion
    fout.release();

    Mat img_maybe16UC3_a = imread("16UC3.png" /*, IMREAD_COLOR*/); // Will be CV_8UC3
    Mat img_maybe16UC3_b = imread("16UC3.png", IMREAD_ANYDEPTH);   // Will be CV_16UC1
    Mat img_maybe16UC3_c = imread("16UC3.png", IMREAD_UNCHANGED);  // Will be CV_16UC3

    Mat img_maybe64FC3_a = imread("64FC3.png" /*, IMREAD_COLOR*/); // Will be CV_8UC3
    Mat img_maybe64FC3_b = imread("64FC3.png", IMREAD_ANYDEPTH);   // Will be CV_8UC1
    Mat img_maybe64FC3_c = imread("64FC3.png", IMREAD_UNCHANGED);  // Will be CV_8UC3

    Mat img_mustbe64FC3;
    FileStorage fin("64FC3.yml", FileStorage::READ);
    fin["img"] >> img_mustbe64FC3; // Will be CV_64FC3
    fin.release();

    return 0;
}
I am using OpenCV for the first time, with OpenCV 3 and Xcode. I want to create a 16-bit grayscale image, but the data I have is defined such that 4000 is the pixel value for white and 0 for black. I have the information for these pixels in an array of type int. How can I create a Mat and assign the values in the array to it?
short data[] = { 0,0,4000,4000,0,0,4000, ...};
Mat gray16 = Mat(h, w, CV_16S, data);
Again, the types must match: for 16-bit you need CV_16S and a short* array, for 8-bit CV_8U and a uchar* array, for float CV_32F and a float* array, and so on.
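A small sketch of that rule with placeholder data, pairing each array element type with the matching Mat depth:
#include <opencv2/opencv.hpp>

int main()
{
    // 16-bit data goes with CV_16S (or CV_16U) and a short array...
    short data16[] = { 0, 0, 4000, 4000, 0, 0 };
    cv::Mat gray16(2, 3, CV_16S, data16);

    // ...8-bit data with CV_8U and a uchar array...
    uchar data8[] = { 0, 64, 128, 192, 255, 0 };
    cv::Mat gray8(2, 3, CV_8U, data8);

    // ...and float data with CV_32F and a float array.
    float dataF[] = { 0.f, 0.25f, 0.5f, 0.75f, 1.f, 0.f };
    cv::Mat grayF(2, 3, CV_32F, dataF);

    return 0;
}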
You can create your Mat with
cv::Mat m(rows, cols, CV_16UC1);
but to my knowledge there is no way to define a custom value for "white"; you'll have to multiply m by std::numeric_limits<unsigned short>::max() / 4000. However, this is only necessary when displaying the image.
A lookup table could do the same (potentially slower), see cv::LUT. However, it apparently only supports 8-bit images.
edit: OK, I missed the part about assigning existing array values; see berak's answer. I hope the answer is still useful.
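If the goal is simply to display the 0..4000 range with 4000 shown as white, one option (a sketch, not the only way; showScaled and its parameter are placeholder names) is to rescale into a temporary image with convertTo before imshow:
#include <opencv2/opencv.hpp>
#include <limits>

// Sketch: gray16 is assumed to hold values in the range 0..4000.
void showScaled(const cv::Mat& gray16)
{
    cv::Mat display;
    // Map 4000 -> 65535 so that 4000 shows up as full white.
    double scale = std::numeric_limits<unsigned short>::max() / 4000.0;
    gray16.convertTo(display, CV_16U, scale);
    cv::imshow("scaled", display);
    cv::waitKey();
}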
I am using openCV in my c++ image processing project.
I have this two-dimensional array I[800][600] filled with values between 0 and 255, and I want to put this array into a gray-level "IplImage" so I can view it and process it using OpenCV functions.
Any help will be appreciated.
Thanks in advance.
It's easy with the OpenCV C++ interface; all you need to do is initialize a matrix, see the line below:
cv::Mat img = cv::Mat(800, 600, CV_8UC1, I); // I[800][600]
Now you can do whatever you want; OpenCV treats img as an 8-bit grayscale image.
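For example, a self-contained sketch (the gradient fill is just placeholder data standing in for the real array):
#include <opencv2/opencv.hpp>

int main()
{
    // Placeholder data: fill the 2-D array with a horizontal gradient.
    static unsigned char I[800][600];
    for (int r = 0; r < 800; ++r)
        for (int c = 0; c < 600; ++c)
            I[r][c] = (unsigned char)(c * 255 / 599);

    // Wrap the array without copying: 800 rows, 600 columns.
    cv::Mat img(800, 600, CV_8UC1, &I[0][0]);

    // From here on OpenCV treats it as an 8-bit grayscale image.
    cv::GaussianBlur(img, img, cv::Size(5, 5), 1.5);
    cv::imshow("gray", img);
    cv::waitKey();
    return 0;
}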
CvSize image_size;
image_size.height = 800;
image_size.width = 600;
int channels = 1;
IplImage *image = cvCreateImageHeader(image_size, IPL_DEPTH_8U, channels);
cvSetData(image, I, image->widthStep);
This is untested, but the most important thing likely to require fixing is the second parameter to cvSetData(). It needs to be a pointer to unsigned char data, and if you're just using a 2D array that isn't part of a Mat, you may have to do something a bit different (possibly a loop, as sketched below, although you should avoid loops in OpenCV as much as possible).
see this post for a highly relevant question
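For the loop idea mentioned above, here is a sketch that, instead of cvCreateImageHeader/cvSetData, allocates its own buffer with cvCreateImage and copies the 2-D array into it row by row, so that any widthStep padding is respected (the 800x600 size matches the question; the function name is a placeholder):
#include <opencv2/opencv.hpp>
#include <cstring>

IplImage* arrayToIplImage(unsigned char I[800][600])
{
    CvSize image_size;
    image_size.height = 800;
    image_size.width  = 600;

    IplImage* image = cvCreateImage(image_size, IPL_DEPTH_8U, 1);
    for (int r = 0; r < image_size.height; ++r)
        std::memcpy(image->imageData + r * image->widthStep,
                    I[r], image_size.width);
    return image;   // caller releases it with cvReleaseImage(&image)
}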
When I read a grayscale image, for example in OpenCV 2.3:
Mat src = imread("44.png" ,0);
How can i access the pixel value of it?
I know that if it's RGB I can use:
std::cout << src.at<cv::Vec3b>(i,j)[0].
Thanks in advance.
Since a grayscale image contains only one component instead of 3, the resulting matrix/image is of type CV_8UC1 instead of CV_8UC3. And this in turn means that individual pixels are not 3-vectors of bytes (cv::Vec3b) but just single bytes (unsigned char, or OpenCV's uchar). So you can just use:
src.at<unsigned char>(i, j)
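A small sketch of that in context (assuming "44.png" exists next to the executable and the indices are inside the image):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat src = cv::imread("44.png", 0);   // load as 8-bit grayscale
    if (src.empty()) return 1;

    int i = 10, j = 20;                      // arbitrary row and column
    // Cast to int so std::cout prints a number rather than a character.
    std::cout << (int)src.at<unsigned char>(i, j) << std::endl;
    return 0;
}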
I have a strange problem transforming a stack of BMP images into a raw file (an unsigned char array). This is the code:
for (int i = 365; i <= 385; i++)
{
    sprintf(secondname, "C:\\tr\\tr_");
    sprintf(secondtemp, "_%04d.bmp", i);
    strcat(secondname, secondtemp);
    cvSaveImage(secondname, out);
    cvReleaseImage(&out);
    IplImage* img2 = cvLoadImage(secondname, 0);
    memcpy(&im[xsize*ysize*(i-365)], img2->imageData, xsize*ysize);
}
outfile=fopen("C:\\Histo_Registration\\a.raw","wb");
fwrite((unsigned char*)im,1,(xsize)*(ysize)*(zsize),outfile);
fclose(outfile);
The problem is that when the images I load are, for example, 512x512, the resulting raw file is fine. When the images are 426x425, the resulting raw file is strange and almost certainly not correct. Any idea?
Your code doesn't take bitmap line alignment into account. See the IplImage::widthStep member. You cannot copy the whole image in one memcpy call if widthStep is not equal to (pixel size in bytes * line width in pixels).
Windows bitmaps are 32-bit aligned, this is why 512x512 image is OK, and 426x425 is wrong. For example, if image width = 11, and every pixel is 1 byte length, actual line width (widthStep) will be 12 (4 bytes alignment).
The length of each row in a BMP is a multiple of 4 bytes; if necessary, the remaining bytes are filled with 0. You need to take that into account.
See the Wikipedia article about the BMP file format for details.
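A sketch of that row-by-row copy, wrapped in a helper with a hypothetical name (copyPacked), which drops the padding bytes at the end of each bitmap line:
#include <opencv2/opencv.hpp>
#include <cstring>

// Copy a single-channel IplImage into a tightly packed destination buffer,
// skipping the per-row padding implied by widthStep.
void copyPacked(const IplImage* img, unsigned char* dst, int xsize, int ysize)
{
    for (int y = 0; y < ysize; ++y)
        std::memcpy(dst + y * xsize,                      // packed destination row
                    img->imageData + y * img->widthStep,  // padded source row
                    xsize);
}
In the loop from the question this would replace the single memcpy call, e.g. copyPacked(img2, &im[xsize*ysize*(i-365)], xsize, ysize);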