I am using OpenCV 2.4.4 on a CentOS machine. My code currently loads an image, but emits the warning: component data type mismatch
here is the code:
#include <cv.h>
#include <highgui.h>
#include "imglib.h"
int main( int argc, char** argv )
{
    Mat image = imread( argv[1], CV_LOAD_IMAGE_ANYDEPTH );
    imwrite( "debugwriteout.jp2", image );
}
I pass the name of a .jp2 greyscale file in the args. The image has a 14-bit pixel depth, but when I print out the pixel values I get values over 20000, and the written image is a completely black square.
Additional information:
When I change the imread flag to CV_LOAD_IMAGE_GRAYSCALE, it successfully converts the image to an 8-bit pixel depth and prints useful output, so I can tell that the JasPer module is at least somewhat working.
Any advice would be appreciated,
Thanks
SZman,
I solved my problem.
The issue is the position of the high bit: for a 14-bit depth stored in 16 bits, the data is laid out as xxxxxxxxxxxxxx00 instead of 00xxxxxxxxxxxxxx. To get the correct values, you must shift each pixel right by 2 bits.
Also, read the image using these flags:
Mat image = imread( argv[1], CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR);
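For example, a minimal sketch of that shift, assuming the image loads as a single-channel 16-bit Mat (dividing by 4 is the same as a 2-bit right shift):
Mat image = imread( argv[1], CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR );
if( image.type() == CV_16UC1 )
{
    // move the 14-bit samples from the high bits down to the low bits
    image /= 4;   // equivalent to shifting every pixel right by 2 bits
}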
I am trying to read a .tif or .tiff floating-point grayscale image in OpenCV.
I can read and write routine file formats such as PNG and JPG, but I am not able to read this .tif/.tiff file from my Desktop; it is a format I have never used before.
The image I am trying to read has its size, width, and height shown in the attached screenshots.
After reading some documentation and various sources, I understood that it is possible to use the convertTo function to convert between the available data types; the source can be found here. However, this didn't work, and I actually got a runtime error saying:
OpenCV(3.4.1) Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /home/to/opencv/modules/highgui/src/window.cpp, line 356
terminate called after throwing an instance of cv::Exception
what(): OpenCV(3.4.1) /home/to/opencv/modules/highgui/src/window.cpp:356: error: (-215) size.width>0 && size.height>0 in function imshow
The code I am using is the following:
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
#include <string>
using namespace cv;
using namespace std;
int main( int argc, char** argv )
{
    Mat img = imread("/home/to/Desktop/example.tif");
    cv::imshow("source", img);

    Mat dst; // destination image

    // check if we have RGB or grayscale image
    if (img.channels() == 3) {
        // convert 3-channel (RGB) 8-bit uchar image to 32 bit float
        img.convertTo(dst, CV_32FC3);
    }
    else if (img.channels() == 1) {
        // convert 1-channel (grayscale) 8-bit uchar image to 32 bit float
        img.convertTo(dst, CV_32FC1);
    }

    // display output; note that to display the dst image correctly
    // we have to divide each element of dst by 255 to keep
    // the pixel values in the range [0,1].
    cv::imshow("output", dst/255);
    waitKey();
}
An additional example I tried in order to make it work comes directly from the OpenCV documentation, which can be found here, with a small modification: I read in the official documentation that the flags IMREAD_ANYCOLOR | IMREAD_ANYDEPTH should also be enabled, and that is what I did in the second trial below:
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
#include <string>
using namespace cv;
using namespace std;
int main( int argc, char** argv )
{
    String imageName( "/home/to/Desktop/example.tif" ); // by default
    if( argc > 1)
    {
        imageName = argv[1];
    }

    Mat image;
    Mat outImage;
    image = imread( imageName, IMREAD_ANYCOLOR | IMREAD_ANYDEPTH ); // Read the file
    if( image.empty() )                       // Check for invalid input
    {
        cout << "Could not open or find the image" << std::endl;
        return -1;
    }

    namedWindow( "Display window", WINDOW_AUTOSIZE ); // Create a window for display.
    resize(image, outImage, cv::Size(500,500));
    imshow("orig", image);
    imshow("resized", outImage);              // Show our image inside it.
    waitKey(0);                               // Wait for a keystroke in the window
    return 0;
}
This time the program compiles and runs without any error, but no image is shown, as you can see from the screenshot below:
UPDATE
This is the result after the cv::resize
UPDATE 2
This is the result after applying imshow("Display window", image*10);
Is there something that I am missing from the official documentation or something else I am forgetting to do?
Thanks for shedding light on this issue.
Your image is composed of a single channel of 64-bit floats ranging from -219.774 to -22.907. I can tell by using tiffutil, which ships with libtiff:
tiffutil -verboseinfo image.tif
TIFFReadDirectory: Warning, Unknown field with tag 33550 (0x830e) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 33922 (0x8482) encountered.
TIFFReadDirectory: Warning, Unknown field with tag 42113 (0xa481) encountered.
Directory at 0x256b3a2
Image Width: 2277 Image Length: 2153
Bits/Sample: 64
Sample Format: IEEE floating point
Compression Scheme: none
Photometric Interpretation: "min-is-black"
Samples/Pixel: 1
Rows/Strip: 1
Number of Strips: 2153
Strips (Offset, ByteCount):
17466, 18216
35682, 18216
53898, 18216
...
...
I am not certain exactly what you plan to do, but as a first stab, you can just add 220 to every pixel and convert to unsigned char; your range will then be 0 to 197, which is perfectly displayable:
I actually did it using Python because I am quicker with that, but the C++ will follow exactly the same format:
import cv2
import numpy as np
from PIL import Image

# Load image unchanged, keeping the 64-bit float samples
img = cv2.imread('image.tif', cv2.IMREAD_UNCHANGED)

# Add 220 to all values, cast to unsigned 8-bit and display
Image.fromarray((img + 220).astype(np.uint8)).show()
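A minimal C++ sketch of the same idea, assuming the TIFF loads as a CV_64F Mat just as in the Python version above (the window name is arbitrary):
#include <opencv2/opencv.hpp>

int main()
{
    // Read the TIFF unchanged so the 64-bit float samples are preserved
    cv::Mat img = cv::imread("image.tif", cv::IMREAD_UNCHANGED);

    // Convert to 8-bit while adding 220: dst = saturate_cast<uchar>(1.0*src + 220)
    cv::Mat disp;
    img.convertTo(disp, CV_8U, 1.0, 220.0);

    cv::imshow("display", disp);
    cv::waitKey(0);
    return 0;
}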
I'm stuck on writing an OpenCV Mat in 16 bit. Whatever I try, the result is always an 8-bit (0-255) image. I checked the related questions on SO but nothing there solved this issue.
The Mat contains 0-65535 greyscale values before writing it to disk. I have already tried the following (and many more approaches):
cv::Mat depth;
depth.convertTo(depth, CV_16UC1);
imwrite("depth.png", depth);
The documentation for imwrite() says that it is possible to save a Mat with imwrite() when it is CV_16U.
What is wrong with the code? Does anybody know how to solve this?
Both the following work:
#include <iostream>
#include <opencv2/opencv.hpp>
using namespace cv;
int main( int, char** argv )
{
    // Start with unsigned shorts and write to PNG
    unsigned short data[4] = {0, 12000, 24000, 65535};
    Mat src = Mat(1, 4, CV_16UC1, data);
    imwrite("depth.png", src);

    // Start with floats and convert to 16-bit then write to PNG
    float dataf[4] = {0.0, 12000.0, 25000.0, 65000.0};
    Mat a = Mat(1, 4, CV_32FC1, dataf);
    a.convertTo(a, CV_16UC1);
    imwrite("d2.png", a);
}
I can check them with ImageMagick as follows, and both are 16-bit greyscale PNGs:
convert depth.png txt:
# ImageMagick pixel enumeration: 4,1,65535,gray
0,0: (0) #000000000000 gray(0)
1,0: (12000) #2EE02EE02EE0 gray(18.3108%)
2,0: (24000) #5DC05DC05DC0 gray(36.6217%)
3,0: (65535) #FFFFFFFFFFFF gray(255)
and
convert d2.png txt:
# ImageMagick pixel enumeration: 4,1,65535,gray
0,0: (0) #000000000000 gray(0)
1,0: (12000) #2EE02EE02EE0 gray(18.3108%)
2,0: (25000) #61A861A861A8 gray(38.1476%)
3,0: (65000) #FDE8FDE8FDE8 gray(99.1836%)
I suggest your problem is elsewhere. Please provide an MCVE.
Also, maybe try writing it to FileStorage or PGM format instead of PNG and look at the file in a normal editor to see if it really looks like 16-bit data - and if it works at all.
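For instance, a quick sketch of that check, assuming depth really is a CV_16UC1 Mat (PGM keeps the raw 16-bit samples, and FileStorage dumps them as readable YAML):
// Write as PGM, then inspect the file: the header is plain text and the samples are 16-bit words
imwrite("depth.pgm", depth);

// Or dump the matrix to YAML so you can open it in a normal editor
cv::FileStorage fs("depth.yml", cv::FileStorage::WRITE);
fs << "depth" << depth;
fs.release();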
Thought I'd try my hand at a little (auto)correlation/convolution today in openCV and make my own 2D filter kernel.
Following OpenCV's 2D Filter Tutorial, I discovered that making your own kernels for OpenCV's filter2D might not be that hard. However, I'm getting unhandled exceptions when I try to use one.
Code with comments relating to the issue here:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char** argv) {
//Loading the source image
Mat src;
src = imread( "1.png" );
//Output image of the same size and the same number of channels as src.
Mat dst;
//Mat dst = src.clone(); //didn't help...
//desired depth of the destination image
//negative so dst will be the same as src.depth()
int ddepth = -1;
//the convolution kernel, a single-channel floating point matrix:
Mat kernel = imread( "kernel.png" );
kernel.convertTo(kernel, CV_32F); //<<not working
//normalize(kernel, kernel, 1.0, 0.0, 4, -1, noArray()); //doesn't help
//cout << kernel.size() << endl; // ... gives 11, 11
//however, the example from tutorial that does work:
//kernel = Mat::ones( 11, 11, CV_32F )/ (float)(11*11);
//default value (-1,-1) here means that the anchor is at the kernel center.
Point anchor = Point(-1,-1);
//value added to the filtered pixels before storing them in dst.
double delta = 0;
//alright, let's do this...
filter2D(src, dst, ddepth , kernel, anchor, delta, BORDER_DEFAULT );
imshow("Source", src); //<< unhandled exception here
imshow("Kernel", kernel);
imshow("Destination", dst);
waitKey(1000000);
return 0;
}
As you can see, using the tutorial's kernel works fine, but my image crashes the program. I've tried changing the bit depth, normalizing, checking the size, and commenting out lots of blocks to see where it fails, but I haven't cracked it yet.
The image is '1.png':
And the kernel I want, 'kernel.png':
I'm trying to see if I can get a hotspot in dst at the point where the eye catchlight is (the kernel I've chosen is the catchlight). I know there are other ways to do this, but I'm interested to see how effective convolving the catchlight over itself is. (autocorrelation I think that's called?)
Direct questions:
why the crash?
is the crash indicating a fundamental conceptual mistake?
or (hopefully) is it just some (silly) fault in the code?
Thanks in advance for any help :)
You should post the assertion error; that would help someone answer you instead of guessing why it crashes. Anyway, I have posted below the likely error and the solution for the filter2D convolution.
Error 1:
OpenCV Error: Assertion failed (src.channels() == 1 && func != 0) in cv::countNonZero, file C:\builds\2_4_PackSlave-win32-vc12-shared\opencv\modules\core\src\stat.cpp, line 549
Solution: Your input image and the kernel should both be grayscale. You can use the flag 0 in imread (e.g. cv::imread("kernel.png", 0) to read the image as grayscale). If you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually.
I don't see anything else besides the above error that could crash. The kernel size should be an odd number, and your kernel image is 11x11, which is fine. If it still crashes, kindly provide more information so we can help you out.
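A minimal sketch of that fix, assuming the same file names as in the question (read both images as grayscale, convert the kernel to float, and normalize it so the response is not blown out):
Mat src = imread("1.png", 0);            // 0 = load as grayscale
Mat kernel = imread("kernel.png", 0);    // single-channel kernel image

kernel.convertTo(kernel, CV_32F);        // filter2D expects a floating point kernel
kernel /= cv::sum(kernel)[0];            // normalize, assuming the kernel sum is non-zero

Mat dst;
filter2D(src, dst, -1, kernel, Point(-1,-1), 0, BORDER_DEFAULT);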
This may be rudimentary, but is it possible to know how many channels a cv::Mat has? For example, when we load an RGB image, I know there are 3 channels. I do the following operations, just to get the Laplacian of the image, which is straight from the OpenCV documentation.
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char **argv)
{
    // parameters as in the OpenCV Laplacian tutorial
    int ddepth = CV_16S, kernel_size = 3;
    double scale = 1, delta = 0;
    Mat src = imread(argv[1], 1), src_gray, dst_gray, abs_dst_gray;
    cvtColor(src, src_gray, COLOR_BGR2GRAY);
    GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
    Laplacian(src_gray, dst_gray, ddepth, kernel_size, scale, delta, BORDER_DEFAULT);
    convertScaleAbs(dst_gray, abs_dst_gray);
}
After converting to grayscale, we should have only one channel. But how can I determine the number of channels of abs_dst_gray in the program? Is there a function to do this, or another method I would have to write myself? Please help me here.
Thanks in advance.
Call Mat::channels():
cv::Mat img(1, 1, CV_8U, cv::Scalar(0));
std::cout << img.channels();
Output:
1
which is the number of channels.
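Applied to your code, a quick check might look like this (assuming abs_dst_gray is the Mat produced by convertScaleAbs above):
std::cout << abs_dst_gray.channels(); // prints 1, since the Laplacian of a grayscale image is single-channel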
Also, try:
std::cout<<img.type();
Output:
0
which corresponds to CV_8U (look here at line 542). Study the file types_c.h for each define.
You might use:
Mat::channels()
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-channels
I have a raw file which contains a 5-byte header: the number of rows and the number of columns in the first two bytes each, and the 5th byte holding the number of bits per pixel, which is 8 in all cases. The image data follows after that.
Since I am new to OpenCV, I want to ask how to view this raw image file as a greyscale image using C++.
I know how to read binary data in C++ and have stored the image as a 2-D unsigned char array (since each pixel is 8 bits).
Can anyone please tell me how to view this data as an image using OpenCV?
I am using the code below, but getting a completely weird image:
void openRaw() {
    cv::Mat img(numRows, numCols, CV_8U, &(image[0][0]));
    //img.t();
    cv::imshow("img", img);
    cv::waitKey();
}
Any help will be greatly appreciated.
Thanks,
Rohit
You have to convert it to an IplImage.
If you want to see it as a pure grey-scale image, it's actually rather easy.
Example code I use in one application:
CvSize mSize;
mSize.height = 960;
mSize.width = 1280;
IplImage* image1 = cvCreateImage(mSize, 8, 1);
memcpy( image1->imageData, rawDataPointer, sizeOfImage);
cvNamedWindow( "corners1", 1 );
cvShowImage( "corners1", image1 );
At that point you have a valid IplImage, which you can then display. (last 2 lines of code display it)
If the image is bayer-tiled, you will have to convert to RGB.
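For example, a sketch of that conversion, reusing rawDataPointer and the dimensions from the snippet above, and assuming a BG Bayer pattern (the exact COLOR_BayerXX2BGR code depends on your sensor):
cv::Mat bayer(mSize.height, mSize.width, CV_8UC1, rawDataPointer);
cv::Mat rgb;
cv::cvtColor(bayer, rgb, cv::COLOR_BayerBG2BGR);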
C++ notation:
cv::Mat img(rows, cols, CV_8U, ptrToDat);
cv::imshow("img", img);
cv::waitKey();
*This assumes the data is stored row-wise; if it is saved column-wise instead, use:
cv::Mat img(cols, rows, CV_8U, ptrToDat);
img = img.t();
cv::imshow("img", img);
cv::waitKey();
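Putting it together, a minimal sketch of reading the raw file described in the question: 2 bytes of rows, 2 bytes of columns, 1 byte of bit depth, then the pixel data. The byte order of the two 16-bit header fields is an assumption, so swap the bytes if the dimensions come out wrong:
#include <opencv2/opencv.hpp>
#include <fstream>
#include <vector>

int main(int argc, char** argv)
{
    std::ifstream in(argv[1], std::ios::binary);

    // 5-byte header: rows (2 bytes), cols (2 bytes), bits per pixel (1 byte)
    unsigned char header[5];
    in.read(reinterpret_cast<char*>(header), 5);

    int rows = header[0] | (header[1] << 8);   // assumed little-endian
    int cols = header[2] | (header[3] << 8);
    // header[4] is the bit depth, 8 in all cases per the question

    // read the 8-bit pixel data that follows the header
    std::vector<unsigned char> data(rows * cols);
    in.read(reinterpret_cast<char*>(data.data()), data.size());

    cv::Mat img(rows, cols, CV_8U, data.data());   // assumes row-wise pixel order
    cv::imshow("img", img);
    cv::waitKey();
    return 0;
}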