Producing a CCITT-compressed TIFF from a CGImage - C++

I have a CGImage (Core Graphics, C/C++). It's grayscale. Well, originally it was B/W, but the CGImage may be RGB. That shouldn't matter. I want to create a CCITT Group 4 TIFF.
I can create an LZW TIFF (grayscale or color) by creating a destination with the correct dictionary and adding the image in. No problem.
However, there doesn't seem to be an equivalent kCGImagePropertyTIFFCompression value to represent CCITT-4. It should be 4, but that produces uncompressed output.
I have a manual CCITT compression routine, so if I can get the binary (1 bit per pixel) data, I'm set. But I can't seem to get 1 BPP data out of a CGImage. I have code that is supposed to put the CGImage into a CGBitmapContext and then give me the data, but it seems to be giving me all black.
I've asked a couple of questions today trying to get at this, but I just figured, let's ask the question I REALLY want answered and see if someone can answer it.
There's GOT to be a way to do this. I've got to be missing something dumb. What is it?

This seems to work and produce not-all-black output. There may be a way to do it that doesn't involve a manual conversion to grayscale first, but at least it works!
static void WriteCCITTTiffWithCGImage_URL_(CGImageRef im, CFURLRef url) {
    // produce grayscale image
    CGImageRef grayscaleImage;
    {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericGray);
        CGContextRef bitmapCtx = CGBitmapContextCreate(NULL, CGImageGetWidth(im), CGImageGetHeight(im), 8, 0, colorSpace, kCGImageAlphaNone);
        CGContextDrawImage(bitmapCtx, CGRectMake(0, 0, CGImageGetWidth(im), CGImageGetHeight(im)), im);
        grayscaleImage = CGBitmapContextCreateImage(bitmapCtx);
        CFRelease(bitmapCtx);
        CFRelease(colorSpace);
    }
    // generate options for ImageIO. Man this sucks in C.
    CFMutableDictionaryRef options = CFDictionaryCreateMutable(kCFAllocatorDefault, 2, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    {
        {
            CFMutableDictionaryRef tiffOptions = CFDictionaryCreateMutable(kCFAllocatorDefault, 1, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
            int fourInt = 4;
            CFNumberRef fourNumber = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &fourInt);
            CFDictionarySetValue(tiffOptions, kCGImagePropertyTIFFCompression, fourNumber);
            CFRelease(fourNumber);
            CFDictionarySetValue(options, kCGImagePropertyTIFFDictionary, tiffOptions);
            CFRelease(tiffOptions);
        }
        {
            int oneInt = 1;
            CFNumberRef oneNumber = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &oneInt);
            CFDictionarySetValue(options, kCGImagePropertyDepth, oneNumber);
            CFRelease(oneNumber);
        }
    }
    // write file
    CGImageDestinationRef idst = CGImageDestinationCreateWithURL(url, kUTTypeTIFF, 1, NULL);
    CGImageDestinationAddImage(idst, grayscaleImage, options);
    CGImageDestinationFinalize(idst);
    // clean up
    CFRelease(idst);
    CFRelease(options);
    CFRelease(grayscaleImage);
}
Nepheli:tmp ken$ tiffutil -info /tmp/output.tiff
Directory at 0x1200
Image Width: 842 Image Length: 562
Bits/Sample: 1
Sample Format: unsigned integer
Compression Scheme: CCITT Group 4 facsimile encoding
Photometric Interpretation: "min-is-black"
Orientation: row 0 top, col 0 lhs
Samples/Pixel: 1
Number of Strips: 1
Planar Configuration: Not planar

ImageMagick can convert from and to almost any image format. As it is open source, you can read the source code to find the answer to your question.
You can even use the ImageMagick API in your app if you use C++.
Edit:
If you can get the data from CGImage in any format (and it sounded like you can) you can use ImageMagick to convert it from whatever the format is that you get from CGImage to any other format supported by ImageMagick (your desired TIFF format).
Edit:
Technical Q&A QA1509, "Getting the pixel data from a CGImage object", states:
On Mac OS X 10.5 or later, a new call has been added that allows you to obtain the actual pixel data from a CGImage object. This call, CGDataProviderCopyData, returns a CFData object that contains the pixel data from the image in question.
Once you have the pixel data you can use ImageMagick to convert it.
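For reference, here is a minimal sketch of the QA1509 approach: pulling the raw pixel bytes out of a CGImage via its data provider. The helper name is mine, and interpreting the bytes still depends on the image's bits per pixel and bytes per row, which is why those accessors are queried alongside.
// Sketch only: copy the pixel data backing a CGImage (per QA1509).
// The caller owns the returned CFDataRef and must CFRelease it.
static CFDataRef CopyPixelDataOfCGImage(CGImageRef image) {
    CGDataProviderRef provider = CGImageGetDataProvider(image);
    CFDataRef pixelData = CGDataProviderCopyData(provider);
    // Layout information needed before interpreting the bytes:
    size_t bitsPerPixel = CGImageGetBitsPerPixel(image);
    size_t bytesPerRow  = CGImageGetBytesPerRow(image);
    size_t width        = CGImageGetWidth(image);
    size_t height       = CGImageGetHeight(image);
    (void)bitsPerPixel; (void)bytesPerRow; (void)width; (void)height;
    return pixelData;
}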

NSBitmapImageRep claims to be able to generate a CCITT FAX Group 4 compressed TIFF. So something like this might do the trick (untested):
CFDataRef tiffFaxG4DataForCGImage(CGImageRef cgImage) {
    NSBitmapImageRep *imageRep =
        [[[NSBitmapImageRep alloc] initWithCGImage:cgImage] autorelease];
    NSData *tiffData =
        [imageRep TIFFRepresentationUsingCompression:NSTIFFCompressionCCITTFAX4
                                              factor:0.0f];
    return (CFDataRef) tiffData;
}
This function should return the data you seek.

Related

uint8_t buffer to cv::Mat conversion results in distorted image

I have a MIPI camera that captures frames and stores them into the struct buffer that you can see below. Once the frame is stored, I want to convert it into a cv::Mat; the thing is that the Mat ends up looking like the first pic.
The var buf.index is just part of the V4L2 API, useful to understand which buffer I'm using.
//The structure where the data is stored
struct buffer {
    void *start;
    size_t length;
};
struct buffer *buffers;
//buffer->mat
cv::Mat im = cv::Mat(cv::Size(width, height), CV_8UC3, ((uint8_t*)buffers[buf.index].start));
At first I thought that the data might be corrupted, but storing the image with lodepng results in a nice image without any distortion.
unsigned char* out_buf = (unsigned char*)malloc(width * height * 3);
for (int pix = 0; pix < width*height; ++pix) {
    memcpy(out_buf + pix*3, ((uint8_t*)buffers[buf.index].start) + 4*pix + 1, 3);
}
lodepng_encode24_file(filename, out_buf, width, height);
I bet it's something really silly.
The picture you posted has oddly colored pixels, and the patterns look like there's more information than simply 24 bits per pixel.
After inspecting the data, it appears that V4L gives you four bytes per pixel, and the first byte is always 0xFF (let's call that X). Further, the channel order seems to be XRGB.
Create a cv::Mat using 8UC4 to contain the data.
To use the picture in OpenCV, you need BGR order. cv::split the received data into its four color planes, which are X, R, G, B. Use cv::merge to reassemble the B, G, R planes into a picture that OpenCV can handle, or reassemble into R, G, B to create a Mat for other purposes (that other library you seem to use).
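A minimal sketch of that split/merge step, assuming the frame really is tightly packed 4-byte XRGB; the function name and the width/height/pointer parameters are placeholders for whatever your capture code provides.
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: wrap the captured buffer as 4-channel XRGB and reorder to BGR.
cv::Mat xrgbToBGR(void* frameData, int width, int height) {
    cv::Mat xrgb(height, width, CV_8UC4, frameData);   // no copy, just a header over the buffer

    std::vector<cv::Mat> planes;                        // planes[0]=X, [1]=R, [2]=G, [3]=B
    cv::split(xrgb, planes);

    std::vector<cv::Mat> bgrPlanes = { planes[3], planes[2], planes[1] };
    cv::Mat bgr;
    cv::merge(bgrPlanes, bgr);
    return bgr;                                         // deep copy; safe after the buffer is requeued
}
It would be called as xrgbToBGR(buffers[buf.index].start, width, height) in place of the CV_8UC3 construction above.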

Overlaying/merging two (and more) YUV images in OpenCV

I investigated and stripped down my previous question (Is there a way to avoid conversion from YUV to BGR?). I want to overlay a few images (the format is YUV) on a resulting, bigger image (think of it as a canvas) and send it forward via a network library (OPAL) without converting it to BGR.
Here is the code:
Mat tYUV;
Mat tClonedYUV;
Mat tBGR;
Mat tMergedFrame;
int tMergedFrameWidth = 1000;
int tMergedFrameHeight = 800;
int tMergedFrameHalfWidth = tMergedFrameWidth / 2;
tYUV = Mat(tHeader->height * 1.5f, tHeader->width, CV_8UC1, OPAL_VIDEO_FRAME_DATA_PTR(tHeader));
tClonedYUV = tYUV.clone();
tMergedFrame = Mat(Size(tMergedFrameWidth, tMergedFrameHeight), tYUV.type(), cv::Scalar(0, 0, 0));
tYUV.copyTo(tMergedFrame(cv::Rect(0, 0, tYUV.cols > tMergedFrameWidth ? tMergedFrameWidth : tYUV.cols, tYUV.rows > tMergedFrameHeight ? tMergedFrameHeight : tYUV.rows)));
tClonedYUV.copyTo(tMergedFrame(cv::Rect(tMergedFrameHalfWidth, 0, tYUV.cols > tMergedFrameHalfWidth ? tMergedFrameHalfWidth : tYUV.cols, tYUV.rows > tMergedFrameHeight ? tMergedFrameHeight : tYUV.rows)));
namedWindow("merged frame", 1);
imshow("merged frame", tMergedFrame);
waitKey(10);
The result of the above code looks like this:
I guess the image is not correctly interpreted, so the pictures stay black/white (the Y component) and below them we can see the U and V components. There are images which describe the problem well (http://en.wikipedia.org/wiki/YUV):
and: http://upload.wikimedia.org/wikipedia/en/0/0d/Yuv420.svg
Is there a way for these values to be correctly read? I guess I should not copy the whole images (their Y, U, V components) straight to the calculated positions. The U and V components should be below them and in the proper order, am I right?
First, there are several YUV formats, so you need to be clear about which one you are using.
According to your image, it seems your YUV format is Y'UV420p.
Regardless, it is a lot simpler to convert to BGR, work there, and then convert back.
If that is not an option, you pretty much have to manage the ROIs yourself. YUV is commonly a planar format where the channels are not (completely) multiplexed - and some are of different sizes and depths. If you do not use the internal color conversions, then you will have to know the exact YUV format and manage the pixel-copying ROIs yourself.
With a YUV image, the CV_8UC* format specifier does not mean much beyond the actual memory requirements. It certainly does not specify the pixel/channel muxing.
For example, if you wanted to only use the Y component, then the Y is often the first plane in the image so the first "half" of whole image can just be treated as a monochrome 8UC1 image. In this case using ROIs is easy.
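As an illustration of managing those ROIs by hand, here is a minimal sketch for I420 (Y'UV420p), where the Y, U and V planes sit one after another in memory. The function name is mine, and it assumes the source frame fits inside the canvas and that all sizes and offsets are even.
#include <cstdint>
#include <opencv2/opencv.hpp>

// Sketch: copy one I420 frame (srcW x srcH) into an I420 canvas (dstW x dstH)
// at position (x, y). All of srcW, srcH, dstW, dstH, x, y must be even.
void blitI420(const uint8_t* src, int srcW, int srcH,
              uint8_t* dst, int dstW, int dstH, int x, int y) {
    // Per-plane Mat headers over the contiguous I420 layout: Y, then U, then V.
    cv::Mat srcY(srcH,     srcW,     CV_8UC1, const_cast<uint8_t*>(src));
    cv::Mat srcU(srcH / 2, srcW / 2, CV_8UC1, const_cast<uint8_t*>(src + srcW * srcH));
    cv::Mat srcV(srcH / 2, srcW / 2, CV_8UC1, const_cast<uint8_t*>(src + srcW * srcH + srcW * srcH / 4));

    cv::Mat dstY(dstH,     dstW,     CV_8UC1, dst);
    cv::Mat dstU(dstH / 2, dstW / 2, CV_8UC1, dst + dstW * dstH);
    cv::Mat dstV(dstH / 2, dstW / 2, CV_8UC1, dst + dstW * dstH + dstW * dstH / 4);

    // Copy each plane into its own ROI; the chroma planes use halved coordinates.
    srcY.copyTo(dstY(cv::Rect(x,     y,     srcW,     srcH)));
    srcU.copyTo(dstU(cv::Rect(x / 2, y / 2, srcW / 2, srcH / 2)));
    srcV.copyTo(dstV(cv::Rect(x / 2, y / 2, srcW / 2, srcH / 2)));
}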

ISampleGrabber::BufferCB to IplImage; display in OpenCV shows garbled image - C++

I'm using DirectShow to access a video stream, and then using the SampleGrabber filter and interface to get samples from each frame for further image processing. I'm using a callback, so it gets called after each new frame. I've basically just worked from the PlayCap sample application and added a Sample Grabber filter to the graph.
The problem I'm having is that I'm trying to display the grabbed samples on a different OpenCV window. However, when I try to cast the information in the buffer to an IplImage, I get a garbled mess of pixels. The code for the BufferCB call is below, sans any proper error handling:
STDMETHODIMP BufferCB(double Time, BYTE *pBuffer, long BufferLen)
{
    AM_MEDIA_TYPE type;
    g_pGrabber->GetConnectedMediaType(&type);
    VIDEOINFOHEADER *pVih = (VIDEOINFOHEADER *)type.pbFormat;
    BITMAPINFO* bmi = (BITMAPINFO *)&pVih->bmiHeader;
    BITMAPINFOHEADER* bmih = &(bmi->bmiHeader);
    int channels = bmih->biBitCount / 8;
    bmih->biPlanes = 1;
    bmih->biBitCount = 24;
    bmih->biCompression = BI_RGB;
    IplImage *Image = cvCreateImage(cvSize(bmih->biWidth, bmih->biHeight), IPL_DEPTH_8U, channels);
    Image->imageSize = BufferLen;
    CopyMemory(Image->imageData, pBuffer, BufferLen);
    cvFlip(Image);
    //openCV Mat creation
    Mat cvMat = Mat(Image, true);
    imshow("Display window", cvMat); // Show our image inside it.
    waitKey(2);
    return S_OK;
}
My question is, am I doing something wrong here that will make the image displayed look like this:
Am I missing header information or something?
The quoted code is only part of the solution. Here you create an image object of a certain width/height with 8-bit pixel data and an unknown channel/component count. Then you copy data from another buffer of unknown format.
The only chance for this to work well is that all the unknowns happen to match without any effort on your part. So you basically need to start by checking exactly what media type is on the Sample Grabber's input pin. Then, if it is not what you wanted, you have to update your code accordingly. It may also matter what the downstream connection of the Sample Grabber is, and in particular whether it is connected to a video renderer.
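One common way to take the guesswork out (a sketch, not a verified drop-in) is to force the Sample Grabber to a known format such as 24-bit RGB while building the graph, then query the negotiated dimensions once; g_pGrabber is the ISampleGrabber pointer from the question.
// Sketch: pin the Sample Grabber to RGB24 so BufferCB always sees a known
// layout (24 bpp, bottom-up rows). Do this BEFORE connecting the filters.
AM_MEDIA_TYPE mt;
ZeroMemory(&mt, sizeof(mt));
mt.majortype  = MEDIATYPE_Video;
mt.subtype    = MEDIASUBTYPE_RGB24;
mt.formattype = FORMAT_VideoInfo;
HRESULT hr = g_pGrabber->SetMediaType(&mt);

// ... build and connect the graph as in PlayCap ...

// After connecting, read the negotiated size once instead of per callback:
AM_MEDIA_TYPE connected;
hr = g_pGrabber->GetConnectedMediaType(&connected);
VIDEOINFOHEADER* pVih = (VIDEOINFOHEADER*)connected.pbFormat;
int width  = pVih->bmiHeader.biWidth;    // feed these to cvCreateImage in BufferCB
int height = pVih->bmiHeader.biHeight;   // bottom-up DIB, hence the cvFlip
CoTaskMemFree(connected.pbFormat);       // minimal cleanup (FreeMediaType does the full job)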

Viewing 8 bit RAW image file in openCV

I have a raw file which contains a 5-byte header: the number of rows and the number of columns in two bytes each, and a 5th byte containing the number of bits per pixel in the image, which is 8 in all cases. The image data follows after that.
Since I am new to OpenCV, I want to ask how to view this RAW image file as a grayscale image using C++.
I know how to read binary data in C++ and have stored the image as a 2-D unsigned char array (since each pixel is 8 bits).
Can anyone please tell me how to view this data as an image using OpenCV?
I am using the below code, but getting a completely weird image:
void openRaw() {
    cv::Mat img(numRows, numCols, CV_8U, &(image[0][0]));
    //img.t();
    cv::imshow("img", img);
    cv::waitKey();
}
Any help will be greatly appreciated.
Thanks,
Rohit
You have to convert it to an IplImage.
If you want to see it as a pure grey-scale image, its actually rather easy.
Example code I use in one application:
CvSize mSize;
mSize.height = 960;
mSize.width = 1280;
IplImage* image1 = cvCreateImage(mSize, 8, 1);
memcpy( image1->imageData, rawDataPointer, sizeOfImage);
cvNamedWindow( "corners1", 1 );
cvShowImage( "corners1", image1 );
At that point you have a valid IplImage, which you can then display (the last two lines of code display it).
If the image is Bayer-tiled, you will have to convert it to RGB.
In C++ notation:
cv::Mat img(rows, cols, CV_8U, ptrToDat);
cv::imshow("img", img);
cv::waitKey();
Note: the data should be stored row-wise; if it is stored column-wise, use:
cv::Mat img(cols, rows, CV_8U, ptrToDat);
img = img.t();
cv::imshow("img", img);
cv::waitKey();
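For completeness, here is a sketch that reads the 5-byte header described in the question (rows and columns in two bytes each, then one byte of bit depth) and displays the payload. The field order, the little-endian byte order, and the file name are assumptions you may need to adjust.
#include <cstdint>
#include <fstream>
#include <vector>
#include <opencv2/opencv.hpp>

// Sketch: read a raw file with a 5-byte header (2 bytes rows, 2 bytes cols,
// 1 byte bits-per-pixel) and show the 8-bit payload as a grayscale image.
int main() {
    std::ifstream in("image.raw", std::ios::binary);          // hypothetical file name
    uint8_t header[5];
    in.read(reinterpret_cast<char*>(header), 5);

    const int rows = header[0] | (header[1] << 8);            // little-endian assumption
    const int cols = header[2] | (header[3] << 8);
    const int bpp  = header[4];
    if (bpp != 8) return 1;                                   // only 8-bit data is handled here

    std::vector<uint8_t> pixels(static_cast<size_t>(rows) * cols);
    in.read(reinterpret_cast<char*>(pixels.data()), static_cast<std::streamsize>(pixels.size()));

    cv::Mat img(rows, cols, CV_8UC1, pixels.data());          // data assumed row-wise
    cv::imshow("img", img);
    cv::waitKey();
    return 0;
}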

CImg: How to save a grayscale?

When I use CImg to load a .BMP, how can I know whether it is a gray-scale or color image?
I have tried as follows, but failed:
cimg_library::CImg<unsigned char> img("lena_gray.bmp");
const int spectrum = img.spectrum();
img.save("lenaNew.bmp");
As it turns out, no matter what kind of .BMP I load, spectrum is always 3. As a result, when I load a grayscale image and save it, the result is three times bigger than it should be.
I just want to save the image the same way it was loaded. How do I save it as grayscale?
I guess the BMP format always stores images as RGB-coded data, so reading a BMP will always result in a color image.
If you know your image is scalar, all channels will be the same, so you can discard two of them (here keeping the first one).
img.channel(0);
If you want to check that it is a scalar image, you can test the equality between channels, as follows:
const CImg<unsigned char> R = img.get_shared_channel(0),
                          G = img.get_shared_channel(1),
                          B = img.get_shared_channel(2);
if (R==G && R==B) {
    // Your image is scalar!
} else {
    // Your image is in color.
}
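Putting the two pieces together, a small sketch of the load / check / reduce-to-one-channel / save round trip; the file names are placeholders, and whether the saved file actually stays single-channel depends on the output format (BMP may be re-expanded to RGB as noted above, whereas a format such as PGM stays grayscale).
#include "CImg.h"
using namespace cimg_library;

int main() {
    CImg<unsigned char> img("lena_gray.bmp");             // BMP loads with spectrum() == 3

    // If all three channels are identical, keep only the first one.
    const CImg<unsigned char> R = img.get_shared_channel(0),
                              G = img.get_shared_channel(1),
                              B = img.get_shared_channel(2);
    if (R == G && R == B) img.channel(0);

    img.save("lenaNew.pgm");                               // single-channel output format
    return 0;
}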