I'm using the Leptonica library to process some pictures. After that I want to show them in my Qt GUI. Leptonica uses its own image format, Pix, while Qt uses its own format, QPixmap. At the moment the only way I have is to save the processed pictures to a file (e.g. BMP) and then load them again with a Qt function call. I would like to convert them directly in my code so I don't need the detour of saving them to the filesystem. Any ideas how to do this?
Best Regards
// edit:
Okay, as already suggested, I tried to convert the PIX* to a QImage.
The PIX* is defined like this:
http://tpgit.github.com/Leptonica/pix_8h_source.html
struct Pix
{
    l_uint32            w;         /* width in pixels */
    l_uint32            h;         /* height in pixels */
    l_uint32            d;         /* depth in bits */
    l_uint32            wpl;       /* 32-bit words/line */
    l_uint32            refcount;  /* reference count (1 if no clones) */
    l_int32             xres;      /* image res (ppi) in x direction */
                                   /* (use 0 if unknown) */
    l_int32             yres;      /* image res (ppi) in y direction */
                                   /* (use 0 if unknown) */
    l_int32             informat;  /* input file format, IFF_* */
    char               *text;      /* text string associated with pix */
    struct PixColormap *colormap;  /* colormap (may be null) */
    l_uint32           *data;      /* the image data */
};
while QImage offers a constructor like this:
http://developer.qt.nokia.com/doc/qt-4.8/qimage.html#QImage-7
QImage(const uchar * data,
       int width,
       int height,
       int bytesPerLine,
       Format format)
I assume I can't just copy the data from the PIX to the QImage when calling the constructor. I guess I need to fill the QImage pixel by pixel, but I don't know how. Do I need to loop through all the coordinates? How do I account for the bit depth? Any ideas here?
I use this for converting a QImage to a PIX:
PIX* TessTools::qImage2PIX(QImage& qImage) {
    PIX *pixs;
    l_uint32 *lines;

    qImage = qImage.rgbSwapped();
    int width = qImage.width();
    int height = qImage.height();
    int depth = qImage.depth();
    int wpl = qImage.bytesPerLine() / 4;

    pixs = pixCreate(width, height, depth);
    pixSetWpl(pixs, wpl);
    pixSetColormap(pixs, NULL);
    l_uint32 *datas = pixs->data;

    for (int y = 0; y < height; y++) {
        lines = datas + y * wpl;
        QByteArray a((const char*)qImage.scanLine(y), qImage.bytesPerLine());
        for (int j = 0; j < a.size(); j++) {
            *((l_uint8 *)lines + j) = a[j];
        }
    }
    return pixEndianByteSwapNew(pixs);
}
And this for converting a PIX to a QImage:
QImage TessTools::PIX2QImage(PIX *pixImage) {
    int width = pixGetWidth(pixImage);
    int height = pixGetHeight(pixImage);
    int depth = pixGetDepth(pixImage);
    int bytesPerLine = pixGetWpl(pixImage) * 4;
    l_uint32 *s_data = pixGetData(pixEndianByteSwapNew(pixImage));

    QImage::Format format;
    if (depth == 1)
        format = QImage::Format_Mono;
    else if (depth == 8)
        format = QImage::Format_Indexed8;
    else
        format = QImage::Format_RGB32;

    QImage result((uchar*)s_data, width, height, bytesPerLine, format);

    // Handle palette
    QVector<QRgb> _bwCT;
    _bwCT.append(qRgb(255, 255, 255));
    _bwCT.append(qRgb(0, 0, 0));

    // Build exactly 256 gray entries (pre-sizing the vector and then
    // appending would produce 512 entries, the first 256 of them black).
    QVector<QRgb> _grayscaleCT;
    _grayscaleCT.reserve(256);
    for (int i = 0; i < 256; i++) {
        _grayscaleCT.append(qRgb(i, i, i));
    }

    if (depth == 1) {
        result.setColorTable(_bwCT);
    } else if (depth == 8) {
        result.setColorTable(_grayscaleCT);
    } else {
        result.setColorTable(_grayscaleCT);
    }

    if (result.isNull()) {
        static QImage none(0, 0, QImage::Format_Invalid);
        qDebug() << "***Invalid format!!!";
        return none;
    }

    return result.rgbSwapped();
}
This code accepts a const QImage& parameter.
static PIX* makePIXFromQImage(const QImage &image)
{
    QByteArray ba;
    QBuffer buf(&ba);
    buf.open(QIODevice::WriteOnly);
    image.save(&buf, "BMP");
    return pixReadMemBmp((const l_uint8 *)ba.constData(), ba.size());
}
I do not know the Leptonica library, but I had a short look at its documentation and found the description of the PIX structure. You can create a QImage from the raw data and convert it to a QPixmap with convertFromImage.
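A minimal sketch of that idea, assuming a 32-bpp PIX and ignoring the endian/byte-order handling that the fuller conversion functions above take care of (pixFromPix32 is just a hypothetical helper name):

#include <QImage>
#include <QPixmap>
#include "allheaders.h"   // Leptonica

// Sketch only: wraps the raw PIX data in a QImage, copies it,
// then converts to a QPixmap. Assumes depth == 32; 1- and 8-bit
// images would also need a color table, as shown above.
QPixmap pixFromPix32(PIX *pix)
{
    int width        = pixGetWidth(pix);
    int height       = pixGetHeight(pix);
    int bytesPerLine = pixGetWpl(pix) * 4;
    const uchar *data = reinterpret_cast<const uchar*>(pixGetData(pix));

    QImage img(data, width, height, bytesPerLine, QImage::Format_RGB32);
    return QPixmap::fromImage(img.copy()); // copy() detaches from the PIX memory
}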
Well I could solve the problem this way:
Leptonica offers a function
l_int32 pixWriteMemBmp (l_uint8 **pdata, size_t *psize, PIX *pix)
With this function you can write into memory instead of a file stream. The BMP header and format are preserved in this example (the same kind of function exists for other image formats too).
The corresponding function from Qt is this one:
bool QImage::loadFromData ( const uchar * data, int len, const char * format = 0 )
Since the header persists, I just need to pass the data pointer and the size to loadFromData and Qt does the rest.
So altogether it would be like this:
PIX *m_pix;
FILE *pFile = fopen("PathToFile", "r");
m_pix = pixReadStreamBmp(pFile); // if it's another file format, use the corresponding read function
fclose(pFile);
// Now we have a Pix object from Leptonica

l_uint8 *ptr_memory;
size_t len;
pixWriteMemBmp(&ptr_memory, &len, m_pix);
// Now we have the picture somewhere in memory

QImage testimage;
QPixmap pixmap;
testimage.loadFromData((uchar *)ptr_memory, len);
pixmap.convertFromImage(testimage);
// Now we have the image as a pixmap in Qt
This actually works for me, though I don't know if there is a way to do it backwards as easily. (If there is, please let me know.)
Best Regards
You can save your pixmap to RAM instead of file (use QByteArray to store the data, and QBuffer as your I/O device).
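For example, a sketch of that suggestion, assuming image is the QImage (or a QPixmap converted via toImage()) you want to hand back to Leptonica through an in-memory BMP:

// QImage -> BMP bytes in RAM -> PIX, without touching the filesystem.
QByteArray ba;
QBuffer buffer(&ba);
buffer.open(QIODevice::WriteOnly);
image.save(&buffer, "BMP");   // encode the QImage into the QByteArray

PIX *pix = pixReadMemBmp(reinterpret_cast<const l_uint8*>(ba.constData()), ba.size());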
I'm new to image processing. I was able to use the Boost Generic Image Library (Boost::GIL) to convert between common formats such as Bitmaps, JPEG, PNG, and TIFF. Now, I want to use the openjpeg library to convert any common format to jpeg2000.
Below is my image wrapper class. The boost::gil::rgb8_image_t variable contains image information such as width, height, number of channels, pixels, etc.
class image_wrapper {
private:
    boost::gil::rgb8_image_t _img;

public:
    image_wrapper() = default;

    enum class image_type : uint8_t { JPEG = 1, BMP = 2, TIFF = 3, PNG = 4, JPEG2000 = 5 };

    void read_in_image(const std::string& filename);
    void read_in_image(std::vector<uint8_t>& bytes);
    void write_out_image(const std::string& file_name, image_type img_type);
};
I want to use the unencoded image data (pixel map) from the boost::gil::rgb8_image_t variable as the intermediate format to convert any common format to jpeg2000. The pixel map is stored in a 1D uint8_t vector. I want to store that vector into an openjpeg (opj_image_t) object.
Looking at the openjpeg source code, there is a function that converts an array of bitmap data to an opj_image_t object. How could I do the same thing to convert a boost::gil::rgb8_image_t to an opj_image_t?
Here is the code from the openjpeg library:
static void bmp24toimage(const OPJ_UINT8* pData, OPJ_UINT32 stride, opj_image_t* image)
{
    int index;
    OPJ_UINT32 width, height;
    OPJ_UINT32 x, y;
    const OPJ_UINT8* pSrc = NULL;

    width = image->comps[0].w;
    height = image->comps[0].h;

    index = 0;
    pSrc = pData + (height - 1U) * stride;
    for (y = 0; y < height; y++) {
        for (x = 0; x < width; x++) {
            image->comps[0].data[index] = (OPJ_INT32)pSrc[3 * x + 2]; /* R */
            image->comps[1].data[index] = (OPJ_INT32)pSrc[3 * x + 1]; /* G */
            image->comps[2].data[index] = (OPJ_INT32)pSrc[3 * x + 0]; /* B */
            index++;
        }
        pSrc -= stride;
    }
}
Link to the openjpeg file that contains the code: https://github.com/uclouvain/openjpeg/blob/master/src/bin/jp2/convertbmp.c
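Untested, but adapting the same per-pixel loop to read from Boost.GIL instead of raw BMP rows might look roughly like this. It assumes the opj_image_t has already been created (e.g. via opj_image_create) with three 8-bit components of the same width and height, and fill_opj_from_gil is just a hypothetical helper name:

#include <cstddef>
#include <boost/gil.hpp>   // <boost/gil/gil_all.hpp> on older Boost
#include <openjpeg.h>

// Hypothetical helper: copies pixels from a GIL RGB8 image into an
// already-allocated opj_image_t with three 8-bit components.
static void fill_opj_from_gil(const boost::gil::rgb8_image_t& src, opj_image_t* image)
{
    auto v = boost::gil::const_view(src);
    int index = 0;
    for (std::ptrdiff_t y = 0; y < v.height(); ++y) {
        for (std::ptrdiff_t x = 0; x < v.width(); ++x) {
            boost::gil::rgb8_pixel_t p = v(x, y);
            image->comps[0].data[index] = (OPJ_INT32)p[0]; /* R */
            image->comps[1].data[index] = (OPJ_INT32)p[1]; /* G */
            image->comps[2].data[index] = (OPJ_INT32)p[2]; /* B */
            index++;
        }
    }
}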
I am trying to extract frames from a stream which I create with GStreamer and to save them with FreeImage or QImage (the latter is only for testing).
GstMapInfo bufferInfo;
GstBuffer *sampleBuffer;
GstStructure *capsStruct;
GstSample *sample;
GstCaps *caps;
int width, height;
const int BitsPP = 32;

/* Retrieve the buffer */
g_signal_emit_by_name (sink, "pull-sample", &sample);
if (sample) {
    sampleBuffer = gst_sample_get_buffer(sample);
    gst_buffer_map(sampleBuffer, &bufferInfo, GST_MAP_READ);

    if (!bufferInfo.data) {
        g_printerr("Warning: could not map GStreamer buffer!\n");
        throw;
    }

    caps = gst_sample_get_caps(sample);
    capsStruct = gst_caps_get_structure(caps, 0);
    gst_structure_get_int(capsStruct, "width", &width);
    gst_structure_get_int(capsStruct, "height", &height);

    auto bitmap = FreeImage_Allocate(width, height, BitsPP, 0, 0, 0);
    memcpy(FreeImage_GetBits(bitmap), bufferInfo.data, width * height * (BitsPP / 8));
    // int pitch = ((((BitsPP * width) + 31) / 32) * 4);
    // auto bitmap = FreeImage_ConvertFromRawBits(bufferInfo.data, width, height, pitch, BitsPP, 0, 0, 0);

    FreeImage_FlipHorizontal(bitmap);
    bitmap = FreeImage_RotateClassic(bitmap, 180);

    static int id = 0;
    std::string name = "/home/stadmin/pic/sample" + std::to_string(id++) + ".png";

#ifdef FREE_SAVE
    FreeImage_Save(FIF_PNG, bitmap, name.c_str());
#endif
#ifdef QT_SAVE
    // Format_ARGB32
    QImage image(bufferInfo.data, width, height, QImage::Format_ARGB32);
    image.save(QString::fromStdString(name));
#endif

    fibPipeline.push(bitmap);
    gst_sample_unref(sample);
    gst_buffer_unmap(sampleBuffer, &bufferInfo);

    return GST_FLOW_OK;
The color output from FreeImage is totally wrong, just like when I use Qt with Format_ARGB32 (greens come out blue, blues come out orange, etc.), but when I test with Qt's Format_RGBA8888 I get correct output. I need to use FreeImage, and I would like to learn how to correct this.
Since you say Qt succeeds using Format_RGBA8888, I can only guess: the gstreamer frame has bytes in RGBA order while FreeImage expects ARGB.
Quick fix:
// have a buffer of the same length as the incoming bytes
size_t length = width * height * (BitsPP / 8);
BYTE *bytes = (BYTE *) malloc(length);

// copy the incoming bytes into it, in the right order:
int index = 0;
while (index < length)
{
    bytes[index]     = bufferInfo.data[index + 2]; // B
    bytes[index + 1] = bufferInfo.data[index + 1]; // G
    bytes[index + 2] = bufferInfo.data[index];     // R
    bytes[index + 3] = bufferInfo.data[index + 3]; // A
    index += 4;
}

// fill the bitmap using the buffer
auto bitmap = FreeImage_Allocate(width, height, BitsPP, 0, 0, 0);
memcpy(FreeImage_GetBits(bitmap), bytes, length);

// don't forget to
free(bytes);
To circumvent some (a lot of) problems with the ActionScript Camera API on Windows 8 systems,
I decided to create a native extension to deal with the camera.
Right now the camera part and all the glue to communicate with the AIR runtime are working, so clicking a button in AIR opens a new Windows window that returns a System::Drawing::Bitmap.
My task now is to
a) create a FREBitmapData object, and
b) fill in the BitmapData from the Windows Bitmap.
Should be easy, I thought, many days ago... As I'm not really familiar with C++, I didn't get this to work at all.
Here's what I tried so far:
bmp = form1->bitmap; // bmp is a handle to the System::Drawing::Bitmap returned from the external window

// Lock the bitmap's bits.
Rectangle rect = Rectangle(0, 0, bmp->Width, bmp->Height);
System::Drawing::Imaging::BitmapData^ bmpData = bmp->LockBits(rect, System::Drawing::Imaging::ImageLockMode::ReadWrite, bmp->PixelFormat);

// Get the address of the first line.
IntPtr ptr = bmpData->Scan0;

// Declare an array to hold the bytes of the bitmap.
// This code is specific to a bitmap with 24 bits per pixel.
int inputLength = Math::Abs(bmpData->Stride) * bmp->Height;
array<Byte>^ input = gcnew array<Byte>(inputLength);

// Copy the RGB values into the array.
System::Runtime::InteropServices::Marshal::Copy(ptr, input, 0, inputLength);

// Unlock the bits.
bmp->UnlockBits(bmpData);

// Create a FREByteArray to hold the data.
// Don't know if this is necessary.
FREObject* outputObject;
FREByteArray* outputBytes = new FREByteArray;
outputBytes->length = inputLength;
outputBytes->bytes = (uint8_t *) &input;
FREAcquireByteArray(outputObject, outputBytes);

// now copy it over
memcpy(outputBytes->bytes, &input, inputLength);
FREReleaseByteArray(outputObject);

// we create a new instance of BitmapData here,
// as we cannot simply pass it over in the args,
// because we don't know its correct size at extension creation
FREObject* width;
FRENewObjectFromUint32(bmp->Width, width);
FREObject* height;
FRENewObjectFromUint32(bmp->Height, height);
FREObject* transparent;
FRENewObjectFromBool(uint32_t(0), transparent);
FREObject* fillColor;
FRENewObjectFromUint32(uint32_t(0xFFFFFF), fillColor);
FREObject obs[4] = { width, height, transparent, fillColor };

// we create some ActionScript instances here that we want to send back
FREObject* asBmpObj;
FRENewObject("BitmapData", 4, obs, asBmpObj, NULL);

// Now we have our AS bitmap data, copy bytes over
FREBitmapData* asData;
FREAcquireBitmapData(asBmpObj, asData);

// Now what? asData->bits32 won't accept array<Bytes> or any other value I've tried.

return asBmpObj;
The basic idea was:
a) find out the size and bit depth of the original Windows bitmap (the size is determined by the cam resolution picked in the camera window)
b) write its bytes to an array, converting to 32 bits as necessary (still missing any idea how)
c) create an AS bitmap of the same size; the bit depth must always be 32
d) copy the array over to the AS bitmap.
But I just can't achieve this.
Any advice? Thank you!
I don't think the following straight copy will work:
// Copy the RGB values into the array.
System::Runtime::InteropServices::Marshal::Copy(ptr, input, 0, inputLength);
You have to convert pixel by pixel. I don't know how to convert it to FREBitmapData, but here are examples you can follow on MSDN.
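The 24-to-32-bit part of that per-pixel conversion could look roughly like this. This is only a sketch: the FRE side still has to be acquired with FREAcquireBitmapData as in the answer below, and srcRow/dstRow are assumed to point at one locked GDI+ row and one FREBitmapData row respectively.

#include <cstdint>

// Sketch: expand one 24-bpp BGR row into a 32-bpp BGRA row.
static void expandRow24to32(const uint8_t* srcRow, uint8_t* dstRow, int width)
{
    for (int x = 0; x < width; ++x)
    {
        dstRow[x * 4]     = srcRow[x * 3];     // B
        dstRow[x * 4 + 1] = srcRow[x * 3 + 1]; // G
        dstRow[x * 4 + 2] = srcRow[x * 3 + 2]; // R
        dstRow[x * 4 + 3] = 0xFF;              // A (fully opaque)
    }
}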
I finally figured it out.
The code below doesn't deal with the 24-to-32-bit conversion, but it works quite well in my application, so I thought I might share it:
FREObject launch(FREContext ctx, void* funcData, uint32_t argc, FREObject argv[])
{
    System::Drawing::Bitmap^ windowsBitmap;
    SKILLCamControl::CamControlForm^ form1;
    form1 = gcnew SKILLCamControl::CamControlForm();
    DialogResult dr;

    // Show testDialog as a modal dialog and determine if DialogResult = OK.
    dr = form1->ShowDialog();
    if (dr == DialogResult::OK) {
        windowsBitmap = form1->bitmap;
        int bmpW = windowsBitmap->Width;
        int bmpH = windowsBitmap->Height;

        // we create a new instance of BitmapData here,
        // as we cannot simply pass it over in the args,
        // because we don't know its correct size at extension creation
        FREObject width;
        FRENewObjectFromUint32(uint32_t(bmpW), &width);
        FREObject height;
        FRENewObjectFromUint32(uint32_t(bmpH), &height);
        FREObject transparent;
        FRENewObjectFromBool(uint32_t(0), &transparent);
        FREObject fillColor;
        FRENewObjectFromUint32(uint32_t(0xFF0000), &fillColor);
        FREObject obs[4] = { width, height, transparent, fillColor };

        FREObject freBitmap;
        FRENewObject((uint8_t *)"flash.display.BitmapData", 4, obs, &freBitmap, NULL);

        FREBitmapData2 freBitmapData;
        FREAcquireBitmapData2(freBitmap, &freBitmapData);

        // is inverted?
        if (&freBitmapData.isInvertedY != (uint32_t*)(0)) windowsBitmap->RotateFlip(RotateFlipType::RotateNoneFlipY);

        int pixelSize = 4;

        //Rect rect( 0, 0, freBitmap.width, freBitmap.height );
        System::Drawing::Rectangle rect(0, 0, bmpW, bmpH);
        BitmapData^ windowsBitmapData = windowsBitmap->LockBits(rect, ImageLockMode::ReadOnly, PixelFormat::Format32bppArgb);

        for (int y = 0; y < bmpH; y++)
        {
            // get pixels from each bitmap
            byte* oRow = (byte*)windowsBitmapData->Scan0.ToInt32() + (y * windowsBitmapData->Stride);
            byte* nRow = (byte*)freBitmapData.bits32 + (y * freBitmapData.lineStride32 * 4);

            for (int x = 0; x < bmpW; x++)
            {
                // set pixels
                nRow[x * pixelSize]     = oRow[x * pixelSize];     // B
                nRow[x * pixelSize + 1] = oRow[x * pixelSize + 1]; // G
                nRow[x * pixelSize + 2] = oRow[x * pixelSize + 2]; // R
            }
        }

        // Free resources
        FREReleaseBitmapData(freBitmap);
        FREInvalidateBitmapDataRect(freBitmap, 0, 0, bmpW, bmpH);
        windowsBitmap->UnlockBits(windowsBitmapData);
        delete windowsBitmapData;
        delete windowsBitmap;

        return freBitmap;
    }
    else if (dr == DialogResult::Cancel)
    {
        return NULL;
    }
    return NULL;
}
I don't use C++ myself, so this is not a full answer, just something to consider...
Bitmap data is universal raw pixel data; it should be passable between different pieces of software, unless you are actually creating .BMP files with a header etc.?
"...that will return a System::Drawing::Bitmap" — does this mean you have the bitmap's data held by C++ (as raw uncompressed RGBA pixels)? If so, either put that inside a ByteArray and send it to AS3, or, if you can get that bitmap copied to the Windows clipboard, use AS3 to read the clipboard into a new AS3 Bitmap.
These might help you:
AS3: Copy image from clipboard
AS3: Serialize Bitmaps — scroll down to the section "ByteArray to BitmapData" (for this to work you must first store the C++ bitmap bytes as a file; call it whatever you want, e.g. tempIMG.dat or myPIc.bin, since the file extension does not really matter; you just need a loadable URL).
I tried to convert a DICOM image read with a GDCM image reader that has photometric interpretation 'MONOCHROME2' and pixel format unsigned int 16 (uint16). I tried the following code on it, but it is not giving the required image. Please help.
QVector<QRgb> table(2);
for (int c = 0; c < 256; c++)
{
    table.append(qRgb(c, c, c));
}

std::cout << "this is the format UINT16" << std::endl;

int size = dimX * dimY * 2; // length of data in buffer, in bytes
quint8 * output = reinterpret_cast<quint8*>(buffer);
const quint16 * input = reinterpret_cast<const quint16*>(buffer);
do {
    *output++ = (*input) >> 8;
} while (size -= 2);

imageQt = new QImage(output, dimX, dimY, QImage::Format_Indexed8);
imageQt->setColorTable(table);
regards
I think I see your problem. You are writing the data to output and incrementing the pointer to output as you go along.
You then create the QImage pointing to the end of the bitmap.
You need to do the following:
imageQt = new QImage( reinterpret_cast< uchar* >( buffer ), dimX, dimY, QImage::Format_Indexed8);
Edit: Also you don't advance the input pointer.
You need to change your inner loop to the following:
*output++ = (*input++) >> 8;
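Putting both fixes together, the conversion might look like this (a sketch that assumes buffer, dimX, dimY and table are set up as in the question):

int size = dimX * dimY * 2;   // length of data in buffer, in bytes
quint8 * output = reinterpret_cast<quint8*>(buffer);
const quint16 * input = reinterpret_cast<const quint16*>(buffer);

do {
    *output++ = (*input++) >> 8;   // advance both pointers
} while (size -= 2);

// Point the QImage at the start of the converted 8-bit data, not the end.
imageQt = new QImage(reinterpret_cast<uchar*>(buffer), dimX, dimY, QImage::Format_Indexed8);
imageQt->setColorTable(table);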
I need to create a CImage from a byte array (actually, it's an array of unsigned char, but I can cast to whatever form is necessary). The byte array is in the form "RGBRGBRGB...". The new image needs to contain a copy of the image bytes, rather than using the memory of the byte array itself.
I have tried many different ways of achieving this -- including going through various HBITMAP creation functions, trying to use BitBlt -- and nothing so far has worked.
To test whether the function works, it should pass this test:
BYTE* imgBits;
int width;
int height;
int Bpp; // BYTES per pixel (e.g. 3)
getImage(&imgBits, &width, &height, &Bpp); // get the image bits

// This is the magic function I need!!!
CImage img = createCImage(imgBits, width, height, Bpp);

// Test the image
BYTE* data = img.GetBits(); // data should now have the same data as imgBits
All implementations of createCImage() so far have ended up with data pointing to an empty (zero filled) array.
CImage supports DIBs quite neatly and has a SetPixel() method so you could presumably do something like this (uncompiled, untested code ahead!):
CImage img;
img.Create(width, height, 24 /* bpp */, 0 /* No alpha channel */);

int nPixel = 0;
for (int row = 0; row < height; row++)
{
    for (int col = 0; col < width; col++)
    {
        BYTE r = imgBits[nPixel++];
        BYTE g = imgBits[nPixel++];
        BYTE b = imgBits[nPixel++];
        img.SetPixel(col, row, RGB(r, g, b)); // SetPixel takes (x, y), i.e. (col, row)
    }
}
Maybe not the most efficient method but I should think it is the simplest approach.
Use memcpy to copy the data, then SetDIBits or SetDIBitsToDevice depending on what you need to do. Take care though, the scanlines of the raw image data are aligned on 4-byte boundaries (IIRC, it's been a few years since I did this) so the data you get back from GetDIBits will never be exactly the same as the original data (well it might, depending on the image size).
So most likely you will need to memcpy scanline by scanline.
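A sketch of that scanline-by-scanline copy, assuming a tightly packed source buffer and a destination DIB whose rows are padded to 4-byte boundaries (copyScanlines is just a hypothetical helper name):

#include <windows.h>
#include <cstring>

// Copy packed source rows into 4-byte-aligned DIB rows.
static void copyScanlines(const BYTE* src, BYTE* dst, int width, int height, int Bpp)
{
    const int srcStride = width * Bpp;                  // packed source rows
    const int dstStride = ((width * Bpp + 3) / 4) * 4;  // DIB rows padded to 4 bytes

    for (int row = 0; row < height; ++row)
        memcpy(dst + row * dstStride, src + row * srcStride, srcStride);
}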
Thanks everyone, I managed to solve it in the end with your help. It mainly came down to #tinman and #Roel's suggestion to use SetDIBitsToDevice(), but it involved a bit of extra bit-twiddling and memory management, so I thought I'd share my end point here.
In the code below, I assume that width, height and Bpp (Bytes per pixel) are set, and that data is a pointer to the array of RGB pixel values.
// Create the header info
BITMAPINFOHEADER bmInfohdr;
bmInfohdr.biSize = sizeof(BITMAPINFOHEADER);
bmInfohdr.biWidth = width;
bmInfohdr.biHeight = -height;
bmInfohdr.biPlanes = 1;
bmInfohdr.biBitCount = Bpp * 8;
bmInfohdr.biCompression = BI_RGB;
bmInfohdr.biSizeImage = width * height * Bpp;
bmInfohdr.biXPelsPerMeter = 0;
bmInfohdr.biYPelsPerMeter = 0;
bmInfohdr.biClrUsed = 0;
bmInfohdr.biClrImportant = 0;

BITMAPINFO bmInfo;
bmInfo.bmiHeader = bmInfohdr;
bmInfo.bmiColors[0].rgbBlue = 255;

// Allocate some memory and some pointers
unsigned char *p24Img = new unsigned char[width * height * 3];
BYTE *pTemp, *ptr;
pTemp = (BYTE*)data;
ptr = p24Img;

// Convert image from RGB to BGR
for (DWORD index = 0; index < width * height; index++)
{
    unsigned char r = *(pTemp++);
    unsigned char g = *(pTemp++);
    unsigned char b = *(pTemp++);

    *(ptr++) = b;
    *(ptr++) = g;
    *(ptr++) = r;
}

// Create the CImage
CImage im;
im.Create(width, height, 24, NULL);

HDC dc = im.GetDC();
SetDIBitsToDevice(dc, 0, 0, width, height, 0, 0, 0, height, p24Img, &bmInfo, DIB_RGB_COLORS);
im.ReleaseDC();

delete[] p24Img;
Here is a simpler solution. You can use GetPixelAddress(...) instead of all this BITMAPINFOHEADER and SetDIBitsToDevice work. Another problem I solved this way was with 8-bit images, which need to have a color table defined.
CImage outImage;
outImage.Create(width, height, channelCount * 8);
int lineSize = width * channelCount;

if (channelCount == 1)
{
    // Define the color table
    RGBQUAD* tab = new RGBQUAD[256];
    for (int i = 0; i < 256; ++i)
    {
        tab[i].rgbRed = i;
        tab[i].rgbGreen = i;
        tab[i].rgbBlue = i;
        tab[i].rgbReserved = 0;
    }
    outImage.SetColorTable(0, 256, tab);
    delete[] tab;
}

// Copy pixel values
// Warning: does not convert from RGB to BGR
for (int i = 0; i < height; i++)
{
    void* dst = outImage.GetPixelAddress(0, i);
    const void* src = /* put the pointer to the i'th source row here */;
    memcpy(dst, src, lineSize);
}