Not able to create a multilayered TIFF image using FreeImage - C++

I've been trying for 2 days now without any luck. My goal is to take 2 arrays of data that we normally create 2 EXR files from, and create a multilayered/multipage TIFF file instead (FreeImage only supports multipage for TIF, GIF and ICO, and we need this to work well in Python too).
static unsigned DLL_CALLCONV
myReadProc(void *buffer, unsigned size, unsigned count, freeimage::fi_handle handle) {
    return (unsigned)fread(buffer, size, count, (FILE *)handle);
}
static unsigned DLL_CALLCONV
myWriteProc(void *buffer, unsigned size, unsigned count, freeimage::fi_handle handle) {
    return (unsigned)fwrite(buffer, size, count, (FILE *)handle);
}
static int DLL_CALLCONV
mySeekProc(freeimage::fi_handle handle, long offset, int origin) {
    return fseek((FILE *)handle, offset, origin);
}
static long DLL_CALLCONV
myTellProc(freeimage::fi_handle handle) {
    return ftell((FILE *)handle);
}
void MyClass::TestMultilayeredFile(math::float4 *data1, math::float4 *data2, Hash128 &hash, const int width, const int height)
{
    freeimage::FreeImageIO io;
    io.read_proc = myReadProc;
    io.write_proc = myWriteProc;
    io.seek_proc = mySeekProc;
    io.tell_proc = myTellProc;

    core::string cachePathAsset = GetAbsoluteHashFilePath(GetRelativeHashFilePath(hash, "_combined.tiff"));
    const int pixelCount = width * height;

    enum Layers
    {
        kData1 = 0,
        kData2 = 1,
        kLayerCount = 2
    };

    FILE *file = fopen(cachePathAsset.c_str(), "w+b");
    if (file != NULL) {
        freeimage::FIMULTIBITMAP *out = freeimage::FreeImage_OpenMultiBitmapFromHandle(freeimage::FREE_IMAGE_FORMAT::FIF_TIFF, &io, (freeimage::fi_handle)file, 0x0800);
        if (out)
        {
            const math::float4* kLayers[2] = { data1, data2 };
            for (int layer = 0; layer < kLayerCount; ++layer)
            {
                freeimage::FIBITMAP* bitmap = freeimage::FreeImage_AllocateT(freeimage::FIT_RGBAF, width, height);
                const int pitch = freeimage::FreeImage_GetPitch(bitmap);
                void* bytes = (freeimage::BYTE*)freeimage::FreeImage_GetBits(bitmap);
                const int bytesPerPixel = freeimage::FreeImage_GetBPP(bitmap) / 8;
                DebugAssert(pitch == width * bytesPerPixel);
                DebugAssert(bytes);
                DebugAssert(bytesPerPixel == 16);
                memcpy(bytes, kLayers[layer], pixelCount * bytesPerPixel);
                freeimage::FreeImage_AppendPage(out, bitmap);
                freeimage::FreeImage_Unload(bitmap);
            }
            // Save the multi-page file to the stream
            BOOL bSuccess = freeimage::FreeImage_SaveMultiBitmapToHandle(freeimage::FREE_IMAGE_FORMAT::FIF_TIFF, out, &io, (freeimage::fi_handle)file, 0x0800);
            freeimage::FreeImage_CloseMultiBitmap(out, 0);
        }
    }
}
The TIFF file is created, but it only contains 1 KB. Also, bSuccess returns false. The code to generate the individual images has worked in the past, but I haven't done multipage before.
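For reference, a minimal sketch of the plain file-based multipage path (no custom I/O handlers) looks like this; FreeImage_OpenMultiBitmap with create_new = TRUE writes the appended pages when the multibitmap is closed, and width/height plus the pixel fill are assumed:

// Sketch: create a 2-page RGBAF TIFF with the file-based multipage API.
FIMULTIBITMAP *mb = FreeImage_OpenMultiBitmap(FIF_TIFF, "combined.tiff",
                                              TRUE,   // create_new
                                              FALSE,  // read_only
                                              TRUE,   // keep_cache_in_memory
                                              0);     // flags
for (int page = 0; page < 2; ++page) {
    FIBITMAP *bmp = FreeImage_AllocateT(FIT_RGBAF, width, height);
    // ... fill FreeImage_GetBits(bmp) with this page's pixels ...
    FreeImage_AppendPage(mb, bmp);  // the bitmap is copied into the multibitmap
    FreeImage_Unload(bmp);
}
FreeImage_CloseMultiBitmap(mb, 0);  // pages are flushed to disk here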

Related

ffmpeg hevc encoding failure

I use FFmpeg to encode YUV data as H.265, but the image after encoding is always corrupted.
However, the following command encodes correctly: ffmpeg -f rawvideo -s 480x256 -pix_fmt yuv420p -i origin.yuv -c:v hevc -f hevc -x265-params keyint=1:crf=18 out.h265
Here is my code:
void H265ImageCodec::InitCPUEncoder() {
    avcodec_register_all();
    AVCodec* encoder = avcodec_find_encoder(AV_CODEC_ID_H265);
    CHECK(encoder) << "Can not find encoder with h265.";
    // context
    encode_context_ = avcodec_alloc_context3(encoder);
    CHECK(encode_context_) << "Could not allocate video codec context.";
    encode_context_->codec_id = AV_CODEC_ID_H265;
    encode_context_->profile = FF_PROFILE_HEVC_MAIN;
    encode_context_->codec_type = AVMEDIA_TYPE_VIDEO;
    encode_context_->width = width_;   // it's 480
    encode_context_->height = height_; // it's 256
    encode_context_->bit_rate = 384 * 1024;
    encode_context_->pix_fmt = AVPixelFormat::AV_PIX_FMT_YUV420P;
    encode_context_->time_base = (AVRational){1, 25};
    encode_context_->framerate = (AVRational){25, 1};
    AVDictionary* options = NULL;
    av_dict_set(&options, "preset", "ultrafast", 0);
    av_dict_set(&options, "tune", "zero-latency", 0);
    av_opt_set(encode_context_->priv_data, "x265-params", "keyint=1:crf=18",
               0); // crf: Quality-controlled variable bitrate
    avcodec_open2(encode_context_, encoder, &options);
    encode_frame_ = av_frame_alloc();
    encode_frame_->format = encode_context_->pix_fmt;
    encode_frame_->width = encode_context_->width;
    encode_frame_->height = encode_context_->height;
    av_frame_get_buffer(encode_frame_, 0);
    // packet init
    encode_packet_ = av_packet_alloc();
}
std::string H265ImageCodec::EncodeImage(std::string_view raw_image) {
    av_packet_unref(encode_packet_);
    av_frame_make_writable(encode_frame_);
    const int64 y_size = width_ * height_;
    int64 offset = 0;
    memcpy(encode_frame_->data[0], raw_image.data() + offset, y_size);
    offset += y_size;
    memcpy(encode_frame_->data[1], raw_image.data() + offset, y_size / 4);
    offset += y_size / 4;
    memcpy(encode_frame_->data[2], raw_image.data() + offset, y_size / 4);
    avcodec_send_frame(encode_context_, encode_frame_);
    int ret = avcodec_receive_packet(encode_context_, encode_packet_);
    CHECK_EQ(ret, 0) << "receive encode packet ret: " << ret;
    std::string h265_frame(reinterpret_cast<char*>(encode_packet_->data),
                           encode_packet_->size);
    return h265_frame;
}
Any idea what might cause this?
As noted in the comments, the issue is that the rows of the U and V buffers in encode_frame_ are not contiguous in memory.
When av_frame_get_buffer(encode_frame_, 0) allocates the plane buffers (with the default alignment), the resulting line sizes are:
encode_frame_->linesize[0] = 480 - equal to the width, so the Y channel is contiguous in memory.
encode_frame_->linesize[1] = 256 (not 480/2 = 240).
encode_frame_->linesize[2] = 256 (not 480/2 = 240).
The rows of the U and V channels are therefore not contiguous in memory.
Illustration for destination U channel in memory:
<----------- 256 bytes ----------->
<------- 240 elements ------->
^ uuuuuuuuuuuuuuuuuuuuuuuuuuuuuu xxxx
| uuuuuuuuuuuuuuuuuuuuuuuuuuuuuu xxxx
128 rows uuuuuuuuuuuuuuuuuuuuuuuuuuuuuu xxxx
| uuuuuuuuuuuuuuuuuuuuuuuuuuuuuu xxxx
V uuuuuuuuuuuuuuuuuuuuuuuuuuuuuu xxxx
To verify, we can print the line sizes:
printf("encode_frame_->linesize[0] = %d\n", encode_frame_->linesize[0]); //480
printf("encode_frame_->linesize[1] = %d\n", encode_frame_->linesize[1]); //256 (not 240)
printf("encode_frame_->linesize[2] = %d\n", encode_frame_->linesize[2]); //256 (not 240)
Inspired by cudaMemcpy2D, we may implement the function memcpy2D:
//memcpy from src to dst with optional source "pitch" and destination "pitch".
//The "pitch" is the step in bytes between two rows.
//The function interface is based on cudaMemcpy2D.
static void memcpy2D(void* dst,
                     size_t dpitch,
                     const void* src,
                     size_t spitch,
                     size_t width,
                     size_t height)
{
    const unsigned char* I = (unsigned char*)src;
    unsigned char* J = (unsigned char*)dst;
    for (size_t y = 0; y < height; y++)
    {
        const unsigned char* I0 = I + y*spitch; // Pointer to the beginning of the source row
        unsigned char* J0 = J + y*dpitch;       // Pointer to the beginning of the destination row
        memcpy(J0, I0, width);                  // Copy width bytes from row I0 to row J0
    }
}
Use memcpy2D instead of memcpy for copying data to a destination frame whose rows may not be contiguous in memory:
//Copy Y channel:
memcpy2D(encode_frame_->data[0],     //void* dst
         encode_frame_->linesize[0], //size_t dpitch
         raw_image.data() + offset,  //const void* src
         width_,                     //size_t spitch
         width_,                     //size_t width
         height_);                   //size_t height
offset += y_size;

//Copy U channel:
memcpy2D(encode_frame_->data[1],     //void* dst
         encode_frame_->linesize[1], //size_t dpitch
         raw_image.data() + offset,  //const void* src
         width_/2,                   //size_t spitch
         width_/2,                   //size_t width
         height_/2);                 //size_t height
offset += y_size / 4;

//Copy V channel:
memcpy2D(encode_frame_->data[2],     //void* dst
         encode_frame_->linesize[2], //size_t dpitch
         raw_image.data() + offset,  //const void* src
         width_/2,                   //size_t spitch
         width_/2,                   //size_t width
         height_/2);                 //size_t height
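For what it's worth, libavutil already ships an equivalent row-by-row copy helper, av_image_copy_plane (declared in libavutil/imgutils.h), so under the same width_/height_ assumptions the U-channel copy above could also be written as:

#include <libavutil/imgutils.h>

// Same effect as the memcpy2D call for the U plane: copies width_/2 bytes
// per row for height_/2 rows, honouring both line sizes.
av_image_copy_plane(encode_frame_->data[1], encode_frame_->linesize[1],
                    (const uint8_t*)(raw_image.data() + offset), width_ / 2,
                    width_ / 2, height_ / 2);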

VP8 C/C++ source: how to encode ARGB frames from memory instead of from a file

I'm trying to get started with the VP8 library. I'm not building it in the standard way they tell you to; I just loaded all of the main files and the "encoder" folder into a new Visual Studio C++ DLL project, and included the C files in an extern "C" DLL-export function, which so far builds fine. I just have no idea where to start with the API to encode, say, 3 frames of ARGB data into a very basic video.
The only example I could find is simple_encoder.c in the examples folder, but its premise is that it loads another file, parses its frames, and converts them, so it seems a bit complicated. I just want to pass in a byte array of a few ARGB frames and have it output a very simple VP8 video.
I've seen How to encode series of images into VP8 using WebM VP8 Encoder API? (C/C++), but the accepted answer just links to the build instructions and the general specification of the VP8 format; the closest thing there is the example encoding parameters. I just want to do everything from C++, and I can't find any examples other than the default simple_encoder.c.
Just to cite some of the relevant parts that I think I understand, but still need more help on:
//in int main...
...
vpx_image_t raw;
if (!vpx_img_alloc(&raw, VPX_IMG_FMT_I420, info.frame_width,
                   info.frame_height, 1)) {
    //"Failed to allocate image." error
}
I think I understand that part for the most part. VPX_IMG_FMT_I420 is the only piece that isn't defined in this file itself; it's in vpx_image.h, first as
#define VPX_IMG_FMT_PLANAR
//then after...
typedef enum vpx_img_fmt {
    VPX_IMG_FMT_NONE,
    VPX_IMG_FMT_RGB24,  /**< 24 bit per pixel packed RGB */
    ///some other formats....
    VPX_IMG_FMT_ARGB,   /**< 32 bit packed ARGB, alpha=255 */
    VPX_IMG_FMT_YV12 = VPX_IMG_FMT_PLANAR | VPX_IMG_FMT_UV_FLIP | 1, /**< planar YVU */
    VPX_IMG_FMT_I420 = VPX_IMG_FMT_PLANAR | 2,
} vpx_img_fmt_t; /**< alias for enum vpx_img_fmt */
So I guess part of my question is answered already just from writing this: one of the formats is VPX_IMG_FMT_ARGB, although I don't know where it's defined. I'm guessing in the above code I would replace it like this:
const VpxInterface *encoder = get_vpx_encoder_by_name("vp8");
vpx_image_t raw;
VpxVideoInfo info = { 0, 0, 0, { 0, 0 } };
info.frame_width = 1920;
info.frame_height = 1080;
info.codec_fourcc = encoder->fourcc;
info.time_base.numerator = 1;
info.time_base.denominator = 24;
bool didIt = vpx_img_alloc(&raw, VPX_IMG_FMT_ARGB,
                           info.frame_width, info.frame_height /*example width and height*/, 1) != NULL;
//check didIt..
vpx_codec_enc_cfg_t cfg;
vpx_codec_ctx_t codec;
vpx_codec_err_t res;
res = vpx_codec_enc_config_default(encoder->codec_interface(), &cfg, 0);
//check if res != VPX_CODEC_OK for error
cfg.g_w = info.frame_width;
cfg.g_h = info.frame_height;
cfg.g_timebase.num = info.time_base.numerator;
cfg.g_timebase.den = info.time_base.denominator;
cfg.rc_target_bitrate = 200;
VpxVideoWriter *writer = NULL;
writer = vpx_video_writer_open(outfile_arg, kContainerIVF, &info);
//check if !writer for error
vpx_codec_err_t startIt = vpx_codec_enc_init(&codec, encoder->codec_interface(), &cfg, 0);
//not even sure where codec was set actually..
//check startIt != VPX_CODEC_OK for error starting

//now the next part in the original is where it reads from the input file, but instead
//I need to pass in an array of some ARGB byte arrays..
//thing is, in the next step they use a while loop for
//vpx_img_read(&raw, fopen("path/to/YV12formatVideo", "rb"))
//to set the contents of the raw vpx image allocated earlier, then
//they call another program that writes it to the writer object,
//but I don't know how to read the actual ARGB data directly into the raw image
//without using fopen, so that's one question (review at end)
//so I'll just put a placeholder here for the **question**

//assuming I have an array of byte arrays stored individually
//for simplicity's sake
const int size = 1920 * 1080 * 4;
uint8_t imgOne[size] = {/*some big byte array*/};
uint8_t imgTwo[size] = {/*some big byte array*/};
uint8_t imgThree[size] = {/*some big byte array*/};
uint8_t *images[] = {imgOne, imgTwo, imgThree};
int framesDone = 0;
int maxFrames = 3;

//so now I can replace the while loop with a filler function
//until I find out how to set the raw image with ARGB data
while (framesDone < maxFrames) {
    magicalFunctionToSetARGBOfRawImage(&raw, images[framesDone]);
    encode_frame(&codec, &raw, framesDone, 0, writer);
    framesDone++;
}
//now apparently it needs to be flushed after
while (encode_frame(&codec, NULL, -1, 0, writer)) {}
vpx_img_free(&raw);
vpx_codec_err_t isDestroyed = vpx_codec_destroy(&codec);
//check if isDestroyed != VPX_CODEC_OK for error
//now we define the encode_frame function, but simpler
//(put it above the other function for reference purposes,
//or in a header)
static int encode_frame(vpx_codec_ctx_t *coydek,
                        vpx_image_t *pic,
                        int currentFrame,
                        int flags,
                        VpxVideoWriter *koysayv /*writer*/)
{
    //now to substitute their encode_frame function with
    //the actual raw calls to simplify things
    const vpx_codec_err_t DidIt = vpx_codec_encode(
        coydek,
        pic,
        currentFrame,
        1,                //duration of this frame in timebase units
        flags,
        VPX_DL_REALTIME); //different than simple_encoder
    if (DidIt != VPX_CODEC_OK) return 0; //error here
    vpx_codec_iter_t iter = 0;
    const vpx_codec_cx_pkt_t *pkt = 0;
    int gotThings = 0;
    while ((pkt = vpx_codec_get_cx_data(coydek, &iter)) != 0) {
        gotThings = 1;
        if (pkt->kind == VPX_CODEC_CX_FRAME_PKT) { //this packet holds an encoded frame
            //the & masks the keyframe bit out of the frame flags
            const int keyframe = (pkt->data.frame.flags & VPX_FRAME_IS_KEY) != 0;
            (void)keyframe; //not used here
            bool wroteFrame = vpx_video_writer_write_frame(
                koysayv,
                pkt->data.frame.buf, //the encoded frame data
                pkt->data.frame.sz,
                pkt->data.frame.pts);
            if (!wroteFrame) return 0; //error
        }
    }
    return gotThings;
}
The thing is, though, I don't know how to actually read the ARGB data into the raw image buffer itself. As mentioned above, the original example uses vpx_img_read(&raw, fopen("path/to/file", "rb")), but if I'm starting off with the byte arrays themselves, what function do I use instead of the file? I have a feeling it can be solved using the source of vpx_img_read, found in tools_common.c:
int vpx_img_read(vpx_image_t *img, FILE *file) {
    int plane;
    for (plane = 0; plane < 3; ++plane) {
        unsigned char *buf = img->planes[plane];
        const int stride = img->stride[plane];
        const int w = vpx_img_plane_width(img, plane) *
                      ((img->fmt & VPX_IMG_FMT_HIGHBITDEPTH) ? 2 : 1);
        const int h = vpx_img_plane_height(img, plane);
        int y;
        for (y = 0; y < h; ++y) {
            if (fread(buf, 1, w, file) != (size_t)w) return 0;
            buf += stride;
        }
    }
    return 1;
}
I personally am not experienced enough to know how to get a single frame's ARGB data in, but I think the key part is fread(buf, 1, w, file), which reads parts of the file into buf. Since buf points at img->planes[plane], reading into buf fills img->planes[plane] directly, though I'm not sure that is the case, and I'm also not sure how to replace the fread so it takes a byte array that is already loaded in memory instead of a file...
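(For what it's worth, an in-memory analogue of vpx_img_read would just swap the fread for a per-row memcpy from a moving source pointer. This is a sketch with a hypothetical helper name, assuming the source frame is tightly packed in plane order:)

//hypothetical in-memory counterpart of vpx_img_read: copies a tightly
//packed planar frame from `data` into the (possibly strided) image planes
static int vpx_img_read_from_memory(vpx_image_t *img, const unsigned char *data) {
    for (int plane = 0; plane < 3; ++plane) {
        unsigned char *buf = img->planes[plane];
        const int stride = img->stride[plane];
        const int w = vpx_img_plane_width(img, plane) *
                      ((img->fmt & VPX_IMG_FMT_HIGHBITDEPTH) ? 2 : 1);
        const int h = vpx_img_plane_height(img, plane);
        for (int y = 0; y < h; ++y) {
            memcpy(buf, data, w); //one row at a time, honouring the stride
            buf += stride;
            data += w;
        }
    }
    return 1;
}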
VPX_IMG_FMT_ARGB is not defined because it is not supported by libvpx (as far as I have seen). To compress an image using this library, you must first convert it to one of the supported formats, like I420 (VPX_IMG_FMT_I420). The code here (not mine), https://gist.github.com/racerxdl/8164330, does it well for the RGB format. If you don't want to use libswscale for the conversion from RGB to I420, you can do something like this (this code converts an RGBA array of bytes to an I420 vpx_image that can be used by libvpx):
unsigned int tx = <width of your image>
unsigned int ty = <height of your image>
unsigned char *image = <array of bytes : RGBARGBA... of size ty*tx*4>
vpx_image_t *imageVpx = <result that must have been properly initialized by libvpx>

imageVpx->stride[VPX_PLANE_U    ] = tx/2;
imageVpx->stride[VPX_PLANE_V    ] = tx/2;
imageVpx->stride[VPX_PLANE_Y    ] = tx;
imageVpx->stride[VPX_PLANE_ALPHA] = tx;
imageVpx->planes[VPX_PLANE_U    ] = new unsigned char[ty*tx/4];
imageVpx->planes[VPX_PLANE_V    ] = new unsigned char[ty*tx/4];
imageVpx->planes[VPX_PLANE_Y    ] = new unsigned char[ty*tx  ];
imageVpx->planes[VPX_PLANE_ALPHA] = new unsigned char[ty*tx  ];

unsigned char *planeY = imageVpx->planes[VPX_PLANE_Y    ];
unsigned char *planeU = imageVpx->planes[VPX_PLANE_U    ];
unsigned char *planeV = imageVpx->planes[VPX_PLANE_V    ];
unsigned char *planeA = imageVpx->planes[VPX_PLANE_ALPHA];

for (unsigned int y = 0; y < ty; y++)
{
    if (!(y % 2)) // even rows: sample U and V once per 2x2 block
    {
        for (unsigned int x = 0; x < tx; x += 2)
        {
            int r = *image++;
            int g = *image++;
            int b = *image++;
            int a = *image++;
            *planeY++ = max(0, min(255, (( 66*r + 129*g +  25*b) >> 8) +  16));
            *planeU++ = max(0, min(255, ((-38*r + -74*g + 112*b) >> 8) + 128));
            *planeV++ = max(0, min(255, ((112*r + -94*g + -18*b) >> 8) + 128));
            *planeA++ = a;
            r = *image++;
            g = *image++;
            b = *image++;
            a = *image++;
            *planeA++ = a;
            *planeY++ = max(0, min(255, ((66*r + 129*g + 25*b) >> 8) + 16));
        }
    }
    else // odd rows: only Y and alpha
    {
        for (unsigned int x = 0; x < tx; x++)
        {
            int const r = *image++;
            int const g = *image++;
            int const b = *image++;
            int const a = *image++;
            *planeA++ = a;
            *planeY++ = max(0, min(255, ((66*r + 129*g + 25*b) >> 8) + 16));
        }
    }
}
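Note that this conversion derives U and V from only the top-left pixel of each 2x2 block rather than averaging the four RGB values; that is a common shortcut, and averaging would give slightly better chroma at the cost of a few more additions per block.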

How to resize an image from an RGB buffer using C++

I have a char* RGB buffer that holds the data of an actual image. Let's say the image resolution is 720x576. Now I want to resize it to another resolution, say 120x90.
How can I do this using https://code.google.com/p/jpeg-compressor/ or libjpeg?
Note: I can use any other library, but it should work on Linux.
Edit: A video decoder decodes a frame in YUV, which I convert into RGB. All of this happens in a buffer.
I need to resize the RGB buffer to make a thumbnail out of it with variable size.
Thanks for the help in advance.
I did the following to achieve my goal:
#define TN_WIDTH 240
#define TN_HEIGHT 180

#include "jpegcompressor/jpge.h"
#include "jpegcompressor/jpgd.h"
#include <ippi.h>

bool createThumnailJpeg(const uint8* pSrc, int srcwidth, int srcheight)
{
    int req_comps = 3;
    jpge::params params;
    params.m_quality = 50;
    params.m_subsampling = jpge::H2V2;
    params.m_two_pass_flag = false;

    FILE *fpJPEGTN = fopen("Resource\\jpegcompressor.jpeg", "wb");

    int dstWidth = TN_WIDTH;
    int dstHeight = TN_HEIGHT;
    int uiDstBufferSize = dstWidth * dstHeight * 3;
    uint8 *pDstRGBBuffer = new uint8[uiDstBufferSize];
    uint8 *pJPEGTNBuffer = new uint8[uiDstBufferSize];
    int uiSrcBufferSize = srcwidth * srcheight * 3;

    IppiSize srcSize = {srcwidth, srcheight};
    IppiRect srcROI = {0, 0, srcwidth, srcheight};
    IppiSize dstROISize = {dstWidth, dstHeight};
    double xfactor = (double) dstWidth / srcwidth;
    double yfactor = (double) dstHeight / srcheight;

    IppStatus status = ippiResize_8u_C3R(pSrc, srcSize, srcwidth*3, srcROI,
                                         pDstRGBBuffer, dstWidth*3, dstROISize,
                                         xfactor, yfactor, 1);

    // uiDstBufferSize is passed by reference and updated to the actual JPEG size
    if (!jpge::compress_image_to_jpeg_file_in_memory(pJPEGTNBuffer, uiDstBufferSize,
                                                     dstWidth, dstHeight, req_comps,
                                                     pDstRGBBuffer, params))
    {
        cout << "failed!";
        delete [] pDstRGBBuffer;
        delete [] pJPEGTNBuffer;
        return false;
    }

    if (fpJPEGTN)
    {
        fwrite(pJPEGTNBuffer, uiDstBufferSize, 1, fpJPEGTN);
        fclose(fpJPEGTN);
    }

    delete [] pDstRGBBuffer;
    delete [] pJPEGTNBuffer;
    return true;
}
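If IPP isn't available, a dependency-free nearest-neighbour resize over the raw RGB buffer is only a few lines. This is a minimal sketch (quality is below IPP's filtered modes; the buffer is assumed to be tightly packed RGB24):

#include <cstdint>
#include <vector>

// Nearest-neighbour resize of a tightly packed RGB24 buffer (a sketch).
static std::vector<uint8_t> resizeRGB(const uint8_t *src, int srcW, int srcH,
                                      int dstW, int dstH)
{
    std::vector<uint8_t> dst(static_cast<size_t>(dstW) * dstH * 3);
    for (int y = 0; y < dstH; ++y) {
        const int sy = y * srcH / dstH;                // source row for this output row
        for (int x = 0; x < dstW; ++x) {
            const int sx = x * srcW / dstW;            // source column
            const uint8_t *s = src + (static_cast<size_t>(sy) * srcW + sx) * 3;
            uint8_t *d = &dst[(static_cast<size_t>(y) * dstW + x) * 3];
            d[0] = s[0]; d[1] = s[1]; d[2] = s[2];     // copy one RGB pixel
        }
    }
    return dst;
}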

C++ OpenGL TGA Loading Failing

I've been working through a basic OpenGL tutorial on loading a TGA file to be used as a texture on a 3D object. I've been able to load data from the TGA header, but when I attempt to load the actual image data, it fails. I'm not sure where it is going wrong. Here is my texture loading class:
Header file:
struct TGA_Header
{
    GLbyte ID_Length;
    GLbyte ColorMapType;
    GLbyte ImageType;
    // Color map specifications
    GLbyte firstEntryIndex[2];
    GLbyte colorMapLength[2];
    GLbyte colorMapEntrySize;
    // Image specification
    GLshort xOrigin;
    GLshort yOrigin;
    GLshort ImageWidth;
    GLshort ImageHeight;
    GLbyte PixelDepth;
    GLbyte ImageDescriptor;
};

class Texture
{
public:
    Texture(string in_filename, string in_name = "");
    ~Texture();
public:
    unsigned short width;
    unsigned short height;
    unsigned int length;
    unsigned char type;
    unsigned char *imageData;
    unsigned int bpp;
    unsigned int texID;
    string name;
    static vector<Texture *> textures;
private:
    bool loadTGA(string filename);
    bool createTexture(unsigned char *imageData, int width, int height, int type);
    void swap(unsigned char * ori, unsigned char * dest, GLint size);
    void flipImage(unsigned char * image, bool flipHorizontal, bool flipVertical, GLushort width, GLushort height, GLbyte bpp);
};
Here is the load TGA function in the cpp:
bool Texture::loadTGA(string filename)
{
    TGA_Header TGAheader;
    ifstream file( filename.data(), std::ios::in, std::ios::binary );

    //make sure the file was opened properly
    if ( !file.is_open() )
        return false;

    if ( !file.read( (char *)&TGAheader, sizeof(TGAheader) ) )
        return false;

    //make sure the image is of a type we can handle
    if ( TGAheader.ImageType != 2 )
        return false;

    width = TGAheader.ImageWidth;
    height = TGAheader.ImageHeight;
    bpp = TGAheader.PixelDepth;

    if ( width < 0 ||               // if the width or height is less than 0, then
         height <= 0 ||             // the image is corrupt
         (bpp != 24 && bpp != 32) ) // make sure we are of the correct bit depth
    {
        return false;
    }

    //check for an alpha channel
    GLuint type = GL_RGBA;
    if ( bpp == 24 )
        type = GL_RGB;

    GLuint bytesPerPixel = bpp / 8;

    //allocate memory for the TGA so we can read it
    GLuint imageSize = width * height * bytesPerPixel;
    imageData = new GLubyte[imageSize];
    if ( imageData == NULL )
        return false;

    //make sure we are in the correct position to load the image data
    file.seekg(-imageSize, std::ios::end);

    //if something went wrong, make sure we free up the memory
    //NOTE: It never gets past this point. The conditional always fails.
    if ( !file.read( (char *)imageData, imageSize ) )
    {
        delete imageData;
        return false;
    }
    //more code is down here, but it doesn't matter because it never gets past the read above
}
It seems to load some data, but it keeps returning that it failed. Any help on why would be greatly appreciated. Apologies if it gets a bit wordy, but I'm not sure what is or is not significant.
UPDATE:
So, I just rewrote the function. The ifstream I was using seemed to be the cause of the problem. Specifically, it would try to load far more bytes of data than I had asked for. I don't know the cause of that behavior, but I've listed my working code below. Thank you everyone for your help.
The problem could be that your TGA loader does not support compressed TGAs.
Make sure you do not compress the TGA, and (less important) that its origin is bottom-left.
I usually work with GIMP: when exporting, uncheck RLE compression and choose the bottom-left origin.
I'm not familiar with C++, sorry.
Are you sure the line file.seekg(-imageSize, std::ios::end); is not supposed to be file.seekg(headerSize, std::ios::beg);?
It makes more sense to seek from the start than from the end.
You should also check for ColorMapType != 0.
P.S. Here, in if( width < 0 || height <= 0, the width check should be <= as well.
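(A minimal sketch of that seek suggestion, assuming the fixed 18-byte TGA header and skipping the optional image ID field that follows it:)

// Position the stream just past the 18-byte header and the optional
// image ID block, counted from the beginning of the file.
file.seekg(18 + TGAheader.ID_Length, std::ios::beg);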
So, I changed from using an ifstream to a FILE. The ifstream was trying to load far more bytes than I had listed in the arguments. Here is the new code. (NOTE: It still needs optimizing. I believe there are some unused variables floating around, but it works perfectly.) Thanks again everyone for your help.
The header file:
//struct to hold tga data
struct TGA_Header
{
    GLbyte ID_Length;
    GLbyte ColorMapType;
    GLbyte ImageType;
    // Color map specifications
    GLbyte firstEntryIndex[2];
    GLbyte colorMapLength[2];
    GLbyte colorMapEntrySize;
    // Image specification
    GLshort xOrigin;
    GLshort yOrigin;
    GLshort ImageWidth;
    GLshort ImageHeight;
    GLbyte PixelDepth;
    GLbyte ImageDescriptor;
};

class Texture
{
public:
    //functions
    Texture(string in_filename, string in_name = "");
    ~Texture();
public:
    //vars
    unsigned char *imageData;
    unsigned int texID;
    string name;
    //temp global access point for accessing all loaded textures
    static vector<Texture *> textures;
private:
    //can add additional load functions for other image types
    bool loadTGA(string filename);
    bool createTexture(unsigned char *imageData, int width, int height, int type);
    void swap(unsigned char * ori, unsigned char * dest, GLint size);
    void flipImage(unsigned char * image, bool flipHorizontal, bool flipVertical, GLushort width, GLushort height, GLbyte bpp);
};
#endif
Here is the load TGA function:
bool Texture::loadTGA(string filename)
{
    //var for swapping colors
    unsigned char colorSwap = 0;
    GLuint type;
    TGA_Header TGAheader;
    FILE* file = fopen(filename.c_str(), "rb");
    unsigned char Temp_TGAheader[18];

    //check to make sure the file loaded
    if ( file == NULL )
        return false;

    fread(Temp_TGAheader, 1, sizeof(Temp_TGAheader), file);

    //pull out the relevant data. 2-byte data (short) must be converted
    TGAheader.ID_Length = Temp_TGAheader[0];
    TGAheader.ImageType = Temp_TGAheader[2];
    TGAheader.ImageWidth = *static_cast<unsigned short*>(static_cast<void*>(&Temp_TGAheader[12]));
    TGAheader.ImageHeight = *static_cast<unsigned short*>(static_cast<void*>(&Temp_TGAheader[14]));
    TGAheader.PixelDepth = Temp_TGAheader[16];

    //make sure the image is of a type we can handle
    if ( TGAheader.ImageType != 2 || TGAheader.ImageWidth <= 0 || TGAheader.ImageHeight <= 0 )
    {
        fclose(file);
        return false;
    }

    //set the type
    if ( TGAheader.PixelDepth == 32 )
    {
        type = GL_RGBA;
    }
    else if ( TGAheader.PixelDepth == 24 )
    {
        type = GL_RGB;
    }
    else
    {
        //incompatible image type
        return false;
    }

    //remember bits != bytes. To convert we need to divide by 8
    GLuint bytesPerPixel = TGAheader.PixelDepth / 8;

    //the memory required for the TGA data
    unsigned int imageSize = TGAheader.ImageWidth * TGAheader.ImageHeight * bytesPerPixel;

    //request the needed memory
    imageData = new GLubyte[imageSize];
    if ( imageData == NULL ) // just in case
        return false;

    if ( fread(imageData, 1, imageSize, file) != imageSize )
    {
        //kill it
        delete [] imageData;
        fclose(file);
        return false;
    }
    fclose(file);

    //TGA stores pixels as BGR(A), so swap the red and blue channels
    for (unsigned int x = 0; x < imageSize; x += bytesPerPixel)
    {
        colorSwap = imageData[x];
        imageData[x] = imageData[x + 2];
        imageData[x + 2] = colorSwap;
    }

    createTexture( imageData, TGAheader.ImageWidth, TGAheader.ImageHeight, type );
    return true;
}

Convert Leptonica Pix object to QPixmap (or other image object)

I'm using the Leptonica library to process some pictures. After that, I want to show them in my Qt GUI. Leptonica uses its own format, Pix, for images, while Qt uses its own format, QPixmap. At the moment, the only way I have is to save the pictures after processing as a file (like BMP) and then load them again with a Qt function call. I want to convert them in my code instead, so I don't need to take the detour through the filesystem. Any ideas how to do this?
Best Regards
// edit:
Okay, as already suggested, I tried to convert the PIX* to a QImage.
The PIX* is defined like this:
http://tpgit.github.com/Leptonica/pix_8h_source.html
struct Pix
{
    l_uint32 w;         /* width in pixels */
    l_uint32 h;         /* height in pixels */
    l_uint32 d;         /* depth in bits */
    l_uint32 wpl;       /* 32-bit words/line */
    l_uint32 refcount;  /* reference count (1 if no clones) */
    l_int32 xres;       /* image res (ppi) in x direction */
                        /* (use 0 if unknown) */
    l_int32 yres;       /* image res (ppi) in y direction */
                        /* (use 0 if unknown) */
    l_int32 informat;   /* input file format, IFF_* */
    char *text;         /* text string associated with pix */
    struct PixColormap *colormap; /* colormap (may be null) */
    l_uint32 *data;     /* the image data */
};
while QImage offers me a constructor like this:
http://developer.qt.nokia.com/doc/qt-4.8/qimage.html#QImage-7
QImage ( const uchar * data,
         int width,
         int height,
         int bytesPerLine,
         Format format )
I assume I can't just copy the data from the PIX to the QImage when calling the constructor. I guess I need to fill the QImage pixel by pixel, but I don't know how. Do I need to loop through all the coordinates? How do I account for the bit depth? Any ideas?
I use this for converting a QImage to a PIX:
PIX* TessTools::qImage2PIX(QImage& qImage) {
    PIX * pixs;
    l_uint32 *lines;

    qImage = qImage.rgbSwapped();
    int width = qImage.width();
    int height = qImage.height();
    int depth = qImage.depth();
    int wpl = qImage.bytesPerLine() / 4;

    pixs = pixCreate(width, height, depth);
    pixSetWpl(pixs, wpl);
    pixSetColormap(pixs, NULL);
    l_uint32 *datas = pixs->data;

    for (int y = 0; y < height; y++) {
        lines = datas + y * wpl;
        QByteArray a((const char*)qImage.scanLine(y), qImage.bytesPerLine());
        for (int j = 0; j < a.size(); j++) {
            *((l_uint8 *)lines + j) = a[j];
        }
    }
    return pixEndianByteSwapNew(pixs);
}
And this for converting a PIX to a QImage:
QImage TessTools::PIX2QImage(PIX *pixImage) {
    int width = pixGetWidth(pixImage);
    int height = pixGetHeight(pixImage);
    int depth = pixGetDepth(pixImage);
    int bytesPerLine = pixGetWpl(pixImage) * 4;
    l_uint32 * s_data = pixGetData(pixEndianByteSwapNew(pixImage));

    QImage::Format format;
    if (depth == 1)
        format = QImage::Format_Mono;
    else if (depth == 8)
        format = QImage::Format_Indexed8;
    else
        format = QImage::Format_RGB32;

    QImage result((uchar*)s_data, width, height, bytesPerLine, format);

    // Handle palette
    QVector<QRgb> _bwCT;
    _bwCT.append(qRgb(255,255,255));
    _bwCT.append(qRgb(0,0,0));

    QVector<QRgb> _grayscaleCT(256);
    for (int i = 0; i < 256; i++) {
        _grayscaleCT[i] = qRgb(i, i, i); // assign in place; append would grow the table to 512 entries
    }

    if (depth == 1) {
        result.setColorTable(_bwCT);
    } else if (depth == 8) {
        result.setColorTable(_grayscaleCT);
    } else {
        result.setColorTable(_grayscaleCT);
    }

    if (result.isNull()) {
        static QImage none(0,0,QImage::Format_Invalid);
        qDebug() << "***Invalid format!!!";
        return none;
    }

    return result.rgbSwapped();
}
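Typical usage would look something like this (a sketch, assuming the two helpers above and a hypothetical QLabel named label to show the result):

PIX *pix = pixRead("input.png");            // load with Leptonica
QImage img = TessTools::PIX2QImage(pix);    // convert for Qt
label->setPixmap(QPixmap::fromImage(img));  // display in the GUI
pixDestroy(&pix);                           // release the Leptonica image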
This code accepts a const QImage& parameter.
static PIX* makePIXFromQImage(const QImage &image)
{
    QByteArray ba;
    QBuffer buf(&ba);
    buf.open(QIODevice::WriteOnly);
    image.save(&buf, "BMP");
    return pixReadMemBmp(reinterpret_cast<const l_uint8*>(ba.constData()), ba.size());
}
I do not know the Leptonica library, but I had a short look at the documentation and found the PIX structure. You can create a QImage from the raw data and convert it to a QPixmap with convertFromImage.
Well, I could solve the problem this way:
Leptonica offers the function
l_int32 pixWriteMemBmp (l_uint8 **pdata, size_t *psize, PIX *pix)
With this function you can write into memory instead of a file stream. The BMP header and format still persist (there are matching functions for other image formats too).
The corresponding function in Qt is this one:
bool QImage::loadFromData ( const uchar * data, int len, const char * format = 0 )
Since the header persists, I just need to pass the data pointer and the size to loadFromData, and Qt does the rest.
So, all together, it would be like this:
PIX *m_pix;
FILE * pFile;
pFile = fopen( "PathToFile", "rb" );
m_pix = pixReadStreamBmp(pFile); // if other file format, use the according function
fclose(pFile);
// Now we have a Pix object from Leptonica

l_uint8* ptr_memory;
size_t len;
pixWriteMemBmp(&ptr_memory, &len, m_pix); // note: &len, matching the declaration above
// Now we have the picture somewhere in memory

QImage testimage;
QPixmap pixmap;
testimage.loadFromData((uchar *)ptr_memory, len);
pixmap.convertFromImage(testimage);
// Now we have the image as a pixmap in Qt
This actually works for me, though I don't know if there is a way to do it backwards as easily. (If there is, please let me know.)
Best Regards
You can save your pixmap to RAM instead of a file (use a QByteArray to store the data and a QBuffer as your I/O device).
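A minimal sketch of that direction, reusing the BMP round-trip shown above (QBuffer plus pixReadMemBmp; the cast mirrors the helper earlier in this thread):

// Sketch: QPixmap -> Leptonica PIX by serializing a BMP into RAM.
static PIX* pixmapToPix(const QPixmap &pixmap)
{
    QByteArray ba;
    QBuffer buf(&ba);
    buf.open(QIODevice::WriteOnly);
    pixmap.save(&buf, "BMP"); // write the pixmap as BMP into ba
    return pixReadMemBmp(reinterpret_cast<const l_uint8*>(ba.constData()),
                         ba.size());
}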