I have a char array (char* dataToInflate) obtained from a .gz file I would like to inflate into another char array.
I don't know the original decompressed size, so I believe this means I can't use the uncompress function that is within the zlib library, since per the manual:
The size of the uncompressed data must have been saved previously by the compressor and transmitted to the decompressor by some mechanism outside the scope of this compression library.
I have looked at the zpipe.c example (https://zlib.net/zpipe.c), and the inf function there looks suitable, but I'm not sure how to adapt it from FILEs to char arrays.
Does anyone know how or have any other ideas for inflating a char array into another char array?
Update:
I read here: Uncompress() of 'zlib' returns Z_DATA_ERROR
that uncompress isn't suitable for arrays obtained from gzip files.
I found that I could decompress the file in full using gzopen, gzread and gzclose like so:
gzFile in_file_gz = gzopen(gz_char_array, "rb");
char unzip_buffer[8192];
int unzipped_bytes;
std::vector<char> unzipped_data;
while (true) {
    unzipped_bytes = gzread(in_file_gz, unzip_buffer, 8192);
    if (unzipped_bytes > 0) {
        unzipped_data.insert(unzipped_data.end(), unzip_buffer, unzip_buffer + unzipped_bytes);
    } else {
        break;
    }
}
gzclose(in_file_gz);
but I would also like to be able to decompress the char array. I tried with the following method:
void test_inflate(Byte *compr, uLong comprLen, Byte *uncompr, uLong *uncomprLen) {
    int err;
    z_stream d_stream; /* decompression stream */
    d_stream.zalloc = NULL;
    d_stream.zfree = NULL;
    d_stream.opaque = NULL;
    d_stream.next_in = compr;
    d_stream.avail_in = 0;
    d_stream.next_out = uncompr;
    err = inflateInit2(&d_stream, MAX_WBITS + 16);
    CHECK_ERR(err, "inflateInit");
    while (d_stream.total_out < *uncomprLen && d_stream.total_in < comprLen) {
        d_stream.avail_in = d_stream.avail_out = 1; /* force small buffers */
        err = inflate(&d_stream, Z_NO_FLUSH);
        if (err == Z_STREAM_END)
            break;
        CHECK_ERR(err, "inflate");
    }
    err = inflateEnd(&d_stream);
    *uncomprLen = d_stream.total_out;
}
but in the while loop, inflate returns Z_STREAM_END before the file has been decompressed in full.
The method returns successfully, but only a partial buffer has been written.
I put a minimal working example here:
https://github.com/alanjtaylor/zlibExample
if anyone has time to look.
Thanks a lot!
In the example you have on GitHub, "zippedFile.gz" is a concatenation of seven independent gzip members. This is permitted by the gzip standard (RFC 1952), and the zlib gz* file functions automatically process all of the members.
pigz will show all of the members:
% pigz -lvt zippedFile.gz
method check timestamp compressed original reduced name
gzip 8 e323586d ------ ----- 616431 1543643 60.1% zippedFile
gzip 8 7efd928a ------ ----- 369231 921600 59.9% <...>
gzip 8 7ebd8b2a ------ ----- 919565 2319970 60.4% <...>
gzip 8 3dd6e2ba ------ ----- 619670 1549236 60.0% <...>
gzip 8 c1cb922e ------ ----- 600367 1533151 60.8% <...>
gzip 8 a9fef06c ------ ----- 620250 1541785 59.8% <...>
gzip 8 43b57506 ------ ----- 623081 1555203 59.9% <...>
The inflate* functions will only process one member at a time, in order to let you know with Z_STREAM_END that the member decompressed successfully and that the CRC checked out ok.
All you need to do is put your inflator in a loop and run it until the input is exhausted, or you run into an error. (This is noted in the documentation for inflateInit2 in zlib.h.)
There are a few issues with your inflator, but I understand that it is just an initial attempt to get things working, so I won't comment.
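For reference, here is a minimal sketch of that loop over a memory buffer, assuming the gzip data is already in memory. The helper name inflate_gzip_buffer and the 8 KiB chunk size are my own choices, not anything from the question's repository. The key points are the MAX_WBITS + 16 argument (gzip wrapper) and the inflateReset() after each Z_STREAM_END so the next member is processed.

// Minimal sketch (assumed helper, not drop-in code): inflate a gzip buffer that
// may contain several concatenated members, collecting all output in a vector.
// Error handling is reduced to early exits for brevity.
#include <vector>
#include <zlib.h>

std::vector<char> inflate_gzip_buffer(const unsigned char *src, size_t srcLen)
{
    std::vector<char> out;
    z_stream strm{};                                    // zero-initialise zalloc/zfree/opaque
    if (inflateInit2(&strm, MAX_WBITS + 16) != Z_OK)    // 16 => expect a gzip wrapper
        return out;

    strm.next_in  = const_cast<Bytef *>(src);
    strm.avail_in = static_cast<uInt>(srcLen);          // assumes the buffer fits in a uInt

    char buf[8192];
    while (strm.avail_in > 0) {                         // keep going until the input is exhausted
        strm.next_out  = reinterpret_cast<Bytef *>(buf);
        strm.avail_out = sizeof(buf);
        int ret = inflate(&strm, Z_NO_FLUSH);
        if (ret != Z_OK && ret != Z_STREAM_END)
            break;                                      // real error: bail out
        out.insert(out.end(), buf, buf + (sizeof(buf) - strm.avail_out));
        if (ret == Z_STREAM_END)
            inflateReset(&strm);                        // end of one member: reset and continue
    }
    inflateEnd(&strm);
    return out;
}

When the loop exits because avail_in reached zero, the vector holds the concatenated output of all members; any other exit means the data itself was bad.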
uncompress is indeed designed for the case where you have all that information ready. It's a utility function.
It probably wraps inflate, which is what you want to use. You have to run it in a loop and manage the "stream" parameters yourself by repeatedly pointing to the next chunk of buffered data until it's all been eaten.
There's an annotated example in the documentation.
Please explain whether this is a zlib bug or whether I misunderstand the use of zlib.
I am trying to do the following:
- I have two strings of data to compress, string_data_1 and string_data_2, which I compress with zlib as raw data.
- Next, I create a third string and copy the two compressed results into that single string.
- Now I decompress this combined compressed data, and there is a problem.
zlib decompressed only the "first" part of the compressed data and did not decompress the second part. Is that how it should be?
For comparison, in the facebook/zstd (Zstandard) library exactly the same action unpacks all of the compressed data, both the first and the second parts.
Here is a simple code:
#include <iostream>
#include <string>
#include <zlib.h>
int my_Zlib__compress__RAW(std::string& string_data_to_be_compressed, std::string& string_compressed_result, int level_compressed)
{
    //-------------------------------------------------------------------------
    uLong zlib_uLong = compressBound(string_data_to_be_compressed.size());
    string_compressed_result.resize(zlib_uLong);
    //-------------------------------------------------------------------------
    //this is the standard Zlib compress2 function - with one exception: the deflateInit2 function is used instead of the deflateInit function and the windowBits parameter is set to "-15" so that Zlib compresses the data as raw data:
    int status = my_compress2((Bytef*)&string_compressed_result[0], &zlib_uLong, (const Bytef*)&string_data_to_be_compressed[0], string_data_to_be_compressed.size(), level_compressed);
    if (status == Z_OK)
    {
        string_compressed_result.resize(zlib_uLong);
        return 0;
    }
    else
    {
        return 1;
    }
}
int my_Zlib__uncompress__RAW(std::string& string_data_to_be_uncompressed, std::string& string_compressed_data, size_t size_uncompressed_data)
{
    //-------------------------------------------------------------------------
    string_data_to_be_uncompressed.resize(size_uncompressed_data);
    //-------------------------------------------------------------------------
    //this is the standard Zlib uncompress function - with one exception: the inflateInit2 function is used instead of the inflateInit function and the windowBits parameter is set to "-15" so that Zlib uncompresses the data as raw data:
    int status = my_uncompress((Bytef*)&string_data_to_be_uncompressed[0], (uLongf*)&size_uncompressed_data, (const Bytef*)&string_compressed_data[0], string_compressed_data.size());
    if (status == Z_OK)
    {
        return 0;
    }
    else
    {
        return 1;
    }
}
int main()
{
    int level_compressed = 9;
    //------------------------------------------Compress_1-------------------------------------------
    std::string string_data_1 = "Hello12_Hello12_Hello125"; //The data to be compressed.
    std::string string_compressed_result_RAW_1; //Compressed data will be written here
    int status = my_Zlib__compress__RAW(string_data_1, string_compressed_result_RAW_1, level_compressed);
    //------------------------------------------------------------------------------------------------
    //------------------------------------------Compress_2-------------------------------------------
    std::string string_data_2 = "BUY22_BUY22_BUY223"; //The data to be compressed.
    std::string string_compressed_result_RAW_2; //Compressed data will be written here
    status = my_Zlib__compress__RAW(string_data_2, string_compressed_result_RAW_2, level_compressed);
    //------------------------------------------------------------------------------------------------
    std::string Total_compressed_data = string_compressed_result_RAW_1 + string_compressed_result_RAW_2; //Combine the two compressed results into one string
    //Now I want to uncompress the data in the string "Total_compressed_data"
    //------------------------------------------Uncompress-------------------------------------------
    std::string string_uncompressed_result_RAW; //Uncompressed data will be written here
    int size_that_should_be_when_unpacking = string_data_1.size() + string_data_2.size();
    status = my_Zlib__uncompress__RAW(string_uncompressed_result_RAW, Total_compressed_data, size_that_should_be_when_unpacking);
    //------------------------------------------------------------------------------------------------
    std::cout << string_uncompressed_result_RAW << std::endl; //Hello12_Hello12_Hello125
}
zlib decompressed only the "first" part of the compressed data and did not decompress the "second" part.
Is that how it should be?
As noted in the comments, a concatenation of zlib streams is not a zlib stream. You need to uncompress again for the second zlib stream. Or compress the whole thing as one zlib stream in the first place.
You would need to use a variant of uncompress2(), not uncompress(), since the former will return the size of the first decompressed zlib stream in the last parameter, so that you know where to start decompressing the second one.
Better yet, you should use the inflate() functions instead for your application. The retention of the uncompressed size for use in decompression means that you'd need that on the other end. How do you get that? Are you transmitting it separately? You do not need that. You should use inflate() to decompress a chunk at a time, and then you don't need to know the uncompressed size ahead of time.
You should also use the deflate() functions for compression. Then you can keep the stream open, and keep compressing until you're done. Then you will have a single zlib stream.
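As an illustration of that last point, here is a minimal sketch (my own helper, not part of zlib) that pushes any number of strings through a single raw-deflate stream, matching the windowBits of -15 used in the question. The receiver then sees one stream, and a single inflate pass recovers everything.

// Minimal sketch (assumed helper): feed several strings through one raw-deflate
// stream so that the result is a single stream, not a concatenation of streams.
// Assumes 'pieces' is non-empty; error handling is kept to a minimum.
#include <string>
#include <vector>
#include <zlib.h>

std::string deflate_strings_raw(const std::vector<std::string> &pieces, int level)
{
    z_stream strm{};
    // windowBits = -15 selects a raw deflate stream, as in the question.
    if (deflateInit2(&strm, level, Z_DEFLATED, -15, 8, Z_DEFAULT_STRATEGY) != Z_OK)
        return {};

    std::string out;
    char buf[16384];
    for (size_t i = 0; i < pieces.size(); ++i) {
        strm.next_in  = reinterpret_cast<Bytef *>(const_cast<char *>(pieces[i].data()));
        strm.avail_in = static_cast<uInt>(pieces[i].size());
        int flush = (i + 1 == pieces.size()) ? Z_FINISH : Z_NO_FLUSH;  // finish on the last piece
        do {
            strm.next_out  = reinterpret_cast<Bytef *>(buf);
            strm.avail_out = sizeof(buf);
            deflate(&strm, flush);               // only fails if the stream state is clobbered
            out.append(buf, sizeof(buf) - strm.avail_out);
        } while (strm.avail_out == 0);           // loop until deflate stops filling the buffer
    }
    deflateEnd(&strm);
    return out;                                  // one raw deflate stream covering all pieces
}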
I am writing a C++ library that also decompresses zlib files. For all of the files, the last call to gzread() (or at least one of the last calls) gives error -3 (Z_DATA_ERROR) with message "incorrect data check". As I have not created the files myself I am not entirely sure what is wrong.
I found this answer and if I do
gzip -dc < myfile.gz > myfile.decomp
gzip: invalid compressed data--crc error
on the command line the contents of myfile.decomp seems to be correct. There is still the crc error printed in this case, however, which may or may not be the same problem. My code, pasted below, should be straightforward, but I am not sure how to get the same behavior in code as on the command line above.
How can I achieve the same behavior in code as on the command line?
std::vector<char> decompress(const std::string &path)
{
    gzFile inFileZ = gzopen(path.c_str(), "rb");
    if (inFileZ == NULL)
    {
        printf("Error: gzopen() failed for file %s.\n", path.c_str());
        return {};
    }
    constexpr size_t bufSize = 8192;
    char unzipBuffer[bufSize];
    int unzippedBytes = bufSize;
    std::vector<char> unzippedData;
    unzippedData.reserve(1048576); // 1 MiB is enough in most cases.
    while (unzippedBytes == bufSize)
    {
        unzippedBytes = gzread(inFileZ, unzipBuffer, bufSize);
        if (unzippedBytes == -1)
        {
            // Here the error is -3 / "incorrect data check" for (one of) the last block(s)
            // in the file. The bytes can be correctly decompressed, as demonstrated on the
            // command line, but how can this be achieved in code?
            int errnum;
            const char *err = gzerror(inFileZ, &errnum);
            printf("%s\n", err);
            break;
        }
        if (unzippedBytes > 0)
        {
            unzippedData.insert(unzippedData.end(), unzipBuffer, unzipBuffer + unzippedBytes);
        }
    }
    gzclose(inFileZ);
    return unzippedData;
}
First off, the whole point of the CRC is to detect corrupted data. If the CRC is bad, then you should be going back to where this file came from and getting the data not corrupted. If the CRC is bad, discard the input and report an error.
You are not clear on the "behavior" you are trying to reproduce, but if you're trying to recover as much data as possible from a corrupted gzip file, then you will need to use zlib's inflate functions to decompress the file. int ret = inflateInit2(&strm, 31); will initialize the zlib stream to process a gzip file.
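As a rough sketch of what that salvage path could look like (the helper name and buffer size are my own assumptions, not a prescribed API), the idea is to run inflate() yourself and keep whatever output was produced, even when the final return is Z_DATA_ERROR for the failed check:

// Minimal sketch (assumed helper): decompress a gzip buffer with the inflate
// interface and keep whatever came out, even if the trailing CRC check fails.
#include <vector>
#include <zlib.h>

std::vector<char> inflate_salvage(const unsigned char *src, size_t srcLen)
{
    std::vector<char> out;
    z_stream strm{};
    if (inflateInit2(&strm, 31) != Z_OK)    // 31 = 15 window bits + gzip wrapper
        return out;

    strm.next_in  = const_cast<Bytef *>(src);
    strm.avail_in = static_cast<uInt>(srcLen);

    char buf[8192];
    int ret;
    do {
        strm.next_out  = reinterpret_cast<Bytef *>(buf);
        strm.avail_out = sizeof(buf);
        ret = inflate(&strm, Z_NO_FLUSH);
        out.insert(out.end(), buf, buf + (sizeof(buf) - strm.avail_out));
        // A Z_DATA_ERROR at the very end is the failed check; the bytes gathered
        // so far are still in 'out'. Z_STREAM_END means the data was intact.
    } while (ret == Z_OK);

    inflateEnd(&strm);
    return out;
}

Again, treat such output with suspicion: a failed check means something in the file is not what the compressor wrote.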
I've written a program, generating a tarball, which gets compressed by zlib.
At regular intervals, the same program is supposed to add a new file to the tarball.
By definition, the tarball needs empty records (512-byte blocks) at its end to work properly, which already shows my problem.
According to the documentation, gzopen is unable to open the file in r+ mode, meaning I can't simply jump to the beginning of the empty records, append my file information, and seal it again with empty records.
Right now, I'm at my wits' end. Appending works fine with zlib, as long as the empty records are not involved, yet I need them to 'finalize' my compressed tarball.
Any ideas?
Ah yes, it would be nice if I could avoid decompressing the whole thing and/or parsing the entire tarball.
I'm also open for other (preferably simple) file formats I could implement instead of tar.
This is two separate problems, both of which are solvable.
The first is how to append to a tar file. All you need to do there is overwrite the final two zeroed 512-byte blocks with your file. You would write the 512-byte tar header, your file rounded up to an integer number of 512-byte blocks, and then two 512-byte blocks filled with zeros to mark the new end of the tar file.
The second is how to frequently append to a gzip file. The simplest approach is to write separate gzip streams and concatenate them. Write the last two 512-byte zeroed blocks in a separate gzip stream, and remember where that starts. Then overwrite that with a new gzip stream with the new tar entry, and then another gzip stream with the two end blocks. This can be done by seeking back in the file with lseek() and then using gzdopen() to start writing from there.
That will work well, with good compression, for added files that are large (at a minimum several 10's of K). If however you are adding very small files, simply concatenating small gzip streams will result in lousy compression, or worse, expansion. You can do something more complicated to actually add small amounts of data to a single gzip stream so that the compression algorithm can make use of the preceding data for correlation and string matching. For that, take a look at the approach in gzlog.h and gzlog.c in examples/ in the zlib distribution.
Here is an example of how to do the simple approach:
/* tapp.c -- Example of how to append to a tar.gz file with concatenated gzip
streams. Placed in the public domain by Mark Adler, 16 Jan 2013. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <unistd.h>
#include <fcntl.h>
#include "zlib.h"
#define local static
/* Build an allocated string with the prefix string and the NULL-terminated
sequence of words strings separated by spaces. The caller should free the
returned string when done with it. */
local char *build_cmd(char *prefix, char **words)
{
    size_t len;
    char **scan;
    char *str, *next;

    len = strlen(prefix) + 1;
    for (scan = words; *scan != NULL; scan++)
        len += strlen(*scan) + 1;
    str = malloc(len);  assert(str != NULL);
    next = stpcpy(str, prefix);
    for (scan = words; *scan != NULL; scan++) {
        *next++ = ' ';
        next = stpcpy(next, *scan);
    }
    return str;
}
/* Usage:
tapp archive.tar.gz addthis.file andthisfile.too
tapp will create a new archive.tar.gz file if it doesn't exist, or it will
append the files to the existing archive.tar.gz. tapp must have been used
to create the archive in the first place. If it did not, then tapp will
exit with an error and leave the file unchanged. Each use of tapp appends a
new gzip stream whose compression cannot benefit from the files already in
the archive. As a result, tapp should not be used to append a small amount
of data at a time, else the compression will be particularly poor. Since
this is just an instructive example, the error checking is done mostly with
asserts.
*/
int main(int argc, char **argv)
{
    int tgz;
    off_t offset;
    char *cmd;
    FILE *pipe;
    gzFile gz;
    int page;
    size_t got;
    int ret;
    ssize_t raw;
    unsigned char buf[3][512];
    const unsigned char z1k[] = /* gzip stream of 1024 zeros */
        {0x1f, 0x8b, 8, 0, 0, 0, 0, 0, 2, 3, 0x63, 0x60, 0x18, 5, 0xa3, 0x60,
         0x14, 0x8c, 0x54, 0, 0, 0x2e, 0xaf, 0xb5, 0xef, 0, 4, 0, 0};

    if (argc < 2)
        return 0;
    tgz = open(argv[1], O_RDWR | O_CREAT, 0644);  assert(tgz != -1);
    offset = lseek(tgz, 0, SEEK_END);  assert(offset == 0 || offset >= (off_t)sizeof(z1k));
    if (offset) {
        if (argc == 2) {
            close(tgz);
            return 0;
        }
        offset = lseek(tgz, -sizeof(z1k), SEEK_END);  assert(offset != -1);
        raw = read(tgz, buf, sizeof(z1k));  assert(raw == sizeof(z1k));
        if (memcmp(buf, z1k, sizeof(z1k)) != 0) {
            close(tgz);
            fprintf(stderr, "tapp abort: %s was not created by tapp\n", argv[1]);
            return 1;
        }
        offset = lseek(tgz, -sizeof(z1k), SEEK_END);  assert(offset != -1);
    }
    if (argc > 2) {
        gz = gzdopen(tgz, "wb");  assert(gz != NULL);
        cmd = build_cmd("tar cf - -b 1", argv + 2);
        pipe = popen(cmd, "r");  assert(pipe != NULL);
        free(cmd);
        got = fread(buf, 1, 1024, pipe);  assert(got == 1024);
        page = 2;
        while ((got = fread(buf[page], 1, 512, pipe)) == 512) {
            if (++page == 3)
                page = 0;
            ret = gzwrite(gz, buf[page], 512);  assert(ret == 512);
        }  assert(got == 0);
        ret = pclose(pipe);  assert(ret != -1);
        ret = gzclose(gz);  assert(ret == Z_OK);
        tgz = open(argv[1], O_WRONLY | O_APPEND);  assert(tgz != -1);
    }
    raw = write(tgz, z1k, sizeof(z1k));  assert(raw == sizeof(z1k));
    close(tgz);
    return 0;
}
In my opinion this is not possible while conforming strictly to the TAR standard. I have read through the zlib [1] manual and the GNU tar [2] file specification, and I did not find any information on how appending to a TAR could be implemented. So I am assuming it has to be done by overwriting the empty blocks.
So I assume, again, that you can do it by using gzseek(). However, you would need to know how large the uncompressed archive is (size) and set the offset to size - 2*512.
Note that this might be cumbersome, since "The whence parameter is defined as in lseek(2); the value SEEK_END is not supported." [1], and you can't open the file for reading and writing at the same time, i.e. to inspect where the end blocks are.
However, it should be possible by abusing the TAR spec slightly. The GNU tar [2] docs mention something funny:
"Each file archived is represented by a header block which describes the file, followed by zero or more blocks which give the contents of the file. At the end of the archive file there are two 512-byte blocks filled with binary zeros as an end-of-file marker. A reasonable system should write such end-of-file marker at the end of an archive, but must not assume that such a block exists when reading an archive. In particular GNU tar always issues a warning if it does not encounter it."
This means you can deliberately not write those blocks. This is easy if you wrote the tarball compressor. Then you can use zlib in the normal append mode, remembering that the TAR reader must be aware of the "broken" TAR file.
[1] http://www.zlib.net/manual.html#Gzip
[2] http://www.gnu.org/software/tar/manual/html_node/Standard.html#SEC182
I have a task to edit EXIF tags and add application-specific values to them.
If the EXIF tags exist, libexif is more than happy to edit them.
But if the EXIF tags don't exist, I will have to create them and append them to the file.
libexif uses the C fopen, so I don't think there is going to be an easy way without some IO manipulation.
I am thinking of reading the raw image data into memory, fopen(newfile, 'w'),
adding the EXIF data,
and then appending the image data.
That is, unless someone knows an easier way (I am restricted to libexif; libexiv2 might create a licence conflict).
For the common good, I am going to answer my own question: the exif application has a modified libjpeg that enables manipulation of the raw JPEG data.
It has functions like
jpeg_data_load_data (JPEGData *data, const unsigned char *d, unsigned int size);
and
jpeg_data_set_exif_data(myJPEGImage, exif); jpeg_data_save_file(myJPEGImage, "gangrene1.jpg");
These can be used; also, freely available programs like ImageMagick have their own libjpeg and libexif implementations for manipulating EXIF and JPEG data.
Hope this helps.
I have just gone down the same road as you with choosing between libexif and libexiv2. I went with libexif due to the licensing.
Back to the question at hand,
libexif doesn't support loading JPGs directly. You'll need another package to read in the JPG and extract the EXIF header (or you could write something yourself).
There is an excellent GitHub project called exifyay that uses libexif and has two extra libs that handle reading in JPGs. It is a Python project, but the sources for the libraries are C. You can find exifyay here (note I am not involved in any way with exifyay or libexif).
I have just recently compiled libexif and merged sources from exifyay into a VS2010 project here. There is an example in the folder 'contrib\examples\LibexifExample'. If you don't like downloading random links here is a sample of the code I got working:
/*
* write-exif.c
*
* Placed into the public domain by Daniel Fandrich
*
* Create a new EXIF data block and write it into a JPEG image file.
*
* The JPEG image data used in this example is fixed and is guaranteed not
* to contain an EXIF tag block already, so it is easy to precompute where
* in the file the EXIF data should be. In real life, a library like
* libjpeg (included with the exif command-line tool source code) would
* be used to write to an existing JPEG file.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <libexif/exif-data.h>
#include <libjpeg/jpeg-data.h>
#include <JpegEncoderEXIF/JpegEncoderEXIF.h>
/* byte order to use in the EXIF block */
#define FILE_BYTE_ORDER EXIF_BYTE_ORDER_INTEL
/* comment to write into the EXIF block */
#define FILE_COMMENT "libexif demonstration image"
/* special header required for EXIF_TAG_USER_COMMENT */
#define ASCII_COMMENT "ASCII\0\0\0"
static ExifEntry *create_tag(ExifData *exif, ExifIfd ifd, ExifTag tag, size_t len)
{
    void *buf;
    ExifEntry *entry;

    /* Create a memory allocator to manage this ExifEntry */
    ExifMem *mem = exif_mem_new_default();
    assert(mem != NULL); /* catch an out of memory condition */

    /* Create a new ExifEntry using our allocator */
    entry = exif_entry_new_mem (mem);
    assert(entry != NULL);

    /* Allocate memory to use for holding the tag data */
    buf = exif_mem_alloc(mem, len);
    assert(buf != NULL);

    /* Fill in the entry */
    entry->data = (unsigned char*)buf;
    entry->size = len;
    entry->tag = tag;
    entry->components = len;
    entry->format = EXIF_FORMAT_UNDEFINED;

    /* Attach the ExifEntry to an IFD */
    exif_content_add_entry (exif->ifd[ifd], entry);

    /* The ExifMem and ExifEntry are now owned elsewhere */
    exif_mem_unref(mem);
    exif_entry_unref(entry);

    return entry;
}
int main(int argc, char **argv)
{
    ExifEntry *entry;

    //Input JPG
    char mInputFilename[] = "example.jpg";

    //Load JPG
    JPEGData *mJpegData = jpeg_data_new_from_file(mInputFilename);

    //Load Exif data from JPG
    ExifData *mExifData = jpeg_data_get_exif_data(mJpegData);

    //Set some Exif options
    exif_data_set_option(mExifData, EXIF_DATA_OPTION_FOLLOW_SPECIFICATION);
    exif_data_set_data_type(mExifData, EXIF_DATA_TYPE_COMPRESSED);
    exif_data_set_byte_order(mExifData, FILE_BYTE_ORDER);

    entry = create_tag(mExifData, EXIF_IFD_EXIF, EXIF_TAG_USER_COMMENT,
                       sizeof(ASCII_COMMENT) + sizeof(FILE_COMMENT) - 2);
    /* Write the special header needed for a comment tag */
    memcpy(entry->data, ASCII_COMMENT, sizeof(ASCII_COMMENT) - 1);
    /* Write the actual comment text, without the trailing NUL character */
    memcpy(entry->data + 8, FILE_COMMENT, sizeof(FILE_COMMENT) - 1);
    /* create_tag() happens to set the format and components correctly for
     * EXIF_TAG_USER_COMMENT, so there is nothing more to do. */

    /* Create a EXIF_TAG_SUBJECT_AREA tag */
    entry = create_tag(mExifData, EXIF_IFD_EXIF, EXIF_TAG_SUBJECT_AREA,
                       4 * exif_format_get_size(EXIF_FORMAT_SHORT));
    entry->format = EXIF_FORMAT_SHORT;
    entry->components = 4;

    //Write back exif data
    jpeg_data_set_exif_data(mJpegData, mExifData);

    //Save to JPG
    jpeg_data_save_file(mJpegData, "test.jpg");

    return 0;
}
The two OpenCV functions cvLoadImage and cvSaveImage accept file paths as arguments.
For example, when saving an image it's cvSaveImage("/tmp/output.jpg", dstIpl), and it writes to the disk.
Is there any way to feed this a buffer already in memory? So instead of a disk write, the output image will be in memory.
I would also like to know this for both cvSaveImage and cvLoadImage (read and write to memory buffers). Thanks!
My goal is to store the encoded (JPEG) version of the file in memory. The same goes for cvLoadImage: I want to load a JPEG that's in memory into the IplImage format.
This worked for me
// decode jpg (or other image from a pointer)
// imageBuf contains the jpg image
cv::Mat imgbuf = cv::Mat(480, 640, CV_8U, imageBuf);
cv::Mat imgMat = cv::imdecode(imgbuf, CV_LOAD_IMAGE_COLOR);
// imgMat is the decoded image
// encode image into jpg
cv::vector<uchar> buf;
cv::imencode(".jpg", imgMat, buf, std::vector<int>() );
// encoded image is now in buf (a vector)
imageBuf = (unsigned char *) realloc(imageBuf, buf.size());
memcpy(imageBuf, &buf[0], buf.size());
// size of imageBuf is buf.size();
I was asked about a C version instead of C++:
#include <opencv/cv.h>
#include <opencv/highgui.h>
int main(int argc, char **argv)
{
    char *cvwin = "camimg";
    cvNamedWindow(cvwin, CV_WINDOW_AUTOSIZE);

    // setup code, initialization, etc ...
    [ ... ]

    while (1) {
        // getImage was my routine for getting a jpeg from a camera
        char *img = getImage(fp);
        CvMat mat;
        // substitute 640/480 with your image width, height
        cvInitMatHeader(&mat, 640, 480, CV_8UC3, img, 0);
        IplImage *cvImg = cvDecodeImage(&mat, CV_LOAD_IMAGE_COLOR);
        cvShowImage(cvwin, cvImg);
        cvReleaseImage(&cvImg);
        if (27 == cvWaitKey(1)) // exit when user hits 'ESC' key
            break;
    }
    cvDestroyWindow(cvwin);
}
There are a couple of undocumented functions in the SVN version of the library:
CV_IMPL CvMat* cvEncodeImage( const char* ext, const CvArr* arr, const int* _params );
CV_IMPL IplImage* cvDecodeImage( const CvMat* _buf, int iscolor );
The latest check-in message states that they are for native encoding/decoding of bmp, png, ppm and tiff (encoding only).
Alternatively you could use a standard image encoding library (e.g. libjpeg) and manipulate the data in the IplImage to match the input structure of the encoding library.
I'm assuming you're working in linux. From libjpeg.doc:
The rough outline of a JPEG compression operation is:

1. Allocate and initialize a JPEG compression object
2. Specify the destination for the compressed data (eg, a file)
3. Set parameters for compression, including image size & colorspace
4. jpeg_start_compress(...);
5. while (scan lines remain to be written)
       jpeg_write_scanlines(...);
6. jpeg_finish_compress(...);
7. Release the JPEG compression object
The real trick for doing what you want to do is providing a custom "data destination (or source) manager" which is defined in jpeglib.h:
struct jpeg_destination_mgr {
    JOCTET *next_output_byte;   /* => next byte to write in buffer */
    size_t free_in_buffer;      /* # of byte spaces remaining in buffer */

    JMETHOD(void, init_destination, (j_compress_ptr cinfo));
    JMETHOD(boolean, empty_output_buffer, (j_compress_ptr cinfo));
    JMETHOD(void, term_destination, (j_compress_ptr cinfo));
};
Basically set that up so your source and/or destination are the memory buffers you want, and you should be good to go.
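As a side note, if your libjpeg is new enough (libjpeg 8 or libjpeg-turbo), it already ships memory managers, jpeg_mem_dest() and jpeg_mem_src(), so you may not need to write one yourself. Here is a rough sketch of the compression side, assuming such a libjpeg; the helper name compress_to_memory and the quality setting are my own choices.

// Minimal sketch, assuming a libjpeg that provides jpeg_mem_dest() (libjpeg 8
// or libjpeg-turbo). Compresses raw RGB scanlines straight into a malloc'ed
// memory buffer instead of a FILE*.
#include <cstdlib>
#include <jpeglib.h>   // some installations need extern "C" around this include

unsigned char *compress_to_memory(const unsigned char *rgb, int width, int height,
                                  unsigned long *jpegSize)
{
    jpeg_compress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);

    unsigned char *jpegBuf = nullptr;      // libjpeg allocates and grows this for us
    jpeg_mem_dest(&cinfo, &jpegBuf, jpegSize);

    cinfo.image_width = width;
    cinfo.image_height = height;
    cinfo.input_components = 3;            // packed RGB input
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, 90, TRUE);

    jpeg_start_compress(&cinfo, TRUE);
    while (cinfo.next_scanline < cinfo.image_height) {
        JSAMPROW row = const_cast<unsigned char *>(rgb) + cinfo.next_scanline * width * 3;
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
    return jpegBuf;                        // caller frees with free()
}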
As an aside, this post could be a lot better but the libjpeg62 documentation is, quite frankly, superb. Just apt-get libjpeg62-dev and read libjpeg.doc and look at example.c. If you run into problems and can't get something to work, just post again and I'm sure someone will be able to help.
All you need to load files from the memory buffer is a different src manager (libjpeg). I have tested the following code in Ubuntu 8.10.
/******************************** First define mem buffer function bodies **************/
/*
* memsrc.c
*
* Copyright (C) 1994-1996, Thomas G. Lane.
* This file is part of the Independent JPEG Group's software.
* For conditions of distribution and use, see the accompanying README file.
*
* This file contains decompression data source routines for the case of
* reading JPEG data from a memory buffer that is preloaded with the entire
* JPEG file. This would not seem especially useful at first sight, but
* a number of people have asked for it.
* This is really just a stripped-down version of jdatasrc.c. Comparison
* of this code with jdatasrc.c may be helpful in seeing how to make
* custom source managers for other purposes.
*/
/* this is not a core library module, so it doesn't define JPEG_INTERNALS */
//#include "jinclude.h"
#include "jpeglib.h"
#include "jerror.h"
/* Expanded data source object for memory input */
typedef struct {
    struct jpeg_source_mgr pub; /* public fields */

    JOCTET eoi_buffer[2];       /* a place to put a dummy EOI */
} my_source_mgr;

typedef my_source_mgr * my_src_ptr;
/*
* Initialize source --- called by jpeg_read_header
* before any data is actually read.
*/
METHODDEF(void)
init_source (j_decompress_ptr cinfo)
{
    /* No work, since jpeg_memory_src set up the buffer pointer and count.
     * Indeed, if we want to read multiple JPEG images from one buffer,
     * this *must* not do anything to the pointer.
     */
}
/*
* Fill the input buffer --- called whenever buffer is emptied.
*
* In this application, this routine should never be called; if it is called,
* the decompressor has overrun the end of the input buffer, implying we
* supplied an incomplete or corrupt JPEG datastream. A simple error exit
* might be the most appropriate response.
*
* But what we choose to do in this code is to supply dummy EOI markers
* in order to force the decompressor to finish processing and supply
* some sort of output image, no matter how corrupted.
*/
METHODDEF(boolean)
fill_input_buffer (j_decompress_ptr cinfo)
{
    my_src_ptr src = (my_src_ptr) cinfo->src;

    WARNMS(cinfo, JWRN_JPEG_EOF);

    /* Create a fake EOI marker */
    src->eoi_buffer[0] = (JOCTET) 0xFF;
    src->eoi_buffer[1] = (JOCTET) JPEG_EOI;
    src->pub.next_input_byte = src->eoi_buffer;
    src->pub.bytes_in_buffer = 2;

    return TRUE;
}
/*
* Skip data --- used to skip over a potentially large amount of
* uninteresting data (such as an APPn marker).
*
* If we overrun the end of the buffer, we let fill_input_buffer deal with
* it. An extremely large skip could cause some time-wasting here, but
* it really isn't supposed to happen ... and the decompressor will never
* skip more than 64K anyway.
*/
METHODDEF(void)
skip_input_data (j_decompress_ptr cinfo, long num_bytes)
{
    my_src_ptr src = (my_src_ptr) cinfo->src;

    if (num_bytes > 0) {
        while (num_bytes > (long) src->pub.bytes_in_buffer) {
            num_bytes -= (long) src->pub.bytes_in_buffer;
            (void) fill_input_buffer(cinfo);
            /* note we assume that fill_input_buffer will never return FALSE,
             * so suspension need not be handled.
             */
        }
        src->pub.next_input_byte += (size_t) num_bytes;
        src->pub.bytes_in_buffer -= (size_t) num_bytes;
    }
}
/*
* An additional method that can be provided by data source modules is the
* resync_to_restart method for error recovery in the presence of RST markers.
* For the moment, this source module just uses the default resync method
* provided by the JPEG library. That method assumes that no backtracking
* is possible.
*/
/*
* Terminate source --- called by jpeg_finish_decompress
* after all data has been read. Often a no-op.
*
* NB: *not* called by jpeg_abort or jpeg_destroy; surrounding
* application must deal with any cleanup that should happen even
* for error exit.
*/
METHODDEF(void)
term_source (j_decompress_ptr cinfo)
{
    /* no work necessary here */
}
/*
* Prepare for input from a memory buffer.
*/
GLOBAL(void)
jpeg_memory_src (j_decompress_ptr cinfo, const JOCTET * buffer, size_t bufsize)
{
    my_src_ptr src;

    /* The source object is made permanent so that a series of JPEG images
     * can be read from a single buffer by calling jpeg_memory_src
     * only before the first one.
     * This makes it unsafe to use this manager and a different source
     * manager serially with the same JPEG object. Caveat programmer.
     */
    if (cinfo->src == NULL) { /* first time for this JPEG object? */
        cinfo->src = (struct jpeg_source_mgr *)
            (*cinfo->mem->alloc_small) ((j_common_ptr) cinfo, JPOOL_PERMANENT,
                                        SIZEOF(my_source_mgr));
    }

    src = (my_src_ptr) cinfo->src;
    src->pub.init_source = init_source;
    src->pub.fill_input_buffer = fill_input_buffer;
    src->pub.skip_input_data = skip_input_data;
    src->pub.resync_to_restart = jpeg_resync_to_restart; /* use default method */
    src->pub.term_source = term_source;
    src->pub.next_input_byte = buffer;
    src->pub.bytes_in_buffer = bufsize;
}
Then the usage is pretty simple. You may need to replace SIZEOF() with sizeof(). Find a standard decompression example. Just replace "jpeg_stdio_src" with "jpeg_memory_src". Hope that helps!
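For completeness, here is a rough sketch of that usage (the surrounding helper and variable names are mine); it is the standard libjpeg decompression sequence with the jpeg_memory_src() defined above swapped in for jpeg_stdio_src():

// Rough usage sketch for the jpeg_memory_src() defined above. Buffer handling
// (row stride, colour space) is reduced to the bare minimum.
#include <vector>
#include <jpeglib.h>   // some installations need extern "C" around this include

std::vector<unsigned char> decode_jpeg_from_memory(const JOCTET *buf, size_t bufSize,
                                                   int *width, int *height)
{
    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);

    jpeg_memory_src(&cinfo, buf, bufSize);   // instead of jpeg_stdio_src(&cinfo, file)

    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    *width  = cinfo.output_width;
    *height = cinfo.output_height;
    size_t stride = cinfo.output_width * cinfo.output_components;
    std::vector<unsigned char> pixels(stride * cinfo.output_height);

    while (cinfo.output_scanline < cinfo.output_height) {
        JSAMPROW row = &pixels[cinfo.output_scanline * stride];
        jpeg_read_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    return pixels;
}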
Here's an example in Delphi. It converts a 24bit bitmap for use with OpenCV
function BmpToPIplImageEx(Bmp: TBitmap): pIplImage;
Var
i: Integer;
offset: LongInt;
dataByte: PByteArray;
Begin
Assert(Bmp.PixelFormat = pf24bit, 'PixelFormat must be 24bit');
Result := cvCreateImageHeader(cvSize(Bmp.Width, Bmp.Height), IPL_DEPTH_8U, 3);
cvCreateData(Result);
for i := 0 to Bmp.height - 1 do
Begin
offset := longint(Result.imageData) + Result.WidthStep * i;
dataByte := PByteArray(offset);
CopyMemory(dataByte, Bmp.Scanline[i], Result.WidthStep);
End;
End;
This is an indirect answer...
In the past, I've used libpng and libjpeg directly to do this. They have a low-level enough API that you can use memory buffers instead of file buffers for reading and writing.
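For the libpng side, the hook is png_set_read_fn() with a small callback that copies bytes out of your buffer. A minimal sketch (the MemReader struct and function names are my own, not part of libpng):

// Minimal sketch: a custom libpng read callback that pulls bytes from an
// in-memory buffer instead of a FILE*. MemReader and mem_read_fn are assumed names.
#include <cstring>
#include <png.h>

struct MemReader {
    const unsigned char *data;
    size_t size;
    size_t offset;
};

static void mem_read_fn(png_structp png_ptr, png_bytep out, png_size_t count)
{
    MemReader *r = static_cast<MemReader *>(png_get_io_ptr(png_ptr));
    if (r->offset + count > r->size)
        png_error(png_ptr, "read past end of in-memory PNG");   // longjmps out
    std::memcpy(out, r->data + r->offset, count);
    r->offset += count;
}

// Hook the callback up before any reading happens, then proceed as with a file:
//   MemReader reader = { pngBytes, pngByteCount, 0 };
//   png_structp png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
//   png_infop info_ptr = png_create_info_struct(png_ptr);
//   png_set_read_fn(png_ptr, &reader, mem_read_fn);
//   png_read_info(png_ptr, info_ptr);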