Deflate and inflate for PDF, using zlib (C++)

I am trying to use the "zlib.h" deflate and inflate functions to compress and decompress streams in a PDF file.
Input: a compressed stream from a PDF file. I implemented the inflate step and it works: I get the uncompressed stream. After that I try to compress this stream again with the deflate function. As output I get a compressed stream, but it is not equal to the input compressed stream, and the two are not even equal in length. What am I doing wrong? This is a part of my code:
size_t outsize = (streamend - streamstart) * 10;
char* output = new char[outsize]; ZeroMemory(output, outsize);
z_stream zstrm; ZeroMemory(&zstrm, sizeof(zstrm));
zstrm.avail_in = streamend - streamstart + 1;
zstrm.avail_out = outsize;
zstrm.next_in = (Bytef*)(buffer + streamstart);//block of data to inflate
zstrm.next_out = (Bytef*)output;
int rsti = inflateInit(&zstrm);
if (rsti == Z_OK)
{
int rst2 = inflate(&zstrm, Z_FINISH);
if (rst2 >= 0)
{
cout << output << endl;//inflated data
}
}
char* deflate_output = new char[streamend - streamstart];
ZeroMemory(deflate_output, streamend - streamstart);
z_stream d_zstrm; ZeroMemory(&d_zstrm, sizeof(d_zstrm));
d_zstrm.avail_in = (uInt) (strlen(output)+1);
d_zstrm.avail_out = (uInt) (streamend - streamstart);
d_zstrm.next_in = (Bytef*)(output);
d_zstrm.next_out = (Bytef*)(deflate_output);
int rsti1 = deflateInit(&d_zstrm, Z_DEFAULT_COMPRESSION);
if (rsti1 == Z_OK)
{
int rst22 = deflate(&d_zstrm, Z_FINISH);
out << deflate_output << endl << "**********************" << endl;
//I try to write deflated stream to file
printf("New size of stream: %lu\n", (char*)d_zstrm.next_out - deflate_output);
}

There is nothing wrong. There is not a unique compressed stream for a given uncompressed stream. All that is required is that decompression gives you back exactly what was compressed (hence "lossless").
The difference may simply be caused by different compression parameters, different compression code, or even a different version of the same compression code.
If you can't reproduce the original compressed data, so what? All that matters is that you can make a valid PDF file that can be decompressed and has the content that you want.
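To illustrate, here is a minimal sketch (using zlib's one-shot helpers; the input text is just a stand-in for an uncompressed PDF stream) showing that two different compression levels can produce byte-wise different streams that both decompress back to the identical input:

#include <cassert>
#include <cstring>
#include <vector>
#include <zlib.h>

int main()
{
    const char text[] = "some uncompressed PDF stream content, repeated: content content";
    uLong srcLen = sizeof(text);

    // Compress the same input twice with different compression levels.
    uLongf len1 = compressBound(srcLen), len2 = compressBound(srcLen);
    std::vector<Bytef> out1(len1), out2(len2);
    compress2(out1.data(), &len1, (const Bytef*)text, srcLen, Z_BEST_SPEED);
    compress2(out2.data(), &len2, (const Bytef*)text, srcLen, Z_BEST_COMPRESSION);

    // The two compressed streams may differ in both content and length,
    // yet both inflate back to exactly the original bytes.
    uLongf dlen1 = srcLen, dlen2 = srcLen;
    std::vector<Bytef> dec1(dlen1), dec2(dlen2);
    uncompress(dec1.data(), &dlen1, out1.data(), len1);
    uncompress(dec2.data(), &dlen2, out2.data(), len2);
    assert(dlen1 == srcLen && std::memcmp(dec1.data(), text, srcLen) == 0);
    assert(dlen2 == srcLen && std::memcmp(dec2.data(), text, srcLen) == 0);
}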

Related

Fragmented MP4 - problem playing in browser

I am trying to create fragmented MP4 from raw H264 video data so I can play it in an internet browser's player. My goal is to create a live streaming system, where a media server sends fragmented MP4 pieces to the browser. The server buffers input data from a Raspberry Pi camera, which sends video as H264 frames. It then muxes that video data and makes it available to the client. The browser plays the media data (muxed by the server and sent e.g. through a websocket) using Media Source Extensions.
For test purposes I wrote the following pieces of code (using many examples I found on the internet):
C++ application using avcodec which muxes raw H264 video to fragmented MP4 and saves it to a file:
#define READBUFSIZE 4096
#define IOBUFSIZE 4096
#define ERRMSGSIZE 128
#include <cstdint>
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
extern "C"
{
#include <libavformat/avformat.h>
#include <libavutil/error.h>
#include <libavutil/opt.h>
}
enum NalType : uint8_t
{
//NALs containing stream metadata
SEQ_PARAM_SET = 0x7,
PIC_PARAM_SET = 0x8
};
std::vector<uint8_t> outputData;
int mediaMuxCallback(void *opaque, uint8_t *buf, int bufSize)
{
outputData.insert(outputData.end(), buf, buf + bufSize);
return bufSize;
}
std::string getAvErrorString(int errNr)
{
char errMsg[ERRMSGSIZE];
av_strerror(errNr, errMsg, ERRMSGSIZE);
return std::string(errMsg);
}
int main(int argc, char **argv)
{
if(argc < 2)
{
std::cout << "Missing file name" << std::endl;
return 1;
}
std::fstream file(argv[1], std::ios::in | std::ios::binary);
if(!file.is_open())
{
std::cout << "Couldn't open file " << argv[1] << std::endl;
return 2;
}
std::vector<uint8_t> inputMediaData;
do
{
char buf[READBUFSIZE];
file.read(buf, READBUFSIZE);
int size = file.gcount();
if(size > 0)
inputMediaData.insert(inputMediaData.end(), buf, buf + size);
} while(!file.eof());
file.close();
//Initialize avcodec
av_register_all();
uint8_t *ioBuffer;
AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
AVCodecContext *codecCtxt = avcodec_alloc_context3(codec);
AVCodecParserContext *parserCtxt = av_parser_init(AV_CODEC_ID_H264);
AVOutputFormat *outputFormat = av_guess_format("mp4", nullptr, nullptr);
AVFormatContext *formatCtxt;
AVIOContext *ioCtxt;
AVStream *videoStream;
int res = avformat_alloc_output_context2(&formatCtxt, outputFormat, nullptr, nullptr);
if(res < 0)
{
std::cout << "Couldn't initialize format context; the error was: " << getAvErrorString(res) << std::endl;
return 3;
}
if((videoStream = avformat_new_stream( formatCtxt, avcodec_find_encoder(formatCtxt->oformat->video_codec) )) == nullptr)
{
std::cout << "Couldn't initialize video stream" << std::endl;
return 4;
}
else if(!codec)
{
std::cout << "Couldn't initialize codec" << std::endl;
return 5;
}
else if(codecCtxt == nullptr)
{
std::cout << "Couldn't initialize codec context" << std::endl;
return 6;
}
else if(parserCtxt == nullptr)
{
std::cout << "Couldn't initialize parser context" << std::endl;
return 7;
}
else if((ioBuffer = (uint8_t*)av_malloc(IOBUFSIZE)) == nullptr)
{
std::cout << "Couldn't allocate I/O buffer" << std::endl;
return 8;
}
else if((ioCtxt = avio_alloc_context(ioBuffer, IOBUFSIZE, 1, nullptr, nullptr, mediaMuxCallback, nullptr)) == nullptr)
{
std::cout << "Couldn't initialize I/O context" << std::endl;
return 9;
}
//Set video stream data
videoStream->id = formatCtxt->nb_streams - 1;
videoStream->codec->width = 1280;
videoStream->codec->height = 720;
videoStream->time_base.den = 60; //FPS
videoStream->time_base.num = 1;
videoStream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
formatCtxt->pb = ioCtxt;
//Retrieve SPS and PPS for codec extdata
const uint32_t synchMarker = 0x01000000;
unsigned int i = 0;
int spsStart = -1, ppsStart = -1;
uint16_t spsSize = 0, ppsSize = 0;
while(spsSize == 0 || ppsSize == 0)
{
uint32_t *curr = (uint32_t*)(inputMediaData.data() + i);
if(*curr == synchMarker)
{
unsigned int currentNalStart = i;
i += sizeof(uint32_t);
uint8_t nalType = inputMediaData.data()[i] & 0x1F;
if(nalType == SEQ_PARAM_SET)
spsStart = currentNalStart;
else if(nalType == PIC_PARAM_SET)
ppsStart = currentNalStart;
if(spsStart >= 0 && spsSize == 0 && spsStart != i)
spsSize = currentNalStart - spsStart;
else if(ppsStart >= 0 && ppsSize == 0 && ppsStart != i)
ppsSize = currentNalStart - ppsStart;
}
++i;
}
videoStream->codec->extradata = inputMediaData.data() + spsStart;
videoStream->codec->extradata_size = ppsStart + ppsSize;
//Write main header
AVDictionary *options = nullptr;
av_dict_set(&options, "movflags", "frag_custom+empty_moov", 0);
res = avformat_write_header(formatCtxt, &options);
if(res < 0)
{
std::cout << "Couldn't write container main header; the error was: " << getAvErrorString(res) << std::endl;
return 10;
}
//Retrieve frames from input video and wrap them in container
int currentInputIndex = 0;
int framesInSecond = 0;
while(currentInputIndex < inputMediaData.size())
{
uint8_t *frameBuffer;
int frameSize;
res = av_parser_parse2(parserCtxt, codecCtxt, &frameBuffer, &frameSize, inputMediaData.data() + currentInputIndex,
inputMediaData.size() - currentInputIndex, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
if(frameSize == 0) //No more frames while some data still remains (is that even possible?)
{
std::cout << "Some data left unparsed: " << std::to_string(inputMediaData.size() - currentInputIndex) << std::endl;
break;
}
//Prepare packet with video frame to be dumped into container
AVPacket packet;
av_init_packet(&packet);
packet.data = frameBuffer;
packet.size = frameSize;
packet.stream_index = videoStream->index;
currentInputIndex += frameSize;
//Write packet to the video stream
res = av_write_frame(formatCtxt, &packet);
if(res < 0)
{
std::cout << "Couldn't write packet with video frame; the error was: " << getAvErrorString(res) << std::endl;
return 11;
}
if(++framesInSecond == 60) //We want 1 segment per second
{
framesInSecond = 0;
res = av_write_frame(formatCtxt, nullptr); //Flush segment
}
}
res = av_write_frame(formatCtxt, nullptr); //Flush if something has been left
//Write media data in container to file
file.open("my_mp4.mp4", std::ios::out | std::ios::binary);
if(!file.is_open())
{
std::cout << "Couldn't open output file " << std::endl;
return 12;
}
file.write((char*)outputData.data(), outputData.size());
if(file.fail())
{
std::cout << "Couldn't write to file" << std::endl;
return 13;
}
std::cout << "Media file muxed successfully" << std::endl;
return 0;
}
(I hardcoded a few values, such as the video dimensions or framerate, but as I said this is just test code.)
Simple HTML webpage using MSE to play my fragmented MP4
<!DOCTYPE html>
<html>
<head>
<title>Test strumienia</title>
</head>
<body>
<video width="1280" height="720" controls>
</video>
</body>
<script>
var vidElement = document.querySelector('video');
if (window.MediaSource) {
var mediaSource = new MediaSource();
vidElement.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', sourceOpen);
} else {
console.log("The Media Source Extensions API is not supported.")
}
function sourceOpen(e) {
URL.revokeObjectURL(vidElement.src);
var mime = 'video/mp4; codecs="avc1.640028"';
var mediaSource = e.target;
var sourceBuffer = mediaSource.addSourceBuffer(mime);
var videoUrl = 'my_mp4.mp4';
fetch(videoUrl)
.then(function(response) {
return response.arrayBuffer();
})
.then(function(arrayBuffer) {
sourceBuffer.addEventListener('updateend', function(e) {
if (!sourceBuffer.updating && mediaSource.readyState === 'open') {
mediaSource.endOfStream();
}
});
sourceBuffer.appendBuffer(arrayBuffer);
});
}
</script>
</html>
The output MP4 file generated by my C++ application can be played e.g. in MPC, but it doesn't play in any web browser I tested it with. It also doesn't have any duration (MPC keeps showing 00:00).
To compare with the output MP4 file I got from my C++ application described above, I also used FFMPEG to create a fragmented MP4 file from the same source file with the raw H264 stream. I used the following command:
ffmpeg -r 60 -i input.h264 -c:v copy -f mp4 -movflags empty_moov+default_base_moof+frag_keyframe test.mp4
This file generated by FFMPEG is played correctly by every web browser I used for tests. It also has a correct duration (but it also has a trailing atom, which wouldn't be present in my live stream anyway; and as I need a live stream, it won't have any fixed duration in the first place).
The MP4 atoms for both files look very similar (they have an identical avcc section for sure). What's interesting (though I'm not sure if it's of any importance), both files have a different NAL format than the input file (the RPi camera produces a video stream in Annex-B format, while the output MP4 files contain NALs in AVCC format... or at least it looks like that's the case when I compare the mdat atoms with the input H264 data).
I assume there is some field (or a few fields) I need to set for avcodec to make it produce a video stream that will be properly decoded and played by browser players. But what field(s) do I need to set? Or does the problem lie somewhere else? I've run out of ideas.
EDIT 1:
As suggested, I investigated binary content of both MP4 files (generated by my app and FFMPEG tool) with hex editor. What I can confirm:
both files have identical avcc section (they match perfectly and are in AVCC format, I analyzed it byte after byte and there's no mistake about it)
both files have NALs in AVCC format (I looked closely at mdat atoms and they don't differ between both MP4 files)
So I guess there's nothing wrong with the extradata creation in my code - avcodec takes care of it properly, even if I just feed it with the SPS and PPS NALs. It converts them by itself, so there's no need for me to do it by hand. Still, my original problem remains.
EDIT 2: I achieved partial success - the MP4 generated by my app now plays in Firefox. I added this line to the code (along with the rest of the stream initialization):
videoStream->codec->time_base = videoStream->time_base;
So now this section of my code looks like this:
//Set video stream data
videoStream->id = formatCtxt->nb_streams - 1;
videoStream->codec->width = 1280;
videoStream->codec->height = 720;
videoStream->time_base.den = 60; //FPS
videoStream->time_base.num = 1;
videoStream->codec->time_base = videoStream->time_base;
videoStream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
formatCtxt->pb = ioCtxt;
I finally found the solution. My MP4 now plays in Chrome (while still playing in the other browsers I tested).
In Chrome chrome://media-internals/ shows MSE logs (of a sort). When I looked there, I found a few of following warnings for my test player:
ISO-BMFF container metadata for video frame indicates that the frame is not a keyframe, but the video frame contents indicate the opposite.
That made me think, and encouraged me to set AV_PKT_FLAG_KEY for packets containing keyframes. I added the following code to the section that fills the AVPacket structure:
//Check if keyframe field needs to be set
int allowedNalsCount = 3; //In one packet there would be at most three NALs: SPS, PPS and video frame
packet.flags = 0;
for(int i = 0; i < frameSize && allowedNalsCount > 0; ++i)
{
uint32_t *curr = (uint32_t*)(frameBuffer + i);
if(*curr == synchMarker)
{
uint8_t nalType = frameBuffer[i + sizeof(uint32_t)] & 0x1F;
if(nalType == KEYFRAME)
{
std::cout << "Keyframe detected at frame nr " << framesTotal << std::endl;
packet.flags = AV_PKT_FLAG_KEY;
break;
}
else
i += sizeof(uint32_t) + 1; //We parsed this already, no point in doing it again
--allowedNalsCount;
}
}
The KEYFRAME constant turns out to be 0x5 in my case (IDR slice).
The MP4 atoms for both files look very similar (they have an identical avcc section for sure)
Double-check that; the code supplied suggests otherwise to me.
What's interesting (though I'm not sure if it's of any importance), both files have a different NAL format than the input file (the RPi camera produces a video stream in Annex-B format, while the output MP4 files contain NALs in AVCC format... or at least it looks like that's the case when I compare the mdat atoms with the input H264 data).
It is very important: MP4 will not work with Annex-B.
You need to fill in the extradata with a full AVC Decoder Configuration Record, not just the SPS/PPS.
Here's how the record should look: AVCDCR
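For reference, a minimal sketch of assembling that record from raw SPS and PPS NAL payloads (without start codes); the layout follows ISO/IEC 14496-15, and the helper name is hypothetical:

#include <cstdint>
#include <vector>

// Build an AVCDecoderConfigurationRecord from one SPS and one PPS NAL
// (payloads without start codes). Layout per ISO/IEC 14496-15.
std::vector<uint8_t> makeAvcDcr(const std::vector<uint8_t> &sps,
                                const std::vector<uint8_t> &pps)
{
    std::vector<uint8_t> dcr;
    dcr.push_back(1);       // configurationVersion
    dcr.push_back(sps[1]);  // AVCProfileIndication (copied from the SPS)
    dcr.push_back(sps[2]);  // profile_compatibility
    dcr.push_back(sps[3]);  // AVCLevelIndication
    dcr.push_back(0xFF);    // 6 reserved bits (all 1) + lengthSizeMinusOne = 3 (4-byte NAL lengths)
    dcr.push_back(0xE1);    // 3 reserved bits (all 1) + numOfSequenceParameterSets = 1
    dcr.push_back((uint8_t)(sps.size() >> 8));   // sequenceParameterSetLength, big-endian
    dcr.push_back((uint8_t)(sps.size() & 0xFF));
    dcr.insert(dcr.end(), sps.begin(), sps.end());
    dcr.push_back(1);       // numOfPictureParameterSets
    dcr.push_back((uint8_t)(pps.size() >> 8));   // pictureParameterSetLength, big-endian
    dcr.push_back((uint8_t)(pps.size() & 0xFF));
    dcr.insert(dcr.end(), pps.begin(), pps.end());
    return dcr;
}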
We can find this explanation in the Chrome media source code (https://chromium.googlesource.com/chromium/src/+/refs/heads/master/media/formats/mp4/mp4_stream_parser.cc#799):
// Copyright 2014 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// Use |analysis.is_keyframe|, if it was actually determined, for logging
// if the analysis mismatches the container's keyframe metadata for
// |frame_buf|.
if (analysis.is_keyframe.has_value() &&
is_keyframe != analysis.is_keyframe.value()) {
LIMITED_MEDIA_LOG(DEBUG, media_log_, num_video_keyframe_mismatches_,
kMaxVideoKeyframeMismatchLogs)
<< "ISO-BMFF container metadata for video frame indicates that the "
"frame is "
<< (is_keyframe ? "" : "not ")
<< "a keyframe, but the video frame contents indicate the "
"opposite.";
// As of September 2018, it appears that all of Edge, Firefox, Safari
// work with content that marks non-avc-keyframes as a keyframe in the
// container. Encoders/muxers/old streams still exist that produce
// all-keyframe mp4 video tracks, though many of the coded frames are
// not keyframes (likely workaround due to the impact on low-latency
// live streams until https://crbug.com/229412 was fixed). We'll trust
// the AVC frame's keyframe-ness over the mp4 container's metadata if
// they mismatch. If other out-of-order codecs in mp4 (e.g. HEVC, DV)
// implement keyframe analysis in their frame_bitstream_converter, we'll
// similarly trust that analysis instead of the mp4.
is_keyframe = analysis.is_keyframe.value();
}
As the code comments show, Chrome trusts the AVC frame's keyframe-ness over the MP4 container's metadata. So the NALU type in the H264/HEVC stream is more important than the MP4 container's sdtp and trun box descriptions.
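Since MP4 needs AVCC-style length-prefixed NALs rather than Annex-B start codes (as noted above), here is a rough sketch of the conversion for one frame. This is a hypothetical standalone helper (the FFmpeg muxer normally does this for you), and it assumes plain 4-byte 00 00 00 01 start codes:

#include <cstdint>
#include <vector>

// Rewrite a frame from Annex-B (start-code delimited) to AVCC
// (4-byte big-endian length prefixes). Assumes the buffer begins
// with a 4-byte start code.
std::vector<uint8_t> annexbToAvcc(const uint8_t *data, size_t size)
{
    std::vector<uint8_t> out;
    size_t i = 0;
    while (i + 4 <= size)
    {
        size_t nalStart = i + 4;   // skip the 00 00 00 01 start code
        size_t next = nalStart;    // scan for the next start code
        while (next + 4 <= size &&
               !(data[next] == 0 && data[next + 1] == 0 &&
                 data[next + 2] == 0 && data[next + 3] == 1))
            ++next;
        if (next + 4 > size)
            next = size;           // last NAL runs to the end of the buffer
        uint32_t nalSize = (uint32_t)(next - nalStart);
        out.push_back((uint8_t)(nalSize >> 24));
        out.push_back((uint8_t)(nalSize >> 16));
        out.push_back((uint8_t)(nalSize >> 8));
        out.push_back((uint8_t)nalSize);
        out.insert(out.end(), data + nalStart, data + next);
        i = next;
    }
    return out;
}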

C++ Sending Image via HTTP

I am trying to implement a simple HTTP server in C++. I was able to send a text response to the browser, but I am failing to send a response for a binary file request.
Here is my code that creates the HTTP response for a PNG file request:
string create_html_output_for_binary(const string &full_path)
{
const char* file_name = full_path.c_str();
FILE* file_stream = fopen(file_name, "rb");
string file_str;
size_t file_size;
if(file_stream != nullptr)
{
fseek(file_stream, 0, SEEK_END);
long file_length = ftell(file_stream);
rewind(file_stream);
// Allocate memory. Info: http://www.cplusplus.com/reference/cstdio/fread/?kw=fread
char* buffer = (char*) malloc(sizeof(char) * file_length);
if(buffer != nullptr)
{
file_size = fread(buffer, 1, file_length, file_stream);
stringstream out;
for(int i = 0; i < file_size; i++)
{
out << buffer[i];
}
string copy = out.str();
file_str = copy;
}
else
{
printf("buffer is null!");
}
}
else
{
printf("file_stream is null! file name -> %s\n", file_name);
}
string html = "HTTP/1.1 200 Okay\r\nContent-Type: text/html; charset=ISO-8859-4 \r\n\r\n" + string("FILE NOT FOUND!!");
if(file_str.length() > 0)
{
// HTTP/1.0 200 OK
// Server: cchttpd/0.1.0
// Content-Type: image/gif
// Content-Transfer-Encoding: binary
// Content-Length: 41758
string file_size_str = to_string(file_str.length());
html = "HTTP/1.1 200 Okay\r\nContent-Type: image/png; Content-Transfer-Encoding: binary; Content-Length: " + file_size_str + ";charset=ISO-8859-4 \r\n\r\n" + file_str;
printf("\n\nHTML -> %s\n\nfile_str -> %ld\n\n\n", html.c_str(), file_str.length());
}
return html;
}
This code successfully reads the file and stores the data in the char* buffer.
What confuses me is that file_str always contains just \211PNG, although when I check its size, it is much larger than \211PNG.
I suspect this is the problem that causes my image not to load in browsers, because when I printf the html, it only shows:
HTTP/1.1 200 Okay
Content-Type: image/png; Content-Transfer-Encoding: binary; Content-Length: 187542;charset=ISO-8859-4
\211PNG
What I am thinking is that the way to send binary data to the browser is the same as sending text data, so I build the string header first, then read the file data, convert it to a string and combine it with the header, and finally send one large single string to the HTTP socket.
I also tried this code:
if (file_stream != NULL)
{
short stringlength = 6;
string mystring;
mystring.reserve(stringlength);
fseek(file_stream , 0, SEEK_SET);
fread(&mystring[0], sizeof(char), (size_t)stringlength, file_stream);
printf("TEST -> %s, length -> %ld\n", mystring.c_str(), mystring.length());
fclose(file_stream );
}
But the HTML output is always the same, and mystring also contains only \211PNG when printf-ed.
Am I on the wrong path?
Please help me find the mistakes in my code. Thank you.
You are storing the data in one large chunk in a std::stringstream and then passing it around as a C-style string. That will not work for binary data: a zero byte will be interpreted as a null terminator by anything that treats the buffer as a C string (printf, strlen and friends), which causes everything after the first zero byte to be ignored. You should use a container like std::vector to store and manage binary data.
#include <vector>
string create_html_output_for_binary(const string &full_path)
{
std::vector<char> buffer;
//... other code here
if(file_stream != nullptr)
{
fseek(file_stream, 0, SEEK_END);
long file_length = ftell(file_stream);
rewind(file_stream);
buffer.resize(file_length);
file_size = fread(&buffer[0], 1, file_length, file_stream);
}
// .... other code here
}
To output the data, do not use printf. It may handle newlines differently and will stop at the first null terminator it encounters. Instead (keeping with your use of C stream I/O) use fwrite:
fwrite(buffer.data(), 1, buffer.size(), stdout);
In order for the above to work you will need to reopen stdout so that it writes in binary mode. This answer here on Stack Overflow shows how to accomplish that. This is just for printing the contents to stdout; since you are transmitting the data over sockets, you do not have to do anything to stdout.
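The same rule applies to the socket itself: write the header and the binary body with explicit sizes instead of relying on C-string semantics. A minimal sketch using POSIX send() (sockfd is a placeholder for your connected socket; a robust version would also loop on partial sends):

#include <string>
#include <vector>
#include <sys/socket.h>

// Send the response as two length-aware writes; never use strlen/c_str
// for the body, because PNG data contains embedded zero bytes.
bool send_png_response(int sockfd, const std::vector<char> &body)
{
    std::string header =
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: image/png\r\n"
        "Content-Length: " + std::to_string(body.size()) + "\r\n"
        "\r\n";
    if (send(sockfd, header.data(), header.size(), 0) != (ssize_t)header.size())
        return false;
    return send(sockfd, body.data(), body.size(), 0) == (ssize_t)body.size();
}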
First, you need to open the file (where your picture is) for reading in binary mode:
fp = fopen(filename, "rb");
Next, set stdout to binary mode with this call:
_setmode(_fileno(stdout), _O_BINARY);
You need to include the <fcntl.h> and <io.h> headers. Find the exact size of the picture
you need to send, for example like Captain Obvilous has written:
fseek(fp, 0L, SEEK_END);
long file_length = ftell(fp);
rewind(fp);
Now use the fread function to read all bytes from the picture (checking fread's return value rather than feof, so the last byte is not written twice):
while (fread(&ch, 1, 1, fp) == 1)
{
cout << ch;
}
The variable ch is of type char. When you are finished, set the file mode back:
_setmode(_fileno(stdout), _O_TEXT);
NOTE: This code was written mostly in C, but you can easily read files in C++ using istream.

How do I get the DC coefficient from a jpg using the jpg library?

I am new to this stuff, but I need to get the DC coefficient from a JPEG using the jpeg library.
I was told as a hint that the corresponding function is in jdhuff.c, but I can't find it. I tried to find a decent article about the jpeg library where I could learn this, but no success so far.
So I hope you guys can help me a bit and point me to some documentation, or share a hint.
So, here is what I know:
A JPEG picture consists of 8x8-pixel blocks; that is 64 pixels each. 63 of the values are named AC and 1 is named DC. Those are the coefficients, and the DC coefficient sits at position [0][0] of the block.
But how exactly do I read that with the jpeg library? I am using C++.
edit:
This is what I have so far:
read_jpeg::read_jpeg( const std::string& filename )
{
FILE* fp = NULL; // File-Pointer
jpeg_decompress_struct cinfo; // jpeg decompression parameters
JSAMPARRAY buffer; // Output row-buffer
int row_stride = 0; // physical row width
my_error_mgr jerr; // Custom Error Manager
// Set Error Manager
cinfo.err = jpeg_std_error(&jerr.pub);
jerr.pub.error_exit = my_error_exit;
// Handle longjump
if (setjmp(jerr.setjmp_buffer)) {
// JPEG has signaled an error. Clean up and throw an exception.
jpeg_destroy_decompress(&cinfo);
fclose(fp);
throw std::runtime_error("Error: jpeg has reported an error.");
}
// Open the file
if ( (fp = fopen(filename.c_str(), "rb")) == NULL )
{
std::stringstream ss;
ss << "Error: Cannot read '" << filename.c_str() << "' from the specified location!";
throw std::runtime_error(ss.str());
}
// Initialize jpeg decompression
jpeg_create_decompress(&cinfo);
// Show jpeg where to read the data
jpeg_stdio_src(&cinfo, fp);
// Read the header
jpeg_read_header(&cinfo, TRUE);
// Decompress the file
jpeg_start_decompress(&cinfo);
// JSAMPLEs per row in output buffer
row_stride = cinfo.output_width * cinfo.output_components;
// Make a one-row-high sample array
buffer = (*cinfo.mem->alloc_sarray)((j_common_ptr) &cinfo, JPOOL_IMAGE, row_stride, 1);
// Read image using jpgs counter
while (cinfo.output_scanline < cinfo.output_height)
{
// Read the image
jpeg_read_scanlines(&cinfo, buffer, 1);
}
// Finish the decompress
jpeg_finish_decompress(&cinfo);
// Release memory
jpeg_destroy_decompress(&cinfo);
// Close the file
fclose(fp);
}
This is not possible using the scanline-based API shown above; with that path the closest you can get is the raw pixel data of the Y/Cb/Cr channels.
To get the coefficients' data you'd either need to hack the decode_mcu function (or its callers) to save the data decoded there, or switch to jpeg_read_coefficients(), the transcoding entry point (used by jpegtran) that hands back the quantized DCT coefficients without running the full decode.
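A sketch of the jpeg_read_coefficients() route, slotted into the code above in place of jpeg_start_decompress()/jpeg_read_scanlines() (error handling omitted; the returned values are the quantized coefficients as stored in the file):

// After jpeg_read_header(&cinfo, TRUE), read coefficients instead of pixels.
jvirt_barray_ptr *coef_arrays = jpeg_read_coefficients(&cinfo);
for (int ci = 0; ci < cinfo.num_components; ++ci)
{
    jpeg_component_info *comp = &cinfo.comp_info[ci];
    for (JDIMENSION row = 0; row < comp->height_in_blocks; ++row)
    {
        // Fetch one row of 8x8 coefficient blocks for this component.
        JBLOCKARRAY blocks = (*cinfo.mem->access_virt_barray)(
            (j_common_ptr)&cinfo, coef_arrays[ci], row, 1, FALSE);
        for (JDIMENSION col = 0; col < comp->width_in_blocks; ++col)
        {
            JCOEF dc = blocks[0][col][0]; // DC coefficient of this block
            // Note: still quantized; multiply by the corresponding
            // quantization-table entry to get the dequantized value.
            (void)dc; // ... use dc here ...
        }
    }
}
jpeg_finish_decompress(&cinfo);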

zlib's uncompress() strangely returning Z_BUF_ERROR

I'm writing a Qt-based client application. It connects to a remote server using QTcpSocket. Before sending any actual data it needs to send login info, which is zlib-compressed JSON.
As far as I know from the server sources, to make everything work I need to send X bytes of compressed data, preceded by 4 bytes holding the length of the uncompressed data.
Uncompressing on server-side looks like this:
/* look at first 32 bits of buffer, which contains uncompressed len */
unc_len = le32toh(*((uint32_t *)buf));
if (unc_len > CLI_MAX_MSG)
return NULL;
/* alloc buffer for uncompressed data */
obj_unc = malloc(unc_len + 1);
if (!obj_unc)
return NULL;
/* decompress buffer (excluding first 32 bits) */
comp_p = buf + 4;
if (uncompress(obj_unc, &dest_len, comp_p, buflen - 4) != Z_OK)
goto out;
if (dest_len != unc_len)
goto out;
memcpy(obj_unc + unc_len, &zero, 1); /* null terminate */
I'm compressing the JSON using Qt's built-in zlib (I just downloaded the headers and placed them in MinGW's include folder):
char json[] = "{\"version\":1,\"user\":\"test\"}";
char pass[] = "test";
std::auto_ptr<Bytef> message(new Bytef[ // allocate memory for:
sizeof(ubbp_header) // + msg header
+ sizeof(uLongf) // + uncompressed data size
+ strlen(json) // + compressed data itself
+ 64 // + reserve (if compressed size > uncompressed size)
+ SHA256_DIGEST_LENGTH]);//+ SHA256 digest
uLongf unc_len = strlen(json);
uLongf enc_len = strlen(json) + 64;
// header goes first, so server will determine that we want to login
Bytef* pHdr = message.get();
// after that: uncompressed data length and data itself
Bytef* pLen = pHdr + sizeof(ubbp_header);
Bytef* pDat = pLen + sizeof(uLongf);
// hash of compressed message updated with user pass
Bytef* pSha;
if (Z_OK != compress(pLen, &enc_len, (Bytef*)json, unc_len))
{
qDebug("Compression failed.");
return false;
}
Complete function code here: http://pastebin.com/hMY2C4n5
Even though the server correctly receives the uncompressed length, uncompress() returns Z_BUF_ERROR.
P.S.: I'm actually writing a pushpool client to figure out how its binary protocol works. I've asked this question on the official bitcoin forum, but no luck there: http://forum.bitcoin.org/index.php?topic=24257.0
Turns out it was a server-side bug. More details in the bitcoin forum thread.
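For reference, the framing the server expects (4 little-endian bytes of uncompressed length, then the zlib stream) can be built with a sketch like this (a hypothetical helper using Qt types; note the length prefix must be a fixed 4-byte field, not a platform-sized uLongf):

#include <QByteArray>
#include <QtEndian>
#include <zlib.h>

// Frame a message as: 4-byte little-endian uncompressed length,
// followed by the zlib-compressed payload.
QByteArray frameCompressed(const QByteArray &plain)
{
    uLongf compLen = compressBound(plain.size());
    QByteArray comp(int(compLen), 0);
    if (compress((Bytef*)comp.data(), &compLen,
                 (const Bytef*)plain.constData(), plain.size()) != Z_OK)
        return QByteArray();
    comp.truncate(int(compLen));

    quint32 lenLe = qToLittleEndian<quint32>(quint32(plain.size()));
    QByteArray framed((const char*)&lenLe, sizeof(lenLe));
    framed.append(comp);
    return framed;
}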

How to write bitmaps as frames to Ogg Theora in C/C++?

How do I write bitmaps as frames to Ogg Theora in C/C++?
Some examples with source would be great!
The entire solution is a little lengthy to post here as a code sample, but if you download libtheora from Xiph.org, there is an example called png2theora. All of the library functions I am about to mention can be found in the documentation on Xiph.org for theora and ogg.
Call th_info_init() to initialise a th_info structure, then set up your output parameters by assigning the appropriate members.
Use that structure in a call to th_encode_alloc() to get an encoder context
Initialise an ogg stream, with ogg_stream_init()
Initialise a blank th_comment structure using th_comment_init
Iterate through the following until th_encode_flushheader returns 0 (or an error code):
Call th_encode_flushheader with the encoder context, the blank comment structure and an ogg_packet.
Send the resulting packet to the ogg stream with ogg_stream_packetin().
Now, repeatedly call ogg_stream_pageout(), each time writing page.header and then page.body to the output file, until it returns 0. Then call ogg_stream_flush() and write the resulting page to the file.
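A sketch of that setup and header-flushing sequence, assuming the same global encoderContext and theoraStreamState used below (w, h and fd are placeholders; error checks omitted):

#include <cstdlib>
#include <unistd.h>
#include <ogg/ogg.h>
#include <theora/theoraenc.h>

static th_info encoderInfoStorage;
static th_info *encoderInfo = &encoderInfoStorage; // theora_write_frame() below reads pixel_fmt through this
th_enc_ctx *encoderContext;
ogg_stream_state theoraStreamState;

void theora_init_and_write_headers(int fd, unsigned long w, unsigned long h)
{
    th_info_init(encoderInfo);
    encoderInfo->frame_width = (w + 15) & ~15;  // frame size must be a multiple of 16
    encoderInfo->frame_height = (h + 15) & ~15;
    encoderInfo->pic_width = w;                 // visible picture region
    encoderInfo->pic_height = h;
    encoderInfo->pic_x = 0;
    encoderInfo->pic_y = 0;
    encoderInfo->fps_numerator = 30;
    encoderInfo->fps_denominator = 1;
    encoderInfo->pixel_fmt = TH_PF_420;
    encoderInfo->quality = 48;                  // 0-63; ignored if target_bitrate is set

    encoderContext = th_encode_alloc(encoderInfo);
    ogg_stream_init(&theoraStreamState, rand()); // any unique serial number

    th_comment tc;
    th_comment_init(&tc);

    // th_encode_flushheader returns a positive value while more header
    // packets remain; feed each one into the ogg stream.
    ogg_packet op;
    while (th_encode_flushheader(encoderContext, &tc, &op) > 0)
        ogg_stream_packetin(&theoraStreamState, &op);

    // Write out completed header pages, then force the final partial page.
    ogg_page og;
    while (ogg_stream_pageout(&theoraStreamState, &og) > 0) {
        write(fd, og.header, og.header_len);
        write(fd, og.body, og.body_len);
    }
    while (ogg_stream_flush(&theoraStreamState, &og) > 0) {
        write(fd, og.header, og.header_len);
        write(fd, og.body, og.body_len);
    }
    th_comment_clear(&tc);
}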
You can now write frames to the encoder. Here is how I did it:
int theora_write_frame(int outputFd, unsigned long w, unsigned long h, unsigned char *yuv_y, unsigned char *yuv_u, unsigned char *yuv_v, int last)
{
th_ycbcr_buffer ycbcr;
ogg_packet op;
ogg_page og;
unsigned long yuv_w;
unsigned long yuv_h;
/* Must hold: yuv_w >= w */
yuv_w = (w + 15) & ~15;
/* Must hold: yuv_h >= h */
yuv_h = (h + 15) & ~15;
//Fill out the ycbcr buffer
ycbcr[0].width = yuv_w;
ycbcr[0].height = yuv_h;
ycbcr[0].stride = yuv_w;
ycbcr[1].width = yuv_w;
ycbcr[1].stride = ycbcr[1].width;
ycbcr[1].height = yuv_h;
ycbcr[2].width = ycbcr[1].width;
ycbcr[2].stride = ycbcr[1].stride;
ycbcr[2].height = ycbcr[1].height;
if(encoderInfo->pixel_fmt == TH_PF_420)
{
//Chroma is decimated by 2 in both directions
ycbcr[1].width = yuv_w >> 1;
ycbcr[2].width = yuv_w >> 1;
ycbcr[1].height = yuv_h >> 1;
ycbcr[2].height = yuv_h >> 1;
}else if(encoderInfo->pixel_fmt == TH_PF_422)
{
ycbcr[1].width = yuv_w >> 1;
ycbcr[2].width = yuv_w >> 1;
}else if(encoderInfo->pixel_fmt != TH_PF_444) //4:4:4 keeps full-size chroma; anything else is unknown
{
//Then we have an unknown pixel format
//We don't know how long the arrays are!
fprintf(stderr, "[theora_write_frame] Unknown pixel format in writeFrame!\n");
return -1;
}
ycbcr[0].data = yuv_y;
ycbcr[1].data = yuv_u;
ycbcr[2].data = yuv_v;
/* Theora is a one-frame-in,one-frame-out system; submit a frame
for compression and pull out the packet */
if(th_encode_ycbcr_in(encoderContext, ycbcr)) {
fprintf(stderr, "[theora_write_frame] Error: could not encode frame\n");
return -1;
}
if(!th_encode_packetout(encoderContext, last, &op)) {
fprintf(stderr, "[theora_write_frame] Error: could not read packets\n");
return -1;
}
ogg_stream_packetin(&theoraStreamState, &op);
ssize_t bytesWritten = 0;
int pagesOut = 0;
while(ogg_stream_pageout(&theoraStreamState, &og)) {
pagesOut ++;
bytesWritten = write(outputFd, og.header, og.header_len);
if(bytesWritten != og.header_len)
{
fprintf(stderr, "[theora_write_frame] Error: Could not write to file\n");
return -1;
}
bytesWritten = write(outputFd, og.body, og.body_len);
if(bytesWritten != og.body_len)
{
fprintf(stderr, "[theora_write_frame] Error: Could not write to file\n");
return -1;
}
}
return pagesOut;
}
Where encoderInfo is the th_info structure used to initialise the encoder (static in the data section for me).
On your last frame, passing a non-zero last flag to th_encode_packetout() will make sure the stream terminates properly.
Once you're done, just make sure to clean up (closing fds, mainly). th_info_clear() will clear the th_info structure, and th_encode_free() will free your encoder context.
Obviously, you'll need to convert your bitmap into YUV planes before you can pass them to theora_write_frame().
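A rough sketch of that conversion, from packed 24-bit RGB to 4:2:0 planes using the common BT.601 studio-range integer approximation (for simplicity it assumes w and h are already multiples of 16 so the plane strides match the padded yuv_w/yuv_h above, and it takes the top-left pixel of each 2x2 block as the chroma sample):

// Convert packed 24-bit RGB (row-major, no row padding) to planar
// YCbCr 4:2:0 with BT.601 integer coefficients.
void rgb24_to_yuv420(const unsigned char *rgb, unsigned long w, unsigned long h,
                     unsigned char *yuv_y, unsigned char *yuv_u, unsigned char *yuv_v)
{
    for (unsigned long j = 0; j < h; ++j)
    {
        for (unsigned long i = 0; i < w; ++i)
        {
            const unsigned char *p = rgb + 3 * (j * w + i);
            int r = p[0], g = p[1], b = p[2];
            yuv_y[j * w + i] = (unsigned char)(((66 * r + 129 * g + 25 * b + 128) >> 8) + 16);
            if ((i % 2) == 0 && (j % 2) == 0) // one chroma sample per 2x2 block
            {
                unsigned long c = (j / 2) * (w / 2) + (i / 2);
                yuv_u[c] = (unsigned char)(((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128);
                yuv_v[c] = (unsigned char)(((112 * r - 94 * g - 18 * b + 128) >> 8) + 128);
            }
        }
    }
}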
Hope this is of some help. Good luck!
Here's the libtheora API and example code.
Here's a micro-howto that shows how to use the theora binaries. Since the encoder reads raw, uncompressed 'yuv4mpeg' data for video, you could also use it from your app by piping the video frames to the encoder.