How Can I Reduce The Memory Usage For a Huge File Transfer? - web-services

I have to transfer some huge files (2GB-ish) to a web service:
public bool UploadContent(System.Web.HttpContext context)
{
var file = context.Request.Files[0];
var fileName = file.FileName;
byte[] fileBytes = new Byte[file.ContentLength];
file.InputStream.Read(fileBytes, 0, fileBytes.Length);
client.createResource(fileBytes);
return true;
}
The HttpContext already has the contents of the file in Files[0], but I can't see a way to pass those bytes to the createResource(byte[] contents) method of the web service without making a copy as a byte array... so I am eating memory like candy.
Is there a more efficient way to do this?
EDIT client.createResource() is part of a COTS product and modification is outside our control.

Rather than sending all the bytes at once, you can send the file in chunks. Seek through the file step by step and merge each new chunk with the bytes already saved on the server.
You only need to update your client.CreateResource method if you're allowed to modify it :)
Add the following parameters:
string fileName // identifies the file when you start sending the chunks
byte[] buffer // the chunk that will be sent to the server via the web service
long offset // tells you how much data has already been uploaded, so that you can seek the file and merge the buffer.
Now your method will look like:
public bool CreateResource(string FileName, byte[] buffer, long Offset)
{
bool retVal = false;
try
{
string FilePath = "d:\\temp\\uploadTest.extension";
if (Offset == 0)
File.Create(FilePath).Close();
// open a file stream and write the buffer.
// Don't open with FileMode.Append because the transfer may wish to
// start at a different point
using (FileStream fs = new FileStream(FilePath, FileMode.Open,
FileAccess.ReadWrite, FileShare.Read))
{
fs.Seek(Offset, SeekOrigin.Begin);
fs.Write(buffer, 0, buffer.Length);
}
retVal = true;
}
catch (Exception ex)
{
// Log exception or send error message to someone who cares
}
return retVal;
}
Now, to read the file in chunks from the InputStream of the HttpPostedFile, try the code below:
public bool UploadContent(System.Web.HttpContext context)
{
//the file that we want to upload
var file = context.Request.Files[0];
var fs = file.InputStream;
long Offset = 0; // starting offset (long so offsets near 2 GB don't overflow an int).
//define the chunk size
int ChunkSize = 65536; // 64 KB
//define the buffer array according to the chunksize.
byte[] Buffer = new byte[ChunkSize];
//opening the file for read.
try
{
long FileSize = file.ContentLength; // File size of file being uploaded.
// reading the file.
fs.Position = Offset;
int BytesRead = 0;
while (Offset != FileSize) // continue uploading the file chunks until offset = file size.
{
BytesRead = fs.Read(Buffer, 0, ChunkSize); // read the next chunk
if (BytesRead != Buffer.Length)
{
ChunkSize = BytesRead;
byte[] TrimmedBuffer = new byte[BytesRead];
Array.Copy(Buffer, TrimmedBuffer, BytesRead);
Buffer = TrimmedBuffer; // the trimmed buffer should become the new 'buffer'
}
// send this chunk to the server. it is sent as a byte[] parameter,
// but the client and server have been configured to encode byte[] using MTOM.
bool ChunkAppended = client.createResource(file.FileName, Buffer, Offset);
if (!ChunkAppended)
{
break;
}
// Offset is only updated AFTER a successful send of the bytes.
Offset += BytesRead; // save the offset position for resume
}
}
catch (Exception ex)
{
}
finally
{
fs.Close();
}
return Offset == FileSize;
}
Disclaimer: I haven't tested this code. This is sample code to show how a large file upload can be achieved without exhausting memory.
Ref: Source article.

Related

Boost C++ UDP Socket stops receive after N packages

I am sending UDP packets from a server to a client. On the server side I split the data into 500-byte packets and send them to the client. The client receives the packets, accumulates the received data, and deserializes an object.
The problem is that the client receives at most 133 packets and then stops, as if nothing else was sent to the socket, even though the server sends the whole object (1238 packets). This problem only exists on Windows; it works perfectly under OSX.
Here is the server code that sends the packets:
// sends #buffer of size #length to #endpoint
// #buffer already contains a header, and the method splits #buffer into chunks and send it one by one
void server::send_package(char* buffer, int length, udp::endpoint endpoint){
if (length > BUFFER){
protocol::header header;
int dataLength = length - sizeof (header);
// copy header from buffer
memcpy(&header, buffer, sizeof(header));
header.isEnd = false;
int position = 0;
// allocate memory to collect data to send
char* data_to_send = new char[dataLength];
// copy data
memcpy(data_to_send, &buffer[sizeof(header)], dataLength);
header.totalPackages = dataLength/(BUFFER-sizeof (header));
// create chunks of data and send
while (position < dataLength){
int frame_size = BUFFER;
header.currentPackage++;
if (dataLength-position+sizeof (header) <= BUFFER) {
header.isEnd = true;
frame_size = dataLength-position+sizeof (header);
}
char* temp_buffer = new char[frame_size];
header.length = frame_size-sizeof(header);
// set the header of a chunk
memcpy(temp_buffer, &header, sizeof(header));
// set data to chunk
memcpy(&temp_buffer[sizeof (header)], &data_to_send[position], frame_size-sizeof(header));
// send chunk
socket->send_to(boost::asio::buffer(temp_buffer, frame_size), endpoint);
socket->wait(boost::asio::ip::tcp::socket::wait_write);
position += frame_size-sizeof(header);
}
} else {
socket->async_send_to(boost::asio::buffer(buffer, length), endpoint,
boost::bind(&server::release_sent_buffer,
this,
buffer, length)
);
}
}
Here is the client code that receives the packets:
void connectionManager::handle_receive( const boost::system::error_code &error,
std::size_t size,
udp::endpoint* ep) {
if (size > 0) {
// _lock.try_lock();
protocol::header header;
memcpy(&header, &recv_buffer, sizeof(header));
logg("response from server received " + boost::asio::ip::address_v4(header.ip).to_string());
logg("received header:");
logg(protocol::getHeaderInfo(header));
std::stringstream ss;
ss << "header.length = " << header.length;
logg(ss.str().c_str());
udp::endpoint endpoint(boost::asio::ip::address_v4(header.ip), _server_port);
switch (header.command) {
case protocol::commands::server_instance_instruments_state_response: {
package_chain chain(size-sizeof(header));
memcpy(chain.data, &recv_buffer[sizeof(header)], size-sizeof(header));
packages[header.id].push_back(chain);
// at Windows machine the last package is #133. But 1248 packages expected.
// WHY????...
int packs = (packages[header.id].size());
if (header.isEnd) {
char* buf = getDataFromPackages(header.id, header.length);
std::stringstream str;
str << buf;
boost::archive::text_iarchive ar(str);
instance_plugin_information* inst_inf;
inst_inf = new instance_plugin_information();
try {
ar & inst_inf;
if (onPluginStateResponse != nullptr) {
onPluginStateResponse(*inst_inf);
}
} catch (const std::exception& e) {
}
}
break;
}
}
// We will hang on this line when package #133 received.
socket->wait(boost::asio::ip::tcp::socket::wait_read);
connectionManager::start_receive();
}
I just don't understand what I am missing. Why does the client receive exactly 133 packets (133 x 500 bytes) and then stop?
I have changed the code in many ways, but with no luck. The last thing I added is
socket->wait(boost::asio::ip::tcp::socket::wait_read);
before I call start_receive() again, and the program hangs on this line exactly when packet #133 is received.
Please help. I am close to giving up and becoming a pizza delivery guy.

Oboe Async Audio Extraction

I am trying to build an NDK-based C++ low-latency audio player that has to handle three operations for multiple audio sources:
Play from assets.
Stream from an online source.
Play from local device storage.
Starting from one of the Oboe samples provided by Google, I added another function to the class NDKExtractor.cpp to extract audio from a URL and render it to the audio device while reading from the source at the same time.
int32_t NDKExtractor::decode(char *file, uint8_t *targetData, AudioProperties targetProperties) {
LOGD("Using NDK decoder: %s",file);
// Extract the audio frames
AMediaExtractor *extractor = AMediaExtractor_new();
//using this method instead of AMediaExtractor_setDataSourceFd() as used for asset files in the rhythm game example
media_status_t amresult = AMediaExtractor_setDataSource(extractor, file);
if (amresult != AMEDIA_OK) {
LOGE("Error setting extractor data source, err %d", amresult);
return 0;
}
// Specify our desired output format by creating it from our source
AMediaFormat *format = AMediaExtractor_getTrackFormat(extractor, 0);
int32_t sampleRate;
if (AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_SAMPLE_RATE, &sampleRate)) {
LOGD("Source sample rate %d", sampleRate);
if (sampleRate != targetProperties.sampleRate) {
LOGE("Input (%d) and output (%d) sample rates do not match. "
"NDK decoder does not support resampling.",
sampleRate,
targetProperties.sampleRate);
return 0;
}
} else {
LOGE("Failed to get sample rate");
return 0;
};
int32_t channelCount;
if (AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_CHANNEL_COUNT, &channelCount)) {
LOGD("Got channel count %d", channelCount);
if (channelCount != targetProperties.channelCount) {
LOGE("NDK decoder does not support different "
"input (%d) and output (%d) channel counts",
channelCount,
targetProperties.channelCount);
}
} else {
LOGE("Failed to get channel count");
return 0;
}
const char *formatStr = AMediaFormat_toString(format);
LOGD("Output format %s", formatStr);
const char *mimeType;
if (AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mimeType)) {
LOGD("Got mime type %s", mimeType);
} else {
LOGE("Failed to get mime type");
return 0;
}
// Obtain the correct decoder
AMediaCodec *codec = nullptr;
AMediaExtractor_selectTrack(extractor, 0);
codec = AMediaCodec_createDecoderByType(mimeType);
AMediaCodec_configure(codec, format, nullptr, nullptr, 0);
AMediaCodec_start(codec);
// DECODE
bool isExtracting = true;
bool isDecoding = true;
int32_t bytesWritten = 0;
while (isExtracting || isDecoding) {
if (isExtracting) {
// Obtain the index of the next available input buffer
ssize_t inputIndex = AMediaCodec_dequeueInputBuffer(codec, 2000);
//LOGV("Got input buffer %d", inputIndex);
// The input index acts as a status if it's negative
if (inputIndex < 0) {
if (inputIndex == AMEDIACODEC_INFO_TRY_AGAIN_LATER) {
// LOGV("Codec.dequeueInputBuffer try again later");
} else {
LOGE("Codec.dequeueInputBuffer unknown error status");
}
} else {
// Obtain the actual buffer and read the encoded data into it
size_t inputSize;
uint8_t *inputBuffer = AMediaCodec_getInputBuffer(codec, inputIndex,
&inputSize);
//LOGV("Sample size is: %d", inputSize);
ssize_t sampleSize = AMediaExtractor_readSampleData(extractor, inputBuffer,
inputSize);
auto presentationTimeUs = AMediaExtractor_getSampleTime(extractor);
if (sampleSize > 0) {
// Enqueue the encoded data
AMediaCodec_queueInputBuffer(codec, inputIndex, 0, sampleSize,
presentationTimeUs,
0);
AMediaExtractor_advance(extractor);
} else {
LOGD("End of extractor data stream");
isExtracting = false;
// We need to tell the codec that we've reached the end of the stream
AMediaCodec_queueInputBuffer(codec, inputIndex, 0, 0,
presentationTimeUs,
AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM);
}
}
}
if (isDecoding) {
// Dequeue the decoded data
AMediaCodecBufferInfo info;
ssize_t outputIndex = AMediaCodec_dequeueOutputBuffer(codec, &info, 0);
if (outputIndex >= 0) {
// Check whether this is set earlier
if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) {
LOGD("Reached end of decoding stream");
isDecoding = false;
} else {
// Valid index, acquire buffer
size_t outputSize;
uint8_t *outputBuffer = AMediaCodec_getOutputBuffer(codec, outputIndex,
&outputSize);
/*LOGV("Got output buffer index %d, buffer size: %d, info size: %d writing to pcm index %d",
outputIndex,
outputSize,
info.size,
m_writeIndex);*/
// copy the data out of the buffer
memcpy(targetData + bytesWritten, outputBuffer, info.size);
bytesWritten += info.size;
AMediaCodec_releaseOutputBuffer(codec, outputIndex, false);
}
} else {
// The outputIndex doubles as a status return if its value is < 0
switch (outputIndex) {
case AMEDIACODEC_INFO_TRY_AGAIN_LATER:
LOGD("dequeueOutputBuffer: try again later");
break;
case AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED:
LOGD("dequeueOutputBuffer: output buffers changed");
break;
case AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED:
LOGD("dequeueOutputBuffer: output outputFormat changed");
format = AMediaCodec_getOutputFormat(codec);
LOGD("outputFormat changed to: %s", AMediaFormat_toString(format));
break;
}
}
}
}
// Clean up
AMediaFormat_delete(format);
AMediaCodec_delete(codec);
AMediaExtractor_delete(extractor);
return bytesWritten;
}
Now the problem I am facing is that this code first extracts all of the audio data and saves it into a buffer, which then becomes part of AFileDataSource, a class I derived from the DataSource class in the same sample.
Only after the whole file has been extracted does playback start, through the onAudioReady() callback of the Oboe AudioStreamBuilder.
What I need is for playback to start while chunks of the audio buffer are still being streamed in.
Optional query: aside from the main question, this code blocks the UI even though I created a foreground service to communicate with the NDK functions that execute it. Any thoughts on this?
You probably solved this already, but for future readers...
You need a FIFO buffer to store the decoded audio. You can use Oboe's FIFO buffer, e.g. oboe::FifoBuffer.
You can have low/high watermarks for the buffer and a state machine, so you start decoding when the buffer is almost empty and stop decoding when it's full (you'll figure out the other states that you need); a rough sketch of this idea follows.
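A rough, untested sketch of that watermark logic; the Fifo/Decoder types, method names, and thresholds below are placeholders of mine, not Oboe APIs:
#include <chrono>
#include <cstdint>
#include <thread>

// Hypothetical decoding loop driven by low/high watermarks. The decoder thread
// keeps the FIFO topped up and pauses while playback drains it.
template <typename Fifo, typename Decoder>
void decodeLoop(Fifo& fifo, Decoder& decoder)
{
    const int32_t kHighWatermark = 48000;  // pause above ~1 s of audio at 48 kHz
    const int32_t kLowWatermark  = 8000;   // resume once playback has drained to this level
    bool paused = false;

    while (!decoder.finished()) {
        const int32_t buffered = fifo.framesAvailable();
        if (!paused && buffered >= kHighWatermark) paused = true;
        if (paused  && buffered <= kLowWatermark)  paused = false;
        if (paused) {
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
            continue;
        }
        decoder.decodeNextChunkInto(fifo);  // decode a small block of PCM and push it to the FIFO
    }
}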
As a side note, I implemented such a player only to find out some time later that the AAC codec is broken on some devices (Xiaomi and Amazon come to mind), so I had to throw away the AMediaCodec/AMediaExtractor parts and use an AAC library instead.
You have to implement a ring buffer (or use the one implemented in the Oboe example, LockFreeQueue.h) and, from the extracting thread, copy the data into buffers that you push onto the ring buffer. On the other end of the ring buffer, the audio thread takes that data from the queue and copies it into the audio buffer. This happens in the onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) callback that you have to implement in your class (see the Oboe docs); a minimal sketch is shown below. Be sure to follow all the good practices on the audio thread (don't allocate/deallocate memory there, no mutexes, no file I/O, etc.).
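A minimal sketch of such a callback (names like StreamingPlayer and mFifo are mine, not from the Oboe sample):
#include <cstdint>
#include <cstring>
#include <oboe/Oboe.h>

// FifoT stands for whatever FIFO you picked (oboe::FifoBuffer, the sample's
// LockFreeQueue, ...); it only needs a non-blocking read(void*, int32_t)
// that returns the number of frames actually read.
template <typename FifoT>
class StreamingPlayer : public oboe::AudioStreamCallback {
public:
    explicit StreamingPlayer(FifoT *fifo) : mFifo(fifo) {}

    oboe::DataCallbackResult onAudioReady(oboe::AudioStream *oboeStream,
                                          void *audioData,
                                          int32_t numFrames) override {
        int32_t framesRead = mFifo->read(audioData, numFrames);  // never block here
        if (framesRead < 0) framesRead = 0;
        if (framesRead < numFrames) {
            // Underrun: pad the rest with silence rather than waiting for the decoder.
            int32_t bytesPerFrame = oboeStream->getBytesPerFrame();
            std::memset(static_cast<uint8_t*>(audioData) + framesRead * bytesPerFrame,
                        0, static_cast<size_t>(numFrames - framesRead) * bytesPerFrame);
        }
        return oboe::DataCallbackResult::Continue;
    }

private:
    FifoT *mFifo;  // owned elsewhere; written to by the decoding thread
};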
Optional query: a Service does not run in a separate thread, so if you call it from the UI thread it obviously blocks the UI. Look at other kinds of services: an IntentService, or a service with a Messenger, will launch a separate thread on the Java side, or you can create threads on the C++ side using std::thread.

Load config parameters on Arduino ESP8266

I am using an Arduino ESP8266 to store and load configuration settings on SPIFFS. I used this ConfigFile.ino as a reference example:
https://github.com/esp8266/Arduino/blob/master/libraries/esp8266/examples/ConfigFile/ConfigFile.ino
This function loads the configuration settings into the variables serverName and accessToken.
bool loadConfig() {
File configFile = SPIFFS.open("/config.json", "r");
if (!configFile) {
Serial.println("Failed to open config file");
return false;
}
size_t size = configFile.size();
if (size > 1024) {
Serial.println("Config file size is too large");
return false;
}
// Allocate a buffer to store contents of the file.
std::unique_ptr<char[]> buf(new char[size]);
// We don't use String here because ArduinoJson library requires the input
// buffer to be mutable. If you don't use ArduinoJson, you may as well
// use configFile.readString instead.
configFile.readBytes(buf.get(), size);
StaticJsonBuffer<200> jsonBuffer;
JsonObject& json = jsonBuffer.parseObject(buf.get());
if (!json.success()) {
Serial.println("Failed to parse config file");
return false;
}
const char* serverName = json["serverName"];
const char* accessToken = json["accessToken"];
// Real world application would store these values in some variables for
// later use.
Serial.print("Loaded serverName: ");
Serial.println(serverName);
Serial.print("Loaded accessToken: ");
Serial.println(accessToken);
return true;
}
I made some modifications to this function to load the configuration settings into a struct.
struct ConfigSettingsStruct
{
String ssid;
String password;
};
ConfigSettingsStruct ConfigSettings;
bool loadConfig() {
File configFile = SPIFFS.open("/config.json", "r");
if (!configFile) {
Serial.println("Failed to open config file");
return false;
}
size_t size = configFile.size();
if (size > 1024) {
Serial.println("Config file size is too large");
return false;
}
// Allocate a buffer to store contents of the file.
std::unique_ptr<char[]> buf(new char[size]);
// We don't use String here because ArduinoJson library requires the input
// buffer to be mutable. If you don't use ArduinoJson, you may as well
// use configFile.readString instead.
configFile.readBytes(buf.get(), size);
StaticJsonBuffer<200> jsonBuffer;
JsonObject& json = jsonBuffer.parseObject(buf.get());
if (!json.success()) {
Serial.println("Failed to parse config file");
return false;
}
//const char* serverName = json["serverName"];
//const char* accessToken = json["accessToken"];
char ssid_[30];
strcpy(ssid_, json["ssid"]);
ConfigSettings.ssid = String(ssid_);
char password_[30];
strcpy(password_, json["password"]);
ConfigSettings.password = String(password_);
// Real world application would store these values in some variables for
// later use.
Serial.print("Loaded ssid: ");
Serial.println(ConfigSettings.ssid);
Serial.print("Loaded password: ");
Serial.println(ConfigSettings.password);
return true;
}
After I flash the code and run it on the ESP8266, the WiFi chip resets with a stack error. What is wrong with my code? How can the config settings be properly loaded into ConfigSettings?
There is nothing wrong with the code in your question; it should work. I strongly suspect that the cause of the stack error lies elsewhere, so please check the rest of your code carefully.
This does not count as an answer, but it may be helpful as a reminder to look elsewhere. You may be looking in the wrong place.
Please note that you have a possible memory leak after
std::unique_ptr<char[]> buf(new char[size]);
I suggest allocating the memory via malloc (not stylish, but classic) and freeing it when you are done. You also need to close the file before each return.
Also, your ssid and passphrase buffers are not long enough. The maximum SSID length is 32 characters, and assuming you use PSK-based encryption, you need to increase the password buffer length to 64; see the sketch below.
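A minimal sketch of suitably sized buffers, assuming the same ArduinoJson version as in the question (strlcpy is available in the ESP8266 Arduino core); the extra byte in each buffer is for the terminating NUL:
char ssid_[33];      // an SSID can be up to 32 characters
char password_[65];  // a WPA passphrase can be up to 64 characters

const char* s = json["ssid"];      // may be null if the key is missing
const char* p = json["password"];
strlcpy(ssid_, s ? s : "", sizeof(ssid_));
strlcpy(password_, p ? p : "", sizeof(password_));
ConfigSettings.ssid = String(ssid_);
ConfigSettings.password = String(password_);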
A tiny thing, but you might consider adding a typedef before the struct definition, even though C++ already lets you use the struct name as a type.

How to properly delimit multiple images before sending them over a socket

Let's say I need to send, for instance, five images from a client to a server over a socket, and I want to do it all at once (not sending one and waiting for an ACK).
Questions:
I'd like to know if there are some best practices or guidelines for delimiting the end of each one.
What would be the safest approach for detecting the delimiters and processing each image once in the server? (In C/C++ if possible)
Thanks in advance!
Since images are binary data, it would be difficult to come up with a delimiter that cannot occur inside the image itself (which would ultimately confuse the receiving side).
I would advise you to create a header that is placed at the beginning of the transmission, or at the beginning of each image.
An example:
struct Header
{
uint32_t ImageLength;
// char ImageName[128];
} __attribute__((packed));
The sender should prepend this header to each image and fill in the length correctly. The receiver then knows where the image ends and expects another Header structure at that position; a minimal sender sketch is shown after the layout below.
The __attribute__((packed)) is a safety measure that makes sure the header has the same layout even if you compile the server and client with different GCC versions. It's recommended whenever structures are interpreted by different processes.
Data Stream:
Header
Image Data
Header
Image Data
Header
Image Data
...
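A minimal, untested sender sketch based on this layout; the socket handling and the htonl() byte-order choice are my own assumptions, not part of the original answer:
#include <arpa/inet.h>
#include <sys/socket.h>
#include <cstdint>

struct Header
{
    uint32_t ImageLength;
} __attribute__((packed));   // same Header as above

// Sends one image preceded by its Header. Assumes `sock` is a connected
// TCP socket and that `data`/`len` hold the whole image in memory.
bool send_image(int sock, const uint8_t *data, uint32_t len)
{
    Header h;
    h.ImageLength = htonl(len);  // fixed byte order so both ends agree
    if (send(sock, &h, sizeof(h), 0) != (ssize_t)sizeof(h))
        return false;

    size_t sent = 0;
    while (sent < len) {         // send() may transmit less than requested
        ssize_t n = send(sock, data + sent, len - sent, 0);
        if (n <= 0)
            return false;
        sent += (size_t)n;
    }
    return true;
}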
You can use these functions to send files (from a client in Java) to a server (in C). The idea is to send 4 bytes indicating the file's size, followed by the file content; when all files have been sent, send 4 bytes (all set to zero) to indicate the end of the transfer.
// Compile with Microsoft Visual Studio 2008
// path, if not empty, must be ended with a path separator '/'
// for example: "C:/MyImages/"
int receiveFiles(SOCKET sck, const char *pathDir)
{
int fd;
long fSize=0;
char buffer[8 * 1024];
char filename[MAX_PATH];
int count=0;
// keep on receiving until we get the appropriate signal
// or the socket has an error
while (true)
{
if (recv(sck, buffer, 4, 0) != 4)
{
// socket is closed or has an error
// return what we've received so far
return count;
}
fSize = (int) ((buffer[0] & 0xff) << 24) |
(int) ((buffer[1] & 0xff) << 16) |
(int) ((buffer[2] & 0xff) << 8) |
(int) (buffer[3] & 0xff);
if (fSize == 0)
{
// received final signal
return count;
}
sprintf(filename, "%sIMAGE_%d.img", pathDir, count+1);
fd = _creat(filename, _S_IREAD | _S_IWRITE);
int iReads;
int iRet;
int iLeft=fSize;
while (iLeft > 0)
{
if (iLeft > sizeof(buffer)) iReads = sizeof(buffer);
else iReads=iLeft;
if ((iRet=recv(sck, buffer, iReads, 0)) <= 0)
{
_close(fd);
// you may delete the file or leave it to inspect
// _unlink(filename);
return count; // socket is closed or has an error
}
iLeft-=iRet;
_write(fd, buffer, iRet);
}
count++;
_close(fd);
}
}
The client part
/**
* Send a file to a connected socket.
* <p>
* First it sends the file size in 4 bytes, then the file's content.
* </p>
* <p>
* Note: File size is limited to a 32bit signed integer, 2GB
* </p>
*
* @param os
* OutputStream of the connected socket
* @param fileName
* The complete file's path of the image to send
* @throws Exception
* @see {@link receiveFile} for an example on how to receive the file from the other side.
*
*/
public void sendFile(OutputStream os, String fileName) throws Exception
{
// File to send
File myFile = new File(fileName);
int fSize = (int) myFile.length();
if (fSize == 0) return; // No empty files
if (fSize < myFile.length())
{
System.out.println("File is too big'");
throw new IOException("File is too big.");
}
// Send the file's size
byte[] bSize = new byte[4];
bSize[0] = (byte) ((fSize & 0xff000000) >> 24);
bSize[1] = (byte) ((fSize & 0x00ff0000) >> 16);
bSize[2] = (byte) ((fSize & 0x0000ff00) >> 8);
bSize[3] = (byte) (fSize & 0x000000ff);
// 4 bytes containing the file size
os.write(bSize, 0, 4);
// In case of memory limitations set this to false
boolean noMemoryLimitation = true;
FileInputStream fis = new FileInputStream(myFile);
BufferedInputStream bis = new BufferedInputStream(fis);
try
{
if (noMemoryLimitation)
{
// Use to send the whole file in one chunk
byte[] outBuffer = new byte[fSize];
int bRead = bis.read(outBuffer, 0, outBuffer.length);
os.write(outBuffer, 0, bRead);
}
else
{
// Use to send in a small buffer, several chunks
int bRead = 0;
byte[] outBuffer = new byte[8 * 1024];
while ((bRead = bis.read(outBuffer, 0, outBuffer.length)) > 0)
{
os.write(outBuffer, 0, bRead);
}
}
os.flush();
}
finally
{
bis.close();
}
}
To send the files from the client:
try
{
// The file name must be a fully qualified path
sendFile(mySocket.getOutputStream(), "C:/MyImages/orange.png");
sendFile(mySocket.getOutputStream(), "C:/MyImages/lemmon.png");
sendFile(mySocket.getOutputStream(), "C:/MyImages/apple.png");
sendFile(mySocket.getOutputStream(), "C:/MyImages/papaya.png");
// send the end of the transmission
byte[] buff = new byte[4];
buff[0]=0x00;
buff[1]=0x00;
buff[2]=0x00;
buff[3]=0x00;
mySocket.getOutputStream().write(buff, 0, 4);
}
catch (Exception e)
{
e.printStackTrace();
}
If you cannot easily send a header containing the length, use some unlikely delimiter. If the images are not compressed and consist of bitmap-style data, maybe 0xFF/0xFFFF/0xFFFFFFFF could work, since fully saturated luminance values are usually rare?
Use an escape sequence to eliminate any instances of the delimiter that turn up inside your data; a small byte-stuffing sketch is shown below.
This does mean iterating over all the data at both ends, but depending on your data flows, and what is being done anyway, it may be a useful solution :(
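For illustration, a small byte-stuffing sketch; the DELIM/ESC values are arbitrary choices of mine, not anything mandated by the image format:
#include <cstdint>
#include <vector>

// DELIM terminates an image; ESC is inserted before any DELIM or ESC byte
// that occurs inside the image data. The receiver strips the ESC bytes and
// treats an unescaped DELIM as "image complete".
constexpr uint8_t DELIM = 0xFF;
constexpr uint8_t ESC   = 0xFE;

std::vector<uint8_t> escapeImage(const std::vector<uint8_t> &raw)
{
    std::vector<uint8_t> out;
    out.reserve(raw.size() + raw.size() / 8);
    for (uint8_t b : raw) {
        if (b == DELIM || b == ESC)
            out.push_back(ESC);   // escape bytes that would look like markers
        out.push_back(b);
    }
    out.push_back(DELIM);         // unescaped delimiter marks the end of this image
    return out;
}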

zlib's uncompress() strangely returning Z_BUF_ERROR

I'm writing a Qt-based client application. It connects to a remote server using QTcpSocket. Before sending any actual data it needs to send login info, which is zlib-compressed JSON.
As far as I can tell from the server sources, to make everything work I need to send 4 bytes containing the length of the uncompressed data, followed by X bytes of compressed data; a minimal sketch of how I understand that framing is shown below.
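For reference, here is a minimal sketch of mine (not code from pushpool or from my client) of that framing: a 4-byte little-endian uncompressed length followed by the zlib-compressed payload:
#include <cstdint>
#include <cstring>
#include <vector>
#include <zlib.h>

std::vector<uint8_t> frameLogin(const char *json)
{
    uLong srcLen = std::strlen(json);
    uLongf dstLen = compressBound(srcLen);             // worst-case compressed size
    std::vector<uint8_t> out(4 + dstLen);

    // little-endian length prefix, matching the server's le32toh()
    out[0] = static_cast<uint8_t>(srcLen & 0xff);
    out[1] = static_cast<uint8_t>((srcLen >> 8) & 0xff);
    out[2] = static_cast<uint8_t>((srcLen >> 16) & 0xff);
    out[3] = static_cast<uint8_t>((srcLen >> 24) & 0xff);

    if (compress(reinterpret_cast<Bytef *>(out.data() + 4), &dstLen,
                 reinterpret_cast<const Bytef *>(json), srcLen) != Z_OK)
        return {};

    out.resize(4 + dstLen);                            // shrink to the actual compressed size
    return out;
}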
Uncompressing on server-side looks like this:
/* look at first 32 bits of buffer, which contains uncompressed len */
unc_len = le32toh(*((uint32_t *)buf));
if (unc_len > CLI_MAX_MSG)
return NULL;
/* alloc buffer for uncompressed data */
obj_unc = malloc(unc_len + 1);
if (!obj_unc)
return NULL;
/* decompress buffer (excluding first 32 bits) */
comp_p = buf + 4;
if (uncompress(obj_unc, &dest_len, comp_p, buflen - 4) != Z_OK)
goto out;
if (dest_len != unc_len)
goto out;
memcpy(obj_unc + unc_len, &zero, 1); /* null terminate */
I'm compressing the JSON using Qt's built-in zlib (I just downloaded the headers and placed them in MinGW's include folder):
char json[] = "{\"version\":1,\"user\":\"test\"}";
char pass[] = "test";
std::auto_ptr<Bytef> message(new Bytef[ // allocate memory for:
sizeof(ubbp_header) // + msg header
+ sizeof(uLongf) // + uncompressed data size
+ strlen(json) // + compressed data itself
+ 64 // + reserve (if compressed size > uncompressed size)
+ SHA256_DIGEST_LENGTH]);//+ SHA256 digest
uLongf unc_len = strlen(json);
uLongf enc_len = strlen(json) + 64;
// header goes first, so server will determine that we want to login
Bytef* pHdr = message.get();
// after that: uncompressed data length and data itself
Bytef* pLen = pHdr + sizeof(ubbp_header);
Bytef* pDat = pLen + sizeof(uLongf);
// hash of compressed message updated with user pass
Bytef* pSha;
if (Z_OK != compress(pLen, &enc_len, (Bytef*)json, unc_len))
{
qDebug("Compression failed.");
return false;
}
Complete function code here: http://pastebin.com/hMY2C4n5
Even though the server correctly receives the uncompressed length, uncompress() returns Z_BUF_ERROR.
P.S.: I'm actually writing a client for pushpool to figure out how its binary protocol works. I've asked this question on the official Bitcoin forum, but no luck there. http://forum.bitcoin.org/index.php?topic=24257.0
Turns out it was a server-side bug. More details in the Bitcoin forum thread.