Read a binary file (exe, zip, etc.) into a char* in C++

I'm trying to read a file and put its bytes into a byte buffer, but when I try to read an exe or zip file, not all of the bytes are loaded into the buffer. My function:
char* read_file_bytes(const string &name) {
    FILE *img = fopen(name.c_str(), "rb");
    fseek(img, 0, SEEK_END);
    unsigned long filesize = ftell(img);
    char *buffer = (char*)malloc(sizeof(char) * filesize);
    rewind(img);
    fread(buffer, sizeof(char), filesize, img);
    return buffer;
}
The piece of code that checks the buffer:
char* bytes = read_file_bytes(path);
for(int i = 0; i < strlen(bytes); i++)
    cout << hex << (unsigned int)(bytes[i]);

strlen() is designed for text strings, not for binary data. It stops counting at the first NUL character (0x00), which binary data is very likely to contain.
Your read_file_bytes() function knows how many bytes it reads in. You need to return that count to the caller, e.g.:
typedef unsigned char byte;

byte* read_file_bytes(const std::string &name, unsigned long &filesize)
{
    filesize = 0;

    FILE *img = fopen(name.c_str(), "rb");
    if (!img)
        return NULL;

    if (fseek(img, 0, SEEK_END) != 0)
    {
        fclose(img);
        return NULL;
    }

    long size = ftell(img);
    if (size == -1L)
    {
        fclose(img);
        return NULL;
    }

    byte *buffer = static_cast<byte*>(std::malloc(size));
    if (!buffer)
    {
        fclose(img);
        return NULL;
    }

    rewind(img);
    if (fread(buffer, 1, size, img) < static_cast<size_t>(size))
    {
        free(buffer);
        fclose(img);
        return NULL;
    }

    fclose(img);

    filesize = size;
    return buffer;
}
unsigned long filesize;
byte* bytes = read_file_bytes(path, filesize);
for(unsigned long i = 0; i < filesize; ++i)
    std::cout << std::hex << static_cast<unsigned int>(bytes[i]);
free(bytes);
Note that this approach is very C-ish and error prone. A more C++ approach would look like this instead:
using byte = unsigned char;
// or, use std::byte in C++17 and later...

std::vector<byte> read_file_bytes(const std::string &name)
{
    std::ifstream img;
    img.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    img.open(name.c_str(), std::ifstream::binary | std::ifstream::ate);

    std::ifstream::pos_type size = img.tellg();
    img.seekg(0, std::ios::beg);
    // or, use std::filesystem::file_size() instead...

    std::vector<byte> buffer(size);
    img.read(reinterpret_cast<char*>(buffer.data()), size);

    return buffer;
}
std::vector<byte> bytes = read_file_bytes(path);
for(byte b : bytes)
    std::cout << std::hex << static_cast<unsigned int>(b);
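As the comment in the function above suggests, C++17's std::filesystem::file_size() can replace the seek-to-end trick. A minimal sketch of that variant (the function name is illustrative, and it assumes C++17 is available and the file exists):
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

std::vector<unsigned char> read_file_bytes_fs(const std::string &name)
{
    // query the size up front instead of seeking to the end
    const auto size = std::filesystem::file_size(name);

    std::ifstream img(name, std::ifstream::binary);
    img.exceptions(std::ifstream::failbit | std::ifstream::badbit);

    std::vector<unsigned char> buffer(static_cast<std::size_t>(size));
    img.read(reinterpret_cast<char*>(buffer.data()), static_cast<std::streamsize>(size));
    return buffer;
}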

Related

Reading file into buffer and avoiding splitting lines between reads

I was reading sehe's answer for fast text file reading in C++, which looks like this.
static uintmax_t wc(char const *fname)
{
    static const auto BUFFER_SIZE = 16*1024;
    int fd = open(fname, O_RDONLY);
    if(fd == -1)
        handle_error("open");

    /* Advise the kernel of our access pattern. */
    posix_fadvise(fd, 0, 0, 1); // FDADVICE_SEQUENTIAL

    char buf[BUFFER_SIZE + 1];
    uintmax_t lines = 0;

    while(size_t bytes_read = read(fd, buf, BUFFER_SIZE))
    {
        if(bytes_read == (size_t)-1)
            handle_error("read failed");
        if (!bytes_read)
            break;

        for(char *p = buf; (p = (char*) memchr(p, '\n', (buf + bytes_read) - p)); ++p)
            ++lines;
    }

    return lines;
}
This is cool, but I was wondering if a similar approach can be taken when we aren't dealing with a character operation like counting newlines, but want to operate on each line of data. Say for instance I had a file of doubles, and already some function parse_line_to_double to use on each line.
12.44243
4242.910
...
That is, how can I read BUFFER_SIZE bytes into my buffer but avoid splitting the last line read? Effectively, can I ask "Give me BUFFER_SIZE or less bytes while ensuring that the last byte read is a newline character (or EOF)"?
Knowing extremely little about low-level I/O like this, two ideas came to mind:
Can I "back up" fd to the most recent newline between iterations?
Do I have to keep a second buffer holding a copy of the current line being read all the time? (A sketch of this carry-over idea follows below.)
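One way to realize that second idea (this is only a sketch, not part of the answers that follow) is to carry the trailing partial line over to the next read in a small std::string; handle_line here is a hypothetical callback standing in for whatever per-line parsing you want, such as parse_line_to_double:
#include <cstring>
#include <string>
#include <fcntl.h>
#include <unistd.h>

void handle_line(const char *line, size_t len); // hypothetical per-line callback

void read_by_lines(const char *fname)
{
    static const size_t BUFFER_SIZE = 16 * 1024;
    char buf[BUFFER_SIZE];
    std::string leftover; // partial line carried over from the previous read

    int fd = open(fname, O_RDONLY);
    if (fd == -1)
        return;

    ssize_t bytes_read;
    while ((bytes_read = read(fd, buf, BUFFER_SIZE)) > 0)
    {
        char *begin = buf;
        char *end = buf + bytes_read;
        char *nl;
        while ((nl = (char *) memchr(begin, '\n', end - begin)))
        {
            leftover.append(begin, nl);   // finish the pending line
            handle_line(leftover.data(), leftover.size());
            leftover.clear();
            begin = nl + 1;
        }
        leftover.append(begin, end);      // stash the trailing partial line
    }
    if (!leftover.empty())                // the file may not end with '\n'
        handle_line(leftover.data(), leftover.size());

    close(fd);
}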
Here is a comparison test. First, let's try the easy way: just read the file with standard C++ functions:
#include <iostream>
#include <string>
#include <fstream> //std::ifstream
#include <sstream> //std::stringstream

uintmax_t test1(char const *fname)
{
    std::ifstream fin(fname);
    if(!fin) return 0;

    uintmax_t lines = 0;
    std::string str;
    double value;

    while(fin >> value)
    {
        //std::cout << value << "\n";
        lines++;
    }
    return lines;
}
Next, with std::stringstream this is about 2.5 times faster:
uintmax_t test2(char const *fname)
{
    std::ifstream fin(fname);
    if(!fin) return 0;

    uintmax_t lines = 0;
    std::string str;
    double value;

    std::stringstream ss;
    ss << fin.rdbuf();

    while(ss >> value)
        lines++;

    return lines;
}
Next, let's read the whole file into memory. This will be fine as long as the file is less than 1 GiB or so. Assuming there is a double value on each line, let's extract those values. test3 is more complicated and less flexible, and it's not any faster than test2:
uintmax_t test3(char const *fname)
{
    std::ifstream fin(fname, std::ios::binary);
    if(!fin) return 0;

    fin.seekg(0, std::ios::end);
    size_t filesize = (size_t)fin.tellg();
    fin.seekg(0);
    std::string str(filesize, 0);
    fin.read(&str[0], filesize);

    double value;
    uintmax_t lines = 0;
    size_t beg = 0;
    size_t i;
    size_t len = str.size();
    for(i = 0; i < len; i++)
    {
        if(str[i] == '\n' || i == len - 1)
        {
            try
            {
                value = std::stod(str.substr(beg, i - beg));
                //std::cout << value << "\n";
                beg = i + 1;
                lines++;
            }
            catch(...)
            {
            }
        }
    }
    return lines;
}
For comparison with the wc function in the question, let's read the whole file into memory and only count the number of lines. This runs a little faster than wc (as expected), suggesting that there is no need for additional optimizations.
uintmax_t test_countlines(char const *fname)
{
    std::ifstream fin(fname, std::ios::binary);
    if(!fin) return 0;

    fin.seekg(0, std::ios::end);
    size_t filesize = (size_t)fin.tellg();
    fin.seekg(0);
    std::string str(filesize, 0);
    fin.read(&str[0], filesize);

    uintmax_t lines = 0;
    for(auto &c : str)
        if(c == '\n')
            lines++;
    return lines;
}

Can't load file using fopen()

I'm creating a program that takes a file and encrypts it, but now I have a problem opening the file for reading: fopen() always returns 0.
void run() {
    char buffer[260] = { '\0' };
    GetWindowTextA(Path, buffer, 260);
    encryptFile(buffer, "C:\\Users\\DownD\\Desktop\\Some.dat");
}
I think the problem is somewhere in this run() function, because when I replace the buffer array with a literal string such as "C:\\Somefile.exe", calling encryptFile() as:
encryptFile("C:\\Somefile.exe", "C:\\Users\\DownD\\Desktop\\Some.dat");
it reads the file nice and clean.
Here are the relevant parts of the rest of the project.
int CCrypter::encryptFile(char* filePath, LPCSTR outFile)
{
    unsigned char* data = NULL;
    int cypherSize;
    int fSize = readFile(data, filePath);
    if (!fSize)
        return 2;

    unsigned char *ciphertext = new unsigned char[fSize];
    cypherSize = encrypt(data, fSize, ciphertext);
    if (!cypherSize)
        return 3;

    if (!Create_File(ciphertext, cypherSize, outFile))
        return 4;

    return 1;
}

int CCrypter::readFile(unsigned char *&buffer, const char* path)
{
    int lenght = 0;
    OutputDebugString(path);
    FILE* input = fopen(path, "rb");
    if (!input) // Input is always 0
        return 0;

    fseek(input, 0, SEEK_END);
    lenght = ftell(input);
    buffer = new unsigned char[lenght];
    printf("%d", buffer);
    ZeroMemory(buffer, lenght);
    rewind(input);

    if (!fread(buffer, 1, lenght, input))
        return 0;

    fclose(input);
    return lenght;
}
Just to clarify, I'm using the Multi-Byte Character Set.
I solved the issue. The problem was that I had opened the file earlier and never closed it; that was why I was getting "permission denied".
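When fopen() returns NULL, checking errno right away (e.g. with perror() or strerror()) makes this kind of failure much easier to diagnose. A minimal sketch (the helper name is illustrative):
#include <cstdio>
#include <cerrno>
#include <cstring>

FILE *open_or_report(const char *path)
{
    FILE *input = fopen(path, "rb");
    if (!input)
    {
        // errno says *why* fopen failed, e.g. "Permission denied" when the
        // file is still held open elsewhere without sharing.
        std::fprintf(stderr, "fopen(%s) failed: %s\n", path, std::strerror(errno));
    }
    return input;
}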

C++: Store read binary file into buffer

I'm trying to read a binary file and store it in a buffer. The problem is that the binary file contains multiple null characters ('\0'), and they are not at the end; they appear before other binary data, so the text after a '\0' seems to get lost from the buffer.
Example:
char * a = "this is a\0 test";
cout << a;
This will just output: this is a
Here's my real code. This function reads one character:
bool CStream::Read (int * _OutChar)
{
    if (!bInitialized)
        return false;

    int iReturn = 0;
    *_OutChar = fgetc (pFile);

    if (*_OutChar == EOF)
        return false;

    return true;
}
And this is how I use it:
char * SendData = new char[4096 + 1];
for (i = 0; i < 4096; i++)
{
    if (Stream.Read (&iChar))
        SendData[i] = iChar;
    else
        break;
}
I just want to mention that there is a standard way to read from a binary file into a buffer.
Using <cstdio>:
char buffer[BUFFERSIZE];
FILE * filp = fopen("filename.bin", "rb");
size_t bytes_read = fread(buffer, sizeof(char), BUFFERSIZE, filp);
Using <fstream>:
std::ifstream fin("filename.bin", ios::in | ios::binary );
fin.read(buffer, BUFFERSIZE);
What you do with the buffer afterwards is all up to you of course.
Edit: Full example using <cstdio>
#include <cstdio>
#include <utility> // std::swap

const int BUFFERSIZE = 4096;

int main() {
    const char * fname = "filename.bin";
    FILE* filp = fopen(fname, "rb");
    if (!filp) { printf("Error: could not open file %s\n", fname); return -1; }

    char * buffer = new char[BUFFERSIZE];
    size_t bytes;
    while ((bytes = fread(buffer, sizeof(char), BUFFERSIZE, filp)) > 0) {
        // Do something with the bytes, the first elements of buffer.
        // For example, reversing the data and forgetting about it afterwards!
        for (char *beg = buffer, *end = buffer + bytes - 1; beg < end; beg++, end--) {
            std::swap(*beg, *end);
        }
    }

    // Done and close.
    fclose(filp);
    return 0;
}
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

static std::vector<unsigned char> read_binary_file (const std::string filename)
{
    // binary mode is only for switching off newline translation
    std::ifstream file(filename, std::ios::binary);
    file.unsetf(std::ios::skipws);

    std::streampos file_size;
    file.seekg(0, std::ios::end);
    file_size = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<unsigned char> vec;
    vec.reserve(file_size);
    vec.insert(vec.begin(),
               std::istream_iterator<unsigned char>(file),
               std::istream_iterator<unsigned char>());
    return (vec);
}
and then
auto vec = read_binary_file(filename);
auto src = (char*) new char[vec.size()];
std::copy(vec.begin(), vec.end(), src);
The problem is definitely in the writing of your buffer, not in the reading, because you read one byte at a time.
If you know the length of the data in your buffer, you could force cout to go on:
char *bf = "Hello\0 world";
cout << bf << endl;
cout << string(bf, 12) << endl;
This should give the following output:
Hello
Hello world
However, this is a workaround, as cout is intended to output printable data. Be aware that the output of non-printable chars such as '\0' is system dependent.
Alternative solutions:
But if you manipulate binary data, you should define ad-hoc data structures and printing. Here are some hints, with a quick draft of the general principles:
struct Mybuff {  // special structure to manage buffers of binary data
    static const int maxsz = 512;
    int size;
    char buffer[maxsz];
    void set(char *src, int sz)  // binary copy of data of a given length
    { size = std::min(sz, maxsz); memcpy(buffer, src, size); } // clamp to maxsz to avoid overflowing the buffer
};
Then you could overload the output operator function:
ostream& operator<< (ostream& os, Mybuff &b)
{
    for (int i = 0; i < b.size; i++)
        os.put(isprint(b.buffer[i]) ? b.buffer[i] : '*'); // non-printables replaced with *
    return os;
}
And you could use it like this:
char *bf = "Hello\0 world";
Mybuff my;
my.set(bf, 13); // physical copy of memory
cout << my << endl; // special output
I believe your problem is not in reading the data, but rather in how you try to print it.
char * a = "this is a\0 test";
cout << a;
This example you show us prints a C-string. Since a C-string is a sequence of chars terminated by '\0', the printing function stops at the first null char.
This is because you need to know where the string ends, either by using a special terminating character (like '\0' here) or by knowing its length.
So, to print the whole data, you must know its length and use a loop similar to the one you use for reading it.
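For example (a minimal sketch; bytes_filled is a hypothetical counter you would keep while filling SendData):
// Write all bytes, including any embedded '\0' characters.
std::cout.write(SendData, bytes_filled);

// Or print each byte as hex instead:
for (int i = 0; i < bytes_filled; i++)
    std::cout << std::hex << (unsigned int)(unsigned char)SendData[i] << ' ';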
Are you on Windows? If so you need to execute _setmode(_fileno(stdout), _O_BINARY);
Include <fcntl.h> and <io.h>
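A minimal sketch of that (Windows-only; it switches stdout to binary mode so newline bytes are not translated to "\r\n"):
#include <cstdio>
#include <fcntl.h>
#include <io.h>

int main() {
    _setmode(_fileno(stdout), _O_BINARY); // raw bytes on stdout from here on
    // ... write binary data to stdout ...
    return 0;
}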

fopen - can't write more than 16K?

I'm currently using fopen to write/read binary files. With small files all is fine, but in some cases, when the content is just over 16K, the remainder of the file is invalid.
The code is simple, fopen ... fread/fwrite ... fflush ... fclose !
I have tried with C++, but now I get a problem during the read: BinaryDefaultRead returns -1 and I really don't know why. I only write 4 bytes at a time.
This is under Win7 64-bit with the MSVC 2008 compiler.
#include <fstream>
using namespace std;

size_t BinaryDefaultRead(ifstream& stream, void* buffer, unsigned int bufferSize)
{
    //return fread(buffer, 1, (size_t) bufferSize, file);
    stream.read((char*)buffer, bufferSize);
    if (!stream)
        return -1;
    return bufferSize;
}

size_t BinaryDefaultWrite(ofstream& stream, const void* buffer, unsigned int bufferSize)
{
    //return fwrite(buffer, 1, (size_t) bufferSize, file);
    stream.write((char*)buffer, bufferSize);
    if (!stream)
        return -1;
    return bufferSize;
}

// Read an unsigned integer from a stream in a machine endian independent manner (for portability).
size_t BinaryReadUINT(ifstream& stream, unsigned int* value)
{
    unsigned char buf[4];
    size_t result = BinaryDefaultRead(stream, (void *)buf, 4);
    if (result < 0)
        return result;

    *value = ((unsigned int) buf[0]) |
             (((unsigned int) buf[1]) << 8) |
             (((unsigned int) buf[2]) << 16) |
             (((unsigned int) buf[3]) << 24);
    return result;
}

// Write an unsigned integer to a stream in a machine endian independent manner (for portability).
size_t BinaryWriteUINT(ofstream& stream, unsigned int aValue)
{
    unsigned char buf[4];
    buf[0] = aValue & 0x000000ff;
    buf[1] = (aValue >> 8) & 0x000000ff;
    buf[2] = (aValue >> 16) & 0x000000ff;
    buf[3] = (aValue >> 24) & 0x000000ff;
    return BinaryDefaultWrite(stream, (void*)buf, 4);
}

// Read a floating point value from a stream in a machine endian independent manner (for portability).
size_t BinaryReadFLOAT(ifstream& stream, float* value)
{
    union {
        float f;
        unsigned int i;
    } u;
    size_t result = BinaryReadUINT(stream, &u.i);
    if (result < 0)
        return result;
    *value = u.f;
    return result;
}

// Write a floating point value to a stream in a machine endian independent manner (for portability).
size_t BinaryWriteFLOAT(ofstream& stream, float aValue)
{
    union {
        float f;
        unsigned int i;
    } u;
    u.f = aValue;
    return BinaryWriteUINT(stream, u.i);
}

size_t BinaryReadUINTArray(ifstream& stream, unsigned int* buffer, unsigned int count)
{
    size_t result;
    for(unsigned int i = 0; i < count; i++)
    {
        result = BinaryReadUINT(stream, buffer + i);
        if (result < 0)
            return result;
    }
    return result;
}

size_t BinaryWriteUINTArray(ofstream& stream, unsigned int* buffer, unsigned int count)
{
    size_t result;
    for(unsigned int i = 0; i < count; i++)
    {
        result = BinaryWriteUINT(stream, buffer[i]);
        if (result < 0)
            return result;
    }
    return result;
}

size_t BinaryReadFLOATArray(ifstream& stream, float* buffer, unsigned int count)
{
    size_t result;
    for(unsigned int i = 0; i < count; i++)
    {
        result = BinaryReadFLOAT(stream, buffer + i);
        if (result < 0)
            return result;
    }
    return result;
}

size_t BinaryWriteFLOATArray(ofstream& stream, float* buffer, unsigned int count)
{
    size_t result;
    for(unsigned int i = 0; i < count; i++)
    {
        result = BinaryWriteFLOAT(stream, buffer[i]);
        if (result < 0)
            return result;
    }
    return result;
}
fopen is only used to open a file stream, not to read or write. fread and fwrite are used to do that.
fwrite and fread don't guarantee that they write or read all the elements you pass them: they return the number of elements actually processed, which can be less than the number you requested.
Just check the returned value and keep fwrite-ing until you have written all of your elements, or until there's an error with the stream: use ferror to check for an error.
From fwrite manual:
fread() and fwrite() return the number of items successfully read or written (i.e., not the number of characters). If an error occurs, or the end-of-file is reached, the return value is a short item count (or zero).
fread() does not distinguish between end-of-file and error, and callers must use feof(3) and ferror(3) to determine which occurred.
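Putting that advice into code, a minimal sketch of such a keep-writing loop (the helper name write_all is hypothetical; it writes with an element size of 1 byte):
#include <cstdio>

// Keep calling fwrite until all 'size' bytes are written or the stream errors out.
bool write_all(FILE *file, const void *data, size_t size)
{
    const char *p = static_cast<const char *>(data);
    while (size > 0)
    {
        size_t written = fwrite(p, 1, size, file);
        if (written == 0)          // nothing written: check ferror(file) for the reason
            return false;
        p += written;
        size -= written;
    }
    return true;
}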

How to compress a buffer with zlib?

There is a usage example at the zlib website: http://www.zlib.net/zlib_how.html
However, in the example they are compressing a file. I would like to compress binary data stored in a buffer in memory. I don't want to save the compressed buffer to disk either.
Basically here is my buffer:
fIplImageHeader->imageData = (char*)imageIn->getFrame();
How can I compress it with zlib?
I would appreciate some code example of how to do that.
zlib.h has all the functions you need: compress (or compress2) and uncompress. See the source code of zlib for an answer.
ZEXTERN int ZEXPORT compress OF((Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen));
/*
Compresses the source buffer into the destination buffer. sourceLen is
the byte length of the source buffer. Upon entry, destLen is the total size
of the destination buffer, which must be at least the value returned by
compressBound(sourceLen). Upon exit, destLen is the actual size of the
compressed buffer.
compress returns Z_OK if success, Z_MEM_ERROR if there was not
enough memory, Z_BUF_ERROR if there was not enough room in the output
buffer.
*/
ZEXTERN int ZEXPORT uncompress OF((Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen));
/*
Decompresses the source buffer into the destination buffer. sourceLen is
the byte length of the source buffer. Upon entry, destLen is the total size
of the destination buffer, which must be large enough to hold the entire
uncompressed data. (The size of the uncompressed data must have been saved
previously by the compressor and transmitted to the decompressor by some
mechanism outside the scope of this compression library.) Upon exit, destLen
is the actual size of the uncompressed buffer.
uncompress returns Z_OK if success, Z_MEM_ERROR if there was not
enough memory, Z_BUF_ERROR if there was not enough room in the output
buffer, or Z_DATA_ERROR if the input data was corrupted or incomplete. In
the case where there is not enough room, uncompress() will fill the output
buffer with the uncompressed data up to that point.
*/
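Before the streaming example below, here is a minimal sketch of that one-shot API on an in-memory buffer (assuming the whole input fits in memory; the function name is illustrative):
#include <vector>
#include <zlib.h>

// Compress 'source' into a vector sized with compressBound(), then shrink to the real size.
std::vector<Bytef> compress_buffer(const Bytef *source, uLong source_len)
{
    uLongf dest_len = compressBound(source_len);
    std::vector<Bytef> dest(dest_len);

    int rc = compress2(dest.data(), &dest_len, source, source_len, Z_BEST_COMPRESSION);
    if (rc != Z_OK)        // Z_MEM_ERROR or Z_BUF_ERROR
        dest_len = 0;

    dest.resize(dest_len); // dest_len now holds the actual compressed size
    return dest;
}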
This is an example to pack a buffer with zlib and save the compressed contents in a vector.
#include <cassert>
#include <cstdint>
#include <vector>
#include <zlib.h>

void compress_memory(void *in_data, size_t in_data_size, std::vector<uint8_t> &out_data)
{
    std::vector<uint8_t> buffer;

    const size_t BUFSIZE = 128 * 1024;
    uint8_t temp_buffer[BUFSIZE];

    z_stream strm;
    strm.zalloc = 0;
    strm.zfree = 0;
    strm.opaque = 0; // zalloc, zfree and opaque must all be initialized before deflateInit()
    strm.next_in = reinterpret_cast<uint8_t *>(in_data);
    strm.avail_in = in_data_size;
    strm.next_out = temp_buffer;
    strm.avail_out = BUFSIZE;

    deflateInit(&strm, Z_BEST_COMPRESSION);

    while (strm.avail_in != 0)
    {
        int res = deflate(&strm, Z_NO_FLUSH);
        assert(res == Z_OK);
        if (strm.avail_out == 0)
        {
            buffer.insert(buffer.end(), temp_buffer, temp_buffer + BUFSIZE);
            strm.next_out = temp_buffer;
            strm.avail_out = BUFSIZE;
        }
    }

    int deflate_res = Z_OK;
    while (deflate_res == Z_OK)
    {
        if (strm.avail_out == 0)
        {
            buffer.insert(buffer.end(), temp_buffer, temp_buffer + BUFSIZE);
            strm.next_out = temp_buffer;
            strm.avail_out = BUFSIZE;
        }
        deflate_res = deflate(&strm, Z_FINISH);
    }

    assert(deflate_res == Z_STREAM_END);
    buffer.insert(buffer.end(), temp_buffer, temp_buffer + BUFSIZE - strm.avail_out);
    deflateEnd(&strm);

    out_data.swap(buffer);
}
You can easily adapt the example by replacing fread() and fwrite() calls with direct pointers to your data. For zlib compression (referred to as deflate as you "take out all the air of your data") you allocate z_stream structure, call deflateInit() and then:
1. fill next_in with the next chunk of data you want to compress
2. set avail_in to the number of bytes available in next_in
3. set next_out to where the compressed data should be written, which should usually be a pointer inside your buffer that advances as you go along
4. set avail_out to the number of bytes available in next_out
5. call deflate
6. repeat steps 3-5 until avail_out is non-zero (i.e. there's more room in the output buffer than zlib needs - no more data to write)
7. repeat steps 1-6 while you have data to compress
Eventually you call deflateEnd() and you're done.
You're basically feeding it chunks of input and output until you're out of input and it is out of output.
The classic way, made more convenient with C++ features
Here's a full example which demonstrates compression and decompression using C++ std::vector objects:
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <vector>
#include <zconf.h>
#include <zlib.h>
#include <iomanip>
#include <cassert>

void add_buffer_to_vector(std::vector<char> &vector, const char *buffer, uLongf length) {
    for (uLongf character_index = 0; character_index < length; character_index++) {
        char current_character = buffer[character_index];
        vector.push_back(current_character);
    }
}

int compress_vector(std::vector<char> source, std::vector<char> &destination) {
    unsigned long source_length = source.size();
    uLongf destination_length = compressBound(source_length);

    char *destination_data = (char *) malloc(destination_length);
    if (destination_data == nullptr) {
        return Z_MEM_ERROR;
    }

    Bytef *source_data = (Bytef *) source.data();
    int return_value = compress2((Bytef *) destination_data, &destination_length, source_data, source_length,
                                 Z_BEST_COMPRESSION);
    add_buffer_to_vector(destination, destination_data, destination_length);
    free(destination_data);
    return return_value;
}

int decompress_vector(std::vector<char> source, std::vector<char> &destination) {
    unsigned long source_length = source.size();
    // Note: this assumes the decompressed data fits into compressBound(source_length),
    // which holds for this small example; in general, store the original size alongside the data.
    uLongf destination_length = compressBound(source_length);

    char *destination_data = (char *) malloc(destination_length);
    if (destination_data == nullptr) {
        return Z_MEM_ERROR;
    }

    Bytef *source_data = (Bytef *) source.data();
    int return_value = uncompress((Bytef *) destination_data, &destination_length, source_data, source.size());
    add_buffer_to_vector(destination, destination_data, destination_length);
    free(destination_data);
    return return_value;
}

void add_string_to_vector(std::vector<char> &uncompressed_data,
                          const char *my_string) {
    int character_index = 0;
    while (true) {
        char current_character = my_string[character_index];
        uncompressed_data.push_back(current_character);

        if (current_character == '\00') {
            break;
        }

        character_index++;
    }
}

// https://stackoverflow.com/a/27173017/3764804
void print_bytes(std::ostream &stream, const unsigned char *data, size_t data_length, bool format = true) {
    stream << std::setfill('0');
    for (size_t data_index = 0; data_index < data_length; ++data_index) {
        stream << std::hex << std::setw(2) << (int) data[data_index];

        if (format) {
            stream << (((data_index + 1) % 16 == 0) ? "\n" : " ");
        }
    }
    stream << std::endl;
}

void test_compression() {
    std::vector<char> uncompressed(0);
    auto *my_string = (char *) "Hello, world!";
    add_string_to_vector(uncompressed, my_string);

    std::vector<char> compressed(0);
    int compression_result = compress_vector(uncompressed, compressed);
    assert(compression_result == Z_OK);

    std::vector<char> decompressed(0);
    int decompression_result = decompress_vector(compressed, decompressed);
    assert(decompression_result == Z_OK);

    printf("Uncompressed: %s\n", uncompressed.data());
    printf("Compressed: ");
    std::ostream &standard_output = std::cout;
    print_bytes(standard_output, (const unsigned char *) compressed.data(), compressed.size(), false);
    printf("Decompressed: %s\n", decompressed.data());
}
In your main.cpp simply call:
int main(int argc, char *argv[]) {
    test_compression();
    return EXIT_SUCCESS;
}
The output produced:
Uncompressed: Hello, world!
Compressed: 78daf348cdc9c9d75128cf2fca495164000024e8048a
Decompressed: Hello, world!
The Boost way
#include <iostream>
#include <sstream>
#include <string>
#include <boost/iostreams/filtering_streambuf.hpp>
#include <boost/iostreams/copy.hpp>
#include <boost/iostreams/device/array.hpp>
#include <boost/iostreams/filter/zlib.hpp>

std::string compress(const std::string &data) {
    boost::iostreams::filtering_streambuf<boost::iostreams::output> output_stream;
    output_stream.push(boost::iostreams::zlib_compressor());
    std::stringstream string_stream;
    output_stream.push(string_stream);
    boost::iostreams::copy(boost::iostreams::basic_array_source<char>(data.c_str(),
                                                                      data.size()), output_stream);
    return string_stream.str();
}

std::string decompress(const std::string &cipher_text) {
    std::stringstream string_stream;
    string_stream << cipher_text;
    boost::iostreams::filtering_streambuf<boost::iostreams::input> input_stream;
    input_stream.push(boost::iostreams::zlib_decompressor());
    input_stream.push(string_stream);

    std::stringstream unpacked_text;
    boost::iostreams::copy(input_stream, unpacked_text);
    return unpacked_text.str();
}

// TEST_CASE and REQUIRE come from the unit-test framework used in the original answer.
TEST_CASE("zlib") {
    std::string plain_text = "Hello, world!";

    const auto cipher_text = compress(plain_text);
    const auto decompressed_plain_text = decompress(cipher_text);

    REQUIRE(plain_text == decompressed_plain_text);
}
This is not a direct answer to your question about the zlib API, but you may be interested in the boost::iostreams library paired with zlib.
It lets you use zlib-driven packing algorithms with the usual "stream" operation notation, so your data can be compressed simply by opening some memory stream and doing the << data operation on it.
In the case of boost::iostreams this automatically invokes the corresponding packing filter for every piece of data that passes through the stream.
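A minimal sketch of that stream-style usage (the function name is illustrative; the compressed bytes end up in a std::string):
#include <string>
#include <boost/iostreams/filtering_stream.hpp>
#include <boost/iostreams/filter/zlib.hpp>
#include <boost/iostreams/device/back_inserter.hpp>

std::string compress_with_stream(const std::string &data) {
    std::string packed;
    {
        boost::iostreams::filtering_ostream out;
        out.push(boost::iostreams::zlib_compressor());     // filter: deflate the bytes
        out.push(boost::iostreams::back_inserter(packed)); // sink: append to 'packed'
        out << data;                                       // the "<< data" operation
    }   // destroying the stream closes the chain, flushing the compressor
    return packed;
}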