boost::iostreams reading from source device - C++

I've been trying to get my head around Boost's iostreams library, but I can't fully grasp the concepts.
Say I have the following class.
Pseudocode: the code below is only meant to illustrate the problem.
Edit: removed the read code because it drew focus away from the real problem.
class my_source {
public:
    my_source() : value(0x1234) {}

    typedef char char_type;
    typedef source_tag category;

    std::streamsize read(char* s, std::streamsize n)
    {
        // ... read into "s" ...
    }
private:
    int value;
    /* Other members */
};
Now say I want to stream this to an int. What do I need to do? I've tried the following:
boost::iostreams::stream<my_source> stream;
stream.open(my_source());
int i = 0;
stream >> i;
// stream.fail() == true; <-- ??
This results in a failure (failbit is set), while the following works fine:
boost::iostreams::stream<my_source> stream;
stream.open(my_source());
char i[4];
stream >> i;
// stream.fail() == false;
Could someone explain to me why this is happening? Is it because I've set char_type to char?
I can't really find a good explanation anywhere. I've been trying to read the documentation, but I can't find the defined behavior for char_type, if that is the problem. When I'm using stringstreams, I can read into an int without doing anything special.
So if anyone has any insight, please enlighten me.

All iostreams are textual streams, so this will take the bytewise representation of 0x1234, interpret it as text, and try to parse it as an integer.
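If the goal is to get the original int back rather than to parse text, unformatted input is the way to go. A minimal sketch, assuming the source's read() fills s with the raw bytes of value:
boost::iostreams::stream<my_source> stream;
stream.open(my_source());

int i = 0;
// raw byte copy instead of text parsing, so i ends up as 0x1234 again
stream.read(reinterpret_cast<char*>(&i), sizeof i);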
By the way:
std::streamsize read(char* s, std::streamsize n)
{
    int size = sizeof(int);
    memcpy(s, &value, 4);
    return size;
}
This has the potential for a buffer overflow if n < 4. Also, you write four bytes and then return the size of an int, which need not be 4. memcpy(s, &value, sizeof value); will do the job, and a simple return sizeof value; will do the rest.
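Putting both fixes together, a sketch of the corrected member function could look like this (the n check and the -1 error return are my additions, assuming the device only ever produces one int):
std::streamsize read(char* s, std::streamsize n)
{
    // refuse to write past the caller's buffer
    if (n < static_cast<std::streamsize>(sizeof value))
        return -1; // signals end-of-sequence/error to boost::iostreams
    memcpy(s, &value, sizeof value);
    return sizeof value;
}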

The boost::iostreams::stream constructor without arguments does nothing, so as a result the stream is not open. You need to add a fake argument to the my_source constructor:
class my_source {
public:
    my_source(int fake) : value(0x1234) {}
    ...
};

boost::iostreams::stream<my_source> stream(0);
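Combining the two answers, a sketch of an end-to-end flow that should avoid the failbit (the fake argument and the unformatted read are the assumptions here):
boost::iostreams::stream<my_source> stream(0); // 0 is forwarded to my_source(int)

int i = 0;
stream.read(reinterpret_cast<char*>(&i), sizeof i); // binary read, no text parsing
// i == 0x1234, stream.fail() == false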

Related

How should I approach parsing the network packet using C++ template?

Let's say I have an application that keeps receiving a byte stream from a socket. I have documentation that describes what the packet looks like: for example, the total header size and total payload size, with the data types corresponding to the different byte offsets. I want to parse it as a struct. The approach I can think of is to declare a struct and disable padding by using some compiler-specific attribute, probably something like:
struct Payload
{
    char field1;
    uint32 field2;
    uint32 field3;
    char field5;
} __attribute__((packed));
and then I can declare a buffer, memcpy the bytes into the buffer, and reinterpret_cast it to my structure. Another way I can think of is to process the bytes one by one and fill the data into the struct. I think either one should work, but it is kind of old school and probably not safe.
The reinterpret_cast approach mentioned should be something like:
void receive(const char* data, std::size_t data_size)
{
    if (data_size == sizeof(Payload))
    {
        const Payload* payload = reinterpret_cast<const Payload*>(data);
        // ... further processing ...
    }
}
I'm wondering whether there are any better approaches (more modern C++ style? more elegant?) for this kind of use case. I feel like metaprogramming should help, but I don't have an idea how to use it.
Can anyone share some thoughts, or point me to some related references, resources, or even relevant open source code, so that I can have a look and learn how to solve this kind of problem in a more elegant way?
There are many different ways of approaching this. Here's one:
Keeping in mind that reading a struct from a network stream is semantically the same thing as reading a single value, the operation should look the same in either case.
Note that from what you posted, I am inferring that you will not be dealing with types with non-trivial default constructors. If that were the case, I would approach things a bit differently.
In this approach, we:
Define a read_into(src&, dst&) function that takes in a source of raw bytes, as well as an object to populate.
Provide a general implementation for all arithmetic types, switching from network byte order when appropriate.
Overload the function for our struct, calling read_into() on each field in the order expected on the wire.
#include <cstdint>
#include <cstddef>
#include <bit>
#include <concepts>
#include <array>
#include <algorithm>
#include <type_traits>

// Use std::byteswap when available. In the meantime, just lift the implementation from
// https://en.cppreference.com/w/cpp/numeric/byteswap
template<std::integral T>
constexpr T byteswap(T value) noexcept
{
    static_assert(std::has_unique_object_representations_v<T>, "T may not have padding bits");
    auto value_representation = std::bit_cast<std::array<std::byte, sizeof(T)>>(value);
    std::ranges::reverse(value_representation);
    return std::bit_cast<T>(value_representation);
}
template<typename T>
concept DataSource = requires(T& x, char* dst, std::size_t size) {
    { x.read(dst, size) };
};

// General read implementation for all arithmetic types
template<std::endian network_order = std::endian::big>
void read_into(DataSource auto& src, std::integral auto& dst) {
    src.read(reinterpret_cast<char*>(&dst), sizeof(dst));
    if constexpr (sizeof(dst) > 1 && std::endian::native != network_order) {
        dst = byteswap(dst);
    }
}
struct Payload
{
    char field1;
    std::uint32_t field2;
    std::uint32_t field3;
    char field5;
};

// Read implementation specific to Payload
void read_into(DataSource auto& src, Payload& dst) {
    read_into(src, dst.field1);
    read_into<std::endian::little>(src, dst.field2);
    read_into(src, dst.field3);
    read_into(src, dst.field5);
}
// mind you, nothing stops you from just reading directly into the struct,
// but beware of endianness issues:
//
// struct Payload
// {
//     char field1;
//     std::uint32_t field2;
//     std::uint32_t field3;
//     char field5;
// } __attribute__((packed));
//
// void read_into(DataSource auto& src, Payload& dst) {
//     src.read(reinterpret_cast<char*>(&dst), sizeof(Payload));
// }
// Example
struct some_data_source {
    std::size_t read(char*, std::size_t size);
};

void foo() {
    some_data_source data;
    Payload p;
    read_into(data, p);
}
An alternative API could have been dst.field2 = read<std::uint32_t>(src), which has the drawback of requiring you to be explicit about the type, but is more appropriate if you have to deal with non-trivial constructors.
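For reference, a sketch of that alternative API written in terms of the read_into() above (this read() helper is hypothetical, not part of the posted code):
template <std::integral T, std::endian network_order = std::endian::big>
T read(DataSource auto& src) {
    T value;
    read_into<network_order>(src, value);
    return value;
}

// usage: dst.field2 = read<std::uint32_t>(src);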
See it in action on godbolt: https://gcc.godbolt.org/z/77rvYE1qn

Implementing a String class with implicit conversion to char* (C++)

It might not be advisable according to what I have read in a couple of places (and that's probably the reason std::string doesn't do it already), but in a controlled environment and with careful usage, I think it might be OK to write a string class which can be implicitly converted to a proper writable char buffer when needed by third-party library methods (which take only char* as an argument), and still behave like a modern string with methods like Find(), Split(), SubString(), etc. While I can try to implement the other usual string manipulation methods later, I first wanted to ask about an efficient and safe way to do the main task.
Currently, we have to allocate a char array of roughly the maximum size of the char* output that is expected from the third-party method, pass it there, then convert the returned char* to a std::string to be able to use the convenient methods it allows, then again pass its (const char*) result to another method using string.c_str(). This is both lengthy and makes the code look a little messy.
Here is my very initial implementation so far:
MyString.h
#pragma once
#include <string>
using namespace std;

class MyString
{
private:
    bool mBufferInitialized;
    size_t mAllocSize;
    string mString;
    char *mBuffer;

public:
    MyString(size_t size);
    MyString(const char* cstr);
    MyString();
    ~MyString();

    operator char*() { return GetBuffer(); }
    operator const char*() { return GetAsConstChar(); }

    const char* GetAsConstChar() { InvalidateBuffer(); return mString.c_str(); }

private:
    char* GetBuffer();
    void InvalidateBuffer();
};
MyString.cpp
#include "MyString.h"
MyString::MyString(size_t size)
:mAllocSize(size)
,mBufferInitialized(false)
,mBuffer(nullptr)
{
mString.reserve(size);
}
MyString::MyString(const char * cstr)
:MyString()
{
mString.assign(cstr);
}
MyString::MyString()
:MyString((size_t)1024)
{
}
MyString::~MyString()
{
if (mBufferInitialized)
delete[] mBuffer;
}
char * MyString::GetBuffer()
{
if (!mBufferInitialized)
{
mBuffer = new char[mAllocSize]{ '\0' };
mBufferInitialized = true;
}
if (mString.length() > 0)
memcpy(mBuffer, mString.c_str(), mString.length());
return mBuffer;
}
void MyString::InvalidateBuffer()
{
if (mBufferInitialized && mBuffer && strlen(mBuffer) > 0)
{
mString.assign(mBuffer);
mBuffer[0] = '\0';
}
}
Sample usage (main.cpp)
#include "MyString.h"
#include <iostream>
void testSetChars(char * name)
{
if (!name)
return;
//This length is not known to us, but the maximum
//return length is known for each function.
char str[] = "random random name";
strcpy_s(name, strlen(str) + 1, str);
}
int main(int, char*)
{
MyString cs("test initializer");
cout << cs.GetAsConstChar() << '\n';
testSetChars(cs);
cout << cs.GetAsConstChar() << '\n';
getchar();
return 0;
}
Now, I plan to call InvalidateBuffer() in almost all the methods before doing anything else. Some of my questions are:
Is there a better way to do it in terms of memory/performance and/or safety, especially in C++11 (apart from the usual move constructor/assignment operators, which I plan to add soon)?
I had initially implemented the 'buffer' using a std::vector of chars, which was easier to implement and more C++-like, but I was concerned about performance. The GetBuffer() method would just return the beginning pointer of the resized vector of chars. Do you think there are any major pros/cons of using a vector instead of char* here?
I plan to add wide char support later. Do you think a union of two structs, {char, string} and {wchar_t, wstring}, would be the way to go for that purpose (it will only be one of the two at a time)?
Is it overkill compared to the usual way of passing a char array pointer, converting to a std::string, and doing our work with it? The third-party function calls expecting char* arguments are used heavily in the code, and I plan to completely replace both char* and std::string with this new string class if it works.
Thank you for your patience and help!
If I understood you correctly, you want this to work:
mystring foo;
c_function(foo);
// use the filled foo
with a c_function like ...
void c_function(char * dest) {
    strcpy(dest, "FOOOOO");
}
Instead, I propose this (ideone example):
template <std::size_t max>
struct string_filler {
    char data[max + 1];
    std::string & destination;

    string_filler(std::string & d) : destination(d) {
        data[0] = '\0'; // paranoia
    }

    ~string_filler() {
        destination = data;
    }

    operator char *() {
        return data;
    }
};
and using it like:
std::string foo;
c_function(string_filler<80>{foo});
This way you provide a "normal" buffer to the C function, with a maximum size that you specify (which you should know either way; otherwise, calling the function would be unsafe). On destruction of the temporary (which, according to the standard, must happen after the expression containing the function call), the string is copied (using the std::string assignment operator) into a buffer managed by the std::string.
Addressing your questions:
Do you think there are any major pros/cons of using a vector instead of char* here?
Yes: using a vector frees you from manual memory management. This is a huge pro.
I plan to add wide char support to it later. Do you think a union of two structs, {char, string} and {wchar_t, wstring}, would be the way to go for that purpose (it will be only one of these two at a time)?
A union is a bad idea: how do you know which member is currently active? You need a flag outside the union. Do you really want every string to carry that around? Instead, look at what the standard library is doing: it uses templates to provide this abstraction.
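A minimal sketch of that template approach (names are illustrative, not a finished design):
template <typename CharT>
class basic_mystring {
    std::basic_string<CharT> mString;
    // buffer members and Find(), Split(), SubString(), ... in terms of CharT
};

using mystring  = basic_mystring<char>;     // narrow variant
using wmystring = basic_mystring<wchar_t>;  // wide variant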
Is it too much overkill [..]
Writing a string class? Yes, way too much.
What you want to do already exists. For example with this plain old C function:
/**
 * Write n characters into buffer.
 * n can't be more than size.
 * Returns the number of characters written.
 */
ssize_t fillString(char * buffer, ssize_t size);
Since C++11:
std::string str;
// Resize the string to be sure to have memory
str.resize(80);
auto newSize = fillString(&str[0], str.size());
str.resize(newSize);
or without first resizing (this assumes str is already non-empty):
std::string str; // assume str was filled elsewhere
if (!str.empty()) // to avoid UB on &str[0]
{
    auto newSize = fillString(&str[0], str.size());
    str.resize(newSize);
}
But before C++11, std::string isn't guaranteed to be stored in a single chunk of contiguous memory, so you have to go through a std::vector<char> first:
std::vector<char> v;
// Resize the vector to be sure to have memory
v.resize(80);
ssize_t newSize = fillString(&v[0], v.size());
std::string str(v.begin(), v.begin() + newSize);
You can use it easily with something like Daniel's proposition.

use iostream or alternative for managing stream

I want to write a function which (simplified) takes as a parameter an input buffer of variable size, processes it (sequentially), and returns a buffer of a fixed size. The remaining part of the buffer has to stay in the "pipeline" for the next call of the function.
Question 1:
From my research it looks like iostream is the way to go, but apparently no one is using it. Is this the best way to go?
Question 2:
How can I declare the iostream object globally? Actually, as I have several streams, I will need to store the iostream objects in a vector of structs. How do I do this?
At the moment my code looks like this:
struct membuf : std::streambuf
{
    membuf(char* begin, char* end) {
        this->setg(begin, begin, end);
    }
};

void read_stream(char* bufferIn, char* BufferOut, int lengthBufferIn)
{
    char* buffer = (char*) malloc(300); // How do I do this globally??
    membuf sbuf(buffer, buffer + 300);  // How do I do this globally??
    std::iostream s(&sbuf);             // How do I do this globally??
    s.write(bufferIn, lengthBufferIn);
    s.read(BufferOut, 100);
    process(BufferOut);
}
I see no need for iostream here. You can create an object that keeps a pointer to the buffer (so no copies involved) and to the position where it left off.
So, something along these lines:
class Transformer {
private:
    char const *input_buf_;

public:
    Transformer(char const *buf) : input_buf_(buf) {
    }

    bool has_next() const { return input_buf_ != nullptr; } // or your own condition

    std::array<char, 300> read_next() {
        // read from input_buf_ as much as you need
        // advance input_buf_ to the remaining part
        // make sure to set input_buf_ accordingly after the last part
        // e.g. input_buf_ = nullptr; for how I wrote has_next
        return /*the processed fixed size buffer*/;
    }
};
usage:
char *str = /* ... */;
Transformer t(str);
while (t.has_next()) {
    std::array<char, 300> arr = t.read_next();
    // use arr
}
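To make the skeleton concrete, here is a hedged sketch of read_next(), written as a free function for self-containedness (inside the class, input_buf_ would be the member, not a parameter); the byte-for-byte copy of a NUL-terminated input stands in for the real processing:
#include <array>
#include <cstddef>

std::array<char, 300> read_next(const char*& input_buf_) {
    std::array<char, 300> out{};            // zero-filled output block
    std::size_t i = 0;
    while (i < out.size() && input_buf_[i] != '\0') {
        out[i] = input_buf_[i];             // stand-in for real processing
        ++i;
    }
    // advance past what was consumed; nullptr signals exhaustion
    input_buf_ = (input_buf_[i] == '\0') ? nullptr : input_buf_ + i;
    return out;
}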
Question 1: From my research it looks like iostream is the way to go, but apparently no one is using it. Is this the best way to go?
Yes (the std::istream class and its specializations are there to manage streams, and they fit the problem well).
Your code could look similar to this:
struct fixed_size_buffer
{
    static const std::size_t size = 300;
    std::vector<char> value;

    fixed_size_buffer() : value(fixed_size_buffer::size, ' ') {}
};
std::istream& operator>>(std::istream& in, fixed_size_buffer& data)
{
    std::noskipws(in); // read spaces as well as characters
    std::copy_n(std::istream_iterator<char>{ in },
                fixed_size_buffer::size,
                std::begin(data.value)); // this leaves in in an invalid state
                                         // if there is not enough data in the
                                         // input stream
    return in;
}
Consuming the data:
fixed_size_buffer buffer;

std::ifstream fin{ "c:\\temp\\your_data.txt" };
while (fin >> buffer) // read from a file
{
    // do something with buffer here
}

while (std::cin >> buffer) // read from standard input
{
    // do something with buffer here
}

std::istringstream sin{ "long-serialized-string-here" };
while (sin >> buffer) // read from a string
{
    // do something with buffer here
}
Question 2: How can I declare the iostream object globally? Actually, as I have several streams, I will need to store the iostream objects in a vector of structs. How do I do this?
iostreams do not support copy construction; because of this, you will need to keep them in a sequence of pointers/references to the base class:
auto fin = std::make_unique<std::ifstream>("path_to_input_file");

std::vector<std::istream*> streams;
streams.push_back(&std::cin);
streams.push_back(fin.get());

fixed_size_buffer buffer;
for (auto in_ptr : streams)
{
    std::istream& in = *in_ptr; // note: dereference, not address-of
    while (in >> buffer)
    {
        // do something with buffer here
    }
}

How to write custom input stream in C++

I'm currently learning C++ (coming from Java) and I'm trying to understand how to use IO streams properly in C++.
Let's say I have an Image class which contains the pixels of an image and I overloaded the extraction operator to read the image from a stream:
istream& operator>>(istream& stream, Image& image)
{
    // Read the image data from the stream into the image
    return stream;
}
So now I'm able to read an image like this:
Image image;
ifstream file("somepic.img");
file >> image;
But now I want to use the same extraction operator to read the image data from a custom stream. Let's say I have a file which contains the image in compressed form. So instead of using ifstream I might want to implement my own input stream. At least that's how I would do it in Java. In Java I would write a custom class extending the InputStream class and implementing the int read() method. So that's pretty easy. And usage would look like this:
InputStream stream = new CompressedInputStream(new FileInputStream("somepic.imgz"));
image.read(stream);
So using the same pattern maybe I want to do this in C++:
Image image;
ifstream file("somepic.imgz");
compressed_stream stream(file);
stream >> image;
But maybe that's the wrong way, I don't know. Extending the istream class looks pretty complicated, and after some searching I found some hints about extending streambuf instead. But this example looks terribly complicated for such a simple task.
So what's the best way to implement custom input/output streams (or streambufs?) in C++?
Solution
Some people suggested not using iostreams at all and using iterators, boost, or a custom IO interface instead. These may be valid alternatives, but my question was about iostreams. The accepted answer resulted in the example code below. For easier reading there is no header/code separation, and the whole std namespace is imported (I know that this is a bad thing in real code).
This example is about reading and writing vertical-XOR-encoded images. The format is pretty simple: each byte represents two pixels (4 bits per pixel), and each line is XOR'd with the previous line. This kind of encoding prepares the image for compression (it usually results in a lot of zero bytes, which are easier to compress).
#include <cstring>
#include <fstream>
using namespace std;

/*** vxor_streambuf class ******************************************/

class vxor_streambuf : public streambuf
{
public:
    vxor_streambuf(streambuf *buffer, const int width) :
        buffer(buffer),
        size(width / 2)
    {
        previous_line = new char[size];
        memset(previous_line, 0, size);
        current_line = new char[size];
        setg(0, 0, 0);
        setp(current_line, current_line + size);
    }

    virtual ~vxor_streambuf()
    {
        sync();
        delete[] previous_line;
        delete[] current_line;
    }

    virtual streambuf::int_type underflow()
    {
        // Read line from original buffer
        streamsize read = buffer->sgetn(current_line, size);
        if (!read) return traits_type::eof();

        // Do vertical XOR decoding
        for (int i = 0; i < size; i += 1)
        {
            current_line[i] ^= previous_line[i];
            previous_line[i] = current_line[i];
        }

        setg(current_line, current_line, current_line + read);
        return traits_type::to_int_type(*gptr());
    }

    virtual streambuf::int_type overflow(streambuf::int_type value)
    {
        int write = pptr() - pbase();
        if (write)
        {
            // Do vertical XOR encoding
            for (int i = 0; i < size; i += 1)
            {
                char tmp = current_line[i];
                current_line[i] ^= previous_line[i];
                previous_line[i] = tmp;
            }

            // Write line to original buffer
            streamsize written = buffer->sputn(current_line, write);
            if (written != write) return traits_type::eof();
        }

        setp(current_line, current_line + size);
        if (!traits_type::eq_int_type(value, traits_type::eof())) sputc(value);
        return traits_type::not_eof(value);
    }

    virtual int sync()
    {
        streambuf::int_type result = this->overflow(traits_type::eof());
        buffer->pubsync();
        return traits_type::eq_int_type(result, traits_type::eof()) ? -1 : 0;
    }

private:
    streambuf *buffer;
    int size;
    char *previous_line;
    char *current_line;
};

/*** vxor_istream class ********************************************/

class vxor_istream : public istream
{
public:
    vxor_istream(istream &stream, const int width) :
        istream(new vxor_streambuf(stream.rdbuf(), width)) {}

    virtual ~vxor_istream()
    {
        delete rdbuf();
    }
};

/*** vxor_ostream class ********************************************/

class vxor_ostream : public ostream
{
public:
    vxor_ostream(ostream &stream, const int width) :
        ostream(new vxor_streambuf(stream.rdbuf(), width)) {}

    virtual ~vxor_ostream()
    {
        delete rdbuf();
    }
};

/*** Test main method **********************************************/

int main()
{
    // Read data
    ifstream infile("test.img");
    vxor_istream in(infile, 288);
    char data[144 * 128];
    in.read(data, 144 * 128);
    infile.close();

    // Write data
    ofstream outfile("test2.img");
    vxor_ostream out(outfile, 288);
    out.write(data, 144 * 128);
    out.flush();
    outfile.close();

    return 0;
}
The proper way to create a new stream in C++ is to derive from std::streambuf and to override the underflow() operation for reading and the overflow() and sync() operations for writing. For your purpose you'd create a filtering stream buffer which takes another stream buffer (and possibly a stream from which the stream buffer can be extracted using rdbuf()) as argument and implements its own operations in terms of this stream buffer.
The basic outline of a stream buffer would be something like this:
class compressbuf
    : public std::streambuf {
    std::streambuf* sbuf_;
    char*           buffer_;
    // context for the compression

public:
    compressbuf(std::streambuf* sbuf)
        : sbuf_(sbuf), buffer_(new char[1024]) {
        // initialize compression context
    }
    ~compressbuf() { delete[] this->buffer_; }

    int underflow() {
        if (this->gptr() == this->egptr()) {
            // decompress data into buffer_, obtaining its own input from
            // this->sbuf_; if necessary, resize the buffer.
            // the next statement assumes "size" characters were produced (if
            // no more characters are available, size == 0)
            this->setg(this->buffer_, this->buffer_, this->buffer_ + size);
        }
        return this->gptr() == this->egptr()
            ? std::char_traits<char>::eof()
            : std::char_traits<char>::to_int_type(*this->gptr());
    }
};
How underflow() looks exactly depends on the compression library being used. Most libraries I have used keep an internal buffer which needs to be filled and which retains the bytes which are not yet consumed. Typically, it is fairly easy to hook the decompression into underflow().
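Purely as an illustration, a compact sketch of hooking zlib into underflow() might look like the following; it assumes raw zlib-compressed input (gzip framing would need inflateInit2()), fixed 1 KiB buffers, and it simplifies the retry logic a production version would need:
#include <zlib.h>
#include <streambuf>

class zlib_inbuf : public std::streambuf {
    std::streambuf* sbuf_;
    z_stream zs_{};          // zero-initialized: zalloc/zfree/opaque are Z_NULL
    char in_[1024];
    char out_[1024];
public:
    explicit zlib_inbuf(std::streambuf* sbuf) : sbuf_(sbuf) {
        inflateInit(&zs_);   // initialize the decompression context
    }
    ~zlib_inbuf() { inflateEnd(&zs_); }

    int underflow() override {
        if (gptr() == egptr()) {
            if (zs_.avail_in == 0) { // refill raw input from the wrapped buffer
                std::streamsize n = sbuf_->sgetn(in_, sizeof in_);
                zs_.next_in  = reinterpret_cast<Bytef*>(in_);
                zs_.avail_in = static_cast<uInt>(n);
            }
            zs_.next_out  = reinterpret_cast<Bytef*>(out_);
            zs_.avail_out = sizeof out_;
            int rc = inflate(&zs_, Z_NO_FLUSH);
            std::streamsize produced = sizeof out_ - zs_.avail_out;
            // a robust version would loop while rc == Z_OK and produced == 0
            if ((rc != Z_OK && rc != Z_STREAM_END) || produced == 0)
                return traits_type::eof();
            setg(out_, out_, out_ + produced);
        }
        return traits_type::to_int_type(*gptr());
    }
};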
Once the stream buffer is created, you can just initialize an std::istream object with the stream buffer:
std::ifstream fin("some.file");
compressbuf sbuf(fin.rdbuf());
std::istream in(&sbuf);
If you are going to use the stream buffer frequently, you might want to encapsulate the object construction into a class, e.g., icompressstream. Doing so is a bit tricky because the base class std::ios is a virtual base and is the actual location where the stream buffer is stored. Constructing the stream buffer before passing a pointer to std::ios thus requires jumping through a few hoops: it requires the use of a virtual base class. Here is roughly how this could look:
struct compressstream_base {
    compressbuf sbuf_;
    compressstream_base(std::streambuf* sbuf) : sbuf_(sbuf) {}
};

class icompressstream
    : virtual compressstream_base
    , public std::istream {
public:
    icompressstream(std::streambuf* sbuf)
        : compressstream_base(sbuf)
        , std::ios(&this->sbuf_)
        , std::istream(&this->sbuf_) {
    }
};
(I just typed this code without a simple way to test that it is reasonably correct; please expect typos but the overall approach should work as described)
Boost (which you should have already if you're serious about C++) has a whole library dedicated to extending and customizing IO streams: boost.iostreams.
In particular, it already has decompressing streams for a few popular formats (bzip2, gzip, and zlib).
As you saw, extending streambuf can be an involved job, but the library makes it fairly easy to write your own filtering streambuf if you need one.
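For a taste of how small such a filter can be, here is a sketch of a multichar input filter; the XOR "decoding" is a stand-in for real decompression:
#include <boost/iostreams/concepts.hpp>    // multichar_input_filter
#include <boost/iostreams/operations.hpp>  // boost::iostreams::read
#include <boost/iostreams/filtering_stream.hpp>
#include <fstream>

struct xor_input_filter : boost::iostreams::multichar_input_filter {
    template <typename Source>
    std::streamsize read(Source& src, char* s, std::streamsize n) {
        std::streamsize got = boost::iostreams::read(src, s, n);
        for (std::streamsize i = 0; i < got; ++i)
            s[i] ^= 0x5A;      // replace with real decoding
        return got;            // -1 from read() propagates EOF
    }
};

// usage:
// std::ifstream file("somepic.imgz", std::ios::binary);
// boost::iostreams::filtering_istream in;
// in.push(xor_input_filter());
// in.push(file);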
Don't, unless you want to die a terrible death of hideous design. IOstreams are the worst component of the Standard library - even worse than locales. The iterator model is much more useful, and you can convert from stream to iterator with istream_iterator.
I agree with @DeadMG and wouldn't recommend using iostreams. Apart from the poor design, the performance is often worse than that of plain old C-style I/O. I wouldn't stick to a particular I/O library, though; instead, I'd create an interface (abstract class) that has all the required operations, for example:
#include <cstddef> // size_t

class Input {
public:
    virtual void read(char *buffer, size_t size) = 0;
    // ...
};
Then you can implement this interface for C I/O, iostreams, mmap, or whatever.
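A sketch of one such implementation backed by C stdio (the class name and error handling are illustrative):
#include <cstdio>
#include <stdexcept>

class CFileInput : public Input {
    std::FILE* f_;
public:
    explicit CFileInput(const char* path) : f_(std::fopen(path, "rb")) {
        if (!f_) throw std::runtime_error("cannot open file");
    }
    ~CFileInput() { if (f_) std::fclose(f_); }

    // reads exactly size bytes or throws, matching the interface above
    void read(char* buffer, size_t size) override {
        if (std::fread(buffer, 1, size, f_) != size)
            throw std::runtime_error("short read");
    }
};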
It is probably possible to do this, but I feel that it's not the "right" usage of this feature in C++. The iostream >> and << operators are meant for fairly simple operations, such as writing the "name, street, town, postal code" of a class Person, not for parsing and loading images. That's much better done using stream::read(), e.g. with an Image(astream) constructor, and you may implement a stream for compression, as described by Dietmar.

Can boost iostreams read and compress gzipped files on the fly?

I am reading a gzipped file using boost iostreams:
The following works fine:
namespace io = boost::iostreams;
io::filtering_istream in;
in.push(boost::iostreams::basic_gzip_decompressor<>());
in.push(io::file_source("test.gz"));
stringstream ss;
copy(in, ss);
However, I don't want to take the memory hit of reading an entire gzipped file into memory. I want to be able to read the file incrementally.
For example, if I have a data structure X that initializes itself from istream,
X x;
x.read(in);
fails. Presumably this is because we may have to put back characters into the stream if we are doing partial reads. Any ideas whether boost iostreams supports this?
According to the iostreams documentation, the type boost::io::filtering_istream derives from std::istream. That is, it should be possible to pass it everywhere an std::istream& is expected. If you get errors at run time because you need to unget() or putback() characters, you should have a look at the pback_size parameter, which specifies at most how many characters can be put back. I haven't seen in the documentation what the default value for this parameter is.
If this doesn't solve your problem, can you describe exactly what your problem is? From the looks of it, this should work.
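For what it's worth, the optional buffer_size and pback_size arguments are passed when pushing the filter onto the chain; a sketch (the numeric values are illustrative):
namespace io = boost::iostreams;

io::filtering_istream in;
// second argument: stream buffer size; third: putback buffer size
in.push(io::gzip_decompressor(), 4096, 64);
in.push(io::file_source("test.gz"));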
I think you need to write your own filter. For instance, to read a .tar.gz and output the files contained, I wrote something like:
//using namespace std;
namespace io = boost::iostreams;

struct tar_expander
{
    tar_expander() : out(0), status(header)
    {
    }
    ~tar_expander()
    {
        delete out;
    }

    /* qualify filter */
    typedef char char_type;
    struct category :
        io::input_filter_tag,
        io::multichar_tag
    { };

    template<typename Source>
    void fetch_n(Source& src, std::streamsize n = block_size)
    {
        /* my utility */
        ....
    }

    // Read up to n filtered characters into the buffer s,
    // returning the number of characters read or -1 for EOF.
    // Use src to access the unfiltered character sequence
    template<typename Source>
    std::streamsize read(Source& src, char* s, std::streamsize n)
    {
        fetch_n(src);
        const tar_header &h = cast_buf<tar_header>();
        int r;
        if (status == header)
        {
            ...
        }
        ...
    }

    std::ofstream *out;
    size_t fsize, stored;
    static const size_t block_size = 512;
    std::vector<char> buf;
    enum { header, store_file, archive_end } status;
};
My function read(Source &...) receives the unzipped text when called.
To use the filter:
ifstream file("/home/..../resample-1.8.1.tar.gz", ios_base::in | ios_base::binary);
io::filtering_streambuf<io::input> in;
in.push(tar_expander());
in.push(io::gzip_decompressor());
in.push(file);
io::copy(in, cout);