So, I've been playing around with C++ a bit and decided to write a program that involves opening and writing to a file in binary mode. I am not too familiar with the iostream functionality of C++ (I mostly do API-based programming), but I read several technical guides on the subject and wrote some code. The code is meant to open one file, read its data into a buffer, and then convert that buffer to another format and write it to another file. The problem is that it keeps throwing a "Debug Assertion" error which apparently revolves around the invalid use of a null pointer. However, I couldn't make sense of it when I looked through the code. I probably just misused the iostream library or made a simple logic error. I need to have the separate SetMemBlock function, as I plan on using the same base for formatting different output in a variety of functions; this is just my prototype. Anyway, here's my quick-n'-dirty class setup:
const bool DebugMode = true;
class A
{
public:
bool FileFunction( const char *, const char * );
protected:
bool SetMemBlock( char *, std::fstream &, std::streamoff & );
private:
std::fstream SrcFileStream;
std::fstream DestFileStream;
};
bool A::SetMemBlock( char* MemBlock, std::fstream & FileStream, std::streamoff & Size )
{
std::streamoff TempOff = 0;
//This is meant to check for a non-empty buffer and to see if the stream is valid.
if( MemBlock != 0 || !FileStream.is_open() )
return false;
TempOff = FileStream.tellg();
FileStream.seekg(0, std::ios::end);
Size = FileStream.tellg();
MemBlock = new( std::nothrow ) char[ (int) Size ];
if( MemBlock == 0 )
return false;
FileStream.seekg(0, std::ios::beg);
FileStream.read( MemBlock, (int) Size );
if( !FileStream )
return false;
FileStream.seekg(TempOff);
return true;
}
bool A::FileFunction( const char * SrcFile, const char * DestFile )
{
char * MemBlock = 0;
std::streamoff Size = 0;
SrcFileStream.open( SrcFile, std::ios::binary | std::ios::in );
DestFileStream.open( DestFile, std::ios::binary | std::ios::out );
if( !SrcFileStream.is_open() || !DestFileStream.is_open() )
return false;
if( DebugMode )
{
std::cout<<"Files opened successfully...\nNow writing memory block..."<<std::endl;
}
if( !SetMemBlock( MemBlock, SrcFileStream, Size ) )
{
std::cout<<"An error occurred when reading to memory block!"<<std::endl;
return false;
}
if( DebugMode )
{
std::cout<<"Memory block written..."<<std::endl;
}
DestFileStream.seekp( std::ios::beg );
DestFileStream.write( MemBlock, Size );
SrcFileStream.close();
DestFileStream.close();
delete[] MemBlock;
return true;
}
You're passing MemBlock to SetMemBlock by value. The function therefore just sets the value of a local copy, which has no effect on the calling function; MemBlock in the calling function thus remains the null pointer it was initialized to. Using it as a pointer will then lead to an assertion (if you're lucky) or an out-and-out crash (if you're not). You want to pass that argument by reference instead.
If you don't know what these terms mean, Google "pass by value" and "pass by reference". You really need to understand the difference!
Pass MemBlock by reference:
bool A::SetMemBlock( char*& MemBlock, std::fstream & FileStream, std::streamoff & Size )
I have never worked with SHA256 before. Recently, I've been trying to implement a SHA256 checksum in order to see if a library has been tampered with.
The funny thing is that OpenSSL SHA256 generates a different sum for the exact same library depending on its location. If it's located in another folder, the sum is different.
Is there anything I could do to get the same sum no matter where the file is located? I provided the code snippets and the sums I get.
unsigned char* getsum( char* filename ) {
std::ifstream pFile( filename, std::ios::binary );
SHA256_CTX sContext;
char pBuffer[ 1024*16 ];
unsigned char pSum[SHA256_DIGEST_LENGTH];
SHA256_Init( &sContext );
while( pFile.good() ) {
pFile.read( pBuffer, sizeof(pBuffer) );
SHA256_Update( &sContext, pBuffer, pFile.gcount() );
}
SHA256_Final( pSum, &sContext );
return pSum;
}
...
char* cl_sum = new char[256];
sprintf( cl_sum, "%02x", getsum("library.dll") );
MessageBoxA( NULL, cl_sum , NULL, NULL );
delete[] cl_sum;
exit( -1 );
I also tried using the SHA256() function instead of the whole SHA256 context, SHA256_Init(), Update & Final thing, but still the same result.
Your code has lots of basic mistakes:
sprintf( cl_sum, "%02x", getsum("library.dll") );
The format string says "print me an int value", but getsum returns an unsigned char *! This is wrong, and if you enable compiler warnings, the compiler will explain the issue.
Return value is wrong too:
unsigned char* getsum( char* filename ) {
...
unsigned char pSum[SHA256_DIGEST_LENGTH];
...
return pSum;
}
You are returning a pointer to a local object whose lifetime ends when the function returns.
Here is my tweak to your code
using Sha256Hash = std::array<unsigned char, SHA256_DIGEST_LENGTH>;
Sha256Hash calcSha256Hash(std::istream& in)
{
SHA256_CTX sContext;
SHA256_Init(&sContext);
std::array<char, 1024> data;
while (in.good()) {
in.read(data.data(), data.size());
SHA256_Update(&sContext, data.data(), in.gcount());
}
Sha256Hash hash;
SHA256_Final(hash.data(), &sContext);
return hash;
}
https://godbolt.org/z/Y3nY13jao
Side note: I'm surprised that std::istream::read sets the failure bit when end of file is reached after some data has been read. This is inconsistent with the behavior of the streaming operators.
You are returning a pointer to the local variable pSum; that is undefined behaviour (UB).
An explanation for the different outputs might be that, since the array is located on the stack, later calls to functions like sprintf or MessageBoxA likely overwrite the array with their own variables. But anything can happen after UB.
Use and return std::vector<unsigned char> instead.
Also do not use new, instead std::array or another std::vector is much safer.
Lastly, I would strongly advise to turn on compiler warnings for your compiler, it should warn about the above issue.
Your code's output depends on what value getsum returns. The value getsum returns is an address on the stack. So your code's output depends on where the stack is located.
Somewhere in your code, you should save the result of the SHA256 operation in a buffer of some kind and you should output the contents of that buffer. Your code does neither of those two things.
If you think you save the output of the SHA256 operation in some kind of buffer, where do you think that buffer is allocated? The only output buffer you use is pSum, and that's allocated on the stack in getsum and so no longer exists when getsum returns.
If you think you actually look in the buffer somewhere, where do you think that code is? Your sprintf call never looks in the buffer, it just outputs a pointer in hex.
There are several problems with your code:
your function is returning a pointer to a local variable. When your function exits, the variable is destroyed, leaving the returned pointer dangling.
you are printing the value of the returned pointer itself (ie, the memory address it holds), not the SHA data that it points at. That is why you are seeing the same value being displayed in the message box. You are seeing the memory address of the local variable, and multiple invocations of your function may reuse the same memory.
your reading loop is not making sure that read() is successful before calling SHA256_Update().
Try something more like this instead:
using Sha256Digest = std::array<unsigned char, SHA256_DIGEST_LENGTH>;
Sha256Digest getsum( const char* filename ) {
std::ifstream file( filename, std::ios::binary );
SHA256_CTX sContext;
char buffer[ 1024*16 ];
Sha256Digest sum;
SHA256_Init( &sContext );
while( file.read( buffer, sizeof(buffer) ) || file.gcount() > 0 ) {
SHA256_Update( &sContext, buffer, file.gcount() );
}
SHA256_Final( sum.data(), &sContext );
return sum;
}
...
Sha256Digest sum = getsum("library.dll");
char cl_sum[256] = {}, *ptr = cl_sum;
for (auto ch : sum) {
ptr += sprintf( ptr, "%02x", ch );
}
MessageBoxA( NULL, cl_sum, NULL, NULL );
/* Alternatively:
std::ostringstream cl_sum;
for (auto ch : sum) {
cl_sum << std::hex << std::setw(2) << std::setfill('0') << static_cast<int>(ch);
}
MessageBoxA( NULL, cl_sum.str().c_str(), NULL, NULL );
*/
I did a sample project to read a file into a buffer.
When I use the tellg() function, it gives me a larger value than the number of
bytes the read function actually reads from the file. I think that there is a bug.
here is my code:
EDIT:
void read_file (const char* name, int *size , char*& buffer)
{
ifstream file;
file.open(name,ios::in|ios::binary);
*size = 0;
if (file.is_open())
{
// get length of file
file.seekg(0,std::ios_base::end);
int length = *size = file.tellg();
file.seekg(0,std::ios_base::beg);
// allocate buffer in size of file
buffer = new char[length];
// read
file.read(buffer,length);
cout << file.gcount() << endl;
}
file.close();
}
main:
int main()
{
int size = 0;
char* buffer = NULL;
read_file("File.txt",&size,buffer);
for (int i = 0; i < size; i++)
cout << buffer[i];
cout << endl;
}
tellg does not report the size of the file, nor the offset
from the beginning in bytes. It reports a token value which can
later be used to seek to the same place, and nothing more.
(It's not even guaranteed that you can convert the type to an
integral type.)
At least according to the language specification: in practice,
on Unix systems, the value returned will be the offset in bytes
from the beginning of the file, and under Windows, it will be
the offset from the beginning of the file for files opened in
binary mode. For Windows (and most non-Unix systems), in text
mode, there is no direct and immediate mapping between what
tellg returns and the number of bytes you must read to get to
that position. Under Windows, all you can really count on is
that the value will be no less than the number of bytes you have
to read (and in most real cases, won't be too much greater,
although it can be up to two times more).
If it is important to know exactly how many bytes you can read,
the only way of reliably doing so is by reading. You should be
able to do this with something like:
#include <limits>
file.ignore( std::numeric_limits<std::streamsize>::max() );
std::streamsize length = file.gcount();
file.clear(); // Since ignore will have set eof.
file.seekg( 0, std::ios_base::beg );
Finally, two other remarks concerning your code:
First, the line:
*buffer = new char[length];
shouldn't compile: you have declared buffer to be a char*,
so *buffer has type char, and is not a pointer. Given what
you seem to be doing, you probably want to declare buffer as
a char**. But a much better solution would be to declare it
as a std::vector<char>& or a std::string&. (That way, you
don't have to return the size as well, and you won't leak memory
if there is an exception.)
Second, the loop condition at the end is wrong. If you really
want to read one character at a time,
while ( file.get( buffer[i] ) ) {
++ i;
}
should do the trick. A better solution would probably be to
read blocks of data:
while ( file.read( buffer + i, N ) || file.gcount() != 0 ) {
i += file.gcount();
}
or even:
file.read( buffer, size );
size = file.gcount();
EDIT: I just noticed a third error: if you fail to open the
file, you don't tell the caller. At the very least, you should
set the size to 0 (but some sort of more precise error
handling is probably better).
In C++17 there are std::filesystem file_size methods and functions, so that can streamline the whole task.
std::filesystem::file_size - cppreference.com
std::filesystem::directory_entry::file_size - cppreference.com
With those functions/methods there's a chance of not opening the file at all and instead reading cached data (especially with the std::filesystem::directory_entry::file_size method).
Those functions also require only directory read permission, not file read permission (as tellg() does).
void read_file (int *size, char* name,char* buffer)
*buffer = new char[length];
These lines do look like a bug: you create an char array and save to buffer[0] char. Then you read a file to buffer, which is still uninitialized.
You need to pass buffer by pointer:
void read_file (int *size, char* name,char** buffer)
*buffer = new char[length];
Or by reference, which is the c++ way and is less error prone:
void read_file (int *size, char* name,char*& buffer)
buffer = new char[length];
...
fseek(fptr, 0L, SEEK_END);
filesz = ftell(fptr);
will give the file size if the file was opened through fopen.
Using ifstream,
in.seekg(0,ifstream::end);
filesz = in.tellg();
does much the same.
I am trying to read a ppm file and create a new, identical one. But when I open them with GIMP 2, the images are not the same.
Where is the problem in my code?
int main()
{
FILE *in, *out;
in = fopen("parrots.ppm","r");
if( in == NULL )
{
std::cout<<"Error.\n";
return 0;
}
unsigned char *buffer = NULL;
long size = 0;
fseek(in, 0, 2);
size = ftell(in);
fseek(in, 0, 0);
buffer = new unsigned char[size];
if( buffer == NULL )
{
std::cout<<"Error\n";
return 0;
}
if( fread(buffer, size, 1, in) < 0 )
{
std::cout<<"Error.\n";
return 0 ;
}
out = fopen("out.ppm","w");
if( in == NULL )
{
std::cout<<"Error.\n";
return 0;
}
if( fwrite(buffer, size, 1, out) < 0 )
{
std::cout<<"Error.\n";
return 0;
}
delete[] buffer;
fcloseall();
return 0;
}
Before that, I read the ppm file into a structure, and when I wrote it out I got the same image except that the green was more intense than in the original picture. Then I tried this simple read-and-write, but I get the same result.
int main()
Missing includes.
FILE *in, *out;
C-style I/O in a C++ program, why? Also, declare at the point of initialization, close to first use.
in = fopen("parrots.ppm","r");
This is opening the file in text mode, which is most certainly not what you want. Use "rb" for mode.
unsigned char *buffer = NULL;
Declare at point of initialization, close to first use.
fseek(in, 0, 2);
You are supposed to use SEEK_END, which is not guaranteed to be defined as 2.
fseek(in, 0, 0);
See above, for SEEK_SET not guaranteed to be defined as 0.
buffer = new unsigned char[size];
if( buffer == NULL )
By default, new will not return a NULL pointer, but throw a std::bad_alloc exception. (With overallocation being the norm on most current operating systems, checking for NULL would not protect you from out-of-memory even with malloc(), but good to see you got into the habit of checking anyway.)
C++11 brought us smart pointers. Use them. They are an excellent tool to avoid memory leaks (one of the very few weaknesses of C++).
if( fread(buffer, size, 1, in) < 0 )
A successful fread returns the number of objects read, so the result should be checked for inequality with the third parameter (!= 1), not < 0.
out = fopen("out.ppm","w");
Text mode again, you want "wb" here.
if( fwrite(buffer, size, 1, out) < 0 )
See the note about the fread return value above. Same applies here.
fcloseall();
Not a standard function. Use fclose( in ); and fclose( out );.
A C++11-ified solution (omitting the error checking for brevity) would look somewhat like this:
#include <iostream>
#include <fstream>
#include <memory>
int main()
{
std::ifstream in( "parrots.ppm", std::ios::binary );
std::ofstream out( "out.ppm", std::ios::binary );
in.seekg( 0, std::ios::end );
auto size = in.tellg();
in.seekg( 0 );
std::unique_ptr< char[] > buffer( new char[ size ] );
in.read( buffer.get(), size );
out.write( buffer.get(), size );
in.close();
out.close();
return 0;
}
Of course, a smart solution would do an actual filesystem copy, either through Boost.Filesystem or the standard functionality (experimental at the point of this writing).
I am trying to serialize a Plain Old Datastructure using ifstream and ofstream and I wasn't able to get it to work. I then tried to reduce my problem to an ultra basic serialization of just a char and int and even that didn't work. Clearly I'm missing something at a core fundamental level.
For a basic structure:
struct SerializeTestStruct
{
char mCharVal;
unsigned int mIntVal;
void Serialize(std::ofstream& ofs);
};
With serialize function:
void SerializeTestStruct::Serialize(std::ofstream& ofs)
{
bool isError = (false == ofs.good());
if (false == isError)
{
ofs.write((char*)&mCharVal, sizeof(mCharVal));
ofs.write((char*)&mIntVal, sizeof(mIntVal));
}
}
Why would this fail with the following short program?
//ultra basic serialization test.
SerializeTestStruct* testStruct = new SerializeTestStruct();
testStruct->mCharVal = 'y';
testStruct->mIntVal = 9;
//write
std::string testFileName = "test.bin";
std::ofstream fileOut(testFileName.data());
fileOut.open(testFileName.data(), std::ofstream::binary|std::ofstream::out);
fileOut.clear();
testStruct->Serialize(fileOut);
fileOut.flush();
fileOut.close();
delete testStruct;
//read
char * memblock;
std::ifstream fileIn (testFileName.data(), std::ifstream::in|std::ifstream::binary);
if (fileIn.is_open())
{
// get length of file:
fileIn.seekg (0, std::ifstream::end);
int length = fileIn.tellg();
fileIn.seekg (0, std::ifstream::beg);
// allocate memory:
memblock = new char [length];
fileIn.read(memblock, length);
fileIn.close();
// read data as a block:
SerializeTestStruct* testStruct2 = new(memblock) SerializeTestStruct();
delete[] testStruct2;
}
When I run through the code I notice that memblock has a "y" at the top so maybe it is working and it's just a problem with the placement new at the very end? After that placement new I end up with a SerializeTestStruct with values: 0, 0.
Here is how your stuff should read:
#include <fstream>
#include <iostream>
#include <string>
#include <stdexcept>
struct SerializeTestStruct
{
char mCharVal;
unsigned int mIntVal;
void Serialize(::std::ostream &os);
static SerializeTestStruct Deserialize(::std::istream &is);
};
void SerializeTestStruct::Serialize(std::ostream &os)
{
if (os.good())
{
os.write((char*)&mCharVal, sizeof(mCharVal));
os.write((char*)&mIntVal, sizeof(mIntVal));
}
}
SerializeTestStruct SerializeTestStruct::Deserialize(std::istream &is)
{
SerializeTestStruct retval;
if (is.good())
{
is.read((char*)&retval.mCharVal, sizeof(retval.mCharVal));
is.read((char*)&retval.mIntVal, sizeof(retval.mIntVal));
}
if (is.fail()) {
throw ::std::runtime_error("failed to read full struct");
}
return retval;
}
int main(int argc, const char *argv[])
{
//ultra basic serialization test.
// setup
const ::std::string testFileName = "test.bin";
// write
{
SerializeTestStruct testStruct;
testStruct.mCharVal = 'y';
testStruct.mIntVal = 9;
::std::ofstream fileOut(testFileName.c_str(),
std::ofstream::binary|std::ofstream::out);
testStruct.Serialize(fileOut);
}
// read
{
::std::ifstream fileIn (testFileName.c_str(),
std::ifstream::in|std::ifstream::binary);
if (fileIn.is_open())
{
SerializeTestStruct testStruct = \
SerializeTestStruct::Deserialize(fileIn);
::std::cout << "testStruct.mCharVal == '" << testStruct.mCharVal
<< "' && testStruct.mIntVal == " << testStruct.mIntVal
<< '\n';
}
}
return 0;
}
Style issues:
Don't use new to create things if you can help it. Stack allocated objects are usually what you want and significantly easier to manage than the arbitrary lifetime objects you allocate from the heap. If you do use new, consider using a smart pointer type of some kind to help manage the lifetime for you.
Serialization and deserialization code should be matched up so that they can be examined and altered together. This makes maintenance of such code much easier.
Rely on C++ to clean things up for you with destructors; that's what they're for. This means making basic blocks containing parts of your code when the scopes of the variables used are relatively confined.
Don't needlessly use flags.
Mistakes...
Don't use the data member function of ::std::string.
Using placement new with that memory block is a really bad idea because it's ridiculously complex. And if you did use it, you would not use array delete the way you did. And lastly, it won't work anyway, for a reason explained later.
Do not use ofstream as the type taken by your Serialize function, as it is a derived class whose features you don't need. You should always use the most basic class in a hierarchy that has the features you need, unless you have a very specific reason not to. Serialize works fine with the features of the base ostream class, so use that type instead.
The on-disk layout of your structure and the in memory layout do not match, so your placement new technique is doomed to fail. As a rule, if you have a serialize function, you need a matching deserialize function.
Here is a further explanation of your memory layout issue. The structure, in memory, on an x86_64 based Linux box looks like this:
+------------+-----------+
|Byte number | contents  |
+============+===========+
|     0      |   0x79    |
|            | (aka 'y') |
+------------+-----------+
|     1      |  padding  |
+------------+-----------+
|     2      |  padding  |
+------------+-----------+
|     3      |  padding  |
+------------+-----------+
|     4      |     9     |
+------------+-----------+
|     5      |     0     |
+------------+-----------+
|     6      |     0     |
+------------+-----------+
|     7      |     0     |
+------------+-----------+
The contents of the padding section are undefined, but generally 0. It doesn't matter though because that space is never used and merely exists so that access to the following int lies on an efficient 4-byte boundary.
The size of your structure on disk is 5 bytes, and is completely missing the padding sections. So that means when you read it into memory it won't line up properly with the in memory structure at all and accessing it is likely to cause some kind of horrible problem.
The first rule: if you have a serialize function, you need a deserialize function. Second rule: unless you really know exactly what you are doing, do not dump raw memory into a file. This will work just fine in many cases, but there are important cases in which it won't. And unless you are aware of what does and doesn't work, and when, you will end up with code that seems to work OK in certain test situations but fails miserably when you try to use it in a real system.
My code still does dump memory into a file. And it should work as long as you read the result back on exactly the same architecture and platform with code compiled with the same version of the compiler as when you wrote it. As soon as one of those variables changes, all bets are off.
bool isError = (false == ofs.good());
if (false == isError)
{
ofs.write((char*)&mCharVal, sizeof(mCharVal));
ofs.write((char*)&mIntVal, sizeof(mIntVal));
}
change to
if ( ofs.good() )
{
ofs.write((char*)&mCharVal, sizeof(mCharVal));
ofs.write((char*)&mIntVal, sizeof(mIntVal));
}
I would do:
ostream & operator << ( ostream &os, const SerializeTestStruct &mystruct )
{
if ( os.good() )
{
os.write((char*)&mystruct.mCharVal, sizeof(mystruct.mCharVal));
os.write((char*)&mystruct.mIntVal, sizeof(mystruct.mIntVal));
}
return os;
}
The problem is here:
SerializeTestStruct* testStruct2 = new(memblock) SerializeTestStruct();
This will construct a value-initialized object of type SerializeTestStruct in previously allocated memory. It will fill memblock with zeros, since value-initialization is zero-initialization for POD types (more info).
Here's fast fix for your code:
SerializeTestStruct* testStruct2 = new SerializeTestStruct;
fileIn.read( (char*)&testStruct2->mCharVal, sizeof(testStruct2->mCharVal) );
fileIn.read( (char*)&testStruct2->mIntVal, sizeof(testStruct2->mIntVal) );
fileIn.close();
// do some with testStruct2
// ...
delete testStruct2;
In my opinion, you should allow serialization to a buffer, not just directly to a stream. Writing to a buffer allows nested or inherited classes to serialize to memory; then the whole buffer can be written to the stream at once. Writing bits and pieces to the stream is not efficient.
Here is something I concocted, before I stopped writing binary data to streams:
struct Serialization_Interface
{
//! Returns size occupied on a stream.
/*! Note: size on the platform may be different.
* This method is used to allocate memory.
*/
virtual size_t size_on_stream(void) const = 0;
//! Stores the fields of the object to the given pointer.
/*! Pointer is incremented by the size on the stream.
*/
virtual void store_to_buffer(unsigned char *& p_buffer) const = 0;
//! Loads the object's fields from the buffer, advancing the pointer.
virtual void load_from_buffer(const unsigned char *& p_buffer) = 0;
};
struct Serialize_Test_Structure
: Serialization_Interface
{
char mCharVal;
int mIntVal;
size_t size_on_stream(void) const
{
return sizeof(mCharVal) + sizeof(mIntVal);
}
void store_to_buffer(unsigned char *& p_buffer) const
{
*p_buffer++ = mCharVal;
((int&)(*p_buffer)) = mIntVal;
p_buffer += sizeof(mIntVal);
return;
}
void load_from_buffer(const unsigned char *& p_buffer)
{
mCharVal = *p_buffer++;
mIntVal = (const int&)(*p_buffer);
p_buffer += sizeof(mIntVal);
return;
}
};
int main(void)
{
struct Serialize_Test_Structure myStruct;
myStruct.mCharVal = 'G';
myStruct.mIntVal = 42;
// Allocate a buffer:
unsigned char * buffer = new unsigned char[ myStruct.size_on_stream() ];
// Create output file.
std::ofstream outfile("data.bin", std::ios::binary);
// Does your design support this concept?
unsigned char * p_buffer = buffer;
myStruct.store_to_buffer(p_buffer);
outfile.write((char *) buffer, myStruct.size_on_stream());
outfile.close();
delete[] buffer;
return 0;
}
I stopped writing binary data to streams in favor of textual data, because textual data doesn't have to worry about endianness or about which IEEE floating-point format is accepted by the receiving platform.
Am I the only one that finds this totally opaque:
bool isError = (false == ofs.good());
if (false == isError) {
// stuff
}
why not:
if ( ofs ) {
// stuff
}
I'm quite surprised that Google didn't turn up a solution. I'm searching for a way to use SDL_RWops with std::istream. SDL_RWops is SDL's alternative mechanism for reading/writing data.
Any links to sites that tackle the problem?
An obvious solution would be to pre-read enough data into memory and then use SDL_RWFromMem. However, that has the downside that I'd need to know the file size beforehand.
Seems like the problem could somehow be solved by "overriding" SDL_RWops functions...
I feel bad answering my own question, but it preoccupied me for some time, and this is the solution I came up with:
int istream_seek( struct SDL_RWops *context, int offset, int whence)
{
std::istream* stream = (std::istream*) context->hidden.unknown.data1;
if ( whence == SEEK_SET )
stream->seekg ( offset, std::ios::beg );
else if ( whence == SEEK_CUR )
stream->seekg ( offset, std::ios::cur );
else if ( whence == SEEK_END )
stream->seekg ( offset, std::ios::end );
return stream->fail() ? -1 : stream->tellg();
}
int istream_read(SDL_RWops *context, void *ptr, int size, int maxnum)
{
if ( size == 0 ) return -1;
std::istream* stream = (std::istream*) context->hidden.unknown.data1;
stream->read( (char*)ptr, size * maxnum );
return stream->bad() ? -1 : stream->gcount() / size;
}
int istream_close( SDL_RWops *context )
{
if ( context ) {
SDL_FreeRW( context );
}
return 0;
}
SDL_RWops *SDL_RWFromIStream( std::istream& stream )
{
SDL_RWops *rwops;
rwops = SDL_AllocRW();
if ( rwops != NULL )
{
rwops->seek = istream_seek;
rwops->read = istream_read;
rwops->write = NULL;
rwops->close = istream_close;
rwops->hidden.unknown.data1 = &stream;
}
return rwops;
}
This works under the assumption that the istreams are never freed by SDL (and that they live through the operation). Also, only istream support is in; a separate function would be needed for ostream. I know I could pass an iostream, but that would not allow passing an istream to the conversion function.
Any tips on errors or upgrades welcome.
If you're trying to get an SDL_RWops struct from an istream, you could do it by reading the whole istream into memory and then using SDL_RWFromMem to get a struct to represent it.
Following is a quick example; note that it's unsafe, as no sanity checks are done. For example, if the file's size is 0, accessing buffer[0] may throw an exception or assert in debug builds.
// Open a bitmap
std::ifstream bitmap("bitmap.bmp", std::ios::binary);
// Find the bitmap file's size
bitmap.seekg(0, std::ios_base::end);
std::istream::pos_type fileSize = bitmap.tellg();
bitmap.seekg(0);
// Allocate a buffer to store the file in
std::vector<unsigned char> buffer(fileSize);
// Copy the istream into the buffer
std::copy(std::istreambuf_iterator<char>(bitmap), std::istreambuf_iterator<char>(), buffer.begin());
// Get an SDL_RWops struct for the file
SDL_RWops* rw = SDL_RWFromMem(&buffer[0], buffer.size());
// Do stuff with the SDL_RWops struct