Prevent the runtime library from automatically flushing the buffer - C++

Why does the code below not stop the library from flushing the buffer automatically?
cout.sync_with_stdio(false);
cin.tie(nullptr);
cout << "hello";
cout << "world";
int a;
cin >> a;
output:
helloworld
I'm using Visual Studio 2012 Ultimate

AFAIK, the stream can be flushed whenever the implementation likes to, i.e. there's no guarantee that the stream will be flushed after an insert operation. However, you can use one of these manipulators to ensure your stream gets flushed (these are the only ones I know of, so if someone is aware of others, please comment):
std::endl - inserts a newline into the stream and flushes it,
std::flush - just flushes the stream,
std::unitbuf / std::nounitbuf - enables/disables flushing the stream after each insert operation.
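To see the manipulators in action, here is a small self-contained sketch; the CountingBuf class is scaffolding invented for the demo so the flushes can be counted:

```cpp
#include <cassert>
#include <ios>
#include <ostream>
#include <streambuf>

// Scaffolding for the demo: a streambuf that discards output but counts
// how many times it is flushed (i.e. how often sync() is called).
struct CountingBuf : std::streambuf {
    int syncs;
    CountingBuf() : syncs( 0 ) {}
    virtual int overflow( int ch ) { return ch; }  // accept and discard
    virtual int sync() { ++syncs; return 0; }
};

int flush_count_with_endl()
{
    CountingBuf buf;
    std::ostream os( &buf );
    os << "hello" << std::endl;   // newline + flush
    os << "world" << std::flush;  // flush without a newline
    return buf.syncs;
}

int flush_count_with_unitbuf()
{
    CountingBuf buf;
    std::ostream os( &buf );
    os << std::unitbuf;    // flush after every insertion
    os << "ab";            // one insertion -> one flush
    os << std::nounitbuf;  // back to normal buffering
    os << "cd";            // no flush
    return buf.syncs;
}
```
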

The standard allows an implementation to flush any time it feels
like it, but from a quality of implementation point of view, one
really doesn't expect a flush here. You might try supplying
a buffer yourself, telling std::cout to use one you specify:
std::cout.rdbuf()->pubsetbuf( buffer, sizeof(buffer) );
(pubsetbuf is the public interface to the protected setbuf.)
Again, the standard doesn't guarantee anything, but if this isn't
respected, I'd consider the quality bad enough to warrant a bug
report.
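As a sketch of that suggestion on a file stream (the file name and buffer size are arbitrary choices, and whether the request is honored is implementation-defined; the assertion below only relies on a small write staying buffered until an explicit flush, which common implementations do with or without the custom buffer):

```cpp
#include <cassert>
#include <fstream>
#include <iterator>
#include <string>

static char big_buffer[64 * 1024];  // arbitrary size for the demo

// Helper for the demo: slurp the file's current on-disk contents.
std::string read_file( const char* name )
{
    std::ifstream in( name );
    return std::string( std::istreambuf_iterator<char>( in ),
                        std::istreambuf_iterator<char>() );
}

bool demo( const char* name )
{
    std::ofstream out;
    out.rdbuf()->pubsetbuf( big_buffer, sizeof big_buffer );  // before any I/O
    out.open( name );
    out << "hello";                                  // should stay buffered
    bool empty_before_flush = read_file( name ).empty();
    out.flush();                                     // now force it out
    return empty_before_flush && read_file( name ) == "hello";
}
```
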
Finally, if worse comes to worst, you can always insert
a filtering streambuf which does the buffering you want. You
shouldn't have to, but it won't be the first time we've had to
write extra code to work around a lack of quality in compilers
or libraries. If all you're doing is straightforward output (no
seeks or anything), something like the following should do the
trick:
class BufferingOutStreambuf : public std::streambuf
{
    std::streambuf* myDest;
    std::ostream* myOwner;
    std::vector<char> myBuffer;
    static size_t const bufferSize = 1000;
protected:
    virtual int overflow( int ch )
    {
        if ( sync() == -1 ) {
            return EOF;
        }
        return ch == EOF ? 0 : sputc( ch );
    }
    virtual int sync()
    {
        int results = 0;
        if ( pptr() != pbase() ) {
            if ( myDest->sputn( pbase(), pptr() - pbase() )
                    != pptr() - pbase() ) {
                results = -1;
            }
        }
        setp( &myBuffer[0], &myBuffer[0] + myBuffer.size() );
        return results;
    }
public:
    BufferingOutStreambuf( std::streambuf* dest )
        : myDest( dest )
        , myOwner( NULL )
        , myBuffer( bufferSize )
    {
        setp( &myBuffer[0], &myBuffer[0] + myBuffer.size() );
    }
    BufferingOutStreambuf( std::ostream& dest )
        : myDest( dest.rdbuf() )
        , myOwner( &dest )
        , myBuffer( bufferSize )
    {
        setp( &myBuffer[0], &myBuffer[0] + myBuffer.size() );
        myOwner->rdbuf( this );
    }
    ~BufferingOutStreambuf()
    {
        sync();  //  flush anything still buffered
        if ( myOwner != NULL ) {
            myOwner->rdbuf( myDest );
        }
    }
};
Then just do:
BufferingOutStreambuf buffer( std::cout );
as the first line in main. (One could argue that iostreams
should have been designed to work like this from the start, with
filtering streambufs for buffering and code translation. But
it wasn't, and this shouldn't be necessary with a decent
implementation.)
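To convince yourself the filter behaves as intended, here is a trimmed-down copy of the class above placed in front of a std::ostringstream, so you can observe exactly when bytes reach the destination (the 1000-byte size is an arbitrary choice):

```cpp
#include <cassert>
#include <cstdio>
#include <ostream>
#include <sstream>
#include <streambuf>
#include <string>
#include <utility>
#include <vector>

// Trimmed-down copy of the filtering buffer above: collects output in its
// own array and only forwards it to the destination streambuf on sync().
class BufferingOutStreambuf : public std::streambuf {
    std::streambuf* myDest;
    std::vector<char> myBuffer;
public:
    explicit BufferingOutStreambuf( std::streambuf* dest )
        : myDest( dest ), myBuffer( 1000 )
    {
        setp( &myBuffer[0], &myBuffer[0] + myBuffer.size() );
    }
protected:
    virtual int overflow( int ch )
    {
        if ( sync() == -1 ) return EOF;
        return ch == EOF ? 0 : sputc( ch );
    }
    virtual int sync()
    {
        std::streamsize n = pptr() - pbase();
        if ( n != 0 && myDest->sputn( pbase(), n ) != n ) return -1;
        setp( &myBuffer[0], &myBuffer[0] + myBuffer.size() );
        return 0;
    }
};

std::pair<std::string, std::string> demo()
{
    std::ostringstream dest;
    BufferingOutStreambuf buf( dest.rdbuf() );
    std::ostream out( &buf );
    out << "hello" << "world";        // still held in our 1000-byte buffer
    std::string before = dest.str();  // so the destination is empty here
    out.flush();                      // sync() forwards everything at once
    return std::make_pair( before, dest.str() );
}
```
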

Related

C++ text file content erased after power loss [duplicate]


Force write of a file to disk

I'm currently implementing a ping/pong buffering scheme to safely write a file to disk. I'm using C++/Boost on a Linux/CentOS machine. Now I'm facing the problem of forcing the actual write of the file to disk. Is it possible to do so irrespective of all the caching policies of the filesystem (ext3/ext4), OS custom rules, RAID controller, and hard-disk controller?
Is it best to use plain fread()/fwrite(), C++ ostreams, or Boost.Filesystem?
I've heard that simply flushing the file (fflush()) doesn't guarantee the actual write.
Use fflush (for FILE*) or std::flush (for iostreams) to force your program to hand the data over to the OS.
POSIX has
sync(2) to ask the OS to schedule writing its buffers; it can return before the writing is done (Linux waits until the data has been sent to the hardware before returning).
fsync(2), which is guaranteed to wait for the data to be sent to the hardware, but needs a file descriptor (you can get one from a FILE* with fileno(3); I know of no standard way to get one from an iostream).
O_SYNC as a flag to open(2).
In all cases the hardware may have its own buffers (though if the OS has control over them, a good implementation will try to flush those too, and ISTR that some disks use capacitors so that they can flush whatever happens to the power), and network file systems have their own caveats.
You can use fsync()/fdatasync() to force (Note 1) the data onto the storage.
These require a file descriptor, as given by e.g. open().
The Linux man pages have more Linux-specific info, particularly on the difference between fsync and fdatasync.
If you don't use file descriptors directly, many abstractions will contain internal buffers residing in your process;
e.g. if you use a FILE*, you first have to flush the data out of your application.
//... open and write data to a FILE *myfile
fflush(myfile);
fsync(fileno(myfile));
Note 1: These calls force the OS to ensure that any data in any OS cache is written to the drive, and the drive acknowledges that fact. Many hard-drives lie to the OS about this, and might stuff the data in cache memory on the drive.
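Fleshing the fragment above out into a complete, runnable sketch (the path is an arbitrary choice, and error handling is kept minimal):

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>
#include <unistd.h>

// Complete version of the fragment above: fflush() moves the data from the
// stdio buffer into the OS, fsync() asks the OS to push it to the device
// and waits for the device to acknowledge.  The path is arbitrary.
bool write_and_sync( const char* path, const char* data )
{
    std::FILE* f = std::fopen( path, "w" );
    if ( f == NULL ) return false;
    size_t len = std::strlen( data );
    bool ok = std::fwrite( data, 1, len, f ) == len
           && std::fflush( f ) == 0        // application buffer -> OS
           && fsync( fileno( f ) ) == 0;   // OS cache -> device
    return std::fclose( f ) == 0 && ok;
}
```
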
Not in standard C++. You'll have to use some sort of system specific
IO, like open with the O_SYNC flag under Unix, and then write.
Note that this is partially implicit by the fact that ostream (and in
C, FILE*) are buffered. If you don't know exactly when something is
written to disk, then it doesn't make much sense to insist on the
transactional integrity of the write. (It wouldn't be too hard to
design a streambuf which only writes when you do an explicit flush,
however.)
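A minimal sketch of that system-specific route, assuming POSIX (the path and mode are arbitrary choices for the demo):

```cpp
#include <cassert>
#include <cstddef>
#include <fcntl.h>
#include <unistd.h>

// O_SYNC makes each write() block until the data has reached the device,
// so there is no window where the OS holds the only copy.
bool write_durably( const char* path, const char* data, size_t len )
{
    int fd = open( path, O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644 );
    if ( fd == -1 ) return false;
    bool ok = write( fd, data, len ) == (ssize_t)len;  // synchronous write
    return close( fd ) == 0 && ok;
}
```
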
EDIT:
As a simple example:
#include <fcntl.h>
#include <unistd.h>
#include <streambuf>
#include <string>
#include <vector>

class SynchronizedStreambuf : public std::streambuf
{
    int myFd;
    std::vector<char> myBuffer;
protected:
    virtual int overflow( int ch );
    virtual int sync();
public:
    SynchronizedStreambuf( std::string const& filename );
    ~SynchronizedStreambuf();
};

int SynchronizedStreambuf::overflow( int ch )
{
    if ( myFd == -1 ) {
        return traits_type::eof();
    } else if ( ch == traits_type::eof() ) {
        return sync() == -1 ? traits_type::eof() : 0;
    } else {
        //  The buffer is full: flush it, then restart it with ch.
        return sync() == -1 ? traits_type::eof() : sputc( ch );
    }
}

int SynchronizedStreambuf::sync()
{
    size_t toWrite = pptr() - pbase();
    int result = ( toWrite == 0
                   || write( myFd, pbase(), toWrite ) == (ssize_t)toWrite )
                 ? 0 : -1;
    if ( result == -1 ) {
        close( myFd );
        setp( NULL, NULL );
        myFd = -1;
    } else {
        setp( &myBuffer[0], &myBuffer[0] + myBuffer.size() );
    }
    return result;
}

SynchronizedStreambuf::SynchronizedStreambuf( std::string const& filename )
    : myFd( open( filename.c_str(), O_WRONLY | O_CREAT | O_SYNC, 0664 ) )
    , myBuffer( 1000 )
{
    setp( &myBuffer[0], &myBuffer[0] + myBuffer.size() );
}

SynchronizedStreambuf::~SynchronizedStreambuf()
{
    sync();
    if ( myFd != -1 ) {
        close( myFd );
    }
}
(This has only been superficially tested, but the basic idea is there.)

C++ Debug Assertion Error

So, I've been playing around with C++ a bit and decided to write a program that involves opening and writing to a file in binary mode. I am not too familiar with the iostream functionality of C++ (I mostly do API based programming), but I read several technical guides on the subject and wrote some code. The code is meant to open one file, read its data to a buffer, and then convert that buffer to another format and write it to another file. The problem is that it keeps throwing a "Debug Assertion" error which apparently revolves around the invalid use of a null pointer. However, I couldn't make sense of it when I looked through the code. I probably just misused the iostream library or made a simple logic error. I need to have the separate SetMemBlock function as I plan on using the same base for formatting different output on a variety of functions. This is just my prototype. Anyways, here's my quick n' dirty class setup:
const bool DebugMode = true;

class A
{
public:
    bool FileFunction( const char *, const char * );
protected:
    bool SetMemBlock( char *, std::fstream &, std::streamoff & );
private:
    std::fstream SrcFileStream;
    std::fstream DestFileStream;
};

bool A::SetMemBlock( char* MemBlock, std::fstream & FileStream, std::streamoff & Size )
{
    std::streamoff TempOff = 0;
    //This is meant to check for a non-empty buffer and to see if the stream is valid.
    if( MemBlock != 0 || !FileStream.is_open() )
        return false;
    TempOff = FileStream.tellg();
    FileStream.seekg( 0, std::ios::end );
    Size = FileStream.tellg();
    MemBlock = new( std::nothrow ) char[ (int) Size ];
    if( MemBlock == 0 )
        return false;
    FileStream.seekg( 0, std::ios::beg );
    FileStream.read( MemBlock, (int) Size );
    if( !FileStream )
        return false;
    FileStream.seekg( TempOff );
    return true;
}

bool A::FileFunction( const char * SrcFile, const char * DestFile )
{
    char * MemBlock = 0;
    std::streamoff Size = 0;
    SrcFileStream.open( SrcFile, std::ios::binary | std::ios::in );
    DestFileStream.open( DestFile, std::ios::binary | std::ios::out );
    if( !SrcFileStream.is_open() || !DestFileStream.is_open() )
        return false;
    if( DebugMode )
    {
        std::cout << "Files opened successfully...\nNow writing memory block..." << std::endl;
    }
    if( !SetMemBlock( MemBlock, SrcFileStream, Size ) )
    {
        std::cout << "An error occurred when reading to memory block!" << std::endl;
        return false;
    }
    if( DebugMode )
    {
        std::cout << "Memory block written..." << std::endl;
    }
    DestFileStream.seekp( std::ios::beg );
    DestFileStream.write( MemBlock, Size );
    SrcFileStream.close();
    DestFileStream.close();
    delete[] MemBlock;
    return true;
}
You're passing MemBlock to SetMemBlock by value. The function therefore just sets the value of a local copy, which has no effect on the calling function; the value of MemBlock in the calling function thus remains garbage. Using it as a pointer will probably then lead to an assertion (if you're lucky) or an out-and-out crash (if you're not.) You want to pass that argument by reference instead.
If you don't know what these terms mean, Google "pass by value" and "pass by reference". You really need to understand the difference!
Pass MemBlock by reference:
bool A::SetMemBlock( char*& MemBlock, std::fstream & FileStream, std::streamoff & Size )
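To make the difference concrete, a tiny self-contained sketch (all names are made up for the demo):

```cpp
#include <cassert>

// Assigning to a pointer parameter that was passed by value changes only
// the local copy; the caller's pointer is untouched.  Passing the pointer
// by reference updates the caller's variable.
static char storage[4];

void set_by_value( char* p )  { p = storage; }   // lost on return
void set_by_ref( char*& p )   { p = storage; }   // caller sees the change

bool by_value_updates_caller() { char* p = 0; set_by_value( p ); return p != 0; }
bool by_ref_updates_caller()   { char* p = 0; set_by_ref( p );   return p != 0; }
```
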

inheriting ostream and streambuf problem with xsputn and overflow

I have been doing research on creating my own ostream, and along with that a streambuf to handle the buffer for my ostream. I actually have most of it working; I can insert (<<) into my stream and get strings with no problem. I do this by implementing the virtual function xsputn. However, if I input (<<) a float or an int to the stream instead of a string, xsputn never gets called.
I have walked through the code and I see that the stream is calling do_put, then f_put which eventually tries to put the float 1 character at a time into the buffer. I can get it to call my implementation of the virtual function overflow(int c) if I leave my buffer with no space and thereby get the data for the float and the int.
Now here is the problem, I need to know when the float is done being put into the buffer. Or to put it another way, I need to know when this is the last time overflow will be called for a particular value being streamed in. The reason xsputn works for me is because I get the whole value up front and its length. So i can copy it into the buffer then call out to the function waiting for the buffer to be full.
I am admittedly abusing the ostream design in that I need to cache the output then send it all at once for each inputted value (<<).
Anyways to be clear I will restate what I am shooting for in another way. There is a very good chance I am just going about it the wrong way.
I want to use an inherited ostream and streambuf so I can input values into it and let it handle my type conversion for me; then I want to ferry that information off to another object, a handle to which I pass down to the streambuf. That object has expensive I/O, so I don't want to send the data 1 char at a time.
Sorry in advance if this is unclear. And thank you for your time.
It's not too clear what you're doing, although it sounds roughly
right. Just to be sure: all your ostream does is provide
convenience constructors to create and install your streambuf,
a destructor, and possibly an implementation of rdbuf to
handle buffers of the right type. Supposing that's true:
defining xsputn in your streambuf is purely an optimization.
The key function you have to define is overflow. The simplest
implementation of overflow just takes a single character, and
outputs it to the sink. Everything beyond that is optimization:
you can, for example, set up a buffer using setp; if you do
this, then overflow will only be called when the buffer is
full, or a flush was requested. In this case, you'll have to
output the buffer as well (use pbase and pptr to get the
addresses). (The streambuf base class initializes the
pointers to create a 0 length buffer, so overflow will be
called for every character.) Other functions which you might
want to override in (very) specific cases:
imbue: If you need the locale for some reason. (Remember that
the current character encoding is part of the locale.)
setbuf: To allow client code to specify a buffer. (IMHO, it's
usually not worth the bother, but you may have special
requirements.)
seekoff: Support for seeking. I've never used this in any of
my streambufs, so I can't give any information beyond what
you could read in the standard.
sync: Called on flush, should output any characters in the
buffer to the sink. If you never call setp (so there's no
buffer), you're always in sync, and this can be a no-op.
overflow or uflow can call this one, or both can call some
separate function. (About the only difference between sync
and uflow is that uflow will only be called if there is
a buffer, and it will never be called if the buffer is empty.
sync will be called if the client code flushes the stream.)
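As a concrete illustration of the simplest case described above: no buffer at all (the base class sets up zero-length pointers), so overflow is called for every character. A std::string stands in for the (possibly expensive) sink:

```cpp
#include <cassert>
#include <cstdio>
#include <ostream>
#include <streambuf>
#include <string>

// Minimal output streambuf: overflow() forwards each character straight
// to the sink; sync() is a no-op because nothing is ever buffered.
class UnbufferedSink : public std::streambuf {
    std::string& mySink;
public:
    explicit UnbufferedSink( std::string& sink ) : mySink( sink ) {}
protected:
    virtual int overflow( int ch )
    {
        if ( ch != EOF ) mySink += (char)ch;
        return ch;  // returning EOF would signal an error
    }
    virtual int sync() { return 0; }  // nothing buffered: always in sync
};

std::string demo()
{
    std::string sink;
    UnbufferedSink buf( sink );
    std::ostream os( &buf );
    os << "pi=" << 3.25;  // the ostream formats; each char goes through overflow
    return sink;
}
```
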
When writing my own streams, unless performance dictates
otherwise, I'll keep it simple, and only override overflow.
If performance dictates a buffer, I'll usually put the code to
flush the buffer into a separate write(address, length)
function, and implement overflow and sync along the lines
of:
int MyStreambuf::overflow( int ch )
{
    if ( pbase() == NULL ) {
        //  save one char for next overflow:
        setp( buffer, buffer + bufferSize - 1 );
        if ( ch != EOF ) {
            ch = sputc( ch );
        } else {
            ch = 0;
        }
    } else {
        char* end = pptr();
        if ( ch != EOF ) {
            *end ++ = ch;
        }
        if ( write( pbase(), end - pbase() ) == failed ) {
            ch = EOF;
        } else if ( ch == EOF ) {
            ch = 0;
        }
        setp( buffer, buffer + bufferSize - 1 );
    }
    return ch;
}

int sync()
{
    return (pptr() == pbase()
            || write( pbase(), pptr() - pbase() ) != failed)
        ? 0
        : -1;
}
Generally, I'll not bother with xsputn, but if your client
code is outputting a lot of long strings, it could be useful.
Something like this should do the trick:
streamsize xsputn(char const* p, streamsize n)
{
    streamsize results = 0;
    if ( pptr() == pbase()
         || write( pbase(), pptr() - pbase() ) != failed ) {
        if ( write( p, n ) != failed ) {
            results = n;
        }
    }
    setp( buffer, buffer + bufferSize - 1 );
    return results;
}

Reading SDL_RWops from a std::istream

I'm quite surprised that Google didn't find a solution. I'm searching for a solution that allows SDL_RWops to be used with std::istream. SDL_RWops is the alternative mechanism for reading/writing data in SDL.
Any links to sites that tackle the problem?
An obvious solution would be to pre-read enough data to memory and then use SDL_RWFromMem. However, that has the downside that I'd need to know the filesize beforehand.
Seems like the problem could somehow be solved by "overriding" SDL_RWops functions...
I feel bad answering my own question, but it preoccupied me for some time, and this is the solution I came up with:
int istream_seek( struct SDL_RWops *context, int offset, int whence )
{
    std::istream* stream = (std::istream*) context->hidden.unknown.data1;
    if ( whence == SEEK_SET )
        stream->seekg( offset, std::ios::beg );
    else if ( whence == SEEK_CUR )
        stream->seekg( offset, std::ios::cur );
    else if ( whence == SEEK_END )
        stream->seekg( offset, std::ios::end );
    return stream->fail() ? -1 : (int) stream->tellg();
}

int istream_read( SDL_RWops *context, void *ptr, int size, int maxnum )
{
    if ( size == 0 ) return -1;
    std::istream* stream = (std::istream*) context->hidden.unknown.data1;
    stream->read( (char*)ptr, size * maxnum );
    return stream->bad() ? -1 : (int)( stream->gcount() / size );
}

int istream_close( SDL_RWops *context )
{
    if ( context ) {
        SDL_FreeRW( context );
    }
    return 0;
}

SDL_RWops *SDL_RWFromIStream( std::istream& stream )
{
    SDL_RWops *rwops = SDL_AllocRW();
    if ( rwops != NULL )
    {
        rwops->seek = istream_seek;
        rwops->read = istream_read;
        rwops->write = NULL;
        rwops->close = istream_close;
        rwops->hidden.unknown.data1 = &stream;
    }
    return rwops;
}
This works under the assumption that istreams are never freed by SDL (and that they live through the operation). Also, only istream support is in; a separate function would be needed for ostream -- I know I could pass an iostream, but that would not allow passing an istream to the conversion function :/.
Any tips on errors or upgrades welcome.
If you're trying to get an SDL_RWops struct from an istream, you could do it by reading the whole istream into memory and then using SDL_RWFromMem to get a struct to represent it.
Following is a quick example; note that it's unsafe, as no sanity checks are done. For example, if the file's size is 0, accessing buffer[0] may throw an exception or assert in debug builds.
// Open a bitmap in binary mode
std::ifstream bitmap("bitmap.bmp", std::ios::binary);
// Find the bitmap file's size
bitmap.seekg(0, std::ios_base::end);
std::istream::pos_type fileSize = bitmap.tellg();
bitmap.seekg(0);
// Allocate a buffer to store the file in
std::vector<char> buffer(static_cast<std::size_t>(fileSize));
// Copy the istream into the buffer
std::copy(std::istreambuf_iterator<char>(bitmap),
          std::istreambuf_iterator<char>(),
          buffer.begin());
// Get an SDL_RWops struct for the file
SDL_RWops* rw = SDL_RWFromMem(&buffer[0], buffer.size());
// Do stuff with the SDL_RWops struct