Inheriting from ostream and streambuf: problem with xsputn and overflow - C++

I have been researching how to create my own ostream, along with a streambuf to handle the buffering for it. I actually have most of it working: I can insert (<<) strings into my stream with no problem. I do this by implementing the virtual function xsputn. However, if I insert (<<) a float or an int into the stream instead of a string, xsputn never gets called.
I have stepped through the code and I see that the stream calls do_put, then f_put, which eventually tries to put the float into the buffer one character at a time. I can get it to call my implementation of the virtual function overflow(int c) if I leave my buffer with no space, and thereby get the data for the float and the int.
Now here is the problem: I need to know when the float is done being put into the buffer. Or, to put it another way, I need to know when a particular overflow call is the last one for the value currently being streamed in. The reason xsputn works for me is that I get the whole value, and its length, up front. So I can copy it into the buffer and then call out to the function waiting for the buffer to be full.
I am admittedly abusing the ostream design, in that I need to cache the output and then send it all at once for each inserted value (<<).
Anyway, to be clear, I will restate what I am shooting for in another way; there is a very good chance I am just going about it the wrong way.
I want to use an inherited ostream and streambuf so I can insert values and have them handle the type conversion for me, and then ferry that information off to another object, a handle to which I am passing down to the streambuf. That object has expensive I/O, so I don't want to send the data one char at a time.
Sorry in advance if this is unclear, and thank you for your time.

It's not too clear what you're doing, although it sounds roughly
right. Just to be sure: all your ostream does is provide
convenience constructors to create and install your streambuf,
a destructor, and possibly an implementation of rdbuf to
handle buffers of the right type. Supposing that's true:
defining xsputn in your streambuf is purely an optimization.
The key function you have to define is overflow. The simplest
implementation of overflow just takes a single character, and
outputs it to the sink. Everything beyond that is optimization:
you can, for example, set up a buffer using setp; if you do
this, then overflow will only be called when the buffer is
full, or a flush was requested. In this case, you'll have to
output the buffer contents as well (use pbase and pptr to get the
addresses). (The streambuf base class initializes the
pointers to create a 0 length buffer, so overflow will be
called for every character.) Other functions which you might
want to override in (very) specific cases:
imbue: If you need the locale for some reason. (Remember that
the current character encoding is part of the locale.)
setbuf: To allow client code to specify a buffer. (IMHO, it's
usually not worth the bother, but you may have special
requirements.)
seekoff: Support for seeking. I've never used this in any of
my streambufs, so I can't give any information beyond what
you could read in the standard.
sync: Called on flush, should output any characters in the
buffer to the sink. If you never call setp (so there's no
buffer), you're always in sync, and this can be a no-op.
overflow can call this one, or both can call some separate
function. (With a buffer set up, about the only difference between
sync and overflow is that overflow is only called when the buffer
is full, and never when the buffer is empty, whereas sync is called
whenever the client code flushes the stream.)
When writing my own streams, unless performance dictates
otherwise, I'll keep it simple, and only override overflow.
If performance dictates a buffer, I'll usually put the code to
flush the buffer into a separate write(address, length)
function, and implement overflow and sync along the lines
of:
// write() is the separate flush-to-sink function mentioned above;
// "failed" stands for whatever error value it returns.
int MyStreambuf::overflow( int ch )
{
    if ( pbase() == NULL ) {
        // save one char for next overflow:
        setp( buffer, buffer + bufferSize - 1 );
        if ( ch != EOF ) {
            ch = sputc( ch );
        } else {
            ch = 0;
        }
    } else {
        char* end = pptr();
        if ( ch != EOF ) {
            *end ++ = ch;
        }
        if ( write( pbase(), end - pbase() ) == failed ) {
            ch = EOF;
        } else if ( ch == EOF ) {
            ch = 0;
        }
        setp( buffer, buffer + bufferSize - 1 );
    }
    return ch;
}
int MyStreambuf::sync()
{
    return (pptr() == pbase()
            || write( pbase(), pptr() - pbase() ) != failed)
        ? 0
        : -1;
}
Generally, I'll not bother with xsputn, but if your client
code is outputting a lot of long strings, it could be useful.
Something like this should do the trick:
streamsize MyStreambuf::xsputn(char const* p, streamsize n)
{
    streamsize results = 0;
    if ( pptr() == pbase()
         || write( pbase(), pptr() - pbase() ) != failed ) {
        if ( write(p, n) != failed ) {
            results = n;
        }
    }
    setp( buffer, buffer + bufferSize - 1 );
    return results;
}
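To tie this back to the original question, below is a minimal compilable sketch (C++11) of the whole arrangement. Sink and its writeBlock() member are hypothetical stand-ins for the expensive-I/O object and whatever interface it really has; the streambuf batches everything, floats and ints included, and hands it to the sink in blocks rather than one character at a time:
#include <cstddef>
#include <ostream>
#include <streambuf>
#include <vector>

// Hypothetical stand-in for the expensive-I/O object.
struct Sink {
    void writeBlock(char const* p, std::size_t n);
};

class SinkStreambuf : public std::streambuf {
public:
    explicit SinkStreambuf(Sink& sink)
        : mySink(sink), myBuffer(1024)
    {
        // Reserve one slot so overflow() always has room for its argument.
        setp(&myBuffer[0], &myBuffer[0] + myBuffer.size() - 1);
    }
    ~SinkStreambuf() { flushBuffer(); }  // push out any tail on destruction

protected:
    int_type overflow(int_type ch) override
    {
        if (ch != traits_type::eof()) {
            *pptr() = traits_type::to_char_type(ch);  // use the reserved slot
            pbump(1);
        }
        return flushBuffer() ? traits_type::not_eof(ch) : traits_type::eof();
    }

    int sync() override
    {
        return flushBuffer() ? 0 : -1;  // called on flush (std::flush, std::endl)
    }

private:
    bool flushBuffer()
    {
        std::ptrdiff_t n = pptr() - pbase();
        if (n > 0) {
            mySink.writeBlock(pbase(), (std::size_t)n);  // one big write
            pbump((int)-n);                              // reset the put pointer
        }
        return true;  // a real version would report writeBlock() failures
    }

    Sink& mySink;
    std::vector<char> myBuffer;
};

class SinkOstream : public std::ostream {
public:
    explicit SinkOstream(Sink& sink) : std::ostream(&myBuf), myBuf(sink) {}
private:
    SinkStreambuf myBuf;
};
With that in place, SinkOstream out(sink); out << 3.14 << 42; hands the formatted characters to the buffer, and the sink sees them in one writeBlock() call per flush or buffer-full. The overflow-per-character behaviour you observed disappears because setp() gives the float formatting somewhere to land.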


Fastest way to read millions of integers from stdin C++?

I am working on a sorting project and I've come to the point where the main bottleneck is reading in the data. It takes my program about 20 seconds to sort 100,000,000 integers read from stdin using cin with std::ios::sync_with_stdio(false), but it turns out that 10 of those seconds are spent reading in the data to sort. We do know how many integers we will be reading (the count is at the top of the file we need to sort).
How can I make this faster? I know it's possible because a student in a previous semester was able to do counting sort in a little over 3 seconds (and that's basically purely read time).
The program is just fed the contents of a file with integers separated by newlines like $ ./program < numstosort.txt
Thanks
Here is the relevant code:
std::ios::sync_with_stdio(false);
int max;
cin >> max;
short num;
short* a = new short[max];
int n = 0;
while(cin >> num) {
    a[n] = num;
    n++;
}
This will get your data into memory about as fast as possible, assuming Linux/POSIX running on commodity hardware. Note that since you apparently aren't allowed to use compiler optimizations, C++ IO is not going to be the fastest way to read data. As others have noted, without optimizations the C++ code will not run anywhere near as fast as it can.
Given that the redirected file is already open as stdin/STDIN_FILENO, use low-level system call/C-style IO. That won't need to be optimized, as it will run just about as fast as possible:
struct stat sb;
int rc = ::fstat( STDIN_FILENO, &sb );

// use C-style calloc() to get memory that's been
// set to zero as calloc() is often optimized to be
// faster than a new followed by a memset().
char *data = (char *)::calloc( 1, sb.st_size + 1 );

size_t totalRead = 0UL;
while ( totalRead < (size_t)sb.st_size )
{
    ssize_t bytesRead = ::read( STDIN_FILENO,
        data + totalRead, sb.st_size - totalRead );
    if ( bytesRead <= 0 )
    {
        break;
    }
    totalRead += bytesRead;
}
// data is now in memory - start processing it
That code will read your data into memory as one long C-style string. And the lack of compiler optimizations won't matter one bit as it's all almost bare-metal system calls.
Using fstat() to get the file size allows allocating all the needed memory at once - no realloc() or copying data around is necessary.
You'll need to add some error checking, and a more robust version of the code would check that what fstat() describes actually is a regular file with a real size, and not a "useless use of cat" such as cat filename | YourProgram; in that case the fstat() call won't return a useful file size. Examine the sb.st_mode field of the struct stat after the call to see what the stdin stream really is:
::fstat( STDIN_FILENO, &sb );
...
if ( S_ISREG( sb.st_mode ) )
{
    // regular file...
}
(And for really high-performance systems, it can be important to ensure that the memory pages you're reading data into are actually mapped in your process address space. Performance can really stall if data arrives faster than the kernel's memory management system can create virtual-to-physical mappings for the pages data is getting dumped into.)
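If that's a concern here, one POSIX-level trick (a sketch; do this after the calloc() but before the read() loop above, and include <unistd.h> for sysconf()) is to touch every page of the block once:
// Touch each page once (writing preserves the calloc'd zeros) so the
// kernel maps it before read() starts dumping data into it.
size_t pageSize = (size_t)::sysconf( _SC_PAGESIZE );
for ( size_t off = 0; off < (size_t)sb.st_size; off += pageSize )
{
    data[ off ] = 0;
}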
To handle a large file as fast as possible, you'd want to go multithreaded, with one thread reading data and feeding one or more data processing threads so you can start processing data before you're done reading it.
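A sketch of that reader/parser split, under these assumptions: POSIX read() on stdin, C++11 threads, and a simple mutex-guarded queue for handing off chunks; parseChunk() is a hypothetical stand-in for the actual number parsing:
#include <unistd.h>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

static std::queue<std::vector<char> > chunks;
static std::mutex m;
static std::condition_variable cv;
static bool done = false;

static void reader()
{
    for ( ;; )
    {
        std::vector<char> chunk( 1 << 20 );   // 1 MiB per read
        ssize_t n = ::read( STDIN_FILENO, chunk.data(), chunk.size() );
        if ( n <= 0 )
            break;                            // EOF or error: stop producing
        chunk.resize( (size_t)n );
        {
            std::lock_guard<std::mutex> lock( m );
            chunks.push( std::move( chunk ) );
        }
        cv.notify_one();
    }
    {
        std::lock_guard<std::mutex> lock( m );
        done = true;
    }
    cv.notify_one();
}

static void parser()
{
    for ( ;; )
    {
        std::unique_lock<std::mutex> lock( m );
        cv.wait( lock, []{ return !chunks.empty() || done; } );
        if ( chunks.empty() )
            return;                           // producer finished, queue drained
        std::vector<char> chunk = std::move( chunks.front() );
        chunks.pop();
        lock.unlock();
        // parseChunk( chunk );  // hypothetical; beware numbers split
        // across chunk boundaries, which need carrying over.
    }
}

int main()
{
    std::thread t1( reader );
    std::thread t2( parser );
    t1.join();
    t2.join();
}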
Edit: parsing the data.
Again, preventing compiler optimizations probably makes the overhead of C++ operations slower than C-style processing. Based on that assumption, something simple will probably run faster.
This would probably work a lot faster in a non-optimized binary, assuming the data is in a C-style string read in as above:
char *next;
long count = ::strtol( data, &next, 0 );
long *values = new long[ count ];
for ( long ii = 0; ii < count; ii++ )
{
    values[ ii ] = ::strtol( next, &next, 0 );
}
That is also very fragile. It relies on strtol() skipping over leading whitespace, meaning it will fail if there's anything other than whitespace between the numeric values. It also relies on the initial count of values being correct; that code will fail if it isn't. And because it can replace the value of next before checking for errors, if it ever goes off the rails because of bad data it'll be hopelessly lost.
But it should be about as fast as possible without allowing compiler optimizations.
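If a little robustness is wanted without giving up much speed, strtol()'s end pointer and errno can be checked per value. A sketch, continuing the loop body above (needs <cerrno>):
// A value is bad if strtol() consumed nothing (next == start)
// or overflowed (errno == ERANGE).
char *start = next;
errno = 0;
values[ ii ] = ::strtol( start, &next, 0 );
if ( next == start || errno == ERANGE )
{
    break;  // bad or missing number: stop instead of running off the rails
}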
That's what's crazy about not allowing compiler optimizations: you can write simple, robust C++ code to do all your processing, make use of a good optimizing compiler, and probably run almost as fast as the code I posted - which has no error checking and will fail spectacularly in unexpected and undefined ways if fed unexpected data.
You can make it faster if you use a solid-state drive. And if you want to ask about code performance, you need to post how you are doing things in the first place.
You may be able to speed up your program by reading the data into a buffer, then converting the text in the buffer to internal representation.
The thought behind this is that all stream devices like to keep streaming. Starting and stopping the stream wastes time. A block read transfers a lot of data with one transaction.
Although cin is buffered, by using cin.read and a buffer, you can make the buffer a lot bigger than cin uses.
If the data has fixed width fields, there are opportunities to speed up the input and conversion processes.
Edit 1: Example
const unsigned int BUFFER_SIZE = 65536;
char text_buffer[BUFFER_SIZE];
//...
cin.read(text_buffer, BUFFER_SIZE);
//...
int value1;
// Note: snscanf() is not standard; it's the MSVC-style bounded sscanf
// (_snscanf). Pass the buffer itself, not its address.
int arguments_scanned = snscanf(text_buffer, REMAINING_BUFFER_SIZE,
                                "%d", &value1);
The tricky part is handling the cases where the text of a number is cut off at the end of the buffer.
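The carry-over can be handled by moving any cut-off digits to the front of the buffer before the next read. A sketch, continuing the example above (assumes whitespace-separated integers shorter than the buffer; needs <cctype> and <cstring>):
size_t tail = 0;
for ( ;; )
{
    cin.read(text_buffer + tail, BUFFER_SIZE - tail);
    size_t got = (size_t)cin.gcount();
    size_t len = tail + got;
    if (len == 0)
        break;                      // nothing left at all
    size_t last = len;
    if (got != 0)                   // mid-stream: the last number may be cut off
    {
        while (last > 0 && !isspace((unsigned char)text_buffer[last - 1]))
            --last;                 // back up to the start of a cut-off token
    }
    // ... parse the complete tokens in text_buffer[0 .. last) here ...
    tail = len - last;
    memmove(text_buffer, text_buffer + last, tail);   // carry the partial token over
    if (got == 0)
        break;                      // EOF: everything has been parsed
}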
Can you run this little test and compare it to yours, with and without the commented-out line?
#include <iostream>
#include <cstdlib>

int main()
{
    std::ios::sync_with_stdio(false);
    char buffer[20] = {0};
    int t = 0;
    while( std::cin.get(buffer, 20) )
    {
        // t = std::atoi(buffer);
        std::cin.ignore(1);
    }
    return 0;
}
Pure read test:
#include <iostream>
#include <cstdlib>

int main()
{
    std::ios::sync_with_stdio(false);
    char buffer[1024*1024];
    while( std::cin.read(buffer, 1024*1024) )
    {
    }
    return 0;
}

Calling WriteFile twice

bool sendMessageToGraphics(char* msg)
{
    //char ea[] = "SSS";
    char* chRequest = msg; // Client -> Server
    DWORD cbBytesWritten, cbRequestBytes;

    // Send one message to the pipe.
    cbRequestBytes = sizeof(TCHAR) * (lstrlen(chRequest) + 1);

    if (*msg - '8' == 0)
    {
        char new_msg[1024] = { 0 };
        string answer = "0" + '\0';
        copy(answer.begin(), answer.end(), new_msg);
        char *request = new_msg;
        WriteFile(hPipe, request, cbRequestBytes, &cbRequestBytes, NULL);
    }

    BOOL bResult = WriteFile( // Write to the pipe.
        hPipe,                // Handle of the pipe
        chRequest,            // Message to be written
        cbRequestBytes,       // Number of bytes to write
        &cbBytesWritten,      // Number of bytes written
        NULL);                // Not overlapped

    if (!bResult/*Failed*/ || cbRequestBytes != cbBytesWritten/*Failed*/)
    {
        _tprintf(_T("WriteFile failed w/err 0x%08lx\n"), GetLastError());
        return false;
    }

    _tprintf(_T("Sends %ld bytes; Message: \"%s\"\n"),
             cbBytesWritten, chRequest);

    return true;
}
After the first WriteFile runs (in the case of '8'), the second WriteFile doesn't work right. Can someone see why?
The function sendMessageToGraphics needs to send a move to a chess board.
There are two problems in your code:
First of all, there's a (minor) problem where you initialize a string in your conditional statement. You initialize it like so:
string answer = "0" + '\0';
This does not do what you think it does. It invokes the built-in operator+ with const char* and char as its argument types, which performs pointer arithmetic: '\0' converts to the integer value 0, so nothing is added to the pointer, and answer is simply initialized from "0". Your string therefore ends up without the '\0' you meant to append. You could solve this by changing the statement to:
string answer = std::string("0") + '\0';
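A quick illustration of the difference:
const char *p = "0" + '\0';                // pointer arithmetic: same as "0" + 0
std::string a(p);                          // a == "0", size 1 - nothing was appended
std::string b = std::string("0") + '\0';   // b.size() == 2 - embedded '\0' kept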
But the real problem lies in the way you use your size variables. You first set the size variable to the string length of your input variable (including the terminating '\0' character). Then, in your conditional statement, you create a new string which you pass to WriteFile, yet you still use the original size. This may cause a buffer overrun, which is undefined behavior. You also pass that same variable as the bytes-written output parameter, so it gets overwritten with however many bytes were actually written, and you then reuse that value in the next call without ever checking it. This could cause problems.
The easiest way to change this, is to make sure your sizes are set up correctly. For example, instead of the first call, you could do this:
WriteFile(hPipe, request, answer.size(), &cbBytesWritten, NULL);
Then check the return value WriteFile and the value of cbBytesWritten before you make the next call to WriteFile, that way you know your first call succeeded too.
Also, do not forget to remove the sizeof(TCHAR) part of your size calculation. You never actually use TCHAR in your code: your input is a plain char*, and so is the string you build in your conditional. Note that WriteFile itself just writes raw bytes (it has no ANSI/Unicode variants), so no character-size scaling is needed.
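Putting those fixes together, the conditional branch might look something like this sketch (hPipe and msg as in the question; whether the protocol really wants the explicit trailing '\0' byte is an assumption):
if (*msg == '8')
{
    std::string answer("0");
    answer.push_back('\0');                  // explicit terminator byte, if wanted
    DWORD bytesWritten = 0;
    BOOL ok = WriteFile(hPipe, answer.data(),
                        (DWORD)answer.size(), &bytesWritten, NULL);
    if (!ok || bytesWritten != answer.size())
    {
        return false;                        // first write failed: don't try the second
    }
}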
Last of all, make sure your server is actually reading bytes from the handle you write to. If your server does not read from the handle, WriteFile will block until it can write to the pipe again.

Reading a file into a string buffer and detecting EOF

I am opening a file and placing its contents into a string buffer to do some lexical analysis on a per-character basis. Doing it this way lets parsing finish faster than with a series of fread() calls, and since the source file will never be larger than a couple of MBs, I can rest assured that the entire contents of the file will always be read.
However, there seems to be some trouble in detecting when there is no more data to be parsed, because ftell() often gives me a value higher than the actual number of characters in the file. This wouldn't be a problem if the trailing characters were always EOF (-1)... but that is not always the case...
Here's how I am opening the file, and reading it into the string buffer:
FILE *fp = NULL;
errno_t err = _wfopen_s(&fp, m_sourceFile, L"rb, ccs=UNICODE");
if(fp == NULL || err != 0) return FALSE;

if(fseek(fp, 0, SEEK_END) != 0) {
    fclose(fp);
    fp = NULL;
    return FALSE;
}

LONG fileSize = ftell(fp);
if(fileSize == -1L) {
    fclose(fp);
    fp = NULL;
    return FALSE;
}
rewind(fp);

LPSTR s = new char[fileSize];
RtlZeroMemory(s, sizeof(char) * fileSize);

DWORD dwBytesRead = 0;
if(fread(s, sizeof(char), fileSize, fp) != fileSize) {
    fclose(fp);
    fp = NULL;
    return FALSE;
}
This always appears to work perfectly fine. Following this is a simple loop, which checks the contents of the string buffer one character at a time, like so:
char c = 0;
LONG nPos = 0;
while(c != EOF && nPos <= fileSize)
{
    c = s[nPos];
    // do something with 'c' here...
    nPos++;
}
The trailing bytes of the file are usually a series of ý (-3) and « (-85) characters, and therefore EOF is never detected. Instead, the loop simply continues until nPos ends up greater than fileSize, which is not desirable for proper lexical analysis, because you often end up skipping the final token in a stream that omits a newline character at the end.
In a Basic Latin character set, would it be safe to assume that an EOF char is any character with a negative value? Or perhaps there is just a better way to go about this?
#EDIT: I have just tried to implement the feof() function into my loop, and all the same, it doesn't seem to detect EOF either.
Assembling comments into an answer...
You leak memory (potentially a lot of memory) when you fail to read.
You haven't allowed for a null terminator at the end of the string read.
There's no point in zeroing the memory when it is all about to be overwritten by the data from the file.
Your test loop is accessing memory out of bounds; nPos == fileSize is one beyond the end of the memory you allocated.
char c = 0;
LONG nPos = 0;
while(c != EOF && nPos <= fileSize)
{
    c = s[nPos];
    // do something with 'c' here...
    nPos++;
}
There are other problems, not previously mentioned, with this. You did ask if it is 'safe to assume that an EOF char is any character with a negative value', to which I responded No. There are several issues here, which affect both C and C++ code. The first is that plain char may be a signed type or an unsigned type. If the type is unsigned, then you can never store a negative value in it (or, more accurately, if you attempt to store a negative integer into an unsigned char, it will be truncated to the least significant 8* bits and will be treated as positive).
In the loop above, one of two problems can occur. If char is a signed type, then there is a character (ÿ, y-umlaut, U+00FF, LATIN SMALL LETTER Y WITH DIAERESIS, 0xFF in the Latin-1 code set) that has the same value as EOF (which is always negative and usually -1). Thus, you might detect EOF prematurely. If char is an unsigned type, then there will never be any character equal to EOF. But the test for EOF on a character string is fundamentally flawed; EOF is a status indicator from I/O operations and not a character.
During I/O operations, you will only detect EOF when you've attempted to read data that isn't there. The fread() won't report EOF; you asked to read what was in the file. If you tried getc(fp) after the fread(), you'd get EOF unless the file had grown since you measured how long it is. Since _wfopen_s() is a non-standard function, it might be affecting how ftell() behaves and the value it reports. (But you later established that wasn't the case.)
Note that functions such as fgetc() or getchar() are defined to return characters as positive integers and EOF as a distinct negative value.
If the end-of-file indicator for the input stream pointed to by stream is not set and a next character is present, the fgetc function obtains that character as an unsigned char converted to an int.
If the end-of-file indicator for the stream is set, or if the stream is at end-of-file, the end-of-file indicator for the stream is set and the fgetc function returns EOF. Otherwise, the fgetc function returns the next character from the input stream pointed to by stream. If a read error occurs, the error indicator for the stream is set and the fgetc function returns EOF.289)
289) An end-of-file and a read error can be distinguished by use of the feof and ferror functions.
This indicates how EOF is separate from any valid character in the context of I/O operations.
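Applied to the code in the question, that means bounding the scan by the byte count rather than hunting for an EOF character. A minimal corrected sketch, reusing fileSize and fp from above:
char *s = new char[ fileSize + 1 ];
size_t nRead = fread( s, sizeof(char), fileSize, fp );
s[ nRead ] = '\0';                 // terminator, in case C-string scanning is wanted
for ( LONG nPos = 0; nPos < (LONG)nRead; nPos++ )
{
    unsigned char c = (unsigned char)s[ nPos ];   // a byte, never EOF
    // do something with 'c' here...
}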
You comment:
As for any potential memory leakage... At this stage in my project, memory leaks are one of many problems with my code which, as of yet, are of no concern to me. Even if it didn't leak memory, it doesn't even work to begin with, so what's the point? Functionality comes first.
It is easier to head off memory leaks in error paths at the initial coding stage than to go back later and fix them, because you may never spot them if you never trigger the error condition. However, how much that matters depends on the intended audience for the program. If it is a one-off for a coding course, you may be fine. If you're the only person who'll use it, you may be fine. But if it will be installed by millions, you'll have problems retrofitting the checks everywhere.
I have swapped _wfopen_s() with fopen() and the result from ftell() is the same. However, after changing the corresponding lines to LPSTR s = new char[fileSize + 1], RtlZeroMemory(s, sizeof(char) * fileSize + 1); (which should also null-terminate it, btw), and adding if(nPos == fileSize) to the top of the loop, it now comes out cleanly.
OK. You could use just s[fileSize] = '\0'; to null terminate the data too, but using RtlZeroMemory() achieves the same effect (but would be slower if the file is many megabytes in size). But I'm glad the various comments and suggestions helped get you back on track.
* In theory, CHAR_BIT might be larger than 8; in practice it is almost always 8 and, for simplicity, I'm assuming it is 8 bits here. The discussion has to be more nuanced if CHAR_BIT is 9 or more, but the net effect is much the same.

Use stringstream to read from TCP socket

I am using a socket library (I'd rather not stop using it) whose recv operation works with std::string, but it is just a wrapper for one call of the recv socket function, so it is probable that I only get part of the message I wanted. My first instinct was to loop, appending the received string to another string until I get everything, but this seems inefficient. Another possibility was to do the same with a char array, but this seems messy (I'd have to check the string's size before adding it to the array, and if it overflowed I'd need to store the string somewhere until the array has room again...).
So I was thinking about using a stringstream. I use a TLV protocol, so I need to first extract two bytes into an unsigned short, then get that many bytes from the stringstream, and then loop again until I reach a delimiter field.
Is there any better way to do this? Am I completely on the wrong track? Are there any best practices? So far I've only ever seen the socket library used directly with char arrays, so I'm curious whether using std::string with stringstreams could be a bad idea.
Edit, replying to the comment below: the library is one we use internally; it's not public (it's nothing special though, mostly just a wrapper around the socket library to add exceptions, etc.).
I should mention that I have a working prototype using the socket library directly.
This works something like:
int lengthFieldSize = sizeof(unsigned short);
int endOfBuffer = 0; // Index of last valid position in buffer.
while(true) {
    char buffer[RCVBUFSIZE];
    while(true) {
        int offset = endOfBuffer;
        int rs = 0;
        rs = recv(sock, buffer+offset, sizeof(buffer)-offset, 0);
        endOfBuffer += rs;
        if(rs < 1) {
            // Received nothing or error.
            break;
        } else if(endOfBuffer == RCVBUFSIZE) {
            // Buffer full.
            break;
        } else if(rs > 0 && endOfBuffer > 1) {
            unsigned short msglength = 0;
            memcpy((char *) &msglength, buffer+endOfBuffer-lengthFieldSize, lengthFieldSize);
            if(msglength == 0) {
                break; // Received a full transmission.
            }
        }
    }
    unsigned int startOfData = 0;
    unsigned short protosize = 0;
    while(true) {
        // Copy first two bytes into protosize (length field).
        memcpy((char *) &protosize, buffer+startOfData, lengthFieldSize);
        // Is the last length field the delimiter?
        // Then reply and return. (We're done.)
        // Otherwise: Is the next message not completely in the buffer?
        // Then break. (Outer while will take us back to receiving.)
        if(protosize == 0) {
            // Done receiving. Now send:
            SendReplyMsg(sock, lengthFieldSize);
            // Clean up.
            close(sock);
            return;
        } else if((endOfBuffer-lengthFieldSize-startOfData) < protosize) {
            memmove(buffer, buffer+startOfData, RCVBUFSIZE-startOfData);
            // Adjust endOfBuffer:
            endOfBuffer -= startOfData;
            break;
        }
        startOfData += lengthFieldSize;
        gtControl::gtMsg gtMessage;
        if(!gtMessage.ParseFromArray(buffer+startOfData, protosize)) {
            cerr << "Failed to parse gtMessage." << endl;
            close(sock);
            return;
        }
        // Move position pointer forward by one message (length+pbuf).
        startOfData += protosize;
        PrintGtMessage(&gtMessage);
    }
}
So basically I have a big loop which contains a receiving loop and a parsing loop. A character array is passed back and forth, since I can't be sure I've received everything until I actually parse it. I'm trying to replicate this behaviour using "proper" C++ (i.e. std::string).
My first instinct was to go in a loop and append the received string to another string until I get everything, but this seems inefficient.
String concatenation is technically platform dependent, but probably str1 + str2 will require one dynamic allocation and two copies (from str1 and str2). That's sorta slow, but it's far faster than network access! So my first piece of advice would be to go with your first instinct, to find out whether it's both correct and fast enough.
If it's not fast enough, and your profiler shows that the redundant string copies are to blame, consider maintaining a list of strings (std::vector<string*>, perhaps) and joining all the strings together once at the end. This requires some care, but should avoid a bunch of redundant string copying.
But definitely profile first!
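For illustration, the accumulate-then-parse loop for the TLV protocol described in the question might look like the sketch below. Socket, sock.recv() and handleRecord() are hypothetical stand-ins for the wrapper library's type and call and the application's handler; the length field is assumed to be two bytes in host byte order, as in the prototype.
#include <cstring>
#include <string>

void receiveLoop(Socket &sock)             // Socket: the wrapper's type (assumed)
{
    std::string inbox;
    for ( ;; )
    {
        std::string chunk = sock.recv();   // one underlying recv() call
        if ( chunk.empty() )
            return;                        // connection closed or error
        inbox += chunk;                    // append: cheap next to network I/O

        while ( inbox.size() >= 2 )        // enough bytes for a length field?
        {
            unsigned short len;
            std::memcpy( &len, inbox.data(), 2 );
            if ( len == 0 )
                return;                    // delimiter field: transmission done
            if ( inbox.size() < 2u + len )
                break;                     // value not fully received yet
            handleRecord( inbox.substr( 2, len ) );  // hypothetical handler
            inbox.erase( 0, 2u + len );    // drop the consumed record
        }
    }
}
Note that erase(0, n) shifts the remaining bytes down each time, so for very high throughput a ring buffer or a moving offset would be better; but as with the concatenation itself, profile before complicating it.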

Using buffers to read from unknown size file

I'm trying to read blocks from a file and I have a problem.
char* inputBuffer = new char[blockSize];
while (inputFile.read(inputBuffer, blockSize)) {
    int i = inputFile.gcount();
    //Do stuff
}
Suppose our block size is 1024 bytes and the file is 24.3 KiB. After reading the 24th block, there will be 0.3 KiB left to read. I also want to read that 0.3 KiB; in fact I use gcount() later so I can know how much of the buffer read(...) actually modified (in case it is less than a full block).
But when it comes to the final partial block, read(...) returns a value such that the program does not enter the loop, obviously because the number of remaining unread bytes in the file is less than the buffer size. What should I do?
I think that Konrad Rudolph, whom you mention in a comment on another answer, makes a good point about the problem with reading until eof: if you never reach eof because of some other error, you are in an infinite loop. So take his advice, but modify it to address the problem you have identified. One way of doing it is as follows:
bool okay = true;
while ( okay ) {
    // static_cast needed since C++11: the stream's conversion to bool is explicit.
    okay = static_cast<bool>( inputFile.read(inputBuffer, blockSize) );
    int i = inputFile.gcount();
    if( i ) {
        //Do stuff
    }
}
Edit: Since my answer has been accepted, I am editing it to be as useful as possible. It turns out my bool okay is quite unnecessary (see ferosekhanj's answer). It is better to test the value of inputFile directly, which also has the advantage that you can elegantly avoid entering the loop if the file did not open okay. So I think this is the canonical solution to this problem:
inputFile.open( "test.txt", ios::binary );
while ( inputFile ) {
inputFile.read( inputBuffer, blockSize );
int i = inputFile.gcount();
if( i ) {
//Do stuff
}
}
Now the last time you //Do stuff, i will be less than blockSize, except when the file happens to be an exact multiple of blockSize bytes long.
Konrad Rudolph's answer here is also good; it has the advantage that .gcount() is only called once, outside the loop, but the disadvantage that it really needs the data processing to be put in a separate function to avoid duplication, as sketched below.
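A sketch of what that might look like, with processBlock as a hypothetical name for the factored-out handler:
void processBlock( const char *p, std::streamsize n );   // hypothetical

while ( inputFile.read( inputBuffer, blockSize ) )
    processBlock( inputBuffer, blockSize );              // full blocks
processBlock( inputBuffer, inputFile.gcount() );         // final short block (possibly 0 bytes)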
The solution @Konrad Rudolph mentioned is to check the stream object itself, since that includes checking for eof and error conditions. inputFile.read() returns the stream, i.e. inputFile itself, so you can write:
while(inputFile.read(inputBuffer, blockSize))
But this does not always work; the case where it fails is exactly your case. A proper solution is to write it as below:
char* inputBuffer = new char[blockSize];
while (inputFile)
{
    inputFile.read(inputBuffer, blockSize);
    int count = inputFile.gcount();
    // Access the buffer up to count bytes.
    //Do stuff
}
I think this is the solution @Konrad Rudolph meant in his post. From my old C++ experience, I would also do something like the above.
But when it comes to the final partial block, read(...) returns a value such that the program does not enter the loop, obviously because the number of remaining unread bytes in the file is less than the buffer size.
That's because your loop is wrong. You should be doing:
while(inputFile) {
    std::streamsize numBytes = inputFile.readsome(inputBuffer, blockSize);
    //Do stuff
}
Notice the use of readsome instead of read.