ifstream.eof() - end of file reached before the real end - C++

I have a roughly 11.1 GB binary file that stores a series of depth frames from the Kinect. There are 19437 frames in this file. To read one frame at a time, I use ifstream from <fstream>, but it reaches EOF before the real end of the file. (I only get the first 20 frames, and the function stops because of the eof flag.)
However, all frames can be read by using fread from <cstdio> instead.
Can anyone explain this situation? Thank you for your time.
Here are my two functions:
// ifstream.read() - does NOT work: the loop stops after the 20th frame because of the eof flag
ifstream depthStream("fileName.dat");
if (depthStream.is_open())
{
    while (!depthStream.eof())
    {
        char* buffer = new char[640*480*2];
        depthStream.read(buffer, 640*480*2);
        // Store the buffer data in an OpenCV Mat
        delete[] buffer;
    }
}
// fread() - works: gets all 19437 frames successfully
FILE* depthStream;
depthStream = fopen("fileName.dat", "rb");
if (depthStream != NULL)
{
    while (!feof(depthStream))
    {
        char* buffer = new char[640*480*2];
        fread(buffer, 1, 640*480*2, depthStream);
        // Store the buffer data in an OpenCV Mat
        delete[] buffer;
    }
    fclose(depthStream);
}
Again, thank you for your time.

You need to open the stream in binary mode; otherwise it will stop at the first byte it sees with a value of 26.
ifstream depthStream("fileName.dat", ios_base::in | ios_base::binary);
As for why 26 is special: it's the code for Ctrl-Z, which was a convention used to mark the end of a text file. The history behind this is recorded in Raymond Chen's blog.
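A minimal sketch of the corrected loop, assuming the same 640x480 16-bit frame layout as the question; it opens the file in binary mode and tests read() directly, which also avoids processing a stale buffer after the last complete frame:
#include <fstream>
#include <vector>

int main()
{
    const std::streamsize FRAME_SIZE = 640 * 480 * 2;
    std::ifstream depthStream("fileName.dat", std::ios::in | std::ios::binary);
    std::vector<char> buffer(FRAME_SIZE);
    // read() evaluates as false once a full frame can no longer be extracted
    while (depthStream.read(buffer.data(), FRAME_SIZE))
    {
        // a complete frame is in buffer here; store it in an OpenCV Mat, etc.
    }
}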

Related

How to check whether ifstream is end of file in C++

I need to read all blocks of one large file (about 10 GB) sequentially. The file contains many floats with a few strings mixed in, like this (each item separated by '\n'):
6.292611
-1.078219E-266
-2.305673E+065
sod;eiwo
4.899747e-237
1.673940e+089
-4.515213
I read MAX_NUM_PER_FILE items each time, process them, and write them to another file, but I don't know when the ifstream reaches its end.
Here is my code:
ifstream file_input(path_input); // my file is a text file, but I tried both text and binary mode; both failed
if (file_input)
{
    file_input.seekg(0, file_input.end);
    unsigned long long length = file_input.tellg(); // get file size
    file_input.seekg(0, file_input.beg);
    char* buffer = new char[MAX_NUM_PER_FILE + MAX_NUM_PER_LINE];
    int i = 1, j;
    char c, tmp[3];
    while (file_input.tellg() < length)
    {
        file_input.read(buffer, MAX_NUM_PER_FILE);
        j = MAX_NUM_PER_FILE;
        while (file_input.get(c) && c != '\n')
            buffer[j++] = c; // get a complete item
        // process with buffer...
        itoa(i++, tmp, 10); // int2char
        string out_name = "out" + string(tmp) + ".txt";
        ofstream file_output(out_name);
        file_output.write(buffer, j);
        file_output.close();
    }
    file_input.close();
    delete[] buffer;
}
My code goes wrong: length ends up bigger than the real file size. I have tried file_input.good() and !file_input.eof(); they didn't work. getline(file_input, s) works, but it is much slower than read. I want to use read, but I don't know how to check whether the ifstream has reached end-of-file.
I do my work on Windows 7 with VS2010.
I have searched, but there isn't an answer for this; the link How to open a file using ifstream and keep reading it until the end can't answer my question.
Update: problem solved
Hi everyone, I have figured out that it was my fault. Both while(file_input.tellg() < length) and while(file_input.peek() != EOF) work fine! while(file_input.peek() != EOF) is recommended.
The extra items written past the end of the data were leftover items in the buffer from the previous iteration.
Here is the correct code:
ifstream file_input(path_input);
if (file_input)
{
    //file_input.seekg(0, file_input.end);
    //unsigned long long length = file_input.tellg(); // get file size
    //file_input.seekg(0, file_input.beg);
    char* buffer = new char[MAX_NUM_PER_FILE + MAX_NUM_PER_LINE];
    int i = 1, j;
    char c, tmp[3];
    while (file_input.peek() != EOF)
    {
        memset(buffer, 0, sizeof(char) * (MAX_NUM_PER_FILE + MAX_NUM_PER_LINE)); // clear first!
        file_input.read(buffer, MAX_NUM_PER_FILE);
        j = MAX_NUM_PER_FILE;
        while (file_input.get(c) && c != '\n')
            buffer[j++] = c;
        itoa(i++, tmp, 10); // int2char
        string out_name = "out" + string(tmp) + ".txt";
        ofstream file_output(out_name);
        file_output.write(buffer, strlen(buffer)); // use the correct buffer size instead of j
        file_output.close();
    }
    file_input.close();
    delete[] buffer;
}
while (file_input.peek() != EOF)
{
    // code
}
Basically, peek() will read the next character without extracting it, so you can simply compare the result to EOF.
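A minimal sketch of that pattern, using a hypothetical input.txt and a buffer standing in for MAX_NUM_PER_FILE; gcount() sizes the final short chunk, and traits_type::eof() is the macro-free spelling of EOF:
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream file_input("input.txt");  // hypothetical file name
    char buffer[4096];                      // stands in for MAX_NUM_PER_FILE
    const std::streamsize CHUNK = sizeof buffer;
    while (file_input.peek() != std::ifstream::traits_type::eof())
    {
        file_input.read(buffer, CHUNK);
        std::streamsize got = file_input.gcount(); // may be < CHUNK on the last read
        // process buffer[0 .. got-1] here
        std::cout << "read " << got << " bytes\n";
    }
}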

C++ reading leftover data at the end of a file

I am taking input from a file in binary mode using C++; I read the data into unsigned ints, process them, and write them to another file. The problem is that sometimes, at the end of the file, there might be a little data left that isn't large enough to fit into an unsigned int. In this case, I want to pad the end of the data with 0s, recording how much padding was needed, until it is large enough to fill an unsigned int.
Here is how I am reading from the file:
std::ifstream fin;
fin.open("filename.whatever", std::ios::in | std::ios::binary);
if (fin) {
    unsigned int m;
    while (fin >> m) {
        // processing the data and writing to another file here
    }
    // TODO: read the remaining data and pad it here prior to processing
} else {
    // output to error stream and exit with failure condition
}
The TODO in the code is where I'm having trouble. After the file input finishes and the loop exits, I need to read in the remaining data at the end of the file that was too small to fill an unsigned int. I then need to pad the end of that data with 0s in binary, recording enough about how much padding was done to be able to un-pad the data in the future.
How is this done, and is this already done automatically by C++?
NOTE: I cannot read the data into anything but an unsigned int, as I am processing the data as if it were an unsigned integer for encryption purposes.
EDIT: It was suggested that I simply read what remains into an array of chars. Am I correct in assuming that this will read in ALL remaining data from the file? It is important to note that I want this to work on any file that C++ can open for input and/or output in binary mode. Thanks for pointing out that I failed to include the detail of opening the file in binary mode.
EDIT: The files my code operates on are not created by anything I have written; they could be audio, video, or text. My goal is to make my code format-agnostic, so I can make no assumptions about the amount of data within a file.
EDIT: ok, so based on constructive comments, this is something of the approach I am seeing, documented in comments where the operations would take place:
std::ifstream fin;
fin.open("filename.whatever", std::ios::in | std::ios::binary);
if (fin) {
    unsigned int m;
    while (fin >> m) {
        // processing the data and writing to another file here
    }
    // 1: declare char array
    // 2: fill it with what remains in the file
    // 3: pad the rest of it until it's the same size as an unsigned int
} else {
    // output to error stream and exit with failure condition
}
The question, at this point, is this: is this truly format-agnostic? In other words, is file size measured in discrete bytes, or can a file be, say, 11.25 bytes in size? I should know this, I know, but I've got to ask anyway.
Are bytes used to measure file size as discrete units, or can a file be, say, 11.25 bytes in size?
No data type can be smaller than a byte, and your file is represented as an array of char, meaning each character is one byte. Thus a file's size is always a whole number of bytes.
Here are steps one, two, and three as per your post:
while (fin >> m)
{
    // ...
}

std::ostringstream buffer;
buffer << fin.rdbuf(); // 2: fill it with what remains in the file
std::string contents = buffer.str();

// 3: pad with 0 bytes up to the size of an unsigned int,
// recording how much padding was added
std::size_t padding = 0;
if (contents.size() < sizeof(unsigned int))
{
    padding = sizeof(unsigned int) - contents.size();
    contents.resize(sizeof(unsigned int), '\0');
}
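A related sketch, offered as an alternative rather than the answer above: since operator>> performs formatted text extraction even on a binary-mode stream, raw read() calls match the stated goal more directly, and gcount() both detects the short final chunk and gives the pad count:
#include <cstring>
#include <fstream>

int main()
{
    std::ifstream fin("filename.whatever", std::ios::in | std::ios::binary);
    if (!fin)
        return 1; // error path

    unsigned int m;
    while (fin.read(reinterpret_cast<char*>(&m), sizeof m))
    {
        // a complete unsigned int is in m here; process and write it out
    }

    // read() stores whatever it managed to extract before failing,
    // so the first gcount() bytes of m are valid; zero the rest
    std::streamsize leftover = fin.gcount();
    if (leftover > 0)
    {
        std::size_t padding = sizeof m - static_cast<std::size_t>(leftover);
        std::memset(reinterpret_cast<char*>(&m) + leftover, 0, padding);
        // process the padded m, and store `padding` so it can be undone later
    }
}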

Why would a stat() call return an incorrect value of zero (0) for file size?

I'm running a Windows C++ multithreaded app in which one instance/thread of the server class appends to a file. Other threads run client instances, which only load the file at each client's startup.
When I get to within 2k bytes of the end of loading the file, I check to see if the file has changed in size, so I know to update how many total bytes to read. Once in a while, the file size I get back is erroneously determined to be zero (0). I am using the stat call below for this. When zero is returned, as a sanity check I then call getFileSizeWithTellg() to see what it returns, and it returns the expected non-zero value, one that is the same as or greater than the initial value.
I realize that the cast to unsigned int could be problematic, but the files are never larger than 5 MB.
What could be causing the stat() call to return a zero value when the tellg call doesn't?
Thanks for any insight into this.
//
// snippets from methods in different classes
//

// from client class
ifstream fileSeqIn;
fileSeqIn.open(fName.c_str(), ios::in | ios::binary | ios::ate);
// to get initial size
size = fileSeqIn.tellg();
fileSeqIn.seekg(0, ios::beg);

// later, to determine if the file has grown
struct stat filestatus;
unsigned int size;
if (stat(fName.c_str(), &filestatus) == 0) {
    size = (unsigned int)filestatus.st_size;
}

//
unsigned int getFileSizeWithTellg(char* fname)
{
    // get length of file
    ifstream is;
    is.open(fname, ios::binary);
    is.seekg(0, ios::end);
    unsigned int length = is.tellg();
    is.close();
    return length;
}

//-----------------------------------------------------------------------------
// from server class
ofstream fileSeqOut;
fileSeqOut.open(fName.c_str(), ios::app | ios::out | ios::ate | ios::binary);
One significant difference: stat returns the system's view of the size of the file; tellg returns a value dependent on the internal state of the stream. File-based streams are buffered, and the data may not be passed on to the system until you flush or close the file. Do you get the same difference if you precede the call to stat with a flush of the stream?
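If buffering is the culprit, a minimal sketch of that experiment, reusing the names from the snippets above (coordinating the flush with the client threads is left aside here):
// server side: push buffered data out to the OS before anyone measures the file
fileSeqOut.flush();

// client side: stat should now agree with tellg
struct stat filestatus;
if (stat(fName.c_str(), &filestatus) == 0)
    size = (unsigned int)filestatus.st_size;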
If what Larry Osterman said is true, using GetFileInformationByHandle might solve the problem.

Problem writing binary data with ofstream

Hey all, I'm writing an application which records microphone input to a WAV file. Previously, I had written this to fill a buffer of a specified size, and that worked fine. Now I'd like to be able to record for an arbitrary length. Here's what I'm trying to do:
Set up 32 small audio buffers (circular buffering)
Start a WAV file with ofstream -- write the header with PCM length set to 0
Add a buffer to input
When a buffer completes, append its data to the WAV file and update the header; recycle the buffer
When the user hits "stop", write the remaining buffers to file and close
It kind of works, in that the files are coming out to the correct length (header and file size are correct). However, the data is wonky as hell. I can make out a semblance of what I said, and the timing is correct, but there's this repetitive block of distortion. It basically sounds like only half the data is getting into the file.
Here are some variables the code uses (declared in the header):
// File writing
ofstream mFile;
WAVFILEHEADER mFileHeader;
int16_t * mPcmBuffer;
int32_t mPcmBufferPosition;
int32_t mPcmBufferSize;
uint32_t mPcmTotalSize;
bool mRecording;
Here is the code that prepares the file:
// Start recording audio
void CaptureApp::startRecording()
{
    // Set flag
    mRecording = true;

    // Set size values
    mPcmBufferPosition = 0;
    mPcmTotalSize = 0;

    // Open file for streaming (escape the backslash in the path)
    mFile.open("c:\\my.wav", ios::binary | ios::trunc);
}
Here's the code that receives the buffer. This assumes the incoming data is correct -- it should be, but I haven't ruled out that it isn't.
// Append file buffer to output WAV
void CaptureApp::writeData()
{
    // Update header with new PCM length
    mPcmBufferPosition *= sizeof(int16_t);
    mPcmTotalSize += mPcmBufferPosition;
    mFileHeader.bytes = mPcmTotalSize + sizeof(WAVFILEHEADER);
    mFileHeader.pcmbytes = mPcmTotalSize;
    mFile.seekp(0);
    mFile.write(reinterpret_cast<char *>(&mFileHeader), sizeof(mFileHeader));

    // Append PCM data
    if (mPcmBufferPosition > 0)
    {
        mFile.seekp(mPcmTotalSize - mPcmBufferPosition + sizeof(WAVFILEHEADER));
        mFile.write(reinterpret_cast<char *>(&mPcmBuffer), mPcmBufferPosition);
    }

    // Reset file buffer position
    mPcmBufferPosition = 0;
}
And this is the code that closes the file:
// Stop recording
void CaptureApp::stopRecording()
{
    // Save remaining data
    if (mPcmBufferSize > 0)
        writeData();

    // Close file
    if (mFile.is_open())
    {
        mFile.flush();
        mFile.close();
    }

    // Turn off recording flag
    mRecording = false;
}
If there's anything here that looks like it would result in bad data getting appended to the file, please let me know. If not, I'll triple-check the input data (in the callback). That data should be good, because it works if I copy it to a larger buffer (e.g., two minutes) and then save that out.
I am just wondering how
void CaptureApp::writeData()
{
    mPcmBufferPosition *= sizeof(int16_t); // mPcmBufferPosition = 0, so 0*2 = 0
    // (...)
    mPcmBufferPosition = 0;
}
works (btw, sizeof(int16_t) is always 2). Are you setting mPcmBufferPosition somewhere else?
void CaptureApp::writeData()
{
    // Update header with new PCM length
    long pos = mFile.tellp();
    mPcmBufferBytesToWrite *= 2;
    mPcmTotalSize += mPcmBufferBytesToWrite;
    mFileHeader.bytes = mPcmTotalSize + sizeof(WAVFILEHEADER);
    mFileHeader.pcmbytes = mPcmTotalSize;
    mFile.seekp(0);
    mFile.write(reinterpret_cast<char *>(&mFileHeader), sizeof(mFileHeader));
    mFile.seekp(pos);

    // Append PCM data
    if (mPcmBufferBytesToWrite > 0)
        mFile.write(reinterpret_cast<char *>(mPcmBuffer), mPcmBufferBytesToWrite);
}
Also, mPcmBuffer is already a pointer, so I don't know why you take its address with & in write.
The most likely reason is that you're writing from the address of the pointer to your buffer, not from the buffer itself. Ditch the "&" in the final mFile.write. (It may have some good data in it if your buffer is allocated nearby and you happen to grab a chunk of it, but it's just luck that your write happens to overlap your buffer.)
In general, if you find yourself in this sort of situation, think about how you can test this code in isolation from the recording code: set up a buffer that has the values 0..255 in it, then set your "chunk size" to 16 and see if it writes out a continuous sequence of 0..255 across 16 separate write operations. That will quickly verify whether your buffering code is working.
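A minimal sketch of that isolation test, with a hypothetical output file name:
#include <cstdint>
#include <fstream>

int main()
{
    std::uint8_t buffer[256];
    for (int i = 0; i < 256; ++i)
        buffer[i] = static_cast<std::uint8_t>(i);

    std::ofstream out("chunktest.bin", std::ios::binary | std::ios::trunc);
    const int chunk = 16;
    for (int i = 0; i < 256; i += chunk)
        out.write(reinterpret_cast<char*>(buffer + i), chunk); // buffer + i, not &buffer

    // Inspect chunktest.bin in a hex editor: it should hold 0x00..0xFF in order
    // across the 16 separate writes; any repetition means the chunking is broken.
}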
I won't debug your code, but I'll try to give you a checklist of things you can check to determine where the error is:
Always have a reference recorder or player handy. It can be something as simple as Windows Sound Recorder, or Audacity, or Adobe Audition. Have a recorder/player that you are CERTAIN will record and play files correctly.
Record the file with your app and try to play it with the reference player. Working?
Try to record the file with the reference recorder, and play it with your player. Working?
When you write SOUND data to the WAV file in your recorder, write it to one extra file as well. Open that file in RAW mode with the player (Windows Sound Recorder won't be enough here). Does it play correctly?
When playing the file in your player and writing to the sound card, write the output to a RAW file, to see if you are playing the data correctly at all or have sound card issues. Does it play correctly?
Try all this, and you'll have a much better idea of where something went wrong.
Shoot, sorry -- had a late night of work and am a bit off today. I forgot to show y'all the actual callback. This is it:
// Called when buffer is full
void CaptureApp::onData(float * data, int32_t & size)
{
    // Check recording flag and buffer size
    if (mRecording && size <= BUFFER_LENGTH)
    {
        // Save the PCM data to file and reset the array if we
        // don't have room for this buffer
        if (mPcmBufferPosition + size >= mPcmBufferSize)
            writeData();

        // Copy PCM data to file buffer
        copy(mAudioInput.getData(), mAudioInput.getData() + size, mPcmBuffer + mPcmBufferPosition);

        // Update PCM position
        mPcmBufferPosition += size;
    }
}
Will try y'all's advice and report back.

Using fstream tellg to read a portion of the stream till the end

I have this simple code that needs to get a chunk of a large log file that is being written to. At some point it stores the current location returned from the istream::tellg() method. Later on, the code has to read from the stream into a buffer spanning from that start position to the end. The code is approximately like this:
streampos start = my_stream.tellg();
... // do some stuff with logging
streampos end = my_stream.tellg();
const streamsize size_to_read = (end - start);
char *buf = new char[size_to_read];
lock (m_logReadLock);
{
    my_stream.flush();
    my_stream.seekg(start);
    my_stream.read(buf, size_to_read);
    size_read = my_stream.gcount();
}
unlock (m_logReadLock);
The effect that I'm observing is that size_read is smaller than size_to_read and the stream has its eof flag set. Shouldn't the end position specify exactly where the stream ends, and shouldn't the read() method return exactly that amount of data?
It's fine, I can work around it by checking the eof flag.
However, can anyone provide an explanation for this effect?
Thanks.
You seem to be calling gcount on stream_loc instead of my_stream.
http://groups.google.com/group/comp.lang.c++/browse_thread/thread/709cde3942e64d6c#
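For reference, a minimal sketch folding the short-read handling into the windowed read, assuming my_stream is a std::fstream opened on the log file:
#include <fstream>
#include <vector>

std::vector<char> read_window(std::fstream& my_stream, std::streampos start, std::streampos end)
{
    const std::streamsize size_to_read = end - start;
    std::vector<char> buf(static_cast<std::size_t>(size_to_read));

    my_stream.flush();   // push buffered writes out to the file first
    my_stream.clear();   // reset any eof/fail state left from earlier reads
    my_stream.seekg(start);
    my_stream.read(buf.data(), size_to_read);
    buf.resize(static_cast<std::size_t>(my_stream.gcount())); // keep what actually arrived
    my_stream.clear();   // a short read sets eofbit; clear it for the next caller

    return buf;
}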