Writing MAT files: Access violation writing location after 508 successful calls - c++

I'm running a 64-bit C++ program in VS2012 that processes images and writes the results to a MAT file. For whatever reason, after 508 working iterations, I get:
"Unhandled exception at ____ (libmat.dll) in Program.exe:____. Access violation writing location ____." (Underscores represent address locations)
However, if I restart the program on image number 509 (changing nothing else; just a restart), it works just fine for the next 508 images and then hands me the same error again.
A comment on an earlier, less-detailed post said it may be some memory issue. Perhaps I'm not handling garbage collection properly? I can't figure it out though.
For the record, all of the data being saved to files ends up in a 127x47 (row x col) double matrix. That means each of the 508 successful files contained 5969 doubles (plus whatever metadata goes into a MAT file). Perhaps some memory limit gets reached because I don't clear it properly?
The code in question is below:
void writeMat (void * data, int rows, int cols, std::string fname)
{
    // Copies data to MATLAB format matrix
    mxArray * mat;
    mat = mxCreateDoubleMatrix(rows, cols, mxREAL);
    memcpy((void*)mxGetPr(mat), data, rows * cols * sizeof(double));

    // Creates output file
    MATFile * output;
    std::string matFilename = fname + ".mat"; // Output filename
    std::string varName = "tmp";              // Storage variable in MAT file
    output = matOpen(matFilename.c_str(), "w"); // Opens MAT file for writing
    if (output == NULL) {
        printf("Error creating file");
    }

    // Adds data variable to MAT file
    int status = matPutVariable(output, varName.c_str(), mat);
    if (status != 0)
    {
        printf("Error writing mat file");
    }

    mxDestroyArray(mat); // Free up memory
}
Any help would be appreciated. Thanks in advance!

It appears that you are running out of file handles, because you keep calling matOpen but then don't subsequently call matClose. Most systems impose an upper limit on the number of concurrently open files - it would appear that on your system this limit is 512 - there are already a few files open, so when you get to around the 508th iteration you run out of file handles.
Having said that, you should not see a crash - you do have error checking on matOpen - but notice that after printing the error message the function carries on anyway, so the NULL handle gets passed to matPutVariable, which is the likely site of the access violation.
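For reference, a minimal sketch of the function with the handle released via matClose (and an early return when matOpen fails, so the NULL handle is never passed on):

void writeMat (void * data, int rows, int cols, std::string fname)
{
    // Copies data to MATLAB format matrix
    mxArray * mat = mxCreateDoubleMatrix(rows, cols, mxREAL);
    memcpy((void*)mxGetPr(mat), data, rows * cols * sizeof(double));

    // Creates output file
    std::string matFilename = fname + ".mat";
    MATFile * output = matOpen(matFilename.c_str(), "w");
    if (output == NULL) {
        printf("Error creating file");
        mxDestroyArray(mat);
        return; // bail out rather than handing a NULL handle to matPutVariable
    }

    // Adds data variable to MAT file
    if (matPutVariable(output, "tmp", mat) != 0) {
        printf("Error writing mat file");
    }

    matClose(output);    // releases the file handle: the missing call
    mxDestroyArray(mat); // Free up memory
}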

Related

Unable to correctly read bmp file

I am trying to read certain information from a bmp file: basically the file type, i.e. the "B M" bytes at the start of my bmp file. I start by opening the file, which happens correctly. The first fread, however, is failing. Why is this happening?
#include <stdio.h>
#include <string.h>
#define SIZE 1

int main(void)
{
    FILE* fd = NULL;
    char buff[2];
    unsigned int i = 0, size = 0, offset = 0;
    memset(buff, 0, sizeof(buff));

    fd = fopen("RIT.bmp", "r+");
    if (NULL == fd)
    {
        printf("\n fopen() Error!!!\n");
        return 1;
    }
    printf("\n File opened successfully\n");

    if (SIZE*2 != fread(buff, SIZE, 2, fd)) // to read the file type (i.e. B M)
    {
        printf("\n first fread() failed\n");
        return 1;
    }
    return 0;
}
Output
File opened successfully
first fread() failed
Press any key to continue . . .
Update
Yes, the file is empty due to an earlier error. That is why this error occurs.
Probably your file doesn't contain enough data (2 bytes). It gives the correct output when I test with a file larger than 2 bytes; the same code fails for an empty file.
From the man page: "Upon successful completion, fread() shall return the number of elements successfully read [...]."
That would be 2, not SIZE*2.
Although, on second thought, SIZE is 1, so while the program is error-prone, it is not actually wrong. In that case, the second part of the sentence applies: "... which is less than nitems only if a read error or end-of-file is encountered." And as others said, if the file is long enough but the read still fails, check the global errno. Maybe it's time for a new SSD.
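To tell the two cases apart in code, feof() and ferror() distinguish end-of-file from a genuine read error. A minimal sketch against the same RIT.bmp (note also "rb": text mode can disturb binary reads on Windows):

#include <stdio.h>

int main(void)
{
    FILE* fd = fopen("RIT.bmp", "rb"); // "rb": open in binary mode
    if (NULL == fd)
    {
        perror("fopen");
        return 1;
    }

    char buff[2] = {0};
    size_t n = fread(buff, 1, 2, fd);
    if (n != 2)
    {
        if (feof(fd))
            printf("end-of-file after %u byte(s): file too short?\n", (unsigned)n);
        else if (ferror(fd))
            perror("fread");
        fclose(fd);
        return 1;
    }

    printf("file type: %c%c\n", buff[0], buff[1]);
    fclose(fd);
    return 0;
}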

Reading in raw encoded nrrd data file into double

Does anyone know how to read in a file with raw encoding? So stumped.... I am trying to read in floats or doubles (I think). I have been stuck on this for a few weeks. Thank you!
File that I am trying to read from:
http://www.sci.utah.edu/~gk/DTI-data/gk2/gk2-rcc-mask.raw
Description of raw encoding:
http://teem.sourceforge.net/nrrd/format.html#encoding
- "raw" - The data appears on disk exactly the same as in memory, in terms of byte values and byte ordering. Produced by write() and fwrite(), suitable for read() or fread().
Info of file:
http://www.sci.utah.edu/~gk/DTI-data/gk2/gk2-rcc-mask.nhdr - I think the only things that matter here are the big endian (still trying to understand what that means from google) and raw encoding.
My current approach, uncertain if it's correct:
// Function ripped off from example of c++ ifstream::read reference page
void scantensor(string filename)
{
    ifstream tdata(filename, ifstream::binary); // not sure if I should put ifstream::binary here
    // other things I tried:
    // ifstream tdata(filename);
    // ifstream tdata(filename, ios::in);
    if (tdata) {
        tdata.seekg(0, tdata.end);
        int length = tdata.tellg();
        tdata.seekg(0, tdata.beg);

        char* buffer = new char[length];
        tdata.read(buffer, length);
        tdata.close();

        double* d;
        d = (double*) buffer;
    } else cerr << "failed" << endl;
}
/* P.S. I attempted to print the first 100 elements of the array,
   then 100 elements at some arbitrary indices (i.e. 9,900 - 10,000).
   I kept increasing the number of 0's until I ran out of bounds at
   100,000,000 (I don't think that's how it works lol but I was just
   playing around to see what happens).

   Here's the part that makes me suspicious: ifstream has the different
   constructors I tried above.
   - The first 100 values are always the same.
   - If I use ifstream::binary, then I get some values for the 100
     arbitrary printings.
   - If I use the other two options, then I get -6.27744e+066 for all
     100 of them.

   So for now I am going to assume that ifstream::binary is the correct
   one. The thing is, I am not sure if the file I provided is how binary
   files actually look. I am also unsure if these are the actual numbers
   I am supposed to read in, or just casting gone wrong. I do realize
   that my casting from char* to double* can be unsafe; I got that from
   one of the threads.
*/
I really appreciate it!
Edit 1: Right now the data being read in using the above method is apparently "incorrect" since in paraview the values are:
Dxx,Dxy,Dxz,Dyy,Dyz,Dzz
[0, 1], [-15.4006, 13.2248], [-5.32436, 5.39517], [-5.32915, 5.96026], [-17.87, 19.0954], [-6.02961, 5.24771], [-13.9861, 14.0524]
It's a 3 x 3 symmetric matrix, so 7 distinct values, 7 ranges of values.
The floats that I am currently parsing from the file are very large (e.g. -4.68855e-229, -1.32351e+120).
Perhaps somebody knows how to extract the floats from Paraview?
Since you want to work with doubles, I recommend reading the data from the file as a buffer of doubles:
const long machineMemory = 0x40000000; // 1 GB

FILE* file = fopen("c:\\data.bin", "rb");
if (file)
{
    int size = machineMemory / sizeof(double);
    if (size > 0)
    {
        double* data = new double[size];
        int read(0);
        while (read = fread(data, sizeof(double), size, file))
        {
            // Process data here (read = number of doubles)
        }
        delete [] data;
    }
    fclose(file);
}
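One caveat for this particular file: the .nhdr says the data is big-endian, while x86 machines are little-endian, so each value read this way still has its bytes reversed. A hedged sketch of a swap helper (assuming IEEE-754 doubles; the function name is mine):

#include <string.h>
#include <stdint.h>

// Reverse the byte order of one double (big-endian file, little-endian CPU).
double swapDouble(double value)
{
    uint64_t bits;
    memcpy(&bits, &value, sizeof bits); // type-pun safely via memcpy
    uint64_t swapped = 0;
    for (int i = 0; i < 8; ++i)
        swapped = (swapped << 8) | ((bits >> (8 * i)) & 0xFF);
    memcpy(&value, &swapped, sizeof value);
    return value;
}

// Usage: after fread() fills `data`, pass each element through the helper
// before interpreting it, e.g. data[i] = swapDouble(data[i]);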

Reading file with fread() in reverse order causes memory leak?

I have a program that basically does this:
Opens some binary file
Reads the file backwards (by backwards, I mean it starts near EOF, and ends reading at beginning of file, i.e. reads the file "right-to-left"), using 4MB chunks
Closes the file
My question is: why does memory consumption look like the chart below (RAM usage climbing by about 1 GB over the run), even though there are no obvious memory leaks in my attached code?
Here's the source of program that was run to obtain above image:
#include <stdio.h>
#include <string.h>

int main(void)
{
    // allocate stuff
    const int bufferSize = 4*1024*1024;
    FILE *fileHandle = fopen("./input.txt", "rb");
    if (!fileHandle)
    {
        fprintf(stderr, "No file for you\n");
        return 1;
    }
    unsigned char *buffer = new unsigned char[bufferSize];
    if (!buffer)
    {
        fprintf(stderr, "No buffer for you\n");
        return 1;
    }

    // get file size. file can be BIG, hence the fseeko() and ftello()
    // instead of fseek() and ftell().
    fseeko(fileHandle, 0, SEEK_END);
    off_t totalSize = ftello(fileHandle);
    fseeko(fileHandle, 0, SEEK_SET);

    // read the file... in reverse order. This is important.
    for (off_t pos = totalSize - bufferSize, j = 0;
         pos >= 0;
         pos -= bufferSize, j++)
    {
        if (j % 10 == 0)
        {
            fprintf(stderr,
                    "reading like crazy: %lld / %lld\n",
                    pos, totalSize);
        }

        /*
         * below is the heart of the problem. see notes below
         */

        // seek to desired position
        fseeko(fileHandle, pos, SEEK_SET);
        // read the chunk
        fread(buffer, sizeof(unsigned char), bufferSize, fileHandle);
    }

    fclose(fileHandle);
    delete [] buffer;
}
I also have the following observations:
Even though RAM usage jumps by 1 GB, the whole program uses only 5 MB throughout the whole execution.
Commenting the call to fread() out makes the memory leak go away. This is weird, since I don't allocate anything anywhere near it that could trigger a memory leak...
Also, reading the file normally instead of backwards (i.e. commenting the call to fseeko() out) makes the memory leak go away as well. This is the ultra-weird part.
Further information...
The following doesn't help:
Checking results of fread() - yields nothing out of ordinary.
Switching to normal, 32-bit fseek and ftell.
Doing stuff like setbuf(fileHandle, NULL).
Doing stuff like setvbuf(fileHandle, NULL, _IONBF, *any integer*).
Compiled with g++ 4.5.3 on Windows 7 via cygwin and mingw; without any optimizations, just g++ test.cpp -o test. Both show this behaviour.
The file used in tests was 4GB long, full of zeros.
The weird pause in the middle of the chart could be explained with some kind of temporary I/O hangup, unrelated to this question.
Finally, if I wrap the reading in an infinite loop... the memory usage stops increasing after the first iteration.
I think it has to do with some kind of internal cache building up until it holds the whole file. How does it really work behind the scenes? How can I prevent that in a portable way??
I think this is more an OS issue (or even an OS resource-use reporting issue) than an issue with your program. Of course it only uses 5 MB of memory: 1 MB for itself (libs, stack etc.) and 4 MB for the buffer. Whenever you do an fread(), the OS seems to "bind" part of the file to your process, and it doesn't release that at the same speed. As memory use on your machine is low, this is not a problem: the OS just keeps the already-read data "hanging around" longer than necessary, probably assuming that your application might read it again soon, so it doesn't have to do that binding again.
If memory pressure were higher, the OS would very likely unbind the memory faster, and that jump in your memory-usage history would be smaller.
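There is no fully portable way to control this, but on POSIX systems (so not the mingw build) you can at least hint the kernel to drop the pages after each chunk with posix_fadvise(). A sketch; the helper name is mine:

#include <stdio.h>
#include <fcntl.h> // posix_fadvise(), POSIX_FADV_DONTNEED

// After reading a chunk at offset `pos`, tell the kernel we won't need
// those pages again; it is then free to evict them from the page cache.
void dropChunkFromCache(FILE *fileHandle, off_t pos, size_t chunkSize)
{
    posix_fadvise(fileno(fileHandle), pos, (off_t)chunkSize,
                  POSIX_FADV_DONTNEED);
}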
I had the exact same problem, although in Java, but that doesn't matter in this context. I solved it by reading much bigger chunks at a time. I was also reading 4 MB chunks, but when I increased the size to 100-200 MB the problem went away. Perhaps it'll do that for you as well. I'm on Windows 7.

Heap Corruption caused by Invalid Casting?

I have the code:
unsigned char *myArray = new unsigned char[40000];
char pixelInfo[3];
int c = 0;
while (!reader.eof()) // reader is a ifstream open to a BMP file
{
    reader.read(pixelInfo, 3);
    myArray[c] = (unsigned char)pixelInfo[0];
    myArray[c + 1] = (unsigned char)pixelInfo[1];
    myArray[c + 2] = (unsigned char)pixelInfo[2];
    c += 3;
}
reader.close();

delete[] myArray; // I get HEAP CORRUPTION here
After some tests, I found it to be caused by the cast in the while loop, if I use a signed char myArray I don't get the error, but I must use unsigned char for the rest of my code.
Casting pixelInfo to unsigned char also gives the same error.
Is there any solution to this?
This is what you should do:
reader.read((char*)myArray, myArrayLength); /* note, that isn't (sizeof myArray) */
if (!reader) { /* report error */ }
If there's processing going on inside the loop, then
int c = 0;
while (c + 2 < myArraySize) // reader is a ifstream open to a BMP file
{
    reader.read(pixelInfo, 3);
    myArray[c] = (unsigned char)pixelInfo[0];
    myArray[c + 1] = (unsigned char)pixelInfo[1];
    myArray[c + 2] = (unsigned char)pixelInfo[2];
    c += 3;
}
Trying to read after you've hit the end is not a problem -- you'll get junk in the rest of the array, but you can deal with that at the end.
Assuming your array is big enough to hold the whole file invites buffer corruption. Buffer overrun attacks involving image files with carefully crafted incorrect metadata are quite well-known.
in Mozilla
in Sun Java
in Internet Explorer
in Windows Media Player
again in Mozilla
in MSN Messenger
in Windows XP
Do not rely on the entire file content fitting in the calculated buffer size.
reader.eof() will only tell you if the previous read hit the end of the file, which causes your final iteration to write past the end of the array. What you want instead is to check if the current read hits the end of file. Change your while loop to:
while (reader.read(pixelInfo, 3)) // reader is a ifstream open to a BMP file
{
    // ...
}
Note that you are reading 3 bytes at a time. If the total number of bytes is not a multiple of 3, then only part of the pixelInfo array will actually be filled with correct data, which may cause an error in your program. You could try the following piece of untested code.
while (!reader.eof()) // reader is a ifstream open to a BMP file
{
    reader.read(pixelInfo, 3);
    for (int i = 0; i < reader.gcount(); i++) {
        myArray[c+i] = pixelInfo[i];
    }
    c += 3;
}
Your code does follow the documentation on cplusplus.com very well: the eof bit is set after an incomplete read, so this loop will terminate after your last read. However, as I mentioned before, the likely cause of your issue is that you are assigning junk data to the heap, since pixelInfo[x] is not necessarily set if 3 bytes were not read.

Problem writing binary data with ofstream

Hey all, I'm writing an application which records microphone input to a WAV file. Previously, I had written this to fill a buffer of a specified size and that worked fine. Now, I'd like to be able to record to an arbitrary length. Here's what I'm trying to do:
Set up 32 small audio buffers (circular buffering)
Start a WAV file with ofstream -- write the header with PCM length set to 0
Add a buffer to input
When a buffer completes, append its data to the WAV file and update the header; recycle the buffer
When the user hits "stop", write the remaining buffers to file and close
It kind of works, in that the files are coming out to the correct length (header and file size are correct). However, the data is wonky as hell. I can make out a semblance of what I said -- and the timing is correct -- but there's this repetitive block of distortion. It basically sounds like only half the data is getting into the file.
Here are some variables the code uses (in header)
// File writing
ofstream mFile;
WAVFILEHEADER mFileHeader;
int16_t * mPcmBuffer;
int32_t mPcmBufferPosition;
int32_t mPcmBufferSize;
uint32_t mPcmTotalSize;
bool mRecording;
Here is the code that prepares the file:
// Start recording audio
void CaptureApp::startRecording()
{
    // Set flag
    mRecording = true;

    // Set size values
    mPcmBufferPosition = 0;
    mPcmTotalSize = 0;

    // Open file for streaming
    mFile.open("c:\my.wav", ios::binary|ios::trunc);
}
Here's the code that receives the buffer. This assumes the incoming data is correct -- it should be, but I haven't ruled out that it isn't.
// Append file buffer to output WAV
void CaptureApp::writeData()
{
    // Update header with new PCM length
    mPcmBufferPosition *= sizeof(int16_t);
    mPcmTotalSize += mPcmBufferPosition;
    mFileHeader.bytes = mPcmTotalSize + sizeof(WAVFILEHEADER);
    mFileHeader.pcmbytes = mPcmTotalSize;
    mFile.seekp(0);
    mFile.write(reinterpret_cast<char *>(&mFileHeader), sizeof(mFileHeader));

    // Append PCM data
    if (mPcmBufferPosition > 0)
    {
        mFile.seekp(mPcmTotalSize - mPcmBufferPosition + sizeof(WAVFILEHEADER));
        mFile.write(reinterpret_cast<char *>(&mPcmBuffer), mPcmBufferPosition);
    }

    // Reset file buffer position
    mPcmBufferPosition = 0;
}
And this is the code that closes the file:
// Stop recording
void CaptureApp::stopRecording()
{
    // Save remaining data
    if (mPcmBufferSize > 0)
        writeData();

    // Close file
    if (mFile.is_open())
    {
        mFile.flush();
        mFile.close();
    }

    // Turn off recording flag
    mRecording = false;
}
If there's anything here that looks like it would result in bad data getting appended to the file, please let me know. If not, I'll triple check the input data (in the callback). This data should be good, because it works if I copy it to a larger buffer (eg, two minutes) and then save that out.
I am just wondering, how
void CaptureApp::writeData()
{
    mPcmBufferPosition *= sizeof(int16_t); // mPcmBufferPosition = 0, so 0*2 = 0;
    // (...)
    mPcmBufferPosition = 0;
}
works (btw. sizeof int16_t is always 2). Are you setting mPcmBufferPosition somewhere else?
void CaptureApp::writeData()
{
    // Update header with new PCM length
    long pos = mFile.tellp();
    mPcmBufferBytesToWrite *= 2;
    mPcmTotalSize += mPcmBufferBytesToWrite;
    mFileHeader.bytes = mPcmTotalSize + sizeof(WAVFILEHEADER);
    mFileHeader.pcmbytes = mPcmTotalSize;
    mFile.seekp(0);
    mFile.write(reinterpret_cast<char *>(&mFileHeader), sizeof(mFileHeader));
    mFile.seekp(pos);

    // Append PCM data
    if (mPcmBufferBytesToWrite > 0)
        mFile.write(reinterpret_cast<char *>(mPcmBuffer), mPcmBufferBytesToWrite);
}
Also, mPcmBuffer is a pointer, so I don't know why you use & in write.
The most likely reason is that you're writing from the address of the pointer to your buffer, not from the buffer itself. Ditch the "&" in the final mFile.write. (It may have some good data in it if your buffer is allocated nearby and you happen to grab a chunk of it, but it's just luck if your write happens to overlap your buffer.)
In general, if you find yourself in this sort of situation, think about how you can test this code in isolation from the recording code: set up a buffer that has the values 0..255 in it, then set your "chunk size" to 16 and see if it writes out a continuous sequence of 0..255 across 16 separate write operations. That will quickly verify whether your buffering code is working, as in the sketch below.
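For example, a self-contained harness along those lines might look like the following (file name and sizes are arbitrary):

#include <fstream>
#include <cstdio>

int main()
{
    // A known pattern: bytes 0..255
    unsigned char pattern[256];
    for (int i = 0; i < 256; ++i)
        pattern[i] = (unsigned char)i;

    // Write it in 16-byte chunks, the way the recorder appends buffers
    std::ofstream out("pattern.bin", std::ios::binary | std::ios::trunc);
    for (int pos = 0; pos < 256; pos += 16)
        out.write(reinterpret_cast<char *>(pattern + pos), 16);
    out.close();

    // Read it back and verify the sequence is continuous
    std::ifstream in("pattern.bin", std::ios::binary);
    for (int i = 0; i < 256; ++i)
    {
        if (in.get() != i)
        {
            std::printf("mismatch at byte %d\n", i);
            return 1;
        }
    }
    std::printf("all 256 bytes round-tripped in order\n");
    return 0;
}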
I won't debug your code, but I'll give you a checklist of things you can check to determine where the error is:
always have a reference recorder or player handy. It can be something as simple as Windows Sound Recorder, or Audacity, or Adobe Audition. Have a recorder/player that you are CERTAIN will record and play files correctly.
record the file with your app and try to play it with the reference player. Working?
try to record the file with the reference recorder, and play it with your player. Working?
when you write SOUND data to the WAV file in your recorder, write it to one extra file as well. Open that file in RAW mode with the player (Windows Sound Recorder won't be enough here). Does it play correctly?
when playing the file in your player and writing to the soundcard, write the output to a RAW file, to see whether you are playing the data correctly at all or have soundcard issues. Does it play correctly?
Try all this, and you'll have a much better idea of where something went wrong.
Shoot, sorry -- had a late night of work and am a bit off today. I forgot to show y'all the actual callback. This is it:
// Called when buffer is full
void CaptureApp::onData(float * data, int32_t & size)
{
    // Check recording flag and buffer size
    if (mRecording && size <= BUFFER_LENGTH)
    {
        // Save the PCM data to file and reset the array if we
        // don't have room for this buffer
        if (mPcmBufferPosition + size >= mPcmBufferSize)
            writeData();

        // Copy PCM data to file buffer
        copy(mAudioInput.getData(), mAudioInput.getData() + size, mPcmBuffer + mPcmBufferPosition);

        // Update PCM position
        mPcmBufferPosition += size;
    }
}
Will try y'alls advice and report.