Valgrind complains about "Invalid write" in fclose() - c++

I created a stream using fmemopen(). I close it with fclose() and free the buffer after reading. Valgrind reports a problem at the fclose() line:
==9323== Invalid write of size 8
==9323== at 0x52CAE52: _IO_mem_finish (memstream.c:139)
==9323== by 0x52C6A3E: fclose@@GLIBC_2.2.5 (iofclose.c:63)
==9323== by 0x400CB6: main (main.cpp:80)
==9323== Address 0xffefffa80 is just below the stack ptr. To suppress,
use: --workaround-gcc296-bugs=yes
What's happening? Can fclose() not properly close a memory stream? Or is valgrind worrying without reason, so that I can ignore it?

As you haven't posted your code, the following is pure speculation. You're probably writing more output to the stream than you promised to when you opened it. Did you account for the final NUL (if you didn't open with "b")?
Did you read the following in the manual page?
When a stream that has been opened for writing is flushed (fflush(3)) or closed (fclose(3)), a null byte is written at the end of the buffer if there is space. The caller should ensure that an extra byte is available in the buffer (and that size counts that byte) to allow for this.
Attempts to write more than size bytes to the buffer result in an error. (By default, such errors will be visible only when the stdio buffer is flushed. Disabling buffering with setbuf(fp, NULL) may be useful to detect errors at the time of an output operation. Alternatively, the caller can explicitly set buf as the stdio stream buffer, at the same time informing stdio of the buffer's size, using setbuffer(fp, buf, size).)
Following the latter advice should reveal the write that is exceeding the capacity.

It was my fault. I wrote that I use fmemopen() to open the stream, but that code wasn't actually running; I was getting a stream produced earlier with open_memstream(). Although that stream was opened for writing, I could read from it, yet valgrind reported the problem. After I fixed my code, valgrind finds no errors.

Related

avcodec_decode_video2: what do the extra bytes prevent?

In the documentation for avcodec_decode_video2 it gives the following warning:
Warning:
The input buffer must be FF_INPUT_BUFFER_PADDING_SIZE larger than the
actual read bytes because some optimized bitstream
readers read 32 or 64 bits at once and could read over the end. The
end of the input buffer buf should be set to 0 to ensure that no
overreading happens for damaged MPEG streams.
If this were not implemented would this cause segmentation faults when overreading occurs? Or would it potentially cause weird corruption? I'm just curious as I have corruption and I'm not sure if this could potentially be causing my problem.
It wouldn't necessarily cause segmentation faults, but it would be undefined behavior, since these readers would be reading unallocated memory. This could make the program crash immediately, or work for a while, or even appear to work fine: you can never be sure when it comes to undefined behavior.

Is this code safe for opening a buffer in memory as a file? (Code appears to work) [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
How to get file descriptor of buffer in memory?
I'm trying to force a library that uses FILE* to use a buffer in memory instead of a file. I tried fmemopen; however, the library uses fileno, which returns -1 and causes the library to crash. So I read that I need a real file descriptor and that I could use pipe() to make one. I don't fully understand what I did, but I got it to work. However, I don't know if what I did is actually safe. So here is the code:
int fstr[2];
pipe(fstr);
write(fstr[1], src, strlen(src));
close(fstr[1]);
FILE* file = fdopen( fstr[0], "r" );
So is this safe?
Use fmemopen(3) instead. That function already handles reading from and writing to a memory buffer for you.
If you really need a file handle, this is as good as anything I can think of.
One possible issue you may run into is that the buffer size for a pipe is limited. If your string is larger than the buffer size, the write(...) to the pipe will block until some data is read(...) from the other end.
Ideally you would have a worker thread writing to the pipe, but if this is not possible/too hard, you can possibly adjust the pipe buffer size through fcntl(fd, F_SETPIPE_SZ, ...).

Check if there is sufficient disk space to save a file; reserve it

I'm writing a C++ program which will be printing out large (2-4GB) files.
I'd like to make sure that there's sufficient space on the drive to save the files before I start writing them. If possible, I'd like to reserve this space.
This is taking place on a Linux-based system.
Any thoughts on a good way to do this?
Take a look at posix_fallocate():
NAME
posix_fallocate - allocate file space
SYNOPSIS
int posix_fallocate(int fd, off_t offset, off_t len);
DESCRIPTION
The function posix_fallocate() ensures that disk space is allocated for
the file referred to by the descriptor fd for the bytes in the range
starting at offset and continuing for len bytes. After a successful
call to posix_fallocate(), subsequent writes to bytes in the specified
range are guaranteed not to fail because of lack of disk space.
Edit: In the comments you indicate that you use C++ streams to write to the file. As far as I know, there's no standard way to get the file descriptor (fd) from a std::fstream.
With this in mind, I would make disk space pre-allocation a separate step in the process. It would:
open() the file;
use posix_fallocate();
close() the file.
This can be turned into a short function to be called before you even open the fstream.
Use aix's answer (posix_fallocate()), but since you're using C++ streams, you'll need a bit of a hack to get the stream's file descriptor.
For that, use the code here: http://www.ginac.de/~kreckel/fileno/.
If you are using C++17, you can do this with std::filesystem::resize_file, as shown in this post. Note, however, that resize_file only truncates or extends the file; on most filesystems extending it produces a sparse file, so unlike posix_fallocate() it does not guarantee that the blocks are actually reserved.

Will fseek function flush data in the buffer in C++?

We know that calls to functions like fprintf or fwrite do not write data to disk immediately; instead, the data is buffered until a threshold is reached. My question is: if I call fseek, will the buffered data be written to disk before seeking to the new position? Or does the data stay in the buffer and get written to the new position?
I'm not aware of a guarantee that the buffer is flushed; it may not be if you seek to a position close enough. However, there is no way the buffered data will be written to the new position. The buffering is just an optimization, and as such it has to be transparent.
Yes; fseek() ensures that the file will look like it should according to the fwrite() operations you've performed.
The C standard, ISO/IEC 9899:1999 §7.19.9.2 fseek(), says:
The fseek function sets the file position indicator for the stream pointed to by stream.
If a read or write error occurs, the error indicator for the stream is set and fseek fails.
I don't believe it's specified that the data must be flushed on an fseek, but when the data is actually written to disk, it must be written at the position the stream was at when the write function was called. Even if the data is still buffered, that buffer can't be written to a different part of the file when it is flushed, even if there has been a subsequent seek.
It seems that your real concern is whether previously-written (but not yet flushed) data would end up in the wrong place in the file if you do an fseek.
No, that won't happen. It'll behave as you'd expect.
I have vague memories of a requirement that you call fflush before fseek, but I don't have my copy of the C standard available to verify. (If you don't, it would be undefined behavior or implementation-defined, or something like that.) The common Unix standard specifies that:
If the most recent operation, other than ftell(), on a given stream is
fflush(), the file offset in the underlying open file description
shall be adjusted to reflect the location specified by fseek().
[...]
If the stream is writable and buffered data had not been written to
the underlying file, fseek() shall cause the unwritten data to be
written to the file and shall mark the st_ctime and st_mtime fields of
the file for update.
This is marked as an extension to the ISO C standard, however, so you can't count on it except on Unix platforms (or other platforms which make similar guarantees).

c++ file bad bit

When I run this code, the open, seekg and tellg operations all succeed, but when I read, it fails: the eof, bad and fail bits are 0, 1 and 1.
What can cause a file to go bad?
Thanks
int readriblock(int blockid, char* buffer)
{
    ifstream rifile("./ri/reverseindex.bin", ios::in | ios::binary);
    rifile.seekg(blockid * RI_BLOCK_SIZE, ios::beg);
    if (!rifile.good()) { cout << "block not exist" << endl; return -1; }
    cout << rifile.tellg() << endl;
    rifile.read(buffer, RI_BLOCK_SIZE);
    cout << rifile.eof() << rifile.bad() << rifile.fail() << endl;
    if (!rifile.good()) { cout << "error reading block " << blockid << endl; return -1; }
    rifile.close();
    return 0;
}
Quoting the Apache C++ Standard Library User's Guide:
The flag std::ios_base::badbit indicates problems with the underlying stream buffer. These problems could be:
Memory shortage. There is no memory available to create the buffer, or the buffer has size 0 for other reasons (such as being provided from outside the stream), or the stream cannot allocate memory for its own internal data, as with std::ios_base::iword() and std::ios_base::pword().
The underlying stream buffer throws an exception. The stream buffer might lose its integrity, as in memory shortage, or code conversion failure, or an unrecoverable read error from the external device. The stream buffer can indicate this loss of integrity by throwing an exception, which is caught by the stream and results in setting the badbit in the stream's state.
That doesn't tell you what the problem is, but it might give you a place to start.
Keep in mind the EOF bit is generally not set until a read is attempted and fails. (In other words, checking rifile.good after calling seekg may not accomplish anything.)
As Andrey suggested, checking errno (or using an OS-specific API) might let you get at the underlying problem. This answer has example code for doing that.
Side note: Because rifile is a local object, you don't need to close it once you're finished. Understanding that is important for understanding RAII, a key technique in C++.
Try good old errno. It should show the real reason for the error. Unfortunately, there is no C++ish way to get at it.