I wrote some Qt code to copy the whole content of sourcefile to destfile. With small files it runs successfully, but with large files (for example, a 700 MB video), after the copy the destfile is only about 2 KB in size.
This is the Copy routine that does the copying:
void Copy(QString destfile, QString sourcefile)
{
    qint64 bufSize = 100*1024*1024; // 100 MB
    char *buf = new char[bufSize];
    //qint64 dataSize;
    QFile sfile(sourcefile), dfile(destfile);
    if (!sfile.open(QIODevice::ReadOnly) || !dfile.open(QIODevice::WriteOnly))
    {
        qDebug() << "Error";
        return;
    }
    while (!sfile.atEnd())
    {
        sfile.read(buf, bufSize);
        dfile.write(buf);
    }
    sfile.close();
    dfile.close();
    qDebug() << "OK";
}
dfile.write(buf); treats buf as a null-terminated string and stops at the first zero byte, so it does not write the entire array. You want the overload that takes an explicit size:
dfile.write(buf, bufSize);
Notes:
You probably want to check the return value of that call to see how many bytes were actually written. Do the same for sfile.read(buf, bufSize); and check how many bytes were read (it can be less than bufSize).
Something like:
while (!sfile.atEnd())
{
    qint64 bytesRead = sfile.read(buf, bufSize);
    if (dfile.write(buf, bytesRead) < 0)
    {
        // error ...
    }
}
Also, buf is never freed (there is no matching delete[]). A std::vector<char> or QByteArray would manage the memory for you; with a 100 MB buffer you should not put it on the stack, so keep some kind of heap allocation.
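Putting those notes together, a minimal corrected sketch of the routine (untested; it assumes <vector> is included and uses a smaller 4 MB buffer so the allocation stays cheap) could look like this:
void Copy(QString destfile, QString sourcefile)
{
    const qint64 bufSize = 4 * 1024 * 1024;   // 4 MB is plenty for a copy loop
    std::vector<char> buf(bufSize);           // freed automatically, no delete[] needed

    QFile sfile(sourcefile), dfile(destfile);
    if (!sfile.open(QIODevice::ReadOnly) || !dfile.open(QIODevice::WriteOnly))
    {
        qDebug() << "Error opening files";
        return;
    }

    while (!sfile.atEnd())
    {
        qint64 bytesRead = sfile.read(buf.data(), bufSize);
        // treat a failed read or a short write as an error
        if (bytesRead < 0 || dfile.write(buf.data(), bytesRead) != bytesRead)
        {
            qDebug() << "I/O error";
            return;
        }
    }
    qDebug() << "OK";   // the QFile destructors close both files
}
And if you do not need the manual loop at all, QFile::copy(sourcefile, destfile) does the whole job in one call.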
I am using FUSE to build my own file system for the MIT 6.824 lab, and the read operation is implemented in this function:
void
fuseserver_read(fuse_req_t req, fuse_ino_t ino, size_t size,
                off_t off, struct fuse_file_info *fi)
{
    std::string buf;
    int r;
    if ((r = yfs->read(ino, size, off, buf)) == yfs_client::OK) {
        char *retbuf = (char *)malloc(buf.size());
        memcpy(retbuf, buf.data(), buf.size());
        // Print the information of the result.
        printf("debug read in fuse: the content of %lu is %s, size %lu\n",
               ino, retbuf, buf.size());
        fuse_reply_buf(req, retbuf, buf.size());
    } else {
        fuse_reply_err(req, ENOENT);
    }
}

// global definition
// struct fuse_lowlevel_ops fuseserver_oper;
// in main()
// fuseserver_oper.read = fuseserver_read;
I print the contents of buf before the reply is sent.
The write operation is also implemented, of course.
Then I run a simple test to read out some words.
// test.c
int main() {
    // ./yfs1 is the mount point of my filesystem
    int fd = open("./yfs1/test-file", O_RDWR | O_CREAT, 0777);
    char *buf = "123";
    char *readout;
    readout = (char *)malloc(3);
    int writesize = write(fd, buf, 3);
    int readsize = read(fd, readout, 3);
    printf("%s,%d\n", buf, writesize);
    printf("%s,%d\n", readout, readsize);
    close(fd);
}
I get nothing back from read(fd, readout, 3), but the information printed by fuseserver_read shows that the buffer is read successfully before fuse_reply_buf:
$ ./test
123,3
,0
debug read in fuse: the content of 2 is 123, size 3
So why can the read() in test.c not read anything from my file system?
Firstly, I made a mistake in how the test writes the file. The file offset points to the end of the file after write(), so the subsequent read() naturally returns nothing. Simply reopening the file (or seeking back to the start) makes the test work.
Secondly, before FUSE performs the read() operation, it calls getattr() first and truncates the result of read() to the file's "size" attribute. So you must be very careful when manipulating a file's attributes.
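To illustrate the first point, here is a minimal sketch of the fixed test (hypothetical, mirroring the test.c above; it needs <fcntl.h>, <unistd.h> and <stdio.h>) that rewinds the descriptor before reading:
int fd = open("./yfs1/test-file", O_RDWR | O_CREAT, 0777);
const char *buf = "123";
char readout[4] = {0};

int writesize = write(fd, buf, 3);
lseek(fd, 0, SEEK_SET);              /* rewind, or close(fd) and open() again */
int readsize = read(fd, readout, 3);

printf("%s,%d\n", buf, writesize);
printf("%s,%d\n", readout, readsize);
close(fd);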
You also need to signal that there is nothing more to read by replying with an empty buffer, as an "EOF". You can do that with a helper like reply_buf_limited.
Take a look at the hello_ll example in the FUSE source tree:
static void tfs_read(fuse_req_t req, fuse_ino_t ino, size_t size,
                     off_t off, struct fuse_file_info *fi)
{
    (void) fi;
    assert(ino == FILE_INO);
    reply_buf_limited(req, file_contents, file_size, off, size);
}

static int reply_buf_limited(fuse_req_t req, const char *buf, size_t bufsize,
                             off_t off, size_t maxsize)
{
    if (off < bufsize)
        return fuse_reply_buf(req, buf + off, min(bufsize - off, maxsize));
    else
        return fuse_reply_buf(req, NULL, 0);
}
Here is my situation:
I'm using Audio Queue Services to record sound. When the callback function is called (as soon as the buffer is full), I send the buffer content to an Objective-C object to process it.
void AQRecorder::MyInputBufferHandler(void *inUserData,
                                      AudioQueueRef inAQ,
                                      AudioQueueBufferRef inBuffer,
                                      const AudioTimeStamp *inStartTime,
                                      UInt32 inNumPackets,
                                      const AudioStreamPacketDescription *inPacketDesc)
{
    AQRecorder *aqr = (AQRecorder *)inUserData;
    try {
        if (inNumPackets > 0) {
            NSLog(@"Callback! Sending buffer content ...");
            aqr->objectiveC_Call([NSData dataWithBytes:inBuffer->mAudioData
                                                length:inBuffer->mAudioDataBytesCapacity]);
            aqr->mRecordPacket += inNumPackets;
        }
        if (aqr->IsRunning())
            XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL),
                          "AudioQueueEnqueueBuffer failed");
    } catch (CAXException e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
    }
}

void AQRecorder::objectiveC_Call(NSData *buffer) {
    MyObjCObject *myObj = [[MyObjCObject alloc] init];
    [myObj process:buffer];
}
The problem here is that I get an EXC_BAD_ACCESS during processing (inside myObj's process method), and after some research I guess it's related to myObj being released.
MyObjCObject's process method runs a for loop over the buffer content, and I get the EXC_BAD_ACCESS error even if I just NSLog the buffer values.
- (void)run:(NSData *)bufferReceived {
    NSUInteger bufferSize = [bufferReceived length];
    self.buffer = (short *)malloc(bufferSize);
    memcpy(self.buffer, [bufferReceived bytes], bufferSize);
    for (int i = 0; i < bufferSize; i++) {
        NSLog("value: %i", buffer[i]);
    }
}
Can you please tell me the right way to do this?
PS: my files have the .mm extension, ARC is enabled for the whole project, and the rest of my code seems to work as expected.
Thanks!
You malloc the buffer and cast it to 'short *', but then you enumerate it using 'bufferSize', which is a byte count. Each iteration moves forward by a 'short' rather than a byte, so the 'for' loop eventually reads past the end of the buffer, which can produce the 'EXC_BAD_ACCESS'. Change the loop to something like:
for (int i = 0; i < bufferSize / sizeof(short); i++) {
    NSLog(@"value: %i", buffer[i]);
}
Either that or change the type of the 'buffer' member variable.
I'm trying to load a PNG from a memory buffer so I can access the ImageData without having to save it as a file first.
The memory buffer contains a valid PNG file; when I use fwrite to save it to disk I get the following image: https://dl.dropboxusercontent.com/u/13077624/test.png
This represents a depth Image received by a Kinect sensor, for those of you wondering.
This is the code that gives errors:
struct mem_encode
{
    char *buffer;
    png_uint_32 size;
    png_uint_32 current_pos;
};

void handle_data(const boost::system::error_code& error,
                 size_t bytes_transferred)
{
    if (!error)
    {
        cout << "Saving as file: " << determinePathExtension(PNGFrame, "png");
        FILE* fp = fopen("test.png", "wb");
        fwrite(data_, bytes_transferred, 1, fp);
        fclose(fp);

        // get PNG file info struct (memory is allocated by libpng)
        png_structp png_ptr = NULL;
        png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
        if (!png_ptr) {
            std::cerr << "ERROR: Couldn't initialize png read struct" << std::endl;
            cin.get();
            return; // Do your own error recovery/handling here
        }

        // get PNG image data info struct (memory is allocated by libpng)
        png_infop info_ptr = NULL;
        info_ptr = png_create_info_struct(png_ptr);
        if (!info_ptr) {
            std::cerr << "ERROR: Couldn't initialize png info struct" << std::endl;
            cin.get();
            png_destroy_read_struct(&png_ptr, (png_infopp)0, (png_infopp)0);
            return; // Do your own error recovery/handling here
        }

        struct mem_encode pngdata;
        pngdata.buffer = data_;
        pngdata.size = (png_uint_32)bytes_transferred;
        pngdata.current_pos = 0;

        png_set_read_fn(png_ptr, &pngdata, ReadData);

        // Start reading the png header
        png_set_sig_bytes(png_ptr, 8);
        png_read_info(png_ptr, info_ptr);
        //... Program crashes here
    }
    else
    {
        cout << error.message() << " Bytes received: " << bytes_transferred << endl;
        delete this;
    }
}

static void ReadData(png_structp png_ptr, png_bytep outBytes,
                     png_size_t byteCountToRead)
{
    struct mem_encode* p = (struct mem_encode*)png_get_io_ptr(png_ptr);
    size_t nsize = p->size + byteCountToRead;

    if (byteCountToRead > (p->size - p->current_pos))
        png_error(png_ptr, "read error in read_data_memory (loadpng)");

    /* copy new bytes */
    memcpy(outBytes, p->buffer + p->size, byteCountToRead);
    p->current_pos += byteCountToRead;
}
Calling the method results in the program crashing with the following error:
libpng error: [00][00][00][00]: invalid chunk type
data_ represents the data buffer storing the PNG image and is a char *.
Any help would be appreciated.
Sources I used:
http://www.libpng.org/pub/png/libpng-1.0.3-manual.html
http://blog.hammerian.net/2009/reading-png-images-from-memory/
http://santosdev.blogspot.be/2012/08/loading-png-image-with-libpng-1512-or.html
http://www.piko3d.net/tutorials/libpng-tutorial-loading-png-files-from-streams/
Could this be caused by network bytes being translated badly?
I think you forgot to read the PNG signature bytes. Use
if (png_sig_cmp((png_bytep)data_, 0, 8))
    png_error(png_ptr, "it's not a PNG file");
Then your
png_set_sig_bytes(png_ptr, 8);
lets libpng know you have already read (and checked) the signature.
Or, you could use png_set_sig_bytes(png_ptr, 0); and let libpng do the checking for you.
Are you sure your ReadData function is correct? Why do you start the memcpy from the address p->buffer + p->size? Isn't that the end of the buffer? And what is nsize supposed to do?
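That question points at the actual bug: the copy should start from the current read position, not from the end of the buffer. A corrected sketch of the callback (nsize can simply be dropped):
static void ReadData(png_structp png_ptr, png_bytep outBytes,
                     png_size_t byteCountToRead)
{
    struct mem_encode* p = (struct mem_encode*)png_get_io_ptr(png_ptr);

    if (byteCountToRead > (p->size - p->current_pos))
        png_error(png_ptr, "read error in read_data_memory (loadpng)");

    /* copy from the current position, then advance it */
    memcpy(outBytes, p->buffer + p->current_pos, byteCountToRead);
    p->current_pos += byteCountToRead;
}
Note also that if you keep png_set_sig_bytes(png_ptr, 8), libpng assumes the stream is already positioned past the 8 signature bytes, so current_pos should then start at 8; with png_set_sig_bytes(png_ptr, 0) it starts at 0 and libpng checks the signature itself.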
I am using libzip to work with zip files, and everything went fine until I needed to read a file from a zip archive.
I only need to read whole text files, so it would be great to achieve something like PHP's file_get_contents function.
To read a file from the zip there is the function int zip_fread(struct zip_file *file, void *buf, zip_uint64_t nbytes).
The main problem is that I don't know what size buf must be and how many nbytes I must read (I need to read the whole file, but files have different sizes). I could just allocate a buffer big enough to fit them all and read its full size, or loop until zip_fread stops returning data, but I don't think that is an elegant option.
You can try using zip_stat to get the file size.
http://linux.die.net/man/3/zip_stat
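For a file_get_contents-style helper, a sketch along these lines (the function name zip_get_contents is made up, and it assumes the archive is already open) combines zip_stat with a single zip_fread of the full uncompressed size:
#include <zip.h>
#include <string>

// Read a whole entry from an already-open archive into a std::string.
// Returns an empty string on any error; real code should report the error.
std::string zip_get_contents(struct zip *archive, const char *name)
{
    struct zip_stat st;
    zip_stat_init(&st);
    if (zip_stat(archive, name, 0, &st) != 0)   // production code should also check st.valid & ZIP_STAT_SIZE
        return std::string();

    struct zip_file *zf = zip_fopen(archive, name, 0);
    if (!zf)
        return std::string();

    std::string contents(st.size, '\0');
    zip_int64_t n = zip_fread(zf, &contents[0], st.size);
    zip_fclose(zf);

    if (n < 0 || (zip_uint64_t)n != st.size)
        return std::string();
    return contents;
}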
I haven't used the libzip interface, but from what you write it seems to look very similar to a file interface: once you have a handle to the stream you keep calling zip_fread() until the function returns an error (or, possibly, fewer than the requested bytes). The buffer you pass in is just a reasonably sized temporary buffer through which the data is communicated.
Personally I would probably create a stream buffer for this, so that once the file in the zip archive is set up it can be read using the conventional I/O stream operations. It would look something like this:
struct zipbuf: std::streambuf {
    zipbuf(???): file_(???) {}
private:
    zip_file* file_;
    enum { s_size = 8196 };
    char buffer_[s_size];

    // refill the get area from the zip stream when it runs dry
    int underflow() {
        int rc(zip_fread(this->file_, this->buffer_, s_size));
        this->setg(this->buffer_, this->buffer_,
                   this->buffer_ + std::max(0, rc));
        return this->gptr() == this->egptr()
            ? traits_type::eof()
            : traits_type::to_int_type(*this->gptr());
    }
};
With this stream buffer you should be able to create an std::istream and read the file into whatever structure you need:
zipbuf buf(???);
std::istream in(&buf);
...
Obviously, this code isn't tested or compiled. However, when you replace the ??? with whatever is needed to open the zip file, I'd think this should pretty much work.
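Once the ??? pieces are filled in, the PHP-style "whole file as one string" behaviour is then a one-liner over that stream (with <iterator> and <string> included):
zipbuf buf(???);
std::istream in(&buf);

// slurp the entire entry into a single string
std::string contents((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());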
Here is a routine I wrote that extracts data from a zip-stream and prints out a line at a time. This uses zlib, not libzip, but if this code is useful to you, feel free to use it:
/*
 * compile with the -lz option in order to link in the zlib library
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define Z_CHUNK 2097152
int unzipFile(const char *fName)
{
    z_stream zStream;
    char *zRemainderBuf = malloc(1);
    unsigned char zInBuf[Z_CHUNK];
    unsigned char zOutBuf[Z_CHUNK];
    char zLineBuf[Z_CHUNK];
    unsigned int zHave, zBufIdx, zBufOffset, zOutBufIdx;
    int zError;

    FILE *inFp = fopen(fName, "rbR");
    if (!inFp) { fprintf(stderr, "could not open file: %s\n", fName); return EXIT_FAILURE; }

    zStream.zalloc = Z_NULL;
    zStream.zfree = Z_NULL;
    zStream.opaque = Z_NULL;
    zStream.avail_in = 0;
    zStream.next_in = Z_NULL;

    zError = inflateInit2(&zStream, (15+32)); /* cf. http://www.zlib.net/manual.html */
    if (zError != Z_OK) { fprintf(stderr, "could not initialize z-stream\n"); return EXIT_FAILURE; }

    *zRemainderBuf = '\0';

    do {
        zStream.avail_in = fread(zInBuf, 1, Z_CHUNK, inFp);
        if (zStream.avail_in == 0)
            break;
        zStream.next_in = zInBuf;
        do {
            zStream.avail_out = Z_CHUNK;
            zStream.next_out = zOutBuf;
            zError = inflate(&zStream, Z_NO_FLUSH);
            switch (zError) {
                case Z_NEED_DICT:  { fprintf(stderr, "Z-stream needs dictionary!\n"); return EXIT_FAILURE; }
                case Z_DATA_ERROR: { fprintf(stderr, "Z-stream suffered data error!\n"); return EXIT_FAILURE; }
                case Z_MEM_ERROR:  { fprintf(stderr, "Z-stream suffered memory error!\n"); return EXIT_FAILURE; }
            }
            zHave = Z_CHUNK - zStream.avail_out;
            zOutBuf[zHave] = '\0';

            /* copy remainder buffer onto line buffer, if not NULL */
            if (zRemainderBuf) {
                strncpy(zLineBuf, zRemainderBuf, strlen(zRemainderBuf));
                zBufOffset = strlen(zRemainderBuf);
            }
            else
                zBufOffset = 0;

            /* read through zOutBuf for newlines */
            for (zBufIdx = zBufOffset, zOutBufIdx = 0; zOutBufIdx < zHave; zBufIdx++, zOutBufIdx++) {
                zLineBuf[zBufIdx] = zOutBuf[zOutBufIdx];
                if (zLineBuf[zBufIdx] == '\n') {
                    zLineBuf[zBufIdx] = '\0';
                    zBufIdx = -1;
                    fprintf(stdout, "%s\n", zLineBuf);
                }
            }

            /* copy some of line buffer onto the remainder buffer, if there are remnants from the z-stream */
            if (strlen(zLineBuf) > 0) {
                if (strlen(zLineBuf) > strlen(zRemainderBuf)) {
                    /* to minimize the chance of doing another (expensive) malloc, we double the length of zRemainderBuf */
                    free(zRemainderBuf);
                    zRemainderBuf = malloc(strlen(zLineBuf) * 2);
                }
                strncpy(zRemainderBuf, zLineBuf, zBufIdx);
                zRemainderBuf[zBufIdx] = '\0';
            }
        } while (zStream.avail_out == 0);
    } while (zError != Z_STREAM_END);

    /* close gzip stream */
    zError = inflateEnd(&zStream);
    if (zError != Z_OK) {
        fprintf(stderr, "could not close z-stream!\n");
        return EXIT_FAILURE;
    }

    if (zRemainderBuf)
        free(zRemainderBuf);
    fclose(inFp);

    return EXIT_SUCCESS;
}
With any streaming you should consider the memory requirements of your app.
A good buffer size is large, but you do not want to have too much memory in use depending on your RAM usage requirements. A small buffer size will require you call your read and write operations more times which are expensive in terms of time performance. So, you need to find a buffer in the middle of those two extremes.
Typically I use a size of 4096 (4 KB), which is sufficiently large for many purposes. You can go larger if you want. But at the worst-case size of 1 byte, you will be waiting a long time for your reads to complete.
So to answer your question, there is no "right" size to pick. It is a choice you should make so that the speed of your app and the memory it requires are what you need.
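As a concrete illustration of that trade-off in the libzip case, here is a small sketch (the names are made up) that reads an already-opened entry in 4 KB chunks and appends them to a std::string:
// Append an already-opened zip entry to 'out' in 4 KB chunks.
bool read_entry(struct zip_file *zf, std::string &out)
{
    char chunk[4096];
    zip_int64_t n;
    while ((n = zip_fread(zf, chunk, sizeof(chunk))) > 0)
        out.append(chunk, (size_t)n);
    return n == 0;   // zip_fread returns 0 at end of file, a negative value on error
}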
So I am trying to implement a timed HTTP Keep-Alive connection, and I need to be able to kill it after some timeout. Currently I have (or at least would like to have):
void http_request::timed_receive_base(boost::asio::ip::tcp::socket& socket, int buffer_size, int seconds_to_wait, int seconds_to_parse)
{
    this->clear();
    http_request_parser_state parser_state = METHOD;
    char* buffer = new char[buffer_size];
    std::string key = "";
    std::string value = "";

    boost::asio::ip::tcp::iostream stream;
    stream.rdbuf()->assign(boost::asio::ip::tcp::v4(), socket.native());

    try
    {
        do
        {
            stream.expires_from_now(boost::posix_time::seconds(seconds_to_wait));
            int bytes_read = stream.read_some(boost::asio::buffer(buffer, buffer_size));
            stream.expires_from_now(boost::posix_time::seconds(seconds_to_parse));
            if (stream) // false if read timed out or other error
            {
                parse_buffer(buffer, parser_state, key, value, bytes_read);
            }
            else
            {
                throw std::runtime_error("Waiting for 2 long...");
            }
        } while (parser_state != OK);
    }
    catch (...)
    {
        delete[] buffer;
        throw;
    }
    delete[] buffer;
}
But there is no read_some in tcp::iostream, so compiler gives me an error:
Error 1 error C2039: 'read_some' : is not a member of 'boost::asio::basic_socket_iostream<Protocol>'
That is why I wonder: how can I read 1 byte via stream.read (like stream.read(buffer, 1);), then read_some into that very buffer via the socket API (it would look like int bytes_read = socket.read_some(boost::asio::buffer(buffer, buffer_size));), and then call my parse_buffer function with the real bytes_read value?
By the way, it seems like there will be a really annoying problem with that 1 last byte...
Sorry to be a bit rough, but did you read the documentation? The socket iostream is supposed to work like the normal iostream, like cin and cout. Just do stream >> var. Maybe you want basic_stream_socket::read_some instead?
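If you really do need read_some semantics, it lives on the socket rather than on the iostream. A rough sketch (note that a plain read_some on the socket blocks and is not covered by the iostream's expires_from_now() deadline, so this only illustrates the call itself):
boost::system::error_code ec;
std::size_t bytes_read = socket.read_some(boost::asio::buffer(buffer, buffer_size), ec);
if (ec)
    throw boost::system::system_error(ec);   // includes end-of-stream
parse_buffer(buffer, parser_state, key, value, bytes_read);
If you need a real timeout on that raw socket read, the usual route is async_read_some combined with a deadline_timer rather than the synchronous call shown here.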