Implementing QIODevice::writeData, confusing documentation - c++

I'm trying to implement a double buffer for a real-time audio application, and QAudioInput requires it to be a subclass of QIODevice. I'm finding the documentation for this method pretty confusing.
First of all, the method signature in the documentation doesn't match the header for Qt 5.9.2, which declares virtual qint64 writeData(const char *data, qint64 len) = 0;
The documentation has this signature though: qint64 QIODevice::writeData(const char *data, qint64 maxSize)
The maxSize parameter confuses me because it implies that I can just buffer some of the data, which the documentation also implies with:
Writes up to maxSize bytes from data to the device. Returns the number of bytes written, or -1 if an error occurred.
However, immediately afterward the documentation says this, which seems contradictory to me:
When reimplementing this function it is important that this function writes all the data available before returning. This is required in order for QDataStream to be able to operate on the class. QDataStream assumes all the information was written and therefore does not retry writing if there was a problem.
So is my QIODevice implementation responsible for buffering all the data in a single call or not?

What they're basically trying to say is: the passed data is maxSize bytes long, and your implementation should write all of it and return the number of bytes written.
It is possible to write less data than is available, but you shouldn't. If you do, some classes that use your device (like QDataStream) may not cope with it. It depends on how QAudioInput handles write calls: if it checks the result and retries with the missing data when a write was incomplete, writing less than everything is fine. If that's not the case, you always have to write all the data.
Simply try it out: always write only 1 byte (and return 1). If that works, you're fine; if not, you have to write all the passed data every time, or fail with -1.
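For illustration, here is a minimal sketch of what such a device might look like, assuming the incoming audio is simply appended to an internal QByteArray. The class name AudioBuffer and the mutex are my own additions (the mutex only matters if producer and consumer run in different threads), and remember to open() the device before handing it to QAudioInput::start():

#include <QIODevice>
#include <QByteArray>
#include <QMutex>
#include <QMutexLocker>
#include <cstring>

class AudioBuffer : public QIODevice
{
public:
    explicit AudioBuffer(QObject *parent = nullptr) : QIODevice(parent) {}

protected:
    qint64 writeData(const char *data, qint64 len) override
    {
        QMutexLocker lock(&m_mutex);
        // Accept everything we are handed: append all len bytes and report
        // them as written, so callers never have to retry.
        m_buffer.append(data, int(len));
        return len;
    }

    qint64 readData(char *data, qint64 maxSize) override
    {
        QMutexLocker lock(&m_mutex);
        const qint64 n = qMin<qint64>(maxSize, m_buffer.size());
        std::memcpy(data, m_buffer.constData(), size_t(n));
        m_buffer.remove(0, int(n));
        return n;
    }

private:
    QByteArray m_buffer;
    QMutex m_mutex;
};

Because writeData() always consumes and returns len, the apparent contradiction in the documentation disappears in practice: nothing is ever left unwritten, so nothing ever needs to be retried.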

Related

When I use protocol-buffers, how could I know ByteSize() on the server side?

According to the protocol-buffers API, ParseFromArray(const void * data, int size) will fail if the format is wrong; in my case, it returns false when the size parameter is not right. A lot of answers point out that ByteSize() should be used when using SerializeToArray(void * data, int size), to make sure the right size is parsed, but none of them clearly point out how. So how do I pass the ByteSize() value to the server side to make sure ParseFromArray doesn't return false?
As far as I know, all the examples I found make the size parameter the full size of a receive buffer and don't check the return value at all, since the fields get parsed anyway. Is it a good idea to leave the return value unchecked?
Is it a good idea to leave the return value unchecked?
No!
Where did you get the buffer of data from? Wherever you got it, should have also told you how many bytes are in the buffer.
For example, if you used recv() to receive a UDP datagram from the network, it returns the number of bytes received.
If you are using TCP, then again recv() returns the number of bytes received, but note that the message may arrive in multiple parts. You need some way to know when it is done, such as by using shutdown() to end the stream after the message is sent, or by separately writing the size before the message. For the latter solution, you might be interested in my pull request, which adds helpers to protobufs to accomplish this (but it's not too hard to do yourself, too).
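As a sketch of the "write the size before the message" approach over a plain POSIX TCP socket (the helper names send_delimited and recv_delimited are made up here, and error handling is kept minimal):

#include <cstdint>
#include <string>
#include <arpa/inet.h>                    // htonl / ntohl
#include <sys/socket.h>                   // send / recv
#include <google/protobuf/message_lite.h>

// Frame each message with a 4-byte big-endian length prefix so the receiver
// knows exactly how many bytes to hand to ParseFromArray().
bool send_delimited(int fd, const google::protobuf::MessageLite &msg)
{
    const std::string body = msg.SerializeAsString();
    const uint32_t len_be = htonl(static_cast<uint32_t>(body.size()));
    if (send(fd, &len_be, sizeof(len_be), 0) != (ssize_t)sizeof(len_be))
        return false;
    return send(fd, body.data(), body.size(), 0) == (ssize_t)body.size();
}

bool recv_delimited(int fd, google::protobuf::MessageLite &msg)
{
    uint32_t len_be = 0;
    if (recv(fd, &len_be, sizeof(len_be), MSG_WAITALL) != (ssize_t)sizeof(len_be))
        return false;
    const uint32_t len = ntohl(len_be);
    std::string body(len, '\0');
    if (recv(fd, &body[0], len, MSG_WAITALL) != (ssize_t)len)
        return false;
    // Parse exactly 'len' bytes -- the size the sender serialized.
    return msg.ParseFromArray(body.data(), static_cast<int>(len));
}

The prefix carries the exact serialized size (what ByteSize() reports on the sender), so the receiver no longer has to guess with the full size of its receive buffer and can meaningfully check the return value of ParseFromArray.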

C++ write filestream

I am writing an application that produces and logs a lot of data in the form of ASCII and binary output (not mixed, one or the other depending on the log). The application is single-threaded (should make things easier) and I want to write my data to disk in the order that it was generated. I need to implement a write(char* data) method that takes a null-terminated character array and writes it to disk. Ideally, I want the function to buffer the data and return before the data is actually written to disk...I figure that there must be some way for Windows to setup a thread and do this in the background. The only thing that I care about is that I get the data in the log file in the order that it was written. What is the best way to do this? Someone else implemented the current write method and it looks like:
void writeData(const char* data, int size)
{
    if (fp != 0)
        fwrite(data, 1, size, fp);
}
fp is the file pointer.
C++ Stdio.h header:
http://www.cplusplus.com/reference/cstdio/fwrite/
With multiple threads, maybe you can use something like a log queue.
In a single thread, the order is guaranteed.
If you are talking Windows-only then you pretty much have two options: Overlapped I/O through the WinAPI or setting up a separate thread in your program to handle file I/O (which can potentially be cross-platform by using pthreads)
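For the "separate thread" option, a minimal portable sketch using the standard library (rather than raw Win32 threads or pthreads) might look like the following; AsyncLogger and all its members are invented names, and error handling is omitted:

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// write() only copies the data into a queue and returns immediately; a single
// worker thread drains the queue in FIFO order, so the on-disk order matches
// the order in which write() was called.
class AsyncLogger
{
public:
    explicit AsyncLogger(const char *path)
        : fp_(std::fopen(path, "wb")), worker_(&AsyncLogger::run, this) {}

    ~AsyncLogger()
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();
        if (fp_) std::fclose(fp_);
    }

    void write(const char *data, size_t size)
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            queue_.emplace(data, size);   // copy the buffer, then return
        }
        cv_.notify_one();
    }

private:
    void run()
    {
        std::unique_lock<std::mutex> lock(m_);
        for (;;) {
            cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
            while (!queue_.empty()) {
                std::string chunk = std::move(queue_.front());
                queue_.pop();
                lock.unlock();            // do the slow I/O without the lock
                if (fp_) std::fwrite(chunk.data(), 1, chunk.size(), fp_);
                lock.lock();
            }
            if (done_) return;
        }
    }

    std::FILE *fp_;
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> queue_;
    bool done_ = false;
    std::thread worker_;
};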

Qt QIODevice::write / QTcpSocket::write and bytes written

We are quite confused about the behavior of QIODevice::write in general, and the QTcpSocket implementation specifically. There is a similar question already, but the answer is not really satisfactory. The main confusion stems from the bytesWritten signal mentioned there and the corresponding waitForBytesWritten method. Those two seem to indicate the bytes that were written from the buffer employed by the QIODevice to the actual underlying device (there must be such a buffer, otherwise the method would not make much sense). The question then is whether the number returned by QIODevice::write corresponds to this number, or whether it indicates the number of bytes that were stored in the internal buffer rather than the bytes written to the underlying device. If the returned number indicated the bytes written to the internal buffer, we would need to employ a pattern like the following to ensure all our data is written:
void writeAll(QIODevice& device, const QByteArray& data) {
    qint64 written = 0;
    do {
        written += device.write(data.constData() + written, data.size() - written);
    } while (written < data.size());
}
However, this will insert duplicate data if the return value of QIODevice::write corresponds to the meaning of the bytesWritten signal. The documentation is very confusing about this, as both places use the word device, even though it seems logical, and seems to be the general understanding, that one of them actually means written to the buffer, not to the device.
So to summarize, the question is: is the number returned by QIODevice::write the number of bytes written to the underlying device, and is it hence safe to call QIODevice::write without checking the returned number of bytes, since everything is stored in the internal buffer anyway? Or does it indicate how many bytes the device could store internally, so that a pattern like the writeAll above has to be employed to safely write all data to the device?
(UPDATE: Looking at the source, the QTcpSocket::write implementation will actually never return fewer bytes than one wanted to write, so the writeAll above is not needed. However, that is specific to the socket and this Qt version; the documentation is still confusing...)
QTcpSocket is a buffered QAbstractSocket. An internal buffer is allocated inside QAbstractSocket, and data is copied in that buffer. The return value of write is the size of the data passed to write().
waitForBytesWritten waits until the data in the internal buffer of QAbstractSocket is written to the native socket.
That previous question answers your question, as does the QIODevice::write(const char * data, qint64 maxSize) documentation:
Writes at most maxSize bytes of data from data to the device. Returns the number of bytes that were actually written, or -1 if an error occurred.
This can (and will in real life) return less than what you requested, and it's up to you to call write again with the remainder.
As for waitForBytesWritten:
For buffered devices, this function waits until a payload of buffered written data has been written to the device...
It applies only to buffered devices. Not all devices are buffered. If they are, and you wrote less than what the buffer can hold, write can return successfully before the device has finished sending all the data.
Devices are not necessarily buffered.
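Assuming a defensive reading of the documentation, a small helper along these lines (writeAndFlush is a made-up name) shows the distinction in practice for QTcpSocket: write() hands the data to the socket's internal buffer, while bytesToWrite() and waitForBytesWritten() report on the transfer from that buffer to the native socket:

#include <QTcpSocket>
#include <QByteArray>

bool writeAndFlush(QTcpSocket &socket, const QByteArray &data)
{
    // QTcpSocket buffers internally, so in practice this accepts everything,
    // but checking the return value costs nothing.
    if (socket.write(data) != data.size())
        return false;
    // The data may still be sitting in the internal buffer here; block until
    // it has actually been handed to the native socket.
    while (socket.bytesToWrite() > 0) {
        if (!socket.waitForBytesWritten(5000))   // arbitrary 5 s timeout
            return false;
    }
    return true;
}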

C++ reading buffer size

Suppose that this file is 2 and 1/2 blocks long, with a block size of 1024.
int aBlock = 1024;
char* buffer = new char[aBlock];
while (!myFile.eof()) {
    myFile.read(buffer, aBlock);
    // do more stuff
}
The third time it reads, it is going to fill only half of the buffer, leaving the other half with invalid data. Is there a way to know how many bytes it actually wrote to the buffer?
istream::gcount returns the number of bytes read by the previous read.
Your code is both overly complicated and error-prone.
Reading in a loop and checking only for eof is a logic error since this will result in an infinite loop if there is an error while reading (for whatever reason).
Instead, you need to check all fail states of the stream, which can be done by simply checking for the istream object itself.
Since this is already returned by the read function, you can (and, indeed, should) structure any reader loop like this:
while (myFile.read(buffer, aBlock))
    process(buffer, aBlock);
process(buffer, myFile.gcount());
This is at the same time shorter, doesn’t hide bugs and is more readable since the check-stream-state-in-loop is an established C++ idiom.
You could also look at istream::readsome, which actually returns the number of bytes read.
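Put together as a self-contained example (the file name data.bin and the trivial process() stand-in are placeholders):

#include <fstream>
#include <iostream>

// Stand-in for whatever "do more stuff" is; here it just reports the count.
static void process(const char * /*buffer*/, std::streamsize count)
{
    std::cout << "got " << count << " bytes\n";
}

int main()
{
    const std::streamsize aBlock = 1024;
    char buffer[aBlock];

    std::ifstream myFile("data.bin", std::ios::binary);
    // Full blocks: read() succeeds and the stream stays good.
    while (myFile.read(buffer, aBlock))
        process(buffer, aBlock);
    // Final partial block: read() failed, but gcount() tells us how many
    // bytes it actually placed in the buffer (512 for the half block).
    process(buffer, myFile.gcount());
}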

prepend and remove from a ( void * ) in C

I think this is a pretty straightforward problem, but I still cannot figure it out.
I have a function which sends a stream over the network. Naturally, this takes a const void * as an argument:
void network_send(const void* data, long data_length)
I am trying to prepend a specific header in the form of a char* to this before sending it out over the socket:
long sent_size = strlen(header) + data_length;
void* data_to_send = malloc(sent_size);
memcpy(data_to_send, header, strlen(header));                      /* first copy the header */
memcpy((char*)data_to_send + strlen(header), data, data_length);   /* now copy the actual data */
This works fine as long as the data is actually char*, but if it changes to some other data type, then this stops working.
When receiving, I need to remove the header from the data before processing it, so this is how I do it:
void network_data_received(const void* data, long data_length)
{
    ........
    memmove(data_from_network, (char*)data_from_network + strlen(header), data_length); /* move the data to the beginning of the array */
    ProcessFurther(data_from_network, data_length - strlen(header)); /* data_length - strlen(header) makes ProcessFurther read only the relevant part of the array */
}
This again works OK if the data is of char type, but crashes if it is of any different type.
Can anyone suggest how to implement this properly?
Regards,
Khan
Sounds like alignment could be the issue, but you don't specify which platform you're doing this on (different CPU architectures have different alignment requirements).
If the header's length is "wrong" for the alignment of the following data, that could cause access violations.
Something surprises me in this code. Is your header actually a string? If it is a struct or something similar, you should replace strlen with sizeof. Calling strlen on a non-zero-terminated string is likely to cause crashes.
The second thing that surprises me is that when reading the received data, you should copy the header somewhere. If you're not using it, why bother sending it over the wire?
EDIT: OK, the header is some HTTP-like header string. There should not be any problem there, and it indeed does not need to be analysed if you're just testing.
You should also move the data to the place where you actually need it; moving it to the beginning of the buffer does not look like the right thing to do.
If the problem comes from alignment, it will disappear if you copy the data to some variable of the real target type at byte level before using it.
There is another solution: allocate your buffer with malloc and put the data structure you want at the beginning. Then you should be able to cast it. Addresses returned by malloc are compatible with any type.
Also be aware that if you were working with C++, casting to a non-trivial class would be unlikely to work (for one thing, vtables are likely to end up at wrong addresses, and there are other issues).
Another possible source of problems is the way you get data_length. It should be a number of bytes. Are you sure it is not a number of items? To be sure, we would need some hint of the calling code.
memcpy's behaviour is undefined if the source and target overlap (as in this instance); you should be using memmove().
What exactly is happening when the data is not char*? These functions will generally cast to void* before actually doing any work...
It's possible that data_length is not calculated correctly in the calling code. Otherwise this code seems to be fine, apart from the possible alignment issues mentioned by @unwind.
How is header declared? Does it have variable length? Are you missing a terminating NUL character after the header?
I'd also check to make sure that both sender and receiver use the same byte ordering architecture (little endian vs. big endian).
Using unsigned char* solved the issue. Thank you all for your comments.
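For reference, here is a sketch of the prepend/strip pattern done entirely through unsigned char pointers (frame_message and strip_header are invented names; it assumes, as in the EDIT above, that the header really is a NUL-terminated string):

#include <stdlib.h>
#include <string.h>

/* Build header + payload into one freshly malloc'd buffer. Because the
 * payload is copied byte-wise, its actual type does not matter. */
void *frame_message(const char *header, const void *payload,
                    long payload_len, long *out_len)
{
    size_t header_len = strlen(header);   /* header is a string here */
    unsigned char *out = (unsigned char *)malloc(header_len + (size_t)payload_len);
    if (out == NULL)
        return NULL;
    memcpy(out, header, header_len);
    memcpy(out + header_len, payload, (size_t)payload_len);
    *out_len = (long)header_len + payload_len;
    return out;
}

/* Return a pointer to the payload inside a received buffer and its length;
 * no memmove needed, just skip past the header. */
const void *strip_header(const void *received, long received_len,
                         size_t header_len, long *payload_len)
{
    *payload_len = received_len - (long)header_len;
    return (const unsigned char *)received + header_len;
}

Skipping past the header with pointer arithmetic avoids the memmove entirely; just make sure the receiver copies the payload into a properly aligned variable of the real target type before dereferencing it, as noted in the alignment answers above.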