I'm trying to write dynamic data to a QTcpSocket; this is how I implement it:
connect(&m_tcpSocket, SIGNAL(bytesWritten(qint64)), SLOT(written(qint64)));
//...
void MyClass::written(qint64 iBytes)
{
    if (iBytes > 0)
        m_strWrite = m_strWrite.mid(iBytes);
    if (m_strWrite.length() < 1) {
        if (m_hHandle->isDone())
            m_tcpSocket.disconnectFromHost();
    } else if (m_tcpSocket.isValid()) {
        m_tcpSocket.write(m_strWrite);
    }
}
//...
void MyClass::dataReady(const QByteArray &strData)
{
    bool bWrite = m_strWrite.isEmpty();
    m_strWrite.append(strData);
    if (bWrite)
        written(0);
}
dataReady is a slot which is called whenever there is some data ready, and strData is at most 8192 bytes.
This method works perfectly, but when the data is huge (>500 MB), strange things happen: sometimes much more data is written than I expect, sometimes some data is missing, and sometimes nothing is written at all after a while.
I just want a dynamic buffer to be written to the socket. Is there another way to do that?
QTcpSocket has its own write buffer. Just use m_tcpSocket.write(strData). Qt does not limit the write buffer size.
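For example, dataReady then shrinks to a single call; Qt queues the bytes internally and drains them as the socket becomes writable (a minimal sketch using the member names from the question):

void MyClass::dataReady(const QByteArray &strData)
{
    // QTcpSocket::write() copies the data into its internal write buffer
    // and flushes it in the background; no bookkeeping is needed here.
    if (m_tcpSocket.isValid())
        m_tcpSocket.write(strData);
}

If memory use is a concern with multi-hundred-megabyte transfers, you can still throttle the producer by checking m_tcpSocket.bytesToWrite() before generating more data.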
I have a client that sends messages serialized with protobuf to the server via a Linux FIFO. I use ifstream and ofstream for I/O in my code.
If I write like this:
//client
void Client::request() {
    std::ofstream pipeOut;
    pipeOut.open(outputPipeName);
    msg.SerializeToOstream(&pipeOut);
    pipeOut.close();
    ...
}

//server
void Server::process_requests() {
    std::ifstream pipeIn;
    while (isRunning) {
        pipeIn.open(inputPipeName);
        msg.ParseFromIstream(&pipeIn);
        pipeIn.close();
        ...
    }
}
everything works perfectly. But I don't want to constantly open and close the streams. Instead, I want to write something like this:
//client
class Client {
    std::ofstream pipeOut;
};

Client::Client() {
    pipeOut.open(outputPipeName);
}

Client::~Client() {
    pipeOut.close();
}

void Client::request() {
    msg.SerializeToOstream(&pipeOut);
    ...
}

//server
void Server::process_requests() {
    std::ifstream pipeIn;
    pipeIn.open(inputPipeName);
    while (isRunning) {
        msg.ParseFromIstream(&pipeIn);
        ...
    }
    pipeIn.close();
}
but with this code the server blocks inside the ParseFromIstream function and execution goes no further. Can anybody please tell me how to write this correctly?
Try flushing pipeOut after msg.SerializeToOstream(&pipeOut) via ostream's .flush() function. Closing the stream flushes it, which is why the first code example works. When you keep the stream open and write less than a buffer's worth of data, that data isn't made available to the reading side unless/until more data is written to fill the buffer, or a flush operation is done.
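A minimal sketch of that fix, using the member names from the question:

void Client::request() {
    msg.SerializeToOstream(&pipeOut);
    pipeOut.flush(); // push the buffered bytes into the FIFO right away
    ...
}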
As it turned out, the problem was that I used the wrong method for serialization: protobuf didn't know where the message ended and waited for the next part of the message until the pipe was closed. That is why the first version of the code worked and the second one didn't. I managed to fix this behavior using Delimiting Protobuf Messages.
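For reference, delimited writing could look roughly like this with the helpers from google/protobuf/util/delimited_message_util.h (available in newer protobuf releases; verify the exact API against your version):

#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/util/delimited_message_util.h>

// client: prefix each message with its length
google::protobuf::util::SerializeDelimitedToOstream(msg, &pipeOut);
pipeOut.flush();

// server: read exactly one length-prefixed message per call
google::protobuf::io::IstreamInputStream rawInput(&pipeIn);
bool cleanEof = false;
google::protobuf::util::ParseDelimitedFromZeroCopyStream(&msg, &rawInput, &cleanEof);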
Note that I'm using boost's async due to the lack of threading class support in MinGW.
So, I wanted to send a packet every 5 seconds and decided to use boost::async (std::async) for this purpose.
This is the function I use to queue a packet (it actually copies the data into a buffer; the real sending happens in the main application loop, and it works fine outside the async method):
m_sendBuf = new char[1024]; // allocate buffer
[..]

bool CNetwork::Send(const void* sourceBuffer, size_t size) {
    size_t bufDif = m_sendBufSize - m_sendInBufPos;
    if (size > bufDif) {
        return false;
    }
    memcpy(m_sendBuf + m_sendInBufPos, sourceBuffer, size);
    m_sendInBufPos += size;
    return true;
}
Packet sending code:
struct TestPacket {
    unsigned char type;
    int code;
};

void SendPacket() {
    TestPacket myPacket{};
    myPacket.type = 10;
    myPacket.code = 1234;
    Send(&myPacket, sizeof(myPacket));
}
Async code:
void StartPacketSending() {
    SendPacket();
    std::this_thread::sleep_for(std::chrono::seconds{5});
    StartPacketSending(); // Recursive endless call
}
boost::async(boost::launch::async, &StartPacketSending);
Alright. So the thing is, when I call SendPacket() from the async method, the received packet is malformed on the server side and the data differs from what was specified. This doesn't happen when it's called outside the async call.
What is going on here? I'm out of ideas.
I think I have my head wrapped around what you are doing here. You are loading all unsent data into a buffer in one thread and then flushing it from a different thread. Even though the packets aren't overlapping (assuming they are consumed quickly enough), you still need to synchronize all the shared data.
m_sendBuf, m_sendInBufPos, and m_sendBufSize are all being read from the main thread, likely while memcpy or your buffer-size logic is running. I suspect you will need a proper queue to get your program working as intended in the long run, but try protecting those variables with a mutex first.
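A sketch of what that could look like (the mutex member is an addition, not part of the original code; boost::mutex would do as well if std threading is unavailable on your MinGW):

std::mutex m_sendMutex; // guards m_sendBuf, m_sendInBufPos, m_sendBufSize

bool CNetwork::Send(const void* sourceBuffer, size_t size) {
    std::lock_guard<std::mutex> lock(m_sendMutex);
    size_t bufDif = m_sendBufSize - m_sendInBufPos;
    if (size > bufDif) {
        return false;
    }
    memcpy(m_sendBuf + m_sendInBufPos, sourceBuffer, size);
    m_sendInBufPos += size;
    return true;
}

The flushing code in the main loop has to lock the same mutex before it reads m_sendBuf and resets m_sendInBufPos.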
Also, as other commenters have pointed out, the endless recursion is a problem in C++: tail-call elimination isn't guaranteed, so each call can add a stack frame until the stack eventually overflows. That probably doesn't contribute to your malformed packets, though.
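The recursion is easy to avoid with a plain loop:

void StartPacketSending() {
    while (true) { // or check a running flag each iteration
        SendPacket();
        std::this_thread::sleep_for(std::chrono::seconds{5});
    }
}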
Following on from this question, I decided to see whether I could implement proper asynchronous file I/O using (multiple) QFiles. The idea is to use a "pool" of QFile objects operating on a single file and to dispatch requests via the QtConcurrent API, each executed with its own dedicated QFile object. After a task finishes, the result is emitted (in the case of reads) and the QFile object is returned to the pool. My initial tests seem to indicate that this is a valid approach: it does allow concurrent read/write operations (e.g. reading while writing), and it can further help performance (a read can finish in between writes).
The obvious issue is reading and writing the same segment of the file. To see what happens, I used the approach described above to set up exactly that situation and just let it write and read frantically over the same part of the file. To spot possible "corruption", the writes increment a number at the beginning of the segment and the same number at its end. The idea is that if a read ever returns different numbers at the start and at the end, it has read partially written data, and in a real situation it would have read corrupted data.
The reads and writes overlapped a lot, so I know they were happening asynchronously, and yet not a single time was the output "wrong". It basically means that the reads never returned partially written data. At least on Windows. Using the QIODevice::Unbuffered flag did not change this.
I assume that some kind of locking is done at the OS level to prevent this (or is it caching, possibly?); please correct me if this assumption is wrong. I base it on the fact that a read that started after a write started could finish before the write finished. Since I plan to deploy the application on other platforms as well, I was wondering whether I can count on this being the case for all platforms supported by Qt (mainly those based on POSIX, and Android), or whether I need to implement a locking mechanism myself for these situations - i.e. defer reading from a segment that is being written to.
There's nothing in the implementation of QFile that guarantees atomicity of writes. So the idea of using multiple QFile objects to access the same sections of the same underlying file won't ever work right. Your tests on Windows are not indicative of there not being a problem, they are merely insufficient: had they been sufficient, they'd have produced the problem you're expecting.
For highly performant file access in small, possibly overlapping chunks, you have to:
Map the file to memory.
Serialize access to the memory, perhaps using multiple mutexes to improve concurrency.
Access memory concurrently, and don't hold the mutex while the data is paged in.
This is done by first prefetching - either reading from every page in the range of bytes to be accessed, and discarding the results, or using a platform-specific API. Then you lock the mutex and copy the data either out of the file or into it.
The OS does the rest.
class FileAccess : public QObject {
    Q_OBJECT
    QFile m_file;
    QMutex m_mutex;
    uchar * m_area = nullptr;
    void prefetch(qint64 pos, qint64 size);
public:
    FileAccess(const QString & name) : m_file{name} {}
    bool open() {
        if (m_file.open(QIODevice::ReadWrite)) {
            m_area = m_file.map(0, m_file.size());
            if (! m_area) m_file.close();
        }
        return m_area != nullptr;
    }
    void readReq(qint64 pos, qint64 size);
    Q_SIGNAL void readInd(const QByteArray & data, qint64 pos);
    void write(const QByteArray & data, qint64 pos);
};

void FileAccess::prefetch(qint64 pos, qint64 size) {
    const qint64 pageSize = 4096;
    const qint64 pageMask = ~(pageSize - 1); // clear the low bits to get the page start
    for (qint64 offset = pos & pageMask; offset < pos + size; offset += pageSize) {
        volatile uchar * p = m_area + offset;
        (void)(*p); // touch the page so it is resident before the mutex is taken
    }
}

void FileAccess::readReq(qint64 pos, qint64 size) {
    QtConcurrent::run([=]{
        QByteArray result(size, Qt::Uninitialized);
        prefetch(pos, size);
        QMutexLocker lock{&m_mutex};
        memcpy(result.data(), m_area + pos, result.size());
        lock.unlock();
        emit readInd(result, pos);
    });
}

void FileAccess::write(const QByteArray & data, qint64 pos) {
    QtConcurrent::run([=]{
        prefetch(pos, data.size());
        QMutexLocker lock{&m_mutex};
        memcpy(m_area + pos, data.constData(), data.size());
    });
}
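Usage could look roughly like this (a sketch; "data.bin" and the sizes are placeholders, and error handling is omitted):

FileAccess file{"data.bin"};
if (file.open()) {
    QObject::connect(&file, &FileAccess::readInd,
                     [](const QByteArray & data, qint64 pos) {
        // note: without a context object this lambda runs in the
        // emitting (worker) thread
        qDebug() << "read" << data.size() << "bytes at" << pos;
    });
    file.readReq(0, 4096);                        // runs on the thread pool
    file.write(QByteArrayLiteral("hello"), 4096); // so does this
}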
I am developing a Qt interface for a 3D printer. When I click the Print button (the printer starts printing), the interface crashes. I am using this code:
*future = QtConcurrent::run(Imprimir, filename.toUtf8().data());
What can I use to solve it? What types of threads can I use?
I need to use the interface while the printer is printing (it may take several minutes).
Thank you in advance.
Edit:
Imprimir function:
int Imprimir(char *fich)
{
    char *aux = new char;
    FILE *f;
    f = fopen(fich, "r");
    while (!feof(f)) {
        fgets(aux, 200, f);
        Enviar(aux);
        while (!seguir_imprimiendo);
    }
    Sleep(7000);
    return 0;
}
You're making life harder than necessary by not using QFile. When you use QFile, you don't have to deal with silly things like passing C-string filenames around. You're likely to get that wrong, since nothing guarantees that the platform expects them to be encoded in UTF-8. The whole point of Qt is that it helps you avoid such issues: they are taken care of, and the code is tested on multiple platforms to ensure that the behavior is correct in each case.
By not using QByteArray and QFile, you're liable to commit silly mistakes like your classic C bug of allocating a buffer for a single character and then pretending it is 200 characters long.
I see no reason to sleep in that method. It also makes no sense to wait for the continue flag seguir_imprimiendo to change, since Enviar runs in the same thread. It should block until the data is sent.
I presume that you've made Enviar run its code through QtConcurrent::run, too. This is unnecessary and leads to a deadlock: think of what happens if a free thread is never available while Imprimir is running. It's valid for the pool Imprimir runs on to be limited to just one thread; you can't simply pretend that it can't happen.
bool Imprimir(const QString & fileName)
{
    QFile src(fileName);
    if (! src.open(QIODevice::ReadOnly)) return false;
    QByteArray chunk;
    do {
        chunk.resize(4096);
        qint64 read = src.read(chunk.data(), chunk.size());
        if (read < 0) return false;
        if (read == 0) break; // we're done
        chunk.resize(read);
        if (!Enviar(chunk)) return false;
    } while (! src.atEnd());
    return true;
}
bool Enviar(const QByteArray & data)
{
    ...
    return true; // if successful
}
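Launching it then just means passing the QString itself, e.g. (a sketch):

// in the Print button's slot; QtConcurrent::run copies fileName,
// so there is no dangling pointer to worry about
QtConcurrent::run(Imprimir, fileName);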
Assuming there's no problem inside Imprimir, the issue is probably with filename.toUtf8().data(). toUtf8() returns a temporary QByteArray, and the pointer you get from data() is only valid as long as that temporary exists; it is destroyed at the end of the statement, so by the time the worker thread accesses the data it may already be gone, and the code crashes.
You should change the Imprimir function to accept a QString parameter instead of char* to be safe.
If you can't change the Imprimir function (because it's in another library, for example), then you will have to wrap it in your own function which accepts a QString. If you're using C++11, you can use a lambda expression to do the job:
QtConcurrent::run([](QString filename) {
    Imprimir(filename.toUtf8().data());
}, filename);
If not, you will have to write a separate ImprimirWrapper(QString filename) function and invoke it using QtConcurrent::run.
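Such a wrapper could look like this (a sketch):

void ImprimirWrapper(QString filename)
{
    // the QByteArray lives for the whole call, so the char*
    // stays valid while Imprimir is running
    QByteArray utf8 = filename.toUtf8();
    Imprimir(utf8.data());
}

// ...
QtConcurrent::run(ImprimirWrapper, filename);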
I have a question for you.
I have this class:
#define DIMBLOCK 128

#ifndef _BLOCCO_
#define _BLOCCO_
class blocco
{
public:
    int ID;
    char* data;
    blocco(int id);
};
#endif

blocco::blocco(int id)
{
    ID = id;
    data = new char[DIMBLOCK];
}
and the application has a client and a server.
In the main of my server I instantiate an object of this class in this way:
blocco a(1);
After that I open a connection between the client and the server using sockets.
The question is: how can I send this object from the server to the client or viceversa?
Could you help me please?
It's impossible to send objects across a TCP connection in the literal sense. Sockets only know how to transmit and receive a stream of bytes. So what you can do is send a series of bytes across the TCP connection, formatted in such a way that the receiving program knows how to interpret them and create an object that is identical to the one the sending program wanted to send.
That process is called serialization (and deserialization on the receiving side). Serialization isn't built in to the C++ language itself, so you'll need some code to do it. It can be done by hand, or using XML, or via Google's Protocol Buffers, or by converting the object to human-readable-text and sending the text, or any of a number of other ways.
Have a look here for more info.
You can do this using serialization. This means taking the object apart into pieces so you can send those elements over the socket; you then reconstruct the class at the other end of the connection. In Qt there is the QDataStream class, which provides such functionality. In combination with a QByteArray you can create a data package you can send. The idea is simple:
Sender:
QByteArray buffer;
QDataStream out(&buffer, QIODevice::WriteOnly);
out << someData << someMoreData;
Receiver:
QByteArray buffer;
QDataStream in(&buffer, QIODevice::ReadOnly);
in >> someData >> someMoreData;
Now you might want to provide an additional constructor:
class blocco
{
public:
    blocco(QDataStream & in) {
        // construct from QDataStream
    }
    // or
    blocco(int id, char* data) {
        // from data
    }
    int ID;
    char* data;
    blocco(int id);
};
extended example
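For this particular class, the streaming could look roughly like this (a sketch; it assumes data always holds exactly DIMBLOCK bytes, and socket is a connected QTcpSocket*):

// sender
QByteArray buffer;
QDataStream out(&buffer, QIODevice::WriteOnly);
out << qint32(a.ID);
out.writeRawData(a.data, DIMBLOCK);
socket->write(buffer);

// receiver, once the whole buffer has arrived
QDataStream in(&buffer, QIODevice::ReadOnly);
qint32 id;
in >> id;
blocco b(id);
in.readRawData(b.data, DIMBLOCK);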
I don't know how much flak I'll get for this, but I tried this and thought I should share it. I am a beginner at socket programming, so don't get pissed off.
What I did is create an array of characters the size of the class (representing the block of memory on the server side). Then I received that block of memory on the client side and typecast it as an object, and voila!! I managed to send an object from client to server.
Sample code:
blocco *data;
char blockOfData[sizeof(*data)];

if (recv(serverSocket, blockOfData, sizeof(*data), 0) == -1) {
    cerr << "Error while receiving!!" << endl;
    return 1;
}

data = (blocco *)blockOfData;
Now you can do whatever you want with this data, using it as a pointer to the object. Just remember: do not try to delete/free this pointer, as the memory belongs to the array blockOfData, which is on the stack.
Hope this helps if you wanted to implement something like this.
PS: If you think what I've done is a poor way of coding, please let me know why. I don't know why this would be such a bad idea (if it is in fact a bad idea to do this). Thanks!!