Qt threads for a desktop interface - C++

I am developing a Qt interface for a 3D printer. When I click the Print button (the printer starts printing), the interface crashes. I am using this code:
*future = QtConcurrent::run(Imprimir, filename.toUtf8().data());
What can I do to solve this? What types of threads can I use?
I need to use the interface while the printer is printing (it may take several minutes).
Thanks in advance.
Edit:
Imprimir function:
int Imprimir(char *fich)
{
    char *aux = new char;
    FILE *f;
    f = fopen(fich, "r");
    while (!feof(f)) {
        fgets(aux, 200, f);
        Enviar(aux);
        while (!seguir_imprimiendo);
    }
    Sleep(7000);
    return 0;
}

You're making life harder than necessary by not using QFile. With QFile you don't have to deal with silly things like passing C-string filenames around. You're likely to get that wrong anyway, since nothing guarantees that the platform expects them to be encoded in UTF-8. The whole point of Qt is that it helps you avoid such issues: they are taken care of, and the code is tested on multiple platforms to ensure that the behavior is correct in each case.
By not using QByteArray and QFile, you're liable to commit silly mistakes like your classic C bug of allocating a single-character buffer and then pretending it's 200 characters long.
I see no reason to sleep in that method. It also makes no sense to spin-wait on the continue flag seguir_imprimiendo, since Enviar runs in the same thread; it should simply block until the data is sent.
I presume that you've made Enviar run its code through QtConcurrent::run, too. This is unnecessary and can lead to a deadlock: think of what happens if no free thread ever becomes available while Imprimir is running. It's valid for the pool Imprimir runs on to be limited to just one thread; you can't simply pretend that can't happen.
bool Imprimir(const QString &fileName)
{
    QFile src(fileName);
    if (!src.open(QIODevice::ReadOnly)) return false;
    QByteArray chunk;
    do {
        chunk.resize(4096);
        qint64 read = src.read(chunk.data(), chunk.size());
        if (read < 0) return false;
        if (read == 0) break; // we're done
        chunk.resize(read);
        if (!Enviar(chunk)) return false;
    } while (!src.atEnd());
    return true;
}

bool Enviar(const QByteArray &data)
{
    ...
    return true; // if successful
}

Assuming there's no problem with Imprimir itself, the issue is probably with filename.toUtf8().data(). toUtf8() returns a temporary QByteArray, and data() points into that temporary, which is destroyed at the end of the statement. The thread started by QtConcurrent::run may not read the data until after that, so any code accessing it will be reading freed memory and can crash.
You should change the Imprimir function to accept a QString parameter instead of char* to be safe.
If you can't change the Imprimir function (because it's in another library, for example), then you will have to wrap it in your own function which accepts a QString. If you're using C++11, you can use a lambda expression to do the job:
QtConcurrent::run([](QString filename) {
    Imprimir(filename.toUtf8().data());
}, filename);
If not, you will have to write a separate ImprimirWrapper(QString filename) function and invoke it using QtConcurrent::run.

Related

C++ weird async behaviour

Note that I'm using boost::async, due to the lack of threading-class support in MinGW.
So, I wanted to send a packet every 5 seconds and decided to use boost::async (std::async) for this purpose.
This is the function I use to queue a packet (it only copies into the buffer; the actual send happens in the main application loop). It works fine outside the async method:
m_sendBuf = new char[1024]; // allocate buffer
[..]
bool CNetwork::Send(const void* sourceBuffer, size_t size) {
    size_t bufDif = m_sendBufSize - m_sendInBufPos;
    if (size > bufDif) {
        return false;
    }
    memcpy(m_sendBuf + m_sendInBufPos, sourceBuffer, size);
    m_sendInBufPos += size;
    return true;
}
Packet sending code:
struct TestPacket {
    unsigned char type;
    int code;
};

void SendPacket() {
    TestPacket myPacket{};
    myPacket.type = 10;
    myPacket.code = 1234;
    Send(&myPacket, sizeof(myPacket));
}
Async code:
void StartPacketSending() {
    SendPacket();
    std::this_thread::sleep_for(std::chrono::seconds{5});
    StartPacketSending(); // Recursive endless call
}
boost::async(boost::launch::async, &StartPacketSending);
Alright. So the thing is, when I call SendPacket() from the async method, the received packet is malformed on the server side and the data differs from what I specified. This doesn't happen when it's called outside the async call.
What is going on here? I'm out of ideas.
I think I have my head wrapped around what you are doing here: you are loading all unsent data into a buffer in one thread and then flushing it in a different thread. Even though the packets aren't overlapping (assuming they are consumed quickly enough), you still need to synchronize all the shared data.
m_sendBuf, m_sendInBufPos, and m_sendBufSize are all being read from the main thread, likely while memcpy or your buffer-size logic is running. I suspect you will have to use a proper queue to make the program work as intended in the long run, but for a start, try protecting those variables with a mutex.
Also, as other commenters have pointed out, the endless recursion in StartPacketSending will eventually overflow the stack, but that probably does not contribute to your malformed packets.
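As a sketch of the mutex suggestion, assuming the buffer members from the question (the Flush method is a hypothetical stand-in for the main-loop code that drains the buffer):

```cpp
#include <cstddef>
#include <cstring>
#include <mutex>

class CNetwork {
    char        m_sendBuf[1024];
    std::size_t m_sendBufSize  = sizeof(m_sendBuf);
    std::size_t m_sendInBufPos = 0;
    std::mutex  m_sendMutex; // guards the three members above
public:
    bool Send(const void *sourceBuffer, std::size_t size) {
        std::lock_guard<std::mutex> lock(m_sendMutex);
        if (size > m_sendBufSize - m_sendInBufPos)
            return false; // not enough room left
        std::memcpy(m_sendBuf + m_sendInBufPos, sourceBuffer, size);
        m_sendInBufPos += size;
        return true;
    }
    // Hypothetical stand-in for the main-loop code that drains the buffer;
    // the important part is that it takes the same mutex as Send().
    std::size_t Flush(char *out, std::size_t cap) {
        std::lock_guard<std::mutex> lock(m_sendMutex);
        std::size_t n = m_sendInBufPos < cap ? m_sendInBufPos : cap;
        std::memcpy(out, m_sendBuf, n);
        m_sendInBufPos = 0;
        return n;
    }
};
```

With both sides taking the same lock, the producer can never memcpy into the buffer while the consumer is reading it.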

Concurrently processing data. What do I need to watch out for?

I have a routine that is meant to load and parse data from a file. There is a possibility that the data from the same file might need to be retrieved from two places at once, i.e. during a background caching process and from a user request.
Specifically I am using C++11 thread and mutex libraries. We compile with Visual C++ 11 (aka 2012), so are limited by whatever it lacks.
My naive implementation went something like this:
map<wstring, weak_ptr<DataStruct>> data_cache;
mutex data_cache_mutex;

shared_ptr<DataStruct> ParseDataFile(wstring file_path) {
    auto data_ptr = make_shared<DataStruct>();
    /* Parses and processes the data, may take a while */
    return data_ptr;
}

shared_ptr<DataStruct> CreateStructFromData(wstring file_path) {
    lock_guard<mutex> lock(data_cache_mutex);
    auto cache_iter = data_cache.find(file_path);
    if (cache_iter != end(data_cache)) {
        auto data_ptr = cache_iter->second.lock();
        if (data_ptr)
            return data_ptr;
        // reference died, remove it
        data_cache.erase(cache_iter);
    }
    auto data_ptr = ParseDataFile(file_path);
    if (data_ptr)
        data_cache.emplace(make_pair(file_path, data_ptr));
    return data_ptr;
}
My goals were two-fold:
Allow multiple threads to load separate files concurrently
Ensure that a file is only processed once
The problem with my current approach is that it doesn't allow concurrent parsing of multiple files at all. If I understand correctly what will happen, each thread is going to hit the lock and they'll end up processing linearly, one at a time. The order in which the threads pass through the lock may change from run to run, but the end result is the same.
One solution I've considered was to create a second map:
map<wstring, mutex> data_parsing_mutex;

shared_ptr<DataStruct> ParseDataFile(wstring file_path) {
    lock_guard<mutex> lock(data_parsing_mutex[file_path]);
    /* etc. */
    data_parsing_mutex.erase(file_path);
}
But now I have to be concerned with how data_parsing_mutex is being updated. So I guess I need another mutex?
map<wstring, mutex> data_parsing_mutex;
mutex data_parsing_mutex_mutex;

shared_ptr<DataStruct> ParseDataFile(wstring file_path) {
    unique_lock<mutex> super_lock(data_parsing_mutex_mutex);
    lock_guard<mutex> lock(data_parsing_mutex[file_path]);
    super_lock.unlock();
    /* etc. */
    super_lock.lock();
    data_parsing_mutex.erase(file_path);
}
In fact, looking at this, it's not necessarily going to avoid double-processing a file if the background process hasn't completed it by the time the user requests it, unless I check the cache yet again.
But by now my spidey senses are saying There must be a better way. Is there? Would futures, promises, or atomics help me at all here?
From what you described, it sounds like you're trying to do a form of lazy initialization of the DataStruct using a thread pool, along with a reference counted cache. std::async should be able to provide a lot of the dispatch and synchronization necessary for something like this.
Using std::async, the code would look something like this...
map<wstring, weak_ptr<DataStruct>> cache;
map<wstring, shared_future<shared_ptr<DataStruct>>> pending;
mutex cache_mutex, pending_mutex;

shared_ptr<DataStruct> ParseDataFromFile(wstring file) {
    auto data_ptr = make_shared<DataStruct>();
    /* Parses and processes the data, may take a while */
    return data_ptr;
}
shared_ptr<DataStruct> CreateStructFromData(wstring file) {
    shared_future<shared_ptr<DataStruct>> pf;
    shared_ptr<DataStruct> ce;
    {
        lock_guard<mutex> lock(cache_mutex);
        auto ci = cache.find(file);
        if (ci != cache.end()) {
            auto data_ptr = ci->second.lock();
            if (data_ptr)
                return data_ptr;
        }
    }
    {
        lock_guard<mutex> lock(pending_mutex);
        auto fi = pending.find(file);
        if (fi == pending.end()) {
            pf = async(ParseDataFromFile, file).share();
            pending.insert(make_pair(file, pf));
        } else {
            pf = fi->second;
        }
    }
    pf.wait();
    ce = pf.get();
    {
        lock_guard<mutex> lock(cache_mutex);
        auto ci = cache.find(file);
        if (ci == cache.end() || ci->second.expired())
            cache[file] = ce;
    }
    {
        lock_guard<mutex> lock(pending_mutex);
        pending.erase(file);
    }
    return ce;
}
This can probably be optimized a bit, but the general idea should be the same.
On a typical computer there is little point in trying to load files concurrently, since disk access will be the bottleneck. Instead, it's better to have a single thread load files (or use asynchronous I/O) and dish out the parsing to a thread pool. Then store the results in a shared container.
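As a rough sketch of that shape, assuming std::async is acceptable as the "thread pool" (ParseData and LoadAndParseAll are made-up names, and the disk read is faked with in-memory strings):

```cpp
#include <cstddef>
#include <future>
#include <string>
#include <vector>

// Made-up stand-in for the CPU-heavy parsing step.
static std::size_t ParseData(const std::string &raw) {
    return raw.size(); // pretend "parsing" just measures the data
}

// The caller performs the (serial) reads itself, while parsing of each
// chunk is handed off to other threads via std::async.
std::vector<std::size_t> LoadAndParseAll(const std::vector<std::string> &files) {
    std::vector<std::future<std::size_t>> jobs;
    for (std::size_t i = 0; i < files.size(); ++i) {
        // In real code, the file would be read from disk here, serially.
        jobs.push_back(std::async(std::launch::async, ParseData, files[i]));
    }
    std::vector<std::size_t> results;
    for (auto &job : jobs)
        results.push_back(job.get());
    return results;
}
```

The single loop doing the "I/O" never competes with itself for the disk, while parsing proceeds in parallel.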
Regarding preventing double work, you should consider if this is really necessary. If you are only doing this out of premature optimization, you'd probably make users happier by focussing on making the program responsive, rather than efficient. That is, make sure the user gets what they ask for quickly, even if it means doing double work.
OTOH, if there is a technical reason for not parsing a file twice, you can keep track of the status of each file (loading, parsing, parsed) in the shared container.
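A minimal sketch of such status tracking, assuming a mutex plus condition variable is available in your toolchain (StatusTable and its method names are invented for illustration):

```cpp
#include <condition_variable>
#include <map>
#include <mutex>
#include <string>
#include <utility>

enum class FileStatus { Loading, Parsing, Parsed };

// Invented shared container tracking per-file progress.
class StatusTable {
    std::map<std::wstring, FileStatus> m_status;
    std::mutex m_mutex;
    std::condition_variable m_changed;
public:
    // Returns true if the caller is the first one for this file and
    // should do the work itself; false if someone else already claimed it.
    bool claim(const std::wstring &file) {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_status.insert(std::make_pair(file, FileStatus::Loading)).second;
    }
    void markParsed(const std::wstring &file) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_status[file] = FileStatus::Parsed;
        }
        m_changed.notify_all();
    }
    // Blocks until the file reaches the Parsed state.
    void waitParsed(const std::wstring &file) {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_changed.wait(lock, [&] {
            auto it = m_status.find(file);
            return it != m_status.end() && it->second == FileStatus::Parsed;
        });
    }
};
```

A second requester that loses the claim() race simply calls waitParsed() instead of parsing the file again.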

Qt QTcpSocket async write

I'm trying to write a dynamic data to a QTcpSocket, this is how I implement it:
connect(&m_tcpSocket, SIGNAL(bytesWritten(qint64)), SLOT(written(qint64)));
//...
void MyClass::written(qint64 iBytes)
{
    if (iBytes > 0)
        m_strWrite = m_strWrite.mid(iBytes);
    if (m_strWrite.length() < 1) {
        if (m_hHandle->isDone())
            m_tcpSocket.disconnectFromHost();
    } else if (m_tcpSocket.isValid()) {
        m_tcpSocket.write(m_strWrite);
    }
}
//...
void MyClass::dataReady(const QByteArray &strData)
{
    bool bWrite = m_strWrite.isEmpty();
    m_strWrite.append(strData);
    if (bWrite)
        written(0);
}
dataReady is a slot which is called whenever there is some data ready, and strData is at most 8192 bytes.
This method works perfectly, but when the data is huge (>500 MB), strange things happen: sometimes much more data is written than I expect, sometimes some data is missing, and sometimes nothing is written after a while.
I just want a dynamic buffer to be written to the socket. Is there another way to do that?
QTcpSocket maintains its own write buffer, and Qt does not limit its size. Just call m_tcpSocket.write(strData) from dataReady and let the socket drain it; there is no need to re-slice your own buffer on every bytesWritten signal.

Pausing and resuming parsing in Ragel

I'm using Ragel to parse a string in C++. I need to be able to pause parsing for some indefinite time and then resume parsing where I left off.
Right now I'm trying to do this by putting an fbreak at the end of a finishing action. This seems to work fine, relinquishing control back to the parent program. However, I'm not sure how to resume parsing. I thought that just calling the code generated by %write exec would be enough, but this doesn't seem to be the case. When it gets back into parsing, the reference to the original string seems to be wrong/lost.
Not sure if I'm doing something wrong in C++ here (it's not my native tongue) or if I'm taking the wrong approach with Ragel.
Here's my start and resume code:
const char *p;
const char *pe;

void start()
{
    int len = theString.length();
    char chars[len + 1];
    theString.toCharArray(chars, len + 1);
    p = chars;
    pe = chars + len;
    resume();
}

void resume() {
    %% write exec;
}
The first time I call start(), my state machine eventually fbreaks out, and then I call resume() to (hopefully) continue parsing.
Any pointers on what I might be doing wrong?
Looks like it was some sort of dangling pointer problem: chars is a stack-local array in start(), so p and pe no longer point at valid memory once start() returns. Moving the string data somewhere that outlives start() fixed the problem. So basically, I still suck at C.
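For what it's worth, the usual fix is to keep the buffer somewhere that outlives start(), e.g. as a class member, so p and pe stay valid across resume() calls. A plain C++ sketch (Parser and its members are invented; the Ragel-generated code is elided):

```cpp
#include <cstddef>
#include <string>

class Parser {
    std::string buf;          // owns the data for the machine's lifetime
    const char *p  = nullptr; // Ragel's current-position pointer
    const char *pe = nullptr; // Ragel's end-of-data pointer
public:
    void start(const std::string &input) {
        buf = input;          // copy the data, so p/pe stay valid after start() returns
        p  = buf.data();
        pe = buf.data() + buf.size();
        resume();
    }
    void resume() {
        // %% write exec;  -- the Ragel-generated code would run here (elided)
    }
    std::size_t remaining() const { return static_cast<std::size_t>(pe - p); }
};
```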

Segfault accessing classes across threads

I'm a bit stumped on an issue I'm having with threading and C++. I'm writing a DSP plugin for Windows Media Player, and I want to send the data I intercept to a separate thread where I'll send it out on the network. I'm using a simple producer-consumer queue like the one explained here
The program is crashing on the isFull() function which just compares two integers:
bool ThreadSafeQueue::isFull()
{
    if (inCount == outCount) // CRASH!
        return true;
    else
        return false;
}
The thread that's doing the dequeuing:
void WMPPlugin::NetworkThread(LPVOID pParam)
{
    ThreadSafeQueue* dataQueue = (ThreadSafeQueue*)(pParam);
    while (!networkThreadDone)
    {
        Sleep(2); // so we don't hog the processor or make a race condition
        if (!dataQueue->isFull())
            short s = dataQueue->dequeue();
        if (networkThreadDone) // variable set in another process so we know to exit
            break;
    }
}
The constructor of the class that's creating the consumer thread:
WMPPlugin::WMPPlugin()
{
    // etc etc
    dataQueue = new ThreadSafeQueue();
    _beginthread(WMPPlugin::NetworkThread, 0, dataQueue);
}
inCount and outCount are just integers, and they're only read here, not written. I was under the impression that this made them thread-safe. The parts that write them aren't included, but each variable is only written by one thread, never by both. I've done my best not to include code that I don't feel is the issue, but I can include more if necessary. Thanks in advance for any help.
Most often, when a crash happens accessing a normal member variable, it means this is NULL or an invalid address.
Are you sure you aren't invoking it on a NULL instance?
Regarding this line:
ThreadSafeQueue* dataQueue = (ThreadSafeQueue*)(pParam);
How sure are you that pParam is always non-NULL?
How sure are you that pParam is always a ThreadSafeQueue object?
Are you possibly deleting the ThreadSafeQueue object on another thread?
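To make those checks concrete, here's a small sketch (the AsQueue helper is made up for illustration; in the real code the same guard would sit at the top of NetworkThread before the queue is ever dereferenced):

```cpp
// Stand-in for the queue type from the question.
struct ThreadSafeQueue {
    int inCount  = 0;
    int outCount = 0;
};

// Hypothetical helper: validate the thread parameter before using it,
// instead of blindly casting and dereferencing a possibly-null pointer.
ThreadSafeQueue *AsQueue(void *pParam) {
    if (pParam == nullptr)
        return nullptr; // caller must handle this and bail out of the thread
    return static_cast<ThreadSafeQueue *>(pParam);
}
```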