My Qt application relies on timer events (startTimer()/killTimer()) to animate GUI components. Recently, however, I compiled and ran the app on my Mac laptop (as opposed to the Windows desktop I had been developing on) and found that everything now appears to run/update at half its usual speed.
The application is not lagging; the update rate is simply lower than it was originally. What should I do to guarantee consistent timing across all platforms?
Alternatively, should I be using a different facility for these timer events? I would prefer not to, as timerEvent is extremely convenient for integrating update cycles into widgets, but I would be interested if an alternative provides consistent timing.
(basic example code for context):
// Once MyObject is created, it counts to 20.
// The time this takes is noticeably different on each platform, though.
class MyObject : public QObject {
public:
    MyObject() {
        timerId = startTimer(60);
    }

protected:
    void timerEvent(QTimerEvent* event) override {
        qDebug() << (counter++);
        if (counter == 20) {
            killTimer(timerId);
        }
        QObject::timerEvent(event);
    }

private:
    int timerId = -1, counter = 0;
};
You are likely seeing problems due to accuracy. QTimer's accuracy varies across platforms; from the Qt documentation:
Note that QTimer's accuracy depends on the underlying operating system and hardware. The timerType argument allows you to customize the accuracy of the timer. See Qt::TimerType for information on the different timer types. Most platforms support an accuracy of 20 milliseconds; some provide more. If Qt is unable to deliver the requested number of timer events, it will silently discard some.
You could try passing Qt::PreciseTimer to startTimer (the default is Qt::CoarseTimer), but additionally I recommend checking the current timestamp against some start time or against the timestamp of the previous tick. This will allow you to adjust how you handle the varying amounts of time between timer events. This is not dissimilar to how time steps are sometimes handled in games.
For example:
class MyObject : public QObject {
public:
    MyObject() {
        timerId = startTimer(60, Qt::PreciseTimer);
        startTime = std::chrono::steady_clock::now();
    }

protected:
    void timerEvent(QTimerEvent* event) override {
        qDebug() << (counter++);
        // Stop once 20 nominal 60 ms periods have elapsed in wall-clock time,
        // however many timer events were actually delivered.
        auto elapsedMs = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - startTime).count();
        if (elapsedMs / 60 >= 20) {
            killTimer(timerId);
        }
        QObject::timerEvent(event);
    }

private:
    int timerId = -1, counter = 0;
    std::chrono::steady_clock::time_point startTime;
};
Another example using QElapsedTimer:
class MyObject : public QObject {
public:
    MyObject() {
        timerId = startTimer(60, Qt::PreciseTimer);
        elapsedTimer.start();
    }

protected:
    void timerEvent(QTimerEvent* event) override {
        qDebug() << (counter++);
        if (elapsedTimer.elapsed() / 60 >= 20) {
            killTimer(timerId);
        }
        QObject::timerEvent(event);
    }

private:
    int timerId = -1, counter = 0;
    QElapsedTimer elapsedTimer;
};
I'm using libsourcey which uses libuv as its underlying I/O networking layer.
Everything is set up and seems to run (I haven't tested anything yet, since I'm only prototyping and experimenting). However, I need the application loop (the one that comes with libsourcey, which relies on libuv's loop) to also call an "idle function". As it stands, the idle callback fires on every loop iteration, which consumes a lot of CPU. I need a way to limit the call rate of the uv_idle_cb without blocking the calling thread, which is the same thread the application uses to process I/O data (not sure about this last statement; correct me if I'm mistaken).
The idle function will manage several different aspects of the application, and it only needs to run x times per second. Also, everything needs to run on the same thread (I'm planning to upgrade an older application's network infrastructure, which runs entirely single-threaded).
This is the code I have so far. It also includes my test of sleeping the thread inside the callback, but that blocks everything, so even the 2nd idle callback I set up ends up with the same call rate as the 1st.
struct TCPServers
{
CTCPManager<scy::net::SSLSocket> ssl;
};
int counter = 0;
void idle_cb(uv_idle_t *handle)
{
printf("Idle callback %d TID %d\n", counter, std::this_thread::get_id());
counter++;
std::this_thread::sleep_for(std::chrono::milliseconds(1000 / 25));
}
int counter2 = 0;
void idle_cb2(uv_idle_t *handle)
{
printf("Idle callback2 %d TID %d\n", counter2, std::this_thread::get_id());
counter2++;
std::this_thread::sleep_for(std::chrono::milliseconds(1000 / 50));
}
class CApplication : public scy::Application
{
public:
CApplication() : scy::Application(), m_uvIdleCallback(nullptr), m_bUseSSL(false)
{}
void start()
{
run();
if (m_uvIdleCallback)
uv_idle_start(&m_uvIdle, m_uvIdleCallback);
if (m_uvIdleCallback2)
uv_idle_start(&m_uvIdle2, m_uvIdleCallback2);
}
void stop()
{
scy::Application::stop();
uv_idle_stop(&m_uvIdle);
if (m_bUseSSL)
scy::net::SSLManager::instance().shutdown();
}
void bindIdleEvent(uv_idle_cb cb)
{
m_uvIdleCallback = cb;
uv_idle_init(loop, &m_uvIdle);
}
void bindIdleEvent2(uv_idle_cb cb)
{
m_uvIdleCallback2 = cb;
uv_idle_init(loop, &m_uvIdle2);
}
void initSSL(const std::string& privateKeyFile = "", const std::string& certificateFile = "")
{
scy::net::SSLManager::instance().initNoVerifyServer(privateKeyFile, certificateFile);
m_bUseSSL = true;
}
private:
uv_idle_t m_uvIdle;
uv_idle_t m_uvIdle2;
uv_idle_cb m_uvIdleCallback;
uv_idle_cb m_uvIdleCallback2;
bool m_bUseSSL;
};
int main()
{
CApplication app;
app.bindIdleEvent(idle_cb);
app.bindIdleEvent2(idle_cb2);
app.initSSL();
app.start();
TCPServers srvs;
srvs.ssl.start("127.0.0.1", 9000);
app.waitForShutdown([&](void*) {
srvs.ssl.shutdown();
});
app.stop();
system("PAUSE");
return 0;
}
Thanks in advance if anyone can help out.
Solved the problem by using uv_timer_t and uv_timer_cb (I hadn't yet dug into libuv's docs). CPU usage went down drastically and nothing gets blocked.
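For anyone hitting the same issue, a minimal sketch of that approach (assuming libuv 1.x; the handle names and the 25 Hz rate are just illustrative):
#include <uv.h>
#include <cstdio>

static int counter = 0;

// Invoked by the loop on schedule; no busy-spinning between ticks.
static void timer_cb(uv_timer_t* handle)
{
    printf("Timer callback %d\n", counter++);
}

int main()
{
    uv_loop_t* loop = uv_default_loop();
    uv_timer_t timer;
    uv_timer_init(loop, &timer);
    // First fire after 0 ms, then repeat every 1000 / 25 = 40 ms (25 Hz).
    uv_timer_start(&timer, timer_cb, 0, 1000 / 25);
    return uv_run(loop, UV_RUN_DEFAULT);
}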
First of all, I did look at the other topics on this website and found that they don't relate to my problem, as those mostly deal with I/O operations or thread-creation overhead. My problem is that my thread pool (worker-task structure) implementation is, in this case, a lot slower than single-threading. I'm really confused by this and not sure whether it's the ThreadPool, the task itself, how I test it, the nature of threads, or something out of my control.
// Sorry for the long code
#include <vector>
#include <queue>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <future>
#include <chrono>
#include "task.hpp"
class ThreadPool
{
public:
ThreadPool()
{
for (unsigned i = 0; i < std::thread::hardware_concurrency() - 1; i++)
m_workers.emplace_back(this, i);
m_running = true;
for (auto&& worker : m_workers)
worker.start();
}
~ThreadPool()
{
m_running = false;
m_task_signal.notify_all();
for (auto&& worker : m_workers)
worker.terminate();
}
void add_task(Task* task)
{
{
std::unique_lock<std::mutex> lock(m_in_mutex);
m_in.push(task);
}
m_task_signal.notify_one();
}
private:
class Worker
{
public:
Worker(ThreadPool* parent, unsigned id) : m_parent(parent), m_id(id)
{}
~Worker()
{
terminate();
}
void start()
{
m_thread = new std::thread(&Worker::work, this);
}
void terminate()
{
if (m_thread)
{
if (m_thread->joinable())
{
m_thread->join();
delete m_thread;
m_thread = nullptr;
m_parent = nullptr;
}
}
}
private:
void work()
{
while (m_parent->m_running)
{
std::unique_lock<std::mutex> lock(m_parent->m_in_mutex);
m_parent->m_task_signal.wait(lock, [&]()
{
return !m_parent->m_in.empty() || !m_parent->m_running;
});
if (!m_parent->m_running) break;
Task* task = m_parent->m_in.front();
m_parent->m_in.pop();
// Fixed the mutex being locked while the task is executed
lock.unlock();
task->execute();
}
}
private:
ThreadPool* m_parent = nullptr;
unsigned m_id = 0;
std::thread* m_thread = nullptr;
};
private:
std::vector<Worker> m_workers;
std::mutex m_in_mutex;
std::condition_variable m_task_signal;
std::queue<Task*> m_in;
bool m_running = false;
};
class TestTask : public Task
{
public:
TestTask() {}
TestTask(unsigned number) : m_number(number) {}
inline void Set(unsigned number) { m_number = number; }
void execute() override
{
if (m_number <= 3)
{
m_is_prime = m_number > 1;
return;
}
else if (m_number % 2 == 0 || m_number % 3 == 0)
{
m_is_prime = false;
return;
}
else
{
for (unsigned i = 5; i * i <= m_number; i += 6)
{
if (m_number % i == 0 || m_number % (i + 2) == 0)
{
m_is_prime = false;
return;
}
}
m_is_prime = true;
return;
}
}
public:
unsigned m_number = 0;
bool m_is_prime = false;
};
int main()
{
ThreadPool pool;
unsigned num_tasks = 1000000;
std::vector<TestTask> tasks(num_tasks);
for (auto&& task : tasks)
task.Set(randint(0, 1000000000));
auto s = std::chrono::high_resolution_clock::now();
#if MT
for (auto&& task : tasks)
pool.add_task(&task);
#else
for (auto&& task : tasks)
task.execute();
#endif
auto e = std::chrono::high_resolution_clock::now();
double seconds = std::chrono::duration_cast<std::chrono::nanoseconds>(e - s).count() / 1000000000.0;
}
Benchmarks with VS2013 Profiler:
10,000,000 tasks:
MT:
13 seconds of wall clock time
93.36% is spent in msvcp120.dll
3.45% is spent in Task::execute() // Not good here
ST:
0.5 seconds of wall clock time
97.31% is spent with Task::execute()
Usual disclaimer in such answers: the only way to tell for sure is to measure it with a profiler tool.
But I will try to explain your results without one. First of all, you have a single mutex shared across all your threads, so only one thread at a time can execute a task. That kills any gains you might have: despite the threads, your code is effectively serial. So, at the very least, move the task execution out of the mutex. You only need to hold the mutex to take a task off the queue; you don't need to hold it while the task executes.
Next, your tasks are so simple that a single thread will execute them in no time; you just can't measure any gains with such tasks. Create some heavier tasks that could produce more interesting results (tasks that are closer to the real world, not such contrived ones).
And the third point: threads are not free; there is context switching, mutex contention, and so on. To see real gains, as the previous two points say, your tasks need to take more time than the overhead the threads introduce, and the code should be truly parallel rather than waiting on some shared resource that serializes it.
UPD: I looked at the wrong part of the code. The task is complex enough provided you create tasks with sufficiently large numbers.
UPD2: I've played with your code and found a good prime number to show how the MT code can do better. Use the following prime: 1019048297. It provides enough computational complexity to show the difference.
But why doesn't your code produce good results? It is hard to tell without seeing the implementation of randint(), but I take it that it's pretty simple: in half the cases it returns even numbers, and the other cases don't produce many big primes either. So the tasks are so simple that context switching and the other overheads around your particular implementation (and threads in general) consume more time than the computation itself. Using the prime I gave gives the tasks no choice but to spend time computing: there is no early exit, since the number is big and actually prime. That's why the big number will give you the answer you seek: better times for the MT code.
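For reference, a quick way to reproduce that measurement is to skip randint() and assign every task the known prime (a sketch against the main() above):
// Every task now has to run the full trial-division loop in execute().
for (auto&& task : tasks)
    task.Set(1019048297u);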
You should not hold the mutex while the task is getting executed, otherwise other threads will not be able to get a task:
void work() {
while (m_parent->m_running) {
Task* currentTask = nullptr;
std::unique_lock<std::mutex> lock(m_parent->m_in_mutex);
m_parent->m_task_signal.wait(lock, [&]() {
return !m_parent->m_in.empty() || !m_parent->m_running;
});
if (!m_parent->m_running) continue;
currentTask = m_parent->m_in.front();
m_parent->m_in.pop();
lock.unlock(); //<- Release the lock so that other threads can get tasks
currentTask->execute();
currentTask = nullptr;
}
}
For MT, how much time is spent in each phase of the "overhead": std::unique_lock, m_task_signal.wait, front, pop, unlock?
Based on your results of only 3% useful work, this means the above consumes 97%. I'd get numbers for each part of the above (e.g. add timestamps between each call).
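As a sketch, that instrumentation could look like the following drop-in variant of Worker::work (the thread_local accumulators are hypothetical, declared at file scope, and would be dumped when the pool shuts down):
// Hypothetical per-thread phase accumulators, in steady_clock ticks.
thread_local long long lock_ns = 0, wait_ns = 0, queue_ns = 0, work_ns = 0;

void work()
{
    using clock = std::chrono::steady_clock;
    while (m_parent->m_running)
    {
        auto t0 = clock::now();
        std::unique_lock<std::mutex> lock(m_parent->m_in_mutex);
        auto t1 = clock::now(); // mutex acquisition
        m_parent->m_task_signal.wait(lock, [&]()
        {
            return !m_parent->m_in.empty() || !m_parent->m_running;
        });
        auto t2 = clock::now(); // condition-variable wait
        if (!m_parent->m_running) break;
        Task* task = m_parent->m_in.front();
        m_parent->m_in.pop();
        lock.unlock();
        auto t3 = clock::now(); // dequeue + unlock
        task->execute();
        auto t4 = clock::now(); // useful work
        lock_ns  += (t1 - t0).count();
        wait_ns  += (t2 - t1).count();
        queue_ns += (t3 - t2).count();
        work_ns  += (t4 - t3).count();
    }
}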
It seems to me that the code you use to [merely] dequeue the next task pointer is quite heavyweight. I'd use a much simpler [possibly lockless] queue mechanism. Or, perhaps, use atomics to bump an index into the queue instead of the five-step process above. For example:
void
work()
{
while (m_parent->m_running) {
// NOTE: this is just an example, not necessarily the real function
int curindex = atomic_increment(&global_index);
if (curindex >= max_index)
break;
Task *task = m_parent->m_in[curindex];
task->execute();
}
}
Also, maybe you should pop [say] ten at a time instead of just one, as in the sketch below.
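A sketch of that batched variant, reusing the pool's existing members (the batch size of 10 is arbitrary):
void work()
{
    while (m_parent->m_running)
    {
        Task* batch[10];
        std::size_t count = 0;
        {
            std::unique_lock<std::mutex> lock(m_parent->m_in_mutex);
            m_parent->m_task_signal.wait(lock, [&]()
            {
                return !m_parent->m_in.empty() || !m_parent->m_running;
            });
            if (!m_parent->m_running) break;
            // Drain up to 10 tasks under a single lock acquisition.
            while (count < 10 && !m_parent->m_in.empty())
            {
                batch[count++] = m_parent->m_in.front();
                m_parent->m_in.pop();
            }
        } // lock released here
        for (std::size_t i = 0; i < count; i++)
            batch[i]->execute();
    }
}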
You might also be memory bound and/or "task switch" bound. For example, for threads that access an array, more than four threads usually saturates the memory bus. You could also have heavy contention for the lock, such that threads get starved because one thread is monopolizing it [indirectly, even with the new unlock call].
Interthread locking usually involves a "serialization" operation where other cores must synchronize their out-of-order execution pipelines.
Here's a "lockless" implementation:
void
work()
{
// assume m_id is 0,1,2,...
int curindex = m_id;
while (m_parent->m_running) {
if (curindex >= max_index)
break;
Task *task = m_parent->m_in[curindex];
task->execute();
curindex += NUMBER_OF_WORKERS;
}
}
I am using Qt 4.7 and have a timer that executes a function 2 times or 4 times a second, based on what type of sound is required.
void Keypad::SoundRequest(bool allowBeeps, int beepType)
{
// Clear any Previous active Timer
if (sound1Timer->isActive() == true)
{
sound1Timer->stop();
}
else if(sound2Timer->isActive() == true)
{
sound2Timer->stop();
}
// Zero out main
(void) memset(&soundmain_t, (char) 0, sizeof(soundmain_t));
// Zero out sound1
(void) memset(&sound1_t, (char) 0, sizeof(sound1_t));
// Zero out the sound2
(void) memset(&sound2_t, (char) 0, sizeof(sound2_t));
if (allowBeeps == true)
{
if(beepType == OSBEEP)
{
sound1Timer->start(500); // 250mS On / 250mS off called every 500mS = 2HZ
}
else if (beepType == DOORBEEP)
{
sound2Timer->start(250); // 125mS On / 125mS off called every 250mS = 4HZ
}
}
else if (allowBeeps == false)
{
//Shut the Beeper Down
if (sound1Timer->isActive() == true)
{
sound1Timer->stop();
}
else if(sound2Timer->isActive() == true)
{
sound2Timer->stop();
}
SOUND_BLAST(0, &soundmain_t);
}
}
Constructor:
sound1Timer = new QTimer(this);
sound2Timer = new QTimer(this);
connect(sound1Timer, SIGNAL(timeout()), this, SLOT(sound1Handler()));
connect(sound2Timer, SIGNAL(timeout()), this, SLOT(sound2Handler()));
SLOTS:
void Keypad::sound1Handler()
{
// Sound a 250mS chirp
SOUND_BLAST(0, &sound1_t);
}
// Public SLOT, Called by sound2Timer()
// Sounds a single 125mS Beep
void Keypad::sound2Handler()
{
// Emit a Single 125mS chirp
SOUND_BLAST(0, &sound2_t);
}
The timer is mostly accurate, but it is not exactly 2 Hz or 4 Hz all the time. To improve accuracy, I was thinking of using a faster timer of, say, 25 ms, letting it free-run, and sounding the beep each time it accumulates to 250 ms or 125 ms. However, I am not sure this would make it more accurate.
Should I measure execution time with QElapsedTimer and subtract the overhead from the sound1Timer and sound2Timer intervals? Is there a better way to do this?
The accuracy of the timer is limited by the operating system. The Qt library uses operating system timers "under the covers".
If you need high accuracy I would have a timer subroutine that reads a hardware based clock to establish the timing of events. You'll need to dig into your operating system documentation to get the details.
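If you stay in software, one sketch of the compensation idea: drive a single-shot QTimer and compute each next interval from a monotonic QElapsedTimer (available since Qt 4.7), so per-shot jitter does not accumulate into long-term drift. The member names and the onTick slot are hypothetical:
// In the constructor: schedule beeps on a fixed 500 ms grid.
m_elapsed.start();            // QElapsedTimer member
m_nextDeadlineMs = 500;       // qint64 member
m_tick = new QTimer(this);
m_tick->setSingleShot(true);
connect(m_tick, SIGNAL(timeout()), this, SLOT(onTick()));
m_tick->start(500);

void Keypad::onTick()
{
    SOUND_BLAST(0, &sound1_t);
    m_nextDeadlineMs += 500;
    // Re-arm relative to the monotonic clock; any lateness of this shot
    // shortens the next interval instead of accumulating.
    qint64 wait = m_nextDeadlineMs - m_elapsed.elapsed();
    m_tick->start(static_cast<int>(qMax<qint64>(0, wait)));
}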
I have the following code, which starts multiple threads (a thread pool) at the very beginning (startWorkers()). Subsequently, at some point, I have a container full of myWorkObject instances, which I want to process using multiple worker threads simultaneously. The myWorkObject instances are completely isolated from one another in terms of memory usage. For now, let's assume myWorkObject has a method doWorkIntenseStuffHere() which takes some CPU time to compute.
When benchmarking the following code, I noticed that it does not scale well with the number of threads: the overhead of initializing/synchronizing the worker threads exceeds the benefit of multithreading unless there are 3-4 threads active. I've looked into this and read about the false-sharing problem, and I assume my code suffers from it. However, I'd like to debug/profile my code to see whether some kind of starvation or false sharing is actually going on. How can I do this? Please feel free to criticize anything about my code, as I'm still learning a lot about memory/CPU behavior and multithreading in particular.
#include <boost/thread.hpp>
class MultiThreadedFitnessProcessingStrategy
{
public:
MultiThreadedFitnessProcessingStrategy(unsigned int numWorkerThreads):
_startBarrier(numWorkerThreads + 1),
_endBarrier(numWorkerThreads + 1),
_started(false),
_shutdown(false),
_numWorkerThreads(numWorkerThreads)
{
assert(_numWorkerThreads > 0);
}
virtual ~MultiThreadedFitnessProcessingStrategy()
{
stopWorkers();
}
void startWorkers()
{
_shutdown = false;
_started = true;
for(unsigned int i = 0; i < _numWorkerThreads;i++)
{
boost::thread* workerThread = new boost::thread(
boost::bind(&MultiThreadedFitnessProcessingStrategy::workerTask, this,i)
);
_threadQueue.push_back(new std::queue<myWorkObject::ptr>());
_workerThreads.push_back(workerThread);
}
}
void stopWorkers()
{
_startBarrier.wait();
_shutdown = true;
_endBarrier.wait();
for(unsigned int i = 0; i < _numWorkerThreads;i++)
{
_workerThreads[i]->join();
}
}
void workerTask(unsigned int id)
{
//Wait until all worker threads have started.
while(true)
{
//Wait for any input to become available.
_startBarrier.wait();
bool queueEmpty = false;
std::queue<SomeClass::ptr >* myThreadq(_threadQueue[id]);
while(!queueEmpty)
{
myWorkObject::ptr chromosome;
//Make sure queue is not empty,
//Caution: this is necessary because the start barrier can be triggered without queue input (e.g., on shutdown).
//Do not try to be smart and refactor this without knowing what you are doing!
queueEmpty = myThreadq->empty();
if(!queueEmpty)
{
chromosome = myThreadq->front();
assert(chromosome);
myThreadq->pop();
}
if(chromosome)
{
chromosome->doWorkIntenseStuffHere();
}
}
}
//Wait until all worker threads have synchronized.
_endBarrier.wait();
if(_shutdown)
{
return;
}
}
}
void doWork(const myWorkObject::chromosome_container &refcontainer)
{
if(!_started)
{
startWorkers();
}
unsigned int j = 0;
for(myWorkObject::chromosome_container::const_iterator it = refcontainer.begin();
it != refcontainer.end();++it)
{
if(!(*it)->hasFitness())
{
assert(*it);
_threadQueue[j%_numWorkerThreads]->push(*it);
j++;
}
}
//Start Signal!
_startBarrier.wait();
//Wait for workers to be complete
_endBarrier.wait();
}
unsigned int getNumWorkerThreads() const
{
return _numWorkerThreads;
}
bool isStarted() const
{
return _started;
}
private:
boost::barrier _startBarrier;
boost::barrier _endBarrier;
bool _started;
bool _shutdown;
unsigned int _numWorkerThreads;
std::vector<boost::thread*> _workerThreads;
std::vector< std::queue<myWorkObject::ptr >* > _threadQueue;
};
Sampling-based profiling can give you a pretty good idea whether you're experiencing false sharing. Here's a previous thread that describes a few ways to approach the issue. I don't think that thread mentioned Linux's perf utility. It's a quick, easy and free way to count cache misses that might tell you what you need to know (am I experiencing a significant number of cache misses that correlates with how many times I'm accessing a particular variable?).
If you do find that your threading scheme might be causing a lot of conflict misses, you could try declaring your myWorkObject instances or the data contained within them that you're actually concerned about with __attribute__((aligned(64))) (alignment to 64 byte cache lines).
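For instance, a minimal sketch (alignas(64) is the portable C++11 spelling of that attribute; the struct and array names are made up for illustration):
// Give each worker's hot data its own 64-byte cache line so that writes
// by one thread cannot invalidate the line another thread is reading.
struct alignas(64) PaddedSlot
{
    unsigned long value = 0;
};

PaddedSlot perThreadSlots[8]; // one slot per worker thread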
If you're on Linux, there is a tool called valgrind, one of whose modules simulates cache effects (cachegrind). Please take a look at
http://valgrind.org/docs/manual/cg-manual.html
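A typical run looks like this; your_program stands in for the real binary, and cg_annotate maps the collected miss counts back to source lines:
valgrind --tool=cachegrind ./your_program
cg_annotate cachegrind.out.<pid>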
Hi,
I'm writing my first Qt program and am now running into trouble with:
QObject::killTimer: timers cannot be stopped from another thread
QObject::startTimer: timers cannot be started from another thread
My program communicates with a CANopen bus; for that I'm using the CanFestival stack. CanFestival works with callback methods. To detect communication timeouts I set up a timer facility (somewhat like a watchdog). My timer package consists of a "tmr" module, a "TimerForFWUpgrade" module, and a "SingleTimer" module. The "tmr" module was originally programmed in C, so the static "TimerForFWUpgrade" methods interface to it. The "tmr" module will be part of a C-programmed firmware update package.
The timer works as follows. Before a message is sent, I call the TMR_Set method. Then, in my idle program loop, I check for a timer underflow with TMR_IsElapsed. If TMR_IsElapsed reports a timeout, I do the error handling. As you can see, the TMR_Set method is called continuously and restarts the QTimer again and again.
The errors noted above appear when I start my program. Can you tell me whether my concept can work? Why do these errors appear? Do I have to use additional threads (QThread) besides the main thread?
Thank you
Matt
Run and Idle loop:
void run()
{
// start communicating with callbacks, where TMR_Set is called continuously
...
while(TMR_IsElapsed(TMR_NBR_CFU) != 1);
// once TMR_IsElapsed reports 1, do the error handling
....
}
Module tmr (interface to C program):
extern "C"
{
void TMR_Set(UINT8 tmrnbr, UINT32 time)
{
TimerForFWUpgrade::set(tmrnbr, time);
}
INT8 TMR_IsElapsed(UINT8 tmrnbr)
{
return TimerForFWUpgrade::isElapsed(tmrnbr);
}
}
Module TimerForFWUpgrade:
SingleTimer* TimerForFWUpgrade::singleTimer[NR_OF_TIMERS];
TimerForFWUpgrade::TimerForFWUpgrade(QObject* parent)
{
for(unsigned char i = 0; i < NR_OF_TIMERS; i++)
{
singleTimer[i] = new SingleTimer(parent);
}
}
//static
void TimerForFWUpgrade::set(unsigned char tmrnbr, unsigned int time)
{
if(tmrnbr < NR_OF_TIMERS)
{
time *= TimerForFWUpgrade::timeBase;
singleTimer[tmrnbr]->set(time);
}
}
//static
char TimerForFWUpgrade::isElapsed(unsigned char tmrnbr)
{
if(true == singleTimer[tmrnbr]->isElapsed())
{
return 1;
}
else
{
return 0;
}
}
Module SingleTimer:
SingleTimer::SingleTimer(QObject* parent) : QObject(parent),
pTime(new QTimer(this)),
myElapsed(true)
{
connect(pTime, SIGNAL(timeout()), this, SLOT(slot_setElapsed()));
pTime->setTimerType(Qt::PreciseTimer);
pTime->setSingleShot(true);
}
void SingleTimer::set(unsigned int time)
{
myElapsed = false;
pTime->start(time);
}
bool SingleTimer::isElapsed()
{
QCoreApplication::processEvents();
return myElapsed;
}
void SingleTimer::slot_setElapsed()
{
myElapsed = true;
}
Use QTimer for this purpose and make use of signals and slots to start and stop the timer(s) from different threads. You can emit the signal from any thread and catch it in the thread that created the timer, which then acts on it.
Since you say you are new to Qt, I suggest you go through some tutorials before proceeding, so that you know what Qt has to offer and don't end up trying to reinvent the wheel. :)
VoidRealms is a good starting point.
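A minimal sketch of that pattern (TimerOwner is a hypothetical wrapper; the key point is that QTimer::start(int) and QTimer::stop() are slots, so a cross-thread emission is automatically queued into the thread that owns the timer):
class TimerOwner : public QObject
{
    Q_OBJECT
public:
    TimerOwner() : m_timer(new QTimer(this))
    {
        connect(this, SIGNAL(requestStart(int)), m_timer, SLOT(start(int)));
        connect(this, SIGNAL(requestStop()), m_timer, SLOT(stop()));
    }
    // Safe to call from any thread; the emission is queued if needed.
    void startFromAnyThread(int msec) { emit requestStart(msec); }
    void stopFromAnyThread()          { emit requestStop(); }
signals:
    void requestStart(int msec);
    void requestStop();
private:
    QTimer* m_timer;
};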
You have this problem because the timers in the static array are created in thread X but started and stopped in thread Y. This is not allowed, because Qt relies on thread affinity to time out timers.
You can either create, start, and stop them in the same thread, or use signals and slots to trigger the start and stop operations. The signal-and-slot solution is a bit problematic because you have n QTimer objects (hint: how do you start the timer at position i?)
What you can do instead is create and initialize the timer at position tmrnbr in
TimerForFWUpgrade::set(unsigned char tmrnbr, unsigned int time)
{
singleTimer[tmrnbr] = new SingleTimer(0);
singleTimer[tmrnbr]->set(time);
}
which is executed by the same thread.
Furthermore, you don't need a SingleTimer class. You are using Qt 5, and you already have all you need at your disposal:
SingleTimer::isElapsed is really QTimer::remainingTime() == 0;
SingleTimer::set is really QTimer::setSingleShot(true); QTimer::start(time);
SingleTimer::slot_setElapsed becomes useless;
thus SingleTimer::SingleTimer becomes useless too, and you don't need a SingleTimer class anymore. Put together, that looks like the sketch below.
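A sketch of the direct replacement (one caveat on the mapping above: QTimer::remainingTime() returns -1, not 0, once a single-shot timer has fired and gone inactive, so isActive() is the safer test):
QTimer* t = new QTimer(parent);
t->setTimerType(Qt::PreciseTimer);
t->setSingleShot(true);

t->start(time);  // replaces SingleTimer::set(time)

// Replaces SingleTimer::isElapsed(): a fired single-shot timer is inactive,
// where remainingTime() would return -1 rather than 0.
bool elapsed = !t->isActive();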
I got rid of the errors after changing my timer concept. I no longer use my SingleTimer module. Previously I never let the QTimer time out, and maybe that is why I ran into problems. Now I have a cyclic QTimer that times out every 100 ms, and in the slot function I count the events. Below is my working code:
TimerForFWUpgrade::TimerForFWUpgrade(QObject* parent) : QObject(parent),
pTime(new QTimer(this))
{
connect(pTime, SIGNAL(timeout()), this, SLOT(slot_handleTimer()));
pTime->setTimerType(Qt::PreciseTimer);
pTime->start(100);
}
void TimerForFWUpgrade::set(unsigned char tmrnbr, unsigned int time)
{
if(tmrnbr < NR_OF_TIMERS)
{
if(timeBase != 0)
{
myTimeout[tmrnbr] = time / timeBase;
}
else
{
myTimeout[tmrnbr] = 0;
}
myTimer[tmrnbr] = 0;
myElapsed[tmrnbr] = false;
myActive[tmrnbr] = true;
}
}
char TimerForFWUpgrade::isElapsed(unsigned char tmrnbr)
{
QCoreApplication::processEvents();
if(tmrnbr < NR_OF_TIMERS)
{
if(true == myElapsed[tmrnbr])
{
return 1;
}
else
{
return 0;
}
}
else
{
return 0; // NOK
}
}
void TimerForFWUpgrade::slot_handleTimer()
{
for(UINT8 i = 0; i < NR_OF_TIMERS; i++)
{
if(myActive[i] == true)
{
myTimer[i]++;
if(myTimeout[i] < myTimer[i])
{
myTimer[i] = 0;
myElapsed[i] = true;
myActive[i] = false;
}
}
}
}