I am creating a game. The game has a 100-second timer which counts down to zero: 99, 98, 97...
When we lose the game and retry, the timer decrements by 2: 98, 96, 94...
If we lose again and retry, the difference is 3...
I noticed that when we lose and retry, the timer function is called twice, so it decrements by 2; similarly, if we retry a third time, the timer function is called three times, and so on.
What is causing this issue? Urgent help required, please.
Perhaps your timer is being started anew each time you retry. The first time, you have one timer running; after the second retry, two timers; after the third, three, and so on - each running timer decrements the count, which is why the step grows by one per retry. Stop the previous timer before retrying so that only one timer is ever running.
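The asker's platform isn't stated, but the general fix looks the same everywhere. A minimal sketch in Qt/C++ (the countdownTimer member and tick() slot are hypothetical names for illustration):

void Game::retry()
{
    if (countdownTimer != nullptr)
    {
        countdownTimer->stop();          // make sure the old countdown is dead
        countdownTimer->deleteLater();
    }
    countdownTimer = new QTimer(this);   // exactly one live timer from here on
    connect(countdownTimer, &QTimer::timeout, this, &Game::tick);
    secondsLeft = 100;
    countdownTimer->start(1000);         // one tick per second
}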
If my timeslice is 3 seconds, I am guessing the alarm stops the execution of a process every three seconds. What does sleep do? Does it put the process to sleep for 3 seconds? This does not make sense to me - what if there are a lot of processes? Wouldn't it have to sleep for longer?
I am doing this with a round-robin simulation:
while (head != NULL)
{
    alarm(TIMESLICE);   // arrange for SIGALRM to arrive in TIMESLICE seconds
    sleep(TIMESLICE);   // suspend this process for TIMESLICE seconds
    // (the bookkeeping that advances `head` through the process list is elided)
}
cout << "no processes left" << endl;
The code works, but I just want to understand what exactly is going on as I am new to this concept.
I am guessing the alarm stops the execution of a process every three seconds.
Sort of. It arranges for a signal to be sent to the process in three seconds. The process can then continue normally and can even ignore the signal if it wants to.
What does sleep do? Does it put the process to sleep for 3 seconds?
Correct.
This does not make sense to me - what if there are a lot of processes? Wouldn't it have to sleep for longer?
No. Even a process that never sleeps isn't guaranteed to get the CPU all the time. A process that isn't sleeping may or may not be scheduled to run on a core at any particular time. Once it's no longer sleeping, it's ready-to-run, and the scheduler will make the decision of when, and for how long, to let it use which core.
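To make the alarm()/sleep() interplay concrete, here is a minimal self-contained sketch (standard POSIX calls; the handler only sets a flag, since little else is async-signal-safe):

#include <unistd.h>   // alarm(), sleep()
#include <csignal>    // std::signal, SIGALRM
#include <iostream>

volatile std::sig_atomic_t gotAlarm = 0;

void onAlarm(int) { gotAlarm = 1; }   // handlers should only do async-signal-safe work

int main()
{
    std::signal(SIGALRM, onAlarm);    // without a handler, SIGALRM terminates the process
    alarm(3);                         // kernel will deliver SIGALRM in ~3 seconds
    sleep(10);                        // returns early when the signal is caught
    std::cout << (gotAlarm ? "woken by SIGALRM\n" : "slept the full 10 s\n");
    return 0;
}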
I am trying to understand the difference between interval and delay in cocos2d's scheduler.
If delay is the time between two executions of the selector, then what is interval really?
The comments in cocos2d's CCNode.h file may be useful:
/**
 Schedules a custom selector with an interval time in seconds.
 If the selector is already scheduled, then the interval parameter will be updated without scheduling it again.

 repeat will execute the action repeat + 1 times; for a continuous action use kCCRepeatForever
 delay is the amount of time the action will wait before execution
 */
-(void) schedule:(SEL)selector interval:(ccTime)interval repeat:(uint)repeat delay:(ccTime)delay;

In short: delay is a one-time wait before the first execution, interval is the time between subsequent executions, and the selector runs repeat + 1 times in total.
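For a concrete feel of the parameters, here is a sketch using the C++ sibling cocos2d-x, whose scheduler has the same semantics (assuming the 3.x API; MyNode::tick is a hypothetical callback):

// Waits 1 s, then runs tick every 2 s, repeat + 1 = 4 times in total:
// at t ≈ 1, 3, 5 and 7 seconds.
this->schedule(CC_SCHEDULE_SELECTOR(MyNode::tick), /*interval*/ 2.0f,
               /*repeat*/ 3, /*delay*/ 1.0f);

void MyNode::tick(float dt)   // dt is the time since the previous call
{
    // ...
}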
I have a windowless timer (no WM_TIMER) which fires a callback function only once, when a given time period has elapsed. It is implemented as a SetTimer()/KillTimer() pair. The time periods are fairly small: 100-300 milliseconds.
Is it cheap enough (performance-wise) to call the SetTimer()/KillTimer() pair for every such short interval?
What if I have 100 such timers which periodically call SetTimer()/KillTimer()? How many Windows timer objects may exist simultaneously in the system?
So the question is:
Should I use a bunch of such timer objects and rely on Windows' implementation of timers being good, or create one Windows timer object that ticks every, say, 30 milliseconds, and subscribe all the custom 100-300 millisecond one-shot timers to it?
Thanks
The problem with timer messages as you are trying to use them is that they are low-priority messages. Actually, they are fake messages. Timers are associated with an underlying kernel timer object - when the message loop detects the kernel timer is signalled, it simply marks the current thread's message queue with a flag indicating that the next call to GetMessage - WHEN THERE ARE NO OTHER MESSAGES TO PROCESS - should synthesise a WM_TIMER message just in time and return it.
With potentially lots of timer objects, it's not at all obvious that the system will fairly signal timer messages for all the timers equally, and any system load can entirely prevent the generation of WM_TIMER messages for long periods of time.
If you are in control of the message loop, you could maintain your own list of timer events (along with GetTickCount() timestamps of when they should occur) and use MsgWaitForMultipleObjects - instead of GetMessage - to wait for messages. Use the dwMilliseconds parameter to specify the interval from now until the next timer should be signalled, so the wait will return each time you have a timer to process.
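A sketch of that loop (Win32/C++; NextTimerDueMs() and FireDueTimers() stand in for your own timer-list bookkeeping):

void RunMessageLoop()
{
    for (;;)
    {
        DWORD wait = NextTimerDueMs(GetTickCount()); // ms until earliest deadline, or INFINITE
        DWORD r = MsgWaitForMultipleObjects(0, nullptr, FALSE, wait, QS_ALLINPUT);
        if (r == WAIT_TIMEOUT)
        {
            FireDueTimers(GetTickCount());           // run callbacks whose deadline has passed
            continue;
        }
        MSG msg;
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
        {
            if (msg.message == WM_QUIT)
                return;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}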
And/or you could use waitable timers - either on a GUI thread with MsgWaitForMultipleObjects, or just on a worker thread - to access the lower-level timing functionality directly.
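For the waitable-timer route, a minimal worker-thread sketch (standard Win32 calls, assuming <windows.h>; due times are in 100-nanosecond units, negative meaning relative to now):

HANDLE timer = CreateWaitableTimer(nullptr, FALSE, nullptr); // auto-reset timer
LARGE_INTEGER due;
due.QuadPart = -250 * 10000LL;          // fire once, 250 ms from now
SetWaitableTimer(timer, &due, 0, nullptr, nullptr, FALSE);
WaitForSingleObject(timer, INFINITE);   // blocks until the timer signals
// ... do the one-shot work, then CloseHandle(timer) when finished ...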
The biggest SetTimer() pitfall is that a timer is actually a USER object (despite not being listed in the MSDN USER objects list), hence it falls under the Windows USER object limits: by default, a maximum of 10,000 objects per process and 65,535 objects per session (all running processes).
This can easily be proven by a simple test - just call SetTimer() (the parameters don't matter; windowed and windowless timers act the same way) and watch the USER objects count increase in Task Manager.
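If you want to reproduce that test programmatically rather than eyeballing Task Manager, GetGuiResources() reports the same counter. A quick sketch; if the claim above holds, the second number should come out roughly 100 higher:

#include <windows.h>
#include <iostream>

int main()
{
    HANDLE self = GetCurrentProcess();
    std::cout << "USER objects before: " << GetGuiResources(self, GR_USEROBJECTS) << '\n';
    for (int i = 0; i < 100; ++i)
        SetTimer(nullptr, 0, 1000, nullptr);    // windowless timers, deliberately never killed
    std::cout << "USER objects after:  " << GetGuiResources(self, GR_USEROBJECTS) << '\n';
    return 0;
}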
Also see the ReactOS ntuser.h source and this article. Both of them state that TYPE_TIMER is one of the USER handle types.
So beware - creating a bunch of timers can exhaust your system resources and make your process crash, or even make the entire system unresponsive.
Here are the details that I feel you're actually after while asking this question:
SetTimer() will first scan the non-kernel timer list (doubly linked list) to see if the timer ID already exists. If the timer exists, it will simply be reset. If not, an HMAllocObject call occurs and creates space for the structure. The timer struct will then be populated and linked to the head of the list.
This will be the total overhead for creating each of your 100 timers. That's exactly what the routine does, save for checking against the min and max dwElapsed parameters.
As far as timer expiration goes, the timer list is scanned at (approximately) the interval of the smallest timer duration seen during the last scan. (What really happens is: a kernel timer is set to the duration of the smallest user timer found, and this kernel timer wakes up the thread that checks for user timer expirations and wakes the respective threads by setting a flag in their message queue status.)
For each timer in the list, the delta between the last time (in ms) the timer list was scanned and the current time (in ms) is decremented from the timer's remaining time. When one is due (<= 0 remaining), it's flagged as "ready" in its own struct, and a pointer to the thread info is read from the timer struct and used to wake the respective thread by setting the thread's QS_TIMER flag. It then increments your message queue's CurrentTimersReady counter. That's all timer expiration does - no actual messages are posted.
When your main message pump calls GetMessage(), when no other messages are available, GetMessage() checks for QS_TIMER in your thread's wake bits, and if set -- generates a WM_TIMER message by scanning the full user timer list for the smallest timer in the list flagged READY and that is associated with your thread id. It then decrements your thread CurrentTimersReady count, and if 0, clears the timer wake bit. Your next call to GetMessage() will cause the same thing to occur until all timers are exhausted.
One-shot timers stay instantiated. When they expire, they're flagged as WAITING. The next call to SetTimer() with the same timer ID will simply update and re-activate the original. Both one-shot and periodic timers reset themselves and only die with KillTimer() or when your thread or window is destroyed.
The Windows implementation is very basic, and I think it'd be trivial for you to write a more performant implementation.
I am trying to start a timer at a specific time of day, like 02:30, so that it fires every day at 02:30.
Is this possible? Do you have any idea?
Thanks a lot.
QTimer doesn't handle specific times of day natively, but you can use it in conjunction with QDateTime to get what you want. That is, use QDateTime objects to figure out how many milliseconds lie between now and the next 02:30 (QDateTime::msecsTo() looks particularly appropriate here), then set your QTimer to go off after that many milliseconds. Repeat as necessary.
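A sketch of that approach (Qt/C++, assuming Qt 5.4 or later for the singleShot overload with a context object; scheduleAt0230() is a hypothetical helper that re-arms itself after each firing):

#include <QTimer>
#include <QDateTime>
#include <QTime>
#include <functional>

void scheduleAt0230(QObject *context, const std::function<void()> &task)
{
    QDateTime now = QDateTime::currentDateTime();
    QDateTime target(now.date(), QTime(2, 30));
    if (target <= now)
        target = target.addDays(1);              // 02:30 has already passed today
    int msec = static_cast<int>(now.msecsTo(target));
    QTimer::singleShot(msec, context, [context, task] {
        task();
        scheduleAt0230(context, task);           // re-arm for tomorrow
    });
}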
Depending on the required resolution, you could use an ordinary QTimer that fires, say, every minute.
In the timerEvent, you check whether you have reached the right time (using QDateTime) and trigger the necessary event.
Jeremy's solution is indeed elegant, but it doesn't take daylight saving time into account.
To guard against that, you should fire a timer event every hour and check the wall clock.
Calculate the delta to the target, like Jeremy proposes, and if it falls within the coming hour, set a timer to fire, and disable the hourly timer.
If not, just wait for the hourly timer to fire again.
Pseudo code:

    Get wall clock time
    Calculate difference between target time and wall clock
    If difference < 1 hour:
        Set timer to fire after difference secs
        If this is a repeating event, restart the hourly timer
    Else:
        Start watch timer to do this calculation again after one hour
I need to periodically do a particular task and am currently using nanosleep.
The task needs to be run every second or every 10 seconds.
Is there a better way to do this than:
while (true)
{
    doTask();
    sleep(1);   // or sleep(10) - sleep() takes the number of seconds to wait
}
Walter
One option is to create a thread that performs the task with the specified timeout.
You can use a thread library to create a thread that runs doTask(); the thread simply sleeps between runs, waking every 1 or 10 seconds.
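For example, with standard C++11 threads (a sketch, assuming doTask() from the question is safe to call off the main thread):

#include <thread>
#include <chrono>
#include <atomic>

void doTask();   // the task from the question

std::atomic<bool> running{true};

std::thread worker([] {
    while (running)
    {
        doTask();
        std::this_thread::sleep_for(std::chrono::seconds(1)); // or seconds(10)
    }
});
// ... when shutting down: running = false; worker.join();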
This can be done with a QTimer and a QRunnable.
http://doc.qt.nokia.com/latest/qtimer.html
According to the docs, the resolution is around 1 ms in most cases. For your needs, this should be sufficient.
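A minimal sketch of the QTimer version (again assuming doTask() from the question):

#include <QTimer>
#include <QCoreApplication>

void doTask();   // the task from the question

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QTimer timer;
    QObject::connect(&timer, &QTimer::timeout, [] { doTask(); });
    timer.start(1000);   // every second; use 10000 for every ten seconds
    return app.exec();
}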