Unexplained crash while polling SYSTEMTIME type - C++

I have a program that runs every 5 minutes while the stock market is open. It does this by running once, then entering the following function, which returns once 5 minutes have passed and the stock market is open.
What I don't understand is that after a period of time, usually about 18 or 19 hours, it crashes with a SIGSEGV error. I have no idea why, as it isn't writing to any memory, although I don't know much about the SYSTEMTIME type, so maybe that's it?
Anyway, any help you could give would be very much appreciated! Thanks in advance!!
void KillTimeUntilNextStockDataReleaseOnWeb()
{
    SYSTEMTIME tLocalTimeNow;
    cout << "\n*****CHECKING IF RUN HAS JUST COMPLETED OR NOT*****\n";
    GetLocalTime(&tLocalTimeNow); // CHECK IF A RUN HAS JUST COMPLETED. IF SO, AWAIT NEXT 5 MINUTE MARK
    while ((tLocalTimeNow.wMinute % 5) == 0)
        GetLocalTime(&tLocalTimeNow);
    cout << "\n*****AWAITING 5 MINUTE MARK TO UPDATE STOCK DATA*****\n";
    GetLocalTime(&tLocalTimeNow); // LOOP THROUGH THIS SECTION, CHECKING CURRENT TIME, UNTIL 5 MINUTE UPDATE. THEN PROCEED
    while ((tLocalTimeNow.wMinute % 5) != 0)
        GetLocalTime(&tLocalTimeNow);
    cout << "\n*****CHECKING IF MARKET IS OPEN*****\n";
    // CHECK IF STOCK MARKET IS EVEN OPEN. IF NOT, REPEAT
    GetLocalTime(&tLocalTimeNow);
    while ((tLocalTimeNow.wHour < 8) || (tLocalTimeNow.wHour > 17))
        GetLocalTime(&tLocalTimeNow);
    cout << "\n*****PROGRAM CONTINUING*****\n";
    return;
}

If you want to "wait for X seconds", then the Windows system call Sleep(x) will sleep for x milliseconds. Note, however, that if you sleep for, say, 300 s after some operation that took 3 seconds, you drift 3 seconds every 5 minutes. It may not matter, but if it's critical that you keep the same timing all the time, you should figure out [based on time() or some such function] how long it is to the next boundary, and then sleep that amount [possibly run a bit short and then add another check and sleep if you woke up early]. If "every five minutes" is more of an approximate thing, then 300 s is fine.
There are other methods to wait for a given amount of time, but I suspect the above is sufficient.
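For illustration, here is a minimal sketch of the "sleep until the next 5-minute boundary" idea on Windows; the function name and the choice to compute the remainder from SYSTEMTIME are mine, not part of the original answer:

#include <windows.h>

void SleepUntilNextFiveMinuteMark()
{
    SYSTEMTIME now;
    GetLocalTime(&now);

    // Milliseconds already elapsed inside the current 5-minute window.
    DWORD elapsedMs = (now.wMinute % 5) * 60000
                    + now.wSecond * 1000
                    + now.wMilliseconds;

    // Sleep the remainder; wake slightly early and re-check if exact timing matters.
    Sleep(5 * 60000 - elapsedMs);
}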

Instead of using a busy loop, or even Sleep() in a loop, I would suggest using a Waitable Timer. That way, the calling thread can sleep efficiently while it is waiting, while still providing a mechanism to "wake up" early if needed.
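As a rough sketch of how a waitable timer could be used for a single 5-minute wait, assuming plain Win32 (the function name and the fixed 5-minute due time are illustrative):

#include <windows.h>

void WaitFiveMinutes()
{
    HANDLE hTimer = CreateWaitableTimer(NULL, TRUE, NULL);  // manual-reset timer
    if (hTimer == NULL)
        return;

    LARGE_INTEGER dueTime;
    dueTime.QuadPart = -3000000000LL;  // relative due time in 100-ns units: 5 minutes

    if (SetWaitableTimer(hTimer, &dueTime, 0, NULL, NULL, FALSE))
        WaitForSingleObject(hTimer, INFINITE);  // thread sleeps until the timer signals

    CloseHandle(hTimer);
}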


ESP32/FreeRTOS, how to stop currently running task when the new one was triggered (avoiding overlap)

I'm working on code to control a 2-module relay for door access. I'm looking for a way to stop the currently running task before starting a new one (the same task). All I want is to avoid overlap.
void TaskOpenManRoom(void *parameter) {
    Serial.println("Opening men room");
    digitalWrite(manRelay, LOW);
    vTaskDelay(6000 / portTICK_PERIOD_MS);
    digitalWrite(manRelay, HIGH);
    Serial.println("Closing men room");
    vTaskDelete(NULL);
}
xTaskCreate(
    TaskOpenManRoom,
    "TaskOpenManRoom",
    1000,
    (void *) &man,
    1,
    &TaskMen
);
My goal is to extend the time the door stays open. So basically, when the first task has been triggered and then, after a while, the second one, the door should stay open for another 6000 ms.
In my current code, when the second task is called in the middle of the first one, the door gets closed because the first task calls digitalWrite(manRelay, HIGH);.
I would appreciate a hint on how I can kill the first task when the second one is triggered.
Tasks are meant to be long-running, because they are relatively heavyweight. Don't start and end tasks for each user activity and don't delay them for extended periods of time.
You don't need any task at all for this functionality; you just need a timer that performs the closing action after 6000 ms. Then you can reset it whenever you need to.
TimerHandle_t closeManRoomTimer;

void OpenManRoom() {
    xTimerReset(closeManRoomTimer, 100);  // <------ (re)arm the close timer
    Serial.println("Opening men room");
    digitalWrite(manRelay, LOW);
}

void CloseManRoom(TimerHandle_t) {
    Serial.println("Closing men room");
    digitalWrite(manRelay, HIGH);
}

// during program startup, set up a one-shot close timer
closeManRoomTimer = xTimerCreate("closeManRoomTimer", pdMS_TO_TICKS(6000), pdFALSE, 0, &CloseManRoom);
I would not kill the first task when the second starts.
If you use a task at all, I'd rewrite the task to something along this general line:
cast parameter to pointer to uint32
atomically increment open count, and if it was zero {
    open the door
    repeat {
        sleep six seconds
    } atomically decrement count, and exit loop if it was 1
    close the door
}
exit the task
...and when you create the task, pass a pointer to a uint32_t for it to use to store the open count.
So the task starts by atomically incrementing the open count, which returns the value that was previously in the open count. If that was zero, it means the door is currently closed. In that case, we open it and go to sleep.
If the task runs again while it's sleeping, the open count will now be one. We immediately increment that, but when we check the previous value, it was 1, so we don't try to open the door again; we just skip all the stuff in the if statement and exit the task.
When the first instance of the task wakes up, it decrements the count, and if it was 1, it exits the loop, closes the door, and exits the task. But if the task ran again while it was sleeping, the count will still be greater than 1, so it will stay in the loop and sleep some more.
This is open to a little bit of optimization. As it stands right now, it sleeps a fixed period of time (six seconds) even if the current open count is greater than 1. If the task is expensive enough to justify a little extra work, we could do an atomic exchange to retrieve the current open count and set it to 0, multiply the retrieved value by 6000, then sleep for that long. That adds quite a bit of extra complexity, though, and in this case the benefit would be much too small to justify it.
This does depend on our not running the task more than 4 billion times while the door is open. If we did, our atomic increment would overflow, and the code would misbehave. For the case at hand (and most others) this is unlikely to be a problem. In the rare situation where it might be, the obvious fix is a 64-bit variable (and 64-bit atomic increment and decrement). Incrementing a 64-bit variable until it overflows is generally not a realistic possibility (e.g., if you incremented at 1 GHz, it would take centuries).
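Below is a minimal sketch of that counting task, assuming an Arduino-ESP32/FreeRTOS environment; the names (openCount, manRelay), the stack size, and the use of std::atomic rather than a specific FreeRTOS atomic API are assumptions of mine:

#include <Arduino.h>
#include <atomic>

std::atomic<uint32_t> openCount{0};  // shared open count, passed to the task
const int manRelay = 25;             // hypothetical relay pin

void TaskOpenManRoom(void *parameter) {
    auto *count = static_cast<std::atomic<uint32_t> *>(parameter);
    if (count->fetch_add(1) == 0) {           // previous value was 0: door is closed
        digitalWrite(manRelay, LOW);          // open the door
        do {
            vTaskDelay(pdMS_TO_TICKS(6000));  // sleep six seconds
        } while (count->fetch_sub(1) != 1);   // another open arrived: keep sleeping
        digitalWrite(manRelay, HIGH);         // close the door
    }
    vTaskDelete(NULL);                        // exit the task
}

// on each open request:
// xTaskCreate(TaskOpenManRoom, "TaskOpenManRoom", 2048, &openCount, 1, NULL);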
There are many ways:
use vTaskDelay(), which puts the task in the Blocked (not running) state without busy-waiting;
wait for a mutex, semaphore, queue, or task notification from another task.
I would appreciate a hint on how I can kill the first task when the second one is triggered.
This will delete the current task:
vTaskDelete(NULL);

Scheduling reset every 24 hours at midnight

I have a counter "numberOrders" and I want to reset it every day at midnight, to know how many orders I get in one day. What I have right now is this:
val system = akka.actor.ActorSystem("system")
system.scheduler.schedule(86400000 milliseconds, 0 milliseconds){(numberOrders = 0)}
This piece of code is inside a def which is called every time I get a new order, so what it does is reset numberOrders 24 hours after the first order, or after every order; I'm not really sure whether every time there's a new order it is going to reset 24 hours later, which is not what I want. I want to reset the variable every day at midnight. Any idea? Thanks!
To build further on pushy's answer: since you might not always be sure when the site started, and if you want to be exactly sure it runs at midnight, you can do the following:
val system = akka.actor.ActorSystem("system")
// milliseconds remaining until the next (UTC) midnight
val wait = (24 hours).toMillis - System.currentTimeMillis % (24 hours).toMillis
system.scheduler.schedule(Duration.apply(wait, MILLISECONDS), 24 hours, orderActor, ResetCounterMessage)
It might not be the tidiest of solutions, but it does the job.
As schedule supports repeated executions, you could just set the interval parameter to 24 hours, the initial delay to the amount of time between now and midnight, and start the schedule at startup. You also seem to be creating a new ActorSystem every time you get an order right now, which does not seem quite right, and you would be rid of that as well.
Also, I would suggest using the schedule overload that sends messages to actors instead. This way the actor that processes the orders can keep the count, and when it receives a ResetCounter message it simply resets the counter. You could simply write:
system.scheduler.schedule(x seconds, 24 hours, orderActor, ResetCounterMessage)
when you start up your actor system initially, and be done with it.

How can I periodically execute some function if this function takes a long time to run (less than the period)?

I want to run a function, for example func(), exactly once per second. However, the running time of func() is about 500 ms. How can I do that? I know that if the running time of the function were low, I could write a while loop in func() and sleep() for 1 second after each execution. But now the running time is high. What should I do to ensure func() runs exactly once per second? Thanks.
You do:
Take the current time in start_time.
Perform your job
Take the current time in end_time
Wait for (1 second + start_time - end_time)
That way, you can perform your task every second reliably. If the task takes less time, you will wait longer, and vice versa. Note, however, that this assumes your task always takes less than 1 second to execute. In real code, you want to check for that before the sleep statement.
Implementation details depend on the platform.
Note that using this method still results in a small drift due to the time it takes to compute step 4. A more accurate alternative would be to synchronize on integer multiples of one second. That way, over thousands of cycles you would not drift.
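As a sketch only, here is one way the measure-and-sleep approach in the steps above could look in portable C++; do_work() and the one-second period are placeholders, not part of the original answer:

#include <chrono>
#include <thread>

void do_work();  // placeholder for the ~500 ms job

void run_every_second()
{
    using namespace std::chrono;
    for (;;) {
        auto start_time = steady_clock::now();
        do_work();
        auto elapsed = steady_clock::now() - start_time;
        if (elapsed < seconds(1))
            std::this_thread::sleep_for(seconds(1) - elapsed);
        // else: the job overran its period; handle as needed
    }
}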
It depends on the level of accuracy you need.
If you want a brute-force, easy-to-code solution, you can get the time before the first run of the function and save it in some variable (start_time). Create a repeat-index variable (repeat_number) that stores the number of the next repeat. Then you can do roughly this:
1) next_run_time = ++repeat_number*1sec + start_time;
2) func();
3) wait_time = next_run_time - current_time;
4) sleep(wait_time)
5) goto 1;
This approach prevents accumulation of timing error across iterations.
But for a real application you should find some event framework or library.
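For comparison, a sketch of the drift-free variant described above, scheduling against an absolute next_run_time with std::chrono; the names and the one-second period are illustrative:

#include <chrono>
#include <thread>

void do_work();  // placeholder for the periodic job

void run_on_schedule()
{
    using namespace std::chrono;
    auto next_run_time = steady_clock::now();
    for (;;) {
        next_run_time += seconds(1);                   // next absolute deadline
        do_work();
        std::this_thread::sleep_until(next_run_time);  // no per-iteration drift
    }
}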

Limit iterations per time unit

Is there a way to limit iterations per time unit? For example, I have a loop like this:
for (int i = 0; i < 100000; i++)
{
    // do stuff
}
I want to limit the loop above so that there is a maximum of 30 iterations per second.
I would also like the iterations to be evenly spaced along the timeline, so not something like 30 iterations in the first 0.4 s and then a 0.6 s wait.
Is that possible? It does not have to be completely precise (though the more precise it is, the better).
@FredOverflow My program is running very fast. It is sending data over wifi to another program which is not fast enough to handle them at the current rate. – Richard Knop
Then you should probably have the program you're sending data to send an acknowledgment when it has finished receiving the last chunk of data you sent, and then send the next chunk. Anything else will just cause you frustration down the line as circumstances change.
Suppose you have a good Now() function (GetTickCount() is a bad example: it's OS-specific and has poor precision):
for (int i = 0; i < 1000; i++) {
    DWORD have_to_sleep_until = GetTickCount() + EXPECTED_ITERATION_TIME_MS;
    // do stuff
    DWORD now = GetTickCount();
    if (now < have_to_sleep_until)   // guard against unsigned wrap-around
        Sleep(have_to_sleep_until - now);
}
You can check the elapsed time inside the loop, but that may not be the usual solution. Because computation time depends entirely on the performance of the machine and the algorithm, people tune it at development time (e.g., many game programmers require at least 25-30 frames per second for properly smooth animation).
The easiest way (for Windows) is to use QueryPerformanceCounter(). Some pseudo-code below.
QueryPerformanceFrequency(&freq)
timeWanted = 1000.0/30.0                // time in ms per iteration for 30 iterations/sec
for i
    QueryPerf(c1)
    do stuff
    QueryPerf(c2)
    timeElapsed = (double)(c2 - c1) * (double)(1e3) / (double)freq   // time in milliseconds
    timeDiff = timeWanted - timeElapsed
    if (timeDiff > 0)
        QueryPerf(c3)
        QueryPerf(c4)
        while ((double)(c4 - c3) * (double)(1e3) / (double)freq < timeDiff)
            QueryPerf(c4)
end for
EDIT: You must make sure that the 'do stuff' part takes less time than your frame period, or else the limiting doesn't matter. Also, instead of 1e3 for milliseconds you can go all the way to nanoseconds with 1e9 (if you want that much accuracy).
WARNING: this will eat your CPU but give you good 'software' timing. Do it in a separate thread (and only if you have more than one processor) so that any GUIs won't lock up. You can also put a conditional in there to stop the loop if this is a multi-threaded app.
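For what it's worth, a compilable version of the pseudo-code above might look roughly like this on Windows; DoStuff(), the 30 Hz target, and the function name are placeholders of mine:

#include <windows.h>

void DoStuff();  // placeholder for the per-iteration work

void RunAt30Hz(int iterations)
{
    LARGE_INTEGER freq, c1, c2;
    QueryPerformanceFrequency(&freq);
    const double periodMs = 1000.0 / 30.0;   // desired time per iteration

    for (int i = 0; i < iterations; ++i) {
        QueryPerformanceCounter(&c1);
        DoStuff();
        QueryPerformanceCounter(&c2);
        double elapsedMs = (double)(c2.QuadPart - c1.QuadPart) * 1000.0 / (double)freq.QuadPart;

        // Busy-wait out the remainder of the period (burns CPU, as the answer warns).
        while (elapsedMs < periodMs) {
            QueryPerformanceCounter(&c2);
            elapsedMs = (double)(c2.QuadPart - c1.QuadPart) * 1000.0 / (double)freq.QuadPart;
        }
    }
}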
@FredOverflow My program is running very fast. It is sending data over wifi to another program which is not fast enough to handle them at the current rate. – Richard Knop
What you might need is a buffer or queue at the receiver side. The thread that receives the messages from the client (e.g., through a socket) gets each message and puts it in the queue. The actual consumer of the messages reads/pops from the queue. Of course, you need concurrency control for your queue.
Besides the flow-control methods mentioned, you may also need to maintain an accurate, specific data-sending rate on the sender side. Usually it can be done like this.
E.g., if you want to send at 10 Mbps, create a timer with a 1 ms interval so it calls a predefined function every 1 ms. Then, in the timer handler function, by keeping track of two static variables, 1) the time elapsed since the beginning of sending data and 2) how much data in bytes has been sent up to the last call, you can easily calculate how much data needs to be sent in the current call (or just sleep and wait for the next call).
This way, you can "stream" data very stably with very little jitter; this is the approach usually adopted for video streaming. Of course, it also depends on how accurate the timer is.
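A simplified sketch of that pacing calculation follows; the variable names, the 10 Mbps target, and SendBytes() are illustrative, and the 1 ms periodic timer that calls OnTimerTick() is assumed to exist elsewhere:

#include <cstdint>

static const uint64_t targetBytesPerSec = 10000000 / 8;  // 10 Mbps expressed in bytes/sec
static uint64_t elapsedMs = 0;                           // ms since sending began
static uint64_t bytesSent = 0;                           // bytes sent so far

void SendBytes(uint64_t n);                              // placeholder for the real send

// Called by a periodic 1 ms timer.
void OnTimerTick()
{
    ++elapsedMs;
    // How many bytes should have gone out by now to hold the target rate.
    const uint64_t shouldHaveSent = targetBytesPerSec * elapsedMs / 1000;
    if (shouldHaveSent > bytesSent) {
        SendBytes(shouldHaveSent - bytesSent);
        bytesSent = shouldHaveSent;
    }
}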

How to get an accurate 1ms Timer Tick under WinXP

I am trying to call a function every 1 ms. The problem is, I would like to do this on Windows. So I tried the multimedia timer API.
Multimedia timer API
Source
idTimer = timeSetEvent(
    1,
    0,
    TimerProc,
    0,
    TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
My result was that most of the time the 1 ms was OK, but sometimes I got double the period. See the little bump at around 1.95 ms.
multimediatimerHistogram http://www.freeimagehosting.net/uploads/8b78f2fa6d.png
My first thought was that maybe my method was running too long. But I measured this already and this was not the case.
Queued Timers API
My next try was using the queued timers API with
hTimerQueue = CreateTimerQueue();
if (hTimerQueue == NULL)
{
    printf("Error creating queue: 0x%x\n", GetLastError());
}

BOOL res = CreateTimerQueueTimer(
    &hTimer,
    hTimerQueue,
    TimerProc,
    NULL,
    0,
    1,   // 1 ms
    WT_EXECUTEDEFAULT);
But the result was also not as expected. Now I get a 2 ms cycle time most of the time.
queuedTimer http://www.freeimagehosting.net/uploads/2a46259a15.png
Measurement
For measuring the times I used the method QueryPerformanceCounter and QueryPerformanceFrequency.
Question
So now my question is whether somebody has encountered similar problems under Windows and maybe even found a solution?
Thanks.
Without going to a real-time OS, you cannot expect to have your function called every 1 ms.
On Windows, which is NOT a real-time OS (Linux is similar), a program that repeatedly reads the current time with microsecond precision and stores the consecutive differences in a histogram will have non-empty bins above 10 ms! That means that sometimes you will get 2 ms between your calls, but you can also get much more.
You can try to call timeBeginPeriod(1) at program start and timeEndPeriod(1) before quitting. This can probably improve timer precision.
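A small sketch of that, assuming you link against winmm.lib (the surrounding main() is just scaffolding):

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")   // MSVC: link the multimedia timer library

int main()
{
    timeBeginPeriod(1);   // request 1 ms timer resolution at program start

    // ... run the periodic work here ...

    timeEndPeriod(1);     // restore the previous resolution before quitting
    return 0;
}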
A call to NtQueryTimerResolution() will return a value for ActualResolution. In your case the actual resolution is almost certainly 0.9765625 ms. This is exactly what you show in the first plot.
The second occurrence at about 1.95 ms is more precisely Sleep(1) = 1.9531 ms = 2 x 0.9765625 ms.
I guess the interrupt period runs at something close to 1 ms (0.9765625 ms).
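To check what ActualResolution is on a given machine, one could query it directly; the sketch below resolves the undocumented ntdll export NtQueryTimerResolution at runtime (values are in 100-ns units) and is an illustration of mine, not part of the original answer:

#include <windows.h>
#include <cstdio>

typedef LONG (NTAPI *NtQueryTimerResolution_t)(PULONG Minimum, PULONG Maximum, PULONG Current);

int main()
{
    NtQueryTimerResolution_t NtQueryTimerResolution =
        (NtQueryTimerResolution_t)GetProcAddress(GetModuleHandleA("ntdll.dll"),
                                                 "NtQueryTimerResolution");
    if (NtQueryTimerResolution) {
        ULONG min = 0, max = 0, cur = 0;
        NtQueryTimerResolution(&min, &max, &cur);
        // e.g. cur == 9766 corresponds to a 0.9766 ms actual resolution
        printf("resolution: min %lu, max %lu, current %lu (100-ns units)\n", min, max, cur);
    }
    return 0;
}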
And now the trouble begins: The timer signals when the desired delay expires.
Say the ActualResolution is set to 0.9765625 ms; the interrupt heartbeat of the system will run at 0.9765625 ms periods, or 1024 Hz, and a call to Sleep is made with a desired delay of 1 ms. Two scenarios have to be looked at:
The call was made < 1ms (ΔT) ahead of the next interrupt. The next interrupt will not confirm that the desired period of time has expired. Only the following interrupt will cause the call to return. The resulting sleep delay will be ΔT + 0.9765625 ms.
The call was made >= 1ms (ΔT) ahead of the next interrupt. The next interrupt will force the call to return. The resulting sleep delay will be ΔT.
So the result depends a lot on when the call was made and therefore you may observe 0.98ms events as well as 1.95ms events.
Edit: Using CreateTimerQueueTimer will push the observed delay to 1.95 ms because the timer tick (interrupt period) is 0.9765625 ms. On the first occurrence of the interrupt, the requested duration of 1 ms has not quite expired, thus the TimerProc will only be triggered after the second interrupt (2 x 0.9765625 ms = 1.953125 ms > 1 ms). Consequently, the queueTimer plot shows the peak at 1.953125 ms.
Note: This behavior strongly depends on the underlying hardware.
More details can be found at the Windows Timestamp Project