Is event recording on a time-sensitive system possible? - c++

Basic Question
Is there any way to do event recording and playback within a time-sensitive (framerate-independent) system?
Any help - including a simple "No sorry it is impossible" - would be greatly appreciated. I have spent almost 20 hours working on this over the past few weekends and am driving myself crazy.
Full Details
This is currently aimed at a game, but the libraries I'm writing are designed to be more general, and the concept applies to more than just my C++ code.
I have some code that looks functionally similar to this... (it is written in C++0x, but I'm taking some liberties to make it more compact)
void InputThread()
{
    InputAxisReturn AxisState[IA_SIZE];
    while (Continue)
    {
        Threading()->EventWait(InputEvent);
        Threading()->EventReset(InputEvent);

        pInput->GetChangedAxis(AxisState);

        //REF ALPHA
        if (AxisState[IA_GAMEPAD_0_X].Changed)
        {
            X_Axis = AxisState[IA_GAMEPAD_0_X].Value;
        }
    }
}
And I have a separate thread that looks like this...
//REF BETA
while (Continue)
{
    //Is there a message to process?
    StandardWindowsPeekMessageProcessing();

    //GetElapsedTime() returns a float of time in seconds since its last call
    UpdateAll(LoopTimer.GetElapsedTime());
}
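For reference, LoopTimer.GetElapsedTime() is backed by QueryPerformanceCounter; it works roughly like this (a simplified sketch, not the exact code):

#include <windows.h>

class LoopTimerSketch
{
public:
    LoopTimerSketch()
    {
        QueryPerformanceFrequency(&m_Frequency);
        QueryPerformanceCounter(&m_Last);
    }

    //Returns seconds elapsed since the previous call
    float GetElapsedTime()
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        float elapsed = static_cast<float>(now.QuadPart - m_Last.QuadPart)
                      / static_cast<float>(m_Frequency.QuadPart);
        m_Last = now;
        return elapsed;
    }

private:
    LARGE_INTEGER m_Frequency;
    LARGE_INTEGER m_Last;
};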
Now I'd like to record input events for playback for testing and some limited replay functionality.
I can easily record the events with precision timing by simply inserting the following code where I marked //REF ALPHA
//REF ALPHA
EventRecordings.push_back(EventRecording(TimeSinceRecordingBegan, AxisState));
The real issue is playing these back. My LoopTimer is extremely high precision, using the High Performance Counter (QueryPerformanceCounter). This means that it is nearly impossible to hit the same time difference using code like the following in place of //REF BETA:
// REF BETA
NextEvent = EventRecordings.pop_back();
Time TimeSincePlaybackBegan;
while (Continue)
{
    //Is there a message to process?
    StandardWindowsPeekMessageProcessing();

    //Did we reach the next event?
    if (TimeSincePlaybackBegan >= NextEvent.TimeSinceRecordingBegan)
    {
        if (NextEvent.AxisState[IA_GAMEPAD_0_X].Changed)
        {
            X_Axis = NextEvent.AxisState[IA_GAMEPAD_0_X].Value;
        }
        NextEvent = EventRecordings.pop_back();
    }

    //GetElapsedTime() returns a float of time in seconds since its last call
    Time elapsed = LoopTimer.GetElapsedTime();
    UpdateAll(elapsed);
    TimeSincePlaybackBegan += elapsed;
}
The issue with this approach is that you will almost never hit the exact same time so you will have a few microseconds where the playback doesn't match the recording.
I also tried event snapping. Kind of a confusing term, but basically: if TimeSincePlaybackBegan > NextEvent.TimeSinceRecordingBegan, then TimeSincePlaybackBegan is set to NextEvent.TimeSinceRecordingBegan and ElapsedTime is altered to suit.
It had some of the side effects you would expect (like some slowdown), but unfortunately it still resulted in the playback desynchronizing.
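Roughly, the snapping variant of the playback loop body looked like this (a reconstructed sketch, not my exact code; ApplyEvent() stands in for the axis-copying block above):

// REF BETA (event snapping variant, simplified)
Time elapsed = LoopTimer.GetElapsedTime();

//If this frame would step past the next recorded event, snap to it instead
if (TimeSincePlaybackBegan + elapsed > NextEvent.TimeSinceRecordingBegan)
{
    elapsed = NextEvent.TimeSinceRecordingBegan - TimeSincePlaybackBegan;
}

TimeSincePlaybackBegan += elapsed;
UpdateAll(elapsed);

if (TimeSincePlaybackBegan >= NextEvent.TimeSinceRecordingBegan)
{
    ApplyEvent(NextEvent); //copy the recorded axis values, as in the loop above
    NextEvent = EventRecordings.pop_back();
}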
For some more background - and possibly a reason why my time snapping approach didn't work - I'm using BulletPhysics somewhere down that UpdateAll call. Kind of like this...
void Update(float diff)
{
    static const float m_FixedTimeStep = 0.005f;
    static const uint32 MaxSteps = 200;

    //Updates the fps
    cWorldInternal::Update(diff);

    if (diff > MaxSteps * m_FixedTimeStep)
    {
        Warning("cBulletWorld::Update() diff > MaxTimestep. Determinism will be lost");
    }

    pBulletWorld->stepSimulation(diff, MaxSteps, m_FixedTimeStep);
}
But I also tried pBulletWorld->stepSimulation(diff, 0, 0), which according to http://www.bulletphysics.org/mediawiki-1.5.8/index.php/Stepping_the_World should have done the trick, but to no avail.
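My understanding of that wiki page is that passing 0 for maxSubSteps means Bullet treats whatever timestep you pass as an already-fixed step, so the fixed stepping has to be driven by your own accumulator, something like this (a sketch of the idea, not my actual code):

void Update(float diff)
{
    static const float m_FixedTimeStep = 0.005f;
    static float accumulator = 0.0f;

    //Updates the fps
    cWorldInternal::Update(diff);

    accumulator += diff;
    while (accumulator >= m_FixedTimeStep)
    {
        //maxSubSteps = 0 tells Bullet "this is already a fixed step, don't subdivide or interpolate"
        pBulletWorld->stepSimulation(m_FixedTimeStep, 0);
        accumulator -= m_FixedTimeStep;
    }
}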

Answering my own question for anyone else who stumbles upon this.
Basically, if you want deterministic recording and playback you need to lock the frame rate. If the system cannot handle the frame rate you must introduce slowdown or risk desynchronization.
After two weeks of additional research I've decided it is just not possible, due to floating-point inaccuracies and the fact that floating-point arithmetic is not associative.
The only way to have a deterministic engine that relies on floating-point numbers is to have a stable, fixed frame rate. Any change in the frame rate will, over the long term, result in the playback becoming desynchronized.
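In other words, the workable pattern is: advance the simulation only in whole fixed steps, key the recorded events to the step counter rather than to wall-clock time, and apply them on exactly that step during playback. A minimal sketch of the idea (ApplyRecordedEventsForTick() is a stand-in for whatever replays the stored axis states):

const float FixedStep = 1.0f / 60.0f;   //locked simulation rate
const uint32 MaxCatchUpSteps = 5;       //beyond this, accept slowdown
float Accumulator = 0.0f;
uint32 SimTick = 0;                     //events are recorded and played back against this tick

while (Continue)
{
    StandardWindowsPeekMessageProcessing();
    Accumulator += LoopTimer.GetElapsedTime();

    uint32 Steps = 0;
    while (Accumulator >= FixedStep && Steps < MaxCatchUpSteps)
    {
        ApplyRecordedEventsForTick(SimTick); //replay input stored against this tick
        UpdateAll(FixedStep);                //always the same dt -> deterministic
        Accumulator -= FixedStep;
        ++SimTick;
        ++Steps;
    }

    //If we hit MaxCatchUpSteps, drop the backlog: the game slows down,
    //but the tick sequence (and therefore the replay) stays identical.
    if (Steps == MaxCatchUpSteps)
    {
        Accumulator = 0.0f;
    }
}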

Related

C++ Hot loop makes timing and function accurate but takes 20% CPU

Hey guys, this is my first question on Stack Overflow, so if I've done something wrong, my bad.
I have a program which is designed to make precise mouse movements at specific times. It calculates the timing using a few hard-coded variables and a timing function which works in microseconds for accuracy. The program works as intended and makes the correct movements at the correct times, etc.
The only problem is that the sleeping function I am using is a hot loop (as in, it's a while loop without a sleep), so while the program is executing the movements it can take up to 20% CPU usage. The context is a game, and this can drop the in-game FPS from 60 down to 30 with lots of stuttering, making the game unplayable.
I am still learning C++, so any help is greatly appreciated. Below are some snippets of my code to show what I am trying to explain.
This is where the sleep is called, for some context:
void foo(/* parameters and stuff, not really important */)
{
    // ...doing a bunch of movement stuff here, not important...

    // After doing its first movement of many, it calls this sleep function (Time::Sleep)
    // from an if statement so it knows how long to sleep before executing the next movement.
    if (repeat_delay - animation > 0) Time::Sleep(repeat_delay - animation, excess);
}
Now here is the actual sleeping function which, according to the Visual Studio performance profiler, is using all my resources. All of the parameters in this function are accounted for already; like I said before, the code works perfectly apart from performance.
#include "Time.hpp"
#include <windows.h>
namespace Time
{
void Sleep(int64_t sleep_ms, std::chrono::time_point<std::chrono::steady_clock> start)
{
sleep_ms *= 1000;
auto truncated = (sleep_ms - std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now() - start).count()) / 1000;
while (std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now() - start).count() < sleep_ms)
{
if (truncated)
{
std::this_thread::sleep_for(std::chrono::milliseconds(truncated));
truncated = 0;
}
/*
I have attempted putting even a 1 microsecond sleep in here, which brings CPU usage down to
0.5%
which is great, but my movements slowed right down, and even after attempting to speed up
the movements manually by altering a the movement functions mouse speed variable, it just
makes the movements inaccurate. How can I improve performance here without sacrificing
accuracy
*/
}
}
}
Why did you write a sleep function? Just use std::this_thread::sleep_for as it doesn't use any resources and is reasonably accurate.
Its accuracy might depend on the platform. On my Windows 10 PC it is accurate to within 1 millisecond, which should be suitable for durations over 10 ms (= 100 fps).
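If you really do need sub-millisecond precision without the busy-wait, a common compromise is to sleep_until most of the interval and only spin for the last millisecond or so. A rough sketch (not a drop-in replacement for your function):

#include <chrono>
#include <thread>

// Sleep until `deadline`, using a coarse OS sleep for the bulk of the wait
// and a short spin only for the final stretch.
void PreciseSleepUntil(std::chrono::steady_clock::time_point deadline)
{
    using namespace std::chrono;

    // Coarse phase: stop ~1 ms early so the spin phase absorbs scheduler jitter.
    auto coarse = deadline - milliseconds(1);
    if (steady_clock::now() < coarse)
        std::this_thread::sleep_until(coarse);

    // Fine phase: spin for the remaining <= 1 ms.
    while (steady_clock::now() < deadline)
        std::this_thread::yield();
}

Called as PreciseSleepUntil(start + std::chrono::microseconds(total_us)), this keeps CPU usage near zero for long waits while still hitting the deadline to within the spin's resolution.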

Switching an image at specific frequencies c++

I am currently developing a stimuli provider for the brain's visual cortex as part of a university project. The program is to (preferably) be written in C++, using Visual Studio and OpenCV. The way it is supposed to work is that the program creates a number of threads, one for each of the different frequencies, each running a timer for its respective frequency.
The code looks like this so far:
void timerThread(void *param) {
    t *args = (t*)param;
    int id = args->data1;
    float freq = args->data2;
    unsigned long period = round((double)1000 / (double)freq) - 1;

    while (true) {
        Sleep(period);
        show[id] = 1;
        Sleep(period);
        show[id] = 0;
    }
}
It seems to work okay for some of the frequencies, but others vary quite a lot in frame rate. I have tried creating my own timing function, similar to Arduino's "blinkWithoutDelay" pattern, though this worked very badly. I have also tried the waitKey() function, which behaved much like the Sleep() function used now.
Any help would be greatly appreciated!
You should use timers instead of "sleep" to fix this, as the loop may take more or less time to complete from one iteration to the next.
Restart the timer at the start of the loop and take its value right before the reset - this will give you the time it took for the loop to complete.
If this time is greater than the "period" value, then it means you're late, and you need to execute right away (and even lower the period for the next loop).
Otherwise, if it's lower, it means you need to wait until it is greater.
I personally dislike sleep, and instead keep polling the timer until it's greater.
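A bare-bones version of that idea with std::chrono (just a sketch; show[] is your existing global, and this does spin a core, as described):

#include <chrono>

extern int show[]; // the existing flag array from the question

void timerThread(int id, float freq)
{
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> period(1.0 / freq); // seconds between toggles

    auto next = clock::now() + period;
    int state = 0;
    while (true)
    {
        // Poll the clock instead of sleeping. If we're late we toggle
        // immediately; scheduling the next deadline relative to the
        // previous one keeps the average frequency correct.
        if (clock::now() >= next)
        {
            state = !state;
            show[id] = state;
            next += period;
        }
    }
}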
I suggest looking into "fixed timestep" code, such as the one below. You'll need to put this snippet of code on every thread with varying values for the period (ns) and put your code where "doUpdates()" is.
If you need a "timer" library, since I don't know OpenCV, I recommend SFML (SFML's timer docs).
The following code is from here:
long int start = 0, end = 0;
double delta = 0;
double ns = 1000000.0 / 60.0; // Syncs updates at 60 per second (59 - 61)

while (!quit) {
    start = timeAsMicro();
    delta += (double)(start - end) / ns; // You can skip dividing by ns here and do "delta >= ns" below instead
    end = start;
    while (delta >= 1.0) {
        doUpdates();
        delta -= 1.0;
    }
}
Please mind the fact that in this code, the timer is never reset.
(This may not be completely accurate but is the best assumption I can make to fix your problem given the code you've presented)
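For completeness, timeAsMicro() isn't defined in that snippet; something like this std::chrono-based helper would do (my assumption, not from the original source):

#include <chrono>

// Microseconds since an arbitrary fixed starting point (steady, unaffected by clock changes).
long long timeAsMicro()
{
    using namespace std::chrono;
    return duration_cast<microseconds>(steady_clock::now().time_since_epoch()).count();
}

(Prefer a 64-bit return type over long int so the value never overflows mid-run.)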

running a background process on arduino

I am trying to get my Arduino Mega to run a function in the background while it is also running a bunch of other functions.
The function that I am trying to run in the background determines wind speed from an anemometer. The way it processes the data is similar to an odometer, in that it reads the number of turns the anemometer makes during a set time period and then divides that number of turns by the time to determine the wind speed. The longer the time period I run it over, the more accurate the data I receive, as there is more data to average.
The problem is that there is a bunch of other data that I am also reading into the Arduino, which I would like to read in once a second. This one-second interval is too short for me to get accurate wind readings, as not enough revolutions are completed by the anemometer to give high-accuracy wind data.
Is there a way to have the wind sensor function run in the background and update a global variable once every 5 seconds or so, while the rest of my program runs simultaneously and updates the other data every second?
Here is the code I have for reading the data from the wind sensor. Every time the wind sensor makes a revolution there is a portion where the signal reads as 0; otherwise the sensor reads as an integer larger than 0.
void windmeterturns(){
    startime = millis();
    endtime = startime + 5000;
    windturncounter = 0;
    turned = false;

    unsigned long terminate = startime;
    while (terminate <= endtime) {
        terminate = millis();
        windreading = analogRead(windvelocityPin);
        if (windreading == 0) {
            // count the revolution once, on the transition back to 0
            if (turned == true) {
                windturncounter = windturncounter + 1;
                turned = false;
            }
        }
        else if (windreading >= 1) {
            turned = true;
        }
        delay(5);
    }
}
The rest of the processing takes place in another function, but this is the one I am currently struggling with. Posting the whole code would not really be reasonable here, as it is close to 1000 lines.
The rest of the functions run with a 1-second delay in the loop, but as I have found through trial and error, the delay plus the processing of the other functions means the effective delay is actually longer than a second, and it varies depending on what kind of data I am reading from the other sensors. So I do not think a 5-loop counter will work for timing here.
Let Interrupts do the work for you.
In short, I recommend using a timer interrupt to take the analog reading periodically in the background and update a static volatile variable.
See my answer here, as it is a similar scenario detailing how to use the timer interrupt; you can replace the callback() with your analogRead and increment from above.
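A rough sketch of that idea using the TimerOne library (the pin number, sample period, and 5-second window are placeholders; adapt them to your wiring). Note that analogRead() inside the ISR blocks for roughly 0.1 ms, which is fine at this sample rate:

#include <TimerOne.h>

const int windvelocityPin = A0;          // placeholder: your actual analog pin

volatile unsigned long windturncounter = 0;
volatile bool turned = false;

// Called by Timer1 every 5 ms; replaces the blocking while loop.
void sampleWind()
{
    int windreading = analogRead(windvelocityPin);
    if (windreading == 0)
    {
        if (turned)
        {
            windturncounter++;
            turned = false;
        }
    }
    else
    {
        turned = true;
    }
}

void setup()
{
    Timer1.initialize(5000);             // period in microseconds (5 ms)
    Timer1.attachInterrupt(sampleWind);
}

void loop()
{
    // Every 5 seconds, grab and reset the counter with interrupts briefly off
    // so the read-modify-write is atomic.
    static unsigned long lastReport = 0;
    if (millis() - lastReport >= 5000)
    {
        lastReport = millis();
        noInterrupts();
        unsigned long turns = windturncounter;
        windturncounter = 0;
        interrupts();
        // convert `turns` over 5 s into a wind speed here
    }
    // ...the rest of the once-per-second work runs here as before
}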
Without seeing how the rest of your code is set up, I would try having windturncounter as a global variable, and adding another integer that is incremented on every pass of your main loop (once per second). Then:
// in the main loop
if (iteratorVariable >= 5) {
    iteratorVariable = 0;
    // take your windreading and implement logic here
} else {
    iteratorVariable++;
}
I'm not sure how your anemometer stores data or what other challenges you might be facing, so this may not be a 100% solution, but it would allow you to run the logic from your original post every five seconds.

How can I set tens of thousands of tasks to each trigger at a different defined time?

I'm constructing a data visualisation system that visualises over 100,000 data points (visits to a website) across a time period. The time period (say 1 week) is then converted into simulation time (1 week = 2 minutes in simulation), and a task is performed on each and every piece of data at the specific time it happens in simulation time (the time each visit occurred during the week in real time). With me? =p
In other programming languages (e.g. Java) I would simply set a timer for each datapoint. After each timer completes, it triggers a callback that lets me display that datapoint in my app. I'm new to C++, and unfortunately it seems that timers with callbacks aren't built in. Another approach I would have used in ActionScript, for example, is custom events that are triggered after a specific timeframe. But then again, I don't think C++ has support for custom events either.
In a nutshell: say I have 1000 pieces of data spread across a 60-second period. Each piece of data has its own time in relation to that 60-second period. For example, one needs to trigger something at 1 second, another at 5 seconds, etc.
Am I going about this the right way, or is there a much easier way to do this?
Ps. I'm using Mac OS X, not Windows
I would not use timers to do that. Sounds like you have too many events and they may lie too close to each other. Performance and accuracy may be bad with timers.
A simulation is normally done like this:
You simply run a loop (iterations), and on every iteration you add either a measured amount (for real time) or a constant amount (for non-real time) to your simulation time.
Then you manually check all your events and execute them if they are due.
In your case it would help to have them sorted by execution time, so you would not have to loop through all of them every iteration.
Time measuring can be done with the gettimer() C function for low accuracy, or there are better functions for higher accuracy, e.g. QueryPerformanceCounter() on Windows - I don't know the equivalent for the Mac.
Just make a "timer" mechanism yourself, that's the best, fastest and most flexible way.
-> make an array of events (linked to each object the event happens to) (std::vector in C++/STL)
-> sort the array by time (std::sort in C++/STL)
-> then just loop over the array and trigger the object's action/method when the time falls inside a range.
Roughly that gives in C++:
#include <algorithm>
#include <vector>

// action upon data + data itself
class Object {
public:
    Object(Data d) : data(d) {}
    void Action() { display(data); }
    Data data;
};

// event time + object that acts upon the event
class Event {
public:
    Event(double t, Object o) : time(t), object(o) {}
    // useful for std::sort
    bool operator<(const Event& e) const { return time < e.time; }
    double time;
    Object object;
};

// init
std::vector<Event> myEvents;
myEvents.push_back(Event(1.0, Object(data0)));
//...
myEvents.push_back(Event(54.0, Object(data10000)));
// could be removed if push_back() is guaranteed to be in the correct order
std::sort(myEvents.begin(), myEvents.end());

// the way you handle time... period is for some fuzziness/animation?
const double period = 0.5;
const double endTime = 60;
std::vector<Event>::iterator itLastFirstEvent = myEvents.begin();
for (double currtime = 0.0; currtime < endTime; currtime += 0.1)
{
    for (std::vector<Event>::iterator itEvent = itLastFirstEvent; itEvent != myEvents.end(); ++itEvent)
    {
        if (itEvent->time < currtime - period)
            itLastFirstEvent = itEvent;   // so that the next loop's start is optimised
        else if (itEvent->time < currtime + period)
            itEvent->object.Action();     // action speaks louder than words
        else
            break;                        // as it's sorted, there won't be any more ticks this loop
    }
}
PS: About custom events, you might want to read/search about delegates in C++ and function/method pointers.
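For instance, with C++11 you can store an std::function per event instead of a fixed Object type, which gets you most of what "custom events with callbacks" would give you elsewhere (a small illustrative sketch):

#include <algorithm>
#include <functional>
#include <vector>

// One scheduled action: fire `action` at `time` (seconds into the simulation).
struct TimedCallback
{
    double time;
    std::function<void()> action;   // lambda, bound member function, or free function
    bool operator<(const TimedCallback& other) const { return time < other.time; }
};

void buildSchedule(std::vector<TimedCallback>& schedule)
{
    schedule.push_back({1.0, [] { /* display datapoint 0 */ }});
    schedule.push_back({5.0, [] { /* display datapoint 1 */ }});
    std::sort(schedule.begin(), schedule.end());
    // the main loop then calls schedule[i].action() when its time comes
}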
If you are using native C++, you should look at the Timers section of the Windows API on the MSDN website. They should tell you exactly what you need to know.

ASCII DOS Games - Rendering methods

I'm writing an old-school ASCII DOS-prompt game. Honestly, I'm trying to emulate ZZT to learn more about this brand of game design (even if it is antiquated).
I'm doing well: I got my full-screen text mode to work and I can create worlds and move around without problems, BUT I cannot find a decent timing method for my renders.
I know my rendering and pre-rendering code is fast because if I don't add any delay()s or (clock()-renderBegin)/CLK_TCK checks from time.h, the renders are blazingly fast.
I don't want to use delay() because it is, to my knowledge, platform specific, and on top of that I can't run any code while it delays (like user input and processing). So I decided to do something like this:
do {
    if (kbhit()) {
        input = getch();
        processInput(input);
    }
    if (clock()/CLOCKS_PER_SEC - renderTimer/CLOCKS_PER_SEC > RenderInterval) {
        renderTimer = clock();
        render();
        ballLogic();
    }
} while (input != 'p');
Which should in "theory" work just fine. The problem is that when I run this code (setting the RenderInterval to 0.0333 or 30fps) I don't get ANYWHERE close to 30fps, I get more like 18 at max.
I thought maybe I'd try setting the RenderInterval to 0.0 to see if the performance kicked up... it did not. I was (with a RenderInterval of 0.0) getting at max ~18-20fps.
I thought maybe, since I'm continuously calling all these clock() and "divide this by that" operations, I was slowing the CPU down something scary, but when I took the render and ballLogic calls out of the if statement's brackets and set RenderInterval to 0.0 I got, again, blazingly fast renders.
This doesn't make sense to me, since if I leave the if check in, shouldn't it run just as slowly? I mean, it still has to do all the calculations.
BTW I'm compiling with Borland's Turbo C++ V1.01
The best gaming experience is usually achieved by synchronizing with the vertical retrace of the monitor. In addition to providing timing, this will also make the game run smoother on the screen, at least if you have a CRT monitor connected to the computer.
In 80x25 text mode, the vertical retrace (on VGA) occurs 70 times/second. I don't remember if the frequency was the same on EGA/CGA, but am pretty sure that it was 50 Hz on Hercules and MDA. By measuring the duration of, say, 20 frames, you should have a sufficiently good estimate of what frequency you are dealing with.
Let the main loop be something like:
while (playing) {
    // do whatever needs to be done for this particular frame
    VSync();
}
... /* snip */
/* Wait for vertical retrace */
void VSync() {
    while ((inp(0x3DA) & 0x08));
    while (!(inp(0x3DA) & 0x08));
}
clock()-renderTimer > RenderInterval * CLOCKS_PER_SEC
would compute a bit faster, possibly even faster if you pre-compute the RenderInterval * CLOCKS_PER_SEC part.
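That is, something along these lines (sketch):

/* computed once, before the loop */
const float renderIntervalTicks = RenderInterval * CLOCKS_PER_SEC;

/* inside the loop */
if (clock() - renderTimer > renderIntervalTicks) {
    renderTimer = clock();
    render();
    ballLogic();
}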
I figured out why it wasn't rendering right away. The timer that I created is fine; the problem is that clock_t is only accurate to 0.054547 seconds or so, and so I could only render at 18 fps. The way to fix this would be to use a more accurate clock... which is a whole other story.
What about this: you are subtracting y (= renderTimer) from x (= clock()), and both x and y are being divided by CLOCKS_PER_SEC:
clock()/CLOCKS_PER_SEC - renderTimer/CLOCKS_PER_SEC > RenderInterval
Wouldn't it be more efficient to write:
( clock() - renderTimer ) > RenderInterval * CLOCKS_PER_SEC
The very first problem I saw with the division was that you're not going to get a real number from it, since it happens between two long ints. The second problem is that it is more efficient to multiply RenderInterval * CLOCKS_PER_SEC and this way get rid of the divisions, simplifying the operation.
Adding the brackets gives it more legibility. And maybe by simplifying this formula you will see more easily what's going wrong.
As you've spotted with your most recent question, you're limited by CLOCKS_PER_SEC which is only about 18. You get one frame per discrete value of clock, which is why you're limited to 18fps.
You could use the screen's vertical blanking interval for timing; it's traditional for games, as it avoids "tearing" (where half the screen shows one frame and half shows another).