I thought I was jumping over a puddle, but have instead fallen into an ocean :/
I'm trying to implement a 5-second timer (I don't need resolution finer than milliseconds).
My goal:
// I start the program in gamestate 0...
{
if (button_has_been_pressed == 1)
{
gamestate = 1;
}
}
if (gamestate==1)
{
//wait for 5 seconds and go to gamestate2
gamestate = 2;
}
I've tried the following:
GLUT_ELAPSED_TIME measures time from the beginning of my program. I am unable to 'reset' GLUT_ELAPSED_TIME after entering gamestate 1. Otherwise, it would work wonderfully.
gettimeofday gives me much more resolution than I need. At most, milliseconds would be applicable.
Regardless of my resolution needs, I've tried Song Ho's method:
gamestate1_elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms
gamestate1_elapsedTime += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms
// add that elapsed time together, and keep track of its total
//r_gamestate1_elapsedTime_total = gamestate1_elapsedTime;
//if (r_gamestate1_elapsedTime_total > 5 seconds) ...
However, gamestate1_elapsedTime appears to have some variability to it, because the output is seldom consistent. I guess it's because gettimeofday employs CPU time(?), and I am artificially clamping this with my FPS clamp.
I've tried clock() as well, but that also appears to be CPU time - not wall time.
As mentioned above, GLUT_ELAPSED_TIME works well, except that I am unable to reset it midway through my program, and my 5-seconds is no longer dependent upon my initial button click.
I would deeply appreciate even a nudge in the right direction, if you could lend some advice. Thank you very much in advance.
-kropcke
You don't need to "reset" GLUT_ELAPSED_TIME, you just need to copy it somewhere that you can use as an offset. For example:
int timeAtReset = glutGet(GLUT_ELAPSED_TIME);
// I start the program in gamestate 0...
{
if (button_has_been_pressed == 1)
{
gamestate = 1;
timeAtReset = glutGet(GLUT_ELAPSED_TIME);
}
}
if (gamestate==1)
{
int timeSinceReset = glutGet(GLUT_ELAPSED_TIME) - timeAtReset;
// use timeSinceReset, instead of glutGet(GLUT_ELAPSED_TIME), to
// wait for 5 seconds and go to gamestate2
gamestate = 2;
}
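Since glutGet(GLUT_ELAPSED_TIME) reports milliseconds, the actual 5-second check in gamestate 1 could then be written like this (a small sketch filling in the comment above, reusing the timeAtReset variable):
if (gamestate == 1)
{
    int timeSinceReset = glutGet(GLUT_ELAPSED_TIME) - timeAtReset; // ms since the button press
    if (timeSinceReset >= 5000)   // 5 seconds have elapsed
        gamestate = 2;
}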
Related
#include <chrono>
using namespace std;

void draw();   // defined elsewhere; redraws the game screen

bool IsGameEnded()
{
static int i = 0;
i++;
if (i == 10)
return true;
return false;
}
int main()
{
bool GameEnd = false;
float ElapsedTime = 0;
while(!GameEnd)
{
chrono::steady_clock::time_point StartingTime = chrono::steady_clock::now();
if (ElapsedTime > 10)
{
ElapsedTime = 0;
draw();
}
GameEnd = IsGameEnded();
chrono::steady_clock::time_point EndingTime = chrono::steady_clock::now();
ElapsedTime = ElapsedTime + chrono::duration_cast<chrono::milliseconds>(EndingTime - StartingTime).count();
}
return 0;
}
I want to make a snake game. It will be time-based; for example, the screen will update every 5 seconds or so. For that I used the chrono library. I'm not used to it and am still trying to learn it, so I might have missed something obvious. The problem is that main never gets into the if block, so it draws nothing to the console.
I tried debugging by stepping line by line. That isn't really like a running program, because the time intervals get long, but then it enters the if block every time. Also, if I make the if condition 2 nanoseconds it works too, but since cout can't print that fast I need the interval to be much longer than that. While debugging I also realised that the "StartingTime" and "EndingTime" variables don't seem to get initialised (unless I stop directly on them). The interesting part is that if I add a cout after the if block, after a while the program starts entering the if block.
When you do:
chrono::duration_cast<chrono::milliseconds>(EndingTime - StartingTime).count();
not enough time has passed, and the count of milliseconds always returns 0. This means you always add 0 to ElapsedTime and it never crosses 10.
One fix is to use a smaller resolution:
chrono::duration_cast<chrono::nanoseconds>(EndingTime - StartingTime).count();
as you mentioned in the question, and adjust the if condition appropriately.
However, the best fix would be to change ElapsedTime from a float to a chrono::duration (of the appropriate unit) since that is the unit that the variable represents. This would let you avoid having to do .count() on the duration as well.
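For illustration, a minimal sketch of that duration-based version might look like this (keeping the structure and the threshold of 10 from the question, interpreted here as 10 milliseconds; IsGameEnded and draw are assumed to be the question's functions):
#include <chrono>
using namespace std::chrono;

bool IsGameEnded();   // as in the question
void draw();          // as in the question

int main()
{
    bool GameEnd = false;
    steady_clock::duration ElapsedTime{0};               // a duration instead of a float
    while (!GameEnd)
    {
        steady_clock::time_point StartingTime = steady_clock::now();
        if (ElapsedTime > milliseconds(10))               // compare durations directly, no .count()
        {
            ElapsedTime = steady_clock::duration{0};
            draw();
        }
        GameEnd = IsGameEnded();
        steady_clock::time_point EndingTime = steady_clock::now();
        ElapsedTime += EndingTime - StartingTime;         // sub-millisecond time is no longer truncated away
    }
    return 0;
}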
problem with resetting a clock inside a repeating while loop in c++
I tried to reset the time using time and chrono, but the measured execution time keeps increasing and never resets.
Hello, I'm a student at the Korea Institute of Technology, and I'm using a translator, so please excuse any awkward phrasing.
I'm designing a program that uses the C++ OpenPose library to check whether a PC user's sitting posture is correct.
Basically, we've completed the feature that shows a pop-up to give feedback when the right or left shoulder is tilted.
However, I would like to issue the alert only after the user has held the wrong posture for a certain number of seconds, rather than immediately whenever a wrong posture is detected.
Time should be measured from the moment the user starts sitting in the wrong posture, not from the moment the SUPOSE function starts running; at the moment it is measured from when the function starts. An event should occur if the same wrong posture is held for 20 seconds.
Instead of firing after 20 seconds, the event fires as soon as an incorrect posture is recognized, because the time is never reset. I think it's because of the while loop; when the library breaks out of it, the program exits with code 0.
Once I've solved this time-related function I can move on; I'm asking because I'm having a hard time completing the program.
Thank you.
while (!userWantsToExit)
{
// start frame
std::shared_ptr<std::vector<UserDatum>> datumProcessed;
if (opWrapper.waitAndPop(datumProcessed))
{
userWantsToExit = userOutputClass.display(datumProcessed);
userOutputClass.printKeypoints(datumProcessed);
....
//string to int
int subShoulder = stoi(rShoulderY) - stoi(lShoulderY);
//calc keypoint values for pose data
if (50 < subShoulder || -50 > subShoulder)
{
if (stoi(rShoulderY) < stoi(lShoulderY)) {
clock_t t_start, t_end;
int time;
t_start = clock(); //start clock
time = t_start / CLOCKS_PER_SEC;
op::log(time);
if (time > 20) {
t_end = clock(); //end clock
time = (int)(t_end - t_start) / CLOCKS_PER_SEC;
cv::imshow("SUPOSE", imgLshoulderDown);
std::string id = "hjw";
std::string pose = "leftShoulder";
httpRequestPose(id, pose);
}
}
else if (stoi(rShoulderY) > stoi(lShoulderY)) {
clock_t t_start, t_end;
int time;
t_start = clock(); //start clock
time = t_start / CLOCKS_PER_SEC;
op::log(time);
if (time > 20) {
cv::imshow("SUPOSE", imgRshoulderDown);
std::string id = "hjw";
std::string pose = "rightShoulder";
httpRequestPose(id, pose);
}
t_end = clock();
time = (int)(t_end - t_start) / CLOCKS_PER_SEC;
}
else {
clock_t t_start, t_end;
int time;
t_end = clock();
time = (int)(t_end - t_start) / CLOCKS_PER_SEC;
}
}
}
else {}
//op::log("Processed datum could not be emplaced.", op::Priority::High, __LINE__, __FUNCTION__, __FILE__);
}
Honestly, it is very hard for me to completely understand your question. However, I was once a student who was not good at English (even now), and I know how hard it is to find help with Google-translated paragraphs, so I will give it a try.
As I understand it, you want to trigger an alert if the shoulder keeps inclining to the left for more than 20 seconds (and the same for a right incline). Is that correct?
If that is correct, I think the problem is that you declare the variable t_start inside the pose-detection block. That means every time the pose is detected, t_start is reset to the current time, so (t_end - t_start) is always close to 0. You should declare t_start outside of that code and use a flag to check whether this is the first detection. I would suggest something like the pseudocode below:
bool isLeftIncline = false;
clock_t startLeft=clock(); //get current time
void poseDetect()
{
if (50 < subShoulder || -50 > subShoulder){
if (stoi(rShoulderY) < stoi(lShoulderY)){ // shoulder is inclined to the left
if(!isLeftIncline){ // the first time
startLeft=clock(); // get the starting time
isLeftIncline=true;
//trigger for the first time here
}else { // after the first time
clock_t current=clock();
int timeDiff=(current-startLeft)/CLOCKS_PER_SEC; // convert clock ticks to seconds
if(timeDiff>20){ // after 20 seconds in the same pose
// issue an alert
}
//or trigger alert every nearly 20 seconds
//if(timeDiff%20<3){
//trigger
//}
}
}else {
// the shoulder no longer inclines to the left
// reset isLeftIncline time
isLeftIncline = false;
}
// you can apply the same for the right incline here
}
}
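As a side note: if clock() measures CPU time rather than wall time on your platform (the concern raised in the first question on this page), the same pattern can be sketched with std::chrono::steady_clock instead. This is my substitution, not part of the pseudocode above; onLeftInclineDetected is a hypothetical hook called once per processed frame:
#include <chrono>

bool isLeftIncline = false;
std::chrono::steady_clock::time_point startLeft;          // set when the left incline is first seen

void onLeftInclineDetected()                               // hypothetical hook, called once per processed frame
{
    if (!isLeftIncline) {                                  // first frame in this pose
        startLeft = std::chrono::steady_clock::now();
        isLeftIncline = true;
    } else if (std::chrono::steady_clock::now() - startLeft >= std::chrono::seconds(20)) {
        // the pose has been held for 20 seconds: issue the alert here
    }
}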
I'm trying to implement a MIDI-like clocked sample player.
There is a timer which increments a pulse counter, and every 480 pulses make a quarter note, so the pulse period is 1041667 ns at 120 beats per minute.
The timer is not sleep-based and runs in a separate thread, but the delay time seems inconsistent: the period between samples played in a test file fluctuates by about ±20 ms (on some occasions the period is OK and steady; I can't figure out what this effect depends on).
The audio backend has been ruled out: I've tried OpenAL as well as SDL_mixer.
void Timer_class::sleep_ns(uint64_t ns){
auto start = std::chrono::high_resolution_clock::now();
bool sleep = true;
while(sleep)
{
auto now = std::chrono::high_resolution_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(now - start);
if (elapsed.count() >= ns) {
TestTime = elapsed.count();
sleep = false;
//break;
}
}
}
void Timer_class::Runner(void){
// this running as thread
while(1){
sleep_ns(BPMns);
if (Run) Transport.IncPlaybackMarker(); // marker increment
if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()){ // check if timer have reached end, which is 480 pulses
Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
Player.PlayFile(1); // period of this event fluctuates severely
}
}
}
void Player_class::PlayFile(int FileNumber){
#ifdef AUDIO_SDL_MIXER
if(Mix_PlayChannel(-1, WaveData[FileNumber], 0)==-1) {
printf("Mix_PlayChannel: %s\n",Mix_GetError());
}
#endif // AUDIO_SDL_MIXER
}
Am I doing something wrong in terms of approach? Is there a better way to implement a timer of this kind?
Deviation higher than 4-5 ms is too much in the case of audio.
I see a large error and a small error. The large error is that your code assumes that the main processing in Runner consistently takes zero time:
if (Run) Transport.IncPlaybackMarker(); // marker increment
if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()){ // check if timer have reached end, which is 480 pulses
Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
Player.PlayFile(1); // period of this event fluctuates severely
}
That is, you're "sleeping" for the time you want your loop iteration to take, and then you're doing processing on top of that.
The small error is presuming that you can represent your ideal loop iteration time with an integral number of nanoseconds. This error is so small that it doesn't really matter. However I amuse myself by showing people how they can get rid of this error too. :-)
First let's correct the small error by exactly representing the idealized loop iteration time:
using quarterPeriod = std::ratio<1, 2>;
using iterationPeriod = std::ratio_divide<quarterPeriod, std::ratio<480>>;
using iteration_time = std::chrono::duration<std::int64_t, iterationPeriod>;
I know nothing of music, but I'm guessing the above code is right because if you convert iteration_time{1} to nanoseconds, you get approximately 1041667ns. iteration_time{1} is intended to be the precise amount of time you want each iteration of your loop in Timer_class::Runner to take.
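Just as a sanity check (my addition, not part of the original answer), converting one tick of iteration_time to nanoseconds prints the expected pulse period:
#include <chrono>
#include <cstdint>
#include <iostream>
#include <ratio>

using quarterPeriod = std::ratio<1, 2>;
using iterationPeriod = std::ratio_divide<quarterPeriod, std::ratio<480>>;
using iteration_time = std::chrono::duration<std::int64_t, iterationPeriod>;

int main()
{
    // truncates to 1041666; the exact period is 1041666.67 ns per pulse at 120 BPM
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(iteration_time{1}).count() << "ns\n";
}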
To correct the large error, you need to sleep until a time_point, as opposed to sleeping for a duration. Here's a generic utility to help you do that:
template <class Clock, class Duration>
void
delay_until(std::chrono::time_point<Clock, Duration> tp)
{
while (Clock::now() < tp)
;
}
Now if you code Timer_class::Runner to use delay_until instead of sleep_ns, I think you'll get better results:
void
Timer_class::Runner()
{
auto next_start = std::chrono::steady_clock::now() + iteration_time{1};
while (true)
{
if (Run) Transport.IncPlaybackMarker(); // marker increment
if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()){ // check if timer have reached end, which is 480 pulses
Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
Player.PlayFile(1);
}
delay_until(next_start);
next_start += iteration_time{1};
}
}
I ended up using @Howard Hinnant's version of the delay and reducing the buffer size in openal-soft; that's what made the huge difference. The fluctuation is now about ±5 ms for 16th notes at 120 BPM (125 ms period) and ±1 ms for quarter notes. It leaves a lot to be desired, but I guess it's okay.
The code segment is as follows; the code comes from Chromium. Why is it written this way?
// Initialize initial_ticks and initial_time
void InitializeClock() {
initial_ticks = TimeTicks::Now();
// Initialize initial_time
initial_time = CurrentWallclockMicroseconds();
}// static
Time Time::Now() {
if (initial_time == 0)
InitializeClock();
// We implement time using the high-resolution timers so that we can get
// timeouts which are smaller than 10-15ms. If we just used
// CurrentWallclockMicroseconds(), we'd have the less-granular timer.
//
// To make this work, we initialize the clock (initial_time) and the
// counter (initial_ctr). To compute the initial time, we can check
// the number of ticks that have elapsed, and compute the delta.
//
// To avoid any drift, we periodically resync the counters to the system
// clock.
while (true) {
TimeTicks ticks = TimeTicks::Now();
// Calculate the time elapsed since we started our timer
TimeDelta elapsed = ticks - initial_ticks;
// Check if enough time has elapsed that we need to resync the clock.
if (elapsed.InMilliseconds() > kMaxMillisecondsToAvoidDrift) {
InitializeClock();
continue;
}
return Time(elapsed + Time(initial_time));
}
}
I assume your answer lies in the comment of the code you pasted:
// We implement time using the high-resolution timers so that we can get
// timeouts which are smaller than 10-15ms. If we just used
// CurrentWallclockMicroseconds(), we'd have the less-granular timer.
So Now() gives a high-resolution time value, which is beneficial when you need finer resolution than 10-15 ms, as they state in the comment. For instance, if you want to reschedule a task every 100 ns you need the higher resolution, and if you want to measure the execution time of something, 10-15 ms is an eternity.
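The same idea can be sketched with standard C++ clocks (my illustration of the technique, not Chromium's actual implementation; all names here are hypothetical): take a wall-clock reading once, then add the high-resolution monotonic delta to it, and re-anchor periodically to limit drift.
#include <chrono>

namespace hybrid_clock {
    using namespace std::chrono;

    system_clock::time_point initial_time;     // coarse but "real" wall-clock anchor
    steady_clock::time_point initial_ticks;    // high-resolution monotonic anchor
    constexpr auto kMaxDriftWindow = seconds(60);

    void InitializeClock() {
        initial_ticks = steady_clock::now();
        initial_time  = system_clock::now();
    }

    system_clock::time_point Now() {
        if (initial_time == system_clock::time_point{})
            InitializeClock();
        for (;;) {
            auto elapsed = steady_clock::now() - initial_ticks;
            if (elapsed > kMaxDriftWindow) {   // re-anchor to the system clock to avoid drift
                InitializeClock();
                continue;
            }
            return initial_time + duration_cast<system_clock::duration>(elapsed);
        }
    }
}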
I have a game with Bullet Physics as the physics engine. The game is online multiplayer, so I thought I'd try the Source Engine approach to dealing with physics sync over the net. In the client I use GLFW, so the FPS limit works there by default (at least I think GLFW is the reason). But on the server side there are no graphics libraries, so I need to "lock" the loop that simulates the world and steps the physics engine to 60 "ticks" per second.
Is this the right way to lock a loop so it runs 60 times a second (a.k.a. 60 "fps")?
void World::Run()
{
m_IsRunning = true;
long limit = (1 / 60.0f) * 1000;
long previous = milliseconds_now();
while (m_IsRunning)
{
long start = milliseconds_now();
long deltaTime = start - previous;
previous = start;
std::cout << m_Objects[0]->GetObjectState().position[1] << std::endl;
m_DynamicsWorld->stepSimulation(1 / 60.0f, 10);
long end = milliseconds_now();
long dt = end - start;
if (dt < limit)
{
std::this_thread::sleep_for(std::chrono::milliseconds(limit - dt));
}
}
}
Is it OK to use std::thread for this task?
Is this approach efficient enough?
Will the physics simulation be stepped 60 times a second?
P.S
The milliseconds_now() looks like this:
long long milliseconds_now()
{
static LARGE_INTEGER s_frequency;
static BOOL s_use_qpc = QueryPerformanceFrequency(&s_frequency);
if (s_use_qpc) {
LARGE_INTEGER now;
QueryPerformanceCounter(&now);
return (1000LL * now.QuadPart) / s_frequency.QuadPart;
}
else {
return GetTickCount();
}
}
Taken from: https://gamedev.stackexchange.com/questions/26759/best-way-to-get-elapsed-time-in-miliseconds-in-windows
If you want to limit the rendering to a maximum FPS of 60, it is very simple:
Each frame, just check whether the game is running too fast; if so, just wait. For example:
while ( timeLimitedLoop )
{
float framedelta = ( timeNow - timeLast );
timeLast = timeNow;
for each ( ObjectOrCalculation myObjectOrCalculation in allItemsToProcess )
{
myObjectOrCalculation->processThisIn60thOfSecond(framedelta);
}
render(); // if display needed
}
Please note that if vertical sync is enabled, rendering will already be limited to the frequency of your vertical refresh (perhaps 50 or 60 Hz).
If, however, you want the logic locked at 60 fps, that's a different matter: you will have to separate your display and logic code so that the logic runs at a maximum of 60 fps, and modify the code so that you have a fixed time-interval loop and a variable time-interval loop (as above). Good sources to look at are "fixed timestep" and "variable timestep" (Link 1, Link 2, and the old trusty Google search); a sketch of the fixed-timestep pattern is shown below.
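To illustrate the fixed-timestep pattern (my own sketch of the commonly described accumulator approach, not code from the question; updateLogic and render are hypothetical), the logic is stepped in constant 1/60 s increments while rendering runs as fast as the loop allows:
#include <chrono>

void updateLogic(float dtSeconds);   // hypothetical: advances the simulation by one fixed step
void render();                       // hypothetical: draws the current state

void FixedTimestepLoop()
{
    using clock = std::chrono::steady_clock;
    const std::chrono::nanoseconds step(16666667);   // ~1/60 s per logic tick
    auto previous = clock::now();
    std::chrono::nanoseconds accumulator(0);

    while (true)
    {
        auto now = clock::now();
        accumulator += now - previous;                // real time since the last frame
        previous = now;

        while (accumulator >= step)                   // run as many fixed steps as needed to catch up
        {
            updateLogic(1.0f / 60.0f);
            accumulator -= step;
        }
        render();                                     // variable-rate rendering
    }
}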
Note on your code:
Because you sleep once for the whole remaining part of the 1/60th of a second (the limit minus the already elapsed time), you can easily miss the correct timing; change the sleep to a loop that runs as follows:
instead of
if (dt < limit)
{
std::this_thread::sleep_for(std::chrono::milliseconds(limit - dt));
}
change to
while (dt < limit)
{
    // sleep in small, fine-grained steps (1 ms here, or whatever step you desire)
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
    dt = milliseconds_now() - start;   // re-check the elapsed time each iteration
}
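Alternatively (my own suggestion, along the lines of the delay_until idea in the earlier answer on this page), sleeping until an absolute deadline rather than for a relative duration keeps timing errors from accumulating across ticks:
#include <chrono>
#include <thread>

void RunAt60Ticks()
{
    using clock = std::chrono::steady_clock;
    const auto step = std::chrono::nanoseconds(16666667);   // ~1/60 s
    auto next = clock::now() + step;
    while (true)
    {
        // ... step the simulation here, e.g. m_DynamicsWorld->stepSimulation(1 / 60.0f, 10);
        std::this_thread::sleep_until(next);                 // absolute deadline for this tick
        next += step;
    }
}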
Hope this helps, however let me know if you need more info:)