I'm running an OpenCV application in Visual Studio on a Windows 7 machine. As part of the end application, I need a timer running in parallel to the OpenCV application that is executing. The OpenCV application takes real-time video capture as input to an eye blink detection algorithm. The OpenCV code must run continuously and cannot be paused or stopped. However, to find the interval between blinks, I need to have a timer running after each blink, so the timer has to run while blinks are being detected. I have gone through the SetTimer and CreateTimerQueueTimer functions and was unable to get a clear understanding of how to go about this. Is there any other way of running a timer in a C++ program? Any suggestions and solutions will be highly appreciated.
Why do you need a timer to calculate the interval between the blinks? Can't you just store the current time at each blink and subtract it from the previous?
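For example, a minimal sketch of that idea (onBlinkDetected() is a hypothetical hook you would call from your own detection code, not part of OpenCV):

#include <chrono>
#include <iostream>

// Timestamp of the previous blink; steady_clock is monotonic,
// so it is not affected by system clock changes.
static std::chrono::steady_clock::time_point lastBlink;
static bool haveLastBlink = false;

void onBlinkDetected()   // hypothetical: call this whenever a blink is detected
{
    auto now = std::chrono::steady_clock::now();
    if (haveLastBlink) {
        auto intervalMs =
            std::chrono::duration_cast<std::chrono::milliseconds>(now - lastBlink).count();
        std::cout << "interval between blinks: " << intervalMs << " ms\n";
    }
    lastBlink = now;
    haveLastBlink = true;
}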
With C++11 you can use std::chrono to make a simple timer:
#include <chrono>
#include <iostream>

auto start = std::chrono::high_resolution_clock::now();
// do processing here
auto end = std::chrono::high_resolution_clock::now();
std::cout << "time was "
          << std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count()
          << " ns\n";
Edit: in response to the comment on the other answer, you could do the following:
...
auto take = end - start;
if (take > std::chrono::nanoseconds(x)) {
    // ... do whatever you want here
}
Oh and one thing to mention is that you can replace the nanoseconds with any other time unit.
I use clock_gettime() in Linux and QueryPerformanceCounter() in Windows to measure time. When measuring time, I encountered an interesting case.
First, I'm calculating DeltaTime in an infinite while loop. This loop calls some update functions. To be able to measure DeltaTime, the program waits for 40 milliseconds in an Update function, because the update functions are still empty.
Then, in the program compiled as Win64-Debug, I measure DeltaTime. It's approximately 0.040f, and it stays that way as long as the program is running (Win64-Release behaves the same). It runs correctly.
But in the program compiled as Linux64-Debug or Linux64-Release, there is a problem.
When the program starts running, everything is normal: DeltaTime is approximately 0.040f. But after a while, DeltaTime comes out as 0.12XXf or 0.132XX, and immediately after that it is 0.040f again. And so on.
I thought I was using QueryPerformanceCounter correctly and using clock_gettime() incorrectly. Then I decided to try it with the standard library std::chrono::high_resolution_clock, but it's the same. No change.
#define MICROSECONDS (1000*1000)

auto prev_time = std::chrono::high_resolution_clock::now();
decltype(prev_time) current_time;

while (1)
{
    current_time = std::chrono::high_resolution_clock::now();
    int64_t deltaTime = std::chrono::duration_cast<std::chrono::microseconds>(current_time - prev_time).count();
    printf("DeltaTime: %f", deltaTime / (float)MICROSECONDS);
    NetworkManager::instance().Update();
    prev_time = current_time;
}
void NetworkManager::Update()
{
    auto start = std::chrono::high_resolution_clock::now();
    decltype(start) end;

    while (1)
    {
        end = std::chrono::high_resolution_clock::now();
        int64_t y = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
        if (y / (float)MICROSECONDS >= 0.040f)
            break;
    }
    return;
}
(Two screenshots were attached here: one showing the normal output and one showing the problem.)
Possible causes:
Your clock_gettime is not using the vDSO and is a system call instead - this will be visible if you run the program under strace, and it can be configured on modern kernel versions.
Your thread gets preempted (taken off the CPU by the scheduler). To run a clean experiment, run your app with real-time priority and pinned to a specific CPU core (see the sketch after this list).
Also, I would disable CPU frequency scaling when experimenting.
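A minimal sketch of the pinning/priority part, assuming Linux with glibc and a pthread-based program (the core index and the use of SCHED_FIFO are just examples; raising the priority usually requires root or CAP_SYS_NICE):

#include <pthread.h>
#include <sched.h>

void pin_and_prioritize_current_thread(int core)
{
    // Pin the calling thread to a single CPU core
    // (pthread_setaffinity_np is a GNU extension).
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    // Give it real-time (SCHED_FIFO) priority so the scheduler preempts it less.
    sched_param sp{};
    sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
}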
I am currently developing a stimuli provider for the brain's visual cortex as part of a university project. The program is to (preferably) be written in C++, using Visual Studio and OpenCV. The way it is supposed to work is that the program creates a number of threads, one for each of the different frequencies, each running a timer for its respective frequency.
The code looks like this so far:
void timerThread(void *param) {
    t *args = (t*)param;
    int id = args->data1;
    float freq = args->data2;
    unsigned long period = round((double)1000 / (double)freq) - 1;

    while (true) {
        Sleep(period);
        show[id] = 1;
        Sleep(period);
        show[id] = 0;
    }
}
It seems to work okay for some of the frequencies, but others vary quite a lot in frame rate. I have tried to look into creating my own timing function, similar to what is done in Arduino's "blinkWithoutDelay" example, but this worked very badly. I have also tried the waitKey() function, which behaved much like the Sleep() function used now.
Any help would be greatly appreciated!
You should use timers instead of "sleep" to fix this, as sometimes the loop may take more or less time to complete.
Restart the timer at the start of the loop and take its value right before the reset- this'll give you the time it took for the loop to complete.
If this time is greater than the "period" value, then it means you're late, and you need to execute right away (and even lower the period for the next loop).
Otherwise, if it's lower, then it means you need to wait until it is greater.
I personally dislike sleep, and instead constantly restart the timer until it's greater.
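A minimal sketch of that compare-against-the-period idea using std::chrono (the running flag and flashOnce() are placeholders for your own stop condition and show[id] logic):

#include <atomic>
#include <chrono>

std::atomic<bool> running{true};   // your own stop flag
void flashOnce();                  // placeholder: e.g. toggle show[id]

void timerLoop(float freq)
{
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(1.0 / freq));
    auto next = clock::now() + period;

    while (running) {
        if (clock::now() >= next) {
            flashOnce();       // do one on/off step here
            next += period;    // schedule from the previous deadline, so no drift accumulates
        }
    }
}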
I suggest looking into "fixed timestep" code, such as the one below. You'll need to put this snippet of code on every thread with varying values for the period (ns) and put your code where "doUpdates()" is.
If you need a "timer" library, since I don't know OpenCV, I recommend SFML (SFML's timer docs).
The following code is from here:
long int start = 0, end = 0;
double delta = 0;
double ns = 1000000.0 / 60.0; // Syncs updates at 60 per second (59 - 61)

while (!quit) {
    start = timeAsMicro();
    delta += (double)(start - end) / ns; // You can skip dividing by ns here and do "delta >= ns" below instead //
    end = start;
    while (delta >= 1.0) {
        doUpdates();
        delta -= 1.0;
    }
}
Please mind the fact that in this code, the timer is never reset.
(This may not be completely accurate but is the best assumption I can make to fix your problem given the code you've presented)
I'm currently making a small console game. At the end of the game loop is another loop that doesn't release until 1/100s after the iteration's begin time.
Of course that uses up a lot of CPU, so I placed
Sleep(1);
at the end to solve it. I thought everything was right until I ran the game on a 2005 XP laptop... and it was really slow.
When I removed the Sleep command, the game worked perfectly on both computers, but now I have the CPU usage problem.
Does anyone have a good solution for this?
So I found out that the problem was the Windows NT (2000, XP, 2003) sleep granularity, which was around 15 ms. If anyone else struggles with this type of problem, here's how to solve it:
timeBeginPeriod(1); //from windows.h
Call it once at the beginning of the main() function. This affects a few things, including Sleep(), so that it actually 'sleeps' for close to the requested millisecond.
timeEndPeriod(1); //on exit
Of course, I was developing the game on Windows 7 the whole time and thought everything was right, so apparently Windows 6.0+ removed this problem... but it's still useful, considering that a lot of people still use XP.
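Put together, a minimal sketch of that setup (linking against winmm is an assumption about the build configuration):

#include <windows.h>
#pragma comment(lib, "winmm.lib")   // timeBeginPeriod/timeEndPeriod live in winmm

int main()
{
    timeBeginPeriod(1);   // request 1 ms timer granularity for the whole process
    // ... game loop that relies on Sleep(1) goes here ...
    timeEndPeriod(1);     // restore the previous granularity on exit
    return 0;
}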
You should use std::this_thread::sleep_for in header <thread> for this, along with std::chrono stuff. Maybe something like this:
while (...)
{
    auto begin = std::chrono::steady_clock::now();

    // your code

    auto end = std::chrono::steady_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(end - begin);
    std::this_thread::sleep_for(std::chrono::milliseconds(10) - duration);
}
If your code doesn't consume much time during one iteration or if each iteration takes constant time, you can leave alone the measuring and just put there some constant:
std::this_thread::sleep_for(std::chrono::milliseconds(8));
Sounds like the older laptop just takes more time to do all your processing, and then it sleeps for 1 millisecond on top of that.
You should include a library that tells time.
Get the current time at the start of the program / start of the loop, then at the end of the loop / program compare the difference between the current time and your recorded starting time to the amount of time you want. If it's lower than the amount of time you want (let's say 8 milliseconds), tell it to sleep for minimumTime - (currentTime - recordedTime), where recordedTime is the variable you set at the start of the loop.
I've done this for my own game in SDL2: SDL_GetTicks() just returns the number of milliseconds the program has been running, and "frametime" is the time at the start of the main game loop. This is how I keep my game running at a maximum of 60 fps. This if statement should be modified and placed at the bottom of your program.
if( SDL_GetTicks() - frametime < MINFRAMETIME )
{
    SDL_Delay( MINFRAMETIME - ( SDL_GetTicks() - frametime ) );
}
I think the standard library equivalent would be something like this (note that clock() counts in CLOCKS_PER_SEC units rather than milliseconds, and std::this_thread::sleep_for stands in here for a platform sleep call):
long elapsedMs = ( clock() - lastCheck ) * 1000 / CLOCKS_PER_SEC;   // needs <ctime>
if( elapsedMs < MIN_TIME )
{
    std::this_thread::sleep_for( std::chrono::milliseconds( MIN_TIME - elapsedMs ) );   // needs <thread>, <chrono>
}
How would you wait a frame in C++?
I don't want the program to sleep or anything.
It would go something like this:
Do this in this frame (1)
Continue with rest of program
Do this in the next frame (2)
where action 1 happens only in the first frame and action 2 happens only in the next frame. It would continue like this: 1, 2, 1 again, 2.
I have the time between frames. I use C++ and I'm using Visual Studio 2008 to compile.
Edit:
I'm using OpenGL and my OS is Windows 7.
Frame - http://en.wikipedia.org/wiki/Frame_rate
i.e. each image of the scene drawn to the screen over a given time period.
I'm making some assumptions here.
Suppose you have a model for which you wish to show the state. You might wish to maximise the CPU time spent evolving the model rather than rendering.
So you fix the target frame rate, at e.g. 25 fps.
Again, assume you have optimised rendering so that it can be done in much less than 0.04 seconds.
So you might want something like (pseudo-code):
Time lastRenderTime = now();
while (forever)
{
    Time current = now();
    if (current - lastRenderTime > 0.04)
    {
        renderEverything();
        lastRenderTime = current;
    }
    else
    {
        evolveModelABit();
    }
}
Of course, you probably have an input handler to break the loop. Note that this approach assumes that you do not want the model evolution to be affected by elapsed real time. If you do, and many games do, then pass the current time to evolveModelABit().
For time functions on Windows, you can use:
LARGE_INTEGER frequency; // ticks per second
LARGE_INTEGER t1; // ticks
QueryPerformanceFrequency(&frequency);
QueryPerformanceCounter(&t1);
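To turn two such samples into elapsed seconds, take a second reading and divide the tick difference by the frequency (a minimal sketch):

LARGE_INTEGER t2;
QueryPerformanceCounter(&t2);
double elapsedSeconds =
    (double)(t2.QuadPart - t1.QuadPart) / (double)frequency.QuadPart;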
Note that this approach is suitable for a scientific-type simulation. The model evolution will not depend on the frame rate, rendering, etc., and gives the same result every time.
For a game, typically there is a push for maximising the fps. This means that the main loop is of the form:
Time lastRenderTime = now();
while (forever)
{
    Time current = now();
    evolveModelABit(current, lastRenderTime);
    renderEverything();
    lastRenderTime = current;
}
If V-Sync is enabled, SwapBuffers will block the current thread until the next frame has been shown. So if you create a worker thread and release a lock, or resume its execution, right before the call to SwapBuffers, your program receives the CPU time it would otherwise yield to the rest of the system during the wait-for-swap block. If the worker thread is manipulating GPU resources, it is a good idea to use high-resolution performance counters to determine how much time is left until the swap, minus some margin, and use this timing in the worker thread so that it puts itself to sleep at about the time the swap happens; that way the GPU will not have to context switch between the worker and the renderer thread.
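A very rough sketch of that scheme (the 60 Hz frame time, the 2 ms margin, and the loop structure are assumptions, not a complete renderer):

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool frameReady = false;

void workerLoop()
{
    using clock = std::chrono::steady_clock;
    const auto frame  = std::chrono::microseconds(16667); // ~60 Hz, assumed
    const auto margin = std::chrono::milliseconds(2);     // stop early, before the swap
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return frameReady; });          // woken right before SwapBuffers
        frameReady = false;
        lock.unlock();

        auto deadline = clock::now() + frame - margin;
        while (clock::now() < deadline) {
            // do background work in small chunks here
        }
    }
}

// In the render thread, once per frame:
//   { std::lock_guard<std::mutex> lock(m); frameReady = true; }
//   cv.notify_one();
//   SwapBuffers(hdc);   // blocks here while waiting for the vertical sync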
I'm looking for a way to be able to know how much time it's been since my program was started, at any given time. A sort of timer that would keep running while the main code is doing everything else, and that can be called at any time.
The context is an OpenGL application on Windows, and as well as knowing which keyboard keys are being pressed (using glutKeyboardFunc), I'd like to know exactly when each key is pressed. All of this info is written into an XML file that will later be used to replay everything the user did (sort of like the replay functionality in a car racing game, but simpler).
C++11:
#include <iostream>
#include <chrono>

auto start = std::chrono::system_clock::now();

// ... the rest of the program runs here ...

auto end = std::chrono::system_clock::now();
std::chrono::duration<double> elapsed_seconds = end - start;
std::cout << "elapsed time: " << elapsed_seconds.count() << "s\n";
Code taken from en.cppreference.com and simplified.
Old answer:
GetTickCount() in windows.h returns the number of milliseconds ("ticks") that have elapsed since the system was started.
When your app starts, call this function and store its value; then whenever you need to know the elapsed time since your program started, call it again and subtract the stored start value.
DWORD start = GetTickCount();           // at program start
DWORD elapsed = GetTickCount() - start; // elapsed milliseconds since the start of your program
You don't need a timer for this: save a timestamp at the start of the app with time(0). Then do the same each time you want to measure, and current_time - init_time gives you the elapsed time (in seconds).
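A minimal sketch of that idea (one-second resolution only):

#include <ctime>

time_t init_time = time(0);                       // at program start
// ...
double elapsed = difftime(time(0), init_time);    // seconds since start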