Wait function for one section of code? - c++

I'm trying to pause one section of code. I've tried a busy for loop, which isn't very effective, and I only want to pause that one section.
Would wait() work, or does that pause the whole program?
I'm making a game for my course, and the only way I can think of to make the AI move at a set speed is to pause them and move every 1-3 seconds (for example).
Does anyone have a solution for me?
Edit: I was apparently being an idiot. Problem solved without any wait() functions. Thanks for all the help!

It depends on how you are running the AI. If the AI has a thread that you can pause, that may work. But I personally would go for another approach:
Try to make the velocity relative to the delta time of the previous frame.
Movement += velocity * dt; // velocity per second * time of previous frame in seconds
Or have a fixed framerate and use that magic number (your fixed delta time) to achieve the same effect as before.
(sleep/Sleep take milliseconds.)
Or use a counter and an if to run that section of your code only at certain moments:
...
time += dt;
if (time > 3.0f) {
    ...
    time = 0.0f;
}
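Putting the two ideas together, here is a minimal self-contained sketch; the AI struct, its fields, and the 3-second interval are made-up illustrations, not anything from the question:

// Hypothetical AI state; your game will have its own representation.
struct AI {
    float x = 0.0f, y = 0.0f;    // position
    float vx = 1.0f, vy = 0.0f;  // velocity in units per second
    float timer = 0.0f;          // accumulated frame time
};

// Called once per frame with dt = seconds elapsed since the previous frame.
void updateAI(AI& ai, float dt) {
    // Framerate-independent movement: velocity per second * frame time.
    ai.x += ai.vx * dt;
    ai.y += ai.vy * dt;

    // Run the "paused" section every 3 seconds without blocking anything.
    ai.timer += dt;
    if (ai.timer > 3.0f) {
        // ... pick a new direction, attack, etc. ...
        ai.timer = 0.0f;
    }
}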

Related

C++ Run only for a certain time

I'm writing a little game in C++ at the moment.
My game's while loop is always active, and in this loop I have a condition for when the player is shooting.
Now I face the following problem:
after every shot fired there is a delay, this delay changes over time, and during the delay the player should still be able to move.
shoot
move
wait 700 ms
shoot again
At the moment I'm using Sleep(700). The problem is that I can't move during those 700 ms. I need something like a timer, so that the move command keeps executing during the 700 ms instead of everything waiting for 700 ms.
This depends on how your hypothetical 'sleep' is implemented. There are a few things you should know, as it can be solved in a few ways.
You don't want to put your thread to sleep because then everything halts, which is not what you want.
Plus sleep may give you more time than you asked for. For example, if you sleep for 700ms you may get more than that, which means that if you depend on accurate times you may get burned by this.
1) The first way would be to record the raw time inside of the player. This is not the best approach, but it would work for a simple toy program: record the result of std::chrono::high_resolution_clock::now() (check #include <chrono> or see here) inside the class at the time you fire. To check whether you can fire again, just compare the value you stored to ...::now() and see if 700ms has elapsed. You will have to read the documentation to work with it in milliseconds.
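A minimal sketch of that idea, assuming a Player type of your own design (all names here are hypothetical); steady_clock is used instead of high_resolution_clock because it is guaranteed never to run backwards:

#include <chrono>

// Hypothetical player type that remembers when it last fired.
struct Player {
    std::chrono::steady_clock::time_point lastShot{};

    bool canFire() const {
        // True once 700ms have elapsed since the previous shot.
        return std::chrono::steady_clock::now() - lastShot
               >= std::chrono::milliseconds(700);
    }

    void fire() {
        lastShot = std::chrono::steady_clock::now();
        // ... spawn the projectile ...
    }
};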
2) A better way would be to give your game a pulse via something called 'game ticks', which is the pulse to which your world moves forward. Then you can store the gametick that you fired on and do something similar to the above paragraph (except now you are just checking if currentGametick > lastFiredGametick + gametickUntilFiring).
For the gametick idea, you would make sure you do gametick++ every X milliseconds, and then run your world. A common value is somewhere between 10ms and 50ms.
Your game loop would then look like
while (!exit) {
    readInput();
    if (ticker.shouldTick()) {
        ticker.tick();
        world.tick(ticker.gametick);
    }
    render();
}
The above has the following advantages:
You only update the world every gametick
You keep rendering between gameticks, so you can have smooth animations since you will be rendering at a very high framerate
If you want to halt, just spin in a while loop until the amount of time has elapsed
Now, this has glossed over a significant amount of discussion; you should definitely read this if you are thinking of going the gametick route.
With whatever route you take, you probably need to read this.
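The answer leaves ticker unspecified; here is one minimal way shouldTick()/tick() could look, assuming a fixed 50ms gametick (an assumption within the 10-50ms range mentioned above):

#include <chrono>
#include <cstdint>

// Minimal ticker: the gametick advances whenever 50ms of real time has passed.
struct Ticker {
    std::chrono::steady_clock::time_point nextTick = std::chrono::steady_clock::now();
    std::chrono::milliseconds period{50};   // assumed tick length
    std::uint64_t gametick = 0;

    bool shouldTick() const {
        return std::chrono::steady_clock::now() >= nextTick;
    }

    void tick() {
        ++gametick;
        nextTick += period;  // schedule from the previous deadline, not from "now",
                             // so late frames don't slow the average tick rate
    }
};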

Do I need Threading Python Computer Vision

Do I need threading? I figured I could just write an entire separate Python file and run it concurrently, but that didn't seem very elegant.
Example of Code
while True:
    cap screen
    process screen
    if standard deviation > threshold_std:
        do something
    if movement not detected after 20 seconds:
        do something
        start monitoring for standard deviation again
Not sure if there is some way to accomplish this. Thanks guys!
I presume you "cap screen" periodically, probably many times per second. So, even though you don't detect movement, the loop is still running, and you can compare current time with last time you detected something, and if the difference is > 20 seconds then "do something". No need for threads there. – Dan Mašek
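That question is about Python, but the pattern in the comment is language-agnostic; sketched here in C++ to match the rest of this thread, with captureAndDetectMovement() and doSomething() as hypothetical stand-ins for the capture and processing steps:

#include <chrono>

bool captureAndDetectMovement();  // hypothetical: grab a frame, return true on movement
void doSomething();               // hypothetical

void monitorLoop() {
    using clock = std::chrono::steady_clock;
    auto lastMovement = clock::now();

    while (true) {
        if (captureAndDetectMovement()) {
            lastMovement = clock::now();
        } else if (clock::now() - lastMovement > std::chrono::seconds(20)) {
            doSomething();
            lastMovement = clock::now();  // start monitoring again
        }
    }
}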

GDB, how to pause system clock also?

I have to debug a program running animations with gdb, but when I pause, the next animation frame is computed as if no pause had occurred (I mean, it calculates the delta between the previous frame's tick and the next).
Is there a way to make gdb "pause" the system clock for the program (for example, to execute step by step while manually setting the number of ticks per step, or something similar)?
There is no direct way to do this. But it doesn't matter too much, because the idea of using a realtime clock to drive animation is faulty to begin with. For example, if you put your computer to sleep at 1 PM and wake it up at 2 PM, is it supposed to draw one hour worth of animation? This is, by the way, a real bug that existed in Adobe Flash about ten years ago.
You may want to add an animation clock to your program. You can update it from the system clock by default at the start of each redraw cycle, but you could then add a mechanism to use other clocks, including a "stepwise" one that just goes as fast as it can (which, if running stepwise in gdb, will not be fast at all).
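A sketch of such an animation clock, assuming the redraw cycle asks it for each frame's delta; the names and the fixed step size are illustrative, not from any particular library:

#include <chrono>

// Animation clock interface: the renderer asks it how much animation time
// has passed, instead of reading the system clock directly.
struct AnimClock {
    virtual double nextDelta() = 0;  // seconds of animation time for this frame
    virtual ~AnimClock() = default;
};

// Default: driven by the real (steady) clock.
struct RealClock : AnimClock {
    std::chrono::steady_clock::time_point last = std::chrono::steady_clock::now();
    double nextDelta() override {
        auto now = std::chrono::steady_clock::now();
        double dt = std::chrono::duration<double>(now - last).count();
        last = now;
        return dt;
    }
};

// Debug: every frame advances by a fixed step, so pausing in gdb is harmless.
struct StepClock : AnimClock {
    double step;  // e.g. 1.0 / 60.0
    explicit StepClock(double s) : step(s) {}
    double nextDelta() override { return step; }
};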

Windows Sleep inconsistency?

Having a bit of an issue with a game I'm making using OpenGL. The game will sometimes run at half speed and sometimes it will run normally.
I don't think it is the OpenGL causing the problem, since it runs at literally 14,000 fps on my computer (even when it's running at half speed).
This has led me to believe that it is the "game timer" that's causing the problem. The game timer runs on a separate thread and is programmed to pause at the end of its "loop" with a Sleep(5) call. If I remove the Sleep(5) call, it runs so fast that I can barely see the sprites on the screen (predictable behavior).
I tried throwing a Sleep(16) at the end of the Render() thread (also on its own thread). This should limit the fps to around 62. Remember that the app sometimes runs at its intended speed and sometimes at half speed (I have tried on both of the computers that I own and it persists).
When it runs at its intended speed, the fps is 62 (good), and sometimes it's 31-ish (bad). It never switches between half speed and full speed mid-execution, and the problem persists even after a reboot.
So it's not the rendering that's causing the slowness; it's the Sleep() function.
I guess what I'm saying is that the Sleep() function is inconsistent in how long it actually sleeps. Is this a known thing? Is there a better Sleep() function that I could use?
A waitable timer (CreateWaitableTimer and WaitForSingleObject or friends) is much better for periodic wakeup.
However, in your case you probably should just enable VSYNC.
See the following discussion of the Sleep function, focusing on the bit about scheduling priorities:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms686298(v=vs.85).aspx
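A minimal sketch of the waitable-timer approach; error handling is omitted, and the 16ms period (matching the Sleep(16) in the question) is just an example:

#include <windows.h>

void gameTimerLoop(volatile bool& running) {
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);  // auto-reset timer

    LARGE_INTEGER due;
    due.QuadPart = -160000LL;  // first wakeup in 16ms (100ns units; negative = relative time)
    SetWaitableTimer(timer, &due, 16 /* period in ms */, NULL, NULL, FALSE);

    while (running) {
        WaitForSingleObject(timer, INFINITE);  // wakes on each 16ms tick
        // ... advance the game timer here ...
    }
    CloseHandle(timer);
}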
Yes, the Sleep function is inconsistent; it is only useful when coarse timing is acceptable.
If you want consistent timing, use QueryPerformanceFrequency to get the frequency of the high-resolution counter, call QueryPerformanceCounter twice for the start and end, and then (end - start) / frequency gives you the elapsed time. But watch out: on some multi-core CPUs the start and end readings may come from different cores, so you may need SetThreadAffinityMask to pin your working thread to a single core.
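Sketched out, that measurement looks like this (the Sleep(16) in the middle is just a stand-in for whatever work you are timing):

#include <windows.h>

double elapsedSeconds() {
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);  // counter ticks per second

    QueryPerformanceCounter(&start);
    Sleep(16);                         // stand-in for the work being measured
    QueryPerformanceCounter(&end);

    return (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
}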
I had the same problem, so I just made my own sleep logic, and it worked for me:
#include <chrono>
using namespace std::chrono;

double must_sleep_duration = 0.016;  // seconds to wait (example value)

// Busy-wait (burns a CPU core) until the duration has elapsed.
high_resolution_clock::time_point sleep_start_time = high_resolution_clock::now();
while (duration_cast<duration<double>>(high_resolution_clock::now() - sleep_start_time).count() < must_sleep_duration) {}

What is the best way to exit out of a loop after an elapsed time of 30ms in C++

What is the best way to exit a loop as close to 30ms as possible in C++? Polling boost::microsec_clock? Polling QTime? Something else?
Something like:
A = now;
for (blah; blah; blah) {
    Blah();
    if (now - A > 30000)
        break;
}
It should work on Linux, OS X, and Windows.
The calculations in the loop are for updating a simulation. Every 30ms, I'd like to update the viewport.
Have you considered using threads? What you describe seems the perfect example of why you should use threads instead of timers.
The main process thread keeps taking care of the UI, and have a QTimer set to 30ms to update it. It locks a QMutex to have access to the data, performs the update, and releases the mutex.
The second thread (see QThread) does the simulation. For each cycle, it locks the QMutex, does the calculations and releases the mutex when the data is in a stable state (suitable for the UI update).
With the increasing prevalence of multi-core processors, you should think more and more about using threads rather than timers. Your application then automatically benefits from the increased power (multiple cores) of new processors.
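A rough sketch of that layout (Qt assumed; SimData and the commented-out calls are hypothetical placeholders):

#include <QThread>
#include <QMutex>
#include <QMutexLocker>

struct SimData { /* ... your simulation state ... */ };  // hypothetical

class SimulationThread : public QThread {
public:
    QMutex mutex;
    SimData data;

protected:
    void run() override {
        while (!isInterruptionRequested()) {
            QMutexLocker lock(&mutex);  // UI can't read while we update
            // ... one simulation cycle over `data` ...
        }   // mutex released each iteration, leaving data in a stable state
    }
};

// In the UI thread, a 30ms QTimer locks the same mutex and redraws:
//     connect(timer, &QTimer::timeout, [&] {
//         QMutexLocker lock(&sim.mutex);
//         // ... read sim.data and update the viewport ...
//     });
//     timer->start(30);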
While this does not answer the question, it might give another look at the solution. What about placing the simulation code and the user interface in different threads? If you use Qt, periodic updates can be realized using a timer or even QThread::msleep(). You can adapt the threaded Mandelbrot example to suit your needs.
The code snippet example in this link pretty much does what you want:
http://www.cplusplus.com/reference/clibrary/ctime/clock/
Adapted from their example:
#include <ctime>

void runwait(int seconds)
{
    clock_t endwait = clock() + seconds * CLOCKS_PER_SEC;
    while (clock() < endwait)
    {
        /* Do stuff while waiting */
    }
}
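Note that clock() measures CPU time rather than wall time on many platforms; a portable wall-clock variant of the same idea, using std::chrono, might look like:

#include <chrono>

void runwait(int milliseconds)
{
    auto endwait = std::chrono::steady_clock::now()
                 + std::chrono::milliseconds(milliseconds);
    while (std::chrono::steady_clock::now() < endwait)
    {
        /* Do stuff while waiting */
    }
}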
If you need to do work until a certain time has elapsed, then docflabby's answer is spot-on. However, if you just need to wait, doing nothing, until a specified time has elapsed, then you should use usleep()
Short answer is: you can't in general, but you can if you are running on the right OS or on the right hardware.
You can get CLOSE to 30ms on all the OS's using an assembly call on Intel systems and something else on other architectures. I'll dig up the reference and edit the answer to include the code when I find it.
The problem is the time-slicing algorithm and how close to the end of your time slice you are on a multi-tasking OS.
On some real-time OS's, there's a system call in a system library you can make, but I'm not sure what that call would be.
edit: LOL! Someone already posted a similar snippet on SO: Timer function to provide time in nano seconds using C++
VonC has got the comment with the CPU timer assembly code in it.
According to your question, every 30ms you'd like to update the viewport. I wrote a similar app once that probed hardware every 500ms for similar stuff. While this doesn't directly answer your question, I have the following followups:
Are you sure that Blah(), for updating the viewport, can execute in less than 30ms in every instance?
Seems more like running Blah() would be done better by a timer callback.
It's very hard to find a library timer object that will push on a 30ms interval to do updates in a graphical framework. On Windows XP I found that the standard Win32 API timer, which posts window messages when the interval expires, couldn't do updates any faster than about every 300ms even on a 2GHz P4, no matter how low I set the timer's interval. While there were high-performance timers available in the Win32 API, they have many restrictions, namely that you can't do any IPC (like updating UI widgets) from a loop like the one you cited above.
Basically, the upshot is you have to plan very carefully how you want to have updates occur. You may need to use threads, and look at how you want to update the viewport.
Just some things to think about. They caught me by surprise when I worked on my project. If you've thought these things through already, please disregard my answer :0).
You might consider just updating the viewport every N simulation steps rather than every K milliseconds. If this is (say) a serious commercial app, then you're probably going to want to go the multi-thread route suggested elsewhere, but if (say) it's for personal or limited-audience use and what you're really interested in is the details of whatever it is you're simulating, then every-N-steps is simple, portable and may well be good enough to be getting on with.
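The every-N-steps version is only a few lines; kStepsPerRedraw and the helper calls here are hypothetical:

const int kStepsPerRedraw = 50;  // arbitrary example; tune to taste

bool simulationRunning();        // hypothetical
void advanceSimulation();        // hypothetical: one simulation step
void updateViewport();           // hypothetical

void runSimulation() {
    for (long step = 0; simulationRunning(); ++step) {
        advanceSimulation();
        if (step % kStepsPerRedraw == 0)
            updateViewport();
    }
}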
See QueryPerformanceCounter and QueryPerformanceFrequency
If you are using Qt, here is a simple way to do this:
QTimer* t = new QTimer(parent);
t->setInterval(30);      // in msec
t->setSingleShot(false);
connect(t, SIGNAL(timeout()), viewPort, SLOT(redraw()));
You'll need to specify viewPort and redraw(). Then start the timer with t->start().