I'm writing an old-school ASCII DOS-prompt game. Honestly, I'm trying to emulate ZZT to learn more about this brand of game design (even if it is antiquated).
I'm doing well: I got my full-screen text mode to work, and I can create worlds and move around without problems, BUT I cannot find a decent timing method for my renders.
I know my rendering and pre-rendering code is fast, because if I don't add any delay()s or (clock()-renderBegin)/CLK_TCK checks from time.h the renders are blazingly fast.
I don't want to use delay() because it is, to my knowledge, platform specific, and on top of that I can't run any code while it delays (like user input and processing). So I decided to do something like this:
do {
    if (kbhit()) {
        input = getch();
        processInput(input);
    }
    if (clock()/CLOCKS_PER_SEC - renderTimer/CLOCKS_PER_SEC > RenderInterval) {
        renderTimer = clock();
        render();
        ballLogic();
    }
} while (input != 'p');
Which should in "theory" work just fine. The problem is that when I run this code (setting the RenderInterval to 0.0333 or 30fps) I don't get ANYWHERE close to 30fps, I get more like 18 at max.
I thought maybe I'd try setting the RenderInterval to 0.0 to see if the performance kicked up... it did not. I was (with a RenderInterval of 0.0) getting at max ~18-20fps.
I thought maybe that continuously calling clock() and doing all the "divide this by that" math was slowing the CPU down something scary, but when I took the render and ballLogic calls out of the if statement's brackets and set RenderInterval to 0.0 I got, again, blazingly fast renders.
This doesn't make sense to me, since if I left the if check in, shouldn't it run just as slow? I mean, it still has to do all the calculations.
BTW I'm compiling with Borland's Turbo C++ V1.01
The best gaming experience is usually achieved by synchronizing with the vertical retrace of the monitor. In addition to providing timing, this will also make the game run smoother on the screen, at least if you have a CRT monitor connected to the computer.
In 80x25 text mode, the vertical retrace (on VGA) occurs 70 times/second. I don't remember if the frequency was the same on EGA/CGA, but am pretty sure that it was 50 Hz on Hercules and MDA. By measuring the duration of, say, 20 frames, you should have a sufficiently good estimate of what frequency you are dealing with.
Let the main loop be something like:
while (playing) {
    /* do whatever needs to be done for this particular frame */
    VSync();
}
... /* snip */
/* Wait for vertical retrace: bit 3 of the VGA Input Status #1 register (port 0x3DA)
   is set while the vertical retrace is in progress */
void VSync() {
    while (inp(0x3DA) & 0x08);    /* wait for any retrace already in progress to end */
    while (!(inp(0x3DA) & 0x08)); /* wait for the next retrace to begin */
}
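A rough sketch of the frequency measurement suggested above (hypothetical; MeasureRefreshRate is not part of the original answer). Since clock() under DOS only advances about 18.2 times a second, timing more frames than 20 (roughly two seconds' worth) gives a steadier estimate:

/* Estimate the display refresh rate by timing `frames` retraces with clock().
   clock() is coarse (~55 ms per tick), so measure a couple of seconds' worth of frames. */
double MeasureRefreshRate(int frames) {
    clock_t start, end;
    int i;
    VSync();              /* align to a retrace before starting the measurement */
    start = clock();
    for (i = 0; i < frames; i++)
        VSync();
    end = clock();
    return (double)frames * CLOCKS_PER_SEC / (double)(end - start);
}

For example, MeasureRefreshRate(140) should come back near 70 in VGA text mode and near 50 on Hercules/MDA.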
clock()-renderTimer > RenderInterval * CLOCKS_PER_SEC
would compute a bit faster, possibly even faster if you pre-compute the RenderInterval * CLOCKS_PER_SEC part.
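A minimal sketch of that idea, reusing the names from the question (renderTicks is a name introduced here for the precomputed threshold):

/* Compute the threshold once, in clock ticks, instead of dividing every pass through the loop */
double renderTicks = RenderInterval * CLOCKS_PER_SEC;

do {
    if (kbhit()) {
        input = getch();
        processInput(input);
    }
    if (clock() - renderTimer > renderTicks) {
        renderTimer = clock();
        render();
        ballLogic();
    }
} while (input != 'p');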
I figured out why it wasn't rendering right away. The timer that I created is fine; the problem is that clock_t is only accurate to about 0.054547 seconds or so (roughly one DOS timer tick), and so I could only render at ~18fps. The way I would fix this is by using a more accurate clock... which is a whole other story.
What about this: you are subtracting y (= renderTimer) from x (= clock()), and both x and y are being divided by CLOCKS_PER_SEC:
clock()/CLOCKS_PER_SEC - renderTimer/CLOCKS_PER_SEC > RenderInterval
Wouldn't it be more efficient to write:
(clock() - renderTimer) > RenderInterval * CLOCKS_PER_SEC
The very first problem I saw with the division is that you're not going to get a real number from it, since it happens between two long ints, so the fractional part is simply thrown away. The second point is that it is more efficient to multiply RenderInterval by CLOCKS_PER_SEC once and get rid of the divisions entirely, simplifying the operation.
Adding the brackets also gives it more legibility. And maybe by simplifying this formula you will see more easily what's going wrong.
As you've spotted with your most recent question, you're limited by CLOCKS_PER_SEC, which is only about 18.2 on DOS. You get one frame per discrete value of clock(), which is why you're limited to ~18fps.
You could use the screen's vertical blanking interval for timing; it's traditional for games, as it also avoids "tearing" (where half the screen shows one frame and half shows another).
Related
I'm building a Tetris game and I need the pieces to fall every x seconds; something like:
while (true) {
    moveDown();
    sleep(x);
}
The problem is, I need to be able to move the pieces left and right in the meantime, i.e., call a function while it's sleeping.
How can I do that in C++?
Both time and key presses can be events to wait on. On UNIXes you'd use something like poll() with a suitable timeout, together with the input device used to recognize key presses. On other systems there are similar facilities (I'm a UNIX person and have never worked on Windows-specific stuff, although it seems the Windows facilities are actually more flexible). Depending on the result of poll() (timeout or activity on the I/O device), you'd take the appropriate action.
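A minimal sketch of that approach for a UNIX terminal (this assumes the terminal has already been put into non-canonical/raw mode so key presses arrive immediately; the names gameLoop and fallPeriod are made up for the example):

#include <poll.h>
#include <unistd.h>
#include <chrono>

// Wait on either a key press (stdin) or the next "fall" deadline, whichever comes first.
void gameLoop(std::chrono::milliseconds fallPeriod) {
    auto nextFall = std::chrono::steady_clock::now() + fallPeriod;
    for (;;) {
        auto now = std::chrono::steady_clock::now();
        int timeoutMs = (int)std::chrono::duration_cast<std::chrono::milliseconds>(nextFall - now).count();
        if (timeoutMs < 0) timeoutMs = 0;

        pollfd pfd{STDIN_FILENO, POLLIN, 0};
        int ready = poll(&pfd, 1, timeoutMs);   // blocks until input arrives or the timeout expires

        if (ready > 0 && (pfd.revents & POLLIN)) {
            char c;
            if (read(STDIN_FILENO, &c, 1) == 1) {
                // handle left/right movement here
            }
        }
        if (std::chrono::steady_clock::now() >= nextFall) {
            // moveDown();
            nextFall += fallPeriod;
        }
    }
}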
This problem is solvable in multiple ways (another idea that comes to mind is multithreading, but that seems overkill). One approach would be to keep track of the number of "game cycles" and execute some function every n-th cycle like this:
for (int32_t count{1}; ; count++)
{
    if (count % 5 == 0)
    {
        // do something every 5th cycle, e.g. move the piece down
    }
    // do something every cycle, e.g. handle left/right input
    sleep(x);
}
You can measure how much time has passed since the last fall, move the piece down once the given amount has elapsed, and then reset the counter. In pseudo-code it could look like this:
while (true)
{
    counter.update();
    if (counter.value() >= fall_period)
    {
        move_piece_down();
        counter.reset();
    }
    // rotate / move pieces
}
If you are using a typical game-loop implementation, your counter can just accumulate the elapsed time since the last frame.
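A concrete sketch of that counter using std::chrono (fall_period and move_piece_down come from the pseudo-code above; everything else is made up for the example):

#include <chrono>

void run(std::chrono::milliseconds fall_period) {
    using clock_type = std::chrono::steady_clock;
    auto last = clock_type::now();
    std::chrono::milliseconds accumulated{0};

    while (true) {
        auto now = clock_type::now();
        accumulated += std::chrono::duration_cast<std::chrono::milliseconds>(now - last);
        last = now;

        if (accumulated >= fall_period) {
            // move_piece_down();
            accumulated -= fall_period;   // keep the remainder so the timing doesn't drift
        }
        // handle left/right movement and rotation here, every iteration
    }
}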
Hey guys, this is my first question on Stack Overflow, so if I've done something wrong, my bad.
I have a program which is designed to make precise mouse movements at specific times, and it calculates the timing using a few hard-coded variables and a timing function that works in microseconds for accuracy. The program works perfectly as intended and makes the correct movements at the correct times.
The only problem is that the sleeping function I am using is a hot loop (as in, it's a while loop without a sleep), so when the program is executing the movements it can take up to 20% CPU usage. The context of this is a game, and it can drop the in-game FPS from 60 down to 30 with lots of stuttering, making the game unplayable.
I am still learning c++, so any help is greatly appreciated. Below is some snippets of my code to show what I am trying to explain.
This is where the sleep is called, for some context:
void foo(/* parameters and stuff, not really important */)
{
    // ...does a bunch of movement stuff here, not important...

    // After doing its first movement of many, it calls this sleep function (Time::Sleep)
    // so it knows how long to sleep before executing the next movement.
    if (repeat_delay - animation > 0) Time::Sleep(repeat_delay - animation, excess);
}
Now here is the actual sleeping function which, according to the Visual Studio performance profiler, is using all my resources. All of the parameters in this function are accounted for already; like I said before, the code works perfectly, apart from performance.
#include "Time.hpp"
#include <windows.h>
namespace Time
{
void Sleep(int64_t sleep_ms, std::chrono::time_point<std::chrono::steady_clock> start)
{
sleep_ms *= 1000;
auto truncated = (sleep_ms - std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now() - start).count()) / 1000;
while (std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now() - start).count() < sleep_ms)
{
if (truncated)
{
std::this_thread::sleep_for(std::chrono::milliseconds(truncated));
truncated = 0;
}
/*
I have attempted putting even a 1 microsecond sleep in here, which brings CPU usage down to
0.5%
which is great, but my movements slowed right down, and even after attempting to speed up
the movements manually by altering a the movement functions mouse speed variable, it just
makes the movements inaccurate. How can I improve performance here without sacrificing
accuracy
*/
}
}
}
Why did you write a sleep function? Just use std::this_thread::sleep_for as it doesn't use any resources and is reasonably accurate.
Its accuracy might depend on the platform. On my Windows 10 PC it is accurate to within 1 millisecond, which should be suitable for durations over 10 ms (= 100 fps).
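A minimal sketch of what that would look like in place of the spin loop above (same signature as the question's Sleep; whether millisecond-level accuracy is enough for your movements is an assumption):

#include <chrono>
#include <cstdint>
#include <thread>

namespace Time
{
    // Let the OS scheduler do the waiting: block until `sleep_ms` milliseconds after `start`.
    void Sleep(int64_t sleep_ms, std::chrono::time_point<std::chrono::steady_clock> start)
    {
        std::this_thread::sleep_until(start + std::chrono::milliseconds(sleep_ms));
    }
}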
I am currently developing a stimuli provider for the brain's visual cortex as part of a university project. The program is to (preferably) be written in C++, using Visual Studio and OpenCV. The way it is supposed to work is that the program creates a number of threads, according to the number of different frequencies, each running a timer for its respective frequency.
The code looks like this so far:
void timerThread(void *param) {
    t *args = (t*)param;
    int id = args->data1;
    float freq = args->data2;
    unsigned long period = round((double)1000 / (double)freq) - 1;
    while (true) {
        Sleep(period);
        show[id] = 1;
        Sleep(period);
        show[id] = 0;
    }
}
It seems to work okay for some of the frequencies, but others vary quite a lot in frame rate. I have tried to look into creating my own timing function, similar to what is done in Arduino's "blinkWithoutDelay" function, though this worked very badly. Also, I have tried with the waitKey() function, this worked quite like the Sleep() function used now.
Any help would be greatly appreciated!
You should use timers instead of "sleep" to fix this, as sometimes the loop may take more or less time to complete.
Restart the timer at the start of the loop and take its value right before the reset- this'll give you the time it took for the loop to complete.
If this time is greater than the "period" value, then it means you're late, and you need to execute right away (and even lower the period for the next loop).
Otherwise, if it's lower, then it means you need to wait until it is greater.
I personally dislike sleep, and instead constantly restart the timer until it's greater.
I suggest looking into "fixed timestep" code, such as the one below. You'll need to put this snippet of code on every thread with varying values for the period (ns) and put your code where "doUpdates()" is.
If you need a "timer" library, since I don't know OpenCV, I recommend SFML (SFML's timer docs).
The following code is from here:
long int start = 0, end = 0;
double delta = 0;
double ns = 1000000.0 / 60.0; // Syncs updates at 60 per second (59 - 61)
while (!quit) {
    start = timeAsMicro();
    delta += (double)(start - end) / ns; // You can skip dividing by ns here and do "delta >= ns" below instead
    end = start;
    while (delta >= 1.0) {
        doUpdates();
        delta -= 1.0;
    }
}
Please mind the fact that in this code, the timer is never reset.
(This may not be completely accurate but is the best assumption I can make to fix your problem given the code you've presented)
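timeAsMicro() isn't defined in that snippet; a possible stand-in using std::chrono (an assumption, not part of the original code) could be:

#include <chrono>

// Microseconds since an arbitrary fixed point; only differences between calls matter.
// A 64-bit return type would be safer, but this matches the snippet's use of long int.
long int timeAsMicro() {
    using namespace std::chrono;
    return (long int)duration_cast<microseconds>(steady_clock::now().time_since_epoch()).count();
}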
Using Cocos2d 3.
I am trying to work out how it is possible to play a sound or music from a certain time / millisecond within the loaded effect / sound.
Essentially a setTime method for the CocosDenshion class?
Anyone have any ideas? Or is this something I could contribute?
You can call the scheduler after a certain time, like this:
this->scheduleOnce(schedule_selector(HelloWorld::sound_method), 0.5);

void HelloWorld::sound_method(float ty)
{
    CocosDenshion::SimpleAudioEngine::getInstance()->playEffect("Byuuton_yap.mp3");
}
Here 0.5 is the delay in seconds; sound_method is called after 0.5 seconds.
I hope this answer helps someone.
You can do this:
auto sae = CocosDenshion::SimpleAudioEngine::getInstance();
someSprite->runAction(Sequence::createWithTwoActions(
    DelayTime::create(time),
    CallFunc::create([sae]() { sae->playEffect("mySound.wav"); })));  // capture sae by value, since the callback runs after the delay
WAV should have a smaller delay than other formats.
You can also preload it:
sae->preloadEffect("mySound.wav");
Still, it's not guaranteed you will hear it with perfect timing if you need high precision, but that's a hardware limitation.
Basic Question
Is there any way to do event recording and playback within a time-sensitive (framerate-independent) system?
Any help - including a simple "No sorry it is impossible" - would be greatly appreciated. I have spent almost 20 hours working on this over the past few weekends and am driving myself crazy.
Full Details
This is currently aimed at a game, but the libraries I'm writing are designed to be more general, and this concept applies to more than just my C++ coding.
I have some code that looks functionally similar to this... (it is written in C++0x, but I'm taking some liberties to make it more compact)
void InputThread()
{
    InputAxisReturn AxisState[IA_SIZE];
    while (Continue)
    {
        Threading()->EventWait(InputEvent);
        Threading()->EventReset(InputEvent);
        pInput->GetChangedAxis(AxisState);

        //REF ALPHA
        if (AxisState[IA_GAMEPAD_0_X].Changed)
        {
            X_Axis = AxisState[IA_GAMEPAD_0_X].Value;
        }
    }
}
And I have a separate thread that looks like this...
//REF BETA
while (Continue)
{
    //Is there a message to process?
    StandardWindowsPeekMessageProcessing();

    //GetElapsedTime() returns a float of time in seconds since its last call
    UpdateAll(LoopTimer.GetElapsedTime());
}
Now I'd like to record input events for playback for testing and some limited replay functionality.
I can easily record the events with precision timing by simply inserting the following code where I marked //REF ALPHA
//REF ALPHA
EventRecordings.push_back(EventRecording(TimeSinceRecordingBegan, AxisState));
The real issue is playing these back. My LoopTimer is extremely high precision, using the high-performance counter (QueryPerformanceCounter). This means it is nearly impossible to hit the same time difference using code like the below in place of //REF BETA
// REF BETA
NextEvent = EventRecordings.pop_back();
Time TimeSincePlaybackBegan;
while (Continue)
{
    //Is there a message to process?
    StandardWindowsPeekMessageProcessing();

    //Did we reach the next event?
    if (TimeSincePlaybackBegan >= NextEvent.TimeSinceRecordingBegan)
    {
        if (NextEvent.AxisState[IA_GAMEPAD_0_X].Changed)
        {
            X_Axis = NextEvent.AxisState[IA_GAMEPAD_0_X].Value;
        }
        NextEvent = EventRecordings.pop_back();
    }

    //GetElapsedTime() returns a float of time in seconds since its last call
    Time elapsed = LoopTimer.GetElapsedTime();
    UpdateAll(elapsed);
    TimeSincePlaybackBegan += elapsed;
}
The issue with this approach is that you will almost never hit the exact same time so you will have a few microseconds where the playback doesn't match the recording.
I also tried event snapping. Kind of a confusing term, but basically: if TimeSincePlaybackBegan > NextEvent.TimeSinceRecordingBegan, then TimeSincePlaybackBegan was set to NextEvent.TimeSinceRecordingBegan and ElapsedTime was altered to suit.
It had some interesting side effects which you would expect (like some slowdown) but it unfortunately still resulted in the playback de-synchronizing.
For some more background - and possibly a reason why my time snapping approach didn't work - I'm using BulletPhysics somewhere down that UpdateAll call. Kind of like this...
void Update(float diff)
{
    static const float m_FixedTimeStep = 0.005f;
    static const uint32 MaxSteps = 200;

    //Updates the fps
    cWorldInternal::Update(diff);

    if (diff > MaxSteps * m_FixedTimeStep)
    {
        Warning("cBulletWorld::Update() diff > MaxTimestep. Determinism will be lost");
    }
    pBulletWorld->stepSimulation(diff, MaxSteps, m_FixedTimeStep);
}
But I also tried pBulletWorld->stepSimulation(diff, 0, 0), which according to http://www.bulletphysics.org/mediawiki-1.5.8/index.php/Stepping_the_World should have done the trick, but still to no avail.
Answering my own question for anyone else who stumbles upon this.
Basically, if you want deterministic recording and playback you need to lock the frame rate. If the system cannot handle the frame rate, you must introduce slowdown or risk desynchronization.
After two weeks of additional research I've decided that doing it framerate-independently is just not possible, due to floating-point inadequacies and the fact that floating-point arithmetic is not necessarily associative.
The only solution to have a deterministic engine that relies on floating-point numbers is to have a stable, fixed frame rate. Any change in the frame rate will, over the long term, result in the playback becoming desynchronized.
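For anyone wanting to see what that looks like, here is a minimal sketch of a fixed-step loop of the kind described above (the hook names are hypothetical, not from my engine): the simulation always advances by the same dt, and recorded inputs are keyed to step numbers rather than wall-clock time, so playback feeds the same values through the same sequence of floating-point operations.

#include <chrono>
#include <cstdint>

// Hypothetical hooks standing in for the engine calls discussed above.
void ApplyRecordedInputsForStep(uint64_t step);  // replay recorded events keyed to step numbers
void Simulate(float fixedDt);                    // physics/game update, always the same dt
void Render();                                   // rendering may run at any rate
extern bool Continue;

void RunLoop()
{
    using clock_type = std::chrono::steady_clock;
    const std::chrono::microseconds step{16667};  // fixed 60 Hz simulation step
    const float fixedDt = 1.0f / 60.0f;
    auto previous = clock_type::now();
    std::chrono::microseconds accumulator{0};
    uint64_t stepCount = 0;

    while (Continue)
    {
        auto now = clock_type::now();
        accumulator += std::chrono::duration_cast<std::chrono::microseconds>(now - previous);
        previous = now;

        while (accumulator >= step)
        {
            ApplyRecordedInputsForStep(stepCount); // inputs land on step boundaries, not wall-clock times
            Simulate(fixedDt);                     // identical dt every step keeps the float math reproducible
            ++stepCount;
            accumulator -= step;
        }
        Render();
    }
}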