Limit Speed Of Gameplay On Different Computers - C++

I'm creating a 2D game using OpenGL and C++.
I want the game to run at the same speed on different computers. At the moment it runs faster on my desktop than on my laptop (i.e. my player moves faster on the desktop).
I was told about QueryPerformanceCounter(), but I don't know how to use it.
How do I use it, or is there a better/easier way?
My display function:
void display()
{
    static long timestamp = clock();
    // First frame will have zero delta, but this is just an example.
    float delta = (float)(clock() - timestamp) / CLOCKS_PER_SEC;
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    createBackground();
    int curSpeed = (player.getVelocity() / player.getMaxSpeed()) * 100;
    glColor3f(1.0, 0.0, 0.0);
    glRasterPos2i(-screenWidth + 20, screenHeight - 50);
    glPrint("Speed: %i", curSpeed);
    glRasterPos2i(screenWidth - 200, screenHeight - 50);
    glPrint("Lives: %i", lives);
    glRasterPos2i(screenWidth - 800, screenHeight - 50);
    glPrint("Heading: %f", player.getHeading());
    for (int i = 0; i < 90; i++) {
        if (numBullets[i].fireStatus == true) {
            numBullets[i].updatePosition(player);
            if (numBullets[i].getXPos() > screenWidth || numBullets[i].getXPos() < -screenWidth ||
                numBullets[i].getYPos() > screenHeight || numBullets[i].getYPos() < -screenHeight) {
                numBullets[i].fireStatus = false;
                numBullets[i].reset(player);
                numBullets[i].~Bullet(); // note: explicitly calling the destructor on a live array element is undefined behavior
            }
        }
    }
    player.updatePosition(playerTex, delta);
    glFlush();
    timestamp = clock();
}
My updatePosition method:
void Player::updatePosition(GLuint playerTex, float factor) {
    // draw triangle
    glPushMatrix();
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBindTexture(GL_TEXTURE_2D, playerTex);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glTranslatef(factor * XPos, factor * YPos, 0.0);
    glRotatef(heading, 0, 0, 1);
    glColor3f(1.0, 0.0, 0.0);
    glBegin(GL_POLYGON);
    glTexCoord2f(0.0, 1.0); glVertex2f(-40, 40);
    glTexCoord2f(0.0, 0.0); glVertex2f(-40, -40);
    glTexCoord2f(1.0, 0.0); glVertex2f(40, -40);
    glTexCoord2f(1.0, 1.0); glVertex2f(40, 40);
    glEnd();
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
    glPopMatrix();
    XPos += speed * cos((90 + heading) * (PI / 180.0f));
    YPos += speed * sin((90 + heading) * (PI / 180.0f));
}

As a rule, you want to do all gameplay calculations based on a time delta, i.e. the amount of time that has passed since the last frame. This standardizes game speed on all machines. Unless you need extreme precision, you can use clock() (from <ctime>) to get the current timestamp (one caveat: on POSIX systems clock() measures consumed CPU time rather than wall-clock time, so prefer a wall-clock timer for anything beyond a quick test).
Example:
void main_loop() {
    static long timestamp = clock();
    // First frame will have zero delta, but this is just an example.
    float delta = (float)(clock() - timestamp) / CLOCKS_PER_SEC;
    calculate_physics(delta);
    render();
    timestamp = clock();
}

void calculate_physics(float delta) {
    // Scale the expected displacement per second by the elapsed time.
    applyDisplacement(displacement * delta);
}

void render() {
    // Do rendering.
}
EDIT: If you want higher precision, use your OS's timer facilities. On Windows, the most precise method is QueryPerformanceCounter(). Example:
#include <windows.h>

void main_loop(double delta) {
    // ...
}

int main() {
    LARGE_INTEGER freq, last, current;
    double delta;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&last);
    while (1) {
        QueryPerformanceCounter(&current);
        delta = (double)(current.QuadPart - last.QuadPart) / (double)freq.QuadPart;
        main_loop(delta);
        last = current;
    }
}
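If you do not want Windows-only code, a portable alternative (my addition, not part of the original answer) is C++11's std::chrono::steady_clock, which provides high-resolution wall-clock deltas on every platform; this sketch reuses the main_loop(delta) from above:

#include <chrono>

int main() {
    using clock = std::chrono::steady_clock;
    auto last = clock::now();
    while (true) {
        auto current = clock::now();
        // duration<double> counts seconds as a double
        double delta = std::chrono::duration<double>(current - last).count();
        main_loop(delta);
        last = current;
    }
}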

Most games use a frame time-scaling factor. Essentially, you take the frame length your physics and movement were tuned for and compare it against the actual length of the frame the game is currently running at (1/fps): dividing the actual frame time by the tuned frame time yields a scalar factor which you multiply into every movement, keeping motion consistent while still benefiting from a higher FPS.
A good example is here.
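As a rough sketch of that computation (variable names here are mine, for illustration): if the movement values were tuned for 60 FPS, divide the actual frame time by the tuned frame time and scale every per-frame movement by the result:

const float tunedDt = 1.0f / 60.0f;      // frame length the movement was tuned at
float scale = actualFrameTime / tunedDt; // > 1 on slow frames, < 1 on fast ones
player.x += movePerTunedFrame * scale;   // motion stays consistent at any FPS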

The best solution to your problem is to update the positions of your objects only once every 10 ms (or some similar fixed interval) but render the objects as often as possible. This is called a "fixed time step", since you only update the game state at fixed intervals. It requires you to decouple the rendering code from the update code (which is a good idea anyway).
Basically (in pseudocode) what you would do is something like:
accumulatedTimeSinceLastUpdate = 0;
while (gameIsRunning)
{
    accumulatedTimeSinceLastUpdate += timeSinceLastFrame();
    while (accumulatedTimeSinceLastUpdate >= 10) // or another value
    {
        updatePositions();
        accumulatedTimeSinceLastUpdate -= 10;
    }
    display();
}
This means that if your computer is running super-duper fast, display() will be called many times between calls to updatePositions(); if your computer is ultra slow, updatePositions() may be called several times for each call to display().
Here's another good read (in addition to Mason Blier's answer):
http://gafferongames.com/game-physics/fix-your-timestep/
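For completeness, here is one way the pseudocode above could look in actual C++ using std::chrono (a sketch; gameIsRunning, updatePositions(), and display() stand in for your own game's names):

#include <chrono>

void gameLoop() {
    using clock = std::chrono::steady_clock;
    const auto step = std::chrono::milliseconds(10); // fixed update interval
    auto previous = clock::now();
    std::chrono::nanoseconds accumulated(0);
    while (gameIsRunning) {
        auto now = clock::now();
        accumulated += now - previous; // time since last frame
        previous = now;
        // Run as many fixed updates as the elapsed time allows.
        while (accumulated >= step) {
            updatePositions();
            accumulated -= step;
        }
        display(); // render as often as possible
    }
}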

Related

Physics in my game gives wrong results with vsync turned off (C++)

I calculate Newtonian gravity in my 2D game. It works exactly as it should with vsync turned on (60 fps), but once I turn it off and get about 3.5k fps, the character starts to fall incredibly fast. The obvious answer seems to be to multiply the character's velocity by deltaTime, but I already do that and it still doesn't work; it slows the character down a bit, but not nearly enough.
This is what the character's update function looks like:
void Update(float deltaTime) {
    if (!onGround) {
        acceleration += -Physics::g; // 9.81f
        /* EDIT: THIS IS WHAT IT ACTUALLY LOOKS LIKE, sorry */
        SetPosition(position + Vec2(0.0f, 1.0f) * deltaTime * acceleration);
        /* instead of this:
        SetPosition(position + Vec2(0.0f, 1.0f) * acceleration); */
        if (ceiling) {
            acceleration = 0;
            ceiling = false;
        }
    } else {
        acceleration = 0;
    }
}
and here's the calculation of deltaTime
inline static void BeginFrame() {
    currentTime = static_cast<float>(glfwGetTime()); // time in seconds
    delta = (currentTime - lastTime);
    lastTime = currentTime;
}
What am I missing?
Thanks in advance.
Acceleration is the rate at which velocity increases per unit time, so you need to multiply the acceleration by deltaTime as well, not just the velocity.
In other words,
acceleration += -Physics::g; // 9.81f
should be:
acceleration += deltaTime * -Physics::g; // 9.81f
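For what it's worth, the quantity being accumulated there is really a velocity. A clearer restatement of the same fix (my sketch, with the member renamed; not the poster's actual code) is semi-implicit Euler, where deltaTime appears in both integration steps:

void Update(float deltaTime) {
    if (!onGround) {
        velocity += -Physics::g * deltaTime; // integrate acceleration into velocity
        SetPosition(position + Vec2(0.0f, 1.0f) * velocity * deltaTime); // velocity into position
        if (ceiling) {
            velocity = 0.0f;
            ceiling = false;
        }
    } else {
        velocity = 0.0f;
    }
}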

How to render at a fixed FPS in a GLFW window?

I am trying to render at 60 FPS but my scene is rendering at a much higher rate than 60 FPS.
This is my code for the Render Loop, is this the correct way to render at a desired FPS or are there better ways?
double lastTime = glfwGetTime(), timer = lastTime;
double deltaTime = 0, nowTime = 0;
int frames = 0, updates = 0;

while (!glfwWindowShouldClose(window))
{
    // input
    // -----
    processInput(window);

    // - Measure time
    nowTime = glfwGetTime();
    deltaTime += (nowTime - lastTime) / limitFPS; // limitFPS = 1.0 / 60.0
    lastTime = nowTime;

    // - Only update at 60 frames / s
    while (deltaTime >= 1.0) {
        updates++;
        deltaTime--;
        glClearColor(0.0, 0.0, 0.0, 1.0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
        w.render(); // render function
        frames++;
    }
    glfwPollEvents();

    // - Reset after one second
    if (glfwGetTime() - timer > 1.0) {
        timer++;
    }
    glfwSwapBuffers(window);
}
Based on the discussion in comments above, you want to draw a maximum of 60 FPS, but you want the logic to update as often as possible. Correct?
That can be achieved with just one loop, a timer, and an if statement:
const double fpsLimit = 1.0 / 60.0;
double lastUpdateTime = 0; // number of seconds since the last loop
double lastFrameTime = 0;  // number of seconds since the last frame

// This while loop repeats as fast as possible
while (!glfwWindowShouldClose(window))
{
    double now = glfwGetTime();
    double deltaTime = now - lastUpdateTime;

    glfwPollEvents();

    // update your application logic here,
    // using deltaTime if necessary (for physics, tweening, etc.)

    // This if-statement only executes once every 60th of a second
    if ((now - lastFrameTime) >= fpsLimit)
    {
        // draw your frame here

        glfwSwapBuffers(window);

        // only set lastFrameTime when you actually draw something
        lastFrameTime = now;
    }

    // set lastUpdateTime every iteration
    lastUpdateTime = now;
}
Everything that you want to execute as often as possible should be in the outer part of that while loop, and everything you want to execute at a maximum of 60 times per second should be inside the if statement.
If the loop takes longer than 1/60th of a second to execute an iteration then your FPS and update rate will drop to whatever rate is achievable for that workload/system.
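If the only goal is to cap rendering at the monitor's refresh rate, note that enabling vsync is simpler still (assuming a 60 Hz display; this caps drawing but not your update logic):

// Call once after glfwMakeContextCurrent(window):
glfwSwapInterval(1); // glfwSwapBuffers() now waits for the next vertical retrace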

OGLFT draws text when GLStipple is used

I have an interesting bug that has been "bugging" me for a few days now.
I am currently using OpenGL to draw text on the screen, with the help of the OGLFT library (which in turn uses freetype2). I am not doing anything special with the text; I only need monochromatic text.
Anyway, after implementing the library, I noticed that the text is only drawn correctly when I have glStipple enabled. I believe there is some interference between the OGLFT library and the state I am enabling.
I was wondering if anyone out there has experience with the OGLFT library. Here is a minimal example of my code to demonstrate what is going on:
(Please note that some variables are used to set the zoom factor of my glCanvas and the position of the camera, and that this is 2D only.)
double _zoomX = 1.0;
double _zoomY = 1.0;
double _cameraX = 0;
double _cameraY = 0;

/* This function gets called every time a draw routine is needed */
void modelDefinition::onPaintCanvas(wxPaintEvent &event)
{
    wxGLCanvas::SetCurrent(*_geometryContext); // route the OpenGL commands to this wxGLCanvas object
    wxPaintDC dc(this); // required for drawing

    glMatrixMode(GL_MODELVIEW);
    glClear(GL_COLOR_BUFFER_BIT);
    updateProjection();

    OGLFT::Monochrome *testface = new OGLFT::Monochrome("/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf", 8);
    testface->draw(0, 0, "test");

    glEnable(GL_LINE_STIPPLE); // when I comment out this line, the text cannot be drawn
    glLineStipple(1, 0b0001100011000110);
    glBegin(GL_LINES);
    glVertex2d(_startPoint.x, _startPoint.y);
    glVertex2d(_endPoint.x, _endPoint.y);
    glEnd();
    glDisable(GL_LINE_STIPPLE);

    SwapBuffers();
}
void modelDefinition::updateProjection()
{
    // First, load the projection matrix and reset the view to a default view
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-_zoomX, _zoomX, -_zoomY, _zoomY, -1.0, 1.0);

    // Reset to modelview matrix
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glViewport(0, 0, (double)this->GetSize().GetWidth(), (double)this->GetSize().GetHeight());

    /* This section handles the translation (panning) and scaling (zooming).
     * It needs to be called on each draw in order to update the placement of all the components. */
    if (_zoomX < 1e-9 || _zoomY < 1e-9)
    {
        _zoomX = 1e-9;
        _zoomY = _zoomX;
    }

    if (_zoomX > 1e6 || _zoomY > 1e6)
    {
        _zoomX = 1e6;
        _zoomY = _zoomX;
    }

    glTranslated(-_cameraX, -_cameraY, 0.0);
}
Also, one thing to note: the code below glEnable(GL_LINE_STIPPLE); is required. It is as if the stippled line needs to be drawn for the text to be displayed correctly.
Looking through your code, I believe your intention is to render the text as grayscale? If so, you can simply use OGLFT::Grayscale *testface = new OGLFT::Grayscale("/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf", 8);
This will get you what you need without the issue you posted. In fact, I recommend doing it this way.
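If you want to keep the Monochrome face, another thing worth trying (an educated guess on my part, since the root cause is not identified above) is isolating your stipple state from OGLFT's drawing with the legacy attribute stack:

glPushAttrib(GL_ENABLE_BIT | GL_LINE_BIT); // save enable flags and line state
glEnable(GL_LINE_STIPPLE);
glLineStipple(1, 0b0001100011000110);
glBegin(GL_LINES);
glVertex2d(_startPoint.x, _startPoint.y);
glVertex2d(_endPoint.x, _endPoint.y);
glEnd();
glPopAttrib(); // restore state so the next frame's text drawing is unaffected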

Function to draw text on screen using C++/OpenGL/Glut faster than glutBitmapCharacter

I need something much faster than glutBitmapCharacter(font, text[i]); it cuts performance several times over! I only need to display FPS, XYZ position, etc. as a HUD, not text in 3D.
Currently I'm using:
glRasterPos2f(x, y);
for (int i = 0; i < text.size(); i++)
{
    glutBitmapCharacter(font, text[i]);
}
I've wrapped it in this function:
void glutPrint(float x, float y, LPVOID font, string text)
{
    glRasterPos2f(x, y);
    for (int i = 0; i < text.size(); i++)
    {
        glutBitmapCharacter(font, text[i]);
    }
}
Every frame, in the display function's HUD section, I call DrawHUD():
void DrawHUD(void)
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0.0, windowWidth, windowHeight, 0.0, -1.0, 10.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClear(GL_DEPTH_BUFFER_BIT);
    glColor3f(0.2f, 0.2f, 0.2f);
    glutPrint(2, 10, glutFonts[4], its(sfps) + " fps; " + its(todrawquads) + " quads drawing; ");
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}
I also use this int-to-string function:
string its(int i)
{
    stringstream out;
    out << i;
    return out.str();
}
Some facts about performance (measured in FPS):
Without calling DrawHUD: ~3500
With DrawHUD alone: ~3500 (maybe a few fps less)
With DrawHUD + 1 x glutPrint: ~3300
With DrawHUD + 2 x glutPrint: ~2400
With DrawHUD + 3 x glutPrint: ~1700
(e.g. by 3 x glutPrint I mean, inside DrawHUD:
{
    [...]
    glutPrint(...);
    glutPrint(...);
    glutPrint(...);
    [...]
}
)
That's not nice... I know that measuring performance by frame rate isn't ideal.
Also, when I commented out
glutBitmapCharacter(font, text[i]);
in the glutPrint loop, the rate went back to ~3500 fps, so I'm SURE glutBitmapCharacter is the problem. What should I use instead?
When in doubt, go back to the basics: create your own (better) drawing function. Make a rectangular texture of characters and, with the font texture bound, draw one run of contiguous quads (or triangle strips) per string of characters.
The above still holds if you choose to go the VBO route. The only important thing is to batch the writes as much as possible, for example by accumulating everything to be printed into an array and drawing it all at once at the end of the frame.
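A minimal immediate-mode sketch of that idea (every name here is mine: fontTex is assumed to be a pre-built 16x16-cell ASCII atlas texture, and cellW/cellH are the on-screen size of one character):

// Draws one textured quad per character out of a 16x16 ASCII grid atlas.
void drawString(GLuint fontTex, float x, float y, const char *text,
                float cellW, float cellH)
{
    const float step = 1.0f / 16.0f; // one atlas cell in texture coordinates
    glBindTexture(GL_TEXTURE_2D, fontTex);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    for (; *text; ++text, x += cellW) {
        unsigned char c = (unsigned char)*text;
        float u = (c % 16) * step; // column in the atlas
        float v = (c / 16) * step; // row in the atlas (flip if your atlas is bottom-up)
        glTexCoord2f(u,        v + step); glVertex2f(x,         y);
        glTexCoord2f(u + step, v + step); glVertex2f(x + cellW, y);
        glTexCoord2f(u + step, v);        glVertex2f(x + cellW, y + cellH);
        glTexCoord2f(u,        v);        glVertex2f(x,         y + cellH);
    }
    glEnd();
    glDisable(GL_TEXTURE_2D);
}

Because the whole string goes out in a single glBegin/glEnd (or, later, a single VBO upload), the per-character driver overhead that makes glutBitmapCharacter slow disappears.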

suspend a function for some time in OpenGL

I have a function Draw() that renders a triangle on screen, and another function Draw_poly() that renders a rectangle. I am rotating both simultaneously, and I want each to rotate at a different speed. How do I do that?
Similarly, suppose I am moving one object while rotating another; I want the movement to be time-limited while the rotation is not, so that the rotation is fast while the movement is slow.
First, define each rotation as an angle per second. Then, in your main draw function, compute the elapsed time in seconds, multiply by the angular speed, and you're done.
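A sketch of what that looks like (variable names are illustrative):

static float triangleAngle = 0.0f, rectAngle = 0.0f;
const float triangleSpeed = 90.0f; // degrees per second
const float rectSpeed = 30.0f;     // degrees per second

void update(float deltaSeconds) {
    triangleAngle += triangleSpeed * deltaSeconds; // each object advances
    rectAngle += rectSpeed * deltaSeconds;         // at its own rate
}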
I would like to contribute an answer of my own.
genpfault's answer may be as good as you need, but to produce a good animation you should design better software.
Here, look at my answer. However, reading another of your questions, I think you are missing some fundamentals: learn the OpenGL architecture, practice each OpenGL entry point, read books.
Last but not least, I would suggest searching for answers already given on Stack Overflow. This is supposed to be a question & answer site...
Rotate one less/slower than the other:
static float rot_a = 0.0;
static float rot_b = 0.0;
rot_a += 1.0;
rot_b += 0.5;

glPushMatrix();
glRotatef(rot_a, 0, 0, 1);
Draw_A();
glPopMatrix();

glPushMatrix();
glRotatef(rot_b, 0, 0, 1);
Draw_B();
glPopMatrix();
Alternatively, you can spin up threads that modify your object positions and sleep(), without blocking the render thread:
Position obj_a;
Position obj_b;

void thread_1()
{
    while (!done)
    {
        sleep(1);
        modify_pos(obj_a); // in real code, guard the shared positions with a mutex
    }
}

void thread_2()
{
    while (!done)
    {
        sleep(2);
        modify_pos(obj_b);
    }
}

void draw()
{
    glPushMatrix();
    position_object(obj_a);
    Draw_A();
    glPopMatrix();

    glPushMatrix();
    position_object(obj_b);
    Draw_B();
    glPopMatrix();
}

int main()
{
    ...
    launch_thread(thread_1);
    launch_thread(thread_2);
    ...
    return 0;
}