I'm trying to create a simple Pong game in C++ using OpenGL. I have the borders, the paddles, and the ball displaying on the screen, and all of them move, so that's great! The problem is that the ball moves lightning fast even at a speed of one pixel.
I'm updating its position in a callback function called idle, which I then pass into glutIdleFunc like so: glutIdleFunc(idle);
The idle function is as follows:
void idle(){
    ball.moveLeft();
    glutPostRedisplay();
}
Essentially I'm just having it move left by one pixel, but I guess idle gets called very often, so it moves lightning fast. How do I fix this?
If there's more information you need just ask!
Use a GLUT timer to kick your display() callback every 16 milliseconds:
void timer( int extra )
{
    glutPostRedisplay();
    glutTimerFunc( 16, timer, 0 );
}
int main( int argc, char **argv )
{
    glutInit( &argc, argv );
    glutInitDisplayMode( ... );
    glutInitWindowSize( ... );
    glutCreateWindow( ... );
    ...
    glutTimerFunc( 0, timer, 0 );
    ...
    glutMainLoop();
    return 0;
}
Here's a link I found to a blog that talks about how to get the time in GLUT to display frames per second:
http://www.lighthouse3d.com/tutorials/glut-tutorial/frames-per-second/
You need to decide what the velocity of your ball is in pixels per second. Then multiply that velocity by the number of seconds that have elapsed between your last update and the current update. According to the blog, you can get the elapsed time via glutGet(GLUT_ELAPSED_TIME). If that doesn't work, search for how to find the current time in milliseconds on your platform.
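For example, here is a minimal sketch of time-based movement inside the timer callback. The ball.moveLeft( distance ) overload taking a distance is an assumption for illustration; adapt it to however your Ball class moves:
int lastTime = 0; // milliseconds since glutInit, from GLUT's clock
const float speed = 120.0f; // ball velocity in pixels per second
void timer( int extra )
{
    int now = glutGet( GLUT_ELAPSED_TIME );
    float dt = ( now - lastTime ) / 1000.0f; // seconds elapsed since the last update
    lastTime = now;
    ball.moveLeft( speed * dt ); // move by velocity * elapsed time, not by a fixed pixel step
    glutPostRedisplay();
    glutTimerFunc( 16, timer, 0 );
}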
Hope this helps.
Related
A project I am working on involves using glScissor. In some cases I need to perform a scissor on an area twice (or more), with the goal of only rendering what is within both scissor boxes.
The issue I'm running into is that the second scissor box simply overrides the previous one, meaning only the last box set is used instead of both.
I have tried existing solutions such as: set scissor 1, push matrix, enable GL_SCISSOR_TEST, set scissor 2, disable GL_SCISSOR_TEST, pop matrix, disable GL_SCISSOR_TEST, as proposed here: glScissor() call inside another glScissor()
I could not get these to produce any difference; I also tried glPushAttrib instead of the matrix calls, but still no difference.
Here is an example program I wrote for scissor testing. It's compiled with g++ and uses freeglut; the scissoring takes place in display():
/*
Compile: g++ .\scissor.cpp -lglu32 -lfreeglut -lopengl32
*/
#include <GL/gl.h>//standard from mingw, already in glut.h - header library
#include <GL/glu.h>//standard from mingw, already in glut.h - utility library
#include <GL/glut.h>//glut/freeglut - more utilities, utility tool kit
void display();
void reshape(int, int);
void timer(int);
void init(){
glClearColor(0, 0, 0, 1);
}
int main(int argc, char **argv){
glutInit(&argc, argv);//init glut
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);//init display mode, add double buffer mode
//init window
glutInitWindowPosition(200, 100);//if not specified, it will display in a random spot
glutInitWindowSize(500, 500);//size
//create window
glutCreateWindow("Window 1");
//give glut a function pointer so it can call that function later
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutTimerFunc(0, timer, 0);//call certain function after a specified amount of time
init();
glutMainLoop();//once this loop runs your program has started running, when the loop ends the program terminates
}
float xPos = -10;
int state = 1;//1 = right, -1 = left
//our rendering happens here
void display(){
//clear previous frame
glClear(GL_COLOR_BUFFER_BIT);//pass in flag of frame buffer
//draw next frame below
glLoadIdentity();//reset rotations, transformations, etc. (resets coordinate system)
//we are using a model view matrix by default
//TEST
glEnable(GL_SCISSOR_TEST);
glScissor(0, 0, 100, 1000);
glPushMatrix();
glEnable(GL_SCISSOR_TEST);
glScissor(50, 0, 1000, 1000);
//assuming both scissors intersect, we should only see the square between 50 and 100 pixels
//draw
glBegin(GL_QUADS);//every set of 3 vertices is a triangle
//GL_TRIANGLES = 3 points
//GL_QUADS = 4 points
//GL_POLYGON = any amount of points
glVertex2f(xPos, 1);//the 2 is the number of args we pass in, the f means they're floats
glVertex2f(xPos, -1);
glVertex2f(xPos+2, -1);
glVertex2f(xPos+2, 1);
glEnd();//tell OpenGL you're done drawing vertices
glDisable(GL_SCISSOR_TEST);
glPopMatrix();
glDisable(GL_SCISSOR_TEST);
//display frame buffer on screen
//glFlush();
glutSwapBuffers();//if double buffering, call swap buffers instead of flush
}
//gets called when window is reshaped
void reshape(int width, int height){
//set viewport and projection
//viewport is a rectangle where everything is drawn, like it's the window
glViewport(0, 0, width, height);
//matrix modes: there is model view and projection, projection has depth
glMatrixMode(GL_PROJECTION);
glLoadIdentity();//reset current matrix after changing matrix mode
gluOrtho2D(-10, 10, -10, 10);//specify 2d projection, set opengl's coordinate system
glMatrixMode(GL_MODELVIEW);//change back to model view
}
//this effectively makes a render loop
void timer(int a){
glutPostRedisplay();//opengl will call the display function the next time it gets the chance
glutTimerFunc(1000/60, timer, 0);
//update positions and stuff
//this can be done here or in the display function
switch(state){
case 1:
if(xPos < 8)
xPos += 0.15;
else
state = -1;
break;
case -1:
if(xPos > -10)
xPos -= 0.15;
else
state = 1;
break;
}
}
I tried following example solutions, such as push/pop matrix/attrib, but couldn't get anything to work.
There is no first or second scissor box. There is just the scissor box. You can change the scissor box and that change will affect subsequent rendering. But at any one time, there is only one.
What you want is to use the stencil buffer to discard fragments outside of an area defined by rendering certain values into the stencil buffer.
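For example, here is a minimal sketch of that approach applied to the test program above. It assumes the window is created with a stencil buffer (add GLUT_STENCIL to glutInitDisplayMode) and keeps the same ortho projection; the two clip rectangles are arbitrary world-space stand-ins for the question's scissor boxes:
void display(){
    glClear(GL_COLOR_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    glLoadIdentity();
    glEnable(GL_STENCIL_TEST);
    //pass 1: increment the stencil value under each clip rectangle, with color writes off
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    glBegin(GL_QUADS);//clip rectangle 1
    glVertex2f(-6, -10); glVertex2f(-2, -10); glVertex2f(-2, 10); glVertex2f(-6, 10);
    glEnd();
    glBegin(GL_QUADS);//clip rectangle 2
    glVertex2f(-4, -10); glVertex2f(8, -10); glVertex2f(8, 10); glVertex2f(-4, 10);
    glEnd();
    //pass 2: draw the scene only where both rectangles were drawn (stencil value == 2)
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 2, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glBegin(GL_QUADS);//the moving square from the question
    glVertex2f(xPos, 1); glVertex2f(xPos, -1); glVertex2f(xPos+2, -1); glVertex2f(xPos+2, 1);
    glEnd();
    glDisable(GL_STENCIL_TEST);
    glutSwapBuffers();
}
With this approach there is no glScissor at all; the visible region is simply wherever both mask quads overlap, and you can stack more masks by raising the reference value in the second glStencilFunc.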
I have no experience writing a game, and this week I'm trying to write a player for a music game's map (it may finally become a game?) in Qt. I've run into a problem and I think I need some help.
I want to show an animation at 60 FPS on a QOpenGLWidget. It's just some circles moving in the widget, and CPU usage is low, but it looks laggy.
I enabled VSync by setting the default surface format's swap behavior to double/triple buffering with a swap interval of 1, which I think means 60 FPS.
I implement the paintGL() method and draw the content with QPainter, as Qt's 2D drawing example does.
The step that computes the position of each circle is placed outside the paintGL method and runs before paintGL is called.
This is the flow of the program:
1. read the script
2. start a timer
3. post an event to call the "tick" procedure
4. the "tick" procedure runs and requests a window update
5. paintGL runs and draws the frame
6. before exiting paintGL, another event to call "tick" is posted
7. I think it now waits for VSync and swaps the buffers
8. "tick" is called; go to step 4
the code:
class CgssFumenPlayer : public QOpenGLWidget
{
Q_OBJECT
...
bool Load();
public slots:
void onTick();
protected:
....
void paintGL() override;
QElapsedTimer elapsedTimer;
};
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QSurfaceFormat fmt;
fmt.setSwapBehavior(QSurfaceFormat::TripleBuffer);
fmt.setSwapInterval(1);
QSurfaceFormat::setDefaultFormat(fmt);
CgssFumenPlayer w;
w.Load();
w.setFixedSize(704, 396);
w.show();
return a.exec();
}
bool CgssFumenPlayer::Load()
{
....
elapsedTimer.start();
QMetaObject::invokeMethod(this, "onTick", Qt::QueuedConnection);
}
void CgssFumenPlayer::onTick()
{
playerContext.currentTime = elapsedTimer.elapsed() / 1000.0;
double f = playerContext.currentTime / (1.0 / 60);
playerContext.currentTime = (int)f * (1.0 / 60);
fumen->Compute(&playerContext);
update();
}
void CgssFumenPlayer::paintGL()
{
QPainter painter;
painter.begin(this);
painter.setRenderHint(QPainter::Antialiasing);
painter.setWindow(0, 0, windowWidth, windowHeight);
painter.fillRect(QRectF(0, 0, windowWidth, windowHeight), QColor().black());
DrawButtons(painter);
DrawIcons(painter, &playerContext);
painter.end();
QMetaObject::invokeMethod(this, "onTick", Qt::QueuedConnection);
}
I tried these ways to get more information:
Print the current time with qDebug() each time paintGL is entered. It seems a frame is sometimes dropped; it's very noticeable, and the interval since the previous call is more than 30 ms.
Move the mouse into/out of the window during the animation. It becomes laggy more often when I do.
Measure the time spent computing positions; it's only a very short time.
Run the program on Android; it's just the same, or even more laggy.
Games that are much more complex run smoothly on my computer, so I think the hardware is fast enough (i7-4800M, GTX 765M).
Restart the program again and again: sometimes it's smooth (little or no frame dropping), sometimes it's laggy... I can't find a pattern.
Also, adjusting the animation to 30 FPS makes it always look laggy.
How can I deal with the problem?
(p.s. I hope it can run on android as well)
This is the full source code:
https://github.com/sorayuki/CGSSPlayer/releases (cgssplayer.zip, not the source code)
(I think cgss-fumen.cpp makes no difference to this problem.)
It builds in Qt Creator (5.6) with no other dependencies.
(For Qt 5.5, you need to add
CONFIG += c++11
to the .pro file.)
I'm learning SDL and I have a frustrating problem. Code is below.
Even though there is a loop that keeps the program alive, when I load an image and change the x value of the source rect to animate, the image that was loaded disappears after exactly 15 seconds. This does not happen with static images, only with animations. I'm sure there is a simple thing I'm missing, but I can't see it.
void update(){
rect1.x = 62 * int ( (SDL_GetTicks() / 100) % 12);
/* 62 is the width of a frame, 12 is the number of frames */
}
void shark(){
surface = IMG_Load("s1.png");
if (surface != 0){
texture = SDL_CreateTextureFromSurface(renderer,surface);
SDL_FreeSurface(surface);
}
rect1.y = 0;
rect1.h = 90;
rect1.w = 60;
rect2.x = 0;
rect2.y = 0;
rect2.h = rect1.h+30; // enlarging the image
rect2.w = rect1.w+30;
SDL_RenderCopy(renderer,texture,&rect1,&rect2);
}
void render(){
SDL_SetRenderDrawColor(renderer, 0, 0, 100, 150);
SDL_RenderPresent(renderer);
SDL_RenderClear(renderer);
}
and in main
update();
shark();
render();
The SDL_image header is included, the library is linked, and the DLL exists. Could the DLL be broken?
I left out the rest of the program to keep it simple. If this is not enough, I can post the whole thing.
Every time you call the shark function, it loads another copy of the texture. With that in a loop like you have it, you will run out of video memory quickly (unless you are calling SDL_DestroyTexture after every frame, which you have not indicated). At which point, you will no longer be able to load textures. Apparently this takes about fifteen seconds for you.
If you're going to use the same image over and over, then just load it once, before your main loop.
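For example, a minimal sketch of that split, reusing the question's globals (renderer, texture, rect1, rect2); the loadShark/drawShark names are just for illustration:
bool loadShark(){ // call once, before the main loop
    SDL_Surface* surface = IMG_Load("s1.png");
    if (surface == 0)
        return false;
    texture = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface); // the surface is no longer needed once the texture exists
    return texture != 0;
}
void drawShark(){ // call every frame
    rect1.y = 0;
    rect1.h = 90;
    rect1.w = 60;
    rect2.x = 0;
    rect2.y = 0;
    rect2.h = rect1.h+30; // enlarging the image
    rect2.w = rect1.w+30;
    SDL_RenderCopy(renderer,texture,&rect1,&rect2);
}
Then call SDL_DestroyTexture(texture); once when shutting down, rather than creating a new texture every frame.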
This line int ( (SDL_GetTicks() / 100) % 12);
SDL_GetTicks() returns the number of milliseconds that have elapsed since the library initialized (https://wiki.libsdl.org/SDL_GetTicks). So you're updating with the TOTAL AMOUNT OF TIME since your application started, not the time since the last frame.
You're supposed to keep count of the last time and update the application with how much time has passed since the last update.
Uint32 currentTime=SDL_GetTicks();
int deltaTime = (int)( currentTime-lastTime );
lastTime=currentTime; //declared previously
update( deltaTime );
shark();
render();
Edit: Benjamin is right, the update line works fine.
Still, using the delta time is good advice. In a game, for instance, you won't use the total time since the beginning of the application; you'll probably want to keep your own counter of how much time has passed (since you started an animation).
But there's nothing wrong with that line for your program anyhow.
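For instance, a sketch of a per-animation timer (animationStart would be reset whenever the animation restarts; the frame math matches the question's update()):
Uint32 animationStart = SDL_GetTicks(); // record when this animation begins
// ... later, each frame:
Uint32 elapsed = SDL_GetTicks() - animationStart; // ms since the animation started
rect1.x = 62 * int( (elapsed / 100) % 12 ); // pick the current frame from the animation's own clock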
I've made a very basic window with SDL and want to keep it running until I press the X on the window.
#include "SDL.h"
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
int main(int argc, char **argv)
{
SDL_Init( SDL_INIT_VIDEO );
SDL_Surface* screen = SDL_SetVideoMode( SCREEN_WIDTH, SCREEN_HEIGHT, 0,
SDL_HWSURFACE | SDL_DOUBLEBUF );
SDL_WM_SetCaption( "SDL Test", 0 );
SDL_Event event;
bool quit = false;
while (quit == false)
{
if (SDL_PollEvent(&event)) {
if (event.type == SDL_QUIT) {
quit = true;
}
}
SDL_Delay(80);
}
SDL_Quit();
return 0;
}
I tried adding SDL_Delay() at the end of the while-clause and it worked quite well.
However, 80 ms seemed to be the highest value I could use to keep the program running smoothly and even then the CPU usage is about 15-20%.
Is this the best way to do this and do I have to just live with the fact that it eats this much CPU already on this point?
I know this is an older post, but I myself just came across this issue with SDL when starting up a little demo project. Like user 'thebuzzsaw' noted, the best solution is to use SDL_WaitEvent to reduce the CPU usage of your event loop.
Here's how it would look in your example for anyone looking for a quick solution to it in the future. Hope it helps!
#include "SDL.h"
const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;
int main(int argc, char **argv)
{
SDL_Init( SDL_INIT_VIDEO );
SDL_Surface* screen = SDL_SetVideoMode( SCREEN_WIDTH, SCREEN_HEIGHT, 0,
SDL_HWSURFACE | SDL_DOUBLEBUF );
SDL_WM_SetCaption( "SDL Test", 0 );
SDL_Event event;
bool quit = false;
while (quit == false)
{
if (SDL_WaitEvent(&event) != 0) {
switch (event.type) {
case SDL_QUIT:
quit = true;
break;
}
}
}
SDL_Quit();
return 0;
}
I would definitely experiment with fully blocking functions (such as SDL_WaitEvent). I have an OpenGL application in Qt, and I noticed the CPU usage hovers between 0% and 1%. It spikes to maybe 4% during "usage" (moving the camera and/or causing animations).
I am working on my own windowing toolkit. I have noticed I can achieve similar CPU usage when I use blocking event loops. This will complicate any timers you may depend on, but it is not terribly difficult to implement timers with this new approach.
I just figured out how to reduce CPU usage in my game from 50% down to < 10%.
Your program is much simpler, and simply using SDL_Delay() should be enough.
What I did was:
Use SDL_DisplayFormat() when loading images so the blitting would be faster. This brought the CPU usage down to about 30%.
So I found out that blitting the game's background (a big one-piece .png file) was eating the most CPU. I searched the Internet for a solution, but all I found was the same answer: just use SDL_Delay(). Finally, I found out that the problem was embarrassingly simple: SDL_DisplayFormat() was converting my 24-bit images to 32-bit. So I set my display BPP to 24, which brought CPU usage down to ~20%. Bringing it down to 16 bits solved the problem for me, and the CPU usage is under 10% now.
Of course this means loss of color detail, but as my game is a simplistic 2D game with not too detailed graphics, this was OK.
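For reference, here's a minimal sketch of the SDL_DisplayFormat() step in SDL 1.2, matching the question's SDL_SetVideoMode-style code; the SDL_LoadBMP call and filename are placeholders:
SDL_Surface* loaded = SDL_LoadBMP("background.bmp");
if (loaded != NULL) {
    SDL_Surface* optimized = SDL_DisplayFormat(loaded); // convert once to the screen's pixel format
    SDL_FreeSurface(loaded); // the unconverted copy is no longer needed
    if (optimized != NULL) {
        // blit 'optimized' from now on; SDL_FreeSurface(optimized) at shutdown
    }
}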
In order to really understand this, you need to understand threading. In a threaded application, the program runs until it is waiting for something, then it tells the OS that something else can run. In essence, you are doing this with the SDL_Delay command. If there was no delay at all, I suspect your program would be running at near 100% capacity.
The amount of time that you should put in the delay statement only matters if the other commands are taking a significant amount of time. In general, I would put the delay to be a similar amount of time that it takes to test the poll command, but not more than, say, 10 ms. What will happen is that the OS will wait at least that length of time, allowing other applications to run in the background.
As to what you can do to improve this, well, it looks like there isn't a whole lot that you can do. However, take note that if there was another process running taking a significant amount of CPU power, your program's share would decrease.
I am making an application that does some custom image processing. The program will be driven by a simple menu in the console. The user will input the filename of an image, and that image will be displayed using openGL in a window. When the user selects some processing to be done to the image, the processing is done, and the openGL window should redraw the image.
My problem is that my image is never drawn to the window, instead the window is always black. I think it may have to do with the way I am organizing the threads in my program. The main execution thread handles the menu input/output and the image processing and makes calls to the Display method, while a second thread runs the openGL mainloop.
Here is my main code:
#include <iostream>
#include <string>      // std::string
#include <cstring>     // strcpy
#include <windows.h>   // CreateThread, DWORD, WINAPI, LPVOID
#include <GL/glut.h>
#include "ImageProcessor.h"
#include "BitmapImage.h"
using namespace std;
DWORD WINAPI openglThread( LPVOID param );
void InitGL();
void Reshape( GLint newWidth, GLint newHeight );
void Display( void );
BitmapImage* b;
ImageProcessor ip;
int main( int argc, char *argv[] ) {
DWORD threadID;
b = new BitmapImage();
CreateThread( 0, 0, openglThread, NULL, 0, &threadID );
while( true ) {
char choice;
string path = "TestImages\\";
string filename;
cout << "Enter filename: ";
cin >> filename;
path += filename;
b = new BitmapImage( path );
Display();
cout << "1) Invert" << endl;
cout << "2) Line Thin" << endl;
cout << "Enter choice: ";
cin >> choice;
if( choice == '1' ) {
ip.InvertColour( *b );
}
else {
ip.LineThinning( *b );
}
Display();
}
return 0;
}
void InitGL() {
int argc = 1;
char* argv[1];
argv[0] = new char[20];
strcpy( argv[0], "main" );
glutInit( &argc, argv );
glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowPosition( 0, 0 );
glutInitWindowSize( 800, 600 );
glutCreateWindow( "ICIP Program - Character recognition using line thinning, Hilbert curve, and wavelet approximation" );
glutDisplayFunc( Display );
glutReshapeFunc( Reshape );
glClearColor(0.0,0.0,0.0,1.0);
glEnable(GL_DEPTH_TEST);
}
void Reshape( GLint newWidth, GLint newHeight ) {
/* Reset viewport and projection parameters */
glViewport( 0, 0, newWidth, newHeight );
}
void Display( void ) {
glClear (GL_COLOR_BUFFER_BIT); // Clear display window.
b->Draw();
glutSwapBuffers();
}
DWORD WINAPI openglThread( LPVOID param ) {
InitGL();
glutMainLoop();
return 0;
}
Here is my draw method for BitmapImage:
void BitmapImage::Draw() {
cout << "Drawing" << endl;
if( _loaded ) {
glBegin( GL_POINTS );
for( unsigned int i = 0; i < _height * _width; i++ ) {
glColor3f( _bitmap_image[i*3] / 255.0, _bitmap_image[i*3+1] / 255.0, _bitmap_image[i*3+2] / 255.0 );
// invert the y-axis while drawing
glVertex2i( i % _width, _height - (i / _width) );
}
glEnd();
}
}
Any ideas as to the problem?
Edit: The problem was technically solved by starting a GLUT timer from the openglThread which calls glutPostRedisplay() every 500 ms. This is OK for now, but I would prefer a solution where I only have to redisplay when I actually change the bitmap (to save on processing time), and one where I don't have to run another thread (I'm assuming the timer is another thread). This is mainly because the main processing thread is going to be doing a lot of intensive work, and I would like to dedicate most of the resources to it rather than anything else.
I've had this problem before - it's pretty annoying. The problem is that all of your OpenGL calls must be done in the thread where you started the OpenGL context. So when you want your main (input) thread to change something in the OpenGL thread, you need to somehow signal to the thread that it needs to do stuff (set a flag or something).
Note: I don't know what your BitmapImage loading function (here, your constructor) does, but it probably has some OpenGL calls in it. The above applies to that too! So you'll need to signal to the other thread to create a BitmapImage for you, or at least to do the OpenGL-related part of creating the bitmap.
A few points:
Generally, if you're going the multithreaded route, it's preferable if your main thread is your GUI thread i.e. it does minimal tasks keeping the GUI responsive. In your case, I would recommend moving the intensive image processing tasks into a thread and doing the OpenGL rendering in your main thread.
For drawing your image, you're using one vertex per pixel instead of a textured quad. Unless you have a very good reason, it's much faster to use a single textured quad (with the processed image as the texture). Check out glTexImage2D and glTexSubImage2D; a sketch follows after these points.
Rendering at a framerate of 2fps (500ms, as you mentioned) will have negligible impact on resources if you're using an OpenGL implementation that is accelerated, which is almost guaranteed on any modern system, and if you use a textured quad instead of a vertex per pixel.
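For illustration, here's a minimal sketch of the textured-quad idea in legacy fixed-function GL (matching the question's code); the function names are mine, and it assumes a tightly packed RGB buffer like the question's _bitmap_image:
GLuint texId;
void createTexture( int width, int height, const unsigned char* pixels ) {
    glGenTextures( 1, &texId );
    glBindTexture( GL_TEXTURE_2D, texId );
    glPixelStorei( GL_UNPACK_ALIGNMENT, 1 ); // rows are not 4-byte aligned
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    // note: very old GL (pre-2.0) requires power-of-two texture sizes
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels );
}
void updateTexture( int width, int height, const unsigned char* pixels ) {
    // after processing, re-upload the pixels without reallocating the texture
    glBindTexture( GL_TEXTURE_2D, texId );
    glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels );
}
void drawTexturedQuad() {
    glEnable( GL_TEXTURE_2D );
    glBindTexture( GL_TEXTURE_2D, texId );
    glBegin( GL_QUADS );
    glTexCoord2f( 0, 1 ); glVertex2f( -1, -1 ); // flip the y-axis, as the original Draw() does
    glTexCoord2f( 1, 1 ); glVertex2f(  1, -1 );
    glTexCoord2f( 1, 0 ); glVertex2f(  1,  1 );
    glTexCoord2f( 0, 0 ); glVertex2f( -1,  1 );
    glEnd();
    glDisable( GL_TEXTURE_2D );
}
A single glTexSubImage2D upload plus one quad per frame is far cheaper than issuing width*height glVertex2i calls.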
Your problem may be in Display() at the line
b->Draw();
I don't see where b is passed into the scope of Display().
You need to make OpenGL calls on the thread in which the context was created (glutInitDisplayMode). Hence the gl* calls inside the Display method, which runs on a different thread, will not be defined. You can see this easily by dumping the function address; it will likely be undefined or NULL.
It sounds like the 500 ms timer is calling Display() regularly; after two calls it fills the back buffer and the front buffer with the same rendering. Display() continues to be called until the user enters something, which the OpenGL thread never knows about, but since the global variable b is now different, the thread blindly uses it in Display().
So how about doing what Jesse Beder says and using a global int, call it flag, to signal when the user has entered something. For example:
set flag = 1; after you do the b = new BitmapImage( path );
then set flag = 0; after you call Display() from the OpenGL thread.
You loop on the timer, but now check whether flag == 1. You only need to call glutPostRedisplay() when flag == 1, i.e. when the user has entered something.
This seems like a good approach without a sleep/wake mechanism. Accessing global variables from more than one thread can be unsafe, but I think the worst that can happen here is that the OpenGL thread misreads flag as 0 when it should read 1; it would then catch it after no more than a few iterations. If you get strange behavior, add proper synchronization.
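A minimal sketch of that idea, reusing the question's globals (volatile is a simplification here; for anything more robust use std::atomic or a real synchronization primitive):
volatile int flag = 0; // 1 = the image changed, a redraw is needed
// main (input) thread, after loading or processing the image:
//     b = new BitmapImage( path );
//     flag = 1; // signal the OpenGL thread
// OpenGL thread: a GLUT timer that polls the flag.
void pollTimer( int extra ) {
    if( flag == 1 ) {
        flag = 0;
        glutPostRedisplay(); // redraw only when the image actually changed
    }
    glutTimerFunc( 100, pollTimer, 0 ); // check again in 100 ms
}
// registered once in openglThread(), after glutCreateWindow():
//     glutTimerFunc( 0, pollTimer, 0 );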
With the code you show, you call Display() twice in main(). Actually, main() doesn't even need to call Display(), the OpenGL thread does it.