I read a few tutorials about OpenGL and now I'm trying to use it with SDL. The thing is that when I use SDL_GL_SwapBuffers() in a while loop the window just freezes. Here's some code:
#include "SDL.h"
#include "system.h"
#include "SDL_opengl.h"
System Sys(800, 600, 32);
SDL_Event kpress;
int main( int argc, char* args[] )
{
Sys.init();
bool quit = false;
while (!quit)
{
while (SDL_PollEvent(&kpress)) if(kpress.type == SDL_QUIT) quit = true;
glClear(GL_COLOR_BUFFER_BIT);
SDL_GL_SwapBuffers();
}
SDL_Quit();
return 0;
}
------------------------------ These are in system.h, in class System ------------------------------
bool System::init()
{
    if (SDL_Init(SDL_INIT_EVERYTHING) < 0)
    {
        errorCode = 1;
        return false;
    }
    if (SDL_SetVideoMode(screen_h, screen_w, bpp, SDL_OPENGL) == 0)
    {
        errorCode = 2;
        return false;
    }
    SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
    if (!init_GL())
    {
        errorCode = 3;
        return false;
    }
    SDL_WM_SetCaption("Engine", 0);
    return true;
}

bool System::init_GL()
{
    glClearColor(1, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, screen_h, screen_w, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    if (glGetError() != GL_NO_ERROR)
        return false;
    return true;
}
If I draw some shapes or use a timer to limit the FPS, nothing changes.
Do you have any ideas?
My first piece of advice: get rid of that bogus System class. So far all the tasks it performs are purely sequential/procedural, and that should be reflected in the program's outline. People tend to put everything into classes just because they're taught to see everything in terms of object models. But this System class would have to follow the singleton pattern, which, in my opinion, is an anti-pattern.
All the stuff you placed in init_GL belongs in the rendering loop. OpenGL initialization ends after creating a render context. OpenGL state is not initialized, it is set on demand. OpenGL objects are initialized, but also on demand.
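In terms of the question's code, that would look something like the following sketch, which reuses the question's names and assumes an 800×600 ortho range:

while (!quit)
{
    while (SDL_PollEvent(&kpress))
        if (kpress.type == SDL_QUIT)
            quit = true;

    // State is (re)set where it is needed, every frame:
    glClearColor(1, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 800, 600, 0, -1, 1);   // width/height assumed
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // ... draw ...

    SDL_GL_SwapBuffers();
}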
Also, you're not using glGetError correctly. It needs to be called in a loop until no more errors are reported. It thus also makes little sense to bail out if a GL error is reported; OpenGL errors should be considered diagnostic.
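A drain loop would look something like this (a sketch; the helper name is made up):

// Sketch: glGetError() reports one error flag per call; loop until clear.
static void log_gl_errors(const char* where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error 0x%04X after %s (diagnostic only)\n", err, where);
}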
SDL_GL_SetAttribute must be called before SDL_SetVideoMode, so you're probably not actually getting double buffering.
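That is, the order in System::init should be (sketch):

SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);   // request attributes first...
if (SDL_SetVideoMode(screen_w, screen_h, bpp, SDL_OPENGL) == 0)  // ...then create the mode (width before height)
{
    errorCode = 2;
    return false;
}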
Hey, you aren't calling the function which initializes OpenGL: init_GL()
You're not checking the result from System::init; it might be failing somewhere in that function and not setting up your initial state correctly.
SDL #defines main() to be SDL_main() so that it can do some extra initialization before program start, which you seem to be bypassing via the statically initialized class.
Try constructing your System object in main().
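Along these lines (a sketch of that suggestion, which also picks up the unchecked return value mentioned above):

int main(int argc, char* args[])
{
    System sys(800, 600, 32);   // constructed after SDL's main hook has run
    if (!sys.init())            // and the result actually checked
        return 1;
    // ... event/render loop as before ...
    SDL_Quit();
    return 0;
}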
This is my first big OpenGL project and I am confused about a new feature I want to implement.
I am working on a game engine. In my engine I have two classes: Renderer and CustomWindow. GLFW needs to be initialized, then an OpenGL context needs to be created, then GLEW can be initialized. This was no problem until I decided to support multiple windows being created at the same time. Here are the things I am confused about:
Do I need to initialize GLEW for every window that is created? If not, can I still call glewInit() for every window creation and have everything be fine?
If I create a window and then destroy it, do I have to call glewInit() again, and will I have to call these functions again?
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &numberOfTexturesSupported);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_MULTISAMPLE);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_PROGRAM_POINT_SIZE);
If there are any off-topic comments that would help, they are very welcome.
Update 1: More Context
For reference, the reason I want to do this is to implement multiple window rendering that share the same OpenGL context. Note, each window uses its own vertex array object (VAO). Here is the code for reference:
// CustomWindow.cpp
CustomWindow::CustomWindow() {
    window = nullptr;
    title = defaultTitle;
    shouldClose = false;
    error = false;
    vertexArrayObjectID = 0;
    frameRate = defaultFrameRate;
    window = glfwCreateWindow(defaultWidth, defaultHeight, title.c_str(), nullptr, nullptr);
    if (!window) {
        error = true;
        return;
    }
    glfwMakeContextCurrent(window);
    if (glewInit() != GLEW_OK) {
        error = true;
        return;
    }
    glGenVertexArrays(1, &vertexArrayObjectID);
    glBindVertexArray(vertexArrayObjectID);
    allWindows.push_back(this);
}

CustomWindow::CustomWindow(int width, int height, const std::string& title, GLFWmonitor* monitor, GLFWwindow* share) {
    window = nullptr;
    this->title = title;
    shouldClose = false;
    error = false;
    vertexArrayObjectID = 0;
    frameRate = defaultFrameRate;
    window = glfwCreateWindow(width, height, title.c_str(), monitor, share);
    if (!window) {
        error = true;
        return;
    }
    glfwMakeContextCurrent(window);
    glGenVertexArrays(1, &vertexArrayObjectID);
    allWindows.push_back(this);
}

CustomWindow::~CustomWindow() {
    if (window != nullptr || error)
        glfwDestroyWindow(window);
    unsigned int position = 0;
    for (unsigned int i = 0; i < allWindows.size(); i++)
        if (allWindows[i] == this) {
            position = i;
            break;
        }
    allWindows.erase(allWindows.begin() + position);
    if (mainWindow == this)
        mainWindow = nullptr;
}
// Renderer.cpp
Renderer::Renderer() {
    error = false;
    numberOfTexturesSupported = 0;
    if (singleton != nullptr) {
        error = true;
        return;
    }
    singleton = this;
    // Init GLFW
    if (!glfwInit()) {
        error = true;
        return;
    }
    // Set window hints
    glfwWindowHint(GLFW_MAXIMIZED, true);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
    glfwWindowHint(GLFW_SAMPLES, 4);
    // Init GLEW
    if (glewInit() != GLEW_OK) {
        error = true;
        return;
    }
    // Set graphics message reporting
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
    glDebugMessageCallback(openglDebugCallback, nullptr);
    // Set up OpenGL
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &numberOfTexturesSupported);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    glEnable(GL_MULTISAMPLE);
    glEnable(GL_LINE_SMOOTH);
    glEnable(GL_POINT_SMOOTH);
    glEnable(GL_PROGRAM_POINT_SIZE);
}
After some research, I would say that it depends; it's always best to have a look at the code base to form an opinion.
The OpenGL wiki has some useful information to offer.
Loading OpenGL Functions is an important task for initializing OpenGL after creating an OpenGL context. You are strongly advised to use an OpenGL Loading Library instead of a manual process. However, if you want to know how it works manually, read on.
Windows
This function (wglGetProcAddress) only works in the presence of a valid OpenGL context. Indeed, the function pointers it returns are themselves context-specific. The Windows documentation for this function states that the functions returned may work with another context, depending on the vendor of that context and that context's pixel format.
In practice, if two contexts come from the same vendor and refer to the same GPU, then the function pointers pulled from one context will work in the other.
Linux and X-Windows
This function (glXGetProcAddress) can operate without an OpenGL context, though the functions it returns obviously can't. This means that functions are not associated with a context in any way.
If you take a look at the source code of GLEW (./src/glew.c), you will see that the library simply calls the loading procedures of the underlying system and assigns the results of those calls to global function pointers.
In other words, calling glewInit multiple times has no side effects other than those explained in the OpenGL wiki.
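So with multiple GLFW windows and contexts, a conservative pattern (a sketch, assuming GLEW; window names are illustrative) is simply to re-run glewInit() after each context switch:

// Sketch: GLEW loads pointers for whichever context is current, so calling
// glewInit() again after switching contexts is harmless, and on Windows it is
// the safe choice when contexts may differ in vendor or pixel format.
glfwMakeContextCurrent(windowA);
if (glewInit() != GLEW_OK) { /* handle failure */ }

glfwMakeContextCurrent(windowB);
if (glewInit() != GLEW_OK) { /* handle failure */ }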
Another question would be: do you really need multiple windows for that task? A different approach could be achieved with only one context and multiple framebuffer objects.
Multiple contexts (sharing resources between them) and event handling (which can only be done from the 'main' thread) need proper synchronization and multiple context switches.
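For what it's worth, a minimal sketch of that framebuffer-object approach (names and sizes here are illustrative, not from the question's engine):

// Sketch: one context, one off-screen render target per "view".
// Each view is drawn into an FBO-attached texture; the single window
// then composites or displays those textures.
const int width = 800, height = 600;   // illustrative size

GLuint colorTex = 0, fbo = 0;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle incomplete framebuffer
}

// Per frame: render the view off-screen, then go back to the window.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... draw this view's scene ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// ... draw a textured quad with colorTex, or blit it ...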
I am new to the ImGui library and recently I've been trying out the included examples. Everything worked like a charm until I changed the include (and functions) from gl3w to glad (the loader I would like to use). The moment I swapped the two loaders I got a segmentation fault inside the imgui_impl_glfw_gl3.cpp file. I found a post suggesting that this may happen because some functions fail to "bind" and produce null pointers.
I have located the error at line 216 of imgui_impl_glfw_gl3.cpp.
This is the code at line 216:
glGetIntegerv(GL_TEXTURE_BINDING_2D, &last_texture);
I have also changed the include in imgui_impl_glfw_gl3.cpp from gl3w to glad, with no results.
This is the main function I am executing (it's the basic OpenGL3 example of ImGui, using glad):
#include "gui/imgui.h"
#include "gui/imgui_impl_glfw_gl3.h"
#include <stdio.h>
#include <glad/glad.h> // This example is using gl3w to access OpenGL functions (because it is small). You may use glew/glad/glLoadGen/etc. whatever already works for you.
#include <GLFW/glfw3.h>
static void error_callback(int error, const char* description)
{
fprintf(stderr, "Error %d: %s\n", error, description);
}
int main(int, char**)
{
// Setup window
glfwSetErrorCallback(error_callback);
if (!glfwInit())
return 1;
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* window = glfwCreateWindow(1280, 720, "ImGui OpenGL3 example", NULL, NULL);
glfwMakeContextCurrent(window);
glfwSwapInterval(1); // Enable vsync
glfwInit();
// Setup ImGui binding
ImGui_ImplGlfwGL3_Init(window, true);
// Setup style
//ImGui::StyleColorsDark();
ImGui::StyleColorsClassic();
bool show_demo_window = true;
bool show_another_window = false;
bool algo = true;
ImVec4 clear_color = ImVec4(0.45f, 0.55f, 0.60f, 1.00f);
// Main loop
while (!glfwWindowShouldClose(window))
{
glfwPollEvents();
ImGui_ImplGlfwGL3_NewFrame();
// 1. Show a simple window.
// Tip: if we don't call ImGui::Begin()/ImGui::End() the widgets automatically appears in a window called "Debug".
{
static float f = 0.0f;
static int counter = 0;
ImGui::Text("Hello, world!"); // Display some text (you can use a format string too)
ImGui::SliderFloat("float", &f, 0.0f, 1.0f); // Edit 1 float using a slider from 0.0f to 1.0f
ImGui::ColorEdit3("COLORINES", (float*)&clear_color); // Edit 3 floats representing a color
ImGui::Checkbox("Demo Window", &show_demo_window); // Edit bools storing our windows open/close state
ImGui::Checkbox("Booleanooooo", &algo);
ImGui::Checkbox("Another Window", &show_another_window);
if (ImGui::Button("Button")) // Buttons return true when clicked (NB: most widgets return true when edited/activated)
counter++;
ImGui::SameLine();
ImGui::Text("counter = %d", counter);
ImGui::Text("pues se ve que hay texto: %d", algo);
ImGui::Text("Application average %.3f ms/frame (%.1f FPS)", 1000.0f / ImGui::GetIO().Framerate, ImGui::GetIO().Framerate);
}
{
ImGui::Begin("VENTANA WAPA");
ImGui::Text("POS SA QUEDAO BUENA VENTANA");
static float yee = 0.0f;
ImGui::SliderFloat("lel", &yee,1.0f,0.5f);
ImGui::End();
}
// 2. Show another simple window. In most cases you will use an explicit Begin/End pair to name your windows.
if (show_another_window)
{
ImGui::Begin("Another Window", &show_another_window);
ImGui::Text("Hello from another window!");
if (ImGui::Button("Close Me"))
show_another_window = false;
ImGui::End();
}
// 3. Show the ImGui demo window. Most of the sample code is in ImGui::ShowDemoWindow(). Read its code to learn more about Dear ImGui!
if (show_demo_window)
{
ImGui::SetNextWindowPos(ImVec2(650, 20), ImGuiCond_FirstUseEver); // Normally user code doesn't need/want to call this because positions are saved in .ini file anyway. Here we just want to make the demo initial state a bit more friendly!
ImGui::ShowDemoWindow(&show_demo_window);
}
// Rendering
int display_w, display_h;
glfwGetFramebufferSize(window, &display_w, &display_h);
glViewport(0, 0, display_w, display_h);
glClearColor(clear_color.x, clear_color.y, clear_color.z, clear_color.w);
glClear(GL_COLOR_BUFFER_BIT);
ImGui::Render();
glfwSwapBuffers(window);
}
// Cleanup
//ImGui_ImplGlfwGL3_Shutdown();
glfwTerminate();
return 0;
}
I have no clue why this is happening, and I'm pretty new to OpenGL and ImGui, so, any ideas? :(
Glad & gl3w are both extension loader libraries. They generally need to be initialized on a current GL context before use.
The original code called gl3wInit(). Yours is missing any sort of glad init.
Make sure you initialize glad (gladLoadGLLoader((GLADloadproc) glfwGetProcAddress)) after glfwMakeContextCurrent() and before you call any OpenGL functions.
Otherwise all the OpenGL function pointers glad declares will remain NULL. Trying to call NULL function pointers generally doesn't go well for a process.
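Concretely, in the example's setup code the missing call would sit between context creation and the first GL call (a sketch of that fix):

glfwMakeContextCurrent(window);
glfwSwapInterval(1); // Enable vsync

// Load all OpenGL entry points through glad before any GL call is made:
if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
{
    fprintf(stderr, "Failed to initialize glad\n");
    return 1;
}

// Only now is it safe to set up the ImGui binding:
ImGui_ImplGlfwGL3_Init(window, true);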
I'm reading the OpenGL Red Book, and I'm pretty much stuck at the first tutorial. Everything works fine if I use freeglut and GLEW, but I'd like to handle input and such myself, so I ditched freeglut and GLEW and wrote my own code. I've looked at some other tutorials and finished the code, but nothing is displayed. It seems like freeglut does some voodoo magic in the background, but I don't know what I'm missing. I've tried this:
int attributeListInt[19];
int pixelFormat[1];
unsigned int formatCount;
int result;
PIXELFORMATDESCRIPTOR pixelFormatDescriptor;
int attributeList[5];

context = GetDC (hwnd);
if (!context)
    return -1;

attributeListInt[0] = WGL_SUPPORT_OPENGL_ARB;
attributeListInt[1] = TRUE;
attributeListInt[2] = WGL_DRAW_TO_WINDOW_ARB;
attributeListInt[3] = TRUE;
attributeListInt[4] = WGL_ACCELERATION_ARB;
attributeListInt[5] = WGL_FULL_ACCELERATION_ARB;
attributeListInt[6] = WGL_COLOR_BITS_ARB;
attributeListInt[7] = 24;
attributeListInt[8] = WGL_DEPTH_BITS_ARB;
attributeListInt[9] = 24;
attributeListInt[10] = WGL_DOUBLE_BUFFER_ARB;
attributeListInt[11] = TRUE;
attributeListInt[12] = WGL_SWAP_METHOD_ARB;
attributeListInt[13] = WGL_SWAP_EXCHANGE_ARB;
attributeListInt[14] = WGL_PIXEL_TYPE_ARB;
attributeListInt[15] = WGL_TYPE_RGBA_ARB;
attributeListInt[16] = WGL_STENCIL_BITS_ARB;
attributeListInt[17] = 8;
attributeListInt[18] = 0;

result = wglChoosePixelFormatARB (context, attributeListInt, NULL, 1, pixelFormat, &formatCount);
if (result != 1)
    return -1;

result = SetPixelFormat (context, pixelFormat[0], &pixelFormatDescriptor);
if (result != 1)
    return -1;

attributeList[0] = WGL_CONTEXT_MAJOR_VERSION_ARB;
attributeList[1] = 4;
attributeList[2] = WGL_CONTEXT_MINOR_VERSION_ARB;
attributeList[3] = 2;
attributeList[4] = 0;

rendercontext = wglCreateContextAttribsARB (context, 0, attributeList);
if (rendercontext == NULL)
    return -1;

result = wglMakeCurrent (context, rendercontext);
if (result != 1)
    return -1;

glClearDepth (1.0f);
glFrontFace (GL_CCW);
glEnable (GL_CULL_FACE);
glCullFace (GL_BACK);
return 0;
This sets up a graphics context, but is apparently not enough to make everything work. The tutorial didn't include anything about view or projection matrices, so I'm not sure whether I should add anything like that. But the window remains black.
This is the tutorial code, adjusted to my code:
#define BUFFER_OFFSET(offset) ((void *)(offset))

bool init ();
bool mainloop ();

enum VAO_IDs { Triangles, NumVAOs };
enum Buffer_IDs { ArrayBuffer, NumBuffers };
enum Attrib_IDs { vPosition = 0 };

GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;

int main (int argc, char** argv)
{
    Window w;
    w.init (&mainloop);
    if (!init ())
        return 0;
    w.run ();
    w.shutdown ();
    return 0;
}

bool init ()
{
    glGenVertexArrays (NumVAOs, VAOs);
    glBindVertexArray (VAOs[Triangles]);
    GLfloat vertices[NumVertices][2] = {
        {-0.90f, -0.90f}, // Triangle 1
        { 0.85f, -0.90f},
        {-0.90f,  0.85f},
        { 0.90f, -0.85f}, // Triangle 2
        { 0.90f,  0.90f},
        {-0.85f,  0.90f}
    };
    glGenBuffers (NumBuffers, Buffers);
    glBindBuffer (GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
    glBufferData (GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    ShaderInfo shaders[] = {
        {GL_VERTEX_SHADER, "triangles.vert"},
        {GL_FRAGMENT_SHADER, "triangles.frag"},
        {GL_NONE, NULL}
    };
    GLuint program = LoadShaders (shaders);
    glUseProgram (program);
    glVertexAttribPointer (vPosition, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET (0));
    glEnableVertexAttribArray (vPosition);
    return true;
}

bool mainloop ()
{
    glClear (GL_COLOR_BUFFER_BIT);
    glBindVertexArray (VAOs[Triangles]);
    glDrawArrays (GL_TRIANGLES, 0, NumVertices);
    glFlush ();
    return true;
}
Creating an OpenGL context is not trivial, especially if you want to use wglChoosePixelFormatARB, which must be loaded through the OpenGL extension mechanism… which in turn requires a functioning OpenGL context to work in the first place. I think you see that this is kind of a chicken-and-egg problem. In addition, the window you use to create an OpenGL context requires certain attributes to work reliably: for one, the WndClass should have the CS_OWNDC style set, and the window style should include WS_CLIPSIBLINGS | WS_CLIPCHILDREN.
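For reference, the usual workaround looks roughly like this; a minimal sketch assuming <GL/wglext.h> is available for the ARB typedefs, with error checks omitted:

// Sketch: bootstrap a throwaway window + legacy context solely to load the
// WGL extension entry points, then tear it down and create the real context.
HWND tmpWnd = CreateWindowEx(0, TEXT("STATIC"), TEXT("tmp"), WS_OVERLAPPEDWINDOW,
                             0, 0, 1, 1, NULL, NULL, GetModuleHandle(NULL), NULL);
HDC tmpDC = GetDC(tmpWnd);

PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
    PFD_TYPE_RGBA, 32 };
SetPixelFormat(tmpDC, ChoosePixelFormat(tmpDC, &pfd), &pfd);

HGLRC tmpRC = wglCreateContext(tmpDC);   // legacy context, good enough to load extensions
wglMakeCurrent(tmpDC, tmpRC);

// Only now can the ARB functions be fetched:
PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
    (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress("wglChoosePixelFormatARB");
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

// Dispose of the temporaries, then run the question's code on the real window.
wglMakeCurrent(NULL, NULL);
wglDeleteContext(tmpRC);
ReleaseDC(tmpWnd, tmpDC);
DestroyWindow(tmpWnd);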
I recently approached the aforementioned chicken-and-egg problem with my small wglarb helper tools: https://github.com/datenwolf/wglarb
It also comes with a small test program that shows how to use it.
I suggest you use the functions provided by that. I wrote this little helper library in a way that it is not negatively impacted if your program uses other extension loading mechanisms. It was not thread-safe at first, but I've since taken the time to make the code thread-safe: you can now use the exposed functions without having to care about synchronization; it's all done internally in a reliable way.
I have an SDL/OpenGL game I am working on for fun. I get a decent FPS on average, but movement is really choppy because SDL_GL_SwapBuffers() will randomly take a crazy long amount of time to process. With textures loaded and written to the buffer it will sometimes take over 100 ms! I cut out a lot of my code to try to figure out whether it was something I did wrong, but I haven't had much luck. When I run this bare-bones program it will still block for up to 70 ms at times.
Main:
// Don't forget to link to opengl32, glu32, SDL_image.lib

// includes
#include <stdio.h>

// SDL
#include <cstdlib>
#include <SDL/SDL.h>

// Video
#include "videoengine.h"

int main(int argc, char *argv[])
{
    // begin SDL
    if ( SDL_Init(SDL_INIT_VIDEO) != 0 )
    {
        printf("Unable to initialize SDL: %s\n", SDL_GetError());
    }

    // begin video class
    VideoEngine videoEngine;

    // BEGIN MAIN LOOP
    bool done = false;
    while (!done)
    {
        int loopStart = SDL_GetTicks();

        printf("STARTING SWAP BUFFER : %d\n", SDL_GetTicks() - loopStart);
        SDL_GL_SwapBuffers();

        int total = SDL_GetTicks() - loopStart;
        if (total > 6)
            printf("END LOOP : %d ------------------------------------------------------------>\n", total);
        else
            printf("END LOOP : %d\n", total);
    }
    // END MAIN LOOP

    return 0;
}
My "VideoEngine" constructor:
VideoEngine::VideoEngine()
{
    UNIT = 16;
    SCREEN_X = 320;
    SCREEN_Y = 240;
    SCALE = 1;

    // Begin Initialization
    SDL_Surface *screen;
    SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );  // [!] SDL_GL_SetAttribute must be called BEFORE SDL_SetVideoMode
    screen = SDL_SetVideoMode( SCALE*SCREEN_X, SCALE*SCREEN_Y, 16, SDL_OPENGL ); // Set screen to the window with opengl
    if ( !screen )  // make sure the window was created
    {
        printf("Unable to set video mode: %s\n", SDL_GetError());
    }

    // set opengl state
    opengl_init();
    // End Initialization
}

void VideoEngine::opengl_init()
{
    // Set the OpenGL state after creating the context with SDL_SetVideoMode
    //glClearColor( 0, 0, 0, 0 );  // sets screen buffer to black
    //glClearDepth(1.0f);          // Tells OpenGL what value to reset the depth buffer to when it is cleared
    glViewport( 0, 0, SCALE*SCREEN_X, SCALE*SCREEN_Y );  // sets the viewport to the default resolution (SCREEN_X x SCREEN_Y) multiplied by SCALE. (x,y,w,h)
    glMatrixMode( GL_PROJECTION );  // Applies subsequent matrix operations to the projection matrix stack.
    glLoadIdentity();               // Replaces the current matrix with the identity matrix
    glOrtho( 0, SCALE*SCREEN_X, SCALE*SCREEN_Y, 0, -1, 1 ); // describes a transformation that produces a parallel projection
    glMatrixMode( GL_MODELVIEW );   // Applies subsequent matrix operations to the modelview matrix stack.
    glEnable(GL_TEXTURE_2D);        // Need this to display a texture
    glLoadIdentity();               // Replaces the current matrix with the identity matrix
    glEnable(GL_BLEND);             // Enable blending for transparency
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // Specifies pixel arithmetic
    //glDisable( GL_LIGHTING );    // Disable lighting
    //glDisable( GL_DITHER );      // Disable dithering
    //glDisable( GL_DEPTH_TEST );  // Disable depth testing

    // Check for error
    GLenum error = glGetError();
    if ( error != GL_NO_ERROR )
    {
        printf( "Error initializing OpenGL! %s\n", gluErrorString( error ) );
    }
    return;
}
I'm starting to think I may have a hardware issue, though I have never had this problem with a game before.
SDL does use the SwapIntervalEXT extension, so you can make sure that buffer swaps are as fast as possible (vsync disabled). Also, a buffer swap is not a simple operation: OpenGL may need to copy the contents of the back buffer to the front buffer in case you want to glReadPixels() from it. This behavior can be controlled using WGL_ARB_pixel_format, using WGL_SWAP_EXCHANGE_ARB (you can read about all this stuff in the specs; I'm not sure whether there is an alternative to that for Linux).
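With SDL 1.2 the swap interval can be requested through an attribute before SDL_SetVideoMode; a sketch using the question's names (whether the driver honors it is another matter):

// Sketch: ask for swap interval 0 (vsync off) before creating the GL window.
SDL_GL_SetAttribute( SDL_GL_SWAP_CONTROL, 0 );
screen = SDL_SetVideoMode( SCALE*SCREEN_X, SCALE*SCREEN_Y, 16, SDL_OPENGL );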
And then on top of all that, there is the windowing system. That can actually cause a lot of trouble. Also, if some errors are generated ...
This behavior is probably ok if you're running on a small mobile GPU.
SDL_GL_SwapBuffers() only contains a call to glXSwapBuffers() / wglSwapBuffers(), so there is no time spent in there.
I think I have a bug in my program. I use SDL and OpenGL to render an animation. The program also measures the average FPS. Typically, when I run the program, it runs at around 550 FPS.
However, if I start a second instance of the program, the FPS drops to around half (220 FPS) for both. The strange thing is that if I close the first instance, the second one will still run at only 220 FPS. This leads me to believe that my cleanup code is somehow flawed.
Sometimes, even if I run a single instance, it will run at only 220 FPS, probably due to a previous instance that failed to clean up properly. Is there something wrong with my approach below?
I use a screen class with the following constructor and destructor:
namespace gfx
{
    screen::screen(const settings& vs) : dbl_buf_(false), sdl_surface_(0)
    {
        if (SDL_Init(SDL_INIT_VIDEO) < 0)
            throw util::exception(::std::string("Unable to initialize SDL video: ") + SDL_GetError());
        if (!set(vs))
        {
            SDL_Quit();
            throw util::exception("Unable to setup initial video mode.");
        }
        glewInit();
    }

    screen::~screen()
    {
        SDL_Quit();
    }

    bool screen::set(const settings& vs)
    {
        SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
        Uint32 flags = SDL_HWSURFACE | SDL_OPENGL;
        if (vs.full_screen) flags |= SDL_FULLSCREEN;
        sdl_surface_ = SDL_SetVideoMode(vs.size_x, vs.size_y, vs.bpp, flags);
        if (!sdl_surface_) return false;
        settings_ = vs;
        int db_flag = 0;
        SDL_GL_GetAttribute(SDL_GL_DOUBLEBUFFER, &db_flag);
        dbl_buf_ = (db_flag == 1);
        return true;
    }

    // ...
}
Also:
int main()
{
    try
    {
        gfx::settings vs = {800, 600, 32, false};
        gfx::screen scr(vs);
        // main app loop, render animation using OpenGL calls
        // loop runs while running_ variable is true (see below)
    }
    // catch, etc.
    return 0;
}
If it makes any difference, I use Linux and an ATI card.
Update: Event handling code:
SDL_Event event;
while (SDL_PollEvent(&event))
{
    switch (event.type)
    {
    case SDL_KEYDOWN:
        if (event.key.keysym.sym == SDLK_ESCAPE)
            running_ = false;
        break;
    case SDL_QUIT:
        running_ = false;
        break;
    default:
        world_.process_event(event);
        break;
    }
}
When a process terminates, all the resources it used are freed automatically, and that includes OpenGL. What may be happening is that you don't actually terminate your process but only hide the window by clicking the close button.