I ran my program twice. The first time, with glfwSwapInterval(1), everything was fine. The second time, without glfwSwapInterval(1), it used 100% of my CPU.
My question: is this normal, and do I really have to call glfwSwapInterval(1) for my program to run properly?
The code:
glfwInit();
long window = glfwCreateWindow(1200, 800, "OpenGL", 0, 0);
glfwShowWindow(window);
glfwMakeContextCurrent(window);
GL.createCapabilities();
glClearColor(1, 0, 0, 1);
while (!glfwWindowShouldClose(window)) {
    glfwPollEvents();
    glClear(GL_COLOR_BUFFER_BIT);
    glfwSwapBuffers(window);
}
glfwTerminate();
The GLFW documentation mentions: "glfwSwapInterval is not called during context creation, leaving the swap interval set to whatever is the default on that platform."
So essentially, whether synchronization is enabled by default depends on your platform. On my machine, for example, the default appears to be the opposite of what you are seeing.
In most cases you'll want to call glfwSwapInterval(1) to enable vertical synchronization, but if you have a reason to disable it (for example, when comparing shader performance) you can call glfwSwapInterval(0) instead.
If you want to sync your render loop to the monitor's refresh rate, you have to call it; with a swap interval of 0, the loop renders as many frames as possible, which is why it saturates a CPU core.
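For reference, here is a minimal C sketch of the same loop with v-sync enabled; the LWJGL bindings in the question expose the same GLFW functions under the same names. The key detail is that glfwSwapInterval() affects the current context, so call it after glfwMakeContextCurrent():
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return -1;

    GLFWwindow* window = glfwCreateWindow(1200, 800, "OpenGL", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return -1;
    }

    glfwMakeContextCurrent(window);
    glfwSwapInterval(1); // wait for one vertical blank per swap (v-sync)

    glClearColor(1.f, 0.f, 0.f, 1.f);
    while (!glfwWindowShouldClose(window)) {
        glfwPollEvents();
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(window); // now paced by the monitor instead of spinning
    }

    glfwTerminate();
    return 0;
}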
Related
I have an application which renders a 3D object using OpenGL, allowing the user to rotate, zoom, and inspect the object. Currently this is driven directly by received mouse messages (it's a Windows MFC MDI application). When a mouse movement is received, the viewing matrix is updated, the scene is re-rendered into the back buffer, and then SwapBuffers is called. For a spinning view, I start a 20 ms timer and render the scene on the timer, with small updates to the viewing matrix each frame. This works, but it is not perfectly smooth: it sometimes pauses or skips frames, and it is not linked to vsync. I would love to make the rendering smoother and smarter.
It's not like a game where it needs to be rendered every frame though. There are long periods where the object is not moved, and does not need to be re-rendered.
I have come across the GLFW library and the glfwSwapInterval function. Is this a commonly used solution?
Should I create a separate thread for the render loop, rather than being message/timer driven?
Are there other solutions I should investigate?
Are there any good references for how to structure a suitable render loop? I'm OK with all the rendering code - just looking for a better structure around the rendering code.
So, I take it you are using GLFW to create and manage your window.
If you don't have to update your window every frame, I suggest using glfwWaitEvents() or glfwWaitEventsTimeout(). The first one puts the calling thread (not the window) to sleep until any event happens (a mouse press, a resize event, etc.). The second one is similar, but lets you specify a timeout for the sleep: it waits until an event happens or until the specified time runs out.
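A minimal sketch of such an event-driven loop, assuming GLFW 3.2+ for glfwWaitEventsTimeout(); redrawWanted and renderScene() are placeholders for your own state flag and drawing code:
while (!glfwWindowShouldClose(window)) {
    // Sleep until an event arrives, or for at most half a second.
    glfwWaitEventsTimeout(0.5);

    if (redrawWanted) {          // set from your input/resize callbacks
        renderScene();           // your own drawing code
        glfwSwapBuffers(window);
        redrawWanted = 0;
    }
}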
As for glfwSwapInterval(), it is probably not the solution you are looking for. This function sets the number of screen refreshes the video card waits for when glfwSwapBuffers() is called.
If you call glfwSwapInterval(1) (assuming you have a valid OpenGL context current), buffer swaps are synchronized with your monitor's refresh rate (commonly referred to as v-sync, though I'm not sure that's strictly the right term).
If you call glfwSwapInterval(0), that synchronization is switched off and glfwSwapBuffers() swaps the buffers immediately, without waiting.
If you call glfwSwapInterval(2), glfwSwapBuffers() waits for two refreshes before presenting the frame. So if your display runs at 60 Hz, glfwSwapInterval(2) caps your program at 30 fps (assuming you present frames with glfwSwapBuffers()).
glfwSwapInterval(3) gives you 20 fps, glfwSwapInterval(4) gives 15 fps, and so on.
As for a separate render thread: it is a good idea if you want to separate your "thinking" and rendering work, but it comes with its own advantages, disadvantages, and difficulties. Tip: some window events can't be handled properly without a separate thread (see this question).
The usual render loop looks like this (as far as I've learned from the LearnOpenGL lessons):
// Setup process before...
while (!window_has_to_close)  // <-- Run the game loop until the window is marked "should close".
                              //     In GLFW this is done with glfwWindowShouldClose():
                              //     https://www.glfw.org/docs/latest/group__window.html#ga24e02fbfefbb81fc45320989f8140ab5
{
    // Prepare for handling input events (e.g. callbacks in GLFW)
    prepare();

    // Handle events (if there are none, this is just skipped)
    glfwPollEvents();  // <-- You can also use glfwWaitEvents()

    // "Thinking" step of your program
    tick();

    // Clear the window framebuffer (better to put this in a separate function too)
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);

    // Render everything
    render();

    // Swap buffers (you can also put this in a separate function)
    glfwSwapBuffers(window);  // <-- Present the framebuffer on screen
}
// Exiting operations after...
See this ("Ready your engines" part) for additional info. Wish you luck!
My program creates many vertex buffers just after startup, as soon as vertex data is loaded over the network, and then occasionally deletes or creates vertex buffers during its hot loop. It works as expected almost always, but sometimes, on some machines, buffer creation in the hot loop produces zero names.
It doesn't look like an invalid-state problem, because that would show up much earlier. Also, the documentation and the spec are not clear about this kind of error. Does it mean the implementation ran out of buffer names?
I also found this thread. The original poster says that initializing the names before passing them to glGenBuffers fixed his problem. Is it really necessary to initialize those values?
Since it seems to work on some machines, glGenBuffers returning 0 could be caused by an improperly set up context. Here, davek20 had the same problem with glGenBuffers and solved it by fixing his incorrect context setup.
As stated on the GLFW 'Getting started' page, under 'Creating a window and context':
"If the required minimum version is not supported on the machine, context (and window) creation fails."
These machines of yours might have correct drivers but may simply not support some or all of the OpenGL versions you request, as the documentation states.
If you are using GLFW_CONTEXT_VERSION_MAJOR and GLFW_CONTEXT_VERSION_MINOR, consider changing these. I also recommend checking whether window creation returns NULL (0).
Example from GLFW's documentation page:
GLFWwindow* window;

if (!glfwInit())
    return -1;

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

window = glfwCreateWindow(960, 540, "OpenGL", NULL, NULL);
if (!window)
{
    glfwTerminate();
    return -1;
}
Summary:
An OpenGL context is created successfully on the development computer, but when trying to distribute the application, the screen only shows black. What kind of issues need to be considered when distributing an OpenGL application?
Details:
I am using SDL2 to create an OpenGL 3.1 context. The context has to be at least version 3.1 for the application to work.
I have not thoroughly tested the issue, so I do not have details such as which graphics cards are in use. However, I am more interested in the general question asked in the summary: what needs to be considered when distributing an OpenGL application?
Here is the context creation code.
// CREATE SDL
U32 flags = 0;
flags |= SDL_INIT_VIDEO;
flags |= SDL_INIT_EVENTS;
if (!SDL_WasInit(0)) // Make sure SDL is initialized.
    SDL_Init(0);
CHECK(!SDL_InitSubSystem(flags));

// SET OPENGL ATTRIBUTES
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, config.glVersionMajor);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, config.glVersionMinor);
if (config.glCoreProfile)
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
else
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
//SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, config.glDepthBuffer);
SDL_GL_SetSwapInterval(0);

// CREATE WINDOW
flags = SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN;
if (config.fullscreen)
    flags = flags | SDL_WINDOW_FULLSCREEN_DESKTOP;
else if (config.maximized)
    flags = flags | SDL_WINDOW_MAXIMIZED;
if (config.resizable)
    flags = flags | SDL_WINDOW_RESIZABLE;

mainWindow = SDL_CreateWindow(config.programName, SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                              config.windowWidth, config.windowHeight, flags);
SDL_GetWindowSize(mainWindow, (int*)&windowWidth, (int*)&windowHeight);
CHECK(mainWindow != NULL);

// CREATE OPENGL CONTEXT
mainContext = SDL_GL_CreateContext(mainWindow);
CHECK(mainContext != NULL);

// INIT GLEW
#ifdef _WIN32
CHECK(GLEW_OK == glewInit());
#endif

glEnable(GL_DEPTH_TEST);
glViewport(0, 0, windowWidth, windowHeight);
glClearColor(0, 0, 0, 1);
//glEnable(GL_PRIMITIVE_RESTART);
glEnable(GL_CULL_FACE);
//glPrimitiveRestartIndex(0xFFFFFFFF);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

TTF_Init();
Make sure you know what your application depends on, and demand it from the platform. Saying "core profile" means little, in my experience. Better to query each and every extension your application needs and shut the application down (gracefully, and kindly in the eyes of the user) if something is missing. And extensions are not everything: check the maximum sizes of all buffers too. I have real-life experience with that one.
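A rough sketch of such a startup check, reusing the SDL/GLEW setup from the question; the specific extension and minimum texture size below are made-up examples, not actual requirements of that code:
// After SDL_GL_CreateContext() and glewInit() have succeeded:
GLint maxTexSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);
if (maxTexSize < 4096) {
    fprintf(stderr, "GL_MAX_TEXTURE_SIZE is only %d\n", maxTexSize);
    // show a friendly error dialog and exit gracefully
}

if (!SDL_GL_ExtensionSupported("GL_ARB_vertex_array_object")) {
    fprintf(stderr, "Required extension GL_ARB_vertex_array_object is missing\n");
    // show a friendly error dialog and exit gracefully
}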
Never rely on standard compliance. Yes, the standard says that GL_DEPTH_TEST is initially disabled. And no, thou shalt never rely on the driver actually complying with that rule. And yes, that was a real-life scenario.
Run proper tests on a variety of hardware. Vendors implement drivers differently: some may think a negative vertex index is perfectly fine, some may not. Real-life experience there as well...
Never accept a silent OpenGL error. "Something went wrong, and it is probably nothing to worry about. For this hardware. For this driver. For this version. For this OS. Maybe."
Do the math. The OpenGL standard is not very strict about floating-point precision, and operations with undefined behavior, such as division by zero or anything involving NaN, are never something you want to rely on.
Read the standard. Yes, it is a pain, but trust me, it will pay off. You won't feel it, of course, because you will never experience the problems you would otherwise have had.
As a side note, nobody really follows this practice. Everybody codes first and then debugs forever.
I was considering using GLFW in my application while developing on a Mac.
After successfully writing a very simple program to render a triangle on a colored background, I noticed that when resizing the window it takes quite some time to re-render the scene, which I suspect is due to the framebuffer resize.
This is not the case when I repeat the experiment with NSOpenGLView. Is there a way to hint GLFW to use a bigger framebuffer size at startup, to avoid expensive resizes?
I am using GLFW 3.
Could you also help me with enabling high DPI for a retina display? I couldn't find anything about it in the docs, but it is supported in version 3.
Obtaining a larger framebuffer
Try to obtain a large initial framebuffer by calling glfwCreateWindow() with large values for width and height, and immediately switching to a smaller window using glfwSetWindowSize() with the actual initial window size desired.
Alternatively, register your own framebuffer size callback function using glfwSetFramebufferSizeCallback() and set the viewport to whatever size you require, as follows:
void custom_fbsize_callback(GLFWwindow* window, int width, int height)
{
    /* use the system width, height */
    /* glViewport(0, 0, width, height); */

    /* use a custom width, height */
    glViewport(0, 0, <CUSTOM_WIDTH>, <CUSTOM_HEIGHT>);
}
UPDATE:
The render pipeline stall seen during window resize (and window drag) operations is due to the blocking behavior implemented in the window manager.
To mitigate this in your app, install handler functions for the relevant window messages and run the render pipeline in a separate thread, independent of the main app (GUI) thread.
High DPI support
The GLFW documentation says:
"GLFW now supports high-DPI monitors on both Windows and OS X, giving windows full resolution framebuffers where other UI elements are scaled up. To achieve this, glfwGetFramebufferSize() and glfwSetFramebufferSizeCallback() have been added. These work with pixels, while the rest of the GLFW API work with screen coordinates."
AFAIK, that seems to be pretty much everything about high-DPI in the documentation.
Going through the code, we can see that on Windows GLFW hooks into SetProcessDPIAware() and calls it during platformInit. Currently I am not able to find any similar code for high-DPI support on the Mac.
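In practice, the main thing to get right on a retina display is to size the viewport in pixels rather than in screen coordinates, along these lines (a sketch, assuming window is your GLFWwindow*):
int fbWidth, fbHeight;
glfwGetFramebufferSize(window, &fbWidth, &fbHeight);  // pixels, not screen coordinates
glViewport(0, 0, fbWidth, fbHeight);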
I've looked at a ton of articles and SO questions about OpenGL not drawing, common mistakes, etc. This one is stumping me.
I've tried several different settings for glOrtho, different vertex positions, colors, etc., all to no avail.
I can confirm the OpenGL state is valid because the clear color is purple in the code (meaning the window is purple). gDEBugger is also confirming frames are being updated (so is Fraps).
Here is the code. Lines marked "didn't help" were not there originally; they are things I tried that made no difference.
QTWindow::QTWindow()
{
    // Enable mouse tracking
    this->setMouseTracking(true);
}

void QTWindow::initializeGL()
{
    // DEBUG
    debug("Init'ing GL");

    this->makeCurrent();     ///< Didn't help
    this->resizeGL(0, 0);    ///< Didn't help
    glDisable(GL_CULL_FACE); ///< Didn't help

    glClearColor(1, 0, 1, 0);
}

void QTWindow::paintGL()
{
    // DEBUG
    debug("Painting GL");

    this->makeCurrent();     ///< Didn't help

    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT);

    glColor3f(0, 1, 1);
    glBegin(GL_TRIANGLES);
    glVertex2f(500, 100);
    glVertex2f(100, 500);
    glVertex2f(0, 0);
    glEnd();

    this->swapBuffers();     ///< Didn't help
}

void QTWindow::resizeGL(int width, int height)
{
    // DEBUG
    debug("Resizing GL");

    this->makeCurrent();     ///< Didn't help

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 1000, 0, 1000, -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
The triangle is not being displayed at all, even with culling turned off. However, all three debug messages are printed exactly when they should be.
What am I missing?
Try calling the glViewport() function at the very beginning of the QTWindow::resizeGL() function:
glViewport(0, 0, width, height);
And don't ever call resizeGL() with width and height set to 0 ;) Besides that, it is not necessary to call resizeGL() directly, as Qt calls it for you whenever the window is resized.
You can remove all calls to the swapBuffers() function - it is called internally by Qt.
The makeCurrent() function should be called before all other GL calls, so it is good that you call it in initializeGL(), but you don't have to call it in paintGL() (unless paintGL() is called from another thread, but I bet it isn't in your code).
The issue ended up being versions. The version string returned by glGetString(GL_VERSION) indicated that a 4.2 compatibility context was being used.
Since the immediate-mode triangle calls in the paintGL method were removed in 3.1 (if I recall correctly), it was obvious why they weren't drawing anything. Furthermore, no errors were being thrown because the context was in compatibility mode.
Because I couldn't get the version below 3.0 on the QGLWidget (due to the fact that Qt requires 2.1, as I was informed on another message board), I set the version to 3.0, tried some 3.0 drawing calls, and it ended up working.
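For anyone running into the same thing, one way to pin the version up front is to pass a QGLFormat to the QGLWidget base class. A sketch, assuming Qt 4.7+ (where QGLFormat::setVersion() is available) and a QTWindow constructor that forwards the format to QGLWidget:
QGLFormat fmt;
fmt.setVersion(3, 0);                  // request an OpenGL 3.0 context
QTWindow* window = new QTWindow(fmt);  // hypothetical ctor: QTWindow(const QGLFormat&)
                                       // which passes fmt on to QGLWidget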