My program creates many vertex buffers just after startup, as soon as vertex data is loaded over a network, and then occasionally deletes or creates vertex buffers during a hot loop. It works as expected almost always, but sometimes, on some machines, buffer creation in the hot loop produces zero names.
It doesn't look like an invalid state, because that would fire much earlier. Also, the documentation and the spec are not clear about this type of error. Does it mean the implementation ran out of buffer names?
I also found this thread. The original poster says that initializing the names before passing them to glGenBuffers fixed his problem. Is it necessary to initialize those values?
Since it seems to work on some machines, glGenBuffers returning 0 could be the result of an improperly set up context. Here, davek20 had the same problem with glGenBuffers and solved it by fixing his incorrect context setup.
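Before digging into the context itself, it can also help to check for errors right at the call site. A minimal diagnostic sketch (this is illustrative code, assuming a GL loader header and <cstdio> are already included; the zero-initialization mirrors what the linked thread suggested):
GLuint buffer = 0; // zero-initialize so a silent failure is easy to spot
glGenBuffers(1, &buffer);
GLenum err = glGetError();
if (buffer == 0 || err != GL_NO_ERROR) {
    // A generated name is never 0; if the name is still 0 or an error is set,
    // suspect the context (e.g. none current on this thread), not name exhaustion.
    fprintf(stderr, "glGenBuffers failed: name=%u, error=0x%x\n", buffer, err);
}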
As stated on GLFW's 'Getting started' page, under 'Creating a window and context':
"If the required minimum version is not supported on the machine, context (and window) creation fails."
These machines might have correct drivers installed yet still not support the OpenGL version you request, as the documentation describes.
If you are using GLFW_CONTEXT_VERSION_MAJOR and GLFW_CONTEXT_VERSION_MINOR, consider changing them (for example, requesting a lower version). I also recommend checking whether window creation returns NULL.
Example from GLFW's documentation page:
GLFWwindow* window;

// Initialize the library
if (!glfwInit())
    return -1;

// Request an OpenGL 3.3 core profile context
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

// Create the window; this fails if the requested version is unsupported
window = glfwCreateWindow(960, 540, "OpenGL", NULL, NULL);
if (!window)
{
    glfwTerminate();
    return -1;
}
Related
I ran my program twice. The first time with glfwSwapInterval(1) and everything was just fine.
The second time without glfwSwapInterval(1) and it was using 100% of my CPU.
My question: Is this normal, and do I really have to call glfwSwapInterval(1) for my program to run properly?
The code:
glfwInit();
long window = glfwCreateWindow(1200, 800, "OpenGL", 0, 0);
glfwShowWindow(window);
glfwMakeContextCurrent(window);
GL.createCapabilities();
glClearColor(1, 0, 0, 1);
while (!glfwWindowShouldClose(window)) {
    glfwPollEvents();
    glClear(GL_COLOR_BUFFER_BIT);
    glfwSwapBuffers(window);
}
glfwTerminate();
The GLFW documentation mentions: "glfwSwapInterval is not called during context creation, leaving the swap interval set to whatever is the default on that platform."
So essentially, whether synchronization is enabled by default depends on your platform. For example, on my machine the default seems to be the opposite of what you were seeing.
In most cases, you'll want to call glfwSwapInterval(1) to enable sync, but if you have a reason to disable it (for example, if you're comparing shader performance) you can also call glfwSwapInterval(0).
If you want to sync your rendering loop to the monitor's refresh rate, you have to call it explicitly. Without vsync, the loop renders as many frames as possible, which is exactly why it pegs a CPU core at 100%.
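For completeness, a minimal C++ sketch of where the call belongs: glfwSwapInterval affects whichever context is current at the time of the call, so it must come after glfwMakeContextCurrent (the same ordering applies to the LWJGL code above):
glfwMakeContextCurrent(window); // the swap interval applies to the current context
glfwSwapInterval(1);            // 1 = wait for one vertical blank per swap (vsync)

while (!glfwWindowShouldClose(window)) {
    glfwPollEvents();
    glClear(GL_COLOR_BUFFER_BIT);
    glfwSwapBuffers(window); // with vsync on, this blocks until the next refresh
}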
I am currently writing a game using C++, OpenGL and GLFW. I would like to allow users to change the number of samples the game uses for antialiasing, since users with old systems might want to disable antialiasing altogether for performance reasons.
The problem is that GLFW_SAMPLES is a window-creation hint, which means that it's applied when a window is created:
// Use 4 samples for antialiasing
glfwWindowHint(GLFW_SAMPLES, 4);
// The hint above is applied to the window that's created below
GLFWwindow* myWindow = glfwCreateWindow(widthInPix, heightInPix, title.c_str(), glfwGetPrimaryMonitor(), nullptr);
// Disable antialiasing
// This hint is not applied to the previously created window
glfwWindowHint(GLFW_SAMPLES, 0);
The GLFW documentation doesn't contain any information about how to change the number of samples of an existing window. Has anyone faced this problem in the past?
No, you must create a new window and destroy the old one, preferably with the two contexts sharing objects so that non-container objects won't be lost in the shuffle.
Alternatively, you can create multisampled textures or renderbuffers, render to an FBO, and then blit the rendered data to a non-multisampled window. That way, you have complete control over the number of samples, and you can easily destroy and recreate such images at your leisure.
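A rough sketch of the FBO approach (names like msaaFBO and the sizes are placeholders; error checking and the depth attachment are omitted for brevity):
const GLsizei samples = 4;            // hypothetical user-selected sample count
const int width = 1280, height = 720; // hypothetical framebuffer size

GLuint msaaFBO = 0, msaaColor = 0;
glGenFramebuffers(1, &msaaFBO);
glGenRenderbuffers(1, &msaaColor);

// Multisampled color buffer; add a depth renderbuffer the same way if needed
glBindRenderbuffer(GL_RENDERBUFFER, msaaColor);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8, width, height);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFBO);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaColor);

// ... render the scene into msaaFBO ...

// Resolve the multisampled image into the default (window) framebuffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
When the user changes the sample count, only the renderbuffer needs to be destroyed and recreated, not the window.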
I am trying to understand what GLFW_CONTEXT_VERSION_MAJOR and GLFW_CONTEXT_VERSION_MINOR are, and what exactly these calls do:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
And it seems to me that first of all I must find out what a context is. The documentation's explanation looks too complicated and doesn't even give a definition, so I can't understand what it is or what it's for.
The documentation states in the first sentence under the heading 'Context objects': "A window object encapsulates both a top-level window and an OpenGL or OpenGL ES context."
So it is an OpenGL/OpenGL ES context. These functions set the minimum OpenGL/OpenGL ES version required for the context that is created together with the window.
In your example above, GLFW will try to create an OpenGL 3.3 context for that window.
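To see which version you actually received, you can query the created context; a small sketch, assuming window creation succeeded and a loader such as glad is initialized:
glfwMakeContextCurrent(window);
// These queries are available on OpenGL 3.0+ contexts
GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);
printf("Context version: %d.%d (%s)\n", major, minor,
       (const char*)glGetString(GL_VERSION));
Note that the driver is allowed to give you a newer context than the one you requested, as long as it is compatible with it.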
Summary:
An OpenGL context is created successfully on the development computer, but when the application is distributed to other machines, the screen only shows black. What kinds of issues need to be considered when distributing an OpenGL application?
Details:
I am using SDL2 to create an OpenGL 3.1 context. The context has to be at least version 3.1 for the application to work.
I have not thoroughly tested the issue, so I do not have information such as the graphics cards in use. However, I am more interested in the general question asked in the summary about what needs to be considered when distributing an OpenGL application.
Here is the context creation code.
// CREATE SDL
U32 flags = 0; // must be zero-initialized before OR-ing flags in
flags |= SDL_INIT_VIDEO;
flags |= SDL_INIT_EVENTS;
if(!SDL_WasInit(0)) // Make sure SDL is initialized.
    SDL_Init(0);
CHECK(!SDL_InitSubSystem(flags));

// SET OPENGL ATTRIBUTES
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, config.glVersionMajor);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, config.glVersionMinor);
if(config.glCoreProfile)
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
else
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
//SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, config.glDepthBuffer);

// CREATE WINDOW
flags = SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN;
if(config.fullscreen)
    flags = flags | SDL_WINDOW_FULLSCREEN_DESKTOP;
else if(config.maximized)
    flags = flags | SDL_WINDOW_MAXIMIZED;
if(config.resizable)
    flags = flags | SDL_WINDOW_RESIZABLE;
mainWindow = SDL_CreateWindow(config.programName, SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                              config.windowWidth, config.windowHeight, flags);
CHECK(mainWindow != NULL);
SDL_GetWindowSize(mainWindow, (int*)&windowWidth, (int*)&windowHeight);

// CREATE OPENGL CONTEXT
mainContext = SDL_GL_CreateContext(mainWindow);
CHECK(mainContext != NULL);
SDL_GL_SetSwapInterval(0); // needs a current context, so call it after creation

// INIT GLEW
#ifdef _WIN32
CHECK(GLEW_OK == glewInit());
#endif

glEnable(GL_DEPTH_TEST);
glViewport(0,0,windowWidth,windowHeight);
glClearColor(0,0,0,1);
//glEnable(GL_PRIMITIVE_RESTART);
glEnable(GL_CULL_FACE);
//glPrimitiveRestartIndex(0xFFFFFFFF);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

TTF_Init();
Make sure you know what your application depends on, and demand it from the platform. Saying "core profile" means little in my experience. Better to query each and every extension your application needs and shut the application down (gracefully, and kindly in the eyes of the user) if something is missing. And extensions are not everything: check the maximum sizes of all buffers, too. Of that I have real-life experience. (A sketch of this kind of startup validation follows at the end of this answer.)
Never rely on standard compliance. Yes, the standard says that GL_DEPTH_TEST is initially disabled. And no, thou shalt never rely on the driver being compliant with that rule. And yes, that was a real-life scenario.
Run proper tests on a variety of hardware. Vendors implement drivers differently. Some may think a negative vertex index is perfectly fine. Some may not. Real-life experience there as well...
Never accept a silent OpenGL error. "Something went wrong and it is probably nothing to worry about. For this hardware. For this driver. For this version. For this OS. Maybe."
Do the math. The OpenGL standard does not guarantee strict floating-point precision, and the behavior of operations with undefined results, such as division by zero or anything involving NaN, is never something you want to rely on.
Read the standard. Yes, it is a pain, but trust me, it will pay off. You won't feel it, of course, since you will never experience the problems you never have.
As a side note, nobody really follows this practice. Everybody codes first and then debugs forever.
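To make the extension and limit checks above concrete, here is a hedged sketch of startup validation (the extension name and the 4096 limit are placeholders, not a complete list for any particular application; assumes a GL 3.0+ context, an initialized loader, and <cstring>/<cstdio>):
// Verify a required extension is present (GL 3.0+ style query)
GLint numExtensions = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);
bool hasAnisotropic = false; // GL_EXT_texture_filter_anisotropic is just an example
for (GLint i = 0; i < numExtensions; ++i) {
    const char* ext = (const char*)glGetStringi(GL_EXTENSIONS, i);
    if (strcmp(ext, "GL_EXT_texture_filter_anisotropic") == 0)
        hasAnisotropic = true;
}

// Check implementation limits against what the application needs
GLint maxTexSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);
if (!hasAnisotropic || maxTexSize < 4096) {
    // tell the user what is missing and shut down gracefully
}

// Never accept a silent error: drain the error queue at key points
for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
    fprintf(stderr, "GL error: 0x%04x\n", err);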
I'm following a tutorial which uses glfwWindowHint() to set the version of the OpenGL context it creates. He's on OSX and I'm on Windows. I have the exact same code as his. When I do this:
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
And then this:
GLFWwindow* window = glfwCreateWindow(640, 360, "Modern OpenGL", NULL, NULL);
It always returns NULL. But in the tutorial he said that setting the window hints was necessary for the rest of his code to work. When I take out the window hints, the window is created successfully, but then it crashes (probably because of the other code that required those hints).
I'm on Windows XP. How do I fix this?
Interestingly, this code should always return NULL on OSX, as OSX supports OpenGL >=3 only in core profiles, while this code requests a compatibility profile.
On Windows, a compatibility profile of that version might be supported, but this depends on your GPU and the drivers you have installed. It might very well be that your system simply does not support GL 3.2. Make sure you are using recent drivers, and check which GL version your GPU actually supports. One thing you could try is setting the GLFW_OPENGL_FORWARD_COMPAT hint to GL_FALSE.
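If you want to see what the machine can actually give you, one approach is to retry with relaxed hints and inspect the result. A sketch, assuming glfwInit() has already succeeded (the fallback path is illustrative, not something the tutorial prescribes):
// Try a 3.2 context without the forward-compat flag first
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_FALSE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
GLFWwindow* window = glfwCreateWindow(640, 360, "Modern OpenGL", NULL, NULL);
if (!window) {
    // Last resort: take whatever default context the driver offers and inspect it
    glfwDefaultWindowHints();
    window = glfwCreateWindow(640, 360, "Modern OpenGL", NULL, NULL);
}
if (window) {
    glfwMakeContextCurrent(window);
    printf("GL_VERSION: %s\n", (const char*)glGetString(GL_VERSION));
}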