glGenBuffers - GL_INVALID_OPERATION - opengl

I'm having an issue whenever I call glGenBuffers on Windows. Whenever I call glGenBuffers, or any other 3.2-or-above function, OpenGL returns an INVALID_OPERATION error. After reading around the web, this is probably caused by not having up-to-date function pointers for 3.2 on Windows. From everything I have read, you must acquire the function pointers at runtime by asking the Windows API wglGetProcAddress to return the function pointer for the current driver. This in itself isn't hard, but why reinvent the wheel? Instead, I chose to include GLEW to handle the function pointers for me.
Here is a small program to demonstrate my issue.
#define GLEW_STATIC
#include <GL\glew.h>
#include <GLFW\glfw3.h>
#include <iostream>
void PrintError(void)
{
    // Print glGetError()
}

int main()
{
    glfwInit();

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    PrintError();

    // Initialize GLEW
    glewExperimental = GL_TRUE;
    GLenum error = glewInit();
    if (error != GLEW_OK)
        std::cerr << glewGetErrorString(error) << std::endl;

    // GLEW information
    if (GLEW_VERSION_3_2)
        std::cout << "Supports 3.2.." << std::endl;

    // Try buffer
    GLuint buffer;
    glGenBuffers(1, &buffer); // INVALID_OPERATION
    PrintError();             // Error shows up

    // This is our main loop
    while (!glfwWindowShouldClose(window))
    {
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
When I run this code I get an OpenGL window. However, in my console I see that the first call to PrintError reports NO_ERROR while the second call reports INVALID_OPERATION. I have a feeling I'm overlooking some small fact, but I just can't seem to locate it on GLFW's or GLEW's webpages.
I'm currently running:
glfw-3.0.4 (32bit)
glew-1.10.0 (32bit)
**** Update ****
In response to glampert's post, I added the following code after the glewInit() call.
GLenum loop_error = glGetError();
while (loop_error != GL_NO_ERROR)
{
    switch (loop_error)
    {
    case GL_NO_ERROR:                      std::cout << "GL_NO_ERROR" << std::endl; break;
    case GL_INVALID_ENUM:                  std::cout << "GL_INVALID_ENUM" << std::endl; break;
    case GL_INVALID_VALUE:                 std::cout << "GL_INVALID_VALUE" << std::endl; break;
    case GL_INVALID_OPERATION:             std::cout << "GL_INVALID_OPERATION" << std::endl; break;
    case GL_INVALID_FRAMEBUFFER_OPERATION: std::cout << "GL_INVALID_FRAMEBUFFER_OPERATION" << std::endl; break;
    case GL_OUT_OF_MEMORY:                 std::cout << "GL_OUT_OF_MEMORY" << std::endl; break;
    }
    loop_error = glGetError();
}
This confirms his assumption that the invalid operation is caused by the glewInit() call.
**** Update ****
It looks like this is a known issue with GLEW & 3.2 context.
http://www.opengl.org/wiki/OpenGL_Loading_Library
After identifying GLEW as the troublemaker, I found the following posts:
OpenGL: glGetError() returns invalid enum after call to glewInit()
It seems the suggested solution is to set the experimental flag, which I'm already doing. However, the website mentions that even after doing so, there is still the possibility it will raise an invalid operation and fail to grab some function pointers.
I think at this point my best solution is to just grab my own function pointers.

It could very well be that the error you are getting is a residual error from GLFW or GLEW.
Try adding this after the library init code:
GLenum error;
while ((error = glGetError()) != GL_NO_ERROR)
{
// print(error), etc...
}
and before you attempt to call any OpenGL function, to clear the error cache.
If the error is indeed caused by GLEW, it may not necessarily be a dangerous one that stops the library from working. So I wouldn't drop the library just because of this issue, which will eventually be fixed. However, if you do decide to fetch the function pointers yourself, GLFW provides the glfwGetProcAddress function, which will allow you to do so in a portable manner.
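For example, a minimal sketch of what fetching a pointer yourself could look like (the typedef name GenBuffersFn is made up, and it assumes the 3.2 context created above is current):
// Hypothetical sketch: load glGenBuffers by hand after glfwMakeContextCurrent().
// APIENTRY matters for the calling convention on 32-bit Windows.
typedef void (APIENTRY *GenBuffersFn)(GLsizei n, GLuint* buffers);

GenBuffersFn myGenBuffers =
    (GenBuffersFn)glfwGetProcAddress("glGenBuffers");

if (myGenBuffers != nullptr)
{
    GLuint buffer;
    myGenBuffers(1, &buffer); // no GLEW involved at all
}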

Related

Initializing OpenGL in a non-graphical application

I am writing a non-graphical command-line tool which calls some OpenGL functions.
const int DEFAULT_VERSION_MAJOR = 4;
const int DEFAULT_VERSION_MINOR = 3;
const auto DEFAULT_PROFILE = QGLFormat::CoreProfile;

int main ()
{
    QGLFormat format;
    {
        format.setVersion (
            DEFAULT_VERSION_MAJOR,
            DEFAULT_VERSION_MINOR);
        format.setProfile (DEFAULT_PROFILE);
    }

    QGLContext context (format);

    // EDIT: this line is failing.
    if (false == context.isValid ())
    {
        std::cerr << "No valid OpenGL context created." << std::endl;
        return EXIT_FAILURE;
    }

    context.makeCurrent ();

    if (const GLenum err = glewInit (); GLEW_OK != err)
    {
        std::cerr
            << __PRETTY_FUNCTION__
            << ": glewInit() returned "
            << glewGetErrorString (err)
            << std::endl;
    }

    glEnable (GL_DEBUG_OUTPUT);

    // SEGMENTATION FAULT
    glDebugMessageCallback ((GLDEBUGPROC) message_callback, nullptr);
I assume this is segfaulting because the libraries are not properly initialized (function pointers not set up or whatever).
The GLEW error is Missing GL version.
This tool will need to create OpenGL objects e.g. compile shaders, but not draw anything.
What are the minimum steps to get OpenGL libraries working for a non-graphical application?
(A cross-platform solution would be nice, a Linux-only solution will be fine.)
I forgot to call QGLContext::create. That (probably) answers this question, and I'll post another question directed at the QGLContext problem.
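For reference, a minimal sketch of the missing step, reusing the format object from above (the error handling is just an example):
QGLContext context (format);

// QGLContext::create() must succeed before makeCurrent() is useful.
if (!context.create ())
{
    std::cerr << "QGLContext::create() failed." << std::endl;
    return EXIT_FAILURE;
}

context.makeCurrent ();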

GLX create GL 4.x context with GLEW crashes glXChooseFBConfig

I have problems creating a GL 4.x context using plain GLX and GLEW (1.12). None of the solutions I found here work for me.
What I am trying to do (and what I found in other questions): create a 'base' GL context, call glewInit(), then create the proper GL 4.x context using glXCreateContextAttribsARB(...). This did not work for me, since glXChooseFBConfig kept segfaulting.
I read that I have to call glxewInit first, but that didn't work. Building without GLEW_MX defined resulted in glxewInit not being available. Building with GLEW_MX defined resulted in the following compile error:
error: 'glxewGetContext' was not declared in this scope #define glxewInit() glxewContextInit(glxewGetContext())
note: in expansion of macro 'glewxInit' auto result = glxewInit()
When I omit the call to glxewInit(), the application crashes when calling glXChooseFBConfig(...).
I am rather stuck here. What is the proper way to get a GL 4.x context using plain GLX? (I cannot use GLFW or something similar; I am working on a given application, and I get a Display pointer and a Window id to work with. The window is already there.)
Thanks to the hint from Nicol I was able to fix the problem myself. Looking at the GLFW code, I did the following (it might not apply to everyone):
Create a default / old-school GL context using the given Display pointer and Window id. I also call glXQueryExtension to check for proper glX support.
After creating the default context I call glewInit, but create the function pointers to glXChooseFBConfigSGIX 'by hand'. I don't really know why I had to use the SGIX extension; I didn't bother to take a deeper look.
This way I was able to get a proper FBConfig structure to call glXCreateContextAttribsARB and create a new context after destroying the old one.
Et voilà, I get my working GL 4.x context.
Thanks again to Nicol; I don't know why I didn't think of looking at other code.
Here's my solution:
XMapWindow( m_X11View->m_pDisplay, m_X11View->m_hWindow );

XWindowAttributes watt;
XGetWindowAttributes( m_X11View->m_pDisplay, m_X11View->m_hWindow, &watt );

XVisualInfo temp;
temp.visualid = XVisualIDFromVisual(watt.visual);

int n;
XVisualInfo* visual = XGetVisualInfo( m_X11View->m_pDisplay, VisualIDMask, &temp, &n );

int n_elems = 0;

if (glXQueryExtension(m_X11View->m_pDisplay, NULL, NULL))
{
    // create dummy base context to init glew, create proper 4.x context
    m_X11View->m_GLXContext = glXCreateContext( m_X11View->m_pDisplay, visual, 0, true );
    glXMakeCurrent( m_X11View->m_pDisplay, m_X11View->m_hWindow, m_X11View->m_GLXContext );

    // some debug output stuff
    std::cerr << "GL vendor: " << glGetString(GL_VENDOR) << std::endl;
    std::cerr << "GL renderer: " << glGetString(GL_RENDERER) << std::endl;
    std::cerr << "GL Version: " << glGetString(GL_VERSION) << std::endl;
    std::cerr << "GLSL version: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;

    int glx_version_major;
    int glx_version_minor;
    if (glXQueryVersion(m_X11View->m_pDisplay, &glx_version_major, &glx_version_minor))
    {
        if (ExtensionSupported(m_X11View->m_pDisplay, screen, "GLX_SGIX_fbconfig"))
        {
            int result = glewInit();
            if (GLEW_OK == result)
            {
                std::cerr << "GLEW init successful" << std::endl;

                PFNGLXGETFBCONFIGATTRIBSGIXPROC GetFBConfigAttribSGIX = (PFNGLXGETFBCONFIGATTRIBSGIXPROC)
                    glXGetProcAddress( (GLubyte*)"glXGetFBConfigAttribSGIX");
                PFNGLXCHOOSEFBCONFIGSGIXPROC ChooseFBConfigSGIX = (PFNGLXCHOOSEFBCONFIGSGIXPROC)
                    glXGetProcAddress( (GLubyte*)"glXChooseFBConfigSGIX");
                PFNGLXCREATECONTEXTWITHCONFIGSGIXPROC CreateContextWithConfigSGIX = (PFNGLXCREATECONTEXTWITHCONFIGSGIXPROC)
                    glXGetProcAddress( (GLubyte*)"glXCreateContextWithConfigSGIX");
                PFNGLXGETVISUALFROMFBCONFIGSGIXPROC GetVisualFromFBConfigSGIX = (PFNGLXGETVISUALFROMFBCONFIGSGIXPROC)
                    glXGetProcAddress( (GLubyte*)"glXGetVisualFromFBConfigSGIX");

                int gl_attribs[] = {
                    GLX_CONTEXT_MAJOR_VERSION_ARB, 4,
                    GLX_CONTEXT_MINOR_VERSION_ARB, 4,
                    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
                    //GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_DEBUG_BIT_ARB,
                    0
                };

                GLXFBConfigSGIX* configs = ChooseFBConfigSGIX( m_X11View->m_pDisplay, screen, NULL, &n_elems );
                glXDestroyContext( m_X11View->m_pDisplay, m_X11View->m_GLXContext );
                m_X11View->m_GLXContext = glXCreateContextAttribsARB( m_X11View->m_pDisplay, configs[0], 0, true, gl_attribs );
                glXMakeCurrent( m_X11View->m_pDisplay, m_X11View->m_hWindow, m_X11View->m_GLXContext );

                /*
                glDebugMessageCallback( GLDebugLog, NULL );
                // setup message control
                // disable everything
                // enable errors only
                glDebugMessageControl( GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, 0, GL_FALSE );
                glDebugMessageControl( GL_DEBUG_SOURCE_API, GL_DEBUG_TYPE_ERROR, GL_DONT_CARE,
                                       0, 0, GL_TRUE );
                */
            }
            else
            {
                std::cerr << "GLEW init failed: " << glewGetErrorString(result) << std::endl;
            }
        }
    }
}
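For comparison, here is a rough sketch of what the non-SGIX route (GLX 1.3+) might look like, bypassing GLEW's glX wrappers entirely; display, screen, window and gl_attribs stand in for the members used above, and the error handling is minimal:
// Hypothetical non-SGIX sketch: glXChooseFBConfig is exported directly by GLX 1.3+,
// so only glXCreateContextAttribsARB has to be fetched at runtime.
typedef GLXContext (*CreateContextAttribsFn)(Display*, GLXFBConfig, GLXContext, Bool, const int*);

CreateContextAttribsFn CreateContextAttribs = (CreateContextAttribsFn)
    glXGetProcAddress( (const GLubyte*)"glXCreateContextAttribsARB" );

int fb_count = 0;
GLXFBConfig* fb_configs = glXChooseFBConfig( display, screen, NULL, &fb_count );

if (CreateContextAttribs && fb_configs && fb_count > 0)
{
    GLXContext ctx = CreateContextAttribs( display, fb_configs[0], 0, True, gl_attribs );
    glXMakeCurrent( display, window, ctx );
}
if (fb_configs)
    XFree( fb_configs );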

Why am I not getting a forward compatible OpenGL context with GLFW?

To my knowledge, when I set these context constraints on GLFW:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
I should get the maximum available OpenGL context on the running machine, provided that it's above OpenGL 3.3. However, by using glGetString to get the OpenGL context version, I am finding that this is not the case. Every time I query glGetString for the context version, I only get the major and minor versions I set with glfwWindowHint, nothing above. Keep in mind, my GPU supports OpenGL 4.5.
Another thing to note: when I set no constraints whatsoever, I do in fact get an OpenGL 4.5 context.
Here is the full source code, which seems to replicate the problem:
#define GLEW_STATIC
#include <iostream>
#include <GL\glew.h>
#include <GLFW\glfw3.h>
#include <glm\glm.hpp>
int main(int argc, char* argv[])
{
    if (!glfwInit())
    {
        std::cerr << "Failed to initialize GLFW 3.0.4" << std::endl;
        getchar();
        glfwTerminate();
        return -1;
    }
    std::cout << "Initialized GLFW 3.0.4" << std::endl;

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* m_window = glfwCreateWindow(640, 480, "Koala", NULL, NULL);
    if (!m_window)
    {
        std::cerr << "Failed to create OpenGL 3.3+ context" << std::endl;
        getchar();
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(m_window);
    std::cout << "Created OpenGL 3.3+ context" << std::endl;

    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK)
    {
        std::cerr << "Failed to initialize GLEW 1.11.0" << std::endl;
        getchar();
        glfwTerminate();
        return -1;
    }
    std::cout << "Initialized GLEW 1.11.0" << std::endl;

    const GLubyte* renderer = glGetString(GL_RENDERER);
    const GLubyte* version = glGetString(GL_VERSION);
    std::cout << "GPU: " << renderer << std::endl;
    std::cout << "OpenGL Version: " << version << std::endl;

    while (!glfwWindowShouldClose(m_window))
    {
        glfwSwapBuffers(m_window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
I should get the maximum available OpenGL context on the running machine, provided that it's above OpenGL 3.3.
That's not the way it is defined. From the GLFW documentation:
The GLFW_CONTEXT_VERSION_MAJOR and GLFW_CONTEXT_VERSION_MINOR hints specify the client API version that the created context must be compatible with.
For OpenGL, these hints are not hard constraints, as they don't have to match exactly, but glfwCreateWindow will still fail if the resulting OpenGL version is less than the one requested.
While there is no way to ask the driver for a context of the highest supported version, most drivers provide this when you ask GLFW for a version 1.0 context.
So "most drivers" would give you what you expected, but this is not guaranteed.
The typical usage is that you specify the minimum version that supports all the features that your code uses. Then you don't care if you get exactly that version, or possibly a higher one. But you know that you will get a failure if the minimum version is not supported.
If you want to dynamically test which version is supported, your best bet is probably to first specify the highest version you can take advantage of, and test the return value of glfwCreateWindow(). If it fails, reduce the version as long as it fails, and call glfwCreateWindow() again, until you reach the minimum version you can run with. Then you can keep track of which version succeeded, or the version reported by glGetString(GL_VERSION), and use that to decide at runtime which features you can use.
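A minimal sketch of that fallback loop could look like this (the version list and window parameters are only examples):
// Try context versions from highest to lowest until glfwCreateWindow succeeds.
const int versions[][2] = { {4, 5}, {4, 3}, {4, 0}, {3, 3} };
GLFWwindow* window = nullptr;

for (const auto& v : versions)
{
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, v[0]);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, v[1]);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    window = glfwCreateWindow(640, 480, "Koala", NULL, NULL);
    if (window)
    {
        std::cout << "Got a " << v[0] << "." << v[1] << " context" << std::endl;
        break;
    }
}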
The version you receive from glGetString does not mean that functionality is capped at that version. In our experience, we receive a 3.3 context that can do everything any context from that driver can (I mean 4.2+ features). You only need to worry about the minimum version you require from the driver.

SDL image render difficulties

Hi there fellow Overflowers!
I have recently begun learning SDL. I chose Simple DirectMedia Layer as the external API to pair with my C++ knowledge because I found it to offer the most visually enhanced mechanics for game dev. Consider the code below:
#include <iostream>
#include "SDL/SDL.h"
using std::cerr;
using std::endl;
int main(int argc, char* args[])
{
    // Initialize the SDL
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
    {
        cerr << "SDL_Init() Failed: " << SDL_GetError() << endl;
        exit(1);
    }

    // Set the video mode
    SDL_Surface* display;
    display = SDL_SetVideoMode(640, 480, 32, SDL_HWSURFACE | SDL_DOUBLEBUF);
    if (display == NULL)
    {
        cerr << "SDL_SetVideoMode() Failed: " << SDL_GetError() << endl;
        exit(1);
    }

    // Set the title bar
    SDL_WM_SetCaption("SDL Tutorial", "SDL Tutorial");

    // Load the image
    SDL_Surface* image;
    image = SDL_LoadBMP("LAND.bmp");
    if (image == NULL)
    {
        cerr << "SDL_LoadBMP() Failed: " << SDL_GetError() << endl;
        exit(1);
    }

    // Main loop
    SDL_Event event;
    while (1)
    {
        // Check for messages
        if (SDL_PollEvent(&event))
        {
            // Check for the quit message
            if (event.type == SDL_QUIT)
            {
                // Quit the program
                break;
            }
        }

        // Game loop will go here...

        // Apply the image to the display
        if (SDL_BlitSurface(image, NULL, display, NULL) != 0)
        {
            cerr << "SDL_BlitSurface() Failed: " << SDL_GetError() << endl;
            exit(1);
        }

        // Update the display
        SDL_Flip(display);
    }

    // Tell the SDL to clean up and shut down
    SDL_Quit();
    return 0;
}
All I have done is make a screen surface, double-buffer it, make another surface from an image, and blit the two together, and for some reason when I build the application, it closes instantly! The build succeeds, but the program then closes without a window opening! This is really frustrating.
I am using XCode5 and SDL 2.0.3 :)
Help is needed!
EDIT: It turns out the error log says SDL_LoadBMP(): Failed to load LAND.bmp. The bmp is saved in the project root directory, the same folder as main.cpp. Why doesn't this work?
You should be able to test your code by using the absolute (full) path to the image. That will verify that the code is actually working.
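For example, something like this (the path below is only an illustration; substitute the real absolute path on your machine):
// Quick sanity check: load the image with a hard-coded absolute path.
image = SDL_LoadBMP("/Users/yourname/projects/sdl-test/LAND.bmp");
if (image == NULL)
    cerr << "SDL_LoadBMP() Failed: " << SDL_GetError() << endl;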
To be able to use resources without hard-coding an absolute path, you should create a Copy Files build phase. The Destination should be set to 'Product Directory'. You can leave Subpath blank or provide a directory to place the resource in (this will be useful when you get a lot of resources), e.g. textures. If you supply a Subpath, then alter your code accordingly so the path becomes textures/LAND.bmp.
You would also use a build phase for packaging the SDL2.framework and any others, e.g. SDL2_image, with your final application. This would allow users who don't have SDL on their machines to run the app. To do this, create another build phase with the Destination set to 'Frameworks' and leave the Subpath empty. Then just add any frameworks you want to package with the app. One other setting you will want to make in Build Settings is to change 'Runpath Search Paths' (found under 'Linking') to be @executable_path/../Frameworks so that the application knows where to find the packaged frameworks.
I have a tutorial on configuring SDL2 in Xcode along with a template to make it quick
http://zamma.co.uk/how-to-setup-sdl2-in-xcode-osx/

GLSL fails to compile, no error log

I'm trying to step into using shaders with OpenGL, but it seems I can't compile the shader. More frustratingly, it also appears the error log is empty. I've searched through this site extensively and checked many different tutorials but there doesn't seem to be an explanation for why it fails. I even bought a book dealing with shaders but it doesn't account for vanishing log files.
My feeling is that there must be an error with how I am linking the shader source.
// Create shader ('vertex' holds the shader filename, 'verttext' receives the source; both are declared elsewhere)
GLuint vertObject;
vertObject = glCreateShader(GL_VERTEX_SHADER);

// Stream
ifstream vertfile;

// Try for open - vertex
try
{
    // Open
    vertfile.open(vertex.c_str());
    if (!vertfile)
    {
        // file couldn't be opened
        cout << "Open " << vertex << " failed.";
    }
    else
    {
        getline(vertfile, verttext, '\0');
        // Test file read worked
        cout << verttext;
    }
    // Close
    vertfile.close();
}
catch (const std::ifstream::failure& e)
{
    cout << "Failed to open " << vertex;
}

// Link source
GLint const shader_length = verttext.size();
GLchar const *shader_source = verttext.c_str();
glShaderSource(vertObject, 1, &shader_source, &shader_length);

// Compile
glCompileShader(vertObject);

// Check for compilation result
GLint isCompiled = 0;
glGetShaderiv(vertObject, GL_COMPILE_STATUS, &isCompiled);
if (!isCompiled)
{
    // Did not compile, why?
    std::cout << "The shader could not be compiled\n" << std::endl;
    char errorlog[500] = {'\0'};
    glGetShaderInfoLog(vertObject, 500, 0, errorlog);
    string failed(errorlog);
    printf("Log size is %zu", failed.size());
}
printf("Compiled state = %d", isCompiled);
The shader code is as trivial as can be.
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
I can't get either my fragment or my vertex shader to compile. If I can get the error log to show, then at least I will be able to start error checking my own work. For now, though, I can't even get the errors.
It seems that the reason glCompileShader fails without a log in this case is that I attempted to compile before the OpenGL context had been initialised (glutCreateWindow etc.).
If anybody else gets this problem in the future, try printing your GL version just before you create any GLSL objects:
printf("OpenGL version is (%s)\n", glGetString(GL_VERSION));
If you get "OpenGL version is (null)", then you don't have a valid context to create your shaders with. Find where you create your context, and make sure your shader creation comes afterwards.
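As a rough illustration of that ordering, assuming GLUT plus GLEW (the headers and window title are just placeholders):
// Minimal sketch: the context must exist before any glCreateShader call.
#include <GL/glew.h>
#include <GL/glut.h>
#include <cstdio>

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("shader test");   // creates and makes current a GL context

    glewInit();                        // load entry points where needed

    std::printf("OpenGL version is (%s)\n", glGetString(GL_VERSION));

    GLuint vertObject = glCreateShader(GL_VERTEX_SHADER);  // safe to call now
    return vertObject != 0 ? 0 : 1;
}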