So this one is a doozie.
I've got a pretty large OpenGL solution, written in version 3.2 core with GLSL 1.5 in Windows 7. I am using GLEW and GLM as helper libraries. When I create a window, I am using the following lines:
// Initialize main window
glewExperimental = GL_TRUE;
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3); // Use OpenGL Core v3.2
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 2);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
if(!glfwOpenWindow(Game::WINDOW_X, Game::WINDOW_Y, 0, 0, 0, 0, 32, 0, GLFW_WINDOW))
{ ...
If I omit the three glfwOpenWindowHint calls, the application crashes my video drivers upon the call to glDrawArrays(GL_TRIANGLES, 0, m_numIndices);
But here's the kicker. When someone else in my group tries to update and run the solution, they get a blank window with no geometry. Commenting out the three lines makes the program run fine for them. There is a pretty even split between machines that work with the 3.2 core hints and machines that work without them. I haven't been able to determine any difference between nVidia, AMD, desktop, or laptop.
The best I could find was a suggestion to add glewExperimental = GL_TRUE;, as GLEW is said to have problems with core profiles. It made no difference. The solution is too big to post in full, but I can put up shaders, rendering code, etc. as needed.
Thanks so much! This has been killing us for several days now.
Try asking for a forward-compatible GLFW window:
GLFW_OPENGL_FORWARD_COMPAT - Specify whether the OpenGL context should be forward-compatible (i.e. disallow legacy functionality). This should only be used when requesting OpenGL version 3.0 or above.
And try not setting the profile hint at all, letting the system choose:
// Use OpenGL Core v3.2
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 2);
glfwOpenWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
Also, make sure that you actually get a version you want:
int major, minor, rev;
glfwGetGLVersion(&major, &minor, &rev);
fprintf(stderr, "OpenGL version received: %d.%d.%d\n", major, minor, rev);
Not sure whether you also run for Macs, but read this anyway:
A.4 OpenGL 3.0+ on Mac OS X
Support for OpenGL 3.0 and above was introduced with Mac OS X 10.7, and even then only forward-compatible OpenGL 3.2 core profile contexts are supported and there is no mechanism for requesting debug contexts. Earlier versions of Mac OS X support at most OpenGL version 2.1.
Because of this, on Mac OS X 10.7, the GLFW_OPENGL_VERSION_MAJOR and GLFW_OPENGL_VERSION_MINOR hints will fail if given a version above 3.2, the GLFW_OPENGL_DEBUG_CONTEXT and GLFW_FORWARD_COMPAT hints are ignored, and setting the GLFW_OPENGL_PROFILE hint to anything except zero or GLFW_OPENGL_CORE_PROFILE will cause glfwOpenWindow to fail.
Also, on Mac OS X 10.6 and below, the GLFW_OPENGL_VERSION_MAJOR and GLFW_OPENGL_VERSION_MINOR hints will fail if given a version above 2.1, the GLFW_OPENGL_DEBUG_CONTEXT hint will have no effect, and setting the GLFW_OPENGL_PROFILE or GLFW_FORWARD_COMPAT hints to a non-zero value will cause glfwOpenWindow to fail.
I ran into the same issue. I had to create a VAO before my VBOs, and now it works on OS X. In a core profile context there is no default VAO, so one must be bound before setting up vertex attributes and drawing.
GLuint vertex_array;
glGenVertexArrays(1, &vertex_array);
glBindVertexArray(vertex_array);
Related
I'm using openFrameworks on Windows, which uses GLFW and GLEW, and I'm having issues regarding extension availability across different GPU brands.
Basically, if I run my program on OpenGL 2, the extensions are available. But if I change to OpenGL 3.2 or higher, all the extensions become unavailable on Nvidia (tested on a GTX 1080) and Intel (UHD), but not on AMD (Vega Mobile GL/GH and RX 5700).
This translates to not being able to use GL_ARB_texture_float, and therefore my compute shaders don't work as intended.
I'm using OpenGL 4.3, for the compute shader support and for support of the Intel GPU. All drivers are up to date and all GPUs support GL_ARB_texture_float.
Also, enabling the extension in the GLSL does nothing.
This is how openFrameworks makes the context:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, settings.glVersionMajor);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, settings.glVersionMinor);
if((settings.glVersionMajor==3 && settings.glVersionMinor>=2) || settings.glVersionMajor>=4)
{
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
}
if(settings.glVersionMajor>=3)
{
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
}
Not sure what is going on, nor exactly how to search for an issue like this. Any pointers are welcome!
ARB_texture_float is an ancient extension that was incorporated into OpenGL proper in 3.0. That is, if you ask for 3.2, you can just use floating-point formats for textures. They're always available.
Furthermore, later GL versions add more floating-point formats.
Since there is no core profile of OpenGL less than version 3.2, I suspect that these implementations are simply not advertising extensions that have long since been part of core OpenGL if you ask for a core profile.
I have a slightly modified version of the sample code found on the main LWJGL page. It works, but it uses legacy OpenGL version 2.1. If I attempt to use the forward-compatible context described in the GLFW docs, the version used is 4.1 (no matter what major/minor I hint), the window is created, but it crashes on the first call to glPushMatrix().
Forward compatibility enabled like so:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
Some info I print in the console:
LWJGL 3.1.6 build 14
OS name Mac OS X
OS version 10.13.4
OpenGL version 4.1 ATI-1.66.31
Logging:
[LWJGL] A function that is not available in the current context was called.
Problematic frame:
C [liblwjgl.dylib+0x1c494]
From here I don't know what to look for. Should this code be working? Or am I missing some ceremony? Many resources are outdated, making it harder to figure things out.
glPushMatrix is not a Core Profile function; it belongs to the legacy API (OpenGL < 3.2, or the Compatibility Profile).
If you want to use it (and other pre-core features) you need a Compatibility context, not a forward-compatible one, and not a Core profile either.
The GLFW hints should be like these, without asking for a Core Profile.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
Likely the driver will give the highest available version, but with all of old features too.
I am a student who has been learning C++ and OpenGL for 5 months now, and we have touched some advanced topics over that time, starting from basic OpenGL like glBegin/glEnd, to vertex arrays, to VBOs, to shaders, etc. Our professor has made us build up our graphics engine over time from the first class, and every now and then he asks us to stop using one or another deprecated feature and move on to the newer versions.
Now, as part of the current assignment, he has asked us to get rid of everything prior to OpenGL ES 2.0. Our codebase is fairly large, and I was wondering if I could restrict OpenGL to 2.0 and above only, so that using those deprecated features would actually fail at compile time and I could make sure all of them are out of my engine.
When you initialize your OpenGL context, you can pass hints to the context to request a specific context version. For example, using the GLFW library:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
GLFWwindow* window = glfwCreateWindow(res_width, res_height, window_name, monitor, NULL);
This will fail (in the case of GLFW, it returns a NULL window) if the OpenGL library doesn't support ES 2.0. Your platform's native EGL (or WGL, GLX, AGL, etc.) functions offer this functionality.
I have set out to learn OpenGl using this tutorial.
I followed the instructions, installed the libraries and compiled the tutorial source code and when I tried to run it I got:
Failed to open GLFW window. If you have an Intel GPU, they are not 3.3
compatible. Try the 2.1 version of the tutorials.
So I checked out the FAQ on this particular issue and found some advice there, but I do not fully understand it. I have a 5-year-old laptop running Ubuntu 13.10 with a Mobile Intel® GM45 Express Chipset x86/MMX/SSE2. According to the FAQ, OpenGL 3.3 is not supported for me, yet the FAQ suggests that I learn OpenGL 3.3 anyhow.
But how can I learn it without actually running the code?
Is there a way to emulate OpenGl 3.3 somehow on older hardware?
I think the sad truth is that you have to update your hardware. It's relatively cheap on desktop computers (3.3-capable GPUs can be had for coffee money, really), but on mobile you are more limited, I guess.
The emulators available, like ANGLE or the ARM Mali one, focus mostly on ES, and the latter requires 3.2/3.3 support anyway.
That being said, you absolutely can learn OpenGL without running the code, although it's certainly less fun. Aside from GL 2.1, I'd explore WebGL too; maybe it's not cutting edge, but it's fun enough for a lot of people to dig it.
Perhaps you can set out to learn OpenGL 2.1 instead; however, I wouldn't recommend sticking with it! There are a ton of changes that happened in OpenGL 3.0, where a lot of old functionality you could use in v2.1 becomes deprecated.
Modern versions of the OpenGL specification force developers to use their 'programmable pipeline' via shader programs in order to render.
While v2.1 supports some shader features, it also retains support for the 'fixed-function pipeline' for rendering.
Personally, I started learning OpenGL through using the Java bindings for it (this may simplify things if you are using the Windows API). However, no matter which bindings you use, the OpenGL specification remains the same. All implementations of OpenGL require you to create some window/display to render to and to respond to some basic rendering events (initialization and window resize for example).
Within the fixed-function pipeline, you can make calls such as the following to render a triangle to the screen. The vertices and colors for those vertices are described within the glBegin/End block.
glBegin(GL_TRIANGLES);
glColor3d(1, 0, 0);
glVertex3d(-1, 0, 0);
glColor3d(0, 1, 0);
glVertex3d(1, 0, 0);
glColor3d(0, 0, 1);
glVertex3d(0, 1, 0);
glEnd();
Here are some links you may want to visit to learn more:
- OpenGL Version History
- Swiftless Tutorials (I highly recommend this one!)
- Lighthouse 3D (good for GLSL)
- Java OpenGL Tutorial
In GLFW I'm setting OpenGL context version via:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
However when I print it to the console after glfwMakeContextCurrent(window); and glewInit(); via:
Log::brightWhite("OpenGL version:\t");
Log::white("%s\n", glGetString(GL_VERSION));
Log::brightWhite("GLSL version:\t");
Log::white("%s\n", glGetString(GL_SHADING_LANGUAGE_VERSION));
I get the following:
Why is it 4.3 and not 2.0?
Because the implementation is free to give you any version it likes, as long as it supports everything that is in core GL 2.0. You will typically get the highest compatibility profile version the implementation supports. There is nothing wrong with that.
Note that forward- and backward-compatible contexts and profiles were only added in later GL versions, so when requesting a 1.x/2.x context, this is the behavior you should expect. Note also that on OSX, GL 3.x and above is only supported in core profile, so you will very likely end up with a 2.1 context there.