glGenVertexArrays not giving unique VAOs - OpenGL

My friend and I are working on a project using C++ and OpenGL. We've created a C++ class for a "ModelObject", and each ModelObject has a GLuint vao as a member variable. Then while initializing a ModelObject, we call
glGenVertexArrays( 1, &vao );
glBindVertexArray( vao );
and at the end of the initializing function we call
glBindVertexArray(0);
to unbind the VAO. Now we're trying to render 2 objects, a train car and a cube. On his machine (Linux Mint) they both render fine, with their own textures, and querying the objects for their VAOs returns 1 and 2 respectively.
On my machine however (a MacBook Pro), both objects render as a cube (though one has the texture of a train and the other the texture of the cube). Querying their VAOs returns 0 and 0.
As an experiment we told glGenVertexArrays to create 5 VAOs for each ModelObject. This resulted in the list 0, 1, 918273, 8, 7 (or something similar to that), and it was the same list for both ModelObjects.
So as far as I can tell the problem is that glGenVertexArrays is both a) using 0 as a valid name, and b) generating identical names on each call, even though we're never calling glDeleteVertexArrays. Why would it be doing this on my machine and not his, and how do I stop it?

Does your GPU support OpenGL 3.0? What does glview say about the entry point glGenVertexArrays? It is possible that your GPU/Driver doesn't support VAOs.

I had the same issue on an iMac. It turned out that apparently on Mac OS, you need to use glGenVertexArraysAPPLE and glBindVertexArrayAPPLE. Replacing the calls gives consistent, unique VAOs.

VAOs were introduced in OpenGL 3.0, so they will only work in contexts that support 3.0 or later.
Mac OS only supports OpenGL 3.x and 4.x in Core Profile contexts. By default, you will get a context that supports OpenGL 2.1 with all the legacy features that are deprecated, and have been removed in the Core Profile. Mac OS does not support the Compatibility Profile, where features from 3.0 and later can be used in combination with legacy features.
How you create a Core Profile context depends on the window system interface you use. For two of the common ones:
With GLUT (which is marked as deprecated itself, but still works at the moment): Add GLUT_3_2_CORE_PROFILE to the flags passed to glutInitDisplayMode() (see Glut deprecation in Mac OSX 10.9, IDE: QT Creator for details).
With Cocoa, add this attribute/value pair to the pixel format attributes:
NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core,
Once you have a Core Profile context, glGenVertexArrays() and glBindVertexArray() will work fine. This obviously requires a machine that can support at least OpenGL 3.x. The table on this page lists the version support for each machine: http://support.apple.com/kb/HT5942.
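For reference, here is a minimal GLUT-based sketch of what that looks like on Mac OS (assuming Apple's GLUT framework; the GLUT_3_2_CORE_PROFILE flag is the key part, everything else is ordinary boilerplate):
// Minimal sketch: request a 3.2 Core Profile context with GLUT on Mac OS.
// The two defines just silence Apple's deprecation / mixed-header warnings.
#define GL_SILENCE_DEPRECATION
#define GL_DO_NOT_WARN_IF_MULTI_GL_VERSION_HEADERS_ARE_INCLUDED
#include <OpenGL/gl3.h>
#include <GLUT/glut.h>
#include <cstdio>

static GLuint vao = 0;

static void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glBindVertexArray(vao);   // legal and meaningful in a 3.2 Core context
    // ... draw calls go here ...
    glBindVertexArray(0);
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    // Without GLUT_3_2_CORE_PROFILE you get a 2.1 context and, as in the
    // question, the VAO names come back as 0.
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_3_2_CORE_PROFILE);
    glutCreateWindow("core profile test");

    std::printf("GL_VERSION: %s\n", (const char*)glGetString(GL_VERSION));

    glGenVertexArrays(1, &vao);   // now returns a real, unique name
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}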

Related

What controls which interface GLAutoDrawable.getGL() returns?

In trying to work through tutorials on JOGL, I run into trouble when entering this part of the code:
@Override
public void display(GLAutoDrawable glad) {
    GL gl = glad.getGL();
    gl.glClear(GL.GL_COLOR_BUFFER_BIT);
    gl.glBegin(GL.GL_TRIANGLES);
This doesn't compile because glBegin is not a method in GL even though the online tutorials use it that way. By digging around in the JOGL javadoc I found that I could get the tutorial to work by doing this:
GL2 gl = (GL2) glad.getGL();
Then all the methods are there. My questions are:
Can I expect a different interface to be returned by getGL on different platforms? This is on MacOS 10.9. What controls what version of the interface is used?
Since it appears that the tutorials are out of date, did this work under a different version of OpenGL?
Please look at our Java documentation and our overview of OpenGL evolution. The GL interface you get depends on the GLProfile you use and on the GL interfaces your machine supports. It doesn't depend on your operating system but rather on what your graphics card supports. All methods were in GL in JOGL 1, whereas there are several GL interfaces and implementations in JOGL 2. user3256930's answer is incomplete. A desktop machine can support both OpenGL (backward- and forward-compatible profiles) and OpenGL ES. Then there are at least 3 GL implementations that you can get, and which one depends on your profile; you can call GLProfile.getMaxFixedFunc(boolean), GLProfile.getMaxProgrammable(boolean), GLProfile.getDefault(), ...
As glBegin is only in the fixed pipeline and not in OpenGL ES, you'll get it only if the GL interface you get is GL2 or another interface that extends it, for example GL4bc (bc = backward compatible).
Please post your JOGL-specific questions on our official forum instead.
Your question is very similar to Can't find GL.glColor3f in JOGL?. The reason you were not able to use glBegin without casting to GL2 is that glBegin (and many other functions) were deprecated in OpenGL 3. This is known as immediate mode, where you specify vertices between glBegin and glEnd. In OpenGL 3 and higher the way to draw primitives is to store the vertices in a buffer and draw with glDrawArrays or glDrawElements, as sketched below. You will get different versions of OpenGL on different platforms and different computers; it depends on what version of OpenGL is supported on that computer. Newer computers with newer graphics cards will be able to support the latest version of OpenGL, while older computers might be stuck on an older OpenGL version.
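To make that concrete, here is a minimal sketch of the buffer-based path in plain C-style GL (in JOGL the same entry points live on the GL2/GL3 interfaces, e.g. gl.glGenBuffers); the shader program prog is hypothetical, assumed to be already linked and to read its position from attribute 0:
// Sketch: drawing with a VBO + glDrawArrays instead of glBegin/glEnd.
#include <GL/glew.h>

// 'prog' is a hypothetical, already-linked shader program whose vertex
// attribute 0 is the position; a current GL context is assumed.
void drawTriangle(GLuint prog) {
    static const GLfloat verts[] = {
        -0.5f, -0.5f,
         0.5f, -0.5f,
         0.0f,  0.5f,
    };

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    glUseProgram(prog);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void*)0);

    // Replaces glBegin(GL_TRIANGLES) / three glVertex calls / glEnd().
    glDrawArrays(GL_TRIANGLES, 0, 3);

    glDisableVertexAttribArray(0);
    glDeleteBuffers(1, &vbo);
}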

How did I just use an OpenGL 3 feature in a 1.1 context?

I just started programming in OpenGL a few weeks ago, and as people suggested to me, I used GLFW as my window handler. I also used GLEW as my extensions handler. So I go through the whole process of making a vertex buffer with three points to draw a triangle and passing it to OpenGL to draw it, and I compile and run. No triangle draws, presumably because I didn't have any shaders. So I think to myself "Why don't I lower my OpenGL version through the context creation using GLFW?" and I did that. From OpenGL 3.3 to 1.1, and sure enough, there's a triangle. Success, I thought. Then I remembered an article saying that vertex buffers were only introduced in OpenGL 3, so how have I possibly used an OpenGL 3 feature in a 1.1 context?
The graphics driver is free to give you a context which is a different version than what you requested, as long as they are compatible. For example, you may get a v3.0 context even if you ask for a v1.1 context, as OpenGL 3.0 does not change or remove any features from OpenGL 1.1.
Additionally, oftentimes the only difference between OpenGL versions is which extensions the GPU must support. If you have a v1.1 context but ARB_vertex_buffer_object is supported, then you will still be able to use VBOs (though you may need to append the ARB suffix to the function names), as in the sketch below.
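For example, with GLEW (just one possible extension loader) the check and the fallback to the ARB entry points might look roughly like this:
// Sketch: fall back to ARB_vertex_buffer_object when the context predates
// OpenGL 1.5. Assumes glewInit() has already been called on a current context.
#include <GL/glew.h>
#include <cstddef>
#include <cstdio>

void uploadVertices(const float* data, size_t bytes, GLuint* outVbo) {
    if (GLEW_VERSION_1_5) {
        // Core entry points are available.
        glGenBuffers(1, outVbo);
        glBindBuffer(GL_ARRAY_BUFFER, *outVbo);
        glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)bytes, data, GL_STATIC_DRAW);
    } else if (GLEW_ARB_vertex_buffer_object) {
        // Old context, but the extension is there: same calls with the ARB suffix.
        glGenBuffersARB(1, outVbo);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, *outVbo);
        glBufferDataARB(GL_ARRAY_BUFFER_ARB, (GLsizeiptrARB)bytes, data, GL_STATIC_DRAW_ARB);
    } else {
        std::fprintf(stderr, "No VBO support at all\n");
        *outVbo = 0;
    }
}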

CUDA + OpenGL Interop without deprecated functionality

I've previously been able to populate textures in CUDA for use in OpenGL by:
Create and initialize the GL texture (gl::GenTextures(), etc.)
Create a GL Pixel Buffer Object
Register the PBO with CUDA
In the update/render loop:
cudaGraphicsMapResources() with the PBO
Launch the kernel to update the PBO
cudaGraphicsUnmapResources() the PBO from CUDA
Load the GL program, bind texture, render as normal
Wash, rinse, repeat (a rough sketch of this loop is below).
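// Rough sketch of those steps; names are illustrative. pboResource came from
// a one-time cudaGraphicsGLRegisterBuffer() call, pbo/tex are the GL objects,
// and the kernel launch itself is elided.
#include <GL/glew.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

void updateTextureViaPBO(cudaGraphicsResource_t pboResource,
                         GLuint pbo, GLuint tex, int width, int height) {
    // 1. Hand the PBO to CUDA and get a device pointer into it.
    cudaGraphicsMapResources(1, &pboResource, 0);
    uchar4* devPtr = nullptr;
    size_t size = 0;
    cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &size, pboResource);

    // 2. Launch the kernel that fills devPtr (elided):
    //    updateKernel<<<grid, block>>>(devPtr, width, height);

    // 3. Give the buffer back to GL.
    cudaGraphicsUnmapResources(1, &pboResource, 0);

    // 4. Copy PBO -> texture on the GL side and render as usual.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}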
However, I'm wondering if PBOs are still the best way to write a texture from a kernel. I've seen articles like this one (updated for v5 here) which don't appear to use PBOs at all.
I've seen some references to cudaTextureObject and cudaSurfaceObject, but their role in OpenGL interop is unclear to me.
Are PBOs still the recommended approach? If not, what are the alternatives I should be investigating?
(I'm specifically targeting Kepler and newer architectures.)
You can look at the official example in the CUDA 6 SDK; it's called "simpleCUDA2GL" and is in the "3_Imaging" directory.
It shows two different approaches to accessing a texture from inside a CUDA kernel.
One of them (I think the older one) uses a PBO, and it is 3 times slower on my machine.
You may want to look at this very recent CUDA GL Interop example from NVIDIA:
https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st
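For what it's worth, here is a rough sketch of the non-PBO path those samples revolve around: register the GL texture itself with CUDA and let the kernel write to it through a surface object. The name texResource is hypothetical, and the kernel (which would use surf2Dwrite) is elided.
// texResource comes from a one-time call like:
//   cudaGraphicsGLRegisterImage(&texResource, tex, GL_TEXTURE_2D,
//                               cudaGraphicsRegisterFlagsSurfaceLoadStore);
#include <GL/glew.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

void updateTextureViaSurface(cudaGraphicsResource_t texResource) {
    cudaGraphicsMapResources(1, &texResource, 0);

    // Get the cudaArray backing mip level 0 of the registered texture.
    cudaArray_t array = nullptr;
    cudaGraphicsSubResourceGetMappedArray(&array, texResource, 0, 0);

    // Wrap it in a surface object the kernel can write to.
    cudaResourceDesc desc = {};
    desc.resType = cudaResourceTypeArray;
    desc.res.array.array = array;
    cudaSurfaceObject_t surf = 0;
    cudaCreateSurfaceObject(&surf, &desc);

    // Kernel launch elided; inside it you would write with
    //   surf2Dwrite(value, surf, x * sizeof(uchar4), y);

    cudaDestroySurfaceObject(surf);
    cudaGraphicsUnmapResources(1, &texResource, 0);
    // No extra glTexSubImage2D copy: the kernel wrote the texture directly.
}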

Porting OpenGL ES 2 to OpenGL

I have an iPhone game which I am porting to PC. The game uses OpenGL ES 2 (shaders) and I am to decide which version of OpenGL to use.
Should I port all shaders to OpenGL 2 (and support older hardware)? Or should I port to OpenGL 3 (which I don't know very well yet, but it seems that the shaders are more compatible)? Which will be easier to port?
Since OpenGL ES 2 is a subset of desktop OpenGL 2, you should be pretty fine with 2.x if you don't need your game to use more advanced techniques on PC (like instancing, texture arrays, geometry shaders, ..., things your ES device has never heard of).
Just keep in mind that the fact that you can use immediate mode and the fixed-function pipeline in desktop GL 2 doesn't mean you should. You can write completely forward-compatible, modern GL (VBOs, shaders, ...) in OpenGL 2.0 just fine. On the other hand, GL 3.x isn't really that different from previous versions; it just brings some additional features, which can be adopted later if the need arises. There is no real decision between GL 2 and 3, only between modern VBO-and-shader-based GL and old immediate-mode, fixed-function GL, and with your game already using GL ES 2.0 you have made the correct decision.
Actually your shaders would be more compatible with GL 2 than 3, as ES 2 uses the old attribute/varying syntax. But that's just a side note, nothing to really drive your decision, as you may have to change your shaders a little bit anyway (e.g. handle/remove precision qualifiers), as sketched below.
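One common way to deal with the precision qualifiers, sketched here under the assumption that your ES sources have no #version line, is to prepend a small preamble before compiling on desktop GL 2.x:
// This handles per-variable qualifiers ("mediump vec2 uv"); a global
// "precision mediump float;" statement still has to be stripped or wrapped
// in #ifdef GL_ES in the shader itself.
#include <GL/glew.h>
#include <string>

GLuint compileEsShaderOnDesktop(GLenum type, const std::string& esSource) {
    static const char* preamble =
        "#define lowp\n"
        "#define mediump\n"
        "#define highp\n";   // desktop GLSL 1.10/1.20 has no precision keywords

    std::string patched = std::string(preamble) + esSource;
    const char* src = patched.c_str();

    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);
    return shader;   // compile-status checking omitted for brevity
}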

OpenGL: How to select correct mipmapping method automatically?

I'm having problems with mipmapping textures on different hardware. I use the following code:
char *exts = (char *)glGetString(GL_EXTENSIONS);
if(strstr(exts, "SGIS_generate_mipmap") == NULL){
    // use gluBuild2DMipmaps()
}else{
    // use GL_GENERATE_MIPMAP
}
But on some cards it says GL_GENERATE_MIPMAP is supported when it's not, so the gfx card tries to read memory from where the mipmap is supposed to be, and ends up rendering other textures into those mip levels.
I tried glGenerateMipmapEXT(GL_TEXTURE_2D), but it makes all my textures white; I enabled GL_TEXTURE_2D before using that function (as I was told to).
I could as well just use gluBuild2DMipmaps() for everyone, since it works. But I don't want to make new cards load 10x slower because there are 2 users who have really old cards.
So how do you choose the mipmapping method correctly?
glGenerateMipmap has been supported at least since OpenGL 3.3 as part of the core functionality, not as an extension.
You have the following options:
Check the OpenGL version; if it is at least as recent as the first version that supported glGenerateMipmap, use glGenerateMipmap.
(I'd recommend this one) OpenGL 1.4..2.1 supports glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE) (see this), which will generate mipmaps from the base level. This became "deprecated" in OpenGL 3, but you should still be able to use it.
Use GLee or GLEW and use the glewIsSupported / gleeIsSupported calls to check for the extension.
Also, I think that instead of relying on extensions it is easier to stick with the OpenGL specification. A lot of hardware supports OpenGL 3, so you should be able to get most of the required functionality (shaders, mipmaps, framebuffer objects, geometry shaders) as part of the OpenGL specification, not as extensions. A rough sketch of such a fallback chain is below.
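Sketched with GLEW (any extension loader works) it might look roughly like this; pixels, w and h are assumed to describe an RGBA8 image, and glewInit() has already been called:
#include <GL/glew.h>
#include <GL/glu.h>

void uploadWithMipmaps(GLuint tex, int w, int h, const void* pixels) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

    if (GLEW_VERSION_3_0 || GLEW_ARB_framebuffer_object) {
        // Modern path: upload, then ask the driver to build the mip chain.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glGenerateMipmap(GL_TEXTURE_2D);
    } else if (GLEW_VERSION_1_4) {
        // GL 1.4..2.1: let the driver generate the chain on upload.
        glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    } else {
        // Last resort for really old drivers (slow, CPU-side).
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA8, w, h,
                          GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }
}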
If drivers lie, there's not much you can do about it. Also remember that glGenerateMipmapEXT is part of the GL_EXT_framebuffer_object extension.
What you are doing wrong is checking for the SGIS_generate_mipmap extension while using GL_GENERATE_MIPMAP, an enum that belongs to core OpenGL; but that's not really the problem.
The issue you describe sounds like a horrible OpenGL implementation bug; I would bypass it by using gluBuild2DMipmaps on those cards (keeping a list and checking at startup).