How to implement 3D texturing in OpenGL 3.3 - C++

So I have just realized that the code I was working on for 3D textures was written for OpenGL 1.1 or thereabouts and is no longer supported in OpenGL 3.3. Is there another way to do this without glTexture3D? Perhaps through a library, or another function in OpenGL 3.3 that I do not know about?
EDIT:
I am not sure where I read that 3D texturing was taken out of OpenGL in newer versions (I have been googling a lot today), but consider this:
I have been following the tutorial/guide here. The program compiles without a hitch. Now read the following quote from the article:
The potential exists that the environment the program is being run on does not support 3D texturing, which would cause us to get a NULL address back, and attempting to use a NULL pointer is A Bad Thing so make sure to check for it and respond appropriately (the provided example exits with an error).
That quote is referring to the following function:
glTexImage3D = (PFNGLTEXIMAGE3DPROC) wglGetProcAddress("glTexImage3D");
When I run my program on my computer (which has OpenGL 3.3), that same function returns NULL. When my friend runs it on his computer (which has OpenGL 1.2), it does not return NULL.

The way one uploads 3D textures has not changed since OpenGL-1.2. The functions for this are still named:
glTexImage3D
glTexSubImage3D
glCopyTexSubImage3D
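One thing worth checking: wglGetProcAddress only returns usable pointers once an OpenGL context has been created and made current; called before that point, it returns NULL no matter what the driver supports. For illustration, a minimal sketch of uploading a 3D texture on a core 3.3 context follows; the helper name createVolumeTexture, the 16-cube volume, and the use of GLEW are my own assumptions, not from the original post.

// Minimal sketch: upload a 16x16x16 RGBA volume on a core 3.3 context.
// Assumes a context is current and GLEW (or another loader) is initialized.
#include <GL/glew.h>
#include <vector>

GLuint createVolumeTexture()
{
    const int dim = 16;
    std::vector<unsigned char> voxels(dim * dim * dim * 4, 255); // solid white volume

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    // glTexImage3D has been core since OpenGL 1.2 and is still present in 3.3 core.
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, dim, dim, dim, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, voxels.data());
    return tex;
}

In a 3.3 core fragment shader the texture is then sampled through a sampler3D uniform with texture(sampler, uvw).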

Related

OpenGL Qt 4.8 Render to texture floating point

So I'm working on a project based on Qt 4.8, so when using OpenGL I have to go through the QGL classes.
My goal is to write data to a floating point texture to perform per-pixel picking (three values are written to each pixel: two integers and a float).
So I used QGLFramebufferObject, and the offscreen rendering is happening, but I'm having issues retrieving my data. The first thing I looked into is specifying the internal format for the FBO, but when trying to use the right format, GL_RGB32F, the compiler can't find it. I checked the context and it is a 3.1 core profile, so it should be there. My second problem is with clamping: when reading back information from the buffer it is normalized, so I know I have to disable the clamping of values with glClampColorARB, but the compiler doesn't find that either.
So I guess my question is: how do I load what's missing so I can find my constant for the internal format and the clamping function?
Thanks
I would guess that you are compiling with an older OpenGL header file. On MS Windows the default GL/gl.h is for something like version 1.1 :-(
AFAIK the Qt headers for the GL-related classes don't include everything in OpenGL, just the minimum needed to work. You should get your own copy of glcorearb.h, say from www.opengl.org, and include that in your source code.
(What you are attempting can be done: I have a Linux/MSWin/Mac program built with Qt 4.8.6 that renders to offscreen floating point buffers. I'd offer code but I created the framebuffer directly with OpenGL calls rather than using the Qt class.)
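For reference, a minimal sketch of that raw-GL route, assuming glcorearb.h or GLEW supplies the 3.x symbols; the helper name and the texture parameters are illustrative assumptions, not from any real program.

// Minimal sketch: a floating point FBO created with plain OpenGL calls.
#include <GL/glew.h>

GLuint createFloatFBO(int width, int height, GLuint* colorTexOut)
{
    GLuint tex = 0, fbo = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // GL_RGB32F is the internal format the question could not find; it lives
    // in the 3.x headers, not in the ancient default GL/gl.h.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0,
                 GL_RGB, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    // In a core profile the clamp control is plain glClampColor; the ARB
    // suffix is gone once the proper headers/loader are in place.
    glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE);

    *colorTexOut = tex;
    return fbo;
}

Reading back then delivers unclamped floats, e.g. glReadPixels(0, 0, width, height, GL_RGB, GL_FLOAT, data).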

Swapping buffers with core profile uses invalid operations

Whenever I call a function to swap buffers I get tons of errors from glDebugMessageCallback saying:
glVertex2f has been removed from OpenGL Core context (GL_INVALID_OPERATION)
I've tried using both GLFW and freeglut, and neither works properly.
I haven't used glVertex2f, of course. I even went as far as deleting all my rendering code to see if I could find what's causing it, but the error is still there, right after glutSwapBuffers/glfwSwapBuffers.
Using single-buffering, on the other hand, produces no errors.
I've initialized the context to 4.3, core profile, and flagged forward-compatibility.
As explained in comments, the problem here is actually third-party software and not any code you yourself wrote.
When software such as the Steam overlay or FRAPS needs to draw something on top of OpenGL, it usually does so by hooking/injecting some code into your application's SwapBuffers implementation at run-time.
You are dealing with a piece of software (RivaTuner) that still uses immediate mode to draw its overlay and that is the source of the unexplained deprecated API calls on every buffer swap.
Do you have code you can share? Either the driver is buggy or something is calling glVertex in your process. You could try using glloadgen to build a loader library that covers OpenGL-4.3 symbols only (and only those symbols), so that any use of symbols outside the 4.3 spec causes a linkage error.
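If you want to pinpoint where the deprecated calls come from, logging the source field of each debug message helps separate your own calls from injected ones. A minimal sketch, assuming a 4.3 debug-capable context; the callback name is my own:

// Debug callback that prints where each message originates.
#include <GL/glew.h>
#include <cstdio>

void GLAPIENTRY debugCallback(GLenum source, GLenum type, GLuint id,
                              GLenum severity, GLsizei length,
                              const GLchar* message, const void* userParam)
{
    const char* src =
        source == GL_DEBUG_SOURCE_API         ? "API" :
        source == GL_DEBUG_SOURCE_THIRD_PARTY ? "THIRD_PARTY" :
        source == GL_DEBUG_SOURCE_APPLICATION ? "APPLICATION" : "OTHER";
    std::fprintf(stderr, "[GL %s] %s\n", src, message);
}

// During initialization:
// glEnable(GL_DEBUG_OUTPUT);
// glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
// glDebugMessageCallback(debugCallback, NULL);

With GL_DEBUG_OUTPUT_SYNCHRONOUS enabled, a breakpoint inside the callback shows the call stack of the offending call, which makes hooked third-party code easy to spot.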

Tessellation in Go-GL

I'm trying to tessellate a simple triangle using the Golang OpenGL bindings.
The library doesn't claim support for the tessellation shaders, but I looked through the source code, and adding the correct bindings didn't seem terribly tricky. So I branched it and tried adding the correct constants in gl_defs.go.
The bindings still compile just fine, and so does my program; it's when I actually try to use the new bindings that things go strange. The program goes from displaying a nicely circling triangle to a black screen whenever I actually try to include the tessellation shaders.
I'm following along with the OpenGL Superbible (6th edition) and using their shaders for this project, so I don't imagine I'm using broken shaders (they don't spit out an error log, anyway). But in case the shaders themselves could be at fault, they can be found in the setupProgram() function here.
I'm pretty sure my graphics card supports tessellation, because printing the OpenGL version returns 4.4.0 NVIDIA 331.38.
So my questions:
Is there any reason adding Go bindings for tessellation wouldn't work? The bindings seem quite straightforward.
Am I adding the new bindings incorrectly?
If it should work, why is it not working for me?
What am I doing wrong here?
Steps that might be worth taking:
Your driver and video card may support tessellation shaders, but the GL context that your binding returns might be for an earlier version of OpenGL. Try glGetString(GL_VERSION) and see what you get.
Are you calling glGetError basically everywhere and actually checking its values? Does this binding provide error return values? If so, are you checking those?
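Both checks are quick to write; a minimal sketch in C++ (the Go bindings wrap the same entry points, so the equivalent calls exist there), with checkContext being a hypothetical helper name:

// Print the context version and drain the GL error queue.
#include <GL/glew.h>
#include <cstdio>

void checkContext()
{
    // The driver may support 4.4, yet the context handed to you may be older.
    std::printf("GL_VERSION: %s\n", (const char*)glGetString(GL_VERSION));

    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major); // queryable on 3.0+ contexts
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    std::printf("context: %d.%d\n", major, minor);

    // glGetError returns one queued error per call, so loop until empty.
    for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
        std::fprintf(stderr, "GL error: 0x%04x\n", err);
}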

Cg problems with OpenGL

I'm working on an OpenGL project on Windows, using GLEW to provide the functionality the provided Windows headers lack. For shader support, I'm using NVIDIA's Cg. All the documentation and code samples I have read indicate that the following is the correct method for loading and using shaders, and I've implemented things this way in my code:
Create a Cg context with cgCreateContext.
Get the latest vertex and pixel shader profiles using cgGLGetLatestProfile with CG_GL_VERTEX and CG_GL_FRAGMENT, respectively. Use cgGLSetContextOptimalOptions to create the optimal setup for both profiles.
Using these profiles and shaders you have written, create shader programs using cgCreateProgramFromFile.
Load the shader programs using cgGLLoadProgram.
Then, each frame, for an object that uses a given shader:
Bind the desired shader(s) (vertex and/or pixel) using cgGLBindProgram.
Enable the profile(s) for the desired shader(s) using cgGLEnableProfile.
Retrieve and set any needed uniform shader parameters using cgGetNamedParameter and the various parameter setting functions.
Render your object normally.
Clean up the shader by calling cgGLDisableProfile.
However, things start getting strange. When using a single shader everything works just fine, but the act of loading a second shader with cgGLLoadProgram seems to make objects using the first one cease to render. Switching the draw order seems to resolve the issue, but that's hardly a fix. This problem occurs on both my and my partner's laptops (fairly recent machines with Intel integrated chipsets).
I tested the same code on my desktop with a GeForce GTX 260, and everything worked fine. I would just write this off as my laptop GPU not getting along with Cg, but I've successfully built and run programs that use several Cg shaders simultaneously on my laptop using the OGRE graphics engine (unfortunately the assignment I'm currently working on is for a computer graphics class, so I can't just use OGRE).
In conclusion, I'm stumped. What is OGRE doing that my code is not? Am I using Cg improperly?
You have to call cgGLEnableProfile before you call cgGLBindProgram. From your question it appears you do it the other way around.
From the Cg documentation for cgGLBindProgram:
cgGLBindProgram binds a program to the current state. The program must have been loaded with cgGLLoadProgram before it can be bound. Also, the profile of the program must be enabled for the binding to work. This may be done with the cgGLEnableProfile function.
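Putting the quoted order into code, a minimal per-frame sketch; the variable names and the "modelViewProj" parameter are illustrative assumptions, not from the asker's program:

// Per-frame use of a loaded Cg vertex program, profile enabled before binding.
#include <Cg/cg.h>
#include <Cg/cgGL.h>

void drawWithShader(CGprofile vertexProfile, CGprogram vertexProgram)
{
    cgGLEnableProfile(vertexProfile);   // enable the profile FIRST
    cgGLBindProgram(vertexProgram);     // ...then bind the loaded program

    // Set uniforms after binding; hypothetical parameter name:
    CGparameter mvp = cgGetNamedParameter(vertexProgram, "modelViewProj");
    if (mvp)
        cgGLSetStateMatrixParameter(mvp, CG_GL_MODELVIEW_PROJECTION_MATRIX,
                                    CG_GL_MATRIX_IDENTITY);

    // ... issue the draw calls for the object here ...

    cgGLDisableProfile(vertexProfile);  // restore state for other objects
}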

no-render with OpenGL --> contexts

I have a program that does some GPU computing with optional OpenGL rendering.
The usage flow is as follows:
init function (initializing GLEW being the most relevant part).
load mesh from file to GPU (use glGenBuffers and related functions to make a VBO).
process this mesh in parallel (GPU computing API).
save mesh into file.
My problem is that loading the mesh uses OpenGL calls, and without a context created I just get a segmentation fault.
Edit: Evolution of the problem:
I was missing GL/glx.h; I thought that GL/glxew.h included it. Thanks to the answers, that got fixed.
I was missing glXMakeCurrent, and therefore no context was ever current.
After these fixes, it works :).
Also, thanks for the tool suggestions; I would gladly use them, it is just that I needed the low-level code for this particular case.
I tried making a context with this code (I am using GLEW, so I changed the header to GL/glxew.h, but the rest of the code remains the same).
Don't do it. glxew is used for loading GLX functions. You probably don't need it.
If you want to use GLEW, replace GL/gl.h with GL/glew.h and leave GL/glx.h as it is.
X11 and GLX are quite complex; consider using SDL or GLFW instead.
Just wildly guessing here, but could it be that GLEW redefined glXChooseFBConfig with something custom? Something in the call of glXChooseFBConfig dereferences an invalid pointer. So either glXChooseFBConfig itself is invalid, fbcount is too small, or visual_attribs is not properly terminated.
GLEW has nothing to do with context creation. It is an OpenGL loading library; it loads OpenGL functions. It needs you to have an OpenGL context in order for it to function.
Since you're not really using this context for drawing stuff, I would suggest using an off-the-shelf tool for context creation. GLFW or FreeGLUT would be the most light-weight alternatives. Just use them to create a context, do what you need to do, then destroy the windows they create.
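A minimal sketch of that approach with GLFW; the hidden 1x1 window is an illustrative choice, not from the original post:

// Create an invisible window just to get a current OpenGL context.
#include <GL/glew.h>
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return 1;

    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);      // never shown on screen
    GLFWwindow* win = glfwCreateWindow(1, 1, "", NULL, NULL);
    if (!win) { glfwTerminate(); return 1; }

    glfwMakeContextCurrent(win);                   // context is now current
    if (glewInit() != GLEW_OK) { glfwTerminate(); return 1; }

    // ... glGenBuffers / VBO upload / GPU-compute work goes here ...

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}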