I'm working on an OpenGL project on Windows, using GLEW to provide the functionality the stock Windows headers lack. For shader support, I'm using NVIDIA's Cg. All the documentation and code samples I have read indicate that the following is the correct method for loading and using shaders, and I've implemented things this way in my code:
Create a Cg context with cgCreateContext.
Get the latest vertex and pixel shader profiles using cgGLGetLatestProfile with CG_GL_VERTEX and CG_GL_FRAGMENT, respectively. Use cgGLSetContextOptimalOptions to create the optimal setup for both profiles.
Using these profiles and shaders you have written, create shader programs using cgCreateProgramFromFile.
Load the shader programs using cgGLLoadProgram.
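In code, that setup looks roughly like this (a simplified sketch; "shader.cg" and "mainVS" are placeholders for my actual file and entry point):

// One-time setup: context, profiles, optimal options, then compile and load
CGcontext context = cgCreateContext();

CGprofile vertProfile = cgGLGetLatestProfile(CG_GL_VERTEX);
CGprofile fragProfile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
cgGLSetContextOptimalOptions(context, vertProfile);
cgGLSetContextOptimalOptions(context, fragProfile);

// Compile a vertex program from source and hand it to the GL runtime
CGprogram vertProgram = cgCreateProgramFromFile(
    context, CG_SOURCE, "shader.cg", vertProfile, "mainVS", NULL);
cgGLLoadProgram(vertProgram);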
Then, each frame, for an object that uses a given shader:
Bind the desired shader(s) (vertex and/or pixel) using cgGLBindProgram.
Enable the profile(s) for the desired shader(s) using cgGLEnableProfile.
Retrieve and set any needed uniform shader parameters using cgGetNamedParameter and the various parameter setting functions.
Render your object normally.
Clean up the shader by calling cgGLDisableProfile.
However, things start getting strange. When using a single shader everything works just fine, but the act of loading a second shader with cgGLLoadProgram seems to make objects using the first one cease to render. Switching the draw order seems to resolve the issue, but that's hardly a fix. This problem occurs on both my and my partner's laptops (fairly recent machines with Intel integrated chipsets).
I tested the same code on my desktop with a GeForce GTX 260, and everything worked fine. I would just write this off as my laptop GPU not getting along with Cg, but I've successfully built and run programs that use several Cg shaders simultaneously on my laptop using the OGRE graphics engine (unfortunately the assignment I'm currently working on is for a computer graphics class, so I can't just use OGRE).
In conclusion, I'm stumped. What is OGRE doing that my code is not? Am I using Cg improperly?
You have to call cgGLEnableProfile before you call cgGLBindProgram. From your question it appears you do it the other way around.
From the Cg documentation for cgGLBindProgram:
cgGLBindProgram binds a program to the current state. The program must have been loaded with cgGLLoadProgram before it can be bound. Also, the profile of the program must be enabled for the binding to work. This may be done with the cgGLEnableProfile function.
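In other words, the per-frame sequence should look something like this (a sketch; "modelViewProj" and drawObject() are just placeholders):

// Per frame, per object: enable the profile first, then bind the program
cgGLEnableProfile(vertProfile);
cgGLBindProgram(vertProgram);

// Set uniforms after binding ("modelViewProj" is an example parameter name)
CGparameter mvp = cgGetNamedParameter(vertProgram, "modelViewProj");
cgGLSetStateMatrixParameter(mvp, CG_GL_MODELVIEW_PROJECTION_MATRIX,
                            CG_GL_MATRIX_IDENTITY);

drawObject();  // placeholder for your normal render call

cgGLDisableProfile(vertProfile);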
I've previously been able to populate textures in CUDA for use in OpenGL by:
Create and initialize the GL texture (gl::GenTextures(), etc.)
Create a GL Pixel Buffer Object
Register the PBO with CUDA
In the update/render loop:
cudaGraphicsMapResource() with the PBO
Launch the kernel to update the PBO
cudaGraphicsUnmapResource() the PBO from CUDA
Load the GL program, bind texture, render as normal
Wash, rinse, repeat.
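For reference, the map/update/unmap portion of that loop looks roughly like this (a simplified sketch with a stand-in kernel; error checking omitted, and a GL loader header is assumed):

#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Stand-in kernel: fills the PBO with an RGBA gradient
__global__ void fillPixels(uchar4* pixels, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    pixels[y * width + x] = make_uchar4(x % 256, y % 256, 128, 255);
}

void updateTexture(cudaGraphicsResource* pboResource, GLuint pbo, GLuint tex,
                   int width, int height)
{
    // Map the registered PBO and get a device pointer CUDA can write to
    uchar4* devPtr = NULL;
    size_t size = 0;
    cudaGraphicsMapResources(1, &pboResource, 0);
    cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &size, pboResource);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    fillPixels<<<grid, block>>>(devPtr, width, height);

    cudaGraphicsUnmapResources(1, &pboResource, 0);

    // Copy PBO -> texture on the GL side
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}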
However, I'm wondering if PBOs are still the best way to write a texture from a kernel. I've seen articles like this one (updated for v5 here) which don't appear to use PBOs at all.
I've seen some references to cudaTextureObject and cudaSurfaceObject, but their role in OpenGL interop is unclear to me.
Are PBOs still the recommended approach? If not, what are the alternatives I should be investigating?
(I'm specifically targeting Kepler and newer architectures.)
Take a look at the official example in the CUDA 6 SDK: it's called "simpleCUDA2GL" and lives in the "3_Imaging" directory.
It shows two different approaches to accessing a texture from inside a CUDA kernel.
One of them (the older one, I think) uses a PBO, and it is about 3 times slower on my machine.
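For comparison, a non-PBO route is to register the GL texture itself and write to it from the kernel through a surface object, which is one of the things cudaSurfaceObject is for. A minimal sketch, assuming an RGBA8 GL_TEXTURE_2D that was registered with cudaGraphicsGLRegisterImage and the cudaGraphicsRegisterFlagsSurfaceLoadStore flag:

#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Kernel writes straight into the texture's cudaArray via a surface object
__global__ void writeToSurface(cudaSurfaceObject_t surf, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    surf2Dwrite(make_uchar4(x % 256, y % 256, 0, 255), surf,
                x * sizeof(uchar4), y);  // the x coordinate is in bytes
}

void updateTextureDirect(cudaGraphicsResource* texResource, int width, int height)
{
    cudaGraphicsMapResources(1, &texResource, 0);

    cudaArray_t array = NULL;
    cudaGraphicsSubResourceGetMappedArray(&array, texResource, 0, 0);

    // Wrap the array in a surface object so the kernel can write it
    cudaResourceDesc desc = {};
    desc.resType = cudaResourceTypeArray;
    desc.res.array.array = array;
    cudaSurfaceObject_t surf = 0;
    cudaCreateSurfaceObject(&surf, &desc);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    writeToSurface<<<grid, block>>>(surf, width, height);

    cudaDestroySurfaceObject(surf);
    cudaGraphicsUnmapResources(1, &texResource, 0);
}

No PBO and no extra device-to-device copy is involved; the kernel writes directly into the array backing the texture.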
You may want to look at this very recent CUDA GL Interop example from NVIDIA:
https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st
I'm trying to tessellate a simple triangle using the Golang OpenGL bindings.
The library doesn't claim support for the tessellation shaders, but I looked through the source code, and adding the correct bindings didn't seem terribly tricky. So I branched it and tried adding the correct constants in gl_defs.go.
The bindings still compile just fine, and so does my program; it's when I actually try to use the new bindings that things get strange. The program goes from displaying a nicely circling triangle to a black screen whenever I actually try to include the tessellation shaders.
I'm following along with the OpenGL Superbible (6th edition) and using their shaders for this project, so I don't imagine I'm using broken shaders (they don't spit out an error log, anyway). But in case the shaders themselves could be at fault, they can be found in the setupProgram() function here.
I'm pretty sure my graphics card supports tessellation, because printing the OpenGL version returns 4.4.0 NVIDIA 331.38.
So my questions:
Is there any reason adding Go bindings for tessellation wouldn't work? The bindings seem quite straightforward.
Am I adding the new bindings incorrectly?
If it should work, why is it not working for me?
What am I doing wrong here?
Steps that might be worth taking:
Your driver and video card may support tessellation shaders, but the GL context that your binding returns for you might be for an earlier version of OpenGL. Try glGetString(GL_VERSION) and see what you get.
Are you calling glGetError basically everywhere and actually checking its values? Does this binding provide error return values? If so, are you checking those?
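In C-style GL, those two checks look like this (the binding should expose equivalents, whatever it names them):

// Report what the context actually gives you, not what the driver could give you
const GLubyte* version = glGetString(GL_VERSION);
const GLubyte* glslVer = glGetString(GL_SHADING_LANGUAGE_VERSION);
printf("GL  : %s\nGLSL: %s\n", version, glslVer);

// Drain and report any pending GL errors (call this after the suspect calls)
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR) {
    fprintf(stderr, "GL error: 0x%04x\n", err);
}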
I'm fairly new to CUDA, but I've managed to display something generated by a kernel on the screen using OpenGL. I've tried several approaches:
Using a PBO and an OpenGL texture (old style);
Using a OpenGL texture as a CUDA surface and rendering on a quad (new style);
Using a renderbuffer as a CUDA surface and rendering using glBlitFramebuffer.
All of them worked, but, while implementing #2, I erroneously set the hint as cudaGraphicsRegisterFlagsWriteDiscard. Since all of the data will be generated by CUDA, I thought this was the correct option. However, later I realized that I needed a CUDA surface to write to an OpenGL texture, and when you use a surface, you are requested to use the LoadStore flag.
So basically my question is this: since I absolutely need a CUDA surface to write to an OpenGL texture in CUDA, what is the use case of cudaGraphicsRegisterFlagsWriteDiscard in cudaGraphicsGLRegisterImage?
The documentation description seems pretty straightforward. It is for one-way delivery of data from CUDA to OpenGL.
This online book excerpt provides a similar explanation:
Applications where CUDA is the producer and OpenGL is the consumer should register the objects with a write-discard flag...
If you want to see an example, take a look at the postProcessGL cuda sample. In that case, OpenGL is rendering an image, and it's being post-processed (blur added) by cuda, before display. In this case, there are two separate pathways for data flow. In the OpenGL->CUDA case, the data is handled by the createTextureSrc function, and the flag specified is read-only. For the CUDA->OpenGL case (delivery of the post-processed frame) the function is handled in createTextureDst, where a call is made to cudaGraphicsGLRegisterImage with the cudaGraphicsMapFlagsWriteDiscard flag specified, since on this path, CUDA is producing and OpenGL is consuming.
To understand how the textures are handled (populated with data from the cuda operations via a cudaArray) you probably want to study the sequence of operations in processImage().
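As a concrete sketch of the write-discard case (CUDA produces, OpenGL consumes), assuming the kernel has already written its frame into a linear device buffer d_output of width x height uchar4 texels (glTexture, d_output, width and height are placeholder names, and cuda_gl_interop.h plus a GL header are assumed):

// One-time: register the GL texture for one-way CUDA -> GL delivery.
// WriteDiscard tells the driver the previous contents never need to be preserved.
cudaGraphicsResource* texResource = NULL;
cudaGraphicsGLRegisterImage(&texResource, glTexture, GL_TEXTURE_2D,
                            cudaGraphicsRegisterFlagsWriteDiscard);

// Per frame: copy the kernel's output into the texture's cudaArray.
// No surface object is needed because CUDA never reads or updates the
// texture in place; it simply overwrites it wholesale.
cudaGraphicsMapResources(1, &texResource, 0);
cudaArray_t array = NULL;
cudaGraphicsSubResourceGetMappedArray(&array, texResource, 0, 0);
cudaMemcpy2DToArray(array, 0, 0,
                    d_output, width * sizeof(uchar4),  // source pointer and pitch
                    width * sizeof(uchar4), height,    // copy extent: bytes per row, rows
                    cudaMemcpyDeviceToDevice);
cudaGraphicsUnmapResources(1, &texResource, 0);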
So I have just realized that the code I was working on for 3D textures was for OpenGL 1.1 or something and is no longer supported in OpenGL 3.3. Is there another way to do this without glTexture3D? Perhaps through a library or another function in OpenGL 3.3 that I do not know about?
EDIT:
I am not sure where I read that 3D texturing was taken out of OpenGL in newer versions (I've been googling a lot today), but consider this:
I have been following the tutorial/guide here. The program compiles without a hitch. Now read the following quote from the article:
The potential exists that the environment the program is being run on does not support 3D texturing, which would cause us to get a NULL address back, and attempting to use a NULL pointer is A Bad Thing so make sure to check for it and respond appropriately (the provided example exits with an error).
That quote is referring to the following function:
glTexImage3D = (PFNGLTEXIMAGE3DPROC) wglGetProcAddress("glTexImage3D");
When running my program on my computer (which has OpenGL 3.3) that same function returns null for me. When my friend runs it on his computer (which has OpenGL 1.2) it does not return null.
The way one uploads 3D textures has not changed since OpenGL-1.2. The functions for this are still named
glTexImage3D
glTexSubImage3D
glCopyTexSubImage3D
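A minimal upload in a 3.3 core context is still the same call sequence (sketch; width, height, depth and voxelData are placeholders, and the entry points are assumed to be loaded by GLEW or a similar loader):

// Allocate and fill a 3D texture
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8,
             width, height, depth, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, voxelData);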
I need to make a fallback for when the user's hardware doesn't support the shader I have written to render some things faster.
So, how exactly do I check these things? I know some shader functions are not supported by some GLSL versions, but where is the complete list of these functions versus the versions they require?
The problem is, I don't know exactly what I need to check in order to know whether a given machine can render that shader. Is it only a matter of checking which functions are supported by which GLSL version, or is there more to it? I want to be 100% sure when to switch to the fallback renderer and when to use the GLSL renderer.
I know how to retrieve the GLSL and OpenGL version strings.
If glLinkProgram sets the GL error state then the shader(s) are not compatible with the card.
After calling glLinkProgram, it is advisable to check the link status by using:
glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);
This will give you a boolean value indicating whether the program linked successfully. There is also GL_COMPILE_STATUS, queried per shader with glGetShaderiv.
Most of the time, this will indicate if the program fails to compile or link on your platform.
Be advised, though, that a program may link fine yet not be suitable to run on your hardware; in that case the GL implementation will fall back on software rendering, and it will be slow, slow, slow.
If you're lucky, you'll get a message about this in the program's info log, but that message is platform dependent.
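A typical check after compiling and linking looks like this (sketch):

// After glCompileShader(shader):
GLint compiled = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (compiled != GL_TRUE) {
    char log[4096];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    fprintf(stderr, "shader compile failed:\n%s\n", log);
}

// After glLinkProgram(program):
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    char log[4096];
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    fprintf(stderr, "program link failed:\n%s\n", log);
    // switch to the fallback render path here
}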