I'm just pulling my hair out and can't find a hint: my app resizes its RTT texture when needed via glTexImage2D with the new texture resolution.
When upsizing, it all looks good. When downsizing, it looks like the TexCoord mapping of [1.0;1.0] maps to [oldRes.width; oldRes.height]. I'm sure I'm missing something vital, but cannot find it right now. Any ideas?
EDIT: oops, that wasn't it either. My State Cache simply didn't enable the correct texturing unit on bind when that texture was already bound (this fix also fixed several other problems).
I just found it, and it's just too simple: apparently (I'm on NVIDIA) the texturing unit our RTT texture is bound to needs to be reinitialized after a resize (NOT on the initial sizing). Unbinding the texture and rebinding it when needed again did the job.
P.S.: I'm working on a State Cache that uses all available texturing units; that's why this popped up: the texture was never unbound, as my examples use far fewer textures than there are units... (so no texture gets unbound unless deleted).
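For reference, a minimal sketch of the workaround described above; rttTex, cachedUnit and the new dimensions are illustrative names, not the actual code:

    /* Resize the RTT texture to the new resolution. */
    glBindTexture(GL_TEXTURE_2D, rttTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newWidth, newHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* Workaround: force the unit the texture lives on to pick up the
       resized storage by unbinding and rebinding it there. */
    glActiveTexture(GL_TEXTURE0 + cachedUnit);
    glBindTexture(GL_TEXTURE_2D, 0);
    glBindTexture(GL_TEXTURE_2D, rttTex);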
If my understanding is correct, a texture unit has a number of targets (GL_TEXTURE_2D etc.) that I can bind textures to. I can change the currently active texture unit with glActiveTexture. When I'm calling glBindTexture, I bind the specified texture object to the specified target of the currently active texture unit, right?
When I want to later change the parameters of a texture or call a function like glTexSubImage2D, is it enough to call glActiveTexture with the texture unit that my texture is bound to? Or do I have to call glBindTexture every time, even if the texture is already bound to a unit?
As long as you know which texture is bound to which unit, you can rely on just calling glActiveTexture to switch to the right unit and find your texture still bound there.
However, you should not rely upon this for this purpose. Not because OpenGL is unreliable, but because you may be wrong about what you think you've bound to which unit.
Furthermore, it blurs the line between binding a texture to render with it and binding a texture to modify it. You want the two to be entirely separate. First, because modifying a texture's state in the rendering loop (where you're binding it to modify it) is bad form and likely to be slow. And second, so that when it comes time to adopt GL 4.5 and DSA, you can do so quickly and efficiently.
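A minimal sketch of that separation, assuming GL 4.5 is available for the modification path (tex, unit, width/height and pixels are illustrative names):

    /* Modification path: GL 4.5 DSA touches no binding points at all. */
    glTextureSubImage2D(tex, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* Rendering path: binding happens only to source the texture in a draw. */
    glBindTextureUnit(unit, tex);  /* GL 4.5; pre-4.5: glActiveTexture + glBindTexture */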
You can think of texture units as pipes on the GPU.
There are many such texture units on the GPU. By calling glActiveTexture() you tell the driver: whichever texture I bind next, connect it to the texture unit I mentioned. By calling glBindTexture() you tell the driver: whatever operations I am about to do, do them on the texture I have bound. Once you have called glTexImage2D, your texture resides in driver memory, so you don't have direct access to it; instead, the driver gives you a handle to the texture object, which you bind to tell the driver which resource you are talking about.
So whenever you render, it is good practice to both activate the texture unit and then bind your texture. Since you activate first and bind afterwards, the texture is automatically bound to the unit you specified (see the sketch below).
If you don't select a texture unit, GL_TEXTURE0 is used by default, which can sometimes be confusing.
Hope this helps.
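To illustrate the activate-then-bind sequence above (the texture names, program object and sampler uniform names are illustrative):

    /* Put each texture on its own unit before drawing. */
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, diffuseTex);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, normalTex);

    /* The sampler uniforms hold unit indices, not texture names. */
    glUseProgram(program);
    glUniform1i(glGetUniformLocation(program, "uDiffuse"), 0);
    glUniform1i(glGetUniformLocation(program, "uNormal"), 1);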
I've been learning a bit of OpenGL lately, and I just got to framebuffers.
So by my current understanding, if you have a framebuffer of your own, and you want to draw the color buffer onto the window, you'll need to first draw a quad and then wrap the texture over it? Is that right? Or is there something like a glDrawArrays()/glDrawElements() version for framebuffers?
It seems a bit... odd (clunky? hackish?) to me that you have to wrap a texture over a quad in order to draw the framebuffer. This doesn't have to be done with the default framebuffer. Or is that done behind your back?
Well, the main point of framebuffer objects is to render scenes to buffers that will not get displayed but rather reused somewhere, as a source of data for some other operation (shadow maps, high dynamic range processing, reflections, portals...).
If you want to display it, why do you use a custom framebuffer in the first place?
Now, as @CoffeeandCode comments, there is indeed a glBlitFramebuffer call that allows transferring pixels from one framebuffer to another. But before you go ahead and use that call, ask yourself why you need that extra step. It's not a free operation...
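For completeness, a sketch of that blit path (fbo and the dimensions are illustrative; requires GL 3.0 or GL_EXT_framebuffer_blit):

    /* Copy the custom FBO's colour buffer into the default framebuffer. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width, height,   /* source rectangle      */
                      0, 0, width, height,   /* destination rectangle */
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);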
I have a strange bug in OpenGL that I cannot explain.
I have "programmed" a shadow map by copy-pasting from tutorials.
I wanted to display the shadow map, so I wrote a little routine that displays it. It doesn't work (I only get a white square).
Finally, by trying a lot, I got the shadow to be almost correct in the main rendering of the scene. But the shadow doesn't render anymore when I uncomment the routine that displays the shadow map (which runs before the main rendering).
I guess it has to do with glBindTexture(GL_TEXTURE_2D, depthTex) (depthTex being the texture stored in the shadow framebuffer). If this depthTex is not referred to in order to display the shadow map, then somehow the shadow is built; but when I ask to display this depthTex, the computed shadow is nonsense.
I wonder whether, once glBindTexture(GL_TEXTURE_2D, depthTex) has been called to display the shadow map, depthTex can no longer be linked for the computation of the shadow shader.
I don't understand if glBindTexture is for reading or for writing...
This depthTex is indeed defined as the texture in which the shadow map is stored.
To sum up: two pieces of code interfere, so that rendering a texture appears to become more complicated than simply issuing the regular OpenGL commands to display a texture. It is just as if glBindTexture(GL_TEXTURE_2D, depthTex) could be called only once.
If, in the shadow-map display routine, I ask to display another texture of the program (such as "floor"), then the shadow is fine in the main rendered scene; but when I ask to display "depthTex", the shadow doesn't work anymore.
I have a strange bug in OpenGL that I cannot explain. I have "programmed" a shadow map by copy-pasting from tutorials.
In short: You're cargo culting. Read some good OpenGL tutorial(s), understand what they teach and then you'll make some progress.
I don't understand if glBindTexture is for reading or for writing... This depthTex is indeed defined as the texture in which the shadow map is stored.
It's for both. glBindTexture selects the texture bound to the active texture unit. When texturing is enabled, or when a bound shader samples from that texture unit, the bound texture is read. Calling one of the glTex…Image functions will write to it.
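A small sketch of both directions through the same bind (w, h, data and out are illustrative names; glGetTexImage is desktop GL only):

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, depthTex);

    /* Writing: glTex...Image calls update the bound texture's storage. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_DEPTH_COMPONENT, GL_FLOAT, data);

    /* Reading: a draw call whose shader samples unit 0 reads from it,
       and glGetTexImage copies the texels back to client memory. */
    glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, out);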
I need to gather information about the target used in a texture attachment of an FBO in order to copy it to another FBO.
As far as OpenGL ES 2.0 is concerned, I can use glGetFramebufferAttachmentParameteriv() and, since OpenGL ES 2.0 only supports GL_TEXTURE_2D and GL_TEXTURE_CUBE_MAP, the information returned is enough to determine the texture target that was used (when it's not a cube map face, it is GL_TEXTURE_2D, since it can't be anything else).
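A sketch of that ES 2.0 deduction, checking the colour attachment (variable names are illustrative):

    GLint type = 0, face = 0;
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
        GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE, &type);
    if (type == GL_TEXTURE) {
        glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
            GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_CUBE_MAP_FACE, &face);
        /* face == 0 means it is not a cube map face, so in ES 2.0 the
           target can only have been GL_TEXTURE_2D. */
        GLenum target = face ? (GLenum)face : GL_TEXTURE_2D;
        /* ...use target for the copy... */
    }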
On the desktop, however, things change:
Because then we have GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_2D_MULTISAMPLE, GL_TEXTURE_RECTANGLE, GL_TEXTURE_3D, and the 6 cube map faces as valid targets for an FBO's texture attachment. While the 6 cube map faces and the GL_TEXTURE_3D target are easy to tell apart (there are specific queries for cube map faces and layered textures), the same does not apply to the remaining targets: GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_2D_MULTISAMPLE, and GL_TEXTURE_RECTANGLE, at least as far as the manual pages are concerned. How, then, would I be able to tell which of these 4 targets was used in a texture attachment?
The need to copy an FBO stems from the fact that FBOs are not shared between contexts. The implementation creates screen FBOs in the main thread, and I want to use them in child threads dedicated to each screen, so as not to stall the main thread with render loops and thus keep the application responsive to UI events. Caching state is both undesirable and unfeasible in this case. It is undesirable because it cuts through otherwise distinct concerns of the application, when the client library (whose only concern is to serve as a communication API between the application and the OpenGL server) is in a much better position to cache state itself. It is unfeasible since, in this case, I don't even control some of the concerns in my application, as mentioned before.
Right now this is a theoretical question, because the implementation I'm working on only supports OpenGL ES 2.0. But I would rather write future-proof code, where I can be certain about the exact texture target used as an FBO attachment, than code that works only because the number of available options is so limited that I can figure out which option was chosen by excluding those that weren't. As demonstrated above, that approach wouldn't work on the feature-rich desktop versions and may not work on future OpenGL ES versions.
OpenGL has no solution for the problem you're having. There is no way to look at a texture object and know what target it is, nor is there a way to know what the textarget parameter of a texture that was attached to an FBO was. Generally speaking, you are expected to keep track of the texture object's target, just as you're expected to keep track of the texture object's name (the GLuint you get back from glGenTextures).
The best way to handle this would be to simply ask the client library what textures and texture targets it adds to its FBO. If you can't get the client library to provide you with this information, then you can't do what you need to do.
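A sketch of that bookkeeping (the struct and helper are hypothetical, not part of any library):

    typedef struct {
        GLuint name;
        GLenum target;   /* GL_TEXTURE_2D, GL_TEXTURE_RECTANGLE, ... */
    } Texture;

    Texture createTexture(GLenum target)
    {
        Texture t;
        t.target = target;
        glGenTextures(1, &t.name);
        glBindTexture(target, t.name);   /* the first bind fixes the target */
        return t;
    }

From then on, every piece of code that attaches t.name to an FBO also has t.target at hand, which is exactly the information OpenGL won't give back to you.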
Let's say I have an application (the details of the application should be irrelevant for solving the problem). I am somehow able to force the application to render to a framebuffer object instead of rendering to the screen (by messing with GLEW or intercepting a call in a DLL).
Once the application has rendered its content to the FBO, is it possible to apply a shader to the contents of the FBO? My knowledge is limited here, but from what I understand, at this stage all information about vertices is no longer available and all the necessary tests have been applied, so what's left in the buffer is just pixel data. Is this correct?
If it is possible to apply a shader to the FBO, is it possible to get a fisheye effect? (Like this, for example: http://idea.hosting.lv/a/gfx/quakeshots.html)
The technique used in the link above is to create 6 different viewports, render each viewport to a cube map face, and then apply the texture to a mesh.
Thanks
A framebuffer object encapsulates several other buffers, specifically those that are implicitly indexed by fragment location. So a single framebuffer object may bundle together a colour buffer, a depth buffer, a stencil buffer and a bunch of others. The individual buffers are known as renderbuffers.
You're right: there's no geometry in there. For the purposes of reading back the scene you get only final fragment values, which, if you're hijacking an existing app, will probably be a 2D pixel image of the frame plus some other things that you don't care about.
If your GPU has render-to-texture support (originally an extension circa OpenGL 1.3, but you'd be hard pressed to find a GPU without it nowadays, even in mobile phones), then you can attach a texture as the colour buffer of a framebuffer object. The rendering code is then exactly as it would be normally, but it ends up writing the results to a texture that you can then use as a source for drawing.
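A minimal render-to-texture setup along those lines (FBO-era entry points; the dimensions are illustrative):

    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    /* ...render the scene as usual; the results land in tex... */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);   /* back to the window */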
Fragment shaders can programmatically decide which location of a texture map to sample in order to create their output. So you can write a fragment shader that applies a fisheye lens, though you're restricted to the field of view rendered in the original texture, obviously. Which would probably be what you'd get in your Quake example if you had just one of the sides of the cube available rather than six.
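As a toy example of such a shader, here is a barrel-style distortion that gives a fisheye-ish look within the captured field of view; the 0.5 strength constant is an arbitrary illustrative value:

    /* Fragment shader source as a C string (old-style GLSL for brevity). */
    const char *fisheyeFS =
        "uniform sampler2D scene;                              \n"
        "void main() {                                         \n"
        "    vec2 uv = gl_TexCoord[0].st;                      \n"
        "    vec2 p  = uv * 2.0 - 1.0;  /* map to [-1,1] */    \n"
        "    /* bulge: magnify the centre, keep the corners */ \n"
        "    vec2 q  = p * (1.0 + 0.5 * dot(p, p)) / 1.5;      \n"
        "    gl_FragColor = texture2D(scene, q * 0.5 + 0.5);   \n"
        "}                                                     \n";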
In summary: the answer is 'yes' to all of your questions. There's a brief introduction to framebuffer objects here.
Look here for some relevant info:
http://www.opengl.org/wiki/Framebuffer_Object
The short, simple explanation is that an FBO is the 3D equivalent of a software frame buffer. You have direct access to individual pixels, instead of having to modify a texture and upload it. You can make your shaders render into an FBO. The link above gives an overview of the procedure.
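For instance, reading an individual pixel back from a bound FBO is a single call (fbo, x and y are illustrative):

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    GLubyte pixel[4];
    glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);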