I have a strange bug in OpenGL that I cannot explain.
I have "programmed" a shadow map by copy-pasting from tutorials.
I wanted to display the shadow map, so I wrote a little routine that displays it. It doesn't work (I only get a white square).
After a lot of trial and error, I finally got the shadow to be almost correct in the main rendering of the scene. But the shadow no longer renders when I uncomment the routine that displays the shadow map (which runs before the main rendering).
I guess it has to do with glBindTexture(GL_TEXTURE_2D, depthTex) (depthTex being the texture attached to the shadow framebuffer). If the routine is not asked to bind depthTex to display the shadow map, the shadow is somehow built; but when it is asked to display depthTex, the computed shadow is nonsense.
I wonder whether, once glBindTexture(GL_TEXTURE_2D, depthTex) has been called to display the shadow map, depthTex can no longer be bound for the shadow shader's computation.
I don't understand whether glBindTexture is for reading or for writing...
This depthTex is indeed defined as the texture in which the shadow map is stored.
To sum up: two pieces of code interfere, so that rendering a texture appears to be more complicated than simply issuing the regular OpenGL commands to display a texture. It is as if glBindTexture(GL_TEXTURE_2D, depthTex) could be called only once.
If the shadow-map display routine is asked to display another texture from the program (such as "floor"), the shadow is fine in the main rendered scene; but when I ask it to display depthTex, the shadow no longer works.
I have a strange bug in OpenGL that I cannot explain. I have "programmed" a shadow map by copy-pasting from tutorials.
In short: You're cargo culting. Read some good OpenGL tutorial(s), understand what they teach and then you'll make some progress.
I don't understand whether glBindTexture is for reading or for writing... This depthTex is indeed defined as the texture in which the shadow map is stored.
It's for both. It selects which texture is bound to the active texture unit. When texturing is enabled, or a shader that samples from that texture unit is bound, the bound texture is read. Calling one of the glTex…Image functions will write to it.
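A minimal sketch of both directions, assuming depthTex already exists and a 1024×1024 depth format (both assumptions); the same glBindTexture call is used in each case, and only the operation that follows determines whether the texture is written or read:

/* Writing: bind, then (re)allocate or upload texel data. */
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

/* Reading: bind the same texture to the unit the shader samples from. */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTex);
/* ...subsequent draw calls sample depthTex through unit 0... */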
I have a question about 3D rendering.
Deferred rendering is very powerful, but it is notorious for not playing nicely with MSAA.
I clearly see why, but I suddenly came up with an idea to solve that.
It's simple: do the deferred rendering completely and get the screen image into a texture. This texture (attached to a framebuffer or whatever) is of course not antialiased.
Here comes the further processing: next, draw the full scene again, but this time the fragment shader looks up the exact same position in the pre-rendered texture using texelFetch() and outputs that. Done.
It's silly, but I think it might work. If we draw the geometry again with the deferred-rendered result as the output color, it means we re-render the scene with real geometry.
So we can now provide super-sampled depth information, and the GPU will be able to perform MSAA with aliased color but super-sampled depth/geometry. (It's similar to picking up only the 'center' of a fragment and evaluating that, as in the ordinary MSAA process.)
I'm not sure whether this description makes sense. I tested it in OpenGL, but it makes no difference compared with plain deferred rendering.
Does my idea work?
No, your idea does not work.
If you did not render the initial image with multisampling, reading from it later while doing multisampling will not magically create information that doesn't exist in that image.
In your method, every sample which corresponds to a particular pixel in the multisampled rendering will have the same color value. So if two primitives overlap in a pixel, writing to different samples, it won't matter, since both primitives will be generating the same color. All you would be doing is generating multiple different depth values within a pixel, and that doesn't actually contribute to an antialiased output (directly).
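To make that failure mode concrete, here is a minimal sketch of the second pass as described in the question (the sampler name is an assumption): every fragment fetches one texel at its pixel center, so all samples the fragment covers receive the same color, and edges stay aliased.

#version 330 core
// Hypothetical second pass: re-draw the geometry into a multisampled
// target while copying color from the non-multisampled deferred result.
uniform sampler2D deferredResult;   // assumed name of the first pass's output
out vec4 fragColor;

void main() {
    // gl_FragCoord is the pixel center; every covered sample of this
    // pixel gets this single fetched color.
    fragColor = texelFetch(deferredResult, ivec2(gl_FragCoord.xy), 0);
}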
I started learning OpenGL a few days ago and I have some difficulty understanding one thing. I followed this tutorial: https://www.youtube.com/playlist?list=PLEETnX-uPtBXT9T-hD0Bj31DSnwio-ywh up to the fifth part, and it works perfectly, but when I tried to make a separate second triangle with its own texture, the two triangles ended up with the same texture. I don't understand how to bind a texture to an object: does this program bind one texture for every object in the scene, or did I just not understand how to do it properly?
Here is my source: https://github.com/deiandrei/blackunity_opengl_alpha
Have a good day!
What are these "objects" you're talking about? OpenGL doesn't know what a "object" is. OpenGL just knows points, lines and triangles and all it bothers is drawing one after another with the state currently enabled. Once something has been drawn, OpenGL already forgot about it.
So the typical OpenGL program drawing structure roughly looks like this:
glBindTexture(GL_TEXTURE_2D, texture_A);
draw_triangles(); /* the triangles are drawn using texture_A */
draw_lines(); /* the lines are drawn using texture_A */
glBindTexture(GL_TEXTURE_2D, texture_B);
draw_some_other_triangles(); /* the other triangles are drawn using texture_B */
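Applied to the question's two-triangle case, a minimal sketch of that pattern (the VAO names and the per-triangle setup are assumptions, not the tutorial's actual code):

/* First triangle: bind its texture, then draw. */
glBindTexture(GL_TEXTURE_2D, texture_A);
glBindVertexArray(triangleVAO_A);   /* hypothetical VAO for triangle 1 */
glDrawArrays(GL_TRIANGLES, 0, 3);

/* Second triangle: rebind first, or it inherits texture_A. */
glBindTexture(GL_TEXTURE_2D, texture_B);
glBindVertexArray(triangleVAO_B);   /* hypothetical VAO for triangle 2 */
glDrawArrays(GL_TRIANGLES, 0, 3);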
I'm pulling my hair out and can't find a hint: my app resizes its RTT (render-to-texture) texture when needed via glTexImage2D with the new texture resolution.
When upsizing, it all looks good. When downsizing, it looks as if the texture coordinate [1.0; 1.0] maps to [oldRes.width; oldRes.height]. I'm sure I'm missing something vital but cannot find it right now. Any ideas?
EDIT: oops, that wasn't it either. My state cache simply didn't activate the correct texturing unit on bind when that texture was already bound (this fix also resolved several other problems).
I just found it, and it's almost too simple: apparently (I'm on NVIDIA) the texturing unit the RTT texture is bound to needs to be reinitialized after a resize (not on the initial sizing). Unbinding the texture and rebinding it when needed again did the job.
P.S.: I'm working on a state cache that uses all available texturing units; that's why this popped up: the texture was never unbound, as my examples use far fewer textures than there are units... (so no texture gets unbound unless it's deleted).
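A minimal sketch of that workaround, assuming a GL_RGBA8 color format and a known texturing unit (the variable names and format are assumptions):

/* Reallocate the RTT texture's storage at the new size, then unbind it
   so the next use forces a fresh bind on its texturing unit. */
glActiveTexture(GL_TEXTURE0 + unit);    /* 'unit' is hypothetical */
glBindTexture(GL_TEXTURE_2D, rttTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newWidth, newHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);        /* rebind lazily on next use */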
I've recently succeeded in making a small test app with a GL_TEXTURE_RECTANGLE. Now I'm trying to integrate it into my larger project, but when I call glBindTexture(GL_TEXTURE_RECTANGLE, _tex_id[0]) inside the render function, it causes a GL_INVALID_OPERATION error. The texture image sometimes shows for a fraction of a second, then turns black and stays black.
I am trying to do this by using two sets of vertex and fragment shaders, one set for the 3D scene, and one set for the 2D overlay, but I've never tried this before so I don't know if that's what's causing the error, or if I should be going about this a different way. The shaders are all compiling and linking fine.
Any insight would be much appreciated, and if it would help to see some code, let me know and I'll post some of it (although I think it may be too much for anyone to reasonably look through).
Edit: gDEBugger breaks at the call to glBindTexture(), and when I click on the breakpoint, the properties window shows a picture of one of my other textures (one that's being loaded by the 3D scene's shaders). It shows that it's trying to load texture number 1, but I know this number is already being used to draw the 3D scene's texture shown in the properties window... Why would glGenTextures() be giving me overlapping texture ID numbers? Is this normal, or maybe part of the problem?
The black texture was due to me not forwarding some vertex shader inputs (normals) through to the fragment shader, even though I'm not using normals for anything in the 2D overlay shaders. As soon as I added outputs for all the inputs and forwarded them along to the fragment shader, the texture was no longer black, but it was still disappearing after a fraction of a second.
That was because I was calling glBindTexture(GL_TEXTURE_RECTANGLE, 0) at the end of the render function in the hope that it would clean up some state. This was clearly the wrong thing to do, because removing that call made the 2D texture stay on-screen.
Furthermore, calling glBindTexture() with the GL_TEXTURE_RECTANGLE target seems to work during the texture setup stage, but during rendering the GL_TEXTURE_RECTANGLE target was causing the GL_INVALID_OPERATION error. Changing the target to GL_TEXTURE_2D only in the render function made the error go away, and everything seems to work nicely now.
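For reference, the first fix amounts to giving every vertex-shader input a matching output, even when the overlay never uses it. A minimal sketch of such a pass-through vertex shader (the attribute names and locations are assumptions, not the poster's actual code):

#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;    // unused by the overlay, but forwarded anyway
layout(location = 2) in vec2 texcoord;

out vec3 vNormal;
out vec2 vTexcoord;

void main() {
    vNormal = normal;       // forwarded so every input has an output
    vTexcoord = texcoord;
    gl_Position = vec4(position.xy, 0.0, 1.0);   // 2D overlay placement
}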
Let's say I have an application (the details of the application should be irrelevant to solving the problem). Instead of rendering to the screen, I am somehow able to force the application to render to a framebuffer object (by messing with GLEW or intercepting a call in a DLL).
Once the application has rendered its content to the FBO, is it possible to apply a shader to the contents of the framebuffer? My knowledge is limited here; from what I understand, at this stage all information about vertices is no longer available and all the necessary tests have been applied, so what's left in the buffer is just pixel data. Is this correct?
If it is possible to apply a shader to the FBO, is it possible to get a fisheye effect? (Like this, for example: http://idea.hosting.lv/a/gfx/quakeshots.html)
The technique used in the link above is to create six different viewports, render each viewport to a cubemap face, and then apply the texture to a mesh.
Thanks
A framebuffer object encapsulates several other buffers, specifically those that are implicitly indexed by fragment location. So a single framebuffer object may bundle together a colour buffer, a depth buffer, a stencil buffer and a bunch of others. The individual buffers are known as renderbuffers.
You're right: there's no geometry in there. For the purposes of reading back the scene, you get only final fragment values, which, if you're hijacking an existing app, will probably be a 2D pixel image of the frame plus some other buffers that you don't care about.
If your GPU has render-to-texture support (originally an extension circa OpenGL 1.3, but you'd be hard-pressed to find a GPU without it nowadays, even in mobile phones), then you can attach a texture as a render target within a framebuffer object. The rendering code is then exactly as it would be normally, but it ends up writing the results to a texture that you can then use as a source for drawing.
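A minimal sketch of that setup, assuming an RGBA8 color texture plus a depth renderbuffer of size width × height (the names and formats are assumptions):

GLuint fbo, colorTex, depthRb;

/* Texture that will receive the rendered frame. */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* Depth renderbuffer so depth testing still works. */
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

/* Bundle both into a framebuffer object. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

/* While fbo is bound, ordinary draw calls render into colorTex. */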
Fragment shaders can programmatically decide which location of a texture map to sample in order to create their output. So you can write a fragment shader that applies a fisheye lens, though you're restricted to the field of view rendered in the original texture, obviously. Which would probably be what you'd get in your Quake example if you had just one of the sides of the cube available rather than six.
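A toy fragment-shader sketch of that idea, drawn over a full-screen quad (the sampler name, the distortion constant, and the simple radial remap are all assumptions; a real fisheye warp would be tuned differently):

#version 330 core
// Hypothetical fisheye post-pass over the captured frame.
uniform sampler2D scene;   // assumed name of the FBO's color texture
in vec2 vUV;               // full-screen quad coordinates in [0,1]
out vec4 fragColor;

void main() {
    vec2 p = vUV * 2.0 - 1.0;           // center the coordinates
    float r = length(p);
    float k = 0.5;                      // assumed distortion strength
    vec2 q = p * (1.0 + k * r * r);     // push samples outward from the center
    vec2 uv = q * 0.5 + 0.5;
    // Outside the rendered field of view there is nothing to sample.
    fragColor = (uv == clamp(uv, 0.0, 1.0))
        ? texture(scene, uv) : vec4(0.0);
}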
In summary: the answer is 'yes' to all of your questions. There's a brief introduction to framebuffer objects here.
Look here for some relevant info:
http://www.opengl.org/wiki/Framebuffer_Object
The short, simple explanation is that an FBO is the 3D-rendering equivalent of a software framebuffer: you render directly into its pixels instead of having to modify a texture and upload it. You can then point shaders at an FBO's contents. The link above gives an overview of the procedure.