glViewport, window sizes and clipping - opengl

I am trying to understand the relationship between the screen and the logic OpenGL uses to decide if a primitive should be rendered, i.e. is it onscreen or not.
For example, suppose you set the viewport larger than the screen (no reason to do this, but for example's sake). OpenGL doesn't "know" the screen size, so it will "draw" points off the screen so long as the orthographic projection places them within the viewport, correct?
But also, if I define a vertex position to be outside the viewport as determined by the projection, does OpenGL include it in rendering?
glViewport(0,0,100,100);
ApplyOrtho(50,50); // custom ES 2.0 utility to apply 2D orthographic projection
Now a vertex of position (75,75) would not get rendered by OpenGL, right?

the logic OpenGL uses to decide if a primitive should be rendered
There is only one piece of logic that OpenGL uses to decide if a primitive should be rendered. If the primitive hasn't been clipped/culled, then the only thing that will stop it from being rasterized is if the user has disabled all primitive rasterization by enabling GL_RASTERIZER_DISCARD (via glEnable(GL_RASTERIZER_DISCARD)). Otherwise, the OpenGL specification defines that all primitives that were not culled as part of clipping will be rasterized.
Now, whether they will produce any visible effect is a different question. And since primitives that are off-screen can't produce visible effects (unless you're using image load/store), a conforming OpenGL implementation is free to cull such triangles if it wants. But more likely, it will rasterize them and simply check to see if the fragment falls outside of the window. If it does, it will just discard those fragments.
In general, this is not something you should be concerned about. Just set a reasonable viewport and you'll be fine.
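To make the clipping in the question concrete, here is a minimal sketch, assuming the asker's ApplyOrtho(w, h) maps x in [-w, w] and y in [-h, h] to NDC [-1, 1] (is_clipped is an illustrative helper, not a real API):
#include <math.h>
#include <stdio.h>

/* Returns nonzero if a 2D point ends up outside the NDC box after the
 * assumed orthographic projection, i.e. it will be clipped no matter
 * how large the viewport is. */
int is_clipped(float x, float y, float ortho_w, float ortho_h)
{
    float ndc_x = x / ortho_w;   /* (75, 75) with ApplyOrtho(50, 50) */
    float ndc_y = y / ortho_h;   /* lands at NDC (1.5, 1.5)          */
    return fabsf(ndc_x) > 1.0f || fabsf(ndc_y) > 1.0f;
}

int main(void)
{
    printf("%s\n", is_clipped(75, 75, 50, 50) ? "clipped" : "rasterized");
    return 0;
}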

Related

Is it possible to separate normalized device coordinates and window clipping in openGL (glViewport)

Is there a way to set a transformation for NDC to window, but separately specify the clipping region so it matches the actual window size?
Background: I have a bunch of openGL code that renders a 2D map to a window. It's a lot of complex code, because I use both the GPU and the CPU to draw on the map, so it's important that I keep to a consistent coordinate system in both places. To keep that simple, I use glViewport(0,0,mapSizeX, mapSizeY), and now map coordinates correspond well to pixel coordinates in the frame buffer, exactly what I need. I can use GLSL to draw some of the map, call glReadPixels and use the CPU to draw on top of that, and glDrawPixels to send that back to the frame buffer, all of that using the same coordinate system. Finally I use GLSL to draw a few final things over that (that I don't want zoomed). That all works, except...
The window isn't the same size as the map, and glViewport doesn't just set up the transformation. It also sets up clipping. So now when I go to draw a few last items, and the window is larger than the map, things I draw near the top of the screen get clipped away. Is there a workaround?
glViewport doesn't just set up the transformation. It also sets up clipping.
No, it just sets up the transformation. By the time the NDC-to-window space transform happens, clipping has already been done. That happened immediately after vertex processing; your vertex shader (or whatever you're doing to transform vertices) handled that based on how it transformed vertices into clip-space.
You should use the viewport to set up how you want the NDC box to visibly appear in the window. Your VS needs to handle the transformation into the clipping area. So it effectively decides how much of the world gets put into the NDC box that things get clipped to.
Basically, you have map space (the coordinates used by your map) and clip-space (the coordinates after vertex transformations). And you have some notion of which part of the map you want to actually draw to the window. You need to transform the region of your map that you want to see such that the corners of this region appear in the corners of the clipping box (for orthographic projections, this is typically [-1, 1]).
In compatibility OpenGL, this transformation might be defined by using glOrtho for orthographic projections. In a proper vertex shader, you'll need to provide an appropriate orthographic matrix.
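A minimal sketch of providing such a matrix yourself, assuming a shader-based pipeline (make_ortho, mapSizeX/mapSizeY and projLoc are illustrative names, not part of any real API):
#include <string.h>

/* Build the same column-major matrix glOrtho would, suitable for
 * glUniformMatrix4fv. This matrix decides what gets clipped; the
 * viewport then only decides where the NDC box lands in the window. */
static void make_ortho(float m[16], float l, float r, float b, float t,
                       float n, float f)
{
    memset(m, 0, 16 * sizeof(float));
    m[0]  =  2.0f / (r - l);
    m[5]  =  2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] =  1.0f;
}

/* Usage: show the whole map regardless of the window size.
 *   float proj[16];
 *   make_ortho(proj, 0, mapSizeX, 0, mapSizeY, -1, 1);
 *   glUniformMatrix4fv(projLoc, 1, GL_FALSE, proj);
 */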

Do you have to call glViewport every time you bind a frame buffer with a different resolution?

I have a program with about 3 framebuffers of varying sizes. I initialise them at the start, give them the appropriate render target and change the viewport size for each one.
I originally thought that you only had to call glViewport when you initialise the framebuffer; however, this creates problems in my program, so I assume that's wrong? Because they all differ in resolution, right now when I render each frame I bind the first framebuffer, change the viewport size to fit it, bind the second framebuffer, change the viewport to its resolution, bind the third framebuffer, change the viewport to fit it, then bind the window framebuffer and change the viewport to the resolution of the window.
Is this necessary, or is something else in the program to blame? This is done every frame, so I'm worried it would have a slight unnecessary overhead if I don't have to do it.
You always need to call glViewport() before starting to draw to a framebuffer with a different size. This is necessary because the viewport is not part of the framebuffer state.
Look at, for example, the OpenGL 3.3 spec. Section 6.2, titled "State Tables" and starting on page 278, contains tables covering the entire state, showing the scope of each piece of state:
Table 6.23 on page 299 lists "state per framebuffer object". The only state listed are the draw buffers and the read buffer. If the viewport were part of the framebuffer state, it would be listed here.
The viewport is listed in table 6.8 "transformation state". This is global state, and not associated with any object.
OpenGL 4.1 introduces multiple viewports. But they are still part of the global transformation state.
If you wonder why it is like this, the only real answer is that it was defined this way. Looking at the graphics pipeline, it does make sense. While the glViewport() call makes it look like you're specifying a rectangle within the framebuffer that you want to render to, the call does in fact define a transformation that is applied as part of the fixed function block between vertex shader (or geometry shader, if you have one) and fragment shader. The viewport settings determine how NDC (Normalized Device Coordinates) are mapped to Window Coordinates.
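For concreteness, a small sketch of the mapping glViewport(x, y, w, h) establishes (the depth range is left out, and the function is illustrative):
/* x_window = (x_ndc + 1) * w / 2 + x
 * y_window = (y_ndc + 1) * h / 2 + y */
void ndc_to_window(float ndc_x, float ndc_y,
                   int x, int y, int w, int h,
                   float *win_x, float *win_y)
{
    *win_x = (ndc_x + 1.0f) * 0.5f * (float)w + (float)x;
    *win_y = (ndc_y + 1.0f) * 0.5f * (float)h + (float)y;
}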
The framebuffer state, on the other hand, determines how the fragment shader output is written to the framebuffer. So it controls an entirely different part of the pipeline.
From the way viewports are normally used by applications, I think it would have made more sense to make the viewport part of the framebuffer state. But OpenGL is really an API intended as an abstraction of the graphics hardware, and from that point of view, the viewport is independent of the framebuffer state.
I originally thought that you only had to call glViewport when you initialise the framebuffer, however this creates problems in my program so I assume that's wrong?
Yes, it is a wrong assumption (probably fed by countless bad tutorials which misplace glViewport).
glViewport always belongs in the drawing code. You always call glViewport with the right parameters just before you're about to draw something into a framebuffer. The parameters set by glViewport are used in the transformation pipeline, so you should think of glViewport as a command similar to glTranslate (in the fixed-function pipeline) or glUniform.
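A sketch of the resulting per-frame draw loop, assuming fbo1/fbo2/fbo3 and their sizes were created during initialisation (all names are placeholders):
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glViewport(0, 0, w1, h1);            /* must match fbo1's attachment size */
/* ... draw pass 1 ... */

glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glViewport(0, 0, w2, h2);
/* ... draw pass 2 ... */

glBindFramebuffer(GL_FRAMEBUFFER, fbo3);
glViewport(0, 0, w3, h3);
/* ... draw pass 3 ... */

glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* default (window) framebuffer */
glViewport(0, 0, windowW, windowH);
/* ... final draw ... */
The glViewport calls themselves only update a few values of global pipeline state, so their per-frame overhead should be negligible.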

OpenGL: Specify what value gets written to the depth buffer?

As I understand the depth buffer, it calculates a fragment's relation to the far/near clipping planes, and deduces the depth value from that before writing it. However, this isn't what I want as I don't utilize the clipping planes, or the 3rd dimension at all. However, depth testing would still be immensely helpful to me.
My question is: is there any way to specify what value gets written to the depth buffer manually, for all geometry rendered after you set it (that passes the alpha test), regardless of its true depth in a scene? The stencil buffer works this way, with the value specified as the second argument of glStencilFunc(), so I thought glDepthFunc() might have behaved similarly, but I was mistaken.
The main reason I need depth testing in a 2D game, is because my lighting model uses stencils a great deal. Objects closer to the camera than the light must be rendered first, for shadow stencils to be properly laid out, with the lights drawn after that. It's a pretty tricky draw order, but basically it just means lights have to be drawn after the scene is finished drawing, is all.
The OpenGL version I'm using is 2.0, though I'm trying to avoid using a fragment shader if possible.
It seems you are talking about a technique called parallax scrolling. You don't need to write to the depth buffer manually; just enable depth testing, use a layer approach, and specify the Z manually for each object. Then render the scene front to back (sorted).
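A minimal GL 2.0 fixed-function sketch of that layer approach, with no fragment shader involved (the coordinates and Z values are arbitrary examples):
/* Set up a pixel-aligned 2D projection; eye space looks down -z, so
 * z = -0.1 is closer to the viewer than z = -0.9. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 100, 0, 100, 0, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

/* Front to back: the closer layer is drawn first, so background
 * fragments behind it fail the depth test and are skipped. */
glBegin(GL_QUADS);                    /* foreground layer */
    glVertex3f(10.0f, 10.0f, -0.1f);
    glVertex3f(90.0f, 10.0f, -0.1f);
    glVertex3f(90.0f, 90.0f, -0.1f);
    glVertex3f(10.0f, 90.0f, -0.1f);
glEnd();

glBegin(GL_QUADS);                    /* background layer */
    glVertex3f(0.0f,   0.0f,   -0.9f);
    glVertex3f(100.0f, 0.0f,   -0.9f);
    glVertex3f(100.0f, 100.0f, -0.9f);
    glVertex3f(0.0f,   100.0f, -0.9f);
glEnd();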

Multi-pass shading using render-to-texture

I'm trying to implement a multi-pass rendering method using OpenSceneGraph. However, I'm not entirely certain whether my problem is theoretical or due to a lack of applied knowledge of OSG. Thus far, I've successfully implemented multi-pass shading by rendering to a texture using an orthogonal projection, but I cannot seem to make a perspective projection work.
It may be that I don't quite understand how to implement multi-pass shading. Of course, I have to pre-render the entire scene with the multi-pass shaders to a texture, then use the texture in the final render. However, I'm not talking about creating a separate texture for each object in the scene, but effectively capturing a screenshot of the entire prerendered scene. Then, from that texture alone, applying the rendered effects to the individual geometries.
I assume this means I would have to do an extra conversion of the vertex coordinates for each geometry in the vertex shader. That is, after computing:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
I would need to go a step further and calculate the vertex's screen coordinates in order to map the vertices correctly (again, given that the texture consists of an entire screen shot of the scene).
If I am correct, then I must be able to pre-render the scene in a perspective view identical to the view used in the final render, rather than an orthogonal view. This is where I have troubles. I can make an orthogonal view do what I want, but not the perspective view.
Am I correct in my approach? The only other approach I can imagine is to render everything to a screen-filling quad (in effect, the same thing as converting to screen coordinates), but that doesn't alleviate the need to use a perspective projection in the pre-render stage.
Thoughts? Links??
edit: I should also point out that in my successful attempts, I used a fragment shader only. The perspective projection worked, but, of course, the screen aligned quad I was using was offset rather than centered. I added a pass-through vertex shader and everything went blank.
As it turns out, my approach was correct. It's especially nice as it avoids having to add another camera to my scene graph to render the final output - I can simply use the main camera. Unfortunately, it means that all of my output textures are rendered at the screen resolution, rather than a resolution appropriate to the size of the object. That is, if my screen is 1024 x 1024, then so is the output texture, one for each pre-render camera in the graph. Not exactly efficient, but it'll do for now.
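For reference, the screen-coordinate step described in the question boils down to the following (a sketch in plain C; the same two steps port verbatim to the vertex shader, and the function name is illustrative):
/* Given a clip-space position (as produced by
 * gl_ModelViewProjectionMatrix * gl_Vertex), compute the [0, 1]
 * texture coordinate at which to sample the pre-rendered,
 * screen-sized texture. */
void clip_to_screen_texcoord(const float clip[4], float texcoord[2])
{
    float ndc_x = clip[0] / clip[3];     /* perspective divide -> [-1, 1] */
    float ndc_y = clip[1] / clip[3];
    texcoord[0] = ndc_x * 0.5f + 0.5f;   /* remap [-1, 1] -> [0, 1] */
    texcoord[1] = ndc_y * 0.5f + 0.5f;
}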

Why is there a glMatrixMode in OpenGL?

I just don't understand what OpenGL's glMatrixMode is for.
As far as I can see, when glMatrixMode(GL_MODELVIEW) is called, it
is followed by glVertex, glTranslate, glRotate and the like,
that is, OpenGL commands that place some objects somewhere in
the space. On the other hand, if glOrtho or glFrustum or gluPerspective
is called (i.e. how the placed objects are rendered), it has a preceding call of glMatrixMode(GL_PROJECTION).
I guess what I have written so far is an assumption someone will prove me wrong about, but isn't the point of using different matrix modes exactly that there are different kinds of gl functions: those concerned with placing objects and those concerned with how the objects are rendered?
This is simple and can be answered very briefly:
Rendering vertices (as in glVertex) depends on the current state of two matrices, called the "model-view matrix" and the "projection matrix";
The commands glTranslatef, glPushMatrix, glLoadIdentity, glLoadMatrix, glOrtho, gluPerspective and the whole family affect the current matrix (which is either of the above);
The command glMatrixMode selects the matrix (model-view or projection) which is affected by the aforementioned commands.
(There's also the texture matrix, used for texture coordinates, but it's seldom used.)
So the common use case is:
Have the model-view matrix active most of the time;
Whenever you have to initialize the projection matrix (usually at the beginning, or when the window is resized), switch the active matrix to projection, set up a perspective, and revert back to model-view (see the sketch below).
You can use glRotate and glTranslate for projection matrices as well.
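A minimal fixed-function sketch of that common pattern, e.g. in a window-resize handler (the handler name and the gluPerspective parameters are arbitrary):
#include <GL/glu.h>

void on_resize(int width, int height)
{
    glViewport(0, 0, width, height);

    glMatrixMode(GL_PROJECTION);   /* subsequent matrix calls now edit
                                      the projection matrix */
    glLoadIdentity();
    gluPerspective(60.0, (double)width / (double)height, 0.1, 100.0);

    glMatrixMode(GL_MODELVIEW);    /* revert: model-view stays the active
                                      matrix for the drawing code */
    glLoadIdentity();
}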
Also: OpenGL supports transforms of textures and colors. If you activate this feature you can, for example, modify the texture coordinates of an object without rewriting the texture coordinates each frame (slow).
This is a very useful feature if you want to scroll a texture across an object. All you have to do for this is set the matrix mode to GL_TEXTURE, call glTranslate to set the offset into the texture, and then draw the textured object.
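A short sketch of that texture-scrolling trick (offset is a placeholder for your per-frame scroll amount):
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(offset, 0.0f, 0.0f);  /* shifts every texture coordinate */
glMatrixMode(GL_MODELVIEW);        /* back to the usual matrix */
/* ... now draw the textured object; its texcoords are offset ... */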
As Nils pointed out, you do have more to matrices than just what you mentioned.
I'll add a couple thoughts:
OpenGL core (from 3.1 onwards) does away with all the matrix stuff completely, as does GL ES 2.0. This is simply due to the fact that shader programs removed much of the requirement of having them exposed at the GL level (it's still a convenience, though). You then only have uniforms, and you have to compute their values completely on the client side.
There are more matrix manipulation entrypoints than the ones you mention. Some of them apply equally well to projection/modelview (glLoadIdentity/glLoadMatrix/glMultMatrix, glPushMatrix/glPopMatrix). They are very useful if you want to perform the matrix computation yourself (say, because you need the matrices somewhere else in your application).
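In that core-profile world, a minimal sketch of handing a client-side matrix to a shader (program and the uniform name u_ModelViewProjection are assumptions, not a fixed convention):
/* No matrix stack: compute the matrix yourself (identity here as a
 * placeholder) and upload it as a uniform. */
GLfloat mvp[16] = {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1,
};
GLint loc = glGetUniformLocation(program, "u_ModelViewProjection");
glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);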
All geometry coordinates undergo several linear transformations in sequence. While any linear transformation can be expressed by a single matrix, often you want to think of a sequence of transformations and edit the sequence, and if you have only a single matrix you could only change the ends of that sequence. By providing several transformation steps, OpenGL gives you several places in the middle where you can change the transformation as well.
Calling glMatrixMode before emitting geometry has no effect at all. You call glMatrixMode before editing the transform matrix, to determine where in the overall sequence those edits appear.
(NB: Looking at the sequence makes a lot more sense if you remember that translation and rotation are not commutative, because translation changes the center of rotation. Similarly translation and scaling are not commutative.)
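A tiny fixed-function illustration of that non-commutativity (drawing code omitted; remember GL post-multiplies, so the transform written last is applied to vertices first):
/* M = T * R: vertices are rotated about the object's own origin first,
 * then moved; the object ends up at (2, 0, 0), rotated 90 degrees. */
glLoadIdentity();
glTranslatef(2.0f, 0.0f, 0.0f);
glRotatef(90.0f, 0.0f, 0.0f, 1.0f);

/* M = R * T: vertices are moved first, then the result is rotated
 * about the world origin; the object ends up near (0, 2, 0). */
glLoadIdentity();
glRotatef(90.0f, 0.0f, 0.0f, 1.0f);
glTranslatef(2.0f, 0.0f, 0.0f);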