What OpenGL functions modify vertex positions prior to the vertex shader?

Let's say there's a color render texture that is 1000 px wide, 500 px tall. And I draw a quad with vertices at the four corners (-1, -1, 0), (1, -1, 0), (-1, 1, 0), (1, 1, 0) to it without any transformation in the vertex shader.
Will this always cover the entire texture's surface by default, assuming no other GL functions were called prior to this sole draw command?
What OpenGL functions (that modify vertex positions) could cause this quad to no longer fill the screen?
(I'm trying to understand how vertices can be messed with prior to the vertex shader, so I can avoid the wrong functions, or use the right ones, to guarantee that NDC (-1, -1) to (1, 1) always maps to the entire surface.)
Edit: if the positions are not altered, then I'm also wondering how their mapping to a render buffer might be modified prior to the vertex shader. For instance, will (-1, -1, 0) reliably refer to a fragment at the bottom-left of the render buffer, (0, 0, 0) to the middle, and (1, 1, 0) to the top-right?

Nothing happens to vertex data "prior to the vertex shader". Nothing can happen to it, because OpenGL doesn't know what the vertex attributes mean. It doesn't know what attribute 2 refers to; it doesn't know which attribute is a position, a normal, a texture coordinate, or anything else. As far as OpenGL is concerned, it's all just data. What gives that data meaning is your vertex shader, and only in the way defined by your vertex shader.
Data from buffer objects is read in accordance with the format specified by your VAO and handed to the vertex shader invocations that process those vertices.
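To make that concrete, here is a minimal sketch, assuming a current GL context and a compiled program using the shader below (the function name and handles are made up for illustration). The buffer holds exactly the four corners from the question; they only become positions because the vertex shader uses attribute 0 that way. The one piece of state that then decides how NDC maps onto the 1000x500 target is the viewport, which defaults to the window size, not the FBO size.

// Pass-through vertex shader: attribute 0 becomes a "position" only here.
const char* vsSource = R"(
    #version 330 core
    layout(location = 0) in vec3 aPos;
    void main() {
        gl_Position = vec4(aPos, 1.0); // already NDC; no transformation
    }
)";

void setupFullTargetQuad() {
    // The four corners from the question, in NDC (triangle-strip order).
    static const float corners[] = {
        -1.f, -1.f, 0.f,   1.f, -1.f, 0.f,
        -1.f,  1.f, 0.f,   1.f,  1.f, 0.f,
    };
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(corners), corners, GL_STATIC_DRAW);
    // This describes only the memory layout of attribute 0 -- not its meaning.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(0);
    // After the vertex shader, NDC (-1,-1)..(1,1) is mapped onto whatever
    // rectangle the viewport specifies; to cover the whole texture:
    glViewport(0, 0, 1000, 500);
}

With that viewport set, (-1, -1) lands at the bottom-left of the render target and (1, 1) at the top-right, exactly as the edit to the question hopes.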

Related

Determine distance from each vertex in glsl fragment shader

Say I have a simple OpenGL triangle like this:
glBegin(GL_TRIANGLES);
// vertex 1: red
glColor3f(1, 0, 0);
glVertex3f(0.5, 0, 0);
// vertex 2: green
glColor3f(0, 1, 0);
glVertex3f(0, 1, 0);
// vertex 3: blue
glColor3f(0, 0, 1);
glVertex3f(1, 1, 0);
glEnd();
In a GLSL fragment shader I can use the interpolated fragment color to determine my distance from each vertex. In this example the red component of the color determines distance from the first vertex, green determines the distance from the second, and blue from the third.
Is there a way I can determine these distances in the shader without passing vertex data such as texture coordinates or colors?
Not in standard OpenGL. There are two vendor-specific extensions:
AMD_shader_explicit_vertex_parameter
NV_fragment_shader_barycentric
which will give you access to the barycentric coordinates within the primitive. But without such extensions, there are only very clumsy ways to get this data to the FS, and each will have significant drawbacks. Here are some ideas:
You could use per-vertex attributes as you already suggested, but in real meshes, it will require a lot of additional vertex splitting to get the values right.
You could use geometry shaders to generate those attribute values on the fly (see the sketch after this list), but that will come with a huge performance hit, as geometry shaders really don't perform well.
You could make your vertex data available to the FS (for example via an SSBO) and basically calculate the barycentric coordinates based on gl_FragCoord and the relevant endpoints. But this requires you to get information on which vertices were used to the FS, which might require extra data structures (i.e. some triangle- and/or vertex-indices lookup table based on gl_PrimitiveID).
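For instance, a minimal sketch of the geometry-shader idea (the surrounding pipeline setup is omitted, and the performance caveat above still applies): the GS re-emits each triangle with a synthesized barycentric attribute, so no mesh changes are needed.

const char* gsSource = R"(
    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    out vec3 bary; // arrives in the fragment shader already interpolated

    void main() {
        const vec3 corners[3] = vec3[3](vec3(1.0, 0.0, 0.0),
                                        vec3(0.0, 1.0, 0.0),
                                        vec3(0.0, 0.0, 1.0));
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position; // forward position unchanged
            bary = corners[i];                  // one basis corner per vertex
            EmitVertex();
        }
        EndPrimitive();
    }
)";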

GLSL get relative normalized coordinates of the fragment

In a fragment shader, without a texture associated with it, how do I get normalized coordinates that start, for instance, at (0, 0) in the top-left corner of my geometry and end at (1, 1) in the bottom-right corner? This should be independent of the geometry I'm using.

How exactly does indexing work?

From my understanding, indexing or IBOs in OpenGL are mainly used to reduce the number of vertices needed to draw a given geometry. I understand that with an index buffer, OpenGL only draws the vertices with the given indexes and skips any other vertices. But doesn't that eliminate the possibility of texturing? As far as I am aware, if you skip vertices with index buffers, it also skips their vertex attributes. If I have my vertex attributes set like this:
attribute vec4 v_Position;
attribute vec2 v_TexCoord;
and then use an index buffer and glDrawElements(...), won't that eliminate the use of texturing, or does v_Position get "reused"? If it doesn't, how can I texture when using an index buffer?
I think you are misunderstanding several key terms.
"Vertex attributes" are the data that defines each individual vertex. While these include texture coordinates, they also include position. In fact, at least if you are not using fixed-function, the meaning of vertex attributes is entirely arbitrary; their meaning is defined by how the vertex shader uses and/or forwards them to following shader stages.
As such, there is no difference between how position, texture coordinates, and any other vertex attribute are forwarded to the vertex shader. They are all parsed exactly the same no matter how indexes are used (or not used).
An example vertex shader:
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvAttr;

out vec2 uv;

void main()
{
    uv = uvAttr;
    gl_Position = position;
}
And the beginning of the fragment shader to which the above is paired:
in vec2 uv;
The output of vertex shaders is, as you can see, based on the vertex attributes. That output is then interpolated across the faces generated by primitive assembly, before sending it to fragment shaders. Primitive assembly is the main place where indexes come into play: indexes determine how the vertex shader output is used to create actual geometry. That geometry is then broken up into fragments, which are what actually affect the rendering output. Outputs from the vertex shader become inputs to the fragment shader.
After the vertex shader, the vertex attributes cease being defined. Only if you forward them, as above, can they be accessed for use in something like texturing. So, you are not even using the vertex attribute itself as a texture coordinate in the first place: you're using a variable output by the vertex shader and interpolated in primitive assembly/rasterization.
"if you skip vertices with index buffers, it also skips their vertex attributes"
Yes - it totally ignores the vertex: texture coordinates, position, and whatever else you have defined for that vertex. But only the skipped vertex. The rest continue to be processed normally as if the skipped vertex never existed.
For example, let us say for the sake of argument I have 5 vertexes, ordered into a bow-tie shape: two triangles meeting at a shared center vertex. Each vertex has a position (a 2-component vector of just x and y) and a single-component "brightness" to be used as a color. The center vertex of the bow tie is only defined once, but referenced via indexes twice.
The vertex attributes are:
[(1, 1), 0.5], aka [(x, y), brightness]
[(1, 5), 0.5]
[(3, 3), 0.0]
[(5, 5), 0.5]
[(5, 1), 0.5]
The indexes are: 1, 2, 3, 4, 5, 3.
Note that in this example, the "brightness" might as well stand in for your UV(W) coordinates. It would be interpolated similarly, just as a vector. As I said before, the meaning of vertex attributes is arbitrary.
Now, since you're asking about skipping vertexes, here is what the output would be if I changed the indexes to 1, 2, 4: a single triangle between those three vertexes, with vertexes 3 and 5 never entering any face.
And with 1, 2, 3 you would get just the left half of the bow tie, with the dark center vertex as one of its corners.
See the pattern here? OpenGL is concerned with the vertexes that make up the faces it generates, nothing else. Indexes merely change how those faces are assembled (and can let it skip calculating unneeded vertexes entirely). They have no impact on the meaning of the vertexes that are used and do go into the faces. If vertex #3 is skipped, it does not contribute to any face, because it is not part of any face.
As an aside, the standard allows implementations to re-use vertex shader output within single draw calls. So, you should expect that using the same index repeatedly will probably not result in additional vertex shader calls. I say "probably not" because what your driver actually does is always going to be voodoo.
Note that in this I have intentionally ignored tessellation and geometry shaders. Those are a topic beyond the scope of this question, but they can have some interesting implications for how vertex attributes are handled. I also ignored the fact that the ordering of vertexes can be accessed to a degree in shaders, and thus might impact output.
An index buffer is used for speed.
With an index buffer, the vertex cache is used to store recently transformed vertices. During transformation, if the vertex pointed to by an index has already been transformed and is available in the vertex cache, it is reused; otherwise, the vertex is transformed. Without an index buffer, the vertex cache cannot be utilized, so vertices always get transformed. That is why it is important to order your indices to maximize vertex cache hits.
An index buffer is also used to reduce the memory footprint.
Data for a single vertex is usually quite large. For example, storing a single-precision floating-point position (x, y, z) requires 12 bytes (assuming each float takes 4 bytes). This requirement grows if you include vertex colors, texture coordinates, or vertex normals.
Take a quad composed of two triangles, with each vertex consisting of position data only (x, y, z). Without an index buffer, you require 6 vertices (72 bytes) to store the quad. With a 16-bit index buffer, you only need 4 vertices (48 bytes) + 6 indices (6 * 2 bytes = 12 bytes) = 60 bytes. The more shared vertices you have, the bigger the saving.
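A minimal sketch of that quad, matching the byte counts above (assuming a current context and a bound VAO; the handle names are illustrative):

static const float positions[] = { // 4 vertices * 12 bytes = 48 bytes
    -1.f, -1.f, 0.f,
     1.f, -1.f, 0.f,
     1.f,  1.f, 0.f,
    -1.f,  1.f, 0.f,
};
static const GLushort indices[] = { // 6 indices * 2 bytes = 12 bytes
    0, 1, 2,   // first triangle
    2, 3, 0,   // second triangle, re-using vertices 0 and 2
};

GLuint vbo, ibo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(0);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

// Shared vertices are transformed once and can be served from the
// post-transform cache when referenced again.
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, nullptr);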

Accessing barycentric coordinates inside fragment shader

In the fragment shader, values are naturally interpolated. For example, suppose I have three vertices, each with a color: red for the first vertex, green for the second, and blue for the third. If I render a triangle with them, the expected result is the familiar smoothly interpolated color triangle.
Obviously, OpenGL calculates the interpolation coefficients (a, b, c) for each point inside the triangle. Is there any way to explicitly access these values or would I need to calculate the fragment coordinates of the three vertices and find the barycentric coordinates of the point myself?
I know this is perfectly feasible, but I thought OpenGL could have provided something.
I'm not aware of any built-in for getting the barycentric coordinates. But you shouldn't need any calculations in the fragment shader.
You can pass the barycentric coordinates of the triangle vertices as attributes into the vertex shader. The attribute values for the 3 vertices are simply (1, 0, 0), (0, 1, 0), and (0, 0, 1). Then pass the attribute value through to the fragment shader (using a varying variable in legacy OpenGL, or an out in the vertex shader matched by an in in the fragment shader in core OpenGL). The value of the variable received by the fragment shader is then the barycentric coordinates of the fragment.
This is very similar to the way you would commonly pass texture coordinates into the vertex shader and then pass them through to the fragment shader, which receives the interpolated values.
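A hedged sketch of that approach in core-profile GLSL (shader sources only; the application side must feed attribute 1 with (1,0,0)/(0,1,0)/(0,0,1) per triangle corner):

const char* vsSource = R"(
    #version 330 core
    layout(location = 0) in vec3 aPos;
    layout(location = 1) in vec3 aBary; // (1,0,0), (0,1,0) or (0,0,1)
    out vec3 bary;
    void main() {
        bary = aBary;                   // the interpolator does the rest
        gl_Position = vec4(aPos, 1.0);
    }
)";

const char* fsSource = R"(
    #version 330 core
    in vec3 bary;      // barycentric coordinates of this fragment
    out vec4 fragColor;
    void main() {
        fragColor = vec4(bary, 1.0); // e.g. visualize them directly
    }
)";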
NV_fragment_shader_barycentric also exposes them directly: with that extension enabled, the fragment shader can read the built-in gl_BaryCoordNV.

OpenGL render-to-texture-via-FBO -- incorrect display vs. normal Texture

Off-screen rendering to a texture-bound framebuffer object should be so trivial, but I'm having a problem I cannot wrap my head around.
My full sample program (2D only for now!) is here:
http://pastebin.com/hSvXzhJT
See below for some descriptions.
I create an RGBA texture object, 512x512, and bind it to an FBO. No depth or other render buffers are needed at this point; this is strictly 2D.
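For reference, a hedged sketch of that setup (the real code is in the pastebin above; the handle names here are illustrative):

GLuint rttFrameTex, rttFbo;
glGenTextures(1, &rttFrameTex);
glBindTexture(GL_TEXTURE_2D, rttFrameTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr); // empty 512x512 RGBA

glGenFramebuffers(1, &rttFbo);
glBindFramebuffer(GL_FRAMEBUFFER, rttFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, rttFrameTex, 0);
// Strictly 2D: no depth or stencil attachment.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error
}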
The following extremely simple shaders render to this texture:
Vertex shader:
attribute vec2 aPos;
varying vec2 vPos;

void main(void) {
    vPos = (aPos + 1.0) / 2.0;
    gl_Position = vec4(aPos, 0.0, 1.0);
}
Through aPos this just gets a VBO containing 4 xy coords for a quad (-1, -1 :: 1, -1 :: 1, 1 :: -1, 1).
So although the framebuffer resolution should theoretically be 512x512, obviously the shader renders its "texture" onto a "full-(off)screen quad", following GL's -1..1 coords paradigm.
Fragment shader:
varying vec2 vPos;

void main(void) {
    gl_FragColor = vec4(0.25, vPos, 1.0);
}
So it sets a fully opaque color with red fixed at 0.25 and green/blue depending on x/y anywhere between 0 and 1.
At this point my assumption is that a 512x512 texture is rendered showing only the -1..1 full-(off)screen quad, fragment-shaded for green/blue from 0..1.
So this is my off-screen setup. On-screen, I have another real visible full-screen quad with 4 xyz coords { -1, -1, 1 ::: 1, -1, 1 ::: 1, 1, 1 ::: -1, 1, 1 }. Again, for now this is 2D so no matrices and so z is always 1.
This quad is drawn by a different shader, simply rendering a given texture, text-book GL-101 style. In my sample program linked above I have a simple boolean toggle doRtt; when this is false (the default), render-to-texture is not performed at all and this shader simply uses texture.jpg from the current directory.
This doRtt=false mode shows that the second, on-screen quad renderer is "correct" for my current requirements and performs the texturing as I want it to: repeated twice vertically and twice horizontally (later it will be clamped; repeat is just for testing here), otherwise scaled with NO texture filtering or mipmapping.
So no matter how the window (and thus view port) is resized, we always see a full-screen quad with a single texture repeated twice horizontally, twice vertically.
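(For reference, the wrap/filter state just described would look roughly like this, assuming a texture handle tex; note that with no mipmaps uploaded, leaving GL_TEXTURE_MIN_FILTER at its mipmapping default would make a texture incomplete for sampling.)

glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // no filtering,
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // no mipmaps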
Now, with doRtt=true, the second shader still does its job, but the texture is never fully correctly scaled -- or perhaps never fully drawn; I can't be sure, since unfortunately we can't just say "hey GL, save this FBO to disk for debugging purposes".
The RTT shader DOES perform some partial rendering (or maybe a full one -- again, I can't be sure what's happening off-screen...). Especially when you resize the viewport to be much smaller than the default size, you see the breaks between the texture repeats, and not all the colors to be expected from our very simple RTT fragment shader are indeed shown.
(A) either: the 512x512 texture is created correctly but not mapped correctly by my code (but then why does any given texture.jpg, with doRtt=false and the exact same simple textured-quad shader, show just fine?)
(B) or: the 512x512 texture is not rendered correctly, and somehow the RTT fragment shader changes its output depending on the window resolution -- but why? The off-screen quad is always at -1..1 for x and y, the vertex shader always maps this to fragment coords 0..1, and the RTT texture always stays at 512x512 for this simple test!
Note, BOTH the off-screen quad AND the on-screen quad never change their coords and are always "full-screen" (-1..1 in both dimensions).
Again, this should be so simple. What on earth am I missing?
Specs: OpenGL 4.2 (but the code doesn't need any 4.2 features obviously!), Nvidia Quadro 5010M, openSuse 12.1 64bit, Golang Weekly 22-Feb-2012.
First of all, try checking for OpenGL errors: call glGetError() after each OpenGL function. Also, you must set the correct viewport for drawing. Before drawing to the FBO, call glViewport(0, 0, 512, 512). Before drawing to the screen, call glViewport(0, 0, display_width, display_height).
Also, there is no need to bind rttFrameTex while you are rendering to it through the FBO. Binding the texture is needed only when you are reading from it in a shader.
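A sketch of that advice in code (display_width/display_height stand for the current window size; rttFbo stands for the FBO handle from the sample):

// Pass 1: render into the 512x512 texture -- the viewport must match the FBO.
glBindFramebuffer(GL_FRAMEBUFFER, rttFbo);
glViewport(0, 0, 512, 512);
// ... draw the off-screen quad ...

// Pass 2: render to the window -- restore the window-sized viewport.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, display_width, display_height);
glBindTexture(GL_TEXTURE_2D, rttFrameTex); // bind only when sampling it
// ... draw the on-screen textured quad ...

// While debugging, check after each GL call:
GLenum err = glGetError();
if (err != GL_NO_ERROR) { /* log it */ }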