I've worked through a couple of tutorials in the breakout series on learnopengl.com, so I have a very simple 2D renderer. I want to add a subimage feature to it, though, where I can specify a vec4 for a kind of "source rectangle", so if the vec4 was (10, 10, 32, 32), it would only render a rectangle at 10, 10 with a width and height of 32, kind of like how the SDL renderer works.
The way the renderer is set up, there is a single quad VAO that all the sprites use, which contains the texture coordinates. Initially, I thought I could use an array of VAOs, one per sprite, each with different texture coordinates, but I'd like to be able to change the source rectangle before the sprite gets drawn, to make things like animation easier. My second idea was to pass a separate uniform vec4 for the source rectangle into the fragment shader, but how do I only render that section in pixel coordinates?
Use the primitive type GL_TRIANGLE_STRIP or GL_TRIANGLE_FAN to render a quad. Use integral one-dimensional vertex coordinates instead of floating-point vertex coordinates: the vertex coordinate is simply the index of the quad corner. For a GL_TRIANGLE_FAN they are:
vertex 1: 0
vertex 2: 1
vertex 3: 2
vertex 4: 3
Set the rectangle definition (10, 10, 32, 32) in the vertex shader using a uniform variable of type vec4. With this information, you can calculate the vertex coordinates in the vertex shader:
in int cornerIndex;
uniform vec4 rectangle;

void main()
{
    vec2 vertexArray[4] =
        vec2[4](rectangle.xy, rectangle.zy, rectangle.zw, rectangle.xw);
    vec2 vertex = vertexArray[cornerIndex];

    // [...]
}
The Vertex Shader provides the built-in input gl_VertexID, which specifies the index of the vertex currently being processed. This variable could be used instead of cornerIndex in this case. Note that it is not necessary for the vertex shader to have any explicit input.
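As a rough sketch of that attribute-less variant (only the indexing differs from the snippet above; the projection uniform that maps the pixel-space rectangle to clip space is an assumption, not part of the original answer):
#version 330 core

uniform vec4 rectangle;   // two opposite corners: (x0, y0, x1, y1)
uniform mat4 projection;  // assumed: maps pixel coordinates to clip space

void main()
{
    // gl_VertexID runs 0..3 when drawing with glDrawArrays(GL_TRIANGLE_FAN, 0, 4)
    vec2 vertexArray[4] =
        vec2[4](rectangle.xy, rectangle.zy, rectangle.zw, rectangle.xw);
    gl_Position = projection * vec4(vertexArray[gl_VertexID], 0.0, 1.0);
}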
I ended up doing this in the vertex shader. I passed in the vec4 as a uniform to the vertex shader, as well as the size of the image, and used the below calculation:
// convert pixel coordinates to texture coordinates
float widthPixel = 1.0f / u_imageSize.x;
float heightPixel = 1.0f / u_imageSize.y;
float startX = u_sourceRect.x, startY = u_sourceRect.y,
      width  = u_sourceRect.z, height = u_sourceRect.w;
v_texCoords = vec2(widthPixel * startX + width * widthPixel * texPos.x,
                   heightPixel * startY + height * heightPixel * texPos.y);
v_texCoords is a varying that the fragment shader uses to map the texture.
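Put together, the whole vertex shader might look roughly like this (a sketch only; the vertex attribute layout and the u_model/u_projection uniform names are assumptions based on the learnopengl breakout sprite renderer, and the texcoord math is the same calculation as above, just factored differently):
#version 330 core
layout (location = 0) in vec4 vertex; // <vec2 position, vec2 texcoords>

out vec2 v_texCoords;

uniform mat4 u_model;
uniform mat4 u_projection;
uniform vec2 u_imageSize;   // texture size in pixels
uniform vec4 u_sourceRect;  // (x, y, width, height) in pixels

void main()
{
    vec2 texPos = vertex.zw;
    // map the quad's [0,1] texcoords into the source rectangle
    v_texCoords = (u_sourceRect.xy + u_sourceRect.zw * texPos) / u_imageSize;
    gl_Position = u_projection * u_model * vec4(vertex.xy, 0.0, 1.0);
}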
I am rendering 2 triangles to make a square, using a single draw call with GL_TRIANGLE_STRIP.
I calculate position and uv in the shader with:
vec2 uv = vec2(gl_VertexID >> 1, gl_VertexID & 1);
vec2 position = uv * 333.0f;
float offset = 150.0f;
mat4 model = mat4(1.0f);
model[3][1] = offset;
gl_Position = projection * model * vec4(position, 0.0f, 1.0f);
projection is a regular orthographic projection that matches screen size.
In the fragment shader I want to color alternating rows of pixels red and blue:
int v = int(uv.y * 333.0f);
if (v % 2 == 0) {
    color = vec4(1.0f, 0.0f, 0.0f, 1.0f);
} else {
    color = vec4(0.0f, 0.0f, 1.0f, 1.0f);
}
This works fine; however, if I use an offset that gives a subpixel translation:
offset = 150.5f;
The two triangles don't get matching UVs; the red/blue rows no longer line up between the two triangles of the quad.
What am I doing wrong?
Attribute interpolation is done only with finite precision. That means that due to round-off errors, even a difference of 1 ulp (unit in the last place, i.e. the least significant digit) can cause the result to be rounded in the other direction. Since the order of operations in the hardware interpolation unit can differ between the two triangles, the values prior to rounding can be slightly different. OpenGL does not provide any guarantees about this.
For example, you might be getting 1.499999 in the upper triangle and 1.500000 in the lower triangle. Consequently, after you add an offset of 0.5, the conversion to int truncates 1.999999 to 1 but leaves 2.000000 at 2, so the two triangles end up in different stripes.
If pixel-perfect results are important to you, I suggest calculating the uv coordinates manually from gl_FragCoord.xy. For an axis-aligned rectangle, as in your example, this is straightforward to do.
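A minimal sketch of that idea (the uniform name u_rectOrigin and the stripe logic are assumptions carried over from the question, not code from the answer):
#version 330 core

uniform vec2 u_rectOrigin; // assumed: window-space position of the rectangle's lower-left corner

out vec4 color;

void main()
{
    // gl_FragCoord.xy is the window-space pixel center, e.g. (x + 0.5, y + 0.5)
    int v = int(gl_FragCoord.y - u_rectOrigin.y);
    color = (v % 2 == 0) ? vec4(1.0, 0.0, 0.0, 1.0)
                         : vec4(0.0, 0.0, 1.0, 1.0);
}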
I'm trying to calculate texture coordinates based on the coordinates of an incoming vertex in the Vertex Shader. This is a stripped down version of my attempt:
#version 120
varying vec4 color;
uniform sampler2D heightmap;
uniform ivec2 heightmapSize;
void main(void)
{
    vec2 fHeightmapSize = vec2(heightmapSize);
    vec2 pos = gl_Vertex.zx + vec2(0.5f, 0.5f);
    vec2 offset = floor(fHeightmapSize * pos) + vec2(0.5f, 0.5f);

    if (fract(offset.x) > 0.45f && fract(offset.x) < 0.55f
        && fract(offset.y) > 0.45f && fract(offset.y) < 0.55f)
        color = vec4(0.0f, 1.0f, 0.0f, 1.0f);
    else
        color = vec4(1.0f, 0.0f, 0.0f, 1.0f);

    // gl_Position = ...
    // ...
}
gl_Vertex is in [-0.5, 0.5]^2, on the XZ-plane. So what I am basically trying to do, is to
first create a float vec2 from the ivec2 heightmapSize, which holds the width and height of the heightmap sampler.
Then I'm converting the vertex coordinates to the interval [0, 1]^2.
Then I'm calculating the offset in texture coordinates by multiplying the vertex position with the heightmap size. The left part (using floor()) should return the number of texels in each direction.
Example: the texture is of size 4x4 and the position is (0.55, 0.5). This means I would go 2 texels to the right and 2 texels upwards -> (2.0, 2.0).
In the right part, I add another (0.5, 0.5) because I want the center of the texel. The (2.0, 2.0) becomes (2.5, 2.5). Note: whatever the coordinates are, the fractional part should be 0.5 in the end.
Now comes the strange part. I'm testing for the result by specifying two colors. If the fractional part of the offset is "close" to 0.5, I'm setting the resulting color to green, otherwise red. The image is almost all red.
How is it possible that either floor() does not result in a vector of two integer values (or close to integer because of float accuracy), or that adding vec2(0.5, 0.5) has no effect? Am I missing something else?
I'm trying to render to texture with OpenGL + GLSL shaders. For a start, I'm trying to fill every pixel of a 30x30 texture with white color. I'm passing an index from 0 to 899 to the vertex shader, representing each pixel of the texture. Is this correct?
Vertex shader:
flat in int index;
void main(void) {
    gl_Position = vec4((index % 30) / 15 - 1, floor(index / 30) / 15 - 1, 0, 1);
}
Fragment shader:
out vec4 color;
void main(void) {
    color = vec4(1, 1, 1, 1);
}
You are trying to render 900 vertices, with one vertex per pixel? Why are you doing that? What primitive type are you using? It would only make sense if you were using points, but then you would need a slight modification of the output coordinates to actually hit the fragment centers.
The usual way to do this is to render just a quad (easily represented as a triangle strip with just 4 vertices) which fills the whole framebuffer. To achieve this, you just need to set the viewport to the full framebuffer and render a quad from (-1,-1) to (1,1).
Note that with both approaches, you don't need vertex attributes. You could just use gl_VertexID (directly as a replacement for index in your approach, or as an index into a const array of 4 vertex coordinates for the quad).
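For illustration, an attribute-less version of the point-per-pixel idea could look roughly like this (a sketch, not code from the question; note the +0.5 so each point lands on a fragment center of the 30x30 target):
#version 330 core

void main(void) {
    // gl_VertexID runs 0..899 for glDrawArrays(GL_POINTS, 0, 900)
    float x = float(gl_VertexID % 30);
    float y = float(gl_VertexID / 30);
    // map pixel (x, y) of a 30x30 target to NDC, hitting the pixel center
    gl_Position = vec4((x + 0.5) / 15.0 - 1.0, (y + 0.5) / 15.0 - 1.0, 0.0, 1.0);
}
The quad variant would instead index a const vec2 array of the 4 corner coordinates with gl_VertexID and draw a 4-vertex GL_TRIANGLE_STRIP.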
I'm doing ray casting in the fragment shader. I can think of a couple ways to draw a fullscreen quad for this purpose. Either draw a quad in clip space with the projection matrix set to the identity matrix, or use the geometry shader to turn a point into a triangle strip. The former uses immediate mode, deprecated in OpenGL 3.2. The latter I use out of novelty, but it still uses immediate mode to draw a point.
I'm going to argue that the most efficient approach will be drawing a single "full-screen" triangle. For a triangle to cover the full screen, it needs to be bigger than the actual viewport. In NDC (and also in clip space, if we set w=1), the viewport will always be the [-1,1] square. For a triangle to just cover this area completely, two of its sides need to be twice as long as the viewport rectangle, so that the third side crosses the edge of the viewport; hence we can for example use the following coordinates (in counter-clockwise order): (-1,-1), (3,-1), (-1,3).
We also do not need to worry about the texcoords. To get the usual normalized [0,1] range across the visible viewport, we just need to make the corresponding texcoords for the vertices twice as big, and the barycentric interpolation will yield exactly the same results for any viewport pixel as when using a quad.
This approach can of course be combined with attribute-less rendering as suggested in demanze's answer:
out vec2 texcoords; // texcoords are in the normalized [0,1] range for the viewport-filling quad part of the triangle
void main() {
    vec2 vertices[3] = vec2[3](vec2(-1,-1), vec2(3,-1), vec2(-1, 3));
    gl_Position = vec4(vertices[gl_VertexID], 0, 1);
    texcoords = 0.5 * gl_Position.xy + vec2(0.5);
}
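For reference, drawing it could look roughly like this (a sketch; emptyVao and fullscreenProgram are assumed names, and the empty VAO is needed because the core profile requires a VAO to be bound even without attributes):
GLuint emptyVao;
glGenVertexArrays(1, &emptyVao);

glUseProgram(fullscreenProgram);   // the program containing the shader above
glBindVertexArray(emptyVao);       // no attributes needed, gl_VertexID does the work
glDrawArrays(GL_TRIANGLES, 0, 3);  // one oversized triangle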
Why will a single triangle be more efficient?
This is not about the one saved vertex shader invocation and the one triangle less to handle at the front end. The most significant effect of using a single triangle is that there are fewer fragment shader invocations.
Real GPUs always invoke the fragment shader for 2x2 pixel sized blocks ("quads") as soon as a single pixel of the primitive falls into such a block. This is necessary for calculating the window-space derivative functions (those are also implicitly needed for texture sampling, see this question).
If the primitive does not cover all 4 pixels in that block, the remaining fragment shader invocations will do no useful work (apart from providing the data for the derivative calculations) and will be so-called helper invocations (which can even be queried via the gl_HelperInvocation GLSL built-in). See also Fabian "ryg" Giesen's blog article for more details.
If you render a quad with two triangles, both will have one edge going diagonally across the viewport, and on both triangles, you will generate a lot of useless helper invocations at the diagonal edge. The effect will be worst for a perfectly square viewport (aspect ratio 1). If you draw a single triangle, there will be no such diagonal edge (it lies outside of the viewport and won't concern the rasterizer at all), so there will be no additional helper invocations.
Wait a minute, if the triangle extends across the viewport boundaries, won't it get clipped and actually put more work on the GPU?
If you read the textbook materials about graphics pipelines (or even the GL spec), you might get that impression. But real-world GPUs use some different approaches like guard-band clipping. I won't go into detail here (that would be a topic on its own; have a look at Fabian "ryg" Giesen's fine blog article for details), but the general idea is that the rasterizer will produce fragments only for pixels inside the viewport (or scissor rect) anyway, no matter whether the primitive lies completely inside it or not, so we can simply throw bigger triangles at it if both of the following are true:
a) the triangle only extends beyond the 2D top/bottom/left/right clipping planes (as opposed to the near/far planes in the z dimension, which are trickier to handle, especially because vertices may also lie behind the camera)
b) the actual vertex coordinates (and all intermediate calculation results the rasterizer might be doing on them) are representable in the internal data formats the GPU's hardware rasterizer uses. The rasterizer will use fixed-point data types of implementation-specific width, while vertex coords are 32-bit single-precision floats. (That is basically what defines the size of the guard band.)
Our triangle is only a factor of 3 bigger than the viewport, so we can be very sure that there is no need to clip it at all.
But is it worth it?
Well, the savings on fragment shader invocations are real (especially when you have a complex fragment shader), but the overall effect might be barely measurable in a real-world scenario. On the other hand, the approach is not more complicated than using a full-screen quad, and it uses less data, so even if it might not make a huge difference, it won't hurt, so why not use it?
Could this approach be used for all sorts of axis-aligned rectangles, not just fullscreen ones?
In theory, you can combine this with the scissor test to draw some arbitrary axis-aligned rectangle (and the scissor test will be very efficient, as it just limits which fragments are produced in the first place; it isn't a real "test" in hardware that discards fragments). However, this requires you to change the scissor parameters for each rectangle you want to draw, which implies a lot of state changes and limits you to a single rectangle per draw call, so doing so won't be a good idea in most scenarios.
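As a rough illustration of that combination (a sketch; x, y, w, h are assumed to describe the target rectangle in window coordinates):
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, w, h);             // restrict rasterization to the rectangle (names assumed)
glDrawArrays(GL_TRIANGLES, 0, 3);  // same oversized triangle as before
glDisable(GL_SCISSOR_TEST);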
You can send two triangles creating a quad, with their vertex attributes set to -1/1 respectively.
You do not need to multiply them by any matrix in the vertex/fragment shader.
Here are some code samples, simple as it is :)
Vertex Shader:
const vec2 madd = vec2(0.5, 0.5);
attribute vec2 vertexIn;
varying vec2 textureCoord;
void main() {
    textureCoord = vertexIn.xy * madd + madd; // scale vertex attribute to [0-1] range
    gl_Position = vec4(vertexIn.xy, 0.0, 1.0);
}
Fragment Shader :
uniform sampler2D t; // the texture to sample
varying vec2 textureCoord;
void main() {
    vec4 color1 = texture2D(t, textureCoord);
    gl_FragColor = color1;
}
No need to use a geometry shader, a VBO or any memory at all.
A vertex shader can generate the quad.
layout(location = 0) out vec2 uv;

void main()
{
    float x = float(((uint(gl_VertexID) + 2u) / 3u) % 2u);
    float y = float(((uint(gl_VertexID) + 1u) / 3u) % 2u);

    gl_Position = vec4(-1.0f + x * 2.0f, -1.0f + y * 2.0f, 0.0f, 1.0f);
    uv = vec2(x, y);
}
Bind an empty VAO. Send a draw call for 6 vertices.
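In code, roughly (a sketch; emptyVao and program are assumed names):
glUseProgram(program);             // program containing the vertex shader above
glBindVertexArray(emptyVao);       // VAO with no attributes enabled
glDrawArrays(GL_TRIANGLES, 0, 6);  // 6 vertices = 2 triangles forming the quad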
To output a fullscreen quad, a geometry shader can be used:
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
out vec2 texcoord;
void main()
{
    gl_Position = vec4( 1.0, 1.0, 0.5, 1.0 );
    texcoord = vec2( 1.0, 1.0 );
    EmitVertex();

    gl_Position = vec4(-1.0, 1.0, 0.5, 1.0 );
    texcoord = vec2( 0.0, 1.0 );
    EmitVertex();

    gl_Position = vec4( 1.0,-1.0, 0.5, 1.0 );
    texcoord = vec2( 1.0, 0.0 );
    EmitVertex();

    gl_Position = vec4(-1.0,-1.0, 0.5, 1.0 );
    texcoord = vec2( 0.0, 0.0 );
    EmitVertex();

    EndPrimitive();
}
The vertex shader is just empty:
#version 330 core
void main()
{
}
To use this shader, you can issue a dummy draw command with an empty VBO:
glDrawArrays(GL_POINTS, 0, 1);
This is similar to the answer by demanze, but I would argue it's easier to understand. Also, this is drawn with only 4 vertices by using GL_TRIANGLE_STRIP.
#version 300 es
out vec2 textureCoords;
void main() {
    const vec2 positions[4] = vec2[](
        vec2(-1, -1),
        vec2(+1, -1),
        vec2(-1, +1),
        vec2(+1, +1)
    );
    const vec2 coords[4] = vec2[](
        vec2(0, 0),
        vec2(1, 0),
        vec2(0, 1),
        vec2(1, 1)
    );

    textureCoords = coords[gl_VertexID];
    gl_Position = vec4(positions[gl_VertexID], 0.0, 1.0);
}
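As with the other attribute-less variants, this would be drawn with an empty VAO bound and a call along the lines of (sketch):
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);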
The following comes from the draw function of the class that draws FBO textures to a screen-aligned quad.
Gl.glUseProgram(shad);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, vbo);
Gl.glEnableVertexAttribArray(0);
Gl.glEnableVertexAttribArray(1);
Gl.glVertexAttribPointer(0, 3, Gl.GL_FLOAT, Gl.GL_FALSE, 0, voff);
Gl.glVertexAttribPointer(1, 2, Gl.GL_FLOAT, Gl.GL_FALSE, 0, coff);
Gl.glActiveTexture(Gl.GL_TEXTURE0);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, fboc);
Gl.glUniform1i(tileLoc, 0);
Gl.glDrawArrays(Gl.GL_QUADS, 0, 4);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, 0);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, 0);
Gl.glUseProgram(0);
The actual quad itself and the coords come from:
private float[] v = new float[]{
    // positions (x, y, z)
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     1.0f,  1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f,
    // texture coordinates (s, t)
    0.0f, 0.0f,
    1.0f, 0.0f,
    1.0f, 1.0f,
    0.0f, 1.0f
};
The binding and setup of the VBOs I leave to you.
The vertex shader:
#version 330
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 coord;
out vec2 coords;
void main() {
    coords = coord.st;
    gl_Position = vec4(pos, 1.0);
}
Because the position is raw, that is, not multiplied by any matrix, the quad from (-1,-1) to (1,1) fits the viewport exactly. Look for Alfonse's tutorial linked off any of his posts on openGL.org.