Anti-aliased lines using barycentric coordinates in OpenGL - C++

I am drawing a line segment by joining two triangles. Each vertex of a triangle gets a unique set of coordinates as a varying variable (vec3 vBC). Then, in the fragment shader, the value I get for this variable is an interpolated value indicating the location of the fragment inside the triangle. I use this value to determine how far the fragment is from the edge of the triangle, which in turn is used to anti-alias the edges.
See the problem in image below:
Notice the jagged edges in Rect 1. This is because I am not applying anti-aliasing to edges BC and DE, as these are internal edges. Because length x is very small, the jagged edges on BC and DE close to vertices C and D become visible; these jagged steps are longer than length x. If I apply anti-aliasing to these two edges (BC and DE), I get Rect 2. This is because the smoothing during anti-aliasing assigns an alpha value of 0.0 to fragments lying on or very close to an edge, so the anti-aliased BC and DE introduce a white diagonal line inside.
So my question is: how can I smooth Rect 1?
Fragment shader:
varying vec4 DestinationColor;
varying vec2 TexCoordOut;
varying vec3 vBC;
uniform sampler2D Texture;
uniform int TexEnabled;
float edgeFactor();
void main() {
    if (TexEnabled == 1) {
        gl_FragColor = texture2D(Texture, TexCoordOut) * DestinationColor;
    } else {
        gl_FragColor = vec4(DestinationColor.xyz, edgeFactor());
    }
}

float edgeFactor() {
    vec3 d = fwidth(vBC);
    vec3 a3 = smoothstep(vec3(0.0), d * 1.5, vBC);
    return a3.y;
}
In the shader code, a3.y refers to the second value (the distance between vertex D or C and the fragment) in the set of coordinates, i.e. y represents the 1 in D(0,1,0), and the first and second 0 represent x and z respectively.

The algorithm is mostly sound, but you need some way to distinguish an internal edge from an external edge, which you cannot do from the geometry alone. You would need to duplicate vertices so that the vertices contributing to the internal edge carry unique attributes that make them look different from the external edge. That said, it sounds expensive ...
It might just be simpler to create a small texture that has a transparent alpha value towards the edges, applied across the line's width and repeated along the line's length. It would possibly be faster than all of the additional per-vertex data, especially on mobile with deferred geometry processing.
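For what it's worth, here is a minimal sketch of building such a texture on the CPU; it is not from the answer above, and the helper name createLineAATexture, the 16-texel width and the GLES2 header are illustrative assumptions. It builds a one-row RGBA strip whose alpha fades to zero at both ends, meant to be mapped across the line's width (clamped) and repeated along its length.

#include <GLES2/gl2.h>   // or whatever GL header your project already uses
#include <algorithm>
#include <vector>

GLuint createLineAATexture(int width = 16)
{
    // Opaque white everywhere; only the alpha channel carries the edge falloff.
    std::vector<unsigned char> pixels(width * 4, 255);
    for (int x = 0; x < width; ++x) {
        // Normalized distance from the nearest end of the strip (1.0 at the center).
        float t = std::min(x + 0.5f, width - x - 0.5f) / (width * 0.5f);
        float alpha = std::min(t * 4.0f, 1.0f);   // ramp over the outer quarter
        pixels[x * 4 + 3] = static_cast<unsigned char>(alpha * 255.0f + 0.5f);
    }

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // across the line width
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);        // along the line length
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, 1, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return tex;
}

In the fragment shader you would then just multiply the line color's alpha by the sampled alpha, letting the texture filtering do the smoothing.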

Related

Why does a scaling matrix cause the vertices to exit the bounds of a QQuickItem?

I am trying to understand the behavior of an OpenGL ES vertex shader within a QQuickItem.
I started with the "graph" example, and then I tried to scale samples with arbitrary minimum and maximum values into the QQuickItem bounds (in order to achieve a simple, GPU-accelerated auto-scaling on the y axis). I thought a scaling matrix would help.
The shader takes a pair of (coincident) vertices and adds an offset to the Y axis for every odd vertex (hence the attribute t). Lines are drawn as GL_TRIANGLE_STRIP to simulate a thick line (a simple solution, even if it leaves gaps in the case of vertical lines).
This is my attempt to keep the scenario as simple as possible. I added a scaling matrix (half x and y) to the vertex shader.
attribute highp vec4 pos;
attribute highp float t;
uniform lowp float size;
uniform highp mat4 qt_Matrix;
mat4 mx = mat4(0.5,0,0,0, 0,0.5,0,0, 0,0,1,0, 0,0,0,1);
varying lowp float vT;
void main(void)
{
    vec4 adjustedPos = pos;
    adjustedPos.y += (t * size);
    // Added transformation
    adjustedPos = mx * adjustedPos;
    gl_Position = qt_Matrix * adjustedPos;
    vT = t;
}
I passed fixed vertex positions into the updateGeometry() function (linenode.cpp), so that the first vertex pair passed to the shader is located at (0, 0), as follows:
void LineNode::updateGeometry(const QRectF &bounds, const QList<qreal> &samples)
{
    m_geometry.allocate(4);
    LineVertex *v = (LineVertex *) m_geometry.vertexData();
    v[0].set(0, 0, 0);
    v[1].set(0, 0, 1);
    v[2].set(bounds.width(), bounds.height(), 0);
    v[3].set(bounds.width(), bounds.height(), 1);
    markDirty(QSGNode::DirtyGeometry);
}
The geometry gets scaled correctly, but with respect to the whole window area. As in the pictures below, the scaled line appears translated. From my understanding, if the scaling matrix mx were applied to (0, 0), the position would remain unchanged, so the first point of the line would stay at the top-left corner of the Item after transformation with qt_Matrix.
I suspect that the vertex located at (0, 0) is passed to the shader already translated, but I was unable to find details in the documentation. And in that case I would not understand the point of having qt_Matrix perform the context-to-Item transformation.
By the way, if I pass half the coordinates as vertices to updateGeometry(), the graph is scaled correctly.
Graph without transformation matrix:
Graph with transformation matrix applied in vertex shader:
Graph with scaling applied directly to vertices in C++ code:
What I need to understand is:
Why is the line not simply scaled within the Item coordinates?
Am I making any wrong assumptions?
If the vertices passed to the shader are already translated, how can I scale the (x, y) pair while remaining within the QQuickItem bounds?

How to reduce the number of draw calls when drawing many tiles from a large texture?

I'm trying to develop a map for a 2D tile-based game. The approach I'm using is to save the map images in a large texture (tileset) and draw only the desired tiles on the screen, updating their positions through the vertex shader. However, even a 10x10 map involves 100 glDrawArrays calls; looking at the task manager, this consumes 5% of CPU usage and 4-5% of GPU. Imagine a complete game with dozens of calls. Is there a way to optimize this, such as preparing the whole scene and making just 1 draw call, drawing everything at once, or some other approach?
void GameMap::draw() {
    m_shader->use();
    m_texture->bind();
    glBindVertexArray(m_quadVAO);
    for (size_t r = 0; r < 10; r++) {
        for (size_t c = 0; c < 10; c++) {
            m_tileCoord->setX(c * m_tileHeight);
            m_tileCoord->setY(r * m_tileHeight);
            m_tileCoord->convert2DToIso();
            drawTile(0);
        }
    }
    glBindVertexArray(0);
}

void GameMap::drawTile(GLint index) {
    glm::mat4 position_coord = glm::mat4(1.0f);
    glm::mat4 texture_coord = glm::mat4(1.0f);
    m_srcX = index * m_tileWidth;
    GLfloat clipX = m_srcX / m_texture->m_width;
    GLfloat clipY = m_srcY / m_texture->m_height;
    texture_coord = glm::translate(texture_coord, glm::vec3(glm::vec2(clipX, clipY), 0.0f));
    position_coord = glm::translate(position_coord, glm::vec3(glm::vec2(m_tileCoord->getX(), m_tileCoord->getY()), 0.0f));
    position_coord = glm::scale(position_coord, glm::vec3(glm::vec2(m_tileWidth, m_tileHeight), 1.0f));
    m_shader->setMatrix4("texture_coord", texture_coord);
    m_shader->setMatrix4("position_coord", position_coord);
    glDrawArrays(GL_TRIANGLES, 0, 6);
}
--Vertex Shader
#version 330 core
layout (location = 0) in vec4 vertex; // <vec2 position, vec2 texCoords>
out vec4 TexCoords;
uniform mat4 texture_coord;
uniform mat4 position_coord;
uniform mat4 projection;
void main()
{
    TexCoords = texture_coord * vec4(vertex.z, vertex.w, 1.0, 1.0);
    gl_Position = projection * position_coord * vec4(vertex.xy, 0.0, 1.0);
}
-- Fragment Shader
#version 330 core
out vec4 FragColor;
in vec4 TexCoords;
uniform sampler2D image;
uniform vec4 spriteColor;
void main()
{
    FragColor = vec4(spriteColor) * texture(image, vec2(TexCoords.x, TexCoords.y));
}
The Basic Technique
The first thing you want to do is set up your 10x10 grid vertex buffer. Each square in the grid is actually two triangles. And all the triangles will need their own vertices because the UV coordinates for adjacent tiles are not the same, even though the XY coordinates are the same. This way each triangle can copy the area out of the texture atlas that it needs to and it doesn't need to be contiguous in UV space.
Here's how the vertices of two adjacent quads in the grid will be set up:
1: xy=(0,0) uv=(Left0 ,Top0)
2: xy=(1,0) uv=(Right0,Top0)
3: xy=(1,1) uv=(Right0,Bottom0)
4: xy=(1,1) uv=(Right0,Bottom0)
5: xy=(0,1) uv=(Left0 ,Bottom0)
6: xy=(0,0) uv=(Left0 ,Top0)
7: xy=(1,0) uv=(Left1 ,Top1)
8: xy=(2,0) uv=(Right1,Top1)
9: xy=(2,1) uv=(Right1,Bottom1)
10: xy=(2,1) uv=(Right1,Bottom1)
11: xy=(1,1) uv=(Left1 ,Bottom1)
12: xy=(1,0) uv=(Left1 ,Top1)
These 12 vertices define 4 triangles. The Top, Left, Bottom, Right UV coordinates for the first square can be completely different from the coordinates of the second square, thus allowing each square to be textured by a different area of the texture atlas. E.g. see below for how the UV coordinates of each triangle map to a tile in the texture atlas.
In your case, with your 10x10 grid, you would have 100 quads, or 200 triangles. With 200 triangles at 3 vertices each, that is 600 vertices to define. But it's a single draw call of 200 triangles (600 vertices). Each vertex has its own x, y, u, v coordinates. To change which tile a quad displays, you just update the uv coordinates of its 6 vertices in your vertex buffer.
You will likely find that this is the most convenient and efficient approach.
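As a concrete illustration of the layout above, here is a hedged sketch of filling such a buffer on the CPU. The struct and function names (TileVertex, buildGridVertices) and the atlas layout are assumptions for the example, not part of the question's code.

#include <vector>

struct TileVertex { float x, y, u, v; };

std::vector<TileVertex> buildGridVertices(int cols, int rows,
                                          float tileW, float tileH,
                                          int atlasCols, int atlasRows,
                                          const std::vector<int>& tileIndices)
{
    std::vector<TileVertex> verts;
    verts.reserve(cols * rows * 6);
    const float du = 1.0f / atlasCols;   // UV footprint of one tile in the atlas
    const float dv = 1.0f / atlasRows;

    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            int tile = tileIndices[r * cols + c];
            float u0 = (tile % atlasCols) * du, v0 = (tile / atlasCols) * dv;
            float u1 = u0 + du, v1 = v0 + dv;
            float x0 = c * tileW, y0 = r * tileH;
            float x1 = x0 + tileW, y1 = y0 + tileH;
            // Two triangles per quad; each quad carries its own UVs.
            TileVertex quad[6] = {
                {x0, y0, u0, v0}, {x1, y0, u1, v0}, {x1, y1, u1, v1},
                {x1, y1, u1, v1}, {x0, y1, u0, v1}, {x0, y0, u0, v0}
            };
            verts.insert(verts.end(), quad, quad + 6);
        }
    }
    return verts;
}

Upload the result once with glBufferData and render with a single glDrawArrays(GL_TRIANGLES, 0, vertexCount); when a tile changes, rewrite only the uv values of that quad's 6 vertices.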
Advanced Approaches
There are more memory efficient or convenient ways of setting this up with multiple streams to reduce duplication of vertices and leverage shaders to do the work of setting it up if you're willing to trade off computation time for memory or convenience. Find the balance that is right for you. But you should grasp the basic technique first before trying to optimize.
In the multiple-stream approach, you could specify all the xy vertices separately from all the uv vertices to avoid duplication.
You could also specify a second set of texture coordinates that is just the top-left corner of the tile in the atlas, let the uv coordinates run from (0,0) (top left) to (1,1) (bottom right) for each quad, and have your shader scale and offset them to arrive at the final texture coordinates.
You could also specify a single uv coordinate for the top-left corner of the source area of each primitive and let a geometry shader complete the squares.
Even smarter, you could specify only the x,y coordinates (omitting the uv coordinates entirely) and, in your vertex shader, sample a texture that contains the "tile numbers" of each quad. You would sample this texture at coordinates based on the x,y values in the grid and transform the value you read into the uv coordinates in the atlas. To change a tile in this system, you just change one pixel in the tile-map texture.
And finally, you could skip generating the primitives on the CPU entirely and derive everything from a single list, letting a geometry shader generate the x,y grid coordinates and complete the triangle geometry and uv coordinates downstream. This is the most memory-efficient, but it relies on the GPU to compute the setup at runtime.
With a static 6-vertices-per-quad setup, you free up GPU processing at the cost of a little extra memory. Depending on what you need for performance, you may find that using more memory to get a higher frame rate is desirable. Vertex buffers are tiny compared to textures anyway.
So as I said, you should start with the basic technique first as it's likely also the optimal solution for performance as well, especially if your map doesn't change very often.
You can upload all parameters to GPU memory and draw everything using only one draw call.
This way you don't need to update vertex shader uniforms between draws, and the CPU load should be close to zero.
It's been 3 years since I used OpenGL, so I can only point you in the right direction.
Start reading some material like for instance:
https://ferransole.wordpress.com/2014/07/09/multidrawindirect/
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glDrawArraysIndirect.xhtml
Also, keep in mind this is GL 4.x stuff, check your target platform (software+hardware) GL version support.
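To make the indirect-draw idea a bit more concrete, here is a minimal sketch (GL 4.3+ or ARB_multi_draw_indirect; the loader header and helper names are assumptions). Per-tile data such as atlas offsets would then live in a buffer that the shader indexes, e.g. via gl_DrawID or the baseInstance field, instead of per-draw uniforms.

#include <GL/glew.h>   // assumed loader; use whatever your project uses
#include <vector>

// Layout defined by the OpenGL spec for glMultiDrawArraysIndirect.
struct DrawArraysIndirectCommand {
    GLuint count;          // vertices per draw (6 for a quad)
    GLuint instanceCount;  // 1
    GLuint first;          // first vertex of this quad in the VBO
    GLuint baseInstance;   // handy as a per-tile index in the shader
};

void setupIndirectDraws(GLuint& indirectBuffer, int tileCount)
{
    std::vector<DrawArraysIndirectCommand> cmds(tileCount);
    for (int i = 0; i < tileCount; ++i)
        cmds[i] = { 6u, 1u, GLuint(i * 6), GLuint(i) };

    glGenBuffers(1, &indirectBuffer);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 cmds.size() * sizeof(DrawArraysIndirectCommand),
                 cmds.data(), GL_STATIC_DRAW);
}

void drawMap(GLuint vao, GLuint indirectBuffer, int tileCount)
{
    glBindVertexArray(vao);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    // One API call issues all tileCount draws.
    glMultiDrawArraysIndirect(GL_TRIANGLES, nullptr, tileCount, 0);
}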

Using shaders to implement field of view on a 2D environment

I'm implementing dynamic field of view. I decided to use shaders in order to make the illumination look better and to control how it affects the walls. Here is the scenario I'm working on:
http://i.imgur.com/QxZVyo7.jpg
I have a map with a flat floor and walls. Everything here is 2D; there is no 3D geometry, only 2D polygons that compose the walls.
Using the vertices of the polygons I cast shadows to define the viewable area. (The purple lines are part of the mask I use in the next step.)
Using the shader when drawing the shadows on top of the scenario, I keep the walls from being obscured as well.
This way the shadows are cast dynamically along the walls as the field of view changes.
I have used the following shader to achieve this, but I feel it is kind of overkill and really inefficient:
uniform sampler2D texture;
uniform sampler2D filterTexture;
uniform vec2 textureSize;
uniform float cellSize;
uniform sampler2D shadowTexture;
void main()
{
    vec2 position;
    vec4 filterPixel;
    vec4 shadowPixel;
    vec4 pixel = texture2D(texture, gl_TexCoord[0].xy);
    for (float i = 0.0; i <= cellSize * 2.0; i++)
    {
        position = gl_TexCoord[0].xy;
        position.y = position.y - (i / textureSize.y);
        filterPixel = texture2D(filterTexture, position);
        position.y = position.y - (1.0 / textureSize.y);
        shadowPixel = texture2D(texture, position);
        if (shadowPixel == vec4(0.0))
        {
            if (filterPixel.r == 1.0)
            {
                if (filterPixel.b == 1.0)
                {
                    pixel.a = 0.0;
                    break;
                }
                else if (i <= cellSize)
                {
                    pixel.a = 0.0;
                    break;
                }
            }
        }
    }
    gl_FragColor = pixel;
}
Iterating for each fragment just to look for the red-colored pixel in the mask seems like a huge overhead, but I fail to see how to complete this task any other way using shaders.
The solution here is really quite simple: use shadow maps.
Your situation may be 2D instead of 3D, but the basic concept is the same. You want to "shadow" areas based on whether there is an obstructive surface between some point in the world and a "light source" (in your case, the player character).
In 3D, shadow maps work by rendering the world from the perspective of the light source. This results in a 2D texture where the values represent the depth from the light (in a particular direction) to the nearest obstruction. When you render the scene for real, you check the current fragment's location by projecting it into the 2D depth texture (the shadow map). If the depth value you compute for the current fragment is closer than the nearest obstruction in the projected location in the shadow map, then the fragment is visible from the light. If not, then it isn't.
Your 2D version would have to do the same thing, only with one less dimension. You render your 2D world from the perspective of the "light source". Your 2D world in this case is really just the obstructing quads (you'll have to render them with line polygon filling). Any quads that obstruct sight should be rendered into the shadow map. Texture accesses are completely unnecessary; the only information you need is depth. Your shader doesn't even have to write a color. You render these objects by projecting the 2D space into a 1D texture.
This would look something like this:
X..X
XXXXXXXX..XXXXXXXXXXXXXXXXXXXX
X.............\.../..........X
X..............\./...........X
X...............C............X
X............../.\...........X
X............./...\..........X
X............/.....\.........X
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
C is the character's position; the dots are just regular, unobstructive floor. The Xs are the walls. The lines from C represent the four directions you need to render the 2D lines from.
In 3D, to do shadow mapping for point lights, you have to render the scene 6 times, in 6 different directions into the faces of a cube shadow map. In 2D, you have to render the scene 4 times, in 4 different directions into 4 different 1D shadow maps. You can use a 1D array texture for this.
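As a hedged sketch of the resource setup only (desktop GL; the helper names are illustrative, and the four light-space projection matrices are left out), allocating the 4 one-dimensional shadow maps as a single 1D array depth texture and rendering one layer could look like this:

#include <GL/glew.h>   // assumed loader

GLuint createShadowMapArray(int width /* e.g. 1024 texels per direction */)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_1D_ARRAY, tex);
    // For a 1D array texture the "height" argument is the layer count (4 directions).
    glTexImage2D(GL_TEXTURE_1D_ARRAY, 0, GL_DEPTH_COMPONENT24,
                 width, 4, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_1D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}

void renderShadowLayer(GLuint fbo, GLuint shadowTex, int width, int layer)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, shadowTex, 0, layer);
    glDrawBuffer(GL_NONE);   // depth only, no color output
    glReadBuffer(GL_NONE);
    glViewport(0, 0, width, 1);
    glClear(GL_DEPTH_BUFFER_BIT);
    // ... draw the obstructing quads with the projection for this direction ...
}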
Once you have your shadow maps, you just use them in your shader to detect when a fragment is visible. To do that, you'll need a set of transforms from window space into the 4 different projections that represent the 4 directions of view that you rendered into. Only one of these will be used for any particular fragment, based on where the fragment is relative to the target.
To implement this, I'd start with just getting a simple case of directional "shadowing" to work. That is, don't use a position; just a direction for a "light". That will test your ability to develop a 2D-to-1D projection matrix, as well as an appropriate camera-space matrix to transform your world-space quads into camera space. Once you have mastered that, then you can get to work doing it 4 times with different projections.

GLSL - put decal texture into base texture with color indication

I'm trying to write a simple shader to put a "mark" (64*64) onto a base texture (128*128). To indicate where the mark must go, I use a cyan-colored, mark-sized (64*64) region on the base texture.
Fragment shader
precision lowp float;
uniform sampler2D us_base_tex;
uniform sampler2D us_mark_tex;
varying vec2 vv_base_tex;
varying vec2 vv_mark_tex;
const vec4 c_mark_col = vec4(0.0, 1.0, 1.0, 1.0);//CYAN
void main()
{
    vec4 base_col = texture2D(us_base_tex, vv_base_tex);
    if (base_col == c_mark_col)
    {
        vec4 mark_col = texture2D(us_mark_tex, vv_mark_tex); //texelFetch magic overhere must be
        base_col = mix(base_col, mark_col, mark_col.a);
    }
    gl_FragColor = base_col;
}
Of course, it does not work as it should; I get something like this (transparency is only for demonstration, there is no cyan region, only a piece of the "T"):
I tried to figure it out, and it seems only something like texelFetch will help me, but I can't work out how to get the texture coordinate of a cyan texel in the base texture and convert it so that the first column/first row cyan base texel maps to the first column/first row mark texel, the second column/first row base texel to the second column/first row of the mark, etc.
I think there's a way to do this in a single pass - but it involves using another texture that is capable of holding the information presented below. So you're going to increase your texture memory usage.
In this approach, the second texture (it can be generated by post-processing the original texture either offline or somehow) contains the UV map for the decal
R = normalized distance from left of cyan square
G = normalized distance from the top of the cyan square
B = don't care
Now the pixel shader is simple: all it needs to do is check whether the current texel is cyan, pick the R and G from the "decal-uvmap" texture, and use those as texture coordinates to sample the decal texture.
Note that the bit depth of this texture (and its size) is related to the size of the original texture, so it may be possible to get away with a much smaller "decal-uvmap" texture than the original.
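As a rough illustration of that post-processing step (purely an assumption about how it could be generated offline, working on 8-bit RGBA pixel data and assuming a single cyan region at least 2 texels wide and tall), something like this could build the decal-uvmap:

#include <algorithm>
#include <cstdint>
#include <vector>

// base: RGBA8 pixels of the base texture; uvmap: output RGBA8 texture of the same size.
void buildDecalUVMap(const std::vector<uint8_t>& base, int w, int h,
                     std::vector<uint8_t>& uvmap)
{
    uvmap.assign(base.size(), 0);
    int left = w, top = h, right = -1, bottom = -1;
    auto isCyan = [&](int x, int y) {
        const uint8_t* p = &base[(y * w + x) * 4];
        return p[0] == 0 && p[1] == 255 && p[2] == 255;
    };
    // Find the bounding box of the cyan region.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (isCyan(x, y)) {
                left = std::min(left, x);  right  = std::max(right, x);
                top  = std::min(top, y);   bottom = std::max(bottom, y);
            }
    if (right < left) return;   // no cyan region found
    float dw = float(std::max(1, right - left));
    float dh = float(std::max(1, bottom - top));
    // R = normalized distance from the left edge, G = from the top edge.
    for (int y = top; y <= bottom; ++y)
        for (int x = left; x <= right; ++x) {
            uint8_t* p = &uvmap[(y * w + x) * 4];
            p[0] = uint8_t(255.0f * (x - left) / dw + 0.5f);
            p[1] = uint8_t(255.0f * (y - top)  / dh + 0.5f);
            p[3] = 255;   // marks "inside the decal region"
        }
}

The fragment shader would then, wherever the base texel is cyan, read R and G from this texture at the same coordinate and use them directly as the UV for sampling the mark texture.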

OpenGL: How to render perfect rectangular gradient?

I can render a triangular gradient with just one triangle, using glColor for each corner.
But how do I render a perfect rectangular gradient? I tried with one quad, but the middle gets an ugly seam. I also tried with a 2x2 texture, and it blended properly from each corner the way it should, but the texture sampling becomes imprecise when stretched too much (I started to see pixels bigger than 1x1).
Is there some way of calculating this in a shader perhaps?
--
Edit: Links to the images were broken (removed).
Indeed, the kind of gradient you want relies on 4 colors at each pixel, where OpenGL typically only interpolates input over triangles (so 3 inputs). Getting the perfect gradient is not possible just with the standard interpolants.
Now, as you mentioned, a 2x2 texture can do it. If you did see precision issues, I suggest switching the format of the texture to something that typically requires more precision (like a float texture).
Last, and as you also mentioned in your question, you can solve this with a shader. Say you pass an extra per-vertex attribute that corresponds to (u,v) = (0,0), (0,1), (1,0), (1,1) all the way to the pixel shader (with the vertex shader just doing a pass-through).
You can do the following in the pixel shader (note, the idea here is sound, but I did not test the code):
Vertex shader snippet:
varying vec2 uv;
attribute vec2 uvIn;
uv = uvIn;
Fragment shader:
uniform vec3 color0;
uniform vec3 color1;
varying vec2 uv;
// from wikipedia on bilinear interpolation on unit square:
// f(x,y) = f(0,0)(1-x)(1-y) + f(1,0)x(1-y) + f(0,1)(1-x)y + f(1,1) xy.
// applied here:
// gl_FragColor = color0 * ((1-x)*(1-y) + x*y) + color1*(x*(1-y) + (1-x)*y)
// gl_FragColor = color0 * (1 - x - y + 2 * x * y) + color1 * (x + y - 2 * x * y)
// after simplification:
// float temp = (x + y - 2 * x * y);
// gl_FragColor = color0 * (1-temp) + color1 * temp;
gl_FragColor = mix(color0, color1, uv.u + uv.v - 2 * uv.u * uv.v);
The problem is that you are using a quad. The quad is drawn using two triangles, but the triangles are not in the orientation that you need.
If I define the quad vertices as:
A: bottom left vertex
B: bottom right vertex
C: top right vertex
D: top left vertex
I would say that the quad is composed by the following triangles:
A B D
D B C
The colors assigned to each vertex are:
A: yellow
B: red
C: yellow
D: red
Keeping in mind the geometry (the two triangles), the pixels between D and B are the result of the interpolation between red and red: indeed, red!
The solution would be a geometry with two triangles, but oriented in a different way:
A B C
A C D
But you probably will not get the exact gradient, since in the middle of the quad you will get full yellow instead of yellow mixed with red. So, I suppose you can achieve the exact result using 4 triangles (or a triangle fan), in which the center vertex is the interpolation between the yellow and the red.
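For illustration only, here is a sketch of that triangle-fan idea with a made-up interleaved layout; the center vertex gets the average of the four corner colors so the middle really is yellow mixed with red.

// x, y, r, g, b per vertex; draw with glDrawArrays(GL_TRIANGLE_FAN, 0, 6).
struct FanVertex { float x, y; float r, g, b; };
const FanVertex fan[6] = {
    {0.5f, 0.5f, 1.0f, 0.5f, 0.0f},   // center: average of 2 yellow + 2 red corners
    {0.0f, 0.0f, 1.0f, 1.0f, 0.0f},   // A: yellow
    {1.0f, 0.0f, 1.0f, 0.0f, 0.0f},   // B: red
    {1.0f, 1.0f, 1.0f, 1.0f, 0.0f},   // C: yellow
    {0.0f, 1.0f, 1.0f, 0.0f, 0.0f},   // D: red
    {0.0f, 0.0f, 1.0f, 1.0f, 0.0f},   // back to A to close the fan
};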
Wooop! Effectively the result is not what I was expecting. I thought the gradient was produced by linear interpolation between colors, but surely it is not (I really need to set up my LCD color space!). Indeed, the most scalable solution is rendering using fragment shaders.
Keep the solution proposed by Bahbar. I would advise starting with the implementation of a pass-through vertex/fragment shader (specifying only vertices and colors, you should get the previous result); then start playing with the mix function and the texture coordinates passed to the vertex shader.
You really need to understand the rendering pipeline with programmable shaders: the vertex shader is called once per vertex, the fragment shader once per fragment (without multisampling a fragment is a pixel; with multisampling a pixel is composed of many fragments, which are combined to get the pixel color).
The vertex shader takes the input parameters (uniforms and inputs): uniforms are constant for all vertices issued between glBegin/glEnd, while inputs are characteristic of each vertex shader instance (4 vertices, 4 vertex shader instances).
A fragment shader takes as input the vertex shader outputs that produced the fragment (due to the rasterization of triangles, lines and points). In Bahbar's answer the only such output is the uv variable (common to both shader sources).
In your case, the vertex shader outputs the vertex texture coordinates UV (passed as-is). These UV coordinates are available for each fragment, computed by interpolating the values output by the vertex shader according to the fragment position.
Once you have those coordinates, you only need two colors: the red and the yellow in your case (in Bahbar's answer they correspond to the color0 and color1 uniforms). Then mix those colors depending on the UV coordinates of the specific fragment. (*)
(*) Here is the power of shaders: you can specify different interpolation methods by simply modifying the shader source. Linear, bilinear or spline interpolation can be implemented by modifying the fragment shader and passing additional uniforms as needed.
Good practice!
Do all of your vertices have the same depth (Z) value, and are all of your triangles completely on-screen? If so, then you should have no problem getting a "perfect" color gradient over a quad made from two triangles with glColor. If not, then it's possible that your OpenGL implementation treats colors poorly.
This leads me to suspect that you may have a very old or strange OpenGL implementation. I recommend that you tell us what platform you're using, and what version of OpenGL you have...?
Without any more information, I recommend you attempt writing a shader, and avoid telling OpenGL that you want a "color." If possible, tell it that you want a "texcoord" but treat it like a color anyway. This trick has worked in some cases where color accuracy is too low.