OpenGL: How to render a perfect rectangular gradient? - C++

I can render a triangular gradient with just one triangle, using glColor for each corner.
But how do I render a perfect rectangular gradient? I tried with one quad, but the middle gets an ugly seam. I also tried a 2x2 texture, which behaves the way it should: proper blending from each corner. But the texture sampling becomes imprecise when stretched too much (I started to see blocks bigger than 1x1 pixel).
Is there some way of calculating this in a shader, perhaps?
--
Edit: the links to the images were broken, so they were removed.

Indeed, the kind of gradient you want relies on 4 colors at each pixel, whereas OpenGL typically only interpolates input over triangles (so 3 inputs). Getting the perfect gradient is not possible with just the standard interpolants.
Now, as you mentioned, a 2x2 texture can do it. If you did see precision issues, I suggest switching the format of the texture to something that typically requires more precision (like a float texture).
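For instance, the 2x2 gradient texture could be uploaded in a float format along these lines (a sketch, assuming GL 3.0 or ARB_texture_float for GL_RGBA32F; tex is a hypothetical texture handle, and yellow/red are just example corner colors):

    // Hypothetical 2x2 float texture holding the four corner colors.
    GLfloat corners[2 * 2 * 4] = {
        1.0f, 1.0f, 0.0f, 1.0f,   1.0f, 0.0f, 0.0f, 1.0f,  // yellow, red
        1.0f, 0.0f, 0.0f, 1.0f,   1.0f, 1.0f, 0.0f, 1.0f,  // red, yellow
    };
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 2, 2, 0, GL_RGBA, GL_FLOAT, corners);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);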
Last, and as you also mentioned in your question, you can solve this with a shader. Say you pass an extra attribute per vertex that corresponds to (u,v) = (0,0) (0,1) (1,0) (1,1), all the way to the pixel shader (with the vertex shader just doing a pass-through).
You can then do the following in the pixel shader (note: the idea here is sound, but I did not test the code):
Vertex shader snippet:
attribute vec2 uvIn;
varying vec2 uv;

void main() {
    uv = uvIn;
    gl_Position = ftransform(); // standard fixed-function transform, for brevity
}
Fragment shader:
uniform vec3 color0;
uniform vec3 color1;
varying vec2 uv;

// From Wikipedia, bilinear interpolation on the unit square:
//   f(x,y) = f(0,0)(1-x)(1-y) + f(1,0)x(1-y) + f(0,1)(1-x)y + f(1,1)xy
// Applied here, with f(0,0) = f(1,1) = color0 and f(1,0) = f(0,1) = color1:
//   gl_FragColor = color0 * ((1-x)*(1-y) + x*y) + color1 * (x*(1-y) + (1-x)*y)
//                = color0 * (1 - x - y + 2*x*y) + color1 * (x + y - 2*x*y)
// After simplification, with t = x + y - 2*x*y:
//   gl_FragColor = color0 * (1-t) + color1 * t
void main() {
    float t = uv.x + uv.y - 2.0 * uv.x * uv.y;
    gl_FragColor = vec4(mix(color0, color1, t), 1.0);
}
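For completeness, the host-side setup could look roughly like this (a sketch using legacy immediate mode to match the glColor-style code in the question; prog is a hypothetical linked program handle, and the colors are just examples):

    GLint loc0   = glGetUniformLocation(prog, "color0");
    GLint loc1   = glGetUniformLocation(prog, "color1");
    GLuint uvLoc = (GLuint)glGetAttribLocation(prog, "uvIn");

    glUseProgram(prog);
    glUniform3f(loc0, 1.0f, 1.0f, 0.0f); // yellow
    glUniform3f(loc1, 1.0f, 0.0f, 0.0f); // red

    glBegin(GL_QUADS);
    glVertexAttrib2f(uvLoc, 0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glVertexAttrib2f(uvLoc, 1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glVertexAttrib2f(uvLoc, 1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glVertexAttrib2f(uvLoc, 0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();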

The problem arises because you use a quad. The quad is drawn as two triangles, but the triangles are not in the orientation that you need.
If I define the quad vertices as:
A: bottom left vertex
B: bottom right vertex
C: top right vertex
D: top left vertex
I would say that the quad is composed of the following triangles:
A B D
D B C
The colors assigned to each vertex are:
A: yellow
B: red
C: yellow
D: red
Keeping in mind the geometry (the two triangles), the pixels between D and B result from interpolating between red and red: indeed, red!
A first fix would be a geometry with two triangles oriented a different way:
A B C
A C D
But you will probably not get the exact gradient, since in the middle of the quad you will get full yellow instead of yellow mixed with red. So I suppose you could achieve the exact result using 4 triangles (or a triangle fan, as sketched below), in which the central vertex is the interpolation between the yellow and the red.
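A minimal immediate-mode sketch of that triangle fan, with the center vertex set to the yellow/red average (coordinates and exact colors are just placeholders):

    glBegin(GL_TRIANGLE_FAN);
    glColor3f(1.0f, 0.5f, 0.0f); glVertex2f( 0.0f,  0.0f); // center: average of yellow and red
    glColor3f(1.0f, 1.0f, 0.0f); glVertex2f(-1.0f, -1.0f); // A: yellow
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f( 1.0f, -1.0f); // B: red
    glColor3f(1.0f, 1.0f, 0.0f); glVertex2f( 1.0f,  1.0f); // C: yellow
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-1.0f,  1.0f); // D: red
    glColor3f(1.0f, 1.0f, 0.0f); glVertex2f(-1.0f, -1.0f); // back to A to close the fan
    glEnd();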
Whoops! Effectively the result is not what I was expecting. I thought the gradient was produced by linear interpolation between colors, but surely it is not (I really need to set up the LCD color space!). Indeed, the most scalable solution is rendering using fragment shaders.
Keep the solution proposed by Bahbar. I would advise starting with the implementation of a pass-through vertex/fragment shader (specifying only vertices and colors, you should get the previous result); then start playing with the mix function and the texture coordinates passed to the vertex shader.
You really need to understand the rendering pipeline with programmable shaders: the vertex shader is called once per vertex, the fragment shader once per fragment (without multisampling, a fragment is a pixel; with multisampling, a pixel is composed of many fragments, which are combined to get the final pixel color).
The vertex shader takes input parameters (uniforms and attributes): uniforms are constant for all vertices issued between glBegin/glEnd; attributes are specific to each vertex shader instance (4 vertices, 4 vertex shader instances).
A fragment shader takes as input the vertex shader outputs that produced the fragment (due to the rasterization of triangles, lines, and points). In Bahbar's answer the only such output is the uv variable (common to both shader sources).
In your case, the vertex shader outputs the vertex texture coordinates UV (passed through unchanged). These UV coordinates are available for each fragment; they are computed by interpolating the values output by the vertex shader, depending on the fragment's position.
Once you have those coordinates, you only need two colors: the red and the yellow in your case (corresponding to the color0 and color1 uniforms in Bahbar's answer). Then mix those colors depending on the UV coordinates of the specific fragment. (*)
(*) Here is the power of shaders: you can specify different interpolation methods by simply modifying the shader source. Linear, bilinear, or spline interpolation can be implemented by supplying additional uniforms to the fragment shader.
Good practice!

Do all of your vertices have the same depth (Z) value, and are all of your triangles completely on-screen? If so, then you should have no problem getting a "perfect" color gradient over a quad made from two triangles with glColor. If not, then it's possible that your OpenGL implementation treats colors poorly.
This leads me to suspect that you may have a very old or strange OpenGL implementation. I recommend that you tell us what platform you're using and what version of OpenGL you have.
Without any more information, I recommend you attempt writing a shader, and avoid telling OpenGL that you want a "color." If possible, tell it that you want a "texcoord" but treat it like a color anyway, as sketched below. This trick has worked in some cases where color accuracy is too low.
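A tiny sketch of that trick (just the idea, not a guaranteed fix): smuggle the corner colors through the texture-coordinate channel, which some implementations interpolate at higher precision than colors.

    // host side: pass colors as texture coordinates
    glBegin(GL_QUADS);
    glTexCoord3f(1.0f, 1.0f, 0.0f); glVertex2f(-1.0f, -1.0f); // "yellow" corner
    glTexCoord3f(1.0f, 0.0f, 0.0f); glVertex2f( 1.0f, -1.0f); // "red" corner
    glTexCoord3f(1.0f, 1.0f, 0.0f); glVertex2f( 1.0f,  1.0f); // "yellow" corner
    glTexCoord3f(1.0f, 0.0f, 0.0f); glVertex2f(-1.0f,  1.0f); // "red" corner
    glEnd();

    // fragment shader: use the interpolated texcoord as the color
    void main() {
        gl_FragColor = vec4(gl_TexCoord[0].rgb, 1.0);
    }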

Related

How to reduce the number of draw calls when drawing a large number of tiles?

I'm trying to develop a map for a 2D tile-based game. The approach I'm using is to store the map images in a large texture (tileset) and draw only the desired tiles on the screen, updating the positions through the vertex shader. However, even a 10x10 map involves 100 glDrawArrays calls; looking at the task manager, this consumes 5% of CPU usage and 4-5% of GPU. Imagine a complete game with dozens of calls. Is there a way to optimize this, such as preparing the whole scene and making just 1 draw call, drawing everything at once, or some other approach?
void GameMap::draw() {
    m_shader->use();
    m_texture->bind();
    glBindVertexArray(m_quadVAO);
    for (size_t r = 0; r < 10; r++) {
        for (size_t c = 0; c < 10; c++) {
            m_tileCoord->setX(c * m_tileHeight);
            m_tileCoord->setY(r * m_tileHeight);
            m_tileCoord->convert2DToIso();
            drawTile(0);
        }
    }
    glBindVertexArray(0);
}

void GameMap::drawTile(GLint index) {
    glm::mat4 position_coord = glm::mat4(1.0f);
    glm::mat4 texture_coord = glm::mat4(1.0f);
    m_srcX = index * m_tileWidth;
    GLfloat clipX = m_srcX / m_texture->m_width;
    GLfloat clipY = m_srcY / m_texture->m_height;
    texture_coord = glm::translate(texture_coord, glm::vec3(glm::vec2(clipX, clipY), 0.0f));
    position_coord = glm::translate(position_coord, glm::vec3(glm::vec2(m_tileCoord->getX(), m_tileCoord->getY()), 0.0f));
    position_coord = glm::scale(position_coord, glm::vec3(glm::vec2(m_tileWidth, m_tileHeight), 1.0f));
    m_shader->setMatrix4("texture_coord", texture_coord);
    m_shader->setMatrix4("position_coord", position_coord);
    glDrawArrays(GL_TRIANGLES, 0, 6);
}
--Vertex Shader
#version 330 core
layout (location = 0) in vec4 vertex; // <vec2 position, vec2 texCoords>
out vec4 TexCoords;
uniform mat4 texture_coord;
uniform mat4 position_coord;
uniform mat4 projection;

void main()
{
    TexCoords = texture_coord * vec4(vertex.z, vertex.w, 1.0, 1.0);
    gl_Position = projection * position_coord * vec4(vertex.xy, 0.0, 1.0);
}
-- Fragment Shader
#version 330 core
out vec4 FragColor;
in vec4 TexCoords;
uniform sampler2D image;
uniform vec4 spriteColor;

void main()
{
    FragColor = spriteColor * texture(image, TexCoords.xy);
}
The Basic Technique
The first thing you want to do is set up your 10x10 grid vertex buffer. Each square in the grid is actually two triangles, and all the triangles need their own vertices, because the UV coordinates of adjacent tiles are not the same even where the XY coordinates are. This way each triangle can copy the area of the texture atlas it needs, and that area doesn't have to be contiguous in UV space.
Here's how the vertices of two adjacent quads in the grid will be set up:
1: xy=(0,0) uv=(Left0 ,Top0)
2: xy=(1,0) uv=(Right0,Top0)
3: xy=(1,1) uv=(Right0,Bottom0)
4: xy=(1,1) uv=(Right0,Bottom0)
5: xy=(0,1) uv=(Left0 ,Bottom0)
6: xy=(0,0) uv=(Left0 ,Top0)
7: xy=(1,0) uv=(Left1 ,Top1)
8: xy=(2,0) uv=(Right1,Top1)
9: xy=(2,1) uv=(Right1,Bottom1)
10: xy=(2,1) uv=(Right1,Bottom1)
11: xy=(1,1) uv=(Left1 ,Bottom1)
12: xy=(1,0) uv=(Left1 ,Top1)
These 12 vertices define 4 triangles. The Top, Left, Bottom, and Right UV coordinates of the first square can be completely different from those of the second square, allowing each square to be textured by a different area of the texture atlas.
In your case, the 10x10 grid gives you 100 quads, or 200 triangles. With 200 triangles at 3 vertices each, that is 600 vertices to define, but it's a single draw call of 200 triangles (600 vertices). Each vertex has its own x, y, u, v coordinates. To change which tile a quad shows, you update the uv coordinates of its 6 vertices in the vertex buffer.
You will likely find that this is the most convenient and efficient approach; a rough sketch of the setup follows.
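As an illustration, here is a minimal sketch of filling that buffer and drawing it in one call (ATLAS_COLS, ATLAS_ROWS, tileIndexAt, and m_gridVBO are hypothetical placeholders, not from your code):

    #include <vector>

    std::vector<GLfloat> verts;
    verts.reserve(10 * 10 * 6 * 4); // 100 quads * 6 vertices * (x,y,u,v)

    for (int r = 0; r < 10; ++r) {
        for (int c = 0; c < 10; ++c) {
            int tile = tileIndexAt(c, r); // hypothetical per-cell tile lookup
            GLfloat u0 = (tile % ATLAS_COLS) / (GLfloat)ATLAS_COLS;
            GLfloat v0 = (tile / ATLAS_COLS) / (GLfloat)ATLAS_ROWS;
            GLfloat u1 = u0 + 1.0f / ATLAS_COLS;
            GLfloat v1 = v0 + 1.0f / ATLAS_ROWS;
            GLfloat x0 = (GLfloat)c, y0 = (GLfloat)r, x1 = c + 1.0f, y1 = r + 1.0f;
            // two triangles per quad, matching the 6-vertex layout above
            GLfloat quad[24] = { x0,y0,u0,v0,  x1,y0,u1,v0,  x1,y1,u1,v1,
                                 x1,y1,u1,v1,  x0,y1,u0,v1,  x0,y0,u0,v0 };
            verts.insert(verts.end(), quad, quad + 24);
        }
    }

    glBindBuffer(GL_ARRAY_BUFFER, m_gridVBO);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(GLfloat), verts.data(), GL_DYNAMIC_DRAW);
    // one glVertexAttribPointer for the vec4 (x,y,u,v) attribute, then:
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(verts.size() / 4));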
Advanced Approaches
There are more memory-efficient or convenient ways of setting this up: using multiple vertex streams to reduce duplication of vertices, and leveraging shaders to do the setup work, if you're willing to trade computation time for memory or convenience. Find the balance that is right for you, but grasp the basic technique above before trying to optimize.
In the multiple-stream approach, you could specify all the xy vertices separately from all the uv vertices to avoid duplication.
You could also specify a second set of texture coordinates that is just the top-left corner of the tile in the atlas, let the uv coordinates run from (0,0) (top left) to (1,1) (bottom right) for each quad, and let your shader scale and translate the uv coordinates to arrive at the final texture coordinates.
You could also specify a single uv coordinate for the top-left corner of the source area of each primitive and let a geometry shader complete the squares.
Even smarter, you could specify only the x,y coordinates (omitting the uv coordinates entirely) and, in your vertex shader, sample a texture that contains the "tile numbers" of each quad. You would sample this texture at coordinates based on the x,y values in the grid and transform the value you read into the uv coordinates in the atlas. To change a tile in this system, you just change one pixel in the tile-map texture.
Finally, you could skip generating the primitives entirely and derive everything from a single list sent to the geometry shader, generating the x,y coordinates of the grid, which get sent downstream to the vertex shader to complete the triangle geometry and uv coordinates. This is the most memory-efficient approach, but it relies on the GPU to compute the setup at runtime.
With a static 6-vertices-per-quad setup, you free up GPU processing at the cost of a little extra memory. Depending on what you need for performance, you may find that spending more memory to get a higher frame rate is desirable. Vertex buffers are tiny compared to textures anyway.
So as I said, you should start with the basic technique first as it's likely also the optimal solution for performance as well, especially if your map doesn't change very often.
You can upload all parameters to GPU memory and draw everything with only one draw call.
That way it's not necessary to update the vertex shader uniforms between tiles, and the CPU load should be minimal.
It's been 3 years since I used OpenGL, so I can only point you in the right direction.
Start reading some material, for instance:
https://ferransole.wordpress.com/2014/07/09/multidrawindirect/
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glDrawArraysIndirect.xhtml
Also, keep in mind that this is GL 4.x functionality; check the GL version support of your target platform (software and hardware). A rough sketch of the mechanism follows.
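As a sketch of what those links describe (untested, and the buffer contents and handle names here are hypothetical):

    #include <vector>

    // One indirect command per tile; the GPU reads these from a buffer,
    // so the CPU issues a single call regardless of tile count.
    typedef struct {
        GLuint count;         // vertices per tile quad (6)
        GLuint instanceCount; // 1
        GLuint first;         // first vertex of this tile in the VBO
        GLuint baseInstance;  // can be used to index per-tile data
    } DrawArraysIndirectCommand;

    std::vector<DrawArraysIndirectCommand> cmds(100);
    for (GLuint i = 0; i < 100; ++i)
        cmds[i] = { 6, 1, i * 6, i };

    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf); // indirectBuf: a generated buffer
    glBufferData(GL_DRAW_INDIRECT_BUFFER, cmds.size() * sizeof(cmds[0]), cmds.data(), GL_STATIC_DRAW);
    glMultiDrawArraysIndirect(GL_TRIANGLES, 0, (GLsizei)cmds.size(), 0);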

Anti-aliased lines using barycentric coordinates in OpenGL

I am drawing a line segment by joining two triangles. Each vertex of a triangle gets a unique set of coordinates as a varying variable (vec3 vBC). In the fragment shader, the value I get for this variable is interpolated, indicating the location of the fragment inside the triangle. I use this value to determine how far the fragment is from the edge of the triangle, which in turn is used to anti-alias the edges.
See the problem in the image below:
Notice the jagged edges in Rect 1. This is because I am not applying anti-aliasing to edges BC and DE, as these are internal edges. Because the length x is very small, the jagged edges on BC and DE close to vertices C and D become visible; these jagged edges are longer than length x. If I apply anti-aliasing to these two edges (BC and DE), I get Rect 2. This is because the smoothing process during anti-aliasing assigns an alpha value of 0.0 to fragments lying on or very close to an edge, so the anti-aliased BC and DE introduce a white diagonal line inside.
So my question is: how can I smooth Rect 1?
Fragment shader:
varying vec4 DestinationColor;
varying vec2 TexCoordOut;
varying vec3 vBC;
uniform sampler2D Texture;
uniform int TexEnabled;

float edgeFactor();

void main() {
    if (TexEnabled == 1) {
        gl_FragColor = texture2D(Texture, TexCoordOut) * DestinationColor;
    } else {
        gl_FragColor = vec4(DestinationColor.xyz, edgeFactor());
    }
}

float edgeFactor() {
    vec3 d = fwidth(vBC);
    vec3 a3 = smoothstep(vec3(0.0), d * 1.5, vBC);
    return a3.y;
}
In the shader code, a3.y refers to the second component (the distance between vertex D or C and the fragment) of the barycentric coordinate set; i.e. y represents the 1 in D(0,1,0), and the first and last 0 represent x and z respectively.
The algorithm is mostly sound, but you need some means of distinguishing an internal edge from an external edge, which you cannot do from the geometry alone. You need to duplicate vertices so that the vertices contributing to the internal edge can carry some unique attribute that makes them look different from the external edge, as sketched below. That said, it sounds expensive ...
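A rough, untested sketch of that idea (the vInternal varying is hypothetical, not from the original code): duplicated vertices carry a mask that forces the barycentric components of internal edges away from zero, so only external edges fade.

    varying vec3 vBC;
    varying vec3 vInternal; // per-vertex mask: 1.0 on components belonging to internal edges

    float edgeFactor() {
        vec3 bc = max(vBC, vInternal);     // internal-edge components never approach 0
        vec3 d  = fwidth(bc);
        vec3 a3 = smoothstep(vec3(0.0), d * 1.5, bc);
        return min(min(a3.x, a3.y), a3.z); // fade only near external edges
    }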
Alternatively, it might just be simpler to create a small texture that has a transparent alpha value towards its edges, applied across the line's width and repeated along its length. That would possibly be faster than all of the additional per-vertex data, especially on mobile with deferred geometry processing.

How to render a radial field in OpenGL?

How would I render a 2D radial field in OpenGL? I know I can render it pixel by pixel, but I'm wondering if there are more efficient solutions. I don't mind if it requires OpenGL 3+ functionality.
How familiar are you with shaders? I'm thinking an easy-ish answer would be to render a quad and write a fragment shader that colors the quad based on how far each pixel is from the center.
Roughly (the pseudocode fleshed out into untested GLSL):
Vertex shader:

uniform vec2 center; // e.g. vec2((x1+x2)/2.0, (y1+y2)/2.0), computed on the CPU and passed in
varying vec2 pos;

void main() {
    pos = gl_Vertex.xy; // interpolated position, passed to the fragment shader
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

Fragment shader:

uniform vec2 center;
varying vec2 pos;

void main() {
    float dist = distance(pos, center);
    // Now that we have the distance between each fragment and the center, we can do all kinds of stuff:
    gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0 - dist); // assuming you're drawing a unit square, this makes each pixel's opacity smoothly vary from 1 (right next to the center) to 0 (at the edge of the square)
    // gl_FragColor = vec4(dist, dist, dist, 1.0); // or vary each pixel's color from black (center) to white (edge)
    // etc, etc
}
Let me know if you need more detail

Using shaders to implement field of view on a 2D environment

I'm implementing dynamic field of view. I decided to use shaders to make the illumination better looking and to control how it affects the walls. Here is the scenario I'm working on:
http://i.imgur.com/QxZVyo7.jpg
I have a map with a flat floor and walls. Everything here is 2D; there is no 3D geometry, only 2D polygons that compose the walls.
Using the vertices of the polygons, I cast shadows to define the viewable area. (The purple lines are part of the mask I use in the next step.)
Using the shader when drawing the shadows on top of the scenario, I avoid obscuring the walls as well.
This way the shadows are cast dynamically along the walls as the field of view changes.
I have used the following shader to achieve this, but I feel it is overkill and really inefficient:
uniform sampler2D texture;
uniform sampler2D filterTexture;
uniform vec2 textureSize;
uniform float cellSize;
uniform sampler2D shadowTexture;

void main()
{
    vec2 position;
    vec4 filterPixel;
    vec4 shadowPixel;
    vec4 pixel = texture2D(texture, gl_TexCoord[0].xy);
    for (float i = 0.0; i <= cellSize * 2.0; i++)
    {
        position = gl_TexCoord[0].xy;
        position.y = position.y - (i / textureSize.y);
        filterPixel = texture2D(filterTexture, position);
        position.y = position.y - (1.0 / textureSize.y);
        shadowPixel = texture2D(shadowTexture, position); // the otherwise-unused shadowTexture uniform presumably belongs here
        if (shadowPixel == vec4(0.0)) { // all components zero
            if (filterPixel.r == 1.0)
            {
                if (filterPixel.b == 1.0) {
                    pixel.a = 0.0;
                    break;
                }
                else if (i <= cellSize)
                {
                    pixel.a = 0.0;
                    break;
                }
            }
        }
    }
    gl_FragColor = pixel;
}
Iterating like this for each fragment just to look for the red-colored pixel in the mask seems like a huge overhead, but I fail to see how to complete this task in any other way using shaders.
The solution here is really quite simple: use shadow maps.
Your situation may be 2D instead of 3D, but the basic concept is the same. You want to "shadow" areas based on whether there is an obstructive surface between some point in the world and a "light source" (in your case, the player character).
In 3D, shadow maps work by rendering the world from the perspective of the light source. This results in a 2D texture where the values represent the depth from the light (in a particular direction) to the nearest obstruction. When you render the scene for real, you check the current fragment's location by projecting it into the 2D depth texture (the shadow map). If the depth value you compute for the current fragment is closer than the nearest obstruction in the projected location in the shadow map, then the fragment is visible from the light. If not, then it isn't.
Your 2D version would have to do the same thing, only with one less dimension. You render your 2D world from the perspective of the "light source". Your 2D world in this case is really just the obstructing quads (you'll have to render them with line polygon filling). Any quads that obstruct sight should be rendered into the shadow map. Texture accesses are completely unnecessary; the only information you need is depth. Your shader doesn't even have to write a color. You render these objects by projecting the 2D space into a 1D texture.
This would look something like this:
X..X
XXXXXXXX..XXXXXXXXXXXXXXXXXXXX
X.............\.../..........X
X..............\./...........X
X...............C............X
X............../.\...........X
X............./...\..........X
X............/.....\.........X
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
C is the character's position; the dots are just regular, non-obstructing floor. The Xs are the walls. The lines from C represent the four directions you need to render the 2D lines from.
In 3D, to do shadow mapping for point lights, you have to render the scene 6 times, in 6 different directions into the faces of a cube shadow map. In 2D, you have to render the scene 4 times, in 4 different directions into 4 different 1D shadow maps. You can use a 1D array texture for this.
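Allocating that 1D array texture might look like this (a sketch; 1D array textures are allocated through glTexImage2D, with the height acting as the layer count, and shadowTex/width are hypothetical names):

    GLuint shadowTex;
    glGenTextures(1, &shadowTex);
    glBindTexture(GL_TEXTURE_1D_ARRAY, shadowTex);
    // width texels per direction, 4 layers (one per view direction)
    glTexImage2D(GL_TEXTURE_1D_ARRAY, 0, GL_DEPTH_COMPONENT24, width, 4, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_1D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);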
Once you have your shadow maps, you just use them in your shader to detect when a fragment is visible. To do that, you'll need a set of transforms from window space into the 4 different projections that represent the 4 directions of view that you rendered into. Only one of these will be used for any particular fragment, based on where the fragment is relative to the target.
To implement this, I'd start by getting a simple case of directional "shadowing" to work. That is, don't use a position; just a direction for a "light". That will test your ability to develop a 2D-to-1D projection matrix, as well as an appropriate camera-space matrix to transform your world-space quads into camera space. Once you have mastered that, you can get to work doing it 4 times with different projections. A sketch of the final lookup for the simple directional case follows.
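A minimal sketch of that per-fragment visibility test (untested; the uniform names and the packing of the 2D-to-1D projection into lightMatrix are assumptions):

    // shadowMap stores, for each projected coordinate s in [0,1], the depth of
    // the nearest obstruction along the light direction. lightMatrix is the
    // hypothetical 2D-to-1D projection, mapping world xy to (s, depth).
    uniform sampler1D shadowMap;
    uniform mat3 lightMatrix;
    varying vec2 worldPos;

    void main()
    {
        vec3 proj = lightMatrix * vec3(worldPos, 1.0);
        float s = proj.x;     // coordinate along the 1D shadow map
        float depth = proj.y; // distance along the light direction
        float nearest = texture1D(shadowMap, s).r;
        float lit = (depth <= nearest + 0.001) ? 1.0 : 0.0; // small bias to avoid acne
        gl_FragColor = vec4(vec3(lit), 1.0); // white where visible, black where shadowed
    }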

How to get pixel information inside a fragment shader?

In my fragment shader I can load a texture, then do this:
uniform sampler2D tex;

void main(void) {
    vec4 color = texture2D(tex, gl_TexCoord[0].st);
    gl_FragColor = color;
}
That sets the current pixel to the color value of the texture. I can modify these values, etc., and it works well.
But a few questions. How do I tell "which" pixel I am? For example, say I want to set pixel (100,100) to red and everything else to black. How do I do a
"if currentSelf.Position() == (100,100) then color = red, else color = black"?
I know how to set colors, but how do I get "my" location?
Secondly, how do I get values from a neighbor pixel?
I tried this:
vec4 nextColor = texture2D(tex, gl_TexCoord[1].st);
But it's not clear what that returns. If I'm pixel (100,100), how do I get the values from (101,100) or (100,101)?
How do I tell "which" pixel I am?
You're not a pixel. You're a fragment. There's a reason that OpenGL calls them "Fragment shaders"; it's because they aren't pixels yet. Indeed, not only may they never become pixels (via discard or depth tests or whatever), thanks to multisampling, multiple fragments can combine to form a single pixel.
If you want to tell where your fragment shader is executing in window space, use gl_FragCoord. Fragment positions are floating-point values, not integers, so you have to test against a range instead of a single "100, 100" value, as in the sketch below.
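For example (a sketch; note that gl_FragCoord measures from the lower-left corner by default, and a fragment covering pixel (100,100) lands at about (100.5, 100.5)):

    void main() {
        if (gl_FragCoord.x >= 100.0 && gl_FragCoord.x < 101.0 &&
            gl_FragCoord.y >= 100.0 && gl_FragCoord.y < 101.0) {
            gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // red
        } else {
            gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // black
        }
    }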
Secondly, how do I get values from a neighbor pixel?
If you're talking about the neighboring framebuffer pixel, you don't. Fragment shaders cannot arbitrarily read from the framebuffer, either in their own position or in a neighboring one.
If you're talking about accessing a neighboring texel from the one you accessed, then that's just a matter of biasing the texture coordinate you pass to texture2D. You have to get the size of the texture (since you're not using GLSL 1.30 or above, you have to pass it in manually), take the reciprocal of that size, and add or subtract those amounts from the S and T components of the texture coordinate.
Easy peasy.
Just compute the size of a pixel based on resolution. Then look up +1 and -1.
vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
gl_FragColor = (
    texture2D(u_image, v_texCoord) +
    texture2D(u_image, v_texCoord + vec2(onePixel.x, 0.0)) +
    texture2D(u_image, v_texCoord + vec2(-onePixel.x, 0.0))) / 3.0;
There's a good example here