What's the best way to draw a fullscreen quad in OpenGL 3.2? - opengl

I'm doing ray casting in the fragment shader. I can think of a couple ways to draw a fullscreen quad for this purpose. Either draw a quad in clip space with the projection matrix set to the identity matrix, or use the geometry shader to turn a point into a triangle strip. The former uses immediate mode, deprecated in OpenGL 3.2. The latter I use out of novelty, but it still uses immediate mode to draw a point.

I'm going to argue that the most efficient approach is to draw a single "full-screen" triangle. For a triangle to cover the full screen, it needs to be bigger than the actual viewport. In NDC (and also in clip space, if we set w=1), the viewport is always the [-1,1] square. For a triangle to just cover this area completely, two of its sides need to be twice as long as the sides of the viewport rectangle, so that the third side crosses the edge of the viewport. We can, for example, use the following coordinates (in counter-clockwise order): (-1,-1), (3,-1), (-1,3).
We also do not need to worry about the texcoords. To get the usual normalized [0,1] range across the visible viewport, we just need to make the corresponding texcoords for the vertices twice as big, and the barycentric interpolation will yield exactly the same results for any viewport pixel as it would when using a quad.
This approach can of course be combined with attribute-less rendering as suggested in demanze's answer:
out vec2 texcoords; // normalized [0,1] over the viewport-filling part of the triangle
void main() {
    vec2 vertices[3] = vec2[3](vec2(-1,-1), vec2(3,-1), vec2(-1, 3));
    gl_Position = vec4(vertices[gl_VertexID], 0, 1);
    texcoords = 0.5 * gl_Position.xy + vec2(0.5);
}
Why will a single triangle be more efficient?
This is not about the one saved vertex shader invocation, or the one less triangle for the front-end to handle. The most significant effect of using a single triangle is that it produces fewer fragment shader invocations.
Real GPUs always invoke the fragment shader for 2x2 pixel sized blocks ("quads") as soon as a single pixel of the primitive falls into such a block. This is necessary for calculating the window-space derivative functions (those are also implicitly needed for texture sampling, see this question).
If the primitive does not cover all 4 pixels in such a block, the remaining fragment shader invocations do no useful work (apart from providing data for the derivative calculations) and are so-called helper invocations (which can even be queried via the gl_HelperInvocation built-in variable in GLSL). See also Fabian "ryg" Giesen's blog article for more details.
If you render a quad with two triangles, both will have one edge going diagonally across the viewport, and on both triangles, you will generate a lot of useless helper invocations at the diagonal edge. The effect will be worst for a perfectly square viewport (aspect ratio 1). If you draw a single triangle, there will be no such diagonal edge (it lies outside of the viewport and won't concern the rasterizer at all), so there will be no additional helper invocations.
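This can be illustrated with a small software rasterizer sketch (a simplified model, not exactly what real hardware does): count the 2x2 blocks each primitive touches on a hypothetical 16x14 viewport and derive the helper invocations. The viewport is deliberately non-square so that no pixel center lands exactly on the diagonal, which keeps the coverage test trivial.

```python
# Count helper invocations for a full-screen quad (two triangles) vs. a
# single oversized triangle, on a hypothetical 16x14 viewport.
W, H = 16, 14  # non-square, so no pixel center lies exactly on the diagonal

def edge(a, b, p):
    # 2D cross product; >= 0 means p is on the left of (or on) edge a->b
    return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])

def covered(tri, px, py):
    # NDC position of the pixel center
    p = ((px + 0.5) / W * 2 - 1, (py + 0.5) / H * 2 - 1)
    a, b, c = tri
    return edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0

def invocations(tris):
    total_covered, total_invocations = 0, 0
    for qx in range(0, W, 2):          # iterate over the 2x2 pixel blocks
        for qy in range(0, H, 2):
            for tri in tris:           # each triangle is rasterized separately
                hit = [covered(tri, qx+dx, qy+dy)
                       for dx in (0, 1) for dy in (0, 1)]
                if any(hit):
                    total_invocations += 4      # the whole block is launched
                    total_covered += sum(hit)
    return total_covered, total_invocations - total_covered  # (real, helper)

two_tris = [((-1,-1), (1,-1), (1,1)), ((-1,-1), (1,1), (-1,1))]  # CCW quad
big_tri  = [((-1,-1), (3,-1), (-1,3))]                           # CCW triangle

print("quad:    ", invocations(two_tris))  # helpers > 0 along the diagonal
print("triangle:", invocations(big_tri))   # no helper invocations at all
```

The single triangle launches exactly one invocation per viewport pixel, while the two-triangle quad additionally launches helper invocations for every 2x2 block the diagonal crosses.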
Wait a minute, if the triangle extends across the viewport boundaries, won't it get clipped and actually put more work on the GPU?
If you read the textbook material about graphics pipelines (or even the GL spec), you might get that impression. But real-world GPUs use different approaches, such as guard-band clipping. I won't go into detail here (that would be a topic of its own; have a look at Fabian "ryg" Giesen's fine blog article for details), but the general idea is that the rasterizer will produce fragments only for pixels inside the viewport (or scissor rect) anyway, no matter whether the primitive lies completely inside it or not, so we can simply throw bigger triangles at it if both of the following are true:
a) the triangle only extends beyond the 2D top/bottom/left/right clipping planes (as opposed to the near/far planes in the z dimension, which are trickier to handle, especially because vertices may also lie behind the camera)
b) the actual vertex coordinates (and all intermediate results the rasterizer might calculate from them) are representable in the internal data formats the GPU's hardware rasterizer uses. The rasterizer will use fixed-point data types of implementation-specific width, while vertex coordinates are 32-bit single-precision floats. (That is basically what defines the size of the guard band.)
Our triangle is only a factor of 3 bigger than the viewport, so we can be very sure that it never needs to be clipped at all.
But is it worth it?
Well, the savings on fragment shader invocations are real (especially when you have a complex fragment shader), but the overall effect might be barely measurable in a real-world scenario. On the other hand, the approach is not more complicated than using a full-screen quad, and it uses less data, so even if it doesn't make a huge difference, it won't hurt, so why not use it?
Could this approach be used for all sorts of axis-aligned rectangles, not just fullscreen ones?
In theory, you can combine this with the scissor test to draw an arbitrary axis-aligned rectangle (and the scissor test will be very efficient, as it simply limits which fragments are produced in the first place; it isn't a real "test" in hardware that discards fragments). However, this requires you to change the scissor parameters for each rectangle you want to draw, which implies a lot of state changes and limits you to a single rectangle per draw call, so it won't be a good idea in most scenarios.

You can send two triangles creating a quad, with their vertex attributes set to -1/1 respectively.
You do not need to multiply them with any matrix in the vertex/fragment shader.
Here are some code samples, simple as it is :)
Vertex Shader:
const vec2 madd=vec2(0.5,0.5);
attribute vec2 vertexIn;
varying vec2 textureCoord;
void main() {
    textureCoord = vertexIn.xy*madd+madd; // scale vertex attribute to [0,1] range
    gl_Position = vec4(vertexIn.xy,0.0,1.0);
}
Fragment Shader :
uniform sampler2D t; // the texture to display (this declaration was missing)
varying vec2 textureCoord;
void main() {
    vec4 color1 = texture2D(t, textureCoord);
    gl_FragColor = color1;
}

No need to use a geometry shader, a VBO or any memory at all.
A vertex shader can generate the quad.
layout(location = 0) out vec2 uv;
void main()
{
    float x = float(((uint(gl_VertexID) + 2u) / 3u)%2u);
    float y = float(((uint(gl_VertexID) + 1u) / 3u)%2u);
    gl_Position = vec4(-1.0f + x*2.0f, -1.0f+y*2.0f, 0.0f, 1.0f);
    uv = vec2(x, y);
}
Bind an empty VAO. Send a draw call for 6 vertices.
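The index arithmetic is easy to sanity-check outside GLSL; here is the same expression transcribed to Python (just a sketch to verify the math), showing that vertex IDs 0..5 produce two counter-clockwise triangles covering the [-1,1] quad:

```python
# Replicate the vertex-ID math from the shader above for gl_VertexID = 0..5.
def corner(vertex_id):
    x = ((vertex_id + 2) // 3) % 2
    y = ((vertex_id + 1) // 3) % 2
    return (-1.0 + x * 2.0, -1.0 + y * 2.0)  # clip-space position; uv is (x, y)

positions = [corner(i) for i in range(6)]
print(positions)
# first triangle:  (-1,-1), (1,-1), (1,1)
# second triangle: (1,1), (-1,1), (-1,-1)
```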

To output a fullscreen quad, a geometry shader can be used:
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
out vec2 texcoord;
void main()
{
    gl_Position = vec4( 1.0, 1.0, 0.5, 1.0 );
    texcoord = vec2( 1.0, 1.0 );
    EmitVertex();
    gl_Position = vec4(-1.0, 1.0, 0.5, 1.0 );
    texcoord = vec2( 0.0, 1.0 );
    EmitVertex();
    gl_Position = vec4( 1.0,-1.0, 0.5, 1.0 );
    texcoord = vec2( 1.0, 0.0 );
    EmitVertex();
    gl_Position = vec4(-1.0,-1.0, 0.5, 1.0 );
    texcoord = vec2( 0.0, 0.0 );
    EmitVertex();
    EndPrimitive();
}
Vertex shader is just empty:
#version 330 core
void main()
{
}
To use this shader you can issue a dummy draw command with an empty VAO bound (no vertex buffer is needed):
glDrawArrays(GL_POINTS, 0, 1);

This is similar to the answer by demanze, but I would argue it's easier to understand. Also, this one draws with only 4 vertices by using TRIANGLE_STRIP.
#version 300 es
out vec2 textureCoords;
void main() {
    const vec2 positions[4] = vec2[](
        vec2(-1, -1),
        vec2(+1, -1),
        vec2(-1, +1),
        vec2(+1, +1)
    );
    const vec2 coords[4] = vec2[](
        vec2(0, 0),
        vec2(1, 0),
        vec2(0, 1),
        vec2(1, 1)
    );
    textureCoords = coords[gl_VertexID];
    gl_Position = vec4(positions[gl_VertexID], 0.0, 1.0);
}

The following comes from the draw function of the class that draws fbo textures to a screen aligned quad.
Gl.glUseProgram(shad);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, vbo);
Gl.glEnableVertexAttribArray(0);
Gl.glEnableVertexAttribArray(1);
Gl.glVertexAttribPointer(0, 3, Gl.GL_FLOAT, Gl.GL_FALSE, 0, voff);
Gl.glVertexAttribPointer(1, 2, Gl.GL_FLOAT, Gl.GL_FALSE, 0, coff);
Gl.glActiveTexture(Gl.GL_TEXTURE0);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, fboc);
Gl.glUniform1i(tileLoc, 0);
Gl.glDrawArrays(Gl.GL_QUADS, 0, 4);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, 0);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, 0);
Gl.glUseProgram(0);
The actual quad itself and the coords come from:
private float[] v=new float[]{ -1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
-1.0f, 1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f
};
The binding and setup of the VBOs I leave to you.
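For reference, the voff and coff arguments passed to glVertexAttribPointer in the draw function above are byte offsets into this packed array; here is a quick sketch of the arithmetic (assuming 4-byte floats and the tight packing shown, since the setup code isn't included):

```python
# Byte offsets into the packed vertex array v: four 3-component positions
# come first, then four 2-component texcoords.
FLOAT_SIZE = 4                 # sizeof(float) in bytes
position_components = 4 * 3    # 4 vertices * (x, y, z)
texcoord_components = 4 * 2    # 4 vertices * (s, t)

voff = 0                                 # positions start at the beginning
coff = position_components * FLOAT_SIZE  # texcoords follow all positions
print(voff, coff)  # 0 48
```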
The vertex shader:
#version 330
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 coord;
out vec2 coords;
void main() {
    coords = coord.st;
    gl_Position = vec4(pos, 1.0);
}
Because the position is raw, that is, not multiplied by any matrix, the quad's (-1,-1) to (1,1) corners fit the viewport exactly. Look for Alfonse's tutorial linked off any of his posts on opengl.org.

Related

Artifacts with rendering texture with horizontal and vertical lines with OpenGL

I created 8x8 pixel bitmap letters to render them with OpenGL, but sometimes, depending on scaling I get weird artifacts as shown below in the image. Texture filtering is set to nearest pixel. It looks like rounding issue, but how could there be some if the line is perfectly horizontal.
Left original 8x8, middle scaled to 18x18, right scaled to 54x54.
Vertex data are unsigned bytes in format (x-offset, y-offset, letter). Here is full code:
vertex shader:
#version 330 core
layout(location = 0) in uvec3 Data;
uniform float ratio;
uniform float font_size;
out float letter;
void main()
{
    letter = Data.z;
    vec2 position = vec2(float(Data.x) / ratio, Data.y) * font_size - 1.0f;
    position.y = -position.y;
    gl_Position = vec4(position, 0.0f, 1.0f);
}
geometry shader:
#version 330 core
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
uniform float ratio;
uniform float font_size;
out vec3 texture_coord;
in float letter[];
void main()
{
    // TODO: pre-calculate
    float width = font_size / ratio;
    float height = -font_size;
    texture_coord = vec3(0.0f, 0.0f, letter[0]);
    gl_Position = gl_in[0].gl_Position + vec4(0.0f, height, 0.0f, 0.0f);
    EmitVertex();
    texture_coord = vec3(1.0f, 0.0f, letter[0]);
    gl_Position = gl_in[0].gl_Position + vec4(width, height, 0.0f, 0.0f);
    EmitVertex();
    texture_coord = vec3(0.0f, 1.0f, letter[0]);
    gl_Position = gl_in[0].gl_Position + vec4(0.0f, 0.0f, 0.0f, 0.0f);
    EmitVertex();
    texture_coord = vec3(1.0f, 1.0f, letter[0]);
    gl_Position = gl_in[0].gl_Position + vec4(width, 0.0f, 0.0f, 0.0f);
    EmitVertex();
    EndPrimitive();
}
fragment shader:
#version 330 core
in vec3 texture_coord;
uniform sampler2DArray font_texture_array;
out vec4 output_color;
void main()
{
    output_color = texture(font_texture_array, texture_coord);
}
I had the same problem developing with FreeType and OpenGL. And after days of researching and scratching my head, I found the solution. In my case, I had to explicitly call glBlendColor. Once I did that, I did not observe any more artifacts.
Here is a snippet:
//Set Viewport
glViewport(0, 0, FIXED_WIDTH, FIXED_HEIGHT);
//Enable Blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendColor(1.0f, 1.0f, 1.0f, 1.0f); // without this I was getting artifacts; it is important to call it explicitly
//Set Alignment requirement to 1 byte
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
I figured out the solution after reviewing the source code of this OpenGL-Freetype library on github: opengl-freetype library
Well, when using nearest filtering, you will see such issues if your sample location is very close to the boundary between two texels. And since the tex coords are interpolated separately for each fragment you are drawing, slight numerical inaccuracies will result in jumping between those two texels.
When you draw an 8x8 texture onto an 18x18-pixel rectangle, and your rectangle is perfectly aligned to the output pixel raster, you are almost guaranteed to trigger that behavior:
Looking at the texel coordinates will then reveal that for the very bottom output pixel, the t coordinate would be interpolated to 1/(2*18) = 1/36. Going one pixel up adds 1/18 = 2/36 to the t coordinate, so for the fifth row from the bottom, it would be 9/36.
So for the 8x8-texel texture you are sampling from, you are actually sampling at the unnormalized texel coordinate (9/36)*8 == 2.0. This is exactly the boundary between the second and third row of your texture. Since the texture coordinates for each fragment are interpolated barycentrically between the tex coords assigned to the three vertices forming the triangle, there can be slight inaccuracies, and even the slightest inaccuracy representable in floating-point format will result in flipping between the two texels in this case.
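The arithmetic above can be verified numerically; this little sketch walks the 18 output rows and reports which ones sample exactly on a texel boundary:

```python
TEXTURE_SIZE = 8   # 8x8-texel glyph
RECT_SIZE = 18     # drawn into an 18x18-pixel rectangle

for row in range(RECT_SIZE):
    t = (2 * row + 1) / (2 * RECT_SIZE)  # interpolated t at the pixel center
    texel = t * TEXTURE_SIZE             # unnormalized texel coordinate
    if texel == int(texel):              # exactly on a texel boundary?
        print(f"row {row}: texel coordinate {texel} lies on a boundary")
```

With these numbers, rows 4 and 13 land exactly on texel boundaries (coordinates 2.0 and 6.0), which matches the rows where the artifacts appear.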
I think your approach is just not good. Scaling bitmap fonts is always problematic (except maybe for integral scale factors). If you want nice-looking scalable texture fonts, I recommend looking into signed distance fields. It is quite a simple and powerful technique, and there are tools available to generate the necessary distance field textures.
If you are looking for a quick hack, you could also just offset your output rectangle slightly. You basically must keep the offset within [-0.5,0.5] pixels (so that no different set of fragments is generated during rasterization), and you must make sure that none of the potential sample locations lies close to an integer, so the offset will depend on the actual scale factor.

OpenGL: How do I sort points depending on the camera distance?

I have a structure made of 100,000 spheres as point sprites using OpenGL. I have an issue when I rotate the structure around its centre axis.
The point sprites are rendered in the order they appear in their array; that means the last ones overlap the first point sprites created, without taking the depth in three-dimensional space into account.
How can I sort and rearrange the order of the point sprites in real time so that the three-dimensional perspective is always preserved? I guess the idea is to compare the camera position against the particles and then sort the array so that the closer particles always appear in front.
Is it possible to fix this using shaders?
Here is my shader:
shader.frag
#version 120
uniform sampler2D tex;
varying vec4 inColor;
//uniform vec3 lightDir;
void main (void) {
    gl_FragColor = texture2D(tex, gl_TexCoord[0].st) * inColor;
}
shader vert
#version 120
// define a few constants here, for faster rendering
attribute float particleRadius;
attribute vec4 myColor;
varying vec4 inColor;
void main(void)
{
    vec4 eyeCoord = vec4(gl_ModelViewMatrix * gl_Vertex);
    gl_Position = gl_ProjectionMatrix * eyeCoord;
    float distance = length(eyeCoord);
    float attenuation = 700.0 / distance;
    gl_PointSize = particleRadius * attenuation;
    //gl_PointSize = 1.0 / distance * SIZE;
    //gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_Color;
    inColor = myColor;
}
draw method:
void MyApp::draw(){
    //gl::clear( ColorA( 0.0f, 0.0f, 0.0f, 0.0f ), true );
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    // SET MATRICES TO WINDOW
    gl::setMatricesWindow( getWindowSize(), false );
    gl::setViewport( getWindowBounds() );
    gl::enableAlphaBlending();
    gl::enable( GL_TEXTURE_2D );
    gl::enable(GL_ALPHA_TEST);
    glEnable(GL_DEPTH_TEST);
    gl::color( ColorA( 1.0f, 1.0f, 1.0f, 1.0f ) );
    mShader.bind();
    // store current OpenGL state
    glPushAttrib( GL_POINT_BIT | GL_ENABLE_BIT );
    // enable point sprites and initialize it
    gl::enable( GL_POINT_SPRITE_ARB );
    glPointParameterfARB( GL_POINT_FADE_THRESHOLD_SIZE_ARB, -1.0f );
    glPointParameterfARB( GL_POINT_SIZE_MIN_ARB, 0.1f );
    glPointParameterfARB( GL_POINT_SIZE_MAX_ARB, 200.0f );
    // allow vertex shader to change point size
    gl::enable( GL_VERTEX_PROGRAM_POINT_SIZE );
    GLint thisColor = mShader.getAttribLocation( "myColor" );
    glEnableVertexAttribArray(thisColor);
    glVertexAttribPointer(thisColor,4,GL_FLOAT,true,0,theColors);
    GLint particleRadiusLocation = mShader.getAttribLocation( "particleRadius" );
    glEnableVertexAttribArray(particleRadiusLocation);
    glVertexAttribPointer(particleRadiusLocation, 1, GL_FLOAT, true, 0, mRadiuses);
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
    glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, mPositions);
    mTexture.enableAndBind();
    glDrawArrays( GL_POINTS, 0, mNumParticles );
    mTexture.unbind();
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableVertexAttribArrayARB(thisColor);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableVertexAttribArrayARB(particleRadiusLocation);
    // unbind shader
    mShader.unbind();
    // restore OpenGL state
    glPopAttrib();
}
You have two different blending cases in void MyApp::draw():
additive, (src + dst), which is order-independent
alpha, (src * src.a + dst * (1.0 - src.a)), which is order-dependent
The first blending function would not cause the issues you are discussing, so I am assuming that mRoom.isPowerOn() == false and that we are dealing with alpha blending.
To solve order-dependency issues in the latter case, you need to transform your points into eye space and sort them by their z coordinates. The problem here is that this is not something that can easily be solved in GLSL: you need to sort the data before your vertex shader runs (so the most straightforward approach involves doing this on the CPU). GPU-based solutions are possible, and may be necessary to do this in real time given the huge number of data points involved, but you should start out by doing this on the CPU and figure out where to go from there.
When implementing the sort, keep in mind that point sprites are always screen aligned (uniform z value in eye-space), so you do not have to worry about intersection (a point sprite will either be completely in-front of, behind, or parallel to any other point sprite it overlaps). This makes sorting them a lot simpler than other types of geometry, which may have to be split at points of intersection and drawn twice for proper ordering.
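As a starting point, here is a minimal CPU-side sketch of that sort (hypothetical data and a plain Python model of the modelview transform; in the real application you would transform and reorder your mPositions array each frame before uploading it):

```python
# Sort particles back-to-front by eye-space depth before issuing the draw call.
# In eye space the camera looks down -z, so more-negative z is farther away.
def transform_z(modelview, p):
    # eye_z = third row of the 4x4 modelview matrix (row-major here) times (x, y, z, 1)
    x, y, z = p
    m = modelview[2]
    return m[0]*x + m[1]*y + m[2]*z + m[3]

def sort_back_to_front(modelview, positions):
    # ascending eye-space z: farthest (most negative z) particles come first
    return sorted(positions, key=lambda p: transform_z(modelview, p))

identity = [[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]]
particles = [(0, 0, -1), (0, 0, -5), (0, 0, -3)]
print(sort_back_to_front(identity, particles))  # farthest (z=-5) drawn first
```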

Display Part of Texture in GLSL

I'm using GLSL to draw sprites from a sprite-sheet. I'm using jME 3; still, there are only small differences, mostly with regard to deprecated functions.
The most important part of drawing a sprite from a sprite sheet is to draw only a subset/range of pixels, for example the range from (100, 0) to (200, 100). In the following test case sprite-sheet, and using the previous bounds, only the green part of the sprite-sheet would be drawn.
This is what I have so far:
Definition:
MaterialDef Solid Color {
    // This is the list of user-defined variables to be used in the shader
    MaterialParameters {
        Vector4 Color
        Texture2D ColorMap
    }
    Technique {
        VertexShader GLSL100: Shaders/tc_s1.vert
        FragmentShader GLSL100: Shaders/tc_s1.frag
        WorldParameters {
            WorldViewProjectionMatrix
        }
    }
}
.vert file:
uniform mat4 g_WorldViewProjectionMatrix;
attribute vec3 inPosition;
attribute vec4 inTexCoord;
varying vec4 texture_coordinate;
void main(){
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    texture_coordinate = vec4(inTexCoord);
}
.frag:
uniform vec4 m_Color;
uniform sampler2D m_ColorMap;
varying vec4 texture_coordinate;
void main(){
    vec4 color = vec4(m_Color);
    vec4 tex = texture2D(m_ColorMap, texture_coordinate);
    color *= tex;
    gl_FragColor = color;
}
In jME 3, inTexCoord refers to gl_MultiTexCoord0, and inPosition refers to gl_Vertex.
As you can see, I tried to give the texture_coordinate a vec4 type, rather than a vec2, so as to be able to reference its p and q values (texture_coordinate.p and texture_coordinate.q). Modifying them only resulted in different hues.
m_Color refers to the color, inputted by the user, and serves the purpose of altering the hue. In this case, it should be disregarded.
So far, the shader works as expected and the texture displays correctly.
I've been using resources and tutorials from NeHe (http://nehe.gamedev.net/article/glsl_an_introduction/25007/) and Lighthouse3D (http://www.lighthouse3d.com/tutorials/glsl-tutorial/simple-texture/).
Which functions/values should I alter to get the desired effect of displaying only part of the texture?
Generally, if you want to only display part of a texture, then you change the texture coordinates associated with each vertex. Since you don't show your code for how you're telling OpenGL about your vertices, I'm not sure what to suggest. But in general, if you're using older deprecated functions, instead of doing this:
// Lower Left of triangle
glTexCoord2f(0,0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(1,0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(1,1);
glVertex3f(x2,y2,z2);
You could do this:
// Lower Left of triangle
glTexCoord2f(1.0 / 3.0, 0.0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(2.0 / 3.0, 0.0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(2.0 / 3.0, 1.0);
glVertex3f(x2,y2,z2);
If you're using VBOs, then you need to modify your array of texture coordinates to access the appropriate section of your texture in a similar manner.
For the sampler2D the texture coordinates are normalized so that the leftmost and bottom-most coordinates are 0, and the rightmost and topmost are 1. So for your example of a 300-pixel-wide texture, the green section would be between 1/3rd and 2/3rds the width of the texture.
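For the concrete numbers in the question, a small helper (a hypothetical name, just to illustrate the mapping) converts a pixel rectangle on the sheet into these normalized coordinates:

```python
def pixel_rect_to_uv(x0, y0, x1, y1, sheet_w, sheet_h):
    # Normalize a pixel-space rectangle to [0,1] texture coordinates.
    return (x0 / sheet_w, y0 / sheet_h, x1 / sheet_w, y1 / sheet_h)

# The (100,0)-(200,100) range on a 300x100 sprite-sheet: u in [1/3, 2/3].
u0, v0, u1, v1 = pixel_rect_to_uv(100, 0, 200, 100, 300, 100)
print(u0, v0, u1, v1)  # 0.333... 0.0 0.666... 1.0
```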

GLSL per pixel point light shading

VC++, OpenGL, SDL
I am wondering if there is a way to achieve smoother shading across a single Quad of geometry. Right now, the shading looks smooth with my point light, however, the intensity rises along the [/] diagonal subdivision of the face. The lighting is basically non-visible in-between vertices.
This is what happens as the light moves from left to right
As I move the light across the surface, it does this consistently. Gets brightest at each vertex and fades from there.
Am I forced to up the subdivision to achieve a smoother, more radial shade? or is there a method around this?
Here are the shaders I am using:
vert
varying vec3 vertex_light_position;
varying vec3 vertex_normal;
void main()
{
    vertex_normal = normalize(gl_NormalMatrix * gl_Normal);
    vertex_light_position = normalize(gl_LightSource[0].position.xyz);
    gl_FrontColor = gl_Color;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
frag
varying vec3 vertex_light_position;
varying vec3 vertex_normal;
void main()
{
    float diffuse_value = max(dot(vertex_normal, vertex_light_position), 0.0);
    gl_FragColor = gl_Color * diffuse_value;
}
My geometry in case anyone is wondering:
glBegin(GL_QUADS);
glNormal3f(0.0f, 0.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(pos_x, pos_y - size_y, depth);
glTexCoord2f(1.0f, 1.0f); glVertex3f(pos_x + size_x, pos_y - size_y, depth);
glTexCoord2f(1.0f, 0.0f); glVertex3f(pos_x + size_x, pos_y, depth);
glTexCoord2f(0.0f, 0.0f); glVertex3f(pos_x, pos_y, depth);
glEnd();
There are a couple things I see as being possible issues.
Unless I am mistaken, you are using normalize(gl_LightSource[0].position.xyz); to calculate the light vector, but that is based solely on the position of the light, not on the vertex you are operating on. That means the value there will be the same for every vertex and will only change based on the current modelview matrix and light position. I would think that calculating the light vector as something like normalize(gl_LightSource[0].position.xyz - (gl_ModelViewMatrix * gl_Vertex).xyz) would be closer to what you want.
Secondly, you ought to normalize your vectors in the fragment shader as well as in the vertex shader, since the interpolation of two unit vectors is not guaranteed to be a unit vector itself.
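The "bright at the vertices" look follows directly from interpolating a value that is nonlinear in position. A quick numeric sketch (plain Python, with a light placed directly above the center of a unit quad) shows how far interpolated per-vertex diffuse can drift from the true per-fragment value:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def diffuse(p, light, normal=(0.0, 0.0, 1.0)):
    # Lambertian term with a per-point light direction (light - p)
    l = normalize(tuple(a - b for a, b in zip(light, p)))
    return max(sum(a * b for a, b in zip(normal, l)), 0.0)

light = (0.0, 0.0, 1.0)                       # directly above the quad center
corners = [(-1, -1, 0), (1, -1, 0), (1, 1, 0), (-1, 1, 0)]

per_fragment_center = diffuse((0.0, 0.0, 0.0), light)            # true value: 1.0
interpolated_center = sum(diffuse(c, light) for c in corners) / 4  # ~0.577

print(per_fragment_center, interpolated_center)
```

Computing the diffuse term per fragment (with the corrected light vector) removes exactly this gap, which is why no extra subdivision is needed.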
I think the problem is with light vector...
I suggest using:
vec3 light_vector = normalize(gl_LightSource[0].position.xyz - vertex_pos);
vertex_pos can be calculated using:
vec3 vertex_pos = (gl_ModelViewMatrix * gl_Vertex).xyz;
Notice that all the vectors should be in the same space (camera, world, object)
Am I forced to up the subdivision to achieve a smoother, more radial
shade? or is there a method around this?
No, you are free to do whatever you want. The only code you need to change is the fragment shader. Try to play with it and see if you get a better result.
For example, you could do this :
diffuse_value = pow(diffuse_value, 3.0);
as explained here.

GLSL rendering in 2D

(OpenGL 2.0)
I managed to do some nice text rendering in OpenGL and decided to make it shader-based.
However, the rendered font texture that looked nice in fixed-pipeline mode looks unpleasant in GLSL mode.
In fixed-pipeline mode, I don't see any difference between GL_LINEAR and GL_NEAREST filtering; that's because the texture doesn't really need filtering, since I use an orthographic projection and align the quad's width and height to the texture coordinates.
Now when I try to render it with a shader, I can see some very bad GL_NEAREST filtering artifacts, and with GL_LINEAR the texture appears too blurry.
Fixed pipeline, satisfying, best quality (no difference between linear/nearest):
GLSL, nearest (visible artifacts, for example, look at fraction glyphs):
GLSL, linear (too blurry):
Shader program:
Vertex shader was successfully compiled to run on hardware.
Fragment shader was successfully compiled to run on hardware.
Fragment shader(s) linked, vertex shader(s) linked.
------------------------------------------------------------------------------------------
attribute vec2 at_Vertex;
attribute vec2 at_Texcoord;
varying vec2 texCoord;
void main(void) {
    texCoord = at_Texcoord;
    gl_Position = mat4(0.00119617, 0, 0, 0, 0, 0.00195503, 0, 0, 0, 0, -1, 0, -1, -1, -0, 1) * vec4(at_Vertex.x, at_Vertex.y, 0, 1);
}
-----------------------------------------------------------------------------------------
varying vec2 texCoord;
uniform sampler2D diffuseMap;
void main(void) {
    gl_FragColor = texture2D(diffuseMap, texCoord);
}
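As an aside, the hard-coded mat4 in the vertex shader above is consistent with a standard pixels-to-NDC orthographic projection; this sketch reconstructs it (the 1672x1023 window size is an inference from the constants, not something stated in the question):

```python
def ortho_pixels_to_ndc(width, height):
    # Column-major mat4 mapping (0,0)-(width,height) pixels to [-1,1] NDC,
    # in the same element order as the GLSL mat4(...) constructor above.
    return [2.0 / width, 0, 0, 0,
            0, 2.0 / height, 0, 0,
            0, 0, -1, 0,
            -1, -1, 0, 1]

m = ortho_pixels_to_ndc(1672, 1023)
print(round(m[0], 8), round(m[5], 8))  # ~0.00119617, ~0.00195503
```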
Quad rendering, fixed:
glTexCoord2f (0.0f, 0.0f);
glVertex2f (40.0f, 40.0f);
glTexCoord2f (0.0f, 1.0f);
glVertex2f ((font.tex_r.w+40.0f), 40.0f);
glTexCoord2f (1.0f, 1.0f);
glVertex2f ((font.tex_r.w+40.0f), (font.tex_r.h+40.0f));
glTexCoord2f (1.0f, 0.0f);
glVertex2f (40.0f, (font.tex_r.h+40.0f));
Quad rendering, shader-mode:
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 0.0f, 0.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, 40.0f, 40.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 0.0f, 1.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, (font.tex_r.w+40.0f), 40.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 1.0f, 1.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, (font.tex_r.w+40.0f), (font.tex_r.h+40.0f));
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 1.0f, 0.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, 40.0f, (font.tex_r.h+40.0f));
In both cases the matrices are calculated from the same source, though for performance reasons, as you can see, I'm writing the constant values into the shader code with the help of a function like this (if that is the reason, how do I write them properly?):
std::ostringstream buffer;
buffer << f;
return buffer.str(); // note: returning buffer.str().c_str() here would hand out a dangling pointer, since the temporary string is destroyed immediately
where "f" is some double value.
EDIT:
Result of my further research is a little bit surprising.
Now I'm multiplying the vertex coordinates by the same orthographic matrix on the CPU (not in the vertex shader like before), and I'm leaving the vertex untouched in the vertex shader, just passing it through to gl_Position. I couldn't believe it, but this really works and actually solves my problem. Every operation is done on floats, just as on the GPU.
It seems like the matrix/vertex multiplication is much more accurate on the CPU.
question is: why ?
EDIT: Actually, the whole reason was different matrix sources! A really, really small bug!
Nicol was nearest to the solution.
though for performance reasons, as you can see, I'm writing constant values into the shader code
That's not going to help your performance. Uploading a single matrix uniform is pretty standard for most OpenGL shaders, and will cost you nothing of significance in terms of performance.
Seems like matrix/vertex multiplication is much more accurate on CPU. question is: why ?
It's not more accurate; it's simply using a different matrix. If you passed that matrix to GLSL via a shader uniform, you would probably get the same result. The matrix you use in the shader is not the same matrix that you used on the CPU.