Artifacts when rendering textures with horizontal and vertical lines in OpenGL

I created 8x8 pixel bitmap letters to render with OpenGL, but sometimes, depending on the scaling, I get weird artifacts, as shown in the image below. Texture filtering is set to nearest pixel. It looks like a rounding issue, but how could there be one if the line is perfectly horizontal?
Left: original 8x8; middle: scaled to 18x18; right: scaled to 54x54.
Vertex data are unsigned bytes in the format (x-offset, y-offset, letter). Here is the full code:
vertex shader:
#version 330 core
layout(location = 0) in uvec3 Data;
uniform float ratio;
uniform float font_size;
out float letter;
void main()
{
    letter = Data.z;
    vec2 position = vec2(float(Data.x) / ratio, Data.y) * font_size - 1.0f;
    position.y = -position.y;
    gl_Position = vec4(position, 0.0f, 1.0f);
}
geometry shader:
#version 330 core
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
uniform float ratio;
uniform float font_size;
out vec3 texture_coord;
in float letter[];
void main()
{
    // TODO: pre-calculate
    float width = font_size / ratio;
    float height = -font_size;
    texture_coord = vec3(0.0f, 0.0f, letter[0]);
    gl_Position = gl_in[0].gl_Position + vec4(0.0f, height, 0.0f, 0.0f);
    EmitVertex();
    texture_coord = vec3(1.0f, 0.0f, letter[0]);
    gl_Position = gl_in[0].gl_Position + vec4(width, height, 0.0f, 0.0f);
    EmitVertex();
    texture_coord = vec3(0.0f, 1.0f, letter[0]);
    gl_Position = gl_in[0].gl_Position + vec4(0.0f, 0.0f, 0.0f, 0.0f);
    EmitVertex();
    texture_coord = vec3(1.0f, 1.0f, letter[0]);
    gl_Position = gl_in[0].gl_Position + vec4(width, 0.0f, 0.0f, 0.0f);
    EmitVertex();
    EndPrimitive();
}
fragment shader:
#version 330 core
in vec3 texture_coord;
uniform sampler2DArray font_texture_array;
out vec4 output_color;
void main()
{
    output_color = texture(font_texture_array, texture_coord);
}

I had the same problem developing with FreeType and OpenGL. After days of researching and scratching my head, I found the solution: in my case, I had to explicitly call glBlendColor. Once I did that, I no longer observed any artifacts.
Here is a snippet:
//Set Viewport
glViewport(0, 0, FIXED_WIDTH, FIXED_HEIGHT);
//Enable Blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendColor(1.0f, 1.0f, 1.0f, 1.0f); // Without this I was having artifacts; it is important to call it explicitly
//Set Alignment requirement to 1 byte
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
I figured out the solution after reviewing the source code of this OpenGL-FreeType library on GitHub: opengl-freetype library

Well, when using nearest filtering, you will see such issues if your sample location is very close to the boundary between two texels. And since the texture coordinates are interpolated separately for each fragment you are drawing, slight numerical inaccuracies will result in jumping between those two texels.
When you draw an 8x8 texture onto an 18x18 pixel rectangle, and your rectangle is perfectly aligned to the output pixel raster, you are almost guaranteed to trigger that behavior:
Looking at the texel coordinates will then reveal that for the very bottom output pixel, the texture coordinate t would be interpolated to 1/(2*18) = 1/36. Going one pixel up adds 1/18 = 2/36 to the t coordinate, so for the fifth row from the bottom, it would be 9/36.
So for the 8x8 texel texture you are sampling from, you are actually sampling at the unnormalized texel coordinate (9/36)*8 = 2.0. This is exactly the boundary between the second and third row of your texture. Since the texture coordinates for each fragment are interpolated by barycentric interpolation between the tex coords assigned to the three vertices forming the triangle, there can be slight inaccuracies. And even the slightest inaccuracy representable in floating-point format will result in flipping between two texels in this case.
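To see this concretely, here is a small standalone C++ sketch (mine, not from the question) that reproduces the arithmetic above, printing the unnormalized texel coordinate for every output row of the 18-pixel case; row 4 (the fifth from the bottom) lands exactly on the 2.0 boundary:
#include <cstdio>

int main() {
    const int dstSize = 18; // destination rectangle height in pixels
    const int texSize = 8;  // source texture height in texels
    for (int row = 0; row < dstSize; ++row) {
        float t = (row + 0.5f) / dstSize;  // sample at the pixel center
        float texel = t * texSize;         // unnormalized texel coordinate
        printf("row %2d: t = %f, texel = %f\n", row, t, texel);
    }
    return 0;
}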
I think your approach is just not good. Scaling bitmap fonts is always problematic (except perhaps for integral scale factors). If you want nice-looking scalable texture fonts, I recommend looking into signed distance fields. It is quite a simple and powerful technique, and there are tools available to generate the necessary distance field textures.
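For reference, a minimal signed-distance-field fragment shader sketch could look like the following (my example, not the question's code; it assumes the distance field is stored in the texture's red channel, with 0.5 marking the glyph outline):
#version 330 core
in vec3 texture_coord;
uniform sampler2DArray font_texture_array;
out vec4 output_color;
void main()
{
    float dist = texture(font_texture_array, texture_coord).r;
    // smooth the edge over roughly one screen pixel
    float w = fwidth(dist);
    float alpha = smoothstep(0.5 - w, 0.5 + w, dist);
    output_color = vec4(1.0, 1.0, 1.0, alpha);
}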
If you are looking for a quick hack, you could also just offset your output rectangle slightly. You basically must keep the offset in [-0.5, 0.5] pixels (so that no different fragments are generated during rasterization), and you must make sure that none of the potential sample locations lie close to an integer, so the offset will depend on the actual scale factor.
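A sketch of that hack applied to the question's vertex shader might look like this (viewport_size is an assumed new uniform holding the window size in pixels; as said, the safe offset value depends on the actual scale factor):
#version 330 core
layout(location = 0) in uvec3 Data;
uniform float ratio;
uniform float font_size;
uniform vec2 viewport_size; // assumed uniform, not in the original code
out float letter;
void main()
{
    letter = Data.z;
    vec2 position = vec2(float(Data.x) / ratio, Data.y) * font_size - 1.0f;
    position.y = -position.y;
    // nudge by a quarter of an output pixel (one pixel in NDC is 2/viewport_size)
    position += 0.5f / viewport_size;
    gl_Position = vec4(position, 0.0f, 1.0f);
}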

Related

Fragment shader not creating gradient like light in OpenGL GLSL

I am trying to understand how to manipulate my renderings with shaders. I haven't changed the projection matrix of the scene, but I draw a triangle with vertices = {-0.5, -0.5}, {0.5, -0.5}, {0, 0.5}. I then pass the vec2 position of a "light" to a uniform of my fragment shader, which I want to essentially shine onto my triangle from its top right (lightPos = (0.5, 0.5)).
Here is a very bad drawing of where everything is located.
And this is what I aim to have in my triangle (kind of; it doesn't need to be white-to-blue, it just needs to be brighter near the light and darker further away).
Here is the shader
#version 460 core
in vec3 oPos;
out vec4 fragColor;
uniform vec3 u_outputColor;
uniform vec2 u_lightPosition;
void main(){
    float intensity = 1 / length(oPos.xy - u_lightPosition);
    vec4 col = vec4(u_outputColor, 1.0f);
    fragColor = col * intensity;
}
Here is the basic code for compiling the shader (most of it is abstracted away, so it is fairly simple):
/* Test data for shader program. */
program.init("passthrough.vert", "passthrough.frag");
program.setUniformVec3("u_outputColor", 0.3f, 0.3f, 0.8f);
program.setUniformVec2("u_lightPosition", 0.5f, 0.5f);
GLfloat vertices[9] = { -0.5f, -0.5f, 0.0f,  0.0f, 0.5f, 0.0f,  0.5f, -0.5f, 0.0f };
Here is vertex shader:
#version 460 core
layout (location = 0) in vec3 aPos;
out vec3 oPos;
void main(){
    gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);
}
Every single test I have run to see why I can't get this to work seems to show that if there is a slight color change, it changes the entire triangle to a different shade. All tests show a triangle of ONE color across the entire thing; no gradient at all. I want the triangle to be a gradient that is brighter near the light and darker further from it. This is driving me crazy because I have been stuck on such a simple thing for 3 hours now, and it just seems like any code I write modifies all 3 vertices at once, as if they were in the exact same spot. I wrote the math out and I strongly feel this should work. Any help is very appreciated.
EDIT
The triangle after the solution fixed my issue:
Try this for your vertex shader:
#version 460 core
layout (location = 0) in vec3 aPos;
out vec3 oPos;
void main(){
    oPos.xyz = aPos.xyz; // ADD THIS LINE
    gl_Position = vec4(aPos.xyz, 1.0);
}
Your version never writes to oPos, so the fragment shader gets either (a) a random value or, in your case, (b) vec3(0,0,0). Since your color calculation is based on:
float intensity = 1 / length(oPos.xy - u_lightPosition);
This is basically the same as
float intensity = 1 / length(-1*u_lightPosition);
So the intensity is the same constant for every fragment (here 1/length(vec2(0.5, 0.5)) ≈ 1.41), and the color depends only on the light position.
You can debug and verify this by setting your fragment color to oPos:
vec4 col = vec4(oPos.xy, oPos.z + 0.5, 1.0f);
If oPos were set correctly in the vertex shader, then this line in the fragment shader would show you an RGB ramp. If oPos is not set correctly, you'll see 50% blue.
Always check for errors and logs returned from OpenGL. It should have emitted a warning about this that would have sent you straight to the problem.
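For a starting point, a minimal C++ helper for dumping a shader's compile log might look like this (my sketch, assuming a loader such as GLEW; program link logs work the same way via glGetProgramInfoLog):
#include <GL/glew.h>
#include <cstdio>
#include <vector>

void printShaderLog(GLuint shader) {
    GLint ok = GL_FALSE, len = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);
    if (len > 1) {
        std::vector<char> log(len);
        glGetShaderInfoLog(shader, len, nullptr, log.data());
        fprintf(stderr, "shader %s:\n%s\n",
                ok == GL_TRUE ? "log" : "errors", log.data());
    }
}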
Also, I'm surprised that your entire triangle isn't being clipped since vertices have a z of 0.

OpenGL: How do I sort points depending on the camera distance?

I have a structure made of 100,000 spheres drawn as point sprites using OpenGL. I have an issue when I rotate the structure about its centre axis.
The point sprites are rendered in the order they appear in their array; that means the last ones overlap the first point sprites created, without taking depth in three-dimensional space into account.
How can I sort and rearrange the order of the point sprites in real time so that the three-dimensional perspective is always kept? I guess the idea is to read the camera position against the particles and then sort the array to always show the closer particles first.
Is it possible to be fixed using shaders?
Here is my shader:
shader.frag
#version 120
uniform sampler2D tex;
varying vec4 inColor;
//uniform vec3 lightDir;
void main (void) {
    gl_FragColor = texture2D(tex, gl_TexCoord[0].st) * inColor;
}
shader.vert
#version 120
// define a few constants here, for faster rendering
attribute float particleRadius;
attribute vec4 myColor;
varying vec4 inColor;
void main(void)
{
    vec4 eyeCoord = vec4(gl_ModelViewMatrix * gl_Vertex);
    gl_Position = gl_ProjectionMatrix * eyeCoord;
    float distance = length(eyeCoord);
    float attenuation = 700.0 / distance;
    gl_PointSize = particleRadius * attenuation;
    //gl_PointSize = 1.0 / distance * SIZE;
    //gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_Color;
    inColor = myColor;
}
draw method:
void MyApp::draw(){
    //gl::clear( ColorA( 0.0f, 0.0f, 0.0f, 0.0f ), true );
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    // SET MATRICES TO WINDOW
    gl::setMatricesWindow( getWindowSize(), false );
    gl::setViewport( getWindowBounds() );
    gl::enableAlphaBlending();
    gl::enable( GL_TEXTURE_2D );
    gl::enable(GL_ALPHA_TEST);
    glEnable(GL_DEPTH_TEST);
    gl::color( ColorA( 1.0f, 1.0f, 1.0f, 1.0f ) );
    mShader.bind();
    // store current OpenGL state
    glPushAttrib( GL_POINT_BIT | GL_ENABLE_BIT );
    // enable point sprites and initialize it
    gl::enable( GL_POINT_SPRITE_ARB );
    glPointParameterfARB( GL_POINT_FADE_THRESHOLD_SIZE_ARB, -1.0f );
    glPointParameterfARB( GL_POINT_SIZE_MIN_ARB, 0.1f );
    glPointParameterfARB( GL_POINT_SIZE_MAX_ARB, 200.0f );
    // allow vertex shader to change point size
    gl::enable( GL_VERTEX_PROGRAM_POINT_SIZE );
    GLint thisColor = mShader.getAttribLocation( "myColor" );
    glEnableVertexAttribArray(thisColor);
    glVertexAttribPointer(thisColor, 4, GL_FLOAT, true, 0, theColors);
    GLint particleRadiusLocation = mShader.getAttribLocation( "particleRadius" );
    glEnableVertexAttribArray(particleRadiusLocation);
    glVertexAttribPointer(particleRadiusLocation, 1, GL_FLOAT, true, 0, mRadiuses);
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
    glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, mPositions);
    mTexture.enableAndBind();
    glDrawArrays( GL_POINTS, 0, mNumParticles );
    mTexture.unbind();
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableVertexAttribArrayARB(thisColor);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableVertexAttribArrayARB(particleRadiusLocation);
    // unbind shader
    mShader.unbind();
    // restore OpenGL state
    glPopAttrib();
}
You have two different blending cases in void MyApp::draw():
additive - (src + dst): order independent
alpha - (src * src.a) + (dst * (1.0 - src.a)): order dependent
The first blending function would not cause the issues you are discussing, so I am assuming that mRoom.isPowerOn() == false and that we are dealing with alpha blending.
To solve order-dependency issues in the latter case, you need to transform your points into eye-space and sort them by their z coordinates. The problem is that this is not something that can be easily solved in GLSL: you need to sort the data before your vertex shader runs (so the most straightforward approach involves doing this on the CPU). GPU-based solutions are possible, and may be necessary to do this in real time given the huge number of data points involved, but you should start out by doing this on the CPU and figure out where to go from there.
When implementing the sort, keep in mind that point sprites are always screen aligned (uniform z value in eye-space), so you do not have to worry about intersection (a point sprite will either be completely in-front of, behind, or parallel to any other point sprite it overlaps). This makes sorting them a lot simpler than other types of geometry, which may have to be split at points of intersection and drawn twice for proper ordering.
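A minimal CPU-side sketch of that back-to-front sort could look like the following (my example; the flat position layout and the column-major modelview matrix are assumptions, not code from the question):
#include <algorithm>
#include <vector>

// Returns particle indices sorted back-to-front by eye-space z.
// positions: flat xyz array, count: particle count, mv: column-major 4x4 modelview.
std::vector<int> sortBackToFront(const float* positions, int count, const float* mv) {
    std::vector<float> eyeZ(count);
    for (int i = 0; i < count; ++i) {
        const float* p = positions + 3 * i;
        // eye-space z = third row of the modelview matrix applied to the position
        eyeZ[i] = mv[2] * p[0] + mv[6] * p[1] + mv[10] * p[2] + mv[14];
    }
    std::vector<int> order(count);
    for (int i = 0; i < count; ++i) order[i] = i;
    // eye-space z is negative in front of the camera, so ascending z is farthest first
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return eyeZ[a] < eyeZ[b]; });
    return order;
}
The resulting order can then drive an index buffer for glDrawElements, or be used to permute the position/color/radius arrays before the glDrawArrays call.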

GLSL 1.2 floor() issues in Vertex Shader

I'm trying to calculate texture coordinates based on the coordinates of an incoming vertex in the Vertex Shader. This is a stripped down version of my attempt:
#version 120
varying vec4 color;
uniform sampler2D heightmap;
uniform ivec2 heightmapSize;
void main(void)
{
    vec2 fHeightmapSize = vec2(heightmapSize);
    vec2 pos = gl_Vertex.zx + vec2(0.5f, 0.5f);
    vec2 offset = floor(fHeightmapSize * pos) + vec2(0.5f, 0.5f);
    if (fract(offset.x) > 0.45f && fract(offset.x) < 0.55f
        && fract(offset.y) > 0.45f && fract(offset.y) < 0.55f)
        color = vec4(0.0f, 1.0f, 0.0f, 1.0f);
    else
        color = vec4(1.0f, 0.0f, 0.0f, 1.0f);
    // gl_Position = ...
    // ...
}
gl_Vertex is in [-0.5, 0.5]^2, on the XZ-plane. So what I am basically trying to do is:
- first create a float vec2 from the ivec2 heightmapSize, which holds the width and height of the heightmap sampler,
- then convert the vertex coordinates to the interval [0, 1]^2,
- then calculate the offset in texture coordinates by multiplying the vertex position with the heightmap size. The left part (using floor()) should return the number of texels in each direction.
Example: the texture is of size 4x4 and the position is (0.55, 0.5). This means I would go 2 texels to the right and 2 texels upwards -> (2.0, 2.0).
In the right part, I add another (0.5, 0.5) because I want the center of the texel, so (2.0, 2.0) becomes (2.5, 2.5). Note: whatever the coordinates are, the fractional part should be 0.5 in the end.
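The same arithmetic, reproduced in a tiny standalone C++ check (mine, for illustration only):
#include <cmath>
#include <cstdio>

int main() {
    // The worked example from the text: a 4x4 texture, position (0.55, 0.5)
    float size = 4.0f;
    float posX = 0.55f, posY = 0.5f;
    float offX = std::floor(size * posX) + 0.5f; // floor(2.2) + 0.5 = 2.5
    float offY = std::floor(size * posY) + 0.5f; // floor(2.0) + 0.5 = 2.5
    printf("offset = (%g, %g)\n", offX, offY);   // prints offset = (2.5, 2.5)
    return 0;
}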
Now comes the strange part. I'm testing the result by specifying two colors: if the fractional part of the offset is "close" to 0.5, I set the resulting color to green, otherwise red. The image is almost all red.
How is it possible that either floor() does not result in a vector of two integer values (or values close to integers, given float accuracy), or that adding vec2(0.5, 0.5) has no effect? Am I missing something else?

GLSL per pixel point light shading

VC++, OpenGL, SDL
I am wondering if there is a way to achieve smoother shading across a single quad of geometry. Right now, the shading looks smooth with my point light; however, the intensity rises along the [/] diagonal subdivision of the face. The lighting is basically invisible in between vertices.
This is what happens as the light moves from left to right.
As I move the light across the surface, it does this consistently: it gets brightest at each vertex and fades from there.
Am I forced to up the subdivision to achieve a smoother, more radial shade? Or is there a method around this?
Here are the shaders I am using:
vert
varying vec3 vertex_light_position;
varying vec3 vertex_normal;
void main()
{
    vertex_normal = normalize(gl_NormalMatrix * gl_Normal);
    vertex_light_position = normalize(gl_LightSource[0].position.xyz);
    gl_FrontColor = gl_Color;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
frag
varying vec3 vertex_light_position;
varying vec3 vertex_normal;
void main()
{
    float diffuse_value = max(dot(vertex_normal, vertex_light_position), 0.0);
    gl_FragColor = gl_Color * diffuse_value;
}
My geometry in case anyone is wondering:
glBegin(GL_QUADS);
    glNormal3f(0.0f, 0.0f, 1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(pos_x, pos_y - size_y, depth);
    glTexCoord2f(1.0f, 1.0f); glVertex3f(pos_x + size_x, pos_y - size_y, depth);
    glTexCoord2f(1.0f, 0.0f); glVertex3f(pos_x + size_x, pos_y, depth);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(pos_x, pos_y, depth);
glEnd();
There are a couple of things I see as possible issues.
Unless I am mistaken, you are using normalize(gl_LightSource[0].position.xyz); to calculate the light vector, but that is based solely on the position of the light, not on the vertex you are operating on. That means the value will be the same for every vertex and will only change with the current modelview matrix and light position. Calculating the light vector with something like normalize(gl_LightSource[0].position.xyz - (gl_ModelViewMatrix * gl_Vertex).xyz) would be closer to what you want.
Secondly, you ought to normalize your vectors in the fragment shader as well as in the vertex shader, since the interpolation of two unit vectors is not guaranteed to be a unit vector itself.
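Putting both points together, a sketch of a corrected pair (my version of the question's shaders, assuming a positional light whose position is already in eye space) might be:
// vertex shader
varying vec3 vertex_light_vector;
varying vec3 vertex_normal;
void main()
{
    vec3 eye_pos = (gl_ModelViewMatrix * gl_Vertex).xyz;
    vertex_normal = normalize(gl_NormalMatrix * gl_Normal);
    // per-vertex vector from this vertex to the light, both in eye space
    vertex_light_vector = gl_LightSource[0].position.xyz - eye_pos;
    gl_FrontColor = gl_Color;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader
varying vec3 vertex_light_vector;
varying vec3 vertex_normal;
void main()
{
    // re-normalize, since interpolated unit vectors are not unit vectors
    float diffuse_value = max(dot(normalize(vertex_normal),
                                  normalize(vertex_light_vector)), 0.0);
    gl_FragColor = gl_Color * diffuse_value;
}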
I think the problem is with the light vector.
I suggest using:
vec3 light_vector = normalize(gl_LightSource[0].position.xyz - vertex_pos);
vertex_pos can be calculated by using:
vec3 vertex_pos = (gl_ModelViewMatrix * gl_Vertex).xyz;
Notice that all the vectors should be in the same space (camera, world, or object).
Am I forced to up the subdivision to achieve a smoother, more radial shade? or is there a method around this?
No, you are free to do whatever you want. The only code you need to change is the fragment shader. Try to play with it and see if you get a better result.
For example, you could do this:
diffuse_value = pow(diffuse_value, 3.0);
as explained here.

What's the best way to draw a fullscreen quad in OpenGL 3.2?

I'm doing ray casting in the fragment shader. I can think of a couple of ways to draw a fullscreen quad for this purpose. Either draw a quad in clip space with the projection matrix set to the identity matrix, or use the geometry shader to turn a point into a triangle strip. The former uses immediate mode, deprecated in OpenGL 3.2. The latter I use out of novelty, but it still uses immediate mode to draw a point.
I'm going to argue that the most efficient approach is drawing a single "full-screen" triangle. For a triangle to cover the full screen, it needs to be bigger than the actual viewport. In NDC (and also in clip space, if we set w=1), the viewport is always the [-1,1] square. For a triangle to just completely cover this area, two of its sides need to be twice as long as the viewport rectangle, so that the third side crosses the edge of the viewport; hence we can, for example, use the following coordinates (in counter-clockwise order): (-1,-1), (3,-1), (-1,3).
We also do not need to worry about the texcoords. To get the usual normalized [0,1] range across the visible viewport, we just need to make the corresponding texcoords for the vertices twice as big, and the barycentric interpolation will yield exactly the same results for any viewport pixel as when using a quad.
This approach can of course be combined with attribute-less rendering as suggested in demanze's answer:
out vec2 texcoords; // texcoords are in the normalized [0,1] range for the viewport-filling quad part of the triangle
void main() {
    vec2 vertices[3] = vec2[3](vec2(-1,-1), vec2(3,-1), vec2(-1, 3));
    gl_Position = vec4(vertices[gl_VertexID], 0, 1);
    texcoords = 0.5 * gl_Position.xy + vec2(0.5);
}
Why will a single triangle be more efficient?
This is not about the one saved vertex shader invocation and the one less triangle to handle at the front end. The most significant effect of using a single triangle is that there are fewer fragment shader invocations.
Real GPUs always invoke the fragment shader for 2x2 pixel sized blocks ("quads") as soon as a single pixel of the primitive falls into such a block. This is necessary for calculating the window-space derivative functions (those are also implicitly needed for texture sampling, see this question).
If the primitive does not cover all 4 pixels in that block, the remaining fragment shader invocations will do no useful work (apart from providing the data for the derivative calculations) and will be so-called helper invocations (which can even be queried via the gl_HelperInvocation GLSL built-in variable). See also Fabian "ryg" Giesen's blog article for more details.
If you render a quad with two triangles, both will have one edge going diagonally across the viewport, and on both triangles, you will generate a lot of useless helper invocations at the diagonal edge. The effect will be worst for a perfectly square viewport (aspect ratio 1). If you draw a single triangle, there will be no such diagonal edge (it lies outside of the viewport and won't concern the rasterizer at all), so there will be no additional helper invocations.
Wait a minute, if the triangle extends across the viewport boundaries, won't it get clipped and actually put more work on the GPU?
If you read the textbook materials about graphics pipelines (or even the GL spec), you might get that impression. But real-world GPUs use some different approaches like guard-band clipping. I won't go into detail here (that would be a topic of its own; have a look at Fabian "ryg" Giesen's fine blog article for details), but the general idea is that the rasterizer will produce fragments only for pixels inside the viewport (or scissor rect) anyway, no matter whether the primitive lies completely inside it or not, so we can simply throw bigger triangles at it if both of the following are true:
a) the triangle only extends beyond the 2D top/bottom/left/right clipping planes (as opposed to the z-dimension near/far ones, which are trickier to handle, especially because vertices may also lie behind the camera)
b) the actual vertex coordinates (and all intermediate calculation results the rasterizer might be doing on them) are representable in the internal data formats the GPU's hardware rasterizer uses. The rasterizer will use fixed-point data types of implementation-specific width, while vertex coords are 32-bit single-precision floats. (That is basically what defines the size of the guard-band.)
Our triangle is only a factor of 3 bigger than the viewport, so we can be very sure that there is no need to clip it at all.
But is it worth it?
Well, the savings on fragment shader invocations are real (especially when you have a complex fragment shader), but the overall effect might be barely measurable in a real-world scenario. On the other hand, the approach is not more complicated than using a full-screen quad and uses less data, so even if it might not make a huge difference, it won't hurt, so why not use it?
Could this approach be used for all sorts of axis-aligned rectangles, not just fullscreen ones?
In theory, you can combine this with the scissor test to draw an arbitrary axis-aligned rectangle (and the scissor test will be very efficient, as it just limits which fragments are produced in the first place; it isn't a real "test" in hardware that discards fragments). However, this requires you to change the scissor parameters for each rectangle you want to draw, which implies a lot of state changes and limits you to a single rectangle per draw call, so doing so won't be a good idea in most scenarios.
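A hedged sketch of what that could look like (illustrative coordinates; the triangle and shader are the ones from above):
// Limit the big triangle to a sub-rectangle via the scissor test.
// Coordinates are in window pixels, origin at the lower left.
glEnable(GL_SCISSOR_TEST);
glScissor(100, 100, 256, 128);        // x, y, width, height of the rectangle
glDrawArrays(GL_TRIANGLES, 0, 3);     // the single viewport-covering triangle
glDisable(GL_SCISSOR_TEST);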
You can send two triangles forming a quad, with their vertex attributes set to -1/1 respectively.
You do not need to multiply them with any matrix in the vertex/fragment shader.
Here are some code samples, simple as they are :)
Vertex Shader:
const vec2 madd=vec2(0.5,0.5);
attribute vec2 vertexIn;
varying vec2 textureCoord;
void main() {
    textureCoord = vertexIn.xy*madd+madd; // scale vertex attribute to [0-1] range
    gl_Position = vec4(vertexIn.xy,0.0,1.0);
}
Fragment Shader:
varying vec2 textureCoord;
uniform sampler2D t; // the sampled texture (this declaration was missing)
void main() {
    vec4 color1 = texture2D(t, textureCoord);
    gl_FragColor = color1;
}
No need to use a geometry shader, a VBO or any memory at all.
A vertex shader can generate the quad.
layout(location = 0) out vec2 uv;
void main()
{
    float x = float(((uint(gl_VertexID) + 2u) / 3u) % 2u);
    float y = float(((uint(gl_VertexID) + 1u) / 3u) % 2u);
    gl_Position = vec4(-1.0f + x * 2.0f, -1.0f + y * 2.0f, 0.0f, 1.0f);
    uv = vec2(x, y);
}
Bind an empty VAO. Send a draw call for 6 vertices.
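Host-side, that might look like this (my sketch; program is assumed to be the linked shader program containing the vertex shader above):
GLuint vao;
glGenVertexArrays(1, &vao);  // core profile requires a VAO, even an empty one
glBindVertexArray(vao);
glUseProgram(program);
glDrawArrays(GL_TRIANGLES, 0, 6);  // 6 vertices = 2 triangles = 1 quad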
To output a fullscreen quad, a geometry shader can be used:
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
out vec2 texcoord;
void main()
{
    gl_Position = vec4( 1.0, 1.0, 0.5, 1.0 );
    texcoord = vec2( 1.0, 1.0 );
    EmitVertex();
    gl_Position = vec4(-1.0, 1.0, 0.5, 1.0 );
    texcoord = vec2( 0.0, 1.0 );
    EmitVertex();
    gl_Position = vec4( 1.0,-1.0, 0.5, 1.0 );
    texcoord = vec2( 1.0, 0.0 );
    EmitVertex();
    gl_Position = vec4(-1.0,-1.0, 0.5, 1.0 );
    texcoord = vec2( 0.0, 0.0 );
    EmitVertex();
    EndPrimitive();
}
The vertex shader is just empty:
#version 330 core
void main()
{
}
To use this shader, you can issue a dummy draw command with an empty VBO:
glDrawArrays(GL_POINTS, 0, 1);
This is similar to the answer by demanze, but I would argue it's easier to understand. Also, it draws only 4 vertices by using TRIANGLE_STRIP.
#version 300 es
out vec2 textureCoords;
void main() {
    const vec2 positions[4] = vec2[](
        vec2(-1, -1),
        vec2(+1, -1),
        vec2(-1, +1),
        vec2(+1, +1)
    );
    const vec2 coords[4] = vec2[](
        vec2(0, 0),
        vec2(1, 0),
        vec2(0, 1),
        vec2(1, 1)
    );
    textureCoords = coords[gl_VertexID];
    gl_Position = vec4(positions[gl_VertexID], 0.0, 1.0);
}
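As with the previous answers, this is drawn without any buffers; with an empty VAO bound, the matching draw call would presumably be:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // 4 vertices, one strip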
The following comes from the draw function of the class that draws FBO textures to a screen-aligned quad.
Gl.glUseProgram(shad);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, vbo);
Gl.glEnableVertexAttribArray(0);
Gl.glEnableVertexAttribArray(1);
Gl.glVertexAttribPointer(0, 3, Gl.GL_FLOAT, Gl.GL_FALSE, 0, voff);
Gl.glVertexAttribPointer(1, 2, Gl.GL_FLOAT, Gl.GL_FALSE, 0, coff);
Gl.glActiveTexture(Gl.GL_TEXTURE0);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, fboc);
Gl.glUniform1i(tileLoc, 0);
Gl.glDrawArrays(Gl.GL_QUADS, 0, 4);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, 0);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, 0);
Gl.glUseProgram(0);
The actual quad itself and the coords come from:
private float[] v = new float[] {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     1.0f,  1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f,
     0.0f,  0.0f,
     1.0f,  0.0f,
     1.0f,  1.0f,
     0.0f,  1.0f
};
The binding and setup of the VBOs I leave to you.
The vertex shader:
#version 330
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 coord;
out vec2 coords;
void main() {
    coords = coord.st;
    gl_Position = vec4(pos, 1.0);
}
Because the position is raw, that is, not multiplied by any matrix, the quad's coordinates from (-1, -1) to (1, 1) fit the viewport exactly. Look for Alfonse's tutorial linked from any of his posts on opengl.org.