(OpenGL 2.0)
I managed to do some nice text rendering in OpenGL and decided to move it to shaders.
However, the font texture that looked nice in fixed-pipeline mode looks unpleasant in GLSL mode.
In fixed-pipeline mode I don't see any difference between GL_LINEAR and GL_NEAREST filtering; the texture doesn't really need filtering, because I set an orthographic projection and align the quad's width and height to the texture coordinates.
Now, when I try to render it with a shader, I can see some very bad GL_NEAREST filtering artifacts, and with GL_LINEAR the texture appears too blurry.
Fixed pipeline, satisfying, best quality (no difference between linear/nearest):
GLSL, nearest (visible artifacts, for example, look at fraction glyphs):
GLSL, linear (too blurry):
Shader program:
Vertex shader was successfully compiled to run on hardware.
Fragment shader was successfully compiled to run on hardware.
Fragment shader(s) linked, vertex shader(s) linked.
------------------------------------------------------------------------------------------
attribute vec2 at_Vertex;
attribute vec2 at_Texcoord;
varying vec2 texCoord;
void main(void) {
texCoord = at_Texcoord;
gl_Position = mat4(0.00119617, 0, 0, 0, 0, 0.00195503, 0, 0, 0, 0, -1, 0, -1, -1, -0, 1)* vec4(at_Vertex.x, at_Vertex.y, 0, 1);
}
-----------------------------------------------------------------------------------------
varying vec2 texCoord;
uniform sampler2D diffuseMap;
void main(void) {
gl_FragColor = texture2D(diffuseMap, texCoord);
}
Quad rendering, fixed:
glTexCoord2f (0.0f, 0.0f);
glVertex2f (40.0f, 40.0f);
glTexCoord2f (0.0f, 1.0f);
glVertex2f ((font.tex_r.w+40.0f), 40.0f);
glTexCoord2f (1.0f, 1.0f);
glVertex2f ((font.tex_r.w+40.0f), (font.tex_r.h+40.0f));
glTexCoord2f (1.0f, 0.0f);
glVertex2f (40.0f, (font.tex_r.h+40.0f));
Quad rendering, shader-mode:
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 0.0f, 0.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, 40.0f, 40.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 0.0f, 1.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, (font.tex_r.w+40.0f), 40.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 1.0f, 1.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, (font.tex_r.w+40.0f), (font.tex_r.h+40.0f));
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 1.0f, 0.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, 40.0f, (font.tex_r.h+40.0f));
In both cases the matrices are calculated from the same source, though for performance reasons, as you can see, I'm writing constant values into the shader source with the help of a function like this (if that is the reason, how do I write them properly?):
std::ostringstream buffer;
buffer << f;
return buffer.str(); // return std::string by value; buffer.str().c_str() would be a dangling pointer
where "f" is some double value.
EDIT:
The result of my further research is a little surprising.
Now I'm multiplying the vertex coordinates by the same orthographic matrix on the CPU (not in the vertex shader as before) and leaving the vertex untouched in the vertex shader, just passing it through to gl_Position. I couldn't believe it, but this really works and actually solves my problem. Every operation is done on floats, as on the GPU.
It seems matrix/vertex multiplication is much more accurate on the CPU.
The question is: why?
EDIT: Actually, the whole reason was different matrix sources..! A really, really small bug!
Nicol was closest to the solution.
though for performance reasons, as you can see, I'm writing constant values into the shader code
That's not going to help your performance. Uploading a single matrix uniform is pretty standard for most OpenGL shaders, and will cost you nothing of significance in terms of performance.
Seems like matrix/vertex multiplication is much more accurate on CPU. question is: why ?
It's not more accurate; it's simply using a different matrix. If you passed that matrix to GLSL via a shader uniform, you would probably get the same result. The matrix you use in the shader is not the same matrix that you used on the CPU.
Related
I'm trying to understand OpenGL, but I have many simple problems... I'm trying to rotate my object, and I'm doing this:
glBindVertexArray(VAOs[1]);
glBindTexture(GL_TEXTURE_2D, texture1);
glActiveTexture(GL_TEXTURE1);
glUniform1i(glGetUniformLocation(theProgram.get_programID(), "Texture1"), 1);
glMatrixMode(GL_MODELVIEW_MATRIX);
glPushMatrix();
glRotatef(10.0f, 0.0f, 0.0f, -0.1f);
glDrawElements(GL_TRIANGLES, _countof(tableIndices2), GL_UNSIGNED_INT, 0);
glPopMatrix();
And my object is still in the same position.
What is wrong?
Functions like glRotate are part of the Fixed Function Pipeline, which is deprecated. These functions do not affect the vertices that are processed by a shader program.
See Khronos wiki - Legacy OpenGL:
In 2008, version 3.0 of the OpenGL specification was released. With this revision, the Fixed Function Pipeline as well as most of the related OpenGL functions and constants were declared deprecated. These deprecated elements and concepts are now commonly referred to as legacy OpenGL. ...
See Khronos wiki - Fixed Function Pipeline:
OpenGL 3.0 was the last revision of the specification which fully supported both fixed and programmable functionality. Even so, most hardware since the OpenGL 2.0 generation lacked the actual fixed-function hardware. Instead, fixed-function processes are emulated with shaders built by the system. ...
In modern OpenGL you have to do this stuff by yourself.
If you switch from the Fixed Function Pipeline to today's OpenGL (in C++), then I recommend using a library like GLM (OpenGL Mathematics) for the matrix operations.
First you have to create a vertex shader with a matrix uniform, and you have to do the multiplication of the vertex coordinate and the model matrix in the vertex shader:
in vec3 vert_pos;
uniform mat4 model;
void main()
{
gl_Position = model * vec4(vert_pos.xyz, 1.0);
}
Of course you can declare further matrices like projection matrix and view matrix:
in vec3 vert_pos;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4(vert_pos.xyz, 1.0);
}
In the C++ code you have to set up the matrix and set the matrix uniform:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::rotate
#include <glm/gtc/type_ptr.hpp> // glm::value_ptr
glm::mat4 modelMat = glm::rotate(
glm::mat4(1.0f),
glm::radians(10.0f),
glm::vec3(0.0f, 0.0f, -1.0f) );
GLint model_loc = glGetUniformLocation(theProgram.get_programID(), "model");
glUniformMatrix4fv(model_loc, 1, GL_FALSE, glm::value_ptr(modelMat));
See also the documentation of the glm matrix transformations.
The problem with your code is that you are mixing old fixed-function calls (like glPushMatrix()) with newer shader calls (like glUniform()). Functions like glPushMatrix() assume you are giving the OpenGL driver control over the matrix state (in other words, these functions handle some complications for you), which means that mixing in newer function calls like glUniform() can lead to unpredictable results, since you have no idea whether you are disturbing the driver's own bookkeeping.
Basically, all of the old fixed-function calls can be found here (this is actually OpenGL ES, i.e. OpenGL for embedded systems such as phones, but it is practically the same as OpenGL on the desktop): Fixed-Function Calls.
I would recommend that you start learning the OpenGL ES 2 functions, which can be found here: OpenGL ES 2 (Shader Function Calls)
I think this is a good site to learn OpenGL ES 2 (even though you're not using OpenGL ES, this site teaches the concepts well). This site also uses Java, so ignore all the buffer information in the first tutorial (not to be confused with vertex and texture buffers, which are actually OpenGL, and not Java details): Learn OpenGL Concepts
Firstly
glMatrixMode(GL_MODELVIEW); // GL_MODELVIEW, not GL_MODELVIEW_MATRIX
glLoadIdentity(); // good habit: start from the identity matrix
glPushMatrix();
Your call
glRotatef(10.0f, 0.0f, 0.0f, -0.1f);
should be the same as
glRotatef(10.0f, 0.0f, 0.0f, -1.0f);
meaning ten degrees around the z axis.
According to the documentation of glRotatef(angle, x, y, z), the vector (x, y, z) is normalized to length 1 (if it is not already).
It should work. Are 10 degrees really significant enough to see a result? Try something animated, like:
glRotatef(rotZ, 0.0f, 0.0f, -1.0f);
rotZ += 5;
if (rotZ > 360) rotZ -= 360;
I created 8x8 pixel bitmap letters to render with OpenGL, but sometimes, depending on scaling, I get weird artifacts, as shown below in the image. Texture filtering is set to nearest pixel. It looks like a rounding issue, but how could there be one if the line is perfectly horizontal?
Left original 8x8, middle scaled to 18x18, right scaled to 54x54.
Vertex data are unsigned bytes in format (x-offset, y-offset, letter). Here is full code:
vertex shader:
#version 330 core
layout(location = 0) in uvec3 Data;
uniform float ratio;
uniform float font_size;
out float letter;
void main()
{
letter = Data.z;
vec2 position = vec2(float(Data.x) / ratio, Data.y) * font_size - 1.0f;
position.y = -position.y;
gl_Position = vec4(position, 0.0f, 1.0f);
}
geometry shader:
#version 330 core
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
uniform float ratio;
uniform float font_size;
out vec3 texture_coord;
in float letter[];
void main()
{
// TODO: pre-calculate
float width = font_size / ratio;
float height = -font_size;
texture_coord = vec3(0.0f, 0.0f, letter[0]);
gl_Position = gl_in[0].gl_Position + vec4(0.0f, height, 0.0f, 0.0f);
EmitVertex();
texture_coord = vec3(1.0f, 0.0f, letter[0]);
gl_Position = gl_in[0].gl_Position + vec4(width, height, 0.0f, 0.0f);
EmitVertex();
texture_coord = vec3(0.0f, 1.0f, letter[0]);
gl_Position = gl_in[0].gl_Position + vec4(0.0f, 0.0f, 0.0f, 0.0f);
EmitVertex();
texture_coord = vec3(1.0f, 1.0f, letter[0]);
gl_Position = gl_in[0].gl_Position + vec4(width, 0.0f, 0.0f, 0.0f);
EmitVertex();
EndPrimitive();
}
fragment shader:
#version 330 core
in vec3 texture_coord;
uniform sampler2DArray font_texture_array;
out vec4 output_color;
void main()
{
output_color = texture(font_texture_array, texture_coord);
}
I had the same problem developing with FreeType and OpenGL. After days of researching and scratching my head, I found the solution: in my case, I had to explicitly call glBlendColor. Once I did that, I no longer observed any artifacts.
Here is a snippet:
//Set Viewport
glViewport(0, 0, FIXED_WIDTH, FIXED_HEIGHT);
//Enable Blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendColor(1.0f, 1.0f, 1.0f, 1.0f); // without this I was seeing artifacts; important to call explicitly
//Set Alignment requirement to 1 byte
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
I figured out the solution after reviewing the source code of this OpenGL-Freetype library on github: opengl-freetype library
Well, when using nearest filtering, you will see such issues if your sample location is very close to the boundary between two texels. And since the tex coords are to be interpolated separately for each fragment you are drawing, slight numerical inaccuracies will result in jumping between those two texels.
When you draw an 8x8 texture onto an 18x18-pixel rectangle, and your rectangle is perfectly aligned to the output pixel raster, you are almost guaranteed to trigger that behavior:
Looking at the texel coordinates will then reveal that for the very bottom output pixel, the texture coordinate would be interpolated to 1/(2*18) = 1/36. Going one pixel up adds 1/18 = 2/36 to the t coordinate, so for the fifth row from the bottom it would be 9/36.
So for the 8x8-texel texture you are sampling from, you are actually sampling at the unnormalized texel coordinate (9/36)*8 == 2.0. This is exactly the boundary between the second and third row of your texture. Since the texture coordinates for each fragment are interpolated barycentrically from the tex coords assigned to the three vertices forming the triangle, there can be slight inaccuracies, and even the smallest inaccuracy representable in floating point will result in flipping between the two texels in this case.
I think your approach is just not good. Scaling bitmap fonts is always problematic (except perhaps for integral scale factors). If you want nice-looking scalable texture fonts, I recommend looking into signed distance fields. It is a quite simple and powerful technique, and there are tools available to generate the necessary distance field textures.
If you are looking for a quick hack, you could also just offset your output rectangle slightly. You basically must keep the offset in [-0.5, 0.5] pixels (so that no different fragments are generated during rasterization), and you must make sure that none of the potential sample locations lie close to an integer, so the offset will depend on the actual scale factor.
I found this code on the internet (http://rioki.org/2013/03/07/glsl-skybox.html) for a cubemap environment texture (actually rendering a skybox), but I do not understand why it works.
void main()
{
mat4 r = gl_ModelViewMatrix;
r[3][0] = 0.0;
r[3][1] = 0.0;
r[3][2] = 0.0;
vec4 v = inverse(r) * inverse(gl_ProjectionMatrix) * gl_Vertex;
gl_TexCoord[0] = v;
gl_Position = gl_Vertex;
}
So gl_Vertex is in world coordinates, but what do we get by multiplying that by the inverse of the projection matrix and then by the inverse of the modelview matrix?
this is the code I use to draw my skybox
void SkyBoxDraw(void)
{
GLfloat SkyRad = 1.0f;
glUseProgramObjectARB(glsl_program_skybox);
glDepthMask(0);
glDisable(GL_DEPTH_TEST);
// Cull backs of polygons
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_TEXTURE_CUBE_MAP);
glBegin(GL_QUADS);
//////////////////////////////////////////////
// Negative X
glTexCoord3f(-1.0f, -1.0f, 1.0f);
glVertex3f(-SkyRad, -SkyRad, SkyRad);
glTexCoord3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-SkyRad, -SkyRad, -SkyRad);
glTexCoord3f(-1.0f, 1.0f, -1.0f);
glVertex3f(-SkyRad, SkyRad, -SkyRad);
glTexCoord3f(-1.0f, 1.0f, 1.0f);
glVertex3f(-SkyRad, SkyRad, SkyRad);
......
......
glEnd();
glDepthMask(1);
glDisable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_CUBE_MAP);
glUseProgramObjectARB(0);
}
So gl_Vertex is in world coordinates ...
No no no, gl_Vertex is in object/model-space unless I see some code elsewhere (e.g. how your vertex position is calculated in the actual non-shader portion of your program) that indicates otherwise :) In OpenGL we go from object-space to eye/view/camera-space when we multiply by the combined Model*View matrix. As you can see, there are lots of names for the same coordinate spaces, but object-space is definitely not a synonym for world-space. Setting r[3] to < 0, 0, 0, 1 > basically re-positions the camera's origin without affecting direction, which is useful when all you want to know is the direction for a cubemap lookup.
That is, in a nutshell, what you want when using cubemaps: just a simple direction vector. The fact that textureCube (...) takes a 3D vector instead of a 4D one is an immediate hint that it is looking for a direction instead of a position. Position vectors have a 4th component; directions do not. So, technically, if you wanted to port this shader to modern OpenGL you would probably use an out vec3 and swizzle .xyz off of v, since v.w is unnecessary.
... but what do we get by multiplying that by inverse of projection matrix and then modelview matrix?
You are basically undoing projection when you multiply by the inverse of these matrices. The only way this shader makes sense is if the coordinates you are passing for your vertices are defined in clip-space. So instead of going from object-space through the GL pipeline and winding up in screen-space at the end you want the reverse of that, only since the viewport is not involved in your shader we cannot be dealing with screen-space. A little bit more information on how your vertex positions are calculated should clear this up.
I'm trying to output some data from compute shader to a texture, but imageStore() seems to do nothing. Here's the shader:
#version 430
layout(RGBA32F) uniform image2D image;
layout (local_size_x = 1, local_size_y = 1) in;
void main() {
imageStore(image, ivec2(gl_GlobalInvocationID.xy), vec4(0.0f, 1.0f, 1.0f, 1.0f));
}
and the application code is here:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
glUseProgram(program->GetName());
glUniform1i(program->GetUniformLocation("image"), 0);
glDispatchCompute(WIDTH, HEIGHT, 1);
then a full screen quad is rendered with that texture but currently it only shows some random old data from video memory. Any idea what could be wrong?
EDIT:
This is how I display the texture:
// This comes right after the previous block of code
glUseProgram(drawProgram->GetName());
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(drawProgram->GetUniformLocation("sampler"), 0);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 6);
glfwSwapBuffers();
and the drawProgram consists of:
#version 430
#extension GL_ARB_explicit_attrib_location : require
layout(location = 0) in vec2 position;
out vec2 uvCoord;
void main() {
gl_Position = vec4(position.x, position.y, 0.0f, 1.0f);
uvCoord = position;
}
and:
#version 430
in vec2 uvCoord;
out vec4 color;
uniform sampler2D sampler;
void main() {
vec2 uv = (uvCoord + vec2(1.0f)) / 2.0f;
uv.y = 1.0f - uv.y;
color = texture(sampler, uv);
//color = vec4(uv.x, uv.y, 0.0f, 1.0f);
}
The last commented line in fragment shader produces this output: Render output
The vertex array object (vao) has one buffer with 6 2D vertices:
-1.0, -1.0
1.0, -1.0
1.0, 1.0
1.0, 1.0
-1.0, 1.0
-1.0, -1.0
This is how I display the texture:
That's not good enough. I don't see a call to glMemoryBarrier, so there's no guarantee that your code actually works.
Remember: writes to images via Image Load/Store are not memory coherent. They require explicit user synchronization before they become visible. If you want to use an image you have stored to as a texture later, there must be an explicit glMemoryBarrier call after the rendering command that writes to it, but before the rendering command that samples from it as a texture.
Why that is a problem, I don't know
Because desktop OpenGL is not OpenGL ES.
The last three parameters only describe the arrangement of the pixel data you're giving OpenGL. They change nothing about how OpenGL stores the data. In ES, they do, but that's only because ES doesn't do format conversions.
In desktop OpenGL, it is perfectly legal to upload floating-point data to a normalized integer texture; OpenGL is expected to convert the data as best it can. ES doesn't do conversions, so it has to change the internal format (the third parameter) to match the data.
Desktop GL does not. If you want a specific image format, you ask for it. Desktop GL gives you what you ask for, and only what you ask for.
Always use sized internal formats.
GL_RGBA is not a sized internal format, so you cannot know which format you will really get. Most often, it is mapped to GL_RGBA8 by OpenGL.
In your case, the GL_FLOAT parameter you set only describes the pixel data you upload into the texture.
Read table 2 here to know what you can set as an internal texture format.
Okay I found the solution. The problem lies here:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);
this line doesn't specify the size of the internal format (GL_RGBA). When I supplied GL_RGBA32F it started working. Why that is a problem, I don't know (hopefully somebody will be able to explain).
I'm doing ray casting in the fragment shader. I can think of a couple ways to draw a fullscreen quad for this purpose. Either draw a quad in clip space with the projection matrix set to the identity matrix, or use the geometry shader to turn a point into a triangle strip. The former uses immediate mode, deprecated in OpenGL 3.2. The latter I use out of novelty, but it still uses immediate mode to draw a point.
I'm going to argue that the most efficient approach is drawing a single "full-screen" triangle. For a triangle to cover the full screen, it needs to be bigger than the actual viewport. In NDC (and also clip space, if we set w=1), the viewport is always the [-1,1] square. For a triangle to just barely cover this area completely, two of its sides need to be twice as long as the viewport rectangle, so that the third side crosses the edge of the viewport; hence we can for example use the following coordinates (in counter-clockwise order): (-1,-1), (3,-1), (-1,3).
We also do not need to worry about the texcoords. To get the usual normalized [0,1] range across the visible viewport, we just need to make the corresponding texcoords for the vertices twice as big, and the barycentric interpolation will yield exactly the same results for any viewport pixel as when using a quad.
This approach can of course be combined with attribute-less rendering as suggested in demanze's answer:
out vec2 texcoords; // texcoords are in the normalized [0,1] range for the viewport-filling quad part of the triangle
void main() {
vec2 vertices[3]=vec2[3](vec2(-1,-1), vec2(3,-1), vec2(-1, 3));
gl_Position = vec4(vertices[gl_VertexID],0,1);
texcoords = 0.5 * gl_Position.xy + vec2(0.5);
}
Why will a single triangle be more efficient?
This is not about the one saved vertex shader invocation, and the one less triangle to handle at the front-end. The most significant effect of using a single triangle is that there are fewer fragment shader invocations.
Real GPUs always invoke the fragment shader for 2x2 pixel sized blocks ("quads") as soon as a single pixel of the primitive falls into such a block. This is necessary for calculating the window-space derivative functions (those are also implicitly needed for texture sampling, see this question).
If the primitive does not cover all 4 pixels in that block, the remaining fragment shader invocations will do no useful work (apart from providing data for the derivative calculations) and are so-called helper invocations (which can even be queried via the gl_HelperInvocation GLSL built-in variable). See also Fabian "ryg" Giesen's blog article for more details.
If you render a quad with two triangles, both will have one edge going diagonally across the viewport, and on both triangles, you will generate a lot of useless helper invocations at the diagonal edge. The effect will be worst for a perfectly square viewport (aspect ratio 1). If you draw a single triangle, there will be no such diagonal edge (it lies outside of the viewport and won't concern the rasterizer at all), so there will be no additional helper invocations.
Wait a minute, if the triangle extends across the viewport boundaries, won't it get clipped and actually put more work on the GPU?
If you read the textbook materials about graphics pipelines (or even the GL spec), you might get that impression. But real-world GPUs use different approaches, like guard-band clipping. I won't go into detail here (that would be a topic of its own; have a look at Fabian "ryg" Giesen's fine blog article for details), but the general idea is that the rasterizer will produce fragments only for pixels inside the viewport (or scissor rect) anyway, no matter whether the primitive lies completely inside it or not, so we can simply throw bigger triangles at it if both of the following are true:
a) the triangle only extends beyond the 2D top/bottom/left/right clipping planes (as opposed to the near/far z planes, which are trickier to handle, especially because vertices may also lie behind the camera), and
b) the actual vertex coordinates (and all intermediate calculation results the rasterizer might compute from them) are representable in the internal data formats the GPU's hardware rasterizer uses. The rasterizer will use fixed-point data types of implementation-specific width, while vertex coords are 32-bit single-precision floats. (That is basically what defines the size of the guard band.)
Our triangle is only a factor of 3 bigger than the viewport, so we can be very sure that there is no need to clip it at all.
But is it worth it?
Well, the savings on fragment shader invocations are real (especially when you have a complex fragment shader), but the overall effect might be barely measurable in a real-world scenario. On the other hand, the approach is not more complicated than using a full-screen quad and uses less data, so even if it might not make a huge difference, it won't hurt, so why not use it?
Could this approach be used for all sorts of axis-aligned rectangles, not just fullscreen ones?
In theory, you can combine this with the scissor test to draw some arbitrary axis-aligned rectangle (and the scissor test will be very efficient, as it just limits which fragments are produced in the first place, it isn't a real "test" in HW which discards fragments). However, this requires you to change the scissor parameters for each rectangle you want to draw, which implies a lot of state changes and limits you to a single rectangle per draw call, so doing so won't be a good idea in most scenarios.
You can send two triangles creating a quad, with their vertex attributes set to -1/1 respectively.
You do not need to multiply them with any matrix in the vertex/fragment shader.
Here are some code samples, simple as it is :)
Vertex Shader:
const vec2 madd=vec2(0.5,0.5);
attribute vec2 vertexIn;
varying vec2 textureCoord;
void main() {
textureCoord = vertexIn.xy*madd+madd; // scale vertex attribute to [0-1] range
gl_Position = vec4(vertexIn.xy,0.0,1.0);
}
Fragment Shader :
varying vec2 textureCoord;
uniform sampler2D t; // sampler uniform (needed for texture2D below)
void main() {
vec4 color1 = texture2D(t, textureCoord);
gl_FragColor = color1;
}
No need to use a geometry shader, a VBO or any memory at all.
A vertex shader can generate the quad.
layout(location = 0) out vec2 uv;
void main()
{
float x = float(((uint(gl_VertexID) + 2u) / 3u)%2u);
float y = float(((uint(gl_VertexID) + 1u) / 3u)%2u);
gl_Position = vec4(-1.0f + x*2.0f, -1.0f+y*2.0f, 0.0f, 1.0f);
uv = vec2(x, y);
}
Bind an empty VAO. Send a draw call for 6 vertices.
To output a fullscreen quad geometry shader can be used:
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
out vec2 texcoord;
void main()
{
gl_Position = vec4( 1.0, 1.0, 0.5, 1.0 );
texcoord = vec2( 1.0, 1.0 );
EmitVertex();
gl_Position = vec4(-1.0, 1.0, 0.5, 1.0 );
texcoord = vec2( 0.0, 1.0 );
EmitVertex();
gl_Position = vec4( 1.0,-1.0, 0.5, 1.0 );
texcoord = vec2( 1.0, 0.0 );
EmitVertex();
gl_Position = vec4(-1.0,-1.0, 0.5, 1.0 );
texcoord = vec2( 0.0, 0.0 );
EmitVertex();
EndPrimitive();
}
Vertex shader is just empty:
#version 330 core
void main()
{
}
To use this shader you can use dummy draw command with empty VBO:
glDrawArrays(GL_POINTS, 0, 1);
This is similar to the answer by demanze, but I would argue it's easier to understand. Also, this draws only 4 vertices by using TRIANGLE_STRIP.
#version 300 es
out vec2 textureCoords;
void main() {
const vec2 positions[4] = vec2[](
vec2(-1, -1),
vec2(+1, -1),
vec2(-1, +1),
vec2(+1, +1)
);
const vec2 coords[4] = vec2[](
vec2(0, 0),
vec2(1, 0),
vec2(0, 1),
vec2(1, 1)
);
textureCoords = coords[gl_VertexID];
gl_Position = vec4(positions[gl_VertexID], 0.0, 1.0);
}
The following comes from the draw function of the class that draws fbo textures to a screen aligned quad.
Gl.glUseProgram(shad);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, vbo);
Gl.glEnableVertexAttribArray(0);
Gl.glEnableVertexAttribArray(1);
Gl.glVertexAttribPointer(0, 3, Gl.GL_FLOAT, Gl.GL_FALSE, 0, voff);
Gl.glVertexAttribPointer(1, 2, Gl.GL_FLOAT, Gl.GL_FALSE, 0, coff);
Gl.glActiveTexture(Gl.GL_TEXTURE0);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, fboc);
Gl.glUniform1i(tileLoc, 0);
Gl.glDrawArrays(Gl.GL_QUADS, 0, 4);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, 0);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, 0);
Gl.glUseProgram(0);
The actual quad itself and the coords are got from:
private float[] v=new float[]{ -1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
-1.0f, 1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f
};
The binding and set up of the vbo's I leave to you.
The vertex shader:
#version 330
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 coord;
out vec2 coords;
void main() {
coords=coord.st;
gl_Position=vec4(pos, 1.0);
}
Because the position is raw, that is, not multiplied by any matrix, the quad's [-1,-1] to [1,1] coordinates fill the viewport. Look for Alfonse's tutorial linked off any of his posts on opengl.org.