I'm trying to understand OpenGL, but I keep running into simple problems... I'm trying to rotate my object and I'm doing this:
glBindVertexArray(VAOs[1]);
glBindTexture(GL_TEXTURE_2D, texture1);
glActiveTexture(GL_TEXTURE1);
glUniform1i(glGetUniformLocation(theProgram.get_programID(), "Texture1"), 1);
glMatrixMode(GL_MODELVIEW_MATRIX);
glPushMatrix();
glRotatef(10.0f, 0.0f, 0.0f, -0.1f);
glDrawElements(GL_TRIANGLES, _countof(tableIndices2), GL_UNSIGNED_INT, 0);
glPopMatrix();
And my object is still in the same position.
What is wrong?
Functions like glRotate are part of the Fixed Function Pipeline, which is deprecated. These functions do not affect the vertices that are processed by a shader program.
See Khronos wiki - Legacy OpenGL:
In 2008, version 3.0 of the OpenGL specification was released. With this revision, the Fixed Function Pipeline as well as most of the related OpenGL functions and constants were declared deprecated. These deprecated elements and concepts are now commonly referred to as legacy OpenGL. ...
See Khronos wiki - Fixed Function Pipeline:
OpenGL 3.0 was the last revision of the specification which fully supported both fixed and programmable functionality. Even so, most hardware since the OpenGL 2.0 generation lacked the actual fixed-function hardware. Instead, fixed-function processes are emulated with shaders built by the system. ...
In modern OpenGL you have to do this stuff by yourself.
If you switch from the Fixed Function Pipeline to today's OpenGL (in C++), then I recommend using a library like glm OpenGL Mathematics for the matrix operations.
First you have to create a vertex shader with a matrix uniform, and you have to do the multiplication of the vertex coordinate and the model matrix in the vertex shader:
in vec3 vert_pos;
uniform mat4 model;
void main()
{
gl_Position = model * vec4(vert_pos.xyz, 1.0);
}
Of course you can declare further matrices like projection matrix and view matrix:
in vec3 vert_pos;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4(vert_pos.xyz, 1.0);
}
In the C++ code you have to set up the matrix and you have to set the matrix uniform:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::rotate
#include <glm/gtc/type_ptr.hpp> // glm::value_ptr
glm::mat4 modelMat = glm::rotate(
glm::mat4(1.0f),
glm::radians(10.0f),
glm::vec3(0.0f, 0.0f, -1.0f) );
GLint model_loc = glGetUniformLocation(theProgram.get_programID(), "model");
glUniformMatrix4fv(model_loc, 1, GL_FALSE, glm::value_ptr(modelMat));
See also the documentation of the glm matrix transformations.
The problem with your code is that you are mixing old fixed-function calls (like glPushMatrix()) with newer shader calls (like glUniform()). Functions like glPushMatrix() assume you are leaving more of the work to the OpenGL driver (in other words, these functions handle some of the newer OpenGL details for you), so combining them with newer calls like glUniform() can lead to unpredictable results, since you have no idea whether you are interfering with what the driver is doing.
Basically, all of the old fixed-function calls can be found here (this is actually OpenGL ES, which is OpenGL for embedded systems, i.e. phones, but it is practically the same as desktop OpenGL): Fixed-Function Calls.
I would recommend that you start learning the OpenGL ES 2 functions, which can be found here: OpenGL ES 2 (Shader Function Calls)
I think this is a good site to learn OpenGL ES 2 (even though you're not using OpenGL ES, this site teaches the concepts well). This site also uses Java, so ignore all the buffer information in the first tutorial (not to be confused with vertex and texture buffers, which are actually OpenGL, and not Java details): Learn OpenGL Concepts
Firstly
glMatrixMode(GL_MODELVIEW); // don't change to GL_MODELVIEW_MATRIX
glLoadIdentity(); // good habit to initialize matrix by identity.
glPushMatrix();
Your call of the function
glRotatef(10.0f, 0.0f, 0.0f, -0.1f);
should be the same as
glRotatef(10.0f, 0.0f, 0.0f, -1.0f);
meaning ten degrees around the z-axis.
According to the documentation of glRotatef(angle, x, y, z), the vector (x, y, z) is normalized to length 1 (if it is not already of length 1).
It should work. Are 10 degrees really significant enough to see the result? Try to do something movable like:
glRotatef(rotZ, 0.0f, 0.0f, -1.0f);
rotZ += 5;
if (rotZ > 360) rotZ -= 360;
Related
I am trying to move objects within my 3D world in different ways, but I can't move one object without affecting the entire scene. I tried using a second shader with different uniform names and I had some very strange results, like objects disappearing and other annoying stuff.
I tried linking and unlinking programs but everything seems to translate together when I apply different matrices to the different shaders in hopes of seeing them move differently.
The TRANSLATE matrix is just a rotation * scale * translation matrix.
Edit - here is how I set my uniforms:
//All of my mat4's
// Sorry for not initialising any of the vec3 or mat4's don't want the code to be too lengthy
perspectiveproj = glm::perspective(glm::radians(95.0f), static_cast<float>(width)/height , 0.01f, 150.0f);
views = glm::lookAt(position, position + viewdirection, UP);
trans1 = glm::rotate(trans1, 0.0f, glm::vec3(0.0f, 1.0f, 0.0f));
trans1 = glm::scale(trans1, glm::vec3(0.0f, 0.0f, 0.0f));
trans1 = glm::translate(trans1, glm::vec3(1.0f, 0.0f, 1.0f));
//These are the uniforms for my perspective matrix per shader
int persp = glGetUniformLocation(shader_one, "perspective");
glUniformMatrix4fv(persp, 1, GL_FALSE, glm::value_ptr(perspectiveproj));
int persp2 = glGetUniformLocation(shader_two, "perspective");
glUniformMatrix4fv(persp2, 1, GL_FALSE, glm::value_ptr(perspectiveproj));
//These are the uniforms for my lookAt matrix per shader
int Look = glGetUniformLocation(shader_one, "lookAt");
glUniformMatrix4fv(Look, 1, GL_FALSE, glm::value_ptr(views));
int Look2 = glGetUniformLocation(shader_two, "perspective");
glUniformMatrix4fv(Look2, 1, GL_FALSE, glm::value_ptr(views));
//This is the one uniform for my Translation matrix, applied to one shader object only,
//moving shader two's objects differently than shader one's
int Moveoneshader = glGetUniformLocation(shader_two, "TRANSLATE");
glUniformMatrix4fv(Moveoneshader, 1, GL_FALSE, glm::value_ptr(trans1));
shader one:
gl_Position = perspective * lookAt * vec4(position.x, position.y, position.z, 1.0);
shader two:
gl_Position = perspective * lookAt * TRANSLATE * vec4(position.x, position.y, position.z, 1.0);
linking and drawing:
glUseProgram(shader_one);
glBindVertexArray(vao_one);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
glDeleteProgram(shader_one);
glUseProgram(shader_two);
glBindVertexArray(vao_two);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
glDeleteProgram(shader_two);
It seems that you are having trouble understanding the mechanics behind using a shader.
A shader is supposed to be a set of instructions that can run on multiple inputs, e.g. objects.
Let's first call the TRANSLATE matrix model matrix, since it holds all transformations that affect our model directly. The model matrix can have different values for different objects. So instead of using different shaders, you can use one generalized shader that calculates:
gl_Position = perspective * view * model * vec4(position, 1.0);
where view equals lookAt. I have exchanged the names of your matrices to follow naming conventions. I advise you to use these names so that you can find more information during research.
When creating a model matrix, you have to be careful about the order of matrix multiplication as well. In most cases, you want your model matrix to be composed like this
model = translate * rotate * scale
to avoid distortions of your object.
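As a hedged illustration with glm (the concrete values are placeholders, not taken from your code; requires <glm/glm.hpp> and <glm/gtc/matrix_transform.hpp>), each glm call post-multiplies, so applying translate, then rotate, then scale produces exactly the translate * rotate * scale product:
glm::mat4 model = glm::mat4(1.0f);
model = glm::translate(model, glm::vec3(1.0f, 0.0f, 1.0f));                    // T
model = glm::rotate(model, glm::radians(45.0f), glm::vec3(0.0f, 1.0f, 0.0f));  // T * R
model = glm::scale(model, glm::vec3(1.0f, 1.0f, 1.0f));                        // T * R * S
// note: a scale of (0, 0, 0), as in your trans1, collapses the object to a single point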
To be able to render multiple objects with their own respective model matrix, you have to loop over all objects and update the matrix value in the shader before drawing the object. A simplified example would be:
std::string name = "model";
for (const Object& obj : objects)
{
    // obj.model: the object's own model matrix (the member name is illustrative)
    glUniformMatrix4fv(glGetUniformLocation(shaderID, name.c_str()), 1,
                       GL_FALSE, glm::value_ptr(obj.model));
// draw object
}
You can read more about this here https://learnopengl.com/Getting-started/Coordinate-Systems.
Related to your problem, objects can disappear if you draw them with multiple shaders. This is related to how shaders write their data to your screen. By default, the active shader writes on all pixels of your screen. This means that when switching shaders to draw with the second shader after drawing with the first shader, the result of the first shader will be overwritten.
To combine multiple images, you can use Framebuffers. Instead of writing directly on your screen, you can use them to write into images first. Later, these images can be combined in a third shader.
However, this will cost way too much memory and will be too computationally inefficient to consider for your scenario. These techniques are usually applied when rendering post-processing effects.
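If you ever do want to explore that route later, a rough sketch of creating a framebuffer that renders into a texture could look like this (width, height and the variable names are placeholders):
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // handle the error: the attachment is incomplete or unsupported
}
glBindFramebuffer(GL_FRAMEBUFFER, 0); // render to the default framebuffer (the screen) again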
I have a few objects in the scene, and even if I specify that object A has y = 10 (the highest object), from the TOP camera I can see the bottom objects through object A. Here is an image from my scene.
And only today I found an interesting property: the draw order of models matters (I may be wrong). Here is another image where I change the draw order of "ship1". Attention: "ship1" is way below my scene; if I call ship1.draw() first, the ship disappears (correct), but if I call ship1.draw() last, it appears on top (incorrect).
Video: Opengl Depth Problem video
Q1) Does the draw order always matter?
Q2) How do I fix this, should I change draw order every time I change camera position?
Edit: I also compared my Perspective Projection class with the glm library, just to be sure that the problem is not with my projection matrix. Everything is correct.
Edit1: I have my project on git: Arkanoid git repository (Windows; the project is ready to run on any computer with VS installed)
Edit2: I don't use normals or textures. Just vertices and indices.
Edit3: Is it a problem if every object in the scene uses (shares) vertices from the same file?
Edit4: I also changed my Perspective Projection values. I had the near plane at 0.0f; now I have near = 20.0f, far = 500.0f, angle = 60º. But nothing changes; the view does, but the depth does not. =/
Edit5: Here is my Vertex and Fragment shaders.
Edit6: Contact me any time, I am here all day, so ask me anything. At the moment I am rewriting the whole project from zero. I have two cubes which render well, one in front of the other. I have already added my classes for the camera, the projections, and a handler for shaders. Now I'm moving on to the class which creates and draws objects.
// Vertex shader
in vec4 in_Position;
out vec4 color;
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;
void main(void)
{
color = in_Position;
gl_Position = Projection * View * Model * in_Position;
}
// Fragment shader
#version 330 core
in vec4 color;
out vec4 out_Color;
void main(void)
{
out_Color = color;
}
Some code:
void setupOpenGL() {
std::cerr << "CONTEXT: OpenGL v" << glGetString(GL_VERSION) << std::endl;
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_TRUE);
glDepthRange(0.0, 1.0);
glClearDepth(1.0);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
}
void display()
{
++FrameCount;
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderScene();
glutSwapBuffers();
}
void renderScene()
{
wallNorth.draw(shader);
obstacle1.draw(shader);
wallEast.draw(shader);
wallWest.draw(shader);
ship1.draw(shader);
plane.draw(shader);
}
I have cloned the repository you have linked to see if the issue was located somewhere else. In your most recent version the Object3D::draw function looks like this:
glBindVertexArray(this->vaoID);
glUseProgram(shader.getProgramID());
glUniformMatrix4fv(this->currentshader.getUniformID_Model(), 1, GL_TRUE, this->currentMatrix.getMatrix()); // PPmat is the identity matrix
glDrawElements(GL_TRIANGLES, 40, GL_UNSIGNED_INT, (GLvoid*)0);
glBindVertexArray(0);
glUseProgram(0);
glClear( GL_DEPTH_BUFFER_BIT); <<< clears the current depth buffer.
The last line clears the depth buffer after each object that is drawn, meaning that the next object drawn is not occluded properly. You should only clear the depth buffer once every frame.
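A minimal sketch of the fix, keeping your existing display() structure (the assumption is simply that the stray clear is deleted from Object3D::draw):
// in Object3D::draw: remove the glClear(GL_DEPTH_BUFFER_BIT) call entirely

void display()
{
    ++FrameCount;
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear color and depth once per frame
    renderScene();                                      // draw all objects; depth testing now works across them
    glutSwapBuffers();
}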
I'm trying to set up a camera in OpenGL to view some points in 3 dimensions.
To achieve this, I don't want to use the old, fixed functionality style (glMatrixMode(), glTranslate, etc.) but rather set up the Model View Projection matrix myself and use it in my vertex shader. An orthographic projection is sufficient.
A lot of tutorials on this seem to use the glm library, but since I'm completely new to OpenGL, I'd like to learn it the right way and afterwards use some third party libraries. Additionally, most tutorials don't describe how to use glutMotionFunc() and glutMouseFunc() to position the camera in space.
So, I guess I'm looking for some sample code and guidance on how to see my points in 3D. Here's the vertex shader I've written:
const GLchar *vertex_shader = // Vertex Shader
"#version 330\n"
"layout (location = 0) in vec4 in_position;"
"layout (location = 1) in vec4 in_color;"
"uniform float myPointSize;"
"uniform mat4 myMVP;"
"out vec4 color;"
"void main()"
"{"
" color = in_color;"
" gl_Position = in_position * myMVP;"
" gl_PointSize = myPointSize;"
"}\0";
I set up the initial value of the MVP to be the identity matrix in my shader set up method which gives me the correct 2D representation of my points:
// Set up initial values for uniform variables
glUseProgram(shader_program);
location_pointSize = glGetUniformLocation(shader_program, "myPointSize");
glUniform1f(location_pointSize, 25.0f);
location_mvp = glGetUniformLocation(shader_program, "myMVP");
float mvp_array[16] = {1.0f, 0.0f, 0.0f, 0.0f, // 1st column
0.0f, 1.0f, 0.0f, 0.0f, // 2nd column
0.0f, 0.0f, 1.0f, 0.0f, // 3rd column
0.0f, 0.0f, 0.0f, 1.0f // 4th column
};
glUniformMatrix4fv(location_mvp, 1, GL_FALSE, mvp_array);
glUseProgram(0);
Now my question is how to adapt the two functions "motion" and "mouse", which to this point only have some code from a previous example, where the deprecated style of doing this was used:
// OLD, UNUSED VARIABLES
int mouse_old_x;
int mouse_old_y;
int mouse_buttons = 0;
float rotate_x = 0.0;
float rotate_y = 0.0;
float translate_z = -3.0;
...
// set view matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0, 0.0, translate_z);
glRotatef(rotate_x, 1.0, 0.0, 0.0);
glRotatef(rotate_y, 0.0, 1.0, 0.0);
...
// OLD, UNUSED FUNCTIONS
void mouse(int button, int state, int x, int y)
{
if (state == GLUT_DOWN)
{
mouse_buttons |= 1<<button;
}
else if (state == GLUT_UP)
{
mouse_buttons = 0;
}
mouse_old_x = x;
mouse_old_y = y;
}
void motion(int x, int y)
{
float dx, dy;
dx = (float)(x - mouse_old_x);
dy = (float)(y - mouse_old_y);
if (mouse_buttons & 1)
{
rotate_x += dy * 0.2f;
rotate_y += dx * 0.2f;
}
else if (mouse_buttons & 4)
{
translate_z += dy * 0.01f;
}
mouse_old_x = x;
mouse_old_y = y;
}
I'd like to learn it the right way and afterwards use some third party libraries.
There's nothing wrong with using GLM, as GLM is just a math library to deal with matrices. It's a very good thing that you want to learn the very basics. A trait only seldom seen these days. Knowing these things is invaluable when doing advanced OpenGL.
Okay, three things to learn for you:
Basic discrete linear algebra, i.e. how to deal with matrices and vectors with discrete elements. Scalar and complex elements will suffice for the time being.
A little bit of numerics. You must be able to write code performing the elementary linear algebra operations: scaling and adding vectors, performing inner and outer products of vectors, performing matrix-vector and matrix-matrix multiplication, and inverting a matrix.
Learn about homogenous coordinates.
( 4. if you want to spice things up, learn quaternions, those things rock! )
After Step 3 you're ready to write your own linear math code. Even if you don't know about homogenous coordinates yet. Just write it to deal efficiently with matrices of dimension 4×4 and vectors of dimension 4.
Once you've mastered homogenous coordinates you understand what OpenGL actually does. And then: drop those first coding steps in writing your own linear math library. Why? Because it will be full of bugs. The one small linmath.h I maintain is riddled with them; every time I use it in a new project I fix a number of them. Hence I recommend you use something well tested, like GLM, or Eigen.
I set up the initial value of the MVP to be the identity matrix in my shader set up method which gives me the correct 2D representation of my points:
You should separate these into 3 matrices: Model, View and Projection. In your shader you should have two, Modelview and Projection. I.e. you pass the projection to the shader as it is, but calculate a compound Model · View = Modelview matrix passed in a separate uniform.
To move the "camera" you modify the View matrix.
Now my question is how to adapt the two functions "motion" and "mouse", which to this point only have some code from a previous example, where the deprecated style of doing this was used:
Most of this code remains the same, as it doesn't touch OpenGL. What you have to replace is those glRotate and glTranslate calls.
You're working on the View matrix, as already told. First, let's look at what glRotate does. In fixed function OpenGL there's an internal alias, let's call it M, that is set to whatever matrix is selected with glMatrixMode. Then we can write glRotate in pseudocode as
proc glRotate(angle, vec_x, vec_y, vec_z):
mat4x4 R = make_rotation_matrix(angle, vec_x, vec_y, vec_z)
M = M · R
Okay, all the magic seems to lie within the function make_rotation_matrix. How does that one look? Well, since you're learning linear algebra, this is a great exercise for you. Find the matrix R with the following properties:
a = R·a, where a is the axis of rotation
cos(phi) = b·c, with b·a = 0 and c·a = 0, where phi is the angle of rotation, b is any vector perpendicular to a, and c = R·b
Since you probably just want to get this thing done, you can just as well look into the OpenGL-1.1 specification, which documents this matrix in its section about glRotatef.
Right beside them you can find the specs for all the other matrix manipulation functions.
Now, instead of operating on some hidden state variable selected with glMatrixMode, you let your matrix math library operate directly on the matrix variable you define and supply; in your case View. Do the same with Projection and Model. Then, when rendering, you contract Model and View into the compound already mentioned. The reason for this is that you often want the intermediate result of bringing the vertex position into eye space (Modelview * position, for the fragment shader). After determining the matrix values, you bind the program (glUseProgram), set the uniform values, and then render your geometry (glDraw…).
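As a hedged sketch (reusing your existing mouse_old_x/mouse_old_y, mouse_buttons, rotate_x/rotate_y, translate_z variables and the glm headers from the snippet above), the motion() handler could rebuild a glm View matrix instead of calling glTranslatef/glRotatef:
glm::mat4 view(1.0f); // global, replaces the fixed-function modelview state

void motion(int x, int y)
{
    float dx = (float)(x - mouse_old_x);
    float dy = (float)(y - mouse_old_y);

    if (mouse_buttons & 1)        // left button: rotate
    {
        rotate_x += dy * 0.2f;
        rotate_y += dx * 0.2f;
    }
    else if (mouse_buttons & 4)   // right button: move along z
    {
        translate_z += dy * 0.01f;
    }
    mouse_old_x = x;
    mouse_old_y = y;

    // rebuild the view matrix from the accumulated values,
    // mirroring the old glTranslatef/glRotatef sequence
    view = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, translate_z));
    view = glm::rotate(view, glm::radians(rotate_x), glm::vec3(1.0f, 0.0f, 0.0f));
    view = glm::rotate(view, glm::radians(rotate_y), glm::vec3(0.0f, 1.0f, 0.0f));

    glutPostRedisplay();
}
The view matrix built here is then combined with Model and uploaded as the Modelview uniform right before drawing, as described above.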
(OpenGL 2.0)
I managed to do some nice text rendering in OpenGL and decided to make it shader-based.
However, the rendered font texture that looked nice in fixed-pipeline mode looks unpleasant in GLSL mode.
In fixed-pipeline mode, I don't see any difference between GL_LINEAR and GL_NEAREST filtering. That's because the texture doesn't really need any filtering: I set an orthographic projection and match the quad's width and height to the texture coordinates.
Now, when I'm trying to render it with a shader, I can see some very bad GL_NEAREST filtering artifacts, and with GL_LINEAR the texture appears too blurry.
Fixed pipeline, satisfying, best quality (no difference between linear/nearest):
GLSL, nearest (visible artifacts, for example, look at fraction glyphs):
GLSL, linear (too blurry):
Shader program:
Vertex shader was successfully compiled to run on hardware.
Fragment shader was successfully compiled to run on hardware.
Fragment shader(s) linked, vertex shader(s) linked.
------------------------------------------------------------------------------------------
attribute vec2 at_Vertex;
attribute vec2 at_Texcoord;
varying vec2 texCoord;
void main(void) {
texCoord = at_Texcoord;
gl_Position = mat4(0.00119617, 0, 0, 0, 0, 0.00195503, 0, 0, 0, 0, -1, 0, -1, -1, -0, 1)* vec4(at_Vertex.x, at_Vertex.y, 0, 1);
}
-----------------------------------------------------------------------------------------
varying vec2 texCoord;
uniform sampler2D diffuseMap;
void main(void) {
gl_FragColor = texture2D(diffuseMap, texCoord);
}
Quad rendering, fixed:
glTexCoord2f (0.0f, 0.0f);
glVertex2f (40.0f, 40.0f);
glTexCoord2f (0.0f, 1.0f);
glVertex2f ((font.tex_r.w+40.0f), 40.0f);
glTexCoord2f (1.0f, 1.0f);
glVertex2f ((font.tex_r.w+40.0f), (font.tex_r.h+40.0f));
glTexCoord2f (1.0f, 0.0f);
glVertex2f (40.0f, (font.tex_r.h+40.0f));
Quad rendering, shader-mode:
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 0.0f, 0.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, 40.0f, 40.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 0.0f, 1.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, (font.tex_r.w+40.0f), 40.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 1.0f, 1.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, (font.tex_r.w+40.0f), (font.tex_r.h+40.0f));
glVertexAttrib2f(__MeshShader::ATTRIB_TEXCOORD, 1.0f, 0.0f);
glVertexAttrib2f(__MeshShader::ATTRIB_VERTEX, 40.0f, (font.tex_r.h+40.0f));
In both cases the matrices are calculated from the same source, though for performance reasons, as you can see, I'm writing constant values into the shader code with the help of a function like this (if that is the reason, how do I write them properly?):
std::ostringstream buffer;
buffer << f;
return buffer.str().c_str();
where "f" is some double value.
EDIT:
The result of my further research is a little bit surprising.
Now I'm multiplying the vertex coordinates by the same orthographic matrix on the CPU (not in the vertex shader like before) and I'm leaving the vertex untouched in the vertex shader, just passing it to gl_Position. I couldn't believe it, but this really works and actually solves my problem. Every operation is done on floats, as on the GPU.
Seems like matrix/vertex multiplication is much more accurate on the CPU.
The question is: why?
EDIT: Actually, the whole reason was different matrix sources! A really, really small bug!
Nicol was nearest the solution.
though for performance reasons, as you can see, I'm writing constant values into the shader code
That's not going to help your performance. Uploading a single matrix uniform is pretty standard for most OpenGL shaders, and will cost you nothing of significance in terms of performance.
Seems like matrix/vertex multiplication is much more accurate on CPU. question is: why ?
It's not more accurate; it's simply using a different matrix. If you passed that matrix to GLSL via a shader uniform, you would probably get the same result. The matrix you use in the shader is not the same matrix that you used on the CPU.
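For illustration, a hedged sketch of passing the matrix as a uniform instead of printing its values into the shader source (programID, the float[16] array ortho, and the uniform name u_mvp are placeholders; the vertex shader would declare uniform mat4 u_mvp; and multiply with it):
// ortho is the same column-major matrix you computed on the CPU
GLint mvpLoc = glGetUniformLocation(programID, "u_mvp");
glUseProgram(programID);
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, ortho);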
I'm doing ray casting in the fragment shader. I can think of a couple ways to draw a fullscreen quad for this purpose. Either draw a quad in clip space with the projection matrix set to the identity matrix, or use the geometry shader to turn a point into a triangle strip. The former uses immediate mode, deprecated in OpenGL 3.2. The latter I use out of novelty, but it still uses immediate mode to draw a point.
I'm going to argue that the most efficient approach will be drawing a single "full-screen" triangle. For a triangle to cover the full screen, it needs to be bigger than the actual viewport. In NDC (and also in clip space, if we set w=1), the viewport will always be the [-1,1] square. For a triangle to cover this area just completely, two of its sides need to be twice as long as the viewport rectangle, so that the third side will cross the edge of the viewport; hence we can, for example, use the following coordinates (in counter-clockwise order): (-1,-1), (3,-1), (-1,3).
We also do not need to worry about the texcoords. To get the usual normalized [0,1] range across the visible viewport, we just need to make the corresponding texcoords for the vertices twice as big, and the barycentric interpolation will yield exactly the same results for any viewport pixel as when using a quad.
This approach can of course be combined with attribute-less rendering as suggested in demanze's answer:
out vec2 texcoords; // texcoords are in the normalized [0,1] range for the viewport-filling quad part of the triangle
void main() {
vec2 vertices[3]=vec2[3](vec2(-1,-1), vec2(3,-1), vec2(-1, 3));
gl_Position = vec4(vertices[gl_VertexID],0,1);
texcoords = 0.5 * gl_Position.xy + vec2(0.5);
}
Why will a single triangle be more efficient?
This is not about the one saved vertex shader invocation, and the one less triangle to handle at the front-end. The most significant effect of using a single triangle will be that there are fewer fragment shader invocations.
Real GPUs always invoke the fragment shader for 2x2 pixel sized blocks ("quads") as soon as a single pixel of the primitive falls into such a block. This is necessary for calculating the window-space derivative functions (those are also implicitly needed for texture sampling, see this question).
If the primitive does not cover all 4 pixels in that block, the remaining fragment shader invocations will do no useful work (apart from providing the data for the derivative calculations) and will be so-called helper invocations (which can even be queried via the gl_HelperInvocation GLSL built-in variable). See also Fabian "ryg" Giesen's blog article for more details.
If you render a quad with two triangles, both will have one edge going diagonally across the viewport, and on both triangles, you will generate a lot of useless helper invocations at the diagonal edge. The effect will be worst for a perfectly square viewport (aspect ratio 1). If you draw a single triangle, there will be no such diagonal edge (it lies outside of the viewport and won't concern the rasterizer at all), so there will be no additional helper invocations.
Wait a minute, if the triangle extends across the viewport boundaries, won't it get clipped and actually put more work on the GPU?
If you read the textbook materials about graphics pipelines (or even the GL spec), you might get that impression. But real-world GPUs use some different approaches like guard-band clipping. I won't go into detail here (that would be a topic on its own, have a look at Fabian "ryg" Giesen's fine blog article for details), but the general idea is that the rasterizer will produce fragments only for pixels inside the viewport (or scissor rect) anyway, no matter if the primitive lies completely inside it or not, so we can simply throw bigger triangles at it if both of the following are true:
a) the triangle only extends beyond the 2D top/bottom/left/right clipping planes (as opposed to the z-dimension near/far ones, which are trickier to handle, especially because vertices may also lie behind the camera)
b) the actual vertex coordinates (and all intermediate calculation results the rasterizer might be doing on them) are representable in the internal data formats the GPU's hardware rasterizer uses. The rasterizer will use fixed-point data types of implementation-specific width, while vertex coords are 32-bit single-precision floats. (That is basically what defines the size of the guard-band.)
Our triangle is only a factor of 3 bigger than the viewport, so we can be very sure that there is no need to clip it at all.
But is it worth it?
Well, the savings on fragment shader invocations are real (especially when you have a complex fragment shader), but the overall effect might be barely measurable in a real-world scenario. On the other hand, the approach is not more complicated than using a full-screen quad, and uses less data, so even if it might not make a huge difference, it won't hurt, so why not use it?
Could this approach be used for all sorts of axis-aligned rectangles, not just fullscreen ones?
In theory, you can combine this with the scissor test to draw some arbitrary axis-aligned rectangle (and the scissor test will be very efficient, as it just limits which fragments are produced in the first place, it isn't a real "test" in HW which discards fragments). However, this requires you to change the scissor parameters for each rectangle you want to draw, which implies a lot of state changes and limits you to a single rectangle per draw call, so doing so won't be a good idea in most scenarios.
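For illustration, a minimal sketch of that combination (the rectangle coordinates rectX/rectY/rectW/rectH are hypothetical window-space pixels, and the attribute-less fullscreen-triangle shader from above is assumed to be bound):
glEnable(GL_SCISSOR_TEST);
glScissor(rectX, rectY, rectW, rectH);   // limit fragment generation to the rectangle
glDrawArrays(GL_TRIANGLES, 0, 3);        // the oversized triangle from above
glDisable(GL_SCISSOR_TEST);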
You can send two triangles creating a quad, with their vertex attributes set to -1/1 respectively.
You do not need to multiply them with any matrix in the vertex/fragment shader.
Here are some code samples, simple as it is :)
Vertex Shader:
const vec2 madd=vec2(0.5,0.5);
attribute vec2 vertexIn;
varying vec2 textureCoord;
void main() {
textureCoord = vertexIn.xy*madd+madd; // scale vertex attribute to [0-1] range
gl_Position = vec4(vertexIn.xy,0.0,1.0);
}
Fragment Shader :
varying vec2 textureCoord;
uniform sampler2D t; // the texture to display
void main() {
vec4 color1 = texture2D(t, textureCoord);
gl_FragColor = color1;
}
No need to use a geometry shader, a VBO or any memory at all.
A vertex shader can generate the quad.
layout(location = 0) out vec2 uv;
void main()
{
float x = float(((uint(gl_VertexID) + 2u) / 3u)%2u);
float y = float(((uint(gl_VertexID) + 1u) / 3u)%2u);
gl_Position = vec4(-1.0f + x*2.0f, -1.0f+y*2.0f, 0.0f, 1.0f);
uv = vec2(x, y);
}
Bind an empty VAO. Send a draw call for 6 vertices.
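On the C++ side this can be as small as the following sketch (program is assumed to be the linked shader program containing the vertex shader above):
GLuint emptyVAO = 0;
glGenVertexArrays(1, &emptyVAO);
glBindVertexArray(emptyVAO);        // the core profile still requires a VAO to be bound
glUseProgram(program);
glDrawArrays(GL_TRIANGLES, 0, 6);   // 6 vertices, positions derived from gl_VertexID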
To output a fullscreen quad, a geometry shader can be used:
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
out vec2 texcoord;
void main()
{
gl_Position = vec4( 1.0, 1.0, 0.5, 1.0 );
texcoord = vec2( 1.0, 1.0 );
EmitVertex();
gl_Position = vec4(-1.0, 1.0, 0.5, 1.0 );
texcoord = vec2( 0.0, 1.0 );
EmitVertex();
gl_Position = vec4( 1.0,-1.0, 0.5, 1.0 );
texcoord = vec2( 1.0, 0.0 );
EmitVertex();
gl_Position = vec4(-1.0,-1.0, 0.5, 1.0 );
texcoord = vec2( 0.0, 0.0 );
EmitVertex();
EndPrimitive();
}
Vertex shader is just empty:
#version 330 core
void main()
{
}
To use this shader, you can issue a dummy draw command with an empty VBO:
glDrawArrays(GL_POINTS, 0, 1);
This is similar to the answer by demanze, but I would argue it's easier to understand. Also, this needs only 4 vertices because it is drawn with TRIANGLE_STRIP.
#version 300 es
out vec2 textureCoords;
void main() {
const vec2 positions[4] = vec2[](
vec2(-1, -1),
vec2(+1, -1),
vec2(-1, +1),
vec2(+1, +1)
);
const vec2 coords[4] = vec2[](
vec2(0, 0),
vec2(1, 0),
vec2(0, 1),
vec2(1, 1)
);
textureCoords = coords[gl_VertexID];
gl_Position = vec4(positions[gl_VertexID], 0.0, 1.0);
}
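The matching draw call is then just the following sketch (assuming program holds the shader above and emptyVAO is a VAO with no attributes, as in the earlier answer):
glUseProgram(program);
glBindVertexArray(emptyVAO);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  // 4 vertices, looked up via gl_VertexID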
The following comes from the draw function of the class that draws FBO textures to a screen-aligned quad.
Gl.glUseProgram(shad);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, vbo);
Gl.glEnableVertexAttribArray(0);
Gl.glEnableVertexAttribArray(1);
Gl.glVertexAttribPointer(0, 3, Gl.GL_FLOAT, Gl.GL_FALSE, 0, voff);
Gl.glVertexAttribPointer(1, 2, Gl.GL_FLOAT, Gl.GL_FALSE, 0, coff);
Gl.glActiveTexture(Gl.GL_TEXTURE0);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, fboc);
Gl.glUniform1i(tileLoc, 0);
Gl.glDrawArrays(Gl.GL_QUADS, 0, 4);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, 0);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, 0);
Gl.glUseProgram(0);
The actual quad itself and the coords are got from:
private float[] v=new float[]{ -1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
-1.0f, 1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f
};
The binding and setup of the VBOs I leave to you.
The vertex shader:
#version 330
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 coord;
out vec2 coords;
void main() {
coords=coord.st;
gl_Position=vec4(pos, 1.0);
}
Because the position is raw, that is, not multiplied by any matrix, the quad's coordinates from -1,-1 to 1,1 fit exactly into the viewport. Look for Alfonse's tutorial linked from any of his posts on opengl.org.