I'm trying to implement a HUD in OpenGL which displays 2D text at the front of the viewing window with a 3D perspective view behind it.
I create my 3D fragments using a projection-view matrix and then switch to an orthographic matrix to render my quads. I've managed to get this to work without the ortho matrix (see below), but for some reason, when using the matrix, my 2D text disappears as soon as I draw 3D objects; if I render the text alone it is present
and displays correctly.
Basic rendering loop:
glm::mat4 projection, projectionView, windowMatrix;
projection = glm::perspective(glm::radians(camera.Zoom), 800.0f / 600.0f, 0.1f, 100.0f);
windowMatrix = glm::ortho(0.0f,800.0f,0.0f,600.0f);
while (!glfwWindowShouldClose(window)) //Render loop
{
glm::mat4 view = camera.GetViewMatrix();
projectionView = projection * view;
//Update the Uniform
threeDShader->setMat4("projection", projectionView);
//Render() calls glDrawElements()
threeDObject->Render();
//Update the Uniform
textShader->setMat4("projection", windowMatrix);
//Render params = text, xPos, yPos, scale, color
//RenderText() calls glDrawArrays()
textHUD->RenderText("Hello World!", 25.0f, 25.0f, 1.0f, glm::vec3(0.9, 0.2f, 0.8f));
}
The textShader vertex shader is:
#version 420 core
layout (location = 0) in vec4 vertex; // <vec2 pos, vec2 tex>
out vec2 TexCoords;
uniform mat4 projection;
void main()
{
gl_Position = vec4(vertex.xy * vec2(2.0/800,2.0/600) - vec2(1,1), 0.0f, 1.0f); <----(1)
//gl_Position = projection * vec4(vertex.xy, 0.0f, 1.0f); <----(2)
TexCoords = vertex.zw;
}
Line (1) in the vertex shader, which explicitly computes the screen position, works: the text is displayed with the other 3D objects in the background.
I would prefer to use line (2), which uses the orthographic matrix (and would be neater to adapt to the resolution), but for some reason it only works when no other 3D objects are rendered; the text disappears as soon as a 3D object is drawn in the scene. Both passes use separate matrices and then draw their own vertices/fragments, so in my opinion they should not be interfering with each other.
The fragment shader is the same in both cases and should not be an issue. I thought it might be something to do with the near clipping plane of the perspective matrix, but with the independent draw calls that shouldn't matter.
I've also tried giving the ortho matrix near and far clipping planes similar to the perspective matrix, but without success.
but for some reason works when no other 3D objects are rendered
I guess that threeDObject->Render() and textHUD->RenderText() each install their shader program with glUseProgram.
glUniform* changes a uniform variable in the default uniform block of the currently installed program.
You have to install the shader program before you can change its uniforms:
(In the following I assume that the shader program class has a method use(), which installs the program.)
while (!glfwWindowShouldClose(window)) //Render loop
{
glm::mat4 view = camera.GetViewMatrix();
projectionView = projection * view;
// Install 3D shader program and update the Uniform
threeDShader->use();
threeDShader->setMat4("projection", projectionView);
//Render() calls glDrawElements()
threeDObject->Render();
// Install text shader program and update the Uniform
textShader->use();
textShader->setMat4("projection", windowMatrix);
//Render params = text, xPos, yPos, scale, color
//RenderText() calls glDrawArrays()
textHUD->RenderText("Hello World!", 25.0f, 25.0f, 1.0f, glm::vec3(0.9, 0.2f, 0.8f));
}
I'm trying to set up a camera in OpenGL to view some points in 3 dimensions.
To achieve this, I don't want to use the old, fixed functionality style (glMatrixMode(), glTranslate, etc.) but rather set up the Model View Projection matrix myself and use it in my vertex shader. An orthographic projection is sufficient.
A lot of tutorials on this seem to use the glm library, but since I'm completely new to OpenGL, I'd like to learn it the right way first and use third-party libraries afterwards. Additionally, most tutorials don't describe how to use glutMotionFunc() and glutMouseFunc() to position the camera in space.
So, I guess I'm looking for some sample code and guidance how to see my points in 3D. Here's the vertex shader I've written:
const GLchar *vertex_shader = // Vertex Shader
"#version 330\n"
"layout (location = 0) in vec4 in_position;"
"layout (location = 1) in vec4 in_color;"
"uniform float myPointSize;"
"uniform mat4 myMVP;"
"out vec4 color;"
"void main()"
"{"
" color = in_color;"
" gl_Position = in_position * myMVP;"
" gl_PointSize = myPointSize;"
"}\0";
I set up the initial value of the MVP to be the identity matrix in my shader set up method which gives me the correct 2D representation of my points:
// Set up initial values for uniform variables
glUseProgram(shader_program);
location_pointSize = glGetUniformLocation(shader_program, "myPointSize");
glUniform1f(location_pointSize, 25.0f);
location_mvp = glGetUniformLocation(shader_program, "myMVP");
float mvp_array[16] = {1.0f, 0.0f, 0.0f, 0.0f, // 1st column
0.0f, 1.0f, 0.0f, 0.0f, // 2nd column
0.0f, 0.0f, 1.0f, 0.0f, // 3rd column
0.0f, 0.0f, 0.0f, 1.0f // 4th column
};
glUniformMatrix4fv(location_mvp, 1, GL_FALSE, mvp_array);
glUseProgram(0);
Now my question is how to adapt the two functions "motion" and "mouse", which to this point only have some code from a previous example, where the deprecated style of doing this was used:
// OLD, UNUSED VARIABLES
int mouse_old_x;
int mouse_old_y;
int mouse_buttons = 0;
float rotate_x = 0.0;
float rotate_y = 0.0;
float translate_z = -3.0;
...
// set view matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0, 0.0, translate_z);
glRotatef(rotate_x, 1.0, 0.0, 0.0);
glRotatef(rotate_y, 0.0, 1.0, 0.0);
...
// OLD, UNUSED FUNCTIONS
void mouse(int button, int state, int x, int y)
{
if (state == GLUT_DOWN)
{
mouse_buttons |= 1<<button;
}
else if (state == GLUT_UP)
{
mouse_buttons = 0;
}
mouse_old_x = x;
mouse_old_y = y;
}
void motion(int x, int y)
{
float dx, dy;
dx = (float)(x - mouse_old_x);
dy = (float)(y - mouse_old_y);
if (mouse_buttons & 1)
{
rotate_x += dy * 0.2f;
rotate_y += dx * 0.2f;
}
else if (mouse_buttons & 4)
{
translate_z += dy * 0.01f;
}
mouse_old_x = x;
mouse_old_y = y;
}
I'd like to learn it the right way and afterwards use some third party libraries.
There's nothing wrong with using GLM, as GLM is just a math library for dealing with matrices. It's a very good thing that you want to learn the very basics, a trait only seldom seen these days. Knowing these things is invaluable when doing advanced OpenGL.
Okay, three things to learn for you:
Basic discrete linear algebra, i.e. how to deal with matrices and vectors with discrete elements. Scalar and complex elements will suffice for the time being.
A little bit of numerics. You must be able to write code performing the elementary linear algebra operations: scaling and adding vectors, computing the inner and outer product of vectors, performing matrix-vector and matrix-matrix multiplication, and inverting a matrix.
Learn about homogeneous coordinates.
( 4. if you want to spice things up, learn quaternions, those things rock! )
After step 3 you're ready to write your own linear math code, even if you don't know about homogeneous coordinates yet. Just write it to deal efficiently with matrices of dimension 4×4 and vectors of dimension 4.
Once you've mastered homogeneous coordinates you understand what OpenGL actually does. And then: drop those first coding steps in writing your own linear math library. Why? Because it will be full of bugs. The one small linmath.h I maintain is riddled with them; every time I use it in a new project I fix a number of them. Hence I recommend you use something well tested, like GLM or Eigen.
I set up the initial value of the MVP to be the identity matrix in my shader set up method which gives me the correct 2D representation of my points:
You should separate this into 3 matrices: Model, View and Projection. In your shader you should have two: Modelview and Projection. I.e. you pass the Projection to the shader as it is, but calculate a compound Model · View = Modelview matrix and pass it in a separate uniform.
To move the "camera" you modify the View matrix.
Now my question is how to adapt the two functions "motion" and "mouse", which to this point only have some code from a previous example, where the deprecated style of doing this was used:
Most of this code remains the same, as it doesn't touch OpenGL. What you have to replace is those glRotate and glTranslate calls.
You're working on the View matrix, as already said. First let's look at what glRotate does. In fixed-function OpenGL there's an internal alias, let's call it M, that refers to whatever matrix is selected with glMatrixMode. Then we can write glRotate in pseudocode as
proc glRotate(angle, vec_x, vec_y, vec_z):
mat4x4 R = make_rotation_matrix(angle, vec_x, vec_y, vec_z)
M = M · R
Okay, all the magic seems to lie within the function make_rotation_matrix. How does that one look? Well, since you're learning linear algebra this is a great exercise for you. Find the matrix R with the following properties:
a = R·a, where a is the axis of rotation
cos(φ) = b·c with c = R·b, for any unit vector b satisfying b·a = 0, where φ is the angle of rotation
Since you probably just want to get this thing done, you can as well look into the OpenGL 1.1 specification, which documents this matrix in its section about glRotatef.
Right beside them you can find the specs for all the other matrix manipulation functions.
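If you want to see what such a function could look like in code, here is a rough sketch of the matrix from the GL 1.1 spec (column-major 4×4, angle in degrees, axis assumed non-zero):

#include <math.h>

/* Sketch: R = u*u^T + cos(a)*(I - u*u^T) + sin(a)*S, with S the cross-product
   matrix of the normalized axis u, padded to 4x4 homogeneous form. */
void make_rotation_matrix(float angle_deg, float x, float y, float z, float out[16])
{
    float a = angle_deg * 3.14159265f / 180.0f;
    float c = cosf(a), s = sinf(a);
    float len = sqrtf(x*x + y*y + z*z);
    x /= len; y /= len; z /= len;                 /* normalize the axis */

    /* column-major, as OpenGL expects */
    out[0] = x*x*(1-c) + c;    out[4] = x*y*(1-c) - z*s;  out[8]  = x*z*(1-c) + y*s;  out[12] = 0.0f;
    out[1] = y*x*(1-c) + z*s;  out[5] = y*y*(1-c) + c;    out[9]  = y*z*(1-c) - x*s;  out[13] = 0.0f;
    out[2] = x*z*(1-c) - y*s;  out[6] = y*z*(1-c) + x*s;  out[10] = z*z*(1-c) + c;    out[14] = 0.0f;
    out[3] = 0.0f;             out[7] = 0.0f;             out[11] = 0.0f;             out[15] = 1.0f;
}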
Now, instead of operating on some hidden state variable selected with glMatrixMode, you let your matrix math library operate directly on the matrix variables you define and supply, in your case View, and similarly for Projection and Model. Then, when you're rendering, you contract Model and View into the compound already mentioned. The reason for this is that you often want the intermediate result of bringing the vertex position into eye space (Modelview · position, e.g. for the fragment shader). After determining the matrix values you install the program (glUseProgram) and set the uniform values, then render your geometry (glDraw…).
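As a rough sketch of how the old motion()/render path could carry over, reusing the rotate_x/rotate_y/translate_z variables from your existing handlers and GLM for the math (the ortho bounds and the myModelview/myProjection uniform names are placeholders, not from your code):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 view(1.0f);   // replaces the fixed-function GL_MODELVIEW state

void update_view()      // call this from motion() after updating the angles
{
    // same transform order as the old glTranslatef/glRotatef sequence
    view = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, translate_z));
    view = glm::rotate(view, glm::radians(rotate_x), glm::vec3(1.0f, 0.0f, 0.0f));
    view = glm::rotate(view, glm::radians(rotate_y), glm::vec3(0.0f, 1.0f, 0.0f));
}

void render()
{
    glm::mat4 projection = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -10.0f, 10.0f); // placeholder bounds
    glm::mat4 model(1.0f);                       // per-object transform, identity here
    glm::mat4 modelview = view * model;

    glUseProgram(shader_program);
    glUniformMatrix4fv(glGetUniformLocation(shader_program, "myModelview"),
                       1, GL_FALSE, glm::value_ptr(modelview));
    glUniformMatrix4fv(glGetUniformLocation(shader_program, "myProjection"),
                       1, GL_FALSE, glm::value_ptr(projection));
    // glDraw… calls for the points go here
    glUseProgram(0);
}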
I'm following this tutorial to learn something more about OpenGL and in particular point sprites. But I'm stuck on one of the exercises at the end of the page:
Try to rotate the point sprites 45 degrees by changing the fragment shader.
There are no hints about this sort of thing in the chapter, nor in the previous ones. And I didn't find any documentation on how to do it. These are my vertex and fragment shaders:
Vertex Shader
#version 140
attribute vec2 coord2d;
varying vec4 f_color;
uniform float offset_x;
uniform float scale_x;
uniform float point_size;
void main(void) {
gl_Position = vec4((coord2d.x + offset_x) * scale_x, coord2d.y, 0.0, 1.0);
f_color = vec4(coord2d.xy / 2.0 + 0.5, 1.0, 1.0);
gl_PointSize = point_size;
}
Fragment Shader
#version 140
varying vec4 f_color;
uniform sampler2D texture;
void main(void) {
gl_FragColor = texture2D(texture, gl_PointCoord) * f_color;
}
I thought about using a 2x2 matrix in the FS to rotate the gl_PointCoord, but I have no idea how to fill the matrix to accomplish it. Should I pass it directly to the FS as a uniform?
The traditional method is to pass a matrix to the shader, whether vertex or fragment. If you don't know how to fill in a rotation matrix, Google and Wikipedia can help.
The main thing you're going to run into is the simple fact that a 2D rotation is not enough. gl_PointCoord goes from [0, 1]. A pure rotation matrix rotates around the origin, which is the bottom-left in point-coord space. So you need more than a pure rotation matrix.
You need a 3x3 matrix, which has part rotation and part translation. This matrix should be generated as follows (using GLM for math stuff):
glm::mat4 currMat(1.0f);
currMat = glm::translate(currMat, glm::vec3(0.5f, 0.5f, 0.0f));
currMat = glm::rotate(currMat, angle, glm::vec3(0.0f, 0.0f, 1.0f));
currMat = glm::translate(currMat, glm::vec3(-0.5f, -0.5f, 0.0f));
You then pass currMat to the shader as a 4x4 matrix. Your shader does this:
vec2 texCoord = (rotMatrix * vec4(gl_PointCoord, 0, 1)).xy
gl_FragColor = texture2D(texture, texCoord) * f_color;
I'll leave it as an exercise for you as to how to move the translation from the fourth column into the third, and how to pass it as a 3x3 matrix. Of course, in that case, you'll do vec3(gl_PointCoord, 1) for the matrix multiply.
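For what it's worth, one way that exercise could come out (a sketch I'm adding, not part of the original answer; rotMatrixLoc is a placeholder location for a mat3 uniform named rotMatrix):

// Column-major 3x3: rotation in the first two columns, translation in the third.
float c = cosf(angle), s = sinf(angle);
float rotMat[9] = {
     c,   s,   0.0f,                    // first column
    -s,   c,   0.0f,                    // second column
     0.5f - 0.5f*c + 0.5f*s,            // third column: translation that keeps the
     0.5f - 0.5f*s - 0.5f*c,   1.0f     // rotation centred on (0.5, 0.5)
};
glUniformMatrix3fv(rotMatrixLoc, 1, GL_FALSE, rotMat);

In the fragment shader the lookup then becomes texCoord = (rotMatrix * vec3(gl_PointCoord, 1)).xy, as described above.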
I was stuck on the same problem too, but I found a tutorial that explains how to perform a 2D texture rotation inside the fragment shader by passing only the rotation value (vRotation).
#version 130
uniform sampler2D tex;
varying float vRotation;
void main(void)
{
float mid = 0.5;
vec2 rotated = vec2(cos(vRotation) * (gl_PointCoord.x - mid) + sin(vRotation) * (gl_PointCoord.y - mid) + mid,
cos(vRotation) * (gl_PointCoord.y - mid) - sin(vRotation) * (gl_PointCoord.x - mid) + mid);
vec4 rotatedTexture=texture2D(tex, rotated);
gl_FragColor = gl_Color * rotatedTexture;
}
This method may be slow, but it shows that you have an alternative: performing the 2D texture rotation inside the fragment shader instead of passing a matrix.
Note: vRotation should be in Radians.
You're right - a 2x2 rotation matrix will do what you want.
This page: http://www.cg.info.hiroshima-cu.ac.jp/~miyazaki/knowledge/teche31.html shows how to compute the elements. Note that you will be rotating the texture coordinates, not the vertex positions - the result will probably not be what you're expecting - it will rotate around the 0,0 texture coordinate, for example.
You may also need to multiply the point_size by 2 and shrink the gl_PointCoord by 2 to ensure the whole texture fits into the point sprite when it's rotated. But do that as a second change. Note that a straight scale of the texture coordinates moves them towards the texture-coordinate origin, not the middle of the sprite.
If you use a higher dimension matrix (3x3) then you will be able to combine the offset, scale and rotation into one operation.
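A rough GLSL sketch of the 2x2 idea, shifted so it rotates about the sprite centre rather than (0,0) (the angle uniform is an assumption of mine, not in the original shaders):

#version 140
varying vec4 f_color;
uniform sampler2D texture;
uniform float angle;        // assumed uniform, rotation in radians
void main(void) {
    mat2 rot = mat2(cos(angle), sin(angle),    // first column
                   -sin(angle), cos(angle));   // second column
    vec2 centered = gl_PointCoord - vec2(0.5); // move the rotation centre to the sprite middle
    vec2 rotated  = rot * centered + vec2(0.5);
    gl_FragColor = texture2D(texture, rotated) * f_color;
}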
VC++, OpenGL, SDL
I am wondering if there is a way to achieve smoother shading across a single quad of geometry. Right now the shading looks smooth with my point light; however, the intensity rises along the [/] diagonal subdivision of the face, and the lighting is basically invisible in between vertices.
This is what happens as the light moves from left to right
As I move the light across the surface, it does this consistently: it gets brightest at each vertex and fades from there.
Am I forced to up the subdivision to achieve a smoother, more radial shade? or is there a method around this?
Here are the shaders I am using:
vert
varying vec3 vertex_light_position;
varying vec3 vertex_normal;
void main()
{
vertex_normal = normalize(gl_NormalMatrix * gl_Normal);
vertex_light_position = normalize(gl_LightSource[0].position.xyz);
gl_FrontColor = gl_Color;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
frag
varying vec3 vertex_light_position;
varying vec3 vertex_normal;
void main()
{
float diffuse_value = max(dot(vertex_normal, vertex_light_position), 0.0);
gl_FragColor = gl_Color * diffuse_value;
}
My geometry in case anyone is wondering:
glBegin(GL_QUADS);
glNormal3f(0.0f, 0.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(pos_x, pos_y - size_y, depth);
glTexCoord2f(1.0f, 1.0f); glVertex3f(pos_x + size_x, pos_y - size_y, depth);
glTexCoord2f(1.0f, 0.0f); glVertex3f(pos_x + size_x, pos_y, depth);
glTexCoord2f(0.0f, 0.0f); glVertex3f(pos_x, pos_y, depth);
glEnd();
There are a couple things I see as being possible issues.
Unless I am mistaken, you are using normalize(gl_LightSource[0].position.xyz); to calculate the light vector, but that is based solely on the position of the light, not on the vertex you are operating on. That means the value there will be the same for every vertex and will only change based on the current modelview matrix and light position. I would think that calculating the light vector by doing something like normalize(gl_LightSource[0].position.xyz - (gl_ModelViewMatrix * gl_Vertex).xyz) would be closer to what you would want.
Secondly, you ought to normalize your vectors in the fragment shader as well as in the vertex shader, since the interpolation of two unit vectors is not guaranteed to be a unit vector itself.
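Putting both points together, a sketch of how the adjusted shaders could look (untested, keeping your variable names; vertex_light_position now holds the normalized per-vertex direction towards the light):
vert
varying vec3 vertex_light_position;
varying vec3 vertex_normal;
void main()
{
    vec3 eye_pos = (gl_ModelViewMatrix * gl_Vertex).xyz;   // vertex position in eye space
    vertex_normal = normalize(gl_NormalMatrix * gl_Normal);
    // direction from the vertex towards the light, both in eye space
    vertex_light_position = normalize(gl_LightSource[0].position.xyz - eye_pos);
    gl_FrontColor = gl_Color;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
frag
varying vec3 vertex_light_position;
varying vec3 vertex_normal;
void main()
{
    // re-normalize: interpolated unit vectors are generally no longer unit length
    float diffuse_value = max(dot(normalize(vertex_normal), normalize(vertex_light_position)), 0.0);
    gl_FragColor = gl_Color * diffuse_value;
}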
I think the problem is with the light vector...
I suggest using:
vec3 light_vector = normalize(gl_LightSource[0].position.xyz - vertex_pos);
vertex_pos can be calculated by using:
vec3 vertex_pos = (gl_ModelViewMatrix * gl_Vertex).xyz;
Notice that all the vectors should be in the same space (camera, world, object)
Am I forced to up the subdivision to achieve a smoother, more radial shade? or is there a method around this?
No, you are free to do whatever you want. The only code you need to change is the fragment shader. Try to play with it and see if you get a better result.
For example, you could do this :
diffuse_value = pow(diffuse_value, 3.0);
as explained here.
I'm doing ray casting in the fragment shader. I can think of a couple ways to draw a fullscreen quad for this purpose. Either draw a quad in clip space with the projection matrix set to the identity matrix, or use the geometry shader to turn a point into a triangle strip. The former uses immediate mode, deprecated in OpenGL 3.2. The latter I use out of novelty, but it still uses immediate mode to draw a point.
I'm going to argue that the most efficient approach will be drawing a single "full-screen" triangle. For a triangle to cover the full screen, it needs to be bigger than the actual viewport. In NDC (and also in clip space, if we set w=1), the viewport will always be the [-1,1] square. For a triangle to cover this area completely, two of its sides need to be twice as long as the viewport rectangle, so that the third side crosses the edge of the viewport; hence we can, for example, use the following coordinates (in counter-clockwise order): (-1,-1), (3,-1), (-1,3).
We also do not need to worry about the texcoords. To get the usual normalized [0,1] range across the visible viewport, we just need to make the corresponding texcoords for the vertices twice as big, and the barycentric interpolation will yield exactly the same results for any viewport pixel as when using a quad.
This approach can of course be combined with attribute-less rendering as suggested in demanze's answer:
out vec2 texcoords; // texcoords are in the normalized [0,1] range for the viewport-filling quad part of the triangle
void main() {
vec2 vertices[3]=vec2[3](vec2(-1,-1), vec2(3,-1), vec2(-1, 3));
gl_Position = vec4(vertices[gl_VertexID],0,1);
texcoords = 0.5 * gl_Position.xy + vec2(0.5);
}
Why will a single triangle be more efficient?
This is not about the one saved vertex shader invocation, or the one fewer triangle to handle at the front end. The most significant effect of using a single triangle is that there are fewer fragment shader invocations.
Real GPUs always invoke the fragment shader for 2x2 pixel sized blocks ("quads") as soon as a single pixel of the primitive falls into such a block. This is necessary for calculating the window-space derivative functions (those are also implicitly needed for texture sampling, see this question).
If the primitive does not cover all 4 pixels in that block, the remaining fragment shader invocations will do no useful work (apart from providing the data for the derivative calculations) and will be so-called helper invocations (which can even be queried via the gl_HelperInvocation GLSL built-in). See also Fabian "ryg" Giesen's blog article for more details.
If you render a quad with two triangles, both will have one edge going diagonally across the viewport, and on both triangles, you will generate a lot of useless helper invocations at the diagonal edge. The effect will be worst for a perfectly square viewport (aspect ratio 1). If you draw a single triangle, there will be no such diagonal edge (it lies outside of the viewport and won't concern the rasterizer at all), so there will be no additional helper invocations.
Wait a minute, if the triangle extends across the viewport boundaries, won't it get clipped and actually put more work on the GPU?
If you read the textbook material about graphics pipelines (or even the GL spec), you might get that impression. But real-world GPUs use some different approaches, like guard-band clipping. I won't go into detail here (that would be a topic on its own; have a look at Fabian "ryg" Giesen's fine blog article for details), but the general idea is that the rasterizer will produce fragments only for pixels inside the viewport (or scissor rect) anyway, no matter whether the primitive lies completely inside it or not, so we can simply throw bigger triangles at it if both of the following are true:
a) the triangle only extends beyond the 2D top/bottom/left/right clipping planes (as opposed to the near/far planes in the z dimension, which are trickier to handle, especially because vertices may also lie behind the camera)
b) the actual vertex coordinates (and all intermediate calculation results the rasterizer might be doing on them) are representable in the internal data formats the GPU's hardware rasterizer uses. The rasterizer will use fixed-point data types of implementation-specific width, while vertex coords are 32-bit single-precision floats. (That is basically what defines the size of the guard band.)
Our triangle is only a factor of 3 bigger than the viewport, so we can be very sure that there is no need to clip it at all.
But is it worth it?
Well, the savings on fragment shader invocations are real (especially when you have a complex fragment shader), but the overall effect might be barely measurable in a real-world scenario. On the other hand, the approach is not more complicated than using a full-screen quad and uses less data, so even if it might not make a huge difference, it won't hurt, so why not use it?
Could this approach be used for all sorts of axis-aligned rectangles, not just fullscreen ones?
In theory, you can combine this with the scissor test to draw some arbitrary axis-aligned rectangle (and the scissor test will be very efficient, as it just limits which fragments are produced in the first place; it isn't a real "test" in hardware that discards fragments). However, this requires you to change the scissor parameters for each rectangle you want to draw, which implies a lot of state changes and limits you to a single rectangle per draw call, so doing so won't be a good idea in most scenarios.
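If you did want to try it anyway, the host side could look roughly like this (fullscreenTriangleVAO and the rectangle values are placeholders):

glEnable(GL_SCISSOR_TEST);
glScissor(rect_x, rect_y, rect_w, rect_h);   // window coordinates of the rectangle
glBindVertexArray(fullscreenTriangleVAO);    // the attribute-less triangle setup from above
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisable(GL_SCISSOR_TEST);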
You can send two triangles creating a quad, with their vertex attributes set to -1/1 respectively.
You do not need to multiply them with any matrix in the vertex/fragment shader.
Here are some code samples, simple as they are :)
Vertex Shader:
const vec2 madd=vec2(0.5,0.5);
attribute vec2 vertexIn;
varying vec2 textureCoord;
void main() {
textureCoord = vertexIn.xy*madd+madd; // scale vertex attribute to [0-1] range
gl_Position = vec4(vertexIn.xy,0.0,1.0);
}
Fragment Shader :
varying vec2 textureCoord;
uniform sampler2D t;   // the texture sampled below
void main() {
vec4 color1 = texture2D(t,textureCoord);
gl_FragColor = color1;
}
No need to use a geometry shader, a VBO or any memory at all.
A vertex shader can generate the quad.
layout(location = 0) out vec2 uv;
void main()
{
float x = float(((uint(gl_VertexID) + 2u) / 3u)%2u);
float y = float(((uint(gl_VertexID) + 1u) / 3u)%2u);
gl_Position = vec4(-1.0f + x*2.0f, -1.0f+y*2.0f, 0.0f, 1.0f);
uv = vec2(x, y);
}
Bind an empty VAO. Send a draw call for 6 vertices.
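The host-side part could look roughly like this (emptyVAO is just a placeholder name for a VAO with no attributes enabled):

GLuint emptyVAO = 0;
glGenVertexArrays(1, &emptyVAO);
// ... at draw time:
glBindVertexArray(emptyVAO);
glDrawArrays(GL_TRIANGLES, 0, 6);   // six vertices; gl_VertexID runs 0..5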
To output a fullscreen quad, a geometry shader can be used:
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
out vec2 texcoord;
void main()
{
gl_Position = vec4( 1.0, 1.0, 0.5, 1.0 );
texcoord = vec2( 1.0, 1.0 );
EmitVertex();
gl_Position = vec4(-1.0, 1.0, 0.5, 1.0 );
texcoord = vec2( 0.0, 1.0 );
EmitVertex();
gl_Position = vec4( 1.0,-1.0, 0.5, 1.0 );
texcoord = vec2( 1.0, 0.0 );
EmitVertex();
gl_Position = vec4(-1.0,-1.0, 0.5, 1.0 );
texcoord = vec2( 0.0, 0.0 );
EmitVertex();
EndPrimitive();
}
Vertex shader is just empty:
#version 330 core
void main()
{
}
To use this shader you can issue a dummy draw command with an empty VBO:
glDrawArrays(GL_POINTS, 0, 1);
This is similar to the answer by demanze, but I would argue it's easier to understand. Also, this draws with only 4 vertices by using TRIANGLE_STRIP.
#version 300 es
out vec2 textureCoords;
void main() {
const vec2 positions[4] = vec2[](
vec2(-1, -1),
vec2(+1, -1),
vec2(-1, +1),
vec2(+1, +1)
);
const vec2 coords[4] = vec2[](
vec2(0, 0),
vec2(1, 0),
vec2(0, 1),
vec2(1, 1)
);
textureCoords = coords[gl_VertexID];
gl_Position = vec4(positions[gl_VertexID], 0.0, 1.0);
}
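The matching draw call would then be something like this (again assuming an empty, attribute-less VAO is bound):

glBindVertexArray(emptyVAO);              // no vertex attributes needed
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);    // gl_VertexID 0..3 indexes the arrays above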
The following comes from the draw function of the class that draws FBO textures to a screen-aligned quad.
Gl.glUseProgram(shad);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, vbo);
Gl.glEnableVertexAttribArray(0);
Gl.glEnableVertexAttribArray(1);
Gl.glVertexAttribPointer(0, 3, Gl.GL_FLOAT, Gl.GL_FALSE, 0, voff);
Gl.glVertexAttribPointer(1, 2, Gl.GL_FLOAT, Gl.GL_FALSE, 0, coff);
Gl.glActiveTexture(Gl.GL_TEXTURE0);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, fboc);
Gl.glUniform1i(tileLoc, 0);
Gl.glDrawArrays(Gl.GL_QUADS, 0, 4);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, 0);
Gl.glBindBuffer(Gl.GL_ARRAY_BUFFER, 0);
Gl.glUseProgram(0);
The quad itself and the texture coords come from:
private float[] v=new float[]{ -1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
-1.0f, 1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f
};
The binding and setup of the VBOs I leave to you.
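If you want a starting point, the buffer setup could look roughly like this in plain C-style GL calls (adapt to the Tao-style bindings used above; v is the 20-float array above, and voff/coff are the byte offsets of the positions and texture coords inside it):

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 20 * sizeof(float), v, GL_STATIC_DRAW); // 12 position + 8 texcoord floats
glBindBuffer(GL_ARRAY_BUFFER, 0);

const void *voff = (const void *)0;                    // positions start at the beginning
const void *coff = (const void *)(12 * sizeof(float)); // texcoords follow the 4 * 3 position floats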
The vertex shader:
#version 330
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 coord;
out vec2 coords;
void main() {
coords=coord.st;
gl_Position=vec4(pos, 1.0);
}
Because the position is raw, that is, not multiplied by any matrix, the -1,-1 to 1,1 coordinates of the quad fit the viewport exactly. Look for Alfonse's tutorial linked off any of his posts on opengl.org.