I'm trying to write a simple maze game without using any deprecated OpenGL API (i.e. no immediate mode). I'm using one Vertex Buffer Object for each tile in my maze, where a tile is essentially a combination of four Vertex objects:
class Vertex {
public:
    GLfloat x, y, z;   // coords
    GLfloat tx, ty;    // texture coords
    Vertex();
};
These vertices are stored in a VBO like this:
void initVBO()
{
    Vertex vertices[4];

    vertices[0].x = -0.5f; vertices[0].y = -0.5f; vertices[0].z = 0.0f;
    vertices[0].tx = 0.0f; vertices[0].ty = 1.0f;

    vertices[1].x = -0.5f; vertices[1].y =  0.5f; vertices[1].z = 0.0f;
    vertices[1].tx = 0.0f; vertices[1].ty = 0.0f;

    vertices[2].x =  0.5f; vertices[2].y =  0.5f; vertices[2].z = 0.0f;
    vertices[2].tx = 1.0f; vertices[2].ty = 0.0f;

    vertices[3].x =  0.5f; vertices[3].y = -0.5f; vertices[3].z = 0.0f;
    vertices[3].tx = 1.0f; vertices[3].ty = 1.0f;

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 4, &vertices[0].x, GL_STATIC_DRAW);

    GLushort indices[4] = { 0, 1, 2, 3 };

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort) * 4, indices, GL_STATIC_DRAW);
}
Now, I'm stuck on the camera movement. In a previous version of my project, I used glRotatef and glTranslatef to translate and rotate the scene, and then I rendered every tile using glBegin()/glEnd() mode. But these two functions are now deprecated, and I didn't find any tutorial about creating a camera in a context using only VBOs. What is the correct way to proceed? Should I loop over every tile, modifying the position of its vertices according to the new camera position?
But these two functions are now deprecated, and I didn't find any tutorial about creating a camera in a context using only VBOs.
VBOs have nothing to do with this.
Immediate mode and the matrix stack are two entirely separate things. VBOs deal with getting geometry data to the renderer; the matrix stack deals with getting the transformations there. Only geometry data is affected by VBOs.
As for your question: you calculate the matrices yourself and pass them to the shader as uniforms. It's also important to understand that OpenGL's matrix functions were never GPU accelerated (except on one single machine, SGI's Onyx), so they never offered a performance gain in the first place. In fact, using OpenGL's matrix stack tended to hurt overall performance, because it carried out redundant operations that have to be done somewhere else in the program anyway.
For a simple matrix math library look at my linmath.h http://github.com/datenwolf/linmath.h
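As a rough illustration of "calculate the matrix yourself and pass it by uniform", here is a minimal sketch. It assumes your vertex shader declares a uniform mat4 named "view", and the names shaderProgram, camX, camZ and camAngle are placeholders for your own variables; the tile VBOs themselves never change.

    #include <cmath>
    #include <GL/glew.h>

    // Build a column-major 4x4 view matrix for a camera at (camX, 0, camZ),
    // rotated by camAngle radians around the Y axis. The view matrix is the
    // inverse of the camera transform: rotate(-angle) * translate(-position).
    static void buildViewMatrix(GLfloat out[16], float camX, float camZ, float camAngle)
    {
        const float c = std::cos(-camAngle);
        const float s = std::sin(-camAngle);
        out[0] = c;    out[4] = 0.0f; out[8]  = s;    out[12] = -(c * camX + s * camZ);
        out[1] = 0.0f; out[5] = 1.0f; out[9]  = 0.0f; out[13] = 0.0f;
        out[2] = -s;   out[6] = 0.0f; out[10] = c;    out[14] = s * camX - c * camZ;
        out[3] = 0.0f; out[7] = 0.0f; out[11] = 0.0f; out[15] = 1.0f;
    }

    // Each frame: upload the matrix; no per-tile vertex editing is needed.
    void uploadCamera(GLuint shaderProgram, float camX, float camZ, float camAngle)
    {
        GLfloat view[16];
        buildViewMatrix(view, camX, camZ, camAngle);
        glUseProgram(shaderProgram);
        GLint loc = glGetUniformLocation(shaderProgram, "view"); // hypothetical uniform name
        glUniformMatrix4fv(loc, 1, GL_FALSE, view);
        // ...then bind the VAO/VBOs and call glDrawElements() as usual.
    }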
I will add to datenwolf's answer. I am assuming that only the shader pipeline is available to you.
Requirements
In OpenGL 4.0+, OpenGL does not do any rendering for you whatsoever, as it moves away from the fixed-function pipeline. If you are rendering your geometry without a shader right now, you are using the deprecated pipeline. Getting up and running without some base framework will be difficult (not impossible, but I would recommend using one). As a start, I would recommend GLUT (which creates a window for you and has basic callbacks for the idle function and input), GLEW (to set up the rendering context) and GLTools (matrix stack, generic shaders and a shader manager for a quick setup, so that you can at least start rendering).
Setup
I will be giving the important pieces here, which you can then piece together. At this point I am assuming you have GLUT set up properly (search for how to set it up), that you are able to register the update loop with it, and that you can create a window (the loop is what calls one of your selected functions every frame; note that this must be a free function, not a non-static member function). Refer to the link above for help on this.
First, initialize GLEW by calling glewInit().
Set up your scene. This includes using the GLBatch class (from GLTools) to create a set of vertices to render as triangles, and initializing the GLShaderManager class (also from GLTools) and its stock shaders by calling its InitializeStockShaders() function.
In your idle loop, call the shader manager's UseStockShader() function to start a new batch, then call the Draw() function on your vertex batch. For a complete overview of GLTools, go here.
Don't forget to clear the window before rendering and to swap the buffers after rendering, by calling glClear() and glutSwapBuffers() respectively.
Note that most of the functions I gave above accept arguments. You should be able to figure those out by looking at the respective library's documentations.
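To make those steps concrete, here is a sketch of roughly what the setup and render callbacks end up looking like. GLBatch, GLShaderManager and the stock-shader names are taken from GLTools as described in the SuperBible, so double-check the exact signatures against the headers:

    // Created once at startup, after glutInit()/glutCreateWindow() and glewInit().
    GLShaderManager shaderManager;   // GLTools shader manager
    GLBatch         triangleBatch;   // GLTools vertex batch

    void setupScene()
    {
        shaderManager.InitializeStockShaders();

        // A single triangle, all coordinates inside [-1, 1].
        GLfloat verts[] = { -0.5f, -0.5f, 0.0f,
                             0.5f, -0.5f, 0.0f,
                             0.0f,  0.5f, 0.0f };
        triangleBatch.Begin(GL_TRIANGLES, 3);
        triangleBatch.CopyVertexData3f(verts);
        triangleBatch.End();
    }

    void renderScene()   // registered with glutDisplayFunc()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        GLfloat red[] = { 1.0f, 0.0f, 0.0f, 1.0f };
        shaderManager.UseStockShader(GLT_SHADER_IDENTITY, red);  // no transform yet
        triangleBatch.Draw();

        glutSwapBuffers();
    }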
MVP Matrix (EDIT: Forgot to add this section)
OpenGL only renders what lies inside the [-1, 1] coordinate cube, looking down the z-axis. It has no notion of a camera and does not care about anything that falls outside these coordinates. The model-view-projection matrix is what transforms your scene to fit these coordinates.
As a starting point, don't worry about this until you have something rendered on the screen (make sure all the coordinates you give your vertex batch are less than 1). Once you do, set up your projection matrix (the default projection is orthographic) using the GLFrustum class in GLTools. You get your projection matrix from this class, which you then multiply with your model-view matrix. The model-view matrix is a combination of the model's transformation matrix and your camera's transformation (remember, there is no camera, so essentially you move the scene instead). Once you have multiplied them all into one matrix, you pass it to the shader using the UseStockShader() function.
Use a stock shader in GLTools (e.g. GLT_SHADER_FLAT) then start creating your own.
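A sketch of how the projection and model-view pieces might be combined with GLTools follows; the class and method names here are recalled from the SuperBible and should be verified against the library headers:

    GLFrustum     viewFrustum;       // builds the projection matrix
    GLMatrixStack modelViewMatrix;   // accumulates "camera" + model transforms

    void changeSize(int w, int h)    // registered with glutReshapeFunc()
    {
        glViewport(0, 0, w, h);
        viewFrustum.SetPerspective(45.0f, float(w) / float(h), 1.0f, 100.0f);
    }

    void renderScene()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        modelViewMatrix.LoadIdentity();
        modelViewMatrix.Translate(0.0f, 0.0f, -5.0f);              // "camera": push the scene back
        modelViewMatrix.Rotate(rotationAngle, 0.0f, 1.0f, 0.0f);   // rotationAngle is a placeholder

        // Combine model-view and projection into the single matrix the stock shader expects.
        M3DMatrix44f mvp;
        m3dMatrixMultiply44(mvp, viewFrustum.GetProjectionMatrix(),
                            modelViewMatrix.GetMatrix());

        GLfloat green[] = { 0.0f, 1.0f, 0.0f, 1.0f };
        shaderManager.UseStockShader(GLT_SHADER_FLAT, mvp, green);
        triangleBatch.Draw();

        glutSwapBuffers();
    }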
Reference
Lastly, I would highly recommend getting this book: OpenGL SuperBible, Comprehensive Tutorial and Reference (Fifth edition - make sure it is this edition)
If you really want to stick with the newest OpenGL API, where lots of features were removed in favor of a programmable pipeline (OpenGL 4 and OpenGL ES 2), you will have to write the vertex and fragment shaders yourself and implement the transformation stuff there. You will have to manually create all the attributes you use in the shader, specifically the coords and texture coords in your example. You will also need two uniform variables, one for the model-view matrix and one for the projection matrix, if you want to mimic the behavior of the old fixed-functionality OpenGL.
The rotations/translations you are used to are matrix operations. During the vertex transformation stage of the pipeline, now performed by the vertex shader you supply, you must multiply a 4x4 transformation matrix by the vertex position (4 coordinates, interpreted as a 4x1 matrix, where the 4th coordinate is usually 1 if you are not doing anything too fancy). The resulting vector will be at the correct relative position according to that transformation. Then you multiply the projection matrix by that vector and output the result.
You can learn how all those matrices are built by looking at the documentation of glRotate, glTranslate and gluPerspective. Remember that matrix multiplication is non-commutative, so the order in which you multiply them matters (this is exactly why the order in which you call glRotate and glTranslate also matters).
About learning GLSL and how to use shaders: I learned from the tutorials here, but they relate to OpenGL 1.4 and 2 and are now very old. The main difference is that the predefined input variables to the vertex shader, such as gl_Vertex and gl_ModelViewMatrix, no longer exist, and you have to create them yourself.
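For instance, a minimal GLSL 1.30-style vertex shader (kept here as a C++ string literal; the attribute and uniform names are made up for the sketch) that recreates by hand what gl_Vertex and gl_ModelViewMatrix used to provide:

    // You now declare the vertex inputs and the matrices yourself
    // and do the multiplication explicitly in the shader.
    const char* vertexShaderSrc = R"(
        #version 130

        in vec3 position;        // replaces gl_Vertex
        in vec2 texCoord;        // replaces gl_MultiTexCoord0

        uniform mat4 modelView;  // replaces gl_ModelViewMatrix
        uniform mat4 projection; // replaces gl_ProjectionMatrix

        out vec2 vTexCoord;

        void main()
        {
            vTexCoord   = texCoord;
            gl_Position = projection * modelView * vec4(position, 1.0);
        }
    )";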
Related
I am learning GLSL and, in general, some OpenGL, and I am having some trouble with vertex movement and management.
I am good with camera rotations and translation but now I need to move a few vertices and have them stay in their new positions.
What I would like to do is move them using the vertex shader, but also not keep track of their new positions through matrices (as I need to move them around independently, and it would be very pricey in terms of memory and computing power to store that many matrices).
If there were a way to change their position values in the VBO directly from the vertex shader, that would be optimal.
Is there a way to do that? What other ways do you suggest?
Thanks in advance.
PS I am using GLSL version 1.30
While it's possible to write values from a shader into a buffer and later read them back on the CPU/client side (e.g. by using glReadPixels()), I don't think that is what you need here.
You can move a group of vertices, all with the same movement, with a single matrix. Why don't you do that on the CPU and store the results, updating their GL buffer when needed? (The VAO remains unchanged if you just update the buffer.) Once they are moved, you don't need that matrix anymore, right? Or, if you want to be able to undo the movement, then yes, you also need to store the matrix.
It seems that transform feedback is exactly what you need.
What I would like to do is move them using the vertex shader but also not keep track of their new positions through matrices
If I understand you correctly, what you want is to send some vertices to the GPU and then have the vertex shader move them. You can't do that directly, because a vertex shader is only able to read from the vertex buffer; it isn't able to write back to it.
it would be very pricey in terms of memory and computing power to store that many matrices.
Considering:
I am good with camera rotations and translation
Then it wouldn't be expensive at all, considering that you already have a view and a projection matrix for the camera and viewport. Having a model matrix contain the translation, rotation and scaling of each object isn't anywhere near a bottleneck.
In the vertex shader you'd simply have:
uniform mat4 mvp; // model view projection matrix
...
gl_Position = mvp * vec4(position, 1.0);
On the CPU side of things you'd do:
mvp = projection * view * model;
GLint mvpLocation = glGetUniformLocation(shaderGeometryPass, "mvp");
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, (const GLfloat*)&mvp);
If this gives you performance issues then the problem lies elsewhere.
If you really want to "save" which ever changes you make on the GPU side of things, then you'd have to look into Shader Storage Buffer Object and/or Transform Feedback
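If you do go the transform feedback route (available since OpenGL 3.0), the outline is roughly this; program, feedbackBuffer and vertexCount are placeholders, and the vertex shader is assumed to declare an out variable called newPosition:

    // One-time setup: tell the linker which shader output to capture.
    const GLchar* varyings[] = { "newPosition" };
    glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(program);   // must (re)link after declaring the varyings

    // Per update: capture the transformed vertices into feedbackBuffer.
    glUseProgram(program);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackBuffer);
    glEnable(GL_RASTERIZER_DISCARD);          // we only want the vertex stage to run
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, vertexCount);  // runs the vertex shader over the source VBO
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);

    // feedbackBuffer now holds the moved positions; swap it with the source
    // buffer (ping-pong) and draw from it next frame.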
I am attempting to load models exported from Blender into OpenGL. In particular, I followed the source code from this tutorial to help me get started. Because the loader is fairly simple, it only reads in the vertex coordinates and face indices, ignoring the normals and texture coordinates.
It then calculates the normal for each face:
float coord1[3] = { Faces_Triangles[triangle_index], Faces_Triangles[triangle_index+1],Faces_Triangles[triangle_index+2]};
float coord2[3] = {Faces_Triangles[triangle_index+3],Faces_Triangles[triangle_index+4],Faces_Triangles[triangle_index+5]};
float coord3[3] = {Faces_Triangles[triangle_index+6],Faces_Triangles[triangle_index+7],Faces_Triangles[triangle_index+8]};
float *norm = this->calculateNormal( coord1, coord2, coord3 );
float* Model_OBJ::calculateNormal( float *coord1, float *coord2, float *coord3 )
{
    /* calculate Vector1 and Vector2 */
    float va[3], vb[3], vr[3], val;
    va[0] = coord1[0] - coord2[0];
    va[1] = coord1[1] - coord2[1];
    va[2] = coord1[2] - coord2[2];

    vb[0] = coord1[0] - coord3[0];
    vb[1] = coord1[1] - coord3[1];
    vb[2] = coord1[2] - coord3[2];

    /* cross product */
    vr[0] = va[1] * vb[2] - vb[1] * va[2];
    vr[1] = vb[0] * va[2] - va[0] * vb[2];
    vr[2] = va[0] * vb[1] - vb[0] * va[1];

    /* normalization factor */
    val = sqrt( vr[0]*vr[0] + vr[1]*vr[1] + vr[2]*vr[2] );

    /* note: don't return a pointer to a local array (it would dangle);
       allocate the result instead -- the caller owns and must delete[] it */
    float *norm = new float[3];
    norm[0] = vr[0]/val;
    norm[1] = vr[1]/val;
    norm[2] = vr[2]/val;
    return norm;
}
I have 2 questions.
How do I know if the normal is facing inwards or outwards? Is there some ordering of the vertices in each row of the .obj file that gives an indication of how to calculate the normal?
In the initialization function, he uses GL_SMOOTH. Is this incorrect, since I need to provide normals for each vertex if using GL_SMOOTH instead of GL_FLAT?
Question 1
glFrontFace determines the winding order.
Winding order means the order in which a set of vertices has to appear for the face's normal to be considered positive. Consider a triangle whose vertices are defined clockwise. If we told OpenGL glFrontFace(GL_CW) (that is, clockwise means front face), then the normal would essentially be sticking right out of the screen towards you in order to be considered "outward".
On a side note, counter-clockwise is the default and what you should stick with.
No matter what, you really should define normals, especially if you want to do any lighting in your scene, since they are used in the lighting calculations. glFrontFace just lets you tell OpenGL which way you want it to interpret the front of a polygon.
In the example above, if we told OpenGL that we define faces counter-clockwise and also enabled face culling (glEnable(GL_CULL_FACE)) with glCullFace set to GL_BACK, then our triangle wouldn't show up, because we would be looking at the back of it and we told OpenGL not to show the back of polygons.
You can read more about face culling here: Face Culling.
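In code, the winding and culling state discussed above is just a few calls (the values shown are the defaults made explicit):

    glFrontFace(GL_CCW);      // counter-clockwise vertex order is treated as the front face
    glEnable(GL_CULL_FACE);   // turn face culling on
    glCullFace(GL_BACK);      // discard back-facing polygons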
Wavefront .obj has support for declaring normals in a file if you don't want to create them yourself. Just make sure your exporter adds them.
Additionally, the Wavefront format lets each vertex have its own normal defined:
f v1//vn1 v2//vn2 v3//vn3 ...
Where vN is a vertex of the face f and vnN is that vertex's normal. By providing a normal for each vertex, you achieve a smoother-looking surface than you would by defining one normal per face or by setting all of the normals of a face's vertices to be the same. Take a look at this question to see the difference it makes on a sphere: OpenGL: why do I have to set a normal with glNormal?
If your .obj file doesn't have normals defined, I would use the face definition order and cross two edges of the defined face. Consider the method used here: Calculating a Surface Normal
Edit
I think I may have been a little confusing. The front face of a polygon is only slightly related to its normals. Normals are really only used for lighting calculations. You don't have to have them, but they are one of the big variables used in calculating how lit your object is.
I am explaining the "front-faced-ness" of a polygon at the same time because it sort of makes sense, when talking about convex polygons, that your normal would stick out of the "front" of your triangle with respect to the shape you are making.
If you created a huge cave, or if your camera were to mostly reside inside of some concave shape, then it would make sense to have your normals point inwards since your light sources are probably going to want to bounce off of the inside of your shape.
Question 2
GL_SMOOTH is one of the shading models you can select with glShadeModel.
GL_SMOOTH means smooth shading, where color is interpolated between the vertices, whereas GL_FLAT means flat shading, where only one color is used per face. Typically you'll use the default value, GL_SMOOTH.
You don't have to define normals for each vertex in either case. However, if you want GL_SMOOTH to look good, you'll probably want to, since it interpolates between the vertices as it renders rather than picking the properties of just one vertex.
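For reference, a tiny fixed-function sketch of that choice (immediate mode is used here purely for brevity):

    glShadeModel(GL_SMOOTH);           // interpolate lighting/color across each face
    // glShadeModel(GL_FLAT);          // or: one flat color per face

    glBegin(GL_TRIANGLES);
        glNormal3f(0.0f, 0.0f, 1.0f);  // per-vertex normal used by the lighting calculation
        glVertex3f(-0.5f, -0.5f, 0.0f);
        glNormal3f(0.0f, 0.0f, 1.0f);
        glVertex3f( 0.5f, -0.5f, 0.0f);
        glNormal3f(0.0f, 0.0f, 1.0f);
        glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();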
Also, bear in mind that all of this goes out the window whenever you leave the fixed-function pipeline and start using shaders.
I'm migrating our graphics engine from the old fixed-pipeline functions to the programmable pipeline. Our simplest model is just a collection of points in space, where each point can be represented by different shapes, one of these being a cube.
I'm basing my code off the cube example from the OpenGL superbible.
In this example the cubes are placed at somewhat random places, whereas I will have a fixed list of points in space. I'm wondering if there is a way to pass that list to my shader so that a cube is drawn at each point, versus looping through the list and calling glDrawElements each time. Is that even worth the trouble (performance wise)?
PS we are limited to OpenGL 3.3 functionality.
Is that even worth the trouble (performance wise)?
Probably yes, but try to profile nonetheless.
What you are looking for is instanced rendering, take a look at glDrawElementsInstanced and glVertexAttribDivisor.
What you want to do is store the 8 vertices of a generic cube (centered on the origin) in one buffer, and also store the coordinates of the center of each cube in another vertex attribute buffer.
Then you can use glDrawElementsInstanced to draw N cubes taking the vertices from the first buffer, and translating them in the shader using the specific position stored in the second buffer.
Something like this:
glVertexAttribPointer( vertexPositionIndex, /** Blah .. */ );
glVertexAttribPointer( cubePositionIndex, /** Blah .. */ );
glVertexAttribDivisor( cubePositionIndex, 1 ); // Advance one vertex attribute per instance
glDrawElementsInstanced( GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, indices, NumberOfCubes );
In your vertex shader you need two attributes:
in vec3 vertexPosition; // the coordinates of a vertex of the generic cube
in vec3 cubePosition;   // the coordinates of the center of the specific cube being rendered
// ...
vec3 vertex = vertexPosition + cubePosition;
Obviously you can also have a buffer to store the size of each cube, or another one for its orientation; the idea remains the same.
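Putting that together, here is a rough sketch of the buffer setup and the instanced draw call (cubeVerts, cubeIndices, instancePositions, NumberOfCubes and the attribute indices are placeholders):

    GLuint vao, vertexVBO, instanceVBO, ibo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    // Buffer 0: the 8 vertices of a unit cube centered on the origin.
    glGenBuffers(1, &vertexVBO);
    glBindBuffer(GL_ARRAY_BUFFER, vertexVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVerts), cubeVerts, GL_STATIC_DRAW);
    glEnableVertexAttribArray(vertexPositionIndex);
    glVertexAttribPointer(vertexPositionIndex, 3, GL_FLOAT, GL_FALSE, 0, 0);

    // Buffer 1: one vec3 center per cube, advanced once per instance.
    glGenBuffers(1, &instanceVBO);
    glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
    glBufferData(GL_ARRAY_BUFFER, NumberOfCubes * 3 * sizeof(GLfloat),
                 instancePositions, GL_STATIC_DRAW);
    glEnableVertexAttribArray(cubePositionIndex);
    glVertexAttribPointer(cubePositionIndex, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glVertexAttribDivisor(cubePositionIndex, 1);

    // Index buffer: 36 indices (12 triangles) describing the cube.
    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(cubeIndices), cubeIndices, GL_STATIC_DRAW);

    // One draw call renders every cube.
    glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, 0, NumberOfCubes);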
In your example every cube uses its own model matrix per frame.
If you want to keep that, you need multiple glDrawElements calls.
If some cubes don't move (i.e. don't need a per-frame model matrix), you should combine those cubes into one VBO.
I've got this not-so-small-anymore tile-based game, which is my first real OpenGL project. I want to render every tile as a 3D object. So at first I created some objects, like a cube and a sphere, provided them with vertex normals and rendered them in immediate mode with flat shading. But since I've got around 10,000 objects per level, it was a bit slow. So I put my vertices and normals into VBOs.
That's where I encountered the first problem: before using VBOs I just push()ed and pop()ed matrices for every object and used glTranslate/glRotate to place them in my scene. But when I did the same with VBOs, the lighting started to behave strangely. Instead of a fixed light position behind the camera, the light seemed to rotate with my objects. When moving 180 degrees around them I could see only a shadow.
So I did some research. I could not find any answer to my specific problem, but I read that instead of using glTranslate/glRotate one should implement shaders and provide them with uniform matrices.
I thought "perhaps that could fix my problem too" and implemented a first small vertex shader program which only stretched my objects a bit, just to see if I could get a shader to work before focusing on the details.
void main(void)
{
    vec4 v = gl_Vertex;
    v.x = v.x * 0.5;
    v.y = v.y * 0.5;
    gl_Position = gl_ModelViewProjectionMatrix * v;
}
Well, my objects get stretched - but now OpenGL's flat shading is broken. I just get white shades. And I can't find any helpful information. So I have a few questions:
Can I only use one shader at a time, and is OpenGL's built-in shading turned off when I use my own shader? So do I have to implement flat shading myself?
What about my vertex normals? I read somewhere that there is something like a normal matrix. Perhaps I have to apply operations to my normals as well when modifying the vertices?
That your lighting gets messed up by matrix operations means that your calls to glLightfv(..., GL_POSITION, ...) happen with the wrong matrix state in place (not the OpenGL context, but the state of the modelview matrix at that moment); the light position you set is transformed by whatever modelview matrix is current at that time.
Well, my objects get stretched - but now OpenGL's flat shading is broken. I just get white shades
I think you mean Gouraud shading (flat shading means something different). The thing is: if you're using a vertex shader, you must do everything the fixed-function pipeline did, and that includes the lighting calculation. Lighthouse3D has a nice tutorial http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/ as does Nicol Bolas: http://arcsynthesis.org/gltut/Illumination/Illumination.html
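To make "you must do the lighting yourself" concrete, a stripped-down per-vertex diffuse shader might look like this (GLSL 1.20 in a C++ string; the uniform names are invented, and the normal matrix, i.e. the inverse-transpose of the model-view's upper 3x3, is assumed to be computed and uploaded by the application):

    const char* litVertexShaderSrc = R"(
        #version 120

        uniform mat4 modelViewMatrix;
        uniform mat4 projectionMatrix;
        uniform mat3 normalMatrix;   // inverse-transpose of the model-view's upper 3x3
        uniform vec3 lightDirEye;    // light direction in eye space, already normalized

        varying vec3 diffuse;

        void main()
        {
            vec3  normalEye = normalize(normalMatrix * gl_Normal);
            float nDotL     = max(dot(normalEye, lightDirEye), 0.0);
            diffuse         = vec3(nDotL);  // modulate your material/texture color by this
            gl_Position     = projectionMatrix * modelViewMatrix * gl_Vertex;
        }
    )";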
I have a bit of experience writing OpenGL 2 applications and want to learn OpenGL 3. For this I've bought the Addison-Wesley "Red Book" and "Orange Book" (GLSL), which describe the deprecation of the fixed functionality and the new programmable pipeline (shaders). But what I can't get a grasp of is how to construct a scene with multiple objects without using the deprecated translate*, rotate* and scale* functions.
What I used to do in OGL2 was "move about" in 3D space using the translate and rotate functions, and create the objects in local coordinates where I wanted them using glBegin ... glEnd. In OGL3 these functions are all deprecated and, as I understand it, replaced by shaders. But I can't call a shader program for each and every object I make, can I? Wouldn't this affect all the other objects too?
I'm not sure if I've explained my problem satisfactorily, but the core of it is how to program a scene with multiple objects defined in local coordinates in OpenGL 3.1. All the beginner tutorials I've found use only a single object and don't have/solve this problem.
Edit: Imagine you want two spinning cubes. It would be a pain manually modifying each vertex coordinate, and you can't simply modify the modelview-matrix, because that would rather spin the camera around two static cubes...
Let's start with the basics.
Usually, you want to transform your local triangle vertices through the following steps:
local-space coords -> world-space coords -> view-space coords -> clip-space coords
In standard GL, the first 2 transforms are done through GL_MODELVIEW_MATRIX, the 3rd is done through GL_PROJECTION_MATRIX
These model-view transformations, for the many interesting transforms that we usually want to apply (say, translate, scale and rotate, for example), happen to be expressible as vector-matrix multiplication when we represent vertices in homogeneous coordinates. Typically, the vertex V = (x, y, z) is represented in this system as (x, y, z, 1).
Ok. Say we want to transform a vertex V_local through a translation, then a rotation, then a translation. Each transform can be represented as a matrix*, let's call them T1, R1, T2.
We want to apply the transform to each vertex: V_view = V_local * T1 * R1 * T2. Matrix multiplication being associative, we can compute once and for all M = T1 * R1 * T2.
That way, we only need to pass down M to the vertex program, and compute V_view = V_local * M. In the end, a typical vertex shader multiplies the vertex position by a single matrix. All the work to compute that one matrix is how you move your object from local space to the clip space.
Ok... I glanced over a number of important details.
First, what I described so far only really covers the transformation we usually want to do up to the view space, not the clip space. However, the hardware expects the output position of the vertex shader to be represented in that special clip-space. It's hard to explain clip-space coordinates without significant math, so I will leave that out, but the important bit is that the transformation that brings the vertices to that clip-space can usually be expressed as the same type of matrix multiplication. This is what the old gluPerspective, glFrustum and glOrtho compute.
Second, this is what you apply to vertex positions. The math to transform normals is somewhat different. That's because you want the normal to stay perpendicular to the surface after transformation (for reference, it requires a multiplication by the inverse-transpose of the model-view in the general case, but that can be simplified in many cases)
Third, you never send 4-D coordinates to the vertex shader. In general you pass 3-D ones. OpenGL will expand those 3-D coordinates (or 2-D, by the way) to 4-D ones so that the vertex shader does not have to add the extra coordinate: it appends 1 as the w coordinate of each vertex.
So... to put all that back together, for each object, you need to compute those magic M matrices based on all the transforms that you want to apply to the object. Inside the shader, you then have to multiply each vertex position by that matrix and pass that to the vertex shader Position output. Typical code is more or less (this is using old nomenclature):
uniform mat4 MVP;   // computed on the CPU and passed in as a uniform
gl_Position = MVP * gl_Vertex;
* the actual matrices can be found on the web, notably on the man pages for each of those functions: rotate, translate, scale, perspective, ortho
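Tying this back to the "two spinning cubes" edit: you keep a single shader program and upload a different M per object right before its draw call. A sketch, where cubeVAO, shaderProgram, projection, view and the small matrix helpers (Mat4, translate, rotateY) are all placeholders for whatever math code you use:

    glUseProgram(shaderProgram);                     // one program serves both cubes
    GLint mvpLoc = glGetUniformLocation(shaderProgram, "MVP");
    glBindVertexArray(cubeVAO);                      // same cube geometry for both

    for (int i = 0; i < 2; ++i)
    {
        // Each cube gets its own model matrix: its own position and spin angle.
        Mat4 model = translate(cubePos[i]) * rotateY(spinAngle[i]);
        Mat4 mvp   = projection * view * model;      // composed once per object on the CPU

        glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvp.data());
        glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, 0);
    }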
Those functions are deprecated, but they are technically still perfectly functional and will indeed compile. So you can certainly still use glTranslatef(...) and friends.
HOWEVER, this tutorial has a good explanation of how the new shaders and so on work, AND for multiple objects in space.
You can create any number of vertex arrays, bind them into that many VAO objects, and render the scene from there with shaders and so on. It's easier for you to just read it; it is a really good read for grasping the new concepts.
Also, the OpenGL 'Red Book' as it is called has a new release - The Official Guide to Learning OpenGL, Versions 3.0 and 3.1. It includes 'Discussion of OpenGL’s deprecation mechanism and how to verify your programs for future versions of OpenGL'.
I hope that's of some assistance!