I'm required to draw my name using triangles. I understand how to handle shaders. I am just confused about how to actually draw the objects and connect them to make a letter.
I've been given some code to work with:
#include "Angel.h"
const int NumPoints = 50000;
/*This function initializes an array of 3d vectors
and sends it to the graphics card along with shaders
properly connected to them.*/
void
init( void )
{
vec3 points[NumPoints];
// Specify the vertices for a triangle
vec3 vertices[] = {
vec3( -1.0, -1.0, 0.0 ),
vec3( 0.0, 1.0, 0.0 ),
vec3( 1.0, -1.0, 0.0 )
};
// Select an initial point (here one of the triangle's vertices)
points[0] = vec3( 0.0, 1.0, 0.0 );
// compute and store NumPoints - 1 new points
for ( int i = 1; i < NumPoints; ++i ) {
int j = rand() % 3; // pick a vertex from the triangle at random
// Compute the point halfway between the selected vertex
// and the previous point
points[i] = ( points[i - 1] + vertices[j] ) / 2.0;
}
// Create a vertex array object
GLuint vao; // just an integer name the graphics card recognizes
glGenVertexArrays( 1, &vao ); // generate 1 vertex array object
glBindVertexArray( vao ); // make it the active VAO
// Create and initialize a buffer object; this sends the data to the graphics card
GLuint buffer;
glGenBuffers( 1, &buffer );
glBindBuffer( GL_ARRAY_BUFFER, buffer );
glBufferData( GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW ); // upload the whole array; GL_STATIC_DRAW hints that the data won't change
// Load shaders and use the resulting shader program
GLuint program = InitShader("simpleShader - Copy.vert", "simpleShader - Copy.frag");
// make these shaders the current shaders
glUseProgram( program );
// Initialize the vertex position attribute from the vertex shader
GLuint loc = glGetAttribLocation( program, "vPosition" ); // find the attribute's location in the shader program
glEnableVertexAttribArray( loc );
glVertexAttribPointer( loc, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0) );
glClearColor( 0.5, 0.5, 0.5, 1.0 ); // gray background
}
//----------------------------------------------------------------------------
/* This function handles the display and it is automatically called by GLUT
once it is declared as the display function. The application should not
call it directly.
*/
void
display( void )
{
glClear( GL_COLOR_BUFFER_BIT ); // clear the window
glDrawArrays( GL_POINTS, 0, NumPoints ); // draw the points
glFlush(); // flush the buffer
}
//----------------------------------------------------------------------------
/* This function handles the keyboard and it is called by GLUT once it is
declared as the keyboard function. The application should not call it
directly.
*/
void
keyboard( unsigned char key, int x, int y )
{
switch ( key ) {
case 033: // escape key
exit( EXIT_SUCCESS ); // terminates the program
break;
}
}
//----------------------------------------------------------------------------
/* This is the main function that calls all the functions to initialize
and setup the OpenGL environment through GLUT and GLEW.
*/
int
main( int argc, char **argv )
{
// Initialize GLUT
glutInit( &argc, argv );
// Initialize the display mode to a buffer with Red, Green, Blue and Alpha channels
glutInitDisplayMode( GLUT_RGBA );
// Set the window size
glutInitWindowSize( 512, 512 );
// Here you set the OpenGL version
glutInitContextVersion( 3, 2 );
//Use only one of the next two lines
//glutInitContextProfile( GLUT_CORE_PROFILE );
glutInitContextProfile( GLUT_COMPATIBILITY_PROFILE );
glutCreateWindow( "Simple GLSL example" );
// Uncomment if you are using GLEW
glewInit();
// initialize the array and send it to the graphics card
init();
// provide the function that handles the display
glutDisplayFunc( display );
// provide the function that handles the keyboard
glutKeyboardFunc( keyboard );
glutMainLoop();
return 0;
}
It looks like your problem is intended to familiarize you with setting up and using a vertex buffer. If so, it doesn't matter how you come up with the triangles -- the point is to understand how the setup works.
So, the first thing you need to do is to read your textbook on this topic. If you don't have a text, you will need to look up the graphics calls in an OpenGL reference.
If you run the program as-is, it should draw a bunch of randomly chosen, disconnected points (in the form of a fractal, but still...). This is because the actual draw call, glDrawArrays(), is passed the enumerated value GL_POINTS as its first argument, which tells it to draw points.
If you read the documentation for glDrawArrays(), it should list other values for this argument, some of which draw triangles in various ways. The most straightforward of these is GL_TRIANGLES, but I recommend you look up all of them to give you an idea what your options are.
Generating the triangles is up to you. If your name is short, generating them by hand should be fairly easy. Note that you should entirely replace the random-point-generating code; options include (see the sketch after this list):
with inline data
with some code to load the coordinates from a file
with something more clever so you don't have to hand-generate them
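As an illustration of the inline-data option, here is a minimal sketch (assuming the same Angel vec3 type as in the code above; the coordinates are made up) that replaces the random points with a blocky letter "L" built from two rectangles, i.e. four triangles:
// Two rectangles forming a blocky "L": a vertical bar and a horizontal foot.
// Each rectangle is split into two triangles, so 12 vertices in total.
const int NumVertices = 12;
vec3 points[NumVertices] = {
// vertical bar
vec3( -0.5, -0.5, 0.0 ), vec3( -0.3, -0.5, 0.0 ), vec3( -0.3, 0.5, 0.0 ),
vec3( -0.5, -0.5, 0.0 ), vec3( -0.3, 0.5, 0.0 ), vec3( -0.5, 0.5, 0.0 ),
// horizontal foot
vec3( -0.3, -0.5, 0.0 ), vec3( 0.3, -0.5, 0.0 ), vec3( 0.3, -0.3, 0.0 ),
vec3( -0.3, -0.5, 0.0 ), vec3( 0.3, -0.3, 0.0 ), vec3( -0.3, -0.3, 0.0 )
};
// ...same VAO/VBO setup as before, and then in display():
glDrawArrays( GL_TRIANGLES, 0, NumVertices ); // 12 vertices = 4 triangles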
As far as I have understood, the vertex fetch stage is encapsulated by the VAO: the VAO is required to contain the vertex fetch state that pipes data between buffer objects and vertex attributes, as well as the format of the data in the buffer objects.
Both books that I have been reading on the subject (the Red Book and the Blue Book) explicitly state that the VAO must contain the vertex fetch stage state.
However, when I actually create two texture objects and format the buffer data once WITHOUT a VAO in which to store this information, everything still runs without a hiccup. I then rebind the first texture, and again it works fine. So where is the information about the formatting of the data in the buffer object pulled from?
I even upload buffer data a second time to the same buffer object, which I would expect to reset any information previously held there, and the picture still renders fine to the window.
So what exactly is going on? The books say one thing, and what happens in practice seems to be the opposite.
Can somebody explain what IS actually needed here and what isn't?
When do we actually need a VAO, and when can we do without one?
What's the point of the extra code and processing if it isn't needed?
The code below:
int main(){
int scrW=1280, scrH=720;
//create context and shader program
init(scrW, scrH);
createShaders();
//create texture objects and load data from image to server memory
char object[2][25];
strcpy(object[0], "back.bmp");
strcpy(object[1], "256x256.bmp");
//triangle 1
GLfloat vertices[] =
// X Y U V
{ -1.0, -1.0, 0.0, 0.0,
1.0, -1.0, 1.0, 0.0,
1.0, 1.0, 1.0, 1.0,
-1.0, 1.0, 0.0, 1.0};
//glPointSize(40.0f);
//create and bind vertex buffer object (memory buffer)
GLuint vbo1 = createVbo();
//The state set by glVertexAttribPointer() is stored in the currently bound vertex array object (VAO), if one is bound
//associates the format of the data in the currently bound buffer object with the vertex attribute, so OpenGL knows how much data to read and how to read it
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 0);
glEnableVertexAttribArray(0);
//shader vertex attribute for texture coordinates
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), (const GLvoid*)(2 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);
//upload vertices to buffer memory
//will upload data to currently bound/active buffer object
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
//load and create texture object from image data
GLuint tex1 = createTexture(object[0]);
glDrawArrays(GL_QUADS, 0, 4);
glXSwapBuffers ( dpy, glxWin );
sleep(3);
GLuint tex2 = createTexture(object[1]);
glDrawArrays(GL_QUADS, 0, 4);
glXSwapBuffers ( dpy, glxWin );
sleep(3);
glBindTexture(GL_TEXTURE_2D, tex1);
glDrawArrays(GL_QUADS, 0, 4);
glXSwapBuffers ( dpy, glxWin );
sleep(3);
//////////////de-initialize
glXMakeContextCurrent( dpy, 0, 0, NULL );
glXDestroyContext( dpy, context );
glXDestroyWindow(dpy, glxWin);
XDestroyWindow( dpy, win );
XCloseDisplay( dpy );
return 0;
}
and the shaders
const char* vertex_shader =
"#version 400\n"
"layout(location = 0) in vec2 vp;"
"layout(location = 1) in vec2 tex;"
"out vec2 texCoord;"
"void main () {"
" gl_Position = vec4 (vp, 0.0f, 1.0f);"
" texCoord = tex; "
"}";
const char* fragment_shader =
"#version 400\n"
"uniform sampler2D s;"
"in vec2 texCoord;"
"out vec4 color;"
"void main () {"
"color = texture(s, texCoord);"
"}";
In order to avoid any confusion, here is the init() procedure:
static int att[] =
{
GLX_X_RENDERABLE , True,
GLX_DRAWABLE_TYPE , GLX_WINDOW_BIT,
GLX_RENDER_TYPE , GLX_RGBA_BIT,
GLX_X_VISUAL_TYPE , GLX_TRUE_COLOR,
GLX_RED_SIZE , 8,
GLX_GREEN_SIZE , 8,
GLX_BLUE_SIZE , 8,
GLX_ALPHA_SIZE , 8,
GLX_DEPTH_SIZE , 24,
GLX_STENCIL_SIZE , 8,
GLX_DOUBLEBUFFER , True,
//GLX_SAMPLE_BUFFERS , 1,
//GLX_SAMPLES , 4,
None
};
Display *dpy;
Window root;
XVisualInfo *vi;
Colormap cmap;
XSetWindowAttributes swa;
Window win;
GLXContext context;
GLXFBConfig *fbc;
GLXWindow glxWin;
int fbcount;
void init(int width, int height){
//set and choose displays for creating window
dpy = XOpenDisplay(NULL);
if (!dpy){
printf("Failed to open X display\n");
exit(1);
}
root = DefaultRootWindow(dpy);
//request a framebuffer configuration
fbc = glXChooseFBConfig(dpy, DefaultScreen(dpy), att, &fbcount);
if (!fbc){
printf( "Failed to retrieve a framebuffer config\n" );
exit(1);
}
vi = glXGetVisualFromFBConfig( dpy, fbc[0] );
if(vi==NULL){
printf("Error getting visual info\n");
exit(1);
}
swa.colormap = XCreateColormap( dpy, RootWindow( dpy, vi->screen ), vi->visual, AllocNone );
swa.background_pixmap = None ;
swa.border_pixel = 0;
swa.event_mask = StructureNotifyMask;
//Window XCreateWindow(display, parent, x, y, width, height, border_width, depth, class, visual, valuemask, attributes)
win = XCreateWindow( dpy, RootWindow( dpy, vi->screen ), 0, 0, width, height, 0, vi->depth, InputOutput, vi->visual, CWBorderPixel|CWColormap|CWEventMask, &swa );
if ( !win ){
printf( "Failed to create window.\n" );
exit(1);
}
context = glXCreateNewContext( dpy, fbc[0], GLX_RGBA_TYPE, NULL, True );
glxWin = glXCreateWindow(dpy, fbc[0], win, NULL);
XMapWindow(dpy, win);
glXMakeContextCurrent(dpy, glxWin, glxWin, context);
// start GLEW extension handler
glewExperimental = GL_TRUE;
GLenum err = glewInit();
if(err!=GLEW_OK){
fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
exit(1);
}
XSelectInput(dpy, win, ButtonPressMask|KeyPressMask);
// tell GL to only draw onto a pixel if the shape is closer to the viewer
//glEnable (GL_DEPTH_TEST); // enable depth-testing
//glDepthFunc (GL_LESS); // depth-testing interprets a smaller value as "closer"
}
If you use a compatibility OpenGL context, you don't need a VAO. In a sense, there is a "default" VAO which is always bound. This is how it works in OpenGL 2.x, and this is part of what the "compatibility" means in "compatibility profile".
If you use a core OpenGL context, you do need a VAO. If you don't use one, your code simply won't work. If you want to continue pretending you don't need a VAO, you can create a single VAO and have it bound for the entire duration of your program.
The issue of choosing between core and compatibility profiles has its nuances, but in general it is recommended to request a core profile if you are developing a new program. Not all systems have great support for compatibility profiles anyway: Mesa limits compatibility profiles to 3.0 and OS X limits them to 2.1. If you want a core profile, you have to explicitly request one when you create the context.
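For the core-profile case, a minimal sketch of the "single VAO for the whole program" approach; create and bind it right after context creation, before any glVertexAttribPointer() calls:
GLuint vao;
glGenVertexArrays( 1, &vao ); // create one VAO
glBindVertexArray( vao ); // leave it bound for the program's lifetime
// every subsequent glVertexAttribPointer()/glEnableVertexAttribArray()
// call now records its state in this VAO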
I'm attempting to make an OpenGL engine in C++, but cannot render meshes correctly. When rendered, meshes produce faces that connect two random points on the mesh, or a random point on the mesh with the origin (0, 0, 0).
The problem can be seen in the screenshot (not included here); I made it a wireframe to see the problem more clearly.
Code:
// Render all meshes (Graphics.cpp)
for( int curMesh = 0; curMesh < numMesh; curMesh++ ) {
// Save pointer of buffer
meshes[curMesh]->updatebuf();
Buffer buffer = meshes[curMesh]->buffer;
// Update model matrix
glm::mat4 mvp = Proj*View*(meshes[curMesh]->model);
// Initialize vertex array
glBindBuffer( GL_ARRAY_BUFFER, vertbuffer );
glBufferData( GL_ARRAY_BUFFER, sizeof(GLfloat)*buffer.numcoords*3, meshes[curMesh]->verts, GL_STATIC_DRAW );
// Pass information to shader
GLuint posID = glGetAttribLocation( shader, "s_vPosition" );
glVertexAttribPointer( posID, 3, GL_FLOAT, GL_FALSE, 0, (void*)0 );
glEnableVertexAttribArray( posID );
// Check if texture applicable
if( meshes[curMesh]->texID != NULL && meshes[curMesh]->uvs != NULL ) {
// Initialize uv array
glBindBuffer( GL_ARRAY_BUFFER, uvbuffer );
glBufferData( GL_ARRAY_BUFFER, sizeof(GLfloat)*buffer.numcoords*2, meshes[curMesh]->uvs, GL_STATIC_DRAW );
// Pass information to shader
GLuint uvID = glGetAttribLocation( shader, "s_vUV" );
glVertexAttribPointer( uvID, 2, GL_FLOAT, GL_FALSE, 0, (void*)(0) );
glEnableVertexAttribArray( uvID );
// Set mesh texture
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, meshes[curMesh]->texID );
GLuint texID = glGetUniformLocation( shader, "Sampler" );
glUniform1i( texID, 0 );
}
// Activate shader
glUseProgram( shader );
// Set MVP matrix
GLuint mvpID = glGetUniformLocation( shader, "MVP" );
glUniformMatrix4fv( mvpID, 1, GL_FALSE, &mvp[0][0] );
// Draw verticies on screen
bool wireframe = true;
if( wireframe )
for(int i = 0; i < buffer.numcoords; i += 3)
glDrawArrays(GL_LINE_LOOP, i, 3);
else
glDrawArrays( GL_TRIANGLES, 0, buffer.numcoords );
}
// Mesh Class (Graphics.h)
class mesh {
public:
mesh();
void updatebuf();
Buffer buffer;
GLuint texID;
bool updated;
GLfloat* verts;
GLfloat* uvs;
glm::mat4 model;
};
My Obj loading code is here: https://www.dropbox.com/s/tdcpg4vok11lf9d/ObjReader.txt (It's pretty crude and isn't organized, but should still work)
This looks like a primitive restart issue to me, but it's hard to tell exactly what the problem is without seeing some code. It would help a lot to see roughly the 20 lines above and below and including the drawing calls that render the teapot, i.e. the 20 lines before the corresponding glDrawArrays, glDrawElements or glBegin call and the 20 lines after.
Subtract 1 from the indices before you use them, since the indices in the OBJ file are 1-based and you will almost certainly need 0-based indices.
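For example (a sketch; objIndices and indices are hypothetical names standing in for whatever your loader uses):
// OBJ "f" entries are 1-based; OpenGL index buffers are 0-based
for ( std::size_t i = 0; i < objIndices.size(); ++i )
indices[i] = objIndices[i] - 1;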
This is because your triangles are not connected, which is why the wireframe doesn't look clean. If the triangles are not connected, you should construct an index buffer so that shared vertices are actually shared.
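A minimal sketch of such an index buffer, assuming a hypothetical indices array (already converted to 0-based) and a numIndices count:
GLuint ibo;
glGenBuffers( 1, &ibo );
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, ibo );
glBufferData( GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(GLuint), indices, GL_STATIC_DRAW );
// draw via the indices instead of the raw vertex order
glDrawElements( GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, (void*)0 );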
I am trying to get a 3D triangle using OpenGL 3.1. It renders fine as a 2D triangle, but when I add the third coordinate it doesn't behave the way I want. I add a 0.0 z coordinate to all three triangle vertices. The following is the code:
void init( void )
{
vec3 points[NumPoints];
/*
points[0] = vec2(-0.9, 0.9 );
points[1] = (vec2(-0.9, -0.9));
points[2] = (vec2(0.9, -0.9));
*/
points[0] = vec3(-0.9, 0.9, 0.0 );
points[1] = vec3(-0.9, -0.9, 0.0 );
points[2] = vec3(0.9, -0.9, 0.0 );
// Create a vertex array object
GLuint vao;
glGenVertexArrays( 1, &vao );
glBindVertexArray( vao );
// Create and initialize a buffer object
GLuint buffer;
glGenBuffers( 1, &buffer );
glBindBuffer( GL_ARRAY_BUFFER, buffer );
glBufferData( GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW );
// Load shaders and use the resulting shader program
GLuint program = InitShader( "vshader21.glsl", "fshader21.glsl" );
glUseProgram( program );
// Initialize the vertex position attribute from the vertex shader
GLuint loc = glGetAttribLocation( program, "vPosition" );
glEnableVertexAttribArray( loc );
glVertexAttribPointer( loc, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0) );
glClearColor( 1.0, 1.0, 1.0, 1.0 ); // white background
glEnable(GL_DEPTH_TEST);
}
void display( void )
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the window
glDrawArrays( GL_TRIANGLES, 0, NumPoints ); // draw the points
glFlush();
}
int main( int argc, char **argv )
{
glutInit(&argc, argv);
glutInitDisplayMode( GLUT_RGBA | GLUT_DEPTH );
glutInitWindowSize( 512, 512 );
glutCreateWindow( "Triangle" );
glewInit();
init();
glutDisplayFunc( display );
glutKeyboardFunc( keyboard );
glutMainLoop();
return 0;
}
You say it adds zero to Z, but that's exactly what you are specifying in your vertex array. Also, the second parameter of glVertexAttribPointer() should be 3 in your case, since each vertex now has three components.
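Applied to the init() above, the attribute setup would read:
glVertexAttribPointer( loc, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0) ); // 3 floats per vec3 vertex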
I'm preparing some buffers with 2f vertices, 2f texture coordinates and 4f colors, and they are displayed correctly. The whole thing is in one class. If I have more instances (each generating its own buffer id, never passed into a function so it isn't cleaned up, wrapped as pointers in a std::list), all buffered data is visible only during the first draw (I paused after the first draw using gdb and saw everything that was buffered). From the next draw on, only the last drawn buffer is visible.
I prepare them by generate, bind and then fill the buffer with data with this call:
glBufferData( GL_ARRAY_BUFFER, Size * 8 * sizeof( float ), f, GL_STATIC_DRAW );
where Size is a std::size_t holding the number of vertices and f is the float array. To draw the buffer I bind it, enable the client states GL_VERTEX_ARRAY, GL_TEXTURE_COORD_ARRAY and GL_COLOR_ARRAY, and then call:
glDrawArrays( Mode, 0, Size );
where Mode is a GLenum with GL_TRIANGLES.
I worked around it by calling glBufferData() before glDrawArrays() every frame, but that's not how it's supposed to be. It's supposed to be generate, bind, fill once, and then draw by just binding and calling glDrawArrays(), isn't it?
If necessary: I'm working with C++, gcc on a Windows 7 x64.
I was asked for more code:
void Buffer::CopyToGPU( )
{
glBindBuffer( GL_ARRAY_BUFFER, Object );
float* f = new float[ Size * 8 ];
for ( std::size_t s( 0 ) ; s < Size ; ++s )
CopyVertexToFloatArray( &f[ s * 8 ], Vortex[ s ] );
glBufferData( GL_ARRAY_BUFFER, Size * 8 * sizeof( float ), f, GL_STATIC_DRAW );
delete[] f;
glVertexPointer( 2, GL_FLOAT, 8 * sizeof( float ), NULL );
glTexCoordPointer( 2, GL_FLOAT, 8 * sizeof( float ), (char*)( 2 * sizeof( float ) ) );
glColorPointer( 4, GL_FLOAT, 8 * sizeof( float ), (char*)( 4 * sizeof( float ) ) );
}
void Buffer::Render( )
{
glBindBuffer( GL_ARRAY_BUFFER, Object );
glEnableClientState( GL_VERTEX_ARRAY );
glEnableClientState( GL_TEXTURE_COORD_ARRAY );
glEnableClientState( GL_COLOR_ARRAY );
//Actually draw the triangle, giving the number of vertices provided
glDrawArrays( Mode, 0, Size );
glDisableClientState( GL_VERTEX_ARRAY );
glDisableClientState( GL_TEXTURE_COORD_ARRAY );
glDisableClientState( GL_COLOR_ARRAY );
}
int main( ... ) // stripped buffer sfml2 initialization etc.
{
glClearColor( 0, 0, 0, 1 );
glEnable( GL_ALPHA_TEST );
glAlphaFunc( GL_GREATER , 0.1 );
glEnable( GL_TEXTURE_2D );
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
while ( win.isOpen( ) ) // sf::Window
{
/// draw
glClear( GL_COLOR_BUFFER_BIT );
MVP.Apply( );
CallDraw( );
win.display( );
}
}
You seem to specify the attrib pointers when you update the buffer object. This is not how it works. The vertex attrib pointers are (depending on the GL version) either global state, or per-VAO state, but never per-VBO state. Currently, when you do something like
bufferA.CopyToGPU();
bufferB.CopyToGPU();
while(true) {
bufferA.render();
bufferB.render();
}
only buffer B will be used (leaving potential for out-of-bounds accesses, since you think you are using buffer A when rendering it), as the vertex array state is set to buffer B in the second call, overwriting any attrib pointers set in the first call. You need either to respecify the pointers when you draw each object, or to use Vertex Array Objects to encapsulate those pointers. Note that the latter path is mandatory in a GL >= 3.x core profile.
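For the first option, a sketch of Render() with the pointers respecified at draw time (same stride and offsets as in CopyToGPU above):
void Buffer::Render( )
{
glBindBuffer( GL_ARRAY_BUFFER, Object );
// respecify the pointers for *this* buffer before drawing
glVertexPointer( 2, GL_FLOAT, 8 * sizeof( float ), NULL );
glTexCoordPointer( 2, GL_FLOAT, 8 * sizeof( float ), (char*)( 2 * sizeof( float ) ) );
glColorPointer( 4, GL_FLOAT, 8 * sizeof( float ), (char*)( 4 * sizeof( float ) ) );
glEnableClientState( GL_VERTEX_ARRAY );
glEnableClientState( GL_TEXTURE_COORD_ARRAY );
glEnableClientState( GL_COLOR_ARRAY );
glDrawArrays( Mode, 0, Size );
glDisableClientState( GL_VERTEX_ARRAY );
glDisableClientState( GL_TEXTURE_COORD_ARRAY );
glDisableClientState( GL_COLOR_ARRAY );
}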
I got some time and read up on VBOs, and here is what I got:
http://img64.imageshack.us/img64/5733/fps8.jpg
OK, it's much better than before. It's compiled in Release mode. It uses VBOs (probably, if everything's OK) and glDrawArrays to draw.
Here's the drawing code. Please give me advice on how to optimize it. I wanted a few thousand FPS with the terrain ... is that realistic?
void DrawVBO (void)
{
int i;
CVert* m_pVertices;
CTexCoord* m_pTexCoords;
unsigned int m_nTextureId;
m_pVertices = NULL;
m_pTexCoords = NULL;
m_nVertexCount = 0;
m_nVBOVertices = m_nVBOTexCoords = m_nTextureId = 0;
if( IsExtensionSupported( "GL_ARB_vertex_buffer_object" ) )
{
// Get pointers to the OpenGL functions
glGenBuffersARB = (PFNGLGENBUFFERSARBPROC) wglGetProcAddress("glGenBuffersARB");
glBindBufferARB = (PFNGLBINDBUFFERARBPROC) wglGetProcAddress("glBindBufferARB");
glBufferDataARB = (PFNGLBUFFERDATAARBPROC) wglGetProcAddress("glBufferDataARB");
glDeleteBuffersARB = (PFNGLDELETEBUFFERSARBPROC) wglGetProcAddress("glDeleteBuffersARB");
}
todrawquads=0;
nIndex=0;
// my function counting how many quads I will draw
for (i=0;i<MAX_CHUNKS_LOADED;i++)
{
if (chunks_loaded[i].created==1)
{
countquads(i);
}
}
m_nVertexCount=4*todrawquads;
m_pVertices = new CVert[m_nVertexCount];
m_pTexCoords = new CTexCoord[m_nVertexCount];
// another my function adding every quad which i'm going to draw (its verticles) to array
for (i=0;i<MAX_CHUNKS_LOADED;i++)
{
if (chunks_loaded[i].created==1)
{
addchunktodraw(i,m_pVertices,m_pTexCoords);
}
}
glClearColor (1,1,1, 0.0);
glColor3f(1,1,1);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
glLoadIdentity (); // Reset The Modelview Matrix
fps++;
// Camera settings.
//gluLookAt (zom, zom, zom, 0.0, 0.0, 0.0, 0, 0, 1);
gluLookAt (zoom, zoom, zoom, 0.0, 0.0, 0.0, 0, 0, 1);
glRotatef((rot_x / 180 * 3.141592654f),1,0,0);
glRotatef((rot_y / 180 * 3.141592654f),0,1,0);
glRotatef((rot_z / 180 * 3.141592654f),0,0,1);
//m_nTextureId = t_terrain;
// Generate And Bind The Vertex Buffer
glGenBuffersARB( 1, &m_nVBOVertices ); // Get A Valid Name
glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOVertices ); // Bind The Buffer
// Load The Data
glBufferDataARB( GL_ARRAY_BUFFER_ARB, m_nVertexCount*3*sizeof(float), m_pVertices, GL_STATIC_DRAW_ARB );
// Generate And Bind The Texture Coordinate Buffer
glGenBuffersARB( 1, &m_nVBOTexCoords ); // Get A Valid Name
glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOTexCoords ); // Bind The Buffer
// Load The Data
glBufferDataARB( GL_ARRAY_BUFFER_ARB, m_nVertexCount*2*sizeof(float), m_pTexCoords, GL_STATIC_DRAW_ARB );
// Enable Pointers
glEnableClientState( GL_VERTEX_ARRAY ); // Enable Vertex Arrays
glEnableClientState( GL_TEXTURE_COORD_ARRAY ); // Enable Texture Coord Arrays
glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOVertices );
glVertexPointer( 3, GL_FLOAT, 0, (char *) NULL ); // Set The Vertex Pointer To The Vertex Buffer
glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOTexCoords );
glTexCoordPointer( 2, GL_FLOAT, 0, (char *) NULL ); // Set The TexCoord Pointer To The TexCoord Buffer
glDrawArrays( GL_QUADS, 0, m_nVertexCount); // Draw All Of The Triangles At Once
glDisableClientState( GL_VERTEX_ARRAY ); // Disable Vertex Arrays
glDisableClientState( GL_TEXTURE_COORD_ARRAY ); // Disable Texture Coord Arrays
liniergb();
glutSwapBuffers();
delete [] m_pVertices; m_pVertices = NULL;
delete [] m_pTexCoords; m_pTexCoords = NULL;
}
So what can I do with it?
(The code above is the main draw function.)
edit
I've moved this:
if( IsExtensionSupported( "GL_ARB_vertex_buffer_object" ) )
{
// Get pointers to the OpenGL functions
glGenBuffersARB = (PFNGLGENBUFFERSARBPROC) wglGetProcAddress("glGenBuffersARB");
glBindBufferARB = (PFNGLBINDBUFFERARBPROC) wglGetProcAddress("glBindBufferARB");
glBufferDataARB = (PFNGLBUFFERDATAARBPROC) wglGetProcAddress("glBufferDataARB");
glDeleteBuffersARB = (PFNGLDELETEBUFFERSARBPROC) wglGetProcAddress("glDeleteBuffersARB");
}
to my main function. Probably no improvement.
edit2
Now I can also see that it's eating memory. Every few seconds the program's memory usage rises more and more ... What's wrong? What am I not deleting?
edit3
Ok, thanks sooooo much. I've moved some code outside the draw function and ... much more FPS!
Thanks so much !
http://img197.imageshack.us/img197/5193/fpsfinal.jpg
It's a 640x640-block map (so 40 times bigger) with 650,000 quads (about 70 times more) and still ~170 FPS. Great! And no memory leaks. Thanks again!
You're reloading all your buffers and freeing them again in every single frame? Stop doing that and your frame rate will go up.
Note that your current code will eventually run out of VBO identifiers, since you never delete the VBOs you create.
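For instance, a one-time cleanup at shutdown (or just before regenerating the buffers) would be, as a sketch:
glDeleteBuffersARB( 1, &m_nVBOVertices ); // release the buffer names you created
glDeleteBuffersARB( 1, &m_nVBOTexCoords );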
Linking extension functions definitely doesn't need to be done every frame either.
Your DrawVBO function should contain only:
glClearColor (1,1,1, 0.0);
glColor3f(1,1,1);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
glLoadIdentity (); // Reset The Modelview Matrix
fps++;
// Camera settings.
//gluLookAt (zom, zom, zom, 0.0, 0.0, 0.0, 0, 0, 1);
gluLookAt (zoom, zoom, zoom, 0.0, 0.0, 0.0, 0, 0, 1);
glRotatef((rot_x / 180 * 3.141592654f),1,0,0);
glRotatef((rot_y / 180 * 3.141592654f),0,1,0);
glRotatef((rot_z / 180 * 3.141592654f),0,0,1);
// Enable Pointers
glEnableClientState( GL_VERTEX_ARRAY ); // Enable Vertex Arrays
glEnableClientState( GL_TEXTURE_COORD_ARRAY ); // Enable Texture Coord Arrays
glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOVertices );
glVertexPointer( 3, GL_FLOAT, 0, (char *) NULL ); // Set The Vertex Pointer To The Vertex Buffer
glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOTexCoords );
glTexCoordPointer( 2, GL_FLOAT, 0, (char *) NULL ); // Set The TexCoord Pointer To The TexCoord Buffer
glDrawArrays( GL_QUADS, 0, m_nVertexCount); // Draw All Of The Triangles At Once
glDisableClientState( GL_VERTEX_ARRAY ); // Disable Vertex Arrays
glDisableClientState( GL_TEXTURE_COORD_ARRAY ); // Disable Texture Coord Arrays
liniergb();
glutSwapBuffers();
You need to move the rest to a separate function called only once at start-up (or when the terrain changes).
A few things stand out:
In your drawing function you're allocating memory for your geometry data
m_pVertices = new CVert[m_nVertexCount];
m_pTexCoords = new CTexCoord[m_nVertexCount];
Memory allocation is an extremely expensive operation; this is one of those things that should be done only once. OpenGL is not meant to be "initialized" – but the data structures you're going to pass to it are!
Here you're copying the newly allocated buffers to OpenGL, again and again with each frame. This is exactly the opposite of what to do.
// Generate And Bind The Vertex Buffer
glGenBuffersARB( 1, &m_nVBOVertices ); // Get A Valid Name
glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOVertices ); // Bind The Buffer
// Load The Data
glBufferDataARB( GL_ARRAY_BUFFER_ARB, m_nVertexCount*3*sizeof(float), m_pVertices, GL_STATIC_DRAW_ARB );
// Generate And Bind The Texture Coordinate Buffer
glGenBuffersARB( 1, &m_nVBOTexCoords ); // Get A Valid Name
glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOTexCoords ); // Bind The Buffer
// Load The Data
glBufferDataARB( GL_ARRAY_BUFFER_ARB, m_nVertexCount*2*sizeof(float), m_pTexCoords, GL_STATIC_DRAW_ARB );
The whole idea of VBOs is to load the data only once, copy it to OpenGL, and then never reallocate it again. Note that this is not OpenGL initialization, it's data initialization, which is totally reasonable. I see that you have named your variables m_pVertices and m_pTexCoords, indicating that those are class member variables. Then the solution is simple: move that whole initialization code into some loader function. Also, instead of naked C++ arrays I strongly suggest using std::vector.
So let's fix this:
// Load Extensions only once. Well, once per context actually, however
// why don't you just use an extension wrapper and forget about those
// gritty details? Google GLEW or GLee
void init_extensions()
{
if( IsExtensionSupported( "GL_ARB_vertex_buffer_object" ) )
{
// Get pointers to the OpenGL functions
glGenBuffersARB = (PFNGLGENBUFFERSARBPROC) wglGetProcAddress("glGenBuffersARB");
glBindBufferARB = (PFNGLBINDBUFFERARBPROC) wglGetProcAddress("glBindBufferARB");
glBufferDataARB = (PFNGLBUFFERDATAARBPROC) wglGetProcAddress("glBufferDataARB");
glDeleteBuffersARB = (PFNGLDELETEBUFFERSARBPROC) wglGetProcAddress("glDeleteBuffersARB");
}
}
class Block0r
{
protected:
GLuint m_nTextureId;
GLuint m_nVBOVertices;
GLuint m_nVBOTexCoords;
GLuint m_nVertexCount;
// Call this one time to load the data
void LoadVBO()
{
std::vector<CVert> vertices;
std::vector<CTexCoord> texCoords;
// my function counting how many quads I will draw
todrawquads = 0;
for(int i=0; i < MAX_CHUNKS_LOADED; i++) {
if( chunks_loaded[i].created == 1 ) {
countquads(i);
}
}
m_nVertexCount = 4*todrawquads;
vertices.resize(m_nVertexCount);
texCoords.resize(m_nVertexCount);
for (int i=0; i<MAX_CHUNKS_LOADED; i++) {
if (chunks_loaded[i].created==1) {
addchunktodraw(i, &vertices[0], &texCoords[0]);
}
}
glGenBuffersARB( 1, &m_nVBOVertices );
glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOVertices );
glBufferDataARB( GL_ARRAY_BUFFER_ARB, vertices.size()*sizeof(CVert), &vertices[0], GL_STATIC_DRAW_ARB );
glGenBuffersARB( 1, &m_nVBOTexCoords );
glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOTexCoords );
glBufferDataARB( GL_ARRAY_BUFFER_ARB, texCoords.size()*sizeof(CTexCoord), &texCoords[0], GL_STATIC_DRAW_ARB );
}
void DrawVBO()
{
glClearColor (1,1,1, 0.0);
glColor3f(1,1,1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
setup_projection(); // this should really be done in the drawing handler
glMatrixMode(GL_MODELVIEW); // don't assume a certain matrix being active!
glLoadIdentity();
fps++;
gluLookAt (zoom, zoom, zoom, 0.0, 0.0, 0.0, 0, 0, 1);
glRotatef((rot_x / 180 * 3.141592654f),1,0,0);
glRotatef((rot_y / 180 * 3.141592654f),0,1,0);
glRotatef((rot_z / 180 * 3.141592654f),0,0,1);
glEnableClientState( GL_VERTEX_ARRAY );
glEnableClientState( GL_TEXTURE_COORD_ARRAY );
glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOVertices );
glVertexPointer( 3, GL_FLOAT, 0, (GLvoid *) 0 );
glBindBufferARB( GL_ARRAY_BUFFER_ARB, m_nVBOTexCoords );
glTexCoordPointer( 2, GL_FLOAT, 0, (GLvoid *) 0 );
glDrawArrays( GL_QUADS, 0, m_nVertexCount);
glDisableClientState( GL_VERTEX_ARRAY );
glDisableClientState( GL_TEXTURE_COORD_ARRAY );
liniergb();
glutSwapBuffers();
}
};
On a side note: comments like // Generate And Bind The Vertex Buffer or // Set The Vertex Pointer To The Vertex Buffer are not very useful. Those comments just redundantly restate what one can read from the code anyway.
Comments should be added to code whose inner workings are not immediately understandable, or where you had to do some kind of hack to fix something and that hack would puzzle someone else, or yourself, in a few months' time.