glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
AttachVertexShader( shader, "szescian_vs.glsl" );
AttachFragmentShader( shader, "szescian_fs.glsl" );
LinkProgram( shader );
glBindVertexArray( vertexVAO );
glGenBuffers( 1, &positionBuffer );
glBindBuffer( GL_ARRAY_BUFFER, positionBuffer );
glBufferData( GL_ARRAY_BUFFER, sizeof( position ), position, GL_STATIC_DRAW );
positionLoc = glGetAttribLocation( shader, "inPosition" );
glEnableVertexAttribArray ( positionLoc );
glVertexAttribPointer ( positionLoc, 3, GL_FLOAT, GL_FALSE, 0, ( void* ) 0 ); //here gDEBugger GL breaks on OpenGL Error
This is part of my init function, and I really don't know why gDEBugger breaks on it. Can anybody explain it to me?
Break Reason: OpenGL Error
Breaked-on: glVertexAttribPointer( 0, 3, GL_FLOAT, FALSE, 0, 0x00000000 )
Error-Code: GL_INVALID_OPERATION
Error-Description: The specified operation is not allowed in the current state. The offending function is ignored, having no side effect other than to set the error flag.
* Stopped before function execution
This is the break information.
The possible GL_INVALID_OPERATION errors generated by glVertexAttribPointer():
GL_INVALID_OPERATION is generated if size is GL_BGRA and type is not
GL_UNSIGNED_BYTE, GL_INT_2_10_10_10_REV or GL_UNSIGNED_INT_2_10_10_10_REV.
GL_INVALID_OPERATION is generated if type is GL_INT_2_10_10_10_REV
or GL_UNSIGNED_INT_2_10_10_10_REV and size is not 4 or GL_BGRA.
GL_INVALID_OPERATION is generated if type is
GL_UNSIGNED_INT_10F_11F_11F_REV and size is not 3.
GL_INVALID_OPERATION is generated by glVertexAttribPointer if size
is GL_BGRA and normalized is GL_FALSE.
GL_INVALID_OPERATION is generated if zero is bound to the
GL_ARRAY_BUFFER buffer object binding point and the pointer argument
is not NULL.
http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml
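Given the code in the question, the last case does not apply (a non-zero buffer is bound), but one possibility worth checking, which the list above does not cover, is that vertexVAO was never created with glGenVertexArrays: in a core profile, calling glVertexAttribPointer with no vertex array object bound also generates GL_INVALID_OPERATION. A minimal init sketch that avoids both pitfalls, reusing the question's names (position and shader are assumed to exist as in the original code):
GLuint vertexVAO, positionBuffer;
glGenVertexArrays( 1, &vertexVAO );  // generate the VAO before binding it
glBindVertexArray( vertexVAO );
glGenBuffers( 1, &positionBuffer );
glBindBuffer( GL_ARRAY_BUFFER, positionBuffer );
glBufferData( GL_ARRAY_BUFFER, sizeof( position ), position, GL_STATIC_DRAW );
GLint positionLoc = glGetAttribLocation( shader, "inPosition" );
glEnableVertexAttribArray( positionLoc );
// With a non-zero buffer bound to GL_ARRAY_BUFFER, the last argument is a
// byte offset into that buffer, so 0 is valid here.
glVertexAttribPointer( positionLoc, 3, GL_FLOAT, GL_FALSE, 0, ( void* ) 0 );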
Related
I'm porting OpenGL 3.X code to OpenGL 2.1, where no VAOs are available. The code below makes my program crash inside the graphics driver, immediately at the glDrawElements call.
Part of the code is shared between the 3.X and 2.1 modes, and it works properly in OpenGL 3.X mode.
struct StrokeVertex
{
glm::vec2 position;
glm::vec2 tangent;
float center_dist;
};
if (use_gl3)
bind_vao();
bind_vbo();
bind_ibo();
send_data();
bind_program();
set_uniforms();
// setup client-side vertex array here
if (!use_gl3)
{
glEnableClientState( GL_VERTEX_ARRAY );
GLHELPER_CHECK;
glEnableVertexAttribArray( 0 );
GLHELPER_CHECK;
glVertexAttribPointer( get_attribute_location( "tangent" ) /* got 1 here */, 2, GL_FLOAT, GL_FALSE, sizeof( StrokeVertex ), (void*) offsetof( StrokeVertex, tangent ) );
GLHELPER_CHECK;
glEnableVertexAttribArray( 1 );
GLHELPER_CHECK;
glVertexAttribPointer( get_attribute_location( "center_dist" ) /* got 2 here */, 1, GL_FLOAT, GL_FALSE, sizeof( StrokeVertex ), (void*) offsetof( StrokeVertex, center_dist ) );
GLHELPER_CHECK;
// I also tried calling this before setting the two attributes
glVertexPointer( 2, GL_FLOAT, sizeof( StrokeVertex ), (void*) offsetof( StrokeVertex, position ) );
GLHELPER_CHECK;
}
// crashes immediately here
glDrawElements( GL_TRIANGLES, GLsizei( n_elem ), GL_UNSIGNED_INT, nullptr );
GLHELPER_CHECK;
if (!use_gl3)
{
glDisableClientState( GL_VERTEX_ARRAY );
}
else
{
unbind_vao();
}
And here is the relevant part of the vertex shaders for OpenGL 3.X and 2.X. Most of the code is the same except for the attribute declarations. OpenGL 3.X:
in vec2 logic_pos;
in vec2 tangent;
in float center_dist;
OpenGL 2.X:
// builtin gl_Vertex is used, so we omit logic_pos
attribute vec2 tangent;
attribute float center_dist;
There seems to be a mismatch between the vertex attributes that get enabled and the ones that data gets bound to:
The following code tells OpenGL that it should enable vertex attribute 0, but the data gets bound to attribute 1 (at least according to the comment).
glEnableVertexAttribArray( 0 );
glVertexAttribPointer( get_attribute_location( "tangent" ) /* got 1 here */, 2, GL_FLOAT, GL_FALSE, sizeof( StrokeVertex ), (void*) offsetof( StrokeVertex, tangent ) );
In total, it looks as if your code enables attributes 0 and 1, but binds data to 1 and 2.
If you have attributes enabled but do not bind any data to them, this might lead to the crash you describe.
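A sketch of the corrected setup under that reading: query each attribute location first, then enable and point exactly those locations (the locations in the comments follow the question's output):
GLint tangent_loc = get_attribute_location( "tangent" );          // 1 in the question
GLint center_dist_loc = get_attribute_location( "center_dist" );  // 2 in the question
glEnableVertexAttribArray( tangent_loc );
glVertexAttribPointer( tangent_loc, 2, GL_FLOAT, GL_FALSE, sizeof( StrokeVertex ), (void*) offsetof( StrokeVertex, tangent ) );
glEnableVertexAttribArray( center_dist_loc );
glVertexAttribPointer( center_dist_loc, 1, GL_FLOAT, GL_FALSE, sizeof( StrokeVertex ), (void*) offsetof( StrokeVertex, center_dist ) );
// ... glDrawElements as before ...
glDisableVertexAttribArray( tangent_loc );
glDisableVertexAttribArray( center_dist_loc );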
I have some example code that calls glVertexAttribPointer() in 2 places. Is this necessary or can it just be done once?
First time - associating the vertex buffer data:
glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, sizeof( COLVERTEX ), 0 );
glBufferData( GL_ARRAY_BUFFER, sizeof( v ), v, GL_STATIC_DRAW );
Second time - in the rendering callback function:
glEnableVertexAttribArray(0);
glBindBuffer( GL_ARRAY_BUFFER, vboQuad );
glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, sizeof( COLVERTEX ), 0 );
glDrawArrays( GL_QUADS, 0, 4 );
glDisableVertexAttribArray(0);
Is it really necessary to describe the vertex attributes twice?
glBufferData fills the VBO with raw data. At this point it doesn't matter how the data is supposed to be interpreted when drawing (e.g. the same data may be interpreted as vertex positions in one draw but as normals in another). So yes, you can remove the first call.
If you use vertex array objects, you can set the vertex attribute pointers only once (by binding the VBO, enabling the vertex attribute, and setting the vertex attribute pointer) and then just call glBindVertexArray before drawing to have all the recorded vertex attributes set up (you don't even need to bind the VBO containing the vertex attributes before the draw call).
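For illustration, a minimal sketch of that pattern, reusing vboQuad and COLVERTEX from the question (vao is a new, hypothetical name):
// One-time setup: the VAO records the buffer binding and the attribute layout.
GLuint vao;
glGenVertexArrays( 1, &vao );
glBindVertexArray( vao );
glBindBuffer( GL_ARRAY_BUFFER, vboQuad );
glEnableVertexAttribArray( 0 );
glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, sizeof( COLVERTEX ), 0 );
glBindVertexArray( 0 );

// Per frame: a single bind restores everything the VAO recorded.
glBindVertexArray( vao );
glDrawArrays( GL_QUADS, 0, 4 );
glBindVertexArray( 0 );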
I'm attempting to make an OpenGL engine in C++, but cannot render meshes correctly. When rendered, meshes end up with faces that connect two random points on the mesh, or a random point on the mesh with (0, 0, 0).
The problem can be seen in the attached screenshot (I made it a wireframe to see the problem more clearly).
Code:
// Render all meshes (Graphics.cpp)
for( int curMesh = 0; curMesh < numMesh; curMesh++ ) {
// Save pointer of buffer
meshes[curMesh]->updatebuf();
Buffer buffer = meshes[curMesh]->buffer;
// Update model matrix
glm::mat4 mvp = Proj*View*(meshes[curMesh]->model);
// Initialize vertex array
glBindBuffer( GL_ARRAY_BUFFER, vertbuffer );
glBufferData( GL_ARRAY_BUFFER, sizeof(GLfloat)*buffer.numcoords*3, meshes[curMesh]->verts, GL_STATIC_DRAW );
// Pass information to shader
GLuint posID = glGetAttribLocation( shader, "s_vPosition" );
glVertexAttribPointer( posID, 3, GL_FLOAT, GL_FALSE, 0, (void*)0 );
glEnableVertexAttribArray( posID );
// Check if texture applicable
if( meshes[curMesh]->texID != NULL && meshes[curMesh]->uvs != NULL ) {
// Initialize uv array
glBindBuffer( GL_ARRAY_BUFFER, uvbuffer );
glBufferData( GL_ARRAY_BUFFER, sizeof(GLfloat)*buffer.numcoords*2, meshes[curMesh]->uvs, GL_STATIC_DRAW );
// Pass information to shader
GLuint uvID = glGetAttribLocation( shader, "s_vUV" );
glVertexAttribPointer( uvID, 2, GL_FLOAT, GL_FALSE, 0, (void*)(0) );
glEnableVertexAttribArray( uvID );
// Set mesh texture
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, meshes[curMesh]->texID );
GLuint texID = glGetUniformLocation( shader, "Sampler" );
glUniform1i( texID, 0 );
}
// Activate shader
glUseProgram( shader );
// Set MVP matrix
GLuint mvpID = glGetUniformLocation( shader, "MVP" );
glUniformMatrix4fv( mvpID, 1, GL_FALSE, &mvp[0][0] );
// Draw vertices on screen
bool wireframe = true;
if( wireframe )
for(int i = 0; i < buffer.numcoords; i += 3)
glDrawArrays(GL_LINE_LOOP, i, 3);
else
glDrawArrays( GL_TRIANGLES, 0, buffer.numcoords );
}
// Mesh Class (Graphics.h)
class mesh {
public:
mesh();
void updatebuf();
Buffer buffer;
GLuint texID;
bool updated;
GLfloat* verts;
GLfloat* uvs;
glm::mat4 model;
};
My Obj loading code is here: https://www.dropbox.com/s/tdcpg4vok11lf9d/ObjReader.txt (It's pretty crude and isn't organized, but should still work)
This looks like a primitive restart issue to me. It's hard to tell what exactly the problem is without seeing more code. It would help a lot to see roughly the 20 lines above and below the drawing calls that render the teapot, i.e. the 20 lines before the corresponding glDrawArrays, glDrawElements, or glBegin call and the 20 lines after.
Subtract 1 from the indices when you use them: OBJ files store 1-based indices, and you will almost certainly need 0-based indices.
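For example (a sketch; indices is a hypothetical std::vector<unsigned int> holding the face indices read from the OBJ file):
// Convert the 1-based OBJ indices to the 0-based indices OpenGL expects.
for ( size_t i = 0; i < indices.size(); ++i )
    indices[i] -= 1;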
This is because your triangles are not connected, which is why the wireframe does not look right.
If the triangles are not connected, you should construct an index buffer, as sketched below.
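A sketch of that, assuming the 0-based indices from above are in a std::vector<unsigned int> named indices (a hypothetical name):
// Indexed drawing: each vertex is stored once and referenced by index,
// so adjacent triangles share their edge vertices exactly.
GLuint ibo;
glGenBuffers( 1, &ibo );
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, ibo );
glBufferData( GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof( unsigned int ), indices.data(), GL_STATIC_DRAW );
glDrawElements( GL_TRIANGLES, GLsizei( indices.size() ), GL_UNSIGNED_INT, (void*) 0 );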
A week ago I upgraded my OS to the latest version, so GLEW, nVidia drivers, g++, Qt, etc. all got upgraded too. That was the day my QGLWidgets stopped showing raw OpenGL 3.3 content, only the 2D QPainter based stuff. Since no other OpenGL applications (including my DE) on my machine were affected, I must have written some dodgy code in my application, that perhaps had been allowed by older versions of these libraries - which have now been amended.
I have a lot of heavily abstracted OpenGL code, so there are many potential places for a failure; and after a few days of trying to get any sort of output from it (glGetError() was not returning errors before I started messing with it), I decided the best thing to do was to write the simplest OpenGL application possible and then slowly build it up until it broke.
But I can't even get a triangle to appear!
void Viewer::initializeGL()
{
// Initialise GLEW, compile/link fragment and vertex shaders, error check, etc.
...
// Create default VAO.
glGenVertexArrays( 1, &vao_ );
glBindVertexArray( vao_ );
glUseProgram( shader_ );
vVertex_ = glGetAttribLocation( shader_, "vVertex" );
// Create the test triangle VBO.
glGenBuffers( 1, &vbo_ );
glBindBuffer( GL_ARRAY_BUFFER, vbo_ );
glEnableVertexAttribArray( vVertex_ );
glVertexAttribPointer( vVertex_, 3, GL_FLOAT, false, 0,
reinterpret_cast< GLvoid* >( 0 ) );
// Upload the data to the GPU.
static const float verts[9] = { 0.0f, 0.0f, -0.5f,
1.0f, 0.0f, -0.5f,
0.5f, 1.0f, -0.5f };
glBufferData( GL_ARRAY_BUFFER, sizeof( verts ),
static_cast< const void* >( verts ), GL_STATIC_DRAW );
glBindBuffer( GL_ARRAY_BUFFER, GL_NONE );
glDisableVertexAttribArray( vVertex_ );
Sy_GL::checkError();
}
void Viewer::paintGL()
{
// Clear the buffers.
qglClearColor( QColor( Qt::black ) );
glClear( GL_COLOR_BUFFER_BIT );
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
glBindBuffer( GL_ARRAY_BUFFER, vbo_ );
glEnableVertexAttribArray( vVertex_ );
glVertexAttribPointer( vVertex_, 3, GL_FLOAT, false, 0,
reinterpret_cast< GLvoid* >( 0 ) );
glDrawArrays( GL_TRIANGLES, 0, 3 );
glBindBuffer( GL_ARRAY_BUFFER, GL_NONE );
glDisableVertexAttribArray( vVertex_ );
Sy_GL::checkError();
}
I'm not using my VAO for its intended purpose because VAOs cannot be shared across contexts, which is the scenario in my 'real' application, so I'm replicating that situation here. Sy_GL::checkError() just calls glGetError() and throws an exception if there's a problem. My two shaders could not be simpler:
// Vertex shader.
#version 330
in vec3 vVertex;
void main( void )
{
gl_Position = vec4( vVertex, 1.0 );
}
// Fragment shader (in different file).
#version 330
out vec4 fragColour;
void main( void )
{
fragColour = vec4( 1.0, 0.0, 0.0, 1.0 );
}
This should display a red wireframe triangle against a black background, but I just get the black background, with no console output or exceptions. My system really does support OpenGL 3.3 and higher; here is the top of my glxinfo output:
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
And my glewinfo output:
GLEW version 1.9.0
Reporting capabilities of display :0, visual 0x2b
Running on a GeForce GTX 560 Ti/PCIe/SSE2 from NVIDIA Corporation
OpenGL version 4.3.0 NVIDIA 310.32 is supported
So my question is: Is my code wrong? Or is my system damaged very subtly?
Edit
It appears that QGLFormat is reporting that I only have OpenGL v1.0 - what mechanism is Qt using to get that value?
In my 'real' application I perform an OpenGL version check using QGLFormat::openGLVersionFlags() & QGLFormat::OpenGL_Version_3_3, and this passes; but myQGLWidget->format().majorVersion() and myQGLWidget->format().minorVersion() return 1 and 0 respectively.
Edit 2
Interestingly, if I set a default QGLFormat of v3.3 in my main.cpp:
QGLFormat format( QGL::DoubleBuffer |
QGL::DepthBuffer |
QGL::AlphaChannel );
format.setVersion( 3, 3 );
QGLFormat::setDefaultFormat( format );
It segfaults on the first of my OpenGL calls, specifically glGenVertexArrays(1, &vao_), but myQGLWidget->format().majorVersion() and myQGLWidget->format().minorVersion() return 3 and 3 respectively.
We recently had this at work. The cause was the latest NVIDIA drivers, which broke the major and minor version queries.
Edit: I think it may have been related to calling these functions before setting up a valid GL context.
End edit
So, you could try with a slightly older driver. However, there is also an issue with some Qt versions and the GL version query. Check this link out:
http://qt-project.org/forums/viewthread/20424
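To rule out a bogus version report, one sanity check (a sketch, not a fix) is to ask the driver directly once a context is actually current, e.g. at the top of initializeGL(), instead of trusting QGLFormat before the widget is realized:
// Inside initializeGL(), where the widget's context is guaranteed current:
const GLubyte* version = glGetString( GL_VERSION );
qDebug() << "GL_VERSION:" << reinterpret_cast< const char* >( version ); // needs #include <QDebug>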
I'm having an issue when first rendering a vertex buffer with a program,
and then rendering a different vertex buffer without a program.
For the first buffer, when a program is enabled, I use code similar to:
glBindBuffer( GL_ARRAY_BUFFER, m_id );
GLint location = glGetAttribLocation( pID, "position" );
glEnableVertexAttribArray( location );
glVertexAttribPointer( location, 3, GL_FLOAT, GL_FALSE, 3 * sizeof( GLfloat ), 0 );
glDrawArrays( m_mode, 0, m_numVertices );
For the second, without a program:
glBindBuffer( GL_ARRAY_BUFFER, m_id );
glEnableClientState( GL_VERTEX_ARRAY );
glVertexPointer( 3, GL_FLOAT, 3 * sizeof( GLfloat ), 0 );
glDrawArrays( m_mode, 0, m_numVertices );
Both code paths work fine individually, but when run in the order
"with program" -> "without program", the second seems to use the buffer of the first,
and in the order "without program" -> "with program", the first is not drawn (in the second iteration).
This suggests to me that I'm missing some state change done by the glEnableVertexAttribArray block, but I don't understand which state change is causing the problems.
PS: the reason I'm rendering both with and without a program is that the scenegraph lib I'm using lets you turn programs on or off per node.
Try adding
glDisableVertexAttribArray( location ); // location of "position"
before switching to fixed-function rendering.
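A sketch of both paths with the state reset in place, reusing the variables from the question (the glUseProgram calls are an assumption about how the scenegraph lib toggles programs):
// Shader path: draw, then undo the generic attribute state.
glBindBuffer( GL_ARRAY_BUFFER, m_id );
GLint location = glGetAttribLocation( pID, "position" );
glEnableVertexAttribArray( location );
glVertexAttribPointer( location, 3, GL_FLOAT, GL_FALSE, 3 * sizeof( GLfloat ), 0 );
glDrawArrays( m_mode, 0, m_numVertices );
glDisableVertexAttribArray( location );  // the missing state reset
glUseProgram( 0 );  // make sure no program is active for the fixed-function path

// Fixed-function path: client state only.
glBindBuffer( GL_ARRAY_BUFFER, m_id );
glEnableClientState( GL_VERTEX_ARRAY );
glVertexPointer( 3, GL_FLOAT, 3 * sizeof( GLfloat ), 0 );
glDrawArrays( m_mode, 0, m_numVertices );
glDisableClientState( GL_VERTEX_ARRAY );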