First, I'm using LWJGL 3 and OpenGL 3.2.
I'm trying to use indices with GL11.glDrawElements, but nothing is rendered in the window.
Buffer generation code (it doesn't really use the indices yet, but I think it should still work):
public void updateBuffers(Game game, int positionsAttrib, int texCoordsAttrib) { // positionsAttrib and texCoordsAttrib are the attribute locations in the shader program
FloatBuffer positionsBuffer = null;
FloatBuffer texCoordsBuffer = null;
IntBuffer indicesBuffer = null;
try {
this.vertexCount = this.tiles.size() * 4;
positionsBuffer = MemoryUtil.memAllocFloat( this.tiles.size() * 3 * 4 );
texCoordsBuffer = MemoryUtil.memAllocFloat( this.tiles.size() * 2 * 4 );
indicesBuffer = MemoryUtil.memAllocInt( this.vertexCount );
int i = 0;
for ( Entry<TilePosition, Tile> tilesEntry : this.tiles.entrySet() ) {
TilePosition tilePosition = tilesEntry.getKey();
Tile tile = tilesEntry.getValue();
String tileTextureIdentifier = tile.getTextureIdentifier();
TextureDefinition tileTextureDefinition = game.getTexturesManager().getTextureDefinition("tiles");
Rectangle tileTextureRectangle = tileTextureDefinition.getTilePosition( tileTextureIdentifier );
if ( tileTextureRectangle == null ) continue;
positionsBuffer.put( tilePosition.getX() ).put( tilePosition.getY() + 1 ).put( 0 );
positionsBuffer.put( tilePosition.getX() + 1 ).put( tilePosition.getY() + 1 ).put( 0 );
positionsBuffer.put( tilePosition.getX() + 1 ).put( tilePosition.getY() ).put( 0 );
positionsBuffer.put( tilePosition.getX() ).put( tilePosition.getY() ).put( 0 );
texCoordsBuffer.put( tileTextureRectangle.x ).put( tileTextureRectangle.y );
texCoordsBuffer.put( tileTextureRectangle.x + tileTextureRectangle.width ).put( tileTextureRectangle.y );
texCoordsBuffer.put( tileTextureRectangle.x + tileTextureRectangle.width ).put( tileTextureRectangle.y + tileTextureRectangle.height );
texCoordsBuffer.put( tileTextureRectangle.x ).put( tileTextureRectangle.y + tileTextureRectangle.height );
indicesBuffer.put( i ).put( i + 1 ).put( i + 2 ).put( i + 3 );
i += 4;
}
positionsBuffer.flip();
texCoordsBuffer.flip();
indicesBuffer.flip();
this.vao.bind(); // VertexBufferObject and VertexArrayObject are classes that wrap the internal buffer ids and the most useful functions
this.positionsVbo.bind( GL15.GL_ARRAY_BUFFER );
VertexBufferObject.uploadData( GL15.GL_ARRAY_BUFFER, positionsBuffer, GL15.GL_STATIC_DRAW );
ShaderProgram.pointVertexAttribute( positionsAttrib, 3, 0, 0 );
this.texCoordsVbo.bind( GL15.GL_ARRAY_BUFFER );
VertexBufferObject.uploadData( GL15.GL_ARRAY_BUFFER, texCoordsBuffer, GL15.GL_STATIC_DRAW );
ShaderProgram.pointVertexAttribute( texCoordsAttrib, 2, 0, 0 );
this.indicesVbo.bind( GL15.GL_ELEMENT_ARRAY_BUFFER );
VertexBufferObject.uploadData( GL15.GL_ELEMENT_ARRAY_BUFFER, indicesBuffer, GL15.GL_STATIC_DRAW );
VertexArrayObject.unbind();
} finally {
if ( positionsBuffer != null ) MemoryUtil.memFree( positionsBuffer );
if ( texCoordsBuffer != null ) MemoryUtil.memFree( texCoordsBuffer );
if ( indicesBuffer != null ) MemoryUtil.memFree( indicesBuffer );
}
}
The shader program used:
// scene.vs :
#version 330 // edit: I had to change this line because of the OpenGL version in use
layout (location=0) in vec3 position;
layout (location=1) in vec2 texCoord;
out vec2 outTexCoord;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main() {
mat4 mvp = projection * view * model;
gl_Position = mvp * vec4( position, 1.0 );
outTexCoord = texCoord;
}
// scene.fs :
#version 330
in vec2 outTexCoord;
out vec4 fragColor;
uniform sampler2D textureSampler;
void main() {
vec3 vertexColor = vec3( 1.0, 1.0, 1.0 );
vec4 textureColor = texture( textureSampler, outTexCoord );
fragColor = vec4( vertexColor, 1.0 ) * textureColor;
}
And the rendering functions:
private void beginRender(Game game, int positionsAttrib, int texCoordsAttrib) {
Texture texture = game.getTexturesManager().getTextureDefinition("tiles").getTexture();
GL13.glActiveTexture( GL13.GL_TEXTURE0 );
texture.bind();
this.vao.bind();
ShaderProgram.enableVertexAttribute( positionsAttrib );
ShaderProgram.enableVertexAttribute( texCoordsAttrib );
}
private void endRender(Game game, int positionsAttrib, int texCoordsAttrib) {
ShaderProgram.disableVertexAttribute( positionsAttrib );
ShaderProgram.disableVertexAttribute( texCoordsAttrib );
VertexArrayObject.unbind();
Texture.unbind();
}
// render is called by the render loop, between the GL clear and swap-buffer calls
public void render(Game game, int positionsAttrib, int texCoordsAttrib) {
this.beginRender( game, positionsAttrib, texCoordsAttrib );
GL11.glDrawElements( GL11.GL_QUADS, this.vertexCount, GL11.GL_UNSIGNED_INT, 0 );
this.endRender( game, positionsAttrib, texCoordsAttrib );
}
I'm not sure this is very clear, especially with my approximate English.
You don't explicitly state it, but it looks like you're using a core profile (as you should). If so, GL_QUADS will not be available, and your draw call will simply result in a GL_INVALID_ENUM error.
As a side note: if your context supports OpenGL 4.3 or later (or the KHR_debug extension), I'd strongly recommend using OpenGL's debug output feature during development. It makes spotting and interpreting GL errors much easier, and it may also provide useful performance hints.
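If you want to keep one quad per tile, the usual fix is to emit two triangles per quad, i.e. six indices instead of four. A minimal sketch of the index arithmetic (plain C++ for illustration; the question's code is Java, but the computation is identical, and `quadToTriangleIndices` is a hypothetical helper name):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// For each quad whose four corners were pushed in order
// (top-left, top-right, bottom-right, bottom-left, as in the question's
// loop), emit two triangles: (0,1,2) and (0,2,3), offset per quad.
std::vector<std::uint32_t> quadToTriangleIndices(std::size_t quadCount) {
    std::vector<std::uint32_t> out;
    out.reserve(quadCount * 6);
    for (std::size_t q = 0; q < quadCount; ++q) {
        const std::uint32_t b = static_cast<std::uint32_t>(q * 4);
        out.insert(out.end(), { b, b + 1, b + 2, b, b + 2, b + 3 });
    }
    return out;
}
```

The draw call then becomes `glDrawElements(GL_TRIANGLES, 6 * quadCount, GL_UNSIGNED_INT, 0)`; note the count parameter is the number of indices to draw, not the number of vertices.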
I am developing a small OpenGL engine and am currently stuck on the following GLSL problem:
I am trying to implement a shader for multiple lights. It generates the correct result, but only if I put the calculation of the diffuse light inside a (completely useless) if statement. The specular light component is always rendered correctly.
Works:
for( int i = 0; i < uNumLights; ++i ) {
//...
if( <anything> == <something> ) {
lDiffuseLight += vAmbientDiffuseMaterial * uLights[i].color * lIntensity;
}
}
Does not work:
for( int i = 0; i < uNumLights; ++i ) {
//...
lDiffuseLight += vAmbientDiffuseMaterial * uLights[i].color * lIntensity;
}
Furthermore, it makes no difference whether I use a constant or a uniform in the loop condition.
Why does the code work with the if but not without it? How can I make my shader work without silly if statements?
The full shader code:
The Vertex Shader:
#version 330
const int MAX_LIGHTS = 4;
in vec3 iVertex;
in vec3 iNormals;
uniform mat4 uModelView;
uniform mat4 uMVP;
uniform mat3 uNormal;
smooth out vec3 vPosition;
smooth out vec3 vModelView;
smooth out vec3 vNormals;
// Colors...
smooth out vec3 vAmbientDiffuseMaterial; // Make some colors...
smooth out vec3 vAmbientLight;
smooth out vec3 vLightDirection[MAX_LIGHTS];
uniform vec3 uAmbientColor;
uniform int uNumLights;
uniform struct Light {
vec3 color;
vec3 position;
} uLights[MAX_LIGHTS];
void main(void) {
vAmbientDiffuseMaterial = clamp(iVertex, 0.0, 1.0);
vAmbientLight = vAmbientDiffuseMaterial * uAmbientColor;
gl_Position = uMVP * vec4( iVertex.xyz, 1.0 );
vPosition = gl_Position.xyz;
vNormals = normalize( uNormal * iNormals );
vModelView = ( uModelView * vec4( iVertex , 1 )).xyz;
for( int i = 0; i < uNumLights; ++i ) {
vLightDirection[i] = normalize( uLights[i].position - vModelView );
}
}
The fragment shader:
#version 330
const int MAX_LIGHTS = 4;
uniform mat4 uModelView;
out vec4 oFinalColor;
smooth in vec3 vPosition;
smooth in vec3 vModelView;
smooth in vec3 vNormals;
smooth in vec3 vAmbientDiffuseMaterial;
smooth in vec3 vAmbientLight;
smooth in vec3 vLightDirection[MAX_LIGHTS];
// Light stuff
uniform int uNumLights;
uniform struct Light {
vec3 color;
vec3 position;
} uLights[MAX_LIGHTS];
const vec3 cSpecularMaterial = vec3( 0.9, 0.9, 0.9 );
const float cShininess = 30.0;
void main(void) {
vec3 lReflection, lSpecularLight = vec3( 0 ), lDiffuseLight = vec3( 0 );
float lIntensity;
// Specular Light
for( int i = 0; i < uNumLights; ++i ) {
lReflection = normalize( reflect( -vLightDirection[i], vNormals) );
lIntensity = max( 0.0, dot( -normalize(vModelView), lReflection ) );
lSpecularLight += cSpecularMaterial * uLights[i].color * pow( lIntensity, cShininess );
// Diffuse Light
lIntensity = max( 0, dot( vNormals, vLightDirection[i] ) );
// Here is the strange if part:
// Only with the if lDiffuseLight is what it is supposed to be.
if( uLights[i].color != vec3( 0, 0, 0 ) ) {
lDiffuseLight += vAmbientDiffuseMaterial * uLights[i].color * lIntensity;
}
}
oFinalColor = vec4( vAmbientLight + lDiffuseLight + lSpecularLight, 1 );
}
I have tested all uniforms, and they are all set correctly in the shader.
I would like to pass my vertex shader a float as a uniform, because every vertex will use the same value for this parameter. It works well on my computer (Linux/GLES), but on a Raspberry Pi the value at execution time is not correct at all.
I use bcm_host.h and EGL to set up my OpenGL context.
I have written a simple program to demonstrate this. It should show a blue triangle for 1 s, then a red one, then a blue one, and so on.
No OpenGL error is raised during execution on the Pi, but the triangle stays blue.
I probed the value with some graphical tests, and it appears to be a small negative float.
It is strange, because I use uniforms for the matrices and those work fine.
Vertex shader:
#version 100
precision mediump float;
uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
uniform float test;
attribute vec3 a_Vertex;
varying vec4 color;
void main( void )
{
if( test == 500.0 )
color = vec4( 1.0, 0.0, 0.0, 1.0 );
else
color = vec4( 0.0, 0.0, 1.0, 1.0 );
gl_Position = projection_matrix * modelview_matrix * vec4( a_Vertex, 1.0 );
}
Fragment shader:
#version 100
precision mediump float;
varying vec4 color;
void main(void)
{
gl_FragColor = color;
}
C++ display code:
if( ticks - lastDrawTicks > 1000 )
{
Screen::get()->clear();
program->use();
program->sendUniform( "projection_matrix", projection, false );
program->sendUniform( "modelview_matrix", modelview, false );
if( value )
{
int location = glGetUniformLocation( program->getId(), "test" );
CheckOpenGLError(glGetUniformLocation);
Logger::get() << location << " <- Red !" << Logger::endl;
glUniform1f( location, 500.0f );
CheckOpenGLError(glUniform1f);
//program->sendUniform( "test", 500.0f );
value = false;
}
else
{
int location = glGetUniformLocation( program->getId(), "test" );
CheckOpenGLError(glGetUniformLocation);
Logger::get() << location << " <- Blue !" << Logger::endl;
glUniform1f( location, 0.0f );
CheckOpenGLError(glUniform1f);
//program->sendUniform( "test", 0.0f );
value = true;
}
program->sendVertexPointer( "a_Vertex", vbo );
ibo->draw();
Screen::get()->render();
lastDrawTicks = ticks;
}
EDIT: Using glUniform1fv instead of glUniform1f works:
float fvalue = 500.0f;
glUniform1fv( location, 1, &fvalue );
I've been reading the 'OpenGL 4.0 Shading Language Cookbook', but I've run into a wall with the cubemap tutorial.
The issue is that the model I'm drawing appears completely grey, as if it's not getting any data from the samplerCube texture.
All my code seems to be correct. I've looked at other tutorials, and it's the same thing.
I don't know if my Intel HD Graphics 4000 is responsible, but I have made certain that I have the GL_ARB_texture_cube_map extension.
I'm using the DevIL library for loading images from file, which seems to work fine, but from what I can tell something goes wrong when transferring the data to OpenGL.
I'm posting the loading code where I get the data from the files. All files load correctly as well.
I'm also posting the drawing code, where I bind the texture to the pipeline.
And I'm also posting my vertex and fragment shaders just in case, but they do appear to be working as they should.
Any ideas?
Loading code
uint TARGETS[6] =
{
GL_TEXTURE_CUBE_MAP_POSITIVE_X,
GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
GL_TEXTURE_CUBE_MAP_POSITIVE_Y,
GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
GL_TEXTURE_CUBE_MAP_POSITIVE_Z,
GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
};
string EXTS[6] =
{
"posx",
"negx",
"posy",
"negy",
"posz",
"negz"
};
// Create & bind cubemap texture
glGenTextures( 1, &cubemap );
glBindTexture( GL_TEXTURE_CUBE_MAP, cubemap );
for( int i = 0; i < 6; i++ )
{
string file = "textures/cubemap_" + EXTS[i] + ".png";
uint image = ilGenImage();
// Load with DevIL
ilBindImage( image );
if( !ilLoadImage( file.c_str() ) )
{
cout << "ERROR: Failed to load image " << endl;
return false;
}
// Fetch info from DevIL
int width = ilGetInteger( IL_IMAGE_WIDTH );
int height = ilGetInteger( IL_IMAGE_HEIGHT );
uint format = ilGetInteger( IL_IMAGE_FORMAT );
uint type = ilGetInteger( IL_IMAGE_TYPE );
// Send data to OpenGL
glTexImage2D(
TARGETS[i],
0,
GL_RGBA,
width,
height,
0,
format,
type,
ilGetData() );
// Error check
if( !ErrorCheck("Failed to bind a side of the cubemap!") )
return false;
// Get rid of DevIL data
ilDeleteImage( image );
}
// Parameters
glTexParameterf( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameterf( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameterf( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameterf( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameterf( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE );
Draw code
// Update
glfwPollEvents();
UpdateTime();
// Clear back buffer for new frame
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
// Bind shader
shader->Bind();
// Cubemap
shader->SetUniform( "cubemapTexture", 0 );
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_CUBE_MAP, cubemap );
// Bind model
if( model->Bind() )
{
static float angle = 0;
angle += 25.0f * deltaTime;
// Matrices
mat4 world =
translate( vec3( 0.0f, 0.0f, 0.0f) ) *
rotateZ( angle * PI / 180 ) *
rotateX( angle * PI / 180 ) *
scale( vec3( 1.0f, 1.0f, 1.0f) );
mat4 view = ViewMatrix(
cameraPosition,
cameraTarget,
vec3( 0.0f, 0.0f, 1.0f) );
mat4 proj = ProjectionMatrix(
fov,
(float)windowX,
(float)windowY,
nearPlane,
farPlane );
// Uniforms
shader->SetUniform( "uWorld", world );
shader->SetUniform( "uView", view );
shader->SetUniform( "uProj", proj );
shader->SetUniform( "materialColor", vec3( 0.5f, 0.5f, 0.5f ) );
shader->SetUniform( "drawSkybox", false );
shader->SetUniform( "world_cameraPosition", cameraPosition );
shader->SetUniform( "reflectFactor", 0.5f );
// Draw
glDrawElements( GL_TRIANGLES, model->GetIndexCount(), GL_UNSIGNED_SHORT, NULL );
}
// Put the new image on the screen
glfwSwapBuffers( window );
Vertex Shader
#version 400
layout(location=0) in vec3 vertex_position;
layout(location=1) in vec3 vertex_normal;
layout(location=2) in vec4 vertex_tangent;
layout(location=3) in vec2 vertex_texCoords;
out vec2 texCoords;
out vec3 reflectDir;
uniform mat4 uWorld;
uniform mat4 uView;
uniform mat4 uProj;
uniform bool drawSkybox;
uniform vec3 world_cameraPosition;
void main()
{
if( drawSkybox )
{
reflectDir = vertex_position;
}
else
{
vec3 world_pos = vec3( uWorld * vec4(vertex_position,1.0) );
vec3 world_norm = vec3( uWorld * vec4(vertex_normal,0.0) );
vec3 world_view = normalize( world_cameraPosition - world_pos );
reflectDir = reflect( -world_view, world_norm );
}
gl_Position = uProj * uView * uWorld * vec4(vertex_position,1.0);
texCoords = vertex_texCoords;
}
Fragment shader
#version 400
out vec4 fragColor;
in vec2 texCoords;
in vec3 reflectDir;
uniform samplerCube cubemapTexture;
uniform vec3 materialColor;
uniform bool drawSkybox;
uniform float reflectFactor;
void main()
{
vec3 color = texture( cubemapTexture, reflectDir ).rgb;
if( drawSkybox )
{
fragColor = vec4( color, 1.0 );
}
else
{
fragColor = vec4( mix( materialColor, color, reflectFactor ), 1.0 );
}
}
Your cube map texture is not cube complete. All six faces need to be specified for a cube map texture to be complete. From the spec:
Additionally, a cube map texture is cube complete if the following conditions all hold true: [..] The level_base arrays of each of the six texture images making up the cube map have identical, positive, and square dimensions.
Your code does not specify an image for NEGATIVE_X:
uint TARGETS[6] =
{
GL_TEXTURE_CUBE_MAP_POSITIVE_X,
GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
GL_TEXTURE_CUBE_MAP_POSITIVE_Y,
GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
GL_TEXTURE_CUBE_MAP_POSITIVE_Z,
GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
};
With this table, the image for NEGATIVE_Y is specified twice, while NEGATIVE_X is missing. It should be:
uint TARGETS[6] =
{
GL_TEXTURE_CUBE_MAP_POSITIVE_X,
GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
GL_TEXTURE_CUBE_MAP_POSITIVE_Y,
GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
GL_TEXTURE_CUBE_MAP_POSITIVE_Z,
GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
};
Instead of enumerating the six targets, you can also use GL_TEXTURE_CUBE_MAP_POSITIVE_X + i with i in the range 0..5 to address them.
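The +i trick relies on the spec assigning the six face targets consecutive enum values. A self-contained sketch (the constant is hard-coded from the OpenGL registry purely so the snippet compiles without GL headers; `cubeFaceTarget` is a hypothetical helper):

```cpp
// 0x8515 is GL_TEXTURE_CUBE_MAP_POSITIVE_X in the OpenGL registry
// (normally pulled in via <GL/gl.h>); duplicated here only so this
// sketch builds without a GL context.
constexpr unsigned kCubeMapPositiveX = 0x8515;

// Face i in 0..5 maps to +X, -X, +Y, -Y, +Z, -Z, in that order,
// because the six targets occupy consecutive enum values.
constexpr unsigned cubeFaceTarget(unsigned i) {
    return kCubeMapPositiveX + i;
}
```

In the loading loop, `glTexImage2D(cubeFaceTarget(i), ...)` then replaces the hand-written TARGETS table, which removes the possibility of the duplicate-entry mistake above.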
I have been trying to display a mesh wireframe and pass each edge its own color as a vertex attribute array. For that, I store two vertices in the vertex buffer for each edge of the mesh. The edges display correctly. The problem is with attribute locations 2, 3, and 4 in the shader: it seems I am not receiving the values I loaded into the VBO. When I reorder the vertex attributes, say by putting the 'eColor' attribute at location 0, I start getting the correct value in the fragment shader. I want to know why this is happening. I have run the same code in different environments, Windows/Mac, with GLFW/Qt, and I face the same issue everywhere. Can anyone please point out my mistake?
Following is the code for vertex attribute binding:
GL_CHECK( glBindVertexArray( mVAO ) );
GL_CHECK( glUseProgram( mDebugProgram ) );
GL_CHECK( glGenBuffers( 2 , mVBO ) );
GL_CHECK( glBindBuffer( GL_ARRAY_BUFFER , mVBO[ 0 ] ) );
GL_CHECK( glBufferData( GL_ARRAY_BUFFER , mDebugVertexData.size() * sizeof( DebugVertexData ) , mDebugVertexData.data() , GL_DYNAMIC_DRAW ) );
GL_CHECK( glBindBuffer( GL_ELEMENT_ARRAY_BUFFER , mVBO[ 1 ] ) );
GL_CHECK( glBufferData( GL_ELEMENT_ARRAY_BUFFER , mDebugWFIndices.size() * sizeof( GLuint ) , mDebugWFIndices.data() , GL_DYNAMIC_DRAW ) );
int offset = 0;
GL_CHECK( glVertexAttribPointer( 0 , 3, GL_FLOAT, GL_FALSE, sizeof( DebugVertexData ) , 0) );
offset = 3 * sizeof( GLfloat );
GL_CHECK( glVertexAttribPointer( 1 , 3 , GL_FLOAT, GL_FALSE, sizeof ( DebugVertexData ) , ( float * )offset ) );
offset = 3 * sizeof( GLfloat );
GL_CHECK( glVertexAttribPointer( 2 , 3 , GL_FLOAT, GL_FALSE, sizeof ( DebugVertexData ) , ( float * )offset ) );
offset = 3 * sizeof( GLfloat );
GL_CHECK( glVertexAttribPointer( 3 , 1 , GL_FLOAT, GL_FALSE, sizeof ( DebugVertexData ) , ( float * )offset ) );
offset = sizeof( GLfloat );
GL_CHECK( glVertexAttribPointer( 4 , 1 , GL_FLOAT, GL_FALSE, sizeof ( DebugVertexData ) , ( float * )offset ) );
GL_CHECK( glEnableVertexAttribArray(0) );
GL_CHECK( glEnableVertexAttribArray(1) );
GL_CHECK( glEnableVertexAttribArray(2) );
GL_CHECK( glEnableVertexAttribArray(3) );
GL_CHECK( glEnableVertexAttribArray(4) );
GL_CHECK( glBindBuffer( GL_ARRAY_BUFFER , 0 ) );
GL_CHECK( glBindBuffer( GL_ELEMENT_ARRAY_BUFFER , 0 ) );
GL_CHECK( glUseProgram( 0) );
GL_CHECK( glBindVertexArray(0) );
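For comparison, with an interleaved layout the byte offsets normally accumulate: each attribute starts where the previous one ended. A sketch for a hypothetical tightly packed DebugVertexData matching the five attributes above (the real struct isn't shown in the question):

```cpp
#include <cstddef> // offsetof

// Hypothetical tightly packed layout matching the five attributes:
struct DebugVertexData {
    float position[3];   // location 0, byte offset 0
    float normal[3];     // location 1, byte offset 12
    float color[3];      // location 2, byte offset 24
    float underRegion;   // location 3, byte offset 36
    float isSplittable;  // location 4, byte offset 40
};
// Note the offsets accumulate past all preceding fields, and the
// stride is sizeof(DebugVertexData) = 44 bytes. The code above instead
// reassigns `offset = 3 * sizeof(GLfloat)` before each pointer call,
// so locations 1 and 2 end up with the same offset.
```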
Following is my vertex shader:
#version 410
layout ( location=0 ) in vec3 position;
layout ( location=1 ) in vec3 normal;
layout ( location=2 ) in vec3 eColor;
layout ( location=3 ) in float underRegion;
layout ( location=4 ) in float isSplittable;
out vec4 vPosition;
out vec3 vNormal;
out vec3 vColor;
out float flag1;
out float flag2;
void main()
{
vPosition = vec4( position , 1 );
vNormal = normal;
flag1 = underRegion;
flag2 = isSplittable;
vColor = eColor;
// gl_Position = mvpMatrix * vec4( position , 1 );
}
This is geometry shader:
#version 410
layout( lines ) in;
layout( line_strip , max_vertices = 2 ) out;
uniform mat4 mvMatrix;
uniform mat4 mvpMatrix;
in vec4[ 2 ] vPosition;
in vec3[ 2 ] vNormal;
in vec3[ 2 ] vColor;
in float[ 2 ] flag1;
in float[ 2 ] flag2;
// Output to the fragment shader
out float isEdgeSplittable;
out vec3 edgeColor;
void main()
{
float l = length( vPosition[ 0 ].xyz - vPosition[ 1 ].xyz );
vec4 v1 = vPosition[ 0 ];
vec4 v2 = vPosition[ 1 ];
v1.xyz += vNormal[ 0 ] * l * 0.001;
v2.xyz += vNormal[ 1 ] * l * 0.001;
v1 = mvpMatrix * v1;
v2 = mvpMatrix * v2;
edgeColor = vColor[ 0 ];
gl_Position = v1;
if( flag1[ 0 ] > 0.5 )
{
isEdgeSplittable = 1.0;
}
else
{
isEdgeSplittable = 0.0;
}
EmitVertex();
gl_Position = v2;
if( flag1[ 0 ] > 0.5 )
{
isEdgeSplittable = 1.0;
}
else
{
isEdgeSplittable = 0.0;
}
edgeColor = vColor[ 1 ];
EmitVertex();
EndPrimitive();
}
Following is fragment shader:
#version 410
layout (location = 0) out vec4 color;
in float isEdgeSplittable;
in vec3 edgeColor;
void main()
{
color.xyz = edgeColor;//vec3(0 , 0 , 1) ;//eColor2;
//color.w = 1.0;
if( isEdgeSplittable > 0.5 )
{
color.xyz = vec3( 0 , 0 , 1 );
}
}
I'm having a bit of an odd problem. I'm trying to render some data with OpenGL on my Windows system. I found a set of tutorials at opengl-tutorial.org which were written for OpenGL 3.3. As my laptop (where I do a great deal of developing) only supports OpenGL 2.1, I proceeded to download the OpenGL 2.1 port of the tutorial. I messed around with it a bit, adding features and refactoring it for scalability, but noticed something odd. Whenever I rendered my data with Vertex Buffer Objects, I got a rather incorrect representation of my data. This is shown below.
http://www.majhost.com/gallery/DagonEcelstraun/Others/HelpNeeded/badrender.png
However, when I specify my data using glVertex3fv and such, I get a much nicer result, again shown below.
http://www.majhost.com/gallery/DagonEcelstraun/Others/HelpNeeded/goodrender.png
The problem occurs both on my Windows 8.1 laptop with Intel i3 integrated graphics and on my Windows 7 desktop with its nVidia GTX 660, so it's not a hardware problem. Does anyone know what may be the issue here?
Loading mesh data:
const aiScene *scene = aiImportFile( sName.c_str(),
aiProcessPreset_TargetRealtime_MaxQuality | aiProcess_FlipUVs );
const aiMesh *mesh = scene->mMeshes[0];
for( int i = 0; i < mesh->mNumVertices; i++ ) {
meshData.push_back( mesh->mVertices[i][0] );
meshData.push_back( mesh->mVertices[i][1] );
meshData.push_back( mesh->mVertices[i][2] );
meshData.push_back( mesh->mNormals[i][0] );
meshData.push_back( mesh->mNormals[i][1] );
meshData.push_back( mesh->mNormals[i][2] );
meshData.push_back( mesh->mTextureCoords[0][i][0] );
meshData.push_back( mesh->mTextureCoords[0][i][1] );
meshData.push_back( 0 );
meshData.push_back( mesh->mTangents[i][0] );
meshData.push_back( mesh->mTangents[i][1] );
meshData.push_back( mesh->mTangents[i][2] );
}
for( int i = 0; i < mesh->mNumFaces; i++ ) {
for( int j = 0; j < 3; j++ ) {
indices.push_back( mesh->mFaces[i].mIndices[j] );
}
}
Sending data to the graphics card for the first time (called right after the previous code):
glGenBuffers( 1, &glVertData );
glBindBuffer( GL_ARRAY_BUFFER, glVertData );
glBufferData( GL_ARRAY_BUFFER, meshData.size() * sizeof( GLfloat ), &meshData[0], GL_STATIC_DRAW );
// Generate a buffer for the indices as well
glGenBuffers( 1, &glIndexes );
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, glIndexes );
glBufferData( GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned short), &indices[0], GL_STATIC_DRAW );
Rendering the mesh:
//Tell the shader to use our data
//bindVerts, bindUvs, bindNorms, and bindTangents refer to attribute variables in my shader
//vertexPosition_modelspace, vertexUV, vertexNormal_modelspace, and vertexTangent_modelspace, respectively.
this->verts = bindVerts;
this->uvs = bindUvs;
this->norms = bindNorms;
this->tangents = bindTangents;
glEnableVertexAttribArray( verts );
glEnableVertexAttribArray( uvs );
glEnableVertexAttribArray( norms );
glEnableVertexAttribArray( tangents );
//Specify how the graphics card should decode our data
// 1st attribute buffer: vertices
glBindBuffer( GL_ARRAY_BUFFER, glVertData );
glVertexAttribPointer( verts, 3, GL_FLOAT, GL_FALSE, 12, (void*) 0 );
// 2nd attribute buffer : normals
glVertexAttribPointer( norms, 3, GL_FLOAT, GL_FALSE, 12, (void*) 3 );
//3rd attribute buffer : UVs
glVertexAttribPointer( uvs, 3, GL_FLOAT, GL_FALSE, 12, (void*) 6 );
//4th attribute buffer: tangents
glVertexAttribPointer( tangents, 3, GL_FLOAT, GL_FALSE, 12, (void*) 9 );
// Index buffer
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, glIndexes );
//rendering the mesh with VBOs:
glDrawElements( GL_LINES, indices.size(), GL_UNSIGNED_SHORT, (void*) 0 );
//specifying the vertex data individually:
glBegin( GL_TRIANGLES );
int ind;
for( int i = 0; i < indices.size(); i++ ) {
ind = indices[i] * 12;
glNormal3fv( &meshData[ind + 3] );
glTexCoord2fv( &meshData[ind + 6] );
glVertex3fv( &meshData[ind] );
}
glEnd();
//clean up after the render
glDisableVertexAttribArray( verts );
glDisableVertexAttribArray( uvs );
glDisableVertexAttribArray( norms );
glDisableVertexAttribArray( tangents );
My vertex shader:
#version 130
// Input vertex data, different for all executions of this shader.
//it doesn't work, so we'll just get rid of it
attribute vec3 vertexPosition_modelspace;
attribute vec3 vertexUV;
attribute vec3 vertexNormal_modelspace;
attribute vec3 vertexTangent_modelspace;
// Output data ; will be interpolated for each fragment.
out vec2 UV;
out vec3 Position_worldspace;
out vec3 Normal_cameraspace;
out vec3 EyeDirection_cameraspace;
out vec3 LightDirection_cameraspace;
out vec4 ShadowCoord;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
uniform mat4 V;
uniform mat4 M;
uniform vec3 LightInvDirection_worldspace;
uniform mat4 DepthBiasMVP;
uniform sampler2D normalMap;
attribute vec3 vTangent;
void main() {
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP * vec4( vertexPosition_modelspace, 1 );
ShadowCoord = DepthBiasMVP * vec4( vertexPosition_modelspace, 0 );
// Position of the vertex, in worldspace : M * position
Position_worldspace = ( M * vec4( vertexPosition_modelspace, 0 ) ).xyz;
// Vector that goes from the vertex to the camera, in camera space.
// In camera space, the camera is at the origin (0,0,0).
EyeDirection_cameraspace = vec3( 0, 0, 0 ) - ( V * M * vec4( vertexPosition_modelspace, 0 ) ).xyz;
// Vector that goes from the vertex to the light, in camera space
LightDirection_cameraspace = ( V * vec4( LightInvDirection_worldspace, 0 ) ).xyz;
// UV of the vertex. No special space for this one.
UV = vertexUV.st;
// Normal of the the vertex, in camera space
// Only correct if ModelMatrix does not scale the model ! Use its inverse transpose if not.
Normal_cameraspace = ( V * M * vec4( vertexNormal_modelspace.xyz, 0 ) ).xyz;
}
Fragment shader:
#version 130
// Interpolated values from the vertex shaders
in vec2 UV;
in vec3 Position_worldspace;
in vec3 Normal_cameraspace;
in vec3 EyeDirection_cameraspace;
in vec3 LightDirection_cameraspace;
in vec4 ShadowCoord;
out vec4 fragColor;
// Values that stay constant for the whole mesh.
uniform sampler2D diffuse;
uniform mat4 MV;
uniform vec3 LightPosition_worldspace;
uniform sampler2D shadowMap;
//uniform int shadowLevel; //0 is no shadow, 1 is hard shadows, 2 is soft shadows, 3 is PCSS
// Returns a random number based on a vec3 and an int.
float random( vec3 seed, int i ) {
vec4 seed4 = vec4( seed, i );
float dot_product = dot( seed4, vec4( 12.9898, 78.233, 45.164, 94.673 ) );
return fract( sin( dot_product ) * 43758.5453 );
}
int mod( int a, int b ) {
return a - (a / b);
}
void main() {
int shadowLevel = 1; //let's just do hard shadows
// Light emission properties
vec3 LightColor = vec3( 1, 1, 1 );
float LightPower = 1.0f;
// Material properties
vec3 MaterialDiffuseColor = texture( diffuse, UV ).rgb;
vec3 MaterialAmbientColor = vec3( 0.1, 0.1, 0.1 ) * MaterialDiffuseColor;
vec3 MaterialSpecularColor = vec3( 0.3, 0.3, 0.3 );
vec3 n = normalize( Normal_cameraspace );
vec3 l = normalize( LightDirection_cameraspace );
float cosTheta = clamp( dot( n, l ), 0.2, 1 );
// Eye vector (towards the camera)
vec3 E = normalize( EyeDirection_cameraspace );
// Direction in which the triangle reflects the light
vec3 R = reflect( -l, n );
// Cosine of the angle between the Eye vector and the Reflect vector,
// clamped to 0
// - Looking into the reflection -> 1
// - Looking elsewhere -> < 1
float cosAlpha = clamp( dot( E, R ), 0, 1 );
float visibility = 1.0;
//variable bias
float bias = 0.005 * tan( acos( cosTheta ) );
bias = clamp( bias, 0, 0.01 );
// dFragment to the light
float dFragment = ( ShadowCoord.z-bias ) / ShadowCoord.w;
float dBlocker = 0;
float penumbra = 1;
float wLight = 5.0;
if( shadowLevel == 3 ) {
// Sample the shadow map 8 times
float count = 0;
float temp;
float centerBlocker = texture( shadowMap, ShadowCoord.xy).r;
float scale = (wLight * (dFragment - centerBlocker)) / dFragment;
for( int i = 0; i < 16; i++ ) {
temp = texture( shadowMap, ShadowCoord.xy + (scale * poissonDisk( i ) / 50.0) ).r;
if( temp < dFragment ) {
dBlocker += temp;
count += 1;
}
}
if( count > 0 ) {
dBlocker /= count;
penumbra = wLight * (dFragment - dBlocker) / dFragment;
}
}
if( shadowLevel == 1 ) {
if( texture( shadowMap, ShadowCoord.xy).r < dFragment ) {
visibility -= 0.8;
}
} else if( shadowLevel > 1 ) {
float iterations = 32;
float sub = 0.8f / iterations;
for( int i = 0; i < iterations; i++ ) {
int index = mod( int( 32.0 * random( gl_FragCoord.xyy, i ) ), 32 );
if( texture( shadowMap, ShadowCoord.xy + (penumbra * poissonDisk( index ) / 250.0) ).r < dFragment ) {
visibility -= sub;
}
}
}
visibility = min( visibility, cosTheta );
//MaterialDiffuseColor = vec3( 0.8, 0.8, 0.8 );
fragColor.rgb = MaterialAmbientColor +
visibility * MaterialDiffuseColor * LightColor * LightPower +
visibility * MaterialSpecularColor * LightColor * LightPower * pow( cosAlpha, 5 );
}
Note that poissonDisk( int ind ) returns a vec2 with a magnitude of no more than 1, following a Poisson-disk distribution. Even though I'm using shader version 130, I used a function and not an array because the array runs rather slowly on my laptop.
I do bind that shader before I do any rendering. I also make sure to upload the correct values to all of my uniforms, but I didn't show that, to save space; I know it's working correctly.
Does anyone know what's causing this incorrect render?
Well, first of all, stop drawing the VBO using GL_LINES. Use the same primitive mode for immediate-mode and VBO drawing.
Also, since when is 3 * 4 = 3? When using an interleaved data structure, the address (offset) in your VBO vertex pointers should be the number of elements multiplied by the size of the data type. GL_FLOAT is 4 bytes, so with a 3-component vertex position the offset to the next field in your VBO is 3 * 4 = (void *)12, not (void *)3. The same applies to each additional vertex array pointer; they all use incorrect offsets.
Likewise, the stride of your VBO should be 12 * sizeof(GLfloat) = 48, not 12.
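To make the arithmetic concrete: the layout in the question packs 12 floats per vertex (position 3, normal 3, UV 3 with the third component as padding, tangent 3), so the stride and byte offsets work out as follows. A sketch (`byteOffset` is a hypothetical helper; the GL calls are shown as comments since they need a live context):

```cpp
#include <cstddef>

// 12 floats per vertex: position(3), normal(3), UV(3), tangent(3).
constexpr std::size_t kFloatsPerVertex = 12;
constexpr std::size_t kStride = kFloatsPerVertex * sizeof(float); // 48 bytes

// Byte offset of an attribute whose first component is float number `firstFloat`.
constexpr std::size_t byteOffset(std::size_t firstFloat) {
    return firstFloat * sizeof(float);
}

// The corrected pointers would then read (attribute locations as in the question):
//   glVertexAttribPointer(verts,    3, GL_FLOAT, GL_FALSE, kStride, (void*)byteOffset(0)); // 0
//   glVertexAttribPointer(norms,    3, GL_FLOAT, GL_FALSE, kStride, (void*)byteOffset(3)); // 12
//   glVertexAttribPointer(uvs,      3, GL_FLOAT, GL_FALSE, kStride, (void*)byteOffset(6)); // 24
//   glVertexAttribPointer(tangents, 3, GL_FLOAT, GL_FALSE, kStride, (void*)byteOffset(9)); // 36
```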