OpenGL Animation seems to be missing bone data once in a while - c++

Alright, so I've got basic models loading and rendering in an OpenGL engine. I had animations working for a single model. However, when I tried adding multiple animated models to a scene, I got a bunch of weird behaviour - the last model animated incorrectly.
In trying to isolate the issue, I believe I've happened upon something that might be related: when rendering a model, if I 'zero out' the bone data in OpenGL (that is, send in a bunch of identity matrices) and THEN send the actual bone data, I get weird 'stuttering' in a model's animation. It looks like there is a gap in the animation, where the model suddenly snaps back to its neutral position, then quickly returns to the animation on the next frame.
I'm using Debian 7 64bit with the proprietary NVidia graphics drivers installed (GeForce GTX 560M with 3GB VRAM).
I have a video of this happening here: http://jarrettchisholm.com/static/videos/wolf_model_animation_problem_1.ogv
It's a bit hard to see in the video (it doesn't catch all of the frames I guess). You can see it more clearly when the wolf is on its side. This happens throughout the animation.
My model render code:
for ( glm::detail::uint32 i = 0; i < meshes_.size(); i++ )
{
    if ( textures_[i] != nullptr )
    {
        // TODO: bind to an actual texture position (for multiple textures per mesh, which we currently don't support...maybe at some point we will??? Why would we need multiple textures?)
        textures_[i]->bind();
        //shader->bindVariable( "Texture", textures_[i]->getBindPoint() );
    }

    if ( materials_[i] != nullptr )
    {
        materials_[i]->bind();
        shader->bindVariable( "Material", materials_[i]->getBindPoint() );
    }

    if ( currentAnimation_ != nullptr )
    {
        // This is when I send the Identity matrices to the shader
        emptyAnimation_->bind();
        shader->bindVariable( "Bones", emptyAnimation_->getBindPoint() );

        glw::Animation* a = currentAnimation_->getAnimation();
        a->setAnimationTime( currentAnimation_->getAnimationTime() );

        // This generates the new bone matrices
        a->generateBoneTransforms(globalInverseTransformation_, rootBoneNode_, meshes_[i]->getBoneData());

        // This sends the new bone matrices to the shader,
        // and also binds the buffer
        a->bind();

        // This sets the bind point to the Bone uniform matrix in the shader
        shader->bindVariable( "Bones", a->getBindPoint() );
    }
    else
    {
        // Zero out the animation data
        // TODO: Do we need to do this?
        // TODO: find a better way to load 'empty' bone data in the shader
        emptyAnimation_->bind();
        shader->bindVariable( "Bones", emptyAnimation_->getBindPoint() );
    }

    meshes_[i]->render();
}
The shader binding code:
void GlslShaderProgram::bindVariable(std::string varName, GLuint bindPoint)
{
    GLuint uniformBlockIndex = glGetUniformBlockIndex(programId_, varName.c_str());
    glUniformBlockBinding(programId_, uniformBlockIndex, bindPoint);
}
Animation code:
...

// This gets called when we create an Animation object
void Animation::setupAnimationUbo()
{
    bufferId_ = openGlDevice_->createBufferObject(GL_UNIFORM_BUFFER, 100 * sizeof(glm::mat4), &currentTransforms_[0]);
}

void Animation::loadIntoVideoMemory()
{
    glBindBuffer(GL_UNIFORM_BUFFER, bufferId_);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, currentTransforms_.size() * sizeof(glm::mat4), &currentTransforms_[0]);
}

/**
 * Will stream the latest transformation matrices into opengl memory, and will then bind the data to a bind point.
 */
void Animation::bind()
{
    loadIntoVideoMemory();
    bindPoint_ = openGlDevice_->bindBuffer( bufferId_ );
}

...
My OpenGL Wrapper code:
...

GLuint OpenGlDevice::createBufferObject(GLenum target, glmd::uint32 totalSize, const void* dataPointer)
{
    GLuint bufferId = 0;
    glGenBuffers(1, &bufferId);
    glBindBuffer(target, bufferId);

    glBufferData(target, totalSize, dataPointer, GL_DYNAMIC_DRAW);
    glBindBuffer(target, 0);

    bufferIds_.push_back(bufferId);

    return bufferId;
}

...

GLuint OpenGlDevice::bindBuffer(GLuint bufferId)
{
    // TODO: Do I need a better algorithm here?
    GLuint bindPoint = bindPoints_[currentBindPoint_];
    currentBindPoint_++;

    if ( currentBindPoint_ > bindPoints_.size() )
        currentBindPoint_ = 1;

    glBindBufferBase(GL_UNIFORM_BUFFER, bindPoint, bufferId);

    return bindPoint;
}

...
My Vertex shader:
#version 150 core

uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
uniform mat4 pvmMatrix;
uniform mat3 normalMatrix;

in vec3 in_Position;
in vec2 in_Texture;
in vec3 in_Normal;
in ivec4 in_BoneIds;
in vec4 in_BoneWeights;

out vec2 textureCoord;
out vec3 normalDirection;
out vec3 lightDirection;

struct Light {
    vec4 ambient;
    vec4 diffuse;
    vec4 specular;
    vec4 position;
    vec4 direction;
};

layout(std140) uniform Lights
{
    Light lights[ 2 ];
};

layout(std140) uniform Bones
{
    mat4 bones[ 100 ];
};

void main() {
    // Calculate the transformation on the vertex position based on the bone weightings
    mat4 boneTransform = bones[ in_BoneIds[0] ] * in_BoneWeights[0];
    boneTransform += bones[ in_BoneIds[1] ] * in_BoneWeights[1];
    boneTransform += bones[ in_BoneIds[2] ] * in_BoneWeights[2];
    boneTransform += bones[ in_BoneIds[3] ] * in_BoneWeights[3];

    vec4 tempPosition = boneTransform * vec4(in_Position, 1.0);
    gl_Position = pvmMatrix * tempPosition;

    vec4 lightDirTemp = viewMatrix * lights[0].direction;

    textureCoord = in_Texture;
    normalDirection = normalize(normalMatrix * in_Normal);
    lightDirection = normalize(vec3(lightDirTemp));
}
I apologize if I haven't included enough information - I put in what I thought would be useful. If you want/need to see more, you can get all of the code at https://github.com/jarrettchisholm/glr under the master_animation_work branch.

This isn't really OpenGL-specific.
When exporters export a model, some of them export the "skin parade" pose, i.e. the pose in which the "bone modifier" was initially applied.
In your case, it is probably one of these:
Either your exporter exported this "skin parade" pose as the very first frame (and the animation loops over it),
or your animation framework can't loop properly - it can't find the next frame when it is on the last animation key, and falls back to the "skin parade" pose as the default key.
The problem is probably in the routine that calculates the transforms for the animations.
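For example, a common cause of the second case is the animation time never wrapping back into the clip's duration. A minimal sketch of that wrapping, with purely illustrative names (the duration and tick rate parameters are not taken from the question's code):
#include <cmath>

// Hypothetical helper: wraps an absolute time into the clip's duration so the last key
// interpolates back toward the first instead of falling through to the bind pose.
double wrapAnimationTime(double timeInSeconds, double ticksPerSecond, double durationInTicks)
{
    double timeInTicks = timeInSeconds * ticksPerSecond;
    return std::fmod(timeInTicks, durationInTicks); // always in [0, durationInTicks)
}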
Here's how you debug it.
Render a debug bone hierarchy (using the simplest shader possible, or even fixed-function OpenGL). The debug bone hierarchy could look like this:
In the picture, the orange lines show the current positions of the animation bones. The floating coordinate systems (the ones that are not connected) show default locations. The triangle and square are debug geometry for other purposes and are not related to the animation system.
Visually check whether the bone hierarchy moves correctly.
If this "default frame" appears in the debug hierarchy (i.e. the bones themselves snap to the "skin parade" pose once in a while), it is either an animation framework problem (purely mathematical, nothing to do with OpenGL itself) or an exporter problem (an extra frame).
If it does not appear there (i.e. the bones move around properly BUT the geometry stays in the skin parade pose), it is a shader problem.
The debug skeleton should be rendered without any bone weights: just calculate the world-space positions of the bones and connect them with simple lines, using the simplest shader possible or fixed-function rendering.
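A minimal CPU-side sketch of building that debug line geometry, assuming you can walk a bone node tree and read each bone's current world-space transform (the BoneNode structure and its field names are hypothetical, not taken from the question's engine):
#include <vector>
#include <glm/glm.hpp>

// Hypothetical bone node; the real engine's structure will differ.
struct BoneNode
{
    glm::mat4 worldTransform;        // bone's current world-space transform
    std::vector<BoneNode> children;
};

// Appends one line segment (parent joint -> child joint) per bone to 'lines'.
// The resulting vertex array can be drawn with GL_LINES and a trivial shader.
void collectBoneLines(const BoneNode& node, std::vector<glm::vec3>& lines)
{
    glm::vec3 parentPos = glm::vec3(node.worldTransform[3]); // translation column
    for (const BoneNode& child : node.children)
    {
        glm::vec3 childPos = glm::vec3(child.worldTransform[3]);
        lines.push_back(parentPos);
        lines.push_back(childPos);
        collectBoneLines(child, lines);
    }
}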

Implementing clip planes with geometry shaders?

What I am using: Qt 5.11.1, MinGW 5.3, Windows 10, C++11, GPU: NVidia 820M (supports OpenGL 4.5)
My task: I have a non-solid (surface only) object, rendered with glDrawArrays, and I need to get a cross-section of this object by a plane. I have found the ancient OpenGL function glClipPlane, but it's not compatible with VAOs and VBOs. I've also found out that it's possible to replace glClipPlane with a geometry shader.
My questions/problems:
Do you know of other ways to accomplish this task?
I really don't understand how to add a geometry shader in Qt Creator; there is no "icon" for a geometry shader. I tried adding a vertex shader and renaming it to .gsh or just .glsl, and I tried using QOpenGLShaderProgram::addShaderFromSourceCode(QOpenGLShader::Geometry, QString &source) and writing the shader code in the program, but every time I get "QOpenGLShader: could not create shader" on the line that adds the geometry shader.
(Screenshot: how the shader is added to the program.)
Vertex shader:
layout (triangles) in;
layout (triangles) out;
layout (max_vertices = 3) out;

void main()
{
    int i;
    for (i = 0; i < gl_in.length(); i++)
    {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
Geometry shader:
layout (triangles) in;
layout (triangles) out;
layout (max_vertices = 3) out;

void main()
{
    int i;
    for (i = 0; i < gl_in.length(); i++)
    {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
Fragment shader:
precision mediump float;

uniform highp float u_lightPower;
uniform sampler2D u_texture;
uniform highp mat4 u_viewMatrix;

varying highp vec4 v_position;
varying highp vec2 v_texCoord;
varying highp vec3 v_normal;

void main(void)
{
    vec4 resultColor = vec4(0.25, 0.25, 0.25, 0.0);
    vec4 diffMatColor = texture2D(u_texture, v_texCoord);
    vec3 eyePosition = vec3(u_viewMatrix);
    vec3 eyeVect = normalize(v_position.xyz - eyePosition);
    float dist = length(v_position.xyz - eyePosition);
    vec3 reflectLight = normalize(reflect(eyeVect, v_normal));
    float specularFactor = 1.0;
    float ambientFactor = 0.05;

    vec4 diffColor = diffMatColor * u_lightPower * dot(v_normal, -eyeVect); // * (1.0 + 0.25 * dist * dist);
    resultColor += diffColor;

    gl_FragColor = resultColor;
}
Let's sort out a few misconceptions first:
I have found the ancient OpenGL function glClipPlane, but it's not compatible with VAOs and VBOs.
That is not correct. The user-defined clip planes via glClipPlane are indeed deprecated in modern GL and removed from core profiles. But if you use a context where they still exist, you can combine them with VAOs and VBOs without any issue.
I've also found out that it's possible to replace glClipPlane with a geometry shader.
You don't need a geometry shader for custom clip planes.
The modern way of doing user-defined clip planes is to calculate gl_ClipDistance for each vertex. While you can modify this value in a geometry shader, you can also generate it directly in the vertex shader. If you don't otherwise need a geometry shader, there is absolutely no reason to add one just for the clip planes.
I really don't understand how to add a geometry shader in Qt Creator; there is no "icon" for a geometry shader. I tried adding a vertex shader and renaming it to .gsh or just .glsl, and I tried using QOpenGLShaderProgram::addShaderFromSourceCode(QOpenGLShader::Geometry, QString &source) and writing the shader code in the program, but every time I get "QOpenGLShader: could not create shader" on the line that adds the geometry shader.
You first need to find out which OpenGL version you're actually using. With Qt, you can easily end up with an OpenGL ES 2.0 context (depending on how you create the context, and also on how your Qt was compiled). Your shader code is either desktop GL 2.x (GLSL 1.10/1.20) or GLES 2.0 (GLSL 1.00 ES), but it is not valid in modern core profiles of OpenGL.
GLES2 does not support geometry shaders at all. It also does not support gl_ClipDistance, so if you really have to use GLES2, you can try to emulate the clipping in the fragment shader. But the better option would be switching to a modern core-profile GL context.
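If you are stuck on GLES2, a minimal sketch of that fragment-shader fallback could look like this; the uniform and varying names (u_clipPlane, v_worldPos) are made up for illustration, and the plane is assumed to be given in world space:
// GLES2-style fragment shader (as a C++11 raw string) that emulates one clip plane by
// discarding fragments on the negative side of the plane; plane = vec4(normal, d).
static const char* kClipFragmentShaderSrc = R"(
    precision mediump float;
    uniform vec4 u_clipPlane;     // hypothetical uniform: plane normal (xyz) and offset (w)
    varying vec3 v_worldPos;      // world-space position passed from the vertex shader
    void main(void)
    {
        if (dot(v_worldPos, u_clipPlane.xyz) + u_clipPlane.w < 0.0)
            discard;              // fragment is behind the clip plane
        gl_FragColor = vec4(1.0); // placeholder shading
    }
)";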
While glClipPlane is deprecated in modern OpenGL, the concept of clipping planes is not.
In your CPU code, before you start drawing the geometry to be clipped, you must enable one of the clipping planes.
glEnable(GL_CLIP_DISTANCE0);
Once you have finished drawing, you would disable this in a similar way.
glDisable(GL_CLIP_DISTANCE0);
You are guaranteed to be able to enable a minimum of 8 clipping planes.
In your vertex or geometry shader, you must then tell OpenGL the signed distance of your vertex from the plane so that it knows what to clip. To be clear, you don't need a geometry shader for clipping, but it can be done there if you wish. The shader code would look something like the following:
// vertex in world space
vec4 vert_pos_world = world_matrix * vec4(vert_pos_model, 1.0);
// a horizontal plane at a specified height with normal pointing up
// could be a uniform or hardcoded
vec4 plane = vec4(0, 1, 0, clip_height_world);
// 0 index since that's the clipping plane we enabled
gl_ClipDistance[0] = dot(vert_pos_world, plane);
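Host-side, the usage could then look roughly like this. This is a sketch under the assumption that the plane is exposed as a uniform named "plane" rather than hardcoded, and that a program, a VAO and a vertex count already exist:
#include <GL/glew.h> // or any other GL loader / Qt's QOpenGLFunctions already used by the project

// Draws 'vertexCount' vertices from 'vao' with one user clip plane enabled.
// Assumes 'program' declares "uniform vec4 plane;" as in the shader snippet above.
void drawClipped(GLuint program, GLuint vao, GLsizei vertexCount, float clipHeightWorld)
{
    glUseProgram(program);
    glEnable(GL_CLIP_DISTANCE0);

    // Horizontal plane with the normal pointing up, matching the snippet above.
    GLfloat plane[4] = { 0.0f, 1.0f, 0.0f, clipHeightWorld };
    glUniform4fv(glGetUniformLocation(program, "plane"), 1, plane);

    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);

    glDisable(GL_CLIP_DISTANCE0);
}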

Why does the number of fragments vary significantly from one system to the other despite no changes in source code?

I have implemented a shader that counts how many fragments it generates.
I have noticed that, without changing the code, the number of counted fragments differs between machines.
It's consistent on one machine (always the same value), but distinctly different on different computers.
The monitors have the same resolution, but the graphics cards are different. My expectation is that if the geometry, shaders, C++ code, viewport dimensions and monitor are the same, the number of fragments should also be the same, but it seems I am wrong. Why would that be?
EDIT:
It was requested that I add a minimal example. I honestly don't think it's actually relevant to the question, since this is not behaviour specific to my code but a property of GPUs, but nonetheless:
Vertex shader:
#version 430

layout(location = 0) in vec3 position; // (x,y,z) coordinates of a vertex
layout(location = 1) in vec3 normal;   // normal to the vertex
layout(location = 2) in vec2 uv;       // texture coordinates

out vec3 v_pos;
out vec3 v_norm;
out vec2 v_uv;

uniform mat4 model_m = mat4(1); // model matrix
uniform mat4 view_m = mat4(1);  // view matrix
uniform mat4 proj_m = mat4(1);  // perspective projection matrix

void main()
{
    v_pos = vec3(model_m*vec4(position,1));
    v_norm = vec3(model_m*vec4(normal,1.0));
    v_uv = uv;
    gl_Position = proj_m*view_m*vec4(v_pos, 1.0);
}
Fragment shader:
#version 430

layout(location = 0) in vec3 position; // (x,y,z) coordinates of a vertex
out vec3 v_pos;

uniform mat4 model_m = mat4(1); // model matrix

void main()
{
    v_pos = vec3(model_m*vec4(position,1));
}
C++:
//Binding the buffer
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glObjectLabel(GL_BUFFER, ssbo, -1, "\"SSBO\"");
GLint zero = 0;
glBufferStorage(GL_SHADER_STORAGE_BUFFER, sizeof(GLint), &zero,
    GL_MAP_READ_BIT | GL_MAP_WRITE_BIT | GL_DYNAMIC_STORAGE_BIT);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);

GLuint *counter;

void render()
{
    glClearColor(0,0.5,0.5,0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(counter);
    mesh->draw();
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);

    GLint z2;
    glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(GLint), &z2);
    cout << "Fragments: " << z2 << endl;

    GLint zero = 0;
    glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(GLint), &zero);
}
OpenGL is not a pixel-accurate API. As such, implementations can implement rasterization in slightly different ways, or with different numeric precision, which generates different numbers of fragments.
Also, if you're rendering an actual scene rather than just a full-screen quad, there can be other effects. For example, let's say you have two triangles in a rendering command, and one of them is closer than the other. On some hardware, the closer triangle does its full read/modify/write pass on the depth buffer before the farther triangle gets rasterized at all. If early depth tests are on, then no fragment shader invocations are produced for the farther triangle.
But what if fragments from both triangles get processed at the same time? That can happen, and what causes it will depend on the hardware (and the distance between the triangles in the rendering command). For some pixels, the farther triangle will get some of its fragments counted as well as the nearer one.
This is also why it is important to turn on early fragment tests if you're using image load/store operations in tandem with depth tests.
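For reference, a counting fragment shader that forces early depth testing could look roughly like this. This is a generic sketch of the technique, not the questioner's actual shader; the SSBO binding index 0 is assumed to match the glBindBufferBase call above:
// Fragment shader (as a C++ string) that atomically counts fragments in an SSBO and
// requests early fragment tests so depth-rejected fragments are never counted.
static const char* kCountingFragmentShaderSrc = R"(
    #version 430
    layout(early_fragment_tests) in;  // run the depth test before the shader
    layout(std430, binding = 0) buffer FragCounter { int count; };
    out vec4 fragColor;
    void main()
    {
        atomicAdd(count, 1);          // one increment per shaded fragment
        fragColor = vec4(1.0);
    }
)";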

Simple curiosity about relation between texture mapping and shader program using Opengl/GLSL

I'm working on a small homemade 3D engine, and more precisely on rendering optimization. So far I have developed a sorting algorithm whose goal is to gather as much geometry (meshes) as possible into batches that share the same material properties and the same shader program. This way I minimize state changes (glBindXXX) and draw calls (glDrawXXX). So, if I have a scene composed of 10 boxes, all sharing the same texture and needing to be rendered with the same shader program (for example including ADS lighting), then all the vertices of these meshes will be merged into a single VBO, the texture will be bound just once, and only one draw call will be needed.
Scene description:
- 10 meshes (boxes) mapped with 'texture_1'
Pseudo-code (render):
shaderProgram_1->Bind()
{
    glActiveTexture(texture_1)
    DrawCall(render 10 meshes with 'texture_1')
}
But now I want to be sure of one thing. Let's assume our scene is still composed of the same 10 boxes, but this time 5 of them are mapped with a different texture (not multi-texturing, just simple texture mapping).
Scene description:
- 5 boxes with 'texture_1'
- 5 boxes with 'texture_2'
Pseudo-code (render):
shaderProgram_1->Bind()
{
    glActiveTexture(texture_1)
    DrawCall(render 5 meshes with 'texture_1')
}

shaderProgram_2->Bind()
{
    glActiveTexture(texture_2)
    DrawCall(render 5 meshes with 'texture_2')
}
And my fragment shader has a single sampler2D declaration (the goal of my shader program is to render geometry with simple texture mapping and ADS lighting):
uniform sampler2D ColorSampler;
I want to confirm that it's not possible to draw this scene with a single draw call (as was possible in my previous example, where one batch was enough). That was possible because I used the same texture for all of the geometry. I think this time I will need 2 batches, hence 2 draw calls, and of course for the rendering of each batch I will bind 'texture_1' or 'texture_2' before the corresponding draw call (one for the first 5 boxes and another one for the other 5).
To sum up, if all the meshes are mapped with a simple texture (simple texture mapping):
5 with a red texture (texture_red)
5 with a blue texture (texture_blue)
Is it possible to render the scene with a single draw call? I don't think so, because my pseudo-code would have to look like this:
Pseudo-code:
shaderProgram->Bind()
{
    glActiveTexture(texture_blue)
    glActiveTexture(texture_red)
    DrawCall(render 10 meshes)
}
I think it's impossible to differentiate the 2 textures when my fragment shader has to compute the pixel color using a single sampler2D uniform variable (simple texture mapping).
Here's my fragment shader:
#version 440

#define MAX_LIGHT_COUNT 1

/*
** Output color value.
*/
layout (location = 0) out vec4 FragColor;

/*
** Inputs.
*/
in vec3 Position;
in vec2 TexCoords;
in vec3 Normal;

/*
** Material uniforms.
*/
uniform MaterialBlock
{
    vec3 Ka, Kd, Ks;
    float Shininess;
} MaterialInfos;

uniform sampler2D ColorSampler;

struct Light
{
    vec4 Position;
    vec3 La, Ld, Ls;
    float Kc, Kl, Kq;
};

uniform struct Light LightInfos[MAX_LIGHT_COUNT];
uniform unsigned int LightCount;

/*
** Light attenuation factor.
*/
float getLightAttenuationFactor(vec3 lightDir, Light light)
{
    float lightAtt = 0.0f;
    float dist = 0.0f;

    dist = length(lightDir);
    lightAtt = 1.0f / (light.Kc + (light.Kl * dist) + (light.Kq * pow(dist, 2)));

    return (lightAtt);
}

/*
** Basic phong shading.
*/
vec3 Basic_Phong_Shading(vec3 normalDir, vec3 lightDir, vec3 viewDir, int idx)
{
    vec3 Specular = vec3(0.0f);
    float lambertTerm = max(dot(lightDir, normalDir), 0.0f);

    vec3 Ambient = LightInfos[idx].La * MaterialInfos.Ka;
    vec3 Diffuse = LightInfos[idx].Ld * MaterialInfos.Kd * lambertTerm;

    if (lambertTerm > 0.0f)
    {
        vec3 reflectDir = reflect(-lightDir, normalDir);
        Specular = LightInfos[idx].Ls * MaterialInfos.Ks * pow(max(dot(reflectDir, viewDir), 0.0f), MaterialInfos.Shininess);
    }

    return (Ambient + Diffuse + Specular);
}

/*
** Fragment shader entry point.
*/
void main(void)
{
    vec3 LightIntensity = vec3(0.0f);
    vec4 texDiffuseColor = texture2D(ColorSampler, TexCoords);
    vec3 normalDir = (gl_FrontFacing ? -Normal : Normal);

    for (int idx = 0; idx < LightCount; idx++)
    {
        vec3 lightDir = vec3(LightInfos[idx].Position) - Position.xyz;
        vec3 viewDir = -Position.xyz;
        float lightAttenuationFactor = getLightAttenuationFactor(lightDir, LightInfos[idx]);

        LightIntensity += Basic_Phong_Shading(
            -normalize(normalDir), normalize(lightDir), normalize(viewDir), idx
        ) * lightAttenuationFactor;
    }

    FragColor = vec4(LightIntensity, 1.0f) * texDiffuseColor;
}
Do you agree with me?
It's possible if you either: (i) treat it as a multitexturing problem, where the per-fragment function just picks between the two incoming texture samples (ideally using mix with a coefficient of 0.0 or 1.0, not genuine branching); or (ii) composite your two textures into one texture (subject to your ability to wrap and clamp texture coordinates efficiently, watching out for dependent reads, and to maximum texture size constraints).
It's an open question whether either of these would improve performance. Definitely go with (ii) if you can; a sketch of approach (i) follows below.
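A minimal sketch of approach (i), assuming a per-vertex selector value is passed through to the fragment shader; the names ColorSampler2 and TexSelector are made up for illustration and are not part of the questioner's shader:
// Fragment shader (as a C++ string): both textures are bound to different texture units,
// and the per-fragment result is picked with mix() rather than a branch.
static const char* kTwoTextureFragmentSrc = R"(
    #version 440
    uniform sampler2D ColorSampler;   // texture_1, e.g. bound to unit 0
    uniform sampler2D ColorSampler2;  // texture_2, e.g. bound to unit 1 (hypothetical)
    in vec2 TexCoords;
    in float TexSelector;             // hypothetical per-vertex value: 0.0 or 1.0
    layout (location = 0) out vec4 FragColor;
    void main()
    {
        vec4 c0 = texture(ColorSampler, TexCoords);
        vec4 c1 = texture(ColorSampler2, TexCoords);
        FragColor = mix(c0, c1, TexSelector);
    }
)";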

OpenGL mapping texture to sphere

I have an OpenGL program in which I want to texture a sphere with a bitmap of the Earth. I prepared the mesh in Blender and exported it to an OBJ file. The program loads the appropriate mesh data (vertices, UVs and normals) and the bitmap properly; I have checked this by texturing a cube with a bone bitmap.
My program does texture the sphere, but incorrectly (or in a way I don't expect). Each triangle of the sphere contains a deformed copy of the bitmap. I've checked the bitmap and the UVs, and they seem to be OK. I've tried many bitmap sizes (powers of 2, multiples of 2, etc.).
Here's the texture:
Screenshot of my program (it looks as if it ignores my UV coords):
I mapped the UVs in Blender this way:
Code setting up the texture after loading it (apart from the code adding the texture coordinates to a VBO, which I think is OK):
GLuint texID;
glGenTextures(1,&texID);
glBindTexture(GL_TEXTURE_2D,texID);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,width,height,0,GL_BGR,GL_UNSIGNED_BYTE,(GLvoid*)&data[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP);
Is any extra code needed to map this texture properly?
[Edit]
Initializing the textures (the code presented earlier is in the LoadTextureBMP_custom() function):
bool Program::InitTextures(string texturePath)
{
    textureID = LoadTextureBMP_custom(texturePath);

    GLuint TBO_ID;
    glGenBuffers(1,&TBO_ID);
    glBindBuffer(GL_ARRAY_BUFFER,TBO_ID);
    glBufferData(GL_ARRAY_BUFFER,uv.size()*sizeof(vec2),&uv[0],GL_STATIC_DRAW);

    return true;
}
My main loop:
bool Program::MainLoop()
{
    bool done = false;
    mat4 projectionMatrix;
    mat4 viewMatrix;
    mat4 modelMatrix;
    mat4 MVP;
    Camera camera;

    shader.SetShader(true);

    while(!done)
    {
        if( (glfwGetKey(GLFW_KEY_ESC)))
            done = true;
        if(!glfwGetWindowParam(GLFW_OPENED))
            done = true;

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Matrix transformations start here
        camera.UpdateCamera();
        modelMatrix = mat4(1.0f);
        viewMatrix = camera.GetViewMatrix();
        projectionMatrix = camera.GetProjectionMatrix();
        MVP = projectionMatrix*viewMatrix*modelMatrix;
        // End of transformations

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D,textureID);

        shader.SetShaderParameters(MVP);
        SetOpenGLScene(width,height);

        glEnableVertexAttribArray(0); // Expose the vertex shader attribute => vertexPosition_modelspace
        glBindBuffer(GL_ARRAY_BUFFER,VBO_ID);
        glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,(void*)0);

        glEnableVertexAttribArray(1);
        glBindBuffer(GL_ARRAY_BUFFER,TBO_ID);
        glVertexAttribPointer(1,2,GL_FLOAT,GL_FALSE,0,(void*)0);

        glDrawArrays(GL_TRIANGLES,0,vert.size());

        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);

        glfwSwapBuffers();
    }

    shader.SetShader(false);

    return true;
}
VS:
#version 330

layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;

out vec2 UV;

uniform mat4 MVP;

void main()
{
    vec4 v = vec4(vertexPosition,1.0f);
    gl_Position = MVP*v;
    UV = vertexUV;
}
FS:
#version 330

in vec2 UV;
out vec4 color;

uniform sampler2D texSampler; // Texture handle

void main()
{
    color = texture(texSampler, UV);
}
I haven't done any professional GL programming, but I've been working with 3D software quite a lot.
- your UVs are most likely bad
- your texture is a bad fit for projecting onto a sphere
- since the UVs look bad, you might want to check your normals as well
- consider an icosphere instead of a regular UV sphere to make more efficient use of polygons
You are currently using a flat texture with flat mapping, which may give you very ugly results, since you will have very low resolution around the "outer" perimeter and most likely a nasty seam artifact where the two projections meet if you ever rotate the planet.
Note that you don't have to have any particular UV map; it just needs to be consistent with the geometry, which it doesn't look like it is right now. The spherical mapping will take care of the rest (see the sketch below). You could probably get away with a cylindrical map as well, since most Earth textures are in a suitable projection.
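For reference, a minimal sketch of what such a spherical (equirectangular) mapping looks like when computed from a unit-sphere position; this is a generic formula for a typical Earth texture layout, not code taken from the question:
#include <cmath>
#include <glm/glm.hpp>

// Maps a point on the unit sphere to equirectangular UV coordinates in [0,1]:
// longitude runs along U, latitude along V.
glm::vec2 sphericalUV(const glm::vec3& p)
{
    const float pi = 3.14159265358979f;
    float u = 0.5f + std::atan2(p.z, p.x) / (2.0f * pi);
    float v = 0.5f - std::asin(p.y) / pi;
    return glm::vec2(u, v);
}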
Finally, I've got the answer. The error was here:
bool Program::InitTextures(string texturePath)
{
    textureID = LoadTextureBMP_custom(texturePath);

    // GLuint TBO_ID; _ERROR_
    glGenBuffers(1,&TBO_ID);
    glBindBuffer(GL_ARRAY_BUFFER,TBO_ID);
    glBufferData(GL_ARRAY_BUFFER,uv.size()*sizeof(vec2),&uv[0],GL_STATIC_DRAW);
}
Here is the relevant part of the Program class declaration:
class Program
{
private:
    Shader shader;
    GLuint textureID;
    GLuint VAO_ID;
    GLuint VBO_ID;
    GLuint TBO_ID; // Member covered with local variable declaration in InitTextures()
    ...
};
I erroneously declared a local TBO_ID that shadowed the TBO_ID at class scope. The UVs were generated with poor precision and the seams are horrible, but they weren't the problem.
I have to admit that the information I supplied was too sparse to make helping easy. I should have posted all of the code of the Program class. Thanks to everybody who tried.

Point Sprites for particle system

Are point sprites the best choice to build a particle system?
Are point sprites present in the newer versions of OpenGL and in the drivers of the latest graphics cards? Or should I do it using VBOs and GLSL?
Point sprites are indeed well suited for particle systems. But they don't have anything to do with VBOs and GLSL, meaning they are a completely orthogonal feature. No matter if you use point sprites or not, you always have to use VBOs for uploading the geometry, be they just points, pre-made sprites or whatever, and you always have to put this geometry through a set of shaders (in modern OpenGL of course).
That being said, point sprites are very well supported in modern OpenGL, just not as automatically as with the old fixed-function approach. What is not supported are the point attenuation features that let you scale a point's size based on its distance to the camera; you have to do this manually inside the vertex shader. In the same way you have to do the texturing of the point manually in an appropriate fragment shader, using the special input variable gl_PointCoord (which says where in the [0,1]-square of the whole point the current fragment is). For example, a basic point sprite pipeline could look this way:
...
glPointSize(whatever); //specify size of points in pixels
glDrawArrays(GL_POINTS, 0, count); //draw the points
vertex shader:
uniform mat4 mvp;

layout(location = 0) in vec4 position;

void main()
{
    gl_Position = mvp * position;
}
fragment shader:
uniform sampler2D tex;

layout(location = 0) out vec4 color;

void main()
{
    color = texture(tex, gl_PointCoord);
}
And that's all. Of course those shaders just do the most basic drawing of textured sprites, but are a starting point for further features. For example to compute the sprite's size based on its distance to the camera (maybe in order to give it a fixed world-space size), you have to glEnable(GL_PROGRAM_POINT_SIZE) and write to the special output variable gl_PointSize in the vertex shader:
uniform mat4 modelview;
uniform mat4 projection;
uniform vec2 screenSize;
uniform float spriteSize;

layout(location = 0) in vec4 position;

void main()
{
    vec4 eyePos = modelview * position;
    vec4 projVoxel = projection * vec4(spriteSize,spriteSize,eyePos.z,eyePos.w);
    vec2 projSize = screenSize * projVoxel.xy / projVoxel.w;
    gl_PointSize = 0.25 * (projSize.x+projSize.y);
    gl_Position = projection * eyePos;
}
This would make all point sprites have the same world-space size (and thus a different screen-space size in pixels).
But point sprites, while still being perfectly supported in modern OpenGL, have their disadvantages. One of the biggest is their clipping behaviour. Points are clipped at their center coordinate (because clipping is done before rasterization and thus before the point gets "enlarged"). So if the center of the point is outside of the screen, the rest of it that might still reach into the viewing area is not shown; at worst, once the point is half-way out of the screen, it will suddenly disappear. This is however only noticeable (or annoying) if the point sprites are too large. If they are very small particles that don't cover much more than a few pixels each anyway, then this won't be much of a problem, and I would still regard particle systems as the canonical use case for point sprites; just don't use them for large billboards.
But if this is a problem, then modern OpenGL offers many other ways to implement point sprites, apart from the naive way of pre-building all the sprites as individual quads on the CPU. You can still render them just as a buffer full of points (and thus in the way they are likely to come out of your GPU-based particle engine). To actually generate the quad geometry then, you can use the geometry shader, which lets you generate a quad from a single point. First you do only the modelview transformation inside the vertex shader:
uniform mat4 modelview;

layout(location = 0) in vec4 position;

void main()
{
    gl_Position = modelview * position;
}
Then the geometry shader does the rest of the work. It combines the point position with the 4 corners of a generic [0,1]-quad and completes the transformation into clip-space:
const vec2 corners[4] = {
    vec2(0.0, 1.0), vec2(0.0, 0.0), vec2(1.0, 1.0), vec2(1.0, 0.0) };

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 projection;
uniform float spriteSize;

out vec2 texCoord;

void main()
{
    for(int i=0; i<4; ++i)
    {
        vec4 eyePos = gl_in[0].gl_Position;                 //start with point position
        eyePos.xy += spriteSize * (corners[i] - vec2(0.5)); //add corner position
        gl_Position = projection * eyePos;                  //complete transformation
        texCoord = corners[i];                              //use corner as texCoord
        EmitVertex();
    }
}
In the fragment shader you would then of course use the custom texCoord varying instead of gl_PointCoord for texturing, since we're no longer drawing actual points.
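That fragment shader change is tiny; for example (a sketch that mirrors the earlier point-sprite fragment shader, with the varying swapped in):
// Fragment shader (as a C++ string) for the geometry-shader path: identical to the
// point-sprite version except that it samples with the texCoord varying.
static const char* kQuadSpriteFragmentSrc = R"(
    #version 330
    uniform sampler2D tex;
    in vec2 texCoord;
    layout(location = 0) out vec4 color;
    void main()
    {
        color = texture(tex, texCoord);
    }
)";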
Or another possibility (and maybe faster, since I remember geometry shaders having a reputation for being slow) would be to use instanced rendering. This way you have an additional VBO containing the vertices of just a single generic 2D quad (i.e. the [0,1]-square) and your good old VBO containing just the point positions. What you then do is draw this single quad multiple times (instanced), while sourcing the individual instances' positions from the point VBO:
glVertexAttribPointer(0, ...points...);
glVertexAttribPointer(1, ...quad...);
glVertexAttribDivisor(0, 1); //advance only once per instance
...
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, count); //draw #count quads
And in the vertex shader you then assemble the per-point position with the actual corner/quad-position (which is also the texture coordinate of that vertex):
uniform mat4 modelview;
uniform mat4 projection;
uniform float spriteSize;

layout(location = 0) in vec4 position;
layout(location = 1) in vec2 corner;

out vec2 texCoord;

void main()
{
    vec4 eyePos = modelview * position;             //transform to eye-space
    eyePos.xy += spriteSize * (corner - vec2(0.5)); //add corner position
    gl_Position = projection * eyePos;              //complete transformation
    texCoord = corner;
}
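Host-side, the instanced setup could look roughly like this; a sketch under the assumption of tightly packed vec3 particle positions and a vec2 unit-quad VBO (the function and buffer names are made up):
#include <GL/glew.h> // or any other GL loader the project already uses

// Hypothetical setup for the instanced point-sprite path described above:
// attribute 0 = per-instance particle position, attribute 1 = per-vertex quad corner.
void setupInstancedSprites(GLuint vao, GLuint pointVbo, GLuint quadVbo)
{
    glBindVertexArray(vao);

    // Per-instance particle positions (advance once per instance).
    glBindBuffer(GL_ARRAY_BUFFER, pointVbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glVertexAttribDivisor(0, 1);

    // Generic [0,1] quad corners, shared by every instance.
    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
}

// Then draw one 4-vertex triangle strip per particle:
// glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, particleCount);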
This achieves the same as the geometry shader based approach: properly clipped point sprites with a consistent world-space size. If you actually want to mimic the screen-space pixel size of actual point sprites, you need to put some more computational effort into it. But that is left as an exercise, and it would be roughly the opposite of the world-to-screen transformation in the point sprite shader above.