OpenGL: Radeon driver seems to mess with depth testing - c++

I'm having a really weird issue with depth testing here.
I'm rendering a simple mesh in an OpenGL 3.3 core profile context on Windows, with depth testing enabled and glDepthFunc set to GL_LESS. On my machine (a laptop with an NVIDIA GeForce GTX 660M), everything works as expected and the depth test behaves correctly; this is what it looks like:
Now, if I run the program on a different PC, a tower with a Radeon R9 280, it looks more like this:
The really strange thing is that when I call glEnable(GL_DEPTH_TEST) every frame before drawing, the result is correct on both machines.
Since it works when I do that, I figure the depth buffer is created correctly on both machines; the depth test just seems to be getting disabled somewhere before rendering when I enable it only once at initialization.
Here's the minimal code that could plausibly be part of the problem:
Code called at initialization, after a context is created and made current:
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
Code called every frame before the buffer swap:
glClearColor(0.4f, 0.6f, 0.8f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// mShaderProgram->getID() simply returns the handle of a simple shader program
glUseProgram(mShaderProgram->getID());
glm::vec3 myColor = glm::vec3(0.7f, 0.5f, 0.4f);
GLuint colorLocation = glGetUniformLocation(mShaderProgram->getID(), "uColor");
glUniform3fv(colorLocation, 1, glm::value_ptr(myColor));
glm::mat4 modelMatrix = glm::mat4(1.0f);
glm::mat4 viewMatrix = glm::lookAt(glm::vec3(0.0f, 3.0f, 5.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 projectionMatrix = glm::perspectiveFov(60.0f, (float)mWindow->getProperties().width, (float)mWindow->getProperties().height, 1.0f, 100.0f);
glm::mat4 inverseTransposeMVMatrix = glm::inverseTranspose(viewMatrix*modelMatrix);
GLuint mMatrixLocation = glGetUniformLocation(mShaderProgram->getID(), "uModelMatrix");
GLuint vMatrixLocation = glGetUniformLocation(mShaderProgram->getID(), "uViewMatrix");
GLuint pMatrixLocation = glGetUniformLocation(mShaderProgram->getID(), "uProjectionMatrix");
GLuint itmvMatrixLocation = glGetUniformLocation(mShaderProgram->getID(), "uInverseTransposeMVMatrix");
glUniformMatrix4fv(mMatrixLocation, 1, GL_FALSE, glm::value_ptr(modelMatrix));
glUniformMatrix4fv(vMatrixLocation, 1, GL_FALSE, glm::value_ptr(viewMatrix));
glUniformMatrix4fv(pMatrixLocation, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
glUniformMatrix4fv(itmvMatrixLocation, 1, GL_FALSE, glm::value_ptr(inverseTransposeMVMatrix));
// Similar to the shader program, mMesh.gl_vaoID is simply the handle of a vertex array object
glBindVertexArray(mMesh.gl_vaoID);
glDrawArrays(GL_TRIANGLES, 0, mMesh.faces.size()*3);
With the above code, I'll get the wrong output on the Radeon.
Note: I'm using GLFW3 for context creation and GLEW for the function pointers (and obviously GLM for the math).
The vertex array object contains three attribute array buffers, for positions, UV coordinates and normals. Each of these should be correctly configured and sent to the shaders, as everything works fine when the depth test is enabled every frame.
I should also mention that the Radeon machine runs Windows 8 while the nVidia machine runs Windows 7.
Edit: By request, here's the code used to load the mesh and create the attribute data. I do not create any element buffer objects as I am not using indexed draw calls.
std::vector<glm::vec3> positionData;
std::vector<glm::vec2> uvData;
std::vector<glm::vec3> normalData;
std::vector<meshFaceIndex> faces;
std::ifstream fileStream(path);
if (!fileStream.is_open()){
std::cerr << "ERROR: Could not open file '" << path << "'!\n";
return;
}
std::string lineBuffer;
while (std::getline(fileStream, lineBuffer)){
std::stringstream lineStream(lineBuffer);
std::string typeString;
lineStream >> typeString; // Get line token
if (typeString == TOKEN_VPOS){ // Position
glm::vec3 pos;
lineStream >> pos.x >> pos.y >> pos.z;
positionData.push_back(pos);
}
else if (typeString == TOKEN_VUV){ // UV coord
glm::vec2 UV;
lineStream >> UV.x >> UV.y;
uvData.push_back(UV);
}
else if (typeString == TOKEN_VNORMAL){ // Normal
glm::vec3 normal;
lineStream >> normal.x >> normal.y >> normal.z;
normalData.push_back(normal);
}
else if (typeString == TOKEN_FACE){ // Face
meshFaceIndex faceIndex;
char interrupt; // consumes the '/' separators
for (int i = 0; i < 3; ++i){
lineStream >> faceIndex.positionIndex[i] >> interrupt
>> faceIndex.uvIndex[i] >> interrupt
>> faceIndex.normalIndex[i];
}
faces.push_back(faceIndex);
}
}
fileStream.close();
std::vector<glm::vec3> packedPositions;
std::vector<glm::vec2> packedUVs;
std::vector<glm::vec3> packedNormals;
for (auto f : faces){
Face face; // Derp derp;
for (auto i = 0; i < 3; ++i){
if (!positionData.empty()){
face.vertices[i].position = positionData[f.positionIndex[i] - 1];
packedPositions.push_back(face.vertices[i].position);
}
else
face.vertices[i].position = glm::vec3(0.0f);
if (!uvData.empty()){
face.vertices[i].uv = uvData[f.uvIndex[i] - 1];
packedUVs.push_back(face.vertices[i].uv);
}
else
face.vertices[i].uv = glm::vec2(0.0f);
if (!normalData.empty()){
face.vertices[i].normal = normalData[f.normalIndex[i] - 1];
packedNormals.push_back(face.vertices[i].normal);
}
else
face.vertices[i].normal = glm::vec3(0.0f);
}
myMesh.faces.push_back(face);
}
glGenVertexArrays(1, &(myMesh.gl_vaoID));
glBindVertexArray(myMesh.gl_vaoID);
GLuint positionBuffer; // positions
glGenBuffers(1, &positionBuffer);
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3)*packedPositions.size(), &packedPositions[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
GLuint uvBuffer; // uvs
glGenBuffers(1, &uvBuffer);
glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec2)*packedUVs.size(), &packedUVs[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
GLuint normalBuffer; // normals
glGenBuffers(1, &normalBuffer);
glBindBuffer(GL_ARRAY_BUFFER, normalBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3)*packedNormals.size(), &packedNormals[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
The .obj loading routine is mostly adapted from this one:
http://www.limegarden.net/2010/03/02/wavefront-obj-mesh-loader/

This doesn't look like a depth testing issue to me, but more like misalignment in the vertex / index array data. Please show us the code in which you load the vertex buffer objects and the element buffer objects.

It is because of the function ChoosePixelFormat.
In my case, ChoosePixelFormat returned a pixel format ID of 8, which provides a depth buffer with 16 bits instead of the required 24 bits.
One simple fix was to set the ID manually to 11 instead of 8 to get a suitable pixel format with a 24-bit depth buffer.

Cannot Read Values Passed to Vertex Shader

I am trying to wrap my head around the various types of GLSL shaders in OpenGL.
At the moment I am struggling with a 2D layered-tile implementation. For some reason the int values that get passed into my shader are always 0 (or, more likely, never set).
I currently have a 2048x2048px 2D texture composed of 20x20px tiles. I am trying to texture one quad with it and change the index of the tile based on the block of ints I pass into the vertex shader.
I am passing in a vec2 of floats for the position of the quad (really a TRIANGLE_STRIP). I am also attempting to pass in 6 ints that will represent the 6 layers of tiles.
My input:
// Build and compile our shader program
Shader ourShader("b_vertex.vertexShader", "b_fragment.fragmentShader");
const int floatsPerPosition = 2;
const int intsPerTriangle = 6;
const int numVertices = 4;
const int sizeOfPositions = sizeof(float) * numVertices * floatsPerPosition;
const int sizeOfColors = sizeof(int) * numVertices * intsPerTriangle;
const int numIndices = 4;
const int sizeOfIndices = sizeof(int) * numIndices;
float positions[numVertices][floatsPerPosition] =
{
{ -1, 1 },
{ -1, -1 },
{ 1, 1 },
{ 1, -1 },
};
// ints indicating Tile Index
int colors[numVertices][intsPerTriangle] =
{
{ 1, 2, 3, 4, 5, 6 },
{ 1, 2, 3, 4, 5, 6 },
{ 1, 2, 3, 4, 5, 6 },
{ 1, 2, 3, 4, 5, 6 },
};
// Indexes on CPU
int indices[numVertices] =
{
0, 1, 2, 3,
};
My setup:
GLuint vao, vbo1, vbo2, ebo; // Identifiers of OpenGL objects
glGenVertexArrays(1, &vao); // Create new VAO
// Bound VAO will store connections between VBOs and attributes
glBindVertexArray(vao);
glGenBuffers(1, &vbo1); // Create new VBO
glBindBuffer(GL_ARRAY_BUFFER, vbo1); // Bind vbo1 as current vertex buffer
// initialize vertex buffer, allocate memory, fill it with data
glBufferData(GL_ARRAY_BUFFER, sizeOfPositions, positions, GL_STATIC_DRAW);
// indicate that current VBO should be used with vertex attribute with index 0
glEnableVertexAttribArray(0);
// indicate how vertex attribute 0 should interpret data in connected VBO
glVertexAttribPointer(0, floatsPerPosition, GL_FLOAT, GL_FALSE, 0, 0);
glGenBuffers(1, &vbo2); // Create new VBO
glBindBuffer(GL_ARRAY_BUFFER, vbo2); // Bind vbo2 as current vertex buffer
// initialize vertex buffer, allocate memory, fill it with data
glBufferData(GL_ARRAY_BUFFER, sizeOfColors, colors, GL_STATIC_DRAW);
// indicate that current VBO should be used with vertex attribute with index 1
glEnableVertexAttribArray(1);
// indicate how vertex attribute 1 should interpret data in connected VBO
glVertexAttribPointer(1, intsPerTriangle, GL_INT, GL_FALSE, 0, 0);
// Create new buffer that will be used to store indices
glGenBuffers(1, &ebo);
// Bind index buffer to corresponding target
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
// initialize index buffer, allocate memory, fill it with data
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeOfIndices, indices, GL_STATIC_DRAW);
// reset bindings for VAO, VBO and EBO
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
// Load and create a texture
GLuint texture1 = loadBMP_custom("uvtemplate3.bmp");
GLuint texture2 = loadBMP_custom("texture1.bmp");
My draw:
// Game loop
while (!glfwWindowShouldClose(window))
{
// Check if any events have been activated (key pressed, mouse moved etc.) and call corresponding response functions
glfwPollEvents();
// Render
// Clear the colorbuffer
glClearColor(1.f, 0.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Activate shader
ourShader.Use();
// Bind Textures using texture units
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
//add some cool params
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
float borderColor[] = { 0.45f, 0.25f, 0.25f, 0.25f };
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColor);
glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture1"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
glUniform1i(glGetUniformLocation(ourShader.Program, "ourTexture2"), 1);
// Draw container
//glBindVertexArray(VAO);
//glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glBindVertexArray(vao);
//glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glDrawElements(GL_TRIANGLE_STRIP, numIndices, GL_UNSIGNED_INT, NULL);
glBindVertexArray(0);
// Swap the screen buffers
glfwSwapBuffers(window);
}
My shader most definitely works, as I can adjust the output by hard-coding the values from within the vertex shader. My suspicion is that I am not passing the values correctly / in the correct format, or not indicating somewhere that the int[6] needs to be included per vertex.
I cannot read anything from my layout (location = 1) in int Base[6]; input. I've tried just about everything I can think of: declaring each int individually, trying to read two ivec3's, uint, and whatever else I could come up with, but everything comes back as 0.
The following are my vertex and fragment shader for completeness:
#version 330 core
layout (location = 0) in vec2 position;
layout (location = 1) in int Base[6];
out vec2 TexCoord;
out vec2 TexCoord2;
out vec2 TexCoord3;
out vec2 TexCoord4;
out vec2 TexCoord5;
out vec2 TexCoord6;
// 0.5f, 0.5f,// 0.0f, 118.0f, 0.0f, 0.0f, 0.0f, 0.0f, // Top Right
// 0.5f, -0.5f,// 0.0f, 118.0f, 1.0f, 0.0f, 0.0f,0.009765625f, // Bottom Right
// -0.5f, -0.5f,// 0.0f, 118.0f, 0.0f, 1.0f, 0.009765625f, 0.009765625f, // Bottom Left
// -0.5f, 0.5f//, 0.0f, 118.0f, 1.0f, 0.0f, 0.009765625f, 0.0f // Top Left
void main()
{
int curBase = Base[5];
int curVertex = gl_VertexID % 4;
vec2 texCoord = (curVertex == 0?
vec2(0.0,0.0):(
curVertex == 1?
vec2(0.0,0.009765625):(
curVertex == 2?
vec2(0.009765625,0.0):(
curVertex == 3?
vec2(0.009765625,0.009765625):(
vec2(0.0,0.0)))))
);
gl_Position = vec4(position, 0.0f, 1.0f);
TexCoord = vec2(texCoord.x + ((int(curBase)%102)*0.009765625f)
, (1.0 - texCoord.y) - ((int(curBase)/102)*0.009765625f));
//curBase = Base+1;
TexCoord2 = vec2(texCoord.x + ((int(curBase)%102)*0.009765625f)
, (1.0 - texCoord.y) - ((int(curBase)/102)*0.009765625f));
//curBase = Base+2;
TexCoord3 = vec2(texCoord.x + ((int(curBase)%102)*0.009765625f)
, (1.0 - texCoord.y) - ((int(curBase)/102)*0.009765625f));
}
Fragment:
#version 330 core
//in vec3 ourColor;
in vec2 TexCoord;
in vec2 TexCoord2;
in vec2 TexCoord3;
in vec2 TexCoord4;
in vec2 TexCoord5;
in vec2 TexCoord6;
out vec4 color;
// Texture samplers
uniform sampler2D ourTexture1;
uniform sampler2D ourTexture2;
void main()
{
color = (texture(ourTexture2, TexCoord )== vec4(1.0,0.0,1.0,1.0)?
(texture(ourTexture2, TexCoord2 )== vec4(1.0,0.0,1.0,1.0)?
(texture(ourTexture2, TexCoord3 )== vec4(1.0,0.0,1.0,1.0)?
(texture(ourTexture2, TexCoord4 )== vec4(1.0,0.0,1.0,1.0)?
(texture(ourTexture2, TexCoord5 )== vec4(1.0,0.0,1.0,1.0)?
(texture(ourTexture2, TexCoord6 )== vec4(1.0,0.0,1.0,1.0)?
vec4(0.0f,0.0f,0.0f,0.0f)
:texture(ourTexture2, TexCoord6 ))
:texture(ourTexture2, TexCoord5 ))
:texture(ourTexture2, TexCoord4 ))
:texture(ourTexture2, TexCoord3 ))
:texture(ourTexture2, TexCoord2 ))
:texture(ourTexture2, TexCoord ));
}
This is wrong in two different ways:
glVertexAttribPointer(1, intsPerTriangle, GL_INT, GL_FALSE, 0, 0);
Vertex attributes in the GL can be scalars or vectors of 2 to 4 components. Hence, the size parameter of glVertexAttribPointer can take the values 1, 2, 3 or 4. Using a different value (intsPerTriangle == 6) means that the call will just generate a GL_INVALID_VALUE error and have no other effect, so you don't even set a pointer.
If you want to pass 6 values per vertex, you can either use 6 different scalar attributes (consuming 6 attribute slots), or pack them into vectors, e.g. two 3D vectors (consuming only 2 slots). No matter which packing you choose, you'll need a proper attrib pointer setup for each attribute slot in use.
However, glVertexAttribPointer is also the wrong function for your use case. It defines floating-point attributes, which must have matching declarations as float/vec* in the shader. The fact that you can input GL_INT just means that the GPU can do the conversion to floating point on the fly for you.
If you want to use an int or ivec (or their unsigned counterparts) attribute, you have to use glVertexAttribIPointer (note the I in that function name) when setting up the attribute.

OpenGL - orthographic projection matrix

I'm very new to OpenGL and I am doing a mini project where I experiment with the depth buffer. I got to the stage of displaying it on the screen. However, I want to draw using screen coordinates instead of converting everything to normalized floats. I read somewhere that I need to use a projection matrix. I have looked for ages and tested loads of different options, but I can't seem to get it right.
Can anyone point me to a useful resource or explain how I would go about doing this?
EDIT
At the moment my matrix looks like this:
projectionMat = glm::ortho(0.0f, (float)_cols, 0.0f, (float)_rows, 0.0f, (float)_maxDepthVal);
projection = glGetUniformLocation(_program, "Projection");
glUniformMatrix4fv(projection, 1, GL_FALSE, glm::value_ptr(projectionMat));
EDIT 2
With some fiddling I found that cols had to be negative, for some strange reason, before anything would display. It now displays correctly on the screen, but it has a gap around the sides opposite the origin; why is this? Even a small change in the camera position and target causes all of it to vanish, so I don't think that is the problem.
Pixel Art Representation!!
OOOO!!
OOOO!!
OOOO!!
!!!!!!!!!!!!!!
New code
glm::mat4 Projection = glm::ortho(0.0f, -static_cast<float>(_cols), 0.0f, static_cast<float>(_rows), 0.0f, static_cast<float>(_maxDepthVal));
projection = glGetUniformLocation(_program, "Projection");
glm::mat4 View = glm::lookAt(
glm::vec3(0.0f, 0.0f, -0.1f),
glm::vec3(0.0f , 0.0f, 0.0f), // and looks at the origin
glm::vec3(0,1,0) // Head is up (set to 0,-1,0 to look upside-down)
);
// Model matrix : an identity matrix (model will be at the origin)
glm::mat4 Model = glm::mat4(1.0f);
projectionMat = Projection * View * Model;
glUniformMatrix4fv(projection, 1, GL_FALSE, glm::value_ptr(projectionMat));
EDIT 3
I can translate it using the Model matrix but it has a gap of 5 pixels around it that I can't get rid of, any help on that would be appreciated but thanks for taken an interest.
UPDATE
As per request my draw code
glUseProgram(_program);
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
SDL_GL_SwapWindow(_window);
glPointSize(1);
glEnableVertexAttribArray(0);
//Insert matrix here
glVertexAttribPointer(0, 3, GL_UNSIGNED_INT, GL_FALSE, 0, 0);
glDrawArrays(GL_POINTS, 0, _dataCount);
glDisableVertexAttribArray(0);
my vbo:
glGenBuffers(1, &_vbo);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBufferData(GL_ARRAY_BUFFER, _dataCount * 4 * sizeof(unsigned int), NULL, GL_STATIC_DRAW);
if(_vbo == 0 || glGetError() != GL_NO_ERROR)
{
_errorMessage = "VBO COULD NOT BE CREATED";
error();
}
checkCudaErrors(cudaGraphicsGLRegisterBuffer(&vbo, _vbo, cudaGraphicsMapFlagsNone));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glUseProgram(0);
I'm also having issues with the write: when the data is converted to floats (for drawing) it loses precision, so if I read a value back out it rounds to the nearest factor (0, 256, 512, etc.). Is there another way to do this that stores the data as unsigned int? (I realize this is getting slightly off topic, but any help would be appreciated.)
The issue appeared to be with the cols variable: it needed to be inverted to work, otherwise everything was off the screen.

How do I bind multiple textures to multiple objects in OpenGL

I want to draw a cube and a sphere and apply a different texture to each.
I use Blender to create the scene and then export it to an obj file, which then includes the vertices, normals, uvs and faces for both objects, as well as the textures.
I have created a routine which loads all the data from the obj file. This all works, as I can load the objects and display them, but with only one texture. As I say, I have gone through pages and pages of code and posts, and 99% only deal with one texture for one object; those that deal with multiple textures only cover one object, or use a very old version of OpenGL.
The one thing I haven't tried is uniform sampler2D arrays in the fragment shader, but I haven't found an explanation of those, so I haven't tried it.
The code I have is below:
ObjLoader *obj = new ObjLoader();
string _filepath = "objects\\" + _filename;
//bool res = obj->loadObjWithStaticColor(_filepath.c_str(), _vertices, _normals, vertex_colors, _colors, 1.0);
bool res = obj->loadObjWithTextures(_filepath.c_str(), _objects, _textures);
program = InitShader("shaders\\vshader.glsl", "shaders\\fshader.glsl");
glUseProgram(program);
GLuint vao_world_objects;
glGenVertexArrays(1, &vao_world_objects);
glBindVertexArray(vao_world_objects);
//GLuint vbo_world_objects;
//glGenBuffers(1, &vbo_world_objects);
//glBindBuffer(GL_ARRAY_BUFFER, vbo_world_objects);
NumVertices = _objects[_objects.size() - 1]._stop + 1;
for (size_t i = 0; i < _objects.size(); i++)
{
_vertices.insert(_vertices.end(), _objects[i]._vertices.begin(), _objects[i]._vertices.end());
_normals.insert(_normals.end(), _objects[i]._normals.begin(), _objects[i]._normals.end());
_uvs.insert(_uvs.end(), _objects[i]._uvs.begin(), _objects[i]._uvs.end());
}
GLuint _vSize = _vertices.size() * sizeof(point4);
GLuint _nSize = _normals.size() * sizeof(point4);
GLuint _uSize = _uvs.size() * sizeof(point2);
GLuint _totalSize = _vSize + _uSize; // normals + vertices + uvs
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, _vSize, &_vertices[0], GL_STATIC_DRAW);
GLuint uvbuffer;
glGenBuffers(1, &uvbuffer);
glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
glBufferData(GL_ARRAY_BUFFER, _uSize, &_uvs[0], GL_STATIC_DRAW);
TextureID = glGetUniformLocation(program, "myTextureSampler");
TextureObjects = new GLuint[_textures.size()];
glGenTextures(_textures.size(), TextureObjects);
for (size_t i = 0; i < _textures.size(); i++)
{
// "Bind" the newly created texture : all future texture functions will modify this texture
glBindTexture(GL_TEXTURE_2D, TextureObjects[i]);
// Give the image to OpenGL
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _textures[i].width, _textures[i].height, 0, GL_BGR, GL_UNSIGNED_BYTE, _textures[i]._tex_data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}
for (size_t i = 0; i < _objects.size(); i++)
{
if (i == 0)
{
glActiveTexture(GL_TEXTURE0);
}
else
{
glActiveTexture(GL_TEXTURE1);
}
glBindTexture(GL_TEXTURE_2D, TextureObjects[i]);
GLuint _v_size = _objects[i]._vertices.size() * sizeof(point4);
GLuint _u_size = _objects[i]._uvs.size() * sizeof(point2);
GLuint vPosition = glGetAttribLocation(program, "vPosition");
glEnableVertexAttribArray(vPosition);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
if (i == 0)
{
glVertexAttribPointer(vPosition, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
}
else
{
glVertexAttribPointer(vPosition, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(_v_size));
}
GLuint vUV = glGetAttribLocation(program, "vUV");
glEnableVertexAttribArray(vUV);
glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
if (i == 0)
{
glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
}
else
{
glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(_u_size));
}
if (i == 0)
{
glUniform1i(TextureID, 0);
}
else
{
glUniform1i(TextureID, 1);
}
}
_scale = Scale(zoom, zoom, zoom);
_projection = Perspective(45.0, 4.0 / 3.0, 0.1, 100.0);
_view = LookAt(point4(Camera.x, Camera.y, Camera.z, 0), point4(0, 0, 0, 0), point4(0, 1, 0, 0));
_model = mat4(1.0); // identity matrix
_mvp = _projection * _view * _model;
MVP = glGetUniformLocation(program, "MVP");
theta = glGetUniformLocation(program, "theta");
Zoom = glGetUniformLocation(program, "Zoom");
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glEnable(GL_CULL_FACE);
glClearColor(1.0, 1.0, 1.0, 1.0);
I understand that I have to switch between the active textures when drawing an object but I can't figure out how.
UPDATE
@immibis Ok, I tried that yesterday but it didn't work; it was late and I was highly frustrated. So, just to get my thinking correct here: do I have to create a buffer every time (glGenBuffers) and then fill it, activate the texture and then call glDrawArrays, or do I just create the buffer once and then fill it every time with the different vertices and uvs for each object, set the offsets and then call glDrawArrays for each object?
When I tried this originally I didn't know where the
glGetAttribLocation / glEnableVertexAttribArray / glBindBuffer
calls should go. So, if I understand correctly, every time I do a transformation, like rotating around the x axis, the buffers have to be refilled etc., so the code needs to go in the display function. Is that correct?
SOLVED
Ok, so thanks to immibis' comments, I started looking in a different direction. I was staring the whole time at how the data was pumped into the arrays and never even looked at glDrawArrays. I searched the web again and came across a piece of code in a tutorial where the author explained glDrawArrays, and I saw that you can tell it which range to draw.
So then this became easy, as I originally thought it was supposed to be. I changed my code back to pumping everything into the buffers, and since I have a start and stop property on the objects returned from my loader, it was really easy to tell glDrawArrays what to do.
Thank you.

OpenGL Model/Texture rendering using VAO/VBO

I am trying to render 3D models with textures using Assimp. The conversion goes perfectly; all texture positions and so on get loaded. I have tested the texture images by drawing them to the screen in 2D.
For some reason the textures do not render on the model.
I am a beginner in OpenGL, so forgive me if I don't explain it right.
The tutorial I have based the code on is from here, but i stripped a big part since I have my own camera/movement system.
The model renders like this: http://i.stack.imgur.com/5sK9K.png
whilest the texture in use looks like this: http://i.stack.imgur.com/sWGp7.jpg
The relevant rendering code is the following:
Generating textures from data file:
int Mesh::LoadGLTextures(const aiScene* scene){
if (scene->HasTextures()) return -1; //yes this is correct
/* get texture filenames and number of textures */
for (unsigned int m = 0; m<scene->mNumMaterials; m++){
int texIndex = 0;
aiReturn texFound;
aiString path; // filename
while ((texFound = scene->mMaterials[m]->GetTexture(aiTextureType_DIFFUSE, texIndex, &path)) == AI_SUCCESS){
textureIdMap[path.data] = NULL; //fill map with textures, pointers still NULL yet
texIndex++;
}
}
int numTextures = textureIdMap.size();
/* create and fill array with GL texture ids */
GLuint* textureIds = new GLuint[numTextures];
/* get iterator */
std::map<std::string, GLuint>::iterator itr = textureIdMap.begin();
std::string basepath = getBasePath(path);
ALLEGRO_BITMAP *image;
for (int i = 0; i<numTextures; i++){
std::string filename = (*itr).first; // get filename
(*itr).second = textureIds[i]; // save texture id for filename in map
itr++; // next texture
std::string fileloc = basepath + filename; /* Loading of image */
image = al_load_bitmap(fileloc.c_str());
if (image) /* If no error occurred: */{
GLuint texId = al_get_opengl_texture(image);
//glGenTextures(numTextures, &textureIds[i]); /* Texture name generation */
glBindTexture(GL_TEXTURE_2D, texId); /* Binding of texture name */
//redefine standard texture values
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); /* linear interpolation for the magnification filter */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* linear interpolation for the minification filter */
textureIdMap[filename] = texId;
} else {
/* Error occurred */
std::cout << "Couldn't load Image: " << fileloc.c_str() << "\n";
}
}
//Cleanup
delete[] textureIds;
//return success
return true;
}
Generating VBO/VAO:
void Mesh::genVAOsAndUniformBuffer(const aiScene *sc) {
struct MyMesh aMesh;
struct MyMaterial aMat;
GLuint buffer;
// For each mesh
for (unsigned int n = 0; n < sc->mNumMeshes; ++n){
const aiMesh* mesh = sc->mMeshes[n];
// create array with faces
// have to convert from Assimp format to array
unsigned int *faceArray;
faceArray = (unsigned int *)malloc(sizeof(unsigned int) * mesh->mNumFaces * 3);
unsigned int faceIndex = 0;
for (unsigned int t = 0; t < mesh->mNumFaces; ++t) {
const aiFace* face = &mesh->mFaces[t];
memcpy(&faceArray[faceIndex], face->mIndices, 3 * sizeof(unsigned int));
faceIndex += 3;
}
aMesh.numFaces = sc->mMeshes[n]->mNumFaces;
// generate Vertex Array for mesh
glGenVertexArrays(1, &(aMesh.vao));
glBindVertexArray(aMesh.vao);
// buffer for faces
glGenBuffers(1, &buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * mesh->mNumFaces * 3, faceArray, GL_STATIC_DRAW);
// buffer for vertex positions
if (mesh->HasPositions()) {
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 3 * mesh->mNumVertices, mesh->mVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(vertexLoc);
glVertexAttribPointer(vertexLoc, 3, GL_FLOAT, 0, 0, 0);
}
// buffer for vertex normals
if (mesh->HasNormals()) {
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 3 * mesh->mNumVertices, mesh->mNormals, GL_STATIC_DRAW);
glEnableVertexAttribArray(normalLoc);
glVertexAttribPointer(normalLoc, 3, GL_FLOAT, 0, 0, 0);
}
// buffer for vertex texture coordinates
if (mesh->HasTextureCoords(0)) {
float *texCoords = (float *)malloc(sizeof(float) * 2 * mesh->mNumVertices);
for (unsigned int k = 0; k < mesh->mNumVertices; ++k) {
texCoords[k * 2] = mesh->mTextureCoords[0][k].x;
texCoords[k * 2 + 1] = mesh->mTextureCoords[0][k].y;
}
glGenBuffers(1, &buffer);
glEnableVertexAttribArray(texCoordLoc);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 2 * mesh->mNumVertices, texCoords, GL_STATIC_DRAW);
glVertexAttribPointer(texCoordLoc, 2, GL_FLOAT, GL_FALSE, 0, 0);
}
// unbind buffers
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
// create material uniform buffer
aiMaterial *mtl = sc->mMaterials[mesh->mMaterialIndex];
aiString texPath; //contains filename of texture
if (AI_SUCCESS == mtl->GetTexture(aiTextureType_DIFFUSE, 0, &texPath)){
//bind texture
unsigned int texId = textureIdMap[texPath.data];
aMesh.texIndex = texId;
aMat.texCount = 1;
} else {
aMat.texCount = 0;
}
float c[4];
set_float4(c, 0.8f, 0.8f, 0.8f, 1.0f);
aiColor4D diffuse;
if (AI_SUCCESS == aiGetMaterialColor(mtl, AI_MATKEY_COLOR_DIFFUSE, &diffuse))
color4_to_float4(&diffuse, c);
memcpy(aMat.diffuse, c, sizeof(c));
set_float4(c, 0.2f, 0.2f, 0.2f, 1.0f);
aiColor4D ambient;
if (AI_SUCCESS == aiGetMaterialColor(mtl, AI_MATKEY_COLOR_AMBIENT, &ambient))
color4_to_float4(&ambient, c);
memcpy(aMat.ambient, c, sizeof(c));
set_float4(c, 0.0f, 0.0f, 0.0f, 1.0f);
aiColor4D specular;
if (AI_SUCCESS == aiGetMaterialColor(mtl, AI_MATKEY_COLOR_SPECULAR, &specular))
color4_to_float4(&specular, c);
memcpy(aMat.specular, c, sizeof(c));
set_float4(c, 0.0f, 0.0f, 0.0f, 1.0f);
aiColor4D emission;
if (AI_SUCCESS == aiGetMaterialColor(mtl, AI_MATKEY_COLOR_EMISSIVE, &emission))
color4_to_float4(&emission, c);
memcpy(aMat.emissive, c, sizeof(c));
float shininess = 0.0;
unsigned int max;
aiGetMaterialFloatArray(mtl, AI_MATKEY_SHININESS, &shininess, &max);
aMat.shininess = shininess;
glGenBuffers(1, &(aMesh.uniformBlockIndex));
glBindBuffer(GL_UNIFORM_BUFFER, aMesh.uniformBlockIndex);
glBufferData(GL_UNIFORM_BUFFER, sizeof(aMat), (void *)(&aMat), GL_STATIC_DRAW);
myMeshes.push_back(aMesh);
}
}
Rendering model:
void Mesh::recursive_render(const aiScene *sc, const aiNode* nd){
// draw all meshes assigned to this node
for (unsigned int n = 0; n < nd->mNumMeshes; ++n){
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, myMeshes[nd->mMeshes[n]].texIndex);
// bind VAO
glBindVertexArray(myMeshes[nd->mMeshes[n]].vao);
// draw
glDrawElements(GL_TRIANGLES, myMeshes[nd->mMeshes[n]].numFaces * 3, GL_UNSIGNED_INT, 0);
}
// draw all children
for (unsigned int n = 0; n < nd->mNumChildren; ++n){
recursive_render(sc, nd->mChildren[n]);
}
}
Any other relevant code parts can be found in my open github project https://github.com/kwek20/StrategyGame/tree/master/Strategy
Mesh.cpp is relevant, as well as main.cpp and Camera.cpp.
As far as I understand, I followed the guidelines well: created a VAO, created VBOs, added data and enabled the proper vertex attribute arrays to render the scene with.
I have checked all the data variables and everything is filled according to plan.
Could anyone here spot the mistake I have made and/or explain it?
Some links are typed weird because of the limit I have :(
It would help if you posted your shaders also.
I can post some rendering code with textures if that helps you out:
Generating the texture for OpenGL and loading a grayscale (UC8) image with a given width and height into the GPU:
void GLRenderer::getTexture(unsigned char * image, int width, int height)
{
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &mTextureID);
glBindTexture(GL_TEXTURE_2D, mTextureID);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGB8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, image);
if (aux::checkGlErrors(__LINE__, __FILE__))assert(false);
glBindTexture(GL_TEXTURE_2D, 0);
}
Loading the vertices from Assimp onto the GPU:
//** buffer an obj-file-style model, initialize the VAO
void GLRenderer::bufferModel(float* aVertexArray, int aNumberOfVertices, float* aNormalArray, int aNumberOfNormals, float* aUVList, int aNumberOfUVs, unsigned int* aIndexList, int aNumberOfIndices)
{
//** just to be sure we are current
glfwMakeContextCurrent(mWin);
//** Buffer all data in VBOs
glGenBuffers(1, &mVertex_buffer_object);
glBindBuffer(GL_ARRAY_BUFFER, mVertex_buffer_object);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * aNumberOfVertices * 3, aVertexArray, GL_STATIC_DRAW);
glGenBuffers(1, &mNormal_buffer_object);
glBindBuffer(GL_ARRAY_BUFFER, mNormal_buffer_object);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * aNumberOfNormals * 3, aNormalArray, GL_STATIC_DRAW);
glGenBuffers(1, &mUV_buffer_object);
glBindBuffer(GL_ARRAY_BUFFER, mUV_buffer_object);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * aNumberOfUVs * 2, aUVList, GL_STATIC_DRAW);
glGenBuffers(1, &mIndex_buffer_object);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndex_buffer_object);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * aNumberOfIndices, aIndexList, GL_STATIC_DRAW);
if (aux::checkGlErrors(__LINE__, __FILE__))assert(false);
//** VAO tells our shaders how to match up data from buffer to shader input variables
glGenVertexArrays(1, &mVertex_array_object);
glBindVertexArray(mVertex_array_object);
//** vertices first
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, mVertex_buffer_object);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
//** normals next
if (aNumberOfNormals > 0){
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, mNormal_buffer_object);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, NULL);
}
//** UVs last
if (aNumberOfUVs > 0){
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, mUV_buffer_object);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, NULL);
}
//** indexing for reusing vertices in triangle-meshes
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndex_buffer_object);
//** check errors and store the number of vertices
if (aux::checkGlErrors(__LINE__, __FILE__))assert(false);
mNumVert = aNumberOfVertices;
mNumNormals = aNumberOfNormals;
mNumUVs = aNumberOfUVs;
mNumIndices = aNumberOfIndices;
}
The code above is called like:
//read vertices from file
std::vector<float> vertex, normal, uv;
std::vector<unsigned int> index;
//assimp-wrapping function to load obj to vectors
aux::loadObjToVectors("Resources\\vertices\\model.obj", vertex, normal, index, uv);
mPtr->bufferModel(&vertex[0], static_cast<int>(vertex.size()) / 3, &normal[0], static_cast<int>(normal.size()) / 3, &uv[0], static_cast<int>(uv.size()) / 2, &index[0], static_cast<int>(index.size()));
Then comes the shader-part:
In the vertex shader you just pass the UV coordinates through:
#version 400 core
layout (location = 0) in vec3 vertexPosition_modelspace;
layout (location = 1) in vec3 vertexNormal_modelspace;
layout (location = 2) in vec2 vertexUV;
out vec2 UV;
[... in main then ...]
UV = vertexUV;
While in the fragment shader you assign the value to the pixel:
#version 400 core
in vec2 UV;
uniform sampler2D textureSampler;
layout(location = 0) out vec4 outColor;
[... in main then ...]
// you probably want to calculate lighting here too; this is just the simplest way to sample the texture
outColor = vec4(texture(textureSampler, UV).rgb, cosAngle);
//you can also check whether the UV coords are correctly bound by using:
outColor = vec4(UV.x, UV.y, 1, 1);
//and then checking the pixel values in the resulting image (e.g. render it to a PBO and download it onto the CPU for inspection)
In the rendering loop, also make sure that all the uniforms are correctly bound (especially texture-related ones) and that the texture is active and bound:
if (mTextureID != -1) {
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, mTextureID);
}
GLint textureLocation = glGetUniformLocation(mShaderProgram, "textureSampler");
glUniform1i(textureLocation, 0);
//** set the polygon mode
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
//**drawElements because of indexing
glDrawElements(GL_TRIANGLES, mNumIndices, GL_UNSIGNED_INT, 0);
I hope I could help you!
Kind regards,
VdoP

OpenGL shape only draws when initial position is (0, 0, 0)

I have a cube that I am loading from an OBJ file. When I make its position (0, 0, 0), everything works fine: the cube renders, and my function that gives it a velocity moves the cube across the screen. However, if I change the position of the cube to something other than (0, 0, 0) before entering my while loop where I render and calculate velocity changes, the cube never renders. This is the first time I have tried to reload my vertices every time I render a frame, and I am assuming I messed something up there - but I've looked over other code and can't figure out what.
Here is my main function:
int main()
{
#ifdef TESTING
testing();
exit(0);
#endif
setupAndInitializeWindow(768, 480, "Final Project");
TriangleTriangleCollision collisionDetector;
Asset cube1("cube.obj", "vertexShader.txt", "fragmentShader.txt");
cube1.position = glm::vec3(0.0, 2.0, 0.0);
cube1.velocity = glm::vec3(0.0, -0.004, 0.0);
MVP = projection * view * model;
do{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
moveAsset(cube1);
renderAsset(cube1);
glfwSwapBuffers(window);
glfwPollEvents();
} while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
glfwWindowShouldClose(window) == 0);
glfwTerminate();
return 0;
}
my moveAsset function:
void moveAsset(Asset &asset)
{
double currentTime = glfwGetTime();
asset.position.x += (asset.velocity.x * (currentTime - asset.lastTime));
asset.position.y += (asset.velocity.y * (currentTime - asset.lastTime));
asset.position.z += (asset.velocity.z * (currentTime - asset.lastTime));
for (glm::vec3 &vertex : asset.vertices)
{
glm::vec4 transformedVector = glm::translate(glm::mat4(1.0f), asset.position) * glm::vec4(vertex.x, vertex.y, vertex.z, 1);
vertex = glm::vec3(transformedVector.x, transformedVector.y, transformedVector.z);
}
asset.lastTime = glfwGetTime();
}
void renderAsset(Asset asset)
{
glUseProgram(asset.programID);
GLuint MatrixID = glGetUniformLocation(asset.programID, "MVP");
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, asset.vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, asset.vertices.size() * sizeof(glm::vec3), &asset.vertices[0], GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, asset.vertices.size());
glDisableVertexAttribArray(0);
}
my model, view and projection matrices are defined as:
glm::mat4 model = glm::mat4(1.0f);
glm::mat4 view = glm::lookAt(glm::vec3(5, 5, 10),
glm::vec3(0, 0, 0),
glm::vec3(0, 1, 0));
glm::mat4 projection = glm::perspective(45.0f, (float) _windowWidth / _windowHeight, 0.1f, 100.0f);
and finally, my Asset struct:
struct Asset
{
Asset() { }
Asset(std::string assetOBJFile, std::string vertexShader, std::string fragmentShader)
{
glGenVertexArrays(1, &vertexArrayID);
glBindVertexArray(vertexArrayID);
programID = LoadShaders(vertexShader.c_str(), fragmentShader.c_str());
// Read our .obj file
std::vector<glm::vec2> uvs;
std::vector<glm::vec3> normals;
loadOBJ(assetOBJFile.c_str(), vertices, uvs, normals);
// Load it into a VBO
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), &vertices[0], GL_STATIC_DRAW);
//velocity = glm::vec3(0.0, 1.0, 1.0);
velocity = glm::vec3(0.0, 0.0, 0.0);
position = glm::vec3(0.0, 0.0, 0.0);
lastTime = glfwGetTime();
}
GLuint vertexArrayID;
GLuint programID;
GLuint vertexbuffer;
std::vector<glm::vec3> faces;
std::vector<glm::vec3> vertices;
glm::vec3 velocity;
double lastTime;
glm::vec3 position;
};
It looks like you're adding the current asset.position to your vertex positions on every iteration, overwriting the previous positions in place. From the moveAsset() function:
for (glm::vec3 &vertex : asset.vertices)
{
glm::vec4 transformedVector = glm::translate(glm::mat4(1.0f), asset.position) *
glm::vec4(vertex.x, vertex.y, vertex.z, 1);
vertex = glm::vec3(transformedVector.x, transformedVector.y, transformedVector.z);
}
Neglecting the velocity for a moment, and assuming that you have an original vertex at (0, 0, 0), you would move it to asset.position on the first iteration, then add asset.position again on the second iteration, which places it at 2 * asset.position. On the third iteration you add asset.position to this current position again, resulting in 3 * asset.position. So after n steps, the vertices will be around n * asset.position. Even if your object is visible initially, it will move out of the visible range before you can blink.
To get your original strategy working, the most straightforward approach is to have two lists of vertices. One list contains your original object coordinates, which you never change. Then before you draw, you build a second list of vertices, calculated as the sum of the original vertices plus the current asset.position, and use that second list for rendering.
The whole thing is... not very OpenGL. There's really no need to modify the vertex coordinates on the CPU. You can make the translation part of the transformation applied in your vertex shader. You already have a model matrix in place. You can simply put the translation by asset.position into the model matrix, and recalculate the MVP matrix. You already have the glUniformMatix4fv() call to pass the new matrix to the shader program in your renderAsset() function.