OpenGL: prevent rewriting uniforms between draw calls - C++

In the example below I have 2 meshes that use the same shader, and I want to send different values
to each mesh. The code below works, but it seems unnecessary to send the data to the GPU every frame. Is there a way to send non-shader-specific data to the GPU that can be used by the shader code and can be bound before draw calls?
I could just make a new shader for every mesh I want to draw, but that seems like a waste of memory.
glUseProgram(ShaderID);
// Draw mesh 1 with its own MVP matrix and per-mesh values
glBindBuffer(GL_ARRAY_BUFFER, mesh1->VBO);
glBindVertexArray(mesh1->VAO);
glUniformMatrix4fv(MVP_ID, 1, GL_FALSE, (const GLfloat*)mvp1);
glUniform1fv(DataUniformID, 5, data1);
glDrawArrays(GL_TRIANGLES, 0, mesh1->size);
// Draw mesh 2 with different uniform values
glBindBuffer(GL_ARRAY_BUFFER, mesh2->VBO);
glBindVertexArray(mesh2->VAO);
glUniformMatrix4fv(MVP_ID, 1, GL_FALSE, (const GLfloat*)mvp2);
glUniform1fv(DataUniformID, 5, data2);
glDrawArrays(GL_TRIANGLES, 0, mesh2->size);
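One approach that matches what the question asks for is a uniform buffer object (UBO, OpenGL 3.1+): upload each mesh's values into its own buffer once, then just rebind a buffer before each draw instead of re-sending uniforms. A minimal sketch, assuming the shader declares a uniform block named MeshData and that MeshDataBlock is a hypothetical CPU-side struct matching its std140 layout:
// GLSL side, for reference:
// layout(std140) uniform MeshData {
//     mat4 mvp;
//     vec4 data[5]; // std140 rounds array elements up to 16 bytes
// };
// One-time setup: upload each mesh's values into its own UBO.
GLuint ubo1;
glGenBuffers(1, &ubo1);
glBindBuffer(GL_UNIFORM_BUFFER, ubo1);
glBufferData(GL_UNIFORM_BUFFER, sizeof(MeshDataBlock), &meshData1, GL_STATIC_DRAW);
// ...repeat for ubo2 / meshData2...
// One-time setup: tie the shader's uniform block to binding point 0.
GLuint blockIndex = glGetUniformBlockIndex(ShaderID, "MeshData");
glUniformBlockBinding(ShaderID, blockIndex, 0);
// Per frame: no uniform uploads, just rebind the right UBO per mesh.
glUseProgram(ShaderID);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo1);
glBindVertexArray(mesh1->VAO);
glDrawArrays(GL_TRIANGLES, 0, mesh1->size);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo2);
glBindVertexArray(mesh2->VAO);
glDrawArrays(GL_TRIANGLES, 0, mesh2->size);
The data then only crosses to the GPU when a mesh's values actually change; per frame you pay only for the binding calls.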

Related

Dynamic VBO read/write in GLSL?

Right now it seems to me that my interleaved VBO is strictly 'read-only', but I want to update it every frame (preferably from GLSL).
I have a planet that moves around in an orbit; the code below is for rendering the points of the orbit.
Problem outline:
I want each point on that orbit to have its own "lifetime": when the planet passes each consecutive point, its lifetime is reset to 1.0 and then reduced over time.
This will be used to create a fading orbit path for each moving object. Right now I'm just looking for ways to manipulate the VBO...
How can I read AND write to and from a VBO within GLSL? Can anyone post an example, please?
Update: I modified the code above to work with transform feedback (suggested by user Andon M. Coleman), but I think I might be doing something wrong (I get a GL error):
Setup:
// Initialize and upload to graphics card
glGenVertexArrays(1, &_vaoID);
glGenBuffers(1, &_vBufferID);
glGenBuffers(1, &_iBufferID);
glGenBuffers(1, &_tboID);
// First VAO setup
glBindVertexArray(_vaoID);
glBindBuffer(GL_ARRAY_BUFFER, _vBufferID);
glBufferData(GL_ARRAY_BUFFER, _vsize * sizeof(Vertex), _varray, GL_DYNAMIC_DRAW);
// TRANSFORM FEEDBACK
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, _tboID); // Specify buffer
// Allocate space without specifying data
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER,
_vsize*sizeof(Vertex), NULL, GL_DYNAMIC_COPY);
// Tell OGL which object to store the results of transform feedback
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, _vBufferID); //correct?
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE,
sizeof(Vertex), reinterpret_cast<const GLvoid*>(offsetof(Vertex, location)));
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE,
sizeof(Vertex), reinterpret_cast<const GLvoid*>(offsetof(Vertex, velocity)));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _iBufferID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, _isize * sizeof(int), _iarray, GL_STREAM_DRAW);
Render method:
// disable rasterization, so that we do a first pass with feedback only
glEnable(GL_RASTERIZER_DISCARD);
glBindVertexArray(_vaoID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _iBufferID);
glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, _tboID);
glBeginTransformFeedback(_mode);
glDrawElements(_mode, _isize, GL_UNSIGNED_INT, 0);
glEndTransformFeedback();
glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, 0);
glBindVertexArray(0);
glDisable(GL_RASTERIZER_DISCARD);
// then I attempt to do the actual draw
glBindVertexArray(_vaoID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _iBufferID);
glDrawElements(_mode, _isize, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
And - right before linking:
const GLchar* feedbackVaryings[] = { "point_position" };
glTransformFeedbackVaryings(_ephemerisProgram->getProgramID(), 1, feedbackVaryings, GL_INTERLEAVED_ATTRIBS);
You cannot change the contents of your VBO from within OpenGL's rendering pipeline directly, but you can use tricks to update them depending on the time. Also, if you are using OpenGL 4.3 or newer you can use compute shaders, but that is a little bit too complicated to explain here, so just google for it. Good luck.
How can I read AND write to and from a VBO within GLSL?
You can't. VBOs are strictly read-only for the normal rendering shaders. In-place modification is not possible at all (because that would open an unfathomably deep barrel of worms), but using transform feedback the results of the shader stages can be written into a buffer.
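For the orbit points this usually means ping-ponging between two buffers: source the attributes from buffer A, capture the vertex shader's outputs into buffer B, and swap each frame. A minimal sketch with illustrative names, assuming the captured varying was declared via glTransformFeedbackVaryings before linking (as in the question):
// Pass 1: run the update shader over bufferA, capturing results into bufferB.
glEnable(GL_RASTERIZER_DISCARD); // no fragments needed during the update pass
glUseProgram(updateProgram);
glBindVertexArray(vaoReadingBufferA); // attributes sourced from bufferA
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, bufferB); // capture target
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, pointCount);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);
// Pass 2: draw normally, reading the freshly written data from bufferB.
glUseProgram(drawProgram);
glBindVertexArray(vaoReadingBufferB);
glDrawArrays(GL_POINTS, 0, pointCount);
std::swap(bufferA, bufferB); // next frame the roles are reversed
Note that sourcing attributes from the very buffer that is bound as the feedback target (as the question's code does with _vBufferID) is undefined behavior; the two-buffer scheme avoids that.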
Or you use compute shaders.
Problem outline: I want each point on that orbit to have its own "lifetime": when the planet passes each consecutive point, its lifetime is reset to 1.0 and then reduced over time!
Sounds like a task for a compute shader. But honestly, I don't think there's much to gain from processing this on the GPU.
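For what it's worth, a minimal sketch of such a compute shader (GLSL 4.30; it assumes the point data is bound as a shader storage buffer with the lifetime packed into the w component, and deltaTime/passedIndex are illustrative uniforms):
#version 430
layout(local_size_x = 64) in;
// Bound as an SSBO so the buffer can be both read and written.
layout(std430, binding = 0) buffer Points {
    vec4 point[]; // xyz = position, w = lifetime
};
uniform float deltaTime;
uniform int passedIndex; // the point the planet passed this frame
void main() {
    uint i = gl_GlobalInvocationID.x;
    if (int(i) >= point.length()) return;
    float life = point[i].w;
    if (int(i) == passedIndex)
        life = 1.0; // reset when the planet passes this point
    point[i].w = max(life - deltaTime, 0.0); // fade over time
}
Dispatch it with glDispatchCompute((pointCount + 63) / 64, 1, 1) and issue glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT) before drawing from the buffer.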

How to access data on CUDA from OpenGL?

I have a general understanding of OpenGL interoperability with CUDA, but my problem is this:
I have a lot of arrays: some for vertices, some for normals, and some for alpha values alone, plus some pointers to these arrays in device memory (something like dev_ver, dev_norm) which are used in a kernel. I have already mapped the resource and now I want to use these data in shaders to make some effects. My rendering code is like this:
glUseProgram (programID);
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_0);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_0, GL_DYNAMIC_DRAW);
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_1);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_1, GL_DYNAMIC_DRAW);
glVertexAttribPointer (1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_2);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_2, GL_DYNAMIC_DRAW);
glVertexAttribPointer (2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray (0);
glEnableVertexAttribArray (1);
glEnableVertexAttribArray (2);
glDrawArrays (GL_TRIANGLES, 0, _max_);
glDisableVertexAttribArray (0);
glDisableVertexAttribArray (1);
glDisableVertexAttribArray (2);
However, now I have no _data_on_cpu_; is it still possible to do the same thing? The sample in CUDA 6.0 is something like this:
glBindBuffer(GL_ARRAY_BUFFER, posVbo);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, normalVbo);
glNormalPointer(GL_FLOAT, sizeof(float)*4, 0);
glEnableClientState(GL_NORMAL_ARRAY);
glColor3f(1.0, 0.0, 0.0);
glDrawArrays(GL_TRIANGLES, 0, totalVerts);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
I don't exactly understand how this could work and what to do in my case.
By the way, the method I have used so far is to cudaMemcpy the dev_ arrays to the host and render as usual, but this is obviously not efficient, because when I render I send the data back to the GPU again through OpenGL (if I'm right).
It's not really clear what you're asking for; you mention CUDA, yet none of the code you have posted is CUDA-specific. I'm guessing vertexBuffer_2 contains additional per-vertex information you want to access in the shader?
The OpenGL calls are as efficient as you will get; they aren't actually copying any data back from device to host. They simply send addresses to the device, telling it where to get the data from and how much data to use to render.
You only need to fill the vertex and normal information at the start of your program; there isn't much reason to be changing this information during execution. You can then change data stored in texture buffers to pass additional per-entity data to shaders to change model position, rotation, colour, etc., as sketched below.
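A minimal sketch of that texture-buffer idea (illustrative names; one RGBA32F texel per entity, GL 3.1+ for glTexBuffer):
// Create a buffer and expose it to shaders through a buffer texture.
GLuint tbo, tboTex;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, entityCount * 4 * sizeof(float), entityData, GL_DYNAMIC_DRAW);
glGenTextures(1, &tboTex);
glBindTexture(GL_TEXTURE_BUFFER, tboTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo); // view the buffer as vec4 texels
// In the shader: uniform samplerBuffer u_entityData;
//                vec4 d = texelFetch(u_entityData, entityIndex);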
When you write your shader you must include in it:
attribute vec3 v_data; (or similar)
When you initialize your shader you must then:
GLuint vs_v_data = glGetAttribLocation(p_shaderProgram, "v_data");
Then, instead of your:
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_2);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_2, GL_DYNAMIC_DRAW);
glVertexAttribPointer (2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
You use:
glEnableVertexAttribArray (vs_v_data);
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_2);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_2, GL_DYNAMIC_DRAW);
glVertexAttribPointer (vs_v_data, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
This should let you access a vec3 called v_data inside your vertex shaders that holds whatever is stored in vertexBuffer_2, presumably secondary vertex information to lerp between for animation.
A simple shader for this, which simply repositions vertices based on an input tick:
#version 120
attribute float tick;
attribute vec3 v_data;
void main()
{
    // Blend between the base position and the secondary position.
    vec3 blended = mix(gl_Vertex.xyz, v_data, tick);
    gl_Position = gl_ModelViewProjectionMatrix * vec4(blended, gl_Vertex.w);
}
If you want per-entity data instead of, or in addition to, per-vertex data, you should do that via texture buffers, as sketched above.
If you're trying to access vertex buffer object data inside CUDA kernels, you need a handful of functions, put together in the sketch after this list:
cudaGraphicsGLRegisterBuffer() gives you a resource handle for the buffer; execute this once after you initially set up the VBO.
cudaGraphicsMapResources() maps the buffer (you can use it in CUDA, but not GL).
cudaGraphicsResourceGetMappedPointer() gives you a device pointer to the buffer; pass this to the kernel.
cudaGraphicsUnmapResources() unmaps the buffer (you can use it in GL, but not CUDA).
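Put together, the flow looks roughly like this (a sketch; update_points is a hypothetical kernel operating on float3 vertices):
#include <cuda_gl_interop.h>
cudaGraphicsResource* vboResource = NULL;
// Once, after creating the VBO:
cudaGraphicsGLRegisterBuffer(&vboResource, vertexBuffer_2, cudaGraphicsMapFlagsNone);
// Each frame, before rendering:
float3* devPtr = NULL;
size_t numBytes = 0;
cudaGraphicsMapResources(1, &vboResource, 0); // GL must not touch the buffer while mapped
cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &numBytes, vboResource);
update_points<<<blocks, threads>>>(devPtr, numBytes / sizeof(float3)); // hypothetical kernel
cudaGraphicsUnmapResources(1, &vboResource, 0); // hand the buffer back to GL
// ...then draw with OpenGL as usual; no cudaMemcpy to the host is needed.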

OpenGL Flicker with glBufferSubData

It seems like glBufferSubData is overwriting or somehow mangling data between my glDrawArrays calls. I'm working on Windows 7 64-bit, with the latest drivers for my NVIDIA GeForce GT 520M CUDA 1GB.
I have 2 models, each with an animation. Each model has one mesh, stored in its own VAO. Each also has one animation, and the bone transformations used to render the mesh are stored in their own buffer.
My workflow looks like this:
calculate the bone transformation matrices for a model
load the bone transformation matrices into OpenGL using glBufferSubData, then bind the buffer
render the model's mesh using glDrawArrays
For one model, this works (at least, mostly - sometimes I get weird gaps in between the vertices).
However, for more than one model, it looks like bone transformation matrix data is getting mixed up between the rendering calls to the meshes.
[Screenshot: single model animated (Windows)]
[Screenshot: two models animated (Windows)]
I load my bone transformation data like so:
void Animation::bind()
{
glBindBuffer(GL_UNIFORM_BUFFER, bufferId_);
glBufferSubData(GL_UNIFORM_BUFFER, 0, currentTransforms_.size() * sizeof(glm::mat4), &currentTransforms_[0]);
bindPoint_ = openGlDevice_->bindBuffer( bufferId_ );
}
And I render my mesh like so:
void Mesh::render()
{
glBindVertexArray(vaoId_);
glDrawArrays(GL_TRIANGLES, 0, vertices_.size());
glBindVertexArray(0);
}
If I add a call to glFinish() after my call to render(), it works just fine! This seems to indicate that, for some reason, the transformation matrix data for one animation is 'bleeding' over into the next animation.
How could this happen? I am under the impression that if I call glBufferSubData while that buffer is in use (e.g. by a glDrawArrays call), it will block. Is this not the case?
It might be worth mentioning that this same code works just fine in Linux.
Note: Related to a previous post, which I deleted.
Mesh Loading Code:
void Mesh::load()
{
LOG_DEBUG( "loading mesh '" + name_ +"' into video memory." );
// create our vao
glGenVertexArrays(1, &vaoId_);
glBindVertexArray(vaoId_);
// create our vbos
glGenBuffers(5, &vboIds_[0]);
glBindBuffer(GL_ARRAY_BUFFER, vboIds_[0]);
glBufferData(GL_ARRAY_BUFFER, vertices_.size() * sizeof(glm::vec3), &vertices_[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, vboIds_[1]);
glBufferData(GL_ARRAY_BUFFER, textureCoordinates_.size() * sizeof(glm::vec2), &textureCoordinates_[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, vboIds_[2]);
glBufferData(GL_ARRAY_BUFFER, normals_.size() * sizeof(glm::vec3), &normals_[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, vboIds_[3]);
glBufferData(GL_ARRAY_BUFFER, colors_.size() * sizeof(glm::vec4), &colors_[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, 0, 0);
if (bones_.size() == 0)
{
bones_.resize( vertices_.size() );
for (auto& b : bones_)
{
b.weights = glm::vec4(0.25f);
}
}
glBindBuffer(GL_ARRAY_BUFFER, vboIds_[4]);
glBufferData(GL_ARRAY_BUFFER, bones_.size() * sizeof(VertexBoneData), &bones_[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(4);
glVertexAttribIPointer(4, 4, GL_INT, sizeof(VertexBoneData), (const GLvoid*)0);
glEnableVertexAttribArray(5);
glVertexAttribPointer(5, 4, GL_FLOAT, GL_FALSE, sizeof(VertexBoneData), (const GLvoid*)(sizeof(glm::ivec4)));
glBindVertexArray(0);
}
Animation UBO Setup:
void Animation::setupAnimationUbo()
{
bufferId_ = openGlDevice_->createBufferObject(GL_UNIFORM_BUFFER, Constants::MAX_NUMBER_OF_BONES_PER_MESH * sizeof(glm::mat4), &currentTransforms_[0]);
}
where Constants::MAX_NUMBER_OF_BONES_PER_MESH is set to 100.
In OpenGlDevice:
GLuint OpenGlDevice::createBufferObject(GLenum target, glmd::uint32 totalSize, const void* dataPointer)
{
GLuint bufferId = 0;
glGenBuffers(1, &bufferId);
glBindBuffer(target, bufferId);
glBufferData(target, totalSize, dataPointer, GL_DYNAMIC_DRAW);
glBindBuffer(target, 0);
bufferIds_.push_back(bufferId);
return bufferId;
}
Those usage flags are mostly correct for this scenario, though you might consider trying GL_STREAM_DRAW.
Your driver appears to be failing to implicitly synchronize for some reason, so you might want to try a technique that eliminates the need for synchronization in the first place. I would suggest buffer orphaning: call glBufferData (...) with NULL for the data pointer prior to sending data. This will allow commands that are currently using the UBO to continue using the original data store without forcing synchronization, since you will allocate a new data store before sending new data. When the commands mentioned earlier finish, the original data store will be orphaned and the GL implementation will free it.
In newer OpenGL implementations you can use glInvalidateBuffer[Sub]Data (...) to hint the driver into doing what was discussed above. Likewise, you can use glMapBufferRange (...) with appropriate flags to control all of this behavior more explicitly. Unmapping will implicitly flush and synchronize access to a buffer object unless told otherwise; this might get your driver to do its job if you do not want to mess around with synchronization-free buffer update logic.
Most of what I mentioned is discussed in more detail here.
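Applied to the Animation::bind() method above, orphaning is a one-line addition (a sketch; the allocation size matches setupAnimationUbo(), and GL_STREAM_DRAW is the usage suggested above):
void Animation::bind()
{
    glBindBuffer(GL_UNIFORM_BUFFER, bufferId_);
    // Orphan the old data store: draws still reading it keep their copy,
    // and the driver returns a fresh allocation, so no synchronization is needed.
    glBufferData(GL_UNIFORM_BUFFER, Constants::MAX_NUMBER_OF_BONES_PER_MESH * sizeof(glm::mat4), NULL, GL_STREAM_DRAW);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, currentTransforms_.size() * sizeof(glm::mat4), &currentTransforms_[0]);
    bindPoint_ = openGlDevice_->bindBuffer( bufferId_ );
}
The glMapBufferRange route mentioned above would instead map the buffer with GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT, write the matrices, and unmap.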

Moving to OpenGL 2.0, getting a black screen

I'm trying to modernize my OpenGL code a little bit by bringing it into the OpenGL 2.0 era, but all I get is a black screen... perhaps someone can spot a mistake in my code (it's not that much code).
Here is a snippet of the relevant bits of my old, fully working code.
// Send color data into GPU memory
glEnableClientState(GL_COLOR_ARRAY);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, colorsHandle);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, colors, GL15.GL_DYNAMIC_DRAW);
glColorPointer(4, GL_FLOAT, 0, 0);
// Send vertex data into GPU memory
glEnableClientState(GL_VERTEX_ARRAY);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, verticesHandle);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertices, GL15.GL_DYNAMIC_DRAW);
glVertexPointer(3, GL_FLOAT, 0, 0);
// Send texcoords data into GPU memory
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, texcoordsHandle);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, texcoords, GL15.GL_DYNAMIC_DRAW);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
for (Batch batch : batches) batch.render();
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
... batch calls glDrawArrays (amongst some OpenGL state changes) ...
glDrawArrays(GL_QUADS, spriteOffset * 4, spriteCount * 4);
And here is the new code:
// Send color data into GPU memory
GL20.glEnableVertexAttribArray(0);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, colorsHandle);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, colors, GL15.GL_DYNAMIC_DRAW);
GL20.glVertexAttribPointer(0, 4, GL_FLOAT, false, 0, 0);
// Send vertex data into GPU memory
GL20.glEnableVertexAttribArray(1);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, verticesHandle);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertices, GL15.GL_DYNAMIC_DRAW);
GL20.glVertexAttribPointer(1, 3, GL_FLOAT, false, 0, 0);
// Send texcoords data into GPU memory
GL20.glEnableVertexAttribArray(2);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, texcoordsHandle);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, texcoords, GL15.GL_DYNAMIC_DRAW);
GL20.glVertexAttribPointer(2, 2, GL_FLOAT, false, 0, 0);
for (Batch batch : batches) batch.render();
GL20.glDisableVertexAttribArray(0);
GL20.glDisableVertexAttribArray(1);
GL20.glDisableVertexAttribArray(2);
Like I said, the first bit of code works perfectly, but the second does not draw anything at all. Those are the only lines of code I changed in my renderer, and they look correct to me. But obviously there is a problem somewhere... What could possibly cause the absence of rendering?
If those are really the only lines you've changed, then I guess you haven't implemented any shaders?
Once you move to generic vertex attributes (glVertexAttribPointer) there is no fixed-function pipeline behind them, so you need to write vertex/fragment shaders, compile them, link a program object, etc.
I'm sure you can find many tutorials for this if you search; a minimal pair is sketched below.
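For reference, a minimal GLSL 1.10 pair matching the attribute layout above (a sketch; the attribute names are illustrative, and since GLSL 1.10 has no layout qualifiers, the indices 0/1/2 must be bound with GL20.glBindAttribLocation(program, 0, "a_color") and so on before linking):
// Vertex shader
#version 110
attribute vec4 a_color; // location 0
attribute vec3 a_position; // location 1
attribute vec2 a_texcoord; // location 2
uniform mat4 u_mvp; // replaces the fixed-function matrix stack
varying vec4 v_color;
varying vec2 v_texcoord;
void main()
{
    v_color = a_color;
    v_texcoord = a_texcoord;
    gl_Position = u_mvp * vec4(a_position, 1.0);
}
// Fragment shader
#version 110
uniform sampler2D u_texture;
varying vec4 v_color;
varying vec2 v_texcoord;
void main()
{
    gl_FragColor = v_color * texture2D(u_texture, v_texcoord);
}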