I am creating a UBO like this:
glGenBuffers(1, &uboMatrices);
glBindBuffer(GL_UNIFORM_BUFFER, uboMatrices);
glBufferData(GL_UNIFORM_BUFFER, 2 * sizeof(glm::mat4), NULL, GL_STATIC_DRAW);
glBindBuffer(GL_UNIFORM_BUFFER, 0);
glBindBufferRange(GL_UNIFORM_BUFFER, 0, uboMatrices, 0, 2 * sizeof(glm::mat4));
Updating the Data
glBindBuffer(GL_UNIFORM_BUFFER, uboMatrices);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(glm::mat4), glm::value_ptr(projection));
glBindBuffer(GL_UNIFORM_BUFFER, 0);
glBindBuffer(GL_UNIFORM_BUFFER, uboMatrices);
glBufferSubData(GL_UNIFORM_BUFFER, sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(view));
glBindBuffer(GL_UNIFORM_BUFFER, 0);
Is it necessary to do this step?
unsigned int uniformIndexMyShader1 = glGetUniformBlockIndex(MyShader1.ID, "Matrices");
glUniformBlockBinding(MyShader1.ID, uniformIndexMyShader1, 0);
Even without this, all of the shaders receive the updated projection and view matrices.
According to the OpenGL 4.6 standard, section 7.6.3:
Each of a program's active uniform blocks has a corresponding uniform buffer object binding point. The binding is established when a program is linked or re-linked, and the initial value of the binding is specified by a layout qualifier (if present), or zero otherwise. The binding point can be assigned by calling

void UniformBlockBinding(uint program, uint uniformBlockIndex, uint uniformBlockBinding);
So if you specify a binding in the shader (via the layout) then it seems you would not need to explicitly call glUniformBlockBinding. If you do not specify the binding in the shader, then the standard appears to be saying that all uniform blocks get associated with binding point zero. If that's what you expect, then the call to glUniformBlockBinding wouldn't hurt, but it wouldn't be changing anything either.
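For example, with GLSL 4.20 (or the ARB_shading_language_420pack extension) the binding can be set directly in the layout qualifier, in which case no glUniformBlockBinding call is needed at all. This is only a sketch, assuming an std140 layout that matches the 2 * sizeof(glm::mat4) allocation above:
// binding = 0 matches the index used in glBindBufferRange above
layout (std140, binding = 0) uniform Matrices
{
    mat4 projection;
    mat4 view;
};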
Note that a binding specified with a layout qualifier takes effect when the program is linked, and re-linking resets the binding to that value (or to zero if no qualifier is present); glUniformBlockBinding, on the other hand, is called on a program that has already been linked.
Related
Does anyone know why this error is being thrown?
I thought I was binding the VBO when I call glEnableVertexAttribArray?
com.jogamp.opengl.GLException: array vertex_buffer_object must be bound to call this method
at jogamp.opengl.gl4.GL4bcImpl.checkBufferObject(GL4bcImpl.java:39146)
at jogamp.opengl.gl4.GL4bcImpl.checkArrayVBOBound(GL4bcImpl.java:39178)
at jogamp.opengl.gl4.GL4bcImpl.glVertexAttribPointer(GL4bcImpl.java:37371)
This is my draw code:
public void draw(final GL2ES2 gl, Matrix4f projectionMatrix, Matrix4f viewMatrix, int shaderProgram, final Vec3 position, final float angle) {
// enable glsl
gl.glUseProgram(shaderProgram);
// enable alpha
gl.glEnable(GL2ES2.GL_BLEND);
gl.glBlendFunc(GL2ES2.GL_SRC_ALPHA, GL2ES2.GL_ONE_MINUS_SRC_ALPHA);
// get handle to glsl variables
mPositionHandle = gl.glGetAttribLocation(shaderProgram, "vPosition");
setmColorHandle(gl.glGetUniformLocation(shaderProgram, "vColor"));
mProj = gl.glGetUniformLocation(shaderProgram, "mProj");
mView = gl.glGetUniformLocation(shaderProgram, "mView");
mModel = gl.glGetUniformLocation(shaderProgram, "mModel");
// perform translations
getModelMatrix().loadIdentity();
getModelMatrix().translate(new Vec3(position.x * 60.0f, position.y * 60.0f, position.z * 60.0f));
getModelMatrix().rotate(angle, 0, 0, 1);
// set glsl variables
gl.glUniform4fv(getmColorHandle(), 1, getColorArray(), 0);
gl.glUniformMatrix4fv(mProj, 1, true, projectionMatrix.getValues(), 0);
gl.glUniformMatrix4fv(mView, 1, true, viewMatrix.getValues(), 0);
gl.glUniformMatrix4fv(mModel, 1, true, getModelMatrix().getValues(), 0);
// Enable a handle to the triangle vertices
gl.glEnableVertexAttribArray(mPositionHandle);
// Prepare the triangle coordinate data
gl.glVertexAttribPointer(
getmPositionHandle(),
COORDS_PER_VERTEX,
GL2ES2.GL_FLOAT,
false,
vertexStride, 0L); // This is the line that throws error
// Draw the square
gl.glDrawElements(
GL2ES2.GL_TRIANGLES,
drawOrder.length,
GL2ES2.GL_UNSIGNED_SHORT,
0L);
// Disable vertex array
gl.glDisableVertexAttribArray(mPositionHandle);
gl.glDisable(GL2ES2.GL_BLEND);
gl.glUseProgram(0);
}
(I've never used OpenGL with Java, so I'll use C/C++ code, but I hope it will come across well)
You do not create or bind a Vertex Buffer Object.
First, use glGenBuffers to create a buffer, as so:
GLuint bufferID;
glGenBuffers(1, &bufferID);
This allocates a handle and stores it in bufferID.
Then, bind the buffer:
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
This makes it the "current" buffer to use.
Next, fill the buffer with data. Assuming vertices is an array that stores your vertex coordinates, in flat format, with three floats per vertex:
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
This actually puts the data in GPU memory.
Then enable the attribute array and set the pointer:
glEnableVertexAttribArray(mPositionHandle);
glVertexAttribPointer(mPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
This will make the data in vertices available for shader programs under the vertex attribute location of mPositionHandle.
The second-to-last parameter of glVertexAttribPointer is stride. In this example, it is 0, because the buffer contains only vertex position data. If you want to pack both vertex position data and color data in the same buffer, as so:
v1.positionX v1.positionY v1.positionZ v1.colorR v1.colorG v1.colorB
v2.positionX ...
you will need to use a non-zero stride. The stride specifies the byte offset between one attribute and the next attribute of the same kind; with a stride of 0, they are assumed to be tightly packed. In this case, you'll want to set a stride of sizeof(GLfloat) * 6, so that after reading one vertex's position, it skips the color data to arrive at the next vertex's position, and similarly for colors.
// (create, bind and fill vertex buffer here)
glEnableVertexAttribArray(location_handle_of_position_data);
glVertexAttribPointer(location_handle_of_position_data, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 6, (void*)0);
glEnableVertexAttribArray(location_handle_of_color_data);
glVertexAttribPointer(location_handle_of_color_data, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 6, (void*)(sizeof(GLfloat) * 3));
The last parameter is the byte offset to the first value of that attribute: the first color value starts after the third float.
Other considerations:
You should look into using Vertex Array Objects. It might or might not work without them, but in the core profile they are required, and they simplify the code in any case.
For the sake of simplicity, this example code stores color data in floats, but for real use bytes are preferable.
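For instance, a color attribute stored as three unsigned bytes per vertex could be set up like this (a sketch only; stride_in_bytes and color_offset_in_bytes are placeholders), with the normalized flag set to GL_TRUE so the 0-255 range maps to 0.0-1.0 in the shader:
glVertexAttribPointer(location_handle_of_color_data, 3, GL_UNSIGNED_BYTE,
                      GL_TRUE,  // normalize 0..255 bytes to 0.0..1.0 floats
                      stride_in_bytes, (void*)color_offset_in_bytes);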
glVertexAttribPointer() specifies that data for the attribute should be pulled from the currently bound vertex buffer, using the parameters specified. So you need to call:
gl.glBindBuffer(GL_ARRAY_BUFFER, ...);
before you call glVertexAttribPointer().
glEnableVertexAttribArray() specifies that an array should be used for the vertex attribute. Otherwise, a constant value, specified with calls like glVertexAttrib4f() is used. But it does not specify that the array is in a buffer. And even more importantly, there's no way glVertexAttribPointer() would know which buffer to use for the attribute unless you bind a specific buffer.
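As a minimal sketch of the required order (plain C-style GL calls; in JOGL the same calls go through the gl object), assuming a hypothetical vertexData array: create and fill the buffer once, then make sure it is bound whenever the pointer is set:
// once, at init time: create and fill the vertex buffer
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);

// at draw time: the buffer must be bound when glVertexAttribPointer() is called
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(mPositionHandle);
glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX, GL_FLOAT, GL_FALSE,
                      vertexStride, (void*)0);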
I have been working to get OpenGL to render multiple different entities on the scene.
According to http://www.opengl.org/wiki/Vertex_Specification,
a Vertex Array Object should remember which Vertex Buffer Object was bound to GL_ARRAY_BUFFER and GL_ELEMENT_ARRAY_BUFFER (or at least that is how I understood it).
Yet, no matter which VAO I bind before calling draw, the application only uses the buffer last bound to GL_ARRAY_BUFFER.
Question: Am I understanding this right? Considering the code below, is the sequence of GL calls correct?
void OglLayer::InitBuffer()
{
std::vector<float> out;
std::vector<unsigned> ibOut;
glGenVertexArrays(V_COUNT, vaos);
glGenBuffers(B_COUNT, buffers);
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= //
glBindVertexArray(vaos[V_PLANE]);
PlaneBuffer(out, ibOut, 0.5f, 0.5f, divCount, divCount);
OGL_CALL(glBindBuffer(GL_ARRAY_BUFFER, buffers[B_PLANE_VERTEX]));
OGL_CALL(glBufferData(GL_ARRAY_BUFFER, out.size()*sizeof(float), out.data(), GL_DYNAMIC_DRAW));
//GLuint vPosition = glGetAttribLocation( programs[P_PLANE], "vPosition" );
OGL_CALL(glEnableVertexAttribArray(0));
//OGL_CALL(glVertexAttribPointer( vPosition, 3, GL_FLOAT, GL_FALSE, sizeof(float)*3, BUFFER_OFFSET(0) ));
OGL_CALL(glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, sizeof(float)*3, BUFFER_OFFSET(0) ));
bufferData[B_PLANE_VERTEX].cbSize = sizeof(float) * out.size();
bufferData[B_PLANE_VERTEX].elementCount = out.size()/3;
OGL_CALL(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[B_PLANE_INDEX]));
OGL_CALL(glBufferData(GL_ELEMENT_ARRAY_BUFFER, ibOut.size()*sizeof(unsigned), ibOut.data(), GL_STATIC_DRAW));
bufferData[B_PLANE_INDEX].cbSize = sizeof(float) * ibOut.size();
bufferData[B_PLANE_INDEX].elementCount = ibOut.size();
// -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= //
glBindVertexArray(vaos[V_CUBE]);
out.clear();
ibOut.clear();
GenCubeMesh(out, ibOut);
glBindBuffer(GL_ARRAY_BUFFER, buffers[B_CUBE_VERTEX]);
glBufferData(GL_ARRAY_BUFFER, out.size()*sizeof(float), out.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3*sizeof(float), BUFFER_OFFSET(0));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[B_CUBE_INDEX]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned), ibOut.data(), GL_STATIC_DRAW);
}
void RenderPlane::Render( float dt )
{
// Binding any vao here always results in the last bound buffer being used.
OGL_CALL(glBindVertexArray(g_ogl.vaos[m_pRender->vao]));
{
OGL_CALL(glUseProgram(g_ogl.programs[m_pRender->program]));
// uniform location
GLint loc = 0;
// Send Transform Matrix ( Rotate Cube over time )
loc = glGetUniformLocation(g_ogl.programs[m_pRender->program], "transfMat");
auto transf = m_pRender->pParent->m_transform->CreateTransformMatrix();
glUniformMatrix4fv(loc, 1, GL_TRUE, &transf.matrix()(0,0));
// Send View Matrix
loc = glGetUniformLocation(g_ogl.programs[m_pRender->program], "viewMat");
mat4 view = g_ogl.camera.transf().inverse();
glUniformMatrix4fv(loc, 1, GL_TRUE, &view(0,0));
// Send Projection Matrix
loc = glGetUniformLocation(g_ogl.programs[m_pRender->program], "projMat");
mat4 proj = g_ogl.camera.proj();
glUniformMatrix4fv(loc, 1, GL_TRUE, &proj(0,0));
}
OGL_CALL(glDrawElements(GL_TRIANGLES, g_ogl.bufferData[m_pRender->ib].elementCount, GL_UNSIGNED_INT, 0));
}
The GL_ARRAY_BUFFER binding is not part of the VAO state. The wiki page you link explains that correctly:
Note: The GL_ARRAY_BUFFER binding is NOT part of the VAO's state! I know that's confusing, but that's the way it is.
The GL_ELEMENT_ARRAY_BUFFER binding on the other hand is part of the VAO state.
While I don't completely disagree with calling this confusing, it actually does make sense if you start thinking about it. The goal of a VAO is to capture all vertex state that you would typically set up before making a draw call. When using indexed rendering, this includes binding the proper index buffer used for the draw call. Therefore, including the GL_ELEMENT_ARRAY_BUFFER binding in the VAO state makes complete sense.
On the other hand, the current GL_ARRAY_BUFFER binding does not influence a draw call at all. It only matters that the correct binding is established before calling glVertexAttribPointer(). And all the state set by glVertexAttribPointer() is part of the VAO state. So the VAO state contains the vertex buffer reference used for each attribute, which is established by the glVertexAttribPointer() call. The current GL_ARRAY_BUFFER on the other hand is not part of the VAO state because the current binding at the time of the draw call does not have any effect on the draw call.
Another way of looking at this: Since attributes used for a draw call can be pulled from different vertex buffers, having a single vertex buffer binding tracked in the VAO state would not be useful. On the other hand, since OpenGL only ever uses a single index buffer for a draw call, and uses the current index buffer binding for the draw call, tracking the index buffer binding in the VAO makes sense.
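As a sketch (assuming a vao, vbo and ibo that have already been generated with glGenVertexArrays/glGenBuffers, and a hypothetical indexCount), the setup and draw would look roughly like this; the GL_ELEMENT_ARRAY_BUFFER binding is recorded in the VAO, while the GL_ARRAY_BUFFER binding only matters at the moment glVertexAttribPointer() is called:
// setup, once per object
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);                            // not stored in the VAO itself...
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);  // ...but this call records vbo for attribute 0
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);                    // this binding IS stored in the VAO
glBindVertexArray(0);

// draw, every frame
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (void*)0);
glBindVertexArray(0);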
private int vbo;
private int ibo;
vbo = glGenBuffers();
ibo = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, Util.createFlippedBuffer(vertices), GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, Util.createFlippedBuffer(indices), GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
//glEnableVertexAttribArray(2);
//glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, false, Vertex.SIZE * 4, 0);
glVertexAttribPointer(1, 2, GL_FLOAT, false, Vertex.SIZE * 4, 12);
//glVertexAttribPointer(2, 3, GL_FLOAT, false, Vertex.SIZE * 4, 20);
//glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glDrawElements(GL_TRIANGLES, size, GL_UNSIGNED_INT, 0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
//glDisableVertexAttribArray(2);
The vertex shader code looks like
#version 330
layout (location = 0) in vec3 position;
layout (location = 1) in vec2 texCoord;
out vec2 texCoord0;
uniform mat4 transform;
void main()
{
gl_Position = transform * vec4(position, 1.0);
texCoord0 = texCoord;
}
So, here is my understanding. The purpose of glVertexAttribPointer is to define the format of the data in the vertex buffer object. The vbo stores its data as follows:
buffer.put(vertices[i].getPos().getX());
buffer.put(vertices[i].getPos().getY());
buffer.put(vertices[i].getPos().getZ());
buffer.put(vertices[i].getTexCoord().getX());
buffer.put(vertices[i].getTexCoord().getY());
buffer.put(vertices[i].getNormal().getX());
buffer.put(vertices[i].getNormal().getY());
buffer.put(vertices[i].getNormal().getZ());
So, we have two glVertexAttribPointer lines because we have two input variables defined in the vertex shader; basically, we are defining what these two variables point to. The first glVertexAttribPointer call defines that the first variable, "position", is a vertex position with three float components. The second glVertexAttribPointer call defines that the second variable, "texCoord", is a pair of texture coordinates, each being a float. If my understanding is correct so far, I was assuming we first need to bind the vertex buffer object, but even after commenting out this line
glBindBuffer(GL_ARRAY_BUFFER, vbo);
it still works. I am confused. How does it know which buffer object we are talking about since there are two vbos?
#datenwolf already covered the key aspect in a comment above. To elaborate some more:
You do not have to bind GL_ARRAY_BUFFER again before the glDrawElements() call. What matters is that the buffer you want to source a given attribute from is bound when you make the glVertexAttribPointer() call for that attribute.
The best way to picture this is that when you make this call:
glVertexAttribPointer(0, 3, GL_FLOAT, false, Vertex.SIZE * 4, 0);
you're specifying all the state needed to tell OpenGL where to get the data for attribute 0 (first argument) from, and how to read it. Most of that state is given directly by the arguments:
it has 3 components
the components are float values
vertices are read with a stride of Vertex.SIZE * 4 bytes...
... and starting at byte 0 of the buffer
But there's an additional implied piece of state that is also stored away for attribute 0 when you make the call:
the data is read from the buffer currently bound to GL_ARRAY_BUFFER
In other words, the state associated with each attribute includes the id of the buffer the attribute data is sourced from. This can be the same buffer for multiple/all attributes, or it can be a different buffer for each attribute.
Note that the same is not true for GL_ELEMENT_ARRAY_BUFFER. That one needs to be bound at the time of the glDrawElements() call. While it seems somewhat inconsistent, this is necessary because there's no equivalent to glVertexAttribPointer() for the index array. The API could have been defined to have this kind of call, but... it was not. The reason is most likely that it was simply not necessary, since only one index array can be used for a draw call, while multiple vertex buffers can be used.
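To illustrate with a C-style sketch (posVbo, texVbo, ibo and indexCount are made-up names here): each attribute is sourced from whichever buffer was bound when its glVertexAttribPointer() call was made, while the index buffer only needs to be bound for the draw call itself:
// attribute 0 sources from posVbo: the binding is captured by this pointer call
glBindBuffer(GL_ARRAY_BUFFER, posVbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);

// attribute 1 sources from texVbo: a different buffer, captured the same way
glBindBuffer(GL_ARRAY_BUFFER, texVbo);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(1);

// the current GL_ARRAY_BUFFER binding no longer matters from here on
glBindBuffer(GL_ARRAY_BUFFER, 0);

// the index buffer, however, must be bound at draw time
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (void*)0);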
I have generally learned OpenGL Interoperability with CUDA, but my problem is like this:
I have a lot of arrays, some for vertices, some for normals and some for alpha values alone, plus pointers to these arrays in device memory (something like dev_ver, dev_norm) which are used in the kernel. I have already mapped the resource and now I want to use these data in shaders to make some effects. My rendering code is like this:
glUseProgram (programID);
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_0);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_0, GL_DYNAMIC_DRAW);
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_1);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_1, GL_DYNAMIC_DRAW);
glVertexAttribPointer (1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_2);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_2, GL_DYNAMIC_DRAW);
glVertexAttribPointer (2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray (0);
glEnableVertexAttribArray (1);
glEnableVertexAttribArray (2);
glDrawArrays (GL_TRIANGLES, 0, _max_);
glDisableVertexAttribArray (0);
glDisableVertexAttribArray (1);
glDisableVertexAttribArray (2);
However, now I have no _data_on_cpu_; is it still possible to do the same thing? The sample in CUDA 6.0 is something like this:
glBindBuffer(GL_ARRAY_BUFFER, posVbo);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, normalVbo);
glNormalPointer(GL_FLOAT, sizeof(float)*4, 0);
glEnableClientState(GL_NORMAL_ARRAY);
glColor3f(1.0, 0.0, 0.0);
glDrawArrays(GL_TRIANGLES, 0, totalVerts);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
I don't exactly understand how this could work and what to do in my case.
By the way, the method I have used so far is to cudaMemcpy the dev_ arrays back to the host and render as usual, but this is obviously not efficient, because when rendering I send the data back to the GPU again through OpenGL (if I'm right).
It's not really clear what you're asking for; you mention CUDA, yet none of the code you have posted is CUDA specific. I'm guessing vertexBuffer_2 contains additional per-vertex information you want to access in the shader?
The OpenGL calls are about as efficient as you will get; they aren't actually copying any data back from the device to the host. They are simply sending the addresses to the device, telling it where to get the data from and how much data to use to render.
You only need to fill the vertex and normal information at the start of your program; there isn't much reason to be changing this information during execution. You can then change data stored in texture buffers to pass additional per-entity data to shaders to change model position, rotation, colour etc.
When you write your shader, you must include in it:
attribute vec3 v_data; (or similar)
When you init your shader, you must then call:
GLuint vs_v_data = glGetAttribLocation(p_shaderProgram, "v_data");
Then, instead of your:
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_2);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_2, GL_DYNAMIC_DRAW);
glVertexAttribPointer (2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
you use:
glEnableVertexAttribArray (vs_v_data);
glBindBuffer (GL_ARRAY_BUFFER, vertexBuffer_2);
glBufferData(GL_ARRAY_BUFFER, size, _data_on_cpu_2, GL_DYNAMIC_DRAW);
glVertexAttribPointer (vs_v_data, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
This should let you access a vec3 called v_data inside your vertex shader that contains whatever is stored in vertexBuffer_2, presumably secondary vertex information to lerp between for animation.
A simple shader that repositions vertices based on an input tick:
#version 120
attribute float tick;
attribute vec3 v_data;
void main()
{
    // gl_Vertex is read-only, so write the blended position to gl_Position instead
    vec3 pos = mix(gl_Vertex.xyz, v_data, tick);
    gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 1.0);
}
If you want per entity data instead of/in addition to per vertex data, you should be doing that via texture buffers.
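One possible way to do that (a sketch only; entityCount and entityData are placeholders, and this needs OpenGL 3.1 / GLSL 1.40 or the corresponding extensions) is a buffer texture: a buffer object exposed to the shader as a samplerBuffer that can be read with texelFetch:
// create a buffer object holding per-entity data and expose it as a buffer texture
GLuint tbo, tboTex;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, entityCount * 4 * sizeof(float), entityData, GL_DYNAMIC_DRAW);

glGenTextures(1, &tboTex);
glBindTexture(GL_TEXTURE_BUFFER, tboTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo);  // one RGBA32F texel per entity
In the shader the buffer is then declared as "uniform samplerBuffer entityData;" and read with texelFetch(entityData, entityIndex).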
If you're trying to access vertex buffer object data inside kernels, you need to use a handful of functions:
cudaGraphicsGLRegisterBuffer() This will give you a resource pointer to the buffer; execute this once after you initially set up the VBO.
cudaGraphicsMapResources() This will map the buffer (you can use it in CUDA but not gl)
cudaGraphicsResourceGetMappedPointer() This will give you a device pointer to the buffer; pass this to the kernel.
cudaGraphicsUnmapResources() This will unmap the buffer (you can use it in gl, but not CUDA)
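Putting those together, the host-side flow looks roughly like this (a C++ sketch against the CUDA runtime API; vertexBuffer_2 is the VBO from the question, and the kernel launch is only indicated in a comment):
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

cudaGraphicsResource* vboResource = nullptr;

// once, after the VBO has been created and filled on the GL side
cudaGraphicsGLRegisterBuffer(&vboResource, vertexBuffer_2, cudaGraphicsMapFlagsNone);

// every frame, before the kernel runs
float* devPtr = nullptr;
size_t numBytes = 0;
cudaGraphicsMapResources(1, &vboResource, 0);
cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &numBytes, vboResource);

// launch the kernel with devPtr here; it writes directly into the VBO
// myKernel<<<blocks, threads>>>(devPtr, ...);

// unmap before OpenGL uses the buffer again for drawing
cudaGraphicsUnmapResources(1, &vboResource, 0);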
I've just read through a tutorial about Vertex Array Objects and Vertex Buffer Objects, and I can't work out from the following code how OpenGL knows the first VBO (vertexBufferObjID[0]) represents vertex coordinates, and the second VBO (vertexBufferObjID[1]) represents colour data?
glGenBuffers(2, vertexBufferObjID);
// VBO for vertex data
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObjID[0]);
glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vertices, GL_STATIC_DRAW);
glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
// VBO for colour data
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObjID[1]);
glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), colours, GL_STATIC_DRAW);
glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
Edit: Thanks to Peter's answer, I found the following two lines of code which hook up each VBO with the shaders (indices 0 & 1 correlate to the VBO index):
glBindAttribLocation(programId, 0, "in_Position");
glBindAttribLocation(programId, 1, "in_Color");
It doesn't; you have to tell it which is which in the shader.
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
void main()
{
/* ... */
}
glVertexAttribPointer is only meant to be used with shaders. Therefore you, as the shader writer, always know what each attribute index is supposed to do, as it maps to a certain variable in your vertex shader. You control the index-to-variable mapping with glBindAttribLocation (or query it with glGetAttribLocation), or with the layout(location = N) qualifier inside GLSL.
If you're just using the fixed function pipeline (default shader), then you don't want to use glVertexAttribPointer. Instead for the fixed function pipe the equivalent commands are glVertexPointer and glColorPointer, etc. Note that these are deprecated commands.
Also note that for fixed function glEnableVertexAttribArray is to be replaced with glEnableClientState.
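For reference, the deprecated fixed-function equivalent of the setup above would look roughly like this (a sketch; compatibility profile only), reusing the two buffers from the question:
// fixed-function equivalent (deprecated, compatibility profile only)
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObjID[0]);
glVertexPointer(3, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);

glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObjID[1]);
glColorPointer(3, GL_FLOAT, 0, 0);
glEnableClientState(GL_COLOR_ARRAY);

glDrawArrays(GL_TRIANGLES, 0, 3);  // 9 floats = 3 vertices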