OpenGL: using glVertexAttribPointer

So I created a quad using glBegin(GL_QUADS) and then drew its vertices, and now I want to pass an array of texture coordinates into my shader so that I can apply a texture to the quad.
I'm having some trouble getting the syntax right.
First I made a 2D array of values
GLfloat coords[4][2];
coords[0][0] = 0;
coords[0][1] = 0;
coords[1][0] = 1;
coords[1][1] = 0;
coords[2][0] = 1;
coords[2][1] = 1;
coords[3][0] = 0;
coords[3][1] = 1;
and then I tried to feed it to my shader, where I have an attribute vec2 texcoordIn
GLint texcoord = glGetAttribLocation(shader->programID(), "texcoordIn");
glEnableVertexAttribArray(texcoord);
glVertexAttribPointer(texcoord, ???, GL_FLOAT, ???, ???, coords);
So I'm confused as to what I should put in for parameters to glVertexAttribPointer that I marked with '???' and I'm also wondering if I'm even allowed to represent the texture coordinates as a 2d array like I did in the first place.

The proper values would be
glVertexAttribPointer(
    texcoord,
    2,        /* two components per element */
    GL_FLOAT,
    GL_FALSE, /* don't normalize, has no effect for floats */
    0,        /* distance between elements in sizeof(char), or 0 if tightly packed */
    coords);
"and I'm also wondering if I'm even allowed to represent the texture coordinates as a 2d array like I did in the first place."
If you write it in the very way you did above, i.e. using a statically allocated array, then yes, because the C standard guarantees that the elements of such an array are tightly packed in memory. However, if you use a dynamically allocated array of pointers to pointers, then no.
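To make the distinction concrete, here is a small sketch of mine (not part of the original answer; the names are made up) showing why the static array works and a pointer-to-pointer array does not:
// Statically sized 2D array: all 8 floats are contiguous, so passing
// `coords` (equivalently &coords[0][0]) to glVertexAttribPointer is fine.
float coords[4][2] = { {0,0}, {1,0}, {1,1}, {0,1} };
// sizeof coords[0] == 2 * sizeof(float): each row follows the previous one directly.

// Pointer-to-pointer "2D array": every row is a separate heap allocation,
// so the floats are NOT contiguous and cannot be handed to OpenGL directly.
float **dyn = new float*[4];
for (int i = 0; i < 4; ++i)
    dyn[i] = new float[2];
// dyn[0], dyn[1], ... may live anywhere on the heap; only the pointers are contiguous.
for (int i = 0; i < 4; ++i)
    delete[] dyn[i];
delete[] dyn;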

Related

OpenGL: triangles with element buffer object (EBO) aka GL_ELEMENT_ARRAY_BUFFER

I am trying to render a Kinect depth map in real time and in 3D using OpenGL, in a way efficient enough to possibly scale up and use multiple Kinects.
A frame from the Kinect gives 640*480 3D coordinates. X and Y are static and Z varies each frame depending on the depth of what the Kinect films.
I am trying to modify my GL_ARRAY_BUFFER only partially, since X and Y don't change; I just need to update the Z part of the buffer. That part is easy: I can use glBufferSubData or glMapBuffer to change the buffer partially, so I decided to store all X values together, then all Y values, and all Z values together at the end. That way I can update the whole set of Z values in one block.
The problem is the following: since I have a point cloud of vertices, I want to draw triangles from them, and the easy way I found was using a GL_ELEMENT_ARRAY_BUFFER, which avoids repeating vertices multiple times. But the indices in a GL_ELEMENT_ARRAY_BUFFER read X, Y, Z from the buffer automatically. If I give index 0 to the GL_ELEMENT_ARRAY_BUFFER, I'd like it to take its X from the first X element in the buffer, its Y from the first Y element in the buffer and its Z from the first Z element in the buffer. Since the vertex coordinates are not arranged contiguously like that, it doesn't work.
Is there an alternative way to tell the GL_ELEMENT_ARRAY_BUFFER how to interpret the indices?
I tried to find a way to call glBufferSubData in a scattered fashion (not one big contiguous chunk of memory but rather changing one element in the buffer every 3 steps), but this seems far from optimal.
I'm not entirely sure what the problem is here. Indices stored in a GL_ELEMENT_ARRAY_BUFFER index every enabled vertex attribute array at once, so one index can pull from multiple buffers at the same time. Just set up your separated vertex buffers in your VAO:
glBindBuffer(GL_ARRAY_BUFFER, vbo_X);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, sizeof(float), 0); //< x
glBindBuffer(GL_ARRAY_BUFFER, vbo_Y);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, sizeof(float), 0); //< y
glBindBuffer(GL_ARRAY_BUFFER, vbo_Z);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 1, GL_FLOAT, GL_FALSE, sizeof(float), 0); //< z
Set your indices and draw:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indices_vbo);
glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_INT, 0);
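The index buffer itself only needs to be built and uploaded once. A sketch of that step (my own, reusing indices_vbo and num_indices from the snippet above; the grid-walking code is made up):
// Two triangles per grid cell; vertex (x, y) of the 640*480 grid has index y*640 + x.
std::vector<GLuint> indices;
for (int y = 0; y < 479; ++y) {
    for (int x = 0; x < 639; ++x) {
        GLuint i = y * 640 + x;
        indices.push_back(i);     indices.push_back(i + 1);   indices.push_back(i + 640);
        indices.push_back(i + 1); indices.push_back(i + 641); indices.push_back(i + 640);
    }
}
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indices_vbo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint),
             indices.data(), GL_STATIC_DRAW);
// num_indices in the glDrawElements call is then indices.size()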
And then just recombine the vertex data in your vertex shader
layout(location = 0) in float x_value;
layout(location = 1) in float y_value;
layout(location = 2) in float z_value;
uniform mat4 mvp;
void main() {
    gl_Position = mvp * vec4(x_value, y_value, z_value, 1.0);
}
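As for the partial update mentioned in the question: with Z split into its own buffer, refreshing the depth each frame is a single upload. A sketch of mine (not part of the original answer; depth_z stands for the 640*480 new floats coming from the Kinect):
// Re-upload only the Z buffer each frame; the X and Y buffers stay untouched.
glBindBuffer(GL_ARRAY_BUFFER, vbo_Z);
glBufferSubData(GL_ARRAY_BUFFER, 0,                    // byte offset into vbo_Z
                640 * 480 * sizeof(GLfloat), depth_z); // one full depth frame

// Alternatively, orphan and map the buffer if glBufferSubData stalls:
// glBufferData(GL_ARRAY_BUFFER, 640 * 480 * sizeof(GLfloat), NULL, GL_STREAM_DRAW);
// GLfloat *dst = (GLfloat*)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
// memcpy(dst, depth_z, 640 * 480 * sizeof(GLfloat));
// glUnmapBuffer(GL_ARRAY_BUFFER);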

array vertex_buffer_object must be bound to call this method

Does anyone know why this error is being thrown?
I thought I was binding the VBO when I use glEnableVertexAttribArray?
com.jogamp.opengl.GLException: array vertex_buffer_object must be bound to call this method
at jogamp.opengl.gl4.GL4bcImpl.checkBufferObject(GL4bcImpl.java:39146)
at jogamp.opengl.gl4.GL4bcImpl.checkArrayVBOBound(GL4bcImpl.java:39178)
at jogamp.opengl.gl4.GL4bcImpl.glVertexAttribPointer(GL4bcImpl.java:37371)
This is my code to draw:
public void draw(final GL2ES2 gl, Matrix4f projectionMatrix, Matrix4f viewMatrix, int shaderProgram, final Vec3 position, final float angle) {
    // enable glsl
    gl.glUseProgram(shaderProgram);
    // enable alpha
    gl.glEnable(GL2ES2.GL_BLEND);
    gl.glBlendFunc(GL2ES2.GL_SRC_ALPHA, GL2ES2.GL_ONE_MINUS_SRC_ALPHA);
    // get handle to glsl variables
    mPositionHandle = gl.glGetAttribLocation(shaderProgram, "vPosition");
    setmColorHandle(gl.glGetUniformLocation(shaderProgram, "vColor"));
    mProj = gl.glGetUniformLocation(shaderProgram, "mProj");
    mView = gl.glGetUniformLocation(shaderProgram, "mView");
    mModel = gl.glGetUniformLocation(shaderProgram, "mModel");
    // perform translations
    getModelMatrix().loadIdentity();
    getModelMatrix().translate(new Vec3(position.x * 60.0f, position.y * 60.0f, position.z * 60.0f));
    getModelMatrix().rotate(angle, 0, 0, 1);
    // set glsl variables
    gl.glUniform4fv(getmColorHandle(), 1, getColorArray(), 0);
    gl.glUniformMatrix4fv(mProj, 1, true, projectionMatrix.getValues(), 0);
    gl.glUniformMatrix4fv(mView, 1, true, viewMatrix.getValues(), 0);
    gl.glUniformMatrix4fv(mModel, 1, true, getModelMatrix().getValues(), 0);
    // Enable a handle to the triangle vertices
    gl.glEnableVertexAttribArray(mPositionHandle);
    // Prepare the triangle coordinate data
    gl.glVertexAttribPointer(
            getmPositionHandle(),
            COORDS_PER_VERTEX,
            GL2ES2.GL_FLOAT,
            false,
            vertexStride, 0L); // This is the line that throws error
    // Draw the square
    gl.glDrawElements(
            GL2ES2.GL_TRIANGLES,
            drawOrder.length,
            GL2ES2.GL_UNSIGNED_SHORT,
            0L);
    // Disable vertex array
    gl.glDisableVertexAttribArray(mPositionHandle);
    gl.glDisable(GL2ES2.GL_BLEND);
    gl.glUseProgram(0);
}
(I've never used OpenGL with Java, so I'll use C/C++ code, but I hope it will come across well)
You do not create or bind a Vertex Buffer Object.
First, use glGenBuffers to create a buffer, as so:
GLuint bufferID;
glGenBuffers(1, &bufferID);
This allocates a handle and stores it in bufferID.
Then, bind the buffer:
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
This makes it the "current" buffer to use.
Next, fill the buffer with data. Assuming vertices is an array that stores your vertex coordinates, in flat format, with three floats per vertex:
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), &vertices, GL_STATIC_DRAW);
This actually puts the data in GPU memory.
Then enable the attribute array and set the pointer:
glEnableVertexAttribArray(mPositionHandle);
glVertexAttribPointer(mPositionHandle, 3, GL_FLOAT, 0, 0, 0);
This will make the data in vertices available for shader programs under the vertex attribute location of mPositionHandle.
The second-to-last parameter of glVertexAttribPointer is stride. In this example it is 0, because the buffer contains only vertex position data. If you want to pack both vertex position data and color data into the same buffer, like so:
v1.positionX v1.positionY v1.positionZ v1.colorR v1.colorG v1.colorB
v2.positionX ...
you will need to use a non-zero stride. stride specifies the byte offset between one attribute and the next attribute of the same kind; with a stride of 0, they are assumed to be tightly packed. In this case, you'll want to set a stride of sizeof(GLfloat) * 6, so that after reading one vertex's position, it will skip the color data to arrive at the next vertex's position, and similarly for colors.
// (create, bind and fill vertex buffer here)
glEnableVertexAttribArray(location_handle_of_position_data);
glVertexAttribPointer(location_handle_of_position_data, 3, GL_FLOAT, 0, sizeof(GLfloat) * 6, 0);
glEnableVertexAttribArray(location_handle_of_color_data);
glVertexAttribPointer(location_handle_of_color_data, 3, GL_FLOAT, 0, sizeof(GLfloat) * 6, (const GLvoid*)(sizeof(GLfloat) * 3));
The last parameter is the offset to the first attribute - the first color attribute starts after the third float.
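Equivalently, the same interleaved layout can be expressed with a struct and offsetof, which avoids hand-counting floats (a sketch of mine, not part of the original answer; PackedVertex is a made-up name):
// Requires <cstddef> (or <stddef.h>) for offsetof.
struct PackedVertex {
    GLfloat position[3];
    GLfloat color[3];
};

glEnableVertexAttribArray(location_handle_of_position_data);
glVertexAttribPointer(location_handle_of_position_data, 3, GL_FLOAT, GL_FALSE,
                      sizeof(PackedVertex),
                      (const GLvoid*)offsetof(PackedVertex, position));
glEnableVertexAttribArray(location_handle_of_color_data);
glVertexAttribPointer(location_handle_of_color_data, 3, GL_FLOAT, GL_FALSE,
                      sizeof(PackedVertex),
                      (const GLvoid*)offsetof(PackedVertex, color));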
Other considerations:
You should look into using Vertex Array Objects. It might or might not work without them, but the core profile requires them, and they simplify the code in any case; a minimal sketch follows after these notes.
For the sake of simplicity, this example code stores color data in floats, but for real use bytes are preferable.
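For reference, a minimal sketch of that VAO setup (my own; it reuses bufferID and mPositionHandle from above, and vertexCount is a made-up placeholder):
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// With the VAO bound, the buffer binding captured by glVertexAttribPointer and
// the enabled attribute arrays are recorded in the VAO.
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
glEnableVertexAttribArray(mPositionHandle);
glVertexAttribPointer(mPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, 0);

glBindVertexArray(0); // done recording

// later, when drawing:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount); // vertexCount: however many vertices were uploaded
glBindVertexArray(0);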
glVertexAttribPointer() specifies that data for the attribute should be pulled from the currently bound vertex buffer, using the parameters specified. So you need to call:
gl.glBindBuffer(GL_ARRAY_BUFFER, ...);
before you call glVertexAttribPointer().
glEnableVertexAttribArray() specifies that an array should be used for the vertex attribute. Otherwise, a constant value, specified with calls like glVertexAttrib4f() is used. But it does not specify that the array is in a buffer. And even more importantly, there's no way glVertexAttribPointer() would know which buffer to use for the attribute unless you bind a specific buffer.
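In C terms (a sketch of mine, not the asker's JOGL code; vertexVboId is a made-up buffer name), the required ordering looks like this:
glBindBuffer(GL_ARRAY_BUFFER, vertexVboId);          // the buffer the pointer will refer to
glEnableVertexAttribArray(mPositionHandle);          // attribute is sourced from an array...
glVertexAttribPointer(mPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, 0); // ...at offset 0 of that VBO

// With the array disabled, the attribute instead takes a constant value:
glDisableVertexAttribArray(mPositionHandle);
glVertexAttrib4f(mPositionHandle, 0.0f, 0.0f, 0.0f, 1.0f);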

How to correctly link OpenGL normals for OpenGL shaders

I am trying to do a simple map rendered in OpenGL 2.1 and Qt5. But I'm failing on very basic issues. The one I'm presenting here is surface normals.
I have 4 objects, each made of a single-triangle geometry. A geometry, to keep things simple, is a dynamically allocated array of Vertex, where a Vertex is a pair of QVector3D, a 3D position class predefined in Qt.
struct Vertex
{
    QVector3D position;
    QVector3D normal;
};
I'm computing the normal at a vertex using the cross product of the two vectors going from that vertex to the next and to the previous vertex. Normal computation for the structure seems fine, judging by debugging and printing the results to the console.
QVector3D(-2, -2, -2) has normal QVector3D(0, 0, 1)
QVector3D(2, -2, -2) has normal QVector3D(0, 0, 1)
QVector3D(-2, 2, -2) has normal QVector3D(0, 0, 1)
...
But when I feed the data to the shaders, the results are absurd! Here is a picture of the polygons colored with the normal value at each position:
As in normal maps, red=x, green=y and blue=z. The top left corner of the black square is the origin of the world. As you can see, the normal at some points seems to simply be the position at that point, without the z-value. Can you give me a hint as to what might be wrong? The painting code is:
glUseProgram(program.programId());
glEnableClientState(GL_NORMAL_ARRAY);
program.setUniformValue("modelViewProjectionMatrix", viewCamera);
program.setUniformValue("entityBaseColor", QColor(0,120,233));
program.setUniformValue("sunColor", QColor("white"));
program.setUniformValue("sunBrightness", 1.0f);
static QVector3D tmpSunDir = QVector3D(0.2,-0.2,1.0).normalized();
program.setUniformValue("sunDir",tmpSunDir);
for( size_t i = 0; i < m_numberOfBoundaries; ++i)
{
    glBindBuffer(GL_ARRAY_BUFFER, m_bufferObjects[i]);
    int vertexLocation = program.attributeLocation("vertexPosition");
    program.setAttributeArray( vertexLocation, GL_FLOAT, &(m_boundaries[i].data->position), sizeof(Vertex) );
    program.enableAttributeArray(vertexLocation);
    glVertexAttribPointer( vertexLocation, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0 );
    int vertexNormal = program.attributeLocation("vertexNormal");
    program.setAttributeArray( vertexNormal, GL_FLOAT, &(m_boundaries[i].data->normal), sizeof(Vertex) );
    program.enableAttributeArray(vertexNormal);
    glVertexAttribPointer( vertexNormal, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0 );
    glDrawArrays( GL_POLYGON, 0, m_boundaries[i].sizeOfData );
}
glDisableClientState(GL_NORMAL_ARRAY);
where a boundary is a geometrically connected component of the polygon. program is a QOpenGLShaderProgram, a Qt abstraction for shader programs. Each boundary is bound to a buffer object; the buffer object names are stored in the array m_bufferObjects. Polygon “boundaries” are stored as structs in the array m_boundaries. They have two fields: data, a pointer to the start of the array of vertices for the loop, and sizeOfData, the number of points for the polygon.
Until I get to the real problem of yours, here's something, probably unrelated but just as wrong:
glEnableClientState(GL_NORMAL_ARRAY);
/*...*/
glDisableClientState(GL_NORMAL_ARRAY);
You're using self-defined vertex attributes, so it makes absolutely no sense to use those old fixed-function pipeline client state locations. Use glEnableVertexAttribArray(location_index) instead.
Update
So I finally came around to take a closer look at your code, and your problem is the mix of Qt's abstraction layer and raw OpenGL commands. Essentially it boils down to the fact that you have a VBO bound when making calls to QOpenGLShaderProgram::setAttributeArray followed by a call to glVertexAttribPointer.
One problem is that setAttributeArray internally makes the call to glVertexAttribPointer for you, so your own call to it is redundant and overwrites whatever Qt's call set up. The more severe problem is that you do have a VBO bound via glBindBuffer, so calls to glVertexAttribPointer actually take a byte offset into the VBO data instead of a pointer (in fact, with a VBO bound, passing 0, which in pointer terms used to be a null pointer, yields a perfectly valid data offset). See this answer of mine on why this is all a bit misleading and actually violates the C specification: https://stackoverflow.com/a/8284829/524368
Recent OpenGL versions actually have a new API for specifying attrib array offsets that conforms to the C language specification.
The correct Qt method to use would be QOpenGLShaderProgram::setAttributeBuffer. Unfortunately your code does not show the exact definition of m_boundaries or your calls to glBufferData or glBufferSubData; if I had those I could give you instructions on how to alter your code.
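For illustration only, here is roughly what the Qt-only path could look like inside the loop, assuming each m_bufferObjects[i] already holds the interleaved Vertex array (this is my guess at the shape of the fix, not code from the answer):
glBindBuffer(GL_ARRAY_BUFFER, m_bufferObjects[i]);

// setAttributeBuffer takes a byte offset and stride into the currently bound VBO,
// so no raw glVertexAttribPointer calls are needed.
int vertexLocation = program.attributeLocation("vertexPosition");
program.enableAttributeArray(vertexLocation);
program.setAttributeBuffer(vertexLocation, GL_FLOAT, 0, 3, sizeof(Vertex));

int vertexNormal = program.attributeLocation("vertexNormal");
program.enableAttributeArray(vertexNormal);
program.setAttributeBuffer(vertexNormal, GL_FLOAT, sizeof(QVector3D), // offset of Vertex::normal, assuming no padding
                           3, sizeof(Vertex));

glDrawArrays(GL_POLYGON, 0, m_boundaries[i].sizeOfData);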

OpenGL Batching: Why does my draw call exceed array buffer bounds?

I'm trying to implement some relatively simple 2D sprite batching in OpenGL ES 2.0 using vertex buffer objects. However, my geometry is not drawing correctly, and some error I can't seem to locate is causing the GL ES analyzer in Instruments to report:
Draw Call Exceeded Array Buffer Bounds
A draw call accessed a vertex outside the range of an array buffer in use. This is a serious error, and may result in a crash.
I've tested my drawing with the same vertex layout by drawing single quads at a time instead of batching and it draws as expected.
// This technique doesn't necessarily result in correct layering,
// but for this game it is unlikely that the same texture will
// need to be drawn both in front of and behind other images.
while (!renderQueue.empty())
{
    vector<GLfloat> batchVertices;
    GLuint texture = renderQueue.front()->textureName;
    // find all the draw descriptors with the same texture as the first
    // item in the vector and batch them together, back to front
    for (int i = 0; i < renderQueue.size(); i++)
    {
        if (renderQueue[i]->textureName == texture)
        {
            for (int vertIndex = 0; vertIndex < 24; vertIndex++)
            {
                batchVertices.push_back(renderQueue[i]->vertexData[vertIndex]);
            }
            // Remove the item as it has been added to the batch to be drawn
            renderQueue.erase(renderQueue.begin() + i);
            i--;
        }
    }
    int elements = batchVertices.size();
    GLfloat *batchVertArray = new GLfloat[elements];
    memcpy(batchVertArray, &batchVertices[0], elements * sizeof(GLfloat));
    // Draw the batch
    bindTexture(texture);
    glBufferData(GL_ARRAY_BUFFER, elements, batchVertArray, GL_STREAM_DRAW);
    prepareToDraw();
    glDrawArrays(GL_TRIANGLES, 0, elements / BufferStride);
    delete [] batchVertArray;
}
Other info of plausible relevance: renderQueue is a vector of DrawDescriptors. BufferStride is 4, as my vertex buffer format is interleaved position2, texcoord2: X,Y,U,V...
Thank you.
glBufferData expects its second argument to be the size of the data in bytes. The correct way to copy your vertex data to the GPU would therefore be:
glBufferData(GL_ARRAY_BUFFER, elements * sizeof(GLfloat), batchVertArray, GL_STREAM_DRAW);
Also make sure that the correct vertex buffer is bound when calling glBufferData.
On a performance note, allocating a temporary array is absolutely unnecessary here. Just use the vector directly:
glBufferData(GL_ARRAY_BUFFER, batchVertices.size() * sizeof(GLfloat), &batchVertices[0], GL_STREAM_DRAW);

Use index as coordinate in OpenGL

I want to implement a timeseries viewer that allows a user to zoom and smoothly pan.
I've done some immediate-mode OpenGL before, but that's now deprecated in favor of VBOs. All the examples of VBOs I can find store the XYZ coordinates of each and every point.
I suspect that I need to keep all my data in VRAM in order to get a framerate during pan that can be called "smooth", but I have only Y data (the dependent variable). X is an independent variable which can be calculated from the index, and Z is constant. If I have to store X and Z then my memory requirements (both buffer size and CPU->GPU block transfer) are tripled. And I have tens of millions of data points through which the user can pan, so the memory usage will be non-trivial.
Is there some technique for either drawing a 1-D vertex array, where the index is used as the other coordinate, or storing a 1-D array (probably in a texture?) and using a shader program to generate the XYZ? I'm under the impression that I need a simple shader anyway in the new pipeline model without fixed-function features in order to implement scaling and translation, so if I could combine the generation of the X and Z coordinates with the scaling/translation of Y, that would be ideal.
Is this even possible? Do you know of any sample code that does this? Or can you at least give me some pseudocode saying what GL functions to call in what order?
Thanks!
EDIT: To make sure this is clear, here's the equivalent immediate-mode code, and vertex array code:
// immediate
glBegin(GL_LINE_STRIP);
for( int i = 0; i < N; ++i )
    glVertex2f(i, y[i]);
glEnd();

// vertex array
struct { float x, y; } v[N];
for( int i = 0; i < N; ++i ) {
    v[i].x = i;
    v[i].y = y[i];
}
glVertexPointer(2, GL_FLOAT, 0, v);
glDrawArrays(GL_LINE_STRIP, 0, N);
note that v[] is twice the size of y[].
That's perfectly fine for OpenGL.
Vertex Buffer Objects (VBOs) can store any information you want, in any of the formats GL supports. You can fill a VBO with just a single coordinate per vertex:
glGenBuffers( 1, &buf_id);
glBindBuffer( GL_ARRAY_BUFFER, buf_id );
glBufferData( GL_ARRAY_BUFFER, N*sizeof(float), data_ptr, GL_STATIC_DRAW );
And then bind the proper vertex attribute format for a draw:
glBindBuffer( GL_ARRAY_BUFFER, buf_id );
glEnableVertexAttribArray(0); // hard-coded here for the sake of example
glVertexAttribPointer(0, 1, GL_FLOAT, false, 0, NULL);
In order to use it you'll need a simple shader program. The vertex shader can look like:
#version 130
in float at_coord_Y;
void main() {
    float coord_X = float(gl_VertexID);
    gl_Position = vec4(coord_X, at_coord_Y, 0.0, 1.0);
}
Before linking the shader program, you should bind its at_coord_Y to the attribute index you'll use (=0 in my code):
glBindAttribLocation(program_id,0,"at_coord_Y");
Alternatively, you can ask the program after linking for the index to which this attribute was automatically assigned and then use it:
const int attrib_pos = glGetAttribLocation(program_id,"at_coord_Y");
Good luck!
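To cover the pan/zoom part of the question, one possible extension (my own sketch, not part of the answer above) is to add scale/offset uniforms to that vertex shader and update them per frame from the CPU; the buffer itself never has to be re-uploaded. The uniform names and pan/zoom variables below are made up:
// Assumes the vertex shader above was extended with
//   uniform vec2 u_scale;   // zoom
//   uniform vec2 u_offset;  // pan, applied before scaling
// and computes: gl_Position = vec4((vec2(coord_X, at_coord_Y) + u_offset) * u_scale, 0.0, 1.0);
GLint scale_loc  = glGetUniformLocation(program_id, "u_scale");
GLint offset_loc = glGetUniformLocation(program_id, "u_offset");

glUseProgram(program_id);
// first_sample, visible_samples, y_center, y_range describe the current view;
// only these two uniform calls and the draw range change per frame.
glUniform2f(offset_loc, -(first_sample + visible_samples * 0.5f), -y_center);
glUniform2f(scale_loc, 2.0f / visible_samples, 2.0f / y_range);
glDrawArrays(GL_LINE_STRIP, first_sample, visible_samples);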
Would you really store tens of millions of XY coordinates in VRAM?
I would suggest computing those coordinates on the CPU and passing them to the shader pipeline as uniforms (since the coordinates are fixed with respect to the panned image).
Keep it simple.