I'm confused about using glMultiDrawElementsIndirect(). Could anyone explain the meaning of the cmd struct used by that drawing command?
struct cmd:
count = ..
primCount = ..
firstIndex = ..
baseVertex = ..
baseInstance = ..
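For reference, each record in the indirect buffer has the layout the OpenGL spec describes with this C struct (primCount is the per-draw instance count):

typedef struct {
    GLuint count;         // number of indices to read for this draw
    GLuint primCount;     // instance count (1 for a plain, non-instanced draw)
    GLuint firstIndex;    // offset, in indices, into the bound GL_ELEMENT_ARRAY_BUFFER
    GLint  baseVertex;    // constant added to each index before fetching the vertex
    GLuint baseInstance;  // starting instance for attributes with a non-zero divisor
} DrawElementsIndirectCommand;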
If I have three different triangle strips to draw, would the commands look like this?
for (int n = 0; n < 3; n++)
{
count = indexCount;
primCount = 1;
firstIndex = n * count;
baseVertex = 0;
baseInstance = n;
...
}
Setup Buffers:
protected void setupBuffers()
{
gl.glGenVertexArrays(1, vaoBuff);
gl.glBindVertexArray(vaoBuff.get(0));
gl.glGenBuffers(5, vboBuff);
//bind indirect buffer object
gl.glBindBuffer(GL4.GL_DRAW_INDIRECT_BUFFER, vboBuff.get(0));
gl.glBufferData(GL4.GL_DRAW_INDIRECT_BUFFER, instanceCount * 5 * Integer.SIZE / 8, null, GL4.GL_STATIC_DRAW);
gl.glBufferSubData(...);
//bind draw instance ID in the shader with a buffer object
gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, vboBuff.get(1));
gl.glBufferData(GL4.GL_ARRAY_BUFFER, instanceCount * Integer.SIZE / 8, drawIndexBuff, GL4.GL_STATIC_DRAW);
gl.glVertexAttribIPointer(d_idLoc, 1, GL4.GL_UNSIGNED_INT, 0, 0);
gl.glVertexAttribDivisor(d_idLoc, 1);
gl.glEnableVertexAttribArray(d_idLoc);
//bind vertex data buffer object
gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, vboBuff.get(2));
gl.glBufferData(GL4.GL_ARRAY_BUFFER, instanceCount / column_subdivision * vertBuffSize * Float.SIZE / 8, null, GL4.GL_STATIC_DRAW);
gl.glBufferSubData(...);
gl.glVertexAttribPointer(verPosLoc, 3, GL4.GL_FLOAT, false, 0, 0);
gl.glEnableVertexAttribArray(verPosLoc);
//bind vertex index data buffer object
gl.glBindBuffer(GL4.GL_ELEMENT_ARRAY_BUFFER, vboBuff.get(3));
gl.glBufferData(GL4.GL_ELEMENT_ARRAY_BUFFER, instanceCount / column_subdivision * indexBuffSize * Integer.SIZE / 8, null, GL4.GL_STATIC_DRAW);
gl.glBufferSubData(...);
//bind texture coordinate data buffer object
gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, vboBuff.get(4));
gl.glBufferData(GL4.GL_ARRAY_BUFFER, instanceCount / column_subdivision * texBuffSize * Float.SIZE / 8, null, GL4.GL_STATIC_DRAW);
gl.glBufferSubData(...);
gl.glVertexAttribPointer(tc_inLoc, 2, GL4.GL_FLOAT, false, 0, 0);
gl.glEnableVertexAttribArray(tc_inLoc);
}
Drawing command:
gl.glBindVertexArray(vaoBuff.get(0));
gl.glBindBuffer(GL4.GL_DRAW_INDIRECT_BUFFER, vboBuff.get(0));
gl.glMultiDrawElementsIndirect(GL4.GL_TRIANGLE_STRIP, GL4.GL_UNSIGNED_INT, null, 3, 0);
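In C terms, the indirect buffer bound above simply holds three of those records back to back (stride 0 in the draw call means tightly packed). A minimal sketch of filling it, assuming the struct shown at the top and one indexCount per strip as in the loop above:

DrawElementsIndirectCommand cmds[3];
for (int n = 0; n < 3; n++) {
    cmds[n].count        = indexCount;       // indices per strip
    cmds[n].primCount    = 1;                // one instance per strip
    cmds[n].firstIndex   = n * indexCount;   // strips stored back to back in the index buffer
    cmds[n].baseVertex   = 0;
    cmds[n].baseInstance = n;                // feeds the per-draw ID attribute (divisor = 1)
}
glBufferSubData(GL_DRAW_INDIRECT_BUFFER, 0, sizeof(cmds), cmds);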
Fixed it; there were some problems with the vertex data. Everything else was correct.
Overview
We have an image/movie viewer powered by Qt5 and OpenGL that performs well; however, we find that the draw surface itself consumes a large share of resources no matter what is playing, even when we strip down the shaders.
We've added timing (GL query timers) for all of our own custom tools and cannot locate the source of the additional performance draw. The draw times are quite low considering our use case (~7 ms per frame).
This is an image rendering app, so 2D textures only. We use 2 triangles to cover the viewport and then dynamically generate the fragment shader based on the input requirements. This is for the film industry, where we do a great many things to an image in terms of color and compositing.
Discovery
We've stripped down the fragment shader to near nothing:
#version 330 core
out vec4 outColor;
void main() {
    outColor = vec4(0.5, 0.0, 0.0, 1.0);
}
We've also disabled all PBO usage and have no uniform assignments. All timers read 0-1024 ns for all of their commands (because all of our own GL commands are disabled).
The draw surface only calls paintGL, the Qt paint event for its OpenGL widget, once every ~42 ms (24 fps). Even with this simplicity, we use 70-80% of a GTX 1050 (resolution 3000x2000).
While this card is by no means a powerhouse, we are expecting to see less usage than that for something as simple as a solid color fragment. If we shrink the OpenGL window down to nothing, to only render the additional UI elements, we see ~5% usage. So our OpenGL surface that's doing next to nothing at a fixed frame rate is still consuming ~60-70% of the GPU for a reason we cannot determine.
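For context, the GL timer queries mentioned above are presumably something along these lines (a minimal sketch, not the actual instrumentation):

GLuint query;
glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED, query);
// ... the GL commands being measured ...
glEndQuery(GL_TIME_ELAPSED);

// later, once the result is available
GLuint64 elapsedNs = 0;
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsedNs);  // GPU time in nanoseconds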
Misc Info:
https://forum.qt.io/topic/121179/high-gpu-usage-when-animating/2
Additional Testing
We've attempted an apitrace of the GL commands but nothing jumped out at us. This included turning off the blit from our render framebuffer to the window's buffer. So realistically, all of these are internal Qt-driven OpenGL commands.
10008 glViewport(x = 0, y = 0, width = 3000, height = 1896)
10009 glClearColor(red = 0, green = 0, blue = 0, alpha = 1)
10010 glClear(mask = GL_COLOR_BUFFER_BIT)
10011 glBindVertexArray(array = 1)
10012 glUseProgram(program = 7)
10013 glBindBuffer(target = GL_ARRAY_BUFFER, buffer = 9)
10014 glVertexAttribPointer(index = 0, size = 3, type = GL_FLOAT, normalized = GL_TRUE, stride = 0, pointer = NULL)
10015 glEnableVertexAttribArray(index = 0)
10016 glBindBuffer(target = GL_ARRAY_BUFFER, buffer = 0)
10017 glBindBuffer(target = GL_ARRAY_BUFFER, buffer = 10)
10018 glVertexAttribPointer(index = 1, size = 2, type = GL_FLOAT, normalized = GL_TRUE, stride = 0, pointer = NULL)
10019 glEnableVertexAttribArray(index = 1)
10020 glBindBuffer(target = GL_ARRAY_BUFFER, buffer = 0)
10021 glBindTexture(target = GL_TEXTURE_2D, texture = 1)
10022 glBindBuffer(target = GL_ARRAY_BUFFER, buffer = 9)
10023 glVertexAttribPointer(index = 0, size = 3, type = GL_FLOAT, normalized = GL_TRUE, stride = 0, pointer = NULL)
10024 glEnableVertexAttribArray(index = 0)
10025 glBindBuffer(target = GL_ARRAY_BUFFER, buffer = 0)
10026 glUniformMatrix4fv(location = 4, count = 1, transpose = GL_FALSE, value = {1, 0, 0, 0, 0, 0.8016878, 0, 0, 0, 0, 1, 0, 0, 0.1476793, 0, 1})
10027 glBindBuffer(target = GL_ARRAY_BUFFER, buffer = 10)
10028 glVertexAttribPointer(index = 1, size = 2, type = GL_FLOAT, normalized = GL_TRUE, stride = 0, pointer = NULL)
10029 glEnableVertexAttribArray(index = 1)
10030 glBindBuffer(target = GL_ARRAY_BUFFER, buffer = 0)
10031 glUniform1i(location = 1, v0 = 0)
10032 glUniformMatrix3fv(location = 3, count = 1, transpose = GL_FALSE, value = {1, 0, 0, 0, 1, 0, 0, 0, 1})
10033 glDrawArrays(mode = GL_TRIANGLES, first = 0, count = 6)
10034 glBindTexture(target = GL_TEXTURE_2D, texture = 0)
10035 glPixelStorei(pname = GL_UNPACK_ROW_LENGTH, param = 3000)
10036 glBindTexture(target = GL_TEXTURE_2D, texture = 2)
10037 glTexSubImage2D(target = GL_TEXTURE_2D, level = 0, xoffset = 0, yoffset = 48, width = 3000, height = 1520, format = GL_RGBA, type = GL_UNSIGNED_BYTE, pixels = blob(18240000))
10038 glPixelStorei(pname = GL_UNPACK_ROW_LENGTH, param = 0)
10039 glEnable(cap = GL_BLEND)
10040 glBlendFuncSeparate(sfactorRGB = GL_ONE, dfactorRGB = GL_ONE_MINUS_SRC_ALPHA, sfactorAlpha = GL_ONE, dfactorAlpha = GL_ONE)
10041 glBindTexture(target = GL_TEXTURE_2D, texture = 2)
10042 glBindBuffer(target = GL_ARRAY_BUFFER, buffer = 9)
10043 glVertexAttribPointer(index = 0, size = 3, type = GL_FLOAT, normalized = GL_TRUE, stride = 0, pointer = NULL)
10044 glEnableVertexAttribArray(index = 0)
10045 glBindBuffer(target = GL_ARRAY_BUFFER, buffer = 0)
10046 glUniformMatrix4fv(location = 4, count = 1, transpose = GL_FALSE, value = {1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1})
10047 glBindBuffer(target = GL_ARRAY_BUFFER, buffer = 10)
10048 glVertexAttribPointer(index = 1, size = 2, type = GL_FLOAT, normalized = GL_TRUE, stride = 0, pointer = NULL)
10049 glEnableVertexAttribArray(index = 1)
10050 glBindBuffer(target = GL_ARRAY_BUFFER, buffer = 0)
10051 glUniform1i(location = 1, v0 = 1)
10052 glUniformMatrix3fv(location = 3, count = 1, transpose = GL_FALSE, value = {1, 0, 0, 0, -1, 0, 0, 1, 1})
10053 glDrawArrays(mode = GL_TRIANGLES, first = 0, count = 6)
10054 glBindTexture(target = GL_TEXTURE_2D, texture = 0)
10055 glDisable(cap = GL_BLEND)
10056 glUseProgram(program = 0)
10057 glBindVertexArray(array = 0)
10058 wglSwapBuffers(hdc = 0xffffffff86011399) = TRUE
Any ideas would be excellent. I'm also happy to provide more information if required.
Do you have any idea why this isn't working?
The old immediate mode works but I want to use VAO and VBOs.
(PS: I know the VAO should only be created once, but I build it all in this method for the test. I will move those lines after testing.)
private void allocateIndexBuffer(GL2 graphics, int[] indices) {
int[] id = new int[1];
graphics.glGenBuffers(1, id, 0);
int vboId = id[0];
graphics.glBindBuffer(GL2.GL_ELEMENT_ARRAY_BUFFER, vboId);
IntBuffer buffer = IntBuffer.allocate(indices.length);
buffer.put(0, indices);
buffer.flip();
graphics.glBufferData(GL2.GL_ELEMENT_ARRAY_BUFFER, indices.length, buffer, GL2.GL_DYNAMIC_DRAW);
//graphics.glDeleteBuffers(vboId, buffer); TODO: clean up when on closing
}
private void allocateAttributeBuffer(GL2 graphics, int attribute, float[] data) {
int[] id = new int[1];
graphics.glGenBuffers(1, id, 0);
int vboId = id[0];
graphics.glBindBuffer(GL2.GL_ARRAY_BUFFER, vboId); // just fill vboId once, or replace it each time?
FloatBuffer buffer = FloatBuffer.allocate(data.length);
buffer.put(0, data);
buffer.flip();
graphics.glBufferData(GL2.GL_ARRAY_BUFFER, data.length, buffer, GL2.GL_DYNAMIC_DRAW);
graphics.glVertexAttribPointer(0, 2, GL2.GL_FLOAT, false, 0, 0); //once the buffer is bound
graphics.glEnableVertexAttribArray(0);
graphics.glBindBuffer(GL2.GL_ARRAY_BUFFER, 0);
//graphics.glDeleteBuffers(vboId, buffer); TODO: clean up when on closing
//graphics.glDeleteVertexArrays(vboId, null); TODO: clean up vaos
}
#Override
protected void draw(GL2 graphics) {
String mode = "new";
if (mode.equals("new")) {
float[] vertices = {
bounds.getX(), bounds.getY(),
bounds.getX(), bounds.getY() + bounds.getHeight(),
bounds.getX() + bounds.getWidth(), bounds.getY() + bounds.getHeight(),
bounds.getX() + bounds.getWidth(), bounds.getY(),
};
int[] indices = { 0, 1, 2, 2, 1, 3 };
//creation vao
int[] id = new int[1];
graphics.glGenVertexArrays(1, id, 0);
int vaoId = id[0];
graphics.glBindVertexArray(vaoId);
allocateIndexBuffer(graphics, indices);
allocateAttributeBuffer(graphics, 0, vertices);
graphics.glBindVertexArray(0);
//render
graphics.glBindVertexArray(vaoId);
graphics.glEnableVertexAttribArray(0);
graphics.glDrawElements(GL2.GL_TRIANGLES, indices.length, GL2.GL_UNSIGNED_INT, 0);
graphics.glDisableVertexAttribArray(0);
graphics.glBindVertexArray(0);
graphics.glFlush();
} else if (mode.equals("old")) {
graphics.glColor3f(255, 0, 0);
graphics.glBegin(GL2.GL_QUADS);
graphics.glVertex2f(bounds.getX(), bounds.getY());
graphics.glVertex2f(bounds.getX() + bounds.getWidth(), bounds.getY());
graphics.glVertex2f(bounds.getX() + bounds.getWidth(), bounds.getY() + bounds.getHeight());
graphics.glVertex2f(bounds.getX(), bounds.getY() + bounds.getHeight());
graphics.glEnd();
}
}
The size of the buffer has to be specified in bytes (see glBufferData), and the elements here are 4-byte ints and floats:
graphics.glBufferData(GL2.GL_ELEMENT_ARRAY_BUFFER, indices.length, buffer, GL2.GL_DYNAMIC_DRAW);
must be
graphics.glBufferData(GL2.GL_ELEMENT_ARRAY_BUFFER, indices.length * 4,
        buffer, GL2.GL_DYNAMIC_DRAW);
and
graphics.glBufferData(GL2.GL_ARRAY_BUFFER, data.length, buffer, GL2.GL_DYNAMIC_DRAW);
must be
graphics.glBufferData(GL2.GL_ARRAY_BUFFER, data.length * 4,
        buffer, GL2.GL_DYNAMIC_DRAW);
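Equivalently, Integer.BYTES and Float.BYTES can be used in place of the hard-coded 4, i.e. indices.length * Integer.BYTES and data.length * Float.BYTES.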
I'm working on a particle system and I want to use SSBOs to update the velocity and position of my particles with a compute shader. But it looks like the compute shader reads the same position values on every update call, even though it does update them, because the particles do move in the draw call.
Load particles into SSBOs
// Load Positions
glGenBuffers(1, &m_SSBOpos);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_SSBOpos);
// Allocate the video memory
glBufferData(GL_SHADER_STORAGE_BUFFER, pb.size() * 4 * sizeof(float), NULL, GL_STATIC_DRAW);
GLint bufMask = GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT; // the invalidate makes a big difference when re-writing
float *points = (float *) glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, pb.size() * 4 * sizeof(float), bufMask);
for (int i = 0; i < pb.size(); i++)
{
points[i * 4] = pb.at(i).m_Position.x;
points[i * 4 + 1] = pb.at(i).m_Position.y;
points[i * 4 + 2] = pb.at(i).m_Position.z;
points[i * 4 + 3] = 0;
}
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
// Load Velocities
glGenBuffers(1, &m_SSBOvel);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_SSBOvel);
// Allocate the video memory
glBufferData(GL_SHADER_STORAGE_BUFFER, pb.size() * 4 * sizeof(float), NULL, GL_STATIC_DRAW);
float *vels = (float *)glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, pb.size() * 4 * sizeof(float), bufMask);
for (int i = 0; i < pb.size(); i++)
{
vels[i * 4] = pb.at(i).m_Velocity.x;
vels[i * 4 + 1] = pb.at(i).m_Velocity.y;
vels[i * 4 + 2] = pb.at(i).m_Velocity.z;
vels[i * 4 + 3] = 0;
}
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
Update
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 4, shaderUtil.getSSBOpos());
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 5, shaderUtil.getSSBOvel());
// UPDATE THE PARTICLES
shaderUtil.UseCompute();
glUniform1i(shaderUtil.getDT(), fDeltaTime);
glDispatchCompute(NUM_PARTICLES / WORK_GROUP_SIZE, 1, 1);
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
shaderUtil.DeleteCompute();
Draw
shaderUtil.Use();
glUniformMatrix4fv(glGetUniformLocation(shaderUtil.getProgramID(), "projection"), 1, GL_FALSE, glm::value_ptr(projection));
glUniformMatrix4fv(glGetUniformLocation(shaderUtil.getProgramID(), "modelview"), 1, GL_FALSE, glm::value_ptr(View * Model));
glPointSize(10);
// Render
glBindBuffer(GL_ARRAY_BUFFER, shaderUtil.getSSBOpos());
glVertexPointer(4, GL_FLOAT, 0, (void *)0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_POINTS, 0, NUM_PARTICLES);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
shaderUtil.Delete();
Compute shader
#version 430 compatibility
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_storage_buffer_object : enable
layout(std140, binding = 4) buffer Pos
{
vec4 Positions[]; // array of structures
};
layout(std140, binding = 5) buffer Vel
{
vec4 Velocities[]; // array of structures
};
uniform float dt;
layout(local_size_x = 128, local_size_y = 1, local_size_z = 1) in;
void main()
{
uint numParticule = gl_GlobalInvocationID.x;
vec4 v = Velocities[numParticule];
vec4 p = Positions[numParticule];
vec4 tmp = vec4(0, -9.81, 0,0) + v * (0.001 / (7. / 1000.));
v += tmp ;
Velocities[numParticule] = v;
p += v ;
Positions[numParticule] = p;
}
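One way to check whether the dispatch actually wrote new values is to read the position SSBO back on the CPU right after the update (a debugging sketch only; NUM_PARTICLES and shaderUtil are from the code above):

// make the shader writes visible to glGetBufferSubData
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
std::vector<float> check(NUM_PARTICLES * 4);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, shaderUtil.getSSBOpos());
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0,
                   check.size() * sizeof(float), check.data());
printf("particle 0: %f %f %f\n", check[0], check[1], check[2]);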
Do you know why this happens?
I've defined a block of vertex data like so:
struct vertex {
float x, y, u, v;
};
struct vertex_group {
vertex tl, bl, br, tr;
glm::vec4 color;
vertex_group(float x, float y, float width, float height, glm::vec4 c) {
tl.x = x; tl.y = y + height; tl.u = 0; tl.v = 0;
bl.x = x; bl.y = y; bl.u = 0; bl.v = 1;
br.x = x + width; br.y = y; br.u = 1; br.v = 1;
tr.x = x + width; tr.y = y + height; tr.u = 1; tr.v = 0;
color = c;
}
vertex_group(positioned_letter const& l) :
vertex_group(l.x, l.y, l.width, l.height, l.l.color) {
}
const float * data() const {
return &tl.x;
}
};
The attribute pointers are set like this:
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), nullptr);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void*)(4 * 4 * sizeof(GLfloat)));
And the draw code is invoked like so:
vertex_group vertices(l);
glBindTexture(GL_TEXTURE_2D, g.texture);
glBindBuffer(GL_ARRAY_BUFFER, objects.rect_buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices.data(), GL_STREAM_DRAW);
glDrawArrays(GL_QUADS, 0, 4);
The basic idea is that all four vertices of the quad should use the same data for color, even though they require different values for position and texture coordinates. However, when I set the color to red (1,0,0,1), the results on screen are... not quite right.
Just for reference's sake, if the only changes I make are to the first two sections of code, changing them to the following:
struct vertex {
float x, y, u, v;
};
struct vertex_group {
vertex tl;
glm::vec4 color1;
vertex bl;
glm::vec4 color2;
vertex br;
glm::vec4 color3;
vertex tr;
glm::vec4 color4;
vertex_group(float x, float y, float width, float height, glm::vec4 c) {
tl.x = x; tl.y = y + height; tl.u = 0; tl.v = 0;
bl.x = x; bl.y = y; bl.u = 0; bl.v = 1;
br.x = x + width; br.y = y; br.u = 1; br.v = 1;
tr.x = x + width; tr.y = y + height; tr.u = 1; tr.v = 0;
color1 = color2 = color3 = color4 = c;
}
vertex_group(positioned_letter const& l) :
vertex_group(l.x, l.y, l.width, l.height, l.l.color) {
}
const float * data() const {
return &tl.x;
}
};
(other part)
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), nullptr);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (void*)(4 * sizeof(GLfloat)));
It renders correctly:
So in a nutshell, my question is: I'd like to structure my data (and render with it) like xyuvxyuvxyuvxyuvrgba but the only way I can get it to work is by doing xyuvrgbaxyuvrgbaxyuvrgbaxyuvrgba. How do I set my pointers/call the draw function so that I can use the first method?
You can't do that. But you can achieve this layout using instanced rendering:
xyuvxyuvxyuvxyuv // <- only once
whrgbawhrgbawhrgbawhrgba... // <- repeated per glyph
where w and h are the sizes of each quad that are to be applied in the vertex shader.
Here I split it into two buffers, but you can technically load it all into one buffer. Also, I use the OpenGL 4.5 direct state access (DSA) API here because I think it is easier to use. If you don't have it yet, you can change it to the older bind-based calls accordingly.
float quad[] = {
0, 1, 0, 0,
0, 0, 0, 1,
1, 1, 1, 0,
1, 0, 1, 1,
};
struct Instance {
vec2 size;
// TODO: add index of the glyph you want to render
vec4 color;
};
Instance inst[] = { ... };
int ninst = sizeof(inst)/sizeof(inst[0]);
GLuint quad_buf = ... create buffer from quad[] ...;
GLuint inst_buf = ... create buffer from inst[] ...;
GLuint vao;
glCreateVertexArrays(1, &vao);
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayAttribFormat(vao, 0, 4, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 0, 0); // from 0th buffer
glEnableVertexArrayAttrib(vao, 1);
glVertexArrayAttribFormat(vao, 1, 2, GL_FLOAT, GL_FALSE, offsetof(Instance, size));
glVertexArrayAttribBinding(vao, 1, 1); // from 1st buffer
glEnableVertexArrayAttrib(vao, 2);
glVertexArrayAttribFormat(vao, 2, 4, GL_FLOAT, GL_FALSE, offsetof(Instance, color));
glVertexArrayAttribBinding(vao, 2, 1); // from 1st buffer
glVertexArrayVertexBuffer(vao, 0, quad_buf, 0, sizeof(float)*4); // 0th buffer is the quad
glVertexArrayVertexBuffer(vao, 1, inst_buf, 0, sizeof(Instance)); // 1st buffer for instances
glVertexArrayBindingDivisor(vao, 1, 1); // 1st buffer advances once per instance
// to draw:
glBindTexture(...);
glBindVertexArray(vao);
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, ninst);
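The two buffer creations elided above could look like this in the same DSA style (a sketch; pick usage hints to match how often the instance data changes):

GLuint quad_buf, inst_buf;
glCreateBuffers(1, &quad_buf);
glNamedBufferData(quad_buf, sizeof(quad), quad, GL_STATIC_DRAW);
glCreateBuffers(1, &inst_buf);
glNamedBufferData(inst_buf, sizeof(inst), inst, GL_STREAM_DRAW); // instance data typically changes per frame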
I am having trouble setting the position, normal, and texture coordinate attributes in my shader. I am using meshomatic to load obj files; here is how the attributes are added to a single VBO:
void LoadBuffers(MeshData m)
{
float[] verts, norms, texcoords;
uint[] indices;
m.OpenGLArrays(out verts, out norms, out texcoords, out indices);
GL.GenBuffers(1, out dataBuffer);
GL.GenBuffers(1, out indexBuffer);
// Set up data for VBO.
// We're going to use one VBO for all geometry, and stick it
// in (VVVVNNNNCCCC) order. Non-interleaved.
int buffersize = (verts.Length + norms.Length + texcoords.Length);
float[] bufferdata = new float[buffersize];
vertOffset = 0;
normOffset = verts.Length;
texcoordOffset = (verts.Length + norms.Length);
verts.CopyTo(bufferdata, vertOffset);
norms.CopyTo(bufferdata, normOffset);
texcoords.CopyTo(bufferdata, texcoordOffset);
bool v = false;
for (int i = texcoordOffset; i < bufferdata.Length; i++)
{
if (v)
{
bufferdata[i] = 1 - bufferdata[i];
v = false;
}
else
{
v = true;
}
}
// Load geometry data
GL.BindBuffer(BufferTarget.ArrayBuffer, dataBuffer);
GL.BufferData<float>(BufferTarget.ArrayBuffer, (IntPtr)(buffersize * sizeof(float)), bufferdata,
BufferUsageHint.StaticDraw);
// Load index data
GL.BindBuffer(BufferTarget.ElementArrayBuffer, indexBuffer);
GL.BufferData<uint>(BufferTarget.ElementArrayBuffer,
(IntPtr)(indices.Length * sizeof(uint)), indices, BufferUsageHint.StaticDraw);
}
And here is how I am drawing:
void DrawBuffer()
{
// Push current Array Buffer state so we can restore it later
GL.PushClientAttrib(ClientAttribMask.ClientVertexArrayBit);
GL.ClientActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, diff);
GL.EnableVertexAttribArray(positionIndex);
GL.BindBuffer(BufferTarget.ArrayBuffer, dataBuffer);
GL.VertexAttribPointer(positionIndex, 3, VertexAttribPointerType.Float, false, 0, vertOffset);
GL.EnableVertexAttribArray(texcoordIndex);
GL.BindBuffer(BufferTarget.ArrayBuffer, dataBuffer);
GL.VertexAttribPointer(texcoordIndex, 2, VertexAttribPointerType.Float, false, 0, texcoordOffset);
GL.EnableVertexAttribArray(normalIndex);
GL.BindBuffer(BufferTarget.ArrayBuffer, dataBuffer);
GL.VertexAttribPointer(normalIndex, 3, VertexAttribPointerType.Float, false, 0, normOffset);
// Index array
GL.BindBuffer(BufferTarget.ElementArrayBuffer, indexBuffer);
GL.DrawElements(PrimitiveType.Triangles, m.Tris.Length * 3, DrawElementsType.UnsignedInt, IntPtr.Zero);
// Restore the state
GL.PopClientAttrib();
}
However, my texture coordinates are wrong. It seems that only a single pixel from my texture is used to cover the entire obj. I think I am using GL.VertexAttribPointer(...) incorrectly. What is the second argument, int size?
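For reference, the second argument of VertexAttribPointer is the number of components per vertex for that attribute (1-4, e.g. 3 for a position, 2 for a texture coordinate), and when a VBO is bound the last argument is a byte offset into that buffer. A C-style sketch using the names from the code above (illustrative only):

// positions start at byte 0, normals after all positions, texcoords after all normals
glVertexAttribPointer(positionIndex, 3, GL_FLOAT, GL_FALSE, 0, (void*)(vertOffset * sizeof(float)));
glVertexAttribPointer(normalIndex,   3, GL_FLOAT, GL_FALSE, 0, (void*)(normOffset * sizeof(float)));
glVertexAttribPointer(texcoordIndex, 2, GL_FLOAT, GL_FALSE, 0, (void*)(texcoordOffset * sizeof(float)));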