glVertexAttribPointer with normalize flag enabled - opengl

I am trying to draw a quad by doing the following:
glVertexAttribPointer(0, 2, GL_UNSIGNED_INT, GL_TRUE, 0, 0x0D4C7488);
where 0x0D4C7488 is the address of the data being passed, and the data (from the Visual Studio memory window) is:
0x0D4C7488 3699537464 875578074
0x0D4C7490 4253028175 875578074
0x0D4C7498 3699537464 2966829591
0x0D4C74A0 3699537464 2966829591
0x0D4C74A8 4253028175 875578074
0x0D4C74B0 4253028175 2966829591
glDrawArrays(GL_TRIANGLES, 0, 6);
However, I don't see any quad being drawn. Also, what happens to the data when the normalize flag is enabled? Help me understand what normalization does to the data and why no quad appears.

Passing a static address you've divined from a debugger or other tool is a red flag. What you should be doing is passing a byte offset into the buffer object that contains your vertex data. If you have just position vectors, this should be 0. Read the OpenGL wiki's vertex specification page for a good understanding of how to do this (it's better than the somewhat cryptic spec pages). Depending on whether you interleave attributes (such as normals and tex coords), this offset can change.
To answer your question about normalization: it governs how integer data is converted to floats. With GL_TRUE, the full integer range is mapped to [0, 1] for unsigned types (or [-1, 1] for signed types), so a GL_UNSIGNED_INT value of 4294967295 becomes 1.0; with GL_FALSE the values are converted to float directly. You can read about this in the vertex specification as well. If you're already using floats, set it to GL_FALSE.
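For illustration, here is a minimal sketch of the intended pattern: put the vertex data in a buffer object and pass a byte offset (0 here) instead of a raw address. The names (quadVerts, vbo) are hypothetical, and the data is plain floats, so normalization is off:

// Hypothetical data: six 2D vertices forming a quad from two triangles.
GLfloat quadVerts[12] = {
    -0.5f, -0.5f,    0.5f, -0.5f,   -0.5f,  0.5f,
    -0.5f,  0.5f,    0.5f, -0.5f,    0.5f,  0.5f
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVerts), quadVerts, GL_STATIC_DRAW);

// With a buffer bound, the last argument is a byte offset into that
// buffer, not a client-memory address. Floats need no normalization.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, 6);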

glDrawRangeElements doesn't draw the chosen range

I want to draw two cubes with a rectangle between them, so I stored the vertex data in a VBO, then I created an EBO (Element Buffer Object) to avoid duplicating vertices (42 vs 12).
I need to draw them separately, because I want the rectangle to reflect the upper cube, doing a stencil test and disabling the depth mask while drawing the rectangle.
I thought I could draw the first cube with a glDrawElements call
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0);
then, to draw the rectangle, I'm trying to use glDrawRangeElements:
glDrawRangeElements(GL_TRIANGLES, 36, 41, 6, GL_UNSIGNED_INT, 0);
but it is just drawing the base of the cube.
For the last cube I am using the same draw call as the first, just inverted along the z axis.
I think I did something wrong with the glDrawRangeElements parameters, because I tried drawing both the first cube and the rectangle with just one call
glDrawElements(GL_TRIANGLES, 42, GL_UNSIGNED_INT, 0);
and it works.
What's wrong with that glDrawRangeElements call?
EDIT: I solved it by replacing the glDrawRangeElements call with a simple glDrawArrays call, rearranging the rectangle's vertices to draw two triangles.
glDrawRangeElements doesn't do what you think it does. Its functionality is identical to glDrawElements; the only difference is that glDrawRangeElements takes a range that acts as a hint to the implementation as to which vertices you'll be using. In particular, the range does not select which indices are read: your call still passes 0 as the indices offset, so it reads the first 6 indices in the element buffer, which is why you only see the base of the cube.
See, because your indices are in an array, the driver doesn't automatically know what section of the vertex data you're using. You use glDrawRangeElements as a potential performance enhancer; it lets you tell the driver what range of vertices your draw call uses.
Nowadays, glDrawRangeElements is pointless. See, the range used to matter, because implementations used to read vertex arrays from CPU memory. So when you did a regular glDrawElements, the driver had to read your index buffer, figure out what the range of vertex data was, and then copy that data from your vertex buffers into GPU memory and then issue the draw call. The Range version allows it to skip the expensive index reading step.
That doesn't matter much anymore, since now we store vertex data in buffer objects on the GPU. So you shouldn't be using glDrawRangeElements at all.
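For the record, a sketch of what the rectangle draw likely needed instead, assuming its 6 indices are stored directly after the cube's 36 in the element buffer (the indices argument is a byte offset when an element buffer is bound):

// Cube: 36 indices starting at byte offset 0.
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, (void*)0);
// Rectangle: 6 indices starting after the cube's 36.
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (void*)(36 * sizeof(GLuint)));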

Is there a way to create the verts/indices of a cube that can be well represented by line drawing mode as well as render correctly in triangles mode?

Attempting to switch the drawing mode to GL_LINES, GL_LINE_STRIP or GL_LINE_LOOP when your cube's vertex data is constructed mainly for use with GL_TRIANGLES produces some interesting results, but none that provide a good wireframe representation of the cube.
Is there a way to construct the cube's vertex and index data so that simply toggling the draw mode between GL_LINES/GL_LINE_STRIP/GL_LINE_LOOP and GL_TRIANGLES provides nice results? Or is the only way to get a good wireframe to re-create the vertices specifically for use with one of the line modes?
The most practical approach is most likely the simplest one: Use separate index arrays for line and triangle rendering. There is certainly no need to replicate the vertex attributes, but drawing entirely different primitive types with the same indices sounds highly problematic.
To implement this, you could use two different index (GL_ELEMENT_ARRAY_BUFFER) buffers. Or, more elegantly IMHO, use a single buffer and store both sets of indices in it. Say you need triIdxCount indices for triangle rendering and lineIdxCount for line rendering. You can then set up your index buffer with:
// Allocate storage for both index sets...
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuf);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             (triIdxCount + lineIdxCount) * sizeof(GLushort), NULL,
             GL_STATIC_DRAW);
// ...then fill the first part with the triangle indices...
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER,
                0, triIdxCount * sizeof(GLushort), triIdxArray);
// ...and the second part with the line indices.
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER,
                triIdxCount * sizeof(GLushort), lineIdxCount * sizeof(GLushort),
                lineIdxArray);
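As a concrete example of the two index sets, consider a cube whose 8 corners are numbered so that bit 0 selects x, bit 1 selects y, and bit 2 selects z. This numbering, and the winding order, are just one possible choice; adjust for your culling settings:

// 12 edges * 2 indices for GL_LINES rendering.
GLushort lineIdxArray[24] = {
    0,1, 2,3, 4,5, 6,7,   // edges along x
    0,2, 1,3, 4,6, 5,7,   // edges along y
    0,4, 1,5, 2,6, 3,7    // edges along z
};
// 6 faces * 2 triangles * 3 indices for GL_TRIANGLES rendering.
GLushort triIdxArray[36] = {
    0,1,3, 0,3,2,   // z = 0 face
    4,6,7, 4,7,5,   // z = 1 face
    0,4,5, 0,5,1,   // y = 0 face
    2,3,7, 2,7,6,   // y = 1 face
    0,2,6, 0,6,4,   // x = 0 face
    1,5,7, 1,7,3    // x = 1 face
};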
Then, when you're ready to draw, set up all your state, including the index buffer binding (ideally using a VAO for all of the state setup), and render conditionally:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuf);
if (renderTri) {
    glDrawElements(GL_TRIANGLES, triIdxCount, GL_UNSIGNED_SHORT, 0);
} else {
    // The line indices start after the triangle indices; the offset is in bytes.
    glDrawElements(GL_LINES, lineIdxCount, GL_UNSIGNED_SHORT,
                   (void*)(triIdxCount * sizeof(GLushort)));
}
From a memory usage point of view, having two sets of indices is a moderate amount of overhead. The actual vertex attribute data is normally much bigger than the index data, and the key point here is that the attribute data is not replicated.
If you don't strictly need to render lines, but just have a requirement for wireframe-style rendering, there are other options. There is for example an elegant approach (I've never implemented it myself, but it looks clever) where you only draw pixels close to the boundary of polygons, and discard the interior pixels in the fragment shader based on the distance to the polygon edge. This question (where I contributed an answer) elaborates on the approach: Wireframe shader - Issue with Barycentric coordinates when using shared vertices.

Can OpenGL be used to draw real valued triangles into buffer?

I need to implement an image reconstruction which involves drawing triangles in a buffer representing the pixels in an image. These triangles are assigned some floating point value to be filled with. If triangles are drawn such that they overlap, the values of the overlapping regions must be added together.
Is it possible to accomplish this with OpenGL? I would like to take advantage of the fact that rasterizing triangles is a basic graphics task that can be accelerated on the graphics card. I have a cpu-only implementation of this algorithm already but it is not fast enough for my purposes. This is due to the huge number of triangles that need to be drawn.
Specifically my questions are:
Can I draw triangles with a real value using OpenGL? (Or can I come up with a hack using color etc.?)
Can OpenGL add the values where triangles overlap? (Once again I could deal with a hack, like color mixing)
Can I recover the real values for the pixels as an array of floats or similar to be further processed?
Do I have misconceptions about the idea that drawing in OpenGL -> using GPU to draw -> likely faster execution?
Additionally, I would like to run this code on a virtual machine, so getting acceleration to work with OpenGL seems more feasible than rolling my own implementation in something like CUDA, as far as I understand. Is this true?
EDIT: Is an accumulation buffer an option here?
If 32-bit floats are sufficient then it looks like the answer is yes: http://www.opengl.org/wiki/Image_Format#Required_formats
Even under the old fixed pipeline you could use a blending mode with the blend equation GL_FUNC_ADD (see the sketch after this list), though I'm sure fragment shaders make this easier now.
glReadPixels() will get you the data back out of the buffer after drawing.
There are software implementations of OpenGL, but you get to choose when you set up the context. Using the GPU should be much faster than the CPU.
No idea. I've never used OpenGL or CUDA on a VM. I've never used CUDA at all.
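To make the additive-blending answer concrete, here is a minimal sketch of the state needed so that overlapping fragments are summed, plus the readback mentioned above. It assumes width, height and a single-channel float framebuffer are already set up:

// Every fragment written is added to what is already in the
// framebuffer: dst = 1 * src + 1 * dst.
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);

// ... draw the triangles ...

// Read the accumulated float values back to CPU memory.
std::vector<float> pixels(width * height);
glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, pixels.data());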
I guess giving pieces of code as an answer wouldn't be appropriate here as your question is extremely broad. So I'll simply answer your questions individually with bits of hints.
Yes, drawing triangles with OpenGL is a piece of cake. You provide three vertices per triangle, and with the proper shaders you can draw triangles, filled or just the edges, whatever you want. You seem to require a large range of values (bigger than [0, 255]) since many triangles may overlap and each may contribute a value bigger than one. This is not a problem: you can render to a one-channel, 32-bit float framebuffer. In your case a single channel should suffice.
Yes, blending has existed in OpenGL practically forever. So whatever version of OpenGL you choose to use, there will be a way to add up the values of overlapping triangles.
Yes, depending on how you implement it, you may have to use glGetBufferSubData(), glReadPixels() or something else. However, depending on the size of the buffer you're filling, downloading the whole thing may take a while (2000x1000 pixels for one channel at 32 bits would be around 4-5 ms). It may be more efficient to do all your processing on the GPU and extract only the few valuable results, instead of continuing the processing on the CPU.
The execution will undoubtedly be faster. However, downloading data from GPU memory is often not as optimized as uploading, so the time you win on processing may be lost on the download. I've never worked with OpenGL on a VM, so I can't say how much additional performance is lost there. That said, here is a rough sketch of the initialization code:
//Struct definition: one vertex (2D position + intensity).
struct Triangle {
    float position[2];
    float intensity;
};
//Init: upload the vertex data into a VBO.
glGenBuffers(1, &m_buffer);
glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
glBufferData(GL_ARRAY_BUFFER,
             triangleVector.size() * sizeof(Triangle),
             triangleVector.data(),
             GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//Describe the vertex layout in a VAO.
glGenVertexArrays(1, &m_vao);
glBindVertexArray(m_vao);
glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
glEnableVertexAttribArray(POSITION);
glVertexAttribPointer(
    POSITION,
    2,
    GL_FLOAT,
    GL_FALSE,
    sizeof(Triangle),
    (void*)0);
glEnableVertexAttribArray(INTENSITY);
glVertexAttribPointer(
    INTENSITY,
    1,
    GL_FLOAT,
    GL_FALSE,
    sizeof(Triangle),
    (void*)(sizeof(float) * 2));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
//Create a one-channel 32-bit float texture to accumulate into.
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexImage2D(
    GL_TEXTURE_2D,
    0,
    GL_R32F,
    width,
    height,
    0,
    GL_RED,
    GL_FLOAT,
    NULL);
glBindTexture(GL_TEXTURE_2D, 0);
//Attach the texture to a framebuffer object as the render target.
glGenFramebuffers(1, &m_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glFramebufferTexture(
    GL_FRAMEBUFFER,
    GL_COLOR_ATTACHMENT0,
    m_texture,
    0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
After that you just need to write your render function; a simple glDraw*() call should be enough. Just remember to bind your buffers correctly and to enable blending with the proper function. You might also want to disable antialiasing for your case. At first glance I'd say you need an orthographic projection, but I don't have all the elements of your problem, so that's up to you.
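For illustration, a minimal sketch of such a render function under the assumptions above (additive blending and the m_* objects from the init code; m_program is a hypothetical shader program that writes the intensity attribute to the red channel):

// Render into the float framebuffer, summing overlapping triangles.
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glViewport(0, 0, width, height);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);

glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);   // dst = dst + src

glUseProgram(m_program);
glBindVertexArray(m_vao);
// One element of triangleVector per vertex (see the struct above).
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)triangleVector.size());
glBindVertexArray(0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);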
Long story short: if you've never worked with OpenGL, the code above will only make sense after you've read some documentation/tutorials on OpenGL/GLSL.

How to retrieve current vertices position of triangles

I have a piece of code, which is the following:
glRotatef(triangle_info.struct_triangle.rot, 0, 0, 1.);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, trian_data_values);
glDrawArrays(GL_TRIANGLES, 0, 3);
I want to know each vertex position after each transformation. How can I do this?
There is no easy way to get to this data. OpenGL is mostly concerned with drawing, and you normally don't need that data; everything will still draw fine without it. Applications that do need this data usually do the math themselves.
One option is to do the transform manually, for each vertex: multiply each vertex by the modelview matrix. If you don't have the modelview matrix at hand (a program needing this calculation would normally cache that value), you can query it with glGetFloatv(GL_MODELVIEW_MATRIX, ...).
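A minimal sketch of that manual route for the code above, using the three 2D vertices from trian_data_values (everything else is illustrative). OpenGL stores the matrix column-major, and for 2D vertices z = 0 and w = 1, so most terms drop out:

// Fetch the current modelview matrix (16 floats, column-major).
GLfloat m[16];
glGetFloatv(GL_MODELVIEW_MATRIX, m);

for (int i = 0; i < 3; ++i) {
    GLfloat x = trian_data_values[2 * i];
    GLfloat y = trian_data_values[2 * i + 1];
    // With z = 0 and w = 1, only the x/y columns and the
    // translation column m[12..13] contribute.
    GLfloat tx = m[0] * x + m[4] * y + m[12];
    GLfloat ty = m[1] * x + m[5] * y + m[13];
    printf("vertex %d: (%f, %f)\n", i, tx, ty);
}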
Alternatively, you can use transform feedback if either your OpenGL version is at least 3.0 or EXT_transform_feedback is supported. That will require you to create and bind a buffer, set the vertex positions as feedback varyings, and call begin/end transform feedback. Lastly, you have to map the buffer to get your data back.
Transform feedback is more setup, but spares you from doing the math yourself.

Replicate in space an object with position and orientation from GPU kernel

Context
I am doing swarm simulation using GPU programming (both OpenCL and CUDA,
but not at the same time, of course) for scientific purposes.
I use OpenGL for display.
Goal
I would like to draw the same object (namely the swarming particle, which can be a simple triangle in 2D) N times, at different positions and with
different orientations, in the most efficient way, knowing that:
the object is always exactly the same
the positions and orientations are calculated on the GPU and thus stored in the GPU memory
the number of particles N can be large
Current solution
So far, to avoid sending back the data to the CPU, I store the position and
orientation arrays in a VBO and use:
glBindBuffer(GL_ARRAY_BUFFER, position_vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, velocity_vbo);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_FLOAT, 0, 0);
glDrawArrays(GL_POINTS, 0, N);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
to draw a set of points with color-coded velocity without copying back the arrays to the CPU.
What I would like to do is something like drawing a full object instead of a simple point
using a similar way ie without copying back the VBO's to the CPU.
Basically I would like to store on the GPU the model of an object
(a Display List? a Vertex Array?) and to use the positions and orientations on the GPU
to draw the object N times without sending data back to the CPU.
Is this possible, and how? If not, how should I do it?
PS: I like keeping the code clean so I would rather separate the display issues from the swarming kernel.
I believe you can do this with a geometry shader (available since OpenGL 3.2). See this tutorial for specific information.
In your case, you need to set the geometry shader's input type to points and its output type to a triangle strip, and in your geometry shader emit the 3 vertices of your triangle for each incoming point vertex.
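As a sketch of what that could look like (GLSL 1.50; the per-particle angle input vAngle and the template triangle are illustrative assumptions, not part of the question):

#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 3) out;

in float vAngle[];   // per-particle orientation, passed through the vertex shader

void main() {
    vec4 center = gl_in[0].gl_Position;
    float c = cos(vAngle[0]);
    float s = sin(vAngle[0]);
    mat2 rot = mat2(c, s, -s, c);   // 2D rotation by vAngle[0]

    // A small template triangle, rotated and translated per particle.
    vec2 offsets[3] = vec2[3](vec2(0.0, 0.02), vec2(-0.01, -0.01), vec2(0.01, -0.01));
    for (int i = 0; i < 3; ++i) {
        gl_Position = center + vec4(rot * offsets[i], 0.0, 0.0);
        EmitVertex();
    }
    EndPrimitive();
}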