How to retrieve current vertex positions of triangles - OpenGL

I have a piece of code like the following:
glRotatef(triangle_info.struct_triangle.rot, 0, 0, 1.);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, trian_data_values);
glDrawArrays(GL_TRIANGLES, 0, 3);
I want to know each vertex's position after each transformation. How can I do this?

There is no easy way to get at this data. OpenGL is mostly concerned with drawing, and you normally don't need the transformed positions back; everything will still draw fine without them. Applications that do need this data usually do the math themselves.
One option is to do the transform manually, for each vertex: multiply each vertex by the modelview matrix. If you don't have the modelview matrix at hand (a program that needs this calculation would normally cache it), you can query it with glGetFloatv(GL_MODELVIEW_MATRIX, m).
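A minimal sketch of that manual path, assuming the 2D vertex layout from the question (z = 0, w = 1); glGetFloatv returns the matrix in column-major order:
GLfloat m[16];
glGetFloatv(GL_MODELVIEW_MATRIX, m);

// First vertex of trian_data_values (2 floats per vertex).
float x = trian_data_values[0];
float y = trian_data_values[1];

// Column-major layout: m[0..3] is column 0, m[4..7] is column 1, and so on.
float tx = m[0] * x + m[4] * y + m[12];   // + m[8] * z, with z = 0
float ty = m[1] * x + m[5] * y + m[13];
float tz = m[2] * x + m[6] * y + m[14];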
Alternatively, you can use transform feedback if either your OpenGL version is at least 3.0 or EXT_transform_feedback is supported. That will require you to create and bind a buffer, set the vertex positions as feedback varyings, and call begin/end transform feedback. Lastly, you have to map the buffer to get your data back.
Transform feedback takes more setup, but spares you from doing the math yourself.
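In outline, the transform feedback path looks like this (a sketch assuming a GL 3.0+ context and that drawing goes through a shader program whose vertex shader writes the transformed position to an out variable, here called worldPos; all names are placeholders):
// Tell the linker which varying to capture (must be done before linking).
const char* varyings[] = { "worldPos" };
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);

// Buffer that receives the captured data: 3 vertices x vec4.
GLuint feedbackBuf;
glGenBuffers(1, &feedbackBuf);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, feedbackBuf);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, 3 * 4 * sizeof(GLfloat), NULL, GL_STATIC_READ);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackBuf);

// Capture while drawing (add glEnable(GL_RASTERIZER_DISCARD) if you only want the data).
glBeginTransformFeedback(GL_TRIANGLES);
glDrawArrays(GL_TRIANGLES, 0, 3);
glEndTransformFeedback();

// Map the buffer to read the transformed positions back.
GLfloat* captured = (GLfloat*)glMapBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, GL_READ_ONLY);
// ... use captured[0..11] ...
glUnmapBuffer(GL_TRANSFORM_FEEDBACK_BUFFER);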

Related

How to include a model matrix in a VBO?

I want to send a buffer list (to the GPU/vertex shader) which contains information about vertex position, world position, color, scale, and rotation.
If each of my 3D objects has its transformation-related information in a matrix, how can I pass this array of matrices (in addition to the other vertex data) to the GPU via the VBO(s)?
Updated
Please excuse any typos:
// bind & set vertices.
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, stride, 0);
// bind & set vertex normals.
gl.bindBuffer(gl.ARRAY_BUFFER, vertexNormalsBuffer);
gl.vertexAttribPointer(a_Normal, 3, gl.FLOAT, false, stride, 0);
// because I can't pass in a model matrix via a VBO, I'm trying to pass in my world coordinates.
gl.bindBuffer(gl.ARRAY_BUFFER, worldPositionBuffer);
// not sure why I need to do this, but most tutorials I've read say to do this.
gl.bindBuffer(gl.ARRAY_BUFFER, null);
// bind & draw index buffer.
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, vertexIndexBuffer);
gl.drawElements(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0);
Note that these buffers (vertexBuffer, vertexNormalsBuffer, worldPositionBuffer, vertexIndexBuffer) are a concatenation of all the respective 3D objects in my scene (which I was rendering one-by-one via attributes/uniforms - a naive approach which is much simpler and easier to grasp, yet horribly slow for thousands of objects).
For any values that you need to change frequently while rendering a frame, it can be more efficient to pass them into the shader as an attribute instead of a uniform. This also has the advantage that you can store the values in a VBO if you choose. Note that it's not required to store attributes in VBOs, they can also be specified with glVertexAttrib[1234]f() or glVertexAttrib[1234]fv().
This applies to the transformation matrix like any other value passed into the shader: if it changes very frequently, you should probably make it an attribute. The only slight wrinkle in this case is that we're dealing with a matrix, and attributes have to be vectors. But that's easy to overcome. What is normally passed in as a mat4 can be represented by 3 values of type vec4, where these 3 vectors are the rows of the matrix (using the usual OpenGL convention of multiplying the matrix with a column vector). It would of course take 4 vectors to represent a fully generic 4x4 matrix, but the 4th row of an affine transformation matrix is always (0, 0, 0, 1), so it is not needed for any common transformation type (projection matrices being the exception).
If you want the transformations to be in the VBO, you set up 3 more attributes, the same way you already did for your positions and colors. The values of the attributes you store in the VBO are the rows of the corresponding transformation matrix.
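For example (sketched with desktop GL calls; the WebGL calls gl.vertexAttribPointer/gl.enableVertexAttribArray take the same arguments), assuming a hypothetical interleaved layout of 15 floats per vertex - a vec3 position followed by the three matrix rows:
GLsizei stride = 15 * sizeof(GLfloat);  // vec3 position + 3 x vec4 matrix rows
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glVertexAttribPointer(a_Position,   3, GL_FLOAT, GL_FALSE, stride, (void*)(0));
glVertexAttribPointer(a_XTransform, 4, GL_FLOAT, GL_FALSE, stride, (void*)( 3 * sizeof(GLfloat)));
glVertexAttribPointer(a_YTransform, 4, GL_FLOAT, GL_FALSE, stride, (void*)( 7 * sizeof(GLfloat)));
glVertexAttribPointer(a_ZTransform, 4, GL_FLOAT, GL_FALSE, stride, (void*)(11 * sizeof(GLfloat)));
glEnableVertexAttribArray(a_Position);
glEnableVertexAttribArray(a_XTransform);
glEnableVertexAttribArray(a_YTransform);
glEnableVertexAttribArray(a_ZTransform);
// Every vertex of the same object repeats that object's three matrix rows.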
Then in the vertex shader, you apply the transformation by calculating the dot product of the transformation attribute vectors with your input position. The code could look like this:
attribute vec4 InPosition;
attribute vec4 XTransform;
attribute vec4 YTransform;
attribute vec4 ZTransform;
void main() {
    vec3 eyePosition = vec3(
        dot(XTransform, InPosition),
        dot(YTransform, InPosition),
        dot(ZTransform, InPosition));
    ...
}
There are other approaches to solve this problem in full OpenGL, like using Uniform Buffer Objects. But for WebGL and OpenGL ES 2.0, I think this is the best solution.
Your method is correct and in some ways unavoidable. If you have 1000 different objects that are not static then you will need to (or it is best to) make 1000 draw calls. However, if your objects are static then you can merge them together as long as they use the same material.
Merging static objects is simple. You modify the vertex positions by multiplying by the model matrix in order to transform the vertices into world space. You then render the batch in a single draw call.
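A minimal sketch of that pre-transform step, assuming a column-major 4x4 model matrix and a plain Vec3 vertex struct (both placeholders):
#include <vector>

struct Vec3 { float x, y, z; };

// Appends one object's vertices to the merged batch, transformed into world space.
// 'model' is column-major (OpenGL convention): model[column][row].
void appendTransformed(std::vector<Vec3>& batch, const std::vector<Vec3>& verts,
                       const float model[4][4])
{
    for (const Vec3& v : verts) {
        Vec3 w;
        w.x = model[0][0]*v.x + model[1][0]*v.y + model[2][0]*v.z + model[3][0];
        w.y = model[0][1]*v.x + model[1][1]*v.y + model[2][1]*v.z + model[3][1];
        w.z = model[0][2]*v.x + model[1][2]*v.y + model[2][2]*v.z + model[3][2];
        batch.push_back(w);
    }
}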
If you have many instances of the same object but with different model matrices (i.e. different positions, orientations or scales) then you should use instanced rendering. This will allow you to render all the instances in a single draw call.
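In desktop GL (3.3+) instancing looks roughly like the sketch below; in WebGL 1 the same thing is exposed through the ANGLE_instanced_arrays extension, and in WebGL 2 it is core. The buffer, location and count names are assumptions:
// One buffer holds the per-instance model matrices (a mat4 attribute uses 4 consecutive locations).
glBindBuffer(GL_ARRAY_BUFFER, instanceMatrixBuffer);
for (int i = 0; i < 4; ++i) {
    glEnableVertexAttribArray(locModelMatrix + i);
    glVertexAttribPointer(locModelMatrix + i, 4, GL_FLOAT, GL_FALSE,
                          16 * sizeof(GLfloat), (void*)(i * 4 * sizeof(GLfloat)));
    glVertexAttribDivisor(locModelMatrix + i, 1);  // advance once per instance, not per vertex
}

// Draw every instance of the mesh in a single call.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vertexIndexBuffer);
glDrawElementsInstanced(GL_TRIANGLES, vertexIndexCount, GL_UNSIGNED_SHORT, 0, instanceCount);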
Finally, note that draw calls are not necessarily expensive. What happens is that state changes are deferred until you issue your draw call. For example, consider the following:
gl.drawElements(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0);
gl.drawElements(gl.TRIANGLES, vertexIndexCount, gl.UNSIGNED_SHORT, 0);
The second draw call will be much less taxing on the CPU than the first (try it for yourself). This is because there are no state changes between the two draw calls. If you are just updating the model matrix uniform variable between draw calls then that shouldn't add significantly to the cost. It is possible (and recommended) to minimize state changes by sorting your objects by shader program and by material.
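For the sorting, one simple (hypothetical) scheme is to keep a list of draw items and order it by program first and material second before issuing the draw calls:
#include <algorithm>
#include <tuple>
#include <vector>

struct DrawItem {
    GLuint program;
    GLuint material;   // e.g. a texture id
    GLuint vao;
    GLsizei indexCount;
};

std::vector<DrawItem> items;
// ... fill items ...

// Consecutive items now share as much state as possible, minimizing changes between draws.
std::sort(items.begin(), items.end(), [](const DrawItem& a, const DrawItem& b) {
    return std::tie(a.program, a.material) < std::tie(b.program, b.material);
});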

Post GPU scaling colouring in OpenGL without shaders

I am writing a program to plot the distribution of a stream of live, noisy data. The scene is lit with 3 lights - 2 diffuse and 1 ambient - so that the shape of the surface is revealed once filtering is applied to the data.
Currently, vertical scaling and vertex colour assignment are done by my code before the vertices are sent to the GPU using:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(c_vertex), &(vertex_store[0][0].x));
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(3, GL_FLOAT, sizeof(c_vertex),&(vertex_store[0][0].r));
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(c_vertex),&(vertex_store[0][0].nx));
glDrawElements(GL_TRIANGLES, (max_bins-1)*(max_bins-1)*2*3, GL_UNSIGNED_INT, vertex_order);
The use of older functions is so that I can let the fixed pipeline do the lighting calculations without my having to write a shader [something I have not learned in enough depth to do lighting with 3 sources].
To speed up processing I would like to send unscaled data to the GPU and apply a matrix with X and Z scale of 1 and a Y scale of the appropriate value to make the peaks reach +1 on the Y axis. After this I would then like the GPU to select the colour for each vertex, depending on its post-scaling Y value, from a look-up table which I assume would be a texture map.
Now I know I can do the last paragraph IF I write my own shaders - but that necessitates writing code for lighting, which I want to avoid doing. Is there any way of doing this using the buffers in the drawing code above?
After this I would then like the GPU to select the colour for each vertex, depending on its post-scaling Y value, from a look-up table which I assume would be a texture map.
You really should write your own shaders for that. Writing a shader for 3 light sources isn't more complicated than writing one for a single light and putting a loop around it.
However, I think what you asked for could still be done with the fixed function pipeline. You can use a 1D texture for the colors, enable texturing and the automatic texture coordinate generation, the latter via the glTexGen() family of functions.
In your specific case, the best approach seems to be to set up a GL_OBJECT_LINEAR mapping for s (the first and only texture coordinate that you would need for a 1D texture):
glEnable(GL_TEXTURE_GEN_S);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
What the GL will now do is calculate s as a function of your input vertex coordinates (x, y, z, w) such that:
s = a*x + b*y + c*z + d*w
where a, b, c and d are just some coefficients you can freely choose. I'm assuming your original vertices just need to be scaled along the y direction by a scaling factor V, so you can just set b=V and all the others to zero:
GLfloat coeffs[4] = {0.0f, V, 0.0f, 0.0f};
glTexGenfv(GL_S, GL_OBJECT_PLANE, coeffs);
Then, you just have to enable texture mapping and provide a texture to get the desired lookup.
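A minimal sketch of that remaining texture setup, using a tiny 4-texel RGB ramp as the look-up table (the colours are placeholders; a power-of-two width keeps old GL versions happy):
// Look-up table: blue -> green -> yellow -> red.
const GLfloat lut[4][3] = {
    {0.0f, 0.0f, 1.0f},
    {0.0f, 1.0f, 0.0f},
    {1.0f, 1.0f, 0.0f},
    {1.0f, 0.0f, 0.0f},
};

GLuint lutTex;
glGenTextures(1, &lutTex);
glBindTexture(GL_TEXTURE_1D, lutTex);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 4, 0, GL_RGB, GL_FLOAT, lut);

glEnable(GL_TEXTURE_1D);
// With the default GL_MODULATE texture environment the looked-up colour is multiplied
// with the lit vertex colour, so the fixed-function lighting still applies.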

Matrix operations on only part of vertex buffer (opengl-tutorial.org)

I recently learned that:
glBegin(GL_TRIANGLES);
glVertex3f( 0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f,-1.0f, 0.0f);
glVertex3f( 1.0f,-1.0f, 0.0f);
glEnd();
is not really how its done. I've been working with the tutorials at opengl-tutorial.org which introduced me to VBOs. Now I'm migrating it all into a class and trying to refine my understanding.
My situation is similar to this. I understand how to use matrices for rotations and I could do it all myself and then hand it over to GL and friends. But I'm sure that's far less efficient and would involve more communication with the graphics card. Tutorial 17 on that website shows how to rotate things; it uses:
// Send our transformation to the currently bound shader,
// in the "MVP" uniform
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
glUniformMatrix4fv(ModelMatrixID, 1, GL_FALSE, &ModelMatrix[0][0]);
glUniformMatrix4fv(ViewMatrixID, 1, GL_FALSE, &ViewMatrix[0][0]);
to rotate objects. I assume this is more efficient than anything I could ever produce. What I want is to do something like this, but only multiply the matrix by some of the mesh, without breaking the mesh into pieces (because that would disrupt the triangle-vertex-index relationship and I'd end up stitching it back together manually).
Is there a separate function for that? Is there some higher-level library that handles meshes and bones that I should be using (as some of the replies to the other guy's post seem to suggest)? I don't want to get stuck using something outdated and inefficient again, only to end up redoing everything later.
Uniforms are so named because they are uniform: unchanging over the course of a render call. Shaders can only operate on input values (which are provided per invocation: per-vertex for vertex shaders, per-fragment for fragment shaders, etc.), uniforms (which are fixed for a single rendering call), and global variables (which are reset to their original values for every instantiation of a shader).
If you want to do different stuff for different parts of an object within a single rendering call, you must do this based on input variables, because only inputs change within a single rendering call. It sounds like you're trying to do something with matrix skinning or hierarchies of objects, so you probably want to give each vertex a matrix index or something as an input. You use this index to look up a uniform matrix array to get the actual matrix you want to use.
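A rough sketch of that idea, with hypothetical names: the vertex shader declares something like uniform mat4 boneMatrices[16]; plus a per-vertex attribute aMatrixIndex, and the application side then looks like:
// Upload the whole palette of matrices in one call (here: 16 matrices, column-major).
GLfloat matrices[16][16];   // filled elsewhere; one 4x4 matrix per object/bone
GLint paletteLoc = glGetUniformLocation(program, "boneMatrices[0]");
glUniformMatrix4fv(paletteLoc, 16, GL_FALSE, matrices[0]);

// Per-vertex matrix index stored as a float attribute alongside the positions.
glBindBuffer(GL_ARRAY_BUFFER, matrixIndexBuffer);
glEnableVertexAttribArray(locMatrixIndex);
glVertexAttribPointer(locMatrixIndex, 1, GL_FLOAT, GL_FALSE, 0, 0);

// In the vertex shader the index then selects the matrix, e.g.:
//   gl_Position = MVP * boneMatrices[int(aMatrixIndex)] * vec4(aPosition, 1.0);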
OpenGL is not a scene graph. It doesn't think in meshes or geometry. When you specify a uniform, it won't get "applied" to the mesh. It merely sets a register to be accessed by a shader. Later when you draw primitives from a Vertex Array (maybe contained in a VBO), the call to glDraw… determines which parts of the VA are batched for drawing. It's perfectly possible and reasonable to glDraw… just a subset of the VA, then switch uniforms, and draw another subset.
In any case OpenGL will not change the data in the VA.
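For example, two subsets of the same vertex array drawn with different transforms might look like this (a sketch; mvpA, mvpB, firstCount and secondCount are placeholders, and "MVP" is the uniform from the tutorial code above):
GLint mvpLoc = glGetUniformLocation(program, "MVP");

// First part of the vertex array with one transform...
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvpA);
glDrawArrays(GL_TRIANGLES, 0, firstCount);

// ...then the remainder with another transform; nothing is rebound or re-uploaded.
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvpB);
glDrawArrays(GL_TRIANGLES, firstCount, secondCount);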

per pixel drawing in OpenGL

I used to write small games with 'per pixel' drawing, I mean with some SetPixel(x, y, color) function or such. I am also interested in OpenGL but do not know it much.
Is there a good (fast) way to do per-pixel drawing in OpenGL?
It would be good, for example, to use textured quads as sprites, or a whole application background screen, with the possibility of setting individual pixels with some kind of my own SetPixel routine that I would write... or any other way - but it should be as efficient as it can be.
(I'm especially interested in the basic, fundamental 1.0 version of OpenGL.)
You can set a projection that will map vertex coordinates 1:1 to pixel coordinates:
glViewport(0, 0, window_width, window_height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, window_width, 0, window_height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
From here on, vertex X,Y coordinates are in pixels, with the origin in the lower left corner. In theory you could use immediate mode with GL_POINTS primitives, but it's a much better idea to batch things up. Instead of sending each point individually, create an array of all the points you want to draw:
struct Vertex
{
GLfloat x,y,red,green,blue;
};
std::vector<Vertex> vertices;
/* fill the vertices vector */
You can then point OpenGL at this data…
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
/* Those next two calls don't copy the data, they set a pointer, so vertices must not be deallocated as long as OpenGL points to it! */
glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].x);
glColorPointer(3, GL_FLOAT, sizeof(Vertex), &vertices[0].red);
…and have it access and draw it all with a single call:
glDrawArrays(GL_POINTS, 0, vertices.size());
You really don't want to do this.
Is there a good (fast) way to do per-pixel drawing in OpenGL?
There is no good or fast way to do this. It is highly discouraged due to the speed.
The proper way, although not easy (or in some cases even possible) in OpenGL 1, is to use pixel shaders or blend modes. That is the only correct way; anything else is hacking around the entire system.
Depending on how the data needs to be modified, vertex colors and blend modes may be able to cover some use cases. They won't tint each pixel individually, but they can modify the texture much faster.
To do it, you can draw single-pixel quads (although care must be taken to offset them and handle filtering to prevent blurring) or you can get texture data and manipulate it later. Both will be unbelievably slow, but could function.
Working with the texture data is probably simpler and may be slightly faster.
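If you go the texture route, the usual pattern in old-style GL is to keep a CPU-side pixel buffer, re-upload it with glTexSubImage2D, and draw one screen-sized textured quad. A minimal sketch, assuming a 256x256 power-of-two texture and the pixel-coordinate ortho projection from the answer above:
// CPU-side framebuffer; SetPixel-style writes go into this array.
static GLubyte pixels[256][256][3];

// One-time texture setup.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);

// Per frame: re-upload the changed pixels and draw a quad covering that area of the screen.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(256, 0);
glTexCoord2f(1, 1); glVertex2f(256, 256);
glTexCoord2f(0, 1); glVertex2f(0, 256);
glEnd();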

Replicate in space an object with position and orientation from GPU kernel

Context
I am doing swarm simulation using GPU programming (both OpenCL and CUDA, but not at the same time of course) for scientific purposes.
I use OpenGL for display.
Goal
I would like to draw the same object - namely the swarming particle, which can be a simple triangle in 2D - N times, at different positions and with different orientations, in the most efficient way, knowing that:
the object is always exactly the same
the positions and orientations are calculated on the GPU and thus stored in the GPU memory
the number of particles N can be large
Current solution
So far, to avoid sending the data back to the CPU, I store the position and orientation arrays in VBOs and use:
glBindBuffer(GL_ARRAY_BUFFER, position_vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, velocity_vbo);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_FLOAT, 0, 0);
glDrawArrays(GL_POINTS, 0, N);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
to draw a set of points with color-coded velocity without copying back the arrays to the CPU.
What I would like to do is something like drawing a full object instead of a simple point, in a similar way, i.e. without copying the VBOs back to the CPU.
Basically I would like to store the model of an object on the GPU (a display list? a vertex array?) and use the positions and orientations already on the GPU to draw the object N times, without sending data back to the CPU.
Is this possible, and how? If not, how should I do it?
PS: I like keeping the code clean so I would rather separate the display issues from the swarming kernel.
I believe you can do this with a geometry shader (available in OpenGL 3.2). See this tutorial for specific information.
In your case, you need to set the geometry shader's input type to points and its output type to a triangle strip, and in the geometry shader emit the 3 vertices of your triangle for each incoming point.
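A sketch of such a geometry shader (GLSL 150 / GL 3.2), shown here as a C++ string compiled with glCreateShader(GL_GEOMETRY_SHADER); the triangle size and shape are placeholders, and it assumes the vertex shader writes the particle position to gl_Position:
// Expands every incoming point into a small triangle around it.
const char* geomSrc = R"(
    #version 150
    layout(points) in;
    layout(triangle_strip, max_vertices = 3) out;

    void main() {
        vec4 p = gl_in[0].gl_Position;
        gl_Position = p + vec4(-0.01, -0.01, 0.0, 0.0); EmitVertex();
        gl_Position = p + vec4( 0.01, -0.01, 0.0, 0.0); EmitVertex();
        gl_Position = p + vec4( 0.00,  0.01, 0.0, 0.0); EmitVertex();
        EndPrimitive();
    }
)";

GLuint gs = glCreateShader(GL_GEOMETRY_SHADER);
glShaderSource(gs, 1, &geomSrc, NULL);
glCompileShader(gs);
// ...attach it to the program alongside the vertex and fragment shaders, then link.
// The per-particle orientation could be forwarded from the vertex shader as an extra
// varying and used to rotate the three offsets above.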