Say I have a simple OpenGL triangle like this:
glBegin(GL_TRIANGLES);
//1
glColor3f(1, 0, 0);
glVertex3f(0.5, 0, 0);
//2
glColor3f(0, 1, 0);
glVertex3f(0, 1, 0);
//3
glColor3f(0, 0, 1);
glVertex3f(1, 1, 0);
glEnd();
In a GLSL fragment shader I can use the interpolated fragment color to determine my distance from each vertex. In this example the red component of the color determines the distance from the first vertex, green the distance from the second, and blue from the third.
Is there a way I can determine these distances in the shader without passing vertex data such as texture coordinates or colors?
Not in standard OpenGL. There are two vendor-specific extensions:
AMD_shader_explicit_vertex_parameter
NV_fragment_shader_barycentric
which will give you access to the barycentric coordinates within the primitive. But without such extensions, there are only very clumsy ways to get this data to the FS, and each will have significant drawbacks. Here are some ideas:
You could use per-vertex attributes as you already suggested, but in real meshes, it will require a lot of additional vertex splitting to get the values right.
You could use geometry shaders to generate those attribute values on the fly, but that will come with a huge performance hit as geometry shaders really don't perform well.
You could make your vertex data available to the FS (for example via an SSBO) and calculate the barycentric coordinates yourself from gl_FragCoord and the relevant vertex positions. But this requires getting information about which vertices were used to the FS, which might require extra data structures (e.g. some triangle and/or vertex index lookup table based on gl_PrimitiveID).
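For reference, with one of those extensions available the fragment shader can read the coordinates directly. A minimal sketch, assuming a driver that exposes NV_fragment_shader_barycentric:

#version 450
#extension GL_NV_fragment_shader_barycentric : require

out vec4 fragColor;

void main()
{
    // gl_BaryCoordNV holds the perspective-corrected barycentric weights of
    // this fragment with respect to the three vertices of the primitive.
    fragColor = vec4(gl_BaryCoordNV, 1.0);
}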
Let's say there's a color render texture that is 1000 px wide, 500 px tall. And I draw a quad with vertices at the four corners (-1, -1, 0), (1, -1, 0), (-1, 1, 0), (1, 1, 0) to it without any transformation in the vertex shader.
Will this always cover the entire texture's surface by default, assuming no other GL functions were called prior to this sole draw command?
What OpenGL functions (that modify vertex positions) could cause this quad to no longer fill the screen?
(I'm trying to understand how vertices can be altered prior to the vertex shader, so I can avoid or use the right functions and ensure that NDC (-1, -1) to (1, 1) always represents the entire surface.)
edit: If the positions are not altered, then I'm also wondering how their mapping to a render buffer might be modified prior to the vertex shader. For instance, will (-1, -1, 0) reliably refer to a fragment at the bottom-left of the render buffer, (0, 0, 0) to the middle, and (1, 1, 0) to the top-right?
Nothing happens to vertex data "prior to the vertex shader". Nothing can happen to it, because OpenGL doesn't know what the vertex attributes mean. It doesn't know what attribute 2 refers to; it doesn't know what is a position, normal, texture coordinate or anything. As far as OpenGL is concerned, it's all just data. What gives that data meaning is your vertex shader. And only in the way defined by your vertex shader.
Data from buffer objects are read in accord with the format specified by your VAO, and are given to the vertex shader invocations which process those vertices.
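For illustration, here is the kind of pass-through vertex shader the question implies; nothing upstream of it has interpreted the positions, and it is only this shader that turns attribute 0 into a clip-space position:

#version 330 core
layout(location = 0) in vec3 aPos; // just "attribute 0" until this shader uses it as a position

void main()
{
    // The quad's corners pass straight through; with w = 1 they are
    // already the NDC corners (-1, -1) to (1, 1).
    gl_Position = vec4(aPos, 1.0);
}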
I am trying to implement a custom interpolation technique in GLSL shader.
I have switched off the default OpenGL interpolation by using the flat interpolation qualifier for my texture coordinates. I followed the technique specified in the link below:
How to switch off default interpolation in OpenGL
While rasterizing, each primitive now gets the value from its provoking vertex, with no interpolation in between.
Is it possible for me to introduce my own interpolation mechanism to decide on the colors filled in between the vertices of a triangle? Or is this hardcoded in OpenGL?
I am a newbie in the GLSL world, so please keep the answer uncomplicated.
Interpolation is hard-coded into OpenGL. If you want to do your own interpolation, you will have to provide to the fragment shader:
The barycentric coordinates for that particular fragment. That can be done by passing, for the three vertices of the triangle, vec3(1, 0, 0), vec3(0, 1, 0), and vec3(0, 0, 1).
All three uninterpolated values for your triangle's data that you wish to interpolate. This will require 3 flat input variables in your FS (or an array of 3 inputs, which is the same thing).
Of course, you'll need to match the vec3(1, 0, 0) vertex with the particular uninterpolated value for that vertex of the triangle. And the same goes for the other two indices.
This basically requires a geometry shader, since it's very difficult for a VS to pass barycentric coordinates or to provide the right uninterpolated data. But with the barycentric coordinates of the position, and the values to be interpolated, you should be able to implement whatever interpolation scheme you like. Item 2 could include more than 3 values, for example.
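As a rough sketch of the fragment-shader side (the variable names are illustrative, and the inputs are assumed to be fed in by a geometry shader as described above):

#version 330 core

in vec3 bary;            // smoothly interpolated barycentric weights
flat in vec3 corner[3];  // the three uninterpolated per-vertex values

out vec4 fragColor;

void main()
{
    // Ordinary linear interpolation, reproduced by hand; replace this line
    // with whatever custom weighting scheme you want.
    vec3 v = bary.x * corner[0] + bary.y * corner[1] + bary.z * corner[2];
    fragColor = vec4(v, 1.0);
}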
The usual approach when you need a custom interpolation function is to use the standard interpolation for a texture coordinate, and then look up whatever data you want in a texture.
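A sketch of that idea (the sampler and varying names are made up):

#version 330 core

in vec2 uv;                  // interpolated as usual
uniform sampler2D dataTex;   // holds whatever values you want looked up

out vec4 fragColor;

void main()
{
    fragColor = texture(dataTex, uv);
}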
I am writing a program to plot the distribution of a stream of live, noisy data.
The scene is lit with 3 lights (2 diffuse and 1 ambient) to allow the modelling of the surface to be revealed once filtering is applied to the data.
Currently, vertical scaling and vertex colour assignment are done by my code before sending the vertices to the GPU using:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(c_vertex), &(vertex_store[0][0].x));
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(3, GL_FLOAT, sizeof(c_vertex),&(vertex_store[0][0].r));
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(c_vertex),&(vertex_store[0][0].nx));
glDrawElements(GL_TRIANGLES, (max_bins-1)*(max_bins-1)*2*3, GL_UNSIGNED_INT, vertex_order);
The use of older functions is so that I can let the fixed pipeline do the lighting calculations without me writing a shader [something I have not done to the depth needed to do lighting with 3 sources].
To speed up processing I would like to send unscaled data to the GPU and apply a matrix with X and Z scales of 1 and a Y scale of the appropriate value to make the peaks reach +1 on the Y axis. After this I would then like the GPU to select the colour for each vertex depending on its post-scaling Y value from a look-up table, which I assume would be a texture map.
Now I know I can do the last paragraph IF I write my own shaders, but that necessitates writing code for lighting, which I want to avoid doing. Is there any way of doing this using the buffers in the drawing code above?
After this I would then like the GPU to select the colour for the vertex depending on its post-scaling Y value from a look-up table which I assume would be a texture map.
You really should write your own shaders for that. Writing a shader for 3 light sources isn't any more complicated than writing one for a single light and putting a loop around it.
However, I think what you asked for could still be done with the fixed function pipeline. You can use a 1D texture for the colors, enable texturing and the automatic texture coordinate generation, the latter via the glTexGen() family of functions.
In your specific case, the best approach seems to be to set up a GL_OBJECT_LINEAR mapping for s (the first and only texture coordinate that you would need for a 1D texture):
glEnable(GL_TEXTURE_GEN_S);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
What the GL will now do is calculate s as a function of your input vertex coordinates (x, y, z, w) such that:
s = a*x + b*y + c*z + d*w
where a, b, c and d are just some coefficients you can freely choose. I'm assuming your original vertices just need to be scaled along the y direction by a scaling factor V, so you can set b = V and all others to zero:
GLfloat coeffs[4] = {0.0f, V, 0.0f, 0.0f};
glTexGenfv(GL_S, GL_OBJECT_PLANE, coeffs);
Then, you just have to enable texture mapping and provide a texture to get the desired lookup.
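Putting the pieces together, a minimal sketch might look like the following (the ramp contents and length are placeholders for your own palette, and V is your scale factor):

GLuint paletteTex;
glGenTextures(1, &paletteTex);
glBindTexture(GL_TEXTURE_1D, paletteTex);

/* A small RGB colour ramp acting as the look-up table. */
GLubyte ramp[4][3] = { {0,0,255}, {0,255,0}, {255,255,0}, {255,0,0} };
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 4, 0, GL_RGB, GL_UNSIGNED_BYTE, ramp);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);

/* Generate s = V * y from the object-space vertex position. */
glEnable(GL_TEXTURE_GEN_S);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
GLfloat coeffs[4] = {0.0f, V, 0.0f, 0.0f};
glTexGenfv(GL_S, GL_OBJECT_PLANE, coeffs);

glEnable(GL_TEXTURE_1D);

With the default GL_MODULATE texture environment the looked-up colour is multiplied with the lit vertex colour, so the fixed-function lighting calculations are preserved.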
I have to support some legacy code which draws point clouds using the following code:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (float*)cloudGlobal.data());
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, 0, (float*)normals.data());
glDrawArrays(GL_POINTS, 0, (int)cloudGlobal.size());
glFinish();
This code renders all vertices regardless of the angle between the normal and the "line of sight". What I need is to draw only the vertices whose normals are directed towards us.
For faces this would be called "culling", but I don't know how to enable this option for mere vertices. Please suggest.
You could try to use the lighting system (unless you already need it for shading). Set the ambient color alpha to zero, and then simply use the alpha test to discard the points with zero alpha. You will probably need to set quite a high alpha in the diffuse color in order to avoid half-transparent points, in case alpha blending is required to antialias the points (to render discs instead of squares).
This assumes that the vertices have normals (but since you are talking about "facing away", I assume they do).
EDIT:
As correctly pointed out by @derhass, this will not work.
If you have cube-map textures, perhaps you can copy normal to texcoord and perform lookup of alpha from a cube-map (also in combination with the texture matrix to take camera and point cloud transformations into account).
Actually, if your normals are normalized, you can scale them using the texture matrix to [-0.49, +0.49] and then use a simple 1D (or 2D) bar texture (half white, half black, including alpha). Note that, counterintuitively, this requires the texture wrap mode to be left at its default GL_REPEAT (not clamp).
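Very roughly, and with the caveat that the exact matrix set-up depends on your transforms (cameraRotation below stands for the rotation-only part of your modelview matrix, and the normals array is simply reused as texture coordinates):

/* Two texels: front-facing -> opaque white, back-facing -> transparent. */
GLubyte bar[2][4] = { {255,255,255,255}, {0,0,0,0} };
GLuint barTex;
glGenTextures(1, &barTex);
glBindTexture(GL_TEXTURE_1D, barTex);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, bar);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
/* Wrap mode stays at the default GL_REPEAT so negative s wraps into the
   transparent half of the texture. */

/* Feed the normals in a second time, this time as texture coordinates. */
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(3, GL_FLOAT, 0, (float*)normals.data());

/* Texture matrix: s = 0.49 * (view-space normal z). The first matrix routes
   the r (z) coordinate into s; cameraRotation is assumed to be column-major. */
GLfloat zToS[16] = { 0,0,0,0,  0,0,0,0,  0.49f,0,0,0,  0,0,0,1 };
glMatrixMode(GL_TEXTURE);
glLoadMatrixf(zToS);
glMultMatrixf(cameraRotation);
glMatrixMode(GL_MODELVIEW);

glEnable(GL_TEXTURE_1D);
glAlphaFunc(GL_GREATER, 0.5f);
glEnable(GL_ALPHA_TEST);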
If your point clouds have the shape of some closed objects, you can still get similar behavior even without cube-map textures, by drawing a dummy mesh with glColorMask(0, 0, 0, 0) (which writes only depth) that will "cover" the points that are facing away. You can also generate this mesh as a group of quads placed behind the points, in the direction opposite to their normals, so that each quad is only visible from the side opposite to the one its points should be visible from, thus covering them.
Note that this will only lead to visual improvement (it will look like the points are culled), not performance improvement.
Just out of curiosity - what's your application and why do you need to avoid shaders?
I recently learned that:
glBegin(GL_TRIANGLES);
glVertex3f( 0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f,-1.0f, 0.0f);
glVertex3f( 1.0f,-1.0f, 0.0f);
glEnd();
is not really how it's done. I've been working with the tutorials at opengl-tutorial.org which introduced me to VBOs. Now I'm migrating it all into a class and trying to refine my understanding.
My situation is similar to this. I understand how to use matrices for rotations and I could do it all myself then hand it over to GL and friends. But I'm sure that's far less efficient, and it would involve more communication with the graphics card. Tutorial 17 on that website shows how to rotate things; it uses:
// Send our transformation to the currently bound shader,
// in the "MVP" uniform
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
glUniformMatrix4fv(ModelMatrixID, 1, GL_FALSE, &ModelMatrix[0][0]);
glUniformMatrix4fv(ViewMatrixID, 1, GL_FALSE, &ViewMatrix[0][0]);
to rotate objects. I assume this is more efficient than anything I could ever produce. What I want is to do something like this, but only apply the matrix to some of the mesh, without breaking the mesh into pieces (because that would disrupt the triangle-vertex-index relationship and I'd end up stitching it back together manually).
Is there a separate function for that? Is there some higher-level library that handles meshes and bones that I should be using (as some of the replies to the other guy's post seem to suggest)? I don't want to get stuck using something outdated and inefficient again, only to end up redoing everything later.
Uniforms are so named because they are uniform: unchanging over the course of a render call. Shaders can only operate on input values (which are provided per input type: per-vertex for vertex shaders, per-fragment for fragment shaders, etc.), uniforms (which are fixed for a single rendering call), and global variables (which are reset to their original values for every instantiation of a shader).
If you want to do different stuff for different parts of an object within a single rendering call, you must do this based on input variables, because only inputs change within a single rendering call. It sounds like you're trying to do something with matrix skinning or hierarchies of objects, so you probably want to give each vertex a matrix index or something similar as an input. You use this index to fetch the actual matrix you want from a uniform array of matrices.
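A sketch of that idea in a vertex shader (the attribute and uniform names are made up, and the integer attribute would need to be supplied with glVertexAttribIPointer):

#version 330 core

layout(location = 0) in vec3 aPos;
layout(location = 1) in int  aMatrixIndex;  // which part/bone this vertex belongs to

uniform mat4 uPart[16];     // one matrix per part of the mesh
uniform mat4 uViewProj;

void main()
{
    // Pick this vertex's matrix from the uniform array; the mesh stays in
    // one piece, only the per-vertex index varies.
    gl_Position = uViewProj * uPart[aMatrixIndex] * vec4(aPos, 1.0);
}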
OpenGL is not a scene graph. It doesn't think in meshes or geometry. When you specify a uniform, it won't get "applied" to the mesh. It merely sets a register to be accessed by a shader. Later when you draw primitives from a Vertex Array (maybe contained in a VBO), the call to glDraw… determines which parts of the VA are batched for drawing. It's perfectly possible and reasonable to glDraw… just a subset of the VA, then switch uniforms, and draw another subset.
In any case OpenGL will not change the data in the VA.
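For example (the matrix names, index counts and offsets below are illustrative, not taken from the tutorial code):

glBindVertexArray(vao);

// Draw the first subset of the vertex array with its own model matrix...
glUniformMatrix4fv(ModelMatrixID, 1, GL_FALSE, &armMatrix[0][0]);
glDrawElements(GL_TRIANGLES, armIndexCount, GL_UNSIGNED_INT,
               (void*)(armIndexOffset * sizeof(unsigned int)));

// ...then switch the uniform and draw another subset of the same VA.
glUniformMatrix4fv(ModelMatrixID, 1, GL_FALSE, &bodyMatrix[0][0]);
glDrawElements(GL_TRIANGLES, bodyIndexCount, GL_UNSIGNED_INT,
               (void*)(bodyIndexOffset * sizeof(unsigned int)));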