I'm trying to render a proper refraction/reflection in my OpenGL project and I need to use clip distance. I'm following the tutorial by Thin Matrix.
I enable the clip distance with
glEnable(GL_CLIP_DISTANCE0);
in my game loop and then I try to use it in the vertex shader like this
gl_ClipDistance[0] = -1;
I've tried to change around the -1 value but nothing happens no matter what I do. Is there something more I need to do to enable it properly?
The gl_ClipDistance array contains a series of floating-point values that represent, for each vertex, which side of a conceptual "plane" it is on. Each array index is a conceptual "plane". The vertex can be either on the negative side, or the non-negative side.
If two vertices of the same line/triangle are on two different sides of the plane (that is, one vertex has a negative value and the other a non-negative one), then the primitive will be clipped at the location where that distance is 0. The part of the primitive on the non-negative side will be visible.
Given that, what you need to do is set up a clip distance value that represents the distance between the vertex position and the plane you want to clip against. The standard plane equation (Ax + By + Cz + D = 0) gives us a way to handle this. The distance of a point from the plane A, B, C, D, assuming the vector (A, B, C) is a unit vector, is simply:
dot(point, vec3(A, B, C)) + D
This also assumes that point is in the same space as A, B, C and D.
Of course, if point is a vec4, with the last component as 1.0, then you can just do this:
dot(point, plane);
Where plane is a vec4 that contains A, B, C, and D. And that's your clip distance.
You also need to redeclare gl_ClipDistance with an explicit size in the shader stage that uses it. In GLSL 1.50 (OpenGL 3.2) and later, gl_ClipDistance lives inside the gl_PerVertex interface block, so you have to redeclare that block:
out gl_PerVertex
{
    vec4 gl_Position;
    float gl_ClipDistance[1];
};
Edit: it is "backwards" to me - I may be missing some intuition
Given a glm transform function such as glm::translate, the two parameters are first a matrix m and then a vector v for translation.
Intuitively, I would expect this function to apply the translation "after" my matrix transform, i.e. multiplying an object by the returned matrix will first apply m followed by the translation v specified.
This intuition comes from the fact that one usually builds a transformation in mathematical order, e.g. first compute a scale matrix, then apply rotation, then translation, etc., so I would think the function calling order would be the same (i.e. given a matrix, I can simply call glm::translate to apply a translation which happens after my matrix's transform is applied).
However, as mentioned in this thread, that is not the case - the translation is applied first, followed by the matrix m passed in.
I don't believe this has anything to do with column major/row major convention and notation as some threads suggest. Is there a historical reason for this? It just seems a bit backwards to me and I would probably rewrite the functions unless there's a good enough reason for it.
This intuition comes from the fact that one usually builds a transformation in mathematical order
But there is no such thing as a mathematical order. Consider the following: v is an n-dimensional vector and M an n x n square matrix. Now the question is: which is the correct multiplication order? And that depends on your convention again. In most classic math textbooks, vectors are defined as column vectors, and then M * v is the only valid multiplication order, while v * M is simply not a valid operation mathematically.
If v is a column vector, then its transpose v^T is a row vector, and then v^T * M is the only valid multiplication order. However, to achieve the same result as before, say x = M * v, you also have to transpose M: x^T = v^T * M^T.
If M is the product of two matrices A and B, what we get here due to the non-commutative way of matrix multiplication is this:
x = M * v
x = A * B * v
x = A * (B * v)
or, we could say:
y = B * v
x = A * y
so clearly, B is applied first.
In the transposed convention with row vectors, we need to follow (A * B)^T = B^T * A^T and get
x^T = v^T * M^T
x^T = v^T * B^T * A^T
x^T = (v^T * B^T) * A^T
So B^T again is applied first.
Actually, when you consider the multiplication order, the matrix which is written closest to the vector is generally the one applied first.
I don't believe this has anything to do with column major/row major convention and notation as some threads suggest.
You are right, it has absolutely nothing to do with that. The storage order can be arbitrary and does not change the meaning of the matrices and operations. The confusion often comes from the fact that interpreting a matrix which is stored column-major as a matrix stored row-major (or vice-versa) will just have the effect of transposing the matrix.
Also, GLSL and HLSL and many math libraries do not use explicit column- or row-vector types, but interpret a vector as whichever fits the operation. E.g., in GLSL you can write:
vec4 v;
mat4 M;
vec4 a = M * v; // v is treated as column vector here
vec4 b = v * M; // v is treated as row vector now
// NOTE: a and b are NOT equal here, they would be if b = v * transpose(M), so by swapping the multiplication order, you get the effect of transposing the matrix
Is there a historical reason for this?
OpenGL follows classical math conventions at many points (e.g., the window-space origin is bottom-left, not top-left as in most window systems), the old fixed-function view-space convention was a right-handed coordinate system (z pointing out of the screen towards the viewer, so the camera looking towards -z), and the OpenGL spec uses column vectors to this day. This means that the vertex transform has to be M * v, and the "reverse" order of the transformations applies.
This means, in legacy GL, the following sequence:
glLoadIdentity(); // M = I
glRotate(...); // M = M * R = R
glTranslate(...); // M = M * T = R * T
will first translate the object, and then rotate it.
GLM was designed to follow the OpenGL conventions by default, and the function glm::mat4 glm::translate(glm::mat4 const& m, glm::vec3 const& translation); is explicitly emulating the old fixed-function GL behavior.
It just seems a bit backwards to me and I would probably rewrite the functions unless there's a good enough reason for it.
Do as you wish. You could set up functions which pre-multiply instead of post-multiply. Or you could set up all transformation matrices as transposed, and post-multiply in the order you consider "intuitive". But note that for someone following either classical math conventions, or typical GL conventions, the "backwards" notation is the "intuitive" one.
In the WebGL reference card, there is the documentation about Vector components.
While it seems to me that I can also use {x, y, z, w}, I am not able to understand if it is mandatory to use {s, t, p, q} when reading from textures.
Use when accessing vectors that represent texture coordinates
What is the meaning of the {s, t, p, q} letters? Is it just a matter of convention for readable code, or is there something more that I am missing?
See the WebGL Specification, Version 1.0:
4.3 Supported GLSL Constructs
A WebGL implementation must only accept shaders which conform to The OpenGL ES Shading Language, Version 1.00 ...
See the OpenGL ES Shading Language 1.00 Specification :
5.5 Vector Components
The names of the components of a vector or scalar are denoted by a single letter. As a notational convenience, several letters are associated with each component based on common usage of position, color or texture coordinate vectors. The individual components can be selected by following the variable name with period ( . ) and then the component name.
The component names supported are:
{x, y, z, w} Useful when accessing vectors that represent points or normals
{r, g, b, a} Useful when accessing vectors that represent colors
{s, t, p, q} Useful when accessing vectors that represent texture coordinates
The component names x, r, and s are, for example, synonyms for the same (first) component in a vector.
Note that the third component of the texture coordinate set, r in OpenGL ES, has been renamed p so as to avoid the confusion with r (for red) in a color.
You can also use v[0], v[1], v[2], v[3] to access the components of the vector.
This means for a vec4 v;, v.stpq is exactly the same as v.xyzw or v.rgba.
The s, t naming comes from plane geometry, where a plane can be described parametrically as the set of all points of the form R = R0 + sV + tW (R and R0 are points, V and W are vectors, and s and t are real numbers).
I'm trying to understand how matrix transformations work in OpenGL/GLSL, and I'm wondering how to make a single 4x4 matrix, starting from the identity, that has the potential for every scale/rotation/translation.
So, after all the binding and whatnot, I'm only uploading 1 uniform matrix to designate the object's location/spin.
This idea seems correct to me, but I can't figure out how to make the object move without distorting it. It rotates just fine, and it scales as well.
But I don't know how to apply the translation to the identity matrix, if that makes sense. In any case, this is my relevant code:
//update matrix
glUniformMatrix4fv(transform, 1, GL_FALSE, glm::value_ptr(ident));
//spin according to z
void object::spinz(float a) { ident = glm::rotate(ident, a, glm::vec3(0.0f, 0.0f, 1.0f)); }
this will modify my
glm::mat4 ident(1.0f); // identity matrix (note: glm::mat4 ident(); would declare a function, not a matrix)
but when I try giving it a translation:
void object::translate(float x, float y, float z);
the method itself will only distort the object/matrix/result
ident += glm::vec4(x, y, z, 0);
what am I doing wrong? Should I even try to only have 1 uniform input?
Solution: the idea for translation was just wrong. A correct one looks more like this (the main thing being to do it separately for each object):
glm::mat4 test = glm::translate(glm::mat4(1.0f), glm::vec3(x, y, z));
finaluniformmatrix *= test;
Or basically make a unique translation matrix, that I then multiply with the overall projection*view matrix.
edit: a cheaper translation is:
matrix[3][0] = x; matrix[3][1] = y; matrix[3][2] = z; // where x, y, z are the translation coordinates (glm matrices are column-major, so matrix[3] is the translation column)
ps: why am I getting downvotes for this? This is me finding out (some time ago) that you need a unique matrix for rendering separate objects, and not just the same matrix for everything (like mixing up projection, view and model by adding them together for each object).
You can use a number of individual matrix operations, and multiply them together to turn them in to a single matrix that specifies the entire operation. Any number of 4x4 matrices can be multiplied and the order IS important.
Also be wary of non-uniform scale combined with rotation, which can sometimes have the effect of "shearing" the object.
You can fairly simply build translation, rotation-x, rotation-y, rotation-z and scale 4x4 matrices and multiply them together to create a single matrix.
http://www.flipcode.com/documents/matrfaq.html#Q11
http://www.flipcode.com/documents/matrfaq.html#Q41
I'm not sure about the code you are using though - I'd suggest only using 4x4 matrices and multiply operations to begin with, and working from there.
I have a triangle ABC in 2D space, and have defined f(A), f(B), and f(C), where f is a real-valued function. Questions:
Can I extend f to be defined on the entire triangle such that f is continuous?
If yes, is the solution unique?
If not, can I find an f that's smooth (infinitely differentiable) inside the triangle?
f needn't be defined outside the triangle (thus, it's OK if f isn't continuous/differentiable at the vertices/edges).
Goal: I want to assign rainbow colors (saturation and value both 100%, varying hue) to the three points of a triangle, and then color the triangle "naturally".