Translating GLSL to C++ float / vec3?

What exactly does this line do?
ra.rgb * ra.w / max(ra.r, 1e-4) * (bR.r / bR);
The part I am confused about is how to translate
(bR.r / bR);
A float divided by a vec3? I want to translate this to C++, but what does that return: the float divided by each element of the vector? (I have no access to a graphics card to check.)

This is an example of component-wise division, and it works as follows:
GLSL 4.40 Specification - 5.9 Expressions - pp. 101-102
If the fundamental types in the operands do not match, then the conversions from section 4.1.10 “Implicit Conversions” are applied to create matching types. [...] After conversion, the following cases are valid:
[...]
One operand is a scalar, and the other is a vector or matrix. In this case, the scalar operation is applied independently to each component of the vector or matrix, resulting in the same size vector or matrix.
Given the expression:
(bR.r / bR)
 ^^^^   ^^
 float  vec3
The scalar bR.r is essentially promoted to vec3(bR.r, bR.r, bR.r), and then component-wise division is performed, resulting in vec3(bR.r/bR.r, bR.r/bR.g, bR.r/bR.b).
Thus, this expression is equivalent to:
vec3(1.0, bR.r/bR.g, bR.r/bR.b)
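Plain C++ floats have no such component-wise behavior, so you can either use GLM (whose glm::vec3 overloads these operators exactly like GLSL) or write the overload yourself. A minimal hand-rolled sketch, using a hypothetical toy vec3 struct for illustration only:
struct vec3 { float r, g, b; };

// scalar / vector: the scalar is divided by each component (GLSL 5.9 rule)
vec3 operator/(float s, const vec3& v) {
    return { s / v.r, s / v.g, s / v.b };
}

// so (bR.r / bR) translates to:
vec3 ratios(const vec3& bR) {
    return bR.r / bR; // == vec3{1.0f, bR.r/bR.g, bR.r/bR.b}
}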


Why glm transform functions are applied "backwards"?

Edit: it is "backwards" to me - I may be missing some intuition
Given a glm transform function such as glm::translate, the two parameters are first a matrix m and then a vector v for translation.
Intuitively, I would expect this function to apply the translation "after" my matrix transform, i.e. multiplying an object by the returned matrix will first apply m followed by the translation v specified.
This intuition comes from the fact that one usually builds a transformation in mathematical order, e.g., first compute a scale matrix, then apply a rotation, then a translation, etc., so I would expect the function call order to be the same (i.e., given a matrix, I can simply call glm::translate to apply a translation which happens after my matrix's transform is applied).
However, as mentioned in this thread, that is not the case - the translation is applied first, followed by the matrix m passed in.
I don't believe this has anything to do with column major/row major convention and notation as some threads suggest. Is there a historical reason for this? It just seems a bit backwards to me and I would probably rewrite the functions unless there's a good enough reason for it.
This intuition comes from the fact that one usually builds a transformation in mathematical order
But there is no such thing as a mathematical order. Consider the following: v is an n-dimensional vector and M an n x n square matrix. Now the question is: which is the correct multiplication order? And that again depends on your convention. In most classic math textbooks, vectors are defined as column vectors, and then M * v is the only valid multiplication order, while v * M is simply not a valid operation mathematically.
If v is a column vector, then its transpose v^T is a row vector, and v^T * M is the only valid multiplication order. However, to achieve the same result as before, say x = M * v, you also have to transpose M: x^T = v^T * M^T.
If M is the product of two matrices A and B, what we get here, due to the non-commutativity of matrix multiplication, is this:
x = M * v
x = A * B * v
x = A * (B * v)
or, we could say:
y = B * v
x = A * y
so clearly, B is applied first.
In the transposed convention with row vectors, we need to follow (A * B)^T = B^T * A^T and get
x^T = v^T * M^T
x^T = v^T * B^T * A^T
x^T = (v^T * B^T) * A^T
So B^T again is applied first.
Actually, when you consider the multiplication order, the matrix which is written closest to the vector is generally the one applied first.
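You can verify this numerically with GLM (column vectors, so the M * v order applies); a small sketch assuming a translation T and a scale S:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void demo() {
    glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 0.0f, 0.0f));
    glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));
    glm::vec4 v(1.0f, 0.0f, 0.0f, 1.0f);

    glm::vec4 a = (T * S) * v; // S is closest to v: scale first, then translate -> (3, 0, 0, 1)
    glm::vec4 b = (S * T) * v; // T is closest to v: translate first, then scale -> (4, 0, 0, 1)
}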
I don't believe this has anything to do with column major/row major convention and notation as some threads suggest.
You are right, it has absolutely nothing to do with that. The storage order can be arbitrary and does not change the meaning of the matrices and operations. The confusion often comes from the fact that interpreting a matrix which is stored column-major as a matrix stored row-major (or vice-versa) will just have the effect of transposing the matrix.
Also, GLSL and HLSL and many math libraries do not use explicit column or row vectors, but treat a vector as whichever fits the operation. E.g., in GLSL you can write:
vec4 v;
mat4 M;
vec4 a = M * v; // v is treated as column vector here
vec4 b = v * M; // v is treated as row vector now
// NOTE: a and b are NOT equal here. They would be if b = v * transpose(M);
// swapping the multiplication order has the effect of transposing the matrix.
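GLM mirrors this behavior: it defines both operator*(mat4, vec4) and operator*(vec4, mat4), so the swap-equals-transpose relation can be checked on the CPU as well (a sketch, assuming GLM is available):
#include <glm/glm.hpp>

bool swapEqualsTranspose(const glm::mat4& M, const glm::vec4& v) {
    glm::vec4 a = M * v;                 // v treated as column vector
    glm::vec4 b = v * M;                 // v treated as row vector
    glm::vec4 c = glm::transpose(M) * v; // b equals c, not a
    return b == c;
}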
Is there a historical reason for this?
OpenGL follows classical math conventions at many points (e.g., the window-space origin is bottom-left, not top-left as in most window systems), the old fixed-function view-space convention was a right-handed coordinate system (z pointing out of the screen towards the viewer, so the camera looks towards -z), and the OpenGL spec uses column vectors to this day. This means that the vertex transform has to be M * v, and the "reverse" order of the transformations applies.
This means, in legacy GL, the following sequence:
glLoadIdentity(); // M = I
glRotate(...); // M = M * R = R
glTranslate(...); // M = M * T = R * T
will first translate the object, and then rotate it.
GLM was designed to follow the OpenGL conventions by default, and the function glm::mat4 glm::translate(glm::mat4 const& m, glm::vec3 const& translation); explicitly emulates the old fixed-function GL behavior.
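For comparison, the same sequence written with GLM (a sketch; angle, axis, and offset are placeholders):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 buildModel(float angle, const glm::vec3& axis, const glm::vec3& offset) {
    glm::mat4 M(1.0f);               // glLoadIdentity(): M = I
    M = glm::rotate(M, angle, axis); // glRotate():    M = M * R = R
    M = glm::translate(M, offset);   // glTranslate(): M = M * T = R * T
    return M;                        // applied to a vertex as M * v: translate first, then rotate
}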
It just seems a bit backwards to me and I would probably rewrite the functions unless there's a good enough reason for it.
Do as you wish. You could set up functions which pre-multiply instead of post-multiplying, as sketched below. Or you could set up all transformation matrices as transposed and post-multiply in the order you consider "intuitive". But note that for someone following either classical math conventions or typical GL conventions, the "backwards" notation is the "intuitive" one.
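For example, a hypothetical pre-multiplying variant (the name translatePre is mine, not GLM's) could look like this:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Applies the translation *after* whatever m already does,
// matching the "intuitive" reading from the question.
glm::mat4 translatePre(const glm::mat4& m, const glm::vec3& v) {
    return glm::translate(glm::mat4(1.0f), v) * m; // T * m instead of m * T
}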

Compute the cosine of a vector

I'm learning about shaders, and I've come across the following GLSL code:
vec3 color = cos(vec3(.5,.3,.4));
How do I compute the cosine of a vector vec3(.5,.3,.4)?
In GLSL most of the built-in functions are overloaded so that the argument can be a vector, and operations and functions then work component-wise. In the case of cos, the cosine is computed for each component of the vector and the results are stored in a new vector:
The statement
vec3 color = cos(vec3(.5,.3,.4));
can be read as
vec3 color = vec3(cos(.5), cos(.3), cos(.4));
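GLM ports these component-wise overloads to C++, so the same line compiles almost verbatim there (a sketch, assuming GLM is available):
#include <glm/glm.hpp>

glm::vec3 color = glm::cos(glm::vec3(0.5f, 0.3f, 0.4f));
// equivalent to glm::vec3(std::cos(0.5f), std::cos(0.3f), std::cos(0.4f))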

GLSL - length function

From the GLSL documentation (https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/length.xhtml), the length function "calculate[s] the length of a vector".
But I don't get it: what does "length" mean here?
For instance:
length(.5); // returns .5
length(1.); // returns 1.
So how and why are you supposed to use this function?
See The OpenGL ES Shading Language
8 Built-in Functions, page 63
When the built-in functions are specified below, where the input arguments (and corresponding output) can be float, vec2, vec3, or vec4, genType is used as the argument.
8.4 Geometric Functions, page 68
float length (genType x)
Returns the length of vector x, i.e., sqrt(x[0] * x[0] + x[1] * x[1] + ...)
This means the result of length(.5) is:
sqrt(0.5 * 0.5) = 0.5
and the result of length(1.) is
sqrt(1.0 * 1.0) = 1.0
The documentation uses genType for "generic type": most of the built-in functions are shown accepting it, meaning the argument can be any of the floating-point base types. I don't know why the documentation is not more specific, given that it clearly describes a vector operation. For a float, which is effectively a 1-dimensional vector, length returns sqrt(x * x), i.e. the absolute value of the input; for 2-, 3-, and 4-dimensional vectors it computes the Euclidean length properly.
Here, length means the Euclidean length (magnitude) of the vector, not the number of elements it has.
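GLM provides the same built-in for CPU-side code; a quick sketch:
#include <glm/glm.hpp>

float a = glm::length(glm::vec2(3.0f, 4.0f));       // sqrt(3*3 + 4*4) = 5.0
float b = glm::length(glm::vec3(1.0f, 0.0f, 0.0f)); // a unit vector: 1.0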

GLSL sum of vector vec3 and float

This may seem odd because, as I understand it, a vector and a scalar cannot be added. However, I've found this sample, and in line 157 it does the following operation:
hsv.x + vec3(0.,2./3.,1./3.)
where hsv.x happens to be a float (the value comes from the mouse X coordinate) and the rest is a vec3.
My question is what is the result of that operation?
If you add a scalar to a vector, the scalar is added to each component of the vector, because The OpenGL Shading Language specification (Version 4.6, Chapter 5 Operators and Expressions) says:
One operand is a scalar, and the other is a vector or matrix. In this case, the scalar operation is applied independently to each component of the vector or matrix, resulting in the same size vector or matrix.
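In C++, GLM applies the same rule, so the sample's expression can be mirrored directly (a sketch; the parameter hsvX stands in for hsv.x):
#include <glm/glm.hpp>

// scalar + vector: the scalar is added to every component
glm::vec3 shifted(float hsvX) {
    return hsvX + glm::vec3(0.0f, 2.0f / 3.0f, 1.0f / 3.0f);
    // == glm::vec3(hsvX, hsvX + 2.0f/3.0f, hsvX + 1.0f/3.0f)
}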

Multiplying a matrix and a vector in GLM (OpenGL)

I have a transformation matrix, m, and a vector, v. I want to do a linear transformation on the vector using the matrix. I'd expect that I would be able to do something like this:
glm::mat4 m(1.0);
glm::vec4 v(1.0);
glm::vec4 result = v * m;
This doesn't seem to work, though. What is the correct way to do this kind of operation in GLM?
Edit:
Just a note to anyone who runs into a similar problem: GLM requires all operands to use the same type. Don't try multiplying a dvec4 with a mat4 and expect it to work; you need a vec4.
glm::vec4 is represented as a column vector. Therefore, the proper form is:
glm::vec4 result = m * v;
(note the order of the operands)
Since GLM is designed to mimic GLSL and to work with OpenGL, it follows the column-vector convention: the matrix goes on the left and the vector on the right.
Just as you should be doing in GLSL (unless you transposed the matrix on upload).
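Putting it all together, a corrected, self-contained version of the question's snippet (a minimal sketch):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 m = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 2.0f, 3.0f));
glm::vec4 v(1.0f, 1.0f, 1.0f, 1.0f); // w = 1: a point, so the translation applies
glm::vec4 result = m * v;            // matrix on the left, vector on the right
// result == glm::vec4(2.0f, 3.0f, 4.0f, 1.0f)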