I'm learning about shaders, and I've come across the following GLSL code:
vec3 color = cos(vec3(.5,.3,.4));
How do I compute the cosine of a vector like vec3(.5,.3,.4)?
In GLSL, most built-in functions are overloaded so that their arguments can be vectors as well as scalars. Such operations and functions work component-wise: in the case of cos, the cosine is computed for each component of the vector, and the results are stored in a new vector.
The declaration
vec3 color = cos(vec3(.5,.3,.4));
can be read as
vec3 color = vec3(cos(.5), cos(.3), cos(.4));
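The same component-wise behavior can be reproduced on the CPU side with GLM, which deliberately mirrors GLSL. This is only a small illustrative sketch; the GLSL line above needs no helper code:

#include <glm/glm.hpp>   // glm::cos applies per component, like GLSL's cos
#include <cstdio>

int main() {
    glm::vec3 color = glm::cos(glm::vec3(0.5f, 0.3f, 0.4f));
    // Same result as glm::vec3(std::cos(0.5f), std::cos(0.3f), std::cos(0.4f))
    std::printf("%f %f %f\n", color.x, color.y, color.z);
    return 0;
}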
Related
I am trying to generate my projection and transform matrices inside my vertex shader, i.e. defining my transform, rotation, and perspective matrix functions in GLSL. I am doing this to improve the readability of my program by bypassing all the loading/uploading of matrices into the shader, apart from camera position, rotation, and FOV.
My only concern is that the matrix ends up being calculated either once per shader call or once per vertex.
Which of the two, if either, is what actually happens in the shader?
Is it better to deal with the clutter and import the matrix from my program, or is my short-cut of creating the matrix in-shader acceptable/recommended?
*update with code*
#version 400
in vec4 position;
uniform vec3 camPos;
uniform vec3 camRot;
mat4 calcMatrix(vec3 pos, vec3 rot)
{
    float foo = 1;
    float bar = 0;
    return mat4(pos.x, pos.y, pos.z, 0,
                rot.x, rot.y, rot.z, 0,
                foo,   bar,   foo,   bar,
                0,     0,     0,     1);
}

void main()
{
    gl_Position = calcMatrix(camPos, camRot) * position;
}
versus:
#version 400
in vec4 position;
uniform mat4 viewMatrix;
void main()
{
    gl_Position = viewMatrix * position;
}
Which method is recommended?
What's wrong with doing
float matrix[16];
calculate_transform(matrix, args);
glUniformMatrix4fv(mvp, 1, GL_FALSE, matrix);
Or even
set_matrix_uniform_using(mvp, args);
which then does what the previous bit of code does.
If you are worried about clutter then extract a function and give it a good name.
Doing this in the shader has several consequences: you would need multiple variables to express what the single matrix expresses, leading to clutter at shader load and uniform upload; shader debugging is much more difficult than making sure your own code does what it needs to do; and if you hardcode the movement code, you cannot replace it with a free-moving camera without changing the shader.
All that doesn't even touch on performance costs. The GPU is much better at loading a matrix from uniform memory and multiplying it with a vector than it is at doing the trig functions needed for the frustum and rotation.
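To make the "extract a function and give it a good name" suggestion concrete, here is a minimal CPU-side sketch assuming GLM and an OpenGL loader; the helper's name, parameters, and rotation order are illustrative, not something prescribed by this answer:

#include <GL/glew.h>                    // or whichever OpenGL loader the project already uses
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Hypothetical helper: builds a view matrix from camera position/rotation
// (angles in radians) on the CPU and uploads it once, instead of per vertex.
void set_view_uniform(GLint location, const glm::vec3& camPos, const glm::vec3& camRot)
{
    glm::mat4 view(1.0f);
    view = glm::rotate(view, -camRot.x, glm::vec3(1.0f, 0.0f, 0.0f));
    view = glm::rotate(view, -camRot.y, glm::vec3(0.0f, 1.0f, 0.0f));
    view = glm::rotate(view, -camRot.z, glm::vec3(0.0f, 0.0f, 1.0f));
    view = glm::translate(view, -camPos);
    glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(view));
}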
If you need a different matrix for each vertex, then by all means compute it in the shader, although I can't imagine a case where that's needed.
Otherwise, it's much faster to pass the matrix as a uniform. Don't make the GPU compute the same matrix over and over again.
I can't use my matrix in my OpenGL shader; this is giving me an error:
glUniform4f(matLocation, mat[0], mat[1], mat[2], mat[3]); // glm mat4
no suitable conversion function from "glm::tvec4<float, glm::highp>" to "GLfloat" exists
I don't entirely understand why it's giving me this error message, and as usual, Googling doesn't give me any good results. Why would it want me to convert matrices to floats?
glUniform4f only sets a vec4 uniform from four float values. To upload a matrix you need glUniformMatrix4fv, which takes a pointer to an array of 16 float values; GLM can provide that pointer via glm::value_ptr.
mat[n] yields only the nth column of the matrix, which is a vec4; hence the compilation error.
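For concreteness, a minimal sketch of the corrected upload, reusing the question's matLocation and mat names and assuming an OpenGL loader is included:

#include <GL/glew.h>                 // or the project's OpenGL loader
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

void uploadMatrix(GLint matLocation, const glm::mat4& mat)
{
    // One mat4 uniform; transpose is GL_FALSE because GLM matrices are already column-major.
    glUniformMatrix4fv(matLocation, 1, GL_FALSE, glm::value_ptr(mat));
}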
I have the following vertex shader:
#version 330
layout (location = 0) in vec3 Position;
uniform mat4 gWVP;
out vec4 Color;
void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);
}
How can I get, for example, the third value of the vec3? My first thought was: "Maybe I can get it by multiplying this vector (Position) by something?" But I am not sure that something like a "vertical vector type" exists.
So, what is the best way? I need this value to set the color of the pixel.
There are at least 4 options:
You can access vector components with component names x, y, z, w. This is mostly used for vectors that represent points/vectors. In your example, that would be Position.z.
You can use component names r, g, b, a. This is mostly used for vectors that represent colors. In your example, you could use Position.b, even though that would not be very readable. On the other hand, Color.b would be a good option for the other variable.
You can use component names s, t, p, q. This is mostly used for vectors that represent texture coordinates. In our example, Position.p would also give you the 3rd component.
You can use subscript notation with 0-based indices. In your example, Position[2] also gives the 3rd element.
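All four notations refer to the same storage, which can be checked on the CPU side with GLM, whose vector types deliberately mirror GLSL's component names. A small illustrative sketch (the shader itself only needs Position.z):

#include <glm/glm.hpp>
#include <cassert>

int main() {
    glm::vec3 Position(0.1f, 0.2f, 0.3f);
    // x/y/z, r/g/b, s/t/p and [index] all alias the same components.
    assert(Position.z == Position.b);
    assert(Position.b == Position.p);
    assert(Position.p == Position[2]);
    return 0;
}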
Each vector has overloaded access to elements. In this case, using Position.z should work.
I want to extract the first three values of a Vector4 type in Eigen into a Vector3 type. So far I am doing it in a for-loop. Is there a smarter way to do it?
The .head() member function returns the first n elements of a vector. If n is a compile-time constant, then you can use the templated variant (as in the code example below) and the Eigen library will automatically unroll the loop.
Eigen::Vector4f vec4;
// initialize vec4
Eigen::Vector3f vec3 = vec4.head<3>();
In the Eigen documentation, see Block operations for an introduction to similar operations for extracting parts of vectors and matrices, and DenseBase::head() for the specific function.
The answer by @Jitse Niesen is correct. Maybe this should be a comment on the original question, but I found this question because I had some confusion about Eigen. In case the original questioner, or some future reader, has the same confusion, I wanted to provide some additional explanation.
If the goal is to transform 3d (“position”) vectors by a 4x4 homogeneous transformation matrix, as is common in 3d graphics (e.g. OpenGL), then Eigen provides a cleaner way to do that with its Transform template class, often used through the concrete types Affine3f or Affine3d (as tersely described here). So while you can write such a transform like this:
Eigen::Matrix4f transform; // your 4x4 homogeneous transformation
Eigen::Vector3f input; // your input
Eigen::Vector4f input_temp;
input_temp << input, 1; // input padded with w=1 for 4d homogeneous space
Eigen::Vector4f output_temp = transform * input_temp;
Eigen::Vector3f output = output_temp.head<3>() / output_temp.w(); // output in 3d
you can write it more concisely like this:
Eigen::Affine3f transform; // your 4x4 homogeneous transformation
Eigen::Vector3f input; // your input
Eigen::Vector3f output = transform * input;
That is: an Eigen::Affine3f is a 4x4 homogeneous transformation that maps from 3d to 3d.
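For completeness, a small sketch of building and applying such a transform; the particular rotation and translation values are arbitrary placeholders:

#include <Eigen/Geometry>
#include <iostream>

int main() {
    // Compose a transform from a translation and a rotation about Z.
    Eigen::Affine3f transform = Eigen::Affine3f::Identity();
    transform.translate(Eigen::Vector3f(1.0f, 2.0f, 3.0f));
    transform.rotate(Eigen::AngleAxisf(0.5f, Eigen::Vector3f::UnitZ()));

    Eigen::Vector3f input(0.0f, 1.0f, 0.0f);
    Eigen::Vector3f output = transform * input;  // maps 3d to 3d, no manual w padding
    std::cout << output.transpose() << "\n";
    return 0;
}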
Yeah, because you know the size is static (3 elements), you should just unroll the loop and copy the elements explicitly. This optimization might be performed by the compiler already, but it can't hurt to do it yourself just in case.
I have a transformation matrix, m, and a vector, v. I want to do a linear transformation on the vector using the matrix. I'd expect that I would be able to do something like this:
glm::mat4 m(1.0);
glm::vec4 v(1.0);
glm::vec4 result = v * m;
This doesn't seem to work, though. What is the correct way to do this kind of operation in GLM?
Edit:
Just a note to anyone who runs into a similar problem: GLM requires all operands to be of the same type. Don't try multiplying a dvec4 with a mat4 and expect it to work; you need a vec4.
glm::vec4 is represented as a column vector. Therefore, the proper form is:
glm::vec4 result = m * v;
(note the order of the operands)
Since GLM is designed to mimic GLSL and to work with OpenGL, its matrices are column-major. And with a column-major matrix and a column vector, the matrix goes on the left of the multiplication.
Just as you should be doing in GLSL (unless you transposed the matrix on upload).
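A minimal sketch of the working form; the translation here is an arbitrary value chosen only so the result is easy to check:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // Translate the origin by (1, 2, 3); the matrix goes on the left of the vector.
    glm::mat4 m = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 2.0f, 3.0f));
    glm::vec4 v(0.0f, 0.0f, 0.0f, 1.0f);  // a point, so w = 1
    glm::vec4 result = m * v;             // result is (1, 2, 3, 1)
    return 0;
}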