Rotate quad made in geometry shader - C++

I'm drawing a quad using a geometry shader, but I can't figure out how to rotate it by an angle.
void main(void)
{
    float scaleX = 2.0f / u_resolution.x;
    float scaleY = 2.0f / u_resolution.y;
    float nx = (u_position.x * scaleX) - 1.0f;
    float ny = -(u_position.y * scaleY) + 1.0f;
    float nw = u_size.x * scaleX;
    float nh = u_size.y * scaleY;

    gl_Position = vec4(nx + nw, ny, 0.0, 1.0);
    texcoord = vec2(1.0, 0.0);
    EmitVertex();

    gl_Position = vec4(nx, ny, 0.0, 1.0);
    texcoord = vec2(0.0, 0.0);
    EmitVertex();

    gl_Position = vec4(nx + nw, ny - nh, 0.0, 1.0);
    texcoord = vec2(1.0, 1.0);
    EmitVertex();

    gl_Position = vec4(nx, ny - nh, 0.0, 1.0);
    texcoord = vec2(0.0, 1.0);
    EmitVertex();

    EndPrimitive();
}
Should I use a rotation matrix or sin and cos? I'm not too great at math.

You don't have to use matrices, but you do need sine and cosine. Here is a way to rotate a 3D position about an arbitrary axis by some angle:
// This is the 3D position that we want to rotate:
vec3 p = position.xyz;
// Specify the axis to rotate about:
float x = 0.0;
float y = 0.0;
float z = 1.0;
// Specify the angle in radians:
float angle = radians(90.0); // 90 degrees, CCW
vec3 q;
q.x = p.x * (x*x * (1.0 - cos(angle)) + cos(angle))
    + p.y * (x*y * (1.0 - cos(angle)) + z * sin(angle))
    + p.z * (x*z * (1.0 - cos(angle)) - y * sin(angle));
q.y = p.x * (y*x * (1.0 - cos(angle)) - z * sin(angle))
    + p.y * (y*y * (1.0 - cos(angle)) + cos(angle))
    + p.z * (y*z * (1.0 - cos(angle)) + x * sin(angle));
q.z = p.x * (z*x * (1.0 - cos(angle)) + y * sin(angle))
    + p.y * (z*y * (1.0 - cos(angle)) - x * sin(angle))
    + p.z * (z*z * (1.0 - cos(angle)) + cos(angle));
gl_Position = vec4(q, 1.0);
If you know that you are rotating about some standard x-, y-, or z-axis, you can simplify the "algorithm" a lot by defining it explicitly for that standard axis.
Notice how we rotate about the z-axis in the above code. For example, the axis for rotation about the x-axis would be (x,y,z) = (1,0,0). The axis can point in any direction, but it must be a unit vector; otherwise the formula scales and skews the result instead of purely rotating it.
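When the axis is one of the standard basis vectors, most terms vanish. For example, with the z-axis hard-coded the whole formula collapses to the familiar 2D rotation; a minimal sketch, using the same p and angle as above:
// Rotation about the z-axis only: x = y = 0, z = 1, so most terms drop out.
float c = cos(angle);
float s = sin(angle);
vec3 q;
q.x =  p.x * c + p.y * s;
q.y = -p.x * s + p.y * c;
q.z =  p.z; // the z coordinate is unchanged
gl_Position = vec4(q, 1.0);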
Then again, you might as well use matrices:
vec3 n = vec3(0.0, 0.0, 1.0); // the axis to rotate about
// Specify the rotation transformation matrix:
mat3 m = mat3(
    n.x*n.x * (1.0 - cos(angle)) + cos(angle),       // row 1, column 1 (in math notation)
    n.x*n.y * (1.0 - cos(angle)) + n.z * sin(angle), // row 1, column 2
    n.x*n.z * (1.0 - cos(angle)) - n.y * sin(angle), // row 1, column 3
    n.y*n.x * (1.0 - cos(angle)) - n.z * sin(angle), // row 2, column 1
    n.y*n.y * (1.0 - cos(angle)) + cos(angle),       // ...
    n.y*n.z * (1.0 - cos(angle)) + n.x * sin(angle), // ...
    n.z*n.x * (1.0 - cos(angle)) + n.y * sin(angle), // row 3, column 1
    n.z*n.y * (1.0 - cos(angle)) - n.x * sin(angle), // ...
    n.z*n.z * (1.0 - cos(angle)) + cos(angle)        // ...
);
// Apply the rotation to our 3D position. The vector goes on the left,
// because the matrix above is typed row-by-row while GLSL stores it
// column-major (i.e. it holds the transpose of the math-notation matrix):
vec3 q = p * m;
gl_Position = vec4(q, 1.0);
Notice how the elements of the matrix are laid out: the constructor arguments fill the first column, then the second, and so on; the matrix is in column-major order. This matters when you transfer a matrix written in mathematical notation into a data type in your code. You either need to transpose it (flip the elements across the diagonal), or, as above, keep the row-by-row layout and multiply with the vector on the left instead, i.e. a row vector times a matrix.
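If you would rather type the matrix exactly as it appears in math notation and keep the usual matrix-times-vector order, GLSL's built-in transpose() can do the flip for you. A small sketch for the plain z-axis case (rotZ is just an illustrative name; angle as above):
// Typed row-by-row as in math notation, then transposed into GLSL's
// column-major storage, so the usual m * p order works:
mat3 rotZ = transpose(mat3(
    cos(angle), -sin(angle), 0.0,  // row 1
    sin(angle),  cos(angle), 0.0,  // row 2
    0.0,         0.0,        1.0   // row 3
));
vec3 q = rotZ * p;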
If you need a 4-by-4 homogeneous matrix, then you would simply add an extra column and row to the above matrix, such that both the new rightmost column and the new bottom row consist of [0 0 0 1]:
vec4 p = position.xyzw; // new dimension
vec3 n = vec3(0.0, 0.0, 1.0); // same axis as before
mat4 m = mat4( // new dimension
    n.x*n.x * (1.0 - cos(angle)) + cos(angle),
    n.x*n.y * (1.0 - cos(angle)) + n.z * sin(angle),
    n.x*n.z * (1.0 - cos(angle)) - n.y * sin(angle),
    0.0,
    n.y*n.x * (1.0 - cos(angle)) - n.z * sin(angle),
    n.y*n.y * (1.0 - cos(angle)) + cos(angle),
    n.y*n.z * (1.0 - cos(angle)) + n.x * sin(angle),
    0.0,
    n.z*n.x * (1.0 - cos(angle)) + n.y * sin(angle),
    n.z*n.y * (1.0 - cos(angle)) - n.x * sin(angle),
    n.z*n.z * (1.0 - cos(angle)) + cos(angle),
    0.0,
    0.0, 0.0, 0.0, 1.0
);
vec4 q = p * m; // again, row vector on the left
gl_Position = q;
Again, notice the order of the multiplication when applying the rotation; it is important because it affects the end result. What happens in the multiplication is that a new vector is formed by taking the dot product of the position vector with each column of the matrix; each coordinate in the resulting vector is the dot product of the original vector and a single column of the matrix (see the first code example).
The expansion
q.x = p.x * (x*x * (1.0 - cos(angle)) + cos(angle))
    + p.y * (x*y * (1.0 - cos(angle)) + z * sin(angle))
    + p.z * (x*z * (1.0 - cos(angle)) - y * sin(angle));
is the same as:
q.x = dot(p, m[0]);
One could even compose the matrix with itself, m = m*m, which doubles the angle of rotation; composing the 90-degree matrix above with itself results in a 180-degree counterclockwise rotation matrix.
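As a quick sanity check, a sketch (assuming the 90-degree z-axis matrix m built above):
mat3 r180 = m * m;                   // two 90-degree rotations = 180 degrees
vec3 q = vec3(1.0, 0.0, 0.0) * r180; // approximately (-1, 0, 0)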

Related

OpenGL Triangle pipe around line segment

I would like to ask how I can render, in a geometry shader, a triangle pipe around a line segment.
I first compute a vector "perp" perpendicular to the line vector "axis".
Then I rotate the "perp" vector a few times with the "rotate" function.
Since the mesh is composed of 8 vertices, I am trying to use a "triangle_strip".
My current code:
#version 330 core
layout(lines) in;
layout(triangle_strip, max_vertices = 8) out;

uniform float u_thickness;
uniform vec2 u_viewportSize;
uniform bool u_scale_width_by_zoom;

in gl_PerVertex
{
    vec4 gl_Position;
    //float gl_PointSize;
    //float gl_ClipDistance[];
} gl_in[];

vec4 rotate(vec4 p, float x, float y, float z, float angle)
{
    vec3 q;
    q[0] = p[0] * (x*x * (1.0 - cos(angle)) + cos(angle))
         + p[1] * (x*y * (1.0 - cos(angle)) + z * sin(angle))
         + p[2] * (x*z * (1.0 - cos(angle)) - y * sin(angle));
    q[1] = p[0] * (y*x * (1.0 - cos(angle)) - z * sin(angle))
         + p[1] * (y*y * (1.0 - cos(angle)) + cos(angle))
         + p[2] * (y*z * (1.0 - cos(angle)) + x * sin(angle));
    q[2] = p[0] * (z*x * (1.0 - cos(angle)) + y * sin(angle))
         + p[1] * (z*y * (1.0 - cos(angle)) - x * sin(angle))
         + p[2] * (z*z * (1.0 - cos(angle)) + cos(angle));
    return vec4(q, 0.0);
}
void main() {
    // https://stackoverflow.com/questions/54686818/glsl-geometry-shader-to-replace-gllinewidth
    vec4 p1 = gl_in[0].gl_Position;
    vec4 p2 = gl_in[1].gl_Position;

    // tube
    // Specify the axis to rotate about:
    vec4 axis = p2 - p1;
    float x = axis[0];
    float y = axis[1];
    float z = axis[2];
    axis = normalize(axis);

    //float length = hypotf(axis[0], hypotf(axis[1], axis[2]));
    float length = sqrt(axis[0]*axis[0] + axis[1]*axis[1] + axis[2]*axis[2]);
    float dir_scalar = (axis[0] > 0.0) ? length : -length;
    float xt = axis[0] + dir_scalar;
    float dot = -axis[1] / (dir_scalar * xt);

    vec3 perp_0 = vec3(dot * xt, 1.0f + dot * axis.y, dot * axis.z);
    perp_0 = normalize(perp_0);
    vec4 perp = vec4(perp_0, 0.0) * u_thickness * 0.001;

    // side0
    vec4 p1_1 = p1 + perp;
    vec4 p1_2 = p2 + perp;

    vec4 perp_rot_2 = rotate(perp, x, y, z, 60.0 * 3.14 / 180.0);
    vec4 p2_1 = p1 + perp_rot_2;
    vec4 p2_2 = p2 + perp_rot_2;

    vec4 perp_rot_3 = rotate(perp, x, y, z, 120.0 * 3.14 / 180.0);
    vec4 p3_1 = p1 + perp_rot_3;
    vec4 p3_2 = p2 + perp_rot_3;

    gl_Position = p1_1;
    EmitVertex();
    gl_Position = p1_2;
    EmitVertex();
    gl_Position = p2_1;
    EmitVertex();
    gl_Position = p2_2;
    EmitVertex();
    gl_Position = p3_1;
    EmitVertex();
    gl_Position = p3_2;
    EmitVertex();
    gl_Position = p1_1;
    EmitVertex();
    gl_Position = p1_2;
    EmitVertex();
    EndPrimitive();
}
It produces wrong results.

How to convert a 3D position in world space to 2D position using DirectXMath

I want to take advantage of the SSE intrinsics (mostly for the speed advantage) that DirectXMath provides.
How would I do it? This is the current code I'm using:
DirectX::XMFLOAT4 clipCoordinates;
DirectX::XMFLOAT4X4 WorldMatrix4x4;
DirectX::XMStoreFloat4x4(&WorldMatrix4x4, WorldMatrix);
clipCoordinates.x = pos.x * WorldMatrix4x4._11 + pos.y * WorldMatrix4x4._12 + pos.z * WorldMatrix4x4._13 + WorldMatrix4x4._14;
clipCoordinates.y = pos.x * WorldMatrix4x4._21 + pos.y * WorldMatrix4x4._22 + pos.z * WorldMatrix4x4._23 + WorldMatrix4x4._24;
clipCoordinates.z = pos.x * WorldMatrix4x4._31 + pos.y * WorldMatrix4x4._32 + pos.z * WorldMatrix4x4._33 + WorldMatrix4x4._34;
clipCoordinates.w = pos.x * WorldMatrix4x4._41 + pos.y * WorldMatrix4x4._42 + pos.z * WorldMatrix4x4._43 + WorldMatrix4x4._44;
If by 2D position you mean screen-space coordinates:
First transform from local space to world space:
XMVECTOR world_pos = XMVector4Transform(pos, world_matrix);
Then transform from world space to camera space:
XMVECTOR view_pos = XMVector4Transform(world_pos, view_matrix);
Then transform from camera space to clip space:
XMVECTOR clip_pos = XMVector4Transform(view_pos, proj_matrix);
Remember that you can concatenate all of these into one transform: build XMMATRIX matrix = world_matrix * view_matrix * proj_matrix; and then transform the point by matrix once.
Now divide by the w coordinate to get NDC:
XMVECTOR w_vector = XMVectorSplatW(clip_pos);
XMVECTOR ndc = XMVectorDivide(clip_pos, w_vector);
You can combine the transform and the divide by using XMVECTOR ndc = XMVector3TransformCoord(pos, matrix); which transforms pos by the combined matrix and divides the result by w in one call.
Your NDC x and y components are in the interval [-1,1]. To map them into [0,ScreenWidth] and [0,ScreenHeight], use the following formulas:
float ndc_x = XMVectorGetX(ndc);
float ndc_y = XMVectorGetY(ndc);
float pixel_x = (1.0f + ndc_x) * 0.5f * ScreenWidth;
float pixel_y = (1.0f - ndc_y) * 0.5f * ScreenHeight;
And those are your screen coordinates.

GLSL: Sample from previous output and not texture2D

I'm trying to write a custom shader where I first blur the texture and then run Sobel edge finding.
I've got Sobel running OK via the following shader:
vec4 colorSobel = texture2D(texture, uv);
float bottomLeftIntensity = texture2D(texture, uv + vec2(-0.0015625, 0.0020833)).r;
float topRightIntensity = texture2D(texture, uv + vec2(0.0015625, -0.0020833)).r;
float topLeftIntensity = texture2D(texture, uv + vec2(-0.0015625, -0.0020833)).r;
float bottomRightIntensity = texture2D(texture, uv + vec2(0.0015625, 0.0020833)).r;
float leftIntensity = texture2D(texture, uv + vec2(-0.0015625, 0)).r;
float rightIntensity = texture2D(texture, uv + vec2(0.0015625, 0)).r;
float bottomIntensity = texture2D(texture, uv + vec2(0, 0.0020833)).r;
float topIntensity = texture2D(texture, uv + vec2(0, -0.0020833)).r;
float h = -secondary * topLeftIntensity - coef * topIntensity - secondary * topRightIntensity + secondary * bottomLeftIntensity + coef * bottomIntensity + secondary * bottomRightIntensity;
float v = -secondary * bottomLeftIntensity - coef * leftIntensity - secondary * topLeftIntensity + secondary * bottomRightIntensity + coef * rightIntensity + secondary * topRightIntensity;
float mag = length(vec2(h, v));
// alpha values removed atm
if (mag < 0.5) {
    colorSobel.rgb *= (1.0 - 1.0);
    colorSobel.r += 0.0 * 1.0;
    colorSobel.g += 0.0 * 1.0;
    colorSobel.b += 0.0 * 1.0;
    colorSobel.rgb += 1.0 * mag;
} else {
    colorSobel.rgb *= (1.0 - 1.0);
    colorSobel.r += 0.0 * 1.0;
    colorSobel.g += 0.0 * 1.0;
    colorSobel.b += 0.0 * 1.0;
    colorSobel.rgb += 1.0 * mag;
}
gl_FragColor = colorSobel;
However, I know it works by sampling the texture via texture2D.
If I were to first manipulate the output via a simple script such as this, which reduces the colours:
vec4 bg = texture2D(texture, uv);
gl_FragColor = vec4(bg.rgb, 1.0);
gl_FragColor.r = floor(gl_FragColor.r * 0.5) / 0.5;
gl_FragColor.g = floor(gl_FragColor.g * 0.5) / 0.5;
gl_FragColor.b = floor(gl_FragColor.b * 0.5) / 0.5;
The output still samples from the original texture and ignores the first pass.
Is there a way to sample from the colour output rather than using texture2D?
The reason I'm asking is that I'm chaining my shaders at runtime depending on user interaction.

Calculate rotation matrix to transform one vector to another?

There are several questions about this on Stack Overflow, but most use quaternions, and I am looking for something that does not. This question is all over the web, yet it is surprisingly hard to find a straightforward code example. I am looking for a C/C++ or GLSL/Metal solution for the rotation matrix that transforms one vector into another. I found this, which seems to answer the question: http://www.j3d.org/matrix_faq/matrfaq_latest.html#Q38
However, it is not working for me in practice. With this I am getting the correct transformations at the ninety-degree angles (i.e. transforming (0,0,1) to (0,1,0)), but in between it twists in the wrong direction. Here is the code I am using; does anyone see what is wrong with it?
In this case I am always transforming from a fixed start normal of (0,0,1).
// matrix to rotate from (0,0,1) to the specified direction
float4x4 directionMatrix(float3 direction) {
    float3 normal = float3(0.0, 0.0, 1.0);
    float3 V = normalize(cross(normal, direction));
    float phi = acos(dot(normal, direction));
    float rcos = cos(phi);
    float rsin = sin(phi);
    float4x4 M;
    M[0].x = rcos + V.x * V.x * (1.0 - rcos);
    M[1].x = V.z * rsin + V.y * V.x * (1.0 - rcos);
    M[2].x = -V.y * rsin + V.z * V.x * (1.0 - rcos);
    M[3].x = 0.0;
    M[0].y = -V.z * rsin + V.x * V.y * (1.0 - rcos);
    M[1].y = rcos + V.y * V.y * (1.0 - rcos);
    M[2].y = -V.x * rsin + V.z * V.y * (1.0 - rcos);
    M[3].y = 0.0;
    M[0].z = V.y * rsin + V.x * V.z * (1.0 - rcos);
    M[1].z = -V.x * rsin + V.y * V.z * (1.0 - rcos);
    M[2].z = rcos + V.z * V.z * (1.0 - rcos);
    M[3].z = 0.0;
    M[0].w = 0.0;
    M[1].w = 0.0;
    M[2].w = 0.0;
    M[3].w = 1.0;
    return M;
}
I tried transposing this, in case the original article used the opposite row/column-major convention from mine, but it did not help.
EDIT: The problem here turned out to be that this code works for a left-handed coordinate system, whereas mine is right-handed. So to get this to work you just have to multiply phi by -1. I also edited the code to add normalization of the cross vector, as suggested in the comments.
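Putting the edit's fix together, here is a minimal GLSL sketch of the corrected function, assuming a right-handed coordinate system (directionMatrixRH is just an illustrative name). Note that the V.x * rsin term in M[2].y carries a + sign here, matching the linked matrix FAQ, and that the degenerate case where direction is parallel to (0,0,1) (zero cross product) still needs separate handling:
// Sketch: matrix that rotates (0,0,1) onto `direction` (right-handed system)
mat4 directionMatrixRH(vec3 direction) {
    vec3 normal = vec3(0.0, 0.0, 1.0);
    vec3 V = normalize(cross(normal, direction));
    float phi = -acos(clamp(dot(normal, direction), -1.0, 1.0)); // sign flipped per the edit
    float rcos = cos(phi);
    float rsin = sin(phi);
    mat4 M = mat4(1.0); // fourth row/column already (0,0,0,1)
    M[0].x = rcos + V.x * V.x * (1.0 - rcos);
    M[1].x = V.z * rsin + V.y * V.x * (1.0 - rcos);
    M[2].x = -V.y * rsin + V.z * V.x * (1.0 - rcos);
    M[0].y = -V.z * rsin + V.x * V.y * (1.0 - rcos);
    M[1].y = rcos + V.y * V.y * (1.0 - rcos);
    M[2].y = V.x * rsin + V.z * V.y * (1.0 - rcos); // + sign, as in the FAQ
    M[0].z = V.y * rsin + V.x * V.z * (1.0 - rcos);
    M[1].z = -V.x * rsin + V.y * V.z * (1.0 - rcos);
    M[2].z = rcos + V.z * V.z * (1.0 - rcos);
    return M;
}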

Optimize Normalmap rotation of a vector

I use a quaternion to rotate the normal vector of a mesh into the direction of the normal-map vector. So I create a quaternion from the angle between the normal of the mesh and (0,0,1). Shortened from the methods in libgdx, creation of the quaternion looks like this:
vec4 createQuaternion(vec3 normal) {
    float l_ang = acos(clamp(normal.z, -CONST_ONE, CONST_ONE)) / CONST_TWO;
    float l_sin = sin(l_ang);
    return normalize(vec4(-normal.y * l_sin, normal.x * l_sin, CONST_ZERO, cos(l_ang)));
}
With this quaternion I can rotate the normal of the mesh in the direction of the normal map. For that I use a method I've seen in the libgdx Quaternion class (where v is the normal from the normal map and m is the quaternion):
vec3 rotateNormal(vec3 v, vec4 m) {
    vec4 tmp1 = vec4(v, CONST_ZERO);
    vec4 tmp2 = vec4(m);
    // conjugate
    tmp2.x = -tmp2.x;
    tmp2.y = -tmp2.y;
    tmp2.z = -tmp2.z;
    tmp2 = mulLeft(tmp1, tmp2);
    tmp1 = mulLeft(m, tmp2);
    v.x = tmp1.x;
    v.y = tmp1.y;
    v.z = tmp1.z;
    return v;
}
The method mulLeft looks like this:
vec4 mulLeft(vec4 q, vec4 a) {
    float newX = q.w * a.x + q.x * a.w + q.y * a.z - q.z * a.y;
    float newY = q.w * a.y + q.y * a.w + q.z * a.x - q.x * a.z;
    float newZ = q.w * a.z + q.z * a.w + q.x * a.y - q.y * a.x;
    float newW = q.w * a.w - q.x * a.x - q.y * a.y - q.z * a.z;
    a.x = newX;
    a.y = newY;
    a.z = newZ;
    a.w = newW;
    return a;
}
For use in the shader I just call:
normal = rotateNormal(normalMap, createQuaternion(normalMesh));
and it works as expected.
The only thing I am concerned about is that I can imagine there is a shorter way to write this, especially the mulLeft method. Is there one?
What do you think about the whole method, rotating the vector by a quaternion instead of using an NTB (normal/tangent/binormal) matrix? NTB looks like too much calculation while animating the mesh.
Edit:
This is a test with my source code.
This is a test with Tenfour's suggestion.
Edit 2:
I shortened the whole thing to:
vec3 rotateNormal(vec3 normalMap, vec3 normal) {
    // create quaternion from cross(normal, vec3(0,0,1))
    float l_ang = acos(clamp(normal.z, -CONST_ONE, CONST_ONE)) / CONST_TWO;
    float l_sin = sin(l_ang);
    vec4 quat = normalize(vec4(-normal.y * l_sin, normal.x * l_sin, CONST_ZERO, cos(l_ang)));
    // expanded form of the double quaternion multiplication applied to normalMap
    return vec3(
        quat.x*quat.x*normalMap.x - quat.y*quat.y*normalMap.x - quat.z*quat.z*normalMap.x + quat.w*quat.w*normalMap.x
            - CONST_TWO*quat.z*quat.w*normalMap.y + CONST_TWO*quat.y*quat.w*normalMap.z
            + CONST_TWO*quat.x * (quat.y*normalMap.y + quat.z*normalMap.z),
        -(quat.x*quat.x*normalMap.y) - quat.z*quat.z*normalMap.y + (quat.y*quat.y + quat.w*quat.w) * normalMap.y
            + CONST_TWO*quat.z * (quat.w*normalMap.x + quat.y*normalMap.z)
            + CONST_TWO*quat.x * (quat.y*normalMap.x - quat.w*normalMap.z),
        quat.y * (-CONST_TWO*quat.w*normalMap.x + CONST_TWO*quat.z*normalMap.y)
            + CONST_TWO*quat.x * (quat.z*normalMap.x + quat.w*normalMap.y)
            - quat.x*quat.x*normalMap.z - quat.y*quat.y*normalMap.z + (quat.z*quat.z + quat.w*quat.w) * normalMap.z
    );
}
The inputs are the normal from the normal map and the normal of the mesh.
I am sure there is something better than that. What could it be?
I found this
vec3 rotate_vector(vec4 quat, vec3 vec)
{
    return vec + 2.0 * cross(cross(vec, quat.xyz) + quat.w * vec, quat.xyz);
}
here. Does that work? I'm not sure how much less underlying math there is, but built-in GLSL functions can often take advantage of GPU optimizations.
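For completeness, a hedged sketch of how that one-liner could slot into the question's code (rotateNormalFast is a hypothetical name; it assumes the createQuaternion helper above, and the quaternion convention, conjugated or not, should be verified against the expanded version, since the cross-product form may rotate in the opposite sense):
vec3 rotateNormalFast(vec3 normalMap, vec3 normalMesh) {
    vec4 quat = createQuaternion(normalMesh); // helper from the question above
    // v' = v + 2 * cross(cross(v, q.xyz) + q.w * v, q.xyz)
    return normalMap + 2.0 * cross(cross(normalMap, quat.xyz) + quat.w * normalMap, quat.xyz);
}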