I'm trying to add rotation to my function. I have no idea how I can rotate the circle inside the function:
void draw_filled_circle(const RenderListPtr& render_list, const Vec2& position, float radius, CircleType type, Color color, float rotate){
    float pi;
    if (type == FULL)    pi = D3DX_PI;     // Full circle
    if (type == HALF)    pi = D3DX_PI / 2; // 1/2 circle
    if (type == QUARTER) pi = D3DX_PI / 4; // 1/4 circle

    const int segments = 32;
    float angle = rotate * D3DX_PI / 180; // converted to radians, but never applied below
    Vertex v[segments + 1];

    for (int i = 0; i <= segments; i++){
        float theta = 2.f * pi * static_cast<float>(i) / static_cast<float>(segments);
        v[i] = Vertex{
            position.x + radius * std::cos(theta),
            position.y + radius * std::sin(theta),
            color
        };
    }

    add_vertices(render_list, v, D3DPT_TRIANGLEFAN);
}
In general you don't rotate anything by modifying the vertices directly. Instead, you use a matrix (the model-view-projection matrix) to transform the data. The three combined matrices boil down to:
The model matrix: this is the matrix used to position and orient the geometry in world space. If it is just rotation you are after, then set this matrix to be a rotation matrix.
The view matrix: this is the inverse of the camera's transform (its position and orientation).
The projection matrix: this flattens the 3D vertex locations into 2D coordinates on the screen.
You usually combine all 3 matrices together into a single MVP matrix which you use to do all 3 transformations in a single operation. This doc explains some of the basics: https://learn.microsoft.com/en-us/windows/win32/dxtecharts/the-direct3d-transformation-pipeline
D3DXMATRIX proj_mat; //< set with projection you need
D3DXMATRIX view_mat; //< invert the matrix for the camera position & orientation
D3DXMATRIX model_mat; //< set the rotation here
D3DXMATRIX MVP;
MVP = model_mat * view_mat * proj_mat; //< combine into a single MVP to set on your shader
If you REALLY want to rotate the vertex data in the buffer, then you can set model_mat to some rotation matrix and multiply each vertex by model_mat. The issue with doing that is that it's very slow to update (you need to rebuild the entire buffer each frame, and the GPU already has dedicated circuitry to transform vertices by a matrix).
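That said, for this particular 2D circle the cheapest fix is to apply the rotation while building the vertices, by offsetting the parametric angle. A minimal sketch of the loop from the question with the (currently unused) angle actually applied; Vertex, add_vertices and the surrounding setup are assumed to be exactly as in the original code:

for (int i = 0; i <= segments; i++){
    // offset every segment by the rotation angle so the whole arc is rotated
    float theta = angle + 2.f * pi * static_cast<float>(i) / static_cast<float>(segments);
    v[i] = Vertex{
        position.x + radius * std::cos(theta),
        position.y + radius * std::sin(theta),
        color
    };
}

For a FULL circle the rotation is invisible, but for HALF and QUARTER arcs it shifts where the arc starts, which is presumably the effect you are after.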
I have some 3D points near the origin, plus a camera intrinsic and extrinsic matrix. I can get correct 2D points through the projectPoints function in OpenCV (using x = intrinsic * extrinsic * X). But when I try to render these 3D points with OpenGL, it doesn't work correctly.
I transformed the intrinsic matrix into a glm mat4 as follows:
proj[0][0] = 2 * fx / width;
proj[1][0] = 0.0f;
proj[2][0] = (2 * cx - width) / width;
proj[3][0] = 0.0f;
proj[0][1] = 0.0f;
proj[1][1] = -2 * fy / height;
proj[2][1] = (height - 2 * cy) / height;
proj[3][1] = 0.0f;
proj[0][2] = 0.0f;
proj[1][2] = 0.0f;
proj[2][2] = -(far_clip + near_clip) / (near_clip - far_clip);
proj[3][2] = 2 * near_clip * far_clip / (near_clip - far_clip);
proj[0][3] = 0.0f;
proj[1][3] = 0.0f;
proj[2][3] = 1.0f;
proj[3][3] = 0.0f;
Here fx and fy are the focal lengths, width and height are the width and height of the image, and cx and cy are width / 2 and height / 2 (that is how I built the camera matrix). Then I transposed the 4x4 extrinsic matrix into the glm view matrix, because OpenGL is column-major. In my shader I compute gl_Position as
gl_Position = proj * view * vec4(3Dpointposition, 1.0f)
But the rendering result on screen is wrong. My camera matrix has its origin in the top-left, while the OpenGL origin is in the bottom-left. It also seems that the OpenCV camera looks down the positive z-axis, while the OpenGL camera looks down the negative z-axis. How can I modify the code to get the correct result?
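(Not from the original post, just a sketch of the usual convention fix.) A common way to reconcile the two systems is to fold an axis flip into the view matrix, so that OpenCV's +z/top-left camera space becomes OpenGL's -z/bottom-left camera space; the flip should then live in only one place, not also in the projection matrix:

#include <glm/glm.hpp>

// Hypothetical helper: convert an OpenCV-style view matrix (camera looks down +z,
// image y points down) into an OpenGL-style view matrix (camera looks down -z, y up).
glm::mat4 cvViewToGlView(const glm::mat4& view_cv)
{
    glm::mat4 flip(1.0f);
    flip[1][1] = -1.0f; // flip y
    flip[2][2] = -1.0f; // flip z
    return flip * view_cv;
}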
I have a volume rendering implementation in shaders which uses the GPU ray casting technique. Basically I have a unit cube at the center of my scene.
I render the vertices of the unit cube in my vertex shader and pass texture coordinates to the fragment shader like this:
in vec3 aPosition;
uniform mat4 uMVPMatrix;
smooth out vec3 vUV;

void main() {
    gl_Position = uMVPMatrix * vec4(aPosition.xyz, 1);
    vUV = aPosition + vec3(0.5);
}
Since the unit cube coordinates go from -0.5 to 0.5, I map the texture coordinates into the 0.0 to 1.0 range by adding 0.5 to them.
In the fragment shader I get the texture coordinate, interpolated by the rasterizer:
...
smooth in vec3 vUV; // Position of the data interpolated by the rasterizer
...
void main() {
    ...
    vec3 dataPos = vUV;
    ...
    for (int i = 0; i < MAX_SAMPLES; i++) {
        dataPos = dataPos + dirStep;
        ...
        float sample = texture(volume, dataPos).r;
        ... // Some more operations on the sampled color
        float prev_alpha = transferedColor.a * (1.0 - fragColor.a);
        fragColor.rgb += prev_alpha * transferedColor.rgb;
        fragColor.a += prev_alpha; // final color
        if (fragColor.a > 0.99)
            break;
    }
}
My rendering works well.
Now I have implemented a selection algorithm, which works fine with particles (real vertices in world coordinates).
My question is: how can I make it work with the volumetric dataset? The only vertices I have are the unit cube's vertices, and since the data points are interpolated by the rasterizer I don't know the real (world) coordinates of the voxels.
It would be enough for me to get the center coordinates of the voxels and treat them like particles, so that I can include or omit the necessary voxels (the vUV coordinates, I guess?) in the fragment shader.
First you have to work out your sampled voxel coordinate (I'm assuming volume is your 3D texture). To find it you have to de-linearize it from dataPos into the 3 axis components of your 3D texture (w x h x d). So if a sample in MAX_SAMPLES has a linear index computed like ((z * h) + y) * w + x, then the coordinate can be recovered by:
z = floor(sample / (w * h))
y = floor((sample - (z * w * h)) / w)
x = sample - (z * w * h) - (y * w)
The floor operation is important to retrieve the integer index.
This is the coordinate of your sample. Now you can multiply it by the inverse of the MVP you used for the cube's vertices; this gives you the position (or the center, maybe you have to add vec3(0.5)) of your sample in world space.
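A small CPU-side sketch of that de-linearization, in C++ for clarity (w and h are assumed to be the width and height of the volume; the index layout is the one given above):

struct VoxelCoord { int x, y, z; };

// De-linearize a flat sample index into (x, y, z) for a w x h x d volume,
// assuming the index was built as ((z * h) + y) * w + x.
VoxelCoord delinearize(int index, int w, int h)
{
    VoxelCoord c;
    c.z = index / (w * h);            // integer division takes the role of floor()
    c.y = (index - c.z * w * h) / w;
    c.x = index - c.z * w * h - c.y * w;
    return c;
}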
One suggestion, however: see if you can rewrite your selection algorithm so that you don't have to jump through all of these computations, and do the selection in screen space rather than world space.
This is my code for generating the projection matrices:
public static Matrix4f orthographicMatrix(float left, float right, float bot,
        float top, float far, float near) {
    // construct and return matrix
    Matrix4f mat = new Matrix4f();
    mat.m00 = 2 / (right - left);
    mat.m11 = 2 / (top - bot);
    mat.m22 = -2 / (far - near);
    mat.m30 = -((right + left) / (right - left));
    mat.m31 = -((top + bot) / (top - bot));
    mat.m32 = -((far + near) / (far - near));
    mat.m33 = 1;
    return mat;
}

public static Matrix4f projectionMatrix(float fovY, float aspect,
        float near, float far) {
    // compute values
    float yScale = (float) (1.0 / Math.tan(Math.toRadians(fovY / 2)));
    float xScale = yScale / aspect;
    float zDistn = near - far;
    // construct and return matrix
    Matrix4f mat = new Matrix4f();
    mat.m00 = xScale;
    mat.m11 = yScale;
    mat.m22 = far / zDistn;
    mat.m23 = (near * far) / zDistn;
    mat.m32 = -1;
    mat.m33 = 0;
    return mat;
}
In my program, first a square is rendered with the orthographic matrix, then a square is rendered with the perspective projection matrix.
But here's my problem - when my shader does the multiplication in this order:
gl_Position = mvp * vec4(vPos.xyz, 1);
Only the square rendered with the perspective projection is displayed. But when the multiplication is done in this order:
gl_Position = vec4(vPos.xyz, 1) * mvp;
Only the square rendered with the orthographic projection is displayed! So obviously my problem is that only one square is displayed at a time, depending on the multiplication order.
Issues with multiplication order are indicative of issues with the order you store the components in your matrix.
mat * vec is equivalent to vec * transpose(mat)
To put this another way, if you are using row-major matrices (which are effectively transpose(mat) as far as GL is concerned) instead of column-major, you need to reverse the order of your matrix multiplication. This goes for compound multiplication too: you have to reverse the entire sequence of multiplications. This is why the order of scaling, rotation and translation between D3D (row-major) and OpenGL (column-major) is backwards.
Therefore, when you construct your MVP matrix it should look like this:
projection * view * model (OpenGL)
model * view * projection (Direct3D)
Column-major matrix multiplication, as OpenGL uses, should read right-to-left. That is, you start with an object space position (right-most) and transform to clip space (left-most).
In other words, this is the proper way to transform your vertices...
gl_Position = ModelViewProjectionMatrix * position;
~~~~~~~~~~~   ~~~~~~~~~~~~~~~~~~~~~~~~~   ~~~~~~~~
clip space    object space to clip space  object space
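To see concretely why storage order forces the multiplication order to flip, here is a small C++ check (not part of the answer above, just a numeric illustration): M * v treats v as a column vector, v * M treats it as a row vector, and v * M produces the same numbers as transpose(M) * v.

#include <cstdio>

int main()
{
    const int N = 4;
    float M[N][N] = {
        { 1,  2,  3,  4},
        { 5,  6,  7,  8},
        { 9, 10, 11, 12},
        {13, 14, 15, 16}
    };
    float v[N] = {1, 2, 3, 4};

    float colResult[N] = {}; // M * v : result[r] = sum over c of M[r][c] * v[c]
    float rowResult[N] = {}; // v * M : result[c] = sum over r of v[r] * M[r][c]

    for (int r = 0; r < N; ++r)
        for (int c = 0; c < N; ++c) {
            colResult[r] += M[r][c] * v[c];
            rowResult[c] += v[r] * M[r][c];
        }

    // rowResult is exactly what transpose(M) * v would give, and it differs
    // from colResult, which is why the order matters once the storage is swapped.
    for (int i = 0; i < N; ++i)
        std::printf("M*v: %6g    v*M: %6g\n", colResult[i], rowResult[i]);
    return 0;
}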
I am writing a deferred shader, and am trying to pack my gbuffer more tightly. However, I can't seem to compute the view position correctly given the view space depth:
// depth -> (gl_ModelViewMatrix * vec4(pos.xyz, 1)).z; where pos is the model space position
// fov -> field of view in radians (0.62831855, 0.47123888)
// p -> ndc position, x, y [-1, 1]
vec3 getPosition(float depth, vec2 fov, vec2 p)
{
    vec3 pos;
    pos.x = -depth * tan( HALF_PI - fov.x/2.0 ) * (p.x);
    pos.y = -depth * tan( HALF_PI - fov.y/2.0 ) * (p.y);
    pos.z = depth;
    return pos;
}
The computed position is wrong. I know this because I am still storing the correct position in the gbuffer and testing using that.
3 Solutions to recover view space position in perspective projection
The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. It transforms from view (eye) space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates. The NDC are in the range (-1,-1,-1) to (1,1,1).
In a perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport.
The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
Perspective Projection Matrix:
r = right, l = left, b = bottom, t = top, n = near, f = far
2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)   -1
0              0              -2*f*n/(f-n)    0
it follows:
aspect = w / h
tanFov = tan( fov_y * 0.5 );
prjMat[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect)
prjMat[1][1] = 2*n/(t-b) = 1.0 / tanFov
In a perspective projection, the Z component is calculated by the rational function:
z_ndc = ( -z_eye * (f+n)/(f-n) - 2*f*n/(f-n) ) / -z_eye
The depth (gl_FragCoord.z and gl_FragDepth) is calculated as follows:
z_ndc = clip_space_pos.z / clip_space_pos.w;
depth = (((farZ-nearZ) * z_ndc) + nearZ + farZ) / 2.0;
1. Field of view and aspect ratio
Since the projection matrix is defined by the field of view and the aspect ratio, it is possible to recover the view space position with the field of view and the aspect ratio, provided that it is a symmetrical perspective projection and that the normalized device coordinates, the depth, and the near and far planes are known.
Recover the Z distance in view space:
z_ndc = 2.0 * depth - 1.0;
z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));
Recover the view space position by the XY normalized device coordinates:
ndc_x, ndc_y = xy normalized device coordinates in range from (-1, -1) to (1, 1):
viewPos.x = z_eye * ndc_x * aspect * tanFov;
viewPos.y = z_eye * ndc_y * tanFov;
viewPos.z = -z_eye;
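Wrapped up as a function, solution 1 might look like this (a C++ sketch of the formulas above; depth is the [0, 1] depth buffer value, fov_y is in radians):

#include <cmath>
#include <glm/glm.hpp>

// Recover the view space position from the depth buffer value and the XY NDC,
// for a symmetrical perspective projection (solution 1).
glm::vec3 viewPosFromFov(float depth, float ndc_x, float ndc_y,
                         float fov_y, float aspect, float n, float f)
{
    float tanFov = std::tan(fov_y * 0.5f);
    float z_ndc  = 2.0f * depth - 1.0f;
    float z_eye  = 2.0f * n * f / (f + n - z_ndc * (f - n));
    return glm::vec3(z_eye * ndc_x * aspect * tanFov,
                     z_eye * ndc_y * tanFov,
                     -z_eye);
}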
2. Projection matrix
The projection parameters, defined by the field of view and the aspect ratio, are stored in the projection matrix. Therefore the view space position can be recovered from the values of the projection matrix of a symmetrical perspective projection.
Note the relation between projection matrix, field of view and aspect ratio:
prjMat[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect);
prjMat[1][1] = 2*n/(t-b) = 1.0 / tanFov;
prjMat[2][2] = -(f+n)/(f-n)
prjMat[3][2] = -2*f*n/(f-n)
Recover the Z distance in view space:
A = prj_mat[2][2];
B = prj_mat[3][2];
z_ndc = 2.0 * depth - 1.0;
z_eye = B / (A + z_ndc);
Recover the view space position by the XY normalized device coordinates:
viewPos.x = z_eye * ndc_x / prjMat[0][0];
viewPos.y = z_eye * ndc_y / prjMat[1][1];
viewPos.z = -z_eye;
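The same recovery, sketched in C++ with glm (this assumes the matrix follows the standard OpenGL layout, e.g. one built by glm::perspective, so that prjMat[2][2] and prjMat[3][2] hold A and B):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Solution 2: read A and B straight out of the projection matrix.
glm::vec3 viewPosFromProj(const glm::mat4& prjMat,
                          float depth, float ndc_x, float ndc_y)
{
    float A     = prjMat[2][2];
    float B     = prjMat[3][2];
    float z_ndc = 2.0f * depth - 1.0f;
    float z_eye = B / (A + z_ndc);
    return glm::vec3(z_eye * ndc_x / prjMat[0][0],
                     z_eye * ndc_y / prjMat[1][1],
                     -z_eye);
}

// e.g. glm::mat4 prjMat = glm::perspective(glm::radians(60.0f), 16.0f/9.0f, 0.1f, 100.0f);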
3. Inverse projection matrix
Of course the view space position can also be recovered with the inverse projection matrix.
mat4 inversePrjMat = inverse( prjMat );
vec4 viewPosH      = inversePrjMat * vec4( ndc_x, ndc_y, 2.0 * depth - 1.0, 1.0 );
vec3 viewPos       = viewPosH.xyz / viewPosH.w;
See also the answers to the following question:
How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
I managed to make it work in the end. As it's a different method from the above, I will detail it so anyone who sees this has a solution.
Pass 1: Store the view space depth value in the gbuffer.
To re-create the (x, y, z) position in the second pass:
Pass the horizontal and vertical field of view in radians into the shader.
Pass the near plane distance (near) to the shader. (distance from camera position to near plane)
Imagine a ray from the camera to the fragment position. This ray intersects the near plane at a certain position P. We have this position in the ndc space and want to compute this position in view space.
Now, we have all the values we need in view space. We can use the law of similar triangles to find the actual fragment position P'
P = P_ndc * near * tan(fov/2.0f) // computation is the same for x, y
// Note that by law of similar triangles, P'.x / depth = P/near
P'.xy = P/near * -depth; // -depth because in opengl the camera is staring down the -z axis
P'.z = depth;
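As a single function, the procedure might look like this (a C++ sketch; it assumes the stored depth is the view space z, which is negative in front of an OpenGL camera, and that ndc holds the fragment's [-1, 1] screen coordinates):

#include <cmath>
#include <glm/glm.hpp>

// Reconstruct the view space position from a stored view space depth, using
// the similar-triangles construction described above.
glm::vec3 viewPosFromStoredDepth(float depth,   // view space z (negative in front of the camera)
                                 glm::vec2 ndc, // fragment position in [-1, 1]
                                 glm::vec2 fov, // horizontal/vertical field of view in radians
                                 float near)
{
    // P: where the camera-to-fragment ray crosses the near plane, in view space
    glm::vec2 P = ndc * near * glm::vec2(std::tan(fov.x * 0.5f),
                                         std::tan(fov.y * 0.5f));

    // Similar triangles: P'.xy / |z| = P.xy / near, with |z| = -depth
    glm::vec2 xy = P / near * -depth;
    return glm::vec3(xy, depth);
}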
I wrote a deferred shader, and used this code to recalculate screen space positioning:
vec3 getFragmentPosition()
{
    vec4 sPos = vec4(gl_TexCoord[0].x, gl_TexCoord[0].y, texture2D(depthTex, gl_TexCoord[0].xy).x, 1.0);
    sPos.z = 2.0 * sPos.z - 1.0;
    sPos = invPersp * sPos;
    return sPos.xyz / sPos.w;
}
where depthTex is the texture holding the depth info, and invPersp is a pre-calculated inverse perspective matrix. You take the screen-space fragment position and multiply it by the inverse perspective matrix to get view space coordinates. Then you divide by w to go from homogeneous to Cartesian coordinates. The multiplication by two and subtraction of one scales the depth from [0, 1] (as it is stored in the texture) to [-1, 1].
Also, depending on what kind of MRTs you are using, the recalculated result won't be exactly equal to the stored info, since you lose some floating-point precision.
I'm trying to draw lots of circles on a sphere using shaders. The basic algorithm is like this:
calculate the distance from the fragment (using its texture coordinates) to the location of the circle's center (the circle's center is also specified in texture coordinates)
calculate the angle from the fragment to the center of the circle.
based on the angle, access a texture (which has 360 pixels in it and the red channel specifies a radius distance) and retrieve the radius for the given angle
if the distance from the fragment to the circle's center is less than the retrieved radius then the fragment's color is red, otherwise blue (see the sketch after this list).
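A CPU-side sketch of that per-fragment test, in C++ for clarity (angleToRadius is a hypothetical 360-entry lookup standing in for the data texture; all names here are only illustrative):

#include <cmath>

// The per-fragment decision described in the steps above: red inside the
// circle, blue outside. uv and center are texture coordinates.
bool isInsideCircle(float u, float v, float centerU, float centerV,
                    const float angleToRadius[360])
{
    float du = u - centerU;
    float dv = v - centerV;
    float dist = std::sqrt(du * du + dv * dv);

    float angle = std::atan2(dv, du) * 180.0f / 3.14159265f; // [-180, 180]
    if (angle < 0.0f) angle += 360.0f;                       // [0, 360)

    float radius = angleToRadius[static_cast<int>(angle) % 360];
    return dist < radius; // red if true, blue otherwise
}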
I would like to draw, say, 60 red circles on a blue sphere. I got my shader to work for one circle, but how do I do 60? Here's what I've tried so far...
I passed in a data texture that specifies the radius for a given angle, but I notice artifacts creep in. I believe this is due to linear interpolation when I try to retrieve information from the data texture using:
float returnV = texture2D(angles, vec2(x, y)).r;
where angles is the data texture (Sampler2D) that contains the radius for a given angle, and x = angle / 360.0 (angle is 0 to 360) and y = 0 to 60 (y is the circle number)
I tried passing in a uniform float radii[360], but I cannot access radii with dynamic indexing. I even tried this mess...
float getArrayValue(int index) {
    if (index == 0) {
        return radii[0];
    }
    else if (index == 1) {
        return radii[1];
    }
and so on ...
If I create a texture, place all of the circles on it, and then multi-texture the blue sphere with the texture containing the circles, it works, but as you would expect I get really bad aliasing. I like the idea of procedurally generating the circles from the position of the fragment and that of the circle's center, because there is virtually no aliasing. However, how do I do more than one?
Thx!!!
~Bolt
I have a shader that draws a circle on the terrain; the circle follows the mouse as it moves.
Maybe it gives you some inspiration?
This is a fragment program. It is not the main program, but you can add it to yours.
Try this...
For now you can hardcode some of the uniform parameters.
uniform float showCircle;
uniform float radius;
uniform vec4 mousePosition;
varying vec3 vertexCoord;

void calculateTerrainCircle(inout vec4 pixelColor)
{
    if (showCircle == 1)
    {
        float xDist = vertexCoord.x - mousePosition.x;
        float yDist = vertexCoord.y - mousePosition.y;
        float dist = xDist * xDist + yDist * yDist;
        float radius2 = radius * radius;

        if (dist < radius2 * 1.44f && dist > radius2 * 0.64f)
        {
            vec4 temp = pixelColor;
            float diff;
            if (dist < radius2)
                diff = (radius2 - dist) / (0.36f * radius2);
            else
                diff = (dist - radius2) / (0.44f * radius2);

            pixelColor = vec4(1, 0, 0, 1.0) * (1 - diff) + pixelColor * diff;
            pixelColor = mix(pixelColor, temp, diff);
        }
    }
}
and in vertex shader you add:
varying vec3 vertexCoord;

void main()
{
    gl_Position = ftransform();
    vec4 v = vec4(gl_ModelViewMatrix * gl_Vertex);
    vertexCoord = vec3(gl_ModelViewMatrixInverse * v);
}
ufukgun, if you multiply a matrix by its inverse you get the identity.
Your:
vec4 v = vec4(gl_ModelViewMatrix * gl_Vertex);
vertexCoord = vec3(gl_ModelViewMatrixInverse * v);
is therefore equivalent to
vertexCoord = vec3(gl_Vertex);
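A quick numeric check of that point with glm (just a sketch to make the identity explicit; any invertible model-view matrix will do):

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    glm::mat4 modelView =
        glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 2.0f, 3.0f)) *
        glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(0, 1, 0));
    glm::vec4 vertex(0.5f, -0.25f, 4.0f, 1.0f);

    // inverse(M) * (M * v) == v, so the two shader lines above reduce to
    // vertexCoord = vec3(gl_Vertex).
    glm::vec4 roundTrip = glm::inverse(modelView) * (modelView * vertex);
    std::printf("%f %f %f %f\n", roundTrip.x, roundTrip.y, roundTrip.z, roundTrip.w);
    return 0;
}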