Skeletal animation blending upper/lower body, global or local space - OpenGL

Suppose a model is composed of two cubes, an upper cube and a lower cube aligned along the z axis, and the lower cube is the parent of the upper cube.
Given two skeletal animations:
animation A: the upper cube stays static, the lower cube rotates clockwise about z.
animation B: both the upper cube and the lower cube stay static.
The skeletal animation file stores local transformations.
That is, in animation A the lower cube might have two keyframes:
frame 0: rotated 0 degrees, frame 50: rotated 360 degrees.
Then blend A and B with
A: upper_weight = 1, lower_weight = 0
B: upper_weight = 0, lower_weight = 1
What is the correct result?
a. both the upper and the lower cube stay static
b. the upper cube rotates counterclockwise
I think they should stay static,
that would be blending in global space,
because A's lower weight = 0 means its lower transformations are ignored?
But if I blend them in local space,
according to A's upper cube's local rotation, it will rotate.
I googled and saw people saying it's better to blend local transforms, but I don't think that gives the correct result. Any suggestions?

I think they should stay static, that would be blending in global space, because A's lower weight = 0 means its lower transformations are ignored?
No.
You said that the upper body is the child of the lower body. This means that if the lower body has a local rotation, this rotation affects the "global" rotation of every child of that bone.
Therefore, the "global" rotation of A's upper body will be a 360 degree rotation. Even if you mask off the transformation of the lower body, that's still the accumulated rotation defined in A's animation data. So if your animation data is stored as "global" transformations, then A's upper body will still have that rotation, even though the local transform is static.
This is a big part of why animations do not use "global" (more accurately, model-relative) rotations. If you want to be able to compose multiple animations, the only reasonable way to do that is locally, with each bone's animation relative to its parent.
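As a rough sketch of what "blend locally, then build globals" can look like (illustrative names and a GLM dependency are my assumptions, not code from the question or answer):

    // Illustrative sketch only: per-bone blending of two local poses, assuming GLM.
    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>
    #include <vector>

    struct LocalPose {
        std::vector<glm::quat> rotation;    // local rotation of each bone, relative to its parent
        std::vector<glm::vec3> translation; // local translation of each bone
    };

    // weight[i] = 1 takes bone i fully from pose A, 0 takes it fully from pose B.
    LocalPose blendLocal(const LocalPose& A, const LocalPose& B,
                         const std::vector<float>& weight)
    {
        LocalPose out = B;
        for (std::size_t i = 0; i < weight.size(); ++i) {
            out.rotation[i]    = glm::slerp(B.rotation[i], A.rotation[i], weight[i]);
            out.translation[i] = glm::mix(B.translation[i], A.translation[i], weight[i]);
        }
        return out;
    }
    // Only after blending do you walk the hierarchy to build model-space ("global")
    // matrices: global[i] = global[parent[i]] * matrixOf(blendedLocal[i]).

With the weights from the question, the upper bone takes A's static local rotation and the lower bone takes B's static local rotation, so nothing ends up rotating.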

Related

How does one rotate the textures on voxels?

I have a voxel raytracer that outputs an XYZ coordinate normalized to the 0-2 range. Whichever coordinate is 0 or 2 identifies the face that was hit, and the other two are the texture coordinates. The problem arises when I want to rotate the cube faces.
These are only ever 90 degree turns about the three orthogonal axes, so this is more of a texture/coordinate shuffle than a traditional rotation, and there are only 24 possible rotations to choose from. I could create a lookup table, label every rotation with some arbitrary identifier, and look up the coordinates to use plus the face to draw for a given face, but that is arduous to create, and slow.
Is there a simpler, more readable method than an arbitrary lookup table, or at least a good way to label the rotations?

Confusion about MSAA

I was researching MSAA and how it works. I understand the concept and the idea behind it. Basically, if the center of the triangle covers the center of the pixel, that pixel is processed (in the non-MSAA case). If MSAA is involved, say 4x MSAA, then four other points are sampled as sub-samples. The pixel shader still executes per pixel, but the occlusion and coverage tests are applied for each sub-sample. The point I'm confused about is that I imagine pixels as little squares on the screen, and I can't understand how the sub-sampling points are determined inside the sample rectangle, and how the computer is aware of one pixel's sub-sample locations. And if there is only one square, how are the sub-sampled colors determined? (If there is one square, there should be only one color.) Lastly, how can each sub-sample have a different depth value if it is basically the same pixel?
Thank you!
Basically, if the center of the triangle covers the center of the pixel, that pixel is processed (in the non-MSAA case).
No, that doesn't make sense. The center of a triangle is just a point, and that point falling onto a pixel center means nothing. The standard rasterization rule is: if the center of the pixel lies inside the triangle, a fragment is produced (with special rules for the cases where the center of the pixel lies exactly on the boundary of the triangle).
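To illustrate that rule, a coverage test boils down to a point-in-triangle check at the sample position; a minimal sketch (with an illustrative Vec2 type, ignoring the exact tie-breaking rules for points that land on an edge):

    // Sketch of the basic coverage test; Vec2 and the function names are illustrative.
    struct Vec2 { float x, y; };

    // Signed area style edge function: which side of edge (a -> b) is p on?
    static float edge(Vec2 a, Vec2 b, Vec2 p)
    {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // True if point p lies inside triangle (a, b, c), either winding order.
    bool covers(Vec2 a, Vec2 b, Vec2 c, Vec2 p)
    {
        float e0 = edge(a, b, p), e1 = edge(b, c, p), e2 = edge(c, a, p);
        return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
    }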
The point I'm confused about is that I imagine pixels as little squares on the screen, and I can't understand how the sub-sampling points are determined inside the sample rectangle.
No idea what you mean by "sample rectangle", but setting that aside: if you use a frame of reference where a pixel is 1x1 units in area, then you can simply use fractional parts to describe locations within a pixel.
Default OpenGL window space uses a convention where (0,0) is the lower-left corner of the bottom-left pixel, (width,height) is the upper-right corner of the top-right pixel, and all pixel centers are at half-integer coordinates (ending in .5).
The rasterizer of a real GPU does work with fixed-point representations, and the D3D spec requires at least 8 bits of fractional precision for sub-pixel locations (GL leaves the exact precision up to the implementor).
Note that at this point, the pixel raster is not relevant at all. A coverage sample just tests whether some 2D point lies inside or outside of a 2D triangle, and a point is always a mathematically infinitely small entity with an area of 0. The conventions for the coordinate systems in which this calculation is done can be defined arbitrarily.
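If you are curious where your particular implementation places the sub-samples, OpenGL 3.2+ can report the sample positions of the currently bound framebuffer; a small sketch (assuming a valid 3.2+ context and already-loaded GL headers/function pointers):

    #include <cstdio>

    // Print the sub-pixel sample positions of the current framebuffer.
    void printSamplePositions()
    {
        GLint samples = 0;
        glGetIntegerv(GL_SAMPLES, &samples);
        for (GLint i = 0; i < samples; ++i) {
            GLfloat pos[2]; // fractions of a pixel; (0.5, 0.5) is the pixel center
            glGetMultisamplefv(GL_SAMPLE_POSITION, i, pos);
            std::printf("sample %d at (%.3f, %.3f)\n", i, pos[0], pos[1]);
        }
    }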
And if there is only one square, how are the sub-sampled colors determined? (If there is one square, there should be only one color.) Lastly, how can each sub-sample have a different depth value if it is basically the same pixel?
When you use multisampling, you always use a multisampled framebuffer, which means that for each pixel there is not a single color, depth, ... value, but n of them (your multisampling count, typically between 2 and 16 inclusive). You will need an additional pass to calculate the single per-pixel values needed for displaying the anti-aliased result (the graphics API might hide this from you when rendering to the default framebuffer, but when you work with custom render targets, you have to do it manually).
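For custom render targets that extra pass is typically a resolve blit from the multisampled FBO into a single-sampled one; a hedged sketch, where msaaFbo, resolveFbo, width and height are placeholders for whatever your own code uses:

    // Resolve the multisampled color buffer into a single-sampled framebuffer.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);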

Clipping in clipping coordinate system and normalized device coordinate system (OpenGL)

I heard clipping should be done in the clipping coordinate system.
The book describes a situation where a line runs from behind the camera into the viewing volume. (We define this line as PQ, where P is the point behind the camera.)
I cannot understand why this can be a problem.
(The book says that after the normalizing transformation, P will end up in front of the camera.)
I think that before building the clipping coordinate system, the camera is at the origin (0, 0, 0, 1) because we applied the viewing transformation.
However, in NDCS, I cannot work out where the camera is.
And I have a second question.
In the vertex shader, we do the model-view transformation and then the projection transformation. Finally, we output these vertices to the rasterizer.
(Some of these vertices have w not equal to 1.)
Here is what I'm curious about: will the rendering pipeline automatically do the division (by w) after finishing clipping?
Sometimes not all of the model can be seen on screen, mostly because some of its objects lie behind the camera (or "point of view"). Those objects are clipped out. If just a part of an object can't be seen, then just that part must be clipped, leaving the rest visible.
OpenGL clips
OpenGL does this clipping in Clipping Coordinate Space (CCS). This is a cube of size 2w x 2w x 2w, where 'w' is the fourth coordinate resulting from the (4x4) x (4x1) matrix-point multiplication. A mere comparison of coordinates is enough to tell whether the point is clipped or not. If the point passes the test, then its coordinates are divided by 'w' (the so-called "perspective division"). Notice that for orthogonal projections 'w' is always 1, while with perspective it's generally not 1.
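That per-vertex comparison is cheap; a sketch of the test in clip coordinates, assuming GLM and a clip-space position already computed as projection * view * model * vertex:

    #include <glm/glm.hpp>

    // Sketch: is a clip-space position inside the CCS volume?
    bool insideClipVolume(const glm::vec4& clip)
    {
        // OpenGL keeps a vertex when -w <= x <= w, -w <= y <= w and -w <= z <= w.
        return -clip.w <= clip.x && clip.x <= clip.w &&
               -clip.w <= clip.y && clip.y <= clip.w &&
               -clip.w <= clip.z && clip.z <= clip.w;
    }
    // Only vertices that survive clipping are divided by w (the perspective division).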
CPU clips
If the model is too big, perhaps you want to save GPU resources or improve the frame rate. So you decide to skip the objects that are going to get clipped anyhow. Then you do the maths on your own (on the CPU) and only send to the GPU the vertices that passed the test. Be aware that an object may have some vertices clipped while other vertices of the same object are not.
Perhaps you do send them to the GPU and let it handle these special cases.
You have a volume defined where only objects inside it are seen. This volume is defined by six planes. Let's put ourselves at the camera and look at this volume: if your projection is perspective, the six planes form a "frustum", a sort of truncated pyramid. If your projection is orthogonal, the planes form a parallelepiped.
In order to clip or not clip a vertex, you must use the distance from the vertex to each of these six planes. You need a signed distance, meaning that the sign tells you which side of the plane the vertex lies on. If any of the six distance signs is not the right one, the vertex is discarded, i.e. clipped.
If a plane is defined by the equation A*x + B*y + C*z + D = 0, then the signed distance from the point (p1, p2, p3) is (A*p1 + B*p2 + C*p3 + D) / sqrt(A*A + B*B + C*C). You only need the sign, so don't bother calculating the denominator.
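A small sketch of that sign-only test, with an illustrative Plane struct (the "visible" sign convention depends on how you set your planes up):

    // Sign-only test against a plane A*x + B*y + C*z + D = 0.
    struct Plane { float A, B, C, D; };

    bool onVisibleSide(const Plane& p, float x, float y, float z)
    {
        // The full signed distance divides by sqrt(A*A + B*B + C*C), but that
        // denominator is positive, so it cannot change the sign.
        return p.A * x + p.B * y + p.C * z + p.D >= 0.0f;
    }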
Now you have all the tools. If you know your planes in "camera view", you can calculate the six distances and clip or not clip the vertex. But perhaps this is an expensive operation, considering that you must transform the vertex coordinates from model space to camera (view) space, a ViewModel matrix calculation. At the same cost you can use your precalculated ProjectionViewModel matrix instead and obtain CCS coordinates, which are much easier to compare against '2w', the size of the CCS cube.
Sometimes you want to skip some vertices not because they are clipped, but because their depth breaks some criterion you are using. In this case CCS is not the right space to work in, because the Z coordinate is transformed into the [-w, w] range and depth is, in a sense, "lost". Instead, you do your clip test in "view space".

Missing pixels when rotating an object

So I created an ellipsoid, but when I try to rotate it, almost half of the voxels go missing. Whether I do it using the rotation matrix or using the change-of-basis matrix from one basis to another, I immediately start losing pixels. If the ellipsoid is at 0, 90, 180, or 270 degrees, it looks fine. But as it travels in between those angles, the background color starts peeking through in little holes everywhere on my object. I assume this is some kind of float-to-int conversion issue, but I don't know how to go about fixing it. Anyone have any ideas?
The reason this happens is that when you draw your rotated shape, the rounding/truncation of coordinates makes multiple source pixels map to the same target pixel, which in turn means that some target pixels never receive any value.
The correct way to do this is to iterate over all target pixels, perform the inverse transform for each, and collect the value that should end up in that target pixel, potentially doing some interpolation.
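A minimal 2D sketch of that idea, with an illustrative Image type and nearest-neighbour sampling (the same scheme extends to three dimensions for voxels):

    #include <cmath>
    #include <vector>

    // Illustrative stand-in for however you actually store your pixels/voxels.
    struct Image {
        int w, h;
        std::vector<int> data;               // one value per cell
        int  get(int x, int y) const { return data[y * w + x]; }
        void set(int x, int y, int v)       { data[y * w + x] = v; }
    };

    // Rotate by 'angle' radians about the image centre using inverse mapping:
    // every target cell is filled, so no holes can appear.
    Image rotate(const Image& src, float angle, int background)
    {
        Image dst{src.w, src.h, std::vector<int>(src.w * src.h, background)};
        const float cx = src.w * 0.5f, cy = src.h * 0.5f;
        const float c = std::cos(-angle), s = std::sin(-angle); // inverse rotation

        for (int y = 0; y < dst.h; ++y)
            for (int x = 0; x < dst.w; ++x) {
                // Map the *target* cell back into the source grid.
                float sx = c * (x - cx) - s * (y - cy) + cx;
                float sy = s * (x - cx) + c * (y - cy) + cy;
                int ix = (int)std::lround(sx), iy = (int)std::lround(sy);
                if (ix >= 0 && ix < src.w && iy >= 0 && iy < src.h)
                    dst.set(x, y, src.get(ix, iy));
            }
        return dst;
    }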

Using glOrtho for scrolling game. Spaceship "shudders"

I want to draw a spaceship in the centre of a window as it powers through space. To scroll the world window, I compute the new position of the ship and centre a 4000x3000 window around it using glOrtho. My test is to press the forward button and move the ship upwards. On most machines there is no problem. On a slower Linux laptop, where I am getting only 30 frames per second, the ship shudders back and forth. Comparing its position relative to the mouse pointer (which does not move), the ship can clearly be seen jumping forward and back by a couple of pixels. The stars also appear blurred into short lines.
I would like to query the pixel value of the centre of the ship to see if it is changing.
Is there an OpenGL way to supply a world point and get back the pixel point it will be transformed to?
I'd consider doing it the other way around. Keep the ship at (0,0) in world coordinates and move the world relative to it. Then you only need the glOrtho call when the size of the window changes (the camera is in a fixed position). Apart from not having to calculate the projection matrix every time, you also get the benefit that if your world ends up being massive in the future, you have the option of using double-precision positions, since large offsets on floats result in inaccurate positioning.
To draw your space scene, you then manipulate the modelview matrix before drawing any objects, and use a different matrix when drawing the ship.
To get a pixel coordinate from a world coordinate, take the point and multiply it by the projection matrix multiplied by the modelview matrix (make sure you get the multiplication order right). You'll then have a value whose x and y are in the range -1 to 1. Add [1,1] and multiply by half the screen size to get the pixel position. (If you wanted, you could fold this into the matrix transformation.)
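A sketch of that world-to-pixel computation, assuming GLM and that projection, modelview and the window size are whatever your code already has on hand:

    #include <glm/glm.hpp>

    // Project a world-space point to window pixel coordinates.
    glm::vec2 worldToPixel(const glm::vec3& world,
                           const glm::mat4& projection, const glm::mat4& modelview,
                           float screenWidth, float screenHeight)
    {
        glm::vec4 clip = projection * modelview * glm::vec4(world, 1.0f);
        // The divide by w is a no-op with glOrtho (w stays 1) but keeps this
        // valid for a perspective projection as well.
        glm::vec3 ndc = glm::vec3(clip) / clip.w;          // x and y now in -1..1
        return glm::vec2((ndc.x + 1.0f) * 0.5f * screenWidth,
                         (ndc.y + 1.0f) * 0.5f * screenHeight);
    }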