Select cells from 3D grid in certain radius - C++

I ran into a little problem today that I can't seem to solve in an efficient way. I'd like to select all cells of a 3D grid given the center of a sphere and the radius.
I have a cubic grid of cells, which all have the same dimensions: the cube has the same width, height, and depth, and is divided into sub-cubes ("cells") that each have the same width, height, and depth as well.
Given a 3D position within this grid, I would like to draw all the cells around this position within the radius of the sphere. All cells that are partially contained in the sphere should be included in the drawing.

Calculate the distance of each corner of a cell from the center of the sphere:
sqrt(dx^2+dy^2+dz^2)
If it is smaller than or equal to your radius, draw the cell...
(EDIT: As Oli comments, you can compare against the square of the radius to avoid the sqrt and speed up this test in practice)
You need only consider cells within the sphere's axis-aligned bounding cube, which is 2r x 2r x 2r...
Also see:
fast sphere-grid intersection
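
For concreteness, here is a minimal C++ sketch of the corner-distance test described above. The grid layout (an n x n x n grid with minimum corner `origin` and uniform `cellSize`) is an assumption made for illustration; adapt it to your own cell representation.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Cell { int i, j, k; };

std::vector<Cell> selectCells(const Vec3& origin, float cellSize, int n,
                              const Vec3& center, float radius)
{
    std::vector<Cell> hit;
    const float r2 = radius * radius;   // squared radius: no sqrt per corner

    // Restrict the search to cells overlapping the sphere's 2r bounding cube.
    auto clampi = [n](int v) { return v < 0 ? 0 : (v > n - 1 ? n - 1 : v); };
    const int i0 = clampi(int(std::floor((center.x - radius - origin.x) / cellSize)));
    const int i1 = clampi(int(std::floor((center.x + radius - origin.x) / cellSize)));
    const int j0 = clampi(int(std::floor((center.y - radius - origin.y) / cellSize)));
    const int j1 = clampi(int(std::floor((center.y + radius - origin.y) / cellSize)));
    const int k0 = clampi(int(std::floor((center.z - radius - origin.z) / cellSize)));
    const int k1 = clampi(int(std::floor((center.z + radius - origin.z) / cellSize)));

    for (int i = i0; i <= i1; ++i)
      for (int j = j0; j <= j1; ++j)
        for (int k = k0; k <= k1; ++k) {
            bool inside = false;
            for (int c = 0; c < 8 && !inside; ++c) {   // test the 8 cell corners
                float dx = origin.x + (i + (c & 1))        * cellSize - center.x;
                float dy = origin.y + (j + ((c >> 1) & 1)) * cellSize - center.y;
                float dz = origin.z + (k + ((c >> 2) & 1)) * cellSize - center.z;
                inside = dx * dx + dy * dy + dz * dz <= r2;
            }
            if (inside) hit.push_back({i, j, k});
        }
    return hit;
    // Note: a cell can intersect the sphere through a face while all 8 of its
    // corners lie outside; clamping the sphere center to the cell's box gives
    // an exact test (see the sphere-grid intersection link above).
}
```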

Related

How to calculate near and far plane for glOrtho in OpenGL

I am using orthographic projection (glOrtho) for my scene. I implemented a virtual trackball to rotate an object, and I also implemented zoom in/out on the view matrix. Say I have a cube of size 100 units located at (0, -40000, 0), far from the origin. If the center of rotation is at the origin, then once the user rotates the cube and zooms in or out, it could end up positioned somewhere around (0, 0, 2500000) (this position is just an assumption; it is calculated after multiplying by the view matrix). Currently I define a very big range for the near (-150000) and far (150000) planes, but sometimes the object still lies outside either the near or the far plane and just turns invisible. If I define even larger near and far clipping planes, say -1000000 and 1000000, it produces ugly z artifacts. So my question is: how do I correctly calculate the near and far planes while the user rotates the object in real time? Thanks in advance!
Update:
I have implemented a bounding sphere for the cube. I use the inverse of the view matrix to calculate the camera position, and I calculate the distance from the camera position to the center of the bounding sphere (the center of the bounding sphere is transformed by the view matrix). But I couldn't get it to work. Can you explain further what the relationship is between the camera position and the near plane?
A simple way is to use a "bounding sphere". If you know the data's bounding box, its maximum diagonal length is the diameter of the bounding sphere.
Let's say you calculate the distance 'dCC' from the camera position to the center of the sphere. Let 'r' be the radius of that sphere. Then:
Near = dCC - r - smallMargin
Far = dCC + r + smallMargin
'smallMargin' is a value used just to avoid clipping points on the surface of the sphere due to numerical precision issues.
The center of the sphere should be the center of rotation. If not, the diameter should grow so as to cover all data.
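
Here is a minimal C++ sketch of this computation. It assumes an OpenGL-style column-major view matrix that is a pure rotation plus translation (no scale), so the camera position can be recovered as -Rᵀt without a full 4x4 inverse; if your zoom scales the view matrix, use a full inverse instead.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// For a rigid view matrix V = [R | t], the camera position is -R^T * t.
Vec3 cameraPosition(const float v[16])   // v: column-major view matrix
{
    return { -(v[0] * v[12] + v[1] * v[13] + v[2]  * v[14]),
             -(v[4] * v[12] + v[5] * v[13] + v[6]  * v[14]),
             -(v[8] * v[12] + v[9] * v[13] + v[10] * v[14]) };
}

void computeNearFar(const float view[16], Vec3 sphereCenter, float r,
                    float& nearPlane, float& farPlane)
{
    const Vec3 cam = cameraPosition(view);
    const float dx = cam.x - sphereCenter.x;
    const float dy = cam.y - sphereCenter.y;
    const float dz = cam.z - sphereCenter.z;
    const float dCC = std::sqrt(dx * dx + dy * dy + dz * dz);

    // The near plane sits just in front of the sphere as seen from the
    // camera, the far plane just behind it. A negative near value is fine
    // for an orthographic projection.
    const float smallMargin = 0.01f * r;   // guard against precision clipping
    nearPlane = dCC - r - smallMargin;
    farPlane  = dCC + r + smallMargin;
}
```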

How to normalize a 3D non-colored mesh in a unit bounding box

I have a 3D mesh encoded in a .OFF file. Only the vertices, the coordinates of these vertices, and the connectivity are encoded. I read in some papers that a 3D mesh can be normalized into a unit bounding box. What does this really mean? And how can we do this?
That means the mesh will fit into the space defined by an axis-aligned cube of size 1, for example the cube defined by corners A(-0.5,-0.5,-0.5) and B(+0.5,+0.5,+0.5).
To achieve this:
get actual bounding box
So loop through all used vertices and remember the min and max coordinates for each axis: A0(xmin,ymin,zmin), B0(xmax,ymax,zmax).
Normalize to bounding box A,B
So loop through each vertex again and recompute it (by linear interpolation). For example like this:
Vertex[i].x=A.x + (B.x-A.x)*(Vertex[i].x-A0.x)/(B0.x-A0.x)
Vertex[i].y=A.y + (B.y-A.y)*(Vertex[i].y-A0.y)/(B0.y-A0.y)
Vertex[i].z=A.z + (B.z-A.z)*(Vertex[i].z-A0.z)/(B0.z-A0.z)
The problem is that this will not respect the aspect ratio. In case you need the mesh to preserve it, you need to change this to:
scale = min((B.x-A.x)/(B0.x-A0.x),
            (B.y-A.y)/(B0.y-A0.y),
            (B.z-A.z)/(B0.z-A0.z))
Vertex[i].x=(Vertex[i].x-0.5*(A0.x+B0.x))*scale+0.5*(A.x+B.x)
Vertex[i].y=(Vertex[i].y-0.5*(A0.y+B0.y))*scale+0.5*(A.y+B.y)
Vertex[i].z=(Vertex[i].z-0.5*(A0.z+B0.z))*scale+0.5*(A.z+B.z)
Hope I did not make any mistake, as I derived it right here in the SO/SE editor. The idea is to compute the maximum scale that does not exceed the new bounding box size (the largest mesh axis will fit the new bounding box exactly) and then rescale the mesh so that the center of the old bounding box becomes the center of the new bounding box too.
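
Putting both steps together, here is a minimal C++ sketch of the aspect-preserving normalization above. The Vertex type and vector-of-vertices layout are assumptions for illustration; A and B are the corners of the target box, e.g. (-0.5,-0.5,-0.5) and (+0.5,+0.5,+0.5).

```cpp
#include <algorithm>
#include <vector>

struct Vertex { float x, y, z; };

void normalizeMesh(std::vector<Vertex>& verts, Vertex A, Vertex B)
{
    if (verts.empty()) return;

    // 1) compute the actual bounding box A0, B0
    Vertex A0 = verts[0], B0 = verts[0];
    for (const Vertex& v : verts) {
        A0.x = std::min(A0.x, v.x); B0.x = std::max(B0.x, v.x);
        A0.y = std::min(A0.y, v.y); B0.y = std::max(B0.y, v.y);
        A0.z = std::min(A0.z, v.z); B0.z = std::max(B0.z, v.z);
    }

    // 2) uniform scale: the largest mesh axis fits the new box exactly
    //    (assumes the mesh has nonzero extent on every axis)
    const float scale = std::min({ (B.x - A.x) / (B0.x - A0.x),
                                   (B.y - A.y) / (B0.y - A0.y),
                                   (B.z - A.z) / (B0.z - A0.z) });

    // 3) map the old box center onto the new box center and rescale
    for (Vertex& v : verts) {
        v.x = (v.x - 0.5f * (A0.x + B0.x)) * scale + 0.5f * (A.x + B.x);
        v.y = (v.y - 0.5f * (A0.y + B0.y)) * scale + 0.5f * (A.y + B.y);
        v.z = (v.z - 0.5f * (A0.z + B0.z)) * scale + 0.5f * (A.z + B.z);
    }
}
```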
Some meshes also include their own transform matrices. In that case you can encode this transformation directly into that matrix, leaving the vertices as they are. But usually, if mesh normalization is required, it is because some vertex manipulation needs it, and then it is better to change the vertices...

Calculate positions occupied by a polygon after rotation in a grid

I looked for some similar questions, but I think none of them is related to my problem.
I am coding in C++ the translation and rotation of a simple polygon (i.e. a rectangle, an L-shaped polygon, ...) in a 10x10 grid of cells.
Let's say that I have a rectangle of width = 1 cell and height = 3 cells. Translating it in the 8 directions is easy. But if I rotate this polygon 45º, I can do the rotation itself; what I want is to calculate which cells are now occupied or partially occupied by the rectangle.
I have the center of mass of the rectangle, which is a cell of it. I can calculate the positions occupied by the rectangle before the rotation, depending on its size. But after the rotation, I cannot find a way to calculate the cell positions occupied by the rectangle.
Thank you very much!
You can definitely treat this like a bounding box problem -
Take the four corners of your rectangle, with the x,y coordinates of these corners being the cell positions they occupy. For example, for the rectangle of width = 1 cell and height = 3 cells centered at o(2,2), these 4 corners in corner(x,y) format would be a(1.5,3.5), b(2.5,3.5), c(2.5,0.5), d(1.5,0.5).
Once this is clear, I think you may already know the remaining procedure, as it has been explained a number of times before, for example here -
Calculate Bounding box coordinates from a rotated rectangle
To summarize, apply the standard matrix for 2D rotation to these 4 corners and get the new corners, e.g.
a'.x = o.x + (a.x - o.x) * cos(t) + (a.y - o.y) * sin(t)
a'.y = o.y - (a.x - o.x) * sin(t) + (a.y - o.y) * cos(t)
and similarly for the other points. Then find the max and min x and y; they will bound the cells occupied by your rectangle. The same can be done for any other convex polygon.
UPDATE:
As Fang commented, to get the accurate set of cells occupied by the rotated polygon you would still need to do the square-to-polygon intersection check for all the square cells within the bounding box - you can take a look at this -
How to check intersection between 2 rotated rectangles?
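
As a sketch, the rotate-corners-then-take-min/max step above might look like this in C++. The names, the double-precision types, and the floor-based mapping from coordinates to cell indices are illustrative; the rotation uses the same convention as the formulas above.

```cpp
#include <algorithm>
#include <cmath>

struct P2 { double x, y; };

// Rotate point a around center o by angle t (same convention as above).
P2 rotateAround(P2 a, P2 o, double t)
{
    return { o.x + (a.x - o.x) * std::cos(t) + (a.y - o.y) * std::sin(t),
             o.y - (a.x - o.x) * std::sin(t) + (a.y - o.y) * std::cos(t) };
}

// Computes the cell-index bounds [minX..maxX] x [minY..maxY] covered by the
// axis-aligned bounding box of the rotated corners.
void cellBounds(const P2 corners[4], P2 o, double t,
                int& minX, int& maxX, int& minY, int& maxY)
{
    double xlo = 1e300, xhi = -1e300, ylo = 1e300, yhi = -1e300;
    for (int i = 0; i < 4; ++i) {
        P2 p = rotateAround(corners[i], o, t);
        xlo = std::min(xlo, p.x); xhi = std::max(xhi, p.x);
        ylo = std::min(ylo, p.y); yhi = std::max(yhi, p.y);
    }
    minX = (int)std::floor(xlo); maxX = (int)std::floor(xhi);
    minY = (int)std::floor(ylo); maxY = (int)std::floor(yhi);
}
```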
Here is what I would do:
1) Get the vertices of the original polygon. Since your polygon is composed of connected grid cells, I suppose the coordinates of these vertices will all be integers.
2) Rotate the polygon vertices. I suppose you know how to do this, as you know how to rotate the polygon.
3) To detect whether a given cell is still occupied by the rotated polygon, check whether the cell has any intersection with the rotated polygon. This is basically a square-to-polygon intersection check. When there is no intersection at all, or the intersection is only an edge or a vertex, you can conclude that this cell is not occupied by the rotated polygon.
4) Do step 3 for all the cells.
In step 4, instead of looping through all cells, you can use the bounding box of the rotated polygon to exclude some cells from the square-to-polygon intersection check up front. But if you only have 10x10 cells, you can probably get away without this and not see any performance difference.
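
The square-to-polygon check in step 3 can be done with a separating-axis test, since both a grid cell and the rotated rectangle are convex. Here is a minimal sketch under that assumption; the `eps` tolerance, which makes edge-or-vertex-only contact count as "not occupied" per step 3, is an illustrative choice.

```cpp
#include <algorithm>
#include <vector>

struct P2 { double x, y; };

// Project a convex polygon onto an axis, returning the interval [lo, hi].
static void project(const std::vector<P2>& poly, P2 axis, double& lo, double& hi)
{
    lo = hi = poly[0].x * axis.x + poly[0].y * axis.y;
    for (const P2& p : poly) {
        double d = p.x * axis.x + p.y * axis.y;
        lo = std::min(lo, d); hi = std::max(hi, d);
    }
}

// True if some edge normal of `a` separates the projections of a and b.
static bool hasSeparatingAxis(const std::vector<P2>& a, const std::vector<P2>& b,
                              double eps)
{
    for (size_t i = 0; i < a.size(); ++i) {
        P2 e = { a[(i + 1) % a.size()].x - a[i].x,
                 a[(i + 1) % a.size()].y - a[i].y };
        P2 n = { -e.y, e.x };                       // edge normal
        double alo, ahi, blo, bhi;
        project(a, n, alo, ahi);
        project(b, n, blo, bhi);
        // eps treats shared edges/vertices as separated ("not occupied").
        if (ahi <= blo + eps || bhi <= alo + eps) return true;
    }
    return false;
}

// Overlap test for two convex polygons given as vertex lists.
bool convexOverlap(const std::vector<P2>& a, const std::vector<P2>& b,
                   double eps = 1e-9)
{
    return !hasSeparatingAxis(a, b, eps) && !hasSeparatingAxis(b, a, eps);
}
```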

Screen space bounding box computation in OpenGL

I'm trying to implement the tiled deferred rendering method and now I'm stuck. I'm computing min/max depth for each tile (32x32) and storing it in a texture. Then I want to compute the screen-space bounding box (bounding square), represented by the lower-left and top-right corners of a rectangle, for every point light (sphere) in my scene (see pic from my app). This, together with the min/max depth, will be used to check whether a light affects the actual tile.
Problem is I have no idea how to do this. Any idea, source code or exact math?
Screen-space is basically a 2D entity, so instead of a bounding box think of a bounding rectangle.
Here is a simple way to compute it:
Project the 8 corner points of your world-space bounding box onto the screen using your ModelViewProjection matrix
Find the bounding rectangle of these points (which is just the min/max X and Y coordinates of the points)
A more sophisticated way can be used to compute a screen-space bounding rect for a point light source. We calculate four planes that pass through the camera position and are tangent to the light's sphere of illumination (the light radius). The intersection of each tangent plane with the image plane gives us 4 lines on the image plane. These lines define the resulting bounding rectangle.
Refer to this article for math details: http://www.altdevblogaday.com/2012/03/01/getting-the-projected-extent-of-a-sphere-to-the-near-plane/
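
Here is a minimal C++ sketch of the simple project-the-8-corners approach. The column-major matrix layout, plain-array types, and viewport mapping are assumptions; a robust version would also clip corners that land behind the near plane instead of skipping them.

```cpp
#include <algorithm>
#include <cfloat>

struct Rect { float minX, minY, maxX, maxY; };

// Multiply a column-major 4x4 matrix by the point (x, y, z, 1).
static void mulMVP(const float m[16], const float p[3], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[r] * p[0] + m[4 + r] * p[1] + m[8 + r] * p[2] + m[12 + r];
}

// bmin/bmax: world-space AABB corners; returns the screen-space rectangle.
Rect screenBounds(const float mvp[16], const float bmin[3], const float bmax[3],
                  float viewportW, float viewportH)
{
    Rect r = { FLT_MAX, FLT_MAX, -FLT_MAX, -FLT_MAX };
    for (int c = 0; c < 8; ++c) {
        const float p[3] = { (c & 1) ? bmax[0] : bmin[0],
                             (c & 2) ? bmax[1] : bmin[1],
                             (c & 4) ? bmax[2] : bmin[2] };
        float clip[4];
        mulMVP(mvp, p, clip);
        if (clip[3] <= 0.0f) continue;   // behind the camera: needs real clipping
        const float x = (clip[0] / clip[3] * 0.5f + 0.5f) * viewportW; // NDC->screen
        const float y = (clip[1] / clip[3] * 0.5f + 0.5f) * viewportH;
        r.minX = std::min(r.minX, x); r.maxX = std::max(r.maxX, x);
        r.minY = std::min(r.minY, y); r.maxY = std::max(r.maxY, y);
    }
    return r;
}
```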

Circle that moves on the edge of a circle

As the title describes, I want to make a tiny circle that circulates along the edge of a sector of another, bigger circle. I have implemented the sector of the circle; now the only issue is how to make the small circle circulate along the edge of this sector. I have tried various ways, but none of them proved successful, so I ask you for some tips on how to implement it.
Thanks in advance.
You just have to consider that, for a circle of radius 1 centered on the origin, every point on the circle can be described as:
P = [sin(alpha); cos(alpha)]
With 0<=alpha<2*pi
Now, if you change the radius and the center you will have:
P = [(radius * sin(alpha))+x_center; (radius*cos(alpha))+y_center]
So, just have a loop for alpha going from 0 to 2*pi (or whatever section of the circle you need) and use the above equation to calculate the position of the center of the small circle.
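
A minimal C++ sketch of this parametric placement follows; the drawing calls in the usage comment are hypothetical placeholders for whatever graphics API you use. Using bigR minus the small circle's radius (as the next answer suggests) keeps the small circle inside the big circle's edge.

```cpp
#include <cmath>

struct P2 { float x, y; };

// Center of the small circle at parameter alpha (radians), riding on a
// circle of radius bigR centered at (cx, cy). Same equation as above.
P2 smallCircleCenter(float cx, float cy, float bigR, float alpha)
{
    return { bigR * std::sin(alpha) + cx,
             bigR * std::cos(alpha) + cy };
}

// Example animation loop over a sector [a0, a1] (drawing calls hypothetical):
// for (float a = a0; a <= a1; a += 0.01f) {
//     P2 c = smallCircleCenter(cx, cy, bigR - smallR, a);
//     clearScreen(); drawBigCircle(); drawSmallCircle(c, smallR);
// }
```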
I presume you have a function that can draw a circle at a given position in cartesian co-ordinates with a given radius.
Use polar co-ordinates (angle / radius) and set the radius to the radius of the big circle minus that of the small circle. Set the angle to wherever you want the small circle to start. Then set up a loop to increment the angle by a given amount. After each increment, clear the screen and draw the big circle. Then convert the polar co-ordinates into cartesian, add the centre of the big circle, and draw the small circle. Hold for as long as you want.