I've got a very specific problem. I have an OpenGL application that is used to render video onto 3D meshes. As it turns out, I can make my video sources send me rectangular portions of the image, reducing memory usage. These portions are specified as a Rectangle2D(int x, int y, int width, int height) with 0 <= x <= x + width <= sourceVideoWidth and 0 <= y <= y + height <= sourceVideoHeight.
With that said, I want to find out, for each frame and for each mesh, the following:
- Whether the mesh is visible
- If so, what portion of the image I should request
The benefit is reducing the texture upload to the GPU; this operation is often the bottleneck in my application.
To simplify the problem, let's assume that all meshes are arbitrarily positioned 3D rectangles. A 3D rectangle is defined by four points:
class Rectangle3D
{
public:
    Vec3 topLeft;
    Vec3 topRight;
    Vec3 botLeft;
    Vec3 botRight;
};
Possible solutions:
A) Split the mesh into a grid of points with known texture coordinates and run frustum culling for each point; then, from the visible points, find the top-left and bottom-right texture coordinates that must be requested. This is rather inefficient, and the number of points to test multiplies with every mesh added to the scene. Solutions that use just the four corners of the rectangle might be preferable.
B) Use the frustum-defining planes (see frustum culling); for further simplicity, use only the four planes that correspond to the screen sides. Finding out whether the mesh is visible is rather simple. Finding the visible texture coordinates would require several cases:
- One or more frustum sides intersect the mesh
- No frustum side intersects the mesh; then either:
  - the mesh is fully visible, or
  - the mesh surrounds the screen sides
In any case I would need several plane-plane and plane-line-segment intersections, which are not necessarily efficient.
C) Make a 2D projection of the Rectangle3D edges, resulting in a four-sided polygon, then use line-segment intersection between the screen sides and the polygon sides, also accounting for cases where there is no intersection but the mesh is still visible.
D) Use OpenGL occlusion query objects; this way a render pass could generate information about the visible mesh portion.
Is there any other solution that better solves this problem? If not, which one would you use, and why?
Just one more thought to add to your solutions:
Why not incorporate one rendering pass for occlusion queries? Split your mesh into imaginary sub-rectangles that tell you about the visible parts of the mesh, like this:
[Image: the left part shows the mesh divided into imaginary sub-rectangles; the right part shows the sub-rectangles visible within the screen area (the red rectangle in this case).] Based on this pass's result, you will get the coordinates of the mesh parts that are visible.
UPDATE:
[Image: a sample view illustrating the point; each sub-rectangle is labeled with r, the result of its GL_SAMPLES_PASSED query.] This can be done using OpenGL query objects.
Since you will know which rectangles are visible through the results of the query objects, you will know which coordinates are visible. Google for OpenGL occlusion queries and you will get detailed info. Hope this helps.
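To make the pass concrete, here is a minimal sketch of such a query pass. drawSubRectangle(i) and markSubRectangleVisible(i) are hypothetical helpers standing in for your own draw call and bookkeeping; the query calls themselves are the standard OpenGL occlusion-query API:

const int N = 16;                         // number of imaginary sub-rectangles
GLuint queries[N];
GLuint sampleCount = 0;

glGenQueries(N, queries);

// Make the query pass cheap: no color or depth writes, depth test still on.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);

for (int i = 0; i < N; ++i)
{
    glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
    drawSubRectangle(i);                  // hypothetical: draw sub-rectangle i
    glEndQuery(GL_SAMPLES_PASSED);
}

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

for (int i = 0; i < N; ++i)
{
    // r = GL_SAMPLES_PASSED result; r > 0 means sub-rectangle i is visible.
    glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &sampleCount);
    if (sampleCount > 0)
        markSubRectangleVisible(i);       // hypothetical bookkeeping
}

glDeleteQueries(N, queries);

GL_SAMPLES_PASSED returns the number of samples that passed the depth test, so a non-zero result means at least part of that sub-rectangle is on screen.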
Related
So I am making a simple system that generates a path on a 2D triangle, but then I have to find its equivalent on a 3D one. That would not be much of a hassle if said triangle were not from a texture, meaning it may have different angles etc. from the one marked on the .png file. Finding the triangle's points is one thing, but I also need points inside said triangle with the same relative distances from all corners. I have no idea how to do it. Is there any simple way to do it?
EDIT:
To elaborate a little:
I have a 3D mesh with a texture applied in an external program (e.g. Blender). A mesh triangle's geometry may vary when the texture is applied (that is the whole point of texture mapping: being able to adjust shape and size to the image), but the positions describing points on the image are bound to certain vertices of the model. I load it in my program and read the coordinates of the triangles, as well as the texture coordinates, in the range (0, 1), for each vertex. Then I load the texture file and extract the needed information (tool paths based on colors on the geometry), but the paths I generate are still 2D and at texture-image scale. I need to scale them to real size (the points of the triangle) and keep the found paths on the surface of the model; thus I need to find points that have the same relative distances from each corner and lie in the plane of the triangle.
EDIT2:
The path that I have in 2D comes from gradually scaling down the outline of a shape detected on the texture. I convert the image to binary, get the outline, and scale it down several times to get these concentric paths. The path is described in image coordinate space because it comes directly from the image. Now this concentric path needs to be converted to the mesh's coordinates.
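One common way to do this (offered here as a suggestion; it is not named in the question) is barycentric coordinates: compute the weights of each 2D path point relative to the triangle's UV coordinates, then apply the same weights to the 3D vertices. This works because barycentric coordinates are preserved by the affine map between the texture-space triangle and the 3D triangle. A minimal sketch, with illustrative Vec2/Vec3 types:

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Barycentric weights (u, v, w) of point p with respect to the
// UV triangle (a, b, c); u + v + w == 1 for points in the plane.
static void barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c,
                        float& u, float& v, float& w)
{
    float d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    u = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
    v = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
    w = 1.0f - u - v;
}

// Map a texture-space path point onto the 3D triangle (A, B, C),
// where (a, b, c) are the UV coordinates of A, B, C respectively.
Vec3 uvPointTo3D(Vec2 p, Vec2 a, Vec2 b, Vec2 c,
                 Vec3 A, Vec3 B, Vec3 C)
{
    float u, v, w;
    barycentric(p, a, b, c, u, v, w);
    return { u * A.x + v * B.x + w * C.x,
             u * A.y + v * B.y + w * C.y,
             u * A.z + v * B.z + w * C.z };
}

Running each point of the concentric path through uvPointTo3D for the triangle containing it yields points that keep the same relative distances from the corners and lie in the triangle's plane.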
I'm currently using LWJGL, but if you have a solution for OpenGL I can use that too.
Now, I'm trying to apply a selection area to a plane that I can move around with my mouse (like in my terrible drawing above). I'm trying to make it flat against the plane so it can move over any obstacles. I've considered projective texturing, but I don't know how to implement it. Is this a good way of solving the problem, or is there a better alternative?
What would be the best way to implement a selection area?
Alternative options, pros and cons.
Edit: This will be moving over another texture if that makes a difference.
When you already know the intersection point in world space, there is a relatively simple solution that doesn't require projected textures:
In the fragment shader, calculate the world-space distance between the intersection point and the current fragment. When that distance is smaller than the desired radius of the circle, the selection color should be drawn; otherwise just the normal plane is drawn.
float dist = length(current_ws - intersection_ws);
if (dist < circle_radius) {
    // draw the selection overlay
} else {
    // draw the plane as normal
}
How can I generate a circular grid, made of tiles with uniform area/whose vertices are uniformly distributed?
I'll need to apply the Laplacian operator to the grid at each frame of my program.
Applying the Laplacian was easy with a rectangular grid made of rectangular tiles whose locations were specified in cartesian coordinates, since for a tile at (i,j), I knew the positions of its neighboring tiles to be (i-1,j), (i,j-1), (i+1,j), and (i,j+1).
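For concreteness, that rectangular-grid case is the standard five-point stencil; a minimal sketch, assuming a row-major W x H float array and grid spacing h (names are illustrative):

#include <vector>

// Five-point Laplacian on a W x H grid stored row-major in `f`,
// with uniform grid spacing h. Boundary cells are skipped for brevity.
std::vector<float> laplacian(const std::vector<float>& f,
                             int W, int H, float h)
{
    std::vector<float> out(f.size(), 0.0f);
    for (int j = 1; j < H - 1; ++j)
        for (int i = 1; i < W - 1; ++i)
        {
            int k = j * W + i;   // index of tile (i, j)
            out[k] = (f[k - 1] + f[k + 1] + f[k - W] + f[k + W]
                      - 4.0f * f[k]) / (h * h);
        }
    return out;
}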
While I'd like to use polar coordinates, I'm not sure whether querying a tile's neighborhood would be as easy.
I'm working in OpenGL, and could render either triangles or points. Triangles seem more efficient (and have the nice effect of filling the area between their vertices) but seem more amenable to Cartesian coordinates. Perhaps I could render points, and then polar coordinates would work fine?
The other concern is the density of tiles. I want waves traveling on the surface of this mesh to have the same resolution whether they're at the center or not.
So the two main concerns are: generating the mesh in a way that allows easy querying of a tile's neighborhood, and in a way that preserves a uniform density distribution of tiles.
I think you're asking for something impossible.
However, there is a technique for remapping a regular square 2D grid into a circle shape with a relatively low amount of warping. It might suffice for your problem.
You might want to have a look at this paper; it was written for sampling spheres, but you might be able to adapt it for a circle.
One option is to use a polar grid with a constant angular step but varying radial steps, so that all cells have the same area, i.e. (R+dR)² − R² = const, giving dR as a function of R.
You may want to reduce the anisotropy (some cells becoming very elongated) by changing the number of cells every now and then (for instance by doubling it). This will introduce singularities in the mesh, i.e. cells with five vertices instead of four.
See the figures in https://mathematica.stackexchange.com/questions/78806/ndsolve-and-fem-support-for-non-conformal-meshes-of-a-disk-with-kernel-crash
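To make the constant-area construction concrete: with K rings out to radius R_max, the constraint R_k² − R_{k−1}² = const gives R_k = R_max·sqrt(k/K). A minimal sketch generating the ring radii (names are illustrative):

#include <cmath>
#include <vector>

// Radii of K concentric rings such that, with a constant angular step,
// every cell between R_{k-1} and R_k has the same area:
// R_k^2 - R_{k-1}^2 = R_max^2 / K  =>  R_k = R_max * sqrt(k / K).
std::vector<double> equalAreaRingRadii(int K, double Rmax)
{
    std::vector<double> radii(K + 1);
    for (int k = 0; k <= K; ++k)
        radii[k] = Rmax * std::sqrt(double(k) / K);
    return radii;
}

Note that the radial steps shrink towards the rim, exactly where the arc length per cell grows; that growing mismatch is the anisotropy the doubling trick above is meant to fix.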
After some help, I want to texture onto a circle, as you can see below.
I want to do it in such a way that the centre of the circle starts on the shared point of the triangles.
The triangles can change in size and number, and will range over varying degrees, e.g. 45, 68, 250, so only the part of the texture visible in the triangles can be seen.
It's basically a one-to-one mapping: shift the image to the left and you see only the part where there are triangles.
I'm not sure what this is called or what to google for; can anyone make some suggestions or point me to relevant information?
I was thinking I would have to generate the texture coordinates on the fly to select the relevant part, but it feels like I should be able to do a one-to-one mapping, which would be simpler than calculating triangles on the texture to map to the OpenGL triangles.
Generating texture coordinates for this isn't difficult. Each point of the polygon corresponds to a certain angle, so the i-th point's angle will be i*2*pi/N, where N is the order of the regular polygon (its number of sides). Then you can use the following to evaluate each point's texture coordinates:
texX = (cos(i*2*pi/N)+1)/2
texY = (sin(i*2*pi/N)+1)/2
And the center point gets (0.5, 0.5).
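Put together, a minimal sketch of building such a fan on the CPU; the Vertex layout and buildCircleFan name are illustrative, not from the answer:

#include <cmath>
#include <vector>

struct Vertex { float x, y, u, v; };

std::vector<Vertex> buildCircleFan(int N, float radius)
{
    const float PI = 3.14159265358979f;
    std::vector<Vertex> verts;
    verts.push_back({0.0f, 0.0f, 0.5f, 0.5f});   // center maps to (0.5, 0.5)
    for (int i = 0; i <= N; ++i)                 // <= N closes the fan
    {
        float a = i * 2.0f * PI / N;
        float c = std::cos(a), s = std::sin(a);
        verts.push_back({radius * c, radius * s,
                         (c + 1.0f) / 2.0f,      // texX = (cos(a)+1)/2
                         (s + 1.0f) / 2.0f});    // texY = (sin(a)+1)/2
    }
    return verts;                                // draw with GL_TRIANGLE_FAN
}

Limiting the loop to a sub-range of angles gives the partial circles (45, 68, 250 degrees) from the question.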
It may be even simpler to generate the coordinates in a shader, if you have one specially for this:
I assume you get pos, the vertex position. It depends on how you store the polygon vertices, but let the center be (0,0) and the other points range from (-1,-1) to (1,1). Then pos can simply be used as the texture coordinates, with an offset:
vec2 texCoords = (pos + vec2(1,1))*0.5;
and pos itself should then be passed to the vector-matrix multiplication as usual.
I am rendering an old game format where I have a list of meshes that make up the area you are in. I have finally gotten the PVS (regions visible from another region) working, and that cuts out a lot of the meshes I don't need to render, but not by much. So now, my list of meshes to render only includes the other meshes I can see. But it isn't perfect: there are still a ton of meshes included, even really far away ones that are past the clip.
Now first, I am trying to cull out meshes that aren't in my view frustum. I have heard that a bounding box is the best way to go about doing this. Does anyone have an idea of how I can go about it? I know I need the maximum point (x, y, z) and the minimum point (x, y, z) so that a box encompasses all of the vertices.
Then, do I check and see if either of those points are in my view frustum? Is it that simple?
Thank you!
AABB or Axis Aligned Bounding Box is a very simple and fast object for testing intersection/containment of two 3D regions.
As you suggest, you calculate a min and max x, y, z for the two regions you want to compare, for example the region that describes a frustum and the region that describes a mesh. It is axis-aligned because the resulting box has edges parallel to the axes of the coordinate system. Obviously this can be slightly inaccurate (false positives of intersection/containment, but never false negatives), so once you filter your list with the AABB test, you might consider performing a more accurate test for the remaining meshes.
You test for intersection/containment as follows:
F = AABB of frustum
M = AABB of mesh

bool is_mesh_in_frustum(const AABB& F, const AABB& M)
{
    // The boxes are disjoint if they are separated along any one axis.
    if (F.min.x > M.max.x || M.min.x > F.max.x ||
        F.min.y > M.max.y || M.min.y > F.max.y ||
        F.min.z > M.max.z || M.min.z > F.max.z)
    {
        return false;
    }
    return true;
}
You can also look up algorithms for bounding spheres, oriented bounding box (OBB), and other types of bounding volumes. Depending on how many meshes you are rendering, you may or may not need a more accurate method.
To create an AABB in the first place, you can simply walk the vertices of the mesh and record the minimum/maximum x, y, and z values you encounter.
Also consider: if the meshes don't deform, then the bounding box in each mesh's coordinate space is static, so you can calculate the AABB for all the meshes as soon as you have the vertex data.
Then, each render pass, you just have to transform the precalculated AABB into the frustum's coordinate space before you do the test. (Note that under rotation you should transform all eight corners of the box and take new min/max values; the transformed min and max vertices alone no longer bound the box.)
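A minimal sketch of both steps, with illustrative Vec3/Mat4/AABB types (not from any particular library):

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };
struct Mat4 { float m[16]; };   // column-major, OpenGL convention

// Transform a point by a 4x4 matrix, assuming w = 1.
Vec3 transformPoint(const Mat4& M, const Vec3& p)
{
    return { M.m[0] * p.x + M.m[4] * p.y + M.m[8]  * p.z + M.m[12],
             M.m[1] * p.x + M.m[5] * p.y + M.m[9]  * p.z + M.m[13],
             M.m[2] * p.x + M.m[6] * p.y + M.m[10] * p.z + M.m[14] };
}

// Build the AABB by walking the mesh vertices once (verts must be non-empty).
AABB computeAABB(const std::vector<Vec3>& verts)
{
    AABB box{verts[0], verts[0]};
    for (const Vec3& v : verts)
    {
        box.min.x = std::min(box.min.x, v.x);
        box.min.y = std::min(box.min.y, v.y);
        box.min.z = std::min(box.min.z, v.z);
        box.max.x = std::max(box.max.x, v.x);
        box.max.y = std::max(box.max.y, v.y);
        box.max.z = std::max(box.max.z, v.z);
    }
    return box;
}

// Re-derive a transformed AABB from a precomputed local-space one.
// All eight corners are transformed, because a rotation can make any
// corner the new minimum or maximum.
AABB transformAABB(const AABB& local, const Mat4& modelMatrix)
{
    std::vector<Vec3> corners;
    for (int i = 0; i < 8; ++i)
        corners.push_back(transformPoint(modelMatrix, Vec3{
            (i & 1) ? local.max.x : local.min.x,
            (i & 2) ? local.max.y : local.min.y,
            (i & 4) ? local.max.z : local.min.z}));
    return computeAABB(corners);
}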
EDIT (for comment):
The AABB can provide false positives because it is at best the exact shape of the region you are bounding, but is more typically larger than the region you are bounding.
Consider a sphere: if you use an AABB, it's like putting a basketball into a box; you have all these gaps at the corners of the box where the ball can't reach.
Or, in the case of a frustum, where the frustum angles inwards towards the camera, its AABB will simply continue straight along the axes towards the camera, effectively bounding a region larger than the camera can see.
This is a source of inaccuracy, but it should never result in you culling an object that is even slightly inside the frustum, so at worst you will still be drawing some meshes that are close to the camera but still outside the frustum.
You can rectify this by first doing an AABB test to produce a smaller list of meshes that return true, and then performing a more accurate test on that smaller list with a more accurate bounding volume for the frustum and/or meshes.