Assume I have the UVs of the three vertices of a triangle. What is the fastest way to get the smallest texel that wraps this triangle? That is, the mip level and the UV coordinates of this texel.
Let us use the following notation:
Let p be the index of your points in the triangle, so p in {0,1,2}
Let n(p) be a 2D vector function representing the normalized texcoords in [0,1] (per component), assigned to point index p
Let t(p,l) be the unnormalized tex coords assigned to point p for mipmap level l
This means t(p,l) = n(p) * vec2(width(l), height(l)).
If we want to find the mipmap level, we can do this by calculating the size of the triangle in the base level t(p,0):
Let:
a = t(1,0) - t(0,0)
b = t(2,0) - t(0,0)
a and b represent the edge vectors of the triangle in texture space, at the base level. The extent of the triangle in each dimension (relative to t(0,0)) is then:
x_max = max(0, a.x, b.x) - min(0, a.x, b.x)
y_max = max(0, a.y, b.y) - min(0, a.y, b.y)
These two basically describe the size of an axis-aligned bounding-box around our triangle. So we can use the longest side to find the mipmap level:
m = max(x_max,y_max).
Finding the right mipmap level means finding the level l for which the size m would be <= 1 texel. By going up one mip level, the value of m is halved, so we get (with the appropriate rounding, clamping to level 0 when m <= 1):
l = max(0, ceil(log2(m)))
What we have now is the level at which the bounding box of the triangle would fit into the size of one texel. This is a lower bound on the level that actually fulfills your criteria: the triangle might still straddle a texel boundary and intersect up to 2x2 texels at level l. Going up just one more level is not guaranteed to do the trick either, as it might still cross a texel boundary at the next level. In the worst case, your triangle encloses the center point of your texture, in which case only the topmost 1x1 mip level will ever completely enclose your triangle.
So a naive algorithm could be:
1. Start at level l as calculated above.
2. Calculate floor(t(p,l)) for all three points.
3. Compare them. If all three are identical, you are finished and l is the result. If not, increase l by one and repeat at step 2.
The resulting l will be the level you searched for.
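A minimal Python sketch of the whole procedure (the function names are my own, and the bounding box is taken over all three vertices, which matches the estimate above):

import math

def enclosing_texel(n0, n1, n2, base_width, base_height, num_levels):
    # n0, n1, n2: normalized (u, v) of the three vertices, each component in [0, 1].
    # Returns (level, (x, y)): the first mip level at which all three vertices fall
    # into the same texel, and that texel's integer coordinates.
    def level_size(level):
        return max(1, base_width >> level), max(1, base_height >> level)

    def texel(n, level):
        w, h = level_size(level)
        # Clamp so that u == 1.0 still maps to the last texel of the row.
        return (min(math.floor(n[0] * w), w - 1), min(math.floor(n[1] * h), h - 1))

    # Estimate a starting level from the bounding box at the base level.
    us = [n[0] * base_width for n in (n0, n1, n2)]
    vs = [n[1] * base_height for n in (n0, n1, n2)]
    m = max(max(us) - min(us), max(vs) - min(vs))
    level = max(0, math.ceil(math.log2(m))) if m > 1 else 0

    # Walk up the mip chain until all three vertices share one texel.
    while level < num_levels - 1:
        texels = {texel(n, level) for n in (n0, n1, n2)}
        if len(texels) == 1:
            return level, texels.pop()
        level += 1
    return num_levels - 1, (0, 0)   # the 1x1 top level always encloses the triangle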
and the UV coordinates of this texel
A texel doesn't have a single UV coordinate, but represents a rectangle in UV space. So it is not entirely clear what you want, but you might want one of the following (a short sketch of these variants follows the list):
the unnormalized integer texel coords, which are just the floor(t(p,l)) you already calculated
the unnormalized coordinates of the texel center, which is just floor(t(p,l)) + vec2(0.5)
the unnormalized coordinates of the barycenter of the triangle, which is just (t(0,l) + t(1,l) + t(2,l))/3.0
the normalized variant of any of the above, which is just the value divided by the size of level l
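For illustration, given the unnormalized coordinates t0, t1, t2 of the three vertices at the chosen level and that level's size w x h (the example values below are made up), these variants look like:

import math

w, h = 4, 4
t0, t1, t2 = (2.1, 1.3), (2.6, 1.2), (2.4, 1.8)

texel = (math.floor(t0[0]), math.floor(t0[1]))          # integer texel coords: (2, 1)
center = (texel[0] + 0.5, texel[1] + 0.5)               # unnormalized texel center: (2.5, 1.5)
bary = tuple(sum(c) / 3.0 for c in zip(t0, t1, t2))     # unnormalized triangle barycenter
center_norm = (center[0] / w, center[1] / h)            # normalized variant of the center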
The refpages say "Returns the weighted average of the four texture elements that are closest to the specified texture coordinates." How exactly are they weighted? And what about 3D textures, does it still only use 4 texels for interpolation or more?
In 2D textures 4 samples are used, which means bi-linear interpolation: 3x linear interpolation. The weight is the normalized distance of the target position to its 4 neighboring texels.
So for example you want the texel at
(s,t)=(0.21,0.32)
but the nearby texels in the texture have these coordinates:
(s0,t0)=(0.20,0.30)
(s0,t1)=(0.20,0.35)
(s1,t0)=(0.25,0.30)
(s1,t1)=(0.25,0.35)
the weights are:
ws = (s-s0)/(s1-s0) = 0.2
wt = (t-t0)/(t1-t0) = 0.4
so first linearly interpolate the texels in the s direction
c0 = texture(s0,t0) + (texture(s1,t0)-texture(s0,t0))*ws
c1 = texture(s0,t1) + (texture(s1,t1)-texture(s0,t1))*ws
and finally in the t direction:
c = c0 + (c1-c0)*wt
where texture(s,t) returns the texel color at (s,t) when the coordinate corresponds to an exact texel, and c is the final interpolated texel color.
In reality the s,t coordinates are multiplied by the texture resolution (xs,ys), which converts them to texel units. After that, s-s0 and t-t0 are already normalized, so there is no need to divide by s1-s0 and t1-t0 as they are both equal to one. So:
s=s*xs; s0=floor(s); s1=s0+1; ws=s-s0;
t=t*ys; t0=floor(t); t1=t0+1; wt=t-t0;
c0 = texture(s0,t0) + (texture(s1,t0)-texture(s0,t0))*ws;
c1 = texture(s0,t1) + (texture(s1,t1)-texture(s0,t1))*ws;
c = c0 + (c1-c0)*wt;
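For illustration, here is a self-contained Python version of the same bilinear fetch on a small 2D array (the array, the clamp-to-edge helper and the function name are my own; it ignores wrap modes and the half-texel offset mentioned below):

import math

def bilinear_sample(tex, s, t):
    # tex: 2D list indexed as tex[y][x]; s, t: normalized coords in [0, 1].
    ys, xs = len(tex), len(tex[0])
    # Convert to texel units and split into integer base and fractional weight.
    fs, ft = s * xs, t * ys
    s0, t0 = math.floor(fs), math.floor(ft)
    ws, wt = fs - s0, ft - t0
    def fetch(x, y):
        # Clamp to the texture edges (simplified clamp-to-edge behaviour).
        return tex[min(max(y, 0), ys - 1)][min(max(x, 0), xs - 1)]
    # Two linear interpolations in s, then one in t.
    c0 = fetch(s0, t0) + (fetch(s0 + 1, t0) - fetch(s0, t0)) * ws
    c1 = fetch(s0, t0 + 1) + (fetch(s0 + 1, t0 + 1) - fetch(s0, t0 + 1)) * ws
    return c0 + (c1 - c0) * wt

# Example: a 2x2 single-channel texture
tex = [[0.0, 1.0],
       [1.0, 0.0]]
print(bilinear_sample(tex, 0.75, 0.25))   # 0.5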
I never used 3D textures before, but in that case 8 texels are used and it is called tri-linear interpolation, which is 2x bi-linear interpolation: simply take the 2 nearest slices, compute each with bi-linear interpolation, and then compute the final texel by linear interpolation based on the u coordinate in the exact same way ... so
u=u*zs; u0=floor(u); u1=u0+1; wu=u-u0;
c = cu0 + (cu1-cu0)*wu;
where zs is the number of slices, cu0 is the result of the bi-linear interpolation in the slice at u0 and cu1 in the slice at u1. The same principle is also used for mipmaps...
All the coordinates may be offset by 0.5 texel, and the resolution multiplication can be done with xs-1 instead of xs, depending on your clamp settings ...
As well as the bilinear interpolation outlined in Spektre's answer, you should be aware of the precision of GL_LINEAR interpolation. Many GPUs (e.g. Nvidia, AMD) do the interpolation using fixed point arithmetic with only ~255 distinct values between the R,G,B,A values in the texture.
For example, here is pseudo code showing how GPUs might do the interpolation:
float interpolate_red(float red0, float red1, float f) {
    int g = (int)(f * 256);
    return (red0 * (256 - g) + red1 * g) / 256;
}
If your texture is for coloring and contains GL_UNSIGNED_BYTE values then it is probably OK for you. But if your texture is a lookup table for some other calculation and it contains GL_UNSIGNED_SHORT or GL_FLOAT values then this loss of precision could be a problem for you. In which case you should make your lookup table bigger with in-between values calculated with (float) or (double) precision.
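To get a feeling for the size of this effect, here is a small Python comparison of full-precision interpolation against the 1/256-step weight model sketched above (the model is an approximation of typical hardware behaviour, not a spec guarantee):

def lerp_float(a, b, f):
    # Full-precision interpolation.
    return a + (b - a) * f

def lerp_fixed(a, b, f):
    # Interpolation with the weight quantized to 1/256 steps, as sketched above.
    g = int(f * 256)
    return (a * (256 - g) + b * g) / 256

a, b, f = 0.0, 1.0, 0.123456
print(lerp_float(a, b, f))   # 0.123456
print(lerp_fixed(a, b, f))   # 0.12109375 -> only 1/256 resolution in the weight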
Consider a simple convex polygon in 2D Cartesian space. If given a list of vertex coordinates sorted in a counter-clockwise orientation like this [[x0, y0], ..., [xn, yn]]. How could you compute the center of the polygon (the point inside the polygon that is equidistant to all vertices)?
Also consider a second case where the polygon is placed in 3D Cartesian space and its normal vector is not parallel to any of the Cartesian axes. How could you compute the center, without rotating the polygon?
I can read C/C++, Fortran, MATLAB and Python, however any pseudo-code is also well appreciated.
EDIT
I now realise that my question was not well-posed. I am sorry for that. It appears that what I was looking for is the centroid of the polygon (i.e. the point on which a cardboard cut-out would balance while assuming uniform density and a uniform gravity field).
Your definition of center doesn't make sense in general.
To see this, just draw three non-collinear points on a plane and compute the one and only circle that passes through all three points. Clearly your center of the triangle must be the center of this circle.
Now draw a fourth point that doesn't lie on the circle and form the four-sided polygon. What is the center? There is no point in the plane that is equidistant from all vertices.
Note also that even in case of triangles using the point equidistant from the vertices can give you points outside and far away from the polygon and is also numerically unstable (given any ε>0 and M>0 you can always build a triangle in which a specific movement of a vertex by a distance of less than ε moves the center by a distance greater than M).
Commonly used "centers" that are simple to compute are the average of all vertices, the average of the boundary, the center of mass or even just the center of the axis-aligned bounding box. All of them can however fall outside the polygon if the polygon is not convex, but in your case they may work.
The simplest reasonable one (because it doesn't depend on the coordinate system) is the barycenter of the vertices (code in Python):
xc = sum(x for (x, y) in points) / len(points)
yc = sum(y for (x, y) in points) / len(points)
A drawback is that just splitting one side of the polygon (adding a vertex without changing the shape) gives you a different center; in other words, it depends on the vertices and not on the set of points bounded by the polygon. The simplest one that depends only on the polygon itself is IMO the barycenter of the boundary:
sx = sy = sL = 0
for i in range(len(points)):   # counts from 0 to len(points)-1
    x0, y0 = points[i - 1]     # in Python points[-1] is the last element of points
    x1, y1 = points[i]
    L = ((x1 - x0)**2 + (y1 - y0)**2) ** 0.5
    sx += (x0 + x1)/2 * L
    sy += (y0 + y1)/2 * L
    sL += L
xc = sx / sL
yc = sy / sL
For both of them the extension to 3d is trivial... just add z using the same formulas.
In the case of a general (not necessarily convex, not necessarily simply connected) polygon a "center" that I found useful but that is not trivial to compute is the (an) inner point that is at a maximum distance from the boundary (in other words a "most inner" point).
In this case I resorted to use a discrete (bitmap) representation and a gaussian distance transform.
First of all for a polygon, the centroid may not always imply equidistant lengths from the centroid to the vertices. In most cases this is probably NOT true. That being said, you can find the centroid simply by finding the mean of your x coordinates and the mean of your y coordinates. In Matlab: centroidx = mean(xcoords) and centroidy = mean(ycoords) are the coordinates of the centroid. See this if you really need more.
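Note that the mean of the vertex coordinates is the vertex centroid, which in general differs from the area centroid that the edited question asks about (the balance point of the cardboard cut-out). For reference, a Python sketch of the standard shoelace-based area centroid of a simple polygon:

def area_centroid(points):
    # points: list of (x, y) vertices of a simple polygon, in order (CW or CCW).
    a = cx = cy = 0.0
    for i in range(len(points)):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        cross = x0 * y1 - x1 * y0      # shoelace term of this edge
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5                           # signed area of the polygon
    return cx / (6.0 * a), cy / (6.0 * a)

# Example: unit square -> (0.5, 0.5)
print(area_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))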
I'm wondering how a precise algorithm can be written to compute the frontier of the surface of intersection between a parametric surface f : R^2 --> R^3 and a triangulated mesh.
I've thought of a first approach:
nStepsU = 100
nStepsV = 100
tolerance = 0.01 // pick some sensible value
intersectionVertices = {}
for u from minU to maxU in nStepsU:
    for v from minV to maxV in nStepsV:
        for w in verticesInMesh:
            if euclidean distance( f(u,v), w ) < tolerance:
                add vertex w to intersectionVertices
connect the vertices in intersectionVertices with a line strip
draw the vertices in intersectionVertices
This algorithm is very simple but slow (O(n^3)), it does not take into account that the mesh is made of triangles (so the output points are vertices of the mesh rather than points computed from the intersection of the surface with the triangles), and it depends heavily on the tolerance one has to set.
Does someone have a better idea, or can you point me to a suitable library for this purpose?
I would iterate over each triangle and compute the intersection of the triangle with the surface. I would use a geometry shader which takes triangles as input and outputs line strips. For each vertex of the triangle, compute the signed distance h to the surface. Then iterate over the edges: if an edge has two vertices where h has different signs, that edge intersects the surface. While I'm sure the exact intersection can be computed, the easiest solution is to interpolate linearly along the edge using the absolute distances h0 and h1 of its endpoints v0 and v1, i.e.
vec3 intersection = (h0 * v1 + h1 * v0) / (h0 + h1);
Then output each intersection as a vertex of your line segment.
The code I posted here can get you started. If you want to just draw the result, you will probably run into the same problem that I described in that question. If you need the vertices on the client, you can use transform feedback.
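For reference, the same edge-crossing logic could also be done on the CPU. Here is a Python sketch (signed_distance stands in for whatever distance function your surface provides, and the mesh is assumed to be a plain list of triangles, each given as three 3D points):

def surface_mesh_intersection(triangles, signed_distance):
    # triangles: list of ((x,y,z), (x,y,z), (x,y,z))
    # signed_distance: maps a 3D point to a signed distance to the surface
    # Returns a list of 3D segments (point pairs) approximating the intersection curve.
    segments = []
    for tri in triangles:
        h = [signed_distance(p) for p in tri]
        crossings = []
        for i in range(3):
            v0, v1 = tri[i], tri[(i + 1) % 3]
            h0, h1 = h[i], h[(i + 1) % 3]
            if (h0 < 0) != (h1 < 0):              # sign change -> edge crosses the surface
                w0, w1 = abs(h1), abs(h0)         # linear interpolation weights
                s = w0 + w1
                crossings.append(tuple((w0 * a + w1 * b) / s for a, b in zip(v0, v1)))
        if len(crossings) == 2:                   # the usual case: one segment per triangle
            segments.append((crossings[0], crossings[1]))
    return segments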
Edit: I just did a little test. As the distance function I used
float distToHelicoid(in vec3 p)
{
    float theta = p.y / 5 + offset.x / 50;
    float a = mod(theta - atan(p.z, p.x), 2*PI) - PI; // [-PI, PI)
    if (abs(a) > PI/2)
        a = mod(theta - atan(-p.z, -p.x), 2*PI) - PI;
    return a;
}
Since there is no inside/outside, and this distance function goes from -90° to 90°, you can only emit vertices if the sign goes from small negative to small positive or vice versa, not when it flips from 90° to -90°. Here I simply filtered out distances where abs(dist) > 45°.
The clean way would be to determine the index of the closest revolution. E.g. [-pi, pi] would be revolution 0, [pi, 3pi] = revolution 1, etc. You would then only emit if two distances refer to the same revolution.
If your surface is always a helicoid, you can try to project everything onto a cylinder around the Y axis.
The surface of the helicoid consists of lines orthogonal to the surface of that cylinder, and after projection you will get a spiral. After projecting the 3D triangle mesh onto that cylinder you will get a 2D triangle mesh (note that some areas may be covered with several layers of triangles).
So the task becomes finding the triangles in the 2D mesh that intersect the spiral, which is simpler. If you are OK with approximations, you can segment that spiral and use some kind of tree to find the triangles intersecting it.
When a triangle intersects part of the spiral, the intersection will be a segment; you can then recalculate the 3D coordinates of the segment, and the set of these segments is your intersection line.
Given a three dimensional triangle mesh, how can I find out whether it is convex or concave? Is there an algorithm to check that? If so it would be useful to define a tolerance range to ignore small concavities.
Image Source: http://www.rustycode.com/tutorials/convex.html
A convex polyhedron may be defined as an intersection of a finite number of half-spaces. These half-spaces are in fact the facet-defining half-spaces.
EDIT: Assuming your mesh actually defines a polyhedron (i.e. there is an "inside" and an "outside")
You can do something like this (pseudocode):
for each triangle
    p = triangle plane
    n = normal of p (pointing outside)
    d = distance of the plane p from the origin
    // Note: '*' is the dot product,
    // so that X*n + d = 0 is the plane equation.
    // If you write the plane equation as (X-P)*n = 0 (where P is any point lying in the plane), then d = -P*n (it's a scalar).
    for each vertex v in the mesh
        h = v*n + d
        if (h > Tolerance) return NOT CONVEX
        // Note that when v is a vertex of the triangle from which n and d come,
        // h is always zero, so the tolerance is required (or you can skip testing those vertices).
    end
end
return CONVEX
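For reference, a runnable Python version of this test (a sketch; it assumes the mesh comes as a vertex list plus triangle index triples with consistent, outward-facing winding):

def is_convex(vertices, triangles, tolerance=1e-6):
    # vertices: list of (x, y, z); triangles: list of (i0, i1, i2) index triples,
    # wound so that the cross product of the edges points outward.
    def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    for i0, i1, i2 in triangles:
        p0, p1, p2 = vertices[i0], vertices[i1], vertices[i2]
        n = cross(sub(p1, p0), sub(p2, p0))      # outward normal (not normalized)
        d = -dot(n, p0)                          # plane equation: dot(n, x) + d = 0
        # Scale the tolerance by the normal's length since n is not normalized.
        scale = max(dot(n, n) ** 0.5, 1e-12)
        for v in vertices:
            if dot(n, v) + d > tolerance * scale:
                return False                     # a vertex lies outside this face's plane
    return True

# Example: a unit tetrahedron is convex.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris  = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(is_convex(verts, tris))   # True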
For a simple polygon as you described it, you can check the inner angle at every vertex: if every inner angle is below 180 degrees, there is no way it is concave; if a single vertex has an inner angle over 180°, it is concave.
Edit: for 3D meshes the same idea applies, but instead of vertex angles you have to test, for each edge, whether the dihedral angle between the two adjacent triangles is above or below 180°.
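For the 2D case, one practical way to check the "every inner angle below 180°" condition is to verify that the cross products of consecutive edge vectors all have the same sign; a small Python sketch:

def is_convex_polygon(points):
    # points: list of (x, y) vertices of a simple polygon in order (CW or CCW).
    signs = set()
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        x2, y2 = points[(i + 2) % n]
        # z component of the cross product of consecutive edge vectors;
        # its sign tells whether the turn at this vertex is left or right.
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1   # all turns in the same direction -> convex

print(is_convex_polygon([(0, 0), (2, 0), (2, 2), (0, 2)]))          # True
print(is_convex_polygon([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))  # False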
I can't find this answer anywhere, I hope somebody could help me.
I have an image (all black) with a white generic quadrilateral polygon inside it, and the correspondent 4 corners coordinates of such polygon.
I need to find the corners of a slightly enlarged quadrilateral and the same for a slightly reduced one (the shape must be the same, just a resize of the quadrilateral inside the image).
Is there a function which allows me to do that, or should I compute manually some geometry?
Thank you for your help.
Consider a vertex p of the polygon, with its predecessor p1 and successor p2.
The vectors between these points are
v1 = p1 - p
v2 = p2 - p
(The computation is componentwise for the x and y coordinates respectively).
In the shrunk polygon the vertex p is moved to p' along the line which halves the angle a between the vectors v1 and v2.
The vector w in this direction is
w = v1 + v2
and the unit vector v in this direction is
v = w / |w| = (w_x, w_y) / sqrt(w_x*w_x + w_y*w_y)
The new point p' is
p' = p + k * v , i.e. :
p_x' = p_x + k * v_x
p_y' = p_y + k * v_y
where k is the shifting distance (a scalar).
If the vertex p is convex (as in the figure), then k >= 0 means shrinking and k <= 0 means expanding.
If the vertex p is concave, then k >= 0 means expanding and k <= 0 means shrinking.
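A Python sketch of this per-vertex construction (my own transcription of the formulas above; here v1 and v2 are normalized before summing so that their sum really bisects the corner angle, and note that moving every vertex by the same k does not offset all edges by exactly the same distance):

def shift_vertices(points, k):
    # points: list of (x, y) vertices in order; k > 0 shrinks convex corners,
    # k < 0 expands them, as described above.
    def normalize(x, y):
        d = (x * x + y * y) ** 0.5
        return (x / d, y / d) if d > 0 else (0.0, 0.0)

    result = []
    n = len(points)
    for i in range(n):
        px, py = points[i]
        p1x, p1y = points[i - 1]                    # predecessor p1
        p2x, p2y = points[(i + 1) % n]              # successor p2
        u1x, u1y = normalize(p1x - px, p1y - py)    # unit vector towards p1
        u2x, u2y = normalize(p2x - px, p2y - py)    # unit vector towards p2
        vx, vy = normalize(u1x + u2x, u1y + u2y)    # unit bisector direction v = w / |w|
        result.append((px + k * vx, py + k * vy))   # p' = p + k * v
    return result

# Example: shrink a unit square; each corner moves 0.1 along its diagonal bisector.
print(shift_vertices([(0, 0), (1, 0), (1, 1), (0, 1)], 0.1))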
What you want is polygon offsetting. If you want to use an existing library, consider using Clipper:
void OffsetPolygons(const Polygons &in_polys,
Polygons &out_polys,
double delta,
JoinType jointype = jtSquare, double MiterLimit = 2.0);
This function offsets the 'polys' polygons parameter by the 'delta' amount. Positive delta values expand outer polygons and contract inner 'hole' polygons. Negative deltas do the reverse.
Although I must add that for a simple geometry like a quadrilateral it is easy to do from scratch:
Identify all four infinite lines that form the quadrilateral
Offset the lines parallel to themselves
Compute intersection of these new lines
Just be careful of corner cases: when you offset a quadrilateral that has one very small edge, it can degenerate into a triangle.
I agree with the answer of parapura rajkumar. I wanted to add that the solution of Jiri is not 100% correct, because the vertex centroid of a quadrilateral is different from the area centroid of a quadrilateral, as it is written here. For the enlargement one would have to use the area centroid - or the much more elegant solution with the parallel lines mentioned by parapura rajkumar. I just want to add the following to this answer:
You can simply determine the outer points of the enlarged quadrilateral by computing the normals of the edges of the original quadrilateral. Afterwards, normalize the normals, multiply them by the offset and add them to the endpoints of each edge. Given these offset edges you can now compute the corners of the enlarged quadrilateral as the intersections of the neighbouring parallel lines with this formula.
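For illustration, a Python sketch of this edge-offset-and-intersect approach for a convex quadrilateral (my own code; positive delta enlarges a counter-clockwise polygon, negative delta shrinks it):

def offset_convex_polygon(points, delta):
    # points: vertices of a convex polygon in counter-clockwise order.
    # Each edge is shifted outward by delta along its normal, then neighbouring
    # offset lines are intersected to get the new corners.
    n = len(points)
    lines = []   # each offset edge stored as (point_on_line, direction)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        dx, dy = x1 - x0, y1 - y0
        length = (dx * dx + dy * dy) ** 0.5
        nx, ny = dy / length, -dx / length          # outward unit normal (CCW winding)
        lines.append(((x0 + delta * nx, y0 + delta * ny), (dx, dy)))

    result = []
    for i in range(n):
        # New position of vertex i is the intersection of offset edges i-1 and i.
        (px, py), (dx1, dy1) = lines[i - 1]
        (qx, qy), (dx2, dy2) = lines[i]
        denom = dx1 * dy2 - dy1 * dx2
        t = ((qx - px) * dy2 - (qy - py) * dx2) / denom   # parameter along edge i-1
        result.append((px + t * dx1, py + t * dy1))
    return result

# Example: enlarge a unit square (CCW) by 0.1 on every side.
print(offset_convex_polygon([(0, 0), (1, 0), (1, 1), (0, 1)], 0.1))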