Modifying a texture on a mesh at a given world coordinate - C++

I'm making an editor in which I want to build a terrain map. I want to use the mouse to increase/decrease the terrain altitude to create mountains and lakes.
Technically I have a heightmap that I want to modify at a certain texture coordinate that I pick out with my mouse. To do this I first go from screen coordinates to world position - I have done that. The next step, going from the world position to the right texture coordinate, puzzles me though. How do I do that?

If you are using a simple heightmap as a displacement map in, let's say, the y direction, the base mesh lies in the xz plane (y=0).
You can discard the y coordinate from the world coordinate that you have calculated and you get the point on the base mesh. From there you can map it to texture space the same way you map your texture onto the mesh.
I would not implement it that way.
I would render the scene to a framebuffer and, instead of rendering a texture onto the mesh, color-code the texture coordinates onto the mesh.
If I click somewhere in screen space, I can simply read the pixel value from the framebuffer and get the texture coordinate directly.
Rendering to the framebuffer should be very inexpensive anyway.
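A minimal sketch of the read-back side, assuming the mesh was already rendered into an FBO (uvFbo, an assumed name) with a fragment shader that writes the interpolated UV into the red/green channels of a floating-point attachment; mouseX, mouseY and windowHeight are assumed to come from your windowing code:

// Read the color-coded texture coordinate under the mouse.
glBindFramebuffer(GL_READ_FRAMEBUFFER, uvFbo);
float uv[2];
// OpenGL's window origin is the lower-left corner, so flip the mouse y.
glReadPixels(mouseX, windowHeight - mouseY, 1, 1, GL_RG, GL_FLOAT, uv);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
// uv[0], uv[1] now hold the texture coordinate at the clicked pixel.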

Assuming your terrain is a simple rectangle, you first calculate the vector between the mouse world position and the origin of your terrain (the vertex of your terrain quad that the top-left corner of your height map is mapped to). E.g. mouse (50,25) - origin (-100,-100) = (150,125).
Now divide the x and y coordinates by the world-space width and height of your terrain quad:
150 / 200 = 0.75 and 125 / 200 = 0.625. This gives you the texture coordinates; if you need them as pixel coordinates instead, simply multiply by the size of your texture.
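In C++ that boils down to two divisions; a sketch where terrainOrigin, terrainWidth, terrainHeight, texWidth and texHeight are assumed names for your terrain corner, quad extents and texture size:

// World position on the terrain quad -> texture coordinate.
// If your terrain lies in the xz plane, use worldPos.z in place of .y.
float u = (worldPos.x - terrainOrigin.x) / terrainWidth;  // e.g. 150/200 = 0.75
float v = (worldPos.y - terrainOrigin.y) / terrainHeight; // e.g. 125/200 = 0.625
// As pixel coordinates into the height map:
int px = int(u * texWidth);
int py = int(v * texHeight);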

I assume the following:
The world coordinates you computed are those of the mouse pointer within the view frustum. I name them mouseCoord
We also have the camera coordinates, camCoord
The world consists of triangles
Each triangle point has texture coordinates; those are interpolated by barycentric coordinates
If so, the solution goes like this:
Use camCoord as the origin. Compute the direction of the ray as mouseCoord - camCoord.
Compute the point of intersection with a triangle. The naive variant is to check every triangle for an intersection; a more sophisticated approach would rule out most triangles first by some other algorithm, like partitioning the world into cubes, tracing the ray along the cubes, and only looking at the triangles that overlap those cubes. Intersection with a triangle can be computed like on this website: http://www.lighthouse3d.com/tutorials/maths/ray-triangle-intersection/
Compute the intersection point's barycentric coordinates with respect to that triangle, like that: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates
Use the barycentric coordinates as weights for the texture coordinates of the corresponding triangle points. The result is the texture coordinates of the intersection point, aka what you want.
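The last step is just a weighted sum; a sketch using GLM (an assumption, any vector library works), with the barycentric weights computed from sub-triangle areas:

#include <glm/glm.hpp>

// p0, p1, p2: triangle corners; uv0, uv1, uv2: their texture coordinates;
// hit: the ray/triangle intersection point, assumed to lie inside the triangle.
glm::vec2 interpolateUV(glm::vec3 p0, glm::vec3 p1, glm::vec3 p2,
                        glm::vec2 uv0, glm::vec2 uv1, glm::vec2 uv2,
                        glm::vec3 hit) {
    float area = glm::length(glm::cross(p1 - p0, p2 - p0));
    float w0 = glm::length(glm::cross(p2 - p1, hit - p1)) / area; // opposite p0
    float w1 = glm::length(glm::cross(p0 - p2, hit - p2)) / area; // opposite p1
    float w2 = 1.0f - w0 - w1;                 // weights sum to 1 inside the triangle
    return w0 * uv0 + w1 * uv1 + w2 * uv2;
}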
If I misunderstood what you wanted, please edit your question with additional information.
Another variant, specific to a height map:
Assume the assumptions are changed like this:
The world has ground tiles over x and y
The ground tiles have height values in their corners
For a point within the tile, the height value is interpolated somehow, like by bilinear interpolation.
The texture is interpolated in the same way, again with given texture coordinates for the corners
A feasible algorithm for that (approximate):
Again, compute origin and direction.
Without loss of generality, we assume that the direction has a higher change in the x direction. If not, exchange x and y in the algorithm.
Trace the ray in a given step length for x, that is, in each step the x coordinate changes by that step length. (Take the direction, multiply it by the step size divided by its x value, and add that new direction to the current position, starting at the origin.)
For your current coordinate, check whether its z value is below the current height (aka has just collided with the ground).
If so, either finish or decrease the step size and do a finer search in that vicinity, going backwards until you are above the height again, then maybe going forwards in even finer steps again, et cetera. The result is the current x and y coordinates.
Compute the relative position of your x and y coordinates within the current tile. Use those as weights for the corner texture coordinates.
This algorithm can theoretically jump over very thin tops. Choose a small enough step size to counter that. I cannot give an exact algorithm without knowing what type of interpolation the height map uses. It might not be the worst idea to create triangles anyway, out of bilinearly interpolated coordinates maybe? In any case, the algorithm is good for finding the tile in which the ray collides.
Another variant would be to trace the ray over the points at which its x-y coordinates cross the tile grid and then look if the z coordinate went below the height map. Then we know that it collides in this tile. This could produce a false negative if the height can be bigger inside the tile than at its edges, as certain forms of interpolation can produce, especially those that consider the neighbour tiles. It works just fine with bilinear interpolation, though.
With bilinear interpolation, the exact intersection can be found like this: take the two (x,y) coordinates at which the grid is crossed by the ray. Compute the height of those to retrieve two (x,y,z) coordinates. Create a line out of them. Compute the intersection of that line with the ray. That is the intersection with the tile's height map.
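A rough sketch of the stepping loop from the approximate variant, using GLM (an assumption) and two assumed helpers, height(x, y) for the interpolated height map lookup and insideTerrain(x, y) for a bounds check, with z as the up axis as above:

#include <cmath>
#include <glm/glm.hpp>

float height(float x, float y);        // assumed: interpolated height map lookup
bool insideTerrain(float x, float y);  // assumed: terrain bounds check

// Returns the (x, y) at which the ray first drops below the height field.
glm::vec2 marchRay(glm::vec3 origin, glm::vec3 dir, float stepLength) {
    glm::vec3 pos = origin;
    // Advance so that x changes by stepLength in each iteration.
    glm::vec3 step = dir * (stepLength / std::abs(dir.x));
    while (insideTerrain(pos.x, pos.y)) {
        if (pos.z < height(pos.x, pos.y)) {
            // Collided: refine here with smaller steps (or the grid-line
            // intersection described above) before using the result.
            break;
        }
        pos += step;
    }
    return glm::vec2(pos.x, pos.y);
}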

The simplest way is to render the mesh in a pre-pass with the UVs as the colour. No screen-to-world conversion needed. The UV is the value at the mouse position. Just be careful with mips/filtering etc.

Related

How to normalize a 3D non-colored mesh in a unit bounding box

I have a 3D mesh encoded in a .OFF file. Only vertices, the coordinates of these vertices, and connectivity are encoded. I read in some papers that a 3D mesh can be normalized in a unit bounding box. What does this really mean? And how can we do this?
That means the mesh will fit into space defined by axis aligned cube of size 1 for example defined by corners: A(-0.5,-0.5,-0.5) and B(+0.5,+0.5,+0.5).
To achieve this:
get actual bounding box
So loop through all used vertexes and remember the min and max coordinates for each axis: A0(xmin,ymin,zmin), B0(xmax,ymax,zmax).
Normalize to bounding box A,B
So loop through each Vertex again and recompute them (by linear interpolation). For example like this:
Vertex[i].x=A.x + (B.x-A.x)*(Vertex[i].x-A0.x)/(B0.x-A0.x)
Vertex[i].y=A.y + (B.y-A.y)*(Vertex[i].y-A0.y)/(B0.y-A0.y)
Vertex[i].z=A.z + (B.z-A.z)*(Vertex[i].z-A0.z)/(B0.z-A0.z)
The problem is that this will not respect aspect ratios. In case you need the mesh to preserve them, then you need to change this to:
scale = min((B.x-A.x)/(B0.x-A0.x),
            (B.y-A.y)/(B0.y-A0.y),
            (B.z-A.z)/(B0.z-A0.z))
Vertex[i].x=(Vertex[i].x-0.5*(A0.x+B0.x))*scale+0.5*(A.x+B.x)
Vertex[i].y=(Vertex[i].y-0.5*(A0.y+B0.y))*scale+0.5*(A.y+B.y)
Vertex[i].z=(Vertex[i].z-0.5*(A0.z+B0.z))*scale+0.5*(A.z+B.z)
Hope I did not make any mistake as I derived it right in the SO/SE editor. The idea is to compute the max scale that does not exceed the new bounding box size (the largest mesh axis will fit exactly into the new bounding box) and then just rescale the mesh so that the center of the old bounding box becomes the center of the new bounding box.
Some meshes also include their own transform matrices. In that case you can encode this transformation directly into that matrix, leaving the vertexes as they are. But usually, if mesh normalization is required, it is because some vertex manipulation needs it, and then it is usually better to change the vertexes ...
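Putting the aspect-preserving variant together as compilable C++, a sketch under the assumptions above (target box A(-0.5,-0.5,-0.5), B(+0.5,+0.5,+0.5), so each target extent is 1; a degenerate flat axis would need a divide-by-zero guard, which is left out):

#include <algorithm>
#include <vector>

struct Vertex { float x, y, z; };

void normalizeToUnitCube(std::vector<Vertex>& verts) {
    // Actual bounding box: A0 (min corner), B0 (max corner). Assumes verts is non-empty.
    Vertex A0 = verts[0], B0 = verts[0];
    for (const Vertex& v : verts) {
        A0.x = std::min(A0.x, v.x); B0.x = std::max(B0.x, v.x);
        A0.y = std::min(A0.y, v.y); B0.y = std::max(B0.y, v.y);
        A0.z = std::min(A0.z, v.z); B0.z = std::max(B0.z, v.z);
    }
    // Max uniform scale still fitting: min over axes of (B-A)/(B0-A0), with B-A = 1.
    float scale = std::min({ 1.0f / (B0.x - A0.x),
                             1.0f / (B0.y - A0.y),
                             1.0f / (B0.z - A0.z) });
    // Old bounding box center; the new center is the origin.
    float cx = 0.5f * (A0.x + B0.x);
    float cy = 0.5f * (A0.y + B0.y);
    float cz = 0.5f * (A0.z + B0.z);
    for (Vertex& v : verts) {
        v.x = (v.x - cx) * scale;
        v.y = (v.y - cy) * scale;
        v.z = (v.z - cz) * scale;
    }
}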

Calculate Texture coordinates for procedural generated geometry

How can I calculate texture coordinates for such geometry?
The angle shown in the image (89.90 degrees) may vary, so the figure changes and is not always this uniform (it may be like the geometry at the bottom of the image). The red dots are generated procedurally, depending on the degree of smoothness given.
I would solve it by basic trigonometry.
For simplicity and convenience lets assume:
coordinates [0,0] are in the middle of the geometry (where all the lines there intersect) and in the middle of the texture (and they map to each other - [0,0] in geometry is [0,0] in the texture).
the texture coordinates span from -1 to 1 (and also assume the geometry coordinates do too in the case of 90 degrees - in other cases it may get wider and shorter)
positive values for x span right and y up. And assume that the x geometry axis is aligned with the u texture axis no matter the angle (which is 89.90 in your figures).
Something like this:
Then to transform from texture [u,v] to geometry [x,y] coordinates:
x = u + v*cos(angle)
y = v*sin(angle)
To illustrate, it is basically a shear transformation plus a scale transformation to preserve the length of y (or alternatively, similar to a rotation transform, but rotating only one axis - y - not both). If I reverse that transformation (to get the texture coordinates we want):
u = x - y*cot(angle)
v = y/sin(angle)
With those equations I should be able to transform any geometry coordinates (a point) in the described situation into texture coordinates, for any angle in the (0, 180) range anyway.
(Hopefully I didn't make too many embarrassing errors in there)
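As plain C++, the inverse mapping is only two lines; a sketch where angleRad is an assumed parameter name for the angle between the axes in radians:

#include <cmath>

// Geometry (x,y) -> texture (u,v) for the sheared axes described above.
void geomToTex(float x, float y, float angleRad, float& u, float& v) {
    u = x - y / std::tan(angleRad);  // x - y*cot(angle)
    v = y / std::sin(angleRad);
}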
I would take the easy way out and use either solid texturing or tri-planar mapping (http://gamedevelopment.tutsplus.com/articles/use-tri-planar-texture-mapping-for-better-terrain--gamedev-13821).
If you really need uv, one option is to start with primitives that have a mapping and carry that over for every operation.
Creating uv after the fact will not get good results.

texture mapping over many triangles in a circle

After some help, I want to texture onto a circle as you can see below.
I want to do it in such a way that the centre of the circle starts on the shared point of the triangles.
The triangles can change in size and number and will range over varying degrees, i.e. 45, 68, 250, so only the part of the texture visible in the triangles can be seen.
It's basically a one-to-one mapping: shift the image to the left and you see only the part where there are triangles.
I'm not sure what this is called or what to google for; can anyone make some suggestions or point me to relevant information?
I was thinking I would have to generate the texture coordinates on the fly to select the relevant part, but it feels like I should be able to do a one-to-one mapping, which would be simpler than calculating triangles on the texture to map to the OpenGL triangles.
Generating texture coordinates for this isn't difficult. Each point of the polygon corresponds to a certain angle, so the i-th point's angle will be i*2*pi/N, where N is the order of the regular polygon (number of sides). Then you can use the following to evaluate each point's texture coordinates:
texX = (cos(i*2*pi/N)+1)/2
texY = (sin(i*2*pi/N)+1)/2
Well, and the center point has (0.5, 0.5).
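As a C++ sketch generating these coordinates for a triangle fan (the center-vertex-first layout is an assumption; M_PI may require _USE_MATH_DEFINES on some compilers):

#include <array>
#include <cmath>
#include <vector>

std::vector<std::array<float, 2>> fanTexCoords(int N) {
    std::vector<std::array<float, 2>> tc;
    tc.push_back({0.5f, 0.5f});        // shared center point
    for (int i = 0; i <= N; ++i) {     // <= N repeats the first rim point to close the fan
        float a = i * 2.0f * float(M_PI) / N;
        tc.push_back({(std::cos(a) + 1.0f) / 2.0f,
                      (std::sin(a) + 1.0f) / 2.0f});
    }
    return tc;
}

For a partial circle (e.g. 45 or 250 degrees) the same formula applies, with the angles spanning only that arc.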
It may be even simpler to generate coordinates in the shader, if you have one specially for this:
I assume you get pos as the vertex position. It depends on how you store the polygon vertexes, but let the center be (0,0) and the other points range from (-1,-1) to (1,1). Then pos can simply be used as the texture coordinates with an offset:
vec2 texCoords = (pos + vec2(1,1))*0.5;
and the pos itself then should be passed to vector-matrix multiplication as usual.

Screen space bounding box computation in OpenGL

I'm trying to implement tiled deferred rendering method and now I'm stuck. I'm computing min/max depth for each tile (32x32) and storing it in texture. Then I want to compute screen space bounding box (bounding square) represented by left down and top right coords of rectangle for every pointlight (sphere) in my scene (see pic from my app). This together with min/max depth will be used to check if light affects actual tile.
Problem is I have no idea how to do this. Any idea, source code or exact math?
Screen-space is basically a 2D entity, so instead of a bounding box think of a bounding rectangle.
Here is a simple way to compute it:
Project 8 corner points of your world-space bounding box onto the screen using your ModelViewProjection matrix
Find a bounding rectangle of these points (which is just min/max X and Y coordinates of the points)
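The simple way is only a few lines with GLM (an assumed dependency); the result is in NDC and is only valid while all corners are in front of the near plane (w > 0):

#include <glm/glm.hpp>

// Screen-space bounding rectangle (in NDC) of a world-space box.
void screenRect(const glm::vec3 corners[8], const glm::mat4& mvp,
                glm::vec2& rectMin, glm::vec2& rectMax) {
    rectMin = glm::vec2( 1e30f);
    rectMax = glm::vec2(-1e30f);
    for (int i = 0; i < 8; ++i) {
        glm::vec4 clip = mvp * glm::vec4(corners[i], 1.0f);
        glm::vec2 ndc = glm::vec2(clip) / clip.w;  // perspective divide
        rectMin = glm::min(rectMin, ndc);
        rectMax = glm::max(rectMax, ndc);
    }
    // NDC is in [-1,1]; remap with the viewport to get pixel coordinates.
}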
A more sophisticated way can be used to compute a screen-space bounding rect for a point light source. We calculate four planes that pass through the camera position and are tangent to the light's sphere of illumination (the light radius). The intersection of each tangent plane with the image plane gives us 4 lines on the image plane. These lines define the resulting bounding rectangle.
Refer to this article for math details: http://www.altdevblogaday.com/2012/03/01/getting-the-projected-extent-of-a-sphere-to-the-near-plane/

OpenGL/GLUT - Project ModelView Coordinate to Texture Matrix

Is there a way using OpenGL or GLUT to project a point from the model-view matrix into an associated texture matrix? If not, is there a commonly used library that achieves this? I want to modify the texture of an object according to a ray cast in 3D space.
The simplest case would be:
A ray is cast which intersects a quad, mapped with a single texture.
The point of intersection is converted to a value in texture space, clamped between [0.0,1.0] in the x and y axes.
A 3x3 patch of pixels centered around the rounded value of the resulting texture point is set to an alpha value of 0 (or another RGBA value which is convenient for the desired effect).
To illustrate here is a more complex version of the question using a sphere, the pink box shows the replaced pixels.
I just specify texture points for mapping in OpenGL; I don't actually know how the pixels are projected onto the sphere. Basically I need to do the inverse of that projection, but I don't quite know how to do that math, especially on more complex shapes like a sphere or an arbitrary convex hull. I assume that you can somehow find a planar polygon that makes up the shape, which the ray is intersecting, and from there the inverse projection of a quad or triangle would be trivial.
Some equations, articles and/or example code would be nice.
There are a few ways you could accomplish what you're trying to do:
Project a world-coordinate point into normalized device coordinates (NDCs) by doing the model-view and projection transformation matrix multiplications yourself (or, if you're using old-style OpenGL, call gluProject), and perform the perspective division step. If you use a depth coordinate of zero, this corresponds to intersecting your ray at the imaging plane. The only other correction you'd need is to map from NDCs (which are in the range [-1,1] in x and y) into texture space by dividing the resulting coordinate by two and then shifting by .5; see the sketch after this list.
Skip the ray tracing all together, and bind your texture as a framebuffer attachment to a framebuffer object, and then render a big point (or sprite) that modifies the colors in the neighborhood of the intersection as you want. You could use the same model-view and projection matrices, and will (probably) only need to update the viewport to match the texture resolution.
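A sketch of the first option using GLM (an assumed dependency; projection, modelView and worldPoint are assumed names for your matrices and the point on the ray):

#include <glm/glm.hpp>

glm::vec4 clip = projection * modelView * glm::vec4(worldPoint, 1.0f);
glm::vec3 ndc = glm::vec3(clip) / clip.w;           // perspective division
glm::vec2 texCoord = glm::vec2(ndc) * 0.5f + 0.5f;  // [-1,1] -> [0,1]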
So I found a solution that is a little complicated, but does the trick.
For complex geometry you must determine which quad or triangle was intersected, and use this as the plane. The quad must be planar (obviously).
Draw a plane at the identity matrix with dimensions 1x1x0, and map the texture onto points identical to the model geometry.
Transform the plane, and store the inverse of each transform matrix in a stack.
Find the point at which the plane is intersected.
Transform this point using the inverse matrix stack until it returns to the identity matrix (it should have no depth).
Convert this point from 1x1 space into pixel space by multiplying the point by the number of pixels and rounding. Or start your 2D combining logic here.
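The last two steps as a sketch, assuming GLM, an inverseStack holding the inverses pushed in the transform step (most recently applied transform first), and hit as the world-space intersection point (all assumed names):

#include <cmath>
#include <vector>
#include <glm/glm.hpp>

glm::vec4 p = glm::vec4(hit, 1.0f);
for (const glm::mat4& inv : inverseStack)
    p = inv * p;               // back to the 1x1 quad at the identity matrix
// 1x1 space -> pixel space:
int px = int(std::round(p.x * texWidth));
int py = int(std::round(p.y * texHeight));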