In DirectX I've been working with textured quads using the following struct:
struct TLVERTEX
{
    float x;          // position
    float y;
    float z;
    D3DCOLOR colour;  // vertex colour
    float u;          // texture coordinates
    float v;
};
However, I want to make more complex 2D meshes and texture those, but I have no idea how to align the texture on them. In the textured quad, the u and v properties of the struct determine the texture's orientation.
Ideally, what I want to be able to do (or to know which new struct properties I would need) is to have a 2D mesh with the texture stretched/manipulated to fit the mesh entirely, no matter how distorted it may end up looking.
Another thing I'd like to do is have a 2D mesh with a texture applied to it without any stretching: I'd just want to align it, so that if the texture didn't fit the shape of the mesh, bits of it would simply be missing.
I've tried Googling but can only find things relating to 3D graphics. Whilst I realise I am technically working in 3D space, what I am ultimately trying to do is create 2D graphics. I would appreciate answers with anything from suggestions on where to look or how to get started through to complete examples if possible.
You should really read about how texture coordinates work.
Let's consider the following mesh. We want to apply an undistorted texture (the dashed rectangle):
Then specifying the texture coordinates for each vertex is pretty simple:
u = (x - minX) / (maxX - minX)
v = (y - minY) / (maxY - minY)
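For example, assigning these coordinates to a whole mesh could look roughly like this (a minimal C++ sketch; the Vertex struct here is a hypothetical reduction of the TLVERTEX from the question):

#include <algorithm>
#include <vector>

// Hypothetical vertex layout: 2D position plus texture coordinates.
struct Vertex { float x, y, u, v; };

// Assigns undistorted texture coordinates from the mesh's 2D bounding box.
void assignBoundingBoxUVs(std::vector<Vertex>& vertices)
{
    float minX = vertices[0].x, maxX = vertices[0].x;
    float minY = vertices[0].y, maxY = vertices[0].y;
    for (const Vertex& v : vertices)
    {
        minX = std::min(minX, v.x); maxX = std::max(maxX, v.x);
        minY = std::min(minY, v.y); maxY = std::max(maxY, v.y);
    }
    for (Vertex& v : vertices)
    {
        v.u = (v.x - minX) / (maxX - minX);
        v.v = (v.y - minY) / (maxY - minY);
    }
}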
If you want to rotate the texture, you have to project the vertices onto the correspondingly rotated axes instead.
If you want to distort the texture, you have to specify texture coordinates on the texture's edges. A simple algorithm that can be utilized is the following one:
Choose an arbitrary point inside the polygon -> o
For each vertex v
    Shoot a ray from o through v
    Find the intersection of this ray with the bounding rectangle (minX / maxX / minY / maxY)
    Calculate the texture coordinates at the intersection point as above and assign them to v
Next
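A sketch of that ray-casting step in C++ (the Vec2 type is mine; o is assumed to lie inside the bounding rectangle):

#include <algorithm>
#include <limits>

struct Vec2 { float x, y; };

// For one polygon vertex p, shoot a ray from the interior point o through p,
// intersect it with the bounding rectangle and return the texture coordinate
// at that intersection.
Vec2 edgeUV(Vec2 o, Vec2 p, float minX, float maxX, float minY, float maxY)
{
    const float inf = std::numeric_limits<float>::infinity();
    Vec2 dir = { p.x - o.x, p.y - o.y };

    // Parametric distance to the vertical and horizontal rectangle edges.
    float tx = dir.x > 0 ? (maxX - o.x) / dir.x : (dir.x < 0 ? (minX - o.x) / dir.x : inf);
    float ty = dir.y > 0 ? (maxY - o.y) / dir.y : (dir.y < 0 ? (minY - o.y) / dir.y : inf);
    float t  = std::min(tx, ty);          // first edge the ray leaves through

    Vec2 hit = { o.x + t * dir.x, o.y + t * dir.y };
    return { (hit.x - minX) / (maxX - minX),    // same mapping as above
             (hit.y - minY) / (maxY - minY) };
}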
However, this algorithm does not guarantee that every texel is mapped to the mesh. E.g. the top right corner of the sample above is not mapped to anything by the above algorithm. Furthermore, it only guarantees a consistent mapping for convex polygons.
Here is an algorithm that also handles concave polygons. It should produce consistent coordinates, although I do not know what the result will look like. This algorithm can also skip corners; it is possible to include checks that assign a corner's coordinates to specific vertices (e.g. when a side changes):
Calculate the polygon's perimeter -> p
//The texture coordinate space has a perimeter of 4
currentPos := 0
for each vertex
    t := 4 * currentPos / p        //position along the texture's perimeter, in [0, 4)
    if(t < 1)
        uv = (t, 0)                //top edge
    else if(t < 2)
        uv = (1, t - 1)            //right edge
    else if(t < 3)
        uv = (1 - (t - 2), 1)      //bottom edge
    else
        uv = (0, 1 - (t - 3))      //left edge
    currentPos += distance to next vertex
next
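The same idea as a C++ sketch (assuming the vertices are stored in order along the polygon boundary; the Vec2 type is mine):

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

static float dist(Vec2 a, Vec2 b) { return std::hypot(b.x - a.x, b.y - a.y); }

// Walks the polygon boundary and maps the travelled distance onto the
// perimeter of the unit texture square.
std::vector<Vec2> perimeterUVs(const std::vector<Vec2>& poly)
{
    float p = 0.0f;                                   // polygon perimeter
    for (std::size_t i = 0; i < poly.size(); ++i)
        p += dist(poly[i], poly[(i + 1) % poly.size()]);

    std::vector<Vec2> uvs(poly.size());
    float currentPos = 0.0f;
    for (std::size_t i = 0; i < poly.size(); ++i)
    {
        float t = 4.0f * currentPos / p;              // position on the texture's perimeter, in [0, 4)
        if      (t < 1.0f) uvs[i] = { t, 0.0f };                  // top edge
        else if (t < 2.0f) uvs[i] = { 1.0f, t - 1.0f };           // right edge
        else if (t < 3.0f) uvs[i] = { 1.0f - (t - 2.0f), 1.0f };  // bottom edge
        else               uvs[i] = { 0.0f, 1.0f - (t - 3.0f) };  // left edge
        currentPos += dist(poly[i], poly[(i + 1) % poly.size()]);
    }
    return uvs;
}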
I am rendering a tile map to an FBO, then moving the resulting buffer to a texture and rendering it on a full-screen quad (FSQ). Then, from the mouse click events, I get the screen coordinates and move them to clip space [-1,1]:
glm::vec2 posMouseClipSpace((2.0f * myCursorPos.x) / myDeviceWidth - 1.0f,
                            1.0f - (2.0f * myCursorPos.y) / myDeviceHeight);
My program has logic that, based on those coordinates, selects a specific tile on the texture.
Now, moving to 3D, I am texturing a semi-cylinder with the FBO I used in the previous step:
In this case I am using a ray-triangle intersection to find the point that hits the cylinder of radius r and height h. The idea is to move this intersection point to [-1,1] space so I can keep the tile-selection logic in my program.
I use the Möller–Trumbore algorithm to find the points on the cylinder hit by a ray. Let's say the intersected point is (x,y) (I was not sure whether the point is in triangle, object or world space; apparently it's world space).
I want to translate that point to the space x:[-1,1], y:[-1,1].
I know the height of my cylinder, which is a quarter of the full circle's circumference:
cylinderHeight = myRadius * (PI/2);
so the point on the Y axis can be mapped to [-1,1] space:
vec2.y = (2.f * (intersectedPoint.y - myCylinder->position().y)) /
         (myCylinder->height()) - 1.f;
and that works perfectly.
However, how do I compute the horizontal axis, which depends on the two variables x and z?
Currently, my cylinder's radius is 1, so by coincidence a semi-cylinder placed at the origin spans (-1, 1) on the X axis, which made me think it was [-1,1] space, but it turns out it is not.
My next approach was to use the arc length of a semicircle, s = r * PI, and plug that value into the equation:
vec2.x = (2.f * (intersectedPoint.x - myCylinder->position().x)) /
         (myCylinder->arcLength()) - 1.f;
but clearly it goes off by 1 unit in the negative direction.
I appreciate the help.
From your description, it seems that you want to convert the world space intersection coordinate to its corresponding normalized texture coordinate.
For this you need the Z coordinate as well, as there must be two "horizontal" coordinates. However you don't need the arc length.
Using the relative X and Z coordinates of intersectedPoint, calculate the polar angle using atan2, and divide by PI (the angular range of the semi-circle arc):
vec2.x = atan2(intersectedPoint.z - myCylinder->position().z,
               myCylinder->position().x - intersectedPoint.x) / PI;
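Combined with the vertical mapping from the question, the whole conversion might look like this (a sketch using GLM; the parameter names are mine and stand in for myCylinder's accessors):

#include <cmath>
#include <glm/glm.hpp>

// Maps a world-space hit point on the semi-cylinder to [-1,1] x [-1,1].
// cylinderPos and cylinderHeight correspond to myCylinder->position() and
// myCylinder->height() from the question.
glm::vec2 cylinderHitToClipSpace(const glm::vec3& intersectedPoint,
                                 const glm::vec3& cylinderPos,
                                 float cylinderHeight)
{
    const float PI = 3.14159265358979f;
    glm::vec2 result;
    // Vertical axis: linear along the cylinder's height, as in the question.
    result.y = 2.0f * (intersectedPoint.y - cylinderPos.y) / cylinderHeight - 1.0f;
    // Horizontal axis: polar angle around the cylinder's axis divided by PI,
    // the angular range of the semicircle. No arc length needed.
    result.x = std::atan2(intersectedPoint.z - cylinderPos.z,
                          cylinderPos.x - intersectedPoint.x) / PI;
    return result;
}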
Setting the scene
I'm working on a feature in SceneKit where I have a camera at the center of a sphere. The sphere has a texture wrapped around it. Let's say it is a 360-degree image captured inside a room.
So far
I have identified the positions on the sphere that correspond to the corners of the floor. I can extract these and create a new flat 2D plane that matches the dimensions of the floor from the camera's perspective. E.g. if the room had a long rectangular floor, I'd create a trapezoid-shaped plane.
Problem
But I would like for the new 2d plane to have the texture of the floor, not just the shape. How do I do this given that what I want to extract is not the original texture image, but the result of its projection onto the sphere?
FYI, I'm pretty new to SceneKit and 3D graphics, and even newer to OpenGL.
I assume that your image is structured in a way that lets you directly pick a pixel given an arbitrary direction. E.g. if the azimuth of the direction is mapped to the image's x-coordinate and the elevation of the direction to the image's y-coordinate, you would convert the direction to these two parameters and pick the color at those coordinates. If that is not the case, you have to find the intersection of the corresponding ray (starting at the camera) with the sphere and find the texture coordinate at that intersection. You can then pick the color using this texture coordinate.
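For instance, if the image is a standard equirectangular panorama (an assumption on my part), the direction-to-pixel conversion used by the sample() call below could look roughly like this in C++:

#include <cmath>

struct UV { float u, v; };

// Converts a normalized direction to equirectangular texture coordinates.
// A sketch; sample() from the pseudocode below would pick the pixel at this UV.
UV directionToEquirectUV(float dx, float dy, float dz)
{
    const float PI = 3.14159265358979f;
    UV uv;
    uv.u = 0.5f + std::atan2(dx, dz) / (2.0f * PI); // azimuth   -> [0, 1]
    uv.v = 0.5f - std::asin(dy) / PI;               // elevation -> [0, 1]
    return uv;
}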
Now, you have basically two options. The first option is generating a new texture for the plane. The second option is sampling the spherical image from a shader.
Option 1 - Generate a new texture
You know the extent of your plane, so you can generate a new texture whose dimensions are proportional to the plane's extents. You can use an arbitrary resolution. All you then need to do is fill the pixels of this texture. For this, you just generate the ray for a given pixel and find the corresponding color in the spherical image like so:
input: d1, d2, d3, d4 (the four direction vectors of the plane corners)
// d3 +------+ d4
// d1 +------+ d2
for x from 0 to texture width
    for y from 0 to texture height
        //Find the direction vector for this pixel through bilinear interpolation
        a = x / (width - 1)   //horizontal interpolation parameter
        b = y / (height - 1)  //vertical interpolation parameter
        d = (1 - a) * ((1 - b) * d1 + b * d3) + a * ((1 - b) * d2 + b * d4)
        normalize d
        //Sample the spherical image at d
        color = sample(d)
        //Write the color to the new planar texture
        texture(x, y) = color
    next
next
Then, you have a new texture that you can apply to the plane. Barycentric interpolation might be more appropriate if you express the plane as two triangles. But as long as the plane is rectangular, the results will be the same.
Note that the sample() method depends on your image structure and needs to be implemented appropriately.
Option 2 - Sample in a shader
In option 2, you do the same thing as in option 1, but in a fragment shader. You pass the plane's vertices with their respective directions (this might just be the vertex position) and let the GPU interpolate them. This directly gives you the direction d, which you can use as above. Here is some pseudo shader code:
in vec3 direction;
out vec4 color;

void main()
{
    color = sample(normalize(direction));
}
If your image is a cube map, you can even let the GPU do the sampling.
I checked the result of the filter-width GLSL function fwidth by coloring it in red on a plane around the camera.
The result is a bizarre pattern. I expected a circular gradient on the plane, increasing with distance from the camera, since pixels that are further away uniformly span larger UV differences between neighbouring pixels.
Why isn't fwidth(UV) a simple gradient as a function of distance from the camera? I don't understand how it would work properly if it isn't, because I want to anti-alias pixels as a function of amplitude of the UV coordinates between them.
float width = fwidth(i.uv)*.2;
return float4(width,0,0,1)*(2*i.color);
UVs that are close = black, and far = red.
Result:
The above pattern from fwidth is axis-aligned and has one axis of symmetry. It couldn't anti-alias a 2-axis checkerboard, an n-axis texture of Perlin noise, or a radial checkerboard:
float2 xy0 = float2(i.uv.x, i.uv.z) + float2(-0.5, -0.5);
float c0 = length(xy0);                      //sqrt of x*x + y*y, polar coordinate radius
float r0 = atan2(i.uv.x - .5, i.uv.z - .5);  //polar coordinate angle
float ww = round(sin(c0 * freq) * sin(r0 * 50) * .5 + .5);
Axis independent aliasing pattern:
The mipmapping and filtering parameters are determined by the partial derivatives of the texture coordinates in screen space, not by the distance (actually, as soon as the fragment stage kicks in, there's no such thing as distance anymore).
I suggest you replace the fwidth visualization with a procedurally generated checkerboard (i.e. (mod(uv.s * k, 1) > 0.5) * (mod(uv.t * k, 1) < 0.5), where k is a scaling parameter); you'll see that the "density" of the checkerboard (and the aliasing artifacts) is highest where you've got the most red in your picture.
When you create a brush in a 3D map editor such as Valve's Hammer Editor, textures of the object are by default repeated and aligned to the world coordinates.
How can I implement this functionality using OpenGL?
Can glTexGen be used to achieve this?
Or do I have to use the texture matrix somehow?
If I create a 3x3 box, then it's easy:
Set GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T to GL_REPEAT
Set the texture coords to 3,3 at the edges.
But if the object is not an axis-aligned convex hull, it gets, uh, a bit complicated.
Basically, I want to recreate the functionality of the Face Edit Sheet from Valve Hammer:
Technically you can use texture coordinate generation for this, but I recommend using a vertex shader that generates the texture coordinates from the transformed vertex coordinates. Could you be a bit more specific (I don't know Hammer very well)?
After seeing the video I understand your confusion. I think you should know that Hammer/Source probably doesn't have the drawing API generate the texture coordinates, but produces them internally.
So what you see there are textures projected onto either the X, Y or Z plane, depending on which major direction the face points in. The local vertex coordinates are then used as texture coordinates.
You can implement this in the code loading a model to a Vertex Buffer Object (more efficient, since the computation is done only once), or in a GLSL vertex shader. I'll give you the pseudocode:
cross(v1, v2):
    return { x = v1.y * v2.z - v1.z * v2.y,
             y = v2.x * v1.z - v2.z * v1.x, // <- "swapped" order!
             z = v1.x * v2.y - v1.y * v2.x }

normal(face):
    return cross(face.position[1] - face.position[0], face.position[2] - face.position[0])

foreach face in geometry:
    n = normal(face) // you'd normally precompute the normals and store them
    if abs(n.x) > max(abs(n.y), abs(n.z)): // X major axis, project to YZ plane
        foreach (i, pos) in enumerate(face.position):
            face.texcoord[i] = { s = pos.y, t = pos.z }
    if abs(n.y) > max(abs(n.x), abs(n.z)): // Y major axis, project to XZ plane
        foreach (i, pos) in enumerate(face.position):
            face.texcoord[i] = { s = pos.x, t = pos.z }
    if abs(n.z) > max(abs(n.y), abs(n.x)): // Z major axis, project to XY plane
        foreach (i, pos) in enumerate(face.position):
            face.texcoord[i] = { s = pos.x, t = pos.y }
To make this work with glTexGen texture coordinate generation, you'd have to split your models into parts of each major axis. What glTexGen does is just the mapping step face.texcoord[i] = { s = pos.<>, t = pos.<> }. In a vertex shader you can do the branching directly.
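For illustration, the fixed-function setup for one such group (say the Z-major faces, with s = x and t = y) could look roughly like this; the X- and Y-major groups would get their own plane equations:

#include <GL/gl.h>

// Object-linear texture coordinate generation for Z-major faces:
// s = dot(sPlane, vertex) = x, t = dot(tPlane, vertex) = y.
// A sketch of the setup; call it before drawing that face group.
void setupZMajorTexGen()
{
    const GLfloat sPlane[] = { 1.0f, 0.0f, 0.0f, 0.0f }; // s = x
    const GLfloat tPlane[] = { 0.0f, 1.0f, 0.0f, 0.0f }; // t = y

    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
    glTexGenfv(GL_S, GL_OBJECT_PLANE, sPlane);
    glTexGenfv(GL_T, GL_OBJECT_PLANE, tPlane);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
}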
I need to make a cube with smooth corners and smooth edges in C++ with OpenGL.
For all I know, I have three options: Bézier curves (maybe, is it even possible?), a cube with cylinders for edges and spheres for corners, or loading a .3ds model of a cube.
Any ideas?
pseudocode:
mesh rounded_cube(int size, int edge_radius)
{
    mesh result = sphere(edge_radius)
    vertex octants[] = result.vertices()   // copy of the sphere's vertices
    // turn each copy into the offset towards the nearest cube corner/edge/face
    for each v in octants
    {
        if (v.x != 0.0)
            v.x = size * ( v.x / abs(v.x) );
        if (v.y != 0.0)
            v.y = size * ( v.y / abs(v.y) );
        if (v.z != 0.0)
            v.z = size * ( v.z / abs(v.z) );
    }
    // push the sphere's vertices outwards by those offsets
    for i in result.vertices().size()
    {
        result.vertex[i] += octants[i]
    }
    return result;
}
You can simulate a cube with smooth lighting by pointing the normals directly out from the center (simulating an 8-cornered sphere). It totally depends on what exactly you are trying to do; using the above method may be perfectly good enough.
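A minimal sketch of that normal assignment (the Vertex layout here is an assumption):

#include <vector>
#include <glm/glm.hpp>

struct Vertex { glm::vec3 position; glm::vec3 normal; };

// Points every normal away from the cube's center, so lighting treats the
// cube like an 8-cornered sphere.
void sphereLikeNormals(std::vector<Vertex>& vertices, const glm::vec3& center)
{
    for (Vertex& v : vertices)
        v.normal = glm::normalize(v.position - center);
}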
If you want to define a cube with curved corners (up close), then you are going to have to subdivide the cube. In fact, if you subdivide strongly around the corners but ignore the flat faces, you will get a good effect.
All it comes down to is thinking about how you subdivide at the edges. Think about how you could smooth it out and you'll surely come up with a fine solution :)