I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x)/(2*M_PI)+.5, acos(modelPositionVarying.z/length(modelPositionVarying.xyz))/M_PI);
When doing this in the fragment shader, this works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which I would also do if the model itself had texture coordinates, I get the result shown in the image below. The vertices are shown as points; a vertex whose texture coordinate (u) is lower than 0.5 is drawn red, all others blue.
So it looks like the texture coordinates (u) of two adjacent red/blue vertices have values of (almost) 1.0 and 0.0. The varying is then smoothly interpolated and therefore yields values between 0.0 and 1.0. This of course is wrong, because the value should be either (almost) 1.0 or 0.0, but nothing in between.
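The effect can be reproduced with plain numbers. A minimal sketch (the `lerp` helper is mine, not from the shader) of what the rasterizer's linear interpolation does across the seam edge:

```python
# Linear interpolation of the u texture coordinate across a seam edge.
# One vertex sits just below the wrap point (u ~ 1.0), its neighbour just
# above it (u ~ 0.0); the interpolator has no idea the values are supposed
# to wrap, so halfway along the edge it produces u = 0.5.

def lerp(a, b, t):
    return a + (b - a) * t

u_left, u_right = 0.98, 0.02   # adjacent vertices across the seam
u_mid = lerp(u_left, u_right, 0.5)
print(u_mid)  # 0.5 -- samples the middle of the texture, not the seam
```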
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. A seam where the texture coordinate must wrap from 1.0 back to 0.0 requires the mesh to handle it explicitly: the mesh must duplicate every vertex along the seam. One copy gets a 0.0 texture coordinate and connects to the vertices coming from the right (in your example); the other gets a 1.0 texture coordinate and connects to the vertices coming from the left.
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
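A minimal CPU-side sketch of that duplication step, assuming per-vertex u values have already been computed. The heuristic (a triangle whose u values span more than 0.5 crosses the seam) and all names here are my own, not from the question:

```python
# Sketch: split seam vertices so each side of the seam gets its own copy.
# `vertices` is a list of (position, u) pairs; `triangles` indexes into it.
# Assumption: a triangle whose u values span more than 0.5 crosses the
# seam; its low-u vertices are duplicated with u + 1.0 so interpolation
# stays continuous on that side (requires REPEAT wrapping, since u > 1).

def split_seam(vertices, triangles):
    vertices = list(vertices)
    new_tris = []
    for tri in triangles:
        us = [vertices[i][1] for i in tri]
        if max(us) - min(us) > 0.5:          # triangle crosses the seam
            tri = list(tri)
            for k, i in enumerate(tri):
                pos, u = vertices[i]
                if u < 0.5:                  # duplicate the low-u vertex
                    vertices.append((pos, u + 1.0))
                    tri[k] = len(vertices) - 1
        new_tris.append(tuple(tri))
    return vertices, new_tris

verts = [((1, 0, 0), 0.95), ((0.9, 0.1, 0), 0.98), ((0.9, -0.1, 0), 0.02)]
tris = [(0, 1, 2)]
verts2, tris2 = split_seam(verts, tris)
print(verts2[tris2[0][2]][1])  # 1.02 -- duplicated vertex, wrapped u
```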
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).
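The direction-to-angles conversion mentioned above can be sketched outside the shader. Hedged Python mirroring the formula from the question (`dir_to_uv` is my name for it):

```python
import math

# Sketch: convert a direction vector (e.g. an interpolated normal) to
# equirectangular texture coordinates in [0, 1], as a fragment shader
# would do per fragment.
def dir_to_uv(d):
    x, y, z = d
    r = math.sqrt(x * x + y * y + z * z)
    u = math.atan2(y, x) / (2.0 * math.pi) + 0.5  # longitude
    v = math.acos(z / r) / math.pi                # colatitude
    return u, v

print(dir_to_uv((1.0, 0.0, 0.0)))  # (0.5, 0.5): equator, mid-longitude
```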
Related
So when drawing a rectangle on OpenGL, if you give the corners of the rectangle texture coordinates of (0,0), (1,0), (1,1) and (0, 1), you'll get the standard rectangle.
However, if you turn it into something that's not rectangular, you'll get a weird stretching effect. Just like the following:
I saw from this page below that this can be fixed, but the solution given works for trapezoids only. Also, I have to do this over many rectangles.
And so, the question is: what is the proper, and most efficient, way to get the right "4D" texture coordinates for drawing stretched quads?
Implementations are allowed to decompose quads into two triangles, and if you visualize this as two triangles you can immediately see why it interpolates texture coordinates the way it does. That texture mapping is correct... for two independent triangles.
That diagonal seam coincides with the edge of two independently interpolated triangles.
Projective texturing can help as you already know, but ultimately the real problem here is simply interpolation across two triangles instead of a single quad. You will find that while modifying the Q coordinate may help with mapping a texture onto your quadrilateral, interpolating other attributes such as colors will still have serious issues.
If you have access to fragment shaders and instanced vertex arrays (probably rules out OpenGL ES), there is a full implementation of quadrilateral vertex attribute interpolation here. (You can modify the shader to work without "instanced arrays", but it will require either 4x as much data in your vertex array or a geometry shader).
Incidentally, texture coordinates in OpenGL are always "4D". It just happens that if you use something like glTexCoord2f (s, t) that r is assigned 0.0 and q is assigned 1.0. That behavior applies to all vertex attributes; vertex attributes are all 4D whether you explicitly define all 4 of the coordinates or not.
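The effect of the q coordinate can be shown numerically. A small sketch (my own helper names), assuming linear interpolation of (s*q, q) along an edge followed by the per-fragment division, which is how homogeneous attributes are handled:

```python
# Sketch: projective interpolation of a texture coordinate along an edge.
# Instead of interpolating s directly, interpolate (s*q, q) linearly and
# divide afterwards, as the hardware does for homogeneous attributes.

def lerp(a, b, t):
    return a + (b - a) * t

def projective_s(s0, q0, s1, q1, t):
    return lerp(s0 * q0, s1 * q1, t) / lerp(q0, q1, t)

# With q = 1.0 at both ends this reduces to plain linear interpolation:
print(projective_s(0.0, 1.0, 1.0, 1.0, 0.5))  # 0.5
# With unequal q the sample is pulled toward the larger-q end:
print(projective_s(0.0, 1.0, 1.0, 2.0, 0.5))  # ~0.667
```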
I'm working on a C++ project using DirectX 11 with HLSL shaders.
I have a texture which is mapped onto some geometry.
Each vertex of the geometry has a position and a texture coordinate.
In the pixel shader, I now can easily obtain the texture coordinate for exactly this one pixel. But how can I sample the color from the neighboring pixels?
For example for the pixel at position 0.2 / 0, I get the texture coordinate 0.5 / 0, which is blue.
But how do I get the texture coordinate from let's say 0.8 / 0?
Edit:
What I'm actually implementing is a Volume Renderer using raycasting.
The volume to be rendered is a set of 2D slices which are parallel and aligned, but not necessarily equidistant.
For the volume I use DirectX's Texture3D class in order to easily get interpolation in z direction.
Now I cast rays through the volume and sample the 3D texture value at equidistant steps on that ray.
Now my problem comes into play. I cannot simply sample the Texture3D at my current ray position, as the slices are not necessarily equidistant.
So I have to somehow "lookup" the texture coordinate of that position in 3D space and then sample the texture using this texture coordinate.
I already have an idea how to implement this, which would be an additional Texture3D of the same size where the color of the texel at position xyz can be interpreted as the texture coordinate at position xyz.
This would solve my problem but I think it is maybe overkill and there might be a simpler way to accomplish the same thing.
Edit 2:
Here is another illustration of the sampling problem I am trying to fix.
The root of the problem is that my Texture3D is distorted in z direction.
From within one single pixelshader instance I want to obtain the texture coordinate for any given position xyz in the volume, not only for the current fragment being rendered.
Edit 3:
Thanks for all the good comments and suggestions.
The distances between the slices in z-order can be completely random, so they cannot be described mathematically by a function.
So what I basically have is a very simple class, e.g.
struct Vertex
{
    float4 Position; // position in space
    float4 TexCoord; // position in dataset
};
I pass those objects to the buffer of my vertex shader.
There, the values are simply passed through to the pixel shader.
My interpolation is set to D3D11_FILTER_MIN_MAG_MIP_LINEAR so I get a nice interpolation for my data and the respective texture coordinates.
The signature of my pixel shader looks like this:
float4 PShader( float4 position : SV_POSITION
, float4 texCoord : TEXCOORD
) : SV_TARGET
{
...
}
So for each fragment to be rendered on the screen, I get the position in space (position) and the corresponding position (texCoord) in the (interpolated) dataset. So far so good.
Now, from this PShader instance, I want to access not only texCoord at position, but also the texCoords of other positions.
I want to do raycasting, so for each screen-space fragment, I want to cast a ray and sample the volume dataset at discrete steps.
The black plane symbolizes the screen. The other planes are my dataset where the slices are aligned and parallel, but not equidistant.
The green line is the ray that I cast from the screen to the dataset.
The red spheres are the locations where I want to sample the dataset.
DirectX knows how to interpolate the stuff correctly, as it does so for every screen-space fragment.
I thought I could easily access this interpolation function and query the interpolated texCoord for any position xyz. But it seems DirectX has no mechanism to do this.
So the only solution really might be to use a 1D-Texture for z-lookup and interpolate between the values manually in the shader.
Then use this information to lookup the pixel value at this position.
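That lookup can be prototyped on the CPU first. A hedged Python sketch (names are mine), assuming `slice_z` holds the world-space z of each slice in ascending order; it returns the normalized w texture coordinate for an arbitrary world z, which is exactly what the 1D lookup texture (or a manual loop in the shader) would provide:

```python
# Sketch: map a world-space z to a normalized Texture3D w coordinate when
# the slices are not equidistant. slice_z is ascending; slice i lives at
# w = i / (n - 1) in texture space, and positions between two slices are
# linearly interpolated, matching the sampler's linear filtering in z.

def z_to_w(z, slice_z):
    n = len(slice_z)
    if z <= slice_z[0]:
        return 0.0
    if z >= slice_z[-1]:
        return 1.0
    for i in range(n - 1):
        z0, z1 = slice_z[i], slice_z[i + 1]
        if z0 <= z <= z1:
            frac = (z - z0) / (z1 - z0)   # position between the two slices
            return (i + frac) / (n - 1)   # normalized w coordinate
    return 1.0

slices = [0.0, 1.0, 3.0]        # non-equidistant slice positions
print(z_to_w(2.0, slices))      # 0.75: halfway between slices 1 and 2
```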
So I'm supposed to texture map a specific model I've loaded into a scene (with a Framebuffer and a Planar Pinhole Camera); however, I'm not allowed to use OpenGL, and I have no idea how to do it otherwise (we do use glDrawPixels for other functionality, but that's the only function we can use).
Is anyone here able enough to give me a run-through on how to texture map without OpenGL functionality?
I'm supposed to use these slides: https://www.cs.purdue.edu/cgvlab/courses/334/Fall_2014/Lectures/TMapping.pdf
But they make very little sense to me.
What I've gathered so far is the following:
You iterate over a model, and assign each triangle "texture coordinates" (which I'm not sure what those are), and then use "model space interpolation" (again, I don't understand what that is) to apply the texture with the right perspective.
I currently have my program doing the following:
TL;DR:
1. What is model space interpolation/how do I do it?
2. What explicitly are texture coordinates?
3. How, on a high level (in layman's terms), do I texture map a model without using OpenGL?
OK, let's start by making sure we're both on the same page about how the color interpolation works. Lines 125 through 143 set up three vectors redABC, greenABC and blueABC that are used to interpolate the colors across the triangle. They work one color component at a time, and each of the three vectors helps interpolate one color component.
By convention, s,t coordinates are in source texture space. As provided in the mesh data, they specify the position within the texture of that particular vertex of the triangle. The crucial thing to understand is that s,t coordinates need to be interpolated across the triangle just like colors.
So, what you want to do is set up two more ABC vectors: sABC and tABC, exactly duplicating the logic used to set up redABC, but instead of using the color components of each vertex, you use the s,t coordinates of each vertex. Then for each pixel, instead of computing ssiRed etc. as unsigned int values, you compute ssis and ssit as floats; they should be in the range 0.0f through 1.0f, assuming your source s,t values are well behaved.
Now that you have an interpolated s,t coordinate, multiply ssis by the texel width of the texture, and ssit by the texel height, and use those coordinates to fetch the texel. Then just put that on the screen.
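The per-pixel flow above can be sketched with barycentric weights instead of the ABC edge vectors; it is the same interpolation, just different bookkeeping, and every name here is my own rather than from the asker's renderer:

```python
# Sketch: interpolate s,t across a triangle with barycentric weights,
# then fetch the nearest texel. The ABC-vector setup in the answer is an
# equivalent (incremental) formulation of the same interpolation.

def barycentric(p, a, b, c):
    # Weights (wa, wb, wc) of point p in triangle a,b,c (2D screen space).
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    wa = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    wb = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return wa, wb, 1.0 - wa - wb

def sample(texture, st):
    h, w = len(texture), len(texture[0])
    x = min(int(st[0] * w), w - 1)   # s scaled by texel width
    y = min(int(st[1] * h), h - 1)   # t scaled by texel height
    return texture[y][x]

tri = [(0, 0), (10, 0), (0, 10)]           # screen-space triangle
st = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # per-vertex s,t coordinates
wa, wb, wc = barycentric((5, 5), *tri)
s = wa * st[0][0] + wb * st[1][0] + wc * st[2][0]
t = wa * st[0][1] + wb * st[1][1] + wc * st[2][1]
tex = [[1, 2], [3, 4]]                     # tiny 2x2 "texture"
print((s, t), sample(tex, (s, t)))         # (0.5, 0.5) 4
```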
Since you are not using OpenGL I assume you wrote your own software renderer to render that teapot?
A texture is simply an image. A texture coordinate is a 2D position in the texture. So (0,0) is bottom-left and (1,1) is top-right. For every vertex of your 3D model you should store a 2D position (u,v) in the texture. That means that at that vertex, you should use the colour the texture has at that point.
To know the UV texture coordinate of a pixel in between vertices you need to interpolate the texture coordinates of the vertices around it. Then you can use that UV to look up the colour in the texture.
I have a 3D terrain (a voxel mesh, my "arbitrary mesh"). I know how to "splat" the texture down from above the mesh, but on vertical or steep slopes it smears.
I have access to the normals and positions of each vertex. How would I generate UVs (without using a shader, so no true tri-planar colour blending) so that the texture is not smeared on steep slopes and meets up nicely with itself (no sharp seams)?
Without a shader, you are a bit stuck. Tri-planar mapping works by using three planar projections for the UVs (one for each world plane: XY, YZ, and XZ) and then blending the three layers, using the normal's components raised to some power as blend coefficients.
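A hedged sketch of the blend-weight part of tri-planar mapping described above; the exponent k is an arbitrary sharpening value and the function name is mine:

```python
# Sketch: tri-planar blend weights from a surface normal. Each component's
# absolute value, raised to a power k, weights the projection onto the
# plane perpendicular to that axis (x -> YZ, y -> XZ, z -> XY); weights
# are normalized to sum to 1 so they can blend the three texture layers.

def triplanar_weights(n, k=4.0):
    w = [abs(c) ** k for c in n]   # sharpen toward the dominant axis
    s = sum(w)
    return [c / s for c in w]

print(triplanar_weights((0.0, 0.0, 1.0)))      # [0.0, 0.0, 1.0]: flat ground
print(triplanar_weights((0.707, 0.0, 0.707)))  # [0.5, 0.0, 0.5]: even blend
```

A larger k gives sharper transitions between the projections; smaller k gives softer (but more smeared) blending.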
What options do you have to render your terrain? Are you allowed to edit the geometry? Can you do multi-pass rendering with alpha blending?
Everything is shaders nowadays; why are they inaccessible?
So far, my understanding of cube mapping has been that 3D texture coordinates need to be specified for each vertex used within a cube, as opposed to (u,v) coordinates for 2D textures.
Some Assumptions
Cube maps use normalized vertices to represent the texture coordinates of a triangle.
These normalized vertices are akin to the actual vertices specified: the normalized texture coordinates use the magnitude of their corresponding vertices.
Thus, if a vertex has a magnitude of 1, then its normalized texture coordinate, N, is 1.0f / sqrt(3.0f);
Which of these assumptions are correct and incorrect? If any are incorrect, please specify why.
Edit
While not necessary, what would be appreciated is an example or, rather, an idea of what the recommended way of going about it would be - using programmable pipeline.
Cubemaps are textures that consist of six square faces arranged in a cube topology. The only quantity that matters for cubemap texture coordinates is their direction: a texel is addressed by the direction of a vector originating at the cube's center. The length of the texture coordinate vector does not matter. Say you have two cubemap texture coordinates
(1, 1, 0.5)
and
(2, 2, 1)
they both address the same cubemap texel.
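That scale-invariance can be checked directly. A tiny sketch normalizing both example coordinates to show they are the same direction (the `normalize` helper is mine):

```python
import math

# Sketch: cubemap texture coordinates only encode a direction, so any
# positive scale of the vector addresses the same texel. Normalizing both
# example coordinates from the answer shows they are the same direction.

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

a = normalize((1.0, 1.0, 0.5))
b = normalize((2.0, 2.0, 1.0))   # exactly 2x the first vector
print(max(abs(x - y) for x, y in zip(a, b)) < 1e-12)  # True
```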