Heightmap in Shader not working - hlsl

I'm trying to implement GPU-based geometry clipmapping and am having problems applying a simple heightmap to my terrain. For the heightmap I use a simple texture with the surface format "Single". I've taken the texture from Catalinz's XNA blog.
What I'm trying to do is simply fetch the single float value from the texture at the vertex's world coordinate and apply it to the Y value of the vertex. I'm applying the heightmap in the shader, because there is no fixed grid in code that it could be applied to. What I've tried so far is the tex2Dlod function, but it seems to return either nothing or 0. The DirectX vertex shader code looks like this:
float4 worldPos = mul(float4(pos.x,0.0f,pos.y, 1.0f), worldMatrix);
float elevation = tex2Dlod(HeightmapSampler, float4(worldPos.x, worldPos.z,0,0));
worldPos.y = elevation * 128;
Where HeightmapSampler is a point/mirror sampler of the heightmap texture. Here is the output; I actually get no elevation:
And here's the heightmap I'm using:

Forgot that texture coordinates are normalized to [0, 1] rather than given in texels, so multiplying the world position by the texel size fixed it:
float elevation = tex2Dlod(HeightmapSampler, float4(worldPos.x*texel, worldPos.z*texel,0,0));
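For reference, a minimal sketch of how that texel factor could be derived; heightmapSize is an assumed shader constant set from the application and is not part of the original code:
// Assumption: the heightmap is square and heightmapSize is its width in texels (e.g. 512),
// set from the application; it is not part of the original shader.
float heightmapSize;
float texel = 1.0f / heightmapSize; // size of one texel in texture space
float4 worldPos = mul(float4(pos.x, 0.0f, pos.y, 1.0f), worldMatrix);
float elevation = tex2Dlod(HeightmapSampler, float4(worldPos.x * texel, worldPos.z * texel, 0, 0));
worldPos.y = elevation * 128;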

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u = 4/pi * atan(u)
v = 4/pi * atan(v)
Let's first recognize that the term is misleading: even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact, no 2D projection of any part of a sphere can be truly equiangular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1);
gives the following result:
Compare it with the uncorrected d:
As you can see, the EAC projection shrinks the pixels in the middle by a little bit and expands them near the corners, so that they cover more equal areas.
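For completeness, that correction dropped into the question's fragment shader might read as follows (a sketch; d is just a local name for the sampling direction):
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    vec3 d = cube_edge;
    d /= max(abs(d.x), max(abs(d.y), abs(d.z)));  // project onto the unit cube
    d = atan(d) / atan(1.0);                      // EAC correction per axis
    color = texture(skybox_sampler, d).rgb;
}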
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead refocus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
Your problem is that the geometry on which the environment is placed is too small. You are not looking at the environment but at the inside of a small cube in which you are sitting. The environment map should behave as if you are always in the center of the map and the environment is infinitely far away.
I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader. If you set z to w, you guarantee that the final z value of the position will be 1.0, which is the z value of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.)
It is quite sufficient to draw a cube and wrap the environment around it by looking up the map with the interpolated vertex positions of the cube. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinate of the cube to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
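A self-contained version of that vertex shader might look like this (a sketch; the attribute and uniform names are assumptions consistent with the snippet above):
#version 400 core
in vec4 inVertex;                   // vertex of a unit cube centered on the origin
out vec3 cube_edge;                 // direction vector for the cubemap lookup
uniform mat4 projection;
uniform mat4 view;                  // with its translation stripped, e.g. mat4(mat3(view)) on the CPU
void main(void) {
    cube_edge = inVertex.xyz;
    vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
    gl_Position = clipPos.xyww;     // z = w, so the depth resolves to the far plane
}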

How to loop over every pixel in a 3D texture/buffer without using compute shaders

I understand how you would do this with a 2D buffer. Just draw two triangles that make a quad that fully encompasses the 2D buffer space. That way, when the fragment shader runs, it runs for all the pixels in the buffer.
Question: How would this work for a 3D buffer?
You could just write a lot of triangles, one quad for each cross-section of the 3D buffer. However, if you had a texture that was 1x1x256, that would mean drawing 2 triangles for each of the 256 slices (256*2 triangles in total) to iterate over all of the pixels. I know this is an extreme case and there are ways of optimizing this solution. However, I feel like there is a more elegant solution that I am missing.
What I am trying to do: I am trying to make a 3D fluid solver that iterates through each of the pixels of the 3D texture and computes its velocity, density, etc. I am trying to do this via the fragment shader because I am using OpenGL 3.0, which does not support compute shaders.
#version 330 core
out vec4 FragColor;
uniform sampler3D volume;
void main()
{
// computing the fluid density, velocity, and center of mass
// output the values to the 3D buffer in different color channels:
FragColor = vec4(density, velocity.xy, centerOfMass);
}
At some point in the fragment shader, you're going to write some statement of the form:
vec4 value = texture(my_texture, TexCoords);
Where TexCoords is the location in my_texture that maps to some particular value in the source texture. But... that mapping is entirely up to you. Nobody's making you use gl_FragCoord.xy / textureSize(my_texture). You could just as easily use vec3(gl_FragCoord.x, Y_value, gl_FragCoord.y) / textureSize(my_texture), which puts the Y component of the fragment location in the Z dimension of the texture. Y_value in this case is a value passed from the outside that tells which vertical slice of the 3D texture to use.
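As a sketch of that read-side mapping (Y_value here is assumed to be a uniform selecting the vertical slice, consistent with the answer's description):
out vec4 FragColor;
uniform sampler3D my_texture;
uniform float Y_value;   // which vertical slice of the 3D texture this pass reads

void main()
{
    // put the fragment's Y into the texture's Z dimension, as described above
    vec3 coords = vec3(gl_FragCoord.x, Y_value, gl_FragCoord.y) / vec3(textureSize(my_texture, 0));
    vec4 value = texture(my_texture, coords);
    FragColor = value;   // in practice this would be the computed density/velocity
}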
Of course, whatever mapping you use to fetch the data must also be used when you write the data. If you're writing via fragment shader outputs, that poses a problem. A 3D texture can only be attached to an FBO as either a single 2D slice or as a layered set of 2D slices, with these slices always being along the Z dimension of the image. So even if you try to read in slices along the Y dimension, it has to be written in Z slices. So you'd be moving around the location of the data, which makes this non-viable.
If you're using image load/store, then you have no problem. You can just write to the appropriate texel (indeed, you can read from it as an image using integer coordinates, so there's no need to divide by the texture's size).
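For instance, with image load/store the write side might look like this (a sketch; it requires OpenGL 4.2 or ARB_shader_image_load_store, so it goes beyond plain GL 3.0, and slice_z is an assumed uniform naming the Z slice a full-screen pass covers):
#version 420 core
layout(binding = 0, rgba32f) uniform writeonly image3D volume_img;  // the 3D buffer being written
uniform int slice_z;   // Z slice covered by this full-screen pass
void main()
{
    ivec3 texel = ivec3(ivec2(gl_FragCoord.xy), slice_z);
    vec4 result = vec4(0.0);   // density, velocity, etc. would be computed here
    imageStore(volume_img, texel, result);
}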

How to obtain texture coordinate from arbitrary position in HLSL

I'm working on a C++ project using DirectX 11 with HLSL shaders.
I have a texture which is mapped onto some geometry.
Each vertex of the geometry has a position and a texture coordinate.
In the pixel shader, I now can easily obtain the texture coordinate for exactly this one pixel. But how can I sample the color from the neighboring pixels?
For example for the pixel at position 0.2 / 0, I get the texture coordinate 0.5 / 0, which is blue.
But how do I get the texture coordinate from let's say 0.8 / 0?
Edit:
What I'm actually implementing is a Volume Renderer using raycasting.
The volume to-be-rendered is a set of 2D slices which are parallel and aligned, but not necessarily equidistant.
For the volume I use DirectX's Texture3D class in order to easily get interpolation in z direction.
Now I cast rays through the volume and sample the 3D texture value at equidistant steps on that ray.
Now my problem comes into play. I cannot simply sample the Texture3D at my current ray position, as the slices are not necessarily equidistant.
So I have to somehow "lookup" the texture coordinate of that position in 3D space and then sample the texture using this texture coordinate.
I already have an idea of how to implement this: an additional Texture3D of the same size, where the color of the texel at position xyz can be interpreted as the texture coordinate at position xyz.
This would solve my problem but I think it is maybe overkill and there might be a simpler way to accomplish the same thing.
Edit 2:
Here is another illustration of the sampling problem I am trying to fix.
The root of the problem is that my Texture3D is distorted in z direction.
From within one single pixelshader instance I want to obtain the texture coordinate for any given position xyz in the volume, not only for the current fragment being rendered.
Edit 3:
Thanks for all the good comments and suggestions.
The distances between the slices in z-order can be completely random, so they cannot be described mathematically by a function.
So what I basically have is a very simple class, e.g.
struct Vertex
{
float4 Position; // position in space
float4 TexCoord; // position in dataset
};
I pass those objects to the buffer of my vertex shader.
There, the values are simply passed through to the pixel shader.
My interpolation is set to D3D11_FILTER_MIN_MAG_MIP_LINEAR so I get a nice interpolation for my data and the respective texture coordinates.
The signature of my pixel shader looks like this:
float4 PShader( float4 position : SV_POSITION
, float4 texCoord : TEXCOORD
) : SV_TARGET
{
...
}
So for each fragment to-be-rendered on the screen, I get the position in space (position) and the corresponding position (texCoord) in the (interpolated) dataset. So far so good.
Now, from this PShader instance, I want to access not only texCoord at position, but also the texCoords of other positions.
I want to do raycasting, so for each screen-space fragment, I want to cast a ray and sample the volume dataset at discrete steps.
The black plane symbolizes the screen. The other planes are my dataset where the slices are aligned and parallel, but not equidistant.
The green line is the ray that I cast from the screen to the dataset.
The red spheres are the locations where I want to sample the dataset.
DirectX knows how to interpolate the stuff correctly, as it does so for every screen-space fragment.
I thought I could easily access this interpolation function and query the interpolated texCoord for position xyz, but it seems DirectX has no mechanism to do this.
So the only solution really might be to use a 1D texture for the z lookup, interpolate between its values manually in the shader, and then use this information to look up the pixel value at that position.
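A minimal sketch of that lookup idea, assuming a 1D texture whose texel at normalized world-space z stores the corresponding dataset z coordinate (all resource and function names here are assumptions, not from the original code):
Texture1D<float>  ZLookup       : register(t1);  // normalized world z -> dataset z coordinate
Texture3D<float4> Volume        : register(t0);
SamplerState      LinearSampler : register(s0);

float4 SampleVolume(float3 worldPosNormalized)
{
    // Correct the distorted z axis through the lookup texture,
    // then sample the volume with the remapped coordinate.
    float datasetZ = ZLookup.SampleLevel(LinearSampler, worldPosNormalized.z, 0);
    return Volume.SampleLevel(LinearSampler, float3(worldPosNormalized.xy, datasetZ), 0);
}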

GLSL passing texture coordinates from vertex shader

What I'm trying to accomplish: Drawing the depth map of my scene on top of my scene (so that objects closer are darker, and further away are lighter)
Problem: I don't seem to understand how to pass the right texture coordinates from my vertex shader to my fragment shader.
So I created my FBO, and the texture that the depth map gets drawn to... not that I'm entirely sure what I was doing, but whatever, it works. I tested drawing the texture using the fixed functionality pipeline, and it looks just like it's supposed to (the depth map that is).
But trying to use it in my shaders just isn't working...
Here's the part from my render method that binds the texture:
glActiveTexture(GL_TEXTURE7);
glBindTexture(GL_TEXTURE_2D, depthTextureId);
glUniform1i(depthMapUniform, 7);
glUseProgram(shaderProgram);
look(); //updates my viewing matrix
box.render(); //renders box VBO
So... I think that's sort of right? Maybe? No clue why texture 7, that was just something that was in a tutorial I was checking...
And here's the important stuff from my vertex shader:
out vec4 ShadowCoord;
void main() {
gl_Position = PMatrix * (VMatrix * MMatrix) * gl_Vertex; //projection, view and model matrices
ShadowCoord = gl_MultiTexCoord0; //something I kept seeing in examples, was hoping it would work.
}
Aaand, fragment shader:
in vec4 ShadowCoord;
in vec3 Color; //passed from vertex shader, didn't include the code for it though. Just the vertex color.
out vec4 FragColor;
uniform sampler2D ShadowMap; //declared here so the snippet compiles; bound to unit 7 via depthMapUniform
void main() {
FragColor = vec4(texture2D(ShadowMap, ShadowCoord.st).x * vec3(Color), 1.0);
}
Now the problem is that the coordinate that the fragment shader receives for the texture is always (0,0), or the bottom-left corner. I tried changing it to ShadowCoord = gl_MultiTexCoord7, because I figured maybe it had something to do with me putting the texture in slot number 7... but alas, the problem persisted. When the color of (0, 0) changes, so does the color of the entire scene, rather than being a change in color for only the appropriate pixel/fragment.
And that's what I'm hoping to get some insight on... how to pass the correct coordinates (I'd like for the corners of the texture to have the same coordinates as the corners of my screen). And yes, this is a beginner's question... but I have been looking in the Orange Book, and the problem with it is that it's great on the GLSL side of things, but the OpenGL side of things is severely lacking in the examples that I could really use...
The input variable gl_MultiTexCoord0 (or 7) is the builtin per-vertex texture coordinate for the 0th (or 7th) texture coordinate, set by gl(Multi)TexCoord (when using immediate mode) or by glTexCoordPointer (when using arrays/VBOs).
But as your depth buffer is already in screen space, what you want is not a usual texture laid onto the object, but just the value in the texture for a specific pixel/fragment. So the vertex shader isn't involved in any way. Instead you just use the current fragment's screen space position as texture coordinate, that can be read in the fragment shader using gl_FragCoord. But keep in mind that this coordinate is in [0,w]x[0,h] and textures are accessed by normalized texture coordinates in [0,1]. So you have to divide the fragment's coordinate by the screen size:
uniform vec2 screenSize;
...
... texture2D(ShadowMap, gl_FragCoord.st/screenSize) ...
But you actually don't need two passes for this effect anyway, as you can just use the fragment's depth directly, without writing it into a texture. Instead of
texture2D(ShadowMap, gl_FragCoord.st/screenSize).x
you can just use
gl_FragCoord.z
which is nothing else than the fragment's depth value, that would have been written into the texture in the first pass. This way you completely spare the first depth-writing pass and the texture access in the second pass.
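Put together, a single-pass version of the fragment shader might be as simple as this (a sketch; it uses the fragment's own depth, so no ShadowMap texture or FBO pass is needed):
in vec3 Color;      // vertex color passed through from the vertex shader
out vec4 FragColor;
void main() {
    // gl_FragCoord.z is the fragment's depth in [0,1]: closer fragments come out darker
    FragColor = vec4(Color * gl_FragCoord.z, 1.0);
}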

Sampling data from a shadow map texture using automatic comparison via the texture2D function

I've got a sampler2DShadow in my shader and I want to use it to implement shadow mapping. My shadow texture has the proper parameters set, with GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE and GL_TEXTURE_COMPARE_FUNC set to GL_LEQUAL (meaning the comparison should return 1 if the r value of my coordinates is less than or equal to the depth value fetched from the texture). This texture is bound to the GL_DEPTH_ATTACHMENT of an FBO rendered in light-space coordinates.
What coordinates should I give the texture2D function in my final fragment shader? I currently have a
smooth in vec4 light_vert_pos
set in my fragment shader that is defined in the vertex shader by the function
light_vert_pos = light_projection_camera_matrix*modelview*in_Vertex;
I would assume I could multiply my lighting by the value
texture2D(shadowmap,(light_vert_pos.xyz)/light_vert_pos.w)
but this does not seem to work. Since light_vert_pos is only in post projective coordinates (the matrix used to create it is the matrix I use to create the depth buffer in the FBO), should I manually clamp the 3 x/y/z variables to [0,1]?
You don't say how you generated your depth values. So I'll assume you generated your depth values by rendering triangles using normal projection. That is, you transform the geometry to camera space, transform it to projection space, and let the rasterization pipeline handle things from there as normal.
In order to make shadow mapping work, your texture coordinates must match what the rasterizer did.
The output of a vertex shader is clip-space. From there, you get the perspective divide, followed by the viewport transform. The latter uses the values from glViewport and glDepthRange to compute the window-space XYZ. The window-space Z is the depth value written to the depth buffer.
Note that this is all during the depth pass: the generation of the depth values for the shadow map.
However, you can take some shortcuts. If your glViewport range was set to the same size as the texture (which is generally how it's done), then you can ignore the viewport transform. You will still need the glDepthRange you used in the depth pass.
In your fragment shader, you can perform the perspective divide, which puts the coordinates in normalized device coordinate (NDC) space. That space is [-1, 1] in all directions. Your texture coordinates are [0, 1], so you need to scale the X and Y by 0.5 and add 0.5 to them:
vec3 ndc_space_values = light_vert_pos.xyz / light_vert_pos.w;
vec3 texCoords;
texCoords.xy = ndc_space_values.xy * 0.5 + 0.5;
To compute the Z value, you need to know the near and far values you use for glDepthRange.
texCoords.z = ((f-n) * 0.5) * ndc_space_values.z + ((n+f) * 0.5);
Where n and f are the glDepthRange near and far values. You can of course precompute some of these and pass them as uniforms. Or, if you use the default range of near=0 and far=1, you get
texCoords.z = ndc_space_values.z * 0.5 + 0.5;
Which looks familiar somehow.
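Putting those pieces together, the lookup might read like this (a sketch assuming the default glDepthRange of [0, 1] and GLSL 1.30+, hence texture instead of texture2D):
uniform sampler2DShadow shadowmap;
smooth in vec4 light_vert_pos;

float shadowFactor()
{
    vec3 ndc = light_vert_pos.xyz / light_vert_pos.w;  // perspective divide -> NDC in [-1, 1]
    vec3 texCoords = ndc * 0.5 + 0.5;                  // NDC -> [0, 1] texture/depth space
    // The LEQUAL comparison against the stored depth happens inside the shadow sampler.
    return texture(shadowmap, texCoords);
}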
Aside:
Since you defined your inputs with in rather than varying, you have to be using GLSL 1.30 or above. So why are you using texture2D (which is an old function) rather than texture?