Sampling GL_DEPTH_COMPONENTs of type GL_UNSIGNED_SHORT in a GLSL shader - C++

I have access to a depth camera's output. I want to visualise this in OpenGL using a compute shader.
The depth feed is given as a frame, and I know the width and height ahead of time. How do I sample the texture and retrieve the depth value in the shader? Is this possible? I've read through the OpenGL types here and can't find anything on unsigned shorts, so I am starting to worry. Are there any workarounds?
My current compute shader:
#version 430

layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;

uniform float width;
uniform float height;
uniform sampler2D depth_feed;

void main() {
    // get index in global work group, i.e. x,y position
    vec2 sample_coords = ivec2(gl_GlobalInvocationID.xy) / vec2(width, height);
    float visibility = texture(depth_feed, sample_coords).r;
    vec4 pixel = vec4(1.0, 1.0, 0.0, visibility);
    // output to a specific pixel in the image
    imageStore(img_output, ivec2(gl_GlobalInvocationID.xy), pixel);
}
The depth texture definition is as follows:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, nullptr);
Currently my code produces a plain yellow screen.

First of all, the GL_UNSIGNED_SHORT in your glTexImage2D call only describes the layout of the data during the pixel transfer. A texture with the internal format GL_DEPTH_COMPONENT16 stores normalized fixed-point values, so sampling it through a sampler2D returns a float in the range [0.0, 1.0]; no special type handling is needed in the shader.
Also note that your shader writes the depth only to the alpha channel, while the RGB channels stay at (1.0, 1.0, 0.0) - that is the plain yellow you are seeing.
If you use perspective projection, then the depth value is not linear. See LearnOpenGL - Depth testing.
If all the depth values are near 0.0 and you use the following expression:
vec4 pixel = vec4(vec3(visibility), 1.0);
then all the pixels appear almost black. Actually the pixels are not completely black, but the difference is barely noticeable.
This happens when the far plane is "too" far away. To verify this, you can raise 1.0 - visibility to a power to make the different depth values recognizable. For instance:
float exponent = 5.0;
vec4 pixel = vec4(vec3(pow(1.0-visibility, exponent)), 1.0);
If you want a more sophisticated solution, you can linearize the depth values as explained in the answer to How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?.
Please note that for a satisfactory visualization you should use the entire range of the depth buffer ([0.0, 1.0]). The geometry must be between the near and far planes, but try to move the near and far planes as close to the geometry as possible.
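As a hedged sketch of that linearization (near and far are assumed uniforms matching the projection which produced the depth values; this is not code from the question):
float linearizeDepth(float depth, float near, float far)
{
    float ndc = depth * 2.0 - 1.0;                                   // [0, 1] -> NDC [-1, 1]
    float eyeZ = (2.0 * near * far) / (far + near - ndc * (far - near));
    return (eyeZ - near) / (far - near);                             // remap to [0, 1]
}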

Related

Given a coordinate in 2D clipspace, how can I sample the background texture

Consider a texture with the same dimensions as gl.canvas. What would be the proper method to sample a pixel from the texture at the same screen location as a clip-space coordinate (-1,-1 to 1,1)? Currently I'm using:
//u_screen = vec2 of canvas dimensions
//u_data = sampler2D of texture to sample
vec2 clip_coord = vec2(-0.25, -0.25);
vec2 tex_pos = (clip_coord / u_screen) + 0.5;
vec4 sample = texture2D(u_data, tex_pos);
This works, but it doesn't seem to take the canvas size into account properly, and tex_pos seems increasingly offset the closer clip_coord gets to -1 or 1.
In the following it is assumed that the texture has the same size as the canvas and is rendered 1:1 over the entire canvas, as mentioned in the question.
If clip_coord is a 2-dimensional fragment shader input where each component is in the range [-1, 1], then you have to map the coordinate to the range [0, 1]:
vec4 sample = texture2D(u_data, clip_coord*0.5+0.5);
Note that texture coordinates range from 0.0 to 1.0.
Another possibility is to use gl_FragCoord. gl_FragCoord is a fragment shader built-in variable that contains the window-relative coordinates of the fragment.
If you use WebGL 2.0 and GLSL ES 3.00, then gl_FragCoord.xy can be used to look up a 2-dimensional texture with texelFetch, to get the texel that corresponds to the fragment:
vec4 sample = texelFetch(u_data, ivec2(gl_FragCoord.xy), 0);
Note that texelFetch performs a lookup of a single texel; you can think of the coordinate as a 2-dimensional texel index.
If you use WebGL 1.0 and GLSL ES 1.00, then gl_FragCoord.xy can be divided by the (2-dimensional) size of the texture. The result can be used for a texture lookup with texture2D to get the texel that corresponds to the fragment:
vec4 sample = texture2D(u_data, gl_FragCoord.xy / u_size); // u_size = vec2 holding the texture size in pixels

Smoother gradient transitions with OpenGL?

I'm using the following shader to render a skydome to simulate a night sky. My issue is the clearly visible transitions between colours.
What causes these harsh gradient transitions?
Fragment shader:
#version 330

in vec3 worldPosition;
layout(location = 0) out vec4 outputColor;

void main()
{
    float height = 0.007 * (abs(worldPosition.y) - 200);
    vec4 apexColor = vec4(0, 0, 0, 1);
    vec4 centerColor = vec4(0.159, 0.132, 0.1, 1);
    outputColor = mix(centerColor, apexColor, height);
}
Fbo pixel format:
GL.TexImage2D(
    TextureTarget.Texture2D,
    0,
    PixelInternalFormat.Rgb32f,
    WindowWidth,
    WindowHeight,
    0,
    PixelFormat.Rgb,
    PixelType.Float,
    IntPtr.Zero);
As Ripi2 explained, 24-bit color is unable to perfectly represent a gradient, and discontinuities between representable colours become jarringly visible on gradients of a single color.
To hide the color banding I implemented a simple form of ordered dithering, with an 8x8 texture generated using this Bayer matrix algorithm.
vec4 dither = vec4(texture2D(MyTexture0, gl_FragCoord.xy / 8.0).r / 32.0 - (1.0 / 128.0));
colourOut += dither;
Normally monitors have 8 bits per channel of resolution. For example, the red intensity varies from 0 to 255.
If your window's horizontal size is 768 pixels and you want a full gradient on the red channel, then each color step takes 768/256 = 3 pixels. Depending on your eyesight, you may see bands.
How do you get a smooth gradient across those 3 pixels? Use sub-pixel rendering.
Basically, you "expand" the color step among the neighbouring pixels: add small amounts of other channels to the neighbours, and reduce the central pixel's amount a bit.
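If you would rather avoid the lookup texture, the threshold can also be computed procedurally. A hedged sketch of in-shader ordered dithering with a 4x4 Bayer matrix, folded into the skydome shader above (the matrix constants are the standard 4x4 Bayer values; everything else mirrors the question's shader):
#version 330

in vec3 worldPosition;
layout(location = 0) out vec4 outputColor;

// Standard 4x4 Bayer matrix, row-major
const float bayer[16] = float[16](
     0.0,  8.0,  2.0, 10.0,
    12.0,  4.0, 14.0,  6.0,
     3.0, 11.0,  1.0,  9.0,
    15.0,  7.0, 13.0,  5.0
);

void main()
{
    float height = 0.007 * (abs(worldPosition.y) - 200);
    outputColor = mix(vec4(0.159, 0.132, 0.1, 1), vec4(0, 0, 0, 1), height);

    // Offset each fragment by up to half of one 8-bit step, indexed by its
    // position inside a 4x4 screen-space tile
    ivec2 p = ivec2(gl_FragCoord.xy) % 4;
    float threshold = (bayer[p.y * 4 + p.x] + 0.5) / 16.0 - 0.5; // roughly in [-0.5, 0.5]
    outputColor.rgb += vec3(threshold / 255.0);
}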

OpenGL: Compute Shader - gl_GlobalInvocationID giving static output

So I have a compute shader that is supposed to take a texture and copy it over to another texture, with slight modifications. I have confirmed that the textures are bound and that data can be written, using RenderDoc (a graphics debugging tool). The issue I have is that inside the shader, the built-in variable gl_GlobalInvocationID does not seem to behave properly.
Here is my call of the compute shader (the texture height is 480):
glDispatchCompute(1, this->m_texture_height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
And then we have my compute shader here:
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable

layout (rgba8, binding=0) uniform image2D texture_source0;
layout (rgba8, binding=1) uniform image2D texture_target0;
layout (local_size_x=640, local_size_y=1, local_size_z=1) in; // Local work-group size

void main() {
    ivec2 txlPos; // A variable keeping track of where on the texture the current texel is from
    vec4 result;  // A variable to store color

    txlPos = ivec2(gl_GlobalInvocationID.xy);
    //txlPos = ivec2((gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID).xy);

    result = imageLoad(texture_source0, txlPos); // Get color value
    barrier();
    result = vec4(txlPos, 0.0, 1.0);
    imageStore(texture_target0, txlPos, result); // Save color in target texture
}
When I run this, the target texture becomes entirely yellow, save for a 1-pixel-thick green line along the left border and a 1-pixel-thick red line along the bottom border. My expectation is to see some sort of gradient, given that I save txlPos as a colour value.
Am I somehow defining my work groups wrong? I've tried splitting gl_GlobalInvocationID up into its components, but fiddling with them hasn't made me any wiser.
An rgba8 texture stores normalized fixed-point values and can only represent values between 0.0 and 1.0. Since gl_GlobalInvocationID is in most cases larger than 1, it gets clamped to the maximum value of 1.0, which makes the texture yellow.
If you want to create a gradient in both directions, then you have to make sure that the stored values start at 0 and end at 1. One possibility is to divide by the maximum:
result = vec4(vec2(gl_GlobalInvocationID.xy) / vec2(640, 480), 0, 1);
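If you would rather not hard-code the 640x480 dimensions, the size can also be queried from the image itself; imageSize is core since GLSL 4.30, so it is available in this #version 440 shader (a hedged variant, not from the original answer):
result = vec4(vec2(gl_GlobalInvocationID.xy) / vec2(imageSize(texture_target0)), 0, 1); // same gradient, size queried at runtime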

Display Part of Texture in GLSL

I'm using GLSL to draw sprites from a sprite sheet. I'm using jME 3, yet there are only small differences, and only with regard to deprecated functions.
The most important part of drawing a sprite from a sprite sheet is to draw only a subset/range of pixels, for example the range from (100, 0) to (200, 100). In the following test case sprite-sheet, and using the previous bounds, only the green part of the sprite-sheet would be drawn.
(test-case sprite sheet image: a 300-pixel-wide sheet whose middle section, from (100, 0) to (200, 100), is green)
This is what I have so far:
Definition:
MaterialDef Solid Color {
    // This is the list of user-defined variables to be used in the shader
    MaterialParameters {
        Vector4 Color
        Texture2D ColorMap
    }
    Technique {
        VertexShader GLSL100:   Shaders/tc_s1.vert
        FragmentShader GLSL100: Shaders/tc_s1.frag
        WorldParameters {
            WorldViewProjectionMatrix
        }
    }
}
.vert file:
uniform mat4 g_WorldViewProjectionMatrix;

attribute vec3 inPosition;
attribute vec4 inTexCoord;

varying vec4 texture_coordinate;

void main(){
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    texture_coordinate = vec4(inTexCoord);
}
.frag:
uniform vec4 m_Color;
uniform sampler2D m_ColorMap;

varying vec4 texture_coordinate;

void main(){
    vec4 color = vec4(m_Color);
    vec4 tex = texture2D(m_ColorMap, texture_coordinate);
    color *= tex;
    gl_FragColor = color;
}
In jME 3, inTexCoord refers to gl_MultiTexCoord0, and inPosition refers to gl_Vertex.
As you can see, I tried to give the texture_coordinate a vec4 type, rather than a vec2, so as to be able to reference its p and q values (texture_coordinate.p and texture_coordinate.q). Modifying them only resulted in different hues.
m_Color refers to the color, inputted by the user, and serves the purpose of altering the hue. In this case, it should be disregarded.
So far, the shader works as expected and the texture displays correctly.
I've been using resources and tutorials from NeHe (http://nehe.gamedev.net/article/glsl_an_introduction/25007/) and Lighthouse3D (http://www.lighthouse3d.com/tutorials/glsl-tutorial/simple-texture/).
Which functions/values should I alter to get the desired effect of displaying only part of the texture?
Generally, if you want to only display part of a texture, then you change the texture coordinates associated with each vertex. Since you don't show your code for how you're telling OpenGL about your vertices, I'm not sure what to suggest. But in general, if you're using older deprecated functions, instead of doing this:
// Lower Left of triangle
glTexCoord2f(0,0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(1,0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(1,1);
glVertex3f(x2,y2,z2);
You could do this:
// Lower Left of triangle
glTexCoord2f(1.0 / 3.0, 0.0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(2.0 / 3.0, 0.0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(2.0 / 3.0, 1.0);
glVertex3f(x2,y2,z2);
If you're using VBOs, then you need to modify your array of texture coordinates to access the appropriate section of your texture in a similar manner.
For the sampler2D the texture coordinates are normalized so that the leftmost and bottom-most coordinates are 0, and the rightmost and topmost are 1. So for your example of a 300-pixel-wide texture, the green section would be between 1/3rd and 2/3rds the width of the texture.
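As a small hedged sketch of that arithmetic (the helper and its names are illustrative, not from the question), here is how the pixel bounds could be converted into normalized coordinates and fed into a quad's texture-coordinate array:
// Convert pixel-space sprite bounds into normalized [0, 1] texture coordinates
struct UVRect { float u0, v0, u1, v1; };

UVRect spriteUV(int x0, int y0, int x1, int y1, int sheetW, int sheetH)
{
    return { static_cast<float>(x0) / sheetW, static_cast<float>(y0) / sheetH,
             static_cast<float>(x1) / sheetW, static_cast<float>(y1) / sheetH };
}

// For the (100, 0)-(200, 100) range on a 300x100 sheet this yields
// u in [1/3, 2/3] and v in [0, 1] -- the green section.
UVRect r = spriteUV(100, 0, 200, 100, 300, 100);

// The matching texture-coordinate array for a two-triangle quad VBO:
float texcoords[] = {
    r.u0, r.v0,   r.u1, r.v0,   r.u1, r.v1,  // first triangle
    r.u0, r.v0,   r.u1, r.v1,   r.u0, r.v1   // second triangle
};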

Pointers on modern OpenGL shadow cubemapping?

Background
I am working on a 3D game using C++ and modern OpenGL (3.3). I am now working on the lighting and shadow rendering, and I've successfully implemented directional shadow mapping. After reading over the requirements for the game I have decided that I'd be needing point light shadow mapping. After doing some research, I discovered that to do omnidirectional shadow mapping I will do something similar to directional shadow mapping, but with a cubemap instead.
I have no previous knowledge of cubemaps but my understanding of them is that a cubemap is six textures, seamlessly attached.
I did some looking around, but unfortunately I struggled to find a definitive "tutorial" on the subject for modern OpenGL. I look for tutorials that explain things from start to finish, because I seriously struggle to learn from snippets of source code or bare concepts, but I tried.
Current understandings
Here is my general understanding of the idea, minus the technicalities. Please correct me.
For each point light, a framebuffer is set up, as in directional shadow mapping
A single cubemap texture is then generated, and bound with glBindTexture(GL_TEXTURE_CUBE_MAP, shadowmap).
The cubemap is set up with the following attributes:
glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
(this is also similar to directional shadowmapping)
Now glTexImage2D() is called six times, once for each face. I do that like this:
for (int face = 0; face < 6; face++) // Fill each face of the shadow cubemap
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT32F, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
The texture is attached to the framebuffer with a call to
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, shadowmap, 0);
When the scene is to be rendered, it is rendered in two passes, like directional shadow mapping.
First of all, the shadow framebuffer is bound and the viewport is adjusted to the size of the shadowmap (1024 by 1024 in this case).
Culling is set to the front faces with glCullFace(GL_FRONT).
The active shader program is switched to the vertex and fragment shadow shaders whose sources I provide further down.
The light view matrices for all six views are calculated. I do it by creating a vector of glm::mat4's and push_back()-ing the matrices, like this:
// Create the six view matrices for all six sides
for (int i = 0; i < renderedObjects.size(); i++) // Iterate through all rendered objects
{
    renderedObjects[i]->bindBuffers(); // Bind buffers for rendering with it
    glm::mat4 depthModelMatrix = renderedObjects[i]->getModelMatrix(); // Set up model matrix

    for (int face = 0; face < 6; face++) // Draw for each side of the light (renamed from i, which shadowed the object index)
    {
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, shadowmap, 0);
        glClear(GL_DEPTH_BUFFER_BIT); // Clear depth buffer

        // Send MVP for shadow map
        glm::mat4 depthMVP = depthProjectionMatrix * depthViewMatrices[face] * depthModelMatrix;
        glUniformMatrix4fv(glGetUniformLocation(shadowMappingProgram, "depthMVP"), 1, GL_FALSE, glm::value_ptr(depthMVP));
        glUniformMatrix4fv(glGetUniformLocation(shadowMappingProgram, "lightViewMatrix"), 1, GL_FALSE, glm::value_ptr(depthViewMatrices[face]));
        glUniformMatrix4fv(glGetUniformLocation(shadowMappingProgram, "lightProjectionMatrix"), 1, GL_FALSE, glm::value_ptr(depthProjectionMatrix));

        glDrawElements(renderedObjects[i]->getDrawType(), renderedObjects[i]->getElementSize(), GL_UNSIGNED_INT, 0);
    }
}
The default framebuffer is bound, and the scene is drawn normally.
Issue
Now, to the shaders. This is where my understanding runs dry. I am completely unsure of what I should do, and my sources seem to conflict with each other because they target different versions. I ended up blindly copying and pasting code from random sources, hoping it'd achieve something other than a black screen. I know this is terrible, but there doesn't seem to be any clear definition of what to do. What spaces do I work in? Do I even need a separate shadow shader, like I used in directional shadow mapping? What the hell do I use as the type for a shadow cubemap? samplerCube? samplerCubeShadow? How do I sample said cubemap properly? I hope that someone can clear it up for me and provide a nice explanation.
My current understanding of the shader part is:
- When the scene is being rendered into the cubemap, the vertex shader simply takes the depthMVP uniform I calculated in my C++ code and transforms the input vertices by it.
- The fragment shader of the cubemap pass simply assigns gl_FragCoord.z to its single out value. (This part is unchanged from when I implemented directional shadow mapping. I assumed it would be the same for cubemapping, because the shaders don't even interact with the cubemap - OpenGL simply renders the output from them to the cubemap, right? Because it's a framebuffer?)
The vertex shader for the normal rendering is unchanged.
In the fragment shader for normal rendering, the vertex position is transformed into the light's space with the light's projection and view matrix.
That's somehow used in the cubemap texture lookup. ???
Once the depth has been retrieved using magical means, it is compared to the distance from the light to the vertex, much like directional shadow mapping. If it's less, that point must be shadowed, and vice versa.
It's not much of an understanding. I go blank as to how the vertices are transformed and used to look up the cubemap, so I'm going to paste the source for my shaders, in the hope that people can clarify this. Please note that a lot of this code is blind copying and pasting; I haven't altered anything, so as not to jeopardise any understanding.
Shadow vertex shader:
#version 150

in vec3 position;

uniform mat4 depthMVP;

void main()
{
    gl_Position = depthMVP * vec4(position, 1);
}
Shadow fragment shader:
#version 150

out float fragmentDepth;

void main()
{
    fragmentDepth = gl_FragCoord.z;
}
Standard vertex shader:
#version 150

in vec3 position;
in vec3 normal;
in vec2 texcoord;

uniform mat3 modelInverseTranspose;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

out vec3 fragnormal;
out vec3 fragnormaldirection;
out vec2 fragtexcoord;
out vec4 fragposition;
out vec4 fragshadowcoord;

void main()
{
    fragposition = modelMatrix * vec4(position, 1.0);
    fragtexcoord = texcoord;
    fragnormaldirection = normalize(modelInverseTranspose * normal);
    fragnormal = normalize(normal);
    fragshadowcoord = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
}
Standard fragment shader:
#version 150

out vec4 outColour;

in vec3 fragnormaldirection;
in vec2 fragtexcoord;
in vec3 fragnormal;
in vec4 fragposition;
in vec4 fragshadowcoord;

uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrixInversed;

uniform mat4 lightViewMatrix;
uniform mat4 lightProjectionMatrix;

uniform sampler2D tex;
uniform samplerCubeShadow shadowmap;

// Converts a fragment-to-light vector into the [0, 1] depth value a cubemap
// face would store for it (assuming near = 1.0, far = 2048.0)
float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));

    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f + n) / (f - n) - (2 * f * n) / (f - n) / LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}

float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    if (ShadowVec + 0.0001 > VectorToDepthValue(VertToLightWS)) // To avoid self shadowing, I guess
        return 1.0;
    return 0.7;
}

void main()
{
    vec3 light_position = vec3(0.0, 0.0, 0.0);
    vec3 VertToLightWS = light_position - fragposition.xyz;
    outColour = texture(tex, fragtexcoord) * ComputeShadowFactor(shadowmap, VertToLightWS);
}
I can't remember where the ComputeShadowFactor and VectorToDepthValue function code came from, because I was researching it on my laptop, which I can't get to right now, but this is the result of those shaders:
It is a small square of unshadowed space surrounded by shadowed space.
I am obviously doing a lot wrong here, probably centered on my shaders, due to a lack of knowledge on the subject, because I find it difficult to learn from anything but tutorials, and I am very sorry for that. I am at a loss, and it would be wonderful if someone could shed light on this with a clear explanation of what I am doing wrong, why it's wrong, how I can fix it, and maybe even some code. I think the issue may be that I am working in the wrong spaces.
I hope to provide an answer to some of your questions, but first some definitions are required:
What is a cubemap?
It is a map from a direction vector to a pair of [face, 2D coordinates on that face], obtained by projecting the direction vector onto a hypothetical cube.
What is an OpenGL cubemap texture?
It is a set of six "images".
What is a GLSL cubemap sampler?
It is a sampler primitive from which cubemap sampling can be done. This means that it is sampled using a direction vector instead of the usual texture coordinates. The hardware then projects the direction vector onto a hypothetical cube and uses the resulting [face, 2D texture coordinate] pair to sample the right "image" at the right 2D position.
What is a GLSL shadow sampler?
It is a sampler primitive that is bound to a texture containing NDC-space depth values and, when sampled using the shadow-specific sampling functions, returns a "comparison" between an NDC-space depth (in the same space as the shadow map, obviously) and the NDC-space depth stored inside the bound texture. The depth to compare against is specified as an additional element in the texture coordinates when calling the sampling function. Note that shadow samplers are provided for ease of use and speed, but it is always possible to do the comparison "manually" in the shader.
Now, for your questions:
OpenGL simply renders [...] to the cubemap, right?
No, OpenGL renders to a set of targets in the currently bound framebuffer.
In the case of cubemaps, the usual way to render into them is (see the sketch after this list):
to create the cubemap and attach each of its six "images" to the same framebuffer (at different attachment points, obviously)
to enable only one of the targets at a time (so you render into each cubemap face individually)
to render what you want into the cubemap face (possibly using face-specific "view" and "projection" matrices)
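A minimal sketch of that per-face loop (it mirrors the structure already present in the question's code; the names are illustrative):
for (int face = 0; face < 6; ++face)
{
    // Select one cubemap face as the framebuffer's depth target
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, shadowmap, 0);
    glClear(GL_DEPTH_BUFFER_BIT);

    // ... upload the face-specific view/projection matrices and draw the scene ...
}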
Point-light shadow maps
In addition to everything said about cubemaps, there are a number of problems with using them to implement point-light shadow mapping, so the hardware depth comparison is rarely used.
Instead, common practice is the following:
instead of writing NDC-space depth, write the radial distance from the point light
when querying the shadow map (see the sample code at the bottom):
do not use hardware depth comparisons (use samplerCube instead of samplerCubeShadow)
transform the point to be tested into "cube space" (which does not include projection at all)
use the "cube-space" vector as the lookup direction to sample the cubemap
compare the radial distance sampled from the cubemap with the radial distance of the tested point
Sample code
// sample radial distance from the cubemap
float radial_dist = texture(my_cubemap, cube_space_vector).x;
// compare against test point radial distance
bool shadowed = length(cube_space_vector) > radial_dist;
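For the writing side, a hedged sketch of a shadow-pass fragment shader that stores the radial distance instead of NDC depth. This assumes the cubemap faces are attached as color targets in a color-renderable format such as GL_R32F (rather than depth attachments); lightPositionWS and fragPositionWS are illustrative names, not from the question:
#version 150

in vec3 fragPositionWS;        // world-space position passed from the vertex shader

uniform vec3 lightPositionWS;  // world-space light position

out float radialDistance;

void main()
{
    radialDistance = length(fragPositionWS - lightPositionWS);
}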