OpenGL shader gamma on solid color - C++

I want to implement gamma correction in my OpenGL 3D renderer. I understand that it's absolutely necessary for textures loaded in sRGB, so I do this:
vec4 texColor;
texColor = texture(src_tex_unit0, texCoordVarying);
// decode the sRGB texture to linear space (approximate 2.2 gamma)
texColor = vec4(pow(texColor.rgb, vec3(2.2)), 1.0);
// do the lighting math in linear space
vec4 colorPreGamma = texColor * (vec4(ambient,1.0) + vec4(diffuse,1.0));
// encode back to gamma space for display
fragColor = vec4(pow(colorPreGamma.rgb, vec3(1.0/gamma)), 1.0);
But my question is about solid colors: what if the surface of the 3D object I want lit is not textured, but just colored by a per-vertex RGB value? In this case, do I have to transform my color into linear space and, after the lighting operation, transform it back to gamma space like I do for a texture?
Does this also apply when my lights are colored?

In this case, do I have to transform my color into linear space and, after the lighting operation, transform it back to gamma space like I do for a texture?
That depends: what colorspace are your colors in?
You're not doing this correction because of where they come from; you're doing it because of what the colors actually are. If the value is not linear, then you must linearize it before using it, regardless of where it comes from.
You are ultimately responsible for putting that color there. So you have to know whether that color is in the linear RGB or sRGB colorspace. And if the color is not linear, then you have to linearize it before you can get meaningful numbers from it.
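For example, if your per-vertex colors were picked in sRGB (say, with a color picker in an image editor), the treatment is exactly the same as for your texture. A rough sketch, reusing the names from your shader (vertColor here stands for whatever your interpolated per-vertex color varying is actually called):
// per-vertex color authored in sRGB: decode it before lighting
vec3 linearVertColor = pow(vertColor.rgb, vec3(2.2));
// light in linear space, exactly as with the texture
vec3 colorPreGamma = linearVertColor * (ambient + diffuse);
// encode back to gamma space for display
fragColor = vec4(pow(colorPreGamma, vec3(1.0/gamma)), 1.0);
If, on the other hand, you authored the colors directly in linear space, you skip the first pow and only apply the final encode.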

In OpenGL there isn't a huge distinction between color data and other kinds of data: if you have a vec3 you can access components as .xyz or .rgb. It's all just data.
So ask yourself this: "Do I have to gamma correct my vertex positions in the vertex shader?"
Of course not, because your vertex positions are already in linear space. So if you are similarly setting your vertex colors in linear space, again no gamma correction is needed.
In other words, do you imagine vec3(0.5, 0.5, 0.5) as being a gray that is visually halfway between black and white? Then you need gamma correction.
Do you imagine it as being mathematically halfway between black and white (in terms of measurable light intensity)? Then it's already linear.
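To put a number on it: an sRGB value of 0.5 decodes to roughly pow(0.5, 2.2) ≈ 0.22 in linear light, so a perceptually mid gray is only about 22% of white's measurable intensity, whereas a linear 0.5 really is 50%.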

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u = 4/pi * atan(u)
v = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact no 2d projection of any part of a sphere can truly be equi-angular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in a GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));  // project onto the unit cube
d = atan(d)/atan(1.0);                        // the 4/pi * atan remap from above
gives the following result:
Compare it with the uncorrected d:
As you can see the EAC projection shrinks the pixels in the middle by a little bit, and expands them near the corners, so that they cover more equal area.
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);                // project onto the unit cylinder (axis along z)
d.xy /= max(abs(d.x), abs(d.y));  // then onto the unit square in xy
d.xy = atan(d.xy)/atan(1.0);      // equi-angular remap of the horizontal direction
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead refocus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
Your problem is that the geometry on which the environment is placed is too small: you are not looking at the environment, but at the inside of a small cube that you are sitting in. The environment map should behave as if you are always in the center of the map and the environment is infinitely far away.
I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to its w component in the vertex shader. If you set z to w, you guarantee that the final depth value of the position will be 1.0, which is the depth of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.)
It is quite sufficient to draw a cube and wrap the environment onto it by looking up the map with the interpolated vertex positions of the cube. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube directly to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
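For reference, a complete minimal vertex shader built from the snippet above might look like this (inVertex, view and projection are the names used in the snippet; the #version matches the question's fragment shader):
#version 400 core
in vec3 inVertex;      // cube vertex position

out vec3 cube_edge;    // direction vector used for the cubemap lookup

uniform mat4 view;     // for a skybox, usually the view matrix with its translation stripped
uniform mat4 projection;

void main(void) {
    cube_edge = inVertex;
    vec4 clipPos = projection * view * vec4(inVertex, 1.0);
    gl_Position = clipPos.xyww;   // z == w, so the depth ends up at the far plane (1.0)
}
Because the depth lands exactly on 1.0, the depth test typically has to pass on equal values (e.g. glDepthFunc(GL_LEQUAL)) for the skybox to be drawn at all.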

Is gl_FragDepth equal to gl_FragCoord.z when MSAA is enabled?

From the OpenGL wiki, I know that gl_FragDepth will take the value of gl_FragCoord.z if the shader does not write to it.
https://www.khronos.org/opengl/wiki/Fragment_Shader/Defined_Outputs
But I have a problem. If I enable MSAA and write gl_FragDepth = gl_FragCoord.z in the fragment shader, the rendering is wrong: you can see a black line on the white triangle, as below:
If I don't write gl_FragDepth in the fragment shader, it works fine.
If I disable MSAA, it also works fine whether or not I write gl_FragDepth.
The correct display image has no black line:
The scene is simple: I just draw two white triangles that intersect along an edge.
I add a simple light in the vertex shader. The code is shown below:
const char *vertexShaderSource[] = {
"#version 120\n",
"varying vec4 lightColor;\n",
"void main()\n",
"{\n",
" vec3 n = normalize(gl_NormalMatrix * gl_Normal);\n",
" vec3 l = normalize(vec3(0.0, 1.0, 1.0));\n",
" float NdotL = clamp(dot(n, l), 0.001, 1.0);\n",
" lightColor = vec4(1.0)*(NdotL + 0.2);\n",
" gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n",
"}\n"
};
const char *fragmentShaderSource[] = {
"#version 120\n",
"varying vec4 lightColor;\n",
"void main(void)\n",
"{\n",
" gl_FragColor = vec4(lightColor.rgb, 1.0);\n",
" gl_FragDepth = gl_FragCoord.z;\n"
"}\n"
};
The positions of 6 vertices of 2 triangles are (-5,-5,0),(5,-5,0),(-5,5,0),(-5,0,0),(5,0,0),(-5,0,-10).
The normals are perpendicular to triangles.
I want to know: why are the rendered images different when I write gl_FragDepth in the fragment shader?
Your two triangles intersect. Specifically, the grey triangle has an edge which can generate depth values equal to the depth values of the white triangle. As such, it is entirely possible for a particular sample from the grey triangle at that intersection to generate a depth value that is equal to the depth value of the white triangle.
So you were never guaranteed not to see a line there; you just happened not to in many cases.
However, that all assumes that:
The grey triangle is being rendered after the white one.
Your depth test will pass on equal values.
The result you are getting here may happen even outside of these two conditions. The reason for that is complex.
See, the whole point of multisampling is that the number of depth values generated by the rasterizer and the number of fragment shader executions are not the same. So a single FS invocation is mapped to multiple depth values.
However, a single FS invocation can still write to gl_FragDepth. If it does this, then all samples that map to that FS invocation will receive the same depth. This depth overrides the multisample-generated depth values.
Also, interpolation at the edges of a primitive is weird under multisampling. Each sample that is within the bounds of the triangle at that pixel will result in a sample value being written (unless something else culls it out). But the center point of the pixel need not be one of these sample locations. So a triangle that doesn't pass through the center of a pixel can still contribute some samples to the pixel, so long as the triangle passes through at least one sample in that pixel.
The fragment shader gets interpolated values based on some location inside the pixel. With multisampling, this location may not be inside of the triangle. For example, if the location the implementation selects for the FS's interpolation is the pixel's center, and the triangle doesn't pass through the center of that pixel, you will still get an FS invocation so long as the triangle passes through some sample of the pixel.
But this means that the interpolated values can represent locations outside of the area of the triangle. The interpolation math can produce values for areas not within the triangle; they just don't make sense.
gl_FragCoord, being an interpolated value, could therefore generate values outside of the triangle. Since the grey triangle is aimed towards the viewer, the values from locations "above" the oncoming edge of the grey triangle will be closer than they should be. And since the edge of the grey triangle intersects the white triangle, values closer than its actual edge values will be considered closer than the white triangle.
The normal way to counter this would be to use the centroid interpolation qualifier. However, the standard doesn't really allow this; even if you redeclared gl_FragCoord with the centroid qualifier, it won't have any effect:
The use of centroid does not further restrict this value to be inside the current primitive.
Also, as previously stated, depth-replacement in regular multisampled rendering destroys all of the per-sample depth information anyway. Every sample in a pixel would get the same depth value if your FS writes to the depth. That's not really what you wanted, even if you could do centroid interpolation of gl_FragCoord (which is probably why they don't allow it).
So if it is absolutely essential to do depth-replacement in a shader used for multisampling (and you should avoid this whenever possible), you will need to use per-sample shading. You can redeclare gl_FragCoord with sample to achieve this.
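A sketch of what that could look like if the question's shaders were ported to a GLSL version that supports per-sample shading (OpenGL 4.0+); lightColor is the varying from the question, everything else is an assumption of this sketch. Keep in mind that per-sample shading runs the fragment shader once per covered sample, which is correspondingly more expensive:
#version 400 core
// The 'sample' qualifier forces per-sample shading: the shader runs once per
// covered sample, and inputs are evaluated at the sample's location instead of
// being extrapolated from a single per-pixel location.
// (The matching vertex shader output would be: out vec4 lightColor;)
sample in vec4 lightColor;

out vec4 fragColor;

void main(void)
{
    fragColor = vec4(lightColor.rgb, 1.0);
    // Each sample now gets its own depth, so the replaced depth no longer
    // smears one (possibly extrapolated) value across all of a pixel's samples.
    gl_FragDepth = gl_FragCoord.z;
}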

How could I remove this colour interpolation artefact across a quad?

I've been reading up on a vulkan tutorial online, here: https://vulkan-tutorial.com. This question should apply to any 3D rendering API however.
In this lesson https://vulkan-tutorial.com/Vertex_buffers/Index_buffer, the tutorial had just covered using indexed rendering in order to reuse vertices when drawing the following simple two-triangle quad:
The four vertices were assigned red, green, blue and white colours as vertex attributes and the fragment shader had those colours interpolated across the triangles as expected. This leads to the ugly visual artefact on the diagonal where the two triangles meet. As I understand it, the interpolation will only be happening across each triangle, and so where the two triangles meet the interpolation doesn't cross the boundary.
How could you, generally in any rendering API, have the colours smoothly interpolated over all four corners for a nice colour-wheel effect, without this hard line?
This is correct output from a graphics API point of view. You can achieve your desired output (a colour gradient) within the shader code: you basically need to interpolate the colours yourself. To get an idea of how to do this, here is a GLSL snippet from this answer:
uniform vec2 resolution;
void main(void)
{
    // normalized screen-space position of this fragment
    vec2 p = gl_FragCoord.xy / resolution.xy;
    float gray = 1.0 - p.x;
    float red = p.y;
    gl_FragColor = vec4(red, gray*red, gray*red, 1.0);
}
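Applied to the tutorial's quad, one way to do this (a sketch; the UV input and the assignment of the four corner colours to corners are assumptions, not part of the tutorial code) is to pass a quad-local UV per vertex and bilinearly mix the four corner colours in the fragment shader, so the result no longer depends on how the quad is split into triangles:
#version 450
// fragUV is assumed to be a per-vertex UV over the quad, in [0,1]^2,
// with (0,0) at one corner and (1,1) at the opposite corner.
layout(location = 0) in vec2 fragUV;
layout(location = 0) out vec4 outColor;

void main()
{
    // the four corner colours from the question: red, green, blue, white
    vec3 c00 = vec3(1.0, 0.0, 0.0);
    vec3 c10 = vec3(0.0, 1.0, 0.0);
    vec3 c11 = vec3(0.0, 0.0, 1.0);
    vec3 c01 = vec3(1.0, 1.0, 1.0);

    // bilinear interpolation: blend along x, then along y
    vec3 bottom = mix(c00, c10, fragUV.x);
    vec3 top    = mix(c01, c11, fragUV.x);
    outColor = vec4(mix(bottom, top, fragUV.y), 1.0);
}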

Why is my GLSL shader rendering a cleavage?

I'm working on a deferred lighting technique in 2D, using a frame buffer to accumulate light sources using the GL_MAX blend equation.
Here's what I get when rendering one light source (the geometry is a quad without a texture, I'm only using a fragment shader for colouring) to my buffer:
Which is exactly what I want: attenuation from the light source. However, when two light sources are near each other and overlap, they produce a lower RGB value where they meet, like so:
Why is there a darker line between the two? I was expecting that with GL_MAX blend equation they would smoothly blend into each other, using the maximal value of the fragments in each location.
Here's the setup for the FBO (using LibGDX):
Gdx.gl.glClearColor(0.14f, 0.14f, 0.19f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
Gdx.gl.glBlendEquation(GLMAX_BLEND_EQUATION);
Gdx.gl.glBlendFunc(GL20.GL_SRC_COLOR, GL20.GL_DST_COLOR);
Gdx.gl.glEnable(GL20.GL_BLEND);
I don't think the call to glBlendFunc is actually necessary when using this equation. GLMAX_BLEND_EQUATION is set to 0x8008.
varying vec2 v_texCoords;
varying vec2 v_positionRelativeToLight;
uniform sampler2D u_texture;
uniform vec3 u_lightPosition;
uniform vec3 u_lightColor;
void main() {
    float distanceToLight = length(v_positionRelativeToLight);
    // quadratic falloff: 1 / (1 + a*d + b*d^2)
    float falloffVarA = 0.1;
    float falloffVarB = 1.0;
    float attenuation = 1.0 / (1.0 + (falloffVarA*distanceToLight) + (falloffVarB*distanceToLight*distanceToLight));
    // limit the reach so the light falls to zero by distance 1.0
    float minDistanceOrAttenuation = min(attenuation, 1.0-distanceToLight);
    float combined = minDistanceOrAttenuation * attenuation;
    gl_FragColor = vec4(combined, combined, combined, 1.0);
}
There are extra variables passed in there as this fragment shader is usually more complicated, but I've cut it down to just show how the attenuation and blending is behaving.
This happens wherever any two light sources that I render meet: rather than the colour I'm expecting, the meeting point of the two light sources, the equidistant line between the two quads, is darker than I'm expecting. Any idea why, and how to fix it?
This is the result of subtracting the first image from the second:
The background on the first isn't quite black, hence the yellowing on the right, but otherwise you can clearly see the black region on the left where original values were preserved, the darker arc where values from both lights were evaluated but the right was greater, then all the area on the right that the original light didn't touch.
I therefore think you're getting max-pick blending. But what you want is additive blending:
Gdx.gl.glBlendFunc(GL20.GL_ONE, GL20.GL_ONE);
... and leave the blend equation on the default of GL_FUNC_ADD.
Your result is the expected appearance for maximum blending (which is just like the lighten blend mode in Photoshop). The dark seam looks out of place perhaps because of the non-linear falloff of each light, but it's mathematically correct. If you introduce a light with a bright non-white color to it, it will look much more objectionable.
You can get around this if you render your lights to a frame buffer with inverted colors and multiplicative blending, and then render the frame buffer with inverted colors. Then the math works out to not have the seams, but it won't look unusually bright like what additive blending produces.
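To see why that removes the seam: with two lights a and b, the buffer ends up holding (1-a)*(1-b), and inverting it on the final draw gives 1 - (1-a)*(1-b) = a + b - a*b. That is never less than max(a, b), so there is no dark seam where the lights meet, and it can never exceed 1.0, so it doesn't blow out the way plain addition can.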
Use a pure white clear color on your frame buffer and then render the lights with the standard GL_ADD blend equation and the blend function GL_ONE_MINUS_DST_COLOR. Then render your FBO texture to the screen, inverting the colors again.
Two lights drawn using your method
Two lights drawn additively
Two lights, drawn sequentially with GL_ONE_MINUS_DST_COLOR, GL_ZERO and GL_ADD
The above result, inverted

Directx9 Specular Mapping

How would I implement loading a texture to be used as a specular map for a piece of geometry and rendering it in Directx9 using C++?
Are there any tutorials or basic examples I can refer to?
Use D3DXCreateTextureFromFile to load the file from disk. You then need to set up a shader that multiplies the specular value by the value stored in the texture. This gives you the specular colour.
So your final pixel colour comes from
Final = ambient + (N.L * texture colour) + (N.H * texture specular)
You can do this easily in a shader.
It's also worth noting that it can be very useful to store the per-texel specular in the alpha channel of the texture. This way you only need one texture around, though it does break per-pixel transparency.
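The question is about Direct3D 9, but the shader math is API-agnostic; here is a sketch of the formula above in GLSL-style shader code (the equivalent HLSL is a direct translation). All of the names, and the specular exponent, are illustrative assumptions; the per-texel specular is read from the texture's alpha channel as suggested above:
uniform sampler2D diffuseMap;   // rgb = diffuse colour, a = per-texel specular
uniform vec3 ambient;
uniform vec3 lightColor;
uniform float shininess;        // specular exponent, commonly applied to the N.H term

varying vec2 uv;
varying vec3 N;   // surface normal
varying vec3 L;   // direction to the light
varying vec3 V;   // direction to the camera

void main()
{
    vec3 n = normalize(N);
    vec3 l = normalize(L);
    vec3 h = normalize(l + normalize(V));   // Blinn half-vector

    vec4 tex = texture2D(diffuseMap, uv);
    float NdotL = max(dot(n, l), 0.0);
    float NdotH = max(dot(n, h), 0.0);

    vec3 diffuse  = NdotL * tex.rgb * lightColor;
    vec3 specular = pow(NdotH, shininess) * tex.a * lightColor;   // specular scaled by the map

    gl_FragColor = vec4(ambient + diffuse + specular, 1.0);
}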