How to change the hue of a texture with GLSL? - opengl

Is there a way to efficiently change the hue of a 2D OpenGL texture using GLSL (a fragment shader)?
Does someone have some code for it?
UPDATE: This is the code resulting from user1118321 suggestion:
uniform sampler2DRect texture;
const mat3 rgb2yiq = mat3(0.299, 0.587, 0.114, 0.595716, -0.274453, -0.321263, 0.211456, -0.522591, 0.311135);
const mat3 yiq2rgb = mat3(1.0, 0.9563, 0.6210, 1.0, -0.2721, -0.6474, 1.0, -1.1070, 1.7046);
uniform float hue;
void main() {
vec3 yColor = rgb2yiq * texture2DRect(texture, gl_TexCoord[0].st).rgb;
float originalHue = atan(yColor.b, yColor.g);
float finalHue = originalHue + hue;
float chroma = sqrt(yColor.b*yColor.b+yColor.g*yColor.g);
vec3 yFinalColor = vec3(yColor.r, chroma * cos(finalHue), chroma * sin(finalHue));
gl_FragColor = vec4(yiq2rgb*yFinalColor, 1.0);
}
And this is the result compared with a reference:
I have tried to switch I with Q inside atan, but the result is wrong even around 0°.
Have you got any hint?
If needed for comparison, this is the original unmodified image:

While what @awoodland says is correct, I believe that method may cause issues with changes in luminance.
HSV and HLS color systems are problematic for a number of reasons. I talked with a color scientist about this recently, and his recommendation was to convert to YIQ or YCbCr space and adjust the chroma channels (I & Q, or Cb & Cr) accordingly. (You can learn how to do that here and here.)
Once in one of those spaces, you can get the hue from the angle formed by the chroma channels, by doing hue = atan(cr, cb) (the two-argument form of atan handles cb == 0 for you). This gives you a value in radians. Simply rotate it by adding the hue rotation amount. Once you've done that, you can calculate the magnitude of the chroma with chroma = sqrt(cr*cr + cb*cb). To get back to RGB, calculate the new Cb and Cr (or I & Q) using Cr = chroma * sin(hue), Cb = chroma * cos(hue). Then convert back to RGB as described on the above web pages.
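As a rough sketch of those steps outside the shader (plain Python, not shader code; the YIQ matrices below use the commonly quoted approximate NTSC coefficients):

```python
import math

# RGB -> YIQ and YIQ -> RGB coefficients (approximate NTSC values)
RGB_TO_YIQ = [(0.299, 0.587, 0.114),
              (0.596, -0.275, -0.321),
              (0.212, -0.523, 0.311)]
YIQ_TO_RGB = [(1.0, 0.956, 0.621),
              (1.0, -0.272, -0.647),
              (1.0, -1.107, 1.704)]

def rotate_hue(rgb, angle):
    """Rotate the hue of an RGB triple by `angle` radians via YIQ space."""
    y, i, q = (sum(c * k for c, k in zip(rgb, row)) for row in RGB_TO_YIQ)
    hue = math.atan2(q, i) + angle    # two-argument atan: no division by zero
    chroma = math.hypot(i, q)         # magnitude of the chroma vector
    i, q = chroma * math.cos(hue), chroma * math.sin(hue)
    return tuple(sum(c * k for c, k in zip((y, i, q), row)) for row in YIQ_TO_RGB)
```

Rotating by 0 radians returns (approximately) the input color, which is a quick way to sanity-check the coefficient matrices.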
EDIT: Here's a solution that I've tested and seems to give me the same results as your reference. You can probably collapse some of the dot products into matrix multiplies:
uniform sampler2DRect inputTexture;
uniform float hueAdjust;
void main ()
{
const vec4 kRGBToYPrime = vec4 (0.299, 0.587, 0.114, 0.0);
const vec4 kRGBToI = vec4 (0.596, -0.275, -0.321, 0.0);
const vec4 kRGBToQ = vec4 (0.212, -0.523, 0.311, 0.0);
const vec4 kYIQToR = vec4 (1.0, 0.956, 0.621, 0.0);
const vec4 kYIQToG = vec4 (1.0, -0.272, -0.647, 0.0);
const vec4 kYIQToB = vec4 (1.0, -1.107, 1.704, 0.0);
// Sample the input pixel
vec4 color = texture2DRect (inputTexture, gl_TexCoord [ 0 ].xy);
// Convert to YIQ
float YPrime = dot (color, kRGBToYPrime);
float I = dot (color, kRGBToI);
float Q = dot (color, kRGBToQ);
// Calculate the hue and chroma
float hue = atan (Q, I);
float chroma = sqrt (I * I + Q * Q);
// Make the user's adjustments
hue += hueAdjust;
// Convert back to YIQ
Q = chroma * sin (hue);
I = chroma * cos (hue);
// Convert back to RGB
vec4 yIQ = vec4 (YPrime, I, Q, 0.0);
color.r = dot (yIQ, kYIQToR);
color.g = dot (yIQ, kYIQToG);
color.b = dot (yIQ, kYIQToB);
// Save the result
gl_FragColor = color;
}

Andrea3000, in comparing YIQ examples on the net, I came across your posting, but I think there is an issue with the 'updated' version of your code: I'm fairly sure your 'mat3' definitions are flipped on the column/row ordering (maybe that's why you were still having trouble).
FYI: OpenGL matrix ordering: "For more values, matrices are filled in in column-major order. That is, the first X values are the first column, the second X values are the next column, and so forth." See: http://www.opengl.org/wiki/GLSL_Types
mat2(
float, float, //first column
float, float); //second column
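To illustrate the point (a hedged sketch, not the actual fix to the posted shader): listing a matrix's coefficients row by row inside a GLSL matrix constructor builds the transpose of the intended matrix, because the constructor consumes values column by column. A plain-Python transcription of the two fill orders:

```python
# The intended RGB -> YIQ matrix, written row by row (mathematical notation):
R = [[0.299, 0.587, 0.114],
     [0.596, -0.275, -0.321],
     [0.212, -0.523, 0.311]]

def glsl_mat3(flat):
    """Build a 3x3 matrix the way GLSL's mat3(...) does: column-major fill."""
    return [[flat[col * 3 + row] for col in range(3)] for row in range(3)]

def mul(m, v):
    """Matrix * vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# Passing the row-major listing (as in the question's code) builds R transposed:
row_major_fill = glsl_mat3([x for row in R for x in row])
# The corrected constructor call lists the coefficients column by column:
col_major_fill = glsl_mat3([R[r][c] for c in range(3) for r in range(3)])

v = [0.2, 0.5, 0.8]
print(mul(row_major_fill, v))  # NOT the intended YIQ values (transposed matrix)
print(mul(col_major_fill, v))  # the intended RGB -> YIQ conversion
```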

An MSL (Metal Shading Language) version for changing the hue of a texture. I would recommend the MetalPetal pod; it will make life a lot easier.
#include <metal_stdlib>
using namespace metal;
typedef struct {
float4 position [[ position ]];
float2 textureCoordinate;
} VertexOut;
fragment float4 hue_adjust_filter(
VertexOut vertexIn [[stage_in]],
texture2d<float, access::sample> inTexture [[texture(0)]],
sampler inSampler [[sampler(0)]],
constant float &hueAdjust [[ buffer(0) ]])
{
const float4 kRGBToYPrime = float4 (0.299, 0.587, 0.114, 0.0);
const float4 kRGBToI = float4 (0.596, -0.275, -0.321, 0.0);
const float4 kRGBToQ = float4 (0.212, -0.523, 0.311, 0.0);
const float4 kYIQToR = float4 (1.0, 0.956, 0.621, 0.0);
const float4 kYIQToG = float4 (1.0, -0.272, -0.647, 0.0);
const float4 kYIQToB = float4 (1.0, -1.107, 1.704, 0.0);
// Sample the input pixel
float2 uv = vertexIn.textureCoordinate;
float4 color = inTexture.sample(inSampler, uv);
// Convert to YIQ
float YPrime = dot (color, kRGBToYPrime);
float I = dot (color, kRGBToI);
float Q = dot (color, kRGBToQ);
// Calculate the hue and chroma
float hue = atan2 (Q, I);
float chroma = sqrt (I * I + Q * Q);
// Make the user's adjustments
hue += hueAdjust;
// Convert back to YIQ
Q = chroma * sin (hue);
I = chroma * cos (hue);
// Convert back to RGB
float4 yIQ = float4 (YPrime, I, Q, 0.0);
color.r = dot (yIQ, kYIQToR);
color.g = dot (yIQ, kYIQToG);
color.b = dot (yIQ, kYIQToB);
return color;
}

Related

How can I render a textured quad so that I fade different corners?

I'm drawing textured quads to the screen in a 2D environment. The quads are used as a tile map. In order to "blend" some of the tiles together, I had an idea like this:
A single "grass" tile drawn on top of dirt would render it as a faded circle of grass; faded from probably the quarter point.
If there was a larger area of grass tiles, then the edges would gradually fade from the quarter point that is on the edge of the grass.
So if the entire left-edge of the quad was to be faded, it would have 0 opacity at the left-edge, and then full opacity at one quarter of the width of the quad. Right edge fade would have full opacity at the three-quarters width, and fade down to 0 opacity at the right-most edge.
I figured that setting 4 corners as "on" or "off" would be enough to have the fragment shader work it out. However, I can't work it out.
If corner0 were 0 the result should be something like this for the quad:
If both corner0 and corner1 were 0 then it would look like this:
This is what I have so far:
#version 330
layout(location=0) in vec3 inVertexPosition;
layout(location=1) in vec2 inTexelCoords;
layout(location=2) in vec2 inElementPosition;
layout(location=3) in vec2 inElementSize;
layout(location=4) in uint inCorner0;
layout(location=5) in uint inCorner1;
layout(location=6) in uint inCorner2;
layout(location=7) in uint inCorner3;
smooth out vec2 texelCoords;
flat out vec2 elementPosition;
flat out vec2 elementSize;
flat out uint corner0;
flat out uint corner1;
flat out uint corner2;
flat out uint corner3;
void main()
{
gl_Position = vec4(inVertexPosition.x,
-inVertexPosition.y,
inVertexPosition.z, 1.0);
texelCoords = vec2(inTexelCoords.x,1-inTexelCoords.y);
elementPosition.x = (inElementPosition.x + 1.0) / 2.0;
elementPosition.y = -((inElementPosition.y + 1.0) / 2.0);
elementSize.x = (inElementSize.x) / 2.0;
elementSize.y = -((inElementSize.y) / 2.0);
corner0 = inCorner0;
corner1 = inCorner1;
corner2 = inCorner2;
corner3 = inCorner3;
}
The element position is provided in the range of [-1,1], the corner variables are all either 0 or 1. These are provided on an instance basis, whereas the vertex position and texelcoords are provided per-vertex. The vertex y-coord is inverted because I work in reverse and just flip it here for ease. ElementSize is on the scale of [0,2], so I'm just converting it to [0,1] range.
The UV coords could be any values, not necessarily [0,1].
Here's the frag shader
#version 330
precision highp float;
layout(location=0) out vec4 frag_colour;
smooth in vec2 texelCoords;
flat in vec2 elementPosition;
flat in vec2 elementSize;
flat in uint corner0;
flat in uint corner1;
flat in uint corner2;
flat in uint corner3;
uniform sampler2D uTexture;
const vec2 uScreenDimensions = vec2(600,600);
void main()
{
vec2 uv = texelCoords;
vec4 c = texture(uTexture,uv);
frag_colour = c;
vec2 fragPos = gl_FragCoord.xy / uScreenDimensions;
// What can I do using the fragPos, elementPos??
}
Basically, I'm not sure what I can do using the fragPos and elementPosition to fade pixels toward a corner if that corner is 0 instead of 1. I kind of understand that it should be based on the distance of the frag from the corner position... but I can't work it out. I added elementSize because I think it's needed to determine how far from the corner the given frag is...
To achieve a fading effect, you have to use Blending. You have to set the alpha channel of the fragment color dependent on a scale factor:
frag_colour = vec4(c.rgb, c.a * scale);
scale has to be computed dependent on the texture coordinates (uv). If a coordinate is in the range [0.0, 0.25] or [0.75, 1.0], then the texture has to be faded dependent on the corresponding cornerX variable. In the following, the variable uv is assumed to be a 2-dimensional vector in the range [0, 1].
Compute linear gradients for the left, right, bottom and top sides, dependent on uv:
float gradL = min(1.0, uv.x * 4.0);
float gradR = min(1.0, (1.0 - uv.x) * 4.0);
float gradT = min(1.0, uv.y * 4.0);
float gradB = min(1.0, (1.0 - uv.y) * 4.0);
Or compute Hermite gradients by using smoothstep:
float gradL = smoothstep(0.0, 0.25, uv.x);
float gradR = 1.0 - smoothstep(0.75, 1.0, uv.x);
float gradT = smoothstep(0.0, 0.25, uv.y);
float gradB = 1.0 - smoothstep(0.75, 1.0, uv.y);
Compute the fade factor for the 4 corners and the 4 sides dependent on gradL, gradR, gradT, gradB and the corresponding cornerX variable. Finally compute the maximum fade factor:
float fade0 = float(corner0) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradL, gradT)));
float fade1 = float(corner1) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradL, gradB)));
float fade2 = float(corner2) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradR, gradB)));
float fade3 = float(corner3) * max(0.0, 1.0 - dot(vec2(0.707), vec2(gradR, gradT)));
float fadeL = float(corner0) * float(corner1) * (1.0 - gradL);
float fadeB = float(corner1) * float(corner2) * (1.0 - gradB);
float fadeR = float(corner2) * float(corner3) * (1.0 - gradR);
float fadeT = float(corner3) * float(corner0) * (1.0 - gradT);
float fade = max(
max(max(fade0, fade1), max(fade2, fade3)),
max(max(fadeL, fadeR), max(fadeB, fadeT)));
At the end compute the scale and set the fragment color:
float scale = 1.0 - fade;
frag_colour = vec4(c.rgb, c.a * scale);
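As a rough sanity check of the formulas above, here is a plain-Python transcription (a sketch only, not shader code; the 0.707 weight, which is approximately 1/sqrt(2), is taken as-is from the answer):

```python
def fade(u, v, corners):
    """corners = (c0, c1, c2, c3), each 0 or 1 (1 = fade that corner)."""
    c0, c1, c2, c3 = corners
    gradL = min(1.0, u * 4.0)
    gradR = min(1.0, (1.0 - u) * 4.0)
    gradT = min(1.0, v * 4.0)
    gradB = min(1.0, (1.0 - v) * 4.0)
    w = 0.707  # ~ 1/sqrt(2); dot(vec2(w), vec2(a, b)) == w*(a + b)
    fade0 = c0 * max(0.0, 1.0 - w * (gradL + gradT))
    fade1 = c1 * max(0.0, 1.0 - w * (gradL + gradB))
    fade2 = c2 * max(0.0, 1.0 - w * (gradR + gradB))
    fade3 = c3 * max(0.0, 1.0 - w * (gradR + gradT))
    fadeL = c0 * c1 * (1.0 - gradL)
    fadeB = c1 * c2 * (1.0 - gradB)
    fadeR = c2 * c3 * (1.0 - gradR)
    fadeT = c3 * c0 * (1.0 - gradT)
    return max(fade0, fade1, fade2, fade3, fadeL, fadeB, fadeR, fadeT)

# Fading only corner 0: fully transparent at that corner, untouched at the centre.
print(fade(0.0, 0.0, (1, 0, 0, 0)))  # 1.0 -> scale 0, invisible
print(fade(0.5, 0.5, (1, 0, 0, 0)))  # 0.0 -> scale 1, fully opaque
```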

Dome Image Projection

I'm trying to create a GLSL fragment shader which projects an image onto a dome. The input would be a sampler2D texture, an elevation and an azimuth.
The result should look like the following GIFs.
Elevation between 0 and 90 degrees (in this GIF it is between -90 and 90)
Azimuth between 0 and 360 degrees
Right now my code looks like this:
#ifdef GL_ES
precision mediump float;
#endif
uniform float u_time;
uniform vec2 u_resolution;
uniform sampler2D u_texture_0;
uniform sampler2D u_texture_1;
// INPUT
const float azimuth=0.;// clockwise 360 degree
const float altitude=90.;// 0-90 degree -> 90 = center
const float scale=1.;
// CALC
const float PI=3.14159265359;
const float azimuthRad=azimuth*PI/180.;
const float altitudeNormalization=sin((1.-(altitude/90.)));
float box(in vec2 _st,in vec2 _size){
_size=vec2(.5)-_size*.5;
vec2 uv=smoothstep(_size,_size+vec2(.001),_st);
uv*=smoothstep(_size,_size+vec2(.001),vec2(1.)-_st);
return uv.x*uv.y;
}
mat2 rotate(float angle){
return mat2(cos(angle),-sin(angle),sin(angle),cos(angle));
}
void main(){
vec2 st=gl_FragCoord.xy/u_resolution;
vec4 color = texture2D(u_texture_1,st); // set background grid
vec2 vPos=st;
float aperture=180.;
float apertureHalf=.5*aperture*(PI/180.);
float maxFactor=sin(apertureHalf);
// to unit sphere -> -1 - 1
vPos=vec2(2.*vPos-1.);
float l=length(vPos);
if(l<=1.){
float x=maxFactor*vPos.x;
float y=maxFactor*vPos.y;
float n=length(vec2(x,y));
float z=sqrt(1.-n*n);
float r=atan(n,z)/PI;
float phi=atan(y,x);
float u=r*cos(phi)+.5;
float v=r*sin(phi)+.5;
vec2 uv=vec2(u,v);
// translate
vec2 translate=vec2(sin(azimuthRad),cos(azimuthRad));
uv+=translate*altitudeNormalization;
// rotate
uv-=.5;
uv=rotate(PI-azimuthRad)*uv;
uv+=.5;
// scale
float size=.5*scale;
float box=box(uv,vec2(.5*size));
uv.x*=-1.;
uv.y*=-1.;
if(box>=.1){
vec3 b=vec3(box);
// gl_FragColor=vec4(b,1.);
//uv *= box;
color += texture2D(u_texture_0,uv);
}
gl_FragColor= color;
}
}
As you can see, there are two things wrong: the texture is only displayed partially (I know that I kind of cut it out, which is surely wrong), and the distortion is also wrong. Any help would be appreciated.
The issue is that you use scaled uv coordinates for the box test:
float size=.5*scale;
float box=box(uv,vec2(.5*size));
You have to take this scale into account when you do the texture lookup. Furthermore, you wrongly add 0.5 to the uv coordinates:
float u=r*cos(phi)+.5;
float v=r*sin(phi)+.5;
Set up the uv coordinates in range [-1.0, 1.0]:
vec2 uv = vec2(r*cos(phi), r*sin(phi));
Translate, rotate and scale it (e.g. const float scale = 8.0;):
// translate
vec2 translate = vec2(sin(azimuthRad), cos(azimuthRad));
uv += translate * altitudeNormalization;
// rotate
uv = rotate(PI-azimuthRad)*uv;
// scale
uv = uv * scale;
Transform the uv coordinate from range [-1.0, 1.0] to [0.0, 1.0] and do a correct box test:
uv = uv * 0.5 + 0.5;
vec2 boxtest = step(0.0, uv) * step(uv, vec2(1.0));
if (boxtest.x * boxtest.y > 0.0)
color += texture2D(u_texture_0, uv);
Fragment shader main:
void main(){
vec2 st = gl_FragCoord.xy/u_resolution;
vec4 color = texture2D(u_texture_1,st); // set background grid
float aperture=180.;
float apertureHalf=.5*aperture*(PI/180.);
float maxFactor=sin(apertureHalf);
// to unit sphere -> -1 - 1
vec2 vPos = st * 2.0 - 1.0;
float l=length(vPos);
if(l<=1.){
float x = maxFactor*vPos.x;
float y = maxFactor*vPos.y;
float n = length(vec2(x,y));
float z = sqrt(1.-n*n);
float r = atan(n,z)/PI;
float phi = atan(y,x);
float u = r*cos(phi);
float v = r*sin(phi);
vec2 uv = vec2(r*cos(phi), r*sin(phi));
// translate
vec2 translate = vec2(sin(azimuthRad), cos(azimuthRad));
uv += translate * altitudeNormalization;
// rotate
uv = rotate(PI-azimuthRad)*uv;
// scale
uv = uv * scale;
uv = uv * 0.5 + 0.5;
vec2 boxtest = step(0.0, uv) * step(uv, vec2(1.0));
if (boxtest.x * boxtest.y > 0.0)
color += texture2D(u_texture_0, uv);
}
gl_FragColor = color;
}
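The branchless box test above relies on step(): step(edge, x) returns 1.0 where x >= edge and 0.0 otherwise, so the product of the four step terms is 1.0 exactly when both uv components lie in [0, 1]. A quick Python sketch of the same idea (illustrative only):

```python
def step(edge, x):
    """GLSL step(): 1.0 where x >= edge, else 0.0."""
    return 1.0 if x >= edge else 0.0

def inside_unit_box(u, v):
    """Equivalent of step(0.0, uv) * step(uv, vec2(1.0)), multiplied together."""
    boxtest_x = step(0.0, u) * step(u, 1.0)  # 1.0 iff 0 <= u <= 1
    boxtest_y = step(0.0, v) * step(v, 1.0)  # 1.0 iff 0 <= v <= 1
    return boxtest_x * boxtest_y > 0.0

print(inside_unit_box(0.5, 0.25))  # True: inside the texture
print(inside_unit_box(1.2, 0.5))   # False: outside, so no texture lookup
```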

The homemade chroma key filter I developed in OBS Studio can't use the usual green or blue background, but it works with red

I am referring to the OBS Studio 20.1.0 documentation and chroma_key_filter.effect on github. I have run into a problem with my recent homemade OBS Studio filter and can't think of how to solve it, so I want to ask everyone here; I hope I can get some suggestions or answers.
The first problem is that when a PowerPoint presentation is captured with the background set to red (Figure 1), I can get the edges of the text with my own edge detection kernel, but the usual green (or blue) background will not show the effect (Figure 2). Why? (The OBS Studio resource uses a white background for the convenience of seeing the effect.)
Figure 1.
Figure 2.
The second problem is that when the filter is changed to another kernel, the parameterized horizontal sliders no longer have any effect. Is it necessary to adjust something else to make the fullMask work (Figure 3)?
Figure 3.
The code is as follows (it is written in the OBS effect format, which uses HLSL/GLSL-style syntax):
uniform float4x4 ViewProj;
uniform texture2d image;
uniform float4x4 yuv_mat = { 0.182586, 0.614231, 0.062007, 0.062745,
-0.100644, -0.338572, 0.439216, 0.501961,
0.439216, -0.398942, -0.040274, 0.501961,
0.000000, 0.000000, 0.000000, 1.000000};
uniform float4 color;
uniform float contrast;
uniform float brightness;
uniform float gamma;
uniform float2 chroma_key;
uniform float4 key_rgb;
uniform float2 pixel_size;
uniform float similarity;
uniform float smoothness;
uniform float spill;
// 3x3 kernel for convolution edge detection.
uniform float3 kernel1 = { -1.0, -1.0, -1.0};
uniform float3 kernel2 = {-1.0, 8.0, -1.0};
uniform float3 kernel3 = {-1.0, -1.0, -1.0 };
sampler_state textureSampler {
Filter = Linear;
AddressU = Clamp;
AddressV = Clamp;
};
struct VertData {
float4 pos : POSITION;
float2 uv : TEXCOORD0; //Texture coordinates.
};
VertData VSDefault(VertData v_in)
{
VertData vert_out;
vert_out.pos = mul(float4(v_in.pos.xyz, 1.0), ViewProj);
vert_out.uv = v_in.uv;
return vert_out;
}
float4 CalcColor(float4 rgba)
{
return float4(pow(rgba.rgb, float3(gamma, gamma, gamma)) * contrast + brightness, rgba.a);
}
float GetChromaDist(float3 rgb)
{
float4 yuvx = mul(float4(rgb.rgb, 1.0), yuv_mat); //rgb to yuv
return distance(chroma_key, yuvx.yz); // Take the distance scalar value of two vectors.
}
float4 SampleTexture(float2 uv)
{
return image.Sample(textureSampler, uv);
}
// 3x3 filter of Edge detection
float GetEdgeDetectionFilteredChromaDist(float3 rgb, float2 texCoord)
{
float distVal = SampleTexture(-texCoord-pixel_size).rgb * kernel1[0]; // Top left
distVal += SampleTexture(texCoord-pixel_size).rgb * kernel1[1]; // Top center
distVal += SampleTexture(texCoord-float2(pixel_size.x, 0.0)).rgb * kernel1[2]; // Top right
distVal += SampleTexture(texCoord-float2(pixel_size.x, -pixel_size.y)).rgb * kernel2[0]; // Middle left
distVal += SampleTexture(texCoord-float2(0.0, pixel_size.y)).rgb * kernel2[1]; // Current pixel
distVal += SampleTexture(texCoord+float2(0.0, pixel_size.y)).rgb * kernel2[2]; // Middle right
distVal += SampleTexture(texCoord+float2(pixel_size.x, -pixel_size.y)).rgb * kernel3[0]; // Bottom left
distVal += SampleTexture(texCoord+float2(pixel_size.x, 0.0)).rgb * kernel3[1]; // Bottom center
distVal += SampleTexture(texCoord+pixel_size).rgb * kernel3[2]; // Bottom right
return distVal;
}
float4 ProcessChromaKey(float4 rgba, VertData v_in)
{
float chromaDist = GetEdgeDetectionFilteredChromaDist(rgba.rgb, v_in.uv);//Edge detection filter function.
float baseMask = chromaDist - similarity;
float fullMask = pow(saturate(baseMask / smoothness), 1.5);
float spillVal = pow(saturate(baseMask / spill), 1.5);
rgba.rgba *= color;
rgba.a *= fullMask;
float desat = (rgba.r * 0.2126 + rgba.g * 0.7152 + rgba.b * 0.0722);
rgba.rgb = saturate(float3(desat, desat, desat)) * (1.0 - spillVal) + rgba.rgb * spillVal;
return CalcColor(rgba);
}
float4 PSChromaKeyRGBA(VertData v_in) : TARGET
{
float4 rgba = image.Sample(textureSampler, v_in.uv);
return ProcessChromaKey(rgba, v_in);
}
technique Draw
{
pass
{
vertex_shader = VSDefault(v_in);
pixel_shader = PSChromaKeyRGBA(v_in);
}
}
Thank you!

OpenGL height based fog

I am reading Inigo Quilez's fog article, and I just can't understand a few things when he talks about fog based on height.
He has a shader function about height based fog but I have problems understanding how to make it work.
He uses this function to apply fog
vec3 applyFog( in vec3 rgb, // original color of the pixel
in float distance ) // camera to point distance
{
float fogAmount = 1.0 - exp( -distance*b );
vec3 fogColor = vec3(0.5,0.6,0.7);
return mix( rgb, fogColor, fogAmount );
}
Then he has another one to calculate fog based on height:
vec3 applyFog( in vec3 rgb, // original color of the pixel
in float distance, // camera to point distance
in vec3 rayOri, // camera position
in vec3 rayDir ) // camera to point vector
{
float fogAmount = c * exp(-rayOri.y*b) * (1.0-exp( -distance*rayDir.y*b ))/rayDir.y;
vec3 fogColor = vec3(0.5,0.6,0.7);
return mix( rgb, fogColor, fogAmount );
}
I can understand how the shader works, but I don't know how to use it with mine. For now I am just trying to learn how the whole fog world in GLSL works, but it looks like there is a lot to learn. :D
#version 400 core
in vec3 Position;
in vec3 Normal;
//in vec4 positionToCamera;
//in float visibility;
uniform vec3 color;
uniform vec3 CameraPosition;
uniform float near;
uniform float far;
uniform vec3 fogColor;
uniform bool enableBlending;
uniform float c;
uniform float b;
uniform int fogType;
vec3 applyFogDepth( vec3 rgb, // original color of the pixel
float distance, // camera to point distance
vec3 rayOri, // camera position
vec3 rayDir) // camera to point vector
{
//float cc = 1.0;
//float bb = 1.1;
float fogAmount = c * exp(-rayOri.y*b) * (1.0 - exp(-distance*rayDir.y*b)) / rayDir.y;
return mix(rgb, fogColor, fogAmount );
}
// Fog with Sun factor
vec3 applyFogSun( vec3 rgb,// original color of the pixel
float distance, // camera to point distance
vec3 rayDir, // camera to point vector
vec3 sunDir) // sun light direction
{
float fogAmount = 1.0 - exp(-distance*b);
float sunAmount = max(dot(rayDir, sunDir), 0.0);
vec3 fog = mix(fogColor, // bluish
vec3(1.0, 0.9, 0.7), // yellowish
pow(sunAmount, 8.0));
return mix(rgb, fog, fogAmount);
}
//Exponential fog
vec3 applyFog( vec3 rgb, // original color of the pixel
float distance) // camera to point distance
{
float fogAmount = 1.0 - exp(-distance*b);
return mix(rgb, fogColor, fogAmount);
//return rgb*( exp(-distance*b)) + fogColor*(1.0 - exp(-distance*b));
}
float LinearizeDepth(float depth)
{
float z = depth * 2.0 - 1.0; // Back to NDC
return (2.0 * near * far) / (far + near - z * (far - near));
}
out vec4 gl_FragColor;
void main(void) {
vec3 fog = vec3(0.0);
//-5.0f, 900.0f, 400.0f camera coord
vec3 lightPosition = vec3(0.0, 1200.0, -6000.0);
vec3 lightDirection = normalize(lightPosition - Position);
vec3 direction = normalize(CameraPosition - Position);
float depth = LinearizeDepth(gl_FragCoord.z) / far;
switch (fogType) {
case 0:
fog = applyFog(color, depth);
break;
case 1:
fog = applyFogSun(color, depth, direction, lightDirection);
break;
case 2:
//fog = mix(applyFog(color, depth), applyFogDepth(color, depth, CameraPosition, CameraPosition - Position), 0.5) ;
fog = applyFogDepth(color, depth, CameraPosition, CameraPosition - Position);
break;
}
//calculate light
float diff = max(dot(Normal, lightDirection), 0.0);
vec3 diffuse = diff * color;
float fogAmount = 1.0 - exp(-depth*b);
vec3 finalColor = vec3(0.0);
if (enableBlending)
finalColor = mix(diffuse, fog, fogAmount);
else
finalColor = fog;
gl_FragColor = vec4(finalColor,1.0);
//gl_FragColor = vec4(vec3(LinearizeDepth(visibility) / far), 1.0f);
}
The first image shows the result of the first fog function, and the second image shows the result of the second one.
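To get a feel for the height-based formula in applyFogDepth above, here is a plain-Python transcription (a sketch only; the values of b and c below are arbitrary test parameters, not ones from the article). Note that the expression has a well-defined limit as rayDir.y approaches 0, which a robust implementation should special-case to avoid dividing by zero:

```python
import math

def height_fog_amount(distance, ray_origin_y, ray_dir_y, b=0.02, c=1.0):
    """fogAmount = c * exp(-rayOri.y*b) * (1 - exp(-distance*rayDir.y*b)) / rayDir.y"""
    if abs(ray_dir_y) < 1e-6:
        # Limit of the expression as rayDir.y -> 0 (horizontal rays)
        return c * math.exp(-ray_origin_y * b) * distance * b
    return (c * math.exp(-ray_origin_y * b)
            * (1.0 - math.exp(-distance * ray_dir_y * b)) / ray_dir_y)

# From the same camera height and over the same distance, a ray pointed
# downward (into denser fog) accumulates more fog than one pointed upward:
down = height_fog_amount(100.0, 10.0, -0.5)
up = height_fog_amount(100.0, 10.0, 0.5)
print(down > up)  # True
```

In a real shader the result would still be fed to mix(), so clamping fogAmount to [0, 1] first is a sensible precaution.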

Hue shift shader breaks alpha

I am using cocos2dx. I have a Sprite which is set with a custom shader like this:
boss_1 = Sprite::createWithSpriteFrameName("Zombies/normal/0_0_0.png");
boss_1->setPosition(boss_1->getContentSize()/2.0f);
boss_1->setBlendFunc(cocos2d::BlendFunc::ALPHA_NON_PREMULTIPLIED);
boss_1->setGLProgramState(boss_1_state);
I have the following shader:
vec3 hueAdjust(vec3 color, float hueAdjust)
{
const vec3 kRGBToYPrime = vec3 (0.299, 0.587, 0.114);
const vec3 kRGBToI = vec3 (0.596, -0.275, -0.321);
const vec3 kRGBToQ = vec3 (0.212, -0.523, 0.311);
const vec3 kYIQToR = vec3 (1.0, 0.956, 0.621);
const vec3 kYIQToG = vec3 (1.0, -0.272, -0.647);
const vec3 kYIQToB = vec3 (1.0, -1.107, 1.704);
// Convert to YIQ
float YPrime = dot (color, kRGBToYPrime);
float I = dot (color, kRGBToI);
float Q = dot (color, kRGBToQ);
// Calculate the hue and chroma
float hue = atan (Q, I);
float chroma = sqrt (I * I + Q * Q);
// Make the user's adjustments
hue += hueAdjust;
// Convert back to YIQ
Q = chroma * sin (hue);
I = chroma * cos (hue);
// Convert back to RGB
vec3 yIQ = vec3 (YPrime, I, Q);
color.r = dot (yIQ, kYIQToR);
color.g = dot (yIQ, kYIQToG);
color.b = dot (yIQ, kYIQToB);
// Save the result
return color;
}
void main()
{
vec4 v_orColor = v_fragmentColor * texture2D(CC_Texture0, v_texCoord);
// Hue
vec3 hueAdjustedColor = hueAdjust(v_orColor.rgb, hue_value);
gl_FragColor = vec4(hueAdjustedColor, v_orColor.a);
}
But the alpha seems to get lost, and the sprite is rendered with a black background. (The hue shift itself works perfectly, since I can test it with a slider.)
This only happens with the hueAdjust function. If I use this other function to change contrast/saturation/brightness, the alpha is preserved perfectly:
vec3 ContrastSaturationBrightness(vec3 color, float brt, float sat, float con)
{
// Increase or decrease these values to adjust r, g and b color channels seperately
const float AvgLumR = 0.5;
const float AvgLumG = 0.5;
const float AvgLumB = 0.5;
const vec3 LumCoeff = vec3(0.2125, 0.7154, 0.0721);
vec3 AvgLumin = vec3(AvgLumR, AvgLumG, AvgLumB);
vec3 brtColor = color * brt;
vec3 intensity = vec3(dot(brtColor, LumCoeff));
vec3 satColor = mix(intensity, brtColor, sat);
vec3 conColor = mix(AvgLumin, satColor, con);
return conColor;
}
It seems like the algorithm in hueAdjust outputs negative colors or NaNs...
I got it working by replacing this line:
vec3 hueAdjustedColor = hueAdjust(v_orColor.rgb, hue_value);
with this one:
vec3 hueAdjustedColor = max(hueAdjust(v_orColor.rgb, hue_value), 0.0);
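The clamp matters because the YIQ round trip can leave the RGB gamut for saturated inputs. A quick Python transcription of the same math (using the shader's coefficients) shows, for example, that rotating pure red by half a turn produces a negative red channel:

```python
import math

def hue_shift(rgb, angle):
    """Same YIQ-based hue rotation as the shader, with its coefficients."""
    r, g, b = rgb
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.275 * g - 0.321 * b
    q = 0.212 * r - 0.523 * g + 0.311 * b
    hue = math.atan2(q, i) + angle
    chroma = math.hypot(i, q)
    i, q = chroma * math.cos(hue), chroma * math.sin(hue)
    return (y + 0.956 * i + 0.621 * q,
            y - 0.272 * i - 0.647 * q,
            y - 1.107 * i + 1.704 * q)

shifted = hue_shift((1.0, 0.0, 0.0), math.pi)  # pure red, 180 degree shift
print(shifted)  # the red channel comes out negative, hence the max(..., 0.0)
```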
BTW, the algorithm seems overcomplicated. Why convert colors like this: RGB -> YIQ -> hue/chroma -> YIQ -> RGB? You can convert RGB directly to HSV and back without the intermediate YIQ stage. Here is a fast branchless algorithm for this: http://lolengine.net/blog/2013/07/27/rgb-to-hsv-in-glsl
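For illustration, a direct RGB -> HSV -> RGB hue shift (sketched here with Python's standard-library colorsys module rather than the branchless GLSL from the link) never leaves the gamut, because value and saturation are held fixed:

```python
import colorsys

def hue_shift_hsv(rgb, shift):
    """Shift hue by `shift` (fraction of a full turn) via HSV; stays in gamut."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + shift) % 1.0, s, v)

print(hue_shift_hsv((1.0, 0.0, 0.0), 0.5))  # red -> cyan, no negative channels
```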
Hope, it helps :)