Alpha Blending With Two Rendering Targets (DirectX or OpenGL)

I am trying to do the following:
Create a transparent texture called T
Render a textured quad called Q to T
Render T to the screen
Note that the alpha component of T will be zero and the alpha component of Q could be less than one.
I need two blending equations so that rendering multiple instances of Q to T (using the 1st blending equation) and then rendering T to the screen (using the 2nd blending equation) gives the same result as rendering those instances of Q directly to the screen.
I use this blending equation
color = src * srcAlpha + dst * (1 - srcAlpha)
alpha = 1 * srcAlpha + 0 * destAlpha
for the case when I render Qs directly to the screen, but cannot define two blending equations that achieve the same thing when I render to T first.
Note that T's pixels are initially fully transparent (alpha = 0), as I don't want T to overwrite the screen where it hasn't been drawn to. Q may have any transparency level in each of its pixels.

What you are asking is mathematically impossible using linear interpolation.
Let's say you have two transparent objects, A and B. You have your texture T, and the screen S. And Blend is the blend equation.
What you are doing with rendering to T is:
T = Blend(B, Blend(A, T))
S = Blend(T, S)
//Therefore
S = Blend(Blend(B, Blend(A, T)), S)
See how there are three separate blending operations here? You cannot do that with two blending operations, which is all you would have if you rendered A and B directly.
Linear interpolation is neither commutative nor associative. The order of blending operations matters. And the effect you achieve by writing to a temporary intermediate and blending that to the final buffer cannot be achieved without that intermediary.
There are blending operations that are associative and commutative. For example, additive. But that's almost certainly not going to achieve the visual effect you want, since it adds colors together. That may be something you could use, if you're doing HDR lighting. But there are many cases where it would be inappropriate.
Additive blending would involve a blend equation of the form:
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
With the source alpha determining what percentage of the incoming color to add.

I think this works when rendering Q into T:
color = src * srcAlpha + dst * (1 - srcAlpha)
alpha = (1 - destAlpha) * srcAlpha + 1 * destAlpha
Edit: Thinking about it, my answer won't result in the screen's alpha channel matching the one you describe when rendering directly to the screen. Your alpha blend mode when rendering directly to the screen is very unusual. I'm not sure whether you really want the screen's alpha to represent the alpha of the last texture you rendered, or whether you don't actually care what the screen's alpha channel looks like. In most rendering situations people don't care about the state of the back buffer's alpha channel; hopefully that's the case here and my answer works for you.
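In OpenGL terms, that pair of equations could be expressed with separate color/alpha blend factors. A minimal sketch, assuming the rest of the blend state is at its defaults:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
// color = src * srcAlpha + dst * (1 - srcAlpha)
// alpha = srcAlpha * (1 - dstAlpha) + dstAlpha * 1
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,   // color factors
                    GL_ONE_MINUS_DST_ALPHA, GL_ONE);        // alpha factors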

I have figured this out, and given it's not something I've seen elsewhere on the internet, I hope this will help someone else...
For the first blend equation, for quads Q rendering to T:
Q -> T
color = src * srcAlpha + dst * (1 - srcAlpha)
alpha = 1 * srcAlpha + (1 - srcAlpha) * destAlpha
For the second blend equation, T going to the screen:
T -> S
color = src * 1 + dst * (1 - srcAlpha)
alpha = doesn't matter as screen alpha isn't used
Explanation
For Q -> T, the equation for the color components is as per the normal alpha blending equation:
color = src * srcAlpha + dst * (1 - srcAlpha)
however, we need to make sure the alpha channel is set up for the second pass. Because this blend equation effectively pre-multiplies Q's color by its alpha as Q is rendered into T, the alpha stored in T only needs to tell the second pass how much to dim the screen's color behind T, hence the alpha in the first equation is:
alpha = 1 * srcAlpha + (1 - srcAlpha) * destAlpha
but we only use it to fade the destination color behind the color of T in the second equation (we use 1 for src color as that was pre-multiplied by equation 1):
color = src * 1 + dst * (1 - srcAlpha)
In DirectX 11 this gives the same results whether equation 1 is used to render directly to the screen or both equations are used to render to T then the screen.
Code for DirectX 11 (for people who prefer real code):
1st equation:
D3D11_BLEND_DESC blend{};
blend.AlphaToCoverageEnable = FALSE;
auto& target = blend.RenderTarget[0];
target.BlendEnable = TRUE;
target.RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
target.SrcBlend = D3D11_BLEND_SRC_ALPHA;
target.DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
target.BlendOp = D3D11_BLEND_OP_ADD;
target.SrcBlendAlpha = D3D11_BLEND_ONE;
target.DestBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA;
target.BlendOpAlpha = D3D11_BLEND_OP_ADD;
2nd equation:
D3D11_BLEND_DESC blend{};
blend.AlphaToCoverageEnable = FALSE;
auto& target = blend.RenderTarget[0];
target.BlendEnable = TRUE;
target.RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
target.SrcBlend = D3D11_BLEND_ONE;
target.DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
target.BlendOp = D3D11_BLEND_OP_ADD;
target.SrcBlendAlpha = D3D11_BLEND_ZERO;
target.DestBlendAlpha = D3D11_BLEND_ONE;
target.BlendOpAlpha = D3D11_BLEND_OP_ADD;
T is initially cleared to {0, 0, 0, 0}.
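For anyone doing the same thing in OpenGL rather than D3D11, the two blend states above should map onto glBlendFuncSeparate as in the sketch below (the rest of the GL blend state is assumed to be at its defaults):
// 1st equation: Q -> T (T cleared to {0, 0, 0, 0} beforehand)
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,   // color
                    GL_ONE,       GL_ONE_MINUS_SRC_ALPHA);  // alpha

// 2nd equation: T -> screen (T's color is effectively pre-multiplied)
glBlendFuncSeparate(GL_ONE,  GL_ONE_MINUS_SRC_ALPHA,        // color
                    GL_ZERO, GL_ONE);                       // alpha (unused)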

Related

GLSL shader to boost the color

Is there any GLSL shader that can help me to "boost" the color of a texture? How would I write such a shader? I would like to do something like the picture below:
It seems they are mostly boosting saturation here, so that's what you'd need to do in your fragment shader.
To boost saturation, you need to take your data from the RGB color space into the HSL (hue, saturation, lightness) space.
You then need to do:
hslColor = vec3(hslColor.x, hslColor.y * boost, hslColor.z);
Of course, to do that you actually need to get into the right color space first. This isn't really an OpenGL issue; I found this website that does the conversion:
https://www.rapidtables.com/convert/color/rgb-to-hsl.html
They also give formulas for the conversion:
R' = R
G' = G
B' = B
Cmax = max(R', G', B')
Cmin = min(R', G', B')
Δ = Cmax - Cmin
L = (Cmax + Cmin) / 2
That way you can get HSL from RGB. Once you've done your saturation boost, you need to take your colors back to the RGB color space.
The good news is that the same website offers the reverse conversion as well:
https://www.rapidtables.com/convert/color/hsl-to-rgb.html
When 0 ≤ H < 360, 0 ≤ S ≤ 1 and 0 ≤ L ≤ 1:
C = (1 - |2L - 1|) × S
X = C × (1 - |(H / 60°) mod 2 - 1|)
m = L - C/2
(R,G,B) = ((R'+m), (G'+m),(B'+m))
I removed the 255 from their formulas because usually you work on [0,1] in GLSL.
With this you should be able to do all sorts of transformations on the colors, to manipulate them in a way that is similar to the simple tools in Photoshop.
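For reference, here is that round trip with a saturation boost written out in plain C++ rather than GLSL; a minimal sketch, and the struct and function names are my own:
#include <algorithm>
#include <cmath>

struct Rgb { float r, g, b; };   // all channels in [0, 1]
struct Hsl { float h, s, l; };   // h in degrees [0, 360), s and l in [0, 1]

Hsl rgbToHsl(Rgb c)
{
    float cmax = std::max({c.r, c.g, c.b});
    float cmin = std::min({c.r, c.g, c.b});
    float delta = cmax - cmin;
    float l = (cmax + cmin) * 0.5f;
    float s = (delta == 0.0f) ? 0.0f : delta / (1.0f - std::fabs(2.0f * l - 1.0f));
    float h = 0.0f;
    if (delta != 0.0f) {
        if (cmax == c.r)      h = 60.0f * std::fmod((c.g - c.b) / delta, 6.0f);
        else if (cmax == c.g) h = 60.0f * ((c.b - c.r) / delta + 2.0f);
        else                  h = 60.0f * ((c.r - c.g) / delta + 4.0f);
        if (h < 0.0f) h += 360.0f;
    }
    return {h, s, l};
}

Rgb hslToRgb(Hsl c)
{
    float C = (1.0f - std::fabs(2.0f * c.l - 1.0f)) * c.s;
    float X = C * (1.0f - std::fabs(std::fmod(c.h / 60.0f, 2.0f) - 1.0f));
    float m = c.l - C * 0.5f;
    float r = 0, g = 0, b = 0;
    if      (c.h <  60.0f) { r = C; g = X; }
    else if (c.h < 120.0f) { r = X; g = C; }
    else if (c.h < 180.0f) { g = C; b = X; }
    else if (c.h < 240.0f) { g = X; b = C; }
    else if (c.h < 300.0f) { r = X; b = C; }
    else                   { r = C; b = X; }
    return {r + m, g + m, b + m};
}

Rgb boostSaturation(Rgb in, float boost)
{
    Hsl hsl = rgbToHsl(in);
    hsl.s = std::min(hsl.s * boost, 1.0f);   // clamp so we stay in a valid range
    return hslToRgb(hsl);
}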
EDIT: It should be noted that I mostly talked about boosting saturation, but they might be doing something significantly more complex in your example. Working in HSL is still a good approach if you are manipulating colors to edit photographs.

DirectX Converting Pixel World Position to Shadow Map Position Gives Weird, Tiled Results

I've been trying for some time now to get a screen-space pixel (provided by a deferred HLSL shader) to convert to light space. The results have been surprising to me as my light rendering seems to be tiling the depth buffer.
Importantly, the scene camera (or eye) and the light being rendered from start in the same position.
First, I extract the world position of the pixel using the code below:
float3 eye = Eye;
float4 position = {
IN.texCoord.x * 2 - 1,
(1 - IN.texCoord.y) * 2 - 1,
zbuffer.r,
1
};
float4 hposition = mul(position, EyeViewProjectionInverse);
position = float4(hposition.xyz / hposition.w, hposition.w);
float3 eyeDirection = normalize(eye - position.xyz);
The result seems to be correct as rendering the XYZ position as RGB respectively yields this (apparently correct) result:
The red component seems to be correctly outputting X as it moves to the right, and blue shows Z moving forward. The Y factor also looks correct as the ground is slightly below the Y axis.
Next (and to be sure I'm not going crazy), I decided to output the original depth buffer. Normally I keep the depth buffer in a Texture2D called DepthMap passed to the shader as input. In this case, however, I try to undo the pixel transformation by offsetting it back into the proper position and multiplying it by the eye's view-projection matrix:
float4 cpos = mul(position, EyeViewProjection);
cpos.xyz = cpos.xyz / cpos.w;
cpos.x = cpos.x * 0.5f + 0.5f;
cpos.y = 1 - (cpos.y * 0.5f + 0.5f);
float camera_depth = pow(DepthMap.Sample(Sampler, cpos.xy).r, 100); // Power 100 just to visualize the map since scales are really tiny
return float4(camera_depth, camera_depth, camera_depth, 1);
This yields a correct looking result as well (though I'm not 100% sure about the Z value). Also note that I've made the results exponential to better visualize the depth information (this is not done when attempting live comparisons):
So theoretically, I can use the same code to convert that pixel world position to light space by multiplying by the light's view-projection matrix. Correct? Here's what I tried:
float4 lpos = mul(position, ShadowLightViewProjection[0]);
lpos.xyz = lpos.xyz / lpos.w;
lpos.x = lpos.x * 0.5f + 0.5f;
lpos.y = 1 - (lpos.y * 0.5f + 0.5f);
float shadow_map_depth = pow(ShadowLightMap[0].Sample(Sampler, lpos.xy).r, 100); // Power 100 just to visualize the map since scales are really tiny
return float4(shadow_map_depth, shadow_map_depth, shadow_map_depth, 1);
And here's the result:
And another to show better how it's mapping to the world:
I don't understand what is going on here. It seems it might have something to do with the projection matrix, but I'm not good enough at the math to know for sure what is happening. It's definitely not the width/height of the light map, as I've tried multiple map sizes; the projection matrix is calculated using the FOV and aspect ratio, never the width/height directly.
Finally, here's some C++ code showing how my perspective matrix (used for both eye and light) is calculated:
const auto ys = std::tan((T)1.57079632679f - (fov / (T)2.0));
const auto xs = ys / aspect;
const auto& zf = view_far;
const auto& zn = view_near;
const auto zfn = zf - zn;
row1(xs, 0, 0, 0);
row2(0, ys, 0, 0);
row3(0, 0, zf / zfn, 1);
row4(0, 0, -zn * zf / zfn, 0);
return *this;
I'm completely at a loss here. Any guidance or recommendations would be greatly appreciated!
EDIT - I also forgot to mention that the tiled image is upside down as if the y flip broke it. That's strange to me as it's required to get it back to eye texture space correctly.
I did some tweaking and fixed things here and there. Ultimately, my biggest issue was an unexpectedly transposed matrix. It's a bit complicated how the matrix got transposed, but that's why things were flipped. I also changed to D32 depth buffers (though I'm not sure that helped) and made sure that any position divided by its W had all of its components affected (including W itself).
So code like this: hposition.xyz = hposition.xyz / hposition.w
became this: hposition = hposition / hposition.w
After all this tweaking, it's starting to look more like a shadow map.
Oh and the transposed matrix was the ViewProjection of the light.
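For reference, the usual shape of that transpose fix when using DirectXMath looks something like the sketch below (hypothetical names; HLSL constant buffers default to column-major packing while DirectXMath builds row-major matrices, which is a common source of exactly this kind of accidental transpose):
#include <DirectXMath.h>
using namespace DirectX;

// Hypothetical CPU-side mirror of the shader's constant buffer.
struct LightConstants
{
    XMFLOAT4X4 ShadowLightViewProjection;
};

void FillLightConstants(LightConstants& dest, FXMMATRIX lightView, CXMMATRIX lightProjection)
{
    XMMATRIX viewProj = XMMatrixMultiply(lightView, lightProjection);
    // Transpose before upload so mul(position, ShadowLightViewProjection) in HLSL
    // sees the matrix the same way the CPU-side math built it.
    XMStoreFloat4x4(&dest.ShadowLightViewProjection, XMMatrixTranspose(viewProj));
}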

How would I get this to UV map correctly?

Alright, so I have my code to draw out a big landscape using C++ and DirectX. I had it textured with one texture and then needed to add more. I saw people doing it with a single texture image that contained two textures, so that's what I made: a 256x128 image. My problem now is that since my terrain automatically generated the coordinates to UV map one texture, it now displays both textures. I need to make it so that when the height of the terrain is high enough it uses one texture, and everything under that uses the other. My code for the UV coordinates:
Vertices[y * WIDTH + x].U = x / 1.28;
Vertices[y * WIDTH + x].V = y / 1.28;
Those are my mapping coordinates; X is the current X value of the vertex being drawn and Y is its current Y position. The heightmap is 128x128, so I divided by 1.28 so that each polygon has the texture UV mapped onto it. The height is calculated as well, since I am loading a heightmap, and I'm trying to get it so that when a vertex is high enough it UV maps one half of the image, and otherwise it UV maps the other half. Someone please help!
bool topTexture = height[x][y] > threshold;
float u = x / 1.28;
float v = y / 1.28;
// Assuming the 256x128 atlas holds the two 128x128 textures side by side:
// only U needs to be halved and offset to select a tile; V spans the full height.
Vertices[y * WIDTH + x].U = (u - (int)u) / 2 + (topTexture ? 0.5 : 0);
Vertices[y * WIDTH + x].V = v - (int)v;
You may also want to blend the two textures around the threshold level; in that case you would have to do the blending in the pixel shader.

Create color variations in cpp

I have a given color and want to create variations of it in terms of hue, saturation and lightness.
I found a webpage which creates variations the way I would like (see http://coloreminder.com/). However, I do not entirely understand how these variations are created for an arbitrary color. From what I can tell by looking at the variations created on that page, it does not seem to be enough to simply change the HSL values separately to create the variations.
Hence, I wanted to ask if anybody knows an approach for creating these variations, or ideally knows where to get a piece of code I could adapt to create this kind of color variation in my own program?
I am using C++ and QT.
EDIT: Thank you for your replies! Actually, the variations on the given homepage really do just vary the HSL values separately in 10% steps. I got confused because I compared the values with the HSV values in my program's color picker.
From what I can tell by looking at the variations created on that page, it does not seem to be enough to simply change the HSL values separately to create the variations.
Really? The interface seems to be clear enough about what modifications it makes. You can select "hue", "saturation" or "luminance" and it shows 9 variations on that channel. The following MATLAB script will plot the different variations in a similar way (although in the HSV color space, not HSL).
% display n variations of HTML-style color code.
function [] = colorwheel ( hex, n )
% parse color code.
rgb = hex2rgb(hex);
% render all variations.
h = figure();
for j = 1 : 3,
% build n variations on current channel.
colors = variantsof(rgb, j, n);
% display variations.
for i = 1 : n,
% generate patch of specified color.
I = zeros(128, 128, 3);
I(:,:,1) = colors(i, 1);
I(:,:,2) = colors(i, 2);
I(:,:,3) = colors(i, 3);
% render patches side-by-side to show progression.
imshow(I, 'parent', ...
subplot(3, n, (j-1)*n+i, 'parent', h));
end
end
end
% parse HTML-style color code.
function [ rgb ] = hex2rgb ( hex )
r = double(hex2dec(hex(1:2))) / 255;
g = double(hex2dec(hex(3:4))) / 255;
b = double(hex2dec(hex(5:6))) / 255;
rgb = [r g b];
end
% generate n variants of color on j-th channel.
function [ colors ] = variantsof ( rgb, j, n )
colors = zeros(n, 3);
for i = 1 : n,
% convert to HSV.
color = rgb2hsv(rgb);
% apply variation to selected channel.
color(j) = color(j) + ((i-1) / n);
if color(j) > 1.0,
color(j) = color(j) - 1.0;
end
% convert to RGB.
colors(i,:) = hsv2rgb(color);
end
% order colors with respect to channel.
if j > 1,
colors = sortrows(colors, j);
end
end
Using the "goldenrod" sample color, as:
colorwheel('daa520', 9);
I get:
The first row is a variation on hue, the second on saturation and the third on value. The outputs don't correspond exactly to the ones on coloreminder.com, but that is explained by the difference in color space and the exact step values used for the variations.
Have you read through the documentation for QColor?
The QColor class itself provides plenty of useful functions for manipulating colors in pretty much any way you can think of, and the documentation itself explains some basic color theory as well.
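For instance, a small C++/Qt sketch (with my own helper name and step scheme, in the same spirit as the MATLAB script above) that produces separate hue, saturation and lightness variations of a base color could look like this:
#include <QColor>
#include <QVector>
#include <cmath>

// Generate n variations of 'base' along one HSL channel
// (0 = hue, 1 = saturation, 2 = lightness).
QVector<QColor> hslVariations(const QColor& base, int channel, int n)
{
    double h = base.hslHueF();          // -1 for achromatic colors
    double s = base.hslSaturationF();
    double l = base.lightnessF();
    if (h < 0.0) h = 0.0;               // treat achromatic input as hue 0

    QVector<QColor> result;
    for (int i = 0; i < n; ++i) {
        double t = double(i) / n;
        double hh = h, ss = s, ll = l;
        switch (channel) {
        case 0:  hh = std::fmod(h + t, 1.0); break;  // rotate hue around the wheel
        case 1:  ss = t; break;                      // sweep saturation from 0 to 1
        default: ll = t; break;                      // sweep lightness from 0 to 1
        }
        result.append(QColor::fromHslF(hh, ss, ll, base.alphaF()));
    }
    return result;
}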

Perspective correct texture mapping; z distance calculation might be wrong

I'm making a software rasterizer, and I've run into a bit of a snag: I can't seem to get perspective-correct texture mapping to work.
My algorithm is to first sort the coordinates to plot by y. This gives a highest, lowest and center point. I then walk across the scanlines using the deltas:
// ordering by y is put here
order[0] = &a_Triangle.p[v_order[0]];
order[1] = &a_Triangle.p[v_order[1]];
order[2] = &a_Triangle.p[v_order[2]];
float height1, height2, height3;
height1 = (float)((int)(order[2]->y + 1) - (int)(order[0]->y));
height2 = (float)((int)(order[1]->y + 1) - (int)(order[0]->y));
height3 = (float)((int)(order[2]->y + 1) - (int)(order[1]->y));
// x
float x_start, x_end;
float x[3];
float x_delta[3];
x_delta[0] = (order[2]->x - order[0]->x) / height1;
x_delta[1] = (order[1]->x - order[0]->x) / height2;
x_delta[2] = (order[2]->x - order[1]->x) / height3;
x[0] = order[0]->x;
x[1] = order[0]->x;
x[2] = order[1]->x;
And then we render from order[0]->y to order[2]->y, increasing x_start and x_end by a delta on each scanline. When rendering the top part, the deltas are x_delta[0] and x_delta[1]. When rendering the bottom part, the deltas are x_delta[0] and x_delta[2]. Then we linearly interpolate between x_start and x_end on our scanline. UV coordinates are interpolated in the same way, ordered by y, starting at the begin and end values, with deltas applied at each step.
This works fine except when I try to do perspective correct UV mapping. The basic algorithm is to take UV/z and 1/z for each vertex and interpolate between them. For each pixel, the UV coordinate becomes UV_current * z_current. However, this is the result:
The inverted part shows where the deltas are flipped. As you can see, the two triangles both seem to be converging towards different points on the horizon.
Here's what I use to calculate the Z at a point in space:
float GetZToPoint(Vec3 a_Point)
{
Vec3 projected = m_Rotation * (a_Point - m_Position);
// #define FOV_ANGLE 60.f
// static const float FOCAL_LENGTH = 1 / tanf(_RadToDeg(FOV_ANGLE) / 2);
// static const float DEPTH = HALFHEIGHT * FOCAL_LENGTH;
float zcamera = DEPTH / projected.z;
return zcamera;
}
Am I right, is it a z buffer issue?
ZBuffer has nothing to do with it.
The Z-buffer is only useful when triangles are overlapping and you want to make sure that they are drawn correctly (i.e. correctly ordered in Z). The Z-buffer will, for every pixel of the triangle, determine whether a previously placed pixel is nearer to the camera, and if so, not draw the pixel of your triangle.
Since you are drawing two triangles which don't overlap, this cannot be the issue.
I made a software rasterizer in fixed point once (for a mobile phone), but I don't have the sources on my laptop, so let me check tonight how I did it. In essence, what you've got is not bad! A thing like this could be caused by a very small error.
General tips in debugging this is to have a few test triangles (slope left-side, slope right-side, 90 degree angles, etc etc) and step through it with the debugger and see how your logic deals with the cases.
EDIT:
Pseudocode of my rasterizer (only U, V and Z are taken into account; if you also want to do Gouraud shading, you also have to do for R, G and B everything you are doing for U, V and Z):
The idea is that a triangle can be broken down in 2 parts. The top part and the bottom part. The top is from y[0] to y[1] and the bottom part is from y[1] to y[2]. For both sets you need to calculate the step variables with which you are interpolating. The below example shows you how to do the top part. If needed I can supply the bottom part too.
Please note that I do already calculate the needed interpolation offsets for the bottom part in the below 'pseudocode' fragment
first order the coords(x,y,z,u,v) in the order so that coord[0].y < coord[1].y < coord[2].y
next check if any 2 sets of coordinates are identical (only check x and y). If so don't draw
exception: does the triangle have a flat top? if so, the first slope will be infinite
exception2: does the triangle have a flat bottom (yes triangles can have these too ;^) ) then the last slope too will be infinite
calculate 2 slopes (left side and right side)
leftDeltaX = (x[1] - x[0]) / (y[1]-y[0]) and rightDeltaX = (x[2] - x[0]) / (y[2]-y[0])
the second part of the triangle is calculated depending on whether the left side of the triangle really is on the left side (or needs swapping)
code fragment:
if (leftDeltaX < rightDeltaX)
{
leftDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
rightDeltaX2 = rightDeltaX
leftDeltaU = (u[1]-u[0]) / (y[1]-y[0]) //for texture mapping
leftDeltaU2 = (u[2]-u[1]) / (y[2]-y[1])
leftDeltaV = (v[1]-v[0]) / (y[1]-y[0]) //for texture mapping
leftDeltaV2 = (v[2]-v[1]) / (y[2]-y[1])
leftDeltaZ = (z[1]-z[0]) / (y[1]-y[0]) //for texture mapping
leftDeltaZ2 = (z[2]-z[1]) / (y[2]-y[1])
}
else
{
swap(leftDeltaX, rightDeltaX);
leftDeltaX2 = leftDeltaX;
rightDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
leftDeltaU = (u[2]-u[0]) / (y[2]-y[0]) //for texture mapping
leftDeltaU2 = leftDeltaU
leftDeltaV = (v[2]-v[0]) / (y[2]-y[0]) //for texture mapping
leftDeltaV2 = leftDeltaV
leftDeltaZ = (z[2]-z[0]) / (y[2]-y[0]) //for texture mapping
leftDeltaZ2 = leftDeltaZ
}
set the currentLeftX and currentRightX both on x[0]
set currentLeftU on leftDeltaU, currentLeftV on leftDeltaV and currentLeftZ on leftDeltaZ
calc start and endpoint for first Y range: startY = ceil(y[0]); endY = ceil(y[1])
prestep x, u, v and z for the fractional part of y for subpixel accuracy (I guess this is also needed for floats; for my fixed-point algorithms this was needed to make the lines and textures give the illusion of moving in much finer steps than the resolution of the display)
calculate where x should be at y[1]: halfwayX = (x[2]-x[0]) * (y[1]-y[0]) / (y[2]-y[0]) + x[0]
and same for U and V and z: halfwayU = (u[2]-u[0]) * (y[1]-y[0]) / (y[2]-y[0]) + u[0]
and using the halfwayX calculate the stepper for the U and V and z:
if(halfwayX - x[1] == 0){ slopeU=0, slopeV=0, slopeZ=0 } else { slopeU = (halfwayU - U[1]) / (halfwayX - x[1])} //(and same for v and z)
do clipping for the Y top (so calculate where we are going to start to draw in case the top of the triangle is off screen (or off the clipping rectangle))
for (y = startY; y < endY; y++)
{
is Y past bottom of screen? stop rendering!
calc startX and endX for the first horizontal line
leftCurX = ceil(startx); leftCurY = ceil(endy);
clip the line to be drawn to the left horizontal border of the screen (or clipping region)
prepare a pointer to the destination buffer (doing it through array indexes every time is too slow):
unsigned int *buf = destbuf + (y * pitch) + startX; (unsigned int in case you are doing 24-bit or 32-bit rendering)
also prepare your ZBuffer pointer here (if you are using this)
for(x=startX; x < endX; x++)
{
now for perspective texture mapping (using no bilinear interpolation) you do the following:
code fragment:
float tv = startV / startZ;
float tu = startU / startZ;
tv = fmodf(tv, texturePitch); // make sure the texture coordinates stay on the texture if they are too wide/high
tu = fmodf(tu, texturePitch); // I'm assuming square textures here. With fixed point you could have used &=
unsigned int *textPtr = textureBuf + (int)tu + (int)tv * texturePitch; // in case of fixed point one could have shifted tv. Now we have to multiply every time.
unsigned int destColTm = *textPtr; // this is the color (if we only use texture mapping) we'll be needing for the pixel
optional: check the zbuffer to see if the previously plotted pixel at this coordinate is higher or lower than ours.
plot the pixel
startZ += slopeZ; startU+=slopeU; startV += slopeV; //update all interpolators
} end of x loop
leftCurX += leftDeltaX; rightCurX += rightDeltaX; leftCurU += leftDeltaU; leftCurV += leftDeltaV; leftCurZ += leftDeltaZ; //update Y interpolators
} end of y loop
//this is the end of the first part. We now have drawn half the triangle. from the top, to the middle Y coordinate.
// we now basically do the exact same thing but now for the bottom half of the triangle (using the other set of interpolators)
let me know if this helps you solve the problem you are facing!
I don't know that I can help with your question, but one of the best books on software rendering that I had read at the time is available online Graphics Programming Black Book by Michael Abrash.
If you are interpolating 1/z, you need to multiply UV/z by z, not 1/z. Assuming you have this:
UV = UV_current * z_current
and z_current is interpolating 1/z, you should change it to:
UV = UV_current / z_current
And then you might want to rename z_current to something like one_over_z_current.
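To make that concrete, here is a small sketch of one scanline (my own names) where u/z, v/z and 1/z are the values being interpolated linearly, and the true UV is recovered per pixel by the division described above:
struct ScanlineEnd { float u_over_z, v_over_z, one_over_z; };

// Interpolate u/z, v/z and 1/z linearly across the scanline, then divide per
// pixel to recover the perspective-correct texture coordinate.
void DrawScanline(int xStart, int xEnd, ScanlineEnd left, ScanlineEnd right)
{
    float span = float(xEnd - xStart);
    for (int x = xStart; x < xEnd; ++x)
    {
        float t  = (x - xStart) / span;
        float oz = left.one_over_z + (right.one_over_z - left.one_over_z) * t;
        float u  = (left.u_over_z  + (right.u_over_z  - left.u_over_z)  * t) / oz;
        float v  = (left.v_over_z  + (right.v_over_z  - left.v_over_z)  * t) / oz;
        // sample the texture at (u, v) and plot the pixel here
        (void)u; (void)v;
    }
}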