How to blend a stroke (HLSL / C++)

I am working on a painting app, but I am struggling with how to implement opacity and flow. The flow part, I think, is working; this is what I am trying for it:
float4 BrushSample = BrushTexture.SampleLevel(SamplerStateBilinear, ProjectedUV, 0);
// Taking the max keeps repeated dabs from accumulating: alpha never exceeds
// Alpha * BrushFlow unless it was already higher.
float a = max(Sample.a, Alpha * BrushFlow);
float3 Color = lerp(Sample.rgb, BrushSample.rgb, a);
return saturate(float4(Color, a));
Here BrushTexture is the color to be painted, sampled with a projected UV since I am painting 3D models, and Sample is the current color of the target texture at that stage of the stroke.
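For comparison, here is a minimal C++ sketch of one common opacity/flow model (an assumption on my part, not the poster's code): flow controls how much each dab deposits, opacity caps the whole stroke, and the stroke is assumed to live in its own buffer that is composited onto the canvas afterwards.

#include <algorithm>
#include <cstdio>

// One dab hitting one pixel of the stroke buffer (hypothetical helper).
float applyDab(float strokeAlpha, float flow, float opacity) {
    float a = strokeAlpha + flow * (1.0f - strokeAlpha); // dabs build up...
    return std::min(a, opacity);                         // ...but never past opacity
}

int main() {
    float a = 0.0f;
    for (int dab = 1; dab <= 8; ++dab) {
        a = applyDab(a, 0.2f, 0.6f);
        std::printf("after dab %d: alpha = %f\n", dab, a);
    }
    return 0;
}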

Related

HLSL - Sampling a render target texture always returns a black color

Okay, first of all, I'm really new to DirectX 11 and this is actually my first project using it. I'm also relatively new to computer graphics in general, so I might have some concepts wrong, although for this particular case I do not think so. My code is based on the RasterTek tutorials.
In trying to implement a post-processing shader, I need to render the scene to a 2D texture and then perform a Gaussian blur on the resulting image.
That part seems to be working fine: when I inspect it with the Visual Studio graphics debugger, the output is what I expect.
However, after having done all the post-processing, I render a quad to the backbuffer using a simple shader that uses the final output of the blur as a resource. This always gives me a black screen. When I debug my pixel shader with the VS graphics debugger, it seems like the Sample(texture, uv) method always returns (0, 0, 0, 1) when trying to sample that texture.
The pixel shader works fine if I use a different texture, like some normal map or whatever, as a resource, just not when using any of the rendertargets from the previous passes.
The behaviour is particularly weird because the actual blur shader works fine when using any of the rendertargets as a resource.
I know I cannot use a rendertarget as both input and output but I think I have that covered since I call OMSetRenderTargets so I can render to the backbuffer.
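As an aside, one common cause of exactly this symptom (this is a guess; it is not confirmed by the code shown) is that the texture is still bound to a pixel shader input slot from the blur pass when it is needed as a render target, or vice versa; the D3D11 runtime then silently nulls one side of the conflict and sampling returns black. A sketch of the usual safeguard, where backBufferRTV is a stand-in for however the back buffer view is obtained:

#include <d3d11.h>

// Hypothetical helper: unbind leftover SRVs before switching render targets,
// so the same resource is never bound as both shader input and output.
void SetBackBufferTarget(ID3D11DeviceContext* deviceContext, ID3D11RenderTargetView* backBufferRTV)
{
    ID3D11ShaderResourceView* nullSRVs[2] = { nullptr, nullptr };
    deviceContext->PSSetShaderResources(0, 2, nullSRVs);
    deviceContext->OMSetRenderTargets(1, &backBufferRTV, nullptr);
}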
Here's the step-by-step of my implementation:
Set Render Targets
Clear them
Clear Depth buffer
Render scene to texture
Turn off Z buffer
Render to quad
Perform horizontal blur
Perform vertical blur
Set back buffer as render target
Clear back buffer
Render final output to quad
Turn z buffer on
Present back buffer
Here is the shader for the quad:
Texture2D shaderTexture : register(t0);
SamplerState SampleType : register(s0);

struct PixelInputType
{
    float4 position : SV_POSITION;
    float2 tex : TEXCOORD0;
};

float4 main(PixelInputType input) : SV_TARGET
{
    return shaderTexture.Sample(SampleType, input.tex);
}
Here's the relevant C++ code.
This is how I set the render targets:
void DeferredBuffers::SetRenderTargets(ID3D11DeviceContext* deviceContext, bool activeRTs[BUFFER_COUNT]){
    std::vector<ID3D11RenderTargetView*> rts;
    for (int i = 0; i < BUFFER_COUNT; ++i){
        if (activeRTs[i]){
            rts.push_back(m_renderTargetViewArray[i]);
        }
    }
    // rts.data() is safe even when the vector is empty, unlike &rts[0].
    deviceContext->OMSetRenderTargets(static_cast<UINT>(rts.size()), rts.data(), m_depthStencilView);

    // Set the viewport.
    deviceContext->RSSetViewports(1, &m_viewport);
}
I use a ping-pong approach with the render targets for the blur.
I render the scene to a mainTarget and depth information to the depthMap. The first pass performs a horizontal blur into a third target (horizontalBlurred), and then I use that one as input for the vertical blur, which renders back to the mainTarget and to the finalTarget. It's a loop because on the vertical pass I'm supposed to blend the PS output with what's on the finalTarget. I left that code (and some other stuff) out as it's not relevant.
The m_Fullscreen is the quad.
bool activeRenderTargets[4] = { true, true, false, false };
// Set the render buffers to be the render target.
m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
// Clear the render buffers.
m_ShaderManager->getDeferredBuffers()->ClearRenderTargets(m_D3D->GetDeviceContext(), 0.25f, 0.0f, 0.0f, 1.0f);
m_ShaderManager->getDeferredBuffers()->ClearDepthStencil(m_D3D->GetDeviceContext());
// Render the scene to the render buffers.
RenderSceneToTexture();
// Get the matrices.
m_D3D->GetWorldMatrix(worldMatrix);
m_Camera->GetBaseViewMatrix(baseViewMatrix);
m_D3D->GetOrthoMatrix(projectionMatrix);
// Turn off the Z buffer to begin all 2D rendering.
m_D3D->TurnZBufferOff();
// Put the full screen ortho window vertex and index buffers on the graphics pipeline to prepare them for drawing.
m_FullScreenWindow->Render(m_D3D->GetDeviceContext());
ID3D11ShaderResourceView* mainTarget = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(0);
ID3D11ShaderResourceView* horizontalBlurred = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(2);
ID3D11ShaderResourceView* depthMap = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(1);
ID3D11ShaderResourceView* finalTarget = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(3);
activeRenderTargets[1] = false; //depth map is never a render target again
for (int i = 0; i < numBlurs; ++i){
    activeRenderTargets[0] = false; //main target is resource in this pass
    activeRenderTargets[2] = true;  //horizontal blurred target
    activeRenderTargets[3] = false; //unbind final target
    m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
    m_ShaderManager->RenderScreenSpaceSSS_HorizontalBlur(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, mainTarget, depthMap);

    activeRenderTargets[0] = true;  //rendering to main target
    activeRenderTargets[2] = false; //horizontal blurred is resource
    activeRenderTargets[3] = true;  //rendering to final target
    m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
    m_ShaderManager->RenderScreenSpaceSSS_VerticalBlur(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, horizontalBlurred, depthMap);
}
m_D3D->SetBackBufferRenderTarget();
m_D3D->BeginScene(0.0f, 0.0f, 0.5f, 1.0f);
// Reset the viewport back to the original.
m_D3D->ResetViewport();
m_ShaderManager->RenderTextureShader(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, depthMap);
m_D3D->TurnZBufferOn();
m_D3D->EndScene();
And, finally, here are 3 screenshots from my graphics log.
They show rendering the scene onto the mainTarget, a vertical pass which takes the horizontalBlurred resource as input, and finally, rendering onto the backBuffer, which is what's failing. You can see the resource bound to the shader and how the output is just a black screen. I purposely set the background to red to find out if it was sampling with the wrong coordinates, but nope.
So, has anyone ever experienced something like this? What could be the cause of this bug?
Thanks in advance for any help!
EDIT: The Render_SOMETHING_SOMETHING_shader methods handle binding all the resources, setting the shaders, draw calls, etc. If necessary I can post them here, but I don't think they're that relevant.

OpenGL Render To Texture With Partial Transparency (Translucency) And Then Rendering That To The Screen

I've found a few places where this has been asked, but I've not yet found a good answer.
The problem: I want to render to texture, and then I want to draw that rendered texture to the screen IDENTICALLY to how it would appear if I skipped the render-to-texture step and were just directly rendering to the screen. I am currently using the blend mode glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I have glBlendFuncSeparate to play around with as well.
I want to be able to render partially transparent overlapping items to this texture. I know the blend function is currently messing up the RGB values based on the Alpha. I've seen some vague suggestions to use "premultiplied alpha" but the description is poor as to what that means. I make png files in photoshop, I know they have a translucency bit and you can't easily edit the alpha channel independently as you can with TGA. If necessary I can switch to TGA, though PNG is more convenient.
For now, for the sake of this question, assume we aren't using images, instead I am just using full color quads with alpha.
Once I render my scene to the texture, I need to render that texture to another scene, and I need to BLEND the texture assuming partial transparency again. Here is where things fall apart. In the previous blending steps I clearly alter the RGB values based on alpha; doing it again works fine if alpha is 0 or 1, but if it is in between, the result is a further darkening of those partially translucent pixels.
Playing with blend modes I've had very little luck. The best I can do is render to texture with:
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
I did discover that rendering multiple times with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) will approximate the right color (unless things overlap), but that's not exactly perfect: as you can see in the following image, the parts where the green/red/blue boxes overlap get darker, or accumulate alpha. (EDIT: If I do the multiple draws in the render-to-screen part and only render once to texture, the alpha accumulation issue disappears and it does work. But why?! I don't want to have to render the same texture hundreds of times to the screen to get it to accumulate properly.)
Here are some images detailing the issue. The multiple render passes use basic blending (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), rendered multiple times in the texture rendering step. The 3 boxes on the right are rendered 100% red, green, or blue (0-255), but at alpha values of 50% for blue, 25% for red, and 75% for green:
So, a breakdown of what I want to know:
I set blend mode to: X?
I render my scene to a texture. (Maybe I have to render with a few blend modes or multiple times?)
I set my blend mode to: Y?
I render my texture to the screen over an existing scene. (Maybe I need a different shader? Maybe I need to render the texture a few times?)
Desired behavior is that at the end of that step, the final pixel result is identical to if I were to just do this:
I set my blend mode to: (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
I render my scene to the screen.
And, for completeness, here is some of my code with my original naive attempt (just regular blending):
//RENDER TO TEXTURE.
void Clipped::refreshTexture(bool a_forceRefresh) {
    if(a_forceRefresh || dirtyTexture){
        auto pointAABB = basicAABB();
        auto textureSize = castSize<int>(pointAABB.size());
        clippedTexture = DynamicTextureDefinition::make("", textureSize, {0.0f, 0.0f, 0.0f, 0.0f});
        dirtyTexture = false;
        texture(clippedTexture->makeHandle(Point<int>(), textureSize));
        framebuffer = renderer->makeFramebuffer(castPoint<int>(pointAABB.minPoint), textureSize, clippedTexture->textureId());
        {
            renderer->setBlendFunction(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            SCOPE_EXIT{ renderer->defaultBlendFunction(); };

            renderer->modelviewMatrix().push();
            SCOPE_EXIT{ renderer->modelviewMatrix().pop(); };
            renderer->modelviewMatrix().top().makeIdentity();

            framebuffer->start();
            SCOPE_EXIT{ framebuffer->stop(); };

            const size_t renderPasses = 1; //Not sure?
            if(drawSorted){
                for(size_t i = 0; i < renderPasses; ++i){
                    sortedRender();
                }
            } else{
                for(size_t i = 0; i < renderPasses; ++i){
                    unsortedRender();
                }
            }
        }
        alertParent(VisualChange::make(shared_from_this()));
    }
}
Here is the code I'm using to set up the scene:
bool Clipped::preDraw() {
    refreshTexture();

    pushMatrix();
    SCOPE_EXIT{ popMatrix(); };

    renderer->setBlendFunction(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    SCOPE_EXIT{ renderer->defaultBlendFunction(); };

    defaultDraw(GL_TRIANGLE_FAN);
    return false; //returning false blocks the default rendering steps for this node.
}
And the code to render the scene:
test = MV::Scene::Rectangle::make(&renderer, MV::BoxAABB({0.0f, 0.0f}, {100.0f, 110.0f}), false);
test->texture(MV::FileTextureDefinition::make("Assets/Images/dogfox.png")->makeHandle());
box = std::shared_ptr<MV::TextBox>(new MV::TextBox(&textLibrary, MV::size(110.0f, 106.0f)));
box->setText(UTF_CHAR_STR("ABCDE FGHIJKLM NOPQRS TUVWXYZ"));
box->scene()->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({0, 0, 1, .5})->position({80.0f, 10.0f})->setSortDepth(100);
box->scene()->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({1, 0, 0, .25})->position({80.0f, 40.0f})->setSortDepth(101);
box->scene()->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({0, 1, 0, .75})->position({80.0f, 70.0f})->setSortDepth(102);
test->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({.0, 0, 1, .5})->position({110.0f, 10.0f})->setSortDepth(100);
test->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({1, 0, 0, .25})->position({110.0f, 40.0f})->setSortDepth(101);
test->make<MV::Scene::Rectangle>(MV::size(65.0f, 36.0f))->color({.0, 1, 0, .75})->position({110.0f, 70.0f})->setSortDepth(102);
And here's my screen draw:
renderer.clearScreen();
test->draw(); //this is drawn directly to the screen.
box->scene()->draw(); //everything in here is in a clipped node with a render texture.
renderer.updateScreen();
*EDIT: FRAMEBUFFER SETUP/TEARDOWN CODE:
void glExtensionFramebufferObject::startUsingFramebuffer(std::shared_ptr<Framebuffer> a_framebuffer, bool a_push){
    savedClearColor = renderer->backgroundColor();
    renderer->backgroundColor({0.0, 0.0, 0.0, 0.0});
    require(initialized, ResourceException("StartUsingFramebuffer failed because the extension could not be loaded"));
    if(a_push){
        activeFramebuffers.push_back(a_framebuffer);
    }
    glBindFramebuffer(GL_FRAMEBUFFER, a_framebuffer->framebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, a_framebuffer->texture, 0);
    glBindRenderbuffer(GL_RENDERBUFFER, a_framebuffer->renderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, roundUpPowerOfTwo(a_framebuffer->frameSize.width), roundUpPowerOfTwo(a_framebuffer->frameSize.height));
    glViewport(a_framebuffer->framePosition.x, a_framebuffer->framePosition.y, a_framebuffer->frameSize.width, a_framebuffer->frameSize.height);
    renderer->projectionMatrix().push().makeOrtho(0, static_cast<MatrixValue>(a_framebuffer->frameSize.width), 0, static_cast<MatrixValue>(a_framebuffer->frameSize.height), -128.0f, 128.0f);
    GLenum buffers[] = {GL_COLOR_ATTACHMENT0};
    //pglDrawBuffersEXT(1, buffers);
    renderer->clearScreen();
}

void glExtensionFramebufferObject::stopUsingFramebuffer(){
    require(initialized, ResourceException("StopUsingFramebuffer failed because the extension could not be loaded"));
    activeFramebuffers.pop_back();
    if(!activeFramebuffers.empty()){
        startUsingFramebuffer(activeFramebuffers.back(), false);
    } else {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glBindRenderbuffer(GL_RENDERBUFFER, 0);
        glViewport(0, 0, renderer->window().width(), renderer->window().height());
        renderer->projectionMatrix().pop();
        renderer->backgroundColor(savedClearColor);
    }
}
And my clear screen code:
void Draw2D::clearScreen(){
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
}
Based on some calculations and simulations I ran, I came up with two fairly similar solutions that seem to do the trick. One uses pre-multiplied colors in combination with a single (separate) blend function, the other one works without pre-multiplied colors, but requires changing the blend function a couple of times in the process.
Option 1: Single Blend Function, Pre-Multiplication
This approach works with a single blend function through the entire process. The blend function is:
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA,
GL_ONE_MINUS_DST_ALPHA, GL_ONE);
It requires pre-multiplied colors, which means that if your input color would normally be (r, g, b, a), you use (r * a, g * a, b * a, a) instead. You can perform the pre-multiplication in the fragment shader.
The sequence is:
Set the blend function to (GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE); a code sketch of this setup follows the list.
Set render target to FBO.
Render layers that you want rendered to FBO, using pre-multiplied colors.
Set render target to default framebuffer.
Render layers you want below FBO content, using pre-multiplied colors.
Render FBO attachment, without applying pre-multiplication since the colors in the FBO are already pre-multiplied.
Render layers you want on top of FBO content, using pre-multiplied colors.
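As a concrete sketch of the state setup named in step 1, in C++ (GL names exactly as listed above; the premultiplication itself is shown as a shader comment):

// Option 1 blend state, set once for the whole process.
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA,  // RGB: src + (1 - srcA) * dst
                    GL_ONE_MINUS_DST_ALPHA, GL_ONE); // A:   srcA * (1 - dstA) + dstA
glBlendEquation(GL_FUNC_ADD);
// In the fragment shader, output premultiplied color, e.g.:
//   gl_FragColor = vec4(color.rgb * color.a, color.a);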
Option 2: Switch Blend Functions, without Pre-Multiplication
This approach does not require pre-multiplication of the colors for any step. The downside is that the blend function has to be switched a few times during the process.
Set the blend function to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE).
Set render target to FBO.
Render layers that you want rendered to FBO.
Set render target to default framebuffer.
(optional) Set the blend function to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
Render layers you want below FBO content.
Set the blend function to (GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
Render FBO attachment.
Set the blend function to (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
Render layers you want on top of FBO content.
Explanation and Proof
I think Option 1 is nicer and possibly more efficient because it does not require switching blend functions during rendering. So the detailed explanation below is for Option 1. The math for Option 2 is pretty much the same, though. The only real difference is that Option 2 uses GL_SRC_ALPHA for the first term of the blend function to perform the pre-multiplication where necessary, whereas Option 1 expects pre-multiplied colors to come into the blend function.
To illustrate that this works, let's go through an example where 3 layers are rendered. I'll do all the calculations for the r and a components. The calculations for g and b would be equivalent to the ones for r. We will render three layers in the following order:
(r1, a1) pre-multiplied: (r1 * a1, a1)
(r2, a2) pre-multiplied: (r2 * a2, a2)
(r3, a3) pre-multiplied: (r3 * a3, a3)
For the reference calculation, we blend these 3 layers with the standard GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blend function. We don't need to track the resulting alpha here since DST_ALPHA is not used in the blend function, and we don't use the pre-multiplied colors yet:
after layer 1: (a1 * r1)
after layer 2: (a2 * r2 + (1.0 - a2) * a1 * r1)
after layer 3: (a3 * r3 + (1.0 - a3) * (a2 * r2 + (1.0 - a2) * a1 * r1)) =
(a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - a3) * (1.0 - a2) * a1 * r1)
So the last term is our target for the final result. Now, we render layers 2 and 3 into an FBO. Later we will render layer 1 into the frame buffer, and then blend the FBO on top of it. The goal is to get the same result.
From now on, we will apply the blend function listed at the start, and use pre-multiplied colors. We will also need to calculate the alphas, since DST_ALPHA is used in the blend function. First, we render layers 2 and 3 into the FBO:
after layer 2: (a2 * r2, a2)
after layer 3: (a3 * r3 + (1.0 - a3) * a2 * r2, (1.0 - a2) * a3 + a2)
Now we render to the primary framebuffer. Since we don't care about the resulting alpha, I'll only calculate the r component again:
after layer 1: (a1 * r1)
Now we blend the content of the FBO on top of this. So what we calculated for "after layer 3" in the FBO is our source color/alpha, a1 * r1 is the destination color, and GL_ONE, GL_ONE_MINUS_SRC_ALPHA is still the blend function. The colors in the FBO are already pre-multiplied, so there will be no pre-multiplication in the shader while blending the FBO content:
srcR = a3 * r3 + (1.0 - a3) * a2 * r2
srcA = (1.0 - a2) * a3 + a2
dstR = a1 * r1
ONE * srcR + ONE_MINUS_SRC_ALPHA * dstR
= srcR + (1.0 - srcA) * dstR
= a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - ((1.0 - a2) * a3 + a2)) * a1 * r1
= a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - a3 + a2 * a3 - a2) * a1 * r1
= a3 * r3 + (1.0 - a3) * a2 * r2 + (1.0 - a3) * (1.0 - a2) * a1 * r1
Compare the last term with the reference value we calculated above for the standard blending case, and you can tell that it's exactly the same.
This answer to a similar question has some more background on the GL_ONE_MINUS_DST_ALPHA, GL_ONE part of the blend function: OpenGL ReadPixels (Screenshot) Alpha.
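To double-check the algebra, here is a small standalone C++ program (test values are arbitrary, not from the posts) that runs the reference blend and the FBO-composite path side by side and asserts they agree:

#include <cassert>
#include <cmath>
#include <cstdio>

int main() {
    // Three translucent layers, bottom to top.
    float r1 = 0.8f, a1 = 0.5f;
    float r2 = 0.3f, a2 = 0.25f;
    float r3 = 0.9f, a3 = 0.75f;

    // Reference: (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) straight into a black buffer.
    float ref = a1 * r1;
    ref = a2 * r2 + (1.0f - a2) * ref;
    ref = a3 * r3 + (1.0f - a3) * ref;

    // FBO pass: premultiplied colors with (GL_ONE, GL_ONE_MINUS_SRC_ALPHA,
    // GL_ONE_MINUS_DST_ALPHA, GL_ONE), FBO cleared to color 0, alpha 0.
    float fboR = 0.0f, fboA = 0.0f;
    fboR = a2 * r2 + (1.0f - a2) * fboR;  fboA = a2 * (1.0f - fboA) + fboA;
    fboR = a3 * r3 + (1.0f - a3) * fboR;  fboA = a3 * (1.0f - fboA) + fboA;

    // Layer 1 into the default framebuffer, then the FBO on top with
    // (GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
    float dst = a1 * r1;
    float out = fboR + (1.0f - fboA) * dst;

    std::printf("reference = %f, composite = %f\n", ref, out);
    assert(std::fabs(ref - out) < 1e-5f);
    return 0;
}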
I achieved my goal. Now, let me share this information with the internet, since it exists nowhere else that I could find.
Create your framebuffer (glGenFramebuffers, glBindFramebuffer, etc.)
Clear the framebuffer to 0, 0, 0, 0
Set your viewport properly. This is all basic stuff I took for granted in the question, but want to include here.
Now, render your scene to the framebuffer normally with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). Make sure the scene is sorted (just as you would normally.)
Now bind the included fragment shader. This will undo the damage dealt to the image color values via the blend function.
Render the texture to your screen with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
Go back to rendering as normal with a regular shader.
The code I included in the question remains basically untouched except that I ensure I'm binding the shader I list below when I do my "preDraw" function, which is specific to my own little framework, but is basically the "draw to screen" call for my rendered texture.
I call this the "unblend" shader.
#version 330 core

smooth in vec4 color;
smooth in vec2 uv;

uniform sampler2D texture;

out vec4 colorResult;

void main(){
    vec4 textureColor = texture2D(texture, uv.st);
    textureColor /= sqrt(textureColor.a);
    colorResult = textureColor * color;
}
Why do I do textureColor/=sqrt(textureColor.a)? Because the original color is figured like this:
resultR = r * a, resultG = g * a, resultB = b * a, resultA = a * a
Now, if we want to undo that, we need to figure out what a is. The easiest way is to solve for a here:
resultA = a * a
If a is .25 when originally rendering we have:
resultA = .25 * .25
Or:
resultA = 0.0625
When the texture is being drawn to the screen, though, we don't have a anymore. We know what resultA is: it's the texture's alpha channel. So we can take sqrt(resultA) to get .25 back. Now with that value we can divide to undo the multiply:
textureColor/=sqrt(textureColor.a);
And that fixes everything up undoing the blending!
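In scalar C++ form, the recovery for a single non-overlapping fragment (arbitrary test values, mirroring the arithmetic above) looks like this:

#include <cassert>
#include <cmath>

int main() {
    float r = 0.6f, a = 0.25f;               // original color and alpha
    float storedR = a * r;                   // FBO color after blending over (0,0,0,0)
    float storedA = a * a;                   // FBO alpha after blending over 0
    float recoveredA = std::sqrt(storedA);   // undo a * a -> a
    float recoveredR = storedR / recoveredA; // undo a * r -> r
    assert(std::fabs(recoveredA - a) < 1e-6f);
    assert(std::fabs(recoveredR - r) < 1e-6f);
    return 0;
}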
*EDIT: Well... kinda, at least. There is a slight inaccuracy; in this case I can show it by rendering over a clear color that is not identical to the framebuffer clear color. Some alpha information seems to be lost, probably in the RGB channels. This is still good enough for me, but I wanted to follow up with the screenshot showing the inaccuracy before signing out. If anyone has a solution, please provide it!
I have opened a bounty to bring this answer up to a canonical, 100% correct solution. Right now, if I render more partially transparent objects over the existing transparency, the transparency accumulates differently than on the right, resulting in a lightening of the final texture beyond what is shown there. Likewise, when rendered over a non-black background, the results of the existing solution clearly differ slightly, as demonstrated above.
A proper solution would be identical in all cases. My existing solution cannot take the destination blending into account in the shader correction, only the source alpha.
In order to do this in a single pass you need support for separate color & alpha blending functions. First you render the texture which has foreground contribution stored in the alpha channel (i.e. 1=fully opaque, 0=fully transparent) and pre-multiplied source color value in the RGB color channel. To create this texture do the following operations:
1. clear the texture to RGBA = [0, 0, 0, 0]
2. set the color channel blending to src_color*src_alpha + dst_color*(1-src_alpha)
3. set the alpha channel blending to src_alpha*(1-dst_alpha) + dst_alpha
4. render the scene to the texture
To set the modes specified by steps 2 and 3, you can call glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE) and glBlendEquation(GL_FUNC_ADD).
Next render this texture to the scene by setting the color blending to:
src_color+dst_color*(1-src_alpha), i.e. glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) and glBlendEquation(GL_FUNC_ADD)
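Putting the two phases together, here is a consolidated C++ sketch of the sequence just described; sceneFBO, drawScene, and drawTexturedQuad are placeholders for your own FBO handle and draw calls:

void drawScene();        // placeholder: draws the translucent layers
void drawTexturedQuad(); // placeholder: draws a quad textured with the FBO attachment

void renderWithOffscreenPass(GLuint sceneFBO) {
    // Phase 1: render the scene into the texture's FBO.
    glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // clear to RGBA = [0, 0, 0, 0]
    glClear(GL_COLOR_BUFFER_BIT);
    glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                        GL_ONE_MINUS_DST_ALPHA, GL_ONE);
    glBlendEquation(GL_FUNC_ADD);
    drawScene();

    // Phase 2: composite the (now premultiplied) texture into the main scene.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glBlendEquation(GL_FUNC_ADD);
    drawTexturedQuad();
}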
Your problem is older than OpenGL, or personal computers, or indeed any living human. You're trying to blend two images together and make it look like they weren't blended at all. Printing presses face this exact problem. When ink is applied to paper, the result is a blend between the ink color and the paper color.
The solution is the same in paper as it is in OpenGL. You must alter your source image in order to control your final result. This is easy enough to figure out if you examine the math used to blend.
For each of R, G, B, the resultant color is (old * (1-opacity)) + (new * opacity). The basic scenario, and the one you'd like to emulate, is drawing a color directly onto the final back buffer at opacity A.
For example, opacity is 50% and your green channel has 0xFF. The result should be 0x7F on a black background (including unavoidable rounding error). You probably can't assume the background is black, so expect the green channel to vary between 0x7F and 0xFF.
You'd like to know how to emulate that result when you're really rendering to a texture, then rendering the texture to the back buffer. It turns out that the "vague suggestions to use 'premultiplied alpha'" were correct. Whereas your solution is to use a shader to unblend a previous blend operation in the last step, the standard solution is to multiply the colors of your original source texture by the alpha channel (aka premultiplied alpha). When compositing into the intermediate texture, the RGB channels are blended without multiplying by alpha. When rendering the texture to the back buffer, again the RGB channels are blended without multiplying by alpha. Thus you neatly avoid the multiple-multiplication problem.
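For illustration, premultiplying a straight-alpha image in C++ before upload might look like this (RGBA here is a hypothetical pixel struct, not from the question's framework):

#include <cstddef>

struct RGBA { float r, g, b, a; };

// Convert straight-alpha pixels to premultiplied alpha in place.
void premultiply(RGBA* pixels, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i) {
        pixels[i].r *= pixels[i].a;
        pixels[i].g *= pixels[i].a;
        pixels[i].b *= pixels[i].a;
    }
}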
Please consult these resources for a better understanding. I and most others are more familiar with this technique in DirectX, so you may have to search for the appropriate OGL flags.

WebGL: Particle engine using FBO, how to correctly write and sample particle positions from a texture?

I suspect I'm not correctly rendering particle positions to my FBO, or correctly sampling those positions when rendering, though that may not be the actual problem with my code, admittedly.
I have a complete jsfiddle here: http://jsfiddle.net/p5mdv/53/
A brief overview of the code:
Initialization:
Create an array of random particle positions in x,y,z
Create an array of texture sampling locations (e.g. for 2 particles, first particle at 0,0, next at 0.5,0)
Create a Frame Buffer Object and two particle position textures (one for input, one for output)
Create a full-screen quad (-1,-1 to 1,1)
Particle simulation:
Render a full-screen quad using the particle program (bind frame buffer, set viewport to the dimensions of my particle positions texture, bind input texture, and draw a quad from -1,-1 to 1,1). Input and output textures are swapped each frame.
Particle fragment shader samples the particle texture at the current fragment position (gl_FragCoord.xy), makes some modifications, and writes out the modified position
Particle rendering:
Draw using the vertex buffer of texture sampling locations
Vertex shader uses the sampling location to sample the particle position texture, then transforms them using view projection matrix
Draw the particle using a sprite texture (gl.POINTS)
Questions:
Am I correctly setting the viewport for the FBO in the particle simulation step? I.e. am I correctly rendering a full-screen quad?
// 6 2D corners = 12 vertices
var vertexBuffer = new Float32Array(12);
// -1,-1 to 1,1 screen quad
vertexBuffer[0] = -1;
vertexBuffer[1] = -1;
vertexBuffer[2] = -1;
vertexBuffer[3] = 1;
vertexBuffer[4] = 1;
vertexBuffer[5] = 1;
vertexBuffer[6] = -1;
vertexBuffer[7] = -1;
vertexBuffer[8] = 1;
vertexBuffer[9] = 1;
vertexBuffer[10] = 1;
vertexBuffer[11] = -1;
// Create GL buffers with this data
g.particleSystem.vertexObject = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, g.particleSystem.vertexObject);
gl.bufferData(gl.ARRAY_BUFFER, vertexBuffer, gl.STATIC_DRAW);
...
gl.viewport(0, 0,
g.particleSystem.particleFBO.width,
g.particleSystem.particleFBO.height);
...
// Set the quad as vertex buffer
gl.bindBuffer(gl.ARRAY_BUFFER, g.screenQuad.vertexObject);
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);
// Draw!
gl.drawArrays(gl.TRIANGLES, 0, 6);
Am I correctly setting the texture coordinates to sample the particle positions?
for(var i = 0; i < numParticles; i++)
{
    // Coordinates of particle within texture (normalized)
    var texCoordX = Math.floor(i % texSize.width) / texSize.width;
    var texCoordY = Math.floor(i / texSize.width) / texSize.height;
    particleIndices[pclIdx] = texCoordX;
    particleIndices[pclIdx + 1] = texCoordY;
    particleIndices[pclIdx + 2] = 1; // not used in shader
}
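One detail worth checking in this kind of lookup table (an observation of mine, not a confirmed bug in the code above): the loop computes texel corners, while the unambiguous sampling point is the texel center, i.e. index + 0.5, normalized. In C++ terms:

#include <cstdio>

// For particle i in a w x h data texture, sample at the texel center so the
// fetch cannot straddle two neighbouring particles.
void particleTexCoord(int i, int w, int h, float& u, float& v) {
    u = (i % w + 0.5f) / w;
    v = (i / w + 0.5f) / h;
}

int main() {
    float u, v;
    particleTexCoord(1, 2, 1, u, v);        // second of two particles in a 2x1 texture
    std::printf("u = %f, v = %f\n", u, v);  // 0.75, 0.5 (the corner math gives 0.5, 0.0)
    return 0;
}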
The relevant shaders:
Particle simulation fragment shader:
precision mediump float;

uniform sampler2D mParticleTex;

void main()
{
    // Current pixel is the particle's position on the texture
    vec2 particleSampleCoords = gl_FragCoord.xy;
    vec4 particlePos = texture2D(mParticleTex, particleSampleCoords);

    // Move the particle up
    particlePos.y += 0.1;
    if(particlePos.y > 2.0)
    {
        // Reset
        particlePos.y = -2.0;
    }

    // Write particle out to texture
    gl_FragColor = particlePos;
}
Particle rendering vertex shader:
attribute vec4 vPosition;

uniform mat4 u_modelViewProjMatrix;
uniform sampler2D mParticleTex;

void main()
{
    vec2 particleSampleCoords = vPosition.xy;
    vec4 particlePos = texture2D(mParticleTex, particleSampleCoords);
    gl_Position = u_modelViewProjMatrix * particlePos;
    gl_PointSize = 10.0;
}
Let me know if there's a better way to go about debugging this, if nothing else. I'm using webgl-debug to find gl errors and logging what I can to the console.
Your quad is facing away from the view, so I tried adding gl.disable(gl.CULL_FACE), but still no result.
Then I noticed that while resizing the window panel with the canvas, it actually shows one black, square-shaped particle, so it seems the rendering loop is not right.
If you look at the console log, it fails to load the particle image, and it also says that the FBO size is 512x1, which is not good.
Some function declarations, such as getTexSize, do not exist. (?!)
The code needs tidying and grouping, and always check the console if you're already using it.
Hope this helps a bit.
Found the problem.
gl_FragCoord ranges from [0,0] to [screenwidth, screenheight]; I was wrongly thinking it ranged from [0,0] to [1,1].
I had to pass in shader variables for width and height, then normalize the sample coordinates before sampling from the texture.

Gradient with HSV rather than RGB in OpenGL

OpenGL can colour a rectangle with a gradient of colours from one side to the other. I'm using the following code for that in C++:
glBegin(GL_QUADS);
{
    glColor3d(simulationSettings->hotColour.redF(), simulationSettings->hotColour.greenF(), simulationSettings->hotColour.blueF());
    glVertex2d(keyPosX - keyWidth/2, keyPosY + keyHight/2);
    glColor3d(simulationSettings->coldColour.redF(), simulationSettings->coldColour.greenF(), simulationSettings->coldColour.blueF());
    glVertex2d(keyPosX - keyWidth/2, keyPosY - keyHight/2);
    glColor3d(simulationSettings->coldColour.redF(), simulationSettings->coldColour.greenF(), simulationSettings->coldColour.blueF());
    glVertex2d(keyPosX + keyWidth/2, keyPosY - keyHight/2);
    glColor3d(simulationSettings->hotColour.redF(), simulationSettings->hotColour.greenF(), simulationSettings->hotColour.blueF());
    glVertex2d(keyPosX + keyWidth/2, keyPosY + keyHight/2);
}
glEnd(); // required to close the glBegin block (missing in the excerpt)
I'm using some Qt libraries to do the conversions between HSV and RGB. As you can see from the code, I'm drawing a rectangle with colour gradient from what I call hotColour to coldColour.
Why am I doing this? The program I made draws 3D Vectors in space and indicates their length by their colour. The user is offered to choose the hot (high value) and cold (low value) colours, and the program will automatically do the gradient using HSV scaling.
Why HSV scaling? Because HSV is single-valued along the colour map I'm using, and creating gradients with it linearly is a very easy task. For the user to select the colours, I offer a QColorDialog colour map:
http://qt-project.org/doc/qt-4.8/qcolordialog.html
On this colour map, you can see that red is available on the right and left side, making it impossible to have a linear scale for this colour-map with RGB. But with HSV, the linear scale is very easily achievable, where I just have to use a linear scale between 0 and 360 for Hue values.
With this paradigm, we can see that the hot and cold colours define the direction of the gradient. For example, if I choose a hue of 0 for cold and 359 for hot, HSV will give me a gradient between 0 and 359 and will include the whole spectrum of colours in the gradient; whereas in OpenGL, it will basically go from red to red, which is no gradient at all!
How can I force OpenGL to use an HSV gradient rather than RGB? The only idea that occurs to me is slicing the rectangle I want to colour and doing many gradients over smaller rectangles, but I don't think that's the most efficient way to do it.
Any ideas?
How can I force OpenGL to use an HSV gradient rather than RGB?
I wouldn't call it "forcing", but "teaching". OpenGL's default way of interpolating vertex attribute vectors is barycentric interpolation of the individual vector elements, based on the fragment's NDC coordinates.
You must tell OpenGL how to turn those barycentrically interpolated HSV values into RGB.
For this we introduce a fragment shader that treats the color vertex attribute as HSV rather than RGB.
#version 120
varying vec3 vertex_hsv; /* set this in an appropriate vertex shader from the vertex attribute data */

vec3 hsv2rgb(vec3 hsv)
{
    float h = hsv.x * 6.; /* H: 0. = 0°, 1. = 360° */
    float s = hsv.y;
    float v = hsv.z;
    float c = v * s; /* chroma */
    /* cx.x is the chroma, cx.y the second-largest component */
    vec2 cx = vec2(c, c * (1. - abs(mod(h, 2.) - 1.)));
    vec3 rgb = vec3(0., 0., 0.);
    if( h < 1. ) {
        rgb.rg = cx;
    } else if( h < 2. ) {
        rgb.gr = cx;
    } else if( h < 3. ) {
        rgb.gb = cx;
    } else if( h < 4. ) {
        rgb.bg = cx;
    } else if( h < 5. ) {
        rgb.br = cx;
    } else {
        rgb.rb = cx;
    }
    /* add m = v - chroma; note this is v - cx.x, not v - cx.y */
    return rgb + vec3(v - cx.x);
}

void main()
{
    gl_FragColor = vec4(hsv2rgb(vertex_hsv), 1.);
}
You can do this with a fragment shader. You draw a quad and apply your fragment shader, which does the coloring you want, to the quad. The way I would do this is to set the colors of the corners to the HSV values that you want, then in the fragment shader convert the interpolated color values from HSV back to RGB. For more information on fragment shaders, see the docs.

How do I get my textures to bind properly for multitexturing?

I'm trying to render colored text to the screen. I've got a texture containing a black (RGBA 0, 0, 0, 255) representation of the text to display, and I've got another texture containing the color pattern I want to render the text in. This should be a fairly simple multitexturing exercise, but I can't seem to get the second texture to work. Both textures are Rectangle textures, because the integer coordinate values are easier to work with.
Rendering code:
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, TextHandle);

glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, ColorsHandle);

glBegin(GL_QUADS);
    glMultiTexCoord2iARB(GL_TEXTURE0_ARB, 0, 0);
    glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left, colorRect.Top);
    glVertex2f(x, y);

    glMultiTexCoord2iARB(GL_TEXTURE0_ARB, 0, textRect.Height);
    glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left, colorRect.Top + colorRect.Height);
    glVertex2f(x, y + textRect.Height);

    glMultiTexCoord2iARB(GL_TEXTURE0_ARB, textRect.Width, textRect.Height);
    glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left + colorRect.Width, colorRect.Top + colorRect.Height);
    glVertex2f(x + textRect.Width, y + textRect.Height);

    glMultiTexCoord2iARB(GL_TEXTURE0_ARB, textRect.Width, 0);
    glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left + colorRect.Width, colorRect.Top);
    glVertex2f(x + textRect.Width, y);
glEnd();
Vertex shader:
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_TexCoord[1] = gl_MultiTexCoord1;
}
Fragment shader:
uniform sampler2DRect texAlpha;
uniform sampler2DRect texRGB;

void main()
{
    float alpha = texture2DRect(texAlpha, gl_TexCoord[0].st).a;
    vec3 rgb = texture2DRect(texRGB, gl_TexCoord[1].st).rgb;
    gl_FragColor = vec4(rgb, alpha);
}
This seems really straightforward, but it ends up rendering solid black text instead of colored text. I get the exact same result if the last line of the fragment shader reads gl_FragColor = texture2DRect(texAlpha, gl_TexCoord[0].st);. Changing the last line to gl_FragColor = texture2DRect(texRGB, gl_TexCoord[1].st); causes it to render nothing at all.
Based on this, it appears that calling texture2DRect on texRGB always returns (0, 0, 0, 0). I've made sure that GL_MULTISAMPLE is enabled and that the texture is bound on unit 1, but for whatever reason I don't seem to actually get access to it inside my fragment shader. What am I doing wrong?
Overall it looks fine. It is possible that your texcoords for unit 1 are messed up, causing sampling outside the colored portion of your texture.
Is your color texture fully filled with color?
What do you mean by "causes it to render nothing at all"? That should not happen unless the alpha channel in your color texture is set to 0.
Did you try the following code, to override the alpha channel?
gl_FragColor = vec4( texture2DRect(texRGB, gl_TexCoord[1].st).rgb, 1.0 );
Are you sure the font outline texture contains valid alpha values? You said that the texture is black and white, but you are using the alpha value! Instead of using the a component, try using the r one.
Blending affects fragment shader output: it blends the fragment color with the corresponding color already in the framebuffer.
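One more thing worth verifying, although it does not appear in the code shown (so this is a guess): sampler uniforms default to texture unit 0, so unless texAlpha and texRGB are explicitly assigned, both samplers read the text texture on unit 0, which would produce exactly this solid-black output. A C++ sketch:

// Point each sampler uniform at its texture unit once after linking
// (assumes a GL 2.0+ context and a linked program handle).
void bindSamplerUnits(GLuint program) {
    glUseProgram(program);
    glUniform1i(glGetUniformLocation(program, "texAlpha"), 0); // GL_TEXTURE0
    glUniform1i(glGetUniformLocation(program, "texRGB"), 1);   // GL_TEXTURE1
}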