I'm currently writing an OpenGL 3.1 (with GLSL version 330) application on Linux (NVIDIA 360M card, with the 313.0 nv driver) that has about 15k lines. My problem is that in one of my vertex shaders, I can experience drastic performance drops by making minimal changes in the code that should actually be no-ops.
For example:
// With this solution my program runs with 3-5 fps
for(int i = 0; i < 4; ++i) {
    vout.shadowCoord[i] = uShadowCP[i] * w_pos;
}

// But with this it runs with 30+ fps
vout.shadowCoord[0] = uShadowCP[0] * w_pos;
vout.shadowCoord[1] = uShadowCP[1] * w_pos;
vout.shadowCoord[2] = uShadowCP[2] * w_pos;
vout.shadowCoord[3] = uShadowCP[3] * w_pos;

// This works with 30+ fps too
vec4 shadowCoords[4];
for(int i = 0; i < 4; ++i) {
    shadowCoords[i] = uShadowCP[i] * w_pos;
}
for(int i = 0; i < 4; ++i) {
    vout.shadowCoord[i] = shadowCoords[i];
}
Or consider this:
uniform int uNumUsedShadowMaps = 4; // edit: I called this "random_uniform" in the original question
// 8 fps
for(int i = 0; i < min(uNumUsedShadowMaps, 4); ++i) {
    vout.shadowCoord[i] = vec4(1.0);
}

// 30+ fps
for(int i = 0; i < 4; ++i) {
    if(i < uNumUsedShadowMaps) {
        vout.shadowCoord[i] = vec4(1.0);
    } else {
        vout.shadowCoord[i] = vec4(0.0);
    }
}
See the entire shader code here, where this problem appeared:
http://pastebin.com/LK5CNJPD
Any idea about what could cause this would be appreciated.
I finally managed to find the source of the problem, and also found a solution to it.
But before jumping right to the solution, let me paste the most minimal shader code with which I could reproduce this 'bug'.
Vertex Shader:
#version 330

vec3 CountPosition(); // Irrelevant how it is implemented.

uniform mat4 uProjectionMatrix, uCameraMatrix;
uniform mat4 uShadowCP[4]; // Shadow camera-projection matrices (this declaration was missing from the snippet).

out VertexData {
    vec3 c_pos, w_pos;
    vec4 shadowCoord[4];
} vout;

void main() {
    vout.w_pos = CountPosition();
    vout.c_pos = (uCameraMatrix * vec4(vout.w_pos, 1.0)).xyz;
    vec4 w_pos = vec4(vout.w_pos, 1.0);

    // 20 fps
    for(int i = 0; i < 4; ++i) {
        vout.shadowCoord[i] = uShadowCP[i] * w_pos;
    }

    // 50 fps
    vout.shadowCoord[0] = uShadowCP[0] * w_pos;
    vout.shadowCoord[1] = uShadowCP[1] * w_pos;
    vout.shadowCoord[2] = uShadowCP[2] * w_pos;
    vout.shadowCoord[3] = uShadowCP[3] * w_pos;

    gl_Position = uProjectionMatrix * vec4(vout.c_pos, 1.0);
}
Fragment Shader:
#version 330

in VertexData {
    vec3 c_pos, w_pos;
    vec4 shadowCoord[4];
} vin;

out vec4 frag_color;

void main() {
    frag_color = vec4(1.0);
}
And the funny thing is that only a minimal modification of the vertex shader is needed to make both solutions run at 50 fps. The main function should be modified like this:
void main() {
    vec4 w_pos = vec4(CountPosition(), 1.0);
    vec4 c_pos = uCameraMatrix * w_pos;
    vout.w_pos = vec3(w_pos);
    vout.c_pos = vec3(c_pos);

    // 50 fps
    for(int i = 0; i < 4; ++i) {
        vout.shadowCoord[i] = uShadowCP[i] * w_pos;
    }

    // 50 fps
    vout.shadowCoord[0] = uShadowCP[0] * w_pos;
    vout.shadowCoord[1] = uShadowCP[1] * w_pos;
    vout.shadowCoord[2] = uShadowCP[2] * w_pos;
    vout.shadowCoord[3] = uShadowCP[3] * w_pos;

    gl_Position = uProjectionMatrix * c_pos;
}
The difference is that the upper code reads from the shader's out varyings, while the bottom one saves those values in temporary variables and only writes to the out varyings.
The conclusion:
Reading from a shader's out varying is often used as an optimisation to get away with one less temporary variable, or at least I have seen it in many places on the internet. Despite this, reading an out varying might actually be an invalid OpenGL operation, and might put the GL into an undefined state in which random changes in the code can trigger bad things.
The best part is that the GLSL 330 specification doesn't say anything about reading from an out varying that was previously written to. Probably because it's not something I should be doing.
P.S.
Also note that the second example in the original code might look totally different, but it behaves exactly the same way in this small snippet: if the out varyings are read, it gets quite slow with i < min(uNumUsedShadowMaps, 4) as the condition in the for loop; however, if the out varyings are only written, the condition makes no difference in performance, and the i < min(uNumUsedShadowMaps, 4) version runs at 50 fps too.
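For completeness, here is a minimal sketch of the safe write-only pattern combined with the dynamic condition (the vec4(0.0) fallback for the unused slots is my own addition, not part of the original code):
// Sketch: compute into locals first, then write each out varying exactly once.
vec4 tmpShadowCoords[4] = vec4[4](vec4(0.0), vec4(0.0), vec4(0.0), vec4(0.0));
for(int i = 0; i < min(uNumUsedShadowMaps, 4); ++i) {
    tmpShadowCoords[i] = uShadowCP[i] * w_pos;
}
for(int i = 0; i < 4; ++i) {
    vout.shadowCoord[i] = tmpShadowCoords[i]; // the varying is only ever written
}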
Related
I have a WebGL blur shader:
precision mediump float;
precision mediump int;

uniform sampler2D u_image;
uniform float blur;
uniform int u_horizontalpass; // 0 or 1 to indicate vertical or horizontal pass
uniform float sigma;          // The sigma value for the gaussian function: higher value means more blur
                              // A good value for 9x9 is around 3 to 5
                              // A good value for 7x7 is around 2.5 to 4
                              // A good value for 5x5 is around 2 to 3.5
                              // ... play around with this based on what you need :)

varying vec4 v_texCoord;

const vec2 texOffset = vec2(1.0, 1.0);
// uniform vec2 texOffset;
const float PI = 3.14159265;

void main() {
    vec2 p = v_texCoord.st;
    float numBlurPixelsPerSide = blur / 2.0;

    // Incremental Gaussian Coefficient Calculation (See GPU Gems 3 pp. 877 - 889)
    vec3 incrementalGaussian;
    incrementalGaussian.x = 1.0 / (sqrt(2.0 * PI) * sigma);
    incrementalGaussian.y = exp(-0.5 / (sigma * sigma));
    incrementalGaussian.z = incrementalGaussian.y * incrementalGaussian.y;

    vec4 avgValue = vec4(0.0, 0.0, 0.0, 0.0);
    float coefficientSum = 0.0;

    // Take the central sample first...
    avgValue += texture2D(u_image, p) * incrementalGaussian.x;
    coefficientSum += incrementalGaussian.x;
    incrementalGaussian.xy *= incrementalGaussian.yz;

    // Go through the remaining 8 vertical samples (4 on each side of the center)
    for (float i = 1.0; i <= numBlurPixelsPerSide; i += 1.0) {
        avgValue += texture2D(u_image, p - i * texOffset) * incrementalGaussian.x;
        avgValue += texture2D(u_image, p + i * texOffset) * incrementalGaussian.x;
        coefficientSum += 2.0 * incrementalGaussian.x;
        incrementalGaussian.xy *= incrementalGaussian.yz;
    }

    gl_FragColor = avgValue / coefficientSum;
}
When I build, I get the following error message:
webgl-renderer.js?2eb3:137 Uncaught could not compile shader:ERROR:
0:38: 'i' : Loop index cannot be compared with non-constant expression
I have also tried using just the uniform float blur to compare i against. Is there any way to fix this?
The problem is further detailed here: https://www.khronos.org/webgl/public-mailing-list/archives/1012/msg00063.php
The solution I've found looking around is to only use a constant expression when comparing against the loop variable. This doesn't fit what I need to do, which is to vary how many times I loop based on the blur radius.
Any thoughts on this?
This happens because on some hardware, GLSL loops are unrolled into native GPU instructions. This means there needs to be a hard upper limit on the number of passes through the for loop, which governs how many copies of the loop's inner code will be generated. If you replace numBlurPixelsPerSide with a const float or even a #define directive, the shader compiler can determine the number of passes at compile time and generate the code accordingly. But with a uniform there, the upper limit is not known at compile time.
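For illustration, here's a minimal sketch of the blur loop with the bound baked in at compile time (the value 4.0 is an arbitrary assumption; in practice you'd generate one shader per radius):
// The compiler can unroll this: the bound is a compile-time constant.
#define NUM_BLUR_PIXELS_PER_SIDE 4.0
for (float i = 1.0; i <= NUM_BLUR_PIXELS_PER_SIDE; i += 1.0) {
    avgValue += texture2D(u_image, p - i * texOffset) * incrementalGaussian.x;
    avgValue += texture2D(u_image, p + i * texOffset) * incrementalGaussian.x;
    coefficientSum += 2.0 * incrementalGaussian.x;
    incrementalGaussian.xy *= incrementalGaussian.yz;
}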
There's an interesting wrinkle in this rule: you're allowed to break out of a for loop, or return from it early, even though the maximum number of iterations must be discernible at compile time. For example, consider this tiny Mandelbrot shader. It's hardly the prettiest fractal on GLSL Sandbox, but I chose it for its small size:
precision mediump float;

uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
varying vec2 surfacePosition;

const float max_its = 100.;

float mandelbrot(vec2 z){
    vec2 c = z;
    for(float i = 0.; i < max_its; i++){ // for loop is here.
        if(dot(z,z) > 4.) return i;      // conditional early return here.
        z = vec2(z.x*z.x - z.y*z.y, 2.*z.x*z.y) + c;
    }
    return max_its;
}

void main( void ) {
    vec2 p = surfacePosition;
    gl_FragColor = vec4(mandelbrot(p)/max_its);
}
In this example, max_its is a const, so the compiler knows the upper limit and can unroll this loop if it needs to. Inside the loop, a return statement offers a way to leave the loop early for pixels that are outside the Mandelbrot set.
You still don't want to set the max iterations too high, as this can produce a lot of GPU instructions and possibly hurt performance.
Try something like this:
const float MAX_ITERATIONS = 100.0;

// Go through the remaining 8 vertical samples (4 on each side of the center)
for (float i = 1.0; i <= MAX_ITERATIONS; i += 1.0) {
    if (i >= numBlurPixelsPerSide) { break; }
    avgValue += texture2D(u_image, p - i * texOffset) * incrementalGaussian.x;
    avgValue += texture2D(u_image, p + i * texOffset) * incrementalGaussian.x;
    coefficientSum += 2.0 * incrementalGaussian.x;
    incrementalGaussian.xy *= incrementalGaussian.yz;
}
Sometimes you can use my very simple solution to this issue. Here's a fragment of my shader source code:
const int cloudPointsWidth = %s;
for ( int i = 0; i < cloudPointsWidth; i++ ) {
    // TODO something
}
As written, this produces a '%' : syntax error, but I replace %s with a number in my JavaScript code before using the shader. For example:
vertexCode = vertexCode.replace( '%s', 10 );
vertexCode is my shader source code. Every time I want to change cloudPointsWidth, I destroy the old shader and create a new one with the new cloudPointsWidth.
Hope this solution can help you.
You can just do a for loop with a large constant bound and use a break:
// n is assumed to be the dynamic iteration count (e.g. a uniform int).
// The break check comes before the body, so the body runs exactly n times.
for(int i = 0; i < 1000000; ++i)
{
    if(i >= n){
        break;
    }
    // your code here
}
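Applied to the blur shader from the question, n would correspond to numBlurPixelsPerSide, and the hard-coded bound only has to be at least as large as any blur radius you expect to use.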
I've had a similar problem with an image downsampling shader. The code is basically the same:
for (int dx = -2 * SCALE_FACTOR; dx < 2 * SCALE_FACTOR; dx += 2) {
    for (int dy = -2 * SCALE_FACTOR; dy < 2 * SCALE_FACTOR; dy += 2) {
        /* accumulate fragment's color */
    }
}
What I ended up doing is using the preprocessor and creating separate shader programs for every SCALE_FACTOR used (luckily, only 4 were needed). To achieve that, a small helper function was implemented to add #define ... statements to the shader code:
function insertDefines (shaderCode, defines) {
    var defineString = '';

    for (var define in defines) {
        if (defines.hasOwnProperty(define)) {
            defineString += '#define ' + define + ' ' + defines[define] + '\n';
        }
    }

    var versionIdx = shaderCode.indexOf('#version');
    if (versionIdx == -1) {
        return defineString + shaderCode;
    }

    var nextLineIdx = shaderCode.indexOf('\n', versionIdx) + 1;
    return shaderCode.slice(0, nextLineIdx) +
        defineString +
        shaderCode.slice(nextLineIdx);
}
The implementation is a bit tricky, because if the code already has a #version preprocessor statement in it, all other statements have to follow it.
Then I added a check for SCALE_FACTOR being defined:
#ifndef SCALE_FACTOR
# error SCALE_FACTOR is undefined
#endif
And in my JavaScript code I've done something like this:
var SCALE_FACTORS = [4, 8, 16, 32],
    shaderCode, // the code of my shader
    shaderPrograms = SCALE_FACTORS.map(function (factor) {
        var codeWithDefines = insertDefines(shaderCode, { SCALE_FACTOR: factor });
        /* compile shaders, link program, return */
    });
I use OpenGL ES 3 on Android and solved this problem by adding an extension directive at the beginning of the program, like this:
#extension GL_EXT_gpu_shader5 : require
I don't know whether it works in WebGL, but you can try it. Hope it helps.
You can also use template literals to set the length of the loop:
onBeforeCompile(shader) {
    const array = [1, 2, 3, 4, 5];
    shader.uniforms.myArray = { value: array };

    let token = "#include <begin_vertex>";
    const insert = `
        uniform float myArray[${array.length}];
        for ( int i = 0; i < ${array.length}; i++ ) {
            float test = myArray[ i ];
        }
    `;

    shader.vertexShader = shader.vertexShader.replace(token, token + insert);
}
I have implemented a normal mapping shader in my OpenGL/GLSL application. To compute the bump and shadow factor in the fragment shader, I need to send some data from the vertex shader, like the light direction in tangent space and the vertex position in light space for each light of my scene. So to do the job I need to declare 2 output variables like below (vertex shader):
#define MAX_LIGHT_COUNT 5
[...]
out vec4 ShadowCoords[MAX_LIGHT_COUNT]; // Vertex position in light space
out vec3 lightDir_TS[MAX_LIGHT_COUNT];  // Light direction in tangent space

uniform int LightCount;
[...]
for (int idx = 0; idx < LightCount; idx++)
{
    [...]
    lightDir_TS[idx] = TBN * lightDir_CS;
    ShadowCoords[idx] = ShadowInfos[idx].ShadowMatrix * VertexPosition;
    [...]
}
And in the fragment shader I recover these variables thanks to the following input declarations:
in vec3 lightDir_TS[MAX_LIGHT_COUNT];
in vec4 ShadowCoords[MAX_LIGHT_COUNT];
The rest of the code is not important for explaining my problem.
So now here's the result in an image:
As you can see, so far everything is OK!
But now, for the sake of simplicity, I want to use a single output declaration rather than 2! So the logical choice is to use an input/output data block like below:
#define MAX_LIGHT_COUNT 5
[...]
out LightData_VS
{
    vec3 lightDir_TS;
    vec4 ShadowCoords;
} LightData_OUT[MAX_LIGHT_COUNT];

uniform int LightCount;
[...]
for (int idx = 0; idx < LightCount; idx++)
{
    [...]
    LightData_OUT[idx].lightDir_TS = TBN * lightDir_CS;
    LightData_OUT[idx].ShadowCoords = ShadowInfos[idx].ShadowMatrix * VertexPosition;
    [...]
}
And in the fragment shader the input data block:
in LightData_VS
{
    vec3 lightDir_TS;
    vec4 ShadowCoords;
} LightData_IN[MAX_LIGHT_COUNT];
But this time, when I execute my program, I get the following display:
As you can see, the specular light is not the same as in the first case above!
However, I noticed that if I replace the line:
for (int idx = 0; idx < LightCount; idx++) //Use 'LightCount' uniform variable
by the following one:
for (int idx = 0; idx < 1; idx++) //'1' value hard coded
or
int count = 1;
for (int idx = 0; idx < count; idx++)
the shading result is correct!
The problem seems to come from the fact that I use a uniform variable in the 'for' condition. However, this works when I use separate output variables, as in the first case!
I checked: the uniform variable 'LightCount' is correct and equal to '1'. (I tried an unsigned int data type without success, and it's the same thing using a 'while' loop.)
How can you explain such a result?
I use:
OpenGL: 4.4.0 NVIDIA driver 344.75
GLSL: 4.40 NVIDIA via Cg compiler
I have already used input/output data blocks without problems, but those were not arrays, just simple blocks like below:
[in/out] VertexData_VS
{
    vec3 viewDir_TS;
    vec4 Position_CS;
    vec3 Normal_CS;
    vec2 TexCoords;
} VertexData_[IN/OUT];
Do you think it's not possible to use input/output data blocks as arrays in a loop with a uniform variable in the for condition?
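A workaround I could imagine, borrowing the constant-bound pattern from the loop answers above (an untested sketch using the names from my code), would be:
// Loop to the compile-time constant bound and guard the body,
// so the 'for' condition itself never depends on a uniform.
for (int idx = 0; idx < MAX_LIGHT_COUNT; idx++)
{
    if (idx >= LightCount) { break; }
    LightData_OUT[idx].lightDir_TS = TBN * lightDir_CS;
    LightData_OUT[idx].ShadowCoords = ShadowInfos[idx].ShadowMatrix * VertexPosition;
}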
UPDATE
I tried using 2 vec4s in the data structure (for the sake of data alignment, like for a uniform block, where data needs to be aligned on a vec4 boundary), like below:
[in/out] LightData_VS
{
    vec4 lightDir_TS; // vec4((TBN * lightDir_CS), 0.0f);
    vec4 ShadowCoords;
} LightData_[IN/OUT][MAX_LIGHT_COUNT];
without success...
UPDATE 2
Here's the code concerning the shader compilation log:
core::FileSystem file(filename);
std::ifstream ifs(file.GetFullName());

if (ifs)
{
    GLint compilationError = 0;
    std::string fileContent, line;
    char const *sourceCode;

    while (std::getline(ifs, line, '\n'))
        fileContent.append(line + '\n');
    sourceCode = fileContent.c_str();
    ifs.close();

    this->m_Handle = glCreateShader(this->m_Type);

    glShaderSource(this->m_Handle, 1, &sourceCode, 0);
    glCompileShader(this->m_Handle);
    glGetShaderiv(this->m_Handle, GL_COMPILE_STATUS, &compilationError);

    if (compilationError != GL_TRUE)
    {
        GLint errorSize = 0;
        glGetShaderiv(this->m_Handle, GL_INFO_LOG_LENGTH, &errorSize);

        char *errorStr = new char[errorSize + 1];
        glGetShaderInfoLog(this->m_Handle, errorSize, &errorSize, errorStr);
        errorStr[errorSize] = '\0';

        std::cout << errorStr << std::endl;

        delete[] errorStr;
        glDeleteShader(this->m_Handle);
    }
}
And the code concerning the program log:
GLint errorLink = 0;
glGetProgramiv(this->m_Handle, GL_LINK_STATUS, &errorLink);

if (errorLink != GL_TRUE)
{
    GLint sizeError = 0;
    glGetProgramiv(this->m_Handle, GL_INFO_LOG_LENGTH, &sizeError);

    char *error = new char[sizeError + 1];
    // Use glGetProgramInfoLog for program objects (glGetShaderInfoLog only works
    // on shader objects and silently yields nothing here).
    glGetProgramInfoLog(this->m_Handle, sizeError, &sizeError, error);
    error[sizeError] = '\0';

    std::cerr << error << std::endl;

    glDeleteProgram(this->m_Handle);
    delete[] error;
}
Unfortunately, I don't have any error log!
I tried to implement texture splatting in GLSL, but the for loop is acting weird and gives different results for code that does exactly the same thing.
Code 1:
for(int i = 0; i < 5; ++i) {
    if(i == 1) {
        float fade = texture2D(alphaTextures[i], texCoord.st).r;
        vec4 texCol = texture2D(textures[i], texCoord.ba);
        texColor = mix(texColor, texCol, fade);
    }
}
Code 2:
for(int i = 0; i < 6; ++i) {
    if(i == 1) {
        float fade = texture2D(alphaTextures[i], texCoord.st).r;
        vec4 texCol = texture2D(textures[i], texCoord.ba);
        texColor = mix(texColor, texCol, fade);
    }
}
The if statement is just for testing purposes, so both versions should give the same result. The only difference is the loop condition. I really have no idea why only Code 1 gives the correct result. Here are two pictures:
Code1
Code2
The result should be like in picture 1.
According to this answer, you can't iterate over a sampler array: the index alphaTextures[i] is invalid; you can only use a constant index such as alphaTextures[1].
This changes in GLSL 4.00+ (OpenGL 4.0+), where you can use a variable index, but it cannot come from a shader input or derived value.
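For illustration, a sketch of the unrolled workaround under the pre-4.00 rules, using the uniforms from the question (and assuming the real splatting pass blends every layer, which the test code only hints at):
// Sampler arrays may only be indexed with constants here, so unroll the loop.
float fade0 = texture2D(alphaTextures[0], texCoord.st).r;
texColor = mix(texColor, texture2D(textures[0], texCoord.ba), fade0);

float fade1 = texture2D(alphaTextures[1], texCoord.st).r;
texColor = mix(texColor, texture2D(textures[1], texCoord.ba), fade1);
// ...and so on for the remaining layers.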
One reason could be that graphics processors don't like branched texture fetches.
Try this instead:
for(int i = 0; i < 6; ++i) {
    float fade = texture2D(alphaTextures[i], texCoord.st).r;
    vec4 texCol = texture2D(textures[i], texCoord.ba);
    if(i == 1) {
        texColor = mix(texColor, texCol, fade);
    }
}
(Disclaimer: I am only guessing, and this error is really weird.)
I have implemented OIT based on the demo in the "OpenGL Programming Guide", 8th edition (the Red Book). Now I need to add MSAA. Just enabling MSAA screws up the transparency, as the layered pixels are resolved X times, where X equals the number of sample levels. I have read this article on how it is done with DirectX, where they say the pixel shader should be run per sample and not per pixel. How is it done in OpenGL?
I won't post the whole implementation here, just the fragment shader chunk in which the final resolution of the layered pixels occurs:
vec4 final_color = vec4(0,0,0,0);

for (i = 0; i < fragment_count; i++)
{
    /// Retrieving the next fragment from the stack:
    vec4 modulator = unpackUnorm4x8(fragment_list[i].y);

    /// Perform alpha blending:
    final_color = mix(final_color, modulator, modulator.a);
}

color = final_color;
Update:
I have tried the solution proposed here, but it still doesn't work. Here are the full fragment shaders for the list build and resolve passes:
List build pass:
#version 420 core

layout (early_fragment_tests) in;

layout (binding = 0, r32ui) uniform uimage2D head_pointer_image;
layout (binding = 1, rgba32ui) uniform writeonly uimageBuffer list_buffer;
layout (binding = 0, offset = 0) uniform atomic_uint list_counter;

layout (location = 0) out vec4 color; // dummy output

in vec3 frag_position;
in vec3 frag_normal;
in vec4 surface_color;
in int gl_SampleMaskIn[];

uniform vec3 light_position = vec3(40.0, 20.0, 100.0);

void main(void)
{
    uint index;
    uint old_head;
    uvec4 item;
    vec4 frag_color;

    index = atomicCounterIncrement(list_counter);
    old_head = imageAtomicExchange(head_pointer_image, ivec2(gl_FragCoord.xy), uint(index));

    vec4 modulator = surface_color;

    item.x = old_head;
    item.y = packUnorm4x8(modulator);
    item.z = floatBitsToUint(gl_FragCoord.z);
    item.w = int(gl_SampleMaskIn[0]);

    imageStore(list_buffer, int(index), item);

    frag_color = modulator;
    color = frag_color;
}
List resolve:
#version 420 core

// The per-pixel image containing the head pointers
layout (binding = 0, r32ui) uniform uimage2D head_pointer_image;

// Buffer containing linked lists of fragments
layout (binding = 1, rgba32ui) uniform uimageBuffer list_buffer;

// This is the output color
layout (location = 0) out vec4 color;

// This is the maximum number of overlapping fragments allowed
#define MAX_FRAGMENTS 40

// Temporary array used for sorting fragments
uvec4 fragment_list[MAX_FRAGMENTS];

void main(void)
{
    uint current_index;
    uint fragment_count = 0;

    current_index = imageLoad(head_pointer_image, ivec2(gl_FragCoord).xy).x;

    while (current_index != 0 && fragment_count < MAX_FRAGMENTS)
    {
        uvec4 fragment = imageLoad(list_buffer, int(current_index));
        int coverage = int(fragment.w);
        //if((coverage & (1 << gl_SampleID)) != 0) {
        fragment_list[fragment_count] = fragment;
        current_index = fragment.x;
        //}
        fragment_count++;
    }

    uint i, j;

    if (fragment_count > 1)
    {
        for (i = 0; i < fragment_count - 1; i++)
        {
            for (j = i + 1; j < fragment_count; j++)
            {
                uvec4 fragment1 = fragment_list[i];
                uvec4 fragment2 = fragment_list[j];

                float depth1 = uintBitsToFloat(fragment1.z);
                float depth2 = uintBitsToFloat(fragment2.z);

                if (depth1 < depth2)
                {
                    fragment_list[i] = fragment2;
                    fragment_list[j] = fragment1;
                }
            }
        }
    }

    vec4 final_color = vec4(0,0,0,0);

    for (i = 0; i < fragment_count; i++)
    {
        vec4 modulator = unpackUnorm4x8(fragment_list[i].y);
        final_color = mix(final_color, modulator, modulator.a);
    }

    color = final_color;
}
Without knowing how your code actually works, you can do it very much the same way that your linked DX11 demo does, since OpenGL provides the same features needed.
So in the first shader, which just stores all the rendered fragments, you also store the sample coverage mask for each fragment (along with the color and depth, of course). This is given as the fragment shader input variable int gl_SampleMaskIn[], and for each sample with ID 32*i+j, bit j of gl_SampleMaskIn[i] is set if the fragment covers that sample (since you probably won't use more than 32x MSAA, you can usually just use gl_SampleMaskIn[0] and only need to store a single int as the coverage mask).
...
fragment.color = inColor;
fragment.depth = gl_FragCoord.z;
fragment.coverage = gl_SampleMaskIn[0];
...
Then the final sort-and-render shader is run for each sample instead of just for each fragment. This is achieved implicitly by making use of the input variable int gl_SampleID, which gives us the ID of the current sample. So what we do in this shader (in addition to the non-MSAA version) is make the sorting step account for the sample, by only adding a fragment to the final (to-be-sorted) fragment list if the current sample is actually covered by that fragment.
What was something like (beware, pseudocode extrapolated from your small snippet and the DX-link):
while(fragment.next != 0xFFFFFFFF)
{
    fragment_list[count++] = vec2(fragment.depth, fragment.color);
    fragment = fragments[fragment.next];
}
is now
while(fragment.next != 0xFFFFFFFF)
{
    if(fragment.coverage & (1 << gl_SampleID))
        fragment_list[count++] = vec2(fragment.depth, fragment.color);
    fragment = fragments[fragment.next];
}
Or something along those lines.
EDIT: Regarding your updated code: you have to increment fragment_count only inside the if(covered) block, since we don't want to add the fragment to the list if the sample is not covered. Incrementing it unconditionally will likely produce the artifacts you see at the edges, which are exactly the regions where MSAA (and thus coverage) comes into play.
On the other hand, the list pointer has to be advanced (current_index = fragment.x) in every loop iteration, not only when the sample is covered, as otherwise it can result in an infinite loop, as in your case. So your code should look like:
while (current_index != 0 && fragment_count < MAX_FRAGMENTS)
{
    uvec4 fragment = imageLoad(list_buffer, int(current_index));
    uint coverage = fragment.w;
    if((coverage & (1 << gl_SampleID)) != 0)
        fragment_list[fragment_count++] = fragment;
    current_index = fragment.x;
}
The OpenGL 4.3 spec states in section 7.1, about the gl_SampleID built-in variable:
Any static use of this variable in a fragment shader causes the entire shader to be evaluated per-sample.
(This was already the case with ARB_sample_shading, and it also applies to gl_SamplePosition or any custom variable declared with the sample qualifier.)
Therefore it is quite automatic, because you will probably need the SampleID anyway.
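To illustrate the rule, here's a minimal sketch (a standalone toy fragment shader, not taken from the code above): merely referencing gl_SampleID, as the coverage test does, forces per-sample evaluation.
#version 420 core

layout (location = 0) out vec4 color;

void main(void)
{
    // The static use of gl_SampleID makes this whole shader run
    // once per covered sample instead of once per fragment.
    color = (gl_SampleID == 0) ? vec4(1.0) : vec4(0.5, 0.5, 0.5, 1.0);
}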