Order independent transparency with MSAA - opengl

I have implemented OIT based on the demo in the "OpenGL Programming Guide", 8th edition (the red book). Now I need to add MSAA. Simply enabling MSAA breaks the transparency, because the layered pixels are resolved a number of times equal to the sample count. I have read this article on how it is done with DirectX, where they say the pixel shader should be run per sample and not per pixel. How is it done in OpenGL?
I won't post the whole implementation here, only the fragment shader chunk in which the final resolve of the layered pixels occurs:
vec4 final_color = vec4(0, 0, 0, 0);
for (i = 0; i < fragment_count; i++)
{
    /// Retrieve the next fragment from the stack:
    vec4 modulator = unpackUnorm4x8(fragment_list[i].y);
    /// Perform alpha blending:
    final_color = mix(final_color, modulator, modulator.a);
}
color = final_color;
Update:
I have tried the solution proposed here, but it still doesn't work. Here are the full fragment shaders for the list build and resolve passes:
List build pass:
#version 420 core

layout (early_fragment_tests) in;

layout (binding = 0, r32ui) uniform uimage2D head_pointer_image;
layout (binding = 1, rgba32ui) uniform writeonly uimageBuffer list_buffer;
layout (binding = 0, offset = 0) uniform atomic_uint list_counter;

layout (location = 0) out vec4 color; // dummy output

in vec3 frag_position;
in vec3 frag_normal;
in vec4 surface_color;
in int gl_SampleMaskIn[];

uniform vec3 light_position = vec3(40.0, 20.0, 100.0);

void main(void)
{
    uint index;
    uint old_head;
    uvec4 item;
    vec4 frag_color;

    index = atomicCounterIncrement(list_counter);
    old_head = imageAtomicExchange(head_pointer_image, ivec2(gl_FragCoord.xy), uint(index));

    vec4 modulator = surface_color;
    item.x = old_head;
    item.y = packUnorm4x8(modulator);
    item.z = floatBitsToUint(gl_FragCoord.z);
    item.w = int(gl_SampleMaskIn[0]);

    imageStore(list_buffer, int(index), item);

    frag_color = modulator;
    color = frag_color;
}
List resolve pass:
#version 420 core

// The per-pixel image containing the head pointers
layout (binding = 0, r32ui) uniform uimage2D head_pointer_image;
// Buffer containing linked lists of fragments
layout (binding = 1, rgba32ui) uniform uimageBuffer list_buffer;

// This is the output color
layout (location = 0) out vec4 color;

// This is the maximum number of overlapping fragments allowed
#define MAX_FRAGMENTS 40

// Temporary array used for sorting fragments
uvec4 fragment_list[MAX_FRAGMENTS];

void main(void)
{
    uint current_index;
    uint fragment_count = 0;

    current_index = imageLoad(head_pointer_image, ivec2(gl_FragCoord.xy)).x;

    while (current_index != 0 && fragment_count < MAX_FRAGMENTS)
    {
        uvec4 fragment = imageLoad(list_buffer, int(current_index));
        int coverage = int(fragment.w);
        //if ((coverage & (1 << gl_SampleID)) != 0) {
        fragment_list[fragment_count] = fragment;
        current_index = fragment.x;
        //}
        fragment_count++;
    }

    uint i, j;
    if (fragment_count > 1)
    {
        for (i = 0; i < fragment_count - 1; i++)
        {
            for (j = i + 1; j < fragment_count; j++)
            {
                uvec4 fragment1 = fragment_list[i];
                uvec4 fragment2 = fragment_list[j];

                float depth1 = uintBitsToFloat(fragment1.z);
                float depth2 = uintBitsToFloat(fragment2.z);

                if (depth1 < depth2)
                {
                    fragment_list[i] = fragment2;
                    fragment_list[j] = fragment1;
                }
            }
        }
    }

    vec4 final_color = vec4(0, 0, 0, 0);
    for (i = 0; i < fragment_count; i++)
    {
        vec4 modulator = unpackUnorm4x8(fragment_list[i].y);
        final_color = mix(final_color, modulator, modulator.a);
    }

    color = final_color;
}

Without knowing how your code actually works, you can do it very much the same way that your linked DX11 demo does, since OpenGL provides the same features needed.
So in the first shader, which just stores all the rendered fragments, you also store the sample coverage mask for each fragment (along with the color and depth, of course). This is provided as the fragment shader input variable int gl_SampleMaskIn[], and for each sample with ID 32*i+j, bit j of gl_SampleMaskIn[i] is set if the fragment covers that sample (since you probably won't use more than 32x MSAA, you can usually just use gl_SampleMaskIn[0] and only need to store a single int as the coverage mask).
...
fragment.color = inColor;
fragment.depth = gl_FragCoord.z;
fragment.coverage = gl_SampleMaskIn[0];
...
Then the final sort-and-resolve shader is run for each sample instead of once per fragment. This happens implicitly through the use of the input variable int gl_SampleID, which gives us the ID of the current sample. So what we do in this shader (in addition to the non-MSAA version) is make the list-building step account for the sample, by only adding a fragment to the final (to-be-sorted) fragment list if the current sample is actually covered by that fragment.
What was something like (beware: pseudocode extrapolated from your small snippet and the DX link):
while (fragment.next != 0xFFFFFFFF)
{
    fragment_list[count++] = vec2(fragment.depth, fragment.color);
    fragment = fragments[fragment.next];
}
is now
while (fragment.next != 0xFFFFFFFF)
{
    if (fragment.coverage & (1 << gl_SampleID))
        fragment_list[count++] = vec2(fragment.depth, fragment.color);
    fragment = fragments[fragment.next];
}
Or something along those lines.
EDIT: Regarding your updated code: you have to increment fragment_count only inside the if(covered) block, since we don't want to add the fragment to the list if the sample is not covered. Incrementing it unconditionally will likely produce the artifacts you see at the edges, which are exactly the regions where MSAA (and thus the coverage) comes into play.
On the other hand, the list pointer has to be advanced (current_index = fragment.x) in every loop iteration, not only when the sample is covered; otherwise you can get an infinite loop, as in your case. So your code should look like:
while (current_index != 0 && fragment_count < MAX_FRAGMENTS)
{
    uvec4 fragment = imageLoad(list_buffer, int(current_index));
    uint coverage = fragment.w;
    if ((coverage & (1 << gl_SampleID)) != 0)
        fragment_list[fragment_count++] = fragment;
    current_index = fragment.x;
}

The OpenGL 4.3 spec states in section 7.1, about the gl_SampleID built-in variable:
Any static use of this variable in a fragment shader causes the entire shader to be evaluated per-sample.
(This was already the case with the ARB_sample_shading extension, and it also applies to gl_SamplePosition or to any custom variable declared with the sample qualifier.)
Therefore it is quite automatic, because you will probably need gl_SampleID anyway.

Related

I'm experiencing very slow OpenGL compute shader compilation (10+ minutes) when using larger work groups, is there anything I can do to speed it up?

So, I'm encountering a really bizarre (at least to me, as a compute shader noob) phenomenon when I compile my compute shader and check the result with glGetShaderiv(m_shaderID, GL_COMPILE_STATUS, &status). Inexplicably, my compute shader takes much longer to compile when I increase the size of my work groups! With one-dimensional work groups it compiles in less than a second, but when I increase the work group size to 4x1x6, compilation takes 10+ minutes! How strange.
For background, I'm trying to implement a light clustering algorithm (essentially the one shown here: http://www.aortiz.me/2018/12/21/CG.html#tiled-shading--forward), and my compute shader is this monster:
// TODO: Figure out optimal tile size, currently using a 16x9x24 subdivision
#define FLT_MAX 3.402823466e+38
#define FLT_MIN 1.175494351e-38
#define DBL_MAX 1.7976931348623158e+308
#define DBL_MIN 2.2250738585072014e-308

layout(local_size_x = 4, local_size_y = 9, local_size_z = 4) in;

// TODO: Change to reflect my light structure
// struct PointLight{
//     vec4 position;
//     vec4 color;
//     uint enabled;
//     float intensity;
//     float range;
// };

// TODO: Pack this more efficiently
struct Light {
    vec4 position;
    vec4 direction;
    vec4 ambientColor;
    vec4 diffuseColor;
    vec4 specularColor;
    vec4 attributes;
    vec4 intensity;
    ivec4 typeIndexAndFlags;
    // uint flags;
};

// Array containing offset and number of lights in a cluster
struct LightGrid {
    uint offset;
    uint count;
};

struct VolumeTileAABB {
    vec4 minPoint;
    vec4 maxPoint;
};

layout(std430, binding = 0) readonly buffer LightBuffer {
    Light data[];
} lightBuffer;

layout(std430, binding = 1) buffer clusterAABB {
    VolumeTileAABB cluster[];
};

layout(std430, binding = 2) buffer screenToView {
    mat4 inverseProjection;
    uvec4 tileSizes;
    uvec2 screenDimensions;
};

// layout(std430, binding = 3) buffer lightSSBO{
//     PointLight pointLight[];
// };

// SSBO of active light indices
layout(std430, binding = 4) buffer lightIndexSSBO {
    uint globalLightIndexList[];
};

layout(std430, binding = 5) buffer lightGridSSBO {
    LightGrid lightGrid[];
};

layout(std430, binding = 6) buffer globalIndexCountSSBO {
    uint globalIndexCount;
};

// Shared variables, shared between all invocations WITHIN A WORK GROUP
// TODO: See if I can use gl_WorkGroupSize for this, gl_WorkGroupSize.x * gl_WorkGroupSize.y * gl_WorkGroupSize.z
// A group-shared array which contains all the lights being evaluated; size is the thread count
shared Light sharedLights[4*9*4];

uniform mat4 viewMatrix;

bool testSphereAABB(uint light, uint tile);
float sqDistPointAABB(vec3 point, uint tile);
bool testConeAABB(uint light, uint tile);
float getLightRange(uint lightIndex);
bool isEnabled(uint lightIndex);

// Runs in batches of multiple Z slices at once
// In this implementation, 6 batches, since each thread group contains four z slices (24/4=6)
// We begin by each thread representing a cluster
// Then in the light traversal loop they change to representing lights
// Then change again near the end to represent clusters
// NOTE: Tiles actually mean clusters, it's just a legacy name from tiled shading
void main() {
    // Reset every frame
    globalIndexCount = 0; // How many lights are active in this scene
    uint threadCount = gl_WorkGroupSize.x * gl_WorkGroupSize.y * gl_WorkGroupSize.z; // Number of threads in a group, same as local_size_x * local_size_y * local_size_z
    uint lightCount = lightBuffer.data.length(); // Number of total lights in the scene
    uint numBatches = uint((lightCount + threadCount - 1) / threadCount); // Number of groups of lights that will be completed, i.e., number of passes

    uint tileIndex = gl_LocalInvocationIndex + gl_WorkGroupSize.x * gl_WorkGroupSize.y * gl_WorkGroupSize.z * gl_WorkGroupID.z;
    // uint tileIndex = gl_GlobalInvocationID; // doesn't work, is uvec3

    // Local thread variables
    uint visibleLightCount = 0;
    uint visibleLightIndices[100]; // local light index list, to be transferred to global list

    // Every light is being checked against every cluster in the view frustum
    // TODO: Perform active cluster determination

    // Each individual thread will be responsible for loading a light and writing it to shared memory so other threads can read it
    for (uint batch = 0; batch < numBatches; ++batch) {
        uint lightIndex = batch * threadCount + gl_LocalInvocationIndex;

        // Prevent overflow by clamping to last light which is always null
        lightIndex = min(lightIndex, lightCount);

        // Populating shared light array
        // NOTE: It is VERY important that lightBuffer.data not be referenced after this point,
        // since that is not thread-safe
        sharedLights[gl_LocalInvocationIndex] = lightBuffer.data[lightIndex];
        barrier(); // Synchronize read/writes between invocations within a work group

        // Iterating within the current batch of lights
        for (uint light = 0; light < threadCount; ++light) {
            if (isEnabled(light)) {
                uint lightType = uint(sharedLights[light].typeIndexAndFlags[0]);
                if (lightType == 0) {
                    // Point light
                    if (testSphereAABB(light, tileIndex)) {
                        visibleLightIndices[visibleLightCount] = batch * threadCount + light;
                        visibleLightCount += 1;
                    }
                }
                else if (lightType == 1) {
                    // Directional light
                    visibleLightIndices[visibleLightCount] = batch * threadCount + light;
                    visibleLightCount += 1;
                }
                else if (lightType == 2) {
                    // Spot light
                    if (testConeAABB(light, tileIndex)) {
                        visibleLightIndices[visibleLightCount] = batch * threadCount + light;
                        visibleLightCount += 1;
                    }
                }
            }
        }
    }

    // We want all thread groups to have completed the light tests before continuing
    barrier();

    // Back to every thread representing a cluster
    // Adding the light indices to the cluster light index list
    uint offset = atomicAdd(globalIndexCount, visibleLightCount);
    for (uint i = 0; i < visibleLightCount; ++i) {
        globalLightIndexList[offset + i] = visibleLightIndices[i];
    }

    // Updating the light grid for each cluster
    lightGrid[tileIndex].offset = offset;
    lightGrid[tileIndex].count = visibleLightCount;
}

// Return whether or not the specified light intersects with the specified tile (cluster)
bool testSphereAABB(uint light, uint tile) {
    float radius = getLightRange(light);
    vec3 center = vec3(viewMatrix * sharedLights[light].position);
    float squaredDistance = sqDistPointAABB(center, tile);
    return squaredDistance <= (radius * radius);
}

// TODO: Different test for spot-lights
// Has been done by using several AABBs for the spot-light cone; this could be a good approach, or even just use one to start.
bool testConeAABB(uint light, uint tile) {
    // Light light = lightBuffer.data[lightIndex];
    // float innerAngleCos = light.attributes[0];
    // float outerAngleCos = light.attributes[1];
    // float innerAngle = acos(innerAngleCos);
    // float outerAngle = acos(outerAngleCos);
    // FIXME: Actually do something clever here
    return true;
}

// Get range of light given the specified light index
float getLightRange(uint lightIndex) {
    int lightType = sharedLights[lightIndex].typeIndexAndFlags[0];
    float range;
    if (lightType == 0) {
        // Point light
        float brightness = 0.01; // cutoff for end of range
        float c = sharedLights[lightIndex].attributes.x;
        float lin = sharedLights[lightIndex].attributes.y;
        float quad = sharedLights[lightIndex].attributes.z;
        range = (-lin + sqrt(lin*lin - 4.0 * c * quad + (4.0/brightness) * quad)) / (2.0 * quad);
    }
    else if (lightType == 1) {
        // Directional light
        range = FLT_MAX;
    }
    else {
        // Spot light
        range = FLT_MAX;
    }
    return range;
}

// Whether the light at the specified index is enabled
bool isEnabled(uint lightIndex) {
    uint flags = sharedLights[lightIndex].typeIndexAndFlags[2];
    return (flags & 1) != 0;
}

// Get squared distance from a point to the AABB of the specified tile (cluster)
float sqDistPointAABB(vec3 point, uint tile) {
    float sqDist = 0.0;
    VolumeTileAABB currentCell = cluster[tile];
    cluster[tile].maxPoint[3] = tile;
    for (int i = 0; i < 3; ++i) {
        float v = point[i];
        if (v < currentCell.minPoint[i]) {
            sqDist += (currentCell.minPoint[i] - v) * (currentCell.minPoint[i] - v);
        }
        if (v > currentCell.maxPoint[i]) {
            sqDist += (v - currentCell.maxPoint[i]) * (v - currentCell.maxPoint[i]);
        }
    }
    return sqDist;
}
Edit: Whoops, lost the bottom part of this!
What I don't understand is why changing the size of the work groups affects compilation time at all? It sort of defeats the point of the algorithm if my work group sizes are too small for the compute shader to run efficiently, so I'm hoping there's something that I'm missing.
As a last note, I'd like to avoid using glGetProgramBinary as a solution. Not only because it merely circumvents the issue instead of solving it, but because pre-compiling shaders will not play nicely with the engine's current architecture.
So, I'm figuring that this must be a bug in the compiler, since I've replaced the loop in my sqDistPointAABB function with:
vec3 minPoint = currentCell.minPoint.xyz;
vec3 maxPoint = currentCell.maxPoint.xyz;
vec3 t1 = vec3(lessThan(point, minPoint));
vec3 t2 = vec3(greaterThan(point, maxPoint));
vec3 sqDist = t1 * (minPoint - point) * (minPoint - point) + t2 * (maxPoint - point) * (maxPoint - point);
return sqDist.x + sqDist.y + sqDist.z;
And it compiles just fine now, in less than a second! So strange

Can't loop over a sampler2D array after specifying WebGL2 context in Three.js

I have been using a sampler2D array in my fragment shader (these are shadow maps; there can be up to 16 of them, so an array is preferable to 16 separate variables, of course). Then I added a WebGL2 context (const context = canvas.getContext('webgl2');) to the THREE.WebGLRenderer that I'm using, and now I can't get the program to work: it says array index for samplers must be constant integral expressions when I attempt to access the sampler array elements in a loop like this:
uniform sampler2D samplers[MAX_SPLITS];

for (int i = 0; i < MAX_SPLITS; ++i) {
    if (i >= splitCount) {
        break;
    }
    if (func(samplers[i])) { // that's where the error happens
        ...
    }
}
Is there really no way around this? Do I have to use sixteen separate variables?
(there is no direct #version directive in the shader but THREE.js seems to add a #version 300 es by default)
You can't use dynamic indexing with samplers in GLSL ES 3.00. From the spec:
12.29 Samplers
Should samplers be allowed as l-values? The specification already allows an equivalent behavior:
Current specification:
uniform sampler2D sampler[8];
int index = f(...);
vec4 tex = texture(sampler[index], xy); // allowed
Using assignment of sampler types:
uniform sampler2D s;
s = g(...);
vec4 tex = texture(s, xy); // not allowed
RESOLUTION: Dynamic indexing of sampler arrays is now prohibited by the specification. Restrict indexing of sampler arrays to constant integral expressions.
and
12.30 Dynamic Indexing
For GLSL ES 1.00, support of dynamic indexing of arrays, vectors and matrices was not mandated
because it was not directly supported by some implementations. Software solutions (via program
transforms) exist for a subset of cases but lead to poor performance. Should support for dynamic indexing
be mandated for GLSL ES 3.00?
RESOLUTION: Mandate support for dynamic indexing of arrays except for sampler arrays, fragment
output arrays and uniform block arrays.
Should support for dynamic indexing of vectors and matrices be mandated in GLSL ES 3.00?
RESOLUTION: Yes.
Indexing of arrays of samplers by constant-index-expressions is supported
in GLSL ES 1.00. A constant index-expression is an expression formed from
constant-expressions and certain loop indices, defined for
a subset of loop constructs. Should this functionality be included in GLSL ES 3.00?
RESOLUTION: No. Arrays of samplers may only be indexed by constant-integral-expressions.
Can you use a 2D_ARRAY texture to solve your issue? Put each of your current 2D textures into a layer of a 2D_ARRAY texture; then the z coordinate is just an integer layer index. The advantage is that you can use many more layers with a 2D_ARRAY than you get samplers: WebGL2 implementations generally only have 32 samplers but allow hundreds or thousands of layers in a 2D_ARRAY texture.
or use GLSL 1.0
const vs1 = `
void main() { gl_Position = vec4(0); }
`;
const vs3 = `#version 300 es
void main() { gl_Position = vec4(0); }
`;
const fs1 = `
precision highp float;
#define MAX_SPLITS 4
uniform sampler2D samplers[MAX_SPLITS];
uniform int splitCount;
bool func(sampler2D s) {
  return texture2D(s, vec2(0)).r > 0.5;
}
void main() {
  float v = 0.0;
  for (int i = 0; i < MAX_SPLITS; ++i) {
    if (i >= splitCount) {
      break;
    }
    if (func(samplers[i])) { // that's where the error happens
      v += 1.0;
    }
  }
  gl_FragColor = vec4(v);
}
`;
const fs3 = `#version 300 es
precision highp float;
#define MAX_SPLITS 4
uniform sampler2D samplers[MAX_SPLITS];
uniform int splitCount;
bool func(sampler2D s) {
  return texture(s, vec2(0)).r > 0.5;
}
out vec4 color;
void main() {
  float v = 0.0;
  for (int i = 0; i < MAX_SPLITS; ++i) {
    if (i >= splitCount) {
      break;
    }
    if (func(samplers[i])) { // that's where the error happens
      v += 1.0;
    }
  }
  color = vec4(v);
}
`;
function main() {
  const gl = document.createElement('canvas').getContext('webgl2');
  if (!gl) {
    return alert('need WebGL2');
  }
  test('glsl 1.0', vs1, fs1);
  test('glsl 3.0', vs3, fs3);

  function test(msg, vs, fs) {
    const p = twgl.createProgram(gl, [vs, fs]);
    log(msg, ':', p ? 'success' : 'fail');
  }
}
main();

function log(...args) {
  const elem = document.createElement('pre');
  elem.textContent = [...args].join(' ');
  document.body.appendChild(elem);
}
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>

Strange behaviour using in/out block data with OpenGL/GLSL

I have implemented a normal mapping shader in my OpenGL/GLSL application. To compute the bump and shadow factors in the fragment shader I need to send some data from the vertex shader, like the light direction in tangent space and the vertex position in light space, for each light in my scene. So to do the job I need to declare 2 output variables like below (vertex shader):
#define MAX_LIGHT_COUNT 5
[...]
out vec4 ShadowCoords[MAX_LIGHT_COUNT]; // Vertex position in light space
out vec3 lightDir_TS[MAX_LIGHT_COUNT];  // Light direction in tangent space

uniform int LightCount;
[...]
for (int idx = 0; idx < LightCount; idx++)
{
    [...]
    lightDir_TS[idx] = TBN * lightDir_CS;
    ShadowCoords[idx] = ShadowInfos[idx].ShadowMatrix * VertexPosition;
    [...]
}
And in the fragment shader I recover these variables through the following input declarations:
in vec3 lightDir_TS[MAX_LIGHT_COUNT];
in vec4 ShadowCoords[MAX_LIGHT_COUNT];
The rest of the code is not important to explain my problem.
So now here's the result in image:
As you can see until here all is ok!
But now, for the sake of simplicity, I want to use a single output declaration rather than 2! So the logical choice is to use an input/output data block like below:
#define MAX_LIGHT_COUNT 5
[...]
out LightData_VS
{
    vec3 lightDir_TS;
    vec4 ShadowCoords;
} LightData_OUT[MAX_LIGHT_COUNT];

uniform int LightCount;
[...]
for (int idx = 0; idx < LightCount; idx++)
{
    [...]
    LightData_OUT[idx].lightDir_TS = TBN * lightDir_CS;
    LightData_OUT[idx].ShadowCoords = ShadowInfos[idx].ShadowMatrix * VertexPosition;
    [...]
}
And in the fragment shader the input data block:
in LightData_VS
{
    vec3 lightDir_TS;
    vec4 ShadowCoords;
} LightData_IN[MAX_LIGHT_COUNT];
But this time when I execute my program I have the following display:
As you can see, the specular light is not the same as in the first case above!
However I noticed if I replace the line:
for (int idx = 0; idx < LightCount; idx++) //Use 'LightCount' uniform variable
by the following one:
for (int idx = 0; idx < 1; idx++) //'1' value hard coded
or
int count = 1;
for (int idx = 0; idx < count; idx++)
the shading result is correct!
The problem seems to come from the fact that I use a uniform variable in the 'for' condition. However, this works when I use separate output variables, as in the first case!
I checked: the uniform variable 'LightCount' is correct and equal to '1'. (I tried an unsigned int data type without success, and it's the same thing with a 'while' loop.)
How can you explain such a result?
I use:
OpenGL: 4.4.0 NVIDIA driver 344.75
GLSL: 4.40 NVIDIA via Cg compiler
I have already used input/output data blocks without problems, but those were not arrays, just simple blocks like below:
[in/out] VertexData_VS
{
    vec3 viewDir_TS;
    vec4 Position_CS;
    vec3 Normal_CS;
    vec2 TexCoords;
} VertexData_[IN/OUT];
Do you think it's not possible to use input/output data blocks as arrays in a loop with a uniform variable in the 'for' condition?
UPDATE
I tried using 2 vec4s in the data structure (for the sake of data alignment, as for uniform blocks, where data needs to be aligned on a vec4) like below:
[in/out] LightData_VS
{
    vec4 lightDir_TS; // vec4((TBN * lightDir_CS), 0.0f);
    vec4 ShadowCoords;
} LightData_[IN/OUT][MAX_LIGHT_COUNT];
without success...
UPDATE 2
Here's the code concerning shader compilation log:
core::FileSystem file(filename);
std::ifstream ifs(file.GetFullName());
if (ifs)
{
    GLint compilationError = 0;
    std::string fileContent, line;
    char const *sourceCode;

    while (std::getline(ifs, line, '\n'))
        fileContent.append(line + '\n');
    sourceCode = fileContent.c_str();
    ifs.close();

    this->m_Handle = glCreateShader(this->m_Type);
    glShaderSource(this->m_Handle, 1, &sourceCode, 0);
    glCompileShader(this->m_Handle);
    glGetShaderiv(this->m_Handle, GL_COMPILE_STATUS, &compilationError);

    if (compilationError != GL_TRUE)
    {
        GLint errorSize = 0;
        glGetShaderiv(this->m_Handle, GL_INFO_LOG_LENGTH, &errorSize);

        char *errorStr = new char[errorSize + 1];
        glGetShaderInfoLog(this->m_Handle, errorSize, &errorSize, errorStr);
        errorStr[errorSize] = '\0';

        std::cout << errorStr << std::endl;

        delete[] errorStr;
        glDeleteShader(this->m_Handle);
    }
}
And the code concerning the program log:
GLint errorLink = 0;
glGetProgramiv(this->m_Handle, GL_LINK_STATUS, &errorLink);
if (errorLink != GL_TRUE)
{
    GLint sizeError = 0;
    glGetProgramiv(this->m_Handle, GL_INFO_LOG_LENGTH, &sizeError);

    char *error = new char[sizeError + 1];
    glGetProgramInfoLog(this->m_Handle, sizeError, &sizeError, error); // note: program, not shader, info log
    error[sizeError] = '\0';

    std::cerr << error << std::endl;

    glDeleteProgram(this->m_Handle);
    delete[] error;
}
Unfortunately, I don't have any error log!

Weird performance drop, caused by a single for loop

I'm currently writing an OpenGL 3.1 (with GLSL version 330) application on Linux (NVIDIA 360M card, with the 313.0 nv driver) that is about 15k lines. My problem is that in one of my vertex shaders I experience drastic performance drops from minimal changes to code that should actually be a no-op.
For example:
// With this solution my program runs at 3-5 fps
for (int i = 0; i < 4; ++i) {
    vout.shadowCoord[i] = uShadowCP[i] * w_pos;
}

// But with this it runs at 30+ fps
vout.shadowCoord[0] = uShadowCP[0] * w_pos;
vout.shadowCoord[1] = uShadowCP[1] * w_pos;
vout.shadowCoord[2] = uShadowCP[2] * w_pos;
vout.shadowCoord[3] = uShadowCP[3] * w_pos;

// This works at 30+ fps too
vec4 shadowCoords[4];
for (int i = 0; i < 4; ++i) {
    shadowCoords[i] = uShadowCP[i] * w_pos;
}
for (int i = 0; i < 4; ++i) {
    vout.shadowCoord[i] = shadowCoords[i];
}
Or consider this:
uniform int uNumUsedShadowMaps = 4; // edit: I called this "random_uniform" in the original question

// 8 fps
for (int i = 0; i < min(uNumUsedShadowMaps, 4); ++i) {
    vout.shadowCoord[i] = vec4(1.0);
}

// 30+ fps
for (int i = 0; i < 4; ++i) {
    if (i < uNumUsedShadowMaps) {
        vout.shadowCoord[i] = vec4(1.0);
    } else {
        vout.shadowCoord[i] = vec4(0.0);
    }
}
See the entire shader code here, where this problem appeared:
http://pastebin.com/LK5CNJPD
Any idea about what could cause this would be appreciated.
I finally managed to find the source of the problem, and also found a solution to it.
But before jumping right to the solution, let me paste the most minimal shader code with which I could reproduce this 'bug'.
Vertex Shader:
#version 330

vec3 CountPosition(); // Irrelevant how it is implemented.

uniform mat4 uProjectionMatrix, uCameraMatrix;
uniform mat4 uShadowCP[4]; // was missing from the original snippet

out VertexData {
    vec3 c_pos, w_pos;
    vec4 shadowCoord[4];
} vout;

void main() {
    vout.w_pos = CountPosition();
    vout.c_pos = (uCameraMatrix * vec4(vout.w_pos, 1.0)).xyz;
    vec4 w_pos = vec4(vout.w_pos, 1.0);

    // 20 fps
    for (int i = 0; i < 4; ++i) {
        vout.shadowCoord[i] = uShadowCP[i] * w_pos;
    }

    // 50 fps
    vout.shadowCoord[0] = uShadowCP[0] * w_pos;
    vout.shadowCoord[1] = uShadowCP[1] * w_pos;
    vout.shadowCoord[2] = uShadowCP[2] * w_pos;
    vout.shadowCoord[3] = uShadowCP[3] * w_pos;

    gl_Position = uProjectionMatrix * vec4(vout.c_pos, 1.0);
}
Fragment Shader:
#version 330

in VertexData {
    vec3 c_pos, w_pos;
    vec4 shadowCoord[4];
} vin;

out vec4 frag_color;

void main() {
    frag_color = vec4(1.0);
}
And the funny thing is that only a minimal modification of the vertex shader is needed to make both solutions run at 50 fps. The main function should be modified to look like this:
void main() {
    vec4 w_pos = vec4(CountPosition(), 1.0);
    vec4 c_pos = uCameraMatrix * w_pos;
    vout.w_pos = vec3(w_pos);
    vout.c_pos = vec3(c_pos);

    // 50 fps
    for (int i = 0; i < 4; ++i) {
        vout.shadowCoord[i] = uShadowCP[i] * w_pos;
    }

    // 50 fps
    vout.shadowCoord[0] = uShadowCP[0] * w_pos;
    vout.shadowCoord[1] = uShadowCP[1] * w_pos;
    vout.shadowCoord[2] = uShadowCP[2] * w_pos;
    vout.shadowCoord[3] = uShadowCP[3] * w_pos;

    gl_Position = uProjectionMatrix * c_pos;
}
The difference is that the upper code reads from the shader's out varyings, while the bottom one saves those values in temporary variables and only writes to the out varyings.
The conclusion:
Reading a shader's out varying is often used as an optimisation to get away with one less temporary variable, or at least I have seen it recommended in many places on the internet. Despite that, reading an out varying might actually be an invalid OpenGL operation and might put the GL into an undefined state, in which random changes in the code can trigger bad things.
The best part is that the GLSL 330 specification doesn't say anything about reading from an out varying that was previously written to. Probably because it's not something I should be doing.
P.S.
Also note that the second example in the original code might look totally different, but it behaves exactly the same way in this small snippet: if the out varyings are read, it gets quite slow with i < min(uNumUsedShadowMaps, 4) as the condition in the for loop; if the out varyings are only written, however, it makes no difference to the performance, and the i < min(uNumUsedShadowMaps, 4) version runs at 50 fps too.

GLSL linker error(Sampler needs to be a uniform (global or parameter to main))

We have a GLSL fragment shader, but the problem is in this code:
vec4 TFSelection(StrVolumeColorMap volumeColorMap, vec4 textureCoordinate)
{
    vec4 finalColor = vec4(0.0);

    if (volumeColorMap.TransferFunctions[0].numberOfBits == 0)
    {
        return texture(volumeColorMap.TransferFunctions[0].TransferFunctionID, textureCoordinate.x);
    }

    if (textureCoordinate.x == 0)
        return finalColor;

    float deNormalize = textureCoordinate.x * 65535 /*255*/;

    for (int i = 0; i < volumeColorMap.TransferFunctions.length(); i++)
    {
        int NormFactor = volumeColorMap.TransferFunctions[i].startBit + volumeColorMap.TransferFunctions[i].numberOfBits;
        float minval = CalculatePower(2, volumeColorMap.TransferFunctions[i].startBit);
        if (deNormalize >= minval)
        {
            float maxval = CalculatePower(2, NormFactor);
            if (deNormalize < maxval)
            {
                //float tempPower = CalculatePower(2, NormFactor);
                float coord = deNormalize / maxval /*tempPower*/;
                return texture(volumeColorMap.TransferFunctions[i].TransferFunctionID, coord);
            }
        }
    }
    return finalColor;
}
When we compile and link the shader, this message is logged:
Sampler needs to be a uniform (global or parameter to main), need to
inline function or resolve conditional expression
With a simple change the shader links successfully, for example changing
float coord = deNormalize / maxval;
to
float coord = deNormalize;
Driver: NVIDIA 320.49