GLSL behavior when dealing with non-initialized uniforms - opengl

I am creating a shader that deals with up to, say, 10 point lights for a scene; but I have trouble with the "up to" part. Here is what I want to do:
vec3 temp = CalcPointLight(pointLights[1], norm, FragPos, viewDir, fragDiffuse, fragSpecular);
temp *= (1.0 - ShadowCalculationPointLight(FragPos, pointLights[1].position, pointLights[1].shadowCubeMap));
temp *= pointLights[1].use;
result += temp;
So I have an array of structures pointLights, and I just do the same thing for all 10 of them. In order to avoid branching by adding if statements that check whether the attribute use is equal to 1, I multiply the result of the light / shadow calculation by pointLights[1].use.
But for some reason, it seems that if the different attributes of pointLights[1] are not initialized, then the result variable (which is later sent as the fragment color) is not drawn anymore, even though the temp vector is supposed to be multiplied by 0.
To me there are two solutions to this problem: either I generate / write a shader for each number of point lights I have to deal with (so if I want to use up to 10 lights I'll have to write 11 different shaders), or I always initialize all 10 of my lights with values that make my functions always return (0,0,0). For the latter solution I am not sure how to initialize a samplerCube so that it contains exclusively 0s... Or is there another solution?
EDIT: I am trying to implement the second solution, but again, how should I initialize the samplerCube? Should I have 6 textures of 1x1px loaded as a cubemap in GPU memory and bind that "empty" cubemap when I initialize the uniforms in my shaders, or is there a fancier way of doing so (having a sampler that returns only (0,0,0,1))?
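The branch-free masking described in the question can be checked host-side before worrying about the sampler issue. Below is a small Python model (hypothetical data, not GLSL) of the accumulation: contributions from unused light slots are zeroed by the use flag instead of being skipped with an if.

```python
# Python sketch of the branch-free light masking described above.
# Light contributions stand in for what CalcPointLight would return;
# 'use' is 1.0 for active lights and 0.0 for unused slots.
# (Hypothetical data -- this models only the accumulation, not GLSL.)

def accumulate(lights):
    """Sum contribution * use for every light, with no branching."""
    result = [0.0, 0.0, 0.0]
    for light in lights:
        contrib = light["contribution"]
        for i in range(3):
            result[i] += contrib[i] * light["use"]
    return result

lights = [
    {"contribution": (0.2, 0.3, 0.1), "use": 1.0},  # active light
    {"contribution": (9.9, 9.9, 9.9), "use": 0.0},  # unused slot
]
print(accumulate(lights))  # the unused slot contributes nothing
```

The arithmetic works; the question's actual failure comes from sampling an uninitialized samplerCube, which is undefined behavior in GLSL regardless of the multiply by 0 afterwards.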

Related

How to dynamically bind an array of multiple texture coordinates sets for mixing in modern OpenGL?

I'm trying to get my head around the whole system of binding multiple textures that use different UV coordinate values, specifically for map tiling (repeating), in order to mix an albedo and a normal map that have different scale values. Dynamically assigning multiple texture coordinate sets to one mesh would be beneficial in many scenarios - one being that normal map details really stand out for things like skin, rusty metal, etc. This would simply be accomplished by scaling the UV coordinate values to something relatively high, while leaving the albedo map scale set to 1.
How would one go about communicating these texture coordinate sets from C++ to a GLSL shader, and what would be the most efficient / dynamic pipeline for real-time rendering?
I've already tried passing multiple arrays through a number of layout locations into GLSL; however, this becomes too static for dynamic situations.
This is my vertex struct; however, I only pass one set of vertex positions and one set of texture coordinates:
struct VertexElement {
    uint16_t componentSize;
    std::vector<float> data;

    VertexElement(uint16_t comp_size, std::vector<float> vertex_data)
    {
        componentSize = comp_size;
        data = vertex_data;
    }
};

struct Vertex {
    uint16_t stride;
    std::vector<VertexElement> vertexElements;

    inline Vertex() = default;
    inline Vertex(uint16_t _stride, std::vector<VertexElement> _vertex_elements)
    {
        stride = _stride;
        vertexElements = _vertex_elements;
    }
};
Essentially, I'm looking to combine multiple textures containing completely different coordinate values without having to statically allocate in GLSL like this:
layout (location = 0) in vec2 texCoord0;
layout (location = 1) in vec2 texCoord1; <----- BAD!!
layout (location = 2) in vec2 texCoord2;
...
I've tried this; however, I need to readjust the texcoord set size dynamically for real-time editing. Is this possible in OpenGL?
Thanks.
The interface between shaders is always statically defined. A shader has X inputs, always.
If you want some information to be determined dynamically, then you're going to have to use existing tools (image load/store, SSBOs, UBOs, etc) to read that information yourself. This is entirely possible for vertex shaders, since the gl_VertexID index can be used as an index into an array.
Generally speaking however, people don't need to do this. They simply define a particular vertex format and stick with that. Indeed, your specific use case doesn't seem to need this, since you're only changing the values of the texture coordinates, not whether they are being used or not.
I need to readjust the texcoord set size dynamically for real-time editing.
That's not really possible, particularly for the use case you outlined.
Consider a normal map. The difference between using and not using a normal map is substantial - not merely in what parameters you pass to the VS, but in the fundamental nature of your shader.
Normal maps are usually tangent-space, so your shader now needs a full TBN (tangent, bitangent, normal) matrix, and you're probably going to have to pass that matrix to your fragment shader, which will use it to construct the normal from the normal map. If you don't use a normal map, you just get a per-vertex normal. So you're talking about substantial changes to the structure of your code, not just its inputs.
Shaders are just strings, so you are free to dynamically build them as you like. But that's going to be a lot more complex than just what inputs your VS gets.
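The gl_VertexID indexing idea from this answer can be modeled in plain Python (hypothetical interleaved layout): all UV sets live in one flat buffer, and each invocation computes its offset from the vertex index and the set index, so the number of sets can change without touching the vertex interface.

```python
# Python model of fetching a variable number of UV sets from one flat
# buffer, the way a shader could index an SSBO with gl_VertexID.
# (Hypothetical layout: all sets for vertex 0, then all for vertex 1...)

def fetch_uv(buffer, num_sets, vertex_id, set_index):
    """Return the (u, v) pair for one vertex and one UV set."""
    base = (vertex_id * num_sets + set_index) * 2  # 2 floats per UV pair
    return (buffer[base], buffer[base + 1])

# Two vertices, two UV sets each: [v0s0, v0s1, v1s0, v1s1]
uvs = [0.0, 0.0,  0.0, 4.0,
       1.0, 0.0,  8.0, 4.0]
print(fetch_uv(uvs, 2, 1, 1))  # UV set 1 of vertex 1 -> (8.0, 4.0)
```

In GLSL the same offset arithmetic would index into an SSBO or UBO array, with num_sets supplied as a uniform.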

Re-calculating vertex normals to modify terrain/model C++/openGL

I have a list of faces [indices] that a list of vertices and normals are mapped to. I generate offsets at each vertex to produce terrain variation, but by doing so the vertex normals stop working like they should. This is what I do to make the shading work again, namely recompute the vertex normals (vn):
for (auto &x : normals) x = vec3(0); // zero the normals first

vec3 facenormal; // buffer
for (size_t i = 0; i < indices.size();) // iterate 3 points per face
{
    // find the face normal (b-a x c-a)
    facenormal = cross(
        (shape[indices[i + 1]] - shape[indices[i]]),
        (shape[indices[i + 2]] - shape[indices[i]])
    );
    // add this face normal to each of the 3 vn slots nearby
    normals[indices[i++]] += facenormal; // note +=
    normals[indices[i++]] += facenormal;
    normals[indices[i++]] += facenormal;
}
for (auto &x : normals) x = normalize(x); // then normalize them
According to this reply it should do the trick, but something is wrong with the approach.
It causes artifacting like that shown in the lower half of the image, while the shading mostly seems to work.
The question is: how should I calculate the vertex normals to avoid these lines?
Remaking the parser: The way I was parsing (loading) the model removed duplicate index values, which saves on the face references I have to pass to the GPU, but causes some of the values to be stacked / reused. At least I suspect this was the problem, since it looks a lot cleaner now. The only problem is that I'm suddenly getting flat shading. The flat shading makes it hard to see whether the original problem is really gone, but I suspect it is. Why the new parsing produces flat shading after recalculation is beyond me, but these are the results I'm looking at.
So do I still have to find the correct formula to calculate vertex normals? I don't understand why the shading was smooth before but is now flat, all because I stopped stacking a quarter of the index values.
Finding the right solution, finally! As NicoSchertler pointed out, the vertex normals were not those of the base mesh. The way I load the vertices/uv/normals to fit the index for the glDrawElementsBaseVertex render call means that cycling through the vertices and normals as they are loaded does not touch the base mesh, but copies built to fit the combined indexing for uv + normals.
So what I end up getting is weird artifacting and flat shading, because the values I'm modifying are post-parsing. Making every face non-unique does not solve anything, but it clarified part of the problem: I needed to modify the base index values, not the loaded ones.
After I recalculate for the base mesh (pre-indexing) I get this result: smooth shading and visible differences on the mountainsides. The model is higher poly (and serialized in binary) and the coloring is still in its early stages. It is, however, shaded correctly.
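For reference, the accumulate-then-normalize scheme can be sketched in plain Python (tuples instead of vec3, a hypothetical flat quad as test mesh); the lesson of the question is that shape and indices must refer to the base mesh, before vertices are duplicated to fit the UV/normal indexing.

```python
# Python sketch of smooth-normal recalculation: accumulate each face
# normal into its three vertices, then normalize. Mirrors the C++ loop
# above; vectors are plain 3-tuples with cross/normalize inlined.
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def recalc_normals(shape, indices):
    normals = [[0.0, 0.0, 0.0] for _ in shape]    # zero normals first
    for i in range(0, len(indices), 3):
        a, b, c = shape[indices[i]], shape[indices[i+1]], shape[indices[i+2]]
        fn = cross(tuple(x - y for x, y in zip(b, a)),
                   tuple(x - y for x, y in zip(c, a)))
        for j in range(3):                         # add to each of the 3 slots
            for k in range(3):
                normals[indices[i + j]][k] += fn[k]
    out = []
    for n in normals:                              # then normalize
        length = math.sqrt(sum(x * x for x in n))
        out.append(tuple(x / length for x in n))
    return out

# A flat quad in the XY plane: every smooth normal should be +Z.
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
idx = [0, 1, 2, 0, 2, 3]
print(recalc_normals(quad, idx))  # four copies of (0.0, 0.0, 1.0)
```

Shared vertices (0 and 2 here) receive two face normals before normalizing, which is exactly the averaging that produces smooth shading.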

Comparing two textures in openGL

I'm new to OpenGL and I'm looking to compare two textures to understand how similar they are to each other. I know how to do this with two bitmap images, but I really need a method that compares two textures.
Question is: Is there any way to compare two textures as we compare two images? Like comparing two images pixel by pixel?
Actually, what you seem to be asking for is not possible, or at least not as easy as it would seem, to accomplish on the GPU. The problem is that the GPU is designed to accomplish as many small tasks as possible in the shortest amount of time. Iterating through an array of data such as pixels is not what it is built for, so getting back something like a single integer or floating-point value can be a bit hard.
There is one very interesting procedure you may try, though I cannot say whether the result will be appropriate for you:
You may first create a new texture that is the difference between the two input textures, and then keep downsampling the result until you reach a 1x1 pixel texture, whose value tells you how different the images are.
To achieve this it would be best to use a fixed size of the target buffer which is POT (power of two) for instance 256x256. If you didn't use a fixed size then the result could vary a lot depending on the image sizes.
So in first pass you would redraw the two textures to the 3rd one (using FBO - frame buffer object). The shader you would use is simply:
vec4 a = texture2D(iChannel0,uv);
vec4 b = texture2D(iChannel1,uv);
fragColor = abs(a-b);
So now you have a texture which represents the difference between the two images per pixel, per color component. If the two images will be the same, the result will be a totally black picture.
Now you will need to create a new FBO which is scaled by half in every dimension, which comes to 128x128 in this example. To draw to this buffer you need to use GL_NEAREST as the texture parameter so that no interpolation is done on texel fetches. Then for each new pixel sum the 4 nearest pixels of the source image:
vec2 originalTextCoord = varyingTextCoord;
vec2 textCoordRight = vec2(varyingTextCoord.x + 1.0/256, varyingTextCoord.y);
vec2 textCoordBottom = vec2(varyingTextCoord.x, varyingTextCoord.y + 1.0/256);
vec2 textCoordBottomRight = vec2(varyingTextCoord.x + 1.0/256, varyingTextCoord.y + 1.0/256);

fragColor = texture2D(iChannel0, originalTextCoord) +
            texture2D(iChannel0, textCoordRight) +
            texture2D(iChannel0, textCoordBottom) +
            texture2D(iChannel0, textCoordBottomRight);
The 256 value is the size of the source texture, so it should come in as a uniform, letting you reuse the same shader for every pass.
After this is drawn you drop down to 64, 32, 16... and finally read the pixel back to the CPU to see the result.
Now, unfortunately, this procedure may produce very unwanted results. Since the colors are simply summed together, it will overflow for any pair of images that are not similar enough (resulting in a white pixel, or rather (1,1,1,0) for non-transparent output). This may be countered by applying a scale in the first shader pass, dividing the output by a large enough value. That still might not be enough, and an average may need to be taken in the second shader as well (multiply the sum of the texture2D calls by .25).
In the end the result might still be a bit strange. You get 4 color components on the CPU which represent the sum (or the average) of an image differential. I guess you could sum them up and pick a threshold for what you consider "much alike". But if you want the result to carry more meaning, you might want to treat the whole pixel as a single 32-bit floating-point value (this is a bit tricky, but you can find answers around SO). That way you can compute the values without overflow and get quite exact results from the algorithm. It means writing the floating-point value as if it were a color, starting with the first shader output and continuing through every subsequent draw call (fetch the texel, convert it to float, sum, convert back to vec4 and assign as output); GL_NEAREST is essential here.
If not, then you may simplify the procedure and use GL_LINEAR instead of GL_NEAREST and simply keep redrawing the differential texture until it gets down to a single-pixel size (no need for the 4 coordinates). This should produce a nice pixel representing the average of all the pixels in the differential texture - that is, the average difference between the pixels of the two images. This procedure should also be quite fast.
Then, if you want a slightly smarter algorithm, you can do some wonders when creating the differential texture. Simply subtracting the colors may not be the best approach. It would make more sense to blur one of the images and then compare it to the other. This loses precision for images that are very similar, but for everything else it gives a much better result. For instance, you could say you are only interested when a pixel is more than 30% different from the weight of the other (blurred) image, so you would discard and rescale that 30% for every component, e.g. result.r = clamp(abs(a.r-b.r) - 30.0/100.0, .0, 1.0) / ((100.0-30.0)/100.0);
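The whole ladder is easy to prototype on the CPU before writing the FBO passes. A minimal Python model (grayscale floats and POT-sized square images assumed) of "difference, then average down to one value":

```python
# Python model of the compare-by-downsampling ladder: build the
# per-pixel absolute difference, then repeatedly average 2x2 blocks
# (what averaging 4 texels per pass does on the GPU) until one value
# remains. Images are square lists-of-lists of grayscale floats.

def diff_image(a, b):
    """Per-pixel absolute difference, like abs(a-b) in the first pass."""
    return [[abs(x - y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def downsample(img):
    """Halve the resolution by averaging each 2x2 block."""
    n = len(img) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(n)] for y in range(n)]

def average_difference(a, b):
    img = diff_image(a, b)
    while len(img) > 1:
        img = downsample(img)
    return img[0][0]  # mean per-pixel difference

a = [[0.0, 1.0], [1.0, 0.0]]
b = [[0.0, 1.0], [1.0, 0.0]]
c = [[1.0, 1.0], [1.0, 1.0]]
print(average_difference(a, b))  # identical images -> 0.0
print(average_difference(a, c))  # -> 0.5 (two of four pixels differ by 1)
```

Averaging at every level, rather than summing, is what avoids the overflow problem the answer warns about.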
You can bind both textures to a shader and visit each pixel by drawing a quad or something like this.
// Equal pixels are marked green. Different pixels are shown in red.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec4 a = texture2D(iChannel0, uv);
    vec4 b = texture2D(iChannel1, uv);
    if (a != b)
        fragColor = vec4(1, 0, 0, 1);
    else
        fragColor = vec4(0, 1, 0, 1);
}
You can test the shader on Shadertoy.
Or you can also bind both textures to a compute shader and visit every pixel by iteration.
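The per-pixel classification done by that shader can be checked with a trivial CPU-side model; a Python sketch (hypothetical one-row images as RGBA tuples) that marks each pixel the same way:

```python
# Python model of the per-pixel equality test done by the shader above:
# equal pixels map to green, different ones to red.

RED, GREEN = (1, 0, 0, 1), (0, 1, 0, 1)

def compare(img_a, img_b):
    """Return a same-sized image marking equal pixels green, others red."""
    return [[GREEN if pa == pb else RED for pa, pb in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

a = [[(0, 0, 0, 1), (1, 1, 1, 1)]]
b = [[(0, 0, 0, 1), (1, 0, 1, 1)]]
print(compare(a, b))  # -> [[GREEN, RED]]
```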
Note that for a componentwise comparison you have to use
if (any(notEqual(a, b)))
since the relational operators are not defined for vectors. The equality operators == and != do, however, operate on whole vectors and return a single bool, so the a != b above is valid. Check the GLSL language spec.

OpenGL: Passing random positions to the Vertex Shader

I am starting to learn OpenGL (3.3+), and now I am trying to write an algorithm that draws 10000 points randomly on the screen.
The problem is that I don't know exactly where to put the algorithm. Since the points are random, I can't declare them in a VBO (or can I?), so I was thinking of passing a uniform value to the vertex shader with the varying position (I would do a loop changing the uniform value), doing the operation 10000 times. I would also pass a random color value to the shader.
Here is roughly my thought:
#version 330 core

uniform vec3 random_position;
uniform vec3 random_color;

out vec3 Color;

void main() {
    gl_Position = vec4(random_position, 1.0); // gl_Position is a vec4
    Color = random_color;
}
This way I would do the calculations outside the shaders and just pass the results through the uniforms, but I think a better way would be to do these calculations inside the vertex shader. Would that be right?
The vertex shader will be called for every vertex you pass to the vertex shader stage. The uniforms are the same for each of these calls, so you shouldn't pass the vertices - be they random or not - as uniforms. Uniforms are for global state such as transformations (a camera rotation, a model matrix, etc.).
Your vertices should be passed as a vertex buffer object. Just generate them randomly in your host application and draw them; they will automatically become the in variables of your shader.
You can change the array in every iteration; however, it might be a good idea to keep its size constant. For this it is sometimes useful to give each 3D vector a fourth component that is 1 if the vertex is used and 0 otherwise. This way you can simply check whether a vertex should be drawn or not.
Then just clear the GL_COLOR_BUFFER_BIT and draw the arrays before updating the screen.
In your shader, just set gl_Position from your in variables (i.e. the vertices) and pass the color on to the fragment shader - color is not applied in the vertex shader.
In the fragment shader, assign the color you passed from the vertex shader to the output, e.g. gl_FragColor.
By the way, if you draw something as GL_POINTS it will show up as little squares. There are lots of tricks to make the points actually round; probably the easiest is the simple if below in the fragment shader. However, you should then configure the points as point sprites (glEnable(GL_POINT_SPRITE)).
if (dot(gl_PointCoord - vec2(0.5, 0.5), gl_PointCoord - vec2(0.5, 0.5)) > 0.25)
    discard;
I suggest you read up a little on what the fragment and vertex shaders do, what vertices and fragments are, and what their respective in/out/uniform variables represent.
Since programs with full vertex buffer objects, shader programs etc. get quite huge, you can also start out with glBegin() and glEnd() to draw vertices directly. However this should only be a very early starting point to understand what you are drawing where and how the different shaders affect it.
The lighthouse3d tutorials (http://www.lighthouse3d.com/tutorials/) usually are a good start, though they might be a bit outdated. Also a good reference is the glsl wiki (http://www.opengl.org/wiki/Vertex_Shader) which is up to date in most cases - but it might be a bit technical.
Whether or not you are working with C++, Java, or other languages - the concepts for OpenGL are usually the same, so almost all tutorials will do well.
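Generating the 10000 random points host-side and packing them into one interleaved array (the data a VBO upload would take) can be sketched as follows; the layout (x, y, z, r, g, b per vertex) is one common choice, not the only one, and the function name is hypothetical.

```python
# Python sketch of building interleaved VBO data for N random points:
# x, y, z in clip space [-1, 1], followed by r, g, b in [0, 1].
import random

def make_point_data(n, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    data = []
    for _ in range(n):
        data += [rng.uniform(-1.0, 1.0) for _ in range(3)]  # position
        data += [rng.uniform(0.0, 1.0) for _ in range(3)]   # color
    return data  # upload with glBufferData; stride = 6 floats per vertex

points = make_point_data(10000)
print(len(points))  # 60000 floats: 10000 vertices * 6 components
```

The host application would then set up two vertex attributes over the same buffer, positions at offset 0 and colors at offset 3 floats, both with a 6-float stride.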

Atomic counter anomalies in Geometry shader

I am trying to control the behavior of my fragment shader by counting vertices in the geometry shader, so that with a vertex stream of 1000 triangles, when the count reaches 500 I set a varying for the fragment shader which signals that it must switch its processing. To count the total vertices (or triangles) processed I use an atomic counter in the geometry shader. I planned to do it in the vertex shader first, but then I read somewhere that because of vertex caching the counter won't increment on every vertex invocation. But now it seems that doing it in the geometry shader doesn't produce a precise count either.
In my geometry shader I am doing this:
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
layout(binding = 0, offset = 0) uniform atomic_uint ac;

uniform uint exteriorSize; // number of vertices in the array

out flat float isExterior;

void main()
{
    memoryBarrier();
    uint counter = atomicCounter(ac);
    float switcher = 0.0;
    if (counter >= exteriorSize)
    {
        switcher = 2.0;
    }
    else
    {
        atomicCounterIncrement(ac);
        atomicCounterIncrement(ac);
        atomicCounterIncrement(ac);
    }
    isExterior = switcher;
    // here just emitting the primitive....
}
exteriorSize is a uniform holding a number equal to the number of vertices in an array. When I read the value of the counter back on the CPU it never equals exteriorSize; it is almost 2 times smaller. Is there vertex caching in the geometry stage as well? Or am I doing something wrong?
Basically what I need is to tell the fragment shader: "after vertex number X start doing work Y; as long as the vertex number is less than X, do work Z." And I can't get that exact X from the atomic counter, even though I increment it until it reaches that limit.
UPDATE:
I suspect the problem is with atomic write synchronization. If I put memoryBarrier in different places, the counter values change. But I still can't get it to return the exact value equal to exteriorSize.
UPDATE 2:
Well, I didn't figure out the issue with atomic counter synchronization, so I did it using indirect draw instead. Works like a charm.
The geometry shader executes per-primitive (triangle in this case), whereas the vertex shader executes per-vertex, almost. Using glDrawElements allows vertex results to be shared between triangles (e.g. indexing 0,1,2 then 0,2,3 uses 0 and 2 twice: 4 verts, 2 triangles and 6 references). As you say, a limited cache is used to share the results, so if the same vertex is referenced a long time later it has to be recomputed.
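The vertex-reuse effect described here can be simulated with a tiny FIFO post-transform cache in Python (the cache size is hypothetical; real sizes vary by GPU): counting shader "invocations" for an indexed quad shows why an atomic counter in the vertex shader undercounts the index stream.

```python
# Python simulation of a post-transform vertex cache: a vertex shader
# only runs when its index misses the FIFO cache, so invocations can
# be fewer than the number of indices. (Cache size is hypothetical.)
from collections import deque

def count_invocations(indices, cache_size=4):
    cache = deque(maxlen=cache_size)  # FIFO of recently shaded indices
    invocations = 0
    for i in indices:
        if i not in cache:
            invocations += 1          # cache miss: vertex shader runs
            cache.append(i)
    return invocations

quad = [0, 1, 2, 0, 2, 3]             # two triangles sharing an edge
print(count_invocations(quad))        # 4 invocations for 6 indices
```

A counter incremented per vertex-shader invocation would therefore report 4 here, not the 6 the index stream might suggest; the geometry shader, by contrast, runs exactly once per assembled primitive.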
It looks like there's a potential issue with updates to the counter occurring between atomicCounter and atomicCounterIncrement. If you want an entire section of code like this to work, it needs to be locked. This can get very slow depending on what you're locking.
Instead, it's going to be far easier to always call atomicCounterIncrement and potentially allow ac to grow beyond exteriorSize.
AFAIK reading back values from the atomic counter buffer should stall until the memory operations have completed, but I've been caught out not calling glMemoryBarrier between passes before.
It sounds like exteriorSize should be equal to the number of triangles and not vertices if this is executing in the geometry shader. If instead you do want per-vertex processing, then maybe change to GL_POINTS or save the vertex shader results using the transform feedback extension and then drawing triangles from that (essentially doing the caching yourself but with a buffer that holds everything). If you use glDrawArrays or never reuse vertices then a standard vertex shader should be fine.
Lastly, calling atomicCounterIncrement three times is a waste. Call once and use counter * 3.