How do I draw a line, or a curved line using a fragment shader?
I have a program that calculates a Bezier curve given a set of vertices specified by the user. Early on, it was pretty straightforward: I simply processed each vertex on the application side to generate a set of interpolated points based on this cubic Bezier curve equation:
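B(t) = (1 - t)^3 * P0 + 3 * (1 - t)^2 * t * P1 + 3 * (1 - t) * t^2 * P2 + t^3 * P3,   for t in [0, 1]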
and then stored all the processed vertices in a GL_VERTEX_ARRAY for drawing with glDrawArrays(GL_LINE_STRIP, 0, myArraySize). While the solution is simple, the time complexity of this implementation is O(n^2). The issue arises when I start increasing my step count, such as incrementing t by 0.01 on each iteration of my loop, compounded by the user generating as many vertices as they want.
So that's when I started looking into shaders, especially the fragment shader. As far as I understand, our fragment shader program assigns a color to the current fragment being processed by the GPU. I haven't gotten seriously into shader programming yet, but my current line shader implementation is as follows:
#version 120
uniform vec2 resolution;
uniform vec2 ptA;
uniform vec2 ptB;
uniform vec2 ptC;
uniform vec2 ptD;
uniform float stepTotal;
void main()
{
vec2 uv = gl_FragCoord.xy / resolution.xy;
vec2 curvePts[int(stepTotal)];
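// NOTE: array sizes in GLSL must be compile-time constants, so sizing an array
// with the uniform stepTotal will not compile as written.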
float t = stepTotal;
for(float i = 0.; i < 1.0; i += 1./t)
{
// Courtesy of: https://yalantis.com/blog/how-we-created-visualization-for-horizon-our-open-source-library-for-sound-visualization/
vec2 q0 = mix(ptA, ptB, i);
vec2 q1 = mix(ptB, ptC, i);
vec2 q2 = mix(ptC, ptD, i);
vec2 r0 = mix(q0, q1, i);
vec2 r1 = mix(q1, q2, i);
vec2 p_int = mix(r0, r1, i);
curvePts[int(i) * int(stepTotal)] = p_int;
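// NOTE: i stays below 1.0, so int(i) is always 0 and this always writes curvePts[0].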
}
// TO DO:
// Check if current fragment is within the curve. Not sure how
// to proceed from here...
}
As you can see, I am currently stuck on how to check whether the current fragment is within the curve, as well as how to assign a color to that specific fragment so that the result, when displayed, is a curved line.
You might draw line segments using a distance function:
float DistanceToLineSegment(vec3 p, vec3 a, vec3 b)
{
vec3 pa = p - a, ba = b - a;
float h = clamp( dot(pa,ba)/dot(ba,ba), 0.0, 1.0 );
return length( pa - ba*h );
}
The fragment will be "inside" a segment if the result of the distance function is less than a certain threshold.
You might also replace the threshold test with a smoothstep function to draw an anti-aliased line.
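For example, a minimal sketch of how this could look for the cubic curve. It uses a 2D variant of the function above; the control-point uniforms ptA..ptD are assumed to be given in pixel coordinates, and the step count of 32 and the roughly 2-pixel line width are arbitrary choices:
#version 120
uniform vec2 ptA;
uniform vec2 ptB;
uniform vec2 ptC;
uniform vec2 ptD;
float DistanceToLineSegment2D(vec2 p, vec2 a, vec2 b)
{
    vec2 pa = p - a, ba = b - a;
    float h = clamp(dot(pa, ba) / dot(ba, ba), 0.0, 1.0);
    return length(pa - ba * h);
}
void main()
{
    vec2 p = gl_FragCoord.xy;   // work directly in pixel coordinates
    const int STEPS = 32;       // how finely the curve is approximated
    float minDist = 1e6;
    vec2 prev = ptA;            // the curve starts at the first control point
    for (int i = 1; i <= STEPS; i++)
    {
        float t = float(i) / float(STEPS);
        // de Casteljau evaluation, same as in the question
        vec2 q0 = mix(ptA, ptB, t);
        vec2 q1 = mix(ptB, ptC, t);
        vec2 q2 = mix(ptC, ptD, t);
        vec2 r0 = mix(q0, q1, t);
        vec2 r1 = mix(q1, q2, t);
        vec2 cur = mix(r0, r1, t);
        // distance from this fragment to the segment approximating the curve
        minDist = min(minDist, DistanceToLineSegment2D(p, prev, cur));
        prev = cur;
    }
    // anti-aliased coverage: 1.0 on the curve, fading out over about one pixel
    float line = 1.0 - smoothstep(1.5, 2.5, minDist);
    gl_FragColor = vec4(vec3(line), 1.0);
}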
I am trying to implement a Streak shader, which is described here:
http://www.chrisoat.com/papers/Oat-SteerableStreakFilter.pdf
Short explanation: it samples points with a 1D kernel in a given direction, where the kernel size grows exponentially with each step. Color values are weighted based on the distance to the sampled point and summed. The result is a smooth tail/smear/light-streak effect in that direction. Here is the frag shader:
precision highp float;
uniform sampler2D u_texture;
varying vec2 v_texCoord;
uniform float u_Pass;
const float kernelSize = 4.0;
const float atten = 0.95;
vec4 streak(in float pass, in vec2 texCoord, in vec2 dir, in vec2 pixelStep) {
float kernelStep = pow(kernelSize, pass - 1.0);
vec4 color = vec4(0.0);
for(int i = 0; i < 4; i++) {
float sampleNum = float(i);
float weight = pow(atten, kernelStep * sampleNum);
vec2 sampleTexCoord = texCoord + ((sampleNum * kernelStep) * (dir * pixelStep));
vec4 texColor = texture2D(u_texture, sampleTexCoord) * weight;
color += texColor;
}
return color;
}
void main() {
vec2 iResolution = vec2(512.0, 512.0);
vec2 pixelStep = vec2(1.0, 1.0) / iResolution.xy;
vec2 dir = vec2(1.0, 0.0);
float pass = u_Pass;
vec4 streakColor = streak(pass, v_texCoord, dir, pixelStep);
gl_FragColor = vec4(streakColor.rgb, 1.0);
}
It was going to be used for a starfield type of effect. Here is the implementation on ShaderToy, which works fine:
https://www.shadertoy.com/view/ll2BRG
(Note: Disregard the first shader in Buffer A, it just filters out the dim colors in the input texture to emulate a star field since afaik ShaderToy doesn't allow uploading custom textures)
But when I use the same shader in my own code and render using ping-pong FrameBuffers, it looks different. Here is my own implementation ported over to WebGL:
https://jsfiddle.net/1b68eLdr/87755/
I basically create two 512x512 buffers, ping-pong the shader 4 times, increasing the kernel size at each iteration according to the algorithm, and render the final iteration to the screen.
The problem is visible banding, and my streaks/tails seem to lose brightness much faster. (Note: the image is somewhat inaccurate; the lengths of the streaks are the same/correct, it's the color values that are wrong.)
I have been struggling with this for a while in desktop OpenGL / LWJGL, so I ported it over to WebGL/JavaScript and uploaded it to JSFiddle in the hope that someone can spot the problem. I suspect it's either the texture coordinates or the FrameBuffer configuration, since the shaders are exactly the same.
The reason it works on ShaderToy is that it uses a floating-point render target; in an 8-bit render target the intermediate values get clamped to [0, 1] and quantized, which causes the banding and the loss of brightness.
Simply use gl.FLOAT as the type of your framebuffer texture and the issue is fixed (I verified this with that modification on your JSFiddle).
So do this in your createBackingTexture():
// Just request the extension (MUST be done).
gl.getExtension('OES_texture_float');
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, this._width, this._height, 0, gl.RGBA, gl.FLOAT, null);
I started implementing lighting with several light sources. All the tutorials I found ignore the distance between the light source and the object (for example https://learnopengl.com/Lighting/Basic-Lighting). So I wrote my own shader, but I'm not sure it is correct. Please take a look at it and tell me what is wrong or incorrect in it; I will be very grateful for any help! Below is the shader itself, along with the results it produces for different values of n and k.
Fragment shader:
#version 130
precision mediump float; // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
#define MAX_LAMPS_COUNT 8 // Max lamps count.
uniform vec3 u_LampsPos[MAX_LAMPS_COUNT]; // The position of lamps in eye space.
uniform vec3 u_LampsColors[MAX_LAMPS_COUNT];
uniform vec3 u_AmbientColor = vec3(1, 1, 1);
uniform sampler2D u_TextureUnit;
uniform float u_DiffuseIntensivity = 12;
uniform float ambientStrength = 0.1;
uniform int u_LampsCount;
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_Texture; // Texture coordinates.
// The entry point for our fragment shader.
void main() {
float n = 2;
float k = 2;
float finalDiffuse = 0;
vec3 finalColor = vec3(0, 0, 0);
for (int i = 0; i<u_LampsCount; i++) {
// Will be used for attenuation.
float distance = length(u_LampsPos[i] - v_Position);
// Get a lighting direction vector from the light to the vertex.
vec3 lightVector = normalize(u_LampsPos[i] - v_Position);
// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
// pointing in the same direction then it will get max illumination.
float diffuse = max(dot(v_Normal, lightVector), 0.1);
// Add attenuation.
diffuse = diffuse / (1 + pow(distance, n));
// Calculate final diffuse for fragment
finalDiffuse += diffuse;
// Calculate final light color
finalColor += u_LampsColors[i] / (1 + pow(distance, k));
}
finalColor /= u_LampsCount;
vec3 ambient = ambientStrength * u_AmbientColor;
vec3 diffuse = finalDiffuse * finalColor * u_DiffuseIntensivity;
gl_FragColor = vec4(ambient + diffuse, 1) * texture2D(u_TextureUnit, v_Texture);
}
Vertex shader:
#version 130
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_Texture; // Per-vertex texture information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_Texture; // This will be passed into the fragment shader.
void main() {
// Transform the vertex into eye space.
v_Position = vec3(u_MVMatrix * a_Position);
// Pass through the texture.
v_Texture = a_Texture;
// Transform the normal's orientation into eye space.
v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_MVPMatrix * a_Position;
}
Screenshots of the results for n=2 k=2, n=1 k=3, n=3 k=1 and n=3 k=3.
And if my shader is correct, what should these parameters (n, k) be called?
By "correct" I assume you mean is the code working as well as it should. These lighting calculations are not by any means physically accurate. Unless you are going for full compatibility with old devices, I would recommend you use a higher glsl version which allows you to use in and out and some other useful glsl features. The current is version 450 and you are still using 130. The vertex shader looks ok, as it is only passing through values to the fragment shader.
As for the fragment shader there are is one optimisation you could make.
The calculation u_LampsPos[i] - v_Position doesn't have to be repeated twice. Do it once and do the length and normalize on the same result from one calculation.
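For example, a minimal sketch using the names from your shader (toLamp is just an illustrative name):
vec3 toLamp = u_LampsPos[i] - v_Position; // compute the difference once
float distance = length(toLamp);          // used for the attenuation
vec3 lightVector = toLamp / distance;     // equivalent to normalize(toLamp), reusing the length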
The code is quite small, so there is not much to go wrong GLSL-wise. However, I was wondering why you did finalColor /= u_LampsCount;?
That didn't make sense to me.
First of all, I must apologize for posting yet another question on this subject (there are a lot already!). I did search for other related questions and answers, but unfortunately none of them showed me the solution. Now I'm desperate! :D
It is worth mentioning that the code posted below gives a satisfying 'bumpy' effect. It is the lighting of the scene that seems to be wrong.
The scene is dead simple: a cube in the center, and a light source above it, rotating around it (parallel to the ground).
My approach is to start from my basic light shader, which gives me adequate outputs (or so I think!). The first step is to modify it to do the calculations in tangent space, then use the normal extracted from a texture.
I tried to comment the code nicely, but in short I have two questions:
1) Doing only basic lighting (no normal mapping), I expect the scene to look exactly the same, with or without transforming my vectors into tangent space with the TBN matrix. Am I wrong?
2) Why do I get incorrect lighting?
A couple of screenshots to give you an idea (EDITED): following LJ's comment, I am no longer summing normals and tangents per vertex/face. Interestingly, it highlights the issue (see the captures, where I have marked how the light moves).
Basically it is as if the cube were rotated 90 degrees to the left, or as if the light were turning vertically instead of horizontally.
Result with normal mapping:
Version with simple light:
Vertex shader:
// Information about the light.
// Here we care essentially about light.Position, which
// is set to be something like vec3(cos(x)*9, 5, sin(x)*9)
uniform Light_t Light;
uniform mat4 W; // The model transformation matrix
uniform mat4 V; // The camera transformation matrix
uniform mat4 P; // The projection matrix
in vec3 VS_Position;
in vec4 VS_Color;
in vec2 VS_TexCoord;
in vec3 VS_Normal;
in vec3 VS_Tangent;
out vec3 FS_Vertex;
out vec4 FS_Color;
out vec2 FS_TexCoord;
out vec3 FS_LightPos;
out vec3 FS_ViewPos;
out vec3 FS_Normal;
// This method calculates the TBN matrix:
// I'm sure it is not optimized vertex shader code,
// to have this separate method, but nevermind for now :)
mat3 getTangentMatrix()
{
// Note: here I must say I am a bit confused: do I need to transform
// with 'normalMatrix'? In practice, it seems to make no difference...
mat3 normalMatrix = transpose(inverse(mat3(W)));
vec3 norm = normalize(normalMatrix * VS_Normal);
vec3 tang = normalize(normalMatrix * VS_Tangent);
vec3 btan = normalize(normalMatrix * cross(VS_Normal, VS_Tangent));
tang = normalize(tang - dot(tang, norm) * norm);
return transpose(mat3(tang, btan, norm));
}
void main()
{
// Set the gl_Position and pass color + texcoords to the fragment shader
gl_Position = (P * V * W) * vec4(VS_Position, 1.0);
FS_Color = VS_Color;
FS_TexCoord = VS_TexCoord;
// Now here we start:
// This is where supposedly, multiplying with the TBN should not
// change anything to the output, as long as I apply the transformation
// to all of them, or none.
// Typically, removing the 'TBN *' everywhere (and not using the normal
// texture later in the fragment shader) is exactly the code I use for
// my basic light shader.
mat3 TBN = getTangentMatrix();
FS_Vertex = TBN * (W * vec4(VS_Position, 1)).xyz;
FS_LightPos = TBN * Light.Position;
FS_ViewPos = TBN * inverse(V)[3].xyz;
// This line is actually not needed when using the normal map:
// I keep the FS_Normal variable for comparison purposes,
// when I want to switch to my basic light shader effect.
// (see later in fragment shader)
FS_Normal = TBN * normalize(transpose(inverse(mat3(W))) * VS_Normal);
}
And the fragment shader:
struct Textures_t
{
int SamplersCount;
sampler2D Samplers[4];
};
struct Light_t
{
int Active;
float Ambient;
float Power;
vec3 Position;
vec4 Color;
};
uniform mat4 W;
uniform mat4 V;
uniform Textures_t Textures;
uniform Light_t Light;
in vec3 FS_Vertex;
in vec4 FS_Color;
in vec2 FS_TexCoord;
in vec3 FS_LightPos;
in vec3 FS_ViewPos;
in vec3 FS_Normal;
out vec4 frag_Output;
vec4 getPixelColor()
{
return Textures.SamplersCount >= 1
? texture2D(Textures.Samplers[0], FS_TexCoord)
: FS_Color;
}
vec3 getTextureNormal()
{
// FYI: the normal texture is always at index 1
vec3 bump = texture(Textures.Samplers[1], FS_TexCoord).xyz;
bump = 2.0 * bump - vec3(1.0, 1.0, 1.0);
return normalize(bump);
}
vec4 getLightColor()
{
// This is the one line that changes between my basic light shader
// and the normal mapping one:
// - If I don't do 'TBN *' earlier and use FS_Normal here,
// the enlightenment seems fine (see second screenshot)
// - If I do multiply by TBN (including on FS_Normal), I would expect
// the same result as without multiplying ==> not the case: it looks
// very similar to the result with normal mapping
// (just has no bumpy effect of course)
// - If I use the normal texture (along with TBN of course), then I get
// the result you see in the first screenshot.
vec3 N = getTextureNormal(); // Instead of 'normalize(FS_Normal);'
// Everything from here on is the same as my basic light shader
vec3 L = normalize(FS_LightPos - FS_Vertex);
vec3 E = normalize(FS_ViewPos - FS_Vertex);
vec3 R = normalize(reflect(-L, N));
// Ambient color: light color times ambient factor
vec4 ambient = Light.Color * Light.Ambient;
// Diffuse factor: product of Normal to Light vectors
// Diffuse color: light color times the diffuse factor
float dfactor = max(dot(N, L), 0);
vec4 diffuse = clamp(Light.Color * dfactor, 0, 1);
// Specular factor: product of reflected to camera vectors
// Note: applies only if the diffuse factor is greater than zero
float sfactor = 0.0;
if(dfactor > 0)
{
sfactor = pow(max(dot(R, E), 0.0), 8.0);
}
// Specular color: light color times specular factor
vec4 specular = clamp(Light.Color * sfactor, 0, 1);
// Light attenuation: square of the distance moderated by light's power factor
float atten = 1 + pow(length(FS_LightPos - FS_Vertex), 2) / Light.Power;
// The fragment color is a factor of the pixel and light colors:
// Note: attenuation only applies to diffuse and specular components
return getPixelColor() * (ambient + (diffuse + specular) / atten);
}
void main()
{
frag_Output = Light.Active == 1
? getLightColor()
: getPixelColor();
}
That's it! I hope you have enough information and of course, your help will be greatly appreciated! :) Take care.
I am experiencing a very similar problem, and I cannot explain why the lighting doesn't work right, but I can answer your first question and at the very least explain how I somehow got the lighting working acceptably (though your problem may not necessarily be the same as mine).
Firstly, in theory, if your tangents and bitangents are calculated correctly, you should get exactly the same lighting result when doing the calculation in tangent space with a tangent-space normal of [0, 0, 1].
Secondly, while it is common knowledge that you should transform your normals from model to camera space by multiplying by the inverse transpose of the model-view matrix, as explained in this tutorial, I found that the problem of the lighting being transformed wrongly can be solved by transforming the normal and tangent by the model-view matrix rather than by its inverse transpose, i.e. use normalMatrix = mat3(W); instead of normalMatrix = transpose(inverse(mat3(W)));.
In my case this did "fix" the problems with the light, but I don't know why it did, and I make no guarantee that it does not (in fact I assume that it does) introduce other problems with the shading.
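For reference, a minimal sketch of that change applied to the getTangentMatrix() from the question; only the normalMatrix line differs, the rest is copied unchanged:
mat3 getTangentMatrix()
{
    mat3 normalMatrix = mat3(W); // instead of transpose(inverse(mat3(W)))
    vec3 norm = normalize(normalMatrix * VS_Normal);
    vec3 tang = normalize(normalMatrix * VS_Tangent);
    vec3 btan = normalize(normalMatrix * cross(VS_Normal, VS_Tangent));
    tang = normalize(tang - dot(tang, norm) * norm);
    return transpose(mat3(tang, btan, norm));
}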
My aim is to pass an array of points to the shader, calculate their distance to the fragment and paint each of them as a circle colored with a gradient depending on that distance.
For example:
(From a working example I set up on ShaderToy.)
Unfortunately it isn't clear to me how I should calculate and convert the coordinates passed for processing inside the shader.
What I'm currently trying is to pass two arrays of floats to the shader through uniforms - one for the x positions and one for the y positions of each point. Then, inside the shader, I iterate through each point like so:
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif
uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;
in vec4 gl_FragCoord;
varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;
void main()
{
float intensity = 0.0;
for(int i=0; i<100; i++)
{
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
intensity += exp(-0.5*d*d);
}
intensity=3.0*pow(intensity,0.02);
if (intensity<=1.0)
gl_FragColor=vec4(0.0,intensity*0.5,0.0,1.0);
else if (intensity<=2.0)
gl_FragColor=vec4(intensity-1.0, 0.5+(intensity-1.0)*0.5,0.0,1.0);
else
gl_FragColor=vec4(1.0,3.0-intensity,0.0,1.0);
}
But that doesn't work - and I believe it may be because I'm trying to work with the pixel coordinates without properly translating them. Could anyone explain to me how to make this work?
Update:
The current result is:
The sketch's code is:
PShader pointShader;
float[] sourceX;
float[] sourceY;
void setup()
{
size(1024, 1024, P3D);
background(255);
sourceX = new float[100];
sourceY = new float[100];
for (int i = 0; i<100; i++)
{
sourceX[i] = random(0, 1023);
sourceY[i] = random(0, 1023);
}
pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
shader(pointShader, POINTS);
pointShader.set("sourceX", sourceX);
pointShader.set("sourceY", sourceY);
pointShader.set("resolution", float(width), float(height));
}
void draw()
{
for (int i = 0; i<100; i++) {
strokeWeight(60);
point(sourceX[i], sourceY[i]);
}
}
while the vertex shader is:
#define PROCESSING_POINT_SHADER
uniform mat4 projection;
uniform mat4 transform;
attribute vec4 vertex;
attribute vec4 color;
attribute vec2 offset;
varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;
void main() {
vec4 clip = transform * vertex;
gl_Position = clip + projection * vec4(offset, 0, 0);
vertColor = color;
center = clip.xy;
pos = offset;
}
Based on the comments it seems you have confused two different approaches:
1. Draw a single full-screen polygon, pass in the points and calculate the final value once per fragment using a loop in the shader.
2. Draw bounding geometry for each point, calculate the density for just one point in the fragment shader and use additive blending to sum the densities of all points.
The other issue is that your points are given in pixels but the code expects a 0 to 1 range, so d is large and the points are black. Fixing this issue as @RetoKoradi describes should address the points being black, but I suspect you'll find ramp clipping issues when many are in close proximity. Passing points into the shader limits scalability and is inefficient unless the points cover the whole viewport.
As described below, I think sticking with approach 2 is better. To restructure your code for it, remove the loop, don't pass in the array of points and use center as the point coordinate instead:
//calc center in pixel coordinates
vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;
//find the distance in pixels (avoiding aspect ratio issues)
float dPixels = distance(gl_FragCoord.xy, centerPixels);
//scale down to the 0 to 1 range
float d = dPixels / resolution.y;
//write out the intensity
gl_FragColor = vec4(exp(-0.5*d*d));
Draw this to a texture (from comments: opengl-tutorial.org code and this question) with additive blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
Now that texture will contain intensity as it was after your original loop. In another fragment shader during a full screen pass (draw a single triangle that covers the whole viewport), continue with:
uniform sampler2D intensityTex;
...
float intensity = texture2D(intensityTex, gl_FragCoord.xy/resolution.xy).r;
intensity = 3.0*pow(intensity, 0.02);
...
The code you have shown is fine, assuming you're drawing a full screen polygon so the fragment shader runs once for each pixel. Potential issues are:
resolution isn't set correctly
The point coordinates aren't in the range 0 to 1 on the screen.
Although minor, d will be stretched by the aspect ratio, so you might be better off scaling the points up to pixel coordinates and dividing the distance by resolution.y.
This looks pretty similar to creating a density field for 2D metaballs. For performance you're best off limiting the density function for each point so it doesn't go on forever, then splatting discs into a texture using additive blending. This saves processing those pixels a point doesn't affect (just like in deferred shading). The result is the density field, or in your case per-pixel intensity.
These are a little related:
2D OpenGL ES Metaballs on android (no answers yet)
calculate light volume radius from intensity
gl_PointSize Corresponding to World Space Size
It looks like the point center and fragment position are in different coordinate spaces when you subtract them:
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
Based on your explanation and code, sourceX and sourceY are in window coordinates, meaning that they are in units of pixels. gl_FragCoord is in the same coordinate space. And even though you don't show it directly, I assume that resolution is the size of the window in pixels.
This means that:
vec2 position = ( gl_FragCoord.xy / resolution.xy );
calculates the normalized position of the fragment within the window, in the range [0.0, 1.0] for both x and y. But then on the next line:
float d = distance(position, source);
you subtract source, which is still in window coordinates, from this position in normalized coordinates.
Since it looks like you wanted the distance in normalized coordinates, which makes sense, you'll also need to normalize source:
vec2 source = vec2(sourceX[i],sourceY[i]) / resolution.xy;
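With that change, the loop from your shader would look something like this (only the source line differs from your original code):
for (int i = 0; i < 100; i++)
{
    vec2 source = vec2(sourceX[i], sourceY[i]) / resolution.xy; // now normalized, like position
    vec2 position = gl_FragCoord.xy / resolution.xy;
    float d = distance(position, source);
    intensity += exp(-0.5*d*d);
}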
I'm working on a small homemade 3D engine and more precisely on rendering optimization. Up to now I have developed a sorting algorithm whose goal is to gather as much geometry (meshes) as possible into batches that share the same material properties and the same shader program. This way I minimize the state changes (glBindXXX) and draw calls (glDrawXXX). So, if I have a scene composed of 10 boxes, all sharing the same texture and rendered with the same shader program (for example one including ADS lighting), then all the vertices of these meshes are merged into a single VBO, the texture is bound just once and only one draw call is needed.
Scene description:
- 10 meshes (boxes) mapped with 'texture_1'
Pseudo-code (render):
shaderProgram_1->Bind()
{
glActiveTexture(texture_1)
DrawCall(render 10 meshes with 'texture_1')
}
But now I want to be sure of one thing. Let's assume our scene is still composed of the same 10 boxes, but this time 5 of them are mapped with a different texture (not multi-texturing, just simple texture mapping).
Scene description:
- 5 boxes with 'texture_1'
- 5 boxes with 'texture_2'
Pseudo-code (render):
shaderProgram_1->Bind()
{
glActiveTexture(texture_1)
DrawCall(render 5 meshes with 'texture_1')
}
shaderProgram_2->Bind()
{
glActiveTexture(texture_2)
DrawCall(render 5 meshes with 'texture_2')
}
And my fragment shader has a single sampler2D declaration (the goal of my shader program is to render geometry with simple texture mapping and ADS lighting):
uniform sampler2D ColorSampler;
I want to be sure that it's not possible to draw this scene with a single draw call (as was possible in my previous example, where one batch was enough). That was possible because I used the same texture for all the geometry. I think this time I will need 2 batches, hence 2 draw calls, and of course when rendering each batch I will bind 'texture_1' or 'texture_2' before the corresponding draw call (one for the first 5 boxes and another for the other 5).
To sum up, if all the meshes are mapped with a simple texture (simple texture mapping):
5 with a red texture (texture_red)
5 with a blue texture (texture_blue)
Is it possible to render the scene with a single draw call? I don't think so, because my pseudo-code would look like this:
Pseudo-code:
shaderProgram->Bind()
{
glActiveTexture(texture_blue)
glActiveTexture(texture_red)
DrawCall(render 10 meshes)
}
I think it's impossible to differentiate the 2 textures when my fragment shader has to compute the pixel color using a single sampler2D uniform variable (simple texture mapping).
Here's my fragment shader:
#version 440
#define MAX_LIGHT_COUNT 1
/*
** Output color value.
*/
layout (location = 0) out vec4 FragColor;
/*
** Inputs.
*/
in vec3 Position;
in vec2 TexCoords;
in vec3 Normal;
/*
** Material uniforms.
*/
uniform MaterialBlock
{
vec3 Ka, Kd, Ks;
float Shininess;
} MaterialInfos;
uniform sampler2D ColorSampler;
struct Light
{
vec4 Position;
vec3 La, Ld, Ls;
float Kc, Kl, Kq;
};
uniform struct Light LightInfos[MAX_LIGHT_COUNT];
uniform uint LightCount;
/*
** Light attenuation factor.
*/
float getLightAttenuationFactor(vec3 lightDir, Light light)
{
float lightAtt = 0.0f;
float dist = 0.0f;
dist = length(lightDir);
lightAtt = 1.0f / (light.Kc + (light.Kl * dist) + (light.Kq * pow(dist, 2)));
return (lightAtt);
}
/*
** Basic phong shading.
*/
vec3 Basic_Phong_Shading(vec3 normalDir, vec3 lightDir, vec3 viewDir, int idx)
{
vec3 Specular = vec3(0.0f);
float lambertTerm = max(dot(lightDir, normalDir), 0.0f);
vec3 Ambient = LightInfos[idx].La * MaterialInfos.Ka;
vec3 Diffuse = LightInfos[idx].Ld * MaterialInfos.Kd * lambertTerm;
if (lambertTerm > 0.0f)
{
vec3 reflectDir = reflect(-lightDir, normalDir);
Specular = LightInfos[idx].Ls * MaterialInfos.Ks * pow(max(dot(reflectDir, viewDir), 0.0f), MaterialInfos.Shininess);
}
return (Ambient + Diffuse + Specular);
}
/*
** Fragment shader entry point.
*/
void main(void)
{
vec3 LightIntensity = vec3(0.0f);
vec4 texDiffuseColor = texture2D(ColorSampler, TexCoords);
vec3 normalDir = (gl_FrontFacing ? -Normal : Normal);
for (int idx = 0; idx < LightCount; idx++)
{
vec3 lightDir = vec3(LightInfos[idx].Position) - Position.xyz;
vec3 viewDir = -Position.xyz;
float lightAttenuationFactor = getLightAttenuationFactor(lightDir, LightInfos[idx]);
LightIntensity += Basic_Phong_Shading(
-normalize(normalDir), normalize(lightDir), normalize(viewDir), idx
) * lightAttenuationFactor;
}
FragColor = vec4(LightIntensity, 1.0f) * texDiffuseColor;
}
Do you agree with me?
It's possible if you either: (i) treat it as a multitexturing problem, where the per-fragment function just picks between the two incoming samples (ideally using mix with a coefficient of 0.0 or 1.0, not genuine branching); or (ii) composite your two textures into one texture (subject to your ability to wrap and clamp texture coordinates efficiently, watching out for dependent reads, and to maximum texture size constraints).
It's an open question as to whether either of these things would improve performance. Definitely go with (ii) if you can.
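A minimal sketch of option (i), assuming you pass a per-vertex (or per-instance) selector through to the fragment shader as 0.0 or 1.0; the names TexSelector and ColorSampler2 are illustrative, not from your code:
uniform sampler2D ColorSampler;   // the first texture, as in your shader
uniform sampler2D ColorSampler2;  // the second texture (hypothetical)
in float TexSelector;             // 0.0 selects ColorSampler, 1.0 selects ColorSampler2
vec4 sampleDiffuse(vec2 uv)
{
    vec4 c0 = texture(ColorSampler, uv);
    vec4 c1 = texture(ColorSampler2, uv);
    return mix(c0, c1, TexSelector); // a blend with coefficient 0.0 or 1.0, no real branching
}
Both textures are sampled for every fragment; that is the price for keeping all 10 boxes in one batch.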