I'm trying to change texture colors inside the GLSL context - doing so before the beginning of the OpenGL pipeline is not an option.
I have tried the following approach:
Vertex Shader
attribute highp vec2 a_TexCoord;
uniform highp mat3 u_TextureMatrix;
varying highp vec2 v_TexCoord;
highp vec4 calculatePosition();
void main()
{
gl_Position = calculatePosition();
v_TexCoord = (u_TextureMatrix * vec3(a_TexCoord,1.0)).xy;
}
attribute highp vec2 a_Vertex;
uniform highp mat3 u_TransformMatrix;
uniform highp mat3 u_ProjectionMatrix;
highp vec4 calculatePosition() {
return vec4(u_ProjectionMatrix * u_TransformMatrix * vec3(a_Vertex.xy, 1.0), 1.0);
}
Fragment Shader
uniform lowp float u_Opacity;
lowp vec4 calculatePixel();
void main()
{
gl_FragColor = calculatePixel();
gl_FragColor.a *= u_Opacity;
}
varying mediump vec2 v_TexCoord;
uniform lowp vec4 u_Color;
uniform sampler2D u_Tex0;
lowp vec4 calculatePixel() {
vec4 tex = texture2D(u_Tex0, v_TexCoord);
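// shift each channel down by 100/255 and wrap values that go negative (a crude per-channel color rotation)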
tex.xyz -= (100.0 / 255.0);
if (tex.x < 0.0) { tex.x += 1.0; }
if (tex.y < 0.0) { tex.y += 1.0; }
if (tex.z < 0.0) { tex.z += 1.0; }
return tex * u_Color;
}
This code works EXCEPT when interpolation is applied (GL_LINEAR_MIPMAP_LINEAR): things start to look really bad, because the filtering happens before the fragment shader runs, so I never get to change the colors at a point where it is reliable to do so.
I'm developing a game that is required to support really old hardware (from as early as 2008), which is why I'm using such obsolete, #version 120-compatible GLSL.
Is there a way to change Texture colors anywhere in the pipeline BEFORE the rasterization kicks in?
So I've currently managed to write a shader using Xoppa's tutorials and an AssetManager, and I managed to bind a texture to the model, and it looks fine.
Now the next step, I guess, would be to add diffuse lighting (not sure if that's the word? Phong shading?) to give the bunny some form of shading. While I have a little bit of experience with GLSL shaders in LWJGL, I'm unsure how to carry that same information over so I can use it in libGDX and in its GLSL shaders.
I understand that this all could be accomplished using the Environment class etc. But I want to achieve this through the shaders alone or by traditional means simply for the challenge.
In LWJGL, my shaders would have these inputs and uniforms:
in vec3 position;
in vec2 textureCoordinates;
in vec3 normal;
out vec2 pass_textureCoordinates;
out vec3 surfaceNormal;
out vec3 toLightVector;
uniform mat4 transformationMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform vec3 lightPosition;
This would be reasonably easy for me to calculate and set in LWJGL. Here is my current libGDX vertex file:
attribute vec3 a_position;
attribute vec3 a_normal;
attribute vec2 a_texCoord0;
uniform mat4 u_worldTrans;
uniform mat4 u_projViewTrans;
varying vec2 v_texCoords;
void main() {
v_texCoords = a_texCoord0;
gl_Position = u_projViewTrans * u_worldTrans * vec4(a_position, 1.0);
}
I imagine that I could declare the uniforms similarly to the LWJGL GLSL example, but I don't know how to apply these uniforms in libGDX and have it work. I am also unsure what u_projViewTrans is; I'm assuming it is a combination of the projection, transformation and view matrices, and that it is set with camera.combined?
If someone could help me understand the process, or point to an example of how this (per-pixel lighting?) can be implemented with just u_projViewTrans and u_worldTrans, I'd greatly appreciate your time and effort in helping me understand these concepts a bit better.
Here's my GitHub upload of my work in progress.
You can do the light calculations in world space. A simple lambertian diffuse light can be calculated like this:
vec3 toLightVector = normalize( lightPosition - vertexPosition );
float lightIntensity = max( 0.0, dot( normal, toLightVector ));
A detailed explanation can be found in the answer to the Stack Overflow question How does this faking the light work on aerotwist?.
While Gouraud shading calculates the light in the vertex shader, Phong shading calculates the light in the fragment shader.
(see further GLSL fixed function fragment program replacement)
A Gouraud shader may look like this:
Vertex Shader:
attribute vec3 a_position;
attribute vec3 a_normal;
attribute vec2 a_texCoord0;
uniform mat4 u_worldTrans;
uniform mat4 u_projViewTrans;
uniform vec3 lightPosition;
varying vec2 v_texCoords;
varying float v_lightIntensity;
void main()
{
vec4 vertPos = u_worldTrans * vec4(a_position, 1.0);
vec3 normal = normalize(mat3(u_worldTrans) * a_normal);
vec3 toLightVector = normalize(lightPosition - vertPos.xyz);
v_lightIntensity = max( 0.0, dot(normal, toLightVector));
v_texCoords = a_texCoord0;
gl_Position = u_projViewTrans * vertPos;
}
Fragment Shader:
varying vec2 v_texCoords;
varying float v_lightIntensity;
uniform sampler2D u_texture;
void main()
{
vec4 texCol = texture2D( u_texture, v_texCoords.st );
gl_FragColor = vec4( texCol.rgb * v_lightIntensity, 1.0 );
}
A Phong shader may look like this:
Vertex Shader:
attribute vec3 a_position;
attribute vec3 a_normal;
attribute vec2 a_texCoord0;
uniform mat4 u_worldTrans;
uniform mat4 u_projViewTrans;
varying vec2 v_texCoords;
varying vec3 v_vertPosWorld;
varying vec3 v_vertNVWorld;
void main()
{
vec4 vertPos = u_worldTrans * vec4(a_position, 1.0);
v_vertPosWorld = vertPos.xyz;
v_vertNVWorld = normalize(mat3(u_worldTrans) * a_normal);
v_texCoords = a_texCoord0;
gl_Position = u_projViewTrans * vertPos;
}
Fragment Shader:
varying vec2 v_texCoords;
varying vec3 v_vertPosWorld;
varying vec3 v_vertNVWorld;
uniform sampler2D u_texture;
struct PointLight
{
vec3 color;
vec3 position;
float intensity;
};
uniform PointLight u_pointLights[1];
void main()
{
vec3 toLightVector = normalize(u_pointLights[0].position - v_vertPosWorld.xyz);
float lightIntensity = max( 0.0, dot(v_vertNVWorld, toLightVector));
vec4 texCol = texture2D( u_texture, v_texCoords.st );
vec3 finalCol = texCol.rgb * lightIntensity * u_pointLights[0].color;
gl_FragColor = vec4( finalCol.rgb, 1.0 );
}
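To connect this back to the libGDX side of the question: u_projViewTrans is the combined projection and view matrix (camera.combined), and u_worldTrans is the model's world transform. Below is a minimal sketch, not the Xoppa BaseShader plumbing, of how the uniforms used by the shaders above might be uploaded with a raw ShaderProgram; the class and method layout here is my own assumption.

import com.badlogic.gdx.graphics.Camera;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;
import com.badlogic.gdx.math.Matrix4;
import com.badlogic.gdx.math.Vector3;

public class LightUniformBinder {
    public static void bind(ShaderProgram shader, Camera camera,
                            Matrix4 worldTrans, Vector3 lightPos) {
        shader.begin();
        // camera.combined is projection * view, i.e. what u_projViewTrans expects
        shader.setUniformMatrix("u_projViewTrans", camera.combined);
        // the model's world transform, e.g. modelInstance.transform
        shader.setUniformMatrix("u_worldTrans", worldTrans);
        // world-space light position read by the Gouraud/Phong shaders above
        shader.setUniformf("lightPosition", lightPos);
        // ... render the mesh with this shader here ...
        shader.end();
    }
}

For the Phong variant, struct members are addressed by their full GLSL names in the same way, e.g. shader.setUniformf("u_pointLights[0].position", lightPos).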
I have written a fragment shader that works just fine with a single light. Now I am trying to adapt it to work with 8 lights, then implement it in Processing. Clearly I am doing something wrong in the math and I cannot see what it is... I have read other posts about this and tried to adapt the answers to my problem, but no luck so far.
////Fragment/////
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif
varying vec4 vertColor;
varying vec3 ecNormal;
varying vec3 lightDir;
void main() {
vec3 direction = normalize(lightDir);
vec3 normal = normalize(ecNormal);
float intensity = max(0.0, dot(direction, normal));
gl_FragColor = vec4(intensity, intensity, intensity, 1) * vertColor;
}
////vertex/////
#define PROCESSING_LIGHT_SHADER
uniform mat4 modelview;
uniform mat4 transform;
uniform mat3 normalMatrix;
uniform vec4 lightPosition;
uniform vec3 lightNormal;
attribute vec4 vertex;
attribute vec4 color;
attribute vec3 normal;
varying vec4 vertColor;
varying vec3 ecNormal;
varying vec3 lightDir;
void main() {
gl_Position = transform * vertex;
vec3 ecVertex = vec3(modelview * vertex);
ecNormal = normalize(normalMatrix * normal);
lightDir = normalize(lightPosition.xyz - ecVertex);
vertColor = color;
}
Just making it compile real quick with an online shader tool (http://shdr.bkcore.com/). You might need to pass the attributes from the vertex shader to varyings for the fragment shader, but I'm not sure; it's been a while since I wrote shaders.
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif
uniform mat4 modelview;
uniform mat3 normalMatrix; // Processing supplies this as a mat3
uniform int lightCount;
uniform vec4 lightPosition[8];
varying vec4 vertex; //was attribute, no such thing in frag shaders
varying vec3 normal; //was attribute
varying vec4 vertColor;
void main() {
vec3 vertexCamera = vec3(modelview * vertex);
vec3 transformedNormal = normalize(normalMatrix * normal); // mat3 * vec3, matching Processing's normalMatrix
float intensity = 0.0;
for (int i = 0; i < 8; i++) { // can't loop over a non-constant bound, so loop to 8 and guard below
if (i < lightCount)
{
vec3 direction = normalize(lightPosition[i].xyz - vertexCamera);
intensity += max(0.0, dot(direction, transformedNormal));
}
}
gl_FragColor = vec4(intensity, intensity, intensity, 1) * vertColor;
}
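For reference, here is a rough Processing (Java mode) sketch of how a shader like this gets driven. It assumes the vertex shader keeps the #define PROCESSING_LIGHT_SHADER hint, so Processing fills lightCount and lightPosition[] on its own whenever lights are active; the file names are placeholders.

// Rough sketch: Processing supplies lightCount / lightPosition[] to a LIGHT shader
// automatically; each pointLight() call adds one entry and bumps lightCount.
PShader multiLight;

void setup() {
  size(640, 360, P3D);
  multiLight = loadShader("multilight.frag", "multilight.vert"); // placeholder file names
}

void draw() {
  background(0);
  shader(multiLight);
  noStroke();
  fill(204);
  pointLight(255, 255, 255, mouseX, mouseY, 200);
  pointLight(255, 255, 255, width - mouseX, height - mouseY, 200);
  translate(width / 2, height / 2, 0);
  sphere(120);
}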
I want to darken the corners of my little quad in my program. I have the following vertex shader:
#version 130
varying vec4 v_color;
varying vec2 v_texcoord;
void main()
{
v_color = gl_Color.rgba;
v_texcoord = gl_MultiTexCoord0.xy;
gl_FrontColor = vec4(v_color.r, v_color.g, v_color.b, 1.0f);
gl_Position = ftransform();
}
And my fragment shader:
#version 130
uniform sampler2D u_texture;
varying vec4 v_color;
varying vec2 v_texcoord;
void main()
{
gl_FragColor = v_color * texture2D(u_texture, v_texcoord);
}
I read somewhere that gl_FrontColor could be used to "color" vertices, but no matter what I change the values to, it always seems to stay the same.
My question is, what function can I use to set the color of my vertices? I want the vertices to be slightly darker than the rest of the quad so it looks a little "nicer".
You write to both v_color (your varying) and gl_FrontColor (a GLSL built-in). But in the fragment shader you only read v_color, so anything you put into gl_FrontColor is ignored.
You should use only one of these. Either
// vertex
#version 130
#define SCALE_FACTOR 0.5
varying vec4 v_color;
varying vec2 v_texcoord;
void main()
{
v_color = vec4(gl_Color.rgb * SCALE_FACTOR, 1.0);
v_texcoord = gl_MultiTexCoord0.xy;
gl_Position = ftransform();
}
// fragment
#version 130
uniform sampler2D u_texture;
varying vec4 v_color;
varying vec2 v_texcoord;
void main()
{
gl_FragColor = v_color * texture2D(u_texture, v_texcoord);
}
Or use gl_FrontColor in the vertex shader and gl_Color in the fragment shader instead of your v_color (and remove that varying, as it is no longer needed).
Of course, the vertex gl_Color attribute comes from glColorPointer; if you change those colors, the change shows up in the shader too.
I've created a shader that can rotate an image by 180° and overlay it with a black gradient, but now I want real transparency instead of using black as my background color.
This is what I got so far:
// Vertex Shader
uniform highp mat4 u_modelViewMatrix;
uniform highp mat4 u_projectionMatrix;
attribute highp vec4 a_position;
attribute lowp vec4 a_color;
attribute highp vec2 a_texcoord;
varying lowp vec4 v_color;
varying highp vec2 v_texCoord;
uniform int offset;
uniform int space;
uniform int vph;
void main()
{
highp float h = float(offset)/float(vph);
highp float s = float(space)/1000.0;
highp vec4 pos = a_position;
pos.y = pos.y - (h + s);
gl_Position = (u_projectionMatrix * u_modelViewMatrix) * pos;
v_color = a_color;
v_texCoord = vec2(a_texcoord.x, 1.0 - a_texcoord.y);
}
// Fragment Shader
varying highp vec2 v_texCoord;
uniform sampler2D u_texture0;
uniform int gradient;
void main()
{
lowp vec3 w = vec3(1.0,1.0,1.0);
lowp vec3 b = vec3(0.0,0.0,0.0);
lowp vec3 mix = mix(b, w, (v_texCoord.y-(float(gradient)/10.0)));
gl_FragColor = texture2D(u_texture0,v_texCoord) * vec4(mix, 1.0);
}
Replace the black color with a transparent color in your fragment shader:
void main()
{
lowp vec4 w = vec4(1.0, 1.0, 1.0, 1.0);
lowp vec4 b = vec4(0.0, 0.0, 0.0, 0.0);
lowp vec4 grad = mix(b, w, (v_texCoord.y - (float(gradient) / 10.0)));
gl_FragColor = texture2D(u_texture0, v_texCoord) * grad;
}
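One more thing worth noting: the alpha the shader now outputs only shows up if blending is enabled on the CPU side. Here is a minimal sketch, assuming an Android GLES20 host (the question does not say which binding is used; the same glEnable/glBlendFunc calls exist in any other binding):

import android.opengl.GLES20;

public final class BlendState {
    // Blending must be enabled for the shader's alpha to have any visible effect.
    public static void enableAlphaBlending() {
        GLES20.glEnable(GLES20.GL_BLEND);
        // standard "source over" blending: src * src.a + dst * (1 - src.a)
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
    }

    private BlendState() {}
}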
Hi, I'm new to GLSL and I'm having a few problems.
I'm trying to create a pair of GLSL shaders that use either color or texture, but I must be doing something wrong.
The problem is that if I set uUseTexture to 0 (which should indicate color), it doesn't work (the object is not colored). I know the coloring code works on its own; any hints as to why it does not work with the if statement?
Here is the code:
// Fragment
precision mediump float;
uniform int uUseTexture;
uniform sampler2D uSampler;
varying vec4 vColor;
varying vec2 vTextureCoord;
void main(void) {
if(uUseTexture == 1) {
gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
} else {
gl_FragColor = vColor;
}
}
// Vertex
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
attribute vec2 aTextureCoord;
uniform int uUseTexture;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
varying vec4 vColor;
varying vec2 vTextureCoord;
void main(void) {
gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
if(uUseTexture == 1) {
vTextureCoord = aTextureCoord;
} else {
vColor = aVertexColor;
}
}
Nothing springs to mind immediately glancing over your code, but I'd like to take a moment and point out that this use case can be covered without needing an if statement. For example, let's treat uUseTexture as a float instead of an int (you could cast it in the shader but this is more interesting):
// Vertex
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
attribute vec2 aTextureCoord;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
varying vec4 vColor;
varying vec2 vTextureCoord;
void main(void) {
gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
// It may actually be faster to just assign both of these anyway
vTextureCoord = aTextureCoord;
vColor = aVertexColor;
}
// Fragment
precision mediump float; // needed in ES/WebGL, matching the original fragment shader
uniform float uUseTexture;
uniform sampler2D uSampler;
varying vec4 vColor;
varying vec2 vTextureCoord;
void main(void) {
// vTextureCoord is already a vec2, BTW
vec4 texColor = texture2D(uSampler, vTextureCoord) * uUseTexture;
vec4 vertColor = vColor * (1.0 - uUseTexture);
gl_FragColor = texColor + vertColor;
}
Now uUseTexture simply acts as a modulator for how much of each color source you want to use, and it's more flexible in that you could set it to 0.5 and get half texture/half vertex color too! (Since the uniform is now a float, remember to upload it with a float call, e.g. gl.uniform1f instead of gl.uniform1i in WebGL. The three-line blend above is also equivalent to gl_FragColor = mix(vColor, texture2D(uSampler, vTextureCoord), uUseTexture);.)
The thing that may surprise you is that there's a good likelihood that this is what the shader compiler is doing behind the scenes anyway when you use an if statement like that. It's typically more efficient for the hardware that way.