In this language:
https://en.wikibooks.org/wiki/GLSL_Programming/Vector_and_Matrix_Operations
How can I do operations like this:
Does anyone know? The page seems to cover fixed sizes like 2x2, 3x3 and 4x4, but my matrices can have much larger dimensions.
If I understood your question right, you want to know how to multiply matrices and vectors in GLSL.
It's really easy. For example:
mat4 a;
mat4 b;
vec4 c;
vec4 d = a * b * c;
or
mat4 a;
mat4 b;
mat4 c = a * b;
but keep in mind that matrix multiplication is not commutative, so the order in which you multiply them is important.
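GLSL's built-in matrix types only go up to four rows and columns (mat2 through mat4, plus the rectangular mat2x3-style types), so for anything wider one option is to store the columns in an array of vectors and do the multiplication yourself. A minimal sketch, assuming a hypothetical 4x8 matrix and an 8-component vector (all names here are made up for illustration):
// Hypothetical 4x8 "matrix" stored as eight vec4 columns, multiplied by
// an 8-component vector stored as two vec4s: result = M * v.
uniform vec4 columns[8]; // columns[i] is the i-th column of M
uniform vec4 vLow;       // components v[0]..v[3]
uniform vec4 vHigh;      // components v[4]..v[7]

vec4 mulWide(void)
{
    vec4 result = vec4(0.0);
    for (int i = 0; i < 4; ++i)
    {
        result += columns[i]     * vLow[i];  // column i scaled by v[i]
        result += columns[i + 4] * vHigh[i]; // column i+4 scaled by v[i+4]
    }
    return result;
}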
The background:
I am writing some terrain visualiser and I am trying to decouple the rendering from the terrain generation.
At the moment, the generator returns some array of triangles and colours, and these are bound in OpenGL by the rendering code (using OpenTK).
So far I have a very simple shader which handles the rotation of the sphere.
The problem:
I would like the application to be able to display the results either as a 3D object, or as a 2D projection of the sphere (let's assume Mercator for simplicity).
I had thought this would be simple: I should just compile an alternative shader for such cases. So, I have a vertex shader which almost works:
precision highp float;
uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
in vec3 in_position;
in vec3 in_normal;
in vec3 base_colour;
out vec3 normal;
out vec3 colour2;
vec3 fromSphere(in vec3 cart)
{
vec3 spherical;
spherical.x = atan(cart.x, cart.y) / 6; // longitude-like angle, scaled down to fit clip space
float xy = sqrt(cart.x * cart.x + cart.y * cart.y);
spherical.y = atan(xy, cart.z) / 4; // polar angle from the z axis, scaled down
spherical.z = -1.0 + (spherical.x * spherical.x) * 0.1; // depth hack: keep stray edges behind the image
return spherical;
}
void main(void)
{
normal = vec3(0,0,1);
normal = (modelview_matrix * vec4(in_normal, 0)).xyz;
colour2 = base_colour;
//gl_Position = projection_matrix * modelview_matrix * vec4(fromSphere(in_position), 1);
gl_Position = vec4(fromSphere(in_position), 1);
}
However, it has a couple of obvious issues (see the images below):
Saw-tooth pattern where triangle crosses the cut meridian
Polar region is not well defined
3D case (Typical shader):
2D case (above shader)
Both of these seem to reduce to the statement "A triangle in 3-dimensional space is not always even a single polygon on the projection". (... and this is before any discussion about whether great circle segments from the sphere are expected to be lines after projection ...).
(The 1+x^2 term in z is already a hack to make it a little better: it ensures the projection is not flat, so that any stray edges (i.e. ones that straddle the cut meridian) are safely behind the image.)
The question: Is what I want to achieve possible with a VertexShader / FragmentShader approach? If not, what's the alternative? I think I can re-write the application side to pre-transform the points (and cull / add extra polygons where needed) but it will need to know where the cut line for the projection is — and I feel that this information is analogous to the modelViewMatrix in the 3D case... which means taking this logic out of the shader seems a step backwards.
Thanks!
I'm trying to apply per-pixel lighting in my 3D engine, but I'm having some trouble understanding what could be wrong with my geometry. I'm a beginner in OpenGL, so please bear with me if my question sounds stupid; I'll explain as best I can.
My vertex shader:
#version 400 core
layout(location = 0) in vec3 position;
in vec2 textureCoordinates;
in vec3 normal;
out vec2 passTextureCoordinates;
out vec3 normalVectorFromVertex;
out vec3 vectorFromVertexToLightSource;
out vec3 vectorFromVertexToCamera;
uniform mat4 transformation;
uniform mat4 projection;
uniform mat4 view;
uniform vec3 lightPosition;
void main(void) {
vec4 mainPosition = transformation * vec4(position, 1.0);
gl_Position = projection * view * mainPosition;
passTextureCoordinates = textureCoordinates;
normalVectorFromVertex = (transformation * vec4(normal, 1.0)).xyz;
vectorFromVertexToLightSource = lightPosition - mainPosition.xyz;
}
My fragment-shader:
#version 400 core
in vec2 passTextureCoordinates;
in vec3 normalVectorFromVertex;
in vec3 vectorFromVertexToLightSource;
layout(location = 0) out vec4 out_Color;
uniform sampler2D textureSampler;
uniform vec3 lightColor;
void main(void) {
vec3 versor1 = normalize(normalVectorFromVertex);
vec3 versor2 = normalize(vectorFromVertexToLightSource);
float dotProduct = dot(versor1, versor2);
float lighting = max(dotProduct, 0.0);
vec3 finalLight = lighting * lightColor;
out_Color = vec4(finalLight, 1.0) * texture(textureSampler, passTextureCoordinates);
}
The problem: Whenever I multiply my transformation matrix by the normal vector with a homogeneous coordinate of 0.0, like so: transformation * vec4(normal, 0.0), my resulting vector gets messed up in such a way that by the time the pipeline reaches the fragment shader, the dot product between the vector from my vertex to the light source and my normal is probably <= 0, indicating that the light source is at an angle >= π/2, and therefore all my pixels output rgb(0,0,0,1). But, for a reason I cannot understand geometrically, if I calculate transformation * vec4(normal, 1.0) the lighting appears to work more or less fine, except for some extremely weird behaviour, like 'reacting' to distance. With this very simple lighting technique the vertex brightness should be completely agnostic to distance, since taking distance into account would require using the vectors' lengths, and I'm normalizing both vectors before applying the dot product, so this behaviour is unexpected to me.
One thing that is clearly wrong to me is that my transformation matrix has the translation components applied before multiplying the normal vectors, which will "move and point" the normals in the direction of the translation, which is wrong. Still, I'm not sure if I should be getting these results. Any insights are appreciated.
Whenever I multiply my transformation matrix by the normal vector with a homogeneous coordinate of 0.0, like so: transformation * vec4(normal, 0.0), my resulting vector gets messed up
What if you have non-uniform scaling in that transformation matrix?
Imagine a flat square surface, all normals are pointing up. Now you scale that surface to stretch in the horizontal direction: what would happen to normals?
If you don't adjust your transformation matrix to remove the scaling part, the normals will get skewed. After all, you only care about the object's orientation when considering the normals; the scale of the object is irrelevant to where the surface is pointing.
Or think about a circle:
You need to apply the inverse transpose of the model-view matrix to avoid scaling the normals when transforming them. Another SO question discusses it, as does this video from Jaime King teaching Graphics with OpenGL.
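For example, a minimal sketch of the fix in the question's vertex shader (reusing its transformation uniform and normal attribute; in practice you would precompute this matrix on the CPU rather than call inverse() per vertex):
// Inverse transpose of the upper-left 3x3: drops the translation and
// undoes any (non-uniform) scaling, leaving only the orientation.
mat3 normalMatrix = transpose(inverse(mat3(transformation)));
normalVectorFromVertex = normalize(normalMatrix * normal);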
Additional resources on transforming normals:
LearnOpenGL: Basic Lighting
Lighthouse3d.com: The Normal Matrix
Could someone assist me or point me in the right direction to implement the basic FVFs from DirectX in GLSL code? I completely understand how to create a program, apply VBOs and all that, but I'm having great difficulty with the actual creation of the shaders. Namely:
transformed+lit (x,y,color,specular,tu,tv)
lit (x,y,z,color,specular,tu,tv)
unlit (x,y,z,nx,ny,nz,tu,tv) [material/lights]
With this, I'd be given enough to implement far more interesting shaders.
So, I'm not asking for a mechanism to deal with FVFs. I'm simply asking for the shader code, given the proper streams. I understand that the unlit and lit versions rely on passing in matrices, and I completely understand the concept. I am just having trouble finding shader examples showing these concepts.
Okay. If you are having trouble finding working shaders, here is an example (honestly, you can find one in any OpenGL book).
This shader program will use your object's world matrix and the camera's matrices to transform vertices, then map one texture to the pixels and light them with one directional light, according to the material properties and the light direction.
Vertex shader:
#version 330
// Vertex input layout
in vec3 inPosition;
in vec3 inNormal;
in vec4 inVertexCol;
in vec2 inTexcoord;
in vec3 inTangent;
in vec3 inBitangent;
// Output
struct PSIn
{
vec3 position; // world-space position, used for lighting in the fragment shader
vec3 normal;
vec4 vertexColor;
vec2 texcoord;
vec3 tangent;
vec3 bitangent;
};
out PSIn psin;
// Uniform buffers
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
void main()
{
// transform position
vec4 pos = vec4(inPosition, 1.0f);
pos = mtxWorld * pos;
psin.position = pos.xyz; // keep the world-space position for the fragment shader
pos = mtxView * pos;
pos = mtxProj * pos;
gl_Position = pos;
// rotate the normal into world space (assumes no non-uniform scaling); pass the rest through
psin.normal = mat3(mtxWorld) * inNormal;
psin.tangent = inTangent;
psin.bitangent = inBitangent;
psin.texcoord = inTexcoord;
psin.vertexColor = inVertexCol;
}
And fragment shader:
#version 330
// Input (must match the PSIn output of the vertex shader above)
struct PSIn
{
vec3 position;
vec3 normal;
vec4 vertexColor;
vec2 texcoord;
vec3 tangent;
vec3 bitangent;
};
in PSIn psin;
// Output
out vec4 fragColor;
// Uniforms
uniform sampler2D sampler0;
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
vec3 lightDirection;
};
struct Material
{
float Ka; // ambient coefficient
float Kd; // diffuse coefficient
float Ks; // specular coefficient
float A; // shininess
};
layout(std140)
uniform MaterialBuffer
{
Material material;
};
// function to calculate pixel lighting
float Lit( Material material, vec3 pos, vec3 nor, vec3 lit, vec3 eye )
{
vec3 V = normalize( eye - pos );
vec3 R = reflect( lit, nor);
float Ia = material.Ka;
float Id = material.Kd * clamp( dot(nor, -lit), 0.0f, 1.0f );
float Is = material.Ks * pow( clamp(dot(R,V), 0.0f, 1.0f), material.A );
return Ia + Id + Is;
}
void main()
{
vec3 nnormal = normalize(psin.normal);
vec3 ntangent = normalize(psin.tangent);
vec3 nbitangent = normalize(psin.bitangent);
vec4 outColor = texture(sampler0, psin.texcoord); // texture mapping
outColor *= Lit( material, psin.position, nnormal, lightDirection, cameraPosition ); // lighting
outColor.w = 1.0f;
fragColor = outColor;
}
If you don't want texturing, just don't sample the texture; set outColor to the vertex color instead.
If you don't need lighting, just drop the call to the Lit() function.
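For instance, a stripped-down main() that does neither (no texture, no lighting, just the interpolated vertex color) reduces to:
void main()
{
    fragColor = psin.vertexColor;
}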
Edit:
For 2D objects you can still use the same program, but much of the functionality will be redundant. You can strip out:
camera
light
material
all of the vertex attributes except inPosition and inTexcoord (maybe also inVertexCol, if you need vertices to have color), plus all of the code related to the unneeded attributes
inPosition can be vec2
you will need to pass an orthographic projection matrix instead of a perspective one
you can even strip out the matrices and pass a vertex buffer with positions in pixels. See my answer here about how to transform those pixel positions to screen-space positions; you can do it either in C/C++ code or in GLSL/HLSL, as in the sketch below.
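A minimal sketch of that last idea (the attribute and uniform names are made up for illustration): a vertex shader that takes positions in pixels and converts them to clip space, assuming the viewport size in pixels is passed as a uniform.
#version 330
in vec2 inPositionPixels; // position in pixels, (0,0) = top-left corner
in vec2 inTexcoord;
out vec2 texcoord;
uniform vec2 viewportSize; // e.g. vec2(1280.0, 720.0)

void main()
{
    // pixels -> [0,1] -> [-1,1]; flip Y so that y grows downwards in pixel space
    vec2 ndc = (inPositionPixels / viewportSize) * 2.0 - 1.0;
    gl_Position = vec4(ndc.x, -ndc.y, 0.0, 1.0);
    texcoord = inTexcoord;
}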
Hope it helps somehow.
Intro
You haven't specified the OpenGL/GLSL version you're targeting, so I'll assume it is at least OpenGL 3.
One of the main advantages of the programmable pipeline, compared with the fixed-function pipeline, is fully customizable vertex input, so I'm not sure it's a good idea to reintroduce the constraint of a fixed vertex format. (You will find the modern approach in the "Another way" section of this post.)
But, if you really want to emulate fixed-function...
I think you'll need a vertex shader for each vertex format you have, or to somehow generate the vertex shaders on the fly (or even all of the shader stages).
For example, for an x, y, color, tu, tv input you would have a vertex shader such as:
attribute vec2 inPosition;
attribute vec4 inCol;
attribute vec2 inTexcoord;
void main()
{
...
}
Since OpenGL 3 has no fixed-function transforms, lighting or materials, you must implement them yourself:
You must pass matrices for the transformations
For a lit shader you must pass additional variables, such as the light direction
For a material-based shader you must pass the material properties
Typically, in a shader, you do this with uniforms or uniform blocks:
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
vec3 lightDirection;
};
struct Material
{
float Ka;
float Kd;
float Ks;
float A;
};
layout(std140)
uniform MaterialBuffer
{
Material material;
};
You can probably combine all of the shaders with different formats, uniforms, etc. into one big ubershader with branching.
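A minimal sketch of what such branching might look like (names and modes are illustrative only):
#version 330
// One fragment shader whose behaviour is selected by a uniform instead of
// compiling a separate program per fixed vertex format.
in vec4 vertexColor;
in vec2 texcoord;
out vec4 fragColor;
uniform sampler2D sampler0;
uniform int shadingMode; // 0 = vertex color only, 1 = textured

void main()
{
    vec4 color = vertexColor;
    if (shadingMode == 1)
        color *= texture(sampler0, texcoord);
    fragColor = color;
}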
Another way
You can stick to the modern approach and just let the user declare the vertex format he wants (the format he uses in his shader). Just implement a concept similar to IDirect3DDevice9::CreateVertexDeclaration or ID3D11Device::CreateInputLayout: you will make use of glVertexAttribPointer() and, probably, VAOs. This way you can also abstract the vertex layout in an API-independent way.
The main ideas are:
the user passes an array of structures that describes the format in an API-independent way to your function (this struct can be similar to D3DVERTEXELEMENT9 or D3D11_INPUT_ELEMENT_DESC)
that function interprets the array's elements one by one and builds some kind of internal info that describes the format in an API-specific way (such as IDirect3DVertexDeclaration9 for D3D9, ID3D11InputLayout for D3D11, or a custom struct or VAO for OpenGL)
when it's time to set the vertex format, you just use this info
P.S. If you need ideas on how to properly implement lighting and materials in GLSL (I mean the algorithms here), you'd be better off picking up a book or some online tutorials than asking here. Or just Google "GLSL lighting".
You may find these links interesting:
Good resources for learning modern OpenGL (3.0 or later)?
OpenGL documentation
Select Books on OpenGL and 3D Graphics Coding
Happy coding!
I want to fade the screen to a specific color using GLSL.
So far this is my glsl code and it works quite well:
uniform sampler2D textureSampler;
uniform vec2 texcoordOffset;
uniform vec3 sp;
uniform vec3 goal;
varying vec4 vertColor;
varying vec4 vertTexcoord;
void main(void) {
vec3 col=texture2D(textureSampler, vertTexcoord.st).rgb;
gl_FragColor = vec4(col+((goal-col)/sp), 1.0);
//gl_FragColor = vec4(col+((goal-col)*sp), 1.0); //as suggested below also this doesn't solve the problem
}
The only problem I have is that with higher sp values the colors aren't faded completely to the new color. I think the problem is caused by the precision the shader works with.
Does anyone have an idea how to increase the accuracy?
EDIT:
Could it be that this effect is driver dependent? I'm using an ATI card with the latest drivers; maybe someone could try the code on an NVIDIA card?
Let's break it down:
float A, B;
float Mix;
float C = A + (B-A) / Mix;
Now it's fairly easy to see that Mix has to be infinite to produce pure A, so it isn't GLSL's fault at all. The normally used equation is as follows:
float C = A + (B-A) * Mix;
// Let's feed some data:
// Mix = 0 -> C = A;
// Mix = 1 -> C = A + (B - A) = A + B - A = B;
// Mix = 0.5 -> C = A + 0.5*(B - A) = A + 0.5*B - 0.5*A = 0.5*A + 0.5*B
Correct, right?
Change your code to:
gl_FragColor = vec4(col+((goal-col) * sp), 1.0);
And use the range <0, 1> for sp instead. Also, shouldn't sp actually be a float? If all of its components are equal (IOW sp.x == sp.y == sp.z), you can just change its type and it will work, as referenced here.
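Putting it together, and assuming sp has been changed to a float fade factor in the range <0, 1> as suggested, the fragment shader could also use the built-in mix(), which computes exactly col + (goal - col) * sp:
uniform sampler2D textureSampler;
uniform vec3 goal; // target color
uniform float sp;  // fade factor: 0.0 = original color, 1.0 = goal
varying vec4 vertTexcoord;

void main(void) {
    vec3 col = texture2D(textureSampler, vertTexcoord.st).rgb;
    // mix(col, goal, sp) == col + (goal - col) * sp
    gl_FragColor = vec4(mix(col, goal, sp), 1.0);
}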
This is somewhat similar to a problem I had and posted about before: I'm trying to get normals to display correctly in my GLSL app.
For the purposes of my explanation, I'm using the ninjaHead.obj model provided with RenderMonkey for testing purposes (you can grab it here). Now in the preview window in RenderMonkey, everything looks great:
and the vertex and fragment code generated respectively is:
Vertex:
uniform vec4 view_position;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
// World-space lighting
vNormal = gl_Normal;
vViewVec = view_position.xyz - gl_Vertex.xyz;
}
Fragment:
uniform vec4 color;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
gl_FragColor = v* color;
}
I based my GLSL code on this but I'm not quite getting the expected results...
My vertex shader code:
uniform mat4 P;
uniform mat4 modelRotationMatrix;
uniform mat4 modelScaleMatrix;
uniform mat4 modelTranslationMatrix;
uniform vec3 cameraPosition;
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
vec4 pos = gl_ProjectionMatrix * P * modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
gl_Position = pos;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_FrontColor = gl_Color;
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
// World-space lighting
vNormal = normal4*modelRotationMatrix;
vec4 tempCameraPos = vec4(cameraPosition.x,cameraPosition.y,cameraPosition.z,0);
//vViewVec = cameraPosition.xyz - pos.xyz;
vViewVec = tempCameraPos - pos;
}
My fragment shader code:
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
//gl_FragColor = gl_Color;
float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
gl_FragColor = v * gl_Color;
}
However my render produces this...
Does anyone know what might be causing this and/or how to make it work?
EDIT
In response to kvark's comments, here is the model rendered without any normal/lighting calculations to show all triangles being rendered.
And here is the model shaded with the normals used as colors. I believe the problem has been found! Now the question is why it is being rendered like this and how to solve it. Suggestions are welcome!
SOLUTION
Well everyone, the problem has been solved! Thanks to kvark for all his helpful insight that has definitely helped my programming practice, but I'm afraid the answer comes from me being a MASSIVE tit... I had an error in the display() function of my code that set the glNormalPointer offset to a random value. It used to be this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, getNormalsBufferObject());
But should have been this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, 0);
So I guess this is a lesson. NEVER mindlessly Ctrl+C and Ctrl+V code to save time on a Friday afternoon AND... When you're sure the part of the code you're looking at is right, the problem is probably somewhere else!
What is your P matrix? (I suppose it's a world->camera view transform).
vNormal = normal4*modelRotationMatrix; Why did you change the order of the arguments? Doing that, you are multiplying the normal by the inverse rotation, which you don't really want. Use the standard order instead (modelRotationMatrix * normal4).
vViewVec = tempCameraPos - pos. This is entirely incorrect. pos is your vertex in homogeneous clip space, while tempCameraPos is in world space (I suppose). You need the result in the same space as your normal (world space), so use the world-space vertex position (modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex) in this equation.
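A minimal sketch of that last point, using the names from the question's vertex shader (vViewVec stays a vec4 with w = 0, so it remains a direction):
// Build the view vector in world space, the same space the normal lives
// in after it is rotated; the clip-space position is unchanged.
vec4 worldPos = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
gl_Position = gl_ProjectionMatrix * P * worldPos;
vViewVec = vec4(cameraPosition - worldPos.xyz, 0.0);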
You seem to be mixing GL versions a bit? You are passing the matrices manually via uniforms, but use fixed function to pass vertex attributes. Hm. Anyway...
I sincerely don't like what you're doing to your normals. Have a look:
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
vNormal = normal4*modelRotationMatrix;
A normal only stores directional data, so why use a vec4 for it? I believe it's more elegant to just use a vec3. Furthermore, look what happens next: you multiply the normal by the 4x4 model rotation matrix... And additionally your normal's fourth coordinate is equal to 0, so it's not a correct vector in homogeneous coordinates. I'm not sure that's the main problem here, but I wouldn't be surprised if that multiplication gave you rubbish.
The standard way to transform normals is to multiply a vec3 by the 3x3 submatrix of the model-view matrix (since you're only interested in the orientation, not the translation). Strictly speaking, the correct approach is to use the inverse transpose of that 3x3 submatrix (this becomes important when you have scaling). In old OpenGL versions you had it precalculated as gl_NormalMatrix.
So instead of the above, you should use something like
// (...)
varying vec3 vNormal;
// (...)
mat3 normalMatrix = transpose(inverse(mat3(modelRotationMatrix)));
// (inverse() requires GLSL 1.40+; in older versions compute the matrix on the CPU or use gl_NormalMatrix)
// or, if you don't have non-uniform scaling, this cheaper version works too:
// mat3 normalMatrix = mat3(modelRotationMatrix);
vNormal = normalMatrix * gl_Normal;
That's certainly one thing to fix in your code - I hope it solves your problem.