OpenTK, pass array of vectors into a shader - opengl

So I have an array of OpenTK.Vector3 that I want to pass into a shader, but it seems that GL.Uniform3 has no overload for that. How should I go about doing this? I want to use an unsized array and iterate over it in the shader.
Frag Shader:
#version 330 core
uniform vec4 ambient; //ambient lighting
uniform vec3[2] lightDir; //directional light direction array
uniform vec4[2] lightColor; //light color array
smooth in vec4 fragColor;
in vec3 _normal;
out vec4 outputColor;
void main(void){
    vec4 diff = vec4(0.0f);
    // iterate over all lights and accumulate the diffuse term
    for(int i = 0; i < lightDir.length(); i++) {
        diff += max(dot(_normal, normalize(-lightDir[i])), 0.0f) * lightColor[i];
    }
    outputColor = fragColor * (diff + ambient);
}
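For reference, the accumulation that loop performs can be sketched outside GLSL. Here is a minimal Python illustration (the dot/normalize helpers stand in for the GLSL built-ins; the tuples stand in for vec3/vec4 values):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def accumulate_diffuse(normal, light_dirs, light_colors):
    """Mirror of the fragment-shader loop: sum Lambertian terms per light."""
    diff = [0.0, 0.0, 0.0, 0.0]
    for ld, lc in zip(light_dirs, light_colors):
        # the light direction points away from the light, so negate it
        intensity = max(dot(normal, normalize(tuple(-x for x in ld))), 0.0)
        diff = [d + intensity * c for d, c in zip(diff, lc)]
    return diff

# one light pointing straight down onto an upward-facing normal
print(accumulate_diffuse((0.0, 1.0, 0.0),
                         [(0.0, -1.0, 0.0)],
                         [(0.0, 0.0, 1.0, 1.0)]))
```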
F# Code:
let lamps = [
    { direction = Vector3(0.0f, -1.0f, 0.0f)
      color = Vector4(0.0f, 0.0f, 1.0f, 1.0f) }
    { direction = Vector3(2.0f, 1.0f, 1.0f)
      color = Vector4(1.0f, 0.0f, 0.0f, 1.0f) }
]
let ambientLocation = GL.GetUniformLocation(program, "ambient")
GL.Uniform4(ambientLocation, Vector4(0.2f, 0.2f, 0.2f, 1.0f))
let lightDirLocation = GL.GetUniformLocation(program, "lightDir")
GL.Uniform3(lightDirLocation, (lamps |> List.map (fun l -> l.direction) |> List.toArray))
let lightColorLocation = GL.GetUniformLocation(program, "lightColor")
GL.Uniform4(lightColorLocation, (lamps |> List.map (fun l -> l.color) |> List.toArray))
The editor informs me that Vector3[] is not compatible with Vector3 or Vector3 ref.
How should I go about this?

You should use the overload of GL.Uniform3 that receives an array of float32 (aka single).
That means you must convert the array of Vector3 to an array of float32. The code could be:
GL.Uniform3(
    lightDirLocation,
    2,
    lamps |> Seq.map (fun l -> [ l.direction.X; l.direction.Y; l.direction.Z ]) |> Seq.concat |> Array.ofSeq
)
Notice the use of Seq.xxx to avoid intermediate lists/arrays (which consume memory).
Or you can use more idiomatic code, as below:
GL.Uniform3(
    lightDirLocation,
    2,
    [| for l in lamps do
           yield l.direction.X
           yield l.direction.Y
           yield l.direction.Z |]
)
In general, the fewer calls to GL.Uniform (or to any function that transfers data from the CPU to the GPU), the better the performance. Thus, whenever possible, collect all the data into one array and issue a single call to GL.Uniform. That is more efficient than calling GL.Uniform multiple times with multiple chunks of data.
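The flattening step itself is language-neutral; here is a hedged Python sketch purely for illustration (the tuples stand in for OpenTK Vector3 values):

```python
def flatten_vec3s(vectors):
    """Flatten [(x, y, z), ...] into [x0, y0, z0, x1, y1, z1, ...],
    the contiguous layout that glUniform3fv-style calls expect."""
    return [component for vec in vectors for component in vec]

directions = [(0.0, -1.0, 0.0), (2.0, 1.0, 1.0)]
print(flatten_vec3s(directions))  # [0.0, -1.0, 0.0, 2.0, 1.0, 1.0]
```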
By the way, if most of the time you are using arrays transformed from lamps, then you should declare lamps as an array too: lamps = [| ... |]. That way your original code doesn't need the List.toArray call at the end.

A simple workaround is to get the locations of the uniform variables lightDir[0], lightDir[1], lightColor[0] and lightColor[1] separately and to set them separately, too:
for i = 0 to 1 do
    let lightDirLoc = GL.GetUniformLocation(program, "lightDir[" + i.ToString() + "]")
    GL.Uniform3(lightDirLoc, lamps[i].direction)
    let lightColorLoc = GL.GetUniformLocation(program, "lightColor[" + i.ToString() + "]")
    GL.Uniform4(lightColorLoc, lamps[i].color)

Related

Directional Light Shadow Mapping Issues

So I've been trying to re-implement shadow mapping in my engine using directional lights, but I have to throw shade on my progress so far (see what I did there?).
I had it working in a previous commit a while back but refactored my engine and I'm trying to redo some of the shadow mapping. Wouldn't say I'm the best in terms of drawing shadows so thought I'd try and get some help.
Basically my issue seems to stem from the calculation of the light-space matrix (it seems a lot of people have the same issue). Initially I had a hardcoded projection matrix and a simple view matrix for the light, like this
void ZLight::UpdateLightspaceMatrix()
{
    // …
    if (type == ZLightType::Directional) {
        auto lightDir = glm::normalize(glm::eulerAngles(Orientation()));
        glm::mat4 lightV = glm::lookAt(lightDir, glm::vec3(0.f), WORLD_UP);
        glm::mat4 lightP = glm::ortho(-50.f, 50.f, -50.f, 50.f, -100.f, 100.f);
        lightspaceMatrix_ = lightP * lightV;
    }
    // …
}
This then gets passed unmodified as a shader uniform, which I multiply the vertex world-space positions by. A few months ago this was working, but with the recent refactor I did on the engine it no longer shows anything. The output to the shadow map looks like this
And my scene isn't showing any shadows, at least not where it matters
Aside from this, after hours of scouring posts and articles about how to implement a dynamic frustum for the light that encompasses the scene's contents at any given time, I also implemented a simple solution based on transforming the camera's frustum into light space: I take an NDC cube, transform it with the inverse camera VP matrix, compute a bounding box from the result, and pass that to glm::ortho to build the light's projection matrix
void ZLight::UpdateLightspaceMatrix()
{
    static std::vector<glm::vec4> ndcCube = {
        glm::vec4{ -1.0f, -1.0f, -1.0f, 1.0f },
        glm::vec4{  1.0f, -1.0f, -1.0f, 1.0f },
        glm::vec4{ -1.0f,  1.0f, -1.0f, 1.0f },
        glm::vec4{  1.0f,  1.0f, -1.0f, 1.0f },
        glm::vec4{ -1.0f, -1.0f,  1.0f, 1.0f },
        glm::vec4{  1.0f, -1.0f,  1.0f, 1.0f },
        glm::vec4{ -1.0f,  1.0f,  1.0f, 1.0f },
        glm::vec4{  1.0f,  1.0f,  1.0f, 1.0f }
    };
    if (type == ZLightType::Directional) {
        auto activeCamera = Scene()->ActiveCamera();
        auto lightDir = glm::normalize(glm::eulerAngles(Orientation()));
        glm::mat4 lightV = glm::lookAt(lightDir, glm::vec3(0.f), WORLD_UP);
        // the inverse VP matrix is the same for every corner, so compute it once
        glm::mat4 invVPMatrix = glm::inverse(activeCamera->ProjectionMatrix() * activeCamera->ViewMatrix());
        lightspaceRegion_ = ZAABBox();
        for (const auto& corner : ndcCube) {
            auto transformedCorner = lightV * invVPMatrix * corner;
            transformedCorner /= transformedCorner.w;
            lightspaceRegion_.minimum.x = glm::min(lightspaceRegion_.minimum.x, transformedCorner.x);
            lightspaceRegion_.minimum.y = glm::min(lightspaceRegion_.minimum.y, transformedCorner.y);
            lightspaceRegion_.minimum.z = glm::min(lightspaceRegion_.minimum.z, transformedCorner.z);
            lightspaceRegion_.maximum.x = glm::max(lightspaceRegion_.maximum.x, transformedCorner.x);
            lightspaceRegion_.maximum.y = glm::max(lightspaceRegion_.maximum.y, transformedCorner.y);
            lightspaceRegion_.maximum.z = glm::max(lightspaceRegion_.maximum.z, transformedCorner.z);
        }
        glm::mat4 lightP = glm::ortho(lightspaceRegion_.minimum.x, lightspaceRegion_.maximum.x,
                                      lightspaceRegion_.minimum.y, lightspaceRegion_.maximum.y,
                                      -lightspaceRegion_.maximum.z, -lightspaceRegion_.minimum.z);
        lightspaceMatrix_ = lightP * lightV;
    }
}
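For reference, the corner-bounding step reduces to a per-axis min/max plus the near/far sign flip. Here is a hedged Python sketch (assuming the eight corners are already transformed into light space; matrix inversion is left to whatever math library is in use):

```python
def light_space_bounds(corners):
    """Axis-aligned bounds of frustum corners already in light space.
    Returns (mins, maxs), each a 3-element list."""
    mins = [min(c[i] for c in corners) for i in range(3)]
    maxs = [max(c[i] for c in corners) for i in range(3)]
    return mins, maxs

def ortho_params(mins, maxs):
    """Map the bounds to glm::ortho-style arguments. Light space looks
    down -Z, so near/far come from the negated z extent."""
    left, bottom = mins[0], mins[1]
    right, top = maxs[0], maxs[1]
    near, far = -maxs[2], -mins[2]
    return left, right, bottom, top, near, far

# degenerate two-corner example just to show the mapping
corners = [(-3.0, -2.0, -10.0), (4.0, 5.0, -1.0)]
print(ortho_params(*light_space_bounds(corners)))  # (-3.0, 4.0, -2.0, 5.0, 1.0, 10.0)
```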
What results is the same output in my scene (no shadows anywhere) and the following shadow map
I've checked the light space matrix calculations over and over, and tried tweaking values dozens of times, including all manner of lightV matrices using the glm::lookAt function, but I never get the desired output.
For more reference, here's my shadow vertex shader
#version 450 core
#include "Shaders/common.glsl" //! #include "../common.glsl"
layout (location = 0) in vec3 position;
layout (location = 5) in ivec4 boneIDs;
layout (location = 6) in vec4 boneWeights;
layout (location = 7) in mat4 instanceM;
uniform mat4 P_lightSpace;
uniform mat4 M;
uniform mat4 Bones[MAX_BONES];
uniform bool rigged = false;
uniform bool instanced = false;
void main()
{
    vec4 pos = vec4(position, 1.0);
    if (rigged) {
        mat4 boneTransform = Bones[boneIDs[0]] * boneWeights[0];
        boneTransform += Bones[boneIDs[1]] * boneWeights[1];
        boneTransform += Bones[boneIDs[2]] * boneWeights[2];
        boneTransform += Bones[boneIDs[3]] * boneWeights[3];
        pos = boneTransform * pos;
    }
    gl_Position = P_lightSpace * (instanced ? instanceM : M) * pos;
}
my soft shadow implementation
float PCFShadow(VertexOutput vout, sampler2D shadowMap) {
    vec3 projCoords = vout.FragPosLightSpace.xyz / vout.FragPosLightSpace.w;
    if (projCoords.z > 1.0)
        return 0.0;
    projCoords = projCoords * 0.5 + 0.5;
    // PCF
    float shadow = 0.0;
    float bias = max(0.05 * (1.0 - dot(vout.FragNormal, vout.FragPosLightSpace.xyz - vout.FragPos.xzy)), 0.005);
    for (int i = 0; i < 4; ++i) {
        float z = texture(shadowMap, projCoords.xy + poissonDisk[i]).r;
        shadow += z < (projCoords.z - bias) ? 1.0 : 0.0;
    }
    return shadow / 4;
}
...
...
float shadow = PCFShadow(vout, shadowSampler0);
vec3 color = (ambient + (1.0 - shadow) * (diffuse + specular)) + materials[materialIndex].emission;
FragColor = vec4(color, albd.a);
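For reference, the PCF logic above boils down to averaging binary depth comparisons over neighboring samples. A minimal Python sketch (the sample_depths list stands in for the texture() lookups at the Poisson-disk offsets):

```python
def pcf_shadow(frag_depth, sample_depths, bias=0.005):
    """Average binary shadow tests over nearby shadow-map samples.
    Returns a value in [0, 1]: 0 = fully lit, 1 = fully shadowed."""
    hits = sum(1.0 for z in sample_depths if z < frag_depth - bias)
    return hits / len(sample_depths)

# two of four neighboring texels are closer to the light than the fragment
print(pcf_shadow(0.6, [0.5, 0.5, 0.7, 0.7]))  # 0.5
```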
and my camera view and projection matrix getters
glm::mat4 ZCamera::ProjectionMatrix()
{
    glm::mat4 projectionMatrix(1.f);
    auto scene = Scene();
    if (!scene) return projectionMatrix;
    if (cameraType_ == ZCameraType::Orthographic)
    {
        float zoomInverse_ = 1.f / (2.f * zoom_);
        glm::vec2 resolution = scene->Domain()->Resolution();
        float left = -((float)resolution.x * zoomInverse_);
        float right = -left;
        float bottom = -((float)resolution.y * zoomInverse_);
        float top = -bottom;
        projectionMatrix = glm::ortho(left, right, bottom, top, -farClippingPlane_, farClippingPlane_);
    }
    else
    {
        projectionMatrix = glm::perspective(glm::radians(zoom_),
                                            (float)scene->Domain()->Aspect(),
                                            nearClippingPlane_, farClippingPlane_);
    }
    return projectionMatrix;
}
glm::mat4 ZCamera::ViewMatrix()
{
    return glm::lookAt(Position(), Position() + Front(), Up());
}
I've been trying all kinds of small changes but I still don't get correct shadows, and I don't know what I'm doing wrong here. The closest I've gotten is by scaling the lightspaceRegion_ bounds by a factor of 10 in the light-space matrix calculations (only in X and Y), but the shadows are still nowhere near correct.
The camera near and far clipping planes are set to reasonable values (0.01 and 100.0, respectively), the camera zoom is 45.0 degrees, and scene->Domain()->Aspect() just returns the width/height aspect ratio of the framebuffer's resolution. My shadow map resolution is set to 2048x2048.
Any help here would be much appreciated. Let me know if I left out any important code or info.

Lighting in C++ using a GLSL shader

I am currently using this GLSL file to handle lighting for a 3D object that I am trying to display. I am not sure what values I need to put in for light_position_world, Ls, Ld, La, Ks, Kd, Ka, Ia and fragment_colour. The scene I am trying to illuminate is centered roughly at (427, 385, 89). I don't need it to be perfect, but I need some values that will let me see my design on screen so that I can manipulate them and learn how this all works. Any additional tips or explanation would be much appreciated. Thanks!
#version 410
in vec3 position_eye, normal_eye;
uniform mat4 view_mat;
// fixed point light properties
vec3 light_position_world = vec3 (427.029, 385.888, 0);
vec3 Ls = vec3 (1.0f, 0.0f, 0.0f);
vec3 Ld = vec3 (1.0f, 0.0f, 0.0f);
vec3 La = vec3 (1.0f, 0.2f, 0.0f);
// surface reflectance
vec3 Ks = vec3 (1.0f, 1.0f, 1.0f);
vec3 Kd = vec3 (1.0f, 0.8f, 0.72f);
vec3 Ka = vec3 (1.0f, 1.0f, 1.0f);
float specular_exponent = 10.0; // specular 'power'
out vec4 fragment_colour; // final colour of surface
void main () {
    // ambient intensity
    vec3 Ia = vec3 (0, 0, 0);
    // diffuse intensity
    // raise light position to eye space
    vec3 light_position_eye = light_position_world; //vec3 (view_mat * vec4 (light_position_world, 1.0));
    vec3 distance_to_light_eye = light_position_eye - position_eye;
    vec3 direction_to_light_eye = normalize (distance_to_light_eye);
    float dot_prod = dot (direction_to_light_eye, normal_eye);
    dot_prod = max (dot_prod, 0.0);
    vec3 Id = Ld * Kd * dot_prod; // final diffuse intensity
    // specular intensity
    vec3 surface_to_viewer_eye = normalize (-position_eye);
    // blinn
    vec3 half_way_eye = normalize (surface_to_viewer_eye + direction_to_light_eye);
    float dot_prod_specular = max (dot (half_way_eye, normal_eye), 0.0);
    float specular_factor = pow (dot_prod_specular, specular_exponent);
    vec3 Is = Ls * Ks * specular_factor; // final specular intensity
    // final colour
    fragment_colour = vec4 (255, 25, 25, 0);
}
There are a few problems with your code.
1) Assuming light_position_world is the position of the light in world space, the light is below your scene, so the scene won't be illuminated from above.
2) Assuming *_eye means a coordinate in eye space and *_world means a coordinate in world space, you may not interchange between those spaces by simply assigning vectors. You have to use the view matrix and its inverse to go from world space to eye space and from eye space to world space, respectively.
3) The output color of the shader, fragment_colour, is always set to a dark reddish color, so the compiler will just leave out all the lighting calculations. You have to use something like fragment_colour = vec4(Ia + Id * material + Is * material, 1.0), where material is the color of your material - e.g. gray for metal.
It seems like you don't yet understand the underlying basics. I suggest you read a few articles or tutorials about lighting and transformations/maths in OpenGL.
Once you have consumed a fair bit of literature, experiment with your code. Try out what the different calculations do and how they influence the end product. You won't get 100% physically accurate lighting anyway, so there's nothing to go wrong.

A confusion about space transformation in OpenGL

In the book 3D Graphics for Game Programming by JungHyun Han, on pages 38-39, it is given that
the basis transformation matrix from e_1, e_2, e_3 to u, v, n is
However, this contradicts what I know from linear algebra. I mean, shouldn't the basis-transformation matrix be the transpose of that matrix?
Note that the author gives his own derivation, but I couldn't find the missing link between what I know and what the author does.
The code:
Vertex Shader:
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
uniform vec3 cameraPosition;
uniform vec3 AT;
uniform vec3 UP;
uniform mat4 worldTrans;
vec3 ep_1 = ( cameraPosition - AT )/ length(cameraPosition - AT);
vec3 ep_2 = ( cross(UP, ep_1) )/length( cross(UP, ep_1 ));
vec3 ep_3 = cross(ep_1, ep_2);
vec4 t_ep_1 = vec4(ep_1, -1.0f);
vec4 t_ep_2 = vec4(ep_2, cameraPosition.y);
vec4 t_ep_3 = vec4(ep_3, cameraPosition.z);
mat4 viewTransform = mat4(t_ep_1, t_ep_2, t_ep_3, vec4(0.0f, 0.0f, 0.0f, 1.0f));
smooth out vec4 fragColor;
void main()
{
    gl_Position = transpose(viewTransform) * position;
    fragColor = color;
}
Inputs:
GLuint transMat = glGetUniformLocation(m_Program.m_shaderProgram, "worldTrans");
GLfloat dArray[16] = {0.0};
dArray[0] = 1;
dArray[3] = 0.5;
dArray[5] = 1;
dArray[7] = 0.5;
dArray[10] = 1;
dArray[11] = 0;
dArray[15] = 1;
glUniformMatrix4fv(transMat, 1, GL_TRUE, &dArray[0]);
GLuint cameraPosId = glGetUniformLocation(m_Program.m_shaderProgram, "cameraPosition");
GLuint ATId = glGetUniformLocation(m_Program.m_shaderProgram, "AT");
GLuint UPId = glGetUniformLocation(m_Program.m_shaderProgram, "UP");
const float cameraPosition[4] = {2.0f, 0.0f, 0.0f};
const float AT[4] = {1.0f, 0.0f, 0.0f};
const float UP[4] = {0.0f, 0.0f, 1.0f};
glUniform3fv(cameraPosId, 1, cameraPosition);
glUniform3fv(ATId, 1, AT);
glUniform3fv(UPId, 1, UP);
While it's true that a rotation, scaling or deformation can be expressed by a 4x4 matrix in the form
what you are reading about is the so-called "view transformation".
To achieve this matrix we need two transformations: first translate to the camera position, and then rotate the camera.
The data to do these transformations are:
Camera position C (Cx,Cy,Cz)
Target position T (Tx,Ty,Tz)
Camera-up normalized UP (Ux, Uy, Uz)
The translation can be expressed by
For the rotation we define:
F = T - C, and after normalizing it we get f = F / ||T - C||, also expressed as f = normalize(F)
s = normalize(cross(f, UP))
u = normalize(cross(s, f))
s, u, -f are the new axes expressed in the old coordinate system.
Thus we can build the rotation matrix for this transformation as
Combining the two transformations in an only matrix we get:
Notice that the axis system is the one used by OpenGL, where -f = cross(s, u).
Now, comparing with your GLSL code, I see:
1) Your f (ep_1) vector goes in the opposite direction.
2) The s (ep_2) vector is calculated as cross(UP, f) instead of cross(f, UP). This is right, because of 1).
3) Same for u (ep_3).
4) The building of the V cell (0,0) is wrong. It tries to set the proper direction by using that -1.0f.
5) For the other cells (the t_ep_J components) the camera position is used, but you forgot to take the dot product of it with s, u and f.
6) The GLSL initializer mat4(c1, c2, c3, c4) requires column vectors as parameters. You passed row vectors, but the later use of transpose in main corrects that.
On a side note, you are not going to calculate the matrix for each vertex, time and time and time... right? Better to calculate it on the CPU side and pass it (in column order) as a uniform.
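As a hedged sketch of that CPU-side computation, here is the view matrix built the glm::lookAt way in plain Python (row-major for readability; the helpers are included so the sketch is self-contained):

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return tuple(x / n for x in v)

def look_at(C, T, UP):
    """Row-major view matrix: rotation rows s, u, -f plus the
    translation terms -dot(axis, C), mirroring glm::lookAt."""
    f = normalize(sub(T, C))
    s = normalize(cross(f, UP))
    u = cross(s, f)
    return [
        [ s[0],  s[1],  s[2], -dot(s, C)],
        [ u[0],  u[1],  u[2], -dot(u, C)],
        [-f[0], -f[1], -f[2],  dot(f, C)],
        [ 0.0,   0.0,   0.0,   1.0],
    ]

# camera at (2, 0, 0) looking at the origin, +Z up (as in the question's inputs)
print(look_at((2.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```

Note how the last column is -dot(s, C), -dot(u, C), dot(f, C) rather than the raw camera coordinates, which is exactly the missing piece called out in point 5.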
Apparently, a change of basis in a vector space changes the vectors in that vector space, and this is not what we want here.
Therefore, the mathematics I was applying is not valid here.
To understand more about why we use the matrix given in the question, please see this question.

Shader code in OpenGL is not coming out right. What am I missing?

Hello guys, I'm having a lot of trouble with my shaders, both fragment and vertex. I'm not sure what I'm really missing here; any help would be great. So far with my code I get this:
https://www.dropbox.com/s/7qgl6h2d3p3klu0/Screenshot%202015-05-02%2015.02.50.png?dl=0
But it's supposed to look like this; what am I missing?
https://www.dropbox.com/s/uy6093tbdtmcdux/Screenshot1.jpg?dl=0
Here is my fragment shader code:
#version 150
in vec3 fN;
in vec3 fL;
in vec3 fE; // NEW! Coming in from the vertex shader
out vec4 fColor;
void main () {
    vec3 N = normalize(fN);
    vec3 L = normalize(fL);
    vec3 E = normalize(-fE); // NEW! Reverse E
    vec3 H = normalize(L + E); // NEW! Create the half vector
    // Diffuse component
    float diffuse_intensity = max(dot(N, L), 100);
    vec4 diffuse_final = diffuse_intensity*vec4(0.0, 0.0, 2.8, 2.0);
    // NEW! Specular component
    float spec_intensity = pow(max(dot(H, L), -1.5), 60);
    vec4 spec_final = spec_intensity+vec4(0.0, 0.0, 2.8, 2.0);
    fColor = diffuse_final + spec_final;
}
And here is my vertex code:
#version 150
// Combined our old stuff with the shader from Angel book
in vec4 vPosition;
in vec4 vNormal;
uniform mat4 mM; // The matrix for the pose of the model
uniform mat4 mV; // The matrix for the pose of the camera
uniform mat4 mP; // The perspective matrix
uniform mat4 mR; // The rotation matrix
uniform vec4 lightPosition; //
out vec3 fN;
out vec3 fL;
out vec3 fE;
void main () {
    fN = (mR*vNormal).xyz; // Rotate the normal! Only take the first 3 parts, since fN is a vec3
    fE = (mV*mM*vPosition).xyz;
    fL = (lightPosition).xyz; // In world space
    gl_Position = mP*mV*mM*vPosition;
}
I spot some issues with your code:
float diffuse_intensity = max(dot(N, L), 100);
This one does not make sense at all. As N and L are normalized, the dot product will be in [-1, 1], so that max will always yield 100. You want max(..., 0.0) there to just clamp negative values to 0.
This one
vec4 diffuse_final = diffuse_intensity*vec4(0.0, 0.0, 2.8, 2.0);
is not strictly wrong, but values outside [0, 1] for the light color or material coefficient are at least unusual. It means the result will easily saturate beyond the representable range.
float spec_intensity = pow(max(dot(H, L), -1.5), 60);
Here, you repeat the max mistake again, just in the other direction. You again want max(..., 0.0).
Also, you are mixing your spaces. L is just the normalized world-space light position, which is a completely meaningless value in itself, and E is the eye-space viewing direction. You must use one space consistently. And L should be the direction vector from the vertex to the light source, not just the direction from the world-space origin.
vec4 spec_final = spec_intensity+vec4(0.0, 0.0, 2.8, 2.0);
Here, you are using those odd values again. However, this time you use + instead of *, which is certainly not what you intended.

Setting highlight to an image in OpenGL shader language

I'm trying to set a highlight mask on the image currently covered by the mouse. My problem is that instead of applying the mask to all corners of the image, it applies it only to the top-left corner.
Here are my shaders:
string vertexShaderSource = @"
#version 140
uniform mat4 modelview_matrix;
uniform mat4 projection_matrix;
// incoming
in vec3 vertex_position;
in vec2 i_texCoord;
in vec4 i_highlightColor;
//outgoing
out vec2 o_texCoord;
out vec4 o_highlightColor;
void main(void)
{
    gl_Position = projection_matrix * modelview_matrix * vec4( vertex_position, 1 );
    o_texCoord = i_texCoord;
    o_highlightColor = i_highlightColor;
}";
string fragmentShaderSource = @"
#version 140
precision highp float;
in vec2 o_texCoord;
in vec4 o_highlightColor;
out vec4 out_frag_color;
uniform sampler2D s_texture;
void main(void)
{
    out_frag_color = texture( s_texture, o_texCoord ) + o_highlightColor;
    if(out_frag_color.a == 0.0)
        discard;
}";
This is how I transfer the highlight color to the graphics card:
float[] highlightColor = new float[myList.Count * 4];
int count = 0;
float[] noHighlight = new float[4] { 0.0f, 0.0f, 0.0f, 0.0f };
float[] yesHighlight = new float[4] { 1.0f, 0.0f, 0.0f, 0.3f };
foreach (GameObject go in myList)
{
    ...
    for (int i = 0; i < 4; i++)
    {
        if (go.currentlyHovered)
            highlightColor[count * 4 + i] = yesHighlight[i];
        else
            highlightColor[count * 4 + i] = noHighlight[i];
    }
    ...
    count++;
}
...
GL.BindBuffer(BufferTarget.ArrayBuffer, highlightColorLocation);
GL.BufferData<float>(BufferTarget.ArrayBuffer,
new IntPtr(highlightColor.Length * sizeof(float)),
highlightColor, BufferUsageHint.StaticDraw);
GL.EnableVertexAttribArray(2);
GL.BindAttribLocation(shaderProgramHandle, 2, "i_highlightColor");
GL.VertexAttribPointer(2, 4, VertexAttribPointerType.Float, false, OpenTK.Vector4.SizeInBytes, 0);
...
It seems as if I'm sending the highlight color to only one vertex, but why? I thought the "in" modifier in GLSL causes the data to be sent to every vertex... And interestingly, when I replace "+ o_highlightColor" in my fragment shader with "+ vec4(1.0, 0.0, 0.0, 0.3)", the highlight covers the whole image!
It seems I overlooked the fact that in order to apply the highlight to the whole quad I need to send the highlight data to all 4 vertices, not only one as above. In total that is 16 floats per object in each render frame in order to apply a consistent highlight over the whole area of the object.
float[] highlightColor = new float[myList.Count * 16];
int count = 0;
float[] noHighlight = new float[4] { 0.0f, 0.0f, 0.0f, 0.0f };
float[] yesHighlight = new float[4] { 1.0f, 0.0f, 0.0f, 0.3f };
foreach (GameObject go in myList)
{
    ...
    for (int i = 0; i < 16; i++)
    {
        if (go.currentlyHovered)
            highlightColor[count * 16 + i] = yesHighlight[i % 4];
        else
            highlightColor[count * 16 + i] = noHighlight[i % 4];
    }
    ...
    count++;
}