For some reason, the quad I'm rendering doesn't show up; all I get is a black screen. I've checked the code multiple times and couldn't find the problem, so maybe someone can see what I don't.
The goal is a quad that follows the camera. For now I just want to draw the quad in a single color, but all I see is a black screen. I am using QOpenGLWindow and QOpenGLFunctions.
void CSLFWindow::renderQuad()
{
float x0 = -1.0f, y0 = -1.0f;
float x1 = 1.0f, y1 = 1.0f;
const QVector3D vertices[4] = {
QVector3D( x0, y0, 0.0f),
QVector3D( x0, y1, 0.0f),
QVector3D( x1, y1, 0.0f),
QVector3D( x1, y0, 0.0f)
};
const QVector3D normals[4] = {
QVector3D(0.0f, 0.0f,1.0f),
QVector3D(0.0f, 0.0f,1.0f),
QVector3D(0.0f, 0.0f,1.0f),
QVector3D(0.0f, 0.0f,1.0f)
};
const QVector2D texcoords[4] = {
QVector2D(0.0f, 1.0f),
QVector2D(0.0f, 0.0f),
QVector2D(1.0f, 0.0f),
QVector2D(1.0f, 1.0f)
};
const unsigned int indices[4] = { 3, 2, 1, 0 };
m_shaderProgram.enableAttributeArray("vVertices");
m_shaderProgram.enableAttributeArray("vTexCoords");
m_shaderProgram.enableAttributeArray("vNormals");
m_shaderProgram.setAttributeArray("vVertices", vertices);
m_shaderProgram.setAttributeArray("vTexCoords", texcoords);
m_shaderProgram.setAttributeArray("vNormals", normals);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_INT, indices);
m_shaderProgram.disableAttributeArray("vVertices");
m_shaderProgram.disableAttributeArray("vTexCoords");
m_shaderProgram.disableAttributeArray("vNormals");
}
and the rendering:
void CSLFWindow::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
m_shaderProgram.bind();
m_model.setToIdentity();
m_view = m_camera.toMatrix();
QMatrix4x4 modelMatrix = m_model;
QMatrix4x4 modelViewMatrix = m_view * modelMatrix;
QMatrix4x4 mvp = m_projection * modelViewMatrix;
m_shaderProgram.setUniformValue("MV", modelViewMatrix);
m_shaderProgram.setUniformValue("MVP", mvp);
m_shaderProgram.setUniformValue("P", m_projection);
renderQuad();
m_shaderProgram.release();
}
I'm setting the projection matrix as:
m_view.setToIdentity();
float aspect = h / w;
m_projection.setToIdentity();
m_projection.perspective(
m_fov,
aspect,
0.1f,
1000.0f);
here are my camera parameters:
m_cameraPos = QVector3D(0.0f, 0.0f, 3.0f);
m_cameraFront = QVector3D(0.0f, 0.0f, -1.0f);
m_cameraUp = QVector3D(0.0f, 1.0f, 0.0f);
QMatrix4x4 toMatrix()
{
QMatrix4x4 vMatrix;
vMatrix.setToIdentity();
vMatrix.lookAt(m_cameraPos, QVector3D(0.0f, 0.0f, 0.0f),
m_cameraUp);
return vMatrix;
}
and here is my vertex shader:
#version 330 core
layout (location = 0)in vec3 vVertices;
layout (location = 1)in vec2 vTexCoords;
layout (location = 2)in vec3 vNormals;
uniform mat4 MV;
uniform mat4 MVP;
uniform mat4 P;
out vec2 FragTexCoord;
out vec3 FragNormals;
void main()
{
FragTexCoord = vTexCoords;
FragNormals = vNormals;
gl_Position = MVP * vec4(vVertices,1);
}
and my fragment shader:
#version 330 core
out vec4 fragmentColor;
in vec2 FragTexCoord;
in vec3 FragNormals;
void main()
{
fragmentColor = vec4(1.0,1.0,1.0,1.0);
}
I found the problem: it was the surface format. When I remove format.setProfile(QSurfaceFormat::CoreProfile); I see the quad, but I don't understand why that happens.
Change GL_QUADS to GL_TRIANGLE_FAN and it will work:
glDrawElements(GL_TRIANGLE_FAN, 4, GL_UNSIGNED_INT, indices);
GL_QUADS is deprecated and has been removed in the core profile.
See also Legacy OpenGL - Removed functionality, OpenGL Context and Forward compatibility.
Hey, I can't add a comment, but could you try changing the vertex shader like this:
void main()
{
FragTexCoord = vTexCoords;
FragNormals = vNormals;
gl_Position = vec4(vVertices,1);
}
and let me know if you see anything. If you don't, try changing the order of the indices.
Related
I am learning OpenGL from http://learnopengl.com and I have a problem with transformations, based on the Coordinate Systems chapter...
I want to render something like this: Movie, but I get something like this: Movie2. The cube flies off and comes back onto the screen about every 5 seconds. Sorry for the many links, but I think it's easier to show this by video.
It's my render loop:
const auto projection = glm::perspectiveFov(glm::radians(45.0f), 800.0f, 600.0f, 0.1f, 100.0f);
const auto view = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -3.0f));
while (!glfwWindowShouldClose(window))
{
processInput(window);
glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
flatColorShader->bind();
flatColorShader->setMat4("u_Projection", projection);
flatColorShader->setMat4("u_View", view);
auto model = glm::mat4(1.0f);
model = glm::rotate(model, static_cast<float>(glfwGetTime()) * glm::radians(50.0f), glm::vec3(0.5f, 1.0f, 0.0f));
flatColorShader->setMat4("u_Model", model);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArray(vao);
glfwSwapBuffers(window);
glfwPollEvents();
}
Vertex shader:
#version 460 core
layout (location = 0) in vec3 a_Pos;
layout (location = 1) in vec2 a_TexCoord;
out vec2 v_TexCoord;
uniform mat4 u_Projection;
uniform mat4 u_Model;
uniform mat4 u_View;
void main()
{
v_TexCoord = vec2(a_TexCoord.x, 1.0f - a_TexCoord.y);
gl_Position = u_Projection * u_Model * u_View * vec4(a_Pos, 1.0);
}
And Fragment shader:
#version 460 core
in vec2 v_TexCoord;
out vec4 color;
uniform sampler2D u_Texture;
void main()
{
color = texture(u_Texture, v_TexCoord);
}
I suppose it is a problem with the model matrix, but I don't know what exactly. Can somebody help me with this problem?
The order of the vertex transformations in the vertex shader is incorrect:
gl_Position = u_Projection * u_Model * u_View * vec4(a_Pos, 1.0);
It should be:
gl_Position = u_Projection * u_View * u_Model * vec4(a_Pos, 1.0);
The order matters, because matrix multiplication is not commutative.
So I have an AtlasTexture that contains all the tiles I need to draw a tile map.
Right now I pass the AtlasTexture through a uniform, and the idea is to change the texture coordinate to select just the portion I need from the atlas.
The issue is that I can only specify on the fragment shader to cut the texture from the zero origin, is it possible to specify an offsetX to tell to the shader where I want to start drawing?
float vertices[] = {
// aPosition // aTextureCoordinate
0.0f, 0.0f, 0.0f, 0.0f,
100.0f, 0.0f, 1.0f, 0.0f,
0.0f, 100.0f, 0.0f, 1.0f,
100.0f, 100.0f, 1.0f, 1.0f,
};
uint32_t indices[] = {0, 1, 2, 2, 3, 1};
Vertex shader
#version 330 core
layout(location = 0) in vec2 aPosition;
layout(location = 1) in vec2 aTextureCoordinate;
out vec2 textureCoordinate;
void main() {
gl_Position = vec4( aPosition.x, aPosition.y, 1.0f, 1.0f);
textureCoordinate = vec2(
aTextureCoordinate.x / 3.0f, // This selects the first tile in the uAtlasTexture
aTextureCoordinate.y
);
}
Fragment shader
#version 330 core
in vec2 textureCoordinate;
uniform sampler2D uAtlasTexture; // Has 3 tiles
out vec4 color;
void main() {
color = texture(uAtlasTexture, textureCoordinate);
}
Use a uniform variable for the offset. `vec2(1.0/3.0 + aTextureCoordinate.x / 3.0f, aTextureCoordinate.y)` selects the 2nd tile. Use a uniform instead of the constant 1.0/3.0:
#version 330 core
layout(location = 0) in vec2 aPosition;
layout(location = 1) in vec2 aTextureCoordinate;
out vec2 textureCoordinate;
uniform float textureOffsetX;
void main() {
gl_Position = vec4( aPosition.x, aPosition.y, 1.0f, 1.0f);
textureCoordinate = vec2(
textureOffsetX + aTextureCoordinate.x / 3.0f,
aTextureCoordinate.y
);
}
I am trying to implement god rays, but I do not understand where it went wrong. The source of the god rays is the center of the cube.
Vertex shader:
#version 330 core
layout (location = 0) in vec2 aPos;
layout (location = 1) in vec2 aTexCoords;
out vec2 TexCoords;
void main()
{
gl_Position = vec4(aPos.x, aPos.y, 0.0, 1.0);
TexCoords = aTexCoords;
}
This is a simple fragment shader, just to show how the scene looks before I add the god-rays code:
#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
uniform sampler2D screenTexture;
void main()
{
FragColor = texture2D(screenTexture, TexCoords);
}
Scene without godrays:
Fragment shader when god rays code is added:
#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
uniform vec2 lightPositionOnScreen;
uniform sampler2D screenTexture;
const float exposure = 0.3f;
const float decay = 0.96815;
const float density = 0.926;
const float weight = 0.587;
const int NUM_SAMPLES = 80;
void main()
{
// Calculate vector from pixel to light source in screen space.
vec2 deltaTexCoord = (TexCoords - lightPositionOnScreen.xy);
vec2 texCoord = TexCoords;
// Divide by number of samples and scale by control factor.
deltaTexCoord *= 1.0f / NUM_SAMPLES * density;
// Store initial sample.
vec3 color = texture2D(screenTexture, TexCoords);
// Set up illumination decay factor.
float illuminationDecay = 1.0f;
// Evaluate summation from Equation 3 NUM_SAMPLES iterations.
for (int i = 0; i < NUM_SAMPLES; i++)
{
// Step sample location along ray.
texCoord -= deltaTexCoord;
// Retrieve sample at new location.
vec3 sample = texture2D(screenTexture, texCoord);
// Apply sample attenuation scale/decay factors.
sample *= illuminationDecay * weight;
// Accumulate combined color.
color += sample;
// Update exponential decay factor.
illuminationDecay *= decay;
}
FragColor = vec4(color * exposure, 1.0);
}
How scene looks after godRays code:
This code is used to translate coordinates of cube center from world to window space position:
glm::vec4 clipSpacePos = projection * (view * glm::vec4(m_cubeCenter, 1.0));
glm::vec3 ndcSpacePos = glm::vec3(clipSpacePos.x / clipSpacePos.w, clipSpacePos.y / clipSpacePos.w, clipSpacePos.z / clipSpacePos.w);
glm::vec2 windowSpacePos;
windowSpacePos.x = (ndcSpacePos.x + 1.0) / 2.0;
windowSpacePos.y = 1.0f - (ndcSpacePos.y + 1.0) / 2.0;
wxMessageOutputDebug().Printf("test %f x position", windowSpacePos.x);
wxMessageOutputDebug().Printf("test %f y position", windowSpacePos.y);
shaderProgram.loadShaders("Shaders/godRays.vert", "Shaders/godRays.frag");
shaderProgram.use();
shaderProgram.setUniform("lightPositionOnScreen", windowSpacePos);
This is how I am setting up texture:
GLfloat vertices[] = {
1.0f, 1.0f, 0.0f, 1.0f, 1.0f, // top right
1.0f, -1.0f, 0.0f, 1.0f, 0.0f, // bottom right
-1.0f, -1.0f, 0.0f, 0.0f, 0.0f, // bottom left
-1.0f, -1.0f, 0.0f, 0.0f, 0.0f, // bottom left
-1.0f, 1.0f, 0.0f, 0.0f, 1.0f, // top left
1.0f, 1.0f, 0.0f, 1.0f, 1.0f, // top right
};
GLuint testBuffer;
glGenBuffers(1, &testBuffer);
glBindBuffer(GL_ARRAY_BUFFER, testBuffer);
glBufferData(GL_ARRAY_BUFFER, 30 * sizeof(GLfloat), &vertices[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), NULL);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void*)(3 * sizeof(float)));
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, screenTexture);
glDrawArrays(GL_TRIANGLES, 0, 6);
shaderProgram.deleteProgram();
glDeleteBuffers(1, &testBuffer);
Here is the solution. The problem was in the lines vec3 color = texture2D(screenTexture, TexCoords); and vec3 sample = texture2D(screenTexture, texCoord);. texture2D is deprecated in GLSL 330 core, and it returns a vec4, which cannot be implicitly assigned to a vec3. I replaced the lines with vec3 color = texture(screenTexture, TexCoords).rgb; and vec3 sample = texture(screenTexture, texCoord).rgb; respectively.
#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
uniform vec2 lightPositionOnScreen;
uniform sampler2D screenTexture;
const float exposure = 0.3f;
const float decay = 0.96815;
const float density = 0.926;
const float weight = 0.587;
const int NUM_SAMPLES = 100;
void main()
{
vec2 deltaTexCoord = vec2(TexCoords.xy - lightPositionOnScreen.xy);
vec2 texCoord = TexCoords;
deltaTexCoord *= 1.0f / NUM_SAMPLES * density;
vec3 color = texture(screenTexture, TexCoords).rgb;
float illuminationDecay = 1.0f;
for (int i = 0; i < NUM_SAMPLES; i++)
{
texCoord -= deltaTexCoord;
vec3 sample = texture(screenTexture, texCoord).rgb;
sample *= illuminationDecay * weight;
color += sample;
illuminationDecay *= decay;
}
FragColor = vec4(color * exposure, 1.0);
}
I am trying to implement lighting using this tutorial. However, the lighting appears on the wrong side of the human object and I do not know why.
Normals are created per triangle; the three vertices of a triangle share the same normal:
glm::vec3 calculateNormal(glm::vec3 vertice_1, glm::vec3 vertice_2, glm::vec3 vertice_3)
{
glm::vec3 vector_1 = vertice_2 - vertice_1;
glm::vec3 vector_2 = vertice_3 - vertice_1;
return glm::normalize(glm::cross(vector_1, vector_2));
}
Here is code for vertex shader:
#version 330 core
layout (location = 0) in vec3 pos;
layout (location = 1) in vec3 normal;
out vec4 vert_color;
out vec3 Normal;
out vec3 FragPos;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform mat4 transform;
uniform vec4 color;
void main()
{
vert_color = color;
gl_Position = projection * view * model * transform * vec4(pos.x, pos.y, pos.z, 1.0);
FragPos = vec3(model * transform * vec4(pos, 1.0));
Normal = normal;
}
Fragment shader:
#version 330 core
uniform vec3 cameraPos;
uniform vec3 lightPos;
uniform vec3 lightColor;
in vec4 vert_color;
in vec3 FragPos;
in vec3 Normal;
out vec4 frag_color;
void main()
{
float ambientStrength = 0.1;
float specularStrength = 0.5;
vec3 ambient = ambientStrength * lightColor;
vec3 lightDir = normalize(lightPos - FragPos);
float diff = max(dot(Normal, lightDir), 0.0);
vec3 diffuse = diff * lightColor;
vec3 viewDir = normalize(cameraPos - FragPos);
vec3 reflectDir = reflect(-lightDir, Normal);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32);
vec3 specular = specularStrength * spec * lightColor;
vec3 result = (ambient + diffuse + specular) * vec3(vert_color.x, vert_color.y, vert_color.z);
frag_color = vec4(result, vert_color.w);
}
Main loop:
wxGLCanvas::SetCurrent(*glContext);
glClearDepth(1.0f);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthFunc(GL_LEQUAL);
glEnable(GL_DEPTH_TEST);
glm::mat4 model, view, projection;
model = glm::translate(model, modelPos); // modelPos is
view = fpsCamera->getViewMatrix();
projection = fpsCamera->getProjectionMatrix(windowWidth, windowHeight);
color = glm::vec4(0.310f, 0.747f, 0.185f, 1.0f);
glm::vec3 lightPos = glm::vec3(0.0f, 1.0f, 0.0f);
glm::vec3 lightColor = glm::vec3(1.0f, 1.0f, 1.0f);
glm::mat4 phantomtTransformation;
phantomtTransformation = glm::rotate(phantomtTransformation, - glm::pi<float>() / 2.0f, glm::vec3(1.0f, 0.0f, 0.0f));
phantomtTransformation = glm::rotate(phantomtTransformation, - glm::pi<float>() , glm::vec3(0.0f, 0.0f, 1.0f));
ShaderProgram shaderProgram;
shaderProgram.loadShaders("Shaders/phantom.vert", "Shaders/phantom.frag");
glClearStencil(0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
shaderProgram.use();
shaderProgram.setUniform("transform", phantomtTransformation);
shaderProgram.setUniform("model", model);
shaderProgram.setUniform("view", view);
shaderProgram.setUniform("projection", projection);
shaderProgram.setUniform("color", color);
shaderProgram.setUniform("lightColor", lightColor);
shaderProgram.setUniform("lightPos", lightPos);
shaderProgram.setUniform("cameraPos", fpsCamera->getPosition());
glStencilMask(0xFF); // Write to stencil buffer
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilFunc(GL_ALWAYS, 0, 0xFF); // Set any stencil to 0
glStencilFunc(GL_ALWAYS, 1, 0xFF); // Set any stencil to object ID
m_pantomMesh->draw();
glStencilFunc(GL_ALWAYS, 0, 0xFF); // Set any stencil to 0 // no need for testing
glFlush();
wxGLCanvas::SwapBuffers();
View from front of the object:
View from back of the object:
EDIT:
In order to debug, I removed the object rotation matrix from the main loop:
glm::mat4 phantomtTransformation;
phantomtTransformation = glm::rotate(phantomtTransformation, - glm::pi<float>() / 2.0f, glm::vec3(1.0f, 0.0f, 0.0f));
phantomtTransformation = glm::rotate(phantomtTransformation, - glm::pi<float>() , glm::vec3(0.0f, 0.0f, 1.0f));
shaderProgram.setUniform("transform", phantomtTransformation);
and changed line in fragment shader from
frag_color = vec4(result, vert_color.w);
to
frag_color = vec4(Normal, vert_color.w);
in order to visualize the Normal values. As a result I noticed that when the camera changes position, the phantom also changes color, which means the normal values are changing too.
I think the cause of your problem is that you are not applying your model transformation to your normal vectors. Since you definitely do not want to skew them, you will have to create a special matrix for your normals.
As is explained further down the tutorial that you mentioned, the matrix can be constructed like this
Normal = mat3(transpose(inverse(model))) * aNormal;
in your vertex shader.
However, I highly recommend that you calculate the matrix in your application code instead, since the above example recomputes it for every vertex.
Since you are using the glm library, it would look like this instead:
glm::mat3 model_normal = glm::mat3(glm::transpose(glm::inverse(model)));
You can then load your new model_normal matrix into the shader as a uniform mat3.
I was wondering how I would go about programming point-light shadows with deferred rendering.
The point-light shadows just don't show up for me. I think it is to do with the following line: float shadow = calculate_shadows(FragPos);. For directional shadows I multiply the frag position by a lightSpaceMatrix (lightView * lightProj) and that worked, but for point shadows I don't have a lightSpaceMatrix to use.
light fragment shader
#version 420 core
out vec4 FragColor;
in vec2 _texcoord;
uniform vec3 camera_pos;
uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gAlbedo;
uniform sampler2D gSpecular;
uniform sampler2D gMetalness;
uniform samplerCube gPointShadowmap;
uniform mat4 viewMatrix;
uniform vec3 lightPos;
vec3 FragPos;
vec3 Normal;
float calculate_shadows(vec3 light_space_pos)
{
// perform perspective divide
vec3 fragToLight = light_space_pos - vec3(0.0f, 0.0f, 0.0f);
// get closest depth value from light's perspective (using [0,1] range fragPosLight as coords)
float closestDepth = texture(gPointShadowmap, fragToLight).r;
// it is currently in linear range between [0,1], let's re-transform it back to original depth value
closestDepth *= 25.0f;
// now get current linear depth as the length between the fragment and light position
float currentDepth = length(fragToLight);
// test for shadows
float bias = 0.05; // we use a much larger bias since depth is now in [near_plane, far_plane] range
float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
//FragColor = vec4(vec3(closestDepth / 25.0f), 1.0);
return shadow;
}
void main(void)
{
FragPos = texture(gPosition, _texcoord).rgb;
Normal = texture(gNormal, _texcoord).rgb;
vec3 Diffuse = texture(gAlbedo, _texcoord).rgb;
float Emissive = texture(gAlbedo, _texcoord).a;
vec3 Specular = texture(gAlbedo, _texcoord).rgb;
vec3 Metalness = texture(gMetalness, _texcoord).rgb; // Reflection pass
float AmbientOcclusion = texture(gSsao, _texcoord).r;
vec3 lightColor = vec3(0.3);
// ambient
vec3 ambient = 0.3 * Diffuse;
// diffuse
vec3 lightDir = normalize(vec3(0.0, 0.0, 0.0) - FragPos);
float diff = max(dot(lightDir, Normal), 0.0);
vec3 diffuse = diff * lightColor;
// specular
vec3 viewDir = normalize(camera_pos - FragPos);
vec3 reflectDir = reflect(-lightDir, Normal);
float spec = 0.0;
vec3 halfwayDir = normalize(lightDir + viewDir);
spec = pow(max(dot(Normal, halfwayDir), 0.0), 64.0);
vec3 specular = spec * lightColor;
// calculate shadow
float shadow = calculate_shadows(FragPos);
vec3 lighting = (ambient + (1.0 - shadow) * (diffuse + specular));
FragColor = vec4(lighting, 1.0);
}
pointshadows vertex shader
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 model;
void main(void)
{
gl_Position = model * vec4(position, 1.0);
}
pointshadows fragment shader
#version 330 core
in vec4 FragPos;
void main(void)
{
float lightDistance = length(FragPos.xyz - vec3(0.0, 3.0, 0.0));
// map to [0;1] range by dividing by far_plane
lightDistance = lightDistance / 25.0;
// write this as modified depth
gl_FragDepth = lightDistance;
}
pointshadows geometry shader
#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices = 18) out;
uniform mat4 shadowMatrices[6];
out vec4 FragPos;
void main(void)
{
for(int face = 0; face < 6; ++face)
{
gl_Layer = face; // built-in variable that specifies to which face we render.
for(int i = 0; i < 3; ++i) // for each triangle's vertices
{
FragPos = gl_in[i].gl_Position;
gl_Position = shadowMatrices[face] * FragPos;
EmitVertex();
}
EndPrimitive();
}
}
Temp PointShadow class
#ifndef __POINTSHADOWPASS
#define __POINTSHADOWPASS
#include "Content.h"
class PointShadowPass
{
private:
static unsigned int _shadow_fbo;
public:
static unsigned int _shadow_texture;
static glm::vec3 lightPos;
static std::vector<glm::mat4> shadowTransforms;
PointShadowPass() {}
~PointShadowPass() {}
inline static void Initialise()
{
lightPos = glm::vec3(0.0f, 0.0f, 0.0f);
glGenFramebuffers(1, &_shadow_fbo);
glGenTextures(1, &_shadow_texture);
glBindTexture(GL_TEXTURE_2D, _shadow_texture);
for (unsigned int i = 0; i < 6; i++)
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glBindFramebuffer(GL_FRAMEBUFFER, _shadow_fbo);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, _shadow_texture, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
inline static void Render(unsigned int pointshadow_program, Camera* camera, std::vector<Actor*> _actors)
{
glDisable(GL_BLEND); // Disable blending for opaque materials
glEnable(GL_DEPTH_TEST); // Enable depth test
glm::mat4 model;
glm::mat4 shadowProj = glm::perspective(glm::radians(90.0f), (float)1024 / (float)1024, 1.0f, 25.0f);
shadowTransforms.push_back(shadowProj * glm::lookAt(lightPos, lightPos + glm::vec3(1.0f, 0.0f, 0.0f), glm::vec3(0.0f, -1.0f, 0.0f)));
shadowTransforms.push_back(shadowProj * glm::lookAt(lightPos, lightPos + glm::vec3(-1.0f, 0.0f, 0.0f), glm::vec3(0.0f, -1.0f, 0.0f)));
shadowTransforms.push_back(shadowProj * glm::lookAt(lightPos, lightPos + glm::vec3(0.0f, 1.0f, 0.0f), glm::vec3(0.0f, 0.0f, 1.0f)));
shadowTransforms.push_back(shadowProj * glm::lookAt(lightPos, lightPos + glm::vec3(0.0f, -1.0f, 0.0f), glm::vec3(0.0f, 0.0f, -1.0f)));
shadowTransforms.push_back(shadowProj * glm::lookAt(lightPos, lightPos + glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(0.0f, -1.0f, 0.0f)));
shadowTransforms.push_back(shadowProj * glm::lookAt(lightPos, lightPos + glm::vec3(0.0f, 0.0f, -1.0f), glm::vec3(0.0f, -1.0f, 0.0f)));
glViewport(0, 0, 1024, 1024);
glBindFramebuffer(GL_FRAMEBUFFER, _shadow_fbo);
glClear(GL_DEPTH_BUFFER_BIT);
glUseProgram(pointshadow_program);
for (unsigned int i = 0; i < 6; ++i)
glUniformMatrix4fv(glGetUniformLocation(pointshadow_program, ("shadowMatrices[" + std::to_string(i) + "]").c_str()), 1, GL_FALSE, glm::value_ptr(shadowTransforms[i]));
for (unsigned int i = 0; i < _actors.size(); i++)
{
model = _actors[i]->GetModelMatrix() * camera->GetViewMatrix();
glUniformMatrix4fv(glGetUniformLocation(pointshadow_program, "model"), 1, GL_FALSE, glm::value_ptr(model)); // set the model matrix uniform
_actors[i]->Render();
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
};
std::vector<glm::mat4> PointShadowPass::shadowTransforms;
unsigned int PointShadowPass::_shadow_fbo;
unsigned int PointShadowPass::_shadow_texture;
glm::vec3 PointShadowPass::lightPos;
#endif
I managed to get something showing (shadows move with camera rotation)
Reading your comments, it seems you have some misconceptions about what information you can have in deferred rendering.
You said that all the coordinates have to be in screen space, which isn't true. In deferred rendering you have a G-Buffer, and you can put whatever kind of information you want in it. To get world-position information you have two choices: either you store a world-position buffer, so you know where each of your fragments is in the world, or you reconstruct this information from the depth buffer and the camera projection matrix.
If you have a point-shadow calculation that works in forward rendering, you can do the same in deferred rendering: in the shader that does all the light calculation, you need the shadow cubemap and the light position, and you do the calculation like you used to.
EDIT:
Looking at your code for calculate_shadows(vec3 light_space_pos): in deferred rendering you don't pass it the position in light space, but the position in world space. So the function should be:
calculate_shadows(vec3 frag_world_pos)
Your first line is vec3 fragToLight = light_space_pos - vec3(0.0f, 0.0f, 0.0f);
which should be vec3 fragToLight = frag_world_pos - lightPos;
Or you do this calculation before calling the function. Either way, you need the position of your point light to calculate the direction and distance between your fragment and the light.