OpenGL shapes look darker when camera is below them

I have a problem with rendering my quads in OpenGL. When translucency is applied, they look darker if the camera is below a certain point. How can I fix this? The objects are many quads with tiny Z differences between them. I implemented the rendering of translucent objects from this webpage: http://www.alecjacobson.com/weblog/?p=2750
Render code:
double alpha_factor = 0.75;
double alpha_frac = (r_alpha - alpha_factor * r_alpha) / (1.0 - alpha_factor * r_alpha);
double prev_alpha = r_alpha;
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
// quintuple pass to get the rendering of translucent objects, somewhat correct
// reverse render order for getting alpha going!
// 1st pass: only depth checks
glDisable(GL_CULL_FACE);
glDepthFunc(GL_LESS);
r_alpha = 0;
// send alpha for each pass
// reverse order
drawobjects(RENDER_REVERSE);
// 2nd pass: guaranteed back face display with normal alpha
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
glDepthFunc(GL_ALWAYS);
r_alpha = alpha_factor * (prev_alpha + 0.025);
// reverse order
drawobjects(RENDER_REVERSE);
// 3rd pass: depth checked version of fraction of calculated alpha. (minus 1)
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
glDepthFunc(GL_LEQUAL);
r_alpha = alpha_frac + 0.025;
// normal order
drawobjects(RENDER_NORMAL);
// 4th pass: same for back face
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glDepthFunc(GL_ALWAYS);
r_alpha = alpha_factor * (prev_alpha + 0.025);
// reverse order
drawobjects(RENDER_REVERSE);
// 5th pass: just put out the entire thing now
glDisable(GL_CULL_FACE);
glDepthFunc(GL_LEQUAL);
r_alpha = alpha_frac + 0.025;
// normal order
drawobjects(RENDER_NORMAL);
glDisable(GL_BLEND);
r_alpha = prev_alpha;
GLSL shaders:
Vertex shader:
#version 330 core
layout(location = 0) in vec3 vPos_ModelSpace;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in mat4 model_instance;
out vec2 UV;
out float alpha;
flat out uint alpha_mode;
// model + view + proj matrix
uniform mat4 proj;
uniform mat4 view;
uniform float v_alpha;
uniform uint v_alpha_mode;
void main() {
gl_Position = proj * view * model_instance * vec4(vPos_ModelSpace, 1.0);
// send to frag shader
UV = vertexUV;
alpha = v_alpha;
alpha_mode = v_alpha_mode;
}
Fragment shader:
#version 330 core
// texture UV coordinate
in vec2 UV;
in float alpha;
flat in uint alpha_mode;
out vec4 color;
// Values that stay constant for the whole mesh.
uniform sampler2D texSampler;
void main() {
int amode = int(alpha_mode);
color.rgb = texture(texSampler, UV).rgb;
color.a = alpha;
if(amode == 1)
color.rgb *= alpha;
}
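A note on the shader above: the amode == 1 branch premultiplies the color by alpha, and premultiplied and straight alpha need different blend functions to composite correctly. The render code shown here enables GL_BLEND but never calls glBlendFunc, so whatever function was set elsewhere (or the GL default of GL_ONE, GL_ZERO) is what gets used. A minimal sketch of the two matching setups, not taken from the original code:
// straight alpha: color.rgb left untouched in the shader (amode == 0)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// premultiplied alpha: color.rgb *= alpha in the shader (amode == 1)
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);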
Image when problem happens:
Image comparison for how it should look regardless of my position:

The reason it fades away in the center is that when you look at the infinitely thin sides of the planes, they disappear. As for the brightness change between top and bottom, it comes down to how your passes treat surface normals: the dark planes are the ones whose normals face away from the camera, with no camera-facing planes in front of them to lighten the result.
It looks like you are rendering many translucent planes in a cube to estimate a volume. Here is a simple example of a volume rendering: https://www.shadertoy.com/view/lsG3D3
http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch39.html is a fantastic resource. It explains different ways to render volumes and shows what they can achieve. For reference, that last example uses a sphere as proxy geometry to raymarch a volume fractal.
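For a taste of what that chapter describes, here is a minimal front-to-back ray-marching loop in GLSL; the names volumeTex, rayOrigin, rayDir and stepSize are placeholder assumptions, not taken from the linked example:
vec4 acc = vec4(0.0);
for (int i = 0; i < 128 && acc.a < 0.99; ++i) {
    vec3 p = rayOrigin + rayDir * (float(i) * stepSize); // sample position along the ray
    vec4 s = texture(volumeTex, p);                      // volumeTex is a sampler3D
    s.rgb *= s.a;                                        // premultiply the sample
    acc += (1.0 - acc.a) * s;                            // front-to-back "over" compositing
}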
Happy coding!

Related

OpenGL shadow mapping weirdness

I have been playing around with OpenGL and shaders and got myself into shadow mapping.
Trying to follow tutorials on the Internet (ogldev and learnopengl), I got some unexpected results.
The issue is best described with a few screenshots (I have added a static quad displaying the depth framebuffer for debugging):
Somehow I managed to get shadows rendered on a ground quad once, with a static light (this commit). But the shadow pattern is, again, incorrect. I strongly suspect the model transformation matrix calculations are to blame:
The way I render the scene is quite straightforward:
create the pipelines:
for mapping the shadows (filling the depth frame buffer)
for rendering the scene using the depth frame buffer
(extra) debugging one, rendering depth frame buffer to a static quad on a screen
fill the depth frame buffer: using the shadow mapping pipeline, render the scene from the light point, using orthographic projection
render the shaded scene: using the rendering pipeline and depth frame buffer bind as the first texture, render the scene from a camera point, using perspective projection
This seems to be the same algorithm as in all those shadow mapping tutorials out there. Yet, instead of a moiré effect (like in all of the tutorials), I get no shadow on the bottom plane whatsoever and weird artifacts (incorrect shadow mapping) on the 3D (chicken) model.
Interestingly enough, if I do not render the chicken model (in both the shadow mapping and the final rendering pass), the plane is lit with the same weird pattern:
I also had to remove any normal transformations from the fragment shader and disable face culling to make the ground plane lit. With front-face culling the plane does not appear in the shadow map (depth buffer).
I assume the following might be causing this issue:
wrong depth frame buffer setup (data format or texture parameters)
flipped depth frame buffer texture
wrong shadow calculations in rendering shaders
wrong light matrices (view & projection) setup
wrong matrix calculations in the rendering shaders (given the model transformation matrices for both chicken model and the quad contain both rotation and scaling)
Unfortunately, I ran out of ideas even on how to assess the above assumptions.
Looking for any help on the matter (also feel free to criticize any of my approaches, including C++, CMake, OpenGL and computer graphics).
The full solution source code is available on GitHub, but for convenience I have placed the heavily cut source code below.
shadow-mapping.vert:
#version 410
layout (location = 0) in vec3 vertexPosition;
out gl_PerVertex
{
vec4 gl_Position;
};
uniform mat4 lightSpaceMatrix;
uniform mat4 modelTransformation;
void main()
{
gl_Position = lightSpaceMatrix * modelTransformation * vec4(vertexPosition, 1.0);
}
shadow-mapping.frag:
#version 410
layout (location = 0) out float fragmentDepth;
void main()
{
fragmentDepth = gl_FragCoord.z;
}
shadow-rendering.vert:
#version 410
layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec3 vertexNormal;
layout (location = 2) in vec2 vertexTextureCoord;
out VS_OUT
{
vec3 fragmentPosition;
vec3 normal;
vec2 textureCoord;
vec4 fragmentPositionInLightSpace;
} vsOut;
out gl_PerVertex {
vec4 gl_Position;
};
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform mat4 lightSpaceMatrix;
void main()
{
vsOut.fragmentPosition = vec3(model * vec4(vertexPosition, 1.0));
vsOut.normal = transpose(inverse(mat3(model))) * vertexNormal;
vsOut.textureCoord = vertexTextureCoord;
vsOut.fragmentPositionInLightSpace = lightSpaceMatrix * model * vec4(vertexPosition, 1.0);
gl_Position = projection * view * model * vec4(vertexPosition, 1.0);
}
shadow-rendering.frag:
#version 410
layout (location = 0) out vec4 fragmentColor;
in VS_OUT {
vec3 fragmentPosition;
vec3 normal;
vec2 textureCoord;
vec4 fragmentPositionInLightSpace;
} fsIn;
uniform sampler2D shadowMap;
uniform sampler2D diffuseTexture;
uniform vec3 lightPosition;
uniform vec3 lightColor;
uniform vec3 cameraPosition;
float shadowCalculation()
{
vec2 shadowMapCoord = fsIn.fragmentPositionInLightSpace.xy * 0.5 + 0.5;
float occluderDepth = texture(shadowMap, shadowMapCoord).r;
float thisDepth = fsIn.fragmentPositionInLightSpace.z * 0.5 + 0.5;
return occluderDepth < thisDepth ? 1.0 : 0.0;
}
void main()
{
vec3 color = texture(diffuseTexture, fsIn.textureCoord).rgb;
vec3 normal = normalize(fsIn.normal);
// ambient
vec3 ambient = 0.3 * color;
// diffuse
vec3 lightDirection = normalize(lightPosition - fsIn.fragmentPosition);
float diff = max(dot(lightDirection, normal), 0.0);
vec3 diffuse = diff * lightColor;
// specular
vec3 viewDirection = normalize(cameraPosition - fsIn.fragmentPosition);
vec3 halfwayDirection = normalize(lightDirection + viewDirection);
float spec = pow(max(dot(normal, halfwayDirection), 0.0), 64.0);
vec3 specular = spec * lightColor;
// calculate shadow
float shadow = shadowCalculation();
vec3 lighting = ((shadow * (diffuse + specular)) + ambient) * color;
fragmentColor = vec4(lighting, 1.0);
}
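As an aside, the learnopengl variant of this function also performs the perspective divide (a no-op here, since w stays 1.0 under an orthographic light projection) and subtracts a small depth bias before comparing. A sketch of that variant, not part of the original code:
float shadowCalculation()
{
    // divide by w, then map from [-1, 1] to [0, 1]
    vec3 projCoords = fsIn.fragmentPositionInLightSpace.xyz / fsIn.fragmentPositionInLightSpace.w;
    projCoords = projCoords * 0.5 + 0.5;
    float occluderDepth = texture(shadowMap, projCoords.xy).r;
    float bias = 0.005; // constant bias; a slope-scaled bias based on the surface normal is also common
    return projCoords.z - bias > occluderDepth ? 1.0 : 0.0;
}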
main.cpp, setting up shaders and frame buffer:
// loading the shadow mapping shaders
auto shadowMappingVertexProgram = ...;
auto shadowMappingFragmentProgram = ...;
auto shadowMappingLightSpaceUniform = shadowMappingVertexProgram->getUniform<glm::mat4>("lightSpaceMatrix");
auto shadowMappingModelTransformationUniform = shadowMappingVertexProgram->getUniform<glm::mat4>("modelTransformation");
auto shadowMappingPipeline = std::make_unique<globjects::ProgramPipeline>();
shadowMappingPipeline->useStages(shadowMappingVertexProgram.get(), gl::GL_VERTEX_SHADER_BIT);
shadowMappingPipeline->useStages(shadowMappingFragmentProgram.get(), gl::GL_FRAGMENT_SHADER_BIT);
// (omitted) loading the depth frame buffer debugging shaders and creating a pipeline here
// loading the rendering shaders
auto shadowRenderingVertexProgram = ...;
auto shadowRenderingFragmentProgram = ...;
auto shadowRenderingModelTransformationUniform = shadowRenderingVertexProgram->getUniform<glm::mat4>("model");
auto shadowRenderingViewTransformationUniform = shadowRenderingVertexProgram->getUniform<glm::mat4>("view");
auto shadowRenderingProjectionTransformationUniform = shadowRenderingVertexProgram->getUniform<glm::mat4>("projection");
auto shadowRenderingLightSpaceMatrixUniform = shadowRenderingVertexProgram->getUniform<glm::mat4>("lightSpaceMatrix");
auto shadowRenderingLightPositionUniform = shadowRenderingFragmentProgram->getUniform<glm::vec3>("lightPosition");
auto shadowRenderingLightColorUniform = shadowRenderingFragmentProgram->getUniform<glm::vec3>("lightColor");
auto shadowRenderingCameraPositionUniform = shadowRenderingFragmentProgram->getUniform<glm::vec3>("cameraPosition");
auto shadowRenderingPipeline = std::make_unique<globjects::ProgramPipeline>();
shadowRenderingPipeline->useStages(shadowRenderingVertexProgram.get(), gl::GL_VERTEX_SHADER_BIT);
shadowRenderingPipeline->useStages(shadowRenderingFragmentProgram.get(), gl::GL_FRAGMENT_SHADER_BIT);
// loading the chicken model
auto chickenModel = Model::fromAiNode(chickenScene, chickenScene->mRootNode, { "media" });
// INFO: this transformation is hard-coded specifically for Chicken.3ds model
chickenModel->setTransformation(glm::rotate(glm::scale(glm::mat4(1.0f), glm::vec3(0.01f)), glm::radians(-90.0f), glm::vec3(1.0f, 0, 0)));
// loading the quad model
auto quadModel = Model::fromAiNode(quadScene, quadScene->mRootNode);
// INFO: this transformation is hard-coded specifically for quad.obj model
quadModel->setTransformation(glm::rotate(glm::scale(glm::translate(glm::mat4(1.0f), glm::vec3(-5, 0, 5)), glm::vec3(10.0f, 0, 10.0f)), glm::radians(-90.0f), glm::vec3(1.0f, 0, 0)));
// loading the floor texture
sf::Image textureImage = ...;
auto defaultTexture = std::make_unique<globjects::Texture>(static_cast<gl::GLenum>(GL_TEXTURE_2D));
defaultTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_MIN_FILTER), static_cast<GLint>(GL_LINEAR));
defaultTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_MAG_FILTER), static_cast<GLint>(GL_LINEAR));
defaultTexture->image2D(0, static_cast<gl::GLenum>(GL_RGBA8), glm::vec2(textureImage.getSize().x, textureImage.getSize().y), 0, static_cast<gl::GLenum>(GL_RGBA), static_cast<gl::GLenum>(GL_UNSIGNED_BYTE), reinterpret_cast<const gl::GLvoid*>(textureImage.getPixelsPtr()));
// initializing the depth frame buffer
auto shadowMapTexture = std::make_unique<globjects::Texture>(static_cast<gl::GLenum>(GL_TEXTURE_2D));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_MIN_FILTER), static_cast<gl::GLenum>(GL_LINEAR));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_MAG_FILTER), static_cast<gl::GLenum>(GL_LINEAR));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_WRAP_S), static_cast<gl::GLenum>(GL_CLAMP_TO_BORDER));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_WRAP_T), static_cast<gl::GLenum>(GL_CLAMP_TO_BORDER));
shadowMapTexture->setParameter(static_cast<gl::GLenum>(GL_TEXTURE_BORDER_COLOR), glm::vec4(1.0f, 1.0f, 1.0f, 1.0f));
shadowMapTexture->image2D(0, static_cast<gl::GLenum>(GL_DEPTH_COMPONENT), glm::vec2(window.getSize().x, window.getSize().y), 0, static_cast<gl::GLenum>(GL_DEPTH_COMPONENT), static_cast<gl::GLenum>(GL_FLOAT), nullptr);
auto framebuffer = std::make_unique<globjects::Framebuffer>();
framebuffer->attachTexture(static_cast<gl::GLenum>(GL_DEPTH_ATTACHMENT), shadowMapTexture.get());
main.cpp, rendering (main loop):
// (omitted) event handling, camera updates go here
glm::mat4 cameraProjection = glm::perspective(glm::radians(fov), (float) window.getSize().x / (float) window.getSize().y, 0.1f, 100.0f);
glm::mat4 cameraView = glm::lookAt(cameraPos, cameraPos + cameraForward, cameraUp);
// moving light together with the camera, for debugging purposes
glm::vec3 lightPosition = cameraPos;
// light settings
const float nearPlane = 1.0f;
const float farPlane = 10.0f;
glm::mat4 lightProjection = glm::ortho(-5.0f, 5.0f, -5.0f, 5.0f, nearPlane, farPlane);
glm::mat4 lightView = glm::lookAt(lightPosition, glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 lightSpaceMatrix = lightProjection * lightView;
::glViewport(0, 0, static_cast<GLsizei>(window.getSize().x), static_cast<GLsizei>(window.getSize().y));
// first render pass - shadow mapping
framebuffer->bind();
::glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
::glClear(GL_DEPTH_BUFFER_BIT);
framebuffer->clearBuffer(static_cast<gl::GLenum>(GL_DEPTH), 0, glm::vec4(1.0f));
glEnable(GL_DEPTH_TEST);
// cull front faces to prevent peter panning the generated shadow map
glCullFace(GL_FRONT);
shadowMappingPipeline->use();
shadowMappingLightSpaceUniform->set(lightSpaceMatrix);
shadowMappingModelTransformationUniform->set(chickenModel->getTransformation());
chickenModel->draw();
shadowMappingModelTransformationUniform->set(quadModel->getTransformation());
quadModel->draw();
framebuffer->unbind();
shadowMappingPipeline->release();
glCullFace(GL_BACK);
// second pass - switch to normal shader and render picture with depth information to the viewport
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shadowRenderingPipeline->use();
shadowRenderingLightPositionUniform->set(lightPosition);
shadowRenderingLightColorUniform->set(glm::vec3(1.0, 1.0, 1.0));
shadowRenderingCameraPositionUniform->set(cameraPos);
shadowRenderingProjectionTransformationUniform->set(cameraProjection);
shadowRenderingViewTransformationUniform->set(cameraView);
shadowRenderingLightSpaceMatrixUniform->set(lightSpaceMatrix);
// draw chicken
shadowMapTexture->bind();
shadowRenderingModelTransformationUniform->set(chickenModel->getTransformation());
chickenModel->draw();
shadowRenderingModelTransformationUniform->set(quadModel->getTransformation());
defaultTexture->bind();
quadModel->draw();
defaultTexture->unbind();
shadowMapTexture->unbind();
shadowRenderingPipeline->release();
// (omitted) render the debugging quad with depth (shadow) map
window.display();
As shameful as it might be, the issue was with the wrong texture being bound.
The globjects library, which I use for a few nicer abstractions over OpenGL, does not actually provide any smart logic around texture binding (as I blindly assumed). So using just Texture::bind() and Texture::unbind() won't automagically keep track of how many textures have been bound and increment an index.
E.g. it does not behave (roughly) like this:
static int boundTextureIndex = -1;
void Texture::bind() {
glActiveTexture(GL_TEXTURE0 + (++boundTextureIndex));
glBindTexture(this->textureType, this->textureId);
}
void Texture::unbind() {
--boundTextureIndex;
}
So after changing the texture->bind() to texture->bindActive(0) followed by shaderProgram->setUniform("texture", 0), I finally got the moiré effect and correct shadow mapping:
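In other words, for the two samplers in shadow-rendering.frag the fix boils down to something like the following sketch (using the globjects calls mentioned above; the unit numbers are arbitrary):
shadowMapTexture->bindActive(0);   // bind explicitly to texture unit 0
defaultTexture->bindActive(1);     // bind explicitly to texture unit 1
shadowRenderingFragmentProgram->setUniform("shadowMap", 0);
shadowRenderingFragmentProgram->setUniform("diffuseTexture", 1);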
Full change is in this commit.

Cannot apply blending to cube located behind half-transparent textured surface

Following the tutorial from learnopengl.com about rendering half-transparent window glass using blending, I tried to apply that principle to my simple scene (which can be navigated with the mouse) containing:
Cube: 6 faces, each having 2 triangles, constructed using two attributes (position and color) defined in its associated vertex shader and passed to its fragment shader.
Grass: 2D Surface (two triangles) to which a png texture was applied using a sampler2D uniform (the background of the png image is transparent).
Window: A half-transparent 2D surface based on the same shaders (vertex and fragment) as the grass above. Both textures were downloaded from learnopengl.com
The issue I'm facing is that I can see the grass through the window, but not the cube!
My code is structured as follows (I left the rendering of the window to the very last on purpose):
// enable depth test & blending
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
while (true):
glClearColor(background.r, background.g, background.b, background.a);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
cube.draw();
grass.draw();
window.draw();
Edit: I'll share below the vertex and fragment shaders used to draw the two textured surfaces (grass and window):
#version 130
in vec2 position;
in vec2 texture_coord;
// opengl transformation matrices
uniform mat4 model; // object coord -> world coord
uniform mat4 view; // world coord -> camera coord
uniform mat4 projection; // camera coord -> ndc coord
out vec2 texture_coord_vert;
void main() {
gl_Position = projection * view * model * vec4(position, 0.0, 1.0);
texture_coord_vert = texture_coord;
}
#version 130
in vec2 texture_coord_vert;
uniform sampler2D texture2d;
out vec4 color_out;
void main() {
vec4 color = texture(texture2d, texture_coord_vert);
// manage transparency
if (color.a == 0.0)
discard;
color_out = color;
}
And the ones used to render the colored cube:
#version 130
in vec3 position;
in vec3 color;
// opengl transformation matrices
uniform mat4 model; // object coord -> world coord
uniform mat4 view; // world coord -> camera coord
uniform mat4 projection; // camera coord -> ndc coord
out vec3 color_vert;
void main() {
gl_Position = projection * view * model * vec4(position, 1.0);
color_vert = color;
}
#version 130
in vec3 color_vert;
out vec4 color_out;
void main() {
color_out = vec4(color_vert, 1.0);
}
P.S.: My shader programs use GLSL v1.30, because my integrated GPU didn't seem to support later versions.
Regarding the piece of code that does the actual drawing, I basically have one instance of a Renderer class for each type of geometry (one shared by both textured surfaces, and one for the cube). This class manages the creation/binding/deletion of VAOs and the binding/deletion of VBOs (VBO creation happens outside the class so I can share vertices between similar shapes). Its constructor takes the shader program and the vertex attributes as arguments. The relevant piece of code is shown below:
Renderer::Renderer(Program program, vector attributes) {
vao.bind();
vbo.bind();
define_attributes(attributes);
vao.unbind();
vbo.unbind();
}
void Renderer::draw(Uniforms uniforms) {
vao.bind();
program.use();
set_uniforms(uniforms);
glDrawArrays(GL_TRIANGLES, 0, n_vertexes);
vao.unbind();
program.unuse();
}
Your blend function depends on the destination's alpha channel (GL_ONE_MINUS_DST_ALPHA):
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
dest = src * src_alpha + dest * (1-dest_alpha)
The cube is drawn with an alpha of 1.0, so when the window is drawn over it the destination factor (1 - dest_alpha) is 0.0 and the color of the cube is not mixed with the color of the window.
The traditional alpha blending function depends only on the source alpha channel:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
dest = src * src_alpha + dest * (1-src_alpha)
See also glBlendFunc and Blending
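Applied to the render loop from the question, a minimal sketch looks like this (opaque cube first, translucent surfaces last, roughly back to front):
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // traditional alpha blending
while (true):
    glClearColor(background.r, background.g, background.b, background.a);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    cube.draw();   // opaque geometry first
    grass.draw();  // then the translucent surfaces, back to front
    window.draw();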

How to apply Texture-Mapping to a Maya Object using OpenGL?

I am currently learning how to map 2d textures to 3d objects using GLSL. I have a main.cpp, fragment shader, and vertex shader to achieve this as well as a Sphere.obj I made using Maya and some PNG images.
I just created a basic sphere poly model in Maya then exported it as a ".obj".
My fragment shader code is listed below for reference:
#version 410
// Inputs from application.
// Generally, "in" like the eye and normal vectors for things that change frequently,
// and "uniform" for things that change less often (think scene versus vertices).
in vec3 position_eye, normal_eye;
uniform mat4 view_mat;
// This light setup would usually be passed in from the application.
vec3 light_position_world = vec3 (10.0, 25.0, 10.0);
vec3 Ls = vec3 (1.0, 1.0, 1.0); // neutral, full specular color of light
vec3 Ld = vec3 (0.8, 0.8, 0.8); // neutral, lessened diffuse light color of light
vec3 La = vec3 (0.12, 0.12, 0.12); // ambient color of light - just a bit more than dk gray bg
// Surface reflectance properties for Phong or Blinn-Phong shading models below.
vec3 Ks = vec3 (1.0, 1.0, 1.0); // fully reflect specular light
vec3 Kd = vec3 (0.32, 0.18, 0.5); // purple diffuse surface reflectance
vec3 Ka = vec3 (1.0, 1.0, 1.0); // fully reflect ambient light
float specular_exponent = 400.0; // specular 'power' -- controls "roll-off"
// These come from the VAO for texture coordinates.
in vec2 texture_coords;
// And from the uniform outputs for the textures setup in main.cpp.
uniform sampler2D texture00;
uniform sampler2D texture01;
out vec4 fragment_color; // color of surface to draw
void main ()
{
// Ambient intensity
vec3 Ia = La * Ka;
// These next few lines sample the current texture coord (s, t) in texture00 and 01 and mix.
vec4 texel_a = texture (texture00, fract(texture_coords*2.0));
vec4 texel_b = texture (texture01, fract(texture_coords*2.0));
//vec4 mixed = mix (texel_a, texel_b, texture_coords.x);
vec4 mixed = mix (texel_a, texel_b, texture_coords.x);
Kd.x = mixed.x;
Kd.y = mixed.y;
Kd.z = mixed.z;
// Transform light position to view space.
// Vectors here are appended with _eye as a reminder once in view space versus world space.
// "Eye" is used instead of "camera" since reflectance models often phrased that way.
vec3 light_position_eye = vec3 (view_mat * vec4 (light_position_world, 1.0));
vec3 distance_to_light_eye = light_position_eye - position_eye;
vec3 direction_to_light_eye = normalize (distance_to_light_eye);
// Diffuse intensity
float dot_prod = dot (direction_to_light_eye, normal_eye);
dot_prod = max (dot_prod, 0.0);
vec3 Id = Ld * Kd * dot_prod; // final diffuse intensity
// Specular is view dependent; get vector toward camera.
vec3 surface_to_viewer_eye = normalize (-position_eye);
// Phong
//vec3 reflection_eye = reflect (-direction_to_light_eye, normal_eye);
//float dot_prod_specular = dot (reflection_eye, surface_to_viewer_eye);
//dot_prod_specular = max (dot_prod_specular, 0.0);
//float specular_factor = pow (dot_prod_specular, specular_exponent);
// Blinn
vec3 half_way_eye = normalize (surface_to_viewer_eye + direction_to_light_eye);
float dot_prod_specular = max (dot (half_way_eye, normal_eye), 0.0);
float specular_factor = pow (dot_prod_specular, specular_exponent);
// Specular intensity
vec3 Is = Ls * Ks * specular_factor; // final specular intensity
// final color
fragment_color = vec4 (Is + Id + Ia, 1.0);
}
I type the following command into the terminal to run my package:
./go fs.glsl vs.glsl Sphere.obj image.png image2.png
I am trying to map a world-map .jpg to my sphere using this method and ignore the 2nd image input, but it won't run. Can someone tell me what I need to comment out in my fragment shader to ignore the second texture input so my code will run?
PS: How would I go about modifying my fragment shader to implement various types of 'tiling'? I'm a bit lost on this as well. Any examples or tips are appreciated.
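For what it's worth, the fract(texture_coords*2.0) calls in the fragment shader above are already a simple form of tiling: scaling the coordinates before taking fract controls how many times the texture repeats. A sketch, where the scale factors are arbitrary examples:
vec2 tiled = fract(texture_coords * vec2(4.0, 2.0)); // repeat 4x horizontally, 2x vertically
vec4 texel_a = texture(texture00, tiled);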
Here is the texture portion of my main.cpp code.
// load textures
GLuint tex00;
int tex00location = glGetUniformLocation (shader_programme, "texture00");
glUniform1i (tex00location, 0);
glActiveTexture (GL_TEXTURE0);
assert (load_texture (argv[4], &tex00));
//assert (load_texture ("ship.png", &tex00));
GLuint tex01;
int tex01location = glGetUniformLocation (shader_programme, "texture01");
glUniform1i (tex01location, 1);
glActiveTexture (GL_TEXTURE1);
assert (load_texture (argv[5], &tex01));
/*---------------------------SET RENDERING DEFAULTS---------------------------*/
// Choose vertex and fragment shaders to use as well as view and proj matrices.
glUniformMatrix4fv (view_mat_location, 1, GL_FALSE, view_mat.m);
glUniformMatrix4fv (proj_mat_location, 1, GL_FALSE, proj_mat.m);
// The model matrix stores the position and orientation transformations for the mesh.
mat4 model_mat;
model_mat = translate (identity_mat4 () * scale(identity_mat4(), vec3(0.5, 0.5, 0.5)), vec3(0, -0.5, 0)) * rotate_y_deg (identity_mat4 (), 90 );
// Setup basic GL display attributes.
glEnable (GL_DEPTH_TEST); // enable depth-testing
glDepthFunc (GL_LESS); // depth-testing interprets a smaller value as "closer"
glEnable (GL_CULL_FACE); // cull face
glCullFace (GL_BACK); // cull back face
glFrontFace (GL_CCW); // set counter-clock-wise vertex order to mean the front
glClearColor (0.1, 0.1, 0.1, 1.0); // non-black background to help spot mistakes
glViewport (0, 0, g_gl_width, g_gl_height); // make sure correct aspect ratio

Textures in the OpenGL Rendering pipeline

Let's say I start with a quad that just covers the entire screen space. I then put it through a projection matrix so that it appears as a trapezoid on the screen. There is a texture on it. As the base of the trapezoid is meant to be closer to the camera, OpenGL correctly renders the texture such that things in the texture appear bigger at the base of the trapezoid (since this is seemingly closer to the camera).
How does OpenGL know to render the texture itself in this perspective-based way rather than just stretching the sides of the texture into the trapezoid shape? Certainly it must be using the vertex z values, but how does it use those to map to textures in the fragment shader? In the fragment shader it feels like I am just working with x and y coordinates of textures with no z values being relevant.
EDIT:
I tried using the information provided in the links in the comments. I am not sure if there is information I am missing related to my question specifically, or if I am doing something incorrectly.
What I am trying to do is make a pseudo-3D, SNES Mode 7-like projection (if you don't know what that is, it's OK; I explain below what I'm trying to do).
Here's how it's coming out now.
As you can see, something funny is happening: you can clearly tell that the quad is actually 2 triangles, and the black text area at the top should be straight, not crooked.
Whatever is happening, it's clear that the triangle on the left and the triangle on the right have their textures rendered differently. The z-values are not being changed. Based on the info in the links in the comments, I thought I could simply move the top two vertices of my rectangular quad inward so that it becomes a trapezoid instead, and that this would act like a projection.
I know that the "normal" thing to do would be to use glm::lookAt for a view matrix and glm::perspective for a projection matrix, but these are a bit of a black box to me and I would rather find a more easy-to-understand way.
I may have already provided enough info for someone to answer, but just in case, here is my code:
Vertex Shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 2) in vec2 texCoord;
out vec2 TexCoord;
void main()
{
// adjust vertex positions to make rectangle into trapezoid
if( position.y < 0){
gl_Position = vec4(position.x * 2.0, position.y * 2.0, 0.0, 1.0);
}else {
gl_Position = vec4(position.x * 1.0, position.y * 2.0, 0.0, 1.0);
}
TexCoord = vec2(texCoord.x, 1.0 - texCoord.y);
}
Fragment Shader:
#version 330 core
in vec2 TexCoord;
out vec4 color;
uniform sampler2D ourTexture1;
uniform mat3 textures_transform_mat_input;
mat3 TexCoord_to_mat3;
mat3 foo_mat3;
void main()
{
TexCoord_to_mat3[0][0] = 1.0;
TexCoord_to_mat3[1][1] = 1.0;
TexCoord_to_mat3[2][2] = 1.0;
TexCoord_to_mat3[0][2] = TexCoord.x;
TexCoord_to_mat3[1][2] = TexCoord.y;
foo_mat3 = TexCoord_to_mat3 * textures_transform_mat_input;
vec2 foo = vec2(foo_mat3[0][2], foo_mat3[1][2]);
vec2 bar = vec2(TexCoord.x, TexCoord.y);
color = texture(ourTexture1, foo);
vec2 center = vec2(0.5, 0.5);
}
Relevant code in main (note I am using cglm, a C library similar to GLM; also, the "center" and "center undo" matrices are just there to make sure rotation happens about the center rather than a corner):
if(!init_complete){
glm_mat3_identity(textures_scale_mat);
textures_scale_mat[0][0] = 1.0/ASPECT_RATIO / 3.0;
textures_scale_mat[1][1] = 1.0/1.0 / 3.0;
}
mat3 center_mat;
center_mat[0][0] = 1.0;
center_mat[1][1] = 1.0;
center_mat[2][2] = 1.0;
center_mat[0][2] = -0.5;
center_mat[1][2] = -0.5;
mat3 center_undo_mat;
center_undo_mat[0][0] = 1.0;
center_undo_mat[1][1] = 1.0;
center_undo_mat[2][2] = 1.0;
center_undo_mat[0][2] = 0.5;
center_undo_mat[1][2] = 0.5;
glm_mat3_identity(textures_position_mat);
textures_position_mat[0][2] = player.y / 1.0;
textures_position_mat[1][2] = player.x / 1.0;
glm_mat3_identity(textures_orientation_mat);
textures_orientation_mat[0][0] = cos(player_rotation_radians);
textures_orientation_mat[0][1] = sin(player_rotation_radians);
textures_orientation_mat[1][0] = -sin(player_rotation_radians);
textures_orientation_mat[1][1] = cos(player_rotation_radians);
glm_mat3_identity(textures_transform_mat);
glm_mat3_mul(center_mat, textures_orientation_mat, textures_transform_mat);
glm_mat3_mul(textures_transform_mat, center_undo_mat, textures_transform_mat);
glm_mat3_mul(textures_transform_mat, textures_scale_mat, textures_transform_mat);
glm_mat3_mul(textures_transform_mat, textures_position_mat, textures_transform_mat);
glUniformMatrix3fv(glGetUniformLocation(shader_perspective, "textures_transform_mat_input"), 1, GL_FALSE, textures_transform_mat);
glBindTexture(GL_TEXTURE_2D, texture_mute_city);
glDrawArrays(GL_TRIANGLES, 0, 6);
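One note on the vertex shader above: because w is left at 1.0, the texture coordinates are interpolated affinely across the two triangles, which is exactly what produces the bent seam. The usual trick is to encode the trapezoid through w so that the rasterizer performs the perspective-correct divide itself. A sketch, where the factor 2.0 simply mirrors the scaling already used in the shader above:
// far (top) edge gets a larger w; dividing by w reproduces the same trapezoid on screen,
// but attribute interpolation is now perspective-correct
float w = (position.y < 0.0) ? 1.0 : 2.0;
gl_Position = vec4(position.x * 2.0, position.y * 2.0 * w, 0.0, w);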

Simple curiosity about relation between texture mapping and shader program using Opengl/GLSL

I'm working on a small homemade 3D engine, and more precisely on rendering optimization. So far I have developed a sort algorithm whose goal is to gather as much geometry (meshes) as possible into batches that share the same material properties and the same shader program. This way I minimize state changes (glBindXXX) and draw calls (glDrawXXX). So, if I have a scene composed of 10 boxes that all share the same texture and need to be rendered with the same shader program (for example, one including ADS lighting), then all the vertices of these meshes are merged into a single VBO, the texture is bound just once, and only one draw call is needed.
Scene description:
- 10 meshes (boxes) mapped with 'texture_1'
Pseudo-code (render):
shaderProgram_1->Bind()
{
glActiveTexture(texture_1)
DrawCall(render 10 meshes with 'texture_1')
}
But now I want to be sure of one thing: let's assume the scene is still composed of the same 10 boxes, but this time 5 of them are mapped with a different texture (not multi-texturing, just simple texture mapping).
Scene description:
- 5 boxes with 'texture_1'
- 5 boxes with 'texture_2'
Pseudo-code (render):
shaderProgram_1->Bind()
{
glActiveTexture(texture_1)
DrawCall(render 5 meshes with 'texture_1')
}
shaderProgram_2->Bind()
{
glActiveTexture(texture_2)
DrawCall(render 5 meshes with 'texture_2')
}
And my fragment shader has a single sampler2D declaration (the goal of my shader program is to render geometry with simple texture mapping and ADS lighting):
uniform sampler2D ColorSampler;
I want to be sure it's not possible to draw this scene with a single draw call, the way it was in my previous example (where one batch was enough). That was possible because I used the same texture for all the geometry. I think this time I will need 2 batches, hence 2 draw calls, and of course before each draw call I will bind 'texture_1' or 'texture_2' (one for the first 5 boxes and another for the other 5).
To sum up, if all the meshes are mapped with a simple texture (simple texture mapping):
5 with a red texture (texture_red)
5 with a blue texture (texture_blue)
Is it possible to render the scene with a single draw call? I don't think so, because my pseudo-code would look like this:
Pseudo-code:
shaderProgram->Bind()
{
glActiveTexture(texture_blue)
glActiveTexture(texture_red)
DrawCall(render 10 meshes)
}
I think it's impossible to differentiate the 2 textures when my fragment shader computes the pixel color using a single sampler2D uniform variable (simple texture mapping).
Here's my fragment shader:
#version 440
#define MAX_LIGHT_COUNT 1
/*
** Output color value.
*/
layout (location = 0) out vec4 FragColor;
/*
** Inputs.
*/
in vec3 Position;
in vec2 TexCoords;
in vec3 Normal;
/*
** Material uniforms.
*/
uniform MaterialBlock
{
vec3 Ka, Kd, Ks;
float Shininess;
} MaterialInfos;
uniform sampler2D ColorSampler;
struct Light
{
vec4 Position;
vec3 La, Ld, Ls;
float Kc, Kl, Kq;
};
uniform struct Light LightInfos[MAX_LIGHT_COUNT];
uniform unsigned int LightCount;
/*
** Light attenuation factor.
*/
float getLightAttenuationFactor(vec3 lightDir, Light light)
{
float lightAtt = 0.0f;
float dist = 0.0f;
dist = length(lightDir);
lightAtt = 1.0f / (light.Kc + (light.Kl * dist) + (light.Kq * pow(dist, 2)));
return (lightAtt);
}
/*
** Basic phong shading.
*/
vec3 Basic_Phong_Shading(vec3 normalDir, vec3 lightDir, vec3 viewDir, int idx)
{
vec3 Specular = vec3(0.0f);
float lambertTerm = max(dot(lightDir, normalDir), 0.0f);
vec3 Ambient = LightInfos[idx].La * MaterialInfos.Ka;
vec3 Diffuse = LightInfos[idx].Ld * MaterialInfos.Kd * lambertTerm;
if (lambertTerm > 0.0f)
{
vec3 reflectDir = reflect(-lightDir, normalDir);
Specular = LightInfos[idx].Ls * MaterialInfos.Ks * pow(max(dot(reflectDir, viewDir), 0.0f), MaterialInfos.Shininess);
}
return (Ambient + Diffuse + Specular);
}
/*
** Fragment shader entry point.
*/
void main(void)
{
vec3 LightIntensity = vec3(0.0f);
vec4 texDiffuseColor = texture2D(ColorSampler, TexCoords);
vec3 normalDir = (gl_FrontFacing ? -Normal : Normal);
for (int idx = 0; idx < LightCount; idx++)
{
vec3 lightDir = vec3(LightInfos[idx].Position) - Position.xyz;
vec3 viewDir = -Position.xyz;
float lightAttenuationFactor = getLightAttenuationFactor(lightDir, LightInfos[idx]);
LightIntensity += Basic_Phong_Shading(
-normalize(normalDir), normalize(lightDir), normalize(viewDir), idx
) * lightAttenuationFactor;
}
FragColor = vec4(LightIntensity, 1.0f) * texDiffuseColor;
}
Do you agree with me?
It's possible if you either: (i) treat it as a multitexturing problem, where the per-fragment function just picks between the two incoming texels (ideally using mix with a coefficient of 0.0 or 1.0 rather than genuine branching); or (ii) composite your two textures into one texture, subject to your ability to wrap and clamp texture coordinates efficiently (watch out for dependent reads) and to maximum texture size constraints.
It's an open question as to whether either of these things would improve performance. Definitely go with (ii) if you can.
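A sketch of option (i), with both textures bound to different texture units and a hypothetical per-vertex (or per-instance) MaterialId value selecting between them without branching; the sampler and attribute names are assumptions, not from the original shader:
uniform sampler2D ColorSampler0; // e.g. texture_red, bound to unit 0
uniform sampler2D ColorSampler1; // e.g. texture_blue, bound to unit 1
in float MaterialId;             // 0.0 or 1.0, passed along from the vertex data
// inside main(), in place of the single ColorSampler lookup above:
vec4 texDiffuseColor = mix(texture(ColorSampler0, TexCoords),
                           texture(ColorSampler1, TexCoords),
                           MaterialId);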