I am currently learning how to map 2D textures onto 3D objects using GLSL. I have a main.cpp, a fragment shader, and a vertex shader, as well as a Sphere.obj I made in Maya and some PNG images.
I created a basic polygonal sphere in Maya and exported it as a ".obj".
My fragment shader code is listed below for reference:
#version 410
// Inputs from application.
// Generally, "in" like the eye and normal vectors for things that change frequently,
// and "uniform" for things that change less often (think scene versus vertices).
in vec3 position_eye, normal_eye;
uniform mat4 view_mat;
// This light setup would usually be passed in from the application.
vec3 light_position_world = vec3 (10.0, 25.0, 10.0);
vec3 Ls = vec3 (1.0, 1.0, 1.0); // neutral, full specular color of light
vec3 Ld = vec3 (0.8, 0.8, 0.8); // neutral, lessened diffuse light color of light
vec3 La = vec3 (0.12, 0.12, 0.12); // ambient color of light - just a bit more than dk gray bg
// Surface reflectance properties for Phong or Blinn-Phong shading models below.
vec3 Ks = vec3 (1.0, 1.0, 1.0); // fully reflect specular light
vec3 Kd = vec3 (0.32, 0.18, 0.5); // purple diffuse surface reflectance
vec3 Ka = vec3 (1.0, 1.0, 1.0); // fully reflect ambient light
float specular_exponent = 400.0; // specular 'power' -- controls "roll-off"
// These come from the VAO for texture coordinates.
in vec2 texture_coords;
// And these are the texture sampler uniforms set up in main.cpp.
uniform sampler2D texture00;
uniform sampler2D texture01;
out vec4 fragment_color; // color of surface to draw
void main ()
{
    // Ambient intensity
    vec3 Ia = La * Ka;

    // Sample the current texture coordinate (s, t) in texture00 and texture01 and mix.
    vec4 texel_a = texture (texture00, fract (texture_coords * 2.0));
    vec4 texel_b = texture (texture01, fract (texture_coords * 2.0));
    vec4 mixed = mix (texel_a, texel_b, texture_coords.x);
    Kd = mixed.rgb; // override the diffuse reflectance with the sampled texel

    // Transform light position to view space.
    // Vectors are suffixed with _eye as a reminder that they are in view space, not world space.
    // "Eye" is used instead of "camera" since reflectance models are often phrased that way.
    vec3 light_position_eye = vec3 (view_mat * vec4 (light_position_world, 1.0));
    vec3 distance_to_light_eye = light_position_eye - position_eye;
    vec3 direction_to_light_eye = normalize (distance_to_light_eye);

    // Diffuse intensity
    float dot_prod = dot (direction_to_light_eye, normal_eye);
    dot_prod = max (dot_prod, 0.0);
    vec3 Id = Ld * Kd * dot_prod; // final diffuse intensity

    // Specular is view dependent; get vector toward camera.
    vec3 surface_to_viewer_eye = normalize (-position_eye);

    // Phong
    //vec3 reflection_eye = reflect (-direction_to_light_eye, normal_eye);
    //float dot_prod_specular = dot (reflection_eye, surface_to_viewer_eye);
    //dot_prod_specular = max (dot_prod_specular, 0.0);
    //float specular_factor = pow (dot_prod_specular, specular_exponent);

    // Blinn
    vec3 half_way_eye = normalize (surface_to_viewer_eye + direction_to_light_eye);
    float dot_prod_specular = max (dot (half_way_eye, normal_eye), 0.0);
    float specular_factor = pow (dot_prod_specular, specular_exponent);

    // Specular intensity
    vec3 Is = Ls * Ks * specular_factor; // final specular intensity

    // final color
    fragment_color = vec4 (Is + Id + Ia, 1.0);
}
I type the following command into the terminal to run my program:
./go fs.glsl vs.glsl Sphere.obj image.png image2.png
I am trying to map a world-map .jpg to my sphere using this method and ignore the second image input, but then the program won't run. Can someone tell me what I need to comment out in my fragment shader to ignore the second texture input so my code will run?
PS: How would I go about modifying my fragment shader to implement various types of 'tiling'? I'm a bit lost on this as well. Any examples or tips are appreciated.
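Below is a minimal sketch, assuming the fragment shader above stays otherwise unchanged, of how its texturing lines could drop texture01 and implement a few common tiling modes (the scale factor and variable names are illustrative):

vec2 tc = texture_coords * 2.0;                 // the scale factor sets how many tiles

vec2 tc_repeat = fract(tc);                     // repeat: wrap each tile back into [0, 1)
vec2 tc_mirror = 1.0 - abs(1.0 - mod(tc, 2.0)); // mirrored repeat: every other tile flips
vec2 tc_clamp  = clamp(tc, 0.0, 1.0);           // clamp: stretch the edge texels beyond 1.0

vec4 mixed = texture(texture00, tc_repeat);     // pick one of the three variants

The same repeat/mirror/clamp behaviour can also be configured on the application side by setting GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T to GL_REPEAT, GL_MIRRORED_REPEAT, or GL_CLAMP_TO_EDGE with glTexParameteri, and simply using coordinates outside [0, 1].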
Here is the texture portion of my main.cpp code.
// load textures
GLuint tex00;
int tex00location = glGetUniformLocation (shader_programme, "texture00");
glUniform1i (tex00location, 0);
glActiveTexture (GL_TEXTURE0);
assert (load_texture (argv[4], &tex00));
//assert (load_texture ("ship.png", &tex00));
GLuint tex01;
int tex01location = glGetUniformLocation (shader_programme, "texture01");
glUniform1i (tex01location, 1);
glActiveTexture (GL_TEXTURE1);
assert (load_texture (argv[5], &tex01));
/*---------------------------SET RENDERING DEFAULTS---------------------------*/
// Choose vertex and fragment shaders to use as well as view and proj matrices.
glUniformMatrix4fv (view_mat_location, 1, GL_FALSE, view_mat.m);
glUniformMatrix4fv (proj_mat_location, 1, GL_FALSE, proj_mat.m);
// The model matrix stores the position and orientation transformations for the mesh.
mat4 model_mat;
model_mat = translate (identity_mat4 () * scale(identity_mat4(), vec3(0.5, 0.5, 0.5)), vec3(0, -0.5, 0)) * rotate_y_deg (identity_mat4 (), 90 );
// Setup basic GL display attributes.
glEnable (GL_DEPTH_TEST); // enable depth-testing
glDepthFunc (GL_LESS); // depth-testing interprets a smaller value as "closer"
glEnable (GL_CULL_FACE); // cull face
glCullFace (GL_BACK); // cull back face
glFrontFace (GL_CCW); // set counter-clock-wise vertex order to mean the front
glClearColor (0.1, 0.1, 0.1, 1.0); // non-black background to help spot mistakes
glViewport (0, 0, g_gl_width, g_gl_height); // make sure correct aspect ratio
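One way to make the program run with a single image argument, rather than commenting things out, is to guard the second texture load and fall back to reusing the first texture on unit 1, so the shader's texture01 sampler still reads valid data. A sketch against the load_texture helper shown above (the argc check is an assumption about how main parses its arguments):

GLuint tex01 = tex00; // fall back to the first texture by default
int tex01location = glGetUniformLocation (shader_programme, "texture01");
glUniform1i (tex01location, 1);
glActiveTexture (GL_TEXTURE1);
if (argc > 5) {
    // a second image was supplied on the command line
    assert (load_texture (argv[5], &tex01));
} else {
    // reuse the first texture so sampling texture01 is still well defined
    glBindTexture (GL_TEXTURE_2D, tex00);
}

As an aside, wrapping load_texture in assert means the call is compiled out entirely when NDEBUG is defined, so checking the return value explicitly is safer in release builds.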
I'm currently working on an OpenGL project and I'm trying to get shadow mapping to work properly. I've gotten to the point where the shadow map is rendered into a texture, but it doesn't seem to get applied to the scenery when rendered. Here are the most important bits of my code:
The shadow map vertex shader, basically a simple pass-through shader (it also forwards some additional attributes like normals, but don't let that distract you); it just transforms the vertices so they're seen from the perspective of the light (it's a directional light, but since we need to assume a position, it's effectively a point light far away):
#version 430 core
layout(location = 0) in vec3 v_position;
layout(location = 1) in vec3 v_normal;
layout(location = 2) in vec3 v_texture;
layout(location = 3) in vec4 v_color;
out vec3 f_texture;
out vec3 f_normal;
out vec4 f_color;
uniform mat4 modelMatrix;
uniform mat4 depthViewMatrix;
uniform mat4 depthProjectionMatrix;
// Shadow map vertex shader.
void main() {
    mat4 mvp = depthProjectionMatrix * depthViewMatrix * modelMatrix;
    gl_Position = mvp * vec4(v_position, 1.0);

    // Passing attributes on to the fragment shader
    f_texture = v_texture;
    f_normal = (transpose(inverse(modelMatrix)) * vec4(v_normal, 1.0)).xyz;
    f_color = v_color;
}
The shadow map fragment shader that writes the depth value to a texture:
#version 430 core
layout(location = 0) out float fragmentDepth;
in vec3 f_texture;
in vec3 f_normal;
in vec4 f_color;
uniform vec3 lightDirection;
uniform sampler2DArray texSampler;
// Shadow map fragment shader.
void main() {
    fragmentDepth = gl_FragCoord.z;
}
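The FBO setup itself isn't shown in the question; for context, a typical depth-texture framebuffer that pairs with a depth-only pass like this might look as follows (a sketch only, consistent with the note further down that the shadow map is a one-channel depth texture; the question's actual setup code is not shown):

GLuint m_framebuffer, m_depthTexture;
glGenTextures(1, &m_depthTexture);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glGenFramebuffers(1, &m_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, m_framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, m_depthTexture, 0);
glDrawBuffer(GL_NONE); // no color attachment; the shader's out float is simply discarded
glReadBuffer(GL_NONE);
assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);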
The vertex shader that actually renders the scene, but also calculates the position of the current vertex from the light's point of view (shadowCoord) to compare against the depth texture; it also applies a bias matrix, since the coordinates aren't in the correct [0, 1] interval for sampling:
#version 430 core
layout(location = 0) in vec3 v_position;
layout(location = 1) in vec3 v_normal;
layout(location = 2) in vec3 v_texture;
layout(location = 3) in vec4 v_color;
out vec3 f_texture;
out vec3 f_normal;
out vec4 f_color;
out vec3 f_shadowCoord;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 depthViewMatrix;
uniform mat4 depthProjectionMatrix;
// Simple vertex shader.
void main() {
    mat4 mvp = projectionMatrix * viewMatrix * modelMatrix;
    gl_Position = mvp * vec4(v_position, 1.0);

    // This bias matrix adjusts the projection of a given vertex on a texture
    // to be within 0 and 1 for proper sampling
    mat4 depthBias = mat4(0.5, 0.0, 0.0, 0.5,
                          0.0, 0.5, 0.0, 0.5,
                          0.0, 0.0, 0.5, 0.5,
                          0.0, 0.0, 0.0, 1.0);
    mat4 depthMVP = depthProjectionMatrix * depthViewMatrix * modelMatrix;
    mat4 biasedDMVP = depthBias * depthMVP;

    // Passing attributes on to the fragment shader
    f_shadowCoord = (biasedDMVP * vec4(v_position, 1.0)).xyz;
    f_texture = v_texture;
    f_normal = (transpose(inverse(modelMatrix)) * vec4(v_normal, 1.0)).xyz;
    f_color = v_color;
}
The fragment shader that applies textures from a texture array, receives the depth texture (uniform sampler2D shadowMap), and checks whether a fragment is behind something:
#version 430 core
in vec3 f_texture;
in vec3 f_normal;
in vec4 f_color;
in vec3 f_shadowCoord;
out vec4 color;
uniform vec3 lightDirection;
uniform sampler2D shadowMap;
uniform sampler2DArray tileTextureArray;
// Very basic fragment shader.
void main() {
    float visibility = 1.0;
    if (texture(shadowMap, f_shadowCoord.xy).z < f_shadowCoord.z) {
        visibility = 0.5;
    }
    color = texture(tileTextureArray, f_texture) * visibility;
}
And finally: the function that renders multiple chunks to generate the shadow map and then renders the scene with the shadow map applied:
// Generating the shadow map
glBindFramebuffer(GL_FRAMEBUFFER, m_framebuffer);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
m_shadowShader->bind();
glViewport(0, 0, 1024, 1024);
glDisable(GL_CULL_FACE);
glm::vec3 lightDir = glm::vec3(1.0f, -0.5f, 1.0f);
glm::vec3 sunPosition = FPSCamera::getPosition() - lightDir * 64.0f;
glm::mat4 depthViewMatrix = glm::lookAt(sunPosition, FPSCamera::getPosition(), glm::vec3(0, 1, 0));
glm::mat4 depthProjectionMatrix = glm::ortho<float>(-100.0f, 100.0f, -100.0f, 100.0f, 0.1f, 800.0f);
m_shadowShader->setUniformMatrix("depthViewMatrix", depthViewMatrix);
m_shadowShader->setUniformMatrix("depthProjectionMatrix", depthProjectionMatrix);
for (Chunk *chunk : m_chunks) {
    m_shadowShader->setUniformMatrix("modelMatrix", chunk->getModelMatrix());
    chunk->drawElements();
}
m_shadowShader->unbind();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Normal draw call
m_chunkShader->bind();
glEnable(GL_CULL_FACE);
glViewport(0, 0, Window::getWidth(), Window::getHeight());
glm::mat4 viewMatrix = FPSCamera::getViewMatrix();
glm::mat4 projectionMatrix = FPSCamera::getProjectionMatrix();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glActiveTexture(GL_TEXTURE1);
m_textures->bind();
m_chunkShader->setUniformMatrix("depthViewMatrix", depthViewMatrix);
m_chunkShader->setUniformMatrix("depthProjectionMatrix", depthProjectionMatrix);
m_chunkShader->setUniformMatrix("viewMatrix", viewMatrix);
m_chunkShader->setUniformMatrix("projectionMatrix", projectionMatrix);
m_chunkShader->setUniformVec3("lightDirection", lightDir);
m_chunkShader->setUniformInteger("shadowMap", 0);
m_chunkShader->setUniformInteger("tileTextureArray", 1);
for (Chunk *chunk : m_chunks) {
    m_chunkShader->setUniformMatrix("modelMatrix", chunk->getModelMatrix());
    chunk->drawElements();
}
Most of the code should be self-explanatory: I bind an FBO with a texture attached, do a normal render call into that framebuffer so the depth gets rendered into the texture, and then try to pass the texture into the shader for the normal render. I've tested whether the texture gets properly generated, and it does: see the generated shadow map here.
However, when rendering the scene, all I see is this.
No shadows applied; visibility is 1.0 everywhere. I also use a debug context, which works properly and logs errors when there are any, but it seems to be completely fine, no warnings or errors, so I must be the one doing something terribly wrong here. I'm on OpenGL 4.3, by the way.
Hopefully one of you can help me out on this. I've never gotten shadow maps to work before; this is the closest I've ever come. Thanks in advance.
Commonly, an OpenGL mat4 transformation matrix is laid out column by column in memory (and in the mat4 constructor), like this:
( X-axis.x, X-axis.y, X-axis.z, 0 )
( Y-axis.x, Y-axis.y, Y-axis.z, 0 )
( Z-axis.x, Z-axis.y, Z-axis.z, 0 )
( trans.x, trans.y, trans.z, 1 )
So your depthBias matrix, which you use to convert from normalized device coordinates (in range [-1, 1]) to texture coordinates (in range [0, 1]), should look like this:
mat4 depthBias = mat4(0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0);
or this:
mat4 depthBias = mat4(
vec4( 0.5, 0.0, 0.0, 0.0 ),
vec4( 0.0, 0.5, 0.0, 0.0 ),
vec4( 0.0, 0.0, 0.5, 0.0 ),
vec4( 0.5, 0.5, 0.5, 1.0 ) );
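Since the application code in the question uses GLM, note that glm::mat4's constructor is likewise column-major, so the equivalent bias matrix on the C++ side would be built like this (an illustrative addition for comparison, not part of the original answer):

// each group of four values is one COLUMN; the translation sits in the last column
glm::mat4 depthBias(0.5f, 0.0f, 0.0f, 0.0f,
                    0.0f, 0.5f, 0.0f, 0.0f,
                    0.0f, 0.0f, 0.5f, 0.0f,
                    0.5f, 0.5f, 0.5f, 1.0f);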
After you have transformed a vertex position by the model matrix, the view matrix, and the projection matrix, the vertex position is in clip space (homogeneous coordinates). You have to convert from clip space to normalized device coordinates (Cartesian coordinates in the range [-1, 1]). This is done by dividing by the w component of the homogeneous coordinate:
mat4 depthMVP = depthProjectionMatrix * depthViewMatrix * modelMatrix;
vec4 clipPos = depthMVP * vec4(v_position, 1.0);
vec4 ndcPos = vec4(clipPos.xyz / clipPos.w, 1.0);
f_shadowCoord = (depthBias * ndcPos).xyz;
A depth texture has one channel only. If you read data from the depth texture, then the data is contained in the x (or r) component of the vector.
Adapt the fragment shader code like this:
if ( texture(shadowMap, f_shadowCoord.xy).x < f_shadowCoord.z)
visibility = 0.5;
The Khronos Group's Image Format specification says:
Image formats do not have to store each component. When the shader samples such a texture, it will still resolve to a 4-value RGBA vector. The components not stored by the image format are filled in automatically. Zeros are used if R, G, or B is missing, while a missing Alpha always resolves to 1.
see further:
Data Type (GLSL)
GLSL Programming/Vector and Matrix Operations
Transform the modelMatrix
How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
OpenGL Shadow map problems
Addition to the solution:
This was an important part of the solution, but another step was needed to render the shadow map properly. The second mistake was using the wrong component of the texture to compare against f_shadowCoord.z: it should have been
texture(shadowMap, f_shadowCoord.xy).r
instead of
texture(shadowMap, f_shadowCoord.xy).z
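Putting both fixes together, the relevant shader lines end up roughly like this (a sketch; the small depth offset against shadow acne is a common extra precaution, not something the answer above prescribes):

// vertex shader: perspective divide first, then the [0, 1] bias
vec4 clipPos = depthProjectionMatrix * depthViewMatrix * modelMatrix * vec4(v_position, 1.0);
f_shadowCoord = (clipPos.xyz / clipPos.w) * 0.5 + 0.5; // same effect as a correctly laid-out depthBias matrix

// fragment shader: compare against the single (r) channel of the depth texture
float bias = 0.005; // guards against self-shadowing ("shadow acne")
float visibility = 1.0;
if (texture(shadowMap, f_shadowCoord.xy).r < f_shadowCoord.z - bias)
    visibility = 0.5;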
I am trying to implement a simple projective texture mapping approach using shaders in OpenGL 3+. While there are some examples on the web, I am having trouble creating a working example with shaders.
I am actually planning on using two shaders: one that does a normal scene draw, and another for projective texture mapping. I have a function for drawing a scene, void ProjTextureMappingScene::renderScene(GLFWwindow *window), and I am using glUseProgram() to switch between shaders. The normal drawing works fine. However, it is unclear to me how I am supposed to render the projective texture on top of an already textured cube. Do I somehow have to use a stencil buffer or a framebuffer object (the rest of the scene should be unaffected)?
I also don't think my projective texture mapping shaders are correct, since the second time I render a cube it shows up black. Further, I tried to debug using colors, and only the t component in the shader seems to be non-zero (so the cube appears green). I am overriding texColor in the fragment shader below just for debugging purposes.
Vertex shader:
#version 330
uniform mat4 TexGenMat;
uniform mat4 InvViewMat;
uniform mat4 P;
uniform mat4 MV;
uniform mat4 N;
layout (location = 0) in vec3 inPosition;
//layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;
out vec3 vNormal, eyeVec;
out vec2 texCoord;
out vec4 projCoords;
void main()
{
    vNormal = (N * vec4(inNormal, 0.0)).xyz;
    vec4 posEye = MV * vec4(inPosition, 1.0);
    vec4 posWorld = InvViewMat * posEye;
    projCoords = TexGenMat * posWorld;

    // only needed for specular component
    // currently not used
    eyeVec = -posEye.xyz;

    gl_Position = P * MV * vec4(inPosition, 1.0);
}
Fragment shader:
#version 330
uniform sampler2D projMap;
uniform sampler2D gSampler;
uniform vec4 vColor;
in vec3 vNormal, lightDir, eyeVec;
//in vec2 texCoord;
in vec4 projCoords;
out vec4 outputColor;
struct DirectionalLight
{
vec3 vColor;
vec3 vDirection;
float fAmbientIntensity;
};
uniform DirectionalLight sunLight;
void main (void)
{
    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        vec2 finalCoords = projCoords.st / projCoords.q;
        vec4 vTexColor = texture(gSampler, finalCoords);
        // only t has non-zero values.. why?
        vTexColor = vec4(finalCoords.s, finalCoords.t, finalCoords.r, 1.0);
        //vTexColor = vec4(projCoords.s, projCoords.t, projCoords.r, 1.0);
        float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));
        outputColor = vTexColor * vColor * vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}
Creation of TexGen Matrix
biasMatrix = glm::mat4(0.5f, 0, 0, 0.5f,
0, 0.5f, 0, 0.5f,
0, 0, 0.5f, 0.5f,
0, 0, 0, 1);
// 4:3 perspective with 45 fov
projectorP = glm::perspective(45.0f * zoomFactor, 4.0f / 3.0f, 0.1f, 1000.0f);
projectorOrigin = glm::vec3(-3.0f, 3.0f, 0.0f);
projectorTarget = glm::vec3(0.0f, 0.0f, 0.0f);
projectorV = glm::lookAt(projectorOrigin, // projector origin
projectorTarget, // project on object at origin
glm::vec3(0.0f, 1.0f, 0.0f) // Y axis is up
);
mModel = glm::mat4(1.0f);
...
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mModel*mModelView);
Render Cube Again
It is also unclear to me what the modelview matrix of the cube should be. Should it use the view matrix of the slide projector (as it does now) or the normal view matrix? Currently the cube is rendered black (or green when debugging) in the middle of the scene view, as it would appear from the slide projector (I made a toggle hotkey so I can see what the slide projector "sees"). The cube also moves with the view. How do I get the projection onto the cube itself?
mModel = glm::translate(projectorV, projectorOrigin);
// bind projective texture
tTextures[2].bindTexture();
// set all uniforms
...
// bind VBO data and draw
glBindVertexArray(uiVAOSceneObjects);
glDrawArrays(GL_TRIANGLES, 6, 36);
Switch between main scene camera and slide projector camera
if (useMainCam)
{
    mCurrent = glm::mat4(1.0f);
    mModelView = mModelView * mCurrent;
    mProjection = *pipeline->getProjectionMatrix();
}
else
{
    mModelView = projectorV;
    mProjection = projectorP;
}
I have solved the problem. One issue I had was that I confused the matrices of the two camera systems (world camera and projective-texture camera). Now, when I set the uniforms for the projective texture mapping part, I use the correct matrices for the MVP values: the same ones I use for the world scene.
glUniformMatrix4fv(iPTMProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iPTMNormalLoc, 1, GL_FALSE, glm::value_ptr(glm::transpose(glm::inverse(mCurrent))));
glUniformMatrix4fv(iPTMModelViewLoc, 1, GL_FALSE, glm::value_ptr(mCurrent));
glUniformMatrix4fv(iTexGenMatLoc, 1, GL_FALSE, glm::value_ptr(texGenMatrix));
glUniformMatrix4fv(iInvViewMatrix, 1, GL_FALSE, glm::value_ptr(invViewMatrix));
Further, the invViewMatrix is just the inverse of the view matrix, not of the modelview matrix (this didn't change the behaviour in my case, since the model matrix was the identity, but it is still wrong). For my project I only wanted to selectively render a few objects with projective textures. To do this, for each object I must make sure the current shader program is the one for projective textures, using glUseProgram(projectiveTextureMappingProgramID). Next, I compute the required matrices for this object:
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mView);
Coming back to the shaders: the vertex shader is correct except that I re-added the UV texture coordinates (inCoord) for the current object and stored them in texCoord.
For the fragment shader, I changed the main function to clamp the projective texture so that it doesn't repeat (I couldn't get it to work with the client-side GL_CLAMP_TO_EDGE), and I am also using the object's default texture and UV coordinates in case the projector does not cover the whole object (I also removed lighting from the projective texture since it is not needed in my case):
void main (void)
{
    vec2 finalCoords = projCoords.st / projCoords.q;
    vec4 vTexColor = texture(gSampler, texCoord);
    vec4 vProjTexColor = texture(projMap, finalCoords);
    //vec4 vProjTexColor = textureProj(projMap, projCoords);

    float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));

    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        // clamp the projective texture (for some reason GL_CLAMP did not work...)
        if (projCoords.s > 0 && projCoords.t > 0 && finalCoords.s < 1 && finalCoords.t < 1)
            //outputColor = vProjTexColor * vColor * vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
            outputColor = vProjTexColor * vColor;
        else
            outputColor = vTexColor * vColor * vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
    else
    {
        outputColor = vTexColor * vColor * vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}
If you are stuck and for some reason cannot get the shaders to work, you can check out an example in the "OpenGL 4.0 Shading Language Cookbook" (textures chapter); I actually missed this until I got it working by myself.
In addition to all of the above, a great help for debugging whether the algorithm is working correctly was to draw the frustum (as a wireframe) for the projective camera. I used a separate shader for frustum drawing. The fragment shader just assigns a solid color, while the vertex shader is listed below, with explanations:
#version 330
// input vertex data
layout(location = 0) in vec3 vp;
uniform mat4 P;
uniform mat4 MV;
uniform mat4 invP;
uniform mat4 invMV;
void main()
{
    /* The clip space position c of a world space
       vertex v is obtained by transforming v with
       the product of the projection matrix P and
       the modelview matrix MV:

           c = P * MV * v

       So, if we could solve for v, then we could
       generate vertex positions by plugging in clip
       space positions. For your frustum, one line
       would be between the clip space positions
       (-1,-1,near) and (-1,-1,far), the lower left
       edge of the frustum, for example.

       NB: If you would like to mix normalized device
       coords (x,y) and eye space coords (near,far),
       you need an additional step here. Modify your
       clip position as follows:

           c' = (c.x * c.z, c.y * c.z, c.z, c.z)

       otherwise you would need to supply both the z
       and w for c, which might be inconvenient.
       Simply use c' instead of c below.

       To solve for v, multiply both sides of the
       equation above by (P * MV)^-1. This gives

           (P * MV)^-1 * c = v

       which is equivalent to

           MV^-1 * P^-1 * c = v

       P^-1 is given by

           | (r-l)/(2n)      0            0          (r+l)/(2n) |
           |     0       (t-b)/(2n)       0          (t+b)/(2n) |
           |     0           0            0              -1     |
           |     0           0      -(f-n)/(2fn)     (f+n)/(2fn)|

       where l, r, t, b, n, and f are the parameters
       of the glFrustum() call.

       If you don't want to fool with inverting the
       model matrix, the info you already have can be
       used instead: the forward, right, and up
       vectors, in addition to the eye position.
       First, go from clip space to eye space:

           e = P^-1 * c

       Next, go from eye space to world space:

           v = eyePos - forward*e.z + right*e.x + up*e.y

       assuming x = right, y = up, and -z = forward. */
    vec4 fVp = invMV * invP * vec4(vp, 1.0);
    gl_Position = P * MV * fVp;
}
The uniforms are used like this (make sure you use the right matrices):
// projector matrices
glUniformMatrix4fv(iFrustumInvProjectionLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorP)));
glUniformMatrix4fv(iFrustumInvMVLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorV)));
// world camera
glUniformMatrix4fv(iFrustumProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iFrustumModelViewLoc, 1, GL_FALSE, glm::value_ptr(mModelView));
To get the input vertices needed for the frustum's vertex shader you can do the following to get the coordinates (then just add them to your vertex array):
glm::vec3 ftl = glm::vec3(-1, +1, pFar); //far top left
glm::vec3 fbr = glm::vec3(+1, -1, pFar); //far bottom right
glm::vec3 fbl = glm::vec3(-1, -1, pFar); //far bottom left
glm::vec3 ftr = glm::vec3(+1, +1, pFar); //far top right
glm::vec3 ntl = glm::vec3(-1, +1, pNear); //near top left
glm::vec3 nbr = glm::vec3(+1, -1, pNear); //near bottom right
glm::vec3 nbl = glm::vec3(-1, -1, pNear); //near bottom left
glm::vec3 ntr = glm::vec3(+1, +1, pNear); //near top right
glm::vec3 frustum_coords[36] = {
// near
ntl, nbl, ntr, // 1 triangle
ntr, nbl, nbr,
// right
nbr, ftr, ntr,
ftr, nbr, fbr,
// left
nbl, ftl, ntl,
ftl, nbl, fbl,
// far
ftl, fbl, fbr,
fbr, ftr, ftl,
//bottom
nbl, fbr, fbl,
fbr, nbl, nbr,
//top
ntl, ftr, ftl,
ftr, ntl, ntr
};
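For completeness, a sketch of how those 36 vertices might be drawn as a wireframe once uploaded to their own VAO/VBO (frustumProgram and frustumVAO are illustrative names; glPolygonMode is one simple way to get the wireframe look):

glUseProgram(frustumProgram);                // the frustum shader described above
// ... set the P, MV, invP, and invMV uniforms as shown earlier ...
glBindVertexArray(frustumVAO);
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);   // draw edges only
glDrawArrays(GL_TRIANGLES, 0, 36);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);   // restore the default fill mode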
After all is said and done, it's nice to see how it looks:
As you can see, I applied two projective textures: a biohazard image on Blender's Suzanne monkey head, and a smiley texture on the floor and a small cube. You can also see that the cube is partly covered by the projective texture, while the rest of it appears with its default texture. Finally, you can see the green frustum wireframe for the projector camera; everything looks correct.
I'm using GLSL to draw sprites from a sprite sheet. I'm using jME 3, but there are only small differences, and only with regard to deprecated functions.
The most important part of drawing a sprite from a sprite sheet is to draw only a subset/range of pixels, for example the range from (100, 0) to (200, 100). With those bounds, only the green part of the following test-case sprite sheet would be drawn:
[test-case sprite-sheet image: a 300x100 sheet whose middle third is green]
This is what I have so far:
Definition:
MaterialDef Solid Color {
    // This is the list of user-defined variables to be used in the shader
    MaterialParameters {
        Vector4 Color
        Texture2D ColorMap
    }
    Technique {
        VertexShader GLSL100:   Shaders/tc_s1.vert
        FragmentShader GLSL100: Shaders/tc_s1.frag
        WorldParameters {
            WorldViewProjectionMatrix
        }
    }
}
.vert file:
uniform mat4 g_WorldViewProjectionMatrix;
attribute vec3 inPosition;
attribute vec4 inTexCoord;
varying vec4 texture_coordinate;
void main(){
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    texture_coordinate = vec4(inTexCoord);
}
.frag:
uniform vec4 m_Color;
uniform sampler2D m_ColorMap;
varying vec4 texture_coordinate;
void main(){
    vec4 color = vec4(m_Color);
    vec4 tex = texture2D(m_ColorMap, texture_coordinate);
    color *= tex;
    gl_FragColor = color;
}
In jME 3, inTexCoord refers to gl_MultiTexCoord0, and inPosition refers to gl_Vertex.
As you can see, I tried to give the texture_coordinate a vec4 type, rather than a vec2, so as to be able to reference its p and q values (texture_coordinate.p and texture_coordinate.q). Modifying them only resulted in different hues.
m_Color refers to the color, inputted by the user, and serves the purpose of altering the hue. In this case, it should be disregarded.
So far, the shader works as expected and the texture displays correctly.
I've been using resources and tutorials from NeHe (http://nehe.gamedev.net/article/glsl_an_introduction/25007/) and Lighthouse3D (http://www.lighthouse3d.com/tutorials/glsl-tutorial/simple-texture/).
Which functions/values should I alter to get the desired effect of displaying only part of the texture?
Generally, if you want to only display part of a texture, then you change the texture coordinates associated with each vertex. Since you don't show your code for how you're telling OpenGL about your vertices, I'm not sure what to suggest. But in general, if you're using older deprecated functions, instead of doing this:
// Lower Left of triangle
glTexCoord2f(0,0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(1,0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(1,1);
glVertex3f(x2,y2,z2);
You could do this:
// Lower Left of triangle
glTexCoord2f(1.0 / 3.0, 0.0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(2.0 / 3.0, 0.0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(2.0 / 3.0, 1.0);
glVertex3f(x2,y2,z2);
If you're using VBOs, then you need to modify your array of texture coordinates to access the appropriate section of your texture in a similar manner.
For a sampler2D, the texture coordinates are normalized so that the leftmost and bottom-most coordinates are 0, and the rightmost and topmost are 1. So for your example of a 300-pixel-wide texture, the green section would lie between 1/3 and 2/3 of the width of the texture.
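Concretely, converting a pixel rectangle into normalized coordinates is just a division by the sheet dimensions. A sketch for the (100, 0)-(200, 100) example on a 300x100 sheet (variable names are made up for illustration):

// the sprite occupies pixels [100, 200) x [0, 100) of a 300x100 sheet
float sheetW = 300.0f, sheetH = 100.0f;
float u0 = 100.0f / sheetW;  // = 1/3
float u1 = 200.0f / sheetW;  // = 2/3
float v0 =   0.0f / sheetH;  // = 0
float v1 = 100.0f / sheetH;  // = 1

// per-vertex texture coordinates for the quad, e.g. for a VBO
float texcoords[8] = {
    u0, v0,  // lower left
    u1, v0,  // lower right
    u1, v1,  // upper right
    u0, v1   // upper left
};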
After deciding to try programming in modern OpenGL, I've left behind the fixed-function pipeline, and I'm not entirely sure how to get the same functionality I had before.
I'm trying to texture map quads with pixel perfect size, matching the texture size. For example, a 128x128 texture maps to a quad 128x128 in size.
This is my vertex shader.
#version 110
uniform float xpos;
uniform float ypos;
uniform float tw; // texture width in pixels
uniform float th; // texture height in pixels
attribute vec4 position;
varying vec2 texcoord;
void main()
{
    mat4 projectionMatrix = mat4( 2.0/600.0, 0.0,       0.0, -1.0,
                                  0.0,       2.0/800.0, 0.0, -1.0,
                                  0.0,       0.0,      -1.0,  0.0,
                                  0.0,       0.0,       0.0,  1.0);
    gl_Position = position * projectionMatrix;
    texcoord = (gl_Position.xy);
}
This is my fragment shader:
#version 110
uniform float fade_factor;
uniform sampler2D textures[1];
varying vec2 texcoord;
void main()
{
    gl_FragColor = texture2D(textures[0], texcoord);
}
My vertex data is as such, where w and h are the width and height of the texture.
[
0, 0,
w, 0,
w, h,
0, h
]
I load a 128x128 texture and with these shaders I see the image repeated 4 times: http://i.stack.imgur.com/UY7Ts.jpg
Can anyone offer advice on the correct way to translate and scale given the tw, th, xpos, ypos uniforms?
There's a problem with this:
mat4 projectionMatrix = mat4( 2.0/600.0, 0.0, 0.0, -1.0,
0.0, 2.0/800.0, 0.0, -1.0,
0.0, 0.0, -1.0, 0.0,
0.0, 0.0, 0.0, 1.0);
gl_Position = position * projectionMatrix;
Transformation matrices are applied right to left on column vectors, i.e. you should multiply in the opposite order: gl_Position = projectionMatrix * position. Also, you normally don't hard-code a projection matrix in the shader; you pass it in as a uniform. Pre-core OpenGL provides ready-to-use built-in uniforms for the projection and modelview matrices; in OpenGL-3 core you can reuse those uniform names to stay compatible:
// predefined in GLSL versions before OpenGL-3 core:
#if __VERSION__ < 400
uniform mat4 gl_ProjectionMatrix;
uniform mat4 gl_ModelViewMatrix;
uniform mat4 gl_ModelViewProjectionMatrix;       // premultiplied gl_ProjectionMatrix * gl_ModelViewMatrix
uniform mat4 gl_ModelViewMatrixInverseTranspose; // needed for transforming normals
attribute vec4 gl_Vertex;
varying vec4 gl_TexCoord[];
#endif

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Next, you must understand that texture coordinates don't address texture pixels (texels); rather, the texture should be understood as an interpolating function of the given sampling points. Texture coordinates 0 and 1 don't hit the texel centers, but lie exactly on the wraparound between texels, and thus blur. As long as your quad's on-screen size exactly matches the texture dimensions, this is fine. But as soon as you want to show just a subimage, things get interesting (I leave it as an exercise to the reader to figure out the exact mapping; hint: you'll have the terms 0.5/dimension and (dimension - 1)/dimension in the solution).
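One way to cash out that hint (an illustrative sketch, not part of the original answer): to show texels i through j of an N-texel-wide row without bleeding, map the quad's normalized coordinate onto texel centers rather than texel edges:

// at s = 0 this hits the center of texel i, at s = 1 the center of texel j,
// so linear filtering never reads outside the chosen subimage
float texel_u(float s, float i, float j, float N)
{
    return (i + 0.5) / N + s * (j - i) / N;
}

// for the full texture (i = 0, j = N - 1) this reduces to
//   u = 0.5 / N + s * (N - 1) / N
// which contains exactly the 0.5/dimension and (dimension - 1)/dimension terms.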