I'm trying to write a simple application for baking a texture from a paint buffer. Right now I have a mesh, a mesh texture, and a paint texture. When I render the mesh, the mesh shader looks up the mesh texture and then, based on the fragment's screen position, looks up the paint texture value. I then composite the paint lookup over the mesh lookup.
Here's a screenshot with nothing in the paint buffer and just the mesh texture.
Here's a screenshot with something in the paint buffer composited over the mesh texture.
So that all works great, but I'd like to bake the paint texture into my mesh texture. Right now I send the mesh's UVs down as the position, with an ortho projection set to (0,1)x(0,1), so I'm actually doing everything in texture space; the mesh texture lookup also uses the position. The problem I'm having is computing the fragment's screen-space position under the original projection, to figure out where to sample the paint texture. I'm passing the bake shader my original camera projection matrices and the object-space position, so the vertex shader can send the fragment shader the normalized-device position of the fragment (again, under my original camera projection) to do the lookup, but it's coming out wrong.
Here's what the bake texture is generating if I render half the output using the paint texture and screen position I've derived.
I would expect that block line to be right down the middle.
Am I calculating the screen position incorrectly in my vertex shader? Or am I going about this in a fundamentally wrong way?
// vertex shader
uniform mat4 orthoPV;    // ortho projection over (0,1)x(0,1) texture space
uniform mat4 cameraPV;   // the original camera's projection * view
uniform mat4 objToWorld;
varying vec2 uv;
varying vec2 screenPos;
void main() {
    uv = gl_Vertex.xy;   // positions carry the mesh UVs
    screenPos = 0.5 * (vec2(1,1) + (cameraPV * objToWorld * vec4(gl_MultiTexCoord0.xyz, 1)).xy);
    screenPos = gl_MultiTexCoord0.xy;   // note: this overwrites the line above
    gl_Position = orthoPV * gl_Vertex;
    gl_FrontColor = vec4(1, 0, 0, 1);
}
// fragment shader
uniform sampler2D meshTexture;
uniform sampler2D paintTexture;
varying vec2 uv;
varying vec2 screenPos;
void main() {
    gl_FragColor = texture2D(meshTexture, uv);
    if (screenPos.x > 0.5)
        gl_FragColor = texture2D(paintTexture, uv);
}
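For comparison, here is a minimal sketch (my reading of the setup above, not confirmed working code) of how the screen position would normally be derived. Two things differ from the shader above: the clip-space result needs a perspective divide by w, and because rasterization here happens in UV space rather than from the camera, that divide has to happen per fragment; interpolating an already-divided screenPos across a triangle is not perspective-correct.

// vertex shader (sketch)
uniform mat4 orthoPV;
uniform mat4 cameraPV;
uniform mat4 objToWorld;
varying vec2 uv;
varying vec4 camClipPos;   // full clip-space position; divided per fragment
void main() {
    uv = gl_Vertex.xy;
    camClipPos = cameraPV * objToWorld * vec4(gl_MultiTexCoord0.xyz, 1.0);
    gl_Position = orthoPV * gl_Vertex;
}

// fragment shader (sketch)
uniform sampler2D meshTexture;
uniform sampler2D paintTexture;
varying vec2 uv;
varying vec4 camClipPos;
void main() {
    vec2 screenPos = 0.5 + 0.5 * (camClipPos.xy / camClipPos.w);  // NDC -> [0,1]
    gl_FragColor = texture2D(meshTexture, uv);
    if (screenPos.x > 0.5)
        gl_FragColor = texture2D(paintTexture, screenPos);  // paint lives in screen space
}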
As seen in the figure, assume a model has two UV unwrappings, i.e., UV-1 and UV-2. I ask an artist to paint the model based on UV-1, producing texture map 1. How can I transfer the colors from UV-1 to UV-2 programmatically (e.g., in Python)? One method I know is baking texture map 1 into vertex colors and then rendering the vertex colors to UV-2, but that loses color detail. How can I do it?
Render your model on Texture Map 2 using UV-2 coordinates for vertex positions and UV-1 coordinates interpolated across the triangles. In the fragment shader use the interpolated UV-1 coordinates to sample Texture Map 1. This way you're limited only by the resolution of the texture maps, not by the resolution of the model.
EDIT: Vertex shader:
#version 330 core
in vec2 UV1;
in vec2 UV2;
out vec2 fUV1;
void main() {
    // Remap UV2 from [0,1] to NDC [-1,1] so the model covers the render target.
    gl_Position = vec4(UV2 * 2.0 - 1.0, 0.0, 1.0);
    fUV1 = UV1;
}
Fragment shader:
#version 330 core
in vec2 fUV1;
uniform sampler2D TEX1;
out vec4 OUT;
void main() {
    // Sample texture map 1 at the interpolated UV-1 coordinates.
    OUT = texture(TEX1, fUV1);
}
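One practical addition (an assumption about the pipeline, not part of the answer above): texels just outside the UV-2 islands are never rasterized, so bilinear filtering later bleeds the clear color in at island seams. A simple dilation pass over the baked result fills those texels from a written neighbor; here alpha == 0 is assumed to mark unwritten texels.

#version 330 core
// Hypothetical dilation pass over the baked texture "baked";
// unwritten texels (alpha == 0) copy the first written neighbor found.
in vec2 uv;
uniform sampler2D baked;
out vec4 OUT;
void main() {
    vec4 c = texture(baked, uv);
    if (c.a > 0.0) { OUT = c; return; }
    vec2 px = 1.0 / vec2(textureSize(baked, 0));
    for (int y = -1; y <= 1; ++y) {
        for (int x = -1; x <= 1; ++x) {
            vec4 n = texture(baked, uv + vec2(x, y) * px);
            if (n.a > 0.0) { OUT = n; return; }
        }
    }
    OUT = c;  // no written neighbor; leave the texel empty
}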
I have a cube map set up, and what I want to do next is to mark/show which areas/segments of the textures on each face of the cube map are being rendered (depending on the camera).
For example, here is my basic vertex shader:
#version 400
in vec3 vp;
uniform mat4 P, V;
out vec3 texcoords;
void main () {
    texcoords = vp;
    gl_Position = P * V * vec4 (vp, 1.0);
}
and here is my basic fragment shader:
#version 400
in vec3 texcoords;
uniform samplerCube cube_texture;
out vec4 frag_colour;
void main () {
    frag_colour = texture (cube_texture, texcoords);
}
I now want to show the 6 unfolded textures of the cubemap and color-overlay the areas of each texture the camera is looking at.
For example, if my camera is viewing the intersection of the left and back sides of the cube, I want a separate display where I can see the 6 face textures unfolded, with the region of each texture the camera was viewing highlighted (half of the loaded texture for the left wall and half of the loaded texture for the back wall).
Can I get some pointers on how to implement this? Or does someone have similar code in OpenGL?
Thanks :)
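One way to approach it (a sketch under several assumptions, not tested): draw each face's texture as an unfolded 2D quad, rebuild the cubemap direction for every texel, and project that direction with the same P and V; texels whose direction lands inside the clip volume are the ones the camera sees. This assumes the camera sits at the cube's center (as for a skybox) and standard GL cubemap face conventions; uv and face are hypothetical inputs supplied by the quad pass.

#version 400
// Fragment shader for one unfolded face quad. "uv" runs 0..1 across the quad,
// "face" selects which cubemap face (0..5) this quad displays.
in vec2 uv;
uniform samplerCube cube_texture;
uniform mat4 P, V;     // the same camera matrices used to render the cube
uniform int face;
out vec4 frag_colour;

// Rebuild the cubemap sampling direction for a texel of face f
// (standard GL face orientations; +X,-X,+Y,-Y,+Z,-Z for f = 0..5).
vec3 faceDir (int f, vec2 t) {
    vec2 s = t * 2.0 - 1.0;
    if (f == 0) return vec3 ( 1.0, -s.y, -s.x);
    if (f == 1) return vec3 (-1.0, -s.y,  s.x);
    if (f == 2) return vec3 ( s.x,  1.0,  s.y);
    if (f == 3) return vec3 ( s.x, -1.0, -s.y);
    if (f == 4) return vec3 ( s.x, -s.y,  1.0);
    return      vec3 (-s.x, -s.y, -1.0);
}

void main () {
    vec3 dir = faceDir (face, uv);
    frag_colour = texture (cube_texture, dir);
    // Project the direction; w = 0 drops the view translation, so this tests
    // the direction against the camera's orientation and field of view only.
    vec4 clip = P * V * vec4 (dir, 0.0);
    if (clip.w > 0.0 && abs (clip.x) <= clip.w && abs (clip.y) <= clip.w)
        frag_colour = mix (frag_colour, vec4 (1.0, 0.0, 0.0, 1.0), 0.5);
}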
Recently I read an article about a sun shader (XNA Sun Shader) and decided to implement it using OpenGL ES 2.0, but I've run into a problem with the shader:
I have two textures; one of them is a fire gradient texture:
The other is a texture whose white parts must be colored by the first texture:
So I want a result like the one below (ignore that the result texture is rendered on a sphere mesh):
I really hope somebody knows how to implement this shader.
You can first sample the original texture; if its color is white, sample the gradient texture instead.
precision mediump float;    // required in an ES 2.0 fragment shader
uniform sampler2D Texture0; // original texture
uniform sampler2D Texture1; // gradient texture
varying vec2 texCoord;
void main(void)
{
    gl_FragColor = texture2D( Texture0, texCoord );
    // If the color in the original texture is white,
    // use the color from the gradient texture instead.
    if (gl_FragColor == vec4(1.0, 1.0, 1.0, 1.0)) {
        gl_FragColor = texture2D( Texture1, texCoord );
    }
}
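If exact equality proves fragile (filtering or compression rarely leaves pure 1.0 whites), a thresholded variant is a small change; the 0.9 cutoff below is an arbitrary assumption to tune:

precision mediump float;
uniform sampler2D Texture0; // original texture
uniform sampler2D Texture1; // gradient texture
varying vec2 texCoord;
void main(void)
{
    vec4 base = texture2D( Texture0, texCoord );
    vec4 grad = texture2D( Texture1, texCoord );
    // Treat texels whose channels are all above ~0.9 as "white".
    float whiteness = step( 0.9, min( base.r, min( base.g, base.b ) ) );
    gl_FragColor = mix( base, grad, whiteness );
}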
I have a working shadow map implementation for directional lights, where I construct the projection matrix using an orthographic projection. My question is: how do I visualize the shadow map? I have the following shader that I use for spot lights (which use a perspective projection), but when I apply it to a shadow map made with an orthographic projection, all I get is a completely black screen (even though the shadow mapping itself works when rendering the scene).
#version 430
layout(std140) uniform;
uniform UnifDepth
{
    mat4 mWVPMatrix;
    vec2 mScreenSize;
    float mZNear;
    float mZFar;
} UnifDepthPass;
layout (binding = 5) uniform sampler2D unifDepthTexture;
out vec4 fragColor;
void main()
{
    vec2 texcoord = gl_FragCoord.xy / UnifDepthPass.mScreenSize;
    float depthValue = texture(unifDepthTexture, texcoord).x;
    // Linearize the depth value (this remapping assumes a perspective projection).
    depthValue = (2.0 * UnifDepthPass.mZNear) / (UnifDepthPass.mZFar + UnifDepthPass.mZNear - depthValue * (UnifDepthPass.mZFar - UnifDepthPass.mZNear));
    fragColor = vec4(depthValue, depthValue, depthValue, 1.0);
}
You were trying to sample your depth texture with GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE. This is great for actually performing shadow mapping with a depth texture, but it makes trying to sample it using sampler2D undefined. Since you want the actual depth values stored in the depth texture, and not the result of a pass/fail depth test, you need to set GL_TEXTURE_COMPARE_MODE to GL_NONE first.
It is very inconvenient to set this state on a per-texture basis when you want to switch between visualizing the depth buffer and drawing shadows. I would suggest using a sampler object that has GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE (compatible with sampler2DShadow) for the shader that does shadow mapping and another sampler object that uses GL_NONE (compatible with sampler2D) for visualizing the depth buffer. That way all you have to do is swap out the sampler object bound to texture image unit 5 depending on how the shader actually uses the depth texture.
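For the visualization path, a sketch of the adjusted shader (assuming a second sampler object with GL_TEXTURE_COMPARE_MODE set to GL_NONE has been bound to unit 5 with glBindSampler): with an orthographic light projection the stored depth is already linear in eye-space z, so the perspective linearization from the spot-light shader is unnecessary and will distort the output.

#version 430
// Sketch: visualizing an orthographic shadow map. Assumes the depth texture
// on unit 5 is sampled through a sampler object whose
// GL_TEXTURE_COMPARE_MODE is GL_NONE.
layout(std140) uniform;
uniform UnifDepth
{
    mat4 mWVPMatrix;
    vec2 mScreenSize;
    float mZNear;
    float mZFar;
} UnifDepthPass;
layout (binding = 5) uniform sampler2D unifDepthTexture;
out vec4 fragColor;
void main()
{
    vec2 texcoord = gl_FragCoord.xy / UnifDepthPass.mScreenSize;
    float depthValue = texture(unifDepthTexture, texcoord).x;
    // Orthographic depth is already linear, so display the raw value directly.
    fragColor = vec4(vec3(depthValue), 1.0);
}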
I implemented a fairly simple shadow map. I have a simple OBJ-imported plane as the ground and a bunch of trees.
There is a weird shadow on the plane, which I think is the plane's self-shadow. I'm not sure what code to post; if it would help, please tell me and I'll add it.
First image: the camera view of the scene. The weird textured low-poly sphere is just a reference for the light position.
Second image: the depth texture stored in the framebuffer, which I use to calculate the shadow coords from the light's perspective. Since I can't post more than 2 links, I'll leave this one out.
Third image: the depth texture with a better view of the plane projecting the shadow, from a different light position above the whole scene.
Later edit: the second picture http://i41.tinypic.com/23h3wqf.jpg (the depth texture of the first picture).
I tried some fixes: adding glCullFace(GL_BACK) before drawing the ground in the first pass removes it from the depth texture, but the shadow still appears in the final render (like in the first picture, on the back part of the ground). I also tried adding cull-face calls in the second pass, and all combinations of front and back facing, but the shadow on the ground still shows. Could it be because of the values in the orthographic projection?
Shadow fragment shader:
#version 330 core
layout(location = 0) out vec3 color;
in vec2 texcoord;
in vec4 ShadowCoord;
uniform sampler2D textura1;
uniform sampler2D textura2;
uniform sampler2D textura_depth;
uniform int has_alpha;
void main(){
    vec3 tex1 = texture(textura1, texcoord).xyz;
    vec3 tex2 = texture(textura2, texcoord).xyz;
    // Alpha-test foliage: discard fragments where the mask texture is near black.
    if (has_alpha > 0)
        if ((tex2.r < 0.1) && (tex2.g < 0.1) && (tex2.b < 0.1))
            discard;
    // Z value of the depth texture from pass 1.
    float hartaDepth = texture(textura_depth, ShadowCoord.xy / ShadowCoord.w).z;
    // Constant depth bias (0.005) against self-shadowing ("shadow acne").
    float shadowValue = 1.0;
    if (hartaDepth < ShadowCoord.z - 0.005)
        shadowValue = 0.5;
    color = shadowValue * tex1;
}
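If the marks on the ground are shadow acne rather than true occlusion, a slope-scaled bias often works better than the constant 0.005. A sketch of that variant (not the poster's code; normal_ws and lightDir_ws are assumed extra inputs carrying the world-space normal and light direction):

#version 330 core
layout(location = 0) out vec3 color;
in vec2 texcoord;
in vec4 ShadowCoord;
in vec3 normal_ws;               // assumed varying: world-space surface normal
uniform vec3 lightDir_ws;        // assumed uniform: world-space light direction
uniform sampler2D textura1;
uniform sampler2D textura_depth;
void main(){
    vec3 tex1 = texture(textura1, texcoord).xyz;
    float hartaDepth = texture(textura_depth, ShadowCoord.xy / ShadowCoord.w).z;
    // Bias grows as the surface tilts away from the light; surfaces lit at
    // grazing angles need a larger offset to avoid shadowing themselves.
    float ndotl = clamp(dot(normalize(normal_ws), normalize(-lightDir_ws)), 0.0, 1.0);
    float bias = clamp(0.005 * tan(acos(ndotl)), 0.0, 0.01);
    float shadowValue = (hartaDepth < ShadowCoord.z - bias) ? 0.5 : 1.0;
    color = shadowValue * tex1;
}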