How to transfer colors from one UV unfolding to another UV unfolding programmatically?

As seen in the figure, assume a model has two UV unwraps, UV-1 and UV-2. I ask an artist to paint the model based on UV-1, which produces texture map 1. How can I transfer the colors from UV-1 to UV-2 programmatically (e.g., in Python)? One method I know of is baking texture map 1 into vertex colors and then rendering those vertex colors into UV-2, but that loses color detail wherever the texture is finer than the mesh. How can I do better?

Render your model into texture map 2, using the UV-2 coordinates as the vertex positions and passing the UV-1 coordinates through to be interpolated across the triangles. In the fragment shader, sample texture map 1 at the interpolated UV-1 coordinates. This way you're limited only by the resolutions of the two texture maps, not by the resolution of the model.
EDIT: Vertex shader:
#version 330
in vec2 UV1;
in vec2 UV2;
out vec2 fUV1;
void main() {
    // UV2 is in [0,1]; remap to clip space [-1,1] so the unwrap fills the render target
    gl_Position = vec4(UV2 * 2.0 - 1.0, 0.0, 1.0);
    fUV1 = UV1;
}
Fragment shader:
#version 330
in vec2 fUV1;
uniform sampler2D TEX1;
out vec4 OUT;
void main() {
    // look up the artist-painted texture at this fragment's UV-1 location
    OUT = texture(TEX1, fUV1);
}

Related

"Scan Through" a large texture glsl

I've encoded some data into a 44487x1 luminance texture.
Now I would like to "scrub" this data across my shader, so that a slice of the texture equal in width to the pixel width of my canvas is displayed. So if the canvas is 500px wide, then 500 pixels from the texture will be shown. The texture is then translated by some offset value so that different values within the texture can be displayed.
//vertex shader
export const vs = GLSL`
#version 300 es
in vec4 position;
void main() {
    gl_Position = position;
}
`;
//fragment shader
#version 300 es
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_texture_7; //data texture
out vec4 fragColor;
void main(){
    //data texture dimensions
    vec2 dims = vec2(44487., 1.0);
    //amount by which to translate the data texture
    vec2 offset = vec2(u_time*.5, 0.);
    //canvas coords
    vec2 uv = gl_FragCoord.xy/u_resolution.xy;
    //texture aspect ratio, w/h
    float textureAspect = 44487. / 1.;
    vec3 col = vec3(0.);
    //texture width is 44487x larger than uv, I guess?
    vec2 textCoords = vec2((uv.x/textureAspect)+offset.x, uv.y);
    //get texture values
    vec3 text = texture(u_texture_7, textCoords).rgb;
    //output
    fragColor = vec4(text, 1.);
}
However, this doesn't seem to work. All I get is a black screen. Is using a wide texture like this a good way to go about getting the array values into the shader? The texture is very small in size, but I'm wondering if the dimensions might still be causing an issue.
Alternatively, instead of providing one large texture, could I provide a smaller texture but update the texture uniform values via JS?
After trying several different approaches, the workaround I ended up using was uploading the 44487x1 image to a separate 2D canvas, performing the transformations of the texture in the 2D canvas rather than in the shader, and then sending the canvas to the shader as a texture.
Might not be the most efficient solution, but it avoids having to mess around with the texture too much in the shader.
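For comparison, the windowing described in the question can also be done directly in the fragment shader. Below is a minimal sketch, not from the original thread; it assumes the texture actually fits within the implementation's MAX_TEXTURE_SIZE (a 44487-wide texture exceeds the 16384-texel cap common on desktop GPUs, which by itself can produce a black screen), and it assumes a hypothetical u_offset uniform holding the scroll offset in texels:
#version 300 es
precision highp float;
uniform float u_offset;        // scroll offset in texels (hypothetical uniform)
uniform sampler2D u_texture_7; // the 44487x1 data texture
out vec4 fragColor;
void main(){
    // map each canvas pixel to one texel of the data row, shifted by the offset
    float texel = (gl_FragCoord.x + u_offset) / 44487.0;
    vec3 v = texture(u_texture_7, vec2(fract(texel), 0.5)).rgb;
    fragColor = vec4(v, 1.0);
}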

Displaying separately the segment of the individual textures of a cube-map being rendered in OpenGL

I have a cube map set up, and what I want to do next is to mark/show separately which areas/segments of the textures on each face of the cube map are being rendered (depending on the camera).
For example, here is my basic vertex shader:
#version 400
in vec3 vp;
uniform mat4 P, V;
out vec3 texcoords;
void main () {
    texcoords = vp;
    gl_Position = P * V * vec4 (vp, 1.0);
}
and here is my basic fragment shader:
#version 400
in vec3 texcoords;
uniform samplerCube cube_texture;
out vec4 frag_colour;
void main () {
    frag_colour = texture (cube_texture, texcoords);
}
I now want to show the 6 textures of the cubemap unfolded, with a color overlay on the areas of each texture that the camera was viewing.
For example, if my camera is viewing the intersection of the left and back sides of the cube, I want a separate display where I can see the 6 face textures unfolded and the viewed area of each texture highlighted (half of the loaded texture for the left wall and half of the loaded texture for the back wall).
Can I get some heads up on how to implement this? Or does someone have similar code in OpenGL?
Thanks :)
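One possible approach, sketched here rather than taken from an actual answer: render a second pass that draws the six faces unfolded, sample the cubemap per texel, and tint any texel whose direction falls inside the main camera's frustum. The fragment shader below assumes texcoords carries the cubemap direction of each unfolded texel and that P and V are the main camera's matrices:
#version 400
in vec3 texcoords;
uniform samplerCube cube_texture;
uniform mat4 P, V;
out vec4 frag_colour;
void main () {
    // project the direction only (w = 0 drops the camera translation)
    vec4 clip = P * V * vec4 (texcoords, 0.0);
    vec3 ndc = clip.xyz / clip.w;
    // visible if it lies in front of the camera and inside the NDC box
    bool seen = clip.w > 0.0 && all (lessThan (abs (ndc.xy), vec2 (1.0)));
    vec4 base = texture (cube_texture, texcoords);
    frag_colour = seen ? mix (base, vec4 (1.0, 0.0, 0.0, 1.0), 0.5) : base;
}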

deriving screen-space coordinates in glsl shader

I'm trying to write a simple application for baking a texture from a paint buffer. Right now I have a mesh, a mesh texture, and a paint texture. When I render the mesh, the mesh shader looks up the mesh texture and then, based on the screen position of the fragment, looks up the paint texture value. I then composite the paint lookup over the mesh lookup.
Here's a screenshot with nothing in the paint buffer and just the mesh texture.
Here's a screenshot with something in the paint buffer composited over the mesh texture.
So that all works great, but I'd like to bake the paint texture into my mesh texture. Right now I send the mesh's UVs down as the position, with an ortho projection set to (0,1)x(0,1), so I'm actually doing everything in texture space. The mesh texture lookup also uses that position. The problem I'm having, though, is computing the screen-space position of the fragment under the original projection, to figure out where to sample the paint texture. I'm passing the bake shader my original camera projection matrices and the object transform so the vertex shader can hand the fragment shader the normalized-device position of the fragment (again, under the original camera projection) for the lookup, but it's coming out wrong.
Here's what the bake texture is generating if I render half the output using the paint texture and screen position I've derived.
I would expect that block line to be right down the middle.
Am I calculating the screen position incorrectly in my vertex shader? Or am I going about this in a fundamentally wrong way?
// vertex shader
uniform mat4 orthoPV;
uniform mat4 cameraPV;
uniform mat4 objToWorld;
varying vec2 uv;
varying vec2 screenPos;
void main() {
    uv = gl_Vertex.xy;
    screenPos = 0.5 * (vec2(1,1) + (cameraPV * objToWorld * vec4(gl_MultiTexCoord0.xyz,1)).xy);
    screenPos = gl_MultiTexCoord0.xy;
    gl_Position = orthoPV * gl_Vertex;
    gl_FrontColor = vec4(1,0,0,1);
}
// fragment shader
uniform sampler2D meshTexture;
uniform sampler2D paintTexture;
varying vec2 uv;
varying vec2 screenPos;
void main() {
    gl_FragColor = texture2D(meshTexture, uv);
    if (screenPos.x > .5)
        gl_FragColor = texture2D(paintTexture, uv);
}
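For what it's worth, a likely culprit in the vertex shader above, sketched as a fix rather than taken from an answer: the clip-space result of cameraPV * objToWorld * ... needs a perspective divide by its w component before the remap to [0,1], and the divide belongs in the fragment shader so that the clip-space vec4 is what gets interpolated (note also that the second assignment to screenPos overwrites the derivation entirely). Assuming, as in the original, that gl_MultiTexCoord0 carries the object-space position:
// vertex shader (sketch)
uniform mat4 orthoPV;
uniform mat4 cameraPV;
uniform mat4 objToWorld;
varying vec2 uv;
varying vec4 camClip; // full clip-space position under the original camera
void main() {
    uv = gl_Vertex.xy;
    camClip = cameraPV * objToWorld * vec4(gl_MultiTexCoord0.xyz, 1.0);
    gl_Position = orthoPV * gl_Vertex;
}
// fragment shader (sketch)
uniform sampler2D meshTexture;
uniform sampler2D paintTexture;
varying vec2 uv;
varying vec4 camClip;
void main() {
    // perspective divide per fragment, then remap NDC [-1,1] to [0,1]
    vec2 screenPos = 0.5 * (camClip.xy / camClip.w + vec2(1.0));
    gl_FragColor = texture2D(meshTexture, uv);
    if (screenPos.x > 0.5)
        gl_FragColor = texture2D(paintTexture, screenPos);
}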

Rendering orthographic shadowmap to screen?

I have a working shadow map implementation for directional lights, where I construct the projection matrix using an orthographic projection. My question is: how do I visualize the shadow map? I have the following shader I use for spot lights (which use a perspective projection), but when I try to apply it to a shadow map made with an orthographic projection, all I get is a completely black screen (even though the shadow mapping itself works when rendering the scene).
#version 430
layout(std140) uniform;
uniform UnifDepth
{
    mat4 mWVPMatrix;
    vec2 mScreenSize;
    float mZNear;
    float mZFar;
} UnifDepthPass;
layout (binding = 5) uniform sampler2D unifDepthTexture;
out vec4 fragColor;
void main()
{
    vec2 texcoord = gl_FragCoord.xy / UnifDepthPass.mScreenSize;
    float depthValue = texture(unifDepthTexture, texcoord).x;
    // linearize the sampled depth using the perspective near/far formula
    depthValue = (2.0 * UnifDepthPass.mZNear) / (UnifDepthPass.mZFar + UnifDepthPass.mZNear - depthValue * (UnifDepthPass.mZFar - UnifDepthPass.mZNear));
    fragColor = vec4(depthValue, depthValue, depthValue, 1.0);
}
You were trying to sample your depth texture with GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE. This is great for actually performing shadow mapping with a depth texture, but it makes trying to sample it using sampler2D undefined. Since you want the actual depth values stored in the depth texture, and not the result of a pass/fail depth test, you need to set GL_TEXTURE_COMPARE_MODE to GL_NONE first.
It is very inconvenient to set this state on a per-texture basis when you want to switch between visualizing the depth buffer and drawing shadows. I would suggest using a sampler object that has GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE (compatible with sampler2DShadow) for the shader that does shadow mapping and another sampler object that uses GL_NONE (compatible with sampler2D) for visualizing the depth buffer. That way all you have to do is swap out the sampler object bound to texture image unit 5 depending on how the shader actually uses the depth texture.
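For reference, the shadow-mapping counterpart of that split would sample through sampler2DShadow. A minimal sketch, not from the original answer; the unifShadowCoord input is hypothetical:
#version 430
layout (binding = 5) uniform sampler2DShadow unifDepthTexture;
in vec3 unifShadowCoord; // hypothetical projected light-space coordinate
out vec4 fragColor;
void main()
{
    // with GL_COMPARE_R_TO_TEXTURE this returns the pass/fail comparison
    // result (0.0 or 1.0, possibly filtered), not the stored depth value
    float lit = texture(unifDepthTexture, unifShadowCoord);
    fragColor = vec4(vec3(lit), 1.0);
}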

Using gl_FragCoord to create a hole in a quad

I am learning WebGL, and would like to do the following:
Create a 3D quad with a square hole in it using a fragment shader.
It looks like I need to set gl_FragColor based on gl_FragCoord appropriately.
So should I:
a) Convert gl_FragCoord from window coordinates to model coordinates, do the appropriate geometry check, and set color.
OR
b) Somehow pass the hole information from the vertex shader to the fragment shader. Maybe use a texture coordinate. I am not clear on this part.
I am fuzzy about implementing either of the above, so I'd appreciate some coding hints on either.
My background is that of an OpenGL old timer who has not kept up with the new shading language paradigm, and is now trying to catch up...
Edit (27/03/2011):
I have been able to successfully implement the above based on the tex coord hint. I've written up this example at the link below:
Quads with holes - example
The easiest way would be with texture coords. Simply supply the coords as an extra attribute array, then pass them through to the fragment shader using a varying. The shaders should contain something like:
vertex:
attribute vec3 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main(){
    vTexCoord = aTexCoord;
    .......
}
fragment:
varying vec2 vTexCoord;
void main(){
    if(vTexCoord.x > {lower x limit} && vTexCoord.x < {upper x limit} &&
       vTexCoord.y > {lower y limit} && vTexCoord.y < {upper y limit}){
        discard; // this tells the GPU to discard this fragment entirely
    }
    .....
}
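A concrete version of that fragment shader, with illustrative hole bounds of 0.25 to 0.75 and a flat fill color (both are placeholders, not values from the original answer):
precision mediump float;
varying vec2 vTexCoord;
void main(){
    // discard fragments inside the square hole
    if(vTexCoord.x > 0.25 && vTexCoord.x < 0.75 &&
       vTexCoord.y > 0.25 && vTexCoord.y < 0.75){
        discard;
    }
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // flat red for the visible part of the quad
}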