GLSL Shader to convert six textures to Equirectangular projection - opengl

I want to create an equirectangular projection from six quadratic textures, similar to converting a cubic projection image to an equirectangular image, but with the separate faces as textures instead of one texture in cubic projection.
I'd like to do this on the graphics card for performance reasons, and therefore want to use a GLSL Shader.
I've found a Shader that converts a cubic texture to an equirectangular one: link

Step 1: Copy your six textures into a cube map texture. You can do this by binding the textures to FBOs and using glBlitFramebuffer().
Step 2: Run the following fragment shader. You will need to vary the Coord attribute from (-1,-1) to (+1,+1) over the quad.
#version 330
// X from -1..+1, Y from -1..+1
in vec2 Coord;
out vec4 Color;
uniform samplerCube Texture;
void main() {
// Convert to (lon, lat) angle
vec2 a = Coord * vec2(3.14159265, 1.57079633);
// Convert to cartesian coordinates
vec2 c = cos(a), s = sin(a);
Color = texture(Texture, vec3(vec2(s.x, c.x) * c.y, s.y));
}
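The angle-to-direction math in the fragment shader can be checked on the CPU. A minimal Python sketch of the same mapping (function name is illustrative):

```python
import math

def equirect_dir(x, y):
    """Map quad coords in [-1,1]^2 to a unit direction vector,
    mirroring the fragment shader's (lon, lat) -> cartesian math."""
    lon = x * math.pi          # longitude: -pi..pi
    lat = y * math.pi / 2.0    # latitude: -pi/2..pi/2
    cx, cy = math.cos(lon), math.cos(lat)
    sx, sy = math.sin(lon), math.sin(lat)
    return (sx * cy, cx * cy, sy)

# The center of the image looks along +Y with this axis convention,
# and the result is always a unit vector, as a cube-map lookup expects.
print(equirect_dir(0.0, 0.0))
```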

Related

How to transfer colors from one UV unfolding to another UV unfolding programmatically?

As seen from the figure, assume a model has two UV unfoldings, i.e., UV-1 and UV-2. I ask an artist to paint the model based on UV-1 and get texture map 1. How can I transfer colors from UV-1 to UV-2 programmatically (e.g., in Python)? One method I know is baking texture map 1 into vertex colors and then rendering the vertex colors to UV-2, but that loses color detail. How else can I do it?
Render your model on Texture Map 2 using UV-2 coordinates for vertex positions and UV-1 coordinates interpolated across the triangles. In the fragment shader use the interpolated UV-1 coordinates to sample Texture Map 1. This way you're limited only by the resolution of the texture maps, not by the resolution of the model.
EDIT: Vertex shader:
in vec2 UV1;
in vec2 UV2;
out vec2 fUV1;
void main() {
// Map 0..1 UV coordinates to -1..1 clip space
gl_Position = vec4(UV2 * 2.0 - 1.0, 0.0, 1.0);
fUV1 = UV1;
}
Fragment shader:
in vec2 fUV1;
uniform sampler2D TEX1;
out vec4 OUT;
void main() {
OUT = texture(TEX1, fUV1);
}
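The same transfer can be written as a CPU reference for a single triangle: interpolate the UV-1 coordinate with barycentric weights computed in UV-2 space, then sample map 1 there. A sketch with hypothetical names (`transfer_pixel`, `sample_tex1`):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return u, v, 1.0 - u - v

def transfer_pixel(p_uv2, tri_uv2, tri_uv1, sample_tex1):
    """For a texel at p_uv2 on map 2, interpolate the matching UV-1
    coordinate across the triangle and sample texture map 1 there."""
    u, v, w = barycentric(p_uv2, *tri_uv2)
    uv1 = tuple(u * a + v * b + w * c for a, b, c in zip(*tri_uv1))
    return sample_tex1(uv1)
```

This is exactly what the rasterizer does for you in the shader version, which is why the GPU approach is preferable for full textures.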

Draw anti-aliased thick tessellated curves

So I am able to draw curved lines using tessellation shaders, but the lines come out very thin and jagged (aliased). I am confused about how to take these new points (isolines) from the tessellation evaluation shader through to the fragment shader and make the lines thicker and anti-aliased.
I know there are multiple ways, like using a geometry shader or even the vertex shader to compute adjacent vertices and build a polyline. But my goal isn't creating a polyline, just constant-thickness, anti-aliased lines (edges), for which I think the fragment shader is enough.
Kindly advise what would be the best and fastest way to achieve this, and how I can pass the isoline data from the tessellation shader to the fragment shader and manipulate it there. Any small code showing the transfer of data from the TES to the FS would be really helpful. And pardon me: since I am a beginner, many of my assumptions above might be incorrect.
Vertex Shader: This is a simple pass through shader so I am not adding here.
Tessellation Eval Shader:
#version 450
layout( isolines ) in;
uniform mat4 MVP;
void main()
{
vec3 p0 = gl_in[0].gl_Position.xyz;
vec3 p1 = gl_in[1].gl_Position.xyz;
vec3 p2 = gl_in[2].gl_Position.xyz;
float t = gl_TessCoord.x;
float one_minus_t = (1.0 - t);
float one_minus_t_square = one_minus_t * one_minus_t;
float t_square = t * t;
float two_times_one_minus_t = 2 * one_minus_t;
// Bezier interpolation
vec3 p = one_minus_t_square * p0 + two_times_one_minus_t*t*p1 + t_square * p2;
gl_Position = vec4(p, 1.0);
}
Fragment Shader : Basic shader which assigns uniform color to gl_FragColor
Output:
If you want to draw a thick smooth line, you have 3 options:
Draw a highly tessellated polygon with many vertices.
Draw a quad over the entire viewport and discard any fragments that are not on the line (in the fragment shader).
Mix options 1 and 2. Draw a rough polygon that is larger than the line and encloses the line. Discard fragments at the edges and corners to smooth out the line (in the fragment shader).
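The discard-and-feather idea behind options 2 and 3 can be sketched on the CPU: evaluate the same quadratic Bézier the TES uses, estimate the fragment's distance to the curve, and map that distance to a coverage value. This is only an illustration (the distance is approximated by dense sampling; a real shader would use a closed-form or iterative distance estimate):

```python
import math

def bezier2(p0, p1, p2, t):
    """Evaluate a quadratic Bezier at t (same interpolation as the TES)."""
    u = 1.0 - t
    return tuple(u * u * a + 2.0 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

def line_coverage(frag, p0, p1, p2, half_width, feather=1.0, steps=64):
    """Approximate fragment coverage for a thick anti-aliased curve:
    distance to the curve is estimated by sampling it densely, then
    mapped to 0..1 alpha with a linear falloff over `feather` pixels."""
    d = min(math.dist(frag, bezier2(p0, p1, p2, i / steps))
            for i in range(steps + 1))
    # Inside the line: alpha 1; in the feather band: fade; outside: 0 (discard).
    return max(0.0, min(1.0, (half_width + feather - d) / feather))
```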

OpenGL subimages using pixel coordinates

I've worked through a couple of tutorials in the breakout series on learnopengl.com, so I have a very simple 2D renderer. I want to add a subimage feature to it, though, where I can specify a vec4 for a kind of "source rectangle", so if the vec4 was (10, 10, 32, 32), it would only render a rectangle at 10, 10 with a width and height of 32, kind of like how the SDL renderer works.
The way the renderer is set up, there is a quad VAO which all the sprites use, and it contains the texture coordinates. Initially, I thought I could use an array of VAOs, one per sprite, each with different texture coordinates, but I'd like to be able to change the source rectangle before the sprite gets drawn, to make things like animation easier... My second idea was to pass a separate uniform vec4 into the fragment shader for the source rectangle, but how do I only render that section in pixel coordinates?
Use the primitive type GL_TRIANGLE_STRIP or GL_TRIANGLE_FAN to render a quad. Use an integral one-dimensional vertex attribute instead of floating-point vertex coordinates. The vertex coordinates are the indices of the quad corners. For a GL_TRIANGLE_FAN they are:
vertex 1: 0
vertex 2: 1
vertex 3: 2
vertex 4: 3
Set the rectangle definition (10, 10, 32, 32) in the vertex shader using a uniform variable of type vec4. With this information, you can calculate the vertex coordinate in the vertex shader:
in int cornerIndex;
uniform vec4 rectangle;
void main()
{
vec2 vertexArray[4] =
vec2[4](rectangle.xy, rectangle.zy, rectangle.zw, rectangle.xw);
vec2 vertex = vertexArray[cornerIndex];
// [...]
}
The Vertex Shader provides the built-in input gl_VertexID, which specifies the index of the vertex currently being processed. This variable could be used instead of cornerIndex in this case. Note that it is not necessary for the vertex shader to have any explicit input.
I ended up doing this in the vertex shader. I passed in the vec4 as a uniform to the vertex shader, as well as the size of the image, and used the below calculation:
// convert pixel coordinates to vertex coordinates
float widthPixel = 1.0f / u_imageSize.x;
float heightPixel = 1.0f / u_imageSize.y;
float startX = u_sourceRect.x, startY = u_sourceRect.y, width = u_sourceRect.z, height = u_sourceRect.w;
v_texCoords = vec2(widthPixel * startX + width * widthPixel * texPos.x, heightPixel * startY + height * heightPixel * texPos.y);
v_texCoords is a varying that the fragment shader uses to map the texture.
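The same pixel-to-UV conversion can be written as a plain function. A sketch (names mirror the uniforms above but are illustrative):

```python
def source_rect_to_uv(rect, image_size, tex_pos):
    """Convert a pixel-space source rectangle (x, y, w, h) plus a
    quad corner position in 0..1 into normalized texture coordinates,
    mirroring the vertex-shader math above."""
    x, y, w, h = rect
    iw, ih = image_size
    return ((x + w * tex_pos[0]) / iw,
            (y + h * tex_pos[1]) / ih)

# A (10, 10, 32, 32) source rect in a 256x256 atlas: the quad's
# corners map to the rect's corners in UV space.
print(source_rect_to_uv((10, 10, 32, 32), (256, 256), (0.0, 0.0)))
print(source_rect_to_uv((10, 10, 32, 32), (256, 256), (1.0, 1.0)))
```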

OpenGL shader to shade each face similar to MeshLab's visualizer

I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll realize that if a face is facing the camera, it is completely lit and as you rotate the model, the lighting changes as the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it is somehow assigning colors per face in the shader. If the angle between the face normal and camera is zero, then the face is fully lit (according to the color of the face), otherwise it is lit proportional to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBO's. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on vertices. A quick search revealed questions like this. But I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
in vec3 in_position; // The vertex position
in vec3 in_normal; // The computed vertex normal
in vec4 in_color; // The vertex color
out vec4 color; // The vertex color (pass-through)
void main(void)
{
gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);
// Compute the vertex's normal in camera space
vec3 normal_cameraspace = normalize(( view_matrix * model_matrix * vec4(in_normal,0)).xyz);
// Vector from the vertex (in camera space) to the camera (which is at the origin)
vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);
// Compute the angle between the two vectors
float cosTheta = clamp( dot( normal_cameraspace, cameraVector ), 0,1 );
// The coefficient will create a nice looking shining effect.
// Also, we shouldn't modify the alpha channel value.
color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core
in vec4 color;
out vec4 out_frag_color;
void main(void)
{
out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and then duplicate vertex position data for faces which don't have the same normal. For example, a cube would have 24 vertexes instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal.
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach, because it is simple. You can simply calculate the normals ahead of time in your program, and then use them to calculate the face colors in your vertex shader.
This is simple flat shading. Instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
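The cross-product normal and Lambertian diffuse term from the two snippets above can be checked on the CPU. A small Python sketch (function names are illustrative; the two edge vectors stand in for what dFdx/dFdy approximate per fragment):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def flat_diffuse(edge1, edge2, light_dir, diff_color):
    """Face normal from two triangle edge vectors, then Lambertian
    diffuse: max(dot(N, L), 0) scaled by the diffuse color."""
    norm = normalize(cross(edge1, edge2))
    diff = max(sum(n * l for n, l in zip(norm, normalize(light_dir))), 0.0)
    return tuple(diff * c for c in diff_color)
```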

deriving screen-space coordinates in glsl shader

I'm trying to write a simple application for baking a texture from a paint buffer. Right now I have a mesh, a mesh texture, and a paint texture. When I render the mesh, the mesh shader will lookup the mesh texture and then based on the screen position of the fragment lookup the paint texture value. I then composite the paint lookup with the mesh lookup.
Here's a screenshot with nothing in the paint buffer and just the mesh texture.
Here's a screenshot with something in the paint buffer composited over the mesh texture.
So that all works great, but I'd like to bake the paint texture into my mesh texture. Right now I send the mesh's UVs down as the position with an ortho set to (0,1)x(0,1), so I'm actually doing everything in texture space. The mesh texture lookup also uses the position. The problem I'm having, though, is computing the screen-space position of the fragment under the original projection, to figure out where to sample the paint texture. I'm passing the bake shader my original camera projection matrices and the object position, so the fragment shader receives the device-normalized position of the fragment (again, from my original camera projection) to do the lookup, but it's coming out funny.
Here's what the bake texture is generating if I render half the output using the paint texture and screen position I've derived.
I would expect that block line to be right down the middle.
Am I calculating the screen position incorrectly in my vertex shader? Or am I going about this in a fundamentally wrong way?
// vertex shader
uniform mat4 orthoPV;
uniform mat4 cameraPV;
uniform mat4 objToWorld;
varying vec2 uv;
varying vec2 screenPos;
void main() {
uv = gl_Vertex.xy;
screenPos = 0.5 * (vec2(1,1) + (cameraPV * objToWorld * vec4(gl_MultiTexCoord0.xyz,1)).xy);
screenPos = gl_MultiTexCoord0.xy;
gl_Position = orthoPV * gl_Vertex;
gl_FrontColor = vec4(1,0,0,1);
}
// fragment shader
uniform sampler2D meshTexture;
uniform sampler2D paintTexture;
varying vec2 uv;
varying vec2 screenPos;
void main() {
gl_FragColor = texture2D(meshTexture, uv);
if (screenPos.x > .5)
gl_FragColor = texture2D(paintTexture, uv);
}
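As a side note on the math involved: going from clip space to 0..1 texture space normally includes a perspective divide by w before the -1..1 to 0..1 remap, which the vertex shader above skips. A small Python sketch of the full chain (function name is illustrative):

```python
def clip_to_texcoord(clip):
    """Convert a clip-space position (x, y, z, w) to 0..1 texture
    coordinates: perspective divide to NDC, then remap -1..1 -> 0..1.
    Skipping the divide by w only works for orthographic projections,
    where w == 1."""
    x, y, z, w = clip
    ndc_x, ndc_y = x / w, y / w
    return (0.5 * (ndc_x + 1.0), 0.5 * (ndc_y + 1.0))

# The clip-space origin lands at the center of the screen.
print(clip_to_texcoord((0.0, 0.0, 0.0, 1.0)))
```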