Shader replaces texture of all meshes - OpenGL

Image:
http://i.imgur.com/rtlnKUO.png
Hi, I have 3 objects/modelinstances that are being rendered in my scene:
Model A
-Rectangle
-Blue box texture
Model B
-Rectangle
-Red box texture
Model C
-Square
-smiley face texture
But my code draws all of them with the "latest" drawn mesh's texture, which is why all three of them appear with the same texture in the picture.
My fragment shader code:
#ifdef GL_ES
precision mediump float;
#endif

varying vec2 v_texCoord0;
uniform sampler2D sampler;

void main() {
    gl_FragColor = vec4(v_texCoord0, 0.0, 1.0); // writes UVs as a colour; overwritten by the next lines
    vec4 tex = texture2D(sampler, v_texCoord0);
    gl_FragColor = tex;
}
So here are my questions with shaders:
What causes this?
Do I need one shader instance per ModelInstance?
Can a single shader render multiple meshes, where each mesh has a different texture?
If so, how do you get past the 32-texture limit for a shader?
Edit:
I just found out that whenever I bind a texture, all of the models use that texture. So the most recently bound texture is the one every model ends up with.
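For reference, the usual pattern is to keep a single shader and simply rebind the texture between draw calls, so each mesh is drawn with its own texture. A rough desktop-GL sketch of that idea; the texture IDs, VAOs and index counts below are placeholder names, not taken from the code above:

// Hypothetical sketch: bind each model's own texture right before drawing it.
glUseProgram(shaderProgram);
glUniform1i(glGetUniformLocation(shaderProgram, "sampler"), 0); // sampler reads texture unit 0
glActiveTexture(GL_TEXTURE0);

glBindTexture(GL_TEXTURE_2D, blueBoxTexture);   // Model A's texture
glBindVertexArray(modelA_vao);
glDrawElements(GL_TRIANGLES, modelA_indexCount, GL_UNSIGNED_INT, 0);

glBindTexture(GL_TEXTURE_2D, redBoxTexture);    // Model B's texture
glBindVertexArray(modelB_vao);
glDrawElements(GL_TRIANGLES, modelB_indexCount, GL_UNSIGNED_INT, 0);

glBindTexture(GL_TEXTURE_2D, smileyTexture);    // Model C's texture
glBindVertexArray(modelC_vao);
glDrawElements(GL_TRIANGLES, modelC_indexCount, GL_UNSIGNED_INT, 0);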

Related

How to transfer colors from one UV unfolding to another UV unfolding programmatically?

As seen from the figure, assume a model has two UV unfoldings, i.e., UV-1 and UV-2. I then ask an artist to paint the model based on UV-1 and get Texture Map 1. How can I transfer the colors from UV-1 to UV-2 programmatically (e.g., in Python)? One method I know is mapping Texture Map 1 into vertex colors and then rendering the vertex colors to UV-2, but this method loses some color detail. So how can I do it?
Render your model on Texture Map 2 using UV-2 coordinates for vertex positions and UV-1 coordinates interpolated across the triangles. In the fragment shader use the interpolated UV-1 coordinates to sample Texture Map 1. This way you're limited only by the resolution of the texture maps, not by the resolution of the model.
EDIT: Vertex shader:
#version 330 core

in vec2 UV1;
in vec2 UV2;
out vec2 fUV1;

void main() {
    // UV2 is in [0,1]; remap it to [-1,1] clip space so it covers the whole render target.
    gl_Position = vec4(UV2 * 2.0 - 1.0, 0.0, 1.0);
    fUV1 = UV1;
}
Fragment shader:
#version 330 core

in vec2 fUV1;
uniform sampler2D TEX1;
out vec4 OUT;

void main() {
    OUT = texture(TEX1, fUV1);
}
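On the host side this is just an off-screen render at the resolution you want for Texture Map 2. A rough sketch, assuming the shader pair above is linked into bakeProgram; the resolution, texture handles and VAO name are illustrative placeholders:

// Hypothetical sketch: bake Texture Map 1 into Texture Map 2's layout.
GLuint fbo, target;
glGenTextures(1, &target);
glBindTexture(GL_TEXTURE_2D, target);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 2048, 2048, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, target, 0);

glViewport(0, 0, 2048, 2048);
glUseProgram(bakeProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureMap1);                      // sampled via the interpolated UV-1
glUniform1i(glGetUniformLocation(bakeProgram, "TEX1"), 0);

// Draw the mesh once: UV2 drives gl_Position, UV1 is interpolated for sampling.
glBindVertexArray(meshVaoWithUV1andUV2);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);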

Render from fbo texture to another within same fbo

I'm trying to set up deferred rendering, and have successfully managed to output data to the various g-buffer textures (position, normal, albedo, specular).
I am now attempting to sample from the albedo texture into a 5th colour attachment in the same FBO (for the purposes of further potential post-process sampling), by rendering a full-screen quad with simple texture coordinates.
I have checked via Nsight that the vertex/texcoord data is good, and confirmed that the shader can "see" the texture to sample from, but all I see in the target texture is the clear colour when I examine it in the Nsight debugger.
At the moment, the shader is basically just a simple pass-through shader:
vertex shader:
#version 430

in vec3 MSVertex;
in vec2 MSTexCoord;

out xferBlock
{
    vec3 VSVertex;
    vec2 VSTexCoord;
} outdata;

void main()
{
    outdata.VSVertex = MSVertex;
    outdata.VSTexCoord = MSTexCoord;
    gl_Position = vec4(MSVertex, 1.0);
}
fragment shader:
#version 430

layout (location = 0) uniform sampler2D colourMap;
layout (location = 0) out vec4 colour;

in xferBlock
{
    vec3 VSVertex;
    vec2 VSTexCoord;
} indata;

void main()
{
    colour = texture(colourMap, indata.VSTexCoord).rgba;
}
As you can see, there is nothing fancy about the shader.
The gl code is as follows:
//bind frame buffer for writing to texture #5
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glDrawBuffer(GL_COLOR_ATTACHMENT4); //5 textures total
glClear(GL_COLOR_BUFFER_BIT);
//activate shader
glUseProgram(second_stage_program);
//bind texture
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, fbo_buffers[2]); //third attachment: albedo
//bind and draw fs quad (two array buffers: vertices, and texture coordinates)
glBindVertexArray(quad_vao);
glDrawArrays(GL_TRIANGLES,0,6);
I'm trying to work out what is preventing rendering to the texture. I'm using a core context with OpenGL v4.3.
I've tried outputting a single white colour for all fragments, generating a colour from the texture coordinates (colour = vec4(indata.VSTexCoord, 1.0, 1.0);), and sampling the texture itself as you see in the shader code, but nothing changes the resultant texture, which just shows the clear colour.
What am I doing wrong?
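A typical first sanity check when a colour attachment stays at the clear colour is to confirm, right before the draw call, that the framebuffer is complete and that no GL error is pending. This is a generic diagnostic sketch rather than a diagnosis of the code above:

// Generic check just before glDrawArrays: FBO completeness and pending GL errors.
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "FBO incomplete: 0x%x\n", status);

GLenum err;
while ((err = glGetError()) != GL_NO_ERROR)
    fprintf(stderr, "GL error before draw: 0x%x\n", err);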

Add radial gradient texture to each white part of another texture in shader

Recently I read an article about a sun shader (XNA Sun Shader) and decided to implement it using OpenGL ES 2.0, but I ran into a problem with the shader:
I have two textures. One of them is a fire gradient texture:
The other is a texture whose white parts must be colored by the first texture:
So I want to get a result like the one below (ignore that the result texture is rendered on a sphere mesh):
I really hope that somebody knows how to implement this shader.
You can first sample the original texture; if the color is white, then sample the gradient texture instead.
uniform sampler2D Texture0; // original texture
uniform sampler2D Texture1; // gradient texture
varying vec2 texCoord;

void main(void)
{
    gl_FragColor = texture2D(Texture0, texCoord);
    // If the color in the original texture is white,
    // use the color from the gradient texture instead.
    if (gl_FragColor == vec4(1.0, 1.0, 1.0, 1.0)) {
        gl_FragColor = texture2D(Texture1, texCoord);
    }
}
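Note that comparing against exact white can fail once the texture is filtered, mipmapped or compressed, because the white pixels come out as almost-white. A variant of the same idea (an assumption on my part, not part of the original answer) keys off a brightness threshold instead:

uniform sampler2D Texture0; // original texture
uniform sampler2D Texture1; // gradient texture
varying vec2 texCoord;

void main(void)
{
    vec4 base = texture2D(Texture0, texCoord);
    // Treat "almost white" pixels as white so filtering doesn't break the test.
    float whiteness = min(min(base.r, base.g), base.b);
    gl_FragColor = (whiteness > 0.95) ? texture2D(Texture1, texCoord) : base;
}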

deriving screen-space coordinates in glsl shader

I'm trying to write a simple application for baking a texture from a paint buffer. Right now I have a mesh, a mesh texture, and a paint texture. When I render the mesh, the mesh shader will lookup the mesh texture and then based on the screen position of the fragment lookup the paint texture value. I then composite the paint lookup with the mesh lookup.
Here's a screenshot with nothing in the paint buffer and just the mesh texture.
Here's a screenshot with something in the paint buffer composited over the mesh texture.
So that all works great, but I'd like to bake the paint texture into my mesh texture. Right now I send the mesh's UVs down as the position, with an ortho projection set to (0,1)x(0,1), so I'm actually doing everything in texture space. The mesh texture lookup also uses that position. The problem I'm having is computing the screen-space position of the fragment under the original projection, to figure out where to sample the paint texture. I'm passing the bake shader my original camera projection matrices and the object transform, so it can send the fragment shader the normalized-device position of the fragment (again under my original camera projection) to do the lookup, but it's coming out wrong.
Here's what the bake texture is generating if I render half the output using the paint texture and screen position I've derived.
I would expect that block line to be right down the middle.
Am I calculating the screen position incorrectly in my vertex shader? Or am I going about this in a fundamentally wrong way?
// vertex shader
uniform mat4 orthoPV;
uniform mat4 cameraPV;
uniform mat4 objToWorld;
varying vec2 uv;
varying vec2 screenPos;

void main() {
    uv = gl_Vertex.xy;
    screenPos = 0.5 * (vec2(1,1) + (cameraPV * objToWorld * vec4(gl_MultiTexCoord0.xyz, 1)).xy);
    screenPos = gl_MultiTexCoord0.xy; // note: overwrites the value computed on the previous line
    gl_Position = orthoPV * gl_Vertex;
    gl_FrontColor = vec4(1, 0, 0, 1);
}
// fragment shader
uniform sampler2D meshTexture;
uniform sampler2D paintTexture;
varying vec2 uv;
varying vec2 screenPos;

void main() {
    gl_FragColor = texture2D(meshTexture, uv);
    if (screenPos.x > 0.5)
        gl_FragColor = texture2D(paintTexture, uv);
}
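For reference, the usual derivation of a [0,1] screen-space coordinate from a clip-space position includes a perspective divide by w. A generic GLSL sketch of that derivation, reusing the uniform and attribute names from the shader above but not offered as a verified fix:

// Generic clip space -> [0,1] screen space derivation (with perspective divide).
vec4 clip = cameraPV * objToWorld * vec4(gl_MultiTexCoord0.xyz, 1.0);
vec2 ndc = clip.xy / clip.w;              // normalized device coordinates in [-1,1]
vec2 screen01 = 0.5 * (ndc + vec2(1.0));  // [0,1] range, suitable for a texture lookup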

GLSL blending input texture with target FBO color attachment

I have a system which allows me to set different blending modes (those found in Photoshop) for each renderable object. Currently what I do is:
1. Render the renderable object into FBO B normally.
2. Attach the blending-mode shader program and FBO C, and blend the color attachment from FBO B with the color attachment from FBO A (FBO A contains the previous draws' final result).
3. Blit the result from FBO C into FBO A and proceed with the rest of the pipeline.
While this works fine, I would like to save the frame time currently wasted on this ping-pong. I know that by default it is not possible to read from a texture while writing to it, so I cannot bind the same texture for both reading and writing. Ideally, what I would like to do is, in stage 1, render the geometry directly into FBO A, blending between FBO A's color attachment texture and the input material texture.
To make it clear, here is an example.
Let's assume all the previously rendered geometry is accumulated in FBO A, and each new rendered object that needs to get blended is rendered into FBO B (just as I wrote above). Then in the blend pass (drawn into FBO C) the following shader is used (here it is darken blending):
uniform sampler2D bottomSampler;
uniform sampler2D topSampler;
uniform float Opacity;

// utility function that assumes NON-pre-multiplied RGB...
vec4 final_mix(
    vec4 NewColor,
    vec4 BaseColor,
    vec4 BlendColor
) {
    float A2 = BlendColor.a * Opacity;
    vec3 mixRGB = A2 * NewColor.rgb;
    mixRGB += ((1.0 - A2) * BaseColor.rgb);
    return vec4(mixRGB, BaseColor.a + BlendColor.a);
}

void main(void) // fragment
{
    vec4 botColor = texture2D(bottomSampler, gl_TexCoord[0].st);
    vec4 topColor = texture2D(topSampler, gl_TexCoord[0].st);
    vec4 comp = final_mix(min(botColor, topColor), botColor, topColor);
    gl_FragColor = comp;
}
Here:
uniform sampler2D bottomSampler; - FBO A's texture attachment.
uniform sampler2D topSampler; - FBO B's texture attachment.
I use only plane geometry objects.
The output from this shader is FBO C's texture attachment, which is blitted into FBO A for the next iteration.
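For reference, the ping-pong described in steps 1-3 above boils down to host-side code roughly like the following; the FBO and texture handles and the two draw helpers are placeholder names, not part of the original post:

// Hypothetical sketch of the ping-pong: fboA accumulates, fboB holds the new object, fboC the blend output.

// 1. Render the object into FBO B normally.
glBindFramebuffer(GL_FRAMEBUFFER, fboB);
drawObject();                                   // placeholder for the normal object pass

// 2. Blend FBO B over FBO A into FBO C with the blend-mode shader.
glBindFramebuffer(GL_FRAMEBUFFER, fboC);
glUseProgram(blendProgram);                     // e.g. the darken shader above
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, fboA_color);       // bottomSampler
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, fboB_color);       // topSampler
glUniform1i(glGetUniformLocation(blendProgram, "bottomSampler"), 0);
glUniform1i(glGetUniformLocation(blendProgram, "topSampler"), 1);
drawFullScreenQuad();                           // placeholder

// 3. Blit FBO C back into FBO A for the next object.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboC);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboA);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);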