Why does OpenSceneGraph map all sampler2D uniforms to the first texture? (C++)

I am currently writing a program with OpenSceneGraph (3.4.0) and my own GLSL (version 330) shaders.
It uses multiple textures as input, renders into multiple render targets with a pre-render camera, and then reads those render-target textures back in with a second camera for deferred shading. Both cameras therefore have their own shaders (called geometry_pass and lighting_pass here).
My problem: both shaders sample the same texture through all of their sampler2D uniforms.
//in geometry_pass.frag
uniform sampler2D uAlbedoMap;
uniform sampler2D uHeightMap;
uniform sampler2D uNormalMap;
uniform sampler2D uRoughnessMap;
uniform sampler2D uSpecularMap;
[...]
layout (location = 0) out vec4 albedo;
layout (location = 1) out vec4 height;
layout (location = 2) out vec4 normal;
layout (location = 3) out vec4 position;
layout (location = 4) out vec4 roughness;
layout (location = 5) out vec4 specular;
[...]
albedo = vec4(texture(uAlbedoMap, vTexCoords).rgb, 1.0);
height = vec4(texture(uHeightMap, vTexCoords).rgb, 1.0);
normal = vec4(texture(uNormalMap, vTexCoords).rgb, 1.0);
position = vec4(vPosition_WorldSpace, 1.0);
roughness = vec4(texture(uRoughnessMap, vTexCoords).rgb, 1.0);
specular = vec4(texture(uSpecularMap, vTexCoords).rgb, 1.0);
Here the output is always the color of the uAlbedoMap, except for the position, which gets exported correctly.
In the lighting pass, when I read in the textures of the geometry pass, all input textures are again the same:
//in lighting_pass.frag
uniform sampler2D uAlbedoMap;
uniform sampler2D uHeightMap;
uniform sampler2D uNormalMap;
uniform sampler2D uPositionMap;
uniform sampler2D uRoughnessMap;
uniform sampler2D uSpecularMap;
[...]
vec3 albedo = texture(uAlbedoMap, vTexCoord).rgb;
vec3 height = texture(uHeightMap, vTexCoord).rgb;
vec3 normal_TangentSpace = texture(uNormalMap, vTexCoord).rgb;
vec3 position_WorldSpace = texture(uPositionMap, vTexCoord).rgb;
vec3 roughness = texture(uRoughnessMap, vTexCoord).rgb;
vec3 specular = texture(uSpecularMap, vTexCoord).rgb;
i.e. the position map that was exported correctly also shows the albedo's color in the lighting pass.
So the texture output seems to work correctly, but the input obviously does not.
I have tried to debug this with CodeXL, and there I can see that all the images for the geometry_pass have (at some point at least) been bound correctly; they are all visible. The output textures of the framebuffer object confirm that the position texture of the geometry_pass is correct.
As far as I can see when stepping through the code, the textures are bound correctly (i.e. the uniform locations are correct).
Now the obvious question: How can I get those textures to be correctly used in the shaders?
Construction of the program
The viewer is an osgViewer::Viewer, so there is only one view.
The scene graph is as follows:
The displayCamera is the camera from the viewer. Since I'm working with Qt (5.9.1), I reset the graphics context before doing anything else with the scene graph.
osg::ref_ptr<osg::Camera> camera = viewer.getCamera();
osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
traits->windowDecoration = false;
traits->x = 0;
traits->y = 0;
traits->width = 640;
traits->height = 480;
traits->doubleBuffer = true;
camera->setGraphicsContext(new osgQt::GraphicsWindowQt(traits.get()));
camera->getGraphicsContext()->getState()->setUseModelViewAndProjectionUniforms(true);
camera->getGraphicsContext()->getState()->setUseVertexAttributeAliasing(true);
camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
camera->setClearColor(osg::Vec4(0.2f, 0.2f, 0.6f, 1.0f));
camera->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));
camera->setViewMatrix(osg::Matrix::identity());
I then set displayCamera to this viewer camera, create a second camera for render-to-texture (hence called rttCamera) and add it as a child of the displayCamera. I add the scene (a group node containing a geode with a hardcoded geometry) to the rttCamera, and finally create a screen-quad geometry (below a geode, which in turn is a child of a matrix transform; this matrix transform is what gets added as a child to the displayCamera).
Thus the displayCamera has the two children rttCamera and matrixTransform->screenQuad, and the rttCamera has the child scene->geode.
Each camera has its own render mask: the screen quad uses the displayCamera's render mask and the scene uses the rttCamera's render mask, roughly as sketched below.
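A minimal sketch of that mask split (the mask values and variable names are my assumptions, not the original code):
const osg::Node::NodeMask rttMask     = 0x1;
const osg::Node::NodeMask displayMask = 0x2;
rttCamera->setCullMask(rttMask);              // rttCamera only traverses the scene
displayCamera->setCullMask(displayMask);      // displayCamera only traverses the quad
scene->setNodeMask(rttMask);
matrixTransform->setNodeMask(displayMask);    // parent transform of the screen quad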
With the scene node I read in 5 textures from file (all bitmaps) and then render the rttCamera into the framebuffer object with multiple render targets (for deferred shading).
//model is the geode in the scene group node
osg::ref_ptr<osg::StateSet> ss = model->getOrCreateStateSet();
ss->addUniform(new osg::Uniform(name.toStdString().c_str(), counter));
ss->setTextureAttributeAndModes(counter, pairNameTexture.second, osg::StateAttribute::ON | osg::StateAttribute::PROTECTED);
//camera is the rttCamera
//bufferComponent is constructed by osg::Camera::COLOR_BUFFER0+counter
//(where counter is just an integer that gets incremented)
//texture is an osg::Texture2D that is newly created
camera->attach(bufferComponent, texture);
//the textures get stored to assign them later on
gBufferTextures[name] = texture;
These MRT textures are bound to the screen quad as textures:
//ssQuad is the stateset of the screen quad geode
QString uniformName = "u" + name + "Map";
uniformName[1] = uniformName[1].toUpper();
ssQuad->addUniform(new osg::Uniform(uniformName.toStdString().c_str(), counter));
osg::ref_ptr<osg::Texture2D> tex = gBufferTextures[name];
ssQuad->setTextureAttributeAndModes(counter, gBufferTextures[name], osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);
Other setup includes the render targets (FBO for the rttCamera, frame buffer for the displayCamera) and lighting (off for both cameras). The rttCamera gets the same graphics context that was created for the displayCamera (i.e. the graphics context object is passed to the rttCamera and set as its own), along the lines of the sketch below.
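In code, that setup looks roughly like this (a sketch, not the original source):
rttCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
displayCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER);
rttCamera->setGraphicsContext(displayCamera->getGraphicsContext());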
The texture attachments are created as follows (using the passed-in width and height or the hardcoded power-of-two values makes no difference):
osg::ref_ptr<osg::Texture2D> Utils::createTextureAttachment(int width, int height)
{
    osg::Texture2D* texture = new osg::Texture2D();
    //texture->setTextureSize(width, height);
    texture->setTextureSize(512, 512);
    texture->setInternalFormat(GL_RGBA);
    texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
    texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
    return texture;
}
Let me know if any code or information crucial for solving this is missing.

So I finally found the error. My counter was an unsigned int, which apparently is not allowed. Since OSG hides so many of the errors from me, I didn't see that this was the issue...
After changing it to a plain int, I now get different textures in my shader.
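For reference, a minimal sketch of a binding helper with the corrected type (the helper name is mine, not the original code). The likely mechanism: osg::Uniform(name, unsigned int) creates a GL_UNSIGNED_INT uniform, but OpenGL only accepts sampler uniforms set via glUniform1i, so the unsigned value is rejected and every sampler stays at its default of 0, i.e. texture unit 0.
void bindTextureToUnit(osg::StateSet* ss, const std::string& name,
                       osg::Texture2D* tex, int unit) // int, not unsigned int
{
    // the int constructor yields a GL_INT uniform, which samplers accept
    ss->addUniform(new osg::Uniform(name.c_str(), unit));
    ss->setTextureAttributeAndModes(unit, tex, osg::StateAttribute::ON);
}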

Related

"Scan Through" a large texture glsl

I've encoded some data into a 44487x1 luminance texture.
Now I would like to "scrub" this data across my shader, so that a slice of the texture equal in width to the pixel width of my canvas is displayed. So if the canvas is 500px wide, then 500 pixels from the texture will be shown. The texture is then translated by some offset value so that different values within the texture can be displayed.
//vertex shader
export const vs = GLSL`
#version 300 es
in vec4 position;
void main() {
    gl_Position = position;
}
`;
//fragment shader
#version 300 es
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_texture_7; //data texture
out vec4 fragColor;
void main(){
    //data texture dimensions
    vec2 dims = vec2(44487., 1.0);
    //amount by which to translate the data texture
    vec2 offset = vec2(u_time*.5, 0.);
    //canvas coords
    vec2 uv = gl_FragCoord.xy/u_resolution.xy;
    //texture aspect ratio, w/h
    float textureAspect = 44487. / 1.;
    vec3 col = vec3(0.);
    //texture width is 44487x larger than uv, I guess?
    vec2 textCoords = vec2((uv.x/textureAspect)+offset.x, uv.y);
    //get texture values
    vec3 text = texture(u_texture_7, textCoords).rgb;
    //output
    fragColor = vec4(text, 1.);
}
However, this doesn't seem to work; all I get is a black screen. Is using a wide texture like this a good way to get the array values into the shader? The texture is very small in bytes, but I'm wondering if its dimensions might still be causing an issue.
As an alternative to providing one large texture, I could provide a smaller texture but update the texture uniform values via JS?
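One thing worth ruling out here (my note, not from the original post): a width of 44487 texels exceeds GL_MAX_TEXTURE_SIZE on most GPUs (commonly 8192 or 16384), which would make the allocation fail and produce exactly this black screen. The limit can be queried; the desktop-GL call is shown below, and WebGL2 exposes the same MAX_TEXTURE_SIZE constant via gl.getParameter.
GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize); // a 44487-wide texture needs maxSize >= 44487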
After trying several different approaches, the workaround I ended up using was uploading the 44487x1 image to a separate 2D canvas, performing the transformations of the texture in that 2D canvas instead of the shader, and then sending the canvas to the shader as a texture.
It might not be the most efficient solution, but it avoids having to mess around with the texture too much in the shader.

Cannot apply blending to cube located behind half-transparent textured surface

Following the tutorial from learnopengl.com about rendering half-transparent windows using blending, I tried to apply that principle to my simple scene (which can be navigated using the mouse) containing:
Cube: 6 faces, each having 2 triangles, constructed using two attributes (position and color) defined in its associated vertex shader and passed to its fragment shader.
Grass: 2D Surface (two triangles) to which a png texture was applied using a sampler2D uniform (the background of the png image is transparent).
Window: A half-transparent 2D surface based on the same shaders (vertex and fragment) as the grass above. Both textures were downloaded from learnopengl.com.
The issue I'm facing is that I can see the Grass through the Window, but not the Cube!
My code is structured as follows (I left the rendering of the window to the very last on purpose):
// enable depth test & blending
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
while (true):
    glClearColor(background.r, background.g, background.b, background.a);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    cube.draw();
    grass.draw();
    window.draw();
Edit: I'll share below the vertex and fragment shaders used to draw the two textured surfaces (grass and window):
#version 130
in vec2 position;
in vec2 texture_coord;
// opengl tranformation matrices
uniform mat4 model; // object coord -> world coord
uniform mat4 view; // world coord -> camera coord
uniform mat4 projection; // camera coord -> ndc coord
out vec2 texture_coord_vert;
void main() {
    gl_Position = projection * view * model * vec4(position, 0.0, 1.0);
    texture_coord_vert = texture_coord;
}
#version 130
in vec2 texture_coord_vert;
uniform sampler2D texture2d;
out vec4 color_out;
void main() {
    vec4 color = texture(texture2d, texture_coord_vert);
    // manage transparency
    if (color.a == 0.0)
        discard;
    color_out = color;
}
And the ones used to render the colored cube:
#version 130
in vec3 position;
in vec3 color;
// opengl tranformation matrices
uniform mat4 model; // object coord -> world coord
uniform mat4 view; // world coord -> camera coord
uniform mat4 projection; // camera coord -> ndc coord
out vec3 color_vert;
void main() {
    gl_Position = projection * view * model * vec4(position, 1.0);
    color_vert = color;
}
#version 130
in vec3 color_vert;
out vec4 color_out;
void main() {
    color_out = vec4(color_vert, 1.0);
}
P.S.: My shader programs use GLSL v1.30, because my integrated GPU didn't seem to support later versions.
Regarding the code that does the actual drawing: I have one instance of a Renderer class for each type of geometry (one shared by both textured surfaces, and one for the cube). This class manages the creation/binding/deletion of VAOs and the binding/deletion of VBOs (the VBOs are created outside the class so I can share vertexes between similar shapes). Its constructor takes the shader program and the vertex attributes as arguments. The relevant piece of code:
Renderer::Renderer(Program program, vector attributes) {
    vao.bind();
    vbo.bind();
    define_attributes(attributes);
    vao.unbind();
    vbo.unbind();
}

void Renderer::draw(Uniforms uniforms) {
    vao.bind();
    program.use();
    set_uniforms(uniforms);
    glDrawArrays(GL_TRIANGLES, 0, n_vertexes);
    vao.unbind();
    program.unuse();
}
Your blend function depends on the target's (destination's) alpha channel (GL_ONE_MINUS_DST_ALPHA):
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
dest = src * src_alpha + dest * (1 - dest_alpha)
Since the cube's fragment shader writes an alpha of 1.0, the destination factor (1 - dest_alpha) is zero wherever the cube was drawn, so the color of the cube is not mixed with the color of the window.
The traditional alpha blending function depends only on the source alpha channel:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
dest = src * src_alpha + dest * (1-src_alpha)
See also glBlendFunc and Blending
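Put together with the render loop from the question, the corrected setup would look like this (a sketch):
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // classic "over" compositing
// opaque geometry first, blended surfaces last (ideally sorted back to front):
cube.draw();
grass.draw();
window.draw();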

OpenGL - displacement vertex shader

I'm working with the OpenTK wrapper in C# and trying to use displacement vertex shaders to generate 3D models.
I can run dummy shaders to render cubes and triangles, but now I want to create a 3D grid using texture data. For a first attempt I created a .png image with different areas using red and black colors.
For reference, here is the texture-loading function:
int loadImage(Bitmap image)
{
    int texID = GL.GenTexture();
    GL.BindTexture(TextureTarget.Texture2D, texID);
    System.Drawing.Imaging.BitmapData data = image.LockBits(
        new System.Drawing.Rectangle(0, 0, image.Width, image.Height),
        System.Drawing.Imaging.ImageLockMode.ReadOnly,
        System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
        OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
    image.UnlockBits(data);
    GL.GenerateMipmap(GenerateMipmapTarget.Texture2D);
    return texID;
}
As far as I read in the documentation, after loading the texture I bind both arrays (vertex positions and texcoords) and call GL.UseProgram. I assume the texture is then bound and loaded, isn't it?
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, objects[0].TextureID);
int loc = GL.GetUniformLocation(shaders[activeShader].ProgramID, "maintexture");
GL.Uniform1(loc, 0);
GL.UniformMatrix4(shaders[activeShader].GetUniform("modelview"), false, ref objects[0].ModelViewProjectionMatrix);
vertex shader:
#version 330
in vec3 vPosition;
in vec2 texcoord;
out vec2 f_texcoord;
uniform mat4 modelview;
uniform sampler2D maintexture;

void main()
{
    vec3 newPos = vPosition;
    newPos.y += texture(maintexture, texcoord).r;
    gl_Position = modelview * vec4(newPos, 1.0);
    f_texcoord = texcoord;
}
What I'm trying to achieve is that the red areas in the input texture appear as elevated vertices and the black areas produce vertices at ground level, but I'm getting a perfectly flat grid and I can't understand why.

deriving screen-space coordinates in glsl shader

I'm trying to write a simple application for baking a texture from a paint buffer. Right now I have a mesh, a mesh texture, and a paint texture. When I render the mesh, the mesh shader looks up the mesh texture and then, based on the fragment's screen position, looks up the paint texture value. I then composite the paint lookup with the mesh lookup.
Here's a screenshot with nothing in the paint buffer and just the mesh texture.
Here's a screenshot with something in the paint buffer composited over the mesh texture.
So that all works great, but I'd like to bake the paint texture into my mesh texture. Right now I send the mesh's UVs down as the position, with an ortho projection set to (0,1)x(0,1), so I'm actually doing everything in texture space. The mesh texture lookup also uses the position. The problem I'm having is computing the fragment's screen-space position under the original projection in order to figure out where to sample the paint texture. I pass the bake shader my original camera projection matrices and the object transform so the fragment shader gets the device-normalized position of the fragment (again, from my original camera projection) to do the lookup, but it's coming out wrong.
Here's what the bake texture is generating if I render half the output using the paint texture and screen position I've derived.
I would expect that block line to be right down the middle.
Am I calculating the screen position incorrectly in my vertex shader? Or am I going about this in a fundamentally wrong way?
// vertex shader
uniform mat4 orthoPV;
uniform mat4 cameraPV;
uniform mat4 objToWorld;
varying vec2 uv;
varying vec2 screenPos;
void main() {
    uv = gl_Vertex.xy;
    screenPos = 0.5 * (vec2(1,1) + (cameraPV * objToWorld * vec4(gl_MultiTexCoord0.xyz,1)).xy);
    screenPos = gl_MultiTexCoord0.xy;
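    // note (added): this line overwrites the screenPos computed just above
    // (debug leftover?); the clip-space result above would also still need a
    // perspective divide (.xy / .w) before the 0.5 * (1 + x) remapping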
    gl_Position = orthoPV * gl_Vertex;
    gl_FrontColor = vec4(1,0,0,1);
}
// fragment shader
uniform sampler2D meshTexture;
uniform sampler2D paintTexture;
varying vec2 uv;
varying vec2 screenPos;
void main() {
    gl_FragColor = texture2D(meshTexture, uv);
    if (screenPos.x > .5)
        gl_FragColor = texture2D(paintTexture, uv);
}

OpenGL two pass shader effect with FBO render to texture gives noise in result on Windows only

What is the correct way of doing the following:
Render a scene into a texture using an FBO (fbo-a)
Then apply an effect using that texture (tex-a) and render it into another texture (tex-b) using the same FBO (fbo-a)
Then render this second texture, with the applied effect (tex-b), as a full-screen quad.
My approach is below, but on Windows it gives me a texture filled with "noise" plus the applied effect (all pixels randomly colored red, green, blue, white, black).
I'm using one FBO with two textures set to GL_COLOR_ATTACHMENT0 (tex-a) and GL_COLOR_ATTACHMENT1 (tex-b).
I bind my FBO and make sure the scene is rendered into tex-a using glDrawBuffer(GL_COLOR_ATTACHMENT0).
Then I apply the effect in a shader with tex-a bound and set as a sampler2D on texture unit 1, switch to the second color attachment (glDrawBuffer(GL_COLOR_ATTACHMENT1)), and render a full-screen quad. Everything is now rendered into tex-b.
Then I switch back to the default framebuffer (0) and use tex-b with a full-screen quad to render the result. The whole flow is sketched below.
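A sketch of that flow in plain GL calls (fboA, texA, texB and the draw helpers are my names, not from the project):
glBindFramebuffer(GL_FRAMEBUFFER, fboA);
glDrawBuffer(GL_COLOR_ATTACHMENT0);      // pass 1: render the scene into tex-a
drawScene();

glActiveTexture(GL_TEXTURE1);            // pass 2: effect(tex-a) into tex-b
glBindTexture(GL_TEXTURE_2D, texA);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
drawFullScreenQuad();

glBindFramebuffer(GL_FRAMEBUFFER, 0);    // pass 3: draw tex-b to the screen
glBindTexture(GL_TEXTURE_2D, texB);
drawFullScreenQuad();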
Example of the result when applying my shader
This is the shader I'm using. I'm not aware of anything in it that could be causing this, but maybe the noise is caused by an overflow?
Vertex shader
attribute vec4 a_pos;
attribute vec2 a_tex;
varying vec2 v_tex;
void main() {
    mat4 ident = mat4(1.0);
    v_tex = a_tex;
    gl_Position = ident * a_pos;
}
Fragment shader
uniform int u_mode;
uniform sampler2D u_texture;
uniform float u_exposure;
uniform float u_decay;
uniform float u_density;
uniform float u_weight;
uniform float u_light_x;
uniform float u_light_y;
const int NUM_SAMPLES = 100;
varying vec2 v_tex;
void main() {
    if (u_mode == 0) {
        vec2 pos_on_screen = vec2(u_light_x, u_light_y);
        vec2 delta_texc = vec2(v_tex.st - pos_on_screen.xy);
        vec2 texc = v_tex;
        delta_texc *= 1.0 / float(NUM_SAMPLES) * u_density;
        float illum_decay = 1.0;
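        // note (added): gl_FragColor is accumulated in the loop below without
        // ever being initialized first -- this is the bug identified in the
        // answer at the end (per Update 2, initializing it to zero here fixes it)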
        for(int i = 0; i < NUM_SAMPLES; i++) {
            texc -= delta_texc;
            vec4 sample = texture2D(u_texture, texc);
            sample *= illum_decay * u_weight;
            gl_FragColor += sample;
            illum_decay *= u_decay;
        }
        gl_FragColor *= u_exposure;
    }
    else if(u_mode == 1) {
        gl_FragColor = texture2D(u_texture, v_tex);
        gl_FragColor.a = 1.0;
    }
}
I've read the FBO article on opengl.org, where they describe a feedback loop at the bottom of the article. The description is not completely clear to me, and I'm wondering if that is exactly what I'm doing here.
Update 1:
Link to source code
Update 2:
When I set gl_FragColor.rgb = vec3(0.0, 0.0, 0.0); before I start the sampling loop (with NUM_SAMPLES), it works fine. No idea why, though.
The problem is that you're not initializing gl_FragColor, and you're modifying it with the lines
gl_FragColor += sample;
and
gl_FragColor *= u_exposure;
both of which depend on the previous value of gl_FragColor. So you're getting some random junk (whatever happened to be in the register that the shader compiler decided to use for the gl_FragColor computation) added in. This has a strong possibility of working fine on some driver/hardware combinations (because the compiler decided to use a register that was always 0 for some reason) and not on others.