Unexpected Texture Coordinate Interpolation - Processing + GLSL

Over the past few days, I have stumbled upon a particularly tricky bug. I have reduced my code down to a very simple and direct set of examples.
This is the Processing code I use to call my shaders:
PGraphics test;
PShader testShader;
PImage testImage;

void setup() {
  size(400, 400, P2D);
  testShader = loadShader("test.frag", "vert2D.vert");
  testImage = loadImage("test.png");
  testShader.set("image", testImage);
  testShader.set("size", testImage.width);
  shader(testShader);
}
void draw() {
  background(0, 0, 0);
  shader(testShader);
  beginShape(TRIANGLES);
  vertex(-1, -1, 0, 1);
  vertex(1, -1, 1, 1);
  vertex(-1, 1, 0, 0);
  vertex(1, -1, 1, 1);
  vertex(-1, 1, 0, 0);
  vertex(1, 1, 1, 0);
  endShape();
}
Here is my vertex shader:
attribute vec2 vertex;
attribute vec2 texCoord;
varying vec2 vertTexCoord;
void main() {
    gl_Position = vec4(vertex, 0, 1);
    vertTexCoord = texCoord;
}
When I call this fragment shader:
uniform sampler2D image;
varying vec2 vertTexCoord;
void main(void) {
    gl_FragColor = texture2D(image, vertTexCoord);
}
I get this:
This is the expected result. However, when I render the texture coordinates to the red and green channels instead with the following fragment shader:
uniform sampler2D image;
uniform float size;
varying vec2 vertTexCoord;
void main(void) {
    gl_FragColor = vec4(vertTexCoord, 0, 1);
}
I get this:
As you can see, the majority of the screen is black, which would indicate that the texture coordinates at these fragments are [0, 0]. That can't be the case, though, because when the same coordinates are passed into the texture2D function, they map to the correct positions in the image. To verify that exactly the same texture coordinate values were being used in both cases, I combined the two with the following shader.
uniform sampler2D image;
uniform float size;
varying vec2 vertTexCoord;
void main(void) {
    gl_FragColor = texture2D(image, vertTexCoord) + vec4(vertTexCoord, 0, 0);
}
This produced:
This is exactly what you would expect if the texture coordinates did vary smoothly across the screen. So I tried a completely black image, expecting to see this variation more clearly without the face. When I did this, I got the image with the two triangles again. After playing around with it some more, I found that if I use an entirely black image with only the top-left pixel transparent, I get this:
This is finally the image I would expect with smoothly varying coordinates. It has completely stumped me. Why does the texture lookup work properly while rendering the actual coordinates gives me mostly junk?
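One variant I have not tried, but which would isolate whether the mere presence of the texture lookup is what matters, is giving the lookup zero weight (an untested sketch; a compiler may well optimize the sample away):
uniform sampler2D image;
varying vec2 vertTexCoord;

void main(void) {
    // The texture sample contributes nothing to the color, but keeps
    // a texture2D call in the shader alongside the raw coordinates.
    gl_FragColor = vec4(vertTexCoord, 0, 1) + 0.0 * texture2D(image, vertTexCoord);
}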
EDIT:
I found a solution, which I have posted below, but I am still unsure why the bug exists in the first place. I also came across an interesting test case that might provide a little more information about why this is happening.
Fragment Shader:
varying vec2 vertTexCoord;
void main(void) {
    gl_FragColor = vec4(vertTexCoord, 0, 0.5);
}
Result:

I have found two different solutions. Both involve changes to the Processing code. I have no idea how or why these changes make it work.
Solution 1:
Pass down screen space coordinates instead of clip space coordinates, and use the transform matrix generated by Processing to convert them to clip space in the vertex shader.
Processing code:
PGraphics test;
PShader testShader;
PImage testImage;

void setup() {
  size(400, 400, P2D);
  testShader = loadShader("test.frag", "vert2D.vert");
  testImage = loadImage("test.png");
  testShader.set("image", testImage);
  testShader.set("size", testImage.width);
  shader(testShader);
}
void draw() {
  background(0, 0, 0);
  shader(testShader);
  beginShape(TRIANGLES);
  // Pass down screen space coordinates instead.
  vertex(0, 400, 0, 1);
  vertex(400, 400, 1, 1);
  vertex(0, 0, 0, 0);
  vertex(400, 400, 1, 1);
  vertex(0, 0, 0, 0);
  vertex(400, 0, 1, 0);
  endShape();
}
Vertex Shader:
attribute vec2 vertex;
attribute vec2 texCoord;
uniform mat4 transform;
varying vec2 vertTexCoord;

void main() {
    // Multiply by the transform matrix.
    gl_Position = transform * vec4(vertex, 0, 1);
    vertTexCoord = texCoord;
}
Result:
Notice the line through the center of the screen. It appears because we haven't called noStroke() in the Processing code. Still, the texture coordinates are interpolated properly.
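As an aside, Processing's bundled texture shaders run the texture coordinates through a texMatrix uniform as well; here is a sketch of the same vertex shader written in that style (the texMatrix lines are based on Processing's stock shaders and are an assumption, not something this fix requires):
attribute vec2 vertex;
attribute vec2 texCoord;
uniform mat4 transform;
uniform mat4 texMatrix;
varying vec2 vertTexCoord;

void main() {
    gl_Position = transform * vec4(vertex, 0, 1);
    // Processing's stock texture shaders transform texCoord by texMatrix;
    // for an untransformed texture this is the identity.
    vertTexCoord = (texMatrix * vec4(texCoord, 1.0, 1.0)).st;
}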
Solution 2:
If we just call noStroke() in setup(), we can pass the clip space coordinates down without any issues and everything works exactly as expected. No shader changes are needed.
PGraphics test;
PShader testShader;
PImage testImage;

void setup() {
  size(400, 400, P2D);
  // Call noStroke().
  noStroke();
  testShader = loadShader("test.frag", "vert2D.vert");
  testImage = loadImage("test.png");
  testShader.set("image", testImage);
  testShader.set("size", testImage.width);
  shader(testShader);
}
void draw() {
  background(0, 0, 0);
  shader(testShader);
  beginShape(TRIANGLES);
  vertex(-1, -1, 0, 1);
  vertex(1, -1, 1, 1);
  vertex(-1, 1, 0, 0);
  vertex(1, -1, 1, 1);
  vertex(-1, 1, 0, 0);
  vertex(1, 1, 1, 0);
  endShape();
}
Result:
Pretty easy fix. How this one change manages to affect the way the texture coordinates are interpolated in the fragment shader is beyond me.
If anyone more familiar with how Processing wraps OpenGL has insight into why these bugs exist, I'd be interested to hear it.

Related

OpenGL - Drawing a Grid of Quads and Painting Them Manually

I'm writing a simple image processing app using OpenGL and C++.
However, there is one particular thing that I don't know how to do: I need to let my user draw a histogram graph.
The way I thought to do this is by creating a grid of quads, one quad for each pixel intensity of my image. For example, if the image is 8-bit, I would need a 256x256 grid of quads. After drawing the grid, I want the user to manually paint the quads in a quantized way (quad by quad) so that they can "draw" the histogram. The problem is that I don't know how to do either of these things.
Would anyone give me direction on how to draw the grid and how to make the painting work?
If you're confused by "drawing a histogram", just consider it a regular graph.
You don't have to draw a grid of quads. Just one quad is enough; then use a shader to sample from the histogram stored in a 1D texture. Here is what I get:
Vertex shader:
#version 450 core

layout(std140, binding = 0) uniform view_block {
    vec2 scale, offset;
} VIEW;

layout(std140, binding = 1) uniform draw_block {
    vec4 position;
    float max_value;
} DRAW;

out gl_PerVertex {
    vec4 gl_Position;
};

void main()
{
    // Expand gl_VertexID 0..3 into the four corners of the quad:
    // id.x selects left/right, id.y selects bottom/top.
    ivec2 id = ivec2(gl_VertexID & 1, gl_VertexID >> 1);
    // DRAW.position stores the quad as (x0, y0, x1, y1).
    vec2 position = vec2(DRAW.position[id.x << 1], DRAW.position[(id.y << 1) + 1]);
    // Map pixel coordinates to clip space.
    gl_Position = vec4(fma(position, VIEW.scale, VIEW.offset), 0, 1);
}
Fragment shader:
#version 450 core

layout(std140, binding = 1) uniform draw_block {
    vec4 position;
    float max_value;
} DRAW;

layout(binding = 0) uniform sampler1D hist;
layout(location = 0) out vec4 OUT;

void main()
{
    // Normalize the fragment position to [0, 1] within the quad.
    const vec2 extent = DRAW.position.zw - DRAW.position.xy;
    vec2 texcoord = (gl_FragCoord.xy - DRAW.position.xy) / extent;
    // White where the fragment lies below the bar height for its column.
    OUT.rgb = vec3(lessThan(texcoord.yyy * DRAW.max_value, texture(hist, texcoord.x).rgb));
    OUT.a = 1;
}
Histogram texture creation:
image hist(256, 1, 3, type_float); // the answer's own image helper class
// ... calculate the histogram ...
// glCreateTextureSN and friends appear to be the answer's own wrappers
// around the GL 4.5 direct-state-access calls.
tex.reset(glCreateTextureSN(GL_TEXTURE_1D));
glTextureStorage1D(tex.get(), 1, GL_RGB32F, hist.w);
glTextureSubImage1D(tex.get(), 0, 0, hist.w, GL_RGB, GL_FLOAT, hist.c[0]);
glTextureParameteri(tex.get(), GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
Rendering routine:
const vec2i vs = { glutGet(GLUT_WINDOW_WIDTH), glutGet(GLUT_WINDOW_HEIGHT) };
glViewport(0, 0, vs[0], vs[1]);
glClear(GL_COLOR_BUFFER_BIT);

// Map pixel coordinates to clip space: scale by 2/size, offset by -1.
struct view_block {
    vec2f scale, offset;
} VIEW = {
    vec2f(2) / vec2f(vs), -vec2f(1)
};
GLbuffer view_buf(glCreateBufferStorageSN(sizeof(VIEW), &VIEW, 0));
glBindBufferBase(GL_UNIFORM_BUFFER, 0, view_buf.get());

// The quad covers the whole window; max_value scales the bars.
struct draw_block {
    box2f position;
    float max_value;
} DRAW = {
    box2f(0, 0, vs[0], vs[1]),
    max_value
};
GLbuffer draw_buf(glCreateBufferStorageSN(sizeof(DRAW), &DRAW, 0));
glBindBufferBase(GL_UNIFORM_BUFFER, 1, draw_buf.get());

bind_textures(tex.get());
glBindProgramPipeline(pp.get());
// No vertex attributes are needed; the shader derives corners from gl_VertexID.
glBindVertexArray(0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glutSwapBuffers();
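If the DSA and UBO plumbing is a distraction, the fragment shader's core idea reduces to comparing the fragment's normalized height against the histogram value in its column. A stripped-down sketch (the uniform names here are illustrative, not the ones used above):
#version 130

uniform sampler1D hist;   // per-bin counts
uniform float max_value;  // largest bin count, used for scaling
uniform vec2 viewport;    // viewport size in pixels

out vec4 color;

void main()
{
    vec2 uv = gl_FragCoord.xy / viewport;   // normalize to [0, 1]
    float bin = texture(hist, uv.x).r;      // histogram value for this column
    // White where the fragment lies below the scaled bar height.
    color = vec4(vec3(uv.y * max_value < bin ? 1.0 : 0.0), 1.0);
}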

Texture Coordinates OSG GLSL

I have a 500x500-pixel image which I pass as a Texture2D to GLSL, and I return the raw data back to C++/OSG. I have run into problems with the texture coordinates (in GLSL, coordinates go from 0 to 1). Can someone help me with this point?
C++ code:
cv::Mat test = cv::Mat::zeros(512, 512, CV_8UC3);
test(cv::Rect( 0, 0, 255, 255)).setTo(cv::Scalar(255,0,0));
test(cv::Rect(256, 0, 255, 255)).setTo(cv::Scalar(0,255,0));
test(cv::Rect( 0, 256, 255, 255)).setTo(cv::Scalar(0,0,255));
test(cv::Rect(256, 256, 255, 255)).setTo(cv::Scalar(255,255,255));
osg::ref_ptr<osg::Image> image = new osg::Image;
image->setImage(512, 512, 3, GL_RGB, GL_BGR, GL_UNSIGNED_BYTE, test.data, osg::Image::NO_DELETE, 1);
osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
texture->setTextureSize(512, 512);
texture->setImage(image);
// Pass the texture to GLSL as uniform
osg::StateSet* ss = scene->getOrCreateStateSet();
osg::Uniform* samUniform = new osg::Uniform(osg::Uniform::SAMPLER_2D, "vertexMap");
samUniform->set(0);
ss->addUniform(samUniform);
ss->setTextureAttributeAndModes(0, texture, osg::StateAttribute::ON);
Vertex code:
#version 130
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}
Fragment code:
#version 130
uniform sampler2D vertexMap;
out vec4 out_data;
void main() {
    vec3 value = texture2D(vertexMap, gl_TexCoord[0].st).xyz;
    out_data = vec4(value, 1);
}
This is my input data:
Output data from shader:
I solved it by replacing texture2D with texelFetch in the fragment shader. The difference between these two functions is explained here: https://gamedev.stackexchange.com/questions/66448/how-do-opengls-texelfetch-and-texture-differ
#version 130
uniform sampler2D vertexMap;
out vec4 out_data;
void main() {
    vec3 value = texelFetch(vertexMap, ivec2(gl_FragCoord.xy), 0).xyz;
    out_data = vec4(value, 1);
}
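Alternatively, the same effect can be had while keeping a normalized lookup by dividing gl_FragCoord by the texture size yourself. A sketch assuming the 512x512 texture above and a viewport of the same size:
#version 130
uniform sampler2D vertexMap;
out vec4 out_data;
void main() {
    // texelFetch takes integer texel indices; texture() expects [0, 1]
    // coordinates, so normalize the window position by the texture size.
    vec2 uv = gl_FragCoord.xy / vec2(512.0);
    out_data = vec4(texture(vertexMap, uv).xyz, 1.0);
}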

OpenGL horizontal pixel pairs drawn swapped

I have a problem that is extremely similar to the one described in OpenGL pixels drawn with each horizontal pair swapped. The main difference is that I'm getting this distortion even when I feed the texture one-byte red-only values.
EDIT: On closer inspection of normal textures, I discovered that this problem manifests when rendering any 2D texture. I tried rotating the resulting texture by swapping the texture coordinates. The resulting picture still has its horizontal pixel pairs visually swapped, so I'm assuming that the data in the texture is good and the distortion occurs when rendering the texture.
Here are the relevant parts of the code:
C++:
struct coord_t { float x; float y; };

GLint loc = glGetAttribLocation(program, "coord");
if (loc != -1) {
    glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, sizeof(coord_t),
        reinterpret_cast<void *>(offsetof(coord_t, x)));
    glEnableVertexAttribArray(loc);
}
loc = glGetAttribLocation(program, "tex_coord");
if (loc != -1) {
    // The texture coordinates start after the four position vertices.
    glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, sizeof(coord_t),
        reinterpret_cast<void *>(4 * sizeof(coord_t) + offsetof(coord_t, x)));
    glEnableVertexAttribArray(loc);
}
// ... Texture binding to GL_TEXTURE_2D ...
coord_t pos[] = {coord_t{-1.f, -1.f}, coord_t{1.f, -1.f},
                 coord_t{-1.f, 1.f}, coord_t{1.f, 1.f}};
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(pos), pos);           // positions
glBufferSubData(GL_ARRAY_BUFFER, sizeof(pos), sizeof(pos), pos); // texture coordinates
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Corresponding vertex shader:
#version 110
attribute vec2 coord;
attribute vec2 tex_coord;
varying vec2 tex_out;
void main(void) {
    gl_Position = vec4(coord.xy, 0.0, 1.0);
    tex_out = tex_coord;
}
Corresponding fragment shader:
#version 110
uniform sampler2D my_texture;
varying vec2 tex_out;
void main(void) {
    gl_FragColor = texture2D(my_texture, tex_out);
}
After extensive code investigation, I managed to find the culprit.
I was setting the blend function incorrectly, using GL_SRC1_ALPHA and GL_ONE_MINUS_SRC1_ALPHA instead of GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA. The SRC1 factors belong to dual-source blending, where they read a second fragment shader output; without one, the blend results are undefined.

Palette swap using fragment shaders

I'm trying to work out how I can achieve a palette swap using fragment shaders (looking at this post: https://gamedev.stackexchange.com/questions/43294/creating-a-retro-style-palette-swapping-effect-in-opengl). I am new to OpenGL, so I'd be glad if someone could explain my issue.
Here is the code snippet I am trying to reproduce:
http://www.opengl.org/wiki/Common_Mistakes#Paletted_textures
I set up my OpenGL environment so that I can create a window, load textures, load shaders, and render my single square, which is mapped to the corners of the window (when I resize the window, the image gets stretched too).
I am using a vertex shader to convert coordinates from screen space to texture space, so my texture is stretched with the window:
attribute vec2 position;
varying vec2 texcoord;
void main()
{
    gl_Position = vec4(position, 0.0, 1.0);
    texcoord = position * vec2(0.5) + vec2(0.5);
}
The fragment shader is
uniform float fade_factor;
uniform sampler2D textures[2];
varying vec2 texcoord;
void main()
{
    vec4 index = texture2D(textures[0], texcoord);
    vec4 texel = texture2D(textures[1], index.xy);
    gl_FragColor = texel;
}
textures[0] is the indexed texture (the one I'm trying to colorize).
Every pixel has a color value of (0, 0, 0, 255), (1, 0, 0, 255), (2, 0, 0, 255) ... (8, 0, 0, 255) - 8 colors total, which is why it looks almost black. I want to encode my colors using the value stored in the red channel.
textures[1] is the table of colors (9x1 pixels, each pixel a unique color; zoomed to 90x10 for posting).
So, as you can see from the fragment shader excerpt, I want to read an index value from the first texture, for example (5, 0, 0, 255), and then look up the actual color value from the pixel stored at point (x=5, y=0) in the second texture, the same as written in the wiki.
But instead of a painted image I get:
Actually, I see that I can't access pixels from the second texture if I explicitly set the X coordinate to something like vec2(1, 0), vec2(2, 0), vec2(4, 0) or vec2(8, 0). But I can get colors when I use vec2(0.1, 0) or vec2(0.7, 0). I guess that happens because texture space is normalized, mapping my 9x1 pixels to (0,0)->(1,1). But how can I "disable" that feature and simply load my palette texture so I could just ask "give me the color value of the pixel stored at (x, y), please"?
Every pixel has color value of (0, 0, 0, 255), (1, 0, 0, 255), (2, 0, 0, 255) ... (8, 0, 0, 255)
Wrong. Every pixel has the color values: (0, 0, 0, 1), (0.00392, 0, 0, 1), (0.00784, 0, 0, 1) ... (0.0313, 0, 0, 1).
Unless you're using integer or float textures (and you're not), your colors are stored as normalized floating point values. So what you think is "255" is really just "1.0" when you fetch it from the shader.
The correct way to handle this is to first transform the normalized values back into their non-normalized form, by multiplying the value by 255. Then convert the result into a texture coordinate by dividing by the palette texture's width minus 1. Also, your palette texture should be 1D, not 2D:
#version 330 // Always include a version.
uniform float fade_factor;
uniform sampler2D palettedTexture;
uniform sampler1D palette;

in vec2 texcoord;
layout(location = 0) out vec4 outColor;

void main()
{
    // Undo the normalization: [0, 1] back to the original 0-255 index.
    float paletteIndex = texture(palettedTexture, texcoord).r * 255.0;
    // Map the index onto the palette's [0, 1] coordinate range.
    outColor = texture(palette, paletteIndex / float(textureSize(palette, 0) - 1));
}
The above code is written for GLSL 3.30. If you're using earlier versions, translate it accordingly.
Also, you shouldn't be using RGBA textures for your paletted texture. It's just one channel, so either use GL_LUMINANCE or GL_R8.
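For reference, a rough translation of the same lookup into the older GLSL style used in the question might look like the following; the hard-coded palette width of 9 is taken from the question, since textureSize() is unavailable before GLSL 1.30:
uniform sampler2D palettedTexture;
uniform sampler1D palette;
varying vec2 texcoord;

void main()
{
    // Undo the normalization: [0, 1] back to a 0-255 index.
    float paletteIndex = texture2D(palettedTexture, texcoord).r * 255.0;
    // 9-texel palette, so divide by width - 1 = 8, as above.
    gl_FragColor = texture1D(palette, paletteIndex / 8.0);
}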

GLSL and glBegin(GL_TRIANGLES) not working

This code only renders a dodecahedron and completely ignores the glBegin(GL_TRIANGLES) block:
glutSolidDodecahedron();
glBegin(GL_TRIANGLES);
    glNormal3f(1, 0, 0);
    glVertex3f(11, 0, 0);
    glNormal3f(0, 1, 1);
    glVertex3f(-11, 0, 0);
    glNormal3f(0, 0, 1);
    glVertex3f(0, 0, 11);
glEnd();
The two shaders are quite simple.
The vertex shader:
varying vec3 normal;
void main()
{
    gl_Position = ftransform();
    gl_FrontColor = gl_Color;
    gl_BackColor = gl_Color;
    normal = gl_Normal;
    normal = gl_NormalMatrix * normal;
}
and the fragment shader:
uniform vec3 lightDir;
varying vec3 normal;
void main()
{
    float intensity = dot(lightDir, normal);
    gl_FragColor = 0.5 * (1.5 + intensity) * gl_Color;
}
While the glutSolidX family of functions works well with this example (based on the Lighthouse3d tutorial), how can one quickly draw triangles whose coordinates change from frame to frame? (I tried arrays and GL_DYNAMIC_DRAW, but that's too much work compared to the old fixed-pipeline approach.) I have seen other people successfully use glBegin()/glEnd() blocks with GLSL shaders, so it must be possible. What could be missing?
The coordinates of the vertices of the triangle in the glBegin/glEnd block are
11, 0, 0
-11, 0, 0
0, 0, 11
which means the triangle lies completely flat in the view, in the y = 0 plane. This is like viewing a sheet of paper from such a hard angle that it becomes a line. Because triangles have no thickness, not even this line is drawn, and the triangle seems invisible. Giving one vertex a non-zero y component (for example, glVertex3f(0, 11, 0) for the last vertex) makes it visible.