Palette swap using fragment shaders - OpenGL

I'm trying to work out how I can achieve a palette swap using fragment shaders (looking at this post: https://gamedev.stackexchange.com/questions/43294/creating-a-retro-style-palette-swapping-effect-in-opengl). I'm new to OpenGL, so I'd be glad if someone could explain my issue to me.
Here is the code snippet I am trying to reproduce:
http://www.opengl.org/wiki/Common_Mistakes#Paletted_textures
I set up an OpenGL environment so that I can create a window, load textures and shaders, and render a single square that is mapped to the corners of the window (when I resize the window, the image gets stretched too).
I am using a vertex shader to convert coordinates from screen space to texture space, so my texture is stretched with the window:
attribute vec2 position;
varying vec2 texcoord;
void main()
{
    gl_Position = vec4(position, 0.0, 1.0);
    texcoord = position * vec2(0.5) + vec2(0.5);
}
The fragment shader is
uniform float fade_factor;
uniform sampler2D textures[2];
varying vec2 texcoord;
void main()
{
    vec4 index = texture2D(textures[0], texcoord);
    vec4 texel = texture2D(textures[1], index.xy);
    gl_FragColor = texel;
}
textures[0] is the indexed texture (the one I'm trying to colorize).
Every pixel has a color value of (0, 0, 0, 255), (1, 0, 0, 255), (2, 0, 0, 255) ... (8, 0, 0, 255) - 9 colors total, which is why it looks almost black. I want to encode my colors using the value stored in the "red channel".
textures[1] is a table of colors (9x1 pixels, each pixel a unique color; zoomed to 90x10 for posting).
So, as you can see from the fragment shader excerpt, I want to read an index value from the first texture, for example (5, 0, 0, 255), and then look up the actual color value from the pixel stored at (x=5, y=0) in the second texture. The same as written in the wiki.
But instead of a painted image I get:
Actually, I see that I can't access pixels from the second texture if I explicitly set the X coordinate, like vec2(1, 0), vec2(2, 0), vec2(4, 0) or vec2(8, 0), but I can get colors when I use vec2(0.1, 0) or vec2(0.7, 0). I guess that happens because texture space is normalized, so my 9x1 pixels are mapped to (0,0)->(1,1). But how can I "disable" that feature and simply load my palette texture so that I can just ask "give me the color value of the pixel stored at (x, y), please"?

Every pixel has color value of (0, 0, 0, 255), (1, 0, 0, 255), (2, 0, 0, 255) ... (8, 0, 0, 255)
Wrong. Every pixel has the color values (0, 0, 0, 1), (0.00392, 0, 0, 1), (0.00784, 0, 0, 1) ... (0.0313, 0, 0, 1).
Unless you're using integer or float textures (and you're not), your colors are stored as normalized floating-point values. So what you think is "255" is really just "1.0" when you fetch it from the shader; your index of 5, for example, comes out of the sampler as 5/255 ≈ 0.0196.
The correct way to handle this is to first transform the normalized values back into their non-normalized form. This is done by multiplying the value by 255. Then convert them into texture coordinates by dividing by the palette texture's width minus 1. Also, your palette texture should not be 2D:
#version 330 //Always include a version.
uniform float fade_factor;
uniform sampler2D palettedTexture;
uniform sampler1D palette;
in vec2 texcoord;
layout(location = 0) out vec4 outColor;
void main()
{
    float paletteIndex = texture(palettedTexture, texcoord).r * 255.0;
    outColor = texture(palette, paletteIndex / float(textureSize(palette, 0) - 1));
}
The above code is written for GLSL 3.30. If you're using an earlier version, translate it accordingly.
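For example, a rough sketch of how the same lookup might look in the GLSL 1.20-style syntax your own shaders use (the extra paletteSize uniform is something you'd have to add and set yourself, since textureSize() isn't available there):
uniform sampler2D palettedTexture;
uniform sampler1D palette;
uniform float paletteSize; // width of the palette texture (9 in your case); assumed extra uniform
varying vec2 texcoord;
void main()
{
    float paletteIndex = texture2D(palettedTexture, texcoord).r * 255.0;
    gl_FragColor = texture1D(palette, paletteIndex / (paletteSize - 1.0));
}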
Also, you shouldn't be using an RGBA texture for your paletted texture. It's just one channel, so use either GL_LUMINANCE or GL_R8.
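And for the palette itself, a minimal sketch of how it could be uploaded as a 9-entry 1D texture with nearest filtering, so the lookup never blends neighbouring palette entries (paletteData here is just a placeholder for your 9 RGBA byte values):
GLuint palette;
glGenTextures(1, &palette);
glBindTexture(GL_TEXTURE_1D, palette);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // no mipmaps needed, no blending between entries
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 9, 0, GL_RGBA, GL_UNSIGNED_BYTE, paletteData); // paletteData: 9 RGBA entries (placeholder name)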

Related

sampling GL_DEPTH_COMPONENTs of type GL_UNSIGNED_SHORT in GLSL shader

I have access to a depth camera's output. I want to visualise this in OpenGL using a compute shader.
The depth feed is given as a frame, and I know the width and height ahead of time. How do I sample the texture and retrieve the depth value in the shader? Is this possible? I've read through the OpenGL types here and can't find anything on unsigned shorts, so I'm starting to worry. Are there any workarounds?
My current compute shader:
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;
uniform float width;
uniform float height;
uniform sampler2D depth_feed;
void main() {
    // get index in global work group i.e x,y position
    vec2 sample_coords = ivec2(gl_GlobalInvocationID.xy) / vec2(width, height);
    float visibility = texture(depth_feed, sample_coords).r;
    vec4 pixel = vec4(1.0, 1.0, 0.0, visibility);
    // output to a specific pixel in the image
    imageStore(img_output, ivec2(gl_GlobalInvocationID.xy), pixel);
}
The depth texture definition is as follows:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, nullptr);
Currently my code produces a plain yellow screen.
If you use a perspective projection, then the depth values are not linear. See LearnOpenGL - Depth testing.
If all the depth values are near 0.0, and you use the following expression:
vec4 pixel = vec4(vec3(visibility), 1.0);
then all the pixels appear almost black. Actually the pixels are not completely black, but the difference is barely noticeable.
This happens when the far plane is "too" far away. To verify this, you can compute a power of 1.0 - visibility to make the different depth values recognizable. For instance:
float exponent = 5.0;
vec4 pixel = vec4(vec3(pow(1.0-visibility, exponent)), 1.0);
If you want a more sophisticated solution, you can linearize the depth values as explained in the answer to How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?.
Please note that for a satisfactory visualization you should use the entire range of the depth buffer ([0.0, 1.0]). The geometry must be between the near and far planes, but try to move the near and far planes as close to the geometry as possible.
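For reference, here is a minimal sketch of such a linearization as it could look in your compute shader, assuming the feed really contains standard non-linear window-space depth from a perspective projection (u_near and u_far are hypothetical uniforms you would have to add, matching the camera that produced the depth):
uniform float u_near; // near plane distance (assumed, not in the original shader)
uniform float u_far;  // far plane distance (assumed, not in the original shader)
float linearizeDepth(float depth)
{
    float ndcZ = depth * 2.0 - 1.0; // back to normalized device coordinates [-1, 1]
    return (2.0 * u_near * u_far) / (u_far + u_near - ndcZ * (u_far - u_near)); // eye-space distance
}
// in main(), divide by u_far to bring the result back into [0.0, 1.0] for display:
// vec4 pixel = vec4(vec3(linearizeDepth(visibility) / u_far), 1.0);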

OpenGL: Translating pixel brightness to a colormap texture produces an incorrect result

See gif switching between RGB and colormap:
The problem is that the two images are different.
I am drawing dots that are RGB white (1.0,1.0,1.0). The alpha channel controls pixel brightness, which creates the dot blur. That's what you see as the brighter image. Then I have a 2-pixel texture of black and white (0.0,0.0,0.0,1.0) (1.0,1.0,1.0,1.0) and in a fragment shader I do:
#version 330
precision highp float;
uniform sampler2D originalColor;
uniform sampler1D colorMap;
in vec2 uv;
out vec4 color;
void main()
{
    vec4 oldColor = texture(originalColor, uv);
    color = texture(colorMap, oldColor.a);
}
Very simply: take the originalColor texture fragment's alpha value (0 to 1) and translate it to a new color with the black-to-white colorMap texture. There should be no difference between the two images! Or... at least, that's my goal.
Here's my setup for the colormap texture:
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &colormap_texture_id); // get texture id
glBindTexture(GL_TEXTURE_1D, colormap_texture_id);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // required: stop texture wrapping
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // required: scale texture with linear sampling
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA32F, colormapColors.size(), 0, GL_RGBA, GL_FLOAT, colormapColors.data()); // setup memory
Render loop:
GLuint textures[] = { textureIDs[currentTexture], colormap_texture_id };
glBindTextures(0, 2, textures);
colormapShader->use();
colormapShader->setUniform("originalColor", 0);
colormapShader->setUniform("colorMap", 1);
renderFullScreenQuad(colormapShader, "position", "texCoord");
I am using a 1D texture as a colormap because it seems to be the only way to keep 1000 to 2000 colormap entries in GPU memory. If there's a better way, let me know. I assume the problem is that the math for interpolating between two pixels is not right for my purposes.
What should I do to get my expected results?
To make sure there were no shenanigans, I tried the following shader code:
color = texture(colorMap, oldColor.a); //incorrect results
color = texture(colorMap, (oldColor.r + oldColor.g + oldColor.b)/3); //incorrect
color = texture(colorMap, (oldColor.r + oldColor.g + oldColor.b + oldColor.a)/4); //incorrect
color = vec4(oldColor.a); //incorrect
color = oldColor; // CORRECT... obviously...
I think to be more accurate, you'd need to change:
color = texture(colorMap, oldColor.a);
to
color = texture(colorMap, oldColor.a * 0.5 + 0.25);
Or more generally
color = texture(colorMap, oldColor.a * (1.0 - (1.0 / texWidth)) + (0.5 / texWidth));
Normally you wouldn't notice the error; it's just that texWidth is so tiny here that the difference is significant.
The reason is that the texture only starts linearly filtering from black to white after you pass the centre of the first texel (at 0.25 in your 2-texel-wide texture). The interpolation is complete once you pass the centre of the last texel (at 0.75).
If you had a 1024-texel texture, like the one you mention you plan to end up with, then interpolation would start at 0.000488 and I doubt you'd notice the error.
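As a sketch, the general form above could also pull texWidth from the texture itself instead of hard-coding or passing it in (GLSL 3.30, using the colorMap sampler and variables from your shader):
// query the palette width at runtime so the half-texel offset stays correct for any size
float texWidth = float(textureSize(colorMap, 0));
color = texture(colorMap, oldColor.a * (1.0 - 1.0 / texWidth) + 0.5 / texWidth);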

Unexpected Texture Coordinate Interpolation - Processing + GLSL

Over the past few days, I have stumbled upon a particularly tricky bug. I have reduced my code down to a very simple and direct set of examples.
This is the Processing code I use to call my shaders:
PGraphics test;
PShader testShader;
PImage testImage;
void setup() {
    size(400, 400, P2D);
    testShader = loadShader("test.frag", "vert2D.vert");
    testImage = loadImage("test.png");
    testShader.set("image", testImage);
    testShader.set("size", testImage.width);
    shader(testShader);
}
void draw() {
    background(0, 0, 0);
    shader(testShader);
    beginShape(TRIANGLES);
    vertex(-1, -1, 0, 1);
    vertex(1, -1, 1, 1);
    vertex(-1, 1, 0, 0);
    vertex(1, -1, 1, 1);
    vertex(-1, 1, 0, 0);
    vertex(1, 1, 1, 0);
    endShape();
}
Here is my vertex shader:
attribute vec2 vertex;
attribute vec2 texCoord;
varying vec2 vertTexCoord;
void main() {
    gl_Position = vec4(vertex, 0, 1);
    vertTexCoord = texCoord;
}
When I call this fragment shader:
uniform sampler2D image;
varying vec2 vertTexCoord;
void main(void) {
    gl_FragColor = texture2D(image, vertTexCoord);
}
I get this:
This is the expected result. However, when I render the texture coordinates to the red and green channels instead with the following fragment shader:
uniform sampler2D image;
uniform float size;
varying vec2 vertTexCoord;
void main(void) {
    gl_FragColor = vec4(vertTexCoord, 0, 1);
}
I get this:
As you can see, a majority of the screen is black, which would indicate that at these fragments, the texture coordinates are [0, 0]. This can't be the case though, because when passed into the texture2D function, they are correctly mapped to the corresponding positions in the image. To verify that the exact same values for texture coordinates were being used in both of these cases, I combined them with the following shader.
uniform sampler2D image;
uniform float size;
varying vec2 vertTexCoord;
void main(void) {
    gl_FragColor = texture2D(image, vertTexCoord) + vec4(vertTexCoord, 0, 0);
}
This produced:
Which is exactly what you would expect if the texture coordinates did smoothly vary across the screen. So I tried a completely black image, expecting to see this variation more clearly without the face. When I did this, I got the image with the two triangles again. After playing around with it some more, I found that if I have an entirely black image except with the top left pixel transparent, I get this:
Which is finally the image I would expect with smoothly varying coordinates. This has completely stumped me. Why does the texture lookup work properly, but rendering the actual coordinates gives me mostly junk?
EDIT:
I found a solution which I have posted but am still unsure why the bug exists in the first place. I came across an interesting test case that might provide a little more information about why this is happening.
Fragment Shader:
varying vec2 vertTexCoord;
void main(void) {
    gl_FragColor = vec4(vertTexCoord, 0, 0.5);
}
Result:
I have found two different solutions. Both involve changes in the Processing code. I have no idea how or why these changes make it work.
Solution 1:
Pass down screen-space coordinates instead of clip-space coordinates and use the transform matrix generated by Processing to convert them into clip space in the vertex shader.
Processing code:
PGraphics test;
PShader testShader;
PImage testImage;
void setup() {
    size(400, 400, P2D);
    testShader = loadShader("test.frag", "vert2D.vert");
    testImage = loadImage("test.png");
    testShader.set("image", testImage);
    testShader.set("size", testImage.width);
    shader(testShader);
}
void draw() {
    background(0, 0, 0);
    shader(testShader);
    beginShape(TRIANGLES);
    //Pass down screen space coordinates instead.
    vertex(0, 400, 0, 1);
    vertex(400, 400, 1, 1);
    vertex(0, 0, 0, 0);
    vertex(400, 400, 1, 1);
    vertex(0, 0, 0, 0);
    vertex(400, 0, 1, 0);
    endShape();
}
Vertex Shader:
attribute vec2 vertex;
attribute vec2 texCoord;
uniform mat4 transform;
varying vec2 vertTexCoord;
void main() {
    //Multiply transform matrix.
    gl_Position = transform * vec4(vertex, 0, 1);
    vertTexCoord = texCoord;
}
Result:
Notice the line through the center of the screen. This is because we haven't called noStroke() in the Processing code. Still, the texture coordinates are interpolated properly.
Solution 2:
If we just call noStroke() in setup(), we can pass the clip-space coordinates down without any issues and everything works exactly as expected. No shader changes needed.
PGraphics test;
PShader testShader;
PImage testImage;
void setup() {
    size(400, 400, P2D);
    //Call noStroke()
    noStroke();
    testShader = loadShader("test.frag", "vert2D.vert");
    testImage = loadImage("test.png");
    testShader.set("image", testImage);
    testShader.set("size", testImage.width);
    shader(testShader);
}
void draw() {
    background(0, 0, 0);
    shader(testShader);
    beginShape(TRIANGLES);
    vertex(-1, -1, 0, 1);
    vertex(1, -1, 1, 1);
    vertex(-1, 1, 0, 0);
    vertex(1, -1, 1, 1);
    vertex(-1, 1, 0, 0);
    vertex(1, 1, 1, 0);
    endShape();
}
Result:
Pretty easy fix. How this one change manages to affect the way the texture coordinates are (or are not) interpolated in the fragment shader is beyond me.
If anyone who is a little more familiar with how Processing wraps OpenGL has insight into why these bugs exist, I'd be interested to hear it.

Strange coordinates when rendering to framebuffer with texture

I'm making a 2D game with large-pixel graphics. To achieve this effect, I'm rendering all images to a framebuffer whose texture is 2 times smaller than my window, and then rendering this texture to the window using a quad ({{-1,-1},{1,-1},{1,1},{-1,1}}).
This works fine, but the coordinate system when rendering to the texture is a bit strange. For example, when I use:
glBegin(GL_POINTS);
glVertex2f(-0.75, -0.75);
glEnd();
It renders a 2x2 point. I would expect this point to be at (win_w * 1/8, win_h * 7/8), but this point is at (win_w * 1/4, win_h * 3/4).
If I change the framebuffer texture size from ((win_w + 1) / 2, (win_h + 1) / 2) (2 times smaller than my screen) to ((win_w + 3) / 4, (win_h + 3) / 4) (4 times smaller than my screen), that point is now 4x4 in size and sits at (win_w * 1/2, win_h * 1/2) (the center of the window).
I think this is incorrect. AFAIK, the framebuffer coordinate system does not depend on the framebuffer texture size; (1, 1) is the top-right corner at any texture size, right?
There are no transformation matrices or anything like that, so OpenGL must not be transforming my coordinates.
I can still render with this strange coordinate system, but I don't understand why it works this way.
So, the question is: I want to render vertices at the same place inside the window with any framebuffer texture size. Is that possible? (I don't want to use transformation matrices inside the shaders, because it should work without them; I hope there is another solution.)
Shaders:
// Vertex:
#version 430
in layout(location = 0) vec2 pos;
out vec2 vPos;
void main()
{
    vPos = pos;
    gl_Position = vec4(pos.x, pos.y, 0, 1);
}
// Fragment:
#version 430
uniform layout(location = 0) sampler2D tex;
in vec2 vPos;
out vec4 color;
void main()
{
    color = texture(tex, (vPos + 1) / 2);
}
Problem solved (thanks to @RetoKoradi): the viewport has to match the size of whatever is currently being rendered to, because it defines how normalized device coordinates are mapped to pixels. Now my code looks like this:
glViewport(0, 0, 800, 600);
/// Switch shaders and framebuffer
DrawQuadWithTexture();
glViewport(0, 0, 400, 300);
/// Switch shaders and framebuffer
DrawAllStuff();

OpenGL 3D texture issue

I'm trying to use a 3D texture in OpenGL to implement volume rendering. Each voxel has an RGBA colour value and is currently rendered as a screen-facing quad (for testing purposes). I just can't seem to get the sampler to give me a colour value in the shader; the quads always end up black. When I change the shader to generate a colour (based on the xyz coords), it works fine. I'm loading the texture with the following code:
glGenTextures(1, &tex3D);
glBindTexture(GL_TEXTURE_3D, tex3D);
unsigned int colours[8];
colours[0] = Colour::AsBytes<unsigned int>(Colour::Blue);
colours[1] = Colour::AsBytes<unsigned int>(Colour::Red);
colours[2] = Colour::AsBytes<unsigned int>(Colour::Green);
colours[3] = Colour::AsBytes<unsigned int>(Colour::Magenta);
colours[4] = Colour::AsBytes<unsigned int>(Colour::Cyan);
colours[5] = Colour::AsBytes<unsigned int>(Colour::Yellow);
colours[6] = Colour::AsBytes<unsigned int>(Colour::White);
colours[7] = Colour::AsBytes<unsigned int>(Colour::Black);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, 2, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, colours);
The colours array contains the correct data, i.e. the first four bytes have values 0, 0, 255, 255 for blue. Before rendering I bind the texture to the 2nd texture unit like so:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_3D, tex3D);
And render with the following code:
shaders["DVR"]->Use();
shaders["DVR"]->Uniforms["volTex"].SetValue(1);
shaders["DVR"]->Uniforms["World"].SetValue(Mat4(vl_one));
shaders["DVR"]->Uniforms["viewProj"].SetValue(cam->GetViewTransform() * cam->GetProjectionMatrix());
QuadDrawer::DrawQuads(8);
I have used these classes for setting shader params before and they work fine. The quaddrawer draws eight instanced quads. The vertex shader code looks like this:
#version 330
layout(location = 0) in vec2 position;
layout(location = 1) in vec2 texCoord;
uniform sampler3D volTex;
ivec3 size = ivec3(2, 2, 2);
uniform mat4 World;
uniform mat4 viewProj;
smooth out vec4 colour;
void main()
{
    vec3 texCoord3D;
    int num = gl_InstanceID;
    texCoord3D.x = num % size.x;
    texCoord3D.y = (num / size.x) % size.y;
    texCoord3D.z = (num / (size.x * size.y));
    texCoord3D /= size;
    texCoord3D *= 2.0;
    texCoord3D -= 1.0;
    colour = texture(volTex, texCoord3D);
    //colour = vec4(texCoord3D, 1.0);
    gl_Position = viewProj * World * vec4(texCoord3D, 1.0) + (vec4(position.x, position.y, 0.0, 0.0) * 0.05);
}
Uncommenting the line where I set the colour value equal to the texcoord works fine and makes the quads coloured. The fragment shader is simply:
#version 330
smooth in vec4 colour;
out vec4 outColour;
void main()
{
    outColour = colour;
}
So my question is: what am I doing wrong? Why is the sampler not getting any colour values from the 3D texture?
[EDIT]
Figured it out but can't self answer (new user):
As soon as I posted this I figured it out, so I'll put the answer up to help anyone else (it's not specifically a 3D texture issue, and I've fallen afoul of it before, d'oh!). I didn't generate mipmaps for the texture, and the default magnification/minification filters weren't set to either GL_LINEAR or GL_NEAREST. Boom! No textures. The same thing happens with 2D textures.
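For reference, a minimal sketch of that fix, placed right after the glTexImage3D call (nothing else in the setup needs to change):
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // the default minification filter expects mipmaps, which were never generated
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);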