Failed attempt to create a pixelating shader for OpenGL

I'm making a libGDX based game and I've tried to make a pixelating shader. To me, it looks like it should work but it doesn't. I just see 1 color of the texture all over the screen. The goal was to turn a detailed texture into a pixelated texture.
Here is the code of my fragment shader:
precision mediump float;
varying vec4 v_color;
varying vec2 v_texCoord0;
uniform sampler2D u_sampler2D;
void main() {
    ivec2 texSize = textureSize(u_sampler2D, 0);
    vec4 color = texture2D(u_sampler2D, vec2(int(v_texCoord0.x * texSize.x) / texSize.x, int(v_texCoord0.y * texSize.y) / texSize.y)) * v_color;
    gl_FragColor = color;
}
What I am trying to do here is: I get the size of the texture. Then, with that size, I 'pixelate' v_texCoord0 and get the color of that pixel.
As soon as I remove the int casts in
int(v_texCoord0.x * texSize.x) / texSize.x and int(v_texCoord0.y * texSize.y) / texSize.y
I see the texture as normal; otherwise I see what I described at the beginning of this post. As far as I can tell, though, the mistake could be anywhere in my code.
I hope someone with experience can help me fix this problem!

You are doing an integer division here:
ivec2 texSize;
[...] int(v_texCoord0.x * texSize.x) / texSize.x
The result can only be an integer, and since v_texCoord0.x is in the range [0,1], it will produce zero everywhere except at the rightmost part of your texture, where the fragment samples exactly at the border.
You should apply floating-point math to get the effect you want:
vec2 texSize = vec2(textureSize(u_sampler2D, 0));
vec4 color = texture2D(u_sampler2D, floor(v_texCoord0 * texSize) / texSize);
(Also note that there is no need to work with the x and y components individually, you can use vector operations.)
But what you're doing here is not completely clear. Conceptually, you are emulating what GL_NEAREST filtering can do for you for free (except that your selection is shifted by half a texel), so the question is what you are actually gaining from this. If you use GL_LINEAR filtering, the above formula will always sample exactly at the texel corners, so the linear filter will average the colors of a 2x2 texel block. If you use GL_NEAREST, the formula will not give you any more pixelation than you had before; it just shifts the texture in a weird way. If you use a filter with mipmapping, the formula will completely break the mipmap selection because the equation is not continuous (this also prevents the GL from reliably distinguishing between texture minification and magnification, so it breaks more than just mipmapping).
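If the goal is a genuine pixelation effect (blocks larger than one texel), one approach is to snap the coordinates to a grid that is coarser than the texel grid and sample at each block's center. A minimal sketch, assuming an extra uniform u_pixelSize (not in the original code) giving the block size in texels:
uniform float u_pixelSize;   // assumed extra uniform: block size in texels, e.g. 8.0

vec2 texSize = vec2(textureSize(u_sampler2D, 0));
// snap to blocks of u_pixelSize texels and sample at the block center
vec2 blockUV = (floor(v_texCoord0 * texSize / u_pixelSize) + 0.5) * u_pixelSize / texSize;
gl_FragColor = texture2D(u_sampler2D, blockUV) * v_color;
With u_pixelSize = 1.0 this degenerates to plain nearest sampling at texel centers; larger values give visibly larger blocks.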

Related

OpenGL shader fill works with constant color, doesn't work with interpolation, how to debug?

We have code that mostly works filling polygons on a map, though it draws convex hulls and fills in some areas (will require tessellation).
The shader is given a set of triangle fan operations, and draws using hardcoded color yellow (and it works).
Then we try to interpolate based on the value, and it turns black (does not work).
Here is the fragment shader. The values coming in are all 0.0 to 1.0, with minVal = 0.0, maxVal = 1.0, and the colors set to (0,0,1) and (1,0,0).
While I would appreciate knowing the bug, I would much rather know how I can debug it. I need to be able to get at the values in the shader and see what is happening. In short, I need some kind of debugging facility for GLSL. I did find NVIDIA Nsight (https://developer.nvidia.com/nsight-graphics) but could not get it working on Linux.
#version 330 core
out vec4 FragColor;
//in vec2 TexCoord;
in float val;
//uniform sampler2D ourTexture;
uniform vec3 minColor;
uniform vec3 maxColor;
uniform float minVal;
uniform float maxVal;
void main()
{
    float f = (val - minVal) / (maxVal - minVal);
    //FragColor = vec4(1,1,0,1); //texture(ourTexture, f);
    FragColor = vec4(minColor*(1.0-f) + maxColor*f, 1.0);
}
It turns out that we were using glUniform4fv to set a color with RGBA components into a vec3 uniform.
There was no compile or link error, and these calls return void, so there is no error return value to check; the mismatch only shows up if you poll glGetError.
The shader also did not report anything, but the uniforms minColor and maxColor were never actually set.
Thus the interpolation was always black in
vec4(minColor*(1.0-f) + maxColor * f, 1.0);
There really should be a louder error for attempting to set an RGBA color into a vec3 variable (glUniform4fv on a vec3 uniform does raise GL_INVALID_OPERATION, but only glGetError will tell you about it).
I have since found printf-style techniques on Stack Overflow that would have allowed viewing this kind of information: Convert floating-point numbers to decimal digits in GLSL
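In the absence of a real GLSL debugger, one common workaround (a sketch, not part of the original code) is to write the suspect value straight into the output color and judge it by eye:
void main()
{
    float f = (val - minVal) / (maxVal - minVal);
    // debug view: show f as grayscale; solid black means f is 0 everywhere
    // (or that the uniforms never arrived), solid white means f >= 1
    FragColor = vec4(vec3(f), 1.0);
    // or inspect a uniform directly, e.g. FragColor = vec4(maxColor, 1.0);
}
Rendering vec4(maxColor, 1.0) here would have shown an all-black output immediately, pointing at the uniform upload rather than the interpolation.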

Can anyone explain these snippets related to WebGL

I am referring to this link to learn how to render a texture in WebGL.
I have some doubts, as it is not very easy for a beginner to understand.
What do these snippets mean in GLSL?
vec2 zeroToOne = a_position / u_resolution;
vec2 zeroToTwo = zeroToOne * 2.0;
vec2 clipSpace = zeroToTwo - 1.0;
Also, I don't want to fill the entire canvas if my image is bigger. I want to render all textures at 512 × 384 (4:3); how do I do that by modifying the vertices?
Since I wrote the sample you linked to, I'm curious how I can improve the explanation already on that site.
The sample you linked to is from this page.
That page says right at the top
This is a continuation from WebGL Fundamentals. If you haven't read that I'd suggest going there first
That page says
WebGL only cares about 2 things. Clipspace coordinates and colors. Your job as a programmer using WebGL is to provide WebGL with those 2 things. You provide 2 "shaders" to do this. A Vertex shader which provides the clipspace coordinates and a fragment shader that provides the color.
Clipspace coordinates always go from -1 to +1 no matter what size your canvas is
It then shows an example using clip space coordinates.
After that it says we'd probably rather work in pixels and shows a shader with comments that details how to convert from pixels to clip space
For 2D stuff you would probably rather work in pixels than clipspace so let's change the shader so we can supply rectangles in pixels and have it convert to clipspace for us. Here's the new vertex shader
attribute vec2 a_position;
uniform vec2 u_resolution;
void main() {
    // convert the rectangle from pixels to 0.0 to 1.0
    vec2 zeroToOne = a_position / u_resolution;

    // convert from 0->1 to 0->2
    vec2 zeroToTwo = zeroToOne * 2.0;

    // convert from 0->2 to -1->+1 (clipspace)
    vec2 clipSpace = zeroToTwo - 1.0;

    gl_Position = vec4(clipSpace, 0, 1);
}
In fact, the sample you linked to has those exact same comments in the code.
I'd love to hear any ideas how I can make that clearer
This code converts a_position from pixel coordinates (0 up to the canvas resolution) to the -1..+1 clip-space range.
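As a worked example of those three lines (numbers chosen purely for illustration): with u_resolution = (400, 300) and a_position = (200, 150), zeroToOne is (0.5, 0.5), zeroToTwo is (1.0, 1.0), and clipSpace is (0.0, 0.0), i.e. the center of the canvas. The three steps can also be collapsed into a single expression:
// pixels -> clip space in one line
gl_Position = vec4(a_position / u_resolution * 2.0 - 1.0, 0, 1);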

Smooth gradient in fragment shader

I am looking for a way to get a smooth gradient with a fragment shader. I have a palette of 11 colors and a value that is used to pick the color (it lies in the range from 0.0 to 1.0).
I am trying to get smooth color translation with such fragment shader:
#version 150 core
in float value;
uniform vec3 colors[11];
out vec4 out_Color;
void main(void)
{
    int index = int(round(value * 10));
    int floorIndex = 0;
    if (index != 0) floorIndex = index - 1;
    out_Color = vec4(mix(colors[floorIndex], colors[index], value), 1.0f);
}
But with this approach I only get a stepped color distribution.
My desired result is a smooth gradient across the whole range.
I know how to get this with a pass-through shader by simply passing the color as an attribute, but that is not the way I want to go. I want to get such a smooth distribution from the single float value passed to the fragment shader.
Your mixing function is not really usefully applied here:
mix(colors[floorIndex], colors[index], value)
The problem is that, while value is in [0,1], it is not the proper mixing factor. You need it scaled to [0,1] for the sub-range you have selected. E.g., your code uses floorIndex=2 and index=3 when value is in [0.25, 0.35), so you need a mix factor which is 0.0 when value is 0.25, 0.5 when it is 0.3, and approaching 1.0 as it reaches 0.35 (at 0.35 the round switches to the next step, of course).
So you will need something like:
float blend = (value * 10.0) - float(floorIndex) - 0.5;
mix(colors[floorIndex], colors[index], blend)
That can probably be optimized a bit by using the fract() operation.
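A sketch of the whole shader rewritten with fract() (note that this variant anchors each palette color exactly at value = i/10 rather than half a step later, which is usually what you want, so it is not a bit-for-bit equivalent of the original):
#version 150 core
in float value;           // expected in [0.0, 1.0]
uniform vec3 colors[11];
out vec4 out_Color;
void main(void)
{
    float scaled = value * 10.0;          // spread [0,1] over the 10 palette segments
    int lower = int(floor(scaled));       // lower palette entry of the current segment
    int upper = min(lower + 1, 10);       // upper entry, clamped for value == 1.0
    float blend = fract(scaled);          // 0.0 at the lower entry, rising towards 1.0
    out_Color = vec4(mix(colors[lower], colors[upper], blend), 1.0);
}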
Another thing which comes to mind is that you could use a (1D) texture for your palette and enable GL_LINEAR filtering. In that case, you can directly use value as texture coordinate and will get the result you need. This will be much simpler, and likely also more efficient as it moves most of the operations to the specialised texture sampling hardware. So if you don't have a specific reason for not using a texture, I strongly recommend doing that.
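A sketch of that texture-based variant, assuming the 11 palette entries have been uploaded into a 1D texture bound to a sampler1D uniform (called palette here, not part of the original code) with GL_LINEAR filtering and GL_CLAMP_TO_EDGE wrapping:
#version 150 core
in float value;                // in [0.0, 1.0], used directly as the texture coordinate
uniform sampler1D palette;     // assumed: 11-texel 1D texture, GL_LINEAR, GL_CLAMP_TO_EDGE
out vec4 out_Color;
void main(void)
{
    out_Color = vec4(texture(palette, value).rgb, 1.0);
}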

Blending lightmap with diffuse texture

I'm using C++, OpenGL 4.0, and the GLSL shading language.
I'm wondering how to correctly blend a diffuse texture with a lightmap texture.
Let's assume that we have a room. Every object has a diffuse texture and a lightmap. On every forum, like gamedev.net or Stack Overflow, people say that those textures should be multiplied. In most cases that gives good results, but sometimes an object is very close to a light source (for example a white bulb). For such close objects the light source generates a white lightmap. But when we multiply the diffuse texture with a white lightmap, we just get the original diffuse texture color back.
But if the light source is close to some object, then the color of the light should be dominant.
That means that if a strong white light is close to a red wall, part of that wall should appear white, not red!
I think I need something more than just one lightmap. A lightmap carries no information about light intensity, which means the brightest possible result is just the maximum diffuse color.
Maybe I should have two textures, a shadowmap and a lightmap? Then the equation would look like this:
vec3 color = shadowmapColor * diffuseTextureColor + lightmapColor;
Is it good approach?
Generally speaking, if you're still using lightmaps, you are probably also not using HDR rendering. And without that, what you want is not particularly reasonable. Unless your light map provides the light intensity as an HDR floating-point value (perhaps in a GL_R11F_G11F_B10F or GL_RGBA16F format), this is not going to work very well.
And of course, you'll have to do the usual stuff that you do with HDR, such as tone mapping and so forth.
Lastly, your additive equation makes no sense. If the lightmap color represents the diffuse interaction between the light and the surface, then simply adding the lightmap color doesn't mean anything. The standard diffuse lighting equation is C * (dot(N, L) * I * D), where N is the surface normal, L is the direction to the light, I is the light intensity, D is the distance attenuation factor, and C is the diffuse color. The value stored in the lightmap is presumably the parenthesized quantity, so adding it makes no sense.
It still needs to be multiplied with the surface's diffuse color. Any over-brightening will come from the effective intensity of the light as a function of D.
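In shader terms, the point of this answer is just the usual modulation, sketched here with assumed sampler names (DiffuseMap, LightMap) and an HDR-capable lightmap format as described above:
uniform sampler2D DiffuseMap;
uniform sampler2D LightMap;    // assumed to hold HDR light values, e.g. GL_R11F_G11F_B10F
in vec2 TexCoord;
in vec2 LightMapCoord;
out vec4 FragColor;
void main()
{
    vec3 diffuse = texture(DiffuseMap, TexCoord).rgb;
    vec3 light   = texture(LightMap, LightMapCoord).rgb;   // the parenthesized lighting term
    // modulate; tone mapping later brings the HDR result back into displayable range
    FragColor = vec4(diffuse * light, 1.0);
}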
What you need is the distance (or to save some sqrt-ing, the squared distance) of the light source to the fragment being illuminated. Then you can, in the simplest case, interpolate linearly between the light map and light source contributions:
The distance is a simple calculation which can be done per vertex in your vertex shader:
in vec4 VertexPosition;      // let's assume world space for simplicity
uniform vec4 LightPosition;  // world space - might also be part of a uniform block etc.
out float LightDistance;     // pass the distance to the fragment shader

// other stuff you need here ...

void main()
{
    // do stuff
    LightDistance = length(VertexPosition - LightPosition);
}
In your fragment shader, you use the distance to compute an interpolation factor between the light source and lightmap contributions:
in float LightDistance;

const float MAX_DISTANCE = 10.0;

uniform sampler2D LightMap;
// other stuff ...

out vec4 FragColor;

void main()
{
    vec4 LightContribution;
    // calculate illumination (including shadow map evaluation) here
    // and store it in LightContribution

    vec4 LightMapContribution = texture(LightMap, /* tex coords here */);

    // The following DistanceFactor maps distances in the range [0, MAX_DISTANCE] to
    // [0, 1]. The idea is that at LightDistance >= MAX_DISTANCE, the light source
    // doesn't contribute anymore.
    float DistanceFactor = min(1.0, LightDistance / MAX_DISTANCE);

    // linearly interpolate between LightContribution and LightMapContribution
    vec4 FinalContribution = mix(LightContribution, LightMapContribution, DistanceFactor);

    FragColor = WhatEverColor * vec4(FinalContribution.xyz, 1.0);
}
HTH.
EDIT: To factor in Nicol Bolas' remarks, I assume that the LightMap stores the contribution encoded as an RGB color, with a contribution per channel. If you actually have a single-channel lightmap which only stores monochromatic contributions, you'll have to either use the surface color, use the color of the light source, or reduce the light source contribution to a single channel.
EDIT2: Although this works mathematically, it's definitely not physically sound. You might need some correction of the final contribution to make it at least physically plausible. If you're only aiming for an effect, you can simply play around with correction factors until you're satisfied with the result.

How to get pixel information inside a fragment shader?

In my fragment shader I can load a texture, then do this:
uniform sampler2D tex;
void main(void) {
    vec4 color = texture2D(tex, gl_TexCoord[0].st);
    gl_FragColor = color;
}
That sets the current pixel to the color value of the texture. I can modify these values, etc., and it works well.
But I have a few questions. How do I tell "which" pixel I am? For example, say I want to set pixel 100,100 (x,y) to red and everything else to black. How do I do something like:
"if currentSelf.Position() == (100,100) then color = red else color = black"?
I know how to set colors, but how do I get "my" location?
Secondly, how do I get values from a neighbor pixel?
I tried this:
vec4 nextColor = texture2D(tex, gl_TexCoord[1].st);
But it's not clear what that returns. If I'm pixel 100,100, how do I get the values from 101,100 or 100,101?
How do I tell "which" pixel I am?
You're not a pixel. You're a fragment. There's a reason OpenGL calls them "fragment shaders": they aren't pixels yet. Indeed, not only may they never become pixels (via discard or depth tests or whatever), but thanks to multisampling, multiple fragments can combine to form a single pixel.
If you want to tell where your fragment shader is in window-space, use gl_FragCoord. Fragment positions are floating-point values, not integers, so you have to test with a range instead of a single "100, 100" value.
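A minimal sketch of that range test, in the same old-style GLSL as the question (gl_FragCoord.xy is (100.5, 100.5) at the center of window pixel 100,100):
void main(void) {
    // the fragment covering window pixel (100, 100) has gl_FragCoord.xy in [100, 101)
    if (gl_FragCoord.x >= 100.0 && gl_FragCoord.x < 101.0 &&
        gl_FragCoord.y >= 100.0 && gl_FragCoord.y < 101.0)
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);   // red
    else
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);   // black
}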
Secondly, how do I get values from a neighbor pixel?
If you're talking about the neighboring framebuffer pixel, you don't. Fragment shaders cannot arbitrarily read from the framebuffer, either in their own position or in a neighboring one.
If you're talking about accessing a neighboring texel from the one you accessed, then that's just a matter of biasing the texture coordinate you pass to texture2D. You have to get the size of the texture (since you're not using GLSL 1.30 or above, you have to manually pass this in), invert the size and either add or subtract these sizes from the S and T component of the texture coordinate.
Easy peasy.
Just compute the size of a pixel based on resolution. Then look up +1 and -1.
vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
gl_FragColor = (
    texture2D(u_image, v_texCoord) +
    texture2D(u_image, v_texCoord + vec2(onePixel.x, 0.0)) +
    texture2D(u_image, v_texCoord + vec2(-onePixel.x, 0.0))) / 3.0;
There's a good example here