I've written the following shader to perform a bright pass of my scene so I can extract luminance for later blurring as part of a "glow" effect.
// "Bright" pixel shader.
#version 420
uniform sampler2D Map_Diffuse;
uniform float uniform_Threshold;
in vec2 attrib_Fragment_Texture;
out vec4 Out_Colour;
void main(void)
{
    vec3 luminances = vec3(0.2126, 0.7152, 0.0722);
    vec4 texel = texture2D(Map_Diffuse, attrib_Fragment_Texture);
    float luminance = dot(luminances, texel.rgb);
    luminance = max(0.0, luminance - uniform_Threshold);
    texel.rgb *= sign(luminance);
    texel.a = 1.0;
    Out_Colour = texel;
}
The bright areas are successfully extracted, however some features in the scene are sometimes unstable, resulting in pixels that flicker on and off for a while. When this is blurred the effect is more pronounced, with bits of glow flickering too. The artifacts occur in, for example, the third image in the screenshot I've posted, where the object is in shadow and so there's far less luminance in the scene. They mostly appear as the object rotates from facing away from the light to facing towards it, where an edge is just catching the light.
My question is to ask whether there's a way you can detect and mitigate this in the shader. Note that the bright pass is part of a general down-sample, from screen resolution to 512x512.
You could also read the surrounding pixels and base your calculation on those, kind of like what's done here.
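As an illustration of that idea (a sketch of my own, not code from the linked example): average the luminance of a small neighbourhood before thresholding, and replace the hard sign() cut-off with smoothstep() so pixels near the threshold fade instead of popping. The uniform_TexelSize uniform is an assumption here; the application would need to set it to 1.0 / the source resolution.
// Sketch: neighbourhood-averaged luminance with a soft threshold.
#version 420
uniform sampler2D Map_Diffuse;
uniform float uniform_Threshold;
uniform vec2 uniform_TexelSize; // assumed uniform: 1.0 / source resolution
in vec2 attrib_Fragment_Texture;
out vec4 Out_Colour;
void main(void)
{
    const vec3 luminances = vec3(0.2126, 0.7152, 0.0722);
    // Average the luminance of the 3x3 neighbourhood so a single noisy pixel
    // can't flip the result on its own.
    float luminance = 0.0;
    for (int y = -1; y <= 1; ++y)
    {
        for (int x = -1; x <= 1; ++x)
        {
            vec2 uv = attrib_Fragment_Texture + vec2(x, y) * uniform_TexelSize;
            luminance += dot(luminances, texture(Map_Diffuse, uv).rgb);
        }
    }
    luminance /= 9.0;
    vec4 texel = texture(Map_Diffuse, attrib_Fragment_Texture);
    // Fade in over a small band above the threshold instead of a hard sign() cut-off.
    texel.rgb *= smoothstep(uniform_Threshold, uniform_Threshold + 0.05, luminance);
    texel.a = 1.0;
    Out_Colour = texel;
}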
I am working on a 2d project and I noticed the following issue:
As you can see in the gif above, when the object is making small movements, its vertices jitter.
To render, every frame I clear a VBO, calculate the new positions of the vertices and then upload them into the VBO. Every frame I build the exact same structure, just from a different origin.
Is there a way to get smooth motion even when the displacement between each frame is so minor?
I am using SDL2 so double buffering is enabled by default.
This is a minor issue, but it becomes very annoying once I apply a texture to the model.
Here is the vertex shader I am using:
#version 330 core
layout (location = 0) in vec2 in_position;
layout (location = 1) in vec2 in_uv;
layout (location = 2) in vec3 in_color;
uniform vec2 camera_position, camera_size;
void main() {
gl_Position = vec4(2 * (in_position - camera_position) / camera_size, 0.0f, 1.0f);
}
What you see is caused by the rasterization algorithm. Consider the following two rasterizations of the same geometry (red lines) offset by only half a pixel:
As can be seen, shifting by just half a pixel can change the perceived spacing between the vertical lines from three pixels to two pixels. Moreover, the horizontal lines didn't shift, therefore their appearance didn't change.
This inconsistent behavior is what manifests as "wobble" in your animation.
One way to solve this is to enable anti-aliasing with glEnable(GL_LINE_SMOOTH). Make sure to have correct blending enabled. This will, however, result in blurred lines when they fall right between the pixels.
If instead you really need the crisp, jagged-line look (e.g. pixel art), then you need to make sure that your geometry only ever moves by an integer number of pixels:
vec2 scale = 2.0 / camera_size;
vec2 offset = -scale * camera_position;
vec2 pixel_size = 2.0 / viewport_size;
offset = round(offset / pixel_size) * pixel_size; // snap to pixels
gl_Position = vec4(scale * in_position + offset, 0.0f, 1.0f);
Add viewport_size as a uniform.
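For reference, a minimal sketch of the whole vertex shader with the snapping applied (viewport_size is the new uniform; the unused in_uv and in_color attributes from the original shader are left out here):
#version 330 core
layout (location = 0) in vec2 in_position;
uniform vec2 camera_position, camera_size;
uniform vec2 viewport_size; // framebuffer size in pixels, set by the application
void main() {
    vec2 scale = 2.0 / camera_size;
    vec2 offset = -scale * camera_position;
    vec2 pixel_size = 2.0 / viewport_size;
    offset = round(offset / pixel_size) * pixel_size; // snap the camera offset to whole pixels
    gl_Position = vec4(scale * in_position + offset, 0.0, 1.0);
}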
I'm making a libGDX based game and I've tried to make a pixelating shader. To me, it looks like it should work but it doesn't. I just see 1 color of the texture all over the screen. The goal was to turn a detailed texture into a pixelated texture.
Here is the code of my fragment shader:
precision mediump float;
varying vec4 v_color;
varying vec2 v_texCoord0;
uniform sampler2D u_sampler2D;
void main(){
    ivec2 texSize = textureSize(u_sampler2D, 0);
    vec4 color = texture2D(u_sampler2D, vec2(int(v_texCoord0.x * texSize.x) / texSize.x,
                                             int(v_texCoord0.y * texSize.y) / texSize.y)) * v_color;
    gl_FragColor = color;
}
What I am trying to do here is: I get the size of the texture. Then, with that size, I 'pixelate' v_texCoord0 and get the color of that pixel.
As soon as I remove the int casts in
int(v_texCoord0.x * texSize.x) / texSize.x, int(v_texCoord0.y * texSize.y) / texSize.y
I see the texture as normal; otherwise I see what I've described at the beginning of this post. As far as I can tell, though, anything in my code could be wrong.
I hope someone with experience could help me fix this problem!
You are doing an integer division here:
ivec2 texSize;
[...] int(v_texCoord0.x * texSize.x) / texSize.x
The result can only be an integer, and if v_texCoord0.x is in the range [0,1], it will produce zero everywhere except at the very right edge of your texture, where the fragment samples exactly at the texture border (coordinate 1.0).
You should apply floating-point math to get the effect you want:
vec2 texSize=vec2(textureSize( u_sampler2D, 0));
vec4 color = texture2D(u_sampler2D, floor(v_texCoord0 * texSize) / texSize);
(Also note that there is no need to work with the x and y components individually, you can use vector operations.)
But what you're doing here is not completely clear. Conceptually, you are emulating what GL_NEAREST filtering can do for you for free (except that your selection is shifted by half a texel), so the question is: what are you gaining from this? If you use GL_LINEAR filtering, the above formula will always sample at the texel corners, so the linear filter will average the color of a 2x2 texel block. If you use GL_NEAREST, the formula will not give you any more pixelation than you had before, it just shifts the texture in a weird way. If you use some filter with mipmapping, the formula will completely break the mipmap selection due to the non-continuous nature of the equation (it will also prevent the GL from reliably discerning between texture minification and magnification, so it doesn't only break mipmapping).
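If the goal really is visible pixelation, i.e. blocks larger than one texel, the block size has to be greater than 1. A minimal sketch, assuming a GLSL ES 2.0 style environment where textureSize may not be available (so the texture size is passed in as a uniform) and a hypothetical u_blockSize uniform giving the block size in texels:
// Sketch: snap texture coordinates to blocks of u_blockSize texels.
precision mediump float;
varying vec4 v_color;
varying vec2 v_texCoord0;
uniform sampler2D u_sampler2D;
uniform vec2 u_texSize;    // texture size in texels, set by the application
uniform float u_blockSize; // hypothetical: block size in texels, e.g. 4.0
void main() {
    vec2 blocks = u_texSize / u_blockSize;                  // number of blocks along each axis
    vec2 uv = (floor(v_texCoord0 * blocks) + 0.5) / blocks; // sample at the centre of each block
    gl_FragColor = texture2D(u_sampler2D, uv) * v_color;
}
With GL_NEAREST filtering, every fragment inside a block then samples the same texel, which gives the blocky look.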
To lighten the work-load on my artist, I'm working on making a bare-bones skeletal animation system for images. I've conceptualized how all the parts work. But to make it animate how I want to, I need to be able to skew an image in real time. (If I can't do that, I still know how to make it work, but it'd be a lot prettier if I could skew images to add perspective)
I'm using SFML, which doesn't support image skewing. I've been told I'd need to use OpenGL shaders on the image, but all the advice I got on that was "learn GLSL". So I figured maybe someone over here could help me out a bit.
Does anyone know where I should start with this? Or how to do it?
Basically, I want to be able to deform an image to give it perspective (as shown in the following mock-up):
The images on the left are being skewed at the top and bottom. The images on the right are being skewed at the left and right.
An example of how to skew a texture in GLSL would be the following (poorly optimized) shader. Ideally, you would want to precompute your transform matrix in your regular program and pass it in as a uniform, so that you aren't recomputing the transform on every run through the shader. If you still wanted to compute the transform in the shader, pass the skew factors in as uniforms instead; otherwise you'll have to open the shader and edit it every time you want to change the skew factor.
This is for a screen aligned quad as well.
Vert
attribute vec3 aVertexPosition;
varying vec2 texCoord;
void main(void){
    // Set regular texture coords
    texCoord = ((vec2(aVertexPosition.xy) + 1.0) / 2.0);
    // How much we want to skew each axis by
    float xSkew = 0.0;
    float ySkew = 0.0;
    // Create a transform that will skew our texture coords
    mat3 trans = mat3(
        1.0,        tan(xSkew), 0.0,
        tan(ySkew), 1.0,        0.0,
        0.0,        0.0,        1.0
    );
    // Apply the transform to our tex coords
    texCoord = (trans * vec3(texCoord.xy, 0.0)).xy;
    // Set vertex position
    gl_Position = vec4(aVertexPosition, 1.0);
}
Frag
precision highp float;
uniform sampler2D sceneTexture;
varying vec2 texCoord;
void main(void){
gl_FragColor = texture2D(sceneTexture, texCoord);
}
This ended up being significantly simpler than I thought it would be. SFML has a VertexArray class that allows drawing custom quads without requiring the use of OpenGL.
The code I ended up going with is as follows (for anyone else who runs into this problem):
sf::Texture texture;
texture.loadFromFile("texture.png");
sf::Vector2u size = texture.getSize();
sf::VertexArray box(sf::Quads, 4);
box[0].position = sf::Vector2f(0, 0);     // top left corner position
box[1].position = sf::Vector2f(0, 100);   // bottom left corner position
box[2].position = sf::Vector2f(100, 100); // bottom right corner position
box[3].position = sf::Vector2f(100, 0);   // top right corner position
box[0].texCoords = sf::Vector2f(0,0);
box[1].texCoords = sf::Vector2f(0,size.y-1);
box[2].texCoords = sf::Vector2f(size.x-1,size.y-1);
box[3].texCoords = sf::Vector2f(size.x-1,0);
To draw it, you call the following code wherever you usually tell your window to draw stuff.
window.draw(box, &texture);
If you want to skew the image, you just change the positions of the corners. Works great. With this information, you should be able to create a custom drawable class. You'll have to write a bit of code (set_position, rotate, skew, etc), but you just need to change the position of the corners and draw.
I'm using C++, OpenGL 4.0 and the GLSL shading language.
I'm wondering how to correctly blend a diffuse texture with a lightmap texture.
Let's assume that we have a room. Every object has a diffuse texture and a lightmap. On every forum, like gamedev.net or stackoverflow, people say that those textures should be multiplied, and in most cases that gives good results. But sometimes an object is very close to a light source (for example a white bulb). For such close objects the light source generates a white lightmap, and when we multiply the diffuse texture by a white lightmap, we just get the original diffuse texture color back.
But if a light source is close to some object, then the color of the light should be dominant.
That means that if a strong white light is close to a red wall, then part of that wall should appear white, not red!
I think I need something more than just one lightmap. The lightmap carries no information about light intensity, so the brightest possible result is just the maximum diffuse color.
Maybe I should have two textures - a shadowmap and a lightmap? Then the equation would look like this:
vec3 color = shadowmapColor * diffuseTextureColor + lightmapColor;
Is this a good approach?
Generally speaking, if you're still using lightmaps, you are probably also not using HDR rendering. And without that, what you want is not particularly reasonable. Unless your light map provides the light intensity as an HDR floating-point value (perhaps in a GL_R11F_G11F_B10F or GL_RGBA16F format), this is not going to work very well.
And of course, you'll have to do the usual stuff that you do with HDR, such as tone mapping and so forth.
Lastly, your additive equation makes no sense. If the light map color represents the diffuse interaction between the light and the surface, then simply adding the light map color doesn't mean anything. The standard diffuse lighting equation is C * (dot(N, L) * I * D), where I is the light intensity, D is the distance attenuation factor, and C is the diffuse color. The value from the lightmap is presumably the parenthesized quantity. So adding it doesn't make sense.
It still needs to be multiplied with the surface's diffuse color. Any over-brightening will be due to the effective intensity of the light as a function of D.
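As an illustration of that multiplicative combine (a sketch, not part of the answer above, assuming a floating-point HDR lightmap and a simple Reinhard tone map; the texture names are made up):
// Sketch: albedo multiplied by an HDR lightmap value, then tone mapped.
#version 330 core
uniform sampler2D DiffuseMap; // illustrative name
uniform sampler2D LightMap;   // assumed to be an HDR (floating-point) texture
in vec2 TexCoord;
out vec4 FragColor;
void main()
{
    vec3 albedo = texture(DiffuseMap, TexCoord).rgb;
    vec3 light  = texture(LightMap, TexCoord).rgb; // may exceed 1.0 near a bright light
    vec3 hdr    = albedo * light;                  // the usual multiplicative combine
    vec3 mapped = hdr / (hdr + vec3(1.0));         // simple Reinhard tone mapping
    FragColor = vec4(mapped, 1.0);
}
With light values above 1.0, the tone-mapped result of a red wall next to a very bright white light does drift towards white, which is the behaviour the question asks for.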
What you need is the distance (or, to save some sqrt-ing, the squared distance) of the light source to the fragment being illuminated. Then you can, in the simplest case, interpolate linearly between the light map and light source contributions.
The distance is a simple calculation which can be done per vertex in your vertex shader:
in vec4 VertexPosition;     // let's assume world space for simplicity
uniform vec4 LightPosition; // world space - might also be part of a uniform block etc.
out float LightDistance;    // pass the distance to the fragment shader
// other stuff you need here ....
void main()
{
    // do stuff
    LightDistance = length(VertexPosition - LightPosition);
}
In your fragment shader, you use the distance to compute an interpolation factor between the light source and lightmap contributions:
in float LightDistance;
const float MAX_DISTANCE = 10.0;
uniform sampler2D LightMap;
// other stuff ...
out vec4 FragColor;
void main()
{
    vec4 LightContribution;
    // calculate illumination (including shadow map evaluation) here
    // and store it in LightContribution
    vec4 LightMapContribution = texture(LightMap, /* tex coords here */);
    // The following DistanceFactor will map distances in the range [0, MAX_DISTANCE] to
    // [0, 1]. The idea is that at LightDistance >= MAX_DISTANCE, the light source
    // doesn't contribute anymore.
    float DistanceFactor = min(1.0, LightDistance / MAX_DISTANCE);
    // linearly interpolate between LightContribution and LightMapContribution
    vec4 FinalContribution = mix(LightContribution, LightMapContribution, DistanceFactor);
    // WhatEverColor stands for the surface's base/diffuse color
    FragColor = WhatEverColor * vec4(FinalContribution.xyz, 1.0);
}
HTH.
EDIT: To factor in Nicol Bolas' remarks, I assume that the LightMap stores the contribution encoded as an RGB color, one contribution per channel. If you actually have a single-channel lightmap which only stores a monochromatic contribution, you'll have to either use the surface color, use the color of the light source, or reduce the light source contribution to a single channel.
EDIT2: Although this works mathematically, it's definitely not physically sound. You might need some correction of the final contribution to make it at least physically plausible. If you're only aiming for effect, you can simply play around with correction factors until you're satisfied with the result.
I'm trying to write a simple shader to put a "mark" texture (64×64) onto a base texture (128×128). To indicate where the mark must go, I use a cyan-colored, mark-sized (64×64) region on the base texture.
Fragment shader
precision lowp float;
uniform sampler2D us_base_tex;
uniform sampler2D us_mark_tex;
varying vec2 vv_base_tex;
varying vec2 vv_mark_tex;
const vec4 c_mark_col = vec4(0.0, 1.0, 1.0, 1.0);//CYAN
void main()
{
    vec4 base_col = texture2D(us_base_tex, vv_base_tex);
    if (base_col == c_mark_col)
    {
        vec4 mark_col = texture2D(us_mark_tex, vv_mark_tex); // some texelFetch magic must be needed over here
        base_col = mix(base_col, mark_col, mark_col.a);
    }
    gl_FragColor = base_col;
}
Of course, it doesn't work as it should; I get something like this (transparency is only for demonstration, there is no cyan region, only a piece of the "T"):
I've tried to figure it out, and I think only something like texelFetch will help me, but I can't work out how to get the texture coordinate of the base texture's cyan texel and convert it so that the first column/first row of the cyan region maps to the first column/first row of the mark, the second column/first row of the base maps to the second column/first row of the mark, etc.
I think there's a way to do this in a single pass - but it involves using another texture that is capable of holding the information presented below. So you're going to increase your texture memory usage.
In this approach, the second texture (which can be generated by post-processing the original texture, offline or at run time) contains the UV map for the decal:
R = normalized distance from left of cyan square
G = normalized distance from the top of the cyan square
B = don't care
Now the pixel shader is simple: all it needs to do is check whether the current texel is cyan, pick the R and G from the "decal-uvmap" texture, and use those as texture coords to sample the decal texture.
Note that the bit depth of this texture (and its size) is related to the size of the original texture, so it may be possible to get away with a much smaller "decal-uvmap" texture than the original.
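A sketch of what that pixel shader could look like, reusing the names from the question's shader (us_uvmap_tex is the extra "decal-uvmap" texture described above; the name is made up):
// Sketch: look up the decal UVs from a pre-generated "decal-uvmap" texture
// whose R/G channels store the normalized distances from the left/top of the
// cyan square, then sample the mark texture with them.
precision lowp float;
uniform sampler2D us_base_tex;
uniform sampler2D us_mark_tex;
uniform sampler2D us_uvmap_tex; // assumed extra texture described above
varying vec2 vv_base_tex;
const vec4 c_mark_col = vec4(0.0, 1.0, 1.0, 1.0); // CYAN
void main()
{
    vec4 base_col = texture2D(us_base_tex, vv_base_tex);
    // exact compare, as in the question; assumes the cyan marker is stored losslessly
    if (base_col == c_mark_col)
    {
        vec2 mark_uv = texture2D(us_uvmap_tex, vv_base_tex).rg; // R/G hold the decal UVs
        vec4 mark_col = texture2D(us_mark_tex, mark_uv);
        base_col = mix(base_col, mark_col, mark_col.a);
    }
    gl_FragColor = base_col;
}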