Simulating diffusion equation in GLSL - opengl

I'm trying to simulate diffusion in glsl (not the Gray Scott reaction-diffusion equation), and I seem to be having trouble getting it to work quite right. In all of my tests so far, diffusion stops at a certain point and reaches an equilibrium long before I expect it to.
My glsl code:
#version 460
#define KERNEL_SIZE 9
float kernel[KERNEL_SIZE];
vec2 offset[KERNEL_SIZE];
uniform float width;
uniform float height;
uniform sampler2D current_concentration_tex;
uniform float diffusion_constant; // rate of diffusion of U
in vec2 vTexCoord0;
out vec4 fragColor;
void main(void)
{
float w = 1.0/width;
float h = 1.0/height;
kernel[0] = 0.707106781;
kernel[1] = 1.0;
kernel[2] = 0.707106781;
kernel[3] = 1.0;
kernel[4] = -6.82842712;
kernel[5] = 1.0;
kernel[6] = 0.707106781;
kernel[7] = 1.0;
kernel[8] = 0.707106781;
offset[0] = vec2( -w, -h);
offset[1] = vec2(0.0, -h);
offset[2] = vec2( w, -h);
offset[3] = vec2( -w, 0.0);
offset[4] = vec2(0.0, 0.0);
offset[5] = vec2( w, 0.0);
offset[6] = vec2( -w, h);
offset[7] = vec2(0.0, h);
offset[8] = vec2( w, h);
float chemical_density = texture( current_concentration_tex, vTexCoord0 ).r; // reference texture from last frame
float laplacian = 0.0; // accumulator must be initialized
for( int i=0; i<KERNEL_SIZE; i++ ){
float tmp = texture( current_concentration_tex, vTexCoord0 + offset[i] ).r;
laplacian += tmp * kernel[i];
}
float du = diffusion_constant * laplacian; // diffusion equation
chemical_density += du; // diffuse; dt * du
//chemical_density *= 0.9999; // decay
fragColor = vec4( clamp(chemical_density, 0.0, 1.0 ), 0.0, 0.0, 1.0 );
}
If the diffusion simulation were working properly, I would expect to be able to draw a "source" each frame and have the chemical slowly diffuse away from the source into the rest of the frame. However, the diffusion seems to "stop" early and doesn't fade any further. For example, when I draw a constant ring in the center as my source:
I also tried it with a one-time initial condition of chemicals in the center, with no additional chemicals added -- again, I would expect it to fade toward zero as it spreads evenly across the entire frame, but instead it stops much earlier than I would expect:
Is there something wrong with my simulation code in glsl? Or is this more of a numerical methods kind of issue? Would expanding from a 3x3 kernel to a larger one improve the situation?

(Moved from comment)
This is most probably a precision issue -- are you rendering it into an 8-bit target? You probably want, at least, a 32-bit float. With an 8-bit normalized texture, as soon as the per-frame change du is smaller than what one 8-bit step (1/255) can represent, it is quantized away and the simulation freezes at a false equilibrium, exactly like you describe.
Since you use modern OpenGL, it is best to create your textures with glTextureStorage2D and specify GL_RGBA32F for the internalformat. There's no such thing as a "default format for my GL texture was 8-bit RGBA", unless you use a legacy API, a non-standard extension, or render directly to the screen (the default framebuffer).
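For example, a minimal setup with the DSA API might look like this (simWidth/simHeight and the single color attachment are placeholders, not taken from your code):
const GLsizei simWidth = 512, simHeight = 512; // placeholder simulation resolution
GLuint tex = 0, fbo = 0;
glCreateTextures(GL_TEXTURE_2D, 1, &tex);
glTextureStorage2D(tex, 1, GL_RGBA32F, simWidth, simHeight); // immutable 32-bit float storage
glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTextureParameteri(tex, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glCreateFramebuffers(1, &fbo);
glNamedFramebufferTexture(fbo, GL_COLOR_ATTACHMENT0, tex, 0); // render the next simulation step into this texture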

Related

Finding the light ray that goes from light world position through the shadow map texel

I want to move from basic shadow mapping on to adaptive biased shadow mapping.
I found a paper which describes how to do it, but I am not sure how to achieve a certain step in the process:
The idea is to have a plane P (defined by the normal of the current fragment's surface in the fragment shader stage) and the world-space position of the fragment (F1 in the picture above).
In order to calculate the correct bias (to fight shadow acne) I need to find the world-space position of F2, which I can get by shooting a ray from the light source through the center of the shadow map texel. This ray eventually hits the plane P, which gives me the needed point F2.
With F1 and F2 known, I can then calculate the distance between F1 and F2 along the light ray (I guess) and thus get the ideal bias to fight shadow acne.
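(The intersection itself is just a standard ray-plane test, roughly like this sketch with placeholder names:)
// Ray from the light: origin L0 (light position), direction d (through the texel center).
// Plane: passes through F1 with normal n. Solve dot(n, (L0 + t*d) - F1) = 0 for t:
float t = dot(n, F1 - L0) / dot(n, d);
vec3 F2 = L0 + t * d;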
Right now my basic shader code looks like this:
Vertex shader:
in vec3 aLocalObjectPos;
out vec4 vShadowCoord;
out vec3 vF1;
// to shift the coordinates from [-1;1] to [0;1]
const mat4 biasMatrix = mat4(
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0
);
void main()
{
// get the vertex position in the light's view space:
vShadowCoord = (biasMatrix * viewProjShadowMap * modelMatrix) * vec4(aLocalObjectPos, 1.0);
vF1 = (modelMatrix * vec4(aLocalObjectPos, 1.0)).xyz;
}
Helper method in fragment shader:
uniform sampler2DShadow uTextureShadowMap;
in vec4 vShadowCoord; // passed from the vertex shader
float calculateShadow(float bias)
{
vec4 shadowCoord = vShadowCoord; // 'in' variables are read-only, so copy before applying the bias
shadowCoord.z -= bias;
return textureProjOffset(uTextureShadowMap, shadowCoord, ivec2(0, 0));
}
My problem now is:
How do I get the light ray that goes from the light source through the shadow map's texel center?
I already found this topic: Adaptive Depth Bias for Shadow Maps Ray Casting
Unfortunately there is no answer and I don't quite get all the things the author is talking about :-/
So, I think I have figured it out myself. I followed the directions in this paper:
http://cwyman.org/papers/i3d14_adaptiveBias.pdf
Vertex Shader (not much going on there):
const mat4 biasMatrix = mat4(
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0
);
in vec4 aPosition; // vertex in model's local space (not modified in any way)
uniform mat4 uVPShadowMap; // light's view-projection matrix
out vec4 vShadowCoord;
void main()
{
// ...
vShadowCoord = (biasMatrix * uVPShadowMap * uModelMatrix) * aPosition;
// ...
}
Fragment Shader:
#version 450
in vec3 vFragmentWorldSpace; // fragment position in World space
in vec4 vShadowCoord; // texture coordinates for shadow map lookup (see vertex shader)
uniform sampler2DShadow uTextureShadowMap;
uniform vec4 uLightPosition; // Light's position in world space
uniform vec2 uLightNearFar; // Light's zNear and zFar values
uniform float uK; // variable offset factor to tweak the computed bias a little bit
uniform mat4 uVPShadowMap; // light's view-projection matrix
const vec4 corners[2] = vec4[]( // frustum diagonal points in the light's normalized device coordinates [-1;+1]
vec4(-1.0, -1.0, -1.0, 1.0), // left bottom near
vec4( 1.0, 1.0, 1.0, 1.0) // right top far
);
float calculateShadowIntensity(vec3 fragmentNormal)
{
// get fragment's position in light space:
vec4 fragmentLightSpace = uVPShadowMap * vec4(vFragmentWorldSpace, 1.0);
vec3 fragmentLightSpaceNormalized = fragmentLightSpace.xyz / fragmentLightSpace.w; // range [-1;+1]
vec3 fragmentLightSpaceNormalizedUV = fragmentLightSpaceNormalized * 0.5 + vec3(0.5, 0.5, 0.5); // range [ 0; 1]
// get shadow map's texture size:
ivec2 textureDimensions = textureSize(uTextureShadowMap, 0);
vec2 delta = vec2(textureDimensions.x, textureDimensions.y);
// get width of every texel:
vec2 textureStep = vec2(1.0 / textureDimensions.x, 1.0 / textureDimensions.y);
// get the UV coordinates of the texel center:
vec2 fragmentLightSpaceUVScaled = fragmentLightSpaceNormalizedUV.xy * delta;
vec2 texelCenterUV = floor(fragmentLightSpaceUVScaled) * textureStep + textureStep / 2;
// convert range for texel center in light space in range [-1;+1]:
vec2 texelCenterLightSpaceNormalized = 2.0 * texelCenterUV - vec2(1.0, 1.0);
// recreate light ray in world space:
vec4 recreatedVec4 = vec4(texelCenterLightSpaceNormalized.x, texelCenterLightSpaceNormalized.y, -uLightNearFar.x, 1.0);
mat4 vpShadowMapInversed = inverse(uVPShadowMap);
vec4 texelCenterWorldSpace = vpShadowMapInversed * recreatedVec4;
vec3 lightRayNormalized = normalize(texelCenterWorldSpace.xyz - uLightPosition.xyz);
// compute scene scale for epsilon computation:
vec4 frustum1 = vpShadowMapInversed * corners[0];
frustum1 = frustum1 / frustum1.w;
vec4 frustum2 = vpShadowMapInversed * corners[1];
frustum2 = frustum2 / frustum2.w;
float ln = uLightNearFar.x;
float lf = uLightNearFar.y;
// compute light ray intersection with fragment plane:
float dotLightRayfragmentNormal = dot(fragmentNormal, lightRayNormalized);
float d = dot(fragmentNormal, vFragmentWorldSpace);
float x = (d - dot(fragmentNormal, uLightPosition.xyz)) / dotLightRayfragmentNormal;
vec4 intersectionWorldSpace = vec4(uLightPosition.xyz + lightRayNormalized * x, 1.0);
// compute bias:
vec4 texelInLightSpace = uVPShadowMap * intersectionWorldSpace;
float intersectionDepthTexelCenterUV = (texelInLightSpace.z / texelInLightSpace.w) / 2.0 + 0.5;
float fragmentDepthLightSpaceUV = fragmentLightSpaceNormalizedUV.z;
float bias = intersectionDepthTexelCenterUV - fragmentDepthLightSpaceUV;
float depthCompressionResult = pow(lf - fragmentLightSpaceNormalizedUV.z * (lf - ln), 2.0) / (lf * ln * (lf - ln));
float epsilon = depthCompressionResult * length(frustum1.xyz - frustum2.xyz) * uK;
bias += epsilon;
vec4 shadowCoord = vShadowCoord;
shadowCoord.z -= bias;
float shadowValue = textureProj(uTextureShadowMap, shadowCoord);
return max(shadowValue, 0.0);
}
Please note that this is a very verbose method (you could optimise several steps, I know) to better explain what I did to make it work.
All my shadow casting lights use perspective projection.
I tested the results on the CPU side in a separate project (only C# with the math structs from the OpenTK package) and they seem reasonable. I used several light positions, texture sizes, etc. The bias values looked OK in all my tests. Of course, this is no proof, but I have a good feeling about this.
In the end:
The benefits were very small. The visual results are good (especially for shadow maps with >= 2048 samples per dimension), but I still had to tweak the offset value (uniform float uK in the fragment shader) for each of my scenes. I found values from 0.01 to 0.03 to deliver usable results.
I lost about 10% performance (fps-wise) compared to my previous approach (slope-scaled bias) and gained maybe 1% of visual fidelity when it comes to shadows (acne, peter panning). The 1% is not measured - only felt by me :-)
I wanted this approach to be the "one-solution-to-all-problems", but I guess there is no "fire-and-forget" solution when it comes to shadow mapping ;-/

How to get a smooth result with RSM (Reflective Shadow Mapping)?

I'm trying to implement a Reflective Shadow Mapping program with Vulkan.
The problem is that I get a bad result:
As you can see the result is not smooth.
In a first pass, I render the position, normal and flux, as seen from the light's position, into 3 textures with a resolution of 512 * 512.
In a second pass, I compute the indirect illumination from the first-pass textures according to this paper (http://www.klayge.org/material/3_12/GI/rsm.pdf):
for(int i = 0; i < 151; i++)
{
vec4 rsmProjCoords = projCoords + vec4(rsmDiskSampling[i] * 0.09, 0.0, 0.0);
vec3 indirectLightPos = texture(rsmPosition, rsmProjCoords.xy).rgb;
vec3 indirectLightNorm = texture(rsmNormal, rsmProjCoords.xy).rgb;
vec3 indirectLightFlux = texture(rsmFlux, rsmProjCoords.xy).rgb;
vec3 r = worldPos - indirectLightPos;
float distP2 = dot( r, r );
vec3 emission = indirectLightFlux * (max(0.0, dot(indirectLightNorm, r)) * max(0.0, dot(N, -r)));
emission *= rsmDiskSampling[i].x * rsmDiskSampling[i].x / (distP2 * distP2);
indirectRSM += emission;
}
The problem is fixed.
The main problem was the sampling: I was using linear sampling instead of nearest sampling:
samplerInfo.magFilter = VK_FILTER_NEAREST;
samplerInfo.minFilter = VK_FILTER_NEAREST;
Other problems were the number of VPLs used and the distance between them.
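For reference, the sampler creation then looks roughly like this (device creation and error handling omitted; names are placeholders):
VkSamplerCreateInfo samplerInfo{};
samplerInfo.sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
samplerInfo.magFilter = VK_FILTER_NEAREST; // no interpolation between RSM texels
samplerInfo.minFilter = VK_FILTER_NEAREST;
samplerInfo.mipmapMode = VK_SAMPLER_MIPMAP_MODE_NEAREST;
samplerInfo.addressModeU = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
samplerInfo.addressModeV = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
samplerInfo.addressModeW = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
VkSampler rsmSampler;
vkCreateSampler(device, &samplerInfo, nullptr, &rsmSampler);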

Fast Gaussian blur at pause [closed]

In cocos2d-x I need to implement a fast Gaussian blur, and here is how it should look (I found a game on the App Store, built with Unity, that already does such a blur):
So, it's a nice fade-in/fade-out blur when the user pauses the game.
GPUImage already has the fast blur I need, but I can't find a solution for cocos2d-x.
GPUImage v1 code (Objective-C)
GPUImage v2 code (Swift)
GPUImage-x (C++ version)
Here is the result of a live camera view using GPUImage2, tested on an iPod Touch 5G: the blur in GPUImage runs very fast even on a device this slow and old.
I'm looking for a similarly fast Gaussian blur solution for cocos2d-x.
After studying "Post-Processing Effects in Cocos2d-X" and "RENDERTEXTURE + BLUR", I came up with the following solution.
The common way to achieve post-processing effects in Cocos2d-X is to use layers. The scene is one layer, and a post process is another layer which uses the scene layer as its input. With this technique, the post process can manipulate the rendered scene.
The blur algorithm is implemented in a shader. A common way to apply a blur effect to a scene is to blur first along the X-axis of the viewport and, in a second pass, along the Y-axis of the viewport (see ShaderLesson5). For a true Gaussian kernel this separation is exact; with the approximate weights used below it is still an acceptable approximation, and it gives a massive gain in performance.
This means that we need 2 post-process layers in Cocos2d-X, so we need 3 layers in total: one for the scene and 2 for the post processes:
// scene (game) layer
m_gameLayer = Layer::create();
this->addChild(m_gameLayer, 0);
// blur X layer
m_blurX_PostProcessLayer = PostProcess::create("shader/blur.vert", "shader/blur.frag");
m_blurX_PostProcessLayer->setAnchorPoint(Point::ZERO);
m_blurX_PostProcessLayer->setPosition(Point::ZERO);
this->addChild(m_blurX_PostProcessLayer, 1);
// blur y layer
m_blurY_PostProcessLayer = PostProcess::create("shader/blur.vert", "shader/blur.frag");
m_blurY_PostProcessLayer->setAnchorPoint(Point::ZERO);
m_blurY_PostProcessLayer->setPosition(Point::ZERO);
this->addChild(m_blurY_PostProcessLayer, 2);
Note, the sprites and resources of the scene have to be added to m_gameLayer.
In the update method, the post processes have to be applied to the scene (I'll describe the setup of the uniforms later):
// blur in X direction
cocos2d::GLProgramState &blurXstate = m_blurX_PostProcessLayer->ProgramState();
blurXstate.setUniformVec2( "u_blurOffset", Vec2( 1.0f/visibleSize.width, 0.0 ) );
blurXstate.setUniformFloat( "u_blurStrength", (float)blurStrength );
m_blurX_PostProcessLayer->draw(m_gameLayer);
// blur in Y direction
cocos2d::GLProgramState &blurYstate = m_blurY_PostProcessLayer->ProgramState();
blurYstate.setUniformVec2( "u_blurOffset", Vec2( 0.0, 1.0f/visibleSize.height ) );
blurYstate.setUniformFloat( "u_blurStrength", (float)blurStrength );
m_blurY_PostProcessLayer->draw(m_blurX_PostProcessLayer);
For the management of the post process I implemented a class PostProcess, where I tried to keep things as simple as possible:
PostProcess.hpp
#include <string>
#include "cocos2d.h"
class PostProcess : public cocos2d::Layer
{
private:
PostProcess(void) {}
virtual ~PostProcess() {}
public:
static PostProcess* create(const std::string& vertexShaderFile, const std::string& fragmentShaderFile);
virtual bool init(const std::string& vertexShaderFile, const std::string& fragmentShaderFile);
void draw(cocos2d::Layer* layer);
cocos2d::GLProgram & Program( void ) { return *_program; }
cocos2d::GLProgramState & ProgramState( void ) { return *_progState; }
private:
cocos2d::GLProgram *_program;
cocos2d::GLProgramState *_progState;
cocos2d::RenderTexture *_renderTexture;
cocos2d::Sprite *_sprite;
};
PostProcess.cpp
#include "PostProcess.hpp"
using namespace cocos2d;
bool PostProcess::init(const std::string& vertexShaderFile, const std::string& fragmentShaderFile)
{
if (!Layer::init()) {
return false;
}
_program = GLProgram::createWithFilenames(vertexShaderFile, fragmentShaderFile);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_COLOR, GLProgram::VERTEX_ATTRIB_POSITION);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_COLOR);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORD);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD1, GLProgram::VERTEX_ATTRIB_TEX_COORD1);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD2, GLProgram::VERTEX_ATTRIB_TEX_COORD2);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD3, GLProgram::VERTEX_ATTRIB_TEX_COORD3);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_NORMAL, GLProgram::VERTEX_ATTRIB_NORMAL);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_BLEND_WEIGHT, GLProgram::VERTEX_ATTRIB_BLEND_WEIGHT);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_BLEND_INDEX, GLProgram::VERTEX_ATTRIB_BLEND_INDEX);
_program->link();
_progState = GLProgramState::getOrCreateWithGLProgram(_program);
_program->updateUniforms();
auto visibleSize = Director::getInstance()->getVisibleSize();
_renderTexture = RenderTexture::create(visibleSize.width, visibleSize.height);
_renderTexture->retain();
_sprite = Sprite::createWithTexture(_renderTexture->getSprite()->getTexture());
_sprite->setTextureRect(Rect(0, 0, _sprite->getTexture()->getContentSize().width,
_sprite->getTexture()->getContentSize().height));
_sprite->setAnchorPoint(Point::ZERO);
_sprite->setPosition(Point::ZERO);
_sprite->setFlippedY(true);
_sprite->setGLProgram(_program);
_sprite->setGLProgramState(_progState);
this->addChild(_sprite);
return true;
}
void PostProcess::draw(cocos2d::Layer* layer)
{
_renderTexture->beginWithClear(0.0f, 0.0f, 0.0f, 0.0f);
layer->visit();
_renderTexture->end();
}
PostProcess* PostProcess::create(const std::string& vertexShaderFile, const std::string& fragmentShaderFile)
{
auto p = new (std::nothrow) PostProcess();
if (p && p->init(vertexShaderFile, fragmentShaderFile)) {
p->autorelease();
return p;
}
delete p;
return nullptr;
}
The shader needs a uniform which contains the offset for the blur algorithm (u_blurOffset). This is the distance between 2 texels along the X-axis for the first blur pass and the distance between 2 texels along the Y-axis for the second blur pass.
The strength of the blur effect is set by the uniform variable u_blurStrength, where 0.0 means blurring is off and 1.0 means maximum blurring. The maximum blur effect is defined by the value of MAX_BLUR_WIDHT, which defines the range of texels that are sampled in each direction, so it is more or less the blur radius. If you increase the value the blur effect increases, at the cost of performance; if you decrease it the blur effect decreases, but you win performance. The relation between performance and the value of MAX_BLUR_WIDHT is thankfully linear (and not quadratic), because of the separated 2-pass implementation.
I decided to avoid pre-calculating Gaussian weights and passing them to the shader (the weights would depend on MAX_BLUR_WIDHT and u_blurStrength). Instead I used a smooth Hermite interpolation similar to the GLSL function smoothstep:
blur.vert
attribute vec4 a_position;
attribute vec2 a_texCoord;
attribute vec4 a_color;
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
void main()
{
gl_Position = CC_MVPMatrix * a_position;
v_fragmentColor = a_color;
v_texCoord = a_texCoord;
}
blur.frag
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
uniform vec2 u_blurOffset;
uniform float u_blurStrength;
#define MAX_BLUR_WIDHT 10
void main()
{
vec4 color = texture2D(CC_Texture0, v_texCoord);
float blurWidth = u_blurStrength * float(MAX_BLUR_WIDHT);
vec4 blurColor = vec4(color.rgb, 1.0);
for (int i = 1; i <= MAX_BLUR_WIDHT; ++ i)
{
if ( float(i) >= blurWidth )
break;
float weight = 1.0 - float(i) / blurWidth;
weight = weight * weight * (3.0 - 2.0 * weight); // smoothstep
vec4 sampleColor1 = texture2D(CC_Texture0, v_texCoord + u_blurOffset * float(i));
vec4 sampleColor2 = texture2D(CC_Texture0, v_texCoord - u_blurOffset * float(i));
blurColor += vec4(sampleColor1.rgb + sampleColor2.rgb, 2.0) * weight;
}
gl_FragColor = vec4(blurColor.rgb / blurColor.w, color.a);
}
The full C++ and GLSL source code can be found on GitHub (The implementation can be activated by bool HelloWorld::m_blurFast = false).
See the preview:
Separate shader for each blur radius
A high-performance version of a Gaussian blur algorithm is the solution presented at GPUImage-x. In this implementation a separate blur shader is created for each blur radius. The source code of the full cocos2d-x demo implementation can be found at GitHub. The implementation provides 2 variants, the standard implementation and the optimized implementation (like the implementation in the link), which can be selected by bool GPUimageBlur::m_optimized. The implementation generates a shader for each radius from 0 to int GPUimageBlur::m_maxRadius with a sigma of float GPUimageBlur::m_sigma.
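To illustrate what such a generator embeds for every radius, a small helper that computes normalized 1D Gaussian weights for a given radius and sigma could look like this (a sketch, not taken from the GPUImage-x sources):
#include <cmath>
#include <vector>
// Returns weights for taps 0..radius of a symmetric 1D Gaussian kernel,
// normalized so that the full kernel (center tap plus mirrored taps) sums to 1.
std::vector<float> gaussianWeights(int radius, float sigma)
{
std::vector<float> w(radius + 1);
float sum = 0.0f;
for (int i = 0; i <= radius; ++i) {
w[i] = std::exp(-(float)(i * i) / (2.0f * sigma * sigma));
sum += (i == 0) ? w[i] : 2.0f * w[i]; // off-center taps appear on both sides
}
for (float &v : w) v /= sum;
return w;
}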
See the preview:
Fast limited quality blur
A much faster solution, but with obviously lower quality, is to use the shader presented at Optimizing Gaussian blurs on a mobile GPU. The blurring is not dynamic and can only be switched on or off:
update method:
// blur pass 1
cocos2d::GLProgramState &blurPass1state = m_blurPass1_PostProcessLayer->ProgramState();
blurPass1state.setUniformVec2( "u_blurOffset", Vec2( blurStrength/visibleSize.width, blurStrength/visibleSize.height ) );
m_gameLayer->setVisible( true );
m_blurPass1_PostProcessLayer->draw(m_gameLayer);
m_gameLayer->setVisible( false );
// blur pass 2
cocos2d::GLProgramState &blurPass2state = m_blurPass2_PostProcessLayer->ProgramState();
blurPass2state.setUniformVec2( "u_blurOffset", Vec2( blurStrength/visibleSize.width, -blurStrength/visibleSize.height ) );
m_blurPass1_PostProcessLayer->setVisible( true );
m_blurPass2_PostProcessLayer->draw(m_blurPass1_PostProcessLayer);
m_blurPass1_PostProcessLayer->setVisible( false );
Vertex shader:
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 blurCoordinates[5];
uniform vec2 u_blurOffset;
void main()
{
gl_Position = CC_MVPMatrix * a_position;
blurCoordinates[0] = a_texCoord.xy;
blurCoordinates[1] = a_texCoord.xy + u_blurOffset * 1.407333;
blurCoordinates[2] = a_texCoord.xy - u_blurOffset * 1.407333;
blurCoordinates[3] = a_texCoord.xy + u_blurOffset * 3.294215;
blurCoordinates[4] = a_texCoord.xy - u_blurOffset * 3.294215;
}
Fragment shader
varying vec2 blurCoordinates[5];
uniform float u_blurStrength;
void main()
{
vec4 sum = vec4(0.0);
sum += texture2D(CC_Texture0, blurCoordinates[0]) * 0.204164;
sum += texture2D(CC_Texture0, blurCoordinates[1]) * 0.304005;
sum += texture2D(CC_Texture0, blurCoordinates[2]) * 0.304005;
sum += texture2D(CC_Texture0, blurCoordinates[3]) * 0.093913;
sum += texture2D(CC_Texture0, blurCoordinates[4]) * 0.093913;
gl_FragColor = sum;
}
See the preview:
The full C++ and GLSL source code can be found on GitHub (The implementation can be switched by bool HelloWorld::m_blurFast).
Progressive solution with two layers (frame buffers)
The idea of this solution is to do a smooth, progressive, high-quality blur of the scene. For this, a weak but fast and high-quality blur algorithm is needed. The blurred sprite is not deleted; it is kept for the next refresh of the game engine and used as the source for the next blur step. This means the weakly blurred sprite gets blurred again, becoming a little more blurred than in the previous frame. This is a progressive process which ends in a strongly and accurately blurred sprite.
To set up this process, 3 layers are needed: the game layer and 2 blur layers (even and odd).
m_gameLayer = Layer::create();
m_gameLayer->setVisible( false );
this->addChild(m_gameLayer, 0);
// blur layer even
m_blur_PostProcessLayerEven = PostProcess::create("shader/blur_fast2.vert", "shader/blur_fast2.frag");
m_blur_PostProcessLayerEven->setVisible( false );
m_blur_PostProcessLayerEven->setAnchorPoint(Point::ZERO);
m_blur_PostProcessLayerEven->setPosition(Point::ZERO);
this->addChild(m_blur_PostProcessLayerEven, 1);
// blur layer odd
m_blur_PostProcessLayerOdd = PostProcess::create("shader/blur_fast2.vert", "shader/blur_fast2.frag");
m_blur_PostProcessLayerOdd->setVisible( false );
m_blur_PostProcessLayerOdd->setAnchorPoint(Point::ZERO);
m_blur_PostProcessLayerOdd->setPosition(Point::ZERO);
this->addChild(m_blur_PostProcessLayerOdd, 1);
Note that initially all 3 layers are invisible.
In the update method one layer is set visible. If there is no blurring, the game layer is visible. Once blurring starts, the game layer is rendered to the even layer with the blur shader; the game layer becomes invisible and the even layer becomes visible. In the next cycle the even layer is rendered to the odd layer with the blur shader; the even layer becomes invisible and the odd layer becomes visible. This process continues until blurring is stopped. Meanwhile, the scene becomes more and more strongly blurred, at high quality.
If the original scene has to be shown again, the game layer has to be set visible and the even and odd layers have to be set invisible.
update method:
bool even = (m_blurTick % 2) == 0;
if ( m_blur )
{
cocos2d::GLProgramState &blurFaststate1 = m_blur_PostProcessLayerEven->ProgramState();
blurFaststate1.setUniformVec2( "u_texelOffset", Vec2( 1.0f/visibleSize.width, 1.0f/visibleSize.height ) );
cocos2d::GLProgramState &blurFaststate2 = m_blur_PostProcessLayerOdd->ProgramState();
blurFaststate2.setUniformVec2( "u_texelOffset", Vec2( -1.0f/visibleSize.width, -1.0f/visibleSize.height ) );
if ( m_blurTick == 0 )
{
m_gameLayer->setVisible( true );
m_blur_PostProcessLayerEven->draw(m_gameLayer);
}
else if ( even )
{
m_blur_PostProcessLayerEven->draw(m_blur_PostProcessLayerOdd);
}
else
{
m_blur_PostProcessLayerOdd->draw(m_blur_PostProcessLayerEven);
}
++m_blurTick;
}
else
m_blurTick = 0;
m_gameLayer->setVisible( !m_blur );
m_blur_PostProcessLayerEven->setVisible( m_blur && even );
m_blur_PostProcessLayerOdd->setVisible( m_blur && !even );
The shader is a simple and exact 3*3 blur shader:
Vertex shader:
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 blurCoordinates[9];
uniform vec2 u_texelOffset;
void main()
{
gl_Position = CC_MVPMatrix * a_position;
blurCoordinates[0] = a_texCoord.st + vec2( 0.0, 0.0) * u_texelOffset.st;
blurCoordinates[1] = a_texCoord.st + vec2(+1.0, 0.0) * u_texelOffset.st;
blurCoordinates[2] = a_texCoord.st + vec2(-1.0, 0.0) * u_texelOffset.st;
blurCoordinates[3] = a_texCoord.st + vec2( 0.0, +1.0) * u_texelOffset.st;
blurCoordinates[4] = a_texCoord.st + vec2( 0.0, -1.0) * u_texelOffset.st;
blurCoordinates[5] = a_texCoord.st + vec2(-1.0, -1.0) * u_texelOffset.st;
blurCoordinates[6] = a_texCoord.st + vec2(+1.0, -1.0) * u_texelOffset.st;
blurCoordinates[7] = a_texCoord.st + vec2(-1.0, +1.0) * u_texelOffset.st;
blurCoordinates[8] = a_texCoord.st + vec2(+1.0, +1.0) * u_texelOffset.st;
}
Fragment shader:
varying vec2 blurCoordinates[9];
void main()
{
vec4 sum = vec4(0.0);
sum += texture2D(CC_Texture0, blurCoordinates[0]) * 4.0;
sum += texture2D(CC_Texture0, blurCoordinates[1]) * 2.0;
sum += texture2D(CC_Texture0, blurCoordinates[2]) * 2.0;
sum += texture2D(CC_Texture0, blurCoordinates[3]) * 2.0;
sum += texture2D(CC_Texture0, blurCoordinates[4]) * 2.0;
sum += texture2D(CC_Texture0, blurCoordinates[5]) * 1.0;
sum += texture2D(CC_Texture0, blurCoordinates[6]) * 1.0;
sum += texture2D(CC_Texture0, blurCoordinates[7]) * 1.0;
sum += texture2D(CC_Texture0, blurCoordinates[8]) * 1.0;
sum /= 16.0;
gl_FragColor = sum;
}
Again, the full C++ and GLSL source code can be found on GitHub.
See the preview:

OpenGL 3.3 GLSL Fragment Shader Fog effect not working

I'm trying to add a fog effect to my scene in OpenGL 3.3. I tried following this tutorial. However, I can't seem to get the same effect on my screen. All that seems to happen is that my objects get darker, but there's no gray foggy mist on the screen. What could be the problem?
Here's my result.
When it should look like:
Here's my Fragment Shader with multiple light sources. It works fine without any fog. All GLSL variables are set and working correctly.
for (int i = 0; i < NUM_LIGHTS; i++)
{
float distance = length(lightVector[i]);
vec3 l;
// point light
attenuation = 1.0 / (gLight[i].attenuation.x + gLight[i].attenuation.y * distance + gLight[i].attenuation.z * distance * distance);
l = normalize( vec3(lightVector[i]) );
float cosTheta = clamp( dot( n, l ), 0,1 );
vec3 E = normalize(eyeVector);
vec3 R = reflect( -l, n );
float cosAlpha = clamp( dot( E, R ), 0,1 );
vec3 MaterialDiffuseColor = v_color * materialCoefficients.diffuse;
vec3 MaterialAmbientColor = v_color * materialCoefficients.ambient;
lighting += vec3(
MaterialAmbientColor
+ (
MaterialDiffuseColor * gLight[i].color * cosTheta * attenuation
)
+ (
materialCoefficients.specular * gLight[i].color * pow(cosAlpha, materialCoefficients.shininess)
)
);
}
float fDiffuseIntensity = max(0.0, dot(normalize(normal), -gLight[0].position.xyz));
color = vec4(lighting, 1.0f) * vec4(gLight[0].color*(materialCoefficients.ambient+fDiffuseIntensity), 1.0f);
float fFogCoord = abs(eyeVector.z/1.0f);
color = mix(color, fogParams.vFogColor, getFogFactor(fogParams, fFogCoord));
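(getFogFactor is not shown above; for reference, a typical exponential fog factor that matches the mix() call would look roughly like this -- the fDensity member of fogParams is an assumption, not something from the original code:)
float getFogFactor(FogParameters params, float fogCoord)
{
// 0.0 = no fog, 1.0 = fully fogged, matching mix(color, vFogColor, factor)
return clamp(1.0 - exp(-params.fDensity * fogCoord), 0.0, 1.0);
}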
Two things.
First you should verify your fogParams.vFogColor value is getting set correctly. The simplest way to do this is to just short-circuit the shader: set color to fogParams.vFogColor and return immediately. If the scene is black, then you know your fog color isn't being sent to the shader correctly.
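In the shader, that debug short-circuit is just something like:
// Debug sketch: output the fog color directly and skip all lighting and fog math
color = fogParams.vFogColor;
return;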
Second, you need to eliminate your skybox. You can simply set glClearColor() to the fog color and not use a skybox at all, since everywhere the skybox would be visible you should be seeing fog instead, right? A more advanced approach could modify the skybox shader to blend from fog to the skybox texture depending on how far the view vector is off the horizontal, so that when looking up the sky is (somewhat) visible, looking horizontally shows only fog, and there is a smooth transition between the two.
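A sketch of that skybox blend (all names here are hypothetical, not code from the question):
#version 330 core
in vec3 vViewDir; // direction from the camera through this fragment
uniform samplerCube uSkybox;
uniform vec4 uFogColor;
out vec4 fragColor;
void main()
{
float up = clamp(dot(normalize(vViewDir), vec3(0.0, 1.0, 0.0)), 0.0, 1.0);
vec4 sky = texture(uSkybox, normalize(vViewDir));
fragColor = mix(uFogColor, sky, smoothstep(0.05, 0.35, up)); // fog near the horizon, sky when looking up
}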

GLSL Normalize Mouse Input

I'm new to this shader world and just want to try something out. I want to do the following:
Within a specific radius around the mouse, the texture in the background should rotate by 10°. The mouse coordinates are absolute values, so to work with them I have to normalize them so I can address the right place in the texture. But somehow this doesn't work right: the rotation works, but I get the color information with a little offset.
I think the problem is the normalize(mouse), but I don't know how to do this right. Here is the shader:
uniform sampler2D tex0;
uniform vec2 mouse;
void main() {
vec2 pos = vec2(gl_FragCoord.x, gl_FragCoord.y);
if (distance(pos, mouse) < 90.0) {
vec2 p = gl_TexCoord[0].st;
vec2 m = normalize(mouse);
p.x = m.x + (cos(radians(10.0)) * (p.x - m.x) - sin(radians(10.0)) * (p.y - m.y));
p.y = m.y + (sin(radians(10.0)) * (p.x - m.x) + cos(radians(10.0)) * (p.y - m.y));
gl_FragColor = vec4(0.0, m.y, 0.0, 0.0);
gl_FragColor = texture2D( tex0, p );
}
else {
gl_FragColor.rgb = texture2D( tex0, gl_TexCoord[0].st ).rgb;
gl_FragColor.a = 1.0;
}
}
I'm using cinder to do this.
mShader.uniform( "mouse", Vec2f( m.x, 682 - m.y ) );
Thank you.
In order to transform absolute mouse coordinates into the [0,1] range (required for a texture sample), you don't need the normalize function. You need a simple scale:
vec2 mouse_norm = vec2( m.x/screen.width, 1.0 - m.y/screen.height )
I wrote it in GLSL, but for you it will be easier to do this on the CPU side before passing the uniform, because it's the CPU side that knows the screen resolution.
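On the CPU side (with Cinder) that could look like the following sketch; getWindowWidth()/getWindowHeight() stand in for however you obtain the window size:
// Scale the absolute mouse position to [0,1] before uploading the uniform
float u = m.x / (float)ci::app::getWindowWidth();
float v = 1.0f - m.y / (float)ci::app::getWindowHeight();
mShader.uniform("mouse", Vec2f(u, v));
Note that the distance(pos, mouse) test in the shader compares against gl_FragCoord in pixels, so it would then need its own unscaled uniform (or the same scaling applied to pos).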