Smoothing low octave perlin noise - glsl

I'm trying to create super simple Perlin noise clouds in a fragment shader, using the Noise function found here.
At low octaves my output is, for want of a better word, 'blobby'. I'd simply like to smooth out these blobby areas into smooth noise with a little more detail than a single octave gives.
Fragment shader:
#ifdef GL_ES
precision mediump float;
#endif
uniform float time;
uniform vec2 resolution;
// Noise related functions go here ..
float surface3 ( vec3 coord ) {
float frequency = 4.0;
float n = 0.0;
n += 1.0 * abs( cnoise( coord * frequency ) );
n += 0.5 * abs( cnoise( coord * frequency * 2.0 ) );
n += 0.25 * abs( cnoise( coord * frequency * 4.0 ) );
return n;
}
void main( void ) {
vec2 position = gl_FragCoord.xy / resolution.xy;
float n = surface3(vec3(position, time * 0.1));
gl_FragColor = vec4(n, n, n, 1.0);
}
Live example:
http://glsl.heroku.com/e#1413.0
Left is what I have at the moment. How would I be able to achieve something more in line with the right image?

The easiest way is to take out two of the noise calls in the surface function. Leave just the first noise call and you get something that looks like the first one:
http://glsl.heroku.com/e#1450.0
The sharp lines in the multi-octave noise come from using abs(). Remove the abs() and instead square the noise value, or do something like 0.5*(1.0+cnoise()) (if cnoise output is in -1..1).
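As a minimal sketch, assuming cnoise returns values in -1..1, the surface function without abs() could look like this (the final division just renormalizes by the sum of the octave weights):
float surface3 ( vec3 coord ) {
float frequency = 4.0;
float n = 0.0;
n += 1.0 * 0.5 * (1.0 + cnoise( coord * frequency ));
n += 0.5 * 0.5 * (1.0 + cnoise( coord * frequency * 2.0 ));
n += 0.25 * 0.5 * (1.0 + cnoise( coord * frequency * 4.0 ));
return n / 1.75;
}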
Here's the result of some fiddling
http://glsl.heroku.com/e#1450.1

Related

Godot shader swap materials by world position on 3d mesh

I am trying to replicate something similar to this from Unity in Godot Engine with shaders; however, I am not able to find a solution. Calculating the position of the effect is the problem. How can I get the position in Godot, where I don't have access to the worlPos variable used in the video? A full code snippet of the shader would be really appreciated.
Currently, my shader code looks like this. ob_position is the position passed from the node.
shader_type spatial;
uniform vec2 ob_position = vec2(1, 0.68);
uniform float ob_radius = 0.01;
float circle(vec2 position, float radius, float feather)
{
return smoothstep(radius, radius + feather, length(position - vec2(0.5)));
}
void fragment() {
ALBEDO.rgb = vec3(circle(UV * (ob_position), ob_radius, 0.001) );
}
The video says:
Send the sphere position to the shader in script.
We can do that. First define a uniform:
uniform vec3 sphere_position;
And we can set it from code:
material.set_shader_param("sphere_position", global_transform.origin)
Since you need to set this every time the sphere moves, you can use NOTIFICATION_TRANSFORM_CHANGED, which you enable by calling set_notify_local_transform(true).
Get the distance between the sphere and World Position.
To do that we need to figure out the world position of the fragment. Let us start by looking at the Fragment Built-ins. We find that:
VERTEX is the position of the fragment in view space.
CAMERA_MATRIX is the transform from view space to world space.
Yes, the naming is confusing.
So we can do this (in fragment):
vec3 pixel_world_pos = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
You can use this to debug: ALBEDO.rgb = pixel_world_pos;. In general, output whatever variable you want to visualize for debugging to ALBEDO.
And now the distance is:
float dist = distance(sphere_position, pixel_world_pos);
Control the size by dividing by radius.
While we don't have a direct translation for the code in the video… sure, we can divide by the radius (dist / radius), where radius would be a uniform float.
Create a cutoff with Step.
That would be something like this: step(0.5, dist / radius).
Honestly, I would rather do this: step(radius, dist).
Your mileage may vary.
Lerp two different textures over the cutoff.
For that we can use mix. But first, define your textures as uniform sampler2D. Then you can do something like this:
float threshold = step(radius, dist);
ALBEDO.rgb = mix(texture(tex1, UV).rgb, texture(tex2, UV).rgb, threshold);
Moving worldspace noise.
Add one more uniform sampler2D and set a NoiseTexture (make sure to set its noise, and set seamless to true), and then we can query it with the world coordinates we already have.
float noise_value = texture(noise_texture, pixel_world_pos.xy + vec2(TIME)).r;
Add worldspace to noise.
I'm not sure what they mean. But from the visual, they use the noise to distort the cutoff. I'm not sure if this yields the same result, but it looks good to me:
vec3 pixel_world_pos = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
float noise_value = texture(noise_texture, pixel_world_pos.xy + vec2(TIME)).r;
float dist = distance(sphere_position, pixel_world_pos) + noise_value;
float threshold = step(radius, dist);
ALBEDO.rgb = mix(texture(tex1, UV).rgb, texture(tex2, UV).rgb, threshold);
Add a line to Emission (glow).
I don't understand what they did originally, so I came up with my own solution:
EMISSION = vec3(step(dist, edge + radius) * step(radius, dist));
What is going on here is that we will have a white EMISSION when dist < edge + radius and radius < dist. To reiterate, we will have white EMISSION when the distance is greater than the radius (radius < dist) and lesser than the radius plus some edge (dist < edge + radius). The comparisons become step functions, which return 0.0 or 1.0, and the AND operation is a multiplication.
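If you prefer, the same band can be written as a difference of two steps; this is just a reformulation of the line above, and swapping step for smoothstep would give the band soft edges:
EMISSION = vec3(step(radius, dist) - step(radius + edge, dist));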
Reveal object by clipping instead of adding a second texture.
I suppose that means there is another version of the shader that either uses discard or ALPHA and it is used for other objects.
This is the shader I wrote to test this:
shader_type spatial;
uniform vec3 sphere_position;
uniform sampler2D noise_texture;
uniform sampler2D tex1;
uniform sampler2D tex2;
uniform float radius;
uniform float edge;
void fragment()
{
vec3 pixel_world_pos = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
float noise_value = texture(noise_texture, pixel_world_pos.xy + vec2(TIME)).r;
float dist = distance(sphere_position, pixel_world_pos) + noise_value;
float threshold = step(radius, dist);
ALBEDO.rgb = mix(texture(tex1, UV).rgb, texture(tex2, UV).rgb, threshold);
EMISSION = vec3(step(dist, edge + radius) * step(radius, dist));
}
The answer from Theraot was a lifesaver for me. However, I also needed support for multiple positions, using arrays: uniform vec3 sphere_position[];
So I came up with this:
shader_type spatial;
uniform uint ob_position_size;
uniform vec3 sphere_position[2];
uniform sampler2D noise_texture;
uniform sampler2D tex1;
uniform float radius;
uniform float edge;
void fragment()
{
vec3 pixel_world_pos = (INV_VIEW_MATRIX * vec4(VERTEX, 1.0)).xyz;
float noise_value = texture(noise_texture, pixel_world_pos.xy + vec2(TIME)).r;
ALBEDO = texture(SCREEN_TEXTURE, SCREEN_UV).rgb;
for(int i = 0; i < sphere_position.length(); i++) {
float dist = distance(sphere_position[i], pixel_world_pos) + noise_value;
float threshold = step(radius, dist);
ALBEDO.rgb = mix(texture(tex1, UV).rgb, ALBEDO.rgb, threshold);
//EMISSION = vec3(step(dist, edge + radius) * step(radius, dist));
}
}

Fast Gaussian blur at pause [closed]

In cocos2d-x I need to implement a fast Gaussian blur. Here is how it should look (I found a game on the App Store, made in Unity, that already has such a blur):
So, it's a nice fade-in/fade-out blur when the user pauses the game.
GPUImage already has the fast blur I need, but I can't find a solution for cocos2d-x.
GPUImage v1 (Objective-C)
GPUImage v2 (Swift)
GPUImage-x (C++ version)
Here is the result of a live camera view using GPUImage2, tested on an iPod Touch 5G; the blur runs fast even on this slow, old device.
I'm looking for a solution with a super fast Gaussian blur for cocos2d-x.
After studying "Post-Processing Effects in Cocos2d-X" and "RENDERTEXTURE + BLUR", I came up with the following solution.
The common way to achieve post-processing effects in Cocos2d-X is to implement layers. The scene is one layer, and a post process is another layer, which uses the scene layer as its input. With this technique, the post process can manipulate the rendered scene.
The blur algorithm is implemented in a shader. A common way to apply a blur effect to a scene is to blur first along the X-axis of the viewport and then, in a second pass, along the Y-axis (see ShaderLesson5). This is an acceptable approximation which gives a massive gain in performance.
This means that we need 2 post-process layers in Cocos2d-X. So we need 3 layers in total: one for the scene and 2 for the post processes:
// scene (game) layer
m_gameLayer = Layer::create();
this->addChild(m_gameLayer, 0);
// blur X layer
m_blurX_PostProcessLayer = PostProcess::create("shader/blur.vert", "shader/blur.frag");
m_blurX_PostProcessLayer->setAnchorPoint(Point::ZERO);
m_blurX_PostProcessLayer->setPosition(Point::ZERO);
this->addChild(m_blurX_PostProcessLayer, 1);
// blur y layer
m_blurY_PostProcessLayer = PostProcess::create("shader/blur.vert", "shader/blur.frag");
m_blurY_PostProcessLayer->setAnchorPoint(Point::ZERO);
m_blurY_PostProcessLayer->setPosition(Point::ZERO);
this->addChild(m_blurY_PostProcessLayer, 2);
Note that the sprites and resources of the scene have to be added to m_gameLayer.
In the update method, the post processes have to be applied to the scene (I'll describe the setup of the uniforms later):
// blur in X direction
cocos2d::GLProgramState &blurXstate = m_blurX_PostProcessLayer->ProgramState();
blurXstate.setUniformVec2( "u_blurOffset", Vec2( 1.0f/visibleSize.width, 0.0 ) );
blurXstate.setUniformFloat( "u_blurStrength", (float)blurStrength );
m_blurX_PostProcessLayer->draw(m_gameLayer);
// blur in Y direction
cocos2d::GLProgramState &blurYstate = m_blurY_PostProcessLayer->ProgramState();
blurYstate.setUniformVec2( "u_blurOffset", Vec2( 0.0, 1.0f/visibleSize.height ) );
blurYstate.setUniformFloat( "u_blurStrength", (float)blurStrength );
m_blurY_PostProcessLayer->draw(m_blurX_PostProcessLayer);
For the management of the post process I implemented a class PostProcess, where I tried to keep things as simple as possible:
PostProcess.hpp
#include <string>
#include "cocos2d.h"
class PostProcess : public cocos2d::Layer
{
private:
PostProcess(void) {}
virtual ~PostProcess() {}
public:
static PostProcess* create(const std::string& vertexShaderFile, const std::string& fragmentShaderFile);
virtual bool init(const std::string& vertexShaderFile, const std::string& fragmentShaderFile);
void draw(cocos2d::Layer* layer);
cocos2d::GLProgram & Program( void ) { return *_program; }
cocos2d::GLProgramState & ProgramState( void ) { return *_progState; }
private:
cocos2d::GLProgram *_program;
cocos2d::GLProgramState *_progState;
cocos2d::RenderTexture *_renderTexture;
cocos2d::Sprite *_sprite;
};
PostProcess.cpp
#include "PostProcess.hpp"
using namespace cocos2d;
bool PostProcess::init(const std::string& vertexShaderFile, const std::string& fragmentShaderFile)
{
if (!Layer::init()) {
return false;
}
_program = GLProgram::createWithFilenames(vertexShaderFile, fragmentShaderFile);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_COLOR, GLProgram::VERTEX_ATTRIB_POSITION);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_COLOR);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORD);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD1, GLProgram::VERTEX_ATTRIB_TEX_COORD1);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD2, GLProgram::VERTEX_ATTRIB_TEX_COORD2);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD3, GLProgram::VERTEX_ATTRIB_TEX_COORD3);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_NORMAL, GLProgram::VERTEX_ATTRIB_NORMAL);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_BLEND_WEIGHT, GLProgram::VERTEX_ATTRIB_BLEND_WEIGHT);
_program->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_BLEND_INDEX, GLProgram::VERTEX_ATTRIB_BLEND_INDEX);
_program->link();
_progState = GLProgramState::getOrCreateWithGLProgram(_program);
_program->updateUniforms();
auto visibleSize = Director::getInstance()->getVisibleSize();
_renderTexture = RenderTexture::create(visibleSize.width, visibleSize.height);
_renderTexture->retain();
_sprite = Sprite::createWithTexture(_renderTexture->getSprite()->getTexture());
_sprite->setTextureRect(Rect(0, 0, _sprite->getTexture()->getContentSize().width,
_sprite->getTexture()->getContentSize().height));
_sprite->setAnchorPoint(Point::ZERO);
_sprite->setPosition(Point::ZERO);
_sprite->setFlippedY(true);
_sprite->setGLProgram(_program);
_sprite->setGLProgramState(_progState);
this->addChild(_sprite);
return true;
}
void PostProcess::draw(cocos2d::Layer* layer)
{
_renderTexture->beginWithClear(0.0f, 0.0f, 0.0f, 0.0f);
layer->visit();
_renderTexture->end();
}
PostProcess* PostProcess::create(const std::string& vertexShaderFile, const std::string& fragmentShaderFile)
{
auto p = new (std::nothrow) PostProcess();
if (p && p->init(vertexShaderFile, fragmentShaderFile)) {
p->autorelease();
return p;
}
delete p;
return nullptr;
}
The shader needs a uniform which contains the offset for the blur algorithm (u_blurOffset). This is the distance between 2 texels along the X-axis for the first blur pass and the distance between 2 texels along the Y-axis for the second blur pass.
The strength of the blur effect is set by the uniform variable u_blurStrength, where 0.0 means that blurring is off and 1.0 means maximum blurring. The maximum blur effect is defined by the value of MAX_BLUR_WIDHT, which defines the range of texels that are sampled in each direction, so it is more or less the blur radius. If you increase the value, the blur effect will increase, at the cost of performance. If you decrease the value, the blur effect will decrease, but you will gain performance. The relation between performance and the value of MAX_BLUR_WIDHT is thankfully linear (and not quadratic), because of the approximated 2-pass implementation.
I decided to avoid pre-calculating Gaussian weights and passing them to the shader (the Gaussian weights would depend on MAX_BLUR_WIDHT and u_blurStrength). Instead I used a smooth Hermite interpolation similar to the GLSL function smoothstep:
blur.vert
attribute vec4 a_position;
attribute vec2 a_texCoord;
attribute vec4 a_color;
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
void main()
{
gl_Position = CC_MVPMatrix * a_position;
v_fragmentColor = a_color;
v_texCoord = a_texCoord;
}
blur.frag
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
uniform vec2 u_blurOffset;
uniform float u_blurStrength;
#define MAX_BLUR_WIDHT 10
void main()
{
vec4 color = texture2D(CC_Texture0, v_texCoord);
float blurWidth = u_blurStrength * float(MAX_BLUR_WIDHT);
vec4 blurColor = vec4(color.rgb, 1.0);
for (int i = 1; i <= MAX_BLUR_WIDHT; ++ i)
{
if ( float(i) >= blurWidth )
break;
float weight = 1.0 - float(i) / blurWidth;
weight = weight * weight * (3.0 - 2.0 * weight); // smoothstep
vec4 sampleColor1 = texture2D(CC_Texture0, v_texCoord + u_blurOffset * float(i));
vec4 sampleColor2 = texture2D(CC_Texture0, v_texCoord - u_blurOffset * float(i));
blurColor += vec4(sampleColor1.rgb + sampleColor2.rgb, 2.0) * weight;
}
gl_FragColor = vec4(blurColor.rgb / blurColor.w, color.a);
}
The full C++ and GLSL source code can be found on GitHub (The implementation can be activated by bool HelloWorld::m_blurFast = false).
See the preview:
Separate shader for each blur radius
A high-performance version of a Gaussian blur algorithm is the solution presented at GPUImage-x. In this implementation, a separate blur shader is created for each blur radius. The source code of the full cocos2d-x demo implementation can be found on GitHub. The implementation provides 2 variants, the standard implementation and the optimized implementation from the link, which can be selected via bool GPUimageBlur::m_optimized. The implementation generates a shader for each radius from 0 to int GPUimageBlur::m_maxRadius, with a sigma of float GPUimageBlur::m_sigma.
See the preview:
Fast limited quality blur
A much faster solution, but with obviously very low quality, is to use the shader presented at Optimizing Gaussian blurs on a mobile GPU. The blurring is not dynamic and can only be switched on or off:
update method:
// blur pass 1
cocos2d::GLProgramState &blurPass1state = m_blurPass1_PostProcessLayer->ProgramState();
blurPass1state.setUniformVec2( "u_blurOffset", Vec2( blurStrength/visibleSize.width, blurStrength/visibleSize.height ) );
m_gameLayer->setVisible( true );
m_blurPass1_PostProcessLayer->draw(m_gameLayer);
m_gameLayer->setVisible( false );
// blur pass 2
cocos2d::GLProgramState &blurPass2state = m_blurPass2_PostProcessLayer->ProgramState();
blurPass2state.setUniformVec2( "u_blurOffset", Vec2( blurStrength/visibleSize.width, -blurStrength/visibleSize.height ) );
m_blurPass1_PostProcessLayer->setVisible( true );
m_blurPass2_PostProcessLayer->draw(m_blurPass1_PostProcessLayer);
m_blurPass1_PostProcessLayer->setVisible( false );
Vertex shader:
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 blurCoordinates[5];
uniform vec2 u_blurOffset;
void main()
{
gl_Position = CC_MVPMatrix * a_position;
blurCoordinates[0] = a_texCoord.xy;
blurCoordinates[1] = a_texCoord.xy + u_blurOffset * 1.407333;
blurCoordinates[2] = a_texCoord.xy - u_blurOffset * 1.407333;
blurCoordinates[3] = a_texCoord.xy + u_blurOffset * 3.294215;
blurCoordinates[4] = a_texCoord.xy - u_blurOffset * 3.294215;
}
Fragment shader
varying vec2 blurCoordinates[5];
uniform float u_blurStrength;
void main()
{
vec4 sum = vec4(0.0);
sum += texture2D(CC_Texture0, blurCoordinates[0]) * 0.204164;
sum += texture2D(CC_Texture0, blurCoordinates[1]) * 0.304005;
sum += texture2D(CC_Texture0, blurCoordinates[2]) * 0.304005;
sum += texture2D(CC_Texture0, blurCoordinates[3]) * 0.093913;
sum += texture2D(CC_Texture0, blurCoordinates[4]) * 0.093913;
gl_FragColor = sum;
}
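For context, the magic offsets and weights above are presumably derived with the "linear sampling" trick described in the linked article: two discrete Gaussian taps at texel offsets o1 and o2 with weights w1 and w2 can be merged into a single bilinear fetch at offset
(o1*w1 + o2*w2) / (w1 + w2)
carrying weight w1 + w2. For instance, the taps at offsets 1 and 2 merge into the single sample at offset 1.407333 with weight 0.304005.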
See the preview:
The full C++ and GLSL source code can be found on GitHub (The implementation can be switched by bool HelloWorld::m_blurFast).
Progressive solution with two layers (frame buffers)
The idea of this solution is to do a smooth, progressive, high-quality blur of the scene. For this, a weak but fast and high-quality blur algorithm is needed. The blurred sprite is not discarded; it is stored for the next refresh of the game engine and used as the source for the next blurring step. This means the weakly blurred sprite gets blurred again, ending up a little more blurry than before. This is a progressive process which ends in a strongly and accurately blurred sprite.
To set up this process, 3 layers are needed: the game layer and 2 blur layers (even and odd).
m_gameLayer = Layer::create();
m_gameLayer->setVisible( false );
this->addChild(m_gameLayer, 0);
// blur layer even
m_blur_PostProcessLayerEven = PostProcess::create("shader/blur_fast2.vert", "shader/blur_fast2.frag");
m_blur_PostProcessLayerEven->setVisible( false );
m_blur_PostProcessLayerEven->setAnchorPoint(Point::ZERO);
m_blur_PostProcessLayerEven->setPosition(Point::ZERO);
this->addChild(m_blur_PostProcessLayerEven, 1);
// blur layer odd
m_blur_PostProcessLayerOdd = PostProcess::create("shader/blur_fast2.vert", "shader/blur_fast2.frag");
m_blur_PostProcessLayerOdd->setVisible( false );
m_blur_PostProcessLayerOdd->setAnchorPoint(Point::ZERO);
m_blur_PostProcessLayerOdd->setPosition(Point::ZERO);
this->addChild(m_blur_PostProcessLayerOdd, 1);
Note that initially all 3 layers are invisible.
In the update method, one layer is set visible. If there is no blurring, the game layer is visible. Once blurring starts, the game layer is rendered to the even layer with the blur shader; the game layer becomes invisible and the even layer becomes visible. In the next cycle, the even layer is rendered to the odd layer with the blur shader; the even layer becomes invisible and the odd layer becomes visible. This process continues until blurring is stopped. Meanwhile, the scene becomes more and more strongly blurred, at high quality.
If the original scene has to be shown again, the game layer has to be set visible and the even and odd layers invisible.
update method:
bool even = (m_blurTick % 2) == 0;
if ( m_blur )
{
cocos2d::GLProgramState &blurFaststate1 = m_blur_PostProcessLayerEven->ProgramState();
blurFaststate1.setUniformVec2( "u_texelOffset", Vec2( 1.0f/visibleSize.width, 1.0f/visibleSize.height ) );
cocos2d::GLProgramState &blurFaststate2 = m_blur_PostProcessLayerOdd->ProgramState();
blurFaststate2.setUniformVec2( "u_texelOffset", Vec2( -1.0f/visibleSize.width, -1.0f/visibleSize.height ) );
if ( m_blurTick == 0 )
{
m_gameLayer->setVisible( true );
m_blur_PostProcessLayerEven->draw(m_gameLayer);
}
else if ( even )
{
m_blur_PostProcessLayerEven->draw(m_blur_PostProcessLayerOdd);
}
else
{
m_blur_PostProcessLayerOdd->draw(m_blur_PostProcessLayerEven);
}
++m_blurTick;
}
else
m_blurTick = 0;
m_gameLayer->setVisible( !m_blur );
m_blur_PostProcessLayerEven->setVisible( m_blur && even );
m_blur_PostProcessLayerOdd->setVisible( m_blur && !even );
The shader is a simple and exact 3×3 blur shader.
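For reference, the tap weights in the fragment shader below form the 3×3 binomial kernel, which sums to 16:
1 2 1
2 4 2
1 2 1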
Vertex shader:
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 blurCoordinates[9];
uniform vec2 u_texelOffset;
void main()
{
gl_Position = CC_MVPMatrix * a_position;
blurCoordinates[0] = a_texCoord.st + vec2( 0.0, 0.0) * u_texelOffset.st;
blurCoordinates[1] = a_texCoord.st + vec2(+1.0, 0.0) * u_texelOffset.st;
blurCoordinates[2] = a_texCoord.st + vec2(-1.0, 0.0) * u_texelOffset.st;
blurCoordinates[3] = a_texCoord.st + vec2( 0.0, +1.0) * u_texelOffset.st;
blurCoordinates[4] = a_texCoord.st + vec2( 0.0, -1.0) * u_texelOffset.st;
blurCoordinates[5] = a_texCoord.st + vec2(-1.0, -1.0) * u_texelOffset.st;
blurCoordinates[6] = a_texCoord.st + vec2(+1.0, -1.0) * u_texelOffset.st;
blurCoordinates[7] = a_texCoord.st + vec2(-1.0, +1.0) * u_texelOffset.st;
blurCoordinates[8] = a_texCoord.st + vec2(+1.0, +1.0) * u_texelOffset.st;
}
Fragment shader:
varying vec2 blurCoordinates[9];
void main()
{
vec4 sum = vec4(0.0);
sum += texture2D(CC_Texture0, blurCoordinates[0]) * 4.0;
sum += texture2D(CC_Texture0, blurCoordinates[1]) * 2.0;
sum += texture2D(CC_Texture0, blurCoordinates[2]) * 2.0;
sum += texture2D(CC_Texture0, blurCoordinates[3]) * 2.0;
sum += texture2D(CC_Texture0, blurCoordinates[4]) * 2.0;
sum += texture2D(CC_Texture0, blurCoordinates[5]) * 1.0;
sum += texture2D(CC_Texture0, blurCoordinates[6]) * 1.0;
sum += texture2D(CC_Texture0, blurCoordinates[7]) * 1.0;
sum += texture2D(CC_Texture0, blurCoordinates[8]) * 1.0;
sum /= 16.0;
gl_FragColor = sum;
}
Again, the full C++ and GLSL source code can be found on GitHub.
See the preview:

OpenGL screen postprocessing effects [closed]

I've built a nice music visualizer using OpenGL in Java. It already looks pretty neat, but I've thought about adding some post-processing to it. At the moment, it looks like this:
There is already a framebuffer for recording the output, so the texture is already available. Now I wonder if someone has an idea for some effects. The current fragment shader looks like this:
#version 440
in vec3 position_FS_in;
in vec2 texCoords_FS_in;
out vec4 out_Color;
//the texture of the last Frame by now exactly the same as the output
uniform sampler2D textureSampler;
//available data:
//the average height of the lines seen in the screenshot, ranging from 0 to 1
uniform float mean;
//the array of heights of the lines seen in the screenshot
uniform float music[512];
void main()
{
vec4 texColor = texture(textureSampler, texCoords_FS_in);
//insert post processing here
out_Color = texColor;
}
Most post processing effects vary with time, so it is common to have a uniform that varies with the passage of time. For example, a "wavy" effect might be created by offsetting texture coordinates using sin(elapsedSec * wavyRadsPerSec + (PI * gl_FragCoord.y * 0.5 + 0.5) * wavyCyclesInFrame).
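To make that concrete, here is a minimal sketch of such a wavy pass built on the asker's shader; the elapsedSec uniform and the tuning constants are hypothetical and would need to be fed and adjusted by the application:
#version 440
in vec2 texCoords_FS_in;
out vec4 out_Color;
uniform sampler2D textureSampler;
// hypothetical uniform: seconds since start, set by the application
uniform float elapsedSec;
const float PI = 3.141592653589793;
void main()
{
const float wavyRadsPerSec = 2.0;    // speed of the wobble
const float wavyCyclesInFrame = 3.0; // full waves across the screen height
const float amplitude = 0.01;        // horizontal displacement in UV units
vec2 uv = texCoords_FS_in;
uv.x += amplitude * sin(elapsedSec * wavyRadsPerSec + uv.y * wavyCyclesInFrame * 2.0 * PI);
out_Color = texture(textureSampler, uv);
}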
Some "postprocessing" effects can be done very simply, for example, instead of clearing the back buffer with glClear you can blend a nearly-black transparent quad over the whole screen. This will create a persistence effect where the past frames fade to black behind the current one.
A directional blur can be implemented by taking multiple samples at various distances from each point, and weighting the closer ones more strongly and summing. If you track the motion of a point relative to the camera position and orientation, it can be made into a motion blur implementation.
Color transformations are very simple as well, simply treat the RGB as though they are the XYZ of a vector, and do interesting transformations on it. Sepia and "psychedelic" colors can be produced this way.
You might find it helpful to convert the color into something like HSV, do transformations on that representation, and convert it back to RGB for the framebuffer write. You could affect hue and saturation, for example fading to black and white, or smoothly intensifying the color saturation.
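For the conversions, a widely circulated pair of compact GLSL functions (originally by Sam Hocevar) can be dropped into the shader; the desaturation example at the end is just a sketch reusing the asker's mean uniform:
vec3 rgb2hsv(vec3 c)
{
vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
float d = q.x - min(q.w, q.y);
float e = 1.0e-10;
return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}
vec3 hsv2rgb(vec3 c)
{
vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}
// e.g. in main(): desaturate quiet passages, saturate loud ones
vec3 hsv = rgb2hsv(texColor.rgb);
hsv.y *= mean;
out_Color = vec4(hsv2rgb(hsv), texColor.a);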
A "smearing into the distance" effect can be done by blending the framebuffer onto the framebuffer, by reading from texcoord that is slightly scaled up from gl_FragCoord, like texture(textureSampler, (gl_FragCoord * 1.01).xy).
On that note, you should not need those texture coordinate attributes, you can use gl_FragCoord to find out where you are on the screen, and use (an adjusted copy of) that for your texture call.
Have a look at a few shaders on GLSLSandbox for inspiration.
I have done a simple emulation of the trail effect on GLSLSandbox. In the real one, the loop would not exist, it would take one sample from a small offset. The "loop" effect would happen by itself because its input includes the output from the last frame. To emulate having a texture of the last frame, I simply made it so I can calculate what the other pixel is. You would read the last-frame texture instead of calling something like pixelAt when doing the trail effect.
You can use the wave instead of my faked sine wave. Use the uv.x to select an index, scaled appropriately.
GLSL
#ifdef GL_ES
precision mediump float;
#endif
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
const float PI = 3.14159265358979323;// lol ya right, but hey, I memorized it
vec4 pixelAt(vec2 uv)
{
vec4 result;
float thickness = 0.05;
float movementSpeed = 0.4;
float wavesInFrame = 5.0;
float waveHeight = 0.3;
float point = (sin(time * movementSpeed +
uv.x * wavesInFrame * 2.0 * PI) *
waveHeight);
const float sharpness = 1.40;
float dist = 1.0 - abs(clamp((point - uv.y) / thickness, -1.0, 1.0));
float val;
float brightness = 0.8;
// All of the threads go the same way so this if is easy
if (sharpness != 1.0)
dist = pow(dist, sharpness);
dist *= brightness;
result = vec4(vec3(0.3, 0.6, 0.3) * dist, 1.0);
return result;
}
void main( void ) {
vec2 fc = gl_FragCoord.xy;
vec2 uv = fc / resolution - 0.5;
vec4 pixel;
pixel = pixelAt(uv);
// I can't really do postprocessing in this shader, so instead of
// doing the texturelookup, I restructured it to be able to compute
// what the other pixel might be. The real code would lookup a texel
// and there would be one sample at a small offset, the feedback
// replaces the loop.
const float e = 64.0, s = 1.0 / e;
for (float i = 0.0; i < e; ++i) {
pixel += pixelAt(uv + (uv * (i*s))) * (0.3-i*s*0.325);
}
pixel /= 1.0;
gl_FragColor = pixel;
}

GLSL - Using a 2D texture for 3D Perlin noise instead of procedural 3D noise

I implemented a shader for the sun surface which uses simplex noise from ashima/webgl-noise. But it costs too much GPU time, especially if I'm going to use it on mobile devices. I need to do the same effect but using a noise texture. My fragment shader is below:
#ifdef GL_ES
precision highp float;
#endif
precision mediump float;
varying vec2 v_texCoord;
varying vec3 v_normal;
uniform sampler2D u_planetDay;
uniform sampler2D u_noise; //noise texture (not used yet)
uniform float u_time;
#include simplex_noise_source from Ashima
float noise(vec3 position, int octaves, float frequency, float persistence) {
float total = 0.0; // Total value so far
float maxAmplitude = 0.0; // Accumulates highest theoretical amplitude
float amplitude = 1.0;
for (int i = 0; i < octaves; i++) {
// Get the noise sample
//total += ((1.0 - abs(snoise(position * frequency))) * 2.0 - 1.0) * amplitude; // 3D noise (expensive)
//I USE THE LINE BELOW FOR 2D NOISE
total += ((1.0 - abs(snoise(position.xy * frequency))) * 2.0 - 1.0) * amplitude;
// Make the wavelength twice as small
frequency *= 2.0;
// Add to our maximum possible amplitude
maxAmplitude += amplitude;
// Reduce amplitude according to persistence for the next octave
amplitude *= persistence;
}
// Scale the result by the maximum amplitude
return total / maxAmplitude;
}
void main()
{
vec3 position = v_normal * 2.5 + vec3(u_time, u_time, u_time);
float n1 = noise(position.xyz, 2, 7.7, 0.75) * 0.001;
vec3 ground = texture2D(u_planetDay, v_texCoord+n1).rgb;
gl_FragColor = vec4(ground, 1.0);
}
How can I correct this shader to work with a noise texture and what should the texture look like?
As far as I know, OpenGL ES 2.0 doesn't support 3D textures. Moreover, I don't know how to create a 3D texture.
I wrote this function that produces 3D noise from a 2D texture. It still uses hardware interpolation in the x/y directions and then manually interpolates in z. To get noise along the z direction, it samples the same texture at different offsets. This will probably lead to some repetition, but I haven't noticed any in my application, and my guess is that using primes helps.
The thing that had me stumped for a while on shadertoy.com was that texture mipmapping was enabled, which caused seams at changes in the value of the floor() function. A quick solution was passing a -999 bias to texture2D.
This was hard-coded for a 256x256 noise texture, so adjust accordingly.
float noise3D(vec3 p)
{
p.z = fract(p.z)*256.0;
float iz = floor(p.z);
float fz = fract(p.z);
vec2 a_off = vec2(23.0, 29.0)*(iz)/256.0;
vec2 b_off = vec2(23.0, 29.0)*(iz+1.0)/256.0;
float a = texture2D(iChannel0, p.xy + a_off, -999.0).r;
float b = texture2D(iChannel0, p.xy + b_off, -999.0).r;
return mix(a, b, fz);
}
Update: To extend this to Perlin-style fractal noise, sum samples at different frequencies:
float perlinNoise3D(vec3 p)
{
float x = 0.0;
for (float i = 0.0; i < 6.0; i += 1.0)
x += noise3D(p * pow(2.0, i)) * pow(0.5, i);
return x;
}
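Plugged back into the question's shader, it might be used like this (a sketch; iChannel0 in noise3D would become the u_noise texture, and the scale factors are arbitrary values to tune by eye):
vec3 position = v_normal * 2.5 + vec3(u_time, u_time, u_time);
float n1 = perlinNoise3D(position) * 0.001;
vec3 ground = texture2D(u_planetDay, v_texCoord + n1).rgb;
gl_FragColor = vec4(ground, 1.0);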
Evaluating noise at run-time is often bad practice, unless you want to do some research work or to quickly check / debug your noise function (or see what your noise parameters look like visually).
It will always consume too much of the processing budget (not worth it at all), so just forget about evaluating noise at run-time.
If you store your noise results off-line, you will reduce the cost (by, say, over 95%) to a simple memory access.
I suggest reducing all this to a texture lookup over a pre-baked 2D noise image. So far you are only impacting the fragment pipeline, so a 2D noise texture is definitely the way to go (you can also use this 2D lookup for vertex position deformation).
In order to map it onto a sphere without any continuity issue, you may generate a loopable 2D image with 4D noise, feeding the function with the coordinates of two 2D circles, as sketched below.
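Assuming a 4D simplex noise function snoise(vec4) (e.g. from ashima/webgl-noise), the baking idea looks roughly like this:
const float PI = 3.141592653589793;
// Sample 4D noise on two circles so the image tiles in both u and v.
float loopableNoise(vec2 uv, float r1, float r2)
{
float a = uv.x * 2.0 * PI;
float b = uv.y * 2.0 * PI;
return snoise(vec4(r1 * cos(a), r1 * sin(a), r2 * cos(b), r2 * sin(b)));
}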
As for animating it, there are various hackish tricks: either deform your lookup results with the time semantic in the fragment pipeline, or bake an image sequence in case you really need noise "animated with noise".
3D textures are just stacks of 2D textures, so they are too heavy to manipulate (even without animation) for what you want to do, and since you apparently only need a decent sun surface, they would be overkill.

GLSL - Calculate Surface Normal

I have a simple vertex shader, written in GLSL, and I was wondering if someone could aid me in calculating the normals for the surface. I am 'upgrading' a flat surface, so the current light model looks... weird. Here is my current code:
varying vec4 oColor;
varying vec3 oEyeNormal;
varying vec4 oEyePosition;
uniform float Amplitude; // Amplitude of sine wave
uniform float Phase; // Phase of sine wave
uniform float Frequency; // Frequency of sine wave
varying float sinValue;
void main()
{
vec4 thisPos = gl_Vertex;
thisPos.z = sin( ( thisPos.x + Phase ) * Frequency) * Amplitude;
// Transform normal and position to eye space (for fragment shader)
oEyeNormal = normalize( vec3( gl_NormalMatrix * gl_Normal ) );
oEyePosition = gl_ModelViewMatrix * thisPos;
// Transform vertex to clip space for fragment shader
gl_Position = gl_ModelViewProjectionMatrix * thisPos;
sinValue = thisPos.z;
}
Does anyone have any ideas?
OK, let's just take this from the differential geometry perspective. You've got a parametric surface with parameters s and t:
X(s,t) = ( s, t, A*sin((s+P)*F) )
So we first compute the tangents of this surface, being the partial derivatives after our two parameters:
Xs(s,t) = ( 1, 0, A*F*cos((s+P)*F) )
Xt(s,t) = ( 0, 1, 0 )
Then we just need to compute the cross product of these to get the normal:
N = Xs x Xt = ( -A*F*cos((s+P)*F), 0, 1 )
So your normal can be computed completely analytical, you don't actually need the gl_Normal attribute:
float angle = (thisPos.x + Phase) * Frequency;
thisPos.z = sin(angle) * Amplitude;
vec3 normal = normalize(vec3(-Amplitude*Frequency*cos(angle), 0.0, 1.0));
// Transform normal and position to eye space (for fragment shader)
oEyeNormal = normalize( gl_NormalMatrix * normal );
The normalization of normal might not be necessary (since we normalize the transformed normal anyway), but right at the moment I'm not sure whether an unnormalized normal would behave correctly in the presence of non-uniform scaling. Of course, if you want the normal to point in the negative z-direction, you need to negate it.
Actually, the detour over a surface in space wouldn't have been necessary. We can also just think of the sine curve inside the x-z-plane, since the y-part of the normal is zero anyway, as only z depends on x. So we take the tangent to the curve z = A*sin((x+P)*F), whose slope is the derivative of z, giving the x-z-vector (1, A*F*cos((x+P)*F)); the normal to this is then just (-A*F*cos((x+P)*F), 1) (swap the coords and negate one), which are the x and z of the (unnormalized) normal. No 3D vectors and partial derivatives, but the outcome is the same.
Furthermore, you can tweak your performance:
oEyeNormal = normalize(vec3(gl_NormalMatrix * gl_Normal));
There is no need to cast it to a vec3, since gl_NormalMatrix is a 3x3 matrix.
There is no need to normalize your incoming normal in the vertex shader, since you don't do any length-based calculation in it. Some sources say that incoming normals should always be normalized by the application, so that there is no need for it at all in the vertex shader. But since that's out of the shader developer's hands, I still normalize them when I calculate vertex-based lighting (Gouraud).