Is it possible to somehow modify this fragment shader so that it doesn't use the OES_texture_float extension? I ask because I get an error on the machine that is supposed to run the WebGL animation.
I set up my scene using the three.js WebGLRenderer and a cube with a ShaderMaterial applied to it. On my MacBook Pro everything works fine, but on some Windows machines I get the error "float textures not supported" (I've searched and found that this probably has to do with the OES_texture_float extension).
So I'm guessing I need to change my fragment shader? Or am I missing the point completely?
<script type="x-shader/x-vertex" id="vertexshader">
// switch on high precision floats
#ifdef GL_ES
precision highp float;
#endif
void main() {
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
#ifdef GL_ES
precision mediump float;
#endif
#define PI 3.14159265
uniform float time;
uniform vec2 resolution;
// offset sine wave that defines the vertical center of a band
float f(float x) {
return (sin(x * 1.50 * PI) + 19.0);
}
// smoothed intensity of a band around the curve s = f(...) / 2
float q(vec2 p) {
float s = (f(p.x + 0.85)) / 2.0;
float c = smoothstep(0.9, 1.20, 1.0 - abs(p.y - s));
return c;
}
// sum of three time-shifted, differently colored bands
vec3 aurora(vec2 p, float time) {
vec3 c1 = q( vec2(p.x, p.y / 0.051) + vec2(time / 3.0, -0.3)) * vec3(2.90, 0.50, 0.10);
vec3 c2 = q( vec2(p.x, p.y / 0.051) + vec2(time, -0.2)) * vec3(1.3, 0.6, 0.3);
vec3 c3 = q( vec2(p.x, p.y / 0.051) + vec2(time / 5.0, -0.5)) * vec3(1.7, 0.4, 0.20);
return c1 + c2 + c3;
}
void main( void ) {
vec2 p = ( gl_FragCoord.xy / resolution.xy );
vec3 c = aurora(p, time);
gl_FragColor = vec4(1.0-c, c);
}
</script>
EDIT: this has nothing to do with the floating point texture, but rather with something in my fragment shader. Three.js gives me the error: "Can't initialise shader, VALIDATE_STATUS"
"Or am I missing the point completely?" - Indeed you are. The shaders don't care about the underlying texture format (you don't even use any textures in those shaders you posted!), so they don't have anything to do with your problem.
It's the application code that uses a float texture somewhere and needs to be changed accordingly. But from the fact that your shader doesn't use any textures at all (and I guess you haven't explicitly created a float texture elsewhere), it's probably three.js' internals that need a float texture somewhere, maybe as render target. So you need to search for ways to disable this requirement if possible.
Unless it's a three.js-ism, you haven't defined projectionMatrix, modelViewMatrix, and position in your vertex shader.
Try adding
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
attribute vec3 position;
to the top of the first shader. (Note that position must be a vec3 here, since the shader builds vec4(position, 1.0) from it.)
I'm trying to implement a shader for grass in cocos2d-x. The shader works fine on a texture loaded with Sprite::create(), and it looks like this:
http://i.stack.imgur.com/Rv4rd.png
The problem is that if I use Sprite::createWithSpriteFrameName() and apply the same shader, the offsets seem to be wrong when calculating the height; the grass also moves to a larger degree, as if the shader were using the height of the full texture from the plist file:
http://i.stack.imgur.com/of6Ku.png
Here is the shader code:
VSH
attribute vec4 a_position;
attribute vec2 a_texCoord;
attribute vec4 a_color;
#ifdef GL_ES
varying lowp vec4 v_fragmentColor;
varying mediump vec2 v_texCoord;
#else
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
#endif
void main()
{
gl_Position = CC_PMatrix * a_position;
v_fragmentColor = a_color;
v_texCoord = a_texCoord;
}
FSH
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform float speed;
uniform float bendFactor;
void main()
{
float height = 1.0 - v_texCoord.y;
float offset = pow(height, 2.5);
offset *= (sin(CC_Time[1] * speed) * bendFactor);
gl_FragColor = texture2D(CC_Texture0, fract(vec2(v_texCoord.x + offset, v_texCoord.y))).rgba;
}
If what is happening is not clear, I can provide some videos. Thank you.
EDIT
Here is the code used to generate the grass sprite:
// Smaller grass
auto grass2 = Sprite::createWithSpriteFrameName("grass2.png");
grass2->setAnchorPoint(Vec2(0.5f, 0));
grass2->setPosition(Vec2(230, footer1->getContentSize().height * 0.25f));
// Apply "grass" shader
grass2->setGLProgramState(mat->getTechniqueByName("grass")->getPassByIndex(0)->getGLProgramState()->clone());
grass2->getGLProgramState()->setUniformFloat("speed", RandomHelper::random_real(0.5f, 3.0f));
grass2->getGLProgramState()->setUniformFloat("bendFactor", RandomHelper::random_real(0.1f, 0.2f));
It's hard to tell what's happening without seeing more of your code...
If I had to guess, I would say the problem is related to trimming in TexturePacker.
If you set TrimMode=Trim, the sprite is stripped of its transparent border, which makes the sprite smaller. Cocos2d-x then renders only the smaller portion of the sprite, compensating for the difference between the original and the trimmed sprite with an offset vector.
I propose that you either try not trimming the sprite at all, or try polygon trimming.
The problem was with TexturePacker trimming, but also with the offsets in v_texCoord.
The solution was to calculate the offsets in cocos2d-x and pass them to the shader.
I calculated the offsets using the following code:
// Texture rect of the trimmed frame, normalized against the full texture size
Rect grass2Offset(
grass2->getTextureRect().origin.x / grass2->getTexture()->getContentSize().width,
grass2->getTextureRect().origin.y / grass2->getTexture()->getContentSize().height,
grass2->getTextureRect().size.width / grass2->getTexture()->getContentSize().width,
grass2->getTextureRect().size.height / grass2->getTexture()->getContentSize().height
);
Next, I pass the height offset and scale to the shader as uniforms:
grass2->getGLProgramState()->setUniformFloat("heightOffset", grass2Offset.origin.y);
grass2->getGLProgramState()->setUniformFloat("heightScale", 1 / grass2Offset.size.height);
Last, the shader uses the offsets like this:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform float speed;
uniform float bendFactor;
uniform float heightOffset;
uniform float heightScale;
void main()
{
float height = 1.0 - (v_texCoord.y - heightOffset) * heightScale;
float offset = pow(height, 2.5);
offset *= (sin(CC_Time[1] * speed) * bendFactor);
gl_FragColor = texture2D(CC_Texture0, fract(vec2(v_texCoord.x + offset, v_texCoord.y))).rgba;
}
I'm trying to make an effect in a fragment shader... This is what I get without the effect:
This is what I get by multiplying the color by a 'gradient':
float fragPosition = gl_FragCoord.y / screenSize.y;
outgoingLight *= fragPosition;
So I tried dividing instead, but then the color is kind of burned out by the light:
float fragPosition = gl_FragCoord.y / screenSize.y;
outgoingLight /= fragPosition;
And here is the kind of colors/gradient I want (per face, if possible):
EDIT
Here is the vertex shader (I use three.js shader chunks):
precision highp float;
precision highp int;
#define PHONG
uniform float time;
attribute vec4 data;
varying vec3 vViewPosition;
#ifndef FLAT_SHADED
varying vec3 vNormal;
#endif
$common
$map_pars_vertex
$lightmap_pars_vertex
$envmap_pars_vertex
$lights_phong_pars_vertex
$color_pars_vertex
$morphtarget_pars_vertex
$skinning_pars_vertex
$shadowmap_pars_vertex
$logdepthbuf_pars_vertex
void main(){
float displacementAmount = data.x;
int x = int(data.y);
int y = int(data.z);
bool edge = bool(data.w);
$map_vertex
$lightmap_vertex
$color_vertex
$morphnormal_vertex
$skinbase_vertex
$skinnormal_vertex
$defaultnormal_vertex
#ifndef FLAT_SHADED
vNormal = normalize( transformedNormal );
#endif
$morphtarget_vertex
$skinning_vertex
$default_vertex
if( edge == false ){
vec3 displacement = vec3(sin(time * 0.001 * displacementAmount) * 0.2);
mvPosition = mvPosition + vec4(displacement, 0.0); // displacement is a direction, so w must be 0.0 to keep mvPosition.w at 1.0
gl_Position = projectionMatrix * mvPosition;
}
$logdepthbuf_vertex
vViewPosition = -mvPosition.xyz;
$worldpos_vertex
$envmap_vertex
$lights_phong_vertex
$shadowmap_vertex
vec3 newPosition = position + vec3(mvPosition.xyz);
gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
}
EDIT 2:
After #gamedevelopmentgerm's suggestion to use mix(), here is what I get:
It's much better than what I had, but is it possible to avoid the black-to-white gradient in the background? I only want blue to white.
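(For reference, a minimal sketch of the suggested mix()-based gradient; screenSize and outgoingLight come from the snippets above, while the two colors are assumed placeholders:)
float fragPosition = gl_FragCoord.y / screenSize.y;
vec3 bottomColor = vec3(0.3, 0.5, 1.0); // assumed blue
vec3 topColor = vec3(1.0, 1.0, 1.0);    // white
// Blend between the two colors instead of dividing, and clamp the factor so
// nothing in the background is pushed all the way to black or burned out.
outgoingLight *= mix(bottomColor, topColor, clamp(fragPosition, 0.0, 1.0));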
I am writing a 2D lighting system using LibGDX, but I have run into difficulties with shaders.
Previously I had written a lighting system in Slick2D, which worked very well; however, the way coordinates work in that library's shaders seems to be very different from LibGDX.
I had to write this Java function to change coordinates from 'world coordinates' to whatever works in shaders. I did this by analysing the output for different coordinates.
public static Vector2 screenToNDC(Vector2 screenCoords, Vector2 textureSize, Vector2 texturePosition)
{
// Note: Vector2.scl() and sub() work in place, so this call also mutates the
// screenCoords and texturePosition arguments.
return screenCoords.scl(new Vector2(1f / textureSize.x, 1f / textureSize.y)).sub(texturePosition.scl(new Vector2(1f / textureSize.x, 1f / textureSize.y)));
}
This function works fine translating the light position.
This is my shader code:
#ifdef GL_ES
precision mediump float;
#endif
varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
uniform mat4 u_projTrans;
uniform vec2 lightPos;
uniform vec3 lightCol;
//uniform float lightIntensity;
// Euclidean distance; equivalent to the GLSL built-in distance(a, b)
float dist(vec2 a, vec2 b) {
return sqrt((b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y));
}
void main() {
gl_FragColor = vec4(lightCol, 1.0 / dist(v_texCoords, lightPos));
}
Currently, this draws only the light colour, with full alpha. Upon investigation, I found this must be because the dist function returns very small values, so 1.0 / dist ends up greater than 1.0 and the alpha clamps to fully opaque. If I multiply dist(v_texCoords, lightPos) by a value like 10, the light falloff works as intended.
Is there any way to simply make coordinates work in LibGDX shaders in the same way that they do in Java or Slick2D shaders?
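(For comparison, one way to make the falloff independent of the coordinate scale is an explicit radius. A sketch; lightRadius is a hypothetical uniform in the same texture-coordinate space as lightPos:)
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoords;
uniform vec2 lightPos;
uniform vec3 lightCol;
uniform float lightRadius; // hypothetical: falloff radius in texture-coordinate units

void main() {
    // distance() is a GLSL built-in; dividing by an explicit radius removes
    // the dependence on how big one texture-coordinate unit happens to be.
    float atten = clamp(1.0 - distance(v_texCoords, lightPos) / lightRadius, 0.0, 1.0);
    gl_FragColor = vec4(lightCol, atten);
}
With this, the falloff is controlled by a value you set from Java rather than by the scale of the coordinate system.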
The shader compiles successfully, but the program crashes as soon as rendering starts... This is the error I get: "no uniform with name 'u_texture' in shader". This is what my shader looks like:
#ifdef GL_ES
precision mediump float;
#endif
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
varying vec2 surfacePosition;
#define MAX_ITER 10
void main( void ) {
vec2 p = surfacePosition*4.0;
vec2 i = p;
float c = 0.0;
float inten = 1.0;
for (int n = 0; n < MAX_ITER; n++) {
float t = time * (1.0 - (1.0 / float(n+1)));
i = p + vec2(
cos(t - i.x) + sin(t + i.y),
sin(t - i.y) + cos(t + i.x)
);
c += 1.0/length(vec2(
p.x / (sin(i.x+t)/inten),
p.y / (cos(i.y+t)/inten)
)
);
}
c /= float(MAX_ITER);
gl_FragColor = vec4(vec3(pow(c,1.5))*vec3(0.99, 0.97, 1.8), 1.0);
}
Can someone please help me? I don't know what I'm doing wrong. BTW, this is a shader I found on the internet, so I know it works; the only problem is making it work with libGDX.
libGDX's SpriteBatch assumes that your shader will have a u_texture uniform. To overcome this, just add ShaderProgram.pedantic = false; (see the Javadoc) before handing your shader program to the SpriteBatch.
UPDATE: raveesh is right about the shader compiler optimizing away unused uniforms and attributes, but libGDX wraps the OpenGL shader in its own ShaderProgram class.
Not only should you add the uniform u_texture to your shader program, you should also use it; otherwise it will be optimized away by the shader compiler.
But looking at your shader, you don't seem to need the uniform anyway, so check your program for something like shader.setUniformi("u_texture", 0); and remove that line. It should work fine then.
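For illustration, here is about the smallest fragment shader that keeps SpriteBatch happy without pedantic = false (a sketch; v_texCoords is the varying provided by libGDX's default vertex shader):
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoords;
uniform sampler2D u_texture;

void main() {
    // Actually sampling u_texture keeps the compiler from optimizing the
    // uniform away, so SpriteBatch's lookup of "u_texture" succeeds.
    gl_FragColor = texture2D(u_texture, v_texCoords);
}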
In OpenGL 2.1, we could create a post-processing effect by rendering to an FBO using a fullscreen quad. OpenGL 3.1 removes GL_QUADS, so we have to emulate it using two triangles instead.
Unfortunately, I am getting a strange issue when trying to apply this technique: a diagonal line appears along the shared hypotenuse of the two triangles!
This screenshot demonstrates the issue:
I did not have a diagonal line in OpenGL 2.1 using GL_QUADS - it appeared in OpenGL 3.x core when I switched to GL_TRIANGLES. Unfortunately, most tutorials online suggest using two triangles and none of them show this issue.
// Fragment shader
#version 140
precision highp float;
uniform sampler2D ColorTexture;
uniform vec2 SampleDistance;
in vec4 TextureCoordinates0;
out vec4 FragData;
#define NORM (1.0 / (1.0 + 2.0 * (0.95894917 + 0.989575414)))
//const float w0 = 0.845633832 * NORM;
//const float w1 = 0.909997233 * NORM;
const float w2 = 0.95894917 * NORM;
const float w3 = 0.989575414 * NORM;
const float w4 = 1 * NORM;
const float w5 = 0.989575414 * NORM;
const float w6 = 0.95894917 * NORM;
//const float w7 = 0.909997233 * NORM;
//const float w8 = 0.845633832 * NORM;
void main(void)
{
FragData =
texture(ColorTexture, TextureCoordinates0.st - 2.0 * SampleDistance) * w2 +
texture(ColorTexture, TextureCoordinates0.st - SampleDistance) * w3+
texture(ColorTexture, TextureCoordinates0.st) * w4 +
texture(ColorTexture, TextureCoordinates0.st + SampleDistance) * w5 +
texture(ColorTexture, TextureCoordinates0.st + 2.0 * SampleDistance) * w6
;
}
// Vertex shader
#version 140
precision highp float;
uniform mat4 ModelviewProjection; // loaded with orthographic projection (-1,-1):(1,1)
uniform mat4 TextureMatrix; // loaded with identity matrix
in vec3 Position;
in vec2 TexCoord;
out vec4 TextureCoordinates0;
void main(void)
{
gl_Position = ModelviewProjection * vec4(Position, 1.0);
TextureCoordinates0 = TextureMatrix * vec4(TexCoord, 0.0, 1.0); // w = 1.0 so any translation in TextureMatrix applies
}
What am I doing wrong? Is there a suggested way to perform a fullscreen post-processing effect in OpenGL 3.x/4.x core?
Reto Koradi is correct and I suspect that is what I meant when I wrote my original comment; I honestly do not remember this question.
You absolutely have to swap 2 of the vertex indices to draw a strip instead of a quad or you wind up with a smaller triangle cut out of part of the quad.
ASCII art showing the difference between primitives generated in 0,1,2,3 order:
GL_QUADS    GL_TRIANGLE_STRIP    GL_TRIANGLE_FAN
  0-3             0 3                 0-3
  | |             |X|                 |\|
  1-2             1-2                 1-2
And the strip with two indices swapped, which covers the quad correctly:
  0-2
  |/|
  1-3
As for why this may produce better results than two triangles using 6 different vertices, floating-point variance may be to blame. Indexed rendering in general will usually solve this, but using a primitive type like GL_TRIANGLE_STRIP requires 2 fewer indices to draw a quad as two triangles.
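As an aside, in GL 3.x core you can avoid the quad/strip question entirely by drawing a single oversized triangle generated in the vertex shader. This is a different technique from the index swap above, sketched here under the assumption that you issue glDrawArrays(GL_TRIANGLES, 0, 3) with an empty VAO bound and no vertex attributes:
#version 140

out vec2 TexCoord;

void main(void)
{
    // gl_VertexID 0,1,2 produce clip-space positions (-1,-1), (3,-1), (-1,3):
    // one triangle that covers the whole screen, so there is no interior
    // diagonal edge at all. TexCoord spans 0..2 over the triangle, but only
    // the 0..1 range is visible on screen.
    vec2 corner = vec2(float((gl_VertexID << 1) & 2), float(gl_VertexID & 2));
    TexCoord = corner;
    gl_Position = vec4(corner * 2.0 - 1.0, 0.0, 1.0);
}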