OpenGL ES 2.0 shader integer operations - C++

I am having trouble getting integer operations to work in OpenGL ES 2.0 shaders.
GL_SHADING_LANGUAGE_VERSION: OpenGL ES GLSL ES 1.00
One of the example lines where I'm having issues is: color.r = floor(f / 65536);
I get this error:
Error linking program: ERROR:SEMANTIC-4 (vertex shader, line 13) Operator not supported for operand types
To give more context: within the library I am working with, the only way to pass the color is as a single float into which three 8-bit integers have been packed. The flow is: three 8-bit ints -> packed into one float -> passed to the shader -> unpacked back to r, g, b using integer manipulation. This all works fine on regular OpenGL, but I'm having trouble making it work on the Raspberry Pi.
Full vertex shader code here:
attribute vec4 position;
varying vec3 Texcoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;
vec3 unpackColor(float f)
{
vec3 color;
f -= 0x1000000;
color.r = floor(f / 65536);
color.g = floor((f - color.r * 65536) / 256.0);
color.b = floor(f - color.r * 65536 - color.g * 256.0);
return color / 256.0;
}
void main()
{
Texcoord = unpackColor(position.w);
gl_Position = proj * view * model * vec4(position.xyz, 1.0);
}
Any ideas how to get this working?
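A note on a likely fix, assuming the error stems from GLSL ES 1.00 having no implicit int-to-float conversions (desktop GLSL has them, which is why the same code compiles there): expressions like f / 65536 mix a float with an int literal and fail exactly as shown, so writing every constant as a float literal should resolve it. A minimal sketch of the unpack function rewritten that way:
vec3 unpackColor(float f)
{
vec3 color;
f -= 16777216.0; // 0x1000000 spelled as a float literal
color.r = floor(f / 65536.0);
color.g = floor((f - color.r * 65536.0) / 256.0);
color.b = floor(f - color.r * 65536.0 - color.g * 256.0);
return color / 256.0;
}
Since the unpacking happens in the vertex shader, where highp float support is mandatory in ES 2.0, the 24-bit packed value should survive the round-trip.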

Related

CPU to GPU normal mapping

I'm creating a terrain mesh, and following this SO answer I'm trying to migrate my CPU-computed normals to a shader-based version, in order to improve performance by reducing my mesh resolution and using a normal map computed in the fragment shader.
I'm using a MapBox height map for the terrain data. Tiles look like this:
And elevation at each pixel is given by the following formula:
const elevation = -10000.0 + ((red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1);
My original code first creates a dense mesh (256*256 squares of 2 triangles each) and then computes triangle and vertex normals. To get a visually satisfying result I was dividing the elevation by 5000 to match the tile's width and height in my scene (in the future I'll do a proper computation to display the real elevation).
I was drawing with these simple shaders:
Vertex shader:
uniform mat4 u_Model;
uniform mat4 u_View;
uniform mat4 u_Projection;
attribute vec3 a_Position;
attribute vec3 a_Normal;
attribute vec2 a_TextureCoordinates;
varying vec3 v_Position;
varying vec3 v_Normal;
varying mediump vec2 v_TextureCoordinates;
void main() {
v_TextureCoordinates = a_TextureCoordinates;
v_Position = vec3(u_View * u_Model * vec4(a_Position, 1.0));
v_Normal = vec3(u_View * u_Model * vec4(a_Normal, 0.0));
gl_Position = u_Projection * u_View * u_Model * vec4(a_Position, 1.0);
}
Fragment shader:
precision mediump float;
varying vec3 v_Position;
varying vec3 v_Normal;
varying mediump vec2 v_TextureCoordinates;
uniform sampler2D texture;
void main() {
vec3 lightVector = normalize(-v_Position);
float diffuse = max(dot(v_Normal, lightVector), 0.1);
highp vec4 textureColor = texture2D(texture, v_TextureCoordinates);
gl_FragColor = vec4(textureColor.rgb * diffuse, textureColor.a);
}
It was slow but gave visually satisfying results:
Now, I removed all the CPU based normals computation code, and replaced my shaders by those:
Vertex shader:
#version 300 es
precision highp float;
precision highp int;
uniform mat4 u_Model;
uniform mat4 u_View;
uniform mat4 u_Projection;
in vec3 a_Position;
in vec2 a_TextureCoordinates;
out vec3 v_Position;
out vec2 v_TextureCoordinates;
out mat4 v_Model;
out mat4 v_View;
void main() {
v_TextureCoordinates = a_TextureCoordinates;
v_Model = u_Model;
v_View = u_View;
v_Position = vec3(u_View * u_Model * vec4(a_Position, 1.0));
gl_Position = u_Projection * u_View * u_Model * vec4(a_Position, 1.0);
}
Fragment shader:
#version 300 es
precision highp float;
precision highp int;
in vec3 v_Position;
in vec2 v_TextureCoordinates;
in mat4 v_Model;
in mat4 v_View;
uniform sampler2D u_dem;
uniform sampler2D u_texture;
out vec4 color;
const vec2 size = vec2(2.0,0.0);
const ivec3 offset = ivec3(-1,0,1);
float getAltitude(vec4 pixel) {
float red = pixel.x;
float green = pixel.y;
float blue = pixel.z;
return (-10000.0 + ((red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1)) * 6.0; // Why * 6 and not / 5000 ??
}
void main() {
float s01 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.xy));
float s21 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.zy));
float s10 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yx));
float s12 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yz));
vec3 va = (vec3(size.xy, s21 - s01));
vec3 vb = (vec3(size.yx, s12 - s10));
vec3 normal = normalize(cross(va, vb));
vec3 transformedNormal = normalize(vec3(v_View * v_Model * vec4(normal, 0.0)));
vec3 lightVector = normalize(-v_Position);
float diffuse = max(dot(transformedNormal, lightVector), 0.1);
highp vec4 textureColor = texture(u_texture, v_TextureCoordinates);
color = vec4(textureColor.rgb * diffuse, textureColor.a);
}
It now loads nearly instantly, but something is wrong:
- in the fragment shader I had to multiply the elevation by 6 rather than divide by 5000 to get something close to my original code
- the result is not as good; especially when I tilt the scene, the shadows are very dark (the more I tilt, the darker they get):
Can you spot what causes that difference?
EDIT: I created two JSFiddles:
first version with CPU computed vertices normals: http://jsfiddle.net/tautin/tmugzv6a/10
second version with GPU computed normal map: http://jsfiddle.net/tautin/8gqa53e1/42
The problem appears when you play with the tilt slider.
There were three problems I could find.
One you saw and fixed by trial and error: the scale of your height calculation was wrong. On the CPU your color coordinates vary from 0 to 255, but in GLSL texture values are normalized to the 0 to 1 range, so the correct height calculation is:
return (-10000.0 + ((red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1 * 256.0)) / Z_SCALE;
But for this shader's purposes the -10000.0 doesn't matter, so you can simplify to:
return (red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1 * 256.0 / Z_SCALE;
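Folded back into the question's getAltitude helper, and taking Z_SCALE to be the 5000.0 the CPU version divided by (an assumption carried over from the question), this becomes:
float getAltitude(vec4 pixel) {
float red = pixel.x;
float green = pixel.y;
float blue = pixel.z;
// texture() returns normalized 0..1 values, so scale by 256.0
// to recover the 0..255 channel values the CPU formula expects
return (red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1 * 256.0 / 5000.0;
}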
The second problem is that the scale of your x and y coordinates was also wrong. In the CPU code the distance between two neighboring points is (SIZE * 2.0 / (RESOLUTION + 1)), but on the GPU you had set it to 1. The correct way to define your size variable is:
const float SIZE = 2.0;
const float RESOLUTION = 255.0;
const vec2 size = vec2(2.0 * SIZE / (RESOLUTION + 1.0), 0.0);
Notice that I increased the resolution to 255 because I assume this is what you want (one less than the texture resolution). Also, this is needed to match the value of offset, which you defined as:
const ivec3 offset = ivec3(-1,0,1);
To use a different RESOLUTION value you will have to adjust offset accordingly, e.g. for RESOLUTION == 127, offset = ivec3(-2, 0, 2); in general the offset step must be <real texture resolution> / (RESOLUTION + 1), which limits the possibilities for RESOLUTION, since offset components must be integers.
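As a concrete sketch of that rule, assuming the DEM tile really is 256 texels wide, the RESOLUTION == 127 case would be set up like this:
const float SIZE = 2.0;
const float RESOLUTION = 127.0;
const vec2 size = vec2(2.0 * SIZE / (RESOLUTION + 1.0), 0.0);
const ivec3 offset = ivec3(-2, 0, 2); // 256 / (127 + 1) = 2 texels per step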
The third problem is that you used a different normal calculation algorithm on the GPU, which strikes me as having lower resolution than the one used on the CPU, because it uses the four outer pixels of a cross but ignores the central one. That does not seem to be the full story, though, and I can't explain why the results are so different. I tried to implement the exact CPU algorithm as I thought it should be, but it yielded different results. Instead I had to use the following algorithm, which is similar but not exactly the same, to get an almost identical result (if you increase the CPU resolution to 255):
float s11 = getAltitude(texture(u_dem, v_TextureCoordinates));
float s21 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.zy));
float s10 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yx));
vec3 va = (vec3(size.xy, s21 - s11));
vec3 vb = (vec3(size.yx, s10 - s11));
vec3 normal = normalize(cross(va, vb));
This is the original CPU solution, but with RESOLUTION=255: http://jsfiddle.net/k0fpxjd8/
This is the final GPU solution: http://jsfiddle.net/7vhpuqd8/

World-space position from logarithmic depth buffer

After changing my current deferred renderer to use a logarithmic depth buffer, I cannot work out, for the life of me, how to reconstruct world-space depth from the depth buffer values.
When I had the default OpenGL z/w depth written, I could easily calculate this value by transforming from window space to NDC space and then performing the inverse perspective transformation.
I did this all in the second pass fragment shader:
uniform sampler2D depth_tex;
uniform mat4 inv_view_proj_mat;
in vec2 uv_f;
vec3 reconstruct_pos(){
float z = texture(depth_tex, uv_f).r;
vec4 pos = vec4(uv_f, z, 1.0) * 2.0 - 1.0;
pos = inv_view_proj_mat * pos;
return pos.xyz / pos.w;
}
and got a result that looked pretty correct:
But now the road to a simple z value is not so easy (doesn't seem like it should be so hard either).
My vertex shader for my first pass with log depth:
#version 330 core
#extension GL_ARB_shading_language_420pack : require
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 uv;
uniform mat4 mvp_mat;
uniform float FC;
out vec2 uv_f;
out float logz_f;
out float FC_2_f;
void main(){
gl_Position = mvp_mat * vec4(pos, 1.0);
logz_f = 1.0 + gl_Position.w;
gl_Position.z = (log2(max(1e-6, logz_f)) * FC - 1.0) * gl_Position.w;
FC_2_f = FC * 0.5;
}
And my fragment shader:
#version 330 core
#extension GL_ARB_shading_language_420pack : require
// other uniforms and output variables
in vec2 uv_f;
in float logz_f;
in float FC_2_f;
void main(){
gl_FragDepth = log2(logz_f) * FC_2_f;
}
I have tried a few different approaches to get back the z-position correctly, all failing.
If I redefine my reconstruct_pos in the second pass to be:
vec3 reconstruct_pos(){
vec4 pos = vec4(uv_f, get_depth(), 1.0) * 2.0 - 1.0;
pos = inv_view_proj_mat * pos;
return pos.xyz / pos.w;
}
This is my current attempt at reconstructing Z:
uniform float FC;
float get_depth(){
float log2logz_FC_2 = texture(depth_tex, uv_f).r;
float logz = pow(2, log2logz_FC_2 / (FC * 0.5));
float pos_z = log2(max(1e-6, logz)) * FC - 1.0; // pos.z
return pos_z;
}
Explained:
log2logz_FC_2: the value written to the depth buffer, i.e. log2(1.0 + gl_Position.w) * (FC / 2)
logz: simply 1.0 + gl_Position.w
pos_z: the value of gl_Position.z before the perspective divide
return value: gl_Position.z
Of course, that's just my working. I'm not sure what these values actually hold in the end, because I think I've screwed up some of the math or not correctly understood the transformations going on.
What is the correct way to get my world-space Z position from this logarithmic depth buffer?
In the end I was going about this all wrong. The way to get the world-space position back using a log buffer is:
1. Retrieve the depth from the texture
2. Reconstruct gl_Position.w
3. Linearize the reconstructed depth
4. Transform to world space
Here's my implementation in GLSL:
in vec2 uv_f;
uniform float nearz;
uniform float farz;
uniform mat4 inv_view_proj_mat;
float linearize_depth(in float depth){
float a = farz / (farz - nearz);
float b = farz * nearz / (nearz - farz);
return a + b / depth;
}
float reconstruct_depth(){
float depth = texture(depth_tex, uv_f).r;
return pow(2.0, depth * log2(farz + 1.0)) - 1.0;
}
vec3 reconstruct_world_pos(){
vec4 wpos =
inv_view_proj_mat *
(vec4(uv_f, linearize_depth(reconstruct_depth()), 1.0) * 2.0 - 1.0);
return wpos.xyz / wpos.w;
}
Which gives me the same result (but with better precision) as when I was using the default OpenGL depth buffer. To see why reconstruct_depth works, assume the usual FC = 2.0 / log2(farz + 1.0): the stored depth is then log2(1.0 + gl_Position.w) * FC / 2.0 = log2(1.0 + gl_Position.w) / log2(farz + 1.0), and solving for gl_Position.w gives pow(2.0, depth * log2(farz + 1.0)) - 1.0, which is exactly what the function returns.

OSG: GLSL Shader working on AMD but not on NVIDIA

Currently I am working on an OSG project for my studies and wrote a cel-shading shader (alongside a simple fog shader). I first render the scene with the cel shader, along with the depth buffer, to a texture, and then apply the fog shader. Everything works fine on my AMD Radeon HD 7950 and on my Intel HD 4400 (although it is slow on the latter), both running Windows. However, on a Quadro 600 running Linux, the shader compiles without error but is still wrong: the light is dulled, and because some light spots are missing it seems that not every light in the scene is used. The whole toon effect is also gone.
I confirmed the shader works on another AMD card, an ATI Mobility HD 3400.
But on other NVIDIA cards, like a GTX 670, 660 TI or 560 TI (this time on Windows), the shader is not working. At first it was totally messed up because of non-uniform flow control, but after I fixed that it is still not working.
I have had this problem for some days now and it is giving me a headache. What am I missing? Why does it work on a simple Intel HD 4400 but not on high-end NVIDIA cards?
Strangely, the fogShader is working perfectly on every system and gives me the nice fog I want.
Does anyone have an idea? The uniform is set for toonTex, but texture0 is not set, because the model is UV-mapped with Blender; the textures seem to work just fine regardless (look at the pony in the screenshots). I am assuming unit 0 is used for texture0, which is perfectly valid as far as I know. Here is a video showing the shader on a GTX 660 TI. Something seems to work if there is only one light, but it is not how it should look; on a Radeon HD 7950 it looks like this (ignore the black border, a screenshot issue).
The light is clearly different.
EDIT: I just did another test: on the Intel HD 4400 under Windows it is working, but the same system running Linux shows only a lot of white with some outlines and no textures at all.
Does anyone have any suggestions?
The sources for the shaders are here:
celShader.vert
#version 120
varying vec3 normalModelView;
varying vec4 vertexModelView;
uniform bool zAnimation;
uniform float osg_FrameTime;
void main()
{
normalModelView = gl_NormalMatrix * gl_Normal;
vertexModelView = gl_ModelViewMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
vec4 vertexPos = gl_Vertex;
if(zAnimation){//
vertexPos.z = sin(5.0*vertexPos.z + osg_FrameTime)*0.25;//+ vertexPos.z;
}
gl_Position = gl_ModelViewProjectionMatrix * vertexPos;
}
celShader.frag
#version 120
#define NUM_LIGHTS 5
uniform sampler2D texture0;
uniform sampler2D toonTex;
uniform float osg_FrameTime;
uniform bool tex;
varying vec3 normalModelView;
varying vec4 vertexModelView;
vec4 calculateLightFromLightSource(int lightIndex, bool front){
vec3 lightDir;
vec3 eye = normalize(-vertexModelView.xyz);
vec4 curLightPos = gl_LightSource[lightIndex].position;
//curLightPos.z = sin(10*osg_FrameTime)*4+curLightPos.z;
lightDir = normalize(curLightPos.xyz - vertexModelView.xyz);
float dist = distance( gl_LightSource[lightIndex].position, vertexModelView );
float attenuation = 1.0 / (gl_LightSource[lightIndex].constantAttenuation
+ gl_LightSource[lightIndex].linearAttenuation * dist
+ gl_LightSource[lightIndex].quadraticAttenuation * dist * dist);
float z = length(vertexModelView);
vec4 color;
vec3 n = normalize(normalModelView);
vec3 nBack = normalize(-normalModelView);
float intensity = dot(n,lightDir); //NdotL, Lambert
float intensityBack = dot(nBack,lightDir); //NdotL, Lambert
//-Phong Modell
vec3 reflected = normalize(reflect( -lightDir, n));
float specular = pow(max(dot(reflected, eye), 0.0), gl_FrontMaterial.shininess);
vec3 reflectedBack = normalize(reflect( -lightDir, nBack));
float specularBack = pow(max(dot(reflectedBack, eye), 0.0), gl_BackMaterial.shininess);
//Toon-Shading
//2D Toon http://www.cs.rpi.edu/~cutler/classes/advancedgraphics/S12/final_projects/hutchins_kim.pdf
vec4 toonColor = texture2D(toonTex,vec2(intensity,specular));
vec4 toonColorBack = texture2D(toonTex,vec2(intensityBack,specularBack));
if(front){
color += gl_FrontMaterial.ambient * gl_LightSource[lightIndex].ambient[lightIndex];
if(intensity > 0.0){
color += gl_FrontMaterial.diffuse * gl_LightSource[lightIndex].diffuse * intensity * attenuation ;
color += gl_FrontMaterial.specular * gl_LightSource[lightIndex].specular * specular *attenuation ;
}
return color * toonColor;
} else {//back
color += gl_BackMaterial.ambient * gl_LightSource[lightIndex].ambient[lightIndex];
if(intensity > 0.0){
color += gl_BackMaterial.diffuse * gl_LightSource[lightIndex].diffuse * intensityBack * attenuation ;
color += gl_BackMaterial.specular * gl_LightSource[lightIndex].specular * specularBack *attenuation ;
}
return color * toonColorBack;
}
}
void main(void) {
vec4 color = vec4(0.0);
bool front = true;
//non-uniform-flow error correction
//see more here: http://www.opengl.org/wiki/GLSL_Sampler#Non-uniform_flow_control
//and here: http://gamedev.stackexchange.com/questions/32543/glsl-if-else-statement-unexpected-behaviour
vec4 texColor = texture2D(texture0,gl_TexCoord[0].xy);
if(!gl_FrontFacing)
front = false;
for(int i = 0; i< NUM_LIGHTS; i++){
color += calculateLightFromLightSource(i,front);
}
if(tex)
gl_FragColor =color * texColor;
else
gl_FragColor = color;
}
fogShader.vert
#version 120
varying vec4 vertexModelView;
void main()
{
gl_Position = ftransform();
vertexModelView = gl_ModelViewMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
}
fogShader.frag
varying vec4 vertexModelView;
uniform sampler2D texture0;
uniform sampler2D deepth;
uniform vec3 fogColor;
uniform float zNear;
uniform float zFar;
float linearDepth(float z){
return (2.0 * (zNear+zFar)) / ((zFar + zNear) - z * (zFar - zNear));// -1.0;
}
void main(void){
//Literature
//http://www.ozone3d.net/tutorials/glsl_fog/p04.php and depth_of_field example OSG Cookbook
vec2 deepthPoint = gl_TexCoord[0].xy;
float z = texture2D(deepth, deepthPoint).x;
//fogFactor = (end - z) / (end - start)
z = linearDepth(z);
float fogFactor = (4000*4-z) / (4000*4 - 30*4);
fogFactor = clamp(fogFactor, 0.0, 1.0);
vec4 texColor = texture2D(texture0,gl_TexCoord[0].xy);
gl_FragColor = mix(vec4(fogColor,1.0), texColor,fogFactor);
}
Program linking
osg::ref_ptr<osg::Shader> toonFrag = osgDB::readShaderFile("../Shader/celShader.frag");
osg::ref_ptr<osg::Shader> toonVert = osgDB::readShaderFile("../Shader/" + _vertSource);
osg::ref_ptr<osg::Program> celShadingProgram = new osg::Program;
celShadingProgram->addShader(toonFrag);
celShadingProgram->addShader(toonVert);
osg::ref_ptr<osg::Texture2D> toonTex = new osg::Texture2D;
toonTex->setImage(osgDB::readImageFile("../BlenderFiles/Texturen/toons/" + _toonTex));
toonTex->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
toonTex->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);
osg::ref_ptr<osg::StateSet> ss = new osg::StateSet;
ss->setTextureAttributeAndModes(1, toonTex, osg::StateAttribute::OVERRIDE | osg::StateAttribute::ON);
ss->addUniform(new osg::Uniform("toonTex", 1));
ss->setAttributeAndModes(celShadingProgram, osg::StateAttribute::OVERRIDE | osg::StateAttribute::ON);
// TODO: needed?
ss->setTextureMode(1, GL_TEXTURE_1D, osg::StateAttribute::OVERRIDE | osg::StateAttribute::OFF);
ss->addUniform(new osg::Uniform("tex", true));
ss->addUniform(new osg::Uniform("zAnimation", false));
Okay, I finally found the error.
There was a faulty line, present since version zero of my shader, which I overlooked for a whole week (and I am surprised my AMD driver did not give me an error; it was just plain wrong!
EDIT: not wrong at all, see the comment below!).
These two lines were broken:
color += gl_FrontMaterial.ambient * gl_LightSource[lightIndex].ambient[lightIndex];
color += gl_BackMaterial.ambient * gl_LightSource[lightIndex].ambient[lightIndex];
ambient is of course not an array....
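Since gl_LightSource[lightIndex].ambient is a vec4, the extra [lightIndex] subscript is actually legal GLSL: it selects a single component of the vector (which is presumably why the AMD driver accepted it, per the EDIT above), and for lightIndex greater than 3 it even indexes past the end of the vec4, with undefined results. The intended lines, dropping the stray subscript, are presumably:
color += gl_FrontMaterial.ambient * gl_LightSource[lightIndex].ambient;
color += gl_BackMaterial.ambient * gl_LightSource[lightIndex].ambient;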

Fullscreen blur GLSL shows diagonal line (OpenGL core)

In OpenGL 2.1, we could create a post-processing effect by rendering to an FBO using a fullscreen quad. OpenGL 3.1 removed GL_QUADS, so we have to emulate this using two triangles instead.
Unfortunately, I am getting a strange issue when trying to apply this technique: a diagonal line appears along the hypotenuse of the two triangles!
This screenshot demonstrates the issue:
I did not have a diagonal line in OpenGL 2.1 using GL_QUADS - it appeared in OpenGL 3.x core when I switched to GL_TRIANGLES. Unfortunately, most tutorials online suggest using two triangles and none of them show this issue.
// Fragment shader
#version 140
precision highp float;
uniform sampler2D ColorTexture;
uniform vec2 SampleDistance;
in vec4 TextureCoordinates0;
out vec4 FragData;
#define NORM (1.0 / (1.0 + 2.0 * (0.95894917 + 0.989575414)))
//const float w0 = 0.845633832 * NORM;
//const float w1 = 0.909997233 * NORM;
const float w2 = 0.95894917 * NORM;
const float w3 = 0.989575414 * NORM;
const float w4 = 1 * NORM;
const float w5 = 0.989575414 * NORM;
const float w6 = 0.95894917 * NORM;
//const float w7 = 0.909997233 * NORM;
//const float w8 = 0.845633832 * NORM;
void main(void)
{
FragData =
texture(ColorTexture, TextureCoordinates0.st - 2.0 * SampleDistance) * w2 +
texture(ColorTexture, TextureCoordinates0.st - SampleDistance) * w3+
texture(ColorTexture, TextureCoordinates0.st) * w4 +
texture(ColorTexture, TextureCoordinates0.st + SampleDistance) * w5 +
texture(ColorTexture, TextureCoordinates0.st + 2.0 * SampleDistance) * w6
;
}
// Vertex shader
#version 140
precision highp float;
uniform mat4 ModelviewProjection; // loaded with orthographic projection (-1,-1):(1,1)
uniform mat4 TextureMatrix; // loaded with identity matrix
in vec3 Position;
in vec2 TexCoord;
out vec4 TextureCoordinates0;
void main(void)
{
gl_Position = ModelviewProjection * vec4(Position, 1.0);
TextureCoordinates0 = (TextureMatrix * vec4(TexCoord, 0.0, 0.0));
}
What am I doing wrong? Is there a suggested way to perform a fullscreen post-processing effect in OpenGL 3.x/4.x core?
Reto Koradi is correct and I suspect that is what I meant when I wrote my original comment; I honestly do not remember this question.
You absolutely have to swap 2 of the vertex indices to draw a strip instead of a quad or you wind up with a smaller triangle cut out of part of the quad.
ASCII art showing the difference between the primitives generated from vertices submitted in 0,1,2,3 order:

GL_QUADS    GL_TRIANGLE_STRIP    GL_TRIANGLE_FAN
  0-3             0 3                  0-3
  | |             |X|                  |\|
  1-2             1-2                  1-2

And the strip drawn with vertices 2 and 3 swapped, which covers the full quad:

  0-2
  |/|
  1-3
As for why this may produce better results than two triangles using 6 different vertices, floating-point variance may be to blame. Indexed rendering in general will usually solve this, but using a primitive type like GL_TRIANGLE_STRIP requires 2 fewer indices to draw a quad as two triangles.

Can't initialise shader VALIDATE_STATUS

Is it possible to somehow modify this fragment shader so that it doesn't use the OES_texture_float extension? I ask because I get an error on the machine that is supposed to run a WebGL animation.
I set up my scene using the three.js WebGLRenderer and a cube with a ShaderMaterial applied to it. On my MacBook Pro everything works fine, but on some Windows machines I get the error "float textures not supported" (I've searched and found that this probably has to do with the OES_texture_float extension).
So I'm guessing I need to change my fragment shader? Or am I missing the point completely?
<script type="x-shader/x-vertex" id="vertexshader">
// switch on high precision floats
#ifdef GL_ES
precision highp float;
#endif
void main() {
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
#ifdef GL_ES
precision mediump float;
#endif
#define PI 3.14159265
uniform float time;
uniform vec2 resolution;
float f(float x) {
return (sin(x * 1.50 * PI ) + 19.0);
}
float q(vec2 p) {
float s = (f(p.x + 0.85)) / 2.0;
float c = smoothstep(0.9, 1.20, 1.0 - abs(p.y - s));
return c;
}
vec3 aurora(vec2 p, float time) {
vec3 c1 = q( vec2(p.x, p.y / 0.051) + vec2(time / 3.0, -0.3)) * vec3(2.90, 0.50, 0.10);
vec3 c2 = q( vec2(p.x, p.y / 0.051) + vec2(time, -0.2)) * vec3(1.3, .6, 0.3);
vec3 c3 = q( vec2(p.x, p.y / 0.051) + vec2(time / 5.0, -0.5)) * vec3(1.7, 0.4, 0.20);
return c1+c2+c3;
}
void main( void ) {
vec2 p = ( gl_FragCoord.xy / resolution.xy );
vec3 c = aurora(p, time);
gl_FragColor = vec4(1.0-c, c);
}
</script>
EDIT: this has nothing to do with the floating point texture, but rather with something in my fragment shader. Three.js gives me the error: "Can't initialise shader, VALIDATE_STATUS"
"Or am I missing the point completely?" - Indeed you are. The shaders don't care about the underlying texture format (you don't even use any textures in those shaders you posted!), so they don't have anything to do with your problem.
It's the application code that uses a float texture somewhere and needs to be changed accordingly. But from the fact that your shader doesn't use any textures at all (and I guess you haven't explicitly created a float texture elsewhere), it's probably three.js' internals that need a float texture somewhere, maybe as a render target. So you need to search for ways to disable this requirement if possible.
Unless it's a three.js-ism, you haven't defined projectionMatrix, modelViewMatrix, and position in your vertex shader.
Try adding
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
attribute vec3 position;
to the top of the first shader?
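For reference, the vertex shader with those declarations added might look like the sketch below, using vec3 for position so the vec4(position, 1.0) constructor stays valid (three.js normally injects these very declarations for a ShaderMaterial, which is presumably why the original worked on some machines):
<script type="x-shader/x-vertex" id="vertexshader">
// switch on high precision floats
#ifdef GL_ES
precision highp float;
#endif
// declarations three.js would otherwise inject for a ShaderMaterial
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
attribute vec3 position;
void main() {
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>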