Using emscripten with OpenGL shaders - C++

I am having trouble getting emscripten to work with OpenGL shaders. The project compiles fine with both emscripten and gcc, but fails when I try to run the emscripten output.
The errors I get from compiling the vertex shader:
ERROR: 0:1: 'core' : invalid version directive
ERROR: 0:3: 'layout' : syntax error
The errors I get from compiling the fragment shader:
ERROR: 0:1: 'core' : invalid version directive
ERROR: 0:3: 'in' : storage qualifier supported in GLSL ES 3.00 only
ERROR: 0:3: '' : No precision specified for (float)
ERROR: 0:5: 'out' : storage qualifier supported in GLSL ES 3.00 only
ERROR: 0:5: '' : No precision specified for (float)
I'm compiling this project with the command:
em++ src/*.cpp -Iinclude/ -o test.html -std=c++11 -s USE_GLFW=3 -s FULL_ES3=1
Vertex shader source:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 in_color;
uniform mat4 model;
uniform mat4 projection;
out vec3 out_color;
void main()
{
gl_Position = projection * model * vec4(position, 1.0f);
out_color = in_color;
}
Fragment shader source:
#version 330 core
in vec3 out_color;
out vec4 color;
void main()
{
color = vec4(out_color, 1.0);
}
The shaders are loaded as char arrays from the output of xxd -i. I'm working in C++11 on Linux. The program works fine when I run it natively, and I've tried running the emscripten output in both Firefox and Chromium.
It seems to be a problem with differing GLSL versions. Is there a way to make emscripten work with what I currently have, or do I have to write my shaders differently? And if I do have to rewrite them, how should I write them?

The shader code has to be WebGL shader code to work in the browser. Emscripten does not translate shader source (GLSL 3.30 in this case) into the GLSL ES 1.00 that WebGL accepts.
You'll have to use attribute instead of in in the vertex shader, varying in place of the vertex shader's out and the fragment shader's in, and write to gl_FragColor instead of declaring your own fragment output. layout qualifiers are not supported either, and fragment-shader variables need a precision qualifier. Check the WebGL cheat sheet here.
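Following that advice, the shaders from the question could be rewritten for GLSL ES 1.00 roughly as follows (a sketch; the names are kept from the question, and the attribute locations must then be set with glBindAttribLocation or queried with glGetAttribLocation, since layout is gone; note also that ES 1.00 rejects the 1.0f suffix):

```glsl
// Vertex shader (GLSL ES 1.00; no #version line needed, 100 is the default)
attribute vec3 position;
attribute vec3 in_color;
uniform mat4 model;
uniform mat4 projection;
varying vec3 out_color;
void main()
{
    gl_Position = projection * model * vec4(position, 1.0);
    out_color = in_color;
}

// Fragment shader (GLSL ES 1.00)
precision mediump float;
varying vec3 out_color;
void main()
{
    gl_FragColor = vec4(out_color, 1.0);
}
```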

In current emscripten you can use WebGL2 and GLSL ES 3.00. You need to change your #version lines to
#version 300 es
You will also need to add a default precision to your fragment shaders.
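With those two changes, the shaders from the question become (a sketch; emscripten must also be asked for a WebGL2 context for GLSL ES 3.00 to be accepted, e.g. with -s USE_WEBGL2=1 or -s MAX_WEBGL_VERSION=2, depending on the emscripten version):

```glsl
// Vertex shader (GLSL ES 3.00)
#version 300 es
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 in_color;
uniform mat4 model;
uniform mat4 projection;
out vec3 out_color;
void main()
{
    gl_Position = projection * model * vec4(position, 1.0);
    out_color = in_color;
}

// Fragment shader (GLSL ES 3.00)
#version 300 es
precision highp float;
in vec3 out_color;
out vec4 color;
void main()
{
    color = vec4(out_color, 1.0);
}
```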
If it were me, I'd wrap my calls to glShaderSource in something like
void shaderSourceWrapper(GLuint shader, std::string src, bool isFragment) {
#ifdef __EMSCRIPTEN__
    // Replace the '#version ...' line with '#version 300 es'.
    size_t pos = src.find("#version");
    if (pos != std::string::npos)
        src.replace(pos, src.find('\n', pos) - pos, "#version 300 es");
    // GLSL ES fragment shaders need a default float precision.
    if (isFragment)
        src.insert(src.find('\n') + 1, "precision highp float;\n");
    // Do anything else relevant here, like warning about
    // unsupported #extension directives.
#endif
    const char* source = src.c_str();
    glShaderSource(shader, 1, &source, nullptr);
}
Or you could even do that at the JavaScript level, like this.

Related

Compiling vertex shader program fails

I'm doing my first steps with OpenGL and stumbled upon a problem in my vertex shader program:
#version 330 core
layout (location = 0) in vec3 aPos;
uniform mat4 inputTransform;
void main()
{
gl_Position = inputTransform * vec4(aPos, 1.0);
}
compiles and works well, but when I change the first line to
#version 130 core
because I'm bound to use OpenGL 3.0 at max, it first complains about the "location" statement. When I remove it, the remaining error message for the line
layout in vec3 aPos;
is
ERROR: 0:2: 'in' : syntax error syntax error
What's wrong here - how do I have to declare input variables in this version of the language?
Input layout qualifiers were added in GLSL version 1.50, which corresponds to OpenGL version 3.2.
See the OpenGL Shading Language 1.50 Specification - 4.3.8.1 Input Layout Qualifiers, page 37:
An input layout qualifier consists of the keyword layout followed by the layout-qualifier-id-list.
It is not sufficient to remove the layout-qualifier-id-list alone, you have to remove the keyword layout too:
in vec3 aPos;
The syntax error is caused by the fact that layout directly followed by in, with no layout-qualifier-id-list in between, is not valid syntax.
Either query the attribute index with glGetAttribLocation after the program is linked, or specify the attribute location with glBindAttribLocation before the shader program is linked.
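Putting that together, a GLSL 1.30 version of the shader might look like this (a sketch, keeping the names from the question):

```glsl
#version 130
in vec3 aPos;
uniform mat4 inputTransform;
void main()
{
    gl_Position = inputTransform * vec4(aPos, 1.0);
}
```

On the C++ side, calling glBindAttribLocation(program, 0, "aPos") before glLinkProgram then pins the attribute to location 0, matching what layout (location = 0) did in GLSL 3.30.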

Difference between GLSL shader variable types?

When seeing some OpenGL examples, some use the following types of variables when declaring them at the top of the shader:
in
out
and some use:
attribute
varying
uniform
What is the difference? Are they mutually exclusive?
attribute and varying were removed in GLSL 1.40 and above (desktop GL version 3.1) in core OpenGL. OpenGL ES 2 still uses them, but they were removed in ES 3.0.
You can still use the old constructs in compatibility profiles, but attribute only maps to vertex shader inputs. varying maps to both VS outputs and FS inputs.
uniform has not changed; it still means what it always has: values set by the outside world which are fixed during a rendering operation.
In modern OpenGL, you have a series of shaders hooked up to a pipeline. A simple pipeline will have a vertex shader and a fragment shader.
For each shader in the pipeline, the in is the input to that stage, and the out is the output to that stage. The out from one stage will get matched with the in from the next stage.
A uniform can be used in any shader and will stay constant for the entire draw call.
If you want an analogy, think of it as a factory. The in and out are conveyor belts going in and out of machines. The uniform are knobs that you turn on a machine to change how it works.
Example
Vertex shader:
// Input from the vertex array
in vec3 VertPos;
in vec2 VertUV;
// Output to fragment shader
out vec2 TexCoord;
// Transformation matrix
uniform mat4 ModelViewProjectionMatrix;
Fragment shader:
// Input from vertex shader
in vec2 TexCoord;
// Output pixel data
out vec4 Color;
// Texture to use
uniform sampler2D Texture;
Older OpenGL
In older versions of OpenGL (2.1 / GLSL 1.20), other keywords were used instead of in and out:
attribute was used for the inputs to the vertex shader.
varying was used for the vertex shader outputs and fragment shader inputs.
Fragment shader outputs were implicitly declared, you would use gl_FragColor instead of specifying your own.
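The example above, rewritten in the old style (GLSL 1.20), would look roughly like this (a sketch with the same names):

```glsl
// Vertex shader (GLSL 1.20)
attribute vec3 VertPos;       // was: in vec3 VertPos;
attribute vec2 VertUV;        // was: in vec2 VertUV;
varying vec2 TexCoord;        // was: out vec2 TexCoord;
uniform mat4 ModelViewProjectionMatrix;

// Fragment shader (GLSL 1.20)
varying vec2 TexCoord;        // was: in vec2 TexCoord;
uniform sampler2D Texture;
// No declared output; the shader writes to the built-in gl_FragColor.
```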

OpenGL shader compatibility with OpenGL ES 2.0

I created an application with SDL 2.0 and initialized OpenGL with version 2.0 like this:
SDL_GL_SetAttribute ( SDL_GL_CONTEXT_MAJOR_VERSION, 2 );
SDL_GL_SetAttribute ( SDL_GL_CONTEXT_MINOR_VERSION, 0 );
Then I found a simple diffuse shader on the internet:
attribute highp vec3 inVertex;
attribute mediump vec3 inNormal;
attribute mediump vec2 inTexCoord;
uniform highp mat4 MVPMatrix;
uniform mediump vec3 LightDirection;
varying lowp float LightIntensity;
varying mediump vec2 TexCoord;
void main()
{
//Transform position
gl_Position = MVPMatrix * vec4(inVertex, 1.0);
//Pass through texcoords
TexCoord = inTexCoord;
//Simple diffuse lighting in model space
LightIntensity = dot(inNormal, -LightDirection);
}
It failed to compile with an error like this:
error: syntax error, unexpected NEW_IDENTIFIER
Then I found that after I remove
highp mediump lowp
it compiles fine and runs OK.
1. What was the reason for that?
Another question:
2. Can I still run this shader on both Linux and Android?
I am using Linux now and everything runs well.
Thanks.
What was the reason for that?
Precision qualifiers are only supported in OpenGL ES, not in desktop OpenGL.
Can I still run this shader both on linux and android?
No (at least not directly), because of the reason explained above. You'll have to make two shaders. One for desktop OpenGL and one for OpenGL ES.
OpenGL ES 2.0 is not the same thing as OpenGL 2.0.
See here for more information:
https://stackoverflow.com/a/10390965/1907004
Edit:
As pointed out by other people: You can use precision qualifiers in desktop OpenGL, but they will be ignored by the compiler. See here:
https://stackoverflow.com/a/20131165/1907004
For that to work, you need to specify a GLSL version for your shader using #version XXX, which your shader seems to lack. Regardless of what you do, you should always specify the GLSL version of your shaders.
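One common trick for sharing a single source between desktop OpenGL 2.1 and OpenGL ES 2.0 is a small preamble that defines the qualifiers away on desktop (a sketch; GL_ES is predefined by ES compilers, and the preamble must come right after the #version line if there is one):

```glsl
#ifdef GL_ES
precision mediump float;
#else
#define lowp
#define mediump
#define highp
#endif
```

With this in place, highp/mediump/lowp expand to nothing on desktop compilers and keep their meaning on ES.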

Shader link error after installing latest NVidia Quadro driver (311.35)

I just installed the latest NVidia driver for Quadro 4000 cards. From that moment, linking any of my shaders fails with a shader link error.
It is worth noting that I am using OpenGL 4.2 with separate shader objects. My OS is Windows 7 64-bit.
Before the update I had version 309.x of the driver and everything worked fine.
Now I have rolled back to version 295.x and it works again.
Does anyone know anything about this? Could it be a driver bug? If so, what can be done about it?
Here is a simple pass through vertex shader that fails:
#version 420 core
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvs;
layout(location = 2) in vec3 normal;
smooth out vec2 uvsOut;
void main()
{
uvsOut=uvs;
gl_Position = position;
}
Another question: is it possible that NVidia tightened the shader version semantics rules? I mean, I am using the OpenGL compatibility profile but mark my GLSL with #version 420 core. Could that be the problem?
Update:
Some more info from Program Info Log:
error C7592: ARB_separate_shader_objects requrires built-in block gl_PerVertex to be redeclared before accesing its members.
Yeah, the driver writers have typos too ("accesing") ;)
Now, I actually solved the linking error by adding this:
out gl_PerVertex
{
vec4 gl_Position;
};
It is strange that previous drivers didn't enforce redeclaration of the gl_PerVertex block. Now, while this addition solved the linking issue, it opened another one where some varyings don't work. For example, I have in the vertex shader:
out vec4 diffuseOut;
And in fragment shader:
in vec4 diffuseOut;
Then
OUTPUT = diffuseOut; // returns black while red is expected
Update 2 :
OK, now it becomes clear: the new drivers are stricter about shader input/output variables. With the older driver I could declare several "out" variables in a vertex shader without declaring their matching "in" counterparts in the fragment shader, and it worked. Now it seems I am forced to have an exact match between the declared "ins" and "outs" of the vertex and fragment programs. Strangely, no errors are thrown, but the declared "ins" end up empty in the destination shader.

OpenGLSL error while compiling fragment shader using UBOs

I am trying to get UBOs working, however I get a compilation error in the fragment shader:
ERROR 0:5: "(" : syntax error.
Fragment Shader:
layout(std140) uniform Colors
{
vec3 SCol;
vec3 WCol;
float DCool;
float DWarm;
}colors;
Where am I going wrong?
At the beginning of your fragment shader source file (the very first line), put this:
#version 140
This tells the GLSL compiler that you are using version 1.40 of the shading language (you can, of course, use a higher version - see Wikipedia for details).
Alternatively, if your OpenGL driver (and/or hardware) doesn't support GLSL 1.40 fully (which is part of OpenGL 3.1), but only GLSL 1.30 (OpenGL 3.0), you can try the following:
#version 130
#extension GL_ARB_uniform_buffer_object : require
However, this one will work only if your OpenGL 3.0 driver supports the GL_ARB_uniform_buffer_object extension.
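Putting it together, the fragment shader from the question would start like this (a sketch; the rest of the shader is unchanged):

```glsl
#version 140
layout(std140) uniform Colors
{
    vec3 SCol;
    vec3 WCol;
    float DCool;
    float DWarm;
} colors;
```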
Hope this helps.