Vertex/Fragment shader compiled with Xcode outputs a syntax error - c++

I'm following a shader tutorial that was done using Visual Studio... I'm using Xcode on El Capitan.
My readShaderCode() function prints the contents of the shader files (vertex.glsl and fragment.glsl) to the console, just to make sure, along with the OpenGL version.
This code works on Windows but not on my machine! I can't see what the problem is, and if somebody could figure it out for me I would really appreciate it. Thanks, and thanks for all the posts that have been helping me along for some time now :)
Here is the output:
Working with OpenGl: 2.1 INTEL-10.14.66
Here is the shader Code:
#version 120
#extension GL_ARB_separate_shader_objects : enable
in layout(location=0) vec2 position;
in layout(location=1) vec3 vertexColor;
out vec3 theColor;
void main() {
    gl_Position = vec4(position, 0.0, 1.0);
    theColor = vertexColor;
}
*****************************************
Working with OpenGl: 2.1 INTEL-10.14.66
Here is the shader Code:
#version 120
#extension GL_ARB_separate_shader_objects : enable
out vec4 daColor;
in vec3 theColor;
void main() {
    daColor = vec4(theColor, 1.0);
}
*****************************************
WARNING: 0:2: extension 'GL_ARB_separate_shader_objects' is not supported
ERROR: 0:4: '(' : syntax error: syntax error

You get perfectly reasonable errors and warnings.
WARNING: 0:2: extension 'GL_ARB_separate_shader_objects' is not supported
That means exactly what it says. Your shader depends on an extension that isn't supported.
ERROR: 0:4: '(' : syntax error: syntax error
There is only one ( character in line 4 of your vertex shader, so that makes it clear what the problem is.
This:
in layout(location=0) vec2 position;
Is not valid GLSL 1.20 code. GLSL 1.20 does not permit in-qualified global variables, and GLSL 1.20 has no idea what layout means.
Your code is not valid GLSL for the #version it declares. If another implementation accepted it, then you were relying on implementation-defined behavior.
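For reference, here is a sketch of the same pair of shaders written against what GLSL 1.20 actually provides: attribute/varying instead of in/out, no layout qualifiers, and gl_FragColor instead of a user-defined output. The attribute locations would then have to be assigned from the host side (for example with glBindAttribLocation before linking):
Vertex shader:
#version 120
attribute vec2 position;
attribute vec3 vertexColor;
varying vec3 theColor;
void main() {
    // The position is 2D; pad it out to a full clip-space coordinate.
    gl_Position = vec4(position, 0.0, 1.0);
    theColor = vertexColor;
}
Fragment shader:
#version 120
varying vec3 theColor;
void main() {
    // GLSL 1.20 has no user-declared fragment outputs; write to gl_FragColor.
    gl_FragColor = vec4(theColor, 1.0);
}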

Related

Link error with geometry shader (driver bug?)

I have a GLSL program that works on some machines, but fails to link on one particular machine. I suspect a driver bug, but hope that someone will recognize something I'm doing as being poorly supported, and suggest an alternative.
If I omit the geometry shader, the vertex and fragment shaders link successfully.
The error log after the link error says:
Vertex shader(s) failed to link, fragment shader(s) failed to link, geometry shader(s) failed to link.
ERROR: error(#275) Symbol 'gl_AtiVertexData' is defined with 2 different types between two stages
ERROR: error(#275) Symbol 'gl_AtiVertexData' is defined with 2 different types between two stages
ERROR: error(#275) Symbol 'gl_AtiVertexData' is defined with 2 different types between two stages
My code does not contain the symbol gl_AtiVertexData, and Google finds no hits for it.
The GL_RENDERER string is "ATI Mobility Radeon HD 4670", GL_VERSION is "3.3.11672 Compatibility Profile Context", and GL_SHADING_LANGUAGE_VERSION is 3.30.
I've trimmed down my shader programs as much as possible, so that they no longer pretend to do anything useful, but still reproduce the problem.
Vertex shader:
#version 330
in vec4 quesaVertex;
out VertexData {
    vec4 interpolatedColor;
};
void main()
{
    gl_Position = quesaVertex;
    interpolatedColor = vec4(1.0);
}
Geometry shader:
#version 330
layout (triangles) in;
layout (triangle_strip, max_vertices=3) out;
in VertexData {
    vec4 interpolatedColor;
} gs_in[];
out VertexData {
    vec4 interpolatedColor;
} gs_out;
void main() {
    gl_Position = gl_in[0].gl_Position;
    gs_out.interpolatedColor = gs_in[0].interpolatedColor;
    EmitVertex();
    gl_Position = gl_in[1].gl_Position;
    gs_out.interpolatedColor = gs_in[1].interpolatedColor;
    EmitVertex();
    gl_Position = gl_in[2].gl_Position;
    gs_out.interpolatedColor = gs_in[2].interpolatedColor;
    EmitVertex();
    EndPrimitive();
}
Fragment shader:
#version 330
in VertexData {
    vec4 interpolatedColor;
};
out vec4 fragColor;
void main()
{
    fragColor = interpolatedColor;
}
Later information:
When I tried renaming the interface block VertexData to IBlock, then the error message talked about a symbol gl_AtiIBlock instead of gl_AtiVertexData, so that symbol name was a red herring.
If I don't use interface blocks, then the program links correctly. That's a bother, because I'll need to write the vertex and fragment shaders differently depending on whether there is a geometry shader between them, but maybe that's what I need to do (see the sketch below).
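For what it's worth, a sketch of that interface-block-free variant (variable names here are illustrative, not the ones in my real code):
Vertex shader:
#version 330
in vec4 quesaVertex;
out vec4 interpolatedColor;  // plain out instead of a VertexData block
void main()
{
    gl_Position = quesaVertex;
    interpolatedColor = vec4(1.0);
}
Geometry shader:
#version 330
layout (triangles) in;
layout (triangle_strip, max_vertices=3) out;
in vec4 interpolatedColor[];  // matched to the vertex shader output by name
out vec4 gsColor;             // renamed so it cannot collide with the input
void main() {
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        gsColor = interpolatedColor[i];
        EmitVertex();
    }
    EndPrimitive();
}
Fragment shader:
#version 330
in vec4 gsColor;  // would need renaming again if the geometry shader is dropped
out vec4 fragColor;
void main()
{
    fragColor = gsColor;
}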

WebGL `uniform uint` Causes Syntax Error

I am using Google Chrome Version 59.0.3071.115 (Official Build) (64-bit) on Windows 10.
I have a vertex shader that looks like so:
attribute vec3 aPosition;
attribute vec2 aTextureCoordinate;
uniform uint uLayer;
uniform vec2 uLocation;
varying highp vec2 vTextureCoordinate;
void main(void)
{
    gl_Position = vec4(aPosition + vec3(uLocation, 0.0), 1.0);
    vTextureCoordinate = aTextureCoordinate;
}
A prior version did not have line four (uniform uint uLayer;), and it compiled fine. Adding that line causes an ERROR: 0:5: 'uLayer' : syntax error. As far as I can tell, there is nothing wrong with this line syntactically, and I cannot find anything stating that uniform uint is not valid in a vertex shader. Is there something I am missing here?
WebGL 1 uses GLSL ES 1.00, which does not support uint. WebGL 2 uses GLSL ES 3.00, which adds uint.
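So the options are to drop uint from the shader (an int or float uniform works in GLSL ES 1.00), or to request a WebGL 2 context (canvas.getContext('webgl2')) and port the shader to GLSL ES 3.00. A sketch of the ported vertex shader, assuming a WebGL 2 context:
#version 300 es
in vec3 aPosition;                  // 'attribute' becomes 'in'
in vec2 aTextureCoordinate;
uniform uint uLayer;                // legal in GLSL ES 3.00
uniform vec2 uLocation;
out highp vec2 vTextureCoordinate;  // 'varying' becomes 'out'
void main(void)
{
    gl_Position = vec4(aPosition + vec3(uLocation, 0.0), 1.0);
    vTextureCoordinate = aTextureCoordinate;
}
Note that #version 300 es has to be the very first line of the source string.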

Is it possible to use bindless textures in SPIR-V shaders on OpenGL 4.5?

I'm trying to compile the following shader using LunarG Vulkan SDK 1.0.37.0 on Windows:
#version 450 core
#extension GL_ARB_bindless_texture : require
layout(std140, binding=1) uniform LightUbo
{
    vec3 lightDirectionVS;
};
layout(std140, binding=2) uniform TextureUBO
{
    sampler2D samplers[ 10 ];
};
in vec2 vUV;
in vec3 vNormalVS;
out vec4 fragColor;
void main()
{
    vec2 uv = vUV;
    uv.y = 1.0 - uv.y;
    fragColor = texture( samplers[ 0 ], uv ) * max( 0.2, dot( lightDirectionVS, vNormalVS ) );
}
Compile command:
\VulkanSDK\1.0.37.0\Bin\glslangValidator.exe -V assets\shader.frag -o assets\shader.frag.spv
The compiler gives the following output:
Warning, version 450 is not yet complete; most version-specific features are present, but some are missing.
ERROR: assets\shader.frag:2: '#extension' : extension not supported: GL_ARB_bindless_texture
ERROR: assets\shader.frag:2: '#extension' : extra tokens -- expected newline
ERROR: assets\shader.frag:2: '' : compilation terminated
Is there a way to use bindless textures with OpenGL 4.5 and SPIR-V shaders?
SPIR-V has no facilities for doing bindless texture stuff. So unless NVIDIA or the ARB adds a SPIR-V extension to allow it, you'll have to use the implementation's GLSL compiler, rather than glslang.
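If ordinary bound textures are acceptable, moving the samplers out of the uniform block sidesteps the extension entirely; a sketch (binding points are illustrative, and newer glslang builds can compile it under OpenGL semantics with the -G flag instead of -V):
#version 450 core
// Samplers as ordinary bound textures rather than UBO members,
// so no bindless extension is required.
layout(binding = 0) uniform sampler2D samplers[ 10 ];
layout(std140, binding = 1) uniform LightUbo
{
    vec3 lightDirectionVS;
};
in vec2 vUV;
in vec3 vNormalVS;
out vec4 fragColor;
void main()
{
    vec2 uv = vec2( vUV.x, 1.0 - vUV.y );
    fragColor = texture( samplers[ 0 ], uv )
              * max( 0.2, dot( lightDirectionVS, vNormalVS ) );
}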

OpenGL shader version error

I am using Visual Studio 2013 but building with the Visual Studio 2010 compiler toolset.
I am running Windows 8 in Boot Camp on a MacBook Pro with Intel Iris Pro 5200 graphics.
I have a very simple vertex and fragment shader; I am just displaying simple primitives in a window, but I am getting warnings in the console stating:
OpenGL Debug Output: Source(Shader Comiler), type(Other), Priority(Medium), GLSL compile warning(s) for shader 3, "": WARNING: -1:65535: #version : version number deprecated in OGL 3.0 forward compatible context driver
Does anyone have any idea how to get rid of these annoying warnings?
Vertex Shader Code:
#version 330 core
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projMatrix;
in vec3 position;
in vec2 texCoord;
in vec4 colour;
out Vertex {
    vec2 texCoord;
    vec4 colour;
} OUT;
void main(void) {
    gl_Position = (projMatrix * viewMatrix * modelMatrix) * vec4(position, 1.0);
    OUT.texCoord = texCoord;
    OUT.colour = colour;
}
Fragment Shader Code:
#version 330 core
in Vertex {
    vec2 texCoord;
    vec4 colour;
} IN;
out vec4 color;
void main() {
    color = IN.colour;
    //color = vec4(1,1,1,1);
}
I always knew Intel drivers were bad, but this is ridiculous. The #version directive is NOT deprecated in GL 3.0. In fact it is more important than ever beginning with GL 3.2, because in addition to the number you can also specify core (default) or compatibility.
Nevertheless, that is not an actual error. It is an invalid warning, and having OpenGL debug output setup is why you keep seeing it. You can ignore it. AMD seems to be the only vendor that uses debug output in a useful way. NV almost never outputs anything, opting instead to crash... and Intel appears to be spouting nonsense.
It is possible that what the driver is really trying to tell you is that you have an OpenGL 3.0 context and you are using a GLSL 3.30 shader. If that is the case, this has got to be the stupidest way I have ever seen of doing that.
Have you tried #version 130 instead? If you do this, the interface blocks (e.g. in Vertex { ...) should generate parse errors, but it would at least rule out the only interpretation of this warning that makes any sense.
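For completeness, here is a sketch of what that #version 130 experiment would look like with the interface blocks replaced by plain varyings (names are illustrative):
Vertex shader:
#version 130
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projMatrix;
in vec3 position;
in vec2 texCoord;
in vec4 colour;
out vec2 vTexCoord;  // plain outs instead of the Vertex block
out vec4 vColour;
void main(void) {
    gl_Position = (projMatrix * viewMatrix * modelMatrix) * vec4(position, 1.0);
    vTexCoord = texCoord;
    vColour = colour;
}
Fragment shader:
#version 130
in vec2 vTexCoord;
in vec4 vColour;
out vec4 color;
void main() {
    color = vColour;
}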
There is another possibility that makes a lot more sense in the end. The debug output mentions that this is related to shader object #3. While there is no guarantee that shader names are assigned sequentially beginning with 1, this is usually the case. You have only shown two shaders here; #3 would imply a third shader that your software loaded.
Are you certain that these are the shaders causing the problem?

Error with a Simple vertex shader

I'm having a problem following this tutorial, The First Triangle. I actually managed to get the first part working, but when it comes to the vertex shader it doesn't work.
Here is my Vertex Shader Code:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
void main(){
    gl_Position.xyz = vertexPosition_modelspace;
    gl_Position.w = 1.0;
}
It's just a copy of the tutorial, but it gives me this error: must write to gl_Position.
I just don't know what to do now.
EDIT: I'm using a GeForce 9500GT with 319.32 drivers.
EDIT2: I also tried the same thing with an older GLSL version, but it gives the same error.
Here is the code:
#version 120
// Input vertex data, different for all executions of this shader.
attribute vec3 vertexPosition_modelspace;
void main(){
    gl_Position = vec4(vertexPosition_modelspace, 1.0);
}
EDIT3: I'm using SFML as my default library.
I came to realize that what I was doing was kind of wrong, thanks to those of you who helped me.
If anyone has this kind of problem, the best option is to try the library's (SFML's) native functions.
That's what I'm doing now, using this tutorial.
If your shader files contain more than one newline [0D0A] in succession, or if they use only 0D or only 0A as line endings, you will have a bad day.
GOOD ->
#version 330 core
in vec3 ourColor;
out vec4 color;
void main()
{
    color = vec4(ourColor, 1.0f);
}
BAD ->
#version 330 core

in vec3 ourColor;

out vec4 color;

void main()
{
    color = vec4(ourColor, 1.0f);
}

at least that is what worked for me...