Corrupt Vertex and Fragment Shader - c++

I started picking up OpenGL by using http://www.opengl-tutorial.org.
It uses the following code to load and compile shaders (linked, because the code is too long for this post, and I think the problem is with my shaders).
When running this code, it prints out "ERROR: Compiled Vertex Shader is corrupt" and "ERROR: Compiled Fragment Shader is corrupt". My shaders are as follows.
Vertex Shader
#version 330
layout (location = 0) in vec3 position;
void main()
{
    gl_Position.xyz = position;
    gl_Position.w = 1.0;
}
Fragment shader
#version 330 core
out vec3 color;
void main(){
    color = vec3(1,0,0);
}
I'm using Xcode 5.1.1, OpenGL 3.3, and GLSL 3.30.
It would be awesome if you could help me get past this point. I got stuck on a YouTube tutorial that didn't use VAOs, so I went off to learn these things myself so I could rewrite the tutorial in my own code.
Thanks in advance

This is a common problem with Xcode and is usually caused by text-encoding issues or improperly null-terminated strings; there may be non-printing characters at the end of both shader sources.
You can inspect the VertexShaderCode and FragmentShaderCode strings in a debugger to see whether they contain stray characters or are missing the null terminator.
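If you'd rather check programmatically, here's a minimal sketch, assuming the tutorial's loader stores the sources in std::strings named VertexShaderCode and FragmentShaderCode (adjust the names to your own code):

#include <cctype>
#include <cstdio>
#include <string>

// Print the index and value of any byte that is neither printable ASCII
// nor ordinary whitespace -- a UTF-8 BOM or a stray control character at
// the start or end of the file will show up here.
void checkShaderSource(const std::string& source, const char* label)
{
    for (std::size_t i = 0; i < source.size(); ++i)
    {
        unsigned char c = static_cast<unsigned char>(source[i]);
        if (!std::isprint(c) && c != '\n' && c != '\r' && c != '\t')
            std::printf("%s: suspicious byte 0x%02X at index %zu\n", label, c, i);
    }
}

// Usage, after the tutorial's file-reading code:
// checkShaderSource(VertexShaderCode, "vertex");
// checkShaderSource(FragmentShaderCode, "fragment");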
I found people running into the same errors here and here.
To fix them, open your GLSL files in TextEdit, TextMate, or Sublime Text (any basic plain-text editor), convert them to plain text, and save them as new files.

Related

Alternative to gl_TexCoord.xy to get texture coordinate

I've always written my shaders in GLSL 3 (with the #version 330 line), but that's starting to get pretty old, so I recently tried to write a shader in GLSL 4 and use it with the SFML library for rendering instead of pure OpenGL.
For now, my goal is a basic shader for a 2D game that takes the color of each pixel of a texture and modifies it. I always did that with gl_TexCoord[0].xy, but it seems to be deprecated now, so I searched around and read that I should use in and out variables together with a vertex shader, so I tried that.
 
Fragment shader
#version 400
in vec2 fragCoord;
out vec4 fragColor;
uniform sampler2D image;
void main(){
    // Get the color
    vec4 color = texture( image, fragCoord );
    /*
     * Do things with the color
     */
    // Return the color
    fragColor = color;
}
 
Vertex shader
#version 400
in vec3 position;
in vec2 textureCoord;
out vec2 fragCoord;
void main(){
    // Set the position of the pixel and vertex (I guess)
    fragCoord = textureCoord;
    gl_Position = vec4( position, 1.0 );
}
I've also seen that you can add the projection, model, and view matrices, but I don't know how to do that with SFML (I'm not even sure you can), and I don't want to learn a lot of complex OpenGL or SFML just to change some colors in a 2D game. So here is my question:
Is there an easy way to just get the coordinates of the pixel we're working on? Maybe get rid of the vertex shader, or use it without using matrices?
Unless you really want to learn a lot of nasty OpenGL, writing your own shaders just for textures is a little overkill. SFML can handle textures and shaders for you behind the scenes (here is a good article on how to use them), so you don't need to worry about shaders at all. Also note that you can change the color of SFML sprites (which is, I believe, what you are trying to do) with sprite.setColor(sf::Color(...));. Plus, there's no problem with using version 330; that's what I usually use, albeit with the in and out variables as well.
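For example, a rough sketch of both approaches (the asset and shader file names here are made up, and setParameter is the SFML 2.0-2.3 API; newer versions use setUniform instead):

#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "Shader test");

    sf::Texture texture;
    texture.loadFromFile("image.png");          // hypothetical asset
    sf::Sprite sprite(texture);

    // 1) No shader at all: just tint the sprite.
    sprite.setColor(sf::Color(255, 128, 128));

    // 2) A fragment-only shader; SFML takes care of the vertex side for you.
    sf::Shader shader;
    shader.loadFromFile("color.frag", sf::Shader::Fragment);   // hypothetical file
    shader.setParameter("image", sf::Shader::CurrentTexture);

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear();
        window.draw(sprite, &shader);  // or window.draw(sprite) for the tint only
        window.display();
    }
}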
If you really want to use your own shaders for fancy effects like pixelation, blurring, etc., I can't help you much, since I've only ever worked with pure OpenGL and I don't know how SFML handles the vertex information, but this is some interesting example code you can check out, here is a tutorial, and here is a reference.
To answer your question more directly: gl_FragCoord is a GLSL built-in variable that holds the fragment's window-space position, but you still have to set gl_Position in the vertex shader. You can't get rid of the vertex shader if you are doing anything OpenGL related. You'd have to do the matrix math yourself (this is a wonderful library) and probably set up buffers (like this) to tell OpenGL where everything is.
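To illustrate, a fragment-only sketch that uses gl_FragCoord instead of an interpolated texture coordinate (the resolution uniform is an assumption you would have to set from the application, and this mapping only lines up with the texture for a full-screen pass):

// Hypothetical shader source, held here as a C++ string constant.
const char* kFragCoordShader = R"glsl(
    #version 400
    uniform sampler2D image;
    uniform vec2 resolution;   // window size in pixels, set by the application
    out vec4 fragColor;
    void main()
    {
        vec2 uv = gl_FragCoord.xy / resolution;  // window position -> [0,1]
        fragColor = texture(image, uv);
    }
)glsl";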

Shader link error after installing latest NVidia Quadro driver (311.35)

I just installed the latest NVIDIA driver for Quadro 4000 cards. Since then, linking any of my shaders fails with a shader link error.
It is worth noting that I am using OpenGL 4.2 with separate shader objects. My OS is Windows 7 64-bit.
Before the update I had version 309.x of the driver and everything worked fine.
I have now rolled back to version 295.x and it works again.
Does anyone know anything about this? Could it be a driver bug? If so, what can be done about it?
Here is a simple pass-through vertex shader that fails:
#version 420 core
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvs;
layout(location = 2) in vec3 normal;
smooth out vec2 uvsOut;
void main()
{
    uvsOut = uvs;
    gl_Position = position;
}
Another question: is it possible that NVIDIA tightened the shader version semantics rules? I mean, I am using the OpenGL compatibility profile but mark the GLSL source with #version 420 core. Could that be the problem?
Update:
Some more info from Program Info Log:
error C7592: ARB_separate_shader_objects requrires built-in block gl_PerVertex to be redeclared before accesing its members.
Yeah, the driver writers have typos too ("accesing") ;)
Now, I actually solved the linking error by adding this:
out gl_PerVertex
{
    vec4 gl_Position;
};
It is strange that previous drivers didn't enforce the redeclaration of the gl_PerVertex block. Now, while this addition solved the linking issue, it opened another one where some of the varyings don't work. For example, in the vertex shader I have:
out vec4 diffuseOut;
And in fragment shader:
in vec4 diffuseOut;
Then
OUTPUT = diffuseOut; // returns black while red is expected
Update 2:
OK, now it becomes clear: the new drivers are stricter about shader input/output variables. With the older driver I could declare several "out" variables in a vertex shader without declaring their matching "in" variables in the fragment shader, and it worked. Now it seems I am forced to have an exact match between the declared "in"s and "out"s in the vertex and fragment programs. Strangely, no errors are thrown, but the declared "in"s end up empty on the receiving side.
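To illustrate what the stricter driver appears to expect, here is a minimal separate-shader-objects sketch with the redeclared gl_PerVertex block and an exactly matched out/in pair (the sources and names are illustrative, not production code):

#include <GL/glew.h>

// Vertex stage: redeclares gl_PerVertex (required by ARB_separate_shader_objects)
// and declares an output that the fragment stage matches exactly.
static const char* kVertSrc = R"glsl(
    #version 420 core
    layout(location = 0) in vec4 position;
    out gl_PerVertex { vec4 gl_Position; };
    out vec4 diffuseOut;
    void main()
    {
        diffuseOut  = vec4(1.0, 0.0, 0.0, 1.0);
        gl_Position = position;
    }
)glsl";

// Fragment stage: the "in" declaration matches the vertex stage's "out".
static const char* kFragSrc = R"glsl(
    #version 420 core
    in vec4 diffuseOut;
    layout(location = 0) out vec4 OUTPUT;
    void main()
    {
        OUTPUT = diffuseOut;  // red, as expected, once the interfaces match
    }
)glsl";

// Build two separable programs and plug them into a pipeline.
GLuint makePipeline()
{
    GLuint vert = glCreateShaderProgramv(GL_VERTEX_SHADER,   1, &kVertSrc);
    GLuint frag = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &kFragSrc);

    GLuint pipeline = 0;
    glGenProgramPipelines(1, &pipeline);
    glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT,   vert);
    glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, frag);
    return pipeline;
}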

OpenGLSL error while compiling fragment shader using UBOs

I am trying to get UBOs working, however I get a compilation error in the fragment shader:
ERROR 0:5: "(": syntax error.
Fragment Shader:
layout(std140) uniform Colors
{
    vec3 SCol;
    vec3 WCol;
    float DCool;
    float DWarm;
} colors;
Where am I going wrong?
At the beginning of your fragment shader source file (the very first line), put this:
#version 140
This tells the GLSL compiler that you are using version 1.40 of the shading language (you can, of course, use a higher version; see Wikipedia for details).
Alternatively, if your OpenGL driver (and/or hardware) doesn't support GLSL 1.40 fully (which is part of OpenGL 3.1), but only GLSL 1.30 (OpenGL 3.0), you can try the following:
#version 130
#extension GL_ARB_uniform_buffer_object : require
However, this one will work only if your OpenGL 3.0 driver supports the GL_ARB_uniform_buffer_object extension.
Hope this helps.
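In case it helps, here is a rough sketch of the application-side setup that feeds a std140 block like Colors, assuming your linked program object is called program (note the padding std140 requires after each vec3):

#include <GL/glew.h>

// CPU-side mirror of the std140 block: a vec3 is aligned to 16 bytes.
struct ColorsBlock
{
    float SCol[3]; float pad0;
    float WCol[3];
    float DCool;
    float DWarm;
};

void setupColorsUBO(GLuint program)
{
    ColorsBlock data = { {1.0f, 0.9f, 0.8f}, 0.0f,
                         {0.4f, 0.4f, 0.7f},
                         0.25f,
                         0.75f };

    // Create and fill the uniform buffer.
    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(data), &data, GL_STATIC_DRAW);

    // Attach the buffer to binding point 0 and point the "Colors" block at it.
    GLuint blockIndex = glGetUniformBlockIndex(program, "Colors");
    glUniformBlockBinding(program, blockIndex, 0);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
}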

Names of `out` variables in a fragment shader

I'm having some trouble understanding one line in the most basic (flat) shader example while reading the OpenGL SuperBible.
In Chapter 6, Listings 6.4 and 6.5 introduce the following two very basic shaders.
6.4 Vertex Shader:
// Flat Shader
// Vertex Shader
// Richard S. Wright Jr.
// OpenGL SuperBible
#version 130
// Transformation Matrix
uniform mat4 mvpMatrix;
// Incoming per vertex
in vec4 vVertex;
void main(void)
{
    // This is pretty much it, transform the geometry
    gl_Position = mvpMatrix * vVertex;
}
6.5 Fragment Shader:
// Flat Shader
// Fragment Shader
// Richard S. Wright Jr.
// OpenGL SuperBible
#version 130
// Make geometry solid
uniform vec4 vColorValue;
// Output fragment color
out vec4 vFragColor;
void main(void)
{
    gl_FragColor = vColorValue;
}
My confusion is that it says vFragColor in the out declaration while saying gl_FragColor in main().
On the other hand, in the code from the book's website, it has been corrected to 'vFragColor = vColorValue;' in main().
My question is: other than this being a typo in the book, what is the rule for naming the out values of shaders? Do they have to follow specific names?
On OpenGL.org I found that gl_Position is required as an output of the vertex shader. Is there anything similar for the fragment shader? Or is it just that if there is only one output, it will be the color written to the buffer?
What happens when there is more than one out in a fragment shader? How does the GLSL compiler know which one goes to which buffer?
As stated in the GLSL 1.30 specification, the use of gl_FragColor in the fragment shader is deprecated. Instead, you should use a user-defined output variable like the vFragColor variable declared in your fragment shader. As you said, it's a typo.
What is the rule for naming out values of shaders?
The variable name can be anything you like, as long as it doesn't collide with a reserved name (anything beginning with gl_, for example).
What happens when there is more then one out of a fragment shader? How does the GLSL compiler know which one to use in the buffer?
When there is more than one out in the fragment shader, you should assign color numbers (slots) to the fragment shader outputs by calling glBindFragDataLocation before linking. You can then say which slot renders to which render target by calling glDrawBuffers.
The specification states that if you have only one output variable defined in the fragment shader, it is assigned to location 0 and output 0. For more information, I recommend taking a look at the specification yourself.
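As a rough sketch of what that looks like on the application side (the second output name and the attachment order are made up for illustration):

#include <GL/glew.h>

// Map user-defined fragment outputs to color numbers *before* linking,
// then tell GL which draw buffer each color number feeds.
void setupOutputs(GLuint program)
{
    glBindFragDataLocation(program, 0, "vFragColor");   // color number 0
    glBindFragDataLocation(program, 1, "vBrightColor"); // color number 1 (hypothetical)
    glLinkProgram(program);

    // With a framebuffer bound: color 0 -> attachment 0, color 1 -> attachment 1.
    const GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, buffers);
}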
gl_FragColor was the original output variable in early versions of GLSL. This was the color of the fragment that was to be drawn.
Your initial confusion is justified, as there's no reason to declare that out variable and then write to gl_FragColor.
In later versions it became customizable, such that you could give arbitrary names to your output variables. You can map these arbitrary outputs to specific buffers with the command glBindFragDataLocation.
I'm not 100% positive, but I believe if you don't call this function before linking, then your output variables will be randomly assigned to buffers. If you only have one output, then it should always be assigned to buffer 0.

Does RenderMonkey have a bug in TEXCOORD stream mapping for GLSL?

For clarity, I start with my question:
Is it possible to use (in the shader code) the custom attribute name that I set for the TEXCOORD usage in the (OpenGL) stream mapping in RenderMonkey 1.82, or do I have to use gl_MultiTexCoord0?
(The question may apply to the NORMAL usage too, i.e. custom name vs. gl_Normal.)
Background:
Using RenderMonkey version 1.82.
I have successfully used the stream mapping to map the general vertex attribute "position" (and maybe "normal"), but the texture coordinates do not seem to be forwarded correctly.
For the shader code, I use #version 330 and the "in" qualifier in GLSL, which should be OK since RM does not compile the shaders itself (the OpenGL driver does).
I have tried both .obj and .3ds files (exported from Blender), and when checking the Wavefront .obj file, all the texture coordinate information is there, as well as the vertex positions and normals.
If it is not possible, the stream mapping is broken and there is no point in naming the variables in the stream mapping editor (except for the vertex position stream, which works), since one has to use the built-in variables anyway.
Update:
If using the deprecated built-in variables, one has to use compatibility mode in the shader, e.g.
#version 330 compatibility
out vec2 vTexCoord;
and, in the main function:
vTexCoord = vec2(gl_MultiTexCoord0);
(Now I'm not sure about the stream mapping of normals either. As soon as I got the texture coordinates working, I ran into problems with the normals and had to revert to gl_Normal.)
Here is a picture of a working solution, but with built-in variables (and yes, the commented texcoord variable in the picture does not have the same name as in the stream mapping dialog, but it had the same name when I tried to use it, so it's OK.):
You could try to use the generic vertex attributes; see http://open.gl, it's a great tutorial ;)
(but I think that implies you'd have to rewrite the code to handle the transformations manually...)
#version 330
layout(location = 0) in vec3 bla_bla_bla_Vertex;
layout(location = 2) in vec3 bla_bla_bla_Normal;
layout(location = 8) in vec3 bla_bla_bla_TexCoord0;
This is a working solution for RM 1.82.