I'm having a problem following the tutorial "The First Triangle". I managed to get the first part working, but when it comes to the vertex shader it doesn't work.
Here is my Vertex Shader Code:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
void main(){
gl_Position.xyz = vertexPosition_modelspace;
gl_Position.w = 1.0;
}
It's just a copy of the tutorial's shader, but it gives me this error: "must write to gl_Position".
Just don't know what to do now.
EDIT: I'm using a GeForce 9500GT with 319.32 Drivers
EDIT2: I tried the equivalent shader in an older GLSL version, but it gives the same error.
Here is the code:
#version 120
// Input vertex data, different for all executions of this shader.
attribute vec3 vertexPosition_modelspace;
void main(){
gl_Position = vec4(vertexPosition_modelspace, 1.0);
}
EDIT3: I'm using SFML as my default library.
I came to realize that what I was doing was somewhat wrong, thanks to those who helped me.
If anyone has this kind of problem, the best option is to try the library's (SFML) native functions.
That's what I'm doing now using this tutorial.
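For anyone curious, here is a minimal sketch of what "using SFML's native functions" looks like; the file names are placeholders, and SFML handles the compiling and linking itself:
#include <SFML/Graphics.hpp>
void useSfmlShader()
{
    // Let SFML read, compile and link the shaders; no manual glShaderSource calls.
    sf::Shader shader;
    if (!shader.loadFromFile("vertex.glsl", "fragment.glsl"))
    {
        // On failure SFML prints the compile/link log to sf::err().
        return;
    }
    sf::Shader::bind(&shader);   // activate for custom OpenGL drawing
    // ... issue draw calls here ...
    sf::Shader::bind(nullptr);   // deactivate again
}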
If your shader files have more than one newline (0x0D 0x0A) in succession, or if they use only 0x0D or only 0x0A line endings, you will have a bad day.
GOOD ->
#version 330 core
in vec3 ourColor;
out vec4 color;
void main()
{
color = vec4(ourColor, 1.0f);
}
BAD -> (extra blank line after every line)
#version 330 core

in vec3 ourColor;

out vec4 color;

void main()

{

color = vec4(ourColor, 1.0f);

}
at least that is what worked for me...
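As a minimal sketch of working around that, here is a hypothetical loader that strips carriage returns before handing the source to glShaderSource (the file name and helper name are made up for illustration):
#include <fstream>
#include <sstream>
#include <string>
// Read a shader file and drop every 0x0D byte so the driver only ever sees
// plain 0x0A line endings.
std::string loadShaderSource(const std::string& path)
{
    std::ifstream file(path, std::ios::binary);
    std::stringstream buffer;
    buffer << file.rdbuf();
    std::string cleaned;
    for (char c : buffer.str())
    {
        if (c != '\r')
            cleaned.push_back(c);
    }
    return cleaned;
}
// Usage:
//   std::string src = loadShaderSource("shader.vert");
//   const char* ptr = src.c_str();
//   glShaderSource(shaderId, 1, &ptr, nullptr);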
I have never had any problems passing variables from vertex shader to fragment shader. But today, I added a new "out" variable in the vs, and a corresponding "in" variable in the fs. GLSL says the following:
Shader Program: The fragment shader uses varying tbn, but previous shader does not write to it.
Just to confirm, here's the relevant part of the VS:
#version 330 core
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uv;
// plus other layout & uniform inputs here
out DATA
{
vec2 uv;
vec3 tangentViewDir;
mat3 tbn;
} vs_out;
void main()
{
vs_out.uv = uv;
vs_out.tangentViewDir = vec3(1.0);
vs_out.tbn = mat3(1.0);
gl_Position = sys_ProjectionViewMatrix * sys_ModelMatrix * position;
}
And in the FS, it is declared as:
in DATA
{
vec2 uv;
vec3 tangentViewDir;
mat3 tbn;
} fs_in;
Interestingly, all the other varyings, like "uv", work, and they are declared the same way.
Also interesting: Even though GLSL says the variable isn't written to - it still recognizes the changes when I write to it, and displays those changes.
So is it just a false warning or a bug? Even though it tells me otherwise, the value seems to be passed correctly. Why do I receive this warning?
HolvBlackCat pointed me in the right direction - it was indeed a shader mismatch!
I had two shader programs with the same FS in both but different VSs, and I had forgotten to update the outputs of the second VS to match the output layout of the first, so that both would work with the same FS!
Ouch. Now that I've run into this error once, lesson learnt.
Thank you HolvBlackCat!
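For anyone landing here with the same warning, a minimal sketch of what "matching" means in practice (shader sources shortened and stored as C++ raw string literals; the names are only illustrative): every vertex shader that gets linked with this fragment shader must declare the identical interface block and write every member of it.
// First vertex shader: declares and fully writes the DATA block.
const char* vertex_shader_a = R"glsl(
#version 330 core
layout(location = 0) in vec4 position;
out DATA { vec2 uv; vec3 tangentViewDir; mat3 tbn; } vs_out;
void main()
{
    vs_out.uv = vec2(0.0);
    vs_out.tangentViewDir = vec3(1.0);
    vs_out.tbn = mat3(1.0);   // every member must be written in EVERY VS
    gl_Position = position;
}
)glsl";
// A second vertex shader, linked into another program with the SAME fragment
// shader, must carry the same out DATA block and also write vs_out.tbn,
// otherwise the "previous shader does not write to it" warning appears.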
I'm trying to translate some old OpenGL code to modern OpenGL. This code is reading data from a texture and displaying it. The fragment shader is currently created using ARB_fragment_program commands:
static const char *gl_shader_code =
"!!ARBfp1.0\n"
"TEX result.color, fragment.texcoord, texture[0], RECT; \n"
"END";
GLuint program_id;
glGenProgramsARB(1, &program_id);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, program_id);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, (GLsizei) strlen(gl_shader_code ), (GLubyte *) gl_shader_code );
I'd simply like to translate this into GLSL code. I think the fragment shader should look something like this:
#version 430 core
uniform sampler2DRect s;
void main(void)
{
gl_FragColor = texture2DRect(s, ivec2(gl_FragCoord.xy), 0);
}
But I'm not sure of a couple of details:
Is this the right usage of texture2DRect?
Is this the right usage of gl_FragCoord?
The texture is being fed with a pixel buffer object using GL_PIXEL_UNPACK_BUFFER target.
I think you can just use the standard sampler2D instead of sampler2DRect (if you do not have a real need for it) since, quoting the wiki, "From a modern perspective, they (rectangle textures) seem essentially useless.".
You can then change your texture2DRect(...) to texture(...) or texelFetch(...) (to mimic your rectangle fetching).
Since you seem to be using OpenGL 4, you do not need to (should not ?) use gl_FragColor but instead declare an out variable and write to it.
Your fragment shader should look something like this in the end:
#version 430 core
uniform sampler2D s;
out vec4 out_color;
void main(void)
{
out_color = texelFetch(s, ivec2(gl_FragCoord.xy), 0);
}
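On the application side, a rough sketch of the plumbing that shader expects (the program and texture handles, and the choice of texture unit 0, are assumptions for illustration):
// 'program' is the linked GLSL program and 'tex' the texture the PBO uploads into.
void bindProgramAndTexture(GLuint program, GLuint tex)
{
    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex);
    glUniform1i(glGetUniformLocation(program, "s"), 0);  // sampler "s" reads unit 0
}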
@Zouch, thank you very much for your response. I took it and worked on this for a bit. My final code was very similar to what you suggested. For the record, the final vertex and fragment shaders I implemented were as follows:
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
out vec2 UV;
uniform mat4 MVP;
void main()
{
gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
UV = vertexUV;
}
Fragment Shader:
#version 330 core
in vec2 UV;
out vec3 color;
uniform sampler2D myTextureSampler;
void main()
{
color = texture(myTextureSampler, UV).rgb;
}
That seemed to work.
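To tie those shaders together, a minimal host-side sketch of feeding them per frame (the handle names and the plain float[16] MVP are assumptions; any column-major matrix storage works):
// 'program', 'texture' and 'mvp' are assumed to come from earlier setup code.
void drawTexturedMesh(GLuint program, GLuint texture, const GLfloat mvp[16])
{
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "MVP"),
                       1, GL_FALSE, mvp);   // one column-major matrix
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture);
    glUniform1i(glGetUniformLocation(program, "myTextureSampler"), 0);
    // ... bind the VAO and issue glDrawArrays / glDrawElements here ...
}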
I'm writing an application using OpenGL 4.3 and GLSL, and I need the shader to do basic UV mapping. The problem is that the GLSL compiler seems to be optimising out the UV coordinates; I cannot access them from the application side of things.
Vertex shader:
#version 330 core
uniform mat4 projection;
layout (location = 0) in vec4 position;
layout (location = 1) in vec2 uvCoord;
out vec2 texCoord;
void main(void)
{
texCoord = uvCoord;
gl_Position = position;
}
Fragment shader:
#version 330 core
in vec2 texCoord;
out vec4 color;
uniform sampler2D tex;
void main(void)
{
color = texture2D(tex, texCoord);
}
Both the vertex and fragment shaders compile and link without errors, but when I query the attribute locations with the following code:
GLint effectPositionLocation = glGetAttribLocation(effect->getEffect(), "position");
GLint effectUVLocation = glGetAttribLocation(effect->getEffect(), "uvCoord");
I get 0 for position and -1 for uvCoord, so I can only assume that uvCoord has been optimised out, even though I use it to pass data from the vertex shader to the fragment shader.
The result is that the geometry is displayed but only in black, no texture mapping.
I have written similar applications in Direct3D and HLSL with no problem of attributes being optimised out. I'm thinking it is something simple that I am forgetting or not doing, but I have not found out what.
Replace 'texture2D' with 'texture', and your attribute will be used.
Bad GLSL compiler: it should not have compiled your shader, since texture2D is not available in the core profile.
EDIT: You may have forgotten to call glEnableVertexAttribArray(1); after setting your glVertexAttribPointers.
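For completeness, a rough sketch of the attribute setup the EDIT refers to; the interleaved position+UV layout is an assumption for illustration:
// Assumes the relevant VAO and VBO are already bound and each vertex is
// 3 position floats followed by 2 UV floats.
void setupVertexAttributes()
{
    const GLsizei stride = 5 * sizeof(GLfloat);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
    glEnableVertexAttribArray(0);   // location 0: position
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride,
                          (void*)(3 * sizeof(GLfloat)));
    glEnableVertexAttribArray(1);   // location 1: uvCoord - easy to forget
}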
I am reading the OpenGL Redbook, 8th Edition, but I can't build the example from Chapter 3, "Drawing Commands Example". The authors use their own library, vmath.h, in the example, but it doesn't work: they forgot to add the function vmath::translation(GLfloat, GLfloat, GLfloat) to the library, although they use it. They also use their own header, vapp.h, which defines its class through a lot of macros. I'm really confused.
Instead of their library, I used the "Eigen" library for linear algebra.
Here is my code on GitHub
I compiled and ran the program. It works, but I see a black window when I should see four triangles. What did I do wrong?
P.S. I redid the authors' program using the "Eigen" library for the matrices and vertices, and I still see only a black screen. Why?! Here is the code on GitHub
I have two shaders:
vertex shader:
#version 400 core
uniform mat4 model_matrix;
uniform mat4 projection_matrix;
layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;
out vec4 vs_fs_color;
void main(void)
{
vs_fs_color = color;
gl_Position = projection_matrix * (model_matrix * position);
}
And fragment shader:
#version 400 core
in vec4 vs_fs_color;
layout (location = 0) out vec4 color;
void main(void)
{
color = vs_fs_color;
}
I use exactly these shaders.
Here is what I should see.
This is the original project (MSVC++)
These are the include files (including vapp.h and vmath.h)
When you're setting up your model_matrix, the second argument to glUniformMatrix4fv should be 1, not 4. Also, you are using the wrong indices in frustum: change result(2, 0) to result(0, 2), and do the same for all the other pairs.
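As a rough sketch of the corrected call (assuming the uniform location was queried earlier and the matrix is an Eigen::Matrix4f, which is column-major by default):
#include <Eigen/Dense>
// Upload a single 4x4 model matrix; Eigen already stores it column-major.
void uploadModelMatrix(GLint model_matrix_loc, const Eigen::Matrix4f& model)
{
    glUniformMatrix4fv(model_matrix_loc,
                       1,          // count: one matrix, not 4
                       GL_FALSE,   // no transpose needed
                       model.data());
}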
What do the default vertex, fragment and geometry GLSL shaders look like for version #330?
I'll be using #version 330 (GLSL 3.30, NVIDIA via Cg compiler), because that is what my graphics card supports.
By default shaders, I mean shaders that do the exact same thing the graphics card does when the shader program is turned off.
I can't find a good example for #version 330; I've been googling all day. I'm not sure whether the term "default shader" goes by another name, like "trivial" or "basic", and whether that is why I can't find it.
Any recommendations for a book with version 330 or link to an easy beginner tutorial with version 330 would be great as well.
An example of a trivial vertex shader in #version 110 that does the default vertex transformation:
#version 110
void main()
{
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
}
An example of a trivial fragment shader in #version 110 that colors everything red:
#version 110
void main()
{
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
There are no "default" shaders in OpenGL. It looks like what you want is a very simple example of a shader that transforms vertices to clip space and gives them a color, so here you go:
Vertex shader:
#version 330
layout(location = 0)in vec4 vert;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main()
{
gl_Position = projection * view * model * vert;
}
Fragment shader:
#version 330
out vec4 fragColor;
void main()
{
fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
The core OpenGL 3.3 profile drops support for a lot of old fixed-function things like the matrix stack. You are expected to handle your own matrices and send them to your shaders. There is no ftransform, and gl_Position is pretty much the only valid gl_* variable.
While glBindAttribLocation is not deprecated, the preferred method of defining the location of vertex attributes is through "layout(location = x)" in GLSL.
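For example, these are the two equivalent ways of pinning an attribute to location 0 (a sketch; "vert" matches the vertex shader above, and 'program' is an already-created program object):
void pinVertexAttribute(GLuint program)
{
    // Option 1 (preferred): fix the location in the GLSL itself:
    //     layout(location = 0) in vec4 vert;
    // Option 2: fix it from the application side before linking:
    glBindAttribLocation(program, 0, "vert");
    glLinkProgram(program);
}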
In the vertex shader, "attribute" is now "in" and "varying" is now "out". In the fragment shader, "varying" is now "in" and "gl_FragColor" is defined by an "out" variable. I believe that gl_FragColor is still valid, but now it's possible to use an out variable to define the color.
This tutorial is very good and teaches core OpenGL and GLSL 3.30; I would recommend you use it to help you learn more about GLSL. Also remember that the GLSL Reference Pages are your friend.