Translate ARB assembly to GLSL?

I'm trying to translate some old OpenGL code to modern OpenGL. This code is reading data from a texture and displaying it. The fragment shader is currently created using ARB_fragment_program commands:
static const char *gl_shader_code =
"!!ARBfp1.0\n"
"TEX result.color, fragment.texcoord, texture[0], RECT; \n"
"END";
GLuint program_id;
glGenProgramsARB(1, &program_id);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, program_id);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, (GLsizei) strlen(gl_shader_code), (GLubyte *) gl_shader_code);
I'd simply like to translate this into GLSL code. I think the fragment shader should look something like this:
#version 430 core
uniform sampler2DRect s;
void main(void)
{
gl_FragColor = texture2DRect(s, ivec2(gl_FragCoord.xy), 0);
}
But I'm not sure of a couple of details:
Is this the right usage of texture2DRect?
Is this the right usage of gl_FragCoord?
The texture is being fed with a pixel buffer object using GL_PIXEL_UNPACK_BUFFER target.
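For reference, the upload path looks roughly like this (a minimal sketch; pbo_id, texture_id, width, height and pixels are placeholders for my actual objects, and I show GL_TEXTURE_2D here although the old code uses the rectangle target):
// Stream new pixel data through the PBO into the texture.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo_id);
glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, NULL, GL_STREAM_DRAW);
void *ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(ptr, pixels, width * height * 4);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glBindTexture(GL_TEXTURE_2D, texture_id);
// With a PBO bound to GL_PIXEL_UNPACK_BUFFER, the last argument is an
// offset into the buffer, not a client-memory pointer.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);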

I think you can just use the standard sampler2D instead of sampler2DRect (if you do not have a real need for rectangle textures) since, quoting the wiki, "From a modern perspective, they (rectangle textures) seem essentially useless."
You can then change your texture2DRect(...) to texture(...) or texelFetch(...) (to mimic your rectangle fetching).
Since you seem to be using OpenGL 4, you do not need to (and should not) use gl_FragColor; instead, declare an out variable and write to it.
Your fragment shader should look something like this in the end:
#version 430 core
uniform sampler2D s;
out vec4 out_color;
void main(void)
{
out_color = texelFetch(s, ivec2(gl_FragCoord.xy), 0);
}
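On the application side, the glGenProgramsARB/glProgramStringARB calls get replaced by the usual GLSL compile-and-link sequence. A minimal sketch (error checking omitted; note that, unlike ARB_fragment_program, a core-profile program also needs a vertex shader):
// fragment_source points at the GLSL string above.
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragment_source, NULL);
glCompileShader(fs);
GLuint program = glCreateProgram();
glAttachShader(program, fs);   // attach your vertex shader the same way
glLinkProgram(program);
glDeleteShader(fs);            // flagged for deletion once the program is linked
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "s"), 0);  // sampler "s" reads texture unit 0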

@Zouch, thank you very much for your response. I took it and worked on this for a bit. My final code was very similar to what you suggested. For the record, the final vertex and fragment shaders I implemented were as follows:
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
out vec2 UV;
uniform mat4 MVP;
void main()
{
gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
UV = vertexUV;
}
Fragment Shader:
#version 330 core
in vec2 UV;
out vec3 color;
uniform sampler2D myTextureSampler;
void main()
{
color = texture(myTextureSampler, UV).rgb;
}
That seemed to work.
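For completeness, the texture gets hooked up to the sampler by binding it to a texture unit and pointing myTextureSampler at that unit; roughly like this (texture_id and program_id are placeholders for my objects):
glUseProgram(program_id);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture_id);
// Tell the sampler uniform to read from texture unit 0.
glUniform1i(glGetUniformLocation(program_id, "myTextureSampler"), 0);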

Related

GLSL Passthrough Geometry Shader

I've been learning OpenGL (actually OpenTK) for a month or so, and I've started playing with geometry shaders. I originally wrote a much more complicated shader, which didn't work, so I've stripped everything down to a simple passthrough shader, which still doesn't work.
Vertex Shader:
#version 420 core
layout (location = 0) in vec3 position;
uniform mat4 transform;
void main()
{
gl_Position = transform * vec4(position, 1);
}
Geometry Shader:
#version 420 core
layout (triangles) in;
layout (triangle_strip) out;
void main()
{
int i;
for(i = 0; i < gl_in.length(); i++)
{
gl_Position = gl_in[i].gl_Position;
EmitVertex();
}
EndPrimitive();
}
Fragment Shader:
#version 420 core
layout (binding = 0) uniform sampler2D diffuse;
layout(location = 0) out vec4 color;
void main()
{
color = vec4(1, 1, 1, 1);
}
It should draw a white square in the bottom right of the window, and without the Geometry shader, it does, which is what I would expect. With the Geometry Shader, it renders nothing. Info logs show everything compiles and links fine.
What am I missing?
There must be a max_vertices declaration for the output. The number must be a compile-time constant, and it defines the maximum number of vertices that a single invocation of the GS will write. It may be no larger than the implementation-defined limit GL_MAX_GEOMETRY_OUTPUT_VERTICES.
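For this pass-through shader, which emits at most one triangle per invocation, declaring the limit on the output layout is enough, for example:
layout (triangle_strip, max_vertices = 3) out;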

Do I need to pass color through my geometry shader to the fragment shader?

So I have three shaders in my program.
Vertex:
#version 330 core
in vec2 Inpoint;
in vec2 texCoords;
out vec2 TexCoords;
uniform mat4 model;
uniform mat4 projection;
void main()
{
TexCoords = texCoords;
gl_Position = projection * model * vec4(Inpoint, 0.0, 1.0);
}
Geometry:
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 4) out;
void main()
{
int i;
for (i = 0; i < gl_in.length(); i++)
{
gl_Position = gl_in[i].gl_Position;
EmitVertex();
}
EndPrimitive();
}
And finally the fragment shader:
#version 330 core
in vec2 TexCoords;
out vec4 color;
uniform sampler2D image;
uniform vec3 spriteColor;
void main()
{
color = vec4(spriteColor, 1.0) * texture(image, TexCoords);
}
Now without the geometry shader, everything displays just fine. But as soon as I include the geometry shader, everything goes ... bad.
It acts like it's not getting coords for the textures.
So, the question is: does the geometry shader need to pass the data through itself to the fragment shader? I mean, the geometry shader is basically doing nothing, so it shouldn't. Unless there is some giant mistake I am missing.
I tried to add a pass-through, but it complains that everything needs to be an array, and even when I did make it an array it didn't quite work right.
Quoting the GLSL 3.30 spec, section 4.3.1 "Inputs":
Fragment shader inputs get per-fragment values, typically interpolated
from a previous stage's outputs
When a geometry shader is present, it is the previous stage. So yes, your fragment shader takes its inputs from the geometry shader and only from it.
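So the geometry shader has to copy the texture coordinates through explicitly. GS inputs are arrays, one element per vertex of the input primitive, which is the "everything needs to be an array" complaint you ran into. A sketch of the pass-through (TexCoordsGS is just an example name):
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in vec2 TexCoords[];   // one entry per input vertex, fed by the vertex shader
out vec2 TexCoordsGS;  // forwarded to the fragment shader
void main()
{
    for (int i = 0; i < gl_in.length(); i++)
    {
        gl_Position = gl_in[i].gl_Position;
        TexCoordsGS = TexCoords[i];   // copy the per-vertex value through
        EmitVertex();
    }
    EndPrimitive();
}
The fragment shader input then has to be renamed to match, i.e. in vec2 TexCoordsGS; instead of in vec2 TexCoords;.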

error C1105: cannot call a non-function

I used a shader which worked in another program (in the same environment, AFAIK) but which now fails to compile for some reason:
// Vertex Shader
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
out vec2 fragmentUV;
uniform mat4 ortho_matrix;
void main()
{
gl_Position = ortho_matrix * vec4(vertexPosition_modelspace, 1);
fragmentUV = vertexUV;
}
// Fragment Shader
#version 330 core
in vec2 fragmentUV;
uniform sampler2D texture;
out vec4 color;
void main()
{
color.rgba = texture(texture, fragmentUV).rgba;
}
It's a super basic shader, and now it starts to throw errors all of a sudden.
Windows 8.1
Nvidia GeForce 1080 (this one is new; maybe that's the problem?)
This is what's being output by Visual Studio; it echoes the fragment shader source, with error C1105 pointing at the texture lookup line:
color.rgba = texture(texture, fragmentUV).rgba;
I'm amazed this compiled at all in a different setting. You've named your texture the same as the function used to make texture lookups. You need to rename uniform sampler2D texture; to something else.
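For example, something like this should compile (diffuseTexture is just an example name; remember to update the matching glGetUniformLocation call on the application side):
in vec2 fragmentUV;
uniform sampler2D diffuseTexture;   // renamed so it no longer shadows texture()
out vec4 color;
void main()
{
    color = texture(diffuseTexture, fragmentUV);
}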

Simple GLSL Shader (Light) causes flickering

I'm trying to implement some basic lighting and shading following the tutorials over here and here.
Everything is more or less working but I get some kind of strange flickering on object surfaces due to the shading.
I have two images attached to show you guys how this problem looks.
I think the problem is related to the fact that I'm passing vertex coordinates from the vertex shader to the fragment shader to compute some lighting variables, as stated in the tutorials linked above.
Here is some source code (unrelated code stripped out).
Vertex Shader:
#version 150 core
in vec4 pos;
in vec4 in_col;
in vec2 in_uv;
in vec4 in_norm;
uniform mat4 model_view_projection;
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
void main(void) {
gl_Position = model_view_projection * pos;
out_col = in_col;
out_vert = pos;
out_norm = in_norm;
passed_uv = in_uv;
}
and Fragment Shader:
#version 150 core
uniform sampler2D tex;
uniform mat4 model_mat;
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
out vec4 col;
void main(void) {
mat3 norm_mat = mat3(transpose(inverse(model_mat)));
vec3 norm = normalize(norm_mat * vec3(in_norm));
vec3 light_pos = vec3(0.0, 6.0, 0.0);
vec4 light_col = vec4(1.0, 0.8, 0.8, 1.0);
vec3 col_pos = vec3(model_mat * vert_pos);
vec3 s_to_f = light_pos - col_pos;
float brightness = dot(norm, normalize(s_to_f));
brightness = clamp(brightness, 0, 1);
gl_FragColor = out_col;
gl_FragColor = vec4(brightness * light_col.rgb * gl_FragColor.rgb, 1.0);
}
As I said earlier, I guess the problem has to do with the way the vertex position is passed to the fragment shader. If I change the position values to something static, no more flickering occurs.
I changed all the other values to statics too, with the same result: no flickering as long as I am not using the vertex position data passed from the vertex shader.
So, if there is someone out there with some GL-wisdom .. ;)
Any help would be appreciated.
Side note: running all this stuff on an Intel HD 4000 if that may provide further information.
Thanks in advance!
Ivan
The names of the out variables in the vertex shader and the in variables in the fragment shader need to match. You have this in the vertex shader:
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
and this in the fragment shader:
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
These variables are associated by name, not by order. Except for passed_uv, the names do not match here. For example, you could use these declarations in the vertex shader:
out vec4 passed_col;
out vec2 passed_uv;
out vec4 passed_vert;
out vec4 passed_norm;
and these in the fragment shader:
in vec4 passed_col;
in vec2 passed_uv;
in vec4 passed_vert;
in vec4 passed_norm;
Based on the way I read the spec, your shader program should actually fail to link. At least in the GLSL 4.50 spec, in the table on page 43, it lists "Link-Time Error" for this situation. The rules seem somewhat ambiguous in earlier specs, though.
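If keeping the names in sync by hand proves error-prone, an interface block (available since GLSL 1.50, so usable with your #version 150 shaders) makes the pairing explicit: the block declaration must match across stages, while the instance names may differ. A sketch:
// Vertex shader:
out VertexData {
    vec4 col;
    vec2 uv;
    vec4 vert;
    vec4 norm;
} vs_out;   // write vs_out.col, vs_out.uv, ... in main()
// Fragment shader:
in VertexData {
    vec4 col;
    vec2 uv;
    vec4 vert;
    vec4 norm;
} fs_in;    // read fs_in.col, fs_in.uv, ...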

GLSL UV (vec2) coords Optimised-out

I'm writing an application using OpenGL 4.3 and GLSL, and I need the shader to do basic UV mapping. The problem is that the GLSL compiler seems to be optimising out the UV coordinates: I cannot access them from the application side of things.
Vertex shader:
#version 330 core
uniform mat4 projection;
layout (location = 0) in vec4 position;
layout (location = 1) in vec2 uvCoord;
out vec2 texCoord;
void main(void)
{
texCoord = uvCoord;
gl_Position = position;
}
Fragment shader:
#version 330 core
in vec2 texCoord;
out vec4 color;
uniform sampler2D tex;
void main(void)
{
color = texture2D(tex, texCoord);
}
Both the vertex and fragment shaders compile and link without errors, but when I query the attribute locations using the following code:
GLint effectPositionLocation = glGetAttribLocation(effect->getEffect(), "position");
GLint effectUVLocation = glGetAttribLocation(effect->getEffect(), "uvCoord");
I get 0 for the position and -1 for the uvCoord, so I can only assume that uvCoord has been optimised out, even though I am using it to pass data from the vertex shader to the fragment shader.
The result is that the geometry is displayed but only in black, no texture mapping.
I have written similar applications in Direct3D and HLSL with no problem of attributes being optimised out. I'm thinking it is something simple that I am forgetting or not doing, but I have not found out what.
Replace the 'texture2D' with 'texture', and your attribute will be used.
Bad GLSL compiler: it should not have compiled your shader, since texture2D is not available in the core profile.
EDIT: You may have forgotten to call glEnableVertexAttribArray(1); after setting up your glVertexAttribPointer calls.
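That is, the setup for the two attributes should look something like this (the interleaved layout and offsets here are illustrative; adjust to your vertex format):
// position at location 0: 4 floats; uvCoord at location 1: 2 floats, interleaved.
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void *)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void *)(4 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);   // without this, uvCoord never receives data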