OpenGL Camera Movement - Shader vs. Primitive Rendering

In my OpenGL application, I am using gluLookAt() for transforming my camera. I then have two different render functions; one uses primitive rendering (glBegin()/glEnd()) to render a triangle.
glBegin(GL_TRIANGLES);
glVertex3f(0.25, -0.25, 0.5);
glVertex3f(-0.25, -0.25, 0.5);
glVertex3f(0.25, 0.25, 0.5);
glEnd();
The second rendering function uses a shader to display the triangle using the same coordinates and is called with the function glDrawArrays(GL_TRIANGLES, 0, 3). shader.vert is shown below:
#version 430 core
void main()
{
const vec4 verts[3] = vec4[3](vec4(0.25, -0.25, 0.5, 1),
vec4(-0.25, -0.25, 0.5, 1),
vec4(0.25, 0.25, 0.5, 1));
gl_Position = verts[gl_VertexID];
}
Now here is my problem: if I move the camera around using the primitive rendering for the triangle, I see the triangle from different angles, as one would expect. When I use the shader rendering function, the triangle remains stationary. Clearly I am missing something about world coordinates and how they relate to objects rendered with shaders. Could someone point me in the right direction?

If you do not have an active shader program, you're using what is called the "fixed pipeline". The fixed pipeline performs rendering based on numerous attributes you set with OpenGL API calls. For example, you specify what transformations you want to apply. You specify material and light attributes that control the lighting of your geometry. Applying these attributes is then handled by OpenGL.
Once you use your own shader program, you're not using the fixed pipeline anymore. This means that most of what the fixed pipeline previously handled for you has to be implemented in your shader code. Applying transformations is part of this. To apply your transformation matrix, you have to pass it into the shader, and apply it in your shader code.
The matrix is typically declared as a uniform variable in your vertex shader:
uniform mat4 ModelViewProj;
and then applied to your vertices:
gl_Position = ModelViewProj * verts[gl_VertexID];
In your code, you will then use calls like glGetUniformLocation(), glUniformMatrix4fv(), etc., to set up the matrix. Explaining this in full detail is somewhat beyond this answer, but you should be able to find it in many OpenGL tutorials online.
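As a rough host-side sketch (assuming program is your linked shader program object and mvp is a column-major float[16] you computed on the CPU with whatever math library you use; both names are placeholders):
// Query the uniform location once, after the program has been linked.
GLint mvpLoc = glGetUniformLocation(program, "ModelViewProj");

// Every frame, before issuing the draw call:
glUseProgram(program);
// mvp holds the combined projection * view * model matrix, column-major.
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvp);
glDrawArrays(GL_TRIANGLES, 0, 3);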
As long as you're still using legacy functionality with the Compatibility Profile, there's actually a simpler way. You should be aware that this is deprecated, and not available in the OpenGL Core Profile. The Compatibility Profile makes certain fixed function attributes available to your shader code, including the transformation matrices. So you do not have to declare anything, and can simply write:
gl_Position = gl_ModelViewProjectionMatrix * verts[gl_VertexID];
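In that case the host code can stay essentially what you already have, because the legacy matrix stack feeds the built-in uniform. A minimal sketch (program, aspect, and the eye coordinates are placeholders):
// Set up the legacy matrices exactly as in the fixed-pipeline path.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, aspect, 0.1, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyeX, eyeY, eyeZ, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);

// The shader reads these through gl_ModelViewProjectionMatrix.
// Note: the shader must be compiled as a compatibility shader
// (e.g. #version 430 compatibility); the built-in matrices do not
// exist in core profile GLSL.
glUseProgram(program);
glDrawArrays(GL_TRIANGLES, 0, 3);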

Related

OpenGL: Defining variables in shaders

My OpenGL program, using GLSL for shaders, has a simple vertex and fragment shader (given by a tutorial).
The vertex shader is:
#version 330
layout (location = 0) in vec3 Position;
void main()
{
gl_Position = vec4(0.5 * Position.x, 0.5 * Position.y, Position.z, 1.0);
}
And the fragment shader is:
#version 330
out vec4 FragColor;
void main()
{
FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
What is happening here is that the vertex shader halves the x and y coordinates of each vertex, and the fragment shader then colors the triangle red.
Now from my understanding, gl_Position tells the fragment shader the pixel coordinates of this vector. And gl_Position is an existing variable that both shaders know about, so the fragment shader will always look for gl_Position when deciding where to draw that vertex.
However, what about FragColor? In this example, it has been manually defined in the fragment shader. So how does OpenGL then know that FragColor is the variable we are using to set the fragment's color? FragColor could have been defined with a different name and the program would still run in the same way.
So I am wondering: why is gl_Position a variable that has already been defined by OpenGL, whereas FragColor is manually defined, and how does OpenGL know how to interpret FragColor?
1. Question: Why is gl_Position a variable that has already been defined?
This is because OpenGL/the rendering pipeline has to know which data should be used as the basis for rasterization and interpolation. Since there is always exactly one such variable, OpenGL provides the predefined variable gl_Position for this. There are also some other predefined variables that serve specific purposes in the rendering pipeline. For a complete list, have a look here.
2. Question: FragColor is manually defined, so how does OpenGL know how to interpret it?
A fragment shader can have an arbitrary number of output variables, which is especially needed when working with framebuffers. There are basically two options for telling OpenGL which variable should be written to which render buffer: you can set these locations from the application side with the glBindFragDataLocation method, or you can specify the location(s) directly in the shaders using layout qualifiers. In either case, the location of the variable defines which render buffer the data is written to.
When no custom framebuffers are used (as in your case), the default back buffer will get its data from the fragment output variable at location 0. Since your shader has only one output variable, it is highly likely that this one will have location 0. (Although I think it is not guaranteed; correct me if I'm wrong.)
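If you would rather make the assignment explicit than rely on the default, a minimal sketch from the application side (assuming program is your shader program object; the name is a placeholder):
// Assign the fragment output "FragColor" to color number 0.
// This must happen before the program is linked.
glBindFragDataLocation(program, 0, "FragColor");
glLinkProgram(program);

// The GLSL alternative (no API call needed) is a layout qualifier in the shader:
//   layout(location = 0) out vec4 FragColor;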

Weird noise on rendered objects - OpenGL

To be more specific, here's the screenshot:
https://drive.google.com/file/d/0B_o-Ym0jhIqmY2JJNmhSeGpyanM/edit?usp=sharing
After debugging for about 3 days, I really have no idea. Those black lines and strange fractal black segments just drive me nuts. The geometries are rendered by forward rendering, blending layer by layer for each light I add.
My first guess was downloading the newest graphics card driver (I'm using a GTX 660M), but that didn't solve it. Could VSync be a possible issue here? (I'm rendering in a window rather than in full-screen mode.) Or what is the most likely cause of this kind of trouble?
My code is like this:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(false);
glDepthFunc(GL_EQUAL);
/*loop here*/
/*draw for each light I had*/
glDepthFunc(GL_LESS);
glDepthMask(true);
glDisable(GL_BLEND);
One thing I've noticed looking at your lighting vertex shader code:
void main()
{
gl_Position = projectionMatrix * vec4(position, 1.0);
texCoord0 = texCoord;
normal0 = (normalMatrix * vec4(normal, 0)).xyz;
modelViewPos0 = (modelViewMatrix * vec4(position, 1)).xyz;
}
You are applying the projection matrix directly to the vertex position, which I'm assuming is in object space.
Try setting it to:
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
And we can work from there.
This answer is slightly speculative, but based on the symptoms and the code you posted, I suspect a precision problem. The rendering code you linked looks like this in shortened form:
useShader(FWD_AMBIENT);
part.render();
glDepthMask(GL_FALSE);
glDepthFunc(GL_EQUAL);
for (Light light : lights) {
useShader(light.getShaderType());
part.render();
}
So you're rendering the same thing multiple times, with different shaders, and rely on the resulting pixels to end up with the same depth value (depth comparison function is GL_EQUAL). This is not a safe assumption. Quote from the GLSL spec:
In this section, variance refers to the possibility of getting different values from the same expression in different programs. For example, say two vertex shaders, in different programs, each set gl_Position with the same expression in both shaders, and the input values into that expression are the same when both shaders run. It is possible, due to independent compilation of the two shaders, that the values assigned to gl_Position are not exactly the same when the two shaders run. In this example, this can cause problems with alignment of geometry in a multi-pass algorithm.
I copied the whole paragraph because the example they are using sounds like an exact description of what you are doing.
To prevent this from happening, you can declare your out variables as invariant. In each of your vertex shaders that you use for the multi-pass rendering, add this line:
invariant gl_Position;
This guarantees that the outputs are identical if all the inputs are the same. To meet this condition, you should also make sure that you pass exactly the same transformation matrix into both shaders, and of course use the same vertex coordinates.

Global Ambient Lighting?

Let's say my display function draws polygons pixel by pixel, not using OpenGL functions, but a drawpixel function.
I call
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient);
glShadeModel(GL_SMOOTH);
glEnable(GL_LIGHTING);
where global_ambient is 0.0, 0.0, 0.0, 1.0, and I have no material parameters defined; that is, glMaterial is never called. Would the global ambient lighting still take effect, meaning I would not be able to see the polygon? Or would I need to define material parameters?
Let's say my display function draws polygons pixel by pixel, not using OpenGL functions, but a drawpixel function.
If that's true, then the lighting state is completely irrelevant. Fixed-function OpenGL lighting is per-vertex. You're not sending vertices; you're sending pixel data.

GLSL 4.10 Texture Mapping

I'm trying to figure out how to do texture mapping using GLSL version 4.10. I'm pretty new to GLSL and was happy to get a triangle rendering today with colors fading based on sin(time) using shaders. Now I'm interested in using shaders with a single texture.
A lot of tutorials and even Stack Overflow answers suggest using gl_MultiTexCoord0. However, this has been deprecated since GLSL 1.30 and the latest version is now 4.20. My graphics card doesn't support 4.20 which is why I'm trying to use 4.10.
I know I'm generating and binding my texture appropriately, and I have proper vertex coordinates and texture coordinates because my heightmap rendered perfectly when I was using the fixed-function pipeline, and it renders fine with color rather than the texture.
Here are my GLSL shaders and some of my C++ draw code:
---heightmap.vert (GLSL)---
in vec3 position;
in vec2 texcoord;
out vec2 p_texcoord;
uniform mat4 projection;
uniform mat4 modelview;
void main(void)
{
gl_Position = projection * modelview * vec4(position, 1.0);
p_texcoord = texcoord;
}
---heightmap.frag (GLSL)---
in vec2 p_texcoord;
out vec4 color;
uniform sampler2D texture;
void main(void)
{
color = texture2D(texture, p_texcoord);
}
---Heightmap::Draw() (C++)---
// Bind Shader
// Bind VBO + IBO
// Enable Vertex and Texcoord client state
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
// glVertexPointer(...)
// glTexCoordPointer(...)
glUniform4fv(projLoc, projection);
glUniform4fv(modelviewLoc, modelview);
glUniform1i(textureId, 0);
// glDrawElements(...)
// glDisable/unbind everything
The thing that I am also suspicious about is whether I have to pass the texture coord stuff to the fragment shader as a varying, since I'm not touching it in the vertex shader. Also, I have no idea how it's going to get the interpolated texcoords from that. It seems like it's just going to get 0.f or 1.f, not the interpolated coordinate. I don't know enough about shaders to understand how that works. If somebody could enlighten me I would be thrilled!
Edit 1:
@Bahbar: So sorry, that was a typo. I'm typing code on one machine while reading it off another. Like I said, it all worked with the fixed-function pipeline. Although glEnableClientState and gl[Vertex|TexCoord]Pointer are deprecated, they should still work with shaders, no? glVertexPointer rather than glVertexAttribPointer worked with colors rather than textures. Also, I am using glBindAttribLocation (position to 0 and texcoord to 1).
The reason I am still using glVertexPointer is I am trying to un-deprecate one thing at a time.
glBindTexture takes a texture object as a second parameter.
// Enable Vertex and Texcoord client state
I assume you meant the generic vertex attributes? Where are your position and texcoord attributes set up? To do that, you need some calls to glEnableVertexAttribArray and glVertexAttribPointer instead of glEnableClientState and glVertex/TexCoordPointer (all of those are deprecated in the same way that gl_MultiTexCoord is in GLSL).
And of course, to know where the attributes are bound, you need to either call glGetAttribLocation to find out where the GL chose to put each attrib, or define the locations yourself with glBindAttribLocation (before linking the program).
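Roughly, the setup could look like the sketch below (program, vbo, and the interleaved Vertex layout are placeholder assumptions, not your actual code):
struct Vertex { float position[3]; float texcoord[2]; };  // hypothetical interleaved layout

// Choose the attribute locations yourself, before linking the program.
glBindAttribLocation(program, 0, "position");
glBindAttribLocation(program, 1, "texcoord");
glLinkProgram(program);

// At draw time, with the VBO bound, the generic calls replace gl*Pointer.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)(3 * sizeof(float)));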
Edit to add, following your addition:
Well, 0 might end up pulling data from glVertexPointer (for reasons you should not rely on; attrib 0 is special, and most IHVs make it work just like Vertex), but 1 very likely won't be pulling data from glTexCoordPointer.
In theory, there is no overlap between the generic attributes (like your texcoord, which gets its data from glVertexAttribPointer(1, ...), 1 here being your chosen location) and the built-in attributes (like gl_MultiTexCoord0, which gets its data from glTexCoordPointer).
Now, NVIDIA is known not to follow the spec here, and indeed aliases attributes (this comes from the Cg model, as far as I know), and will go so far as to tell you to use a specific attribute location for glTexCoord (the Cg spec suggests it uses location 8 for TexCoord0, while location 1 is the blendweight attribute; see table 39, p. 242), but really you should just bite the bullet and switch your TexCoordPointer calls to VertexAttribPointer calls.

How do I get the current color of a fragment?

I'm trying to wrap my head around shaders in GLSL, and I've found some useful resources and tutorials, but I keep running into a wall for something that ought to be fundamental and trivial: how does my fragment shader retrieve the color of the current fragment?
You set the final color by saying gl_FragColor = whatever, but apparently that's an output-only value. How do you get the original color of the input so you can perform calculations on it? That's got to be in a variable somewhere, but if anyone out there knows its name, they don't seem to have recorded it in any tutorial or documentation that I've run across so far, and it's driving me up the wall.
The vertex shader receives gl_Color and gl_SecondaryColor as vertex attributes. It also has four varying variables that it can write values to: gl_FrontColor, gl_FrontSecondaryColor, gl_BackColor, and gl_BackSecondaryColor. If you want to pass the original colors straight through, you'd do something like:
gl_FrontColor = gl_Color;
gl_FrontSecondaryColor = gl_SecondaryColor;
gl_BackColor = gl_Color;
gl_BackSecondaryColor = gl_SecondaryColor;
Fixed functionality in the pipeline following the vertex shader will then clamp these to the range [0..1], and figure out whether the vertex is front-facing or back-facing. It will then interpolate the chosen (front or back) color like usual. The fragment shader will then receive the chosen, clamped, interpolated colors as gl_Color and gl_SecondaryColor.
For example, if you drew the standard "death triangle" like:
glBegin(GL_TRIANGLES);
glColor3f(0.0f, 0.0f, 1.0f);
glVertex3f(-1.0f, 0.0f, -1.0f);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex3f(1.0f, 0.0f, -1.0f);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex3d(0.0, -1.0, -1.0);
glEnd();
Then a vertex shader like this:
void main(void) {
gl_Position = ftransform();
gl_FrontColor = gl_Color;
}
with a fragment shader like this:
void main() {
gl_FragColor = gl_Color;
}
will transmit the colors through, just like if you were using the fixed-functionality pipeline.
If you want to do multi-pass rendering, i.e. if you have rendered to the framebuffer and want to do a second render pass where you use the previous rendering, then the answer is:
Render the first pass to a texture
Bind this texture for the second pass
Access the previously rendered pixel in the shader
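To illustrate steps 1 and 2 on the application side, here is a rough sketch (width, height, drawFirstPass, secondPassProgram, and fboTex are placeholders; error checking omitted):
// Step 1: create a texture and attach it to a framebuffer object.
GLuint fboTex, fbo;
glGenTextures(1, &fboTex);
glBindTexture(GL_TEXTURE_2D, fboTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboTex, 0);

// Render the first pass into the texture.
drawFirstPass();

// Step 2: switch back to the default framebuffer and bind the texture
// so the second-pass shader can read it through the "mytex" sampler.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(secondPassProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, fboTex);
glUniform1i(glGetUniformLocation(secondPassProgram, "mytex"), 0);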
Shader code for step 3 (OpenGL 3.2):
uniform sampler2D mytex; // texture with the previous render pass
layout(pixel_center_integer) in vec4 gl_FragCoord;
// will give the screen position of the current fragment
void main()
{
// convert fragment position to integers
ivec2 screenpos = ivec2(gl_FragCoord.xy);
// look up result from previous render pass in the texture
vec4 color = texelFetch(mytex, screenpos, 0);
// now use the value from the previous render pass ...
}
Another method of processing a rendered image would be OpenCL with OpenGL -> OpenCL interop. This allows more CPU-like computation.
If what you're calling "current value of the fragment" is the pixel color value that was in the render target before your fragment shader runs, then no, it is not available.
The main reason for that is that, potentially, at the time your fragment shader runs, it is not known yet. Fragment shaders run in parallel, potentially (depending on the hardware) affecting the same pixel, and a separate block, reading from some sort of FIFO, is usually responsible for merging those together later on. That merging is called "Blending", and is not part of the programmable pipeline yet. It's fixed function, but it does have a number of different ways to combine what your fragment shader generated with the previous color value of the pixel.
You need to sample the texture at the current pixel coordinates, something like this:
vec4 pixel_color = texture2D(tex, gl_TexCoord[0].xy);
Note: as far as I've seen, texture2D is deprecated in the GLSL 4.00 specification; just look for the similar texture...() fetch functions.
Also, sometimes it is better to supply your own pixel coordinates instead of gl_TexCoord[0].xy. In that case, write a vertex shader something like this:
varying vec2 texCoord;
void main(void)
{
gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0 );
texCoord = 0.5 * gl_Position.xy + vec2(0.5);
}
And in fragment shader use that texCoord variable instead of gl_TexCoord[0].xy.
Good luck.
The entire point of your fragment shader is to decide what the output color is. How you do that depends on what you are trying to do.
You might choose to set things up so that you get an interpolated color based on the output of the vertex shader, but a more common approach would be to perform a texture lookup in the fragment shader using texture coordinates passed in from the vertex shader interpolants. You would then modify the result of your texture lookup according to your chosen lighting calculations and whatever else your shader is meant to do, and then write it into gl_FragColor.
The GPU pipeline has access to the underlying pixel info immediately after the shaders run. If your material is transparent, the blending stage of the pipeline will combine all fragments.
Generally, objects are blended in the order that they are added to a scene, unless they have been ordered by a z-buffering algorithm. You should add your opaque objects first, then carefully add your transparent objects in the order in which they should be blended.
For example, if you want a HUD overlay on your scene, you should just create a screen quad object with an appropriate transparent texture, and add this to your scene last.
Setting the SRC and DST blending functions for transparent objects gives you access to the previous blend in many different ways.
You can use the alpha property of your output color here to do really fancy blending. This is the most efficient way to access framebuffer outputs (pixels), since it works in a single pass (Fig. 1) of the GPU pipeline.
Fig. 1 - Single Pass
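For instance, conventional back-to-front alpha blending of the kind described above takes only a couple of calls; a minimal sketch:
// Draw the opaque objects first with normal depth writes, then:
glEnable(GL_BLEND);
// result = src.rgb * src.a + dst.rgb * (1 - src.a)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);   // keep depth testing, but don't write depth for transparent objects
// ... draw transparent objects sorted back to front ...
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);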
If you really need multi-pass rendering (Fig. 2), then you must target the framebuffer outputs to an extra texture rather than the screen, copy this target texture to the next pass, and so on, targeting the screen in the final pass. Each pass requires at least two context switches.
The extra copying and context switching will degrade rendering performance severely. Note that multi-threaded GPU pipelines are not much help here, since multi-pass rendering is inherently serialized.
Fig. 2 - Multi Pass
I have resorted to a verbal description with pipeline diagrams to avoid deprecation, since shader language (Slang/GLSL) is subject to change.
Some say it cannot be done, but I say this works for me:
//Toggle blending in one sense, while always disabling it in the other.
void enableColorPassing(BOOL enable) {
//This will toggle blending - and what gl_FragColor is set to upon shader execution
enable ? glEnable(GL_BLEND) : glDisable(GL_BLEND);
//Tells gl - "When blending, change nothing"
glBlendFunc(GL_ONE, GL_ZERO);
}
After that call, gl_FragColor will equal the color buffer's clear color the first time the shader runs on each pixel, and the output each run will be the new input upon each successive run.
Well, at least it works for me.