I have a problem with my fragment shader: I want an effect where two different objects are illuminated with different lights. Here is my main code:
glUniform1i(TextureID, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glUniform1i(ShadowMapID, 1);
//Here I draw my first object
//Then I want to switch the light used by my fragment shader to color2.
My fragment shader:
// Output data
layout(location = 0) out vec3 color;
layout(location = 1) out vec3 color2;
void main(){
//Here I calculate my color variables
}
I have no idea how to achieve this effect. Do I have to write a second fragment shader? Is it necessary?
Not quite.
Think about what a fragment shader is: it runs for every fragment (roughly, every covered pixel) on your screen. As such, it typically has one color output, denoting the value of that pixel. Multiple fragment shader outputs are used in advanced techniques such as MRT (multiple render targets), to avoid unnecessary geometry computations.
If you want to change the light's value between the calls, you simply change the shader uniforms and then execute the draw call again. Another, analogous solution is to use a UBO.
Writing different shaders is necessary only if the logic changes fundamentally; otherwise shaders are often generic enough that modifying the bound data is sufficient for things like changing lights. (Changing the number of lights, though, is another story.)
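For example (just a sketch; lightColorID and the draw calls stand in for whatever your program actually uses):
glUseProgram(programID);
glUniform3f(lightColorID, 1.0f, 1.0f, 1.0f);   // light color for the first object
// draw the first object here
glUniform3f(lightColorID, 1.0f, 0.2f, 0.2f);   // switch the uniform to the second light color
// draw the second object here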
Here is a common GLSL fragment shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(location = 0) in vec3 inColor;
layout(location = 0) out vec4 outColor;
void main(){
outColor = vec4(inColor, 1.0);
}
I know that the vertex shader is executed once per vertex, and the fragment shader is executed once per fragment.
But why is outColor a vec4, which is only the size of one pixel (vec4 == RGBA)?
If it is meant to output a fragment, shouldn't outColor be larger?
I think you are misunderstanding what a fragment actually is.
A fragment is a pixel... sort of. In the most basic sense, you can think of a fragment as a "potential pixel". It has an RGBA value, which is the value that will be drawn to the screen if it is rendered.
Imagine the simplest scenario: you are rendering a quad over the full screen, and your screen's size is 100x100. In this case, your fragment shader runs once for every fragment within that quad. For this program, that means 100 * 100 = 10000 times, once for every pixel on your screen.
However, not every fragment rendered in the shader has to be displayed on the screen. Let's make the scenario slightly more complex: you have two quads, one behind the other. You are just rendering these two quads, with depth testing enabled. Even though one quad is entirely behind the other, and won't be seen as it is occluded by the first quad, you still need to run the fragment shader for every "potential pixel" in the second quad. Just because a fragment isn't seen doesn't mean you don't run the fragment shader for it. Unless you have early depth testing enabled, a fragment is only discarded after you run the fragment shader. In this case, the fragment shader would run once for every fragment in both quads, so 20000 times.
So, in essence, you can think of a fragment as a pixel that may or may not end up being displayed. (This is quite a simplification but works to understand the basics)
I am starting to learn OpenGL (3.3+), and now I am trying to write an algorithm that draws 10000 points randomly on the screen.
The problem is that I don't know exactly where to do the algorithm. Since they are random, I can't declare them in a VBO (or can I?), so I was thinking of passing a uniform value to the vertex shader with the varying position (I would do a loop changing the uniform value). Then I would do the operation 10000 times. I would also pass a random color value to the shader.
Here is roughly my thought:
#version 330 core
uniform vec3 random_position;
uniform vec3 random_color;
out vec3 Color;
void main() {
gl_Position = vec4(random_position, 1.0);
Color = random_color;
}
In this way I would do the calculations outside the shaders and just pass them through the uniforms, but I think a better way would be doing these calculations inside the vertex shader. Would that be right?
The vertex shader will be called once for every vertex you pass to the vertex shader stage, and the uniforms are the same for each of these calls. Hence you shouldn't pass the vertices - random or not - as uniforms. Global transformations (e.g. a camera rotation, a model matrix, etc.) are what belong in uniforms.
Your vertices should be passed in a vertex buffer object. Just generate them randomly in your host application and draw them; they will automatically become the in variables of your shader.
You can change the array every iteration, but it might be a good idea to keep its size constant. For this it's sometimes useful to give each 3D vector a fourth component that is 1 if the vertex is used and 0 otherwise; that way you can simply check whether a vertex should be drawn.
Then just clear the GL_COLOR_BUFFER_BIT and draw the arrays before updating the screen.
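A rough host-side sketch of that approach (NUM_POINTS, attribute location 0, and the use of rand() from <stdlib.h> are just assumptions for illustration):
#define NUM_POINTS 10000
static GLfloat points[NUM_POINTS * 3];
for (int i = 0; i < NUM_POINTS * 3; ++i)
    points[i] = 2.0f * rand() / (GLfloat)RAND_MAX - 1.0f; // random coordinate in [-1, 1]
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);                                 // matches layout(location = 0) in vec3
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
// each frame:
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_POINTS, 0, NUM_POINTS);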
In your shader just set gl_Position with your in variables (i.e. the vertices) and pass the color on to the fragment shader - it will not be applied in the vertex shader yet.
In the fragment shader, write the color you passed on from the vertex shader to the output variable, e.g. gl_FragColor.
By the way, if you draw something as GL_POINTS it will result in little squares. There are lots of tricks to make them actually round; the easiest is probably this simple if in the fragment shader. You should then configure them as point sprites (glEnable(GL_POINT_SPRITE)):
if(dot(gl_PointCoord - vec2(0.5,0.5), gl_PointCoord - vec2(0.5,0.5)) > 0.25)
discard;
I suggest you read up a little on what the fragment and vertex shaders do, what vertices and fragments are, and what their respective in/out/uniform variables represent.
Since programs with full vertex buffer objects, shader programs etc. get quite huge, you can also start out with glBegin() and glEnd() to draw vertices directly. However this should only be a very early starting point to understand what you are drawing where and how the different shaders affect it.
The lighthouse3d tutorials (http://www.lighthouse3d.com/tutorials/) are usually a good start, though they might be a bit outdated. Another good reference is the GLSL wiki (http://www.opengl.org/wiki/Vertex_Shader), which is up to date in most cases - but it might be a bit technical.
Whether you are working with C++, Java, or another language, the OpenGL concepts are usually the same, so almost any tutorial will do.
I'm just starting to learn graphics using opengl and barely grasp the ideas of shaders and so forth. Following a set of tutorials, I've drawn a triangle on screen and assigned a color attribute to each vertex.
Using a vertex shader I forwarded the color values to a fragment shader which then simply assigned the vertex color to the fragment.
Vertex shader:
[.....]
layout(location = 1) in vec3 vertexColor;
out vec3 fragmentColor;
void main(){
[.....]
fragmentColor = vertexColor;
}
Fragment shader:
[.....]
out vec3 color;
in vec3 fragmentColor;
void main()
{
color = fragmentColor;
}
So I assigned a different colour to each vertex of the triangle. The result was a smoothly interpolated coloured triangle.
My question is: since I send a specific colour to the fragment shader, where did the smooth interpolation happen? Is it a state enabled by default in OpenGL? What other values can this state have and how do I switch among them? I would expect to have total control over the pixel colours using a fragment shader, but there seem to be calculations behind the scenes that alter the result. There are clearly things I don't understand; can anyone help on this matter?
Within the OpenGL pipeline, between the vertex shading stages (vertex, tessellation, and geometry shading) and fragment shading, is the rasterizer. Its job is to determine which screen locations are covered by a particular piece of geometry (point, line, or triangle). Knowing those locations, along with the input vertex data, the rasterizer linearly interpolates the data values for each varying variable in the fragment shader and sends those values as inputs into your fragment shader. When applied to color values, this is called Gouraud shading.
Source: OpenGL Programming Guide, Eighth Edition.
If you want to see what happens without interpolation, call glShadeModel(GL_FLAT) before you draw. The default value is GL_SMOOTH.
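In a core profile, where glShadeModel is no longer available, the equivalent is the flat interpolation qualifier on the varying itself - a small sketch based on the shaders above:
Vertex shader (only the relevant lines):
flat out vec3 fragmentColor;
...
fragmentColor = vertexColor;
Fragment shader:
flat in vec3 fragmentColor;
out vec3 color;
void main()
{
color = fragmentColor; // value comes from the provoking vertex, no interpolation
}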
What I'm trying to accomplish: Drawing the depth map of my scene on top of my scene (so that objects closer are darker, and further away are lighter)
Problem: I don't seem to understand how to pass the right texture coordinates from my vertex shader to my fragment shader.
So I created my FBO, and the texture that the depth map gets drawn to... not that I'm entirely sure what I was doing, but whatever, it works. I tested drawing the texture using the fixed functionality pipeline, and it looks just like it's supposed to (the depth map that is).
But trying to use it in my shaders just isn't working...
Here's the part from my render method that binds the texture:
glActiveTexture(GL_TEXTURE7);
glBindTexture(GL_TEXTURE_2D, depthTextureId);
glUniform1i(depthMapUniform, 7);
glUseProgram(shaderProgram);
look(); //updates my viewing matrix
box.render(); //renders box VBO
So... I think that's sort of right? Maybe? No clue why texture 7, that was just something that was in a tutorial I was checking...
And here's the important stuff from my vertex shader:
out vec4 ShadowCoord;
void main() {
gl_Position = PMatrix * (VMatrix * MMatrix) * gl_Vertex; //projection, view and model matrices
ShadowCoord = gl_MultiTexCoord0; //something I kept seeing in examples, was hoping it would work.
}
Aaand, fragment shader:
in vec4 ShadowCoord;
in vec3 Color; //passed from vertex shader, didn't include the code for it though. Just the vertex color.
out vec4 FragColor;
void main() {
FragColor = vec4(texture2D(ShadowMap, ShadowCoord.st).x * vec3(Color), 1.0);
}
Now the problem is that the coordinate that the fragment shader receives for the texture is always (0,0), or the bottom-left corner. I tried changing it to ShadowCoord = gl_MultiTexCoord7, because I figured maybe it had something to do with me putting the texture in slot number 7... but alas, the problem persisted. When the color of (0, 0) changes, so does the color of the entire scene, rather than being a change in color for only the appropriate pixel/fragment.
And that's what I'm hoping to get some insight on... how to pass the correct coordinates (I'd like for the corners of the texture to be the same coordinates as the corners of my screen). And yes, this is a beginners question... but I have been looking in the Orange Book, and the problem with it is that it's great on the GLSL side of things, but the OpenGL side of things is severely lacking in the examples that I could really use...
The input variable gl_MultiTexCoord0 (or 7) is the built-in per-vertex attribute holding the 0th (or 7th) texture coordinate, set by glTexCoord/glMultiTexCoord (when using immediate mode) or by glTexCoordPointer (when using arrays/VBOs).
But as your depth buffer is already in screen space, what you want is not a usual texture laid onto the object, but just the value in the texture for a specific pixel/fragment. So the vertex shader isn't involved in any way. Instead you just use the current fragment's screen space position as texture coordinate, that can be read in the fragment shader using gl_FragCoord. But keep in mind that this coordinate is in [0,w]x[0,h] and textures are accessed by normalized texture coordinates in [0,1]. So you have to divide the fragment's coordinate by the screen size:
uniform vec2 screenSize;
...
... texture2D(ShadowMap, gl_FragCoord.st/screenSize) ...
But you actually don't need two passes for this effect anyway, as you can just use the fragment's depth directly, without writing it into a texture. Instead of
texture2D(ShadowMap, gl_FragCoord.st/screenSize).x
you can just use
gl_FragCoord.z
which is nothing other than the fragment's depth value that would have been written into the texture in the first pass. This way you completely spare the first depth-writing pass and the texture access in the second pass.
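A minimal single-pass fragment shader along those lines (the Color varying and the exact darkening formula are assumptions based on the question, not part of the answer):
in vec3 Color;
out vec4 FragColor;
void main() {
// gl_FragCoord.z is 0.0 at the near plane and 1.0 at the far plane (non-linearly in between),
// so nearby fragments come out darker and distant ones lighter
FragColor = vec4(gl_FragCoord.z * Color, 1.0);
}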
I'm trying to wrap my head around shaders in GLSL, and I've found some useful resources and tutorials, but I keep running into a wall for something that ought to be fundamental and trivial: how does my fragment shader retrieve the color of the current fragment?
You set the final color by saying gl_FragColor = whatever, but apparently that's an output-only value. How do you get the original color of the input so you can perform calculations on it? That's got to be in a variable somewhere, but if anyone out there knows its name, they don't seem to have recorded it in any tutorial or documentation that I've run across so far, and it's driving me up the wall.
The vertex shader receives gl_Color and gl_SecondaryColor as vertex attributes. It also has four varying variables it can write to: gl_FrontColor, gl_FrontSecondaryColor, gl_BackColor, and gl_BackSecondaryColor. If you want to pass the original colors straight through, you'd do something like:
gl_FrontColor = gl_Color;
gl_FrontSecondaryColor = gl_SecondaryColor;
gl_BackColor = gl_Color;
gl_BackSecondaryColor = gl_SecondaryColor;
Fixed functionality in the pipeline following the vertex shader will then clamp these to the range [0..1], and figure out whether the vertex is front-facing or back-facing. It will then interpolate the chosen (front or back) color like usual. The fragment shader will then receive the chosen, clamped, interpolated colors as gl_Color and gl_SecondaryColor.
For example, if you drew the standard "death triangle" like:
glBegin(GL_TRIANGLES);
glColor3f(0.0f, 0.0f, 1.0f);
glVertex3f(-1.0f, 0.0f, -1.0f);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex3f(1.0f, 0.0f, -1.0f);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex3f(0.0f, -1.0f, -1.0f);
glEnd();
Then a vertex shader like this:
void main(void) {
gl_Position = ftransform();
gl_FrontColor = gl_Color;
}
with a fragment shader like this:
void main() {
gl_FragColor = gl_Color;
}
will transmit the colors through, just as if you were using the fixed-functionality pipeline.
If you want to do multi-pass rendering, i.e. if you have rendered to the framebuffer and want to do a second render pass that uses the previous rendering, then the answer is:
Render the first pass to a texture
Bind this texture for the second pass
Access the previously rendered pixel in the shader
Shader code for OpenGL 3.2:
uniform sampler2D mytex; // texture with the previous render pass
layout(pixel_center_integer) in vec4 gl_FragCoord;
// will give the screen position of the current fragment
void main()
{
// convert fragment position to integers
ivec2 screenpos = ivec2(gl_FragCoord.xy);
// look up result from previous render pass in the texture
vec4 color = texelFetch(mytex, screenpos, 0);
// now use the value from the previous render pass ...
}
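The host-side setup for steps 1 and 2 might look roughly like this (width, height, and the draw helpers are placeholders, not part of the answer):
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
// pass 1: render the scene into the texture (fbo is still bound)
renderScene();                          // hypothetical
// pass 2: render to the screen, with the first pass bound as mytex
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, colorTex);
renderSecondPass();                     // hypothetical: uses the shader above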
Another method of processing a rendered image would be OpenCL with OpenGL -> OpenCL interop. This allows more CPU-like computation.
If what you're calling "current value of the fragment" is the pixel color value that was in the render target before your fragment shader runs, then no, it is not available.
The main reason for that is that potentially, at the time your fragment shader runs, it is not known yet. Fragment shaders run in parallel, potentially (depending on the hardware) affecting the same pixel, and a separate block, reading from some sort of FIFO, is usually responsible for merging those together later on. That merging is called "blending", and is not part of the programmable pipeline yet. It's fixed function, but it does have a number of different ways to combine what your fragment shader generated with the previous color value of the pixel.
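For illustration, this is the kind of fixed-function configuration that controls that merge (just a sketch of standard blend state, nothing from the question):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // classic "over": out = src*a + dst*(1 - a)
// glBlendFunc(GL_ONE, GL_ONE);                    // or additive: out = src + dst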
You need to sample the texture at the current pixel coordinates, something like this:
vec4 pixel_color = texture2D(tex, gl_TexCoord[0].xy);
Note: as far as I've seen, texture2D is deprecated in the GLSL 4.00 specification - just look for the similar texture... fetch functions.
Also, sometimes it is better to supply your own pixel coordinates instead of gl_TexCoord[0].xy - in that case write a vertex shader something like:
varying vec2 texCoord;
void main(void)
{
gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0 );
texCoord = 0.5 * gl_Position.xy + vec2(0.5);
}
And in the fragment shader use that texCoord variable instead of gl_TexCoord[0].xy.
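A matching fragment shader might look like this (tex standing for whatever sampler you bound in your application):
varying vec2 texCoord;
uniform sampler2D tex;
void main(void)
{
gl_FragColor = texture2D(tex, texCoord);
}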
Good luck.
The entire point of your fragment shader is to decide what the output color is. How you do that depends on what you are trying to do.
You might choose to set things up so that you get an interpolated color based on the output of the vertex shader, but a more common approach would be to perform a texture lookup in the fragment shader using texture coordinates passed in from the vertex shader interpolants. You would then modify the result of your texture lookup according to your chosen lighting calculations and whatever else your shader is meant to do, and then write it into gl_FragColor.
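A small sketch of that pattern (all the names - tex, texCoord, normal, lightDir - and the simple diffuse factor are placeholders, not from the question):
uniform sampler2D tex;
uniform vec3 lightDir;      // normalized direction toward the light
varying vec2 texCoord;      // interpolated from the vertex shader
varying vec3 normal;        // likewise
void main()
{
vec4 base = texture2D(tex, texCoord);                        // texture lookup
float diffuse = max(dot(normalize(normal), lightDir), 0.0);  // chosen lighting calculation
gl_FragColor = vec4(base.rgb * diffuse, base.a);             // write the final color
}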
The GPU pipeline has access to the underlying pixel info immediately after the shaders run. If your material is transparent, the blending stage of the pipeline will combine all fragments.
Generally objects are blended in the order that they are added to a scene, unless they have been ordered by a z-buffering algorithm. You should add your opaque objects first, then carefully add your transparent objects in the order in which they should be blended.
For example, if you want a HUD overlay on your scene, you should just create a screen quad object with an appropriate transparent texture, and add this to your scene last.
Setting the SRC and DST blending functions for transparent objects gives you access to the previous blend in many different ways.
You can use the alpha property of your output color here to do really fancy blending. This is the most efficient way to access framebuffer outputs (pixels), since it works in a single pass (Fig. 1) of the GPU pipeline.
Fig. 1 - Single Pass
If you really need multi pass (Fig. 2), then you must target the framebuffer output to an extra texture (a render target) rather than the screen, feed this target texture into the next pass, and so on, targeting the screen in the final pass. Each pass requires at least two context switches.
The extra copying and context switching will degrade rendering performance severely. Note that multi-threaded GPU pipelines are not much help here, since multi-pass rendering is inherently serialized.
Fig. 2 - Multi Pass
I have resorted to a verbal description with pipeline diagrams to avoid deprecation, since shader language (Slang/GLSL) is subject to change.
Some say it cannot be done, but I say this works for me:
//Toggle blending in one sense, while always disabling it in the other.
void enableColorPassing(BOOL enable) {
//This will toggle blending - and what gl_FragColor is set to upon shader execution
enable ? glEnable(GL_BLEND) : glDisable(GL_BLEND);
//Tells gl - "When blending, change nothing"
glBlendFunc(GL_ONE, GL_ZERO);
}
After that call, gl_FragColor will equal the color buffer's clear color the first time the shader runs on each pixel, and the output each run will be the new input upon each successive run.
Well, at least it works for me.