I am trying to figure out how to deal with materials that may or may not have a normal map; if there is no normal map, I want to tell the shader to use the vertex normal instead. The code right now looks like this:
// retrieve the tangent-space normal from the normal map and remap it from [0, 1] to [-1, 1]
vec3 n = texture(normalMap, uv).rgb * 2.0 - 1.0;
// transform it out of tangent space with the TBN matrix
gNormal = vec4(normalize(TBN * n), 1.0);
// TODO: figure out a way to toggle normal mapping
//gNormal = vec4(normalize(normal), 1.0);
The most common solution is procedurally generating shaders and switching them on the fly, but that is a complex topic in itself. Are there any other options besides passing in a uniform bool?
Another option is to always use a normal map. The simplest normal map is a 1x1 texture containing a single normal vector, for instance (0, 0, 1); in tangent space that is exactly the surface normal, so the TBN transform reproduces the plain vertex normal.
With this solution, you do not need any branching in the shader.
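A minimal host-side sketch of creating such a fallback texture in plain OpenGL (the name defaultNormalMap and the filtering choices are just assumptions; bind it wherever a material has no normal map of its own):
// 1x1 "flat" normal map: RGB (128, 128, 255) decodes to roughly (0, 0, 1)
// after the usual n * 2.0 - 1.0 remapping in the shader.
GLuint defaultNormalMap;
glGenTextures(1, &defaultNormalMap);
glBindTexture(GL_TEXTURE_2D, defaultNormalMap);
const unsigned char flatNormal[3] = { 128, 128, 255 };
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // the row is 3 bytes, so drop the default 4-byte alignment
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 1, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, flatNormal);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
With this texture bound, the normal-mapping code in the question effectively outputs the interpolated vertex normal, so no uniform bool or shader switch is needed.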
I'm trying to implement vertex shader code to achieve the "billboard" behaviour on a given vertex mesh. What I want is to define the mesh normally (like a 3D object) and then have it always face the camera. I also need it to always have the same size on screen. These two "effects" should happen at the same time.
The only difference in my case is that instead of a 2D bar, I want to have a 3D object.
To do so, I'm trying to follow alternative 3 in this tutorial (the same one the images are taken from), but I can't figure out many of the assumptions they made (probably due to my lack of experience in graphics and OpenGL).
My shader applies the common transformation stack to vertices, i.e.:
gl_Position = project * view * model * position;
Where position is the input attribute with the vertex location in world space. I want to be able to apply model transformations (such as translation, scale and rotation) to modify the orientation of the object with respect to the camera. I understand the concepts explained in the tutorial, but I can't seem to understand how to apply them in my case.
What I've tried is the following (extracted from this answer, and similar to the tutorial):
uniform vec4 billbrd_pos;
...
gl_Position = project * (view * model * billbrd_pos + vec4(position.xy, 0, 0));
But what I get is a shape whose size is bigger when it is closer to the camera and smaller when it is farther away. Did I forget something?
Is it possible to do this in the vertex shader?
uniform vec4 billbrd_pos;
...
// billboard anchor point in view space
vec4 view_pos = view * model * billbrd_pos;
// distance from the camera; scaling the mesh offsets by it cancels the later perspective divide
float dist = -view_pos.z;
gl_Position = project * (view_pos + vec4(position.xy * dist, 0, 0));
That way the fragment depths are still correct (at billbrd_pos depth) and you don't have to keep track of the screen's aspect ratio (as the linked tutorial does). It's dependent on the projection matrix though.
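For completeness, a possible host-side setup (a sketch; program is assumed to be your linked shader program, and the billboard anchor is taken to be the object's local origin):
// Upload the billboard anchor once per object; the shader moves it into view space.
GLint billbrdPosLoc = glGetUniformLocation(program, "billbrd_pos");
glUseProgram(program);
glUniform4f(billbrdPosLoc, 0.0f, 0.0f, 0.0f, 1.0f);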
I would like to build a vertex shader with one texture map but multiple UV sets.
So far, I have stored the different UV sets in FaceVertexUvs[0] and FaceVertexUvs[1].
However, in the vertex shader, I can only access the first UV set, using "vUv".
varying vec2 vUv;
void main() {
(...)
vUv = uv;
(...)
}
It seems like 'uv' is something magical from three.js. Does something like uv2 or uv3 exist? I need a way to access the UV mapping stored in FaceVertexUvs[1].
My goal is to build a house whose walls use one part of a texture and whose windows use another part of the same texture, and blend both.
Is this the correct way to do it? In which part of the three.js source code is that magical 'uv' defined?
To be more specific, here's the screenshot:
https://drive.google.com/file/d/0B_o-Ym0jhIqmY2JJNmhSeGpyanM/edit?usp=sharing
After debugging for about 3 days, I really have no idea. Those black lines and strange fractal black segments just drive me nuts. The geometries are rendered by forward rendering, blending layer by layer for each light I add.
My first guess was downloading the newest graphics card driver (I'm using a GTX 660M), but that didn't solve it. Could VSync be an issue here? (I'm rendering in a window rather than in full-screen mode.) Or what is the most likely cause of this kind of trouble?
My code is like this:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(false);
glDepthFunc(GL_EQUAL);
/*loop here*/
/*draw for each light I had*/
glDepthFunc(GL_LESS);
glDepthMask(true);
glDisable(GL_BLEND);
One thing I've noticed looking at your lighting vertex shader code:
void main()
{
gl_Position = projectionMatrix * vec4(position, 1.0);
texCoord0 = texCoord;
normal0 = (normalMatrix * vec4(normal, 0)).xyz;
modelViewPos0 = (modelViewMatrix * vec4(position, 1)).xyz;
}
You are applying the projection matrix directly to the vertex position, which I'm assuming is in object space.
Try setting it to:
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
And we can work from there.
This answer is slightly speculative, but based on the symptoms and the code you posted, I suspect a precision problem. The rendering code you linked looks like this in shortened form:
useShader(FWD_AMBIENT);
part.render();
glDepthMask(GL_FALSE);
glDepthFunc(GL_EQUAL);
for (Light light : lights) {
useShader(light.getShaderType());
part.render();
}
So you're rendering the same thing multiple times, with different shaders, and relying on the resulting pixels to end up with the same depth value (the depth comparison function is GL_EQUAL). This is not a safe assumption. Quote from the GLSL spec:
In this section, variance refers to the possibility of getting different values from the same expression in different programs. For example, say two vertex shaders, in different programs, each set gl_Position with the same expression in both shaders, and the input values into that expression are the same when both shaders run. It is possible, due to independent compilation of the two shaders, that the values assigned to gl_Position are not exactly the same when the two shaders run. In this example, this can cause problems with alignment of geometry in a multi-pass algorithm.
I copied the whole paragraph because the example they are using sounds like an exact description of what you are doing.
To prevent this from happening, you can declare your out variables as invariant. In each of your vertex shaders that you use for the multi-pass rendering, add this line:
invariant gl_Position;
This guarantees that the outputs are identical if all the inputs are the same. To meet this condition, you should also make sure that you pass exactly the same transformation matrix into both shaders, and of course use the same vertex coordinates.
In my OpenGL application, I am using gluLookAt() for transforming my camera. I then have two different render functions; one uses primitive rendering (glBegin()/glEnd()) to render a triangle.
glBegin(GL_TRIANGLES);
glVertex3f(0.25, -0.25, 0.5);
glVertex3f(-0.25, -0.25, 0.5);
glVertex3f(0.25, 0.25, 0.5);
glEnd();
The second rendering function uses a shader to display the triangle using the same coordinates and is called with the function glDrawArrays(GL_TRIANGLES, 0, 3). shader.vert is shown below:
#version 430 core
void main()
{
const vec4 verts[3] = vec4[3](vec4(0.25, -0.25, 0.5, 1),
vec4(-0.25, -0.25, 0.5, 1),
vec4(0.25, 0.25, 0.5, 1));
gl_Position = verts[gl_VertexID];
}
Now here is my problem: if I move the camera around using the primitive rendering for the triangle, I see the triangle from different angles, as one would expect. When I use the shader rendering function, the triangle remains stationary. Clearly I am missing something about world coordinates and how they relate to objects rendered with shaders. Could someone point me in the right direction?
If you do not have an active shader program, you're using what is called the "fixed pipeline". The fixed pipeline performs rendering based on numerous attributes you set with OpenGL API calls. For example, you specify what transformations you want to apply. You specify material and light attributes that control the lighting of your geometry. Applying these attributes is then handled by OpenGL.
Once you use your own shader program, you're not using the fixed pipeline anymore. This means that most of what the fixed pipeline previously handled for you has to be implemented in your shader code. Applying transformations is part of this. To apply your transformation matrix, you have to pass it into the shader, and apply it in your shader code.
The matrix is typically declared as a uniform variable in your vertex shader:
uniform mat4 ModelViewProj;
and then applied to your vertices:
gl_Position = ModelViewProj * verts[gl_VertexID];
In your code, you will then use calls like glGetUniformLocation(), glUniformMatrix4fv(), etc., to set up the matrix. Explaining this in full detail is somewhat beyond this answer, but you should be able to find it in many OpenGL tutorials online.
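As a rough host-side sketch (the uniform name ModelViewProj matches the declaration above; program and the float[16] array mvp are assumptions, and how you build the matrix on the CPU is up to your math library):
// Look the uniform up once after linking the program.
GLint mvpLoc = glGetUniformLocation(program, "ModelViewProj");

// Each frame: compute projection * view * model on the CPU and upload it before drawing.
glUseProgram(program);
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvp); // column-major float[16]
glDrawArrays(GL_TRIANGLES, 0, 3);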
As long as you're still using legacy functionality with the Compatibility Profile, there's actually a simpler way. You should be aware that this is deprecated, and not available in the OpenGL Core Profile. The Compatibility Profile makes certain fixed function attributes available to your shader code, including the transformation matrices. So you do not have to declare anything, and can simply write:
gl_Position = gl_ModelViewProjectionMatrix * verts[gl_VertexID];
I'm trying to wrap my head around shaders in GLSL, and I've found some useful resources and tutorials, but I keep running into a wall for something that ought to be fundamental and trivial: how does my fragment shader retrieve the color of the current fragment?
You set the final color by saying gl_FragColor = whatever, but apparently that's an output-only value. How do you get the original color of the input so you can perform calculations on it? That's got to be in a variable somewhere, but if anyone out there knows its name, they don't seem to have recorded it in any tutorial or documentation that I've run across so far, and it's driving me up the wall.
The vertex shader receives gl_Color and gl_SecondaryColor as vertex attributes. It also has four varying variables it can write values to: gl_FrontColor, gl_FrontSecondaryColor, gl_BackColor, and gl_BackSecondaryColor. If you want to pass the original colors straight through, you'd do something like:
gl_FrontColor = gl_Color;
gl_FrontSecondaryColor = gl_SecondaryColor;
gl_BackColor = gl_Color;
gl_BackSecondaryColor = gl_SecondaryColor;
Fixed functionality in the pipeline following the vertex shader will then clamp these to the range [0..1], and figure out whether the vertex is front-facing or back-facing. It will then interpolate the chosen (front or back) color like usual. The fragment shader will then receive the chosen, clamped, interpolated colors as gl_Color and gl_SecondaryColor.
For example, if you drew the standard "death triangle" like:
glBegin(GL_TRIANGLES);
glColor3f(0.0f, 0.0f, 1.0f);
glVertex3f(-1.0f, 0.0f, -1.0f);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex3f(1.0f, 0.0f, -1.0f);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex3d(0.0, -1.0, -1.0);
glEnd();
Then a vertex shader like this:
void main(void) {
gl_Position = ftransform();
gl_FrontColor = gl_Color;
}
with a fragment shader like this:
void main() {
gl_FragColor = gl_Color;
}
will transmit the colors through, just like if you were using the fixed-functionality pipeline.
If you want to do multi-pass rendering, i.e. if you have rendered to the framebuffer and want to do a second render pass that uses the previous rendering, then the answer is:
Render the first pass to a texture
Bind this texture for the second pass
Access the previously rendered pixel in the shader
Shader code for step 3 (OpenGL 3.2-level GLSL):
uniform sampler2D mytex; // texture with the previous render pass
layout(pixel_center_integer) in vec4 gl_FragCoord;
// will give the screen position of the current fragment
void main()
{
// convert fragment position to integers
ivec2 screenpos = ivec2(gl_FragCoord.xy);
// look up result from previous render pass in the texture
vec4 color = texelFetch(mytex, screenpos, 0);
// now use the value from the previous render pass ...
}
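Steps 1 and 2 are ordinary render-to-texture on the host side. A minimal sketch, assuming a single RGBA color attachment the size of the window (width, height, secondPassProgram and the draw calls are placeholders; a depth attachment is omitted for brevity):
// Step 1: create a texture and an FBO that renders into it.
GLuint colorTex, fbo;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

// Render the first pass into the FBO.
drawFirstPass(); // placeholder for your own draw calls

// Step 2: switch back to the default framebuffer and bind the result for the shader above.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, colorTex);
glUseProgram(secondPassProgram);
glUniform1i(glGetUniformLocation(secondPassProgram, "mytex"), 0);
drawSecondPass(); // placeholder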
Another method of processing a rendered image would be OpenCL with OpenGL/OpenCL interop. This allows more CPU-like computation.
If what you're calling "current value of the fragment" is the pixel color value that was in the render target before your fragment shader runs, then no, it is not available.
The main reason is that, at the time your fragment shader runs, that value is potentially not known yet. Fragment shaders run in parallel, potentially (depending on the hardware) affecting the same pixel, and a separate block, reading from some sort of FIFO, is usually responsible for merging those together later on. That merging is called "blending", and it is not part of the programmable pipeline yet. It is fixed function, but it does have a number of different ways to combine what your fragment shader generated with the previous color value of the pixel.
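For illustration, this is roughly how that fixed-function combination is configured from the host side (a sketch of additive blending, one of several possible blend modes):
// Add the fragment shader's output to whatever color is already in the framebuffer.
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE); // result = src + dst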
You need to sample the texture at the current pixel coordinates, something like this:
vec4 pixel_color = texture2D(tex, gl_TexCoord[0].xy);
Note: as I've seen, texture2D is deprecated in the GLSL 4.00 specification - just look for the similar texture...() fetch functions.
Also, sometimes it is better to supply your own pixel coordinates instead of gl_TexCoord[0].xy - in that case, write the vertex shader something like this:
varying vec2 texCoord;
void main(void)
{
gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0 );
texCoord = 0.5 * gl_Position.xy + vec2(0.5);
}
And in the fragment shader, use that texCoord variable instead of gl_TexCoord[0].xy.
Good luck.
The entire point of your fragment shader is to decide what the output color is. How you do that depends on what you are trying to do.
You might choose to set things up so that you get an interpolated color based on the output of the vertex shader, but a more common approach would be to perform a texture lookup in the fragment shader, using texture coordinates passed in from the vertex shader as interpolants. You would then modify the result of the texture lookup according to your chosen lighting calculations and whatever else your shader is meant to do, and then write it into gl_FragColor.
The GPU pipeline has access to the underlying pixel info immediately after the shaders run. If your material is transparent, the blending stage of the pipeline will combine all fragments.
Generally, objects are blended in the order in which they are added to a scene, unless they have been ordered by a z-buffering algorithm. You should add your opaque objects first, then carefully add your transparent objects in the order in which they should be blended.
For example, if you want a HUD overlay on your scene, you should just create a screen quad object with an appropriate transparent texture, and add this to your scene last.
Setting the SRC and DST blending functions for transparent objects gives you access to the previous blend in many different ways.
You can use the alpha property of your output color here to do really fancy blending. This is the most efficient way to access framebuffer outputs (pixels), since it works in a single pass (Fig. 1) of the GPU pipeline.
Fig. 1 - Single Pass
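For example, a typical configuration for a transparent HUD quad like the one described above might look like this (a sketch; drawHudQuad() is a placeholder, and the exact blend factors depend on the effect you want):
// Draw the opaque scene first, then the overlay with classic alpha blending.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // src.a decides how much of the old pixel survives
glDepthMask(GL_FALSE); // don't let the overlay overwrite the scene's depth
drawHudQuad();         // placeholder for the screen-quad draw call
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);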
If you really need multiple passes (Fig. 2), then you must direct the framebuffer output to a texture target rather than the screen, feed that target texture into the next pass, and so on, targeting the screen in the final pass. Each pass requires at least two context switches.
The extra copying and context switching will degrade rendering performance severely. Note that multi-threaded GPU pipelines are not much help here, since multi-pass rendering is inherently serialized.
Fig. 2 - Multi Pass
I have resorted to a verbal description with pipeline diagrams to avoid deprecation issues, since shading languages (Slang/GLSL) are subject to change.
Some say it cannot be done, but I say this works for me:
//Toggle blending in one sense, while always disabling it in the other.
void enableColorPassing(BOOL enable) {
//This will toggle blending - and what gl_FragColor is set to upon shader execution
enable ? glEnable(GL_BLEND) : glDisable(GL_BLEND);
//Tells gl - "When blending, change nothing"
glBlendFunc(GL_ONE, GL_ZERO);
}
After that call, gl_FragColor will equal the color buffer's clear color the first time the shader runs on each pixel, and the output of each run will be the new input on each successive run.
Well, at least it works for me.