How does rendering to multiple textures work in modern OpenGL?

If I understand FBOs correctly, I can attach several 2D textures to different color attachment points. I'd like to use this capability to write out a few different types of data from a fragment shader (e.g. world position and normals). It looks like in ye olden days, this involved writing to gl_FragData at different indices, but gl_FragData doesn't exist anymore. How is this done now?

You can just add output variables to your fragment shader. Here is an example:
layout(location=0) out vec4 color;
layout(location=1) out vec3 normal;
layout(location=2) out vec3 position;
void main() {
    color = vec4(1,0,0,1);
    normal = vec3(0,1,0);
    position = vec3(1,2,3);
}
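On the application side, each of those outputs needs a corresponding color attachment on the framebuffer object, and glDrawBuffers maps output locations to attachments. A rough sketch in C++, assuming fbo, colorTex, normalTex and posTex have already been created and sized elsewhere (the names are only illustrative):
// Attach one texture per fragment shader output
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, normalTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, posTex, 0);
// Element i of this array receives whatever the shader writes to output location i
GLenum buffers[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, buffers);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle an incomplete framebuffer here
}
Note that the index into the glDrawBuffers array corresponds to the layout(location=N) qualifier in the fragment shader, not to the attachment point itself.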

As dari said, you can use the layout(location=0) specifier.
Another method to assign the location outside of the shader is to use:
glBindFragDataLocation(_program, 0, "color");
Then in the shader:
out vec4 color;
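One caveat: glBindFragDataLocation only takes effect when the program is linked, so it has to be called before glLinkProgram (or be followed by a re-link). A minimal sketch, assuming _program already has its shaders attached:
glBindFragDataLocation(_program, 0, "color");
glLinkProgram(_program); // the binding is applied at link time
// Optional sanity check after linking:
GLint loc = glGetFragDataLocation(_program, "color"); // should be 0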
See this for a more thorough discussion of output buffers:
https://www.opengl.org/wiki/Fragment_Shader#Output_buffers

Related

flat shading in webGL

I'm trying to implement flat shading in WebGL.
I know that the varying keyword in the vertex shader will interpolate the value and pass it to the fragment shader.
I'm trying to disable interpolation, and I found that the flat keyword can do this, but it seems it cannot be used in WebGL?
flat varying vec4 fragColor;
I always get the error: Illegal use of reserved word 'flat'
Check out WebGL 2. Flat shading is supported there.
For the vertex shader:
#version 300 es
in vec4 vPos; //vertex position from application
flat out vec4 vClr; //color sent to fragment shader
void main(){
    gl_Position = vPos;
    vClr = gl_Position; //for now just using the position as color
}//end main
For the fragment shader:
#version 300 es
precision mediump float;
flat in vec4 vClr;
out vec4 fragColor;
void main(){
    fragColor = vClr;
}//end main
I think 'flat' is not supported by the version of GLSL used in WebGL 1 (GLSL ES 1.00). If you want flat shading, there are several options:
1) replicate the polygon's normal in each vertex. It is the simplest solution, but I find it a bit unsatisfactory to duplicate data.
2) in the vertex shader, transform the vertex into view coordinates, and in the fragment shader, compute the normal using the dFdx() and dFdy() derivative functions. These functions are provided by the GL_OES_standard_derivatives extension (you need to check whether it is supported by the GPU before using it); most GPUs, including the ones in smartphones, support it.
My vertex shader is as follows:
struct VSUniformState {
    mat4 modelviewprojection_matrix;
    mat4 modelview_matrix;
};
uniform VSUniformState GLUP_VS;
attribute vec4 vertex_in;
varying vec3 vertex_view_space;
void main() {
    vertex_view_space = (GLUP_VS.modelview_matrix * vertex_in).xyz;
    gl_Position = GLUP_VS.modelviewprojection_matrix * vertex_in;
}
and in the associated fragment shader:
#extension GL_OES_standard_derivatives : enable
varying vec3 vertex_view_space;
...
vec3 U = dFdx(vertex_view_space);
vec3 V = dFdy(vertex_view_space);
N = normalize(cross(U,V));
... do the lighting with N
I like this solution because it makes the setup code simpler. A drawback may be that it gives more work to the fragment shader (but with today's GPUs it should not be a problem). If performance is an issue, it may be a good idea to measure it.
3) another possibility is to have a geometry shader (if supported) that computes the normals. In general it is slower (but again, it may be a good idea to measure performance, it may depend on the specific GPU).
See also answers to this question:
How to get flat normals on a cube
My implementation is available here:
http://alice.loria.fr/software/geogram/doc/html/index.html
Some online WebGL demos are here (converted from C++ to JavaScript using Emscripten):
http://homepages.loria.fr/BLevy/GEOGRAM/

DirectX11 / OpenGL only renders half of the texture

This is how it should look. It uses the same vertices/UV coordinates that are used for DX11 and OpenGL. This scene was rendered in DirectX 10.
This is how it looks in DirectX 11 and OpenGL.
I don't know how this can happen. I am using the same code on top for both DX10 and DX11, and they both handle things very similarly. Do you have an idea what the problem may be and how to fix it?
I can send code if needed.
Edit: also tried using another texture, and changed the transparent part of the texture to red.
Fragment Shader GLSL
#version 330 core
in vec2 UV;
in vec3 Color;
uniform sampler2D Diffuse;
void main()
{
    //color = texture2D( Diffuse, UV ).rgb;
    gl_FragColor = texture2D( Diffuse, UV );
    //gl_FragColor = vec4(Color,1);
}
Vertex Shader GLSL
#version 330 core
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal;
uniform mat4 Projection;
uniform mat4 View;
uniform mat4 World;
out vec2 UV;
out vec3 Color;
void main()
{
    mat4 MVP = Projection * View * World;
    gl_Position = MVP * vec4(vertexPosition,1);
    UV = vertexUV;
    Color = vertexColor;
}
Quickly said, it looks like you are using back-face culling (which is good), and the faces on the other side of your model are wound the wrong way. You can check whether this is the problem by turning back-face culling off (OpenGL: glDisable(GL_CULL_FACE)).
The real fix (if this is the problem) is to have correct winding of faces, usually counter-clockwise. This depends on where the model comes from: if you generate it yourself, correct the winding in your model-generation routine; model files created by 3D modeling software usually already have correct face winding.
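If you want to experiment with this, the relevant OpenGL state is sketched below (the D3D11 equivalent lives in the rasterizer state's CullMode and FrontCounterClockwise fields):
// To confirm the diagnosis, temporarily disable culling:
glDisable(GL_CULL_FACE);
// Normal setup once the winding is correct:
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);   // cull back faces
glFrontFace(GL_CCW);   // counter-clockwise faces are front-facing (the default)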
This is just a guess, but are you telling the system the correct number of polygons to draw? Calls like glBufferData() take the size in bytes of the data, not the number of vertices or polygons. (Maybe they should have named the parameter numBytes instead of size?) Also, the size has to cover all of the data: if you have colors, normals, texture coordinates and vertices interleaved, it needs to include the size of all of that.
This is made more confusing by the fact that glDrawElements() and friends take a count of indices (or vertices, in the case of glDrawArrays()) as their size argument. The argument is named count, but it is not obvious that it is an index count, not a polygon count.
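To illustrate the difference, here is a rough sketch with an interleaved layout; Vertex, BuildMesh and BuildIndices are hypothetical placeholders for your own data and loading code:
struct Vertex { float pos[3]; float normal[3]; float uv[2]; float color[3]; };
std::vector<Vertex>   vertices = BuildMesh();     // hypothetical helper
std::vector<GLushort> indices  = BuildIndices();  // hypothetical helper
// glBufferData wants a size in bytes that covers ALL interleaved attributes
// (the VBO / IBO are assumed to be bound to these targets already):
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), vertices.data(), GL_STATIC_DRAW);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLushort), indices.data(), GL_STATIC_DRAW);
// glDrawElements wants a count of indices (not bytes, not triangles):
glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_SHORT, nullptr);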
I found the error.
The reason is that I forgot to set the texture sampler state to Wrap/Repeat.
It was set to clamp, so the UV coordinates were clamped at 1.
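For reference, the OpenGL side of that fix is just the texture wrap mode (on the D3D11 side it is the AddressU/AddressV fields of the sampler description):
glBindTexture(GL_TEXTURE_2D, diffuseTex); // diffuseTex: the texture in question
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // was clamped, so UVs were limited to [0,1]
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);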
A few things that you could try:
Is the depth test enabled? It seems that the inner faces of the polygons from the 'other' side are being rendered over the polygons that are closer to the viewpoint. This could happen if the depth test is disabled. Enable it just in case (see the snippet below).
Is lighting enabled? If so, turn it off. Some flashes of white seem to appear in the rotating image; that could be because of incorrect normals.
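For the depth-test point above, the minimal OpenGL setup is (this assumes the window/context was created with a depth buffer):
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS); // the default comparison
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear depth every frame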
HTH

Strange and annoying GLSL error

My vertex shader looks as follows:
#version 120
uniform float m_thresh;
varying vec2 texCoord;
void main(void)
{
    gl_Position = ftransform();
    texCoord = gl_TexCoord[0].xy;
}
and my fragment shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
varying vec2 texCoord;
void main(void)
{
    vec4 grab = vec4(texture2D(grabTexture, texCoord.xy));
    vec3 colour = vec3(grab.xyz * m_thresh);
    gl_FragColor = vec4( colour, 0.5 );
}
Basically I am getting the error message "Error in shader -842150451 - 0<9> : error C7565: assignment to varying 'texCoord'"
But I have another shader which does the exact same thing, and I get no error when I compile that, and it works!!!
Any ideas what could be happening?
For starters, there is no sensible reason to construct a vec4 from texture2D (...). Texture functions in GLSL always return a vec4. Likewise, grab.xyz * m_thresh is always a vec3, because a scalar multiplied by a vector does not change the dimensions of the vector.
Now, here is where things get interesting... the gl_TexCoord [n] GLSL built-in you are using is actually a pre-declared varying. You should not be reading from this in a vertex shader, because it defines a vertex shader output / fragment shader input.
The appropriate vertex shader built-in variable in GLSL 1.2 for getting the texture coordinates for texture unit N is actually gl_MultiTexCoord<N>.
Thus, your vertex and fragment shaders should look like this:
Vertex Shader:
#version 120
//varying vec2 texCoord; // You actually do not need this
void main(void)
{
    gl_Position = ftransform();
    //texCoord = gl_MultiTexCoord0.st; // Same as comment above
    gl_TexCoord [0] = gl_MultiTexCoord0;
}
Fragment Shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
//varying vec2 texCoord;
void main(void)
{
    //vec4 grab = texture2D (grabTexture, texCoord.st);
    vec4 grab = texture2D (grabTexture, gl_TexCoord [0].st);
    vec3 colour = grab.xyz * m_thresh;
    gl_FragColor = vec4( colour, 0.5 );
}
Remember how I said gl_TexCoord [n] is a built-in varying? You can read/write to this instead of creating your own custom varying vec2 texCoord; in GLSL 1.2. I commented out the lines that used a custom varying to show you what I meant.
The OpenGL® Shading Language (1.2) - 7.6 Varying Variables - pp. 53
The following built-in varying variables are available to write to in a vertex shader. A particular one should be written to if any functionality in a corresponding fragment shader or fixed pipeline uses it or state derived from it.
[...]
varying vec4 gl_TexCoord[]; // at most will be gl_MaxTextureCoords
The OpenGL® Shading Language (1.2) - 7.3 Vertex Shader Built-In Attributes - pp. 49
The following attribute names are built into the OpenGL vertex language and can be used from within a vertex shader to access the current values of attributes declared by OpenGL.
[...]
attribute vec4 gl_MultiTexCoord0;
The bottom line is that gl_MultiTexCoord<N> defines vertex attributes (vertex shader input), gl_TexCoord [n] defines a varying (vertex shader output, fragment shader input). It is also worth mentioning that these are not available in newer (core) versions of GLSL.
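For completeness, this is roughly where gl_MultiTexCoord0 gets its data from on the application side in the old fixed-function/compatibility pipeline; positions, texcoords and vertexCount are placeholders for your own data:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, positions);   // 3 floats per vertex -> gl_Vertex
glTexCoordPointer(2, GL_FLOAT, 0, texcoords); // 2 floats per vertex -> gl_MultiTexCoord0
glDrawArrays(GL_TRIANGLES, 0, vertexCount);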

Changing color of fragment

I have written a fragment shader in which I would like to change the color of the fragment. For example, if the color it receives is black, I would like it to change it to blue.
This is the shader that I am using:
uniform sampler2D mytex;
layout (pixel_center_integer) in vec4 gl_FragCoord;
uniform sampler2D texture1;
void main ()
{
    ivec2 screenpos = ivec2 (gl_FragCoord.xy);
    vec4 color = texelFetch (mytex, screenpos, 0);
    if (color == vec4 (0.0,0.0,0.0,1.0)) {
        color = (0.0,0.0,0.0,0.0);
    }
    gl_FragColor = texture2D (texture1, gl_TexCoord[0].st);
}
And here is the log that I am getting from it:
WARNING: -1:65535: 'GL_ARB_explicit_attrib_location' : extension is not available in current GLSL version
WARNING: 0:1: 'texelFetch' : function is not available in current GLSL version
I am aware of the warnings, but shouldn't it compile anyway?
The shader is not doing what I would like it to do; can someone explain why?
For one thing, you are using functions that are not available in your GLSL implementation. The result of calling these will be undefined.
However, the kicker here is that gl_FragColor has absolutely NOTHING to do with the value of color in this shader. So even if your texelFetch (...) logic actually did work correctly, changing the value of color does nothing to the final output. A smart compiler will see this as a no-op and effectively strip your shader down to this:
uniform sampler2D texture1;
void main ()
{
gl_FragColor = texture2D (texture1, gl_TexCoord[0].st);
}
If that were not enough, texelFetch (...) is completely unnecessary in this shader. If you want to look up the texel that corresponds to the current fragment and the texture has the same dimensions as the viewport you are drawing into, you can simply use texture2D (texture1, gl_FragCoord.xy / vec2 (textureSize (texture1, 0))), i.e. gl_FragCoord divided by the texture's dimensions to produce normalized coordinates. This works because the default behaviour in GLSL is for gl_FragCoord to supply the coordinate of the fragment's center (x+0.5, y+0.5), which is also the center of the corresponding texel in your texture (if it has the same resolution), so you can do a traditional texture lookup without worrying that texture filtering will alter the sampled result.
texelFetch (...) lets you fetch an explicit texel in a texture without using normalized coordinates, it is sort of like a "grownup" rectangle texture :) It is generally useful if you are using a multisample texture and want a specific sample, or if you want to bypass texture filtering (which includes mipmap level selection). In this case, it is not needed at all.
This is probably what you really want (OpenGL 3.2):
#version 150
uniform sampler2D mytex;
uniform sampler2D texture1;
layout (location=0) out vec4 frag_color;
layout (location=1) out vec4 mytex_color;
void main ()
{
    // gl_FragCoord is in pixels; divide by the texture size to get normalized coordinates.
    mytex_color = texture2D (mytex, gl_FragCoord.xy / vec2 (textureSize (mytex, 0)));
    // This is not black->blue like you explained in your question...
    // ... This is generally opaque->transparent, assuming 4th component = alpha
    if (mytex_color == vec4 (0.0,0.0,0.0,1.0)) {
        mytex_color = vec4 (0.0);
    }
    frag_color = texture2D (texture1, gl_TexCoord[0].st);
}
In older GLSL versions, you will have to use glBindFragDataLocation (...) and set the data locations manually or use gl_FragData[n] instead of out variables.
Now the real problem here is that you seem to want to change the color of the texture you are sampling from. That will not work; at best you will have to use two fragment data outputs. Writing into the same texture you are sampling from can be done under some very controlled circumstances, but generally what you would do is ping-pong between textures. In other words, you fetch from one texture, write to another texture, and all subsequent render passes that reference the original texture should instead use the one you just wrote to.
See "Fragment Data Location" for more information on Multiple Render Target drawing.

OpenGL 3.2 : cast right shadows by transparent textures

I can't seem to find any information on the Web about fixing shadow casting for objects whose textures have alpha != 1.
Is there any way to implement something like a "per-fragment depth test" rather than a "per-vertex" one, so I could just discard a fragment from the shadow map if the corresponding texel is transparent? Also, in theory, it could make shadow mapping more accurate.
EDIT
Well, maybe the idea above was a terrible one, but all I want is to tell the shaders that if a texel has alpha < 1, there is no need to shadow things behind that texel. I guess the depth texture requires only vertex information; that's why every shadow mapping tutorial has a minimal vertex shader and an empty fragment shader, and nothing happens when I try to do something in the fragment shader.
Anyway, what is the usual approach to fixing shadow casting by partly transparent objects?
EDIT2
I've modified my shaders and now it discards every fragment if at least one has transparency o_O. So those objects now don't cast any shadows at all (but opaque ones do)... Please have a look at the shaders:
// Vertex Shader
uniform mat4 orthoView;
in vec4 in_Position;
in vec2 in_TextureCoord;
out vec2 TC;
void main(void) {
    TC = in_TextureCoord;
    gl_Position = orthoView * in_Position;
}
//Fragment Shader
uniform sampler2D texture;
in vec2 TC;
void main(void) {
    vec4 texel = texture2D(texture, TC);
    if (texel.a < 0.4)
        discard;
}
And it's strange because I use the same trick with the same textures in my other shaders and it works... any ideas?
If you use discard in the fragment shader, then no depth information will be recorded for that fragment. So in your fragment shader, simply add a test to see whether the texture is transparent, and if so discard that fragment.
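For completeness, a rough sketch of what the application side of such an alpha-tested shadow pass might look like; shadowFBO, shadowProgram, shadowMapSize and diffuseTexture are assumed to be set up elsewhere and are only illustrative:
// Depth-only pass into the shadow map. There is no color attachment, but the fragment
// shader still runs, so 'discard' can keep transparent texels out of the depth buffer.
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO); // FBO with only a depth texture attached
glViewport(0, 0, shadowMapSize, shadowMapSize);
glDrawBuffer(GL_NONE);                        // nothing to write color into
glClear(GL_DEPTH_BUFFER_BIT);
glUseProgram(shadowProgram);                  // the vertex/fragment pair shown above
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseTexture); // the texture whose alpha drives the discard
glUniform1i(glGetUniformLocation(shadowProgram, "texture"), 0);
// ... draw the shadow casters ...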