Using gl_FragCoord to create a hole in a quad - glsl

I am learning WebGL, and would like to do the following:
Create a 3D quad with a square hole in it using a fragment shader.
It looks like I need to set gl_FragColor based on gl_FragCoord appropriately.
So should I:
a) Convert gl_FragCoord from window coordinates to model coordinates, do the appropriate geometry check, and set color.
OR
b) Somehow pass the hole information from the vertex shader to the fragment shader. Maybe use a texture coordinate. I am not clear on this part.
I am fuzzy about implementing either of the above, so I'd appreciate some coding hints on either.
My background is that of an OpenGL old timer who has not kept up with the new shading language paradigm, and is now trying to catch up...
Edit (27/03/2011):
I have been able to successfully implement the above based on the tex coord hint. I've written up this example at the link below:
Quads with holes - example

The easiest way would be with texture coords. Simply supply the coords as an extra attribute array, then pass them through to the fragment shader using a varying. The shaders should contain something like:
vertex:
attribute vec3 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main(){
    vTexCoord = aTexCoord;
    // ... set gl_Position as usual
}
fragment:
varying vec2 vTexCoord;
void main(){
    if(vTexCoord.x > {lower x limit} && vTexCoord.x < {upper x limit} &&
       vTexCoord.y > {lower y limit} && vTexCoord.y < {upper y limit}){
        discard; // this tells the GPU to discard the fragment entirely
    }
    // ... shade the rest of the quad as usual
}
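For a concrete version, here is a complete WebGL (GLSL ES 1.00) sketch. The uMVPMatrix uniform, the solid red fill color, and the 0.25 to 0.75 hole bounds are illustrative assumptions, not values from the answer above:

// Vertex shader (sketch)
attribute vec3 aPosition;
attribute vec2 aTexCoord;
uniform mat4 uMVPMatrix; // assumed model-view-projection uniform
varying vec2 vTexCoord;
void main() {
    vTexCoord = aTexCoord;
    gl_Position = uMVPMatrix * vec4(aPosition, 1.0);
}

// Fragment shader (sketch)
precision mediump float; // required in WebGL fragment shaders
varying vec2 vTexCoord;
void main() {
    // Cut a centered square hole out of the middle of the quad
    if (vTexCoord.x > 0.25 && vTexCoord.x < 0.75 &&
        vTexCoord.y > 0.25 && vTexCoord.y < 0.75) {
        discard;
    }
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // solid red everywhere else
}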

Related

How to transfer colors from one UV unfolding to another UV unfolding programmatically?

As seen from the figure, assume a model has two UV unfoldings, i.e., UV-1 and UV-2. I then ask an artist to paint the model based on UV-1 and get texture map 1. How can I transfer the colors from UV-1 to UV-2 programmatically (e.g., in Python)? One method I know is baking texture map 1 into vertex colors and then rendering the vertex colors to UV-2, but this method loses some color detail. So how can I do it?
Render your model on Texture Map 2 using UV-2 coordinates for vertex positions and UV-1 coordinates interpolated across the triangles. In the fragment shader use the interpolated UV-1 coordinates to sample Texture Map 1. This way you're limited only by the resolution of the texture maps, not by the resolution of the model.
EDIT: Vertex shader:
#version 330 core
in vec2 UV1;
in vec2 UV2;
out vec2 fUV1;
void main() {
    // Map UV2 from [0,1] texture space to [-1,1] clip space
    gl_Position = vec4(UV2 * 2.0 - 1.0, 0.0, 1.0);
    fUV1 = UV1;
}
Fragment shader:
#version 330 core
in vec2 fUV1;
uniform sampler2D TEX1;
out vec4 OUT;
void main() {
    OUT = texture(TEX1, fUV1);
}

OpenGL / GLSL varying not interpolated across GL_QUAD

I am using GLSL to render a basic cube (made from GL_QUADS surfaces). I would like to pass the gl_Vertex content from the vertex shader into the fragment shader. Everything works if I use gl_FrontColor (vertex shader) and gl_Color (fragment shader) for this, but it doesn't work when I use a plain varying (see code & image below). It appears the varying is not interpolated across the surface for some reason. Any idea what could cause this in OpenGL?
glShadeModel is set to GL_SMOOTH - I can't think of anything else that could cause this effect right now.
Vertex Shader:
#version 120
varying vec4 frontSideValue;
void main() {
    frontSideValue = gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; // the usual transform (abbreviated as "transformPos" in the original post)
}
Fragment Shader:
#version 120
varying vec4 frontSideValue;
void main() {
    gl_FragColor = frontSideValue;
}
The result looks exactly as if you were using color values outside the range [0,1]. You are basically using the untransformed vertex position as the color, which can be well outside that range. Your cube seems to be centered around the origin, so the unsharp band you see is the small transition region where the values actually fall within [0,1].
With the built-in gl_FrontColor, the value seems to get clamped before the interpolation.
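A minimal fix, assuming the cube spans [-1,1] along each axis (an assumption about the model, not something stated in the question), is to remap the position into [0,1] before writing the varying:

#version 120
varying vec4 frontSideValue;
void main() {
    // Remap [-1,1] positions to [0,1] so they are valid, unclamped colors
    frontSideValue = gl_Vertex * 0.5 + 0.5;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}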

Changing color of fragment

I have written a fragment shader in which I would like to change the color of a fragment. For example, if the color it receives is black, it should change it to blue.
This is the shader that I am using:
uniform sampler2D mytex;
layout (pixel_center_integer) in vec4 gl_FragCoord;
uniform sampler2D texture1;
void main ()
{
    ivec2 screenpos = ivec2 (gl_FragCoord.xy);
    vec4 color = texelFetch (mytex, screenpos, 0);
    if (color == vec4 (0.0,0.0,0.0,1.0)) {
        color = (0.0,0.0,0.0,0.0);
    }
    gl_FragColor = texture2D (texture1, gl_TexCoord[0].st);
}
And here is the log that I am getting from it:
WARNING: -1:65535: 'GL_ARB_explicit_attrib_location' : extension is not available in current GLSL version
WARNING: 0:1: 'texelFetch' : function is not available in current GLSL version
I am aware of the warnings, but shouldn't it compile anyway?
The shader is not doing what I would like it to do; can someone explain why?
For one thing, you are using functions that are not available in your GLSL implementation. The result of calling these will be undefined.
However, the kicker here is that gl_FragColor has absolutely NOTHING to do with the value of color in this shader. So even if your texelFetch (...) logic actually did work correctly, changing the value of color does nothing to the final output. A smart compiler will see this as a no-op and effectively strip your shader down to this:
uniform sampler2D texture1;
void main ()
{
    gl_FragColor = texture2D (texture1, gl_TexCoord[0].st);
}
If that were not enough, texelFetch (...) is completely unnecessary in this shader. If you want to look up the texel that corresponds to the current fragment, and the texture has the same dimensions as the viewport you are drawing into, you can use an ordinary lookup with gl_FragCoord divided by the texture's dimensions, e.g. texture2D (mytex, gl_FragCoord.xy / vec2 (textureSize (mytex, 0))). This works because the default behaviour in GLSL is to have gl_FragCoord supply the coordinate of the fragment's center (x+0.5, y+0.5), which is also the center of the corresponding texel in your texture (if it is the same resolution), so the lookup lands exactly on a texel center and texture filtering will not alter your sampled result.
texelFetch (...) lets you fetch an explicit texel in a texture without using normalized coordinates, it is sort of like a "grownup" rectangle texture :) It is generally useful if you are using a multisample texture and want a specific sample, or if you want to bypass texture filtering (which includes mipmap level selection). In this case, it is not needed at all.
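As a quick side-by-side illustration (a sketch, assuming a texture the same size as the viewport):

#version 150
uniform sampler2D tex;
out vec4 frag_color;
void main ()
{
    // Explicit integer texel address: no normalization, no filtering
    vec4 a = texelFetch (tex, ivec2 (gl_FragCoord.xy), 0);
    // Equivalent normalized lookup when the texture matches the viewport size
    vec4 b = texture (tex, gl_FragCoord.xy / vec2 (textureSize (tex, 0)));
    frag_color = a; // b holds the same texel here
}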
This is probably what you really want (OpenGL 3.2):
#version 150
uniform sampler2D mytex;
uniform sampler2D texture1;
layout (location=0) out vec4 frag_color;
layout (location=1) out vec4 mytex_color;
void main ()
{
    // Normalize gl_FragCoord by the texture's size to get [0,1] coordinates
    mytex_color = texture2D (mytex, gl_FragCoord.xy / vec2 (textureSize (mytex, 0)));
    // This is not black->blue like you explained in your question...
    // ... This is generally opaque->transparent, assuming 4th component = alpha
    if (mytex_color == vec4 (0.0,0.0,0.0,1.0)) {
        mytex_color = vec4 (0.0);
    }
    frag_color = texture2D (texture1, gl_TexCoord[0].st);
}
In older GLSL versions, you will have to use glBindFragDataLocation (...) and set the data locations manually or use gl_FragData[n] instead of out variables.
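For instance, a GLSL 1.20 sketch of the same two outputs via gl_FragData (the viewport uniform for the screen size in pixels is an assumption; the indices correspond to the color attachments chosen with glDrawBuffers):

#version 120
uniform sampler2D mytex;
uniform sampler2D texture1;
uniform vec2 viewport; // assumed: viewport size in pixels
void main ()
{
    vec4 c = texture2D (mytex, gl_FragCoord.xy / viewport);
    gl_FragData[1] = (c == vec4 (0.0, 0.0, 0.0, 1.0)) ? vec4 (0.0) : c;
    gl_FragData[0] = texture2D (texture1, gl_TexCoord[0].st);
}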
Now the real problem here is that you seem to want to change the color of the texture you are sampling from. That will not work; at best you will have to use two fragment data outputs. Writing into the same texture you are sampling from can be done under some very controlled circumstances, but generally what you would do is ping-pong between textures. In other words, you fetch from one texture, write to another, and in all subsequent render passes the reference to the original texture is swapped with the one you just wrote to.
See "Fragment Data Location" for more information on Multiple Render Target drawing.

OpenGL 3.2 : cast right shadows by transparent textures

I can't seem to find any information on the Web about fixing shadow casting by objects whose textures have alpha != 1.
Is there any way to implement something like a "per-fragment depth test" rather than a per-vertex one, so I could simply discard a fragment from the shadow map when the corresponding texel is transparent? In theory, this could also make shadow mapping more accurate.
EDIT
Well, maybe the idea I gave above was a terrible one, but all I want is to tell the shaders that if a texel has alpha < 1, there's no need to shadow the things behind it. I had guessed the depth texture requires only vertex information, which is why every shadow-mapping tutorial has a minimal vertex shader and an empty fragment shader, and why nothing happens when I try to do something in the fragment shader.
Anyway, what is the usual approach to fixing shadow casting by partly transparent objects?
EDIT2
I've modified my shaders, and now they discard every fragment if even one has transparency o_O. So those objects now cast no shadows at all (opaque ones still do)... Please have a look at the shaders:
// Vertex Shader
uniform mat4 orthoView;
in vec4 in_Position;
in vec2 in_TextureCoord;
out vec2 TC;
void main(void) {
    TC = in_TextureCoord;
    gl_Position = orthoView * in_Position;
}

// Fragment Shader
uniform sampler2D texture;
in vec2 TC;
void main(void) {
    vec4 texel = texture2D(texture, TC);
    if (texel.a < 0.4)
        discard;
}
And it's strange because I use the same trick with the same textures in my other shaders and it works... any ideas?
If you use discard in the fragment shader, then no depth information will be recorded for that fragment. So in your fragment shader, simply add a test to see whether the texture is transparent, and if so discard that fragment.

Model-to-screen coordinate confusion in GLSL shader

In my shader I already have a special variable that contains the entire content of the previously rendered screen. It is stored in
uniform sampler2D _GrabTexture;
Its content should be the previously rendered screen.
(As a side note, I'm using Unity's GrabPass{} to get the entire screen. Also please ignore Unity's GUI)
Now how can I render one more pass, using _GrabTexture as the texture for my plane model, so that the result is exactly the same as my _GrabTexture?
(The point is that I can apply effects like blur, sharpen, etc. to that screen texture before rendering the final pass, so the plane's texture comes out stylized.)
I'm trying this in the final pass. The variable is declared in both the vertex and fragment shaders.
uniform sampler2D _GrabTexture;
varying vec4 v_Position;
Vertex Shader :
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    v_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Store the screen coordinate of each of the model's vertices in a varying, to be used in the fragment shader.
Fragment Shader :
void main()
{
    gl_FragColor = texture2D(_GrabTexture, vec2(v_Position));
}
Now use that stored varying position to access the _GrabTexture screen texture. Since the texture covers the entire screen, my v_Position, which is already in screen coordinates, should fetch the right pixel.
But the result is:
As you can see, the plane's texture "sort of" shows the previously rendered screen, but the coordinates are not right. How can I fix it so the result is the same as the first image?
You're thinking too complicated. OpenGL tells you the on-screen fragment position in the fragment shader's built-in variable gl_FragCoord.
With GLSL 1.30 or newer you have texelFetch, which you can feed gl_FragCoord (converted to integer coordinates) as the source coordinate directly. Alternatively, translate gl_FragCoord into texture-space coordinates; see https://stackoverflow.com/a/5879551/524368
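A minimal sketch of the second option; the u_ScreenSize uniform for the viewport size in pixels is an assumption (in Unity it is available as _ScreenParams.xy):

uniform sampler2D _GrabTexture;
uniform vec2 u_ScreenSize; // assumed: viewport size in pixels
void main()
{
    // gl_FragCoord is in window pixels; normalize it to [0,1] texture space
    vec2 uv = gl_FragCoord.xy / u_ScreenSize;
    gl_FragColor = texture2D(_GrabTexture, uv);
}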