I don't usually use a flat surface in OpenGL, but recently I've taken up making After Effects plugins, and it comes with a template called Glator which passes a VBO containing the UVs. However, I have learned from reading ShaderToy fragment shaders that, by passing the resolution of the billboard to the fragment shader as a uniform and doing this:
vec2 p = gl_FragCoord.st / resolution.xy;
you can generate a value which is the UV coordinate of the fragment on the flat surface. Am I right?
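If it helps to see the idea end to end, here is a minimal fragment shader along those lines; the resolution uniform and the output name are illustrative assumptions, not part of the Glator template:

#version 330 core

uniform vec2 resolution; // assumed: billboard size in pixels

out vec4 frag_color;

void main()
{
    // Normalize the window-space fragment position to [0,1] x [0,1].
    vec2 p = gl_FragCoord.st / resolution.xy;

    // Visualize the derived UVs as a color ramp to check them.
    frag_color = vec4(p, 0.0, 1.0);
}

Note that these derived coordinates only match the UVs in the VBO when the quad exactly fills the viewport.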
I'm having trouble with texture transparency in OpenGL. As you can see in the picture below, it doesn't quite work. It's worth noting that the black is actually the clear color I use to clear the screen.
I use the following code to implement blending:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Here's my fragment shader:
#version 330 core
in vec2 tex_coords;
out vec4 color;
uniform vec4 spritecolor;
uniform sampler2D image;
void main(void)
{
    color = spritecolor * texture(image, tex_coords);
}
Here is a screenshot of the scene in wireframe mode, in case it helps with the drawn vertices:
If anything else is needed, feel free to ask, I'll add it.
You have to do transparency sorting.
When a scene is drawn, the depth test (glDepthFunc) is usually set to GL_LESS. This causes fragments to be drawn only when they are in front of everything drawn so far.
To draw transparent objects correctly, you have to draw the opaque objects first. The transparent objects have to be drawn after that, sorted by decreasing distance to the camera position.
Draw the transparent object with the largest distance to the camera position first, and the transparent object with the smallest distance to the camera position last.
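As a rough illustration, a render loop might do the sorting like this. This is only a sketch; Object, position, and draw() are made-up stand-ins for whatever your scene actually uses:

#include <algorithm>
#include <vector>
#include <GL/gl.h>
#include <glm/glm.hpp>

struct Object {
    glm::vec3 position;   // stand-in: world-space center of the object
    bool transparent;
    // ... mesh, texture, shader, etc.
};

void draw(const Object& obj);  // stand-in for the actual draw call

void drawScene(std::vector<Object*>& objects, const glm::vec3& cameraPos) {
    // 1. Draw all opaque objects first (depth test and depth writes on).
    for (Object* obj : objects)
        if (!obj->transparent)
            draw(*obj);

    // 2. Collect the transparent objects and sort them back to front,
    //    i.e. largest camera distance first.
    std::vector<Object*> transparent;
    for (Object* obj : objects)
        if (obj->transparent)
            transparent.push_back(obj);

    std::sort(transparent.begin(), transparent.end(),
              [&](const Object* a, const Object* b) {
                  return glm::distance(a->position, cameraPos)
                       > glm::distance(b->position, cameraPos);
              });

    // 3. Draw them blended. Depth writes are commonly disabled here so
    //    imperfectly sorted transparent fragments are not discarded by
    //    the depth test.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);
    for (Object* obj : transparent)
        draw(*obj);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}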
See also the answers to the following questions:
OpenGL depth sorting
opengl z-sorting transparency
Fully transparent OpenGL model
I am trying to put a texture in only a part of a sphere.
I have a sphere representing the earth with its topography, and a terrain texture for a part of the globe, say a satellite map of Italy.
I want to show that terrain over the part of the sphere where Italy is.
I'm creating my sphere by drawing a set of triangle strips.
As far as I understand, if I want to use a texture I need to specify a texture coord for each vertex (glTexCoord2*). But I do not have a valid texture for all of them.
So how do I tell OpenGL to skip the texture for those vertices?
I'll assume you have two textures or a color attribute for the remainder of the sphere ("not Italy").
The easiest way to do this would be to create a texture that covers the whole sphere, but use the alpha channel. For example, use alpha=1 for "not Italy" and alpha=0 for "Italy". Then you could do something like this in your fragment shader (pseudo-code, I did not test anything):
...
uniform sampler2D extra_texture;
in vec2 texture_coords;
out vec3 final_color;
...
void main() {
    ...
    // Assume color1 to be the base color for the sphere; no matter how you get it (attribute/texture), it has at least 3 components.
    vec4 color2 = texture(extra_texture, texture_coords);
    final_color = mix(vec3(color2), vec3(color1), color2.a);
}
The colors in mix are combined as follows: mix(x, y, a) = x*(1-a) + y*a, done component-wise for vectors. So you can see that if alpha=1 ("not Italy"), color1 will be picked, and vice versa for alpha=0.
You could extend this to multiple layers using texture arrays or something similar, but I'd keep it simple 2-layer to begin with.
I'm trying to figure out how to make a semi-transparent 2D overlay over my 3D scene, reading the OpenGL SuperBible 5th edition for reference.
It has an example which overlays the OpenGL logo over a scene (in Chapter 7) using the texture target GL_TEXTURE_RECTANGLE, and a GLSL uniform type called sampler2DRect. The texture is supposed to be displayed in the fragment shader using the texture() command.
The example in this book uses many source files and I'm having a really hard time implementing it in a simple program, so I'm wondering if anyone could point me to a simpler example of the sampler2DRect.
I have no trouble with the part about switching to an orthographic projection; rather, when I try to load the texture, it just displays the surface in white. My code is getting really messy at this point, and I can't seem to pinpoint the problem, so I'd rather start over from scratch following a simpler example, if one is available anywhere.
P.S. I'm using SFML 2.0rc for loading the image file, in case it matters.
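For reference, a minimal sampler2DRect fragment shader might look like the sketch below. The key difference from sampler2D is that rectangle textures are addressed in pixel coordinates rather than [0,1]; the uniform and variable names here are just placeholders:

#version 150

uniform sampler2DRect overlay;  // bound to a GL_TEXTURE_RECTANGLE texture

in vec2 frag_uv;                // in pixels: [0,width] x [0,height]
out vec4 frag_color;

void main()
{
    // texture() on a sampler2DRect takes unnormalized pixel coordinates.
    frag_color = texture(overlay, frag_uv);
}

A common cause of an all-white surface is sampling an incomplete texture, e.g. a minification filter that expects mipmaps, which rectangle textures do not have.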
error C1101: ambiguous overloaded function reference "mul(mat4, vec3)"
(0) : mat3x4 mul(mat3x1, mat1x4)
(0) : mat3 mul(mat3x1, mat1x3)
(0) : mat3x2 mul(mat3x1, mat1x2)
(0) : mat3x1 mul(mat3x1, mat1)
(0) : mat2x4 mul(mat2x1, mat1x4)
.....
This is a very wordy way to tell you that there's no such function that multiplies a mat4 with a vec3. It's then listing all of the legal variants of mul.
Your dimensions must match when you multiply matrices; what you likely want is to multiply a mat4 with a vec4. If this is for your position coordinate, then add a 1.0 as the final value of the vector:
uniform mat4 mvpMatrix;
in vec3 position;

void main()
{
    gl_Position = mvpMatrix * vec4(position, 1.0);
}
In addition to Tim's answer, make sure that :
Your texture is bound: glBindTexture(GL_TEXTURE_2D, textureID);
The vertex shader outputs UV coords: out vec2 UV;
The fragment shader gets UV coords: in vec2 UV;
The VBO with the UVs exists, is enabled, bound, and set (glEnableVertexAttribArray, glBindBuffer, glVertexAttribPointer); see the sketch after this list
glEnable(GL_TEXTURE_2D)
And special items for rectangle textures:
The UV coords are in [0,width]x[0,height] (special case for rectangle textures).
Make sure that your quad has approximately the same size as the texture (rectangle textures don't have mipmaps).
Use standard textures instead. They can be NPOT.
Also: use gDebugger.
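To make the VBO item above concrete, here is a minimal sketch of creating and enabling a UV buffer. The data, the buffer name, and attribute location 1 are assumptions for illustration; the location must match the UV attribute in your vertex shader:

// Example quad UVs; replace with your own data.
const GLfloat uvData[] = { 0.0f, 0.0f,  1.0f, 0.0f,  1.0f, 1.0f,  0.0f, 1.0f };

// At init time: create the buffer and upload the UVs.
GLuint uvBuffer;
glGenBuffers(1, &uvBuffer);
glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(uvData), uvData, GL_STATIC_DRAW);

// When drawing (or once, inside a VAO): enable the attribute and point it at the buffer.
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
glVertexAttribPointer(
    1,         // attribute location (assumed)
    2,         // two components per UV (vec2)
    GL_FLOAT,  // component type
    GL_FALSE,  // not normalized
    0,         // stride: tightly packed
    (void*)0   // offset into the buffer
);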
I'm trying to wrap my head around shaders in GLSL, and I've found some useful resources and tutorials, but I keep running into a wall for something that ought to be fundamental and trivial: how does my fragment shader retrieve the color of the current fragment?
You set the final color by saying gl_FragColor = whatever, but apparently that's an output-only value. How do you get the original color of the input so you can perform calculations on it? That's got to be in a variable somewhere, but if anyone out there knows its name, they don't seem to have recorded it in any tutorial or documentation that I've run across so far, and it's driving me up the wall.
The vertex shader receives gl_Color and gl_SecondaryColor as vertex attributes. It also has four varying variables it can write values to: gl_FrontColor, gl_FrontSecondaryColor, gl_BackColor, and gl_BackSecondaryColor. If you want to pass the original colors straight through, you'd do something like:
gl_FrontColor = gl_Color;
gl_FrontSecondaryColor = gl_SecondaryColor;
gl_BackColor = gl_Color;
gl_BackSecondaryColor = gl_SecondaryColor;
Fixed functionality in the pipeline following the vertex shader will then clamp these to the range [0..1], and figure out whether the vertex is front-facing or back-facing. It will then interpolate the chosen (front or back) color like usual. The fragment shader will then receive the chosen, clamped, interpolated colors as gl_Color and gl_SecondaryColor.
For example, if you drew the standard "death triangle" like:
glBegin(GL_TRIANGLES);
    glColor3f(0.0f, 0.0f, 1.0f);
    glVertex3f(-1.0f, 0.0f, -1.0f);
    glColor3f(0.0f, 1.0f, 0.0f);
    glVertex3f(1.0f, 0.0f, -1.0f);
    glColor3f(1.0f, 0.0f, 0.0f);
    glVertex3f(0.0f, -1.0f, -1.0f);
glEnd();
Then a vertex shader like this:
void main(void) {
    gl_Position = ftransform();
    gl_FrontColor = gl_Color;
}
with a fragment shader like this:
void main() {
    gl_FragColor = gl_Color;
}
will transmit the colors through, just as if you were using the fixed-functionality pipeline.
If you want to do multi-pass rendering, i.e. if you have rendered to the framebuffer and want to do a second render pass that uses the previous rendering, then the answer is:
Render the first pass to a texture
Bind this texture for the second pass
Access the previously rendered pixels in the shader
Shader code for OpenGL 3.2:
uniform sampler2D mytex; // texture with the previous render pass
layout(pixel_center_integer) in vec4 gl_FragCoord;
// will give the screen position of the current fragment
void main()
{
    // convert fragment position to integers
    ivec2 screenpos = ivec2(gl_FragCoord.xy);
    // look up the result from the previous render pass in the texture
    vec4 color = texelFetch(mytex, screenpos, 0);
    // now use the value from the previous render pass ...
}
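The shader above covers step 3. For steps 1 and 2, a minimal render-to-texture setup might look like the following sketch; width, height, and program are placeholders, and error checking is omitted:

// --- Init: create a texture and attach it to a framebuffer object. ---
GLuint fbo, renderTex;
glGenTextures(1, &renderTex);
glBindTexture(GL_TEXTURE_2D, renderTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, renderTex, 0);

// --- Pass 1: render the scene into the texture. ---
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... draw the scene ...

// --- Pass 2: render to the screen, sampling the first pass. ---
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderTex);
glUniform1i(glGetUniformLocation(program, "mytex"), 0);
// ... draw the second-pass geometry, e.g. a fullscreen quad ...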
Another method of processing a rendered image would be OpenCL with OpenGL -> OpenCL interop. This allows more CPU-like computation.
If what you're calling "current value of the fragment" is the pixel color value that was in the render target before your fragment shader runs, then no, it is not available.
The main reason for that is that potentially, at the time your fragment shader runs, it is not known yet. Fragment shaders run in parallel, potentially (depending on the hardware) affecting the same pixel, and a separate block, reading from some sort of FIFO, is usually responsible for merging those together later on. That merging is called "blending", and is not part of the programmable pipeline yet. It's fixed-function, but it does have a number of different ways to combine what your fragment shader generated with the previous color value of the pixel.
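To illustrate the kinds of combinations that fixed-function blending offers, here are a few standard glBlendFunc configurations (nothing here is specific to this question):

glEnable(GL_BLEND);

// Classic alpha blending: src.rgb * src.a + dst.rgb * (1 - src.a)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// Additive blending: src.rgb + dst.rgb (useful for glows and particles)
glBlendFunc(GL_ONE, GL_ONE);

// Multiplicative blending: src.rgb * dst.rgb
glBlendFunc(GL_DST_COLOR, GL_ZERO);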
You need to sample the texture at the current pixel coordinates, something like this:
vec4 pixel_color = texture2D(tex, gl_TexCoord[0].xy);
Note: as far as I've seen, texture2D is deprecated in the GLSL 4.00 specification; just look for the similar texture... fetch functions.
Also, sometimes it is better to supply your own pixel coordinates instead of gl_TexCoord[0].xy; in that case, write a vertex shader something like this:
varying vec2 texCoord;
void main(void)
{
    gl_Position = vec4(gl_Vertex.xy, 0.0, 1.0);
    texCoord = 0.5 * gl_Position.xy + vec2(0.5);
}
And in fragment shader use that texCoord variable instead of gl_TexCoord[0].xy.
Good luck.
The entire point of your fragment shader is to decide what the output color is. How you do that depends on what you are trying to do.
You might choose to set things up so that you get an interpolated color based on the output of the vertex shader, but a more common approach would be to perform a texture lookup in the fragment shader using texture coordinates passed in from the vertex shader. You would then modify the result of your texture lookup according to your chosen lighting calculations and whatever else your shader is meant to do, and then write it into gl_FragColor.
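A minimal sketch of that pattern follows; the Lambertian term and all of the names here are illustrative assumptions, not anything from the question:

#version 330 core

in vec2 tex_coords;     // interpolated from the vertex shader
in vec3 normal;         // interpolated surface normal

uniform sampler2D diffuse_map;
uniform vec3 light_dir; // assumed: normalized direction toward the light

out vec4 frag_color;    // the modern equivalent of gl_FragColor

void main()
{
    // Start from the texture lookup...
    vec4 base = texture(diffuse_map, tex_coords);

    // ...then modify it, e.g. with a simple Lambertian lighting term.
    float diffuse = max(dot(normalize(normal), light_dir), 0.0);
    frag_color = vec4(base.rgb * diffuse, base.a);
}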
The GPU pipeline has access to the underlying pixel info immediately after the shaders run. If your material is transparent, the blending stage of the pipeline will combine all fragments.
Generally, objects are blended in the order that they are added to a scene, unless they have been ordered by a z-buffering algorithm. You should add your opaque objects first, then carefully add your transparent objects in the order in which they should be blended.
For example, if you want a HUD overlay on your scene, you should just create a screen quad object with an appropriate transparent texture, and add this to your scene last.
Setting the SRC and DST blending functions for transparent objects gives you access to the previous blend in many different ways.
You can use the alpha property of your output color here to do really fancy blending. This is the most efficient way to access framebuffer outputs (pixels), since it works in a single pass (Fig. 1) of the GPU pipeline.
Fig. 1 - Single Pass
If you really need multiple passes (Fig. 2), then you must direct the framebuffer output to an extra texture target rather than the screen, copy this target texture to the next pass, and so on, targeting the screen in the final pass. Each pass requires at least two context switches.
The extra copying and context switching will degrade rendering performance severely. Note that multi-threaded GPU pipelines are not much help here, since multi-pass rendering is inherently serialized.
Fig. 2 - Multi Pass
I have resorted to a verbal description with pipeline diagrams to avoid deprecation, since shader language (Slang/GLSL) is subject to change.
Some say it cannot be done, but I say this works for me:
//Toggle blending in one sense, while always disabling it in the other.
void enableColorPassing(bool enable) {
//This will toggle blending - and what gl_FragColor is set to upon shader execution
enable ? glEnable(GL_BLEND) : glDisable(GL_BLEND);
//Tells gl - "When blending, change nothing"
glBlendFunc(GL_ONE, GL_ZERO);
}
After that call, gl_FragColor will equal the color buffer's clear color the first time the shader runs on each pixel, and the output each run will be the new input upon each successive run.
Well, at least it works for me.