I'm trying to convert a Cg program to a GLSL program.
What I've done so far seems correct, but the GLSL shader produces incorrect output; the incorrect behavior shows up when I compare the results against a set of test images.
The only unclear point I'm still investigating is the function f3texRECT, which I've translated to texture(). However, I cannot find any documentation about f3texRECT.
Can somebody shed some light on this?
f3texRECT() looks like it would map to texture() with a sampler2DRect instead of a sampler2D -- meaning the texture coordinates are unnormalized ([0..textureSize-1] instead of [0..1]). The "f3" prefix means the result is a three-channel color. Older versions of GLSL had a texture2DRect() function for this purpose, but it has since been deprecated.
f3texRECT(..args..) is exactly equivalent to texRECT(..args..).xyz in Cg -- it exists as a holdover from the older HLSL, which didn't have fully general swizzles.
In GLSL the equivalent function is texture, so you should be able to use texture(..args..).xyz there too, though the args might be slightly different.
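For illustration, a GLSL fragment shader using a rectangle sampler might look like the following sketch (the sampler and coordinate names are placeholders, not taken from your Cg code):

#version 150

uniform sampler2DRect tex;   // rectangle texture: coordinates are unnormalized

in vec2 texelCoord;          // roughly in [0..width) x [0..height)
out vec4 fragColor;

void main()
{
    // texture() on a sampler2DRect is the GLSL counterpart of Cg's texRECT();
    // taking .xyz mirrors f3texRECT(), which returns only three channels.
    vec3 color = texture(tex, texelCoord).xyz;
    fragColor = vec4(color, 1.0);
}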
The main confusion translating texture calls from Cg to GLSL is dealing with shadow textures -- shadow texture lookups in Cg use normal samplers but the tex call has an extra component in the coordinate. GLSL has distinct shadow sampler types instead. So when translating Cg to GLSL you need to figure out which textures are 'normal' textures and which are 'shadow' textures, based on how they are used. In the rare case that you have a single texture used for both normal and shadow lookups, you need to split it into two samplers for GLSL.
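As a rough sketch of the shadow case (all names here are made up for the example): where Cg passes a 3-component coordinate to a lookup on an ordinary sampler, GLSL wants a dedicated shadow sampler and performs the depth comparison for you:

uniform sampler2DShadow shadowMap;    // GLSL's dedicated shadow sampler type

float shadowFactor(vec3 coordAndRef)  // .xy = texture coordinate, .z = reference depth
{
    // Requires GL_TEXTURE_COMPARE_MODE = GL_COMPARE_REF_TO_TEXTURE on the
    // texture object; returns the comparison result in [0, 1].
    return texture(shadowMap, coordAndRef);
}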
EDIT:
My question was unclear at first, I'll try to rephrase it:
How do I use different shaders to do different rendering operations on the same mesh polygons? For example, I want to add lighting using one shader and add fog using another shader. I need to use the color interpolated by the first shader in the calculation of the second shader, but I don't know how to do that if I can't (or rather am not supposed to) pass the color buffer around between shaders.
Also (and that is where my question started), I need the same world-view-projection calculation for both shaders, so am I supposed to compute it in every shader separately? Am I supposed to use one big shader for all my rendering operations?
Original question:
Say I have two different shader programs. The first one calculates the vertex positions in the vertex shader and does some operations in the fragment shader.
Let's say I want to use the fragment shader to do different calculations, but I still want to use the same vertex positions calculated by the first vertex shader. Do I have to calculate the vertex positions again or is there a way to share state between different shader programs?
You have several options:
multi pass
This approach usually renders the geometry into depth and "color" buffers first, and then in later passes uses those as input textures while rendering a single rectangle covering the whole screen/view. Deferred shading is an example of this, but there are many other implementations of effects that are not related to deferred shading. Here is an example of the multi-pass approach:
How can I render an 'atmosphere' over a rendering of the Earth in Three.js?
In the first pass the planets, stars and other stuff are rendered; in the second the atmosphere is added.
You can combine the passes either by blending or by direct rendering. Direct rendering requires that you render each pass into a texture and composite them in the last pass. Blending instead modifies the output color in each pass.
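As a rough sketch of the render-to-texture variant (the names and the fog formula are my own assumptions, not taken from the linked example), the second pass draws a fullscreen rectangle and reads the first pass's output as textures:

#version 330 core

uniform sampler2D sceneColor;   // color output of the first (lighting) pass
uniform sampler2D sceneDepth;   // depth output of the first pass
uniform vec3 fogColor;

in vec2 uv;                     // fullscreen-quad texture coordinates
out vec4 fragColor;

void main()
{
    vec3 color  = texture(sceneColor, uv).rgb;
    float depth = texture(sceneDepth, uv).r;      // nonlinear depth in [0, 1]
    float fog   = smoothstep(0.95, 1.0, depth);   // crude depth-based fog factor
    fragColor   = vec4(mix(color, fogColor, fog), 1.0);
}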
single pass
What you describe sounds more like you should encode the different shaders as functions of a single fragment shader... Yes, you can combine several shaders into a single one if they are compatible, and combine their results into the final output color.
A big shader is a performance hit, but I think it would still be faster than having multiple passes doing the same work.
Take a look at this example:
Normal mapping gone horribly wrong
this one computes environmental reflection, lighting and geometry color and combines them into a single output color.
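To make the idea concrete for the lighting-plus-fog split from the question, here is a sketch of the single-pass approach; every name and the exact formulas are assumptions for the example:

#version 330 core

in vec3 normalVS;           // view-space normal from the vertex shader
in vec3 positionVS;         // view-space position from the vertex shader
out vec4 fragColor;

uniform vec3 lightDir;      // normalized light direction, view space
uniform vec3 baseColor;
uniform vec3 fogColor;
uniform float fogDensity;

vec3 applyLighting(vec3 color)  // first "shader" as a function
{
    float diffuse = max(dot(normalize(normalVS), -lightDir), 0.0);
    return color * (0.2 + 0.8 * diffuse);          // ambient + diffuse
}

vec3 applyFog(vec3 color)       // second "shader" uses the first one's result
{
    float dist = length(positionVS);
    float fog  = 1.0 - exp(-fogDensity * dist);    // exponential fog factor
    return mix(color, fogColor, fog);
}

void main()
{
    fragColor = vec4(applyFog(applyLighting(baseColor)), 1.0);
}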
Exotic shaders
There are also exotic shaders that work around the pipeline limitations, like this one:
Reflection and refraction impossible without recursive ray tracing?
These are used for things that are believed to be impossible to implement in the GL/GLSL pipeline. Anyway, if the limitations are too binding you can still use a compute shader...
I'm new to OpenGL, and I'm trying to understand vertex and fragment shaders. It seems you can use a vertex shader to make a gradient if you define the color you want each of the vertices to be, but it seems you can also make gradients using a fragment shader, for example by using the gl_FragCoord variable.
My question is, since you seem to be able to make color gradients using both kinds of shaders, which one is better to use? I'm guessing vertex shaders are faster or something since everyone seems to use them, but I just want to make sure.
... since everyone seems to use them
Using vertex and fragment shaders is mandatory in modern OpenGL for rendering absolutely everything.† So everyone uses both. It's the vertex shader's responsibility to compute the color at the vertices, OpenGL's to interpolate it between them, and the fragment shader's to write the interpolated value to the output color attachment.
† OK, you can also use a compute shader with imageStore, but I'm talking about the rasterization pipeline here.
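For example, a minimal shader pair producing a gradient purely through per-vertex colors and the built-in interpolation could look like this (the attribute locations and names are assumptions):

// vertex shader
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;    // one color per vertex
out vec3 vColor;

void main()
{
    vColor = color;                    // passed on, interpolated by the rasterizer
    gl_Position = vec4(position, 1.0);
}

// fragment shader
#version 330 core
in vec3 vColor;                        // already interpolated across the triangle
out vec4 fragColor;

void main()
{
    fragColor = vec4(vColor, 1.0);     // the gradient comes "for free"
}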
So I have an OpenGL program that draws a group of objects. When I draw these objects I want my shader program to be a vertex shader, and a vertex shader exclusively. Basically, I am aiming to adjust the height of the model inside the vertex shader depending on a texture calculation, and that is it. Otherwise I want the object to be drawn as if using naked OpenGL (no shaders). I do not want to implement a fragment shader.
However, I haven't been able to find out how to make a shader program with only a vertex shader and nothing else. Forgetting the part about adjusting my model's height, so far I have:
gl_FrontColor = gl_Color;
gl_Position = modelViewProjectionMain * Position;
It transforms the object to the correct position all right, but when I do this I lose the texture coordinates and also the lighting information (the normals are lost). What am I missing? How do I write a "do-nothing" vertex shader? That is, a vertex shader you could turn off and on when drawing a textured .obj with normals, and there would be no difference?
You can't write a shader with a partial implementation. Either you do everything in a shader or you rely completely on the fixed functionality (deprecated) for a given object.
What you can do is this:
glUseProgram(handle);
// draw objects with the shader
glUseProgram(0);
// draw objects with fixed functionality
To expand a little on the entirely correct answer by Abhishek Bansal, what you want to do would be nice but is not actually possible. You're going to have to write your own vertex and fragment shaders.
From your post, by "naked OpenGL" you mean the fixed-function pipeline in OpenGL 1 and 2, which included built-in lighting and texturing. Shaders in OpenGL entirely replace the fixed-function pipeline rather than extending it. And in OpenGL 3+ the old functionality has been removed, so now they're compulsory.
The good news is that vertex/fragment shaders that perform the same function as the original OpenGL lighting and texturing are easy to find and easy to modify for your purpose. The OpenGL Shading Language book by Rost, Licea-Kane, et al. has a whole chapter, "Emulating OpenGL Fixed Functionality". Or you could get a copy of the 5th edition of the OpenGL SuperBible book and its code (not the 6th edition), which came with a bunch of useful predefined shaders. Or, if you prefer online resources to books, there are the NeHe tutorials.
Writing shaders seems a bit daunting at first, but it's easier than you might think, and the extra flexibility is well worth it.
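As a starting point, a pass-through vertex shader that mimics the old fixed-function vertex stage might look like this sketch (compatibility profile, GLSL 1.20; you still have to pair it with a fragment shader that does the texturing and lighting):

#version 120

void main()
{
    gl_FrontColor  = gl_Color;            // forward the per-vertex color
    gl_TexCoord[0] = gl_MultiTexCoord0;   // forward the first texture coordinate set
    gl_Position    = ftransform();        // the same transform the fixed pipeline would apply
}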
I've been trying to implement percentage-closer filtering (PCF) for my shadow mapping, as described here: Nvidia GPU Gems
When I try to sample my shadow map using a uniform sampler2DShadow and shadow2D or shadow2DProj the GLSL compile fails and gives me the error
shadow2D deprecated after version 120
How would I go about implementing an equivalent solution in GLSL 330+? I'm currently just using a binary texture sample along with Poisson Sampling but the staircase aliasing is pretty bad.
Your title is way off base. sampler2DShadow is not deprecated. The only thing that changed in GLSL 1.30 was that the mess of functions like texture1D, texture2D, textureCube, shadow2D, etc. were all replaced with overloads of texture (...).
Note that this overload of texture (...) is equivalent to shadow2D (...):
float texture(sampler2DShadow sampler,
              vec3 P,
              [float bias]);
With this overload, the texture coordinates used for the lookup are P.st, and the reference value used for the depth comparison is P.r. This overload only works properly when texture comparison is enabled (GL_TEXTURE_COMPARE_MODE == GL_COMPARE_REF_TO_TEXTURE) for the texture/sampler object bound to the shadow sampler's texture image unit; otherwise the results are undefined.
Beginning with GLSL 1.30, the only time you need to use a different texture lookup function is when you are doing something fundamentally different (e.g. texture projection => textureProj, requesting an exact LOD => textureLod, fetching a texel by its integer coordinates/sample index => texelFetch, etc.). Texture lookup with comparison (shadow sampler) is not considered fundamentally different enough to require its own specialized texture lookup function.
This is all described quite thoroughly on OpenGL's wiki site.
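As a concrete (and deliberately simple) sketch of PCF in GLSL 330 using that overload -- the uniform and varying names are assumptions, and the 3x3 kernel is just an example:

#version 330 core

uniform sampler2DShadow shadowMap;

in vec4 shadowCoord;   // light-space position after the bias matrix
out vec4 fragColor;

float pcfShadow()
{
    vec3 proj = shadowCoord.xyz / shadowCoord.w;            // perspective divide
    vec2 texelSize = 1.0 / vec2(textureSize(shadowMap, 0));

    float sum = 0.0;
    for (int x = -1; x <= 1; ++x)
        for (int y = -1; y <= 1; ++y)
            // texture() with a sampler2DShadow compares proj.z against the stored
            // depth; with GL_LINEAR comparison filtering the result is already filtered.
            sum += texture(shadowMap, vec3(proj.xy + vec2(x, y) * texelSize, proj.z));

    return sum / 9.0;   // average of the nine comparison results
}

void main()
{
    fragColor = vec4(vec3(pcfShadow()), 1.0);   // visualize the shadow factor
}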
The vertex shader is expected to output vertex positions in clip space:
Vertex shaders, as the name implies, operate on vertices. Specifically, each invocation of a vertex shader operates on a single vertex. These shaders must output, among any other user-defined outputs, a clip-space position for that vertex. (source: Learning Modern 3D Graphics Programming, by Jason L. McKesson)
It has a built-in variable named gl_Position for that.
Similarly, the fragment shader is expected to output colors:
A fragment shader is used to compute the output color(s) of a fragment. [...] After the fragment shader executes, the fragment output color is written to the output image. (source: Learning Modern 3D Graphics Programming, by Jason L. McKesson)
but there is no gl_Color built-in variable defined for that as stated here: opengl44-quick-reference-card.pdf
Why is there this (apparent) inconsistency in the OpenGL API?
That is because the OpenGL pipeline uses gl_Position for several tasks. The manual says: "The value written to gl_Position will be used by primitive assembly, clipping, culling and other fixed functionality operations, if present, that operate on primitives after vertex processing has occurred."
In contrast, the pipeline logic does not depend on the final pixel color.
The accepted answer does not adequately explain the real situation:
gl_Color was already used once upon a time, but it was always defined as an input variable.
In compatibility GLSL, gl_Color is the per-vertex color attribute in vertex shaders, and in a fragment shader it takes on the value of gl_FrontColor or gl_BackColor depending on which side of the polygon you are shading.
However, none of this behavior exists in newer versions of GLSL. You must supply your own vertex attributes, your own varyings and you pick between colors using the value of gl_FrontFacing. I actually explained this in more detail in a different question related to OpenGL ES 2.0, but the basic principle is the same.
In fact, since gl_Color was already used as an input variable this is why the output of a fragment shader is called gl_FragColor instead. You cannot have a variable serve both as an input and an output in the same stage. The existence of an inout storage qualifier may lead you to believe otherwise, but that is for function parameters only.
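For reference, a minimal modern (GLSL 330) equivalent of what gl_Color / gl_FrontColor / gl_BackColor used to do could look like this sketch; the attribute locations and variable names are my own:

// vertex shader
#version 330 core
layout(location = 0) in vec4 position;
layout(location = 1) in vec3 frontColor;   // your own attributes replace gl_Color
layout(location = 2) in vec3 backColor;
out vec3 vFront;
out vec3 vBack;

void main()
{
    vFront = frontColor;
    vBack  = backColor;
    gl_Position = position;                // gl_Position is still built in
}

// fragment shader
#version 330 core
in vec3 vFront;
in vec3 vBack;
out vec4 fragColor;                        // your own output replaces gl_FragColor

void main()
{
    // pick between the two colors based on which side of the polygon is shaded
    fragColor = vec4(gl_FrontFacing ? vFront : vBack, 1.0);
}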