GLSL shader detachment - OpenGL

I am trying to add a GLSL shader, and I have successfully added it to get per-fragment lighting,
but is it possible to detach the shader from the geometry dynamically and get back the basic OpenGL (fixed-function) effects for it?

You can use two different programs (one with and one without per-fragment lighting) and swap between them,
or create a uniform boolean that disables per-fragment lighting in the shader.
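A minimal sketch of the second option (a uniform toggle), assuming a compatibility-profile GLSL 1.20 fragment shader; the names useFragmentLighting, computeLighting and the varyings are placeholders, not from the question:

// Fragment shader sketch: a uniform flag switches between per-fragment
// lighting and the interpolated per-vertex color.
#version 120

uniform bool useFragmentLighting;

varying vec3 normal;      // interpolated eye-space normal from the vertex shader
varying vec3 position;    // interpolated eye-space position
varying vec4 vertexColor; // color computed per vertex (basic OpenGL look)

vec4 computeLighting(vec3 n, vec3 p)
{
    // simple diffuse term using the first fixed-function light (assumed positional)
    vec3 l = normalize(gl_LightSource[0].position.xyz - p);
    float diff = max(dot(normalize(n), l), 0.0);
    return gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse * diff;
}

void main()
{
    if (useFragmentLighting)
        gl_FragColor = computeLighting(normal, position);
    else
        gl_FragColor = vertexColor; // fall back to the per-vertex result
}

For the first option, switching is just a matter of binding the other program object with glUseProgram before drawing; in a compatibility context, glUseProgram(0) returns you to the fixed-function pipeline entirely.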

Related

A few questions about shaders

I am using OpenGL shaders.
Does the number of uniforms affect shader performance? If I pass 5 uniforms or 50, will it matter?
Does each shader have its own area that it works on? Or can each shader draw at any point of my application?
I often create a vertex shader just to pass attributes to the fragment shader. What is the benefit of the vertex shader, and why not just pass the attributes to the fragment shader directly?
I would guess it doesn't (and if it does, the effect is very minor). But I don't have any evidence for that, so I might be wrong. This is almost certainly driver-specific.
A shader does not draw anything. A shader just processes data. In the pipeline, the rasterizer produces the fragments that are covered by your shape. And these are the fragments that you can potentially draw to. The fragment shader calculates the color (and possibly depth) and the rest of the pipeline decides what to do with the result (either updating the frame buffer, blending, or discarding it altogether). Each draw call can potentially produce a framebuffer update everywhere, not just at some specific locations.
This is perfectly fine if the application requires it. The main difference is that vertex shaders process vertices and fragment shaders process fragments. Usually, there are many more fragments than vertices, so the fragment shader is invoked more often than the vertex shader. Therefore, you should do as much work in the vertex shader as possible. Of course, there are things that you just cannot calculate in a vertex shader.
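For the "vertex shader that only passes attributes along" case from the question, a minimal pass-through sketch in compatibility-profile GLSL 1.20 might look like this (the varying names are illustrative):

// Pass-through vertex shader: transform the position and forward the
// attributes; the rasterizer then generates the fragments that the
// fragment shader will process.
#version 120

varying vec2 texCoord; // forwarded to the fragment shader
varying vec4 color;    // forwarded to the fragment shader

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    texCoord = gl_MultiTexCoord0.xy;
    color    = gl_Color;
}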

Why do we need vertex shader in OpenGL?

As far as I understand, the color of a pixel is determined by the fragment shader. Why do we need a vertex shader then? Is there anything a fragment shader cannot do (or cannot easily do) but a vertex shader can do (easily)?
But I still can't quite understand why it is named a "shader".
Because that's what programs executed as part of the rendering process are called. The Renderman interface specification was one of the first programmable rendering processes, and they called all of their programmable elements "shaders", even though they didn't all compute colors.
And therefore, "shader" has become the term used for describing any such program.
Vertex shaders transform vertex data, creating a 1:1 mapping from input vertices to output vertices. Fragment shaders operate on fragments. An FS invocation has no control over where it will be executed: fragments are generated at the locations the rasterizer says they go, and the FS has no way to affect this.
By contrast, a vertex shader has complete control over where the vertices will go.
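To illustrate that control, here is a small sketch of a vertex shader that moves every vertex by a hypothetical offset uniform (not something from the answer), which no fragment shader could do:

// Vertex shader sketch: the shader decides where each vertex ends up.
#version 120

uniform vec3 offset; // hypothetical displacement

void main()
{
    vec4 displaced = gl_Vertex + vec4(offset, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * displaced;
}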

OpenGL: Fragment vs Vertex shader for gradients?

I'm new to OpenGL, and I'm trying to understand vertex and fragment shaders. It seems you can use a vertex shader to make a gradient if you define the color you want each vertex to be, but it seems you can also make gradients using a fragment shader if you use the gl_FragCoord variable, for example.
My question is, since you seem to be able to make color gradients using both kinds of shaders, which one is better to use? I'm guessing vertex shaders are faster or something since everyone seems to use them, but I just want to make sure.
... since everyone seems to use them
Using vertex and fragment shaders is mandatory in modern OpenGL for rendering absolutely everything.† So everyone uses both. It's the vertex shader's responsibility to compute the color at the vertices, OpenGL's to interpolate it between them, and the fragment shader's to write the interpolated value to the output color attachment.
† OK, you can also use a compute shader with imageStore, but I'm talking about the rasterization pipeline here.
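A minimal sketch of that interpolated-gradient setup, using compatibility-profile GLSL 1.20 built-ins (it assumes per-vertex colors are supplied via glColor* or a color array):

// --- vertex shader ---
#version 120
void main()
{
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_Color; // per-vertex color, interpolated by the rasterizer
}

// --- fragment shader ---
#version 120
void main()
{
    gl_FragColor = gl_Color;  // the interpolated value: a smooth gradient
}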

Is a vertex shader needed with a compatibility context?

If I use OpenGL 3.2+ with a compatibility context and have a fragment shader, is it necessary to have a vertex shader? I would like to know whether per-vertex lighting and other per-vertex calculations can be done by the fixed-function pipeline so that I can just use the fragment shader.
Also, what implications would this have for per-vertex attribute binding locations?
whether per-vertex lighting and other per-vertex calculations
can be done by the fixed-function pipeline
They can be done if you use fixed-pipeline lights. Otherwise, part of the work (such as transforming normals, UVs and positions) must be computed elsewhere before being passed to the fragment shader. This "elsewhere" is the vertex shader. So yes, if you don't use the fixed-pipeline lighting system, you must use both a vertex and a fragment shader to process it.
Also, if you use fixed-pipeline lighting you can still use shaders, in which you can access the fixed light and material properties. But I see no point in doing so unless you wish to break the default behavior.
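As an illustration of the fragment-shader-only setup in a compatibility context, where the fixed-function vertex stage does the per-vertex lighting and the fragment shader just consumes its interpolated results (the sampler name tex0 is a placeholder):

// Fragment shader used without a vertex shader (compatibility profile):
// gl_Color already contains the fixed-function per-vertex lighting result.
#version 120

uniform sampler2D tex0;

void main()
{
    vec4 lit   = gl_Color;                            // fixed-function lighting, interpolated
    vec4 texel = texture2D(tex0, gl_TexCoord[0].xy);  // fixed-function texcoords
    gl_FragColor = lit * texel;
}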

Passing varying variables through geometry shader

I introduced a geometry shader into my OpenGL application. My shaders have quite a few "varying" variables that I pass from the vertex shader to the fragment shader. Now, having introduced the geometry shader, I have to manually pass every varying value through the geometry shader for each vertex. Is there a way to avoid that and do things "automatically"?
No.
As soon as you introduce a geometry shader in your pipeline, if you want to pass variables from the vertex shader to the fragment shader you have to pass them manually, creating an input variable from the vertex shader and an output variable to the fragment shader. I don't know which GLSL version you're using, but you might want to check section 4.3.4 of the GLSL 3.30 spec.
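A sketch of what that manual pass-through typically looks like in GLSL 3.30 (the variable names vNormal, vTexCoord, gNormal, gTexCoord are placeholders):

// Geometry shader: every varying from the vertex shader arrives as an
// array (one element per input vertex) and must be copied to an output
// explicitly for each emitted vertex.
#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 vNormal[];   // written by the vertex shader
in vec2 vTexCoord[];

out vec3 gNormal;    // read by the fragment shader
out vec2 gTexCoord;

void main()
{
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        gNormal     = vNormal[i];    // manual pass-through
        gTexCoord   = vTexCoord[i];
        EmitVertex();
    }
    EndPrimitive();
}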
No, because there's no sensible way to do that for anything except a noop geometry shader, and if your geometry shader isn't doing anything to the geometry, why is it enabled in the first place?
In general, a geometry shader takes a number of vertices as input and produces a (different) number of vertices as output. So which input vertex (or vertices) should be mapped to which output vertex (or vertices) "automatically"?