OpenGL per-pixel lighting in the fixed-function pipeline

Is it possible to enable per-pixel lighting (so that I can have nice specular highlights on low-tessellated surfaces) in the OpenGL fixed-function pipeline?

The only way to do this in the fixed-function pipeline is with precomputed cubemaps. Fixed function interpolates colors and texture coordinates across polygons; the interpolated color is no help here, but texturing can be used.
The result won't be position-dependent, but you can precalculate cubemaps for different areas and blend between them: with additive blending, draw the geometry twice, once with each of the two cubemaps you're LERPing between, weighting the two passes accordingly.
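A minimal sketch of the single-cubemap case, assuming the cubemap already holds a prefiltered specular environment and that the base (diffuse) pass has already been drawn; drawGeometry is a placeholder for your own draw call, not something from the answer:

#include <GL/gl.h>  // assumes GL 1.3+ cube map support is available

void drawSpecularPass(GLuint specCubemap, void (*drawGeometry)(void))
{
    glEnable(GL_TEXTURE_CUBE_MAP);
    glBindTexture(GL_TEXTURE_CUBE_MAP, specCubemap);

    // Generate reflection-vector texture coordinates per vertex;
    // the cubemap lookup then varies smoothly across each face.
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
    glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_GEN_R);

    // Add the specular term on top of the already-rendered diffuse pass.
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    glDepthFunc(GL_LEQUAL);   // redraw over the existing depth values

    drawGeometry();           // user-supplied draw call (assumption)

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_GEN_S);
    glDisable(GL_TEXTURE_GEN_T);
    glDisable(GL_TEXTURE_GEN_R);
    glDisable(GL_TEXTURE_CUBE_MAP);
}

For the position-dependent case described above, this pass would be drawn twice, once per cubemap, with each pass weighted so the additive result amounts to a LERP.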


How to draw 2D smooth curve in opengl

I've searched for ways to draw splines in OpenGL, and the solutions I found use many vertices to approximate the curve. Obviously, those break down when scaled.
How can I draw a smooth curve independent of scaling, like vector graphics? Is there a proper way to do this in OpenGL, or should it be done with software rendering?
You render a quad and pass the spline as uniforms. You will need to use a shader program: the vertex shader transforms the quad and derives any extra information from your uniforms, and the fragment shader tests whether the pixel lies on the curve (sketched below).
https://www.shadertoy.com/view/MlfSRN
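A hypothetical sketch of the fragment-shader side of this idea, kept as GLSL inside a C++ string: the quad is drawn in screen space, a single quadratic Bezier is passed as uniforms, and the distance test simply samples the curve rather than solving it analytically like the linked shader. The names (p0/p1/p2, halfWidth) are illustrative.

const char* kCurveFragmentShader = R"(
#version 330 core
uniform vec2  p0, p1, p2;     // Bezier control points in pixel coordinates
uniform float halfWidth;      // half the stroke width in pixels
out vec4 fragColor;

// Approximate distance to the curve by sampling the parameter t.
float distToCurve(vec2 p)
{
    float best = 1e9;
    for (int i = 0; i <= 32; ++i) {
        float t = float(i) / 32.0;
        vec2 c = mix(mix(p0, p1, t), mix(p1, p2, t), t);
        best = min(best, length(p - c));
    }
    return best;
}

void main()
{
    float d = distToCurve(gl_FragCoord.xy);
    // Anti-aliased edge: fully opaque inside the band, fades over ~1 px,
    // so the stroke stays smooth at any zoom level.
    float alpha = 1.0 - smoothstep(halfWidth - 1.0, halfWidth + 1.0, d);
    if (alpha <= 0.0) discard;
    fragColor = vec4(0.0, 0.0, 0.0, alpha);
}
)";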

OpenGL shader effect

I need an efficient OpenGL pipeline to achieve a specific look for line segment shapes.
This is a look I am aiming for:
(https://www.shadertoy.com/view/XdX3WN)
This is one of the primitives (spiral) I already have inside my program:
Inside gl_FragColor for this picture I am outputting the distance from the fragment to the camera. The pipeline for this is the usual VBO -> VAO -> vertex shader -> fragment shader path.
The Shadertoy shader calculates the distance to the 3 points in every fragment of the screen and outputs a color according to that. In my example I would need this in reverse: calculate a color for the surrounding fragments for every fragment of the spiral (in this case). Is it necessary to render the scene into a texture using an FBO, or is there a shortcut?
In the end I used:
Catmull-Rom spline interpolation to get point data from the control points (see the sketch after this list)
Build a VBO from the above points
Vertex shader: pass the point position data through
Geometry shader: emit sprite-sized quads for every point
Fragment shader: use an exp function to get a smooth gradient color from the center of the sprite quad
Result is something like this:
with:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE); // additive blend
It renders to an FBO with a GL_RGBA16 color attachment for extra smoothness.
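For reference, a minimal sketch of the first two steps (the Catmull-Rom evaluation and the point list that feeds the VBO); the types and function names are illustrative, not from the original code:

#include <vector>

struct Vec2 { float x, y; };

// Evaluate a Catmull-Rom segment defined by p0..p3 at parameter t in [0,1];
// the curve passes through p1 (t = 0) and p2 (t = 1).
static Vec2 catmullRom(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float t)
{
    float t2 = t * t, t3 = t2 * t;
    auto blend = [&](float a, float b, float c, float d) {
        return 0.5f * ((2.0f * b) +
                       (-a + c) * t +
                       (2.0f * a - 5.0f * b + 4.0f * c - d) * t2 +
                       (-a + 3.0f * b - 3.0f * c + d) * t3);
    };
    return { blend(p0.x, p1.x, p2.x, p3.x), blend(p0.y, p1.y, p2.y, p3.y) };
}

// Densify the control polygon; the result is what the VBO is built from and
// what the geometry shader later expands into sprite quads.
std::vector<Vec2> samplePoints(const std::vector<Vec2>& cp, int stepsPerSegment)
{
    std::vector<Vec2> out;
    for (size_t i = 1; i + 2 < cp.size(); ++i)
        for (int s = 0; s < stepsPerSegment; ++s)
            out.push_back(catmullRom(cp[i - 1], cp[i], cp[i + 1], cp[i + 2],
                                     s / float(stepsPerSegment)));
    return out;
}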
For a small, limited number of lines
use a single quad covering the affected area (or the whole screen) as geometry and send the lines' endpoint coordinates and colors to the shader as 1D texture(s) or uniforms. Then you can do the computation inside the fragment shader, per pixel, for all lines at once (see the sketch below). A higher line count will slow things down considerably.
For a higher number of lines
you need to convert your geometry from lines to rectangles covering the affected surroundings of each line:
use transparency to merge the lines correctly and compute the color from the perpendicular distance to the line. Add the endpoint dots based on the distance to the endpoints (this can be done with a texture instead of the shader).
Your image suggests that the glow affects the whole screen, so in that case you need to render a quad covering the whole screen per line instead of a bounding rectangle.
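A sketch of the "few lines" variant, again as GLSL in a C++ string: the segment endpoints and colors arrive as uniform arrays and every pixel of a full-screen quad accumulates a distance-based glow. The uniform names and the exponential falloff are assumptions.

const char* kLinesFragmentShader = R"(
#version 330 core
#define MAX_LINES 16
uniform int  lineCount;
uniform vec2 lineA[MAX_LINES];      // segment start points (pixels)
uniform vec2 lineB[MAX_LINES];      // segment end points (pixels)
uniform vec3 lineColor[MAX_LINES];
out vec4 fragColor;

// Distance from point p to segment a-b; clamping t to [0,1] also
// produces the rounded "dots" at the endpoints.
float segDist(vec2 p, vec2 a, vec2 b)
{
    vec2 ab = b - a;
    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
    return length(p - (a + t * ab));
}

void main()
{
    vec3 c = vec3(0.0);
    for (int i = 0; i < lineCount; ++i) {
        float d = segDist(gl_FragCoord.xy, lineA[i], lineB[i]);
        c += lineColor[i] * exp(-0.15 * d);   // smooth glow falloff
    }
    fragColor = vec4(c, 1.0);
}
)";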

Questions about Deferred Shading

I just have some questions about deferred shading. I have gotten to the point where I have the Color, Position, Normal and texture data in the multiple render targets. My questions pertain to what I do next. To make sure that I have the correct data in the textures, I have put a plane on the screen and rendered the textures onto it. What I don't understand is how to manipulate those textures so that the final output is shaded with lighting. Do I need to render a plane or a quad that takes up the screen and apply all the calculations onto that plane? If I do that, I am somewhat confused about how I would get multiple lights to work this way, since the "plane" would be a renderable object, so for each light I would need to re-render the plane. Am I thinking of this incorrectly?
You need to render some geometry to represent the area covered by the light(s). The lighting term for each pixel of the light is accumulated into a destination render target. This gives you your lit result.
There are various ways to do this. To get up and running, a simple / easy (and hellishly slow) method is to render a full-screen quad for each light.
Basically:
Setup: Render all objects into the g-buffer, storing the various object properties (albedo, specular, normals, depth, whatever you need)
Lighting: For each light:
Render some geometry to represent the area the light is going to cover on screen
Sample the g-buffer for the data you need to calculate the lighting contribution (you can use the screen position, e.g. the HLSL vpos register or gl_FragCoord in GLSL, to find the UV; see the sketch after this answer)
Accumulate the lighting term into a destination render target (the backbuffer will do nicely for simple cases)
Once you've got this working, there's loads of different ways to speed it up (scissor rect, meshes that tightly bound the light, stencil tests to avoid shading 'floating' regions, multiple lights drawn at once and higher level techniques such as tiling).
There's a lot of different slants on Deferred Shading these days, but the original technique is covered thoroughly here : http://http.download.nvidia.com/developer/presentations/2004/6800_Leagues/6800_Leagues_Deferred_Shading.pdf
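To make the light loop concrete, here is a hedged sketch of a single point-light pass for the full-screen-quad approach, with the GLSL held in a C++ string. The g-buffer layout, uniform names and the simple diffuse/attenuation model are assumptions for illustration only; gl_FragCoord plays the role of the vpos register mentioned above.

const char* kPointLightShader = R"(
#version 330 core
uniform sampler2D gAlbedo;    // g-buffer: base color
uniform sampler2D gNormal;    // g-buffer: world-space normal
uniform sampler2D gPosition;  // g-buffer: world-space position
uniform vec3  lightPos;
uniform vec3  lightColor;
uniform float lightRadius;
out vec4 fragColor;

void main()
{
    // gl_FragCoord identifies the pixel being shaded, so the g-buffer can be
    // fetched directly without passing UVs from the vertex shader.
    ivec2 uv = ivec2(gl_FragCoord.xy);
    vec3 albedo = texelFetch(gAlbedo,   uv, 0).rgb;
    vec3 N      = normalize(texelFetch(gNormal, uv, 0).xyz);
    vec3 P      = texelFetch(gPosition, uv, 0).xyz;

    vec3  L    = lightPos - P;
    float dist = length(L);
    L /= dist;

    float atten   = clamp(1.0 - dist / lightRadius, 0.0, 1.0);
    float diffuse = max(dot(N, L), 0.0);

    // This is one light's contribution; drawing the quad once per light with
    // additive blending accumulates all lights into the destination target.
    fragColor = vec4(albedo * lightColor * diffuse * atten, 1.0);
}
)";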

OpenGL mipmapping inconsistent?

I have a 512x512 texture which holds a number of images that I want to use in my application. After adding the image data to the texture I save the texture coords for the individual images. Later I apply these to some quads that I am drawing. The texture has mipmapping activated.
When I take a screenshot of the rendered scene at exactly the same instant in two different runs of the application, I notice that there are differences in the image only among those quads textured with this mipmapped texture. Can mipmapping cause such an issue?
My best guess is that it has to do with precisions in your shader. Check out this problem that I had (and fought with for a while) and my solution:
opengl texture mapping off by 5-8 pixels
It is probably a combination of mipmapping's automatic scaling of your texture atlas and the precision hints in your shader code (one common mitigation is sketched below).
Also see the other linked question:
Why is a texture coordinate of 1.0 getting beyond the edge of the texture?
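One common mitigation, offered here purely as an assumption and not something from the linked answers: inset each sub-image's texture coordinates by half a texel so that mipmapped sampling of the atlas does not bleed in neighbouring images. The helper name and layout are illustrative.

struct UvRect { float u0, v0, u1, v1; };

// Compute UVs for a sub-image occupying the pixel rectangle (x, y, w, h)
// inside a square atlas, inset by half a texel on every side.
UvRect atlasUvs(int x, int y, int w, int h, int atlasSize = 512)
{
    const float half = 0.5f / float(atlasSize);   // half-texel inset
    UvRect r;
    r.u0 = x       / float(atlasSize) + half;
    r.v0 = y       / float(atlasSize) + half;
    r.u1 = (x + w) / float(atlasSize) - half;
    r.v1 = (y + h) / float(atlasSize) - half;
    return r;
}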

Fragment shader for multisampled depth textures

Which operations would be ideal in a fragment shader for multisampled depth textures? I mean, for RGBA textures, we could just average the color values returned by texelFetch().
What would ideal shader code for a multisampled depth texture look like?
Multisample depth textures are a Shader Model 4.1 (DX 10.1) feature first and foremost (multisample color textures are DX 10.0). OpenGL does not make this clear, but not all GL3 class hardware will support them. That said, since multisample textures are a GL 3.2 feature, this issue is largely moot in the OpenGL world; something that might come up once in a blue moon.
In any event, there is no difference between a multisample depth texture and a multisample color texture, assuming your hardware supports the former. Even if the depth texture is stored in an integer format, when you sample it using texelFetch (...) on a sampler2DMS you get a single-precision 4-component floating-point vector of the form vec4(depth.r, depth.r, depth.r, 1.0).
You can average the texels together if you want for multisample depth resolve, but the difference in depth between all of the samples can also be useful for quickly finding edges in your rendered scene to implement things like bilateral filtering.
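A sketch of such a manual resolve, as GLSL in a C++ string: it averages the samples as described and also records the per-pixel depth spread, which is the quantity edge-detection style filters typically look at. The uniform names and output layout are assumptions.

const char* kDepthResolveShader = R"(
#version 330 core
uniform sampler2DMS uDepthTex;
uniform int uSamples;           // sample count the texture was created with
out vec4 fragColor;

void main()
{
    ivec2 coord = ivec2(gl_FragCoord.xy);
    float sum = 0.0, dMin = 1.0, dMax = 0.0;
    for (int i = 0; i < uSamples; ++i) {
        // texelFetch on a sampler2DMS takes a sample index, not an LOD.
        float d = texelFetch(uDepthTex, coord, i).r;
        sum += d;
        dMin = min(dMin, d);
        dMax = max(dMax, d);
    }
    // x: averaged depth; y: per-pixel depth spread (large near silhouettes).
    fragColor = vec4(sum / float(uSamples), dMax - dMin, 0.0, 1.0);
}
)";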