Tessellation - return triangles to CPU - OpenGL

I use a tessellation shader with Bézier coefficients, like in http://steps3d.narod.ru/tutorials/tessellation-algos-tutorial.html
I want to use the generated patches in another shader program (my ray tracing). How can I send the patches to another shader program, or copy them from video memory to RAM? Thank you!

Related

OpenGL Lighting Shader

I can't understand the concept of smaller shaders in OpenGL. How does it work? For example: do I need to create one shader for positioning the object in space and then another shader for lighting, or what? Could someone explain this to me? Thanks in advance.
This is a very complex topic, especially since your question isn't very specific. First, there are various shader stages (vertex shader, pixel shader, and so on). A shader program consists of different shader stages, at least a vertex and a pixel shader (except for compute shader programs, which each consist of a single compute shader). The vertex shader calculates the position of the points on screen, so this is where objects get moved. The pixel shader calculates the color of each pixel that is covered by the geometry your vertex shader produced. Now, in terms of lighting, there are different ways of doing it:
Forward Shading
This is the straightforward way, where you simply calculate the lighting in the pixel shader of the same shader program that moves the objects. This is the oldest way of calculating lighting, and the easiest one. However, its abilities are very limited.
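For illustration, a minimal sketch of what such a forward-shading pair might look like (the attribute and uniform names here are made up, not something this answer prescribes):

    // Hypothetical forward-shading shaders (GLSL as C++ string literals).
    // The vertex shader moves the object; the fragment shader lights it in the same program.
    const char* forwardVS = R"(
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aNormal;
    uniform mat4 uMVP;          // moves the object onto the screen
    uniform mat3 uNormalMatrix;
    out vec3 vNormal;
    void main() {
        vNormal = normalize(uNormalMatrix * aNormal);
        gl_Position = uMVP * vec4(aPosition, 1.0);
    }
    )";

    const char* forwardFS = R"(
    #version 330 core
    in vec3 vNormal;
    uniform vec3 uLightDir;     // direction towards a single light
    uniform vec3 uBaseColor;
    out vec4 fragColor;
    void main() {
        // lighting is computed right here, in the same program that moved the object
        float diffuse = max(dot(normalize(vNormal), normalize(uLightDir)), 0.0);
        fragColor = vec4(uBaseColor * diffuse, 1.0);
    }
    )";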
Deferred Shading
For ages this has been the go-to variant in games. Here, you have one shader program (vertex + pixel shader) that renders the geometry into one (or multiple) textures (so it moves the objects, but instead of the lit color it stores things like the base color and surface normals in the textures), and then another shader program that renders a screen-covering quad for each light you want to render. The pixel shader of this second program reads the information previously written to the textures by the first program and uses it to render the lit objects into another texture (which is then the final image). In contrast to forward shading, this allows (in theory) any number of lights in the scene, and makes the use of shadow maps easier.
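A rough sketch of the fragment shader of that first (geometry) pass, assuming two color attachments for base color and normals (the names are illustrative only):

    // Hypothetical G-buffer pass: writes base color and normal instead of a lit color.
    const char* gBufferFS = R"(
    #version 330 core
    in vec3 vNormal;
    in vec2 vTexCoord;
    uniform sampler2D uBaseColorTex;
    layout(location = 0) out vec4 gBaseColor;  // color attachment 0
    layout(location = 1) out vec4 gNormal;     // color attachment 1
    void main() {
        gBaseColor = texture(uBaseColorTex, vTexCoord);
        gNormal    = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);  // packed into [0, 1]
    }
    )";

The per-light quads of the second pass then sample these textures to produce the lit result.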
Tiled/Clustered Shading
This is a rather new and very complex way of calculating lighting that can be built on top of deferred or forward shading. It basically uses compute shaders to build an acceleration structure on the GPU, which is then used to draw huge numbers of lights very fast. This allows thousands of lights to be rendered in a scene in real time, but using shadow maps for these lights is very hard, and the algorithm is far more complex than the previous ones.
Writing smaller shaders means separating some of your shader functionality into other files. If you are writing a big shader that contains lighting algorithms, anti-aliasing algorithms, and other shader computations, you can split them into smaller shader files (light.glsl, fxaa.glsl, and so on) and link those files together with your main shader file (the one that contains the void main() function), since in OpenGL a vertex array can only have one shader program (a composition of vertex shader, fragment shader, geometry shader, etc.) bound during the rendering pipeline.
How you split your shaders also depends on your rendering algorithm (forward rendering, deferred rendering, or forward+ rendering).
It's important to note that writing a lot of shaders will increase shader compilation time, and writing a big shader with a lot of uniforms will also slow things down...
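One way to do the linking part described above is to compile each file as a separate shader object of the same stage and attach them all to one program; only one of them contains main(), the others just provide functions. A sketch, assuming a loadFile() helper and omitting error checks:

    #include <glad/glad.h>   // any OpenGL 3.x loader works; glad is just an example
    #include <string>

    std::string loadFile(const char* path);   // assumed helper that reads a text file

    GLuint compileStage(GLenum stage, const std::string& source) {
        GLuint shader = glCreateShader(stage);
        const char* text = source.c_str();
        glShaderSource(shader, 1, &text, nullptr);
        glCompileShader(shader);               // check GL_COMPILE_STATUS in real code
        return shader;
    }

    GLuint buildProgram() {
        // main.frag only *declares* the functions it calls;
        // light.glsl and fxaa.glsl contain the definitions.
        GLuint vs      = compileStage(GL_VERTEX_SHADER,   loadFile("main.vert"));
        GLuint fsMain  = compileStage(GL_FRAGMENT_SHADER, loadFile("main.frag"));
        GLuint fsLight = compileStage(GL_FRAGMENT_SHADER, loadFile("light.glsl"));
        GLuint fsFxaa  = compileStage(GL_FRAGMENT_SHADER, loadFile("fxaa.glsl"));

        GLuint program = glCreateProgram();
        glAttachShader(program, vs);
        glAttachShader(program, fsMain);
        glAttachShader(program, fsLight);
        glAttachShader(program, fsFxaa);
        glLinkProgram(program);                // check GL_LINK_STATUS in real code
        return program;
    }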

OpenGL: Fragment vs Vertex shader for gradients?

I'm new to OpenGL, and I'm trying to understand vertex and fragment shaders. It seems you can use a vertex shader to make a gradient if you define the color you want each of the vertices to be, but it seems you can also make gradients using a fragment shader if you use the gl_FragCoord variable, for example.
My question is, since you seem to be able to make color gradients using both kinds of shaders, which one is better to use? I'm guessing vertex shaders are faster or something since everyone seems to use them, but I just want to make sure.
... since everyone seems to use them
Using vertex and fragment shaders is mandatory in modern OpenGL for rendering absolutely everything.† So everyone uses both. It's the vertex shader's responsibility to compute the color at the vertices, OpenGL's to interpolate it between them, and the fragment shader's to write the interpolated value to the output color attachment.
† OK, you can also use a compute shader with imageStore, but I'm talking about the rasterization pipeline here.
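As a concrete sketch of that division of labor for a gradient (the attribute names are placeholders, not required by OpenGL):

    // Hypothetical gradient shaders: the color is set per vertex, interpolated by
    // the rasterizer, and the fragment shader just writes the interpolated value.
    const char* gradientVS = R"(
    #version 330 core
    layout(location = 0) in vec2 aPosition;
    layout(location = 1) in vec3 aColor;    // one color per vertex
    out vec3 vColor;
    void main() {
        vColor = aColor;                    // computed (here: passed through) at the vertices
        gl_Position = vec4(aPosition, 0.0, 1.0);
    }
    )";

    const char* gradientFS = R"(
    #version 330 core
    in vec3 vColor;                         // already interpolated across the triangle
    out vec4 fragColor;
    void main() {
        fragColor = vec4(vColor, 1.0);      // write the interpolated value
    }
    )";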

How to draw a full screen quad in OpenGL 3.2?

I've been having a problem trying to get OpenGL 3.2 to work, and after spending a few hours trying to figure out what was wrong I realized that it does not support glBegin. I use that command probably 50-100 times in my engine to draw full screen quads and GUI elements. So what is a simple way to just draw a rectangle with OpenGL 3.2? Do I actually have to create a vertex buffer, fragment shader, and vertex shader to do something so simple?!
Do I actually have to create a vertex buffer, fragment shader, and vertex shader to do something so simple?!
Yep, no freebies in Core profile.
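For what it's worth, the boilerplate can stay small: a common trick is to derive a screen-covering triangle from gl_VertexID, so besides the shader program the only object you need is an empty VAO. A sketch (program creation not shown, loader assumed):

    #include <glad/glad.h>   // any OpenGL 3.2 loader (assumed)

    // Vertex shader: builds one oversized triangle that covers the screen from
    // gl_VertexID, so no vertex buffer is needed (a bound VAO is still required
    // in core profile, but it can stay empty).
    const char* fullscreenVS = R"(
    #version 150 core
    out vec2 vUV;
    void main() {
        vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);  // (0,0) (2,0) (0,2)
        vUV = pos;
        gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);
    }
    )";

    GLuint emptyVao = 0;
    void drawFullscreen(GLuint program) {
        if (emptyVao == 0)
            glGenVertexArrays(1, &emptyVao);
        glUseProgram(program);
        glBindVertexArray(emptyVao);
        glDrawArrays(GL_TRIANGLES, 0, 3);   // one big triangle instead of a quad
    }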

Getting vertex-shader transformed geometry back from the gpu in OpenGL

I would like to generate vertex geometry on the CPU, pass it to the GPU, run a number of vertex shaders on the vertices, and then get the transformed vertices back on the CPU. I don't want to render the vertices or run any fragment shaders.
Is it possible to get the vertex-shader-transformed vertices back from the GPU to the CPU?
If so, how?
Yes, the facility required is called "Transform Feedback Buffers". It is available as an extension to OpenGL 2:
http://www.opengl.org/registry/specs/ARB/transform_feedback2.txt
and became official OpenGL functionality with OpenGL 3.0.
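A rough sketch of that readback path (buffer and program creation and error handling are omitted; "outPosition" is just an example name for whatever your vertex shader writes out):

    #include <glad/glad.h>   // any OpenGL loader providing GL 3.0+ (assumed)
    #include <vector>

    // Call before glLinkProgram: choose which vertex shader output(s) to capture.
    void setUpCapture(GLuint program) {
        const char* varyings[] = { "outPosition" };
        glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
        glLinkProgram(program);
    }

    // Runs the vertex shader over vertexCount points, captures the transformed
    // vertices into feedbackBuffer, and copies them back to the CPU.
    std::vector<float> readBackVertices(GLuint program, GLuint feedbackBuffer,
                                        GLsizei vertexCount) {
        glUseProgram(program);
        glEnable(GL_RASTERIZER_DISCARD);            // no fragments are generated at all
        glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackBuffer);

        glBeginTransformFeedback(GL_POINTS);        // must match the drawn primitive type
        glDrawArrays(GL_POINTS, 0, vertexCount);
        glEndTransformFeedback();

        glDisable(GL_RASTERIZER_DISCARD);
        glFinish();                                 // crude sync; a fence or query is nicer

        std::vector<float> result(vertexCount * 4); // assuming one vec4 per vertex
        glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                           result.size() * sizeof(float), result.data());
        return result;
    }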

Using both programmable and fixed pipeline functionality in OpenGL

I have a vertex shader that transforms vertices to create a fisheye effect. Is it possible to use just the vertex shader and rely on the fixed pipeline for the fragment portion?
So basically I have an application that doesn't use shaders. I want to apply a fisheye effect using a vertex shader to transform all vertices, and then leave it to the application to take care of lighting, texturing, etc.
If this is not possible, is it possible to get a fisheye effect by messing with the contents of the GL back buffer?
Thanks
If your code is built on the fixed-function pipeline, then what you described is a problem - that's why having your graphics code in shaders is good: they let you change anything easily. Remember to use them in your next project. :)
OK, but for this particular case I assume that you don't want to rewrite your whole renderer from scratch with shaders now...
You mentioned you want a "fisheye effect". It seems you're lucky, because I believe you don't need shaders for that effect! If we're talking about the same effect, then you can achieve it just by replacing the GL_PROJECTION matrix of OpenGL's fixed-function pipeline with a perspective matrix that has a wider angle of vision.
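Something along these lines (fixed-function, using GLU; the 140° value and the clip planes are only an illustration):

    #include <GL/glu.h>   // GLU's gluPerspective; pulls in the GL headers on most platforms

    // Hypothetical projection setup: a very wide field of view gives a
    // fisheye-like distortion without touching any shaders.
    void setWideAngleProjection(int width, int height) {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(140.0,                             // vertical field of view, degrees
                       (double)width / (double)height,    // aspect ratio
                       0.1, 1000.0);                      // near / far planes
        glMatrixMode(GL_MODELVIEW);                       // restore the usual matrix mode
    }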
Yes, it's possible, although some cards (notably ATI) don't support using a vertex shader without a fragment shader.