Polygon into shader - OpenGL

I'm writing a game and I'm currently working on the water rendering. I have a polygon:
The whole scene is rendered into a single texture, and when the water's turn in the render queue comes I want to pass a complex polygon into the shader. For example, on screen the polygon is a red water surface with blue borders. How do I pass only the area inside that polygon into the shader? For example, I want to fill everything inside the polygon with red.

Depending on what you're doing with it, it might be better to render the polygon into a texture by itself and have your shader sample that. If the polygon is going to be a predictable size, you could use a texture with roughly those dimensions and also pass the frame's position in your scene into the shader.
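A minimal sketch of that approach, assuming a prepared FBO with a color texture attached (maskFBO / maskTex) and a drawWaterPolygon() helper; all of these names are hypothetical:

// 1) Render the polygon alone into a mask texture via an FBO.
glBindFramebuffer(GL_FRAMEBUFFER, maskFBO);   // color attachment = maskTex
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
drawWaterPolygon();                           // writes 1.0 inside the polygon
glBindFramebuffer(GL_FRAMEBUFFER, 0);

The water shader can then sample the mask to restrict the effect to the polygon's interior:

uniform sampler2D scene;  // the scene texture you already render
uniform sampler2D mask;   // polygon mask from the pass above
varying vec2 uv;
void main() {
    vec4 color = texture2D(scene, uv);
    float inside = texture2D(mask, uv).r;  // 1.0 inside the polygon
    gl_FragColor = mix(color, vec4(1.0, 0.0, 0.0, 1.0), inside);  // fill red
}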

Related

Qt3D C++: How to outline a shape?

Does anyone know how to outline a shape in Qt3D using C++ (not QML)?
For example, using a cuboid mesh: make it transparent but outline the edges of the shape. Please see the attached pictures for what I mean.
To draw the outline of a shape, you can follow these steps:
Draw everything as normal.
Draw only the outline of the selected object.
When drawing the outline, you need to use an outline effect, which can be implemented in two render passes:
Render the geometry to a texture using a simple color shader.
Render to screen using a shader that takes each pixel in the texture and compares it with its surrounding pixels. If they are all equal, we are either fully inside or fully outside the object and the fragment can be discarded. If they differ, we are on the edge of the object and we should draw the outline color.
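A minimal sketch of that second pass, assuming pass 1 rendered the object in a flat color into a texture named silhouette and that texelSize holds 1.0 / texture resolution (both hypothetical names):

uniform sampler2D silhouette;  // pass 1: object rendered in a flat color
uniform vec2 texelSize;        // 1.0 / texture dimensions
varying vec2 uv;

void main() {
    float c = texture2D(silhouette, uv).a;
    // sample the four direct neighbours
    float l = texture2D(silhouette, uv + vec2(-texelSize.x, 0.0)).a;
    float r = texture2D(silhouette, uv + vec2( texelSize.x, 0.0)).a;
    float u = texture2D(silhouette, uv + vec2(0.0,  texelSize.y)).a;
    float d = texture2D(silhouette, uv + vec2(0.0, -texelSize.y)).a;
    // all neighbours equal -> fully inside or outside: discard
    if (l == c && r == c && u == c && d == c)
        discard;
    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);  // outline color
}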

OpenGL: blur only one part of a texture; can using a vertex shader speed it up?

Let's say there is one texture: 6000x6000
I only need to blur one part, say the center 100x100 rectangle.
If I use the vertex shader to map the area of interest to this center rectangle, by passing in the coordinates of the 4 corner points and their corresponding texture coordinates in the big texture, I think the fragment shader will only process the pixels in the center rectangle.
In my understanding, a regular GPU cannot really process 6000x6000 pixels concurrently; it will divide the work into several batches.
With 100x100, all pixels can be processed simultaneously, so it should be faster.
Is my understanding correct?
You can do a "render to texture", so you can use your vertex shader to select the area you want to blur; your fragment shader will then apply the blur only in that area.
Your understanding seems to be correct: the GPU will only spend effort processing the fragments INSIDE the area determined by your vertex shader. So if you set your vertices to a subset of your target (just like the screen, your target may be a texture, via framebuffers), the GPU will process only the desired area.
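A minimal sketch of setting up such a quad, assuming the region starts at pixel (cx, cy) of the 6000x6000 target; the concrete values are hypothetical:

// Region of interest: a 100x100 rectangle starting at (cx, cy) in a
// 6000x6000 render target.
const float cx = 2950.0f, cy = 2950.0f;  // example: centered region
const float W = 6000.0f, S = 100.0f;
// Clip-space corners of the quad (only these pixels get rasterized).
float x0 = cx / W * 2.0f - 1.0f,       y0 = cy / W * 2.0f - 1.0f;
float x1 = (cx + S) / W * 2.0f - 1.0f, y1 = (cy + S) / W * 2.0f - 1.0f;
float quad[16] = {
    // position   // uv into the big texture
    x0, y0,       cx / W,       cy / W,
    x1, y0,       (cx + S) / W, cy / W,
    x1, y1,       (cx + S) / W, (cy + S) / W,
    x0, y1,       cx / W,       (cy + S) / W,
};
// Upload as a VBO and draw with the blur shader bound: fragments outside
// this quad are never generated, so the rest of the texture is untouched.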

Mask a sphere with text

I want to use OpenGL to draw a hollow sphere. The material properties of the sphere are different for the front and back surfaces. Now I want to mask the sphere with text, so that the inner surface becomes visible through the text area. I am not able to work out how to achieve this.
Draw the sphere twice:
glCullFace(GL_FRONT), then draw the sphere. This puts all back-facing triangles into the depth and color buffers.
glCullFace(GL_BACK), bind the text texture, enable the alpha test, draw the sphere. Where the alpha test fails, the color buffer won't be updated and you'll be able to "see through" to the inside of the sphere.
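A minimal sketch with legacy fixed-function state, assuming drawSphere(), the material helpers, and textTex exist (all hypothetical names) and that the texture stores the text in its alpha channel:

// Pass 1: inside of the sphere (back faces only)
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);                  // cull front faces, keep back faces
setBackMaterial();                     // hypothetical material setup
drawSphere();

// Pass 2: outside of the sphere, masked by the text's alpha
glCullFace(GL_BACK);                   // now keep only front faces
glBindTexture(GL_TEXTURE_2D, textTex); // text stored in the alpha channel
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);         // discard fragments where alpha <= 0.5,
setFrontMaterial();                    // i.e. inside the text area
drawSphere();
glDisable(GL_ALPHA_TEST);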

OpenGL shader effect

I need an efficient OpenGL pipeline to achieve a specific look for line segment shapes.
This is a look I am aiming for:
(https://www.shadertoy.com/view/XdX3WN)
This is one of the primitives (spiral) I already have inside my program:
Inside gl_FragColor for this picture I am outputting the distance from the fragment to the camera. The pipeline for this is the usual VBO -> VAO -> vertex shader -> fragment shader path.
The shadertoy shader calculates the distance to the 3 points in every fragment of the screen and outputs the color according to that. But in my example I would need this in reverse: calculate the color of the surrounding fragments for every fragment of the spiral (in this case). Is it necessary to render the scene into a texture using an FBO, or is there a shortcut?
In the end I used:
Catmull-Rom spline interpolation to get point data from the control points
Build a VBO from the above points
Vertex shader: pass the point position data through
Geometry shader: emit sprite-sized quads for every point
Fragment shader: use an exp function to get a smooth gradient color from the center of the sprite quad (sketched below)
Result is something like this:
with:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE); // additive blend
It renders to an FBO with GL_RGBA16 for extra smoothness.
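For reference, a minimal sketch of the geometry and fragment stages described above; spriteSize and the falloff constant are hypothetical:

// Geometry shader (sketch): one point in, one screen-aligned quad out
#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform float spriteSize;   // half-size of the quad in clip space
out vec2 local;             // position inside the quad, in [-1, 1]

void main() {
    vec4 p = gl_in[0].gl_Position;
    for (int i = 0; i < 4; ++i) {
        vec2 corner = vec2(float((i & 1) * 2 - 1), float((i >> 1) * 2 - 1));
        local = corner;
        gl_Position = p + vec4(corner * spriteSize, 0.0, 0.0);
        EmitVertex();
    }
    EndPrimitive();
}

// Fragment shader (sketch): exponential falloff from the quad center
#version 150
in vec2 local;
uniform vec3 lineColor;
out vec4 fragColor;

void main() {
    float d2 = dot(local, local);      // squared distance from the center
    float intensity = exp(-4.0 * d2);  // smooth gradient; constant is a guess
    fragColor = vec4(lineColor * intensity, intensity);  // additive-blend ready
}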
For a small, limited number of lines
use a single quad covering the area (or the whole screen) as geometry, and send the lines' point coordinates and colors to the shader as 1D texture(s) or uniforms. Then you can do the computation inside the fragment shader, per pixel, for all lines at once (see the sketch after this list). A higher line count will slow things down considerably.
For a higher number of lines
you need to convert your geometry from lines to rectangles covering the affected surroundings of each line:
use transparency to merge the lines correctly, and compute the color from the perpendicular distance to the line. Add the end-point dots based on the distance to the endpoints (this can be done with a texture instead of the shader).
Your image suggests that the light affects the whole screen, so in that case you need to render a quad covering the whole screen per line instead of a rectangle around each line.
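A minimal sketch of the single-quad variant for a few lines, with the segments passed as uniforms; MAX_LINES, linePts, fragPos and the falloff constant are all hypothetical:

#version 150
#define MAX_LINES 16
uniform vec2 linePts[2 * MAX_LINES];  // endpoint pairs, same space as fragPos
uniform int  lineCount;
in vec2 fragPos;                      // interpolated position from the vertex shader
out vec4 fragColor;

// perpendicular distance from p to the segment a-b
float segDist(vec2 p, vec2 a, vec2 b) {
    vec2 ab = b - a;
    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
    return length(p - (a + t * ab));
}

void main() {
    float glow = 0.0;
    for (int i = 0; i < lineCount; ++i) {
        float d = segDist(fragPos, linePts[2 * i], linePts[2 * i + 1]);
        glow += exp(-20.0 * d);       // every line contributes light to every pixel
    }
    fragColor = vec4(vec3(glow), 1.0);
}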

C++ OpenGL: draw polygon + image processing of interior pixels

I am using OpenGL and C++ for image processing. The idea is simple: I load an image, draw a polygon by clicking, and then apply an effect (desaturation, for instance) only to the pixels in the interior of the polygon shape just created.
Can anyone give me any direction on how to limit the effect to the pixels within the interior of the polygon? Loading the image and drawing the polygon are not a problem.
Suppose the following situation:
The picture on which you want to apply the effect takes up the whole screen
The picture is rendered using OpenGL, probably through a simple shader, with the picture passed as a texture
You can take the following approach:
consider the screen as one big texture
you draw a polygon, which will be rendered on top of the already-rendered texture
in the polygon's vertices, store the UVs corresponding to its 2D screen coordinates (mapped from screen space to UV space (0, 1))
draw the picture normally
on top of the picture, draw your polygon using the same picture as a texture, but with a different shader (a sketch of that shader follows below)
So instead of trying to desaturate a specific region of your picture, create a polygon on top of that region with the same picture, and desaturate that new polygon.
This lets you avoid the stencil buffer.
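A minimal sketch of the overlay polygon's fragment shader, assuming the polygon's vertices carry the screen-space UVs described above:

uniform sampler2D picture;   // the same texture the background uses
varying vec2 uv;             // screen-space UVs stored in the polygon's vertices

void main() {
    vec3 color = texture2D(picture, uv).rgb;
    float gray = dot(color, vec3(0.299, 0.587, 0.114));  // standard luminance weights
    gl_FragColor = vec4(vec3(gray), 1.0);
}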
Another approach would be to create the polygon but draw it only into the stencil buffer, before the picture is drawn, as in the sketch below.
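A minimal sketch of that stencil variant, with drawPolygon() and drawPicture() as hypothetical helpers:

glEnable(GL_STENCIL_TEST);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);

// 1) Mark the polygon's pixels in the stencil buffer, without touching color.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawPolygon();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

// 2) Draw the picture normally where the stencil is 0 ...
glStencilFunc(GL_EQUAL, 0, 0xFF);
drawPicture(plainShader);
// ... and with the desaturation shader where the stencil is 1.
glStencilFunc(GL_EQUAL, 1, 0xFF);
drawPicture(desaturateShader);
glDisable(GL_STENCIL_TEST);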