I have a problem with the fragment shader,
this is my situation:
I have a 3d scene with a simple 2d square representing a wall (with "GL.GL_QUADS") in the middle.
I move the virtual camera using the function "glu.gluLookAt".
I implemented a simple fragment shader for the wall that basically changes the color of the wall with respect to the distance from the wall to the virtual camera (using dFdx and dFdy).
The problem is that instead of visualizing the output of the shader on the wall, I would like to store that output in a buffer or in a texture.
I tried with "gl.glBindFramebufferEXT", but in that case the output was the entire rendering of the virtual scene, not just the shader output for the wall.
So how can I "extract" only the output of the fragment shader for that GL_QUADS wall without capturing the whole rendered scene?
You will need to set up an ortho projection and render only the quad you need into the FBO (or just a screen-aligned quad). Then render the scene with the contents of the FBO bound as a texture.
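A minimal sketch of that approach in C (assuming a GL 3.0+ context with core framebuffer objects rather than the EXT entry points; wallShader and the two draw helpers are illustrative names, not code from the question):

#include <GL/glew.h>   /* or any other GL function loader */

/* Render only the wall quad, with its distance shader, into an FBO-attached
   texture, then draw the full scene sampling that texture. */
void renderWallToTexture(GLuint wallShader, int texW, int texH,
                         int winW, int winH,
                         void (*drawWallQuad)(void), void (*drawScene)(void))
{
    GLuint fbo, tex;

    /* Color texture that will receive the fragment shader's output. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texW, texH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* FBO with that texture as its color attachment. */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    /* Pass 1: only the wall, nothing else, goes into the FBO. */
    glViewport(0, 0, texW, texH);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(wallShader);          /* the distance-based fragment shader */
    drawWallQuad();

    /* Pass 2: back to the default framebuffer; draw the whole scene with the
       FBO's texture bound wherever the shader output is needed. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, winW, winH);
    glBindTexture(GL_TEXTURE_2D, tex);
    drawScene();
}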
I need an efficient OpenGL pipeline to achieve a specific look for line segment shapes.
This is a look I am aiming for:
(https://www.shadertoy.com/view/XdX3WN)
This is one of the primitives (spiral) I already have inside my program:
Inside gl_FragColor for this picture I am outputting distance from fragment to camera. The pipeline for this is the usual VBO->VAO->Vertex shader->Fragment shader path.
The Shadertoy shader calculates the distance to the 3 points in every fragment of the screen and outputs a color according to that. But in my example I would need this in reverse: calculate a color for the surrounding fragments for every fragment of the spiral (in this case). Is it necessary to render the scene into a texture using an FBO, or is there a shortcut?
In the end I used:
Catmull-Rom spline interpolation to get point data from the control points
Build a VBO from the above points
Vertex shader: pass the point position data through
Geometry shader: emit a sprite-sized quad for every point
Fragment shader: use an exp falloff to get a smooth gradient color from the center of the sprite quad
Result is something like this:
with:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE); // additive blend
It renders to an FBO with GL_RGBA16 for extra smoothness; the two shaders are sketched below.
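A rough sketch of that geometry/fragment shader pair (GLSL sources as C string literals; the #version, the spriteSize uniform and the exp() falloff constant are illustrative choices, not the exact code used):

static const char *spriteGeomSrc =
    "#version 150\n"
    "layout(points) in;\n"
    "layout(triangle_strip, max_vertices = 4) out;\n"
    "uniform float spriteSize;   /* half-size of the sprite quad        */\n"
    "out vec2 uv;                /* -1..1 coordinates inside the sprite */\n"
    "void main() {\n"
    "    vec4 c = gl_in[0].gl_Position;   /* simplified: offsets added in clip space */\n"
    "    uv = vec2(-1.0, -1.0); gl_Position = c + vec4(-spriteSize, -spriteSize, 0.0, 0.0); EmitVertex();\n"
    "    uv = vec2( 1.0, -1.0); gl_Position = c + vec4( spriteSize, -spriteSize, 0.0, 0.0); EmitVertex();\n"
    "    uv = vec2(-1.0,  1.0); gl_Position = c + vec4(-spriteSize,  spriteSize, 0.0, 0.0); EmitVertex();\n"
    "    uv = vec2( 1.0,  1.0); gl_Position = c + vec4( spriteSize,  spriteSize, 0.0, 0.0); EmitVertex();\n"
    "    EndPrimitive();\n"
    "}\n";

static const char *spriteFragSrc =
    "#version 150\n"
    "in vec2 uv;\n"
    "uniform vec3 lineColor;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    float d = length(uv);                 /* 0 at the sprite center, 1 at the edge */\n"
    "    float intensity = exp(-4.0 * d * d);  /* smooth gradient from the center       */\n"
    "    fragColor = vec4(lineColor, intensity);  /* GL_SRC_ALPHA, GL_ONE then adds color * intensity */\n"
    "}\n";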
For small limited number of lines
use a single quad covering the area or the whole screen as geometry, and send the lines' point coordinates and colors to the shader as 1D texture(s) or uniforms. Then you can do the computation inside the fragment shader, per pixel and for all lines at once. A higher line count will slow things down considerably.
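For illustration, a fragment shader along those lines (GLSL as a C string literal; MAX_LINES, the uniform names and the exp() falloff are assumptions, and pos is expected from the quad's vertex shader):

static const char *linesFragSrc =
    "#version 150\n"
    "#define MAX_LINES 32\n"
    "uniform int  lineCount;\n"
    "uniform vec2 lineA[MAX_LINES];      /* segment start points */\n"
    "uniform vec2 lineB[MAX_LINES];      /* segment end points   */\n"
    "uniform vec3 lineColor[MAX_LINES];\n"
    "in  vec2 pos;                       /* interpolated position from the quad */\n"
    "out vec4 fragColor;\n"
    "float distToSegment(vec2 p, vec2 a, vec2 b) {\n"
    "    vec2 ab = b - a;\n"
    "    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);\n"
    "    return length(p - (a + t * ab));\n"
    "}\n"
    "void main() {\n"
    "    vec3 c = vec3(0.0);\n"
    "    for (int i = 0; i < lineCount; ++i)\n"
    "        c += lineColor[i] * exp(-8.0 * distToSegment(pos, lineA[i], lineB[i]));\n"
    "    fragColor = vec4(c, 1.0);\n"
    "}\n";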
For higher number of lines
you need to convert your geometry from lines to rectangles covering the affected surroundings of each line:
use transparency to merge the lines correctly and compute the color from the perpendicular distance to the line. Add the end-point dots based on the distance from the endpoints (this can be done with a texture instead of the shader).
Your image suggests that the light affects the whole screen, so in that case you need to render a quad covering the whole screen for each line instead of a limited rectangle.
I'm trying to understand shaders and framebuffers by making random stuff.
I have a cube floating in a scene in 2 colours: black and white (texture). I add additional colours to the cube and scene with postprocessing.
This works fine, but I want only the cube to get these colours, not the scene.
I do that with this:
Bind the texture
Bind the frame buffer object
Bind the shader
Draw the background
Draw the cube
Unbind the frame buffer
Bind the shader that post-processes the image
Pass the colour parameters to the shader
Draw everything with: glutSwapBuffers();
I can add the code if you need it.
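For reference, the order of operations described above might look roughly like this in C (every name here, from the FBO and textures to the shaders and draw helpers, is illustrative, since the actual code wasn't posted):

glBindTexture(GL_TEXTURE_2D, cubeTex);             /* bind the texture              */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);            /* bind the frame buffer object  */
glUseProgram(sceneShader);                         /* bind the shader               */
drawBackground();                                  /* draw the background           */
drawCube();                                        /* draw the cube                 */
glBindFramebuffer(GL_FRAMEBUFFER, 0);              /* unbind the frame buffer       */
glUseProgram(postShader);                          /* shader that post-processes    */
glUniform3f(glGetUniformLocation(postShader, "colour"), 1.0f, 0.5f, 0.2f);  /* colour parameters (example values) */
glBindTexture(GL_TEXTURE_2D, fboColorTex);         /* the FBO's colour attachment   */
drawFullScreenQuad();                              /* apply the post-process        */
glutSwapBuffers();                                 /* present the result            */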
I've got a shader to procedurally generate geometric shapes inside a quad. Essentially, you render a quad with this fragment shader active, and it calculates which fragments are on the border of the shape and discards everything else.
The problem is the dimensions of the quad. At the moment, I have to pass in the vertex data twice, once to the VBO and a second time as uniform variables to the shader, so it knows how big of a shape it's supposed to be creating.
Is there any way to only have to do this once, by having some way to get the coordinates of the top-left and bottom-right vertices of the current quad when I'm inside the fragment shader, so that I could simply give the vertex data to OpenGL once and have the shader calculate the largest shape that will fit inside the quad?
I think you probably want to use a geometry shader. Each vertex would consist of the position of a corner of the quad (a vector of 2-4 values) and the size of the quad (which could be a single value or up to 9 values, depending on how general you need the quad to be).
The geometry shader would generate the additional vertices for the quad and pass the size through to the fragment shader.
Depending on what exactly you're doing you may also be able to use point sprites and use the implicit coordinates that they have (gl_PointCoord). However, point sprites have a maximum size (which can be queried via GL_POINT_SIZE_RANGE and GL_POINT_SIZE_GRANULARITY).
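A sketch of the geometry-shader route (GLSL sources as C string literals; the attribute names, #version and the mvp uniform are illustrative):

static const char *quadVertSrc =
    "#version 150\n"
    "in vec2 corner;      /* one corner of the quad       */\n"
    "in vec2 quadSize;    /* width and height of the quad */\n"
    "out vec2 vSize;\n"
    "void main() { vSize = quadSize; gl_Position = vec4(corner, 0.0, 1.0); }\n";

static const char *quadGeomSrc =
    "#version 150\n"
    "layout(points) in;\n"
    "layout(triangle_strip, max_vertices = 4) out;\n"
    "uniform mat4 mvp;\n"
    "in  vec2 vSize[];\n"
    "out vec2 size;       /* quad size, available to the fragment shader */\n"
    "out vec2 local;      /* position inside the quad, from 0 to size    */\n"
    "void main() {\n"
    "    vec2 c = gl_in[0].gl_Position.xy;\n"
    "    vec2 s = vSize[0];\n"
    "    vec2 offs[4] = vec2[4](vec2(0.0, 0.0), vec2(s.x, 0.0), vec2(0.0, s.y), vec2(s.x, s.y));\n"
    "    for (int i = 0; i < 4; ++i) {\n"
    "        size = s; local = offs[i];\n"
    "        gl_Position = mvp * vec4(c + offs[i], 0.0, 1.0);\n"
    "        EmitVertex();\n"
    "    }\n"
    "    EndPrimitive();\n"
    "}\n";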
You could pull the vertices yourself: create a Uniform Buffer or a Texture Buffer with the vertex data and just access that buffer in the fragment shader. In the vertex shader, to know which vertex to output, you can use the built-in variable gl_VertexID.
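As a rough illustration of the gl_VertexID idea (GLSL as a C string; the buffer is assumed to be a texture buffer with a two-float format such as GL_RG32F):

static const char *pullVertSrc =
    "#version 150\n"
    "uniform samplerBuffer quadVerts;    /* the quad's corner positions */\n"
    "void main() {\n"
    "    vec2 p = texelFetch(quadVerts, gl_VertexID).xy;\n"
    "    gl_Position = vec4(p, 0.0, 1.0);\n"
    "}\n";

The fragment shader can declare the same samplerBuffer uniform and fetch whichever corners it needs to work out the quad's extents.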
I'd pass the top left and bottom right vertices of the quad as two extra input attributes for each vertex. The quads themselves get rendered as triangles.
In the vertex shader, declare two output attributes as flat (so they don't get interpolated) and copy the input attributes to these outputs.
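A small sketch of that as a GLSL vertex shader (kept as a C string literal; the attribute names are illustrative):

static const char *flatVertSrc =
    "#version 150\n"
    "in vec2 position;\n"
    "in vec2 quadTopLeft;       /* same value on every vertex of the quad */\n"
    "in vec2 quadBottomRight;\n"
    "flat out vec2 topLeft;     /* flat: not interpolated across the quad */\n"
    "flat out vec2 bottomRight;\n"
    "void main() {\n"
    "    topLeft = quadTopLeft;\n"
    "    bottomRight = quadBottomRight;\n"
    "    gl_Position = vec4(position, 0.0, 1.0);\n"
    "}\n";

The fragment shader then declares matching flat in vec2 topLeft; flat in vec2 bottomRight; and can derive the quad's dimensions from them.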
I'm writing a game and am now working on rendering the water. I have a polygon:
The whole scene is rendered into a single texture, and when the water's turn comes I want to pass a complex polygon into the shader. In the screenshot, for example, the polygon is the red water surface with blue borders. How can I pass into the shader only the area inside that polygon? For example, I want to fill everything inside the polygon with red.
Depending on what you’re doing with it, it might be better to render the polygon into a texture by itself and have your shader sample that. If the polygon’s going to be a predictable size, you could use a texture with roughly those dimensions and pass that frame’s position in your scene into the shader too.
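A rough sketch of that idea (illustrative names throughout; the GLSL in the trailing comment shows how the water shader could use the mask):

glBindFramebuffer(GL_FRAMEBUFFER, maskFbo);   /* FBO with a small colour texture attached */
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
drawWaterPolygon();                           /* plain white fill, no shading             */
glBindFramebuffer(GL_FRAMEBUFFER, 0);

/* In the water fragment shader (GLSL), the mask limits the effect to the
   polygon's interior:
       uniform sampler2D maskTex;
       ...
       if (texture(maskTex, uv).r < 0.5) discard;  // outside the polygon
       fragColor = vec4(1.0, 0.0, 0.0, 1.0);       // e.g. fill the inside with red
*/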
I've read some tutorials regarding Cg, yet one thing is not quite clear to me.
What exactly is the difference between vertex and fragment shaders?
And for what situations is one better suited than the other?
A fragment shader is the same as a pixel shader.
One main difference is that a vertex shader can manipulate the attributes of vertices, which are the corner points of your polygons.
The fragment shader on the other hand takes care of how the pixels between the vertices look. They are interpolated between the defined vertices following specific rules.
For example: if you want your polygon to be completely red, you would define all vertices red. If you want for specific effects like a gradient between the vertices, you have to do that in the fragment shader.
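For instance, a minimal GLSL pair (as C string literals; the names and #version are illustrative) where the vertex shader only passes a per-vertex colour through and the hardware interpolates it, so the fragment shader already receives a gradient and could add any per-pixel effect on top:

static const char *gradientVertSrc =
    "#version 150\n"
    "in vec3 position;\n"
    "in vec3 vertexColor;        /* e.g. red on every vertex gives a flat red polygon */\n"
    "uniform mat4 mvp;\n"
    "out vec3 color;             /* interpolated between the vertices                 */\n"
    "void main() {\n"
    "    color = vertexColor;\n"
    "    gl_Position = mvp * vec4(position, 1.0);\n"
    "}\n";

static const char *gradientFragSrc =
    "#version 150\n"
    "in vec3 color;              /* already a gradient if the vertex colours differ */\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    fragColor = vec4(color, 1.0);   /* per-pixel effects would go here */\n"
    "}\n";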
Put another way:
The vertex shader is part of the early steps in the graphics pipeline, somewhere between model coordinate transformation and polygon clipping, I think. At that point, nothing is really done yet.
However, the fragment/pixel shader is part of the rasterization step, where the image is calculated and the pixels between the vertices are filled in or "coloured".
Just read about the graphics pipeline here and everything will reveal itself:
http://en.wikipedia.org/wiki/Graphics_pipeline
The vertex shader runs on every vertex, while the fragment shader runs on every fragment (roughly, every covered pixel). The fragment shader is applied after the vertex shader; see the graphics pipeline article linked above for more on the GPU pipeline.
Nvidia Cg Tutorial:
Vertex transformation is the first processing stage in the graphics hardware pipeline. Vertex transformation performs a sequence of math operations on each vertex. These operations include transforming the vertex position into a screen position for use by the rasterizer, generating texture coordinates for texturing, and lighting the vertex to determine its color.
The results of rasterization are a set of pixel locations as well as a set of fragments. There is no relationship between the number of vertices a primitive has and the number of fragments that are generated when it is rasterized. For example, a triangle made up of just three vertices could take up the entire screen, and therefore generate millions of fragments!
Earlier, we told you to think of a fragment as a pixel if you did not know precisely what a fragment was. At this point, however, the distinction between a fragment and a pixel becomes important. The term pixel is short for "picture element." A pixel represents the contents of the frame buffer at a specific location, such as the color, depth, and any other values associated with that location. A fragment is the state required potentially to update a particular pixel.
The term "fragment" is used because rasterization breaks up each geometric primitive, such as a triangle, into pixel-sized fragments for each pixel that the primitive covers. A fragment has an associated pixel location, a depth value, and a set of interpolated parameters such as a color, a secondary (specular) color, and one or more texture coordinate sets. These various interpolated parameters are derived from the transformed vertices that make up the particular geometric primitive used to generate the fragments. You can think of a fragment as a "potential pixel." If a fragment passes the various rasterization tests (in the raster operations stage, which is described shortly), the fragment updates a pixel in the frame buffer.
Vertex shaders and fragment shaders are both features of 3-D rendering that does not use the fixed-function pipeline. In any 3-D rendering, vertex shaders are applied before fragment/pixel shaders.
The vertex shader operates on each vertex. If you have a fixed polygon mesh and you want to deform it in a shader, you have to do it in the vertex shader; any change to vertex positions or other per-vertex attributes belongs there.
The fragment shader takes the output from the vertex shader and associates colors, a depth value, etc. with each fragment. After these operations the fragment is sent to the framebuffer for display on the screen.
Some operations, for example lighting calculations, can be performed in the vertex shader as well as the fragment shader, but the per-fragment version generally gives a better result than the per-vertex one.
When rendering images via 3D hardware you typically have a mesh (points, polygons, lines), and these are defined by vertices. To manipulate vertices individually, typically for motion in a model or waves in an ocean, you use vertex shaders. These vertices can have a static colour or a colour assigned from textures; to manipulate the colours of the resulting pixels you use fragment shaders. At the end of the pipeline, when the view goes to the screen, you can also use fragment shaders.