I need to pass near and far values to gluPerspective in my OpenGL code. I am getting all my vertices into eye space by multiplying by the ModelView matrix in the vertex shader. My problem is that I need to find the minimum and maximum depth among these, so that I can pass those values to gluPerspective. How would I do that? Do I need to calculate them in the vertex shader, or on the client side (C code)?
near and far are typically not calculated, but simply set to reasonable values. near should be close enough that nearby objects are not clipped, but not so close that all z-buffer precision is wasted; far just needs to be far enough away to cover anything you want to render. Something like near = 0.1 and far = 1000.0 in world units is a common starting point.
In any event, the vertex shader isn't the best place to calculate them, because the matrices are passed into the shader; you need to know the values before you get that far.
(It can be viable/useful to calculate near/far dynamically for things like shadows, where you want high precision. In that case, base them on the bounding volumes of the objects you want to render, or some similar approximation.)
I am currently drawing a grid using a series of triangle strips. I am using this to render a height field, and I generate the vertex data entirely in the vertex shader, without any input buffers, just from the vertex and instance indices. This all works fine and is very efficient.
However, I now find myself also needing to implement border lines on this grid. The obvious solution here would be something like marching squares. Basically, what I want to achieve is something like this:
The black dots represent the vertices in the grid that are part of some set, and I want to shade the area inside the red line differently than that outside it.
Naïvely, this seems like it would be easy: add a value to the vertices that is 1 for vertices in the set and 0 for those outside it, and render differently depending on whether the interpolated value is above or below 0.5, for instance.
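In GLSL-flavoured pseudocode (I'm targeting Metal, but the idea translates directly; the colour uniforms are illustrative), that naive test would look something like this:

    #version 330 core
    // Naive fragment-shader test. inSet is 1.0 for vertices in the set and
    // 0.0 otherwise, interpolated across each triangle by the rasterizer.
    in float inSet;
    out vec4 fragColor;

    uniform vec4 insideColor;    // illustrative names, not from my code
    uniform vec4 outsideColor;

    void main() {
        // step() returns 1.0 where the interpolated value reaches 0.5
        fragColor = mix(outsideColor, insideColor, step(0.5, inSet));
    }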
However, because this is rendered as a triangle strip, this does not quite work. In practice, it ends up looking like this:
So, half the edges work and half end up with ugly square staircases.
I've been racking my brain for days over whether there is some trick for generating the vertex values differently, or some test more sophisticated than > 0.5, that would get closer to the intended shape without giving up the nice, simple triangle strips and having to actually generate geometry ahead of time, but I cannot think of one.
Has anyone ever dealt with a similar problem? Is there some clever solution I am missing?
I am doing this in Metal, but I don't expect this to depend much on the specific API used.
It sounds like you're trying to calculate the colors in the fragment shader independently of the mesh underneath. If so, you should decouple the color calculation from the mesh.
Assuming your occupancy is stored in a texture, use textureGather to fetch the four nearby occupancy values, determine the equation of the boundary, and then use the fractional part of the texture coordinates to determine the fragment's position relative to that boundary. (The devil here is in the details, in particular the ambiguous checkerboard case.)
Once you implement the above approach, it's very likely you won't even need the triangle-strip mesh anymore: simply fill your entire drawing area with a single large quad and let the fragment shader do the rest.
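For what it's worth, here is a rough GLSL sketch of that idea (the question uses Metal, whose gather() call is analogous; all names are illustrative, and the half-texel alignment of the gather coordinate and the ambiguous checkerboard case are deliberately glossed over):

    #version 400 core
    // Occupancy stored one texel per grid vertex: 0.0 outside, 1.0 inside.
    uniform sampler2D occupancy;
    uniform vec2 gridSize;      // grid dimensions in cells (assumption)

    in vec2 vGridPos;           // continuous grid coordinate from the VS
    out vec4 fragColor;

    void main() {
        // Four surrounding occupancy values. Gather order relative to
        // the cell: w = (0,0), z = (1,0), x = (0,1), y = (1,1).
        vec4 occ = textureGather(occupancy, vGridPos / gridSize, 0);
        vec2 f = fract(vGridPos);   // position within the current cell

        // Bilinearly interpolating the four corners gives a smooth field;
        // thresholding it at 0.5 yields a boundary that depends only on
        // the occupancy values, not on the mesh triangulation.
        float v = mix(mix(occ.w, occ.z, f.x), mix(occ.x, occ.y, f.x), f.y);
        float inside = step(0.5, v);

        fragColor = mix(vec4(0.2, 0.2, 0.2, 1.0),   // outside colour
                        vec4(0.9, 0.3, 0.3, 1.0),   // inside colour
                        inside);
    }

Because the boundary is reconstructed from the texture rather than from per-vertex interpolation, it comes out the same no matter how the underlying strip happens to be triangulated, which is exactly what removes the staircase artifacts.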
TL;DR I'm computing a depth map in a fragment shader and then trying to use that map in a vertex shader to see if vertices are 'in view' or not and the vertices don't line up with the fragment texel coordinates. The imprecision causes rendering artifacts, and I'm seeking alternatives for filtering vertices based on depth.
Background. I am very loosely attempting to implement a scheme outlined in this paper (http://dash.harvard.edu/handle/1/4138746). The idea is to represent arbitrary virtual objects as lots of tangent discs. While the authors wanted to replace triangles on some future graphics card, I'm implementing this on conventional cards; my discs are just fans of triangles ("Discs") around center points ("Points").
This is targeting WebGL.
The strategy I intend to use, similar to what's done in the paper, is:
Render the Discs in a Depth-Only pass.
In a second (or more) pass, compute what's visible based solely on which Points are "visible" - ie their depth is <= the depth from the Depth-Only pass at that x and y.
I believe the authors of the paper used a Gaussian blur on top of the equivalent of a GL_POINTS render of the Points (i.e. re-using the depth buffer from the depth-only pass, not clearing it) to actually render their object. It's hard to say: the process is unfortunately described in only a one-line comment, and I'm unsure how to duplicate it in WebGL anyway (a naive Gaussian blur will just blur in the background pixels that weren't touched by the GL_POINTS call).
Instead, I'm hoping to do something slightly different: re-render the discs in the second pass as cones (the center of the disc becomes the apex of the cone; think "closing the umbrella"), effectively computing a Voronoi diagram on the surface of the object (à la the Red Book, http://www.glprogramming.com/red/chapter14.html#name19). The idea is that an output pixel takes the color of the first disc to reach it as the radii grow from 0 to their natural size.
The crux of the problem is that only discs whose centers pass the depth test in the first pass should be allowed to carry on (as cones) to the 2nd pass. Because what's true at the disc center applies to the whole disc/cone, I believe this requires evaluating a depth test at a vertex or object level, and not at a fragment level.
Since WebGL support for reading depth buffers is still poor, in my first pass I pack the depth info into an RGBA framebuffer in a fragment shader. I then intended to use this in the vertex shader of the second pass via a sampler2D; any disc center closer than the corresponding texture2D() lookup would be allowed into the second pass; otherwise I would hack around "discarding" the vertex (its alpha would be set to 0, or some flag would be set that causes the fragments associated with the disc/cone to be discarded, etc.).
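For concreteness, the packing I mean is the standard trick along these lines (a sketch of the widely used encoding, not necessarily my exact code):

    // Pass 1 fragment shader: spread a [0,1) depth across four 8-bit
    // channels, so it survives being written to an RGBA8 framebuffer.
    vec4 packDepth(float depth) {
        vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
        enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
        return enc;
    }

    // Pass 2 vertex shader: recover the depth from the sampled RGBA value
    // so it can be compared against the disc center's depth.
    float unpackDepth(vec4 rgba) {
        return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0,
                              1.0 / 16581375.0));
    }

Each channel only has 8 bits, so some precision is inevitably lost in the depth -> RGBA -> depth round trip.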
This actually kind of worked, but it caused horrendous z-fighting between discs that were close together (very small perturbations wildly changed which discs were visible). I believe there is some floating-point error in the depth -> RGBA -> depth round trip. More importantly, though, the depth texture is written at fragment texel coordinates, but I'm sampling it at vertex positions, which almost certainly don't line up exactly with the relevant texel centers; so essentially I get depth plus or minus noise, and the noise is the issue. Adding or subtracting .000001 or so isn't sufficient: you just trade Type I errors for Type II. My render became more accurate when I switched the depth texture's interpolation from NEAREST to LINEAR, but it still wasn't good enough.
How else can I determine which discs' centers would be visible in a given render, so that I can run a second vertex/fragment pass (or more) focused on the objects associated with those points? Or: is there a better way to go about this in general?
I'm writing a program that draws a number of moving/rotating polygons using OpenGL. Each polygon has a location in world coordinates while its vertices are expressed in local coordinates (relative to polygon location). Each polygon also has a rotation.
The only way I can think of doing this is to calculate the vertex positions by translation/rotation on each frame and push them to the GPU to be drawn, but I was wondering if this could be performed in the vertex shader.
I thought I might express vertex locations in local coordinates and then add location and rotation attributes to each vertex, but then it occurred to me that this won't be any better than pushing new vertex positions on each frame.
Should I do this kind of calculation on the CPU, or is there a way to do it efficiently in the vertex shader?
The vertex shader is indeed responsible for transforming your geometry. However, the vertex shader runs for every single vertex in your scene. If you do these transformations inside the vertex shader, you repeat the same calculation over and over, yielding the same result every time (as opposed to simply multiplying a precomputed model-view-projection matrix with each vertex coordinate). So in terms of efficiency you're best off doing that on the CPU side.
If the models are small, as in your case, I don't expect much of a difference, because you still have to communicate where the polygons are supposed to be drawn somehow. Doing the calculation once on the CPU is still best: it happens once, independent of the vertex count of your polygons, and it will probably result in clearer code, since it's easier to see what you're doing.
These matrices are usually built on the CPU, since constructing them there once per object is efficient in general. Your best bet is to build the rotation/translation matrices on the CPU, send them to the shader as uniforms, and do the per-vertex multiplication on the GPU. Sending uniforms is not a very expensive operation in general, so you shouldn't worry about that.
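For example, a minimal vertex shader along those lines might look like this (a sketch; the names are illustrative, not from the question):

    #version 330 core
    // Vertices stay in local (model) coordinates; the per-polygon matrix
    // is rebuilt on the CPU each frame from the polygon's location and
    // rotation, and uploaded as a uniform.
    layout(location = 0) in vec2 localPos;

    uniform mat4 model;           // translation * rotation for this polygon
    uniform mat4 viewProjection;  // shared by every polygon

    void main() {
        gl_Position = viewProjection * model * vec4(localPos, 0.0, 1.0);
    }

The CPU then uploads one matrix per polygon per frame with glUniformMatrix4fv, instead of re-transforming and re-uploading every vertex.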
There are a great number of questions about specifying exactly two points of texture coordinates, and my shader already works with that concept: (1.0, 1.0) shows the entire image, and 1.0 / frame in one dimension or the other displays... well, unfortunately, it displays everything between 0.0 on the quad and the fractional result of that division.
What I'd like to do is, from the shader, control all four points of the texture coordinates. In every tutorial and every sample, the texture coordinate vec is always a vec2, implying that you only have control over the two end-points, and not the starting points. Is there a way to eliminate this limitation?
To give you an idea of why I want to do this (If it isn't blatantly obvious already), I'd like to pick a tile or animated frame out of a larger sheet.
Ideally, I'd also be able to find the dimensions (width and height) of the image in the shader, but if necessary, it isn't that difficult to pass those values in. I believe I'm currently using GLSL 2, meaning I'm unable to use the textureSize2D function in the shader (I've already tried it).
To simplify things: the UV coordinate pair you pass to a texture lookup denotes a single point to read from the texture, just one point, not an area. Depending on the sampler state, and on whether minification or magnification occurs, more than one texel may be used to compute the value at that point, but it is still one value tied to the UV you provide.
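Since the UV is just a point you compute, nothing stops you from remapping it in the shader to any sub-rectangle of the texture. A sketch in GLSL 1.20-style syntax (tileIndex and tileGrid are illustrative uniforms you would pass in yourself, which also sidesteps the missing textureSize2D):

    // Pick one tile/frame out of a larger sheet in the fragment shader.
    uniform sampler2D atlas;
    uniform float tileIndex;   // which tile or animation frame to display
    uniform vec2  tileGrid;    // e.g. vec2(8.0, 4.0) for an 8x4 sheet

    varying vec2 vUV;          // 0..1 across the quad

    void main() {
        vec2 tileSize = 1.0 / tileGrid;
        // Origin of the selected tile, counting row by row.
        vec2 tileOrigin = vec2(mod(tileIndex, tileGrid.x),
                               floor(tileIndex / tileGrid.x)) * tileSize;
        gl_FragColor = texture2D(atlas, tileOrigin + vUV * tileSize);
    }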
I'd like to render a vector-field visualization with OpenGL. Right now, I have a 3D cube filled with points, which I need to replace with arrows. I've read a lot about point sprites in OpenGL, and they seem to fit my needs pretty well.
I haven't really worked with textures yet, so there are some questions regarding the use of them together with Point Sprites:
First of all, is it possible to easily replace my points with arrows by just using a texture? If so, is it possible to rotate or scale those point sprites by an arbitrary degree using shaders?
If there are other possibilities than point sprites for achieving this, it would also be great to hear about them. I'm using OpenGL 4.2.
Point sprites are always screen-aligned squares. And they have an implementation-dependent maximum size.
If you need to do something like this, you should use a Geometry Shader that takes points as inputs, and outputs a quad (as 4 vertices of a triangle strip). Then you can do whatever you want.
Note that you should pass as little information out of the GS as you can get away with. Ideally, for maximum performance, you should output only gl_Position and a vec2 indicating where within the quad a particular vertex lies.
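A minimal sketch of such a geometry shader (uniform names are illustrative; this version emits camera-aligned quads, but you are free to rotate or scale the corner offsets per point however you like):

    #version 420 core
    // Expand each input point into a quad (4-vertex triangle strip).
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform mat4 viewProjection;
    uniform vec3 cameraRight;   // camera basis vectors, updated per frame
    uniform vec3 cameraUp;
    uniform float halfSize;     // half the quad's world-space extent

    out vec2 quadUV;            // where we are inside the quad

    void main() {
        // Assumes the vertex shader passes the point through in world space.
        vec3 center = gl_in[0].gl_Position.xyz;
        for (int i = 0; i < 4; ++i) {
            // Corners in strip order: (-1,-1), (1,-1), (-1,1), (1,1).
            vec2 corner = vec2((i & 1) == 0 ? -1.0 : 1.0,
                               (i & 2) == 0 ? -1.0 : 1.0);
            vec3 pos = center + (corner.x * cameraRight +
                                 corner.y * cameraUp) * halfSize;
            quadUV = corner * 0.5 + 0.5;
            gl_Position = viewProjection * vec4(pos, 1.0);
            EmitVertex();
        }
        EndPrimitive();
    }

Per-arrow rotation and scale can be fed in as vertex attributes, passed through the vertex shader, and applied to the corner offsets before the quad is emitted.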
is it possible to ... scale those point sprites by an arbitrary degree using shaders?
No, point sprites have an implementation-defined upper limit on size.