I want to render a circle with constant size in pixels, regardless of the item depth (in perspective projection).
When I use glPointSize there is a small difference in size that correlates with depth (near objects are larger than far ones).
The function description states:
The specified point size is multiplied with a distance attenuation
factor and clamped to the specified point size range, and further
clamped to the implementation-dependent point size range to produce
the derived point size.
Is there a way to rasterize a point at a completely constant size?
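For reference, in fixed-function (compatibility-profile) OpenGL the attenuation factor the description mentions is controlled through glPointParameterfv. A minimal sketch, assuming a context that exposes OpenGL 1.4, that resets the coefficients {a, b, c} to {1, 0, 0} so the factor sqrt(1 / (a + b*d + c*d^2)) equals 1 at every eye distance d:

#include <GL/gl.h>
#include <GL/glext.h> // GL_POINT_DISTANCE_ATTENUATION (OpenGL 1.4+)

// A sketch, not a confirmed fix for the behaviour described above:
// with coefficients {1, 0, 0} the derived point size should equal the
// requested size at any depth.
void usePixelConstantPoints(GLfloat sizeInPixels)
{
    const GLfloat noAttenuation[3] = { 1.0f, 0.0f, 0.0f };
    glPointParameterfv(GL_POINT_DISTANCE_ATTENUATION, noAttenuation);
    glPointSize(sizeInPixels);
}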
I'm trying to interpolate through a set of motion data points, using piecewise motions as defined by Wagner, so each data point contains 8 numbers. Each of those numbers forms a motion parameter, so I need to generate a B-spline function for each parameter (8 parameters, 8 B-spline functions in total). 3 of the numbers pertain to translation, 4 to rotation, and 1 to the translation weight. Each of these three groups is defined to have a specific degree for its respective curve, so general cubic B-spline interpolation does not work.
The formula for a cubic B-spline is explicitly defined, but that's not the case for higher curve degrees, so how do I interpolate a data set using a degree of, say, 4? I know LU decomposition can be used, but I'm not sure how to implement that in C++, or how to define the values needed for the decomposition (I have the knot vector and the time of each data point, but that's all).
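For what it's worth, here is a minimal sketch of the LU route in C++. It assumes the n-by-n collocation matrix A has already been filled with the basis values N_{j,p}(t_i) from the Cox-de Boor recursion (one row per data-point time t_i), and that b holds one motion parameter sampled at those times; all names are illustrative:

#include <cstddef>
#include <vector>

// Doolittle LU decomposition without pivoting - a sketch, not a production
// solver, but adequate for the diagonally dominant matrices that B-spline
// interpolation produces. Solves A x = b and returns x, the control-point
// coefficients for one parameter's curve.
std::vector<double> solveLU(std::vector<std::vector<double>> A,
                            std::vector<double> b)
{
    const std::size_t n = A.size();
    // Decompose A in place into L (unit lower) and U (upper).
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t i = k + 1; i < n; ++i) {
            const double m = A[i][k] / A[k][k];
            A[i][k] = m;                        // store L below the diagonal
            for (std::size_t j = k + 1; j < n; ++j)
                A[i][j] -= m * A[k][j];
        }
    // Forward substitution: solve L y = b (y overwrites b).
    for (std::size_t i = 1; i < n; ++i)
        for (std::size_t j = 0; j < i; ++j)
            b[i] -= A[i][j] * b[j];
    // Back substitution: solve U x = y.
    for (std::size_t i = n; i-- > 0; ) {
        for (std::size_t j = i + 1; j < n; ++j)
            b[i] -= A[i][j] * b[j];
        b[i] /= A[i][i];
    }
    return b;
}

Parameters in the same group share the same degree, knot vector, and data-point times, hence the same matrix A, so one decomposition could be reused for several right-hand sides rather than recomputed per parameter.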
I've read numerous descriptions of the behavior of dFdx(n) and dFdy(n), and I believe I still have a handle on partial derivatives from school. What I don't follow is: where does 'n' come from in the simplest possible example?
Reading about the GLSL built-in functions dFdx(n) and dFdy(n) without any context other than mathematics, I would interpret them as: "I have some function of x and y, f(x, y); I take the partial derivative of that function with respect to x, ∂f/∂x; and I evaluate that partial derivative for some values of x and y, which I assume correspond to the input parameter n above."
I've read many descriptions of how dFdx() and dFdy() allow you to find a window-space gradient for output fragments. The output-fragment case is what I'm most interested in at the moment, as I'm not trying to determine the rate of change of texture coordinates with respect to how the texture is being rasterized.
I'm looking to use dFdx(n) and dFdy(n) to find the window-space color gradient of output fragments. I don't fully understand how to mentally construct the function being differentiated, how that relates to the framebuffer, and how n relates to that. (E.g., does n relate to the 2x2 fragment neighborhood of the current fragment? To the window coordinate space of the entire framebuffer, such that I'm evaluating the gradient at that value? To something else?)
To simplify the discussion, I'm hoping that any responses can treat n as a scalar (float) and discuss just one dimension, dFdx().
Let's check the man page:
genType dFdx(genType p);
genType dFdy(genType p);
Available only in the fragment shader, these functions return the
partial derivative of expression p with respect to the window x
coordinate (for dFdx*) and y coordinate (for dFdy*).
dFdxFine and dFdyFine calculate derivatives using local differencing
based on the value of p for the current fragment and its immediate
neighbor(s).
dFdxCoarse and dFdyCoarse calculate derivatives using local
differencing based on the value of p for the current fragment's
neighbors, and will possibly, but not necessarily, include the value
for the current fragment. That is, over a given area, the
implementation can compute derivatives in fewer unique locations than
would be allowed for the corresponding dFdxFine and dFdyFine
functions.
dFdx returns either dFdxCoarse or dFdxFine. dFdy returns either
dFdyCoarse or dFdyFine. The implementation may choose which
calculation to perform based upon factors such as performance or the
value of the API GL_FRAGMENT_SHADER_DERIVATIVE_HINT hint.
Expressions that imply higher order derivatives such as dFdx(dFdx(n))
have undefined results, as do mixed-order derivatives such as
dFdx(dFdy(n)). It is assumed that the expression p is continuous and
therefore, expressions evaluated via non-uniform control flow may be
undefined.
Concentrating on the Fine variants: as each fragment's shader invocation reaches the dFd* call, the GPU collects the values passed in and computes the derivative from them, typically by taking the difference between neighbouring values and dividing by the fragment size.
In other words, the fragment shader has calculated F(x, y) for the fragment and passes it on to the GPU, which collects those values and passes back the dFdx based on the fragment right next to it, which would have passed F(x + e, y).
genType means that you can pass in a float, but you can also pass in a vec4 and get the component-wise dFd* values.
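A CPU toy model may make this concrete. This is not GPU code; myFunction is just an illustrative stand-in for whatever expression the shader passes to dFdx, and the only assumptions are that fragments are shaded in 2x2 quads and that neighbouring fragments are one window-space unit apart:

#include <cstdio>

float myFunction(float x, float y) { return 0.5f * x + 0.25f * y; } // stand-in for F(x, y)

int main()
{
    // The four fragments of one quad, p[row][col], at window
    // coordinates (10, 20) through (11, 21).
    float p[2][2];
    for (int row = 0; row < 2; ++row)
        for (int col = 0; col < 2; ++col)
            p[row][col] = myFunction(10.0f + col, 20.0f + row);

    // Fine-style local differencing: the difference between horizontal
    // (resp. vertical) neighbours, divided by the fragment size (1).
    float dFdx = p[0][1] - p[0][0]; // = F(x + 1, y) - F(x, y)
    float dFdy = p[1][0] - p[0][0]; // = F(x, y + 1) - F(x, y)
    std::printf("dFdx = %f, dFdy = %f\n", dFdx, dFdy);
}

This prints dFdx = 0.5 and dFdy = 0.25, the coefficients of x and y in the stand-in function, which matches the mathematical reading: n is not an evaluation point, it is the already-evaluated value of F at this fragment, and the GPU supplies the neighbouring evaluations implicitly.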
I've been doing some operations with 2D points expressed in homogeneous coordinates (x, y, w). Sometimes one of the coordinates becomes very large, and this can easily affect subsequent results.
For example, intersections can be determined easily with a vector cross product, and this can produce large numbers. E.g. (50, 100, 1) x (-100, 50, 1) = (50, -150, 12500).
I feel these results should somehow be normalised. In the example above, simply dividing all coordinates by 12500 seems sensible. In general I can see two ways:
divide by the coordinate with the largest absolute value (may not be w), or
divide by w (if w != 0) so that every point is expressed as either (x, y, 0) or (x, y, 1).
So my question is, which way is better and why?
I'm using C# with float values, if that's of any practical relevance.
Downscaling by the max absolute value of the components is the safer option among all correct ones - you never have to worry about overflows as long as the max > 1.0. Dividing by w is only required to convert to Euclidean points.
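A minimal sketch of that downscaling (in C++ for illustration, though the question uses C#; the type and function names are mine):

#include <algorithm>
#include <cmath>

struct Homogeneous2D { double x, y, w; };

// Scale so the largest component has magnitude 1. Unlike dividing by w,
// this is well defined for points at infinity (w == 0) and cannot overflow.
Homogeneous2D normalizeByMaxAbs(Homogeneous2D p)
{
    const double m = std::max({ std::fabs(p.x), std::fabs(p.y), std::fabs(p.w) });
    if (m > 0.0) { p.x /= m; p.y /= m; p.w /= m; }
    return p;
}

Applied to (50, -150, 12500) from the question this yields (0.004, -0.012, 1), which coincides with dividing by w here only because w happens to have the largest magnitude.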
Incidentally, working with floats is rarely a good idea in internal calculations, although it may make sense for storing final results that correspond to physical quantities you measure. When doing geometric calculations, in my experience, truncation errors often propagate fast enough, even with well-conditioned algorithms, to make results calculated in single precision rather worthless.
I'm looking to interpolate some contour lines to generate a 3D view. The contours are not stored in a picture; the coordinates of each point of each contour are simply stored in a std::vector.
For convex contours, it seems (I didn't check it myself) that the height can easily be calculated by linear interpolation, using the distance between the two closest points of the two closest contours.
But my contours are not necessarily convex, so it's trickier... actually, I don't have any idea what kind of algorithm I can use.
UPDATE: 26 Nov. 2013
I finished writing a discrete Laplace example:
you can get the code here
What you have is basically the classical Dirichlet problem:
Given the values of a function on the boundary of a region of space, assign values to the function in the interior of the region so that it satisfies a specific equation (such as Laplace's equation, which essentially requires the function to have no arbitrary "bumps") everywhere in the interior.
There are many ways to calculate approximate solutions to the Dirichlet problem. A simple approach, which should be well suited to your problem, is to start by discretizing the system; that is, you take a finite grid of height values, assign fixed values to those points that lie on a contour line, and then solve a discretized version of Laplace's equation for the remaining points.
Now, what Laplace's equation actually specifies, in plain terms, is that every point should have a value equal to the average of its neighbors. In the mathematical formulation of the equation, we require this to hold true in the limit as the radius of the neighborhood tends towards zero, but since we're actually working on a finite lattice, we just need to pick a suitable fixed neighborhood. A few reasonable choices of neighborhoods include:
the four orthogonally adjacent points surrounding the center point (a.k.a. the von Neumann neighborhood),
the eight orthogonally and diagonally adjacent grid points (a.k.a. the Moore neighborhood), or
the eight orthogonally and diagonally adjacent grid points, weighted so that the orthogonally adjacent points are counted twice (essentially the sum or average of the above two choices).
(Out of the choices above, the last one generally produces the nicest results, since it most closely approximates a Gaussian kernel, but the first two are often almost as good, and may be faster to calculate.)
Once you've picked a neighborhood and defined the fixed boundary points, it's time to compute the solution. For this, you basically have two choices:
Define a system of linear equations, one for each (unconstrained) grid point, stating that the value at each point is the average of its neighbors, and solve it. This is generally the most efficient approach if you have access to a good sparse linear system solver, but writing one from scratch may be challenging.
Use an iterative method, where you first assign an arbitrary initial guess to each unconstrained grid point (e.g. using linear interpolation, as you suggest) and then loop over the grid, replacing the value at each point with the average of its neighbors. Then keep repeating this until the values stop changing (much).
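A minimal sketch of the second, iterative option with the von Neumann neighborhood (the grid representation, names, and stopping tolerance are illustrative assumptions, not part of the answer):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Relax the interior of `height` toward the discrete Laplace solution.
// `fixed` marks cells lying on a contour line; they keep their value and
// act as the boundary data. Cells on the grid edge are left untouched,
// which assumes the contours enclose the region of interest.
void relaxLaplace(std::vector<std::vector<double>>& height,
                  const std::vector<std::vector<bool>>& fixed,
                  double tolerance = 1e-4)
{
    const std::size_t rows = height.size(), cols = height[0].size();
    double maxChange;
    do {
        maxChange = 0.0;
        for (std::size_t i = 1; i + 1 < rows; ++i)
            for (std::size_t j = 1; j + 1 < cols; ++j) {
                if (fixed[i][j]) continue;
                const double avg = 0.25 * (height[i - 1][j] + height[i + 1][j]
                                         + height[i][j - 1] + height[i][j + 1]);
                maxChange = std::max(maxChange, std::fabs(avg - height[i][j]));
                height[i][j] = avg; // replace with the neighborhood average
            }
    } while (maxChange > tolerance); // i.e. until values stop changing (much)
}

Updating in place like this (Gauss-Seidel style) typically converges in fewer sweeps than keeping a separate copy of the grid for each pass (Jacobi style), and both settle on the same solution.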
You can generate the Constrained Delaunay Triangulation of the vertices and line segments describing the contours, then use the height defined at each vertex as a Z coordinate.
The resulting triangulation can then be rendered like any other triangle soup.
Despite the name, you can use TetGen to generate the triangulations, though it takes a bit of work to set up.
I have data points of different dimensions and I want to compare them so that I can remove redundant points. I tried to make the points the same dimension using PCA, but the problem is that while PCA reduced the dimensions, I lost what each dimension means, as the resulting points are different from the points I had. So I wonder if there is any other way to do this; in other words, is there any way to compare points with different numbers of dimensions?
Assume relevant null values for missing dimensions? For instance, if you want to compare a 2D point (x, y) with a 3D one (x, y, z), you can assume a z-value of 0 for the 2D point. That corresponds to the x,y plane.
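A minimal sketch of that idea in C++ (the function name and the choice of the Euclidean metric are mine, for illustration):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Compare two points of possibly different dimensions by padding the
// shorter one with 0 for each missing dimension, then measuring the
// ordinary Euclidean distance.
double distanceWithPadding(std::vector<double> a, std::vector<double> b)
{
    const std::size_t n = std::max(a.size(), b.size());
    a.resize(n, 0.0); // missing dimensions assumed to be null (0)
    b.resize(n, 0.0);
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        sum += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(sum);
}

With this, a 2D point (x, y) compares against a 3D point (x, y, z) exactly as if it were (x, y, 0), i.e. as a point in the x,y plane.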