Understanding the basics of dFdx and dFdy - OpenGL

I've read numerous descriptions of the behavior of dFdx(n) and dFdy(n), and I believe I still have a handle on partial derivatives from school. What I don't follow is: where does 'n' come from in the simplest possible example?
Reading about the GLSL built-in functions dFdx(n) and dFdy(n) with no context other than mathematics, I would interpret them as: "I have some function of x and y, f(x,y); I take the partial derivative of that function with respect to x, ∂f/∂x; and I evaluate that partial derivative at some value of x and y, which I assume is the input parameter n above."
I've read many descriptions of how dFdx() and dFdy() allow you to find a window-space gradient for output fragments. The output-fragment case is what I'm most interested in at the moment, as I'm not trying to determine the rate of change of texture coordinates with respect to how the texture is being rasterized.
I'm looking to use dFdx(n) and dFdy(n) to find the window-space color gradient of output fragments. I don't fully understand how to mentally construct the function being differentiated, how that relates to the framebuffer, and how n relates to that (e.g. does n relate to the 2x2 fragment neighborhood of the current fragment, to the window coordinate space of the entire framebuffer such that I'm evaluating the gradient at that value, or to something else)?
I'm hoping that the input type of n in any responses to this question is a scalar (float) and that we just discuss one dimension, dFdx(), to simplify the discussion.

Let's check the man page:
genType dFdx(genType p);
genType dFdy(genType p);
Available only in the fragment shader, these functions return the
partial derivative of expression p with respect to the window x
coordinate (for dFdx*) and y coordinate (for dFdy*).
dFdxFine and dFdyFine calculate derivatives using local differencing
based on the value of p for the current fragment and its immediate
neighbor(s).
dFdxCoarse and dFdyCoarse calculate derivatives using local
differencing based on the value of p for the current fragment's
neighbors, and will possibly, but not necessarily, include the value
for the current fragment. That is, over a given area, the
implementation can compute derivatives in fewer unique locations than
would be allowed for the corresponding dFdxFine and dFdyFine
functions.
dFdx returns either dFdxCoarse or dFdxFine. dFdy returns either
dFdyCoarse or dFdyFine. The implementation may choose which
calculation to perform based upon factors such as performance or the
value of the API GL_FRAGMENT_SHADER_DERIVATIVE_HINT hint.
Expressions that imply higher order derivatives such as dFdx(dFdx(n))
have undefined results, as do mixed-order derivatives such as
dFdx(dFdy(n)). It is assumed that the expression p is continuous and
therefore, expressions evaluated via non-uniform control flow may be
undefined.
Concentrating on the Fine variant: as each fragment invocation reaches the dFd* call, the GPU collects the values passed in and computes the derivative from them, typically by taking the difference between neighbouring values and dividing by the fragment size.
In other words, the fragment shader has calculated F(x, y) for the fragment and passes it on to the GPU, which collects the values and passes back dFdx based on the fragments right next to it, which would have passed F(x+e, y).
genType means you can pass in a float; you can also pass in a vec4 and get the component-wise dFd* values.
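The local-differencing idea can be sketched outside of GLSL (Python here, since a fragment shader can't run standalone). The 2x2 quad layout and one-pixel fragment spacing are assumptions based on the man-page description, not real GPU code:

```python
# Sketch: how dFdxFine/dFdyFine could be computed for a 2x2 quad of
# fragments. f[y][x] is whatever expression p each fragment passed to
# dFd*; fragment spacing is assumed to be 1 window-space pixel, so the
# difference itself is the derivative.

def dFdx_fine(f):
    """Per-fragment horizontal derivative: right value minus left value."""
    return [[f[y][1] - f[y][0]] * 2 for y in range(2)]

def dFdy_fine(f):
    """Per-fragment vertical derivative: bottom row minus top row."""
    return [[f[1][x] - f[0][x] for x in range(2)]] * 2

# Example: each fragment passed p = gl_FragCoord.x * 3.0 for a quad whose
# fragment centers are at x = 10.5 and 11.5.
quad = [[10.5 * 3.0, 11.5 * 3.0],
        [10.5 * 3.0, 11.5 * 3.0]]
print(dFdx_fine(quad))  # every fragment in the quad sees 3.0
```

Note that all four fragments in the quad get the same dFdx value here; the Coarse variants are allowed to share even more.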

Related

Intrinsic Parameters of Camera

I'm trying to do triangulation for 3D reconstruction and I came across an interesting observation which I cannot justify.
I have 2 sets of images. I know the correspondences and I'm finding the intrinsic and extrinsic parameters using a direct linear transformation. While I'm able to properly reconstruct the original scene, the intrinsic parameters are different even though the pictures are taken from the same camera. How is it possible to have different intrinsic parameters if the camera is the same? Also, if the intrinsic parameters are different, how am I able to reconstruct the scene perfectly?
Thank you
You haven't specified what you mean by "different", so I'm just going to point out two possible sources of differences that come to mind. Let's denote the matrix of intrinsic parameters by K.
The first possible difference could just come from a scaling difference. If the second time you estimate your intrinsics matrix, you end up with a matrix
K_2=lambda*K
then it doesn't make any difference when projecting or reprojecting, since for any 3d point X you'll have
K_2*X = K*(lambda*X) // X is the same as lambda*X in projective geometry
The same thing happens when you backproject the point: you just obtain a direction, and then your estimation algorithm (e.g. least squares or a simpler geometric solution) takes care of estimating the depth.
The second reason for the difference you observe could just come from numerical imprecisions. Since you haven't given any information regarding the magnitude of the difference, I'm not sure if that is relevant to your case.
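The scaling argument is easy to check numerically. A minimal sketch (the focal length and principal point below are made-up values, not from the question):

```python
import numpy as np

# A scaled intrinsics matrix K2 = lambda*K projects every 3D point to the
# same pixel, because homogeneous coordinates are only defined up to scale.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
lam = 2.5
K2 = lam * K

X = np.array([0.3, -0.1, 2.0])   # a 3D point in camera coordinates

def project(K, X):
    p = K @ X
    return p[:2] / p[2]          # homogeneous division cancels lambda

print(project(K, X), project(K2, X))  # identical pixel coordinates
```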

Why are we differentiating then integrating in active contour energy calculation?

So I was reading about active contours, and the equation doesn't really make much sense to me. I looked for other resources, but none of them really explained this.
What does the integral of the derivative of a squared function mean?
The derivative is the rate of change of the function, so if we want a smooth function we want small changes along it, meaning a small derivative. A derivative of zero gives a constant function, which is the smoothest function we can get.
We use the squared (L2) distance between the derivative and zero, which is where the square comes from.
To get a smooth function along the curve, we want the total change along the curve to be small. That total is the integral of the squared derivative along the curve.
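A discrete version of that smoothness term can be sketched as follows (the sampling, and absorbing the ds step into the sum, are simplifications of the continuous energy):

```python
# Sketch: discretising the smoothness term E = integral of |v'(s)|^2 ds of
# an active contour. The curve v is sampled at n points; the derivative
# becomes a finite difference between neighbours, and the integral becomes
# a sum over the segments.

def smoothness_energy(points):
    e = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        e += dx * dx + dy * dy        # squared finite-difference derivative
    return e

straight = [(i, 0.0) for i in range(5)]                 # evenly spaced, smooth
jagged   = [(0, 0), (1, 2), (2, -2), (3, 2), (4, 0)]    # oscillating
print(smoothness_energy(straight), smoothness_energy(jagged))
```

The jagged contour gets a much larger energy, which is exactly what the minimisation penalises.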

Given 2 points with known speed direction and location, compute a path composed of (circle) arcs

So, I have two points, say A and B, each with a known (x, y) coordinate and a speed vector in the same coordinate system. I want to write a function that generates a set of arcs (radius and angle) that lead from state A to state B.
The angle difference is known, since I can get it by comparing the speed unit vectors. Say I move a certain distance along an arc (radius=r, angle=theta); then I end up in the exact same kind of situation. Does it have a unique solution? I only need one solution, or even an approximation.
Of course I could solve it with a certain circle plus a line (radius=infinite), but that's not what I want to do. I think there's a library with a function for this, since it's quite a common problem.
A biarc is a smooth curve consisting of two circular arcs. Given two points with tangents, it is almost always possible to construct a biarc passing through them (with the correct tangents).
This is a very basic routine in geometric modelling, and it is indispensable for smoothly approximating an arbitrary curve (Bezier, NURBS, etc.) with arcs. Approximation with arcs and lines is heavily used in CAM, because modellers handle NURBS without a problem, but machine controllers usually understand only lines and arcs. So I strongly suggest reading up on this topic.
In particular, here is a great article on biarcs; I seriously advise reading it. It even contains some working code and an interactive demo.
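As a sketch of one building block (not a full biarc solver): the arc that leaves point A along tangent direction d and passes through B can be found by placing the centre on the line through A perpendicular to d, then solving |C-A| = |C-B|. A full biarc joins two such arcs at a chosen junction point.

```python
# Sketch: the circular arc starting at A with unit tangent d and passing
# through B. The centre C = A + r*n lies on the normal of d through A;
# requiring |C-B| = |r| gives a closed form for the signed radius r.

def arc_through(A, d, B):
    n = (-d[1], d[0])                       # left normal of the tangent
    ax, ay = B[0] - A[0], B[1] - A[1]       # chord vector A -> B
    denom = 2.0 * (n[0] * ax + n[1] * ay)
    if abs(denom) < 1e-12:
        return None                         # A, B collinear with d: use a line
    r = (ax * ax + ay * ay) / denom         # signed radius
    C = (A[0] + r * n[0], A[1] + r * n[1])
    return C, r

C, r = arc_through((0.0, 0.0), (1.0, 0.0), (1.0, 1.0))
print(C, r)   # quarter circle: centre (0, 1), radius 1
```

The degenerate (collinear) case is exactly the "line as an arc of infinite radius" from the question.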

What does it mean to normalize a value?

I'm currently studying lighting in OpenGL, which utilizes a function in GLSL called normalize. According to the OpenGL docs, it "calculates the normalized product of two vectors". However, that still doesn't explain what "normalized" means. I have tried looking up what a normalized product is on Google, but I can't seem to find anything about it. Can anyone explain what normalizing means and provide a few examples of a normalized value?
I think the confusion comes from the idea of normalizing "a value" as opposed to "a vector"; if you just think of a single number as a value, normalization doesn't make any sense. Normalization is only useful when applied to a vector.
A vector is a sequence of numbers; in 3D graphics it is usually a coordinate expressed as v = <x,y,z>.
Every vector has a magnitude or length, which can be found using Pythagoras' theorem: |v| = sqrt(x^2 + y^2 + z^2). This is basically the length of a line from the origin <0,0,0> to the point expressed by the vector.
A vector is normalized (a unit vector) if its length is 1. That's it!
To normalize a vector means to change it so that it points in the same direction (think of that line from the origin) but its length is one.
The main reason we use normal vectors is to represent a direction; for example, if you are modeling a light source that is an infinite distance away, you can't give precise coordinates for it. But you can indicate where to find it from a particular point by using a normal vector.
It's a mathematical term and this link explains its meaning in quite simple terms:
Operations in 2D and 3D computer graphics are often performed using copies of vectors that have been normalized ie. converted to unit vectors... Normalizing a vector involves two steps:
calculate its length, then,
divide each of its (xy or xyz) components by its length...
It's complicated to explain if you don't know much about vectors or vector algebra. (You can check this article for general concepts such as vectors, normal vectors, and the normalization procedure.)
But the procedure or concept of "normalizing" refers to the process of making something standard, or "normal."
In the case of vectors, let’s assume for the moment that a standard vector has a length of 1. To normalize a vector, therefore, is to take a vector of any length and, keeping it pointing in the same direction, change its length to 1, turning it into what is called a unit vector.
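All the answers above describe the same two-step procedure, which can be sketched as:

```python
import math

# Normalize a vector: compute its length, then divide each component by it.
# The result points in the same direction but has length 1.

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

n = normalize([3.0, 4.0, 0.0])               # a 3-4-5 triangle: length 5
print(n)                                     # [0.6, 0.8, 0.0]
print(math.sqrt(sum(c * c for c in n)))      # length is now 1.0
```

A real implementation (like GLSL's normalize) would also need to decide what to do for the zero vector, which has no direction and cannot be normalized.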

C++ Cubic Spline Trajectory

I'm writing a C++ program to generate a cubic spline trajectory for a set of points. These points need not be sorted along the x-axis. For example, it may be a circle, etc.
I have found some libraries on the web, for example the ALGLIB library, or the class here: https://www.marcusbannerman.co.uk/index.php/home/42-articles/96-cubic-spline-class.html, but all of these libraries sort the data points. I don't want that, because what I want to generate is something like a circle. Is there any way to achieve this?
Splines are piecewise functions with respect to some independent variable (usually t, though they seem to use x in the code you have linked). Since the specific piece to be evaluated depends on the control points closest to the input value t, it makes sense to sort the control points by t so that you can quickly determine which piece needs to be evaluated.
However, even if they were not sorted, you still could not create a circle with a single one-dimensional spline. Your spline function y = f(t) only gives you one value for any given t. If you are graphing y with respect to t and want a circle with radius 1 about the origin, you would need f(0) to equal both 1 and -1, which doesn't make any sense.
To get something like a circle you instead need a two dimensional spline, or two splines; one for the x value and one for the y value. Once you have these two spline functions f(t) and g(t), then you simply evaluate both functions at the same t and that will give you the x and y values of your spline for that t.
The simple, common trick is to use cumulative linear arclength as the parameter. So, if I have a set of points in a curve as simply (x,y) pairs in the plane where x and y are vectors, do this:
t = cumsum([0;sqrt(diff(x(:)).^2 + diff(y(:)).^2)]);
This gives us the cumulative distance along the piecewise linear segments between each pair of points, presented in the order you have them. Fit the spline curve as two separate spline models, thus x(t) and y(t). So you could use interp1, or use the spline or pchip functions directly. Note that pchip and spline will have different properties when you build that interpolant.
Finally, in the event that you really had a closed curve, so that x(1) and x(end) were supposed to be the same, then you would really want to use a spline model with periodic end conditions. I don't know of any implementations for that except in the spline model in my SLM tools, but it is not difficult to do in theory.
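The cumsum line above translates to Python roughly as follows. This is only a sketch of the parameterization step: plain linear interpolation would stand in for the spline fit, and something like scipy.interpolate.CubicSpline with bc_type='periodic' could supply the periodic end conditions mentioned.

```python
import math

# Cumulative linear arclength along an ordered list of (x, y) points:
# t[i] is the total chord length from point 0 to point i. Fitting x(t)
# and y(t) against this parameter handles closed shapes like circles,
# where sorting by x would destroy the ordering.

def cumulative_arclength(xs, ys):
    t = [0.0]
    for i in range(1, len(xs)):
        t.append(t[-1] + math.hypot(xs[i] - xs[i-1], ys[i] - ys[i-1]))
    return t

# Points around a unit circle, in traversal order (not sorted by x);
# the last point repeats the first to close the curve.
angles = [2 * math.pi * k / 8 for k in range(9)]
xs = [math.cos(a) for a in angles]
ys = [math.sin(a) for a in angles]

t = cumulative_arclength(xs, ys)
print(t[-1])   # total chord length, close to the circumference 2*pi
```

With t in hand, you would fit two independent splines x(t) and y(t) and evaluate both at the same parameter value, exactly as the answers above describe.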