Fragment shader - drawing a line? - opengl

I was interested in how to draw a line with a specific width (or multiple lines) using a fragment shader. I stumbled on this post, which seems to explain it.
The challenge I have is understanding the logic behind it.
A couple of questions:
Our coordinate space in this example is (0.0-1.0,0.0-1.0), correct?
If so, what is the purpose of the "uv" variable? Since thickness is 500, the "uv" variable will be very small, and therefore so will the distances from it to points 1 and 2 (stored in the a and b variables)?
Finally, what is the logic behind the h variable?

I will try to answer all of your questions one by one:
1) Yes, this is in fact correct.
2) It is common in 3D computer graphics to express coordinates (within certain boundaries) as floating-point values between 0 and 1 (or between -1 and 1). First of all, this makes it quite easy to decide whether a given value crosses said boundary or not, and it abstracts away from the concept of a "pixel" as a discrete image unit; furthermore, this common practice can be found pretty much everywhere else (think of device coordinates or texture coordinates).
Don't be afraid that the values you are working with are less than one; in computer graphics you usually deal with floating-point arithmetic, and float types are quite good at expressing real values lying around 1.
3) The formula given for h consists of two parts: the square-root part and the 2/c coefficient. The square-root part should be well known from school math classes: it is Heron's formula for the area of the triangle (with sides a, b, c). The 2/c factor extracts the height of said triangle, which is stored in h and is also the distance between the point uv and the "ground line" of the triangle. This distance is then used to decide where uv is in relation to the line p1-p2.
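For illustration, here is that h computation as a small sketch in plain C++ (the original is GLSL, but the arithmetic is identical); a, b and c are the three side lengths of the triangle (uv, p1, p2), with c being the side from p1 to p2:

#include <algorithm>
#include <cmath>

// Distance from uv to the line p1-p2, via Heron's formula.
double heightAboveLine(double a, double b, double c) {
    double s = 0.5 * (a + b + c);               // semi-perimeter
    double t = s * (s - a) * (s - b) * (s - c);
    double area = std::sqrt(std::max(t, 0.0));  // Heron's formula (clamped for round-off)
    return 2.0 * area / c;                      // from area = c * h / 2
}

Comparing the returned h against half the desired line width then tells you whether a fragment lies on the line.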

Related

Finding "how straight" is a shape. openCV

I'm working on an application where I have a set of contours (each one representing a potential line) and I want to check "how straight" that contour/shape is.
The article I am using as a reference uses the following technique:
It matches a "segmented" line crossing the shape, like so:
Then it grades how "straight" the line is.
Here's an example of the contours I am working on:
How would you go about implementing this technique?
Is there any other way of checking "how straight" a contour/shape is?
Regards!
My first guess would be to use a coefficient of determination. That is, fit a line to all your points, assuming some reasonable origin where you won't run into rounding errors, and calculate R^2.
A more advanced approach, if all contours are disconnected components, would be to calculate the structure model index (the link is for bone morphometry, but they explain the concept and cite the original paper.) This gives you a number that tells you how much your segment is "like a rod". This is just an idea, though. Anything that forms curves or has branches will be less and less like a rod.
I would say that it also depends on what you are using the metric for, and whether your contours always run generally left to right.
An additional method would be to create the covariance matrix of your points, calculate the eigenvalues from that matrix, and take their ratio (where the ratio is greater than or equal to 1; otherwise, invert the ratio). This is the basic principle behind a PCA, besides the final ratio. If you have a rather linear data set (the data set varies in only one direction) then you will have a very large ratio. As the data set becomes less and less linear (or more uncorrelated) you will see the ratio approach one. A perfectly linear data set would be infinity and a perfect circle one (I believe, but I would appreciate it if someone could verify this for me). Also, working in two dimensions would mean the calculation would be computationally cheap and straightforward.
This would handle outliers very well and would be invariant to the rotation and shape of your contour. You also have a number which is always positive. The only issue would be preventing overflow when dividing the two eigenvalues. Then again you could always divide the smaller eigenvalue by the larger and your metric would be bound between zero and one, one being a circle and zero being a straight line.
Either way, you would need to test if this parameter is sensitive enough for your application.
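To make the eigenvalue-ratio idea concrete, here is a minimal C++ sketch (names are my own); for a 2x2 covariance matrix the eigenvalues have a closed form, so no linear algebra library is needed:

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Returns lambda_min / lambda_max of the covariance matrix:
// near 0 for a nearly linear point set, near 1 for a circular one.
double eigenRatio(const std::vector<Pt>& pts) {
    double mx = 0, my = 0;
    for (const Pt& p : pts) { mx += p.x; my += p.y; }
    mx /= pts.size();
    my /= pts.size();
    double cxx = 0, cyy = 0, cxy = 0;
    for (const Pt& p : pts) {
        cxx += (p.x - mx) * (p.x - mx);
        cyy += (p.y - my) * (p.y - my);
        cxy += (p.x - mx) * (p.y - my);
    }
    // Closed-form eigenvalues of [[cxx, cxy], [cxy, cyy]].
    double mean = 0.5 * (cxx + cyy);
    double root = std::sqrt(0.25 * (cxx - cyy) * (cxx - cyy) + cxy * cxy);
    return (mean + root) > 0 ? (mean - root) / (mean + root) : 0.0;
}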
One example for a simple algorithm is using the dot product between two segments to determine the angle between them. The formula for dot product is:
A * B = ||A|| ||B|| cos(theta)
Solving the equation for cos(theta) yields
cos(theta) = (A * B / (||A|| ||B||))
Since cos(0) = 1 and cos(pi) = -1.0, and you're checking for the "straightness" of the lines, a line whose average of cos(theta) values is closest to -1.0 is the straightest.
straightness = SUM(cos(theta))/(number of line segments)
where a straight line is close to -1.0, and a non-straight line approaches 1.0. Keep in mind this is a cursory evaluation of this algorithm and it obviously has edge cases and caveats that would need to be addressed in an implementation.
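As a rough sketch of that scoring in C++ (names are my own), using vectors that point away from each shared vertex so that a perfectly straight polyline scores -1:

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Average cos(theta) over consecutive segment pairs; -1 = straight,
// values approaching 1 = sharply folded back on itself.
double straightness(const std::vector<Pt>& pts) {
    double sum = 0.0;
    int count = 0;
    for (std::size_t i = 1; i + 1 < pts.size(); ++i) {
        double ax = pts[i - 1].x - pts[i].x, ay = pts[i - 1].y - pts[i].y;
        double bx = pts[i + 1].x - pts[i].x, by = pts[i + 1].y - pts[i].y;
        double na = std::hypot(ax, ay), nb = std::hypot(bx, by);
        if (na > 0 && nb > 0) {
            sum += (ax * bx + ay * by) / (na * nb);  // cos(theta)
            ++count;
        }
    }
    return count > 0 ? sum / count : -1.0;
}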
The trick is to use image moments. In short, you calculate the minimum inertia around an axis, the inertia around an axis perpendicular to this, and the ratio between them (which is always between 0 and 1, since inertia is non-negative).
For a straight line, the inertia along the line is zero, so the ratio is also zero. For a circle, the inertia is the same along all axes, so the ratio is one. Your segmented line will score 0.01 or so, as it's a fairly good match.
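Assuming OpenCV contours, a sketch of the moments version; the minimum and maximum inertia are the eigenvalues of the 2x2 central-moment matrix, using the same closed form as in the covariance sketch above:

#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

// Ratio of min to max inertia: 0 for a straight line, 1 for a circle.
double inertiaRatio(const std::vector<cv::Point>& contour) {
    cv::Moments m = cv::moments(contour);
    // Eigenvalues of [[mu20, mu11], [mu11, mu02]].
    double mean = 0.5 * (m.mu20 + m.mu02);
    double root = std::sqrt(0.25 * (m.mu20 - m.mu02) * (m.mu20 - m.mu02)
                            + m.mu11 * m.mu11);
    double iMax = mean + root;
    return iMax > 0.0 ? (mean - root) / iMax : 0.0;
}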
A simpler method is to compare the circumference of the convex polygon containing the shape with the circumference of the shape itself. For a line they're trivially equal, and for a not too crooked shape they're still comparable.
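A sketch of that comparison, again assuming OpenCV contours:

#include <opencv2/imgproc.hpp>
#include <vector>

// Perimeter of the convex hull divided by the perimeter of the contour:
// 1.0 for convex shapes (including straight lines), smaller as the
// contour wiggles relative to its hull.
double hullPerimeterRatio(const std::vector<cv::Point>& contour) {
    std::vector<cv::Point> hull;
    cv::convexHull(contour, hull);
    double contourLen = cv::arcLength(contour, /*closed=*/true);
    double hullLen = cv::arcLength(hull, /*closed=*/true);
    return contourLen > 0.0 ? hullLen / contourLen : 1.0;
}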

Is scale important in OpenGL?

Does the scale of a scene matter in OpenGL? By scale I mean, for instance, drawing a 1 unit cube but setting the camera position 1000 units away vs setting the camera 100 units away from a 0.1 unit cube? I guess, firstly, am I correct in thinking this would yield the same results visually? And if so, does anyone know how either would affect performance, if at all?
My hunch would be that it would have no effect but a more definitive answer would be nice.
It doesn't matter, except for imprecision in floating-point arithmetic. So try not to use extremely small or large numbers.
If you mean 2D, this may not matter. Images created both ways may look the same.
"Setting the camera" actually only changes the transformation matrix the vertices are multiplied by, so after the transformation is applied, the vertex positions are the same. There might be minor differences resulting from the imprecision of floating-point values.
If the camera has a constant near and far clipping distance, the resulting values in the depth buffer will differ, and one of the cubes might fall outside of the clipping plane range, which would make it appear different or not at all.
You should know that floats have two representations: normalized and denormalized. Roughly half of all representable float values lie between -1 and 1, so you get more bits of precision at small magnitudes. It is therefore better to keep everything between 0 and 1.
Also, looking at how the exponent/mantissa works, the difference between adjacent representable numbers gets bigger as you move away from zero, and things will start popping when you move very far from the origin.
Yes, it matters. Not because of OpenGL, but because of the nature of floating-point operations: floating-point precision degrades at larger magnitudes. Try to normalize your values such that you have values closer to zero. That is, don't make a planet object 1,000,000 units wide just because you want a planet that's 1,000 km in diameter.
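A tiny demonstration of that degradation, assuming IEEE-754 single-precision floats:

#include <cstdio>

int main() {
    // The spacing between adjacent floats (the "ulp") grows with magnitude:
    // about 1.2e-7 near 1.0f, but 0.0625 near 1,000,000.0f.
    float small = 1.0f;
    float large = 1000000.0f;
    std::printf("%d\n", small + 0.00000001f == small);  // 1: increment lost
    std::printf("%d\n", large + 0.01f == large);        // 1: increment lost
    std::printf("%d\n", large + 0.1f == large);         // 0: barely survives
}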

Sustaining value in specified range

Let's say I have a point with a position on a 2D plane.
This point is going to change its position randomly, but that's not the point; let's assume that it has its own velocity and is moving on a plane with restricted width and height.
So after a while of movement this point is going to reach the plane boundary.
But it is not allowed to leave the plane.
So I can check the point's position each frame to see whether it has reached the bound or not:
if(point.x>bound.xMax)point.x=bound.xMax
If I want the point to teleport itself to the other side of the plane, I can simply do:
point.x = point.x%bound.xMax;
but then I need to store the point position in integers.
For 10 million values on my Core i7 1.6, both solutions have similar timings: 41 ms for the first vs 47 ms for the second, so there is no sense in using the modulo operator in that case; it's faster to just check the value.
But is there any kind of trick to make it faster?
Using multiple threads to iterate over the array is not a solution.
Maybe I could scale my bound value to some weird value and, for example, discard a part of the binary representation of the position value.
And if there is some trick to do it, I think somebody has done it before me :)
Do you know any kind of solution that could help me?
If there is some way you can add information around the plane coordinates, you could very well make a "border" around the plane which contains a value that is identified as "out of boundaries". For example, if you have a 10x10 board, make it 12x12 and use the 2 extra rows and columns to store that information.
Now you can do (pseudo-code):
IF point IN board IS "out of boundaries value" THEN
do your thing
END IF
Note that this method is only an optimization if your point has both x and y values (my assumption in your case).
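A minimal sketch of the bordered board (all names made up):

#include <vector>

const int N = 10;    // playing area is N x N
const int OOB = -1;  // "out of boundaries" marker

int main() {
    // Store the board as (N+2) x (N+2); the outer ring is the border.
    std::vector<std::vector<int>> board(N + 2, std::vector<int>(N + 2, 0));
    for (int i = 0; i < N + 2; ++i) {
        board[0][i] = board[N + 1][i] = OOB;  // top and bottom border rows
        board[i][0] = board[i][N + 1] = OOB;  // left and right border columns
    }
    int x = 5, y = 11;          // point has stepped onto the border
    if (board[y][x] == OOB) {
        y = N;                  // do your thing: clamp back inside, or wrap
    }
}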

Non-Linear color interpolation?

If I have a straight line that measures from 0 to 1, with colorA(255,0,0) at 0 on the line, colorB(20,160,0) at 0.3, and colorC(0,0,0) at 1, how could I find the color at 0.7?
Thanks
[Elaborating on my comments to Patrick -- it all got a bit out of hand!]
This is really an interesting question, and one of the reasons it's interesting is that there is no "right" answer. Colour representation and data interpolation can both be done in many different ways. You need to tailor your approach appropriately for your problem domain. Since we haven't been given much a priori information about that domain, we can only explore some of the possibilities.
The business of colours muddies the water somewhat, so let's put that aside temporarily and think first about interpolation of simple scalars.
Excursion into interpolation
Let's say we have some data points like this:
What we want to find is what the y value on this graph will be at points along the x axis other than the ones for which we have known values. That is, we're looking for a function
y = f(x)
that passes through these points.
Clearly, there are many different ways we could choose to join the dots. We might just link them up with straight line segments. Or we might want a smooth curve, and that curve could be simple or arbitrarily complicated:
Here, it's easy to understand where the red lines come from -- we're just drawing straight lines from one known point to the next. The green line also looks sort of reasonable, though we're adding some assumptions about how the curve should behave. The blue line, on the other hand, would be hard to justify on the basis of the data points alone, but there might be circumstances where we have reasons to model the system using such a shape -- as we'll see a bit later.
Note also the dotted lines heading off the sides of each curve. These go beyond the known points, which is called extrapolation rather than interpolation. This is often a bit more questionable than interpolation: at least when you're going from one known point to another you have some evidence you're headed the right way, whereas the further you get from the known points, the more likely it is you'll stray from the path. But still, it's a useful technique and it may well make sense.
OK, so the picture is pretty and all, but how do we generate these lines?
We need to find an f(x) that will give us the desired y. There are many different functions we could use depending on the circumstances, but by far the most common is to use a polynomial function, ie:
f(x) = a0 + a1 * x + a2 * x * x + a3 * x * x * x + ....
Now, given N distinct data points, it is always possible to find a perfectly-fitting curve using a polynomial of degree N-1 -- a straight line for two points, a parabola for three, a cubic for four, etc -- but unless the data are unusually well-behaved the curve tends to go haywire as the degree goes up. So unless there is a good reason to believe that the data's behaviour is well modelled by higher-degree polynomials, it's usual instead to interpolate the data piecewise, fitting a different curve segment between each successive pair of points.
Typically, each segment is modelled as a polynomial of degree 1 or 3. The former is just a straight line, and is used because it is really easy and requires no more information than the two data points themselves. The latter is a cubic spline, and is used because it is the simplest polynomial that gives you a smooth transition across each point. However, calculating it is a bit more complex and requires two extra pieces of information for each segment (exactly what those pieces are depends on the particular spline form you use).
Linear interpolation is easy enough to do in one line of code. If our first data point is (x1, y1) and our second is (x2, y2), then the linearly-interpolated y at any intermediate x is:
y = y1 + (x - x1) * (y2 - y1) / (x2 - x1)
(Variations on this appear in some of the other answers.)
Cubic splines are a bit too involved to go into here, but Google should turn up some decent references. They are arguably overkill in this case anyway, since you only have three points.
Back to the question
With all that under our belts, let's take a look at the problem as described:
We have a line (shown here in grey) whose colour is known only at three points, and we're asked to calculate what the colour will be at some other point. Since we don't have any knowledge of how the colours are distributed, we have to make some assumptions.
One key assumption is that colour changes along the line will be continuous, at least to some approximation. Clearly, if the line really looked like this:
then all that earlier stuff about interpolation would go out the window. We'd have no basis for deciding what colour any part should be and should just give up and go home. Let's assume that's not the case and we have something to interpolate.
The known colours are specified as RGB. In this representation, each channel is a separate scalar value and we can choose to treat it as completely independent of the others. So, one perfectly reasonable approach would be to do a piecewise linear interpolation of each channel and then recombine the results.
Doing so gives us something like this:
This is fine as far as it goes, but there are aspects of the result we might not like. One is that the transition from red to green passes through a pretty murky grey-brown. Another is that the green peak at 0.3 is a bit sharp.
It's important to note that, in the absence of a more comprehensive specification, these are really just aesthetic concerns. Our technique is perfectly sound, but it isn't giving quite the sort of results we might want. This sort of thing depends on our specific problem domain, and ultimately it's all a matter of choice.
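For concreteness, here is that per-channel piecewise-linear interpolation as a C++ sketch over the question's three stops (names are my own); for x = 0.7 we land on the second segment:

#include <array>
#include <cstdio>

using RGB = std::array<double, 3>;

// Linear interpolation of each channel independently.
RGB lerpRGB(const RGB& c1, const RGB& c2, double t) {
    return { c1[0] + t * (c2[0] - c1[0]),
             c1[1] + t * (c2[1] - c1[1]),
             c1[2] + t * (c2[2] - c1[2]) };
}

int main() {
    RGB a = {255, 0, 0}, b = {20, 160, 0}, c = {0, 0, 0};  // stops at 0.0, 0.3, 1.0
    double x = 0.7;
    RGB result = (x <= 0.3) ? lerpRGB(a, b, x / 0.3)
                            : lerpRGB(b, c, (x - 0.3) / 0.7);
    std::printf("(%.0f, %.0f, %.0f)\n", result[0], result[1], result[2]);
    // prints (9, 69, 0)
}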
Since we have just three data points -- and since Hans Passant suggested it -- perhaps we could instead try fitting a parabola to model the whole curve on each channel? It's true we don't have any reason to think that this is a good model, but it doesn't hurt to try:
The differences between this gradient and the last one are instructive. The quadratic has smoothed things out, but has also overshot dramatically. Remember, the green channel both starts and ends at 0. A parabola is symmetrical, so its maximum has to be in the middle. The only way it can fit a green rise towards 0.3 is by continuing to rise all the way to 0.5. (There's a similar effect in the red channel, but it's less obvious because in this case it's an undershoot and the value is clamped to 0.)
Do we have any evidence that this sort of shape is really there in our colour line? Nope: we've explicitly introduced it via our choice of model. That doesn't invalidate it -- we might have good reasons for wanting it to work that way -- once again it's a matter of choice.
But what about HSV?
So far we've stuck to the original RGB colour space, but as various people rushed to point out this can be less than ideal for interpolation: colour and intensity are bound together in RGB, so interpolating between two different full-intensity colours typically takes you through some drab low-intensity intermediate.
In HSV representation, colour and intensity are on separate dimensions, so we don't get that problem. Why not convert and do the interpolation in that space instead?
This immediately gives rise to a difficulty -- or at any rate, another decision. The mapping between HSV and RGB is not bijective; in particular, black, which happens to be one of our three data points, is a single point in RGB space but occupies a whole plane in HSV. We cannot interpolate between a point and a plane, so we need to pick one specific HSV black point to go to.
This is the basis of Patrick's ingenious solution, in which H and S are specifically chosen to make the whole colour gradient linear. The result looks something like this:
This looks a lot prettier and more colourful than the previous attempts, but there are a few things we might quibble with.
One important issue is that in the case of V, where we still have definite data at all three points, those data are not actually linear, and so the linear fit is only an approximation. What this means is that the value we see here at 0.3 is not quite what it should be.
Another, to my mind bigger, quibble: where is all that blue coming from? All three of our known RGB data points have B=0. It seems a bit strange to suddenly introduce a whole bunch of blue for which we seem to have no evidence at all. Check out this graph of the RGB components in Patrick's HSV interpolation:
The reason for the blue is, of course, that in switching colour spaces we have specifically selected a model where if you keep on going from green you inevitably get to blue. At the same time, we have had to discard one of our hue data points, and have chosen to fill it in by linear extrapolation from the other two, which means we do just keep on going, from green to blue to over the hills and far away.
Once again, this isn't invalid, and we might have a good reason for doing it this way. But to my mind it's a little bit more out there than the previous piecewise-linear example, because of that extrapolation.
So is this the only way to do it in HSV? Of course not. There are always more choices.
For example, instead of choosing H and S values at 1.0 to maximise linearity, how about we instead choose them to minimise change in a piecewise linear interpolation? As it happens, for S the two strategies coincide: it is 100 at both points, so we make it 100 at the end too. The two segments are collinear. For H, though, we just keep it the same over the second segment. This gives a result like this:
This is not quite as cute as the previous one, but it seems slightly more plausible to me. Once again, though, that's largely an aesthetic judgement. That doesn't make this the "right" answer, any more than it makes Patrick's or any of the others "wrong". As I said way back at the start, there is no "right" answer. It's all a matter of making choices according to your needs in a particular problem -- and being aware that you have made those choices and how they affect your outcome.
Try to convert this to another color-representation, e.g. HSV (see http://en.wikipedia.org/wiki/HSL_and_HSV).
Color A has a Hue of 0, a Saturation of 1 and a Value of 1.
Color C has a Hue of ?, a Saturation of ? and a Value of 0.
? means that it actually doesn't matter (since color C is simply black).
Now also convert color B to HSV (I can't do this in my head, sorry), then choose nice values for the Hue and Saturation of color C so that the Hue, Saturation and Value are on one line in HSV space. Then deduce the color at 0.7 from it.
EDIT: Using the RGB-HSV calculator at http://www.csgnetwork.com/csgcolorsel4.html I calculated the following:
Color A: H: 0, V: 100, S: 100
Color B: H: 113, V: 100, S: 63
Now we calculate the H and V of color C like this:
H: 113/0.3 = 376.67
V: 100 (since both A and B have a V of 100)
This gives us for the color at 0.7:
H = 376.67*0.7 = 263.67
V = 100
S = somewhere around 30
Unfortunately, this doesn't fit completely for the Saturation, but if you interpolate this way, you will get something that's very close to what you want.
In the abstract, you're asking for the value of f(0.7) given that f(0.0) = (255, 0, 0), f(0.3) = (20, 160, 0), and f(1.0) = (0, 0, 0). As stated, this isn't well-defined as f could be any of an infinite number of functions.
Some possible choices for f are:
A series of line segments in RGB space
A series of line segments after mapping to HSV or HSL space
A cubic spline in RGB space.
A cubic spline in HSV/L space.
But the particular choice of how f should be defined should depend on what you're doing, or how the color stops are defined for your application.
The correct way to interpolate RGB is to correct for gamma first, making the color components linear, perform an interpolation, and then convert back. Otherwise you will get errors due to the values being nonlinear. See http://www.4p8.com/eric.brasseur/gamma.html for more information (although the article is about scaling, scaling is usually done by interpolating pixel values).
If you have sRGB values, you can convert between linear and nonlinear using the formulas here: http://en.wikipedia.org/wiki/SRGB (use the Csrgb and Clinear formulas)
On the other hand, the errors are not huge, so it depends on your application if they matter.
For the interpolation itself, simple linear interpolation should suffice (as others here have noted). See http://en.wikipedia.org/wiki/Linear_interpolation for the specifics.
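A sketch of the gamma-correct version for a single channel in [0, 1], using the sRGB formulas from the Wikipedia page linked above:

#include <cmath>

double srgbToLinear(double c) {
    return c <= 0.04045 ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
}

double linearToSrgb(double c) {
    return c <= 0.0031308 ? c * 12.92 : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
}

// Linearize, interpolate, then convert back to sRGB.
double lerpGammaCorrect(double c1, double c2, double t) {
    double l1 = srgbToLinear(c1), l2 = srgbToLinear(c2);
    return linearToSrgb(l1 + t * (l2 - l1));
}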
It looks like you are trying to interpolate in the wrong color space. First convert to HSV; then your colors become
RGB -> HSV
255,0,0 -> 0, 255,255
20,160,0 -> 80,255,160
0,0,0 -> 0,255,0
It's still confusing, but at least we can see that the V value is interpolating to zero. The H value is confusing at first, but after realizing that HSV is modeled as a cylinder, we can see that it is really just interpolating from 0 to 256 mod 256, so it just goes back to zero.
So your equation would be (in HSV):
nH = frac*256 mod 256
nS = 255
nV = (1-frac)*255
So if you substitute 0.7 for frac, you will get the right calculation in HSV coordinates. If you need to go back to RGB, you should see if your libraries provide such a feature (I know Java's Color class supports this); if not, you can just grab the code from here. It shows how to do color space conversions in C.
Good luck

Query points epsilon-close to a cut plane in point cloud using the GPU

I am trying to solve the following problem using GPU capabilities: "given a point cloud P and an oriented plane described by a point and a normal (Pp, Np), return the points in the cloud which lie at a distance equal to or less than EPSILON from the plane".
Talking with a colleague of mine I converged toward the following solution:
1) prepare a vertex buffer of the points, with an attached texture coordinate such that every point has a different texture coordinate
2) set the projection to orthographic
3) rotate the mesh such that the normal of the plane is aligned with the -z axis, and offset it such that x,y,z = 0 corresponds to Pp
4) set the z-clipping plane such that z:[-EPSILON;+EPSILON]
5) render to a texture
6) retrieve the texture from the graphics card
7) read the texture and see which points were rendered (in terms of their indexes); those are the points within the desired distance range.
Now the problems are the following:
q1) Do I need to open a window/frame to be able to do such an operation? I am working within MATLAB, calling MEX-C++. From experience I know that as soon as you open a new frame the whole suite crashes miserably!
q2) what's the primitive to give a GLPoint a texture coordinate?
q3) I am not too clear on how the render-to-texture would be implemented. Any reference or tutorial would be awesome...
q4) How would you retrieve this texture from the card? Again, any reference or tutorial would be awesome...
I am on a tight schedule, so it would be nice if you could point out the names of the techniques I should learn about, rather than pointing me to the GLSL specification document and the OpenGL API as somebody has done. Those answers are a tiny bit too vague for my question.
Thanks a lot for any comment.
p.s.
Also note that I would rather not use any resource like CUDA if possible; I want something which uses as many plain OpenGL elements as possible, without requiring me to write a new shader.
Note: cross posted at
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=245911#Post245911
It's simple:
Let n be the normal of the plane and x be a point on the plane.
n_u = n/norm(n) //this is a normal vector of unit length
d = scalarprod(n_u,x) //this is the distance of the plane to the origin
for each point p_i
d_i = abs(scalarprod(p_i,n_u) - d) //this is the distance of the point to the plane
Obviously "scalarprod" means "scalar product" and "abs" means "absolute value".
If you wonder why, just read the article on scalar products at Wikipedia.
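That loop as a plain C++ sketch (names are mine):

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Indexes of cloud points within epsilon of the plane through Pp with normal Np.
std::vector<std::size_t> pointsNearPlane(const std::vector<Vec3>& cloud,
                                         const Vec3& Pp, const Vec3& Np,
                                         double epsilon) {
    double len = std::sqrt(dot(Np, Np));
    Vec3 n = { Np.x / len, Np.y / len, Np.z / len };  // unit normal
    double d = dot(n, Pp);                            // plane distance to origin
    std::vector<std::size_t> result;
    for (std::size_t i = 0; i < cloud.size(); ++i)
        if (std::fabs(dot(cloud[i], n) - d) <= epsilon)
            result.push_back(i);
    return result;
}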
Ok first as a little disclaimer: I know nothing about 3D programming.
Now my purely mathematical idea:
Given a plane defined by a normal N (of unit length) and the distance L of the plane to the center (the point [0,0,0]), the distance of a point X to the plane is given by the scalar product of N and X, minus L. Hence you only have to check whether
|N . X - L| <= epsilon
with . being the scalar product and | | the absolute value.
Of course you have to intersect the plane with the normal first to get the distance L.
Maybe this helps.
I have one question for Andrea Tagliasacchi: why?
Only if you are looking at thousands of points and possibly hundreds of planes would there be any benefit from using the method outlined, as opposed to taking the dot product of the point and plane, as outlined by Corporal Touchy.
Also, due to the finite nature of pixels, you'll often find two or more points projecting to the same pixel in the texture.
If you still want to do this, I could work up a sample GLUT program in C++, but how this would help with MATLAB I don't know, as I'm unfamiliar with it.
It seems to me you should be able to implement something similar to Corporal Touchy's method in a vertex program rather than in a for loop, right? Maybe use a C API for GPU programming, such as CUDA?