What is the preferred way of representing a plane (as a struct) in 3D graphics when C/C++ is the preferred language?
Should we
store the normalized plane normal vector and the origin distance as separate entities, or should we
express them together in a single non-normalized vector?
The first alternative requires one extra float/double but is, on the other hand, more efficient in algorithms that operate on the normal, because the normal is already precalculated. The first alternative is also more numerically stable if we vary the normal and offset separately.
Sadly, C++ is not the best language to work with planes. At first we might think that using four floating-point values is a good choice, as it fits in a SIMD register in SSE and VMX. So we may have a class with a single 128-bit member, where the first three values represent the plane normal and the last a distance (just like a homogeneous coordinate; a plane does not always need a normalized normal if we only care about the sign of a distance test).
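For illustration, a minimal sketch of such a four-float plane with the sign-only distance test mentioned above (type and member names are mine, not from the answer):

    // Minimal sketch of a plane packed into four floats (fits a 128-bit register).
    struct Plane {
        float nx, ny, nz, d;   // plane equation: nx*x + ny*y + nz*z + d = 0

        // Signed distance from a point, scaled by the length of the normal.
        // If the normal is not normalized, only the sign is reliable.
        float signedDistance(float x, float y, float z) const {
            return nx * x + ny * y + nz * z + d;
        }
    };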
But when we work with planes to categorize points, spheres, and other volumes, implementing a single plane-to-point distance function will result in a sub-optimal algorithm, because most of the time we know we will test a lot of points against a small number of planes. There is room for optimization!
The problem here has a name - in fact, not the problem, but the way we may represent the information. It's Array of Structures versus Structure of Arrays (AOS vs SOA).
A common exercise in a 3D engine is bounding volume frustum culling! A usual frustum is made of 6 planes, and the right representation is not a Frustum class with a std::array<Plane,6> member, but most likely 8 SIMD registers laid out as: { P0X, P1X, P2X, P3X }, { P4X, P5X, FREEPLANE1X, FREEPLANE2X }, ... and so on for Y, Z and D. Not idiomatic C++, but much better for SIMD programming.
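A hedged sketch of what that SOA layout could look like with SSE intrinsics (the struct and function names are mine, and only the first register of each component is tested here):

    #include <xmmintrin.h>

    // Hypothetical SOA frustum: one register per component, four plane values per
    // register (the two unused lanes are padding for the 6-plane case).
    struct FrustumSOA {
        __m128 px[2], py[2], pz[2], pd[2];
    };

    // Test one point against the first four planes at once; each lane of the
    // result is set if the point is behind the corresponding plane.
    inline __m128 behindFirstFour(const FrustumSOA& f, float x, float y, float z) {
        __m128 dist = _mm_add_ps(
            _mm_add_ps(_mm_mul_ps(f.px[0], _mm_set1_ps(x)),
                       _mm_mul_ps(f.py[0], _mm_set1_ps(y))),
            _mm_add_ps(_mm_mul_ps(f.pz[0], _mm_set1_ps(z)), f.pd[0]));
        return _mm_cmplt_ps(dist, _mm_setzero_ps());
    }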
It may also be useful to prefer an SOA representation for the points too.
Conclusion: the best representation depends on what algorithm and what kind of data set will go through your planes.
I'm trying to do triangulation for 3D reconstruction and I came across an interesting observation which I cannot justify.
I have 2 sets of images. I know the correspondences and I'm finding the intrinsic and extrinsic parameters using a direct linear transformation. While I'm able to properly reconstruct the original scene, the intrinsic parameters are different even though the pictures are taken from the same camera. How is it possible to have different intrinsic parameters if the camera is the same? Also, if the intrinsic parameters are different, how am I able to reconstruct the scene perfectly?
Thank you
You haven't specified what you mean by "different", so I'm just going to point out two possible sources of differences that come to mind. Let's denote the matrix of intrinsic parameters by K.
The first possible difference could just come from a scaling difference. If the second time you estimate your intrinsics matrix, you end up with a matrix
K_2=lambda*K
then it doesn't make any difference when projecting or reprojecting, since for any 3d point X you'll have
K_2*X=K*lambda*X //X is the same as lambda*X in projective geometry
The same thing happens when you backproject the point: you just obtain a direction, and then your estimation algorithm (e.g. least squares or a simpler geometric solution) takes care of estimating the depth.
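As a tiny numerical illustration of that scale ambiguity (all values here are made up), scaling K by lambda leaves the pixel coordinates unchanged after the homogeneous division:

    #include <cstdio>

    int main() {
        // A made-up intrinsics matrix K and a 3D point X in camera coordinates.
        double K[3][3] = {{800, 0, 320}, {0, 800, 240}, {0, 0, 1}};
        double X[3] = {0.5, -0.2, 2.0};
        double lambda = 3.7;

        double x[3] = {0, 0, 0}, x2[3] = {0, 0, 0};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) {
                x[i]  += K[i][j] * X[j];            // projection with K
                x2[i] += lambda * K[i][j] * X[j];   // projection with lambda*K
            }

        // After dividing by the third (homogeneous) coordinate, both give the same pixel.
        std::printf("K:        (%f, %f)\n", x[0] / x[2],  x[1] / x[2]);
        std::printf("lambda*K: (%f, %f)\n", x2[0] / x2[2], x2[1] / x2[2]);
        return 0;
    }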
The second reason for the difference you observe could just come from numerical imprecisions. Since you haven't given any information regarding the magnitude of the difference, I'm not sure if that is relevant to your case.
So, I have two points, say A and B, each with a known (x, y) coordinate and a velocity vector in the same coordinate system. I want to write a function to generate a set of arcs (radius and angle) that lead from state A to state B.
The angle difference is known, since I can get it from the difference of the velocity unit vectors. Say I move a certain distance along (radius=r, angle=theta); then I get into the exact same situation. Does this have a unique solution? I only need one solution, or even an approximation.
Of course I can solve it with one particular circle and a line (radius=infinite), but that's not what I want to do. I think there's a library with a function for this, since it's quite a common approach.
A biarc is a smooth curve consisting of two circular arcs. Given two points with tangents, it is almost always possible to construct a biarc passing through them (with correct tangents).
This is a very basic routine in geometric modelling, and it is indispensable for smoothly approximating an arbitrary curve (Bezier, NURBS, etc.) with arcs. Approximation with arcs and lines is heavily used in CAM, because modellers handle NURBS without a problem, but machine controllers usually understand only lines and arcs. So I strongly suggest reading on this topic.
In particular, here is a great article on biarcs; I seriously advise reading it. It even contains some working code and an interactive demo.
I'm currently studying lighting in OpenGL, which uses a GLSL function called normalize. According to the OpenGL docs, it "calculates the normalized product of two vectors". However, that still doesn't explain what "normalized" means. I have tried looking up what a normalized product is on Google, but I can't seem to find anything about it. Can anyone explain what normalizing means and provide a few examples of a normalized value?
I think the confusion comes from the idea of normalizing "a value" as opposed to "a vector"; if you just think of a single number as a value, normalization doesn't make any sense. Normalization is only useful when applied to a vector.
A vector is a sequence of numbers; in 3D graphics it is usually a coordinate expressed as v = <x,y,z>.
Every vector has a magnitude or length, which can be found using Pythagoras' theorem: |v| = sqrt(x^2 + y^2 + z^2). This is basically the length of a line from the origin <0,0,0> to the point expressed by the vector.
A vector is normalized (a unit vector) if its length is 1. That's it!
To normalize a vector means to change it so that it points in the same direction (think of that line from the origin) but its length is one.
The main reason we use normalized vectors is to represent a direction; for example, if you are modeling a light source that is an infinite distance away, you can't give precise coordinates for it. But you can indicate where to find it from a particular point by using a unit vector.
It's a mathematical term and this link explains its meaning in quite simple terms:
Operations in 2D and 3D computer graphics are often performed using copies of vectors that have been normalized, i.e. converted to unit vectors... Normalizing a vector involves two steps:
calculate its length, then,
divide each of its (xy or xyz) components by its length...
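A minimal sketch of those two steps in C++ (type and function names are mine, and a zero-length vector is not handled):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 normalize(Vec3 v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); // step 1: compute the length
        return { v.x / len, v.y / len, v.z / len };               // step 2: divide each component by it
    }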
It's a bit complicated to explain if you don't know much about vectors or vector algebra. (You can check this article about general concepts such as vector, normal vector, and the normalization procedure.)
But the procedure or concept of "normalize" refers to the process of making something standard or “normal.”
In the case of vectors, let’s assume for the moment that a standard vector has a length of 1. To normalize a vector, therefore, is to take a vector of any length and, keeping it pointing in the same direction, change its length to 1, turning it into what is called a unit vector.
I'm looking for a data structure that would allow me to store an M-by-N 2D matrix of values contiguously in memory, such that the distance in memory between any two points approximates the Euclidean distance between those points in the matrix. That is, in a typical row-major representation as a one-dimensional array of M * N elements, the memory distance differs between adjacent cells in the same row (1) and adjacent cells in neighbouring rows (N).
I'd like a data structure that reduces or removes this difference. Really, the name of such a structure is sufficient—I can implement it myself. If answers happen to refer to libraries for this sort of thing, that's also acceptable, but they should be usable with C++.
I have an application that needs to perform fast image convolutions without hardware acceleration, and though I'm aware of the usual optimisation techniques for this sort of thing, I feel a specialised data structure or data ordering could improve performance.
Given the requirement that you want to store the values contiguously in memory, I'd strongly suggest you research space-filling curves, especially Hilbert curves.
To give a bit of context, such curves are sometimes used in database indexes to improve the locality of multidimensional range queries (e.g., "find all items with x/y coordinates in this rectangle"), thereby aiming to reduce the number of distinct pages accessed. A bit similar to the R-trees that have been suggested here already.
Either way, it looks like you're bound to an M*N array of values in memory, so the whole question is about how to arrange the values in that array, I figure. (Unless I misunderstood the question.)
So in fact, such orderings would probably still only change the characteristics of the distance distribution: the average distance between two randomly chosen points from the matrix should not change, so I have to agree with Oli there. The potential benefit depends largely on your specific use case, I suppose.
I would guess "no"! And if the answer happens to be "yes", then it's almost certainly so irregular that it'll be way slower for a convolution-type operation.
EDIT
To qualify my guess, take an example. Let's say we store a[0][0] first. We want a[k][0] and a[0][k] to be similar distances, and proportional to k, so we might choose to interleave the storage of first row and first column (i.e. a[0][0], a[1][0], a[0][1], a[2][0], a[0][2], etc.) But how do we now do the same for e.g. a[1][0]? All the locations near it in memory are now taken up by stuff that's near a[0][0].
Whilst there are other possibilities than my example, I'd wager that you always end up with this kind of problem.
EDIT
If your data is sparse, then there may be scope to do something clever (re Cubbi's suggestion of R-trees). However, it'll still require irregular access and pointer chasing, so will be significantly slower than straightforward convolution for any given number of points.
You might look at space-filling curves, in particular the Z-order curve, which (mostly) preserves spatial locality. It might be computationally expensive to look up indices, however.
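For what it's worth, a small sketch of computing a Z-order (Morton) index by bit interleaving; real code would usually use a lookup table or PDEP to keep the lookup cheap (names are illustrative):

    #include <cstdint>

    // Interleave the bits of x and y: bit i of x goes to bit 2*i of the result,
    // bit i of y goes to bit 2*i+1.
    uint32_t mortonIndex(uint16_t x, uint16_t y) {
        uint32_t index = 0;
        for (int bit = 0; bit < 16; ++bit) {
            index |= (uint32_t(x >> bit) & 1u) << (2 * bit);
            index |= (uint32_t(y >> bit) & 1u) << (2 * bit + 1);
        }
        return index;
    }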
If you are using this to try and improve cache performance, you might try a technique called "bricking", which is a little bit like one or two levels of a space-filling curve. Essentially, you subdivide your matrix into n-by-n tiles (where an n-by-n tile fits neatly in your L1 cache). You can also store another level of tiles to fit into a higher-level cache. The advantage this has over a space-filling curve is that the indices can be fairly quick to compute. One reference is included in the paper here: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.8959
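A rough sketch of index computation for such a bricked layout, assuming the matrix dimensions are multiples of the tile size (the names and the tile size are made up):

    #include <cstddef>

    constexpr std::size_t TILE = 8;  // pick so a TILE*TILE block of elements fits in L1

    // Linear offset of element (row, col) in a matrix of 'cols' columns stored
    // tile by tile, with each TILE x TILE tile stored row-major internally.
    std::size_t brickedIndex(std::size_t row, std::size_t col, std::size_t cols) {
        std::size_t tilesPerRow = cols / TILE;
        std::size_t tileBase = (row / TILE * tilesPerRow + col / TILE) * TILE * TILE;
        return tileBase + (row % TILE) * TILE + (col % TILE);
    }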
This sounds like something that could be helped by an R-tree or one of its variants. There is nothing like that in the C++ Standard Library, but it looks like there is an R-tree in the Boost candidate library Boost.Geometry (not a part of Boost yet). I'd take a look at that before writing my own.
It is not possible to "linearize" a 2D structure into a 1D structure and keep the relation of proximity unchanged in both directions. This is one of the fundamental topological properties of the world.
Having said that, it is true that the standard row-wise or column-wise storage order normally used for 2D array representation is not the best one when you need to preserve proximity (as much as possible). You can get better results by using various discrete approximations of fractal curves (space-filling curves).
Z-order curve is a popular one for this application: http://en.wikipedia.org/wiki/Z-order_(curve)
Keep in mind though that regardless of which approach you use, there will always be elements that violate your distance requirement.
You could think of your 2D matrix as a big spiral, starting at the center and progressing to the outside. Unwind the spiral, and store the data in that order, and distance between addresses at least vaguely approximates Euclidean distance between the points they represent. While it won't be very exact, I'm pretty sure you can't do a whole lot better either. At the same time, I think even at very best, it's going to be of minimal help to your convolution code.
The answer is no. Think about it - memory is 1D. Your matrix is 2D. You want to squash that extra dimension in - with no loss? It's not going to happen.
What's more important is that once you get a certain distance away, it takes the same time to load into cache. If you have a cache miss, it doesn't matter if it's 100 away or 100000. Fundamentally, you cannot get more contiguous/better performance than a simple array, unless you want to get an LRU for your array.
I think you're forgetting that computer memory is not accessed by a CPU operating on foot :) so the distance is pretty much irrelevant.
It's random access memory, so really you have to figure out what operations you need to do, and optimize the accesses for that.
You need to reconvert the addresses from memory space to the original array space to accomplish this. Also, you've stressed distance only, which may still cause you some problems (no direction).
If I have an array of R x C, and two cells at locations [r,c] and [c,r], the distance from some arbitrary point, say [0,0] is identical. And there's no way you're going to make one memory address hold two things, unless you've got one of those fancy new qubit machines.
However, you can take into account that in a row-major array of R x C each row is C * sizeof(yourdata) bytes long. Conversely, you can say that the original coordinates of any memory address within the bounds of the array are
r = (address / C)
c = (address % C)
so
r1 = (address1 / C)
r2 = (address2 / C)
c1 = (address1 % C)
c2 = (address2 % C)
dx = r1 - r2
dy = c1 - c2
dist = sqrt(dx*dx + dy*dy)
(this is assuming you're using zero based arrays)
(crush all this together to make it run more optimally)
For a lot more ideas here, go look for any 2D image manipulation code that uses a calculated value called 'stride', which is basically an indicator that they're jumping back and forth between memory addresses and array addresses
This is not exactly related to closeness but might help. It certainly helps with minimizing disk accesses.
One way to get better "closeness" is to tile the image. If your convolution kernel is smaller than a tile, you typically touch at most 4 tiles in the worst case. You can recursively tile in bigger sections so that locality improves. A Stokes-like argument (at least I think it's Stokes), or some calculus of variations, can show that for rectangles the best shape (best meaning for examination of arbitrary sub-rectangles) is a smaller rectangle of the same aspect ratio.
Quick intuition - think about a square: if you tile the larger square with smaller squares, the fact that a square encloses maximal area for a given perimeter means that square tiles have minimal border length. When you transform the large square, I think you can show you should transform the tiles the same way. (You might also be able to do a simple multivariate differentiation.)
The classic example is zooming in on spy satellite data images and convolving it for enhancement. The extra computation to tile is really worth it if you keep the data around and you go back to it.
It's also really worth it for the different compression schemes such as cosine transforms. (That's why, when you download an image, it frequently comes up in smaller and smaller squares until the final resolution is reached.)
There are a lot of books on this area and they are helpful.
I am reading a book on game AI.
One of the terms being used is normalizing a vector, which means turning a vector into a unit vector. To do so, you divide each dimension x, y and z by its magnitude.
We must turn the vector into a unit vector before we do anything with it. Why?
And could anyone give some scenarios where we must use a unit vector?
Thanks!
You don't have to normalize vectors, but it makes a lot of equations a little simpler when you do. It can also make APIs smaller: any form of standardization has the potential to reduce the number of functions necessary.
Here's a simple example. Suppose you want to find the angle between two vectors u and v. If they are unit vectors, the angle is just arccos(u·v). If they're not unit vectors, the angle is arccos(u·v / (|u| |v|)). In that case, you end up computing the norms of u and v anyway.
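A small sketch of both cases (helper names are mine):

    #include <cmath>

    struct Vec3 { double x, y, z; };

    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    double norm(Vec3 a)        { return std::sqrt(dot(a, a)); }

    // General form; if u and v are already unit vectors the division is a no-op
    // and the norms never need to be computed.
    double angleBetween(Vec3 u, Vec3 v) {
        return std::acos(dot(u, v) / (norm(u) * norm(v)));
    }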
As John D. Cook says - mainly you're doing this because you care about the direction, not the vector itself. Depending on context, you more than likely don't want / need the magnitude information - just the direction itself. You normalize to strip away the magnitude so that it doesn't skew other calculations, which in turn simplifies many other things.
In terms of AI - imagine you take the vector V between P1 (the AI bad guy) and P2 (your hero) as the direction for the bad guy to move. You want the bad guy to move at a speed N per beat - how do you calculate this? Well, we either normalize the vector each beat and multiply by N to figure out how far they moved, or we pre-normalize the direction in the first place and just multiply the unit vector by N each time - otherwise the bad guy would move further the further away it is from the hero! If the hero doesn't change position, that's one less calculation to worry about.
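A hypothetical sketch of that per-beat step, with all names made up:

    #include <cmath>

    struct Vec2 { float x, y; };

    // Move the bad guy toward the hero by a fixed distance N this beat:
    // normalize the offset, then scale it by N.
    Vec2 stepToward(Vec2 badGuy, Vec2 hero, float N) {
        float dx = hero.x - badGuy.x, dy = hero.y - badGuy.y;
        float len = std::sqrt(dx * dx + dy * dy);
        return { badGuy.x + N * dx / len, badGuy.y + N * dy / len };
    }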
In that context, it's not a big deal - but what if you have a hundred bad guys? Or a thousand? What if your AI needs to deal with combinations of bad guys? Suddenly it's a hundred or thousand normalizations you're saving per beat. Since this is a handful of multiplies and a square root for each, eventually you reach the point where not normalizing the data ahead of time means you're going to kill your AI processing rate.
More broadly - the math for this is really common - people are doing here what they do for things like 3D rendering: if you didn't unitize, for instance, the normals for your surfaces, you'd have potentially thousands of normalizations per render that are completely unnecessary. You have two options: one - make each function perform the calculation, or two - pre-normalize the data.
From the framework designer's perspective, the latter is inherently faster: if we assume the former, even if your user thinks to normalize the data, they're going to have to go through the same normalization routine, OR you're going to have to provide two versions of each function, which is a headache. But at the point where you're making people think about which version of the function to call, you may as well make them think enough to call the correct one, and only provide that one in the first place, making them do the right thing for performance.
You are often normalizing a vector because you only care about the direction the vector points and not the magnitude.
A concrete scenario is normal mapping. By combining the light striking the surface with vectors that are perpendicular to the surface, you can give an illusion of depth. The vectors from the surface define the direction, and keeping a magnitude on the vector would actually make the calculations wrong.
We must turn the vector into a unit vector
before we do anything with it.
This statement is incorrect: not all vectors are unit vectors.
The vectors that form the basis for a coordinate space have two very nice properties that make them easy to work with:
They're orthogonal
They're unit vectors - magnitude = 1
This lets you write any vector in a 3D space as a linear combination of unit vectors:
v = x*i + y*j + z*k (where i, j, k are the unit basis vectors)
I can choose to turn this vector into a unit vector if I need to by dividing each component by the magnitude
v/|v| = (x/|v|)*i + (y/|v|)*j + (z/|v|)*k
If you don't know what coordinate spaces or basis vectors are, I'd recommend learning a little more about the mathematics of graphics before you go much further.
In addition to the answers already provided, I would mention two important aspects.
Trigonometry is defined on a unit circle
All trigonometric functions are defined over a unit circle. The number pi itself is defined over a unit circle.
When you normalize vectors, you can use all trigonometric functions directly, without any rounds of scaling. As mentioned earlier, the angle between two unit vectors is simply: acos(dot(u, v)), without further scaling.
Unit vectors allow us to separate magnitude from direction
A vector can be interpreted as a quantity carrying two types of information: magnitude and direction. Force, velocity, and acceleration are important examples.
If you wish to deal separately with the magnitude and direction, a representation of the form vector = magnitude * direction, where magnitude is a scalar and direction a unit vector, is often very convenient: Changes in magnitude entail scalar manipulations, and changes in direction do not modify the magnitude. The direction has to be a unit vector to ensure that the magnitude of vector is exactly equal to magnitude.
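A minimal sketch of that decomposition (names are mine; the zero vector is not handled):

    #include <cmath>

    struct Vec3 { double x, y, z; };
    struct Decomposed { double magnitude; Vec3 direction; };

    Decomposed decompose(Vec3 v) {
        double m = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { m, { v.x / m, v.y / m, v.z / m } };  // direction is a unit vector
    }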